Among the world’s richest and most powerful companies, Google, Facebook, Amazon, Microsoft, and Apple have made AI core parts of their business. Advances over the last decade, particularly in an AI technique called deep learning, have allowed them to monitor users’ behavior; recommend news, information, and products to them; and most of all, target them with ads. Last year Google’s advertising apparatus generated over $140 billion in revenue. Facebook’s generated $84 billion.
The companies have invested heavily in the technology that has brought them such vast wealth. Google’s parent company, Alphabet, acquired the London-based AI lab DeepMind for $600 million in 2014 and spends hundreds of millions a year to support its research. Microsoft signed a $1 billion deal with OpenAI in 2019 for commercialization rights to its algorithms.
At the same time, tech giants have become large investors in university-based AI research, heavily influencing its scientific priorities. Over the years, more and more ambitious scientists have transitioned to working for tech giants full time or adopted a dual affiliation. From 2018 to 2019, 58% of the most cited papers at the top two AI conferences had at least one author affiliated with a tech giant, compared with only 11% a decade earlier, according to a study by researchers in the Radical AI Network, a group that seeks to challenge power dynamics in AI.
The problem is that the corporate agenda for AI has focused on techniques with commercial potential, largely ignoring research that could help address challenges like economic inequality and climate change. In fact, it has made these challenges worse. The drive to automate tasks has cost jobs and led to the rise of tedious labor like data cleaning and content moderation. The push to create ever larger models has caused AI’s energy consumption to explode. Deep learning has also created a culture in which our data is constantly scraped, often without consent, to train products like facial recognition systems. And recommendation algorithms have exacerbated political polarization, while large language models have failed to clean up misinformation.
It’s this situation that Gebru and a growing movement of like-minded scholars want to change. Over the last five years, they’ve sought to shift the field’s priorities away from simply enriching tech companies, by expanding who gets to participate in developing the technology. Their goal is not only to mitigate the harms caused by existing systems but to create a new, more equitable and democratic AI.
“Hello from Timnit”
In December 2015, Gebru sat down to pen an open letter. Halfway through her PhD at Stanford, she’d attended the Neural Information Processing Systems conference, the largest annual AI research gathering. Of the more than 3,700 researchers there, Gebru counted only five who were Black.
Once a small meeting about a niche academic subject, NeurIPS (as it’s now known) was quickly becoming the biggest annual AI job bonanza. The world’s wealthiest companies were coming to show off demos, throw extravagant parties, and write hefty checks for the rarest people in Silicon Valley: skillful AI researchers.
That year Elon Musk arrived to announce the nonprofit venture OpenAI. He, Y Combinator’s then-president Sam Altman, and PayPal cofounder Peter Thiel had put up $1 billion to solve what they believed to be an existential problem: the prospect that a superintelligence could one day take over the world. Their solution: build an even better superintelligence. Of the 14 advisors or technical team members he anointed, 11 were white men.
While Musk was being lionized, Gebru was dealing with humiliation and harassment. At a conference party, a group of drunk guys in Google Research T-shirts circled her and subjected her to unwanted hugs, a kiss on the cheek, and a photo.
Gebru typed out a scathing critique of what she had observed: the spectacle, the cult-like worship of AI celebrities, and most of all, the overwhelming homogeneity. This boys’-club culture, she wrote, had already pushed talented women out of the field. It was also leading the entire community toward a dangerously narrow conception of artificial intelligence and its impact on the world.
Google had already deployed a computer-vision algorithm that classified Black people as gorillas, she noted. And the increasing sophistication of unmanned drones was putting the US military on a path toward lethal autonomous weapons. But there was no mention of these issues in Musk’s grand plan to stop AI from taking over the world in some theoretical future scenario. “We don’t have to project into the future to see AI’s potential adverse effects,” Gebru wrote. “It is already happening.”
Gebru never published her reflection. But she realized that something needed to change. On January 28, 2016, she sent an email with the subject line “Hello from Timnit” to five other Black AI researchers. “I’ve always been sad by the lack of color in AI,” she wrote. “But now I have seen 5 of you 🙂 and thought that it would be cool if we started a black in AI group or at least know of each other.”
The email prompted a discussion. What was it about being Black that informed their research? For Gebru, her work was very much a product of her identity; for others, it was not. But after meeting they agreed: If AI was going to play a bigger role in society, they needed more Black researchers. Otherwise, the field would produce weaker science—and its adverse consequences could get far worse.
A profit-driven agenda
As Black in AI was just beginning to coalesce, AI was hitting its commercial stride. That year, 2016, tech giants spent an estimated $20 to $30 billion on developing the technology, according to the McKinsey Global Institute.
Heated by corporate investment, the field warped. Thousands more researchers began studying AI, but they mostly wanted to work on deep-learning algorithms, such as the ones behind large language models. “As a young PhD student who wants to get a job at a tech company, you realize that tech companies are all about deep learning,” says Suresh Venkatasubramanian, a computer science professor who now serves at the White House Office of Science and Technology Policy. “So you shift all your research to deep learning. Then the next PhD student coming in looks around and says, ‘Everyone’s doing deep learning. I should probably do it too.’”
But deep learning isn’t the only technique in the field. Before its boom, there was a different AI approach known as symbolic reasoning. Whereas deep learning uses massive amounts of data to teach algorithms about meaningful relationships in information, symbolic reasoning focuses on explicitly encoding knowledge and logic based on human expertise.
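The distinction can be made concrete with a toy sketch. In the first half, knowledge is written down explicitly by a human, in the style of symbolic reasoning; in the second, a (drastically simplified) stand-in for a learned model infers its decision rule from labeled examples instead. All names, traits, and numbers here are hypothetical, chosen only for illustration.

```python
# Symbolic reasoning: a human encodes the knowledge explicitly as rules.
KNOWLEDGE_BASE = {
    ("has_feathers", "lays_eggs"): "bird",
    ("has_fur", "produces_milk"): "mammal",
}

def classify_symbolic(traits):
    """Apply hand-written rules: the logic was authored, not learned."""
    for conditions, label in KNOWLEDGE_BASE.items():
        if all(t in traits for t in conditions):
            return label
    return "unknown"

# Learning from data (deep learning reduced to its bare essence): the
# system is given labeled examples and infers the rule itself. Here a
# single learned cutoff stands in for millions of learned weights.
def fit_threshold(examples):
    """Learn a cutoff separating two classes from (value, label) pairs."""
    positives = [v for v, label in examples if label == 1]
    negatives = [v for v, label in examples if label == 0]
    return (min(positives) + max(negatives)) / 2

data = [(0.9, 1), (0.8, 1), (0.2, 0), (0.3, 0)]  # toy training data
cutoff = fit_threshold(data)

print(classify_symbolic({"has_feathers", "lays_eggs"}))  # prints "bird"
print(cutoff)  # a decision boundary inferred from data, never written by hand
```

The point of the contrast: the symbolic classifier is only as good as the rules a human thought to write, while the data-driven one needs examples (and, at real scale, enormous amounts of them) to find its rule.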
Some researchers now believe those techniques should be combined. The hybrid approach would make AI more efficient in its use of data and energy, and give it the knowledge and reasoning abilities of an expert as well as the capacity to update itself with new information. But companies have little incentive to explore alternative approaches when the surest way to maximize their profits is to build ever bigger models.
How AI is reinventing what computers are
Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, there’s something remarkable going on.
Google’s latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a “neural engine,” also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without our noticing, AI has become part of our day-to-day lives. And it’s changing how we think about computing.
What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for.
“The core of computing is changing from number-crunching to decision-making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes.
More haste, less speed
The first change concerns how computers—and the chips that control them—are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that came with metronomic regularity as chipmakers kept up with Moore’s Law.
But the deep-learning models that make current AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That means a new type of chip is required: one that can move data around as quickly as possible, making sure it’s available when and where it’s needed. When deep learning exploded onto the scene a decade or so ago, there were already specialty computer chips available that were pretty good at this: graphics processing units, or GPUs, which were designed to display an entire screenful of pixels dozens of times a second.
Now chipmakers like Intel and Arm, along with Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an AI edge through hardware.
For example, the chip inside the Pixel 6 is a new mobile version of Google’s tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has used these chips in-house since 2015: they process people’s photos and natural-language search queries. Google’s sister company DeepMind uses them to train its AIs.
In the last couple of years, Google has made TPUs available to other companies, and these chips—as well as similar ones being developed by others—are becoming the default inside the world’s data centers.
AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips.
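The trial-and-error loop behind that kind of reinforcement learning can be sketched in miniature. The example below is a heavily simplified, hypothetical stand-in for chip-layout search (Google’s actual system is far more sophisticated): an agent repeatedly tries one of three abstract layout options, observes a noisy, invented quality score as its reward, and gradually shifts toward whichever option works best.

```python
import random

random.seed(0)

# Hypothetical, invented quality scores for three abstract layout options.
TRUE_QUALITY = {"layout_a": 0.2, "layout_b": 0.9, "layout_c": 0.5}

estimates = {k: 0.0 for k in TRUE_QUALITY}  # agent's running value estimates
counts = {k: 0 for k in TRUE_QUALITY}       # how often each option was tried

for step in range(2000):
    if random.random() < 0.1:                       # explore occasionally
        choice = random.choice(list(TRUE_QUALITY))
    else:                                           # otherwise exploit best guess
        choice = max(estimates, key=estimates.get)
    # Noisy feedback: the agent never sees the true scores directly.
    reward = TRUE_QUALITY[choice] + random.gauss(0, 0.1)
    counts[choice] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(max(estimates, key=estimates.get))  # converges on "layout_b"
```

Nothing told the agent which layout was best; it discovered that by trying, failing, and updating its estimates, which is the same basic loop, scaled up enormously, that lets such systems find designs no human would think of.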
Show, don’t tell
The second change concerns how computers are told what to do. For the past 40 years we have been programming computers; for the next 40 we will be training them, says Chris Bishop, head of Microsoft Research in the UK.
Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with rules for the computer to follow.
With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It’s a fundamentally different way of thinking.
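A minimal sketch of “training instead of programming”: rather than writing the rule for logical AND, we let a single artificial neuron learn it from examples. This is a textbook perceptron, not any company’s actual system, and every number here is illustrative.

```python
# Labeled examples of logical AND: inputs and the desired output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0   # the neuron's adjustable parameters
lr = 0.1                    # learning rate: how big each nudge is

for epoch in range(50):                         # show the examples repeatedly
    for (x1, x2), target in data:
        out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        err = target - out                      # how wrong was the guess?
        w1 += lr * err * x1                     # nudge each parameter in the
        w2 += lr * err * x2                     # direction that reduces
        b += lr * err                           # the error

# The learned parameters now implement AND, though we never wrote that rule.
for (x1, x2), _ in data:
    print((x1, x2), 1 if w1 * x1 + w2 * x2 + b > 0 else 0)
```

The programmer supplied only examples and a learning procedure; the rule itself emerged from training, which is the shift Bishop is describing.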
Decarbonizing industries with connectivity and 5G
The United Nations Intergovernmental Panel on Climate Change’s sixth assessment report—an aggregated assessment of scientific research prepared by some 300 scientists across 66 countries—has served as the loudest and clearest wake-up call to date on the global warming crisis. The panel unequivocally attributes the increase in the earth’s temperature—it has risen by 1.1 °C since the Industrial Revolution—to human activity. Without substantial and immediate reductions in carbon dioxide and other greenhouse gas emissions, temperatures will rise between 1.5 °C and 2 °C before the end of the century. That, the panel warns, would expose all of humanity to a “greater risk of passing through ‘tipping points,’ thresholds beyond which certain impacts can no longer be avoided even if temperatures are brought back down later on.”
Corporations and industries must therefore redouble their efforts to reduce and remove greenhouse gas emissions, with speed and precision—but to do this, they must also commit to deep operational and organizational transformation. Cellular infrastructure, particularly 5G, is one of many digital tools and technology-enabled processes organizations have at their disposal to accelerate decarbonization.
5G and other cellular technology can enable increasingly interconnected supply chains and networks, improve data sharing, optimize systems, and increase operational efficiency. These capabilities could soon contribute to an exponential acceleration of global efforts to reduce carbon emissions.
Industries such as energy, manufacturing, and transportation could have the biggest impact on decarbonization efforts through the use of 5G, as they are some of the biggest greenhouse-gas-emitting industries, and all rely on connectivity to link to one another through communications network infrastructure.
5G delivers multi-gigabit peak data speeds, ultra-low latency, and greater reliability and network capacity. That higher performance and improved efficiency could help businesses and public infrastructure providers focus on transforming their operations and reducing harmful emissions—work that requires effective digital management and monitoring of distributed operations, with resilience and analytic insight. Through better insight and more powerful network configurations, 5G can help factories, logistics networks, power companies, and others operate more efficiently and more purposefully, in line with their explicit sustainability objectives.
This report, “Decarbonizing industries with connectivity & 5G,” argues that the capabilities enabled by broadband cellular connectivity—primarily, though not exclusively, through 5G network infrastructure—are a unique, powerful, and immediate enabler of carbon reduction efforts. They have the potential to dramatically accelerate decarbonization, as increasingly interconnected supply chains, transportation, and energy networks share data to increase efficiency and productivity, optimizing systems for lower carbon emissions.
Surgeons have successfully tested a pig’s kidney in a human patient
The reception: The research was conducted last month and has yet to be peer-reviewed or published in a journal, but external experts say it represents a major advance. “There is no doubt that this is a highly significant breakthrough,” says Darren K. Griffin, a professor of genetics at the University of Kent, UK. “The research team were cautious, using a patient who had suffered brain death, attaching the kidney to the outside of the body, and closely monitoring for only a limited amount of time. There is thus a long way to go and much to discover,” he added.
“This is a huge breakthrough. It’s a big, big deal,” Dorry Segev, a professor of transplant surgery at Johns Hopkins School of Medicine who was not involved in the research, told the New York Times. However, he added, “we need to know more about the longevity of the organ.”
The background: In recent years, research has increasingly zeroed in on pigs as the most promising avenue to help address the shortage of organs for transplant, but it has faced a number of obstacles, most prominently the fact that a sugar in pig cells triggers an aggressive rejection response in humans.
The researchers got around this by genetically altering the donor pig to knock out the gene encoding the sugar molecule that causes the rejection response. The pig was genetically engineered by Revivicor, one of several biotech companies working to develop pig organs to transplant into humans.
The big prize: There is a dire need for more kidneys. More than 100,000 people in the US are currently waiting for a kidney transplant, and 13 of them die every day, according to the National Kidney Foundation. Genetically engineered pigs could offer a crucial lifeline for these people, if the approach tested at NYU Langone can work for much longer periods.