The group aims to vaccinate about 20% of the people in the world, focusing on hard-to-reach populations in Africa, Latin America, and Asia. To do so, it needs another $4.9 billion in addition to the $2.1 billion it has already raised. But there are other problems. The cheaper and easier-to-transport vaccines like the ones pledged by AstraZeneca have been slower to gain regulatory approval. Meanwhile, other companies seem less interested in pitching in: Doctors Without Borders found that only 2% of Pfizer’s global supply had been pledged to Covax, and Moderna is still “in talks” with the organization.
“Covax is a critical starting point that—without a commitment from President Biden—had a high probability of failure. It’s looking better now, but could still fail if it doesn’t get money and vaccines,” says Barry Bloom, a global health researcher at the Harvard T.H. Chan School of Public Health. Biden officially directed the US government to join Covax in late January.
If it can succeed, the international program has many upsides. It establishes a mechanism of fairness that doesn’t depend on colonial mentalities of quid pro quo, says Bloom. It also absolves individual rich countries from having to determine which countries get what percentage of the vaccines. “This is a way of saying somebody else will take the rap, especially for the delivery time,” he says.
We’re not safe until we’re all safe
The motive for getting the vaccine to poorer countries more quickly is not just altruism: evolution will punish any delays. SARS-CoV-2 has already mutated into several worrying new variants, and this process will continue. If countries with large populations wait to be vaccinated for years, the virus will keep mutating—potentially to the point that the first available vaccines lose effectiveness. That will be bad for everyone, but poorer countries, with less access to updated vaccines, will again feel more of the impact.
“We get more mutants and they get more deaths,” says Bloom.
Judd Walson, a global health researcher at the University of Washington, worries more about the indirect effects of the pandemic in the developing world, where in many places covid-19 doesn’t even rank in the top 20 causes of death. Health systems have directed a lot of personnel and resources to dealing with the pandemic—setting up quarantine centers, doing surveillance, and more. In addition, the attention of funders and health ministries has been diverted away from diarrhea, malaria, and other killers.
As a result, those other programs are suffering: rates of immunization for diseases such as measles, diphtheria, tetanus, and whooping cough are declining, both for lack of supplies and personnel and because people fear going to health centers. “All those other things that are killing people are being neglected, so not providing a covid vaccine stops governments from shifting back to their priorities before the pandemic,” says Walson.
And while virus variants can travel fast in a highly connected world, so can economic instability. That’s one takeaway from a recent paper published by the nonprofit National Bureau of Economic Research. Sebnem Kalemli-Özcan, an economist at the University of Maryland, and colleagues analyzed how delays in global vaccine distribution would affect the economies in countries whose populations had already been vaccinated.
The economic cost of inequity
They found that a world where poorer countries have to wait to be vaccinated would see a global economic loss of about $9 trillion this year, with wealthy countries absorbing nearly half of those losses in declining trade and fractured supply lines. (A similar study by the RAND Corporation estimated that failure to ensure equitable covid-19 vaccine distribution could cost the global economy up to $1.2 trillion a year.) Ensuring equitable distribution is actually in the best interests of advanced economies. “Their hit will come back and hit you,” says Kalemli-Özcan.
How AI is reinventing what computers are
Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, there’s something remarkable going on.
Google’s latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a “neural engine,” also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without our noticing, AI has become part of our day-to-day lives. And it’s changing how we think about computing.
What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for.
“The core of computing is changing from number-crunching to decision-making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes.
More haste, less speed
The first change concerns how computers—and the chips that control them—are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that came with metronomic regularity as chipmakers kept up with Moore’s Law.
But the deep-learning models that make current AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That means a new type of chip is required: one that can move data around as quickly as possible, making sure it’s available when and where it’s needed. When deep learning exploded onto the scene a decade or so ago, there were already specialty computer chips available that were pretty good at this: graphics processing units, or GPUs, which were designed to display an entire screenful of pixels dozens of times a second.
Anything can become a computer. Indeed, most household objects, from toothbrushes to light switches to doorbells, already come in a smart version.
Now chipmakers like Intel, Arm, and Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an AI edge through hardware.
For example, the chip inside the Pixel 6 is a new mobile version of Google’s tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has used these chips in-house since 2015: they process people’s photos and natural-language search queries. Google’s sister company DeepMind uses them to train its AIs.
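The trade-off behind these chips can be sketched in a few lines of Python. This is a toy illustration of the principle, not how a TPU actually works: real hardware uses dedicated int8 or bfloat16 multiply-accumulate units, and the scaling scheme here is a simplified assumption.

```python
# Illustrative sketch: a dot product computed in low-precision (8-bit)
# integer arithmetic, the kind of cheap, high-volume calculation that
# neural-network accelerators are built around.

def quantize(values, scale=127.0, max_abs=1.0):
    # Map floats in [-max_abs, max_abs] to 8-bit integers in [-127, 127].
    return [round(v / max_abs * scale) for v in values]

def int8_dot(a, b, scale=127.0, max_abs=1.0):
    # Do the multiply-adds entirely in integer arithmetic,
    # then apply a single rescale at the end to recover a float.
    qa, qb = quantize(a, scale, max_abs), quantize(b, scale, max_abs)
    acc = sum(x * y for x, y in zip(qa, qb))
    return acc * (max_abs / scale) ** 2

a = [0.12, -0.5, 0.33, 0.9]
b = [0.4, 0.1, -0.2, 0.05]

exact = sum(x * y for x, y in zip(a, b))   # full-precision reference
approx = int8_dot(a, b)                    # close to exact, but every
                                           # inner-loop op was an integer op
```

The learned weights in a neural network tolerate this kind of rounding, so an accelerator that performs enormous numbers of these imprecise operations in parallel beats a conventional CPU that insists on exact arithmetic.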
In the last couple of years, Google has made TPUs available to other companies, and these chips—as well as similar ones being developed by others—are becoming the default inside the world’s data centers.
AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips.
Show, don’t tell
The second change concerns how computers are told what to do. For the past 40 years we have been programming computers; for the next 40 we will be training them, says Chris Bishop, head of Microsoft Research in the UK.
Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with rules for the computer.
With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It’s a fundamentally different way of thinking.
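The contrast can be sketched in a few lines of Python. This is a deliberately tiny example; a real neural network learns millions of parameters rather than a single threshold, but the shift in who writes the rule is the same.

```python
# Rule-based programming: a human hard-codes the decision boundary.
def classify_by_rule(x):
    return "large" if x > 5.0 else "small"

# Machine learning: the boundary is inferred from labeled examples
# instead of being written by hand.
def learn_threshold(examples):
    # examples: list of (value, label) pairs. Put the boundary midway
    # between the biggest "small" example and the smallest "large" one.
    smalls = [v for v, lab in examples if lab == "small"]
    larges = [v for v, lab in examples if lab == "large"]
    return (max(smalls) + min(larges)) / 2

data = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]
threshold = learn_threshold(data)  # learned from data, not hard-coded

def classify_learned(x):
    return "large" if x > threshold else "small"
```

In the first function, the programmer decided where the line falls; in the second, the data did. Change the training examples and the learned rule changes with them, with no human rewriting the logic.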
Decarbonizing industries with connectivity and 5G
The United Nations Intergovernmental Panel on Climate Change’s sixth climate change report—an aggregated assessment of scientific research prepared by some 300 scientists across 66 countries—has served as the loudest and clearest wake-up call to date on the global warming crisis. The panel unequivocally attributes the increase in the earth’s temperature—it has risen by 1.1 °C since the Industrial Revolution—to human activity. Without substantial and immediate reductions in carbon dioxide and other greenhouse gas emissions, temperatures will rise between 1.5 °C and 2 °C before the end of the century. That, the panel posits, will lead all of humanity to a “greater risk of passing through ‘tipping points,’ thresholds beyond which certain impacts can no longer be avoided even if temperatures are brought back down later on.”
Corporations and industries must therefore redouble their greenhouse gas emissions reduction and removal efforts with speed and precision—but to do this, they must also commit to deep operational and organizational transformation. Cellular infrastructure, particularly 5G, is one of the many digital tools and technology-enabled processes organizations have at their disposal to accelerate decarbonization efforts.
5G and other cellular technology can enable increasingly interconnected supply chains and networks, improve data sharing, optimize systems, and increase operational efficiency. These capabilities could soon contribute to an exponential acceleration of global efforts to reduce carbon emissions.
Industries such as energy, manufacturing, and transportation could have the biggest impact on decarbonization efforts through the use of 5G, as they are among the biggest greenhouse-gas-emitting industries and all rely on communications network infrastructure to connect with one another.
The higher performance and improved efficiency of 5G—which delivers higher multi-gigabit peak data speeds, ultra-low latency, increased reliability, and increased network capacity—could help businesses and public infrastructure providers focus on business transformation and reduction of harmful emissions. This requires effective digital management and monitoring of distributed operations with resilience and analytic insight. Through better insight and more powerful network configurations, 5G will help factories, logistics networks, power companies, and others operate more efficiently, more consciously, and more purposefully in line with their explicit sustainability objectives.
This report, “Decarbonizing industries with connectivity & 5G,” argues that the capabilities enabled by broadband cellular connectivity (primarily, though not exclusively, through 5G network infrastructure) are a unique, powerful, and immediate enabler of carbon reduction efforts. They have the potential to create a transformational acceleration of decarbonization efforts, as increasingly interconnected supply chains, transportation, and energy networks share data to increase efficiency and productivity, thereby optimizing systems for lower carbon emissions.
Surgeons have successfully tested a pig’s kidney in a human patient
The reception: The research was conducted last month and is yet to be peer reviewed or published in a journal, but external experts say it represents a major advance. “There is no doubt that this is a highly significant breakthrough,” says Darren K. Griffin, a professor of genetics at the University of Kent, UK. “The research team were cautious, using a patient who had suffered brain death, attaching the kidney to the outside of the body, and closely monitoring for only a limited amount of time. There is thus a long way to go and much to discover,” he added.
“This is a huge breakthrough. It’s a big, big deal,” Dorry Segev, a professor of transplant surgery at Johns Hopkins School of Medicine who was not involved in the research, told the New York Times. However, he added, “we need to know more about the longevity of the organ.”
The background: In recent years, research has increasingly zeroed in on pigs as the most promising avenue to help address the shortage of organs for transplant, but it has faced a number of obstacles, most prominently the fact that a sugar in pig cells triggers an aggressive rejection response in humans.
The researchers got around this by genetically altering the donor pig to knock out the gene encoding the sugar molecule that causes the rejection response. The pig was genetically engineered by Revivicor, one of several biotech companies working to develop pig organs to transplant into humans.
The big prize: There is a dire need for more kidneys. More than 100,000 people in the US are currently waiting for a kidney transplant, and 13 of them die every day, according to the National Kidney Foundation. Genetically engineered pigs could offer a crucial lifeline for these people, if the approach tested at NYU Langone can work for much longer periods.