The great chip crisis threatens the promise of Moore’s Law

Even as microchips have become essential in so many products, their development and manufacturing have come to be dominated by a small number of producers with limited capacity—and appetite—for churning out the commodity chips that are a staple for today’s technologies. And because making chips requires hundreds of manufacturing steps and months of production time, the semiconductor industry cannot quickly pivot to satisfy the pandemic-fueled surge in demand. 

After decades of fretting about how we will carve out features as small as a few nanometers on silicon wafers, the spirit of Moore’s Law—the expectation that cheap, powerful chips will be readily available—is now being threatened by something far more mundane: inflexible supply chains. 

A lonely frontier

Twenty years ago, the world had 25 manufacturers making leading-edge chips. Today, only Taiwan Semiconductor Manufacturing Company (TSMC) in Taiwan, Intel in the United States, and Samsung in South Korea have the facilities, or fabs, that produce the most advanced chips. And Intel, long a technology leader, is struggling to keep up, having repeatedly missed deadlines for producing its latest generations. 

One reason for the consolidation is that building a facility to make the most advanced chips costs between $5 billion and $20 billion. These fabs make chips with features as small as a few nanometers; in industry jargon they’re called 5-nanometer and 7-nanometer nodes. Much of the cost of new fabs goes toward buying the latest equipment, such as a tool called an extreme ultraviolet lithography (EUV) machine that costs more than $100 million. Made solely by ASML in the Netherlands, EUV machines are used to etch detailed circuit patterns with nanometer-size features.

Chipmakers have been working on EUV technology for more than two decades. After billions of dollars of investment, EUV machines were first used in commercial chip production in 2018. “That tool is 20 years late, 10x over budget, because it’s amazing,” says David Kanter, executive director of an open engineering consortium focused on machine learning. “It’s almost magical that it even works. It’s totally like science fiction.”

Such gargantuan effort made it possible to create the billions of tiny transistors in Apple’s M1 chip, which was made by TSMC; it’s among the first generation of leading-edge chips to rely fully on EUV. 

Only the largest tech companies are willing to pay hundreds of millions of dollars to design a chip for leading-edge nodes.

Paying for the best chips makes sense for Apple because these chips go into the latest MacBook and iPhone models, which sell by the millions at luxury-brand prices. “The only company that is actually using EUV in high volume is Apple, and they sell $1,000 smartphones for which they have insane margin,” Kanter says.

Not only are the fabs for manufacturing such chips expensive, but the cost of designing the immensely complex circuits is now beyond the reach of many companies. In addition to Apple, only the largest tech companies that require the highest computing performance, such as Qualcomm, AMD, and Nvidia, are willing to pay hundreds of millions of dollars to design a chip for leading-edge nodes, says Sri Samavedam, senior vice president of CMOS technologies at Imec, an international research institute based in Leuven, Belgium.

Many more companies are producing laptops, TVs, and cars that use chips made with older technologies, and a spike in demand for these is at the heart of the current chip shortage. Simply put, a majority of chip customers can’t afford—or don’t want to pay for—the latest chips. A typical car today uses dozens of microchips, and an electric vehicle uses many more, so the cost quickly adds up. That is why makers of things like cars have stuck with chips made using older technologies.

What’s more, many of today’s most popular electronics simply don’t require leading-edge chips. “It doesn’t make sense to put, for example, an A14 [iPhone and iPad] chip in every single computer that we have in the world,” says Hassan Khan, a former doctoral researcher at Carnegie Mellon University who studied the public policy implications of the end of Moore’s Law and currently works at Apple. “You don’t need it in your smart thermometer at home, and you don’t need 15 of them in your car, because it’s very power hungry and it’s very expensive.”

The problem is that even as more users rely on older and cheaper chip technologies, the giants of the semiconductor industry have focused on building new leading-edge fabs. TSMC, Samsung, and Intel have all recently announced billions of dollars in investments for the latest manufacturing facilities. Yes, they’re expensive, but that’s where the profits are—and for the last 50 years, it has been where the future is. 

TSMC, the world’s largest contract manufacturer for chips, earned almost 60% of its 2020 revenue from making leading-edge chips with features 16 nanometers and smaller, including Apple’s M1 chip made with the 5-nanometer manufacturing process.

Making the problem worse is that “nobody is building semiconductor manufacturing equipment to support older technologies,” says Dale Ford, chief analyst at the Electronic Components Industry Association, a trade association based in Alpharetta, Georgia. “And so we’re kind of stuck between a rock and a hard spot here.”

Low-end chips

All this matters to users of technology not only because of the supply disruption it’s causing today, but also because it threatens the development of many potential innovations. In addition to being harder to come by, cheaper commodity chips are also becoming relatively more expensive, since each chip generation has required more costly equipment and facilities than the generations before. 

Some consumer products will simply demand more powerful chips. The buildout of faster 5G mobile networks and the rise of computing applications reliant on 5G speeds could compel investment in specialized chips designed for networking equipment that talks to dozens or hundreds of Internet-connected devices. Automotive features such as advanced driver-assistance systems and in-vehicle “infotainment” systems may also benefit from leading-edge chips, as evidenced by electric-vehicle maker Tesla’s reported partnerships with both TSMC and Samsung on chip development for future self-driving cars.

But buying the latest leading-edge chips or investing in specialized chip designs may not be practical for many companies when developing products for an “intelligence everywhere” future. Makers of consumer devices such as a Wi-Fi-enabled sous vide machine are unlikely to spend the money to develop specialized chips on their own for the sake of adding even fancier features, Kanter says. Instead, they will likely fall back on whatever chips made using older technologies can provide.

The majority of today’s chip customers make do with the cheaper commodity chips that represent a trade-off between cost and performance.

And lower-cost items such as clothing, he says, have “razor-thin margins” that leave little wiggle room for more expensive chips that would add a dollar—let alone $10 or $20—to each item’s price tag. That means the climbing price of computing power may prevent the development of clothing that could, for example, detect and respond to voice commands or changes in the weather.

The world can probably live without fancier sous vide machines, but the lack of ever cheaper and more powerful chips would come with a real cost: the end of an era of inventions fueled by Moore’s Law and its decades-old promise that increasingly affordable computation power will be available for the next innovation. 

The majority of today’s chip customers make do with the cheaper commodity chips that represent a trade-off between cost and performance. And it’s the supply of such commodity chips that appears far from adequate as the global demand for computing power grows. 

“It is still the case that semiconductor usage in vehicles is going up, semiconductor usage in your toaster oven and for all kinds of things is going up,” says Willy Shih, a professor of management practice at Harvard Business School. “So then the question is, where is the shortage going to hit next?”

A global concern

In early 2021, President Joe Biden signed an executive order mandating supply chain reviews for chips and threw his support behind a bipartisan push in Congress to approve at least $50 billion for semiconductor manufacturing and research. Biden also held two White House summits with leaders from the semiconductor and auto industries, including an April 12 meeting during which he prominently displayed a silicon wafer.

The actions won’t solve the imbalance between chip demand and supply anytime soon. But at the very least, experts say, today’s crisis represents an opportunity for the US government to try to finally fix the supply chain and reverse the overall slowdown in semiconductor innovation—and perhaps shore up the US’s capacity to make the badly needed chips.

An estimated 75% of all chip manufacturing capacity was based in East Asia as of 2019, with the US share sitting at approximately 13%. Taiwan’s TSMC alone has nearly 55% of the foundry market that handles consumer chip manufacturing orders.

Looming over everything is the US-China rivalry. China’s national champion firm SMIC has been building fabs that are still five or six years behind the cutting edge in chip technologies. But it’s possible that Chinese foundries could help meet the global demand for chips built on older nodes in the coming years.  “Given the state subsidies they receive, it’s possible Chinese foundries will be the lowest-cost manufacturers as they stand up fabs at the 22-nanometer and 14-nanometer nodes,” Khan says. “Chinese fabs may not be competitive at the frontier, but they could supply a growing portion of demand.”

Meet the people who warn the world about new covid variants

In March 2020, when the WHO declared a pandemic, the public sequence database GISAID held 524 covid sequences. Over the next month scientists uploaded 6,000 more. By the end of May, the total was over 35,000. (In contrast, global scientists added 40,000 flu sequences to GISAID in all of 2019.)

“Without a name, forget about it—we cannot understand what other people are saying,” says Anderson Brito, a postdoc in genomic epidemiology at the Yale School of Public Health, who contributes to the Pango effort. 

As the number of covid sequences spiraled, researchers trying to study them were forced to create entirely new infrastructure and standards on the fly. A universal naming system has been one of the most important elements of this effort: without it, scientists would struggle to talk to each other about how the virus’s descendants are traveling and changing—either to flag up a question or, even more critically, to sound the alarm.

Where Pango came from

In April 2020, a handful of prominent virologists in the UK and Australia proposed a system of letters and numbers for naming lineages, or new branches, of the covid family. It had a logic, and a hierarchy, even though the names it generated—like B.1.1.7—were a bit of a mouthful.
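
That hierarchy is baked into the names themselves: each dot-separated number designates a sublineage of everything before it, so B.1.1.7 descends from B.1.1, which descends from B.1, which descends from B. A minimal sketch of that decomposition, purely illustrative rather than the actual Pango tooling, looks like this:

    def ancestors(lineage: str) -> list[str]:
        """Return the parent lineages encoded in a Pango name.

        Each dot-separated suffix names a sublineage of the part before it,
        so B.1.1.7 descends from B.1.1, then B.1, then B.
        """
        parts = lineage.split(".")
        return [".".join(parts[:i]) for i in range(len(parts) - 1, 0, -1)]

    print(ancestors("B.1.1.7"))    # ['B.1.1', 'B.1', 'B']
    print(ancestors("B.1.617.2"))  # ['B.1.617', 'B.1', 'B']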

One of the authors on the paper was Áine O’Toole, a PhD candidate at the University of Edinburgh. Soon she’d become the primary person actually doing that sorting and classifying, eventually combing through hundreds of thousands of sequences by hand.

She says: “Very early on, it was just who was available to curate the sequences. That ended up being my job for a good bit. I guess I never understood quite the scale we were going to get to.”

She quickly set about building software to assign new genomes to the right lineages. Not long after that, another researcher, postdoc Emily Scher, built a machine-learning algorithm to speed things up even more. 
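
The core task that software automates is matching each new genome’s mutations against the characteristic mutations of known lineages. A toy version of that idea, with simplified defining-mutation sets and none of the real Pangolin machinery, might look like:

    # Toy illustration of lineage assignment by mutation profile; the
    # defining-mutation sets here are simplified stand-ins, and the real
    # Pangolin software uses curated definitions and a trained model.
    LINEAGE_MUTATIONS = {
        "B.1.1.7":   {"N501Y", "P681H", "del69-70"},
        "B.1.617.2": {"L452R", "P681R", "T478K"},
    }

    def assign_lineage(sample_mutations: set[str]) -> str:
        """Pick the lineage whose defining mutations best match the sample."""
        def score(lineage: str) -> float:
            defining = LINEAGE_MUTATIONS[lineage]
            return len(sample_mutations & defining) / len(defining)
        return max(LINEAGE_MUTATIONS, key=score)

    print(assign_lineage({"N501Y", "del69-70", "S982A"}))  # -> B.1.1.7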

“Without a name, forget about it—we cannot understand what other people are saying.”

Anderson Brito, Yale School of Public Health

They named the software Pangolin, a tongue-in-cheek reference to a debate about the animal origin of covid. (The whole system is now simply known as Pango.)

The naming system, along with the software to implement it, quickly became a global essential. Although the WHO has recently started using Greek letters for variants that seem especially concerning, like delta, those nicknames are for the public and the media. Delta actually refers to a growing family of variants, which scientists call by their more precise Pango names: B.1.617.2, AY.1, AY.2, and AY.3.

“When alpha emerged in the UK, Pango made it very easy for us to look for those mutations in our genomes to see if we had that lineage in our country too,” says Jolly. “Ever since then, Pango has been used as the baseline for reporting and surveillance of variants in India.”

Because Pango offers a rational, orderly approach to what would otherwise be chaos, it may forever change the way scientists name viral strains—allowing experts from all over the world to work together with a shared vocabulary. Brito says: “Most likely, this will be a format we’ll use for tracking any other new virus.”

Many of the foundational tools for tracking covid genomes have been developed and maintained by early-career scientists like O’Toole and Scher over the last year and a half. As the need for worldwide covid collaboration exploded, scientists rushed to support it with ad hoc infrastructure like Pango. Much of that work fell to tech-savvy young researchers in their 20s and 30s. They used informal networks and tools that were open source—meaning they were free to use, and anyone could volunteer to add tweaks and improvements. 

“The people on the cutting edge of new technologies tend to be grad students and postdocs,” says Angie Hinrichs, a bioinformatician at UC Santa Cruz who joined the project earlier this year. For example, O’Toole and Scher work in the lab of Andrew Rambaut, a genomic epidemiologist who posted the first public covid sequences online after receiving them from Chinese scientists. “They just happened to be perfectly placed to provide these tools that became absolutely critical,” Hinrichs says.

Building fast

It hasn’t been easy. For most of 2020, O’Toole took on the bulk of the responsibility for identifying and naming new lineages by herself. The university was shuttered, but she and another of Rambaut’s PhD students, Verity Hill, got permission to come into the office. Her commute, walking 40 minutes to school from the apartment where she lived alone, gave her some sense of normalcy.

Every few weeks, O’Toole would download the entire covid repository from the GISAID database, which had grown dramatically since her last pass. Then she would hunt around for groups of genomes with mutations that looked similar, or things that looked odd and might have been mislabeled.

When she got particularly stuck, Hill, Rambaut, and other members of the lab would pitch in to discuss the designations. But the grunt work fell on her. 

“Imagine going through 20,000 sequences from 100 different places in the world. I saw sequences from places I’d never even heard of.”

Áine O’Toole, University of Edinburgh

Deciding when descendants of the virus deserve a new family name can be as much art as science. It was a painstaking process, sifting through an unheard-of number of genomes and asking time and again: Is this a new variant of covid or not? 

“It was pretty tedious,” she says. “But it was always really humbling. Imagine going through 20,000 sequences from 100 different places in the world. I saw sequences from places I’d never even heard of.”

As time went on, O’Toole struggled to keep up with the volume of new genomes to sort and name.

In June 2020, there were over 57,000 sequences stored in the GISAID database, and O’Toole had sorted them into 39 variants. By November 2020, a month after she was supposed to turn in her thesis, O’Toole took her last solo run through the data. It took her 10 days to go through all the sequences, which by then numbered 200,000. (Although covid has overshadowed her research on other viruses, she’s putting a chapter on Pango in her thesis.) 

Fortunately, the Pango software is built to be collaborative, and others have stepped up. An online community—the one that Jolly turned to when she noticed the variant sweeping across India—sprouted and grew. This year, O’Toole’s work has been much more hands-off. New lineages are now designated mostly when epidemiologists around the world contact O’Toole and the rest of the team through Twitter, email, or GitHub—her preferred method.

“Now it’s more reactionary,” says O’Toole. “If a group of researchers somewhere in the world is working on some data and they believe they’ve identified a new lineage, they can put in a request.”

The deluge of data has continued. This past spring, the team held a “pangothon,” a sort of hackathon in which they sorted 800,000 sequences into around 1,200 lineages. 

“We gave ourselves three solid days,” says O’Toole. “It took two weeks.”

Since then, the Pango team has recruited a few more volunteers, like UCSC researcher Hinrichs and Yale researcher Brito, who both got involved initially by adding their two cents on Twitter and the GitHub page. A postdoc at the University of Cambridge, Chris Ruis, has turned his attention to helping O’Toole clear out the backlog of GitHub requests.

O’Toole recently asked them to formally join the organization as part of the newly created Pango Network Lineage Designation Committee, which discusses and makes decisions about variant names. Another committee, which includes lab leader Rambaut, makes higher-level decisions.

“We’ve got a website, and an email that’s not just my email,” O’Toole says. “It’s become a lot more formalized, and I think that will really help it scale.” 

The future

A few cracks around the edges have started to show as the data has grown. As of today, there are nearly 2.5 million covid sequences in GISAID, which the Pango team has split into 1,300 branches. Each branch corresponds to a variant. Of those, eight are ones to watch, according to the WHO.

With so much to process, the software is starting to buckle. Things are getting mislabeled. Many strains look similar, because the virus evolves the most advantageous mutations over and over again. 

As a stopgap measure, the team has built new software that uses a different sorting method and can catch things that Pango may miss. 

Disability rights advocates are worried about discrimination in AI hiring tools

Making hiring technology accessible means ensuring both that a candidate can use the technology and that the skills it measures don’t unfairly exclude candidates with disabilities, says Alexandra Givens, the CEO of the Center for Democracy and Technology, an organization focused on civil rights in the digital age.

AI-powered hiring tools often fail to include people with disabilities when generating their training data, she says. Such people have long been excluded from the workforce, so algorithms modeled after a company’s previous hires won’t reflect their potential.

Even if the models could account for outliers, the way a disability presents itself varies widely from person to person. Two people with autism, for example, could have very different strengths and challenges.

“As we automate these systems, and employers push to what’s fastest and most efficient, they’re losing the chance for people to actually show their qualifications and their ability to do the job,” Givens says. “And that is a huge loss.”

A hands-off approach

Government regulators are finding it difficult to monitor AI hiring tools. In December 2020, 11 senators wrote a letter to the US Equal Employment Opportunity Commission expressing concerns about the use of hiring technologies after the covid-19 pandemic. The letter inquired about the agency’s authority to investigate whether these tools discriminate, particularly against those with disabilities.

The EEOC responded with a letter in January that was leaked to MIT Technology Review. In the letter, the commission indicated that it cannot investigate AI hiring tools without a specific claim of discrimination. The letter also outlined concerns about the industry’s hesitance to share data and said that variation between different companies’ software would prevent the EEOC from instituting any broad policies.

“I was surprised and disappointed when I saw the response,” says Roland Behm, a lawyer and advocate for people with behavioral health issues. “The whole tenor of that letter seemed to make the EEOC seem like more of a passive bystander rather than an enforcement agency.”

The agency typically starts an investigation once an individual files a claim of discrimination. With AI hiring technology, though, most candidates don’t know why they were rejected for the job. “I believe a reason that we haven’t seen more enforcement action or private litigation in this area is due to the fact that candidates don’t know that they’re being graded or assessed by a computer,” says Keith Sonderling, an EEOC commissioner.

Sonderling says he believes that artificial intelligence will improve the hiring process, and he hopes the agency will issue guidance for employers on how best to implement it. He says he welcomes oversight from Congress.

We just got our best-ever look at the inside of Mars

NASA’s InSight robotic lander has just given us our first look deep inside a planet other than Earth. 

More than two years after its launch, seismic data that InSight collected has given researchers hints into how Mars was formed, how it has evolved over 4.6 billion years, and how it differs from Earth. A set of three new studies, published in Science this week, suggests that Mars has a thicker crust than expected, as well as a molten liquid core that is bigger than we thought.  

In the early days of the solar system, Mars and Earth were pretty much alike, each with a blanket of ocean covering the surface. But over the following 4 billion years, Earth became temperate and perfect for life, while Mars lost its atmosphere and water and became the barren wasteland we know today. Finding out more about what Mars is like inside might help us work out why the two planets had such very different fates. 

“By going from [a] cartoon understanding of what the inside of Mars looks like to putting real numbers on it,” said Mark Panning, project scientist for the InSight mission, during a NASA press conference, “we are able to really expand the family tree of understanding how these rocky planets form and how they’re similar and how they’re different.” 

Since InSight landed on Mars in 2018, its seismometer, which sits on the surface of the planet, has picked up more than a thousand distinct quakes. Most are so small they would be unnoticeable to someone standing on Mars’s surface. But a few were big enough to help the team get the first true glimpse of what’s happening underneath. 

Marsquakes create seismic waves that the seismometer detects. Researchers created a 3D map of Mars using data from two different kinds of seismic waves: shear and pressure waves. Shear waves, which can only pass through solids, are reflected off the planet’s surface.  

Pressure waves are faster and can pass through solids, liquids, and gases. Measuring the differences between the times that these waves arrived allowed the researchers to locate quakes and gave clues to the interior’s composition.  
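
The arithmetic behind that localization is the same as for earthquakes on Earth: because pressure waves outrun shear waves, the gap between their arrival times grows with the distance to the quake. A back-of-the-envelope sketch, using illustrative wave speeds rather than the mission’s actual velocity model:

    def distance_from_sp_gap(sp_gap_s: float,
                             vp_km_s: float = 7.5,
                             vs_km_s: float = 4.2) -> float:
        """Estimate distance to a quake from the S-minus-P arrival gap.

        A pressure wave arrives after d/vp seconds and a shear wave after
        d/vs seconds, so the gap is d * (1/vs - 1/vp); solve for d. The
        wave speeds here are illustrative placeholders, not InSight's model.
        """
        return sp_gap_s / (1.0 / vs_km_s - 1.0 / vp_km_s)

    # A 300-second gap between the two arrivals implies a quake roughly
    # 2,900 km away at these assumed speeds.
    print(f"{distance_from_sp_gap(300):.0f} km")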

One team, led by Simon Stähler, a seismologist at ETH Zurich, used data generated by 11 bigger quakes to study the planet’s core. From the way the seismic waves reflected off the core, they concluded it’s made from liquid nickel-iron, and that it’s far larger than had been previously estimated (between 2,230 and 2,320 miles wide) and probably less dense.
