The great chip crisis threatens the promise of Moore’s Law

Even as microchips have become essential in so many products, their development and manufacturing have come to be dominated by a small number of producers with limited capacity—and appetite—for churning out the commodity chips that are a staple for today’s technologies. And because making chips requires hundreds of manufacturing steps and months of production time, the semiconductor industry cannot quickly pivot to satisfy the pandemic-fueled surge in demand. 

After decades of fretting about how we will carve out features as small as a few nanometers on silicon wafers, the spirit of Moore’s Law—the expectation that cheap, powerful chips will be readily available—is now being threatened by something far more mundane: inflexible supply chains. 

A lonely frontier

Twenty years ago, the world had 25 manufacturers making leading-edge chips. Today, only Taiwan Semiconductor Manufacturing Company (TSMC) in Taiwan, Intel in the United States, and Samsung in South Korea have the facilities, or fabs, that produce the most advanced chips. And Intel, long a technology leader, is struggling to keep up, having repeatedly missed deadlines for producing its latest generations. 

One reason for the consolidation is that building a facility to make the most advanced chips costs between $5 billion and $20 billion. These fabs make chips with features as small as a few nanometers; in industry jargon they’re called 5-nanometer and 7-nanometer nodes. Much of the cost of new fabs goes toward buying the latest equipment, such as a tool called an extreme ultraviolet lithography (EUV) machine that costs more than $100 million. Made solely by ASML in the Netherlands, EUV machines are used to etch detailed circuit patterns with nanometer-size features.

Chipmakers have been working on EUV technology for more than two decades. After billions of dollars of investment, EUV machines were first used in commercial chip production in 2018. “That tool is 20 years late, 10x over budget, because it’s amazing,” says David Kanter, executive director of an open engineering consortium focused on machine learning. “It’s almost magical that it even works. It’s totally like science fiction.”

Such gargantuan effort made it possible to create the billions of tiny transistors in Apple’s M1 chip, which was made by TSMC; it’s among the first generation of leading-edge chips to rely fully on EUV. 

Paying for the best chips makes sense for Apple because these chips go into the latest MacBook and iPhone models, which sell by the millions at luxury-brand prices. “The only company that is actually using EUV in high volume is Apple, and they sell $1,000 smartphones for which they have insane margin,” Kanter says.

Not only are the fabs for manufacturing such chips expensive, but the cost of designing the immensely complex circuits is now beyond the reach of many companies. In addition to Apple, only the largest tech companies that require the highest computing performance, such as Qualcomm, AMD, and Nvidia, are willing to pay hundreds of millions of dollars to design a chip for leading-edge nodes, says Sri Samavedam, senior vice president of CMOS technologies at Imec, an international research institute based in Leuven, Belgium. 

Many more companies are producing laptops, TVs, and cars that use chips made with older technologies, and a spike in demand for these is at the heart of the current chip shortage. Simply put, a majority of chip customers can’t afford—or don’t want to pay for—the latest chips. A typical car today uses dozens of microchips, and an electric vehicle uses many more, so the cost quickly adds up. Instead, makers of things like cars have stuck with chips made using older technologies.

What’s more, many of today’s most popular electronics simply don’t require leading-edge chips. “It doesn’t make sense to put, for example, an A14 [iPhone and iPad] chip in every single computer that we have in the world,” says Hassan Khan, a former doctoral researcher at Carnegie Mellon University who studied the public policy implications of the end of Moore’s Law and currently works at Apple. “You don’t need it in your smart thermometer at home, and you don’t need 15 of them in your car, because it’s very power hungry and it’s very expensive.”

The problem is that even as more users rely on older and cheaper chip technologies, the giants of the semiconductor industry have focused on building new leading-edge fabs. TSMC, Samsung, and Intel have all recently announced billions of dollars in investments for the latest manufacturing facilities. Yes, they’re expensive, but that’s where the profits are—and, for the last 50 years, where the future has been. 

TSMC, the world’s largest contract manufacturer for chips, earned almost 60% of its 2020 revenue from making leading-edge chips with features 16 nanometers and smaller, including Apple’s M1 chip made with the 5-nanometer manufacturing process.

Making the problem worse is that “nobody is building semiconductor manufacturing equipment to support older technologies,” says Dale Ford, chief analyst at the Electronic Components Industry Association, a trade association based in Alpharetta, Georgia. “And so we’re kind of stuck between a rock and a hard spot here.”

Low-end chips

All this matters to users of technology not only because of the supply disruption it’s causing today, but also because it threatens the development of many potential innovations. In addition to being harder to come by, cheaper commodity chips are also becoming relatively more expensive, since each chip generation has required more costly equipment and facilities than the generations before. 

Some consumer products will simply demand more powerful chips. The buildout of faster 5G mobile networks and the rise of computing applications reliant on 5G speeds could compel investment in specialized chips designed for networking equipment that talks to dozens or hundreds of Internet-connected devices. Automotive features such as advanced driver-assistance systems and in-vehicle “infotainment” systems may also benefit from leading-edge chips, as evidenced by electric-vehicle maker Tesla’s reported partnerships with both TSMC and Samsung on chip development for future self-driving cars.

But buying the latest leading-edge chips or investing in specialized chip designs may not be practical for many companies when developing products for an “intelligence everywhere” future. Makers of consumer devices such as a Wi-Fi-enabled sous vide machine are unlikely to spend the money to develop specialized chips on their own for the sake of adding even fancier features, Kanter says. Instead, they will likely fall back on whatever chips made using older technologies can provide.

And lower-cost items such as clothing, he says, have “razor-thin margins” that leave little wiggle room for more expensive chips that would add a dollar—let alone $10 or $20—to each item’s price tag. That means the climbing price of computing power may prevent the development of clothing that could, for example, detect and respond to voice commands or changes in the weather.

The world can probably live without fancier sous vide machines, but the lack of ever cheaper and more powerful chips would come with a real cost: the end of an era of inventions fueled by Moore’s Law and its decades-old promise that increasingly affordable computation power will be available for the next innovation. 

The majority of today’s chip customers make do with the cheaper commodity chips that represent a trade-off between cost and performance. And it’s the supply of such commodity chips that appears far from adequate as the global demand for computing power grows. 

“It is still the case that semiconductor usage in vehicles is going up, semiconductor usage in your toaster oven and for all kinds of things is going up,” says Willy Shih, a professor of management practice at Harvard Business School. “So then the question is, where is the shortage going to hit next?”

A global concern

In early 2021, President Joe Biden signed an executive order mandating supply chain reviews for chips and threw his support behind a bipartisan push in Congress to approve at least $50 billion for semiconductor manufacturing and research. Biden also held two White House summits with leaders from the semiconductor and auto industries, including an April 12 meeting during which he prominently displayed a silicon wafer.

The actions won’t solve the imbalance between chip demand and supply anytime soon. But at the very least, experts say, today’s crisis represents an opportunity for the US government to try to finally fix the supply chain and reverse the overall slowdown in semiconductor innovation—and perhaps shore up the US’s capacity to make the badly needed chips.

An estimated 75% of all chip manufacturing capacity was based in East Asia as of 2019, with the US share sitting at approximately 13%. Taiwan’s TSMC alone has nearly 55% of the foundry market that handles consumer chip manufacturing orders.

Looming over everything is the US-China rivalry. China’s national champion firm SMIC has been building fabs that are still five or six years behind the cutting edge in chip technologies. But it’s possible that Chinese foundries could help meet the global demand for chips built on older nodes in the coming years.  “Given the state subsidies they receive, it’s possible Chinese foundries will be the lowest-cost manufacturers as they stand up fabs at the 22-nanometer and 14-nanometer nodes,” Khan says. “Chinese fabs may not be competitive at the frontier, but they could supply a growing portion of demand.”

Rediscover trust in cybersecurity

The world has changed dramatically in a short amount of time—changing the world of work along with it. The new hybrid remote and in-office work world has ramifications for tech—specifically cybersecurity—and signals that it’s time to acknowledge just how intertwined humans and technology truly are.

Enabling a fast-paced, cloud-powered collaboration culture is critical to rapidly growing companies, positioning them to out-innovate, outperform, and outsmart their competitors. Achieving this level of digital velocity, however, comes with a rapidly growing cybersecurity challenge that is often overlooked or deprioritized: insider risk, when a team member accidentally—or not—shares data or files outside of trusted parties. Ignoring the intrinsic link between employee productivity and insider risk can impact both an organization’s competitive position and its bottom line. 

You can’t treat employees the same way you treat nation-state hackers

Insider risk includes any user-driven data exposure event—security, compliance or competitive in nature—that jeopardizes the financial, reputational or operational well-being of a company and its employees, customers, and partners. Thousands of user-driven data exposure and exfiltration events occur daily, stemming from accidental user error, employee negligence, or malicious users intending to do harm to the organization. Many users create insider risk accidentally, simply by making decisions based on time and reward, sharing and collaborating with the goal of increasing their productivity. Other users create risk due to negligence, and some have malicious intentions, like an employee stealing company data to bring to a competitor. 

From a cybersecurity perspective, organizations need to treat insider risk differently than external threats. With threats like hackers, malware, and nation-state threat actors, the intent is clear—it’s malicious. But the intent of employees creating insider risk is not always clear—even if the impact is the same. Employees can leak data by accident or due to negligence. Fully accepting this truth requires a mindset shift for security teams that have historically operated with a bunker mentality—under siege from the outside, holding their cards close to the vest so the enemy doesn’t gain insight into their defenses to use against them. Employees are not the adversaries of a security team or a company—in fact, they should be seen as allies in combating insider risk.

Transparency feeds trust: Building a foundation for training

All companies want to keep their crown jewels—source code, product designs, customer lists—from ending up in the wrong hands. Imagine the financial, reputational, and operational risk that could come from material data being leaked before an IPO, acquisition, or earnings call. Employees play a pivotal role in preventing data leaks, and there are two crucial elements to turning employees into insider risk allies: transparency and training. 

Transparency may feel at odds with cybersecurity. For cybersecurity teams that operate with an adversarial mindset appropriate for external threats, it can be challenging to approach internal threats differently. Transparency is all about building trust on both sides. Employees want to feel that their organization trusts them to use data wisely. Security teams should always start from a place of trust, assuming the majority of employees’ actions have positive intent. But, as the saying goes in cybersecurity, it’s important to “trust, but verify.” 

Monitoring is a critical part of managing insider risk, and organizations should be transparent about this. CCTV cameras are not hidden in public spaces. In fact, they are often accompanied by signs announcing surveillance in the area. Leadership should make it clear to employees that their data movements are being monitored—but that their privacy is still respected. There is a big difference between monitoring data movement and reading all employee emails.

Transparency builds trust—and with that foundation, an organization can focus on mitigating risk by changing user behavior through training. At the moment, security education and awareness programs are niche. Phishing training is likely the first thing that comes to mind due to the success it’s had moving the needle and getting employees to think before they click. Outside of phishing, there is not much training for users to understand what, exactly, they should and shouldn’t be doing.

For a start, many employees don’t even know where their organizations stand. What applications are they allowed to use? What are the rules of engagement for those apps if they want to use them to share files? What data can they use? Are they entitled to that data? Does the organization even care? Cybersecurity teams deal with a lot of noise made by employees doing things they shouldn’t. What if you could cut down that noise just by answering these questions?

Training employees should be both proactive and responsive. Proactively, in order to change employee behavior, organizations should provide both long- and short-form training modules to instruct and remind users of best behaviors. Additionally, organizations should respond with a micro-learning approach using bite-sized videos designed to address highly specific situations. The security team needs to take a page from marketing, focusing on repetitive messages delivered to the right people at the right time. 

Once business leaders understand that insider risk is not just a cybersecurity issue, but one that is intimately intertwined with an organization’s culture and has a significant impact on the business, they will be in a better position to out-innovate, outperform, and outsmart their competitors. In today’s hybrid remote and in-office work world, the human element that exists within technology has never been more significant. That’s why transparency and training are essential to keep data from leaking outside the organization. 

This content was produced by Code42. It was not written by MIT Technology Review’s editorial staff.

How AI is reinventing what computers are

Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, there’s something remarkable going on. 

Google’s latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a “neural engine,” also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without our noticing, AI has become part of our day-to-day lives. And it’s changing how we think about computing.

What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for. 

“The core of computing is changing from number-crunching to decision-making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes. 

More haste, less speed

The first change concerns how computers—and the chips that control them—are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that came with metronomic regularity as chipmakers kept up with Moore’s Law. 

But the deep-learning models that make current AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That means a new type of chip is required: one that can move data around as quickly as possible, making sure it’s available when and where it’s needed. When deep learning exploded onto the scene a decade or so ago, there were already specialty computer chips available that were pretty good at this: graphics processing units, or GPUs, which were designed to display an entire screenful of pixels dozens of times a second. 
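To make that pattern concrete, here is a minimal Python sketch using NumPy (illustrative code, not drawn from the article; the array sizes are arbitrary assumptions). It contrasts the traditional approach of issuing one calculation after another with handing over a single large batch, the shape of workload that GPUs and other AI accelerators are built to execute as a huge number of calculations at once.

```python
# Minimal sketch: calculations issued one at a time vs. one large batch.
# Illustrative only; sizes are arbitrary and this runs on the CPU via NumPy.
import time
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal((2048, 1024)).astype(np.float32)   # a batch of inputs
weights = rng.standard_normal((1024, 1024)).astype(np.float32)  # one layer's weights

# Pattern 1: one calculation after another, the traditional serial style.
start = time.perf_counter()
out_rows = np.empty((2048, 1024), dtype=np.float32)
for i in range(inputs.shape[0]):
    out_rows[i] = inputs[i] @ weights
print(f"row by row:       {time.perf_counter() - start:.3f} s")

# Pattern 2: the whole batch as a single operation -- the kind of workload
# accelerators are designed to run as many calculations in parallel.
start = time.perf_counter()
out_batch = inputs @ weights
print(f"one batched call: {time.perf_counter() - start:.3f} s")

# Same numbers up to rounding; only the execution pattern differs.
print("max difference:", float(np.abs(out_rows - out_batch).max()))
```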

Anything can become a computer. Indeed, most household objects, from toothbrushes to light switches to doorbells, already come in a smart version.

Now chipmakers like Intel and Arm, as well as Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an AI edge through hardware. 

For example, the chip inside the Pixel 6 is a new mobile version of Google’s tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has used these chips in-house since 2015: they process people’s photos and natural-language search queries. Google’s sister company DeepMind uses them to train its AIs. 
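As a rough illustration of that trade-off, the hypothetical NumPy snippet below (not Google’s TPU arithmetic, which uses formats such as bfloat16) runs the same neural-network-style matrix multiplication in 32-bit and in 16-bit floating point: the reduced-precision result stays close to the full-precision one while the weights take up half the memory.

```python
# Sketch: the same matrix multiply in full (32-bit) vs. reduced (16-bit)
# precision. Illustrative only; real accelerators use bfloat16, int8, etc.
import numpy as np

rng = np.random.default_rng(1)
activations = rng.standard_normal((256, 512)).astype(np.float32)
weights = rng.standard_normal((512, 128)).astype(np.float32)

full = activations @ weights                                            # 32-bit
reduced = activations.astype(np.float16) @ weights.astype(np.float16)   # 16-bit

rel_err = np.abs(full - reduced.astype(np.float32)) / (np.abs(full) + 1e-6)
print(f"median relative error: {np.median(rel_err):.4f}")   # small for NN purposes
print(f"weight memory: {weights.nbytes} bytes (fp32) vs "
      f"{weights.astype(np.float16).nbytes} bytes (fp16)")
```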

In the last couple of years, Google has made TPUs available to other companies, and these chips—as well as similar ones being developed by others—are becoming the default inside the world’s data centers. 

AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips. 

Show, don’t tell

The second change concerns how computers are told what to do. For the past 40 years we have been programming computers; for the next 40 we will be training them, says Chris Bishop, head of Microsoft Research in the UK. 

Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with rules for the computer.

With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It’s a fundamentally different way of thinking. 
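To ground that shift in something concrete, here is a toy sketch (hypothetical code, not from the article): rather than hand-writing a rule such as “flag an item when a particular keyword appears,” we give a single trainable “neuron” a handful of labeled examples and let gradient descent find the weights that encode the rule.

```python
# Toy sketch of training instead of programming: a single logistic "neuron"
# learns a classification rule from labeled examples rather than being told it.
import numpy as np

# Hypothetical feature vectors (say, counts of three keywords) and labels.
X = np.array([[3, 0, 1], [0, 2, 0], [4, 1, 0],
              [0, 0, 2], [2, 3, 0], [0, 1, 3]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0], dtype=float)   # 1 = flag, 0 = ignore

w, b, lr = np.zeros(3), 0.0, 0.1                # weights start knowing nothing

for _ in range(2000):                           # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # predicted probability of "flag"
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

print("learned weights:", np.round(w, 2))       # the "rule", found rather than written
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("predictions:", preds, "labels:", y.astype(int))
```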

Decarbonizing industries with connectivity and 5G

The United Nations Intergovernmental Panel on Climate Change’s sixth climate change report—an aggregated assessment of scientific research prepared by some 300 scientists across 66 countries—has served as the loudest and clearest wake-up call to date on the global warming crisis. The panel unequivocally attributes the increase in the earth’s temperature—it has risen by 1.1 °C since the Industrial Revolution—to human activity. Without substantial and immediate reductions in carbon dioxide and other greenhouse gas emissions, temperatures will rise between 1.5 °C and 2 °C before the end of the century. That, the panel posits, will lead all of humanity to a “greater risk of passing through ‘tipping points,’ thresholds beyond which certain impacts can no longer be avoided even if temperatures are brought back down later on.”

Corporations and industries must therefore redouble their greenhouse gas emissions reduction and removal efforts with speed and precision—but to do this, they must also commit to deep operational and organizational transformation. Cellular infrastructure, particularly 5G, is one of the many digital tools and technology-enabled processes organizations have at their disposal to accelerate decarbonization efforts.  

5G and other cellular technology can enable increasingly interconnected supply chains and networks, improve data sharing, optimize systems, and increase operational efficiency. These capabilities could soon contribute to an exponential acceleration of global efforts to reduce carbon emissions.

Industries such as energy, manufacturing, and transportation could have the biggest impact on decarbonization efforts through the use of 5G, as they are some of the biggest greenhouse-gas-emitting industries, and all rely on connectivity to link to one another through communications network infrastructure.

The higher performance and improved efficiency of 5G—which delivers multi-gigabit peak data speeds, ultra-low latency, increased reliability, and greater network capacity—could help businesses and public infrastructure providers focus on business transformation and the reduction of harmful emissions. This requires effective digital management and monitoring of distributed operations, with resilience and analytic insight. 5G will help factories, logistics networks, power companies, and others operate more efficiently, more consciously, and more purposefully in line with their explicit sustainability objectives, through better insight and more powerful network configurations.

This report, “Decarbonizing industries with connectivity & 5G,” argues that the capabilities enabled by broadband cellular connectivity (primarily, though not exclusively, through 5G network infrastructure) are a unique, powerful, and immediate enabler of carbon reduction efforts. They have the potential to dramatically accelerate decarbonization, as increasingly interconnected supply chains, transportation, and energy networks share data to increase efficiency and productivity, thereby optimizing systems for lower carbon emissions.
