Updater
June 07, 2024, in technology


Is GenAI running out of chips?

The demand for the specialist chips that power generative AI models has sent prices through the roof and created a worldwide shortage. Competitors and governments are moving to close the gap, but lead times in the advanced chip segment are notoriously long.


The explosion of generative AI (GenAI) has driven a parallel surge in demand for GPU chips, making Nvidia, their leading manufacturer, one of the most valuable companies in the world. It has also led to a global shortage of these powerful chips.

We look at the efforts of other chipmakers, and of the AI companies themselves, to close the gap, as well as the responses of governments. How will this affect AI development in the near future?

Why GenAI needs GPUs

Increasingly, chips make the world go around, but when it comes to natural language processing (NLP) or generating images, not just any chip will do. The graphics processing units (GPUs) developed to handle the rapid video streams of computer gaming turned out to be ideal for the massive parallel processing needed to train AI models. From being a specialist supplier to the gaming sector, Nvidia found itself thrust almost overnight into the role of principal supplier of silicon to the exploding GenAI revolution.
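
To make that parallelism concrete, here is a minimal sketch, assuming PyTorch is installed and a CUDA-capable GPU is available, that times the same large matrix multiplication (the core operation inside neural-network layers) on a CPU and on a GPU:

```python
# Illustrative only: time one large matrix multiplication on CPU vs. GPU.
# Assumes PyTorch is installed and a CUDA-capable GPU is present.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()   # make sure the GPU is idle before timing
    start = time.perf_counter()
    _ = a @ b                      # the massively parallel workload
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the GPU result before stopping the clock
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```

On typical hardware the GPU finishes the multiplication one to two orders of magnitude faster, and training a large model repeats operations like this billions of times.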

AI Multiple Research explains: “the number of parameters (consequently the width and depth) of the neural networks and therefore the model size is increasing. To build better deep learning models and power generative AI applications, organizations require increased computing power and memory bandwidth.” Developing one of these powerful, specialized chips is no easy feat. Even Intel had to shelve its efforts after three years of trying to develop a chip to compete with Nvidia’s V100 Tensor Core technology. Eventually, it went back to the drawing board (more on that later).
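
As a rough illustration of that appetite for memory, the back-of-envelope sketch below estimates the accelerator memory needed to train models of different sizes. The 16-bytes-per-parameter figure is a common rule of thumb for mixed-precision training with the Adam optimizer, not a vendor specification:

```python
# Back-of-envelope estimate of accelerator memory needed during training.
# 16 bytes per parameter is a rough rule of thumb for mixed-precision
# training with Adam (weights + gradients + optimizer state); real
# requirements vary with precision, optimizer and parallelism strategy.
def training_memory_gb(params_billions: float, bytes_per_param: float = 16) -> float:
    # 1e9 parameters * bytes_per_param bytes, expressed in GB (1 GB = 1e9 bytes)
    return params_billions * bytes_per_param

for size_b in (1, 7, 70, 175):  # model sizes in billions of parameters
    print(f"{size_b:>4}B parameters -> roughly {training_memory_gb(size_b):,.0f} GB of memory to train")
```

At 175 billion parameters that is on the order of 2,800 GB of training state, far more than any single accelerator holds, which is why training runs are spread across large clusters of GPUs.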

For its part, Nvidia had a head start on some of the other chipmakers, putting it in a prime position for the GenAI revolution.

How Nvidia dominates

Nvidia got its start in the 1990s making graphics chips for the gaming sector, and went on to supply consoles like the PlayStation and Xbox. These days it also makes processors such as Volta, Xavier, and Tesla, enabling everything from data centers to autonomous driving. Now, as GenAI takes off, that head start over other chip developers is paying off handsomely. In Q2 of 2023, the company reached a $1 trillion valuation, claiming its spot at the top of the GPU market. Last month Forbes predicted that its stock price would exceed $1,000 per share, which it duly did.

At the heart of these skyrocketing prices is the equally meteoric rise in demand for Nvidia’s AI-focused chips, which Forbes put at 410% growth, to $18.4 billion in revenue, for the quarter ending in January. Big-name customers like Microsoft, Google, and Meta are all turning to Nvidia for their upcoming AI initiatives.

“For AI workloads on the cloud, Nvidia almost has a monopoly with most cloud players offering only Nvidia GPUs as cloud GPUs,” reports AI Multiple Research. “Nvidia also launched its DGX Cloud offering providing cloud GPU infrastructure directly to enterprises.” With demand soaring, and a clearly dominant player establishing a near-monopoly, a chip shortage seemed almost inevitable. That’s exactly what happened, due in large part to the limited production capacity of CoWoS packaging technology, which is “a major bottleneck in AI chip output and will stay as a problem for AI chip supply in 2024.” Inevitably, the lack of chips has forced ambitious AI companies to rethink their plans.

Despite all this, competitors and governments are finding ways to diversify the options in this increasingly important market.

Tech firms and governments respond to the demand surge

The overwhelming demand for GPU chips has triggered responses from tech firms and governments worldwide. Intel announced its latest entry into the world of AI chips in April 2024. The company claims the Gaudi 3 chip is more than “twice as power-efficient as and can run AI models one-and-a-half times faster than Nvidia’s H100 GPU,” reports NBC News.

In December, Advanced Micro Devices (AMD) launched its MI300 chip, calling it “the most advanced AI accelerator in the industry.” At the unveiling, the company rattled off a number of specs to prove it was better than the competition, specifically Nvidia. And according to The Economist, “the MI300 does indeed outshine the H100. Investors liked it, too—AMD’s share price jumped by 10% the next day.”

Even some of Nvidia’s own clients are getting into the chip game. Quartz reports, “Now other tech giants are introducing their own AI chips: Meta’s MTIA, Microsoft’s Maia, Amazon’s Trainium, and Google’s TPUs.”

While the free market does its thing, governments are also looking for solutions. U.S. President Joe Biden is on a quest to close the gap in the AI chip supply chain. Back in February, the U.S. announced a $5 billion investment in semiconductor research. That came on the heels of a ban on exporting certain kinds of AI chips.

The Financial Times reports that even though Taiwan Semiconductor Manufacturing Company (TSMC), the world’s biggest chipmaker and a key supplier to Nvidia, has agreed to bring its latest technology to America, there are still hurdles to clear. Despite “TSMC’s planned $65bn of investments in Arizona”, and the ongoing arms race among other chip developers (all of which are also taking subsidies from Washington), some parts will likely be produced in China. As the Financial Times put it, this is “a reflection of the complexity involved in packaging various types of chips together to boost their performance and efficiency.”

A temporary brake?

Even as governments and competitors scramble to meet the need for more chips to power GenAI, building new manufacturing facilities takes time. Depending on your perspective, slowing the race to AI dominance may not be a bad thing: supply-chain issues may give the industry, and regulators, time to catch up.
