The major bottleneck for AI is a shortage of powerful processors.

As demand for AI has skyrocketed, the global supply chain for the high-powered chips needed to build and deploy AI models has shown its limitations.

The ongoing chip bottleneck has hurt firms of all sizes, including some of the most prominent platforms in the artificial intelligence (AI) industry, and it is not expected to ease significantly for at least another year.

Microsoft’s most recent annual report is the latest indicator of the ongoing shortfall of AI chips. For the first time, the report flags the availability of GPUs as a potential risk factor for the business.

GPUs are a crucial piece of hardware for training and deploying AI models, performing the vast number of computations those workloads require.

“As the demand for artificial intelligence (AI) services rises, we are constantly looking for ways to build out more datacenters and add more servers to accommodate our growing customer base,” Microsoft wrote. “We need authorized and buildable land, dependable electricity, networking supply, servers, including GPUs, and other components for our datacenters.”

Access to computing capacity is a major barrier for AI, and Microsoft’s acknowledgment of the GPU constraint underscores the issue. The problem affects both the companies building AI tools and products and the businesses and end users that want to employ AI for their own purposes.

Sam Altman, the chief executive officer of OpenAI, testified before the US Senate in May that the company’s chatbot was struggling to keep up with the volume of requests from users.

The company is so short on graphics processing units that the fewer people who use the tool, the better, Altman said. A company spokesperson later affirmed that OpenAI is committed to providing sufficient capacity.

This situation may sound familiar if you remember the steep price hikes for video game consoles and computer graphics cards during the pandemic. When GPUs became hard to come by because of production delays, labor shortages, disruptions in global shipping, and persistent competing demand from cryptocurrency miners, a cottage industry of deal-tracking tools sprang up to help ordinary consumers get what they needed.

Demand for specialized chips is soaring

But industry experts say the nature of the current shortage is very different. The scarcity stems not from a disruption in the supply of consumer-focused GPUs but from booming demand for the extremely high-end GPUs used for advanced work, including building and running AI models.

Manufacturing capacity for these GPUs is limited, and the sudden surge in demand has overwhelmed the available supply.

According to Raj Joshi, a senior vice president at Moody’s Investors Service who tracks the semiconductor industry, the unprecedented appetite for AI has produced a “huge sucking sound” of demand from enterprises.

Neither the pace nor the scale of this growth in demand could have been predicted, Joshi said, adding that in his view the surge caught the industry off guard.

Nvidia, the trillion-dollar chipmaker that by some estimates controls 84% of the market for discrete GPUs, stands to gain enormously from the AI boom. In a research note issued in May, Joshi predicted “unparalleled” revenue growth for Nvidia in the coming quarters and forecast that Nvidia’s data center division would generate more revenue than competitors Intel and AMD combined.

On its May earnings call, Nvidia said it had “procured substantially higher supply for the second half of the year” to meet the growing demand for AI processors. The company declined to comment on Tuesday, citing the standard quiet period ahead of its earnings.

Meanwhile, AMD said on Tuesday that it expects to launch its answer to Nvidia’s AI GPUs by the end of the year.

AMD CEO Lisa Su said on the company’s earnings call that customer interest in the company’s AI products is very high, adding that while there is still more work to do, she believes the progress made so far is significant.

From one bottleneck to the next

Sid Sheth, founder and CEO of AI firm d-Matrix, noted that the problem has been exacerbated by a shortage of a crucial input that GPU makers buy from their own suppliers: silicon interposers, the components used to join high-bandwidth memory chips with dedicated processing units to complete a finished GPU.

The CHIPS Act, passed last year, is expected to provide billions of dollars in support for the domestic chip industry and for chip research and development, and the Biden administration has made expanding US chip manufacturing capacity a priority. However, those funds are being spread across a range of chip technologies rather than being directed solely at increasing GPU output.

As more factories come online and Nvidia’s rivals increase their output, the chip shortage should ease. Some experts estimate that could take as long as three years.

In the meantime, the shortage may push businesses to develop novel solutions. Companies that previously had access to plentiful chips are now forced to prioritize efficiency, according to Sheth.

“Necessity is the mother of invention, right?” Sheth said. Because companies no longer have access to boundless computing power, “people now find resourceful ways to use whatever computing power they have in a much smarter way.”

Sheth pointed to a range of workarounds, from alternatives to traditional CPUs and GPUs for running these computations to smaller AI models that can be easier and less computationally intensive to train than a large one.

“In the end, this turns out to be a good thing,” he added.
