Can Intel's Gaudi-3 Chip Compete with Nvidia?

Hello Everyone,

While I usually cover AI chip topics in my newly founded Semiconductor Newsletter, some stories are worthy of macro analysis, and I believe this is one of them.


From our sponsor:


Bring your AI to every Mac app, using the app's context

Omnipilot brings AI to every Mac app, using the app's context to provide intelligent assistance. You can invoke it with a shortcut to supercharge writing, email, and getting answers.

Try it Now


Intel Vision Highlights

Recently it came to light that Intel was a huge beneficiary of Biden's CHIPS Act, and I'm not entirely sure why. In late March 2024, we learned that Intel will receive up to $8.5 billion in grants and $11 billion in loans from the U.S. government to produce cutting-edge semiconductors, the biggest deployment of funds under the CHIPS and Science Act.

Intel Foundry, a new division of the company responsible for manufacturing, had sales of $18.9 billion in 2023, down from $27.5 billion the previous year, the company reported about one week ago. What are the chances of an Intel turnaround given the dominance of players such as TSMC, Nvidia, and a now-resurgent Samsung?

Enter Gaudi.

Intel claims its Gaudi 3 chip matches, and in some cases exceeds, Nvidia's H100 AI processor when it comes to training and deploying generative AI models.

In terms of total sales, 2023 may be the last year Intel is on top.

Intel’s Gaudi-3: An AI chip for 2024

Intel took the covers off the new Gaudi-3 AI accelerator this week at the annual Intel Vision conference, joining AMD in offering at least an alternative to Nvidia’s H100, the older chip that preceded Blackwell. Intel is looking to take market share from current leader Nvidia, which holds an estimated 80% of the AI chip market.

View Gaudi-3 AI Specs

The Gaudi-3 AI chip is made on TSMC's 5nm process node and aims to provide an alternative in a market dominated by Nvidia. Meanwhile, Big Tech companies are also trying to do more in-house in this respect. For instance, the WSJ reports that Google is making more of its own semiconductors, preparing a new chip that can handle everything from YouTube advertising to big-data analysis as the company tries to combat rising artificial-intelligence costs. Google’s new chip is called Axion.

I do think Pat Gelsinger may be past his prime, pushing Intel into a no-win situation. Nvidia’s CUDA software moat and partnerships have never been stronger, and its software and AI chips continue to evolve at an astonishing rate. Nvidia is years ahead in AI chips themselves, and its Blackwell B200 GPUs make both AMD’s and Intel’s AI chips almost irrelevant unless supply is really tight. However, even a sliver of this AI chip market might help Intel’s financial situation in the years ahead.

Nvidia’s GB200 and B200 Blackwell AI chips are built to democratize trillion-parameter AI and are likely several generations beyond anything AMD, Intel, or anyone else can produce in 2024, as far as I am aware.

Intel, in its defense, has made considerable progress. Intel says Gaudi 3 promises a 4x increase in BF16 AI compute and a 1.5x boost in memory bandwidth compared to Gaudi 2, substantially enhancing AI training and inference capabilities for enterprises aiming to deploy generative AI at scale.
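To see why a "4x compute, 1.5x bandwidth" headline rarely means a 4x end-to-end speedup, here is a minimal roofline-style sketch. This is my own back-of-envelope reasoning, not Intel's methodology; the only inputs are the two claimed generational ratios, and the `compute_bound_fraction` parameter is a hypothetical workload characteristic.

```python
# Rough roofline/Amdahl-style sketch (not Intel's methodology): the claimed
# generational multipliers only translate into end-to-end speedup for the
# portion of runtime bound by that resource.
BF16_COMPUTE_RATIO = 4.0  # Intel's claimed Gaudi 3 vs Gaudi 2 BF16 compute
MEM_BW_RATIO = 1.5        # Intel's claimed memory-bandwidth improvement

def expected_speedup(compute_bound_fraction: float) -> float:
    """Naive estimate: time spent compute-bound shrinks by the compute
    ratio, the rest by the bandwidth ratio."""
    t_new = (compute_bound_fraction / BF16_COMPUTE_RATIO
             + (1.0 - compute_bound_fraction) / MEM_BW_RATIO)
    return 1.0 / t_new

# A mostly compute-bound training job approaches the 4x figure;
# a bandwidth-bound inference job is capped much nearer 1.5x.
print(round(expected_speedup(0.9), 2))  # ~3.43
print(round(expected_speedup(0.2), 2))  # ~1.71
```

This is one reason inference-heavy benchmarks (which tend to be memory-bound) show smaller generational gains than training throughput numbers.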

Gaudi-3 Might be Intel’s Last Shot at Relevance in AI

Intel announced Gaudi 3 availability to original equipment manufacturers (OEMs) – including Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro – broadening the AI data center market offerings for enterprises.

Intel says the Gaudi 3 is up to 1.7 times faster than the H100 when training common large language models. Intel even says its chip is on average 1.3 times faster than Nvidia’s beefier H200 when running inference on certain language models. How does it arrive at such conclusions? I hope more clarity will be revealed soon.

Intel's chief executive said the company is committed to becoming the world’s No. 2 AI systems foundry by the end of the decade, but I am not too bullish on its prospects; I think AMD is a better-run firm and more AI-native.

Intel introduced the Intel Gaudi 3 AI accelerator on April 9, 2024, at the Intel Vision event in Phoenix, Arizona. It is designed to bring global enterprises choice for generative AI, building on the performance and scalability of its Gaudi 2 predecessor, but it will have an uphill battle.

Intel’s stock is down 20% in 2024 alone. Intel is now deeply unprofitable, but thinks it can break even by 2027. Intel’s chip-making division accumulated $7 billion in operating losses in 2023. Intel’s prospects in AI look fairly poor given how competitive the AI chip space is going to become over the next decade.

When you are outsourcing some of your work to your biggest competitors, it’s not usually a good sign. Then there is the China problem. Gelsinger says these latest numbers are partly the result of Intel’s past mistakes catching up with its foundry business, mistakes that led the chipmaker to outsource about 30 percent of all its wafer production to other foundries, like TSMC, currently one of Intel’s biggest competitors.

Taiwan thanks the inept Americans for their business. Given that Taiwanese-Americans already run both Nvidia and AMD, it’s a bit hilarious to behold. Taiwan’s TSMC will itself get around $6.6 billion from Biden’s CHIPS Act. Still, it’s good for the ecosystem to have more options than just the H100, and hopefully this means compute gets cheaper. Nvidia’s near-monopoly on the AI chip sector hasn’t been healthy, in 2022 and especially in 2023.

The Gaudi 3 AI accelerator was built to increase the speed and efficiency of parallel AI operations. We certainly cannot take Intel’s internal benchmark comparisons at face value. Notably, Intel said it is waiting for Nvidia to publish performance results for its newly unveiled Blackwell chip before it can compare it with Gaudi 3. Gaudi 3 also comes in different configurations, such as a bundle of eight chips on one motherboard or a card that can slot into existing systems.

“Innovation is advancing at an unprecedented pace, all enabled by silicon — and every company is quickly becoming an AI company,” Intel CEO Pat Gelsinger said in a statement. “Intel is bringing AI everywhere across the enterprise from the PC to the data center to the edge.”

I’m adding tons of details and snippets to my semiconductor coverage over on the new Newsletter. It’s only $5 a month, but I’ve noticed a lack of easy-to-read coverage in the area.

Semiconductor Things

This Newsletter was built to solve the pain point of getting the latest news on AI chips, semiconductors, and datacenter innovation.

Nvidia has 80% of the Market

Gaudi-3 will first be available to OEMs in the second quarter of 2024 and will be widely available in the third quarter. Again, Nvidia holds an estimated 80% of the AI chip market with its graphics processors, known as GPUs, which have been the high-end chip of choice for AI builders over the past year. I expect AMD to do fairly well with its offering, which doesn’t leave a whole lot of wiggle room for Intel.

With the first quarter of 2024 over, analysts and research firms are out with their reports on the state of the semiconductor supply chain. KeyBanc's latest investment report covers the state of the AI market for AMD, Nvidia, Microsoft, and others. After running channel and supply-chain checks, KeyBanc believes that Nvidia's premier AI product, the GB200 Grace Blackwell Superchip, is seeing strong industry interest and could generate anywhere between $90 billion and $140 billion in revenue. Frankly, wider top-tier adoption of Blackwell could also trickle down positively for AMD.

Habana has since published a whitepaper for Gaudi 3.

Read the Whitepaper

Google Axion

Even as Nvidia matures and AMD and Intel try to keep up, we can expect the Big Tech hyperscalers to spend considerably more money on making their own custom chips for their own purposes.

Google’s custom Arm-based CPU to support its AI work in data centers is a great example; the same announcement also introduced a more powerful version of its Tensor Processing Unit (TPU) AI chips. Google’s new Arm-based CPU, dubbed Axion, is fairly interesting.

In 2024, the dynamism in the AI semiconductor space is more exciting than the generative AI application releases themselves. So if you are really passionate about this stuff, there’s now a lot to follow:

Major trends in AI, Semi and Datacenter Evolution

  • Language models (large, small, open-source, multi-modal)
  • Semiconductor and AI chip preparation
  • Datacenter expansion and fab diversification (“fab” is industry shorthand for a semiconductor fabrication plant, the factory where chips are made)
  • New kinds of scaled AI supercomputers and Quantum hybrid supercomputers
  • Custom AI chips and accelerators
  • AI and Semiconductor startups

These make up a major focus of my work covering emerging tech.

Axion is really about efficiency and cutting costs, though. Google is trying to make cloud computing more affordable with a custom-built Arm-based server chip, following similar efforts at rivals Alibaba, Amazon, and Microsoft. Each Big Tech hyperscaler has its own use cases; for instance, Google plans to run YouTube ad workloads on the Axion Arm chips once they’re available, and customers such as Snap are interested.

According to the reports, the Axion chips are already powering YouTube ads, the Google Earth Engine, and other Google services.

Intel’s Gaudi 3 might pick up some customers when demand for H100s and Blackwell outstrips supply. It’s good that there are more alternatives in 2024, as it benefits the entire ecosystem, even though the hyperscalers got a head start with their huge 2023 orders of H100s.

Google’s own future might depend on its ability to grow Google Cloud, which is growing faster than its advertising business and now represents almost 11% of company revenue. Google Axion should definitely help.

“Axion is built on open foundations but customers using Arm anywhere can easily adopt Axion without re-architecting or re-writing their apps.”

Google says customers will be able to use its Axion CPU in cloud services like Google Compute Engine, Google Kubernetes Engine, Dataproc, Dataflow, Cloud Batch, and more. It’s a half-decent pitch, though a lot of prospective customers simply won’t have heard about it. Optimistically, Google holds only about 8-10% of the cloud infrastructure market, and AWS and Azure in particular are becoming very competitive.

Reports suggest the Axion Arm-based CPU will offer 30 percent better performance than “general-purpose Arm chips” and 50 percent more than Intel’s existing processors. I’m not sure exactly where Reuters got those figures. Axion is built on the standard Armv9 architecture and instruction set. As a bet for Google Cloud, Axion on its own isn’t very persuasive.

Intel’s market cap is down to $158 billion, which suggests the market doesn’t trust that Intel’s initiatives will work out. The Gaudi 3 is the third iteration of a processor line Intel obtained through its $2 billion acquisition of the startup Habana Labs in 2019; it isn’t even based on Intel’s own in-house technology.

Intel says eight Gaudi 3 chips can be installed in a single server. Intel’s claim that they outperform Nvidia’s H100s is a little suspect. According to the company, each chip includes 21 Ethernet networking links that it uses to exchange data with the neighboring Gaudi 3 units. There are also three more networking links, for a total of 24 aboard each processor, that allow it to interact with chips outside its host server.
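The 21/3 link split implies a neat all-to-all topology inside the server, which a few lines of arithmetic make explicit. Assumptions not in the article: that the 21 intra-server links are spread evenly across the 7 peer chips, and that each link runs at 200 Gb/s (the Ethernet speed Intel has cited for Gaudi 3's RoCE ports).

```python
# Sketch of the wiring implied by the article's link counts.
# Assumptions (not from the article): even link spread across peers,
# and 200 Gb/s per link as cited by Intel for Gaudi 3 RoCE ports.
CHIPS_PER_SERVER = 8
LINKS_PER_CHIP = 24
SCALE_OUT_LINKS = 3    # links per chip reserved for outside the server
GBPS_PER_LINK = 200

intra_links = LINKS_PER_CHIP - SCALE_OUT_LINKS  # 21, matching the article
peers = CHIPS_PER_SERVER - 1                    # 7 neighboring chips
links_per_peer = intra_links // peers           # even split: 3 links per peer

# Aggregate scale-out bandwidth leaving one 8-chip server:
scale_out_gbps = CHIPS_PER_SERVER * SCALE_OUT_LINKS * GBPS_PER_LINK

print(links_per_peer, scale_out_gbps)  # → 3 4800
```

In other words, 21 links divide exactly into 3 links to each of the 7 peers, and the remaining 3 links per chip would give an 8-chip server 4.8 Tb/s of aggregate scale-out bandwidth under these assumptions.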

But it doesn’t end there; semiconductor innovation is fast and frantic. Against the backdrop of Intel Vision, rival AMD announced (April 9th) two new chip lineups. Both join the company’s existing Versal product portfolio, which it obtained through its $50 billion purchase of Xilinx in 2022. The M&A, new startups, and consolidation in the semiconductor space have become really interesting to follow.

The hardware and datacenter evolution is becoming more important for how AI scales and what things like GPT-5 will be able to actually do. As LLMs get more efficient and synthetic data becomes more commonly used for training (since high quality real world data is now scarce) and bigger AI datacenters emerge, AI could find itself becoming more central to the world in which we live.

AI’s Exponential Acceleration

  • LLMs and SLMs are becoming more efficient
  • New kinds of Synthetic data will improve training
  • Bigger AI datacenters with better AI chips will allow scale
  • Cost of compute will get much cheaper
  • Models will learn more advanced reasoning and sequencing of steps, allowing for more task automation

The semiconductor fab and datacenter world needs to accelerate and diversify globally to keep up with advances in generative AI. Our civilization won’t really be generative-AI native until this happens, and it will all take a number of years yet.

Richard G.

Head of Engineering and head of business Unit at Asian Corporation

6 months ago

The race for super memory will be interesting. Will Micron catch up to Hynix?

Veliko Atanasov (CA)SA

Accounting | Finance | Automation | Data Science

7 months ago

Don't count out Intel just yet - they are investing massively in building up their own fabs and the US is glad to help them with this. The other chip players will continue to rely on outsourced fabs like TSMC which may create supply chain issues depending on what China decides to do with the Taiwan and US chip regulations/agreements. Intel is a long-term investment.

Brian McMorris

President at Futura Automation, LLC

7 months ago

part 3: What an empowered Intel w cutting edge GPU capabilities means for the semicon / chip industry is lower prices for AI. NVDA has a practical monopoly at this time. That is why it has sky-high 60% GM on its advanced chips, even after paying TSMC for the fabrication and packaging of the chips on very expensive state-of-the-art machines. Intel Fab Div is already pursuing partnerships with some of the key potential NVDA customers who build AI server farms: Amazon, Microsoft, Apple, Google. All those companies want to develop their own proprietary GPU designs and have Intel or TSMC fab the chips (the only two who can really do cutting edge fab). As prices and margins come down so will NVDA's stock price and revenue while Intel will move up the ladder on par with NVDA in market cap. AMD will also be a factor in the GPU marketplace, but without its own fab capability which it gave up years ago.

Brian McMorris

President at Futura Automation, LLC

7 months ago

part 2: Because it does not have to invest in and amortize fabrication, NVDA has high margins compared to Intel. But now Intel has reorganized to look like TSMC on one hand and NVDA on the other. If the two divisions, design and fab, are allowed to operate independently, chances are good they can catch NVDA on GPUs as they pay less attention to legacy CPU design. Intel has a deep development bench and no one should look past their capabilities if unleashed. At the same time, Intel is all-American with several fabs in Chandler, Hillsboro and soon in Ohio. If Taiwan is invaded by China, then we can assume China will extend its new anti-western policy towards semicon to TSMC and that source of fabrication will disappear for NVDA. As Intel catches up with TSMC on line spacing (EUV) and expands its GPU design team, they will provide western economies an option to NVDA. It will not be surprising to see NVDA turn to Intel to build its chips even while Intel's design division competes with NVDA. And this is the reason the US government has made the investment it did in the chip industry.

Brian McMorris

President at Futura Automation, LLC

7 months ago

Part 1: Intel has been developing its AI chips for some time now, obviously. You don't just do this on a spur of the moment. They are splitting the company into a design division and a fab division (Foundry) specifically so that they are more responsive to market forces. Fabs are REALLY expensive, about $20B each, including EUV photo lith. They are a capital sink, like building cars are and they take as long to build, 5 years or more (for Intel's two new fabs in Chandler, AZ). Meanwhile, chip design is much more fluid, fast moving and responsive. NVDA has had the advantage of having no overhead for fabs since it contracts all its production, primarily to Taiwan Semi-TSMC (NVDA is fabless, which is also its Achilles heel). They have optimized their design for the latest fab capabilities at TSMC (line spacing) a company which adopted EUV earlier than Intel due to some balky decision making at Intel which has since been corrected.

