Tech News: AMD Unveils MI300 Series to Challenge NVIDIA's H100 in AI

San Jose, California, December 6 (local time) — AMD marked a pivotal moment in its more than five decades of history with the highly anticipated launch of the MI300 series. Positioning itself as a major player in the thriving artificial intelligence (AI) accelerator market, AMD is set to go head-to-head with industry leader NVIDIA.

The MI300 series introduces two variants. The first, the MI300X, is a high-performance graphics processing unit (GPU) tailored for AI computing. The second, the MI300A, integrates graphics processing capabilities with a standard central processing unit (CPU) on a single package, targeting both AI and scientific-computing applications.

The MI300X is built on a cutting-edge 5nm process and packs 8 XCD (accelerator compute) dies, 304 CDNA 3 compute units, 8 HBM3 memory stacks, and an expanded 192GB memory capacity. With memory bandwidth reaching 5.3TB/s and an Infinity Fabric bus bandwidth of 896GB/s, the MI300X comprises more than 150 billion transistors. AMD CEO Lisa Su said its AI training performance is comparable to NVIDIA's H100, with superior performance in inference.
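As a quick sanity check on the figures quoted above: if the 192GB of capacity and 5.3TB/s of bandwidth are split evenly across the 8 HBM3 stacks (an illustrative assumption, not an AMD-published per-stack breakdown), each stack would contribute roughly the following:

```python
# Back-of-envelope check on the MI300X memory figures quoted above.
# Assumption: capacity and bandwidth are split evenly across the 8 HBM3
# stacks; AMD does not publish this per-stack breakdown here.
TOTAL_BANDWIDTH_TB_S = 5.3   # aggregate memory bandwidth, TB/s
TOTAL_MEMORY_GB = 192        # total HBM3 capacity, GB
NUM_HBM3_STACKS = 8

per_stack_bw_gb_s = TOTAL_BANDWIDTH_TB_S * 1000 / NUM_HBM3_STACKS
per_stack_capacity_gb = TOTAL_MEMORY_GB / NUM_HBM3_STACKS

print(f"per-stack bandwidth: {per_stack_bw_gb_s:.1f} GB/s")   # 662.5 GB/s
print(f"per-stack capacity:  {per_stack_capacity_gb:.0f} GB")  # 24 GB
```

A 24GB stack is consistent with the 192GB total being the headline capacity advantage over the H100's 80GB of HBM.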


The MI300X

Meanwhile, the MI300A integrates Zen 4 CPU cores and a CDNA 3 GPU on a single chip, using HBM3 memory and the 4th-generation Infinity Fabric high-speed bus for a simplified architecture and easier programming. With 228 CDNA 3 compute units, 24 Zen 4 x86 cores, 4 I/O dies, 8 HBM3 stacks, 128GB of unified memory, 5.3TB/s of peak bandwidth, and 256MB of Infinity Cache, the MI300A's 3.5D packaging outperforms the competition, achieving, by AMD's measurements, four times the performance of NVIDIA's H100 in OpenFOAM applications.
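Taking the two spec lists at face value, the MI300A trades memory capacity for its integrated CPU cores while keeping the same peak bandwidth. The short comparison below illustrates the implied per-stack capacity and how quickly each chip could sweep its entire memory, again assuming an even split across the 8 stacks (an illustrative assumption):

```python
# Comparing the memory figures quoted for the two MI300 variants.
# Assumption: even split across the 8 HBM3 stacks (illustrative only).
STACKS = 8
chips = {
    "MI300X": {"memory_gb": 192, "bandwidth_tb_s": 5.3},
    "MI300A": {"memory_gb": 128, "bandwidth_tb_s": 5.3},
}

results = {}
for name, spec in chips.items():
    gb_per_stack = spec["memory_gb"] / STACKS
    # Time to read the entire memory once at peak bandwidth, in seconds.
    sweep_s = spec["memory_gb"] / (spec["bandwidth_tb_s"] * 1000)
    results[name] = (gb_per_stack, sweep_s)
    print(f"{name}: {gb_per_stack:.0f} GB/stack, "
          f"full-memory sweep in ~{sweep_s * 1000:.0f} ms")
```

With less memory behind the same 5.3TB/s, the MI300A can traverse its full 128GB faster than the MI300X can traverse 192GB, which suits bandwidth-bound HPC codes like the OpenFOAM workload AMD cites.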


The MI300A

In the AI chip market, NVIDIA has been the de facto leader, with the H100 dominating sales and driving quarterly revenues of approximately $14.5 billion. Even so, AMD, along with other tech giants such as Intel, Amazon, Google, and Microsoft, is challenging NVIDIA's dominance. AMD says systems equipped with its new chips are on par with NVIDIA's top-tier H100-based systems, with faster response times when generating output from large language models.

On the same day, AMD introduced the ROCm 6 software platform, designed to work seamlessly with the MI300 series processors and delivering, per AMD, an 8x speedup on Llama 2 language-model computations. The platform will compete with NVIDIA's proprietary CUDA platform.

According to AMD's latest financial report, driven by growing demand for computing power, the company expects fourth-quarter AI chip revenue to reach $400 million and to surpass $2 billion next year.

Lisa Su offered an optimistic outlook for the AI chip industry, forecasting market growth to as much as $400 billion by 2027. That projection far exceeds other estimates, such as Gartner's August forecast of a $119 billion AI chip market by 2027, up from approximately $53 billion this year. AMD stands poised to make a substantial impact on the evolving landscape of AI technology.

Stay tuned for updates: https://www.smbom.com/

If you like this article, please give us a like!
