Intel announces biggest architectural change in company's 40 years


【Lansheng Technology News】Intel unveiled a sweeping update to its processing architecture at its 2023 "AI Everywhere" event, reflected in its mobile Core Ultra processors and in the desktop Core Ultra processors due to be released in 2024. The new architecture combines traditional high-performance CPU cores with specialized cores for low-power tasks, graphics acceleration, and AI acceleration. The latest fifth-generation Xeon CPUs, announced at the same event, focus on server performance and add co-processor cores for cloud AI acceleration.


According to Intel's corporate vision, the future of AI processing is both in the cloud and at the edge. The company predicts that by 2028, 80% of PCs will be "AI PCs" equipped with AI co-processors.


Intel turns to the neural processing unit

Intel's AI co-processor, called a neural processing unit (NPU), is its latest big innovation. Intel believes that the NPU, combined with the other dedicated CPU cores, will improve overall performance while reducing power consumption and lowering total cost of ownership (TCO).


The diverse architecture of these devices combines multiple specialized cores assembled in a chiplet-based system. Chiplets enable higher yields by shrinking the silicon area of each individual die, and they allow the wafer process to be optimized separately for each chiplet. Like most AI accelerators, Intel's NPU relies heavily on multiply-accumulate (MAC) units. A MAC unit fuses a multiply and an add into a single operation, keeping the running sum in an accumulator register and reducing the need to move intermediate results between memory and registers.
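As a minimal sketch of the multiply-accumulate pattern described above, a dot product reduces to a chain of MAC operations. The function name and inputs here are illustrative only, not Intel's API:

```python
def dot_mac(a, b):
    """Dot product expressed as a chain of multiply-accumulate (MAC) steps.

    Hardware MAC units fuse the multiply and the add into one operation,
    keeping the running sum in an accumulator register rather than writing
    each partial result back to memory.
    """
    acc = 0.0  # accumulator register
    for x, w in zip(a, b):
        acc += x * w  # one fused multiply-accumulate per element pair
    return acc

print(dot_mac([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # → 32.0
```

Neural-network inference is dominated by exactly this pattern (matrix-vector products), which is why dedicated MAC arrays pay off.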


Intel released the Xeon and mobile processors in late 2023 and plans to launch desktop PC processors built on Intel 4 in 2024. Intel 4 is a 7nm-class process technology that Intel claims increases clock speeds by 20% at the same power consumption compared with the Intel 7 process (10nm). Core Ultra is Intel's first 7nm-class processor and the first process shrink for the Core line since 2019.


From Many Identical Cores to Targeted Dedicated Cores

Traditional cloud AI processing uses graphics processing units (GPUs) and tensor processing units (TPUs) for massively parallel processing and optimized matrix math, and Intel's mainstream CPUs have likewise long included integrated GPUs. In Intel's previous architecture, the main CPU cores handled every computing load regardless of its size. This forced low-load tasks to draw more power than necessary and to take CPU cycles away from high-load processes. It also left specialized, math-intensive work to the main CPU cores, which are not optimized for such operations.


Intel's Xeon server processors and its Core mobile and desktop CPUs have historically relied on raw speed and optimized software as workarounds. The new system, with multiple dedicated cores, is a radical departure from the "one size fits all" philosophy of simply increasing the number of identical cores.


Comprehensive solution to cloud AI with Gaudi3

In addition to the new NPU co-processor, Intel also announced the successor to the Gaudi2 deep learning AI accelerator. The Gaudi3 AI accelerator targets cloud computing, large-scale deep learning, and generative AI systems. Intel claims that Gaudi3 delivers a 4x increase in BF16 performance over Gaudi2.


The BF16 (Brain Floating Point) number format is used to improve floating-point performance in AI computing. It is a 16-bit variant of the IEEE 754 float32 format: BF16 keeps the 8 exponent bits of float32 but retains only 8 significand bits (7 stored mantissa bits plus the implicit leading bit) instead of float32's 24. For AI workloads, the speed gained from 16-bit math outweighs the precision lost to the shorter mantissa. Gaudi3 will also double network performance and provide 1.5 times the bandwidth of Gaudi2.
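Because BF16 shares float32's sign and exponent layout, a float32 value can be converted to BF16 simply by truncating to its top 16 bits. A minimal sketch (the helper names are illustrative, not any library's API):

```python
import struct

def float32_to_bf16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16: keep the top 16 bits
    (1 sign + 8 exponent + 7 stored mantissa bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_float32(bits: int) -> float:
    """Widen bfloat16 back to float32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", bits << 16))
    return x

# Pi survives with only 2-3 decimal digits of precision:
print(bf16_bits_to_float32(float32_to_bf16_bits(3.14159265)))  # → 3.140625
```

Note that the shared exponent width means BF16 covers the same dynamic range as float32, which is why deep-learning training tolerates it better than the older FP16 half-precision format.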


Diving into cloud AI, edge AI and large-system processing

With Xeon server CPUs, workstation/laptop CPUs, and Gaudi3 accelerators, Intel has expanded its AI portfolio to cover almost every key AI area. Gaudi3 will find its way into large-scale AI systems. Fifth-generation Xeons will serve in server farms, combining data processing and traditional server workloads with accelerated AI capability. And Core Ultra mobile and desktop CPUs will bring AI to individual users.


Lansheng Technology Limited is a spot stock distributor of many well-known brands. We have a price advantage through first-hand spot channels and provide technical support.

Our main brands: STMicroelectronics, Toshiba, Microchip, Vishay, Marvell, ON Semiconductor, AOS, DIODES, Murata, Samsung, Hyundai/Hynix, Xilinx, Micron, Infineon, Texas Instruments, ADI, Maxim Integrated, NXP, etc.

To learn more about our products, services, and capabilities, please visit our website at https://www.lanshengic.com

