#StridingTowardsTheIntelligentWorld - The CPU-Centric Architecture Is Evolving into a Data-Centric Composable Architecture

The recent boom in AI and real-time big data analytics applications has displaced the convention of centering computing power on a CPU and popularized a heterogeneous computing method across CPUs, GPUs, NPUs, and DPUs. This shift has placed higher requirements on memory capacity and bandwidth, which traditional CPU-centric architectures cannot meet.

Trends

The CPU-centric server architecture is evolving into a data-centric composable architecture

CPU-centered computing alone can no longer serve increasingly diverse applications that demand fast, real-time data processing, so a heterogeneous computing approach, represented by GPUs, is emerging.

New computing hardware boosts the hot-data processing efficiency of I/O-intensive applications, but it also adds memory access pressure.

The capacity and bandwidth of local memory can no longer meet data processing requirements.

In 2019, Intel launched the open interconnect standard Compute Express Link (CXL). The memory-semantic bus that runs over CXL enables fast access to external memory and memory capacity expansion, making it possible to decouple memory from CPUs and allowing external high-speed storage devices and heterogeneous computing resources to form a memory pool.
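
To make this concrete for software, CXL-attached memory expanders typically appear to a Linux host as a CPU-less (memory-only) NUMA node, so applications can place data on them with standard NUMA APIs. The following is a minimal sketch, not drawn from the report itself: the node ID is an assumption for illustration and must be checked on the actual system (for example with numactl --hardware).

```c
/* Minimal sketch: placing a buffer on CXL-attached memory.
 * Assumes the CXL memory expander is exposed as a CPU-less NUMA node
 * (node 1 here -- an assumption; adjust for your system) and that
 * libnuma is installed. Build with: gcc cxl_alloc.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CXL_NODE 1              /* assumed node ID of the CXL expander */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return EXIT_FAILURE;
    }

    size_t size = 1UL << 30;    /* 1 GiB working buffer */

    /* Allocate physical pages on the CXL-backed NUMA node. */
    void *buf = numa_alloc_onnode(size, CXL_NODE);
    if (buf == NULL) {
        fprintf(stderr, "allocation on node %d failed\n", CXL_NODE);
        return EXIT_FAILURE;
    }

    /* Touch the pages so they are actually backed by the expander. */
    memset(buf, 0, size);
    printf("1 GiB placed on NUMA node %d (CXL memory)\n", CXL_NODE);

    numa_free(buf, size);
    return EXIT_SUCCESS;
}
```

The same mechanism is what allows pooled CXL memory to be handed to different hosts or accelerators on demand without changing application code.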

This decoupling extends to the CPU-centric server architecture, which is evolving into a data-centric composable architecture. Computing, memory, and storage resources from different computing units can be combined on demand, and heterogeneous computing units can directly access memory and storage resources over the high-speed bus.

The new server architecture redefines the positioning of storage, which will no longer be limited to managing disks but will also incorporate in-memory storage in the future.

At Flash Memory Summit 2023, MemVerge, Samsung, XConn, and H3 released 2 TB CXL memory pooling systems suited for AI, and the South Korean company Panmnesia demonstrated its 6 TB CXL memory pooling system.

CPU-centric server architecture evolving into data-centric composable architecture

The CPU-centric storage architecture is evolving into a data-centric composable architecture

In the future, applications such as AI and big data will require higher performance and lower latency, while CPU performance growth is expected to slow. As the composable server architecture develops, the storage architecture will also evolve into a data-centric composable architecture that greatly improves the performance of storage systems. The storage system's processors (CPUs and DPUs), memory pools, flash pools, and capacity disk pools will be interconnected through new data buses. In this way, data entering the storage system can be stored directly in memory or flash, avoiding slow data access caused by CPU bottlenecks.
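
The sketch below illustrates the kind of placement decision such a data-centric system makes when it routes data across its pools. The pool names, thresholds, and routing rule are purely illustrative assumptions, not any vendor's actual implementation; the point is only that placement is decided by data temperature rather than funneled through the CPU's memory hierarchy.

```c
/* Illustrative sketch only: a hot/warm/cold placement policy for a
 * data-centric storage system with a memory pool, a flash pool, and a
 * capacity disk pool. Thresholds and pool names are assumptions made
 * for this example, not a real product's logic.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { MEMORY_POOL, FLASH_POOL, CAPACITY_POOL } pool_t;

/* Decide where to place a data block based on its access frequency.
 * Hot data goes to the memory pool, warm data to flash, and cold data
 * to capacity disks. */
static pool_t place_block(uint32_t accesses_per_hour)
{
    if (accesses_per_hour >= 1000)      /* hot: latency-critical */
        return MEMORY_POOL;
    if (accesses_per_hour >= 10)        /* warm: frequently read */
        return FLASH_POOL;
    return CAPACITY_POOL;               /* cold: archival */
}

int main(void)
{
    static const char *names[] = { "memory pool", "flash pool", "capacity pool" };
    uint32_t samples[] = { 5000, 120, 2 };

    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("%u accesses/hour -> %s\n",
               samples[i], names[place_block(samples[i])]);

    return 0;
}
```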

Suggestions

Keep pace with the evolution of server and storage architectures, make timely adjustments, and seek opportunities in new storage technologies

Data-centric architectures will be needed to cope with the sharp increase in data processing requirements. Enterprises should keep pace with the evolution of data center hardware, make timely adjustments, and build optimal server and storage architectures to lay a solid data foundation for service development.

To read more articles and download the report, please visit https://www.huawei.com/en/giv/industries/data-storage.
