Will 100GbE Dominate Thru 2024?

Given that the new server processors from AMD (Genoa) and Intel (Sapphire Rapids) are hitting the market with support for PCIe Generation 5, I thought it was time to find out what data center architects are thinking. With PCIe Gen5, a typical 16-lane server I/O slot can now support 512 Gbps, making 400 GbE to the server viable. So, I surveyed the LinkedIn crowd last weekend with the above question and four possible answers. While 886 people saw the survey, only 12 felt qualified enough to share their thoughts.
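As a quick sanity check on that 512 Gbps figure, here is my own back-of-the-envelope math (a sketch, not part of the survey itself):

```python
# Back-of-the-envelope PCIe Gen5 x16 bandwidth check (illustrative only).
GT_PER_S = 32            # PCIe Gen5 signaling rate per lane, in GT/s
LANES = 16               # a typical x16 server I/O slot

raw_gbps = GT_PER_S * LANES          # one transfer moves one bit per lane
print(f"Raw: {raw_gbps} Gbps")       # 512 Gbps

# Gen5 uses 128b/130b encoding, so usable bandwidth is slightly lower:
usable_gbps = raw_gbps * 128 / 130
print(f"Usable: {usable_gbps:.0f} Gbps")  # ~504 Gbps, still plenty for 400 GbE
```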

Now, a survey of only 12 people is not very scientific. In essence, three people think 2x200 GbE will be important in 2024, and three more consider 1x400 GbE worthy next year. One person believes some blend of 25 GbE will remain a mainstay, while five think 2024 will be the year of 100 GbE. This may be more of a data center switch or rack-power decision than a server I/O one.

It used to be that an architect would put two 1U Top of Rack switches (call them "A" and "B") in each rack. The "A" switch would connect via higher-speed uplinks to the "A" core switch at the center of the network, and the "B" switch would connect to the "B" core switch. Port0 on the server's NIC would then go to the "A" switch and port1 to the "B" switch.

Last week we started receiving new 1U Top of Rack switches for testing, 100G/400GbE models that sell for over $60K, so putting two at the top of the rack becomes very costly. One particular switch tells us how this market is trending: the Cisco C9500X-28C8D. This configuration has two sets of 14 ports of 100 GbE (hence "28C", where "C" is the Roman numeral for 100) and eight ports of 400 GbE ("8D", where "D" is the Roman numeral for 500, close enough). I could see architects using a dual-port 100 GbE NIC, putting port0 into one of the left-side 100 GbE switch ports and port1 into a right-side port of the same switch. The first four 400 GbE ports then go to the "A" core switch, and the next four go to the "B" core switch. This would be a balanced configuration: 14x100 GbE per side from NICs, trunked up with 4x400 GbE. Sure, there would be some switch configuration work to do.
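To check that this split really is balanced, here is a small sketch of the per-side arithmetic (my own numbers; the port grouping is the hypothetical layout described above, not Cisco guidance):

```python
# Per-side capacity check for the C9500X-28C8D split described above.
downlink_gbps = 14 * 100   # 14 x 100 GbE server-facing ports per side
uplink_gbps = 4 * 400      # 4 x 400 GbE uplinks to one core switch

ratio = downlink_gbps / uplink_gbps
print(f"Downlink: {downlink_gbps} Gbps, Uplink: {uplink_gbps} Gbps")
print(f"Oversubscription: {ratio:.2f}:1")
# 1400 Gbps down vs 1600 Gbps up, i.e. 0.88:1, so the uplinks are never
# the bottleneck for this rack.
```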

Today a 2U server using a pair of the processors mentioned above, with all memory slots filled, a 2x100 GbE DPU, and a GPU, will draw around 1 kW. Put 14 of them in the rack, set aside 20 kW for the rack, and your power and HVAC teams will love you.
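As a rough sketch of the rack-level math (using the 1 kW per-server estimate above):

```python
# Rack power budget sketch using the per-server estimate above.
SERVER_KW = 1.0          # 2U dual-socket server with DPU + GPU (estimate)
SERVERS_PER_RACK = 14
RACK_BUDGET_KW = 20.0

it_load_kw = SERVER_KW * SERVERS_PER_RACK
headroom_kw = RACK_BUDGET_KW - it_load_kw
print(f"IT load: {it_load_kw} kW, headroom: {headroom_kw} kW")
# 14 kW of servers against a 20 kW budget leaves ~6 kW for the ToR
# switch, fans, and safety margin.
```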

What are your thoughts about this approach and the future of 100 GbE and 400 GbE SmartNICs?

Michael Segel

All things Data | Chief Problem Solver | Hired Gun

1 year ago

Yeah, the surveys on LI don't reach enough of an audience to make any meaningful predictions. But when you consider that you can now get 100 GbE switches at a reasonable price, and dual-port NIC cards are ~$1.5K-$2K apiece (YMMV), yeah, it's the price point that has hit the sweet spot. You're also at a speed where you can successfully implement a data fabric.
