Will 100GbE Dominate Thru 2024?

Given that the new server processors from AMD (Genoa) and Intel (Sapphire Rapids) are hitting the market with support for PCIe Gen5, I thought it was time to find out what data center architects are thinking. With PCIe Gen5, a typical 16-lane server I/O slot can now support 512 Gbps, making 400 GbE to the server viable. So, last weekend I surveyed the LinkedIn crowd with the above-pictured question and four answers. While 886 people saw the survey, only 12 felt qualified enough to share their thoughts.
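For a rough sense of why Gen5 is the tipping point, here is a quick back-of-the-envelope sketch in Python. It is my own illustration, ignoring PCIe protocol overhead other than 128b/130b line coding:

```python
# Back-of-the-envelope PCIe slot bandwidth vs. Ethernet port speed.
# Assumption: 128b/130b line coding for Gen4/Gen5; packet/protocol overhead ignored.

GT_PER_LANE = {"Gen4": 16, "Gen5": 32}   # giga-transfers per second per lane
ENCODING = 128 / 130                      # 128b/130b line-coding efficiency
LANES = 16                                # a typical x16 server I/O slot

for gen, gt in GT_PER_LANE.items():
    raw = gt * LANES                      # raw signalling rate, Gbps
    usable = raw * ENCODING               # after line coding, Gbps
    print(f"{gen} x{LANES}: {raw:.0f} Gbps raw, ~{usable:.0f} Gbps usable")

# Gen4 x16: 256 Gbps raw  -> a 400 GbE port would be starved
# Gen5 x16: 512 Gbps raw, ~504 Gbps usable -> enough for one 400 GbE port
```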

Now, a survey of only 12 people is not very scientific. In essence, three people think 2x200 GbE will be important in 2024, and three more consider 1x400 GbE worthy next year. One person believes some blend of 25 GbE will remain a mainstay, while five think 2024 will be the year of 100 GbE. This may be more of a data center switch or rack-power decision than a server I/O one.

It used to be that an architect would put two 1U Top of Rack (ToR) switches (let's call them "A" and "B") in each rack. The "A" switch would connect to the "A" Core switch at the center of the network via higher-speed uplinks, and the "B" switch would connect to the "B" Core switch. Then port0 on the server's NIC would go to the "A" switch and port1 to the "B" switch.

Last week we started receiving new 1U ToR switches for testing, 100G/400GbE models, which sell for over $60K, so putting two at the top of every rack becomes very costly. One particular switch, the Cisco C9500X-28C8D, tells us how this market is trending. This configuration has two sets of 14 ports of 100 GbE (hence 28C, where "C" is the Roman numeral for 100) and eight ports of 400 GbE (8D, where "D" in Roman numerals is 500, which is close). I could see architects using a dual-port 100 GbE NIC, putting port0 into one of the left-side 100 GbE switch ports and port1 into a right-side port of the same switch. Then the first four 400 GbE ports go to the "A" Core switch, and the next four go to the "B" Core switch. This would be a balanced configuration, with 14x100 GbE per side from NICs trunked up to 4x400 GbE of uplinks. Sure, there would be some switch configuration work to do.
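To sanity-check whether that split is actually balanced, here is a small arithmetic sketch. It uses only the port counts described above; nothing here is a Cisco specification beyond what is already stated:

```python
# Sanity check of the suggested split of a C9500X-28C8D-style ToR switch:
# 14x100 GbE server-facing ports and 4x400 GbE uplinks per "side".

servers_per_side = 14
downlink_gbps = servers_per_side * 100   # server-facing capacity per side
uplink_gbps = 4 * 400                    # uplinks to the A or B core per side

ratio = downlink_gbps / uplink_gbps
print(f"Downlink: {downlink_gbps} Gbps, Uplink: {uplink_gbps} Gbps, "
      f"oversubscription {ratio:.2f}:1")
# -> 1400 Gbps down vs. 1600 Gbps up, roughly 0.88:1, i.e. slightly
#    under-subscribed, so the 4x400 GbE uplinks per side are not the bottleneck.
```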

Today a 2U server using a pair of the processors mentioned above, with all the memory slots filled, a 2x100 GbE DPU, and a GPU, will draw around 1 kW. Put 14 of them in the rack, set aside 20 kW for the rack, and your power and HVAC guys will love you.
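A quick budget sketch of that rack, using the per-server figure above. The per-switch draw is an assumption for illustration, not a vendor specification:

```python
# Rough rack power budget, assuming ~1 kW per 2U server as described above.

SERVER_KW = 1.0        # dual-socket 2U server, full memory, DPU, GPU (from the text)
TOR_SWITCH_KW = 0.65   # assumed draw for a 1U 100G/400G ToR switch (illustrative)
SERVERS = 14
SWITCHES = 1           # single shared ToR in the split-switch design above

load_kw = SERVERS * SERVER_KW + SWITCHES * TOR_SWITCH_KW
budget_kw = 20.0
print(f"Estimated load: {load_kw:.1f} kW, budget {budget_kw:.0f} kW, "
      f"headroom {budget_kw - load_kw:.1f} kW")
# -> roughly 14.7 kW of load against a 20 kW budget, leaving over 5 kW
#    of headroom for PDU losses, fans, and growth.
```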

What are your thoughts about this approach and the future of 100 GbE and 400 GbE SmartNICs?

Michael Segel

All things Data | Chief Problem Solver | Hired Gun

2 years ago

Yeah, the surveys on LI don't reach enough of an audience to make any meaningful predictions. But when you consider that you can now get 100 GbE switches at a reasonable price, and dual-port NIC cards are ~$1.5K-$2K apiece (YMMV), yeah, it's the price point that has hit the sweet spot. You're also at a speed where you can successfully implement a data fabric.
