The Growing Impact of ChatGPT on Optical Modules

ChatGPT first drove demand for high-compute servers and GPUs. So far this year, roughly 75% of that hardware's computing power has gone to training models and 25% to serving user access, because daily access volume is still only on the order of tens of millions. It is entirely possible that access climbs to hundreds of millions next year; in that case the split between training compute and access (inference) compute will reverse, and data center investment will rise accordingly.

According to historical investment data, for roughly every 10 billion US dollars of data center investment, about 6 billion US dollars goes to the building, power equipment, and air conditioning. Of the remaining 4 billion US dollars, roughly 16% goes to data center interconnection, 77% to servers, 4% to switches, and 3% to optical modules.
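As a quick sanity check, these percentages can be turned into dollar figures. The Python sketch below applies them to a hypothetical 10 billion US dollar build-out; the round total and the variable names are illustrative assumptions, not figures from any specific operator.

```python
# Rough split of a hypothetical $10B data center build-out, using the
# ratios quoted above. Purely illustrative, not vendor or operator data.

TOTAL_INVESTMENT = 10e9   # total data center investment, USD (assumed)
FACILITY_SHARE = 0.6      # building, power equipment, air conditioning

# Split of the remaining equipment budget, as quoted in the text.
EQUIPMENT_SHARES = {
    "data center interconnection": 0.16,
    "servers": 0.77,
    "switches": 0.04,
    "optical modules": 0.03,
}

facility = TOTAL_INVESTMENT * FACILITY_SHARE
equipment_budget = TOTAL_INVESTMENT - facility   # $4B in this example

print(f"facility (plant/power/cooling): ${facility / 1e9:.2f}B")
for item, share in EQUIPMENT_SHARES.items():
    print(f"{item}: ${equipment_budget * share / 1e9:.2f}B")
```

Under these assumptions, optical modules account for about 0.12 billion US dollars of every 10 billion invested.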

For convenience, before running the numbers we need to spell out the data center architecture and the computing assumptions.

  1. The current data center architecture is a leaf-spine structure. From the lowest tier to the highest, it runs: server → top-of-rack switch → leaf switch → spine switch.
  2. Under this structure there are three tiers of data exchange. Note that the closer a link is to the server, the less traffic it carries, so the optical modules on the server side are all short-reach (under 100m) and low-rate (under 100G).

High-speed 400G and 800G optical modules are not needed near the server: traffic is aggregated as it moves up the hierarchy, so if every branch link already ran at those rates, the switches doing the aggregation would need at least the same bandwidth again, which makes no sense. The highest rates therefore belong at the upper tiers; the sketch below illustrates the point.
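To make the aggregation concrete, the sketch below adds up the nominal bandwidth each tier has to carry, using the per-tier port counts listed in the next section. It assumes every port runs at line rate, which real deployments do not; the figures are purely illustrative.

```python
# Back-of-the-envelope aggregate bandwidth per tier, assuming every port
# runs at line rate. Port counts and rates follow the example
# configuration in the next section; they are assumptions, not a
# universal reference design.

def tier_capacity_tbps(ports: int, rate_gbps: int) -> float:
    """Nominal capacity of `ports` ports at `rate_gbps` each, in Tb/s."""
    return ports * rate_gbps / 1000

server_links = tier_capacity_tbps(24 * 4, 100)  # 96 x 100G server uplinks per rack
tor_uplinks  = tier_capacity_tbps(16, 400)      # ToR -> leaf
leaf_uplinks = tier_capacity_tbps(8, 800)       # leaf -> spine

print(f"server-facing capacity per ToR: {server_links:.1f} Tb/s")
print(f"ToR uplink capacity:            {tor_uplinks:.1f} Tb/s  "
      f"({server_links / tor_uplinks:.1f}:1 oversubscription)")
print(f"leaf uplink capacity:           {leaf_uplinks:.1f} Tb/s")
```

Even with 50G/100G links at the server edge, the ToR in this example already runs at roughly 1.5:1 oversubscription toward the leaf layer; fitting 400G modules on every server link would multiply that load at every tier above with no benefit.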

What kind of optical modules are needed?

  1. Server to top-of-rack (ToR) switch: 24 servers per cabinet, each with 4 uplink interfaces, using 50G or 100G 100m optical modules.
  2. ToR switch to leaf switch: the ToR uplink-to-downlink port ratio is 1:6, i.e. 96 downlink ports and 16 uplink ports; the uplinks use 400G 100m optical modules.
  3. Leaf switch to spine switch: the leaf uplink-to-downlink port ratio is 1:6, i.e. 48 downlink ports and 8 uplink ports, using 400G/800G 500m optical modules.
  4. Spine switch: 64 downlink ports, using 400G/800G 500m/2km optical modules. (A rough module count under these assumptions is sketched below.)
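The port counts above can be turned into a rough optical-module estimate. The sketch below counts two modules per link (one at each end) for a single example rack and its share of the upstream fabric; the rack size, fan-out ratios, and the assumption that every port is populated with an optical module are simplifications for illustration, and short in-rack links are often cabled with DAC/AOC instead.

```python
# Rough optical module count for one rack plus its share of the upstream
# fabric, counting two modules per link (one at each end). All port
# counts come from the example configuration above and are assumptions.

SERVERS_PER_RACK = 24
NICS_PER_SERVER = 4          # 50G/100G server uplinks
TOR_UPLINKS = 16             # 400G ToR -> leaf links
LEAF_DOWN, LEAF_UP = 48, 8   # leaf downlink / uplink ports
MODULES_PER_LINK = 2

server_tier = SERVERS_PER_RACK * NICS_PER_SERVER * MODULES_PER_LINK
tor_tier = TOR_UPLINKS * MODULES_PER_LINK

# A leaf switch serves LEAF_DOWN / TOR_UPLINKS = 3 racks, so attribute a
# third of its uplink modules to this rack.
racks_per_leaf = LEAF_DOWN // TOR_UPLINKS
leaf_tier = LEAF_UP * MODULES_PER_LINK / racks_per_leaf

total = server_tier + tor_tier + leaf_tier
print(f"50G/100G modules per rack:       {server_tier}")
print(f"400G ToR-leaf modules per rack:  {tor_tier}")
print(f"400G/800G leaf-spine share:      {leaf_tier:.1f}")
print(f"total per rack:                  {total:.1f}")
print(f"modules per server:              {total / SERVERS_PER_RACK:.1f}")
```

Under these assumptions each server accounts for roughly ten optical ports, which is consistent with the module-to-server ratio noted below.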

Each company's data center architecture is different, so without the actual design drawings it is impossible to estimate optical module demand precisely. Typically, however, the number of optical modules is several times to an order of magnitude greater than the number of servers.

Tanlink has high-precision COB equipment and industry-leading COB manufacturing capability for 10G/25G/40G/100G/200G/400G AOC and DAC products. Its short-reach 100m-10km modules deliver commercial-grade, error-free transmission across the full operating temperature range and can be mass-produced on short lead times.


For more info please contact [email protected].
