Datacenters – Powerhouses of Digital Economy
Ramanathan B
Consulting for Digitalization Services | Industrial Automation and Control | Technical Sales | Predictive Maintenance | Operator Training Simulators | Industry 4.0|
If "Data is the new oil," then Datacenters are the refineries. Just as refineries process crude oil into valuable products like gasoline and plastics, Datacenters transform raw data into actionable insights and valuable information. And just as efficient refineries enhance the value derived from oil, well-designed Datacenters optimize data processing, storage, and management, fueling innovation and decision-making across industries.
Cloud computing has also revolutionized the way companies run businesses and how individuals access and utilize computing resources. Cloud service providers operate massive Datacenters where they host applications, platforms, and infrastructure for their customers. Extending the oil analogy, cloud computing is the distribution network: just as oil is distributed to fuel stations and industries, cloud services deliver processed data and applications to users across the globe.
Datacenters are the backbone for companies that rely on advancing technologies such as Artificial Intelligence, Machine Learning, and Big Data analytics. These technologies running in Datacenters help businesses thrive, but they also drive power demand upward. It is estimated that Datacenters will account for 4% of global power demand within a few years.
Two broad areas drive most of the electricity consumption in a Datacenter: computing power (roughly 40% of total consumption) and cooling systems (another 38% to 40%). These two energy-intensive components dominate the datacenter's power draw. Internal power conditioning systems consume a further 8% to 10%, network and communications equipment and storage systems use about 5% each, and lighting typically accounts for 1% to 2%.
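The breakdown above can be turned into a quick back-of-envelope calculator. This is only a sketch using the article's representative shares (taking midpoints of the quoted ranges); real facilities vary, and the 10 MW figure in the usage line is a hypothetical example, not from the article.

```python
# Illustrative Datacenter power breakdown, using the article's shares
# (midpoints taken where a range was given). Actual splits vary by facility.
BREAKDOWN = {
    "computing": 0.40,           # servers / IT compute (~40%)
    "cooling": 0.39,             # midpoint of the 38-40% range
    "power_conditioning": 0.09,  # midpoint of the 8-10% range
    "network": 0.05,
    "storage": 0.05,
    "lighting": 0.02,
}

def component_load_kw(total_facility_kw: float) -> dict:
    """Split a facility's total electrical draw across the components above."""
    return {name: total_facility_kw * share for name, share in BREAKDOWN.items()}

# Hypothetical 10 MW facility: cooling alone accounts for roughly 3,900 kW.
loads = component_load_kw(10_000)
print(round(loads["cooling"]))  # -> 3900
```

The striking point is that cooling is nearly as large a load as the computing it supports, which is why the rest of the article focuses on thermal management.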
More Power means More Heat
Just as a powerful race car engine generates more heat when it runs at high speeds, a Datacenter packed with high-performance servers produces more heat. A typical corporate Datacenter may house 50,000 to 80,000 servers, while hyperscale Datacenters contain even more. As computing speeds and component densities rise, so does the heat generated within integrated circuits. The primary sources of heat are servers, storage devices, and networking equipment, all of which consume electricity during operation and give off waste heat.
Modern GPUs designed for generative AI can consume up to 700 watts, with future chips projected to reach 1,200 watts. This results in greater power and heat generation per square meter. Typically, these chips are mounted on blades within racks—around eight chips per blade and ten blades per rack—leading to significantly higher power usage and heat output than traditional Datacenter designs from just a few years ago. By 2027, the average power density in Datacenters is expected to increase from 36 kW to 50 kW per server rack.
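The arithmetic behind those rack figures is worth making explicit. A minimal sketch using the article's numbers (8 chips per blade, 10 blades per rack) shows why accelerator racks outrun the 36-50 kW averages; note this counts accelerator power only, ignoring CPUs, fans, and other per-rack overhead.

```python
# Back-of-envelope rack power for AI accelerators, using the article's
# configuration: 8 chips per blade, 10 blades per rack.
CHIPS_PER_BLADE = 8
BLADES_PER_RACK = 10

def rack_power_kw(watts_per_chip: float) -> float:
    """Accelerator power per rack in kW (chips only; overhead excluded)."""
    return CHIPS_PER_BLADE * BLADES_PER_RACK * watts_per_chip / 1000

print(rack_power_kw(700))   # -> 56.0 kW with today's 700 W GPUs
print(rack_power_kw(1200))  # -> 96.0 kW with projected 1,200 W chips
```

Even before overhead, 80 chips at 700 W already lands above the 50 kW per-rack average projected for 2027, and the 1,200 W generation pushes toward the 100 kW mark where air cooling struggles.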
Thermal management – key to Performance
For electronic components, the general rule of thumb is that the speed of chemical reactions doubles for each 10 °C increase in temperature. Heat accelerates a component's performance degradation and shortens its life. Electronic components need to be maintained at stable temperatures; otherwise, accelerated chemical reactions can break down or alter their materials, causing physical degradation.
The amount of heat produced by IT equipment varies with factors such as hardware type, workload intensity, and ambient temperature. For instance, high-performance computing (HPC) servers and graphics processing units (GPUs) produce more heat than standard servers because they perform far more complex computations per unit of space.
As Datacenters become more energy-intensive and generate more heat, thermal management capability must improve in step; it truly holds the key to optimizing Datacenter efficiency amid capacity expansion.
Thermal Management Methods
Cooling methods for Datacenters can be classified into two main categories: air cooling and liquid cooling.
Air Cooling: This method relies on air conditioning and fans to dissipate heat from servers via convection. Over 80% of cooling systems use air cooling, primarily through Computer Room Air Conditioners (CRACs) or Computer Room Air Handlers (CRAHs), which circulate cool air and expel hot air from server racks. In a typical air-cooled Datacenter, cold air enters through raised floors and flows toward the front of the racks, absorbing heat as it passes over the servers. The heated air is then expelled at the back, cooled, and recirculated. Techniques like hot aisle/cold aisle containment enhance cooling efficiency by separating the cold air supply from the hot exhaust, thereby minimizing air mixing and reducing energy consumption.
Liquid Cooling: Although liquid cooling technology originated in the 1960s, it is gaining renewed interest as an ideal solution for hyperscale and AI-driven Datacenters.
Liquid cooling methods include direct-to-chip cooling, which attaches coolant plates to heat-generating components and captures 50-80% of IT heat through coolant circulation and heat exchange. In immersion cooling, servers are submerged in a dielectric fluid that absorbs heat; the fluid is then cooled via heat exchange, achieving over 95% heat removal. However, immersion cooling may carry higher operational costs due to the specialized fluids involved.
To compare the two techniques: the heat transfer coefficient of air under forced convection is around 100 W/(m²·K), while for water it is around 3,000 W/(m²·K), meaning water can transfer roughly 30 times more heat across the same surface. Water also has a much higher thermal capacity than air, so for the same heat load the required liquid flow rate is far lower than the equivalent airflow, saving a significant amount of energy in operation.
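Both claims in that comparison can be checked numerically. A sketch using the article's convection coefficients plus assumed textbook fluid properties (water cp ≈ 4186 J/(kg·K), density ≈ 998 kg/m³; air cp ≈ 1005 J/(kg·K), density ≈ 1.2 kg/m³):

```python
# Air vs. water for Datacenter cooling, per Newton's law of cooling
# Q = h * A * dT, with the article's representative coefficients.
H_AIR = 100.0     # W/(m^2*K), forced convection in air
H_WATER = 3000.0  # W/(m^2*K), forced convection in water

def convective_heat_w(h: float, area_m2: float, delta_t_c: float) -> float:
    """Heat transferred across a surface: Q = h * A * dT."""
    return h * area_m2 * delta_t_c

# Same surface area and temperature difference: water moves 30x the heat.
print(convective_heat_w(H_WATER, 1.0, 10) / convective_heat_w(H_AIR, 1.0, 10))  # -> 30.0

# For the same heat load and temperature rise, volumetric flow scales as
# 1/(density * cp). Assumed textbook properties near room temperature:
CP_AIR, RHO_AIR = 1005.0, 1.2        # J/(kg*K), kg/m^3
CP_WATER, RHO_WATER = 4186.0, 998.0
flow_ratio = (CP_WATER * RHO_WATER) / (CP_AIR * RHO_AIR)
print(round(flow_ratio))  # air needs ~3,500x the volume flow of water
```

The volume-flow ratio is what drives the operational savings: pumping a small stream of water costs far less energy than blowing thousands of times that volume of air.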
As rack densities climb beyond what traditional air cooling can manage (densities are projected to exceed 100 kW per rack), liquid cooling is emerging as a more efficient and sustainable solution for cooling high-performance servers.
Subsea to beat the heat
Against the backdrop of sustainability and heat-management challenges, some innovative relocations of Datacenters are being explored and taken into serious consideration.
Microsoft's Project Natick successfully deployed an underwater Datacenter off Scotland's Orkney Islands. The initiative uses the ocean's cool temperatures for thermal management, enhancing energy efficiency while minimizing freshwater usage. The project also incorporates renewable energy sources such as solar, wind, and tidal energy, and demonstrated a reduced server failure rate, paving the way for eco-friendly Datacenter designs.
Similarly, China is constructing the world's first commercial underwater Datacenter off the coast of Sanya, Hainan Island. Spanning 68,000 square meters and featuring 100 data storage units, the facility aims to leverage cold seawater for cooling, significantly reducing energy consumption and conserving freshwater, positioning it as a more energy-efficient alternative to traditional land-based Datacenters.
As the Marvel adage goes, "With great power comes great responsibility." For Datacenters destined to power today's digital economy and tomorrow's, we have a duty to operate and manage these assets sustainably and efficiently. Datacenters' demand for power is colossal, raising concerns about carbon footprints and energy sources; transitioning to renewables such as solar and wind is essential to ensure a sustainable power supply. From an efficiency perspective, effective heat management is crucial to drive optimal performance and thereby prolong the lifespan of Datacenter equipment.