Tackling Monster-Cool Avoids a Red Hot Grid
Ravi Seethapathy
Advisor, Smart Infrastructure; Corporate Director; International Speaker
This article was published in the June 2023 Newsletter of the Global Smart Energy Federation.
In my previous articles (August 2020, Nov/Dec 2022 and Jan 2023 GSEF Newsletters), I wrote about air-conditioning load management and temperature mapping of large cooling assets (using fiber sensors) to validate real-time “thermal headroom”. Rising ambient temperatures and extreme swings require the maneuverability to manage an asset’s real-time thermal capability and to re-assess its aging factors. Recently, I have been actively engaged in this area, offering solutions and mentoring such technologies. In this article, I will concentrate on why tackling “monster cool” is vital to avoiding a “red hot” grid.
Climate change and rising ambient temperatures are forcing all infrastructure assets (electric, water, gas, telecom, transport) to be suitably de-rated. This means existing assets should be managed to maintain adequate real-time thermal capability to meet demand in tandem with ambient temperature swings. Otherwise, a generic name-plate derating (say 10-15%) will likely strand trillions of dollars of existing asset value.
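To make the stranded-value point concrete, here is a minimal, illustrative sketch comparing a blanket name-plate derate against an ambient-adjusted rating. The 1%-per-degree coefficient, the 30 deg C reference ambient, and the 100 MVA rating are hypothetical values chosen for illustration, not taken from any standard or from my earlier work:

```python
# Illustrative only: a linear ambient-temperature derating model versus a
# fixed year-round name-plate derate. All constants are assumed.

NAMEPLATE_MVA = 100.0   # assumed asset rating at the reference ambient
REF_AMBIENT_C = 30.0    # assumed reference ambient temperature
DERATE_PER_DEG = 0.01   # assumed 1% capacity loss per deg C above reference

def realtime_capacity(ambient_c: float) -> float:
    """Capacity adjusted to the current ambient temperature."""
    loss = max(0.0, ambient_c - REF_AMBIENT_C) * DERATE_PER_DEG
    return NAMEPLATE_MVA * (1.0 - loss)

# A blanket 15% derate, applied regardless of actual ambient conditions.
static_derated = NAMEPLATE_MVA * 0.85

for ambient in (25.0, 33.0, 43.0):
    dynamic = realtime_capacity(ambient)
    stranded = dynamic - static_derated
    print(f"{ambient:4.1f} C: dynamic {dynamic:5.1f} MVA, "
          f"capacity stranded by blanket derate {stranded:+5.1f} MVA")
```

Under these assumptions, the blanket derate strands 15 MVA of usable capacity on a mild 25 deg C day and still leaves 2 MVA unused even at 43 deg C, which is the argument for real-time thermal capability over generic name-plate derating.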
Electricity plays a vital part in our everyday life. Its use is taken for granted with just a flick of a switch, and there is little public awareness of what happens behind the scenes to enable that power when the switch is flicked on. Cooling is a very large, and fast-growing, part of the load in many countries. Rising temperatures, urbanization and rising incomes all point to cooling comfort as an important priority for most people. The average heat load in homes (from a multitude of appliances and electronics) has been growing steadily. Even in colder countries where summer lasts just a few months, there is a rising trend towards air-conditioning in those months. The sharpest rise in cooling load is in Asia, Africa and the EU, where rising ambient temperatures are increasingly unbearable, with many cities over 43 deg C (109 deg F) during hot summer days and equally unbearable nights around 33 deg C (91 deg F).
But there is another hidden (and rising) cooling load that is far bigger, and we are its biggest consumers: large data centers, network cloud servers and (soon) AI server farms. The Energy Innovation Magazine (March 17, 2020) notes that cooling accounts for the greatest share of electricity use in data centers. Some large data centers require 100 MW of power capacity, enough to power 80,000 U.S. households (U.S. DOE 2020). Per an article by Masanet et al. (2020), this past decade alone has seen (a) IP traffic increase by 10x; (b) cloud storage by 25x; and (c) compute instances (a measure of hosted application load) by 6x. These trends are expected to continue as the world consumes more data. Some smaller countries with expanding data center markets are seeing rapid growth (Ireland at 3x since 2015, accounting for 14% of total electricity consumption in 2021; Denmark at 3x by 2025, expected to account for 7% of the country’s electricity use).
Per the IEA’s Data Center Report (Sept 2022), strong efficiency improvements have helped cut electricity consumption for cooling in older, more traditional data centers from 97.6 TWh (2015) to 50 TWh (2019), with a forecast drop to 33 TWh by 2025. On the other hand, hyperscale data centers have doubled their energy demand over the same period due to business growth and much higher server density per cabinet. Global electricity use by such data centers in 2021 was 320 TWh (1.3% of global electricity), and this excludes cryptocurrency (140 TWh in 2021). No report has yet been published on the influence and growth of AI, which could add to the above.
Per the Energy Innovation Magazine (March 17, 2020), three primary thrusts have helped “plateau” energy consumption in modern data centers: (a) energy-efficient servers and data storage; (b) greater server virtualization, enabling more applications on a single server; and (c) migration of compute instances to ultra-efficient large cloud and hyperscale data centers. Designing for higher server operating temperatures (18-27 deg C) has played a crucial part in lowering cooling requirements by 50%, but this has been offset by greater blade density per cabinet, leading to a higher net heat load. A typical hyperscale data center is about 100,000-200,000 sq ft with about 220 cabinets; however, Apple’s Mesa Data Center in Arizona, USA spans 1.3 million sq ft, while Google's Council Bluffs Data Center in Iowa, USA covers more than 2 million sq ft.
The IDTechEx (June 2023) report titled "Thermal Management for Data Centers 2023-2033" covers the adoption of liquid cooling technologies, including (a) direct-to-chip cooling; (b) immersion cooling; and (c) single-phase and two-phase coolants. Cold plate cooling places a cold plate directly on top of the heat sources, with the coolant absorbing the heat, while immersion cooling submerges the heat sources in the coolant, allowing direct contact and efficient heat dissipation. Liquid cooling is expensive, and managing it requires abundant, omnipresent and accurate real-time temperature measurement across the whole data center (see my earlier article on fiber temperature sensing). Any small error in accuracy, or delay in response, could be consequential to performance.
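As a toy illustration of why dense, accurate real-time sensing matters here, the sketch below scans one distributed fiber temperature trace for hot spots. The readings, positions, and the 45 deg C alarm threshold are all invented for illustration; a real distributed temperature sensing system would deliver thousands of such samples per fiber:

```python
# Illustrative hot-spot scan over a distributed fiber temperature trace.
# All values (threshold, positions, temperatures) are hypothetical.

ALARM_C = 45.0  # assumed alarm threshold for a liquid-cooled cabinet row

def hot_spots(trace: list[tuple[float, float]],
              limit_c: float) -> list[tuple[float, float]]:
    """Return the (position_m, temp_c) samples exceeding the limit."""
    return [(pos, temp) for pos, temp in trace if temp > limit_c]

# One fiber trace along a cabinet row: (position in metres, temperature in C).
trace = [(0.5, 31.2), (1.0, 33.8), (1.5, 47.1), (2.0, 38.5), (2.5, 46.2)]

print(hot_spots(trace, ALARM_C))  # -> [(1.5, 47.1), (2.5, 46.2)]
```

The point of the sketch is spatial resolution: a single averaged sensor over this row would read about 39 deg C and miss both exceedances, whereas per-position sampling flags them immediately.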
A lot has improved in new hyperscale data center designs. Energy efficiency is measured as Power Usage Effectiveness (PUE), the ratio of total facility energy to IT equipment energy. A PUE of 1.0 means all of the energy is used for computing, while 2.0 means that for every watt of IT power, an additional watt (or 100%) is consumed to cool the IT equipment and distribute power to it. Google, Microsoft, Amazon and Apple’s trailing twelve-month PUE figures across their sites average around 1.08 to 1.15 (i.e. cooling and distribution overhead is 8-15% of the IT load); however, the average PUE across all global data centers reported by Statista for 2022 is 1.55 (albeit down from 1.98 in 2011). This roughly 40% divergence in cooling energy between the best and the rest is a cause for concern. New centers (depending on location) may not necessarily be super-efficient, and such large data center PPA portfolios (at GW scale) often exceed the capacity of large generation companies to manage on their own.
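The PUE arithmetic above can be sketched directly; the 1.08 and 1.55 figures are the ones quoted in the text, and the kWh inputs are invented round numbers for illustration:

```python
# PUE = total facility energy / IT equipment energy. A PUE of 1.0 means
# every watt goes to computing; anything above 1.0 is cooling and
# power-distribution overhead.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness of a facility."""
    return total_facility_kwh / it_kwh

def overhead_fraction(pue_value: float) -> float:
    """Cooling + distribution overhead as a fraction of IT load."""
    return pue_value - 1.0

# A site drawing 1,080 kWh in total for 1,000 kWh of IT load runs at PUE 1.08.
print(round(pue(1080.0, 1000.0), 2))            # 1.08
print(round(overhead_fraction(1.08), 2))        # 0.08 -> best-in-class, 8% overhead
print(round(overhead_fraction(1.55), 2))        # 0.55 -> 2022 global average, 55% overhead
```

This is where the divergence in the text comes from: the global-average site spends roughly 47 percentage points more of its IT load on overhead than a best-in-class site at PUE 1.08.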
Today, in many smaller and emerging nations, utilities struggle to serve growing residential and retail cooling loads (human comfort) over their existing T&D wires. In larger or more developed nations, the issue is equitable distribution of cheap green energy across communities. In three Canadian landmark cases (late 2022), the provincial governments of British Columbia, Quebec and Manitoba halted new power connections for crypto mining and indicated they would stop selling cheap (green) hydroelectric power to such operations, citing emerging policy issues around cheap green-energy allocation and equitable use of the common T&D wires. An additional factor is that these centers create little local employment, yet seek a large share of cheap public power.
All of the above point to several “red alerts” in energy policy, the equitable distribution of cheap green energy, and people’s rights to their comforts. As cooling load grows rapidly (residential, commercial, industrial), public backlash against large data centers seeking to corner ever more cheap green power will reach a flashpoint. The reasons will be the following:
1. Cooling for human comfort and survival should be accorded the highest priority at the cheapest tariff
2. Data centers should innovate further to reduce their cooling loads to (say) a PUE of 1.03-1.05
3. The public good of a data center is not commensurate with the local jobs it creates (misdirected power allocation)
4. Rate-payer risk is high, as these large businesses could relocate
5. Large users must strive for net-zero on their own sites (no REC offsets)
As Asian, African and some EU countries witness 4-7% annual growth in residential cooling loads, the even higher growth rate in data centers could take center stage, driven by national policies on data privacy, data security and AI. Since such policies often force domestic co-location of data centers, national cooling loads could easily double. This unintended consequence will then compete and conflict with the cooling comfort of the citizenry.
Data center, crypto mining and AI-related energy use should be key priorities in formulating national energy policies and carbon-footprint targets. Offset credits merely mask the underlying structural imbalance. True net-zero for such assets is perhaps the only way.