Driving sustainability in a brownfield data center
Authors: Brett McCoy, Jayanta Ghosh, Abhinav Gupta, Vrudhi Shah, Varsha Vinod, and Bhuvnesh Bhatia
Introduction
Data centers are gaining profound significance in the digital age. They are major consumers of resources and energy, and their demand continues to grow. It is estimated that by 2035, global information technology (IT) will consume 8.5% of all electricity, up from 5% in 2021, with data centers accounting for most of it.
This adds up to substantial emissions: data centers account for between 2.5% and 3.7% of global greenhouse gas (GHG) emissions. Data centers also consume a substantial amount of water for cooling, of which almost 57% is potable water. With the increasing adoption of compute-intensive technologies, such as machine learning and generative artificial intelligence (AI), data center capacity is set to increase tenfold from 2018 to 2025. As the demand for energy and resources continues to soar, the environmental impact of data center emissions becomes profound.
To overcome the challenges related to emissions, the data center industry has a critical role to play in promoting sustainable operations and directly contributing to the United Nations' sustainable development goals. The industry needs to work together and explore innovative solutions that help incorporate sustainability into its practices. Many data center operators have already publicly declared ambitious commitments toward net zero emissions and started adopting more sustainable approaches to digital business.
While there are multiple ways to approach sustainability, for an organization that is just starting out, a systematic approach can go a long way in building the right set of capabilities. The first step is a baseline assessment focused on understanding the organization's maturity and its existing capabilities to meet its environmental, social, and governance goals. This is followed by an analysis of the current organizational carbon footprint and the reduction potential that could be realized by applying data center optimization solutions. By defining strategies to implement GHG reductions in data centers and using the right incentives, the industry can build efficient data centers with sustainability embedded across all operations.
Why does driving sustainability in a data center matter?
Driving sustainable IT strategies and initiatives, while balancing organizational revenue and risk and delivering shareholder returns, presents various challenges. Organizations are looking for practical ways to minimize their environmental footprint without compromising day-to-day operations.
Data centers are among the biggest consumers of energy and water and play a major role in an organization's GHG emissions. They are one of the most energy-intensive building types, consuming 10 to 50 times the energy per unit of floor space of a typical commercial office building. While several components make up a data center, it is the hardware and supporting infrastructure equipment that contribute most to the data center's carbon output.
While there are several ways to address sustainability, this article focuses on solutions that optimize a brownfield data center and lead to faster realization of benefits. Alternative approaches, such as building a greenfield data center, migrating to a more efficient colocation facility, or migrating to a public cloud, can also reduce emissions, but they face barriers that span various parts of any organization. Cost and time are the major barriers, and other important factors, such as constraints on talent, technology, compliance, and security, must also be thoroughly considered.
Before exploring solutions and levers that address data center emissions, the first step is to scrutinize the metrics currently available to measure data center efficiency, and then to dive into solutions that can have a considerable impact on these metrics.
How is data center efficiency measured?
Data center efficiency can be measured by many metrics that evaluate factors such as energy consumption, operational effectiveness, and environmental impact. The most widely used metric is power usage effectiveness (PUE): the ratio of the total power used by the facility to the power consumed by the IT load. The closer the PUE is to 1, the more efficient the data center.
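Expressed as a minimal calculation (the figures are illustrative, not measurements from any particular facility):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by IT energy (1.0 is ideal)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative example: 1,500 MWh total facility draw against 1,000 MWh of IT load.
print(pue(1_500_000, 1_000_000))  # 1.5
```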
Why is PUE alone not a sufficient metric for evaluating data centers?
PUE is a straightforward metric that is easy to calculate and understand, which is one reason for its popularity. It has been used in data centers for many years to drive significant energy savings and operational improvements, and there is increasing pressure from stakeholders to improve efficiency by reducing PUE. However, a sole focus on the PUE number has shortcomings: organizations can numerically lower PUE by shifting some non-IT load, such as backup power, onto the IT load. PUE also fails to capture the efficiency gains from newer cooling mechanisms (e.g., direct liquid cooling) that benefit data center sustainability but do not influence PUE. Moreover, servers draw almost as much power when idle as when busy, a fact that is often ignored because the PUE calculation overlooks it.
Hence, additional metrics that measure efficiency at a more granular level are required. One such metric is total power usage effectiveness (TUE), which combines IT power usage effectiveness (ITUE) with PUE. ITUE measures effectiveness at the rack level by determining how much of the energy reaching the rack goes to compute versus supporting electrical components, such as server cooling fans, power supplies, and voltage regulators. This provides much better insight into what is happening inside the rack.
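A minimal sketch of how the two ratios combine, assuming the common formulation TUE = ITUE x PUE; the kilowatt-hour figures are illustrative only:

```python
def itue(it_equipment_kwh: float, compute_kwh: float) -> float:
    """IT power usage effectiveness: energy entering the IT equipment divided by the
    energy that actually reaches the compute components."""
    return it_equipment_kwh / compute_kwh

def tue(pue_value: float, itue_value: float) -> float:
    """Total power usage effectiveness: PUE multiplied by ITUE."""
    return pue_value * itue_value

# Illustrative rack: 100 kWh enters the IT equipment, 80 kWh reaches the compute
# components, in a facility running at a PUE of 1.5.
print(tue(1.5, itue(100, 80)))  # 1.875
```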
Data center efficiency can also be measured more holistically by analyzing water usage effectiveness (WUE) and carbon usage effectiveness (CUE). WUE shows how much water a facility uses: it divides annual site water usage in liters by IT equipment energy usage in kilowatt-hours. CUE is a natural extension of PUE that relates the data center's CO2 emissions to the energy consumption of its IT equipment. Used in combination with PUE, these metrics let data center operators quickly assess the sustainability of their data centers, compare results, and determine whether energy efficiency or sustainability improvements need to be made.
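Both ratios can be computed in the same way as PUE; the annual figures below are illustrative assumptions:

```python
def wue(annual_water_liters: float, it_energy_kwh: float) -> float:
    """Water usage effectiveness, in liters per kilowatt-hour of IT energy."""
    return annual_water_liters / it_energy_kwh

def cue(annual_co2_kg: float, it_energy_kwh: float) -> float:
    """Carbon usage effectiveness, in kilograms of CO2 per kilowatt-hour of IT energy."""
    return annual_co2_kg / it_energy_kwh

# Illustrative annual figures for a small site.
print(f"WUE = {wue(1_800_000, 1_000_000):.2f} L/kWh")   # 1.80
print(f"CUE = {cue(450_000, 1_000_000):.2f} kgCO2/kWh")  # 0.45
```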
Pillars driving end-to-end sustainability in a data center
As discussed previously, data centers account for a major share of the energy consumed by IT globally, and with increasing demand for high-performance computing, this energy requirement is bound to increase.
Hence, it becomes imperative to look for solutions that target the overall data center ecosystem and help improve overall efficiency. From Deloitte's point of view, this can be achieved by working across four pillars: infrastructure power optimization, cooling management, IT efficiency, and software efficiency.
The first pillar focuses on identifying how data center infrastructure can be optimized to consume less power without hindering necessary business operations. One of the most effective ways to pursue sustainability is through energy-efficient infrastructure and optimized power consumption.
The second pillar revolves around cooling management. Cooling is essential to keep data center equipment functional; higher-efficiency cooling techniques such as liquid cooling, along with newer technology retrofitted into the existing system, reduce the cooling burden.
The third pillar, IT efficiency, focuses on a data center’s critical load optimization through assessment of server utilization. In a typical data center, the server utilization is often less than 50%, and sometimes as low as 20%; whereas the electricity is consumed 24/7.
The fourth pillar underscores the importance of software efficiency in the data center. While software is not a direct consumer of energy, it directs and influences the operation of computer hardware, indirectly shaping the hardware's energy consumption and hence its carbon emissions. Innovative software solutions and AI-supported dashboards can have a significant impact on how data center operations are handled.
Pillar 1: Infrastructure power optimization
This section explains diverse ways to use renewable energy and highlights the Direct Current (DC) distribution system, which improves the inherent efficiency of a data center.
Renewable energy
The benchmark must be to ensure that data centers are, or have the capability to be, powered by 100% renewable energy. The top three methods for becoming energy resilient are Power Purchase Agreements (PPAs), self-generation at the data center (such as solar panels), and Renewable Energy Certificates (RECs).
PPAs are established contracts with energy providers who supply renewable energy to data centers at an agreed-upon price and period. PPAs can be physical, where the data center takes title to the physical energy on the grid, or virtual, which is a financial contract for the underlying value of the energy. Since PPAs involve renewable energy, they can help reduce carbon emissions and better support an organization's sustainability goals.
Organizations can also vertically integrate and generate renewable energy themselves using solar, thermal, or wind power. Self-generation can reduce dependence on the external grid and supports energy efficiency and emission reductions; however, it requires significant investment.
RECs are certificates enabling organizations to claim renewably generated electricity and report near-zero emissions. They are essentially investments in renewable energy and are unlikely to lead to additional renewable energy generation, as they merely help offset emissions. Hence, they should make up only an exceedingly small share of a data center's renewable energy procurement.
Each of these methods has its own benefits. RECs can be implemented quickly but deliver limited benefits in terms of emission control, whereas self-generation, though highly beneficial in controlling emissions, is costly to implement. PPAs deliver high benefits while keeping the cost of implementation under control.
Fact: In 2022 alone, Google data centers consumed 21.6 million megawatt-hours of renewable energy, contributed by PPAs, on-site generation, and local grids.
DC powered data centers
380V DC is becoming an appealing option for powering data centers due to its high energy efficiency. To implement 380V DC power distribution in a data center, only a single conversion from 480V grid-supplied Alternating Current (AC) to 380V DC is required to power native DC equipment. This eliminates the multiple conversions between AC and DC voltage that occur in a traditional power distribution system. The traditional system wastes energy at each step, rejecting it as heat that must then be cooled, wasting even more energy and increasing energy costs. By eliminating the efficiency loss at each AC-to-DC conversion, 380V DC saves up to 25% of overall power.
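A back-of-the-envelope sketch of why fewer conversion stages help; the per-stage efficiencies are assumed, representative values rather than vendor figures:

```python
from math import prod

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of a series of power-conversion stages (losses compound)."""
    return prod(stage_efficiencies)

# Traditional path: UPS rectifier and inverter, server PSU, on-board voltage regulation.
traditional = chain_efficiency([0.96, 0.96, 0.94, 0.92])
# 380V DC path: a single 480V AC to 380V DC rectification, then on-board regulation.
dc_380v = chain_efficiency([0.97, 0.92])

print(f"traditional chain: {traditional:.1%}, 380V DC chain: {dc_380v:.1%}")
```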
Data center owners and operators must look closely at their existing networking and power infrastructures to decide whether they should make the switch. Organizations such as the International Electrotechnical Commission are actively working on standardizing power architectures in data centers. With the development of data-heavy, next-generation technologies, it may not be long before 380V DC power becomes the new normal.
Pillar 2: Cooling management
As cooling is a major consumer of energy, it is important to infuse newer technologies into the existing cooling system and to adopt more sustainable cooling methods, such as liquid cooling, which reduce power usage by making the underlying process more efficient.
Active or smart cooling
Optimizing airflow and cooling in the data center is one of the best ways to save energy. In an average data center, cooling accounts for about 40% of total energy consumption. As such, there is a great deal of potential for energy savings from some relatively simple changes to how the data center is set up. Here are some ways to improve cooling performance in the data center.
Implementing a combination of these techniques will significantly enhance the efficiency and reliability of data center operations, leading to reduced energy consumption.
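As a rough, back-of-the-envelope illustration (assuming the 40% cooling share mentioned above and a hypothetical 25% reduction in cooling energy), the overall facility saving scales with the cooling share:

```python
def facility_saving(cooling_share: float, cooling_reduction: float) -> float:
    """Fraction of total facility energy saved when cooling energy is cut by the given amount."""
    return cooling_share * cooling_reduction

# If cooling is 40% of facility energy and airflow fixes cut cooling energy by 25%:
print(f"{facility_saving(0.40, 0.25):.0%} of total facility energy saved")  # 10%
```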
Liquid cooling
Using liquid to cool switches, servers, and other devices in the data center has gained popularity in recent years. Air cooling is now showing its limits in efficiently and sustainably cooling racks that contain newer-generation central processing units and graphics processing units. Long proven for mainframe and gaming applications, liquid cooling is expanding to protect rack-mounted servers in data centers worldwide. It offers multiple benefits, such as reducing the energy needed to cool the equipment and alleviating the need to remove heat from the air (as the equipment does not blow hot air).
Fact: Immersion cooling delivers up to a 90% reduction in cooling energy and a 50% cut in total data center energy usage compared with air cooling.
Of the three methods for implementing liquid cooling discussed here, starting with a heat exchanger helps gain capacity that meets near-term business needs and provides a rapid return on investment. Progressing to direct-to-chip liquid cooling removes about 70%-75% of the heat generated by the equipment in the rack, leaving only 25%-30% to be removed by air-cooling systems. Finally, adopting immersion cooling maximizes the thermal transfer properties of liquid and is the most energy-efficient form of liquid cooling on the market.
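A minimal arithmetic sketch of that split, using the 70%-75% capture range above and an assumed 30 kW rack load:

```python
def heat_split(rack_load_kw: float, liquid_capture_fraction: float):
    """Return (heat removed by the liquid loop, heat left for air cooling), in kW."""
    liquid_kw = rack_load_kw * liquid_capture_fraction
    return liquid_kw, rack_load_kw - liquid_kw

# Hypothetical 30 kW rack at the 70% and 75% capture rates cited above.
for capture in (0.70, 0.75):
    liquid, air = heat_split(30.0, capture)
    print(f"capture {capture:.0%}: liquid {liquid:.1f} kW, residual air {air:.1f} kW")
```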
Pillar 3: IT efficiency
Optimizing the design of data centers to maximize the efficient use of IT hardware is crucial in reducing energy consumption and minimizing their environmental impact. These optimizations can be achieved without introducing new equipment, for example, through rightsizing, which ensures optimal utilization of existing equipment. Furthermore, organizations can minimize maintenance costs by using standardized hardware within their data centers. Sustainability considerations for IT equipment should encompass the entire value chain, from sustainably sourcing the equipment to ensuring proper disposal methods are employed at the end of their life cycle.
Fact: A typical data center wastes large amounts of energy powering equipment doing little or no work. The average server operates at only 12-18% capacity!
Rightsizing instances
The efficient operation of a data center relies heavily on rightsizing Virtual Machines (VMs) after IT resources have been virtualized. Incorrectly sized VMs pose risks such as application performance problems, system outages, and unnecessary consumption of energy and resources. The energy wasted by running idle servers directly contributes to increased power consumption and higher emissions. To address these challenges, automating application resource management becomes essential: continuous analysis of application demand and resource supply helps ensure optimal utilization of data center resources while safeguarding application performance. Utilizing instances more efficiently reduces idle energy consumption, rack space and cooling requirements, and manufacturing emissions.
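As a hedged illustration of this kind of continuous analysis (not any specific vendor tool), a simple heuristic might compare observed demand with allocation and suggest a smaller size; the VmSample fields and the 60% target utilization are assumptions:

```python
from dataclasses import dataclass

@dataclass
class VmSample:
    name: str
    vcpus_allocated: int
    avg_cpu_utilization: float  # fraction of allocated vCPUs used over the window

def suggest_vcpus(vm: VmSample, target_utilization: float = 0.6) -> int:
    """Suggest a vCPU count that would bring average utilization close to the target."""
    demanded = vm.vcpus_allocated * vm.avg_cpu_utilization
    return max(1, round(demanded / target_utilization))

fleet = [VmSample("app-01", 16, 0.15), VmSample("db-01", 8, 0.55)]
for vm in fleet:
    print(f"{vm.name}: {vm.vcpus_allocated} -> {suggest_vcpus(vm)} vCPUs")
```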
Standardized hardware usage
The adoption of standardized hardware in data centers not only streamlines operations but also contributes significantly to sustainability by reducing energy consumption. Using identical hardware components, such as racks, cooling systems, and compute, storage, and networking equipment, throughout the data center simplifies maintenance and troubleshooting, leading to quicker problem resolution and reduced repair time. Standardized hardware also makes energy consumption easier to predict, enabling more efficient resource allocation based on demand forecasts. The broader goal of standardization is to enhance sustainability by simplifying power distribution within racks; for instance, standardized racks with individual servers powered from a low-voltage DC power shelf exemplify this approach. Standardization not only enables faster infrastructure deployment but also aligns with environmentally conscious practices, promoting a more sustainable data center.
Resource circularity
E-waste is a growing environmental concern, and use of recycled or remanufactured hardware is a proactive step in reducing the amount of e-waste produced. E-waste generated from data centers, including racks, computing equipment, monitors, circuits, and other electrical components, contributes to the overall carbon footprint. Utilizing products designed for reusability, recyclability, and re-engineerability is crucial, promoting closed-loop systems where waste from one process becomes a resource for another. Recycling or refurbishing data center hardware not only maximizes resource longevity but also minimizes e-waste, offering a cost-effective alternative to upgrading hardware.
Fact: Cisco has implemented circularity into its product life cycle. Between 2021 and 2022, more than 1800 Cisco UCS servers from data centers were refurbished and resold.
To reinforce these efforts, data center operators are encouraged to collaborate with waste management organizations that adhere to a 'zero waste to landfill' policy. Following the reduce, reuse, and recycle hierarchy supports sustainability goals by addressing Scope 3 emissions (emissions from assets not owned or controlled by the organization) and minimizing environmental impact throughout the entire life cycle of data center equipment.
Responsible procurement
The sustainability of data centers is determined not only by their own operations but also by the practices of their suppliers, so it is crucial to ensure that suppliers of data center equipment follow sustainable practices. This requires establishing sustainability criteria for data center equipment suppliers, based on factors such as the raw materials used, the use of renewable energy sources, and the end-of-life disposal of equipment. Monitoring suppliers' emissions is a vital step in reducing Scope 3 (indirect) emissions, as these lie beyond the organization's direct control.
Fact: Cisco uses a Responsible Business Code of Conduct for data center suppliers in its supply chain to measure and manage suppliers' conformance to Cisco's environmental and human rights requirements.
Pillar 4: Software efficiency
Software efficiency plays a central role in determining how resources are utilized within a data center. Unavailability of virtualized infrastructure and inefficient software development practices can lead to unnecessary strain on hardware, requiring more energy and resources to execute tasks. Optimal software design, virtualization, and active performance monitoring can dramatically reduce the computational workload, leading to lower energy consumption of a data center. This section covers solutions that focus on leveraging software-based solutions to drive efficiency in data centers.
Software-defined data center
A software-defined data center promotes sustainability by enhancing resource efficiency, which reduces overall energy consumption and shrinks the environmental footprint. Data center operations can be simplified by creating a hyperconverged environment that delivers IT resources as a service, with virtualized compute, storage, and networking resources combined on a standardized platform. This design allows the data center to be managed as a unified system in which infrastructure and workload management are controlled programmatically. As a result, workloads can be dynamically scaled up or down based on real-time requirements, ensuring that only necessary resources consume power. Further, smart network interface cards can offload from server CPUs several of the jobs required to manage modern distributed applications. This offloading frees up the host's CPU cores to support additional business applications, improving performance and allowing hardware components to be shared across a host cluster. The host infrastructure needed to support business workloads is thereby minimized, reducing the overall energy requirement. Overall, the efficiency offered by a software-defined data center helps curtail the environmental impact of a data center.
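As a simple sketch of that programmatic control (not a specific SDDC product), the snippet below estimates how many hosts a cluster needs to keep powered on for the current demand; the cluster size, vCPU counts, and headroom are assumed values:

```python
import math

def hosts_needed(total_demand_vcpus: float, vcpus_per_host: int, headroom: float = 0.2) -> int:
    """Minimum hosts to keep powered on for current demand plus a safety headroom."""
    return max(1, math.ceil(total_demand_vcpus * (1 + headroom) / vcpus_per_host))

cluster_hosts = 20       # hypothetical cluster size
vcpus_per_host = 64
current_demand = 410.0   # vCPUs demanded by all workloads right now

active = hosts_needed(current_demand, vcpus_per_host)
print(f"keep {active} of {cluster_hosts} hosts active; {cluster_hosts - active} can be idled")
```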
Software build analysis using sustainability dashboards
While servers consume a sizable chunk of the energy supplied to IT equipment, it is also important to look at the data center in its entirety to drive energy efficiency. Bringing the ambient temperature and energy consumption of all the devices (e.g., servers, switches, load balancers, firewalls) in the data center into a single analytics dashboard helps understand the energy usage and GHG footprint at a row or rack level, as well as helps identify hot or cold regions in the data center.
To achieve this, a real-time dashboard highlights the key information on the software builds with respect to the energy and carbon impact of the computing resources used. Such solutions expose data and metrics to provide visualization of the resources used which are useful in determining the changes required in development practices. In addition, the resources used can also be tagged to specific projects and initiatives, enabling leadership to balance business priorities and sustainability goals.
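A minimal sketch of this kind of roll-up, assuming a hypothetical telemetry schema of device, rack, power, and inlet temperature:

```python
from collections import defaultdict

# Assumed telemetry records: (device, rack, power_watts, inlet_temp_c).
readings = [
    ("server-01", "rack-A", 420, 24.5),
    ("server-02", "rack-A", 380, 25.1),
    ("switch-01", "rack-B", 150, 22.0),
]

rack_power = defaultdict(float)
rack_temps = defaultdict(list)
for device, rack, watts, temp in readings:
    rack_power[rack] += watts
    rack_temps[rack].append(temp)

# Rack-level summary for the dashboard: total power and average inlet temperature.
for rack in sorted(rack_power):
    avg_temp = sum(rack_temps[rack]) / len(rack_temps[rack])
    print(f"{rack}: {rack_power[rack]:.0f} W, avg inlet {avg_temp:.1f} °C")
```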
Using data to drive sustainability
Real-time analytics apply AI and big data to operational metrics, such as energy usage, and to monitoring data from the dynamic environment in the data center's cooling chain. Parameters in each link of the chain can be adjusted dynamically, reducing the energy used and the associated costs. By monitoring assets in real time and drawing on historical data from the physical systems, such as power load and the heating and cooling of the data center, AI models can be built to optimize the systems based on predicted loads. Further, anticipating peak loads and fluctuations enables power to be dispatched only when needed. This predictive approach allows data centers to proactively adjust their operations.
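As an illustrative stand-in for the models described above, the sketch below forecasts near-term IT load with a naive moving average and maps it to a supply-air setpoint; the rule and all numbers are assumptions, not a production control loop:

```python
def forecast_next(load_history_kw, window: int = 4) -> float:
    """Naive moving-average forecast of the next interval's IT load."""
    recent = load_history_kw[-window:]
    return sum(recent) / len(recent)

def cooling_setpoint(predicted_load_kw: float, max_load_kw: float,
                     min_c: float = 18.0, max_c: float = 27.0) -> float:
    """Raise the supply-air setpoint when predicted load is low (illustrative rule only)."""
    utilization = min(predicted_load_kw / max_load_kw, 1.0)
    return max_c - utilization * (max_c - min_c)

history = [310, 295, 280, 260, 250]  # kW, hypothetical
predicted = forecast_next(history)
print(f"predicted load {predicted:.0f} kW -> setpoint {cooling_setpoint(predicted, 400):.1f} °C")
```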
Green software development
Green software development focuses on designing, developing, and deploying software applications in ways that minimize their environmental impact. It involves several key principles, such as designing energy-efficient algorithms, optimizing resources, virtualization, and continuous measurement. Following these principles helps reduce the total amount of energy needed to run workloads. Efficient code often requires fewer resources to run, and optimizing software reduces the strain on servers, so they consume less electricity. Green coding emphasizes compatibility, which extends the life span of devices, and optimized code results in faster load times and less data transmission, leading to energy savings across data centers and networks. Considering the colossal energy demands of data centers globally, even minor reductions in code inefficiencies can translate into a substantial decrease in carbon emissions.
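A toy illustration of the energy-efficient algorithms principle: the same result computed with and without caching, where the cached version does far less CPU work for repeated inputs (the workload sizes are arbitrary):

```python
from functools import lru_cache
import timeit

def checksum_naive(n: int) -> int:
    """Recomputes the sum of squares on every call."""
    return sum(i * i for i in range(n))

@lru_cache(maxsize=None)
def checksum_cached(n: int) -> int:
    """Computes the sum of squares once per distinct input, then reuses the result."""
    return sum(i * i for i in range(n))

reps = 200
print("naive :", timeit.timeit(lambda: checksum_naive(50_000), number=reps))
print("cached:", timeit.timeit(lambda: checksum_cached(50_000), number=reps))
```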
Fact: CEOs who implement sustainability and digital transformation initiatives, such as green coding, report a higher average operating margin than their peers.
Where can you start today?
Organizations planning to begin their sustainability journey usually face questions such as "Where do we begin?" and "How do we evaluate different solutions?" Deloitte's extensive experience and proven track record in driving sustainability-related transformations have helped develop comprehensive sustainability strategies for clients across multiple industries, and Deloitte's specialists work closely with clients to develop customized solutions aligned with their business goals.
While there is no one-size-fits-all approach to driving sustainability in data centers, anyone looking to drive efficiency measures in their data centers can follow a generic stepwise approach aligned with organizational objectives. In the first step, data is collected, including IT and non-IT discovery in the data center, energy usage, and efficiency metrics. The second step is to assess the organization's current state, which includes understanding its sustainability maturity and goals as well as its current emissions and energy consumption. In the final step, opportunities are identified and presented as a roadmap for stakeholders to finalize.
Summary
The urgency of addressing sustainability in brownfield data centers arises from several interconnected factors. Data centers have become the backbone of our digital age; however, their energy demands are staggering, contributing significantly to carbon emissions across the globe. With the proliferation of newer technologies and solutions, their role and impact on the environment requires careful supervision. While building Greenfield data centers incurs significant cost and time, the focus should be on finding efficient ways to upgrade the existing data centers.
As discussed in this article, by optimizing energy efficiency, adopting renewable energy sources, optimizing resources, enhancing cooling systems, and utilizing the latest technological innovations, organizations across the globe can optimize their existing data centers to reduce environmental impact without compromising efficiency or growth. Adopting such sustainable design principles will lead to more efficient and cost-effective data centers. To conclude, with increased awareness of environmental risks, driving sustainability initiatives to cut emissions has taken center stage for most organizations in recent years. For organizations relying heavily on data centers for their operations, it becomes critical to evaluate solutions that can help drive energy reduction. This article has charted solutions focusing on a data center's design, operation, and management, along with a stepwise approach to kick-start the sustainability journey.
As used in this document, “Deloitte” means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed description of our legal structure. Certain services may not be available to attest clients under the rules and regulations of public accounting.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.