Strategies for Improving the Sustainability of Data Centers via Energy Mix, Energy Conservation, and Circular Energy

Information and communication technologies (ICT) increasingly permeate our daily life, and we ever more commit our data to the cloud. Events like the COVID-19 pandemic place an exceptional burden upon ICT, driving the deployment and use of data centers and, with them, energy use and environmental impact. The scope of this work is to summarize the present situation of data centers as to environmental impact and opportunities for improvement. First, we introduce the topic, presenting estimated energy use and emissions. Then, we review proposed strategies for energy efficiency and conservation in data centers. Energy uses pertain to power distribution, ICT, and non-ICT equipment.


Introduction
The digital economy is expanding and so is the demand for information and communication technology (ICT), driving the data center industry. Compared to the recent "age of computing", the present time is regarded as the "age of data" [1]. Drivers for the recent massive expansion of ICT are the fifth-generation mobile networks (5G), modern computing paradigms, the internet of things (IoT) [2,3], cryptocurrencies, blockchain [4], big data science, artificial intelligence (AI), and emergencies like the ongoing COVID-19 pandemic [5,6]. Key estimates on the 2018-2023 digital evolution by Cisco are reported in Table 1 [7].
The fifth-generation mobile network, known as 5G, is being implemented to meet increasing service demand [8]. The related energy demand is under investigation [9,10].
Cryptocurrencies (Bitcoin being the first and most famous) are media of exchange which are digital, encrypted, and distributed. They are not issued by a central authority but rather are based on a distributed ledger, typically a blockchain. Mining is the release of new units of a cryptocurrency [11,12]. The energy and environmental costs of cryptocurrency mining are an emerging issue [13-17]. The estimated energy use related to Bitcoin is reported in Figure 1. Sustainable alternatives are under investigation [18].
A blockchain is an open-source distributed database, based on state-of-the-art cryptography and a distributed ledger [19]. The first application of blockchains was to support Bitcoin transactions; today, they are regarded as disruptive in many applications [20,21], including climate change [22], energy [23], and health [24]. A recent application of blockchains is in smart mobility, supporting the Internet of Vehicles [25]. The energy and environmental impact of blockchains is under investigation [17,26]. Other drivers are modern computing paradigms: cloud computing, edge computing, fog computing, and IoT [3].

The COVID-19 pandemic changed the use of ICT. In March 2020, Microsoft Teams use increased by 775% [27] and Facebook group calls increased tenfold in Italy [28]; Zoom exceeded 200 million daily participants [29]. Changes in social media use following COVID-19 are addressed e.g., by J.P. Morgan [30]; Amazon notably profited [31]. This can also lead to beneficial results: Ong et al. [32,33] estimate the impact of videoconferencing, in terms of energy and CO2 costs over the life cycle, compared to face-to-face meetings.
Information traders (e.g., Google/Alphabet, Amazon, Apple, Facebook, Microsoft) are among the top companies by market capitalization [34,35]. ICT electricity demand is expected to accelerate over 2020-2030, reaching 8-21% (depending on the scenario) of total electricity demand [36]. Power usage of data centers can be as high as hundreds of megawatts [37]; examples of top data centers by power are presented in Table 2. Data centers use ICT equipment (servers, storage drives, and network devices), which is electrically powered. The operation of ICT requires non-ICT equipment, or data center physical infrastructure (DCPI), e.g., cooling to remove ICT heat and lighting (Figure 2).
Power flows in a typical data center are illustrated in Figure 3. Electricity use estimates for 2010-2030 for ICT and data centers are presented in Figure 4, and global greenhouse gas (GHG) emissions from ICT in Figure 5.

The top items of energy consumption are cooling and servers, estimated at 43% each, followed by storage drives and network devices (11% and 3%, respectively) [39]. Other estimates are roughly 52% for ICT, 38% for the cooling system, and 10% for other equipment [40]. On these grounds, the energy and environmental sustainability of data centers is a priority in the ICT industry; European strategies push for data centers to be carbon-neutral by 2030 [43]. The main actions on energy use and operational carbon of data centers are high-performance computing (software); energy conservation of computer rooms (hardware); low-power servers (hardware); and renewable energy application (hardware) [47] (Figure 6). In this work, we focus on energy conservation strategies at the physical level (ICT and DCPI), as outlined in Figure 7; energy conservation at the software level falls outside our scope.

Power Supply and Continuity
Electrical power supply plays a key role in the proper and efficient operation of a data center. The loads of a data center can be classified into two main levels according to the type of service requested. DCPI require a preferential service, supplied by emergency generator sets (EGS); a downtime of seconds is tolerable, given system inertia. ICT and critical equipment require a vital service with stringent continuity of supply, provided by uninterruptible power systems (UPS); they are vulnerable even to very short (millisecond) voltage dips. Standard ISO 8528-1:2018 defines application, ratings, and performance of EGS [49]. Ratings are outlined in Table 3. Only the COP rating is sufficient for data center Tier III or IV certification (see Section 2.1), but it implies an oversized EGS; DCP is not sufficient, as it implies a reliable utility. The additional "data center continuous power" (DCC) rating is therefore defined as the maximum power that the generator can continuously deliver to a constant or varying load for an unlimited time in a data center application [50,51].
Open-source ICT systems are redefining how power is distributed in IT racks, by replacing internal server power supply units (PSU) with a centralized rack-level PSU. Servers use a variety of dc voltage levels, ranging from 12 to 1 Vdc. These are generated from the mains ac voltage in several steps via transformers, rectifiers, and converters. PSU can be divided into single-cord or dual-cord supply. Dual-corded equipment is normally fed from separate sources by two PSUs, each loaded below 50% capacity, so that either PSU can feed the total load whenever necessary. Conventional internal server PSU architectures and centralized rack-level PSU architectures (12 Vdc and 48 Vdc) are the most used architectures in IT rack data centers. With best-in-class components, the consolidated 12 Vdc rack-level PSU architecture provides a small incremental energy efficiency improvement over the conventional architecture; consolidating at 48 Vdc provides another small incremental improvement over 12 Vdc. Conventional and open-source PSU are illustrated in Figure 8.

Tier Classification
Service continuity is of paramount importance for data centers. The Tier classification by the Uptime Institute [53] is the international standard on data center performance. Tiers are assigned based on data center reliability; they consequently define criteria on maintenance, power supply, cooling, and fault capabilities. A summary of the Tier classification is reported in Table 4. The Uptime Institute illustrates the characteristics of each Tier, along with features and actions on ICT equipment, electrical infrastructure, mechanical infrastructure, and facility operations for reliability, availability, and serviceability [53]. Examples of power distribution based on Tier are given in Figure 9. Corresponding ratings are defined by standard ANSI/TIA-942-B-2017 [54]. Redundancy (multiplication of components) is a pursued strategy to reduce the probability of failure and to improve reliability (the ability to perform under stated conditions for a given time) and availability (the degree to which a system is operational when required for use) [41,56,57]. Topology and power distribution for reliability are discussed e.g., by Wiboonrat [58]. It should be remarked that redundancy increases costs and decreases energy efficiency [41]; therefore, sustainability should also be assessed in light of reliability. This is also reflected in multidimensional metrics.
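For reference, availability in this context is commonly estimated from the mean time between failures (MTBF) and the mean time to repair (MTTR); the following first-order expression is a standard textbook relation, not specific to any Tier:

\[ A = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}} \]

For example, MTBF = 100,000 h and MTTR = 5 h give A ≈ 99.995%; redundancy improves availability by reducing the impact of individual failures, at the cost of the extra capacity and efficiency noted above.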

Power Losses
Multiple causes of energy loss exist in a data center, as discussed e.g., by Rasmussen [42]. Ideally, all power should be delivered to the IT equipment; in reality, energy is also consumed by non-IT equipment (DCPI). Part of the DCPI is in series with the IT equipment and powers it, while the rest operates in parallel, supporting IT operation. DCPI efficiency is pursued via more efficient devices, accurate sizing, and innovative techniques. More accurate sizing, matching the IT load, is regarded as the most immediate opportunity.
Losses in DCPI components are usually divided among no-load losses, proportional losses, and square-law losses; typical values as a fraction of the full-load rating are reported in [42]. An energy model of data centers is also available, illustrating the various items of energy loss (Figure 10) [42]. Data center modeling in the literature is discussed by Ahmed et al., who present a model of the electrical energy consumption of data center subsystems, considering their interactions and highlighting power losses [60].
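Under this categorization, the losses of a power-path component can be sketched as a function of the per-unit load L, with coefficients k0, k1, and k2 for the no-load, proportional, and square-law terms, respectively (a restatement of the breakdown above, expressed as a fraction of the full-load rating, not a formula taken from [42]):

\[ P_{\mathrm{loss}}(L) = k_0 + k_1 L + k_2 L^2, \qquad 0 \le L \le 1 \]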

UPS
Data centers are mission-critical, and reliability is expected; hence, UPS are key. Although they mostly operate in emergency conditions, UPS are part of the infrastructure and are taken into account in efficiency measurements. UPS losses can be grouped into no-load losses, proportional losses, and square-law losses (Figure 11a), as follows [61]:

• No-load losses: caused by no-load losses in transformers, capacitors, and auxiliaries;
• Proportional losses: due to switching losses in transistors and conduction losses in semiconductors and rectifiers;
• Square-law losses: Joule (I²R) losses.
UPS efficiency can be pursued via technology, topology, and modularity. Efficiency typically decreases with reduced load in common UPS, while it is maintained in efficient UPS (Figure 11b). UPS efficiency is discussed e.g., by Milad and Darwish [62].
UPS efficiency values are usually given at 100% load under the most favorable conditions, leading to nearly identical values for different UPS. UPS efficiency depends on the load: increasing redundancy means adding extra spare capacity, hence redundancy can have a deep impact on efficiency. Efficiency at 30% load has been proposed as a better specification point for a UPS [61]. An example can be given as follows. Assuming an 800 kW load, so that a 1000 kW UPS operates at 80% load (a typical threshold set by operators), the effect of alternative UPS configurations is as per Table 5: the same load represents a different relative load depending on the UPS configuration. Above 80% load, the energy cost of UPS losses is regarded as small with respect to the IT load. Offline UPS provide the load with a bypass mains power supply without conditioning and ensure maximum efficiency (99%) compared to online UPS. Standard IEC 62040-3 [64] applies to electronic UPS that deliver single/three-phase fixed-frequency ac output voltage not exceeding 1000 V and that incorporate an energy storage system, generally connected via a dc link. The standard specifies performance and test requirements for a complete UPS system (not for individual UPS functional units) and introduces a code for identification of UPS service.
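The effect of configuration on relative load, and hence on efficiency, can be illustrated with a short calculation. The sketch below does not reproduce Table 5: the module counts are illustrative, and the loss coefficients are hypothetical, chosen only to reproduce the qualitative efficiency drop at partial load (cf. Figure 11b).

```python
# Illustrative effect of UPS redundancy on per-module load and efficiency.
# Loss coefficients k0, k1, k2 (no-load, proportional, square-law) are
# hypothetical placeholders, not measured values.

def ups_efficiency(load_pu, k0=0.02, k1=0.02, k2=0.01):
    """Efficiency of one UPS module at per-unit load (0, 1]."""
    losses = k0 + k1 * load_pu + k2 * load_pu**2  # fraction of module rating
    return load_pu / (load_pu + losses)

LOAD_KW = 800.0     # IT load from the example in the text
MODULE_KW = 1000.0  # rating of each UPS module

configurations = {
    "N (1 module)": 1,
    "N+1 (2 modules)": 2,
    "2N (2 modules)": 2,
    "2(N+1) (4 modules)": 4,
}

for name, n_modules in configurations.items():
    load_pu = LOAD_KW / (n_modules * MODULE_KW)  # relative load per module
    eff = ups_efficiency(load_pu)
    print(f"{name}: {100 * load_pu:.0f}% load/module, "
          f"efficiency ≈ {100 * eff:.1f}%")
```

With these assumptions, the single module runs at 80% load and roughly 95% efficiency, while the redundant configurations run at 40% or 20% load per module with visibly lower efficiency, consistent with the trend described above.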

Energy Conservation of Computer Rooms
Rong et al. [47] review technologies for optimizing energy use in data centers as of 2016, including energy conservation in computer rooms. Nadjahi et al. [40] discuss thermal loads, active cooling (air conditioning), and passive cooling (free cooling, liquid cooling, two-phase cooling, building envelope), as of 2018. Ebrahimi et al. [65] discuss the configuration of data centers, thermal loads, and thermal management (cooling systems); they also discuss technologies for waste heat recovery. Energy conservation of computer rooms may rely on:

• New architecture and control of the cooling system [40];
• Possible control of the lighting system [66].

Improvement of the Cooling System
As cooling is one of the major items of energy use in data centers, improvements to cooling systems are being investigated. The main strategies are presented in the following.

Location of Data Centers
Choosing the most appropriate location for a data center is essential. One common driver in this choice is the risk associated with the site itself [67]. Here, "risk" has a general meaning, pertaining not only to natural adverse events, but also to utility costs and relationships with other services and facilities.
Electric air conditioning is found to aggravate the urban microclimate (heat island effect), in turn exacerbating the need for cooling [47,68], in a vicious circle. Location choice based on environmental conditions, on the other hand, can improve the efficiency of the cooling system [69,70]. For example, locations with abundant water or a cold climate are considered for natural cooling: BMW has a data center in Iceland [71,72], Facebook has and is investing in data centers in Sweden [73-75], and Google has a data center in Finland [76]. Microsoft deployed an underwater data center off Scotland's Orkney Islands [77]. In some of those locations, inexpensive renewable energy is also available.
Lei et al. [78] recently investigated the achievable power usage effectiveness (PUE, see Section 5.1.1), i.e., the practical minimum PUE under given climate conditions and state-of-the-art technologies, based on the locations of 17 Facebook and Google hyperscale data centers, via a simulation model of free cooling with different sources. It is found that location can affect the achievable PUE by up to a factor of two; the impact on the different items of consumption is also noted. Other studies quantifying the impact of location are by Depoorter et al. [79] and by Shehabi et al. [80]. Considerations on the location of data centers are addressed by Atkins [81]. An illustration of the distribution of data centers is given in Figure 12.

Room Configuration
The configuration of computer rooms impacts airflow; relevant parameters are e.g., the room ceiling or the configuration of floor openings in raised-floor data centers [83-85]. The cost of a proper design of room layout and ventilation in large data centers is estimated at 8-10% of total cost and is usually compensated by energy savings within two to three years [47].

Room Temperature
Room temperature and humidity values are recommended to preserve equipment life. However, as better equipment is released, thermal limits are relaxed accordingly, to cut cooling costs. ASHRAE thermal guidelines recommended 20-25 °C dry-bulb air temperature in 2004 and 18-27 °C in 2008; in 2011, equipment classes were defined, allowing a range as wide as 5-45 °C. Dell presented servers able to withstand 45 °C air [86]. Google raised room temperature to 26.7 °C [87].

Airflow Pattern and Active Cooling
Unified rack arrangement is now obsolete, as it implies mixing of hot and cold air; face-to-face or back-to-back (hot aisle/cold aisle) arrangement is now common [40,47,88]. Bedekar et al. [89] investigated the optimal computer room air conditioning (CRAC) unit location via computational fluid dynamics.
Air distribution in computer rooms is discussed in the literature [40,47,90,91]. Air distribution can be upward or downward. Floor design can be hard floor or overhead. The air pattern can be open, local pipeline, or full pipeline [47]. Air can be conveyed between CRAC and rack via flooded supply/return, locally ducted supply/return, or fully ducted supply/return [91]. To avoid mixing of hot and cold air in the hot aisle/cold aisle arrangement, cold aisle containment or hot aisle containment is implemented [40]; the latter is found to be the better of the two [90].

The CRAC unit maintains temperature, airflow, and humidity in the computer room. It typically uses compression refrigeration. Besides energy consumption, the downsides of active cooling are noise and reliability [40]. To address energy consumption, passive cooling has been investigated.

Innovative Cooling Systems
Innovative cooling systems [92-94] can be based on natural air, chilled water, liquid immersion, evaporative cooling, geothermal cooling, passive cooling, and pumped two-phase cooling, with the possible contribution of smart monitoring and IoT.

Use of Natural Cold Source
The use of a natural cold source can be direct or indirect. In direct use, outdoor air is directly introduced; humidity control and filtration are required. In indirect use, heat exchange equipment is used. The crucial point in using a natural cold source is the efficiency of the heat exchange between indoor and outdoor air [47].

Free Cooling
In free cooling, natural fluids are used, without mechanical active components [40]. This increases energy efficiency, savings, and reliability. Free cooling exploits airside or waterside economization. In airside economization, cold outdoor air is used, which must be at least 5 °C colder than indoor air. In waterside economization, water from a nearby source is used. In both cases, free cooling can be direct or indirect; in addition, the heat pipe concept can be combined with free cooling. Free cooling technologies are discussed by Zhang et al. [95] and by Daraghmeh et al. [96]; techniques are reported in Table 6. Reviewed studies report PUE in the range 1.10-1.16 and energy savings of 30-40%; certain studies declare a coefficient of performance up to 9-12, or energy savings up to 47-49%.
In high power density data centers, technologies other than air cooling are recommended, e.g., liquid cooling. Liquids have a higher heat transfer capacity per unit mass than air [97], allowing for a lower temperature difference between equipment and coolant, potentially enabling passive cooling and heat reuse, as the sketch below illustrates. Liquid cooling systems are discussed e.g., by Capozzoli et al. [98].
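The heat-capacity argument can be made concrete with the sensible-heat balance Q = ṁ·cp·ΔT. In the sketch below, the 10 kW rack load and the 10 K coolant temperature rise are illustrative assumptions; the specific heats and densities are standard values for air and water.

```python
# Coolant mass flow needed to remove heat load Q with temperature rise dT:
#   m_dot = Q / (cp * dT)   (sensible-heat balance)

Q_W = 10_000.0  # rack heat load [W] (assumed)
DT_K = 10.0     # coolant temperature rise [K] (assumed)

coolants = {
    "air":   {"cp": 1005.0, "rho": 1.2},    # cp [J/(kg K)], rho [kg/m^3]
    "water": {"cp": 4186.0, "rho": 998.0},
}

for name, prop in coolants.items():
    m_dot = Q_W / (prop["cp"] * DT_K)  # mass flow [kg/s]
    v_dot = m_dot / prop["rho"]        # volumetric flow [m^3/s]
    print(f"{name}: {m_dot:.3f} kg/s ≈ {1000 * v_dot:.2f} L/s")
```

With these figures, air requires about 830 L/s of flow against roughly 0.24 L/s for water, which is why liquid cooling tolerates smaller temperature differences and can enable passive operation and heat reuse.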
Liquid cooling systems can be implemented via micro-channel flows and cold-plate heat exchangers in contact with components. Studies include Zimmermann et al. [99,100] (hot-water-cooled electronics and heat reuse; energy performance of Aquasar, the first hot-water-cooled prototype), Coles et al. [101] (direct liquid cooling), and Iyengar et al. [102,103] (experiments on an IBM chiller-less test facility). Commercial systems are proposed e.g., by Asetek [104].
Another emerging technique is fully immersed direct liquid cooling [90]. Commercial systems are proposed e.g., by Iceotope [105]. Chi et al. [97] compare an air-cooled and a fully immersed liquid-cooled system. Temperatures for liquid-cooling systems are discussed in the literature [97,99,101,106].

Improvement of Lighting System
Energy saving in lighting is pursued by reducing power losses via efficient equipment (passive measures) and by regulating power use via control systems (active measures) [107]. The motivation of lighting control is to provide lighting when, where, and in the amount needed.

Lighting Control
Many data centers implement a "lights-out" practice, in which light fixtures are switched manually across a (large) space. The drawbacks are that the illuminated area is large compared to the accessed spot and that lights can be left on unnecessarily [66].
A proposed approach is "follow-me" lighting (implemented e.g., in Facebook's Oregon and North Carolina data centers), in which lighting operates as a spotlight following the technician. Motion detectors are implemented in each light fixture and connected to a central application, which controls the on/off state and intensity (dimming) of each fixture [66].

Light Sources
The common, inexpensive technology for data center lighting is fluorescent lighting. Its drawbacks are as follows: life is shortened by the number of starts and by dimming; maintenance is required, aggravated by shorter life, the multitude of lamps, and disposal; dimming, as a cause of aging, is seldom implemented. LED lighting has surpassed fluorescent lighting in energy efficiency and light quality, and it is recommended on the grounds of lower electricity use, lower heat release (impacting HVAC), and dimming capability. The higher price of LED fixtures is dropping and is compensated by longer life. To further reduce heat release in the data center, LED fixtures are available that do not implement drivers and are powered via a central supply providing power conversion and control [66].

Other Strategies for Lighting Improvement
Although black is the most common finish, white racking could reduce the number of luminaires and lighting energy use by as much as 37% each [108].

Low-Power Servers
The server is regarded as the basic unit of the power and heat flow path [109]. In low-power servers, energy usage is reduced via component configuration. Approaches are illustrated by Rong et al. [47], including features, energy efficiency, and constraints of selected technologies. Recent advancements are presented by Jin et al. [109], who compare existing server power consumption models and identify future research trends. It is found that it is possible to handle a heavier workload without increasing energy use. While accurate power consumption models of servers result in accurate cooling load calculations and avoid excessive redundancy, energy- and thermal-aware management based on such models yields the best overall energy savings. Meisner et al. [110] investigated high-power versus low-power server design.
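As a reference point for such models, a widely used first-order approximation expresses server power draw as a linear function of CPU utilization u ∈ [0, 1]; the notation here is generic, and the surveyed works refine this considerably:

\[ P(u) = P_{\mathrm{idle}} + \left(P_{\mathrm{max}} - P_{\mathrm{idle}}\right) u \]

Since P_idle is typically a large fraction of P_max, consolidating workloads onto fewer, better-utilized servers is an effective energy-saving lever.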

The Little Box Challenge
Shrinking magnetics, capacitance, and heat extraction are the main challenges in the design of high-power-density converters [111]. The Little Box Challenge (LBC) was a competition sponsored by Google and the IEEE Power Electronics Society [112,113] to build a prototype power inverter with given specifications (e.g., size approximately 1/10 of the contemporary state of the art, efficiency above 95%, fitting in a 40 in³ casing) [114]. Improving the efficiency of Google's data centers was among its aims [112].
The main design challenges are discussed by Neumayr et al. [115]. The outcomes attracted attention from the consumer electronics community and spurred technology advancements. Design challenges and proposed solutions are examined in [111,116] (Table 7).
Table 7. Main features of LBC finalists [111,116].
Approaches to heat management are discussed by Kim et al. [116] (Table 8). Wide-bandgap switches cause lower switching losses than silicon switches; they are decisive in high-power-density converters and were widely used in LBC prototypes. It can be deduced that all teams relied on forced air cooling via fans. The winners (the Red Electrical Devils by CE+T) paid much attention to thermal design [117].

Table 8. Thermal management approaches of LBC finalists, elaborated from [116].

Team: Thermal Management
CE+T RED: Copper enclosure, with gap-pad
ETH !verter: Forced air cooling utilizing high-fin-number heat sinks and six ultra-flat blowers
Schneider Electric: Heat sink over power switches with small fan, two air inlets on case
Texas A&M: Unspecified cooling system with heat sink
Taiwan Tech: Six fans, heat sink connected to aluminum case
UIUC: Copper enclosure, 2 mm tall heat-sink fins, six radial fans
Univ. Tennessee: Heat sink over power switches, two small fans, air inlets on top and side
Virginia Tech FEEC: Copper enclosure, 10 micro-fans on side wall

Direct-Current Power Supply
The diffusion of dc grids and power supply systems is envisaged in the evolution of the power system and of ICT [118,119]. Wong et al. [118] simulate a modular dc power supply unit for servers. Pueschel [119] investigates a 380 Vdc microgrid, serving an office building and the data center of a German company, as an approach to energy efficiency and safety.

Semiconductors
Until recently, the best efficiency in UPS power stages (96%) was achieved via insulated-gate bipolar transistors (IGBT) with three-level switching topologies. Recently, silicon carbide (SiC) transistors have been able to exceed 98% efficiency, nearly independent of the percentage load. This is made possible by the properties of wide-bandgap (WBG) semiconductors. SiC devices are proposed by ROHM Semiconductor, Wolfspeed, and ON Semiconductor [120]. As an example, the efficiency of a Mitsubishi SiC-based UPS is reported above 98% for any load above 30% [63].

Automation, Monitoring, and Robotization
The integration of robotics in data centers is envisaged for their management and maintenance. While robots cannot completely replace human operators, they can be used to automate repetitive tasks, relieving operators and increasing productivity. Robotic maintenance can enable the implementation of "lights-out" data centers and of a vertical configuration of the space. Challenges and possible benefits are discussed in [121]. An unmanned data center has been launched by AOL [122]. In addition, robotics can be used for diagnosis of data centers and energy management [123-125]. Emerging applications of automation, monitoring, and robotization are presented by Levy and Subburaj [126].

Modular Data Centers
Modular data centers are mobile data centers designed for rapid deployment, e.g., for disaster recovery. They feature high energy efficiency and density, and they can be easily scaled. As an example, HP manufactures the Performance Optimized Datacenter (POD); model 240a, nicknamed "EcoPOD", has a declared PUE of 1.05 to 1.30.

Regulatory Environment Governing Data Centers
To support advancements in data center sustainability, a regulatory environment is important, providing standardized requirements and evaluation methods.

Indicators
Given the increasing impact of data centers on society under many aspects (energy, security, sustainability), the need to compare different solutions calls for reliable metrics and figures of merit. That is the reason behind the introduction of the "multidimensional approach" by Levy and Raviv [41,127], who formalized earlier, more specific attempts such as [128]. That approach was then specialized into "green" [129,130] and sustainability metrics [131,132].
Concerning data center efficiency, common metrics are the power usage effectiveness (PUE) and the carbon usage effectiveness (CUE). PUE is defined as the ratio of total facility power to ICT power, quantifying the extra power required per unit of ICT power. The ideal PUE is 1; in practice, it ranges from 1.2 (very efficient) to 3 (very inefficient). The data center infrastructure efficiency (DCIE), equivalent to the inverse of PUE, is sometimes used. CUE is defined as the ratio of total CO2 emissions to ICT energy use; alternatively, it can be computed as the product of the CO2 emission factor (CEF) and PUE [133].
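In symbols, with P_facility the total facility power, P_ICT the ICT power, and E the corresponding annual quantities (a restatement of the definitions above):

\[ \mathrm{PUE} = \frac{P_{\mathrm{facility}}}{P_{\mathrm{ICT}}}, \qquad \mathrm{DCIE} = \frac{1}{\mathrm{PUE}}, \qquad \mathrm{CUE} = \frac{E_{\mathrm{CO_2}}}{E_{\mathrm{ICT}}} = \mathrm{CEF} \times \mathrm{PUE} \]

For example, a facility drawing 1.5 MW in total to support a 1.0 MW ICT load has PUE = 1.5 and DCIE ≈ 0.67; with an emission factor of 0.4 kgCO2/kWh, its CUE is 0.6 kgCO2 per kWh of ICT energy.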
Other performance indicators are reported in [47,133,134]. Specific indicators have been proposed to quantify the effectiveness of on-site renewable generation (on-site energy fraction, OEF, and on-site energy matching, OEM), energy reuse (energy reuse factor, ERF), and water usage (water usage effectiveness, WUE) [133]. Concerning data center sustainability, specific environmental metrics beyond renewable energy and efficiency can be introduced, such as those related to lifecycle assessment, decommissioning costs, the use of recycled materials, and the possibility of second-life reuse of some parts [135,136]. Moreover, indicators exist to correlate energy to processed information, e.g., joules per bit [137]. Levy and Raviv discuss metrics and sub-metrics and propose a new metric approach, the "data center multidimensional scorecard", illustrated in Figure 13 [41]. In addition, Lykou et al. discuss existing metrics and propose a new, sustainability-oriented methodology [138]. A comprehensive taxonomy of data center metrics is presented by Reddy et al. [139].

Trends
A global survey by the Uptime Institute reported average annual PUE to have decreased over 2007-2013 and then stalled (Figure 14). The improvements are due to major steps in energy efficiency (hot/cold air separation, increased thermal limits, enhanced control, free cooling). Speculations on the recent stall include exceptional outdoor temperatures, the shift of workloads to public cloud services (leaving legacy data centers operating inefficiently), and the diffusion of high power density data centers.

Regulations
The main requirements on data centers [141] concern temperature and humidity control (see ASHRAE specifications [142]), static electricity monitoring, fire suppression, physical security, continuity and quality of supply, availability, physical protection, and information security (protection of personal and financial data). The regulatory framework concerning data centers is constantly evolving; the main institutions that have contributed are listed by Levy and Raviv [41]. An overview of the matter is reported in Table 9.

Certifications and Initiatives
Data centers fall within certifications and initiatives on sustainable ICT or buildings. In the United States, possible certifications for green data centers are the Leadership in Energy and Environmental Design (LEED) by the U.S. Green Building Council [143] and the U.S. National Data Center Energy Efficiency Information Program within the ENERGY STAR program [144]. Besides advanced cooling and reduced energy use, features of a LEED-compliant data center are a clean backup system, the use of renewable energy, green construction, and intelligent design [145].
A number of companies and other stakeholders of data center efficiency are part of The Green Grid consortium [146]. The Green500 list biannually ranks the supercomputers in the TOP500 list for energy efficiency; the NVIDIA DGX SuperPOD (2.356 Pflops) ranked first in November 2020 with 26.195 Gflops/W [147]. Other pertinent initiatives in the U.S.A. are the Energy Efficiency Improvement Act of 2014 (H.R. 2126), the Data Center Optimization Initiative, and the Federal Data Center Consolidation Initiative [41].


Net Zero Energy Data Center
The concept of the Net Zero Energy Data Center (NZEDC) recaps the main strategies for data center sustainability. NZEDCs are defined, as per Deliverable 4.5 of the project RenewIT [148], as data centers that "consume net zero non-renewable primary energy from power grid and district energy networks during their lifetime, while generating as much energy (both electric and thermal) as they use during a year". A road map towards the NZEDC is presented in Table 10. Many technical concepts are investigated in the mentioned deliverable, as presented in Figure 15, and the results on energy flows are discussed.
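The defining condition can be restated as an annual energy balance (this formalization is ours, elaborated from the definition above; E denotes energy over one year of operation):

\[ E_{\mathrm{gen}}^{\mathrm{year}} = E_{\mathrm{use}}^{\mathrm{year}}, \qquad E_{\mathrm{non\text{-}renewable,\ grid}}^{\mathrm{net}} = 0 \]

i.e., on-site (electric and thermal) generation covers annual use, with net zero non-renewable primary energy drawn from the power grid and district networks.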
Table 10. Road map to the net zero energy data center, elaborated from [148].

Phase I, load reduction: including efficient power distribution and reuse of ICT waste heat. Phase II, renewable energy integration: including renewable-based power supply.

Conclusions
Energy use for ICT is ever increasing, and so are concerns about the sustainability of data centers. In this paper, we reviewed approaches to reduce the energy consumption and resource depletion caused by the operation of data centers, highlighting promising strategies and future research directions. The main actions are on software (high-performance computing) and on hardware (energy conservation in computer rooms, i.e., cooling and lighting; energy conservation in electronic equipment; integration of renewable energy). Metrics and the regulatory environment provide a useful framework to support these actions. Several indicators have been introduced to assess the state of the art and future targets for single aspects of efficiency (energy efficiency, carbon impact, use of resources). As a general concept, the definition of the NZEDC has been proposed in the literature and can be regarded as a useful benchmark. To reduce the cooling load, several concepts have been proposed, taking advantage of favorable environmental conditions (location), natural cold sources, and passive cooling. Furthermore, electronics are evolving to reduce the IT load, via energy-aware IT management and new architectures. Additionally, a balance must be achieved between energy conservation and performance (continuity and quality). The extension of efficiency initiatives to data centers and the investigation of new technologies are desirable. As our life ever more relies on data, and thus on the data center industry, in light of the ongoing digital evolution and rising environmental concerns, the sustainability of data centers must be pursued.
Funding: This research received no external funding.

Figure 2. Energy use in a data center, elaborated from [41].
Figure 3. Power flows in a typical data center, elaborated from [42].
Figure 6. Schematic of main actions to reduce data center energy use and impact, elaborated from [47].
Figure 7. Examples of main opportunities for energy conservation at the physical level.
Figure 8. Conventional server rack and rack with PSU disaggregated from servers, elaborated from [52].
Figure 10. Diagram of energy losses in a data center, elaborated from [42].
Figure 12. Distribution of data centers among top countries by numerosity, elaborated from [82].
Figure 15. Advanced technical concepts to reduce electric and cooling load, elaborated from [148].
* Average power output not exceeding 70% of rating.
Table 5. Example on comparison of loading of different UPS configurations.
Table 6. Free cooling techniques.
Table 9. Main best practices, guidelines, and standards on data centers.