Strategies for improving the sustainability of data centers via practical and effective measures such as changing the energy mix, enhancing energy efficiency, reducing energy use, and reusing or recycling energy

Information and communication technologies (ICT) increasingly permeate our daily lives, and we entrust ever more of our data to the cloud. Events like the COVID-19 pandemic place an exceptional burden on ICT infrastructures. This entails the deployment and operation of data centers, whose contribution to energy use and environmental impact keeps growing. In this work, we address strategies to improve the energy efficiency of data centers and to enhance their sustainability with respect to energy resources and the environment. Key strategies include innovative cooling technologies, use of natural resources, automation techniques, low-power electronics, and equipment with extended thermal limits. Research perspectives are identified and estimates of improvement opportunities are presented.


Introduction
The digital economy and the demand for information and communication technology (ICT) are expanding ever further, driving the data center industry. Compared to the recent "age of computing", the present time is regarded as the "age of data" [1]. Drivers of the recent massive expansion of ICT are 5G, modern computing paradigms, the internet of things (IoT) [2,3], cryptocurrencies, blockchain [4], big data science, artificial intelligence (AI), and emergencies like the ongoing COVID-19 pandemic [5,6]. Key estimates by Cisco on the 2018-2023 digital evolution are reported in Table 1 [7].
The fifth generation technology standard for broadband cellular networks, known as 5G, is being implemented to meet increasing service demand [8]. The related energy demand is under investigation [9,10].
Cryptocurrencies (bitcoin being the first and most famous) are digital assets used as a medium of exchange; they are not issued by a central authority but are based on a distributed ledger, typically a blockchain. Mining is the release of new units of cryptocurrency [11,12]. The energy and environmental costs of cryptocurrency mining are an emerging issue [13][14][15][16][17]. The estimated energy use related e.g. to bitcoin is reported in Figure 1. Sustainable alternatives are under investigation [18].
Blockchains were invented to serve bitcoin transactions, but today they are regarded as disruptive in many applications [19], including climate change [20], energy [21], and health [22]. The energy and environmental impact of blockchains is being investigated [17,23]. Other drivers are modern computing paradigms, e.g., cloud computing, edge computing, fog computing, and the IoT [3].
Figure 1. Estimation of bitcoin electricity consumption, via the Cambridge Bitcoin Electricity Consumption Index [33].
Data centers use ICT devices (servers, storage drives, and network devices) which are electrically powered. Heat generated by device operation must be removed by cooling equipment, also powered by electric energy. The top items of energy consumption are cooling and servers, estimated at 43% each, followed by storage drives and network devices (11% and 3%, respectively) [34]. Other estimates are roughly 52% IT equipment, 38% cooling system, and 10% remaining equipment (e.g., power distribution, UPS) [35]. Power flows in a typical data center are illustrated in Figure 2. The share of global GHG emissions attributable to ICT is presented in Figure 3. Electricity use estimates for 2010-2030 for ICT and data centers are presented in Figure 4. For instance, Ong et al. [37,38] compared life-cycle energy and CO2 costs of videoconferencing and face-to-face meetings, with videoconferencing amounting to up to 7% of the energy use and emissions of face-to-face meetings in 2012, and subsequent trends based on the evolution of ICT and transport.
On these grounds, energy and environmental sustainability of data centers is a priority in the ICT industry. European strategies push for data centers to be carbon-neutral by 2030 [39]. The main actions on energy use and operational carbon of data centers are [40]: high-performance computing (software); energy conservation of computer rooms (hardware); low-power servers (hardware); renewable energy application (hardware). All actions then impact metrics (Figure 5). Rong et al. [40] review technologies for optimizing energy use in data centers as of 2016, categorizing them into high-performance computing, low-power server design, energy conservation of computer rooms (location, layout and ventilation patterns, use of natural cold sources), integration of renewable energy, and metrics.
Ebrahimi et al. [46] discuss the configuration of data centers, thermal loads and thermal management (cooling systems); they also discuss technologies for waste heat recovery.

Power supply and continuity
Electrical power supply plays a key role in the proper and efficient operation of a data center. The loads of a data center can be classified into two main groups according to the level of service requested:
• Computers, servers, storage, electronic and telecommunication equipment enclosed in the IT racks need a vital service with a very high level of continuity of supply, considering their vulnerability even to very short voltage dips (a few milliseconds).
• Cooling, lighting and other auxiliaries need a preferential service, considering the inertia of the system, with a tolerable outage of several seconds.
Vital service is supplied by Uninterruptible Power Systems (UPS). Emergency service is supplied by Emergency Generator Sets (EGS).
Open-source IT systems are redefining how power is distributed within an IT rack by replacing internal server power supplies with a centralized rack-level power supply.
Rackmount servers use a variety of DC voltages ranging from 12 Vdc down to 1 Vdc, to power internal components, e.g., CPU, GPU, hard drives, memory, fans, peripherals.
Stepping down from the AC voltage of the power system to these DC levels happens in several stages, through transformers (AC-AC), rectifiers (AC-DC) and converters/regulators (DC-DC). Inside the IT rack, power supply units (PSUs) convert AC to DC power (typically to 12 Vdc, and more recently to 48 Vdc). Voltage regulator modules (VRMs) on the motherboard are DC-DC converters used to reduce the voltage further for final use by the internal components.
PSUs can be subdivided into single-cord and dual-cord supplies. Conventional internal server PSU architectures and centralized rack-level PSU architectures (12 Vdc and 48 Vdc) are the architectures most used in IT racks. With best-in-class components, the consolidated 12 Vdc rack-level PSU architecture provides a small incremental energy efficiency improvement over the conventional architecture, and consolidating at 48 Vdc provides another small incremental improvement over 12 Vdc. A minimal numerical sketch of how cascaded conversion stages combine is given below.
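To make the effect of cascaded conversion stages concrete, the following sketch (with purely illustrative stage efficiencies, not measured values) shows how the overall efficiency of the power path is the product of the individual stage efficiencies, which is why each consolidation step yields only an incremental gain.

```python
# Minimal sketch: end-to-end efficiency of a cascaded power conversion chain.
# Stage efficiencies below are illustrative assumptions, not measured values.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency is the product of the individual stage efficiencies."""
    eta = 1.0
    for e in stage_efficiencies:
        eta *= e
    return eta

# Conventional architecture: internal server PSU (AC -> 12 Vdc) + VRMs (12 Vdc -> ~1 Vdc)
conventional = chain_efficiency([0.94, 0.90])

# Consolidated 48 Vdc rack-level PSU + 48 V point-of-load regulation (assumed values)
rack_level_48v = chain_efficiency([0.96, 0.92])

print(f"Conventional chain:      {conventional:.1%}")
print(f"48 Vdc rack-level chain: {rack_level_48v:.1%}")
```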

Tier classification
The Tier of a data center refers to the system used to quickly classify (and therefore identify) the level of performance of the electric power system supplying a data center. The appropriate Tier is assigned according to the complexity of the data center and its redundant components [47].

• Tier I: a data center with only one power system and one cooling system. These data centers are programmed to shut down completely in the event of a failure. Backup policies and redundant components are absent or, if present, cover a very limited number of components. This type of data center maintains an availability of 99.671% per year, i.e. a downtime of 28.8 hours per year (the relation between these percentages and downtime is sketched after this list). A Tier I system can also be defined as an N system.
• Tier II: a data center with only one power system and one cooling system. Thanks to special components and architectures, the structure needs less maintenance and therefore suffers fewer interruptions. Some components are redundant and backup policies exist; for example, a redundant UPS can be installed. The minimum uptime of a Tier II data center must be 99.741%, or approximately 22 hours of downtime per year. A Tier II system can also be defined as an N+1 system.
• Tier III: a data center equipped with multiple power systems and multiple cooling systems. All components are redundant. Most maintenance and upgrades do not require taking the data center offline. The minimum uptime of a Tier III data center must be 99.982%, or 1.6 hours of downtime per year. A Tier III system can also be defined as a 2N system.
• Tier IV: designed and built to be totally fault tolerant, with every component redundant. It has multiple power and cooling systems. Guaranteed uptime is 99.995% (maximum annual downtime must not exceed 26.3 minutes). A Tier IV system can also be defined as a 2(N+1) system.
It is important to highlight that for Tiers greater than I, several pieces of equipment are redundant. For example, Tier IV has four UPSs compared with only one in Tier I; in this case the losses are four times as large, and the global efficiency of the system is reduced. The efficiency of key components within the power path can be schematized in the following steps: electric power system efficiency; UPS efficiency; oversizing of UPS; redundancy of UPS; PSU efficiency; oversizing of PSU; redundancy of PSU.
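The availability percentages quoted for the four Tiers map directly to annual downtime; a minimal sketch of this conversion, assuming an 8760-hour year, follows.

```python
# Minimal sketch: annual downtime implied by the availability figures quoted
# for each Tier (hours per non-leap year, 8760 h).

HOURS_PER_YEAR = 8760

tiers = {
    "Tier I":   99.671,
    "Tier II":  99.741,
    "Tier III": 99.982,
    "Tier IV":  99.995,
}

for tier, availability_percent in tiers.items():
    downtime_h = (1 - availability_percent / 100) * HOURS_PER_YEAR
    print(f"{tier}: {availability_percent}% -> "
          f"{downtime_h:.1f} h/year ({downtime_h * 60:.0f} min/year)")
```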
Published UPS efficiencies are almost always quoted at 100% load under the most favorable conditions, leading to nearly identical efficiencies between different UPSs. The energy efficiency of a UPS depends on the load at which it operates, and since adding redundancy means adding extra spare capacity, redundancy can have a deep impact on efficiency.
For instance, a 1N configuration consisting of 250 kW modules for 1000 kW of system capacity may operate at 80% load (a typical threshold set by operators). An internally "modular" redundant UPS with five 250 kW modules has 1250 kW of capacity, which would be equivalent to 64% load, and a parallel redundant (2+1) UPS configuration with three 500 kW units has 1500 kW of capacity, or 53% load. If a UPS is loaded above 80%, the energy cost of UPS losses represents a small percentage compared to the cost of powering the IT load. As more load is added to a UPS, the electrical current running through its components increases, and the resulting heat losses, sometimes referred to as "I-squared-R" losses, grow with the square of the current. These square-law losses become significant (1-4%) at higher UPS loads.
There are three significant types of losses that a manufacturer can reduce to improve UPS efficiency: no-load losses, proportional losses, and square-law losses. Designers of data centers have three levers to improve efficiency: technology, topology, and modularity. By understanding how these factors impact efficiency, UPS systems can be designed that significantly decrease the electrical cost of operation. A sketch combining the redundancy example above with a simple loss model is given below.
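As a hedged illustration of the interplay between redundancy and efficiency described above, the following sketch recomputes the load fractions of the example configurations and applies a generic three-term loss model (no-load, proportional, and square-law terms); the loss coefficients are assumptions chosen for illustration only, not vendor data.

```python
# Minimal sketch: UPS load fraction under different redundancy schemes and a
# simple three-term loss model (no-load, proportional, square-law losses).
# Loss coefficients are illustrative assumptions, not vendor data.

def load_fraction(it_load_kw, module_kw, n_modules):
    """Fraction of installed UPS capacity actually used."""
    return it_load_kw / (module_kw * n_modules)

def ups_losses_kw(load_kw, capacity_kw, p0=0.01, kp=0.02, ks=0.015):
    """Losses = no-load + proportional + square-law terms (per unit of capacity)."""
    load_pu = load_kw / capacity_kw
    return capacity_kw * (p0 + kp * load_pu + ks * load_pu**2)

it_load = 800  # kW, i.e. a 1000 kW system operated at the 80% threshold

configs = {
    "1N (4 x 250 kW)":           (250, 4),
    "Modular N+1 (5 x 250 kW)":  (250, 5),
    "Parallel 2+1 (3 x 500 kW)": (500, 3),
}

for name, (module_kw, n) in configs.items():
    cap = module_kw * n
    lf = load_fraction(it_load, module_kw, n)
    loss = ups_losses_kw(it_load, cap)
    print(f"{name}: capacity {cap} kW, load {lf:.0%}, est. losses {loss:.0f} kW")
```

With these assumed coefficients, the redundant configurations serve the same 800 kW IT load with higher total losses, mainly because the no-load term scales with installed capacity.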
Offline UPSs supply the load through a mains bypass without conditioning and ensure a maximum efficiency of 99%, higher than that of online UPSs.
IEC 62040-3 [48] applies to movable, stationary and fixed electronic uninterruptible power systems (UPS) that deliver single- or three-phase fixed-frequency AC output voltage not exceeding 1000 V and that incorporate an energy storage system, generally connected via a DC link. IEC 62040-3 is intended to specify performance and test requirements of a complete UPS system, not of individual UPS functional units. The standard introduces a code identifying the service provided by the UPS (a simplified illustration is sketched after this list), based on:
• Output characteristics, in terms of dependence of output voltage and frequency on the input;
• Output waveform, in normal mode and in bypass mode;
• Dynamic output performance.
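As a simplified illustration (not the normative text of the standard), the following sketch assumes the commonly cited three-group notation of the form "VFI-SS-111", in which the first group expresses the dependency of the output voltage and frequency on the input.

```python
# Minimal sketch of the UPS performance classification code described by
# IEC 62040-3, assuming the commonly cited "VFI-SS-111" style notation.
# This mapping is a simplified illustration, not the normative text.

OUTPUT_DEPENDENCY = {
    "VFD": "output Voltage and Frequency Dependent on input (offline/standby)",
    "VI":  "output Voltage Independent, frequency follows input (line-interactive)",
    "VFI": "output Voltage and Frequency Independent of input (double conversion)",
}

def describe(code: str) -> str:
    """Split a code like 'VFI-SS-111' into its three groups."""
    dependency, waveform, dynamics = code.split("-")
    return (f"{dependency}: {OUTPUT_DEPENDENCY[dependency]}; "
            f"waveform (normal/bypass mode): {waveform}; "
            f"dynamic output performance classes: {dynamics}")

print(describe("VFI-SS-111"))
```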

Energy conservation of computer rooms
Energy conservation of computer rooms may rely on:
• New architecture and control of the cooling system [35];
• Possible control of the lighting system [49].

Improvement of cooling system
Since cooling is one of the major items of energy use in data centers, improvements to cooling systems are widely investigated.

Location of data centers
Electric air conditioning is found to aggravate the urban microclimate (heat island effect), in turn exacerbating the need for cooling [40,50], in a vicious circle. On these grounds, the location of data centers is a key element. Locations with natural cooling conditions are considered, e.g., with abundant water or a cold climate [51,52]. BMW has a data center in Iceland [53,54], Facebook operates and is investing in data centers in Sweden [51,52,55], and Google has a data center in Finland [56]. Microsoft deployed an underwater data center off Scotland's Orkney Islands [57]. In some of these locations, inexpensive renewable energy is also available.
Lei et al. [58] recently investigated the achievable PUE (the practical minimum PUE with given climate conditions and state-of-the-art technologies) based on the locations of 17 Facebook and Google hyperscale data centers, via a simulation model of free cooling with different sources. They found that location alone can make a difference of up to a factor of two. The impact on different items of consumption is also noted. Other studies quantifying the impact of location were by Depoorter et al. [59] and by Shehabi et al. [60]. Considerations on the location of data centers, beyond cooling, are addressed by Atkins [61].

Room configuration
The configuration and parameters of computer rooms impact airflow, e.g., room ceiling or floor openings in raised-floor data centers [63][64][65]. The design cost of the layout and ventilation system in large data centers is estimated at roughly 8-10% of the total cost and is usually recovered through energy savings within about two to three years [40].

Airflow pattern/active cooling
The uniform rack arrangement is now obsolete, as it implies mixing of hot and cold air; the face-to-face or back-to-back (hot aisle/cold aisle) arrangement is now common [35,40,68]. Bedekar et al. [69] investigated optimal CRAC location via CFD.
Air distribution can be upward or downward. The floor design can be hard floor or raised floor. The air pattern can be open, local pipeline or full pipeline [40]. Air can be conveyed between CRAC and rack via flooded supply/return, locally ducted supply/return, or fully ducted supply/return [70].
To avoid mixing of hot and cold air in the hot aisle/cold aisle arrangement, cold aisle containment or hot aisle containment is implemented [35]. The latter is found to be the better of the two [71].
The CRAC unit maintains temperature, airflow, and humidity in the computer room. It typically uses compression refrigeration. Besides energy consumption, downsides of active cooling are noise and reliability [35]. To address energy consumption, passive cooling has been investigated.

Use of natural cold source
Use of a natural cold source can be direct or indirect. In direct use, outdoor air is introduced directly; humidity control and filtration are required. In indirect use, heat exchange equipment is used. The crucial point in using a natural cold source is the efficiency of the heat exchange between indoor and outdoor air [40].

Innovative cooling systems
Innovative cooling systems [72-74] can be based on natural air, chilled water, liquid immersion, evaporative cooling, geothermal cooling, passive cooling, or pumped two-phase cooling, also with the contribution of smart monitoring and the IoT.

Free cooling
In free cooling, natural fluids are used, without mechanical active components [35]. This increases energy efficiency, savings, and reliability. Airside and waterside economization exist. In airside economization, cold outdoor air is used, which must be at least 5 °C colder than the indoor air. In waterside economization, water from a nearby source is used. In both cases, free cooling can be direct or indirect. Free cooling techniques are presented by Zhang et al. [75] and by Daraghmeh et al. [76]:
• Airside: direct airside systems, indirect airside systems, multistage evaporative systems;
• Waterside: direct water-cooled systems, air-cooled systems, cooling tower systems, integrated dry cooler-chiller (water-to-air dry cooler) systems;
• Heat pipe: independent systems, integrated systems, cold storage systems, pulsating heat pipe systems.
The reviewed studies report PUE in the range 1.10-1.16 and energy savings of 30-40%; individual studies report COP values up to 9 or 12, or energy savings up to 47% or 49%. A minimal sketch of estimating airside free-cooling availability is given below.
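As a hedged illustration of airside free-cooling potential, the following sketch counts the hours in which outdoor air satisfies the 5 °C rule of thumb quoted above; the temperature series here is synthetic, and in practice measured climate data for the candidate location would be used.

```python
# Minimal sketch: counting airside free-cooling hours from hourly outdoor
# temperatures, using the rule of thumb quoted above (outdoor air at least
# 5 degC colder than indoor air). The temperature data here is synthetic.

import random

random.seed(0)
indoor_setpoint_c = 24.0
required_delta_c = 5.0

# Synthetic hourly outdoor temperatures for one year (placeholder for real data)
outdoor_temps_c = [10 + 12 * random.random() for _ in range(8760)]

free_cooling_hours = sum(
    1 for t in outdoor_temps_c if t <= indoor_setpoint_c - required_delta_c
)

print(f"Airside free cooling available for {free_cooling_hours} h "
      f"({free_cooling_hours / 8760:.0%} of the year)")
```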

Liquid cooling
In high power density data centers, technologies other than air cooling are recommended, e.g., liquid cooling. Liquids have a higher heat transfer capacity than air [77], allowing for a lower temperature difference between equipment and coolant, potentially enabling passive cooling and also heat reuse. Liquid cooling systems are discussed e.g. by Capozzoli et al. [78].
Liquid cooling systems can be implemented via micro-channel flow and cold-plate heat exchangers in contact with components (e.g., CPU or DIMM). Examples of studies are by Zimmermann et al. [79,80] (hot-water-cooled electronics and heat reuse; energy performance of Aquasar, the first hot-water-cooled prototype), Coles et al. [81] (direct liquid cooling), and Iyengar et al. [82,83] (experiments on the IBM chiller-less test facility). Commercial systems are proposed e.g. by Asetek [84].
Another emerging technique is fully immersed direct liquid cooling [71]. Commercial systems are proposed e.g. by Iceotope [85]. Chi et al. [77] compare an air-cooled and a fully immersed liquid-cooled system. Temperatures for liquid cooling systems are discussed in the literature [77,79,81,86].

Improvement of lighting system
Energy saving in lighting is pursued by reducing power losses via efficient equipment (passive measures) and by regulating power use via control systems (active measures) [87]. The goal of lighting control is to provide lighting when, where, and in the amount needed.

Lighting control
Many data centers implement a "lights-out" practice, in which light fixtures are switched manually across a (large) space. The drawbacks are that the illuminated area is large compared to the accessed spot, and that lights can then be left on unnecessarily [49].
A proposed approach is "follow-me" lighting (implemented e.g. in Facebook's Oregon and North Carolina data centers), in which lighting operates as a spotlight following the technician. Motion detectors are embedded in each light fixture and connected to a central application which controls the on/off state and intensity (dimming) of each fixture [49]. A minimal sketch of this control logic is given below.
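The following is a minimal sketch of such control logic; the fixture layout, neighbour behaviour, and dimming levels are assumptions for illustration, not the actual implementation of the cited data centers.

```python
# Minimal sketch of "follow-me" lighting logic: each fixture reports motion to
# a central application, which raises the occupied fixture (and its neighbours)
# and dims the rest. Layout and dimming levels are illustrative assumptions.

FULL, NEIGHBOUR, OFF = 100, 40, 0  # dimming levels in percent (illustrative)

def follow_me(n_fixtures: int, occupied_index: int) -> list[int]:
    """Return a dimming level for each fixture along an aisle."""
    levels = []
    for i in range(n_fixtures):
        if i == occupied_index:
            levels.append(FULL)
        elif abs(i - occupied_index) == 1:
            levels.append(NEIGHBOUR)
        else:
            levels.append(OFF)
    return levels

# Technician detected at fixture 3 of a 10-fixture aisle
print(follow_me(10, 3))
```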

Light sources
The common, inexpensive technology for data center lighting is fluorescent lighting. Its drawbacks are as follows: life is shortened by the number of starts and by dimming; maintenance is required, aggravated by the shorter life, the multitude of lamps, and disposal; dimming, as a cause of aging, is seldom implemented [49].
LED lighting has surpassed fluorescent lighting in energy efficiency and light quality, and it is recommended on the grounds of lower electricity use, lower heat release (impacting HVAC), and dimming capability. The higher price of LED fixtures is dropping and is compensated by their longer life [49].
To further reduce heat release in the data center, driverless LED fixtures are available, powered via a central engine providing power conversion and control [49].

Other strategies for lighting improvement
Although black is the most common finish for ICT equipment, white racking could reduce the number of luminaires and lighting energy use by as much as 37% each [88].

Net zero energy data center
A net zero energy data center (NZEDC) is defined, e.g. in Deliverable 4.5 of the RenewIT project [89], as data centers that «consume net zero non-renewable primary energy from power grid and district energy networks during their lifetime, while generating as much energy (both electric and thermal) as they use during a year.» A roadmap towards the NZEDC is presented in Figure 7. Many technical concepts are investigated, as presented in Figure 8, and results on energy flows are discussed.
Figure 8 groups the technical concepts into: building envelope (protection from solar gains; increasing energy losses from the white space to outdoors via walls; optimizing or increasing thermal capacity); energy-aware IT management; efficient power distribution (modular UPS; bypassed UPS; enhanced UPS for electrical energy storage; enhanced UPS with oversized power converter for regulating reactive power); low-ex climate control and cooling distribution (free cooling; increased thermal limits; hot/cold aisle containment; variable air flow; partial load operation of redundant or oversized components; efficient components); and power and cooling supply (solar desiccant evaporative cooling; reciprocating engine CHP with absorption chiller; district cooling and heat reuse; grid-fed wet cooling tower without chiller; PV and wind turbines with vapor-compression chiller and lead-acid batteries; grid-fed vapor-compression chiller with large chilled water storage; grid-fed vapor-compression chiller with Li-ion batteries; syngas reciprocating engine CHP with absorption chiller and large hot water storage; solar thermal cooling with double-effect absorption chiller; solid biomass-based steam turbine with absorption chiller; wave power and free cooling with seawater; free cooling with aquifer thermal energy storage; grid-fed vapor-compression chiller with electrical energy and chilled water storages; biogas fuel cell with absorption chiller; grid-fed vapor-compression chiller with lead-acid batteries and ice storage; concentrated solar power CHP with adsorption chiller; vapor-compression chiller with desiccant evaporative cooling using heat from the chiller).
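As a hedged illustration of the annual balance in the NZEDC definition above, the following sketch checks whether on-site generation covers annual energy use; all figures are placeholders, not data from the cited project.

```python
# Minimal sketch of the annual balance in the NZEDC definition quoted above:
# the data center should generate as much energy (electric and thermal) over a
# year as it uses. All figures are illustrative placeholders.

annual_use_mwh = {"electric": 7500, "thermal": 900}          # energy used on site
annual_generation_mwh = {"electric": 6800, "thermal": 1700}  # on-site renewable generation

total_use = sum(annual_use_mwh.values())
total_generation = sum(annual_generation_mwh.values())

print(f"Use: {total_use} MWh, generation: {total_generation} MWh -> "
      f"{'net zero met' if total_generation >= total_use else 'net zero not met'}")
```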

Low-power servers
The server is regarded as the basic unit of the power and heat flow path [90]. In low-power servers, energy usage is reduced via component configuration. Approaches are illustrated by Rong et al. [40], including features, energy efficiency, and constraints of selected technologies. Recent advancements are presented by Jin et al. [90], who compare existing server power consumption models and identify future research trends. It is found that it is possible to handle a heavier workload without increasing energy use. While accurate power consumption models of servers allow accurate cooling load calculation and avoid excessive redundancy, energy- and thermal-aware management based on these models results in the best overall energy savings. Meisner et al. [91] investigated high-power versus low-power server design, questioning the perceived energy efficiency of the latter.
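As an illustration of one widely used family of server power models (a linear interpolation between idle and peak power as a function of utilization), the following sketch uses assumed parameter values; it is not the specific model of the works cited above.

```python
# Minimal sketch of a commonly used linear server power model: power scales
# with utilization between idle and peak. Parameter values are assumptions.

def server_power_w(utilization: float, p_idle: float = 120.0, p_peak: float = 350.0) -> float:
    """Estimate server power draw for a utilization between 0 and 1."""
    return p_idle + (p_peak - p_idle) * utilization

for u in (0.0, 0.3, 0.7, 1.0):
    print(f"utilization {u:.0%}: {server_power_w(u):.0f} W")
```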

The Little Box Challenge
Shrinking magnetics, capacitance, and heat extraction are among the main challenges in the design of high-power-density converters [92]. The Little Box Challenge (LBC) was a competition, sponsored by Google and the IEEE Power Electronics Society [93,94], to build a prototype power inverter with given specifications (e.g., a size approximately 1/10 of the contemporary state of the art, efficiency above 95%, fitting in a 40 in³ casing) [95]. Improving the efficiency of Google's data centers was among its purposes [93]. The outcomes attracted attention from the consumer electronics community and spurred technology advancements. Design challenges and proposed solutions are examined in [92,[96][97][98]].
Approaches to heat management are discussed by Kim et al. [96]. Wide-bandgap switches cause lower switching losses than silicon switches; they are decisive in high-power-density converters and were widely used in LBC prototypes. It is deduced that all teams relied on forced air cooling via fans. The winners (Red Electrical Devils by CE+T) paid much attention to thermal design [99].

DC power supply
The diffusion of DC grids and power supply systems is envisaged in the evolution of the power system and ICT [100,101]. Wong et al. [100] simulate a modular DC power supply unit for servers. Pueschel [101] investigates a 380 V DC microgrid, serving an office building and the data center of a German company, as an approach to energy efficiency and safety.

Semiconductors
Until recently, the best efficiency in UPS power stages (96%) was achieved via insulated-gate bipolar transistors (IGBTs) with three-level switching topologies. Recently, silicon carbide (SiC) transistors have allowed efficiencies exceeding 98%, nearly independent of the percentage load. This is made possible by the properties of wide-bandgap (WBG) semiconductors. SiC devices are proposed by ROHM Semiconductor, Wolfspeed, and ON Semiconductor [102]. As an example, the efficiency of a Mitsubishi SiC-based UPS is reported to be above 98% for any load above 30% [103].

Robotization of data centers
Integration of robotics in data centers is envisaged for management and maintenance. While robots cannot completely replace human operators (at least in the near future), they can be used to automate repetitive tasks, relieving operators and increasing productivity. Robotic maintenance can enable the implementation of "lights out" data centers and of vertical configurations. Challenges and possible benefits are discussed in [104]. An unmanned data center has been launched by AOL [105]. In addition, robotics can be used for diagnosis of data centers and energy management [106][107][108].

Modular data centers
Modular data centers are mobile data centers designed for rapid deployment, e.g. for disaster recovery. They feature high energy efficiency and density, and they can be easily scaled. As an example, HP manufactures the Performance Optimized Datacenter (POD); model 240a, nicknamed "EcoPOD", has a declared PUE of 1.05 to 1.30.

Indicators
Common metrics for data center efficiency are the power usage effectiveness (PUE) and the carbon usage effectiveness (CUE).
PUE is defined as the ratio of total facility power to ICT equipment power, quantifying the extra power required per unit of ICT power. The ideal PUE is 1; in practice it ranges from 1.2 (very efficient) to 3 (very inefficient).
CUE is defined as the ratio of total CO2 emissions to ICT equipment power. Alternatively, it can be computed as the product of the CO2 emission factor (CEF) and the PUE [109].
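The two definitions above can be illustrated with a minimal sketch using assumed annual figures and an assumed grid emission factor.

```python
# Minimal sketch of the PUE and CUE definitions given above, with
# illustrative (not measured) annual figures.

it_energy_mwh = 10_000               # energy delivered to ICT equipment
total_facility_energy_mwh = 14_500   # ICT + cooling + power distribution + lighting
carbon_emission_factor = 0.4         # tCO2 per MWh of electricity (grid-dependent assumption)

pue = total_facility_energy_mwh / it_energy_mwh
total_emissions_tco2 = total_facility_energy_mwh * carbon_emission_factor
cue = total_emissions_tco2 / it_energy_mwh   # tCO2 per MWh of ICT energy

print(f"PUE = {pue:.2f}")
print(f"CUE = {cue:.2f} tCO2/MWh (equivalently CEF x PUE = {carbon_emission_factor * pue:.2f})")
```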
Other performance indicators are reported [40,109,110]. Specific indicators are proposed, to quantify effectiveness of on-site renewable generation (on-site energy fraction, OEF, and on-site energy matching, OEM), energy reuse (energy reuse factor, ERF), and water usage (water usage effectiveness, WUE) [109].

Trends
A global survey by the Uptime Institute reported the average annual PUE to have decreased over 2007-2013 and then stalled (Figure 9). The improvements are due to major steps in energy efficiency (hot/cold air separation, increased thermal limits, enhanced control, free cooling). Speculations on the recent stall include exceptional outdoor temperatures, the shift of workloads to public cloud services, which leaves some data centers operating inefficiently, and the diffusion of high-power-density data centers.

Regulations
The main requirements on data centers can be as follows [112]: temperature and humidity control (see ASHRAE specifications [113]); static electricity monitoring; fire suppression; physical security. Data centers must comply with standards on continuity and quality of supply, availability, physical protection, information security (protection of personal and financial data), etc.

Certifications and initiatives
Data centers fall within certifications or initiatives on sustainable ICT or buildings. In the United States, possible certifications for green data centers are the Leadership in Energy and Environmental Design (LEED) by the U.S. Green Building Council [114] and the U.S. National Data Center Energy Efficiency Information Program within the ENERGY STAR program [115]. Other than advanced cooling and reduced energy use, features of a LEED-compliant data center are a clean backup system, use of renewable energy, green construction, and intelligent design [116].
Pertinent industry initiatives are as follows. A number of companies and other stakeholders in data center efficiency are part of The Green Grid consortium [117]. The Green500 list biannually ranks the supercomputers in the TOP500 list by energy efficiency; the NVIDIA DGX SuperPOD (2.356 Pflops) ranked first in November 2020 with 26.195 Gflops/W [118].

Conclusions
Energy use for ICT is ever increasing, and so are concerns about the sustainability of data centers. In this paper we review approaches to reduce the energy consumption and resource depletion caused by the operation of data centers, highlighting promising strategies and future research directions. The main actions are on software (high-performance computing) and hardware (energy conservation of computer rooms through cooling and lighting, energy conservation in electronic equipment, and integration of renewable energy). Metrics and the regulatory environment provide a useful framework to support these actions. Several indicators have been introduced to assess the state of the art and future targets for single aspects of efficiency (energy efficiency, carbon impact, use of resources); as an overall concept, the definition of the NZEDC has been proposed. To reduce the cooling load, several concepts have been proposed, taking advantage of favorable environmental conditions (location), natural cold sources, and passive cooling. Electronics is also evolving to reduce the IT load, via energy-aware IT management and new architectures. At the same time, a balance must be struck between energy conservation and performance (continuity and quality). The extension of efficiency initiatives to data centers and the investigation of new technologies are desirable. As our lives rely ever more on data and thus on the data center industry, in light of the ongoing digital evolution and rising environmental concerns, the sustainability of data centers must be pursued.