Systematic Review

A Systematic Review of Energy Efficiency Metrics for Optimizing Cloud Data Center Operations and Management

1 Mechanical, Automotive and Materials Engineering Department, University of Windsor, Windsor, ON N9B 3P4, Canada
2 Department of Energy, Aalborg University, 9220 Aalborg, Denmark
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(11), 2214; https://doi.org/10.3390/electronics14112214
Submission received: 25 April 2025 / Revised: 15 May 2025 / Accepted: 27 May 2025 / Published: 29 May 2025
(This article belongs to the Section Industrial Electronics)

Abstract

Cloud Data Centers (CDCs) are an essential component of the infrastructure powering the digital life of modern society, hosting and processing vast amounts of data and enabling services such as streaming, Artificial Intelligence (AI), and global connectivity. Given this importance, their energy efficiency is a top priority, as they consume significant amounts of electricity, contributing to operational costs and environmental impact. Efficient CDCs reduce energy waste, lower carbon footprints, and support sustainable growth in digital services. Consequently, energy efficiency metrics are used to measure how effectively a CDC utilizes energy for computing versus cooling and other overheads. These metrics are essential because they guide operators in optimizing resource use, reducing costs, and meeting regulatory and environmental goals. To this end, this paper reviews more than 25 energy efficiency metrics, drawing on more than 250 literature references covering CDCs, their energy-consuming components, and configuration setups. Real-world case studies of corporations that use these metrics are then presented. The challenges and limitations of each metric are subsequently investigated, and associated future research directions are provided. Prioritizing energy efficiency in CDCs, guided by these metrics, is essential for minimizing environmental impact, reducing costs, and ensuring sustainable scalability for the digital economy.

1. Introduction

1.1. Background and Statistics

In recent years, the exponential growth of data-driven services—enabled by cloud computing and supported by hyper-scale infrastructure—has positioned CDCs as an essential component within global information and communication networks. Their central role in powering Artificial Intelligence (AI), big data analytics [1,2,3,4,5,6], e-commerce [7], streaming platforms [8,9,10,11,12,13], and edge computing [14,15,16,17,18] has made them critical to the digital infrastructure across all economic sectors. However, this massive computational footprint comes at a significant energy and environmental cost. While CDCs have become central to digital services, their growing computational load has made them one of the most energy-intensive classes of the digital infrastructure [19].
Global energy consumption by CDCs is projected to rise from 200 TWh in 2016 to nearly 2967 TWh by 2030, accounting for a significant share of total electricity use [20]. In the United States, CDCs consumed approximately 4.4% of total electricity in 2023, with projections reaching between 6.7% and 12% by 2028, mainly due to rising AI workloads [21]. Worldwide, electricity demand from CDCs is expected to exceed 857 TWh by 2028, with a Compound Annual Growth Rate (CAGR) of 19.5%, outpacing several major industrial sectors [22]. Within these facilities, Information Technology (IT) equipment accounts for roughly 60% of electricity consumption, while cooling and power delivery systems contribute another 30–40% [23]. This level of consumption places substantial pressure on energy infrastructure and raises critical concerns regarding sustainability, grid stability, and long-term cost efficiency [24].
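The growth figures above follow a simple compound-growth relation. As a rough sanity check, the short Python sketch below projects demand at the stated 19.5% CAGR; the 2023 base value of 350 TWh is an illustrative assumption, not a figure from the cited reports:

```python
def project_demand(base_twh: float, cagr: float, years: int) -> float:
    """Compound annual growth: demand_t = demand_0 * (1 + cagr) ** t."""
    return base_twh * (1.0 + cagr) ** years

# An assumed base of 350 TWh in 2023, grown at 19.5%/yr for 5 years,
# lands near the ~857 TWh cited for 2028.
print(round(project_demand(350.0, 0.195, 5), 1))  # ~853
```
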
These issues are further amplified by instabilities in the global energy market. Electricity prices are becoming increasingly volatile, influenced by geopolitical conflict, supply chain constraints, and fuel price fluctuations [25]. Meanwhile, many regions face escalating grid stress as electricity demand from both hyper-scale CDCs and electrified transport grows. Reports from the International Energy Agency (IEA) and the World Bank have noted the growing mismatch between CDC growth and the modernization rate of electric grids, particularly in regions where energy security is already constrained [26,27]. As a result, CDC operators must now address not only internal energy efficiency but also the broader challenge of navigating uncertain and competitive energy markets.
In many high-growth regions, energy management and reliability have become a key constraint [28], and CDC expansion is no exception. In Texas, concerns over grid capacity for AI and crypto workloads were publicly raised by industry leaders [29], and Taiwan has temporarily halted the development of large CDCs due to power concerns [30]. These developments reflect a shift toward proactive energy procurement strategies. For example, Microsoft’s long-term Power Purchase Agreement (PPA) with the Three Mile Island nuclear facility exemplifies efforts by major cloud providers to secure a reliable, dedicated energy supply for their infrastructure [31]. Sustainability objectives also influence the way CDCs are planned and managed. Many operators have committed to CO2-neutral targets and 100% Renewable Energy Sources (RESs). As of 2023, over 120 Digital Realty facilities operate on RESs, supported by regional PPAs and clean energy procurement strategies [32]. In addition, thermal re-use systems, such as district heating with server exhaust and aquifer thermal storage, as well as water-efficient cooling technologies, are being deployed to reduce environmental impact [33]. These initiatives signal a broader shift from traditional energy efficiency metrics to integrated sustainability and resilience planning.
Existing metrics such as Power Usage Effectiveness (PUE), Data Center Infrastructure Efficiency (DCiE), and Carbon Usage Effectiveness (CUE) have been widely adopted to evaluate CDC energy performance [34,35]. However, these metrics often provide only a partial view, as they typically exclude software-related inefficiencies, dynamic workloads, and grid-level CO2 intensity. With the increasing adoption of virtualization, containerization, and serverless computing, these limitations are becoming more significant. As a result, new metrics—such as Server Power Usage Effectiveness (SPUE)—have been proposed to address energy use at more granular levels [36]. While promising, these approaches remain under development, and their practical implementation faces challenges, in terms of standardization and integration. Therefore, a comprehensive review of energy efficiency metrics is necessary to assess their relevance, gaps, and applicability in modern CDC environments.

1.2. Systematic Review Methodology

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) process is a structured framework designed to ensure transparency, rigor, and reproducibility in systematic reviews [37]; our PRISMA checklist can be found in the Supplementary Materials. It involves several key stages (keyword selection, identification, screening, eligibility, and inclusion), each essential to synthesizing evidence from relevant studies. The PRISMA framework of this review is illustrated in Figure 1:
The process begins with keyword selection: a group of keywords, derived from the research question, is chosen to form a comprehensive search strategy covering the energy efficiency of CDCs, its metrics, and their definitions.
The identification phase then begins, executing the search strategy across multiple databases, such as IEEEXplore, Google Scholar, Web of Science (WoS), Springer Nature, MDPI, ACM, Wiley Online Library, and ScienceDirect, to locate potentially relevant studies. Additional sources, such as conference proceedings, were also searched. The goal is to cast a wide net, retrieving all records that might address the research question.
As the third step, during the screening phase, the deduplicated records are evaluated against predefined inclusion and exclusion criteria, typically in two stages: title/abstract screening and full-text review. In the first stage, the reviewers assess titles and abstracts to determine whether the studies are potentially relevant, excluding those that do not meet the criteria. The eligibility phase (Step 4) involves a detailed assessment of the full texts of the studies that passed the screening phase. Finally, the inclusion phase (Step 5) results in the selection of studies that fully meet the eligibility criteria and are included in the systematic review.
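The PRISMA stages described above amount to a sequence of filters over the retrieved records. A minimal sketch of that pipeline (the record fields and sample counts are hypothetical, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class Record:
    """A hypothetical bibliographic record with screening outcomes."""
    title: str
    duplicate: bool
    passes_title_abstract: bool
    passes_full_text: bool

def prisma_filter(records):
    """Apply the PRISMA stages in order: deduplicate, screen
    titles/abstracts, then assess full texts for eligibility."""
    deduplicated = [r for r in records if not r.duplicate]
    screened = [r for r in deduplicated if r.passes_title_abstract]
    included = [r for r in screened if r.passes_full_text]
    return {"identified": len(records),
            "deduplicated": len(deduplicated),
            "screened": len(screened),
            "included": len(included)}
```

Each count in the returned dictionary corresponds to one box of the PRISMA flow diagram (Figure 1).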

1.3. Literature Review

In the past ten years, growing interest in the energy efficiency of CDCs is evident from publications indexed in Scopus (Figure 2) and their publication types (Figure 3). Based on these statistics, journal articles account for the most significant share of research on the energy efficiency metrics of CDCs (especially in 2024).
A study conducted in [38] used focus groups and interviews with CDC managers to identify split incentives, imperfect information, and reliability trade-offs as barriers to energy efficiency investments. While market failures had limited impact, the high costs of context-specific information and opportunity costs from competing priorities were more significant in slowing adoption. In another study, a two-phase investigation was conducted by [39] into the non-technical barriers to energy efficiency. Phase I found abundant technical solutions but insufficient focus on cultural shifts. Phase II’s interviews with 16 CDC experts identified vendor-driven procurement, facility-specific energy metrics, and design convergence driven by high-density infrastructure.
A comprehensive review analyzed over 200 CDC power models, organizing them into a hierarchical framework with hardware-centric and software-centric branches [40]. The hardware-centric models spanned digital circuit, component, server, data center, and systems-of-systems levels, while the software-centric models focused on operating systems, virtual machines, and applications. Efforts to establish energy efficiency metrics for wireless networks, such as radiated base station power normalized to area and traffic load (e.g., Watts/Erlang/km² or Watts/(bits/sec)/km²), reflect industry goals to create green networks. However, a study by [41] argued that these metrics are unsuitable for wireless systems, as they conflict with effective system operation and mislead network design. On the other hand, a study by [42] enhanced energy monitoring in telecommunication central offices by introducing novel metrics alongside the heating degree days parameter: the parameter of central utilization, index of cluster reliability, and reliability index.
The energy intensity metric, the ratio of energy consumed to data volume, is used to assess energy efficiency in communication networks and CDCs. Its application was criticized in [43], which noted that weak correlations between data volume and energy use at short time scales lead to misleading results. Additionally, the importance of energy savings in CDCs was considered by [44], who proposed a model for measuring CDCs’ components to organize metrics and enhance corporate communication; the strengths and weaknesses of standard metrics were evaluated. The Power Usage Effectiveness (PUE) metric is one of the most widely used energy efficiency factors. The analysis conducted in [45] critiqued this metric’s limitations, noting that its instantaneous measurement of electrical energy use encouraged reporting of minimum values, thus reflecting only the lowest possible energy consumption.
As another metric class, cooling and thermal management of CDCs has come under focus. Several works, such as [46,47,48,49,50], have evaluated and reviewed cooling and thermal strategies, as well as metrics, in CDCs. Moreover, several reviews [51,52,53] have investigated the energy efficiency metrics of CDCs more broadly. Finally, by highlighting efficiency trends and recalibrating estimates, ref. [54] presented policymakers and analysts with a refined perspective on data center energy use, its drivers, and near-term efficiency potential.
Three adaptive models, namely Gradient Descent-Based Regression (GDR), Maximize Correlation Percentage (MCP), and Bandwidth-Aware Selection Policy (BW), were designed in [55] to reduce energy consumption and Service Level Agreement (SLA) violations in CDCs. These models use energy-aware techniques for detecting overloaded hosts and selecting Virtual Machines (VMs) for migration. As another alternative, the work by [56] presented an enhanced multi-objective optimization algorithm for energy-efficient task scheduling, combining deep reinforcement learning with enhanced electric fish optimization. Following the importance of this subject, a resource prediction-based VM allocation method was introduced by [57] that reduced energy consumption and enhanced system reliability. It optimized feed-forward Neural Networks (NNs) using a self-adaptive differential evolution algorithm, incorporating multi-dimensional learning and global exploration for superior global solution searching compared to traditional gradient descent.
The same subject was applied to vehicular edge cloud computing systems in [58]. It introduced a load-balancing algorithm that redistributes vehicles across roadside units based on load, computational capacity, and data rate. A robust security mechanism integrates an advanced encryption standard with electrocardiogram signals as encryption keys to secure data transmission. A caching strategy enables edge servers to store completed tasks, reducing latency and energy use. An optimization model minimized energy consumption while meeting latency constraints during computation offloading.
Moreover, the paper by [59] highlighted four key areas to enhance energy efficiency in CDCs: aligning architecture with emerging workloads, provisioning resources for future demands, improving the energy proportionality of machines, enhancing vertical integration across software stacks, and standardizing hardware–software interfaces for technology integration. Consequently, on the energy-management side, a hybrid policy-based reinforcement learning approach was presented in [60] for adaptive energy management in island group energy systems with constrained energy transmission. The paper introduced an island energy hub model enabling cascade energy utilization to meet island-specific demands and ensure a reliable supply. An energy management model for island groups was developed, accounting for the mismatch between energy demand and resources, in addition to the limited transmission capacity. Given the complexity of modeling, due to high renewable penetration and variable loads, the problem was framed as a model-free reinforcement learning task.
In a related effort, a multi-energy trading market model based on price matching was proposed by [61] to promote collaboration across energy types and improve utilization through user participation. The model supports personalized energy responses while preserving user privacy and autonomy. A joint trading mechanism was developed to handle various energy types and time scales, reducing failures from overlooked transmission processes. Conversion devices are used to boost matching efficiency, and an income mechanism prevents operator bias. An enhanced hierarchical reinforcement learning algorithm is applied to manage large state-action spaces and sparse rewards. Furthermore, the review by [62] overviewed the trends in improving energy efficiency across cloud infrastructure, including servers, networking, management systems, and user software. It highlighted solutions, their benefits, and their trade-offs.

1.4. Existing Gaps and Contribution

In this context, the development of a unified, comprehensive understanding of energy efficiency metrics is critically needed. The current landscape includes a diverse and often fragmented set of methods, each targeting different aspects of energy performance—yet no systematic framework exists to guide their selection or application in line with evolving infrastructure models. As energy consumption continues to rise and sustainability becomes a primary design constraint, the ability to assess and manage CDC efficiency through appropriate metrics is no longer optional—it is foundational to both engineering practice and decision making.
To this end, this work overviews the energy efficiency metrics of CDCs, classifies them into two groups of IT-related and non-IT-related metrics, and investigates them in detail. Next, challenges and limitations are identified, and potential future research directions are presented. Consequently, the contributions of this work are highlighted as follows:
  • The complete analytics of the energy efficiency metrics of CDCs;
  • Presenting the energy-consuming components of a CDC;
  • Describing different centralized and decentralized setups of Uninterruptible Power Supplies (UPSs) and Power Distribution Units (PDUs) in CDCs;
  • Providing the challenges, limitations, and the associated potential research directions for each metric.

1.5. Paper Structure

The paper is structured as follows: Cloud Data Center energy management concepts and their energy efficiency metrics are presented in Section 2 and Section 3, respectively. Real-world case studies are provided in Section 4; challenges, limitations, and future work in Section 5; and the conclusion in Section 6.

2. Energy Management in Cloud Data Centers

Cloud Data Centers consume energy primarily for server operation, cooling systems, networking equipment, and power distribution [40,63,64,65,66]. Consequently, effective energy management in CDCs holds considerable importance. A typical CDC energy supply and consuming components are shown in Figure 4:

2.1. Servers and Racks

Racks and servers are the central computational engines of a CDC [67]. A server rack typically contains multiple servers stacked in vertical slots, including Central Processing Units (CPUs) [68,69], Graphics Processing Units (GPUs) [70,71,72,73], Random Access Memory (RAM) [74,75,76], storage drives [77,78,79,80], and networking interfaces [81,82,83,84,85,86]. These servers handle user requests, run applications, manage databases, and execute processing tasks. Their power draw is continuous and scales with computing intensity. A high-density server environment leads to significant heat output, necessitating robust cooling solutions. Moreover, the performance-optimized design of modern servers often prioritizes speed over energy efficiency, further escalating their power needs. Thus, energy consumed by racks and servers is one of the most substantial factors in CDC energy efficiency.

2.2. Uninterruptible Power Supply (UPS)

Uninterruptible Power Supply is a system that ensures uninterrupted operation during a power failure [87,88]. It provides short-term backup power using batteries (usually Lithium-Ion Batteries (LIBs) [89,90]), and it conditions the incoming power to protect equipment from surges, drops, and noise. These systems incur energy losses in converting Alternating Current (AC) to Direct Current (DC) (to charge batteries) and DC back to AC (to power the load), known as double conversion [91,92,93,94]. These conversions are essential for high reliability; however, they consume additional power. Moreover, batteries require ongoing charging, and in large-scale CDCs the number of UPS units is large enough to make them one of the largest IT-related energy consumers.
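The double-conversion loss described above is simply the product of the two conversion-stage efficiencies. A brief sketch (the 96% stage efficiencies are assumed values for illustration, not vendor figures):

```python
def double_conversion_efficiency(eta_rectifier: float, eta_inverter: float) -> float:
    """End-to-end efficiency of an online (double-conversion) UPS:
    AC -> DC (rectifier/charger) followed by DC -> AC (inverter)."""
    return eta_rectifier * eta_inverter

# Two assumed 96%-efficient stages lose about 7.8% of the power
# passing through the UPS, even before battery charging is counted.
eta = double_conversion_efficiency(0.96, 0.96)
print(f"{(1 - eta) * 100:.1f}% loss")  # 7.8% loss
```

This multiplicative loss is why the distributed configurations in Section 2.3, which avoid one or both conversions, can improve efficiency during normal operation.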

2.3. Power Distribution Unit (PDU)

Power Distribution Units are an important component within CDCs, distributing power from the UPS or main supply to individual equipment, such as servers and network devices [95,96,97]. They often include transformers, circuit breakers, and monitoring features. Power Distribution Units step down voltage and convert power to appropriate formats, during which energy losses occur. Intelligent PDUs allow for real-time power consumption monitoring and environmental condition tracking, which adds to their power usage. Generally, a CDC’s power supply, including UPSs, PDUs, and Renewable Energy Sources (RESs), takes one of two forms [98]:
  • Centralized (Figure 5): Electricity from a single UPS is distributed to multiple PDUs, which then channel the power to server racks. To avoid any delay in switching to UPS power, CDCs are equipped with double conversion UPS systems.
  • Distributed (Figure 6 and Figure 7): Instead of a centralized UPS, a battery cabinet serves each group of racks, supporting their servers (Figure 6). This approach eliminates double conversion by modifying server power supply units to accept both AC power from the grid and DC power from the battery cabinet. The battery cabinet distributes DC power directly to the servers. As another form of distributed UPS, a battery is integrated into each server, acting as a per-server UPS. This configuration eliminates the AC/DC/AC double conversion, enhancing energy efficiency during normal operation, and it positions AC distribution closer to the IT load before conversion (Figure 7).

2.4. Information Technology (IT) Rooms and Equipment

Information Technology rooms contain not only servers but also the essential components of firewalls [99], switches [100], routers [101,102], load balancers [103], storage systems, and monitoring consoles [104], which handle data routing, traffic control [105,106,107], storage functions, and system diagnostics. These systems run continuously and rely on redundant setups for high availability. Additionally, control rooms with operator workstations, large display walls, and supporting electronics are included in this category. Although each piece of equipment consumes less power than servers, their combined energy demand significantly contributes to the IT-related energy footprint.

2.5. Heating, Ventilation, and Air Conditioning (HVAC)

Heating, Ventilation, and Air Conditioning systems are the largest non-IT energy consumer. They are responsible for maintaining stable temperature and humidity to prevent overheating [108], which can degrade or damage sensitive electronics. Cloud Data Centers use advanced cooling strategies, such as those presented in Table 1. These systems must operate continuously and at scale, especially in high-density environments where thermal loads are extreme. Energy is consumed not only by compressors and fans but also by pumps and sensors. Inefficient HVAC operation leads to higher PUE, a key metric for energy performance in CDCs, which is presented in the next section.

2.6. Facility Security, Lighting, and Offices

Security infrastructure restricts access to CDCs to authorized personnel, utilizing surveillance cameras [191], biometric access systems [192,193], motion sensors [194], alarms, and, occasionally, on-site monitoring rooms. Operating continuously with redundancies, backup power, and significant storage for video footage, these systems, while not major individual energy consumers, add to the facility’s baseline power usage. Lighting systems are another such consumer.
Lighting illuminates server rooms, offices, corridors, emergency exits, and outdoor areas in CDCs [195]. Although modern facilities use energy-efficient Light-Emitting Diode (LED) lights and smart controls, such as motion sensors and timers to reduce usage, lighting still contributes significantly to non-IT energy consumption. In large CDCs with multiple shifts or frequent maintenance, lighting’s cumulative energy demand is substantial.
Cloud Data Center offices support staff, including administrators, engineers, facility managers, and support teams, and they feature workstations, printers, telecommunication equipment, and often individual HVAC systems. While their energy use is lower than that of IT and cooling systems, these spaces contribute to non-IT energy consumption.
Based on the provided context about the energy-consuming components of a CDC, several energy efficiency metrics are useful in their efficient operation and management for considering usage, costs, overall efficiency, and sustainability. These metrics are presented in Section 3.

3. Energy Efficiency Metrics in Cloud Data Centers

Energy efficiency is the ratio of useful work output by a system to the total energy input [196,197,198]. In CDCs, this efficiency reflects the productive work carried out by various subsystems relative to the energy supplied [51]. To this end, we have divided these energy efficiency metrics into two classes:
  • IT-related metrics (Table 2 and Table 3): Measurements that evaluate the energy performance of computing and networking components. These metrics assess how effectively IT resources utilize energy to perform computational tasks, focusing on the ratio of computational output to energy consumed, the degree of resource utilization, and the adaptability of power consumption to workload variations.
  • Non-IT-related metrics (Table 4 and Table 5): These are used in the evaluation of supporting infrastructure such as power distribution, cooling systems, and building facilities. These metrics measure the proportion of energy used by non-IT systems relative to total energy consumption, with an emphasis on minimizing overhead and improving the efficiency of physical infrastructure and environmental controls.
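As context for the tables that follow, the most widely cited facility-level metrics (PUE, DCiE, and CUE) are one-line ratios. A minimal Python sketch using their standard definitions, with illustrative, made-up energy and emissions figures:

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.
    Ideal value is 1.0 (no non-IT overhead)."""
    return total_facility_kwh / it_kwh

def dcie(total_facility_kwh: float, it_kwh: float) -> float:
    """Data Center infrastructure Efficiency: the reciprocal of PUE,
    expressed as a percentage."""
    return 100.0 * it_kwh / total_facility_kwh

def cue(total_co2_kg: float, it_kwh: float) -> float:
    """Carbon Usage Effectiveness: kg of CO2 emitted per kWh of IT energy."""
    return total_co2_kg / it_kwh

# Illustrative figures: 10 GWh total facility energy, 8 GWh IT energy,
# 3,500 t of CO2 emissions over the same period.
print(pue(10_000_000, 8_000_000))   # 1.25
print(dcie(10_000_000, 8_000_000))  # 80.0
```
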
According to Table 2 and Table 3, several sets of correlations are created, where improving one metric impacts others positively or negatively. Reducing APC positively correlates with ITEE, meaning lower power usage enhances efficiency of rated performance per power. However, it negatively correlates with PpW, CPE, and ScE, as cutting power may reduce performance or compute output, worsening these metrics that aim for higher values. Conversely, improving CPE, which measures compute output per IT power and targets higher values, positively correlates with PpW and ScE, indicating that better compute efficiency boosts performance and server efficiency, but it negatively correlates with APC, as higher compute output may increase power usage. DWPE shows positive correlations with ITEE and CPE, suggesting higher workload efficiency aligns with better equipment and compute efficiency, but it negatively impacts APC. EWR, where lower is better, negatively correlates with CPE, PpW, and ScE, meaning less energy per task improves these performance metrics, but it positively correlates with APC, as lower energy use may reflect higher power consumption. ITEE positively correlates with PpW and CPE but negatively with EWR, indicating efficient equipment supports performance but it may increase energy per task. OSWE positively correlates with Data Center energy Productivity (DCeP) and Data Center Performance Efficiency (DCPE) but negatively with PUE, suggesting system-wide efficiency improvements conflict with PUE optimization. PpW and ScE share strong positive correlations with each other, CPE, and ITEE, but they negatively correlate with the EWR and APC, reinforcing their alignment with performance efficiency. SPUE positively correlates with PUE but negatively with PpW and ScE, indicating trade-offs in server energy distribution. 
Finally, SWaP positively correlates with Data Center Power Density (DCPD) but negatively with the EWR, showing that optimizing performance per space and power aligns with density. However, it may increase energy per task.
About non-IT-related metrics (Table 4 and Table 5), the Corporate Average Data center Efficiency (CADE) metric, which targets a value of 1, positively correlates with the IT Equipment Utilization (ITEU) and DCiE, indicating that higher IT asset utilization and IT energy efficiency improve asset deployment efficiency. Still, it negatively correlates with PUE, as better asset efficiency often increases total facility power relative to IT power. The Data Center Availability (DCA) metric, also aiming for 1, negatively correlates with PUE Adjusted for Reliability (PUEreliability), suggesting that maximizing uptime may compromise reliability adjustments for power usage. The Data Center energy Productivity (DCeP) metric, targeting 1, positively correlates with DCPE and OSWE, showing that increased useful work per facility energy aligns with output efficiency and system workload efficiency, but it negatively correlates with PUE, as higher work output may elevate total power. Similarly, DCPE, also targeting 1, mirrors these correlations with DCeP, OSWE, and PUE. The Data Center green Efficiency (DCgE) metric, aiming for 1, positively correlates with Green Energy Coefficient (GEC) and DCiE, indicating that greater renewable energy use enhances green energy contributions and IT efficiency, but negatively correlates with CUE, as renewable energy reduces CO2 emissions. The DCPD metric, where higher values vary by context, positively correlates with SWaP, suggesting that higher IT power density supports space-adjusted performance, but negatively correlates with the Rack Cooling Index (RCI), as dense power usage may strain cooling infrastructure.
The Data Center Fixed to Variable Energy Ratio (DC-FVER) metric, where lower is better, negatively correlates with the Cooling Effectiveness Ratio (CER), indicating that reducing fixed-to-variable energy ratios conflicts with cooling efficiency. The Data Hall Utilization Efficiency (DH-UE) and Data Hall Utilization Rate (DH-UR) metrics, both targeting 1, positively correlate with each other and Total Utilization Efficiency (TUE), reflecting that active floor and rack deployment efficiencies enhance total utilization but lack negative correlations in the table. The Energy Baseline Score (EBS) metric, aiming for values less than 1, positively correlates with PUE, suggesting that lower energy baselines align with higher facility-to-IT power ratios, but it negatively correlates with Power Efficiency Savings (PEsavings), as reduced actual energy hinders savings potential. The Hardware Power Overhead Multiplier (H-POM) metric, targeting 1, positively correlates with ITEU, GEC, and DCiE, indicating that holistic performance aligns with utilization and green efficiency, but it negatively correlates with PUE. The Power Delivery Efficiency (PDE) and PEsavings metrics, both aiming for 1, positively correlate with DCiE and negatively with PUE, showing that efficient power delivery and energy savings enhance IT efficiency but increase facility power ratios. The PUEreliability metric, targeting 1, positively correlates with PUE but negatively with DCA, reflecting trade-offs between reliability and uptime. The System Infrastructure Power Optimization Metric (SI-POM) metric, also aiming for 1, positively correlates with CER and PDE but negatively with PUE, indicating that site infrastructure efficiency supports cooling and power delivery but conflicts with PUE. The TUE metric positively correlates with ITEU, DH-UR, and DH-UE, reinforcing utilization efficiencies, but it negatively correlates with PUE. 
Finally, PUE, DCiE, and CUE form a tight correlation cluster: DCiE positively correlates with PDE and negatively with PUE, while CUE positively correlates with PUE, highlighting that minimizing CO2 emissions and maximizing IT efficiency both push the facility-to-IT power ratio down.
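The algebra behind this cluster follows directly from the metric definitions: DCiE is the reciprocal of PUE, and, for a fixed grid carbon-emission factor, CUE equals that factor times PUE, so CUE tracks PUE while DCiE moves opposite to it. A minimal Python sketch with illustrative (not literature-derived) energy figures:

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_kwh

def dcie(total_facility_kwh: float, it_kwh: float) -> float:
    """Data Center infrastructure Efficiency: the reciprocal of PUE."""
    return it_kwh / total_facility_kwh

def cue(total_facility_kwh: float, it_kwh: float, grid_kg_co2_per_kwh: float) -> float:
    """Carbon Usage Effectiveness: facility CO2 emissions per IT kWh (= factor * PUE)."""
    return grid_kg_co2_per_kwh * total_facility_kwh / it_kwh

# Illustrative annual figures (kWh) and grid factor (kg CO2/kWh), not measured data:
total, it, cef = 1_500_000.0, 1_200_000.0, 0.4
p = pue(total, it)                                  # 1.25
assert abs(dcie(total, it) - 1 / p) < 1e-12         # DCiE is exactly 1/PUE
assert abs(cue(total, it, cef) - cef * p) < 1e-12   # CUE rises and falls with PUE
```

Because both identities hold by construction, any operational change that lowers PUE necessarily raises DCiE and, for an unchanged grid mix, lowers CUE as well.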

4. Real-World Examples

The reviewed metrics are used by corporations such as Google and Microsoft to manage the energy efficiency of their CDCs. For instance, in 2023 Google reported an average annual PUE of 1.10 across its global fleet of large-scale CDCs, well below the industry average of 1.58 [258]. Microsoft tracks PUE and Water Usage Effectiveness (WUE) to monitor energy and water efficiency in its data centers. For the period from 1 July 2023 to 30 June 2024, Microsoft reported PUE and WUE figures for its operational CDCs, emphasizing location-specific variables such as climate and humidity. A recent Microsoft study also highlighted that advanced cooling methods, such as cold plate and immersion cooling, can reduce data center emissions and water usage, particularly for AI workloads [259].
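To put these figures in perspective, the gap between a PUE of 1.10 and the 1.58 industry average can be translated into overhead (non-IT) energy; the sketch below assumes a hypothetical 100 GWh annual IT load purely for illustration, not a figure reported by either company:

```python
def overhead_energy_kwh(it_energy_kwh: float, pue: float) -> float:
    """Non-IT energy (cooling, power distribution, lighting) implied by a PUE."""
    return it_energy_kwh * (pue - 1.0)

it_load_kwh = 100_000_000          # hypothetical 100 GWh annual IT load
fleet = overhead_energy_kwh(it_load_kwh, 1.10)     # ~10 GWh of overhead
industry = overhead_energy_kwh(it_load_kwh, 1.58)  # ~58 GWh of overhead
saving_pct = 100 * (industry - fleet) / industry
print(f"Overhead energy reduced by {saving_pct:.0f}%")  # about 83% less overhead
```

The same arithmetic explains why fleet-average PUE differences of a few tenths translate into very large absolute energy savings at hyperscale.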
PayPal improved its CDC efficiency by adopting NVIDIA’s accelerated computing, cutting server energy consumption nearly eightfold while enhancing real-time fraud detection. This highlights how workload-specific efficiency improvements can be measured beyond traditional metrics such as PUE, which captures only the first step [260].

5. Challenges and Future Works

This section discusses the challenges facing each metric and outlines future research directions for CDCs.

5.1. Challenges

5.1.1. IT-Related Metrics

Each of the presented IT-related energy efficiency metrics for CDCs faces challenges and limitations in quantifying power usage and performance in modern computing environments, as shown in Figure 8. APC struggles with the dynamic variability of cloud workloads, complicating consistent tracking, and it fails to reflect the productivity of the power used, limiting its efficiency insights. CPE is challenged by the heterogeneity of cloud tasks, which makes computational output measurement complex, and it overlooks non-computational factors such as memory and storage, missing a holistic view. DWPE grapples with consistently defining workloads across virtualized platforms and is sensitive to measurement timing, leading to inconsistent results. EWR faces difficulties in distinguishing wasted from useful energy and struggles to objectively assess task usefulness in mixed environments. ITEE’s efficiency varies with workload type, hindering cross-platform comparisons, and it ignores software and system management inefficiencies. OSWE finds it hard to define system boundaries in distributed cloud settings and does not distinguish between idle and active states. PpW is affected by workload diversity, making comparisons misleading, and it lacks standardized baselines for reliable benchmarking. ScE is complicated by workload distribution layers and misses broader data center interactions, while SPUE’s limited industry adoption and architecture-specific dynamics reduce its comparability. Lastly, SWaP must balance the conflicting optimization of space, power, and performance, with open challenges in defining trade-offs and achieving automatic tuning.
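Many of these pitfalls share a root cause: each metric divides some notion of output by power, so its value hinges on how output is defined. The sketch below, with hypothetical throughput figures, shows two servers swapping rank under Performance per Watt depending on the workload, which is precisely the benchmarking ambiguity noted for PpW:

```python
def perf_per_watt(throughput_ops: float, power_w: float) -> float:
    """Performance per Watt: useful output per unit of electrical power."""
    return throughput_ops / power_w

# Hypothetical servers whose throughput depends on the workload (ops/s):
servers = {
    "server_a": {"power_w": 400, "web_ops": 90_000, "ai_ops": 1_200},
    "server_b": {"power_w": 650, "web_ops": 110_000, "ai_ops": 3_900},
}

for workload in ("web_ops", "ai_ops"):
    winner = max(servers, key=lambda s: perf_per_watt(servers[s][workload],
                                                      servers[s]["power_w"]))
    print(workload, "->", winner)
# web_ops -> server_a  (225 vs ~169 ops/s/W)
# ai_ops -> server_b   (3 vs 6 ops/s/W)
```

Without a standardized workload baseline, either server can legitimately be advertised as "more efficient", which is why cross-platform PpW comparisons are unreliable.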

5.1.2. Non-IT-Related Metrics

The related challenges for non-IT-related metrics are manifested in Figure 9.
Corporate Average Data Center Efficiency faces the challenge of aggregating data across diverse sites with varying architectures and operational models, which complicates consistent measurement. Its limitation lies in potentially masking inefficiencies at individual data centers through the averaging effect, reducing its ability to pinpoint specific issues. DCA struggles with balancing energy efficiency against the need for high uptime, often leading to conservative, less efficient designs. Moreover, it is fundamentally a reliability metric rather than an efficiency one, as it does not measure how efficiently availability is achieved.
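The averaging effect can be made concrete. Treating CADE in a simplified two-factor form (facility efficiency times IT asset efficiency, an assumption made here for illustration) and averaging over hypothetical sites shows how a fleet-level figure hides a poorly performing outlier:

```python
def cade(facility_eff: float, it_asset_eff: float) -> float:
    """CADE in a simplified two-factor form: facility eff. x IT asset eff."""
    return facility_eff * it_asset_eff

# Hypothetical (facility_eff, it_asset_eff) pairs for three sites:
sites = [(0.50, 0.70), (0.60, 0.60), (0.20, 0.30)]
per_site = [cade(f, a) for f, a in sites]
fleet_avg = sum(per_site) / len(per_site)
print(min(per_site), fleet_avg)  # a ~0.26 fleet average masks the 0.06 outlier
```

The fleet average looks moderate even though one site delivers less than a quarter of the others' efficiency, which is exactly why CADE alone cannot pinpoint where remediation is needed.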
Data Center Energy Productivity encounters difficulties in standardizing the measurement of productivity for abstract services such as cloud functions, lacking a universal approach. Its limitation is the challenge of correlating energy input with meaningful service output in a generalized manner, reducing its applicability. Data Center green Efficiency (DCgE) is hindered by restricted access to consistent, verifiable green energy data, and its limitation lies in not accounting for the intermittent availability of green sources or energy storage inefficiencies, which can skew results.
Data Center Power Density faces challenges with cooling bottlenecks and thermal hot spots caused by higher density designs. Its limitation is that it may prioritize compact layouts at the expense of cooling efficiency and maintainability, leading to operational trade-offs. DCPE requires complex performance benchmarking for diverse services in hybrid environments, and its lack of a universal performance definition limits comparability across data centers.
Data Center Fixed to Variable Energy Ratio demands granular instrumentation to distinguish fixed from variable energy components, a challenging task. Its static nature, not responsive to real-time load changes, is a key limitation. DH-UE is affected by layout heterogeneity and varying cooling strategies, which can distort utilization measurements, and it overlooks vertical space and cooling zones, providing an incomplete view of spatial efficiency.
Data Hall Utilization Rate lacks real-time rack-level monitoring and vendor standardization, limiting its precision. It only reflects space utilization, ignoring power or thermal efficiency, which restricts its scope. EBS struggles to establish reliable baselines in dynamic cloud environments, and its dependence on potentially outdated baseline periods undermines its relevance.
Power Efficiency Savings faces the challenge of quantifying savings through estimates rather than measured outcomes; without a standard reference, claimed savings may be inflated or inconsistent. SI-POM is intricate to quantify in shared facilities and lacks real-time tracking, missing transient inefficiencies.
Total Utilization Efficiency is complex to measure due to idle time, redundancy, and oversubscription, and its broad, aggregate nature reduces its actionability. PUE can be manipulated (for example, by shutting off unused equipment) and is often reported under optimal conditions, while its focus on infrastructure energy ignores IT utilization, leading to potential misinterpretation.
Data Center Infrastructure Efficiency has limited new insights unless combined with other metrics, and it fails to capture internal inefficiencies in IT or cooling subsystems. Finally, CUE requires access to often unavailable or approximate CO2 emissions data from energy providers, and its exclusion of offset mechanisms or lifecycle emissions results in an incomplete environmental assessment.

5.2. Future Works

5.2.1. IT-Related Metrics

For APC, future work will focus on creating intelligent, workload-aware tools that dynamically adjust power metrics in real time, based on specific system parameters. These tools will incorporate system utilization rates, hardware degradation over time, and cooling system dynamics, while leveraging AI models to predict and optimize energy consumption proactively with high accuracy. For CPE, the goal is to establish standardized computational output units tailored to diverse applications, ensuring precise alignment of power consumption with specific compute tasks. Adaptive models are proposed to dynamically quantify the computational value of heterogeneous workloads, improving energy efficiency in real-time operational scenarios.
Data Center Workload Power Efficiency focuses on establishing universal workload definitions and standardized benchmark suites to ensure consistent, reproducible evaluation across diverse computing platforms. Automated profiling tools are recommended to enhance workload tagging and power consumption traceability, enabling precise, real-time assessments of energy efficiency. For EWR, the objective could be to develop refined methods for categorizing energy wastage, clearly distinguishing between essential and non-essential energy consumption under varying workload conditions. Artificial Intelligence techniques could be proposed to identify inefficiency patterns and recommend targeted system-level optimizations to minimize energy waste.
IT Equipment Energy Efficiency aims to integrate software performance indicators and real-time workload classification to enable comprehensive efficiency assessments. It promotes hardware–software co-design strategies to optimize performance by aligning computational demands with resource utilization. OSWE focuses on decomposing energy consumption into active, idle, and background categories through fine-grained telemetry, while mitigating virtualization and orchestration overhead in cloud-native environments to enhance overall system efficiency.
Performance per Watt advancements focus on developing workload-specific databases to inform precise system configurations and utilizing machine learning to dynamically optimize operations for maximum energy efficiency in real time. Server Compute Efficiency emphasizes detailed intra-server monitoring, including per-core and per-thread performance analysis, and explores correlations between compute efficiency, thermal dynamics, and performance throttling to enhance server-level optimization.
Server Power Usage Effectiveness could be aligned with facility-level PUE metrics, with emphasis on developing automated reporting tools and dashboard integration for real-time efficiency tracking. SWaP calls for multi-objective optimization and AI-driven decision frameworks to effectively balance trade-offs between space utilization, power consumption, and computational performance. The use of Digital Twin (DT) models makes it possible to simulate and validate SWaP strategies before implementation, improving decision-making precision.
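The space-power-performance trade-off is commonly scored, in the formulation usually attributed to Sun Microsystems, as performance divided by the product of space and power; a toy sketch with hypothetical configurations illustrates the kind of objective such a multi-objective optimizer would balance:

```python
def swap_score(performance: float, space_u: float, power_w: float) -> float:
    """SWaP score: performance divided by (rack units x watts)."""
    return performance / (space_u * power_w)

# Hypothetical server configurations (performance in ops/s):
configs = {
    "dense_1u": {"perf": 50_000, "space_u": 1, "power_w": 500},
    "std_2u":   {"perf": 80_000, "space_u": 2, "power_w": 600},
}
best = max(configs, key=lambda c: swap_score(configs[c]["perf"],
                                             configs[c]["space_u"],
                                             configs[c]["power_w"]))
print(best)  # dense_1u (score 100.0 vs ~66.7)
```

A real multi-objective framework would replace this single scalar with a Pareto front over the three dimensions, since a single ratio hides which factor dominates the trade-off.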

5.2.2. Non-IT-Related Metrics

The future development of non-IT-related data center energy efficiency metrics will focus on three primary directions: (1) enhancing standardization by defining uniform calculation methodologies for metrics such as facility energy re-use and cooling efficiency across geographic and regulatory boundaries; (2) automating data collection and analysis through integration with building management systems, Internet of Things (IoT) sensors, and AI-based anomaly detection; and (3) using advanced technologies such as digital twins and predictive analytics to provide dynamic and scenario-based insights into infrastructure performance. For Corporate Average Data center Efficiency (CADE), upcoming efforts could emphasize the creation of standardized cross-organizational key performance indicators that account for energy use in auxiliary systems (such as lighting, HVAC, security) and align with Environmental, Social, and Governance (ESG) requirements, enabling consistent corporate-level reporting of environmental performance. DCA initiatives could increasingly incorporate energy-aware reliability models by quantifying the trade-offs between power consumption and system availability (such as the impact of redundancy level versus efficiency), supported by real-time telemetry and AI-driven predictive maintenance to optimize uptime without excess energy overhead.
For DCeP, the focus will shift toward automated, workload-aware productivity measurement using real-time operational data (task completion rate, computational throughput per energy unit), while ensuring alignment with changing industry performance benchmarks, such as output-based metrics, to enable accurate and continuous evaluation of energy-to-output efficiency.
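Whatever telemetry feeds it, DCeP reduces to useful work delivered per unit of total facility energy; a minimal sketch with hypothetical task counts illustrates the ratio the automated measurement would continuously update:

```python
def dcep(useful_work_units: float, total_energy_kwh: float) -> float:
    """Data Center energy Productivity: useful work per kWh of facility energy."""
    return useful_work_units / total_energy_kwh

# Hypothetical figures for one reporting window:
completed_tasks = 4_800_000   # weighted task completions (the "useful work" proxy)
facility_energy_kwh = 120_000
print(dcep(completed_tasks, facility_energy_kwh))  # 40.0 work units per kWh
```

The open problem noted above is choosing and weighting the "useful work" proxy so that this number is comparable across heterogeneous services.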
Data Center green Efficiency will advance by implementing blockchain-based or smart contract systems for green energy traceability, enabling verifiable tracking of renewable energy inputs at a granular level, while also incorporating comprehensive Lifecycle Assessments (LCAs)—including embodied carbon and end-of-life emissions—to offer a more accurate evaluation of environmental impacts. DCPD will prioritize thermal-aware high-density architecture design, using Computational Fluid Dynamics (CFD) simulations and liquid cooling integration to mitigate hot spots and improve heat dissipation efficiency in rack-scale deployments. DCPE aims to utilize AI-driven adaptive performance models—such as reinforcement learning and digital twin simulation—to dynamically predict and balance workloads against energy consumption profiles, optimizing the energy-performance trade-off in near real-time. DC-FVER will refine cost allocation frameworks by incorporating time-of-use energy pricing models and real-time consumption data and by integrating with smart grid interfaces, enhancing cost traceability and enabling demand-side response participation. DH-UE and DH-UR will automate space and rack usage tracking through RFID, computer vision, and occupancy sensors, while optimizing modular hardware design and resource allocation through constraint-aware algorithms to maximize spatial and operational efficiency.
EBS development will focus on automated baseline recalibration using historical and streaming operational data, supported by predictive analytics to detect deviation trends and ensure that energy performance benchmarks remain relevant over time. H-POM intends to implement sub-component-level power telemetry (e.g., PSU-level, network interface cards) and develop standardized power overhead metrics, allowing for detailed attribution of energy use across hardware elements. PDE can enhance power delivery architectures by incorporating high-efficiency DC–DC conversion, busbar optimization, and real-time loss-tracking algorithms to minimize distribution inefficiencies within the data center power path. PEsavings may enable automated energy savings quantification by linking real-time energy data with dynamic cost models, providing finance and sustainability teams with clearer, auditable insights into cost-to-efficiency benefits. SI-POM can consider using IoT networks for fine-grained, edge-level monitoring of power and thermal systems, enabling closed-loop control schemes to reduce overcooling and improve power delivery precision. TUE will focus on building AI-assisted unified utilization frameworks that incorporate compute, memory, storage, and networking usage data to improve global resource scheduling and holistic system efficiency. PUE will be refined to consider workload type and utilization context, with hybrid metrics that integrate CO2-equivalent emissions per compute output to better align efficiency reporting with carbon accountability. DCiE development will emphasize real-time analytics platforms that correlate infrastructure energy usage with renewable input and cooling efficiency data to deliver more actionable, infrastructure-level insights. 
Finally, CUE can evolve by standardizing carbon emissions reporting methodologies, including Scope 2 emissions, and by integrating renewable energy certificates and time-stamped carbon intensity data to improve the accuracy and consistency of data center CO2 footprint assessments.

6. Conclusions

In this study, various components contributing to energy consumption within CDCs, including UPS and PDU configurations, as well as diverse energy efficiency metrics, have been systematically examined. The analyzed metrics offer critical insights into optimizing resource utilization, reducing energy waste, and ensuring compliance with environmental and regulatory standards. By addressing the inherent challenges and limitations of these metrics and exploring prospective research avenues—particularly the integration of AI technologies—further enhancements in energy efficiency could be realized. Advancements in this domain not only promise cost reductions and lower carbon emissions but also support the scalability and sustainability of digital services, fostering a more resilient and environmentally conscious digital infrastructure.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/electronics14112214/s1, Supplementary Materials: PRISMA Checklist.

Author Contributions

A.S.: conceptualization, software, validation, visualization, original writing, review/editing, formal analysis, investigation. H.S.: original writing, review/editing, formal analysis. A.R.: supervision, project management, formal analysis, validation, review/editing, funding acquisition. A.O.: review/editing, formal analysis, funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) under the Discovery Grant Stream (RGPIN-2020-05513). It was also supported by the Battery Management Systems for Next-Generation Data Centers (BMS-DC) project through a grant from Energy Cluster Denmark.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No data were used in this review paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AC: Alternating Current
AI: Artificial Intelligence
APC: Average Power Consumption
CADE: Corporate Average Data center Efficiency
CAGR: Compound Annual Growth Rate
CDC: Cloud Data Center
CER: Cooling Effectiveness Ratio
CPE: Compute Power Efficiency
CPU: Central Processing Unit
CRAC: Computer Room Air Conditioning
CRAH: Computer Room Air Handler
CUE: Carbon Usage Effectiveness
DC: Direct Current
DC-FVER: Data Center Fixed to Variable Energy Ratio
DCA: Data Center Availability
DCeP: Data Center energy Productivity
DCgE: Data Center green Efficiency
DCiE: Data Center infrastructure Efficiency
DCPD: Data Center Power Density
DCPE: Data Center Performance Efficiency
DH-UE: Data Hall Utilization Efficiency
DH-UR: Data Hall Utilization Rate
DT: Digital Twin
DWPE: Data Center Workload Power Efficiency
EBS: Energy Baseline Score
ESG: Environmental, Social, and Governance
EWR: Energy Wastage Ratio
GEC: Green Energy Coefficient
GPU: Graphics Processing Unit
H-POM: Hardware Power Overhead Multiplier
HVAC: Heating, Ventilation, and Air Conditioning
IEA: International Energy Agency
IoT: Internet of Things
IT: Information Technology
ITEE: IT Equipment Energy Efficiency
ITEU: IT Equipment Utilization
LED: Light-Emitting Diode
LIB: Lithium-Ion Battery
NN: Neural Network
NSERC: Natural Sciences and Engineering Research Council of Canada
OSWE: Operational System Workload Efficiency
PDE: Power Delivery Efficiency
PDU: Power Distribution Unit
PEsavings: Power Efficiency Savings
PPA: Power Purchase Agreement
PpW: Performance per Watt
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PUE: Power Usage Effectiveness
PUEreliability: PUE Adjusted for Reliability
RAM: Random Access Memory
RCI: Rack Cooling Index
RES: Renewable Energy Source
ScE: Server compute Efficiency
SI-POM: System Infrastructure Power Optimization Metric
SPUE: Server Power Usage Effectiveness
SWaP: Space, Wattage, and Performance
TUE: Total Utilization Efficiency
UPS: Uninterruptible Power Supply
VM: Virtual Machine
WUE: Water Usage Effectiveness

References

  1. Xu, C.; Wang, K.; Sun, Y.; Guo, S.; Zomaya, A.Y. Redundancy avoidance for big data in data centers: A conventional neural network approach. IEEE Trans. Netw. Sci. Eng. 2018, 7, 104–114. [Google Scholar] [CrossRef]
  2. Kaur, K.; Garg, S.; Kaddoum, G.; Bou-Harb, E.; Choo, K.K.R. A big data-enabled consolidated framework for energy efficient software defined data centers in IoT setups. IEEE Trans. Ind. Inform. 2019, 16, 2687–2697. [Google Scholar] [CrossRef]
  3. Xu, C.; Wang, K.; Li, P.; Xia, R.; Guo, S.; Guo, M. Renewable energy-aware big data analytics in geo-distributed data centers with reinforcement learning. IEEE Trans. Netw. Sci. Eng. 2018, 7, 205–215. [Google Scholar] [CrossRef]
  4. Rong, H.; Zhang, H.; Xiao, S.; Li, C.; Hu, C. Optimizing energy consumption for data centers. Renew. Sustain. Energy Rev. 2016, 58, 674–691. [Google Scholar] [CrossRef]
  5. Chaudhary, R.; Aujla, G.S.; Kumar, N.; Rodrigues, J.J. Optimized big data management across multi-cloud data centers: Software-defined-network-based analysis. IEEE Commun. Mag. 2018, 56, 118–126. [Google Scholar] [CrossRef]
  6. Gu, L.; Zeng, D.; Li, P.; Guo, S. Cost minimization for big data processing in geo-distributed data centers. IEEE Trans. Emerg. Top. Comput. 2014, 2, 314–323. [Google Scholar] [CrossRef]
  7. Zhou, Q.; Lou, J.; Jiang, Y. Optimization of energy consumption of green data center in e-commerce. Sustain. Comput. Inform. Syst. 2019, 23, 103–110. [Google Scholar] [CrossRef]
  8. Dong, C.; Wen, W.; Xu, T.; Yang, X. Joint optimization of data-center selection and video-streaming distribution for crowdsourced live streaming in a geo-distributed cloud platform. IEEE Trans. Netw. Serv. Manag. 2019, 16, 729–742. [Google Scholar] [CrossRef]
  9. Ranjan, R.; Wang, L.; Zomaya, A.Y.; Tao, J.; Jayaraman, P.P.; Georgakopoulos, D. Advances in methods and techniques for processing streaming big data in datacentre clouds. IEEE Trans. Emerg. Top. Comput. 2016, 4, 262–265. [Google Scholar] [CrossRef]
  10. Chen, W.; Paik, I.; Li, Z. Cost-aware streaming workflow allocation on geo-distributed data centers. IEEE Trans. Comput. 2016, 66, 256–271. [Google Scholar] [CrossRef]
  11. He, J.; Chaintreau, A.; Diot, C. A performance evaluation of scalable live video streaming with nano data centers. Comput. Netw. 2009, 53, 153–167. [Google Scholar] [CrossRef]
  12. Sajjad, H.P.; Danniswara, K.; Al-Shishtawy, A.; Vlassov, V. Spanedge: Towards unifying stream processing over central and near-the-edge data centers. In Proceedings of the 2016 IEEE/ACM Symposium on Edge Computing (SEC), Washington, DC, USA, 27–28 October 2016; pp. 168–178. [Google Scholar]
  13. Ranjan, R. Streaming big data processing in datacenter clouds. IEEE Cloud Comput. 2014, 1, 78–83. [Google Scholar] [CrossRef]
  14. Simić, M.; Prokić, I.; Dedeić, J.; Sladić, G.; Milosavljević, B. Towards edge computing as a service: Dynamic formation of the micro data-centers. IEEE Access 2021, 9, 114468–114484. [Google Scholar] [CrossRef]
  15. Bilal, K.; Khalid, O.; Erbad, A.; Khan, S.U. Potentials, trends, and prospects in edge technologies: Fog, cloudlet, mobile edge, and micro data centers. Comput. Netw. 2018, 130, 94–120. [Google Scholar] [CrossRef]
  16. Jiang, C.; Fan, T.; Gao, H.; Shi, W.; Liu, L.; Cérin, C.; Wan, J. Energy aware edge computing: A survey. Comput. Commun. 2020, 151, 556–580. [Google Scholar] [CrossRef]
  17. Jiang, C.; Cheng, X.; Gao, H.; Zhou, X.; Wan, J. Toward computation offloading in edge computing: A survey. IEEE Access 2019, 7, 131543–131558. [Google Scholar] [CrossRef]
  18. Premsankar, G.; Di Francesco, M.; Taleb, T. Edge computing for the Internet of Things: A case study. IEEE Internet Things J. 2018, 5, 1275–1284. [Google Scholar] [CrossRef]
  19. Safari, A.; Taghizad-Tavana, K.; Tarafdar Hagh, M. Artificial Intelligence-Driven Optimization of Internet Data Center Energy Consumption in Active Distribution Networks: A Transformer-Based Robust Control Model with Spatio-Temporal Flexibility Analytics. Available at SSRN 5056252. 2024. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5056252 (accessed on 14 May 2025).
  20. Koot, M.; Wijnhoven, F. Usage impact on data center electricity needs: A system dynamic forecasting model. Appl. Energy 2021, 291, 116798. [Google Scholar] [CrossRef]
  21. U.S. Department of Energy. DOE Releases New Report Evaluating Increase in Electricity Demand from Data Centers. 2024. Available online: https://www.energy.gov/articles/doe-releases-new-report-evaluating-increase-electricity-demand-data-centers (accessed on 23 April 2025).
  22. IDC. IDC Report Reveals AI-Driven Growth in Datacenter Energy Consumption. 2024. Available online: https://my.idc.com/getdoc.jsp?containerId=prUS52611224&utm_source=chatgpt.com (accessed on 23 April 2025).
  23. McKinsey & Company. Energy Consumption in Data Centers: Air versus Liquid Cooling. Retrieved from Boyd Corporation. 2023. Available online: https://www.boydcorp.com/blog/energy-consumption-in-data-centers-air-versus-liquid-cooling.html (accessed on 23 April 2025).
  24. Cai, S.; Gou, Z. Towards energy-efficient data centers: A comprehensive review of passive and active cooling strategies. Energy Built Environ. 2024. Available online: https://www.sciencedirect.com/science/article/pii/S2666123324000916 (accessed on 23 April 2025).
  25. Saâdaoui, F.; Jabeur, S.B. Analyzing the influence of geopolitical risks on European power prices using a multiresolution causal neural network. Energy Econ. 2023, 124, 106793. [Google Scholar] [CrossRef]
  26. International Energy Agency. Electricity Security Matters More than Ever–Power Systems in Transition. Retrieved from IEA. 2020. Available online: https://www.iea.org/reports/power-systems-in-transition/electricity-security-matters-more-than-ever (accessed on 23 April 2025).
  27. World Bank. Selecting and Implementing Demand Response Programs to Support Grid Flexibility: A Guidance Note for Practitioners; World Bank: Washington, DC, USA, 2023; Available online: https://documents.worldbank.org/en/publication/documents-reports/documentdetail/099647511282438850/idu10031c3e11c08e145e11b50a114842d7d19fd (accessed on 23 April 2025).
  28. Safari, A.; Daneshvar, M.; Anvari-Moghaddam, A. Energy Intelligence: A Systematic Review of Artificial Intelligence for Energy Management. Appl. Sci. 2024, 14, 11112. [Google Scholar] [CrossRef]
  29. Houston Chronicle. BlackRock CEO: AI Data Center Growth Could Be Limited by Texas Grid. 2024. Available online: https://www.houstonchronicle.com/business/energy/article/ceraweek-power-grid-texas-blackrock-20213874.php (accessed on 23 April 2025).
  30. Reccessary. Taiwan to Stop Approving Data Centers over 5MW in the North Due to Electricity Concerns. 2024. Available online: https://www.datacenterdynamics.com/en/news/taiwan-to-stop-large-data-centers-in-the-north-cites-insufficient-power/ (accessed on 23 April 2025).
  31. Data Center Dynamics. Three Mile Island Nuclear Power Plant to Return as Microsoft Signs 20-Year 835MW AI Data Center PPA. 2024. Available online: https://www.datacenterdynamics.com/en/news/three-mile-island-nuclear-power-plant-to-return-as-microsoft-signs-20-year-835mw-ai-data-center-ppa/ (accessed on 23 April 2025).
  32. Kurt, A.; Gumus, M. Sustainable planning of penal facilities through multi-objective location-allocation modelling and data envelopment analysis. Socio-Econ. Plan. Sci. 2025, 98, 102147. [Google Scholar] [CrossRef]
  33. Woodruff, J.Z.; Brenner, P.; Buccellato, A.P.; Go, D.B. Environmentally opportunistic computing: A distributed waste heat reutilization approach to energy-efficient buildings and data centers. Energy Build. 2014, 69, 41–50. [Google Scholar] [CrossRef]
  34. Shao, X.; Zhang, Z.; Song, P.; Feng, Y.; Wang, X. A review of energy efficiency evaluation metrics for data centers. Energy Build. 2022, 271, 112308. [Google Scholar] [CrossRef]
  35. Long, S.; Li, Y.; Huang, J.; Li, Z.; Li, Y. A review of energy efficiency evaluation technologies in cloud data centers. Energy Build. 2022, 260, 111848. [Google Scholar] [CrossRef]
  36. Fieni, G.; Rouvoy, R.; Seinturier, L. xPUE: Extending Power Usage Effectiveness Metrics for Cloud Infrastructures. arXiv 2025, arXiv:2503.07124. [Google Scholar]
  37. Sarkis-Onofre, R.; Catalá-López, F.; Aromataris, E.; Lockwood, C. How to properly use the PRISMA Statement. Syst. Rev. 2021, 10, 117. [Google Scholar] [CrossRef]
  38. Klemick, H.; Kopits, E.; Wolverton, A. How do data centers make energy efficiency investment decisions? Qualitative evidence from focus groups and interviews. Energy Effic. 2019, 12, 1359–1377. [Google Scholar] [CrossRef]
  39. Newkirk, A.C.; Hanus, N.; Payne, C.T. Expert and operator perspectives on barriers to energy efficiency in data centers. Energy Effic. 2024, 17, 63. [Google Scholar] [CrossRef]
  40. Dayarathna, M.; Wen, Y.; Fan, R. Data center energy consumption modeling: A survey. IEEE Commun. Surv. Tutor. 2015, 18, 732–794. [Google Scholar] [CrossRef]
  41. Gandhi, A.D.; Newbury, M.E. Evaluation of the energy efficiency metrics for wireless networks. Bell Labs Tech. J. 2011, 16, 207–215. [Google Scholar] [CrossRef]
  42. D’Aniello, F.; Sorrentino, M.; Rizzo, G.; Trifirò, A.; Bedogni, F. Introducing innovative energy performance metrics for high-level monitoring and diagnosis of telecommunication sites. Appl. Therm. Eng. 2018, 137, 277–287. [Google Scholar] [CrossRef]
  43. Hossfeld, T.; Wunderer, S.; Loh, F.; Schien, D. Analysis of Energy Intensity and Generic Energy Efficiency Metrics in Communication Networks: Limits, Practical Applications and Case Studies. IEEE Access 2024, 12, 105527–105549. [Google Scholar] [CrossRef]
  44. Daim, T.; Justice, J.; Krampits, M.; Letts, M.; Subramanian, G.; Thirumalai, M. Data center metrics: An energy efficiency model for information technology managers. Manag. Environ. Qual. Int. J. 2009, 20, 712–731. [Google Scholar] [CrossRef]
  45. Yuventi, J.; Mehdizadeh, R. A critical analysis of power usage effectiveness and its use as data center energy sustainability metrics. Energy Build. 2013, 64, 90–94. [Google Scholar] [CrossRef]
  46. Herrlin, M.K. Airflow and cooling performance of data centers: Two performance metrics. ASHRAE Trans. 2008, 114, 182–187. [Google Scholar]
  47. Xie, M.; Wang, J.; Liu, J. Evaluation metrics of thermal management in data centers based on exergy analysis. Appl. Therm. Eng. 2019, 147, 1083–1095. [Google Scholar] [CrossRef]
  48. Chang, Q.; Huang, Y.; Liu, K.; Xu, X.; Zhao, Y.; Pan, S. Optimization Control Strategies and Evaluation Metrics of Cooling Systems in Data Centers: A Review. Sustainability 2024, 16, 7222. [Google Scholar] [CrossRef]
  49. Capozzoli, A.; Serale, G.; Liuzzo, L.; Chinnici, M. Thermal metrics for data centers: A critical review. Energy Procedia 2014, 62, 391–400. [Google Scholar] [CrossRef]
  50. Capozzoli, A.; Chinnici, M.; Perino, M.; Serale, G. Review on performance metrics for energy efficiency in data center: The role of thermal management. In Proceedings of the International Workshop on Energy Efficient Data Centers; Springer: Berlin/Heidelberg, Germany, 2014; pp. 135–151. [Google Scholar]
  51. Reddy, V.D.; Setz, B.; Rao, G.S.V.; Gangadharan, G.; Aiello, M. Metrics for sustainable data centers. IEEE Trans. Sustain. Comput. 2017, 2, 290–303. [Google Scholar] [CrossRef]
  52. Gowans, D. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures. National Renewable Energy Laboratory. 2013. Available online: https://www.energy.gov/oe/uniform-methods-project-methods-determining-energy-efficiency-savings-specific-measures (accessed on 14 May 2025).
  53. Whitehead, B.; Andrews, D.; Shah, A.; Maidment, G. Assessing the environmental impact of data centres part 1: Background, energy use and metrics. Build. Environ. 2014, 82, 151–159. [Google Scholar] [CrossRef]
  54. Masanet, E.; Shehabi, A.; Lei, N.; Smith, S.; Koomey, J. Recalibrating global data center energy-use estimates. Science 2020, 367, 984–986. [Google Scholar] [CrossRef] [PubMed]
  55. Yadav, R.; Zhang, W.; Kaiwartya, O.; Singh, P.R.; Elgendy, I.A.; Tian, Y.C. Adaptive energy-aware algorithms for minimizing energy consumption and SLA violation in cloud computing. IEEE Access 2018, 6, 55923–55936. [Google Scholar] [CrossRef]
  56. Nambi, S.; Thanapal, P. Emo–Ts: An Enhanced Multi-Objective Optimization Algorithm for Energy-Efficient Task Scheduling In Cloud Data Centers. IEEE Access 2025, 13, 8187–8200. [Google Scholar] [CrossRef]
  57. Swain, S.R.; Parashar, A.; Singh, A.K.; Lee, C.N. An intelligent virtual machine allocation optimization model for energy-efficient and reliable cloud environment. J. Supercomput. 2025, 81, 1–26. [Google Scholar] [CrossRef]
  58. Elgendy, I.A.; Muthanna, A.; Alshahrani, A.; Hassan, D.S.; Alkanhel, R.; Elkawkagy, M. Optimizing Energy Efficiency in Vehicular Edge-Cloud Networks Through Deep Reinforcement Learning-Based Computation Offloading. IEEE Access 2024, 12, 191537–191550. [Google Scholar] [CrossRef]
  59. Chong, F.T.; Heck, M.J.R.; Ranganathan, P.; Saleh, A.A.M.; Wassel, H.M.G. Data Center Energy Efficiency: Improving Energy Efficiency in Data Centers Beyond Technology Scaling. IEEE Des. Test 2014, 31, 93–104. [Google Scholar] [CrossRef]
  60. Yang, L.; Li, X.; Sun, M.; Sun, C. Hybrid Policy-Based Reinforcement Learning of Adaptive Energy Management for the Energy Transmission-Constrained Island Group. IEEE Trans. Ind. Inform. 2023, 19, 10751–10762. [Google Scholar] [CrossRef]
  61. Zhang, N.; Yan, J.; Hu, C.; Sun, Q.; Yang, L.; Gao, D.W.; Guerrero, J.M.; Li, Y. Price-Matching-Based Regional Energy Market with Hierarchical Reinforcement Learning Algorithm. IEEE Trans. Ind. Inform. 2024, 20, 11103–11114. [Google Scholar] [CrossRef]
  62. Mastelic, T.; Brandic, I. Recent Trends in Energy-Efficient Cloud Computing. IEEE Cloud Comput. 2015, 2, 40–47. [Google Scholar] [CrossRef]
  63. Zhang, Q.; Meng, Z.; Hong, X.; Zhan, Y.; Liu, J.; Dong, J.; Bai, T.; Niu, J.; Deen, M.J. A survey on data center cooling systems: Technology, power consumption modeling and control strategy optimization. J. Syst. Archit. 2021, 119, 102253. [Google Scholar] [CrossRef]
  64. Ahmed, K.M.U.; Bollen, M.H.; Alvarez, M. A review of data centers energy consumption and reliability modeling. IEEE Access 2021, 9, 152536–152563. [Google Scholar] [CrossRef]
  65. Cho, J.; Kim, Y. Improving energy efficiency of dedicated cooling system and its contribution towards meeting an energy-optimized data center. Appl. Energy 2016, 165, 967–982. [Google Scholar] [CrossRef]
  66. Shuja, J.; Bilal, K.; Madani, S.A.; Othman, M.; Ranjan, R.; Balaji, P.; Khan, S.U. Survey of techniques and architectures for designing energy-efficient data centers. IEEE Syst. J. 2014, 10, 507–519. [Google Scholar] [CrossRef]
  67. Li, Z.; Zhang, X.; Zuo, H.; Shang, Q.; Sun, G.; Huai, W.; Wang, T. Shaking table tests of double-layer micro-module data center: Structural responses and numerical simulation. Eng. Struct. 2025, 335, 120272. [Google Scholar] [CrossRef]
  68. Hewage, T.B.; Ilager, S.; Read, M.R.; Buyya, R. Aging-aware CPU Core Management for Embodied Carbon Amortization in Cloud LLM Inference. arXiv 2025, arXiv:2501.15829. [Google Scholar]
  69. Xiong, Z.; Tan, L.; Xu, J.; Cai, L. Online real-time energy consumption optimization with resistance to server switch jitter for server clusters. J. Supercomput. 2025, 81, 460. [Google Scholar] [CrossRef]
  70. Ronchetti, F.; Akishina, V.; Andreassen, E.; Bluhme, N.; Dange, G.; de Cuveland, J.; Erba, G.; Gaur, H.; Hutter, D.; Kozlov, G.; et al. Efficient high performance computing with the ALICE Event Processing Nodes GPU-based farm. Front. Phys. 2025, 13, 1541854. [Google Scholar] [CrossRef]
  71. Luo, Z.; Liu, J.; Lee, M.; Shroff, N.B. Prediction-Assisted Online Distributed Deep Learning Workload Scheduling in GPU Clusters. arXiv 2025, arXiv:2501.05563. [Google Scholar]
  72. Cui, W.; Zhang, J.; Zhao, H.; Liu, C.; Zhang, W.; Sha, J.; Chen, Q.; He, B.; Guo, M. XPUTimer: Anomaly Diagnostics for Divergent LLM Training in GPU Clusters of Thousand-Plus Scale. arXiv 2025, arXiv:2502.05413. [Google Scholar]
  73. Jiang, Y.; Fu, F.; Yao, X.; He, G.; Miao, X.; Klimovic, A.; Cui, B.; Yuan, B.; Yoneki, E. Demystifying Cost-Efficiency in LLM Serving over Heterogeneous GPUs. arXiv 2025, arXiv:2502.00722. [Google Scholar]
  74. Salmanian, Z.; Izadkhah, H.; Isazadeh, A. Optimizing web server RAM performance using birth–death process queuing system: Scalable memory issue. J. Supercomput. 2017, 73, 5221–5238. [Google Scholar] [CrossRef]
  75. Allcock, W.; Bernardoni, B.; Bertoni, C.; Getty, N.; Insley, J.; Papka, M.E.; Rizzi, S.; Toonen, B. Ram as a network managed resource. In Proceedings of the 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Vancouver, BC, Canada, 21–25 May 2018; pp. 99–106. [Google Scholar]
  76. Bianchini, R.; Rajamony, R. Power and energy management for server systems. Computer 2004, 37, 68–76. [Google Scholar] [CrossRef]
  77. Shan, X.; Yu, H.; Chen, Y.; Chen, Y.; Yang, Z. S2A-P2FS: Secure Storage Auditing with Privacy-Preserving Flexible Data Sharing in Cloud-Assisted Industrial IoT. IEEE Trans. Mob. Comput. 2025, 1–17. Available online: https://ieeexplore.ieee.org/document/10886974 (accessed on 14 May 2025). [CrossRef]
  78. Yu, H.; Zhang, H.; Yang, Z.; Chen, Y.; Liu, H. Efficient and Secure Storage Verification in Cloud-Assisted Industrial IoT Networks. IEEE Trans. Comput. 2025, 74, 1702–1716. [Google Scholar] [CrossRef]
  79. Soundharya, U.L.; Vadivu, G.; Ragunathan, T. A neural network based optimization algorithm for file fetching in distributed file system. Int. J. Inf. Technol. Decis. Mak. 2025. Available online: https://www.worldscientific.com/doi/10.1142/S0219622025500063 (accessed on 14 May 2025). [CrossRef]
  80. Shih, W.C.; Wang, Z.Y.; Kristiani, E.; Hsieh, Y.J.; Sung, Y.H.; Li, C.H.; Yang, C.T. The Construction of a Stream Service Application with DeepStream and Simple Realtime Server Using Containerization for Edge Computing. Sensors 2025, 25, 259. [Google Scholar] [CrossRef]
  81. Jasny, M.; Ziegler, T.; Binnig, C. Scalable Data Management on Next-Generation Data Center Networks. In Scalable Data Management for Future Hardware; Springer: Berlin/Heidelberg, Germany, 2025; pp. 199–221. [Google Scholar]
  82. Patronas, G.; Terzenidis, N.; Kashinkunti, P.; Zahavi, E.; Syrivelis, D.; Capps, L.; Wertheimer, Z.A.; Argyris, N.; Fevgas, A.; Thompson, C.; et al. Optical switching for data centers and advanced computing systems. J. Opt. Commun. Netw. 2025, 17, A87–A95. [Google Scholar] [CrossRef]
  83. Chen, B.; Zhang, Y.; Liang, H. Multi-Level Network Topology and Time Series Multi-Scenario Optimization Planning Method for Hybrid AC/DC Distribution Systems in Data Centers. Electronics 2025, 14, 264. [Google Scholar] [CrossRef]
  84. Hojati, E.; Sill, A.; Mengel, S.; Sayedi, S.M.B.; Bilbao, A.; Schmitt, K. A Comprehensive Monitoring, Visualization, and Management System for Green Data Centers. IEEE Syst. J. 2025, 19, 119–129. [Google Scholar] [CrossRef]
  85. Madhura, K.; Sekhar, G.C.; Sahu, A.; Karthikeyan, M.; Khurana, S.; Shukla, M.; Vashisht, N. Software defined network (SDN) based data server computing system. Int. J. Inf. Technol. 2025, 17, 607–613. [Google Scholar] [CrossRef]
  86. Xiao, Q.; Li, T.; Jia, H.; Mu, Y.; Jin, Y.; Qiao, J.; Pu, T.; Blaabjerg, F.; Gu, J.M. Electrical circuit analogy-based maximum latency calculation method of internet data centers in power-communication network. IEEE Trans. Smart Grid 2024. Available online: https://ieeexplore.ieee.org/document/10714408 (accessed on 14 May 2025).
  87. Milad, M.; Darwish, M. UPS system: How can future technology and topology improve the energy efficiency in data centers? In Proceedings of the 2014 49th International Universities Power Engineering Conference (UPEC), Cluj-Napoca, Romania, 2–5 September 2014; pp. 1–4. [Google Scholar]
  88. Ye, G.; Gao, F.; Fang, J.; Zhang, Q. Joint Workload Scheduling in Geo-Distributed Data Centers Considering UPS Power Losses. IEEE Trans. Ind. Appl. 2023, 59, 612–626. [Google Scholar] [CrossRef]
  89. Oshnoei, A.; Sorouri, H.; Safari, A.; Davari, P.; Zacho, M.; Johnsen, A.D.; Teodorescu, R. Accelerated SoH Balancing in Lithium-ion Battery Packs Using Finite Set MPC. In Proceedings of the 26th European Conference on Power Electronics and Applications, Paris, France, 31 March–4 April 2025. [Google Scholar]
  90. Sorouri, H.; Safari, A.; Oshnoei, A.; Teodorescu, R. Voltage-Controlled SoC Estimation in Lithium-Ion Batteries: A Comparative Analysis of Equivalent Circuit Models. In Proceedings of the 26th European Conference on Power Electronics and Applications, Paris, France, 31 March–4 April 2025. [Google Scholar]
  91. Tracy, J.G.; Pfitzer, H.E. Achieving high efficiency in a double conversion transformerless UPS. In Proceedings of the 31st Annual Conference of IEEE Industrial Electronics Society (IECON 2005), Raleigh, NC, USA, 6–10 November 2005; p. 4. [Google Scholar]
  92. Sato, E.K.; Kinoshita, M.; Yamamoto, Y.; Amboh, T. Redundant high-density high-efficiency double-conversion uninterruptible power system. IEEE Trans. Ind. Appl. 2010, 46, 1525–1533. [Google Scholar] [CrossRef]
  93. Milad, M.; Darwish, M. Comparison between Double Conversion Online UPS and Flywheel UPS technologies in terms of efficiency and cost in a medium Data Centre. In Proceedings of the 2015 50th International Universities Power Engineering Conference (UPEC), Stoke-on-Trent, UK, 1–4 September 2015; pp. 1–5. [Google Scholar]
  94. Oliveira, T.J.; Caseiro, L.M.; Mendes, A.M.; Cruz, S.M.; Perdigao, M.S. Online filter parameters estimation in a double conversion UPS system for real-time model predictive control performance optimization. IEEE Access 2022, 10, 30484–30500. [Google Scholar] [CrossRef]
  95. Parise, G.; Parise, L. Electrical distribution for a reliable data center. IEEE Trans. Ind. Appl. 2013, 49, 1697–1702. [Google Scholar] [CrossRef]
  96. Ganesh, L.; Weatherspoon, H.; Marian, T.; Birman, K. Integrated approach to data center power management. IEEE Trans. Comput. 2013, 62, 1086–1096. [Google Scholar] [CrossRef]
  97. Krein, P.T. Data center challenges and their power electronics. CPSS Trans. Power Electron. Appl. 2017, 2, 39–46. [Google Scholar] [CrossRef]
  98. Kontorinis, V.; Zhang, L.E.; Aksanli, B.; Sampson, J.; Homayoun, H.; Pettis, E.; Tullsen, D.M.; Rosing, T.S. Managing distributed UPS energy for effective power capping in data centers. ACM SIGARCH Comput. Archit. News 2012, 40, 488–499. [Google Scholar] [CrossRef]
  99. Jia, X.; Wang, J.K. Distributed firewall for P2P network in data center. In Proceedings of the 2013 IEEE International Conference on Consumer Electronics-China, Hsinchu City, Taiwan, 3–6 June 2013; pp. 15–19. [Google Scholar]
  100. Farrington, N.; Porter, G.; Radhakrishnan, S.; Bazzaz, H.H.; Subramanya, V.; Fainman, Y.; Papen, G.; Vahdat, A. Helios: A hybrid electrical/optical switch architecture for modular data centers. In Proceedings of the ACM SIGCOMM 2010 Conference, New Delhi, India, 30 August–3 September 2010; pp. 339–350. [Google Scholar]
  101. Chen, K.; Hu, C.; Zhang, X.; Zheng, K.; Chen, Y.; Vasilakos, A.V. Survey on routing in data centers: Insights and future directions. IEEE Netw. 2011, 25, 6–10. [Google Scholar] [CrossRef]
  102. Shang, Y.; Li, D.; Xu, M. Energy-aware routing in data center network. In Proceedings of the First ACM SIGCOMM Workshop on Green Networking, New Delhi, India, 30 August 2010; pp. 1–8. [Google Scholar]
  103. Zhang, J.; Yu, F.R.; Wang, S.; Huang, T.; Liu, Z.; Liu, Y. Load balancing in data center networks: A survey. IEEE Commun. Surv. Tutor. 2018, 20, 2324–2352. [Google Scholar] [CrossRef]
  104. Santos, D.; Mataloto, B.; Ferreira, J.C. Data center environment monitoring system. In Proceedings of the 2019 4th International Conference on Cloud Computing and Internet of Things, Tokyo, Japan, 20–22 September 2019; pp. 75–81. [Google Scholar]
  105. Noormohammadpour, M.; Raghavendra, C.S. Datacenter traffic control: Understanding techniques and tradeoffs. IEEE Commun. Surv. Tutor. 2017, 20, 1492–1525. [Google Scholar] [CrossRef]
  106. Guo, Z.; Liu, S.; Zhang, Z.L. Traffic control for RDMA-enabled data center networks: A survey. IEEE Syst. J. 2019, 14, 677–688. [Google Scholar] [CrossRef]
  107. Benson, T.; Anand, A.; Akella, A.; Zhang, M. Understanding data center traffic characteristics. ACM SIGCOMM Comput. Commun. Rev. 2010, 40, 92–99. [Google Scholar] [CrossRef]
  108. Safari, A.; Kharrati, H.; Rahimi, A. A hybrid attention-based long short-term memory fast model for thermal regulation of smart residential buildings. IET Smart Cities 2024, 6, 361–371. [Google Scholar] [CrossRef]
  109. Jing, Y.; Xie, L.; Li, F.; Zhan, Z.; Wang, Z.; Yang, F.; Fan, J.; Zhu, Z.; Zhang, H.; Zhao, C.; et al. Field test of cooling systems in two air-cooled data centers: Various regions, air distributions and evaporative cooling technologies. Appl. Therm. Eng. 2024, 248, 123189. [Google Scholar] [CrossRef]
  110. Isazadeh, A.; Ziviani, D.; Claridge, D.E. Thermal management in legacy air-cooled data centers: An overview and perspectives. Renew. Sustain. Energy Rev. 2023, 187, 113707. [Google Scholar] [CrossRef]
  111. Gupta, R.; Asgari, S.; Moazamigoodarzi, H.; Pal, S.; Puri, I.K. Cooling architecture selection for air-cooled Data Centers by minimizing exergy destruction. Energy 2020, 201, 117625. [Google Scholar] [CrossRef]
  112. Kuzay, M.; Dogan, A.; Yilmaz, S.; Herkiloglu, O.; Atalay, A.S.; Cemberci, A.; Yilmaz, C.; Demirel, E. Retrofitting of an air-cooled data center for energy efficiency. Case Stud. Therm. Eng. 2022, 36, 102228. [Google Scholar] [CrossRef]
  113. Lin, J.; Lin, W.; Lin, W.; Wang, J.; Jiang, H. Thermal prediction for air-cooled data center using data driven-based model. Appl. Therm. Eng. 2022, 217, 119207. [Google Scholar] [CrossRef]
  114. Xiong, X.; Lee, P.S. Vortex-enhanced thermal environment for air-cooled data center: An experimental and numerical study. Energy Build. 2021, 250, 111287. [Google Scholar] [CrossRef]
  115. Wang, N.; Guo, Y.; Huang, C.; Tian, B.; Shao, S. Multi-scale collaborative modeling and deep learning-based thermal prediction for air-cooled data centers: An innovative insight for thermal management. Appl. Energy 2025, 377, 124568. [Google Scholar] [CrossRef]
  116. Li, N.; Li, H.; Duan, K.; Tao, W.Q. Evaluation of the cooling effectiveness of air-cooled data centers by energy diagram. Appl. Energy 2025, 382, 125215. [Google Scholar] [CrossRef]
  117. Chen, H.; Li, D.; Wang, S.; Chen, T.; Zhong, M.; Ding, Y.; Li, Y.; Huo, X. Numerical investigation of thermal performance with adaptive terminal devices for cold aisle containment in data centers. Buildings 2023, 13, 268. [Google Scholar] [CrossRef]
  118. Cheong, K.H.; Tang, K.J.W.; Koh, J.M.; Yu, S.C.M.; Acharya, U.R.; Xie, N.G. A novel methodology to improve cooling efficiency at data centers. IEEE Access 2019, 7, 153799–153809. [Google Scholar] [CrossRef]
  119. Jao, Y.C.; Zhang, Z.W.; Wang, C.C. Effect of uneven heat load on the airflow uniformity and thermal performance in a small-scale data center. Appl. Therm. Eng. 2024, 242, 122525. [Google Scholar] [CrossRef]
  120. Li, Y.; Zhu, C.; Li, X.; Yang, B. A Review of Non-Uniform Load Distribution and Solutions in Data Centers: Micro-Scale Liquid Cooling and Large-Scale Air Cooling. Energies 2025, 18, 149. [Google Scholar] [CrossRef]
  121. Heydari, A.; Gharaibeh, A.R.; Tradat, M.; Manaserh, Y.; Radmard, V.; Eslami, B.; Rodriguez, J.; Sammakia, B. Experimental evaluation of direct-to-chip cold plate liquid cooling for high-heat-density data centers. Appl. Therm. Eng. 2024, 239, 122122. [Google Scholar] [CrossRef]
  122. Shahi, P.; Mathew, A.; Saini, S.; Bansode, P.; Kasukurthy, R.; Agonafer, D. Assessment of reliability enhancement in high-power CPUs and GPUs using dynamic direct-to-chip liquid cooling. J. Enhanc. Heat Transf. 2022, 29, 1–13. [Google Scholar] [CrossRef]
  123. Kong, R.; Zhang, H.; Tang, M.; Zou, H.; Tian, C.; Ding, T. Enhancing data center cooling efficiency and ability: A comprehensive review of direct liquid cooling technologies. Energy 2024, 308, 132846. [Google Scholar] [CrossRef]
  124. Hnayno, M.; Chehade, A.; Klaba, H.; Bauduin, H.; Polidori, G.; Maalouf, C. Performance analysis of new liquid cooling topology and its impact on data centres. Appl. Therm. Eng. 2022, 213, 118733. [Google Scholar] [CrossRef]
  125. Lucchese, R.; Varagnolo, D.; Johansson, A. Controlled direct liquid cooling of data servers. IEEE Trans. Control Syst. Technol. 2020, 29, 2325–2338. [Google Scholar] [CrossRef]
  126. Wang, J.; Yuan, C.; Li, Y.; Li, C.; Wang, Y.; Wei, X. Direct-to-chip immersion liquid cooling for high-power vertical-cavity surface-emitting laser (VCSEL). Appl. Therm. Eng. 2025, 269, 126137. [Google Scholar] [CrossRef]
  127. Kim, J.; Choi, H.; Lee, S.; Lee, H. Computational study of single-phase immersion cooling for high-energy density server rack for data centers. Appl. Therm. Eng. 2025, 264, 125476. [Google Scholar] [CrossRef]
  128. Alkasmoul, F.S.; Almogbel, A.M.; Shahzad, M.W.; Al-damook, A.J. Thermal performance and optimum concentration of different nanofluids in immersion cooling in data center servers. Results Eng. 2025, 25, 103699. [Google Scholar] [CrossRef]
  129. Liu, S.; Guo, S.; Sun, H.; Xu, Z.; Li, X.; Wang, X. Evaluation of energy, economic, and pollution emission for single-phase immersion cooling data center with different economizers. Appl. Therm. Eng. 2025, 260, 125049. [Google Scholar] [CrossRef]
  130. Wu, X.; Yang, J.; Liu, Y.; Zhuang, Y.; Luo, S.; Yan, Y.; Xiao, L.; Han, X. Investigations on heat dissipation performance and overall characteristics of two-phase liquid immersion cooling systems for data center. Int. J. Heat Mass Transf. 2025, 239, 126575. [Google Scholar] [CrossRef]
  131. Sun, X.; Liu, Z.; Ji, S.; Yuan, K. Experimental study on thermal performance of a single-phase immersion cooling unit for high-density computing power data center. Int. J. Heat Fluid Flow 2025, 112, 109735. [Google Scholar] [CrossRef]
  132. Ozer, R.A. Heat sink optimization with response surface methodology for single phase immersion cooling. Int. J. Heat Fluid Flow 2025, 112, 109745. [Google Scholar] [CrossRef]
  133. Oh, H.; Jin, W.; Peng, P.; Winick, J.A.; Sickinger, D.; Sartor, D.; Zhang, Y.; Beckers, K.; Kitz, K.; Acero-Allard, D.; et al. Techno-economic performance of reservoir thermal energy storage for data center cooling system. Appl. Energy 2025, 391, 125858. [Google Scholar] [CrossRef]
  134. Pambudi, N.A.; Sarifudin, A.; Firdaus, R.A.; Ulfa, D.K.; Gandidi, I.M.; Romadhon, R. The immersion cooling technology: Current and future development in energy saving. Alex. Eng. J. 2022, 61, 9509–9527. [Google Scholar] [CrossRef]
  135. Li, X.; Xu, Z.; Liu, S.; Zhang, X.; Sun, H. Server performance optimization for single-phase immersion cooling data center. Appl. Therm. Eng. 2023, 224, 120080. [Google Scholar] [CrossRef]
  136. Liu, C.; Yu, H. Evaluation and optimization of a two-phase liquid-immersion cooling system for data centers. Energies 2021, 14, 1395. [Google Scholar] [CrossRef]
  137. Cho, J. Numerical analysis of rack-based data center cooling with rear door heat exchanger (RDHx): Interrelationship between thermal performance and energy efficiency. Case Stud. Therm. Eng. 2024, 63, 105247. [Google Scholar] [CrossRef]
  138. Mynampati, V.N.P.; Karajgikar, S.; Sheerah, I.; Agonafer, D.; Novotny, S.; Schmidt, R. Rear Door Heat Exchanger Cooling Performance in Telecommunication Data Centers. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Vancouver, BC, Canada, 12–18 November 2010; Volume 44281, pp. 405–410. [Google Scholar]
  139. Nemati, K.; Alissa, H.A.; Murray, B.T.; Schneebeli, K.; Sammakia, B. Experimental failure analysis of a rear door heat exchanger with localized containment. IEEE Trans. Compon. Packag. Manuf. Technol. 2017, 7, 882–892. [Google Scholar] [CrossRef]
  140. Manaserh, Y.M.; Tradat, M.I.; Gharaibeh, A.R.; Sammakia, B.G.; Tipton, R. Shifting to energy efficient hybrid cooled data centers using novel embedded floor tiles heat exchangers. Energy Convers. Manag. 2021, 247, 114762. [Google Scholar] [CrossRef]
  141. Li, X.; Li, M.; Zhang, Y.; Han, Z.; Wang, S. Rack-level cooling technologies for data centers—A comprehensive review. J. Build. Eng. 2024, 5, 109535. [Google Scholar] [CrossRef]
  142. Yang, W.; Yang, L.; Ou, J.; Lin, Z.; Zhao, X. Investigation of heat management in high thermal density communication cabinet by a rear door liquid cooling system. Energies 2019, 12, 4385. [Google Scholar] [CrossRef]
  143. Gao, T.; Sammakia, B.G.; Geer, J.F.; Ortega, A.; Schmidt, R. Dynamic analysis of cross flow heat exchangers in data centers using transient effectiveness method. IEEE Trans. Compon. Packag. Manuf. Technol. 2014, 4, 1925–1935. [Google Scholar] [CrossRef]
  144. Karki, K.; Novotny, S.; Radmehr, A.; Patankar, S. Use of passive, rear-door heat exchangers to cool low to moderate heat loads. ASHRAE Trans. 2011, 117, 26–34. [Google Scholar]
  145. Wan, J.; Gui, X.; Kasahara, S.; Zhang, Y.; Zhang, R. Air flow measurement and management for improving cooling and energy efficiency in raised-floor data centers: A survey. IEEE Access 2018, 6, 48867–48901. [Google Scholar] [CrossRef]
  146. Abbas, A.; Huzayyin, A.; Mouneer, T.; Nada, S. Effect of data center servers’ power density on the decision of using in-row cooling or perimeter cooling. Alex. Eng. J. 2021, 60, 3855–3867. [Google Scholar] [CrossRef]
  147. Nada, S.; Abbas, A. Solutions of thermal management problems for terminal racks of in-row cooling architectures in data centers. Build. Environ. 2021, 201, 107991. [Google Scholar] [CrossRef]
  148. Abbas, A.; Huzayyin, A.; Mouneer, T.; Nada, S. Thermal management and performance enhancement of data centers architectures using aligned/staggered in-row cooling arrangements. Case Stud. Therm. Eng. 2021, 24, 100884. [Google Scholar] [CrossRef]
  149. Nada, S.A.; Abbas, A.M. Effect of in-row cooling units numbers/locations on thermal and energy management of data centers servers. Int. J. Energy Res. 2021, 45, 20270–20284. [Google Scholar] [CrossRef]
  150. Cho, J.; Woo, J. Development and experimental study of an independent row-based cooling system for improving thermal performance of a data center. Appl. Therm. Eng. 2020, 169, 114857. [Google Scholar] [CrossRef]
  151. Wang, L.; Wang, Y.; Bai, X.; Wu, T.; Ma, Y.; Jin, Y.; Jiang, H. An efficient assessment method for the thermal environment of a row-based cooling data center. Appl. Therm. Eng. 2025, 269, 126020. [Google Scholar] [CrossRef]
  152. Baghsheikhi, M.; Haftlangi, P.; Mohammadi, M. Analytical and experimental comparison of various in-row cooling systems for data centers: An exergy, exergoeconomic, and economic analysis. Therm. Sci. Eng. Prog. 2025, 57, 103086. [Google Scholar] [CrossRef]
  153. Cho, J. Optimal supply air temperature with respect to data center operational stability and energy efficiency in a row-based cooling system under fault conditions. Energy 2024, 288, 129797. [Google Scholar] [CrossRef]
  154. Wang, Y.; Bai, X.; Fu, Y.; Tang, Y.; Jin, C.; Li, Z. Field experiment and numerical simulation for airflow evaluation in a data center with row-based cooling. Energy Build. 2023, 294, 113231. [Google Scholar] [CrossRef]
  155. Singh, N.; Permana, I.; Agharid, A.P.; Wang, F.J. Innovative Retrofits for enhanced thermal performance in data centers using Independent Row-Based cooling systems. Therm. Sci. Eng. Prog. 2025, 57, 103101. [Google Scholar] [CrossRef]
  156. Moazamigoodarzi, H.; Gupta, R.; Pal, S.; Tsai, P.J.; Ghosh, S.; Puri, I.K. Modeling temperature distribution and power consumption in IT server enclosures with row-based cooling architectures. Appl. Energy 2020, 261, 114355. [Google Scholar] [CrossRef]
  157. Chu, J.; Huang, X. Research status and development trends of evaporative cooling air-conditioning technology in data centers. Energy Built Environ. 2023, 4, 86–110. [Google Scholar] [CrossRef]
  158. Kim, M.H.; Ham, S.W.; Park, J.S.; Jeong, J.W. Impact of integrated hot water cooling and desiccant-assisted evaporative cooling systems on energy savings in a data center. Energy 2014, 78, 384–396. [Google Scholar] [CrossRef]
  159. Han, Z.; Xue, D.; Wei, H.; Ji, Q.; Sun, X.; Li, X. Study on operation strategy of evaporative cooling composite air conditioning system in data center. Renew. Energy 2021, 177, 1147–1160. [Google Scholar] [CrossRef]
  160. Liu, Y.; Yang, X.; Li, J.; Zhao, X. Energy savings of hybrid dew-point evaporative cooler and micro-channel separated heat pipe cooling systems for computer data centers. Energy 2018, 163, 629–640. [Google Scholar] [CrossRef]
  161. Han, Z.; Sun, X.; Wei, H.; Ji, Q.; Xue, D. Energy saving analysis of evaporative cooling composite air conditioning system for data centers. Appl. Therm. Eng. 2021, 186, 116506. [Google Scholar] [CrossRef]
  162. Li, C.; Mao, R.; Wang, Y.; Zhang, J.; Lan, J.; Zhang, Z. Experimental study on direct evaporative cooling for free cooling of data centers. Energy 2024, 288, 129889. [Google Scholar] [CrossRef]
  163. Zhang, Y.; Han, H.; Zhang, Y.; Li, J.; Liu, C.; Liu, Y.; Gao, D. Experimental study on the performance of a novel data center air conditioner combining air cooling and evaporative cooling. Int. J. Refrig. 2025, 170, 98–112. [Google Scholar] [CrossRef]
  164. Mao, R.; Wu, H.; Li, C.; Zhang, Z.; Liang, X.; Zhou, J.; Chen, J. Experimental investigation on the application of cold-mist direct evaporative cooling in data centers. Int. J. Therm. Sci. 2025, 208, 109500. [Google Scholar] [CrossRef]
  165. Yan, W.; Cui, X.; Zhao, M.; Meng, X.; Yang, C.; Zhang, Y.; Liu, Y.; Jin, L. Multi-objective optimization of dew point indirect evaporative coolers for data centers. Appl. Therm. Eng. 2024, 241, 122425. [Google Scholar] [CrossRef]
  166. Wang, W.; Liang, C.; Zha, F.; Wang, H.; Shi, W.; Cui, Z.; Zhang, K.; Jia, H.; Li, J.; Li, X. Air conditioning system with dual-temperature chilled water for air grading treatment in data centers. Energy Build. 2023, 290, 113073. [Google Scholar] [CrossRef]
  167. Park, B.R.; Choi, Y.J.; Choi, E.J.; Moon, J.W. Adaptive control algorithm with a retraining technique to predict the optimal amount of chilled water in a data center cooling system. J. Build. Eng. 2022, 50, 104167. [Google Scholar] [CrossRef]
  168. Chen, L.; Wemhoff, A.P. The sustainability benefits of economization in data centers containing chilled water systems. Resour. Conserv. Recycl. 2023, 196, 107053. [Google Scholar] [CrossRef]
  169. Jiang, W.; Jia, Z.; Feng, S.; Liu, F.; Jin, H. Fine-grained warm water cooling for improving datacenter economy. In Proceedings of the 46th International Symposium on Computer Architecture, Phoenix, AZ, USA, 22–26 June 2019; pp. 474–486. [Google Scholar]
  170. Kim, Y.J.; Ha, J.W.; Park, K.S.; Song, Y.H. A study on the energy reduction measures of data centers through chilled water temperature control and water-side economizer. Energies 2021, 14, 3575. [Google Scholar] [CrossRef]
  171. Siriwardana, J.; Jayasekara, S.; Halgamuge, S.K. Potential of air-side economizers for data center cooling: A case study for key Australian cities. Appl. Energy 2013, 104, 207–219. [Google Scholar] [CrossRef]
  172. Jin, Y.; Bai, X.; Xu, X.; Mi, R.; Li, Z. Climate zones for the application of water-side economizer in a data center cooling system. Appl. Therm. Eng. 2024, 250, 123450. [Google Scholar] [CrossRef]
  173. Ham, S.W.; Kim, M.H.; Choi, B.N.; Jeong, J.W. Energy saving potential of various air-side economizers in a modular data center. Appl. Energy 2015, 138, 258–275. [Google Scholar] [CrossRef]
  174. Deymi-Dashtebayaz, M.; Namanlo, S.V.; Arabkoohsar, A. Simultaneous use of air-side and water-side economizers with the air source heat pump in a data center for cooling and heating production. Appl. Therm. Eng. 2019, 161, 114133. [Google Scholar] [CrossRef]
  175. Cho, J.; Lim, T.; Kim, B.S. Viability of datacenter cooling systems for energy efficiency in temperate or subtropical regions: Case study. Energy Build. 2012, 55, 189–197. [Google Scholar] [CrossRef]
  176. Cho, K.; Chang, H.; Jung, Y.; Yoon, Y. Economic analysis of data center cooling strategies. Sustain. Cities Soc. 2017, 31, 234–243. [Google Scholar] [CrossRef]
  177. Wang, J.; Zhang, Q.; Yoon, S.; Yu, Y. Reliability and availability analysis of a hybrid cooling system with water-side economizer in data center. Build. Environ. 2019, 148, 405–416. [Google Scholar] [CrossRef]
  178. Durand-Estebe, B.; Le Bot, C.; Mancos, J.N.; Arquis, E. Simulation of a temperature adaptive control strategy for an IWSE economizer in a data center. Appl. Energy 2014, 134, 45–56. [Google Scholar] [CrossRef]
  179. Kim, J.H.; Shin, D.U.; Kim, H. Data center energy evaluation tool development and analysis of power usage effectiveness with different economizer types in various climate zones. Buildings 2024, 14, 299. [Google Scholar] [CrossRef]
  180. Zou, S.; Pan, Y. Performance of a hybrid thermosyphon cooling system using airside economizers for data center free cooling under different climate conditions. J. Build. Eng. 2024, 98, 111235. [Google Scholar] [CrossRef]
  181. Chen, H.; Peng, Y.H.; Wang, Y.L. Thermodynamic analysis of hybrid cooling system integrated with waste heat reusing and peak load shifting for data center. Energy Convers. Manag. 2019, 183, 427–439. [Google Scholar] [CrossRef]
  182. Wang, J.; Zhang, Q.; Yoon, S.; Yu, Y. Impact of uncertainties on the supervisory control performance of a hybrid cooling system in data center. Build. Environ. 2019, 148, 361–371. [Google Scholar] [CrossRef]
  183. Fouladi, K.; Schaadt, J.; Wemhoff, A.P. A novel approach to the data center hybrid cooling design with containment. Numer. Heat Transf. Part A Appl. 2017, 71, 477–487. [Google Scholar] [CrossRef]
  184. Zhu, Y.; Zhang, Q.; Zeng, L.; Wang, J.; Zou, S. An advanced control strategy of hybrid cooling system with cold water storage system in data center. Energy 2024, 291, 130304. [Google Scholar] [CrossRef]
  185. Jahangir, M.H.; Mokhtari, R.; Mousavi, S.A. Performance evaluation and financial analysis of applying hybrid renewable systems in cooling unit of data centers—A case study. Sustain. Energy Technol. Assess. 2021, 46, 101220. [Google Scholar] [CrossRef]
  186. Wang, J.; Huang, Z.; Yue, C.; Zhang, Q.; Wang, P. Various uncertainties self-correction method for the supervisory control of a hybrid cooling system in data centers. J. Build. Eng. 2021, 42, 102830. [Google Scholar] [CrossRef]
  187. Zhou, F.; Shen, C.; Ma, G.; Yan, X. Power usage effectiveness analysis of a liquid-pump-driven hybrid cooling system for data centers in subclimate zones. Sustain. Energy Technol. Assess. 2022, 52, 102277. [Google Scholar] [CrossRef]
  188. Sbaity, A.A.; Louahlia, H.; Le Masson, S. Performance of a hybrid thermosyphon condenser for cooling a typical data center under various climatic constraints. Appl. Therm. Eng. 2022, 202, 117786. [Google Scholar] [CrossRef]
  189. Lamptey, N.B.; Anka, S.K.; Lee, K.H.; Cho, Y.; Choi, J.W.; Choi, J.M. Comparative energy analysis of cooling energy performance between conventional and hybrid air source internet data center cooling system. Energy Build. 2024, 302, 113759. [Google Scholar] [CrossRef]
  190. Zurmuhl, D.P.; Lukawski, M.Z.; Aguirre, G.A.; Law, W.R.; Schnaars, G.P.; Beckers, K.F.; Anderson, C.L.; Tester, J.W. Hybrid geothermal heat pumps for cooling telecommunications data centers. Energy Build. 2019, 188, 120–128. [Google Scholar] [CrossRef]
  191. Hanwha Vision America. Video Surveillance for Data Centers. 2025. Available online: https://hanwhavisionamerica.com/markets/data-centers/ (accessed on 21 April 2025).
  192. Giri, S.; Su, J.; Zajko, G.; Prasad, P. Authentication method to secure cloud data centres using biometric technology. In Proceedings of the 2020 5th International Conference on Innovative Technologies in Intelligent Systems and Industrial Applications (CITISIA), Sydney, Australia, 25–27 November 2020; pp. 1–9. [Google Scholar]
  193. Stefani, E.; Ferrari, C. Design and implementation of a multi-modal biometric system for company access control. Algorithms 2017, 10, 61. [Google Scholar] [CrossRef]
  194. Wang, C.; Schwan, K.; Talwar, V.; Eisenhauer, G.; Hu, L.; Wolf, M. A flexible architecture integrating monitoring and analytics for managing large-scale data centers. In Proceedings of the 8th ACM International Conference on Autonomic Computing, Karlsruhe, Germany, 14–18 June 2011; pp. 141–150. [Google Scholar]
  195. Data Center Frontier. Sustainable Lighting: Key Considerations for Green Data Centers. 2023. Available online: https://www.datacenterfrontier.com/voices-of-the-industry/article/11428741/sustainable-lighting-key-considerations-for-green-data-centers (accessed on 21 April 2025).
  196. Bakar, N.N.A.; Hassan, M.Y.; Abdullah, H.; Rahman, H.A.; Abdullah, M.P.; Hussin, F.; Bandi, M. Energy efficiency index as an indicator for measuring building energy performance: A review. Renew. Sustain. Energy Rev. 2015, 44, 1–11. [Google Scholar] [CrossRef]
  197. Berndt, E.R. Aggregate energy, efficiency and productivity measurement. Annu. Rev. Environ. Resour. 1978, 3, 225–273. [Google Scholar] [CrossRef]
  198. Giacone, E.; Mancò, S. Energy efficiency measurement in industrial processes. Energy 2012, 38, 331–345. [Google Scholar] [CrossRef]
  199. Aravanis, A.I.; Voulkidis, A.; Salom, J.; Townley, J.; Georgiadou, V.; Oleksiak, A.; Porto, M.R.; Roudet, F.; Zahariadis, T. Metrics for assessing flexibility and sustainability of next generation data centers. In Proceedings of the 2015 IEEE Globecom Workshops (GC Wkshps), San Diego, CA, USA, 6–10 December 2015; pp. 1–6. [Google Scholar]
  200. Song, Z.; Zhang, X.; Eriksson, C. Data center energy and cost saving evaluation. Energy Procedia 2015, 75, 1255–1260. [Google Scholar] [CrossRef]
  201. Mittal, S. Power management techniques for data centers: A survey. arXiv 2014, arXiv:1404.6681. [Google Scholar]
  202. Shaikh, A.; Uddin, M.; Elmagzoub, M.A.; Alghamdi, A. PEMC: Power Efficiency Measurement Calculator to Compute Power Efficiency and CO2 Emissions in Cloud Data Centers. IEEE Access 2020, 8, 195216–195228. [Google Scholar] [CrossRef]
  203. Uddin, M.; Darabidarabkhani, Y.; Shah, A.; Memon, J. Evaluating power efficient algorithms for efficiency and carbon emissions in cloud data centers: A review. Renew. Sustain. Energy Rev. 2015, 51, 1553–1563. [Google Scholar] [CrossRef]
  204. Shang, Y.; Li, D.; Zhu, J.; Xu, M. On the network power effectiveness of data center architectures. IEEE Trans. Comput. 2015, 64, 3237–3248. [Google Scholar] [CrossRef]
  205. Belady, C.L.; Malone, C.G. Metrics and an infrastructure model to evaluate data center efficiency. In Proceedings of the International Electronic Packaging Technical Conference and Exhibition, Singapore, 10–12 December 2007; Volume 42770, pp. 751–755. [Google Scholar]
  206. Kumar, R.; Khatri, S.K.; Diván, M.J. Efficiency measurement of data centers: An elucidative review. J. Discret. Math. Sci. Cryptogr. 2020, 23, 221–236. [Google Scholar] [CrossRef]
  207. Wilde, T.; Auweter, A.; Patterson, M.K.; Shoukourian, H.; Huber, H.; Bode, A.; Labrenz, D.; Cavazzoni, C. DWPE, a new data center energy-efficiency metric bridging the gap between infrastructure and workload. In Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS), New Orleans, LA, USA, 16–21 November 2014; pp. 893–901. [Google Scholar]
  208. Grishina, A.; Chinnici, M.; Kor, A.L.; Rondeau, E.; Georges, J.P.; De Chiara, D. Data center for smart cities: Energy and sustainability issue. In Big Data Platforms and Applications: Case Studies, Methods, Techniques, and Performance Evaluation; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–36. [Google Scholar]
  209. Grishina, A.; Chinnici, M.; De Chiara, D.; Rondeau, E.; Kor, A.L. Energy-oriented analysis of HPC cluster queues: Emerging metrics for sustainable data center. In Proceedings of the Applied Physics, System Science and Computers III: Proceedings of the 3rd International Conference on Applied Physics, System Science and Computers (APSAC2018), Dubrovnik, Croatia, 26–28 September 2018; Springer: Berlin/Heidelberg, Germany, 2019; pp. 286–300. [Google Scholar]
  210. Shiino, T. Standardizing Data Center Energy Efficiency Metrics in Preparation for Global Competition. NRI Papers. 2012. Available online: https://www.nri.com/content/900013140.pdf (accessed on 25 April 2025).
  211. Koutitas, G.; Demestichas, P. Challenges for energy efficiency in local and regional data centers. J. Green Eng. 2010, 1, 1–32. [Google Scholar]
  212. Chilukuri, M.; Dahlan, M.M.; Hwye, C.C. Benchmarking Energy Efficiency in Tropical Data Centres—Metrics and Measurements. In Proceedings of the 2018 International Conference and Utility Exhibition on Green Energy for Sustainable Development (ICUE), Phuket City, Thailand, 24–26 October 2018; pp. 1–10. [Google Scholar]
  213. Khargharia, B.; Luo, H.; Al-Nashif, Y.; Hariri, S. Appflow: Autonomic performance-per-watt management of large-scale data centers. In Proceedings of the 2010 IEEE/ACM Int’l Conference on Green Computing and Communications & Int’l Conference on Cyber, Physical and Social Computing, Washington, DC, USA, 18–20 December 2010; pp. 103–111. [Google Scholar]
  214. Gandhi, A.; Harchol-Balter, M. How data center size impacts the effectiveness of dynamic power management. In Proceedings of the 2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 28–30 September 2011; pp. 1164–1169. [Google Scholar]
  215. Ruiu, P.; Fiandrino, C.; Giaccone, P.; Bianco, A.; Kliazovich, D.; Bouvry, P. On the energy-proportionality of data center networks. IEEE Trans. Sustain. Comput. 2017, 2, 197–210. [Google Scholar] [CrossRef]
  216. Khargharia, B.; Hariri, S.; Yousif, M.S. An adaptive interleaving technique for memory performance-per-watt management. IEEE Trans. Parallel Distrib. Syst. 2008, 20, 1011–1022. [Google Scholar] [CrossRef]
  217. Li, Z.; Yang, Y. RRect: A novel server-centric data center network with high power efficiency and availability. IEEE Trans. Cloud Comput. 2018, 8, 914–927. [Google Scholar] [CrossRef]
  218. Dalvandi, A.; Gurusamy, M.; Chua, K.C. Application scheduling, placement, and routing for power efficiency in cloud data centers. IEEE Trans. Parallel Distrib. Syst. 2016, 28, 947–960. [Google Scholar] [CrossRef]
  219. Jamalzadeh, M.; Behravan, N. An exhaustive framework for better data centers, energy efficiency and greenness by using metrics. Indian J. Comput. Sci. Eng. (IJCSE) 2012, 2, 2231–3850. [Google Scholar]
  220. Beitelmal, A.; Fabris, D. Servers and data centers energy performance metrics. Energy Build. 2014, 80, 562–569. [Google Scholar] [CrossRef]
  221. Wang, L.; Khan, S.U. Review of performance metrics for green data centers: A taxonomy study. J. Supercomput. 2013, 63, 639–656. [Google Scholar] [CrossRef]
  222. Nawathe, U.G.; Hassan, M.; Yen, K.C.; Kumar, A.; Ramachandran, A.; Greenhill, D. Implementation of an 8-core, 64-thread, power-efficient SPARC server on a chip. IEEE J. Solid-State Circuits 2008, 43, 6–20. [Google Scholar] [CrossRef]
  223. Rivoire, S.; Shah, M.A.; Ranganathan, P.; Kozyrakis, C.; Meza, J. Models and metrics to enable energy-efficiency optimizations. Computer 2007, 40, 39–48. [Google Scholar] [CrossRef]
  224. Anderson, S.F. Improving data center efficiency. Energy Eng. 2010, 107, 42–63. [Google Scholar] [CrossRef]
  225. Imani, M.; Garcia, R.; Huang, A.; Rosing, T. Cade: Configurable approximate divider for energy efficiency. In Proceedings of the 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), Florence, Italy, 25–29 March 2019; pp. 586–589. [Google Scholar]
  226. da Silva Rocha, É.; GF da Silva, L.; Santos, G.L.; Bezerra, D.; Moreira, A.; Gonçalves, G.; Marquezini, M.V.; Mehta, A.; Wildeman, M.; Kelner, J.; et al. Aggregating data center measurements for availability analysis. Softw. Pract. Exp. 2021, 51, 868–892. [Google Scholar] [CrossRef]
  227. Nguyen, T.A.; Min, D.; Choi, E.; Tran, T.D. Reliability and availability evaluation for cloud data center networks using hierarchical models. IEEE Access 2019, 7, 9273–9313. [Google Scholar] [CrossRef]
  228. Sego, L.H.; Marquez, A.; Rawson, A.; Cader, T.; Fox, K.; Gustafson, W.I., Jr.; Mundy, C.J. Implementing the data center energy productivity metric. ACM J. Emerg. Technol. Comput. Syst. (JETC) 2012, 8, 1–22. [Google Scholar] [CrossRef]
  229. Uddin, M.; Rahman, A.A. Energy efficiency and low carbon enabler green IT framework for data centers considering green metrics. Renew. Sustain. Energy Rev. 2012, 16, 4078–4094. [Google Scholar] [CrossRef]
  230. Gandhi, A.; Lee, D.; Liu, Z.; Mu, S.; Zadok, E.; Ghose, K.; Gopalan, K.; Liu, Y.D.; Hussain, S.R.; Mcdaniel, P. Metrics for sustainability in data centers. ACM SIGENERGY Energy Inform. Rev. 2023, 3, 40–46. [Google Scholar] [CrossRef]
  231. Shally, S.S.; Kumar, S. Measuring energy efficiency of cloud datacenters. Int. J. Recent Technol. Eng. 2019, 8, 5428–5433. [Google Scholar] [CrossRef]
  232. The Green Grid. Green Grid Metrics: Describing Datacenter Power Efficiency; Technical Committee White Paper; The Green Grid, 2007. Available online: https://www.thegreengrid.org/resources/library-and-tools (accessed on 25 April 2025).
  233. Schaeppi, B.; Bogner, T.; Schloesser, A.; Stobbe, L.; de Asuncao, M.D. Metrics for energy efficiency assessment in data centers and server rooms. In Proceedings of the 2012 Electronics Goes Green 2012+, Berlin, Germany, 9–12 September 2012; pp. 1–6. [Google Scholar]
  234. Herzog, C. Standardization Bodies, Initiatives and their relation to Green IT focused on the Data Centre Side. In Proceedings of the Energy Efficiency in Large Scale Distributed Systems: COST IC0804 European Conference, EE-LSDS 2013, Vienna, Austria, 22–24 April 2013; Revised Selected Papers; Springer: Berlin/Heidelberg, Germany, 2013; pp. 289–299. [Google Scholar]
  235. Chen, D.; Henis, E.; Kat, R.I.; Sotnikov, D.; Cappiello, C.; Ferreira, A.M.; Pernici, B.; Vitali, M.; Jiang, T.; Liu, J.; et al. Usage centric green performance indicators. ACM SIGMETRICS Perform. Eval. Rev. 2011, 39, 92–96. [Google Scholar] [CrossRef]
  236. Pop, C.B.; Anghel, I.; Cioara, T.; Salomie, I.; Vartic, I. A swarm-inspired data center consolidation methodology. In Proceedings of the 2nd International Conference on Web Intelligence, Mining and Semantics, Craiova, Romania, 13–15 June 2012; pp. 1–7. [Google Scholar]
  237. Schödwell, B.; Erek, K.; Zarnekow, R. Data center green performance measurement: State of the art and open research challenges. In Proceedings of the Nineteenth Americas Conference on Information Systems, Chicago, IL, USA, 15–17 August 2013. [Google Scholar]
  238. Sisó, L.; Salom, J.; Jarus, M.; Oleksiak, A.; Zilio, T. Energy and heat-aware metrics for data centers: Metrics analysis in the framework of CoolEmAll project. In Proceedings of the 2013 International Conference on Cloud and Green Computing, Karlsruhe, Germany, 30 September–2 October 2013; pp. 428–434. [Google Scholar]
  239. Procaccianti, G.; Routsis, A. Energy efficiency and power measurements: An industrial survey. In Proceedings of the ICT for Sustainability 2016; Atlantis Press: Dordrecht, The Netherlands, 2016; pp. 69–78. [Google Scholar]
  240. Meisner, D.; Wu, J.; Wenisch, T.F. Bighouse: A simulation infrastructure for data center systems. In Proceedings of the 2012 IEEE International Symposium on Performance Analysis of Systems & Software, New Brunswick, NJ, USA, 1–3 April 2012; pp. 35–45. [Google Scholar]
  241. Tian, H.; Wu, D.; He, J.; Xu, Y.; Chen, M. On achieving cost-effective adaptive cloud gaming in geo-distributed data centers. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 2064–2077. [Google Scholar] [CrossRef]
  242. Zhu, T.; Kozuch, M.A.; Harchol-Balter, M. WorkloadCompactor: Reducing datacenter cost while providing tail latency SLO guarantees. In Proceedings of the 2017 Symposium on Cloud Computing, Santa Clara, CA, USA, 25–27 September 2017; pp. 598–610. [Google Scholar]
  243. Metri, G.; Srinivasaraghavan, S.; Shi, W.; Brockmeyer, M. Experimental analysis of application specific energy efficiency of data centers with heterogeneous servers. In Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing, Honolulu, HI, USA, 24–29 June 2012; pp. 786–793. [Google Scholar]
  244. Taheri, J.; Zomaya, A.Y. Energy efficiency metrics for data centers. In Energy-Efficient Distributed Computing Systems; John Wiley and Sons: Hoboken, NJ, USA, 2012; pp. 245–269. [Google Scholar]
  245. Lee, H. An Analysis of the Impact of Datacenter Temperature on Energy Efficiency. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2012. [Google Scholar]
  246. Fiandrino, C.; Kliazovich, D.; Bouvry, P.; Zomaya, A.Y. Performance and energy efficiency metrics for communication systems of cloud computing data centers. IEEE Trans. Cloud Comput. 2015, 5, 738–750. [Google Scholar] [CrossRef]
  247. Cheng, H.; Liu, B.; Lin, W.; Ma, Z.; Li, K.; Hsu, C.H. A survey of energy-saving technologies in cloud data centers. J. Supercomput. 2021, 77, 13385–13420. [Google Scholar] [CrossRef]
  248. North, M.T.; Kulkarni, A.; Haley, D. Effects of Datacenter Cooling Subsystems Performance on TUE: Air vs. Liquid vs. Hybrid Cooling. In Proceedings of the 2024 23rd IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm), Aurora, CO, USA, 28–31 May 2024; pp. 1–5. [Google Scholar]
  249. Avelar, V.; Azevedo, D.; French, A.; Power, E.N. PUE: A Comprehensive Examination of the Metric; White Paper 49; The Green Grid: Washington, DC, USA, 2012. [Google Scholar]
  250. Zoie, R.C.; Mihaela, R.D.; Alexandru, S. An analysis of the power usage effectiveness metric in data centers. In Proceedings of the 2017 5th International Symposium on Electrical and Electronics Engineering (ISEEE), Galaţi, Romania, 20–22 October 2017; pp. 1–6. [Google Scholar]
  251. Fawaz, A.H.; Mohammed, A.F.Y.; Laku, L.I.Y.; Alanazi, R. PUE or GPUE: A carbon-aware metric for data centers. In Proceedings of the 2019 21st International Conference on Advanced Communication Technology (ICACT), PyeongChang, Republic of Korea, 17–20 February 2019; pp. 38–41. [Google Scholar]
  252. Li, J.; Jurasz, J.; Li, H.; Tao, W.Q.; Duan, Y.; Yan, J. A new indicator for a fair comparison on the energy performance of data centers. Appl. Energy 2020, 276, 115497. [Google Scholar] [CrossRef]
  253. Abdilla, A.; Borg, S.P.; Licari, J. Relating measured PUE to the cooling strategy and operating conditions through a review of a number of Maltese data centres. Energy Rep. 2025, 13, 2612–2623. [Google Scholar] [CrossRef]
  254. Lei, N.; Ganeshalingam, M.; Masanet, E.; Smith, S.; Shehabi, A. Shedding light on US small and midsize data centers: Exploring insights from the CBECS survey. Energy Build. 2025, 115734. Available online: https://www.sciencedirect.com/science/article/abs/pii/S0378778825004645 (accessed on 14 May 2025). [CrossRef]
  255. Huang, H.; Lin, W.; Lin, J.; Li, K. Power Management Optimization for Data Centers: A Power Supply Perspective. IEEE Trans. Sustain. Comput. 2025. Available online: https://ieeexplore.ieee.org/document/10891660 (accessed on 14 May 2025).
  256. Hernandez, L.H.H.; Orozco, M. Measurement of Energy Efficiency Metrics of Data Centers. Case Study: Higher Education. Softw. Eng. Perspect. Intell. Syst. 2020, 1295, 23–35. [Google Scholar]
  257. Azevedo, D.; Patterson, M.; Pouchet, J.; Tipley, R. Carbon Usage Effectiveness (CUE): A Green Grid Data Center Sustainability Metric; White Paper 32; Green Grid: Washington, DC, USA, 2010. [Google Scholar]
  258. Google. Efficiency–Google Data Centers. 2024. Available online: https://datacenters.google/efficiency/ (accessed on 14 May 2025).
  259. Microsoft. Measuring Energy and Water Efficiency for Microsoft Datacenters. 2024. Available online: https://datacenters.microsoft.com/sustainability/efficiency/ (accessed on 14 May 2025).
  260. Rodriguez, J. Dial It In: Data Centers Need New Metric for Energy Efficiency. 2024. Available online: https://blogs.nvidia.com/blog/datacenter-efficiency-metrics-isc/ (accessed on 14 May 2025).
Figure 1. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) process of this systematic review, including keywords, identification, screening, eligibility, and inclusion.
Figure 2. Overview of related Scopus publications on energy efficiency of CDCs.
Figure 3. Overall statistics of Scopus publication types on energy efficiency of CDCs (Journal Papers, Conference Papers, Book Chapters, and Books).
Figure 4. A typical CDC facility, with its energy supply (PDU and Uninterruptible Power Supply (UPS)), and energy consumption components (Heating, Ventilation, and Air Conditioning (HVAC), security, IT, servers, racks, etc.).
Figure 5. Centralized UPS configuration.
Figure 6. Decentralized UPS configuration—Type 1.
Figure 7. Decentralized UPS configuration—Type 2. The base concepts for Figure 5, Figure 6 and Figure 7 are adapted from [98], with the RES concept added to them.
Figure 8. Challenges and limitations of the presented IT-related energy efficiency CDC metrics.
Figure 9. Challenges and limitations of the presented non-IT-related energy efficiency CDC metrics.
Table 1. Comparison of CDC cooling strategies.
| Cooling Strategy | Methodology | Advantages | Disadvantages | Examples |
| --- | --- | --- | --- | --- |
| Air-Based Cooling | Uses CRAC or CRAH units to circulate air around servers. | Simple and widely adopted; low initial cost. | High energy consumption; inefficient for high-density workloads. | [109,110,111,112,113,114,115,116] |
| Cold/Hot Aisle Containment | Separates hot and cold air paths using containment structures. | Improves energy efficiency and reduces hot spots. | Requires proper planning and layout; retrofitting is difficult. | [110,117,118,119,120] |
| Direct-to-Chip Liquid Cooling | Delivers coolant directly to components, such as CPUs, via cold plates. | High cooling efficiency; ideal for high-performance systems. | Higher cost; risk of leaks. | [121,122,123,124,125,126] |
| Immersion Cooling | Submerges components in dielectric fluid for heat transfer. | Excellent thermal performance; quiet operation. | Expensive setup; fluid compatibility and maintenance complexity. | [127,128,129,130,131,132,133,134,135,136] |
| Rear Door Heat Exchangers | Cooled water absorbs heat via coils mounted at the back of racks. | Scalable and effective for dense racks. | Adds rack weight; complex plumbing. | [137,138,139,140,141,142,143,144,145] |
| In-Row Cooling | Places cooling units between server racks for localized cooling. | Targeted cooling; reduces airflow inefficiencies. | High cost; depends on data center layout. | [146,147,148,149,150,151,152,153,154,155,156] |
| Evaporative Cooling | Uses evaporating water to pre-cool intake air. | Very energy-efficient in dry climates. | Ineffective in humid climates; water treatment needed. | [109,157,158,159,160,161,162,163,164,165] |
| Chilled Water Cooling | Chiller cools water that circulates through air handlers. | Suitable for large-scale operations; reliable. | Expensive infrastructure; needs regular maintenance. | [166,167,168,169,170] |
| Economizers | Uses outside air or water for cooling when ambient conditions allow. | Major energy savings; eco-friendly. | Weather-dependent; air filtration often required. | [171,172,173,174,175,176,177,178,179,180] |
| Hybrid Systems | Combines multiple cooling methods (such as air + liquid, or free cooling). | Flexible and efficient under varying loads. | High complexity and initial setup cost. | [123,140,180,181,182,183,184,185,186,187,188,189,190] |
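To give a rough sense of how the choice among the cooling strategies compared above feeds into a facility-level metric such as PUE, the sketch below estimates PUE under a fixed IT load and assumed cooling coefficients of performance (COP). All numbers (IT load, overheads, and the per-strategy COP values) are hypothetical illustrations, not figures from the reviewed sources; real COPs vary widely with climate, load density, and equipment.

```python
# Illustrative sketch only: estimate the PUE that would result from a given
# cooling strategy, assuming a constant IT load and a hypothetical COP.
IT_LOAD_KW = 500.0        # assumed constant IT load
OTHER_OVERHEAD_KW = 25.0  # assumed lighting, UPS losses, and other overheads

# Hypothetical COP values for illustration; real values vary widely.
ASSUMED_COP = {
    "air_based": 2.5,
    "direct_to_chip_liquid": 6.0,
    "evaporative": 10.0,
}

def estimated_pue(cop: float) -> float:
    """PUE = total facility power / IT power, with cooling power = IT load / COP."""
    cooling_kw = IT_LOAD_KW / cop
    total_kw = IT_LOAD_KW + cooling_kw + OTHER_OVERHEAD_KW
    return total_kw / IT_LOAD_KW

for strategy, cop in ASSUMED_COP.items():
    print(f"{strategy}: estimated PUE = {estimated_pue(cop):.2f}")
```

Under these assumptions, moving from air-based cooling (COP 2.5) to evaporative cooling (COP 10) lowers the estimated PUE from 1.45 to 1.15, which is why cooling strategy is the dominant lever on PUE in most facilities.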
Table 2. Cloud Data Center energy efficiency IT-related metrics information.
| Metric | Definition | Primary Use | Examples |
| --- | --- | --- | --- |
| APC | Average power usage of IT equipment over time. | Monitors IT power trends. | [199,200,201] |
| CPE | Compute output per unit of IT power. | Measures IT computational efficiency. | [202,203,204,205,206] |
| DWPE | Workload processed per unit of IT power. | Workload efficiency at IT level. | [51,207] |
| EWR | Energy wasted per unit of computational work. | Energy cost of work. | [208,209] |
| ITEE | Efficiency of IT hardware. | Evaluates hardware efficiency. | [210,211,212] |
| OSWE | System workload output per total energy. | Measures system-level workload efficiency. | [51] |
| PpW | Performance per unit of power. | Measures IT hardware efficiency. | [213,214,215,216] |
| ScE | Server compute output per energy used. | Assesses server-level efficiency. | [217,218,219] |
| SPUE | PUE-like metric for servers. | Server-specific efficiency. | [220] |
| SWaP | Composite of performance vs. space and power. | Space–power performance balance. | [221,222,223] |
Table 3. Cloud Data Center energy efficiency IT-related metrics mathematical details.
| Metric | Formulation | Units | Best Value | Correlation with Other Metrics |
| --- | --- | --- | --- | --- |
| APC | Total IT Power / Time | W | Lower is better | ↑ (ITEE); ↓ (PpW, CPE, ScE) |
| CPE | Compute Output / IT Power | Ops/W | +∞ | ↑ (PpW, ScE); ↓ (APC) |
| DWPE | Workload / IT Power | Tasks/W | +∞ | ↑ (ITEE, CPE); ↓ (APC) |
| EWR | IT Energy / Work Output | kWh/Task | 0 | ↓ (CPE, PpW, ScE); ↑ (APC) |
| ITEE | Rated Performance / Rated Power | % | 1 | ↑ (PpW, CPE); ↓ (EWR) |
| OSWE | Workload / Total System Energy | % | 1 | ↑ (DCeP, DCPE); ↓ (PUE) |
| PpW | Performance / Power | Perf/W | +∞ | ↑ (CPE, ScE, ITEE); ↓ (EWR, APC) |
| ScE | Output / Server Power | Ops/W | +∞ | ↑ (PpW, CPE, ITEE); ↓ (EWR) |
| SPUE | Total Server Energy / Compute Energy | % | 1 | ↑ (PUE); ↓ (PpW, ScE) |
| SWaP | Performance / (Space × Power) | Ops/(m²·W) | +∞ | ↑ (DCPD); ↓ (EWR) |
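As a minimal illustration of how the IT-related formulations in Table 3 are applied, the sketch below computes a few of them (PpW, EWR, and SWaP) from hypothetical measurements. The function names and input values are illustrative choices, not an implementation prescribed by the reviewed sources.

```python
# Illustrative implementations of three IT-related metrics from Table 3.
def ppw(performance_ops: float, power_w: float) -> float:
    """Performance per Watt: higher is better (ideal tends to +infinity)."""
    return performance_ops / power_w

def ewr(it_energy_kwh: float, work_output_tasks: float) -> float:
    """Energy-to-Work Ratio: energy cost per task; ideal value approaches 0."""
    return it_energy_kwh / work_output_tasks

def swap(performance_ops: float, space_m2: float, power_w: float) -> float:
    """SWaP: performance relative to the space and power it consumes."""
    return performance_ops / (space_m2 * power_w)

# Example with assumed measurements (hypothetical server):
print(ppw(1.2e9, 400.0))        # ops per watt
print(ewr(120.0, 3.0e6))        # kWh per task
print(swap(1.2e9, 0.5, 400.0))  # ops per (m^2 * W)
```

Because these metrics share the same IT-power denominator, improving one (e.g., PpW) generally moves the correlated metrics in the directions indicated in the last column of Table 3.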
Table 4. Cloud Data Center energy efficiency non-IT-related metrics information.
| Metric | Definition | Primary Use | Examples |
| --- | --- | --- | --- |
| CADE | Corporate-level efficiency | Corporate energy assessment | [51,224,225] |
| DCA | Data Center Availability | Measures uptime reliability | [226,227] |
| DCeP | Energy productivity | Productivity benchmark | [228] |
| DCgE | Green energy efficiency | Green energy impact | [51,221,228,229,230] |
| DCPD | Power density | Space optimization | [231] |
| DCPE | Performance per energy | Performance efficiency measure | [51,232] |
| DC-FVER | Fixed to variable energy | Cost structure insight | [233,234] |
| DH-UE | IT floor space utilization | Space optimization | [235] |
| DH-UR | Rack utilization rate | Rack deployment insight | [236,237,238,239] |
| EBS | Baseline comparison score | Tracks savings | [240,241,242,243] |
| H-POM | Non-computational (overhead) power consumed by hardware components | Includes everything consumed by the hardware | [244,245] |
| PDE | Power delivery efficiency | Power loss assessment | [51,246] |
| PEsavings | Energy efficiency savings | Savings tracking | [199,247] |
| SI-POM | Infrastructure optimization | Infrastructure focus | [51] |
| TUE | Holistic utilization | Total resource usage | [248] |
| PUE | Total vs. IT energy | Efficiency benchmark | [36,249,250,251,252,253,254,255] |
| DCiE | Infrastructure efficiency | Infrastructure energy ratio | [206,256] |
| CUE | Carbon emissions per IT energy | Carbon footprint metric | [257] |
Table 5. Cloud Data Center energy efficiency non-IT-related metrics mathematical details.
| Metric | Formulation | Best Value | Units | Correlation with Other Metrics |
| --- | --- | --- | --- | --- |
| CADE | (IT Asset Utilization × DCiE) / (IT Capacity × Facility Energy) | 1 | Ratio | ↑ (ITEU, DCiE); ↓ (PUE) |
| DCA | Uptime / Total Time | 1 | Ratio | ↓ (PUEreliability) |
| DCeP | Useful Work / Total Facility Energy | 1 | Output/W | ↑ (DCPE, OSWE); ↓ (PUE) |
| DCgE | (Renewable / Total Energy) × DCiE | 1 | Ratio | ↑ (GEC, DCiE); ↓ (CUE) |
| DCPD | IT Power / Floor Area | Higher (varies) | W/m² | ↑ (SWaP); ↓ (RCI) |
| DCPE | Output / Facility Energy | 1 | Output/W | ↑ (DCeP, OSWE); ↓ (PUE) |
| DC-FVER | Fixed Energy / Variable Energy | Lower is better | Ratio | ↓ (CER) |
| DH-UE | Active Floor / Total Hall | 1 | Ratio | ↑ (DH-UR, TUE) |
| DH-UR | Deployed Racks / Total Racks | 1 | Ratio | ↑ (DH-UE, TUE) |
| EBS | Actual / Baseline | 1 | Ratio | ↑ (PUE); ↓ (PEsavings) |
| H-POM | Custom formula | 1 | Ratio | ↑ (ITEU, GEC, DCiE); ↓ (PUE) |
| PDE | IT Power / Input Power | 1 | Ratio | ↑ (DCiE); ↓ (PUE) |
| PEsavings | (Baseline − Current) / Baseline | 1 | Ratio | ↑ (DCiE); ↓ (PUE, EBS) |
| PUEreliability | Total Redundancy / IT Energy | 1 | Ratio | ↑ (PUE); ↓ (DCA) |
| SI-POM | Custom formula | 1 | Ratio | ↑ (CER, PDE); ↓ (PUE) |
| TUE | (ITEU × DH-UR × DCiE) / 100 | 1 | Ratio | ↑ (ITEU, DH-UR, DH-UE); ↓ (PUE) |
| PUE | Total Power / IT Equipment Energy | 1 | Ratio | ↓ (DCiE, PDE); ↑ (CUE) |
| DCiE | IT Energy / Total Facility Energy | 1 | Ratio | ↑ (PDE); ↓ (PUE) |
| CUE | CO₂ / IT Equipment Energy | 0 | kgCO₂/kWh | ↑ (PUE) |
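The three most widely reported facility-level metrics (PUE, DCiE, and CUE) can be computed directly from the formulations in Table 5. The sketch below does so with hypothetical annual figures; the function names and example numbers are illustrative, not drawn from the reviewed case studies.

```python
# Illustrative implementations of PUE, DCiE, and CUE from Table 5.
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy (ideal = 1)."""
    return total_facility_kwh / it_kwh

def dcie(it_kwh: float, total_facility_kwh: float) -> float:
    """Data Center infrastructure Efficiency: the reciprocal of PUE (ideal = 1)."""
    return it_kwh / total_facility_kwh

def cue(total_co2_kg: float, it_kwh: float) -> float:
    """Carbon Usage Effectiveness: emissions per unit of IT energy (ideal -> 0)."""
    return total_co2_kg / it_kwh

# Assumed annual figures for a hypothetical facility:
total_kwh, it_kwh = 1_500_000.0, 1_000_000.0
print(pue(total_kwh, it_kwh))       # 1.5
print(dcie(it_kwh, total_kwh))      # ~0.667, i.e., 1 / PUE
print(cue(600_000.0, it_kwh))       # 0.6 kg CO2 per kWh of IT energy
```

The reciprocal relationship between PUE and DCiE, and the shared IT-energy denominator of PUE and CUE, are what produce the correlations listed in the last column of Table 5.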
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Safari, A.; Sorouri, H.; Rahimi, A.; Oshnoei, A. A Systematic Review of Energy Efficiency Metrics for Optimizing Cloud Data Center Operations and Management. Electronics 2025, 14, 2214. https://doi.org/10.3390/electronics14112214

