Abstract
Modern data centers are becoming increasingly energy-intensive as AI workloads, hyperscale architectures, and high-power processors push power demand to unprecedented levels. This work examines the sources of rising energy consumption, including evolving IT load dynamics, variability, and the limitations of legacy AC-based power-delivery architectures. These challenges amplify the environmental impact of data centers and highlight their growing influence on global electricity systems. The paper analyzes why conventional grid-tied designs are insufficient for meeting future efficiency, flexibility, and sustainability requirements and surveys emerging solutions centered on DC microgrids, high-voltage DC distribution, and advanced wide-bandgap power electronics. The review further discusses the technical enablers that allow data centers to integrate renewable energy and energy storage more effectively, including simplified conversion chains, coordinated control hierarchies, and demand-aware workload management. Through documented strategies such as on-site renewable deployment, off-site procurement, hybrid energy systems, and flexible demand shaping, the study shows how data centers are increasingly positioned not only as major energy consumers but also as key catalysts for accelerating renewable-energy adoption. Overall, the findings illustrate how the evolving power architectures of large-scale data centers can drive innovation and growth across the renewable energy sector.
1. Introduction
The modern digital world is supported by an ever-growing demand for intricate, high-speed networks and information systems, driving the robust growth and vital positioning of data centers [1]. This expansion, particularly driven by applications such as hyperscale computing and AI, has dramatically increased the power consumption requirements of processing units [2,3]. Specialized components such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs), which are crucial for training Large Language Models (LLMs) with massive parallel processing, consume significantly more power than traditional processing units [4,5]. The resulting power requirements for next-generation processors, which are supplied at ultra-low voltages (e.g., <0.8 V to 1 V), have surpassed 1 kW per device, leading to unprecedented load currents often exceeding 1000 A [6]. Figure 1 depicts this trend using some of the most widely used datacenter chips over the years [7,8,9,10,11,12,13,14,15,16,17,18]. It is important to note that Central Processing Units (CPUs) were more prominent in the past for a variety of datacenter workloads, but in recent years, with the advent of AI, GPUs and TPUs are the preferred choices for these tasks.
Figure 1.
Generations of widely used chips in data center servers. Chip graphics from manufacturer datasheets and www.techpowerup.com ©. The power rating growth trend is visible especially with the advent of AI/ML workloads and GPUs and TPUs becoming the main workhorse of AI servers (Power ratings based on chip maximum TDP).
This escalating power demand translates directly into critical challenges across the entire power conversion chain, necessitating immediate solutions to manage consumption, improve reliability, and optimize power-delivery architectures [19].
As data centers evolve into essential infrastructure supporting daily life and societal function, their substantial energy consumption has become a primary global concern [20]. Currently, global data centers account for approximately 1.7% to 2.2% of the world’s electricity consumption, with annual energy consumption (including data center computing and transmission) estimated at 480 to 660 TWh in 2021 [21]. Historically, U.S. data center energy usage accounted for 1% of the nation’s electricity consumption in 2005, rising to 1.8% of total U.S. electricity supply in 2014 [22], and increasing to 9% in 2020 [23]. Due to the acceleration of AI training, the proportion of global energy consumption by data centers, estimated at 3% in 2017, is projected to increase to 4.5% by 2025. Some extrapolations suggest the demand could rise from 2% today to as much as 7% of global electricity consumption by 2030 [24]. This tremendous power consumption, along with the corresponding carbon emissions, mandates a rapid shift toward sustainable, highly efficient energy systems. Beyond the profound environmental impact of high carbon emissions associated with conventional “brown energy”, the high electricity costs typically account for 40–80% of a data center’s long-term Operating Expenditures (OPEX) [25]. Figure 2 and Figure 3 show the current and projected total installed capacity and total energy consumption of datacenters, respectively, according to a recent report from IEA [26].
Figure 2.
Total installed capacity of data center facilities in GW from 2020 to 2035 (projected) by type.
Figure 3.
Total electricity consumption of data center facilities in TWh from 2020 to 2035 (projected) by type.
Addressing the unprecedented power demands and ensuring system reliability necessitates a fundamental system redesign of the data center Power Delivery Architecture (PDA). Legacy systems are often unable to meet the higher power-density and efficiency requirements of next-generation facilities [27]. A core technological trend in this overhaul is the widespread adoption of DC distribution architectures. Specifically, architectures utilizing High Voltage Direct Current (HVDC) are gaining attention due to their potential for significantly higher power efficiency compared to traditional AC distribution [28,29]. This approach reduces the overall number of conversion stages, thereby boosting reliability and energy efficiency [21]. At the rack level, the shift to the 48 V distribution bus is now a widespread standard, mandatory for supporting increasing power requirements (with racks reaching over 100 kW) and minimizing distribution losses compared to legacy 12 V systems [30,31]. Crucially, the technological enabler for these high-performance architectures lies in advanced power electronics, particularly those leveraging Wide-Bandgap (WBG) devices like SiC and GaN, and advanced integrated magnetics design [6,32,33,34,35,36]. These components are essential for creating converters capable of achieving the high efficiency and high power density required across the entire power flow chain [4].
Beyond optimizing efficiency through electrification, achieving net-zero carbon emissions is a significant objective, driving the mandate to integrate clean energy sources such as solar, wind, and hydrogen directly into data center operations [37]. However, relying solely on Variable Renewable Energies (VREs) presents a significant engineering challenge due to their intermittent, fluctuating, and unpredictable nature, which prevents them from guaranteeing a constant power supply [38]. To mitigate this inherent variability and ensure continuous reliability, future data centers must incorporate sophisticated Energy Storage Systems (ESS) [39]. This often involves deploying distributed Uninterruptible Power Supply (UPS) systems (frequently based on Battery ESS (BESS)) for managing short-term power fluctuations and peak shaving [40]. Furthermore, there is growing interest in integrating Hydrogen Storage Systems (HSS) that utilize electrolyzers and fuel cells for long-term energy storage, offering a cleaner backup alternative to traditional diesel generators [41]. By incorporating local generation and storage, the data center power supply architecture fundamentally evolves into a microgrid, establishing a localized power system capable of flexible operation, maximizing green energy utilization, and minimizing reliance on transmission [39].
The successful convergence of accelerating power demands, the critical need for robust efficiency, and the mandatory push toward decarbonization requires innovative, holistic solutions that harmonize advanced power electronics with intelligent energy management [42]. In addition, isolated bidirectional DC-DC converters are integral to this future, as they are key components for interfacing distributed renewable generation and enabling bidirectional energy flow to and from storage systems within the microgrid environment. These capabilities are essential for the operation of renewable-powered data centers, allowing them to store surplus energy or rapidly switch power direction during grid outages.
The rapid escalation of energy consumption and the mandatory push toward decarbonization have fundamentally repositioned large-scale data centers from mere consumers to key catalysts for accelerating renewable energy adoption and technological innovation. This driving force operates through two primary mechanisms: direct market deployment pull and necessity-driven architectural push. Deployment acceleration occurs as datacenters increasingly engage in off-site renewable sourcing and Power Purchase Agreements (PPAs) to meet their energy demand and sustainability goals. These procurement strategies actively encourage investment in new renewable energy plants, thereby accelerating the decarbonization of local electrical grids. Simultaneously, the severe architectural challenges posed by next-generation IT loads, which demand unprecedented power density and efficiency (like the new OCP requirements of AI servers), necessitate a technological push towards Direct Current Microgrids (DCMGs) and innovative power electronic solutions. Ultimately, the pursuit of carbon neutrality, resilience, and high efficiency requires innovative, holistic solutions that harmonize advanced power electronics with intelligent energy management systems (EMS), illustrating how the evolving power architectures of data centers drive innovation and growth across the renewable energy sector.
To provide a comprehensive understanding of these challenges and potential solutions, this review is structured as follows: Section 2 examines the evolving energy demands, load characteristics, and reliability challenges in modern data centers, emphasizing the limitations of conventional grid-tied architectures. Section 3 explores DC Microgrids (DCMGs) as a key enabling technology for integrating renewables and enhancing system efficiency, covering control coordination, power electronic interfaces, and hybrid microgrid topologies. It focuses on the role of ESS and islanding capabilities in improving resilience during grid disturbances. Section 4 presents various renewable energy integration strategies, including on-site and off-site generation, hybrid renewable systems, and advanced workload management approaches for energy optimization.
Finally, Section 5 synthesizes the findings and presents future research directions to address the challenges in standardization, protection, multi-energy optimization, and intelligent energy management. Together, these sections provide a holistic view of the transition toward high-efficiency, renewable-powered, and resilient data centers, highlighting the critical role of DCMG technologies in enabling the sustainable digital economy of the future.
2. Energy Demands and Challenges in Modern Data Centers
The modern data center is a mission-critical infrastructure that operates around the clock to support the rapid expansion of the IT industry and the broader economy. However, this necessity has led to massive energy consumption and corresponding environmental concerns [43]. Data centers worldwide consumed approximately 205 TWh of electricity in 2018, an amount greater than the annual consumption of entire countries like Denmark and Ireland [44].
2.1. Overview of Power Consumption Patterns and Load Characteristics
Data center energy consumption is generally categorized into energy used by IT equipment and energy used by infrastructure facilities, such as power conditioning and cooling systems. A typical data center comprises three primary load sections: IT loads, cooling and environmental control equipment, and the Internal Power Conditioning System (IPCS), which includes UPS units, Power Distribution Units (PDUs), and Power Supply Units (PSUs) [45].
The distribution of power consumption among these sections varies depending on the specific design and the equipment’s efficiency. Historically, the cooling infrastructure has been the largest energy consumer, accounting for up to 50% of the total energy in a typical data center, according to some statistics, followed by servers and storage at 26%. More recent data suggest that IT equipment comprises the largest segment, consuming around 45% (servers) and 5% (network equipment). Cooling loads typically rank second at approximately 38%. Meanwhile, the power conditioning devices within the IPCS consume about 8% of the total power [45].
The power used by non-IT loads has a significant impact on the load characteristics, efficiency, and carbon footprint of a facility. To characterize how much power is used by the supporting infrastructure, like Heating, Ventilation, and Air Conditioning (HVAC), lighting, security, and auxiliary loads, versus the main load, which is the IT load, like servers, storage, communication systems, and I/O boards, a metric called Power Usage Effectiveness (PUE) is used. PUE is defined as
PUE = Total Facility Power / IT Equipment Power    (1)
Ideal PUE is equal to 1. However, achieving perfect PUE is impossible. PUE for current best-in-class hyperscale data centers is in the range of 1.15 to 1.3 and can exceed 2 for small-scale facilities. Google has reported a fleet-wide Trailing Twelve-Month (TTM) PUE of 1.09 in 2025 [46]. This shows how far facilities have come in improving power usage and managing non-IT power consumed in the plant, thereby reducing carbon footprint for a specific workload.
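As a simple illustration, the PUE figures discussed above can be reproduced with a short sketch (the facility numbers below are hypothetical):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical site: 1000 kW of IT load plus 200 kW of cooling, lighting,
# and other supporting infrastructure.
print(pue(1200.0, 1000.0))  # 1.2, within the range of modern hyperscale facilities
```

A fleet-wide PUE of 1.09, as reported by Google, would correspond to only 90 kW of overhead per 1000 kW of IT load in this sketch.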
Figure 4 shows that, over the years, newer facilities have achieved lower PUE. As a result, a smaller fraction of the total power in the data center is used by supporting infrastructure such as HVAC and lighting. More power is delivered to the IT load, resulting in a lower carbon footprint and lower energy-related OPEX for the facility.
Figure 4.
Share of facility power for each load (IT and supporting infrastructure); the width of each arrow is proportional to its share of power. It can be seen that, over the years, in newer facilities a larger share of the total power entering the facility is consumed by the IT load, which translates to higher facility efficiency for a specific computational workload, a lower PUE, and, considering scale, a lower carbon footprint.
The power consumed by IT equipment includes components such as servers, storage, network switches, and local cooling fans. The Processing Unit (xPU) is typically the most significant contributor to total server power consumption. Saving a single watt at the xPU level can translate to 1.5 W saved at the server level and potentially up to 3 W at the overall data center system level due to the power-cascading effect in the IPCS.
Figure 5 shows an Open Compute Project (OCP) v3 High Power Rack (HPR) 4 IPCS architecture [47] with the respective input power of each stage. It can be seen that the IPCS or PDA of a data center consists of cascading power converters, each with a specific function. As a result, any efficiency improvement or power consumption reduction downstream, near the IT load, can save even more power due to the power-cascading effect. This also shows that on-site integration of renewables, or lower-voltage integration (e.g., connecting to the power rack output rather than Low Voltage AC (LVAC) or Medium Voltage AC (MVAC)), can be more effective at reducing a facility’s carbon footprint for facilities with lower installed capacity, as discussed in detail later. Equation (2) can be used to determine the input power of stage i of a total of j stages, where e_i is the efficiency of each stage:
P_in,i = P_IT / (e_i · e_i+1 · … · e_j)    (2)
For example, for the system shown in Figure 5, the total facility power from the MV grid (excluding non-IT loads) is calculated in Equation (3):
P_MV = P_IT / (e_1 · e_2 · … · e_j)    (3)
Figure 5.
OCPv3HPR3&4-based PDA; each arrow indicates the input power to the corresponding stage.
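The power-cascading effect through such a converter chain can be sketched numerically; the four-stage chain and the efficiencies below are hypothetical, not the Figure 5 values:

```python
from math import prod

def stage_input_powers(p_it_kw: float, efficiencies: list[float]) -> list[float]:
    """Input power of each cascaded conversion stage, grid-facing stage first.

    efficiencies[i] is the efficiency of stage i; the last stage feeds the IT load.
    Input of stage i must cover the IT load plus the losses of all downstream stages.
    """
    return [p_it_kw / prod(efficiencies[i:]) for i in range(len(efficiencies))]

# Hypothetical chain: MV rectifier, UPS, power shelf, board-level regulator.
etas = [0.98, 0.96, 0.975, 0.92]
powers = stage_input_powers(100.0, etas)
print([round(p, 1) for p in powers])  # grid draw exceeds the 100 kW IT load
```

In this sketch, saving 1 kW at the IT load saves roughly 1/(0.98 · 0.96 · 0.975 · 0.92) ≈ 1.19 kW at the grid interface; adding cooling overhead on top of the conversion chain pushes the facility-level multiplier higher still, consistent with the 1.5 W and 3 W figures cited above.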
Data center workloads generally exhibit diurnal load patterns due to variations in user activity, as well as peaks caused by special events and holidays [44]. Modeling the exact power consumption is complex because it depends on multiple, often difficult-to-measure factors, including hardware specifications, computational workloads, application types, and cooling requirements. The power consumed by the hardware, the running software, and the surrounding cooling and power infrastructure are all closely coupled.
Power consumption in a computer system is fundamentally divided into two components: static (baseline) power and dynamic (active) power. Static power is consumed regardless of the system’s operational state, encompassing power waste due to semiconductor leakage currents (in xPUs, memory, I/O), fans, and the minimum power required to run the operating system and idling tasks. Dynamic power varies with the computational load or utilization. Most existing power models focus heavily on lower-level hardware systems, particularly processor power consumption, often overlooking components such as system fans and Solid State Drives (SSDs) [48]. In large hyperscale data centers, such as those built by Meta, these loads exhibit characteristic diurnal usage patterns. A key goal of power management is to improve energy proportionality, aligning power consumption closely with workload utilization to reduce the significant static power consumed during idle periods.
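The static/dynamic split described above is often approximated with a linear utilization model; a minimal sketch with hypothetical server parameters:

```python
def server_power_w(p_static_w: float, p_max_w: float, utilization: float) -> float:
    """Linear power model: static baseline plus utilization-scaled dynamic power."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return p_static_w + (p_max_w - p_static_w) * utilization

# Hypothetical server: 120 W idle (leakage, fans, OS) and 400 W at full load.
for u in (0.0, 0.5, 1.0):
    print(f"utilization {u:.0%}: {server_power_w(120.0, 400.0, u):.0f} W")
```

A perfectly energy-proportional server would have p_static_w = 0; the gap between idle power and zero is exactly what power-management techniques aim to close.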
2.2. Impact of IT Load Variability in AI Loads and Solutions
IT load variability poses a major challenge for efficient data center operation, particularly given the dynamic, often spiky nature of modern workloads, including those associated with AI and data analysis [49]. Many traditional energy management approaches primarily focus on modeling and managing xPU energy consumption. However, this narrow focus overlooks other critical subsystems whose contribution to overall energy consumption is growing [50].
Modern cloud data centers utilize huge storage subsystems and handle increasing communication traffic, making disk and Network Interface Card (NIC) subsystems significant contributors to energy consumption, alongside xPU and memory. Data analysis applications, which form the backbone of many AI and machine learning tasks, are typically classified as either data-intensive (I/O-bound) or computation-intensive [43]. While computation-intensive tasks place the highest demand on the xPU core(s), a holistic energy consumption model must incorporate the characteristics of the application (e.g., transactional web task, xPU-intensive task, or I/O-intensive task) along with the energy consumed by the xPU, memory, disk, and NIC. Ignoring these components leads to inaccurate modeling and ineffective energy optimization algorithms, such as those governing virtual machine migration or resource provisioning [50].
The high variability in IT loads, driven by unpredictable user activity and complex applications, demands robust power delivery and stability solutions. In large-scale, real-life hyperscale data centers, job scheduling decisions aimed at mitigating environmental impact (such as shifting jobs to “greener” hours) are complicated by the highly variable, real-time resource and power usage [51].
2.3. Power Quality and Reliability Requirements
Maintaining power quality and consistency is paramount in data center operation, as highly sensitive IT equipment requires stringent environmental controls and reliable power delivery to ensure continuous, dependable service [52]. The ability of a data center to deliver services depends directly on its reliability, which is defined as the probability that the system will perform its intended function adequately for a specified period under operating conditions. Since data center components are repairable, the index of availability is widely used in reliability modeling [53]. Inherent availability is defined in Equation (4):
A = MTBF / (MTBF + MTTR)    (4)
MTBF is the Mean Time Between Failures and is defined by Equation (5), where λ is the failure rate, i.e., the rate at which failures occur per unit time in a specific interval:
MTBF = 1/λ    (5)
MTTR is the Mean Time to Repair a failure and return the unit to operation.
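These availability relations can be checked numerically; the failure rate and repair time below are hypothetical:

```python
def availability(mtbf_h: float, mttr_h: float) -> float:
    """Inherent availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_h / (mtbf_h + mttr_h)

def downtime_min_per_year(a: float) -> float:
    """Expected annual downtime implied by an availability figure."""
    return (1.0 - a) * 365 * 24 * 60

# Hypothetical unit: failure rate lambda = 1e-4 failures/h, so MTBF = 1/lambda
# = 10,000 h, with a 4 h mean time to repair.
a = availability(1.0 / 1e-4, 4.0)
print(round(a, 6), round(downtime_min_per_year(a), 1))
```

Running the conversion in reverse, a Tier IV availability of 99.995% corresponds to roughly 26 minutes of expected downtime per year.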
Based on the availability index, the measure of reliability in data centers is reflected through uptime and the ability to meet mandated Service Level Agreements (SLAs). The Uptime Institute established a formal Tier Classification system (Tier I through Tier IV) to standardize design topologies and required levels of availability. This classification evaluates the data center’s ability to withstand failures in the power supply system and allow maintenance activities. The tier level for electrical infrastructure is primarily determined by the availability of the IPCS, which includes the backup generator supply, PDA, and utility interfaces. Tier IV, the most reliable, requires high redundancy in the power path to critical loads. However, this deterministic approach for calculating availability often overlooks vital factors such as the probability of grid supply outages, component failure rates, and random failure modes within the complex IPCS [53]. Table 1 shows the required availability for each of the tier classifications [54].
Table 1.
Expected availability, redundancy, and downtime of different tier classifications, based on awarded certifications.
The reliability and availability tiers are another aspect that can be both a challenge and a driving force for renewable-sector growth in the data center industry. Renewable Energy System (RES) integration offers new opportunities to improve a facility’s availability tier: developing the IPCS with on-site RES into a microgrid with islanding capabilities significantly improves the facility’s tolerance to IPCS failures, especially in the MV frontend, UPS, PFC, or power racks. This opens up new opportunities for RES integration and carbon-footprint reduction in data centers through multi-objective Capital Expenditure (CAPEX) and OPEX optimization, with availability-tier improvement as an additional dimension of system upgrade.
2.4. Limitations of Conventional Grid-Tied Architectures
Traditional data center operations are primarily dependent on the electrical grid for power [55]. This conventional reliance on the grid creates significant sustainability and operational limitations, especially concerning carbon emissions, peak power management, and the integration of renewable energy [56].
2.4.1. Environmental Constraints and Carbon Footprint
The primary limitation of a conventional grid-tied architecture is its direct link to carbon pollution. Due to data centers’ enormous and growing energy usage, they indirectly contribute substantially to global CO2 emissions, estimated to reach up to 720 million tons by 2030. Meeting ambitious environmental goals, such as those set by the UN’s Carbon-Free Energy Compact or government initiatives aiming for carbon-free electricity, requires decoupling data center power consumption from fossil-fuel-intensive grid sources [57]. Figure 6 shows total carbon emissions from data centers around the world in megatons, derived from the correlation between electricity consumption and carbon emissions [58].
Figure 6.
Yearly CO2 emissions from data centers around the world (includes projections).
2.4.2. Economic and Operational Inflexibility
Electricity pricing exhibits considerable variability over time and across locations, influenced by factors such as fuel costs and regulatory constraints. For data centers, power costs can become a significant economic burden, sometimes surpassing the cost of purchasing the hardware itself. Conventional grid-tied systems expose operators to high electricity demand charges, especially during peak load times [59]. Managing data center load becomes complex because the interactions among hardware, software, cooling, and the power infrastructure are highly linked. Integration of RES can offer opportunities to reduce power cost related to OPEX, even if RES capacity is not sized to fully support facility power consumption.
2.4.3. Inability to Utilize VRE
The increasing penetration of VRE sources like solar and wind into the grid creates a challenge known as curtailment, in which excess VRE power cannot be absorbed by the grid or used immediately. Conventional architectures struggle to effectively utilize this excess, or “stranded,” green power. To address this, concepts like geographic load balancing and temporal workload shifting have been proposed to mitigate curtailment and reduce greenhouse gas emissions by migrating delay-tolerant data center workloads to areas or times when cleaner or excess power is available. This demonstrates the limitations of a static, grid-dependent architecture [60].
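The temporal-shifting idea described above can be sketched with a greedy scheduler that places deferrable energy into the cleanest hours first; the carbon-intensity profile, job sizes, and per-hour cap below are all hypothetical:

```python
def schedule_deferrable(total_kwh: float, carbon_g_per_kwh: list[float],
                        max_kwh_per_hour: float) -> list[float]:
    """Greedy carbon-aware plan: fill the lowest-carbon hours up to a per-hour cap."""
    plan = [0.0] * len(carbon_g_per_kwh)
    remaining = total_kwh
    # Visit hours from lowest to highest carbon intensity.
    for hour in sorted(range(len(carbon_g_per_kwh)), key=carbon_g_per_kwh.__getitem__):
        take = min(max_kwh_per_hour, remaining)
        plan[hour] = take
        remaining -= take
        if remaining <= 0.0:
            break
    return plan

# Hypothetical 6 h horizon where midday solar depresses grid carbon intensity.
intensity = [450, 400, 120, 100, 380, 460]  # gCO2/kWh
print(schedule_deferrable(55.0, intensity, max_kwh_per_hour=25.0))
```

Real geographic load balancing adds migration costs, SLA deadlines, and forecast uncertainty on top of this toy objective, but the core mechanism, steering delay-tolerant work toward otherwise-curtailed VRE output, is the same.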
2.4.4. The Need for Hybrid and Microgrid Solutions
The rising volatility of renewable energy generation underscores the need for solutions beyond simple grid-tie-in. Emerging strategies advocate integrating data centers into more complex power systems, such as microgrids that incorporate diversified generation sources [61].
Hybrid power supply systems, utilizing on-site renewable energy sources (such as photovoltaic solar panels and wind turbines) coupled with ESS and potentially diesel generators, are crucial for reducing reliance on the conventional grid and ensuring quality service despite intermittent renewable energy supply [62]. Microgrid planning must focus on capacity configuration that balances CAPEX with OPEX while robustly handling VRE variability and unpredictable load demands. By adopting technologies such as electric storage and managing batch loads, data centers can operate more efficiently and mitigate the economic impact of VRE volatility, thereby moving beyond the constraints imposed by conventional grid dependence [61,63].
3. DC Microgrids: Enabling Renewable Integration in Future Data Centers
The growing scale of data centers, driven by the rapid expansion of cloud computing and AI, has positioned them as colossal energy consumers, imposing significant strain on traditional power grids and raising serious concerns about sustainability and reliability [64]. To address these profound challenges, integrating data centers into microgrids has emerged as a critical architectural solution, enhancing energy efficiency and reliability while facilitating the adoption of RES [65]. Microgrids, defined as self-contained systems integrating local power generation, energy storage, and flexible demand, offer a pathway toward self-reliant operations capable of sustaining critical loads even during significant utility grid disturbances [66].
3.1. Open Compute Project and the Future of Datacenters from the Perspective of Tech Leaders
The Open Compute Project (OCP) is a collaborative community focused on developing and sharing specifications for efficient, high-availability data center infrastructure, particularly rack and power architecture [67]. Historically, data centers relied on traditional power delivery, featuring multiple front-end AC power supplies per server, typically with N + N redundancy [47]. OCP drove the shift toward centralized rack power delivery, a key innovation being the introduction of the 48 V DC distribution system (often operating around 50–54 V) [30]. This architecture, exemplified by Open Rack V2.0 and V3, involves centralized, scalable power shelves distributing 48 V DC over a standard bus bar. The move from 12 V to 48 V reduced current by a factor of 4 at the same power, significantly increasing efficiency and simplifying current distribution requirements within the rack. The 48 V OCP architecture remains a staple for server and storage systems [29,68], and the latest HPR versions of OCP V3 have significantly increased capacity, moving from 3 kW per PSU to 5.5 kW, supporting rack power needs up to 190 kW [31,69].
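The factor-of-4 current reduction from the 12 V to 48 V migration, and the corresponding conduction-loss reduction, can be verified with a quick calculation (the rack power and bus-bar resistance are hypothetical):

```python
def bus_current_a(power_w: float, voltage_v: float) -> float:
    """Current drawn on the rack bus for a given distribution voltage."""
    return power_w / voltage_v

def conduction_loss_w(power_w: float, voltage_v: float, r_bus_ohm: float) -> float:
    """I^2 * R conduction loss in the bus-bar path."""
    i = bus_current_a(power_w, voltage_v)
    return i * i * r_bus_ohm

# Hypothetical 15 kW rack with a 1 mOhm bus-bar path.
p_w, r_ohm = 15_000.0, 1e-3
print(bus_current_a(p_w, 12.0), bus_current_a(p_w, 48.0))  # 1250.0 A vs 312.5 A
print(conduction_loss_w(p_w, 12.0, r_ohm) / conduction_loss_w(p_w, 48.0, r_ohm))
```

Quadrupling the voltage quarters the current and cuts conduction loss by (48/12)² = 16; the same reasoning is what pushes AI racks beyond 48 V toward ±400 Vdc distribution.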
However, the rapid escalation of power demands driven by AI/ML workloads, with racks nearing or exceeding 100 kW, is challenging the limits of 48 V DC distribution. At these higher power levels, 48 V requires increasingly larger and heavier bus bars, necessitates active cooling for higher currents, and requires excessive rack space for power components such as AC/DC converters and Battery Backup Units (BBUs) [70]. To overcome these limitations, the future architecture involves migrating to HVDC distribution, specifically the bipolar +/−400 Vdc system, which delivers power with lower current and minimal distribution loss (near 0%). Adopting +/−400 Vdc enables the AC/DC power conversion and battery backup systems to be relocated off-rack, often into a dedicated “Sidecar” power rack. This disaggregation strategy frees up the premium rack space entirely for computing payload (GPUs or TPUs), maximizes rack density, and offers enhanced energy efficiency and compatibility with emerging microgrid solutions. This high-voltage approach leverages the mature component ecosystem initially developed for the EV industry, positioning +/−400 Vdc as a crucial step toward future-proofing data centers for even higher-density requirements [71].
3.2. DC vs. AC vs. Hybrid Microgrids in Data Center Applications
The choice of microgrid topology (AC, DC, or hybrid AC/DC) is critical for data center applications, given the facility’s immense energy demands and strict reliability requirements [65].
DCMGs are rapidly gaining traction, particularly because they align well with the native characteristics of RES and ESS. A DCMG minimizes unnecessary conversion steps, directly reducing the associated losses and increasing overall energy efficiency [72]. Figure 7 shows a possible DCMG implementation in the OCPv3HPR4 architecture. The main advantage of this architecture is the straightforward integration of RES and ESS enabled by the smaller number of conversion stages, which in turn lowers CAPEX and OPEX thanks to a simpler IPCS and reduced power loss.
Figure 7.
DCMG-based data center facility with OCPv3HPR4 PDA. The ESS and RES are integrated into the bipolar HVDC facility distribution system.
Key advantages of DC systems in the data center environment include the following:
- Simplified control and stability: DC systems are inherently free from complex issues that plague AC grids, such as frequency stability, reactive power compensation, and phase synchronization requirements. Control complexity is consequently reduced in DCMGs [72].
- Efficiency: DC transmission is often considered more efficient than traditional AC transmission. Modern electronic loads, which constitute the core of data center IT equipment, are inherently DC consumers; supplying them directly with DC eliminates the need for internal AC-to-DC rectification stages, thereby reducing losses [21,73].
- Reliability: The bipolar DCMG configuration offers enhanced reliability because the remaining two healthy lines can still supply loads should one wire experience a fault. Furthermore, studies focusing on low-voltage bipolar DCMGs aim to provide super high-quality distribution, which is highly desirable for sensitive data center loads [74].
Despite these benefits, DC systems present significant challenges:
- Protection complexity: DC systems lack the natural current zero crossing inherent in AC systems, making fault current interruption and arc quenching considerably more complex and challenging for protection schemes [75].
- Inertia issues: due to the high penetration of Power Electronics Converters (PECs) interfacing distributed generators, DCMGs exhibit low rotational inertia, which can lead to rapid voltage fluctuations and stability issues when disturbances occur. Integrating ESS is essential for providing adequate inertia support [75].
- Standardization: the architecture and implementation of DCMGs are still hampered by a lack of dedicated standards and legislation, especially regarding voltage levels and safety protocols. International partnerships, such as the CurrentOS organization, aim to bridge the gap and provide technical documents to unify DCMG implementations and remove obstacles to the development of new facilities based on DCMG [76].
- Limited availability of off-the-shelf (OTS) components: Because the industry has long depended on OTS components, new-generation PDAs for data centers, such as DCMG-based or power-rack-based (Google Sidecar) architectures, are constrained by the scarce supply of suitable OTS components. This limitation is expected to be overcome soon by adapting OTS components from the EV industry, which widely utilizes DC systems, including distribution, power conversion, protection, and control, with minimal changes for the data center industry.
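The low-inertia concern above is commonly addressed at the converter level with droop and virtual-inertia control. As a hedged illustration, not a design from the cited works, a minimal V-I droop plus virtual-capacitance sketch is shown below; the 380 V bus level and both gains are assumed values:

```python
def droop_voltage_ref(i_out_a, v_nom=380.0, r_droop=0.5):
    """V-I droop: a converter lowers its voltage reference as it exports
    more current, so parallel units share load without communication.
    The 380 V bus and 0.5 ohm droop gain are illustrative assumptions."""
    return v_nom - r_droop * i_out_a

def virtual_inertia_current(dv_dt, c_virtual=0.2):
    """Virtual-capacitance term: inject current opposing fast bus-voltage
    changes (dv/dt), emulating the inertia support an ESS converter
    would provide to the low-inertia DC bus."""
    return -c_virtual * dv_dt
```

For example, a bus sagging at 50 V/s would command a 10 A supporting injection from the ESS interface under the assumed virtual capacitance.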
Figure 8 summarizes the challenges and main benefits of DCMG for data centers.
Figure 8.
Advantages and limitations of DCMG as an alternative to current PDAs in data centers.
AC microgrids typically suffer from greater complexity due to the need to manage reactive power flow, ensure synchronization among distributed generators, and address voltage magnitude and phase angle differences, all of which complicate system stability and efficiency [77].
Hybrid AC/DC microgrids aim to capture the best aspects of both worlds by employing an interlinking converter to manage power flow between interconnected AC and DC buses [78]. This approach allows for the integration of traditional AC infrastructure alongside native DC sources and loads. While conceptually flexible, hybrid systems introduce substantial complexity in terms of overall system structure and coordination control strategies. Research on hybrid grids is focused on developing coherent models and seamless transitions between operating modes [79].
Table 2 shows the advantages and challenges of a DCMG through the lens of a techno-financial analysis. Because numerous factors can affect energy and component prices at a given facility, quantitative figures would be unreliable here. Ref. [80] provides a techno-financial comparison of AC and DC systems for PV integration; it shows that although DC systems currently have higher CAPEX, the long-term OPEX compensates for this thanks to higher efficiency.
Table 2.
Qualitative comparison of the OPEX and CAPEX of some aspects of legacy AC systems and DCMGs.
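The CAPEX/OPEX trade-off discussed above can be made concrete with a discounted total-cost-of-ownership sketch. All numbers below are illustrative assumptions, not data from Ref. [80]:

```python
def total_cost_of_ownership(capex, annual_opex, years=10, rate=0.05):
    """CAPEX up front plus the present value of a constant annual OPEX
    over the planning horizon, discounted at `rate`."""
    pv_opex = sum(annual_opex / (1 + rate) ** t for t in range(1, years + 1))
    return capex + pv_opex

# Hypothetical facility: the DC system has 20% higher CAPEX but 20%
# lower OPEX thanks to fewer conversion stages.
tco_ac = total_cost_of_ownership(capex=1.0e6, annual_opex=0.30e6)
tco_dc = total_cost_of_ownership(capex=1.2e6, annual_opex=0.24e6)
```

Under these assumed figures, the DC system's lower OPEX overtakes its CAPEX premium within the 10-year horizon, mirroring the qualitative conclusion drawn from Ref. [80].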
It is also important to note that for future AI and LLM data centers, DC distribution is a necessary upgrade to enable the very high power densities required by 100 kW racks. DC distribution increases the available space inside the PSUs for upgrades to higher power ratings and opens up new possibilities for PoL converters to supply the new power-hungry chips. Consequently, DCMGs can be expected to perform even better quantitatively in data centers than the available data from rural, urban, or industrial cases suggests, because they offer substantially more advantages in this setting.
For data centers specifically, the operational efficiency afforded by DC distribution is a major driver, offering improved efficiency and reliability for sustainable data center design. The main obstacles to full implementation of such systems in new facilities are the challenges inherent to DCMGs, the lack of standardization, and, most importantly, the lack of off-the-shelf (OTS) components, a consequence of the long history of dependency on AC as the primary PDA in data centers. Concepts like the super-UPS address these challenges by offering a first step: upgrading existing subsystems, such as the main UPS, which is part of every data center PDA. Figure 9 shows the super-UPS system. The idea is to utilize the inner DC link of the Back-to-Back (BTB) converter of the online UPS to create a smaller-scale DCMG that can incorporate multiple RES and ESS into the facility in a DC architecture, without multiple DC-AC and AC-DC conversion stages or the stability challenges of AC systems [84]. The Green Data Net (GDN) UPS is a similar concept, enabling RES integration without significant system change while also improving system redundancy [39].
Figure 9.
Super-UPS, a first step towards hybrid microgrids and DCMGs in data centers. RES and ESS are integrated using the DC link of BTB converter. Super-UPS can facilitate microgrid integration into legacy data center as it does not change the PDA.
3.3. Power Electronic Interfaces and Control Coordination
The functionality and performance of a data center microgrid are intrinsically tied to the performance of its power electronic interfaces and the sophistication of the control coordination strategies employed [22]. These interfaces manage the flow of power among diverse sources, storage systems, the loads, and the utility grid. Power electronic interfaces in a typical grid-interactive DCMG architecture include the following:
- RES: RESs are commonly coupled to the DCMG via unidirectional converters. These converters implement MPPT algorithms (such as the Perturb and Observe method) to maximize energy extraction under varying environmental conditions [85].
- ESS: ESSs utilize bidirectional DC/DC converters to manage charging and discharging dynamically. These devices must handle bidirectional power flow [72].
- Grid interface: the connection to the external utility grid is managed by a Grid-Side Converter (GSC), typically Cascaded H-Bridge (CHB) or Modular Multilevel Converter (MMC) based, coupled through a transformer. The GSC's primary function is active power exchange at unity power factor, but it is also utilized for ancillary services such as reactive power support or harmonic mitigation. It is worth noting that in legacy AC data center PDAs, power factor correction and harmonic mitigation were implemented by the central UPS, which was typically a BTB AC/AC converter [85].
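As an illustration of the MPPT logic mentioned for the RES interface, a minimal Perturb and Observe iteration is sketched below. The toy PV curve and step size are assumptions for demonstration, not values from the cited works:

```python
def perturb_and_observe(p, p_prev, v, v_prev, step=0.5):
    """One P&O iteration: keep perturbing the voltage reference in the
    same direction while power rises; reverse direction when it falls."""
    if p >= p_prev:
        direction = 1 if v >= v_prev else -1
    else:
        direction = -1 if v >= v_prev else 1
    return v + direction * step

def pv_power(v):
    """Toy PV power curve with its maximum power point at 30 V
    (hypothetical, for illustration only)."""
    return 900.0 - (v - 30.0) ** 2

# Climb the curve from a low starting voltage.
v_prev, v = 20.0, 20.5
for _ in range(100):
    v_prev, v = v, perturb_and_observe(pv_power(v), pv_power(v_prev), v, v_prev)
```

The reference settles into the characteristic small oscillation around the maximum power point, here near 30 V, which is the expected steady-state behavior of fixed-step P&O.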
A detailed study of PECs for DCMGs, and specifically data center DCMGs, is beyond the scope of this review. However, PECs are the fundamental building blocks that enable the complex control hierarchy required to integrate RES and implement a DCMG for a critical load such as a hyperscale data center. Any improvement in PECs directly translates into more degrees of freedom and more sophisticated control options, which in turn corresponds to increased RES penetration in data center facilities.
Control Coordination and Hierarchy
Effective microgrid operation, especially when sensitive data center loads are involved, relies on multilayered control coordination [75].
- Lower level control (converter control): This involves high-speed inner control loops responsible for current control and specific device performance (e.g., MPPT). In grid-connected mode, the GSC uses an inner and an outer loop: the outer loop regulates the DC bus voltage magnitude, and the inner loop regulates the input current.
- Middle/upper level control (coordination and operation modes): this layer dictates the roles of the converters based on system needs, managing the transition between operating modes, most notably grid-connected mode and islanded mode [85]. This layer also controls the ESSs, their share of the power supply (e.g., in islanded mode), and their State of Charge (SOC).
Main upper level control systems include:
- DC bus signaling: a prevalent decentralized coordination strategy involves using the DC bus voltage level as the communication signal. Different voltage thresholds are assigned to converters; crossing a threshold triggers a specific action, such as an energy storage unit beginning to charge or discharge. This mechanism enables power electronic devices to perform smooth switching between various operational modes [86].
- Voltage stabilization roles: in grid-connected mode, the bi-directional AC/DC converter (GSC) is primarily responsible for stabilizing the DC bus voltage magnitude. However, in islanded mode, this crucial role shifts, and the bidirectional DC/DC converter becomes the voltage source, maintaining a stable DC voltage for critical loads [87].
- Hybrid storage management: sophisticated controllers manage hybrid energy storage by assigning responsibilities based on storage characteristics. Control filters and rate limiters are used to separate power components with different dynamics. For example, in a battery and supercapacitor-based Hybrid Energy Storage System (HESS), the battery handles the low-frequency (slow) dynamics, thereby extending its life, while the supercapacitor handles the high-frequency (fast) dynamics. A dedicated controller regulates the energy flow of each device according to the power requirement [88].
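A minimal sketch of the DC bus signaling rule described above: the ESS converter reads the bus voltage and selects its mode from assigned thresholds. The 380 V level and the ±2% band are illustrative assumptions, not standardized values:

```python
def ess_mode(v_bus, v_nom=380.0, band=0.02):
    """Decentralized DC-bus-signaling rule for an ESS converter: the bus
    voltage itself is the coordination signal, so no communication link
    between converters is required."""
    if v_bus > v_nom * (1 + band):   # surplus generation raises the bus
        return "charge"
    if v_bus < v_nom * (1 - band):   # a deficit drags the bus down
        return "discharge"
    return "idle"                    # within the normal operating band
```

Crossing a threshold triggers the corresponding mode change, which is exactly the smooth mode-switching mechanism described in [86].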
3.4. Energy Management Systems and Demand Response Integration
The EMS is the intelligence core of a data center microgrid and its uppermost control layer, responsible for optimal economic operation, resource scheduling, and managing the dynamic interaction between energy supply and computational demand [89].
3.4.1. EMS Objectives and Functions
The primary goals of the EMS in a DCMG context are multifaceted [90]:
- Cost and environmental optimization: minimizing long-term operational costs, including electricity bills, generation costs, and ESS depreciation [65]. Simultaneously, EMS aims to maximize the use of renewable energy and reduce carbon dioxide emissions [91].
- Resource scheduling: coordinating the scheduling of on-site generation units (such as combined heat and power units, micro gas turbines, or thermal units), utility power procurement, and the allocation of computational workloads [92].
- Dynamic response: leveraging real-time data and forecasting techniques for dynamic energy balancing, particularly managing battery storage charge/discharge cycles and optimizing power distribution [90].
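A toy version of the EMS scheduling task described above: charge the battery in cheap hours and discharge it in expensive ones. This greedy average-price rule stands in for the forecasting-based optimization a real EMS would use; capacities, rates, and prices are hypothetical:

```python
def schedule_battery(prices, load_kw, cap_kwh=100.0, rate_kw=50.0):
    """Greedy EMS rule: charge when the hourly price is below average,
    discharge (up to the load) when above. Returns grid draw per hour.
    Assumes a lossless battery and one-hour time steps."""
    avg = sum(prices) / len(prices)
    soc, grid = 0.0, []
    for price, load in zip(prices, load_kw):
        if price < avg and soc < cap_kwh:
            charge = min(rate_kw, cap_kwh - soc)
            soc += charge
            grid.append(load + charge)   # buy extra while it is cheap
        elif price > avg and soc > 0:
            discharge = min(rate_kw, soc, load)
            soc -= discharge
            grid.append(load - discharge)  # shave the expensive hours
        else:
            grid.append(load)
    return grid

prices = [10.0, 10.0, 30.0, 30.0]   # hypothetical two-level tariff
load = [40.0, 40.0, 40.0, 40.0]     # flat IT load in kW
grid_draw = schedule_battery(prices, load)
```

Under this toy tariff the electricity bill drops from 3200 to 1800 (arbitrary units), illustrating how even a simple price-aware storage policy reduces operating cost.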
3.4.2. Demand Response Integration via Workload Flexibility
Data centers possess immense potential for DR due to their status as large, highly automated, and controllable loads. Unlike traditional industrial facilities, data centers can achieve DR by modulating IT operations, shifting load temporally, and manipulating power consumption profiles [93].
Key mechanisms for integrating DR include the following:
- Workload scheduling and shifting: advanced algorithms overcome the traditional treatment of IT load as uncontrollable and realize high-density IT power load controllability [94]. This is achieved by exploiting the characteristics of delay-tolerant computational workloads, which are considered highly promising, flexible resources for power regulation [95].
The optimization process involves establishing time-shifting models for different types of delay-tolerant tasks, such as short-running deferrable, long-running continuous, and long-running interruptible workloads, to participate in day-ahead power scheduling [95].
Optimal scheduling with task transfer can effectively realize the transfer of both electricity and cooling loads, subsequently improving the performance of the Integrated Energy System [92].
- Dynamic voltage frequency scaling: servers utilize dynamic voltage frequency scaling techniques to scale the service rate and electrical power of data processing dynamically. This capability enables servers to operate in various states, allowing the data center to participate in integrated demand response by simultaneously optimizing energy and information flows [96].
- Thermal inertia exploitation: data centers generate significant waste heat, which links the electricity load, waste heat, cooling load, and workload in a deep relationship. The intrinsic thermal inertia of the inside air and infrastructure permits flexible cooling management. By setting an acceptable temperature range, the data center can relax strict constraints, making the cooling process responsive to energy availability and cost signals [96,97]. This waste heat can also be recovered and optimally scheduled alongside other resources to improve overall energy efficiency [94].
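The time-shifting idea for deferrable workloads can be sketched as a greedy day-ahead placement: each one-hour deferrable job is moved into the feasible hour with the largest remaining renewable surplus. This is a strong simplification of the task models in [95], which also cover long-running and interruptible jobs; all numbers are hypothetical:

```python
def shift_deferrable_jobs(jobs, renewable_kw, base_load_kw):
    """Greedy day-ahead shifting: place each deferrable one-hour job
    (name, power_kw, deadline_hour) into the feasible hour with the
    largest remaining renewable surplus."""
    surplus = [r - b for r, b in zip(renewable_kw, base_load_kw)]
    placement = {}
    for name, power, deadline in sorted(jobs, key=lambda j: j[2]):
        hour = max(range(deadline + 1), key=lambda h: surplus[h])
        placement[name] = hour
        surplus[hour] -= power
    return placement

# Hypothetical profile: solar surplus peaks in hour 2, so jobs are
# shifted into the highest-surplus feasible hours.
plan = shift_deferrable_jobs(
    jobs=[("batch_a", 30.0, 2), ("batch_b", 30.0, 3)],
    renewable_kw=[10.0, 50.0, 80.0, 20.0],
    base_load_kw=[10.0, 10.0, 10.0, 10.0],
)
```

Here "batch_a" lands in the sunniest hour and "batch_b" takes the next-best slot, flattening the grid draw exactly in the spirit of day-ahead power scheduling.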
Figure 10 shows a DCMG-based data center facility with integrated RES and ESS, capable of grid-connected or islanded operation. The lower-level operating modes are shown for each PEC interface. The figure illustrates the numerous combinations of operating modes across the PECs that must be managed by the upper control layers and, ultimately, by the EMS, clarifying the complexity of a DCMG-based data center EMS and control scheme, especially considering the reliability requirements of data centers. Each operating mode can be triggered based on load power consumption, utility grid conditions, the SOC of the ESSs, the generation of green-fuel power plants, and solar and wind conditions. Moreover, the GSC, when operating as a rectifier or inverter, can operate in several sub-modes, such as grid-forming and grid-following source [98] or load [99], as well as voltage control mode when rectifying. It is worth noting that the modes in Figure 10 are considerably simplified, as the control of the BESS, HSS, GSC, solar, and wind subsystems each involves complex control systems that are beyond the scope of this study.
Figure 10.
DCMG-based data center control layers and operation modes of the GSC, RES, ESS, and load. The complexity of control and the large number of possible operational states of the system, arising from the many operational states of each sub-system, are illustrated.
4. Renewable Energy Integration Strategies
The solution to reducing data center carbon footprints proposed by major cloud providers and researchers involves significantly increasing the use of RES to meet power demand [100]. However, integrating RES like PV and wind power introduces complexities due to their inherent characteristics: variability, intermittency, and unpredictability [97]. This instability necessitates sophisticated strategies for energy supply management, storage optimization, and workload adaptation, forcing a significant paradigm change in how energy and computation activities are managed [39]. The following sections explore the major strategies employed to integrate RES effectively into data center operations.
4.1. On-Site vs. Off-Site Renewable Sourcing
Data center operators generally pursue renewable energy through several channels, which can be broadly classified by energy source: on-site generation, off-site generation, power purchase agreements, and renewable energy certificates [101].
4.1.1. On-Site Renewable Generation
On-site generation involves physically installing renewable energy plants directly at the data center facility. This approach transforms the data center from a mere consumer into an energy prosumer [101].
The most common on-site sources investigated for data centers are PV and wind turbines [102]. Other emerging green power supplies can serve as controllable baseload power, including fuel cells, biofuel-based gas turbines, and biomass generators [102]. The DATAZERO project, for example, focuses on powering a data center exclusively with locally generated solar and wind power [103].
A primary benefit of co-location is the minimization of energy transmission and distribution losses, which can be up to 15% when energy is transported over long distances via the grid [104]. Self-generation grants data center operators greater flexibility and control over capacity planning and daily power management. Furthermore, on-site generation can help reduce peak grid power costs [101]. By reducing reliance on the grid, operators can also minimize reliance on fossil fuels used by power utilities, which can be costly and less green [42].
Despite the advantages, on-site RES adoption faces several challenges. Due to the inherent characteristics of solar and wind, the power supply is often intermittent and variable [20]. This requires compensatory mechanisms, such as energy storage or grid access, to maintain stability. The capital costs and required space are also significant barriers. While studies suggest that space requirements and capital costs for solar may decrease rapidly in the future, solar energy currently consumes substantial space and remains relatively expensive [105]. For instance, a cloud provider like Apple invested in creating renewable energy plants globally, achieving a large cumulative installed power capacity predominantly from solar (77.2%) and wind (21.6%) by 2020 [37]. However, the high financial investment required for this type of installation may be impossible for smaller IT companies. Moreover, the data center’s location may not be optimal for renewable energy production.
4.1.2. Off-Site Renewable Sourcing
Off-site renewable sourcing generally refers to grid-centric approaches in which a data center purchases renewable energy generated elsewhere. This separation allows generators to be optimally placed for maximum productivity, while the data center operator focuses on efficiently managing the IT facility [101].
Data center operators frequently use Power Purchase Agreements (PPAs), which are contracts to purchase renewable energy from a third-party producer at a negotiated and often fixed price [37]. These agreements typically include bundled certification, allowing the operator to claim that the purchased electricity comes from a Carbon-Free Energy source. RECs function similarly, enabling operators to claim that a certain amount of their electricity is sourced from renewable sources [39]. PPAs also accelerate the decarbonization of local electrical grids by encouraging investment in new renewable energy plants.
A key challenge of off-site approaches is that the physical electrons supplying the data center must still travel through the general electrical grid. Even after a PPA is signed, the electricity delivered to the data center typically originates from the nearest generating plant, which may not be renewable. Furthermore, transporting electricity over long distances incurs energy losses. Grid-centric approaches also mean the data center is entirely dependent on the grid’s availability and reliability [42].
Figure 11 summarizes the RES integration strategies and depicts the main actors in the process.
Figure 11.
Strategies of RES integration: through on-site RES (owned by the data center) or off-site RES via PPAs and RECs with the grid operator, who manages a number of PPAs with the private renewable energy sector, which may include a combination of different RES types and ESS integrated into the grid.
4.2. Hybrid Renewable Systems and Optimization of Energy Mix
Hybrid renewable energy systems are configurations that deliberately combine multiple energy sources (intermittent RES such as PV and wind turbines, baseload generators like fuel cells and natural gas turbines, and backup storage like batteries and hydrogen) to overcome the limitations of relying on a single source, thereby enhancing overall reliability and efficiency [103].
A robust hybrid system might feature a mix such as a natural gas turbine, PV, wind turbines, and battery storage. The natural gas turbine provides a baseload supplement to the intermittent solar and wind sources. The most complex autonomous systems, such as those investigated in the DATAZERO project, incorporate PV, wind turbines, battery storage for short-term balance, and hydrogen storage (electrolyzer/fuel cells) for long-term seasonal stability [39,103].
The fundamental goals of hybrid RES optimization are multi-objective: maximizing renewable penetration, minimizing the Levelized Cost of Electricity (LCOE) or total electricity bill, reducing carbon emissions, and ensuring system stability and meeting SLAs [105].
4.2.1. Sizing and Configuration Optimization
Optimal system design involves determining the ideal composition and capacity of each energy component.
A study analyzing hybrid systems for a data center model in Tianjin, China, ranked configurations by renewable penetration and cost: PV-wind turbine-battery was the best option, followed by PV-wind turbine, PV-only, and wind-only [100]. Generally, the larger the PV-rated power, the higher the renewable penetration achieved. Without battery capacity, PV-only systems achieved about 20% RP, increasing to 32% with additional battery storage.
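The renewable-penetration figures above can be reproduced in spirit with a toy energy-balance loop. The lossless battery model and the hourly profiles below are illustrative assumptions, not the cited study's data:

```python
def renewable_penetration(pv_kw, load_kw, battery_kwh=0.0):
    """Fraction of the total load served by PV, optionally buffering
    hourly surplus in an ideal (lossless) battery."""
    soc, served = 0.0, 0.0
    for pv, load in zip(pv_kw, load_kw):
        direct = min(pv, load)                        # PV serving load directly
        soc = min(battery_kwh, soc + (pv - direct))   # store any surplus
        from_batt = min(soc, load - direct)           # discharge to cover deficit
        soc -= from_batt
        served += direct + from_batt
    return served / sum(load_kw)

pv = [0.0, 100.0, 100.0, 0.0]     # hypothetical midday-peaking PV
load = [50.0, 50.0, 50.0, 50.0]   # flat IT load
```

With this toy profile, RP rises from 0.5 without storage to 0.75 with a 100 kWh battery, qualitatively echoing the 20% to 32% improvement reported in [100].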
In optimizing the mix of sources, differing results suggest distinct optimal solutions based on environment and demand profile. One analysis found that the optimal portfolio contained more solar than wind because solar was less volatile and aligned better with IT demand [106]. Conversely, another study found that maximizing renewable use while maintaining economic viability was best achieved through a balanced approach that prioritizes wind energy, strategically incorporates solar, and optimally deploys battery storage [107].
4.2.2. Load Management and Demand Shaping
A critical strategy in facilitating RES integration is adapting the computational load (demand) to match the variable renewable supply [108].
- The Virtual Battery Concept: This concept shifts computational demand to match available power, effectively treating flexible computation as a battery. This requires applications to be delay-tolerant or to migrate to locations where power is proactively available, or is predicted to be [109].
- GreenSwitch and GreenHadoop: These model-based approaches dynamically manage workloads (particularly batch jobs, which are often deferrable) and select energy sources to maximize green energy consumption [42].
- Active Delay: Active Delay actively adjusts the execution time of deferrable workloads to temporally align the data center’s power demand with the smoothed renewable energy generation, thereby improving utilization [110].
- Geographical Load Balancing: For geo-distributed data centers, workloads are dynamically distributed across sites based on local renewable energy availability, time-varying electricity prices, and weather conditions [111]. The GreenWare middleware, for example, dynamically dispatches requests across distributed data centers to maximize the use of renewable energy, subject to desired cost budgets and QoS constraints [20].
- Instability-Resilient Allocation: To address the inherent instability of RES, advanced systems utilize predictive models (often deep learning) to match renewable resources to workloads based on probability profiles [105]. An instability-resilient allocation framework maps renewable energy sources (which have different instability profiles) to specific physical machine groups corresponding to their Service-Level Objectives (SLOs). This ensures that the probability of the renewable source producing enough energy is no less than the SLO required by the workload, thus minimizing SLO violations due to insufficient renewable supply [112].
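The geographical dispatching policies above share a common core: rank sites by green-energy availability and cost, then fill capacity in that order. A greedy sketch in that spirit follows; it is not GreenWare's actual algorithm, and the site data are hypothetical:

```python
def dispatch_requests(total_requests, sites):
    """Greedy geographic load balancing: prefer sites with the highest
    local renewable fraction, breaking ties by electricity price.
    `sites` holds (name, capacity, renewable_fraction, price) tuples."""
    order = sorted(sites, key=lambda s: (-s[2], s[3]))
    assignment, remaining = {}, total_requests
    for name, capacity, _, _ in order:
        take = min(capacity, remaining)
        assignment[name] = take
        remaining -= take
    return assignment

# Hypothetical fleet: two green sites (one cheaper) and one brown site.
plan = dispatch_requests(150, [
    ("eu", 100, 0.2, 30.0),
    ("us", 100, 0.9, 50.0),
    ("asia", 100, 0.9, 40.0),
])
```

The two green sites absorb the full demand, with the cheaper one filled first; a production dispatcher would add QoS latency constraints and a cost budget, as GreenWare does [20].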
4.3. Case Studies of Renewable-Powered Data Centers
Numerous projects, both academic and industrial, have demonstrated the viability and benefits of integrating RES using advanced strategies.
4.3.1. Tianjin Hybrid Power System
A foundational study modeled a data center in Tianjin, China, using a hybrid power system comprising a natural gas turbine, PV, wind, and battery storage. The analysis compared several configurations based on RP and LCOE. The results showed that the PV-wind-battery configuration provided the best balance of high RP and low cost. The findings also provided specific optimized sizing recommendations for the investigated model [102].
4.3.2. DATAZERO Project
The DATAZERO project represents a cutting-edge effort to achieve complete energy autonomy. The goal is to establish a 1 MW data center powered entirely by local, 100% renewable energy sources (solar PV and wind turbine), bypassing the conventional grid entirely [39]. The infrastructure incorporates complementary storage technologies: Li-Ion batteries for short-term fluctuations and an integrated HSS utilizing an electrolyzer and fuel cells for long-term (seasonal) storage. This approach goes beyond traditional backup power (diesel generators and lead-acid batteries) by introducing “data center long-term green energy storage” [103].
The key innovation is a negotiation protocol between the IT decision module (workload scheduling) and the power decision module (energy optimization). This negotiation ensures that intermittent generation is matched to the computational demand, enabling high availability despite the intermittent nature of the sources. The objective is to achieve convergence between IT computation needs and the available electrical power profiles [103].
4.3.3. Parasol and GreenSwitch
Parasol is a research platform prototype: a solar-powered micro-datacenter featuring solar PV panels, a battery bank, and a grid-tie connection. It uses an air-side economizer for cooling when temperatures permit. GreenSwitch is the accompanying model-based framework for managing this system.
GreenSwitch dynamically schedules the workload and selects the energy source to minimize cost while maximizing green energy consumption. The system relies on a predictor to forecast workload and renewable energy production, enabling the solver to output optimized energy-use schedules. Real experiments demonstrated that applying intelligent workload and energy source management via GreenSwitch yielded significant cost reductions and effectively minimized the negative impact of electrical grid outages [42].
4.3.4. GreenWare and EcoMultiCloud (Geo-Distributed Management)
The use of software systems to manage geo-distributed data centers is essential for maximizing renewable energy benefits by exploiting spatial and temporal diversity [113].
GreenWare is a middleware system that uses dynamic request dispatching to maximize the percentage of renewable energy used across a network of distributed data centers, constrained by the operator’s desired cost budget and QoS requirements. GreenWare models the intermittent generation of wind and solar power based on local weather conditions [20].
EcoMultiCloud was investigated in a scenario with four geographically distributed data centers using on-site RES generators and battery storage. By adapting workload allocation (VM assignment and migration) to align with energy costs and local green energy availability, the system demonstrated that geographically distributed data centers can significantly reduce their energy bills. Results showed significant cost savings with proper dimensioning of PV and battery capacity, and further reductions were achieved by incorporating migration policies, even without storage [29].
4.3.5. Industry Examples
Leading technology companies have made significant public commitments to renewable energy integration:
- Apple: constructed a massive 100-acre solar farm adjacent to its iCloud data center in North Carolina, designed to yield 84 GWh of clean, renewable energy annually. Apple also reached a cumulative installed renewable power capacity of 1524 MW globally by 2020, demonstrating a commitment to self-generation and PPAs [37].
- Green House Data: an operational industry case study cited for running on 100% renewable energy. This was achieved by leveraging renewable energy, virtualization, free cooling, and geo-dispersed Modular Data Center nodes. The result was a 64.5% reduction in energy costs, along with the elimination of carbon emissions associated with operations [104].
- Google and Facebook: these major cloud providers have also emphasized their transition from grid energy to renewable resources in geographically dispersed configurations and have made pledges to achieve carbon neutrality. Google, for instance, has operated its data centers using 100% renewable energy since 2017 (though often through off-site sourcing, such as PPAs) [103].
These case studies underscore the critical roles of sophisticated energy management strategies, optimal hardware sizing, and workload scheduling in realizing the potential of renewable energy to enable sustainable, economically viable data center operations.
5. Conclusions and Future Perspectives
Cloud computing, AI, and large-scale data analytics have made data centers a key part of our digital world, but this growth has also increased energy use and environmental impact. This review examined how data-center power systems have evolved, focusing on the shift from traditional AC systems to DC and hybrid microgrids to improve efficiency, resilience, and sustainability.
Today’s data centers face challenges such as high current density, power quality, and the need to use RES. Traditional AC-based power delivery systems, with several conversion steps and high distribution losses, cannot support current requirements. Moving to DC systems helps solve these problems by reducing conversion losses, simplifying control, and enabling better use of renewables and energy storage. These changes, supported by the Open Compute Project and new WBG power electronics like SiC and GaN devices, are important for building smaller, more efficient, and scalable systems.
Integrating DCMGs is a promising way to bring distributed renewables and storage into data centers. They are easier to control, avoid reactive power problems, and are highly efficient, making them a good fit for important IT environments. When combined with HESS, DCMGs can keep running during grid problems and help data centers become carbon-neutral. An intelligent EMS manages these parts, balancing changes in renewable energy, improving workload scheduling, and reducing costs and emissions.
Research and industry projects like DATAZERO and Parasol, as well as efforts by Google and Apple, show that running data centers entirely on renewable energy is possible. However, achieving this goal requires integrated solutions across power electronics, control, and computing. The next generation of data centers will become autonomous energy nodes that can optimize themselves, interact with the grid, and provide additional services while maintaining high uptime and service quality.
Future research can focus on creating standard DCMG frameworks for data centers, improving DC protection and fault isolation, optimizing multi-source energy systems (including thermal and hydrogen systems), applying AI for predictive control and digital twins, strengthening cybersecurity, and ensuring sustainability throughout the lifecycle.
Full integration will not be possible until the current challenges are solved. Some are financial, such as the higher CAPEX of DC systems, caused by the small number of providers competing with established vendors and the limited supply of OTS DC components such as ready-to-insert PSU racks. Another is the lack of standardization, which is being addressed by organizations such as CurrentOS. The protection complexity of DC systems is likewise a CAPEX problem rather than a technological one: current solid-state circuit breakers can satisfy the requirements of such systems, but their prices remain much higher than those of the simple mechanical circuit breakers used in AC systems. We believe that projects like OCP, supported by tech leaders, will unlock the next phase of integration: as soon as tech leaders start deploying such systems, the supply of components will increase significantly, CAPEX will drop, and widespread adoption will follow.
In summary, integrating DC distribution, hybrid storage, renewable energy, and smart energy management is the way forward to building carbon-neutral, resilient, and intelligent data centers. These systems will support a sustainable digital future by aligning the growth of global computing with environmental care.
Author Contributions
Conceptualization, P.Z. and O.H.; methodology, O.H.; software, P.Z.; validation, O.H., J.R. and P.Z.; formal analysis, P.Z.; investigation, P.Z.; resources, J.R.; data curation, O.H.; writing—original draft preparation, P.Z.; writing—review and editing, O.H.; visualization, P.Z.; supervision, J.R.; project administration, O.H.; funding acquisition, O.H. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the Polish National Center of Science in frame of the project Sonata BIS: 2023/50/E/ST7/00097.
Data Availability Statement
No new data were created or analyzed in this study.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| AI | Artificial Intelligence |
| ML | Machine Learning |
| LLM | Large Language Model |
| GPU | Graphics Processing Unit |
| TPU | Tensor Processing Unit |
| CPU | Central Processing Unit |
| DC | Direct Current |
| AC | Alternating Current |
| U.S. | United States |
| OPEX | Operating Expenditures |
| CAPEX | Capital Expenditures |
| IEA | International Energy Agency |
| PDA | Power Delivery Architecture |
| HVDC | High Voltage Direct Current |
| WBG | Wide-Bandgap |
| SiC | Silicon Carbide |
| GaN | Gallium Nitride |
| VRE | Variable Renewable Energy |
| ESS | Energy Storage System |
| UPS | Uninterruptible Power Supply |
| BESS | Battery Energy Storage System |
| HESS | Hybrid Energy Storage System |
| HSS | Hydrogen Storage System |
| IT | Information Technology |
| IPCS | Internal Power Conditioning System |
| PDU | Power Distribution Unit |
| PSU | Power Supply Unit |
| PUE | Power Usage Effectiveness |
| I/O | Input/Output |
| xPU | Processing Unit (any type) |
| OCP | Open Compute Project |
| HPR | High-Power Rack |
| LVAC | Low Voltage Alternating Current |
| MVAC | Medium Voltage Alternating Current |
| MV | Medium Voltage |
| SSD | Solid-State Drive |
| NIC | Network Interface Card |
| MTBF | Mean Time Between Failures |
| MTTR | Mean Time To Repair |
| SLA | Service Level Agreement |
| RES | Renewable Energy Source |
| PFC | Power Factor Correction |
| CO2 | Carbon Dioxide |
| UN | United Nations |
| BBU | Battery Backup Unit |
| EV | Electric Vehicle |
| DCMG | Direct Current Microgrid |
| GSC | Grid-Side Converter |
| CHB | Cascaded H-Bridge |
| MMC | Modular Multilevel Converter |
| BTB | Back-to-Back |
| PEC | Power Electronic Converter |
| OTS | Off-the-Shelf |
| SOC | State of Charge |
| PoL | Point of Load |
| DR | Demand Response |
| PV | Solar Photovoltaic |
| WT | Wind Turbine |
| PPA | Power Purchase Agreement |
| RP | Renewable Penetration |
| LCOE | Levelized Cost of Electricity |
| QoS | Quality of Service |
| REC | Renewable Energy Certificate |
| TTM | Trailing Twelve-Month |
| TDP | Thermal Design Power |
References
- Chen, S.; Zhang, G.; Yu, S.S.; Mei, Y.; Zhang, Y. A Review of Isolated Bidirectional DC-DC Converters for Data Centers. Chin. J. Electr. Eng. 2023, 9, 1–22. [Google Scholar] [CrossRef]
- Ursino, M.; Rizzolatti, R.; Deboy, G.; Saggini, S.; Zufferli, K. High density Hybrid Switched Capacitor Sigma Converter for Data Center Applications. In Proceedings of the 2022 IEEE Applied Power Electronics Conference and Exposition (APEC), Houston, TX, USA, 20–24 March 2022; IEEE: New York, NY, USA, 2022; pp. 35–39. [Google Scholar] [CrossRef]
- Sandri, P. Increasing Hyperscale Data Center Efficiency: A Better Way to Manage 54-V/48-V-to-Point-of-Load Direct Conversion. IEEE Power Electron. Mag. 2017, 4, 58–64. [Google Scholar] [CrossRef]
- Deboy, G.; Kasper, M.; Wattenberg, M.; Rizzolatti, R. Challenges and Solutions to Power Latest Processor Generations for Hyper Scale Datacenters. In Proceedings of the PCIM Europe 2024; International Exhibition and Conference for Power Electronics, Intelligent Motion, Renewable Energy and Energy Management, Nürnberg, Germany, 11–13 June 2024; Mesago PCIM GmbH: Stuttgart, Germany, 2024; pp. 15–18. [Google Scholar] [CrossRef]
- Ursino, M.; Rizzolatti, R.; Deboy, G.; Saggini, S.; Zufferli, K. Sigma Converter Family with Common Ground for the 48 V Data Center. IEEE Trans. Power Electron. 2023, 38, 10997–11009. [Google Scholar] [CrossRef]
- Zou, J.; Zhu, Y.; Ellis, N.M.; Horowitz, L.; Pilawa-Podgurski, R.C.N. A 48-V-to-1-V Gallium Nitride Switching Bus Converter for Processor Vertical Power Delivery with 2.7 mm Thickness and 3048 W/in3 Power Density. In Proceedings of the 2025 IEEE Applied Power Electronics Conference and Exposition (APEC), Atlanta, GA, USA, 16–20 March 2025; IEEE: New York, NY, USA, 2025; pp. 2276–2283. [Google Scholar] [CrossRef]
- NVIDIA Corporation. GB200 NVL72 Datasheet. Available online: https://nvdam.widen.net/s/wwnsxrhm2w/blackwell-datasheet-3384703 (accessed on 25 November 2025).
- Google LLC TPU v4—Cloud TPU Documentation (Measured Min/Mean/Max Power Per Chip). Available online: https://cloud.google.com/tpu/docs/v4 (accessed on 8 November 2025).
- NVIDIA Corporation. NVIDIA H100 Tensor Core GPU—Product Brief/Datasheet. Available online: https://www.nvidia.com/content/dam/en-zz/Solutions/gtcs22/data-center/h100/PB-11133-001_v01.pdf (accessed on 8 November 2025).
- NVIDIA Corporation. NVIDIA A100 Tensor Core GPU—Datasheet. Available online: https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-nvidia-us-2188504-web.pdf (accessed on 8 November 2025).
- Amazon Web Services (AWS). AWS EC2 Graviton Processors—Overview (Graviton Family). Available online: https://aws.amazon.com/ec2/graviton/ (accessed on 8 November 2025).
- AMD (Advanced Micro Devices). AMD EPYC™ 7601—Product Specifications/Support. Available online: https://www.amd.com/en/support/downloads/drivers.html/processors/epyc/epyc-7001-series/amd-epyc-7601.html (accessed on 8 November 2025).
- NVIDIA Corporation. NVIDIA Tesla V100 GPU Accelerator—Datasheet. Available online: https://images.nvidia.com/content/technologies/volta/pdf/tesla-volta-v100-datasheet-letter-fnl-web.pdf (accessed on 8 November 2025).
- NVIDIA Corporation. NVIDIA Tesla P100 GPU Accelerator—Datasheet. Available online: https://images.nvidia.com/content/tesla/pdf/nvidia-tesla-p100-PCIe-datasheet.pdf (accessed on 8 November 2025).
- NVIDIA Corporation. Tesla K80 GPU Accelerator—Datasheet (Dual GK210). Available online: https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-product-literature/TeslaK80-datasheet.pdf (accessed on 8 November 2025).
- NVIDIA Corporation. Tesla K20 GPU Active Accelerator—Board Specifications (GK110). Available online: https://www.nvidia.com/content/PDF/kepler/tesla-k20-active-bd-06499-001-v03.pdf (accessed on 8 November 2025).
- Intel Corporation. Intel® Xeon® Processor X5670 (12M Cache, 2.93 GHz, 6.40 GT/s Intel® QPI). Available online: https://ark.intel.com/content/www/us/en/ark/products/47920/intel-xeon-processor-x5670-12m-cache-2-93-ghz-6-40-gt-s-intel-qpi.html (accessed on 8 November 2025).
- Intel Corporation. Intel® Xeon® Processor E5-2680 (20M Cache, 2.70 GHz, Intel® QPI). Available online: https://www.intel.com/content/www/us/en/products/sku/64583/intel-xeon-processor-e5-2680-20m-cache-2-70-ghz-8-00-gts-intel-qpi/specifications.html (accessed on 8 November 2025).
- Baek, J.; Wang, P.; Jiang, S.; Chen, M. LEGO-PoL: A 93.1% 54V-1.5V 300A Merged-Two-Stage Hybrid Converter with a Linear Extendable Group Operated Point-of-Load (LEGO-PoL) Architecture. In Proceedings of the 2019 20th Workshop on Control and Modeling for Power Electronics (COMPEL), Toronto, ON, Canada, 17–20 June 2019; pp. 1–8. [Google Scholar] [CrossRef]
- Zhang, Y.; Wang, Y.; Wang, X. GreenWare: Greening Cloud-Scale Data Centers to Maximize the Use of Renewable Energy. In Middleware; Springer: Berlin/Heidelberg, Germany, 2011; pp. 143–164. [Google Scholar] [CrossRef]
- Chen, Y.; Shi, K.; Chen, M.; Xu, D. Data Center Power Supply Systems: From Grid Edge to Point-of-Load. IEEE J. Emerg. Sel. Top. Power Electron. 2023, 11, 2441–2456. [Google Scholar] [CrossRef]
- Krein, P.T. Data Center Challenges and Their Power Electronics. CPSS Trans. Power Electron. Appl. 2017, 2, 39–46. [Google Scholar] [CrossRef]
- Rahman, S.; Shehada, H.; Khan, I.A. Review of Isolated DC-DC Converters for Applications in Data Center Power Delivery. In Proceedings of the 2023 IEEE Texas Power and Energy Conference (TPEC), College Station, TX, USA, 13–14 February 2023; IEEE: New York, NY, USA, 2023; pp. 1–6. [Google Scholar] [CrossRef]
- Wattenberg, M.; Kasper, M.J.; Siemieniec, R.; Deboy, G. Innovative 12kW Three-level Power Supply for AI Servers Empowered by 400 V SiC MOSFET Technology. In Proceedings of the PCIM Conference, Nürnberg, Germany, 6–8 May 2025. [Google Scholar] [CrossRef]
- Sbabo, P.; Biadene, D.; Zhang, D.; Mattavelli, P.; Kolar, J.W. Ultra-Efficient Three-Phase Integrated-Active-Filter Isolated Rectifier for AI Data Center Applications. In Proceedings of the 2025 IEEE Energy Conversion Congress & Exposition Asia (ECCE-Asia), Bengaluru, India, 11–14 May 2025; IEEE: New York, NY, USA, 2025; pp. 1–7. [Google Scholar] [CrossRef]
- International Energy Agency (IEA). Energy and AI. Available online: https://www.iea.org/reports/energy-and-ai (accessed on 14 November 2025).
- Wu, H.; Wu, B.; Wang, Z. Research on the overall reliability of data centers. In Proceedings of the 2023 4th International Conference on Big Data Economy and Information Management, Zhengzhou, China, 8–10 December 2023; ACM: New York, NY, USA, 2023; pp. 802–807. [Google Scholar] [CrossRef]
- Qiu, M.; Sun, Z.; Liu, X.; Hobbs, K.; Meng, H.; Marzang, V.; Dahneem, A.; Cao, D. A High Conversion Ratio Matrix Autotransformer Switched-Capacitor Converter for 48 V Datacenter Application. IEEE Trans. Power Electron. 2025, 40, 1359–1375. [Google Scholar] [CrossRef]
- Sakalkar, V.; Laue, M. Data Centers of the Future (Presented by Google). Available online: https://youtu.be/ZLtJOpqgIgQ?si=rHeiSXlBlGjoGlDY (accessed on 8 November 2025).
- Li, X.; Jiang, S. Google 48V Rack Adaptation and Onboard Power Technology Update. Available online: https://youtu.be/aBkz2JR4UVs?si=z8ynTJwPII1tmYDE (accessed on 25 November 2025).
- Charest, G. Meta’s ORv3 “HPR Next” Ecosystem Solution. Available online: https://youtu.be/r120DepZXgQ?si=cxXzQep_94MHvD6F (accessed on 25 November 2025).
- Baek, J.; Elasser, Y.; Radhakrishnan, K.; Gan, H.; Douglas, J.P.; Krishnamurthy, H.K.; Li, X.; Jiang, S.; Sullivan, C.R.; Chen, M. Vertical Stacked LEGO-PoL CPU Voltage Regulator. IEEE Trans. Power Electron. 2022, 37, 6305–6322. [Google Scholar] [CrossRef]
- Elasser, Y.; Baek, J.; Radhakrishnan, K.; Gan, H.; Douglas, J.P.; Krishnamurthy, H.K.; Li, X.; Jiang, S.; De, V.; Sullivan, C.R.; et al. Mini-LEGO CPU Voltage Regulator. IEEE Trans. Power Electron. 2024, 39, 3391–3410. [Google Scholar] [CrossRef]
- Li, H.; Zeng, W.; Elasser, Y.; Chen, M. Air-LEGO: A Magnetic-Free Ultra-Thin 24V-to-1V 120A VRM with Air-Coupled Inductors. In Proceedings of the 2025 IEEE Applied Power Electronics Conference and Exposition (APEC), Atlanta, GA, USA, 16–20 March 2025; pp. 510–517. [Google Scholar] [CrossRef]
- Wang, P.; Chen, Y.; Szczeszynski, G.; Allen, S.; Giuliano, D.M.; Chen, M. MSC-PoL: Hybrid GaN–Si Multistacked Switched-Capacitor 48-V PwrSiP VRM for Chiplets. IEEE Trans. Power Electron. 2023, 38, 12815–12833. [Google Scholar] [CrossRef]
- Meng, H.; Sun, Z.; Qiu, M.; Liu, X.; Marzang, V.; Cao, D. MASC-PoL: A 48V-1V Matrix Autotransformer Switched-Capacitor Point-of-load DC-DC Converter for Data Center Application. In Proceedings of the 2024 IEEE Energy Conversion Congress and Exposition (ECCE), Phoenix, AZ, USA, 20–24 October 2024; IEEE: New York, NY, USA, 2024; pp. 2589–2595. [Google Scholar] [CrossRef]
- Gnibga, W.E.; Blavette, A.; Orgerie, A.-C. Renewable Energy in Data Centers: The Dilemma of Electrical Grid Dependency and Autonomy Costs. IEEE Trans. Sustain. Comput. 2024, 9, 315–328. [Google Scholar] [CrossRef]
- Deng, W.; Liu, F.; Jin, H.; Li, B.; Li, D. Harnessing renewable energy in cloud datacenters: Opportunities and challenges. IEEE Netw. 2014, 28, 48–55. [Google Scholar] [CrossRef]
- Pierson, J.-M.; Baudic, G.; Caux, S.; Celik, B.; Da Costa, G.; Grange, L.; Haddad, M.; Lecuivre, J.; Nicod, J.-M.; Philippe, L.; et al. DATAZERO: Datacenter with Zero Emission and Robust Management Using Renewable Energy. IEEE Access 2019, 7, 103209–103230. [Google Scholar] [CrossRef]
- Peng, X.; Bhattacharya, T.; Cao, T.; Mao, J.; Tekreeti, T.; Qin, X. Exploiting Renewable Energy and UPS Systems to Reduce Power Consumption in Data Centers. Big Data Res. 2022, 27, 100306. [Google Scholar] [CrossRef]
- Long, X.; Li, Y.; Li, Y.; Ge, L.; Gooi, H.B.; Chung, C.; Zeng, Z. Collaborative Response of Data Center Coupled with Hydrogen Storage System for Renewable Energy Absorption. IEEE Trans. Sustain. Energy 2024, 15, 986–1000. [Google Scholar] [CrossRef]
- Laganà, D.; Mastroianni, C.; Meo, M.; Renga, D. Reducing the Operational Cost of Cloud Data Centers through Renewable Energy. Algorithms 2018, 11, 145. [Google Scholar] [CrossRef]
- Dayarathna, M.; Wen, Y.; Fan, R. Data Center Energy Consumption Modeling: A Survey. IEEE Commun. Surv. Tutor. 2016, 18, 732–794. [Google Scholar] [CrossRef]
- Acun, B.; Lee, B.; Kazhamiaka, F.; Maeng, K.; Gupta, U.; Chakkaravarthy, M.; Brooks, D.; Wu, C.-J. Carbon Explorer: A Holistic Framework for Designing Carbon Aware Datacenters. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Vancouver, BC, Canada, 25–29 March 2023; ACM: New York, NY, USA, 2023; Volume 2, pp. 118–132. [Google Scholar] [CrossRef]
- Ahmed, K.M.U.; Bollen, M.H.J.; Alvarez, M. A Review of Data Centers Energy Consumption and Reliability Modeling. IEEE Access 2021, 9, 152536–152563. [Google Scholar] [CrossRef]
- Google Data Center Efficiency. Available online: https://datacenters.google/efficiency/ (accessed on 15 November 2025).
- Gero, K. Power in the Ever-changing OCP Environment–OCP Rack Power. Available online: https://youtu.be/3MdhfkBq6kw?si=jv7qeFL54ZYRJ9c1 (accessed on 25 November 2025).
- Barroso, L.A.; Hölzle, U. The Case for Energy-Proportional Computing. Computer 2007, 40, 33–37. [Google Scholar] [CrossRef]
- Jia, Z.; Wang, L.; Zhan, J.; Zhang, L.; Luo, C. Characterizing data analysis workloads in data centers. In Proceedings of the 2013 IEEE International Symposium on Workload Characterization (IISWC), Portland, OR, USA, 22–24 September 2013; IEEE: New York, NY, USA, 2013; pp. 66–76. [Google Scholar] [CrossRef]
- Zhou, Z.; Abawajy, J.H.; Li, F.; Hu, Z.; Chowdhury, M.U.; Alelaiwi, A.; Li, K. Fine-Grained Energy Consumption Model of Servers Based on Task Characteristics in Cloud Data Center. IEEE Access 2018, 6, 27080–27090. [Google Scholar] [CrossRef]
- Radovanović, A.; Koningstein, R.; Schneider, I.; Chen, B.; Duarte, A.; Roy, B.; Xiao, D.; Haridasan, M.; Hung, P.; Care, N.; et al. Carbon-Aware Computing for Datacenters. IEEE Trans. Power Syst. 2023, 38, 1270–1280. [Google Scholar] [CrossRef]
- Wilde, T.; Auweter, A.; Patterson, M.K.; Shoukourian, H.; Huber, H.; Bode, A.; Labrenz, D.; Cavazzoni, C. DWPE, a new data center energy-efficiency metric bridging the gap between infrastructure and workload. In Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS), Bologna, Italy, 21–25 July 2014; IEEE: New York, NY, USA, 2014; pp. 893–901. [Google Scholar] [CrossRef]
- Arno, R.; Friedl, A.; Gross, P.; Schuerger, R.J. Reliability of Data Centers by Tier Classification. IEEE Trans. Ind. Appl. 2012, 48, 777–783. [Google Scholar] [CrossRef]
- Turner, W.P.; Seader, J.H.; Brill, K.G. Industry Standard Tier Classifications Define Site Infrastructure Performance. Available online: https://critical-eng.com/wp-content/uploads/2020/09/Uptime-Industry-Standard-Tier-Classifications.pdf (accessed on 15 November 2025).
- Joshi, Y.; Kumar, P. Introduction to Data Center Energy Flow and Thermal Management. In Energy Efficient Thermal Management of Data Centers; Springer: Boston, MA, USA, 2012; pp. 1–38. [Google Scholar] [CrossRef]
- Andrae, A.; Edler, T. On Global Electricity Usage of Communication Technology: Trends to 2030. Challenges 2015, 6, 117–157. [Google Scholar] [CrossRef]
- Liu, Y.; Wei, X.; Xiao, J.; Liu, Z.; Xu, Y.; Tian, Y. Energy consumption and emission mitigation prediction based on data center traffic and PUE for global data centers. Glob. Energy Interconnect. 2020, 3, 272–282. [Google Scholar] [CrossRef]
- Niesel, J. Estimated Greenhouse Gas Emissions from Data Centres (2023–2030) by Geographical Region.csv (from Künstliche Intelligenz: Energieverbrauch Und Umweltauswirkungen) [Data Set Resource]. Available online: https://daten.greenpeace.de/dataset/kunstliche-intelligenz-energieverbrauch-und-umweltauswirkungen/resource/cfa2aa72-6f13-4f37-827c-72324300fd99 (accessed on 16 November 2025).
- Zheng, X.; Cai, Y. Energy-aware load dispatching in geographically located Internet data centers. Sustain. Comput. Inform. Syst. 2011, 1, 275–285. [Google Scholar] [CrossRef]
- Zheng, J.; Chien, A.A.; Suh, S. Mitigating Curtailment and Carbon Emissions through Load Migration between Data Centers. Joule 2020, 4, 2208–2222. [Google Scholar] [CrossRef]
- Mao, Y.; Yuan, J.; Long, D.; Lin, H. Robust Optimization of Data Center Microgrid Capacity Configuration Considering Load Characteristics. In Proceedings of the 2024 8th International Conference on Automation, Control and Robots (ICACR), Xiangyang, China, 1–3 November 2024; IEEE: New York, NY, USA, 2024; pp. 122–125. [Google Scholar] [CrossRef]
- Ahammed, M.T.; Osman, N.; Das, C.; Hossain, M.A.; Hossain, S.; Kaium, M.H. Analysis of Energy Consumption for a Hybrid Green Data Center. In Proceedings of the 2022 International Conference on Innovations in Science, Engineering and Technology (ICISET), Chittagong, Bangladesh, 26–27 February 2022; IEEE: New York, NY, USA, 2022; pp. 318–323. [Google Scholar] [CrossRef]
- Cao, F.; Wang, Y.; Zhu, F.; Cao, Y.; Ding, Z. UPS Node based Workload Management for Data Centers considering Flexible Service Requirements. In Proceedings of the 2019 IEEE/IAS 55th Industrial and Commercial Power Systems Technical Conference (I&CPS), Calgary, AB, Canada, 5–8 May 2019; IEEE: New York, NY, USA, 2019; pp. 1–9. [Google Scholar] [CrossRef]
- Lazaar, N.; Barakat, M.; Hafiane, M.; Sabor, J.; Gualous, H. Modeling and control of a hydrogen-based green data center. Electr. Power Syst. Res. 2021, 199, 107374. [Google Scholar] [CrossRef]
- Yu, L.; Jiang, T.; Zou, Y. Distributed Real-Time Energy Management in Data Center Microgrids. IEEE Trans. Smart Grid 2018, 9, 3748–3762. [Google Scholar] [CrossRef]
- Siegle, A. The Data Center as a Power Plant: Solutions for Accommodating Data Center Load Growth on a Decarbonized Grid 2025. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5138427 (accessed on 15 November 2025).
- Open Compute Project (OCP). Open Rack V3 48V PSU Specification. Available online: https://www.opencompute.org/w/index.php?title=Open_Rack/SpecsAndDesigns (accessed on 15 November 2025).
- Solomentsev, M. Differential Power Processing for Ultra-Efficient Rack Level Power Conversion. Available online: https://www.youtube.com/watch?v=In-XlRHiXLk (accessed on 4 August 2025).
- Keyhani, H.; Shapiro, D.; Fernandes, J.; Kim, B. Meta Open Rack V3 48 V PSU Specification (Version 1.0). Available online: https://www.opencompute.org/wiki/Open_Rack/SpecsAndDesigns (accessed on 9 November 2025).
- Sun, D.; Shapiro, D.; Kim, B.; Athavale, J.; Mercado, R. Meta Open Rack V3 BBU Module Specification, (Version 1.4). Available online: https://www.opencompute.org/wiki/Open_Rack/SpecsAndDesigns (accessed on 25 November 2025).
- Li, X.; Ravikumar, K. +/− 400VDC Rack Power System for ML/AI Application. Available online: https://youtu.be/l8ChVDv5aoo?si=ByKCRKpUewdaSFLv (accessed on 25 November 2025).
- Sanjeev, P.; Padhy, N.P.; Agarwal, P. Peak Energy Management Using Renewable Integrated DC Microgrid. IEEE Trans. Smart Grid 2018, 9, 4906–4917. [Google Scholar] [CrossRef]
- Pires, V.F.; Pires, A.; Cordeiro, A. DC Microgrids: Benefits, Architectures, Perspectives and Challenges. Energies 2023, 16, 1217. [Google Scholar] [CrossRef]
- Rivera, S.; Lizana, F.R.; Kouro, S.; Dragicevic, T.; Wu, B. Bipolar DC Power Conversion: State-of-the-Art and Emerging Technologies. IEEE J. Emerg. Sel. Top. Power Electron. 2021, 9, 1192–1204. [Google Scholar] [CrossRef]
- Jithin, K.; Purayil Haridev, P.; Mayadevi, N.; Pillai Harikumar, R.; Prabhakaran Mini, V. A Review on Challenges in DC Microgrid Planning and Implementation. J. Mod. Power Syst. Clean Energy 2023, 11, 1375–1395. [Google Scholar] [CrossRef]
- CurrentOS Organization. CurrentOS-Technical Documents. Available online: https://currentos.org/ (accessed on 6 November 2025).
- Kundur, P.; Paserba, J.; Ajjarapu, V.; Andersson, G.; Bose, A.; Canizares, C. Definition and Classification of Power System Stability IEEE/CIGRE Joint Task Force on Stability Terms and Definitions. IEEE Trans. Power Syst. 2004, 19, 1387–1401. [Google Scholar] [CrossRef]
- Pires, V.F.; Foito, D.; Cordeiro, A.; Roncero-Clemente, C.; Martins, J.F.; Pires, A.J. Interlink Converter for Hybrid AC to Bipolar DC Microgrid or to Two DC Microgrids. In Proceedings of the IECON 2022–48th Annual Conference of the IEEE Industrial Electronics Society, Brussels, Belgium, 17–20 October 2022; IEEE: New York, NY, USA, 2022; pp. 1–6. [Google Scholar] [CrossRef]
- Zhang, H.; He, S.; Yuan, Z.; Cheng, Z.; Cheng, J.; Hu, B.; Xu, L.; Fan, X. A Seamless Switching Strategy for Hybrid AC/DC Microgrids Under Varied Control Complexities. IEEE Access 2025, 13, 122420–122431. [Google Scholar] [CrossRef]
- Li, W.; He, K.; Wang, Y. Cost comparison of AC and DC collector grid for integration of large-scale PV power plants. J. Eng. 2017, 2017, 795–800. [Google Scholar] [CrossRef]
- Shabbir, G.; Hasan, A.; Yaqoob Javed, M.; Shahid, K.; Mussenbrock, T. Review of DC Microgrid Design, Optimization, and Control for the Resilient and Efficient Renewable Energy Integration. Energies 2025, 18, 6364. [Google Scholar] [CrossRef]
- Rank, S.; Bonnema, E.; Scheib, J.; Wilson, E.; Fregosi, D.; Ravula, S.; Saussele, J.; Brhlik, D.; et al. A Comparative Study of DC and AC Microgrids in Commercial Buildings Across Different Climates and Operating Profiles. In Proceedings of the IEEE First International Conference on DC Microgrids, Atlanta, GA, USA, 24–27 May 2015; NREL/CP-5500-63959; Preprint; National Renewable Energy Laboratory: Golden, CO, USA, 2015. Available online: https://www.nrel.gov/docs/fy15osti/63959.pdf (accessed on 19 December 2025).
- Adegboyega, A.W.; Sepasi, S.; Howlader, H.O.R.; Griswold, B.; Matsuura, M.; Roose, L.R. DC Microgrid Deployments and Challenges: A Comprehensive Review of Academic and Corporate Implementations. Energies 2025, 18, 1064. [Google Scholar] [CrossRef]
- Xu, D.; Li, H.; Zhu, Y.; Shi, K.; Hu, C. High-surety Microgrid: Super Uninterruptable Power Supply with Multiple Renewable Energy Sources. Electr. Power Compon. Syst. 2015, 43, 839–853. [Google Scholar] [CrossRef]
- Xu, H.G.; He, J.P.; Qin, Y.; Li, Y.H. Energy management and control strategy for DC micro-grid in data center. In Proceedings of the 2012 China International Conference on Electricity Distribution, Shanghai, China, 10–14 September 2012; IEEE: New York, NY, USA, 2012; pp. 1–6. [Google Scholar] [CrossRef]
- Schonberger, J.; Duke, R.; Round, S.D. DC-Bus Signaling: A Distributed Control Strategy for a Hybrid Renewable Nanogrid. IEEE Trans. Ind. Electron. 2006, 53, 1453–1460. [Google Scholar] [CrossRef]
- Mohamed, A.; Elshaer, M.; Mohammed, O. Bi-directional AC-DC/DC-AC converter for power sharing of hybrid AC/DC systems. In Proceedings of the 2011 IEEE Power and Energy Society General Meeting, Detroit, MI, USA, 24–28 July 2011; IEEE: New York, NY, USA, 2011; pp. 1–8. [Google Scholar] [CrossRef]
- Kollimalla, S.K.; Mishra, M.K.; Narasamma, N.L. Design and Analysis of Novel Control Strategy for Battery and Supercapacitor Storage System. IEEE Trans. Sustain. Energy 2014, 5, 1137–1144. [Google Scholar] [CrossRef]
- Yang, X.; Wang, Y.; He, H.; Sun, C.; Zhang, Y. Deep Reinforcement Learning for Economic Energy Scheduling in Data Center Microgrids. In Proceedings of the 2019 IEEE Power & Energy Society General Meeting (PESGM), Atlanta, GA, USA, 4–8 August 2019; IEEE: New York, NY, USA, 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Bhardwaj, R.; Padmavathy, R.; Preetha, M.; Suresh, R.; Kumar, Y.; Dilip, S.; Tharmar, S. EMS for Sustainable Data Centers. E3S Web Conf. 2024, 591, 01006. [Google Scholar] [CrossRef]
- Ding, Z.; Cao, Y.; Xie, L.; Lu, Y.; Wang, P. Integrated Stochastic Energy Management for Data Center Microgrid Considering Waste Heat Recovery. IEEE Trans. Ind. Appl. 2019, 55, 2198–2207. [Google Scholar] [CrossRef]
- Wang, J.; Deng, H.; Liu, Y.; Guo, Z.; Wang, Y. Coordinated optimal scheduling of integrated energy system for data center based on computing load shifting. Energy 2023, 267, 126585. [Google Scholar] [CrossRef]
- Aksanli, B.; Akyurek, A.S.; Rosing, T. Minimizing the effects of data centers on microgrid stability. In Proceedings of the 2015 Sixth International Green and Sustainable Computing Conference (IGSC), Las Vegas, NV, USA, 14–16 December 2015; IEEE: New York, NY, USA, 2015; pp. 1–9. [Google Scholar] [CrossRef]
- Yang, T.; Zhao, Y.; Pen, H.; Wang, Z. Data center holistic demand response algorithm to smooth microgrid tie-line power fluctuation. Appl. Energy 2018, 231, 277–287. [Google Scholar] [CrossRef]
- Liu, L.; Shen, X.; Chen, Z.; Sun, Q.; Wennersten, R. Optimal Energy Management of Data Center Micro-Grid Considering Computing Workloads Shift. IEEE Access 2024, 12, 102061–102075. [Google Scholar] [CrossRef]
- Lyu, J.; Zhang, S.; Cheng, H.; Yuan, K.; Song, Y.; Fang, S. Optimal Sizing of Energy Station in the Multienergy System Integrated with Data Center. IEEE Trans. Ind. Appl. 2021, 57, 1222–1234. [Google Scholar] [CrossRef]
- Wang, R.; Lu, Y.; Zhu, K.; Hao, J.; Wang, P.; Cao, Y. An Optimal Task Placement Strategy in Geo-Distributed Data Centers Involving Renewable Energy. IEEE Access 2018, 6, 61948–61958. [Google Scholar] [CrossRef]
- Ali, S.A.; Serna-Torre, P.; Hidalgo-Gonzalez, P.; Dozein, M.G.; Bahrani, B. Modularized Small-Signal Modeling of Grid-Forming Inverters. IEEE Access 2025, 13, 97011–97037. [Google Scholar] [CrossRef]
- Tavakoli, S.D.; Dozein, M.G.; Lacerda, V.A.; Mañe, M.C.; Prieto-Araujo, E.; Mancarella, P.; Gomis-Bellmunt, O. Grid-Forming Services from Hydrogen Electrolyzers. IEEE Trans. Sustain. Energy 2023, 14, 2205–2219. [Google Scholar] [CrossRef]
- He, W.; Xu, Q.; Zhao, S.; Liu, S.; Li, H. Performance Analysis of Data Centers Applying Hybrid Renewable Energy Power Systems. Energy Proc. 2023, 30, 2965. [Google Scholar] [CrossRef]
- Cao, Z.; Zhou, X.; Hu, H.; Wang, Z.; Wen, Y. Toward a Systematic Survey for Carbon Neutral Data Centers. IEEE Commun. Surv. Tutor. 2022, 24, 895–936. [Google Scholar] [CrossRef]
- Li, C.; Wang, R.; Li, T.; Qian, D.; Yuan, J. Managing Green Datacenters Powered by Hybrid Renewable Energy Systems. Available online: https://www.usenix.org/conference/icac14/technical-sessions/presentation/li_chao (accessed on 9 November 2025).
- Haddad, M. Sizing and Management of Hybrid Renewable Energy System for Data Center Supply. Ph.D. Thesis, Université Bourgogne Franche-Comté, Dijon, Besançon. Available online: https://theses.hal.science/tel-02736497 (accessed on 7 November 2025).
- Shuja, J.; Gani, A.; Shamshirband, S.; Ahmad, R.W.; Bilal, K. Sustainable Cloud Data Centers: A survey of enabling techniques and technologies. Renew. Sustain. Energy Rev. 2016, 62, 195–214. [Google Scholar] [CrossRef]
- Shen, H.; Wang, H.; Gao, J.; Buyya, R. An Instability-Resilient Renewable Energy Allocation System for a Cloud Datacenter. IEEE Trans. Parallel Distrib. Syst. 2023, 34, 1020–1034. [Google Scholar] [CrossRef]
- Liu, Z.; Chen, Y.; Bash, C.; Wierman, A.; Gmach, D.; Wang, Z.; Marwah, M.; Hyser, C. Renewable and cooling aware workload management for sustainable data centers. In Proceedings of the 12th ACM Sigmetrics/Performance Joint International Conference on Measurement and Modeling of Computer Systems, London, UK, 11–15 June 2012; ACM: New York, NY, USA, 2012; pp. 175–186. [Google Scholar] [CrossRef]
- Parsons, J. A Techno-Economic Assessment of Hybrid Renewable Energy and Battery Storage Systems for Data Centers. Diploma Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2025. [Google Scholar]
- Fridgen, G.; Körner, M.-F.; Walters, S.; Weibelzahl, M. Not All Doom and Gloom: How Energy-Intensive and Temporally Flexible Data Center Applications May Actually Promote Renewable Energy Sources. Bus. Inf. Syst. Eng. 2021, 63, 243–256. [Google Scholar] [CrossRef]
- Agarwal, A.; Sun, J.; Noghabi, S.; Iyengar, S.; Badam, A.; Chandra, R.; Seshan, S.; Kalyanaraman, S. Redesigning Data Centers for Renewable Energy. In Proceedings of the Twentieth ACM Workshop on Hot Topics in Networks, Virtual, 10–12 November 2021; ACM: New York, NY, USA, 2021; pp. 45–52. [Google Scholar] [CrossRef]
- Liu, X.; Hua, Y.; Liu, X.; Yang, L.; Sun, Y. Design and Implementation of Smooth Renewable Power in Cloud Data Centers. IEEE Trans. Cloud Comput. 2023, 11, 85–96. [Google Scholar] [CrossRef]
- Liu, Z.; Lin, M.; Wierman, A.; Low, S.H.; Andrew, L.L.H. Geographical load balancing with renewables. ACM Sigmetrics Perform. Eval. Rev. 2011, 39, 62–66. [Google Scholar] [CrossRef]
- Gao, J.; Wang, H.; Shen, H. Smartly Handling Renewable Energy Instability in Supporting a Cloud Datacenter. In Proceedings of the 2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS), New Orleans, LA, USA, 18–22 May 2020; IEEE: New York, NY, USA, 2020; pp. 769–778. [Google Scholar] [CrossRef]
- Ghamkhari, M.; Mohsenian-Rad, H. Optimal integration of renewable energy resources in data centers with behind-the-meter renewable generator. In Proceedings of the 2012 IEEE International Conference on Communications (ICC), Ottawa, ON, Canada, 10–15 June 2012; IEEE: New York, NY, USA, 2012; pp. 3340–3344. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.