Review

Energy Storage Systems for AI Data Centers: A Review of Technologies, Characteristics, and Applicability

Advanced Research Institute, Virginia Polytechnic Institute and State University, Arlington, VA 22203, USA
*
Author to whom correspondence should be addressed.
Energies 2026, 19(3), 634; https://doi.org/10.3390/en19030634 (registering DOI)
Submission received: 15 December 2025 / Revised: 17 January 2026 / Accepted: 24 January 2026 / Published: 26 January 2026
(This article belongs to the Special Issue Modeling and Optimization of Energy Storage in Power Systems)

Abstract

The fastest growth in electricity demand in the industrialized world will likely come from the broad adoption of artificial intelligence (AI)—accelerated by the rise of generative AI models such as OpenAI’s ChatGPT. The global “data center arms race” is driving up power demand and grid stress, which creates local and regional challenges: communities hosting data centers recognize that much of the additional data center-related electricity demand originates far away, yet they must support the additional infrastructure while not directly benefiting from it. Data center operators therefore have an incentive to manage the fast and unpredictable power surges internally so that their loads appear as a constant baseload to the electricity grid. Such high-intensity and short-duration loads can be served by hybrid energy storage systems (HESSs) that combine multiple storage technologies operating across different timescales. This review presents an overview of energy storage technologies, their classifications, and recent performance data, with a focus on their applicability to AI-driven computing. Technical requirements of storage systems, such as fast response, long cycle life, low degradation under frequent micro-cycling, and high ramping capability—which are critical for sustainable and reliable data center operations—are discussed. Based on these requirements, this review identifies lithium titanate oxide (LTO) and lithium iron phosphate (LFP) batteries paired with supercapacitors, flywheels, or superconducting magnetic energy storage (SMES) as the most suitable HESS configurations for AI data centers. This review also proposes AI-specific evaluation criteria, defines key performance metrics, and provides semi-quantitative guidance on power–energy partitioning for HESSs in AI data centers.
This review concludes by identifying key challenges, AI-specific research gaps, and future directions for integrating HESSs with on-site generation to optimally manage the high variability in the data center load and build sustainable, low-carbon, and intelligent AI data centers.

1. Introduction

1.1. Background and Motivation

The recent explosion of artificial intelligence (AI) applications has reshaped computing around the world. Each generation of AI models requires more computing power, faster movement of data, and higher reliability. The field has rapidly expanded, leading to the explosive growth of data centers globally. AI data centers, built to support large-scale model training and inference, now consume quantities of electricity on par with small cities [1]. Generative AI has further accelerated this trend. State-of-the-art models like OpenAI’s ChatGPT, Google Gemini, and Anthropic’s Claude are trained and served on thousands of graphics processing units (GPUs) or tensor processing units (TPUs) operating in parallel [2]. In recent months, technology giants have entered a period of fierce competition that many analysts call an “AI arms race” [3]. Leading firms such as Google, Meta, Amazon, and OpenAI are investing hundreds of billions of dollars to increase computing power and data center capacity with the goal of outcompeting rivals in the training and deployment of large AI models [4,5,6]. This arms race has accelerated the construction of new hyperscale facilities and is rapidly driving global electricity demand to levels not seen in recent decades, despite objections from environmental groups over the energy and climate impacts of new data centers [7].
Due to this worldwide expansion, new data center clusters are appearing in many regions and competing for available power and infrastructure capacity [8,9,10]. Each cycle of AI training requires a considerable amount of energy and generates significant heat that needs to be continuously managed. To meet this increasing demand, cloud service providers and technology companies are rapidly constructing hyperscale campuses across North America, Europe, and Asia [11]. Among all regions, Northern Virginia is notable as the world’s largest data center hub, hosting more than 250 facilities and several gigawatts of connected load [12,13]. Local utilities and communities are facing the challenge of supporting this massive demand while ensuring grid reliability [10]. Generation and transmission expansion, as well as required substation upgrades, take years to develop, but the need for AI is growing far more quickly. Similar grid bottlenecks are becoming serious problems for both operators and planners in other major hubs like Oregon (USA), Dublin (Ireland), and Singapore [14,15,16].
According to the International Energy Agency (IEA), data centers accounted for 415 TWh of electricity consumption in 2024, which is equivalent to about 1.5% of the world’s total consumption [17]. About 45% of this usage came from the United States, with China coming in second at 25% and Europe at 15% [17]. A high-growth scenario would result in an annual electric power requirement of more than 219 GW by 2030 [18]. Various sources predict that the worldwide need for data center capacity might grow at a compound annual growth rate (CAGR) between 15% and 22% through 2030 [18].
In the US, data centers in 2024 consumed more than 4% of the country’s electric energy [19,20]. This share is projected to rise to nearly 12% by 2030 [19,20]. To accommodate this rapid growth, an estimated 47 GW of additional generation capacity will be required by the end of the decade [21]. Meeting this power demand could require close to USD 50 billion in new investments in US power generation infrastructure over the same period [21].
Generative AI workloads are expected to be a major contributor to this sharp rise in demand. Under high-growth projections, global data center capacity is expected to expand at a CAGR of about 22% through 2030. Over the same period, generative AI workloads are forecast to grow much faster, at roughly 39% CAGR, compared with around 16% CAGR for other digital services [18]. This gap highlights how AI-related computing will shape future power requirements. As AI computing turns into a new form of industrial infrastructure, managing its energy footprint has become both a technical challenge and a social concern. Many communities that host large data centers must also support the associated grid upgrades, yet most of the economic benefits flow elsewhere. In addition to grid and cost challenges, the rapid expansion of AI data centers raises environmental concerns related to increased emissions, water use, and land footprint. This creates a growing need for equitable, long-term, energy-efficient, and sustainable energy management solutions.

1.2. Challenges with AI Data Center Power Profiles

AI data centers differ significantly from traditional data centers in their electricity demand. Traditional data centers that host cloud storage or web servers typically show reasonably stable loads throughout the day. By contrast, AI data centers operate with highly dynamic load profiles because of much higher rack power densities. Traditional server racks typically operate in the range of 4 kW to 12 kW, while high-density racks can reach 20 kW [22]. Modern AI training racks, in contrast, generally draw between 50 kW and 200 kW, which highlights the enormous power requirements of graphics processing unit (GPU)-based systems [22,23]. This power demand can fluctuate rapidly depending on whether training or inference tasks are running, resulting in significant variations in both electricity use and cooling demand. Such fast-changing loads may also introduce power quality issues such as harmonic distortion and voltage flicker [24]. These power quality issues must be addressed to maintain the reliable operation of sensitive computing equipment.
The total power drawn by an AI data center can change dramatically within short time intervals [25]. In [26], measurements from large-scale AI training clusters show that synchronized GPU compute and communication phases generate rapid and high-amplitude power swings. At the system level, thousands of GPUs may operate simultaneously during model training, which pushes total consumption close to the facility’s design limit for long durations. When the training jobs are finished, the load drops almost instantly [26]. Similarly, inference workloads generate short but intense surges in power demand. These rapid swings reach several megawatts in magnitude and place continuous stress on both the electrical and thermal management systems [25]. If unmanaged, this stress can trigger throttling or cause faults in the system.
Such unpredictable load fluctuations also create challenges beyond the facility boundary. For instance, utilities may experience serious voltage variations, transformer overloading, and frequency instability in their systems. To manage these problems, utilities often maintain extra reserve capacity, which raises operational costs and reduces efficiency. AI data center operators likewise face higher operating costs through peak demand charges. Most commercial electricity tariffs bill for the highest load in any 15 or 30 min demand window, which means even brief spikes can lead to large increases in monthly bills [27].
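To make the demand-window mechanism concrete, the following sketch (with hypothetical load numbers, not figures from any cited tariff) computes the billed peak as the highest average over consecutive 15 min windows, showing how a single brief training surge can set the monthly demand charge.

```python
# Illustrative sketch with hypothetical numbers: how one brief spike can set
# the monthly billed demand under a 15-min demand-window tariff.

def billed_peak_kw(load_kw, window_len):
    """Average the load over consecutive demand windows and return the highest
    window average, which typically sets the monthly demand charge."""
    peaks = []
    for i in range(0, len(load_kw) - window_len + 1, window_len):
        window = load_kw[i:i + window_len]
        peaks.append(sum(window) / window_len)
    return max(peaks)

# One day of 1-min samples: a steady 10 MW base load...
load = [10_000.0] * (24 * 60)                # kW
# ...plus a single 15-min training surge to 25 MW (samples 600-614).
load[600:615] = [25_000.0] * 15

print(billed_peak_kw(load, window_len=15))   # 25000.0 -- the spike sets the bill
```

Even though the surge lasts only 15 min out of a full day, the billed peak is 2.5 times the average load, which is why operators have a strong incentive to buffer such spikes with storage.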
AI data center operators are seeking strategies to stabilize their loads and make their facilities appear to the grid as customers with predictable demand. As these facilities become more power-hungry, traditional electrical design practices, backup systems, control methods, protection schemes, and low-voltage and fault ride-through strategies are gradually becoming insufficient for modern AI workloads [28,29]. For instance, conventional UPS units and diesel generator-based backup systems were designed for rare outages, not for continuous cycling or multi-megawatt ramping throughout the day. This makes expansion planning and operations increasingly difficult for both the utilities and the operators. These challenges are not theoretical: they have already been observed in existing North American facilities, where real-time load ramps and oscillations have stressed both grid equipment and on-site power electronics. These observations emphasize the need for fast-responding energy storage that can buffer large swings in power demand and protect both the grid and internal systems. The following subsection presents examples that illustrate the magnitude and speed of these load variations in real AI data center operations.

1.3. Examples of Rapid and Oscillatory Load Patterns in Modern AI Data Centers

Recent field observations from North American facilities show how quickly large computing loads can change in practice [25]. Ref. [25] shows an example where a data center ramps down from roughly 450 MW to about 40 MW within 36 s at a particular hour. The load then remains at approximately 7 MW for several hours before ramping back to 450 MW within only a few minutes. Such behavior demonstrates how AI-driven compute clusters can rapidly shed or restore enormous amounts of load, far faster than traditional industrial processes.
The North American Electric Reliability Corporation (NERC) recently reported that AI data centers often exhibit periodic, repetitive, and sustained load patterns that can introduce forced oscillations at subsynchronous frequencies during compute cycles. “Subsynchronous frequencies” are electrical oscillation frequencies below the nominal power system frequency (e.g., below 60 Hz in North America) that can interact adversely with generators, electronic equipment, and other loads, potentially causing instability or equipment stress. This behavior is mostly driven by the rapid switching between “compute” and “communication” phases in large synchronized GPU workloads, as verified by researchers and engineers from Microsoft, OpenAI, and NVIDIA [26]. These oscillations may arise naturally from coordinated GPU activity or from unintended interactions among power electronics equipment. In [25], the NERC presented another example showing a two-minute load profile of an AI data center during training, in which power fluctuated within the 0.6–1.0 p.u. range and forced oscillations were observed. Such fluctuations can create significant stress on both the utility grid and on-site power electronics-based equipment.
An additional example can be found in the NERC report, which describes a cryptocurrency mining facility in North America that ramped down 298 MW within 25 s after a control fault [25]. The load also exhibited subsynchronous oscillations with a peak-to-peak amplitude of roughly 25 MW. Although crypto mining differs from AI workloads, the fundamental behavior is similar: both are large digital loads that can change extremely fast and destabilize the power system if left unmanaged.
These examples demonstrate why large AI facilities, especially in regions such as Northern Virginia in the United States, require fast-responding energy storage to buffer extreme ramps, control oscillations, and maintain power quality for both the grid and the data centers.

1.4. Role of Energy Storage Systems

Different energy storage systems (ESSs) are emerging as alternatives or complements to traditional UPS solutions. Modern ESSs can not only provide reliable backup power during outages but also actively support power quality improvement, fast load-following, and peak shaving during normal operation [30]. Their fast response capabilities allow them to absorb or deliver power within milliseconds, making them well suited to reduce power fluctuations and mitigate the disturbances caused by AI workloads. At the system level, maintaining a stable and predictable demand profile requires flexible, high-response, variable-duration energy solutions that combine reliability support with advanced load-balancing capabilities.
By storing energy during low-demand periods and releasing it when the load increases, storage systems can smooth out rapid power fluctuations and reduce peak demand in AI data centers. This helps both the operator and the local utility. The operator benefits from lower electricity costs and improved resilience, while the utility sees a more predictable and grid-friendly load. On top of that, a well-designed ESS can support other goals such as frequency control, voltage regulation, and greater use of on-site renewable energy if available.
The storage needs of AI data centers go far beyond conventional backup applications because their load fluctuations occur across multiple timescales—from milliseconds to hours. No single storage technology can effectively handle all of these variations. Hybrid energy storage systems (HESSs) address this limitation. A HESS is an integrated combination of two or more energy storage technologies designed to optimize overall performance, efficiency, lifespan, and cost for a given application. A HESS is typically formed by pairing high-energy-density devices with fast, high-power-density devices, each optimized for a different timescale and power range. For example, supercapacitors or flywheels can handle very fast transient spikes, but only for durations of milliseconds to seconds [31]. Lithium-ion or sodium–sulfur batteries can manage fluctuations that span several minutes or hours [32]. For longer-term balancing, technologies like flow batteries or other storage can be added. By coordinating these technologies as different layers, hybrid systems can deliver both fast response and large capacity while minimizing degradation of the battery components. These features allow AI data centers to appear as steady and predictable loads to the grid, even though their internal power demand may change rapidly.
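As an illustration of how such timescale layering can be coordinated, the sketch below implements a common frequency-based power-split rule: a first-order low-pass filter routes the slow component of the load to the battery layer and the fast residual to a supercapacitor layer. The time constant, sample period, and load values are illustrative assumptions, not figures from this review.

```python
# Minimal sketch of frequency-based power sharing in a HESS (assumed
# parameters): slow load component -> battery, fast residual -> supercapacitor.

def split_power(load_w, dt=0.1, tau=5.0):
    """Return (battery_ref, supercap_ref) power references for a sampled load.
    tau [s] sets the crossover frequency between the two storage layers."""
    alpha = dt / (tau + dt)              # discrete first-order low-pass gain
    batt, sc = [], []
    slow = load_w[0]                     # filter state, initialized at t = 0
    for p in load_w:
        slow += alpha * (p - slow)       # slowly varying component -> battery
        batt.append(slow)
        sc.append(p - slow)              # fast residual -> supercapacitor
    return batt, sc

# Example: a step from 1 MW to 3 MW. Immediately after the step, the
# supercapacitor supplies most of the increase while the battery reference
# ramps up gradually, shielding the battery from the fast transient.
profile = [1e6] * 50 + [3e6] * 50
batt, sc = split_power(profile)
```

At every sample the two references sum to the load, so the grid sees only the smoothed battery profile plus whatever the supercapacitor cannot absorb; in a real deployment the time constant would be chosen from the supercapacitor's energy capacity and the measured workload spectrum.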
Hybrid systems also improve economic performance. Hybridization enables optimal use of each component according to its strengths and reduces replacement costs by extending component lifetimes. Furthermore, hybrid systems can participate in multiple energy markets simultaneously, such as peak shaving, demand response, and ancillary services, generating additional revenue streams.

1.5. Scope and Contributions of This Review

This review paper provides a comprehensive overview of available energy storage technologies and their applicability to AI data centers. It aims to bridge two growing research areas: advanced energy storage and high-performance computing infrastructure. While many previous reviews have focused on grid-scale storage, electric vehicles, or renewable integration, this review explicitly addresses the unique operational requirements imposed by AI-driven computing, including rapid load changes, micro-cycling, and very high ramp rates.
This paper begins with a structured overview of major energy storage classifications, including chemical, electrochemical, electrical, mechanical, and electromagnetic systems. It then defines AI-specific evaluation criteria and key performance metrics to assess storage technologies in the AI data center environment. For each category, the basic operating principles, AI-relevant advantages, limitations, typical failure modes, and practical deployment considerations are discussed. Performance indicators such as response time, power and energy density, discharge duration, efficiency, and cycle life are emphasized.
Next, this review examines the role of HESSs in addressing multi-timescale load fluctuations in AI data centers and discusses common hybrid architectures and technology pairings. Recent studies have been reviewed to demonstrate how hybridization improves performance through specific system architectures and control strategies. A semi-quantitative guideline for power–energy partitioning between high-power and high-energy storage layers is presented for AI data center applications. This review also investigates recent control strategies for HESSs and discusses their transferability to AI data center environments. Key design trade-offs related to cost, footprint, complexity, and reliability are discussed, along with cases where hybridization may provide limited benefit.
Then, this paper discusses deployment challenges and research opportunities for energy storage in AI data centers. It examines system-level challenges of energy storage systems in an AI data center, including placement, power electronics coordination, thermal management, and safety constraints. Grid coordination, siting limitations, and market barriers are also analyzed. Finally, this paper identifies and prioritizes AI-specific research gaps to guide future storage design and deployment.
Finally, this paper concludes that HESSs will be essential for managing the fast and multi-timescale power dynamics of future AI data centers. Despite recent progress, critical research gaps remain in AI-aware load and degradation modeling and in coordinated control of workloads and storage systems. Addressing these gaps will be key to designing reliable, scalable, and grid-friendly AI data center energy infrastructures.

2. Overview of Energy Storage Systems

Energy storage systems have played a key role in power systems for decades. The rapid growth of AI data centers has made them increasingly critical for managing fast and unpredictable power demands. A wide range of energy storage technologies exists, operating across different power levels, response times, and discharge durations. However, no single storage technology can simultaneously meet the fast ramp rate, frequent micro-cycling, high power density, and reliability requirements of AI data centers. These challenges motivate a structured evaluation of available storage options for AI-driven computing environments.
Accordingly, this section reviews the major classes of energy storage systems and introduces performance metrics relevant to AI data center operation. It then defines AI-specific evaluation criteria and compares storage technologies based on their suitability for highly dynamic AI workloads.

2.1. Classification of Major Energy Storage Systems

Energy storage technologies can be classified in several ways. For example, we can classify them according to their storage medium, response time, application, and discharge duration. However, one of the most widely used approaches is to classify them based on the form of energy stored. Figure 1 illustrates a detailed classification of energy storage systems based on the type of energy stored.
These energy storage technologies operate in different ranges of power and discharge durations, as illustrated in Figure 2. The left portion of Figure 2 is adapted from the “Energy Storage Technology Review” by SBC Energy Institute [33] and updated with current storage duration and system power information from the DOE Global Energy Storage Database [34]. The right portion of the figure, in contrast, is created entirely from the current database to reflect commercially available systems rather than theoretical values [34]. Figure 2 highlights that power electronics-based technologies such as supercapacitors and SMESs dominate short-duration, high-power applications, while electrochemical storage or batteries occupy the minute-to-day range at tens to hundreds of megawatts.
While this classification highlights the broad range of available energy storage technologies and their practical operating regimes, it does not indicate which options are most suitable for AI data center applications. Since requirements differ a lot from traditional use cases, a more targeted evaluation framework is required. The following subsection introduces AI-specific criteria that form the basis for assessing the suitability of different energy storage technologies in AI data center environments.

2.2. AI-Specific Evaluation Criteria for Energy Storage

Energy storage systems for AI data centers must be evaluated using criteria that differ from those applied to conventional data centers or grid applications. One defining feature of AI data center operation is micro-cycling of ESSs. Micro-cycling refers to the repeated, shallow charge–discharge cycling of the ESS caused by rapid and frequent power fluctuations from GPU-based training and inference workloads, rather than the rare deep cycles associated with backup operation. This behavior imposes a fundamentally different cycling profile on energy storage systems, particularly battery energy storage systems. A recent experimental study in ref. [35] shows that very shallow micro-cycling, typically involving a depth of discharge below 2%, does not age batteries in the same way as deep discharge cycles do. In AI data center operations, battery degradation is mainly driven by deep discharge events, thermal stress, and prolonged high C-rate operation rather than by the total number of micro-cycles. C-rate is defined as the ratio of the charge or discharge current to the battery’s nominal capacity. However, if micro-cycling involves a depth of discharge above approximately 2%, it can begin to contribute noticeably to battery degradation [35]. Therefore, for micro-cycling of larger magnitudes, it is more advantageous to select an ESS with higher micro-cycling capability or to pair batteries with other energy storage technologies in a hybrid configuration. Such hybrid structures allow high-power and fast-response devices to absorb frequent cycling stress, while batteries are reserved for sustained energy-oriented functions.
Ramp rate capability is another critical criterion. AI data centers can exhibit load ramps of several megawatts per second, which—if sustained—surpass the safe operating range of many storage systems. Therefore, ESSs must tolerate high-power transients without excessive thermal or electrochemical stress.
Another important evaluation criterion is how the ESS will coordinate with on-site power electronics. Modern AI data centers rely heavily on DC buses, fast converters, and tightly regulated voltage levels. Energy storage devices must interact smoothly with these systems to avoid oscillations, voltage dips, or converter overloads during sudden workload transitions.
Lastly, the choice of ESS for AI data centers is heavily influenced by available physical space. High rack density, strict fire safety rules, limited space, and 24/7 operation demand higher safety and reliability than most grid-scale systems. Indoor real estate is extremely valuable, and most rack and floor space is reserved for IT equipment. As a result, large ESS installations are often moved outside or avoided altogether. Evaluating ESSs based on energy and power density, round-trip efficiency, and fire safety is therefore also crucial.
Apart from these, cost is a critical evaluation criterion for energy storage in AI data centers. Both power-related cost (USD/kW) and energy-related cost (USD/kWh) must be considered along with other cost components such as replacement cost, operation and maintenance costs, etc. [36].
These AI-specific criteria provide the foundation for evaluating energy storage technologies and motivate the selection of appropriate performance metrics, which are discussed in the following subsection.

2.3. Key Performance Metrics for AI Data Center Needs

Selecting suitable energy storage technology for an AI data center requires careful consideration of performance metrics that directly affect reliability, cost, and integration feasibility. These metrics are selected in a way that reflects the AI-specific operational characteristics and system constraints discussed previously. As a result, the relative importance and prioritization of conventional performance indicators that are commonly used for grid-scale or backup-oriented storage may change when evaluating energy storage systems for AI data center environments. With this perspective in mind, the most important performance metrics for AI data center energy storage applications are summarized below.
  • Response Time: AI workloads can change within milliseconds. ESSs must respond instantly to maintain voltage stability, suppress fast transients, and provide ride-through protection during short disturbances.
  • Cycle Life and Degradation Characteristics: AI workloads impose frequent shallow cycling rather than infrequent deep discharge. As discussed in Section 2.2, very shallow micro-cycling does not significantly contribute to battery aging; ESS degradation is mainly driven by deep discharge events, thermal stress, and sustained high C-rate operation. However, when micro-cycling involves a larger depth of discharge, its impact on degradation becomes more pronounced. Cycle life and degradation behavior are therefore workload-specific and are critical metrics for ESS selection. In summary, technologies and system architectures that offer high cycle life and maintain stable performance under micro-cycling, with minimal degradation for a particular AI data center design, should be selected.
  • Power Rating and Ramp Capability: Modern GPU racks may draw 50–200 kW per rack. ESSs must deliver high-power output and tolerate multi-megawatt per second power ramps without performance degradation.
  • Power and Energy Density: Power density determines how quickly the system can deliver power, while energy density directly affects the space efficiency of energy storage systems in AI data centers [37]. Higher densities are therefore desirable.
  • Round-Trip Efficiency: Frequent cycling for smoothing and peak management makes high efficiency essential. Low efficiency increases energy losses and operating costs, particularly when storage is used continuously rather than only for backup.
Apart from these performance metrics, the choice of energy storage for AI data centers is also influenced by reliability, safety, space limitations, and various economic factors such as costs and savings. AI data centers operate under very strict uptime requirements, and storage systems may be installed indoors near critical IT equipment, which makes ESS safety and reliability essential. Moreover, limited indoor space favors compact and modular solutions that can be deployed or replaced with minimal disruption. Lastly, replacement cost and value-stacking potential are important, since storage systems are expected to support services such as peak shaving, demand response, and power quality improvement. Together, these metrics help AI data center operators and utilities determine which storage technologies are best suited for the case at hand.
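One simple way to combine these metrics in an early screening study is a weighted scoring model. The sketch below is purely illustrative: the 1–5 ratings and the weights are placeholder assumptions, not values from Table 2, and a real evaluation would calibrate both against measured performance data and site constraints.

```python
# Toy weighted-scoring sketch for screening storage technologies against the
# AI-specific metrics above. All ratings and weights are illustrative
# placeholders, not figures from this review.

WEIGHTS = {"response": 0.2, "cycle_life": 0.2, "power_density": 0.15,
           "energy_density": 0.3, "efficiency": 0.15}     # sums to 1.0

RATINGS = {   # 1 (poor) .. 5 (excellent), illustrative only
    "LTO":            {"response": 4, "cycle_life": 5, "power_density": 5,
                       "energy_density": 2, "efficiency": 3},
    "LFP":            {"response": 4, "cycle_life": 4, "power_density": 4,
                       "energy_density": 3, "efficiency": 5},
    "Supercapacitor": {"response": 5, "cycle_life": 5, "power_density": 5,
                       "energy_density": 1, "efficiency": 5},
}

def score(tech):
    """Weighted sum of the metric ratings for one technology."""
    return sum(WEIGHTS[m] * RATINGS[tech][m] for m in WEIGHTS)

for tech in sorted(RATINGS, key=score, reverse=True):
    print(f"{tech:15s} {score(tech):.2f}")
```

Because the ranking is sensitive to the weights (e.g., how heavily indoor space constraints weight energy density), such a screen should only shortlist candidates for detailed techno-economic analysis, not replace it.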

2.4. Comparison of Energy Storage Technologies for AI Data Centers

This subsection compares major energy storage classes using the AI-specific evaluation criteria discussed above. Table 1 summarizes qualitative advantages, limitations, and typical failure modes, while Table 2 presents a detailed comparison based on key performance metrics relevant to AI data center applications. To keep the comparison location-independent and relevant to AI data center applications, some ESSs, such as pumped hydro storage, compressed air energy storage, etc., are excluded from this table.

2.5. Electrochemical Energy Storage

Electrochemical energy storage systems store energy in chemical form and release it through electrochemical reactions. They are the most widely deployed storage technology for data centers today due to their high round-trip efficiency, high energy density, fast response times, modularity, scalability, and compact footprint suitable for indoor electrical rooms. Mechanical (e.g., flywheel) and electromagnetic storage can also offer exceptionally fast response and very high round-trip efficiency, but their low energy density, cost, scalability, and footprint limitations make electrochemical storage more practical for large-scale AI data center deployment.
Electrochemical storage technologies can be broadly categorized into classical batteries and flow batteries. Each category serves different operational needs and use cases.

2.5.1. Classical Batteries

Classical batteries such as lithium-ion, lead acid, nickel-based, and zinc-based chemistries store energy within the cell structure. In most AI data centers, battery energy storage systems (BESSs) are used as part of the uninterruptible power supply (UPS) [24]. They are installed both inside the facility and at the campus or the electrical yard, where compact size, fast response, and high reliability are important.
Among available battery technologies, lithium-ion (Li-ion) batteries dominate AI data center deployments due to their 100-millisecond-level response time, high round-trip efficiency (typically 90–98%), and wide commercial availability. However, different lithium-ion chemistries exhibit substantially different C-rate capabilities, energy densities, cycle lives, and thermal behavior. These characteristics directly affect their suitability for AI workloads. For instance, the C-rate indirectly indicates a battery’s potential power ramping capability, although actual ramping is constrained by thermal, electrical, and system-level limits. For AI data center use, a higher C-rate range is desirable. Table 3 shows a comparison of C-rates for Li-ion battery variants.
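The relationship between C-rate, capacity, and deliverable power can be sketched in a back-of-the-envelope calculation. The rack and pack sizes below are hypothetical examples chosen to match the 50–200 kW rack range discussed earlier, not values from Table 3.

```python
# Back-of-the-envelope sketch (hypothetical pack sizes): the nominal power a
# battery pack can supply at a given C-rate, and the C-rate a given load
# implies. Actual power is further limited by thermal and system constraints.

def max_power_kw(capacity_kwh: float, c_rate: float) -> float:
    """Nominal continuous power at a given C-rate."""
    return capacity_kwh * c_rate

def required_c_rate(load_kw: float, capacity_kwh: float) -> float:
    """C-rate needed for the pack to carry the given load."""
    return load_kw / capacity_kwh

# A 200 kW AI training rack backed by a 100 kWh pack needs a 2C discharge;
# the same rack on a compact 20 kWh high-power pack needs 10C, which pushes
# toward high-C chemistries such as LFP or LTO.
print(required_c_rate(200, 100))   # 2.0
print(required_c_rate(200, 20))    # 10.0
```

This is why undersizing the energy capacity of a pack directly raises the C-rate it must sustain, trading footprint against thermal stress and chemistry choice.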
Among the variants of Li-ion batteries, lithium titanate oxide (LTO) and lithium iron phosphate (LFP) exhibit qualities that make them very well suited for indoor UPS-type applications [44]. As shown in Table 2, LTO batteries have exceptional cycle life (more than 10,000 cycles at 100% depth of discharge, or DOD) [46]. In addition, LTO batteries can sustain very high C-rates (often ≥20C). These characteristics make LTO batteries highly suitable for AI data center applications, especially as a high-power storage solution when a hybrid storage structure is desired. They can also serve as a standalone ESS supporting both high-power and sustained high-energy scenarios. Despite these favorable characteristics, LTO batteries come with a serious trade-off that can be a deal breaker for an AI data center: among the Li-ion variants, LTO has the lowest energy density (60–90 Wh/kg), which means its footprint is relatively large. Similar to LTO batteries, LFP batteries are well suited because of their long cycle life, often exceeding 5000–8000 cycles under moderate depth of discharge. In addition to the good cycle life, their high C-rates (often ≥10C), acceptable energy density (90–180 Wh/kg), and very high round-trip efficiency (93–98%, higher than LTO’s 85–90%) make them a good choice for high-reliability UPS applications.
Compared with LTO and LFP, lithium nickel manganese cobalt (LNMC) and lithium nickel cobalt aluminum (LNCA) batteries offer higher energy densities, typically 160–270 Wh/kg for LNMC and 200–260 Wh/kg for LNCA. These higher energy densities allow more compact indoor energy storage solutions. However, LNMC and LNCA batteries have a higher risk of thermal runaway compared with LFP and LTO chemistries, which is why LFP and LTO are often preferred in applications where safety is critical [56]. Thermal runaway is a self-reinforcing process in which rising battery temperature triggers chemical reactions that generate additional heat, making the failure difficult to stop once it begins. In addition, LNMC and LNCA batteries have very low C-rates and much lower cycle life, which makes them undesirable for applications where high-power ramping and frequent micro-cycling are common. However, with proper safety measures, these batteries can still be used in an AI data center as part of a hybrid ESS that includes a dedicated high-power storage (HPS) component.
The other two variants, lithium manganese oxide (LMO) and lithium cobalt oxide (LCO), fall at the bottom of the suitability ladder despite having good energy density (100–150 Wh/kg for LMO and 150–200 Wh/kg for LCO) and excellent round-trip efficiency (90–95% for both). LMO batteries have a very competitive C-rate, but their very low cycle life makes them a limited option for AI data center deployment. LCO batteries have a slightly better cycle life, but their poor C-rate makes them undesirable in this context.
Conventional lead acid batteries remain common in legacy UPS systems due to their low upfront cost and proven reliability for short-duration backup. However, their lower energy density, limited cycle life, very low C-rate, limited depth of discharge, and poor tolerance to frequent partial cycling make them less suitable for AI data center applications. Nickel-based and zinc-based batteries are used in niche applications but have seen limited adoption in modern AI-focused data centers.

2.5.2. Flow Batteries

Flow batteries store energy in liquid electrolytes housed in external tanks, while power conversion happens in the electrochemical cell stack [57]. This separation allows power (kW) and energy capacity (kWh) to be scaled independently [58]. As a result, long discharge durations of 4–12 h or more can be achieved through system design rather than chemistry limitations [33,58]. Flow batteries offer long cycle life, low degradation, and high safety [58]. Vanadium redox flow batteries (VRFBs) are the most mature and widely deployed flow battery technology. They avoid electrolyte cross-contamination and can maintain stable performance over 20–30 years of operation [57]. Their working principles are well documented in the literature [57,59] and are illustrated in Figure 3.
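The independent scaling of power and energy can be illustrated with a minimal sketch; the stack rating and tank capacity below are hypothetical illustrative values.

```python
def flow_battery_duration_h(stack_power_kw: float, tank_energy_kwh: float) -> float:
    """Discharge duration at rated power.

    The cell stack sets the power rating (kW) and the electrolyte tanks
    set the energy capacity (kWh), so duration scales with tank volume
    alone, independently of the stack.
    """
    return tank_energy_kwh / stack_power_kw

# A hypothetical 1 MW stack with 8 MWh of electrolyte gives an 8 h system;
# doubling the tank volume doubles the duration without touching the stack.
```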
From an AI data center perspective, flow batteries have several limitations. They exhibit poor ramp rate capability compared with other electrochemical storage technologies. Their energy density is very low, which leads to a large physical footprint. Their round-trip efficiency is also much lower than that of lithium-ion batteries. These characteristics make flow batteries unsuitable for fast, space-constrained, or rack-adjacent AI applications.
However, flow batteries can still play a role as high-energy storage (HES) in a hybrid configuration. When deployed at the campus or grid-side level, they can provide long-duration energy support, peak reduction, and renewable energy integration without experiencing significant degradation. Large-scale deployments already exist, particularly in China, where multiple 100 MW-class systems are in operation [60].

2.6. Electrical Energy Storage

Electrical energy storage technologies store and release energy directly in electrical form, which allows extremely fast response and high power delivery. They are especially useful for short-duration events that require rapid load compensation, such as the sudden GPU power ramps frequently observed in AI computing. In data center applications, electrical energy storage is mainly represented by power electronic capacitors (PECs) and electrochemical double-layer capacitors (EDLCs), commonly known as supercapacitors.

2.6.1. Power Electronic Capacitors (PECs)

Power electronic capacitors (PECs), such as film and aluminum electrolytic capacitors, are widely used in power conversion systems. They offer microsecond-level response and very high power density, exceeding 10 kW/L [33,34]. However, their energy capacity is extremely small, typically only a few watt-hours, even for large banks. As a result, PECs are not used for energy storage but for power conditioning. In AI data centers, they stabilize DC bus voltage, suppress harmonics and switching ripple, and protect power converters during fast load transients.

2.6.2. Electrochemical Double-Layer Capacitors (EDLCs)

Electrochemical double-layer capacitors (EDLCs), commonly called supercapacitors (SCs), respond within 1–10 ms, which is much faster than lithium-ion batteries [40]. They provide very high power density (≈10,000 W/kg) and extremely long cycle life, often exceeding 500,000 to 1,000,000 cycles with minimal degradation [61,62]. Their round-trip efficiency is also high, typically between 90 and 98% [40,61,62,63].
SCs store energy through charge separation at the electrode–electrolyte interface rather than chemical reactions [59]. Detailed working principles are well established in the literature [59] and are illustrated in Figure 4.
The main limitation of SCs is their very low energy density, typically 1–10 Wh/kg, compared with 80–150 Wh/kg for lithium-ion batteries [61,62]. This restricts discharge duration to seconds or a few minutes. As a result, SCs are not suitable for sustained energy supply in AI data centers.
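The seconds-to-minutes discharge window follows directly from the capacitor energy equation E = ½C(V_max² − V_min²); the module parameters in the sketch below are illustrative assumptions, not data from a specific product.

```python
def sc_usable_energy_wh(capacitance_f: float, v_max: float, v_min: float) -> float:
    """Usable energy between two voltage limits: E = 1/2 C (Vmax^2 - Vmin^2), in Wh."""
    return 0.5 * capacitance_f * (v_max**2 - v_min**2) / 3600.0

def sc_ride_through_s(energy_wh: float, load_kw: float) -> float:
    """Ride-through time when supplying a constant load, in seconds."""
    return energy_wh * 3600.0 / (load_kw * 1000.0)

# An illustrative 63 F, 125 V module discharged to half voltage stores
# about 100 Wh, i.e., roughly 35-40 s of ride-through at a 10 kW load.
```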
However, SCs are highly effective for AI workloads with fast and large power ramps. They are well suited for ramp rate smoothing, short ride-through protection, and absorbing sudden GPU power spikes. Commercial systems can scale to the 1–2 MW range and provide sufficient short-duration energy to protect batteries from transient overloads and reduce degradation.

2.7. Mechanical ESS: Flywheel Energy Storage Systems (FESSs)

Mechanical energy storage systems convert electrical energy into kinetic or potential energy. Among them, flywheel energy storage systems (FESSs) are most relevant for AI data centers due to their high-power capability and short-duration operation. Unlike pumped hydro or compressed air systems, which are used for long-duration storage, flywheels are designed for fast and frequent power balancing near loads.
FESSs store energy as rotational kinetic energy in a high-speed rotor supported by magnetic bearings and operated in a vacuum to reduce losses. Their working principles are well established in the literature [64] and are illustrated in Figure 5. Their fast millisecond-level response makes them highly effective for managing sudden power spikes and stabilizing voltage during rapid AI workload changes.
Flywheels typically supply power for a few seconds to a few minutes, which is sufficient for short ride-through events or until batteries or generators respond [64]. They offer high efficiency (90–95%) and extremely long cycle life, often exceeding one million cycles [59,65]. Although their energy capacity is limited, flywheels serve as an effective high-power storage layer in AI data centers. When deployed near loads or in hybrid systems, they reduce stress on battery energy storage, improve power quality, and support reliable operation of high-density GPU infrastructure.
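The stored energy follows the rotational kinetic energy equation E = ½Iω², which explains why flywheel discharge durations fall in the seconds-to-minutes range. In the sketch below, the rotor inertia and speed are illustrative assumptions.

```python
import math

def flywheel_energy_kwh(inertia_kg_m2: float, rpm: float) -> float:
    """Rotational kinetic energy E = 1/2 I w^2, converted from joules to kWh."""
    omega = rpm * 2.0 * math.pi / 60.0  # angular speed in rad/s
    return 0.5 * inertia_kg_m2 * omega**2 / 3.6e6

# An illustrative 5 kg*m^2 rotor spinning at 36,000 rpm stores roughly
# 10 kWh, enough to sustain a 300 kW discharge for about two minutes.
```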

2.8. Electromagnetic ESSs: Superconducting Magnetic Energy Storage (SMES)

Superconducting magnetic energy storage (SMES) stores energy in the magnetic field of a superconducting coil carrying DC current with near-zero resistance. This enables microsecond-level response, very high power density, and essentially unlimited cycle life [39,66]. A typical SMES system consists of a superconducting coil, cryogenic support, and power conditioning hardware, as shown in Figure 6. Its structure and operating principles are well established in the literature [66,67]. Like supercapacitors, SMES can serve as a strong high-power layer in a hybrid structure. However, its energy capacity is very limited, and its cryogenic complexity is high.
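The power-versus-energy asymmetry of SMES can be seen from the coil energy equation E = ½LI²; the coil parameters in the sketch below are illustrative assumptions.

```python
def smes_energy_kwh(inductance_h: float, current_a: float) -> float:
    """Magnetic energy stored in the coil: E = 1/2 L I^2, converted from J to kWh."""
    return 0.5 * inductance_h * current_a**2 / 3.6e6

# An illustrative 10 H coil carrying 2 kA stores 20 MJ, i.e., only about
# 5.6 kWh, yet that energy can be released at multi-megawatt rates.
```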

3. Hybrid Energy Storage Systems (HESSs)

AI data centers operate under highly dynamic and software-driven load conditions. Often, a single type of energy storage system is insufficient to support these loads efficiently and economically. Fast-response technologies such as supercapacitors and flywheels can handle short power bursts, but they cannot sustain energy delivery for minutes or hours. Batteries can deliver both short power bursts and longer-duration energy, but frequent high-power partial cycling accelerates their degradation. This premature degradation makes battery-only solutions less economical over time for many AI data centers. Hence, despite being the most popular choice for data center applications, batteries alone are not enough to meet the demands of large-scale AI operations [68].
To address these limitations, this section examines the role of hybrid energy storage systems (HESSs) in an AI data center. Common hybrid architectures and technology pairings are reviewed and the benefits of hybridization in AI data centers are discussed. A semi-quantitative guideline for power–energy partitioning between high-power and high-energy storage layers is proposed. Finally, recent HESS control strategies are reviewed, their applicability to AI data centers is discussed, and key design trade-offs are highlighted.

3.1. HESS Architectures and Combinations

Hybrid energy storage systems can be arranged in different configurations depending on their operating goals and how the individual storage devices work together. The storage devices in a HESS can be coupled in different ways, and these choices determine how power is shared, how quickly each device responds to load changes, and how the system supports both on-site equipment and the utility grid [69,70].
For AI data centers, the structure of the HESS plays a key role in smoothing fast GPU power spikes, managing longer energy variations, and maintaining a stable and predictable demand profile. High-power storage (HPS) devices manage rapid fluctuations, while high-energy storage (HES) devices provide sustained support [71]. The overall architecture ensures that both operate together as a coordinated system. In addition to electrical storage, thermal energy storage (TES), such as chilled water or ice storage systems, can also be integrated as part of a broader hybrid solution. Because cooling accounts for about 38% of total data center energy use, TES can help shift or smooth cooling demand. This allows TES to complement electrical storage and reduce overall peak load at the facility [72,73].
Figure 7 illustrates these ideas. Figure 7a presents a generic HESS architecture, showing how converters and the energy management system (EMS) coordinate multiple storage units. Figure 7b shows practical hybrid storage combinations suitable for AI data centers, where long-term energy devices such as BESSs, flow batteries (FBs), or fuel cells (FCs) are paired with fast-response devices like supercapacitors, flywheels, SMES, or short-term BESSs. These combinations allow hybrid systems to cover a wide range of power and energy needs, making them well suited for the high variability of modern AI workloads.
Based on the discussion in Section 2, LTO and LFP batteries emerge as the most suitable BESS options for AI data centers, preferably as HES systems. Supercapacitors, flywheels, and SMES are best suited as HPS layers. Other HES technologies, such as flow batteries and fuel cells, can also play a role in long-duration energy support for AI data center applications.

3.2. Hybridization Benefits and Applications for AI Data Centers

Hybrid energy storage systems (HESSs) offer a practical solution by combining HPS technologies such as supercapacitors, flywheels, or SMES with HES systems such as BESSs, FCs, or FBs. The HPS device absorbs fast disturbances, while the HES device supplies sustained energy. This division of roles allows the system to respond efficiently across all relevant AI timescales and reduces stress on the main storage layer. The key benefits of this hybridization in the context of AI data center operation are summarized below.
  • Improved handling of fast and slow power fluctuations: GPU clusters can transition from partial load to near-full consumption within seconds, and inference workloads often generate short but intense power bursts. HESS configurations assign these high-frequency events to fast-response HPS devices while reserving slower, multi-minute fluctuations for HES systems. This prevents over-cycling of HES and reduces the propagation of disturbances to the grid and internal power electronics.
  • Extended battery lifetime and lower replacement costs: Standalone battery systems face accelerated degradation in AI environments because of frequent and irregular workload-driven cycling [74]. Hybridization significantly improves battery life by shifting high-power, high-frequency demands to SCs, flywheels, or SMES [62,64,75]. Studies in similar applications show that SC–BESS and SMES–BESS combinations can extend BESS lifetime by 19–26% [75,76]. One study shows that a flywheel–BESS hybrid configuration slowed battery aging by roughly a factor of three [64]. This benefit alone can bring meaningful reductions in data center total cost of ownership.
  • Power quality improvements for sensitive GPU loads: Large GPU racks require stable voltage, low harmonic distortion, and a well-regulated DC bus. Fast storage layers in a HESS act as local “shock absorbers,” responding within microseconds to milliseconds to maintain voltage stability and dampen rapid load swings. This protects both IT equipment and upstream converters, which is critical in high-density AI racks operating at tens or hundreds of kilowatts.
  • Accelerated interconnection and peak demand relief: One of the largest barriers to new AI data center growth is the delay associated with securing firm grid interconnections [24]. HESS-supported battery strategies allow facilities to operate under interruptible interconnection agreements by riding through curtailments and temporary shortfalls. This approach can shorten interconnection timelines by years and unlock substantial new capacity.
  • Lower energy costs and new revenue opportunities: Hybrid systems lower operating costs by allowing batteries to perform energy arbitrage while high-power devices handle rapid fluctuations without cycling the main battery. This reduces peak demand charges and improves overall efficiency. Large HESS installations can also participate in wholesale market services such as frequency regulation, voltage support, spinning reserves, and fast frequency response, which can create additional revenue streams that help offset capital costs.
  • Enhanced reliability and ride-through capability: AI workloads cannot tolerate interruptions, as even brief disturbances can disrupt training jobs or damage equipment. HESS improves reliability by ensuring fast-response storage handles short-term disturbances while longer-duration storage maintains continuity during sustained grid events. In some designs, HESSs can supplement or partially replace traditional UPS infrastructure [24].
  • Scalable configurations for diverse data center needs: Different AI campuses may prioritize peak shaving, interconnection acceleration, fast transient response, or long-duration energy shifting. HESS allows flexible pairing such as BESS–SCs, BESS–FESS, BESS–SMES, FB–SC, or FC–BESS (short term), based on space constraints, cost, grid limitations, and workload characteristics. This adaptability positions HESS as a key component of future AI power architectures.

3.3. Improving Performance by Hybridization

Hybrid energy storage improves data center performance by matching storage characteristics to workload timescales. Data center power demand contains both fast transients and slower energy variations. Treating these behaviors with a single storage device leads to inefficiencies and accelerated degradation.
Several studies show that fast storage layers effectively absorb short and frequent power spikes and improve the overall efficiency of the hybrid system. This approach can also extend UPS lifetime, reduce server downtime, and save costs. Devices such as supercapacitors or flywheels provide high power density, fast response, and very high cycle life. Using them for transient events can protect batteries from excessive cycling and degradation.
In [77], a hybrid energy buffering (HEB) system is proposed to handle power mismatches in data centers. The study combines batteries with supercapacitors and uses an adaptive control framework to dynamically assign loads based on power demand patterns. The results show that HESSs could improve energy efficiency by 39.7%, extend battery lifetime by 4.7 times, decrease system downtime by 41%, and increase peak shaving benefit by 1.9 times.
The study in [78] proposes a hierarchical dispatch strategy that allows a data center to participate in power system dispatch using its UPS with hybrid energy storage. The authors use available energy analysis and a two-level model predictive control (MPC)-based framework, where the upper level schedules power and the lower level reallocates loads between batteries and hydrogen fuel cells. This hybridization increased UPS utilization by about 85% and reduced state-of-charge (SOC) fluctuations by 19%.
The study in [79] presents a HESS structure that uses supercapacitors along with UPS batteries in a data center. The study also proposes integrating the HESS with dynamic voltage and frequency scaling (DVFS) to cap the peak power demand. This hybridization reduced energy storage cost by 34% on average compared with the batteries-only model.
In summary, it is evident that hybridization of energy storage can improve data center performance. By combining HES devices with fast-response HPS systems at a suitable ratio, and using advanced control methods, hybrid systems can improve efficiency, extend storage lifetime, and significantly reduce costs.

3.4. Power–Energy Partitioning for AI Data Centers

In many cases, hybridization can improve the performance of AI data centers and reduce overall system cost. However, these benefits depend strongly on how power and energy are partitioned among the different storage layers. To the best of the authors’ knowledge, there is no fixed quantitative rule for power–energy partitioning in hybrid energy storage systems specifically designed for AI data centers. Nevertheless, based on the studies reviewed in Section 3.3, a semi-quantitative power–energy partitioning can be assumed. Table 4 summarizes representative power and energy shares for high-power and high-energy storage layers and provides a practical guideline for AI data center applications.
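As a minimal illustration of such a partitioning guideline, the sketch below sizes the HPS and HES layers from assumed shares; the 60% HPS power share and 30 s HPS duration are hypothetical placeholders to be replaced by measured load-profile analysis, not values taken from Table 4.

```python
def hess_sizing(peak_power_kw: float, daily_energy_kwh: float,
                hps_power_share: float = 0.6, hps_duration_s: float = 30.0) -> dict:
    """Semi-quantitative HESS sizing sketch.

    The high-power (HPS) layer is sized for a share of the peak power
    but only seconds of energy; the high-energy (HES) layer covers the
    slow power remainder and essentially all sustained energy. All
    shares here are assumptions for illustration.
    """
    hps_kw = hps_power_share * peak_power_kw
    return {
        "hps_kw": hps_kw,
        "hps_kwh": hps_kw * hps_duration_s / 3600.0,  # seconds-scale buffer energy
        "hes_kw": peak_power_kw - hps_kw,
        "hes_kwh": daily_energy_kwh,                  # sustained energy from HES
    }
```

Under these assumptions, a hypothetical 10 MW peak facility shifting 4 MWh per day would pair a 6 MW / 50 kWh HPS layer with a 4 MW / 4 MWh HES layer.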

3.5. Control Strategies for Hybrid Energy Storage System

Control strategies play a central role in hybrid energy storage systems for AI data centers. The energy management system (EMS) or the controller must allocate power dynamically between HPS and HES devices. Most HESS controllers use a two-layer structure with a fast device-level controller that regulates converters and bus voltage and a supervisory controller that assigns power based on frequency components, state-of-charge limits, or optimization rules [80].
Older HESS control methods include filtration-based power splitting, rule-based control, droop control, and deadbeat control [75,76,80,81,82]. Newer research applies model predictive control (MPC), fuzzy logic, neural networks, and optimization techniques to improve dynamic performance and storage lifespan [80,83,84,85].
These older control strategies were originally developed for general microgrid and renewable-integration applications. However, some of them are partially transferable to AI data center applications after modifications. For instance, in [76,86], BESS–SC hybrid storage is controlled using filtration-based power splitting to extend BESS lifetime and prevent premature degradation. These controllers use low-pass filters so that the BESS supplies slow power changes while the supercapacitor handles fast transients. However, their transferability to AI data centers depends on reliable high-bandwidth sensing and careful filter tuning to avoid delayed battery response under large, highly synchronized GPU load ramps. Communication latency may further affect controller performance under AI-scale transients.
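The filtration-based split described above can be sketched as a first-order low-pass filter; the time constant below is an assumed value that would need tuning against measured GPU ramp behavior.

```python
def lpf_power_split(load_kw, dt_s, tau_s=10.0):
    """Filtration-based power splitting with a first-order low-pass filter.

    The battery tracks the filtered (slow) component of demand; the
    supercapacitor absorbs the residual fast component. Per sample,
    battery + supercapacitor exactly reconstruct the load.
    """
    alpha = dt_s / (tau_s + dt_s)   # discrete first-order filter gain
    batt, sc = [], []
    state = load_kw[0]              # start the filter at steady state
    for p in load_kw:
        state += alpha * (p - state)
        batt.append(state)          # slow component -> battery
        sc.append(p - state)        # fast residual  -> supercapacitor
    return batt, sc
```

A larger tau_s shifts more of each GPU ramp onto the supercapacitor but delays the battery takeover, which is precisely the filter-tuning trade-off noted above.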
In [82], a newer fuzzy logic and moving average-based power management method is proposed for a hybrid BESS–flywheel energy storage system to reduce battery stress while keeping the flywheel always available. It adapts the moving-average window using fuzzy logic, based on battery ramp rate (RR) and flywheel state of charge (SOC). The controller has successfully maintained the flywheel SOC above 40% and significantly reduced BESS ramp rate peaks compared with a conventional low-pass filter method. This controller can be directly transferable to AI data center applications. However, practical deployment of such adaptive controllers requires robust real-time parameter tuning and reliable SOC estimation.
Data center-focused control frameworks are the most directly transferable to AI data centers because they already assume fast and bursty load behavior and tight coupling with power delivery/UPS infrastructure. For example, the study in [87] proposes a hierarchical control framework for hybrid energy storage devices (ESDs) in data centers to achieve power capping and cost reduction without impacting performance. It uses supervisory heuristic control with crossover filtering to coordinate the charging and discharging of batteries, supercapacitors, and flywheels across server, rack, and data-center levels. Results show that this control framework reduces the total amortized cost by up to 70% compared with centralized UPS baselines. Similarly, the study in [78] proposes a hierarchical dispatch strategy for a data center UPS in a hybrid structure (battery and hydrogen fuel cell) to participate in power system dispatch. It uses model predictive control (MPC) at the lower level to reallocate power between storage units while tracking dispatch commands and stabilizing battery SOC. MPC-based strategies are effective but require accurate models, load forecasts, and sufficient computing resources for real-time AI workloads.
The study in [84] proposes an optimization-based control strategy to coordinate BESS and TES in data centers by dispatching surplus emergency storage capacity for grid services. It uses MILP/MIQP-based optimal dispatch, where the BESS provides fast services and the TES handles slow energy shifting. Results show a USD 1.6 million lifetime benefit from the BESS (1.29 times the investment) and USD 0.35 million from the TES (2.39 times the investment), demonstrating the effectiveness of the control strategy. A similar study uses a purely mixed-integer quadratic programming (MIQP)-based optimized control strategy to coordinate BESS and TES in data centers [85] and reports similar cost results. These optimization-based approaches highlight strong economic potential but depend on accurate thermal modeling, forecasting of cooling demand, and tight integration with facility-level cooling control systems.
Overall, control strategies developed specifically for data centers are the most directly transferable to AI applications, while older general-purpose HESS controllers can still be applied after appropriate modifications. However, real-world deployment in AI data centers requires careful consideration of their practical implementation challenges and potential limitations. Table 5 summarizes representative HESS control strategies and highlights their applicability to AI data center environments.

3.6. Design Trade-Offs in Hybrid Energy Storage Systems

As discussed in previous subsections, HESS provides several benefits, but these benefits come with important trade-offs. Combining multiple storage technologies increases upfront and replacement costs due to additional hardware, power electronics, and control infrastructure. Moreover, “bigger is better” does not hold indefinitely: beyond a certain point, additional capacity no longer provides a cost benefit. Hence, optimal storage sizing is necessary.
To achieve the maximum benefits from hybridization, more complex control strategies are required. This added complexity increases system vulnerability by introducing additional software, communication, and coordination layers. This may increase the risk of operational failures and cyber-physical attacks if not properly designed and secured. As a result, a trade-off arises between maximizing hybridization benefits and maintaining control simplicity as well as system robustness.
From a deployment perspective, the added footprint of multiple storage devices can be challenging for indoor, space-constrained AI data centers. Limited space may not allow the intended hybrid energy storage system configuration. In such cases, designers may favor higher-cost, energy-dense storage technologies or be forced to deploy undersized, low-density devices. This can also require downsizing the high-power storage (HPS) layer, reducing its ability to sustain fast power support. As a result, the degree of power–energy partitioning is weakened, and the benefits of the hybrid energy storage system are reduced.
In addition, hybrid energy storage systems often require multiple cooling mechanisms and fire safety measures to accommodate different storage technologies. Enhancing cooling and safety improves performance and reliability but increases system cost, infrastructure complexity, and potential points of failure. Moreover, allocating more space for cooling systems can also reduce the space available for energy storage, forcing compromises in system sizing. As a result, a trade-off arises between thermal safety, cost, spatial efficiency, and achievable storage capacity.

3.7. Cases When Hybridization May Provide Limited Benefit

Hybridization of multiple energy storage systems may bring several benefits for AI data centers. However, it is not universally optimal. In AI data centers where workloads are dominated by steady inference tasks with relatively low ramp rates and limited micro-cycling magnitude, a well-optimized battery energy storage system may be sufficient to meet performance and reliability requirements. In such cases, adding a fast-response storage layer may provide little additional operational or economic benefit.
A HESS introduces additional components, control complexity, and hence additional capital cost. These expenses can outweigh the advantages if GPU workloads are steady and power fluctuations are manageable by a single storage type (in most cases, an optimized battery storage system). Therefore, hybridization of multiple ESSs should be guided by measured load profiles rather than adopted by default.

4. Discussions on Deployment Challenges and Research Opportunities

Energy storage has been widely studied in applications such as electric vehicles, renewable integration, grid support, and conventional data centers. AI data centers introduce additional challenges because storage must operate under software-driven, centrally scheduled, highly synchronized workloads. Unlike traditional applications, storage operation in AI data centers is directly coupled with workload scheduling, which affects control, degradation, cooling, safety, and system integration.
This section discusses system-level constraints, deployment challenges, and key research opportunities associated with integrating ESSs into AI data center environments.

4.1. System-Level Deployment Challenges of Storage in AI Data Centers

The challenges faced by energy storage systems in AI data centers are not simply more aggressive versions of those encountered in EV or grid-scale applications. Instead, they arise from a different coupling between software-controlled workloads and electrical infrastructure. In AI data centers, software scheduling directly determines when large power ramps occur inside the facility. As a result, workload scheduling becomes an integral part of the overall control problem and must be coordinated with grid and storage operation.
One major challenge is identifying where the storage system will be located. Indoor deployment near GPU racks reduces response time but introduces strict constraints on space, thermal management, and fire safety. Outdoor or campus-level storage offers more space and cooling flexibility but may not respond fast enough to mitigate millisecond-scale transients. This creates a trade-off between electrical performance and physical feasibility that is less pronounced in grid-scale or EV applications.
As mentioned earlier in Section 2.2, AI power profiles require closer coordination between energy storage systems and power electronics. In practice, ESSs and power electronics must operate as a single integrated system to maintain voltage stability and protect equipment. Moreover, fast load changes place stress on converters, DC buses, and protection devices, which increases the need for robust control and protection schemes.

4.2. Thermal and Cooling Challenges of Storage Integration in AI Data Centers

Thermal management represents a critical challenge for integrating energy storage systems into AI data centers. As discussed in Section 2.2 and Section 2.5, many electrochemical systems and their associated power electronics generate excessive heat during frequent charging and discharging, high C-rate operation, and rapid power ramps. This can accelerate degradation if heat is not removed effectively. These effects are more severe for chemistries with narrow thermal safety margins. Hence, effective cooling, conservative operating limits, and careful chemistry selection are critical for safe and reliable deployment.

4.3. Grid Interconnection, Siting, and Market Barriers

The rapid growth of AI data centers is placing increasing pressure on electric grids, especially in regions where transmission and interconnection capacity is already limited. In the United States, areas such as Northern Virginia illustrate the tension between fast AI expansion and constrained grid infrastructure [12,88,89]. New data center clusters often face long interconnection queues and may require accepting interruptible service or curtailment during peak periods. Energy storage can help by absorbing or supplying power locally, which reduces stress on upstream grid equipment. However, this benefit depends on having clear rules that allow storage to participate in reliability services, demand response programs, and emergency operations.
Siting is also a key issue. In Northern Virginia, both existing and proposed data centers are facing growing community opposition because of water use for cooling, noise and low-frequency sound near residential areas, and broader environmental and land-use impacts [88,90]. Many locations are already constrained by zoning limits and limited developable land, which makes it difficult for data centers to allocate additional space for large storage systems. Because of the footprint, noise, or safety requirements, not all storage technologies are suitable for every site. These challenges reinforce the need for flexible siting strategies, including grid-side storage shared by multiple facilities, smaller modular systems, or hybrid solutions that reduce the on-campus footprint.
Another barrier is market and tariff design. AI data centers may fall under special customer classes or operate in restructured markets where capacity, energy, and ancillary services follow different rules and price signals [91]. This makes it difficult for AI data centers to deploy and operate storage systems and creates uncertainty about long-term revenue. Even hybrid storage systems that reduce load swings, support frequency regulation, or provide black-start capability may not be fully compensated for all the services they provide. Therefore, clearer policies are needed to recognize the multi-service value of storage and support grid-friendly operation of AI data centers.

4.4. New Research Questions

Most conclusions in the high-power and high-energy storage literature do not account for AI data center-specific challenges. Existing studies typically assume that power demand is driven by external events or user behavior, whereas in AI data centers demand is generated internally by software-controlled workload scheduling. As a result, many conclusions do not transfer directly to AI data center environments, which motivates a set of new research questions. These questions highlight the need for quantitative frameworks that link workload scheduling behavior to storage ramp rates, cycling depth, and long-term degradation.
One key issue is the link between workload scheduling and storage degradation. In AI data centers, micro-cycling is not an external disturbance but a direct consequence of software decisions. This raises the question of how job scheduling policies translate into battery cycling depth, ramp rates, and long-term degradation.
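One way to make this link concrete is to post-process a scheduler-driven storage power trace into cycling statistics. The sketch below is illustrative (all function names and parameters are assumptions, not an established tool): it integrates a power trace into state of charge, extracts turning points, and reports half-cycle depths and the peak ramp rate. A full rainflow count (e.g., per ASTM E1049) would refine the depth histogram, but the simplified half-cycle version already exposes how scheduling decisions shape cycling stress.

```python
def soc_trace(power_kw, cap_kwh, dt_h=1 / 3600, soc0=0.5):
    """Integrate a storage power trace (kW, positive = discharge) into a
    state-of-charge series, clipped to [0, 1]."""
    soc = [soc0]
    for p in power_kw:
        soc.append(min(1.0, max(0.0, soc[-1] - p * dt_h / cap_kwh)))
    return soc

def turning_points(x, tol=1e-6):
    """Keep only local extrema of the series (reversal points)."""
    tp = [x[0]]
    for v in x[1:]:
        if abs(v - tp[-1]) < tol:
            continue  # ignore flat segments
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (v - tp[-1]) > 0:
            tp[-1] = v  # same direction: extend the current excursion
        else:
            tp.append(v)  # direction reversal: new turning point
    return tp

def half_cycle_depths(soc):
    """Depth of each half-cycle, i.e., the SoC swing between reversals."""
    tp = turning_points(soc)
    return [abs(b - a) for a, b in zip(tp, tp[1:])]

def max_ramp_kw_per_s(power_kw, dt_s=1.0):
    """Largest step-to-step power change, a proxy for required ramp rate."""
    return max(abs(b - a) for a, b in zip(power_kw, power_kw[1:])) / dt_s
```

Applied to a trace of alternating 1 s charge/discharge pulses, the output is a set of shallow, equal-depth micro-cycles plus a large ramp-rate requirement, which is exactly the stress pattern described above.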
Another open research question concerns AI-aware load modeling. Existing storage planning models typically rely on aggregated or statistical demand profiles. In contrast, AI data center loads depend on workload type, model size, and scheduling behavior. New mathematical models are therefore needed to support research on storage sizing, control design, and degradation analysis.
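As a starting point, such a model can be sketched as a toy load generator: a baseline plus a square-wave swing approximating the compute/communication phase alternation of synchronous training, with occasional checkpoint-style dips. All parameter values here are illustrative assumptions, not measurements.

```python
import random

def synthetic_ai_load(n_steps, base_mw=20.0, swing_mw=15.0,
                      period_s=8, dt_s=1.0, seed=0):
    """Toy AI-training load trace (MW). A square wave mimics compute vs.
    communication phases; rare random dips mimic checkpoint/idle events.
    All parameters are illustrative, not measured values."""
    rng = random.Random(seed)
    load = []
    for k in range(n_steps):
        phase_on = (k * dt_s) % period_s < period_s / 2
        p = base_mw + (swing_mw if phase_on else 0.0)
        if rng.random() < 0.01:  # occasional checkpoint/idle dip
            p = base_mw * 0.3
        load.append(p)
    return load
```

Even this crude generator reproduces the key feature that distinguishes AI loads from statistical demand profiles: large, periodic swings whose timing is set by software, which can then feed directly into sizing and degradation studies.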
Another open research question is how ESS control should be coordinated with workload scheduling in real time. Such coordination introduces additional control complexity and cyber-physical risks, which open a further dimension for research.
While Section 3.4 provides a semi-quantitative guideline for power–energy partitioning in AI data center HESSs, important gaps remain. The guidance is still based on a limited set of available studies, reflecting the early stage of HESS research for AI data centers. Further research is needed to generalize partitioning rules across different technologies, system sizes, scheduling policies, and operating conditions. Addressing this gap is critical for designing cost-effective HESS architectures that balance performance, degradation, and footprint.
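One partitioning approach from the filter-based energy management literature (e.g., [86]) can be sketched as follows: a first-order low-pass filter routes the slow component of the load to the high-energy layer and the fast residual to the high-power layer, with the filter time constant acting as the partitioning knob. The function names and the 30 s time constant below are illustrative assumptions, not validated design values.

```python
def split_load(load_kw, dt_s=1.0, tau_s=30.0):
    """Split a load trace into a slow part (battery) and a fast residual
    (supercapacitor/flywheel) with a first-order low-pass filter.
    tau_s sets the partition timescale; by construction slow + fast = load."""
    alpha = dt_s / (tau_s + dt_s)
    slow, fast, s = [], [], load_kw[0]
    for p in load_kw:
        s += alpha * (p - s)  # exponential moving average
        slow.append(s)
        fast.append(p - s)
    return slow, fast

def partition_ratings(load_kw, dt_s=1.0, tau_s=30.0):
    """Illustrative ratings: the fast residual's peak sets the high-power
    layer's power rating; the slow component's swing bounds battery power."""
    slow, fast = split_load(load_kw, dt_s, tau_s)
    return max(abs(p) for p in fast), max(slow) - min(slow)
```

Sweeping tau_s over realistic AI load traces is one concrete way to turn the semi-quantitative guidance of Section 3.4 into generalizable partitioning rules.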
AI data centers also raise new questions about degradation modeling. Existing battery aging models are typically designed for EV, consumer electronics, or traditional UPS systems. These models rely on lifetime indicators such as equivalent full cycles or ampere-hour throughput, which can overestimate degradation when applied to AI data centers, where shallow micro-cycling dominates and is punctuated by occasional high C-rate events. Hence, improved models are needed to link AI-driven power behavior directly to long-term storage health.
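The contrast between throughput-based indicators and depth-sensitive ones can be illustrated with a minimal sketch. The damage exponent below is purely an assumption for illustration; real aging models fit such parameters to cell test data.

```python
def equivalent_full_cycles(power_kw, cap_kwh, dt_h=1 / 3600):
    """Throughput-based lifetime indicator: total energy cycled through the
    cell divided by twice its capacity (one full cycle = charge + discharge).
    Depth-blind: a hundred 1% cycles count the same as one 100% cycle."""
    throughput_kwh = sum(abs(p) * dt_h for p in power_kw)
    return throughput_kwh / (2.0 * cap_kwh)

def depth_weighted_cycles(depths, exponent=1.5):
    """Illustrative depth-weighted count: with a damage exponent above 1,
    shallow micro-cycles contribute less than proportionally, so this
    indicator diverges from plain throughput under micro-cycling."""
    return sum(d ** exponent for d in depths)
```

For a micro-cycling-dominated trace, the two indicators disagree sharply, which is precisely why throughput-style metrics calibrated on EV duty cycles can misjudge AI data center storage health.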
Finally, cyber-physical security represents an important and underexplored research gap. Storage controllers are increasingly integrated with data center management software and communication networks. Failures or cyber-attacks could trigger large and rapid power swings that affect both the data center and the grid. New control architectures are needed that remain fast, robust, and secure under these conditions.

4.5. Prioritized Research Gaps and Future Directions

The previous subsection identified a broad set of AI-specific research questions; this subsection highlights the most critical gaps that require immediate attention. The highest-priority research needs include:
  • AI-aware load and degradation modeling: Develop models that directly link workload scheduling, micro-cycling behavior, and long-term storage degradation.
  • Quantitative HESS sizing guidelines: Establish practical rules for power–energy partitioning between high-power and high-energy storage layers based on realistic AI load profiles.
  • Joint control of workloads and storage: Design coordinated control frameworks that optimize compute scheduling and storage dispatch under fast AI-driven power ramps.
  • Cyber-physical security and resilience: Ensure storage control systems remain stable and secure under faults, misconfigurations, or cyber-attacks.
  • Scalable deployment strategies: Study modular and hybrid architectures that balance performance, footprint, and cost in large AI data center deployments.

5. Conclusions

AI data centers are emerging as one of the most demanding and dynamic electrical loads in modern power systems. Their rapid growth, high rack densities, and unpredictable multi-timescale load patterns place significant stress on both on-site infrastructure and the utility grid. These facilities can ramp up and down hundreds of megawatts within seconds and generate sustained oscillations during GPU-intensive workloads. Conventional UPS systems and traditional control methods, which were designed primarily for backup operation and slower load dynamics, are increasingly inadequate for managing this new class of AI-driven loads.
Energy storage systems, particularly when deployed in hybrid configurations, offer a practical and scalable pathway to address these challenges. Fast-response storage technologies such as supercapacitors, flywheels, SMES, and, to a lesser extent, certain lithium-ion chemistries can handle millisecond-level disturbances and protect sensitive power electronics. In contrast, high-energy systems such as most lithium-ion, sodium–sulfur, flow batteries, and fuel cell ESSs can be used to manage longer-duration load changes and reduce peak demand. Hybrid energy storage systems integrate these complementary technologies into a coordinated platform that improves power quality, extends battery lifetime, supports accelerated interconnection, and enhances the reliability and resilience of AI campuses. Based on the comparative analysis presented in this review, combinations using LTO or LFP batteries as high-energy storage together with supercapacitors, flywheels, or SMES as high-power layers emerge as particularly well suited for AI data center applications.
As AI data centers spread to new locations beyond established hubs, energy storage will become even more important for supporting safe, efficient, and grid-friendly growth. At the same time, the interaction between storage control, AI workload management, and power system planning opens many new research opportunities. These include AI-specific evaluation criteria, workload-aware control strategies, real-time coordination across multiple system layers, secure operation of cyber-physical systems, and integration with cooling systems.
Overall, the cases presented in this review show that hybrid energy storage architectures are likely to become foundational components of future AI energy systems. Their ability to handle both fast and slow power variations, support internal system stability, and meet external grid requirements will be essential as AI computing demands continue to grow over the coming decade. Hybrid systems also offer a natural path to integrate emerging on-site generation technologies. By identifying key performance metrics, AI-centered research gaps, and semi-quantitative design guidance, this review provides a structured foundation for future development. Continued research, real-world demonstrations, and coordinated policy development will be critical to turn these capabilities into reliable, cost-effective, and sustainable data center operations.

Author Contributions

Conceptualization, S.R. and T.A.K.; formal analysis, S.R. and T.A.K.; investigation, S.R. and T.A.K.; writing—original draft preparation, T.A.K.; writing—review and editing, S.R.; visualization, T.A.K.; supervision, S.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kimball, S. Data Centers Powering Artificial Intelligence Could Use More Electricity Than Entire Cities. Available online: https://www.cnbc.com/2024/11/23/data-centers-powering-ai-could-use-more-electricity-than-entire-cities.html (accessed on 1 December 2025).
  2. Li, T.; Pan, J.; Ma, S.; Raikov, A.; Arkhipov, A. SimpleScale: Simplifying the Training of an LLM Model Using 1024 GPUs. Appl. Sci. 2025, 15, 8265. [Google Scholar] [CrossRef]
  3. Sigalos, M. OpenAI’s Historic Week Has Redefined the AI Arms Race for Investors. Available online: https://www.cnbc.com/2025/09/26/openai-big-week-ai-arms-race.html (accessed on 1 December 2025).
  4. Milmo, D. Boom or Bubble? Inside the $3tn AI Datacentre Spending Spree. Available online: https://www.theguardian.com/technology/2025/nov/02/global-datacentre-boom-investment-debt/ (accessed on 1 December 2025).
  5. Reuters. From OpenAI to Nvidia, Firms Channel Billions into AI Infrastructure as Demand Booms. Reuters, 31 December 2025.
  6. Barber, P. Data Centre Boom Sparks Deals Rush. Available online: https://www.ft.com/content/42f3dec5-b8dc-49a2-aa5c-0e62ab529173/ (accessed on 12 December 2025).
  7. Milman, O. More Than 200 Environmental Groups Demand Halt to New US Datacenters. Available online: https://www.theguardian.com/us-news/2025/dec/08/us-data-centers (accessed on 12 December 2025).
  8. Soni, A.; Sophia, D.M.; Navin, N. Microsoft Unveils $23 Billion in New AI Investments with Big Focus on India. Reuters, 10 December 2025.
  9. Amazon Will Invest AU$20 Billion in Data Center Infrastructure in Australia. Available online: https://www.aboutamazon.com/news/aws/amazon-data-center-investment-in-australia (accessed on 12 December 2025).
  10. Chen, X.; Wang, X.; Colacelli, A.; Lee, M.; Xie, L. Electricity Demand and Grid Impacts of AI Data Centers: Challenges and Prospects. arXiv 2025, arXiv:2509.07218. [Google Scholar] [CrossRef]
  11. Chapman, H. New Data Center Developments: December 2025. Available online: https://www.datacenterknowledge.com/data-center-construction/new-data-center-developments-december-2025 (accessed on 13 December 2025).
  12. JLARC. Data Centers in Virginia. Available online: https://jlarc.virginia.gov/landing-2024-data-centers-in-virginia.asp (accessed on 12 December 2025).
  13. Data Centers|Northern Virginia Regional Commission—Website. Available online: https://www.novaregion.org/1598/Data-Centers (accessed on 12 December 2025).
  14. Howland, E. Grid Constraints Limit Near-Term Data Center Growth in Northwest: NPCC Panelist. Available online: https://www.utilitydive.com/news/data-center-load-northwest-npcc-power-plan-microsoft/735346/ (accessed on 12 December 2025).
  15. Curran, I. New Data Centres Must Generate and Supply Electricity to Wider Market, Regulator Rules. Available online: https://www.irishtimes.com/business/2025/12/12/new-data-centres-must-generate-and-supply-electricity-to-wider-market-regulator-rules/ (accessed on 12 December 2025).
  16. Er, D.; Ang, A. The Future of Data Centres in Singapore|Addleshaw Goddard LLP. Available online: https://www.addleshawgoddard.com/en/insights/insights-briefings/2025/real-estate/future-data-centres-singapore/ (accessed on 12 December 2025).
  17. de-Bray, G.; Najeeb, N.; DeBlase, N. AI and Energy Sectors More Intertwined than Ever; Deutsche Bank Research Institute: Frankfurt am Main, Germany, 2025. [Google Scholar]
  18. Srivathsan, B.; Sorel, M.; Sachdeva, P. AI Power: Expanding Data Center Capacity to Meet Growing Demand. Available online: https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/ai-power-expanding-data-center-capacity-to-meet-growing-demand#/ (accessed on 12 December 2025).
  19. Wendling, J. Charted: The Rising Share of U.S. Data Center Power Demand. Available online: https://www.visualcapitalist.com/sp/gx03-charted-the-rising-share-of-u-s-data-center-power-demand/ (accessed on 12 December 2025).
  20. U.S. Department of Energy. DOE Releases New Report Evaluating Increase in Electricity Demand from Data Centers. Available online: https://www.energy.gov/articles/doe-releases-new-report-evaluating-increase-electricity-demand-data-centers (accessed on 12 December 2025).
  21. Davenport, C.; Singer, B.; Mehta, N.; Lee, B.; Mackay, J.; Modak, A.; Corbett, B.; Miller, J.; Hari, T.; Ritchie, J.; et al. Generational Growth: AI, Data Centers and the Coming US Power Demand Surge; Goldman Sachs: New York, NY, USA, 2024. [Google Scholar]
  22. Johnstone, C. How Power Density Is Changing in Data Centers and What It Means for Liquid Cooling. Available online: https://jetcool.com/post/how-power-density-is-changing-in-data-centers/ (accessed on 12 December 2025).
  23. Bommarito, M. Rack Density Evolution: From 5 kW to 350 kW per Rack. Available online: https://michaelbommarito.com/wiki/datacenters/technology/rack-density/ (accessed on 12 December 2025).
  24. Taimela, P. Why Battery Energy Storage Is the Future of Data Center UPS Solutions. Available online: https://www.flexgen.com/resources/blog/expert-qa-why-battery-energy-storage-future-data-center-ups-solutions (accessed on 12 December 2025).
  25. NERC Large Loads Task Force. Characteristics and Risks of Emerging Large Loads; White Paper; North American Electric Reliability Corporation (NERC), 2025. Available online: https://www.studocu.com/row/document/beijing-normal-university/knot-theory/nerc-report-characteristics-risks-of-emerging-large-loads-july-2025/141225500 (accessed on 12 December 2025).
  26. Choukse, E.; Warrier, B.; Heath, S.; Belmont, L.; Zhao, A.; Khan, H.A.; Harry, B.; Kappel, M.; Hewett, R.J.; Datta, K.; et al. Power Stabilization for AI Training Datacenters. arXiv 2025, arXiv:2508.14318. [Google Scholar] [CrossRef]
  27. Rates and Tariffs|Virginia|Dominion Energy. Available online: https://www.dominionenergy.com/virginia/rates-and-tariffs (accessed on 12 December 2025).
  28. Liu, K.; Li, B.; Jiang, Q.; Zhang, Y.; Liu, T. Fault ride-through strategy for hybrid cascaded HVDC systems based on controllable LCC. IEEE Trans. Circuits Syst. II Express Briefs 2026, 73, 88–92. [Google Scholar] [CrossRef]
  29. Xie, Y.; Cui, W.; Wierman, A. Enhancing Data Center Low-Voltage Ride-Through. arXiv 2025, arXiv:2510.03867. [Google Scholar] [CrossRef]
  30. Ali, Z.M.; Calasan, M.; Aleem, S.H.E.A.; Jurado, F.; Gandoman, F.H. Applications of Energy Storage Systems in Enhancing Energy Management and Access in Microgrids: A Review. Energies 2023, 16, 5930. [Google Scholar] [CrossRef]
  31. Aghmadi, A.; Mohammed, O.A. Energy Storage Systems: Technologies and High-Power Applications. Batteries 2024, 10, 141. [Google Scholar] [CrossRef]
  32. Liu, X.; Li, W.; Guo, X.; Su, B.; Guo, S.; Jing, Y.; Zhang, X. Advancements in Energy-Storage Technologies: A Review of Current Developments and Applications. Sustainability 2025, 17, 8316. [Google Scholar] [CrossRef]
  33. Decourt, B.; Debarre, R. Electricity Storage: Leading the Energy Transition Factbook; Schlumberger Business Consulting (SBC) Energy Institute: Gravenhage, The Netherlands, 2013. [Google Scholar]
  34. Department of Energy Global Energy Storage Database. Available online: https://gesdb.sandia.gov/ (accessed on 1 November 2025).
  35. Soto, A.; Berrueta, A.; Mateos, M.; Sanchis, P.; Ursúa, A. Impact of micro-cycles on the lifetime of lithium-ion batteries: An experimental study. J. Energy Storage 2022, 55, 105343. [Google Scholar] [CrossRef]
  36. Wang, D.; Ren, C.; Sivasubramaniam, A.; Urgaonkar, B.; Fathy, H. Energy Storage in Datacenters: What, Where, and How Much? SIGMETRICS Perform. Eval. Rev. 2012, 40, 187–198. [Google Scholar] [CrossRef]
  37. Eapen, D.E.; Suresh, R.; Patil, S.; Rengaswamy, R. A systems engineering perspective on electrochemical energy technologies and a framework for application driven choice of Technology. Renew. Sustain. Energy Rev. 2021, 147, 111165. [Google Scholar] [CrossRef]
  38. Technologies: Why Energy Storage? Available online: https://energystorageeurope.eu/why-energy-storage/technologies/ (accessed on 10 January 2026).
  39. Adetokun, B.B.; Oghorada, O.; Abubakar, S.J. Superconducting Magnetic Energy Storage Systems: Prospects and Challenges for Renewable Energy Applications. J. Energy Storage 2022, 55, 105663. [Google Scholar] [CrossRef]
  40. Liu, W.; Sun, X.; Yan, X.; Gao, Y.; Zhang, X.; Wang, K.; Ma, Y. Review of Energy Storage Capacitor Technology. Batteries 2024, 10, 271. [Google Scholar] [CrossRef]
  41. Nazaralizadeh, S.; Banerjee, P.; Srivastava, A.K.; Famouri, P. Battery Energy Storage Systems: A review of Energy Management Systems and health metrics. Energies 2024, 17, 1250. [Google Scholar] [CrossRef]
  42. Environmental and Energy Study Institute (EESI) Fact Sheet: Energy Storage. 2019. Available online: https://www.eesi.org/papers/view/energy-storage-2019 (accessed on 10 January 2026).
  43. Nyamathulla, S.; Dhanamjayulu, C. A review of Battery Energy Storage Systems and Advanced Battery Management System for different applications: Challenges and recommendations. J. Energy Storage 2024, 86, 111179. [Google Scholar] [CrossRef]
  44. Hannan, M.A.; Lipu, M.S.H.; Hussain, A.; Mohamed, A. A Review of Lithium-Ion Battery State of Charge Estimation and Management System in Electric Vehicle Applications: Challenges and Recommendations. Renew. Sustain. Energy Rev. 2017, 78, 834–854. [Google Scholar] [CrossRef]
  45. Leonardi, S.G.; Samperi, M.; Frusteri, L.; Antonucci, V.; D’Urso, C. A review of sodium-metal chloride batteries: Materials and cell design. Batteries 2023, 9, 524. [Google Scholar] [CrossRef]
  46. Understanding LTO Batteries and Their Advantages—Large Battery. Available online: https://www.large-battery.com/blog/understanding-lto-batteries-advantages/ (accessed on 10 January 2026).
  47. 6 Types of Lithium Ion Batteries: Everything to Know: Ecoflow US. Available online: https://www.ecoflow.com/za/blog/types-of-lithium-ion-batteries (accessed on 10 January 2026).
  48. What Is Cycle Life of Battery? Available online: https://nbcellenergy.com/what-is-cycle-life-of-battery/ (accessed on 10 January 2026).
  49. Oupes Store. LFP vs. NMC: Which Is the Best Choice for Home Battery Backup? Available online: https://oupes.com/blogs/blogs/oupes-lfp-vs-nmc-which-is-the-best-choice-for-home-battery-backup (accessed on 10 January 2026).
  50. Comparison of Cycle Life of LFP and NMC Lithium Batteries. Available online: https://www.porffor.com/newsinfo/783093.html (accessed on 10 January 2026).
  51. Elalfy, D.A.; Gouda, E.; Kotb, M.F.; Bureš, V.; Sedhom, B.E. Comprehensive Review of Energy Storage Systems Technologies, objectives, challenges, and future trends. Energy Strategy Rev. 2024, 54, 101482. [Google Scholar] [CrossRef]
  52. Understanding Lithium-Ion Battery Weight and Energy Density. Available online: https://www.large-battery.com/blog/lithium-ion-battery-weight-and-density-explained-guide/ (accessed on 10 January 2026).
  53. Key Metrics and Definitions for Energy Storage. Available online: https://courses.ems.psu.edu/eme812/node/803 (accessed on 10 January 2026).
  54. LTO Battery 40120 10,000 mah 2.4 V. Available online: https://lithium-titanate-battery.com/lto-battery-40120-10000mah-2-4v/ (accessed on 10 January 2026).
  55. Lithium Nickel Cobalt Aluminum Oxide (NCA, LiNi0.8Co0.15Al0.05O2, be-45) Cathode Powder. Available online: https://www.fuelcellstore.com/lithium-nickel-cobalt-aluminum-oxide-nca-be-45 (accessed on 10 January 2026).
  56. Schöberl, J.; Ank, M.; Schreiber, M.; Wassiliadis, N.; Lienkamp, M. Thermal runaway propagation in automotive lithium-ion batteries with NMC-811 and LFP Cathodes: Safety Requirements and impact on system integration. eTransportation 2024, 19, 100305. [Google Scholar] [CrossRef]
  57. Dassisti, M.; Mastrorilli, P.; Rizzuti, A.; Cozzolino, G.; Chimienti, M.; Olabi, A.-G.; Matera, F.; Carbone, A.; Ramadan, M. Vanadium: A Transition Metal for Sustainable Energy Storing in Redox Flow Batteries; Elsevier Ebooks: Amsterdam, The Netherlands, 2022; pp. 208–229. [Google Scholar] [CrossRef]
  58. Olabi, A.G.; Allam, M.A.; Abdelkareem, M.A.; Deepa, T.D.; Alami, A.H.; Abbas, Q.; Alkhalidi, A.; Sayed, E.T. Redox Flow Batteries: Recent Development in Main Components, Emerging Technologies, Diagnostic Techniques, Large-Scale Applications, and Challenges and Barriers. Batteries 2023, 9, 409. [Google Scholar] [CrossRef]
  59. Zhang, Z.; Ding, T.; Zhou, Q.; Sun, Y.; Qu, M.; Zeng, Z.; Ju, Y.; Li, L.; Wang, K.; Chi, F. A Review of Technologies and Applications on Versatile Energy Storage Systems. Renew. Sustain. Energy Rev. 2021, 148, 111263. [Google Scholar] [CrossRef]
  60. Anyadetanwu, I.S.; Buzzi, F.; Peljo, P.; Bischi, A.; Bertei, A. System-Level Dynamic Model of Redox Flow Batteries (RFBs) for Energy Losses Analysis. Energies 2024, 17, 5324. [Google Scholar] [CrossRef]
  61. Gopi, C.V.V.M.; Ramesh, R. Review of Battery-Supercapacitor Hybrid Energy Storage Systems for Electric Vehicles. Results Eng. 2024, 24, 103598. [Google Scholar] [CrossRef]
  62. Yaseen, M.; Khattak, M.A.K.; Humayun, M.; Usman, M.; Shah, S.S.; Bibi, S.; Hasnain, B.S.U.; Ahmad, S.M.; Khan, A.; Shah, N.; et al. A Review of Supercapacitors: Materials Design, Modification, and Applications. Energies 2021, 14, 7779. [Google Scholar] [CrossRef]
  63. Luo, X.; Wang, J.; Dooner, M.; Clarke, J. Overview of Current Development in Electrical Energy Storage Technologies and the Application Potential in Power System Operation. Appl. Energy 2015, 137, 511–536. [Google Scholar] [CrossRef]
  64. Li, X.; Palazzolo, A. A Review of Flywheel Energy Storage Systems: State of the Art and Opportunities. J. Energy Storage 2022, 46, 103576. [Google Scholar] [CrossRef]
  65. Zhang, J.W.; Wang, Y.H.; Liu, G.C.; Tian, G.Z. A Review of Control Strategies for Flywheel Energy Storage System and a Case Study with Matrix Converter. Energy Rep. 2022, 8, 3948–3963. [Google Scholar] [CrossRef]
  66. Hernando López de Toledo, C.; Munilla, J.; García-Tabarés, L.; Gil, C.; Ballarín, N.; Orea, J.; Iturbe, R.; López, B.; Ballarino, A. Design of Superconducting Magnetic Energy Storage (SMES) for Waterborne Applications. IEEE Trans. Appl. Supercond. 2025, 35, 1–5. [Google Scholar] [CrossRef]
  67. Khaleel, M.; Yusupov, Z.; Nassar, Y.; El-khozondar, H.J.; Ahmed, A.; Alsharif, A. Technical Challenges and Optimization of Superconducting Magnetic Energy Storage in Electrical Power Systems. e-Prime—Adv. Electr. Eng. Electron. Energy 2023, 5, 100223. [Google Scholar] [CrossRef]
  68. BESS and Data Centers: Powering AI with Smart Energy Systems—CARRAR. Available online: https://www.carrar.net/resources/bess-and-ai-driven-data-centers/ (accessed on 12 December 2025).
  69. Atawi, I.E.; Al-Shetwi, A.Q.; Magableh, A.M.; Albalawi, O.H. Recent Advances in Hybrid Energy Storage System Integrated Renewable Power Generation: Configuration, Control, Applications, and Future Directions. Batteries 2023, 9, 29. [Google Scholar] [CrossRef]
  70. Bocklisch, T. Hybrid Energy Storage Approach for Renewable Energy Applications. J. Energy Storage 2016, 8, 311–319. [Google Scholar] [CrossRef]
  71. Hajiaghasi, S.; Salemnia, A.; Hamzeh, M. Hybrid Energy Storage System for Microgrids Applications: A Review. J. Energy Storage 2019, 21, 543–570. [Google Scholar] [CrossRef]
  72. Ahmed, K.M.U.; Bollen, M.H.J.; Alvarez, M. A Review of Data Centers Energy Consumption and Reliability Modeling. IEEE Access 2021, 9, 152536–152563. [Google Scholar] [CrossRef]
  73. Ahmed, K.M.U.; Alvarez, M.; Bollen, M.H.J. Reliability Analysis of Internal Power Supply Architecture of Data Centers in Terms of Power Losses. Electr. Power Syst. Res. 2021, 193, 107025. [Google Scholar] [CrossRef]
  74. Xu, B.; Oudalov, A.; Ulbig, A.; Andersson, G.; Kirschen, D.S. Modeling of Lithium-Ion Battery Degradation for Cell Life Assessment. IEEE Trans. Smart Grid 2018, 9, 1131–1140. [Google Scholar] [CrossRef]
  75. Li, J.; Yang, Q.; Robinson, F.; Liang, F.; Zhang, M.; Yuan, W. Design and Test of a New Droop Control Algorithm for a SMES/Battery Hybrid Energy Storage System. Energy 2017, 118, 1110–1122. [Google Scholar] [CrossRef]
  76. Gee, A.M.; Robinson, F.V.P.; Dunn, R.W. Analysis of Battery Lifetime Extension in a Small-Scale Wind-Energy System Using Supercapacitors. IEEE Trans. Energy Convers. 2013, 28, 24–33. [Google Scholar] [CrossRef]
  77. Liu, L.; Li, C.; Sun, H.; Hu, Y.; Gu, J.; Li, T.; Xin, J.; Zheng, N. HEB: Deploying and managing hybrid energy buffers for improving datacenter efficiency and economy. In Proceedings of the 42nd Annual International Symposium on Computer Architecture, Portland, OR, USA, 13–17 June 2015; pp. 463–475. [Google Scholar]
  78. Wang, K.; Ye, L.; Yang, S.; Deng, Z.; Song, J.; Li, Z.; Zhao, Y. A hierarchical dispatch strategy of hybrid energy storage system in internet data center with model predictive control. Appl. Energy 2023, 331, 120414. [Google Scholar] [CrossRef]
  79. Zheng, W.; Ma, K.; Wang, X. Hybrid energy storage with supercapacitor for cost-efficient data center power shaving and capping. IEEE Trans. Parallel Distrib. Syst. 2017, 28, 1105–1118. [Google Scholar] [CrossRef]
  80. Babu, T.S.; Vasudevan, K.R.; Ramachandaramurthy, V.K.; Sani, S.B.; Chemud, S.; Lajim, R.M. A Comprehensive Review of Hybrid Energy Storage Systems: Converter Topologies, Control Strategies and Future Prospects. IEEE Access 2020, 8, 148702–148721. [Google Scholar] [CrossRef]
  81. Ali, M.H.; Slaifstein, D.; Ibanez, F.M.; Zugschwert, C.; Pugach, M. Power Management Strategies for Vanadium Redox Flow Battery and Supercapacitors in Hybrid Energy Storage Systems. In Proceedings of the 2022 IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT-Europe), Novi Sad, Serbia, 10–12 October 2022. [Google Scholar] [CrossRef]
  82. Maroufi, S.M.; Karrari, S.; Rajashekaraiah, K.; De Carne, G. Power Management of Hybrid Flywheel-Battery Energy Storage Systems Considering the State of Charge and Power Ramp Rate. IEEE Trans. Power Electron. 2025, 40, 9944–9956. [Google Scholar] [CrossRef]
  83. Torreglosa, J.P.; García, P.; Fernández, L.M.; Jurado, F. Energy Dispatching Based on Predictive Controller of an Off-Grid Wind Turbine/Photovoltaic/Hydrogen/Battery Hybrid System. Renew. Energy 2015, 74, 326–336. [Google Scholar] [CrossRef]
  84. Zhang, Y.; Tang, H.; Li, H.; Wang, S. Unlocking the Flexibilities of Data Centers for Smart Grid Services: Optimal Dispatch and Design of Energy Storage Systems under Progressive Loading. Energy 2025, 316, 134511. [Google Scholar] [CrossRef]
  85. Wang, Z.; Yin, Z.; Yang, J.; Wang, J. Coordinated Optimization of Distributed Energy System and Storage-Enhanced Uninterruptible Power Supply in Data Center: A Three-Level Optimization Framework with Model Predictive Control. Energy Convers. Manag. 2025, 342, 120137. [Google Scholar] [CrossRef]
  86. Ramos, G.A.; Costa-Castelló, R. Energy Management Strategies for Hybrid Energy Storage Systems Based on Filter Control: Analysis and Comparison. Electronics 2022, 11, 1631. [Google Scholar] [CrossRef]
  87. Sun, M.; Xue, Y.; Bogdan, P.; Tang, J.; Wang, Y.; Lin, X. Hierarchical and hybrid energy storage devices in data centers: Architecture, control and provisioning. PLoS ONE 2018, 13, e0191450. [Google Scholar] [CrossRef] [PubMed]
  88. Patel, K.; Steinberger, K.; Debenedictis, A.; Wu, M.; Blair, J.; Picciano, P.; Oporto, P.; Li, R.; Mahoney, B.; Solfest, A.; et al. Virginia Data Center Study: Electric Infrastructure and Customer Rate Impacts. Available online: https://jlarc.virginia.gov/pdfs/presentations/JLARC%20Virginia%20Data%20Center%20Study_FINAL_12-09-2024.pdf (accessed on 20 January 2026).
  89. Blume, P. Dateline Ashburn: Data Centers Drive New Energy Disputes in Northern Virginia. Available online: https://broadbandbreakfast.com/dateline-ashburn-data-centers-drive-new-energy-disputes-in-northern-virginia/ (accessed on 12 December 2025).
  90. $64 Billion of Data Center Projects Have Been Blocked or Delayed Amid Local Opposition. Available online: https://www.datacenterwatch.org/report/ (accessed on 12 December 2025).
  91. Reisinger Gooch. Virginia Energy Regulatory Updates (September 2025). Available online: https://reisingergooch.com/virginia-energy-regulatory-updates-september-2025/ (accessed on 20 January 2026).
Figure 1. Classification of energy storage systems by the form of energy stored. (Chemical storage provides long-duration energy, electrical and electromagnetic storage enable very fast response, electrochemical storage balances energy and power, mechanical storage supports large-scale and long-duration applications, and thermal storage is suitable for low-cost bulk energy shifting).
Figure 2. Energy storage technologies by system power and discharge duration. (The (left) panel shows all major technologies (adapted from [33] and updated using [34]), while the (right) panel shows commercially available electrochemical systems with explicit labels. Key takeaway: electrochemical storage spans the power–duration range most relevant to AI data center operation).
Figure 3. A schematic diagram illustrating the working principle of a vanadium redox flow battery (VRFB).
Figure 4. A schematic diagram illustrating the working principle of a supercapacitor.
Figure 5. Schematic diagram of a flywheel energy storage system.
Figure 6. Generic structure of a superconducting magnetic energy storage (SMES) system.
Figure 7. (a) Generic architecture of a hybrid energy storage system showing coordinated control of multiple energy storage devices through an energy management system and power converters; (b) example high-energy and high-power storage pairings tailored to meet the power requirements of AI data centers.
Table 1. Advantages, limitations, and typical failure modes of major energy storage classes.
| Storage Class | Advantages | Limitations | Typical Failure Modes |
|---|---|---|---|
| Chemical | Long duration | Slow response, very low energy density, poor efficiency | Leakage can form a flammable layer |
| Electrical | Ultra-fast response, exceptional cycle life | Extremely low energy capacity, short duration only | Capacitance degradation, dielectric breakdown |
| Electrochemical | High energy density, UPS-compatible, many options | Aging under frequent high C-rate cycling, thermal and fire safety constraints | Thermal runaway, accelerated degradation, BMS/inverter faults |
| Electromagnetic | Very fast response, excellent cycle life | Extremely low energy capacity, very high cost, cryogenic complexity, short duration only | Quench events, cryogenic or auxiliary system failure |
| Mechanical | Fast response, very high cycle life | Limited duration; mechanical footprint and vibration constraints | Bearing wear, rotor imbalance and disintegration, interface failures |
| Thermal | Long-duration cooling support, reduces electrical peaks indirectly | Cannot mitigate electrical transients, slow response | Pump/valve failures, thermal losses, control mismatch |
Table 2. Comparison of key performance metrics for energy storage systems in AI data center applications [33,34,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53].
| Class | Family | Variant | Power Rating | Discharge Time | Energy Density | Power Density | Response Time | Cycle Life | Efficiency (%) | Micro-Cycling | Ramp-Rate Fit | AI Data Center Suitability |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Chemical | Hydrogen | — | 1 kW–1 GW | min–weeks | ≤10 kWh/kg | >500 kW/m³ | min-level | 500–2000 | 20–66 | Poor | Poor | Very Limited (HES only) |
| Electrical | PEC | — | 100 kW–4 MW | ms–s | ≤0.2 Wh/L | 10 kW/L | µs–ms | 10k–100k | 95–99 | Excellent | Excellent | Suitable (HPS only) |
| Electrical | EDLC | — | ≤4 MW | s–min | 0.2–7 Wh/kg | >100,000 kW/m³ | ms-level | 500k–1M | 90–98 | Excellent | Excellent | Highly Suitable (HPS only) |
| Electrochemical | Flow Battery | VRFB | 5 kW–200 MW | 2–20 h | 16–35 Wh/L | 0.5–2 kW/m³ | 100 ms | 12k–14k | ~70 | Excellent | Poor | Conditional (bulky, HES only) |
| Electrochemical | Lead–Acid | — | 2 kW–100 MW | ~10 h | 50–90 Wh/L | 200–500 kW/m³ | 100 ms | 500–2000 | 70–90 | Poor | Limited | Very Limited (old technology) |
| Electrochemical | Lithium-Ion | LCO | kW range | ~10 h | 150–200 Wh/kg | <1000 kW/m³ | 100 ms | 500–1000 | 90–95 | Poor | Limited | Limited |
| Electrochemical | Lithium-Ion | LFP | 1 kW–32 MW | 10+ h | 90–180 Wh/kg | <1000 kW/m³ | 100 ms | 5000–8000+ | 93–98 | Good | Moderate | Suitable (HPS, HES preferred) |
| Electrochemical | Lithium-Ion | LNMC | 1 kW–10+ MW | 10+ h | 160–270 Wh/kg | <1500 kW/m³ | 100 ms | 1000–2000 | 90–95 | Moderate | Moderate | Conditional |
| Electrochemical | Lithium-Ion | LNCA | ≤2 MW | 10+ h | 200–260 Wh/kg | <1600 kW/m³ | 100 ms | 500–1000 | 90–95 | Poor | Moderate | Conditional |
| Electrochemical | Lithium-Ion | LTO | 1 kW–10 MW | 10+ h | 60–90 Wh/kg | <3000 kW/m³ | 100 ms | 10k+ | 85–90 | Good to Excellent | Good | Highly Suitable (HPS, HES preferred) |
| Electrochemical | Lithium-Ion | LMO | 1 kW–5 MW | 10+ h | 100–150 Wh/kg | <1600 kW/m³ | 100 ms | 300–750 | 90–95 | Poor | Limited | Conditional |
| Electrochemical | Lithium-Polymer | — | 10 kW–<10 MW | ≤4 h | 100–265 Wh/kg | <300 kW/m³ | 100 ms | 300–1000 | 90–95 | Poor | Limited | Conditional |
| Electrochemical | Sodium–Sulfur | — | 200 kW–50 MW | ≤8 h | 150–240 Wh/kg | 150–230 kW/m³ | 100 ms | 4500+ | 70–90 | Moderate | Limited | Conditional |
| Electrochemical | Sodium-Ion | — | 10 kW–<5 MW | <12 h | 100–160 Wh/kg | <50 kW/m³ | 100 ms | 3000–10k | 85–92 | Good | Limited | Conditional |
| Electrochemical | Sodium–Metal | — | 20 kW–5 MW | ≤6 h | 150–447 Wh/kg | <200 kW/m³ | s-level | 4500 | 88–98 | Moderate | Limited | Conditional |
| Electrochemical | Nickel–Cadmium | — | ≤20 MW | h range | 30–75 Wh/kg | 250–1000 kW/m³ | 100 ms | 1000–5000 | 60–80 | Moderate | Limited | Conditional |
| Electrochemical | Nickel–Metal Hydride | — | kW–1 MW | h range | 75–80 Wh/kg | <300 kW/m³ | 100 ms | 1000–5000 | 60–70 | Poor | Poor | Not suitable (low efficiency) |
| Electromagnetic | SMES | — | 1–100 MW | ≤1 min | 0.5–5 Wh/kg | >100,000 kW/m³ | ms-level | 10k–100k | 95–98 | Excellent | Excellent | Suitable (HPS only) |
| Mechanical | Flywheel | — | ≤20 MW | s–min | 5–50 Wh/kg | 1000–2000 kW/m³ | ms-level | ≥1M | 85–97 | Excellent | Good | Highly Suitable (HPS only) |
| Thermal | Sensible Heat | Chilled Water | 175 kW–90 MW | ≤48 h | 30–50 Wh/kg | 100–1000+ kW/m³ | min–h | 1000–10k+ | 70–90 | N/A | N/A | Conditional (cooling load) |

Note: time units: µs = microseconds; ms = milliseconds; s = seconds; min = minutes; h = hours. Micro-cycling tolerance, ramp-rate fit, and AI data center suitability are qualitatively assessed based on the AI-specific evaluation criteria (Section 2.2) and key performance metrics (Section 2.3), considering response time, cycling life and degradation, footprint, efficiency, and C-rate (for electrochemical systems).
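The suitability rankings in Table 2 follow from comparing each technology against a small set of quantitative thresholds. As a sketch, that screening logic can be expressed in Python; the representative values below are drawn from the table, while the threshold defaults (100 ms response, 5000 cycles, 85% efficiency) are illustrative assumptions rather than criteria fixed by this review.

```python
# Illustrative screening of ESS candidates for AI data center duty.
# Values are representative figures from Table 2; thresholds are assumptions.

CANDIDATES = {
    # name: (response_time_s, cycle_life, round_trip_efficiency_pct)
    "EDLC":      (0.001,   500_000, 94),
    "SMES":      (0.001,    50_000, 96),
    "Flywheel":  (0.001, 1_000_000, 90),
    "LTO":       (0.1,      10_000, 88),
    "LFP":       (0.1,       6_000, 95),
    "VRFB":      (0.1,      13_000, 70),
    "Lead-Acid": (0.1,       1_500, 80),
}

def screen(max_response_s=0.1, min_cycles=5_000, min_eff_pct=85):
    """Return technologies meeting all three thresholds, sorted by name."""
    return sorted(
        name for name, (rt, cyc, eff) in CANDIDATES.items()
        if rt <= max_response_s and cyc >= min_cycles and eff >= min_eff_pct
    )

print(screen())
# VRFB and lead-acid drop out on efficiency and cycle life respectively,
# mirroring the qualitative rankings in the table.
```

Tightening the cycle-life threshold (e.g. `min_cycles=20_000`) leaves only the high-power, essentially wear-free technologies (EDLC, flywheel, SMES), which is why they appear as HPS-only options in the table.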
Table 3. C-rates of Li-ion battery variants [54,55].
| Li-Ion Variant | C-Rate |
|---|---|
| Lithium Cobalt Oxide (LCO) | 0.5C–1C |
| Lithium Iron Phosphate (LFP) | 1C–10C; up to 25C discharge possible on some cells |
| Lithium Nickel Manganese Cobalt Oxide (LNMC) | 0.7C–1C (charge), 1C–2C (discharge) |
| Lithium Nickel Cobalt Aluminum Oxide (LNCA) | 0.1C–0.7C (charge), 1C+ (discharge) |
| Lithium Titanate (LTO) | 1C–10C (charge), 1C–20C (discharge), 30C (pulse, 5 s) |
| Lithium Manganese Oxide (LMO) | 0.7C–3C (charge), 1C–10C (discharge), 30C (pulse, 5 s) |
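By definition, a rate of nC charges or discharges the rated capacity in 1/n hours, so the figures in Table 3 translate directly into nominal durations and currents. A minimal sketch of that arithmetic (the 30 Ah cell capacity is a hypothetical value chosen only for illustration):

```python
def c_rate_duration_min(c_rate: float) -> float:
    """Nominal full charge/discharge time in minutes at a given C-rate.
    By definition, 1C moves the rated capacity in 1 hour, so nC
    corresponds to 60/n minutes (ignoring rate-dependent losses)."""
    return 60.0 / c_rate

def current_a(capacity_ah: float, c_rate: float) -> float:
    """Current in amperes drawn from a cell of given Ah capacity at nC."""
    return capacity_ah * c_rate

# An LTO cell at its upper continuous discharge rating of 20C (Table 3)
print(c_rate_duration_min(20))   # 3.0 minutes
# A hypothetical 30 Ah LTO cell during a 30C, 5 s pulse
print(current_a(30, 30))         # 900 A
```

This is why high-C-rate chemistries such as LTO suit the HPS role: a few minutes of rated duration at very high current matches the short, intense power excursions of AI training workloads.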
Table 4. Semi-quantitative power–energy partitioning for ESS hybridization in AI data centers.
| Layer | Power Share (%) | Energy Share (%) |
|---|---|---|
| HPS Layer | 91% [77]; 71.4% [78] | 31% [77]; 33% [78] |
| HES Layer | 9% [77]; 28.6% [78] | 69% [77]; 67% [78] |

Recommendation: HPS: power ≈ 80–90% of total system power, energy ≈ 25–33% of total capacity. HES: power ≈ 10–20% of total system power, energy ≈ 67–75% of total capacity.
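To make the partitioning concrete, the shares reported in [77] (HPS: 91% of power, 31% of energy; HES: the remainder) can be applied to a facility-scale HESS. The 10 MW / 5 MWh totals below are hypothetical values chosen only to make the arithmetic tangible, not figures from the cited studies.

```python
# Sketch of HPS/HES sizing from the power-energy shares reported in [77].
# Total system power/energy are illustrative assumptions.

def partition(total_power_mw, total_energy_mwh,
              hps_power_share=0.91, hps_energy_share=0.31):
    """Split total HESS power and energy between HPS and HES layers."""
    hps = (total_power_mw * hps_power_share,
           total_energy_mwh * hps_energy_share)
    hes = (total_power_mw - hps[0], total_energy_mwh - hps[1])
    return {"HPS": hps, "HES": hes}

sizing = partition(10.0, 5.0)
print(sizing)
# HPS: (9.1 MW, 1.55 MWh) -> roughly 10 min at full power, covering transients;
# HES: (0.9 MW, 3.45 MWh) -> hours of bulk energy at modest power.
```

The asymmetry is the point of hybridization: the HPS layer carries most of the power but a small fraction of the energy, so its short duration is acceptable, while the HES layer supplies the sustained baseload.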
Table 5. Brief overview of control strategies for HESS in applications that are relevant to AI data centers.
| HESS | Control Method | Objective | Transferability |
|---|---|---|---|
| BESS–SC [76] | Filtration-based | Extend BESS lifetime | Needs evaluation |
| BESS–SC [86] | Filtration-based | Prevent premature degradation of ESS | Needs evaluation |
| BESS–FESS [82] | Moving-average filtering and fuzzy logic | Minimize battery ramp rate and preserve flywheel SOC | Directly transferable |
| BESS–SC–Flywheel [87] | Hierarchical supervisory heuristic control with crossover filtering | Achieve power capping and cost reduction in data centers | Directly transferable |
| BESS–Hydrogen Fuel Cell [78] | Model predictive control (MPC) | Power dispatch | Directly transferable |
| BESS–TES [84] | Optimization-based dispatch | Support the power grid and earn revenue for the data center | Directly transferable |
| BESS–TES [85] | Optimization and model predictive control | Minimize operating cost, refine real-time charge–discharge operation, reduce SOC fluctuations, and improve stability in the data center | Directly transferable |
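The filtration-based methods in Table 5 share one core idea: split the load by frequency content, routing the slow component to the high-energy storage and the residual transients to the high-power storage. A minimal trailing moving-average sketch of that split (the window length and the synthetic load profile are illustrative assumptions, not parameters from the cited works):

```python
# Minimal sketch of filtration-based HESS power splitting, in the spirit of
# the moving-average approaches cited in Table 5: the low-pass component is
# assigned to the HES and the residual high-frequency component to the HPS.

from collections import deque

def split_load(load_kw, window=5):
    """Split a sampled load series into (hes, hps) reference series.
    hes[i] is the trailing moving average; hps[i] = load[i] - hes[i],
    so the two references always sum back to the original load."""
    buf, hes, hps = deque(maxlen=window), [], []
    for p in load_kw:
        buf.append(p)
        avg = sum(buf) / len(buf)
        hes.append(avg)
        hps.append(p - avg)
    return hes, hps

# A bursty AI-training load: steady 800 kW with a 1.2 MW excursion
load = [800, 800, 1200, 1200, 800, 800]
hes, hps = split_load(load, window=3)
# The HES reference stays smooth while the HPS reference absorbs the steps.
```

In practice the window (or filter cutoff) is tuned so the battery's ramp rate stays within its degradation limits, which is exactly the objective stated for the BESS–FESS scheme in [82].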

Share and Cite

MDPI and ACS Style

Rahman, S.; Khan, T.A. Energy Storage Systems for AI Data Centers: A Review of Technologies, Characteristics, and Applicability. Energies 2026, 19, 634. https://doi.org/10.3390/en19030634


