1. Introduction
The rapid proliferation of artificial intelligence (AI) has fundamentally altered the trajectory of digital infrastructure. As training and inference workloads scale, the focus of data center development has shifted from optimizing traditional enterprise IT to managing the physical, electrical, and regulatory demands of hyperscale computing. This transition raises significant challenges at the intersection of power systems engineering, infrastructure planning, and energy policy.
This paper presents a techno-economic framework that synthesizes four interrelated research domains—data center design, resource requirements, grid interconnection challenges, and behind-the-meter (BTM) microgrid solutions—to support early-stage data center development decisions. The framework evaluates how different BTM generation portfolios balance regulatory requirements and energy technology constraints against the market drivers of deployment speed and capital cost. The framework is applicable to any US wholesale electricity market facing large-load interconnection constraints. To ground the analysis in a specific regulatory and market context, the Electric Reliability Council of Texas (ERCOT) serves as the demonstration case, selected for its rapid data center expansion, evolving large-load regulations, and distinctive Connect-and-Manage interconnection approach. Unlike traditional operational optimization studies, this work adopts a scenario-based comparative approach, evaluating six distinct BTM configurations against a pure-grid baseline to characterize the trade-space across cost, resilience, and deployment timeline dimensions.
1.1. Background and Motivation
To contextualize the scale of this infrastructure challenge, it is useful to examine the geographic distribution of modern compute capacity. As illustrated in
Figure 1, hyperscale development within the United States has historically been concentrated in Tier 1 markets such as Northern Virginia and Silicon Valley. However, as power availability in these saturated markets diminishes, development is shifting toward deregulated, power-advantaged markets. The ERCOT footprint—specifically the Dallas–Fort Worth (DFW), Austin, San Antonio, and Houston metros—has emerged as a significant growth corridor. Recent industry analyses indicate that the DFW market alone has surpassed 1 gigawatt (GW) of operational capacity, with the broader North American pipeline exceeding 25 GW under construction [
1,
2].
This trend is not limited to the United States.
Figure 2 illustrates the global distribution of hyperscale cloud regions, highlighting the established FLAP-D markets in Europe (Frankfurt, London, Amsterdam, Paris, Dublin) and major Asia-Pacific hubs such as Tokyo and Singapore, many of which have also entered the 1 GW operational capacity tier [
2]. The concentration of North American capacity serves as the baseline for extrapolating future Zettascale AI compute demand analyzed in this study, further underscoring the relevance of BTM microgrids for bypassing congested regional transmission networks.
1.2. Literature Review
1.2.1. Data Center Design and Resource Requirements
The transition from cloud computing to AI-driven Zettascale compute has necessitated substantial changes in data center architecture. Historically, landmark studies [
3,
4] demonstrated that data centers maintained a relatively flat global energy footprint despite growing compute demand, owing to efficiency gains in hyperscale architecture and server utilization. However, those efficiency gains have largely been exhausted. Recent analyses indicate that modern GPU clusters—such as the NVIDIA NVL72—now exceed 100 kW per rack, pushing facilities toward gigawatt-scale footprints [
5,
6]. This power density is driving a shift in thermal management, with thermodynamic studies documenting the industry-wide transition from air cooling to direct-to-chip (DTC) and immersion liquid cooling systems to prevent thermal throttling [
7]. Additionally, modern hyperscale designs are increasingly adopting medium-voltage (MV) internal distribution to manage electrical loads while minimizing resistive losses [
8].
The environmental footprint of AI workloads is closely scrutinized in the contemporary literature. Generative AI training clusters consume up to eight times more energy than typical computing workloads, with global data center electricity consumption projected to surpass 1000 TWh by 2026 [
9,
10]. This energy demand is intertwined with water consumption: large-scale data centers can consume up to 5 million gallons of water daily for evaporative cooling [
11,
12]. The literature identifies a tension between Water Usage Effectiveness (WUE) and Power Usage Effectiveness (PUE): closed-loop liquid cooling can reduce WUE to near zero but increases PUE by requiring additional electricity for chillers [
7,
11]. Prior research emphasizes the value of co-optimizing distributed electricity and water technologies at both the community and facility levels to resolve these trade-offs [
13].
1.2.2. Grid Interconnection and Regulatory Challenges
As data centers continue to seek hundreds of megawatts of capacity, they are encountering the physical and regulatory limits of legacy utility grids. Across US wholesale electricity markets, the influx of large flexible loads has created significant transmission planning bottlenecks [
14,
15]. Studies mapping grid infrastructure reveal a high volume of speculative interconnection queues, forcing grid planners to risk overbuilding transmission lines for projects that may not materialize [
16]. These delays are not confined to any single market: interconnection timelines of 3–7 years are now common across PJM, MISO, and ERCOT, with some utilities reporting waits of up to 10 years [
17,
18].
Emerging research on load flexibility suggests that these delays may be partially addressable without new infrastructure. A recent analysis of 22 US balancing authorities found that 76–98 GW of new load could be integrated with only 0.25–0.5% annual curtailment, leveraging existing system headroom that results from intentional planning reserves [
17]. ERCOT alone was estimated to have 10 GW of curtailment-enabled headroom at 0.5% curtailment, comparable to PJM (18 GW) and MISO (15 GW). A complementary study demonstrated that a 500 MW data center using flexible grid connections paired with “bring-your-own-capacity” (BYOC) arrangements—in which the facility directly procures accredited capacity through on-site resources, clean energy PPAs, or virtual power plants—could reach full operation 3–5 years faster than traditional interconnection, with grid power available for more than 99% of all hours [
18]. These findings suggest that BTM generation serves not only as a resilience strategy but also as an enabling architecture for flexible interconnection agreements that can accelerate deployment across constrained markets.
Regulatory bodies and utilities are increasingly concerned about grid transient stability. The high-frequency oscillations and instantaneous power spikes of AI inference workloads require stringent Voltage Ride-Through (VRT) capabilities, prompting debates over cost-sharing and remote disconnection protocols [
19,
20]. A phenomenon described as the “self-preservation paradox” occurs when hyperscale facilities, equipped with sensitive Uninterruptible Power Supplies (UPSs), detect minor grid disturbances and automatically disconnect to protect IT loads, shifting instantaneously to backup generation [
21]. While this preserves the individual facility, the simultaneous disconnection of numerous hyperscale loads creates a large, near-instantaneous power surplus on the broader grid, driving over-frequency events.
This scenario has occurred multiple times within the Dominion Energy territory of Virginia (PJM Interconnection). In July 2024, a high-voltage line failure caused approximately 70 data centers to simultaneously withdraw from the grid, shedding roughly 1500 MW of load [
22,
23]. In February 2025, a similar malfunction caused another 40 data centers to drop approximately 1800 MW of demand [
22,
24]. As regional grids are saturated with these loads, such events demonstrate that utility interconnections present not only a capacity bottleneck but an active risk to regional transient stability. ERCOT has estimated that a simultaneous loss exceeding 2600 MW of demand could place the entire system at risk [
22].
Navigating these complex dynamics requires deep understanding of local market structures, as previously demonstrated in frameworks for deploying distributed energy assets within ERCOT’s competitive environment [
25]. Effective resolution also requires broader system analysis approaches to energy policy [
26] and targeted grid modernization strategies [
27].
1.2.3. Behind-the-Meter Resources and Microgrids
To circumvent multi-year interconnection delays and volatile grid tariffs, developers are increasingly adopting “Bring Your Own Power” (BYOP) models. The literature on BTM generation focuses on the deployment of distributed energy resources (DERs) to enhance facility resilience [
28]. Mixed-integer linear programming models demonstrate that sizing BTM energy storage alongside renewable generation can significantly reduce loss of load during grid emergencies [
29]. Decomposition techniques have proven effective for modeling the utility and operational dispatch of complex, multi-asset DER systems [
30]. Understanding battery degradation is essential to lowering the probabilistic operational costs of renewable-based microgrids [
31], and industry analyses highlight the role of dispatchable thermal assets in firming variable renewables [
32].
The growing secondary market of depreciated electric vehicle (EV) batteries, increasingly supported by global critical mineral optimization strategies, reverse logistics supply chains, and lithium circular economy frameworks [
33,
34,
35], offers a potential source of low-cost stationary storage for hybrid microgrid configurations. These shared automotive-to-stationary pathways have been studied extensively for their environmental implications [
36].
Importantly, the underlying microgrid coordination challenges examined in this study—including demand management, distributed control, and the integration of heterogeneous DERs behind a common bus—parallel those found in grid-connected buildings, where human-in-the-loop optimization approaches have demonstrated the value of hierarchical control strategies for balancing local generation and grid interaction [
19]. Furthermore, data centers exhibit characteristics of partially flexible loads capable of participating in demand response strategies and mitigating congestion through coordinated load management [
37].
1.3. Research Gap and Contributions
While the existing literature documents the escalating energy demands of AI data centers and the resulting regulatory challenges in markets like ERCOT, there remains a gap in techno-economic modeling that synthesizes these challenges into a unified analytical framework. Current studies on microgrids tend to isolate specific technologies—evaluating either battery resilience or thermal generation independently—without accounting for how geographic siting dictates both grid interconnection options and overall capital costs.
This paper addresses this gap by evaluating a multi-tiered, Hybrid BTM architecture and demonstrating how siting location influences the viability of different energy configurations. Specifically, it assesses the techno-economic feasibility (via LCOE) of pairing location-dependent baseload generation—such as Enhanced Geothermal Systems, which have been shown to be viable for re-energizing legacy infrastructure in specific geographic zones [
38]—with a dual-chemistry BESS. Importantly, this study is designed as a scenario-based comparative framework rather than a mathematical optimization model. Because several of the baseload technologies evaluated (particularly SMRs and EGSs for BTM data center applications) remain at early stages of commercial deployment, formulating a deterministic objective function would require rigid assumptions about uncertain future costs and operational parameters. Instead, the framework characterizes the trade-space across six fixed technology portfolios, enabling decision-makers to evaluate the competing priorities of cost, resilience, and technology readiness.
2. Data Center Infrastructure and AI Compute
The surge in AI inference and training workloads has created a divergence in data center architecture. Unlike traditional enterprise facilities, AI compute centers require extreme power densities, specialized structural engineering, and substantial data throughput [
19,
39,
40].
2.1. Architecture, Resource Demands, and Physical Constraints
As the industry moves through 2026, facilities are categorized by their compute philosophy: Enterprise (Reliability), Cloud (Elasticity), Inference (Latency), and Training (Throughput) [
41,
42,
43,
44,
45] as shown in
Table 1.
Due to extreme power densities, the limiting factor for facility design is no longer physical square footage but rather the utility power feed and thermal rejection capabilities. This creates a “stranded space” phenomenon, where physical floor area remains unused because electrical and cooling limits have been reached [
46,
47].
Table 2 consolidates the key physical constraints, resource demands, and networking profiles across data center typologies and
Table 3 provides a rack, compute, and bandwidth comparison across different MW scales.
Inference centers function as “token factories,” requiring substantial external egress (North–South traffic) to deliver generated content, in contrast to training centers that require high internal (East–West) fabric bandwidth for GPU synchronization [
1]. Simultaneously, the water required to cool these racks has become a permitting bottleneck, with open-loop evaporative cooling towers in ERCOT’s climate operating at a WUE of up to 2.0 L/kWh during summer months [
46,
47,
48].
2.2. AI Hardware Benchmarks and Zettascale Compute
The baseline unit of analysis for this framework is the NVIDIA GB200 NVL72 rack [
49]. Each rack draws a steady-state 125 kW and provides approximately 1.4 Exaflops of FP4 precision compute for inference tasks, requiring over 700 racks to reach the Zettaflop scale [
50]. Racks weigh approximately 3000 lbs, necessitating slab-on-grade flooring. Direct-to-chip liquid cooling is mandatory, reducing the closed-loop WUE below 0.1 L/kWh [
51].
To estimate operational output, standard compute scaling laws for transformer-based Mixture-of-Experts (MoE) models are applied. Generating a single token requires approximately
$$F_{\text{token}} \approx 2 N_{\text{active}}$$
floating-point operations (FLOPs), where $N_{\text{active}}$ is the number of active parameters [
41]. The total token throughput is:
$$T = \frac{C_{\text{facility}}}{F_{\text{token}}} = \frac{C_{\text{facility}}}{2 N_{\text{active}}}$$
Assuming an active parameter set of 30 billion, each token requires 60 billion FLOPs. A 100 MW facility achieving 1.1 Zettaflops of active compute can theoretically generate approximately 18.3 billion tokens per second.
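The rack-count and token-throughput arithmetic above can be reproduced with a short script. This is a back-of-envelope sketch: the 2·N_active FLOPs-per-token convention matches the surrounding text, and all variable names are illustrative.

```python
# Back-of-envelope checks for the Zettascale sizing in Section 2.2.
# Illustrative constants; rack figures follow the NVL72 values quoted above.

RACK_FLOPS = 1.4e18   # ~1.4 Exaflops (FP4) per NVL72 rack
ZETTAFLOP = 1.0e21    # 1 Zettaflop, in FLOP/s

def racks_for(target_flops: float) -> float:
    """Racks needed to reach a target aggregate compute level."""
    return target_flops / RACK_FLOPS

def flops_per_token(n_active: float) -> float:
    """Approximate FLOPs to generate one token: ~2 * active parameters."""
    return 2.0 * n_active

def tokens_per_second(facility_flops: float, n_active: float) -> float:
    """Aggregate token throughput for a facility delivering facility_flops FLOP/s."""
    return facility_flops / flops_per_token(n_active)

racks = racks_for(ZETTAFLOP)           # ~714 racks, i.e., "over 700"
tps = tokens_per_second(1.1e21, 30e9)  # ~1.83e10 tokens/s at 30 B active params
```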
3. Grid Interconnection Challenges: ERCOT Case Study
Interconnection bottlenecks affect all major US electricity markets, though the specific regulatory thresholds, study processes, and cost structures vary by region. To illustrate the regulatory landscape that motivates BTM generation, this section details the ERCOT interconnection framework. ERCOT is particularly instructive because Texas is experiencing among the highest levels of data center investment nationally and because ERCOT’s unique Connect-and-Manage (C&M) approach—which allows resources to enter the market as energy-only and manages them through proactive transmission planning—offers a distinctive regulatory context for evaluating BTM strategies [
52]. The fundamental challenge illustrated here—regulatory gates that impose multi-year delays on large loads—is common across US RTOs, including PJM, MISO, and SPP, each of which faces comparable or larger interconnection queues [
17,
53].
3.1. Regulatory Barriers: The 25 MW and 75 MW Thresholds
ERCOT has developed regulatory procedures to integrate large loads into the service area [
54,
55,
56,
57], some in response to Texas Senate Bill 6 (SB 6), which requires loads above 75 MW to undergo the Batch Study process [
58,
59]. This is shown in
Table 4 and explained in the bullets below:
The 25 MW Threshold (PGRR 115): Any load exceeding 25 MW must submit dynamic modeling (PSCAD) to ERCOT to ensure voltage stability. Loads below this threshold face minimal oversight.
The 75 MW Threshold (TX SB 6 Large Load): Texas Senate Bill 6 defines 75 MW as a “Large Load,” mandating a Batch Study process and installation of remote-disconnection hardware allowing ERCOT to shed the load during grid emergencies.
3.2. High-Voltage Trade-Offs: 138 kV vs. 345 kV
A 50 MW load optimally fits a 138 kV interconnection, utilizing 20–30% of a circuit’s capacity with a substation CapEx of
$10–
$15 M. Loads exceeding 100 MW generally require a 345 kV connection, governed by strict proximity rules [
54,
56,
57]. A recent example is the 830 MW Galaxy Digital data center campus, which cleared the interconnection process [
60].
Table 5 summarizes the cost structure.
4. Methodology
This section presents the analytical framework used to evaluate the six BTM generation scenarios. The methodology proceeds through five stages, as illustrated in
Figure 3: (1) specification of the BTM generation architecture and technology options; (2) definition of deployment scenarios; (3) stochastic demand modeling via Monte Carlo simulation; (4) hourly dispatch simulation over a synthetic operating year; and (5) techno-economic and resilience evaluation using LCOE and ALOLP metrics.
4.1. BTM Generation Architecture
To mitigate ERCOT’s interconnection constraints and reduce exposure to macro-grid outage risk, the framework evaluates a portfolio of BTM generation technologies, each serving a distinct operational role [
61].
4.1.1. Dispatchable Thermal Generation
Reciprocating Internal Combustion Engines (RICEs), such as the Wärtsilä 31SG, provide long-duration firming and operate on medium-pressure distribution lines, bypassing the need for gas booster compressors required by aeroderivative turbines [
62].
Table 6 compares fuel pressure requirements. Modular microturbines provide scalable power blocks with low NOx emissions and rapid ramp rates [
63].
4.1.2. Baseload Generation
Enhanced Geothermal Systems (EGSs) provide continuous carbon-free power with capacity factors consistently exceeding 90% [
38,
64,
65,
66]. Small Modular Reactors (SMRs) offer high energy density and zero operational carbon emissions, serving as a firm baseload option [
67,
68]. Utility-scale Solar/Wind Power Purchase Agreements (PPAs) provide primary annual energy volume and reduce carbon intensity.
4.1.3. Tiered-Performance Hybrid BESS
Rather than relying on monolithic lithium-ion installations, the framework specifies an AC-coupled hybrid architecture in which both storage tiers connect at a medium-voltage (MV) collector bus. The architecture pairs two complementary tiers:
“Sprinter” (High C-Rate Primary Unit): Tier-1 OEM batteries (e.g., Tesla Megapacks) equipped with grid-forming inverters. These units operate at a high C-rate (1C to 2C), handling millisecond-level frequency regulation, active filtering, and instantaneous power spikes from AI inference bursts.
“Marathoner” (Low-C-Rate Secondary Unit): Repurposed second-life EV batteries, which retain substantial volumetric capacity but have degraded high-discharge capability. Operating at a low C-rate (0.1 C to 0.25 C), these banks are paired with commercial off-the-shelf (COTS) solar inverters to reduce CapEx.
The facility’s Energy Management System (EMS) monitors the State of Charge (SoC) of the Sprinter blocks. As the Sprinters discharge to meet transient spikes, the EMS commands the Marathoner bank to discharge a steady, low-amperage baseline across the MV bus, effectively replenishing the primary frequency-regulation assets using the repurposed EV batteries as a bulk energy reserve. This AC-coupled separation enables the EMS to orchestrate two different battery chemistries and inverter classes independently.
The Hybrid BESS fulfills four operational functions: (1) absorbing mechanical ramp-rate transients during the 5–10 min required for thermal generators to spool up; (2) peak shaving during diurnal demand spikes to avoid over-provisioning mechanical generation; (3) solar ride-through during cloud-cover events that can reduce PPA output by up to 80 MW; and (4) long-duration energy storage during macro-grid failures, where the EV battery bank sustains operations while the Sprinter units maintain frequency regulation.
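The Sprinter/Marathoner coordination described above can be sketched as a simple EMS control step. This is a minimal illustration under stated assumptions: the capacities, C-rate limits, SoC threshold, and time-step are illustrative, and round-trip losses are ignored.

```python
# Minimal sketch of the two-tier EMS logic: the Sprinter serves transients,
# and the Marathoner replenishes it at a steady low rate. Illustrative only.

from dataclasses import dataclass

@dataclass
class BatteryTier:
    capacity_mwh: float
    max_rate_mw: float   # discharge limit implied by the tier's C-rate
    soc_mwh: float       # current state of charge

    def discharge(self, request_mw: float, hours: float) -> float:
        """Discharge up to the power and energy limits; returns MW delivered."""
        mw = min(request_mw, self.max_rate_mw, self.soc_mwh / hours)
        self.soc_mwh -= mw * hours
        return mw

def ems_step(sprinter, marathoner, transient_mw, hours=1 / 60):
    """One EMS interval: the Sprinter serves the transient spike; if its SoC
    drops below 50%, the Marathoner discharges its steady baseline across the
    MV bus to replenish it (round-trip losses ignored in this sketch)."""
    served = sprinter.discharge(transient_mw, hours)
    if sprinter.soc_mwh < 0.5 * sprinter.capacity_mwh:
        transfer = marathoner.discharge(marathoner.max_rate_mw, hours)
        sprinter.soc_mwh = min(sprinter.capacity_mwh,
                               sprinter.soc_mwh + transfer * hours)
    return served

# Scenario 1 split: 30 MWh Tier-1 unit at 2C, 120 MWh second-life bank at 0.25C
sprinter = BatteryTier(capacity_mwh=30.0, max_rate_mw=60.0, soc_mwh=10.0)
marathoner = BatteryTier(capacity_mwh=120.0, max_rate_mw=30.0, soc_mwh=120.0)
served = ems_step(sprinter, marathoner, transient_mw=45.0)
```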
Table 7 summarizes the generation stack balancing strategy.
4.2. Scenario Definitions
Deployment is phased to avoid triggering intensive interconnection study requirements. In the ERCOT case study, the phases are structured as follows: Phase 1 (25 MW pilot), Phase 2 (100 MW, divided into Phase 2a and 2b to remain below the 75 MW SB 6 large-load threshold), and Phase 3 (250 MW hyperscale). While the specific regulatory breakpoints differ across RTOs, the underlying strategy—phasing load additions to remain below applicable thresholds—is transferable to other markets. Six BTM configurations are evaluated against a pure-grid baseline, as summarized in
Table 8.
4.3. Stochastic Demand Modeling
To capture the volatile nature of AI inference workloads, the facility’s power demand is modeled stochastically rather than deterministically. At any time $t$, total facility power demand $P(t)$ is bounded by the physical infrastructure capacity of the active phase and is modeled by mapping a dynamic utilization coefficient $u(t)$ to the data center’s physical constraints:
$$P(t) = u(t) \cdot P_{\text{cap}}$$
where $P_{\text{cap}}$ is the rated capacity of the active phase. The simulation utilizes a Monte Carlo approach ($N$ iterations) driven by a mean-reverting stochastic process. The simulated utilization at time $t$ is defined as:
$$u(t) = u_{\text{base}}(t) + \epsilon(t)$$
where $u_{\text{base}}(t)$ is a deterministic diurnal baseline and $\epsilon(t)$ is a stochastic noise parameter. Three demand scenarios are formulated with distinct $u_{\text{base}}$ distributions:
Normal Day (Diurnal Human Interaction): A bimodal curve using superposition of two sine waves to create primary morning and secondary evening peaks.
High Usage/Viral Event (Spike): An aggressively skewed unimodal curve sustaining heavy daytime load.
Weekend/Agentic Batch Load: A low-amplitude, low-frequency wave anchoring utilization between 20% and 40%.
Because the compute workload scales linearly with the physical infrastructure of each phase, the stochastic demand profile is normalized as a percentage of total facility capacity. The resulting distribution applies to the 25 MW, 100 MW, and 250 MW operating phases.
Figure 4 presents the Monte Carlo simulation output, and
Table 9 summarizes the key statistics.
The stochastic nature of these demand profiles validates the need for highly responsive generation assets (microturbines, RICEs, BESS) to fill the gap between fixed baseload output and stochastic peak demand.
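The mean-reverting utilization model of this section can be sketched as follows. The diurnal baseline shape, reversion strength, and noise scale used here are illustrative stand-ins, not the paper's calibrated parameters.

```python
# Sketch of the stochastic demand model: a diurnal baseline plus
# mean-reverting (Ornstein-Uhlenbeck-style) noise, clipped to [0, 1].

import math
import random

def u_base_normal_day(hour: float) -> float:
    """Bimodal diurnal baseline from two half-sine bumps (assumed form):
    a primary morning/midday peak and a secondary evening peak."""
    morning = 0.25 * max(0.0, math.sin(math.pi * (hour - 6) / 12))
    evening = 0.15 * max(0.0, math.sin(math.pi * (hour - 14) / 12))
    return 0.4 + morning + evening

def simulate_utilization(hours=24, theta=0.3, sigma=0.05, seed=1):
    """One Monte Carlo path of u(t) = u_base(t) + eps(t), where eps reverts
    toward zero at rate theta and is perturbed by Gaussian noise."""
    rng = random.Random(seed)
    eps, path = 0.0, []
    for h in range(hours):
        eps += -theta * eps + sigma * rng.gauss(0.0, 1.0)  # mean reversion
        path.append(min(1.0, max(0.0, u_base_normal_day(h) + eps)))
    return path

def facility_demand_mw(u, capacity_mw=250.0):
    """Map normalized utilization to MW for the active phase capacity."""
    return [ui * capacity_mw for ui in u]

path = simulate_utilization()
demand = facility_demand_mw(path)
```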
4.4. Dispatch Simulation Framework
The stochastic dispatch simulation was performed on an hourly time-step over a representative 8760 h synthetic year derived from the Monte Carlo demand model. A merit-order dispatch logic was applied, prioritizing zero-marginal-cost baseload generation first, followed by variable renewables, and finishing with dispatchable fuel-based or battery peaking assets. Ramp-rate constraints consistent with published technical specifications were imposed: RICEs ramp at 10–15% of rated output per minute (effectively fully flexible at hourly resolution); microturbines ramp near-instantaneously within the hourly resolution; and geothermal systems and SMRs are treated as fixed baseload assets. Battery round-trip efficiency was assumed to be 88%, with a maximum 4 h discharge duration and state-of-charge constraints enforced within the dispatch framework.
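The merit-order logic for a single hour can be sketched as a walk down an ordered stack. The asset names and available capacities below are illustrative; actual runs would use the scenario-specific stacks and enforce the ramp and SoC constraints described above.

```python
# One-hour merit-order dispatch sketch: serve demand from the cheapest
# (zero-marginal-cost) assets first, leaving peakers for the residual.

def dispatch_hour(demand_mw, stack):
    """Walk the merit-ordered (name, available_mw) stack.
    Returns a per-asset dispatch dict and any unserved MW."""
    dispatch, remaining = {}, demand_mw
    for name, available_mw in stack:
        take = min(available_mw, remaining)
        dispatch[name] = take
        remaining -= take
    return dispatch, remaining

# Baseload first, variable renewables next, dispatchable peakers last.
stack = [
    ("geothermal", 120.0),  # fixed baseload
    ("solar_ppa", 80.0),    # variable renewable
    ("rice", 60.0),         # fuel-based peaker
    ("bess", 25.0),         # battery, subject to SoC limits in the full model
]
dispatch, unserved = dispatch_hour(210.0, stack)
```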
The total MWh capacity of the Hybrid BESS in each scenario is bifurcated into a 1:4 tiered-performance ratio between high-performance primary storage (Sprinter) and bulk secondary storage (Marathoner). The specific allocations are as follows: Scenarios 1 and 6 at 150 MWh total (30 MWh Tier-1 + 120 MWh EV); Scenario 2 at 175 MWh total (35 MWh + 140 MWh); and Scenario 4 at 125 MWh total (25 MWh + 100 MWh).
4.5. LCOE Formulation
The Levelized Cost of Energy (LCOE) represents the per-megawatt-hour cost of building and operating a generating plant over its assumed financial life [
69]. It is calculated as follows:
$$\mathrm{LCOE} = \frac{\sum_{t=1}^{n} \dfrac{I_t + M_t + F_t}{(1+r)^t}}{\sum_{t=1}^{n} \dfrac{E_t}{(1+r)^t}}$$
where $I_t$ represents investment expenditures (CapEx), $M_t$ operations and maintenance expenditures, $F_t$ fuel expenditures, and $E_t$ electrical energy generated in year $t$. The discount rate $r$ is assumed to be 8% for industrial energy projects, and $n$ is the expected system lifespan.
To determine the blended LCOE for Phase 3 (250 MW), the sum product of each technology’s individual LCOE and its proportional generation output is computed from the stochastic dispatch simulation results.
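The LCOE and blended-LCOE calculations can be sketched as below. The cost and generation inputs are placeholders, not the paper's calibrated values, and CapEx is simplified to an overnight year-0 cost.

```python
# LCOE sketch matching the standard discounted-cost-over-discounted-energy
# formulation, plus the generation-weighted blend used for Phase 3.

def lcoe(capex, annual_om, annual_fuel, annual_mwh, r=0.08, n=20):
    """$/MWh: discounted lifetime cost over discounted lifetime energy.
    CapEx is treated as a single overnight cost in this sketch."""
    pv_cost = capex + sum((annual_om + annual_fuel) / (1 + r) ** t
                          for t in range(1, n + 1))
    pv_energy = sum(annual_mwh / (1 + r) ** t for t in range(1, n + 1))
    return pv_cost / pv_energy

def blended_lcoe(tech_lcoe, tech_mwh):
    """Sum-product of each technology's LCOE and its share of dispatched MWh."""
    total = sum(tech_mwh.values())
    return sum(tech_lcoe[k] * tech_mwh[k] for k in tech_lcoe) / total

# Illustrative two-technology blend (placeholder $/MWh and MWh values)
blend = blended_lcoe({"geothermal": 68.0, "solar_ppa": 35.0},
                     {"geothermal": 1_000_000.0, "solar_ppa": 500_000.0})
```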
Commodity fuels are converted to electrical fuel cost using the thermodynamic heat rate:
$$C_{\text{fuel}} = p_{\text{fuel}} \cdot HR$$
where $p_{\text{fuel}}$ is the commodity price ($/MMBtu) and $HR$ is the heat rate (MMBtu/MWh). RICEs operate at approximately 8.5 MMBtu/MWh, yielding
$29.75/MWh at
$3.50/MMBtu [
62]. Microturbines operate at roughly 11.5 MMBtu/MWh, resulting in
$40.25/MWh [
63]. Light-water SMRs operate at approximately 10.4 MMBtu/MWh; at a uranium cost of
$0.80/MMBtu, the fuel cost is
$8.32/MWh [
67]. Natural gas pricing is based on a Henry Hub forward strip of
$3.50–
$4.50/MMBtu for 2026–2030 [
70]. Geothermal LCOE assumes drilling and completion costs aligned with median projections from the 2025 NREL Geothermal Report [
66].
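The heat-rate conversion reproduces the quoted fuel costs directly:

```python
# Heat-rate fuel-cost conversion: $/MWh = commodity price ($/MMBtu) * heat
# rate (MMBtu/MWh). Reproduces the RICE, microturbine, and SMR figures above.

def fuel_cost_per_mwh(price_per_mmbtu: float, heat_rate_mmbtu_per_mwh: float) -> float:
    """Electrical fuel cost in $/MWh."""
    return price_per_mmbtu * heat_rate_mmbtu_per_mwh

rice = fuel_cost_per_mwh(3.50, 8.5)           # $29.75/MWh
microturbine = fuel_cost_per_mwh(3.50, 11.5)  # $40.25/MWh
smr = fuel_cost_per_mwh(0.80, 10.4)           # $8.32/MWh
```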
Additionally, the Operational Capacity Factor (CF) quantifies the ratio of actual dispatched energy to maximum theoretical output for each physical BTM asset:
$$\mathrm{CF} = \frac{\sum_{t=1}^{8760} E(t)}{P_{\text{rated}} \cdot 8760}$$
where $E(t)$ is the projected energy produced during hour $t$ and $P_{\text{rated}}$ is the rated capacity. The resilience reserve (RR) represents the fraction of nameplate capacity held in reserve to satisfy Uptime Tier IV continuous power mandates:
$$\mathrm{RR} = 1 - \mathrm{CF}$$
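The CF and RR metrics can be computed directly from hourly dispatch data. The flat 250 MW load against a 300 MW EGS below is an illustrative case corresponding to the Scenario 3 sizing discussed in the results.

```python
# Capacity factor and resilience reserve from hourly dispatch data.

def capacity_factor(hourly_mwh, rated_mw):
    """Dispatched energy divided by maximum theoretical output."""
    return sum(hourly_mwh) / (rated_mw * len(hourly_mwh))

def resilience_reserve(cf):
    """Fraction of nameplate held back for Tier IV continuity (RR = 1 - CF)."""
    return 1.0 - cf

# Example: 300 MW EGS serving a flat 250 MW load for a full 8760 h year
cf = capacity_factor([250.0] * 8760, rated_mw=300.0)  # ~0.8333 (83.33%)
rr = resilience_reserve(cf)                           # ~0.1667 (16.67%)
```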
4.6. ALOLP Formulation
To quantify the operational resilience provided by BTM architectures, this study adapts the traditional Loss of Load Probability (LOLP) metric [
71]. For a data center operating a hybrid microgrid, the site-level LOLP is calculated as the joint probability that the critical IT load ($L_{\text{crit}}(t)$) exceeds the combined available capacity of the macro-grid ($C_{\text{grid}}(t)$) and localized BTM assets ($C_{\text{BTM}}(t)$) during any hour $t$:
$$\mathrm{LOLP}_{\text{site}} = \Pr\left[ L_{\text{crit}}(t) > C_{\text{grid}}(t) + C_{\text{BTM}}(t) \right]$$
The Avoided Loss of Load Probability (ALOLP) isolates the resilience contribution of the BTM microgrid by calculating the difference between the baseline macro-grid risk and the mitigated site-level risk:
$$\mathrm{ALOLP} = \mathrm{LOLP}_{\text{grid}} - \mathrm{LOLP}_{\text{site}}$$
A higher ALOLP indicates a greater reduction in outage exposure. Because Uptime Tier IV compliance requires near-continuous operation (99.995% availability), any event where $L_{\text{crit}}(t) > C_{\text{grid}}(t) + C_{\text{BTM}}(t)$ represents a potential operational failure.
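An hourly Monte Carlo estimate of ALOLP can be sketched as follows. The outage probability and capacity values are illustrative placeholders; the paper's reported values come from the full stochastic dispatch model.

```python
# Monte Carlo ALOLP sketch: empirical shortfall frequency with and without
# the BTM stack, over a synthetic 8760 h year. Illustrative inputs only.

import random

def lolp(load_mw, grid_mw, btm_mw):
    """Empirical LOLP: fraction of hours where load exceeds available capacity."""
    shortfalls = sum(1 for L, g, b in zip(load_mw, grid_mw, btm_mw) if L > g + b)
    return shortfalls / len(load_mw)

def alolp(load_mw, grid_mw, btm_mw):
    """Baseline grid-only risk minus mitigated site-level risk."""
    baseline = lolp(load_mw, grid_mw, [0.0] * len(load_mw))
    mitigated = lolp(load_mw, grid_mw, btm_mw)
    return baseline - mitigated

rng = random.Random(0)
hours = 8760
load = [250.0] * hours                                              # flat critical load
grid = [0.0 if rng.random() < 0.01 else 300.0 for _ in range(hours)]  # rare outages
btm = [260.0] * hours                                               # firm BTM above peak
improvement = alolp(load, grid, btm)
```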
4.7. Key Modeling Assumptions
Table 10 consolidates the principal modeling assumptions underlying the techno-economic and resilience evaluations.
5. Results
This section presents the outputs of the dispatch simulation, financial evaluation, and resilience analysis for the six BTM scenarios at Phase 3 (250 MW).
5.1. Generation Dispatch Profiles
Figure 5 illustrates the 24 h generation dispatch profiles for each scenario, utilizing the stacked merit-order dispatch logic described in
Section 4.4.
Baseline (Pure Grid): The entire 250 MW load is drawn from the ERCOT 345 kV backbone, with the data center load following the diurnal demand curve directly.
Scenario 1 (RICE + Battery + Grid): A 345 kV grid connection provides a firm 100 MW anchor. RICEs load-follow the diurnal variance above 100 MW, while the BESS handles the steepest transient peaks.
Scenario 2 (Solar PPA + RICE + Battery + Grid): A sleeved utility-scale Solar PPA provides a parabolic daylight curve. The grid link is capped at 10 MW. RICEs provide bulk power outside solar hours, while the BESS discharges during the rapid evening ramp.
Scenario 3 (100% Geothermal): The EGS provides a firm 250 MW baseload that closely matches stochastic demand. The grid connection is present but relegated to cold standby.
Scenario 4 (Geothermal + PPA + Grid): The geothermal system provides a 120 MW flat anchor. The Solar PPA contributes up to 80 MW of daytime peaking power. The ERCOT grid and BESS fill the remaining dynamic gaps.
Scenario 5 (SMR + PPA + Grid): An SMR provides a steady 180 MW zero-carbon baseload. A Solar PPA handles daytime surges, and the grid accommodates the remaining stochastic variation.
Scenario 6 (Geothermal + Microturbines + Grid): The geothermal system provides a 100 MW floor, supported by a 50 MW grid link. Modular microturbines handle the remaining volatile demand, leveraging their rapid ramp rates to match stochastic inference spikes.
5.2. Financial Analysis
5.2.1. Technology-Level Cost Assumptions
Baseline capital expenditures, O&M costs, and fuel price trajectories were derived from the NREL Annual Technology Baseline (ATB) [
70].
Table 11 shows the installed generation capacity by scenario and
Table 12 presents the component-level assumptions and resulting calculated LCOE for each technology.
5.2.2. Asset Utilization and the Cost of Resilience
In traditional utility economics, LCOE penalizes assets with low capacity factors. However, evaluating BTM microgrids for Tier IV data centers requires recognizing that unutilized generation capacity functions as a quantified insurance policy for maintaining operational continuity.
Table 13 presents the dispatch-derived utilization metrics for each physical BTM asset.
As shown in
Figure 6, baseload technologies deployed to cover the primary load—such as EGS in Scenario 3—achieve high utilization rates (83.33% CF). Because the geothermal system is sized at 300 MW to serve the 250 MW load, the remaining 16.67% is dedicated to fault tolerance (N + 1 redundancy). Conversely, dispatchable thermal assets and the Hybrid BESS operate at lower capacity factors. The BESS, designed primarily to bridge 10–15 s mechanical startup transients and ride through ERCOT frequency deviations, achieves only 7.50% operational CF, with its remaining 92.50% functioning as a resilience reserve. The higher isolated LCOE associated with these low-CF assets represents the financial premium for continuous operational resilience.
5.2.3. Blended LCOE Scenario
Applying the technology-level costs to the stochastic dispatch profiles yields the blended cost of electricity for each scenario at 250 MW, as presented in
Table 14.
5.3. Resilience Analysis
The ALOLP was calculated for each scenario to quantify the probability that the BTM infrastructure can sustain the 250 MW Phase 3 critical IT load during a prolonged macro-grid failure without violating Uptime Tier IV continuous power requirements as shown in
Table 15. The ALOLP is influenced by the combination of firm baseload capacity, Hybrid BESS duration, and reliance on external fuel supply chains.
Scenario 3 (100% Geothermal) achieves the highest ALOLP (>99.9%) under the modeled conditions (see
Table 10), owing to its independence from both the macro-grid and external fuel pipelines. Scenarios 1 and 4, despite utilizing the Hybrid BESS for transient smoothing, exhibit lower ALOLP scores because their maximum dispatchable BTM generation falls short of the 250 MW peak requirement (
Table 11), necessitating compute load-shedding during grid outages. These ALOLP values are conditional on the assumptions regarding outage behavior, BESS bridging performance, and fuel availability specified in
Section 4.7 and should be interpreted as scenario-dependent rather than universally predictive. When considered alongside the LCOE results in
Table 14, the trade-space between cost and resilience becomes apparent: the lowest-LCOE configuration (S4,
$64.50/MWh) achieves only 58.0% ALOLP, while the highest-resilience configuration (S3, >99.9% ALOLP) carries a moderate LCOE of
$68.00/MWh.
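The ALOLP concept can be sketched as a Monte Carlo survival frequency: the fraction of simulated outages in which firm BTM generation, with BESS bridging, sustains the full critical load. Every parameter below (the outage-duration range, the availability derate, and the bridging rule) is an illustrative assumption, not the Section 4.7 model:

```python
import random

# Illustrative Monte Carlo sketch of an ALOLP-style availability metric.
# All distributions and parameters are assumptions for demonstration only.

def simulate_alolp(firm_mw, bess_mw, bess_mwh, load_mw,
                   n_trials=10_000, seed=42):
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_trials):
        outage_h = rng.uniform(1.0, 12.0)   # assumed outage duration range
        derate = rng.uniform(0.90, 1.0)     # assumed generator availability
        shortfall_mw = max(0.0, load_mw - firm_mw * derate)
        if shortfall_mw == 0.0:
            survived += 1                   # firm capacity alone covers the load
        elif bess_mw >= shortfall_mw and bess_mwh / shortfall_mw >= outage_h:
            survived += 1                   # BESS bridges the deficit in full
    return survived / n_trials

# 300 MW of firm baseload vs. a 250 MW load: no shortfall ever occurs under
# the assumed derate range, so the toy estimate is 1.0.
print(simulate_alolp(300.0, 50.0, 100.0, 250.0))
```

The sketch reproduces the qualitative pattern in the results: oversized firm baseload (Scenario 3) survives essentially all trials, while portfolios whose dispatchable capacity falls short of the peak load fail whenever the BESS energy cannot span the outage.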
The redundancy topologies also differ by scenario class. RICEs and microturbines (Scenarios 1, 2, and 6) are inherently modular—a 150 MW RICE installation comprises roughly ten 15 MW engines, enabling N + 2 architecture where the BESS covers any individual engine failure. Baseload-anchored scenarios (3, 4, and 5) achieve redundancy by maintaining the ERCOT grid connection in hot standby, with the BESS riding through the frequency transient during an automated transfer. Maintenance profiles vary accordingly: RICEs require periodic 48 h service windows managed through engine cycling; microturbines benefit from extended service intervals due to air-bearing designs; the EGS requires redundant dual-well loops for descaling; and SMRs require multi-week refueling outages every 24–60 months, during which the facility reverts to grid-only operation.
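The modular sizing arithmetic above can be sketched under one common reading of "N + k" redundancy (k spare units beyond those needed to meet firm load). The 15 MW unit size follows the RICE example in the text; the helper itself is hypothetical:

```python
import math

# Hypothetical modular fleet sizer, assuming "N + k" means k spare units
# beyond those required to cover the firm load. Unit size follows the
# 15 MW RICE example in the text.

def modular_fleet(required_mw: float, unit_mw: float, spares: int) -> tuple[int, float]:
    """Return (units installed, MW installed) for an N + `spares` fleet."""
    needed = math.ceil(required_mw / unit_mw)
    total = needed + spares
    return total, total * unit_mw

units, installed_mw = modular_fleet(150.0, 15.0, spares=2)
print(f"{units} x 15 MW engines ({installed_mw:.0f} MW) for 150 MW firm, N + 2")
```

Modularity is what makes this redundancy cheap: each increment of protection costs one 15 MW engine rather than a duplicate of the entire plant.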
6. Discussion
The evaluation of six distinct microgrid scenarios reveals that no single technology portfolio optimally satisfies all deployment objectives; rather, developers must navigate a trade-space balancing LCOE, ALOLP, and deployment speed. While pure utility interconnection and sleeved PPAs offer competitive baseline costs (e.g., $35.00/MWh for solar), they expose critical IT loads to multi-year regulatory interconnection queues and weather-driven macro-grid vulnerabilities, yielding ALOLP metrics that may be insufficient for Tier IV facilities.
Integrating dispatchable BTM thermal assets—such as RICEs or microturbines—accelerates time-to-first-compute by circumventing large-load grid studies. When paired with the dual-chemistry BESS, these assets provide effective transient protection. However, this speed comes at the cost of higher continuous fuel dependency and localized emissions.
For operational resilience, zero-carbon baseload technologies demonstrated the highest ALOLP values (>99.9% for the EGS and 72.0% for the SMR, the latter limited by sizing constraints). By decoupling the facility from external fuel pipelines and the macro-grid, these technologies represent a pathway toward sustainable hyperscale infrastructure. The financial analysis further indicates that the capital costs associated with BTM storage can be mitigated by incorporating depreciated, second-life EV batteries into the BESS architecture for bulk energy shifting.
6.1. Grid-Interactive Operations and Flexible Interconnection
Beyond securing on-site power, BTM-equipped data centers have the potential to contribute to regional grid stability rather than undermine it. As documented in
Section 1.2.2, the “self-preservation paradox” has resulted in load-rejection events exceeding 1500 MW in concentrated data center markets, posing significant risks to grid frequency regulation. A facility operating with robust BTM generation can mitigate this risk by maintaining controlled, predictable ramp rates on its grid interconnection—gradually adjusting its grid draw rather than executing abrupt disconnections during minor disturbances. This managed load behavior reduces the probability of cascading over-frequency events and positions the data center as a stabilizing grid participant.
This operational model aligns with the emerging flexible interconnection paradigm documented in the recent literature. Norris et al. [
17] demonstrated that across 22 US balancing authorities, the average curtailment duration during system-stress hours is only 1.7–2.5 h, with nearly 90% of curtailed hours retaining at least 50% of the new load. The BTM generation portfolios evaluated in this study—particularly the tiered-performance BESS, which is designed to bridge transient events of precisely this duration—represent the physical infrastructure required to implement such flexible arrangements. In this context, the ALOLP metric can be interpreted as quantifying the facility’s capacity to bridge targeted grid curtailment windows while maintaining Tier IV reliability at the compute layer, rather than as a measure of indefinite islanding capability.
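A back-of-the-envelope feasibility check for this bridging role: given a curtailment window and a retained-load fraction, a BESS can cover the curtailed share of load only if both its power and energy ratings suffice. The helper and the sizing numbers below are illustrative assumptions, not the study's BESS specification:

```python
# Hedged sketch: can a BESS bridge a grid curtailment window? Assumes the
# grid continues serving the retained fraction of load and storage covers
# only the curtailed remainder. All sizing numbers are illustrative.

def can_bridge(bess_mw: float, bess_mwh: float, load_mw: float,
               retained_fraction: float, window_h: float) -> bool:
    curtailed_mw = load_mw * (1.0 - retained_fraction)
    if curtailed_mw == 0.0:
        return True
    power_ok = bess_mw >= curtailed_mw
    energy_ok = bess_mwh / curtailed_mw >= window_h
    return power_ok and energy_ok

# A 150 MW / 400 MWh BESS against a 2.5 h curtailment of a 250 MW load
# with 50% of load retained (the upper end of the Norris et al. range):
print(can_bridge(150.0, 400.0, 250.0, 0.5, 2.5))  # True
```

This is why relatively modest storage durations suffice for flexible interconnection: the BESS need only span hours-long partial curtailments, not indefinite islanding.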
The Princeton ZERO Lab and Camus Energy study [
18] further demonstrated that flexible grid connections paired with BYOC arrangements can reduce the net cost increase to other ratepayers to near zero, with the data center contributing approximately
$733 million per GW toward incremental system costs through a combination of direct capacity procurement and energy payments. During early deployment phases, where BTM generation capacity may temporarily exceed IT load as infrastructure is built ahead of server installation, the facility could also function as a generation asset, exporting surplus power to the wholesale market. Recent research on demand response strategies for flexible loads has further demonstrated the potential for large electricity consumers to participate in congestion mitigation through coordinated load management [
37]. While the operational requirements of Tier IV data centers limit their participation in traditional demand curtailment programs, the combination of BTM generation and intelligent grid-parallel controls enables a form of grid interaction that prioritizes load stability and controlled ramping over the hard disconnection behaviors that currently threaten regional grids.
6.2. Limitations
Several limitations of this analysis should be acknowledged. First, as a scenario-based comparative framework, the study evaluates fixed technology portfolios rather than optimizing asset sizing or dispatch through a formal objective function. This design choice reflects the speculative nature of costs for emerging technologies (particularly SMRs and EGS in BTM data center applications), where deterministic optimization would produce precise but potentially misleading results given current cost uncertainty.
Second, the reported ALOLP values are conditional on the modeling assumptions specified in
Table 10, including BESS bridging performance, fuel availability during islanding, and grid outage characteristics. These results should therefore be interpreted as scenario-dependent estimates of curtailment-bridging capacity under the stated conditions, rather than universally predictive reliability guarantees.
Third, the LCOE calculations are based on annualized capacity factors and lifecycle costs rather than hourly market-clearing prices, and therefore do not capture the effects of real-time wholesale market dynamics (e.g., price spikes during scarcity events, ancillary service revenues, or congestion pricing). Similarly, the analysis does not include Net Present Value (NPV) or payback period calculations, as these require proprietary revenue assumptions (e.g., price per compute token or per TFLOP-hour) that vary substantially across operators and contract structures.
Fourth, the stochastic demand model utilizes synthetic demand profiles based on three representative scenarios rather than empirical telemetry from operating AI inference facilities. While the Monte Carlo approach captures the general stochastic character of AI workloads, facility-specific demand patterns may differ.
Finally, while the techno-economic framework and BTM technology trade-offs are designed to be generalizable across US wholesale electricity markets, the specific regulatory thresholds, interconnection procedures, and wholesale market structures used in this study reflect the ERCOT service territory. Application to capacity markets such as PJM or MISO would require adjustment of baseline grid cost assumptions and regulatory threshold parameters. The fundamental BTM architecture and resilience dynamics evaluated here, however, would remain applicable, as the underlying interconnection bottlenecks and system headroom opportunities have been documented across all major US balancing authorities [
17].
6.3. Future Work
Future research should extend this framework in several directions. First, applying the scenario-based methodology to other US electricity markets—particularly PJM and MISO, which exhibit even larger curtailment-enabled headroom than ERCOT [
17]—would test the generalizability of the BTM technology trade-offs identified here and enable market-specific regulatory threshold calibration. Second, integrating dynamic cooling constraints—specifically the WUE versus PUE trade-offs documented in the literature—would enable co-optimization of thermal management and generation dispatch. Third, incorporating hourly wholesale market pricing and ancillary service compensation would enable NPV analysis and evaluation of revenue-generating grid interactions, including the economics of flexible interconnection agreements and BYOC arrangements [
18]. Fourth, as commercial cost data for EGS and SMR matures, formal optimization methods (e.g., mixed-integer linear programming) could be applied to identify optimal technology portfolios for specific geographic and regulatory contexts. Fifth, exploring how grid-interactive data centers might function as managed generation assets during early deployment phases—when BTM capacity exceeds IT load—represents a promising avenue for both grid reliability and project economics.
7. Conclusions
The transition toward Zettascale AI computing has pushed data center infrastructure to an inflection point. As rack densities exceed 100 kW and facility power requirements scale to 250 MW and beyond, developers face increasing constraints in relying exclusively on legacy macro-grid interconnections. Across US wholesale electricity markets, the influx of large flexible loads has resulted in multi-year transmission planning queues and transient stability risks. In the ERCOT market examined here, these challenges are compounded by SB 6 Large Load regulations and physical 345 kV congestion, but analogous constraints exist in PJM, MISO, and other RTOs [
17].
This research demonstrates that integrating grid-tied, behind-the-meter energy architectures represents an increasingly important strategy for achieving both speed to market and sustained operational viability. By developing a techno-economic framework that synthesizes LCOE, ALOLP, and internally derived asset utilization metrics, this paper provides an analytical blueprint for navigating the modern interconnection challenge. Importantly, the BTM generation portfolios evaluated here also serve as the enabling infrastructure for the flexible interconnection and bring-your-own-capacity strategies that are emerging as practical pathways to accelerate large-load integration across US electricity markets [
18].
The scenario modeling yields several findings relevant to early-stage data center development:
1. BTM Generation Can Accelerate Deployment: Facilities that incorporate baseload BTM generation—such as EGSs or SMRs—can bypass congested regional transmission queues. By operating in a grid-parallel configuration, these sites reduce reliance on utility upgrades, potentially accelerating the timeline to first compute.
2. The Financial Cost of Resilience Is Addressable: While dispatchable assets like RICEs and the dual-chemistry BESS operate at low capacity factors (7.5% to 30%), their resilience reserves (>70%) serve as quantifiable operational insurance. Under the modeled conditions, this reserve margin bridges mechanical transients, prevents thermal throttling, and supports scenario-dependent ALOLP values exceeding 99% in the strongest configurations. These resilience estimates are conditional on the assumptions in Table 10 and should be validated against site-specific operating conditions.
3. Decoupled BESS Architectures Can Optimize Capital: The proposed dual-chemistry BESS—pairing high-C-rate smart inverters for frequency regulation with repurposed second-life EV batteries for bulk energy shifting—absorbs the instantaneous power spikes inherent to AI inference workloads without requiring over-provisioning of expensive utility-scale lithium-ion installations.
There is no single solution to the data center energy challenge. Each technology combination entails distinct trade-offs among technological readiness, zero-carbon mandates, and fuel supply chain diversity. Siting decisions for future Zettascale data centers should be co-optimized with local baseload generation potential rather than driven solely by fiber-optic proximity. The framework presented here suggests that the strategic combination of geographic siting and Hybrid BTM generation offers a viable pathway for balancing regulatory compliance, capital cost, and Uptime Tier IV resilience across US electricity markets. By adopting a phased expansion strategy—managing load additions relative to applicable regulatory thresholds—and integrating BTM generation portfolios, data center developers can reduce their exposure to interconnection delays while contributing to, rather than undermining, regional grid stability. While the scenario parameters in this study reflect the ERCOT market, the underlying technology trade-offs and the flexible interconnection strategies they enable are applicable wherever large flexible loads confront constrained transmission and generation infrastructure.
Author Contributions
Conceptualization, E.C.J.S.; methodology, E.C.J.J.; validation, E.C.J.J.; formal analysis, E.C.J.J.; investigation, E.C.J.J.; data curation, E.C.J.J.; writing—original draft preparation, E.C.J.J. and E.C.J.S.; writing—review and editing, E.C.J.J. and E.C.J.S.; visualization, E.C.J.J. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created.
Acknowledgments
During the preparation of this manuscript/study, the authors used Gemini 3.1 Pro, OpenAI ChatGPT-5, and Claude Opus 4.6 for the purposes of formatting, proofreading, review, and code support specifically for image and LaTeX file generation. The authors have reviewed and edited the output and take full responsibility for the content of this publication.
Conflicts of Interest
The authors Erick C. Jones, Jr. and Erick C. Jones, Sr. are father and son. All authors contributed substantially to the work and have approved the submission. No authors were involved in the editorial handling of this manuscript. The authors provide independent consulting and advisory services to organizations in the data center and energy sectors. These organizations had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. This manuscript represents independent academic research and was not sponsored, reviewed, or financially supported by any commercial entity.
References
- CBRE Research. North America Data Center Trends H2 2025; CBRE: Dallas, TX, USA, 2026; Available online: https://www.cbre.com/insights/books/north-america-data-center-trends-h2-2025 (accessed on 14 January 2026).
- Cushman & Wakefield. 2025 Global Data Center Market Comparison; Cushman & Wakefield Data Center Advisory Group: Chicago, IL, USA, 2025; Available online: https://www.cushmanwakefield.com/en/insights/global-data-center-market-comparison (accessed on 14 January 2026).
- Masanet, E.; Shehabi, A.; Lei, N.; Smith, S.; Koomey, J. Recalibrating global data center energy-use estimates. Science 2020, 367, 984–986. [Google Scholar] [CrossRef] [PubMed]
- Jones, N. How to stop data centres from gobbling up the world’s electricity. Nature 2018, 561, 163–166. [Google Scholar] [CrossRef] [PubMed]
- McKinsey & Company. Scaling Bigger, Faster, Cheaper Data Centers with Smarter Designs; McKinsey Private Equity Insights: New York, NY, USA, 2025; Available online: https://www.mckinsey.com/industries/private-capital/our-insights/scaling-bigger-faster-cheaper-data-centers-with-smarter-designs (accessed on 14 January 2026).
- White & Case LLP. Intelligent Design: Constructing Next Generation Data Centers for the AI Boom; White & Case LLP: New York, NY, USA, 2025; Available online: https://www.whitecase.com/insight-our-thinking/intelligent-design-constructing-next-generation-data-centers-ai-boom (accessed on 14 January 2026).
- Zhou, G.; Zhou, J.; Huai, X.; Zhou, F.; Jiang, Y. A two-phase liquid immersion cooling strategy utilizing vapor chamber heat spreader for data center servers. Appl. Therm. Eng. 2023, 225, 118289. [Google Scholar] [CrossRef]
- International Association of Electrical Inspectors (IAEI). Modern Data Centers: Electrical Trends, Risks, and NEC® 2026 Implications; IAEI Magazine: Richardson, TX, USA, 2026; Available online: https://iaeimagazine.org/electrical-fundamentals/modern-data-centers-electrical-trends-risks-and-nec-2026-implications/ (accessed on 1 March 2026).
- Zewe, A. Explained: Generative AI’s Environmental Impact; MIT News: Cambridge, MA, USA, 2025; Available online: https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117 (accessed on 14 January 2026).
- Sheng, Y.; Zhang, C.; Zhu, Z.; Xu, H.; Wen, J.; Wang, R.; Yang, J.; Wang, Q.; Bu, S. Power for AI Data Centers: Energy Demand, Grid Impacts, Challenges and Perspectives. Energies 2026, 19, 722. [Google Scholar] [CrossRef]
- Lincoln Institute of Land Policy. Data Drain: The Land and Water Impacts of the AI Boom; Land Lines Magazine: Cambridge, MA, USA, 2025; Available online: https://www.lincolninst.edu/publications/land-lines-magazine/articles/land-water-impacts-data-centers/ (accessed on 14 January 2026).
- Brookings Institution. The Future of Data Centers; Brookings Institution: Washington, DC, USA, 2025; Available online: https://www.brookings.edu/articles/the-future-of-data-centers/ (accessed on 14 January 2026).
- Jones, E.C.; Leibowicz, B.D. Co-optimization and community: Maximizing the benefits of distributed electricity and water technologies. Sustain. Cities Soc. 2021, 64, 102515. [Google Scholar] [CrossRef]
- University of Texas at Austin Energy Institute. 2025 Large Loads Symposium: The Texas Initiative for Datacenter Energy and Large Loads; University of Texas at Austin Energy Institute: Austin, TX, USA, 2025; Available online: https://energy.utexas.edu/2025-large-loads-symposium (accessed on 14 January 2026).
- GridLab. Practical Guidance and Considerations for Large Load Interconnections; GridLab: Washington, DC, USA, 2025; Available online: https://gridlab.org/portfolio-item/practical-guidance-and-considerations-for-large-load-interconnections/ (accessed on 14 January 2026).
- Paces. The Grid Is Planning for Data Centers That Will Never Exist: Aligning Transmission Investment with Development Reality; Whitepaper; Paces: New York, NY, USA, 2026; Available online: https://www.paces.com/white-papers/grid-planning-data-centers-transmission-investment (accessed on 1 March 2026).
- Norris, T.H.; Profeta, T.; Patino-Echeverri, D.; Cowie-Haskell, A. Rethinking Load Growth: Assessing the Potential for Integration of Large Flexible Loads in US Power Systems; NI R 25-01; Nicholas Institute for Energy, Environment & Sustainability, Duke University: Durham, NC, USA, 2025; Available online: https://nicholasinstitute.duke.edu/publications/rethinking-load-growth (accessed on 14 January 2026).
- Brancucci, C.; Cutler, D.; Jenkins, J. Flexible Data Centers: A Faster, More Affordable Path to Power; Camus Energy, Encoord, and Princeton ZERO Lab: San Francisco, CA, USA, 2025; Available online: https://www.camus.energy/resources (accessed on 14 January 2026).
- Belfer Center for Science and International Affairs. AI, Data Centers, and the U.S. Electric Grid: A Watershed Moment; Harvard University: Cambridge, MA, USA, 2026; Available online: https://www.belfercenter.org/research-analysis/ai-data-centers-us-electric-grid (accessed on 1 March 2026).
- Orrick. Powering Data Centers | Megawatts to Megabytes: Chapter 4 Contracting for Power; Orrick: New York, NY, USA, 2025; Available online: https://www.orrick.com/en/Insights/2025/11/Powering-Data-Centers (accessed on 14 January 2026).
- Ginzburg-Ganz, E.; Lifshits, P.; Machlev, R.; Belikov, J.; Krieger, Z.; Levron, Y. Technical Challenges of AI Data Center Integration into Power Grids—A Survey. Energies 2026, 19, 137. [Google Scholar] [CrossRef]
- Blunt, K.; Hiller, J. Exclusive | A New Threat to Power Grids: Data Centers Unplugging at Once. The Wall Street Journal. 1 March 2026. Available online: https://www.wsj.com/business/energy-oil/a-new-threat-to-power-grids-data-centers-unplugging-at-once-741f1bda (accessed on 1 March 2026).
- S&P Global Ratings. Generative AI Has Written The Check, But Can The Power Sector Cash It? S&P Global Ratings: New York, NY, USA, 2025; Available online: https://www.spglobal.com/ratings/en/regulatory/article/power-sector-update-generative-ai-has-written-the-check-but-can-the-power-sector-cash-it-s101649140 (accessed on 14 January 2026).
- Gridwatch. Data Centers Disconnecting Simultaneously Emerging as New Risk to U.S. Power Grid Stability; Gridwatch: Washington, DC, USA, 2026; Available online: https://gridwatch.ph/data-centers-disconnecting-simultaneously-emerging-as-new-risk-to-u-s-power-grid-stability/ (accessed on 1 March 2026).
- Kemabonta, T.; Jones, E.; Harmon, D.; Pittman, J. A New Approach to Developing Community Solar Projects for LMI Communities in ERCOT’s Competitive Electricity Markets. In 2021 IEEE Global Humanitarian Technology Conference (GHTC); IEEE: New York, NY, USA, 2021; pp. 86–93. [Google Scholar] [CrossRef]
- Aghapour, R.; Alavi, S.; Atitebi, O.; Jones, E.C., Jr. Achieving Sustainable Energy Transition: A System Analysis Approach to Energy Policy Implementation. In Proceedings of the 2025 IISE Annual Conference; Institute of Industrial and Systems Engineers (IISE): Norcross, GA, USA, 2025. [Google Scholar]
- Jeffers, R.; Kushner, D.; Ntakou, E.; Paaso, A.; Chalamala, B.; Jourdier, B.; Rahmatian, F.; Fotuhi-Firuzabad, M.; Jones, E., Jr.; Araneda Tapia, J.C.; et al. Enabling Climate Adaptation and Mitigation through Grid Modernization. In IEEE Power & Energy Society; IEEE: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
- Vertiv. Wired for Change: Data Centers’ Dynamic Shift to Hybrid Power Solutions; Whitepaper; Vertiv: Westerville, OH, USA, 2025; Available online: https://www.vertiv.com/48ecc6/globalassets/documents/white-papers/wired-for-change---Data-centers-dynamic-shift-to-hybrid-power-solutions-white-paper.pdf (accessed on 14 January 2026).
- Angizeh, S.; Ghofrani, A.; Zaidan, E.; Jafari, M.A. Resilience-Oriented Behind-the-Meter Energy Storage System Evaluation for Mission-Critical Facilities. IEEE Access 2021, 9, 80854–80865. [Google Scholar] [CrossRef]
- Jones, E. Decomposing Systems: Illustrating the Utility of Distributed Energy Resources with Decomposition Techniques. In Proceedings of the 2020 IISE Annual Conference; Institute of Industrial and Systems Engineers (IISE): Norcross, GA, USA, 2020. [Google Scholar]
- Javidsharifi, M.; Pourroshanfekr Arabani, H.; Kerekes, T.; Sera, D.; Spataru, S.; Guerrero, J.M. Effect of Battery Degradation on the Probabilistic Optimal Operation of Renewable-Based Microgrids. Electricity 2022, 3, 53–74. [Google Scholar] [CrossRef]
- Wärtsilä. Data Centre Dispatchable Capacity: A Major Opportunity for Europe’s Energy Transition; Whitepaper; Wärtsilä: Helsinki, Finland, 2025; Available online: https://www.wartsila.com/docs/default-source/energy-docs/technology-products/white-papers/data-centre-dispatchable-capacity-avk-wartsila_white-paper_2025.pdf (accessed on 14 January 2026).
- Jones, E.C., Jr. Lithium Supply Chain Optimization: A Global Analysis of Critical Minerals for Batteries. Energies 2024, 17, 2685. [Google Scholar] [CrossRef]
- Atitebi, O.S.; Dumre, K.; Jones, E.C., Jr. Supporting a Lithium Circular Economy via Reverse Logistics: Improving the Preprocessing Stage of the Lithium-Ion Battery Recycling Supply Chain. Energies 2025, 18, 651. [Google Scholar] [CrossRef]
- Atitebi, O.S.; Jones, E.C., Jr. Centralized vs. Decentralized Black-Mass Production: A Comparative Analysis of Lithium Reverse Logistics Supply Chain Networks. Logistics 2025, 9, 97. [Google Scholar] [CrossRef]
- Jones, E.C.; Leibowicz, B.D. Contributions of shared autonomous vehicles to climate change mitigation. Transp. Res. Part D Transp. Environ. 2019, 72, 279–298. [Google Scholar] [CrossRef]
- Fotopoulou, M.; Tsekouras, G.; Rakopoulos, D.; Kontargyri, V. Demand response strategies for flexible loads and congestion mitigation. Sustain. Energy Grids Netw. 2025, 44, 102051. [Google Scholar] [CrossRef]
- Jones, E.C., Jr.; Sridharan, C.M.; Aghapour, R.; Rodriguez, A. Re-Energizing Legacy Fossil Infrastructure: Evaluating Geothermal Power in Tribal Lands and HUBZones. Sustainability 2025, 17, 2558. [Google Scholar] [CrossRef]
- Patterson, D.; Gonzalez, J.; Le, Q.; Liang, C.; Munguia, L.M.; Rothchild, D.; So, D.; Texier, M.; Dean, J. Carbon Emissions and Large Neural Network Training. Commun. ACM 2021, 64, 86–93. [Google Scholar]
- Prismecs. AI’s Impact on Data Center Power Demand: Strategic Solutions; Prismecs Industry Insights; Prismecs: Houston, TX, USA, 2026; Available online: https://prismecs.com/blog/data-center-power-demand-ai-energy-solutions (accessed on 1 March 2026).
- Kaplan, J.; McCandlish, S.; Henighan, T.; Brown, T.B.; Chess, B.; Child, R.; Gray, S.; Radford, A.; Wu, J.; Amodei, D. Scaling Laws for Neural Language Models. arXiv 2020, arXiv:2001.08361. [Google Scholar] [CrossRef]
- Deloitte Insights. Can US Infrastructure Keep Up with the AI Economy? Deloitte: New York, NY, USA, 2025. [Google Scholar]
- Uptime Institute. Tier Standard: Topology; Uptime Institute: Seattle, WA, USA, 2021. [Google Scholar]
- Shehabi, A.; Smith, S.J.; Hubbard, A.; Newkirk, A.; Lei, N.; Siddik, M.A.B.; Holecek, B.; Koomey, J.; Masanet, E.; Sartor, D. United States Data Center Energy Usage Report; Tech. Rep. LBNL-1005775; Lawrence Berkeley National Laboratory: Berkeley, CA, USA, 2016. [Google Scholar]
- Synergy Research Group. Hyperscale Data Center Capacity Reaches New Highs as AI Drives Expansion; Synergy Research: Reno, NV, USA, 2025; Available online: https://www.srgresearch.com/ (accessed on 14 January 2026).
- Uptime Institute. Annual Data Center Cooling and Sustainability Report: The Shift to Direct-to-Chip Liquid Cooling; Uptime Institute Intelligence: Seattle, WA, USA, 2025. [Google Scholar]
- ASHRAE Technical Committee 9.9. Datacom Equipment Power Trends and Cooling Applications, 3rd ed.; American Society of Heating, Refrigerating and Air-Conditioning Engineers: Atlanta, GA, USA, 2024. [Google Scholar]
- Mytton, D. Data centre water consumption. npj Clean Water 2021, 4, 11. [Google Scholar] [CrossRef]
- NVIDIA Corporation. NVIDIA GB200 NVL72 Architecture: The Engine of the Zettascale AI Era; NVIDIA Technical Whitepaper; NVIDIA Corporation: Santa Clara, CA, USA, 2024; Available online: https://www.nvidia.com/en-us/data-center/gb200-nvl72/ (accessed on 14 January 2026).
- Oracle. Behind the Scenes: Scale Your NVIDIA GB200 NVL72 Deployments with Dedicated OCI APIs; Oracle Cloud Infrastructure Blog: Austin, TX, USA, 2026; Available online: https://blogs.oracle.com/cloud-infrastructure/behind-the-scenes-scale-nvidia-gb200-nvl72-deployments (accessed on 1 March 2026).
- Introl. Liquid Cooling for AI: Data Center Infrastructure Essential for 2025; Introl Insights: Chicago, IL, USA, 2026; Available online: https://introl.com/blog/liquid-cooling-ai-data-center-infrastructure-essential-2025 (accessed on 1 March 2026).
- Trabish, H.K. Will ERCOT’s Streamlined Connect-and-Manage Approach Work for Other Markets? Utility Dive: Newton, MA, USA, 2025; Available online: https://www.utilitydive.com/news/ercot-connect-and-manage-spp-miso-eris/749083/ (accessed on 14 January 2026).
- IEEE Standards Association. White Paper—Review of Industry Efforts and Standards of Grid Readiness for Data Center Deployment; IEEE: New York, NY, USA, 2026; Available online: https://ieeexplore.ieee.org/document/11366058 (accessed on 1 March 2026).
- Electric Reliability Council of Texas (ERCOT). NPRR1234: Interconnection Requirements for Large Loads and Modeling Standards for Loads 25 MW or Greater; ERCOT Board Report: Austin, TX, USA, 2025. Available online: https://www.ercot.com/mktrules/issues/NPRR1234 (accessed on 14 January 2026).
- Electric Reliability Council of Texas (ERCOT). Planning Guide Revision Request (PGRR) 115: Related to Large Load Interconnection; ERCOT Public Notice: Austin, TX, USA, 2024.
- ERCOT Large Load Working Group (LLWG). Batch Study Process Meetings and Large Load Management; ERCOT Committees: Austin, TX, USA, 2025; Available online: https://www.ercot.com/committees/tac/llwg (accessed on 14 January 2026).
- Yes Energy. How ISOs and RTOs Are Addressing Large Load Growth in 2025; Yes Energy Market Insights: Boulder, CO, USA, 2025; Available online: https://www.yesenergy.com/blog/how-isos-and-rtos-are-addressing-large-load-growth-in-2025 (accessed on 14 January 2026).
- Public Utility Commission of Texas (PUCT). Implementation of Senate Bill 6: Large Flexible Loads and ERCOT Grid Reliability; Public Utility Commission of Texas (PUCT): Austin, TX, USA, 2023.
- Troutman Pepper Locke. Federal and State Policymakers Target AI Data Centers as Electricity Costs and Grid Reliability Concerns Mount; Legal Insights: Washington, DC, USA, 2026. [Google Scholar]
- Galaxy Digital Inc. Galaxy Completes ERCOT Interconnection Studies and Secures Approval for Additional 830 Megawatts at Helios Data Center Campus; PR Newswire: New York, NY, USA, 2026. [Google Scholar]
- Orrick. Megawatts to Megabytes: Orrick’s 2025 Guide to Developing, Financing & Powering Data Centers; Orrick, Herrington & Sutcliffe LLP: San Francisco, CA, USA, 2025; Available online: https://media.orrick.com/Media-Library/public/files/insights/2025/megawatts-to-megabytes-orrick-2025-guide-to-developing-financing-powering-data-centers.pdf (accessed on 14 January 2026).
- Wärtsilä Corporation. Engine Power Plants vs. Gas Turbines: A Comparison of Flexible Generation for Grid Balancing; Technical Review; Wärtsilä Corporation: Helsinki, Finland, 2023. [Google Scholar]
- Capstone Green Energy. Microturbine Technology for Ultra-Low Emission Distributed Generation and Microgrids; Capstone Technical Brief; Capstone Green Energy: Van Nuys, CA, USA, 2024. [Google Scholar]
- King, B.; Ricks, W.; Pastorek, N.; Larsen, J. The Potential for Geothermal Energy to Meet Growing Data Center Electricity Demand; Rhodium Group: San Francisco, CA, USA, 2025; Available online: https://rhg.com/research/geothermal-data-center-electricity-demand/ (accessed on 14 January 2026).
- U.S. Department of Energy (DOE). Pathways to Commercial Liftoff: Enhanced Geothermal Systems; U.S. Department of Energy (DOE): Washington, DC, USA, 2023.
- National Renewable Energy Laboratory (NREL). 2025 Geothermal Report; Tech. Rep. 2025; NREL: Golden, CO, USA, 2025. Available online: https://docs.nrel.gov/docs/fy26osti/91898.pdf (accessed on 14 January 2026).
- Cha, S. The Potential Role of Small Modular Reactors (SMRs) in Addressing the Increasing Power Demand of the Artificial Intelligence Industry: A Scenario-Based Analysis. Nucl. Eng. Technol. 2025, 57, 103314. [Google Scholar] [CrossRef]
- International Atomic Energy Agency (IAEA). Advances in Small Modular Reactor Technology Developments; International Atomic Energy Agency (IAEA): Vienna, Austria, 2024. [Google Scholar]
- Lazard. Lazard’s Levelized Cost of Energy Analysis—Version 17.0; Lazard: New York, NY, USA, 2024. [Google Scholar]
- National Renewable Energy Laboratory (NREL). 2025 Annual Technology Baseline; NREL: Golden, CO, USA, 2025. Available online: https://atb.nrel.gov/ (accessed on 14 January 2026).
- Billinton, R.; Allan, R.N. Reliability Evaluation of Power Systems, 2nd ed.; Plenum Press: New York, NY, USA, 1996. [Google Scholar]
Figure 1. Geographic density of hyperscale and colocation data center infrastructure across the Continental United States [1]. Heatmap clustering indicates concentration in Tier 1 markets. The highlighted region identifies the ERCOT footprint in Texas, showing development clusters in the DFW, Austin, San Antonio, and Houston metros that are straining interconnection queues and motivating localized BTM generation.
Figure 2. Global distribution of hyperscale cloud regions and primary colocation data center hubs (operational capacity basis) [2]. The North American footprint serves as the baseline for extrapolating the future Zettascale AI compute demand analyzed in this study. Dot size indicates the relative size of each data center cluster.
Figure 3. Analytical framework for the scenario-based techno-economic and resilience evaluation of BTM generation architectures for hyperscale AI data centers.
Figure 4. Monte Carlo simulation of 24 h stochastic power demand across three usage scenarios, normalized to facility capacity.
Figure 5. Twenty-four-hour generation dispatch profiles for the 250 MW facility across the six evaluated scenarios. Stacked areas represent individual generation asset contributions. The solid black line indicates stochastic AI IT demand, while the dashed red line represents the total effective demand inclusive of hybrid BESS charging loads (plotted negatively below the zero axis).
Figure 6. Allocation of total physical nameplate capacity for BTM energy assets. The chart illustrates the division between the operational capacity factor (active dispatch) and the resilience reserve (N + 1 standby capacity).
Table 1. 2026 data center infrastructure comparison [42,43,45,46].
| Feature | Enterprise (Legacy) | Hyperscale Cloud | AI Inference (Edge) | AI Training (Factory) |
|---|---|---|---|---|
| Philosophy | Reliability and Sovereignty | Elasticity and Scale | Latency and Response | Throughput and Sync |
| Rack Power Density | 5–15 kW | 15–40 kW | 30–150 kW | 150 kW–1 MW+ |
| Compute (per rack) | ∼500 TFLOPS (FP32) | ∼5 PFLOPS (Mixed) | ∼1.4 ExaFLOPS (FP4) | ∼2.5 ExaFLOPS (FP8) |
| Cooling Method | Air (CRAC/CRAH) | Hybrid (Air + Liquid) | Liquid (RDHx/Cold Plate) | Full Liquid (Immersion) |
| Internal Networking | 25 G Ethernet | 100 G/400 G Ethernet | 400 G/800 G Ethernet | 800 G–1.6 T InfiniBand |
Table 2. Physical constraints, resource demands, and networking by data center type [46,47,48].
| DC Type | Racks per 10k ft² | Bottleneck | Egress | WUE (L/kWh) | Daily Water | Cooling |
|---|---|---|---|---|---|---|
| Enterprise | 200–250 | Floor Space | Low | 1.8–2.5 | 50 k–300 k gal | Evap. Air |
| Hyperscale | 150–200 | Airflow | Ultra-High | 0.3–1.2 | 1 M–5 M gal | Evap./Free |
| AI Inference | 60–100 | Power Feed | High/Bursty | 0.2–0.8 | 500–10 k gal | Hybrid/RDHx |
| AI Training | 15–50 | Thermal | Very Low | 0.5–1.5 | 1 M–3 M gal | DTC Liquid |
Table 3. Scale comparison of AI data centers.
| Metric | 25 MW (Sub-Threshold) | 50 MW (Mid-Market) | 100 MW (Large Scale) |
|---|---|---|---|
| Rack Count (NVL72) | 200 Racks | 400 Racks | 800 Racks |
| Compute Potential | 280 Exaflops | 560 Exaflops | 1.1 Zettaflops |
| Internal Bandwidth (Fabric) | 320 Petabits/s | 640 Petabits/s | 1.28 Exabits/s |
| External Bandwidth (Egress) | 200 Terabits/s | 400 Terabits/s | 800 Terabits/s |
Table 4. ERCOT proximity and auto-approval limits.
| Request Size | Proximity Requirement | Approval Experience |
|---|---|---|
| <10 MW | Anywhere on the city grid | Standard business application; 3–6 months. |
| 10–40 MW | Within 2 miles of a substation | Feasibility study required; 6–12 months. |
| 40–74 MW | Near 69 kV/138 kV Lines | System Impact Study required; 12–24 months. |
| 75 MW+ | Direct to 138 kV/345 kV | Large Load Interconnection (LLI); 3–5+ years. |
Table 5. 100 MW interconnection cost comparison.
| Cost Component | 1-Mile Line Rule (New Substation) | 3-Mile Substation Rule (Tie-In) |
|---|---|---|
| Substation CapEx | $15–$25 M (developer-built) | $5–$10 M (utility upgrade fee) |
| Transmission Line | $2–$5 M (1 mile) | $6–$15 M (3 miles) |
| Land Requirement | 5–10 acres for substation | Minimal (Right-of-Way only) |
| Total Est. CapEx | $17–$30 M | $12–$27 M |
Table 6. Fuel pressure requirements (RICE vs. turbine).
| Feature | Wärtsilä (31 SG/50 SG) | Aeroderivative Turbine (e.g., GE LM2500) |
|---|---|---|
| Typical Inlet Pressure | 5.1 bar (∼74 psi) | 250–500+ psi |
| Tap Type | Medium-pressure distribution | High-pressure transmission |
| On-Site Compression | Often unnecessary | Almost always required |
Table 7. BTM generation stack balancing strategy.
| Component | Role | Strategy |
|---|---|---|
| PPA (Solar/Wind) | Primary Energy Source | Provides bulk annual GWh; lowers carbon intensity. |
| On-Site BESS | High-Speed Balancing | Absorbs solar transients; provides ride-through during intermittency events. |
| RICEs | Long-Duration Firming | Operates during nighttime or multi-day low-wind/solar periods. |
| Small Grid Link | Emergency Standby | Black-start capability or support during on-site maintenance. |
Table 8. Summary of deployment scenarios.
| ID | Scenario | Primary Strategy | Key Trade-Off |
|---|---|---|---|
| BL | Pure Grid Play | 100% utility interconnection (138/345 kV) | Lowest CapEx; 5+ year interconnection delays at 250 MW |
| S1 | RICE + Battery Island | Off-grid RICE/BESS (Ph. 1–2); grid backbone at Ph. 3 | Fastest deployment (18 mo.); gas price exposure |
| S2 | Hybrid PPA + RICE | Solar PPA primary; RICE firming; grid capped at 10 MW | Reduced carbon; moderate LCOE |
| S3 | Grid to Geothermal | Grid for Ph. 1–2; EGS baseload at Ph. 3 | Eliminates grid congestion risk; long EGS lead time |
| S4 | Geo + PPA + Grid | Solar PPA + BESS initially; EGS baseline at Ph. 3 | Lowest LCOE post-transition; grant-eligible |
| S5 | Grid/PPA to SMR | Grid/PPA for Ph. 1–2; SMR anchors Ph. 3 | Maximum power density; longest regulatory timeline |
| S6 | Microturbines to Geo | Microturbine + BESS (Ph. 1–2); EGS + grid at Ph. 3 | Urban permitting advantages; bridge to zero-carbon |
Table 9. Twenty-four-hour power demand Monte Carlo results.
| Phase and Cap Limit | Scenario | Avg Power | 95th Percentile Peak | Min Idle Power |
|---|---|---|---|---|
| Phase 1 (25 MW) | Normal Day | 12.9 MW | 20.7 MW | 5.0 MW |
| Phase 1 (25 MW) | Spike | 19.5 MW | 25.0 MW (Clipped) | 9.1 MW |
| Phase 1 (25 MW) | Weekend/Batch | 11.0 MW | 15.9 MW | 6.3 MW |
| Phase 2 (100 MW) | Normal Day | 51.6 MW | 82.8 MW | 20.0 MW |
| Phase 2 (100 MW) | Spike | 78.2 MW | 100.0 MW (Clipped) | 37.7 MW |
| Phase 2 (100 MW) | Weekend/Batch | 43.9 MW | 63.7 MW | 24.4 MW |
| Phase 3 (250 MW) | Normal Day | 129.2 MW | 208.4 MW | 50.0 MW |
| Phase 3 (250 MW) | Spike | 195.9 MW | 250.0 MW (Clipped) | 93.5 MW |
| Phase 3 (250 MW) | Weekend/Batch | 109.8 MW | 156.0 MW | 61.4 MW |
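The clipped peaks and idle floors in Table 9 follow from simulating stochastic demand against a hard facility cap. A minimal sketch, assuming an illustrative diurnal-sine load shape with Gaussian noise and Bernoulli job spikes (the paper's calibrated demand model is not reproduced here; shape and noise parameters are assumptions):

```python
import numpy as np

def simulate_demand(cap_mw, hours=24, steps_per_hr=60, idle_frac=0.20,
                    spike=False, seed=0):
    """Illustrative 24 h stochastic AI load profile, normalized to facility cap.

    Diurnal sine base plus Gaussian noise; optional job-arrival spikes.
    Demand is floored at the idle load and clipped at the facility cap,
    mirroring the 'Clipped' peaks reported in Table 9.
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(0, hours, hours * steps_per_hr, endpoint=False)
    base = 0.5 + 0.25 * np.sin(2 * np.pi * (t - 9) / 24)   # daytime-peaking shape
    noise = rng.normal(0, 0.08, t.size)
    load = base + noise
    if spike:
        load += 0.35 * (rng.random(t.size) < 0.15)         # bursty training jobs
    return np.clip(load, idle_frac, 1.0) * cap_mw          # idle floor, cap ceiling

demand = simulate_demand(250, spike=True)                  # Phase 3, Spike scenario
print(f"avg {demand.mean():.1f} MW, p95 {np.percentile(demand, 95):.1f} MW, "
      f"min {demand.min():.1f} MW")
```

The clipping step is what produces the "(Clipped)" 95th-percentile entries: unconstrained spike draws would exceed the phase cap, so the reported peak saturates at 25, 100, or 250 MW.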
Table 10. Consolidated modeling assumptions.
| Parameter | Value | Basis |
|---|---|---|
| Discount rate (r) | 8% | Industrial energy project standard |
| System lifespans (n) | SMR: 40 yr; EGS: 30 yr; Grid/PPA/RICE/Micro: 20 yr; BESS: 15 yr | NREL ATB [70] |
| BESS round-trip efficiency | 88% | Industry standard for Li-ion |
| BESS max discharge duration | 4 h (5 h system at 80% DoD) | Design specification |
| BESS tiered ratio (Sprinter/Marathoner) | 1:4 (MWh basis) | CapEx optimization |
| Mechanical startup bridging | 5–15 min | BESS provides full transient coverage |
| Natural gas price | $3.50/MMBtu (Henry Hub forward) | 2026–2030 industrial procurement [70] |
| Uranium fuel cost | $0.80/MMBtu | Light-water SMR standard [67] |
| EGS capacity factor | 95–98% | NREL 2025 Geothermal Report [66] |
| SMR capacity factor | 95% | IAEA reference [68] |
| Monte Carlo iterations | | Convergence testing |
| Grid outage modeling | Based on ERCOT historical FOR and weather events | [22,71] |
| Fuel availability during islanding | Continuous pipeline for NG; multi-year for uranium; none for EGS | Scenario-dependent |
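The BESS rows in Table 10 encode a simple sizing convention: usable discharge equals nameplate energy times depth of discharge, and the 88% round-trip efficiency sets the charging energy drawn per discharged MWh. A worked example, assuming the 30 MW/150 MWh S1 system from Table 11 as the reference unit:

```python
# Illustrative check of the BESS sizing convention in Table 10:
# a 5 h nameplate system operated at 80% depth of discharge yields
# 4 h of usable discharge at rated power.
POWER_MW = 30.0                         # e.g., the S1 hybrid BESS in Table 11
NAMEPLATE_MWH = POWER_MW * 5.0          # 5 h energy rating -> 150 MWh
DOD = 0.80                              # depth of discharge
RTE = 0.88                              # round-trip efficiency

usable_mwh = NAMEPLATE_MWH * DOD        # 120 MWh of deliverable energy
usable_hours = usable_mwh / POWER_MW    # 4.0 h at rated power
recharge_mwh = usable_mwh / RTE         # grid-side energy to refill the cycle

print(usable_hours, round(recharge_mwh, 1))   # 4.0 136.4
```

The recharge term is why Figure 5 plots BESS charging as additional effective demand: every 120 MWh discharged obligates roughly 136 MWh of later generation.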
Table 11. Installed generation capacity by scenario (Phase 3: 250 MW max IT demand).
| Scenario | Geo. (MW) | SMR (MW) | Solar (MW) | RICE (MW) | Micro. (MW) | Grid (MW) | Hybrid BESS (MW/MWh) |
|---|---|---|---|---|---|---|---|
| S1: Island | - | - | - | 150 | - | 100 | 30 MW/150 MWh |
| S2: Hybrid PPA | - | - | 120 | 210 | - | 10 | 35 MW/175 MWh |
| S3: 100% Geo. | 250 | - | - | - | - | - | - |
| S4: Geo + PPA | 120 | - | 80 | - | - | 130 | 25 MW/125 MWh |
| S5: SMR + PPA | - | 180 | 50 | - | - | 70 | - |
| S6: Geo + Micro. | 100 | - | - | - | 80 | 50 | 30 MW/150 MWh |
Table 12. Baseline financial assumptions and calculated LCOE (2026 basis).
| Technology | CapEx ($/kW) | Fixed O&M ($/kW-Year) | Var. O&M ($/MWh) | Fuel ($/MWh) | Cap. Factor (%) | Calc. LCOE ($/MWh) |
|---|---|---|---|---|---|---|
| ERCOT Grid (345 kV) | 150 | 5.00 | 0 | 75.00 | 100% | 76.71 |
| Utility Solar PPA | 0 | 0 | 0 | 35.00 | 28% | 35.00 |
| RICE (Natural Gas) | 1200 | 0 | 15.00 | 29.75 | 90% | 60.27 |
| Enhanced Geothermal | 4500 | 115.00 | 0 | 0.00 | 95% | 68.61 |
| Small Modular Reactor | 7500 | 120.00 | 0 | 8.32 | 95% | 93.24 |
| Microturbines | 1500 | 0 | 20.00 | 40.25 | 90% | 79.62 |
| Hybrid BESS (LCOS) | 1250 * | 10.00 | 0 | 0.00 | 21% ** | 88.58 |
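The Calc. LCOE column is consistent with the standard annualized-cost formulation. As a consistency sketch (assuming the 8% discount rate and 20-year RICE life from Table 10; this is an inference about the calculation, not the authors' published code), the RICE row can be reproduced to within rounding:

```python
def crf(r, n):
    """Capital recovery factor: annualizes upfront CapEx over n years at rate r."""
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

def lcoe(capex_kw, fom_kw_yr, vom_mwh, fuel_mwh, cf, r=0.08, n=20):
    """Simple screening LCOE in $/MWh from the Table 12 inputs.

    Annualized CapEx plus fixed O&M is spread over the energy produced
    per kW-year; variable O&M and fuel are direct $/MWh adders.
    """
    annual_fixed = capex_kw * crf(r, n) + fom_kw_yr   # $/kW-yr
    mwh_per_kw_yr = 8.760 * cf                        # MWh per kW-year
    return annual_fixed / mwh_per_kw_yr + vom_mwh + fuel_mwh

# RICE (natural gas) row: 1200 $/kW CapEx, no fixed O&M, 15 $/MWh VOM,
# 29.75 $/MWh fuel, 90% capacity factor, 20-year life.
rice = lcoe(1200, 0, 15.00, 29.75, 0.90)
print(round(rice, 2))   # ~60.25, vs. 60.27 in Table 12
```

Rows with different lifespans (EGS, SMR, BESS per Table 10) use the same structure with their own n; small residuals against the table likely reflect rounding or financing details not stated here.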
Table 13. Internally derived BTM asset dispatch and utilization (250 MW baseline IT load).
| Asset Scenario | Nameplate Capacity (MW) | Projected Annual Dispatch (MWh) | Total Possible Generation (MWh) | Capacity Factor (%) | Resilience Reserve (%) |
|---|---|---|---|---|---|
| 100% Geothermal (S3) | 300.0 | 2,190,000 | 2,628,000 | 83.33% | 16.67% |
| SMR Nuclear (S5) | 278.0 | 2,190,000 | 2,435,280 | 89.93% | 10.07% |
| Microturbines (S6) | 300.0 | 1,182,600 | 2,628,000 | 45.00% | 55.00% |
| RICE Peaking (S1) | 300.0 | 788,400 | 2,628,000 | 30.00% | 70.00% |
| Tiered-Performance BESS | 250.0 | 164,250 | 2,190,000 | 7.50% | 92.50% |
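The derived columns in Table 13 follow from an 8760 h year: total possible generation is nameplate × 8760, the capacity factor is projected dispatch divided by possible generation, and the resilience reserve is the remainder. A quick consistency check over the table's rows:

```python
HOURS_PER_YEAR = 8760

# (nameplate MW, projected annual dispatch MWh) taken from Table 13
assets = {
    "100% Geothermal (S3)": (300.0, 2_190_000),
    "SMR Nuclear (S5)":     (278.0, 2_190_000),
    "Microturbines (S6)":   (300.0, 1_182_600),
    "RICE Peaking (S1)":    (300.0,   788_400),
    "Tiered BESS":          (250.0,   164_250),
}

for name, (mw, dispatch) in assets.items():
    possible = mw * HOURS_PER_YEAR      # total possible generation, MWh
    cf = dispatch / possible            # operational capacity factor
    reserve = 1.0 - cf                  # N+1 resilience reserve
    print(f"{name}: CF {cf:.2%}, reserve {reserve:.2%}")
```

Each printed pair matches the table (e.g., 83.33%/16.67% for S3 geothermal and 89.93%/10.07% for the SMR), confirming the columns are internally consistent.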
Table 14. Financial comparison of Phase 3 generation scenarios.
| Scenario | Primary Baseload | Peak Load Filler | Est. Blended LCOE | Volatility Risk |
|---|---|---|---|---|
| Baseline | ERCOT Grid | ERCOT Grid | $75.00/MWh | Extremely High |
| S1 (Island) | RICE (Gas) | BESS | $85.50/MWh | High (Gas Market) |
| S2 (Hybrid PPA) | Solar PPA/RICE | BESS/Grid | $72.40/MWh | Moderate |
| S3 (Geo-Long) | Geothermal | Geothermal | $68.00/MWh | Lowest (Fixed) |
| S4 (Geo + PPA) | Geothermal | Solar PPA/BESS | $64.50/MWh | Lowest (Fixed) |
| S5 (Nuclear) | SMR | Solar PPA | $94.20/MWh | Low |
| S6 (Geo + Micro) | Geothermal | Microturbines | $77.80/MWh | Moderate |
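The blended LCOE figures are consistent with an energy-share-weighted average of the single-technology costs in Table 12. A sketch with illustrative dispatch shares (assumed for this example, not the paper's dispatch-derived splits) for an S4-style mix:

```python
def blended_lcoe(components):
    """Energy-share-weighted average of component LCOEs ($/MWh).

    `components` maps source name -> (annual energy share, LCOE $/MWh).
    """
    total_share = sum(share for share, _ in components.values())
    return sum(share * cost for share, cost in components.values()) / total_share

# Illustrative S4-style mix: geothermal baseload topped by solar PPA and BESS,
# using the single-technology LCOEs from Table 12. The 70/22/8 split is an
# assumption for demonstration only.
s4 = blended_lcoe({
    "Geothermal": (0.70, 68.61),
    "Solar PPA":  (0.22, 35.00),
    "BESS":       (0.08, 88.58),
})
print(round(s4, 2))   # 62.81
```

With these assumed shares the blend lands in the neighborhood of the $64.50/MWh reported for S4; the cheap solar PPA energy is what pulls the geothermal-anchored mix below the pure-geothermal S3 figure.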
Table 15. Resilience metrics and Avoided Loss of Load Probability (ALOLP) by scenario (250 MW load).
| Scenario | Islanding Capability | Critical Fuel Dependency | Max BTM Output (MW) | ALOLP |
|---|---|---|---|---|
| S1: Island (RICE) | Seamless | Natural Gas Pipeline | 180 MW * | 72.5% |
| S2: Hybrid PPA | Interrupted | Natural Gas Pipeline | 245 MW | 91.2% |
| S3: 100% Geothermal | Seamless | None (Zero Carbon) | 250 MW | >99.9% |
| S4: Geo + PPA | Seamless | None (Zero Carbon) | 145 MW * | 58.0% |
| S5: SMR + PPA | Seamless | Uranium (Multi-year) | 180 MW * | 72.0% |
| S6: Geo + Microturbines | Seamless | Natural Gas Pipeline | 210 MW * | 84.0% |
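ALOLP can be read as the probability that BTM generation alone carries the stochastic facility load during a grid outage. A minimal sketch, assuming a clipped-normal load model parameterized loosely on the Phase 3 statistics in Table 9 rather than the paper's calibrated Monte Carlo profiles and per-unit forced-outage rates (so the outputs illustrate the ordering across scenarios, not the table's exact values):

```python
import numpy as np

def alolp(btm_max_mw, n=200_000, cap_mw=250.0, seed=7):
    """Share of outage-hour load draws fully covered by BTM capacity.

    Load is sampled from a normal distribution roughly matching the
    Phase 3 'Normal Day' mean in Table 9, clipped to the idle floor and
    facility cap. An illustrative stand-in, not the authors' model.
    """
    rng = np.random.default_rng(seed)
    load = np.clip(rng.normal(129.2, 48.0, n), 50.0, cap_mw)
    return float((load <= btm_max_mw).mean())

# Coverage probability rises with maximum BTM output; a full 250 MW of
# firm capacity (S3-style) covers every draw.
print(f"180 MW BTM: {alolp(180):.1%}")
print(f"250 MW BTM: {alolp(250):.1%}")
```

The gap between this sketch and the table's entries (e.g., 72.5% for S1) reflects effects the sketch omits, such as BTM unit forced outages and fuel-delivery interruptions during islanding.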
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.