
Artificial Intelligence in Local Energy Systems: A Perspective on Emerging Trends and Sustainable Innovation

by Sára Ferenci 1,2, Florina-Ambrozia Coteț 2, Elena Simina Lakatos 1,3, Radu Adrian Munteanu 2 and Loránd Szabó 2,*

1 Institute for Research in Circular Economy and Environment “Ernest Lupan”, 400561 Cluj-Napoca, Romania
2 Faculty of Electrical Engineering, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
3 Academy of Romanian Scientists, 050044 Bucharest, Romania
* Author to whom correspondence should be addressed.
Energies 2026, 19(2), 476; https://doi.org/10.3390/en19020476
Submission received: 21 December 2025 / Revised: 11 January 2026 / Accepted: 15 January 2026 / Published: 17 January 2026
(This article belongs to the Section A1: Smart Grids and Microgrids)

Abstract

Local energy systems (LESs) are becoming larger and more heterogeneous as distributed energy resources, electrified loads, and active prosumers proliferate, increasing the need for reliable coordination of operation, markets, and community governance. This Perspective synthesizes recent literature to map how artificial intelligence (AI) supports (i) forecasting and situational awareness, (ii) optimization and real-time control of distributed assets, and (iii) community-oriented markets and engagement, while arguing that adoption is limited by system-level credibility rather than model accuracy alone. The analysis highlights interlocking deployment barriers, such as governance-integrated explainability, distributional equity, privacy and data governance, robustness under non-stationarity, and the computational footprint of AI. Building on this diagnosis, the paper proposes principles-as-constraints for sustainable, trustworthy LES AI and a deployment-oriented validation and reporting framework. It recommends evaluating LES AI with deployment-ready evidence, including stress testing under shift and rare events, calibrated uncertainty, constraint-violation and safe-fallback behavior, distributional impact metrics, audit-ready documentation, edge feasibility, and transparent energy/carbon accounting. Progress should be judged by measurable system benefits delivered under verifiable safeguards.

Graphical Abstract

1. Introduction

1.1. Motivation and Problem Setting

Local energy systems (LESs), as shown in Figure 1, include microgrids, energy communities, and district/municipal renewable energy networks. In this paper, an LES is understood as a distribution-connected, community-/district-scale socio-technical system coordinated within an operational boundary (typically behind a point of common coupling to the upstream grid) and an institutional boundary (participants operating under shared governance and market rules). These archetypes differ in decision rights and market interfaces, so later AI implications are interpreted as context-dependent. LESs are central to contemporary efforts to decarbonize the power sector, improve resilience, and increase citizen participation in energy governance [1]. They combine distributed energy resources (DERs) such as rooftop photovoltaics (PV), small wind generators, energy storage, electric vehicles (EVs), and demand-side flexibility to operate at neighborhood or municipal scales.
These power systems create complex operational challenges: stochastic renewable supply, temporally and spatially variable demand, multi-stakeholder coordination, and interactions with upstream distribution networks [2].
Artificial intelligence (AI), comprising a family of data-driven and learning-based methods, is positioned to help manage LES complexity [3]. However, at local scales, the value of AI depends on socio-technical conditions, particularly trust, accountability, and robustness under real operational constraints, rather than predictive or control performance alone [4].
Accordingly, this Perspective adopts a deployment-oriented lens on AI in LESs, arguing that successful deployment depends on system-level credibility (governance, accountability, auditability, robustness, privacy, and sustainability) rather than model accuracy alone, and it proposes verifiable design expectations and evidence to support responsible scale-up in community settings.

1.2. Perspective Scope and Literature Anchoring

As a Perspective, this paper does not seek comprehensive coverage of all AI-for-energy publications. Instead, it advances a thesis that system-level credibility (such as governance, accountability, auditability, robustness, privacy, and sustainability) is a primary limiting factor for effective AI adoption in LESs. The discussion is anchored in representative evidence drawn from recent surveys [5,6,7,8,9], widely used methodological families (e.g., forecasting, optimization, reinforcement learning including safe/constrained variants), and LES-focused studies where operational constraints, stakeholder roles, and deployment conditions are explicit.
The referenced literature was identified through targeted searches of major engineering databases (Web of Science Core Collection, ScienceDirect, Scopus, and IEEE Xplore), with emphasis placed on 2020–2025 publications to reflect the recent acceleration of AI-enabled LES research. Keyword searches were complemented by iterative backward and forward citation chaining, with priority given to recent contributions and to studies addressing deployment-relevant trade-offs (privacy, cybersecurity, interpretability, fairness, robustness, auditability, interoperability, and lifecycle sustainability). The resulting evidence base was used to support a forward-looking synthesis and to derive actionable principles and research directions, rather than to produce a systematic taxonomy or a quantitative meta-analysis. Accordingly, inclusion was purposive, aiming to illuminate recurring constraints and failure modes rather than to provide an exhaustive catalogue of methods.

1.3. Core Thesis, Novelty, and Contributions

The novelty of this Perspective is to reframe “AI for LESs” from a primarily model-centric focus, where progress is often judged by forecasting or control accuracy, toward a system-level view in which the central question is whether AI-enabled coordination can be deployed credibly and fairly in real communities. Unlike recent review papers that primarily catalogue AI methods/use cases for energy management and microgrids or survey LES/energy-community modeling and planning, this Perspective foregrounds deployment credibility as the central contribution, i.e., it translates governance integration, fairness, privacy, robustness, auditability, and AI sustainability reporting into verifiable design expectations and minimum evidence for responsible scale-up.
Within this framing, the paper provides an integrated synthesis of emerging AI roles across forecasting and situational awareness, real-time optimization and control of distributed assets, and community-oriented markets and engagement, and it links these roles to the practical barriers that typically determine uptake.
It further consolidates the main technical and socio-technical constraints that shape responsible deployment, emphasizing how explainability and governance integration, distributional fairness, privacy and data governance, robustness and transferability under non-stationarity, and the computational footprint of AI interact as coupled design requirements rather than isolated concerns. Building on this diagnosis, the paper translates trustworthy AI principles into actionable, verifiable design expectations and a deployment-oriented roadmap, highlighting the minimum evidence that should be reported, such as stress testing under distribution shift, distributional impact metrics, audit-ready documentation, edge readiness with safe fallbacks, and transparent sustainability reporting, so that LES AI contributions can be assessed, replicated, and scaled with social legitimacy. Here, we use deployment-ready evidence to mean an auditable minimum set of robustness, constraint-compliance, stakeholder-impact, and governance checks that goes beyond conventional offline validation.
Even where advanced methods explicitly encode safety/security constraints (e.g., safe policy learning for congestion-aware multi-energy operation and safe RL for frequency-secure, risk-averse dispatch), credibility in real LES deployment remains limited by governance, auditability, fairness, privacy, robustness, and sustainability requirements that are rarely treated jointly [10,11].
As a Perspective, this manuscript intentionally avoids algorithmic derivations and method-by-method taxonomies and instead focuses on system-level credibility constraints and the evidence required for responsible LES deployment.

1.4. Paper Structure

The paper is structured as a progression from framing to deployment guidance. After the Introduction, Section 2 maps how AI supports core LES functions and why this becomes increasingly important as decentralization scales. Section 3 analyzes the main adoption-limiting conditions that affect trust, legitimacy, and operational safety in community settings. Section 4 distills these insights into principles for sustainable, trustworthy LES AI expressed as verifiable design expectations, while Section 5 translates the principles into a deployment-oriented research agenda and actionable recommendations focused on validation, reporting, and evidence that support replication and scale-up.

2. Emerging Roles for AI in LESs

Building on the deployment-oriented framing of the Introduction, this section maps three functional domains in which AI is reshaping LESs (forecasting and situational awareness, optimization and real-time control, and community-oriented markets and engagement) before later sections examine the credibility constraints that determine whether these capabilities can be deployed responsibly at scale.

2.1. Forecasting and Situational Awareness

Accurate forecasting underpins virtually all operational and economic decisions in LESs. Short-term and ultra-short-term (minutes-to-hours) forecasts of electricity demand, renewable generation, and local flexibility availability directly influence storage dispatch, peer-to-peer trading outcomes, and interactions with distribution networks. In practice, these forecasts translate into concrete operational actions such as storage dispatch scheduling, flexibility activation, and local market clearing under uncertainty, while PV yield is also influenced by degradation/soiling effects that motivate monitoring and data-driven correction [12,13,14,15]. Traditional statistical approaches, while computationally efficient, struggle to capture non-linear dependencies introduced by high penetrations of variable renewables and electric mobility. This motivates learning-based forecasting, but it also raises an output-choice question: deterministic models provide point predictions and can be sufficient for monitoring or low-regret scheduling with adequate buffers, whereas probabilistic models (quantiles/intervals/scenarios) are preferable when uncertainty materially affects decisions (e.g., storage scheduling, reserve allocation, congestion-aware operation, market bidding). Hybrid approaches combine physics-based structure with data-driven adaptability to improve robustness and consistency under changing operating conditions [16,17,18].
Machine learning (ML) methods, including artificial neural networks, support vector regression, and gradient boosting, as well as deep learning (DL) architectures such as long short-term memory networks, have demonstrated superior performance for load and generation forecasting at fine temporal resolutions [19,20,21]. In particular, deep recurrent models are well-suited to LESs where historical patterns, weather inputs, and socio-behavioral factors interact dynamically. Probabilistic forecasting techniques, including quantile regression forests and Bayesian neural networks, further allow operators to quantify uncertainty and support risk-aware decision-making [22,23,24].
Beyond forecasting individual variables, AI enhances situational awareness by fusing heterogeneous data streams from smart meters, IoT sensors, weather services, and market platforms. This data fusion enables near-real-time system state estimation at the community-level, supporting early detection of anomalies, congestion risks, or supply shortfalls [25,26,27]. Hybrid physics-informed ML approaches are increasingly proposed to improve generalization and interpretability, embedding physical constraints into data-driven models to ensure plausibility under rare or extreme conditions [28,29,30].
Despite strong benchmark results, LES forecasting and situational awareness often fail in deployment for three recurring reasons. Non-stationarity and distribution shift (new DERs, tariff changes, behavioral adaptation, extreme weather) can silently degrade accuracy and miscalibrate uncertainty; limited data quality and observability (missing intervals, sensor drift, fleet changes, aggregation bias) propagate errors into state estimation and anomaly detection; and feedback coupling alters the data-generating process itself. These issues motivate stress testing, drift monitoring, and calibrated probabilistic outputs as baseline requirements for deployable LES forecasting [31,32].
Forecasting claims should be evaluated using rolling-origin (walk-forward) backtesting with leakage-safe temporal splits, and should report both point and probabilistic accuracy. In addition to the standard point-forecast metrics (e.g., mean absolute error and root mean squared error), probabilistic forecasts should include calibration measures and proper scoring rules such as the continuous ranked probability score and prediction-interval coverage. Robustness should be assessed under missing data and communication losses, supported by sensitivity analyses and appropriate classification metrics (e.g., precision and recall) for event-based forecasting [33].
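To make these evaluation quantities concrete, the following minimal sketch computes the pinball (quantile) loss and the empirical prediction-interval coverage probability (PICP) mentioned above. The load observations and quantile forecasts are invented for illustration; real evaluations would run these metrics inside a rolling-origin backtest.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss at level q in (0, 1); lower is better."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

def interval_coverage(y_true, lower, upper):
    """Empirical prediction-interval coverage probability (PICP)."""
    return float(np.mean((y_true >= lower) & (y_true <= upper)))

# Invented hourly load observations (kW) and a toy central 80% interval
# built from the 0.1 and 0.9 quantile forecasts.
y = np.array([4.2, 5.1, 6.3, 5.8, 4.9])
q10 = np.array([3.5, 4.0, 5.0, 4.5, 4.0])
q90 = np.array([5.0, 6.0, 7.5, 7.0, 6.0])

# For a well-calibrated forecast, coverage approaches the nominal 0.8.
coverage = interval_coverage(y, q10, q90)
loss_p90 = pinball_loss(y, q90, 0.9)
```

Averaging the pinball loss over many quantile levels approximates the continuous ranked probability score, which is why both appear together in reporting recommendations.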

2.2. Optimization and Real-Time Control of DERs

Coordinating DERs in LESs presents a high-dimensional, multi-objective optimization problem. Objectives typically include cost minimization, emissions reduction, self-consumption maximization, and maintenance of comfort constraints, while accounting for technical limits of assets and grid constraints [34,35]. AI-driven optimization methods offer scalable alternatives to rule-based or fully model-driven control strategies [36,37,38].
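As a deliberately small illustration of the optimization problem described above, the sketch below schedules a single battery against time-varying prices as a linear program using SciPy's `linprog`. All parameters are invented, and grid export and battery losses are ignored to keep the formulation compact; it is a toy instance, not a reference LES dispatch model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy single-battery dispatch: minimize grid-import cost over T hours.
price = np.array([0.10, 0.12, 0.30, 0.28])   # EUR/kWh (illustrative)
load = np.array([2.0, 2.0, 3.0, 3.0])        # kWh per hour (illustrative)
cap, s0, p_max = 4.0, 2.0, 2.0               # capacity, initial SOC, power limit

T = len(price)
L = np.tril(np.ones((T, T)))                 # cumulative-sum operator
# Decision variable: battery power b_t (positive = discharge).
# Grid import is g_t = load_t - b_t, so cost = price @ (load - b).
res = linprog(
    c=-price,                                # minimizing -price@b maximizes savings
    A_ub=np.vstack([L, -L]),                 # keep SOC within [0, cap]:
    b_ub=np.concatenate([np.full(T, s0),     #   cumsum(b) <= s0
                         np.full(T, cap - s0)]),  # -cumsum(b) <= cap - s0
    bounds=[(-p_max, min(p_max, l)) for l in load],  # rating and no-export limits
)
b = res.x
grid_import = load - b
cost = float(price @ grid_import)
```

The optimal policy charges in the cheap early hours and discharges at full rating in the expensive ones; adding emissions or comfort terms turns this into the multi-objective problem discussed above.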
Reinforcement learning (RL) has emerged as a particularly influential paradigm for LES control. By learning control policies through interaction with the environment, RL-based controllers can adapt to changing system conditions without explicit system models [39,40,41]. However, many widely used model-free RL methods are sample-inefficient, which makes trial-and-error learning costly on physical assets and encourages heavy reliance on simulation and digital twins. This reliance interacts with safety requirements, because unconstrained exploration can violate hard operational limits (e.g., equipment ratings, voltage/frequency/security margins, or comfort constraints). As a result, practical pathways increasingly emphasize restricted exploration with conservative rule-based or model-predictive-control fallbacks, as well as offline batch reinforcement learning trained on logged operational data when live exploration is unacceptable.
Applications include battery energy management, coordinated EV charging, and demand response activation [42,43]. In practice, RL deployment is typically incremental, progressing from simulation and digital twins to constrained pilots with conservative fallbacks before wider rollout. Multi-agent RL further enables decentralized control architectures, aligning well with the distributed nature of LESs and reducing communication requirements [41,44].
However, challenges related to safety, convergence, and transferability remain. Safe RL techniques, constrained optimization, and hybrid model predictive control (MPC) frameworks augmented with ML surrogates are increasingly explored to provide stability guarantees and operational robustness [45,46]. These hybrid approaches combine the foresight of MPC with the adaptability of learning-based methods, making them particularly attractive for real-world LES deployments.
Control-oriented AI, particularly RL, remains vulnerable to failure modes that simulation studies often underplay: sim-to-real mismatch such as unmodeled losses, delays, protection actions, and user responses that can invalidate learned policies; reward and constraint misspecification that drives optimization of proxy objectives and may violate comfort, equity, or network limits; instability or non-convergence and brittle behavior under rare events; and multi-agent non-stationarity that produces oscillations or strategic interactions between controllers. In addition, reproducibility is a practical deployment barrier: reported performance can vary substantially with random seeds, environment implementations, and tuning protocols, which complicates benchmarking and risk assessment for LES operators [47,48]. These risks motivate greater emphasis on constrained and safe learning, hybrid MPC, certified fallback controllers, and transparent reporting of constraint-violation rates and safe-fallback behavior.
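One pattern discussed above, wrapping a learned policy in a hard constraint check with a certified conservative fallback and logging rejected proposals for audit reporting, can be sketched as follows. The state representation, limits, toy dynamics, and stand-in policies are illustrative assumptions, not a reference implementation.

```python
class ShieldedController:
    """Wrap a learned policy with a constraint check and a rule-based fallback.

    A minimal sketch: `policy` and `fallback` are any callables mapping a
    state dict to a battery power setpoint in kW (positive = discharge);
    all names and limits here are illustrative assumptions.
    """

    def __init__(self, policy, fallback, p_max=5.0, soc_min=0.1, soc_max=0.9):
        self.policy, self.fallback = policy, fallback
        self.p_max, self.soc_min, self.soc_max = p_max, soc_min, soc_max
        self.violations = 0   # count rejected proposals for audit reporting

    def _safe(self, action, soc, dt=0.25):
        """Check power rating and projected state of charge after dt hours
        (toy dynamics: SOC treated in normalized units)."""
        next_soc = soc - action * dt
        return abs(action) <= self.p_max and self.soc_min <= next_soc <= self.soc_max

    def act(self, state):
        proposed = self.policy(state)
        if self._safe(proposed, state["soc"]):
            return proposed
        self.violations += 1              # log, then fall back conservatively
        return self.fallback(state)

# Toy stand-ins: an unsafe learned proposal and a do-nothing fallback.
ctrl = ShieldedController(policy=lambda s: 10.0, fallback=lambda s: 0.0)
action = ctrl.act({"soc": 0.5})
```

The violation counter directly supports the transparent reporting of constraint-violation rates and safe-fallback behavior called for above.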

2.3. Enabling Energy Communities, Local Markets, and Citizen Engagement

A defining characteristic of LESs is the active participation of citizens as prosumers, investors, and co-governors of energy assets [49,50]. AI plays a growing role in enabling local energy markets, peer-to-peer (P2P) trading schemes, and community-level coordination mechanisms [51,52,53,54]. Matching algorithms, price formation models, and bidding strategies increasingly rely on AI to manage complexity and scale while respecting individual preferences [41].
AI-enabled platforms facilitate P2P energy trading by forecasting surplus generation, estimating user flexibility, and optimizing matching between buyers and sellers in real-time [55,56]. These mechanisms can increase local renewable utilization and economic value retention within communities. Moreover, AI-driven personalization tools, such as tailored feedback on consumption patterns or automated tariff selection, support behavioral change and sustained engagement [50,57].
However, citizen-facing AI systems introduce additional design requirements. Transparency, explainability, and procedural fairness are critical to maintaining trust and social legitimacy [58,59]. Poorly designed optimization mechanisms may, if left unconstrained, disproportionately benefit asset-rich households, exacerbating energy inequality within communities.
Consequently, recent research emphasizes fairness-aware market design and participatory co-design of AI-enabled platforms, aligning technical performance with social objectives and regulatory frameworks such as the European Clean Energy Package or the European Green Deal [60,61].
Market- and citizen-facing AI creates risks beyond technical optimality (strategic gaming, opacity that undermines trust, benefit concentration favoring asset-rich or highly flexible households, and privacy or consent frictions) so strong technical performance can still be unacceptable without procedural fairness, transparent allocation rules, and auditable decision trails [62].
Strategic behavior is a practical deployment risk in local markets: when payoffs depend on reported flexibility, baselines, or private valuations, participants may misreport, withhold bids, or exploit multi-stage designs via inc–dec gaming, which degrades efficiency and can undermine perceived legitimacy [63,64]. Incentive-compatible market design aims to make truthful reporting a best response (along with voluntary participation and budget balance), and has been demonstrated for community energy trading through staged mechanisms and incentive-compatible double-auction variants [65,66]. For local flexibility markets, the mechanism must also remain compatible with distribution-network constraints (otherwise standard pricing/clearing can fail), so deployment-oriented validation should document the rule set and test gaming resilience under plausible deviations [63,67].
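As a concrete, deliberately simplified illustration of the double-auction clearing mentioned above, the sketch below implements a uniform-price k-double auction over sealed orders. It omits network constraints and the staged, incentive-compatible refinements cited in the literature; all order values are invented.

```python
def clear_double_auction(bids, asks, k=0.5):
    """Clear a sealed-bid k-double auction for P2P energy trading.

    bids/asks are lists of (price, quantity) orders. Trades occur while the
    best remaining bid price meets the best remaining ask price; the uniform
    clearing price interpolates between the marginal matched bid and ask.
    Returns (traded quantity, clearing price or None if no trade).
    """
    bids = sorted(bids, key=lambda b: -b[0])   # highest willingness to pay first
    asks = sorted(asks, key=lambda a: a[0])    # cheapest offers first
    traded, i, j = 0.0, 0, 0
    b_left = bids[0][1] if bids else 0.0
    a_left = asks[0][1] if asks else 0.0
    marginal = None
    while i < len(bids) and j < len(asks) and bids[i][0] >= asks[j][0]:
        q = min(b_left, a_left)
        traded += q
        marginal = (bids[i][0], asks[j][0])    # marginal matched pair
        b_left -= q
        a_left -= q
        if b_left == 0:
            i += 1
            b_left = bids[i][1] if i < len(bids) else 0.0
        if a_left == 0:
            j += 1
            a_left = asks[j][1] if j < len(asks) else 0.0
    price = None if marginal is None else k * marginal[0] + (1 - k) * marginal[1]
    return traded, price

# Toy orders: (price in EUR/kWh, quantity in kWh); values are illustrative.
traded, price = clear_double_auction(bids=[(0.30, 2), (0.20, 3)],
                                     asks=[(0.10, 2), (0.25, 4)])
```

A gaming-resilience test would rerun such a mechanism under plausible misreporting strategies and document how efficiency and allocations change.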
These concerns are visible in practice. For example, the “Smart and Fair Neighbourhood” trials developed as part of the “Local Energy Oxfordshire (LEO)” project explicitly targeted equitable and fair participation and reported real engagement and inclusion barriers (e.g., retrofit cost/technology access constraints and the need for trusted community intermediaries) [68]. Likewise, the Cornwall Local Energy Market (LEM) trial implemented an auction-based local flexibility marketplace and reports both the scale of participation and the practical importance of customer experience and bill-payer value (including reported MWh traded through the platform) [69].
Comparable insights are reported in EU Horizon 2020 demonstrators of local flexibility markets and community trading, including the InterFlex field implementation of congestion management via a local flexibility market, the FEVER pilots with peer-to-peer flexibility trading in an energy-community setting, and +CityxChange’s Trondheim local energy and flexibility market deployment [70,71,72].
Beyond these, pilots such as ReFLEX Orkney and Quartierstrom explicitly document deployment enablers and barriers as well as user-acceptance considerations in community-scale settings [73,74]. Finally, empirical evaluations of benefit-sharing schemes using real household data show that “fair” allocations can trade off with stability over time, reinforcing that fairness design choices matter operationally rather than only normatively [75].

2.4. Synthesis: Why AI Is Becoming Indispensable for the Future of LESs

The previous discussion supports a converging conclusion: as LESs increase in scale, heterogeneity, and societal relevance, their operation and governance will exceed the limits of manual, rule-based, or purely model-driven approaches. High temporal variability of renewable generation, growing electrification of transport and heating, and the emergence of active prosumers collectively generate a level of complexity that cannot be managed effectively without advanced data-driven intelligence.
To summarize how rising LES complexity translates into specific AI requirements and failure risks, Table 1 maps key drivers to capabilities and typical outcomes if AI is absent. These role-specific limitations recur across domains and are consolidated as a structured challenge/failure-mode taxonomy in Section 3 (Table 2 in Section 3.6), which then informs the verifiable principles and reporting minimums in Section 4 and Section 5 [76,77]. In the table, the second column specifies the design and operational requirements, while the final column outlines the associated risks or failure modes should those requirements remain unmet.
These risk pathways also motivate the safeguards and deployment principles detailed in the following sections.
AI provides the connective layer that transforms LESs from loosely coordinated collections of assets into adaptive, learning socio-technical systems. Forecasting models enable anticipation rather than reaction; optimization and control algorithms support continuous balancing of competing objectives under uncertainty; and market-oriented AI tools translate technical flexibility into accessible, community-level participation mechanisms. The cross-layer nature of these functions is illustrated in Figure 2, which positions AI as an integrative intelligence layer linking technical operation, market coordination, and governance requirements. Without these capabilities, LESs risk inefficiencies, inequitable outcomes, and operational fragility, consistent with the risk pathways summarized in Table 1.
As LESs scale in complexity and participation, AI is likely to become structural rather than incremental. Key coordination tasks already exceed what static rules or purely model-driven methods can handle reliably in real time. As decentralization deepens, many LESs will depend on AI-enabled decision support to balance technical constraints, social objectives, and regulatory requirements, especially where high DER diversity, fast local markets, and tight network constraints coexist. In that sense, AI becomes core infrastructure for next-generation LESs, not an add-on. Without AI-enabled or similarly advanced decision support, forecasting, coordination, and governance-aware control remain fragile at scale.

3. Key Technical and Socio-Technical Challenges

Despite rapid progress, several interlocking technical and socio-technical challenges must be addressed for AI to mature responsibly in LESs. These challenges become more acute as LESs scale toward high DER diversity, multi-vector coupling, and community-level participation, where operational decisions increasingly intersect with market rules and social expectations, affecting reliability during peaks, resilience under extremes, and equitable access to flexibility [6,7].
These challenge domains are coupled. Privacy-preserving designs (e.g., federated learning/differential privacy) can reduce utility and add communication/latency overhead, and may introduce additional attack surfaces if not paired with robust aggregation and monitoring [78,79,80]. Robustness and safety measures (e.g., stress testing under shift, conservative fallbacks) can trade off against average-case performance and increase computational burden [81,82]. Fairness objectives may also incur a “price of fairness,” so privacy–utility, robustness–performance, and fairness–welfare trade-offs should be made explicit rather than assumed compatible by default [83].

3.1. Explainability, Acceptance, and Governance

High-performing ML models often function as opaque “black boxes”, which is problematic in LES contexts where automated decisions (e.g., curtailment logic, storage prioritization, or flexibility activation) directly affect households and community trust. Explainable AI (XAI) methods are therefore essential to make decisions interpretable to prosumers, operators, and local authorities, and to support auditing, accountability, and safe operational deployment. Notably, operator-facing explainability (e.g., constraint binding, uncertainty drivers) differs from citizen-facing explainability (e.g., bill impacts, comfort trade-offs) and should be designed accordingly [58,84].
Beyond explainability, governance requires institutional integration: logging, auditable decision trails, and redress mechanisms should be embedded in LES platforms so that automated actions can be traced and contested where needed. This aligns with the broader regulatory direction in Europe toward transparency, consumer protection, and accountable operation in increasingly decentralized electricity systems [60,61].

3.2. Fairness and Distributional Equity

AI-driven optimization can unintentionally encode or amplify inequalities: welfare-maximizing dispatch or pricing may disadvantage households with limited flexibility, poor housing efficiency, or low DER ownership. This is a recognized risk in fairness research on LESs and energy communities, which emphasizes that “efficient” outcomes are not necessarily socially acceptable or equitable [83,85].
Addressing distributional equity requires making fairness explicit in problem formulation (e.g., fairness-aware constraints, distributive objectives, and transparent benefit-allocation rules) combined with participatory design so that allocation mechanisms reflect community norms and do not systematically exclude certain user groups [6,83].
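One way to make the distributional outcomes discussed above measurable is to report a fairness index over per-household benefit allocations. The sketch below uses Jain's fairness index with invented allocation values; real evaluations would pair such metrics with the community's own benefit-sharing rules and track them over time.

```python
def jain_index(allocations):
    """Jain's fairness index: 1.0 for perfectly equal benefit shares,
    approaching 1/n when one participant captures almost everything."""
    n = len(allocations)
    total = sum(allocations)
    squares = sum(x * x for x in allocations)
    return (total * total) / (n * squares) if squares > 0 else 1.0

# Invented monthly cost-savings allocations (EUR) for a four-household LES.
equal = [10, 10, 10, 10]
skewed = [34, 3, 2, 1]

fairness_equal = jain_index(equal)     # perfectly equal sharing
fairness_skewed = jain_index(skewed)   # benefit concentrated in one household
```

Both allocations deliver the same total community benefit (40 EUR), which is precisely why welfare-maximizing objectives alone cannot distinguish them.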

3.3. Data Governance and Privacy

LES AI depends on fine-grained consumption and flexibility data, which can reveal sensitive behavioral patterns. Privacy risks, therefore, extend beyond cybersecurity into inference threats (e.g., occupancy or appliance-use profiling). Privacy-preserving ML approaches, such as federated learning, differential privacy, and secure multiparty computation, offer concrete technical pathways to reduce centralized data exposure while still enabling predictive and control intelligence [86,87].
However, practical deployment remains non-trivial: privacy techniques can introduce performance degradation, communication overhead, and new attack surfaces (e.g., poisoning or gradient leakage). Robust privacy-preserving load forecasting results exist, but implementing them in real LES operations requires careful trade-offs between utility, trust, operational constraints, and governance requirements [86,88].
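To illustrate the federated learning pathway described above, the following toy sketch performs federated averaging of a linear load-forecast model across three synthetic clients. The model, data, and hyperparameters are illustrative assumptions; a real deployment would add secure aggregation, differential-privacy noise, and defenses against poisoning.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One client's local gradient steps on a linear forecast model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def fed_avg(global_w, clients):
    """Server step: average client updates, weighted by local sample count.
    Raw smart-meter data never leaves the clients; only weights are shared."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = np.stack([local_update(global_w, X, y) for X, y in clients])
    return (sizes[:, None] * updates).sum(axis=0) / sizes.sum()

# Three synthetic "households" drawn from the same underlying model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(200):                  # communication rounds
    w = fed_avg(w, clients)
```

The communication overhead mentioned above shows up here directly: every round exchanges model weights, and privacy additions such as clipped, noised updates typically slow this convergence further.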

3.4. Robustness, Safety, and Transferability

LESs operate under non-stationarity: seasonal shifts, changing occupancy behavior, new DER penetration, and evolving tariff/market conditions can all cause distribution shift and degrade model performance. Robustness, therefore, requires uncertainty-aware learning, domain adaptation, and safety-constrained control design, especially when AI influences real-time operation or market-facing decisions. Simulation-only validation is insufficient for deployment decisions because real LESs introduce feedback effects (behavioral adaptation, policy constraints, maintenance realities) that can shift system dynamics [7,89].
Transferability is also a bottleneck. While transfer learning and meta-learning can reduce retraining effort when deploying to new neighborhoods or communities, “one-size-fits-all” models risk negative transfer if local contexts differ materially in assets, behavior, or governance structures. This makes rigorous validation across diverse LES archetypes and transparent reporting of failure modes essential [7,8].
Reliability engineers increasingly highlight that credible ML for safety-relevant LES control requires stress testing, uncertainty-aware reporting, and clear failure-mode assumptions, not only average-case accuracy. This motivates reliability- and safety-case-style evidence that yields interpretable system-level reliability predictions [90].

3.5. Environmental Footprint of AI

The computational cost of AI, particularly large DL models and the infrastructure supporting training and inference, has a non-negligible energy and emissions footprint. This creates a tension: AI may reduce system-level emissions through better coordination, yet increase emissions through computation demand if not designed and operated efficiently [81,82].
Mitigation strategies include model efficiency (compression, pruning, lightweight architectures), scheduling computations when electricity is low-carbon, and pushing inference to the edge where appropriate. Critically, lifecycle or system-level assessments should be used to compare the computation burden of AI against the operational emissions reductions achieved in LES operation [59].
Recent standards-oriented guidance emphasizes assessing AI impacts across lifecycle stages (design/data, training, inference/deployment, and supply-chain/infrastructure effects such as cooling/water), not only training electricity [91]. Given the rapid growth of global e-waste and still-limited formal recycling, “green AI” implementation in LESs should also consider hardware lifetime extension and end-of-life pathways (reuse/refurbishment/recycling) when advocating scale-up [92].
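The carbon-aware scheduling idea raised above can be illustrated with a minimal greedy sketch that places a deferrable compute job (e.g., model retraining) in the lowest-carbon hours of a day-ahead intensity forecast. The forecast values are invented for illustration.

```python
def schedule_batch_job(carbon_forecast, job_hours):
    """Pick the lowest-carbon-intensity hours in a day-ahead forecast for a
    deferrable compute job. Returns the chosen hour indices, sorted."""
    ranked = sorted(range(len(carbon_forecast)), key=lambda h: carbon_forecast[h])
    return sorted(ranked[:job_hours])

# Invented day-ahead grid carbon intensity (gCO2/kWh), hours 0..23; the
# midday dip mimics high local PV generation.
forecast = [420, 410, 400, 390, 380, 370, 300, 250,
            200, 180, 170, 160, 150, 160, 180, 220,
            300, 380, 450, 470, 460, 440, 430, 425]

hours = schedule_batch_job(forecast, job_hours=4)   # a 4 h retraining job
```

A lifecycle-aware comparison would then weigh the emissions of this scheduled computation against the operational emissions reductions the retrained model delivers.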

3.6. Synthesis: From AI Capability to Responsible LES Deployment

The challenges detailed in the preceding subsections indicate that high predictive or optimization performance alone is insufficient for real-world LES deployment. Because AI increasingly shapes operational decisions and value allocation, responsible deployment requires explicit safeguards that translate technical performance into legitimacy, accountability, and resilience. Table 2 translates the challenge domains into concrete implementation mechanisms and the risks that arise if they are omitted. The second column specifies the design and operational requirements, while the final one outlines the associated risk or failure mode should that requirement remain unmet.
A system-level view is provided in Figure 3, which frames these requirements as operational “guardrails” surrounding the AI-enabled LES decision layer and supported by continuous monitoring and community oversight. This synthesis emphasizes that the maturity of AI in LESs depends not only on better models, but on enforceable design constraints across transparency, fairness, privacy, robustness, and environmental footprint.

4. Principles for Sustainable, Trustworthy AI in LESs

To harness the operational benefits of AI (e.g., peak reduction, congestion avoidance, and improved market participation) without undermining trust, Section 4 translates the preceding challenges into actionable principles for sustainable LES deployment.
Because the principles are interdependent, we treat them as verifiable design constraints rather than aspirational labels. To reduce overlap, we distinguish user-facing oversight and fairness (Section 4.1 and Section 4.2), data and computation stewardship (Section 4.3 and Section 4.4), engineering interoperability and reproducibility (Section 4.5), and governance-facing auditability, contestability, and accountable operations (Section 4.6).

4.1. Human-Centered AI and Meaningful Oversight

LES automation can directly affect household comfort, mobility, and bills; therefore, AI should augment decision-making rather than replace it with opaque mandates. Human-centered design requires clear communication of system status and uncertainty, controllable autonomy (e.g., opt-out/override), and interaction patterns that help users recover when AI is wrong. These needs align with established human–AI interaction guidance emphasizing transparency, progressive disclosure, and graceful failure handling in user-facing AI systems [93,94].
In practice, LES platforms can implement human-centered oversight through role-based permissions, explainable and contestable decisions, and explicit fallback modes (manual override and safe defaults) [58]. These user-facing mechanisms should feed the audit trail and redress processes described in Section 4.6.

4.2. Fairness-by-Design and Distributional Equity

Efficiency-maximizing control or pricing can unintentionally redistribute costs and benefits toward already-flexible or asset-rich prosumers. Fairness should therefore be encoded explicitly in objectives and constraints (e.g., bounded bill impacts, minimum service guarantees, benefit-allocation rules), rather than treated as an ex-post add-on. The growing LES fairness literature shows that “fairness” is multi-dimensional (procedural, distributive, contextual) and must be operationalized with metrics that match the community’s goals and constraints [85].
In LES operation and local markets, fairness can be made testable by reporting distributional impacts (who benefits/loses) and enforcing simple constraints during dispatch/settlement [76,83]. Practical examples include: bill-impact caps (e.g., limit the increase vs. a counterfactual baseline for each household), minimum benefit guarantees (e.g., require non-negative savings for at least p% of participants), group-disparity limits (e.g., bound differences in average savings across defined groups), and inequality indices on benefits such as a Gini/Atkinson-type bound [95] or a minimum Jain-style fairness score [96]. These formulations are compatible with optimization-based market/aggregation mechanisms (e.g., virtual-power-plant-style intermediated clearing [97]) and can be integrated as explicit constraints alongside efficiency objectives [98].
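To make these constraints concrete, the sketch below (our own illustration; the 5% bill-impact cap and the 90% minimum-benefit share are hypothetical thresholds, not values from the cited studies) computes a Jain-style fairness score [96], a Gini coefficient on per-household benefits [95], and the bill-impact/minimum-benefit gate described above:

```python
import numpy as np

def jain_index(benefits):
    """Jain's fairness index on a benefit vector: 1.0 = perfectly equal, 1/n = maximally unequal."""
    b = np.asarray(benefits, dtype=float)
    return b.sum() ** 2 / (len(b) * (b ** 2).sum())

def gini(benefits):
    """Gini coefficient of the benefit distribution (0 = equal, ~1 = concentrated)."""
    b = np.sort(np.asarray(benefits, dtype=float))
    n = len(b)
    idx = np.arange(1, n + 1)
    return (2 * (idx * b).sum()) / (n * b.sum()) - (n + 1) / n

def passes_fairness_gate(savings, baseline_bills, cap=0.05, min_nonneg_share=0.9):
    """Check a bill-impact cap and a minimum-benefit guarantee (illustrative thresholds)."""
    savings = np.asarray(savings, dtype=float)
    impacts = -savings / np.asarray(baseline_bills, dtype=float)  # relative bill increase vs. baseline
    cap_ok = bool(np.all(impacts <= cap))            # no household's bill rises by more than `cap`
    share_ok = np.mean(savings >= 0) >= min_nonneg_share  # e.g., >=90% of households do not lose
    return cap_ok and share_ok
```

Such indicators can then be reported alongside dispatch/settlement results or imposed directly as constraints in the market-clearing optimization.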
For energy communities specifically, benefit-allocation mechanisms (including cooperative-game approaches and bargaining-based designs) illustrate how different fairness concepts trade off with total welfare (the “price of fairness”), underscoring why fairness should be chosen transparently and tested under realistic heterogeneity [83].

4.3. Privacy-Preserving Data Architectures

LES intelligence often depends on high-resolution consumption and DER data that can reveal sensitive behavioral patterns. Privacy protection should be designed at the architecture level: minimize data collection, keep raw data local where feasible, and separate operational necessity from convenience features.
Technically, federated learning (FL) enables collaborative forecasting/control without centralizing raw household data, but must be combined with privacy mechanisms suited to adversarial settings (e.g., differential privacy, secure aggregation, or MPC) to reduce leakage through gradients or updates [88,99]. Studies on privacy-preserving FL for residential load forecasting demonstrate both feasibility and the practical design choices/trade-offs required [86].
Differential privacy provides a formal privacy guarantee and remains a core building block for many LES privacy strategies [78]. Secure aggregation strengthens FL by ensuring the server cannot inspect individual client updates [79].
Secure multiparty computation offers additional options for privacy-preserving analytics when stronger cryptographic guarantees are required [80]. These techniques are complementary rather than substitutes; because FL introduces new threat surfaces (poisoning, inference, and aggregation attacks), smart-grid-specific surveys recommend integrating threat modeling, robust aggregation, and monitoring from the outset [88].
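To illustrate how two of these building blocks compose, the toy sketch below (our own simplification, not the protocol of [79] or a production FL stack) shows pairwise additive masking, after which a server can recover only the sum of client updates, and a clipped Gaussian-mechanism aggregate of the kind used in DP-style federated averaging:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_updates(updates):
    """Toy additive secure aggregation: each client pair shares a random mask that one
    adds and the other subtracts, so masks cancel in the sum but hide individual updates."""
    masked = [u.astype(float).copy() for u in updates]
    n = len(updates)
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.normal(size=updates[i].shape)
            masked[i] += r   # client i adds the pairwise mask
            masked[j] -= r   # client j subtracts it; the pair cancels on summation
    return masked

def dp_aggregate(updates, clip=1.0, noise_mult=1.0):
    """Clip each client update to L2 norm `clip`, sum, and add calibrated Gaussian noise
    (the Gaussian mechanism underlying DP federated averaging; epsilon accounting omitted)."""
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in updates]
    total = np.sum(clipped, axis=0)
    return total + rng.normal(scale=noise_mult * clip, size=total.shape)
```

A real deployment would replace the shared masks with cryptographically derived pairwise secrets and add dropout handling, but the cancellation-in-the-sum property shown here is the core idea.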
Because privacy mechanisms introduce new threat surfaces, their monitoring requirements should be integrated into the audit trail and governance processes described in Section 4.6.

4.4. Resource-Aware Algorithms and Environmental Responsibility

Resource-aware design means matching model complexity to the value of the decision (e.g., simple baselines where sufficient; heavier models only when they measurably improve system outcomes), and reporting computation and energy budgets as part of responsible evaluation. This ensures that AI-related resource use remains proportionate to the operational and sustainability value delivered in LES operation.
In LES deployments, the environmental footprint of using AI extends beyond operational computing electricity to include impacts from data collection and communication, hardware manufacture (embodied emissions and materials), deployment and monitoring, update cycles, and end-of-life handling (e-waste, recycling, refurbishment) [91,100]. A lifecycle-oriented framing motivates reporting material and energy inputs, as well as resulting wastes, across the full AI lifecycle (data → training → deployment → updates → retirement), and recent guidance highlights that manufacturing and supply-chain emissions are often missing from AI sustainability reports [91,101]. Operational “green AI” implementation paths therefore include:
  • data minimization and local preprocessing to reduce telemetry and storage;
  • right-sizing and reuse of models (transfer/reuse where appropriate) and carbon-aware scheduling of training;
  • deployment choices that reduce energy per decision (edge–cloud partitioning, compression/quantization) while tracking latency and energy per inference;
  • update policies triggered by drift/shift rather than fixed retraining cycles, to avoid unnecessary compute and data movement [101,102,103].
A circular-economy lens further recommends extending hardware lifetimes (repairability, modular replacement, reuse/refurbishment), documenting end-of-life pathways, and avoiding premature hardware refresh, particularly relevant given the growth of e-waste streams and the value of recoverable materials [92,104].
Practical methods for quantifying and reducing computational footprint are now well documented, including experiment tracking, carbon accounting, and “green-by-design” workflow rules [105,106].
In practical LES deployments, this often translates into the use of lightweight edge inference mechanisms to enable real-time control, the scheduling of training processes during low-carbon periods whenever possible, and the adoption of model updating policies that respond dynamically to detected distribution shifts rather than relying on fixed retraining cycles [88,102,103].
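Carbon-aware scheduling of training can be as simple as selecting the lowest-intensity contiguous window in a day-ahead grid carbon forecast. The sketch below is a minimal illustration of that idea (the forecast values and function name are hypothetical):

```python
def best_training_window(carbon_forecast, duration_h):
    """Return (start_hour, avg_intensity) of the contiguous window of `duration_h`
    hours with the lowest average forecast grid carbon intensity (gCO2/kWh)."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(carbon_forecast) - duration_h + 1):
        avg = sum(carbon_forecast[start:start + duration_h]) / duration_h
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg
```

In practice, the same logic would be combined with deadlines and interruptibility constraints, but even this greedy window selection captures most of the benefit when renewable output drives large intraday intensity swings.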

4.5. Interoperability and Reproducible Engineering

LESs are inherently multi-vendor and multi-domain; without interoperability, AI models become brittle, site-specific, and expensive to maintain. Interoperability requires open, well-documented data models, consistent measurement definitions, stable application programming interfaces (APIs), and transparent interfaces between optimization/control layers and market and engagement platforms.
In power systems, the common information model (CIM) is widely discussed as a foundation for semantic interoperability, and reviews of CIM-based integration highlight recurring practical issues (extensions, harmonization, validation) that directly affect data quality for downstream AI [107,108].
For demand response and flexibility services, OpenADR provides a standards-based pathway to scalable automation and has been implemented in agent-based and real-world evaluation contexts [109,110].
From an AI perspective, standards and open formats enable reproducible pipelines and comparable evidence across sites [109,110,111,112]. This also supports independent auditing and governance processes, complementing the requirements in Section 4.6.

4.6. Regulatory Engagement, Auditability, and Accountable Operations

Accountable LES AI requires auditability and contestability for operators, communities, and (where applicable) regulators, building on the human-facing transparency mechanisms in Section 4.1. Auditability includes: event logging, versioning of models/data, documented decision pathways, monitoring for drift and disparate impacts, and mechanisms for redress (appeals/compensation) when automated actions cause harm.
Concrete documentation and audit tooling can be operationalized using established AI governance artifacts such as model cards (what a model is for, tested conditions, limitations, and ethical considerations) [113]. Algorithmic auditing research provides lifecycle-aligned audit frameworks (from design through deployment) and emphasizes organizational accountability processes, not just technical checks [114].
For LESs, this implies that “trustworthy AI” should be verified continuously (monitoring + periodic audits), not assumed at commissioning [115,116,117].

4.7. Synthesis: Principles-as-Constraints for Sustainable, Trustworthy AI in LESs

The principles detailed in the previous sections provide a concrete pathway from “AI can improve LES operation” to “AI can be deployed responsibly at scale”. Taken together, they shift the focus from model-centric performance to system-level trustworthiness, where acceptable LES deployment depends on human oversight, equity safeguards, privacy-preserving data practices, robust operation under uncertainty, and verifiable accountability as a coupled set of constraints.
A concise illustrative LES scenario is a 200-household energy community with PV, shared battery storage, EV charging, and heat pumps on a constrained low-voltage feeder. The operator uses probabilistic forecasts to form day-ahead schedules and bids, and a constrained real-time controller (e.g., MPC with learning-based components) to dispatch flexibility while enforcing feeder and asset limits. A local flexibility mechanism allocates incentives for shifting or curtailment under congestion. The principles act as constraints throughout: human oversight enables opt-out/override and understandable explanations; fairness-by-design caps bill impacts and monitors distributional effects; privacy-by-design keeps household data local with monitoring integrated into the audit trail; and interoperability/auditability support multi-vendor integration, logging, versioning, and redress when automation affects comfort or bills.
To operationalize this interaction in practice, Table 3 consolidates these principles into a practitioner-oriented checklist, linking each principle to its intent in LES contexts, implementable design actions, and the evidence that should be reported to demonstrate compliance (e.g., override mechanisms, disparate impact monitoring, privacy threat models, computation-related budgets, and audit trails). Its purpose is to make the principles verifiable by specifying the minimum documentation and evaluation evidence required to support trustworthy AI claims in LES deployments.
A system-level view is provided in Figure 4, which frames trustworthy AI as a surrounding set of enforceable constraints around the AI decision layer (forecasting, optimization, control, and market participation).
This framing highlights that trustworthiness is not achieved by any single technique, but by co-design across people, platforms, and policy: interoperability enables portability and benchmarking, resource awareness protects sustainability claims, and auditability supports regulatory and community legitimacy. In this sense, Table 3 and Figure 4 jointly reinforce the central message of this section: trustworthy AI in LESs requires principles-as-constraints, not principles-as-aspirations.

5. Research Agenda and Actionable Recommendations

Building on the technical and socio-technical challenges in Section 3 and the principles proposed in Section 4, we outline a practical research agenda for accelerating responsible AI deployment in LESs. The agenda emphasizes not only algorithmic performance, but also robustness under distribution shift, explainability and auditability for community governance, distributional fairness, low-latency edge operation, real-world validation through living labs, and standardized sustainability reporting of AI components.

5.1. Hybrid Physics–ML Models for Robust Forecasting and Control

A central limitation of purely data-driven models in LES contexts is brittleness under rare events and distributional change (weather extremes, new DER penetrations, altered user behavior), including multi-vector coupling effects across electricity, heating, and mobility. Hybrid approaches that embed physical structure into learning, ranging from physics-informed neural networks (PINNs) to operator-learning frameworks, offer a path to improved extrapolation and data efficiency. PINNs, for example, constrain learning with governing equations and have become a widely used paradigm for physically consistent inference and control, especially when measurement coverage is incomplete [118,119].
For LES forecasting and control, hybrid strategies are particularly promising when paired with uncertainty-aware decision-making (e.g., MPC with learned components) and stress testing under out-of-distribution (OOD) conditions. Recent work in renewable energy forecasting demonstrates that explicitly combining physics-derived predictors with ML improves predictive skill compared with state-of-the-art purely data-driven predictor sets, illustrating the value of hybridization for operationally relevant horizons [120]. In parallel, broader reviews of physics-informed ML in renewable energy systems underline the opportunity to use physical constraints as “guardrails” that improve generalization while reducing the volume of training data required [16].
An actionable recommendation is to prioritize hybrid forecasting and control pipelines that can quantify uncertainty, explicitly test rare events and distribution shifts, and report constraint-violation rates and calibration under extremes alongside standard accuracy metrics. This ensures that performance evaluation goes beyond conventional accuracy measures and captures the robustness and reliability of models under realistic operating conditions.
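Two of the recommended deployment metrics, empirical coverage of probabilistic forecast intervals and constraint-violation rates, take only a few lines to compute; the sketch below uses our own hypothetical function names and data:

```python
import numpy as np

def interval_coverage(y_true, lower, upper):
    """Empirical coverage (PICP) of prediction intervals; comparing it with the
    nominal level (e.g., 0.9) indicates calibration, including under stress tests."""
    y, lo, hi = map(np.asarray, (y_true, lower, upper))
    return float(np.mean((y >= lo) & (y <= hi)))

def violation_rate(dispatch_kw, feeder_limit_kw):
    """Share of dispatch intervals in which feeder power exceeds its limit."""
    d = np.asarray(dispatch_kw, dtype=float)
    return float(np.mean(np.abs(d) > feeder_limit_kw))
```

Reporting these alongside accuracy, separately for nominal and out-of-distribution test slices, is the kind of evidence the recommendation above calls for.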

5.2. Explainable, Auditable Algorithms Tailored to Community Contexts

Building on the explainability and governance requirements identified in Section 3 and Section 4, the research frontier is shifting from generic explainability toward contextual explainability: explanations that are meaningful to prosumers, operators, and local authorities for specific LES decisions such as dispatch, flexibility allocation, and pricing.
In communal energy settings, this motivates audit-ready pipelines: decision logs, model documentation (e.g., local model cards), and procedures for review and redress that communities can actually use [121]. Work on algorithmic transparency in communal energy systems highlights why contestability and governance integration must be treated as first-class design goals rather than afterthoughts [122].
A critical actionable recommendation is to develop LES-specific XAI benchmarks that address dispatch, flexibility allocation, and pricing, and that include representative tasks, datasets/simulators, governance artifacts (decision logs/model cards), and human-subject evaluation protocols designed to measure trust, comprehension, and perceived fairness, rather than focusing solely on technical fidelity. This combined approach ensures that explainability is assessed not only in terms of algorithmic performance but also in relation to how stakeholders experience and interpret AI-driven decisions within LESs.

5.3. Fairness-Aware Market and Control Mechanisms

Research should prioritize measurable equity indicators and pilot-ready validation protocols for fairness-aware market and control mechanisms in LESs. This includes distributional reporting (who benefits/loses), subgroup sensitivity analysis under realistic heterogeneity, and mixed-method assessment of perceived legitimacy alongside technical feasibility.
Recent studies emphasized that fairness in LESs is multi-dimensional (procedural, distributive, recognitional) and requires interdisciplinary evaluation, not only engineering metrics [85]. In energy communities, benefit-allocation mechanisms are increasingly studied as technical–institutional design choices that must be assessed for inclusivity, not just feasibility [76].
Accordingly, an actionable recommendation is to treat fairness as a measurable requirement by defining equity indicators that capture who benefits and who loses, embedding constraints such as minimum service guarantees, protections for vulnerable groups, and progressive allocation rules, and validating these mechanisms in pilot projects through mixed-method evaluation. This approach ensures that fairness is not left as an abstract principle but becomes an operational criterion integrated into the design and assessment of LESs.

5.4. Lightweight, Edge-Capable AI for Distributed LES Operation

LES decision cycles often require low latency, high availability, and graceful degradation under connectivity constraints. Edge-capable AI reduces dependence on the cloud, lowers communication overhead, and can improve privacy by keeping sensitive data local. Surveys of edge computing for IoT-enabled smart grids highlight the architectural motivation for pushing intelligence closer to devices to meet response-time and reliability constraints [123].
In microgrid energy management, RL approaches have shown strong performance in simulation-based settings, but practical deployment raises issues of computation, safety, and stability. Deep RL methods for microgrid energy management systems are widely studied, and the literature provides a concrete baseline for what must be made lighter, safer, and more deployable [124]. Hybrid cloud–edge scheduling architectures also illustrate how decision-making can be partitioned, enabling fast local inference while retaining higher-level coordination layers [125].
An essential actionable recommendation is to report edge readiness explicitly: document latency, energy consumption per inference, bandwidth demand, and fallback control mechanisms (ideally rule-based or certified controllers that maintain safety and protection constraints when AI performance degrades), and favor architectures that remain safe under partial observability and intermittent connectivity. This ensures that AI solutions deployed in LESs are not only efficient but also resilient to real-world constraints and variability in communication infrastructure.
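A safe-fallback wrapper of the kind recommended here can be sketched as follows (illustrative only: the latency budget and constant fallback setpoint are placeholders, and a real deployment would use a certified rule-based or MPC controller with documented trigger conditions rather than a fixed value):

```python
import time

class SafeFallbackController:
    """Wrap an AI policy with a conservative fallback: if the policy raises an
    exception or exceeds its latency budget, revert to a safe default setpoint
    and count the event for the audit trail (cf. Section 5.4)."""

    def __init__(self, ai_policy, fallback_setpoint_kw, latency_budget_s=0.1):
        self.ai_policy = ai_policy
        self.fallback = fallback_setpoint_kw
        self.budget = latency_budget_s
        self.fallback_count = 0  # logged for edge-readiness reporting

    def dispatch(self, state):
        start = time.perf_counter()
        try:
            action = self.ai_policy(state)
        except Exception:
            action = None  # any policy failure triggers the fallback path
        if action is None or time.perf_counter() - start > self.budget:
            self.fallback_count += 1
            return self.fallback  # conservative safe default
        return action
```

The trigger conditions, time-to-stabilize, and frequency of fallback activation are exactly the quantities that the edge-readiness evidence in Table 4 asks deployments to report.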

5.5. Longitudinal Field Studies and Living Labs

To generate deployment-ready evidence, evaluation of AI methods in LESs should increasingly rely on longitudinal field studies and living labs. Real deployments introduce feedback loops: user adaptation, trust dynamics, policy constraints, maintenance realities, and emergent system behavior. Living labs and demonstrators provide a structured way to study these effects over time and under real governance arrangements.
Evidence from large demonstrator settings shows how living laboratories can function as “blueprints” for developing and testing smart LESs and AI-based control strategies in real-world contexts [126]. Complementary longitudinal living-lab studies of household energy management systems demonstrate how user interaction and learning evolve, exactly the type of socio-technical dynamics that short pilots and purely technical evaluations often miss [127,128].
A possible actionable recommendation is to shift evaluation norms toward longitudinal, mixed-method evidence, including quasi-experimental or staged rollout designs where feasible, that captures adoption, retention, behavioral responses, and dispute resolution, while also reporting contextual factors that enable replication across different sites. This approach ensures that assessments of AI in LESs reflect not only technical performance but also the lived dynamics of user engagement and institutional adaptation over time.

5.6. Standardized LCA and Reporting for AI in Energy

If AI is deployed to reduce emissions and improve sustainability, the AI component itself must be evaluated transparently. This includes energy use in training and inference, data movement, hardware assumptions, and grid carbon intensity. “What gets reported gets optimized”: without common reporting, comparisons are misleading and greenwashing risks increase.
Work on standardized reporting of ML energy use and emissions provides concrete guidance for measurement and disclosure, including the need to report energy and emissions across the ML lifecycle [129]. Practical tools such as Green Algorithms operationalize carbon accounting for computation and have become widely used in research contexts [130]. Recent work further argues for lifecycle approaches that include the carbon footprint of data and broader system boundaries, reinforcing why protocol-level agreement matters [131].
A practical, actionable recommendation is to adopt a minimum “AI sustainability reporting bundle” in LES papers and projects, covering computation-related energy, emissions assumptions, hardware and power usage effectiveness (PUE), inference cost per decision, sensitivity to grid carbon intensity, the functional unit (e.g., per control action or per household-day), and a clear system-boundary statement. This ensures that sustainability considerations are made transparent and comparable, allowing researchers and practitioners to evaluate the environmental footprint of AI deployments alongside their technical performance.
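A minimal version of such a reporting bundle can be derived from a handful of explicitly stated assumptions. In the sketch below, all parameter names and default values (e.g., a PUE of 1.2) are illustrative placeholders rather than recommended figures:

```python
def ai_carbon_report(train_kwh, infer_wh_per_decision, decisions_per_day,
                     grid_gco2_per_kwh, pue=1.2, horizon_days=365):
    """Minimal 'AI sustainability reporting bundle': total energy, emissions
    under stated grid intensity and PUE assumptions, and emissions per
    functional unit (here: one control decision)."""
    infer_kwh = infer_wh_per_decision * decisions_per_day * horizon_days / 1000.0
    total_kwh = (train_kwh + infer_kwh) * pue          # data-center overhead via PUE
    kg_co2 = total_kwh * grid_gco2_per_kwh / 1000.0    # stated grid carbon intensity
    return {
        "total_kwh": round(total_kwh, 2),
        "kg_co2": round(kg_co2, 2),
        "gco2_per_decision": round(1000.0 * kg_co2 / (decisions_per_day * horizon_days), 4),
    }
```

Re-running such a calculation across a range of grid intensities directly yields the sensitivity analysis recommended above, and stating the horizon and functional unit makes the system boundary explicit.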

5.7. Synthesis: A Practical Roadmap for Action

Taken together, the above analysis shows that “better AI models” are not, by themselves, a sufficient research goal for local energy system deployment. The research agenda instead converges on a deployment-oriented definition of progress: models must remain robust under distribution shifts and rare events, and decisions must be explainable, transparent, and contestable in community contexts. Outcomes must be demonstrably fair, solutions must be operationally feasible at the edge, claims must be validated through longitudinal real-world evidence, and sustainability impacts must be reported consistently. In this sense, responsible AI for LESs is best framed as an integrated pipeline that links algorithmic innovation to verification, governance, and environmental accountability, enabling both researchers and reviewers to assess maturity beyond accuracy alone.
The research agenda below is intentionally comprehensive because LES AI is a complex, coupled socio-technical system: progress in one area (e.g., control) is often blocked by evidence, governance, data, and deployment constraints. To avoid diluting the message, we highlight three high-impact priorities that unlock most others:
  • deployment-ready evidence and validation (stress testing under shift, constraint-violation reporting, monitoring, and audit hooks);
  • safe and robust autonomy under real constraints (constrained/safe learning, certified fallbacks, sim-to-real validation);
  • legitimacy at scale (fairness, privacy, accountability, and contestability integrated into operational workflows).
The remaining directions are retained as enablers (interoperability, data governance, edge feasibility, and sustainability reporting) that determine whether the above-mentioned priorities can be realized in practice.
This is visible in real demonstrations: the DOE-supported Bronzeville Community Microgrid integrates PV and storage under a dedicated controller/cluster controller and must manage grid-connected/islanded transitions, underscoring why “edge readiness” evidence should include latency, connectivity-loss behavior, and safe-fallback performance [132,133].
Accordingly, Table 4 (below) maps the full agenda to deployment-ready deliverables and the associated minimum reporting requirements to enable reviewers and practitioners to assess maturity beyond accuracy.
Deployment-ready evidence here denotes the minimum auditable proof that an AI method remains acceptable in its intended LES operating context, not only on offline test sets or simulations, but also under realistic shift, constraints, and operational handling. Accordingly, minimum reporting requirements are the smallest set of tests and artifacts that let an independent reader judge maturity beyond accuracy, aligned with AI risk-management and documentation guidance [134].
In the table, “minimum reporting (beyond accuracy)” is intended as a deployment threshold: the smallest evidence package that must be reported to support credible field-readiness claims for the stated function and context. By contrast, best-practice recommendations are stronger extensions that increase confidence, external validity, and scalability but may not be feasible in early-stage studies. For clarity: minimum implies at least one context-relevant stress test with quantified degradation, a basic distributional impact/fairness check where user outcomes differ, demonstrated edge readiness (latency/footprint and connectivity-loss behavior with safe fallback) when real-time operation is claimed, and core sustainability reporting (training/inference energy and carbon assumptions) when AI scale-up is advocated.
For near-term real-world deployment, the most critical metrics are those that directly signal operational safety and controllability:
  • constraint-violation rates (and severity) under nominal and stressed conditions;
  • safe-fallback behavior (trigger conditions, time-to-stabilize, and performance under connectivity/sensor loss);
  • uncertainty quality for risk-sensitive decisions (calibration/coverage).
Secondary but still important near-term indicators include distributional impacts (who benefits/loses), latency/compute footprint for edge feasibility, and audit-trail completeness.
To make the agenda operational, each research direction can be phrased as testable questions with explicit evidence expectations:
  • RP1: Hybrid physics–ML for robust forecasting & control
    RQ1: Under predefined distribution shifts (e.g., seasonal change, PV/EV uptake, tariff changes) and rare-event scenarios, can hybrid models maintain calibrated uncertainty and bounded degradation relative to baselines (reported via OOD performance, calibration, and stress-test results)?
    RQ2: When integrated into control (e.g., MPC/RL with constraints), do hybrid components measurably reduce constraint violations and improve resilience compared with pure ML or pure physics baselines, under the same stress-test suite?
  • RP2: Explainable, auditable algorithms for community contexts
    RQ1: Do contextual explanations (for dispatch/pricing decisions) improve stakeholder comprehension/trust and reduce override/friction compared with non-explanatory interfaces, measured via a pre-registered user study and operational logs?
    RQ2: Can an audit protocol (logs + model cards + decision traceability) support reproducible post hoc reconstruction of key decisions and a functioning redress workflow across at least three distinct LES contexts?
  • RP3: Fairness-aware market and control mechanisms
    RQ1: Can a P2P/flexibility mechanism be designed to balance welfare and equity, with explicitly chosen fairness objectives, and verified on real/representative household data across ≥3 communities by reporting distributional impacts (who benefits/loses) and sensitivity to heterogeneity?
    RQ2: Under realistic participation variability and strategic behavior, do fairness constraints remain satisfied over time (longitudinal stability), and what is the measured “price of fairness” under transparent assumptions?
  • RP4: Lightweight, edge-capable AI
    RQ1: For real-time functions, can edge-deployed models meet latency/energy/footprint budgets and maintain acceptable safety metrics (constraint-violation rate, safe-fallback behavior) under connectivity loss and sensor degradation?
    RQ2: Does a defined fallback controller (rule-based/MPC/safe policy) guarantee bounded behavior during edge failures, with documented trigger conditions, time-to-stabilize, and recovery performance?
  • RP5: Longitudinal field studies & living labs
    RQ1: Over multi-month deployments, what levels of adoption/retention and behavioral change are achieved, and which governance frictions (consent, disputes, opt-outs) dominate in practice across multiple sites?
    RQ2: Which elements transfer across sites (models, interfaces, governance processes), and what are the quantified limits of external validity when tariffs, asset mixes, and community rules differ?
  • RP6: Standardized LCA and AI sustainability reporting
    RQ1: Can a standardized reporting template produce comparable sustainability claims across studies by explicitly stating functional units, boundaries, carbon-intensity assumptions, and uncertainty for training and inference?
    RQ2: At the system level, do reported net impacts show that operational benefits outweigh AI lifecycle burdens under transparent assumptions (including update frequency and deployment scale), enabling like-for-like comparison across deployments?
A system-level view of how the priorities listed in Table 4 connect is provided in Figure 5, which organizes the agenda into enabling foundations (privacy and data governance, interoperability, human oversight and audit logs, resource-aware computation), six core research streams, and downstream validation practices that support scalable deployment.
As summarized in Table 4 and Figure 5, the most transferable contributions to deployment-ready LES AI are evaluation protocols, assurance artifacts, and reporting templates that make safety, fairness, robustness, and sustainability claims comparable across sites, rather than isolated model improvements.
Being a Perspective, this manuscript provides a deployment-oriented synthesis rather than an exhaustive systematic review, and the underlying evidence base for LES AI remains heterogeneous and often simulation-centric. Accordingly, the key empirical gap is multi-site, longitudinal field validation with replication packages and auditable reporting, especially for robustness under shift and rare events, constraint-violation rates and safe-fallback behavior, and distributional impacts in community settings. In this sense, the minimum reporting items in Table 4 should be interpreted as maturity gates for field-readiness claims, aligned with established ML production-readiness and AI risk-management guidance, rather than as proof that any single method generalizes across LES archetypes [134,135].
Although derived from LES realities (multi-actor governance, local markets, citizen legitimacy), the proposed principles generalize to other distributed cyber-physical energy systems where AI mediates decisions under uncertainty and hard constraints, e.g., virtual power plants/aggregator-controlled DER fleets [136], building clusters and district heating networks [137], and EV charging infrastructures [138]. Across these settings, recurring requirements include robustness under non-stationarity, constraint-aware safety, auditability/traceability, and cybersecurity/privacy-by-design, particularly salient for DER-rich systems with expanding attack surfaces [139]. Finally, sector-agnostic AI risk-management guidance reinforces using such principles as a context-weighted checklist (varying emphasis by application) rather than a one-size-fits-all recipe [135].

6. Discussion

This Perspective argues that AI is becoming structural for LESs: the complexity created by DER diversity, electrified loads, and active prosumers exceeds what rule-based or purely model-driven approaches can coordinate reliably at scale. However, adoption is limited by system-level credibility rather than model accuracy alone. Progress should therefore be judged by measurable benefits delivered under transparent constraints and accountable processes.
AI in LESs requires multi-party, collaborative governance because decisions must simultaneously account for grid-operational constraints set by the distribution system operator, the operation of the energy-management platform provided by service companies, and the need for community legitimacy involving both prosumers and local authorities. A practical approach is a polycentric or self-governance model: the community defines the objectives and acceptable trade-offs (for example, limits on bill impacts or opt-out rules), the distribution system operator specifies the operational constraints and safety limits, and an independent oversight body (or periodic third-party review) verifies documentation, monitoring practices, and redress pathways. This aligns with risk-based AI governance approaches (e.g., EU AI Act) and established AI risk-management guidance (e.g., NIST AI RMF), while recognizing that energy communities operate through multi-level governance arrangements [101,140,141].
Policy suggestions include:
  • standard-setting for interoperability and auditability (minimum logging/versioning, documentation templates, open interfaces);
  • subsidy/innovation funding that preferentially supports open standards, living labs, and edge-ready secure deployments;
  • minimum audit requirements for AI-enabled flexibility/market platforms (monitoring for drift and distributional impacts, periodic compliance checks, and documented redress);
  • regulatory sandboxes or pilots that require public reporting of “deployment-ready evidence” rather than accuracy-only claims [140].
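The minimum logging/versioning requirement above can be illustrated with a hypothetical audit-record helper. The field set shown (model version, tamper-evident input digest, decision, safe-fallback flag) is an assumed minimum for traceability and redress, not a prescribed schema, and all identifiers in the example are invented.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, input_payload: dict,
                 decision: dict, fallback_active: bool) -> dict:
    """One auditable log entry for an AI-enabled flexibility platform:
    version identity, a SHA-256 digest of the canonicalized inputs,
    the decision taken, and whether the safe fallback was engaged."""
    canonical = json.dumps(input_payload, sort_keys=True).encode()
    return {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(canonical).hexdigest(),
        "decision": decision,
        "safe_fallback_active": fallback_active,
    }

entry = audit_record(
    model_version="dispatch-rl-v2.3.1",          # hypothetical version tag
    input_payload={"feeder": "F12", "forecast_kw": [310.5, 298.0]},
    decision={"setpoint_kw": 285.0, "constraint_checked": True},
    fallback_active=False,
)
print(json.dumps(entry, indent=2))
```

Hashing the canonicalized inputs rather than storing them directly keeps the log tamper-evident while limiting the personal data it retains, which matters for the privacy-by-design requirements discussed earlier.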
Figure 6 provides a one-page synthesis of the argument of the paper, linking LES complexity drivers to AI roles, deployment challenges, principles-as-constraints, the proposed research agenda, and the resulting deployment-ready evidence needed to achieve measurable energy outcomes.
A central shift is therefore needed from model-centric evaluation to deployment-ready evidence: robustness under drift and rare events, uncertainty calibration, constraint-violation rates, safe-fallback behavior, distributional impacts (who benefits and who loses), and auditable documentation that supports traceability and contestability. Social legitimacy is not automatic: even technically efficient mechanisms can lose acceptance if benefits concentrate among asset-rich or highly flexible households, implying that fairness objectives and redress mechanisms must be explicit, monitored, and revisited as communities evolve. Scaling also depends on interoperability and comparable evidence across sites: open standards, stable interfaces, and reproducible protocols that make local assumptions (tariffs, governance, asset mix) visible and reduce silent failures during transfer. Finally, credible sustainability claims require resource-aware AI and transparent reporting of computing/energy use, functional units, boundaries, and carbon-intensity assumptions, so that net impacts can be compared across deployments.
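Several of these evidence items reduce to simple, reportable statistics rather than new models. A minimal sketch with illustrative data: a constraint-violation rate over dispatch intervals, empirical prediction-interval coverage as a calibration check, and Jain's index [96] as one possible distributional-impact metric (the numeric inputs are invented).

```python
from typing import Sequence

def constraint_violation_rate(values: Sequence[float], limit: float) -> float:
    """Fraction of intervals in which a hard operating limit was exceeded."""
    return sum(v > limit for v in values) / len(values)

def interval_coverage(y: Sequence[float], lo: Sequence[float],
                      hi: Sequence[float]) -> float:
    """Empirical coverage of prediction intervals; compare against the
    nominal level (e.g., 0.9) to detect miscalibrated uncertainty."""
    return sum(l <= t <= h for t, l, h in zip(y, lo, hi)) / len(y)

def jain_fairness(benefits: Sequence[float]) -> float:
    """Jain's fairness index over per-household benefits; 1.0 = perfectly even."""
    n = len(benefits)
    return sum(benefits) ** 2 / (n * sum(b * b for b in benefits))

flows = [95.0, 102.0, 98.0, 110.0]              # feeder loading in kW, limit 100 kW
print(constraint_violation_rate(flows, 100.0))   # 0.5
print(interval_coverage([10, 12, 15], [9, 11, 16], [11, 13, 18]))  # ~0.667
print(jain_fairness([12.0, 11.5, 3.0]))          # well below 1: benefits skewed
```

Reporting such statistics alongside accuracy metrics is what turns a model evaluation into the deployment-ready evidence argued for above.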

7. Conclusions

AI can significantly improve the operation and governance of LESs, but responsible scaling depends less on proposing ever more complex models and more on deploying intelligence that is robust under drift and rare events, explainable and contestable in community contexts, fair in the distribution of costs and benefits, privacy-preserving by design, interoperable across vendors and sites, auditable over the full lifecycle, and environmentally proportionate in its computational footprint.
This paper has outlined how AI is already reshaping LESs through forecasting and situational awareness, optimization and real-time control of distributed resources, and community-facing markets and engagement, while also highlighting the technical and socio-technical barriers that can undermine trust, performance, and legitimacy. It then translated these insights into principles-as-constraints and a deployment-oriented agenda that emphasizes hybrid physics–ML approaches for robustness, contextual explainability supported by audit-ready pipelines, fairness-aware market and control mechanisms validated with distributional reporting and mixed methods, edge-capable implementations with safe fallbacks, longitudinal evidence from living labs, and standardized sustainability and lifecycle reporting for AI components. Ultimately, progress in AI for LESs should be judged by deployment-ready evidence and transparent documentation demonstrating measurable energy benefits under verifiable safeguards, enabling local energy transitions that are both scalable and socially legitimate.

Author Contributions

Conceptualization, E.S.L., R.A.M., and L.S.; methodology, L.S.; investigation, F.-A.C. and S.F.; writing—original draft preparation, L.S.; writing—review and editing, E.S.L. and L.S.; visualization, F.-A.C. and S.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Interreg Danube Region project “Circular DigiBuild–Boosting the uptake of emerging technologies in circular economy implementation in construction and buildings industry in Danube region to sustainably harness the twin transition for greener future”, Grant Agreement DRP0200309.

Data Availability Statement

Not applicable.

Acknowledgments

This paper benefited from the linguistic and stylistic enhancements provided by Microsoft Copilot, ensuring clarity, grammatical accuracy, and consistency.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
API: Application programming interface
CIM: Common information model
DER: Distributed energy resource
DL: Deep learning
EV: Electric vehicle
FL: Federated learning
LCA: Life cycle assessment
LES: Local energy system
LSTM: Long short-term memory
ML: Machine learning
MPC: Model predictive control
OOD: Out-of-distribution
PINN: Physics-informed neural network
PUE: Power usage effectiveness
PV: Photovoltaics
RL: Reinforcement learning
XAI: Explainable artificial intelligence

References

  1. Neves, C.; Oliveira, T.; Sarker, S. Citizens’ participation in local energy communities: The role of technology as a stimulus. Eur. J. Inf. Syst. 2025, 34, 122–145. [Google Scholar] [CrossRef]
  2. Adham, M.; Keene, S.; Bass, R.B. Distributed energy resources: A systematic literature review. Energy Rep. 2025, 13, 1980–1999. [Google Scholar] [CrossRef]
  3. Moustakas, K.; Loizidou, M.; Rehan, M.; Nizami, A. A review of recent developments in renewable and sustainable energy systems: Key challenges and future perspective. Renew. Sustain. Energy Rev. 2020, 119, 109418. [Google Scholar] [CrossRef]
  4. Chen, D.; Lin, X.; Qiao, Y. Perspectives for artificial intelligence in sustainable energy systems. Energy 2025, 318, 134711. [Google Scholar] [CrossRef]
  5. Safari, A.; Daneshvar, M.; Anvari-Moghaddam, A. Energy intelligence: A systematic review of artificial intelligence for energy management. Appl. Sci. 2024, 14, 11112. [Google Scholar] [CrossRef]
  6. Barabino, E.; Fioriti, D.; Guerrazzi, E.; Mariuzzo, I.; Poli, D.; Raugi, M.; Razaei, E.; Schito, E.; Thomopulos, D. Energy Communities: A review on trends, energy system modelling, business models, and optimisation objectives. Sustain. Energy Grids Netw. 2023, 36, 101187. [Google Scholar] [CrossRef]
  7. Kachirayil, F.; Weinand, J.M.; Scheller, F.; McKenna, R. Reviewing local and integrated energy system models: Insights into flexibility and robustness challenges. Appl. Energy 2022, 324, 119666. [Google Scholar] [CrossRef]
  8. Cuisinier, E.; Bourasseau, C.; Ruby, A.; Lemaire, P.; Penz, B. Techno-economic planning of local energy systems through optimization models: A survey of current methods. Int. J. Energy Res. 2021, 45, 4888–4931. [Google Scholar] [CrossRef]
  9. Szabó, G.S.; Coteț, F.-A.; Ferenci, S.; Szabó, L. Advances in materials and manufacturing for scalable and decentralized green hydrogen production systems. J. Manuf. Mater. Process. 2026, 10, 28. [Google Scholar] [CrossRef]
  10. Feng, J.; Ren, Z.; Li, C.; Li, W. A Benders-combined safe reinforcement learning framework for risk-averse dispatch considering frequency security constraints. IEEE Trans. Circuits Syst. II Express Briefs 2025, 72, 1063–1067. [Google Scholar] [CrossRef]
  11. Jia, X.; Xia, Y.; Yan, Z.; Gao, H.; Qiu, D.; Guerrero, J.M.; Li, Z. Coordinated operation of multi-energy microgrids considering green hydrogen and congestion management via a safe policy learning approach. Appl. Energy 2025, 401, 126611. [Google Scholar] [CrossRef]
  12. Szabó, G.-S.; Szabó, R.; Szabó, L. A Review of the mitigating methods against the energy conversion decrease in solar panels. Energies 2022, 15, 6558. [Google Scholar] [CrossRef]
  13. Kong, W.; Dong, Z.Y.; Jia, Y.; Hill, D.J.; Xu, Y.; Zhang, Y. Short-term residential load forecasting based on LSTM recurrent neural network. IEEE Trans. Smart Grid 2017, 10, 841–851. [Google Scholar] [CrossRef]
  14. Khajeh, H.; Laaksonen, H. Applications of probabilistic forecasting in smart grids: A review. Appl. Sci. 2022, 12, 1823. [Google Scholar] [CrossRef]
  15. Rodrigues, F.; Cardeira, C.; Calado, J.M.; Melicio, R. Short-term load forecasting of electricity demand for the residential sector based on modelling techniques: A systematic review. Energies 2023, 16, 4098. [Google Scholar] [CrossRef]
  16. Parsa, S.M. Physics-informed machine learning meets renewable energy systems: A review of advances, challenges, guidelines, and future outlooks. Appl. Energy 2025, 402, 126925. [Google Scholar] [CrossRef]
  17. Moazzen, F.; Hossain, M.; Li, L.; Mohammadi-ivatloo, B. Energy management under uncertainty for hybrid microgrids: From data to decision-making. IET Renew. Power Gener. 2025, 19, e70174. [Google Scholar] [CrossRef]
  18. Parejo, A.; García, S.; Personal, E.; Guerrero, J.I.; Carrasco, A.; León, C. Probabilistic forecasting framework oriented to distribution networks and microgrids. IEEE Trans. Autom. Sci. Eng. 2024, 22, 1183–1195. [Google Scholar] [CrossRef]
  19. Wang, Y.; Zhang, N.; Chen, X. A short-term residential load forecasting model based on LSTM recurrent neural network considering weather features. Energies 2021, 14, 2737. [Google Scholar] [CrossRef]
  20. Islam, B.u.; Ahmed, S.F. Short-term electrical load demand forecasting based on LSTM and RNN deep neural networks. Math. Probl. Eng. 2022, 2022, 2316474. [Google Scholar] [CrossRef]
  21. Xue, H.; Ma, J.; Zhang, J.; Jin, P.; Wu, J.; Du, F. Power forecasting for photovoltaic microgrid based on multiscale CNN-LSTM network models. Energies 2024, 17, 3877. [Google Scholar] [CrossRef]
  22. Xie, X.; Ding, Y.; Sun, Y.; Zhang, Z.; Fan, J. A novel time-series probabilistic forecasting method for multi-energy loads. Energy 2024, 306, 132456. [Google Scholar] [CrossRef]
  23. Bazionis, I.K.; Georgilakis, P.S. Review of deterministic and probabilistic wind power forecasting: Models, methods, and future research. Electricity 2021, 2, 13–47. [Google Scholar] [CrossRef]
  24. Hong, T.; Fan, S. Probabilistic electric load forecasting: A tutorial review. Int. J. Forecast. 2016, 32, 914–938. [Google Scholar] [CrossRef]
  25. Almasoudi, F.M. Enhancing power grid resilience through real-time fault detection and remediation using advanced hybrid machine learning models. Sustainability 2023, 15, 8348. [Google Scholar] [CrossRef]
  26. Zaben, M.M.; Worku, M.Y.; Hassan, M.A.; Abido, M.A. Machine learning methods for fault diagnosis in ac microgrids: A systematic review. IEEE Access 2024, 12, 20260–20298. [Google Scholar] [CrossRef]
  27. Zulu, M.L.T.; Carpanen, R.P.; Tiako, R. A comprehensive review: Study of artificial intelligence optimization technique applications in a hybrid microgrid at times of fault outbreaks. Energies 2023, 16, 1786. [Google Scholar] [CrossRef]
  28. Farea, A.; Yli-Harja, O.; Emmert-Streib, F. Understanding physics-informed neural networks: Techniques, applications, trends, and challenges. AI 2024, 5, 1534–1557. [Google Scholar] [CrossRef]
  29. Wang, T.; Zhang, C.; Hao, Z.; Monti, A.; Ponci, F. Data-driven fault detection and isolation in DC microgrids without prior fault data: A transfer learning approach. Appl. Energy 2023, 336, 120708. [Google Scholar] [CrossRef]
  30. Trivedi, R.; Patra, S.; Khadem, S. A data-driven short-term PV generation and load forecasting approach for microgrid applications. IEEE J. Emerg. Sel. Top. Ind. Electron. 2022, 3, 911–919. [Google Scholar] [CrossRef]
  31. Liu, G.; Ma, Y.; Liu, X.; Ma, R. CLEAR-E: Concept-aware lightweight energy adaptation for smart grid load forecasting. Neurocomputing 2025, 658, 131667. [Google Scholar] [CrossRef]
  32. Qin, D.; Wu, X.; Sun, D.; Liang, Z.; Zhang, N. Load forecasting under distribution shift: An online quantile ensembling approach. Appl. Energy 2025, 401, 126812. [Google Scholar] [CrossRef]
  33. Bahman, S.; Zareipour, H. Long-term multi-resolution probabilistic load forecasting using temporal hierarchies. Energies 2025, 18, 2908. [Google Scholar] [CrossRef]
  34. Rojek, I.; Mikołajewski, D.; Prokopowicz, P.; Piechowiak, M. AI-Based modeling and optimization of AC/DC power systems. Energies 2025, 18, 5660. [Google Scholar] [CrossRef]
  35. Dev, A.; Kumar, V.; Khare, G.; Giri, J.; Amir, M.; Ahmad, F.; Jain, P.; Anand, S. Advancements and challenges in microgrid technology: A comprehensive review of control strategies, emerging technologies, and future directions. Energy Sci. Eng. 2025, 13, 2112–2134. [Google Scholar] [CrossRef]
  36. Swain, A.; Salkuti, S.R.; Swain, K. An optimized and decentralized energy provision system for smart cities. Energies 2021, 14, 1451. [Google Scholar] [CrossRef]
  37. Akbulut, O.; Cavus, M.; Cengiz, M.; Allahham, A.; Giaouris, D.; Forshaw, M. Hybrid intelligent control system for adaptive microgrid optimization: Integration of rule-based control and deep learning techniques. Energies 2024, 17, 2260. [Google Scholar] [CrossRef]
  38. Gros, I.-C.; Lü, X.; Oprea, C.; Lu, T.; Pintilie, L. Artificial intelligence (AI)-based optimization of power electronic converters for improved power system stability and performance. In Proceedings of the 14th International Symposium on Diagnostics for Electrical Machines, Power Electronics and Drives (SDEMPED ‘2023), Chania, Greece, 28–31 August 2023; pp. 204–210. [Google Scholar] [CrossRef]
  39. Gautam, M. Deep reinforcement learning for resilient power and energy systems: Progress, prospects, and future avenues. Electricity 2023, 4, 336–380. [Google Scholar] [CrossRef]
  40. Lu, Y.; Xiang, Y.; Huang, Y.; Yu, B.; Weng, L.; Liu, J. Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load. Energy 2023, 271, 127087. [Google Scholar] [CrossRef]
  41. Guo, G.; Gong, Y. Multi-microgrid energy management strategy based on multi-agent deep reinforcement learning with prioritized experience replay. Appl. Sci. 2023, 13, 2865. [Google Scholar] [CrossRef]
  42. Kertész, N.; Szabó, L. Advances and future trends in battery management systems. Eng. Proc. 2024, 79, 66. [Google Scholar] [CrossRef]
  43. Khan, M.R.; Haider, Z.M.; Malik, F.H.; Almasoudi, F.M.; Alatawi, K.S.S.; Bhutta, M.S. A comprehensive review of microgrid energy management strategies considering electric vehicles, energy storage systems, and AI techniques. Processes 2024, 12, 270. [Google Scholar] [CrossRef]
  44. Li, S.; Cao, D.; Hu, W.; Huang, Q.; Chen, Z.; Blaabjerg, F. Multi-energy management of interconnected multi-microgrid system using multi-agent deep reinforcement learning. J. Mod. Power Syst. Clean Energy 2023, 11, 1606–1617. [Google Scholar] [CrossRef]
  45. Hu, J.; Shan, Y.; Guerrero, J.M.; Ioinovici, A.; Chan, K.W.; Rodriguez, J. Model predictive control of microgrids—An overview. Renew. Sustain. Energy Rev. 2021, 136, 110422. [Google Scholar] [CrossRef]
  46. Stavrev, S.; Ginchev, D. Reinforcement learning techniques in optimizing energy systems. Electronics 2024, 13, 1459. [Google Scholar] [CrossRef]
  47. Prudencio, R.F.; Maximo, M.R.; Colombini, E.L. A survey on offline reinforcement learning: Taxonomy, review, and open problems. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 10237–10257. [Google Scholar] [CrossRef]
  48. Bui, V.-H.; Mohammadi, S.; Das, S.; Hussain, A.; Hollweg, G.V.; Su, W. A critical review of safe reinforcement learning strategies in power and energy systems. Eng. Appl. Artif. Intell. 2025, 143, 110091. [Google Scholar] [CrossRef]
  49. Andriopoulos, N.; Plakas, K.; Birbas, A.; Papalexopoulos, A. Design of a prosumer-centric local energy market: An approach based on prospect theory. IEEE Access 2024, 12, 32014–32032. [Google Scholar] [CrossRef]
  50. Parag, Y.; Sovacool, B.K. Electricity market design for the prosumer era. Nat. Energy 2016, 1, 16032. [Google Scholar] [CrossRef]
  51. Arévalo, P.; Ochoa-Correa, D.; Villa-Ávila, E.; Iñiguez-Morán, V.; Astudillo-Salinas, P. Systematic review of hierarchical and multi-agent optimization strategies for P2P energy management and electric machines in microgrids. Appl. Sci. 2025, 15, 4817. [Google Scholar] [CrossRef]
  52. Islam, S.N. A review of peer-to-peer energy trading markets: Enabling models and technologies. Energies 2024, 17, 1702. [Google Scholar] [CrossRef]
  53. Deconinck, G. Decentralised control and peer-to-peer cooperation in smart energy systems. In Shaping an Inclusive Energy Transition; Weijnen, M.P.C., Lukszo, Z., Farahani, S., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 121–138. [Google Scholar]
  54. Binyamin, S.S.; Slama, S.A.B.; Zafar, B. Artificial intelligence-powered energy community management for developing renewable energy systems in smart homes. Energy Strategy Rev. 2024, 51, 101288. [Google Scholar] [CrossRef]
  55. Domènech Monfort, M.; De Jesús, C.; Wanapinit, N.; Hartmann, N. A review of peer-to-peer energy trading with standard terminology proposal and a techno-economic characterisation matrix. Energies 2022, 15, 9070. [Google Scholar] [CrossRef]
  56. Tushar, W.; Nizami, S.; Azim, M.I.; Yuen, C.; Smith, D.B.; Saha, T.; Poor, H.V. Peer-to-peer energy sharing: A comprehensive review. Found. Trends Electr. Energy Syst. 2023, 6, 1–82. [Google Scholar] [CrossRef]
  57. Ahmed, S.; Ali, A.; D’angola, A. A review of renewable energy communities: Concepts, scope, progress, challenges, and recommendations. Sustainability 2024, 16, 1749. [Google Scholar] [CrossRef]
  58. Machlev, R.; Heistrene, L.; Perl, M.; Levy, K.Y.; Belikov, J.; Mannor, S.; Levron, Y. Explainable Artificial Intelligence (XAI) techniques for energy and power systems: Review, challenges and opportunities. Energy AI 2022, 9, 100169. [Google Scholar] [CrossRef]
  59. Talaat, F.M.; Kabeel, A.; Shaban, W.M. Towards sustainable energy management: Leveraging explainable artificial intelligence for transparent and efficient decision-making. Sustain. Energy Technol. Assess. 2025, 78, 104348. [Google Scholar] [CrossRef]
  60. Roberts, J. What are energy communities under the EU’s clean energy package? In Renewable Energy Communities and the Low Carbon Energy Transition in Europe; Coenen, F.H.J.M., Hoppe, T., Eds.; Springer: Cham, Switzerland, 2022; pp. 23–48. [Google Scholar]
  61. Miłek, D.; Nowak, P.; Latosińska, J. The development of renewable energy sources in the European Union in the light of the European Green Deal. Energies 2022, 15, 5576. [Google Scholar] [CrossRef]
  62. Hrgović, I.; Pavić, I. Reward design for intelligent deep reinforcement learning based power flow control using topology optimization. Sustain. Energy Grids Netw. 2025, 41, 101580. [Google Scholar] [CrossRef]
  63. Rebenaque, O.; Schmitt, C.; Schumann, K.; Dronne, T.; Roques, F. Success of local flexibility market implementation: A review of current projects. Util. Policy 2023, 80, 101491. [Google Scholar] [CrossRef]
  64. Martin, P.; Christine, B.; Gert, B.; Marius, B. Strategic behavior in market-based redispatch: International experience. Electr. J. 2022, 35, 107095. [Google Scholar] [CrossRef]
  65. Sim, J.; Lee, D.-J.; Yoon, K. Incentive-compatible double auction for Peer-to-Peer energy trading considering heterogeneous power losses and transaction costs. Appl. Energy 2025, 377, 124543. [Google Scholar] [CrossRef]
  66. Khorasany, M.; Razzaghi, R.; Gazafroudi, A.S. Two-stage mechanism design for energy trading of strategic agents in energy communities. Appl. Energy 2021, 295, 117036. [Google Scholar] [CrossRef]
  67. Heinrich, C.; Ziras, C.; Jensen, T.V.; Bindner, H.W.; Kazempour, J. A local flexibility market mechanism with capacity limitation services. Energy Policy 2021, 156, 112335. [Google Scholar] [CrossRef]
  68. Project LEO Final Report: A Digest of Key Learnings. 2023. Available online: https://project-leo.co.uk/wp-content/uploads/2023/02/LEO-Final-Report-v3a_Web_lr.pdf (accessed on 11 December 2025).
  69. The Future of Flexibility. How Local Energy Markets Can Support the UK’s Net Zero Energy Challenge. 2019. Available online: https://www.centrica.com/media/ueujub5s/the-future-of-flexibility-centrica-cornwall-lem-report.pdf (accessed on 12 December 2025).
  70. Khomami, H.P.; Fonteijn, R.; Geelen, D. Flexibility market design for congestion management in smart distribution grids: The Dutch demonstration of the InterFlex project. In Proceedings of the IEEE PES Innovative Smart Grid Technologies Europe (ISGT-Europe ‘2020), The Hague, The Netherlands, 26–28 October 2020; pp. 1191–1195. [Google Scholar] [CrossRef]
  71. Flexible Energy Production, Demand and Storage-Based Virtual Power Plants for Electricity Markets and Resilient DSO Operation. Available online: https://fever-h2020.eu/ (accessed on 9 January 2026).
  72. Trondheim Flexibility Market Deployment Report. 2023. Available online: https://cityxchange.eu/knowledge-base/d5-6-trondheim-flexibility-market-deployment-report/ (accessed on 12 December 2025).
  73. Couraud, B.; Andoni, M.; Robu, V.; Norbu, S.; Chen, S.; Flynn, D. Responsive FLEXibility: A smart local energy system. Renew. Sustain. Energy Rev. 2023, 182, 113343. [Google Scholar] [CrossRef]
  74. Michellod, J.L.; Kuch, D.; Winzer, C.; Patel, M.K.; Yilmaz, S. Building social license for automated demand-side management—Case study research in the Swiss residential sector. Energies 2022, 15, 7759. [Google Scholar] [CrossRef]
  75. Limmer, S. Empirical study of stability and fairness of schemes for benefit distribution in local energy communities. Energies 2023, 16, 1756. [Google Scholar] [CrossRef]
  76. Gasca, M.-V.; Rigo-Mariani, R.; Debusschere, V.; Sidqi, Y. Fairness in energy communities: Centralized and decentralized frameworks. Renew. Sustain. Energy Rev. 2025, 208, 115054. [Google Scholar] [CrossRef]
  77. Aysolmaz, B.; Müller, R.; Meacham, D. The public perceptions of algorithmic decision-making systems: Results from a large-scale survey. Telemat. Inform. 2023, 79, 101954. [Google Scholar] [CrossRef]
  78. El Ouadrhiri, A.; Abdelhadi, A. Differential privacy for deep and federated learning: A survey. IEEE Access 2022, 10, 22359–22380. [Google Scholar] [CrossRef]
  79. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS ‘2017), Dallas, TX, USA, 30 October 2017; pp. 1175–1191. [Google Scholar] [CrossRef]
  80. Zapechnikov, S. Secure multi-party computations for privacy-preserving machine learning. Procedia Comput. Sci. 2022, 213, 523–527. [Google Scholar] [CrossRef]
  81. Kumar, M.; Zhang, X.; Liu, L.; Wang, Y.; Shi, W. Energy-efficient machine learning on the edges. In Proceedings of the International Parallel and Distributed Processing Symposium Workshops (IPDPSW ‘2020), New Orleans, LA, USA, 18–22 May 2020; pp. 912–921. [Google Scholar] [CrossRef]
  82. Alzu’bi, S.; Kanan, T.; Elbes, M.; Kanaan, G.; Trrad, I. Energy-efficient edge deployment of generative AI models using federated learning. Clust. Comput. 2025, 28, 315. [Google Scholar] [CrossRef]
  83. Volpato, G.; Carraro, G.; Dal Cin, E.; Rech, S. On the different fair allocations of economic benefits for energy communities. Energies 2024, 17, 4788. [Google Scholar] [CrossRef]
  84. Chauhan, D.; Bahad, P.; Jain, J.K. Sustainable AI: Environmental implications, challenges and opportunities. In Explainable AI (XAI) for Sustainable Development: Trends and Applications; Lakshmi, D., Tiwari, R.S., Dhanaraj, R.K., Kadry, S., Eds.; CRC Press: Boca Raton, FL, USA, 2024; pp. 1–16. [Google Scholar]
  85. Soares, J.; Lezama, F.; Faia, R.; Limmer, S.; Dietrich, M.; Rodemann, T.; Ramos, S.; Vale, Z. Review on fairness in local energy systems. Appl. Energy 2024, 374, 123933. [Google Scholar] [CrossRef]
  86. Fernández, J.D.; Menci, S.P.; Lee, C.M.; Rieger, A.; Fridgen, G. Privacy-preserving federated learning for residential short-term load forecasting. Appl. Energy 2022, 326, 119915. [Google Scholar] [CrossRef]
  87. Feretzakis, G.; Papaspyridis, K.; Gkoulalas-Divanis, A.; Verykios, V.S. Privacy-preserving techniques in generative AI and large language models: A narrative review. Information 2024, 15, 697. [Google Scholar] [CrossRef]
  88. Zhang, Z.; Rath, S.; Xu, J.; Xiao, T. Federated learning for smart grid: A survey on applications and potential vulnerabilities. ACM Trans. Cyber-Phys. Syst. 2024. accepted. [Google Scholar] [CrossRef]
  89. Hao, J.; Yang, Y.; Xu, C.; Du, X. A comprehensive review of planning, modeling, optimization, and control of distributed energy systems. Carbon Neutral. 2022, 1, 28. [Google Scholar] [CrossRef]
  90. Song, L.-K.; Tao, F.; Peng, G.-Z. Mixed loss-guided modular regression for dependent system reliability. Reliab. Eng. Syst. Saf. 2025, 267, 111898. [Google Scholar] [CrossRef]
  91. Measuring What Matters: How to Assess AI’s Environmental Impact; International Telecommunication Union (ITU): Geneva, Switzerland, 2025.
  92. The Global E-Waste Monitor 2024; International Telecommunication Union (ITU): Geneva, Switzerland, 2024.
  93. Zhan, Z.; Zhong, C.; Zheng, J.; Zhong, W. Human-AI co-innovation: Navigating the innovative problem-solving landscape with the process model and technology empowerment. In Proceedings of the 7th International Conference on Technology in Education (ICTE ‘2024), Hradec Kralove, Czech Republic, 2–5 December 2024; pp. 15–37. [Google Scholar] [CrossRef]
  94. Xu, W.; Dainoff, M.J.; Ge, L.; Gao, Z. Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. Int. J. Hum.-Comput. Interact. 2023, 39, 494–518. [Google Scholar] [CrossRef]
  95. Atkinson, A.B. On the measurement of inequality. J. Econ. Theory 1970, 2, 244–263. [Google Scholar] [CrossRef]
  96. Jain, R.K.; Chiu, D.-M.W.; Hawe, W.R. A Quantitative Measure of Fairness and Discrimination; Digital Equipment Corporation, Eastern Research Laboratory: Marlborough, MA, USA, 1984. [Google Scholar]
  97. Wang, Y.; Zhang, C.; Sun, J.; Zhao, Y.; Lu, Y. Demand side response market clearing strategy based on virtual power plants. In Proceedings of the 3rd International Conference on Electrical Engineering and Control Science (IC2ECS ‘2023), Hangzhou, China, 29–31 December 2023; pp. 726–731. [Google Scholar] [CrossRef]
  98. Xinying Chen, V.; Hooker, J.N. A guide to formulating fairness in an optimization model. Ann. Oper. Res. 2023, 326, 581–619. [Google Scholar] [CrossRef]
  99. Ali, M.; Kumar, A.; Choi, B.J. Privacy preserving federated learning for energy disaggregation of smart homes. IET Cyber-Phys. Syst. Theory Appl. 2025, 10, e70013. [Google Scholar] [CrossRef]
  100. Schneider, I.; Xu, H.; Benecke, S.; Patterson, D.; Huang, K.; Ranganathan, P.; Elsworth, C. An introduction to life-cycle emissions of AI hardware. IEEE Micro 2025, 45, 9–19. [Google Scholar] [CrossRef]
  101. NIST AI RMF Playbook; National Institute of Standards and Technology (NIST): Gaithersburg, MD, USA, 2025.
  102. Lee, B.C.; Brooks, D.; van Benthem, A.; Elgamal, M.; Gupta, U.; Hills, G.; Liu, V.; Phan, L.T.X.; Pierce, B.; Stewart, C. A view of the sustainable computing landscape. Patterns 2025, 6, 101296. [Google Scholar] [CrossRef]
  103. Khan, S.; Naz, N.S.; Mazhar, T.; Tariq, M.U.; Shahzad, T.; Guizani, S.; Hamam, H. Green AI techniques for reducing energy consumption in AI systems. Array 2025, 29, 100652. [Google Scholar] [CrossRef]
  104. Dalhammar, C.; Richter, J.; Montenegro, P. Drivers and barriers for “circular” consumer electronics in the European Union. In Proceedings of the Electronics Goes Green 2024 + (EGG) Conference, Berlin, Germany, 18–20 June 2024. [Google Scholar] [CrossRef]
  105. Lannelongue, L.; Grealey, J.; Bateman, A.; Inouye, M. Ten simple rules to make your computing more environmentally sustainable. PLoS Comput. Biol. 2021, 17, e1009324. [Google Scholar] [CrossRef] [PubMed]
  106. Adhikari, N.; Li, H.; Gopalakrishnan, B. A Bibliometric and Systematic Review of Carbon Footprint Tracking in Cross-Sector Industries: Emerging Tools and Technologies. Sustainability 2025, 17, 4205. [Google Scholar] [CrossRef]
  107. Kim, H.J.; Jeong, C.M.; Sohn, J.-M.; Joo, J.-Y.; Donde, V.; Ko, Y.; Yoon, Y.T. A comprehensive review of practical issues for interoperability using the common information model in smart grids. Energies 2020, 13, 1435. [Google Scholar] [CrossRef]
  108. Shahid, K.; Nainar, K.; Olsen, R.L.; Iov, F.; Lyhne, M.; Morgante, G. On the use of common information model for smart grid applications—A conceptual approach. IEEE Trans. Smart Grid 2021, 12, 5060–5072. [Google Scholar] [CrossRef]
  109. Patsonakis, C.; Bintoudi, A.D.; Kostopoulos, K.; Koskinas, I.; Tsolakis, A.C.; Ioannidis, D.; Tzovaras, D. Optimal, dynamic and reliable demand-response via OpenADR-compliant multi-agent virtual nodes: Design, implementation & evaluation. J. Clean. Prod. 2021, 314, 127844. [Google Scholar] [CrossRef]
  110. Nordman, B.; Parker, L.; Prakash, A.K.; Piette, M.A. Transforming Demand Response Using Open ADR 3.0; Lawrence Berkeley National Laboratory: Berkeley, CA, USA, 2024. [Google Scholar]
  111. Oviedo, J.; Rodriguez, M.; Trenta, A.; Cannas, D.; Natale, D.; Piattini, M. ISO/IEC quality standards for AI engineering. Comput. Sci. Rev. 2024, 54, 100681. [Google Scholar] [CrossRef]
  112. Morstyn, T.; Collett, K.A.; Vijay, A.; Deakin, M.; Wheeler, S.; Bhagavathy, S.M.; Fele, F.; McCulloch, M.D. OPEN: An open-source platform for developing smart local energy system applications. Appl. Energy 2020, 275, 115397. [Google Scholar] [CrossRef]
  113. Crisan, A.; Drouhard, M.; Vig, J.; Rajani, N. Interactive model cards: A human-centered approach to model documentation. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ‘2022), Seoul, Republic of Korea, 21–24 June 2022; pp. 427–439. [Google Scholar] [CrossRef]
  114. Koshiyama, A.; Kazim, E.; Treleaven, P.; Rai, P.; Szpruch, L.; Pavey, G.; Ahamat, G.; Leutner, F.; Goebel, R.; Knight, A. Towards algorithm auditing: Managing legal, ethical and technological risks of AI, ML and associated algorithms. R. Soc. Open Sci. 2024, 11, 230859. [Google Scholar] [CrossRef]
  115. Cali, U.; Catak, F.O.; Halden, U. Trustworthy cyber-physical power systems using AI: Dueling algorithms for PMU anomaly detection and cybersecurity. Artif. Intell. Rev. 2024, 57, 183. [Google Scholar] [CrossRef]
  116. Ucar, A.; Karakose, M.; Kırımça, N. Artificial intelligence for predictive maintenance applications: Key components, trustworthiness, and future trends. Appl. Sci. 2024, 14, 898. [Google Scholar] [CrossRef]
  117. Ali, W.; Din, I.U.; Almogren, A.; Khan, M.Y.; Altameem, A. PowerTrust: AI-based trustworthiness assessment in the internet of grid things. IEEE Access 2024, 12, 161884–161896. [Google Scholar] [CrossRef]
  118. Cuomo, S.; Di Cola, V.S.; Giampaolo, F.; Rozza, G.; Raissi, M.; Piccialli, F. Scientific machine learning through physics–informed neural networks: Where we are and what’s next. J. Sci. Comput. 2022, 92, 88. [Google Scholar] [CrossRef]
  119. Huang, B.; Wang, J. Applications of physics-informed neural networks in power systems—A review. IEEE Trans. Power Syst. 2022, 38, 572–588. [Google Scholar] [CrossRef]
  120. Kirchner-Bossi, N.; Kathari, G.; Porté-Agel, F. A hybrid physics-based and data-driven model for intra-day and day-ahead wind power forecasting considering a drastically expanded predictor search space. Appl. Energy 2024, 367, 123375. [Google Scholar] [CrossRef]
  121. Onan Demirel, H.; Irshad, L.; Ahmed, S.; Tumer, I.Y. Digital twin-driven human-centered design frameworks for meeting sustainability objectives. J. Comput. Inf. Sci. Eng. 2021, 21, 031012. [Google Scholar] [CrossRef]
  122. Yin, Z.; Zhou, Z.; Yu, F.; Gao, P.; Ni, S.; Li, H. A cloud–edge collaborative multi-timescale scheduling strategy for peak regulation and renewable energy integration in distributed multi-energy systems. Energies 2024, 17, 3764. [Google Scholar] [CrossRef]
  123. Li, J.; Gu, C.; Xiang, Y.; Li, F. Edge-cloud computing systems for smart grid: State-of-the-art, architecture, and applications. J. Mod. Power Syst. Clean Energy 2022, 10, 805–817. [Google Scholar] [CrossRef]
  124. Nakabi, T.A.; Toivanen, P. Deep reinforcement learning for energy management in a microgrid with flexible demand. Sustain. Energy Grids Netw. 2021, 25, 100413. [Google Scholar] [CrossRef]
  125. Alsurdeh, R.; Calheiros, R.N.; Matawie, K.M.; Javadi, B. Hybrid workflow scheduling on edge cloud computing systems. IEEE Access 2021, 9, 134783–134799. [Google Scholar] [CrossRef]
  126. Fan, Z.; Cao, J.; Jamal, T.; Fogwill, C.; Samende, C.; Robinson, Z.; Polack, F.; Ormerod, M.; George, S.; Peacock, A. The role of ‘living laboratories’ in accelerating the energy system decarbonization. Energy Rep. 2022, 8, 11858–11864. [Google Scholar] [CrossRef]
  127. Matschoss, K.; Laakso, S.; Heiskanen, E. What can we say about the longer-term impacts of a living lab experiment to save energy at home? Energy Effic. 2024, 17, 50. [Google Scholar] [CrossRef]
  128. Sahakian, M.; Rau, H.; Grealis, E.; Godin, L.; Wallenborn, G.; Backhaus, J.; Friis, F.; Genus, A.T.; Goggins, G.; Heaslip, E. Challenging social norms to recraft practices: A Living Lab approach to reducing household energy use in eight European countries. Energy Res. Soc. Sci. 2021, 72, 101881. [Google Scholar] [CrossRef]
  129. Neupane, B.; Belkadi, F.; Formentini, M.; Rozière, E.; Hilloulin, B.; Abdolmaleki, S.F.; Mensah, M. Machine learning algorithms for supporting life cycle assessment studies: An analytical review. Sustain. Prod. Consum. 2025, 56, 37–53. [Google Scholar] [CrossRef]
  130. Lannelongue, L.; Grealey, J.; Inouye, M. Green algorithms: Quantifying the carbon footprint of computation. Adv. Sci. 2021, 8, 2100707. [Google Scholar] [CrossRef] [PubMed]
  131. Mersy, G.; Krishnan, S. Toward a life cycle assessment for the carbon footprint of data. ACM SIGENERGY Energy Inf. Rev. 2024, 4, 25–33. [Google Scholar] [CrossRef]
  132. Paaso, E.; Gurung, N.; Lelic, M.; Nation, W.; Sharma, R.; Vukojevic, A.; Zheng, H. Microgrid-Integrated Solar-Storage Technology (MISST); No. DOE-ComEd-7166; Commonwealth Edison (ComEd): Chicago, IL, USA, 2022. [Google Scholar]
  133. John, J.S. The Country’s First Neighborhood Microgrid is Coming Online in Chicago. Available online: https://www.canarymedia.com/articles/grid-edge/the-countrys-first-neighborhood-microgrid-is-coming-online-in-chicago (accessed on 9 January 2026).
  134. Breck, E.; Cai, S.; Nielsen, E.; Salib, M.; Sculley, D. The ML test score: A rubric for ML production readiness and technical debt reduction. In Proceedings of the 2017 IEEE International Conference on Big Data, Boston, MA, USA, 11–14 December 2017; pp. 1123–1132. [Google Scholar] [CrossRef]
  135. Artificial Intelligence Risk Management Framework (AI RMF 1.0); National Institute of Standards and Technology (NIST): Gaithersburg, MD, USA, 2023.
  136. Liu, X.; Gao, C. Review and Prospects of Artificial Intelligence Technology in Virtual Power Plants. Energies 2025, 18, 3325. [Google Scholar] [CrossRef]
  137. Moshari, A.; Javanroodi, K.; Nik, V.M. Real-world deployment of model-free reinforcement learning for energy control in district heating systems: Enhancing flexibility across neighboring buildings. Appl. Energy 2026, 402, 126997. [Google Scholar] [CrossRef]
  138. Michailidis, P.; Michailidis, I.; Kosmatopoulos, E. Reinforcement learning for electric vehicle charging management: Theory and applications. Energies 2025, 18, 5225. [Google Scholar] [CrossRef]
  139. Chen, J.; Yan, J.; Kemmeugne, A.; Kassouf, M.; Debbabi, M. Cybersecurity of distributed energy resource systems in the smart grid: A survey. Appl. Energy 2025, 383, 125364. [Google Scholar] [CrossRef]
  140. European Union. Regulation (EU) 2024/1689. Harmonised Rules on Artificial Intelligence and Amending Regulations; European Parliament: Brussels, Belgium, 2024. [Google Scholar]
  141. Blasch, J.; van der Grijp, N.M.; Petrovics, D.; Palm, J.; Bocken, N.; Darby, S.J.; Barnes, J.; Hansen, P.; Kamin, T.; Golob, U. New clean energy communities in polycentric settings: Four avenues for future research. Energy Res. Soc. Sci. 2021, 82, 102276. [Google Scholar] [CrossRef]
Figure 1. Schematic representation of an LES annotated with key AI action points and data/control flows. Illustrative neighborhood-scale system comprising prosumers/consumers and DERs, such as PV, storage, EV charging, and flexible loads, coordinated by local control/energy management and connected to the upstream distribution grid. The annotations indicate representative interfaces for AI-enabled coordination (telemetry/forecasts → coordination; control setpoints → DERs; net import/export and flexibility interface → grid), highlighting the coupled technical, market, and governance context in which AI-enabled decisions operate.
Figure 2. AI as an integrative intelligence layer in LESs. Conceptual illustration showing how AI-enabled functions (forecasting/situational awareness, optimization/control, and market/engagement tools) connect technical operation with local market participation and governance requirements. The figure highlights that LES performance and legitimacy depend on cross-layer coordination, not isolated model accuracy.
Figure 3. Responsible AI guardrails for LES deployment. Conceptual framing of the AI decision layer (forecasting, optimization, control, and market participation) surrounded by operational requirements (explainability/governance integration, fairness, privacy/data governance, robustness/transferability, and environmental footprint) supported by continuous monitoring and community/operator oversight.
Figure 4. Principles for sustainable, trustworthy AI in LESs. Conceptual view of trustworthy AI as enforceable constraints around the AI decision layer, spanning human-centered oversight, fairness-by-design, privacy-preserving architectures, resource-aware computation, interoperability/open standards, and auditability/contestability. The figure is intended as a “co-design map” across people, platforms, and policy.
Figure 5. The roadmap for deployment-ready AI in LESs. System-level organization of the research agenda into enabling foundations (e.g., data governance, interoperability, oversight and audit logs, resource-aware computation), core research streams (Table 4), and downstream validation practices that generate transferable evidence for real-world scaling (stress testing, safe-fallback verification, fairness reporting, edge readiness, longitudinal evaluation, and sustainability reporting).
Figure 6. Paper-at-a-glance synthesis. One-page conceptual summary linking LES complexity drivers, emerging AI roles, deployment barriers, principles-as-constraints, and the proposed deployment-oriented agenda and evidence expectations, culminating in measurable system-level outcomes (e.g., reliability, congestion mitigation, curtailment/peak reduction, and emissions impacts) under verifiable safeguards.
Table 1. Conceptual synthesis: why AI is becoming indispensable for scalable and sustainable LESs.
| LES Driver / Complexity Pressure | Corresponding AI Requirement (Capability) | If Unmet: Typical Risk / Failure Mode |
| --- | --- | --- |
| High temporal variability of renewable generation (wind/solar) | Probabilistic forecasting; uncertainty-aware scheduling and dispatch | Reactive control, larger reserve needs, curtailment, higher costs |
| Electrification of transport and heating increases load volatility | Short-term load forecasting; adaptive demand response; peak management | Congestion, higher peak tariffs, reduced reliability during extremes |
| Heterogeneous DERs: PV, batteries, EVs, heat pumps, flexible loads | Asset-level modeling via learning; hierarchical coordination; multi-agent control | Poor interoperability, suboptimal use of flexibility, fragmented operation |
| Real-time multi-objective operation (cost, reliability, emissions) under uncertainty | MPC with learned components; RL with safety constraints | Inefficiencies and operational fragility, inability to balance objectives consistently |
| Active prosumers and community participation at scale | Automation of bidding/participation; user-centric recommendations; aggregation intelligence | Low participation, unequal access to benefits, reduced social acceptance |
| Local markets and P2P trading require fast clearing and verification | Market forecasting; dynamic pricing; learning-assisted market clearing and settlement | Market instability, transaction friction, limited scalability of local trading |
| Governance needs: fairness, transparency, and accountability in allocation/control | Fairness-aware optimization; explainable AI; constraint handling aligned with policy | Perceived injustice, disputes, regulatory pushback, loss of trust |
| Operational security and resilience (faults, anomalies, cyber-physical attacks) | Anomaly detection; predictive maintenance; intrusion detection and response | Longer outages, hidden degradations, higher safety and security exposure |
| Evolving regulatory requirements and compliance reporting | Automated monitoring; auditable decision logs; interpretable models for oversight | High compliance burden, slower deployment, reduced replicability across sites |
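Several of the capabilities in Table 1 hinge on quantifying forecast uncertainty rather than producing a single point estimate. As an illustrative sketch (the persistence baseline, error data, and quantile levels are assumptions made here, not the paper's method), the following shows how empirical error quantiles can turn a point forecast into a band, scored with the standard pinball loss:

```python
# Illustrative sketch of uncertainty-aware forecasting: a persistence-style
# forecaster that reports quantiles of its historical errors instead of a
# single value, so downstream scheduling can reserve against the spread.
# All numbers below are made-up demonstration data.

def empirical_quantile(xs, q):
    """Empirical quantile by linear interpolation between order statistics."""
    s = sorted(xs)
    pos = q * (len(s) - 1)
    lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def quantile_forecast(history, last_value, quantiles=(0.1, 0.5, 0.9)):
    """Persistence forecast plus empirical error quantiles from history.

    history: list of (forecast, actual) pairs from past operation.
    """
    errors = [actual - forecast for forecast, actual in history]
    return {q: last_value + empirical_quantile(errors, q) for q in quantiles}

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss: the standard score for a quantile forecast."""
    diff = y_true - y_pred
    return max(q * diff, (q - 1) * diff)

if __name__ == "__main__":
    # Past (forecast, actual) pairs in kW; forecast the next step from 100 kW.
    past = [(100, 98), (100, 103), (100, 101), (100, 95), (100, 104)]
    bands = quantile_forecast(past, last_value=100)
    print(bands[0.5])                      # median forecast -> 101.0
    print(pinball_loss(102, bands[0.5], 0.5))  # score vs. outcome -> 0.5
```

A scheduler would then dispatch against the upper quantile of net load rather than the median, trading a little expected cost for fewer reserve shortfalls.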
Table 2. Conceptual synthesis of key technical and socio-technical challenges for responsible AI in LESs.
| Challenge Domain | Design and Operational Requirements | Practical Mechanisms (Examples) | If Unmet: Typical Risk / Failure Mode |
| --- | --- | --- | --- |
| Explainability, acceptance, and governance | Decisions must be intelligible and accountable to stakeholders | Explainable AI; uncertainty communication; decision logs; audit trails; redress procedures; community review | Loss of trust; disputes; low adoption; regulatory barriers |
| Fairness and distributional equity | Benefits and burdens must be distributed transparently and acceptably | Fairness-aware objectives/constraints; benefit-sharing rules; participatory co-design; subgroup evaluation | Systematically disadvantaged groups; perceived injustice; reduced participation |
| Data governance and privacy | Data use must minimize exposure while supporting operation | Purpose limitation; consent and access rules; federated learning; differential privacy; secure multiparty computation | Privacy harms; resistance to data sharing; compliance risks; weakened legitimacy |
| Robustness, safety, and transferability | AI must remain safe under uncertainty, drift, and rare events | Uncertainty-aware models; drift monitoring; conservative fallbacks; safety constraints; validation across LES archetypes | Operational fragility; unsafe control actions; cascading failures; negative transfer |
| Environmental footprint of AI | Computations must be proportionate to operational value | Efficient models (compression/pruning); edge inference; carbon-aware scheduling; lifecycle accounting | Undermined sustainability claims; higher energy demand; rebound effects |
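Among the mechanisms listed for robustness and safety, drift monitoring with a conservative fallback is simple enough to sketch. The window length, tolerance factor, and zero-setpoint fallback below are illustrative assumptions, not a prescription from the text:

```python
# Illustrative sketch: a drift monitor that tracks recent forecast error and
# hands control to a conservative rule-based fallback when the model leaves
# its validated envelope. Thresholds here are arbitrary placeholders.

from collections import deque

class DriftGuard:
    def __init__(self, baseline_mae, window=48, tolerance=2.0):
        self.baseline_mae = baseline_mae    # MAE observed during validation
        self.window = deque(maxlen=window)  # recent absolute errors
        self.tolerance = tolerance          # allowed degradation factor

    def observe(self, forecast, actual):
        self.window.append(abs(forecast - actual))

    def drifted(self):
        if len(self.window) < self.window.maxlen:
            return False                    # not enough evidence yet
        recent_mae = sum(self.window) / len(self.window)
        return recent_mae > self.tolerance * self.baseline_mae

def dispatch(setpoint_from_model, guard, safe_setpoint=0.0):
    """Use the learned setpoint only while the model is within its envelope."""
    return safe_setpoint if guard.drifted() else setpoint_from_model
```

The same pattern generalizes: any monitored statistic (error, constraint margin, input distribution distance) can gate the switch between the learned policy and a verified fallback.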
Table 3. Principles for sustainable, trustworthy AI in LESs: from intent to verifiable practice.
| Principle | Core Intent in LESs | Design Actions (Practical Examples) | What to Document/Evaluate |
| --- | --- | --- | --- |
| Human-centered AI | Augment (not replace) local decision-making and preserve accountable human oversight | Human-in-the-loop controls; operator/prosumer dashboards; meaningful opt-in/opt-out; explanations tailored to stakeholders | Override rates; user comprehension testing; decision latency; documented roles & responsibilities |
| Fairness-by-design | Prevent systematic disadvantage and ensure equitable benefit-sharing | Fairness-aware objectives/constraints; subgroup performance checks; benefit-sharing rules; participatory co-design | Disparate impact metrics; distribution of costs/benefits; fairness constraints satisfaction; grievance outcomes |
| Privacy-preserving architectures | Minimize personal data exposure while enabling forecasting and coordination | Data minimization; federated/local learning; differential privacy where appropriate; secure aggregation; access controls | Data inventory & purpose; privacy threat model; privacy–utility trade-offs; retention and access policy |
| Resource-aware algorithms | Ensure model complexity is justified by net system benefits and sustainability goals | Lightweight models; compression/pruning; edge inference; carbon-aware scheduling; avoid overtraining | Training/inference computation budget; energy estimates; model size/latency; net-benefit statement (system-level) |
| Interoperability and reproducible engineering | Avoid vendor lock-in and enable replication, benchmarking, and community innovation | Open data formats; interoperable APIs; reproducible pipelines; standardized evaluation protocols and datasets | Interface specs; data schemas; reproducibility checklist; benchmarking setup and baselines |
| Regulatory engagement, auditability, and accountable operations | Enable oversight, compliance, and contestability of automated decisions | Auditable logs; model cards; versioning & change management; traceable constraints; third-party audits | Audit trail completeness; update governance; compliance mapping; documentation for regulators/communities |
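The "resource-aware algorithms" row can be made concrete with a toy carbon-aware scheduler: given an hourly grid carbon-intensity forecast, a deferrable job (e.g., model retraining) is shifted to the window with the lowest summed intensity. The intensity values below are invented; a real deployment would query a grid-intensity service rather than hard-coding them:

```python
# Illustrative carbon-aware scheduling: pick the start hour of a deferrable
# training job that minimizes total forecast carbon intensity (gCO2/kWh)
# over the job's duration. Forecast values are made-up demonstration data.

def best_start_hour(intensity_forecast, job_hours):
    """Return (start_hour, total_intensity) over the cheapest window."""
    best = None
    for start in range(len(intensity_forecast) - job_hours + 1):
        total = sum(intensity_forecast[start:start + job_hours])
        if best is None or total < best[1]:
            best = (start, total)
    return best

if __name__ == "__main__":
    forecast = [300, 250, 120, 110, 260, 400]  # hourly gCO2/kWh, hypothetical
    print(best_start_hour(forecast, job_hours=2))  # -> (2, 230)
```

Paired with the per-job energy estimate, this yields the avoided-emissions figure that the documentation column asks operators to report.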
Table 4. Research priorities mapped to deployment-ready deliverables and associated reporting minimums.
| Research Priority | Deployment-Ready Deliverables (Examples) | Minimum Reporting (Beyond Accuracy) |
| --- | --- | --- |
| Hybrid physics–ML for robust forecasting & control | Hybrid/policy-constrained forecasters; physics-informed components; stress-test suite for rare events; safe control integration (e.g., MPC/RL with constraints) | OOD/rare-event performance; uncertainty calibration; constraint-violation rate; comparison to pure ML baseline |
| Explainable, auditable algorithms for community contexts | Contextual explanations for dispatch/pricing; community-facing dashboards; decision logs; model cards; audit protocol + redress workflow | Explanation method + fidelity; user comprehension/trust metrics; audit trail completeness; governance roles and appeal handling |
| Fairness-aware market and control mechanisms | Equity-aware objectives/constraints; benefit-sharing rules; progressive pricing options; pilot-ready evaluation protocol | Distributional impacts (who benefits/loses); fairness constraint satisfaction; subgroup sensitivity; qualitative findings from stakeholders |
| Lightweight, edge-capable AI | Compressed/edge-deployed models; edge–cloud partitioning design; fallback control policy; privacy-preserving local inference | Latency; energy per inference; bandwidth demand; performance under connectivity loss; failure modes + safe fallback tests |
| Longitudinal field studies & living labs | Living-lab deployment plan; mixed-method instruments; monitoring & iteration cycle; replication package across sites | Study duration and context; adoption/retention; behavior change; governance frictions; external validity/transferability limits |
| Standardized LCA and AI sustainability reporting | Computation/energy tracking template; LCA boundary definition; reproducible configs; carbon-aware scheduling guidance | Training/inference energy; carbon intensity assumptions; hardware/PUE; scope and uncertainty; net-benefit statement at system-level |
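Two of the reporting minimums above, uncertainty calibration and constraint-violation rate, reduce to simple counting metrics. This sketch shows the arithmetic behind each; the interval bounds and action limits are illustrative placeholders:

```python
# Illustrative "beyond accuracy" metrics from Table 4. A well-calibrated
# nominal 80% prediction interval should cover roughly 80% of outcomes;
# a safe controller should keep its violation rate near zero.

def interval_coverage(intervals, actuals):
    """Fraction of actuals falling inside their [lo, hi] interval."""
    hits = sum(lo <= y <= hi for (lo, hi), y in zip(intervals, actuals))
    return hits / len(actuals)

def constraint_violation_rate(actions, lower, upper):
    """Share of control actions outside the permitted [lower, upper] band."""
    bad = sum(not (lower <= a <= upper) for a in actions)
    return bad / len(actions)

if __name__ == "__main__":
    # Hypothetical 80% intervals for net load (kW) and realized values.
    intervals = [(0, 10), (0, 10), (0, 10), (0, 10)]
    actuals = [5, 11, 3, 9]
    print(interval_coverage(intervals, actuals))          # 0.75 vs. 0.80 nominal
    # Hypothetical normalized charging setpoints against a [0, 1] limit.
    print(constraint_violation_rate([0.5, 1.2, -0.1, 0.9], 0, 1))  # 0.5
```

Reporting these alongside accuracy makes miscalibration and unsafe behavior visible even when the point error looks acceptable.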
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Ferenci, S.; Coteț, F.-A.; Lakatos, E.S.; Munteanu, R.A.; Szabó, L. Artificial Intelligence in Local Energy Systems: A Perspective on Emerging Trends and Sustainable Innovation. Energies 2026, 19, 476. https://doi.org/10.3390/en19020476
