Review

Regulating AI in the Energy Sector: A Scoping Review of EU Laws, Challenges, and Global Perspectives

by Bo Nørregaard Jørgensen * and Zheng Grace Ma
SDU Center for Energy Informatics, 5230 Odense, Denmark
* Author to whom correspondence should be addressed.
Energies 2025, 18(9), 2359; https://doi.org/10.3390/en18092359
Submission received: 16 March 2025 / Revised: 30 April 2025 / Accepted: 30 April 2025 / Published: 6 May 2025
(This article belongs to the Section F5: Artificial Intelligence and Smart Energy)

Abstract:
Using the PRISMA-ScR methodology, this scoping review systematically analyzes how EU laws and regulations influence the development, adoption, and deployment of AI-driven digital solutions in energy generation, transmission, distribution, consumption, and markets. It identifies key regulatory barriers such as stringent risk assessments, cybersecurity obligations, and data access restrictions, along with enablers like regulatory sandboxes and harmonized compliance frameworks. Legal uncertainties, including AI liability and market manipulation risks, are also examined. To provide a comparative perspective, the EU regulatory approach is contrasted with AI governance models in the United States and China, highlighting global best practices and alignment challenges. The findings indicate that while the EU’s risk-based approach to AI governance provides a robust legal foundation, cross-regulatory complexity and sector-specific ambiguities necessitate further refinement. This paper proposes key recommendations, including the integration of AI-specific energy sector guidelines, acceleration of standardization efforts, promotion of privacy-preserving AI methods, and enhancement of international cooperation on AI safety and cybersecurity. These measures will help strike a balance between fostering trustworthy AI innovation and ensuring regulatory clarity, enabling AI to accelerate the clean energy transition while maintaining security, transparency, and fairness in digital energy systems.

1. Introduction

The rapid digitalization of the energy sector is increasingly driven by artificial intelligence (AI), enabling smart grid optimization, predictive maintenance, energy market forecasting, and consumer demand management [1,2]. However, the integration of AI into critical energy infrastructure introduces significant regulatory challenges related to cybersecurity, data protection, market integrity, and operational safety. The European Union (EU) has proactively developed a comprehensive regulatory framework to govern AI deployment, particularly through the Artificial Intelligence Act (AI Act), the General Data Protection Regulation (GDPR), the Directive on security of network and information systems (NIS2 Directive), the Cyber Resilience Act (CRA), the Network Code on Cybersecurity for the electricity sector (NCCS), and the EU Cybersecurity Act. These regulations establish strict compliance requirements for AI applications, particularly in high-risk domains such as grid management and automated decision-making in energy markets. While these rules enhance trust, security, and accountability, they also introduce complexities related to legal uncertainty, compliance costs, and potential innovation bottlenecks. AI-driven digital solutions promise improved efficiency and flexibility across the energy value chain—for example, machine learning models can forecast renewable power output or optimize electricity dispatch in real time [2]. At the same time, the energy sector’s critical role and system complexity make unregulated AI use risky. Errors in AI-driven grid management could destabilize the grid, while misuse of consumer energy data could breach privacy. Acknowledging both AI’s transformative benefits and its ethical, safety, and security challenges, policymakers have established legal frameworks to regulate its deployment in critical sectors like energy [3].
In the European Union, a comprehensive approach to AI governance is emerging. The EU’s AI Act exemplifies a proactive strategy to ensure “safety and respect of existing legislation throughout the whole AI system lifecycle” [1]. In addition to the AI Act, several EU regulations on data protection, cybersecurity, and digital markets directly influence AI development and deployment in the energy sector. Compliance is not just a legal requirement but crucial for building trust in AI-driven energy innovations and protecting public interests such as grid reliability, consumer rights, and national security. Effective AI governance can foster innovation by providing clear rules and standards, but overly strict or fragmented regulations may create barriers.
This scoping review explores the landscape of EU laws and regulations that influence AI-driven digital solutions in the energy sector. We cover the entire energy domain, including generation (supply-side), transmission and distribution networks, consumption (demand-side applications), and energy markets. Key regulatory barriers, enablers, compliance challenges, and legal uncertainties are identified. To broaden the perspective, we also present a comparative analysis of how AI in energy is regulated (or guided) in the United States and China, highlighting divergences and convergences with the EU approach. By mapping the current regulatory environment and its implications, this review aims to inform both policymakers and industry stakeholders on how to navigate AI innovation in energy within the bounds of law. The ultimate objective is to clarify how legal frameworks can govern AI in energy in a manner that protects societal values while still enabling technological progress.
Following the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, this scoping review first maps the regulatory landscape and then addresses the following research question:
How do EU laws and regulations influence the development, adoption, and deployment of AI-driven digital solutions in the energy sector?
We systematically map the relevant EU regulatory instruments and literature to (1) summarize how key EU regulations affect AI in the energy sector; (2) analyze their impact on AI development, adoption, and deployment across generation, grids, consumption, and markets; (3) compare the EU’s regulatory approach with those in the U.S. and China; and (4) identify challenges, gaps, and recommendations for harmonizing AI governance with innovation in the energy sector. In doing so, we provide energy sector decision-makers and researchers with a consolidated reference on the evolving interplay between AI technologies and legal requirements in the EU context.
The remainder of this paper is structured as follows. Section 2 presents the methodology, detailing the scoping review framework, search strategy, inclusion criteria, and data extraction process, following PRISMA-ScR guidelines. Section 3 provides an overview of key EU regulations, analyzing the AI Act, GDPR, NIS2, Cyber Resilience Act, Network Code on Cybersecurity, and the EU Cybersecurity Act, with a focus on their relevance to AI-driven energy solutions. Section 4 examines the impact of regulations on AI development, adoption, and deployment, discussing their effects on different segments of the energy sector: generation, grids, consumption, and markets. Section 5 provides a comparative analysis of the EU, the U.S., and China, highlighting key differences and similarities in AI governance approaches. Section 6 identifies regulatory challenges, gaps, and recommendations, offering policy suggestions for harmonizing AI governance with energy sector innovation. Finally, Section 7 concludes with key findings and directions for future research.

2. Methodology

This scoping review follows the PRISMA-ScR framework, the PRISMA extension for scoping reviews [4]. The scoping review methodology was chosen to map the breadth of literature and policy documents on AI regulations in the energy sector, identifying key concepts and gaps rather than testing a specific hypothesis. The methodology comprised a systematic search strategy, predefined inclusion/exclusion criteria, and a structured approach to data extraction and synthesis, as detailed below.
Data Sources: We targeted the following sources for relevant literature:
  • Official EU legal documents: The search in EUR-Lex, the EU’s legal database, identified regulations, directives, and decisions relevant to AI and digital technology in the energy sector [5]. Key documents included the AI Act proposal, GDPR, NIS2 Directive, Cyber Resilience Act proposal, Network Code on Cybersecurity for electricity, and the Cybersecurity Act.
  • Regulatory and policy reports: The European Commission website [6] and agencies like the European Union Agency for Cybersecurity (ENISA) [7] were searched for relevant reports, strategy documents, and press releases. For instance, the European Commission’s press releases on the Digitalization of Energy Action Plan [8] and on the adoption of the Network Code on Cybersecurity [9] were reviewed. ENISA publications, such as its report on AI cybersecurity in electricity demand forecasting [10], were included to glean insights into compliance guidelines and threat landscapes.
  • Academic literature: Searches in scholarly databases, including Scopus and Web of Science (WoS), identified articles that examine the intersection of law, regulation, and AI in the energy sector. These included a 2021 study on AI governance in electricity systems under the proposed AI Act [1] and a comparative analysis of AI and energy regulations in China [11].
  • Industry white papers and technical standards: The review included white papers from industry associations, consulting firms, and standardization bodies to analyze AI deployment in energy under regulatory constraints. Examples include a law firm’s analysis of AI and energy in the U.S. [12] and consulting reports on the EU AI Act’s implications for energy utilities [3].
  • Energy agencies: The review incorporated documents from energy regulators, such as the Agency for the Cooperation of Energy Regulators (ACER), the European Network of Transmission System Operators for Electricity (ENTSO-E), the International Energy Agency (IEA), the International Renewable Energy Agency (IRENA), and the North American Electric Reliability Corporation (NERC).
The search was conducted in January 2025 and updated in March 2025 to include recent developments. All publication types were considered, including legislation texts, academic studies, review articles, official reports, and credible news or white papers. The inclusion criteria limited sources to English-language publications.
Search Strategy: The selection of search terms was guided by the research question, ensuring coverage of key concepts related to the influence of EU laws and regulations on AI-driven digital solutions in the energy sector. The terms were chosen to capture three core dimensions: (1) AI technologies, including artificial intelligence and machine learning applications; (2) regulatory aspects, encompassing laws, regulations, directives, and policy frameworks; and (3) energy system components, spanning generation, transmission, distribution, consumption, and market mechanisms. These dimensions were structured into a comprehensive Boolean search string:
(EU OR “European Union”) AND (Energy OR Power) AND (Act OR Law OR Regulation OR Directive) AND (AI OR “Artificial Intelligence” OR Digitalization) AND (“Data Privacy” OR Cybersecurity OR Resilience) AND (Generation OR “Transmission grid” OR “Distribution grid” OR Consumption OR Market)
This search string balanced specificity and breadth to retrieve relevant literature and served as a foundation for developing specialized queries tailored to different databases and indexing systems, adapting syntax and term variations as necessary to optimize retrieval across multiple sources. Table 1 lists the optimized search string for each of the data sources.
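The adaptation of the master string to individual databases can be sketched in a few lines of code. This is a purely illustrative script, not the tooling used in the review: the term lists mirror the Boolean string above, but the helper names and the Scopus wrapping are examples only (TITLE-ABS-KEY is Scopus’s title/abstract/keyword field code).

```python
# Illustrative sketch: compose the three concept dimensions of the master
# search string and adapt the result per database. Names are hypothetical.
DIMENSIONS = {
    "ai": ['AI', '"Artificial Intelligence"', 'Digitalization'],
    "regulation": ['Act', 'Law', 'Regulation', 'Directive'],
    "energy": ['Generation', '"Transmission grid"', '"Distribution grid"',
               'Consumption', 'Market'],
}

def or_group(terms):
    """Join the synonyms of one concept with OR and parenthesize the group."""
    return "(" + " OR ".join(terms) + ")"

def master_query():
    """AND the concept groups together, as in the review's master string."""
    return " AND ".join(or_group(terms) for terms in DIMENSIONS.values())

def scopus_query():
    """Wrap the master string in Scopus's TITLE-ABS-KEY field code
    (a simplified adaptation; real queries may need further tuning)."""
    return f"TITLE-ABS-KEY({master_query()})"
```

Keeping the concept groups in one place makes it straightforward to regenerate every database-specific query when a synonym is added, which helps keep a scoping-review search reproducible.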
Inclusion Criteria: The review included sources that explicitly address the legal and regulatory aspects of AI in the energy sector. The focus was on EU legislation that directly or indirectly affects AI and on analyses of regulatory impacts on energy applications. Comparative discussions of regulatory frameworks in non-EU jurisdictions, including the U.S. and China, were also included. Both peer-reviewed and reputable gray literature sources were considered, since emerging regulations are often discussed outside academic journals. The review covered sources from 2016 to 2024, reflecting the period when AI applications in energy became more prominent, and key regulations such as the GDPR and AI-specific laws were introduced.
Exclusion Criteria: Technical and ethics studies on AI in energy without a regulatory component were excluded. The review also excluded legal analyses that did not address AI in the energy sector. Articles focused exclusively on general AI ethics or on energy policies unrelated to digitalization were not included. When multiple sources contained overlapping information, the review prioritized regulatory texts and official summaries. Secondary sources were included only if they provided unique perspectives or expert commentary.
Source Selection: This review adheres to the principles of the PRISMA-ScR framework to ensure a systematic and transparent source selection process. However, because our study encompasses a diverse range of document types, including official EU legal texts, policy reports, industry white papers, and academic literature, the standard PRISMA-ScR flow diagram, which is primarily designed for reviews based solely on peer-reviewed literature, does not adequately capture the nuances of our source selection process. To address this, we modified our reporting approach by presenting both an extended PRISMA-ScR flow diagram in Figure 1 and a structured summary of the identification, screening, and inclusion procedures in Table 2. This table details the source categories, selection criteria, and number of records identified, screened, and included for each category. By doing so, we maintain methodological rigor and transparency while appropriately reflecting the heterogeneity of our data sources. Of the 72 sources included in this review, 1 source (1.4%) was a peer-reviewed journal article, and 71 sources (98.6%) consisted of official legal texts, policy reports, white papers, or other gray literature.
Data Charting and Synthesis: For each included source, information was extracted on regulatory requirements, compliance challenges, regulatory gaps, and policy recommendations. The findings were organized thematically to align with the structure of this review. Consistent with the scoping review methodology, no formal quality appraisal was conducted, as the objective was to map available evidence rather than assess study design quality. However, priority was given to peer-reviewed or official sources when citing factual statements about laws. References were consistently cited using numeric markers to ensure transparency. The PRISMA-ScR checklist [4] guided the reporting and transparency of the review process.

3. Overview of Key EU Regulations

This section outlines the major EU legal instruments governing AI-driven digital solutions in the energy sector. It covers both AI-specific regulations, such as the AI Act, and broader technology-related laws, including cybersecurity and data protection regulations. Each instrument’s scope and key provisions are summarized, with a focus on clauses that impact AI development, deployment, and operation in energy applications. The key EU regulations covered are the Artificial Intelligence Act (AI Act) [13], the General Data Protection Regulation (GDPR) [14], the Directive on security of network and information systems (NIS2) [15], the Cyber Resilience Act (CRA) [16], the Network Code on Cybersecurity for the electricity sector (NCCS) [9], and the EU Cybersecurity Act [17]. Table 1 at the end of this section provides a summary matrix of these regulations and their applicability across different segments of the energy sector.

3.1. EU Artificial Intelligence Act (AI Act)

The Artificial Intelligence Act is a landmark EU law dedicated to AI regulation. Proposed by the European Commission in April 2021 and formally adopted in 2024, the AI Act is the first horizontal regulation of AI systems in the EU [1,3]. It introduces a risk-based framework that categorizes AI systems into different risk levels with corresponding legal requirements [3]. In brief:
  • Prohibited AI practices: Certain AI uses deemed unacceptable (e.g., social scoring by governments, or AI that manipulates humans subconsciously) will be banned outright (these are not typical in the energy sector, so less relevant here).
  • High-risk AI systems: The Act designates specific high-impact use cases as “high-risk”, subjecting them to strict obligations. Notably, AI systems used in critical infrastructure management (including energy) are on this list. Annex III of the proposal explicitly classifies “AI systems intended to be used as safety components in the management and operation of […] the supply of water, gas, heating and electricity” as high-risk [18]. This means an AI system that, for example, controls an electrical substation, balances supply and demand on the grid, or optimizes a power plant’s output could be considered high-risk if its malfunction might jeopardize safety or service continuity [18]. High-risk AI providers must implement comprehensive risk management, ensure high-quality training data (to minimize bias and errors), enable traceability and logging, provide detailed technical documentation, and undergo conformity assessments before deployment [1]. Users of such AI (e.g., a grid operator deploying an AI tool) also have obligations like human oversight and monitoring of performance.
  • Limited-risk systems: For certain AI systems that interact with people or generate content, like chatbots or deepfakes, the Act requires transparency, i.e., disclosing that one is interacting with AI. In energy, an example might be an AI-powered customer service agent for a utility; it would need to identify itself as AI to the customer, but these systems are not “high-risk” under the Act.
  • Minimal-risk AI: AI applications with minimal risk, such as internal analytics tools, are not directly regulated by the Act, apart from general voluntary codes of conduct.
The AI Act introduces the first AI-specific compliance framework for the energy sector. Energy companies must assess whether their AI applications qualify as “high risk”, which is likely if they impact grid operations or safety. High-risk systems must follow new compliance steps, including an AI conformity assessment, either through self-assessment or with notified bodies, and registration in an EU database. The Act aims to “foster innovation and competitiveness while also protecting individual rights and societal values” [3], striking a balance between enabling AI and mitigating its risks. For instance, by mandating human oversight and transparency for high-risk AI, the Act addresses concerns like the opacity of AI decisions in critical settings [1]. However, some sector-specific risks, such as AI-driven market manipulation and certain cybersecurity concerns, may not be fully addressed by the Act in its current form [1]. The AI Act entered into force in 2024, with a transition period of approximately two years before most obligations take effect, giving energy stakeholders time to prepare. It also amends existing EU laws to align terminology and oversight, integrating them with product safety regimes and sector-specific regulations. As a key pillar of AI governance in Europe, the Act places the energy sector at the forefront due to its critical role and high-risk AI applications.

3.2. EU General Data Protection Regulation (GDPR)

The GDPR, in effect from May 2018, is the EU’s comprehensive data protection law. While not specific to AI or energy, it has significant implications for digital solutions involving personal data, including AI applications in the energy sector. Its relevance is most pronounced in the demand side and retail market, where personal data are prevalent. Examples include smart meter data linked to households, consumer energy usage profiles, billing and payment data, and AI-driven personalized energy services. Under the GDPR, such data are protected by strict rules on processing, consent, purpose limitation, and security [19].
Key provisions affecting AI solutions include:
  • Lawful basis and purpose limitation: Energy companies using AI that processes personal data, such as algorithms optimizing home energy consumption based on smart thermostat data, must establish a legal basis for data processing. This may include consent, contractual necessity, or legitimate interest. Data must not be repurposed in ways that conflict with their original purpose. For example, using detailed smart meter data to infer household behavior may require explicit consent due to their potential to reveal sensitive personal habits.
  • Data minimization and privacy-by-design: The GDPR pushes companies to collect only data that are necessary and to integrate privacy measures into technology design. This means that AI models should avoid using personally identifiable information if not needed. Techniques like anonymization or aggregation of energy data are encouraged so that AI can learn from usage patterns without exposing individual identities [19]. In a grid context, if AI monitors distributed energy resources, ensuring that individual customer data are de-identified can help compliance. ENISA’s guidance for AI in electricity demand forecasting, for example, highlights filtering or anonymizing consumer data to comply with the GDPR [10].
  • Automated decision-making and profiling (Article 22): The GDPR gives individuals rights when significant decisions about them are made by algorithms. In energy, an example could be an AI system that automatically adjusts a user’s tariffs or disconnects service for non-payment. If these decisions have legal or similarly significant effects, consumers have the right to human review and an explanation. Energy providers must be cautious when using AI for things like credit scoring for energy contracts or detecting fraud, as these could invoke Article 22.
  • Security and breach notification: The GDPR requires personal data protection against breaches. AI systems handling customer data must implement security controls, often aligning with cybersecurity laws, and report breaches to authorities within 72 h. This creates a strong incentive to secure AI pipelines, ensuring measures such as protecting training datasets stored in the cloud.
  • Data Protection Impact Assessments (DPIA): If an AI application poses a high risk to individual privacy, such as large-scale monitoring through smart devices, the GDPR requires a Data Protection Impact Assessment (DPIA). This structured assessment identifies risks and mitigation measures. Many AI projects in the energy sector, particularly smart home and smart city initiatives, will need DPIAs to evaluate privacy impacts systematically [19].
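The anonymization-through-aggregation idea mentioned under data minimization can be sketched as follows. This is a minimal illustration, not a method from the GDPR or ENISA guidance: the substation grouping, field layout, and the K threshold are all hypothetical choices.

```python
# Illustrative sketch of privacy-preserving aggregation: pool household
# smart-meter readings per substation and release a group average only if
# the group contains at least K households (a k-anonymity-style threshold).
# The threshold value and data layout are hypothetical.
from collections import defaultdict

K_MIN_HOUSEHOLDS = 5  # hypothetical minimum group size

def aggregate_per_substation(readings):
    """readings: iterable of (substation_id, household_id, kwh) tuples.
    Returns {substation_id: mean kWh} for groups with >= K_MIN_HOUSEHOLDS,
    dropping smaller (more easily re-identifiable) groups entirely."""
    groups = defaultdict(list)
    for substation, _household, kwh in readings:
        groups[substation].append(kwh)  # the household ID is discarded here
    return {
        sub: sum(vals) / len(vals)
        for sub, vals in groups.items()
        if len(vals) >= K_MIN_HOUSEHOLDS
    }
```

A forecasting model trained on such feeder-level means never sees individual household profiles, which supports data minimization; note, however, that aggregation alone does not guarantee anonymization under the GDPR, since small or unusual groups can still permit re-identification.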
Compliance with the GDPR in AI projects has been identified as a challenge, but also a necessary trust factor. Industry analysis shows that privacy regulations like the GDPR can be seen as a regulatory barrier, for instance, by limiting the sharing of datasets needed to train AI models [20]. Energy startups might find it “legally hazardous” if they cannot easily use consumer data to improve AI predictions [21]. At the same time, the GDPR fosters trust: consumers are more likely to adopt AI-driven services, such as demand response programs or personalized energy recommendations, when assured that their data are protected by strong regulations. Because the GDPR is globally recognized as a model, compliance with it can serve as a competitive advantage in the energy sector.
AI-driven digital solutions handling customer or employee data in the energy sector must comply with the GDPR. This influences design choices by promoting privacy-preserving AI techniques, data governance practices through clear consent and purpose notices, and contractual agreements ensuring that AI vendors and cloud providers adhere to Data Processing Agreements. The GDPR operates alongside newer AI regulations, as the AI Act does not override it. Both frameworks apply, requiring high-risk AI systems to be safe and unbiased under the AI Act while remaining privacy-compliant under the GDPR. This dual compliance will be a key consideration for energy companies.

3.3. EU Cybersecurity of Networks and Information Systems (NIS2 Directive)

The NIS2 Directive is the EU’s main cybersecurity legislation for critical infrastructure and key sectors and was adopted in 2022 as an update to the original NIS Directive from 2016. The energy sector (electricity, oil, gas, district heating) is squarely within the scope of NIS2 [22]. This directive establishes a unified, higher baseline of cybersecurity requirements across Member States and significantly expands the number of entities covered. Its impact on AI-driven solutions is indirect but important: it forces energy sector organizations to pay robust attention to the security and resilience of all digital systems, including AI. Key points of NIS2 relevant to AI in energy:
  • Expanded Scope—Essential Entities: Under NIS2, medium and large organizations in the energy sector are classified as essential entities that must comply with the directive [22]. This covers most large utilities, grid operators such as transmission system operators (TSOs) and distribution system operators (DSOs), major power generation companies, and emerging players like large-scale renewable operators and aggregators. Many of these entities are actively implementing AI for grid management, trading, and maintenance. Smaller companies may be exempt, but supply chain dependencies will extend compliance requirements to AI providers, as essential entities will require their suppliers to adhere to the regulations.
  • Cybersecurity Risk Management Measures: NIS2 mandates that covered entities implement “appropriate and proportional technical, operational, and organizational measures” to manage cybersecurity risks [23]. This includes measures such as risk analysis, incident handling, business continuity, encryption, access control, and others [24]. For an AI system integrated into operations, this means it must be secured as part of the overall network. For instance, if a DSO uses AI to autonomously control voltage or detect outages, that AI and its data flows must adhere to state-of-the-art security, including secure development practices, regular vulnerability assessments, and network segmentation; state-of-the-art measures are explicitly expected [23]. Implementing strong cybersecurity measures for AI models, such as protecting against adversarial attacks and ensuring the integrity of AI decisions, becomes a key aspect of compliance.
  • Incident Reporting: Energy entities must report significant cybersecurity incidents to national authorities, such as CSIRTs, within strict timelines, typically providing an initial notice within 24 h [9]. If an AI system is compromised, such as an intrusion manipulating an AI-driven energy management system, it triggers mandatory incident reporting. This requirement incentivizes continuous monitoring of AI behavior to detect anomalies and enhance security. Notably, the new Network Code on Cybersecurity for electricity aligns with NIS2’s reporting mechanisms [9], so that energy-specific incidents are handled coherently.
  • Supply Chain Security and Supervision: NIS2 places emphasis on the cybersecurity of supply chains and supplier relationships [22]. Energy companies must evaluate the security of any AI software or cloud service they use. Regulators can impose penalties or corrective measures for non-compliance, while European cooperation through bodies like the Cooperation Group will provide further guidance. From an AI perspective, this pushes the use of trusted AI providers and possibly certified products. It also means documentation: energy entities might have to document how an AI tool’s risks are managed, to satisfy regulators during audits.
  • National Strategies and Cross-Border Collaboration: Each Member State must have a national cybersecurity strategy for sectors, including energy [22]. This, along with broader EU initiatives, fosters a strong focus on cyber resilience across all levels. In this environment, AI projects may find it easier to access support and guidance from national authorities on secure deployment practices. Collaboration frameworks such as the EU CyCLONe network, which manages cybersecurity crises, also cover incidents in the energy sector [9], ensuring that if an AI-triggered incident occurs (e.g., a widespread grid disruption from a cyberattack on AI systems), there is coordination in response.
NIS2 does not specifically target AI, but strengthens cybersecurity requirements across sectors, indirectly enforcing higher security standards for AI systems in energy. Any AI solution must meet the same rigorous security standards as other OT and IT systems. A key intersection with the AI Act is its requirement for high-risk AI systems to be resilient against manipulation and address adversarial threats. NIS2 complements this by ensuring that organizations deploying AI have robust security management frameworks. This alignment is further reflected in the new electricity cybersecurity Network Code, which builds on NIS2 principles [9].
Energy companies must integrate compliance efforts to ensure that meeting NIS2 requirements, such as security leadership roles and audits, also covers AI deployments. At the same time, AI adoption must not create gaps in cybersecurity governance. Those who manage this effectively will find that strong cybersecurity, as mandated by NIS2, enables AI innovation by providing a secure environment for digital advancements.

3.4. EU Cyber Resilience Act (CRA)

The Cyber Resilience Act (CRA), adopted in 2024, establishes mandatory cybersecurity requirements for digital products, ensuring that they are designed securely and maintained against vulnerabilities. This regulation is particularly relevant to the energy sector, where modern systems depend on digital products such as IoT sensors, control systems, and software platforms, many of which embed or support AI. The CRA addresses cybersecurity at the product level, complementing NIS2, which focuses on organizational security practices.
Key aspects of the CRA include the following:
  • Scope of Products: The CRA applies to almost any software or hardware product with a digital component, except for certain products already regulated under other frameworks, such as medical devices [22]. This broad scope includes everything from AI software libraries to smart inverters in solar panel systems. In the energy sector, devices such as smart meters, EV charging stations, grid monitoring tools, and home energy management apps all fall under its requirements and must comply with its cybersecurity standards.
  • Baseline Security Requirements: Manufacturers must ensure that their products meet specific cybersecurity requirements, including protection against unauthorized access, secure-by-design and default configurations, data protection for confidentiality and integrity, and mitigation of known vulnerabilities [22]. For example, the AI software used in a wind turbine control system must not contain hardcoded passwords or unpatched vulnerabilities. The CRA’s requirements cover the entire product lifecycle, from design and development, where security must be embedded, to post-sale maintenance, which includes providing security updates [22].
  • Critical Products and Conformity Assessment: The Act designates certain product categories as “critical”, including operating systems, firewalls, and specific industrial control systems. These products may require third-party conformity assessments before they can be placed on the market [22]. In an energy context, a control system for substations or grid management software could be deemed critical and thus need an independent cybersecurity certification prior to deployment. If an AI platform falls under a critical category, its provider must get an external audit of its security measures.
  • CE Marking and EU Market Access: Compliant products will bear the CE marking, indicating that they meet CRA standards [22]. Non-compliance can lead to fines or exclusion from EU markets, making cybersecurity a crucial consideration for global AI vendors in the energy sector. This regulation extends the traditional concept of safety compliance, familiar in energy for electrical safety, to include the cyber safety of digital components.
  • Impact on AI Developers: While the CRA does not explicitly mention AI, any AI software qualifies as a digital product and must comply. This includes AI-driven energy management software. The CRA’s secure development requirements, such as addressing vulnerabilities and publishing security guidance, align with the AI Act’s mandates for high-risk AI, particularly regarding robustness and accuracy. For example, under the CRA, an AI software package must not contain unpatched critical common vulnerabilities and exposures (CVEs), while the AI Act requires safeguards against manipulation of its outputs. Together, these regulations drive developers toward building secure and trustworthy AI.
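The secure-by-design obligations described above (e.g., no hardcoded passwords, no unpatched critical CVEs) are typically enforced through automated checks in a vendor's development pipeline. As a minimal, purely illustrative sketch of one such check, the following scans source files for hardcoded credential literals; real CRA compliance tooling would combine software composition analysis, SBOM generation, and CVE feed monitoring, and the patterns and function names here are hypothetical.

```python
import re
from pathlib import Path

# Patterns that commonly indicate hardcoded credentials; illustrative only.
CREDENTIAL_PATTERNS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
]

def scan_for_hardcoded_credentials(root: Path) -> list[tuple[str, int]]:
    """Return (file, line_number) pairs where a suspicious literal appears."""
    findings = []
    for path in root.rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if any(p.search(line) for p in CREDENTIAL_PATTERNS):
                findings.append((str(path), lineno))
    return findings
```

A check like this would run in continuous integration and block a release, supporting the CRA's lifecycle requirement that security be embedded from development onward.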
From an energy sector perspective, the CRA enhances trust by ensuring that products, including AI tools, meet a defined security baseline, reducing vulnerabilities in critical infrastructure [22]. It also eases the burden on end users, such as utilities, by eliminating the need to individually assess each product’s security. A certified CE-marked device should comply with EU security standards, streamlining procurement processes.
A practical example: suppose a distribution system operator (DSO) wants to deploy smart IoT sensors with AI capabilities at the grid edge for fault detection. Under the CRA, those sensor devices must ship with secure firmware and, where appropriate, data encryption, and the vendor must commit to providing security patches for a defined period. The DSO can treat the CE mark under the CRA as a quick quality signal. This mitigates the scenario in which thousands of deployed IoT sensors later become botnet victims due to neglected updates.

The CRA came into force in 2024, and after an implementation period, its main obligations will apply from 2027 [22]. Energy organizations and manufacturers have a limited timeframe to align their product development and procurement processes with CRA requirements. For AI startups in the energy sector, security is not optional. Demonstrating CRA compliance, or actively preparing for it, could serve as a competitive advantage when engaging with risk-conscious energy utilities.
The Cyber Resilience Act strengthens the security of AI and digital components in energy systems by regulating the products themselves. It works alongside NIS2, which governs how these products are used and managed in operations, and the Cybersecurity Act, which provides certification frameworks for security evaluation. Together, these regulations create a layered defense: product security under the CRA, organizational security under NIS2, and certification support through the Cybersecurity Act. For AI-driven energy solutions, this comprehensive framework significantly enhances security, which is a crucial safeguard against rising cyber threats.

3.5. EU Network Code on Cybersecurity (Electricity Sector)

The Network Code on Cybersecurity for the electricity sector is a sector-specific regulation designed to bolster the cyber resilience of electricity grids across the EU. It is the first energy-specific cybersecurity regulation of its kind and was adopted by the European Commission via a Delegated Act in March 2024 [9]. Network codes in the EU context typically provide detailed rules for the technical operation of energy systems; this new network code extends that tradition by setting specific cybersecurity requirements. Key features of the Network Code on Cybersecurity include the following:
  • Target and Scope: The code focuses on cross-border electricity flows and the key entities that influence them. In practice, this includes transmission system operators (TSOs), large distribution system operators (DSOs) connected to cross-border systems, and potentially other critical grid participants such as regional coordination centers. These entities manage the high-voltage grid and facilitate electricity movement between countries, making them essential to European energy security. A cyber incident affecting them could have widespread consequences across the EU.
  • Risk Assessment Regime: The core of the code is the establishment of a recurrent cybersecurity risk assessment process for the electricity sector [9]. TSOs and other designated operators must systematically identify digital assets and processes essential to cross-border electricity flows, assess their cybersecurity risks, and implement appropriate mitigation measures. AI systems used in grid operations, such as those used for load balancing or interconnector control, will be evaluated as part of this process. If an AI system is deemed critical, operators must assess the cyber risks it introduces, including threats like data poisoning attacks or AI malfunctions, and implement the necessary controls. These may range from technical safeguards to staff training for AI oversight.
  • Governance and Alignment with NIS2: The network code establishes a governance model for cybersecurity in electricity that leverages existing structures [9]. It aligns with horizontal legislation like NIS2 by using similar or the same reporting channels and collaborative bodies. For example, incident reporting under this code would go through the national CSIRTs, as mandated by NIS2 [9]. In the case of a large-scale cyber crisis affecting energy, coordination would occur via the EU CyCLONe network (Cyber Crisis Liaison Organization Network) [9]. This alignment prevents duplication, allowing energy companies to follow a unified process rather than separate compliance frameworks. It also integrates energy-specific insights into the broader cybersecurity ecosystem. The European Commission emphasized that this common baseline approach ensures consistency with NIS2 while respecting established practices in the power industry [9].
  • Stakeholder Collaboration: The code was developed with input from ENTSO-E (the European network of TSOs), the EU DSO entity, and the Agency for Cooperation of Energy Regulators (ACER) [9]. It fosters a collaborative approach where methodologies and risk scenarios can be shared, and the code can be updated as threats evolve [9]. We can expect sector-specific guidance, perhaps even lists of best practices for securing AI in control systems, to emerge under this framework.
  • Compliance and Enforcement: As a delegated regulation under the Electricity Regulation (EU) 2019/943, the code becomes binding once it passes the scrutiny period of the Council and Parliament [9]. TSOs/DSOs will likely have to report compliance and could face regulatory scrutiny by energy regulators or cybersecurity authorities if they fall short. The code promotes a common baseline, but also respects investments already made [9], meaning that if a country’s TSO already has a robust cyber program, the code will not force unnecessary changes, so long as outcomes are met.
Grid operators increasingly rely on AI for tasks such as predictive maintenance, real-time grid balancing, and autonomous responses to disturbances. The cybersecurity network code ensures that as AI adoption increases, its associated cyber risks are formally addressed at a pan-European level. For instance, if AI automates decision-making, operators must assess risks such as attackers feeding false data to manipulate AI outputs. Mitigation strategies may include sensor authentication, anomaly detection algorithms, and fallback manual control modes. These measures must be documented in the risk assessments required by the code, ensuring that AI systems in energy operations remain secure and resilient.
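The mitigation chain described here (sensor authentication, anomaly detection, and fallback manual control) can be sketched in a few lines. The following is a minimal, hypothetical illustration, not an operational design: it assumes a shared HMAC key per sensor (in practice, per-device keys would come from a hardware security module) and uses a crude rate-of-change plausibility check in place of a real anomaly detection algorithm.

```python
import hmac
import hashlib

SHARED_KEY = b"demo-key"  # assumption: per-device keys from an HSM in practice

def authenticate(reading: bytes, tag: bytes) -> bool:
    """Verify an HMAC tag so spoofed sensor frames are rejected."""
    expected = hmac.new(SHARED_KEY, reading, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def plausible(value_mw: float, last_value_mw: float, max_step_mw: float = 50.0) -> bool:
    """Crude anomaly check: reject physically implausible jumps between samples."""
    return abs(value_mw - last_value_mw) <= max_step_mw

def dispatch(value_mw: float, last_value_mw: float, reading: bytes, tag: bytes) -> str:
    # Fall back to manual control whenever authentication or plausibility fails,
    # mirroring the fallback manual control modes mentioned in the text.
    if not authenticate(reading, tag):
        return "MANUAL_FALLBACK: unauthenticated frame"
    if not plausible(value_mw, last_value_mw):
        return "MANUAL_FALLBACK: anomalous reading"
    return "AI_CONTROL: reading accepted"
```

Each branch of such a gate, and the rationale for its thresholds, is exactly the kind of control that would be documented in the risk assessments the network code requires.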
The Network Code on Cybersecurity effectively acts as an enforcer of cybersecurity-by-design in grid modernization. It acknowledges the “growing digitalisation and interconnection of national power systems” and responds by setting a European standard for cybersecurity [25]. In doing so, it directly supports the safe deployment of AI: an AI that is integrated following the code’s process will have been vetted for cyber resilience. Without such a framework, the worry is that different countries or companies might secure their AI differently, leaving weak links.
This legislation is relatively new, so its full impact will unfold in the coming years. But it clearly fills a gap by adding a sector-specific layer on top of general laws. The energy sector now has its own cybersecurity rulebook that must be followed alongside NIS2 and others, which should greatly clarify compliance for energy companies. General IT systems are governed by NIS2, while grid control and AI systems must also comply with the Network Code requirements. We will later discuss how this targeted approach compares to the U.S. or China, which may not have an exact analogue.

3.6. EU Cybersecurity Act and Certification Framework

The Cybersecurity Act, which came into force in June 2019, has two key components. First, it established ENISA, the EU Agency for Cybersecurity, as a permanent entity with an expanded mandate. Second, it introduced the European cybersecurity certification framework for ICT products, services, and processes [19]. While the act itself does not impose direct requirements on companies in the way GDPR or NIS2 do, it provides the tools and trust mechanisms that can greatly aid compliance and risk management for AI in energy. Key elements of the Act, and their relevance to AI in energy, include the following:
  • ENISA’s Role: The Act empowered ENISA to actively support the implementation of cybersecurity policies, including developing certification schemes and providing guidance [19,26]. ENISA has been actively involved in cybersecurity efforts for both the energy sector and AI. It has published reports on AI-related cybersecurity challenges, including one on AI for electricity demand forecasting, which highlights potential threats and corresponding security controls [10], and is developing frameworks for AI cybersecurity best practices [27,28]. These efforts help interpret high-level regulations into practical measures. An energy company looking to secure an AI system can refer to ENISA’s guidance to understand threats specific to industrial AI and how to counter them, feeding into their NIS2 and network code compliance.
  • European Cybersecurity Certification Framework: The Act lays out a structure by which EU-wide cybersecurity certification schemes can be created for different categories of products/services [10,29]. These certification schemes are typically voluntary unless mandated by later legislation and include assurance levels of Basic, Substantial, and High. This is particularly relevant for the energy sector, as certification can apply to smart grids, IoT, and cloud services, all of which are essential for AI applications. Notably, a certification scheme for general IT product security based on Common Criteria and the EU Cloud Certification Scheme (EUCS) have been in advanced development stages [30]. A future certification scheme for AI systems or energy sector IT could emerge, but until then, existing schemes provide a useful framework. For example, if an AI platform for energy management operates on a cloud service, an energy company might opt for a provider certified under the EUCS at the “High” assurance level, ensuring stringent security standards [31]. Similarly, the IoT components used in AI-driven smart grids might be certified under an EU IoT security scheme.
  • Synergy with the Cyber Resilience Act: The Cybersecurity Act’s framework essentially became the basis for the CRA’s future implementation. The CRA might leverage existing schemes or initiate new ones to certify that products meet its requirements. Energy companies can actively contribute to the development of certification schemes by engaging with standards organizations to ensure that energy-specific requirements, such as reliability, are addressed. This fosters a harmonized approach where independent certification replaces vendor-specific security claims, providing a higher level of assurance. Such certifications can also streamline procurement processes, enabling companies to prioritize certified AI systems for critical operations.
  • Global and Sectoral Influence: The Act also positions the EU as a leader in cybersecurity standard-setting [18]. For the energy sector, which operates within a globally interconnected framework involving cross-border grids and international equipment suppliers, EU cybersecurity standards could shape global norms. The Cybersecurity Act explicitly promotes the development of international standards for trustworthy technology, reinforcing the EU’s role in setting cybersecurity benchmarks that may influence regulations worldwide [18]. In AI and energy, this could lead to international guidelines or mutual recognition of certification, very relevant when compared with U.S. or Chinese approaches later.
Energy companies and AI developers can leverage the Cybersecurity Act in several practical ways. One key approach is certification. For instance, an AI-based control device for wind turbines could be certified under an appropriate cybersecurity scheme, possibly based on existing industrial cybersecurity standards, to demonstrate compliance with high-security requirements for grid operators and energy firms. Another approach is to utilize ENISA’s resources. ENISA has developed an AI Cybersecurity Framework, which provides best practices for securing AI applications. By adopting these guidelines, AI developers and energy companies can strengthen the security of their AI systems, ensuring compliance with industry standards and regulatory expectations [10,32]. Adopting these best practices can help meet multiple regulatory requirements simultaneously. They support compliance with the AI Act by ensuring AI robustness and reliability, while also aligning with NIS2 obligations related to risk management and cybersecurity resilience.
The Cybersecurity Act established the EU Cybersecurity Certification Group (ECCG) and various ad hoc working groups that include industry experts. Representatives from the energy sector participate to ensure that upcoming certification schemes align with sector-specific needs.
In summary, while the Cybersecurity Act does not directly impose obligations on companies, it forms the foundation of the cybersecurity ecosystem in which AI-driven energy solutions operate [10]. It provides key tools for trust, primarily through certification and expert guidance, which support compliance with other regulations such as the Cyber Resilience Act (CRA) and NIS2. The Act also solidifies ENISA’s role in advising on emerging technologies like AI, with AI threat landscape assessments explicitly within its mandate [19].
For energy stakeholders, actively engaging with the Cybersecurity Act’s resources, including new certification schemes and ENISA guidelines, is a proactive way to stay ahead of compliance demands and contribute to shaping secure AI adoption.
Table 3 shows a summary of key EU regulations and how they impact AI-driven digital solutions in the energy sector. The table outlines each law’s scope, relevance to AI in energy, and notable provisions or impacts. Together, these laws create a multifaceted compliance environment: AI-specific risks are covered by the AI Act; data issues by GDPR; cybersecurity by NIS2, CRA, and the Network Code; and overarching trust frameworks by the Cybersecurity Act.

4. Impact of Regulations on AI Development, Adoption, and Deployment in Energy

EU laws and regulations collectively shape how AI is developed, adopted, and deployed across the energy sector. In this section, we examine the sector-wide implications of the regulatory framework outlined above, drilling down into each segment of the energy value chain: generation (supply-side), transmission and distribution grids, consumption (demand-side), and energy markets. For each segment, we identify the relevant regulatory requirements, the compliance challenges faced by stakeholders, and any enabling effects or opportunities created by regulation. We also highlight legal uncertainties where rules are not yet clear-cut. The focus is on practical implications, examining how these laws impact real-world AI projects in the energy sector, from research and development to operational deployment.

4.1. AI in Energy Generation (Supply-Side) and Regulations

Context: On the generation side, AI is used for optimizing power plant performance (e.g., AI for predictive maintenance in thermal plants, or for wind turbine output prediction), managing renewable variability, automating control systems, and improving safety and efficiency. Generation assets are often owned by large utilities or independent power producers, which operate under heavy safety and reliability regulations even before AI.
Relevant Regulations and Impact:
  • AI Act (High-Risk Systems): Many generation facilities are part of critical infrastructure. If an AI system is introduced as a safety component in a power plant’s control system—for example, an AI that helps regulate reactor conditions in a nuclear plant or ensures safe load dispatch in a large hydro dam—it would likely be deemed high-risk under the AI Act’s critical infrastructure category [18]. Compliance would require the AI provider of the control system to conduct rigorous ex ante testing and certification of the AI. This could slow down AI adoption because AI providers must invest in documentation and risk assessments and possibly involve notified bodies for assessment. However, given that generation companies are accustomed to certification, e.g., turbines must be certified to grid codes, integrating an AI certification step might be feasible. A compliance challenge is the lack of established standards today for validating AI performance in control. Until formal standards are established, proving that an AI system meets the “state of the art” in accuracy and robustness may be challenging. This creates legal uncertainty that regulators need to address through guidelines or implementing acts.
  • GDPR (if personal data are involved): Power generation is primarily industrial and typically does not involve extensive personal data processing. However, certain cases require GDPR compliance. For example, AI systems optimizing distributed residential solar and battery systems may handle user data, overlapping with the consumer side. Within power plants, AI typically processes equipment sensor data, which are not personal. However, the GDPR becomes relevant when workforce data are involved, such as AI-driven workforce scheduling, or when AI-powered security systems process facial images, as these constitute personal data.
  • NIS2 and Cybersecurity/Network Code: Power generation, especially electricity generation, is part of the energy sector in NIS2 [22]. Large generators are considered essential entities. Therefore, any AI introduced in generation must be covered in the generator’s cybersecurity risk management. For instance, if a gas power plant deploys an AI for predictive maintenance that connects to cloud analytics, NIS2 would mandate assessing risks like data exfiltration or control override via that AI. The Network Code on Cybersecurity primarily focuses on grid operations, but also extends to generation assets, as TSOs depend on generator behavior for cross-border electricity flows. Risk assessments under the code may identify critical generation sites where digital systems, including AI-driven governors or automated voltage regulators, require cybersecurity risk mitigation measures [9]. This could drive sector-wide initiatives to enhance AI security in generator control systems, potentially leading to standardized configurations or emergency override mechanisms for AI anomalies. A key compliance challenge is integrating AI within both IT and OT environments. Many power plants operate under functional safety standards, such as IEC 61508, which govern traditional operational technology. Certifying AI within these frameworks is not straightforward, creating dual pressure for generation companies to maintain existing safety certifications while also meeting new cybersecurity and AI compliance requirements.
  • Cyber Resilience Act: If generators acquire new equipment with AI capabilities, such as a smart wind turbine controller with machine learning, the Cyber Resilience Act (CRA) ensures that the product is designed with cybersecurity in mind. This benefits generators by reducing concerns over fundamental security vulnerabilities. However, manufacturers must adapt to CRA requirements, which may initially delay product releases or slightly increase costs due to secure coding practices and external testing. In the medium term, the CRA is likely to enhance trust in AI-driven generation technologies. A wind farm operator, for example, can have greater confidence that an AI-enabled turbine bearing a CE mark under the CRA meets stringent security standards.
  • Cybersecurity Act (Certification): Large generators may rely on established certification schemes for different components, such as ISO/IEC 27001 for plant IT systems and IEC 62443 for industrial control system security. The EU cybersecurity certification framework could introduce sector-specific schemes, such as a “High” assurance certification for industrial automation security, which generators could use to certify AI-driven control systems. While not yet mandatory, forward-thinking companies may adopt these certifications to demonstrate due diligence, potentially lowering insurance costs and reducing liability risks.
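The risk-management thread running through the bullets above (identify critical digital assets, assess the risks AI introduces, such as data exfiltration or control override, and record mitigations) can be made concrete with a simplified risk register. The following sketch is purely illustrative: the 1–5 likelihood/impact scale, the threshold, and all class and field names are assumptions, not prescribed by NIS2 or the network code.

```python
from dataclasses import dataclass, field

@dataclass
class AiAssetRisk:
    """One entry in a simplified cyber risk register for an AI asset."""
    asset: str
    threat: str
    likelihood: int              # 1 (rare) .. 5 (frequent), illustrative scale
    impact: int                  # 1 (minor) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, as used in many risk matrices.
        return self.likelihood * self.impact

def critical_entries(register: list[AiAssetRisk], threshold: int = 12) -> list[AiAssetRisk]:
    """Entries above the risk threshold that need documented mitigation measures."""
    return sorted((r for r in register if r.score >= threshold),
                  key=lambda r: r.score, reverse=True)
```

In practice, such a register would feed directly into the documentation that NIS2 audits and network code risk assessments require.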
Compliance Challenges: A key challenge in AI adoption for power generation is the potential ambiguity in liability. If an AI system in a power plant fails, causing an outage or damage, it is unclear how existing regulations will allocate responsibility. The AI Act introduces obligations for both AI providers and users, but energy-specific liability frameworks, such as grid codes and generator connection requirements, may still hold operators accountable for failures. This overlapping liability could slow AI adoption unless regulatory clarity emerges. Generators must also invest in workforce training. Control room engineers, for example, will need to understand AI-driven decisions to meet the AI Act’s human oversight requirements. This may require a cultural shift in traditionally conservative power generation engineering, where AI adoption has been met with skepticism. Regulation can help bridge this gap by enforcing transparency, logging, and explainability under the AI Act, reducing AI’s “black box” nature and increasing trust in its decision-making.
Enablers: Regulatory initiatives can accelerate modernization by fostering experimentation and innovation. EU and national bodies may establish regulatory sandboxes for energy AI, providing controlled environments where generators can test AI systems under regulatory supervision. These sandboxes could temporarily waive certain penalties, allowing companies to trial AI solutions without full compliance risks. While the laws discussed above do not explicitly mandate sandboxes, the EU AI Act encourages their use in its recitals and promotes codes of conduct for non-high-risk AI. Some EU Member States, such as the Netherlands [28] and Germany [34], have already introduced sandbox programs for smart grid innovation. These initiatives allow companies to test AI-driven solutions like virtual power plants under regulatory oversight, helping to refine future regulations and create a clearer compliance pathway for AI adoption in the energy sector.
Summary for Generation: Regulations ensure that AI in power generation is safe, cybersecure, and privacy-compliant, where relevant. The primary burden lies in demonstrating AI safety and robustness under the AI Act and integrating AI into existing safety and cybersecurity frameworks such as NIS2. Generators that proactively engage with regulators and contribute to developing AI testing standards for power generation may turn compliance into a competitive advantage, leading to AI adoption while others take a wait-and-see approach. The legal trajectory suggests that AI will become a standard feature in power generation, much like digital control systems did in the past. However, its adoption will come with formal certification and oversight requirements to ensure reliability and prevent incidents.

4.2. AI in Transmission and Distribution Grids and Regulations

Context: Transmission system operators and distribution system operators are increasingly using AI for applications such as predictive asset maintenance, network optimization, outage detection, self-healing grids, and the integration of distributed energy resources. Predictive asset maintenance helps identify power lines and transformers that are likely to fail. Network optimization includes dynamic line rating, voltage control, and congestion management. The grid is a critical infrastructure where reliability is essential. AI in this sector is subject to the highest level of regulatory scrutiny. Compliance with the AI Act, NIS2, the Network Code on Cybersecurity, and sector-specific standards is necessary to ensure AI-driven grid operations are safe, resilient, and aligned with cybersecurity best practices.
Relevant Regulations and Impact:
  • AI Act (High-Risk and Requirements): AI used by TSOs/DSOs for grid management will generally fall under high-risk AI, as it involves managing critical electricity network infrastructure [18]. The AI Act’s strict obligations apply to AI systems used in grid operations. Grid operators and technology vendors must implement comprehensive risk management measures, which include identifying potential failure modes, ensuring high accuracy through rigorous testing, and integrating failsafe mechanisms. AI false positives in fault detection, for example, must be accounted for to prevent unnecessary disruptions. Extensive documentation is required to demonstrate compliance. A particularly relevant requirement is human oversight, which mandates that high-risk AI systems must be effectively monitored by humans, with intervention or override capabilities when necessary. This aligns with existing grid operation practices, where operators can switch to manual control if automation fails. However, under the AI Act, this process must be formally structured. For example, if an AI system recommends switching off a power line to prevent overload, a human operator should verify the action, particularly in the early stages of adoption. Over time, as trust in AI grows, some decisions may become fully automated with only ex-post human monitoring. However, under the AI Act, operators must prove that the AI system is reliable and trustworthy before granting it greater autonomy.
  • NIS2 and Network Code (Cybersecurity): TSOs and large DSOs are explicitly essential entities under NIS2 and have some of the strictest obligations [22]. The new Network Code on Cybersecurity adds a tailored layer, requiring recurring cyber risk assessments focused on grid operations [9]. For AI in grids, continuous evaluation is essential to understand how AI alters the cybersecurity threat landscape. A distribution system operator deploying AI for distribution automation must analyze new risks, such as the possibility of an attacker manipulating sensor inputs to create false load readings and trigger a power outage. Mitigation strategies should include authentication of sensor data to prevent spoofing, redundancies where multiple AI models cross-check results for consistency, and controlled execution where AI provides recommendations, but a secure control system validates and executes them. Compliance is not a one-time process, but requires ongoing risk assessment and adaptation. The Network Code on Cybersecurity envisions a dynamic security process that evolves alongside emerging threats, ensuring that AI-enabled grid operations remain resilient over time [9]. This continuous loop could slow the deployment of AI if each new AI app triggers a lengthy risk assessment, but it is vital for security.
  • Cyber Resilience Act (Grid Equipment): Much grid equipment (RTUs, IEDs, sensors) and software will need CRA compliance, as discussed. For grid operators, one impact is that they will have to ensure new procurements after 2027 are CRA-compliant. If a DSO purchases AI-powered grid management software in 2028 that lacks CE marking under the CRA, it may face legal issues using it. Therefore, operators will likely update technical specifications to require CRA compliance from vendors. The CRA also covers routers, switches, and other network gear in substations, much of which has historically shipped with default passwords, a well-known weak point. The CRA should eliminate such weak points by forcing vendors to ship secure defaults [22]. Hence, the grid’s digital infrastructure becomes more secure by default, reducing the risk that an AI algorithm operating correctly could be undermined by underlying system hacks.
  • GDPR (Smart Grid Data): At the TSO level, personal data usage is minimal, but at the DSO level, particularly in low-voltage networks, there is a significant overlap with consumer data. DSOs manage smart meter data, which qualifies as personal data under the GDPR. When AI is applied to consumption or prosumer data, such as for detecting non-technical losses or forecasting local demand at the transformer level, GDPR compliance becomes mandatory. As regulated monopolies, DSOs must strictly adhere to purpose limitations when using customer data. If AI applications require data beyond what is essential for grid operation, they may conflict with the GDPR’s purpose limitation principle. For example, using smart meter data to infer household occupancy patterns might not be legally permitted unless the data are anonymized or aggregated. To navigate these restrictions, DSOs are likely to anonymize or aggregate data when training AI models for load forecasting. This approach aligns with regulatory best practices and is encouraged by ENISA’s privacy guidelines, reducing compliance risks while maintaining AI’s effectiveness in grid management [19].
  • Sector Regulations and Codes: Beyond the EU’s overarching regulations, transmission system operators and distribution system operators must adhere to technical standards such as ENTSO-E operational codes and national grid codes. These standards often mandate reliability criteria like N-1 security, ensuring that the grid can withstand the failure of a single component without widespread outages. When integrating AI into grid operations, operators must ensure that these systems do not compromise established reliability standards. While AI can optimize grid performance, excessive optimization could reduce necessary safety margins. Regulators may require demonstration that any automated system, including AI, maintains or enhances reliability. This could involve obtaining regulatory approval before deploying fully autonomous AI systems, such as voltage control mechanisms, to ensure compliance with operational criteria. Recognizing AI’s increasing role in grid management, some EU national energy regulators have initiated consultations to update regulatory frameworks. The European Commission has launched a stakeholder survey to gather input from grid operators and regulators on AI integration [35]. Additionally, the Eurelectric Action Plan on Grids outlines industry perspectives on the regulatory evolution needed for AI-driven grid management [36].
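The GDPR point above notes that DSOs are likely to anonymize or aggregate smart meter data before training load-forecasting models. A minimal sketch of transformer-level aggregation is shown below; the minimum group size of five is an illustrative choice to prevent singling out individual households, not a legal standard, and all function and field names are hypothetical.

```python
from collections import defaultdict
from statistics import mean

def aggregate_to_transformer(readings, min_group_size=5):
    """
    Aggregate per-household smart meter readings (kWh) to transformer level
    before they are used to train a load-forecasting model. Groups smaller
    than `min_group_size` are dropped so that no individual household can be
    singled out from the aggregate.
    readings: iterable of (transformer_id, household_id, kwh) tuples
    """
    groups = defaultdict(list)
    for transformer_id, _household_id, kwh in readings:
        groups[transformer_id].append(kwh)
    return {
        t: {"total_kwh": sum(vals), "mean_kwh": mean(vals), "households": len(vals)}
        for t, vals in groups.items()
        if len(vals) >= min_group_size
    }
```

Training on aggregates of this kind keeps the forecasting task at the transformer level, which is what the DSO actually needs for grid operation, while reducing exposure under the GDPR’s purpose limitation principle.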
Compliance Challenges: For grid entities, a major challenge is integrating multiple compliance regimes. They must navigate electricity-specific regulations, safety standards, NIS2, the Network Code on Cybersecurity, and the AI Act, creating a complex compliance matrix. Avoiding duplication is critical, but one key uncertainty is how AI Act conformity assessments will align with existing power system equipment certification.
For example, if AI is embedded in protection relay devices, these devices are currently certified under electrical safety and reliability standards. The AI component, however, may require a separate conformity assessment under the AI Act. Ideally, standards bodies will establish harmonized frameworks so that a single assessment can cover both functional safety requirements and AI-specific obligations. Until such harmonization occurs, grid operators and technology vendors may need to undergo parallel certification processes, increasing compliance complexity and costs.
Another challenge is skills and culture: Grid operators require data scientists and cybersecurity experts to work closely with grid engineers and compliance officers. Ensuring effective communication across these disciplines is challenging, as it involves translating legal requirements into technical specifications and vice versa. This complexity makes interdisciplinary teams essential, as the regulatory environment directly necessitates collaboration between legal, technical, and operational experts.
Enablers: Regulation promotes best practices that strengthen grid operations. The AI Act’s transparency requirements encourage operators to maintain clear documentation on how AI systems make decisions. This not only satisfies regulatory obligations but also improves internal troubleshooting and system optimization. NIS2’s focus on supply chain security is likely to foster closer collaboration between TSOs, DSOs, and technology vendors. Vendors have a strong incentive to develop secure AI solutions, while operators must ensure compliance, making the co-development of more secure AI systems a natural outcome. Regulatory emphasis on cybersecurity may also drive innovation in adversarial AI defenses for the energy sector. Recognizing that AI can be targeted by cyberattacks, industry and academia may invest more in developing AI that detects spoofing attempts, quantifies uncertainty, and enhances resilience against manipulation. This growing focus on AI security stems from the regulatory acknowledgment of cyber risks in critical infrastructure [9].
Digital twins provide a valuable tool for grid operators to test AI algorithms in a simulated environment before deploying them in real-world operations. By running AI models on digital replicas of the grid, operators can analyze system behavior under various scenarios, including edge cases, and demonstrate due diligence to regulators. This approach helps address regulatory demands to prove AI safety and reliability before full-scale implementation.
EU regulators generally support sandbox testing and phased deployment strategies. One common approach is to introduce AI in advisory mode first, where it provides recommendations but does not take direct control. As confidence in its performance grows, operators can gradually increase automation while maintaining human oversight. This staged adoption aligns with regulatory expectations and minimizes risks while integrating AI into critical grid functions.
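The advisory-mode pattern can be sketched as a thin supervisory wrapper: the AI proposes a setpoint, every recommendation is logged for audit, and the recommendation is only actuated, clamped to operator-approved limits, once supervised automation is enabled. This is a minimal illustration assuming hypothetical names and voltage bounds, not an implementation from any real grid system.

```python
from dataclasses import dataclass, field


@dataclass
class AdvisorySupervisor:
    """Staged-deployment wrapper for an AI controller (illustrative only).

    In advisory mode the AI recommendation is logged but not applied;
    in supervised mode it is applied, clamped to safety limits.
    """
    lower: float               # operator-approved lower bound (e.g., p.u. voltage)
    upper: float               # operator-approved upper bound
    supervised: bool = False   # False = advisory mode (log only)
    log: list = field(default_factory=list)

    def decide(self, current: float, ai_recommendation: float) -> float:
        # Clamp the AI output to the approved range before any use.
        clamped = min(max(ai_recommendation, self.lower), self.upper)
        self.log.append((current, ai_recommendation, clamped, self.supervised))
        if not self.supervised:
            return current     # advisory mode: keep the existing setpoint
        return clamped         # supervised mode: apply within limits only
```

The audit log doubles as the documentation trail that transparency-oriented rules reward: operators can compare what the AI recommended against what was actually done before widening its authority.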
Summary for Grids: The regulatory framework essentially mandates that AI in grids be implemented with extreme care and security. The margin for error is low, matching the criticality of continuous power supply. Compliance is complex but ensures that when AI is in place, it will not be a flimsy add-on but a robust, well-governed part of grid operations. Over time, as standards and best practices solidify, compliance will become more routine (much like grid protection schemes today follow well-known standards). In the near term, TSOs/DSOs that pave the way might share lessons through ENTSO-E or EU-funded projects, helping harmonize approaches. The laws serve to reassure the public and stakeholders that the invisible algorithms now controlling parts of the grid are held to high standards, reducing reluctance to embrace these new tools.

4.3. AI in Energy Consumption (Demand-Side) and Regulations

Context: The demand side encompasses a wide range of consumer-facing AI applications, including smart home energy management systems, AI-driven demand response programs, personalized energy efficiency recommendations, and chatbots for customer service. AI platforms also assist consumers in optimizing their energy usage or switching tariffs. Beyond residential users, industrial and commercial consumers use AI for energy management, such as AI-driven building energy management systems and industrial load shifting. In recent years, prosumers who generate electricity through rooftop solar, battery storage, or electric vehicles have increasingly relied on AI to optimize when to consume, store, or sell electricity. These demand-side AI solutions often process personal data and directly affect consumers, raising concerns about privacy, data protection, and consumer rights.
Relevant Regulations and Impact:
  • GDPR (and ePrivacy): The most relevant regulation on the demand side is the GDPR, which governs smart meter data as personal information. Smart meters collect granular household electricity data, revealing usage patterns and daily routines. Any AI processing these data must ensure compliance. For example, an AI-powered app analyzing smart meter data for energy-saving tips must obtain user consent or rely on a legal basis like legitimate interest, with proper assessments. In many EU countries, using smart meter data beyond billing requires explicit consent. If AI profiles customers into usage patterns for targeted offers, individuals may have the right to object. The draft ePrivacy Regulation, if enacted, could further regulate data use from smart devices. Compliance requires clear and plain-language privacy notices explaining AI data processing and automated decisions. Regulatory bodies such as the CNIL (French DPA) have issued GDPR guidance on AI [37], emphasizing the importance of choosing the right legal basis and conducting Data Protection Impact Assessments (DPIAs). Given the systematic monitoring of consumption data, demand-side AI providers will almost always need a DPIA.
  • AI Act (Limited-Risk Transparency and Some High-Risk): Most demand-side AI tools would not be classified as high-risk under the AI Act since they are not safety-critical. However, AI used for access to essential private services, such as credit scoring, is considered high-risk. In rare cases, this could apply to AI systems that determine eligibility for special energy tariffs if they affect financial obligations, similar to credit decisions. More relevant are the transparency requirements for AI that interacts with users. If a consumer engages with an AI chatbot about their energy bill, the AI Act requires the chatbot to disclose that it is AI and not a human [3]. Similarly, if an AI generates content, such as an energy advisory report, it may need to identify itself as AI-generated. The Act also promotes voluntary codes of conduct for lower-risk AI. Companies offering AI-powered smart thermostats, for example, may adopt an EU-wide code to ensure fairness and transparency as a best practice.
  • Consumer Protection Law and Product Liability: While not explicitly addressing AI, general consumer protection laws such as the Unfair Commercial Practices Directive and the General Product Safety Regulation still apply. If an AI-driven product misleads consumers, for example, by falsely claiming greater energy savings than it delivers, regulators can intervene. Similarly, if a smart home AI device malfunctions and causes damage, such as an AI thermostat overheating an appliance and starting a fire, liability may fall under product liability law. The EU is updating the Product Liability Directive to cover software-related damage, including AI failures. These changes aim to make it easier for consumers to claim compensation, shifting the burden of proof in some cases when AI causes harm [38]. Companies must ensure truthful marketing of AI benefits and maintain product safety to comply with these evolving regulations. The AI Act will also amend product safety legislation. For example, the Machinery Regulation now includes provisions ensuring that AI-powered machines remain safe, reinforcing the importance of aligning AI-driven energy products with existing safety standards [39].
  • Cybersecurity (CRA, etc.): The demand side is experiencing a surge in Internet of Things devices such as smart plugs, thermostats, home batteries, and EV chargers, many of which include connectivity and artificial intelligence. These devices can serve as entry points for cyberattacks if not properly secured. The Cyber Resilience Act enhances security by enforcing stricter cybersecurity requirements, protecting both consumers and the grid. For instance, unsecured high-wattage IoT devices can be compromised by malware. If multiple devices are hijacked, attackers could manipulate them simultaneously to disrupt grid stability, a concern raised by researchers. Compliance with the Cyber Resilience Act will mitigate these risks by mandating measures like unique passwords and secure update mechanisms, strengthening AI-powered home devices against cyber threats [22]. While the Network and Information Security Directive 2 primarily targets operators of essential services and digital infrastructure, it does not directly apply to individual consumers or most device manufacturers unless they are classified as important entities in the digital sector. However, fostering a secure ecosystem benefits the reliability of demand-side AI applications. In the interim, organizations such as the European Union Agency for Cybersecurity provide guidelines for IoT security that manufacturers are encouraged to follow until the Cyber Resilience Act is fully enforced [32].
  • Energy-Specific Data Regulations: The European Union has introduced regulations to facilitate data sharing in the energy sector, promoting innovation while ensuring privacy protection. The Electricity Directive 2019/944 emphasizes data interoperability and grants consumers the right to access their energy consumption data and authorize third-party service providers, such as AI companies, to use it. This framework requires consumers to provide consent for data sharing, and distribution system operators must ensure data security [40]. This approach aligns with the General Data Protection Regulation, as consent serves as the legal basis for data processing, ensuring a level playing field. For AI startups offering new analytical applications, these laws are enabling. They allow access to smart meter data via standard application programming interfaces with user consent, eliminating the need for proprietary sensors. However, once they receive this data, they assume the responsibilities of a data controller under the GDPR [41]. Additionally, the EU Data Act promotes seamless data sharing between holders and users while maintaining confidentiality. This regulation supports innovative energy services by clarifying the rules on data usage and access [42]. These frameworks ensure secure and transparent handling of consumer data, fostering trust and enabling AI-driven services that optimize energy consumption.
Compliance Challenges: Demand-side AI providers, such as startups and device manufacturers, often face significant regulatory challenges that can be particularly burdensome relative to their size. Complying with the General Data Protection Regulation requires maintaining meticulous records, handling user access requests, and ensuring the deletion of data upon request. These obligations can be onerous, but are unavoidable, as non-compliance risks substantial fines. The stringent requirements of the GDPR can necessitate the reallocation of resources and may even deter small companies from engaging in new ventures [43]. Navigating the forthcoming AI Act presents additional complexities. While many demand-side AI applications may fall under the low-risk category, providers must still be aware of transparency obligations. Ensuring accessibility and fairness is also critical. For instance, an AI energy advisor might inadvertently provide suboptimal recommendations to certain customers, such as those with limited data or older devices. Although not illegal per se, companies could face reputational or legal issues if AI decisions are perceived as discriminatory. Another major challenge is fragmented guidance. Data protection is overseen by data protection authorities, energy services by energy regulators, and product safety by separate agencies. Small companies may struggle to satisfy all these regulatory bodies simultaneously. For example, they might need to comply with technical rules from energy regulators when interfacing with the grid while also adhering to privacy laws. The EU is attempting to harmonize these regulations through initiatives like the Data Governance Act, which aims to increase trust in data sharing and overcome technical obstacles [42].
Enablers: Regulations play a crucial role in building consumer trust, which is essential for the adoption of smart home technologies. Surveys show that consumers are often concerned about the privacy of smart home devices [44]. The enforcement of the GDPR, with its strict penalties for data misuse, reassures consumers and encourages them to adopt new services. Similarly, the forthcoming AI Act’s labeling requirements may allow energy companies to certify their AI services as compliant with EU ethical standards, turning regulatory adherence into a marketing advantage, much like how some products highlight GDPR compliance or security certifications.
Additionally, regulatory initiatives such as the EU Green Deal and digitalization strategy promote demand-side flexibility by encouraging consumer participation in energy markets through smart technologies. To support this, the European Commission and national bodies have funded pilot projects and established frameworks for demand response aggregators. Many of these pilots incorporate AI tools for forecasting or automating responses. These projects often operate in regulatory sandboxes or under temporary exemptions, providing space for innovation. The insights gained from these pilots contribute to formal regulation development. For example, if an AI-driven demand response scheme proves effective, regulators may integrate such schemes into the energy market framework while ensuring that they do not harm consumers or grid stability.
Summary for Consumption: On the demand side, regulations primarily protect consumers by safeguarding their privacy, security, and rights in the face of AI technologies. Compliance ensures that AI innovations, such as smarter home energy systems, do not come at the expense of personal privacy or consumer well-being. While these rules introduce compliance costs and complexity, they also create clear standards that allow responsible companies to differentiate themselves. Over time, as consumers become more comfortable with AI managing aspects of their energy usage, trust built through strong regulations may drive widespread adoption. This could lead to significant contributions to energy efficiency and peak load management, benefiting both climate goals and consumer electricity costs. However, ongoing regulatory vigilance is necessary. If AI-driven variable pricing causes confusion or unexpectedly higher bills for certain consumers, consumer protection rules may require updates or stricter enforcement. Ensuring transparency in algorithmic billing is one possible regulatory response. The regulation of AI in the energy sector will likely continue to evolve, adapting to emerging issues while maintaining the core goal of empowering consumers through technology rather than exploiting them.

4.4. AI in Energy Markets and Trading, and Regulations

Context: Energy markets, including wholesale electricity and gas markets, retail competition, and new flexibility markets, are increasingly relying on AI algorithms. These are used for automated trading, bidding strategies, market forecasting, anomaly detection, and portfolio optimization. Market operators such as power exchanges and participants, including utilities, traders, and large consumers, use AI to manage market complexity. For example, AI can analyze vast datasets to forecast prices or make arbitrage decisions. However, algorithmic decision-making raises concerns about fair competition, transparency, and systemic risks. Issues such as flash crashes or unintended coordination between algorithms could disrupt market stability, requiring careful oversight and regulatory measures.
Relevant Regulations and Impact:
  • AI Act (Gaps in High-Risk Coverage): Interestingly, the EU AI Act does not explicitly classify financial or energy trading algorithms as high-risk, unlike financial credit scoring. Research highlights that risks such as market dominance and price manipulation by AI are not directly addressed within the AI Act’s current scope [1]. This means that an AI system used for energy trading is likely categorized as low or minimal risk under the regulation, provided it does not directly affect critical infrastructure operations. As a result, such AI would not require a conformity assessment or registration. This could be seen as a regulatory gap, considering that an errant algorithm could cause widespread economic disruption or enable anti-competitive behavior. There have been calls to address these risks more explicitly [1]. For now, the AI Act may only impose transparency obligations in cases where an AI interacts with a human trader, though this is an unlikely scenario.
  • Market Regulations (REMIT and Competition Law): The EU regulates wholesale energy markets through the Regulation on Wholesale Energy Market Integrity and Transparency (REMIT), which prohibits market manipulation and insider trading in electricity and gas markets. REMIT mandates trading surveillance to detect abusive practices. Companies using AI for trading must ensure that their AI does not engage in manipulative behaviors, such as placing and withdrawing orders to influence prices, a tactic known as layering or spoofing, which has led to penalties in other financial markets. REMIT applies regardless of whether trades are executed by a human or AI, making companies fully accountable. If an AI trading strategy causes an artificial price spike, the company could face an investigation or fines, even if there was no intent to manipulate the market [1]. EU competition law is also relevant, as AI could lead to tacit collusion, where algorithms learn to coordinate prices without explicit agreements [45]. While collusion is illegal under Article 101 TFEU (cartel law), proving AI-driven tacit collusion is complex. There is no law explicitly banning specific AI pricing strategies, but regulators caution that using common third-party AI platforms or pricing algorithms could inadvertently facilitate collusion, exposing companies to antitrust prosecution. Legal advisors are urging firms to consider antitrust compliance when deploying AI in pricing decisions, as enforcement approaches to algorithm-driven parallel behavior remain uncertain.
  • Financial Regulations (if applicable): Some advanced energy trading activities, such as derivatives and futures, fall under financial regulations like Markets in Financial Instruments Directive (MiFID II) and Market Abuse Regulation (MAR). MiFID II includes provisions for algorithmic trading in financial markets, requiring high-frequency algorithmic trading firms to be authorized and implement risk controls to prevent disorderly markets [46]. If an energy company engages in such trading activities, such as electricity derivatives, using AI, these rules apply. In practice, larger utilities often have trading divisions that are already under financial supervision. They must notify exchanges about their algorithmic trading techniques and ensure thorough testing of algorithms to avoid instability [47]. Therefore, energy AI traders might already be in compliance with these existing frameworks. MAR also imposes controls on algorithmic trading systems to prevent abusive practices, ensuring that firms deploying AI-based trading strategies maintain robust monitoring and safeguards [48].
  • Cybersecurity and Resilience: Market operators, such as EPEX Spot and national exchanges, along with aggregators running platforms, are likely classified under the NIS2 Directive as digital infrastructure or financial market infrastructure entities [22,49]. This classification mandates robust cybersecurity measures for critical systems, including algorithmic matching engines and AI-driven market monitoring tools. A security breach or manipulation of these systems could lead to significant disruptions, such as false signals causing price volatility. Consequently, NIS2, alongside the general principles of the Network Code (which primarily focuses on grid operations), emphasizes the necessity for strong defensive measures [50]. Also, the Digital Operational Resilience Act (DORA) aims to enhance the digital operational resilience of financial entities within the EU, including those in the energy sector, if they are considered part of the financial market infrastructure. DORA ensures that critical trading systems are equipped with backup and recovery mechanisms to maintain operational continuity. While DORA does not specifically target AI systems, these systems are integral to the overall resilience planning mandated by the regulation [51].
  • Transparency and Accountability: The use of AI in markets raises calls for transparency. For instance, if an AI system autonomously makes bidding decisions in a capacity auction, questions arise regarding whether regulators or market operators should have the authority to audit the AI’s decision-making processes to ensure compliance and prevent manipulative practices. Currently, there is no explicit legislation mandating companies to disclose their proprietary algorithms, as these are often considered trade secrets. The EU’s Artificial Intelligence Act primarily focuses on user notification and transparency, which may not directly address the need for regulatory access to algorithmic logic in this context. However, existing financial regulations do require the maintenance of logs for automated trading decisions to facilitate oversight. This practice could extend to energy markets, where regulators might, in the future, request algorithmic trading policies from energy firms, similar to the requirements imposed by financial regulators. The Agency for the Cooperation of Energy Regulators (ACER) currently monitors trading patterns under the Regulation on Wholesale Energy Market Integrity and Transparency (REMIT). ACER is exploring the use of AI tools to enhance its surveillance capabilities, aiming to efficiently detect and analyze unusual trading behaviors that could indicate market manipulation [52,53].
Compliance Challenges: A major challenge is that the regulatory framework has not yet been fully adapted to AI in markets. Companies might exploit this gap for a competitive edge, but doing so carries the risk that regulation catches up suddenly. For example, if tacit collusion via AI becomes evident, a regulatory crackdown or new rules may follow. Companies should conduct compliance reviews of their AI models; that is, they should test that their algorithms do not consistently set bids in a way that could be seen as manipulative. Another challenge is liability for AI errors: if an AI trader causes a large loss, shareholders or counterparties might litigate. Internally, governance is needed: many firms are instituting AI oversight committees to evaluate such risks.
Enablers: On the other hand, AI can help with regulatory compliance itself. Energy exchanges and regulators can use AI to detect abuse (as mentioned). Companies can use AI to ensure that they follow complex market rules. For instance, AI tools can check that their bids respect all constraints or optimize within regulatory limits. In the EU’s push for more renewable and flexible markets, AI can enable new market entrants, like virtual power plant operators, to manage complexity. The regulations around aggregators have been updated, allowing them to bid into markets. Here, AI is a tool that allows them to do so effectively. Regulations that open markets to new players, as outlined in the Clean Energy Package directives, indirectly drive AI adoption. Without AI, these players could not handle the data and speed required. Furthermore, global best-practices from financial markets might be voluntarily adopted by energy traders to preempt stricter rules. For example, a power trading firm might implement an internal rule that any algorithm must be tested under simulated conditions of extreme volatility to ensure that it does not behave erratically, similar to testing in stock trading.
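An internal compliance review of a trading algorithm can include simple statistical screens. The sketch below flags a strategy whose order-cancellation ratio is suspiciously high, one pattern associated with layering and spoofing; the threshold, minimum sample size, and data shape are illustrative assumptions rather than regulatory criteria, and a real surveillance system would combine many such signals.

```python
def flag_layering_risk(orders, cancel_ratio_max=0.9, min_orders=20):
    """Simplified compliance screen for one layering/spoofing indicator.

    `orders` is a list of dicts with a 'status' key ('filled' or
    'cancelled'). Returns True when the cancellation ratio exceeds
    `cancel_ratio_max`. Thresholds are illustrative, not legal tests.
    """
    if len(orders) < min_orders:
        return False  # too little activity to judge reliably
    cancelled = sum(1 for o in orders if o["status"] == "cancelled")
    return cancelled / len(orders) > cancel_ratio_max
```

Running such screens on simulated extreme-volatility scenarios, as the text suggests, lets a firm document that its algorithms were tested for manipulative patterns before deployment.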
Legal Uncertainties and Gaps: The governance of AI in markets has gaps: the AI Act does not cover it, and oversight relies on general market law that was not written with AI in mind [1]. Some scholars and think tanks suggest updating competition law doctrine to handle algorithmic tacit collusion, or even amending the AI Act to classify AI in crucial economic decision-making as high-risk. Until such changes occur, companies operate in a quasi-gray area, where ethics and self-regulation play a role. The EU's high-level expert groups have recommended vigilance on AI in finance and trade, so further policy moves may follow.
Summary for Markets: In energy markets, existing integrity and competition regulations primarily govern AI-driven activities. So far, these regulations have been effective, as no major AI-driven crises have occurred in the EU energy markets. However, regulators remain vigilant. Companies should integrate compliance checks into AI development to ensure adherence to REMIT and competition rules. The approach balances acceptance with oversight. AI is encouraged for efficiency, but any misuse will face strict enforcement.
In conclusion, EU regulations shape AI’s role across generation, grids, consumption, and markets. They ensure AI is safe, secure, fair, and aligned with sector objectives but also introduce challenges such as higher costs, expertise demands, and regulatory uncertainty. AI development in energy cannot be limited to engineers. It requires a multidisciplinary approach that includes legal and regulatory planning. When properly implemented, regulation and innovation can complement each other. Regulation sets necessary guardrails that, as one analysis suggests, guide the “AI-energy power couple” toward societal benefits rather than allowing unchecked growth [12]. The next sections will compare how this EU approach stands relative to other jurisdictions and then address overarching challenges and recommendations.

5. Comparative Analysis: EU vs. U.S. and China

The three jurisdictions examined illustrate fundamentally different governance logics for AI in the energy sector. The European Union applies a rules-driven and preventive model that establishes detailed ex-ante obligations to embed safety, ethics, and cybersecurity before deployment. The United States relies on a market-driven framework that builds on existing sectoral statutes and voluntary guidelines, intervening mainly ex post when reliability, competition, or consumer protection issues arise. China pursues a state-driven strategy that couples ambitious industrial objectives with top-down controls, enabling rapid national roll-out of AI while maintaining strict oversight of data security and social stability. The following subsections unpack these trajectories in greater detail and assess their implications for innovation, risk management, and international regulatory alignment.

5.1. United States: Sectoral and Market-Driven Approach with Emerging Oversight

Regulatory Framework: As of 2025, the United States lacks a comprehensive AI law comparable to the EU’s AI Act. Instead, AI governance relies on sector-specific regulations, soft law through guidelines and frameworks, and existing laws such as anti-discrimination and product liability. The U.S. approach has been more laissez-faire, prioritizing innovation and voluntary standards, though federal attention to AI risks is increasing.
Key Aspects in the U.S.:
  • Energy Industry Regulators: The Federal Energy Regulatory Commission (FERC) has not issued AI-specific regulations. However, it oversees grid reliability standards developed by the North American Electric Reliability Corporation (NERC), which indirectly apply to technologies used in grid operations. NERC Critical Infrastructure Protection (CIP) standards impose cybersecurity requirements on bulk power system entities [54]. If a utility deploys AI in control systems, it must still comply with CIP standards for electronic security perimeters and incident reporting. Similar to NIS2, U.S. grid operators must secure their systems, but compliance falls under existing CIP rules rather than an AI-specific law.
  • Data Privacy: The U.S. does not have a nationwide equivalent to the GDPR. Instead, state laws such as California’s CCPA/CPRA and sector-specific regulations govern data privacy. In the energy sector, many states have rules protecting utility customer data. State public utility commissions often enforce “customer data privacy” regulations that restrict how utilities share or use consumption data, with requirements varying by state. AI handling smart meter data in California, for instance, must comply with the CCPA, ensuring consumer data rights, and adhere to utility commission rules on data consent. Generally, U.S. utilities secure customer consent for third-party energy data sharing through contracts rather than through legal mandates comparable to the EU’s.
  • AI Guidelines and Initiatives: In April 2024, the U.S. Department of Energy released AI for Energy: Opportunities for a Modern Grid and Clean Energy Economy, the first agency-level roadmap that maps AI use-cases to concrete grid-modernization objectives and sketches forthcoming performance metrics for federally funded pilots [55]. Although purely advisory, the roadmap signals the criteria DOE is likely to embed in future funding calls and voluntary demonstration programs. More broadly, the DOE’s long-running “Artificial Intelligence for Energy” initiative continues to sponsor R&D and issue best-practice notes, but it imposes no binding obligations on utilities [12]. Complementing DOE’s work, the National Institute of Standards and Technology published a voluntary AI Risk Management Framework (AI RMF) in 2023 to guide organizations on bias, transparency, and security [56]. Energy companies that position themselves on the technological frontier have already begun aligning internal policies with these documents. At the federal level, the White House “AI Bill of Rights” blueprint (2022) articulates principles—safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives—that utilities can incorporate into customer-facing AI applications [57].
  • Recent Developments: Recognizing critical infrastructure stakes, President Biden’s Executive Order on AI (October 2023) explicitly calls for action to ensure AI’s safety in critical sectors [58]. It directs the DOE and the Department of Homeland Security to develop standards and practices for AI in critical infrastructure [12]. This marks a notable shift towards greater government oversight. Although the Executive Order is not a law, it likely heralds more concrete measures, possibly binding guidance or even regulations for areas such as AI in grid operations. The EO also emphasizes cybersecurity for AI systems and could lead to more investment in testing AI for grid contingencies [12].
Governance Philosophy: The U.S. favors industry self-regulation and ex-post enforcement over ex-ante rules, allowing innovation to develop while using existing laws to address harm through courts or regulators. In energy, regulations are “technology-neutral”, focusing on reliability and market rules without specifying whether tasks are performed by humans or AI. For example, FERC prioritizes grid stability and allows AI use as long as reliability standards are met. If AI causes a violation, such as a blackout due to maloperation, penalties can be enforced under existing reliability laws.
This approach may leave gaps in addressing novel risks, such as algorithmic collusion. U.S. antitrust agencies have not issued formal policies on AI collusion, but the Department of Justice and the Federal Trade Commission have warned that algorithm use does not exempt companies from liability if collusion occurs [59]. Companies are expected to ensure that their AI systems comply with existing laws.
Comparison with EU:
  • Flexibility vs. Certainty: Compared to the EU, the U.S. framework is more flexible and less prescriptive. This favors innovation, since companies can experiment without waiting for approvals, but it also produces uncertainty and uneven practices. In the EU, an energy AI provider can consult the AI Act and plan compliance; in the U.S., the same provider must guess how various general laws might apply and monitor evolving agency guidance.
  • Cybersecurity: U.S. energy cybersecurity is governed by mandatory NERC standards for bulk power and a patchwork of rules for distribution. NIS2’s broad scope in the EU has no direct U.S. equivalent beyond those bulk power standards. That said, U.S. grid cybersecurity is quite stringent under NERC and enforced with hefty fines. The concept of “secure by design” for products, as embodied in the CRA, is not yet law in the U.S., though there are initiatives such as the IoT Cybersecurity Improvement Act for federal procurements. The U.S. is likely to rely on market pressure and lawsuits to punish insecure products more than on proactive certification.
  • AI Ethics and Bias: The EU embeds these protections into law, while the U.S. relies on frameworks. For example, if an AI system unintentionally discriminates in energy marketing or demand response incentives, the EU’s AI Act and GDPR address fairness and data minimization. In the U.S., companies may refer to the AI Bill of Rights or the NIST RMF, but legal obligations arise only if civil rights laws are violated. The U.S. Federal Trade Commission has stated that it can act against “unfair or deceptive practices” involving AI, such as biased outcomes, under its general enforcement authority.
  • Best Practices and Trends: A growing best practice in the U.S. is voluntary transparency to consumers and regulators. Some utilities in pilot programs inform customers about how AI analyzes their usage data to build trust, driven more by public relations than by legal requirements. Another approach involves industry consortiums developing guidelines. For example, the Electric Power Research Institute (EPRI) has initiatives focused on AI ethics for power systems [60]. These can often move faster than regulations and sometimes influence regulators to later codify them.
Summary for U.S.: The U.S. currently relies on a patchwork of existing regulatory mechanisms and voluntary measures to govern AI in energy, with a stronger emphasis on cybersecurity and reliability through existing critical infrastructure rules, and on privacy through state laws. Recent executive actions suggest a move toward more coordinated oversight, though likely still less rigid than the EU’s approach. Energy companies in the U.S. have more latitude to deploy AI quickly, but they also face greater risk because there is no clear compliance checklist; they must proactively self-govern to avoid running afoul of broad regulations (such as FERC’s anti-manipulation rule, the U.S. counterpart to REMIT). We might characterize the U.S. stance as “cautious optimism” about AI’s benefits to energy, managed by adapting existing rules and fostering innovation-friendly guidelines rather than new hard laws.

5.2. China: State-Driven Strategy with Tight Control and Rapid Implementation

Regulatory Framework: China’s approach to AI governance is deeply intertwined with its broader governance model—emphasizing state control, national security, and strategic industrial policy. China does not have an AI law specific to energy, but it has a series of regulations and guidelines on AI and data that affect all sectors, and a strong top-down push to integrate AI in industries (including energy) as part of national plans.
Key Aspects in China:
  • National AI Strategy: China’s government identified AI as a key driver for economic development and has issued plans like the New Generation AI Development Plan (2017) which set goals for AI deployment across industries, aiming to be a global leader by 2030. This strategy explicitly mentions encouraging AI in areas like smart grid, renewable integration, and energy efficiency.
  • AI Regulations: In the past few years, China has introduced groundbreaking regulations on specific aspects of AI:
    Algorithms and Recommender Systems: The Internet Information Service Algorithmic Recommendation Management Provisions (effective March 2022) require companies to register algorithmic recommendation services with the Cyberspace Administration of China (CAC), adhere to rules ensuring content compliance and fairness, and allow users to opt out of recommendations [61]. While aimed at online content platforms, an energy app that provides algorithmic recommendations (e.g., suggesting products or consumption behaviors) could technically fall under these provisions, though in practice they target internet platforms. Still, they illustrate the principle: algorithms must align with “core socialist values” and not endanger public order [11].
    Deep Synthesis (Deepfakes): The Provisions on Deep Synthesis Technologies (effective Jan 2023) regulate generative AI like deepfakes, requiring clear labeling and prohibiting misuse [62]. Not directly energy-related, but part of the AI legal landscape.
    Generative AI: In 2023, China issued Interim Measures for the Management of Generative AI Services (effective Aug 2023), which set rules for generative AI offered to the public, including content control, data governance, etc. [63]. Energy is not exempt, but likely not central to generative AI (unless using, say, GPT-like systems for customer service, which companies must ensure do not produce prohibited content).
    AI Ethics: China has guidelines like the Ethical Norms for New AI (2021), which, while not law, influence how companies should ensure AI is transparent, controllable, and does not violate ethics [64].
  • Data and Security Laws: China implemented in 2021 two major laws: the Personal Information Protection Law (PIPL), akin to the GDPR in many respects [65], and the Data Security Law (DSL) [66]. PIPL protects personal data, including energy usage data linked to individuals, requiring consent or other legal bases, and granting individuals rights, though with Chinese-specific exemptions such as broad allowances for state agencies. The DSL categorizes data, with some energy data potentially classified as important or core to the state, restricting cross-border transfers and impacting cloud AI services exporting data from China. The Cybersecurity Law of 2017 mandates critical sectors like energy to secure networks and possibly store data locally. These regulations impose strict data governance on energy companies using AI, ensuring that critical grid data remains within China and that personal consumer energy data are processed lawfully under the PIPL.
  • Energy Sector Regulation: China is drafting a new comprehensive Energy Law (as noted in a 2024 draft) aimed at optimizing energy use and ensuring security [11]. Although not explicitly about AI, it includes incentives for reliable and stable energy services, linking to advanced technologies like AI for grid stability. China’s energy regulators, such as the National Energy Administration, issue policies on smart grids and renewable integration that indirectly promote AI, for example, by requiring utilities to enhance forecasting, which implies AI adoption.
  • Standards and Implementation: In China, regulations are often supported by detailed technical standards from bodies like CESI or SGCC research units. State Grid likely has internal AI security standards for its operations. The Chinese model pilots technology in state-owned enterprises, with State Grid and China Southern Grid running large AI projects such as predictive maintenance and AI-powered drone inspections. The regulatory framework is directive rather than compliance-driven in the Western sense. When the government mandates AI adoption for specific goals, state companies implement it, managing regulatory approvals internally.
Governance Philosophy: China’s governance approach emphasizes proactive AI adoption while managing risks. The government promotes AI for economic and strategic benefits, including energy efficiency and green transition, while enforcing strict regulations to align AI with state priorities such as social stability, security, and risk mitigation. Individual rights receive less focus, though PIPL marks some progress, with greater emphasis on collective interests and state control.
Comparison with EU:
  • Regulatory Tightness: China’s AI regulations impose stricter controls than the EU on content and permissible behavior due to censorship and political oversight, but allow faster deployment. The state can mandate AI adoption without extended consensus building or legal obstacles. While the EU debates the AI Act, China already enforces algorithm registries [11] and AI output censorship. These restrictions mainly apply to public content, while industrial applications like energy face fewer explicit limits but must comply with security requirements.
  • Privacy: Both the EU and China have comprehensive data laws, with the GDPR and PIPL sharing structural similarities, such as requiring consent. Key differences lie in enforcement and exemptions. China’s enforcement may be selective, and state entities are largely exempt from PIPL. A state-owned power utility in China likely has more flexibility in using consumer data for grid management than a European utility under the GDPR.
  • Cybersecurity: China’s critical infrastructure rules, under the Cybersecurity Law and Critical Information Infrastructure regulations, mandate security reviews for important IT equipment, including AI in energy. They also prioritize secure, preferably domestic, technology procurement. The EU’s CRA shares the goal of ensuring secure products, but China enforces this partly through strict import controls, such as scrutinizing foreign AI software in the grid.
  • Security and Stability: These are paramount in Chinese energy policy. AI in grids or power plants must not compromise grid stability, and the government will likely mandate rigorous testing through state quality-assurance processes. Little public information is available on such testing, but internal controls are likely stringent.
  • Centralized Data and AI Platforms: China favors centralized platforms, which could lead to a national energy data center using AI for grid operations across provinces. This would extend its regional grid dispatch model, with AI deployed centrally under government oversight rather than by individual companies. In contrast, the EU allows multiple private actors to innovate within regulatory frameworks.
  • Standardization and pilot projects: China is advancing standardization and pilot projects, viewing energy and AI regulation as interconnected, with a focus on security and reliability [11]. A key approach is to embed AI standards into energy reforms. As China drafts its Energy Law, it simultaneously shapes AI norms, ensuring alignment [11]. This integrated strategy could inform the EU, which could update energy legislation, such as network codes and market rules, alongside AI advancements.
  • Government as a stakeholder in AI deployment: In China, the government and state-owned enterprises lead much of the R&D, developing indigenous AI for grid management to reduce reliance on Western technology. This approach serves both as a policy for self-reliance and as a regulatory measure favoring domestic technology in procurement.
  • AI Ethics and Stability: Chinese regulators have explicitly noted maintaining social stability and avoiding economic disruption as goals of AI regulation [67]. This implies that, for instance, if algorithmic trading in energy markets (China has a nascent electricity market) caused volatility, authorities would step in quickly, perhaps by restricting AI trading or imposing caps. The Chinese system can respond quickly with administrative orders, whereas the EU might rely on market rules and later adjustments.
Summary for China: China’s AI regulatory framework for energy is state-driven and rapidly evolving. It combines aggressive AI adoption to enhance efficiency and meet renewable targets, with strict oversight focused on security, content control, and data governance. Energy companies must comply with stringent regulations, but enforcement is typically handled through state audits rather than through independent regulators or open courts, as seen in the EU. This allows China to scale AI deployments, such as nationwide smart meter analytics, but with less transparency in risk management and decision-making.
China’s approach is gaining influence, with other Asian countries observing its rapid grid modernization and considering similar governance strategies, such as national AI guidelines and controlled data flows. At the same time, China selectively adopts elements from EU regulations where beneficial. For instance, the PIPL reflects GDPR principles in several aspects [68], while maintaining China-specific adaptations.

5.3. Global Comparison

From a global perspective, the EU, the U.S., and China differ in the following ways:
  • The EU’s approach is rules-driven, prioritizing ethical and rights-based considerations while aiming to preempt potential harm through regulation. For instance, the Agency for the Cooperation of Energy Regulators (ACER) underlined this preventive stance in its Open Letter on the Notifications of Algorithmic Trading and Direct Electronic Access (July 2024), reminding market participants that AI-based trading tools must be notified in advance under REMIT [69]. The “Brussels effect” often extends these standards globally, influencing AI governance beyond Europe [18].
  • The U.S. approach is market-driven, reactive, and fragmented, prioritizing innovation with minimal initial constraints while addressing challenges as they arise. However, this is gradually shifting toward more regulatory oversight.
  • China is strategy-driven, rapid, and focused on control and security, implementing specific rules to direct AI’s trajectory in society and key sectors in line with state objectives [11]. It treats AI both as an opportunity to leapfrog in tech and as a potential tool that must be tightly guided.
Notably, all three see AI as crucial for energy transformation, whether it is managing renewables, improving efficiency, or new services. However, they balance facilitation and restriction differently:
  • The EU seeks to balance fostering innovation with protecting ethical and societal values through detailed legislation [3].
  • The U.S. historically leans pro-innovation, trusting existing laws to catch egregious problems.
  • China leans pro-control, aggressively deploying AI, but under watchful regulation aligning with government aims [11].
Each region can learn from the others. The EU could adopt China’s speed of implementation to better adapt to technological changes. The U.S. and China could benefit from the EU’s approach to formal risk assessments and transparency to build public trust. Meanwhile, both the EU and the U.S. remain concerned about China’s approach, particularly regarding global competitiveness and ethical differences, which can shape international agreements and standards.
Overall, no universal model for AI governance in energy exists yet, as it remains an evolving global issue. International collaboration may be necessary, particularly on AI safety standards for critical infrastructure, where the EU, the U.S., and China all have significant interests. The discussion on global harmonization will be revisited in the recommendations section.

6. Challenges, Gaps, and Recommendations

The analysis so far has highlighted an extensive regulatory landscape governing AI in the EU energy sector, as well as contrasting approaches in the U.S. and China. In this section, we distill the key challenges and gaps that remain in effectively regulating AI-driven energy solutions and propose recommendations to address them. These recommendations aim at policymakers, regulators, industry stakeholders, and international bodies, with the goal of improving regulatory clarity, encouraging innovation in a safe manner, and moving towards greater harmonization where beneficial.

6.1. Key Challenges and Gaps

The following challenges and gaps have been identified:
  • Overlapping Regulations and Compliance Complexity: A clear challenge is the sheer number of regulations an energy AI project must navigate—the AI Act, GDPR, NIS2, sector-specific codes, etc. While each addresses a facet (ethical AI, data privacy, cybersecurity), in practice they can overlap or create duplicative requirements. For example, an AI system might require both an AI Act conformity assessment and a cybersecurity certification, plus a data protection impact assessment under the GDPR. This is especially challenging for smaller companies and research collaborations. There is a risk of fragmented compliance efforts, where one team handles the GDPR and another handles the AI Act, potentially missing their interconnections. Gap: Guidance on how to streamline compliance across these laws has not yet been fully developed.
  • Legal Uncertainty and Evolving Standards: Many concepts in AI regulation are new and lack clear precedents or technical standards. “State-of-the-art” security under NIS2 [23] or “robustness” under the AI Act are somewhat open-ended until standards are set. Likewise, it is unclear how regulators will enforce certain provisions, such as determining whether an AI provider has sufficiently mitigated bias or quantifying “acceptable risk” in high-risk AI. This uncertainty may delay deployment, as companies adopt a wait-and-see approach, or result in inconsistent interpretations across jurisdictions. Gap: Similarly, as noted, AI in energy markets is a gap, as current high-risk classification does not cover it, so potential issues might slip through until addressed [1].
  • Keeping Pace with Technology: AI is advancing rapidly, with developments such as generative AI and increasingly autonomous control systems. Regulatory processes, particularly in the EU, are slower. By the time regulations take effect, new AI capabilities or risks may have emerged. For example, the AI Act drafts initially overlooked generative AI, requiring later amendments to address foundation models. In energy, if new AI techniques like swarm AI for distributed energy resource control arise, existing regulations may not explicitly cover them, leading to uncertainty about compliance. Gap: Regulatory agility is limited, requiring mechanisms for dynamic updates or clarifications. Approaches such as delegated acts, regulatory guidance, or insights from sandbox environments could help refine and adapt rules more effectively.
  • Data Availability vs. Privacy: AI development relies on large datasets, but energy data are often sensitive, including customer usage and grid operations, which have security and competitive implications. The GDPR and data localization laws can hinder data pooling for AI training. For instance, an EU energy AI developer may struggle to access pan-European smart meter data due to varying privacy law interpretations or proprietary data silos. Energy companies may also hesitate to share data with AI vendors due to liability concerns. Gap: The balance between data sharing for AI innovation and ensuring privacy and security is complex. Frameworks like the Data Governance Act aim to address this by establishing trusted data-sharing intermediaries, but these efforts are still in the early stages. Uncertainty around key concepts, such as what qualifies as true anonymization under the GDPR, remains a challenge. Energy consumption patterns may be difficult to fully anonymize, increasing the risk that data sharing could violate privacy regulations.
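The anonymization difficulty noted above can be made concrete with a minimal sketch: publishing smart-meter aggregates only when a group contains enough households, a simple k-anonymity-style threshold. The data, field layout, and the `K_MIN` value are illustrative assumptions for exposition, not a compliance recipe; real anonymization assessments under the GDPR require much deeper re-identification analysis.

```python
from collections import defaultdict

# Illustrative sketch (hypothetical data and threshold): smart-meter readings
# are aggregated by (district, hour), and any aggregate built from fewer than
# K_MIN households is suppressed -- a simple k-anonymity-style safeguard
# before data are shared for AI training.
K_MIN = 5  # assumed minimum households per published aggregate

readings = [  # (district, hour, kWh) -- fabricated example data
    ("A", 18, 1.2), ("A", 18, 0.8), ("A", 18, 1.5),
    ("A", 18, 0.9), ("A", 18, 1.1),
    ("B", 18, 2.0), ("B", 18, 1.7),  # only 2 households: must be suppressed
]

groups = defaultdict(list)
for district, hour, kwh in readings:
    groups[(district, hour)].append(kwh)

# Publish only aggregates that meet the minimum group size.
safe = {key: (round(sum(vals), 3), len(vals))
        for key, vals in groups.items() if len(vals) >= K_MIN}
print(safe)
```

In this toy run, district B’s aggregate is withheld because only two households contribute to it, illustrating why fine-grained consumption data are hard to share without either suppression or coarser aggregation.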
  • Talent and Interdisciplinary Understanding: Complying with these regulations requires expertise that crosses law, AI technology, and energy engineering. A shortage of professionals who understand all three realms is a challenge. Energy companies report difficulty in finding staff or consultants who can implement AI solutions and ensure compliance simultaneously. Gap: This is partly a workforce training gap, where universities and training programs have only recently begun integrating data science with energy and policy education.
  • Fragmentation Across Member States: Although EU regulations aim at harmonization, some, such as NIS2, allow for national implementation specifics, leading to inconsistencies. Member States may have different competent authorities and enforcement practices. Gap: Variations in enforcement create difficulties for companies operating across multiple EU countries. For instance, NIS2 may be enforced more strictly in one country than in another, and data protection authorities may provide differing guidance on energy data processing. Similarly, energy market rules vary despite EU integration efforts, leading to different constraints for AI solutions, such as varying data access rules from TSOs and DSOs.
  • Global Alignment and Competitiveness: The EU’s stringent AI regulations, while leading in ethical oversight, raise concerns about competitiveness. If regions like the U.S. impose fewer constraints, EU companies may struggle to keep pace with AI development. Additionally, varying regulatory regimes make it difficult for companies to create a single AI system for global deployment, requiring adjustments for compliance across the EU, the U.S., and China. Gap: There is no established international standard specifically for AI in critical infrastructure. While ISO/IEC efforts exist, adoption and consistency remain limited. This lack of harmonization complicates cross-border issues, such as an EU company using a U.S.-based AI cloud service—raising questions about which laws apply in case of disputes or failures.
  • Ensuring Ethical and Fair Outcomes in Practice: Laws set principles such as transparency and fairness, but ensuring these outcomes in real-world AI is challenging. Gap: There is a lack of tools and methodologies for auditing AI in energy applications. For example, assessing whether an AI optimizing energy distribution unfairly favors certain regions or customers is complex. Regulations do not yet fully address fairness concerns, such as preventing AI-driven demand responses from excluding those without smart technology, which could worsen energy inequality. Regulators may also lack the resources and expertise to evaluate AI deployments effectively.
  • Engagement and Clarity for Industry: A further challenge is ensuring that regulations do not create innovation bottlenecks or favor only large companies that can afford dedicated compliance departments. If compliance is perceived as too complex, it may discourage startups from innovating in the energy sector, pushing them toward less regulated domains. Gap: The industry often lacks clear guidance or best practices tailored to energy-specific compliance. Many companies, especially smaller ones, struggle to understand the exact requirements, leading to uncertainty about how to comply.

6.2. Recommendations

Addressing these challenges requires a multi-faceted approach, including legislative fine-tuning, regulatory guidance, industry best practices, and international cooperation. The following recommendations map directly to the challenges in Section 6.1:
  • Develop Integrated Guidance and “One-Stop” Compliance Toolkits: The EU should create consolidated guidance for AI in critical sectors like energy that interprets overlapping regulations in a unified way. For example, a guidance document (or online toolkit) co-developed by the European Commission (AI Alliance), ENISA, and the European Data Protection Board could outline how an energy company can simultaneously comply with the AI Act, NIS2, and GDPR when deploying a specific solution (say, an AI for demand response). This should include use-case examples, a checklist of obligations, and references to standards. Such an approach would reduce fragmentation and especially help smaller actors. Regulatory sandboxes can play a role here: companies in a sandbox could pilot the integrated compliance approach, and regulators can then publish learnings. The EU’s planned AI regulatory sandboxes (mentioned in the AI Act drafts) could prioritize energy sector experiments to generate these insights.
  • Accelerate Standards Development: The EU, in collaboration with international bodies (ISO/IEC, IEEE), should prioritize developing harmonized technical standards for AI in safety-critical and energy systems. These standards would operationalize concepts like “robustness”, “accuracy”, and “security” for high-risk AI. Having standards will provide the concrete yardstick needed for compliance and streamline conformity assessments [1]. For example, a standard could specify testing procedures for an AI voltage control system under various grid conditions to ensure reliability, aligning with the AI Act and network code requirements. The EU could direct CEN/CENELEC and ETSI to collaborate with energy industry experts on this. Additionally, developing an AI cybersecurity standard, possibly extending ISA/IEC 62443 for industrial control to address AI-specific threats, would help bridge the AI Act and NIS2 requirements. Speed is critical, so iterative standard updates should be considered, such as initially publishing technical specifications that can later evolve into full standards.
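To illustrate what such a testing standard might operationalize, the sketch below runs a stand-in voltage controller against randomized disturbance scenarios and checks that the settled voltage stays within an assumed statutory band. The controller (a simple proportional rule), the single-node "grid model", and the numeric limits are hypothetical placeholders for exposition; they are not drawn from any actual standard or network code.

```python
import random

# Hypothetical scenario-based robustness suite for an AI voltage controller.
# All names and limits below are illustrative assumptions.
V_NOMINAL = 1.0            # per-unit nominal voltage
V_MIN, V_MAX = 0.95, 1.05  # assumed permissible band

def ai_controller(voltage: float) -> float:
    """Stand-in control policy: push voltage back toward nominal."""
    return 0.5 * (V_NOMINAL - voltage)  # proportional corrective step

def simulate(disturbance: float, steps: int = 20) -> float:
    """Apply the controller for `steps` iterations after a disturbance."""
    v = V_NOMINAL + disturbance
    for _ in range(steps):
        v += ai_controller(v)
    return v

def robustness_suite(n_scenarios: int = 1000, seed: int = 42) -> bool:
    """Check the controller against many randomized disturbances."""
    rng = random.Random(seed)
    for _ in range(n_scenarios):
        disturbance = rng.uniform(-0.2, 0.2)  # up to +/-0.2 p.u.
        if not (V_MIN <= simulate(disturbance) <= V_MAX):
            return False
    return True

print(robustness_suite())
```

A real standard would of course specify far richer grid models, fault sequences, and pass/fail statistics; the point of the sketch is only that "robustness" becomes testable once scenarios, limits, and acceptance criteria are written down.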
  • Enhanced Regulatory Agility and Adaptive Regulation: To ensure that regulatory frameworks remain responsive to the rapid evolution of AI technologies in the energy sector, regulators should adopt a dual approach that combines agile regulatory tools with continuous monitoring and feedback. Regulators should periodically update high-risk use lists, issue rapid interpretative guidance, and offer experimental licenses to enable timely adjustments as new AI capabilities and risks emerge [18]. At the same time, establishing robust monitoring systems and structured feedback loops will allow for data-driven evaluations of the regulations’ real-world impacts, thereby reducing legal uncertainty and facilitating prompt refinements [70]. This integrated strategy creates a regulatory environment that is both agile and adaptive—providing clear, up-to-date guidance to industry stakeholders while keeping pace with technological advances [18,68].
  • Promote Data Sharing Frameworks with Privacy Protection: The EU should operationalize the Data Governance Act and upcoming Energy Data Space in a way that eases access to energy datasets for AI development while preserving privacy. This could include creating anonymized or synthetic datasets of smart meter data for research. Properly generated synthetic data can fall outside the GDPR’s scope, since they are not linked to identifiable individuals. Investment in privacy-enhancing technologies (PETs) like federated learning, where AI models train across datasets without sharing raw data, should be encouraged through research funding and pilot projects [71]. Regulators can provide guidance to clarify when energy data are classified as personal versus sufficiently aggregated to be non-personal, ensuring companies have confidence in sharing aggregated grid data for AI applications. Another approach is to establish a data sandbox where utilities and AI developers can collaborate on real data under strict privacy controls and oversight. This would enable the development of useful AI models that benefit the entire sector, similar to existing healthcare data sandbox initiatives in the EU.
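Federated learning, mentioned above as a PET, can be sketched minimally: each utility fits a model on its own data and shares only the fitted parameters, which a coordinator averages with weights proportional to local dataset size. The toy "model" (a mean-consumption estimate) and the data are illustrative assumptions; real deployments would use a dedicated framework such as Flower or TensorFlow Federated and add safeguards like secure aggregation.

```python
import statistics

def local_fit(consumption):
    """Each site trains locally; here the 'model' is just a mean estimate."""
    return statistics.mean(consumption)

def federated_average(local_params, weights):
    """Coordinator aggregates parameters, weighted by local dataset size."""
    total = sum(weights)
    return sum(p * w for p, w in zip(local_params, weights)) / total

# Raw readings never leave each utility; only fitted parameters are shared.
site_a = [1.0, 2.0, 3.0]  # kWh readings held by (hypothetical) utility A
site_b = [4.0, 6.0]       # kWh readings held by (hypothetical) utility B

params = [local_fit(site_a), local_fit(site_b)]  # [2.0, 5.0]
global_model = federated_average(params, [len(site_a), len(site_b)])
print(global_model)  # 3.2 -- equals the mean over all pooled readings
```

The weighted average reproduces exactly what training on the pooled data would yield for this simple model, which is the core appeal: collective model quality without centralizing sensitive consumption records.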
  • Capacity Building and Interdisciplinary Teams: At the EU level, support should be given to training programs that blend AI, energy engineering, and law. This could mean new Erasmus Mundus master programs or professional certifications for “AI Governance in Energy”. Regulators and operators should hold joint workshops, e.g., ENISA, energy regulators, and AI experts co-host cybersecurity exercises for AI in grids (like scenario planning for AI failures or attacks). The energy community (ENTSO-E, EU DSO entity, etc.) could establish a permanent forum on AI governance in energy to share best practices. Also, ensure that regulators (like data protection authorities and energy regulators) get training on AI and energy systems so they can make informed decisions.
  • Harmonize Implementation and Foster Cross-Border Collaboration: Encourage Member States to implement directives like NIS2 in a harmonized way for energy. The Cooperation Group under NIS2 could issue specific energy sector guidance to align national efforts [22]. ACER could establish a common framework for evaluating AI in cross-border energy operations using the Network Code on Cybersecurity as a model for coordinated regional assessments. Mutual recognition of AI system approvals should also be considered. If an AI system is certified in one EU country as meeting the AI Act or network code requirements, other member states should accept it to prevent redundant approvals. This could be implemented through the EU AI Act’s conformity assessment regime and existing energy cooperation mechanisms, such as the CEER.
  • International Cooperation on AI Governance in Energy: Given the global nature of energy and AI, the EU, the U.S., China, and others should engage in dialogue to share best practices and align with certain standards. Forums such as the G20 or the IEA could facilitate discussions on AI in energy infrastructure, potentially leading to globally endorsed principles for trustworthy AI in critical infrastructure, similar to the OECD AI Principles, which received broad support, including from the U.S. and some backing from China. For example, all parties could agree on the principle of human oversight in critical AI decisions, even if implemented differently across jurisdictions. The EU could take the lead in establishing an International Electrotechnical Commission (IEC) working group on AI for energy system security, involving experts from the U.S. and China to develop standards that enhance interoperability and safety. This would help prevent conflicting regulatory requirements, ensuring that manufacturers do not need to design entirely separate AI systems for different markets.
  • Address Specific Gaps (Market AI, etc.) with Targeted Rules or Guidance: If concerns about AI-driven trading and market manipulation increase, EU regulators, including ACER for energy and competition authorities, should issue guidelines on algorithmic trading behavior. These guidelines could clarify how existing REMIT rules apply to AI and potentially require companies to implement internal policies for AI-driven trading. Similarly, for consumer protection, the European Consumer Organization (BEUC) and national agencies should develop guidelines ensuring fairness and transparency in AI-enabled dynamic pricing and recommendation services [44]. This would go beyond the AI Act’s provisions, addressing specific risks in energy markets [72]. Sector-specific guidance should address gaps not covered by the AI Act. The upcoming revision of the EU’s product liability and AI liability regimes should explicitly include energy use cases to clarify responsibility distribution. For instance, if a grid operator and an AI vendor share liability, the framework should define how this applies. Clear liability rules will incentivize all parties to uphold high safety standards.
  • Encourage Ethical AI and Transparency Beyond Compliance: Regulations set minimum requirements; however, exceeding them can help companies differentiate themselves and build trust. Industry bodies in the energy sector should develop codes of conduct for ethical AI, aligning with the AI Act’s encouragement of voluntary codes for non-high-risk AI. These codes should address sector-specific concerns, such as preventing bias in AI-driven resource allocation and ensuring explainability for grid operators. Regulators could endorse such codes, offering recognition or incentives, such as lighter oversight, for companies adhering to a certified code of conduct. Additionally, transparency tools should be expanded. For example, a registry of AI systems used in critical energy infrastructure, similar to the EU AI Act’s database but potentially including moderate-risk systems on a voluntary basis, could enhance oversight. Making this registry accessible to regulators and, where appropriate, to the public would help build trust in AI governance.
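The voluntary registry idea above can be made concrete with a minimal, machine-readable record schema that regulators could query. The Python sketch below is illustrative only; the field names, risk tiers, and filtering rule are assumptions for this example, not AI Act terminology.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    HIGH = "high"          # mandatory registration under the AI Act's database
    MODERATE = "moderate"  # hypothetical voluntary tier, as discussed above


@dataclass
class AISystemRecord:
    """One entry in a hypothetical registry of AI systems in energy infrastructure."""
    system_name: str
    operator: str
    risk_tier: RiskTier
    use_case: str           # e.g. "grid congestion forecasting"
    human_oversight: bool   # is a human in the loop for critical decisions?
    public_summary: str = ""            # short description releasable to the public
    voluntary: bool = field(default=False)


registry: list[AISystemRecord] = [
    AISystemRecord(
        system_name="LoadForecaster-v2",
        operator="Example DSO",
        risk_tier=RiskTier.MODERATE,
        use_case="day-ahead load forecasting",
        human_oversight=True,
        voluntary=True,
    )
]

# A regulator could, for instance, flag high-risk systems lacking documented oversight.
flagged = [r for r in registry if r.risk_tier is RiskTier.HIGH and not r.human_oversight]
```

A structured schema of this kind would let oversight bodies run such queries across operators, while the `public_summary` field separates what is disclosed publicly from what stays regulator-only.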
Looking forward, divergent regulatory paths could produce markedly different outcomes. A highly permissive AI regime may catalyze rapid innovation but risk undermining public trust, particularly in critical infrastructure such as energy. Conversely, an overly restrictive or fragmented regime may stifle promising applications, deterring small innovators or slowing the digital energy transition. The EU’s central challenge is therefore to strike an equilibrium that is anticipatory without being overly prescriptive, supporting safe experimentation, cross-border interoperability, and inclusive innovation. In practice, three near-term trends already point toward that balanced trajectory: (i) as the AI Act’s obligations phase in, a sequence of implementing and delegated acts will translate its high-level obligations into sector-specific benchmarks for electricity, gas, and heating systems, giving stakeholders clearer compliance targets; (ii) international standardization bodies (ISO/IEC JTC 1/SC 42; CEN/CENELEC) are converging on common AI-safety and cybersecurity standards, promising a single test route that can demonstrate conformity with the AI Act, NIS2, and the Cyber Resilience Act at the same time; and (iii) energy regulators across several Member States are expanding regulatory-sandbox programs that let companies pilot AI-based flexibility or grid-automation services in a controlled setting, generating evidence to fine-tune future rules. Together, these developments suggest a governance model that encourages responsible progress rather than either unchecked acceleration or regulatory paralysis.

7. Conclusions

AI-driven digital solutions offer significant potential to transform the energy sector by facilitating the integration of renewable energy sources, enhancing the intelligence of power grids, and enabling consumers to manage their energy consumption more efficiently. The European Union has recognized that achieving these benefits necessitates a strong governance framework to mitigate associated risks.
This scoping review examines the impact of EU laws and regulations on the development, adoption, and deployment of AI in the energy sector. Key regulatory frameworks include the AI Act, the General Data Protection Regulation (GDPR), the NIS2 Directive, the Cyber Resilience Act, the Network Code on Cybersecurity, and the Cybersecurity Act. These regulations introduce requirements that present both challenges and opportunities. On the one hand, they impose complex compliance obligations, potential constraints on innovation, and areas of legal ambiguity. On the other hand, they serve as enablers by fostering trust, ensuring safety and security, and harmonizing standards across Member States.
The EU’s regulatory approach is distinguished by its comprehensive and preventive nature, particularly in comparison to the more market-driven and less interventionist approach of the United States and the state-led, rapid deployment strategy of China. Each model offers distinct advantages and drawbacks. By observing these international approaches, the EU can incorporate greater flexibility and responsiveness, as exemplified by the United States, while maintaining a strategic emphasis on innovation, as seen in China. At the same time, it must remain committed to its fundamental principles of privacy, security, and ethical AI development. Key findings from this review include the following:
  • The EU is establishing a comprehensive regulatory framework for AI that will significantly impact AI solutions in the energy sector. The risk-based AI Act, along with strict data and cybersecurity laws, will require energy companies and technology providers to adapt to new compliance demands. This shift will likely necessitate cross-functional expertise and the early incorporation of legal considerations into technology development [1,24].
  • Regulatory barriers, including compliance costs, uncertainty in interpreting new requirements, and potential delays in AI deployment, present significant challenges. However, these are offset by several enablers. A clear regulatory framework can strengthen trust among consumers and investors, while a unified EU approach prevents the fragmentation of national regulations, reducing long-term costs. Additionally, certain regulations, such as data-sharing initiatives, actively support innovation by providing controlled access to essential resources [19,24].
  • Specific challenges like aligning AI with GDPR’s privacy mandates or ensuring cybersecurity for AI systems in critical grid operations will require careful attention. Yet, solutions are emerging, e.g., privacy-preserving computation and federated learning can allow AI models to train on distributed energy data without infringing privacy [70], and the Network Code on Cybersecurity provides a template for continuous risk management of new digital tools in grids [9].
  • The EU’s regulatory leadership is likely to shape AI governance beyond its borders through the “Brussels effect”, setting global standards for AI in the energy sector. However, differences between the U.S. and China underscore the need for dialogue and the potential development of international standards. Addressing cross-border risks, such as cybersecurity threats and market manipulation in interconnected energy systems, will require coordinated global efforts [11,12].
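Federated learning, cited in the findings above as a privacy-preserving enabler [70], can be illustrated with a minimal sketch: each site fits a simple model on its own consumption data and shares only the model parameters with an aggregator, never the raw data. The function names and data below are hypothetical and for illustration only.

```python
def local_fit(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Least-squares slope and intercept, computed locally at one site."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx


def federated_average(models: list[tuple[float, float]],
                      weights: list[int]) -> tuple[float, float]:
    """Aggregate per-site models, weighted by each site's local sample count."""
    total = sum(weights)
    slope = sum(m[0] * w for m, w in zip(models, weights)) / total
    intercept = sum(m[1] * w for m, w in zip(models, weights)) / total
    return slope, intercept


# Two sites train on private consumption data; the aggregator only ever
# sees the (slope, intercept) pairs, not the underlying measurements.
site_a = local_fit([0, 1, 2, 3], [1.0, 3.0, 5.0, 7.0])  # fits y = 2x + 1
site_b = local_fit([0, 1, 2, 3], [2.0, 4.0, 6.0, 8.0])  # fits y = 2x + 2
global_model = federated_average([site_a, site_b], weights=[4, 4])
```

Real deployments add secure aggregation and iterative updates, but the data-locality principle, and hence the GDPR benefit, is the same: personal consumption data never leaves the site that collected it.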
The recommendations put forth aim to navigate and mitigate the identified challenges. These include developing integrated compliance guidance, accelerating standardization efforts, using sandboxes and pilot programs to refine regulatory approaches, and enhancing cooperation both within the EU and internationally. A central focus is adaptive governance: ensuring that regulations evolve alongside advancements in AI and the energy sector. Continuous stakeholder engagement, periodic rule reviews as outlined in the AI Act’s revision mechanism [71], and knowledge sharing across jurisdictions will be vital.
For stakeholders in the energy sector, compliance and innovation must progress together. Rather than seeing regulation as a limitation, organizations should recognize it as a foundation for developing trustworthy AI solutions that foster public acceptance and long-term viability. Early adopters in the industry are already proactively implementing ethics and security-by-design principles, positioning themselves ahead of regulatory requirements.
For policymakers, the key challenge is achieving a balanced approach. The energy sector’s transformation is essential for meeting climate targets and ensuring a stable supply, with AI playing a crucial role in this shift. Regulations must mitigate risks without restricting the flexibility needed for large-scale technological deployment. This can be accomplished through smart regulation, with rules that are proportionate to risk, developed with industry input to ensure feasibility and adaptable to feedback and technological advancements, as exemplified by the AI Act’s approach. Future research directions stemming from this review include the following:
  • Empirical studies on the regulatory impact, such as surveys or case studies of energy companies, can assess how compliance efforts influence AI deployment timelines and strategies. Such research would offer policymakers valuable data on whether adjustments are necessary, whether through modifications in the AI Act’s implementing regulations or by providing additional support to SMEs.
  • Technical research on compliance solutions could play a crucial role in streamlining regulatory adherence. Developing tools that automate compliance checks, such as software that verifies AI models for the GDPR or AI Act conformity, would be highly valuable. Additionally, advancing AI-driven regulatory technology (regtech) to assist authorities in monitoring and enforcement could reduce compliance burdens and improve oversight efficiency.
  • As AI governance frameworks mature, a deeper comparative policy analysis, with detailed comparisons across regimes such as the EU, the U.S., China, Japan, and India in the energy sector, would provide valuable insights and could help guide future regulatory convergence.
  • Evaluating the effectiveness of sector-specific guidelines, such as the implementation of the Network Code on Cybersecurity, would provide valuable insights. Monitoring its practical impact could help determine whether similar regulations, such as a cybersecurity code for the gas sector or AI guidelines for the water sector, would be beneficial.
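The automated compliance-check idea raised in the research directions above can be sketched as a rule-based pre-check over an AI system’s documentation. This is an illustration of the regtech concept, not a legal tool: the required fields below are loose paraphrases of AI Act and GDPR documentation themes, and the rule set is an assumption of this example.

```python
# Hypothetical documentation fields, loosely inspired by AI Act technical
# documentation and GDPR record-keeping themes (illustrative, not normative).
REQUIRED_FIELDS = {
    "intended_purpose": "technical documentation (AI Act theme)",
    "training_data_description": "data governance record (AI Act theme)",
    "human_oversight_measures": "oversight requirement (AI Act theme)",
    "lawful_basis_for_personal_data": "processing record (GDPR theme)",
}


def precheck(model_card: dict) -> list[str]:
    """Return human-readable findings for missing or empty documentation fields."""
    findings = []
    for fld, why in REQUIRED_FIELDS.items():
        if not model_card.get(fld):  # absent key or empty value
            findings.append(f"missing '{fld}' ({why})")
    return findings


card = {
    "intended_purpose": "day-ahead congestion forecasting",
    "training_data_description": "two years of anonymized SCADA data",
    "human_oversight_measures": "",  # declared but left empty
    # "lawful_basis_for_personal_data" not provided at all
}
issues = precheck(card)  # two findings for the two gaps above
```

Even a simple pre-check like this, run in a CI pipeline before deployment, would surface documentation gaps early; a production regtech tool would map findings to the actual legal provisions rather than these placeholder labels.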
This study demonstrates that regulatory design plays a critical role in shaping the trajectory of AI deployment in the energy sector. By mapping the legal landscape, we reveal that governance frameworks do not merely constrain innovation—they actively shape the nature, pace, and equity of digital transformation. These findings contribute to the broader understanding that regulatory clarity, agility, and cross-sectoral alignment are essential to enabling both technological progress and public trust in AI-driven energy systems. While challenges exist, they can be addressed through continued collaboration among regulators, the energy industry, and AI developers. As a result, the EU’s legal framework can serve not as a barrier but as an enabler, supporting the twin goals of energy transition and digital innovation [73]. If managed wisely, it can guide the energy sector toward a future that is not only intelligent and efficient but also trustworthy and equitable.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ACER: Agency for the Cooperation of Energy Regulators
AI: Artificial Intelligence
AI Act: Artificial Intelligence Act
CAC: Cyberspace Administration of China
CRA: Cyber Resilience Act
CE: Conformité Européenne (European Conformity Mark)
CVE: Common Vulnerabilities and Exposures
DORA: Digital Operational Resilience Act
DOE: Department of Energy
DSOs: Distribution System Operators
ENISA: European Union Agency for Cybersecurity
EU: European Union
EUCS: European Cybersecurity Certification Scheme for Cloud Services
GDPR: General Data Protection Regulation
IEA: International Energy Agency
IRENA: International Renewable Energy Agency
MAR: Market Abuse Regulation
NERC: North American Electric Reliability Corporation
NERC CIP: NERC Critical Infrastructure Protection
NIS2: Network and Information Security Directive 2 (NIS2 Directive)
NCCS: Network Code on Cybersecurity for the Electricity Sector
NIST: National Institute of Standards and Technology
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses, Extension for Scoping Reviews
REMIT: Regulation on Wholesale Energy Market Integrity and Transparency
EU Cybersecurity Act: European Union Cybersecurity Act
TSOs: Transmission System Operators
PIPL: Personal Information Protection Law
DSL: Data Security Law

References

  1. European Commission. Proposal for a Regulation Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Front. Artif. Intell. 2021, 4, 690237. [Google Scholar] [CrossRef]
  2. International Energy Agency (IEA). Why AI and Energy Are the New Power Couple. IEA Commentary, 2 November 2023. Available online: https://www.iea.org/commentaries/why-ai-and-energy-are-the-new-power-couple (accessed on 11 March 2025).
  3. Capco. EU AI Act: Energy Implications. Capco Intelligence, 17 July 2024. Available online: https://www.capco.com/intelligence/capco-intelligence/eu-ai-act-energy-implications (accessed on 11 March 2025).
  4. PRISMA. PRISMA Extension for Scoping Reviews (PRISMA-ScR). PRISMA Statement, 2024. Available online: https://www.prisma-statement.org/scoping (accessed on 11 March 2025).
  5. EUR-Lex Homepage. Available online: https://eur-lex.europa.eu/homepage.html (accessed on 13 March 2025).
  6. Publications Office of the European Union. Available online: https://op.europa.eu/en/home (accessed on 13 March 2025).
  7. ENISA. Available online: https://www.enisa.europa.eu/ (accessed on 13 March 2025).
  8. European Commission. Action Plan on the Digitalisation of the Energy Sector: Roadmap Launched. Available online: https://commission.europa.eu/news/action-plan-digitalisation-energy-sector-roadmap-launched-2021-07-27_en (accessed on 13 March 2025).
  9. European Commission. New Network Code on Cybersecurity for the EU Electricity Sector. Energy-European Commission, 2024. Available online: https://energy.ec.europa.eu/news/new-network-code-cybersecurity-eu-electricity-sector-2024-03-11_en (accessed on 11 March 2025).
  10. European Union Agency for Cybersecurity (ENISA). Cybersecurity and Privacy in AI: Forecasting Demand on Electricity Grids. ENISA Publications, 2023. Available online: https://www.enisa.europa.eu/publications/cybersecurity-and-privacy-in-ai-forecasting-demand-on-electricity-grids (accessed on 11 March 2025).
  11. Katterbauer, K.; Özbay, R.D.; Yilmaz, S.; Meral, G. 2030 On the Interface Between AI and Energy Regulations in China. J. Recycl. Econ. Sustain. Policy 2024, 3, 51–62. Available online: https://respjournal.com/index.php/pub/article/download/50/32 (accessed on 11 March 2025).
  12. Morgan Lewis. The Intersection of Energy and Artificial Intelligence: Key Issues and Future Challenges. Morgan Lewis Publications, 12 August 2024. Available online: https://www.morganlewis.com/pubs/2024/08/the-intersection-of-energy-and-artificial-intelligence-key-issues-and-future-challenges (accessed on 11 March 2025).
  13. European Commission. Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM/2021/206 Final. 2021. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689 (accessed on 26 February 2025).
  14. European Parliament and Council. Regulation (EU) 2016/679—General Data Protection Regulation (GDPR). 2016. Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj (accessed on 26 February 2025).
  15. European Parliament and Council. Directive (EU) 2022/2555—NIS2 Directive. 2023. Available online: https://eur-lex.europa.eu/eli/dir/2022/2555 (accessed on 26 February 2025).
  16. European Commission. Cyber Resilience Act (Proposal for a Regulation on Horizontal Cybersecurity Requirements for Products with Digital Elements). 2022. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52022PC0454 (accessed on 26 February 2025).
  17. European Parliament and Council. Regulation (EU) 2019/881—Cybersecurity Act. 2019. Available online: https://eur-lex.europa.eu/eli/reg/2019/881/oj (accessed on 26 February 2025).
  18. European Commission. Proposal for a Regulation Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. EUR-Lex, 2021. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206 (accessed on 11 March 2025).
  19. European Union Agency for Cybersecurity (ENISA). Cybersecurity and Privacy in AI: Forecasting Demand on Electricity Grids. ENISA Publications, 2023. Available online: https://www.trustedai.ai/wp-content/uploads/2023/06/Cybersecurity-and-privacy-in-AI-Forecasting-demand-on-electricity-grids.pdf (accessed on 11 March 2025).
  20. Integrate.ai. Improving Smart Grid Management with Federated Learning. Integrate.ai Blog, 12 February 2025. Available online: https://www.integrate.ai/blog/improving-smart-grid-management-with-federated-learning (accessed on 11 March 2025).
  21. Center for Data Innovation. The Impact of the GDPR on Artificial Intelligence. Center for Data Innovation Report, 27 March 2018. Available online: https://www2.datainnovation.org/2018-impact-gdpr-ai.pdf (accessed on 11 March 2025).
  22. European Commission. The NIS2 Directive: Strengthening Cybersecurity in the EU. Digital Strategy-European Commission, 18 October 2024. Available online: https://digital-strategy.ec.europa.eu/en/policies/nis2-directive (accessed on 11 March 2025).
  23. KPMG. What Does NIS2 Mean for Energy Businesses? KPMG Insights, 19 April 2024. Available online: https://kpmg.com/uk/en/home/insights/2024/04/what-does-nis2-mean-for-energy-businesses.html (accessed on 11 March 2025).
  24. Metomic. A Complete Guide to NIS2. Metomic Resource Centre, 21 November 2024. Available online: https://www.metomic.io/resource-centre/a-complete-guide-to-nis2 (accessed on 11 March 2025).
  25. ENTSO-E. First Network Code on Cybersecurity for the Electricity Sector Published. ENTSO-E News, 24 May 2024. Available online: https://www.entsoe.eu/news/2024/05/24/first-network-code-on-cybersecurity-for-the-electricity-sector-published-today (accessed on 11 March 2025).
  26. Technology Law Dispatch. ENISA Releases Comprehensive Framework for Ensuring Cybersecurity in the Lifecycle of AI Systems. Technology Law Dispatch, 1 June 2023. Available online: https://www.technologylawdispatch.com/2023/06/data-cyber-security/enisa-releases-comprehensive-framework-for-ensuring-cybersecurity-in-the-lifecycle-of-ai-systems (accessed on 11 March 2025).
  27. FAICP. The Framework for AI Cybersecurity Resilience. FAICP Framework, 2024. Available online: https://www.faicp-framework.com/FAICR_Links.html (accessed on 11 March 2025).
  28. CEER. Making Energy Regulation Fit for Purpose: State of Play of Regulatory Sandboxes for the Energy Sector. Available online: https://www.ceer.eu/wp-content/uploads/2024/06/JRC-study-Making-energy-regulation-fit-for-purpose.-State-of-play-of-regulatory.pdf (accessed on 12 March 2025).
  29. International Renewable Energy Agency (IRENA). Regulatory Sandboxes for Smart Electrification: Power-to-Hydrogen. Available online: https://www.irena.org/Innovation-landscape-for-smart-electrification/Power-to-hydrogen/20-Regulatory-sandboxes (accessed on 12 March 2025).
  30. Skadden. Latest Draft of the European Cybersecurity Certification Scheme. Skadden Publications, 30 November 2023. Available online: https://www.skadden.com/insights/publications/2023/11/latest-draft-of-the-european-cybersecurity-certification-scheme (accessed on 11 March 2025).
  31. Center for Strategic & International Studies (CSIS). European Cybersecurity Certification Scheme for Cloud Services. CSIS Analysis, 1 September 2023. Available online: https://www.csis.org/analysis/european-cybersecurity-certification-scheme-cloud-services (accessed on 11 March 2025).
  32. Somos. Navigating NIS2 and the Cyber Resilience Act. Available online: https://www.somos.com/insights/navigating-nis2-and-cyber-resilience-act (accessed on 12 March 2025).
  33. Industrial Cyber. ENISA Reports on Cybersecurity Investments and the Impact of the NIS Directive. Industrial Cyber Reports, 23 November 2023. Available online: https://industrialcyber.co/reports/enisa-reports-on-cybersecurity-investments-impact-of-nis-directive-with-deep-dives-into-energy-health-sectors (accessed on 11 March 2025).
  34. International Energy Agency (IEA). JenErgieReal Regulatory Sandbox. Available online: https://www.iea.org/policies/17543-jenergiereal-regulatory-sandbox (accessed on 12 March 2025).
  35. European Commission. Targeted Stakeholder Survey for EU Grid Operators and Regulators on AI and Digitalization in Energy. Available online: https://energy.ec.europa.eu/news/targeted-stakeholder-survey-eu-grid-operators-and-regulators-pact-engagement-2025-01-23_en (accessed on 12 March 2025).
  36. Eurelectric. Action Plan on Grids. Available online: https://www.eurelectric.org/publications/eurelectric-action-plan-on-grids/ (accessed on 12 March 2025).
  37. Hunton Andrews Kurth LLP. CNIL Publishes a New Set of Guidelines on the Development of AI Systems. Privacy & Information Security Law Blog, 8 July 2024. Available online: https://www.hunton.com/privacy-and-information-security-law/cnil-publishes-a-new-set-of-guidelines-on-the-development-of-ai-systems (accessed on 11 March 2025).
  38. European Commission. EU Adapts Product Liability Rules to the Digital Age and Circular Economy. Available online: https://commission.europa.eu/news/eu-adapts-product-liability-rules-digital-age-and-circular-economy-2024-12-09_en (accessed on 12 March 2025).
  39. OSHA Europe. Regulation 2023/1230/EU on Machinery. Available online: https://osha.europa.eu/en/legislation/directive/regulation-20231230eu-machinery (accessed on 12 March 2025).
  40. European Parliament and Council. Directive (EU) 2019/944 on Common Rules for the Internal Market for Electricity. Available online: https://en.wikipedia.org/wiki/Electricity_Directive_2019 (accessed on 12 March 2025).
  41. Florence School of Regulation. Interoperability of Energy Services in Europe: What’s Behind It? Available online: https://fsr.eui.eu/interoperability-of-energy-services-in-europe-whats-behind-it (accessed on 12 March 2025).
  42. European Commission. The Data Act. Available online: https://digital-strategy.ec.europa.eu/en/policies/data-act (accessed on 12 March 2025).
  43. Reason Foundation. GDPR and Constraints for AI Startups. Available online: https://reason.org/commentary/gdpr-and-constraints-for-ai-startups (accessed on 12 March 2025).
  44. Ofgem. Understanding Consumer Attitudes on AI Use in the Energy Sector. 2024. Available online: https://www.ofgem.gov.uk/sites/default/files/2024-12/Ofgem_Artifical%20Intelligence_Final%20report.pdf (accessed on 26 February 2025).
  45. Council Regulation (EC) No 1/2003 of 16 December 2002 on the Implementation of the Rules on Competition Laid Down in Articles 81 and 82 of the Treaty. Available online: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:32003R0001 (accessed on 12 March 2025).
  46. European Securities and Markets Authority. Article 17 Algorithmic Trading. Available online: https://www.esma.europa.eu/publications-and-data/interactive-single-rulebook/mifid-ii/article-17-algorithmic-trading (accessed on 12 March 2025).
  47. Norton Rose Fulbright. MiFID II: Frequency and Algorithmic Trading Obligations. Available online: https://www.nortonrosefulbright.com/en/knowledge/publications/6d7b8497/mifid-ii-mifir-series (accessed on 12 March 2025).
  48. Financial Conduct Authority. MAR 5A.5 Systems and Controls for Algorithmic Trading. Available online: https://www.handbook.fca.org.uk/handbook/MAR/5A/5.html (accessed on 12 March 2025).
  49. Advisera. Who Does NIS2 Apply to? Advisera Articles, 2024. Available online: https://advisera.com/articles/who-does-nis2-apply-to (accessed on 11 March 2025).
  50. European Commission. Proposal for a Directive on Measures for a High Common Level of Cybersecurity Across the Union (NIS2 Directive). Available online: https://digital-strategy.ec.europa.eu/en/library/proposal-directive-measures-high-common-level-cybersecurity-across-union (accessed on 12 March 2025).
  51. European Union. Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on Digital Operational Resilience for the Financial Sector (DORA). Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R2554 (accessed on 12 March 2025).
  52. Agency for the Cooperation of Energy Regulators (ACER). REMIT Quarterly Report—Monitoring Trading Patterns in Energy Markets. Available online: https://www.acer.europa.eu/sites/default/files/REMIT/REMIT%20Reports%20and%20Recommendations/REMIT%20Quarterly/REMITQuarterly_Q2_2020_3.0.pdf (accessed on 12 March 2025).
  53. European Parliament and Council. Regulation (EU) No 1227/2011 on Wholesale Energy Market Integrity and Transparency (REMIT). Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32011R1227 (accessed on 12 March 2025).
  54. NERC (North American Electric Reliability Corporation). NERC CIP Standards—Critical Infrastructure Protection. 2023. Available online: https://www.nerc.com/pa/Stand/Pages/Project-2014-XX-Critical-Infrastructure-Protection-Version-5-Revisions.aspx (accessed on 26 February 2025).
  55. U.S. Department of Energy (DOE). AI for Energy: Opportunities for a Modern Grid and Clean Energy Economy; U.S. Department of Energy: Washington, DC, USA, 2024. Available online: https://www.energy.gov/sites/default/files/2024-04/AI%20EO%20Report%20Section%205.2g%28i%29_043024.pdf (accessed on 25 April 2025).
  56. National Institute of Standards and Technology (NIST). AI Risk Management Framework (AI RMF). 2023. Available online: https://www.nist.gov/itl/ai-risk-management-framework (accessed on 26 February 2025).
  57. Office of Science and Technology Policy (OSTP). Blueprint for an AI Bill of Rights; The White House: Washington, DC, USA, 2022. Available online: https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/ (accessed on 11 March 2025).
  58. Executive Office of the President. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. 2023. Available online: https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence (accessed on 26 February 2025).
  59. Federal Trade Commission (FTC). On Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. 2024. Available online: https://www.ftc.gov/system/files/ftc_gov/pdf/FTC-AI-Use-Policy.pdf (accessed on 26 February 2025).
  60. Electric Power Research Institute (EPRI). Advancing AI Integration in the European Energy Sector. EPRI Press Releases, 7 November 2024. Available online: https://europe.epri.com/press-releases/advancing-ai-integration-european-energy-sector (accessed on 11 March 2025).
  61. Cyberspace Administration of China (CAC). Internet Information Service Algorithmic Recommendation Management Provisions; CAC: Beijing, China, 2022; Available online: https://www.chinalawtranslate.com/en/algorithms/ (accessed on 11 March 2025).
  62. Cyberspace Administration of China (CAC). Provisions on the Administration of Deep Synthesis Internet Information Services; CAC: Beijing, China, 2023; Available online: https://www.chinalawtranslate.com/en/deep-synthesis/ (accessed on 11 March 2025).
  63. Latham & Watkins LLP. China’s New AI Regulations: Interim Measures for the Management of Generative AI Services; Latham & Watkins LLP: Beijing, China, 2023; Available online: https://www.lw.com/admin/upload/SiteAttachments/Chinas-New-AI-Regulations.pdf (accessed on 11 March 2025).
  64. Ministry of Science and Technology of China (MOST). Ethical Norms for New AI Technologies; MOST: Beijing, China, 2021; Available online: https://cset.georgetown.edu/publication/ethical-norms-for-new-generation-artificial-intelligence-released/ (accessed on 11 March 2025).
  65. National People’s Congress of China. Personal Information Protection Law (PIPL). 2021. Available online: https://personalinformationprotectionlaw.com/ (accessed on 26 February 2025).
  66. National People’s Congress of China. Data Security Law (DSL). 2021. Available online: https://npcobserver.com/legislation/data-security-law/ (accessed on 26 February 2025).
  67. FiscalNote. China AI Policy Development: What You Need to Know. FiscalNote Blog, 7 August 2024. Available online: https://fiscalnote.com/blog/china-ai-policy-development-what-you-need-to-know (accessed on 11 March 2025).
  68. Tanner de Witt. Artificial Intelligence: Regulatory Landscape in China and Hong Kong. Tanner de Witt Blog, 14 November 2024. Available online: https://www.tannerdewitt.com/artificial-intelligence-regulatory-landscape-in-china-and-hong-kong (accessed on 11 March 2025).
  69. European Union Agency for the Cooperation of Energy Regulators (ACER). Open Letter on the Notifications of Algorithmic Trading and Direct Electronic Access; ACER: Ljubljana, Slovenia, 2024; Available online: https://www.acer.europa.eu/sites/default/files/REMIT/Guidance%20on%20REMIT%20Application/Open%20Letters%20on%20REMIT%20Policy/Open-letter-on-algorithmic-trading.pdf (accessed on 25 April 2025).
  70. Volkova, A.; Hatamian, M.; Anapyanova, A.; de Meer, H. Being Accountable is Smart: Navigating the Technical and Regulatory Landscape of AI-Based Services for Power Grid. arXiv 2024, arXiv:2408.01121v1. Available online: https://arxiv.org/abs/2408.01121 (accessed on 11 March 2025).
  71. Artificial Intelligence Act. Article 112: Evaluation and Review. Artificial Intelligence Act Website, 2024. Available online: https://artificialintelligenceact.eu/article/112 (accessed on 11 March 2025).
  72. European Consumer Organisation (BEUC). AI and Generative AI: Trilogue negotiations for the AI Act. 2023. Available online: https://www.beuc.eu/position-papers/ai-and-generative-ai-trilogue-negotiations-ai-act (accessed on 26 February 2025).
  73. European Parliament. The Role of Artificial Intelligence in the European Green Transition. 2021. Available online: https://www.europarl.europa.eu/RegData/etudes/STUD/2021/662906/IPOL_STU(2021)662906_EN.pdf (accessed on 26 February 2025).
Figure 1. The PRISMA-ScR flow diagram is based on the source counts listed in Table 1.
Table 1. Search strings and search engines for each data source. The searches were conducted in February and March 2025.
Source | Search Engine and String
EUR-Lex | Google: site:eur-lex.europa.eu (“Energy Sector”) (Act OR Law OR Regulation OR Directive) (“Artificial Intelligence” OR Digitalization) (“Data Privacy” OR Cybersecurity OR Resilience) lang:en
European Commission website | Google: site:ec.europa.eu (EU OR “European Union”) (“Energy Sector”) (Act OR Law OR Regulation OR Directive) (“Artificial Intelligence” OR Digitalization) (“Data Privacy” OR Cybersecurity OR Resilience) (Generation OR “Transmission grid” OR “Distribution grid” OR Consumption OR Market) lang:en
ENISA | Google: site:www.enisa.europa.eu (EU OR “European Union”) (Energy) (AI OR “Artificial Intelligence” OR Digitalization) (“Data Privacy” OR Cybersecurity OR Resilience) (Act OR Law OR Regulation OR Directive) lang:en
Web of Science; Scopus | autoresearch.sdu.dk: (EU OR “European Union”) AND (Energy) AND (AI OR “Artificial Intelligence” OR Digitalization) AND (Act OR Law OR Regulation OR Directive) AND (“Data Privacy” OR Cybersecurity OR Resilience), searched in title, abstract, and keywords
Google Web Search | Google: site:europa.eu (“European Union”) (“Energy sector”) (“White paper” OR Report) (Act OR Law OR Regulation OR Directive) (“Artificial Intelligence” OR “Data Privacy” OR Cybersecurity OR Resilience) lang:en
ACER | Google: site:www.acer.europa.eu (Energy) (AI OR “Artificial Intelligence”) lang:en
ENTSO-E | Google: site:www.entsoe.eu (Energy) (Act OR Law OR Regulation OR Directive) (AI OR “Artificial Intelligence” OR Digitalization OR “Data Privacy” OR Cybersecurity OR Resilience) (Generation OR “Transmission grid” OR “Distribution grid” OR Consumption OR Market) lang:en
IEA | Google: site:www.iea.org (Energy) (AI OR “Artificial Intelligence” OR Digitalization) (“Data Privacy” OR Cybersecurity OR Resilience) (Act OR Law OR Regulation OR Directive) lang:en
IRENA | Google: site:www.irena.org (Energy) (“Artificial Intelligence” OR Digitalization) (“Data Privacy” OR Cybersecurity OR Resilience) (Act OR Law OR Regulation OR Directive) lang:en
NERC | Google: site:www.nerc.com (Energy) (“Artificial Intelligence” OR Digitalization) (“Data Privacy” OR Cybersecurity OR Resilience) lang:en
Ofgem | Google: site:www.ofgem.gov.uk (Energy) (“Artificial Intelligence” OR Digitalization) (“Data Privacy” OR Cybersecurity OR Resilience) lang:en
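The queries in Table 1 all follow the same pattern: a site: restriction followed by AND-combined groups of OR-combined terms, with multi-word terms quoted as exact phrases. As an illustration only (the build_query helper below is hypothetical and not part of the review's tooling), the EUR-Lex string can be assembled as:

```python
def build_query(site, groups):
    """Compose a site-restricted Google query from AND-ed groups of OR-ed terms.

    Multi-word terms are quoted as exact phrases, matching the style of Table 1.
    """
    parts = [f"site:{site}"]
    for terms in groups:
        ored = " OR ".join(f'"{t}"' if " " in t else t for t in terms)
        parts.append(f"({ored})")
    return " ".join(parts)

# Reconstructing the EUR-Lex search string from Table 1 (minus the lang:en filter):
query = build_query(
    "eur-lex.europa.eu",
    [
        ["Energy Sector"],
        ["Act", "Law", "Regulation", "Directive"],
        ["Artificial Intelligence", "Digitalization"],
        ["Data Privacy", "Cybersecurity", "Resilience"],
    ],
)
print(query)
```

Keeping the groups as plain term lists makes it straightforward to vary one facet (e.g., the source-specific keywords) while holding the rest of the query constant across the twelve sources.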
Table 2. Summary of source selection and the number of identified, screened, and included records.
Type | Source | Identification | Screening | Included
Official EU legal documents | EUR-Lex | 552 | Top 100 | 19
Regulatory and policy reports | European Commission website | 606 | Top 100 | 24
Regulatory and policy reports | European Union Agency for Cybersecurity (ENISA) | 41 | 41 | 3
Academic literature | Web of Science | 3 | 3 | 0
Academic literature | Scopus | 4 | 4 | 1
Industry white papers and technical standards | Google Web Search | 6570 | Top 100 | 18
Energy agencies | ACER | 10 | 10 | 1
Energy agencies | ENTSO-E | 40 | 40 | 1
Energy agencies | IEA | 34 | 34 | 1
Energy agencies | IRENA | 9 | 9 | 1
Energy agencies | NERC | 27 | 27 | 1
Energy agencies | Ofgem | 10 | 10 | 1
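The per-source counts in Table 2 feed the PRISMA-ScR flow in Figure 1. A purely illustrative tally (it reproduces only the numbers listed in Table 2, with the "Top 100" screening caps encoded as 100) can be written as:

```python
# (identified, screened, included) per source, as listed in Table 2.
counts = {
    "EUR-Lex": (552, 100, 19),
    "European Commission website": (606, 100, 24),
    "ENISA": (41, 41, 3),
    "Web of Science": (3, 3, 0),
    "Scopus": (4, 4, 1),
    "Google Web Search": (6570, 100, 18),
    "ACER": (10, 10, 1),
    "ENTSO-E": (40, 40, 1),
    "IEA": (34, 34, 1),
    "IRENA": (9, 9, 1),
    "NERC": (27, 27, 1),
    "Ofgem": (10, 10, 1),
}

# Column-wise totals across all twelve sources.
identified = sum(c[0] for c in counts.values())
screened = sum(c[1] for c in counts.values())
included = sum(c[2] for c in counts.values())
print(identified, screened, included)
```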
Table 3. EU regulatory framework and its impact on AI in the energy sector.
EU Regulation | Relevance to AI in Energy | Key Provisions/Impact
Artificial Intelligence Act (proposed) | Many energy AI systems are classified as high-risk (critical infrastructure). | High-risk AI (e.g., grid-control AI) must undergo conformity assessment and ensure transparency and human oversight [1,19]. Imposes documentation and risk-management duties on AI developers; aims to prevent harm and bias [1,3].
GDPR (2018) | Smart meters, consumer AI apps, and any personal data processed by AI. | A legal basis is needed for processing consumer energy data. Requires data minimization and possibly anonymization [19]. Article 22 limits fully automated decisions affecting consumers. DPIAs are required for high-risk data use. Strong security controls must protect personal data.
NIS2 Directive (2022) | Energy companies (generation, grids, oil, gas) must secure all network and information systems, including AI systems. | Requires state-of-the-art cyber measures [23]: risk management, incident response, and continuity [22]. Mandatory reporting of significant incidents [9]. Board-level accountability and possible fines. Supply-chain security obligations affect the procurement of AI systems.
Cyber Resilience Act (2024) | All digital products sold in the EU for the energy sector (IoT, software, including AI software). | Mandates security-by-design and secure-update requirements to protect against misuse [22]. Manufacturers must fix vulnerabilities and may need certification for critical products [22]. Energy IoT and AI devices will need CE marking for cyber compliance by 2027.
Network Code on Cybersecurity (2024, electricity) | Grid operators (TSOs/DSOs) using AI in system operations. | Requires regular cyber risk assessments of critical grid ICT [9]. Identifies critical digital systems (including AI) and mitigates risks. Aligns with NIS2 reporting [9]. Fosters ENTSO-E and DSO cooperation in setting security controls. Ensures robust security governance for any AI affecting cross-border electricity flows.
EU Cybersecurity Act (2019) | Certification schemes and guidance covering AI components (voluntary but influential). | Enables EU-wide certification schemes for ICT products [10]; e.g., cloud services used for AI can be certified [30]. ENISA develops best-practice frameworks (e.g., AI cybersecurity guidelines) [33]. Provides trust labels that energy companies can require from AI vendors (e.g., certified secure products).