Review

Addressing Challenges for the Effective Adoption of Artificial Intelligence in the Energy Sector

Division of Climate Change, Hankuk University of Foreign Studies, Yongin 17035, Republic of Korea
Sustainability 2025, 17(13), 5764; https://doi.org/10.3390/su17135764
Submission received: 7 June 2025 / Revised: 17 June 2025 / Accepted: 19 June 2025 / Published: 23 June 2025

Abstract

The integration of artificial intelligence (AI) in the energy sector offers transformative potential but is hindered by a complex web of interconnected socio-technical challenges. The existing scholarship often addresses these issues in isolation, lacking a practical framework to guide stakeholders through the complexities of responsible deployment. This study addresses this gap by conducting a systematic literature review to develop and propose an integrative, actionable governance framework. The proposed framework is built on four core principles: Trustworthiness, Sustainability, Equity, and Collaborative Adaptation. Crucially, it operationalizes these principles through a four-phased implementation process, a stakeholder-specific action matrix with measurable key performance indicators, and a set of critical success factors. By synthesizing diverse solutions—from technical standards for data and security to governance mechanisms for ethical oversight and workforce transition—into a structured, lifecycle-based approach, this study argues that moving beyond piecemeal fixes is essential for mitigating systemic risks. This framework provides a testable roadmap for future research and a practical guide for policymakers and industry leaders seeking to harness AI’s full potential in a sustainable, ethical, and inclusive manner.

1. Introduction

The energy sector is in the midst of a pivotal digital transformation, with artificial intelligence (AI) at its core, offering unprecedented potential to revolutionize power generation, ensure grid stability, and optimize overall energy efficiency [1,2]. While AI’s capabilities—spanning from resource exploration to real-time grid management—position it as a crucial enabler of innovation and the clean energy transition [3,4,5], its widespread adoption is not without significant hurdles. Indeed, as the urgency of the global energy transition intensifies, unaddressed challenges in AI deployment—ranging from technical integration complexities to profound ethical dilemmas—are emerging as critical bottlenecks that risk impeding progress toward a decarbonized and equitable energy future [6,7].
This rapid acceleration in AI adoption, exemplified by a more than twofold increase in AI-focused power sector activities between 2022 and 2023 [8,9], is occurring within an energy sector that is inherently conservative due to the high stakes associated with system reliability and stringent regulatory compliance [10]. Consequently, these are not merely technical teething problems but profound systemic issues that span multiple dimensions. They include technical barriers such as data integrity and cybersecurity [11,12]; economic hurdles like high implementation costs and uncertain ROI [13]; and critical socio-technical dilemmas including algorithmic bias, workforce skill gaps, and eroding public trust [7,14]. Crucially, these challenges are highly context-dependent, varying significantly across energy sub-sectors and geographical regions. Failure to comprehensively address these interconnected challenges risks not only slowing the energy transition but also undermining public trust and investment in AI-driven energy solutions.
While the existing scholarship acknowledges AI’s potential, it often addresses these challenges in a fragmented, technical manner, overlooking their systemic interplay [15]. This leaves a critical lacuna: the absence of a holistic and, most importantly, actionable framework to navigate these interconnected risks. This study addresses this gap by developing an integrative socio-technical framework based on a systematic review of the literature. It argues that successful AI adoption requires a paradigm shift from piecemeal technological fixes to a systemic approach grounded in the principles of Sustainability, Trustworthiness, Equity, and Collaborative Adaptation. Therefore, this research is guided by a central question: How can a multi-dimensional, socio-technical framework provide actionable guidance for stakeholders to navigate the complex challenges of AI adoption in the energy sector? The paper is structured as follows: Section 2 details the methodology, Section 3 analyzes the identified challenges, Section 4 presents the proposed framework, and Section 5 offers concluding remarks.

2. Methodology

2.1. Search Strategy and Study Selection

This study employed a systematic literature review guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [16] to identify and synthesize research on the challenges of AI adoption in the energy sector. The academic database Google Scholar was searched to ensure broad interdisciplinary coverage, with the scope restricted to English-language publications published from 1 January 2018 to 31 December 2024. The following advanced query was used: (intitle:energy OR intitle:oil OR intitle:gas OR intitle:grid OR intitle:electricity OR intitle:"electric power" OR intitle:wind OR intitle:solar) (intitle:"artificial intelligence" OR intitle:"machine learning") (intitle:challenges OR intitle:risk OR intitle:problems).
The screening process adhered to pre-defined eligibility criteria to ensure relevance and quality. Inclusion was limited to peer-reviewed journal articles and full conference papers whose primary focus was the challenges, risks, or barriers of AI/ML adoption within the energy sector. Studies with only a tangential discussion of challenges, purely theoretical AI papers lacking an energy context, and non-peer-reviewed materials such as editorials and white papers were excluded. This two-stage screening (title/abstract, then full-text) was conducted by a single author following a strict, pre-defined protocol to ensure consistency. The initial search yielded 211 records, from which 29 core peer-reviewed articles were selected for in-depth analysis.
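To make the construction of the advanced query above reproducible, the following minimal sketch assembles it programmatically from the three term groups named in the text. The helper name `intitle_group` is an illustrative choice, not part of any Google Scholar API; Scholar implicitly ANDs the parenthesized groups.

```python
# Sketch: assemble the Google Scholar advanced query used in Section 2.1.
# Each group is OR-joined; the groups are implicitly AND-ed by Scholar.

def intitle_group(terms):
    """OR-join a list of terms, each scoped to the title field."""
    parts = []
    for t in terms:
        quoted = f'"{t}"' if " " in t else t  # exact-phrase terms need quotes
        parts.append(f"intitle:{quoted}")
    return "(" + " OR ".join(parts) + ")"

domain = ["energy", "oil", "gas", "grid", "electricity",
          "electric power", "wind", "solar"]
method = ["artificial intelligence", "machine learning"]
focus = ["challenges", "risk", "problems"]

query = " ".join(intitle_group(g) for g in (domain, method, focus))
print(query)
```

Keeping the term lists in code makes it straightforward for future work to broaden the search criteria, as suggested in Section 2.3.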

2.2. Data Synthesis and Analysis

Data synthesis was performed using thematic analysis [17], an inductive process to systematically identify, analyze, and report patterns or themes from the selected literature. To supplement the academic analysis and incorporate crucial real-world context, a backward and forward snowballing process was applied to the 29 core articles. This yielded 55 additional publications. This secondary set included both peer-reviewed articles and a targeted selection of high-impact reports from authoritative bodies (e.g., IEA, WEF) that inform industry and policy discourse.
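The snowballing step can be pictured as a traversal over a citation graph: backward snowballing follows a core paper's reference list, while forward snowballing follows papers that cite it. The sketch below shows one hop of this process under illustrative paper IDs; real snowballing iterates until no new relevant records appear.

```python
# Sketch: one hop of backward/forward snowballing over a citation graph.
# `cites` maps a paper ID to the papers it cites (IDs are illustrative).

def snowball(core, cites):
    # Invert the graph once so forward snowballing (who cites me?) is cheap.
    cited_by = {}
    for src, refs in cites.items():
        for ref in refs:
            cited_by.setdefault(ref, set()).add(src)
    found = set()
    for paper in core:
        found |= set(cites.get(paper, ()))   # backward: its references
        found |= cited_by.get(paper, set())  # forward: papers citing it
    return found - set(core)                 # only newly discovered records

cites = {"A": ["B", "C"], "D": ["A"], "E": ["C"]}
new = snowball({"A"}, cites)
print(sorted(new))  # backward finds B and C; forward finds D
```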

2.3. Methodological Limitations

Despite efforts to ensure rigor, certain limitations should be acknowledged. The reliance on English-language publications and a specific search query may have resulted in the omission of some pertinent studies. The single-reviewer process, while systematic, is inherently more susceptible to individual interpretation than a multi-reviewer approach, though expert validation was employed as a mitigating measure. These factors may influence the generalizability of some nuanced findings, suggesting avenues for future research employing broader search criteria and multi-reviewer validation.

3. A Multi-Dimensional Landscape of AI Adoption Challenges

The systematic review of peer-reviewed publications and expert reports revealed a complex and interconnected landscape of challenges hindering the effective adoption of AI in the energy sector. The analysis showed a strong emphasis in the literature on technical barriers, yet socio-technical and economic challenges were consistently identified as critical, often cascading, impediments to successful deployment. These findings are classified into four primary categories and seventeen subtopics, as summarized in Table 1.
Furthermore, a chronological examination of the literature cited in Table 1 reveals a notable evolution in the discourse. Foundational challenges such as data quality, cybersecurity, and high costs have persisted as core concerns throughout the 2018–2025 period (e.g., [11,24,35]). However, more recent scholarship increasingly emphasizes higher-order, systemic challenges. The focus has been shifting from initial implementation hurdles toward ensuring the Trustworthiness and transparency of deployed systems, as seen in the growing body of work on XAI [25,26]. Concurrently, as AI applications mature within specific domains, the challenge of coordinating across multiple energy vectors (e.g., electricity, gas, heat) has emerged as a key frontier [36,38]. Most significantly, the nature of socio-ethical concerns has deepened, moving beyond privacy to tackle nuanced issues of algorithmic bias and AI-driven greenwashing [41]. This trend suggests a maturation of the field, from asking “Can we build it?” to “How should we build it responsibly and integrate it systemically?” This section now offers an in-depth analysis of these dimensions, focusing on their critical interdependencies.

3.1. Technical Issues

Technical challenges form a foundational impediment, often creating cascading issues across other domains. This review identifies five critical, interconnected sub-areas.

3.1.1. Data Quality and Noise (The Foundational Bottleneck)

Poor data quality (sensor noise, incomplete/heterogeneous datasets) is a primary bottleneck, leading to skewed AI predictions [5,11,19] and hampering applications like renewable forecasting [4,10]. This directly undermines AI model efficacy, erodes operator trust by reducing explainability (XAI) (Section 3.1.4; [25]), causes economic losses from suboptimal decisions (Section 3.2; [11]), and can increase computational (HPC) overhead and AI’s own energy footprint when attempting to compensate for data deficiencies (Section 3.1.2 and Section 3.2.4). The challenge is acute in legacy systems and developing nations [35].
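Two of the data-quality defects named above, sensor noise and incomplete readings, admit simple first-line remedies: flagging extreme spikes statistically and filling short gaps by interpolation. The sketch below illustrates both on a toy series; the z-score threshold and the representation of missing readings as `None` are illustrative assumptions, not a prescribed pipeline.

```python
# Sketch: basic cleaning of a noisy, incomplete sensor series. Spikes
# beyond a z-score threshold are treated as missing, then all gaps are
# filled by linear interpolation between valid neighbours.
import statistics

def clean(series, z_thresh=3.0):
    obs = [v for v in series if v is not None]
    mu, sd = statistics.mean(obs), statistics.stdev(obs)
    # Mark extreme spikes as missing so they are interpolated too.
    work = [None if v is not None and sd > 0 and abs(v - mu) / sd > z_thresh
            else v for v in series]
    out = list(work)
    for i, v in enumerate(out):
        if v is None:
            lo = max(j for j in range(i) if out[j] is not None)
            hi = min(j for j in range(i + 1, len(out)) if work[j] is not None)
            out[i] = out[lo] + (work[hi] - out[lo]) * (i - lo) / (hi - lo)
    return out

print(clean([10.0, None, 12.0, 11.0, 10.5]))  # gap at index 1 is filled
```

In legacy systems with sparse or heterogeneous telemetry, even such minimal preprocessing can materially change downstream model behavior, which is why data governance precedes model work in Phase 2 of the framework.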

3.1.2. HPC and Computational Overhead (Balancing Performance with Sustainability)

Sophisticated AI for energy applications demands significant high-performance computing (HPC) resources [3,20], leading to high capital/operational costs and substantial energy consumption [5,18]. This creates an economic strain and potential “AI divide” (Section 3.2.1; [29]), contributes to AI’s carbon footprint (Section 3.2.4; [42]), is exacerbated by efforts to overcome poor data quality or model complexity (Section 3.1.1 and Section 3.1.5), and can introduce latency in mission-critical systems [40]. Addressing this requires energy-efficient hardware/algorithms and “Sustainable AI” practices [43], though these may face technical or R&D investment limitations (Section 3.2.3).

3.1.3. Cybersecurity (Protecting Critical Infrastructure in an AI-Driven Landscape)

AI integration broadens the attack surface in critical energy infrastructure [12,22], making systems attractive targets with potentially devastating consequences [44]. Challenges include AI-specific vulnerabilities like adversarial attacks and data poisoning [24,45], risks from integrating AI with legacy OT systems and IoT devices (Section 3.3.1; [40]), potential for data privacy breaches eroding public trust (Section 3.4.2; [34]), and threats to system stability and safety [39]. Even AI-driven cybersecurity tools have vulnerabilities and face a workforce skills gap (Section 3.4.1). Defense-in-depth strategies are crucial but represent an ongoing operational and economic challenge (Section 3.3 and Section 3.2.1).

3.1.4. Explainability (XAI) (Bridging the Trust Gap in AI-Driven Energy Systems)

The ‘black box’ nature of complex AI models hinders understanding of their decision-making, a critical barrier in high-stakes energy applications [25,26]. This erodes operator trust, especially if data quality is questionable (Section 3.1.1), impedes debugging and validation [27], complicates regulatory compliance and accountability ([46]; Section 3.4.2), and hinders public acceptance (Section 3.4.3; [13]). While XAI techniques exist [47], their application in complex, real-time energy systems, and providing meaningful explanations to non-experts, remain challenging, creating a persistent tension with the drive for higher model accuracy and complexity (Section 3.1.5).

3.1.5. Model Complexity and Advanced Fault Handling (Navigating a Dynamic and Uncertain Operational Landscape)

Energy systems’ dynamism necessitates AI models capable of sophisticated reasoning and robust fault handling [11], but model complexity and generalizability are significant challenges [48]. Overfitting to noisy or unrepresentative data (Section 3.1.1) limits real-world utility [4,27]. The pursuit of accuracy often increases complexity, reducing XAI (Section 3.1.4) and increasing HPC costs (Section 3.1.2). Standard AI struggles with novel or rare events (e.g., extreme weather, cyber-physical attacks as in Section 3.1.3), necessitating advanced FDD and adaptive learning, though ensuring the safety of such adaptive systems is a research and operational challenge (Section 3.3; [10,49]). Hybrid physics-ML models (e.g., PINNs) offer promise [32,50] but can be complex to develop, requiring interdisciplinary expertise (linking to workforce skills, Section 3.4.1). Robustness against data/concept drift requires ongoing MLOps, an operational and strategic undertaking (Section 3.3; [27]).

3.2. Economic and Environmental Issues: Balancing Innovation with Financial Prudence and Ecological Responsibility

The adoption of AI in energy is shaped by economic realities and environmental imperatives, which are often interlinked.

3.2.1. High Costs/CAPEX (The Financial Barrier to AI-Driven Innovation)

Substantial upfront CAPEX for AI infrastructure, data systems, sensors, and specialized hardware/software is a primary hurdle [28,29]. This creates an accessibility and Equity issue, potentially widening the “AI divide” [18], which is compounded by costs of integrating with legacy systems (Section 3.3.1; [35]), and can lead to risk aversion and delayed adoption given uncertain returns [11,13]. Mitigating this requires innovative financing and supportive policies (Section 3.2.3).

3.2.2. ROI Uncertainty (Navigating Unpredictable Returns in a Dynamic Sector)

Uncertainty regarding ROI for energy AI projects is significant due to market volatility, policy shifts, and difficulties in quantifying intangible benefits like improved reliability [13,30,31]. This is worsened by integration risks and AI performance variability linked to data quality or model generalizability (Section 3.1.1, Section 3.1.5 and Section 3.3.3; [29]), and challenges in scaling nascent technologies [32]. Robust planning and de-risking mechanisms are needed [51].
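One concrete de-risking mechanism is scenario-weighted net present value (NPV) analysis: quantifying returns under optimistic, base, and pessimistic assumptions rather than a single forecast. The sketch below illustrates the arithmetic; the cash flows, 8% discount rate, and scenario probabilities are purely illustrative stand-ins for a pilot project appraisal.

```python
# Sketch: scenario-weighted NPV for an AI forecasting pilot. All figures
# (cash flows, discount rate, probabilities) are illustrative.

def npv(rate, cash_flows):
    # cash_flows[0] is the upfront (year-0) investment, negative.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

scenarios = {  # probability, yearly cash flows (arbitrary currency units)
    "optimistic":  (0.3, [-100, 60, 60, 60]),
    "base":        (0.5, [-100, 40, 40, 40]),
    "pessimistic": (0.2, [-100, 10, 10, 10]),
}

expected = sum(p * npv(0.08, cfs) for p, cfs in scenarios.values())
print(round(expected, 2))  # marginally positive despite downside risk
```

Making the downside scenario explicit is what supports the "robust planning and de-risking mechanisms" called for above: a project that is attractive only in the optimistic branch warrants phased deployment or external de-risking instruments.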

3.2.3. Policy and Funding Gaps (The Need for Supportive and Coherent Governance)

Successful AI adoption relies on supportive policies and funding, which are often lacking or misaligned [18,37]. Key issues include unclear regulatory frameworks for AI in energy [13,52], insufficient/misaligned funding [32], challenges in multi-operator/cross-border governance [34], and policy silos that fail to integrate AI promotion with Sustainability or other goals. Adaptive governance, targeted funding, and international collaboration are crucial [53].

3.2.4. AI’s Own Energy Consumption and Carbon Footprint (The Sustainability Paradox)

The significant energy footprint of training and running AI models presents a ‘Sustainability Paradox’ [18,42,54,55]. This is directly linked to HPC overhead and model complexity (Section 3.1.2 and Section 3.1.5) and is influenced by data center energy mix and water usage [56,57]. This has economic and social equity implications (Section 3.2.1 and Section 3.4.2) and necessitates a consideration of life cycle emissions. The pursuit of ‘Sustainable AI’ [43] via efficient algorithms/hardware and sustainable data center practices is vital [58], but faces its own R&D, cost, and policy challenges (Section 3.2.1 and Section 3.2.3).
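The operational side of this paradox can be made tangible with a back-of-envelope estimate: training energy scales with accelerator count, power draw, run time, and data-center overhead (PUE), and emissions with grid carbon intensity. All input values in the sketch below are illustrative assumptions, not measured figures.

```python
# Sketch: back-of-envelope operational footprint of a model training run.
# GPU draw, PUE, and grid intensity below are illustrative assumptions.

def training_footprint(gpus, watts_per_gpu, hours, pue, kgco2_per_kwh):
    energy_kwh = gpus * watts_per_gpu / 1000 * hours * pue
    return energy_kwh, energy_kwh * kgco2_per_kwh

energy, co2 = training_footprint(
    gpus=8, watts_per_gpu=300, hours=100,  # 8 accelerators, ~100 h run
    pue=1.5,                               # data-center cooling/overhead
    kgco2_per_kwh=0.4)                     # assumed grid carbon intensity
print(f"{energy:.0f} kWh, {co2:.0f} kg CO2")
```

The same arithmetic shows the two main "Sustainable AI" levers: reducing compute (fewer GPU-hours via efficient algorithms) and reducing intensity (cleaner grids, lower PUE), which correspond to the algorithmic and data-center practices cited above [43,58].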

3.3. Operational and Strategic Issues: Navigating Complexity in Dynamic Energy Systems and Business Environments

Effective AI integration challenges existing operational paradigms and demands astute strategic decision making, deeply interconnected with technical, economic, and socio-labor factors.

3.3.1. Real-Time Integration (The Imperative of Seamlessness and Responsiveness)

Seamless real-time integration for applications like grid stabilization or automated demand response [34,39] is operationally challenging. Legacy infrastructure often lacks necessary digital capabilities [35,36], requiring costly retrofits (linking to CAPEX, Section 3.2.1). Data latency and connectivity issues can nullify AI benefits [22,40], while a lack of interoperability across heterogeneous systems hinders holistic AI-driven operations [23] and multi-energy coordination (Section 3.3.2). Integrating AI into real-time control loops also heightens cybersecurity risks (Section 3.1.3).

3.3.2. Multi-Energy Coordination (Optimizing Interconnected Systems of Systems)

AI offers potential for optimizing increasingly integrated multi-energy systems (electricity, heat, gas, hydrogen) [37,38]. However, this involves high modeling complexity due to diverse constraints and dynamics [35], overcoming data fragmentation from historically siloed operations ([23]; linking to policy/governance, Section 3.2.3), addressing interoperability deficits [33], and navigating regulatory/market designs often ill-suited for cross-sector optimization [38]. Balancing conflicting objectives also presents an ethical and strategic challenge (Section 3.4.2).

3.3.3. Integration Risks (Managing Uncertainty in AI Deployment)

Deploying AI into safety-critical energy systems involves technical, operational, and market risks [25,29]. AI systems may underperform in real-world conditions due to data/concept drift or poor generalization (Section 3.1.5; [27,39]), impacting ROI (Section 3.2.2). Interoperability failures, cost overruns, cybersecurity breaches during integration (Section 3.1.3), and organizational resistance or change management issues (Section 3.4.1 and Section 3.4.3; [13]) are key concerns. Meticulous planning, phased deployment, and robust validation are crucial.

3.3.4. Novel Tech Transitions (Strategically Adopting Emerging AI-Enabled Solutions)

Adopting novel AI-enabled technologies (e.g., blockchain for energy, AI for ESG/greenwashing detection) presents strategic dilemmas [31,34,41]. Organizations must balance early-mover advantages against risks of unproven technologies and integration complexities [31]. New AI uses bring ethical implications (e.g., AI perpetuating greenwashing, Section 3.4.2; [41]), while transfer learning limitations ([32]; Section 3.1.5) and evolving regulatory landscapes (Section 3.2.3) add uncertainty. These transitions often require new business models and ecosystems [36].

3.4. Labor and Social Issues: Navigating the Human and Societal Dimensions of AI in Energy

AI adoption is profoundly intertwined with human and societal factors, which can amplify or mitigate other challenges and are crucial for a just and sustainable energy transition.

3.4.1. Workforce Skills (Bridging the Gap for an AI-Powered Energy Future)

A significant skills gap exists between current capabilities and the need for hybrid “energy + AI” professionals [11,29,59]. This shortage impacts AI system development quality, and AI-driven automation raises concerns about job displacement, necessitating large-scale reskilling/upskilling initiatives which have economic and strategic implications (Section 3.2 and Section 3.3; [13,27,60]). Educational systems often lag [61], and the energy sector faces challenges in attracting/retaining AI talent.

3.4.2. Ethics and Bias (Ensuring Fairness, Accountability, and Transparency in Algorithmic Decision Making)

AI in critical energy decisions raises profound ethical concerns regarding algorithmic bias, data privacy, and accountability [38,62]. Biased AI, trained on historical data reflecting societal inequities [26,63], could lead to discriminatory outcomes in resource allocation or pricing, eroding public trust (Section 3.4.3). Data privacy with smart meter/IoT data is crucial [10,34], as breaches are linked to cybersecurity risks (Section 3.1.3). Lack of XAI (Section 3.1.4) complicates accountability for AI failures [25]. AI could also facilitate “greenwashing” if misused [41]. Robust ethical AI governance is essential [13,64].

3.4.3. Public Acceptance (Building Trust and Ensuring Social License to Operate)

Public acceptance is critical for AI adoption in energy [13]. Lack of transparency (XAI, Ethics; Section 3.1.4 and Section 3.4.2), fears of job displacement (Section 3.4.1), data privacy concerns (Section 3.1.3 and Section 3.4.2) [22,65], perceptions of unfairness/bias [26], and concerns about AI’s environmental/safety impacts (Section 3.2.4) can fuel opposition. Proactive engagement, transparent communication, and community participation are vital [66].

3.4.4. Safety and Compliance (Upholding Reliability in High-Stakes Environments)

Energy systems are safety-critical. AI failures can have catastrophic consequences [28,35]. Ensuring AI reliability, especially for autonomous control, is a major concern, linked to model complexity and fault handling (Section 3.1.5) [3,20]. Compliance with evolving AI-specific standards and regulations is challenging (linking to policy gaps, Section 3.2.3; [22]). Maintaining appropriate human oversight in AI-controlled systems [67,68,69] and fostering a strong safety culture with trained personnel (Section 3.4.1) are crucial. Liability for AI-induced accidents also remains a complex issue (linking to Ethics and XAI, Section 3.4.2 and Section 3.1.4).

4. A Socio-Technical Framework for Actionable AI Governance

The preceding analysis in Section 3 reveals that the myriad challenges of AI adoption in the energy sector are not merely a list of discrete problems. Instead, they converge around four fundamental underlying themes. The technical and safety issues detailed (e.g., cybersecurity, explainability, reliability) are ultimately questions of Trustworthiness. The economic and environmental hurdles (e.g., high costs, ROI uncertainty, carbon footprint) challenge the long-term Sustainability of AI solutions. The socio-labor concerns (e.g., algorithmic bias, workforce displacement) raise critical questions of Equity. Finally, the operational and strategic barriers (e.g., integration risks, policy gaps, organizational inertia) underscore the necessity for Collaborative Adaptation.
Therefore, an effective framework cannot simply offer piecemeal solutions. It must be built upon principles that directly address these four core pillars. This section proposes such a framework, grounded in the principles of Trustworthiness, Sustainability, Equity, and Collaborative Adaptation, to provide a structured and holistic pathway for responsible AI integration.

4.1. The Core Principles of the Framework

The proposed framework is built upon four interconnected principles, derived from the systemic challenges identified in Section 3. These principles are not merely aspirational goals but serve as practical, guiding pillars for decision making throughout the entire AI lifecycle.

4.1.1. Trustworthiness

Trustworthiness serves as the bedrock of the framework, as its absence is a primary cause of adoption failure. This principle extends beyond the technical robustness of AI systems—encompassing their reliability, cybersecurity, and operational safety [3,22]—to include critical socio-technical dimensions. These are the pillars of explainability (transparency), fairness, and accountability, which are essential for building trust with operators, regulators, and the public. Integrating these ethical elements directly into the development process is fundamental for creating genuinely trustworthy AI [70]. An AI system that is opaque or biased cannot be considered trustworthy, regardless of its technical accuracy [25]. This holistic view aligns with the concept of ‘Trustworthy AI’ central to major regulatory initiatives, such as the European Union’s AI Act [46], which emphasize the need for systems to be lawful, ethical, and robust.

4.1.2. Sustainability

This principle addresses the long-term viability of AI solutions from both an economic and environmental perspective. Economic Sustainability requires a clear line of sight to a positive return on investment (ROI), navigating the high costs and financial risks that currently deter many stakeholders [13]. Environmental Sustainability confronts the paradox of AI’s own significant energy consumption and carbon footprint [5,42]. It mandates the adoption of “Green AI” practices, such as developing energy-efficient algorithms and transparently reporting the environmental impact of AI operations, to ensure that AI is a net positive for the energy system’s climate goals [43].

4.1.3. Equity

The principle of equity ensures that the benefits of the AI-driven energy transition are distributed fairly and that the technology does not perpetuate or exacerbate existing societal inequalities. This demands the proactive identification and mitigation of algorithmic bias, which can otherwise lead to discriminatory outcomes in areas like grid service allocation or access to clean energy programs [7]. Furthermore, equity encompasses the concept of a “just transition,” requiring concrete strategies to address workforce displacement through reskilling and upskilling, thereby ensuring that the economic benefits of automation are shared broadly [59,60].
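Proactive bias identification implies auditing model outputs with concrete fairness metrics. A simple example is the demographic parity gap: the difference in favorable-outcome rates between groups. The sketch below computes it for a toy allocation decision; the data, group labels, and the choice of metric are illustrative (other fairness criteria may be more appropriate in a given deployment).

```python
# Sketch: demographic parity gap for a binary allocation decision
# (1 = enrolled in a clean-energy programme, 0 = rejected). Illustrative data.

def parity_gap(decisions, groups):
    counts = {}
    for d, g in zip(decisions, groups):
        n, k = counts.get(g, (0, 0))
        counts[g] = (n + 1, k + d)
    rates = {g: k / n for g, (n, k) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = parity_gap(decisions, groups)
print(rates, gap)  # group A is enrolled far more often than group B
```

Tracking such a gap over time gives the ex-ante algorithmic impact assessment of Phase 1, and the MLOps monitoring of Phase 4, a measurable target rather than an aspirational statement.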

4.1.4. Collaborative Adaptation

This final principle recognizes that responsible AI deployment is not a static, one-time event but an ongoing, dynamic process. It has two components. Collaboration emphasizes the necessity of multi-stakeholder approaches involving industry, policymakers, academia, and civil society to co-create standards, share knowledge, and build consensus [53,71]. Adaptation highlights the need for agile and flexible governance structures, such as the use of regulatory sandboxes, that can evolve alongside the rapidly advancing technology and changing market conditions [52]. Together, they foster resilient governance capable of navigating the inherent uncertainty of technological transformation.

4.2. A Phased Implementation Process

To translate the framework’s principles into practice, this section proposes a four-phased process that guides AI projects throughout their lifecycle. This approach ensures that technical development is continuously aligned with strategic, ethical, and social objectives by integrating specific, literature-grounded solutions at each stage. The specific strategies and mitigation measures discussed below are synthesized from a broad range of existing literature; a comprehensive summary mapping challenges to their corresponding solutions is provided in the Supplementary Materials for reference.

4.2.1. Phase 1: Strategic Scoping and Design

This initial phase is the most critical for setting the project’s trajectory by embedding Sustainability and Equity from the outset. Before technical development, stakeholders must move beyond feasibility studies to conduct comprehensive risk and opportunity assessments. This involves de-risking the substantial economic uncertainties through robust scenario planning and pilot projects to clarify potential ROI [72]. For high-CAPEX initiatives, such as deploying robotics in offshore wind, securing government grants or forming public–private partnerships is a key strategic action [28]. To further mitigate investment risks, organizations can advocate for and adopt standardized AI certification frameworks that signal a commitment to quality and safety [13].
Crucially, this phase must include performing an ex-ante algorithmic impact assessment. This practice aligns with broader principles of responsible AI governance, which call for evaluating potential societal and ethical impacts before deployment to ensure technologies contribute positively to goals like energy justice [7,73]. It is also the stage for establishing multi-operator consortia for data sharing to reduce duplicative efforts and improve model foundations, provided that strong data governance and trust-building measures are co-developed [38]. Early and transparent engagement with communities and labor representatives is vital to build the social license to operate and to co-design “just transition” pathways [13,60].

4.2.2. Phase 2: Development and Validation

This phase focuses on the technical implementation of Trustworthiness and the environmental dimension of Sustainability. Foundational to this is establishing standard data governance and robust validation protocols to address the core challenge of poor data quality [11,74]. For privacy-sensitive data, techniques such as federated learning or synthetic data generation can be employed [75]. To build reliable and fault-tolerant models, developers should prioritize hybrid physics-ML models that embed domain knowledge, which are particularly effective in handling the extreme weather events common in energy systems [10,32].
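The privacy-preserving value of federated learning comes from its aggregation step: only model parameters, never raw smart-meter data, leave each client, and the server combines them by a sample-weighted average (the FedAvg scheme). The sketch below shows just that aggregation step in simplified form; weight vectors and client sizes are illustrative.

```python
# Sketch: the server-side aggregation step of federated averaging (FedAvg).
# Each client trains locally; only its weight vector and sample count are
# shared. Values below are illustrative.

def fed_avg(client_updates):
    """client_updates: list of (n_samples, weight_vector) pairs."""
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    return [sum(n * w[i] for n, w in client_updates) / total
            for i in range(dim)]

updates = [
    (100, [0.2, 1.0]),  # small client: 100 local samples
    (300, [0.6, 2.0]),  # larger client dominates the weighted average
]
print(fed_avg(updates))
```

In a production setting the local training loop, secure aggregation, and communication rounds add substantial complexity; this fragment is only meant to show why individual household readings never need to be centralized.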
To ensure cybersecurity, development must go beyond standard encryption and incorporate AI-specific defenses. This includes implementing adversarial training to build robustness against novel evasion attacks, an area of active research in smart grid security [76], alongside AI-based intrusion detection systems [45]. To enhance explainability, user-friendly XAI dashboards should be co-designed with operators, providing real-time model transparency and building trust [25]. Finally, to address environmental Sustainability, developers must adopt “Green AI” practices, including algorithmic efficiency techniques like model compression (pruning, quantization) and leveraging cloud/edge computing architectures to reduce computational overhead [54,58].
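Of the compression techniques named above, post-training quantization is the most mechanical: weights are mapped to low-precision integers via a shared scale factor, shrinking memory and energy cost at a bounded accuracy loss. The sketch below shows a simplified symmetric, per-tensor 8-bit variant; the weight values are illustrative and real toolchains add calibration and per-channel scales.

```python
# Sketch: symmetric per-tensor 8-bit post-training quantization, one of
# the "Green AI" compression techniques cited above. Simplified for
# illustration; weights are arbitrary example values.

def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1              # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]  # low-precision integers
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.50, -0.26, 0.127, 0.0]
q, s = quantize(w)
w_hat = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, err)  # reconstruction error is bounded by half the scale step
```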

4.2.3. Phase 3: Deployment and Integration

During deployment, the framework’s focus shifts to navigating operational and organizational realities, testing the system’s real-world Trustworthiness and requiring deep Collaborative Adaptation. Seamless real-time integration often necessitates the modernization of legacy SCADA systems and the implementation of reliable sensor communications, especially for applications like automated demand response [22,77]. Advanced techniques like reinforcement learning (RL) can be used for adaptive dispatch but must be paired with robust fail-safe protocols and manual overrides to ensure safety [39].
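The pairing of an RL policy with fail-safe protocols and manual overrides can be expressed as a thin safety wrapper: the learned policy proposes an action, engineering limits clamp it, and an operator-set override always takes precedence. The class and parameter names below are illustrative stand-ins, not a specific control API.

```python
# Sketch: a fail-safe wrapper around an RL dispatch policy, as described
# above. Actions are clamped to engineering limits; a manual operator
# override always wins. Policy and limits are illustrative.

class SafeDispatch:
    def __init__(self, policy, min_mw, max_mw):
        self.policy, self.min_mw, self.max_mw = policy, min_mw, max_mw
        self.override = None  # operator-set fixed setpoint, if any

    def act(self, state):
        if self.override is not None:              # manual override wins
            return self.override
        proposed = self.policy(state)
        return max(self.min_mw, min(self.max_mw, proposed))  # clamp to limits

agent = SafeDispatch(policy=lambda s: s * 2.0, min_mw=0.0, max_mw=100.0)
print(agent.act(30.0))  # within limits, the policy's action passes through
print(agent.act(80.0))  # out-of-range proposal is clamped to max_mw
```

Keeping the safety envelope outside the learned component means its guarantees hold regardless of how the policy behaves under distribution shift, which is precisely the concern raised for adaptive systems in Section 3.1.5.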
This is also the critical phase for addressing the workforce skills gap. The deployment should be accompanied by domain-integrated AI training programs that equip operators and maintenance staff with the skills needed to manage, interpret, and trust the new systems [60]. Overcoming organizational resistance requires a clear change management strategy, continuous communication, and demonstrating the value of AI in augmenting, rather than simply replacing, human expertise.

4.2.4. Phase 4: Governance and Iteration

The final phase operationalizes Collaborative Adaptation through robust, long-term governance and an iterative learning cycle. This involves implementing robust Machine Learning Operations (MLOps) practices for the continuous monitoring of the AI system’s performance, fairness, and security. Such practices are essential for detecting model drift and unintended consequences in live environments, thereby ensuring sustained Trustworthiness over time [78,79]. Establishing clear ethical AI guidelines and accountability structures is essential for governing the live system and addressing any issues that arise [7,80].
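One widely used drift-monitoring statistic in MLOps practice is the Population Stability Index (PSI), which compares the binned distribution of a live input feature against its distribution at training time. The sketch below implements it in simplified form; the bin counts and the 0.2 alert threshold are illustrative conventions, not fixed standards.

```python
# Sketch: Population Stability Index (PSI) as a simple drift check between
# a reference (training-time) distribution and live data. Bins and the
# 0.2 threshold are illustrative.
import math

def psi(ref_counts, live_counts):
    ref_n, live_n = sum(ref_counts), sum(live_counts)
    score = 0.0
    for r, l in zip(ref_counts, live_counts):
        p = max(r / ref_n, 1e-6)   # floor avoids log(0) on empty bins
        q = max(l / live_n, 1e-6)
        score += (p - q) * math.log(p / q)
    return score

reference = [50, 30, 20]   # binned load readings at training time
stable    = [48, 32, 20]   # similar mix: low PSI, no alert
shifted   = [20, 30, 50]   # distribution has moved: high PSI, retrain

print(round(psi(reference, stable), 4), psi(reference, shifted) > 0.2)
```

A scheduled check of this kind turns "continuous monitoring" from a governance aspiration into an automatable trigger for the retraining-and-review loop described in this phase.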
This phase also embraces adaptation through policy and practice. Organizations can leverage regulatory sandboxes to test novel applications or business models in a controlled environment, providing regulators with the evidence needed to develop informed, agile policies [52]. The insights gained from ongoing operations feed back into the strategic scoping of future projects, creating a virtuous cycle of learning and improvement that is essential for navigating the long-term, dynamic evolution of AI in the energy sector.

4.3. Application in Practice: A Stakeholder-Oriented Action Matrix

While the phased process in Section 4.2 provides a structured roadmap, its success hinges on translating broad activities into specific, measurable responsibilities for the key actors within the energy ecosystem. A principle without a designated owner often remains an abstraction. To bridge this critical gap between process and practice, this section introduces a Stakeholder Action Matrix.
This matrix is designed as a practical tool for organizations to operationalize the framework’s principles. It moves beyond simply identifying solutions to assigning ownership and defining what success looks like. Table 2 offers illustrative examples of how different stakeholders can take concrete steps to address the systemic challenges discussed in Section 3, directly answering the question of how abstract goals like ‘Equity’ or ‘Trustworthiness’ can be implemented in real-world scenarios.
As demonstrated in Table 2, responsibility for the ethical and effective deployment of AI is distributed across the entire ecosystem. Progress does not depend on a single entity but on the coordinated actions of technology developers who build with integrity, utilities that deploy with equity, regulators who govern with foresight, and policymakers who foster an environment of adaptive and collaborative innovation. This matrix serves as a starting point for these actors to define their roles and hold each other accountable in the collective pursuit of a responsible AI-driven energy future.

4.4. Critical Success Factors for Implementation

The successful implementation of the proposed framework depends on more than just process adherence; it requires fostering a supportive ecosystem defined by several critical success factors. Grounded in the established organizational and strategic literature, these factors are the essential enablers for translating this framework from theory into practice.
  • Executive Leadership and Strategic Alignment: Successful AI adoption must be championed from the top and explicitly aligned with the organization’s core strategy. Leadership must frame AI not as an isolated IT project, but as a source of strategic value and transformation that warrants long-term investment, even when immediate ROI is uncertain [13]. This strategic alignment ensures that key principles like Equity and Sustainability are treated as core business objectives rather than secondary concerns, a concept central to leveraging technology for organizational transformation [82].
  • Multi-disciplinary Teams and a Culture of Responsibility: The identified challenges are inherently socio-technical and cannot be solved by data scientists alone. Assembling cross-functional teams that include domain experts (e.g., grid engineers), social scientists, and ethicists is crucial for navigating complexity and building a culture of responsibility [7]. Such collaboration fosters a “system-wide” perspective on error and ethical responsibility, moving beyond purely technical solutions to address the systemic issues inherent in large-scale technological systems [60].
  • Robust Data Governance and Infrastructure: Data are the lifeblood of any AI system, making robust, enterprise-wide data governance a non-negotiable prerequisite. The quality, integrity, and accessibility of data form the technical foundation upon which the principle of Trustworthiness is built [83]. Without a solid data infrastructure and clear governance policies, even the most advanced models are likely to fail or produce biased outcomes, a challenge consistently highlighted in the energy context [5,11].
  • Transparent Communication and Stakeholder Engagement: Building Trustworthiness and ensuring Public Acceptance depend on proactive and transparent communication with all stakeholders—including employees, customers, regulators, and local communities [13]. Moving beyond one-way information disclosure to genuine co-creation and partnership models, as advocated by international bodies, is essential for aligning AI systems with societal values and expectations [53,71].
  • Dynamic Capabilities and an Iterative Perspective: In a rapidly evolving technological and market landscape, organizations must cultivate “dynamic capabilities”—the ability to sense, seize, and reconfigure resources to adapt to change [84]. This means resisting a rigid, one-time deployment mindset in favor of a long-term, iterative perspective. Embracing experimentation through tools like regulatory sandboxes and committing to continuous learning from real-world feedback are the essence of sustainable innovation and Collaborative Adaptation in a complex field [49,52].
Ultimately, these success factors create the fertile ground upon which the framework can move from a theoretical construct to a lived reality. They are the organizational and cultural pillars that support the principles, processes, and actions needed to guide a truly responsible and beneficial integration of AI in the energy sector.

5. Conclusions and Future Directions

5.1. Conclusions

This systematic review has mapped the multifaceted and interconnected challenges hindering the adoption of AI in the energy sector. The analysis reveals that these obstacles form a complex socio-technical web, where technical vulnerabilities, economic uncertainties, operational risks, and profound socio-ethical dilemmas are deeply intertwined. The central argument of this paper is that prevailing approaches, which often focus on isolated technological fixes, are insufficient for navigating this complexity.
To address this gap, this study developed an integrative and actionable framework for the responsible deployment of AI. The core contribution is a practical tool designed to guide stakeholders through the intricacies of AI adoption. This framework is built upon four foundational principles—Trustworthiness, Sustainability, Equity, and Collaborative Adaptation—and is operationalized through a four-phased implementation process and a stakeholder-specific action matrix. By providing a structured methodology that embeds ethical and social considerations into the entire AI lifecycle, this framework offers a more robust pathway to harnessing AI’s transformative potential.
Ultimately, the strategic integration of AI is not merely a technical exercise but a profound socio-technical transition critical to achieving a sustainable and just energy future. Adopting a holistic approach, as outlined in this framework, is essential for mitigating systemic risks, building public trust, and ensuring that AI-driven energy innovation aligns with global imperatives such as the UN Sustainable Development Goals.

5.2. Future Directions

The insights and framework developed in this review open up several critical avenues for future research. To move from conceptualization to empirical validation, future work should prioritize the following specific and actionable research questions:
  • Empirical Validation of the Framework: The most pressing need is to empirically test the proposed framework’s effectiveness. Future research could conduct comparative case studies of two similar AI deployment projects—one explicitly using this framework’s process and matrix, and one not—to quantitatively assess its impact on project timelines, budget adherence, and stakeholder satisfaction. A key research question would be: To what extent does applying the Stakeholder Action Matrix (Table 2) in the scoping phase reduce downstream ethical and operational risks compared to traditional project management approaches?
  • Developing Domain-Specific “Green AI” Metrics: While the principle of “Green AI” is established, standardized metrics are lacking. Research is needed to move beyond general estimates of AI’s carbon footprint. This includes developing and validating domain-specific Life Cycle Assessment (LCA) models for common energy AI applications (e.g., wind forecasting vs. seismic imaging) and testing the hypothesis that using energy-efficient algorithms can reduce a model’s inference-related energy consumption by over 50% with a negligible drop in predictive accuracy.
  • Analyzing the Efficacy of Adaptive Governance Models: The framework advocates for adaptive governance, but the effectiveness of different models is unclear. Future work should analyze the outcomes of AI projects developed within “regulatory sandboxes” [52] to identify which specific governance mechanisms (e.g., mandatory third-party audits vs. co-regulatory bodies) best foster innovation while ensuring safety and public accountability.
  • Modeling “Just Transition” Pathways: Addressing the workforce transition requires more granular analysis. Future research should move beyond general impact assessments to develop predictive models for skill demand in specific energy roles (e.g., grid operator, PV technician). Subsequently, empirical studies could test the effectiveness of different reskilling program designs—such as public–private partnerships versus corporate-led initiatives—on long-term employee retention and career progression in an AI-driven energy sector.
Addressing these focused research questions will be crucial for moving from identifying challenges to implementing effective, responsible, and sustainable AI solutions across the global energy landscape.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/su17135764/s1.

Funding

This work was supported by Hankuk University of Foreign Studies Research Fund of 2025 (HUFS-25-04).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The publications analyzed in this study were identified using Google Scholar. The full-text manuscripts are available from the respective journal publishers’ websites (e.g., ScienceDirect, IEEE Xplore) or academic databases.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Zhao, H. Intelligent management of industrial building energy saving based on artificial intelligence. Sustain. Energy Technol. Assess. 2023, 56, 103087. [Google Scholar] [CrossRef]
  2. Pham, H.; Nong, D.; Simshauser, P.; Nguyen, G.H.; Duong, K.T. Artificial intelligence (AI) development in the Vietnam’s energy and economic systems: A critical review. J. Clean. Prod. 2024, 446, 140692. [Google Scholar] [CrossRef]
  3. Koroteev, D.; Tekic, Z. Artificial intelligence in oil and gas upstream: Trends, challenges, and scenarios for the future. Energy AI 2021, 3, 100041. [Google Scholar] [CrossRef]
  4. Lipu, M.H.; Miah, M.S.; Hannan, M.A.; Hussain, A.; Sarker, M.R.; Ayob, A.; Mahmud, M.S. Artificial intelligence based hybrid forecasting approaches for wind power generation: Progress, challenges and prospects. IEEE Access 2021, 9, 102460–102489. [Google Scholar] [CrossRef]
  5. IEA. Energy and AI; International Energy Agency: Paris, France, 2025. [Google Scholar]
  6. Che, E.E.; Abeng, K.R.; Iweh, C.D.; Tsekouras, G.J.; Fopah-Lele, A. The Impact of Integrating Variable Renewable Energy Sources into Grid-Connected Power Systems: Challenges, Mitigation Strategies, and Prospects. Energies 2025, 18, 689. [Google Scholar] [CrossRef]
  7. Chen, C.F.; Napolitano, R.; Hu, Y.; Kar, B.; Yao, B. Addressing machine learning bias to foster energy justice. Energy Res. Soc. Sci. 2024, 116, 103653. [Google Scholar] [CrossRef]
  8. Ahl, A. AI Joins the Front Lines in Battle to Clean Up Power Grids. Bloom. New Energy Financ. 2024. Available online: https://www.bnef.com/insights/33221 (accessed on 10 April 2025).
  9. Ahl, A. 2024 Digital Trends in Power. Bloom. New Energy Financ. 2024. Available online: https://www.bnef.com/insights/34889 (accessed on 10 April 2025).
  10. Mellit, A.; Kalogirou, S. Artificial intelligence and internet of things to improve efficacy of diagnosis and remote sensing of solar photovoltaic systems: Challenges, recommendations and future directions. Renew. Sustain. Energy Rev. 2021, 143, 110889. [Google Scholar] [CrossRef]
  11. Afridi, Y.S.; Ahmad, K.; Hassan, L. Artificial intelligence based prognostic maintenance of renewable energy systems: A review of techniques, challenges, and future research directions. Int. J. Energy Res. 2022, 46, 21619–21642. [Google Scholar] [CrossRef]
  12. WEF. Why We Need to Power Cyber Resilience in the Energy Sector; World Economic Forum: Cologny, Switzerland; Available online: https://www.weforum.org/stories/2025/05/powering-cyber-resilience-energy-sector/ (accessed on 10 May 2025).
  13. Park, C.; Kim, M. Utilization and challenges of artificial intelligence in the energy sector. Energy Environ. 2024, 2024, 0958305X241258795. [Google Scholar] [CrossRef]
  14. DNV. AI Brings Huge Opportunities and New but Manageable Risks for the Energy Industry. Available online: https://www.dnv.com/article/ai-brings-huge-opportunities-and-new-but-manageable-risks-for-the-energy-industry/ (accessed on 9 May 2025).
  15. WEF. AI’s Energy Dilemma: Challenges, Opportunities, and a Path Forward; World Economic Forum: Cologny, Switzerland; Available online: https://www.weforum.org/stories/2025/01/ai-energy-dilemma-challenges-opportunities-and-path-forward/ (accessed on 10 May 2025).
  16. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Moher, D. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  17. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
  18. Ahmad, T.; Zhang, D.; Huang, C.; Zhang, H.; Dai, N.; Song, Y.; Chen, H. Artificial intelligence in sustainable energy industry: Status Quo, challenges and opportunities. J. Clean. Prod. 2021, 289, 125834. [Google Scholar] [CrossRef]
  19. Zhao, Y.; Li, T.; Zhang, X.; Zhang, C. Artificial intelligence-based fault detection and diagnosis methods for building energy systems: Advantages, challenges and the future. Renew. Sustain. Energy Rev. 2019, 109, 85–101. [Google Scholar] [CrossRef]
  20. Guo, W.; Qureshi, N.M.F.; Jarwar, M.A.; Kim, J.; Shin, D.R. AI-oriented smart power system transient stability: The rationality, applications, challenges and future opportunities. Sustain. Energy Technol. Assess. 2023, 56, 102990. [Google Scholar] [CrossRef]
  21. Liu, Z.; Guo, H.; Zhang, Y.; Zuo, Z. A Comprehensive Review of Wind Power Prediction Based on Machine Learning: Models, Applications, and Challenges. Energies 2025, 18, 350. [Google Scholar] [CrossRef]
  22. Khan, M.A.; Saleh, A.M.; Waseem, M.; Sajjad, I.A. Artificial Intelligence Enabled Demand Response: Prospects and Challenges in Smart Grid Environment. IEEE Access 2022, 11, 1477–1505. [Google Scholar] [CrossRef]
  23. Liu, C.; Yang, S.; Hao, T.; Song, R. Service risk of energy industry international trade supply chain based on artificial intelligence algorithm. Energy Rep. 2022, 8, 13211–13219. [Google Scholar] [CrossRef]
  24. Mengidis, N.; Tsikrika, T.; Vrochidis, S.; Kompatsiaris, I. Blockchain and AI for the next generation energy grids: Cyber-security challenges and opportunities. Inf. Secur. 2019, 43, 21–33. [Google Scholar]
  25. Machlev, R.; Heistrene, L.; Perl, M.; Levy, K.Y.; Belikov, J.; Mannor, S.; Levron, Y. Explainable Artificial Intelligence (XAI) techniques for energy and power systems: Review, challenges and opportunities. Energy AI 2022, 9, 100169. [Google Scholar] [CrossRef]
  26. Nguyen, V.N.; Tarełko, W.; Sharma, P.; El-Shafay, A.S.; Chen, W.H.; Nguyen, P.Q.P.; Hoang, A.T. Potential of explainable artificial intelligence in advancing renewable energy: Challenges and prospects. Energy Fuels 2024, 38, 1692–1712. [Google Scholar] [CrossRef]
  27. Ahmad, T.; Madonski, R.; Zhang, D.; Huang, C.; Mujeeb, A. Data-driven probabilistic machine learning in sustainable smart energy/smart energy systems: Key developments, challenges, and future research opportunities in the context of smart grid paradigm. Renew. Sustain. Energy Rev. 2022, 160, 112128. [Google Scholar] [CrossRef]
  28. Mitchell, D.; Blanche, J.; Harper, S.; Lim, T.; Gupta, R.; Zaki, O.; Flynn, D. A review: Challenges and opportunities for artificial intelligence and robotics in the offshore wind sector. Energy AI 2022, 8, 100146. [Google Scholar] [CrossRef]
  29. Rinku; Singh, G. Artificial intelligence in sustainable energy industry: Status quo, challenges, and opportunities. EPRA Int. J. Multidiscip. Res. 2023, 9, 234–237. [Google Scholar] [CrossRef]
  30. Pandey, D.K.; Hynjra, A.I.; Bhaskar, R.; Al-Faryan, M.A.S. Artificial intelligence, machine learning and big data in natural resources management: A comprehensive bibliometric review of literature spanning 1975–2022. Resour. Policy 2023, 86, 104250. [Google Scholar] [CrossRef]
  31. Zhang, G.; Liu, J.; Pan, X.; Abed, A.M.; Le, B.N.; Ali, H.E.; Ge, Y. Latest avenues and approaches for biohydrogen generation from algal towards sustainable energy optimization: Recent innovations, artificial intelligence, challenges, and future perspectives. Int. J. Hydrogen Energy 2023, 48, 20988–21003. [Google Scholar] [CrossRef]
  32. Velpandian, M.; Basu, S. Unlocking new horizons, challenges of integrating machine learning to energy conversion and storage research. Indian Chem. Eng. 2025, 1–18. [Google Scholar] [CrossRef]
  33. Allal, Z.; Noura, H.N.; Salman, O.; Chahine, K. Machine learning solutions for renewable energy systems: Applications, challenges, limitations, and future directions. J. Environ. Manag. 2024, 354, 120392. [Google Scholar] [CrossRef]
  34. Kumari, A.; Gupta, R.; Tanwar, S.; Kumar, N. Blockchain and AI amalgamation for energy cloud management: Challenges, solutions, and future directions. J. Parallel Distrib. Comput. 2020, 143, 148–166. [Google Scholar] [CrossRef]
  35. Werbos, P.J. AI intelligence for the grid 16 years later: Progress, challenges and lessons for other sectors. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar]
  36. Alam, M.M.; Hossain, M.J.; Habib, M.A.; Arafat, M.Y.; Hannan, M.A. Artificial intelligence integrated grid systems: Technologies, potential frameworks, challenges, and research directions. Renew. Sustain. Energy Rev. 2025, 211, 115251. [Google Scholar] [CrossRef]
  37. Ifaei, P.; Nazari-Heris, M.; Charmchi, A.S.T.; Asadi, S.; Yoo, C. Sustainable energies and machine learning: An organized review of recent applications and challenges. Energy 2023, 266, 126432. [Google Scholar] [CrossRef]
  38. Liu, Z.; Sun, Y.; Xing, C.; Liu, J.; He, Y.; Zhou, Y.; Zhang, G. Artificial intelligence powered large-scale renewable integrations in multi-energy systems for carbon neutrality transition: Challenges and future perspectives. Energy AI 2022, 10, 100195. [Google Scholar] [CrossRef]
  39. Shi, Z.; Yao, W.; Li, Z.; Zeng, L.; Zhao, Y.; Zhang, R.; Wen, J. Artificial intelligence techniques for stability analysis and control in smart grids: Methodologies, applications, challenges and future directions. Appl. Energy 2020, 278, 115733. [Google Scholar] [CrossRef]
  40. Strielkowski, W.; Vlasov, A.; Selivanov, K.; Muraviev, K.; Shakhnov, V. Prospects and challenges of the machine learning and data-driven methods for the predictive analysis of power systems: A review. Energies 2023, 16, 4025. [Google Scholar] [CrossRef]
  41. Boedijanto, F.J.O.; Delina, L.L. Potentials and challenges of artificial intelligence-supported greenwashing detection in the energy sector. Energy Res. Soc. Sci. 2024, 115, 103638. [Google Scholar] [CrossRef]
  42. de Vries, A. The growing energy footprint of artificial intelligence. Joule 2023, 7, 2191–2194. [Google Scholar] [CrossRef]
  43. Schwartz, R.; Dodge, J.; Smith, N.A.; Etzioni, O. Green AI. Commun. ACM 2020, 63, 54–63. [Google Scholar] [CrossRef]
  44. IEA. Electricity 2024: Analysis and Forecast to 2026; International Energy Agency: Paris, France, 2024. [Google Scholar]
  45. Asimopoulos, D.C.; Radoglou-Grammatikis, P.; Makris, I.; Mladenov, V.; Psannis, K.E.; Goudos, S.; Sarigiannidis, P. Breaching the Defense: Investigating FGSM and CTGAN Adversarial Attacks on IEC 60870-5-104 AI-enabled Intrusion Detection Systems. In Proceedings of the 18th International Conference on Availability, Reliability and Security, Benevento, Italy, 29 August–1 September 2023; pp. 1–8. [Google Scholar]
  46. European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union LEGISLATIVE acts. COM (2021) 206 Final. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52021PC0206 (accessed on 20 April 2025).
  47. Samek, W.; Montavon, G.; Lapuschkin, S.; Anders, C.J.; Müller, K.R. Explaining deep neural networks and beyond: A review of methods and applications. Proc. IEEE 2021, 109, 247–278. [Google Scholar] [CrossRef]
  48. IEA. Electricity Grids and Secure Energy Transitions; International Energy Agency: Paris, France, 2023. [Google Scholar]
  49. Beck, R.; Dibbern, J.; Wiener, M. A multi-perspective framework for research on (sustainable) autonomous systems. Bus. Inf. Syst. Eng. 2022, 64, 265–273. [Google Scholar] [CrossRef]
  50. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  51. IEA. How Governments Support Clean Energy Start-Ups; International Energy Agency: Paris, France, 2022. [Google Scholar]
  52. Dimović, Z. Privacy and Data Protection Concerns in the Regulatory Framework of Slovenian Energy Law. Lexonomica 2023, 15, 53–76. [Google Scholar] [CrossRef]
  53. OECD. Artificial Intelligence in Society; OECD Publishing: Paris, France, 2021. [Google Scholar]
  54. Castro, D. Rethinking Concerns About AI’s Energy Use; Center for Data Innovation: Washington, DC, USA, 2024. [Google Scholar]
  55. Verdecchia, R.; Sallou, J.; Cruz, L. A systematic review of Green AI. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2023, 13, e1507. [Google Scholar] [CrossRef]
  56. Kwon, S. Ensuring renewable energy utilization with quality of service guarantee for energy-efficient data center operations. Appl. Energy 2020, 276, 115424. [Google Scholar] [CrossRef]
  57. Li, B. The role of financial markets in the energy transition: An analysis of investment trends and opportunities in renewable energy and clean technology. Environ. Sci. Pollut. Res. 2023, 30, 97948–97964. [Google Scholar] [CrossRef]
  58. Chauhan, N.; Kaur, N.; Saini, K.S. Energy Efficient Resource Allocation in Cloud Data Center: A Comparative Analysis. In Proceedings of the 2022 International Conference on Computational Modelling, Simulation and Optimization (IC-CMSO), Pattaya, Thailand, 9–11 December 2022; pp. 201–206. [Google Scholar]
  59. Zirar, A.; Ali, S.I.; Islam, N. Worker and workplace Artificial Intelligence (AI) coexistence: Emerging themes and research agenda. Technovation 2023, 124, 102747. [Google Scholar] [CrossRef]
  60. Weinstein, J.; Reich, R.; Sahami, M. System Error: Where Big Tech Went Wrong and How We Can Reboot; Hachette: London, UK, 2021. [Google Scholar]
  61. Bila, S. Strategic priorities of social production digitalization: World experience. Univ. Econ. Bull. 2021, 48, 40–55. [Google Scholar] [CrossRef]
  62. Huriye, A.Z. The Ethics of Artificial Intelligence: Examining the Ethical Considerations Surrounding the Development and Use of AI. Am. J. Technol. 2023, 2, 37–44. [Google Scholar] [CrossRef]
  63. Gebru, T. Race and gender. In The Oxford Handbook of Ethics of AI; Dubber, M.D., Pasquale, F., Das, S., Eds.; Oxford University Press: New York, NY, USA, 2020; pp. 251–269. [Google Scholar]
  64. Chen, J.Y. Transparent Human–Agent Communications. Int. J. Hum. Comput. Interact. 2022, 38, 1737–1738. [Google Scholar] [CrossRef]
  65. Kablo, E.; Arias-Cabarcos, P. Privacy in the Age of Neurotechnology: Investigating Public Attitudes towards Brain Data Collection and Use. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, Copenhagen, Denmark, 26–30 November 2023; pp. 225–238. [Google Scholar]
  66. Pradeep, A.; Bakoev, M.; Akhroljonova, N. A Reliability Analysis of Self-Driving Vehicles: Evaluating the Safety and Performance of Autonomous Driving Systems. In Proceedings of the 2023 15th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), Ploiesti, Romania, 29 June–1 July 2023; pp. 1–5. [Google Scholar]
  67. Bignami, E.; Montomoli, J.; Bellini, V.; Cascella, M. Uncovering the power of synergy: A hybrid human–machine model for maximizing AI properties and human expertise. Crit. Care 2023, 27, 330. [Google Scholar] [CrossRef] [PubMed]
  68. Järvelä, S.; Nguyen, A.; Hadwin, A. Human and artificial intelligence collaboration for socially shared regulation in learning. Br. J. Educ. Technol. 2023, 54, 1057–1076. [Google Scholar] [CrossRef]
  69. Lew, R.; Boring, R.; Ulrich, T. Envisioning 21st Century Mixed-Initiative Operations for Energy Systems. In Human Factors and Systems Interaction; Nunes, I.L., Ed.; AHFE International: San Diego, CA, USA, 2022; Volume 52. [Google Scholar]
  70. Radanliev, P. AI Ethics: Integrating Transparency, Fairness, and Privacy in AI Development. Appl. Artif. Intell. 2025, 39, 2463722. [Google Scholar] [CrossRef]
  71. Susha, I.; Rukanova, B.; Zuiderwijk, A.; Gil-Garcia, J.R.; Hernandez, M.G. Achieving voluntary data sharing in cross sector partnerships: Three partnership models. Inf. Organ. 2023, 33, 100448. [Google Scholar] [CrossRef]
  72. Nadkarni, S.; Narayanan, V.K. Strategic schemas, strategic flexibility, and firm performance: The moderating role of industry clockspeed. Strateg. Manag. J. 2007, 28, 243–270. [Google Scholar] [CrossRef]
  73. Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; Srikumar, M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based APPROACHES to Principles for AI; Berkman Klein Center Research Publication: Cambridge, MA, USA, 2020. [Google Scholar]
  74. Ashour, M.A.H. Optimizing Gas Production Forecasting in Iraq Using a Hybrid Artificial Intelligence Model. In Proceedings of the 2023 IEEE 14th Control and System Graduate Research Colloquium (ICSGRC), Shah Alam, Malaysia, 11 July 2023; pp. 220–223. [Google Scholar]
  75. Balboni, P.; Botsi, A.; Francis, K.; Barata, M.T. Designing Connected and Automated Vehicles around Legal and Ethical Concerns: Data Protection as a Corporate Social Responsibility. In Proceedings of the SETN 2020 Workshops, Athens, Greece, 2–3 June 2020; pp. 139–151. [Google Scholar]
  76. Bondok, A.H.; Mahmoud, M.; Badr, M.M.; Fouda, M.M.; Abdallah, M.; Alsabaan, M. Novel Evasion Attacks against Adversarial Training Defense for Smart Grid Federated Learning. IEEE Access 2023, 11, 112953–112972. [Google Scholar] [CrossRef]
  77. Salh, A.; Ngah, R.; Audah, L.; Kim, K.S.; Abdullah, Q.; Al-Moliki, Y.M.; Talib, H.N. Energy-Efficient Federated Learning with Resource Allocation for Green IoT Edge Intelligence in B5G. IEEE Access 2023, 11, 16353–16367. [Google Scholar] [CrossRef]
  78. Ermolieva, T.; Ermoliev, Y.; Zagorodny, A.; Bogdanov, V.; Borodina, O.; Havlik, P.; Zaslavskyi, V. Artificial Intelligence, Machine Learning, and Intelligent Decision Support Systems: Iterative “Learning” SQG-based procedures for Distributed Models’ Linkage. Artif. Intell. J. 2022, 94, 92–97. [Google Scholar]
  79. Kreuzberger, D.; Kühl, N.; Hirschl, S. Machine learning operations (mlops): Overview, definition, and architecture. IEEE Access 2023, 11, 31866–31879. [Google Scholar] [CrossRef]
  80. Tasnim, N.; Al Mamun, S.; Shahidul Islam, M.; Kaiser, M.S.; Mahmud, M. Explainable Mortality Prediction Model for Congestive Heart Failure with Nature-Based Feature Selection Method. Appl. Sci. 2023, 13, 6138. [Google Scholar] [CrossRef]
  81. Mitchell, M.; Wu, S.; Zaldivar, A.; Barnes, P.; Vasserman, L.; Hutchinson, B.; Spitzer, E.; Raji, I.D.; Gebru, T. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability and Transparency, Atlanta, GA, USA, 29–31 January 2019; pp. 220–229. [Google Scholar]
  82. Henderson, J.C.; Venkatraman, H. Strategic alignment: Leveraging information technology for transforming organizations. IBM Syst. J. 1999, 38, 472–484. [Google Scholar] [CrossRef]
  83. Khatri, V.; Brown, C.V. Designing data governance. Commun. ACM 2010, 53, 148–152. [Google Scholar] [CrossRef]
  84. Teece, D.J.; Pisano, G.; Shuen, A. Dynamic capabilities and strategic management. Strateg. Manag. J. 1997, 18, 509–533. [Google Scholar] [CrossRef]
Table 1. Classification of AI challenges in the energy sector.
Major CategorySpecific SubtopicKey References and Main Focus
1. Technical(a) Data Quality and NoiseAfridi et al. [11] [Remote renewable energy (RE) sites] → Incomplete operational data
Ahmad et al. [18] [Sustainable energy] → Large-scale dataset bias
Zhao et al. [19] [Building energy] → Sensor noise, heterogeneity
(b) HPC and Computational OverheadGuo et al. [20] [Power system transient stability] → high-performance computing (HPC) overhead
Koroteev and Tekic [3] [Oil/gas upstream] → HPC for seismic analysis
Liu, Z. et al. [21] [Wind forecasting] → Complexity of large-scale simulations
(c) CybersecurityKhan et al. [22] [Smart grid demand response (DR)] → Cyber vulnerabilities
Liu, C. et al. [23] [Energy trade supply chain] → Data fragmentation and cyber threats
Mengidis et al. [24] [Blockchain+AI in next-gen grids] → Real-time security
(d) Explainability (XAI)Machlev et al. [25] [Grid operation] → Black-box ML hamper operator trust
Nguyen et al. [26] [RE forecasting] → Need standardized explainable AI (XAI) metrics
(e) Model Complexity and Advanced Fault HandlingLipu et al. [4] [Wind forecasting] → Environmental variability and drift
Mellit and Kalogirou [10] [Solar Photovoltaic (PV) + Internet of things (IoT)/AI] → Need for cost-effective fault detection and diagnosis (FDD) (multiple-fault detection, drone-based fault localization, fault prediction) in large-scale PV operations
Liu, Z. et al. [21] [Multi-energy] → Risk of overfitting with spatiotemporal data
Ahmad et al. [27] [Probabilistic machine learning, smart grids] → Overfitting in noisy data
2. Economic/
Environmental
(a) High Costs/capital expenditures (CAPEX)Mellit and Kalogirou [10] [Solar PV + IoT] → Cost-effective IoT+AI systems needed for PV maintenance
Park and Kim [13] [General energy] → Lack of AI certification ↑ cost risk
Mitchell et al. [28] [Offshore wind + robotics] → High operational and management costs, uncertain return on investment
Rinku and Singh [29] [RE] → Capital-intensive AI in small economies
(b) ROI UncertaintyPandey et al. [30] [Resource management] → Unclear returns for AI
Zhang et al. [31] [Biohydrogen] → Lab-to-market viability
(c) Policy and Funding GapsPark and Kim [13] [General energy] → No AI certification frameworks
Ahmad et al. [18] [Sustainable energy] → Lack of standard policy for AI
Liu, Z. et al. [21] [Large-scale RE] → Multi-operator funding complexities
Velpandian and Basu [32] [Energy Conversion and Storage] → High R&D cost, limited support
(d) AI Energy Consumption and Carbon Footprint
Lipu et al. [4] [Wind forecasting] → Additional computing cost for ensemble/hybrid models
Ahmad et al. [27] [Smart grids] → HPC usage can raise carbon emissions
3. Operational/Strategic
(a) Real-Time Integration
Allal et al. [33] [RE] → Unpredictable supply, standardization deficits
Kumari et al. [34] [Energy cloud management] → Real-time operation, blockchain interoperability
Werbos [35] [Neural network-based load forecasting] → Legacy infrastructure hampers adoption
(b) Multi-Energy Coordination
Alam et al. [36] [AI-powered grid] → Interoperability
Ifaei et al. [37] [Multi-carrier systems] → Spatiotemporal data
Liu, Z. et al. [38] [Large-scale RE] → Electricity/gas/heat balancing
(c) Integration Risks
Liu, C. et al. [23] [Supply chain] → Complex regulations, data fragmentation
Shi et al. [39] [Smart grid stability] → Dynamic security
Strielkowski et al. [40] [Predictive analysis] → HPC overhead for big time-series
(d) Novel Tech Transitions
Zhang et al. [31] [Biohydrogen] → Infrastructure integration
Velpandian and Basu [32] [Energy conversion/storage] → Transfer learning limits
Boedijanto and Delina [41] [Greenwashing detection] → Potential new AI misuse
4. Labor/Social
(a) Workforce Skills
Afridi et al. [11] [Prognostic maintenance] → Domain knowledge gap
Ahmad et al. [27] [Smart grids] → Need advanced data-science skill sets
Rinku and Singh [29] [RE] → Limited AI expertise in RE
(b) Ethics and Bias
Mellit and Kalogirou [10] [Solar PV + IoT] → Data privacy and transparency
Nguyen et al. [26] [RE forecasting] → Bias → unfair outcomes
Boedijanto and Delina [41] [Environment, Social, Governance (ESG)] → AI-driven greenwashing
(c) Public Acceptance
Park and Kim [13] [Energy AI adoption] → Trust and acceptance issues
Boedijanto and Delina [41] [Greenwashing detection] → Transparency concerns
(d) Safety and Compliance
Koroteev and Tekic [3] [Oil/gas drilling] → Safety-critical ML deployment
Khan et al. [22] [Automated DR] → Safety protocols, standardization
Mitchell et al. [28] [Offshore wind robots] → Health/safety compliance in harsh environments
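The explainability gap noted in rows (d) above [25,26] is often addressed with simple model-agnostic metrics such as permutation importance. As a hedged illustration only — the toy "forecast model", its weights, and the feature names below are hypothetical, not taken from any cited work — a minimal pure-Python sketch:

```python
import random

# Toy "forecast model": a fixed linear combination standing in for a trained
# wind-power predictor (illustrative weights only).
def forecast(wind_speed, temperature, humidity):
    return 3.0 * wind_speed + 0.2 * temperature + 0.01 * humidity

def mse(model, rows, targets):
    return sum((model(*r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature_idx, seed=0):
    """Increase in MSE when one input column is shuffled: a simple,
    model-agnostic explainability signal."""
    rng = random.Random(seed)
    baseline = mse(forecast, rows, targets)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)
    shuffled = [tuple(c if i == feature_idx else v for i, v in enumerate(r))
                for r, c in zip(rows, column)]
    return mse(forecast, shuffled, targets) - baseline

# Synthetic data generated from the same linear rule plus noise.
rng = random.Random(42)
rows = [(rng.uniform(0, 25), rng.uniform(-5, 35), rng.uniform(20, 90))
        for _ in range(200)]
targets = [forecast(*r) + rng.gauss(0, 0.5) for r in rows]

scores = {name: permutation_importance(rows, targets, i)
          for i, name in enumerate(["wind_speed", "temperature", "humidity"])}
# Wind speed should dominate, matching the model's true structure — the kind
# of sanity check an operator can use to calibrate trust in a black-box model.
```

Standardizing which such metric to report, and on which data, is exactly the open XAI-metrics question Nguyen et al. [26] raise.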
Table 2. Stakeholder action matrix for responsible AI adoption.
Principle | Key Stakeholder | Actionable Steps | Potential Metrics (KPIs)
Equity | Utility Company | Mandate the use of tools like algorithmic impact assessments (AIAs) to proactively identify and mitigate biases before deploying AI for dynamic pricing or grid repair prioritization, a practice increasingly called for in energy justice research [26,64]. | Reduction in service disparity (e.g., outage duration, pricing) across demographic groups; public feedback scores from targeted community engagement.
Trustworthiness | Energy Regulator | Establish clear AI certification frameworks and liability rules for systems in safety-critical functions, a measure seen as crucial for de-risking investment and ensuring public safety [13]. Mandate independent, third-party audits for cybersecurity robustness and the implementation of user-centric explainability (XAI) features before deployment [25,45]. | Number of certified AI systems in operation; rate of AI-related safety or security incidents; average time-to-resolution for incident audits.
Sustainability | AI Technology Developer/Data Center Operator | Develop and publish standardized documentation such as “Model Cards” [81] to transparently report model performance, limitations, and estimated lifecycle energy consumption, addressing calls for greater accountability for AI’s environmental impact [5,42]. Prioritize and invest in energy-efficient algorithms and hardware, adopting “Green AI” principles to actively reduce computational overhead [43,54]. | CO2e per 1000 model inferences; model accuracy per watt; data center power usage effectiveness (PUE); percentage of energy sourced from renewables.
Collaborative Adaptation | Policymaker/Government | Create and fund national or regional “AI in Energy” regulatory sandboxes to allow for controlled experimentation with novel applications, fostering an evidence-based approach to agile governance [52]. Foster public–private partnerships focused on creating targeted reskilling and upskilling programs for the energy workforce, a key strategy for ensuring a just transition [53,60]. | Number of successful pilot projects graduated from sandboxes and scaled; number of workers retrained and placed in new energy sector jobs.
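The sustainability KPIs in Table 2 reduce to simple ratios once the underlying measurements exist. A minimal sketch of those computations — PUE follows its standard definition (total facility energy over IT equipment energy); all numeric inputs below are illustrative placeholders, not measured values from any data center:

```python
# Sketch of the sustainability KPIs proposed in Table 2. Every figure used
# in the demo calls is a hypothetical placeholder for illustration.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy
    (1.0 is the ideal lower bound)."""
    return total_facility_kwh / it_equipment_kwh

def co2e_per_1000_inferences(energy_kwh_per_inference: float,
                             grid_intensity_kg_per_kwh: float) -> float:
    """Kilograms of CO2-equivalent emitted per 1000 model inferences."""
    return 1000 * energy_kwh_per_inference * grid_intensity_kg_per_kwh

def accuracy_per_watt(accuracy: float, avg_power_watts: float) -> float:
    """Model accuracy normalized by average power draw during serving."""
    return accuracy / avg_power_watts

# Illustrative numbers for a hypothetical forecasting service:
kpi_pue = pue(1_200_000, 1_000_000)                 # ≈ 1.2
kpi_co2e = co2e_per_1000_inferences(0.0005, 0.4)    # ≈ 0.2 kg CO2e
kpi_apw = accuracy_per_watt(0.92, 350)
```

Reporting these alongside accuracy, as the "Model Cards" row suggests, makes the accuracy-versus-footprint trade-off auditable rather than anecdotal.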

Share and Cite

MDPI and ACS Style

Park, C. Addressing Challenges for the Effective Adoption of Artificial Intelligence in the Energy Sector. Sustainability 2025, 17, 5764. https://doi.org/10.3390/su17135764
