Systematic Review

Integrating AI in Public Governance: A Systematic Review

1 Faculty of Law, Economic and Social Sciences, Agdal, Mohammed V University, Rabat 10100, Morocco
2 Alternative Management and Ecosystem Development Research Team, Faculty of Law, Economics and Social Sciences (FSJEST), Abdelmalek Essaadi University, Tetouan 93000, Morocco
3 Governance and Economics of Sustainable Development Research Team (GE2D), Polydisciplinary Faculty of Larache (FPL), Abdelmalek Essaadi University, Tetouan 93000, Morocco
* Author to whom correspondence should be addressed.
Digital 2025, 5(4), 59; https://doi.org/10.3390/digital5040059
Submission received: 24 September 2025 / Revised: 19 October 2025 / Accepted: 22 October 2025 / Published: 3 November 2025

Abstract

Artificial intelligence is becoming a defining force in public governance, yet many institutions still struggle to adopt it in ethical, sustainable, and scalable ways. This article reports on a systematic literature review in line with PRISMA 2020 guidelines, covering 67 peer-reviewed studies published between 2014 and 2024. The review shows that AI can help public institutions work faster and more transparently, but it also reveals several common problems. Many organizations still face fragmented data, weak connections between systems, limited digital tools, a lack of staff skills, and ethical risks such as bias and privacy concerns. To address these problems, the study introduces the AI Integration Capability Model, a framework based on the Technology Acceptance Model, Digital-Era Governance, and Dynamic Capabilities theory. The model highlights four institutional pillars: data access and interoperability, digital infrastructure and redesigned processes, workforce skills and learning capacity, and leadership and management reform. Its relevance was tested through a three-round Delphi study with 15 senior experts from Moroccan public institutions, who agreed on the feasibility and urgency of all four pillars. The findings offer policymakers practical guidance for AI adoption and outline a roadmap for aligning innovation with institutional readiness and public trust.

1. Introduction

Over the past decade, artificial intelligence (AI) has progressively reshaped how public administrations operate, make decisions, and deliver services. AI is increasingly embedded in public sector operations, from traffic optimization and fraud detection to automating administrative tasks. Countries like the United States (USA), China, and Singapore have launched pilot initiatives in justice, healthcare, and urban planning [1,2]. However, many public institutions face challenges, including outdated infrastructure, fragmented digital strategies, and growing concerns over transparency, accountability, and algorithmic bias [3,4].
While interest in integrating AI into governance has expanded rapidly, existing research remains disjointed across domains and perspectives. Most studies are sector-specific or grounded in conceptual models with limited empirical validation, often neglecting the dynamic interplay between public institutions and private technology providers. They tend to explore AI’s technical, ethical, or sectoral aspects without linking them to broader institutional reforms, and few combine theoretical, managerial, and governance dimensions in a single framework. This fragmentation makes it difficult to understand how AI transforms public organizations and decision-making practices, and it underscores the need for an integrated synthesis that captures institutional readiness and supports sustainable AI adoption [5,6].
These limitations open a key research question: How is AI transforming decision-making, institutional structures, and regulatory models in public governance, and what lessons can guide its ethical and inclusive adoption?
Previous literature reviews have offered practical foundations, but they also show limitations. Sharma et al. [6] proposed a broad theoretical agenda for AI in governance, but lacked concrete examples of institutional change. Zuiderwijk et al. [7] focused on ethical and procedural issues but concentrated mainly on Western countries and did not explore how strategies vary globally.
Previous research has provided valuable insights into how AI supports efficiency, transparency, and innovation in the public sector. However, these studies often remain limited in scope and lack a comprehensive understanding of the institutional and governance mechanisms that enable ethical and sustainable AI use. Building on these contributions, this study aims to provide a broader, more integrated perspective that connects practice, perception, and policy within public governance systems.
This article addresses those gaps through a systematic literature review (SLR) of 67 peer-reviewed articles published between 2014 and 2024. It offers a dual-level analysis: the Technology Acceptance Model (TAM) explores user-level factors like trust and perceived usefulness, while the Digital-Era Governance (DEG) framework captures broader organizational reforms. These two perspectives are combined through the Dynamic Capabilities (DC) lens to assess how institutions adapt, learn, and transform in the face of digital change.
Building on the review’s findings, this article introduces the AI Integration Capability Model (AICM), which conceptualizes four institutional capabilities essential for ethical, inclusive, and scalable AI adoption: access to quality and interoperable data, modern digital infrastructure and process redesign, staff competencies and learning agility, and institutional leadership and change management.
The model was tested through a structured Delphi study involving fifteen senior experts from Morocco’s public administration to assess its practical relevance. The results confirmed the model’s applicability and revealed key strategic priorities for governments with evolving digital ecosystems.
Based on the theoretical frameworks discussed (TAM, DEG, and DC) and the gaps identified in the literature, this study formulates four research hypotheses that guide the analysis. These hypotheses reflect the institutional conditions that support ethical and practical AI integration in public governance.
Hypothesis 1 (H1).
The availability of high-quality and interoperable data positively influences the adoption and scalability of AI systems in public institutions.
Hypothesis 2 (H2).
Advanced digital infrastructure and redesigned administrative processes strengthen institutional efficiency and facilitate AI implementation.
Hypothesis 3 (H3).
Workforce competencies and learning agility enhance trust, acceptance, and the effective use of AI technologies within public organizations.
Hypothesis 4 (H4).
Institutional leadership and change management capabilities are critical in ensuring ethical oversight and sustainable AI-driven transformation.
These hypotheses connect individual acceptance factors (TAM), institutional reforms (DEG), and strategic adaptability (DC), forming the analytical basis for the AI Integration Capability Model.
The rest of the paper is structured as follows: Section 2 reviews the theoretical foundations and previous studies that inform the background and theoretical framework; Section 3 presents the systematic literature review methodology; Section 4 analyzes statistical trends and emerging research on AI-driven governance; Section 5 reports the results and discussion; Section 6 provides the empirical findings and thematic synthesis; Section 7 introduces the AI Integration Capability Model; Section 8 validates the AICM through the Delphi study conducted in Morocco; Section 9 discusses managerial and policy implications; and Section 10 concludes with final reflections and directions for future research.

2. Background and Theoretical Framework

Public institutions increasingly adopt AI to enhance service delivery, streamline decision-making, and support evidence-based policy development [6]. In practice, it facilitates the automation of routine tasks, enables real-time data analysis, and supports predictive planning in domains such as urban development, public health, and taxation [7]. However, its integration goes beyond technical innovation; it raises crucial questions about how institutions manage information, support accountability, and maintain public trust. Table 1 summarizes the main systematic literature reviews that have examined AI integration in governance, highlighting their aims, strengths, and gaps.

2.1. Theoretical Framework: TAM, DEG, Dynamic Capabilities, and Delphi

This study draws on four complementary frameworks to analyze AI adoption in public governance: the TAM, DEG, the DC approach, and the Delphi method. They offer a multi-level perspective linking individual acceptance, institutional reform, strategic adaptation, and expert validation.
TAM [9] explains how public employees evaluate AI tools based on perceived usefulness, ease of use, and trust [10]. It informs the analysis of frontline acceptance and user-level engagement.
DEG focuses on institutional transformation, emphasizing administrative efficiency, integrated service delivery, and citizen responsiveness [11]. It guides the examination of structural and procedural adaptations to digital change.
DC theory addresses organizations’ strategic capabilities to sense, seize, and reconfigure in dynamic environments [12]. It highlights how public institutions develop agility to integrate AI despite rigidities, skill gaps, or fragmented infrastructures.
Finally, the Delphi method supports expert consensus-building in uncertain or data-scarce contexts [13]. Applied here in a three-round study with Moroccan public sector experts, it validates the AI Integration Capability Model and strengthens the study’s practical relevance.
These frameworks collectively support a layered understanding of AI adoption, from micro-level user acceptance to macro-level institutional readiness and strategic capability.
Table 2 summarizes how each theoretical framework contributes to the main dimensions of AI adoption in public governance. While TAM and DEG address user acceptance and institutional transformation, respectively, the Dynamic Capabilities framework highlights the need for strategic agility, and the Delphi method reinforces expert-informed decision-making in uncertain or rapidly evolving contexts.
In parallel, Dospinescu and Buraga [14] examined the determinants of enterprise resource planning (ERP) adoption in organizations. They demonstrated that technological, organizational, and managerial factors jointly shape institutional readiness for digital transformation. Their conclusions are consistent with the capability-based reasoning adopted in this study, reinforcing the multidimensional perspective of the AICM.

2.2. Core AI Technologies in Public Governance

AI is a technology suite that simulates human cognitive functions such as reasoning, learning, and decision-making [15]. AI covers various technologies and approaches, including Machine Learning (ML), deep learning (DL), Natural Language Processing (NLP), and computer vision [16]. ML is an indispensable tool for advancing industry efficiency and decision-making [17].
  • ML enables systems to identify patterns and make predictions from large datasets without being explicitly programmed [18]. It supports fraud detection in finance, policy forecasting, automated classification, and predictive analytics across administrative domains [19,20].
  • DL, as a specialized branch of ML, employs artificial neural networks with multiple layers to mimic the functioning of the human brain [21]. It excels at processing unstructured data, such as images and text, making it essential for applications like facial recognition, autonomous vehicle navigation, and healthcare diagnostics.
  • Generative AI refers to models designed to generate new content by learning patterns and structures from existing data. These models can create text, images, music, and more, often indistinguishable from human-created outputs [22].
  • Large language models (LLMs), based on transformer architectures such as BERT and other generative models, can process, summarize, and generate language fluently. Their applications span public administration, education, and legal analysis, providing advanced tools for communication and document management [23].
Figure 1 illustrates interconnections between AI, ML, and DL. AI represents the largest domain, with ML and DL forming specialized subsets. Generative AI and LLMs are specific applications that apply these foundational technologies to expand AI’s capabilities.
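To make the ML bullet concrete, the sketch below learns a decision rule from labelled examples rather than from hand-written logic. It is a toy nearest-centroid classifier on invented transaction data, offered purely as an illustration of "learning from data", not as a technique drawn from the reviewed studies:

```python
# Toy illustration of "learning from data without explicit programming":
# a nearest-centroid classifier fitted on tiny labelled samples.
def fit_centroids(samples):
    """samples: {label: [feature vectors]} -> {label: centroid vector}"""
    centroids = {}
    for label, vectors in samples.items():
        dim = len(vectors[0])
        centroids[label] = [
            sum(v[i] for v in vectors) / len(vectors) for i in range(dim)
        ]
    return centroids

def predict(centroids, x):
    """Assign x to the label of the nearest centroid (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical transaction features [amount, hour], labelled normal vs. fraud.
train = {"normal": [[10, 9], [12, 11]], "fraud": [[500, 3], [480, 2]]}
model = fit_centroids(train)
print(predict(model, [490, 4]))  # -> fraud
```

Real fraud-detection systems in the cited studies would rely on far richer models, but the principle is the same: the decision boundary is induced from examples, not coded by hand.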
Recent research shows that artificial intelligence increasingly converges with enterprise systems such as ERP and the Internet of Things (IoT). Jawad and Balázs [24] found that machine learning enhances ERP performance through predictive analytics, data optimization, and real-time adaptation supported by Industrial IoT. Wijesinghe et al. [25] also confirm that combining IoT and AI improves interoperability and responsiveness. These findings show that AI capability depends not only on algorithms but also on connecting diverse data sources and enabling automated learning.

3. Systematic Literature Review Methodology

The research methodology follows a five-step process (Figure 2) and builds on established protocols for systematic reviews [26]. This approach ensures analytical depth and methodological transparency. The review focuses on how AI is integrated into public governance, particularly in decision-making, transparency, ethics, and institutional transformation [27].
Before defining the mapping and research questions, a scoping review was conducted following the PRISMA 2020 protocol and the guidance for evidence-based research provided by Kitchenham and Charters [26]. It identified key themes, gaps, and issues in AI governance. The findings helped align the study’s objectives: to analyze AI integration in public governance, identify barriers and enablers, and propose an institutional readiness framework. Two questions were developed: mapping questions (MQs) to classify the literature and research questions (RQs) to guide analysis.

3.1. Mapping and Studying Questions

To guide the analysis, this study relies on two sets of structured questions: four MQs and five RQs. The MQs are designed to frame the selected literature’s scope, characteristics, and methodological patterns. In contrast, RQs allow a deeper analytical exploration of how AI is adopted in governance, focusing on efficiency, stakeholder perceptions, and ethical risks.
The Mapping Questions are as follows:
  • MQ1: What are the publication trends, thematic areas, and key sources in AI-related governance research?
  • MQ2: What types of contributions are presented in the papers?
  • MQ3: What research methodologies are used in the studies?
  • MQ4: In which governance sectors and institutional settings is AI being studied?
These MQs help categorize the literature and trace the evolution of research themes over time. Building on this foundation, the study uses five RQs to examine the challenges and opportunities of AI adoption:
  • RQ1: How does AI improve efficiency and automation in governance decision-making?
  • RQ2: What are AI integration’s main challenges, risks, and opportunities?
  • RQ3: How do institutional stakeholders such as policymakers, administrators, legal experts, and citizens perceive AI’s impact on governance processes?
  • RQ4: How does AI reshape institutional autonomy and the balance between automation and human oversight?
  • RQ5: What governance models and best international practices exist to support ethical and responsible AI adoption?
These questions form the analytical backbone of the review and ensure a balanced exploration of both technical and governance dimensions.

3.2. Search Strategy

To ensure comprehensive coverage, we used a defined search string based on Boolean logic applied across three digital databases: Scopus, the Digital Government Reference Library (DGRL), and Google Scholar (used only for supplementary searches). The strategy prioritized peer-reviewed literature relevant to AI’s role in governance from 2014 to 2024.

3.2.1. Search String

Significant terms and their synonyms were selected and combined into a search string using Boolean operators (OR and AND) based on the mapping and research questions. Table 3 outlines these terms, grouped into two main categories: the first relates to AI, while the second focuses on the concept of governance.
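The assembly logic of such a string can be sketched as follows. The terms below are illustrative placeholders, not the exact synonyms listed in Table 3:

```python
# Sketch of how a Boolean search string is assembled from two term groups
# (illustrative terms only; the actual synonyms appear in Table 3).
ai_terms = ["artificial intelligence", "machine learning", "deep learning"]
gov_terms = ["governance", "public administration", "public sector"]

def build_query(group_a, group_b):
    """Join synonyms with OR inside each group, then AND the two groups."""
    block_a = " OR ".join(f'"{t}"' for t in group_a)
    block_b = " OR ".join(f'"{t}"' for t in group_b)
    return f"({block_a}) AND ({block_b})"

query = build_query(ai_terms, gov_terms)
print(query)
```

Combining synonyms with OR inside each concept group and joining the groups with AND ensures that every retrieved record touches both the AI and the governance dimension.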

3.2.2. Search Process

After identifying the search string, a comprehensive search was conducted in December 2024 across Scopus, the Digital Government Reference Library (DGRL), and Google Scholar. In total, 1368 records were retrieved (817 from Scopus, 31 from DGRL, and 520 from Google Scholar). All records were exported into Excel, where metadata such as title, authors, publication year, abstract, source, and type were systematically stored.
Duplicate records (n = 758) were removed before screening. The remaining 610 records were screened by title and abstract, excluding 359 papers that did not meet the inclusion criteria. At the eligibility stage, 251 full-text articles were assessed. Among these, 184 papers were excluded for the following reasons: not written in English (n = 42), full text unavailable (n = 28), or focusing exclusively on technical aspects without governance relevance (n = 114). Finally, 67 studies were included in the review.
The PRISMA 2020 flow diagram (Figure 3) summarizes the records flow through the different stages of identification, screening, eligibility, and inclusion.
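The selection arithmetic above can be verified in a few lines, using only the figures reported in the text:

```python
# Recomputing the PRISMA 2020 record flow from the figures reported above.
retrieved = 817 + 31 + 520        # Scopus + DGRL + Google Scholar
duplicates = 758
screened = retrieved - duplicates          # screened by title and abstract
excluded_screening = 359
assessed = screened - excluded_screening   # full texts assessed
excluded_fulltext = 42 + 28 + 114          # language + unavailable + off-topic
included = assessed - excluded_fulltext

assert retrieved == 1368
assert screened == 610
assert assessed == 251
print(f"{included} studies included")      # 67 studies included
```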

3.3. Study Selection

This review focuses on how AI is integrated into public governance, particularly in decision-making, transparency, ethics, and institutional transformation. As shown in Table 4, specific inclusion and exclusion criteria were applied to ensure the relevance and quality of selected studies.

3.4. Quality Assessment

To assess the quality of the selected studies, we used a six-question scoring framework [28]. As shown in Table 5, the questionnaire includes six key questions designed to evaluate each paper’s relevance and reliability. This quality assessment confirms that only rigorous and meaningful studies are included in the final review. Papers must score above 3.5 out of 7 (50% of the total) to be retained, helping maintain a high standard and ensuring that the review focuses on valuable insights into AI’s impact on governance.
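The retention rule can be expressed as a simple filter. This is a minimal sketch: the paper records and scores below are hypothetical, while the 3.5-point cutoff is the one stated above:

```python
# Hypothetical quality-assessment filter: retain papers scoring above
# 3.5 of 7 points on the six-question checklist (threshold from the text).
papers = [
    {"id": "P1", "score": 5.0},
    {"id": "P2", "score": 3.0},   # below the 50% cutoff -> excluded
    {"id": "P3", "score": 4.5},
]

THRESHOLD = 3.5
retained = [p["id"] for p in papers if p["score"] > THRESHOLD]
print(retained)  # ['P1', 'P3']
```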

3.5. Data Extraction Strategy and Synthesis

A structured form captured metadata and findings aligned with the MQs and RQs. The synthesis combined three techniques: (1) vote counting to quantify themes; (2) narrative synthesis to organize qualitative insights, supported by visual tools (charts, maps); and (3) reciprocal translation to allow comparison across studies. Together, these techniques ensured thematic depth and comparative clarity.
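The vote-counting step can be sketched with the standard library. The theme labels here are illustrative, not the review's actual codes:

```python
from collections import Counter

# Each reviewed study is tagged with the themes it addresses
# (illustrative tags; the real codes follow the MQs and RQs).
study_themes = [
    ["efficiency", "ethics"],
    ["efficiency"],
    ["transparency", "ethics"],
    ["efficiency", "transparency"],
]

# Vote counting: how many studies mention each theme.
votes = Counter(theme for tags in study_themes for theme in tags)
for theme, n in votes.most_common():
    print(f"{theme}: {n} studies")
```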

3.6. Theoretical Frameworks

The study mobilizes three complementary theoretical frameworks to guide the thematic analysis conducted in this systematic literature review: the TAM, DEG, and the DC perspective. These frameworks were not applied a priori to filter studies but were used during the coding and interpretation phase, allowing for a structured, multi-level reading of the empirical material. This theoretical triangulation supports the identification and organization of themes across three analytical levels:
  • At the individual level, TAM helps assess how civil servants interact and accept AI systems, particularly regarding perceived usefulness and ease of use.
  • At the organizational level, DEG captures how public agencies evolve in structure, processes, and service delivery in response to AI technologies.
  • At the institutional level, DC theory examines how public institutions develop the capacity to sense digital opportunities, seize them strategically, and reconfigure internal resources in response to change.
This layered approach ensures conceptual consistency across the 67 reviewed articles and strengthens the interpretive rigor of the findings. Figure 4 maps each theoretical framework to the relevant level of analysis and associated thematic codes, providing a transparent analytical structure for the review process.

4. Statistical Trends in AI-Driven Governance Research

This section outlines the core characteristics of the 67 peer-reviewed articles selected for this review. The trends are organized by data source, temporal distribution, and type of contribution. These insights provide context on how academic interest in AI governance has evolved over time and across disciplines.

4.1. Data Sources and Selection

The research began with an extensive search across three primary databases commonly used in systematic reviews: Scopus, the DGRL, and Google Scholar. These databases were chosen for their comprehensive coverage of academic and peer-reviewed articles relevant to AI and governance.
After the selection and screening steps, all final articles were sourced exclusively from Scopus. This decision reflected the database’s ability to provide the highest-quality and most relevant studies for this review. Some articles from the DGRL were initially considered but excluded during the quality assessment phase due to limited applicability or overlap with Scopus findings; articles from Google Scholar were likewise excluded because they largely duplicated those retrieved from Scopus. Focusing exclusively on Scopus ensured a consistent, peer-reviewed, and relevant final dataset.
This systematic approach ensured the inclusion of high-quality research articles, forming a consistent foundation for this review.

4.2. Statistical Trends

As shown in Figure 5, the number of publications increased significantly after 2019. While early studies primarily addressed technical aspects of AI, more recent work has been oriented toward governance, ethics, and institutional implementation. In 2024 alone, 29 studies were published, confirming the growing relevance of AI as a central theme in public administration research.

4.3. Contribution Types

Figure 6 presents the typology of the reviewed literature. Hybrid studies dominate the sample (41%), combining conceptual, empirical, or technical dimensions. Review articles account for 21%, while empirical assessments and solution-oriented proposals represent smaller shares (13% and 11%, respectively). This distribution highlights a strong focus on theoretical development but also points to a gap in validation and real-world application [29]. This imbalance indicates that while theoretical contributions are growing, practical applications and tested models remain limited, reinforcing the need for frameworks like the AICM to bridge theory and institutional practice.

4.4. Research Methods

Figure 7 highlights a strong preference for qualitative methods (72%), including case studies and document analysis. Quantitative designs represent 22% of the sample, while systematic reviews remain limited (7%). This methodological imbalance suggests that the field is still formative and could benefit from more mixed-method studies to deepen empirical understanding [30]. These findings confirm that AI governance research is dominated by qualitative inquiry, which provides contextual depth but limits generalizability, supporting the need for complementary empirical validation, such as the Delphi study conducted in this paper.

4.5. Geographic Distribution

Figure 8 shows where most studies on AI governance come from, highlighting six categories: Europe, North America, Asia, the Middle East, Africa, and cross-regional work. These results reveal differences between areas and help explain how politics, policies, and resources affect AI research.
Europe leads with 36% of all studies. This is primarily due to the European Union’s focus on ethical and legal rules, such as the EU AI Act. Many European papers discuss fairness, transparency, and keeping humans in control [31]. However, this focus on ethics also challenges staying innovative while following strict rules [32].
Cross-regional studies account for 20% of the research. These studies focus not on one country but on global issues like shared rules, cooperation, and building systems that work across borders. They offer ideas for working together on ethical and technical standards for AI worldwide [33].
The United States leads North America, representing 18% of publications. This region emphasizes technological innovation and practical deployment of AI in public services, supported by significant funding, partnerships with major tech firms, and mature digital infrastructure [5]. Nonetheless, concerns persist around inclusiveness, algorithmic bias, and transparency, especially in contexts where speed of implementation may outpace regulatory oversight [34].
Asia makes up 13% of the studies, led by China. The country invests heavily in smart cities and digital government, often choosing fast deployment over openness [35]. China wants to lead in AI by 2030 [4]. Other Asian countries, like Jordan, are trying to adapt AI to local needs [36].
The Middle East and Africa contribute less, accounting for 7% and 6%, respectively. Restricted funding, infrastructure gaps, and shortages of technical expertise limit research efforts in these regions [37]. Despite these challenges, growing opportunities exist through global initiatives that encourage partnerships and resource sharing. Expanding AI research and governance efforts in these regions is critical to ensuring balanced global representation and addressing local governance needs.
These trends illustrate the dominance of high-income regions in shaping AI governance discourse and underline the importance of amplifying voices from underrepresented areas. Comparative insights across regions can inform more equitable and inclusive approaches to AI regulation. These differences also reflect varying levels of digital maturity and policy commitment: Europe and North America reached AI institutionalization earlier, while Africa and the Middle East remain in formative phases. This suggests that research activity mirrors each region’s stage of adoption rather than its overall performance in AI governance.

5. Results and Discussion

This section presents the insights from 67 peer-reviewed studies on integrating AI in governance decision-making. Findings are structured around the research questions and supported by figures and tables highlighting emerging patterns. Each part addresses a specific dimension of the analysis, ranging from AI applications and implementation stages to efficiency outcomes, risks, stakeholder roles, and governance models.

5.1. RQ1: How Does AI Improve Efficiency and Automation in Governance Decision-Making?

AI improves public governance decision-making by simplifying administrative routines, enhancing decision quality, and accelerating service delivery. Two main dimensions emerge from the analysis of the 67 studies: (i) how AI is applied across sectors, and (ii) how its impact is measured through performance metrics.

5.1.1. Areas of Application

As shown in Figure 9, predictive analytics and decision-support systems are the most frequently cited use cases, appearing in 39% of the reviewed studies. These systems help anticipate citizen needs and guide resource allocation. Robotic Process Automation (RPA), referenced in 26% of cases, streamlines repetitive administrative tasks such as document verification, form approvals, and permit processing, easing the burden on administrative staff [38,39,40].
Around 15% of the studies highlight the use of chatbots and NLP tools to improve citizen interaction and access to information [41]. AI also supports policy design (11%) and is used in education and healthcare through machine learning and deep learning applications [42]. These examples demonstrate the broad reach of AI tools beyond back-office automation, extending into strategic governance areas.

5.1.2. Efficiency Metrics

Figure 10 illustrates five core metrics through which AI enhances efficiency in public governance. The most frequently cited gain is time savings (37%), especially in healthcare and administrative tasks, where automation accelerates service delivery [43]. Cost reduction (25%) is also significant, as AI-driven processes help cut expenditures, notably in financial and budgeting systems [44]. In 20% of the studies, AI’s capacity to process vast datasets improves decision quality, with practical examples in urban mobility and infrastructure planning [22]. Error reduction (16%) contributes to more reliable services, such as tax administration [45]. Although less frequent (2%), the use of AI in detection is emerging, particularly in identifying anomalies within financial or benefits systems [43]. These metrics confirm AI’s role in building more responsive, transparent, and citizen-focused public systems [46].
The dominance of time-saving use cases indicates that most institutions are still in an early phase of AI adoption, focusing first on automating administrative routines. As digital capabilities and institutional maturity improve, more advanced applications, such as fraud detection or predictive analytics, appear later.

5.2. RQ2: What Are AI Integration’s Main Challenges, Risks, and Opportunities?

While AI integration yields significant public value, it simultaneously introduces complex institutional and ethical dilemmas and barriers. This section synthesizes the 67 reviewed studies to map out the key opportunities, challenges, and risks linked to AI adoption.

5.2.1. Opportunities for Governance

Figure 11 highlights five significant benefits associated with adopting AI in public governance. A first and widely reported opportunity is the capacity of AI to drive service innovation (38%), notably by enhancing access and participation. In Estonia, fully deploying digital public services has saved over 13 million working hours and increased voter engagement through online platforms [47]. Efficiency gains (36%) represent another clear advantage: AI tools have helped the UK reduce fraud by £2 billion, while in the US, automated systems cut processing times by 30% [48]. Transparency and accountability improvements (14%) are also significant, with real-time tracking tools reducing corruption by 32% in Estonia and increasing budget visibility by 25% in Sweden [49]. Additionally, automating routine administrative tasks (9%) has freed staff for more strategic roles and improved pandemic coordination by 40% in several countries [50]. Finally, although still limited (4%), international collaboration is emerging, as illustrated by the EU AI Act, which promotes a shared regulatory and ethical framework across borders [51]. These cases illustrate how AI can enable more responsive, coordinated, and transparent governance, provided that institutional capacities are in place to sustain and scale these innovations.

5.2.2. Key Challenges

Figure 12 highlights five significant barriers to AI adoption in public governance. Ethical concerns top the list (36%), driven by biased decision tools like COMPAS in the USA and massive data breaches, including one that exposed 30 million public records [37]. Technical challenges follow (27%), as many governments still rely on outdated infrastructure; in the EU, over 60% of municipalities lack adequate digital systems [36]. Institutional gaps in staff training come next (15%); only 25% of civil servants have received AI-related instruction [52]. Transparency remains limited (13%), with weak labeling and oversight, although initiatives like Explainable AI and impact assessments are being tested in Sweden [53]. Lastly, resistance to change (9%) reflects fears over job security, with nearly half of European public employees expressing concern [54].
Addressing these challenges requires targeted training, infrastructure investment, and institutional culture reforms. Based on the 67 studies reviewed, Table 6 summarizes the main categories of barriers and outlines strategic responses for responsible and effective AI integration, bridging the gap between observed obstacles and the practical strategies needed to overcome them.

5.2.3. Risks

As shown in Figure 13, four key risks require sustained attention. Bias and discrimination (37%) remain the primary concern, as automated systems can produce unfair outcomes that disproportionately affect vulnerable groups [60]. Loss of human oversight (29%) is another critical issue, particularly in sectors like justice, healthcare, and finance, where the absence of safeguards can amplify errors [61]. Privacy violations and data misuse (22%) raise serious ethical and legal challenges, including breaches of sensitive information such as patient records [62]. Finally, job displacement and rising inequalities (12%) highlight the risk of excluding low-skilled workers when automation advances without adequate retraining or support policies.

5.3. RQ3: How Do Institutional Stakeholders Perceive AI’s Impact on Governance Processes?

This section explores how different stakeholders view AI’s impact on governance processes. It considers their distinct roles, concerns, and levels of trust. Insights from the Technology Acceptance Model and Digital-Era Governance help explain these varied perspectives.

5.3.1. Key Stakeholders and Roles

Figure 14 presents five key stakeholder groups involved in shaping digital governance. Government officials and policymakers set the regulatory frameworks and oversee implementation. Their perception of usefulness strongly influences whether digital tools are adopted [63]. Citizens and public users engage with services such as healthcare and welfare, and their trust and perceptions of fairness are essential for broader acceptance [34]. Technical staff and AI experts design and maintain systems, shaping usability and perceived reliability [64]. Academic and legal experts are critical in defining ethical standards and legal boundaries to ensure transparency and uphold democratic principles [65]. Finally, the private sector brings innovation and technical expertise, but must comply with public regulations and ensure accountability [66]. Coordinating these diverse actors is essential to ensure that digital governance reforms are effective and socially legitimate.

5.3.2. Stakeholder Priorities and Concerns

Stakeholders differ in their priorities based on institutional roles; their concerns reflect both individual expectations and system-level needs. Table 7 synthesizes this diversity by mapping stakeholder roles, main concerns, and the theoretical lens through which each set of priorities is best understood, revealing distinct priorities shaped by individual perceptions or systemic goals.
Beyond these theoretical alignments, stakeholder priorities also reveal which concerns dominate in practice. Figure 15 presents a visual breakdown of the most cited concerns, based on the 67 reviewed articles. Trust and accountability are the most cited issues, raised by 39% of citizens and 33% of academic and private sector actors. Trust influences perceptions of fairness and shapes the legitimacy of AI in public systems [37]. Privacy and security are key concerns for technical experts (23%) and citizens (22%), who demand stronger data protection measures. These issues relate to GDPR standards and TAM’s emphasis on system trustworthiness [46]. Efficiency and service performance are top concerns for public officials (29%) and technical staff. While these groups focus on operational gains, citizens place more value on outcomes than speed. Within DEG, this reflects digital maturity; within TAM, it helps explain adoption behaviors [5]. While less frequently cited, concerns about ethics and system rollout remain present. AI perceptions depend on user-level trust and the institutional capacity to support responsible implementation.
Beyond the descriptive frequencies, a comparative reading of the 67 studies was conducted to identify conceptual interrelations between AI readiness, institutional trust, and governance efficiency. This interpretive synthesis enriches the empirical narrative by connecting statistical patterns to theoretical constructs.

5.4. RQ4: How Does AI Reshape Institutional Autonomy and the Balance Between Automation and Human Oversight?

This section critically examines the institutional trade-offs in balancing AI’s decision support with human oversight to maintain trust, accountability, and regulatory coherence [69].

5.4.1. Risks of Over-Reliance on AI

Figure 16 highlights the top risks identified across the studies. The most reported risk across the reviewed studies is the loss of autonomy (27%), raising concerns about reduced human judgment in public decisions. Related to this, automated decision-making (24%) may weaken individual responsibility, especially in critical sectors like justice or healthcare. Other significant concerns include over-reliance on AI systems (18%) and the reduction in oversight (18%), reflecting discomfort with opaque technologies handling sensitive public tasks. Lastly, ethical ambiguity (13%) highlights challenges in assigning responsibility when automated processes fail. These risks call for robust human oversight and clear accountability frameworks to ensure that AI strengthens, rather than undermines, public governance [70].

5.4.2. Balance Between AI and Human Decisions

Figure 17 shows that 58% of institutions value AI for its role in supporting decision-making, while 38% highlight the importance of maintaining human oversight [71]. This balance is key. While AI enhances efficiency and insight, unchecked automation risks eroding public trust and ethical governance. Models blending AI with human review ensure legitimacy and adaptiveness. Nicola Palladino [72] argues that responsible AI integration requires clear oversight frameworks to protect citizens’ rights and ensure public accountability.

5.5. RQ5: What Governance Models and International Frameworks Support Responsible AI Decision-Making?

As AI becomes imperative to public sector transformation, effective governance frameworks must align innovation with ethical, legal, and societal expectations. This section presents leading regulatory initiatives and compares governance models used worldwide.

5.5.1. Key Regulatory Frameworks for AI Governance and Ethical Compliance

Figure 18 presents the most influential frameworks currently shaping AI regulation in public governance. The EU AI Act (30%) uses a risk-based approach, restricting high-risk applications like biometric surveillance while supporting innovation in safer areas [48]. The GDPR (25%) sets data protection, transparency, and user consent standards, especially in sectors such as health and finance [51]. The OECD AI principles (20%) promote fairness, accountability, and openness in digital public services [68]. National AI strategies (15%) reflect each country’s context; Canada prioritizes ethical research, while China focuses on control and economic growth [73]. Finally, the Global Partnership on AI (10%) supports international cooperation to defend human rights and shared values [74]. These frameworks combine legal, ethical, and strategic models to safeguard AI and support innovation while minimizing risks. Countries progressively adapt their regulatory tools to match national goals and global responsibilities.

5.5.2. Governance Models for Responsible AI Decision-Making

Figure 19 provides a comparative overview of the five main governance approaches shaping the global landscape. Hybrid governance (30%) blends legal rules with industry-led ethics, as seen in the EU, where formal laws work alongside corporate responsibility efforts [68]. Risk-based approaches (25%) classify tools by their potential impact on society; frameworks like the GDPR and the AI Act follow this model to protect rights while supporting innovation [56]. Self-regulation (20%) depends on tech companies such as Google and Microsoft setting internal ethical guidelines, though weak oversight limits public accountability [55]. Mixed governance (15%) is common in places like the US, where regional laws let firms influence policy, encouraging innovation but leading to inconsistent standards [74]. Finally, centralized control (10%) is used in China [75], where the state tightly manages AI development to safeguard national interests and political goals [76]. A balanced governance strategy should blend legal safeguards, institutional flexibility, and inclusive design to ensure AI systems remain accountable, equitable, and trustworthy.
The distribution shown in Figure 19 should be interpreted as a reflection of regional governance traditions and research focus rather than as an indicator of superiority or performance. For example, the prominence of hybrid and risk-based models largely reflects the strong presence of European and North American studies, while regions with emerging regulatory systems, such as Africa and the Middle East, contribute fewer publications but provide unique contextual insights. This variation highlights how academic production often mirrors each region’s institutional priorities and policy maturity.

6. Empirical Findings and Thematic Synthesis

This section synthesizes evidence from 67 peer-reviewed studies on how AI is applied in public governance. The findings are structured around three key themes: administrative efficiency and service delivery; decision-making and policy formulation; and transparency and accountability. These are interpreted through the TAM [9] and DEG [58] to better understand institutional and user-level dynamics in AI adoption.

6.1. Administrative Efficiency and Service Delivery

AI tools such as robotic process automation (RPA) and chatbots help governments reduce repetitive tasks and improve response times. Deloitte found that RPA can cut workloads by 30% [77]. In Saudi Arabia, AI boosted tax auditing efficiency by 84% [78]. In Los Angeles, a chatbot handled over 40,000 citizen requests, and Indonesia reported 25% cost savings. These benefits reflect TAM’s core concepts: usefulness and ease of use. However, successful adoption still requires adequate infrastructure and staff training.
For instance, in Estonia, the KrattAI program integrated machine learning tools into administrative workflows to automate public service delivery [79]. The system streamlined citizen–government interactions by automating permit requests and document verification, reducing processing time by nearly 40%. Similarly, in the United Arab Emirates, the Ministry of Health applied AI-driven chatbots to manage patient inquiries, improving response rates and freeing staff for higher-value tasks [80]. These examples demonstrate how AI implementation moves beyond theoretical design to deliver measurable efficiency and improvements in the citizen experience.

6.2. Decision-Making and Policy Formulation

AI supports evidence-based policymaking through tools like predictive analytics and scenario simulations. In France, municipalities used AI to assess the social impact of urban projects [81]. These applications align with DEG principles of agility and integration. However, risks persist; AI may appear neutral, but biased data can harm vulnerable groups [82]. To protect fairness and trust, transparency and human oversight are essential [60]. Beyond strategic planning, several countries have already applied AI in policy formulation with measurable results. In France, predictive analytics systems were introduced in urban planning to evaluate the social and environmental impacts of new infrastructure projects, supporting data-driven decision-making and reducing approval delays [83]. In Canada, the Treasury Board’s Algorithmic Impact Assessment (AIA) framework guided the design of policy tools that balance efficiency with ethical safeguards, helping policymakers assess risks before AI deployment [84]. These experiences show that combining predictive analytics with ethical review processes can strengthen trust and transparency in public decision-making.

6.3. Transparency and Accountability

AI enhances transparency through anomaly detection and public audit trails. AI recovered $375 million in the USA by detecting government overpayments [85]. Several European cities now use algorithmic registers to disclose decision rules [59]. These initiatives align with the goals of algorithmic transparency and civic engagement. Zuiderwijk et al. [7] state that such applications can increase public trust and participation with clear communication and accessible data formats. However, the literature also warns of false positives and the ethical risks of over-reliance on automated systems. Iuga and Socol [86] point out the need for human validation and transparent audit trails to ensure due process and protect civil liberties.

6.4. Discussion and Synthesis

The reviewed studies confirm AI’s potential to improve efficiency, policy accuracy, and transparency. However, success depends on aligning user trust with institutional capacity. Without strong infrastructure, regulation, and engagement, the deployment of AI in governance remains institutionally fragmented, raising concerns over stakeholder acceptance and legitimacy. A key insight is the gap between technical deployment and governance alignment. Many projects lack ethical review or citizen feedback mechanisms, reducing legitimacy. Sustainable adoption needs to integrate micro-level acceptance and macro-level oversight. AI’s impact on governance will remain limited and uneven without this dual focus.
The results clearly show these dynamics. For instance, the regional differences in Figure 8 align with variations in institutional maturity and governance approaches, while the efficiency metrics in Figure 10 show how technical gains depend on data quality and leadership capacity. These connections illustrate that the challenges identified in the literature are not merely conceptual but directly reflected in the patterns observed across regions and applications.

7. AI Integration Capability Model

AI holds significant promise for enhancing how public institutions make decisions, deliver services, and engage with citizens. However, this potential depends on strong internal capabilities that enable ethical, sustainable, and scalable adoption. Drawing on our systematic literature review findings, we introduce the AI Integration Capability Model (AICM), which synthesizes insights from the Dynamic Capabilities framework [12], TAM, and the DEG model. These frameworks help bridge individual-level perceptions with institutional readiness and systemic reform [27].

7.1. SLR Findings

The SLR of the 67 peer-reviewed studies reveals that barriers to AI adoption in public governance are predominantly institutional rather than technical. Recurring issues include data fragmentation, outdated digital infrastructure, skill deficits, and leadership inertia. These findings underscore the absence of an integrated framework to assess institutional readiness for ethical and scalable AI deployment.
In response, we propose AICM, a conceptual framework that translates empirical insights into a structured model for public-sector transformation. Grounded in the TAM, DEG, and DC theory, the AICM defines four interdependent pillars shown in Figure 20:
Pillar 1. Data access and interoperability: Ensures institutions can collect, share, and leverage high-quality, interoperable datasets across agencies and domains.
Pillar 2. Digital infrastructure and process redesign: Focuses on upgrading legacy systems and reengineering workflows to accommodate AI-driven service delivery.
Pillar 3. Workforce competencies and learning agility: Emphasizes capacity-building in digital literacy, algorithmic accountability, and continuous learning for public employees.
Pillar 4. Institutional leadership and change management: Promotes strategic alignment, participatory governance, and adaptive leadership to drive and sustain AI reforms.
Collectively, these pillars form a multi-level readiness framework tailored to the specific challenges of AI integration in public governance. The AICM bridges theoretical constructs with practical imperatives, offering a replicable tool for diagnostic assessment, policy design, and institutional capacity development.
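To make the diagnostic use of the four pillars concrete, the sketch below shows one hypothetical way an institution might self-score against them. The AICM as presented prescribes no weighting or scoring formula, so the 1–5 rating scale and the equal-weight average here are illustrative assumptions, not part of the model itself.

```python
# Hypothetical readiness self-assessment against the four AICM pillars.
# The equal-weight average is an assumption; the AICM defines the pillars
# but does not prescribe a scoring formula.
PILLARS = [
    "Data access and interoperability",
    "Digital infrastructure and process redesign",
    "Workforce competencies and learning agility",
    "Institutional leadership and change management",
]

def readiness_score(ratings: dict) -> float:
    """Average self-assessed maturity (1-5 scale) across all four pillars."""
    missing = [p for p in PILLARS if p not in ratings]
    if missing:
        raise ValueError(f"unrated pillars: {missing}")
    return sum(ratings[p] for p in PILLARS) / len(PILLARS)

# Example: an institution strong on infrastructure but weak on skills
scores = {
    "Data access and interoperability": 3,
    "Digital infrastructure and process redesign": 4,
    "Workforce competencies and learning agility": 2,
    "Institutional leadership and change management": 3,
}
print(readiness_score(scores))  # 3.0
```

A profile like this one would point a ministry toward Pillar 3 (workforce competencies) as the priority gap, which is exactly the kind of targeted diagnosis the framework is meant to support.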

7.2. Comparative Positioning with Existing AI Assessment Models

To contextualize the contribution of the AICM, we compare it to two prominent frameworks: the AI Readiness Index and commercial AI Maturity Models (e.g., IBM, PwC). While both are widely used, they fail to address the institutional complexity of AI adoption in public governance.
The AI Readiness Index offers national-level benchmarking but lacks granularity and operational guidance for individual institutions. Commercial AI Maturity Models, rooted in private-sector transformation logic, emphasize automation and ROI but often neglect public-sector concerns such as transparency, equity, and regulatory compliance.
In contrast, the AICM introduces a governance-specific, capability-oriented framework tailored to the needs of public institutions. It emphasizes internal enablers (data infrastructure, digital systems, human capital, and leadership) while grounding its design in established theories (TAM, DEG, and DC).
As summarized in Table 8, the AICM distinguishes itself through its institutional focus, high actionability, theoretical coherence, and contextual adaptability, particularly for public sectors in the Global South. It bridges diagnostic assessment with transformation planning, offering a practical roadmap for responsible and scalable AI integration.

7.3. Limitations

This study has certain limitations that should be acknowledged. Although the literature review examined a broad range of international sources, the Delphi validation was limited to the Moroccan public sector. This national focus enabled a detailed understanding of the local institutional and governance context. Yet, it constrains the extent to which the findings can be generalized to countries with different administrative systems or higher levels of technological maturity. Nevertheless, many African administrations face comparable structural challenges, including fragmented information systems, limited infrastructure, and shortages of digital skills, which suggests that the AICM framework retains relevance beyond the Moroccan case.
A second limitation concerns language and data accessibility. The review relied primarily on English-language databases, which may have excluded valuable studies or policy documents published in French or Arabic, particularly those produced in Francophone African countries. Expanding linguistic coverage and including local sources in future reviews would allow for a more inclusive and representative evidence base.
A third limitation relates to the composition of the Delphi panel. Although the participating experts represented diverse disciplines and professional backgrounds, the sample size was relatively small. This may not fully capture the variety of perspectives across Morocco’s higher education and public sector institutions. Future studies could broaden the panel to include additional stakeholders, thereby enhancing the robustness and balance of the consensus.
Despite these limitations, this research offers a coherent and adaptable analytical framework that can serve as a foundation for comparative studies and inform the design of AI governance strategies in developing and developed contexts.

8. Validating the AICM: Delphi Study in the Moroccan Public Sector

To assess the proposed AICM’s contextual relevance and institutional feasibility, a three-round Delphi study was conducted between December 2024 and February 2025. As introduced in the theoretical framework (Section 2.1), the Delphi method enabled structured consensus-building among a panel of 15 Moroccan public sector experts. This phase aimed to evaluate the model’s clarity, applicability, and strategic value in the context of AI-driven governance reform.

8.1. Methodology and Delphi Design

A three-round Delphi process was implemented to validate the AICM:
Round 1: Experts independently assessed each pillar, mentioned in Figure 20, on three evaluative dimensions (feasibility, urgency, and applicability) using a 5-point Likert scale (1 = very low; 5 = very high). Initial variability in scores was particularly evident for data interoperability and leadership feasibility.
Round 2: Participants received anonymized feedback comprising median scores and interquartile ranges (IQRs). This controlled feedback allowed experts to reconsider their ratings regarding group trends.
Round 3: Final evaluations were submitted, and qualitative feedback was collected. This round also invited targeted strategic recommendations for operationalizing each pillar.
Figure 21 presents the structure of the Delphi process, outlining how each round builds progressively toward convergence. Following best-practice Delphi criteria, consensus was defined as a median score ≥ 4.0 and IQR ≤ 1.0 [87]. Where less than 2% of responses were missing, mean imputation within expert profiles was applied to maintain internal consistency [88].
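The consensus rule described above (median ≥ 4.0 and IQR ≤ 1.0, with mean imputation for sparse missing responses) can be sketched as a short script. The pillar ratings below are illustrative examples, not the panel's actual data, and the imputation helper is a simplified stand-in for the within-profile procedure cited in [88].

```python
import statistics

def iqr(scores):
    """Interquartile range (Q3 - Q1) using inclusive quartiles."""
    q = statistics.quantiles(sorted(scores), n=4, method="inclusive")
    return q[2] - q[0]

def impute(scores):
    """Replace missing ratings (None) with the mean of the remaining
    ratings; a simplified stand-in for within-profile mean imputation,
    applied only when under 2% of responses are missing."""
    present = [s for s in scores if s is not None]
    mean = statistics.mean(present)
    return [mean if s is None else s for s in scores]

def has_consensus(scores, median_min=4.0, iqr_max=1.0):
    """Delphi consensus: median >= 4.0 and IQR <= 1.0."""
    scores = impute(scores)
    return statistics.median(scores) >= median_min and iqr(scores) <= iqr_max

# Illustrative ratings from a 15-expert panel (5-point Likert scale)
ratings = [5, 5, 4, 5, 5, 5, 4, 5, 5, 5, 4, 5, 5, 5, 5]
print(has_consensus(ratings))  # True: median 5.0, IQR 0.0
```

Running the same check per pillar and per evaluative dimension (feasibility, urgency, applicability) is what determines, round by round, whether another iteration of feedback is needed.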
A purposive sampling strategy was employed to recruit 15 senior experts, each with at least 10 years of experience in public-sector modernization, digital transformation, or AI governance. The selection of 15 experts followed Delphi research standards, recommending panels of 10–20 participants to ensure diversity and manageability [89]. Experts were chosen using purposive sampling and met four main criteria: (i) at least ten years of professional experience, (ii) proven expertise in governance, AI, or digital transformation, (iii) participation in national reform or digital policy programs, and (iv) voluntary commitment to the study.
The final panel represented Morocco’s public sector diversity, including senior officials, data protection regulators, policy advisors, and academics. Each expert contributed practical insights into AI readiness, governance, and policy implementation. All participants gave informed consent, and their answers were treated confidentially. The study was voluntary and not financially rewarded, but each expert received formal acknowledgment and a summary of the main results.
Institutional representation included the following: (1) Moroccan ministries (20%), (2) public digital agencies (13%), (3) academic and research institutions (33%), (4) international organizations (13%), and (5) national IT/AI agencies (20%).

8.2. Consensus Results

The Delphi panel reached a high level of consensus across all four AICM pillars by the conclusion of the third round. As shown in Table 9, all pillars achieved a median score of 5.0, with interquartile ranges (IQRs) ranging from 0.0 to 0.5, indicating both strong agreement and minimal variance among expert responses:
Pillar 1: Data Access and Interoperability (Median = 5.0, IQR = 0.5).
Pillar 2: Digital Infrastructure and Process Redesign (Median = 5.0, IQR = 0.5).
Pillar 3: Workforce Competencies and Learning Agility (Median = 5.0, IQR = 0.0).
Pillar 4: Institutional Leadership and Change Management (Median = 5.0, IQR = 0.0).
The zero IQR recorded for Pillars 3 and 4 indicates unanimous expert consensus, underscoring the urgency and feasibility of enhancing human capital and leadership structures in Morocco’s AI governance landscape. These pillars were seen as universally applicable across ministries and sectors, likely reflecting widespread gaps in digital skills and strategic coordination, as previously highlighted in regional assessments [90,91].
While still within the consensus threshold, the slightly higher IQR (0.5) observed for Pillars 1 and 2 suggests moderate variation in expert perceptions, particularly regarding operational challenges such as data fragmentation and system interoperability. This may reflect the persistence of siloed information systems and legacy IT infrastructure, which were frequently cited as institutional barriers in the literature review and qualitative feedback.
Overall, the results validate the AICM’s constructive alignment with real-world governance challenges and demonstrate its adaptability across diverse administrative contexts. The model’s structure proved both theoretically sound and practically implementable, bridging individual capabilities (e.g., staff competencies) and systemic reforms (e.g., data infrastructure, leadership).
Building on expert recommendations from Round 3, Table 10 outlines actionable strategies for operationalizing each AICM pillar. These suggestions aim to support public administration transitioning from conceptual design to implementation.
The Delphi panel’s convergence confirms the AICM model’s internal coherence and operational resonance with current governance challenges. The theoretically robust and empirically grounded model offers a viable roadmap for adopting AI in public institutions and navigating fragmented digital ecosystems.
These results should be viewed as an analytical generalization rather than a statistical one. While they are grounded in the Moroccan public sector, they remain relevant to other countries facing similar governance structures, digital challenges, and institutional constraints, particularly within the broader African context.

8.3. Strategic Recommendations

In the final round, experts proposed concrete actions to operationalize the four pillars. These are summarized in Table 11 and include the following:
Pillar 1: Digitize legacy records and adopt national metadata and interoperability standards [90,91].
Pillar 2: Integrate disconnected platforms (e.g., Chikaya, Mahakim) through APIs, ensuring compliance with Law 09-08 on data protection [92].
Pillar 3: Develop a national competency framework in collaboration with universities, including training in AI ethics, algorithmic accountability, and explainability [93].
Pillar 4: Establish a High Council for AI Governance with cross-sectoral authority, modeled on Finland’s AuroraAI initiative [94].
These recommendations reinforce the AICM’s utility as a conceptual framework and a pragmatic tool for strategic planning and reform. They demonstrate how institutional capabilities can be systematically enhanced to enable ethical, scalable, and inclusive AI integration in public governance.
Table 11. Strategic recommendations for the implementation of AICM Pillars.
AICM Pillar | Strategic Recommendations | Illustrative Examples
Pillar 1: Data Access and Interoperability | Launch a national digitization programme to convert paper-based archives into interoperable digital records; establish a national metadata standard and enforce inter-agency data exchange protocols. | Less than 20% of Moroccan institutions use interoperable systems; the majority still rely on manual documentation [90].
Pillar 2: Digital Infrastructure and Process Redesign | Integrate existing digital platforms (e.g., Chikaya, e-Huissier, Mahakim) using standardized APIs; ensure all digital services comply with Law 09-08 on personal data protection and follow national cybersecurity norms. | The Court of Accounts [92] highlighted duplication and inefficiency due to disconnected systems.
Pillar 3: Workforce Competencies and Learning Agility | Develop a national competency framework for public employees in partnership with universities; introduce training on AI ethics, algorithmic accountability, and explainability principles. | Aligns with TAM3: improving perceived ease of use and building trust through user literacy [93,95].
Pillar 4: Institutional Leadership and Change Management | Create a High Council for AI Governance under the Prime Minister, with cross-sectoral coordination powers; mandate sector-specific AI implementation roadmaps, with performance indicators and citizen feedback mechanisms. | Inspired by initiatives like Finland’s AuroraAI, combining AI strategy with participatory governance [94].
Ultimately, these recommendations offer more than technical guidance. They reflect a shared vision among experts for what responsible and inclusive AI governance should look like in practice. By focusing on the systems and the people behind them, the AICM helps public institutions move from abstract ambition to concrete action, one step at a time.

9. Managerial and Policy Implications

AI does more than transform tools and processes; it reshapes how governments think, act, and relate to citizens. The AICM we propose offers more than a conceptual framework: it allows institutions to think strategically about their capacity to govern emerging technologies.

9.1. Strategic Urgency and Institutional Gap in Morocco

Despite increasing attention to artificial intelligence in public administration, Morocco has yet to adopt a formal national AI strategy. This institutional gap emerged prominently during the Delphi study, where experts cited fragmented data systems, paper-based archives, and the absence of shared metadata standards as persistent structural barriers to digital transformation.
These challenges underscore the importance of institutional readiness as a prerequisite for AI deployment. Rather than focusing solely on technological capacity, experts emphasized the need for coordinated reform, interoperability frameworks, and cross-agency alignment. The AICM directly responds to this need by offering a structured, empirically grounded roadmap for strengthening core governance capabilities.
By situating AI integration within the realities of administrative fragmentation and digital unevenness, the AICM enables governments, particularly in emerging economies, to build a scalable foundation for ethical and practical AI adoption.

9.2. AICM as an Action-Oriented Readiness Framework

Public institutions often struggle to translate digital ambitions into coherent strategies. The AICM gives them a practical starting point. It can be used as a diagnostic tool to assess readiness, highlighting strengths and gaps across four critical areas: data, infrastructure, people, and leadership. Ministries can integrate it into internal planning cycles, maturity assessments, or digital audits.
In countries like Morocco, where institutional maturity varies widely, the model also helps bridge national priorities with sectoral constraints. It offers a structure for building a national AI strategy or, at the very least, a common roadmap. Such a framework ensures that AI integration is not reduced to isolated pilots but becomes part of a shared long-term vision.

9.3. Anticipating Workforce Transformation in the Public Sector

One of the clearest signals from our literature review and the Delphi panel is that AI will fundamentally alter the composition of the public workforce. Routine administrative roles are already being automated. However, new needs are emerging for data stewards, explainability experts, legal engineers, and algorithmic auditors.
Institutions must anticipate these shifts. Training should go beyond technical skills, including ethical reasoning, legal safeguards, and the ability to interpret algorithmic outputs. Pillar 3 of the AICM points directly to this challenge. Working with universities and public service academies, governments can begin designing competency frameworks that reflect the skills of tomorrow’s civil servants.

9.4. Institutional Leadership for Ethical and Accountable AI

The transition to AI-enhanced governance requires more than operational change; it demands political coordination and clear public accountability. This is where Pillar 4 comes in. Leadership in this space means creating legitimate institutions that oversee algorithmic systems, respond to public concerns, and adjust policies as technologies evolve.
One recommendation that emerged strongly from the Delphi panel was the creation of a High Council for AI Governance, linked directly to the head of government. Its mission would include setting standards, coordinating across ministries, and establishing ethical oversight mechanisms, such as algorithmic registers, public impact assessments, and external audits of high-risk systems.
These are no longer optional. As AI moves closer to the heart of public decision-making, citizens will demand more visibility, fairness, and recourse. Institutions must be ready to deliver them.

9.5. Toward a Coherent National AI Strategy

Finally, and perhaps most urgently, many governments operate without a national AI strategy, or with one that remains disconnected from administrative realities. This creates fragmentation, inefficiency, and missed opportunities. The AICM connects top-down planning with bottom-up capacity. It can help ministries align digital investments with governance goals and offer a shared language for policy dialogue between central authorities, regulators, sectoral agencies, and local governments. Put simply, it helps move from ambition to execution.

10. Conclusions and Future Work

This review of 67 peer-reviewed studies (2014–2024) demonstrates that AI increasingly shapes public governance’s operational, strategic, and normative dimensions. Addressing RQ1 and RQ2, the findings confirm that AI can improve decision-making quality, service efficiency, and predictive capacity across multiple domains. However, these benefits are mediated by institutional readiness, as persistent challenges, such as fragmented data systems, legacy infrastructure, and skill deficits, constrain scalable and responsible implementation.
In response to RQ3, the review highlights heterogeneous stakeholder perceptions of AI’s legitimacy, trustworthiness, and operational utility, shaped by role-specific priorities and governance maturity. Addressing RQ4, the study finds that automation introduces significant risks to human oversight, autonomy, and accountability, necessitating a careful balance between efficiency and normative safeguards. In examining RQ5, the review synthesizes regulatory models and international frameworks and identifies a lack of operational convergence between high-level principles and institutional realities in many settings.
To integrate these insights into a coherent diagnostic and planning tool, the study introduced the AICM: a multi-level capability framework that links individual acceptance, organizational process redesign, and institutional reform capacity. The model, grounded in TAM, DEG, and DC theory, was empirically validated through a three-round Delphi study. Strong to unanimous consensus from a panel of senior Moroccan public-sector experts confirmed the four core pillars’ feasibility, urgency, and adaptability. While the empirical setting was Morocco, the model is scalable and transferable, providing a structured pathway for governments seeking to align AI initiatives with institutional capacity and public accountability.
While the Delphi validation focused on Morocco, the findings may extend to other African countries facing similar governance and digital maturity challenges. The model’s four pillars—data, infrastructure, workforce, and leadership—address issues common across low- and middle-income contexts, where institutions often struggle with fragmented data, limited resources, and uneven digital transformation. This regional perspective suggests that the AICM can support broader efforts to design ethical and inclusive AI governance across Africa and comparable settings.
Future research should extend this foundation in six directions. First, robust performance metrics are needed to assess AI implementation in terms of efficiency (RQ1), fairness, and transparency. Second, longitudinal designs can investigate how AI shapes institutional legitimacy and systemic resilience over time (RQ3, RQ4). Third, participatory AI governance practices, including explainable AI tools and civic auditing mechanisms, require further study to enhance democratic accountability (RQ5). Fourth, ethical risks associated with emergency AI deployment should be examined, particularly where oversight and proportionality may be compromised. Fifth, global regulatory frameworks such as the EU AI Act must be adapted for application in low-capacity environments. Finally, institutional models for secure, interoperable, and auditable public-sector data governance remain underexplored and deserve systematic attention.

Author Contributions

Conceptualization, A.A. and A.E.M.; methodology, A.A.; software, A.A.; validation, A.A., A.E.M., B.E.M. and O.B.; formal analysis, A.A.; resources, A.A., A.E.M., B.E.M. and O.B.; data curation, A.A.; writing—original draft preparation, A.A., A.E.M., B.E.M. and O.B.; writing—review and editing, A.A., A.E.M., B.E.M. and O.B.; visualization, A.A.; supervision, A.E.M.; project administration, B.E.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADD: Digital Development Agency (Morocco)
AI: Artificial Intelligence
AIA: Algorithmic Impact Assessment
AICM: AI Integration Capability Model
API: Application Programming Interface
BERT: Bidirectional Encoder Representations from Transformers
ERP: Enterprise Resource Planning
CAF: Canadian Armed Forces
CNSS: National Social Security Fund (Morocco)
COMPAS: Correctional Offender Management Profiling for Alternative Sanctions
DGWGR: Data Governance Working Group Report
CORE: Computing Research and Education Association
DC: Dynamic Capabilities
DGRL: Digital Government Reference Library
DEG: Digital-Era Governance
DL: Deep Learning
ENA: National School of Administration (Morocco)
EU: European Union
GDPR: General Data Protection Regulation
GDS: Government Digital Service (United Kingdom)
GPT: Generative Pretrained Transformer
HITL: Human-in-the-Loop
INDH: Initiative Nationale pour le Développement Humain (Morocco)
INPT: National Institute of Posts and Telecommunications
IoT: Internet of Things
LLM: Large Language Models
MEAE: Ministry of Economic Affairs and Employment
ML: Machine Learning
NLP: Natural Language Processing
OECD: Organization for Economic Co-operation and Development
ROI: Return on Investment
SLR: Systematic Literature Review
TAM: Technology Acceptance Model
UK: United Kingdom
UN: United Nations
USA: United States
XAI: Explainable Artificial Intelligence
X-Road: Cross-Road Interoperability Platform (Estonia)

References

  1. Engel, C.; Linhardt, L.; Schubert, M. Code Is Law: How COMPAS Affects the Way the Judiciary Handles the Risk of Recidivism. Artif. Intell. Law 2025, 33, 383–404. [Google Scholar] [CrossRef]
  2. Lim, B.; Seth, I.; Rozen, W.M. The Role of Artificial Intelligence Tools on Advancing Scientific Research. Aesthetic Plast. Surg. 2023, 48, 3036–3038. [Google Scholar] [CrossRef] [PubMed]
  3. Rodrigues, R. Legal and Human Rights Issues of AI: Gaps, Challenges and Vulnerabilities. J. Responsible Technol. 2020, 4, 100005. [Google Scholar] [CrossRef]
  4. Schüller, M. Artificial Intelligence: New Challenges and Opportunities for Asian Countries. In Exchanges and Mutual Learning Among Asian Civilizations; Springer Nature Singapore: Singapore, 2023; pp. 277–285. ISBN 978-981-19716-4-8. [Google Scholar]
  5. Ahmed, M.I.; Spooner, B.; Isherwood, J.; Lane, M.; Orrock, E.; Dennison, A. A Systematic Review of the Barriers to the Implementation of Artificial Intelligence in Healthcare. Cureus 2023, 15, e46454. [Google Scholar] [CrossRef]
  6. Sharma, G.D.; Yadav, A.; Chopra, R. Artificial Intelligence and Effective Governance: A Review, Critique and Research Agenda. Sustain. Futures 2020, 2, 100004. [Google Scholar] [CrossRef]
  7. Zuiderwijk, A.; Chen, Y.-C.; Salem, F. Implications of the Use of Artificial Intelligence in Public Governance: A Systematic Literature Review and a Research Agenda. Gov. Inf. Q. 2021, 38, 101577. [Google Scholar] [CrossRef]
  8. Heimberger, H.; Horvat, D.; Schultmann, F. Exploring the Factors Driving AI Adoption in Production: A Systematic Literature Review and Future Research Agenda. Inf. Technol. Manag. 2024, 1–17. [Google Scholar] [CrossRef]
  9. Davis, F.D. Technology Acceptance Model: TAM. In Information Seeking Behavior and Technology Adoption; Al-Suqri, M.N., Al-Aufi, A.S., Eds.; 1989; Volume 205, p. 5. [Google Scholar]
  10. Marangunić, N.; Granić, A. Technology Acceptance Model: A Literature Review from 1986 to 2013. Univers. Access Inf. Soc. 2015, 14, 81–95. [Google Scholar] [CrossRef]
  11. Dunleavy, P.; Margetts, H.; Bastow, S.; Tinkler, J. New Public Management Is Dead—Long Live Digital-Era Governance. J. Public Adm. Res. Theory 2006, 16, 467–494. [Google Scholar] [CrossRef]
  12. Teece, D.J. Explicating Dynamic Capabilities: The Nature and Microfoundations of (Sustainable) Enterprise Performance. Strateg. Manag. J. 2007, 28, 1319–1350. [Google Scholar] [CrossRef]
  13. Hasson, F.; Keeney, S.; McKenna, H. Research Guidelines for the Delphi Survey Technique. J. Adv. Nurs. 2000, 32, 1008–1015. [Google Scholar] [CrossRef] [PubMed]
  14. Dospinescu, O.; Buraga, S. Integrated ERP Systems—Determinant Factors for Their Adoption in Romanian Organizations. Systems 2025, 13, 667. [Google Scholar] [CrossRef]
  15. Benitez, J.M.; Castro, J.L.; Requena, I. Are Artificial Neural Networks Black Boxes? IEEE Trans. Neural Netw. 1997, 8, 1156–1164. [Google Scholar] [CrossRef]
  16. Vaghela, M.C.; Rathi, S.; Shirole, R.L.; Verma, J.; Shaheen; Panigrahi, S.; Singh, S. Leveraging AI and Machine Learning in Six-Sigma Documentation for Pharmaceutical Quality Assurance. Zhongguo Ying Yong Sheng Li Xue Za Zhi 2024, 40, e20240005. [Google Scholar] [CrossRef] [PubMed]
  17. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Prentice Hall Series in Artificial Intelligence; Pearson: Boston, MA, USA, 2022; ISBN 978-1-292-40117-1. [Google Scholar]
  18. Mohammed, S.; Budach, L.; Feuerpfeil, M.; Ihde, N.; Nathansen, A.; Noack, N.; Patzlaff, H.; Naumann, F.; Harmouch, H. The Effects of Data Quality on Machine Learning Performance on Tabular Data. Inf. Syst. 2025, 132, 102549. [Google Scholar] [CrossRef]
  19. Soori, M.; Arezoo, B.; Dastres, R. Artificial Intelligence, Machine Learning and Deep Learning in Advanced Robotics, a Review. Cogn. Robot. 2023, 3, 54–70. [Google Scholar] [CrossRef]
  20. Taye, M.M. Understanding of Machine Learning with Deep Learning: Architectures, Workflow, Applications and Future Directions. Computers 2023, 12, 91. [Google Scholar] [CrossRef]
  21. Buhmann, A.; Fieseler, C. Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence. Bus. Ethics Q. 2023, 33, 146–179. [Google Scholar] [CrossRef]
  22. Ferrari, F.; Van Dijck, J.; Van Den Bosch, A. Observe, Inspect, Modify: Three Conditions for Generative AI Governance. New Media Soc. 2025, 27, 2788–2806. [Google Scholar] [CrossRef]
  23. Xu, J. Opening the ‘Black Box’ of Algorithms: Regulation of Algorithms in China. Commun. Res. Pract. 2024, 10, 288–296. [Google Scholar] [CrossRef]
  24. Jawad, Z.N.; Balázs, V. Machine Learning-Driven Optimization of Enterprise Resource Planning (ERP) Systems: A Comprehensive Review. Beni-Suef Univ. J. Basic Appl. Sci. 2024, 13, 4. [Google Scholar] [CrossRef]
  25. Wijesinghe, S.; Nanayakkara, I.; Pathirana, R.; Wickramarachchi, R.; Fernando, I. Impact of IoT Integration on Enterprise Resource Planning (ERP) Systems: A Comprehensive Literature Analysis. In Proceedings of the 2024 International Research Conference on Smart Computing and Systems Engineering (SCSE), Colombo, Sri Lanka, 4 April 2024; Volume 7, pp. 1–5. [Google Scholar]
  26. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report; Keele University and Durham University: Staffordshire, UK, 2007. [Google Scholar]
  27. Dwivedi, R.; Nerur, S.; Balijepally, V. Exploring Artificial Intelligence and Big Data Scholarship in Information Systems: A Citation, Bibliographic Coupling, and Co-Word Analysis. Int. J. Inf. Manag. Data Insights 2023, 3, 100185. [Google Scholar] [CrossRef]
  28. Higgins, J.P.T.; Thomas, J.; Chandler, J.; Cumpston, M.; Li, T.; Page, M.J.; Welch, V.A. Cochrane Handbook for Systematic Reviews of Interventions, 1st ed.; Wiley: Hoboken, NJ, USA, 2019; ISBN 978-1-119-53662-8. [Google Scholar]
  29. Wiesmüller, S. Contextualisation of Relational AI Governance in Existing Research. In The Relational Governance of Artificial Intelligence; Springer Nature Switzerland: Cham, Switzerland, 2023; pp. 165–212. ISBN 978-3-031-25022-4. [Google Scholar]
  30. Haefner, N.; Parida, V.; Gassmann, O.; Wincent, J. Implementing and Scaling Artificial Intelligence: A Review, Framework, and Research Agenda. Technol. Forecast. Soc. Change 2023, 197, 122878. [Google Scholar] [CrossRef]
  31. Von Essen, L.; Ossewaarde, M. Artificial Intelligence and European Identity: The European Commission’s Struggle for Reconciliation. Eur. Politics Soc. 2024, 25, 375–402. [Google Scholar] [CrossRef]
  32. Botero Arcila, B. AI Liability in Europe: How Does It Complement Risk Regulation and Deal with the Problem of Human Oversight? Comput. Law Secur. Rev. 2024, 54, 106012. [Google Scholar] [CrossRef]
  33. Roberts, H.; Hine, E.; Taddeo, M.; Floridi, L. Global AI Governance: Barriers and Pathways Forward. Int. Aff. 2024, 100, 1275–1286. [Google Scholar] [CrossRef]
  34. Ingrams, A.; Klievink, B. Transparency’s Role in AI Governance. In The Oxford Handbook of AI Governance; Bullock, J.B., Chen, Y.-C., Himmelreich, J., Hudson, V.M., Korinek, A., Young, M.M., Zhang, B., Eds.; Oxford University Press: Oxford, UK, 2022; pp. 479–494. ISBN 978-0-19-757932-9. [Google Scholar]
  35. Ivic, A.; Milicevic, A.; Krstic, D.; Kozma, N.; Havzi, S. The Challenges and Opportunities in Adopting AI, IoT and Blockchain Technology in E-Government: A Systematic Literature Review. In Proceedings of the 2022 International Conference on Communications, Information, Electronic and Energy Systems (CIEES), Veliko Tarnovo, Bulgaria, 26–28 November 2022; IEEE: New York, NY, USA, 2022; pp. 1–6. [Google Scholar]
  36. Das, R.; Soylu, M. A Key Review on Graph Data Science: The Power of Graphs in Scientific Studies. Chemom. Intell. Lab. Syst. 2023, 240, 104896. [Google Scholar] [CrossRef]
  37. Bluemke, E.; Collins, T.; Garfinkel, B.; Trask, A. Exploring the Relevance of Data Privacy-Enhancing Technologies for AI Governance Use Cases 2023. arXiv 2023, arXiv:2303.08956. [Google Scholar] [CrossRef]
  38. Himanshu, H. Role of Artificial Intelligence in Decision Making. In Decision Strategies and Artificial Intelligence Navigating the Business Landscape; San International Scientific Publications: Kanyakumari, India, 2023; ISBN 978-81-963849-1-3. [Google Scholar]
  39. Waja, G.; Patil, G.; Mehta, C.; Patil, S. How AI Can Be Used for Governance of Messaging Services: A Study on Spam Classification Leveraging Multi-Channel Convolutional Neural Network. Int. J. Inf. Manag. Data Insights 2023, 3, 100147. [Google Scholar] [CrossRef]
  40. Mohammed, A.; Mohammad, M. How AI Algorithms Are Being Used in Applications. In Soft Computing and Signal Processing; Reddy, V.S., Prasad, V.K., Wang, J., Reddy, K.T.V., Eds.; Springer Nature Singapore: Singapore, 2023; Volume 313, pp. 41–53. ISBN 978-981-19866-8-0. [Google Scholar]
  41. Deng, B.; Qiu, Y. Comment on “Analytical Solutions to One-Dimensional Advection–Diffusion Equation with Variable Coefficients in Semi-Infinite Media” by Kumar, A., Jaiswal, D.K., Kumar, N., J. Hydrol., 2010, 380: 330–337. J. Hydrol. 2012, 424–425, 278–279. [Google Scholar] [CrossRef]
  42. Jha, J.; Vishwakarma, A.K.; N, C.; Nithin, A.; Sayal, A.; Gupta, A.; Kumar, R. Artificial Intelligence and Applications. In Proceedings of the 2023 1st International Conference on Intelligent Computing and Research Trends (ICRT), Roorkee, India, 3–4 February 2023; IEEE: New York, NY, USA, 2023; pp. 1–4. [Google Scholar]
  43. Bajwa, J.; Munir, U.; Nori, A.; Williams, B. Artificial Intelligence in Healthcare: Transforming the Practice of Medicine. Future Healthc. J. 2021, 8, e188–e194. [Google Scholar] [CrossRef]
  44. Yang, J.; Blount, Y.; Amrollahi, A. Artificial Intelligence Adoption in a Professional Service Industry: A Multiple Case Study. Technol. Forecast. Soc. Change 2024, 201, 123251. [Google Scholar] [CrossRef]
  45. Van Noordt, C.; Misuraca, G. Exploratory Insights on Artificial Intelligence for Government in Europe. Soc. Sci. Comput. Rev. 2022, 40, 426–444. [Google Scholar] [CrossRef]
  46. Gao, X.; Feng, H. AI-Driven Productivity Gains: Artificial Intelligence and Firm Productivity. Sustainability 2023, 15, 8934. [Google Scholar] [CrossRef]
  47. Espinosa, V.I.; Pino, A. E-Government as a Development Strategy: The Case of Estonia. Int. J. Public Adm. 2025, 48, 86–99. [Google Scholar] [CrossRef]
  48. Chen, Y. How Blockchain Adoption Affects Supply Chain Sustainability in the Fashion Industry: A Systematic Review and Case Studies. Int. Trans. Oper. Res. 2024, 31, 3592–3620. [Google Scholar] [CrossRef]
  49. Toll, D.; Lindgren, I.; Melin, U.; Madsen, C.Ø. Values, Benefits, Considerations and Risks of AI in Government: A Study of AI Policies in Sweden. JeDEM Ejournal Edemocr. Open Gov. 2020, 12, 40–60. [Google Scholar] [CrossRef]
  50. Ajali-Hernández, N.I.; Travieso-González, C.M. Novel Cost-Effective Method for Forecasting COVID-19 and Hospital Occupancy Using Deep Learning. Sci. Rep. 2024, 14, 25982. [Google Scholar] [CrossRef]
  51. European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts; COM/2021/206final; European Commission: Brussels, Belgium, 2021; pp. 1–107. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52021PC0206 (accessed on 10 September 2025).
  52. Petrin, M. The Impact of AI and New Technologies on Corporate Governance and Regulation. Sing. J. Leg. Stud. 2024, 90. [Google Scholar] [CrossRef]
  53. Duberry, J. Chapter 14: AI and Data-Driven Political Communication (Re)Shaping Citizen–Government Interactions. In Research Handbook on Artificial Intelligence and Communication; Nah, S., Ed.; Edward Elgar Publishing: Cheltenham, UK, 2023; pp. 231–245. ISBN 978-1-80392-030-6. [Google Scholar]
  54. Liebig, L.; Güttel, L.; Jobin, A.; Katzenbach, C. Subnational AI Policy: Shaping AI in a Multi-Level Governance System. AI Soc. 2024, 39, 1477–1490. [Google Scholar] [CrossRef]
  55. Floridi, L.; Cowls, J. A Unified Framework of Five Principles for AI in Society. Harv. Data Sci. Rev. 2019, 535–545. [Google Scholar] [CrossRef]
  56. Gstrein, O.J.; Haleem, N.; Zwitter, A. General-Purpose AI Regulation and the European Union AI Act. Internet Policy Rev. 2024, 13, 1–26. [Google Scholar] [CrossRef]
  57. Hupont, I.; Fernández-Llorca, D.; Baldassarri, S.; Gómez, E. Use Case Cards: A Use Case Reporting Framework Inspired by the European AI Act. Ethics Inf. Technol. 2024, 26, 19. [Google Scholar] [CrossRef]
  58. Margetts, H.; Dunleavy, P. The Second Wave of Digital-Era Governance: A Quasi-Paradigm for Government on the Web. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2013, 371, 20120382. [Google Scholar] [CrossRef]
  59. Olsen, H.P.; Hildebrandt, T.T.; Wiesener, C.; Larsen, M.S.; Flügge, A.W.A. The Right to Transparency in Public Governance: Freedom of Information and the Use of Artificial Intelligence by Public Agencies. Digit. Gov. Res. Pract. 2024, 5, 1–15. [Google Scholar] [CrossRef]
  60. Pislaru, M.; Vlad, C.S.; Ivascu, L.; Mircea, I.I. Citizen-Centric Governance: Enhancing Citizen Engagement through Artificial Intelligence Tools. Sustainability 2024, 16, 2686. [Google Scholar] [CrossRef]
  61. Alon-Barkat, S.; Busuioc, M. Human–AI Interactions in Public Sector Decision Making: “Automation Bias” and “Selective Adherence” to Algorithmic Advice. J. Public Adm. Res. Theory 2023, 33, 153–169. [Google Scholar] [CrossRef]
  62. Murdoch, B. Privacy and Artificial Intelligence: Challenges for Protecting Health Information in a New Era. BMC Med. Ethics 2021, 22, 122. [Google Scholar] [CrossRef]
  63. Greene, K.G. AI Governance Multi-Stakeholder Convening. In The Oxford Handbook of AI Governance; Bullock, J.B., Chen, Y.-C., Himmelreich, J., Hudson, V.M., Korinek, A., Young, M.M., Zhang, B., Eds.; Oxford University Press: Oxford, UK, 2022; pp. 109–126. ISBN 978-0-19-757932-9. [Google Scholar]
  64. John, T. The Ethical Considerations of Artificial Intelligence in Clinical Decision Support. Proc. Wellingt. Fac. Eng. Ethics Sustain. Symp. 2022. [Google Scholar] [CrossRef]
  65. Ndrejaj, A.; Ali, M. Artificial Intelligence Governance: A Study on the Ethical and Security Issues That Arise. In Proceedings of the 2022 International Conference on Computing, Electronics & Communications Engineering (iCCECE), Southend, UK, 17–18 August 2022; IEEE: New York, NY, USA, 2022; pp. 104–111. [Google Scholar]
  66. De Cremer, D.; Narayanan, D. On Educating Ethics in the AI Era: Why Business Schools Need to Move beyond Digital Upskilling, towards Ethical Upskilling. AI Ethics 2023, 3, 1037–1041. [Google Scholar] [CrossRef]
  67. Iddrisu, A.-M.; Mensah, S.; Boafo, F.; Yeluripati, G.R.; Kudjo, P. A Sentiment Analysis Framework to Classify Instances of Sarcastic Sentiments within the Aviation Sector. Int. J. Inf. Manag. Data Insights 2023, 3, 100180. [Google Scholar] [CrossRef]
  68. Mavrogiorgos, K.; Kiourtis, A.; Mavrogiorgou, A.; Manias, G.; Kyriazis, D. A Question Answering Software for Assessing AI Policies of OECD Countries. In Proceedings of the 4th European Symposium on Software Engineering, Napoli, Italy, 1–3 December 2023; ACM: New York, NY, USA, 2023; pp. 31–36. [Google Scholar]
  69. Dang, H.B.; Pham, T.T.Q.; Nguyen, V.P.; Nguyen, V.H. Regulatory Impact of a Governmental Approach for Artificial Intelligence Technology Implementation in Vietnam. J. Infrastruct. Policy Dev. 2024, 8, 6631. [Google Scholar] [CrossRef]
  70. Thoene, U.; García Alonso, R.; Dávila Benavides, D.E. Ethical Frameworks and Regulatory Governance: An Exploratory Analysis of the Colombian Strategy for Artificial Intelligence. Law State Telecommun. Rev. 2024, 16, 146–171. [Google Scholar] [CrossRef]
  71. Arora, A.; Gupta, M.; Mehmi, S.; Khanna, T.; Chopra, G.; Kaur, R.; Vats, P. Towards Intelligent Governance: The Role of AI in Policymaking and Decision Support for E-Governance. In Information Systems for Intelligent Systems; So In, C., Londhe, N.D., Bhatt, N., Kitsing, M., Eds.; Springer Nature Singapore: Singapore, 2024; Volume 379, pp. 229–240. ISBN 978-981-9986-11-8. [Google Scholar]
  72. Palladino, N. A digital constitutionalism framework for AI. Riv. Di Digit. Politics 2023, 3, 521–542. [Google Scholar] [CrossRef]
  73. CAF. The Department of National Defence and Canadian Armed Forces Artificial Intelligence Strategy; CAF: Ottawa, ON, Canada, 2024; ISBN 978-0-660-45122-0. Available online: https://www.canada.ca/en/department-national-defence/corporate/reports-publications/dnd-caf-artificial-intelligence-strategy.html (accessed on 10 September 2025).
  74. Wendehorst, C. Data Governance Working Group: A Framework Paper for GPAI’s Work on Data Governance. 2020. Available online: https://ucrisportal.univie.ac.at/en/publications/data-governance-working-group-a-framework-paper-for-gpais-work-on (accessed on 25 July 2025).
  75. Wang, C.; Teo, T.S.H.; Janssen, M. Public and Private Value Creation Using Artificial Intelligence: An Empirical Study of AI Voice Robot Users in Chinese Public Sector. Int. J. Inf. Manag. 2021, 61, 102401. [Google Scholar] [CrossRef]
  76. Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; Srikumar, M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. SSRN Electron. J. 2020. [Google Scholar] [CrossRef]
  77. Leoni, G.; Bergamaschi, F.; Maione, G. Artificial Intelligence and Local Governments: The Case of Strategic Performance Management Systems and Accountability. In Artificial Intelligence and Its Contexts; Visvizi, A., Bodziany, M., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 145–157. ISBN 978-3-030-88971-5. [Google Scholar]
  78. Alshahrani, A.; Griva, A.; Dennehy, D.; Mäntymäki, M. Artificial Intelligence and Decision-Making in Government Functions: Opportunities, Challenges and Future Research. Transform. Gov. People Process Policy 2024, 18, 678–698. [Google Scholar] [CrossRef]
  79. Dreyling, R.; Tammet, T.; Pappel, I.; McBride, K. Navigating the AI Maze: Lessons from Estonia’s Bürokratt on Public Sector AI Digital Transformation 2024. Available online: https://ssrn.com/abstract=4850696 (accessed on 10 September 2025).
  80. Alketbi, M. Assessing Readiness for Transformation from Rule Based to Ai-Based Chatbot in UAE Healthcare: A Case Study of a Rehabilitation Hospital in Abu Dhabi. Master’s Thesis, United Arab Emirates University, Abu Dhabi, United Arab Emirates, 2025. [Google Scholar]
  81. Wang, S.; Zhang, Y.; Xiao, Y.; Liang, Z. Artificial Intelligence Policy Frameworks in China, the European Union and the United States: An Analysis Based on Structure Topic Model. Technol. Forecast. Soc. Change 2025, 212, 123971. [Google Scholar] [CrossRef]
  82. Misra, S.K.; Sharma, S.K.; Gupta, S.; Das, S. A Framework to Overcome Challenges to the Adoption of Artificial Intelligence in Indian Government Organizations. Technol. Forecast. Soc. Change 2023, 194, 122721. [Google Scholar] [CrossRef]
  83. OECD. Framework for the Classification of AI Systems; OECD Publishing: Paris, France, 2022. [Google Scholar]
  84. Government of Canada. Algorithmic Impact Assessment v0.10.0; Government of Canada: Ottawa, ON, Canada, 2023. Available online: https://open.canada.ca/aia-eia-js_0.10 (accessed on 10 October 2025).
  85. Qudah, M.A.A.; Muradkhanli, L.; Salameh, A.A.; Rind, M.A.; Muradkhanli, Z. Artificial Intelligence Techniques In Improving the Quality of Services Provided By E-Government To Citizens. In Proceedings of the 2024 IEEE 1st Karachi Section Humanitarian Technology Conference (KHI-HTC), Tandojam, Pakistan, 8–9 January 2024; IEEE: New York, NY, USA, 2024; pp. 1–4. [Google Scholar]
  86. Iuga, I.C.; Socol, A. Government artificial intelligence readiness and brain drain: Influencing factors and spatial effects in the European union member states. J. Bus. Econ. Manag. 2024, 25, 268–296. [Google Scholar] [CrossRef]
  87. Mahajan, V. Book Review: The Delphi Method: Techniques and Applications. J. Mark. Res. 1976, 13, 317–318. [Google Scholar] [CrossRef]
  88. Skulmoski, G.J.; Hartman, F.T.; Krahn, J. The Delphi Method for Graduate Research. J. Inf. Technol. Educ. Res. 2007, 6, 1–21. [Google Scholar] [CrossRef]
  89. Hsu, C.-C.; Sandford, B.A. The Delphi Technique: Making Sense of Consensus. Pract. Assess. Res. Eval. 2007, 12, 10. [Google Scholar] [CrossRef]
  90. OECD. Digital Government Review of Morocco: Laying the Foundations for the Digital Transformation of the Public Sector in Morocco, OECD Digital Government Studies; OECD Publishing: Paris, France, 2018. [Google Scholar]
  91. United Nations Department of Economic and Social Affairs. United Nations E-Government Survey 2022: The Future of Digital Government; United Nations e-Government Survey Series, 1st ed.; United Nations Publications: New York, NY, USA, 2022; ISBN 978-92-1-123213-4. [Google Scholar]
  92. Cour des Comptes. Évaluation Des Services Publics En Ligne; Cour des Comptes: Rabat, Morocco, 2019; Available online: https://www.courdescomptes.ma/wp-content/uploads/2023/01/Rapport-services-en-ligne-2019.pdf (accessed on 5 April 2025).
  93. Venkatesh, V.; Bala, H. Technology Acceptance Model 3 and a Research Agenda on Interventions. Decis. Sci. 2008, 39, 273–315. [Google Scholar] [CrossRef]
  94. MEAE. Finland’s Age of Artificial Intelligence—Turning Finland into a Leading Country in the Application of Artificial Intelligence: Objective and Recommendations for Measures; Publications of the Ministry of Economic Affairs and Employment: Helsinki, Finland, 2017. Available online: https://julkaisut.valtioneuvosto.fi/handle/10024/80849 (accessed on 10 March 2025).
  95. Nawaz, N.; Arunachalam, H.; Pathi, B.K.; Gajenderan, V. The Adoption of Artificial Intelligence in Human Resources Management Practices. Int. J. Inf. Manag. Data Insights 2024, 4, 100208. [Google Scholar] [CrossRef]
Figure 1. Hierarchical representation of AI, ML, and DL.
Figure 2. Mapping and review methodology steps.
Figure 3. PRISMA 2020 flow diagram of the study selection process.
Figure 4. Multi-level theoretical mapping in the SLR framework.
Figure 5. Yearly distribution of publications on AI in governance (2019–2024).
Figure 6. Typology of contributions in the selected studies.
Figure 7. Distribution of methodological approaches in the selected studies.
Figure 8. Regional contribution to AI governance research.
Figure 9. AI Applications in Governance.
Figure 10. Efficiency metrics in AI implementation.
Figure 11. Opportunities presented by AI in governance.
Figure 12. Challenges in AI Adoption for Institutional Governance.
Figure 13. Risks of AI integration in governance.
Figure 14. Stakeholder types in AI governance.
Figure 15. Stakeholders’ priorities in ethical AI governance.
Figure 16. Key concerns of AI and human autonomy in governance.
Figure 17. AI decision support vs. human oversight.
Figure 18. Global AI regulatory frameworks: adoption and influence on governance.
Figure 19. Different AI governance models.
Figure 20. Essential pillars for AI integration in public governance.
Figure 21. The four institutional pillars of the AI integration capability model.
Table 1. Overview of the findings of previous SLRs on integrating AI in governance.

Study | No. of Papers | Focus Area | Period Covered | Key Findings | Strengths | Weaknesses | Main Contributions
[6] | 74 | Theoretical frameworks for AI governance | 1983–2019 | AI improves public governance through automation, transparency, and predictive analytics. | Provides a broad conceptual agenda and identifies critical research gaps. | Limited empirical data; lacks connection to operational contexts. | Established a theoretical foundation and encouraged empirical extensions for AI governance.
[7] | 26 | Public governance and AI ethics | 2010–2020 | Investigates transparency, accountability, and privacy challenges in public AI applications. | Strong interdisciplinary design and robust theoretical grounding. | Limited empirical and quantitative analysis; mostly Western contexts. | Designed a comprehensive research agenda focused on procedural and normative issues.
[5] | 59 | AI adoption barriers in healthcare | 2000–2023 | Identifies six key barriers: ethical, technological, legal, workforce-related, social, and safety concerns. | Offers actionable insights into overcoming sector-specific obstacles. | Focused solely on healthcare; lacks applicability to broader governance settings. | Synthesized implementation barriers and provided a sectoral framework for healthcare AI integration.
[8] | 47 | AI adoption in industrial production | 2010–2024 | Highlights 35 adoption factors across skills, data, ethics, and leadership themes. | Structured categorization of influencing factors and strategic implications. | Based only on academic literature; lacks validation through real-world case studies. | Proposed an adoption framework and identified gaps in training and institutional preparedness.
Table 2. TAM, DEG, DC, and Delphi contribute to key dimensions of AI governance.

Analytical Dimension | TAM (Individual) | DEG (Institutional) | DC | Delphi
Perceived usefulness
Trust and fairness
Ease of use
Institutional efficiency
Integrated service delivery
Transparency and accountability
Adaptive capacity
Table 3. Search terms used in the search string.

Scope | Terms
AI | AI, ML, DL, neural networks, NLP, LLMs, computer vision, algorithmic transparency, AI fairness, bias mitigation, AI security, explainable AI, responsible AI
Governance | Governance, public administration, policy, transparency, accountability, ethics, regulatory compliance, risk mitigation, digital governance, regulation, fairness, auditing, public trust
Decision-making | Decision-making, predictive analytics, human oversight, decision autonomy, administration, efficiency metrics, strategic planning, risk-based decisions, public services
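A boolean query combining the three scopes in Table 3 can be assembled programmatically. The exact database syntax the authors used is not specified, so the construction below is a plausible sketch with abbreviated term lists, not the study's actual search string.

```python
# Plausible reconstruction of the search string from the Table 3 scopes
# (term lists abbreviated; the query syntax itself is an assumption).
ai_terms = ["AI", "machine learning", "deep learning", "NLP", "explainable AI"]
governance_terms = ["governance", "public administration", "transparency",
                    "accountability", "regulation"]
decision_terms = ["decision-making", "predictive analytics", "human oversight"]

def or_group(terms):
    """Join one scope's terms into a parenthesized OR clause."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Terms within a scope are OR-ed; the three scopes are AND-ed together.
query = " AND ".join(or_group(g) for g in (ai_terms, governance_terms, decision_terms))
print(query)
```

AND-ing the scope groups is what restricts results to papers at the intersection of AI, governance, and decision-making, mirroring the three-column structure of Table 3.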
Table 4. Inclusion and exclusion criteria.

Inclusion Criteria:
IC1: Papers analyzing the integration of AI in governance decision-making processes, including case studies, frameworks, or empirical analyses of AI governance models (e.g., risk-based, self-regulation, hybrid).
IC2: Studies comparing AI-driven governance models to traditional decision-making approaches.
IC3: Research discussing ethical, operational, regulatory, or technical challenges, as well as governance strategies (e.g., AI audits, transparency laws, risk mitigation frameworks).
IC4: Studies evaluating AI’s impact on decision autonomy, transparency, accountability, and efficiency in governance.
IC5: Only the most recent and comprehensive version of duplicate works is included.

Exclusion Criteria:
EC1: Documents not written in English.
EC2: Unavailability of the full text.
EC3: Papers focusing solely on AI’s technical aspects (e.g., algorithmic performance, model optimization) without relevance to governance decision-making.
Table 5. Quality assessment criteria.

ID | Question | Answer & Score
QA1 | Does the study primarily focus on AI’s impact on governance decision-making structures (risk-based, centralized, hybrid), processes, or administrative frameworks? | Yes: +1; No: 0
QA2 | Does the study assess how AI enhances policymaking and/or operational decision-making in governance institutions? | Yes: +1; No: 0
QA3 | Does the study evaluate ethical, legal, regulatory challenges, AI auditing, and transparency laws in AI-driven governance? | Yes: +1; No: 0
QA4 | Does the study propose frameworks, models, or best practices (e.g., EU AI Act, US AI Bill of Rights) for AI integration in governance decision-making? | Yes: +1; No: 0
QA5 | Does the study discuss AI risks (e.g., algorithmic bias, decision opacity, accountability concerns) and propose mitigation strategies? | Yes: +1; No: 0; Partially: +0.5
QA6 | Has the study been published in a recognized source (high-impact journals/conferences)? | Conferences: A: +1.5; B: +1; C: +0.5; not classified: 0. Journals: Q1: +2; Q2: +1.5; Q3 or Q4: +1; not classified: 0
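The additive scoring implied by Table 5 can be sketched as a short function. The answer encoding and function name below are our own illustration, not tooling from the study; only the point values come from the table.

```python
# Illustrative scoring for the QA checklist in Table 5. The point values are
# taken from the table; the dictionaries and function are our own sketch.
QA_POINTS = {"yes": 1.0, "no": 0.0, "partially": 0.5}  # 'partially' applies to QA5 only
VENUE_POINTS = {"A": 1.5, "B": 1.0, "C": 0.5,          # conference ranks (QA6)
                "Q1": 2.0, "Q2": 1.5, "Q3": 1.0, "Q4": 1.0,
                "unclassified": 0.0}

def quality_score(answers, venue):
    """Total quality score: points for QA1-QA5 answers plus the QA6 venue bonus."""
    return sum(QA_POINTS[a] for a in answers.values()) + VENUE_POINTS[venue]

# A hypothetical Q1-journal paper meeting four of five content criteria:
score = quality_score(
    {"QA1": "yes", "QA2": "yes", "QA3": "no", "QA4": "yes", "QA5": "partially"},
    venue="Q1")
print(score)  # 5.5
```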
Table 6. Barriers and Enablers for AI Adoption in Public Governance.

Category | Barriers | Strategic Responses | References
Ethical | Algorithmic bias, opaque decisions, and fairness concerns | Ethical audits, explainable AI (XAI), fairness by design | [37,55,56,57]
Technical | Legacy systems, data quality issues, and lack of interoperability | Infrastructure upgrades, robust data governance, and standardized APIs | [18,36]
Institutional | Lack of training, siloed departments, resistance to change | Digital literacy programs, interdepartmental coordination, and agile teams | [52,58]
Transparency | Absence of audit trails, hidden algorithms | Algorithm registers, public dashboards, and participatory audit mechanisms | [53,59]
Social acceptance | Citizen mistrust, fear of surveillance, or job displacement | Co-design processes, citizen engagement, and clear communication strategies | [54,60]
Table 7. Stakeholder roles, priorities, and theoretical perspectives.

Stakeholder Group | Main Concerns | Theoretical Lens | References
Government officials | Efficiency, strategic alignment, and service delivery performance | DEG | [5,63]
Citizens and service users | Trust, fairness, privacy, transparency | TAM | [34,37,67]
Technical experts | System reliability, ease of use, and cybersecurity | TAM | [46,64]
Legal and academic experts | Ethical safeguards, the rule of law, and democratic accountability | DEG | [53,65]
Private sector actors | Innovation, market viability, and public–private compliance | TAM + DEG | [66,68]
Table 8. Comparative Mapping of AI Governance Assessment Models.

Dimension | AI Readiness Index | AI Maturity Models | AICM (Our Study)
Level of analysis | National (macro) | Organizational (mainly private) | Institutional (public sector, multi-level)
Approach | Benchmarking/scoring | Stage-based transformation | Capability-building and transformation roadmap
Actionability | Low | Moderate | High
Public governance orientation | Limited | Generic | Strong (contextualized and governance-specific)
Theoretical basis | None | Implicit/variable | TAM, DEG, DC
Adaptability to the Global South | Weak | Moderate | Strong
Table 9. Delphi consensus on the four pillars of the AI integration capability model.

AICM Pillar | Median | IQR | Consensus Level
Data access and interoperability | 5.0 | 0.5 | Strong
Digital infrastructure and process redesign | 5.0 | 0.5 | Strong
Workforce competencies and learning agility | 5.0 | 0.0 | Unanimous
Institutional leadership and change management | 5.0 | 0.0 | Unanimous
Table 10. Evolution of expert consensus across Delphi rounds.

AICM Pillar | Round 1 (Median/IQR) | Round 2 (Median/IQR) | Round 3 (Median/IQR)
P1 | 4.0/1.0 | 4.5/0.5 | 5.0/0.5
P2 | 4.0/1.0 | 4.5/0.5 | 5.0/0.5
P3 | 4.5/0.5 | 5.0/0.0 | 5.0/0.0
P4 | 4.0/1.0 | 5.0/0.5 | 5.0/0.0
Table 12. Strategic operational roadmap aligned with AICM pillars.

AICM Pillar | Objective | Recommended Actions
Pillar 1. Data infrastructure and interoperability | Create a reliable, machine-readable, standardized data infrastructure | Digitize paper records; create an interoperable national data platform; enforce metadata and data-sharing standards
Pillar 2. Digital infrastructure and process redesign | Redesign services to be AI-ready, integrated, and user-focused | Integrate existing systems via APIs; align new projects with privacy and security standards
Pillar 3. Workforce competencies | Train public servants for ethical and effective AI governance | Develop training in AI ethics, auditing, and explainability; deploy competency frameworks through universities
Pillar 4. Institutional leadership | Ensure strategic coordination and political ownership | Create a High Council for AI Governance; mandate sectoral AI roadmaps with evaluation and feedback mechanisms
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Aarab, A.; El Marzouki, A.; Boubker, O.; El Moutaqi, B. Integrating AI in Public Governance: A Systematic Review. Digital 2025, 5, 59. https://doi.org/10.3390/digital5040059
