1. Introduction
The rapid proliferation of artificial intelligence (AI) across public administration and critical infrastructure has triggered a paradigm shift in how digital services are governed. While AI offers unprecedented efficiency in data-driven decision-making, its reliance on large-scale, high-dimensional datasets introduces significant vulnerabilities that traditional governance frameworks are ill-equipped to handle. The core concern for researchers and policymakers alike is the “operationalization gap”—the disconnect between abstract ethical principles (such as fairness and transparency) and the technical controls (such as adversarial robustness and differential privacy) required to secure these systems. Without a unified governance approach, AI deployments risk compromising citizen privacy, institutional accountability, and the resilience of sustainable infrastructure.
The problem to be addressed in this study is the conceptual fragmentation of the current research landscape. The existing literature often treats AI ethics and AI cybersecurity as distinct silos, leading to frameworks that are either too normative for technical implementation or too technical for policy oversight. This study aims to bridge this gap by synthesizing the fragmented dimensions of AI governance into a single, policy-ready risk-tiering matrix. By doing so, the research aligns AI security with the mandates of Sustainable Development Goal 9 (SDG-9), promoting the development of resilient and inclusive digital infrastructure.
The main objective of this research is to evaluate the maturity of existing AI governance frameworks and to propose an integrated three-layer model that links principles to auditable technical evidence. To guide this inquiry, the following Research Questions (RQs) are addressed:
RQ1: What are the primary governance dimensions addressed in current AI literature, and how has their focus evolved between 2019 and 2025?
RQ2: To what extent do current frameworks provide technical operational controls for cybersecurity and privacy, as opposed to high-level ethical guidelines?
RQ3: How can AI governance be structured into a risk-tiering matrix that supports the sustainable engineering of public infrastructure (SDG-9)?
To address these questions, this study employed a systematic literature review (SLR) methodology following the PRISMA 2020 guidelines. The search strategy targeted five major databases (IEEE Xplore, ACM Digital Library, etc.) using strings that combined high-level governance terms with technical security keywords such as “adversarial ML” and “safety engineering”.
After screening the retrieved records and applying a two-stage backward and forward snowballing process, the final corpus comprised 95 high-quality primary studies. Each study was vetted against a five-point quality assessment (QA) rubric to ensure methodological rigor. To move beyond descriptive analysis, the study utilizes principal component analysis (PCA) and k-means clustering to quantitatively map the thematic clusters and identify latent gaps in the literature.
The remainder of this paper is structured as follows:
Section 2 provides an analytical synthesis of the literature, highlighting the shifts in risk perception and existing limitations.
Section 3 details the transparent search strategy, quality assessment procedures, and statistical methods used.
Section 4 presents the quantitative results, including the framework heat map.
Section 5 discusses the reconciliation of these findings with global standards, the link to SDG-9, and the proposed integrated governance matrix.
Finally, Section 6 concludes with a summary of contributions and future research directions.
2. Literature Review
2.1. Background
The rapid integration of artificial intelligence (AI) into public administration, critical infrastructure, and digital government systems has intensified scholarly attention toward the governance of intelligent technologies. As AI systems increasingly mediate public decision-making, automate regulatory processes, and manage sensitive data, concerns related to accountability, security, fairness, and sustainability have become central to both academic and policy debates. The existing literature agrees that effective AI governance requires more than regulatory compliance; it demands coordinated socio-technical mechanisms capable of aligning ethical principles with operational safeguards and institutional oversight [1].
2.1.1. AI Governance Concepts
AI governance refers to the institutional structures, policies, and processes that guide the design, deployment, and oversight of AI systems in accordance with societal values and legal obligations [2]. Early governance approaches largely focused on ethical principles such as fairness, transparency, and accountability, which have been widely endorsed by governments and international organizations.
However, research demonstrates that principle-based frameworks often lack concrete mechanisms for implementation and verification, resulting in what has been termed the operationalization gap—the disconnect between abstract ethical commitments and practical technical enforcement. As a result, recent scholarship conceptualizes AI governance as a socio-technical system integrating technical safeguards, organizational practices, regulatory instruments, and human oversight across the AI lifecycle [3].
This shift reflects growing recognition that sustainable AI governance requires adaptive, multi-layered frameworks capable of responding to heterogeneous applications and rapidly evolving technological environments, particularly within public-sector and infrastructure-intensive domains [4].
2.1.2. Cybersecurity Risks in AI Systems
AI systems introduce distinct cybersecurity vulnerabilities that extend beyond traditional information security models. Their dependence on large-scale data and complex learning architectures exposes them to threats such as adversarial manipulation, data poisoning, model inversion, and inference attacks [5]. These risks may compromise sensitive information, distort automated decision processes, and undermine institutional trust.
The integration of AI into critical infrastructure—including healthcare systems, smart transportation, and digital public services—amplifies the potential societal impact of security failures. Despite this, systematic reviews indicate that cybersecurity remains inconsistently addressed within AI governance frameworks, with many prioritizing regulatory compliance or ethical guidance while offering limited operational security controls.
Consequently, the recent literature emphasizes that cybersecurity must be treated as a foundational governance dimension, requiring continuous risk assessment, secure lifecycle management, and incident response capabilities to ensure system resilience and sustainability [6].
2.1.3. Governance Dimensions
To systematically evaluate the maturity of AI governance frameworks, this study adopts a five-dimension analytical lens derived from prior systematic reviews and governance surveys [7]. These dimensions reflect both normative values and technical requirements:
Privacy: encompassing lawful data processing, consent management, data minimization, and protection against re-identification risks [8].
Ethics: including fairness, transparency, human oversight, explainability, and alignment with societal values [7].
Accountability: referring to traceability, audit mechanisms, responsibility allocation, and legal liability across the AI lifecycle [9].
Security: addressing protection against adversarial attacks, system compromise, and operational failures [10].
Bias: focusing on the detection, mitigation, and monitoring of discriminatory or unfair algorithmic outcomes [11].
Empirical evidence demonstrates that these dimensions are not addressed uniformly. Privacy and ethics dominate most governance frameworks, while accountability, cybersecurity controls, and bias mitigation remain underdeveloped or absent. This imbalance has contributed to fragmented governance structures that emphasize values without delivering enforceable safeguards.
By applying this dimensional typology, the present study enables a structured comparison of AI governance frameworks, facilitating the identification of dominant patterns, latent gaps, and opportunities for integration. This conceptual foundation underpins the subsequent empirical analyses, including framework mapping, co-occurrence analysis, and PCA-based clustering.
2.2. AI Governance
Artificial intelligence (AI) governance seeks to align the development and deployment of intelligent systems with societal values and legal requirements [12]. Researchers distinguish between normative frameworks—which emphasize ethics principles such as fairness, transparency and accountability [13]—and compliance-oriented frameworks, which focus on risk management, regulatory adherence and technical standards. Core governance dimensions include privacy, ensuring personal data are collected and used lawfully [8]; ethics, encompassing fairness, non-discrimination and responsible innovation; accountability, which addresses traceability and responsibility for AI decisions; transparency, entailing explainability and auditability of models [14]; and security, covering robustness against cyber threats and adversarial attacks. Additional dimensions, such as bias mitigation, participatory governance, customization, integration, resilience, automation and cost, have gained attention as AI systems become more pervasive. The systematic reviews cited in the corpus collectively highlight these concepts and note persistent gaps between high-level ethics aspirations and the technical controls needed to realize them.
2.3. Systematic Reviews on AI Governance
A critical synthesis of the 15 review studies in Table 1 reveals a significant shift in the conceptualization of AI governance, yet highlights persistent gaps in operationalizing these concepts.
The literature exhibits a clear chronological transition. Some studies, such as [1,15], focus almost exclusively on high-level ethical principles (fairness, accountability) and legal implications. However, as AI integration matured, the risk focus shifted toward tangible technical threats. By 2024–2025, studies such as [16] began integrating “intellectual property risks” and “data security” as core governance pillars. This shift suggests a growing scholarly consensus that ethics alone—without technical verification and security alignment—is insufficient for AI oversight.
There is a notable conceptual disagreement between “human-centric” and “system-centric” governance. Reviews focusing on smart cities and public administration [11] prioritize “citizen discontent” and “trust deficits” as the primary risks. In contrast, technical surveys [10] view risks through the lens of data integrity and security verification. The critical gap identified in this synthesis is the lack of interdisciplinary frameworks that bridge the two: current frameworks address either the social perception of fairness [3] or the technical reality of data security, but rarely both.
A critical evaluation of prior methodologies reveals a reliance on “survey experiments” and “scoping reviews” that describe perceived risks rather than evaluating control effectiveness. For instance, while [17] provide comprehensive lists of responsible AI (RAI) practices, they offer little critical engagement with the conflicts between these practices—such as how increasing “transparency”, as advocated in [18], can inadvertently widen “cybersecurity vulnerabilities” by exposing model architecture to adversarial attacks.
Despite the increase in “trustworthy AI” research [19], there remains a lack of “policy-ready” mechanisms. Most studies conclude with a “research agenda” [20] rather than a validated matrix. Very few reviews integrate AI governance with cybersecurity frameworks; for example, none of the studies propose mapping high-level principles to controls (such as adversarial testing, incident management, or red teaming). Moreover, many studies adopt normative perspectives and lack empirical validation, highlighting the need for more practice-oriented research [16]. This study addresses this gap by synthesizing these fragmented areas—ethics, privacy, and security—into an integrated risk-tiering matrix and framework.
Table 1.
Comparative analysis of 15 systematic reviews.
| Ref | Paper Title | Year | Primary Focus | Cybersecurity Relevance |
|---|---|---|---|---|
| [1] | Applying the ethics of AI: A systematic review of tools for developing and assessing AI-based systems | 2019 | Ethics & tools | Relies on high-level ethical guidelines without testing technical feasibility. |
| [15] | The governance of artificial intelligence in Canada: Findings and opportunities from a review of 84 AI governance initiatives | 2021 | Legal initiatives | Focuses on policy documentation; lacks technical security verification for the initiatives reviewed. |
| [21] | AI and the quest for diversity and inclusion: A systematic literature review | 2021 | Bias & social | Addresses social outcomes of bias but overlooks technical adversarial risks that can trigger biased decisions. |
| [13] | A survey of instruments and institutions available for the global governance of artificial intelligence | 2021 | Global regulatory gaps | Lists institutional bodies but does not analyze the lack of cross-border enforcement for security standards. |
| [18] | Insights into suggested responsible AI (RAI) practices in real-world settings: A systematic literature review | 2022 | Responsible AI (RAI) | Advocates for transparency without addressing how it can compromise model security via inversion attacks. |
| [22] | Research agenda for using artificial intelligence in health governance: Interpretive scoping review and framework | 2022 | Health sector ethics | Ethics frameworks are specific to medical use cases and do not integrate general cybersecurity-robust standards. |
| [23] | Digital transformation toward AI-augmented public administration: The perception of government employees and the willingness to use AI in government | 2022 | Employee perceptions | Measures “willingness to use” rather than the objective safety and robustness of the government systems. |
| [24] | Smart cities & citizen discontent: A systematic review of the literature | 2023 | Citizen participation | Prioritizes social feedback while neglecting the technical infrastructure needed to secure participatory platforms. |
| [20] | AI adoption and diffusion in public administration: A systematic literature review and future research agenda | 2023 | Adoption & privacy | Fails to provide a clear technical roadmap for linking privacy-by-design with operational security controls. |
| [17] | Ethics-based AI auditing: A systematic literature review on conceptualizations of ethical principles and knowledge contributions to stakeholders | 2024 | Auditing principles | Focuses on the conceptualization of principles rather than the verifiable evidence required for a successful audit. |
| [19] | An overview of trustworthy AI: Advances in IP protection, privacy-preserving federated learning, security verification, and GAI safety alignment | 2024 | IP & safety alignment | While technical, it treats IP and safety as separate issues without a unified governance risk-tiering matrix. |
| [9] | Barriers to artificial intelligence adoption in smart cities: A systematic literature review and research agenda | 2023 | Accountability | Identifies “accountability” as a barrier but does not define the technical audit trails needed to solve it. |
| [3] | AI: Friend or foe of fairness perceptions of the tax administration? A survey experiment on citizens’ procedural fairness perceptions | 2025 | Fairness perceptions | Uses survey experiments, which capture public perception rather than technical model fairness. |
| [10] | Big data governance challenges arising from data generated by intelligent systems technologies: A systematic literature review | 2025 | Data security | Addresses data security but lacks the “principles-to-controls” pipeline found in normative ethics reviews. |
| [11] | Unveiling civil servants’ preferences: Human–machine matching vs. regulating algorithms in algorithmic decision-making–Insights from a survey experiment | 2025 | Trust deficit | Analyzes trust as a psychological preference rather than a result of verifiable technical performance. |
3. Research Methodology
This study employs a systematic literature review (SLR) designed to identify, evaluate, and synthesize the existing studies on AI governance risks and their intersection with cybersecurity. Following the PRISMA 2020 guidelines (Figure 1) and the Kitchenham protocol, the methodology is structured to ensure transparency, objectivity, and reproducibility. The review protocol was registered in PROSPERO (CRD420261286510).
3.1. Nature and Characteristics of the Study
Unlike development-focused work, this research is characterized by its systematic and analytical approach to existing knowledge. It is a descriptive and exploratory study: it describes the current landscape of AI governance and explores the persistent gaps between theoretical principles and operational security controls. The study adopts a positivist and systematic paradigm, relying on reproducible search strings, explicit inclusion/exclusion criteria, and standardized quality assessments to minimize researcher bias.
Positioned at the synthesis level of the research pyramid, this SLR aggregates primary journal studies to provide a high-level synthesis of the field. The primary purpose is basic, theoretical research aimed at building a conceptual bridge between the fragmented domains of AI ethics and technical cybersecurity. The study uses inductive reasoning, moving from the specific findings of 95 individual studies to broader generalizations regarding global governance trends and thematic clusters.
The research utilizes secondary data sources, specifically peer-reviewed journal articles indexed in major digital libraries such as IEEE Xplore, ScienceDirect, and ACM. It covers the period from January 2020 to June 2025 to capture the rapid evolution of the field following the rise of generative AI and new regulations. The design is non-experimental and observational, focusing on the thematic and statistical analysis of a finalized corpus of 95 “good”-rated studies.
3.2. Data Collection and Analysis Techniques
The collection phase utilized Boolean queries (e.g., “cybersecurity” AND “AI” AND “governance”) to identify 2918 records, which were subsequently filtered through title, abstract, and full-text screening. For data analysis, the following techniques were employed.
Thematic Analysis: A 4W1H approach (Who, What, Where, When, How) was used to code content attributes such as governance solutions and challenges. To ensure consistency in framework evaluation, coverage levels were classified using a predefined coding scheme. A “high” level indicated explicit and operational governance mechanisms, “medium” referred to partial or principle-level coverage, and “low/absent” denoted minimal or no substantive treatment. All frameworks were coded iteratively, and ambiguous cases were re-examined until agreement was reached, ensuring consistent classification across the 95 frameworks.
Quantitative Synthesis: Publication trends were tracked by year to illustrate research acceleration.
Relational Mapping: A co-occurrence network analysis was performed to visualize the linkages between legal, technical, and ethical themes.
Statistical Clustering: Principal component analysis (PCA) and k-means clustering were used to group frameworks based on their coverage profiles.
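To make the coding scheme concrete, the sketch below shows one way the high/medium/low coverage levels could be encoded as ordinal values before feeding them to PCA and k-means. The framework names, dimension subset, and numeric mapping are illustrative assumptions, not the paper's actual dataset:

```python
# Hypothetical sketch: encode framework coverage levels (high/medium/low)
# as ordinal values suitable for PCA and k-means. All names are illustrative.
LEVELS = {"high": 2, "medium": 1, "low": 0}

frameworks = {
    "Framework A": {"privacy": "high", "ethics": "high", "security": "low"},
    "Framework B": {"privacy": "medium", "ethics": "low", "security": "high"},
}

def encode(coverage: dict) -> list:
    """Return an ordinal vector in a fixed dimension order."""
    dims = ["privacy", "ethics", "security"]
    return [LEVELS[coverage[d]] for d in dims]

# Rows = frameworks, columns = governance dimensions.
matrix = [encode(c) for c in frameworks.values()]
# matrix -> [[2, 2, 0], [1, 0, 2]]
```

A fixed dimension order matters here: PCA and k-means both assume every row vector indexes the same features in the same positions.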
To strengthen reproducibility, screening followed a fixed workflow aligned with PRISMA 2020. After de-duplication, records underwent title/abstract screening using the predefined inclusion/exclusion criteria (topic relevance to AI governance risks and cybersecurity; peer-reviewed; 2020–June 2025). Potentially relevant papers proceeded to full-text assessment, where eligibility was confirmed based on (i) explicit governance relevance and (ii) substantive treatment of security/privacy/robustness/bias risks (including studies that use alternative terminology such as “adversarial robustness” or “AI safety engineering”). Backward and forward snowballing was then applied to the full-text eligible set to capture additional governance-relevant security studies that may not be retrieved by keyword search; all snowballed candidates were de-duplicated and re-screened using the same criteria. Finally, all included studies were quality-appraised using the five-item QA rubric (Section 3.3), retaining only studies with QA ≥ 3.5, which yielded a final corpus of 95 studies. As shown in Figure 1 (PRISMA flow), the search identified 2918 records; after duplicate removal and preliminary filtering, 817 records were screened; 259 reports underwent full-text eligibility assessment; and 95 studies met the inclusion and QA criteria and were included in the final synthesis.
3.3. Quality Assessment (QA) Procedures
To address concerns regarding dataset integrity, each of the 95 studies was evaluated against five quality assessment (QA) criteria. Each study was scored on a scale of 0 to 1 (1 = Yes, 0.5 = Partially, 0 = No):
QA1: Is the research goal or AI governance framework clearly defined?
QA2: Does the study provide a detailed methodology for its conclusions?
QA3: Are the AI risks (cybersecurity, privacy, ethics) explicitly categorized?
QA4: Is there a discussion of the practical application of the proposed framework?
QA5: Is the study peer-reviewed and published in a high-impact venue?
Only studies with a total score of ≥3.5 were included in the final synthesis. The individual scores for all 95 papers are provided in the Supplementary Materials.
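The QA filtering step described above reduces to a simple sum-and-threshold rule. The sketch below illustrates it with invented scores (the real per-paper scores are in the Supplementary Materials):

```python
# Sketch of the QA filtering rule: five criteria scored 1 (Yes), 0.5
# (Partially), or 0 (No); studies retained only when the total is >= 3.5.
# The study names and scores below are illustrative, not the paper's data.
THRESHOLD = 3.5

studies = {
    "Study 1": [1, 1, 1, 0.5, 1],        # total 4.5 -> retained
    "Study 2": [1, 0.5, 0.5, 0.5, 0.5],  # total 3.0 -> excluded
}

retained = [name for name, qa in studies.items() if sum(qa) >= THRESHOLD]
# retained -> ["Study 1"]
```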
4. Results
4.1. Publications by Year
The first analysis explores the acceleration in AI governance research since 2020.
Figure 2 shows the number of studies per year. After modest activity in 2020 and 2021, publications grew steeply in 2023 and peaked in 2024 with over 30 studies, reflecting the increasing urgency of AI governance and associated regulation.
4.2. Frequency and Distribution of Governance Dimensions
Figure 3 ranks the governance areas most frequently addressed in the literature. Accountability/transparency and ethics dominate, followed by legislation/regulation, trust, privacy, bias and compliance. Rarely addressed areas comprise citizen participation, regulatory gaps, smart city governance and stakeholder engagement [6]. This distribution confirms that normative values are prioritized over participatory and operational governance.
4.3. Structural Maturity and Technical Coverage of Existing Frameworks
The AI governance literature comprises diverse frameworks that seek to operationalize principles such as transparency, accountability, privacy, fairness, ethics and security. Some frameworks are compliance-driven, embedding regulatory requirements (e.g., the General Data Protection Regulation [GDPR], the European Union Artificial Intelligence Act [EU AI Act]), whereas others are ethics-driven, emphasizing values such as justice, human rights and societal well-being. To map the coverage of these frameworks across governance dimensions, we compiled frameworks listed in the AI frameworks comparison dataset and encoded coverage at three ordinal levels (high, medium, low/absent).
Figure 4 visualizes this matrix as a heat map of AI governance frameworks versus governance dimensions. It shows clear heterogeneity: privacy and ethics tend to appear with high or medium coverage across most frameworks, while operational dimensions—such as bias mitigation, security controls and accountability mechanisms—are far less consistently treated and frequently fall into the low/absent band.
While AI governance frameworks frequently cite privacy as a normative principle, they often lack the technical specificity required to mitigate re-identification risks in high-dimensional datasets. To move from abstract commitments to measurable safeguards, operational controls must address risks such as linkage and inference attacks. Differential privacy provides a mathematically rigorous framework for minimizing the risk of identifying individuals within large datasets by adding controlled noise to queries [74]. Furthermore, the application of Iterative Local Search techniques offers a robust method for preserving data privacy through optimized data transformation [75]. By embedding these specific techniques—including k-anonymization, suppression, and generalization—within the governance lifecycle, organizations can ensure that privacy protection is not merely a policy statement but a practical, auditable technical reality.
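As one concrete illustration of the differential-privacy control discussed above, the sketch below implements the standard Laplace mechanism for a counting query. The dataset, predicate, and privacy budget are illustrative assumptions, not values from the reviewed frameworks:

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two independent exponential draws with mean `scale`
    # follows a Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count via the Laplace mechanism.

    A counting query changes by at most 1 when one individual is added or
    removed (sensitivity 1), so noise is drawn from Laplace(0, 1/epsilon).
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative query: how many records exceed 40, with privacy budget 0.5.
ages = [23, 45, 67, 34, 52]
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5)
```

Smaller epsilon values give stronger privacy but noisier answers; the "privacy budget" mentioned in the evidence layer of Section 6 is the cumulative epsilon spent across all queries.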
4.4. Frameworks × Governance Dimensions
Figure 5 provides a more granular comparison of framework coverage. The revised heat map shows clear variation: 42 frameworks address both privacy and ethics; 30 focus only on privacy; 14 cover only ethics; and 10 neglect both privacy and ethics entirely. While many frameworks reference accountability, security, and bias at a conceptual or principle level, comparatively fewer provide operational mechanisms, measurable controls, or implementation-oriented procedures for these dimensions. In contrast, privacy and ethics are more consistently articulated across frameworks, producing an imbalance in depth of coverage rather than complete absence. This distribution indicates that contemporary AI governance frameworks overwhelmingly prioritize ethics and privacy considerations while offering far shallower treatment of accountability, robust security, and bias mitigation.
Figure 5 therefore underscores a pronounced gap between high-level normative values and the concrete technical controls needed to ensure accountable, secure, and unbiased AI systems.
4.5. Governance Co-Occurrence Network
Figure 6 visualizes how governance themes co-occur across the dataset of high-quality studies. In the co-occurrence network, edges were weighted based on the frequency of joint appearance of governance dimensions within the same framework. A minimum occurrence threshold was applied to reduce visual noise and highlight stable relational patterns. Sensitivity checks were conducted by varying the threshold level, confirming that the dominant clusters and central relationships remained consistent, with only minor variations in peripheral links. Each node represents a governance area (e.g., privacy, security, ethics, citizen participation), with edges connecting areas that appear together in at least two papers, with edge thickness increasing in proportion to the frequency of co-occurrence.
Analysis of the network reveals four distinct clusters. The first cluster, the privacy–security–regulation cluster, shows that many papers examine data protection, technical security measures and regulatory compliance in concert. The second cluster, an ethics–transparency–accountability cluster, indicates a separate body of literature that links normative principles. The third cluster, the participation cluster, comprises citizen participation, inclusion and stakeholder engagement; its peripheral position in the network underscores the limited integration of participatory issues into mainstream governance discussions. Finally, the fourth cluster, the smart governance/Internet of Things (IoT) cluster, features strong links between “IoT applications”, “data governance AI”, “smart governance”, “smart city governance”, “policy development”, and “governance responsibilities,” suggesting that smart city and IoT topics are treated as a distinct thematic domain.
Overall, the co-occurrence network highlights a divide between technical/legal themes and ethics/normative themes, with participatory and smart city concerns forming separate, less-connected groups.
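The edge-weighting and thresholding step described above can be sketched with a pair counter. The papers, dimension labels, and threshold value below are illustrative, not the study's actual corpus:

```python
from collections import Counter
from itertools import combinations

# Sketch of co-occurrence edge construction: count how often two governance
# dimensions appear together in the same paper, then keep edges whose weight
# meets a minimum threshold (2, matching the text). Data is illustrative.
papers = [
    {"privacy", "security", "regulation"},
    {"privacy", "security"},
    {"ethics", "transparency", "accountability"},
    {"ethics", "transparency"},
    {"citizen participation"},
]

edge_weights = Counter()
for dims in papers:
    # Sorting makes each unordered pair a canonical key.
    for a, b in combinations(sorted(dims), 2):
        edge_weights[(a, b)] += 1

MIN_WEIGHT = 2  # co-occurrence threshold from the text
edges = {pair: w for pair, w in edge_weights.items() if w >= MIN_WEIGHT}
# edges -> {('privacy', 'security'): 2, ('ethics', 'transparency'): 2}
```

Raising `MIN_WEIGHT` is exactly the sensitivity check described above: stable clusters keep their heavy edges while peripheral links drop out.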
4.6. Framework Clusters by Governance-Dimension Profiles
Figure 7 maps the AI governance frameworks from our dataset onto two principal components derived from their governance-dimension coverage profiles. Principal component analysis (PCA) compresses the original dimensions into two orthogonal axes, PC1 and PC2. The first, PC1, captures the largest variation and is primarily driven by cybersecurity and accountability (technical/assurance emphasis). PC2 captures the next-largest variation and is dominated by ethics and privacy (human/social emphasis).
After projecting the frameworks onto these axes, using k-means clustering, they are grouped into four families:
Privacy/ethics-centric frameworks that address privacy and ethics but show limited coverage of accountability, security, and bias.
Risk/compliance-centric frameworks that focus on regulatory compliance and risk management, with limited ethics or participatory content.
Trust/participation-centric frameworks that mention trust or participation but have sparse coverage of the other dimensions.
A hybrid cluster in which frameworks distribute their focus more evenly across dimensions.
Most frameworks belong to the first two clusters, indicating that the governance landscape is fragmented—either ethics/privacy-driven or compliance/risk-driven—while truly integrated frameworks are scarce.
To move beyond visualization, we evaluated the statistical significance of the PCA. To ensure transparent statistical reporting, we report the variance explained by the retained components, whereas the full loading matrix used to interpret the PCA axes is presented in the Supplementary Materials. Specifically, PC1 and PC2 together explain 72.4% of the total variance (PC1: 48.1%; PC2: 24.3%), and Table 2 reports the corresponding loadings for each governance dimension. As shown in Table 2, PC1 is primarily driven by cybersecurity and accountability (loadings > 0.75), whereas PC2 is dominated by ethics and privacy. These statistics provide the empirical basis for interpreting Figure 7 and for applying k-means clustering as an exploratory grouping of coverage-profile families, supporting interpretation of the dataset along a technical-risk axis versus a human-ethics axis.
In addition, we applied k-means clustering to the PCA-reduced representation (k = 4) to group frameworks by similarity in their governance-dimension coverage profiles. This clustering was used as an exploratory synthesis tool to surface interpretable family-level patterns in Figure 7 rather than for inferential prediction.
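The projection step behind this analysis can be reproduced with a short, numpy-only sketch. The coverage matrix below is synthetic (five invented frameworks scored on five dimensions), so the resulting variance ratios will not match the paper's reported 48.1%/24.3%; it only demonstrates the mechanics of centering, decomposing, and projecting:

```python
import numpy as np

# Synthetic coverage matrix: rows = frameworks, columns = ordinal scores
# (high=2, medium=1, low=0) for five governance dimensions. Illustrative only.
X = np.array([
    [2, 2, 0, 0, 1],   # privacy/ethics-centric profile
    [2, 1, 0, 0, 0],
    [0, 0, 2, 2, 1],   # risk/compliance-centric profile
    [1, 0, 2, 2, 2],
    [1, 1, 1, 1, 1],   # hybrid profile
], dtype=float)

# Center the data, then decompose; squared singular values are proportional
# to the variance captured by each principal component.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = S**2 / np.sum(S**2)   # variance ratio per component
scores = Xc @ Vt[:2].T            # framework coordinates on PC1/PC2 (Figure 7-style)
loadings = Vt[:2]                 # how each dimension drives PC1 and PC2

# k-means (k = 4) would then run on `scores` to recover the framework families.
```

The `loadings` rows play the role of Table 2: a dimension with a large absolute loading on PC1 is one that drives the technical-risk axis in this synthetic example.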
4.7. Synthesis of Systematic Reviews
Figure 8 presents the full AI frameworks comparison matrix derived from our dataset. Each row corresponds to an AI governance framework, and each column reflects one of the nine governance dimensions captured in the dataset (privacy, performance, precision, ethics, customization, integration, resilience, automation, cost). In operational terms, “high” denotes explicit, prescriptive and measurable guidance (often aligned with recognized standards); “medium” indicates partial or principle-level treatment without full operationalization; and “low/limited” reflects only cursory mention without concrete controls.
5. Discussion
The latest analysis of our full corpus reveals that AI governance research now encompasses 95 frameworks, illustrating both growth and diversification beyond the 15 identified in our systematic literature review (SLR). Privacy and ethics remain the most mature dimensions: 74.7% (71 of 95) of frameworks deliver explicit, prescriptive privacy guidance, while 57.9% (55 of 95) offer high-level ethics coverage. In contrast, control-oriented dimensions are far less developed. High coverage of performance and precision is limited to roughly one-quarter and one-tenth of frameworks, respectively, and customization, automation, resilience and cost are addressed mostly at medium or low levels, as shown in
Figure 8. For example, more than half of the frameworks provide low or no guidance on customization and cost, and the majority treat resilience only at the principle level. The co-occurrence network also shows that participatory and smart city themes remain marginal, isolated from mainstream governance discussions [
24]. Together, these patterns point to a fractured governance landscape: normative values, such as privacy and ethics, dominate, while accountability, security, bias mitigation and operational considerations remain largely underdeveloped. Bridging this gap requires integrated frameworks that combine ethical and legal principles with technical controls and participatory mechanisms so that AI governance becomes both principled and actionable.
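The headline coverage percentages can be checked directly from the counts reported above; only the arithmetic is computed here, the counts themselves come from the corpus analysis.

```python
# Headline coverage statistics for the 95-framework corpus. The counts
# (71 frameworks with explicit privacy guidance, 55 with high-level
# ethics coverage) are those reported in the text.
n_frameworks = 95
high_privacy = 71
high_ethics = 55

privacy_pct = 100 * high_privacy / n_frameworks
ethics_pct = 100 * high_ethics / n_frameworks
print(f"privacy: {privacy_pct:.1f}%  ethics: {ethics_pct:.1f}%")
# privacy: 74.7%  ethics: 57.9%
```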
Table 3 summarizes framework coverage levels across governance dimensions. In addition to the gap between ethical values and technical controls, there is also a lack of connection between governance frameworks and sustainable engineering applications. Digital twins, structural health monitoring, and energy management in smart buildings all depend on trusted governance models to ensure infrastructure sustainability. The proposed risk framework can guide engineers to integrate privacy and security controls into these applications to achieve energy efficiency and system resilience.
6. Proposed Governance Risk Framework
To close the gap between ethics and practice, we distilled insights from the 95 AI governance frameworks into a concise governance risk-tiering matrix. The matrix cross-tabulates five risk domains—data privacy and protection; algorithmic bias and fairness; transparency and explainability; operational security; and regulatory compliance—against four severity levels. Each cell is color-coded (green, yellow, orange, red) and linked to policy-ready actions, ranging from basic safeguards (such as minimizing data retention, implementing role-based access control, and maintaining decision logs) to stringent interventions (including full data protection impact assessments (DPIA), human-in-the-loop approvals, and regulator notifications). Embedding this matrix within a three-layer governance model—principles, controls, and evidence—translates high-level values (fairness, privacy, accountability, transparency, and participation) into concrete technical and organizational controls and monitoring mechanisms. For privacy, the controls layer includes practical safeguards such as re-identification risk assessment and privacy-preserving techniques (e.g., differential privacy and anonymization). The evidence layer is demonstrated through auditable documentation, including DPIA records, reported re-identification risk measures, and—when differential privacy is applied—documented privacy parameters (privacy budget) that support verification and compliance. Applying this framework in engineering sectors—such as civil engineering, structural health monitoring, and renewable energy systems—provides a practical pathway to achieve efficiency and resilience goals. For example, the risk classification matrix can be used to determine the required control levels in smart buildings or smart grids to ensure AI-driven decisions support sustainability rather than hinder it. 
This integrated approach responds to calls in the literature for frameworks that combine normative principles with technical assurances and participatory oversight [
63].
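As one concrete instance of the privacy-preserving techniques named in the controls layer, a differentially private release of a count statistic via the Laplace mechanism might look like the sketch below. The function name and the choice of a counting query (sensitivity 1) are illustrative assumptions, not the paper's prescribed implementation.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float, rng=None) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon
    suffices. epsilon is the privacy budget that the evidence layer
    would document for audit and compliance purposes."""
    if rng is None:
        rng = np.random.default_rng()
    return float(true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon))

# Smaller epsilon means stronger privacy and a noisier released value.
rng = np.random.default_rng(42)
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: released count = {laplace_count(1000, eps, rng):.1f}")
```

Recording the epsilon used for each release is exactly the kind of auditable privacy-parameter documentation the evidence layer calls for.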
To bridge the gap between high-level ethical principles and technical implementation, we propose a governance risk-tiering matrix (
Figure 9). This matrix categorizes AI risks into five key domains: data privacy, algorithmic bias, transparency, operational security, and regulatory compliance. By mapping these domains against four levels of severity—low, medium, high, and critical—the matrix provides a structured approach for decision-makers. For instance, high-risk deployments involving sensitive personal data are mapped to stringent requirements such as mandatory data protection impact assessments (DPIA) and human-in-the-loop oversight, whereas lower-risk applications focus on basic role-based access controls and routine audits.
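A minimal encoding of this tiering logic could look like the following; the per-tier actions are assumed for demonstration, and the authoritative cell contents are those of the matrix itself.

```python
# Illustrative encoding of the risk-tiering matrix: five risk domains
# crossed with four severity levels. The actions per tier are assumed
# for demonstration purposes only.
TIER_ACTIONS = {
    "low":      ["role-based access control", "routine audits"],
    "medium":   ["decision logging", "data-retention minimization"],
    "high":     ["DPIA", "human-in-the-loop approval"],
    "critical": ["DPIA", "human-in-the-loop approval", "regulator notification"],
}

RISK_DOMAINS = {"data privacy", "algorithmic bias", "transparency",
                "operational security", "regulatory compliance"}

def required_controls(domain: str, severity: str) -> list:
    """Return the policy-ready actions for one cell of the matrix."""
    if domain not in RISK_DOMAINS:
        raise ValueError(f"unknown risk domain: {domain}")
    if severity not in TIER_ACTIONS:
        raise ValueError(f"unknown severity level: {severity}")
    return TIER_ACTIONS[severity]

print(required_controls("data privacy", "high"))
# ['DPIA', 'human-in-the-loop approval']
```

Keeping the domain and severity vocabularies explicit, as here, is what allows risk categorizations to be applied consistently across deployments.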
The synthesized findings are further operationalized in the AI Governance Implementation Roadmap (
Figure 10). This figure illustrates the transition from normative values to technical controls across three distinct layers: principles, controls, and evidence. By following this sequential flow, organizations can ensure that transparency and accountability are not just theoretical goals but are supported by verifiable evidence and continuous monitoring. This roadmap serves as a practical guide for integrating the cybersecurity frameworks identified in this review into existing institutional workflows.
Figure 11 maps the interlocking challenges identified in our systematic literature review (SLR). At its center is a single “risk & responsibility” node, signaling that sound governance must coordinate risk management and accountability across multiple domains. Five color-coded branches radiate from this core, grouping co-occurring topics into thematic clusters. The “data & privacy” branch (dark blue) comprises concerns over data utility, privacy protection and data governance, echoing frameworks that advocate data minimization and lawful processing [
25]. The “ethics principles” branch (green) clusters fairness, algorithmic accountability and explainability, reflecting studies on bias metrics and transparency mechanisms. The “security & resilience” branch (orange/red) captures cyber threats, adversarial robustness and incident response, emphasizing the need for continuous security testing. The “participation & inclusion” branch (yellow) groups issues such as stakeholder engagement, citizen consent and inclusive design; these themes underscore the importance of participatory governance and are discussed in works such as [
21]. Finally, the “technology & smart governance” branch (light blue) gathers together topics on AI adoption, IoT applications, smart cities and policy development, linking technological innovation to governance responsibilities [
30,
61]. By clustering challenges in this way,
Figure 11 shows that data privacy, ethics, security, participation and technology concerns are intertwined, and that comprehensive AI governance must address their interdependencies.
7. Limitations
A potential limitation of this study is the high concentration of literature from a single source, Government Information Quarterly (GIQ), which accounts for 37 of the 95 analyzed references (39%). While this concentration reflects the journal’s status as a “thematic anchor” for digital government research, it could narrow the diversity of perspectives. However, a sensitivity check comparing the GIQ subset against the remaining 58 references from other journals showed no significant thematic divergence. This suggests that the results represent a broad academic consensus rather than a specific editorial bias, though future research would benefit from a wider interdisciplinary sample to further validate these findings.
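A sensitivity check of this kind can be illustrated with a chi-square test of independence on a 2 × 2 contingency table; the counts below are hypothetical stand-ins, not the study's actual subset data.

```python
import numpy as np

# Hypothetical 2 x 2 contingency table: rate of "high" privacy coverage
# in the GIQ subset (37 references) versus the remaining 58 references.
# These counts are illustrative only.
giq = [28, 9]     # [high coverage, not high] among GIQ references
rest = [43, 15]   # same split among the remaining references

table = np.array([giq, rest], dtype=float)
row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
expected = row @ col / table.sum()          # expected counts under independence
chi2 = ((table - expected) ** 2 / expected).sum()

# Critical value for df = 1 at alpha = 0.05 is 3.841.
print(f"chi2 = {chi2:.3f}; divergent at 5% level: {chi2 > 3.841}")
```

A statistic below the critical value, as with these illustrative counts, is the pattern consistent with the "no significant thematic divergence" finding reported above.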
8. Gaps and Future Work
Our analysis of existing reviews shows that five structural gaps persist and warrant a focused research agenda. First, accountability, operational security, and bias mitigation are consistently under-specified relative to privacy and ethics; most frameworks do not operationalize audit trails, threat modeling, red teaming, or bias testing as first-class controls. Second, implementation guidance is lacking: many frameworks remain normative, offering principles without prescriptive procedures, metrics, or assurance evidence. Third, there is limited integration with established cybersecurity standards (e.g., ISO/IEC 27001 [
96], NIST CSF [
97], NIST AI RMF [
97]), leaving adversarial risks—including data poisoning and evasion—outside the governance control set. Fourth, participatory governance is peripheral: citizen participation, stakeholder engagement, and smart-city-specific constraints receive sporadic treatment, weakening democratic legitimacy and sociotechnical fit. Fifth, the principles–controls gap persists: even where high-level values are articulated (privacy, ethics), they are rarely mapped to verifiable technical and organizational measures (e.g., role-based logging, DPIA/TRA triggers, incident response, model cards with bias/capability limits). Future work should therefore (i) specify testable accountability and bias controls with audit artifacts; (ii) codify end-to-end implementation playbooks that tie principles to metrics and evidence; (iii) align AI governance with cybersecurity frameworks via shared control catalogs and adversarial evaluation protocols; (iv) embed participatory mechanisms and domain-specific governance (notably for smart cities) into baseline requirements; and (v) formalize a principles → controls → evidence pipeline to close the assurance loop across high-risk deployments.
9. Conclusions
This systematic literature review synthesized 95 studies to evaluate the current landscape of AI governance risks and cybersecurity frameworks. Our findings confirm a significant “operationalization gap” in which high-level ethical principles (e.g., fairness and transparency) lack the technical controls necessary for secure, real-world deployment. While privacy remains the most mature dimension, technical robustness and adversarial resilience are critically under-represented in existing frameworks.
The study contributes to the field by introducing an integrated three-layer governance model and a risk-tiering matrix (
Figure 9). These tools provide a structured path for organizations to move from normative values to verifiable evidence. Practically, the results serve as a roadmap for policymakers to integrate cybersecurity standards into AI regulatory compliance.
While the proposed governance risk-tiering matrix and three-layer framework have not yet been subjected to longitudinal empirical testing, their practical applicability is justified through a deductive synthesis of the existing literature. By integrating findings from 95 peer-reviewed frameworks, this research bridges the gap between high-level ethics and technical reality. The three-layer framework follows a logical institutional flow—moving from abstract principles to concrete controls and auditable evidence—making it compatible with existing organizational structures without requiring a total redesign of governance workflows. Furthermore, the risk-tiering matrix provides a standardized heuristic for decision-makers to categorize risks consistently and allocate resources efficiently. While longitudinal empirical studies are the logical next step, these models offer an immediate, evidence-based roadmap for organizations seeking to operationalize AI governance in a systematic and auditable manner.
From a sustainable engineering perspective, the proposed governance model supports the objectives of Sustainable Development Goal 9 (SDG-9) by strengthening the resilience, security, and long-term reliability of digital infrastructure. By integrating cybersecurity controls, accountability mechanisms, and risk-tiered governance across the AI lifecycle, the framework enables infrastructure operators and public institutions to anticipate vulnerabilities, reduce systemic failures, and ensure continuity of critical services. In this way, AI governance is positioned not merely as a regulatory requirement but as an enabling mechanism for sustainable, secure, and innovation-driven digital infrastructure development.
Future research should prioritize participatory governance by incorporating citizen perspectives and focus on the development of automated auditing tools capable of monitoring AI security in real time.
Author Contributions
Conceptualization, O.A. and A.A.; methodology, O.A. and A.A.; investigation, O.A.; data curation, O.A.; formal analysis, O.A.; validation, O.A. and A.A.; visualization, O.A.; writing—original draft preparation, O.A.; writing—review and editing, A.A.; supervision, A.A.; project administration, A.A. All authors have read and agreed to the published version of the manuscript.
Funding
The APC was funded by Qassim University (QU-APC-2026).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data analyzed in this study are provided in the
Supplementary Materials, including the study characteristics dataset. Additional supporting information is available in the references cited in this article.
Acknowledgments
The researchers would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2026).
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Ortega-Bolaños, R.; Bernal-Salcedo, J.; Ortiz, M.G.; Sarmiento, J.G.; Ruz, G.A.; Tabares-Soto, R. Applying the ethics of AI: A systematic review of tools for developing and assessing AI-based systems. Artif. Intell. Rev. 2024, 57, 110. [Google Scholar] [CrossRef]
- Wirtz, B.W.; Weyerer, J.C.; Kehl, I. Governance of artificial intelligence: A risk and guideline-based integrative framework. Gov. Inf. Q. 2022, 39, 101685. [Google Scholar] [CrossRef]
- Decuypere, A.; Van de Vijver, A. AI: Friend or foe of fairness perceptions of the tax administration? A survey experiment on citizens’ procedural fairness perceptions. Gov. Inf. Q. 2025, 42, 102002. [Google Scholar] [CrossRef]
- Alshahrani, A.; Dennehy, D.; Mäntymäki, M. An attention-based view of AI assimilation in public sector organizations: The case of Saudi Arabia. Gov. Inf. Q. 2022, 39, 101617. [Google Scholar] [CrossRef]
- Hamon, R.; Junklewitz, H.; Garrido, J.S.; Sanchez, I. Three challenges to secure AI systems in the context of AI regulations. IEEE Access 2024, 12, 61022–61035. [Google Scholar] [CrossRef]
- Sharma, S.; Kar, A.K.; Gupta, M.P. Untangling the web between digital citizen empowerment, accountability and quality of participation experience for e-government: Lessons from India. Gov. Inf. Q. 2024, 41, 101964. [Google Scholar] [CrossRef]
- Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
- Charles, V.; Rana, N.P.; Carter, L. Artificial intelligence for data-driven decision-making and governance in public affairs. Gov. Inf. Q. 2022, 39, 101742. [Google Scholar] [CrossRef]
- Rjab, A.B.; Mellouli, S.; Corbett, J. Barriers to artificial intelligence adoption in smart cities: A systematic literature review and research agenda. Gov. Inf. Q. 2023, 40, 101814. [Google Scholar] [CrossRef]
- Bena, Y.A.; Ibrahim, R.; Mahmood, J.; Al-Dhaqm, A.; Alshammari, A.; Yusuf, M.N.; Ayemowa, M.O. Big data governance challenges arising from data generated by intelligent systems technologies: A systematic literature review. IEEE Access 2025, 13, 12859–12888. [Google Scholar] [CrossRef]
- Li, H.; Sun, Z.; Xi, J. Unveiling civil servants’ preferences: Human–machine matching vs. regulating algorithms in algorithmic decision-making–Insights from a survey experiment. Gov. Inf. Q. 2025, 42, 102009. [Google Scholar] [CrossRef]
- van Noordt, C.; Misuraca, G. Artificial intelligence for the public sector: Results of landscaping the use of AI in government across the European Union. Gov. Inf. Q. 2022, 39, 101714. [Google Scholar] [CrossRef]
- Johnson, W.G.; Bowman, D.M. A survey of instruments and institutions available for the global governance of artificial intelligence. IEEE Technol. Soc. Mag. 2021, 40, 68–76. [Google Scholar] [CrossRef]
- Buijsman, S. Transparency for AI systems: A value-based approach. Ethics Inf. Technol. 2024, 26, 34. [Google Scholar] [CrossRef]
- Attard-Frost, B.; Brandusescu, A.; Lyons, K. The governance of artificial intelligence in Canada: Findings and opportunities from a review of 84 AI governance initiatives. Gov. Inf. Q. 2024, 41, 101929. [Google Scholar] [CrossRef]
- Batool, A.; Zowghi, D.; Bano, M. Responsible AI governance: A systematic literature review. arXiv 2023, arXiv:2401.10896. [Google Scholar] [CrossRef]
- Laine, J.; Minkkinen, M.; Mäntymäki, M. Ethics-based AI auditing: A systematic literature review on conceptualizations of ethical principles and knowledge contributions to stakeholders. Inf. Manag. 2024, 61, 103969. [Google Scholar] [CrossRef]
- Bach, T.A.; Kaarstad, M.; Solberg, E.; Babic, A. Insights into suggested Responsible AI (RAI) practices in real-world settings: A systematic literature review. AI Ethics 2025, 5, 3185–3232. [Google Scholar] [CrossRef]
- Zheng, Y.; Chang, C.H.; Huang, S.H.; Chen, P.Y.; Picek, S. An overview of trustworthy AI: Advances in IP protection, privacy-preserving federated learning, security verification, and GAI safety alignment. IEEE J. Emerg. Sel. Top. Circuits Syst. 2024, 14, 582–607. [Google Scholar] [CrossRef]
- Madan, R.; Ashok, M. AI adoption and diffusion in public administration: A systematic literature review and future research agenda. Gov. Inf. Q. 2023, 40, 101774. [Google Scholar] [CrossRef]
- Shams, R.A.; Zowghi, D.; Bano, M. AI and the quest for diversity and inclusion: A systematic literature review. AI Ethics 2025, 5, 411–438. [Google Scholar] [CrossRef]
- Ramezani, M.; Takian, A.; Bakhtiari, A.; Rabiee, H.R.; Ghazanfari, S.; Sazgarnejad, S. Research agenda for using artificial intelligence in health governance: Interpretive scoping review and framework. BioData Min. 2023, 16, 31. [Google Scholar] [CrossRef]
- Ahn, M.J.; Chen, Y.-C. Digital transformation toward AI-augmented public administration: The perception of government employees and the willingness to use AI in government. Gov. Inf. Q. 2022, 39, 101664. [Google Scholar] [CrossRef]
- van Twist, A.; Ruijer, E.; Meijer, A. Smart cities & citizen discontent: A systematic review of the literature. Gov. Inf. Q. 2023, 40, 101799. [Google Scholar] [CrossRef]
- Janssen, M.; Brous, P.; Estevez, E.; Barbosa, L.S.; Janowski, T. Data governance: Organizing data for trustworthy artificial intelligence. Gov. Inf. Q. 2020, 37, 101493. [Google Scholar] [CrossRef]
- Lee-Geiller, S.; Lee, T.D. Using government websites to enhance democratic E-governance: A conceptual model for evaluation. Gov. Inf. Q. 2019, 36, 208–225. [Google Scholar] [CrossRef]
- Guadamuz, A. Reconceptualizing regulatory frameworks in the age of generative AI: Lessons from the EU and Italy. Laws 2024, 14, 84. [Google Scholar]
- Schwarz, M.; Hinske, L.C.; Mansmann, U.; Albashiti, F. Designing an ML Auditing Criteria Catalog as Starting Point for the Development of a Framework. IEEE Access 2024, 12, 39953–39967. [Google Scholar] [CrossRef]
- Lee, T.D.; Lee-Geiller, S.; Lee, B.K. A validation of the modified democratic e-governance website evaluation model. Gov. Inf. Q. 2021, 38, 101616. [Google Scholar] [CrossRef]
- Khan, A.A.; Akbar, M.A.; Fahmideh, M.; Liang, P.; Waseem, M.; Ahmad, A.; Niazi, M.; Abrahamsson, P. AI ethics: An empirical study on the views of practitioners and lawmakers. IEEE Trans. Comput. Soc. Syst. 2023, 10, 2971–2984. [Google Scholar] [CrossRef]
- Sulastri, R.; Janssen, M.; van de Poel, I.; Ding, A. Transforming towards inclusion-by-design: Information system design principles shaping data-driven financial inclusiveness. Gov. Inf. Q. 2024, 41, 101979. [Google Scholar] [CrossRef]
- Almagrabi, A.O.; Khan, R.A. Optimizing secure AI lifecycle model management with innovative generative AI strategies. IEEE Access 2024, 13, 12889–12920. [Google Scholar] [CrossRef]
- Kleizen, B.; Van Dooren, W.; Verhoest, K.; Tan, E. Do citizens trust trustworthy artificial intelligence? Experimental evidence on the limits of ethical AI measures in government. Gov. Inf. Q. 2023, 40, 101834. [Google Scholar] [CrossRef]
- Wilson, C. Public engagement and AI: A values analysis of national strategies. Gov. Inf. Q. 2022, 39, 101652. [Google Scholar] [CrossRef]
- Ruschemeier, H.; Hondrich, L.J. Automation bias in public administration–An interdisciplinary perspective from law and psychology. Gov. Inf. Q. 2024, 41, 101953. [Google Scholar] [CrossRef]
- Khan, M.S.; Shoaib, A.; Arledge, E. How to promote AI in the US federal government: Insights from policy process frameworks. Gov. Inf. Q. 2024, 41, 101908. [Google Scholar] [CrossRef]
- Yang, L.; Lin, Y.; Chen, B. Practice and Prospect of Regulating Personal Data Protection in China. Laws 2024, 13, 78. [Google Scholar] [CrossRef]
- Kankanhalli, A.; Charalabidis, Y.; Mellouli, S. IoT and AI for smart government: A research agenda. Gov. Inf. Q. 2019, 36, 304–309. [Google Scholar] [CrossRef]
- Huang, H.; Liao, C.Z.P.; Liao, H.C.; Chen, D.Y. Resisting by workarounds: Unraveling the barriers of implementing open government data policy. Gov. Inf. Q. 2020, 37, 101495. [Google Scholar] [CrossRef]
- Blauth, T.F.; Gstrein, O.J.; Zwitter, A. Artificial intelligence crime: An overview of malicious use and abuse of AI. IEEE Access 2022, 10, 77110–77122. [Google Scholar] [CrossRef]
- Bechmann, A.; Bowker, G.C. Unsupervised by any other name: Hidden layers of knowledge production in artificial intelligence on social media. Big Data Soc. 2019, 6, 2053951718819569. [Google Scholar] [CrossRef]
- Almeida, V.; Mendes, L.S.; Doneda, D. On the development of AI governance frameworks. IEEE Internet Comput. 2023, 27, 70–74. [Google Scholar] [CrossRef]
- Choudhary, T. Political Bias in Large Language Models: A Comparative Analysis of ChatGPT-4, Perplexity, Google Gemini, and Claude. IEEE Access 2024, 13, 11341–11379. [Google Scholar] [CrossRef]
- Uddin, M.T.; Yin, L.; Canavan, S. Spatio-Temporal Graph Analytics on Secondary Affect Data for Improving Trustworthy Emotional AI. IEEE Trans. Affect. Comput. 2023, 15, 30–49. [Google Scholar] [CrossRef]
- Gao, S.; Zhang, H.; Chen, X.; Tao, C.; Zhao, D.; Yan, R. A Trend of AI Conference Convergence in Similarity: An Empirical Study Through Trans-Temporal Heterogeneous Graph. IEEE Trans. Knowl. Data Eng. 2023, 35, 9642–9655. [Google Scholar] [CrossRef]
- Lo, S.K.; Liu, Y.; Lu, Q.; Wang, C.; Xu, X.; Paik, H.-Y.; Zhu, L. Toward Trustworthy AI: Blockchain-Based Architecture Design for Accountability and Fairness of Federated Learning Systems. IEEE Internet Things J. 2023, 10, 3276–3284. [Google Scholar] [CrossRef]
- Naja, I.; Markovic, M.; Edwards, P.; Pang, W.; Cottrill, C.; Williams, R. Using knowledge graphs to unlock practical collection, integration, and audit of AI accountability information. IEEE Access 2022, 10, 74383–74411. [Google Scholar] [CrossRef]
- Belgodere, B.; Dognin, P.; Ivankay, A.; Melnyk, I.; Mroueh, Y.; Mojsilovic, A.; Young, R.A. Auditing and Generating Synthetic Data with Controllable Trust Trade-offs. IEEE J. Emerg. Sel. Top. Circuits Syst. 2024, 14, 773–788. [Google Scholar] [CrossRef]
- Bradley, S.; Mahmoud, I.H.; Arlati, A. Integrated Collaborative Governance Approaches towards Urban Transformation: Experiences from the CLEVER Cities Project. Sustainability 2022, 14, 15566. [Google Scholar] [CrossRef]
- Hjaltalin, I.T.; Sigurdarson, H.T. The strategic use of AI in the public sector: A public values analysis of national AI strategies. Gov. Inf. Q. 2024, 41, 101914. [Google Scholar] [CrossRef]
- Yigitcanlar, T.; Li, R.Y.M.; Beeramoole, P.B.; Paz, A. Artificial intelligence in local government services: Public perceptions from Australia and Hong Kong. Gov. Inf. Q. 2023, 40, 101833. [Google Scholar] [CrossRef]
- Sun, T.Q.; Medaglia, R. Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare. Gov. Inf. Q. 2019, 36, 368–383. [Google Scholar] [CrossRef]
- Elapatha, V.W.; Jehan, S.N. An Analysis of the Implementation of Business Process Re-engineering in Public Services. J. Open Innov. Technol. Mark. Complex. 2020, 6, 114. [Google Scholar] [CrossRef]
- Yang, F.; Abedin, M.Z.; Qiao, Y.; Ye, L. Towards Trustworthy Governance of AI-Generated Content (AIGC): A Blockchain-Driven Regulatory Framework for Secure Digital Ecosystems. IEEE Trans. Eng. Manag. 2024, 71, 14945–14962. [Google Scholar] [CrossRef]
- Das, D.K. Exploring the Symbiotic Relationship between Digital Transformation, Infrastructure, Service Delivery, and Governance for Smart Sustainable Cities. Smart Cities 2024, 7, 806–835. [Google Scholar] [CrossRef]
- Zhang, D.; Pee, L.G.; Pan, S.L.; Cui, L. Big data analytics, resource orchestration, and digital sustainability: A case study of smart city development. Gov. Inf. Q. 2022, 39, 101626. [Google Scholar] [CrossRef]
- Rahaman, M.F.; Golam, M.; Subhan, M.R.; Tuli, E.A.; Kim, D.S.; Lee, J.M. Meta-governance: Blockchain-driven metaverse platform for mitigating misbehavior using smart contract and AI. IEEE Trans. Netw. Serv. Manag. 2024, 21, 4024–4038. [Google Scholar] [CrossRef]
- Bharosa, N. The rise of GovTech: Trojan horse or blessing in disguise? A research agenda. Gov. Inf. Q. 2022, 39, 101692. [Google Scholar] [CrossRef]
- Erdélyi, O.J.; Goldsmith, J. Regulating artificial intelligence: Proposal for a global solution. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA, 1–3 February 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 95–101. [Google Scholar]
- Wamba, S.F.; Wamba-Taguimdje, S.L.; Lu, Q.; Queiroz, M.M. How emerging technologies can solve critical issues in organizational operations: An analysis of blockchain-driven projects in the public sector. Gov. Inf. Q. 2024, 41, 101912. [Google Scholar] [CrossRef]
- Park, S.; Yoon, S. Cross-National Findings of Factors Affecting the Acceptance of AI-Based Sustainable Fintech. Sustainability 2025, 17, 49. [Google Scholar] [CrossRef]
- Wallach, W.; Marchant, G. Toward the Agile and Comprehensive International Governance of AI and Robotics [Point of View]. Proc. IEEE 2019, 107, 505–508. [Google Scholar] [CrossRef]
- Yigitcanlar, T.; Kankanamge, N.; Preston, A.; Gill, P.; Rezvani, R.; Ostadnia, M.; Xia, B.; Ioppolo, G. Unlocking Artificial Intelligence Adoption in Local Government: Best-Practice Lessons from Australian Councils. Smart Cities 2024, 7, 64. [Google Scholar] [CrossRef]
- Chatfield, A.T.; Reddick, C.G. A framework for Internet of Things-enabled smart government: A case of IoT cybersecurity policies and use cases in US federal government. Gov. Inf. Q. 2019, 36, 346–357. [Google Scholar] [CrossRef]
- Wu, C.; Zhang, H.; Carroll, J.M. AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities. arXiv 2024, arXiv:2409.02017. [Google Scholar] [CrossRef]
- Tan, S.Y.; Taeihagh, A. Adaptive governance of autonomous vehicles: Accelerating the adoption of disruptive technologies in Singapore. Gov. Inf. Q. 2021, 38, 101546. [Google Scholar] [CrossRef]
- Kawashita, I.; Baptista, A.A.; Soares, D. Open Government Data Use in the Brazilian States and Federal District Public Administrations. Data 2022, 7, 5. [Google Scholar] [CrossRef]
- He, X.; Kuai, X.; Li, X.; Qiu, Z.; He, B.; Guo, R. Smart City Ontology Framework for Urban Data Integration and Application. Smart Cities 2025, 8, 165. [Google Scholar] [CrossRef]
- Saura, J.R.; Ribeiro-Soriano, D.; Palacios-Marqués, D. Assessing behavioral data science privacy issues in government artificial intelligence deployment. Gov. Inf. Q. 2022, 39, 101679. [Google Scholar] [CrossRef]
- Zhang, X. A more secure framework for open government data sharing based on federated learning. Gov. Inf. Q. 2024, 41, 101981. [Google Scholar] [CrossRef]
- Xiao, J.; Han, L.; Zhang, H. Exploring Driving Factors of Digital Transformation among Local Governments: Foundations for Smart City Construction in China. Sustainability 2022, 14, 14980. [Google Scholar] [CrossRef]
- Ali, O.; Shrestha, A.; Chatfield, A.; Murray, P. Assessing information security risks in the cloud: A case study of Australian local government authorities. Gov. Inf. Q. 2020, 37, 101419. [Google Scholar] [CrossRef]
- Zhang, D.; Pee, L.G.; Pan, S.L.; Liu, W. Orchestrating artificial intelligence for urban sustainability. Gov. Inf. Q. 2022, 39, 101720. [Google Scholar] [CrossRef]
- Dwork, C. Differential Privacy: A Survey of Results. In Proceedings of the International Conference on Theory and Applications of Models of Computation (TAMC 2008), Xi’an, China, 25–29 April 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–19. [Google Scholar]
- Arbelaez, A.; Climent, L. Iterative local search for preserving data privacy. Appl. Intell. 2025, 55, 189. [Google Scholar] [CrossRef]
- Winfield, A.F.T.; Jirotka, M. Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2018, 376, 20180085. [Google Scholar] [CrossRef]
- Bokhari, S.A.A.; Myeong, S. The influence of artificial intelligence on e-governance and cybersecurity in smart cities: A stakeholder’s perspective. IEEE Access 2023, 11, 69783–69797. [Google Scholar] [CrossRef]
- Bokhari, S.A.A.; Myeong, S. The impact of AI applications on smart decision-making in smart cities as mediated by the Internet of Things and smart governance. IEEE Access 2023, 11, 120827–120844. [Google Scholar] [CrossRef]
- Singh, J.; Cobbe, J.; Norval, C. Decision provenance: Harnessing data flow for accountable systems. IEEE Access 2018, 7, 6562–6574. [Google Scholar] [CrossRef]
- Rejeb, A.; Rejeb, K.; Keogh, J.G.; Zailani, S. Barriers to Blockchain Adoption in the Circular Economy: A Fuzzy Delphi and Best-Worst Approach. Sustainability 2022, 14, 3611. [Google Scholar] [CrossRef]
- de Almeida, P.G.R.; dos Santos, C.D.; Farias, J.S. Artificial intelligence regulation: A framework for governance. Ethics Inf. Technol. 2021, 23, 505–525. [Google Scholar] [CrossRef]
- Peng, X.; Xiao, D. Can Open Government Data Improve City Green Land-Use Efficiency? Evidence from China. Land 2024, 13, 1891. [Google Scholar] [CrossRef]
- Chatterjee, S.; Kar, A.K.; Gupta, M.P. Success of IoT in smart cities of India: An empirical analysis. Gov. Inf. Q. 2018, 35, 349–361. [Google Scholar] [CrossRef]
- Kinder, T.; Stenvall, J.; Koskimies, E.; Webb, H.; Janenova, S. Local public services and the ethical deployment of artificial intelligence. Gov. Inf. Q. 2023, 40, 101865. [Google Scholar] [CrossRef]
- Chen, Y.-C.; Ahn, M.J.; Wang, Y.-F. Artificial Intelligence and Public Values: Value Impacts and Governance in the Public Sector. Sustainability 2023, 15, 4796. [Google Scholar] [CrossRef]
- Zheng, R.; Huang, H. An Empirical Study on the Digital Economy, Fiscal Policy, and Regional Sustainable Development—Based on Data from Less Developed Regions in China. Sustainability 2024, 16, 10057. [Google Scholar] [CrossRef]
- Rahwan, I. Society-in-the-loop: Programming the algorithmic social contract. Ethics Inf. Technol. 2018, 20, 5–14. [Google Scholar] [CrossRef]
- Ai, S.; Ding, H.; Ping, Y.; Zuo, X.; Zhang, X. Exploration of Digital Transformation of Government Governance Under the Information Environment. IEEE Access 2023, 11, 78984–78993. [Google Scholar] [CrossRef]
- de Almeida, P.G.R.; dos Santos, C.D., Jr. Artificial intelligence governance: Understanding how public organizations implement it. Gov. Inf. Q. 2025, 42, 102003. [Google Scholar] [CrossRef]
- Khalfan, M.; Azizi, N.; Haass, O.; Maqsood, T.; Ahmed, I. Blockchain Technology: Potential Applications for Public Sector E-Procurement and Project Management. Sustainability 2022, 14, 5791. [Google Scholar] [CrossRef]
- Leventis, S.; Fitsilis, F.; Anastasiou, V. Diversification of Legislation Editing Open Software (LEOS) Using Software Agents—Transforming Parliamentary Control of the Hellenic Parliament into Big Open Legal Data. Big Data Cogn. Comput. 2021, 5, 45. [Google Scholar] [CrossRef]
- Pislaru, M.; Vlad, C.S.; Ivascu, L.; Mircea, I.I. Citizen-Centric Governance: Enhancing Citizen Engagement through Artificial Intelligence Tools. Sustainability 2024, 16, 2686. [Google Scholar] [CrossRef]
- David, A.; Yigitcanlar, T.; Li, R.Y.M.; Corchado, J.M.; Cheong, P.H.; Mossberger, K.; Mehmood, R. Understanding Local Government Digital Technology Adoption Strategies: A PRISMA Review. Sustainability 2023, 15, 9645. [Google Scholar] [CrossRef]
- Kim, Y.; Kim, S. Living Labs for AI-Enabled Public Services: Functional Determinants, User Satisfaction, and Continued Use. Sustainability 2023, 15, 8672. [Google Scholar] [CrossRef]
- Bahaddad, A.A.; Almarhabi, K.A.; Alghamdi, A.M. Factors Affecting Information Security and the Implementation of Bring Your Own Device (BYOD) Programmes in the Kingdom of Saudi Arabia (KSA). Appl. Sci. 2022, 12, 12707. [Google Scholar] [CrossRef]
- ISO/IEC 27001; Information Security, Cybersecurity and Privacy Protection—Information Security Management Systems—Requirements. ISO: Geneva, Switzerland, 2022.
- NIST. Cybersecurity Framework. Available online: https://www.nist.gov/cyberframework (accessed on 15 March 2026).
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, 71. [Google Scholar] [CrossRef] [PubMed]