Review

Artificial Intelligence Standards in Conflict: Local Challenges and Global Ambitions

1 Data Analytics and Statistics, College of Science, University of North Texas, Denton, TX 76201, USA
2 Department of Information Science, University of North Texas, Denton, TX 76201, USA
* Author to whom correspondence should be addressed.
Standards 2025, 5(4), 27; https://doi.org/10.3390/standards5040027
Submission received: 28 July 2025 / Revised: 6 September 2025 / Accepted: 26 September 2025 / Published: 11 October 2025

Abstract

This article examines the global efforts to govern and regulate Artificial Intelligence (AI) in response to its rapid development and growing influence across many parts of society. It explores how governance takes place at multiple levels, including international bodies, national governments, industries, companies, and communities. The study draws on a wide range of official documents, policy reports, and international agreements to build a timeline of key regulatory and standardization milestones. It also analyzes the challenges of coordinating across different legal systems, economic priorities, and cultural views. The findings show that while some progress has been made through soft-law frameworks and regional partnerships, deep divisions remain. These include unclear responsibilities, uneven enforcement, and risks of regulatory gaps. The article argues that effective AI governance requires stronger international cooperation, fair and inclusive participation, and awareness of power imbalances that shape policy decisions. Competing global and commercial interests can create obstacles to building systems that prioritize the public good. The conclusion highlights that future governance models must be flexible enough to adapt to fast-changing technologies, yet consistent enough to protect rights and promote trust. Addressing these tensions is critical for building a more just and accountable future of AI.

1. Introduction

AI continues to evolve rapidly, reshaping key sectors and raising urgent questions about ethics, social consequences, and long-term risks. These growing concerns have led to worldwide efforts to regulate and standardize AI in ways that promote responsible development and coordinate legal approaches across different regions. However, creating shared and comprehensive standards remains a major challenge due to the need to reconcile conflicting legal systems, cultural values, and policy goals. This paper examines the intensifying global focus on AI governance, discusses the imperative for harmonized regulatory practices, and draws attention to the inherent tensions between global aspirations and localized contexts.
Global interest in AI regulation is increasing due to AI’s rapid growth and widespread adoption (Table 1). Currently, 77% of companies either use or are exploring AI, with 83% prioritizing AI integration in their business plans [1]. AI is also pervasive at the device level, with 77% of devices containing AI components. Additionally, nine out of ten organizations consider AI essential for competitive advantage [2]. AI adoption and its economic contributions continue to grow dramatically: AI is projected to add around $15.7 trillion to the global economy by 2030 and is expected to create 97 million new jobs, offsetting the loss of 85 million jobs and leading to a net employment gain of 12 million positions [1,2,3].
Business adoption of AI emphasizes customer service (56%), cybersecurity (51%), digital personal assistants (47%), customer relationship management (46%), and inventory management (40%) (Table 2). Notably, 60% of business owners foresee AI increasing productivity, though 52% of workers express concerns over job displacement [4,5,6].
AI significantly impacts consumer interactions, often without consumers noticing. Common uses include responding to texts and emails (45%), answering financial inquiries (43%), and planning travel (38%). Interestingly, only about a third of consumers recognize that they use AI, even though actual usage reaches 77% [7,8].
Trust remains a crucial factor influencing AI adoption. Currently, 65% of consumers trust businesses that utilize AI, while 78% believe generative AI’s benefits outweigh potential risks. However, concerns remain about cybersecurity, identity theft, and the use of AI in deceptive advertising [5].
Given AI’s transformative potential, global policymakers advocate comprehensive regulatory frameworks. Nations differ considerably in their regulatory approaches, creating fragmented governance landscapes. For instance, the European Union employs a human-centric, legally binding AI Act emphasizing fundamental rights [9]. The United States primarily relies on executive orders, industry guidelines, and state-level regulations, though federal initiatives are developing [10]. China balances innovation incentives with centralized governmental oversight [11]. International bodies like OECD, UNESCO, and G20 provide principle-based frameworks allowing for local adaptation and contextual flexibility [12,13,14,15,16].
Regulatory systems vary across regions, but a shared commitment has begun to form around core principles of AI governance, including transparency, fairness, accountability, and the continued presence of human judgment in critical decision-making processes [15]. Policymakers propose adaptive strategies combining global principles with local regulatory realities. UNESCO’s recommendations exemplify guidelines adaptable across jurisdictions, promoting ethical AI deployment [13].
Private corporations actively shape regulatory discourse. Companies including Microsoft, Google, and OpenAI publish their responsible AI frameworks, demonstrating industry self-regulation initiatives [17,18,19]. Academia and civil society provide critical perspectives on social equity, transparency, and human rights within AI governance discussions. Strong collaboration between public institutions and private companies plays a key role in balancing innovation with responsibility. These partnerships help create more inclusive governance approaches that better respond to the complex social effects of AI.
This article provides a comprehensive examination of the fundamental components constituting AI systems, evaluates the distribution of associated benefits and harms, and offers a comparative analysis of regulatory approaches adopted at both local and international levels. Following this introductory discussion, Section 2 defines AI and outlines the parameters determining its regulatory scope, while Section 3 details the documented benefits alongside the risks associated with AI implementation. Section 4 critically assesses which stakeholders benefit from AI developments and identifies disproportionately disadvantaged groups. Subsequent analysis in Section 5 addresses existing governance structures, describing frameworks of accountability and responsibility. A comparative exploration of local and global regulatory efforts is presented in Section 6, complemented by Section 7’s chronological overview of pivotal policy developments. Section 8 examines the implications of fragmented governance structures, and Section 9 proposes viable strategies for achieving more coherent and inclusive standards. Section 10 offers final remarks on key challenges, highlights the need for ongoing global and cross-disciplinary cooperation, and suggests ways to support responsible AI development in the future.

2. Understanding AI: What Are We Regulating?

AI is no longer understood as a singular technological artifact or standalone computational tool, but rather as a distributed, socio-technical system composed of interdependent processes spanning data acquisition, algorithmic development, infrastructure design, deployment, and continuous adaptation to dynamic real-world contexts. This section explores the components of such systems and demonstrates the risks that emerge at each stage, emphasizing the need for governance frameworks that consider the full lifecycle of AI rather than focusing narrowly on isolated outputs or algorithms.

2.1. AI as a System

Across legal, scientific, and policy domains, there remains ongoing debate about the definition and scope of AI. The National AI Initiative Act of 2020 describes AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments” [20]. While concise, such definitions obscure the deeply layered reality of AI systems, which are shaped by algorithmic architectures, data flows, compute capacity, and sociopolitical forces that influence how outcomes are generated and perceived [21]. This embedded complexity challenges conventional regulatory tools and demands approaches attentive to the interactions among developers, infrastructure providers, engineers, data workers, end-users, and affected communities. Conceptualizing AI as a system of interdependent human and computational actors enables a shift from narrow algorithmic scrutiny to a broader concern with distributed accountability and institutional responsibility [15,22].

2.2. The AI Lifecycle: Interconnected Components

Regulatory frameworks must reflect the fact that AI systems are not reducible to algorithms alone but involve intricate and evolving interconnections among their various parts. Attempts to regulate single stages in isolation, without regard for systemic dependencies, are unlikely to be effective. Five major components define the AI lifecycle: data sourcing and annotation, algorithm design, computational infrastructure, deployment and continuous adaptation, and post-deployment governance and accountability. Each introduces specific vulnerabilities that accumulate across the system.

2.2.1. Data Sourcing and Annotation

Data constitutes the foundational resource for AI systems. However, sourcing practices, consent mechanisms, and annotation protocols often remain opaque, contributing to bias, exploitation, and exclusion [23]. Many AI systems rely on large-scale web scraping and distributed annotation labor, frequently introducing cultural, linguistic, or socioeconomic distortions. Although empirical research in data-centric AI emphasizes the importance of data quality over model complexity, regulatory discourse continues to focus disproportionately on algorithms [24].

2.2.2. Algorithm Design and Interpretability

Algorithmic development remains a central but contested area of AI practice. Increasing reliance on opaque methods such as deep learning and reinforcement learning presents significant challenges for traceability and accountability [25]. Design choices involving feature selection, regularization, or objective functions may carry implicit value judgments with legal or ethical consequences, especially when deployed in sensitive domains. Interpretability tools have attracted considerable interest in fields such as industrial optimization and smart energy systems, but their diffusion across sectors remains uneven [25].

2.2.3. Computational Infrastructure

The computational infrastructure behind AI imposes material costs and global labor burdens. Training large-scale models consumes vast energy resources and depends on rare earth minerals used in GPUs and other computational devices [26]. In addition, low-wage labor supports data curation, server maintenance, and operational continuity, often in regions of the Global South with limited regulatory protection [27]. Emerging proposals for Green AI advocate metrics for energy efficiency, carbon impact, and hardware sustainability; however, policy implementation remains limited [28].

2.2.4. Deployment and Continuous Adaptation

Once deployed, AI systems operate in domains subject to rapid change, including finance, healthcare, education, and manufacturing. Real-world deployment introduces feedback loops and temporal drift, increasing the risk of unintended effects and performance degradation over time [24]. Many systems adapt through continuous updates, rendering static certification schemes or liability rules inadequate. The lack of reliable standards for post-deployment auditing allows failures to persist unnoticed and may lead to harmful consequences.

2.2.5. Post-Deployment Governance and Accountability

The current accountability infrastructure is insufficient to address risks emerging after deployment. Although some private actors have implemented ethics boards and voluntary monitoring mechanisms, enforcement capacity remains inconsistent [29]. Public trust in AI depends not only on legal compliance but also on participation in independent review and transparent accountability processes. Democratic co-governance models encourage transparent audit procedures and institutional frameworks for inclusive accountability that recognize technology as socially embedded rather than neutral [15,22].
For example, in healthcare AI systems, an end-to-end auditing framework might begin with verifying informed patient consent and dataset lineage (data stage), move to evaluating explainability mechanisms and bias detection during model training (algorithm stage), assess energy efficiency and resilience in deployment infrastructure (infrastructure stage), and culminate in continuous post-deployment audits that include red-teaming, incident reporting, and independent certification. Such lifecycle-wide auditing practices illustrate how regulatory frameworks can operationalize accountability at each stage of an AI system’s development and use.
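To make this lifecycle-wide auditing idea concrete, the sketch below shows one possible way to record such checks programmatically. It is a minimal illustration only; the stage names, check descriptions, and class structure are assumptions introduced for this example and are not drawn from any regulatory framework or standard.

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle stages and checks mirroring the healthcare example above.
# Names and criteria are illustrative only, not drawn from any statute or standard.

@dataclass
class AuditCheck:
    description: str
    passed: bool = False
    evidence: str = ""  # e.g., a link to consent records or test reports

@dataclass
class LifecycleAudit:
    system_name: str
    checks: dict = field(default_factory=dict)  # stage name -> list of AuditCheck

    def add_check(self, stage, check):
        self.checks.setdefault(stage, []).append(check)

    def unresolved(self):
        """Return descriptions of checks that have not yet passed, grouped by stage."""
        return {
            stage: [c.description for c in items if not c.passed]
            for stage, items in self.checks.items()
            if any(not c.passed for c in items)
        }

audit = LifecycleAudit("hospital-triage-model")
audit.add_check("data", AuditCheck("Informed patient consent and dataset lineage verified"))
audit.add_check("algorithm", AuditCheck("Bias detection and explainability reports reviewed"))
audit.add_check("infrastructure", AuditCheck("Energy use and deployment resilience assessed"))
audit.add_check("post-deployment", AuditCheck("Red-teaming, incident reporting, and certification in place"))
print(audit.unresolved())  # every stage is listed until its checks are marked passed
```

In such a scheme, a check is only cleared once supporting evidence is recorded, so unresolved items remain visible across the whole lifecycle rather than being confined to a single stage.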

2.3. The Case for Lifecycle Regulation

Efforts to regulate AI by focusing solely on platforms, algorithms, or use cases fail to address the interdependent nature of risks across the AI lifecycle. In the United States, policy responses have relied on voluntary standards, sector-specific rules, and state-level pilots, creating regulatory fragmentation and uneven protection [20]. While this flexibility may support innovation, it also introduces legal uncertainty and differential vulnerability. The EU AI Act offers a more integrated model, applying a risk-based framework to regulate AI systems according to their intended functions and social impact [20]. Although some view the approach as overly prescriptive, others argue that reframing the debate as a binary between innovation and regulation misrepresents the issue. Governance frameworks that address the full socio-technical system from data to deployment have the potential to support responsible innovation while preserving public accountability [29].

2.4. Systemic Governance and Democratic Principles

A robust AI governance strategy must be grounded in democratic values such as openness, inclusivity, and public engagement. The absence of meaningful opportunities for public input weakens the legitimacy of both governmental and technological decision-making processes [22]. Models of co-governance enable broader participation in ethical, legal, and technical deliberation, especially as AI technologies reshape institutions in labor, education, healthcare, and public administration. Systemic and distributed control recognizes the embedded nature of AI and allocates accountability to appropriate actors. Multi-level frameworks that involve governmental regulation, industry self-governance, and civil society participation can more effectively respond to the risks posed by AI as a socio-technical infrastructure [15,29].

3. Benefits and Harms of AI Systems

AI systems offer many benefits, but their growing use has also introduced serious risks, prompting diverse regulatory responses. Governments have proposed different frameworks depending on their legal traditions, risk tolerance, and technological capacities. This section analyzes approaches including risk-based, contextual, sectoral, and international models. Their strengths and limitations are discussed along with examples from recent applications.

3.1. Risk-Based and Contextual Regulation

One of the most important questions in AI governance is whether to apply general rules based on risk severity or to develop targeted policies for specific areas such as healthcare, finance, or criminal justice. The EU AI Act follows a risk-based structure that places AI systems into categories including unacceptable, high, limited, and minimal risk. Each category involves different requirements for documentation, transparency, and safety measures [30]. The Act applies uniformly across sectors but leaves room for certain adaptations.
In contrast, the United States, United Kingdom, and Canada favor contextual or sector-specific strategies. These often build on existing laws such as health or consumer protection statutes and integrate AI monitoring into those domains [31]. Although this method is more flexible, it can also lead to fragmented implementation and legal uncertainty.
A modular hybrid approach has emerged as a practical alternative. General rules are defined at the top level, and adjustments are introduced through domain-specific modules. This makes the system more responsive to contextual needs while keeping a unified structure [32].
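The sketch below illustrates, under stated assumptions, how such a modular hybrid rulebook might be expressed: baseline obligations are keyed to the risk tiers named above, and sector-specific modules add or tighten requirements. The tier labels follow the EU AI Act categories mentioned earlier, but the obligation lists and sector modules are hypothetical examples, not provisions of any actual law.

```python
# Illustrative modular hybrid rulebook: baseline obligations keyed to risk tiers,
# with sector modules adding further requirements. Obligation strings and sector
# modules are hypothetical examples, not provisions of any actual law.

BASELINE_OBLIGATIONS = {
    "unacceptable": ["prohibited"],
    "high": ["risk assessment", "technical documentation", "human oversight", "post-market monitoring"],
    "limited": ["transparency notice to users"],
    "minimal": [],
}

SECTOR_MODULES = {
    "healthcare": {"high": ["clinical validation", "patient consent audit"]},
    "finance": {"high": ["fair-lending review"], "limited": ["adverse-action explanation"]},
}

def obligations(risk_tier, sector=None):
    """Combine baseline obligations for a risk tier with any sector-specific additions."""
    combined = list(BASELINE_OBLIGATIONS.get(risk_tier, []))
    if sector is not None:
        combined += SECTOR_MODULES.get(sector, {}).get(risk_tier, [])
    return combined

print(obligations("high", "healthcare"))
# ['risk assessment', 'technical documentation', 'human oversight',
#  'post-market monitoring', 'clinical validation', 'patient consent audit']
```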

3.2. Transparency and Compliance Measures

Most regulatory models now emphasize transparency in AI design and operation. Developers are expected to explain what data is used, how systems work, and what performance metrics apply. The EU AI Act requires that general-purpose and high-risk systems document and disclose their logic, datasets, and quality control procedures [30]. These measures are supported by growing calls for algorithmic accountability globally and locally.
Compliance depends on mechanisms such as conformity assessments. These may range from internal audits to third-party verification. In high-risk applications, the EU model requires external checks before deployment. While such measures support market trust, concerns persist about their effectiveness unless audits are truly independent and protected from conflicts of interest [33].
Concrete indicators increasingly appear in emerging best practices: documentation depth (e.g., data provenance and dataset lineage), minimum model card requirements (architecture, performance benchmarks, known limitations), red-teaming scope (testing adversarial inputs before deployment), and third-party audit independence (separation from developer funding). These measurable criteria, reflected in frameworks such as the NIST AI Risk Management Framework (2023) and the updated OECD AI Principles (2024), provide regulators and stakeholders with more reliable tools to assess whether transparency and accountability commitments are substantively implemented.
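As a rough illustration of how these indicators could be captured in practice, the sketch below defines a minimal model card structure covering data provenance, performance benchmarks, known limitations, red-teaming scope, and audit independence. The field names and example entries are assumptions made for this sketch; they are not terminology taken from the NIST or OECD documents.

```python
from dataclasses import dataclass

# Minimal model-card sketch covering the indicators discussed above. Field names and
# example entries are illustrative assumptions, not terminology from NIST or OECD documents.

@dataclass
class ModelCard:
    model_name: str
    architecture: str
    data_provenance: list        # sources and lineage of training datasets
    performance_benchmarks: dict
    known_limitations: list
    red_team_scope: list         # adversarial scenarios tested before release
    audit_independent: bool      # third-party audit free of developer funding

    def missing_sections(self):
        """Flag empty documentation sections so reviewers can see gaps at a glance."""
        gaps = []
        for name in ("data_provenance", "known_limitations", "red_team_scope"):
            if not getattr(self, name):
                gaps.append(name)
        return gaps

card = ModelCard(
    model_name="triage-assistant",
    architecture="transformer classifier",
    data_provenance=["de-identified hospital records, 2018-2023, consent documented"],
    performance_benchmarks={"AUROC": 0.91},
    known_limitations=["not validated for pediatric patients"],
    red_team_scope=["prompt injection", "demographic performance gaps"],
    audit_independent=True,
)
print(card.missing_sections())  # [] once every documentation section is filled in
```

A reviewer or regulator could use such a gap check to verify that documentation commitments are filled in before release, rather than relying on unstructured disclosures.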

3.3. Voluntary Standards and Adaptive Co-Governance

In addition to binding laws, governments often encourage industry participation through voluntary codes, soft law instruments, and ethical guidelines. For example, the 2023 Executive Order on Safe, Secure, and Trustworthy AI in the United States directs major firms to watermark outputs, conduct risk assessments, and share test results with the government [34]. Although such initiatives are adaptive and encourage experimentation, they may prioritize reputation over systemic risk reduction and lack proper enforcement.
Scholars have proposed adaptive co-governance to address these limitations. This model allows gradual alignment between legal rules, technological innovation, and stakeholder input. Drawing on principles from environmental governance, it encourages continuous improvement of minimum standards in response to real-world outcomes. Success depends on institutional resources, ongoing monitoring, and clear timelines for revisions [32].

3.4. Sector-Specific Regulatory Strategies

Embedding AI review into sectoral regulatory regimes remains essential for achieving context-sensitive governance. Sector-specific regulations integrate domain knowledge, ethical considerations, and existing legal infrastructure. They enable enforcement bodies to customize obligations to the nuances of different industries. However, this approach also introduces risks of fragmentation and overlapping standards if not coordinated properly. The following examples depict how this method is applied in financial services and healthcare.
Fair lending and consumer protection laws guide the use of AI in activities such as credit scoring, insurance underwriting, and automated trading. Tools used in Buy-Now-Pay-Later (BNPL) products, for instance, fall under the scope of existing legislation such as the Equal Credit Opportunity Act (ECOA) and Fair Credit Reporting Act (FCRA), which aim to maintain fairness and transparency even in the absence of AI-specific laws.
The FDA’s regulatory framework for Software as a Medical Device (SaMD) applies rigorous standards for validation, explainability, and continuous monitoring. It intends to support innovation while protecting patient safety. This model benefits from domain expertise and customized monitoring but may face challenges related to cross-sectoral consistency. Inter-agency cooperation through national AI authorities is often required to address shared risks across domains.

3.5. International Coordination and Legal Convergence

As AI systems are deployed across borders, global coordination becomes essential. The EU AI Act is expected to influence international markets through what scholars call the Brussels Effect. Firms outside the EU may adopt its standards to retain access to European users [35]. The Council of Europe’s 2024 Framework Convention on AI complements this with a treaty-based approach that addresses democracy and human rights [15].
However, global cooperation faces serious obstacles. Regulatory cultures differ widely. The EU emphasizes precaution and openness, the United States relies on market-driven sectoral rules, and countries like Brazil integrate AI oversight into digital rights frameworks. Scholars propose a 3C model, which is contextual, coherent, and commensurable, to allow national variation while maintaining measurable and interoperable regulatory practices [31].

3.6. Comparing Regulatory Strategies

Multiple governance strategies have been adopted or proposed across domains, each offering distinct strengths while also posing specific limitations. The effectiveness of these approaches depends on how well they balance consistency with contextual sensitivity, and enforceability with the capacity to evolve alongside emerging technologies.
Policymakers are increasingly tasked with navigating these tradeoffs to create regulatory systems that are both stable and responsive. Table 3 provides a comparative overview of these key regulatory models, outlining their core features, benefits, and potential challenges by synthesizing insights from sources already discussed in this section, including Sloane and Wüllhorst (2025), Lund et al. (2025), Park (2024), Batool et al. (2025), Wei et al. (2024), Papagiannidis et al. (2025), and related analyses [15,29,30,31,32,33].
Looking more closely, these strategies do not just differ in scope; they also diverge in how enforceable they are, the economic weight they carry, and how quickly they can adjust to new technologies. The EU AI Act, for instance, lays out a clear risk-based system that creates certainty for regulators and businesses. Still, many see it as too rigid to keep pace with emerging applications [9]. By contrast, the U.S. FDA’s model in the medical device field shows how sector-specific rules can work well in practice. It is more flexible and rooted in existing structures, but that same patchwork style can leave gaps between industries [36].
Canada’s Artificial Intelligence and Data Act (AIDA) tries to meet in the middle. It sets out broad principles but then layers on targeted rules for high-impact systems. The idea is to hold companies accountable without shutting the door on innovation [37]. However, Canada’s own policy papers point out that leaving too much to voluntary industry standards has not worked well in the past, especially when it comes to things like hiring algorithms or biometric systems, where self-regulation has allowed bias to slip through [38].
Another model, often called adaptive co-governance, takes yet a different angle. In this model, the focus is on continuous stakeholder input and the ability to revise rules as technology shifts. The approach is appealing because it builds in flexibility, but it also demands time, resources, and steady engagement from regulators that not every government can realistically sustain [21].
Ultimately, the challenge is about trade-offs: stability versus agility, cost versus protection, national rules versus global alignment. No one system has solved all of these at once, which is why many scholars argue that a polycentric model where multiple frameworks overlap and reinforce each other may be the most workable path forward.

3.7. Balancing Benefits and Risks Through Integrated Governance

Future AI governance will likely require a polycentric framework that brings together multiple levels of regulation. This structure connects national laws, sector-specific rules, industry protocols, and international agreements. It allows responsibilities to be distributed while adapting to local needs. Although soft law encourages innovation and legal flexibility helps systems respond to new developments, core protections must still be established through enforceable rules. Preventing regulatory capture and promoting accountability depends on meaningful public engagement, independent evaluations, systematic impact reviews, and mechanisms such as sunset clauses to revisit policies over time [33]. A major challenge will be maintaining coherence across these different approaches while keeping pace with rapid technological change.

4. Who Gains and Who Is Left Behind by AI

The benefits of AI development and deployment are not equally distributed. While a small group of technology companies and highly skilled users gain increased productivity and influence, others can be excluded or harmed. Developers of large-scale AI models operate in a loosely regulated environment with limited public awareness of how these models function or the ethical concerns they raise [39]. In many cases, there are no binding requirements to promote transparency, explainability, or reliability. Privacy protections remain weak, echoing the long-standing practices of mass data collection and misuse seen across the digital ecosystem [40]. The enforcement of intellectual property rights is also uneven, with many models relying on copyrighted materials without proper consent [41].

4.1. Gaps in Access, Digital Literacy, and Prompting Proficiency

One of the clearest divides emerges in access to AI tools. Populations without consistent internet service or digital devices are excluded from opportunities to improve their work or education with these technologies [42]. Even for those who have access, differences between free and premium versions of AI platforms can affect the quality and range of services available [43].
Among regular users, disparities in digital literacy and prompting skills deepen inequality. People with advanced training can obtain high-quality outputs by crafting precise queries, while others receive less relevant or inaccurate responses [44]. The limited transparency of most AI systems makes it difficult for beginners to understand how to interact effectively or evaluate risks [45]. This knowledge gap may also lead to the unintentional sharing of sensitive data by unaware users.
Older adults, non-native speakers, and individuals with lower education levels often face additional challenges. These groups may struggle with the language, design, or feedback style of AI interfaces, reinforcing digital exclusion and limiting participation in AI-enhanced work, learning, and communication.

4.2. Intellectual Property Owners and Cultural Creators

Artists, writers, journalists, and website owners face significant risks from unauthorized use of their work. Many generative AI models are trained on copyrighted content scraped from the web without consent, attribution, or compensation [46]. In some cases, outputs closely mimic the style of the original, damaging the rights of the creator and reducing potential earnings. Although legal efforts are ongoing to address these practices, enforcement remains inconsistent.
Cultural minorities and underrepresented creators may be disproportionately affected, as their content is often absorbed into AI systems with little visibility or protection. This not only raises concerns about appropriation but also about the dilution or misrepresentation of cultural narratives in AI-generated outputs.

4.3. Barriers Facing Independent Developers and Small Enterprises

The AI ecosystem is increasingly dominated by a few powerful corporations. Developers of tools such as ChatGPT, Gemini, and DeepSeek benefit from extensive support, including funding, hardware, and legal resources, through partnerships with large technology companies and government contracts [47]. In contrast, smaller developers face steep entry barriers, including high computing costs, limited market access, and regulatory uncertainty [48]. Current frameworks rarely prioritize fair competition or innovation from new entrants.
OpenAI, Google, Meta, Baidu, Microsoft, X, and similar firms often have privileged access to infrastructure, public datasets, and policy influence. Their position allows them to shape standards and practices, while smaller actors remain constrained by technical, financial, and legal limitations [49]. The lack of clear antitrust or open-access policies contributes to the concentration of power and slows the development of alternative models. At the same time, AI holds potential to support entrepreneurship ecosystems by improving decision-making and reducing failure risks for small firms [50]. Realizing these benefits, however, requires regulatory pathways that ensure equitable access to AI tools and market opportunities for independent developers.

4.4. Marginalized Users and Vulnerable Populations

Economically disadvantaged users, people with disabilities, and non-dominant language speakers often experience limited benefits from AI systems. Many tools are developed primarily for high-income markets and English-speaking users, resulting in performance gaps and exclusion. These tools may fail to recognize local dialects, cultural norms, or accessibility needs, making them less useful or even harmful for affected individuals.
Workers in sectors susceptible to automation, such as customer service, logistics, and retail, also face increasing insecurity. While AI tools may increase productivity for employers, they can reduce job stability or shift labor demands without adequate training or support for affected employees.

5. Governance, Accountability, and Responsibility

Trustworthy integration of AI into society depends on three interconnected principles: governance, accountability, and responsibility. Governance provides the foundational structures, policy frameworks, and organizational processes that guide the development and use of AI systems in accordance with ethical norms, legal requirements, and public expectations. It involves setting clear roles, establishing standards for data quality and fairness, and applying review mechanisms throughout the AI lifecycle. Accountability refers to the need for transparent explanations of how AI systems operate and make decisions, especially when outcomes are harmful or controversial. It requires traceable records of development and deployment decisions that allow for meaningful audits and evaluations, even in the case of opaque models such as deep neural networks. Responsibility addresses the allocation of blame or liability when AI causes harm. Unlike accountability, which seeks to explain what occurred, responsibility determines who should be held answerable for the consequences. The issue is particularly complex in AI due to the autonomy of systems, the distributed nature of development, and the emergence of behaviors that cannot always be predicted. Traditional legal doctrines, such as product liability and negligence, face new challenges in determining whether the developer, operator, data provider, or another actor should bear responsibility when AI systems malfunction or cause unintended harm [51].

5.1. AI Governance Models

AI governance refers to the structured mechanisms that guide the responsible design, implementation, and oversight of AI systems. It aims to align AI with ethical values, legal norms, and societal goals, while minimizing risks and maximizing benefits across different sectors and populations [52]. Effective governance spans internal organizational practices, national policies, and international coordination, forming a multi-layered framework to address the far-reaching implications of AI technologies [53].
Most governance models emphasize foundational principles such as transparency, fairness, accountability, and security. Transparency involves clearly disclosing the developer, training data, and use cases of AI models. Users should be able to understand how systems reach decisions or predictions, even when those systems involve complex algorithms. Fairness focuses on minimizing bias and ensuring that outcomes do not reinforce social inequalities. Equitable treatment of individuals and groups must be maintained across all stages of AI development and deployment.
A core component of governance is the identification of roles and responsibilities throughout the AI lifecycle. Developers, deployers, and operators must each be accountable for decisions made by or through the system. Strong data governance is also essential. This includes anonymization, informed consent, protection of personal data, and quality control of training inputs. Systems must undergo robust testing and ongoing evaluation to assess vulnerabilities, especially those related to adversarial attacks or unanticipated behaviors.
Governance structures operate at different levels. Companies are expected to adopt internal standards through responsible AI teams, ethics committees, or designated officers within organizations. Legislative bodies introduce laws specifically designed for AI or extend existing regulations to cover emerging risks at the national level. Notable examples include the EU AI Act, which takes a risk-based approach, imposing stricter rules on high-risk applications such as medical, legal, or law enforcement systems [54]. Canada’s Artificial Intelligence and Data Act (AIDA) and China’s Generative AI Measures follow similar trajectories, aiming to regulate AI with varying levels of severity depending on the potential impact [55].
At both national and international levels, institutions are shaping governance through binding and voluntary frameworks. The U.S. National Institute of Standards and Technology (NIST) has issued a voluntary AI Risk Management Framework that provides organizations with detailed guidance on risk assessment, with an emphasis on promoting trustworthy AI. The OECD AI Principles, adopted by many countries, outline non-binding guidelines to support inclusive, sustainable, and human-centered AI systems [56].
Practical implementation of governance models continues to face challenges despite these developments. Rapid technological advances often outpace policy responses. Regulatory bodies may lack the technical expertise required to evaluate complex systems. Balancing innovation with risk reduction is a persistent tension. Global harmonization is also difficult due to regional differences and varying cultural or legal interpretations of fairness and responsibility.

5.2. Assigning Responsibility When AI Systems Cause Harm

Determining who is responsible when AI systems cause harm remains one of the most complex and unresolved challenges in the governance of AI [57]. Unlike traditional systems where causality and human intention are often traceable, AI introduces several layers of autonomy, opacity, and distributed development that obscure accountability. This becomes especially problematic when legal remedies or public trust depend on the ability to identify responsible parties.
Many AI models, especially deep learning systems, operate as “black boxes”. Even their developers may not fully understand the rationale behind specific outputs. This lack of interpretability complicates investigations following incidents where AI systems produce harmful or discriminatory results. These outcomes may arise from flawed training data, hidden biases, algorithmic errors, or the accumulation of small system interactions over time. Technical opacity challenges conventional liability frameworks based on intent, foreseeability, or direct causation.
The development of AI systems usually involves multiple actors. Each party may introduce risks, whether through poor data quality, flawed model design, inadequate testing, or lack of monitoring. Once deployed, models may also evolve based on user interaction or environmental feedback, making it difficult to assign blame to a single event or actor when something goes wrong.
Some systems also exhibit emergent behaviors, in which outcomes that were not explicitly programmed or foreseen during development may cause harm. If these behaviors are neither intentional nor testable in advance, legal responsibility becomes difficult to establish. Moreover, as AI becomes more autonomous, traditional human monitoring mechanisms become less effective, and the standard models of negligence or product liability appear insufficient.
Several regulatory responses are being developed to clarify these issues. With its tiered, risk-based approach, the EU AI Act is one of the most comprehensive efforts to date. Certain applications are outright prohibited, while high-risk systems, such as those used in healthcare, education, and law enforcement, must comply with strict requirements. These include documented risk assessments, detailed data governance practices, transparency obligations, and post-deployment monitoring. Although the Act does not directly revise liability laws, its compliance mechanisms function as de facto standards of care. Developers and deployers who fail to meet these standards may be found liable under general product liability or negligence doctrines.
In the United States, NIST has introduced a voluntary AI Risk Management Framework. Although it is not a regulatory tool, it can support organizations in demonstrating due diligence. Adoption of such standards may help reduce liability risks, while noncompliance may serve as evidence of negligence. Similarly, the OECD AI Principles, again non-binding, promote global norms such as transparency, fairness, and accountability [56]. Organizations that align with these principles may strengthen their credibility and reduce legal and reputational exposure.
Effective assignment of responsibility in AI systems will require regulatory adaptation and legal innovation. Strict liability regimes may be necessary for some high-risk products. New legal categories may emerge to address shared responsibility across complex supply chains. Requirements for documentation, impact assessments, and independent audits can contribute to greater traceability and help identify which parties should be held accountable.
Ultimately, AI accountability demands a combination of regulatory reform, organizational transparency, and shared ethical commitment. Establishing clear roles and legal obligations across the AI lifecycle is essential not only for remedying harm but also for building public confidence and supporting safe innovation.

6. International and Local Governance in AI Regulation

AI standards now vary widely across local, national, and international levels. While regional and state-level efforts respond to specific challenges and risks, global frameworks attempt to support coordination, ethical alignment, and compatibility. This section provides an overview of key legislative and regulatory approaches to AI governance, focusing on a selection of notable frameworks at the global and local levels. It is important to note that this is not an exhaustive review of all existing and proposed AI legislation.

6.1. U.S. Federal and State-Level Developments

In the United States, the absence of comprehensive federal AI legislation has given rise to a range of state-level laws and policies. These efforts reflect public concern about AI’s social impact but also risk creating a fragmented regulatory environment. Companies operating across state lines face added complexity when compliance requirements differ by jurisdiction.
California’s AB 2013, titled the Generative Artificial Intelligence Training Data Transparency Act, requires developers to disclose detailed information about the datasets used to train generative models [58]. Colorado’s AI Act, enacted in May 2024, introduces obligations for developers of high-risk AI systems, particularly those affecting housing, employment, and healthcare decisions [59]. In New York City, Local Law 144 came into force in 2023 and regulates the use of automated employment decision tools (AEDTs) [60].
Texas introduced the Responsible Artificial Intelligence Governance Act (TRAIGA), which will come into effect in 2026. It places limits on certain AI uses, such as behavioral manipulation and generation of unlawful deepfakes, and grants enforcement authority to the state’s Attorney General [61]. Montana’s Right to Compute Act protects individuals’ rights to use computing resources, including for AI development, while requiring risk assessment for AI-driven critical infrastructure [62].
A coordinated but non-legislative approach is emerging at the federal level. Executive Order 14110, issued in 2023, directs federal agencies to adopt safety, transparency, and accountability measures in their AI activities [34]. Agencies such as the FTC, FDA, and CFPB rely on existing legal frameworks to apply control over AI systems within their regulatory scopes. NIST introduced a voluntary but widely referenced AI Risk Management Framework (AI RMF), providing structured guidance for assessing and reducing AI-related risks [63].

6.2. International Standards and Frameworks

Global frameworks play a vital role in setting up shared ethical foundations and supporting interoperability across borders, particularly in light of the transnational nature of AI technologies.
The OECD AI Principles, first adopted in 2019 and updated in 2024, remain one of the earliest and most influential intergovernmental standards. These principles promote trustworthy AI that respects human rights, democratic values, and inclusive growth. Although non-binding, they have guided many national and corporate policies, reflecting widespread global support [12].
The UNESCO Recommendation on the Ethics of AI, endorsed by 193 member states in 2021, marked a significant step in global norm-setting. It positions AI within the broader framework of human rights, cultural diversity, and environmental sustainability. The recommendation emphasizes the need for robust governance structures and accountability mechanisms that protect individual dignity and societal well-being.
The EU AI Act, adopted in 2024, represents the most comprehensive legally binding framework to date. This regulation introduces a tiered, risk-based classification of AI systems, ranging from unacceptable to minimal risk. High-risk applications are subject to strict obligations [64].
A distinctive feature of the EU AI Act is its extraterritorial scope. Entities operating outside the EU must comply if their AI systems are used or marketed within the EU. This regulatory reach is expected to influence global standards by encouraging companies worldwide to align with EU requirements. It is also worth mentioning the Council of Europe’s Framework AI Convention, the first legally binding treaty specifically designed to regulate AI. The convention, which opened for signatures on 5 September 2024, aims to ensure that AI systems are developed and used with respect for human rights, democracy, and the rule of law [65].
Even as the EU AI Act lays out a comprehensive legal framework for the entire union, it is worth noting that several member states have either created their own national strategies for AI or are in the process of implementing them. These national efforts often seem to complement the EU AI Act, address specific national priorities, or simply set the stage for how the new law will be put into practice.
Some countries, like Germany, were ahead of the curve. In 2018, Germany released its own national AI strategy. While not a specific law, Germany’s strategy focuses on a human-centric approach and emphasizes promoting “AI made in Germany” in a way that is trustworthy [66]. Similarly, France has been working on its AI for Humanity strategy since 2018. The country is focused on a public–private partnership model to position itself as a leader in trustworthy AI. While a specific law has not been enacted, French regulatory bodies like the CNIL (Commission nationale de l’informatique et des libertés) have been proactive in applying existing data protection and intellectual property laws to AI systems [67]. Italy has an AI bill, approved by the Chamber of Deputies in June 2025, which is designed to implement parts of the EU AI Act while also adding national measures related to copyright, transparency, and even criminal enforcement [68]. The United Kingdom, while no longer part of the EU, remains a major influence in European AI governance. The UK has taken a principles-based, pro-innovation approach instead of a single, overarching AI law [69]. Other regions have taken more collaborative approaches, such as the ASEAN Guidance on AI Governance and the African Union Continental Artificial Intelligence Strategy, both adopted in 2024 [70].
There are several technical standardization bodies that contribute to the development of operational guidance and implementation tools. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) released ISO/IEC 42001, which outlines a formal structure for AI management systems [71]. This standard supports organizations in addressing governance, risk, and lifecycle concerns related to AI use.
The Institute of Electrical and Electronics Engineers (IEEE) has initiated the P7000 series, which focuses on ethical and social considerations in system design. Notable published standards include IEEE 7000-2021 (Standard Model Process for Addressing Ethical Concerns During System Design), IEEE 7001-2021 (Standard for Transparency of Autonomous Systems), and IEEE 7003-2024 (Standard for Algorithmic Bias Considerations). These efforts aim to guide engineers and developers in integrating fairness and accountability into AI design from the outset [72,73,74].
International efforts, although diverse in format and authority, reflect a growing recognition that AI development requires coordinated norms and mechanisms. While soft-law instruments like the OECD and UNESCO frameworks rely on voluntary adoption, the EU AI Act illustrates a shift toward binding regulation with enforceable standards. Together, these frameworks form a foundation for cross-border alignment in the governance of AI.

6.3. Interaction Between Local and Global AI Standards

Regulatory frameworks for AI operate at different levels, each carrying distinct legal authority, priorities, and methods of implementation. State-level regulations, such as Colorado’s AI Act or New York City’s Local Law 144, derive authority from the sovereignty of individual states. These regulations are binding within state borders and often come with defined penalties for non-compliance. Their content reflects local political priorities and immediate societal concerns.
In contrast, international frameworks such as the OECD AI Principles and UNESCO’s Ethics Recommendation function as non-binding guidelines. These instruments aim to build normative consensus by outlining broad principles like human dignity, sustainability, and inclusivity. While they lack enforcement mechanisms, their influence can be significant [75].
The EU AI Act represents an important departure from this soft-law approach. It is a binding legal framework with enforceable rules that apply not only to actors within the EU but also to those outside the EU who market or deploy AI systems within its territory. This signals a broader move toward internationally recognized standards [76]. While the EU AI Act is limited to its 27 member states, the Council of Europe’s AI Framework Convention is open to any country in the world. This key difference gives the convention a much broader potential global footprint and suggests that it could become a more universal framework for AI governance.
Although legally distinct, domestic and international frameworks often interact indirectly. For instance, global companies operating in multiple markets may adopt the strictest applicable standards to simplify compliance. This behavior encourages convergence between legally binding state regulations and non-binding international norms.
Differences between national laws and international principles can create legal uncertainty and compliance difficulties for businesses [77]. Some jurisdictions prioritize economic competitiveness over ethical concerns, while others emphasize human rights and fairness but struggle with enforcement. This divergence can weaken global coherence in AI governance and limit the effectiveness of voluntary frameworks.
Collaboration between jurisdictions, along with shared mechanisms for monitoring and dialog, will likely play a critical role in improving global alignment.

7. Timeline and Evolution of AI Standards and Policies

To understand AI-related governance, some key terms must first be clarified. The terms governance, regulation, standardization, and certification each play a distinct role in shaping how AI systems are designed, developed, and managed across various domains.
Governance is a system of practices intended to maximize benefits and minimize harms caused by AI systems and refers to the broader structure in which decisions about AI are made. It includes formal and informal rules, processes, and institutions that influence the direction and use of AI. Governance can be initiated by governments, companies, sectors, or communities [78].
Regulation of AI is the development of public-sector rules, policies, and laws that govern how AI is promoted and used. These are enforceable and often include penalties for non-compliance. Regulation aims to address risks such as bias, discrimination, privacy violations, or harmful uses of AI [79].
Standardization refers to the creation of shared technical or procedural criteria that guide the development, evaluation, and use of AI systems, aiming to promote consistency, safety, and interoperability across different contexts [80].
Certification refers to the attestation that a product, process, person, or organization meets specified criteria. It usually involves an assessment conducted by a trusted third party and provides external stakeholders with assurance that ethical or regulatory standards have been met. In this way, certification functions as a governance tool that promotes transparency and incentivizes compliance with established requirements [81].
AI governance operates at multiple levels that complement one another. At the international level, organizations introduce principles, recommendations, and frameworks that extend across countries. National governments establish their own AI strategies, legal rules, and governance mechanisms, while provinces, regions, and cities address local needs and concerns through subnational approaches. Sectoral governance develops in areas such as healthcare, education, and transportation, often shaped by industry associations or regulatory bodies. At the corporate level, companies set internal procedures for ethical review, safety checks, and responsible design. Organizational governance is applied by universities, research institutions, and nonprofits to guide the development and use of AI. Community-led and worker-led governance adds another dimension, as unions, local groups, and advocacy networks raise awareness, influence policies, and propose alternative accountability models. Across these levels, different policy tools are employed, including legislation such as national AI acts and regional directives, standards such as technical documents and voluntary risk management frameworks, codes of conduct adopted by developers, and audit or assessment instruments such as fairness checklists or risk scoring systems [78].
These definitions clarify the distinct yet interconnected mechanisms that shape AI in practice. Governance provides the overall structure, regulation establishes enforceable rules, standardization sets shared technical criteria, and certification offers external validation of compliance with those criteria. Together they show the range of instruments available for guiding the responsible development and use of AI. The timeline in Table 4, Table 5 and Table 6 presents how these instruments have developed across global contexts, documenting milestones in governance, regulation, and standardization. The entries demonstrate how international organizations, governments, industries, and communities have advanced policies and frameworks in response to technological progress, societal expectations, and ethical concerns [82,83,84].

8. Conflicts and Challenges in Fragmented Governance

Global AI governance faces mounting complications due to regulatory fragmentation. Policymakers across different jurisdictions introduce varied legal instruments, often grounded in distinct political, cultural, or economic priorities. This diversity creates legal ambiguity for those who build or implement AI technologies. Developers encounter inconsistent expectations, where the same system may be categorized and regulated differently across borders. This lack of clarity increases compliance costs and raises the risk of unintentional violations.
Different countries apply conflicting definitions to key concepts such as high-risk AI or automated decision-making. These discrepancies challenge cross-border coordination and obstruct the development of interoperable standards. Enforcement agencies operate without shared terminology or enforcement benchmarks. This limits effective collaboration and complicates dispute resolution. Inconsistent review capacity further weakens global trust in AI regulation. While some governments invest in expert institutions and public audits, others struggle with limited resources or lack independent regulatory mechanisms.
Companies exploit these inconsistencies. Firms often relocate or deploy systems in regions with weaker regulatory enforcement, a practice known as regulatory arbitrage. This practice places higher burdens on countries that adopt stricter frameworks, destabilizes global efforts to promote accountability, and reduces protections for affected individuals and communities. Some regions hesitate to enforce stricter norms out of fear that innovation and investment will shift elsewhere [85].
Geopolitical priorities continue to shape how nations approach AI development. For many countries, AI is viewed as a strategic asset with potential to strengthen national capabilities across sectors such as defense, digital infrastructure, and industrial innovation. This strategic framing influences both domestic agendas and the scope of international collaboration. Differences in national objectives and regulatory traditions contribute to varied approaches, which may limit opportunities for deep technical cooperation. In this environment, aligning global efforts remains challenging, but sustained dialog and coordination can create pathways for building shared standards that support both innovation and collective benefit. Many countries are now working to balance the pursuit of local progress with the broader aim of contributing to global alignment in AI development.
Cultural divergence also influences how societies interpret key concepts such as fairness and responsibility. These interpretations shape public support for different regulatory models and affect how rules are designed and applied. Disagreement about foundational values complicates efforts to build universal frameworks for AI safety or ethics.
Corporate actors exercise significant influence on rulemaking processes. Major technology companies often participate in technical standardization, shape early drafts of regulatory proposals, and fund research that informs policy. Although this involvement supports technical feasibility, it may marginalize civil society groups and shift priorities toward commercial outcomes. As a result, some communities and user groups remain underrepresented in governance discussions.
Together, these pressures produce incoherent regulatory outcomes. Some sectors face overlapping requirements, while others operate without any formal control. Emerging risks may fall outside current frameworks entirely. In the absence of coordinated governance, AI systems may evolve in incompatible ways, limiting opportunities for international collaboration and damaging trust. Reaching common ground on AI governance will require balancing national interests with global commitments and designing mechanisms that promote both fairness and functionality.

9. Toward Coherent and Comprehensive AI Standards

Global compatibility of standards is often viewed as a desirable goal since AI systems operate across borders. Scholars have argued that fragmented or conflicting rules can create barriers to innovation, limit enforcement, and increase compliance burdens for developers and regulators alike [86]. Shared expectations may help support more inclusive, transparent, and accountable development, particularly when different values and risks are at stake. In practice, however, political fragmentation and divergent regulatory philosophies present ongoing obstacles. National interests, legal traditions, and cultural norms influence how countries approach these problems.
Some of these differences are evident in the contrasting trajectories of leading AI platforms. ChatGPT, developed in the United States, has evolved within a commercial ecosystem that limits direct government control. DeepSeek, developed in China, reflects a regulatory model that emphasizes centralized control and tight restrictions on content and access [87]. These models illustrate how underlying political structures shape both the design and governance of AI systems, making full convergence around global standards unlikely in the near future.

9.1. Regional Cooperation and the Role of Soft Law

Although a unified global standard is not likely to be achieved, regional alliances such as the EU, the African Union, and ASEAN offer promising venues for building coordinated approaches among countries with shared priorities. These alliances can reduce regulatory fragmentation and support more consistent application of standards. For instance, the aforementioned EU AI Act allows companies to operate under a single regulatory framework across all 27 member states, rather than navigating individual national laws, thereby streamlining compliance and facilitating cross-border innovation [88].
Emerging economies such as Indonesia and the Philippines can influence regional standards by participating actively in alliances like ASEAN, which creates opportunities for their perspectives to shape policies rather than simply adopting those set by dominant actors [89]. Beyond regional unions, the Council of Europe’s Framework Convention on AI, Human Rights, Democracy and the Rule of Law represents the first binding international treaty on AI governance. While not yet in force, it provides a reference point for states outside the EU and highlights the growing convergence between soft-law instruments and binding legal frameworks.
In the early stages of AI governance, many regions favored soft-law instruments like nonbinding principles, frameworks, and recommendations. These tools allow flexibility as technologies evolve, while still promoting accountability and shared values. A notable example is the OECD AI Principles, adopted in 2019 and updated in 2024, which articulate commitments to fairness, transparency, robustness, and human rights. Although not legally enforceable, they influence national policies and help developers anticipate future regulatory trends [90].

9.2. Supporting Interoperability While Allowing Adaptation

The absence of a single global regime does not prevent the pursuit of interoperability. Interoperable standards enable systems to function across regions while respecting local differences. This approach offers developers and regulators a shared foundation to guide system design and compliance, even when enforcement mechanisms or policy details vary.
International standardization bodies such as ISO and IEEE are central to this process. They work to establish common definitions, performance metrics, and management frameworks for AI systems. These shared tools help moderate regulatory uncertainty and facilitate cross-border collaboration [91].
Interoperability also supports adaptability. As technologies evolve or new risks emerge, local authorities may revise their policies without disrupting existing systems. A consistent baseline, such as minimum transparency requirements or documentation practices, can help developers meet diverse legal obligations without significant redesign. This approach balances stability with flexibility and encourages responsible scaling of AI systems worldwide.
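To make this idea concrete, the sketch below shows one way such a baseline documentation record could be expressed in machine-readable form. It is a minimal illustration only, written in Python under our own assumptions: the class name, field names, and example values are hypothetical and are not drawn from any specific standard, regulation, or certification scheme discussed in this article.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical, minimal "baseline documentation" record for an AI system.
# The fields are illustrative only; they sketch the kind of shared
# transparency baseline discussed above, not an actual standard's schema.

@dataclass
class AISystemRecord:
    system_name: str
    provider: str
    intended_purpose: str
    risk_category: str  # e.g., a locally defined tier such as "high" or "limited"
    training_data_summary: str
    known_limitations: list[str] = field(default_factory=list)
    human_oversight_measures: list[str] = field(default_factory=list)
    jurisdiction_notes: dict[str, str] = field(default_factory=dict)  # region-specific additions

    def to_json(self) -> str:
        """Serialize the record so it can be exchanged across developers and regulators."""
        return json.dumps(asdict(self), indent=2)

if __name__ == "__main__":
    record = AISystemRecord(
        system_name="ExampleTriageAssistant",
        provider="Example Org",
        intended_purpose="Prioritize customer support tickets",
        risk_category="limited",
        training_data_summary="Anonymized historical support tickets (2019-2024)",
        known_limitations=["Not evaluated for non-English tickets"],
        human_oversight_measures=["Agent review before escalation"],
        jurisdiction_notes={"EU": "Transparency notice required for end users"},
    )
    print(record.to_json())
```

A record of this kind could serve as a common core that remains stable across regions, while the jurisdiction-specific fields absorb local differences, which is the balance between stability and flexibility described above.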

9.3. Including Underrepresented Regions and Diverse Stakeholders

Without active measures to expand participation, the priorities and interests of underrepresented groups may be sidelined. These groups include nations in the Global South, small and medium-sized enterprises, marginalized communities, and sectors with limited technical capacity.
Equitable participation in standard-setting processes is essential to create fair and widely accepted rules. Inclusive governance requires transparent procedures, accessible public comment opportunities, and ongoing mechanisms for evaluation and revision [92,93]. Committees that develop technical or ethical guidelines should reflect geographic, cultural, and economic diversity.
A key challenge in developing baseline interoperability is avoiding dominance by a narrow set of actors. Standards should not be dictated solely by a few economic or geopolitical interests. Instead, the process should seek consensus around minimum expectations that all stakeholders can support. Public trust and legitimacy increase when people understand how decisions are made and feel their perspectives are considered.
Establishing such frameworks involves more than technical precision. It requires attention to process, values, and accountability. Advisory bodies must provide room for diverse input, respect for context, and the possibility of course correction. In this way, interoperable and inclusive AI standards can support innovation without replicating existing global inequalities [94].

10. Final Remarks

This paper has examined the development, risks, and governance challenges of AI systems through a global and multi-level lens. It has analyzed how AI influences nearly all sectors of modern life and how its rapid advancement demands not only technical solutions but also legal, ethical, and institutional responses. The discussion covered core concepts such as regulation, standardization, and certification, examined the tension between global and local challenges, and provided a timeline of key milestones that reflect the ongoing attempt to shape AI responsibly. A central concern throughout the paper has been the uneven distribution of both the benefits and harms of AI technologies. These disparities extend across geographic, economic, and social boundaries, often leaving vulnerable communities at greater risk and with fewer protections.
The complexity of AI governance lies in its cross-border and cross-sector nature. Regulatory institutions struggle to maintain pace with evolving technologies, while corporations often push products into the market before fully assessing the consequences. National agendas increasingly prioritize technological dominance, and in many cases, this ambition displaces public interest. Corporate actors frequently avoid meaningful accountability through lobbying, opacity in system design, and fragmented compliance with international norms. These dynamics obstruct efforts to build a shared and ethical foundation for AI governance.
Efforts to create universal frameworks face persistent challenges. Diverging national laws, cultural interpretations of privacy and fairness, and differences in technical infrastructure weaken attempts at alignment. Legal uncertainty remains a barrier for developers, and unclear responsibilities among institutions allow harmful or biased systems to proliferate. In such an environment, the demand for flexible yet consistent regulatory models becomes urgent. Standards must be designed to adapt without weakening core protections, and certification processes must apply equally across regions and platforms.
Governance requires more than high-level declarations. It must embed public accountability, access to redress, and mechanisms for independent review into the entire AI lifecycle. Developers must prioritize system transparency and long-term societal outcomes rather than short-term performance gains. Policymakers must resist pressures to lower ethical requirements for the sake of economic gain. Civil society must gain stronger roles in shaping decisions, particularly in areas that affect public infrastructure, education, and health.
Future AI systems will become more integrated with decision-making at all levels. This intensifies the need for institutions to build technical knowledge, coordinate across fields, and apply transparent tools for evaluation and responsibility. Regional partnerships, voluntary frameworks, and shared technical standards may offer paths toward interoperability. However, these must be accompanied by binding responsibilities and meaningful monitoring to avoid becoming symbolic gestures. Existing power asymmetries between corporations and regulators, and between nations with advanced infrastructure and those without, must be addressed directly to prevent deepening inequality.
Strong public institutions and inclusive design processes must guide AI development to prevent it from reflecting only the interests of dominant actors rather than the needs of global populations. Private control over datasets, algorithms, and compute infrastructure concentrates unprecedented influence in a small number of companies. The drive for market control, coupled with nationalist competition to achieve “AI superiority,” creates further obstacles to long-term cooperation. These forces pose serious risks to equity, transparency, and the development of technology that serves collective well-being.
Several questions demand further attention: How can governance models reduce the influence of private power in shaping AI systems? Which institutional designs can support broad participation while keeping pace with rapid development? What protections can function when political systems lack independence or stability? How can accountability remain visible and enforceable when AI systems operate across national boundaries? These questions are not theoretical; they reflect the core tensions that shape the future of AI.
Technical progress holds little meaning if it fails to reflect democratic values, shared responsibility, and equitable access. The future of AI will be shaped not by its speed but by who defines its purpose and who benefits from its deployment. AI will transform societies regardless of national borders or institutional preferences. The critical question is not whether regulation will come, but whether it will arrive in time and in the right form. No innovation exists apart from the choices made around it. Governments, corporations, and communities now face a pivotal decision: whether to allow AI to deepen inequality or to demand frameworks that advance justice and protect the public good.

Author Contributions

Z.O.: Original draft preparation, Section 1, Section 7, Section 8 and Section 10, supervision, review and editing; M.O.: Section 1 and Section 5, review; B.D.L.: Section 4, Section 9 and Section 10, review; N.R.M.: Section 3, review; R.V.K.B.: Section 2, review; B.P.: Section 6, review. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data is contained in the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Exploding Topics. AI Statistics. 2025. Available online: https://explodingtopics.com/blog/ai-statistics (accessed on 5 September 2025).
  2. Authority Hacker. AI Statistics. 2025. Available online: https://www.authorityhacker.com/ai-statistics/ (accessed on 5 September 2025).
  3. PwC. AI Predictions. 2025. Available online: https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html (accessed on 5 September 2025).
  4. Forbes. 3 Ways Artificial Intelligence Is Transforming Business Operations. 2019. Available online: https://www.forbes.com/sites/falonfatemi/2019/05/29/3-ways-artificial-intelligence-is-transforming-business-operations/ (accessed on 5 September 2025).
  5. Forbes Advisor. AI Statistics. 2025. Available online: https://www.forbes.com/advisor/business/ai-statistics/ (accessed on 5 September 2025).
  6. Statista. Artificial Intelligence in Labor and Productivity. 2025. Available online: https://www.statista.com/topics/11516/artificial-intelligence-ai-in-labor-and-productivity/ (accessed on 5 September 2025).
  7. AIPRM. AI Statistics. 2025. Available online: https://www.aiprm.com/ai-statistics/ (accessed on 5 September 2025).
  8. Agility PR Solutions. AI Super Users: New Research Asserts That About One Sixth of the General Population Uses Generative AI Every Day. 2025. Available online: https://www.agilitypr.com/pr-news/public-relations/ai-super-users-new-research-asserts-that-about-one-sixth-of-the-general-population-uses-generative-ai-every-day-are-you-among-them/ (accessed on 5 September 2025).
  9. European Commission. European Approach to Artificial Intelligence. 2025. Available online: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence (accessed on 5 September 2025).
  10. White House. White House Unveils America’s AI Action Plan. 2025. Available online: https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/ (accessed on 5 September 2025).
  11. DigiChina. Translation: China Outlines Principles for Governing AI. 2025. Available online: https://digichina.stanford.edu/work/translation-china-outlines-principles-for-governing-ai/ (accessed on 5 September 2025).
  12. OECD. AI Principles. 2024. Available online: https://www.oecd.org/en/topics/sub-issues/ai-principles.html (accessed on 5 September 2025).
  13. UNESCO. Recommendation on the Ethics of Artificial Intelligence. 2025. Available online: https://www.unesco.org/en/artificial-intelligence/recommendation-ethics (accessed on 5 September 2025).
  14. G20 and Artificial Intelligence. 2025. Available online: https://www.caidp.org/resources/g20/ (accessed on 5 September 2025).
  15. Lund, B.; Orhan, Z.; Mannuru, N.R.; Bevara, R.V.; Porter, B.; Vinaih, M.K.; Bhaskara, P. Standards, frameworks, and legislation for artificial intelligence (AI) transparency. AI Ethics 2025, 5, 3639–3655. [Google Scholar] [CrossRef]
  16. Rotenberg, M.; Kyriakides, E. The AI Policy Sourcebook 2025; Center for AI and Digital Policy, 2025; ISBN 979-8218606619. [Google Scholar]
  17. Microsoft. Responsible AI. 2025. Available online: https://www.microsoft.com/ai/responsible-ai (accessed on 5 September 2025).
  18. Andhov, A. OpenAI’s transformation: From a non-profit to a 157 billion valuation. Bus. Law Rev. 2025, 46, 2–11. [Google Scholar] [CrossRef]
  19. OpenAI. Policy Research. 2025. Available online: https://openai.com/safety/ (accessed on 5 September 2025).
  20. Harris, L. Regulating Artificial Intelligence: U.S. and International Approaches; Congressional Research Service. (CRS Report No. R48555); 4 June 2025. Available online: https://www.congress.gov/crs-product/R48555 (accessed on 5 September 2025).
  21. Taeihagh, A. Governance of generative AI. Policy Soc. 2025, 44, 1–22. [Google Scholar] [CrossRef]
  22. Harvard Law Review. Co-governance and the future of AI regulation. Harv. Law Rev. 2025, 138, 1609–1635. [Google Scholar]
  23. Singh, P. Systematic review of data-centric approaches. Data Sci. Manag. 2023, 6, 144–157. [Google Scholar] [CrossRef]
  24. Sinha, S.; Lee, Y.M. Challenges with developing and deploying AI models and applications in industrial systems. Discov. Artif. Intell. 2024, 4, 55. [Google Scholar] [CrossRef]
  25. Alsaigh, R.; Mehmood, R.; Katib, I. AI explainability and governance in smart energy systems: A review. Front. Energy Res. 2023, 11, 1071291. [Google Scholar] [CrossRef]
  26. Kshetri, N. The environmental impact of artificial intelligence. IT Prof. 2024, 26, 9–13. [Google Scholar] [CrossRef]
  27. Regilme, S.S.F. Artificial Intelligence Colonialism: Environmental Damage, Labor Exploitation, and Human Rights Crises in the Global South. SAIS Rev. Int. Aff. 2024, 44, 75–92. [Google Scholar] [CrossRef]
  28. Hacker, P. Sustainable AI regulation. Common Mark. Law Rev. 2024, 61, 451–482. [Google Scholar] [CrossRef]
  29. Papagiannidis, E.; Mikalef, P.; Conboy, K. Responsible artificial intelligence governance: A review and research framework. J. Strateg. Inf. Syst. 2025, 34, 101885. [Google Scholar] [CrossRef]
  30. Sloane, M.; Wüllhorst, E. A systematic review of regulatory strategies and transparency mandates in AI regulation in Europe, the United States, and Canada. Data Policy 2025, 7, e11. [Google Scholar] [CrossRef]
  31. Paul, R. Chapter 20: The politics of regulating AI technologies: Towards AI competition states. In Handbook on Public Policy and Artificial Intelligence; Edward Elgar Publishing: Cheltenham, UK, 2024; pp. 261–279. [Google Scholar] [CrossRef]
  32. Batool, A.; Zowghi, D.; Bano, M. AI governance: A systematic literature review. AI Ethics 2025, 5, 3265–3279. [Google Scholar] [CrossRef]
  33. Wei, K.; Ezell, C.; Gabrieli, N.; Deshpande, C. How do AI companies “fine-tune” policy? Examining Regulatory Capture in AI Governance. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, San Jose, CA, USA, 21–23 October 2024; Volume 7, pp. 1539–1555. [Google Scholar] [CrossRef]
  34. Biden, J.R. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Executive Order No. 14110, 88 Fed. Reg. 75191 (30 October 2023). Available online: https://digital.library.unt.edu/ark:/67531/metadc2289524/ (accessed on 5 September 2025).
  35. Siegmann, C.; Anderljung, M. The Brussels effect and artificial intelligence. arXiv 2022, arXiv:2208.12645. [Google Scholar] [CrossRef]
  36. U.S. Food & Drug Administration. Artificial Intelligence in Software as a Medical Device. U.S. Department of Health and Human Services; 2025. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-software-medical-device (accessed on 5 September 2025).
  37. Innovation, Science and Economic Development Canada (ISED). The Artificial Intelligence and Data Act (AIDA)—Companion Document. Government of Canada. 2025. Available online: https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document (accessed on 5 September 2025).
  38. World Economic Forum. AI Governance Alliance. 2023. Available online: https://initiatives.weforum.org/ai-governance-alliance/home (accessed on 5 September 2025).
  39. White, J.; Lidskog, R. Ignorance and the regulation of artificial intelligence. J. Risk Res. 2022, 25, 488–500. [Google Scholar] [CrossRef]
  40. Muneer, M.; Rehman, F.; Sajjad, M.H.; Anwar, M.; Qureshi, K.N. Security and privacy concerns in AI models. In Next Generation AI Language Models in Research; CRC Press: Boca Raton, FL, USA, 2024; pp. 293–326. [Google Scholar]
  41. Kop, M. AI & intellectual property: Towards an articulated public domain. Tex. Intellect. Prop. Law J. 2019, 28, 298–342. [Google Scholar]
  42. Lutz, C. Digital inequalities in the age of AI. Hum. Behav. Emerg. Technol. 2019, 1, 141–148. [Google Scholar] [CrossRef]
  43. Musheyev, D.; Pan, A.; Gross, P.; Kamyab, D.; Kaplinsky, P.; Spivak, M.; Bragg, M.A.; Loeb, S.; Kabarriti, A.E. Readability and Information Quality in Cancer Information from a Free vs. Paid Chatbot. JAMA Netw. Open 2024, 7, e2422275. [Google Scholar] [CrossRef]
  44. Celik, I. Exploring the determinants of artificial intelligence (AI) literacy: Digital Divide, Computational Thinking, Cognitive Absorption. Telemat. Inform. 2023, 83, 102026. [Google Scholar] [CrossRef]
  45. Larsson, S.; Heintz, F. Transparency in artificial intelligence. Internet Policy Rev. 2020, 9, 1–16. [Google Scholar] [CrossRef]
  46. Chesterman, S. Good models borrow, great models steal: Intellectual property rights and generative AI. Policy Soc. 2025, 44, 23–37. [Google Scholar] [CrossRef]
  47. Hua, S.S.; Belfield, H. AI & Antitrust: Reconciling Tensions Between Competition Law and Cooperative AI Development. Yale J. Law Technol. 2021, 23, 415–551. [Google Scholar]
  48. Park, S. Bridging the global divide in AI regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework. Wash. Int. Law J. 2024, 33, 216. [Google Scholar] [CrossRef]
  49. ASEAN. ASEAN Guide on AI Governance and Ethics. ASEAN. 2024. Available online: https://asean.org/book/asean-guide-on-ai-governance-and-ethics/ (accessed on 5 September 2025).
  50. Chaves-Maza, M.; Fedriani, E.M. How to avoid profiles of failure when supporting entrepreneurs in an economic crisis. J. Res. Mark. Entrep. 2025, 27, 17–38. [Google Scholar] [CrossRef]
  51. Herrera-Poyatos, A.; Del Ser, J.; de Prado, M.L.; Wang, F.Y.; Herrera-Viedma, E.; Herrera, F. Responsible Artificial Intelligence Systems: A Roadmap to Society’s Trust through Trustworthy AI, Auditability, Accountability, and Governance. arXiv 2025. [Google Scholar] [CrossRef]
  52. Mucci, T.; Stryker, C. What Is AI Governance? IBM Official Site. 2024. Available online: https://www.ibm.com/think/topics/ai-governance (accessed on 5 September 2025).
  53. Niesche, C. Artificial intelligence: The AI tightrope. Co. Dir. 2024, 40, 54–57. [Google Scholar]
  54. Ebers, M. Truly risk-based regulation of artificial intelligence how to implement the EU’s AI Act. Eur. J. Risk Regul. 2024, 16, 684–703. [Google Scholar] [CrossRef]
  55. Hooshidary, S.; Canada, C.; Clark, W. Artificial Intelligence in Government: The Federal and State Landscape. NCSL Report on AI, Cybersecurity, and Privacy. 2024. Available online: https://documents.ncsl.org/wwwncsl/Technology/Government-State-Fed-Landscape-v02.pdf (accessed on 5 September 2025).
  56. Russo, L.; Oder, N. OECD Principles for Trustworthy AI. 2023. Available online: https://oecd.ai/en/wonk/national-policies-2 (accessed on 5 September 2025).
  57. Cheong, B.C. Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. Front. Hum. Dyn. 2024, 6, 1421273. [Google Scholar] [CrossRef]
  58. Hancock, P.; Wofford, T. California’s AB 2013: Challenges and Opportunities. 27 November 2024. Available online: https://www.bakerbotts.com/thought-leadership/publications/2024/november/ca-ab-2013_gen-ai-compliance (accessed on 5 September 2025).
  59. Cho, T.; Claypoole, T.; Ey, T. Colorado Passes Consumer Protection Law Regulating AI. National Law Review. 19 June 2024. Available online: https://natlawreview.com/article/colorado-passes-consumer-protection-law-regulating-ai (accessed on 5 September 2025).
  60. Clavell, G.G.; González-Sendino, R. What we learned while automating bias detection in AI hiring systems for compliance with NYC local law 144. arXiv 2024. [Google Scholar] [CrossRef]
  61. McGinnis, K.; Carterrn, J. Texas Enacts the Responsible AI Governance Act. 7 July 2025. Available online: https://www.jdsupra.com/legalnews/a-red-state-model-for-comprehensive-ai-3452779/ (accessed on 5 September 2025).
  62. S.B. 212, 69th Leg., Reg. Sess. (Mont. 2025). Creating the Right to Compute Act and Requiring Shutdowns of AI Controlled Critical Infrastructure. Available online: https://bills.legmt.gov/#/laws/bill/2/LC0292 (accessed on 5 September 2025).
  63. Lahiri, S.; Saltz, J. The need for a risk management framework. Int. J. Inf. Syst. Proj. Manag. 2024, 12, 41–57. [Google Scholar] [CrossRef]
  64. Laux, J.; Wachter, S.; Mittelstadt, B. Trustworthy artificial intelligence and the European Union AI act. Regul. Gov. 2024, 18, 3–32. [Google Scholar] [CrossRef]
  65. Lebret, A. The Council of Europe Convention on Artificial Intelligence and Human Rights: A primarily procedural step towards safeguarding health rights in the digital age. J. Glob. Health Law 2025, 2, 93–113. [Google Scholar] [CrossRef]
  66. Hirsch-Kreinsen, H.; Krokowski, T. Trustworthy AI: AI made in Germany and Europe? AI Soc. 2024, 39, 2921–2931. [Google Scholar] [CrossRef]
  67. Duflot, A. Artificial Intelligence in the French Law of 2024. Leg. Issues Digit. Age 2024, 5, 37–56. [Google Scholar] [CrossRef]
  68. Gallese, C. Italy ∙ The Italian Artificial Intelligence Bill Draft. J. AI Law Regul. 2025, 2, 181–187. [Google Scholar] [CrossRef]
  69. Roberts, H.; Babuta, A.; Morley, J.; Thomas, C.; Taddeo, M.; Floridi, L. Artificial intelligence regulation in the United Kingdom: A path to good governance and global leadership? Internet Policy Rev. 2023, 12, 1709. [Google Scholar] [CrossRef]
  70. Smith, G.; Stanley, K.D.; Marcinek, K.; Cormarie, P.; Gunashekar, S. Liability for Harms from AI Systems: The Application of U.S. Tort Law and Liability to Harms from Artificial Intelligence Systems. RAND Corporation. 20 November 2024. Available online: https://www.rand.org/pubs/research_reports/RRA3243-4.html (accessed on 5 September 2025).
  71. Benraouane, S.A. AI Management System Certification According to the ISO/IEC 42001 Standard: How to Audit, Certify, and Build Responsible AI Systems; Productivity Press: New York, NY, USA, 2024. [Google Scholar]
  72. IEEE 7000-2021; IEEE Standard Model Process for Addressing Ethical Concerns During System Design. IEEE: New York, NY, USA, 2021. Available online: https://standards.ieee.org/ieee/7000/6781/ (accessed on 6 September 2025).
  73. IEEE 7001-2021; IEEE Standard for Transparency of Autonomous Systems. IEEE: New York, NY, USA, 2022. Available online: https://standards.ieee.org/ieee/7001/6929/ (accessed on 6 September 2025).
  74. IEEE 7003-2024; IEEE Standard for Algorithmic Bias Considerations. IEEE: New York, NY, USA, 2025. Available online: https://standards.ieee.org/ieee/7003/11357/ (accessed on 6 September 2025).
  75. Chang, C. Global Minds, Local Governance: AI in International Law. Social Science Research Network. 2025. Available online: https://illinoislawreview.org/online/global-minds-local-governance/ (accessed on 5 September 2025).
  76. Bruder, A.; Kourinian, A. The Impact of the EU AI Act on AI Reseller Deals. Mayer Brown. 2024. Available online: https://www.mayerbrown.com/en/insights/publications/2024/11/the-impact-of-the-eu-ai-act-on-ai-reseller-deals (accessed on 5 September 2025).
  77. Pasam, T.P. An Analysis of the Regulatory Landscape and how it Impacts the Adoption of AI in Compliance. Int. J. Innov. Res. Comput. Commun. Eng. 2024, 12, 9110–9118. [Google Scholar] [CrossRef]
  78. Attard-Frost, B.; Lyons, K. AI governance systems: A multi-scale analysis framework, empirical findings, and future directions. AI Ethics 2025, 5, 2557–2604. [Google Scholar] [CrossRef]
  79. Barfield, W.; Pagallo, U. Research Handbook on the Law of Artificial Intelligence; Edward Elgar Publishing: Cheltenham, UK, 2018; ISBN 978-1-78643-904-8. [Google Scholar]
  80. AI Standards Hub. Standards at a Glance. 2025. Available online: https://aistandardshub.org/resource/main-training-page-example/1-what-are-standards/ (accessed on 5 September 2025).
  81. Cihon, P. Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development; Oxford Future of Humanity Institute: Oxford, UK, 2019; Volume 40, pp. 340–342. Available online: https://www.governance.ai/research-paper/standards-for-ai-governance-international-standards-to-enable-global-coordination-in-ai-research-development (accessed on 5 September 2025).
  82. European Commission. Proposal for a Regulation on a European approach for Artificial Intelligence. 2021. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (accessed on 5 September 2025).
  83. NIST. Artificial Intelligence Risk Management Framework (AI RMF 1.0). 2023. Available online: https://www.nist.gov/itl/ai-risk-management-framework (accessed on 5 September 2025).
  84. OECD. OECD Principles on Artificial Intelligence. 2019. Available online: https://www.oecd.org/going-digital/ai/principles (accessed on 5 September 2025).
  85. Willesson, M. What Is and What Is not Regulatory Arbitrage? A Review and Syntheses. In Financial Markets, SME Financing and Emerging Economies. Palgrave Macmillan Studies in Banking and Financial Institutions; Chesini, G., Giaretta, E., Paltrinieri, A., Eds.; Palgrave Macmillan: Cham, Switzerland, 2017. [Google Scholar] [CrossRef]
  86. Cihon, P.; Kleinaltenkamp, M.J.; Schuett, J.; Baum, S.D. AI certification: Advancing ethical practice by reducing information asymmetries. IEEE Trans. Technol. Soc. 2021, 2, 200–209. [Google Scholar] [CrossRef]
  87. Roberts, H.; Cowls, J.; Morley, J.; Taddeo, M.; Wang, V.; Floridi, L. The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation. In Ethics, Governance, and Policies in Artificial Intelligence; Floridi, L., Ed.; Philosophical Studies Series; Springer: Cham, Switzerland, 2021; Volume 144. [Google Scholar] [CrossRef]
  88. Nannini, L.; Balayn, A.; Smith, A.L. Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK. In Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, 12–15 June 2023; ACM International Conference Proceeding Series. ACM: New York, NY, USA, 2023. [Google Scholar] [CrossRef]
  89. Keith, A.J. Governance of artificial intelligence in Southeast Asia. Glob. Policy 2024, 15, 937–954. [Google Scholar] [CrossRef]
  90. Bo, N.S. OECD digital education outlook 2023: Towards an effective education ecosystem. Hung. Educ. Res. J. 2025, 15, 284–289. [Google Scholar] [CrossRef]
  91. Gonzalez Torres, A.P.; Ali-Vehmas, T. AI regulation: Maintaining interoperability through value-sensitive standardisation. Ethics Inf. Technol. 2025, 27, 26. [Google Scholar] [CrossRef]
  92. Florunso, A.; Olanipekun, K.; Adewumi, T.; Samuel, B. A policy framework on AI usage in developing countries. Glob. J. Eng. Technol. Adv. 2024, 21, 154–166. [Google Scholar] [CrossRef]
  93. Mannuru, N.R.; Shahriar, S.; Teel, Z.A.; Wang, T.; Lund, B.D.; Tijani, S.; Pohboon, C.O.; Agbaji, D.; Alhassan, J.; Galley, J.; et al. Artificial intelligence in developing countries: The impact of generative artificial intelligence (AI) technologies for development. Inf. Dev. 2023, 41, 1036–1054. [Google Scholar] [CrossRef]
  94. Danks, D.; Trusilo, D. The challenge of ethical interoperability. Digit. Soc. 2022, 1, 11. [Google Scholar] [CrossRef]
Table 1. AI Adoption and Economic Impact.

Impact Metric | Reported Value
Companies using/exploring AI | 77%
Companies prioritizing AI strategies | 83%
Devices containing AI technology | 77%
Organizations leveraging AI competitively | 90%
Projected global economic contribution by 2030 | $15.7 trillion
Net job creation by AI (2025) | 12 million jobs
Table 2. Top Business Applications of AI.

AI Application Area | Adoption Rate (%)
Customer Service | 56%
Cybersecurity | 51%
Digital Personal Assistants | 47%
Customer Relationship Management | 46%
Inventory Management | 40%
Table 3. Comparison of Major AI Regulatory Strategies and Their Key Strengths and Weaknesses.

Regulatory Strategy | Strengths | Weaknesses
Risk-based | Aligns rules with risk level; promotes consistency | May be rigid or too generic for some contexts
Contextual or task-specific | Adapts to domain needs; uses existing laws | May result in fragmented or overlapping rules
Modular hybrid | Balances clarity with responsiveness | Requires design capacity and coordination
Voluntary standards | Encourages innovation; low cost to implement | Weak control; may not reduce systemic risks
Adaptive co-governance | Flexible and stakeholder-driven | Requires long-term investment and institutional maturity
Sectoral integration | Draws on expert knowledge; builds on trusted systems | May lack coordination across agencies and shared benchmarks
International convergence | Promotes global standards and cooperation | Geopolitical tensions and minimal harmonization may occur
Table 4. Timeline of AI Governance Milestones (Part I).

Year | Event Name | Explanation
1947 | Nuremberg Code | Introduced ethical principles in human experimentation, influencing later discussions on AI ethics.
1950 | Turing Test | Proposed as a measure for machine intelligence, foundational to AI evaluation.
1967 | US Freedom of Information Act | Promoted transparency, later affecting debates on AI data access.
1973 | Sweden Data Act | First national privacy law addressing unauthorized data access.
1978 | German Federal Data Protection Act | Introduced consent requirements for personal data processing.
1980 | OECD Privacy Guidelines | Marked one of the first international data protection efforts.
1995 | EU Data Protection Directive | Required EU member states to implement privacy laws.
2004 | DARPA Grand Challenge | Advanced autonomous vehicle research and related AI ethics.
2016 | US National AI R&D Strategic Plan | Identified research priorities in AI.
2017 | China Next Generation AI Plan | Set long-term strategy to lead global AI development.
2017 | Pan-Canadian AI Strategy | Focused on AI research, talent development, and ethics.
2018 | Cambridge Analytica Scandal | Sparked global concern over data misuse and AI-based profiling.
2019 | OECD AI Principles | Provided the first intergovernmental AI framework.
2019 | Japan Social Principles on Human-Centric AI | Emphasized ethical, inclusive AI development.
2019 | EU Ethics Guidelines for Trustworthy AI | Identified transparency, human agency, and accountability as core values.
Table 5. Timeline of AI Governance Milestones (Part II).

Year | Event Name | Explanation
2020 | US Executive Order on AI | Directed federal agencies to promote trustworthy AI.
2020 | South Korea National AI Strategy | Set roadmap for ethical, data-driven AI leadership.
2020 | Singapore Model AI Governance Framework (Second Ed.) | Offered operational guidance for responsible AI deployment.
2021 | EU AI Act Proposal | Introduced risk-based framework for regulating AI in the EU.
2021 | UNESCO Recommendation on the Ethics of AI | Adopted by 193 countries as a global standard.
2021 | China Algorithm Recommendation Rules | Required transparency in content ranking and personalization.
2021 | NIST AI RMF Concept Paper | Laid groundwork for voluntary AI risk management in the US.
2022 | ISO/IEC Risk Management Standard | Introduced cross-sector technical guidance on AI safety.
2022 | UK AI Governance White Paper (draft) | Proposed flexible, sector-led oversight.
2022 | African Union AI Strategy (draft) | Emphasized capacity-building and development-centered governance.
2022 | AI Verify (Singapore) | Launched tool to assess ethical AI use.
2022 | Japan AI Governance Guidelines | Reinforced human-centered principles in AI deployment.
2023 | NIST AI RMF 1.0 | Published full guidance on managing AI risks.
2023 | EU AI Act Approval | Secured backing from EU Parliament for regulation.
2023 | AI Safety Institute (UK) | Formed to evaluate frontier AI model risks.
Table 6. Timeline of AI Governance Milestones (Part III).

Year | Event Name | Explanation
2023 | Canada AIDA Proposal | Introduced legislation for regulating high-impact AI systems.
2023 | China Interim Measures for Generative AI | First binding rules on generative AI in China.
2023 | IndiaAI Mission (proposed) | Aimed to unify national AI innovation and regulation.
2023 | WEF AI Governance Alliance | Encouraged global public–private dialog on responsible AI.
2023 | Bletchley Declaration | Signed during the AI Safety Summit for global cooperation.
2024 | Updated OECD AI Principles | Reflected new concerns over generative AI systems.
2024 | UN AI Advisory Body | Convened to guide international cooperation on AI governance.
2024 | EU AI Act Entry into Force | Marked beginning of phased implementation.
2024 | EU General Purpose AI Code of Practice | Began development of guidelines for general-purpose AI systems.
2024 | Smart Africa AI Initiatives | Supported data cooperation and ethics in African nations.
2025 | EU Unacceptable AI Ban and Literacy Rules | Triggered enforcement of key EU AI Act provisions.
2025 | ISO/IEC 42001 AI Management Standard | Established organizational standards for AI governance.
2025 | India AI Sandbox Pilot | Allowed controlled AI testing environments.
2025 | Canada AI Regulatory Sandbox | Supported safe innovation for startups.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
