Review

Artificial Intelligence Standards in Conflict: Local Challenges and Global Ambitions

1 Data Analytics and Statistics, College of Science, University of North Texas, Denton, TX 76201, USA
2 Department of Information Science, University of North Texas, Denton, TX 76201, USA
* Author to whom correspondence should be addressed.
Standards 2025, 5(4), 27; https://doi.org/10.3390/standards5040027
Submission received: 28 July 2025 / Revised: 6 September 2025 / Accepted: 26 September 2025 / Published: 11 October 2025

Abstract

This article examines global efforts to govern and regulate Artificial Intelligence (AI) in response to its rapid development and growing influence across many parts of society. It explores how governance takes place at multiple levels, including international bodies, national governments, industries, companies, and communities. The study draws on a wide range of official documents, policy reports, and international agreements to build a timeline of key regulatory and standardization milestones. It also analyzes the challenges of coordinating across different legal systems, economic priorities, and cultural views. The findings show that while some progress has been made through soft-law frameworks and regional partnerships, deep divisions remain. These include unclear responsibilities, uneven enforcement, and risks of regulatory gaps. The article argues that effective AI governance requires stronger international cooperation, fair and inclusive participation, and awareness of the power imbalances that shape policy decisions. Competing global and commercial interests can create obstacles to building systems that prioritize the public good. The conclusion highlights that future governance models must be flexible enough to adapt to fast-changing technologies, yet consistent enough to protect rights and promote trust. Addressing these tensions is critical to building a more just and accountable future for AI.
Keywords: AI governance and regulations; standardization and certification; algorithmic risk; international policy frameworks; transparency and public trust; cross-sector collaboration

Share and Cite

MDPI and ACS Style

Orhan, Z.; Orhan, M.; Lund, B.D.; Mannuru, N.R.; Bevara, R.V.K.; Porter, B. Artificial Intelligence Standards in Conflict: Local Challenges and Global Ambitions. Standards 2025, 5, 27. https://doi.org/10.3390/standards5040027
