Ethics Without Teeth? Challenges and Opportunities in AI Declarations for Platform Governance
Abstract
1. Introduction
2. Background of the Study
2.1. The Concept of Ethics in Artificial Intelligence
2.2. Ethical AI: Definition and Moral Philosophies
2.3. Theoretical Framework of the Study
3. Methodology
3.1. Research Design
3.2. Descriptive Content Analysis
3.3. Qualitative Analysis
3.3.1. Review Approach
3.3.2. Search Strategy
3.3.3. Inclusion and Exclusion Criteria
- They explicitly addressed opportunities or challenges related to the implementation or operationalization of AI ethical principles.
- They discussed the governance of AI ethics in policy, legal, or applied organizational contexts.
- They analyzed soft law, ethical frameworks, or emerging tools (e.g., audits, impact assessments) within organizations.
4. Results
4.1. Analysis of Ethical Principles in AI Declarations
4.1.1. Comparative Presence of AI Ethics Principles in National Declarations
4.1.2. Comparative Presence of AI Ethics Principles in Major Global Players
4.2. Integrative Literature Review Findings
4.2.1. Realizing the Potential: Strengths and Opportunities in Ethical AI Declarations
Global Participation as a Foundation for Inclusive AI Governance
Comprehensive Coverage and Harmonization of Global Standards
Addressing Ethical Issues in AI
AI for Social Good and the Preservation of Trust
4.2.2. Addressing the Challenges: Limitations and Weaknesses in Ethical AI Declarations
Lack of a Standard Framework for Ethical AI
Divergence in Definitions and Terminology in AI Ethics
Challenges in Translating Principles into Practices
Integration with Existing Legal Frameworks
Erosion of Trust Due to Limited Focus on Impact
Conflicting Objectives in Ethical AI Declarations
Geopolitical Tension
5. Discussion
5.1. Implications of AI Ethical Declarations for Digital Business and E-Commerce Firms
5.2. Global vs. National Declarations in the Context of Digital Business
5.2.1. Implications for Policymakers and International AI Governance
5.2.2. Implications for Digital Platform Strategy
5.3. Ethical Declarations for Responsible Platform Governance
5.3.1. Dimension 1: Assessment Mechanism
5.3.2. Dimension 2: Governance Implementation Structures
5.4. A Tiered Framework for Evaluating AI Ethics Enforcement
6. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. AI Declarations Used in This Study
| Declaration/Policy Name | Year Issued/Latest Update | Country | AI Principles Mentioned |
|---|---|---|---|
| America’s AI Action Plan/National AI R&D Strategic Plan | 2025 (draft, published 10 July 2025) | United States | Deregulation; Promoting open source; Re-/upskilling; Promote AI research; Interpretability; Robustness; Control; Promote AI in government; Promote commercial AI innovation; Combating synthetic media; Security |
| Executive Order 14110—“Safe, Secure & Trustworthy Development and Use of AI” (revoked) | 2023 | United States | Safety; Security; Transparency; Accountability; Human oversight; Risk assessment; Interagency coordination; Privacy protection; Prevention of discrimination and unlawful bias; Consumer & worker protection; Detecting synthetic content |
| Executive Order 14179—“Removing Barriers to American Leadership in AI” | 2025 | United States | Human flourishing; Economic competitiveness; National security; Freedom from bias; Public trust; Accountability; Risk mitigation; Fairness; Transparency; Protection of civil rights and privacy |
| Montreal Declaration for a Responsible Development of Artificial Intelligence | 2018 | Canada | Promote well-being; Autonomy; Justice; Privacy; Knowledge; Democracy; Responsibility; Caution principle; Sustainable Development; Solidarity |
| Plano Brasileiro de Inteligência Artificial (PBIA) | 2024–2028 | Brazil | Inclusive growth & well-being; Human-centric & equitable values; Transparency & explainability; Robustness & security; Accountability (aligned with OECD AI Principles) |
| National AI Policy (Chile) | 2021 | Chile | AI for well-being; Respect for human rights & security; Sustainable development; Inclusiveness; Global cooperation |
| Chile—Draft AI Regulation Bill (Boletín 16 821-19) | 2024 (Reviewed in 2025) | Chile | Transparency; Promote innovation; Address the issue of deepfakes; Individual dignity, democratic integrity, or public security; Responsibility |
| Iniciativa con proyecto de decreto por el que se reforma la fracción XVII del artículo 73 de la Constitución Política de los Estados Unidos Mexicanos, en materia de inteligencia artificial. | 2025 | Mexico | Personal data protection; Responsible AI use; Fostering responsible innovation: protecting human rights, privacy; National security; Address societal and economic challenges |
| 2019 Plan Nacional de Inteligencia Artificial | 2019 | Argentina | Promote social and economic development; Respect human rights; Minimize social risks; Personal data and privacy; Ethics by design |
| Declaration/Policy | Year Issued/Latest Update | Country/Region | AI Principles Mentioned |
|---|---|---|---|
| African Union Continental AI Strategy | 2024 | African Union (Continental) | Risk mitigation; Data protection; Cybersecurity (Safety); Consumer protection; Inclusion; Reviewing labor protections; Aligning social media regulations; Non-discrimination; Job creation, health, and education |
| Kenya’s National AI Strategy 2025–2030 | 2025 | Kenya | Inclusivity; Non-discrimination; Transparency; Accountability; Explainability; Human oversight; Auditing; Cultural preservation and contextualization; Environmental sustainability; Achieve national priorities; Enhance public services; Promote inclusive economic growth |
| African Declaration on Artificial Intelligence | 2025 | African Union + 49 nations | Sovereignty; Inclusivity; Diversity; Privacy, ethics, transparency, and explainability while prioritizing human dignity, rights, freedoms, and environmental sustainability; Regional and international collaboration; Re-/upskilling; open data; Promote innovation |
| Ghana’s National Artificial Intelligence Strategy (2023–2033) | 2024 | Ghana | Transparency; Security; Robustness; Human rights; Inclusiveness; Re-/upskilling; Facilitate data access and governance; Collaboration; Promote AI research; Promote AI adoption in the public sector |
| Rwanda’s National AI Strategy | 2025 | Rwanda | Transparency; Privacy; Fairness & non-discrimination; Accountability; Human oversight; Safety & robustness; Societal & environmental well-being |
| Nigeria’s National Artificial Intelligence Strategy | Ongoing (2024 draft) | Nigeria | Transparency; Fairness; Accountability; Respect for human rights; Inclusivity; Non-discrimination; Innovation and adaptation; Long-term economic, social, and environmental goals; Risk management and resilience; Human-centric; Global leadership |
| SA National AI Policy Framework | 2024 | South Africa | Re-/upskilling; Safety; Sectoral oriented; Transparency; Explainability; Data protection; Promote research; Human oversight |
| Egypt National AI Strategy | 2025 | Egypt/Senegal/Tunisia | Fairness; Transparency; Accountability; Privacy protection; Human rights; Social justice; Safety; Non-discrimination |
| Declaration Name | Year Issued/Latest Update | Country | AI Principles Mentioned |
|---|---|---|---|
| Beijing AI Principles | 2019 (reaffirmed in regulatory drafts 2023–2024) | China | AI R&D should serve human interests; Fairness, reduced discrimination/bias; Transparency; Explainability; Predictability; Traceability; Auditability; Accountability; Diversity; Inclusion |
| Social Principles of Human-Centric AI | 2019 (reinforced via AI Promotion Act 2025) | Japan | Human-centricity; Fairness; Transparency; Accountability; Privacy; Innovation; Fair competition; Safety & security; Education & literacy |
| India AI Governance Guidelines (Sutras) | 2025 | India | Trust is the foundation; Do no harm; Innovation over restraint; Fairness & equity; Accountability; Understandable by Design; Safety, Resilience & Sustainability |
| Model AI Governance Framework | 2019 (updated 2020) | Singapore | Transparency; Explainability; Repeatability/reproducibility; Safety; Security; Robustness; Fairness; Data governance; Accountability; Human agency and oversight; Inclusive growth; Societal and environmental well-being |
| Stranas KA—National AI Strategy (2020–2045) | 2020 | Indonesia | Pancasila values ((1) Belief in the one true God, (2) A fair-minded and civilized humanity, (3) Unity of Indonesia, (4) Democracy (from the people) led by wisdom of consultation (of the) representatives (of the people), and (5) Social justice for every person in Indonesia); OECD principles |
| UAE National AI Strategy (2031) | 2017 (vision extended to 2031) | United Arab Emirates | Reflect national values; Privacy & data protection; Sustainability; Human-centered development; Safety; Fairness; Transparency; Explainability; Accountability; Peaceful coexistence with AI; Inclusive access and equity; Legal compliance; societal benefit |
| Saudi Arabia’s AI Ethics Principles 2.0 | 2023 | Saudi Arabia | Fairness; Privacy & security; Humanity; Social & environmental benefits; Reliability & safety; Transparency & explainability; Accountability & responsibility |
| AI Seoul Summit Declaration | 2024 | South Korea | Safety; Innovation; Inclusivity; International cooperation; Human-centered governance |
| Declaration Name | Year Issued (Latest Update) | Country | AI Principles Mentioned |
|---|---|---|---|
| Stratégie nationale pour l’intelligence artificielle (SNIA) | 2018 (updated acceleration phase 2023–2025) | France | Transparency; Fairness; Security; Accountability; Societal & environmental well-being; Open data policy; Promote research/innovation; European coordination; Respect for labor rights; Privacy & data protection |
| Germany’s National AI Strategy | 2018 (updated 2020 & 2023) | Germany | Transparency; Traceability; Safety; Inclusion; Security; Robustness; Sustainability (society & environment); Non-discrimination; AI skills for all; Promote research/innovation |
| Italy’s national AI law (Law No. 132/2025) | 2025 (law passed) | Italy | Human-centric (enhance human decisions); Children’s protection; Workplace transparency; Deepfakes & criminal law; Protection of intellectual property rights; Privacy & data protection; Knowability; Traceability; Respect for labor rights; Transparency; Accountability |
| Spanish AI regulatory framework (AESIA) | 2023 (agency); 2025 bills | Spain | Transparency; Promote innovation; Non-discrimination; Privacy & data protection; Protection of intellectual property rights; Fairness & non-discrimination; Fair Competition |
| Sweden National AI Strategy | 2018 (updated 2026) | Sweden | Fair competition; AI ecosystem that has a positive global impact on people and the planet; Democracy; Inclusion; Diversity; Re-/upskilling; Transparency and availability of data; Accountability; Collaboration |
| Policy for the Development of Artificial Intelligence in Poland 2025–2030 | 2024 | Poland | Re-/upskilling; Promote innovation; Fair competition; Sovereignty; Transparency; Accountability; Robustness; Safety; Environmental and societal well-being |
| Recommendations on AI in education, teaching and training/Digivisio 2030 program | 2024 | Finland | Fair competition; Provide best public services; Enhance employment; Investment in research; Promote green economic growth; Interoperability; Privacy; Security; Good usability; Portability |
| UK National AI Strategy (with the National Data Strategy (2020), A Plan for Digital Regulation (2021), and the UK Innovation Strategy (2021)) | 2024–2025 | United Kingdom | Public participation; Safety; Privacy & data protection; Transparency; Promote research; Risk management; Accountability; Public trust & transparency; Fairness; Beneficence; Collaboration |
| Declaration Name | Year Issued | Latest Update | Ethical Principles Mentioned |
|---|---|---|---|
| OECD AI Principles (OECD Recommendation on AI) | 2019 | 2024 | Inclusive growth; Well-being; Sustainable development; Environmental sustainability; Non-discrimination; Equality; Human dignity; Autonomy; Privacy & data protection; Diversity; Fairness and non-discrimination; Social justice; Labor rights; Misinformation & disinformation; Freedom of expression; Robustness; Security; Safety; Risk management; Accountability; Transparency; Explainability; International collaboration |
| EU Ethics Guidelines for Trustworthy AI | 2019 | – | Human agency and oversight; Technical robustness and safety; Privacy and data governance; Transparency; Diversity, non-discrimination, and fairness; Societal and environmental well-being; Accountability |
| UNESCO Recommendation on the Ethics of AI | 2021 | – | Do no harm; Human agency and oversight; Safety; Security; Fairness and non-discrimination; Privacy & data protection; Transparency; Explainability; Accountability; Inclusion; Well-being; Sustainable development; Environmental sustainability; Economic growth; International collaboration |
| G7 Hiroshima AI Process Leaders’ Statement | 2023 | 2025 | Risk management; Post-deployment monitoring; Transparency; Responsible information sharing; Security; Information integrity; Well-being; Privacy & data protection; Interoperability; International collaboration; Intellectual property protection |
| Statement on Inclusive and Sustainable AI for People and the Planet (Paris AI Action Summit) | 2025 | – | Human rights; Human dignity; Autonomy; Privacy & data protection; Fairness and non-discrimination; Social justice; AI accessibility; Sustainable for people and the planet; Safety; Reliability; Security; Accountability; Transparency; Explainability; Inclusion; Labor rights; Protection of consumers; Protection of intellectual property rights |
| Framework Convention on Artificial Intelligence | 2022 | 2024 | Human dignity; Autonomy; Equality & non-discrimination; Privacy and data protection; Transparency; Accountability and responsibility; Reliability; Safe innovation |
| BRICS AI Governance Declaration | 2025 | – | Inclusion; Human supervision; Transparency; Accountability; Maximizing societal benefits; Information integrity; Privacy; Safety; Security; Sustainable development; Environmental sustainability; AI must ensure decent work; Promote innovation; Fair competition; Digital sovereignty; International collaboration |
| Hamburg Declaration on Responsible AI for the SDGs | 2025 | – | Equality; Inclusive participation; Resource-efficient and climate-friendly; Economic growth; Combat disinformation; Security; International collaboration |
| Malta Declaration on the Use of AI | 2025 | – | Primacy and sanctity of human life; Security; Safety; International collaboration; Privacy & data protection; Promote sustainability; Promote innovation; Respect for the environment |
References
- Lazazzara, A.; Za, S.; Georgiadou, A. A taxonomy framework and process model to explore AI-enabled workplace inclusion. J. Bus. Res. 2025, 201, 115697. [Google Scholar] [CrossRef]
- Hassan, R.; Nguyen, N.; Finserås, S.R.; Adde, L.; Strümke, I.; Støen, R. Unlocking the black box: Enhancing human-AI collaboration in high-stakes healthcare scenarios through explainable AI. Technol. Forecast. Soc. Change 2025, 219, 124265. [Google Scholar] [CrossRef]
- Grand View Research. Artificial Intelligence Market Size, Share & Trends Analysis Report by Solution (Hardware, Software, Services), by Technology (Deep Learning, Machine Learning, NLP), by Function, by End-Use, By Region, and Segment Forecasts, 2023–2030 (Report No. GVR-1-68038-955-5). 2023. Available online: https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market (accessed on 8 October 2025).
- Van Loo, R.; Aggarwal, N. Amazon’s Pricing Paradox. Harv. JL Tech. 2023, 37, 1. [Google Scholar]
- Bian, Z.; Che, C. How AI Overview of Customer Reviews Influences Consumer Perceptions in E-Commerce? J. Theor. Appl. Electron. Commer. Res. 2025, 20, 315. [Google Scholar] [CrossRef]
- Rezaei, M.; Pironti, M.; Quaglia, R. AI in knowledge sharing, which ethical challenges are raised in decision-making processes for organisations? Manag. Decis. 2025, 63, 3369–3388. [Google Scholar] [CrossRef]
- Teng, Z.; Xia, H.; He, Y. Algorithmic Fairness and Digital Financial Stress: Evidence from AI-Driven E-Commerce Platforms in OECD Economies. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 213. [Google Scholar] [CrossRef]
- Chaudhary, S.; Khalil, A.; Attri, R.; Ractham, P. Deploying explainable AI in entrepreneurial organizations: Role of the human-AI interface. Technol. Forecast. Soc. Change 2025, 220, 124324. [Google Scholar] [CrossRef]
- Kan, M. Hacker Deepfakes Employee’s Voice in Phone Call to Breach IT Company. PCMag. 2023. Available online: https://www.pcmag.com/news/hacker-deepfakes-employees-voice-in-phone-call-to-breach-it-company (accessed on 8 October 2025).
- Wang, S.; Peng, K.L.; Huang, Z.; Ma, L. AI-generated videos: Influencing trustworthiness, awe, and behavioral intention in space tourism e-commerce. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 307. [Google Scholar] [CrossRef]
- Kazim, E.; Koshiyama, A.S. A high-level overview of AI ethics. Patterns 2021, 2, 100314. [Google Scholar] [CrossRef]
- Hanisch, M.; Goldsby, C.M.; Fabian, N.E.; Oehmichen, J. Digital governance: A conceptual framework and research agenda. J. Bus. Res. 2023, 162, 113777. [Google Scholar] [CrossRef]
- Laviola, F.; Cucari, N. From promise to concern: Public perceptions of AI in ESG frameworks over time. Technol. Soc. 2026, 85, 103219. [Google Scholar] [CrossRef]
- Berente, N.; Gu, B.; Recker, J.; Santhanam, R. Managing artificial intelligence. MIS Q. 2021, 45, 1433–1450. [Google Scholar] [CrossRef]
- Yu, T.; Pan, Y.; Jang, W. Modeling Consumer Reactions to AI-Generated Content on E-Commerce Platforms: A Trust–Risk Dual Pathway Framework with Ethical and Platform Responsibility Moderators. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 257. [Google Scholar] [CrossRef]
- Floridi, L.; Cowls, J. A unified framework of five principles for AI in society. In Machine Learning and the City: Applications in Architecture and Urban Design; John Wiley & Sons Ltd.: Hoboken, NJ, USA, 2022; pp. 535–545. [Google Scholar] [CrossRef]
- Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef] [PubMed]
- Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
- Mittelstadt, B. Principles alone cannot guarantee ethical AI. Nat. Mach. Intell. 2019, 1, 501–507. [Google Scholar] [CrossRef]
- Morley, J.; Floridi, L.; Kinsey, L.; Elhalal, A. From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci. Eng. Ethics 2020, 26, 2141–2168. [Google Scholar] [CrossRef]
- Munn, L. The uselessness of AI ethics. AI Ethics 2023, 3, 869–877. [Google Scholar] [CrossRef]
- Hong, S.; Ryee, H.; Jin, X.; Yang, D. How Organizations Choose Open-Source Generative AI Under Normative Uncertainty: The Moderating Role of Exploitative and Exploratory Behaviors. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 250. [Google Scholar] [CrossRef]
- Rességuier, A.; Rodrigues, R. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data Soc. 2020, 7, 2053951720942541. [Google Scholar] [CrossRef]
- Corrêa, N.K.; Galvão, C.; Santos, J.W.; Del Pino, C.; Pinto, E.P.; Barbosa, C.; Massmann, D.; Mambrini, R.; Galvão, L.; Terem, E.; et al. Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns 2023, 4, 100857. [Google Scholar] [CrossRef]
- Hagendorff, T. The ethics of AI ethics: An evaluation of guidelines. Minds Mach. 2020, 30, 99–120. [Google Scholar] [CrossRef]
- Adeusi, S.O. Roles and Importance of Ethics. In Research Anthology on Rehabilitation Practices and Therapy; IGI Global: Hershey, PA, USA, 2020; pp. 1–11. [Google Scholar]
- Fisher, A. Meta-Ethics: An Introduction; Routledge: Abingdon, UK, 2014. [Google Scholar]
- Kidder, R.M. How Good People Make Tough Choices: Resolving the Dilemmas of Everyday Living; Harper Perennial: New York, NY, USA, 2003. [Google Scholar]
- Paul, R.; Elder, L. The Thinker’s Guide to Understanding the Foundations of Ethical Reasoning: Based on Critical Thinking Concepts & Tools; Foundation for Critical Thinking: Dillon Beach, CA, USA, 2006. [Google Scholar]
- Proctor, J.D. Ethics in geography: Giving moral form to the geographical imagination. Area 1998, 30, 8–18. [Google Scholar] [CrossRef]
- Martin, K.E.; Freeman, R.E. The separation of technology and ethics in business ethics. J. Bus. Ethics 2004, 53, 353–364. [Google Scholar] [CrossRef]
- Kaplan, A.; Haenlein, M. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus. Horiz. 2019, 62, 15–25. [Google Scholar] [CrossRef]
- Theodorou, A.; Dignum, V. Towards ethical and socio-legal governance in AI. Nat. Mach. Intell. 2020, 2, 10–12. [Google Scholar] [CrossRef]
- Siau, K.; Wang, W. Artificial intelligence (AI) ethics: Ethics of AI and ethical AI. J. Database Manag. (JDM) 2020, 31, 74–87. [Google Scholar] [CrossRef]
- Gorr, M. Is moral status done with words? Ethics Inf. Technol. 2024, 26, 10. [Google Scholar] [CrossRef]
- Jaquet, F. Utilitarianism for the Error Theorist. J. Ethics 2021, 25, 39–55. [Google Scholar] [CrossRef]
- Mill, J.S. Utilitarianism. In Seven Masterpieces of Philosophy; Routledge: Abingdon, UK, 2016; pp. 329–375. [Google Scholar]
- Micewski, E.R.; Troy, C. Business ethics–deontologically revisited. J. Bus. Ethics 2007, 72, 17–25. [Google Scholar] [CrossRef]
- Albrechtslund, A. Ethics and technology design. Ethics Inf. Technol. 2007, 9, 63–72. [Google Scholar] [CrossRef]
- Friedman, B.; Kahn, P.; Borning, A. Value sensitive design: Theory and methods. In University of Washington Computer Science & Engineering Technical Report; No. 02-12-01; University of Washington: Seattle, WA, USA, 2002; Available online: https://dada.cs.washington.edu/research/tr/2002/12/UW-CSE-02-12-01.pdf (accessed on 26 October 2025).
- Trianosky, G. What is virtue ethics all about? Am. Philos. Q. 1990, 27, 335–344. Available online: http://www.jstor.org/stable/20014344 (accessed on 26 October 2025).
- Winston, C. Norm structure, diffusion, and evolution: A conceptual approach. Eur. J. Int. Relat. 2018, 24, 638–661. [Google Scholar] [CrossRef]
- Ibrahim, I.A.; Zaidan, E.; Truby, J.; Hoppe, T. The AI Act and its green blind spots: Hidden environmental risks in the AI lifecycle. Technol. Soc. 2026, 86, 103284. [Google Scholar] [CrossRef]
- Franzke, A.S. An exploratory qualitative analysis of AI ethics guidelines. J. Inf. Commun. Ethics Soc. 2022, 20, 401–423. [Google Scholar] [CrossRef]
- Snyder, H. Literature review as a research methodology: An overview and guidelines. J. Bus. Res. 2019, 104, 333–339. [Google Scholar] [CrossRef]
- Torraco, R.J. Writing integrative literature reviews: Guidelines and examples. Hum. Resour. Dev. Rev. 2005, 4, 356–367. [Google Scholar] [CrossRef]
- Dara, R.; Hazrati Fard, S.M.; Kaur, J. Recommendations for ethical and responsible use of artificial intelligence in digital agriculture. Front. Artif. Intell. 2022, 5, 884192. [Google Scholar] [CrossRef]
- Elsbach, K.D.; van Knippenberg, D. Creating high-impact literature reviews: An argument for ‘integrative reviews’. J. Manag. Stud. 2020, 57, 1277–1289. [Google Scholar] [CrossRef]
- Alcayaga, A.; Wiener, M.; Hansen, E.G. Towards a framework of smart-circular systems: An integrative literature review. J. Clean. Prod. 2019, 221, 622–634. [Google Scholar] [CrossRef]
- Falsafi, A.; Togiani, A.; Colley, A.; Varis, J.; Horttanainen, M. Life cycle assessment in circular design process: A systematic literature review. J. Clean. Prod. 2025, 521, 146188. [Google Scholar] [CrossRef]
- Dabić, M.; Vlačić, B.; Kiessling, T.; Caputo, A.; Pellegrini, M. Serial entrepreneurs: A review of literature and guidance for future research. J. Small Bus. Manag. 2023, 61, 1107–1142. [Google Scholar] [CrossRef]
- Martín-Martín, A.; Thelwall, M.; Orduna-Malea, E.; Delgado López-Cózar, E. Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: A multidisciplinary comparison of coverage via citations. Scientometrics 2020, 126, 871–906. [Google Scholar] [CrossRef] [PubMed]
- Jensenius, F.R.; Htun, M.; Samuels, D.J.; Singer, D.A.; Lawrence, A.; Chwe, M. The benefits and pitfalls of Google Scholar. PS Political Sci. Politics 2018, 51, 820–824. [Google Scholar] [CrossRef]
- Rowe, F. What literature review is not: Diversity, boundaries and recommendations. Eur. J. Inf. Syst. 2014, 23, 241–255. [Google Scholar] [CrossRef]
- Gusenbauer, M.; Haddaway, N.R. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res. Synth. Methods 2020, 11, 181–217. [Google Scholar] [CrossRef]
- Rethlefsen, M.L.; Page, M.J. PRISMA 2020 and PRISMA-S: Common questions on tracking records and the flow diagram. J. Med. Libr. Assoc. 2022, 110, 253–257. [Google Scholar] [CrossRef]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
- Kraus, S.; Breier, M.; Lim, W.M.; Dabić, M.; Kumar, S.; Kanbach, D.; Mukherjee, D.; Corvello, V.; Piñeiro-Chousa, J.; Liguori, E.; et al. Literature reviews as independent studies: Guidelines for academic practice. Rev. Manag. Sci. 2022, 16, 2577–2595. [Google Scholar] [CrossRef]
- Singh, V.K.; Singh, P.; Karmakar, M.; Leta, J.; Mayr, P. The journal coverage of Web of Science, Scopus and Dimensions: A comparative analysis. Scientometrics 2021, 126, 5113–5142. [Google Scholar] [CrossRef]
- Mazrekaj, L. Gender Entrepreneurial Behaviour: A SSLR (Semi-Systematic Literature Review) Approach. South East Eur. J. Econ. Bus. 2024, 19, 77–95. [Google Scholar] [CrossRef]
- Tranfield, D.; Denyer, D.; Smart, P. Towards a methodology for developing evidence-informed management knowledge by means of systematic review. Br. J. Manag. 2003, 14, 207–222. [Google Scholar] [CrossRef]
- Ulnicane, I. Artificial Intelligence in the European Union: Policy, ethics and regulation. In The Routledge Handbook of European Integrations; Taylor & Francis: Abingdon, UK, 2022. [Google Scholar]
- Phadermrod, B.; Crowder, R.M.; Wills, G.B. Importance-performance analysis based SWOT analysis. Int. J. Inf. Manag. 2019, 44, 194–203. [Google Scholar] [CrossRef]
- Chan, A.; Okolo, C.T.; Terner, Z.; Wang, A. The limits of global inclusion in AI development. arXiv 2021. [Google Scholar] [CrossRef]
- The White House. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. 2023. Available online: https://www.congress.gov/crs-product/R47843 (accessed on 15 January 2026).
- Greenleaf, G. G20 makes declaration of ‘data free flow with trust’: Support and dissent. Priv. Laws Bus. Int. Rep. 2019, 160, 18–19. [Google Scholar]
- Fukuda-Parr, S.; Gibbons, E. Emerging consensus on ‘ethical AI’: Human rights critique of stakeholder guidelines. Glob. Policy 2021, 12, 32–44. [Google Scholar] [CrossRef]
- Ministerie van Buitenlandse Zaken (Netherlands Ministry of Foreign Affairs). Global Declaration on Information Integrity Online: Diplomatic Statement. Government of the Netherlands. 2023. Available online: https://www.government.nl/documents/diplomatic-statements/2023/09/20/global-declaration-on-information-integrity-online (accessed on 15 January 2026).
- European Parliament & Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act); Official Journal of the European Union: Luxembourg, 2024; Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 (accessed on 10 January 2026).
- Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; Srikumar, M. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. In Berkman Klein Center Research Publication; Harvard University: Cambridge, MA, USA, 2020. [Google Scholar]
- Schiff, D.; Borenstein, J.; Biddle, J.; Laas, K. AI ethics in the public, private, and NGO sectors: A review of a global document collection. IEEE Trans. Technol. Soc. 2021, 2, 31–42. [Google Scholar] [CrossRef]
- Schiff, D.; Biddle, J.; Borenstein, J.; Laas, K. What’s next for AI ethics, policy, and governance? A global overview. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES ‘20), New York, NY, USA, 7–8 February 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 153–158. [Google Scholar]
- Auld, G.; Casovan, A.; Clarke, A.; Faveri, B. Governing AI through ethical standards: Learning from the experiences of other private governance initiatives. J. Eur. Public Policy 2022, 29, 1822–1844. [Google Scholar] [CrossRef]
- Smuha, N.A. The EU approach to ethics guidelines for trustworthy artificial intelligence. Comput. Law Rev. Int. 2019, 20, 97–106. [Google Scholar] [CrossRef]
- Rothenberger, L.; Fabian, B.; Arunov, E. Relevance of ethical guidelines for artificial intelligence—A survey and evaluation. In Proceedings of the 27th European Conference on Information Systems (ECIS 2019), Stockholm-Uppsala, Sweden, 8–14 June 2019; Association for Information Systems: Atlanta, GA, USA, 2019. [Google Scholar]
- Díaz-Rodríguez, N.; Del Ser, J.; Coeckelbergh, M.; de Prado, M.L.; Herrera-Viedma, E.; Herrera, F. Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Inf. Fusion 2023, 99, 101896. [Google Scholar] [CrossRef]
- Cappelli, M.A.; Di Marzo Serugendo, G. A semi-automated software model to support AI ethics compliance assessment of an AI system guided by ethical principles of AI. AI Ethics 2025, 5, 1357–1380. [Google Scholar] [CrossRef]
- Bostrom, N.; Yudkowsky, E. The ethics of artificial intelligence. In Artificial Intelligence Safety and Security; Chapman and Hall/CRC: Boca Raton, FL, USA, 2018; pp. 57–69. [Google Scholar]
- Stahl, B.C. Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies; Springer Nature: Cham, Switzerland, 2021; p. 124. [Google Scholar]
- Dressel, J.; Farid, H. The accuracy, fairness, and limits of predicting recidivism. Sci. Adv. 2018, 4, eaao5580. [Google Scholar] [CrossRef]
- Selbst, A.D. Disparate impact in big data policing. Ga. Law Rev. 2017, 52, 109–196. [Google Scholar] [CrossRef]
- Polykalas, S.E.; Prezerakos, G.N. When the mobile app is free, the product is your personal data. Digit. Policy Regul. Gov. 2019, 21, 89–101. [Google Scholar] [CrossRef]
- Hevelke, A.; Nida-Rümelin, J. Responsibility for crashes of autonomous vehicles: An ethical analysis. Sci. Eng. Ethics 2015, 21, 619–630. [Google Scholar] [CrossRef] [PubMed]
- EU Commission. White Paper on Artificial Intelligence—A European Approach to Excellence and Trust; COM(2020) 65 Final; European Commission: Brussels, Belgium, 2020. [Google Scholar]
- High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI; European Commission: Brussels, Belgium, 2019; Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 1 March 2026).
- Farouk, M. Studying Human Robot Interaction and Its Characteristics. Int. J. Comput. Inf. Manuf. (IJCIM) 2022, 2, 38–49. [Google Scholar] [CrossRef]
- Laitinen, A.; Sahlgren, O. AI systems and respect for human autonomy. Front. Artif. Intell. 2021, 4, 151. [Google Scholar] [CrossRef] [PubMed]
- Rafanelli, L.M. Justice, injustice, and artificial intelligence: Lessons from political theory and philosophy. Big Data Soc. 2022, 9, 20539517221080676. [Google Scholar] [CrossRef]
- de Fine Licht, K.; de Fine Licht, J. Artificial intelligence, transparency, and public decision-making: Why explanations are key when trying to produce perceived legitimacy. AI Soc. 2020, 35, 917–926. [Google Scholar] [CrossRef]
- Namugenyi, C.; Nimmagadda, S.L.; Reiners, T. Design of a SWOT analysis model and its evaluation in diverse digital business ecosystem contexts. Procedia Comput. Sci. 2019, 159, 1145–1154. [Google Scholar] [CrossRef]
- Tomašev, N.; Cornebise, J.; Hutter, F.; Mohamed, S.; Picciariello, A.; Connelly, B.; Clopath, C. AI for social good: Unlocking the opportunity for positive impact. Nat. Commun. 2020, 11, 2468. [Google Scholar] [CrossRef] [PubMed]
- Royakkers, L.; Timmer, J.; Kool, L.; Van Est, R. Societal and ethical issues of digitization. Ethics Inf. Technol. 2018, 20, 127–142. [Google Scholar] [CrossRef]
- Whittlestone, J.; Nyrup, R.; Alexandrova, A.; Cave, S. The role and limits of principles in AI ethics: Towards a focus on tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; pp. 195–200. [Google Scholar]
- Université de Montréal. Montréal Declaration for a Responsible Development of Artificial Intelligence; Université de Montréal: Montréal, QC, Canada, 2018; Available online: https://montrealdeclaration-responsibleai.com/the-declaration/ (accessed on 20 September 2025).
- Department of Industry, Science, Energy and Resources (Australian Government). Australia’s Artificial Intelligence Ethics Principles; Australian Government: Canberra, Australia, 2019. Available online: https://www.industry.gov.au/publications/australias-ai-ethics-principles (accessed on 20 September 2025).
- Binns, R. Fairness in machine learning: Lessons from political philosophy. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, FAT ‘18, New York, NY, USA, 23–24 February 2018; Volume 81, pp. 149–159. [Google Scholar]
- Kelley, S. Employee perceptions of the effective adoption of AI principles. J. Bus. Ethics 2022, 178, 871–893. [Google Scholar] [CrossRef] [PubMed]
- Toreini, E.; Aitken, M.; Coopamootoo, K.; Elliott, K.; Zelaya, C.G.; Van Moorsel, A. The relationship between trust in AI and trustworthy machine learning technologies. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; pp. 272–283. [Google Scholar]
- Cools, H. Navigating the responsible AI landscape: Unraveling the principles-to-practices gap of transparency and explainability at the BBC. Inf. Commun. Soc. 2025, 1–21. [Google Scholar] [CrossRef]
- Ibáñez, J.C.; Olmeda, M.V. Operationalising AI ethics: How are companies bridging the gap between practice and principles? An exploratory study. AI Soc. 2022, 37, 1663–1687. [Google Scholar] [CrossRef]
- Tang, L.; Li, J.; Fantus, S. Medical artificial intelligence ethics: A systematic review of empirical studies. Digit. Health 2023, 9, 20552076231186064. [Google Scholar] [CrossRef]
- Arbelaez Ossa, L.; Lorenzini, G.; Milford, S.R.; Shaw, D.; Elger, B.S.; Rost, M. Integrating ethics in AI development: A qualitative study. BMC Med. Ethics 2024, 25, 10. [Google Scholar] [CrossRef]
- Barrance, E.; Kazim, E.; Hilliard, A.; Trengove, M.; Zannone, S.; Koshiyama, A. Overview and commentary of the CDEI’s extended roadmap to an effective AI assurance ecosystem. Front. Artif. Intell. 2022, 5, 932358. [Google Scholar] [CrossRef]
- Hickok, M. Lessons learned from AI ethics principles for future actions. AI Ethics 2021, 1, 41–47. [Google Scholar] [CrossRef]
- Attard-Frost, B.; De los Ríos, A.; Walters, D.R. The ethics of AI business practices: A review of 47 AI ethics guidelines. AI Ethics 2023, 3, 389–406. [Google Scholar] [CrossRef]
- Ray, P.P. Benchmarking, ethical alignment, and evaluation framework for conversational AI: Advancing responsible development of ChatGPT. BenchCouncil Trans. Benchmarks Stand. Eval. 2023, 3, 100136. [Google Scholar] [CrossRef]
- European Commission. Artificial Intelligence for Europe; COM(2018) 237 Final; European Commission: Brussels, Belgium, 2018. [Google Scholar]
- Hamon, R.; Junklewitz, H.; Sanchez, I.; Malgieri, G.; De Hert, P. Bridging the gap between AI and explainability in the GDPR: Towards trustworthiness-by-design in automated decision-making. IEEE Comput. Intell. Mag. 2022, 17, 72–85. [Google Scholar] [CrossRef]
- Winfield, A.F.; Jirotka, M. Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2018, 376, 20180085. [Google Scholar] [CrossRef] [PubMed]
- Pagallo, U. The legal challenges of big data: Putting secondary rules first in the field of EU data protection. Eur. Data Prot. Law Rev. 2017, 3, 36–46. [Google Scholar] [CrossRef]
- Vedder, A.; Spajić, D. Moral autonomy of patients and legal barriers to a possible duty of health related data sharing. Ethics Inf. Technol. 2023, 25, 23. [Google Scholar] [CrossRef]
- Owczarczuk, M. Ethical and regulatory challenges amid artificial intelligence development: An outline of the issue. Ekon. I Prawo. Econ. Law 2023, 22, 295–310. [Google Scholar] [CrossRef]
- Bolte, L.; Vandemeulebroucke, T.; van Wynsberghe, A. From an ethics of carefulness to an ethics of desirability: Going beyond current ethics approaches to sustainable AI. Sustainability 2022, 14, 4472. [Google Scholar] [CrossRef]
- Lokshina, I.; Kniezova, J.; Lanting, C. On Building Users’ Initial Trust in Autonomous Vehicles. Procedia Comput. Sci. 2022, 198, 7–14. [Google Scholar] [CrossRef]
- Gibney, E. The scant science behind Cambridge Analytica’s controversial marketing techniques. Nature 2018, 555, 559–560. [Google Scholar] [CrossRef]
- Goggin, B. Inside Facebook’s Suicide Algorithm: Here’s How the Company Uses Artificial Intelligence to Predict Your Mental State from Your Posts. 2019. Available online: https://www.businessinsider.com/facebook-is-using-ai-to-try-to-predict-if-youre-suicidal-2018-12 (accessed on 6 November 2025).
- Hill, K. The secretive company that might end privacy as we know it. The New York Times. 18 January 2020. Available online: https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html (accessed on 3 October 2025).
- Goujard, C. Italian Privacy Regulator Bans ChatGPT. Politico. 2023. Available online: https://www.politico.eu/article/italian-privacy-regulator-bans-chatgpt/ (accessed on 6 November 2025).
- Claessens, S.; Frost, J.; Turner, G.; Zhu, F. Fintech credit markets around the world: Size, drivers and policy issues. BIS Q. Rev. 2018, 29–49. Available online: https://www.bis.org/publ/qtrpdf/r_qt1809e.htm (accessed on 15 February 2026).
- Calo, R. Artificial intelligence policy: A primer and roadmap. UCDL Rev. 2017, 51, 399. [Google Scholar]
- Truby, J.; Brown, R.; Dahdal, A. Banking on AI: Mandating a proactive approach to AI regulation in the financial sector. Law Financ. Mark. Rev. 2020, 14, 110–120. [Google Scholar] [CrossRef]
- Arvan, M. Mental time-travel, semantic flexibility, and AI ethics. AI Soc. 2023, 38, 2577–2596. [Google Scholar] [CrossRef]
- Sanderson, C.; Douglas, D.; Lu, Q.; Schleiger, E.; Whittle, J.; Lacey, J.; Newnham, G.; Hajkowicz, S.; Robinson, C.; Hansen, D. AI ethics principles in practice: Perspectives of designers and developers. IEEE Trans. Technol. Soc. 2023, 4, 171–187. [Google Scholar] [CrossRef]
- Song, L.; Shokri, R.; Mittal, P. Privacy risks of securing machine learning models against adversarial examples. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security; Association for Computing Machinery: New York, NY, USA, 2019. [Google Scholar]
- Zhu, Y.; Lu, Y. Practice and challenges of the ethical governance of artificial intelligence in China: A new perspective. Cult. Sci. 2024, 7, 14–23. [Google Scholar] [CrossRef]
- Bleher, H.; Braun, M. Reflections on putting AI ethics into practice: How three AI ethics approaches conceptualize theory and practice. Sci. Eng. Ethics 2023, 29, 21. [Google Scholar] [CrossRef]
- Bryson, J.J.; Malikova, H. Is there an AI cold war? Glob. Perspect. 2021, 2, 24803. [Google Scholar] [CrossRef]
- European Commission. Artificial Intelligence: A European Perspective; Publications Office of the European Union: Luxembourg, 2018. [Google Scholar]
- Bächle, T.C.; Bareis, J. “Autonomous weapons” as a geopolitical signifier in a national power play: Analysing AI imaginaries in Chinese and U.S. military policies. Eur. J. Futures Res. 2022, 10, 20. [Google Scholar] [CrossRef]
- Lingevicius, J. Military artificial intelligence as power: Consideration for European Union actorness. Ethics Inf. Technol. 2023, 25, 19. [Google Scholar] [CrossRef]
- Yee, K.; Sebag, A.S.; Redfield, O.; Eck, M.; Sheng, E.; Belli, L. A keyword based approach to understanding the overpenalization of marginalized groups by English marginal abuse models on Twitter. In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023); Association for Computational Linguistics (ACL): Kerrville, TX, USA, 2023; pp. 108–120. [Google Scholar]
- IBM Security. Cost of a Data Breach Report 2024; IBM Corporation: Armonk, NY, USA, 2024. Available online: https://www.ibm.com/downloads/documents/us-en/107a02e94948f4ec (accessed on 16 February 2026).
- Italian Parliament. Legge 23 settembre 2025, n. 132: Disposizioni e delega al Governo in materia di intelligenza artificiale (Law No. 132 of 23 September 2025 on Artificial Intelligence). Gazzetta Ufficiale Della Repubblica Italiana, No. 223. 23 September 2025. Available online: https://www.julia-project.eu/database/legislation/267 (accessed on 11 February 2026).
- OECD. Revised Recommendation of the Council on Artificial Intelligence; OECD: Paris, France, 2024; Available online: https://one.oecd.org/document/C/MIN(2024)16/FINAL/en/pdf (accessed on 11 February 2026).
- Huang, X.; Kou, T.; Zhou, Q. Embedding AI ethics in the data lifecycle: A framework for enterprise AI governance. Technol. Soc. 2026, 86, 103261. [Google Scholar] [CrossRef]
- OECD. Recommendation of the Council on Artificial Intelligence; OECD: Paris, France, 2019; Available online: https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449 (accessed on 11 February 2026).
- UNESCO. Recommendation on the Ethics of Artificial Intelligence; UNESCO: Paris, France, 2021; Available online: https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence (accessed on 11 February 2026).



| Branch of Ethics | Explanation |
|---|---|
| Normative ethics | Makes moral judgments about particular kinds of actions and offers reasons to support those judgments, showing that they are reasonable (for example, “prostitution is wrong because it can lead to incurable diseases”). |
| Descriptive ethics | Answers factual questions about the moral views held by individuals, groups, or society at large (for example, “Susan believes prostitution is wrong” or “Christians condemn corrupt practices”). |
| Meta-ethics | Concerned with the meaning of ethical terms; it analyzes ethical concepts to determine their actual meanings and logical relations. |
| Ethical Concerns | Explanation |
|---|---|
| Inconclusive evidence | Algorithmic conclusions are probabilities and therefore not infallible, which can lead to unjustified actions. For example, an algorithm used to assess creditworthiness could be accurate 99% of the time, but one in every hundred applicants would still be wrongly denied credit. |
| Inscrutable evidence | A lack of interpretability and transparency can lead to algorithmic systems that are hard to control, monitor, and correct. This is the commonly cited “black-box” issue. |
| Misguided evidence | Conclusions can only be as reliable (but also as neutral) as the data they are based on, which can lead to bias. For example, Dressel and Farid [80] found that the COMPAS recidivism algorithm commonly used in pretrial, parole, and sentencing decisions in the United States is no more accurate or fair than predictions made by people with little or no criminal justice expertise. |
| Unfair outcomes | An action could be found to be discriminatory if it has a disproportionate impact on one group of people. For instance, Selbst [81] articulates how the adoption of predictive policing tools is leading to more people of color being arrested, jailed or physically harmed by police. |
| Transformative effects | Algorithmic activities, such as profiling, can challenge autonomy and informational privacy. For example, Polykalas and Prezerakos [82] examined the level of access to personal data required by more than 1000 apps listed in the “most popular” free and paid-for categories of the Google Play Store. They found that free apps requested significantly more data than paid-for apps, suggesting that the business model of these “free” apps is the exploitation of personal data. |
| Traceability | It is hard to assign responsibility for algorithmic harms and this can lead to issues with moral responsibility. For example, it may be unclear who (or indeed what) is responsible for autonomous car fatalities. An in-depth ethical analysis of this specific issue is provided by Hevelke and Nida-Rümelin [83]. |
| Principles | Definitions |
|---|---|
| Beneficence | This principle has several definitions in declarations, for instance, promoting human well-being and flourishing, peace and happiness, creating socio-economic opportunities, and economic prosperity [18]. As the EU’s High-Level Expert Group on AI ([85], p. 4) emphasizes, “AI is not an end in itself, but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good, as well as bringing progress and innovation.” |
| Non-maleficence | This principle emphasizes the duty not to inflict harm on others, resonating with the primum non nocere guiding principle, or “first, do no harm” [86]. |
| Autonomy | The dimensions of autonomy encompass capacities for self-determination, normative obligations to respect and support it, relational recognition, self-respect, its exercise, and necessary material, economic, legal, cultural, and informational conditions, without these conditions being constitutive of autonomy itself [87]. |
| Justice | It provides a set of standards by which to fairly adjudicate people’s (often competing) claims to liberties, opportunities, resources, and modes of treatment [88]. |
| Explicability | It refers to the ability of AI systems to provide clear, understandable explanations of their decision-making processes, outcomes, and behaviors [89]. |
| Limitation in Ethical AI Declarations | How It Affects Digital Business (Mechanism of Influence) | Specific Implications for E-Commerce |
|---|---|---|
| Lack of a standard framework | (1) Fragmented compliance costs as they must simultaneously navigate conflicting regional regulations; (2) complicated cross-border operations and platform governance; (3) competitive disadvantages when competitors exploit lax jurisdictions through regulatory arbitrage. | (1) Suffer massive inefficiencies as they must build and maintain separate AI systems for each market (EU version, US version, etc.); (2) disadvantage smaller platforms that cannot afford multi-jurisdictional compliance teams, thereby entrenching monopolies of large players like Amazon. |
| Divergence in definitions and terminology | (1) Ambiguity in key terms (e.g., “fairness”, “transparency”) makes it hard to align AI systems with compliance or customer expectations; (2) increased compliance costs, as companies must design adaptive governance frameworks rather than one-size-fits-all AI systems (for instance, fairness might be defined as equal opportunity in one company or country and as demographic parity in another), which also poses a challenge to legal departments. | (1) E-commerce platforms face operational paralysis when identical AI features require contradictory implementations across markets. Consider a product recommendation system that aims to promote “fairness”: under demographic parity, the system shows the same products to men and women; under equal opportunity, it shows products based on browsing history; and under national interest, it prioritizes domestic brands. |
| Challenges in translating principles into practice | (1) Lack of implementation guidance and ethical KPIs: data scientists cannot translate “be fair” into code without concrete metrics; (2) firms invest heavily in trial-and-error experimentation that risks regulatory penalties; or (3) they adopt superficial compliance measures (ethics washing) that satisfy stated principles on paper while failing to address actual harms in practice. | (1) Ethical AI becomes a checkbox exercise with few effective practical safeguards, especially in recommender systems and personalization engines. Suppose the principle is “maintain transparent pricing”: it is unclear to whom the explanation is owed (customers? regulators? researchers?), in what format (technical documentation or plain language?), and whether the requirement concerns transparency, explainability, or interpretability. |
| Integration with legal frameworks | (1) Ethical declarations might not match actual laws, creating confusion between legal compliance and ethical responsibility; (2) AI ethics declarations that ignore misinformation and disinformation leave digital businesses legally and ethically unprotected, so platforms face reputational damage and regulatory backlash for spreading election misinformation or health disinformation. | (1) Businesses either over-invest in compliance or risk underperforming in new markets. (2) Silence on information disorder allows e-commerce platforms to weaponize AI for deception, such as generating fake reviews at scale, creating synthetic “influencer” endorsements, producing misleading product comparisons, and using deepfake demonstrations. |
| Erosion of trust due to limited inclusion | (1) Ethical principles rarely involve input from SMEs, civil society, or diverse global stakeholders, making them seem top-down and biased. | (1) Platforms may struggle to build user trust or face boycotts over perceived bias or unfairness. |
| Conflicting objectives within declarations | (1) Some declarations promote both innovation and strict control, leaving firms uncertain how to balance speed vs. safety; (2) Wasted resources training employees on multiple conflicting versions of the same principle without clear guidance on which interpretation governs when conflicts arise. | (1) It provides perfect cover for exploitation where e-commerce platforms justify invasive behavioral tracking by claiming “personalization accuracy requires comprehensive data collection,” defend discriminatory dynamic pricing as “fairness unfortunately conflicts with revenue optimization,” rationalize opaque recommendation algorithms through “transparency risks competitive intelligence leakage or security”. |
| Geopolitical tensions | It forces digital businesses into incompatible regulatory trilemmas, where operating globally requires simultaneously satisfying US innovation-first requirements, EU human-centric mandates, and Chinese sovereignty imperatives: (1) creating technical impossibilities where a single AI system cannot comply with all three jurisdictions; (2) forcing costly market-specific versions; (3) prompting strategic market exits. | (1) It might compel e-commerce platforms to perform geopolitical loyalty by adopting the ethics narrative of their chosen bloc. For instance, the US bans Chinese platforms like Temu and SHEIN, citing “surveillance threats,” while American platforms deploy equivalent tracking; the EU uses strict AI regulation to handicap US tech giants under the guise of “ethical superiority.” |
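To make the divergence in fairness definitions noted in the table concrete, the following sketch (hypothetical data and function names, not drawn from any declaration) shows how the very same batch of recommender decisions can satisfy demographic parity while violating equal opportunity:

```python
# Sketch: the same "fair" recommender outcome can pass one fairness metric
# and fail another. All data, groups, and names here are hypothetical.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups A and B."""
    def rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(decisions, groups, qualified):
    """Same gap, but computed only over *qualified* users
    (e.g., those whose browsing history matches the product)."""
    def rate(g):
        pool = [d for d, grp, q in zip(decisions, groups, qualified) if grp == g and q]
        return sum(pool) / len(pool)
    return abs(rate("A") - rate("B"))

# Hypothetical batch: 1 = product shown, 0 = not shown
decisions = [1, 1, 0, 0, 1, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
qualified = [True, True, False, False, True, False, True, True]

print(demographic_parity_gap(decisions, groups))   # show rates are equal: gap is 0.0
print(round(equal_opportunity_gap(decisions, groups, qualified), 2))  # yet a gap appears among qualified users
```

Both metrics are standard in the fairness literature; the point of the sketch is only that a platform told to "be fair" cannot satisfy both definitions at once with this batch, which is the operational paralysis the table describes.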
| Ethical AI Principles with Frequency Above 50% in MGAs (Benchmark) | Europe | America | Africa | Asia | Matching Percentage |
|---|---|---|---|---|---|
| Societal well-being | 1 | 1 | 1 | 1 | 100% |
| Accountability | 1 | 1 | 1 | 1 | 100% |
| Environmental well-being | 1 | 0 | 1 | 1 | 75% |
| Transparency | 1 | 0 | 1 | 1 | 75% |
| Fairness and non-discrimination | 1 | 1 | 1 | 1 | 100% |
| Inclusion | 0 | 0 | 1 | 1 | 50% |
| Security | 0 | 1 | 1 | 1 | 75% |
| Autonomy | 0 | 0 | 1 | 0 | 25% |
| Safety | 0 | 0 | 1 | 1 | 50% |
| Privacy and data protection | 1 | 1 | 1 | 1 | 100% |
| AI Ethical Principles Mentioned in More Than 50% of Regional Declarations but Not in MGAs | Promote research and innovation | Economic development; Promote research and innovation; Information integrity | Promote research and innovation | Open data policy | |
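The matching percentages in the table follow from simple share-of-regions arithmetic; a brief sketch, using a few rows of the table as illustrative input:

```python
# Sketch: each matching percentage is the share of the four regional
# declaration groups (Europe, America, Africa, Asia) that include a
# principle found in >50% of major global actors' (MGA) declarations.
def matching_percentage(flags):
    """flags: 1 if the region's declarations include the principle, else 0."""
    return round(100 * sum(flags) / len(flags))

# A few rows of the table, as presence flags per region
presence = {
    "Societal well-being":      [1, 1, 1, 1],  # present in all four regions
    "Environmental well-being": [1, 0, 1, 1],  # absent in America
    "Autonomy":                 [0, 0, 1, 0],  # present only in Africa
}
for principle, flags in presence.items():
    print(f"{principle}: {matching_percentage(flags)}%")
```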
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2026 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Haidar, A. Ethics Without Teeth? Challenges and Opportunities in AI Declarations for Platform Governance. J. Theor. Appl. Electron. Commer. Res. 2026, 21, 103. https://doi.org/10.3390/jtaer21040103

