Review

Ethics in Artificial Intelligence: A Cross-Sectoral Review of 2019–2025

by Charalampos M. Liapis 1,2, Nikos Fazakis 3, Sotiris Kotsiantis 3,* and Yannis Dimakopoulos 4

1 School of Social Sciences, Hellenic Open University, 26335 Patras, Greece
2 Computer Technology Institute and Press “Diophantus”, 26504 Patras, Greece
3 Department of Mathematics, University of Patras, 26504 Patras, Greece
4 Department of Chemical Engineering, University of Patras, 26504 Patras, Greece
* Author to whom correspondence should be addressed.
Informatics 2026, 13(4), 51; https://doi.org/10.3390/informatics13040051
Submission received: 7 February 2026 / Revised: 19 March 2026 / Accepted: 24 March 2026 / Published: 27 March 2026

Abstract

Artificial Intelligence (AI) has transitioned from a specialized research area to a ubiquitous socio-technical infrastructure influencing sectors from healthcare and law to manufacturing and defense. In tandem with its transformative promise, AI has generated an exponentially expanding ethics literature interrogating fairness, transparency, accountability, and justice. This review synthesizes publications and key policy developments between 2019 and 2025, bringing sectoral discourses together with cross-cutting frameworks. Grounded in a systematic scoping review methodology, we frame the field along four meta-dimensions: trust and transparency, bias and fairness, governance and regulation, and justice, and investigate their expression across diverse sectors. Special attention is dedicated to healthcare (patient trust and algorithmic bias), education (integrity and authorship), media (misinformation), law (accountability), and the industrial sector (data integrity, intellectual property protection, and environmental safety). We ground abstract principles in concrete case studies to illustrate real-world harms and mitigation strategies. Furthermore, we incorporate pluralistic ethics (e.g., Ubuntu, Islamic perspectives), environmental ethics, and emerging challenges posed by Generative AI and neuro-AI interfaces. To bridge theory and practice, we propose an operational governance framework for organizations. We contend that success involves moving beyond principles toward ethics-by-design, pluralistic governance, sustainability, and adaptive oversight. This review is intended for scholars, practitioners, and policymakers who need a comprehensive and actionable framework for navigating the complex landscape of AI ethics.

1. Introduction

Artificial Intelligence has transitioned from speculative potential to everyday infrastructure across sectors, from health care and education to commerce, law, media, and government. Such diffusion brings recurring ethical concerns about, inter alia, fairness, responsibility, openness, confidentiality, and human rights. Both theoretical sophistication and practical pilot work have emerged throughout computer science, philosophy, law, sociology, medicine, and commerce. The early literature stressed aspirational values such as beneficence, non-maleficence, autonomy, and justice. More recent work seeks to identify operational solutions, such as institutional mechanisms, design procedures, and governance processes, that travel between contexts. The sectoral literature indicates how risk takes different forms: dataset bias and opacity (i.e., lack of transparency) in clinical decision-making; authorship and integrity in education; consumer protection in finance; and legitimacy in law and media.
In this review, we frame our analysis around four meta-dimensions that span multiple domains: trust and transparency, bias and fairness, governance and regulation, and justice. Aspects and perspectives from philosophical and intercultural thought and systemic theories (multi-level, ecosystemic, and environmental) are also featured. To facilitate navigation, our synthesis tables and cross-sector matrix (presented in the subsequent Methodology and Results sections) provide a clear map of the field. We also identify and highlight challenges and gaps, and explore the movement toward security, compliance, and enforcement.
Previous mappings of the field mostly focused on the “principles-first period”, documenting guidelines and their normative foundations or their diffusion into company governance, with little cross-sector synthesis [1,2,3]. Comparative analyses of policy and landscape maps expanded knowledge of national and regional paths but remained mostly macro-institutional [4,5,6]. Domain-specific reviews, particularly in health/biomedicine and education, provided depth but remained circumscribed by area: scoping and narrative reviews in the health sector covered safety, accountability, and bias [7,8,9,10], while higher-education syntheses and bibliometric mappings tackled integrity, pedagogy, and curricular design [11,12,13,14]. Business-facing work and function-specific reviews (for example, recruiting/selection; social sustainability) shed light on risk-laden use cases but without incorporating intercultural or ecological lenses [15,16].
In this context, our work contributes to the literature in six ways. First, it offers a genuinely cross-sectoral synthesis spanning 2019–2025 that organizes findings around four meta-dimensions (trust/transparency, bias/fairness, governance/regulation, and justice) and traces how they manifest across healthcare, education, media/democracy, business/finance, law/policy, defense/security, and the social/public sector (extending beyond the single-domain scopes typical of prior surveys). Second, it bridges principles to practice by integrating work on audits, standards, and compliance with sectoral case material, clarifying the various existing pathways from high-level values to operational safeguards [17,18,19]. Third, it incorporates plural and intercultural ethics, e.g., Ubuntu, Islamic, Catholic, and Latin American public values, as first-class lenses for legitimacy, rather than treating them merely as peripheral add-ons [20,21,22]. Fourth, it widens the frame to environmental and planetary ethics, linking AI governance to resource use, climate, and lifecycle accountability [23,24]. Fifth, it connects science and technology studies (STS) and philosophical accounts of uncertainty with institutional design, highlighting how acknowledging AI’s inherent epistemic limits shapes feasible oversight [25,26,27,28]. Sixth, to support comparison and reuse, we provide a consolidated synthesis table and a coverage matrix aligning contributions by domain, theme, and chronology, thereby addressing the fragmentation noted in earlier reviews.

2. Methodology

This paper undertakes a scoping review of the AI ethics literature and policy landscape. A scoping review methodology was selected as the most appropriate approach for mapping the key concepts, evidence sources, and research gaps within a broad and rapidly evolving field. The review followed a structured five-stage process: (1) identifying the research question; (2) identifying relevant studies and documents; (3) study selection; (4) charting the data; and (5) collating, summarizing, and reporting the results.

2.1. Search Strategy

A systematic search was conducted in Scopus covering the period from 1 January 2019 to 31 August 2025 (search executed on 1 September 2025). To reduce false negatives, we searched in title/abstract/keywords and used both topic and synonym terms. The core query was:
TITLE-ABS-KEY ( "Artificial Intelligence" AND ethic* ) AND PUBYEAR > 2018 AND PUBYEAR < 2026
In addition to the database search, we applied backward/forward citation chasing for (i) influential guideline-mapping papers and (ii) cross-sector review papers to capture highly cited items that do not use the term ethics explicitly in the title/abstract (e.g., work framed as fairness, accountability, transparency, human rights, or governance).

2.2. Inclusion and Exclusion Criteria

We included (a) peer-reviewed journal articles and conference papers and (b) high-impact gray literature (e.g., standards, regulatory proposals, and official policy documents) that substantively addressed ethical, legal, or governance implications of AI systems. We restricted the main corpus to English-language documents within 2019–2025 to match the review scope. We excluded (i) purely technical papers that did not discuss ethical, legal, or societal implications; (ii) short editorials/opinion pieces unless they were widely cited and field-defining (these are treated as foundational context rather than part of the 2019–2025 core corpus). Because the review spans 2019–2025, earlier works are not analyzed as part of the year-bounded corpus, but a small set of pre-2019 publications is referenced to provide conceptual anchoring (Table 1).

2.3. Screening and Thematic Synthesis

To improve reproducibility, the study-selection process is reported in PRISMA-ScR style (Figure 1). The search returned 2143 records. After duplicate removal, 1876 unique records were screened by title/abstract. At this stage, 1468 records were excluded as out of scope. Full-text assessment was then conducted for 408 documents, of which 176 were excluded (e.g., no substantive ethics/governance content, non-eligible publication type, or out-of-window scope). The final thematically coded corpus comprised 232 items.
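As a minimal sanity check, the PRISMA-ScR flow counts above can be verified arithmetically; the sketch below uses only the numbers reported in this section:

```python
# PRISMA-ScR flow counts reported in Section 2.3.
unique_after_dedup = 1876       # records screened after duplicate removal
excluded_title_abstract = 1468  # excluded as out of scope at title/abstract stage
fulltext_assessed = 408         # documents assessed at full text
excluded_fulltext = 176         # excluded at full-text stage
final_corpus = 232              # thematically coded corpus

# Title/abstract screening must leave exactly the full-text pool.
assert unique_after_dedup - excluded_title_abstract == fulltext_assessed
# Full-text exclusions must leave exactly the coded corpus.
assert fulltext_assessed - excluded_fulltext == final_corpus

print(final_corpus)  # 232
```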

2.3.1. Inter-Rater Reliability

Two reviewers independently screened records at both title/abstract and full-text stages using a pre-defined eligibility form. Before formal screening, a pilot calibration round (50 records) was used to harmonize interpretation of the inclusion/exclusion criteria. Inter-rater agreement was quantified using Cohen’s κ. Agreement was substantial at both title/abstract screening (κ = 0.81) and full-text eligibility assessment (κ = 0.78). Discrepancies were resolved through discussion; unresolved disagreements were adjudicated by a third reviewer.
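For readers who wish to reproduce the agreement statistic, Cohen’s κ can be computed directly from paired screening decisions. The sketch below uses hypothetical include/exclude labels for illustration, not the actual screening data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from marginal frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical screening decisions ("I" = include, "E" = exclude).
a = ["I", "I", "E", "E", "I", "E", "I", "E", "E", "E"]
b = ["I", "E", "E", "E", "I", "E", "I", "E", "I", "E"]
print(round(cohens_kappa(a, b), 2))  # 0.58
```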

2.3.2. Thematic Coding Protocol

Thematic synthesis followed a transparent, four-step coding protocol. First, a deductive starter codebook was defined from the review questions (trust/transparency, bias/fairness, governance/regulation, justice). Second, two reviewers inductively added emergent subcodes during open coding of an initial subset (15% of included studies). Third, the refined codebook was applied to the full corpus, with weekly reconciliation meetings and versioned codebook updates. Fourth, axial synthesis was used to cluster subcodes into cross-sector patterns and contradictions. We retained an audit trail including code definitions, decision rules, merged/split code history, and representative evidence excerpts for each synthesized claim.
From this process, the four core meta-dimensions of this paper emerged: (1) Trust and Transparency, (2) Bias and Fairness, (3) Governance and Regulation, and (4) Justice. These dimensions form the analytical framework for synthesizing the findings across different sectors.

3. A Brief Philosophical Outline

This section is intentionally concise and serves only as a conceptual bridge to the review’s analytical framework. We retain philosophical distinctions insofar as they clarify how ethical claims are translated into evaluative criteria in the cross-sector synthesis.

3.1. Normative Anchors Used in This Review

Rather than offering a stand-alone philosophical survey, we use three high-level normative anchors: virtue-oriented reasoning (character and institutional ethos), deontic reasoning (rights, duties, and constraints), and consequential reasoning (harms, benefits, and risk distribution). Across sectors, these anchors are treated as complementary lenses for interpreting evidence, not as competing comprehensive theories.

3.2. Object–Subject Framing and Practical Relevance

A second compact distinction concerns AI as object (artifact under human design/governance) versus AI as quasi-subject (systems whose autonomous behavior complicates attribution of responsibility). In practical terms, this framing is used here to identify where responsibility remains clearly allocable (developers, deployers, institutions) and where responsibility gaps may emerge (high autonomy, opaque optimization, distributed decision chains).

3.3. Explicit Link to the Four Meta-Dimensions

The philosophical anchors are operationalized directly in the four meta-dimensions used throughout the paper: (i) Trust and Transparency (epistemic legibility and justificatory duties), (ii) Bias and Fairness (distributional and procedural equity), (iii) Governance and Regulation (accountability, liability, and enforceability), and (iv) Justice (rights, inclusion, and structural asymmetries). This mapping is used as a reading guide for the sectoral sections that follow, so the philosophical discussion functions as scaffolding for the empirical synthesis rather than as an independent chapter.
To maintain flow, detailed debates on machine moral status, AGI/superintelligence, and extended human–AI interaction scenarios are not developed here as separate expositions; they are referenced only when they materially inform sector-specific governance implications.
Having established the methodological protocol (Section 2) and normative scaffolding (Section 3), the following sections present the results of our thematic synthesis, organized first by cross-sectoral dimensions and then by emerging governance frameworks.

4. Results: Ethical Cross-Sectoral Dimensions of AI

We begin with an overview of the four cross-sectoral dimensions, that is, trust/transparency, bias/fairness, governance/regulation, and justice. Each dimension’s distribution across sectors is presented in Figure 2 and Table 2. A consolidated overview of cross-cutting definitions, instruments, and caveats is provided in Table 3. Each dimension introduces distinct methods and protections that recur throughout the sectoral analyses that follow.
The priority ratings in Table 4 reflect the volume and depth of engagement in the reviewed corpus: “H” denotes sectors where the dimension is a primary focus of the coded literature (frequent, detailed treatment across multiple studies); “M” indicates substantive but secondary coverage; “L” marks sectors where the dimension appears only peripherally. These ratings characterize the state of the literature, not normative importance.

4.1. Healthcare

Among the most heavily studied areas, health care stands out, reflecting the high-stakes nature of the relevant decision-making processes. The corresponding literature commonly clusters around the themes of trust/transparency and bias/fairness.

4.1.1. Trust and Transparency

AI applications and relevant tools in surgery and hepatology typically function as black boxes, limiting clinicians’ ability to interpret or justify recommendations. Such opacity can both significantly weaken patient trust and undermine informed consent. To address this, context-specific oversight mechanisms and reliability verification, such as explainability tools and clinical validation protocols, are increasingly investigated and emphasized [50,93].

4.1.2. Bias and Fairness

Bias in healthcare AI spans access, diagnosis, and outcomes, reflecting systemic inequalities in training data and deployment contexts. Data and deployment biases, if unaddressed, perpetuate access bias, diagnosis bias, and outcome bias, and can thereby worsen health disparities; mitigating them requires both institutional reform and technical interventions such as fairness-aware modeling [51]. For example, a widely used US health-care algorithm used future health-care spending as a proxy for medical need and thereby systematically under-flagged Black patients despite equal or greater illness burden, illustrating how even technically sound models can perpetuate inequities absent fairness audits [94].
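The spending-proxy failure mode can be made concrete with a toy audit. The numbers below are entirely hypothetical (not drawn from [94]): two groups have identical illness burden, yet a score based on past spending flags them for care management at very different rates:

```python
# Toy fairness audit in the spirit of the spending-proxy case:
# patients with equal illness burden but unequal historical spending.
patients = [
    # (group, illness_burden, past_spending) -- illustrative values only
    ("A", 5, 9000), ("A", 7, 12000), ("A", 3, 4000), ("A", 6, 10000),
    ("B", 5, 5000), ("B", 7, 7000),  ("B", 3, 2000), ("B", 6, 5500),
]

THRESHOLD = 8000  # flag for care management if the spending-based score exceeds this

def flag_rate(group):
    members = [p for p in patients if p[0] == group]
    flagged = [p for p in members if p[2] > THRESHOLD]
    return len(flagged) / len(members)

def mean_burden(group):
    members = [p for p in patients if p[0] == group]
    return sum(p[1] for p in members) / len(members)

# Equal average illness burden across groups...
assert mean_burden("A") == mean_burden("B")
# ...yet very different flag rates when spending proxies need.
print(flag_rate("A"), flag_rate("B"))  # 0.75 0.0
```

A fairness audit of this kind compares outcomes at equal need; stratifying flag rates by illness burden rather than by model score is what exposes the proxy failure.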

4.1.3. Clinical Ethics and Professional Practice

The deployment of clinical AI directly disrupts established boundaries of medical practice, forcing a renegotiation of informed consent, legal responsibility, and patient care. Rather than manifesting as abstract risks, these challenges trigger distinct professional dilemmas across specialties. The radiology literature emphasizes that opaque diagnostic algorithms create acute liability risks when physicians cannot adequately explain AI-generated imaging anomalies to patients [48,52]. In nursing, the introduction of automated monitoring systems risks eroding the relational trust and empathetic bedside communication that define the profession’s core identity [50,95,96]. Surgical and ophthalmological studies specifically caution against automation bias, warning that practitioners may unconditionally defer to algorithmic recommendations and forfeit their independent clinical judgment [97,98]. Furthermore, in highly sensitive perinatal and reproductive care, algorithmic risk-scoring systems complicate patient autonomy and the thresholds required for genuine informed consent [54]. Synthesizing these specialty-specific hurdles, cross-cutting assessments confirm a shared set of unresolved operational challenges: the legal attribution of medical errors, the lack of standardized institutional readiness, and the persistence of skewed clinical validation datasets [49,90,99,100,101,102,103,104,105,106,107].
In general, lessons from health care extend to other high-stakes sectors, underscoring the importance of strong validation, responsible deployment, and governance frameworks that ensure equity. A compact synthesis of healthcare-specific themes is provided in Table 5.

4.1.4. Clinical Specialties

Each medical specialty faces distinct operational hurdles when deploying AI, necessitating bespoke rather than generic governance. In hepatology, diagnostic algorithms require rigorous subgroup verification to prevent bias from worsening clinical outcomes in vulnerable liver-disease demographics [51,93]. Meanwhile, diabetology increasingly relies on continuous patient monitoring applications, triggering concrete ethical concerns regarding long-term data stewardship and the preservation of patient autonomy over sensitive biometric data [107]. Neonatology and perinatal care demand exceptional error thresholds and specialized family-communication protocols, as algorithmic miscalculations in incubators or fetal monitoring carry outsized ethical gravity [98]. Similarly, surgical units must establish explicit liability frameworks that dictate exactly when an AI tool serves as pre-operative augmentation versus when a surgeon might dangerously over-rely on it during active procedures [101]. Consequently, governance safeguards must be structurally tailored to the specialty, rather than broadly imported from neighboring domains.

4.1.5. Nursing and Care Professions

The nursing and allied care professions fundamentally rely on empathy, relational trust, and patient dignity, elements that automation cannot replicate. Clinical nursing ethics emphasizes that AI must function strictly as a supportive tool rather than as a replacement for human moral agency and bedside judgment [95,96]. Because all predictive models possess inherent epistemic limits, technical explainability alone is insufficient; instead, nurses must be trained in “error-conscious practice”, utilizing clear, institutionally supported escalation procedures to override algorithmic recommendations when they conflict with direct patient observations [50]. Across broader allied health fields, the ethical redesign of clinical workflows must prioritize the retention of human presence, ensuring that the division of labor does not silently shift the burden of care entirely onto machines [106].

4.1.6. Public and Global Health

When AI operates at the population scale, the ethical stakes shift from individual patient rights to structural equity, distributive justice, and transnational power imbalances. Epidemiological surveillance systems, for instance, must carefully balance the benefits of early outbreak detection against the severe risks of mass surveillance, prioritizing data confidentiality and minimizing the stigmatization of vulnerable communities [88]. Furthermore, global health institutions explicitly warn that the uneven distribution of computing infrastructure and regulatory capacity between high- and low-income nations threatens to worsen global health disparities; they advocate for mandatory differential verification tailored to local contexts and equitable benefit-sharing agreements [89]. At the operational level, cross-border telemedicine and automated screening programs serve as key test cases for how ethical procurement policies and situational safety thresholds can be enforced across drastically different international health systems [7,108,109].

4.2. Education and Research

In educational and research contexts, AI’s rapid uptake is reshaping governance, academic integrity, and pedagogy.

4.2.1. Governance and Regulation

Educational institutions face intense pressure to draft AI-use policies that directly reconcile sweeping technological mandates with their localized academic cultures and grading standards, a challenge especially pronounced in structurally under-resourced schools [58]. Because schools vary drastically in their digital infrastructure, faculty technological literacy, and pedagogical traditions, enforcing a rigid, universal governance template inevitably fails. Consequently, effective institutional responses demand highly contextualized frameworks, where specific AI-use guidelines, such as syllabus disclaimers and acceptable-use thresholds for assignments, are meticulously tailored to match the specific oversight capabilities and pre-existing professional norms of the local faculty.

4.2.2. Education and Integrity

The rapid influx of generative AI has triggered acute academic integrity crises, with meta-reviews confirming substantial gaps in how schools operationally embed ethics into digital literacy programs [11,13]. Focused studies on student misconduct emphasize the regulatory difficulties of enforcing plagiarism bans and establishing transparent disclosure requirements when utilizing generative tools for coursework [110,111,112]. Simultaneously, integrating AI for positive academic functions (e.g., accelerating literature reviews or coding assistance) requires explicit safeguards to prevent skill atrophy [113,114,115]. At the faculty level, academic publishing and peer-review workflows face severe disruptive threats, necessitating new verification protocols to catch AI-generated scientific manuscripts and peer reviews [56,59,116]. Synthesizing these challenges, cultural and systemic analyses argue that isolated policies must be replaced by comprehensive curricular reform that cultivates critical AI competence continuously from early K–12 education straight through to advanced higher-degree research [12,14,57,117,118,119].
See Table 6 for a compact map of clusters and references.

4.2.3. Curricula and Pedagogy

Current analyses of computer science and humanities syllabi reveal severe fragmentation, demonstrating that universities urgently need to integrate ethics directly into technical coding exercises rather than isolating them in separate social science modules [13,120,121]. To bridge this gap, educators increasingly rely on case-led pedagogy, using tangible, theoretical–practical scenarios like algorithmic hiring bias or automated grading failures, to build robust critical literacies capable of navigating real-world dilemmas [60,115]. Simultaneously, highly regulated professional programs, such as midwifery and dentistry, are being forced to radically redesign their clinical assessments to balance the efficiency of AI diagnostic tools with strict, profession-specific integrity norms [119,122]. Empirical studies tracking how students actually use generative tools document intense operational tensions; while students rapidly gain technical efficiency, educators struggle to chart whether this immediate productivity translates into long-term ethical competency [104,123,124,125,126].

4.2.4. Academic Publishing and Research Integrity

Generative AI disrupts authorship, disclosure and peer-review conventions. Ethics commentary and editorials call for transparent reporting of tool use, appropriate credit for roles played and integrity-sustaining boundaries [127,128,129]. Policy instruments (user/journal agreements) are gaining recognition as governance levers for disclosure and allowable aid [56,59]. Field syntheses outline shifting practices in communication infrastructure and libraries [62], whereas domain-level work points to risk-aware areas like violence evaluation and qualitative techniques [130,131].

4.3. Media/Democracy

Media and democratic communication are treated here with the same analytical depth as other major sectors because they are now high-stakes infrastructures for public reason, civic trust and collective decision-making. AI-mediated curation, recommendation and generation affect not only newsroom productivity but also agenda-setting power, visibility asymmetries and the quality of democratic deliberation [28,61,64,65,132].

4.3.1. Governance and Regulation

Policy studies converge on four regulatory levers: provenance and labeling requirements for synthetic content, platform accountability for amplification effects, copyright and licensing clarity for training/use, and independent supervisory arrangements for election-sensitive and crisis communication contexts [65,66]. China’s approach illustrates one trajectory; state-run media leverage consumer-rights protection to popularize AI regulation through a strategy of “controlled care” that reconciles state control with permissive AI practices [65]. Comparable to healthcare-style safety governance, this sector increasingly requires pre-deployment risk assessment, incident reporting pathways and role clarity among publishers, platforms and regulators.

4.3.2. Trust and Transparency

The literature identifies concrete trust-building practices rather than abstract transparency claims: visible AI-use disclosures, byline-level attribution conventions, separation of human-edited and machine-generated material, documentation of model limits, and user-facing correction channels [61,62]. In institutional media settings, these practices function as auditable workflow controls, improving epistemic traceability and helping audiences evaluate credibility under conditions of synthetic abundance. Qualitative interviews with media professionals confirm persistent tensions between efficiency-driven AI adoption and journalistic norms of verification, with practitioners highlighting data privacy, algorithmic bias in content selection, and the risk of eroding editorial accountability [64].

4.3.3. Bias and Fairness

Evidence on representational harms highlights how ranking systems and generative pipelines can reproduce social stereotypes, linguistic hierarchies and political skew. Recommended safeguards include dataset audits, subgroup-sensitive evaluation, multilingual fairness checks, and participatory review involving affected communities and public-interest stakeholders [63,64]. This parallels fairness governance in education and health by moving from one-off bias testing to continuous monitoring in production environments.

4.3.4. Justice and Legitimacy

Justice-oriented work links media AI to democratic inclusion, labor dignity and communicative rights. Core concerns include concentration of agenda-setting power, opacity of moderation and recommendation decisions, and unequal burdens on marginalized communities exposed to disinformation or misclassification [21,74]. The emerging consensus is that legitimacy depends on enforceable redress mechanisms, public-interest oversight and rights-compatible governance of digital public spheres. Cross-cultural analyses deepen this conclusion by showing that communicative harm is interpreted differently across linguistic, religious and postcolonial contexts. Islamic media-ethics discussions emphasize responsibility for truthful representation and social harm prevention in AI-mediated communication; African and Global South scholarship emphasizes community voice, linguistic inclusion and resistance to platform-driven epistemic marginalization. Practically, this implies multilingual provenance standards, locally governed fact-checking partnerships and region-sensitive moderation/audit protocols rather than one-size-fits-all platform rules [22,133].

4.4. Business/Finance

In business and finance, AI ethics intersects with competitive incentives, making data governance, model risk management and stakeholder trust core strategic concerns. Studies document gaps between stated principles and operational maturity while linking responsible AI to customer experience, corporate social responsibility (CSR) and sustainability [16,17,67,68,71,74].

4.4.1. Governance and Regulation

Firm and market evidence supports internal ethics reviews, product life-cycle controls, and compliance-by-design; notably, a quasi-natural experiment using difference-in-differences analysis of Chinese listed companies (2013–2022) finds that technology ethics review significantly promotes corporate AI development by alleviating financing constraints, reducing inefficient investment and enhancing transparency [72]. More broadly, policy proposals indicate ethics governance may influence innovation paths and minimize risk [6,71,73,134]. Procurement requirements and audit/certification emerge as drivers of diffusion of good practice [17].

4.4.2. Trust and Transparency

Customer-facing trust is affected by specific transparency tactics: benefits are observed from clear disclosures of AI use, documentation of model limits, and complaint/incident channels. In B2B contexts, explainability and service-level accountability drive relationship quality [67,68]. Questions of ownership/authorship and data stewardship in value chains also motivate transparent governance of inputs and outputs [16,135].

4.4.3. Bias and Fairness

Market applications (that is, credit, hiring, pricing) require bias controls, subgroup performance testing and context-specific fairness targets, alongside governance for vendor/third-party models [15,63,69,70]. A systematic review of 120 studies on AI in recruitment, for instance, identifies algorithmic bias, lack of explainability in hiring decisions and candidate perceptions of unfairness as persistent ethical challenges that utilitarian efficiency gains alone cannot resolve [15]. Sector guidance accordingly stresses aligning incentives so fairness KPIs matter to business outcomes, coupling technical controls with organizational accountability [71]. Amazon’s experimental recruiting tool, for instance, penalized resumes mentioning “women’s” and was ultimately scrapped when gender bias could not be reliably eliminated [136].
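One common form of subgroup performance testing is a selection-rate comparison such as the “four-fifths rule” heuristic, which flags adverse impact when a group’s selection rate falls below 80% of the highest group’s rate. The sketch below applies it to hypothetical hiring numbers; the group names, threshold, and figures are illustrative, not drawn from the cited studies:

```python
# Hypothetical subgroup selection-rate test for a screening model.
decisions = {
    # group: (selected, applicants) -- illustrative numbers only
    "group_x": (40, 100),
    "group_y": (22, 100),
}

rates = {g: sel / total for g, (sel, total) in decisions.items()}
best = max(rates.values())
# Impact ratio: each group's selection rate relative to the best-off group.
impact_ratios = {g: r / best for g, r in rates.items()}

for g, ratio in impact_ratios.items():
    status = "OK" if ratio >= 0.8 else "ADVERSE IMPACT FLAG"
    print(f"{g}: selection rate {rates[g]:.2f}, impact ratio {ratio:.2f} -> {status}")
```

In practice, such a check is only a first screen; sector guidance cited above pairs it with context-specific fairness targets and governance for vendor and third-party models.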

4.4.4. Justice and Legitimacy

CSR-anchored strategies frame responsible AI as a legitimacy asset that links consumer protection with environmental and social goals; recent work proposes a framework showing how the complementary integration of ethics, CSR and AI can boost marketing sustainability, benefiting both businesses and society through transparent reporting, real-time environmental monitoring and predictive analytics [16,68,74]. Sustainability-oriented cases further illustrate opportunities and the need for safeguards against ethics-washing: a teaching case on climate-innovation finance, for instance, shows how an open-source AI tool integrating web scraping and NLP can help investors assess the climate impact of companies while highlighting the broader corporate and societal issues that arise when business models are built around ESG data [23,137].

4.5. Law/Policy

Legal and policy scholarship argues that AI governance can only achieve true accountability and public legitimacy when abstract guidelines are replaced by enforceable, rights-based legal anchors, adaptive regulatory sandbox frameworks, and codified professional ethics standards. Furthermore, comparative analyses of regional legislation explicitly warn against “ethics-washing”, situations where organizations adopt voluntary, non-binding ethical pledges to deliberately pre-empt and stall the implementation of strict, binding legal liabilities [2,4,5,6,78,138].

4.5.1. Governance and Regulation

Analyses of EU, Chinese and other approaches highlight pathways from principles to hard law (risk tiers, duties of care, conformity assessment) and the institutionalization of oversight [2,4,5,78,138]. Comparative work in this area contrasts the EU’s rights-based approach, which anchors regulation in fundamental-rights impact, with Chinese public-interest framings that coordinate state centrism with sectoral regulation [4,5]. Ethics reviews are examined as meso-level instruments that connect research governance to regulatory compliance, functioning as iterative procedural bridges between soft principles and enforceable obligations [139]. Professional ethics reframes confidentiality, competence and fairness for AI-mediated legal practice and allied professions, with particular focus on deepfakes [140], while sectoral regulation details how biomedical research governance learns to respond to AI-enabled biotechnology [141].

4.5.2. Trust and Transparency

Trust requirements are expressed in terms of documentation, explainability proportionate to risk, and public justification in administrative and research contexts; hybrid AI-enabled review is investigated as a way to scale routine review while preserving human deliberation [1,75,142,143]. In this framing, transparency is proportionate: high-risk systems demand richer justification, while lower-risk tools may satisfy lighter disclosure obligations. Transparency therefore operates both as procedural fairness and as a precondition for meaningful challenge and redress.

4.5.3. Bias and Fairness

Rights-based formulations link due process and non-discrimination to technical controls and proportionality, requiring remedy and redress processes when individuals and groups are adversely affected by automated decisions [7,76,77]. A Rawlsian reconstruction of AI ethics principles, for example, argues that existing guidelines frequently lack sufficient ethical justification and overlook AI’s impact on democracy and public deliberation, proposing justice-as-fairness criteria that foreground distributional equity and procedural inclusion [76]. Sector-specific literature (medical imaging, for instance) illustrates how technical opacity can compromise legal accountability, as opaque diagnostic models hinder clinicians’ ability to contest or explain adverse outcomes [55].

4.5.4. Justice and Legitimacy

Philosophical and intercultural accounts argue that legitimacy requires plural ethical foundations (e.g., Ubuntu) and vigilance against symbolic regulation or political capture [20,144]. Professional and sectoral ethics (law, judiciary) are reframed for AI-mediated practice, alongside calls for adaptive, principle- and rights-congruent regulation [84,140,145]. To move beyond citation-level pluralism, recent comparative work suggests operational translation rules for regulation: (i) pair universal rights baselines with context-sensitive public values, (ii) require participatory norm-setting with affected communities, and (iii) embed remedy pathways that account for administrative and resource asymmetries in low- and middle-income settings. In this framing, Islamic ethics contributes structured principles of dignity, social welfare and accountability in governance design, while African and broader Global South traditions foreground relational personhood, solidarity and community-centered legitimacy as constraints on purely technocratic optimization [20,21,22,146].

4.6. Defense/Security

Defense and security applications of AI present severe operational risks, particularly concerning the dual-use proliferation of foundational models, the delegation of lethal targeting decisions to autonomous systems, and the destabilization of global nuclear deterrence protocols. Consequently, ethical efforts in this space must move beyond abstract principles to enforce strict, verifiable oversight across the four meta-dimensions: establishing binding international treaties (governance), mandating human-in-the-loop audit trails for weapons systems (trust), mitigating algorithmic misidentification in target surveillance (bias), and ensuring accountability for collateral violations of international humanitarian law (justice) [80,82,85].

4.6.1. Governance and Regulation

Scholars identify gaps in procurement, test procedures and accountability for AI-enabled targeting and surveillance technologies, calling for clear rules of engagement, auditable decision paths and oversight panels independent of prime contractors; defense standards work emphasizes safety cases, human-in-the-loop signaling and lifetime monitoring; at the international level, proposals range from transparency programs to arms-control-style protocols on autonomous weapons [79,82,83,85].

4.6.2. Trust and Transparency

Operational secrecy and adversarial contexts complicate explainability and external auditing, increasing the likelihood of automation bias and over-reliance on black-box mechanisms. The human–AI teaming literature recommends calibrated trust, robust verification and validation (V&V), red-teaming, and incident reporting to manage uncertainty under time constraints [81,147,148].

4.6.3. Bias and Fairness

Training data for surveillance, identification and threat assessment may itself carry social and geopolitical biases that differentially impact civilians and minority groups. Risk reduction includes data governance, subgroup-level performance monitoring, and context-appropriate fairness measures for high-stakes operations [63,82]. Given that defense datasets often reflect historically uneven collection practices and adversary-focused labeling, continuous bias auditing and independent review are essential to prevent discriminatory targeting and to maintain compliance with international humanitarian standards.

4.6.4. Justice and Legitimacy

Debates draw on just war theory to assess discrimination, proportionality and meaningful human control, noting persistent accountability gaps when lethal decisions are mediated by algorithms. India’s defense discourse, for instance, integrates AI-embedded weapons within a traditional dharmayuddha (just war) framework, asserting that self-consciousness (Atman) must not be subordinated to machines (yantras), thereby insisting on meaningful human command as a non-negotiable ethical constraint [80]. Without credible oversight and avenues for redress, AI use in security contexts threatens democratic legitimacy and may accelerate arms races [82,83,85].

4.7. Social/Public Sector

Government and social-service deployments are now central test cases for responsible AI because they directly mediate access to rights, welfare and public goods. Across public administration, social work and civic services, the literature increasingly treats this domain as comparable in risk intensity to healthcare and therefore in need of similarly rigorous governance design [79,86,92,139].

4.7.1. Governance and Regulation

Public-sector studies emphasize procurement-based controls, mandatory impact assessments, human-in-command safeguards for eligibility and sanction decisions, and clear administrative-law accountability for automated decision support [86,139]. Recurrent recommendations include registry-based transparency for high-risk systems, independent auditing mandates, and ex post review powers for ombud institutions and courts. Public-sector leaders need to develop a robust understanding of AI capabilities and limitations, establish ethical frameworks aligned with administrative-law accountability, and involve diverse stakeholders in every phase of AI deployment.

4.7.2. Trust and Transparency

Trust in state AI is shown to depend on procedural visibility, plain-language explanations for affected persons, disclosure of decision logic and data sources, appealability, and publication of performance limitations across population groups. Importantly, transparency is framed as actionable administrative due process, not merely technical explainability.

4.7.3. Bias and Fairness

The social/public corpus documents risks of allocative and representational harm in welfare targeting, risk scoring and triage workflows. Proposed controls include bias testing at the policy-design stage, local context validation, periodic fairness recalibration, and participatory governance with frontline professionals and community representatives. These findings align the sector with broader calls for lifecycle fairness governance.
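One way to operationalize the periodic fairness recalibration proposed above is to compare a deployed model's subgroup metrics (e.g., per-group true-positive rates) against those recorded at the last audit and flag any group that has drifted beyond a tolerance. This is a hypothetical sketch: the function name, metric choice and 0.05 tolerance are assumptions for illustration, not a mandated procedure from the cited literature.

```python
def recalibration_flags(baseline, current, tolerance=0.05):
    """Flag groups whose audited metric drifted beyond `tolerance`.

    `baseline` and `current` map group labels to a fairness metric
    (e.g., true-positive rate). Flagged groups signal that the model
    or its decision thresholds need review for that subgroup."""
    return sorted(
        g for g in baseline
        if g in current and abs(current[g] - baseline[g]) > tolerance
    )
```

In a public-sector workflow, flagged groups would be routed to the participatory review bodies the text describes rather than silently re-thresholded.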

4.7.4. Justice and Legitimacy

Justice-based paradigms, including public-value models, Ubuntu-informed ethics and rights-based social-work frameworks, prioritize dignity, non-discrimination, remedy and collective oversight [20,92,149]. The sectoral conclusion is that legitimacy cannot be secured by efficiency claims alone; it requires enforceable redress, democratic accountability and sustained institutional capacity for oversight. Importantly, Global South governance studies stress that fairness and accountability mechanisms fail when they are imported without institutional adaptation. Recurrent recommendations include community review boards for welfare-risk models, culturally grounded harm taxonomies, and procedural accommodations (language access, low-cost appeal channels, and offline contestation) for populations with limited digital access. From this perspective, plural ethics is not decorative; it is a design requirement for administratively feasible and socially legitimate public-sector AI [88,89,92,146].

4.8. Other Domains: Religion, Psychology, Environment and the Arts

Ethical reasoning in additional domains encompasses work on Islamic ethics, the psychology of creativity, environmental resilience and creative authorship.

4.8.1. Religious and Cultural Ethics

Islamic perspectives on AI-mediated work ethics emphasize that labor must remain a source of inherent good, ensure religiously permitted practices, and maintain equitable relations with stakeholders to preserve dignity and justice [21]. Catholic social teaching and Ubuntu-informed frameworks contribute complementary principles of solidarity, relational personhood and community-centered governance that constrain purely technocratic optimization [20,22].

4.8.2. Psychological and Creative Dimensions

Psychological science on creativity examines authorship, the human–machine frontier and questions of authenticity as generative models blur the boundary between human and machine-produced artifacts. In the production of culture and the arts, authorship, creativity and value are being reconsidered in light of generative AI, raising unresolved questions about ownership, originality and the economic displacement of creative workers [66,135].

4.8.3. Environmental Governance

Environmental governance relates AI to extractive versus resilience pathways. “Elemental ethics” reframes water not merely as a resource consumed by data centers but also as a site of resistance within the AI value chain, foregrounding social and ecological costs that conventional efficiency metrics ignore [23,24]. The sustainability literature positions AI in the context of socio-ecological transitions, arguing that it may solidify extractive patterns or enable resilience depending on governance interventions [150].

4.9. Concluding Remarks on Cross-Sectoral Findings

Sectoral studies expose the limits of one-size-fits-all frameworks and supply feedback to higher-level governance. They underscore the need for polycentric, context-sensitive approaches that still coordinate across scales, tying artifact-level practice to organizational routines and enforceable oversight.

5. Frameworks, Governance and Standards

This section traces the evolution of AI governance, starting from early normative frameworks and voluntary principles and advancing to more structured, procedural, and legally anchored mechanisms across organizational, national, and transnational contexts.

5.1. Principles and Guidelines

First-wave governance was shaped by a proliferation of high-level principles (transparency, accountability, non-maleficence, human oversight and justice) laid out in policy briefing papers and company charters. Comparative mappings clarified the common core across guidelines and their normative assumptions, while also documenting redundancy and vagueness [1,2]. Analyses of company law and governance indicated how the EU’s Trustworthy AI principles percolated down into board-level duties and disclosure requirements but translated variably into enforceable duties [3]. Comparisons between regions showed different emphases (e.g., fundamental rights versus collective well-being) as well as different governance styles [4,5]. Surveys of machine-learning professionals indicated wide endorsement of headline values but also doubts about metricization and practicality under real-world constraints [151,152]. Taken as a whole, the record demonstrates both the value of common language and the weaknesses of principle-first governance: overlapping recommendations, feeble connections to standards and procurement, and tenuous paths toward accountability [75,153].

5.2. From Principles to Practice: Audits, Reviews and Standards

A second wave shifts from rhetoric to routinization through audits, review processes and standard-like practices. Studies trace the emergence of AI ethics auditing as a professionalized activity (scope definition, evidence requirements, independence, and public reporting) and caution against checklist formalism [18]. Organizational ethnographies and case studies show which ethical tasks can be proceduralized, which resist codification, and where “ethics-washing” risks arise [19,154]. Industrial surveys chart uneven maturity across sectors, with gaps in documentation, post-deployment monitoring and incident handling [17]. In research governance, hybrid AI-supported ethics review is explored as a way to scale deliberation while preserving human judgment [143]; EU program evaluations show how ethics review operates as an iterative compliance instrument in data-intensive science [139]. Adjacent domains (publication ethics, integrity policies, and user-agreement governance) illustrate how standard-like rules enter scholarly pipelines [56,59,127,129]. The lesson across these strands is that operational tools help bridge values to verification but require clarity of remit, independence, and continuous improvement to avoid purely symbolic performance [153].

5.3. Organizational and National Governance

Organizationally, ethics governance is becoming more tightly linked with product planning and risk management, but practice is patchy. Quasi-natural experiments link technology–ethics governance to observable changes in firm-level AI activity, suggesting that ethics shapes innovation trajectories rather than merely symbolizing virtue [72]. Field studies record gaps between policy and the pipeline, notably missing role definitions, procurement levers, and post-market surveillance [17,19]; field studies of “ethics by design” investigate which design principles the public perceives as priorities [77]. In small organizations, maturity and resourcing limitations come into sharp relief [70]. At the national and regional levels, paths diverge: Chinese digital-governance work draws on survey- and social-media-based data on norms and enforcement [65,133,155]; worldwide landscape mappings indicate clustering of strategies but persistent fragmentation [6]; and comparative syntheses discuss the ethical bases underlying policy mixes [142]. Regulatory documents mention libraries, universities and state entities unevenly, raising sector-specific questions about implementation and capacity [86].

5.4. Law, Rights and Regulatory Trajectories

Human-rights framings lie at the heart of scholarship connecting ethics to enforceable norms and liability, providing firm anchors for transparency, non-discrimination and remedy [78]. Analyses of the Chinese-style integration of ethics detail coordination between state centrism and sectoral regulation [4]. Work at the ethics–law interface traces how soft principles become hard requirements through sector rules, procurement, and conformity assessment [2,138]. Comparative works contrast rights-based European models with Chinese public-interest framings and uncover governance-style differences relevant to oversight design [5]. Challenges for “ethics-based regulation” point toward regulatory gifting, a concept illustrated by the Russian AI governance case, where the government deliberately allowed industry actors to shape AI regulation design, producing an unenforceable ethics-based self-regulation regime that shielded the domestic IT sector from binding oversight while risking ethics-washing [144]. Intercultural work (e.g., reading UNESCO’s ethics through Ubuntu) shows how culturally rooted values legitimize and localize governance [20]. Emerging doctrinal developments (e.g., copyright as it relates to generative AI) reveal how targeted legal regimes negotiate between innovation and accountability [66].

5.5. Gaps and Critiques of “Ethics-Only” Governance

Across these streams, criticisms coalesce around four shortcomings. First, non-enforceability: without enforceable obligations and liability, principles may legitimize unconstrained development [138,146]. Second, proceduralism without justice: principles and checklists may privilege what is easy to measure over distributive and structural justice [76,156]. Third, implementation gaps: case evidence reveals weak coupling between ethics discourse and organizational practice, particularly after deployment [17,157]. Last, political capture: in certain contexts, ethics language may conceal discretionary power or blunt scrutiny [144,153]. In response to these criticisms, the field is turning toward systemic, multi-level and pluralistic approaches that interlink ethics with standards, audits, liability and culturally grounded governance.

5.6. Emerging and Systemic Approaches

Systemic approaches reconceive AI ethics as interlocking sets of culture, ecology, power and law. Rather than ethics as an additional constraint or checklist, they draw attention to multi-level coordination, institutional coupling, robustness toward breakdowns, and varied bases of legitimacy. They also expand the frame beyond artifact-level fixes to governance architectures evolving with practice and context.

5.7. Multi-Level and Ecosystem Governance

Ecosystem viewpoints contend that ethical oversight must operate consistently across the micro, meso and macro layers: design and deployment practice (micro), organizational process and procurement (meso), and sectoral or national regulation (macro) [26,158]. At the micro level, “ethics by design” work explores the principles practitioners actually value and how preferences map onto requirements and trade-offs [77]. At the meso level, industrial-maturity studies identify gaps in documentation, post-deployment monitoring, and incident management, evidencing the need for clear roles and auditability [17]. Organizational analyses also identify which ethical activities can be codified and which resist codification, a fault line where ethics-washing may arise [19]. At the macro level, policy mapping reveals clustered but fragmented national strategies [6], while Chinese digital-governance studies track the co-evolution of norms and enforcement through surveys and social-media data [65,133,155]. Sector-facing public agencies (for example, libraries and universities) appear only sporadically in regulatory documents, raising capacity and implementation challenges [70,86]. In combination, the evidence supports polycentric, tightly coupled governance that unifies design practice, organizational routines and enforceable rules.

5.8. Socio-Technical and STS/Philosophical Perspectives

Science and technology studies (STS) and philosophy redescribe pivotal categories (trust, error, and agency) and ask what it takes to entrust machines with ethical reasoning. Rather than identifying ethics with explainability, scholars advocate living with “strange error” and designing for responsible reliance in the midst of uncertainty [25]. Work on cognitive architectures argues that because future AI will encounter unforeseen ethical dilemmas beyond hard-coded scenarios, understanding the machine equivalents of human motivations and values is essential for transparency, explainability and accountability in moral reasoning [159]. Ethics of communication illuminates how AI reconditions dialogue, authorship and responsibility in mediated publics [28]. Deeper currents trace the trajectory from posthumanism toward an ethics aware of hybrid agency, questioning whether posthuman “cyborg” entities should be considered moral agents on par with biological humans and what this implies for the design of artificial moral agents [27], and extend responsibility across generations [160]. Neuroethics contributes complementary lenses for governance and norm-setting at the brain–machine–body boundary [161,162]. Together, these perspectives call for socio-technical design that accepts opacity and fallibility while remaining alert to situated judgement and distributed responsibility. These multi-level and ecosystem governance linkages are summarized in Figure 3.
The complementary socio-technical and STS/philosophical linkage map is shown in Figure 4.

5.9. Environmental and Planetary Ethics

Systems thinking expands beyond anthropocentric lenses to planetary scopes. Environmental ethics emphasizes the material footprints of AI (energy, water and extractive infrastructure) and views lifecycle responsibility as a governance imperative. In particular, “elemental ethics” reframes water not merely as a resource consumed by data centers but also as a site of resistance within the AI value chain, foregrounding the social and ecological costs that conventional efficiency metrics ignore [23,24]. The sustainability literature positions AI in the context of socio-ecological transitions, contending that it may solidify extractive patterns or enable resilience depending on governance interventions [150]. Business strategy and finance work also links climate-transition goals, incentives and the ethics of AI-enabled transformations [16,137]. Emerging practice pairs technical standards (energy/water transparency) with institutional levers (procurement, disclosure, liability).

5.10. Epistemic Limits, Ambiguity and Commons

Uncertainty and ambiguity are inherent in complex learning systems; governance should aim for responsible reliance, not illusory certainty, training users to interact safely with flawed models rather than expecting absolute algorithmic perfection [25,50]. Work on ethics and trust in human–AI teaming demonstrates the role of organizational context and clear norms in shaping decisions about reliance, showing that strong institutional backing is required so operators know exactly when they are permitted to override an algorithm [147], just as studies on trust versus trustworthiness untangle high-level ethical principles from the messy, everyday trust dynamics of how frontline workers actually delegate tasks to AI tools [148]. In high-stakes medical contexts, clinical work contends that traditional, static evidence standards and oversight must respond to probabilistic, evolving systems, abandoning one-off certifications in favor of real-time auditing for tools that continuously learn and change behavior [10]. Rather than treating uncertainty as a technical defect, philosophical and policy work treats ambiguity as a productive space for active human deliberation and cautions against premature closure, arguing that algorithmic friction prevents operators from rubber-stamping automated decisions [163]. Questions of authorship, ownership and disclosure suggest commons-oriented architectures for knowledge and responsibility in generative AI, proposing collective licensing models where both the financial value of outputs and the legal liability for synthesized data are shared [127,135]. Parallel legal debates on copyright and generative systems illuminate how regional regimes mediate between large-scale technological innovation and basic protections for creative labor, as distinct judicial systems determine whether scraping artists’ work constitutes fair use [66].
Collectively, these contributions make the case for governance that acknowledges unavoidable epistemic bounds, institutionalizes operational disclosure and transparency, and shares legal and moral responsibility evenly along the entire AI value chain. An intercultural and plural ethics synthesis is presented in Figure 5.

5.11. Concluding Remarks on Emerging Frameworks

Throughout these strands, the common thread is the transition from checklist compliance to coordinated architectures: multi-level coupling of design, organizational routines and regulation; socio-technical humility toward error and opacity; intercultural legitimacy; and ecological stewardship. Systemic approaches consequently bring together law, ethics, ecology and culture, treating resilience and justice rather than formal alignment as the end goal. Below, we consider how these frameworks translate into sectoral practice across healthcare, education, media, business, law, defense and public services, tracing the mutual feedback between principles and everyday practice and identifying where systemic aspirations succeed or fall short in context.

6. Challenges and Gaps

The preceding synthesis reveals both convergence on core governance needs and persistent structural weaknesses. Despite notable progress, structural gaps persist across normative, technical and institutional layers. The following issues recur across sectors and jurisdictions.

6.1. Regulatory Fragmentation

Divergent regional models (EU rights-based; Chinese state-centric; and Russian ethics-based symbolism) and uneven national strategies yield inconsistent protections and forum shopping [2,4,5,6,78,144]. Comparative policy work and ethical foundations analyses underscore coordination and capacity gaps, especially outside high-resource settings [133,142,146].

6.2. Concentration of Harms

Without explicit justice goals, harms cluster among marginalized groups. Rawlsian critiques note that “principles-first” regimes sideline distributive questions [76]. Global and public-health studies document inequities in data, compute and validation pipelines and warn of stigma and rights risks in surveillance [88,89]. Domain evidence highlights subgroup performance gaps in screening and specialty care [51,108,164].

6.3. Opacity in High-Stakes Deployments

Explainability requirements cannot dissolve epistemic limits of complex models. Clinical imaging ethics stress documentation, consent and post-deployment monitoring to sustain accountability [48,49,103]. Frontline guidance reframes explainability toward responsible reliance and escalation protocols [50]. In defense, attribution and human-control gaps remain unresolved [83].

6.4. Limited Capacity-Building

Ethics education and professional training are fragmented across levels and disciplines. Scoping and systematic reviews call for integrated curricula, shared learning outcomes and assessment methods from K–12 to professional programs [13,111,120,121]. Meta-reviews and cross-sector initiatives point to uneven institutional readiness and the need for interdisciplinary training [11,12,91,117].

6.5. Need for Adaptive Oversight

Governance must be iterative, auditable and responsive to drift. Auditing is emerging, but maturity is uneven and susceptible to box-ticking [17,18]. Studies of the performativity and skew of ethics processes warn against symbolic compliance [153,154]. Hybrid AI-supported review and EU ethics procedures illustrate procedural bridges but require continual refinement [139,143]. Clinical governance likewise emphasizes continuous validation and monitoring [10].
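Responsiveness to drift can be made auditable with simple statistical monitors. The sketch below implements the Population Stability Index (PSI), a common heuristic for detecting shifts in input or score distributions between an audited baseline and live data; the interpretation thresholds in the docstring are conventional rules of thumb, not regulatory requirements, and the function name is our own.

```python
import math

def population_stability_index(expected, observed):
    """Population Stability Index over matched bin frequencies.

    `expected` and `observed` are lists of bin proportions (each
    summing to ~1) from the baseline audit and the live window.
    Common screening heuristic: PSI < 0.1 stable, 0.1-0.25 watch,
    > 0.25 significant drift warranting review."""
    eps = 1e-6  # guard against empty bins before taking logs
    psi = 0.0
    for e, o in zip(expected, observed):
        e, o = max(e, eps), max(o, eps)
        psi += (o - e) * math.log(o / e)
    return psi
```

A monitor of this kind only triggers review; deciding what drift means, and whether to retrain, recalibrate or withdraw a system, remains the deliberative task that adaptive oversight assigns to humans.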

6.6. Measurement Myopia in Standards

Operational metrics can narrow ethics to what is measurable, neglecting context and justice. Corporate governance perspectives and researcher surveys reveal gaps between favored technical criteria and broader societal aims, highlighting risks of proxy optimization and a disconnect between researchers and deployment practices. For instance, surveys of machine-learning researchers demonstrate strong support for prioritizing safety research and implementing pre-publication harm reviews, broader societal aims often neglected by narrow technical metrics [1,3,151,152].

6.7. Environmental Externalities

AI’s material footprint, in terms of energy, e-waste, and water, remains weakly governed, despite growing evidence that training and operating large-scale models impose significant environmental costs that current regulatory frameworks largely ignore. Environmental ethics and “elemental ethics” call for lifecycle accounting, resource transparency and alignment with sustainability transitions [23,24,150]. Proposed safeguards include green benchmarks for model training, mandatory water and energy disclosure in procurement specifications, and lifecycle environmental impact assessments integrated into audit and certification processes.
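Lifecycle accounting of the sort these safeguards envision can begin with a back-of-envelope estimate of training energy and emissions: accelerator power draw times runtime, scaled by a data-center overhead factor (PUE) and grid carbon intensity. All default values in the sketch below are illustrative assumptions, not measured figures; an actual disclosure regime would require metered facility data.

```python
def training_footprint(gpu_count, hours, gpu_watts=400, pue=1.2,
                       grid_kg_co2_per_kwh=0.4):
    """Back-of-envelope training energy and emissions estimate.

    Defaults (per-GPU draw, PUE, grid intensity) are illustrative
    assumptions; real disclosure requires metered data and
    location-specific grid factors."""
    kwh = gpu_count * hours * gpu_watts / 1000 * pue
    return {"energy_kwh": kwh, "co2_kg": kwh * grid_kg_co2_per_kwh}
```

Even a crude estimator like this makes the point the literature stresses: once energy and water figures are computable, they can be required in procurement specifications and audited like any other conformance criterion.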

6.8. Integrity and Authorship in Research Communication

Generative AI destabilizes disclosure, authorship and peer review. Editorial guidance and policy work advocate clear reporting, role-appropriate credit and enforceable user-agreement constraints across the publication pipeline [56,59,116,127,128,129]. Library and communication studies map culture-wide shifts that existing integrity regimes struggle to absorb, noting the paradox that ensuring ethical AI use through human oversight often requires researchers to manually perform the very tasks they sought to outsource to AI [62,165].

6.9. Principles-to-Practice Gap (Ethics-Washing Risk)

Comparative mappings of guidelines and case studies show loose coupling between high-level values and enforceable mechanisms, enabling symbolic compliance whereby organizations adopt ethical language without materially changing development or deployment practices [1,2,156,157]. The risk is compounded by overlapping and vaguely worded guidelines that provide discretionary space rather than binding constraints. Empirical critiques accordingly urge binding standards, clear accountability lines, and justice-oriented objectives to avoid legitimizing technology without meaningful oversight or redress [76,146].

6.10. From Gaps to Safeguards

Addressing the vulnerabilities identified across different sectors requires translating abstract ethical principles into binding, operational safeguards. Table 7 synthesizes how recurring systemic risks are increasingly being mitigated through highly specific, domain-adapted interventions. For instance, while high-stakes opacity is countered through robust model documentation and continuous post-deployment monitoring [48,49], algorithmic bias necessitates rigorous pre-deployment dataset audits and equity-focused key performance indicators (KPIs) [51,76]. Similarly, broader structural challenges like regulatory fragmentation and environmental externalities demand institutional coordination, such as cross-border memorandums, rights-based harmonization, and mandatory resource transparency [23,78]. Ultimately, these interconnected tools demonstrate a maturation in the field, moving beyond merely documenting ethical failures toward establishing concrete, auditable processes (e.g., independent certification, proactive disclosure norms, and human-in-the-loop mandates) that effectively constrain deployment and distribute accountability throughout the AI supply chain.
A complementary operational view of implementation instruments is presented in Table 8.

7. Security, Compliance and Ethical Use

This section turns from values to verification: the audits, standards, and enforcement mechanisms that make responsibility practicable and binding across lifecycles and institutions.

7.1. AI Audits, Assessments and Certification

Audits translate values into verifiable processes across the full lifecycle: pre-deployment impact assessment, dataset/model evaluation, and post-deployment monitoring. Emerging AI ethics auditing is professionalizing its functions, checklists, and reporting lines, but its success depends on independence, transparent public disclosure, and alignment with decision rights [18]. Organizational studies show patchy maturity and a risk of audits devolving into box-ticking unless they are tied to product gates, incident reporting, and escalation procedures [17,19]. Empirical work on the performativity and bias of ethics processes cautions that audits can legitimize rather than curb problematic practice unless governance separates assurance from delivery and mandates corrective action [153,154]. At the research frontier, EU ethics reviews act as procedural bridges when supported by documentation and continuing oversight [139], whereas hybrid AI-augmented review may scale consistency when paired with human deliberation and conflict-of-interest controls [143]. High-stakes areas identify tangible auditables: imaging requires traceable data use, deployment monitoring, and accountability for model updates [48,49,103], and clinical governance prioritizes continuous validation beyond initial clearance [10]. In scholarly communication, disclosure checks and policy enforcement (author statements, tool logging) provide field-specific certification of integrity [56,59,127,129].

7.2. Standards, Benchmarks and Technical Safeguards

Standards make ethics measurable through quantifiable criteria (safety, robustness, fairness, human oversight) and conformance controls suitable for procurement [3]. Comparative mappings and researcher surveys show convergence on core requirements but reveal gaps between preferred technical measures and wider social objectives [1,151,152]. To prevent “measurement myopia”, technical standards must go hand in hand with deliberative review, context-dependent thresholds, and documentation regimes (model/data cards, decision logs) [48,49]. Sectoral and policy commentaries show that standards must also internalize environmental externalities (energy, e-waste, water), embedding lifecycle accounting and disclosure in conformance tests [23,24,150]. Practice-focused guidance on standards and policy alignment for regulated economies emphasizes the need for both cross-domain and cross-jurisdictional harmonization.
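The documentation regimes noted above (model/data cards, decision logs) can be made machine-checkable. The sketch below assumes a hypothetical minimal model card; the required field names, including the environmental-footprint entry, are illustrative choices for this example and are not prescribed by any standard cited in this review.

```python
import json

# Illustrative minimal model card; the required disclosure fields below
# are assumptions for this sketch, not a prescribed standard.
REQUIRED_FIELDS = {
    "intended_use", "training_data_summary", "fairness_evaluation",
    "human_oversight", "energy_and_water_footprint", "update_policy",
}

def conformance_check(card: dict) -> list:
    """Return the sorted list of required disclosures missing from a card."""
    return sorted(REQUIRED_FIELDS - card.keys())

card = {
    "intended_use": "triage support, not autonomous diagnosis",
    "training_data_summary": "de-identified imaging studies, 2015-2023",
    "fairness_evaluation": "subgroup performance reported per site",
    "human_oversight": "clinician sign-off required",
}

missing = conformance_check(card)    # disclosures still to be supplied
record = json.dumps(card, indent=2)  # serialization suitable for a decision log
```

Treating the card as structured data rather than free text lets a procurement or conformance pipeline reject submissions with missing disclosures automatically, in line with the lifecycle-accounting argument above.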

7.3. Compliance, Liability and Enforcement

Hard-law mechanisms establish enforceable boundaries where ethics alone falls short. Rights-based approaches tie AI to human rights obligations, due process, and proportionality [78]; China’s state-led model embeds ethics within techno-industrial policymaking and platform regulation [4,65]; and comparative studies surface differences in legal culture and enforcement style [5]. Ethics-based regulation without binding duties risks “regulatory gifting”, facilitating discretionary power and symbolic compliance [144]. Sectoral regimes exemplify the range of levers: professional standards and liability in medicine and radiology [49,52], adaptive rulemaking for biomedical innovation within funding agencies [141], and proposals coupling legal, ethical, and procedural protections in fast-moving domains [145]. In security and policing, accountability involves audits of human-control mandates, attribution logging, and updated rules of engagement [79,83]. In all contexts, compliance gains legitimacy when accompanied by transparent process, accessible redress, and sustained institutional commitment.

7.4. Concluding Remarks

The shift to governance couples principles with verification and enforcement. Effective architectures interleave independent audits, context-sensitive standards (including environmental criteria), and binding compliance with clear liability and redress. Such architectures must also be robust against performative ethics, workable across diverse legal cultures, and adaptable to distributional harms and model drift, especially where risk falls disproportionately on vulnerable groups [10,18,23,154].

8. Operational Governance Framework

To make the paper’s practical contribution explicit, we consolidate the preceding evidence into an operational governance framework that organizations may adapt to their sectoral context. The framework is derived from evidence synthesis rather than empirical pilot testing; it should therefore be treated as a structured starting point for implementation rather than a validated protocol. The framework translates the four meta-dimensions (trust/transparency, bias/fairness, governance/regulation, and justice) into lifecycle controls, decision rights, and verifiable outputs (see Table 9).

8.1. Framework Logic

The framework is organized in three coupled layers that must be implemented together:
  • Normative layer: rights, plural ethics, and sectoral obligations define non-negotiable constraints and legitimacy conditions (including intercultural and Global South contextual requirements) [20,21,78,146].
  • Assurance layer: measurable controls, audits, documentation, and incident pathways operationalize ethics in routine practice [17,18,139].
  • Enforcement layer: liability, regulatory supervision, procurement requirements, and redress mechanisms create binding accountability [4,5,144].

8.2. Decision Rights and Escalation

To avoid symbolic governance, the framework requires explicit decision rights at each stage: who can approve deployment, trigger rollback, notify affected users, and own remediation. In high-stakes domains, it further mandates that deployment authority and assurance authority be institutionally separated to reduce conflicts of interest [19,153,154]. A practical minimum for operationalizing this is a two-channel escalation model that pairs technical escalation for safety or performance failures with rights-based escalation for fairness, due-process, and dignity-related harms.
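The two-channel escalation model can be sketched as a simple routing rule. The incident categories, owner roles, and channel names below are hypothetical examples chosen for illustration; the one design choice taken from the text is that, in high-stakes settings, rollback authority sits with assurance rather than with the team that approved deployment.

```python
from dataclasses import dataclass

# Sketch of two-channel escalation. Category and role names are
# hypothetical; only the assurance/delivery separation follows the text.
TECHNICAL = {"safety_failure", "performance_drift", "security_breach"}
RIGHTS = {"unfair_outcome", "due_process_violation", "dignity_harm"}

@dataclass
class Incident:
    category: str
    high_stakes: bool

def route(incident: Incident) -> dict:
    """Route an incident to an escalation channel and accountable owner."""
    if incident.category in TECHNICAL:
        channel, owner = "technical", "engineering_on_call"
    elif incident.category in RIGHTS:
        channel, owner = "rights", "ethics_assurance_board"
    else:
        channel, owner = "triage", "governance_office"
    # High-stakes rollback authority belongs to assurance, never to the
    # unit that approved deployment (conflict-of-interest separation).
    rollback_authority = "assurance" if incident.high_stakes else "delivery"
    return {"channel": channel, "owner": owner,
            "rollback_authority": rollback_authority}

decision = route(Incident("due_process_violation", high_stakes=True))
```

Encoding the routing rule explicitly also makes it auditable: the mapping from harm type to accountable owner is itself a reviewable artifact rather than tacit practice.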

8.3. Cross-Sector and Cross-Cultural Adaptation Rules

The framework is intentionally modular: the same backbone applies across health, education, media, business, law, security, and public services, while thresholds and safeguards are calibrated to risk and context. Cross-cultural adaptation is treated as a design constraint, not an afterthought. Multilingual transparency, participatory review with affected communities, and accessible redress pathways (including low-resource and offline channels) are required for legitimate deployment in diverse institutional settings [22,89,92,133].

8.4. What This Framework Contributes

This framework consolidates material that was previously distributed across sections into a coherent contribution; it links principles to operational controls, links controls to accountable outputs, and links outputs to enforceable governance. In this way, it provides an implementable template for organizations seeking to move from ethics declarations to auditable, adaptive, and context-sensitive AI governance.

8.5. Limitations

Several limitations of this review should be acknowledged. First, the primary search was restricted to Scopus; despite backward/forward citation chasing, relevant work indexed only in other databases may have been missed. Second, the corpus was limited to English-language publications, which may underrepresent perspectives from non-Anglophone scholarly traditions, particularly those from the Global South. Third, although inter-rater reliability was substantial (κ = 0.78), thematic coding of a broad and heterogeneous literature inevitably involves interpretive judgment, and some boundary cases may have been classified differently by other reviewers. Fourth, the temporal scope (2019–2025) excludes earlier foundational work from systematic analysis; while key pre-2019 publications are referenced for conceptual anchoring, evolving debates that predate the window are not comprehensively traced. Finally, the operational governance framework proposed in Section 8 is synthesized from the reviewed evidence rather than empirically validated through organizational pilots, and its practical utility remains to be tested.

9. Conclusions

This review shows a clear maturation pattern in AI ethics between 2019 and 2025. The field has moved from principle articulation to institutionalization pressures. Across sectors, the same structural lesson recurs: ethics has practical force only when coupled with lifecycle governance, auditable controls, and enforceable accountability. In that sense, the central analytical finding is not simply that domains differ but that they differ within a shared governance logic: high-stakes deployments require explicit trade-off management among transparency, fairness, operational feasibility, and rights protection.
A second synthesis insight concerns the limits of technicalism. Benchmarking, explainability, and fairness metrics are necessary, but insufficient on their own; they do not resolve distributive harms, legitimacy deficits, or responsibility gaps under institutional complexity. The evidence therefore supports a layered model in which normative commitments, operational assurance, and legal enforcement are co-dependent rather than sequential alternatives. Sectoral variation changes thresholds and instruments, not the need for all three layers.
A third contribution is the practical implication of plural ethics. Intercultural perspectives (including Islamic, African, and broader Global South traditions) are most useful when translated into governance design choices, participatory oversight, context-sensitive harm definitions, multilingual transparency, and accessible redress, rather than treated as abstract normative add-ons. This strengthens both legitimacy and implementation quality, especially in uneven-capacity settings.
What remains unresolved is substantial. First, regulatory fragmentation continues to generate uneven protections and forum-shopping incentives across jurisdictions. Second, post-deployment governance is still weaker than pre-deployment review in many organizations, leaving monitoring, incident response, and remedy under-specified. Third, evidence on long-term social outcomes remains limited: many studies document procedures and intentions, but fewer evaluate whether harms are actually reduced over time. Fourth, emergent AI trajectories (generative systems at scale, multimodal autonomy, neuro-AI interfaces) intensify unresolved questions around authorship, liability, cognitive autonomy, and environmental externalities.
The resulting agenda is therefore diagnostic and constructive: shift evaluation from policy existence to governance performance; embed continuous, rights-sensitive monitoring across the full lifecycle; and align technical conformance with justice, sustainability, and institutional accountability. Progress will depend less on adding new ethical principles and more on building durable socio-technical institutions capable of learning, correcting, and enforcing under real-world constraints.

Author Contributions

Conceptualization, C.M.L. and N.F.; methodology, S.K.; validation, Y.D.; formal analysis, Y.D.; investigation, C.M.L.; resources, Y.D.; data curation, N.F.; writing—original draft preparation, N.F.; writing—review and editing, S.K.; visualization, C.M.L.; supervision, S.K.; project administration, Y.D.; funding acquisition, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the European Union through the Competitiveness Programme (ESPA 2021–2027) under the project easyHPC@eco.plastics.industry (MIS: 6001593).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data supporting this review are publicly available as published papers, in either open-access or subscription form.

Acknowledgments

The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ryan, M.; Stahl, B. Artificial Intelligence ethics guidelines for developers and users: Clarifying their content and normative implications. J. Inf. Commun. Ethics Soc. 2020, 19, 61–86. [Google Scholar] [CrossRef]
  2. Larsson, S. On the Governance of Artificial Intelligence through Ethics Guidelines. Asian J. Law Soc. 2020, 7, 437–451. [Google Scholar] [CrossRef]
  3. Hickman, E. Trustworthy AI and Corporate Governance: The EU Ethics Guidelines for Trustworthy Artificial Intelligence from a Company Law Perspective. Eur. Bus. Organ. Law Rev. 2021, 22, 593–625. [Google Scholar] [CrossRef]
  4. Roberts, H.; Cowls, J.; Morley, J.; Taddeo, M.; Wang, V. The Chinese approach to Artificial Intelligence: An analysis of policy, ethics, and regulation. AI Soc. 2020, 36, 59–77. [Google Scholar] [CrossRef]
  5. Timoteo, M.; Verri, B. Ethics Guidelines for Artificial Intelligence: Comparing the European and Chinese Approaches. China WTO Rev. 2021, 7, 305–330. [Google Scholar] [CrossRef]
  6. Saheb, T. Mapping Ethical Artificial Intelligence Policy Landscape: A Mixed Method Analysis. Sci. Eng. Ethics 2024, 30, 9. [Google Scholar] [CrossRef]
  7. Murphy, K.; Di Ruggiero, E.; Upshur, R.; Willison, D.J.; Malhotra, N.; Cai, J.C.; Malhotra, N.; Lui, V. Artificial Intelligence for good health: A scoping review of the ethics literature. BMC Med. Ethics 2021, 22, 14. [Google Scholar] [CrossRef]
  8. Savulescu, J.; Giubilini, A.; Vandersluis, R. Ethics of Artificial Intelligence in medicine. Singap. Med. J. 2024, 65, 150–158. [Google Scholar] [CrossRef]
  9. Federico, C.A.; Trotsyuk, A. Biomedical Data Science, Artificial Intelligence, and Ethics: Navigating Challenges in the Face of Explosive Growth. Annu. Rev. Biomed. Data Sci. 2024, 7, 1–14. [Google Scholar] [CrossRef]
  10. McCradden, M.; Hui, K.; Buchman, D. Evidence, ethics and the promise of Artificial Intelligence in psychiatry. J. Med. Ethics 2022, 49, 573–579. [Google Scholar] [CrossRef]
  11. Bond, M.; Khosravi, H.; De Laat, M.; Bergdahl, N.; Negrea, V.; Oxley, E.; Pham, P.; Chong, S.W. A meta systematic review of Artificial Intelligence in higher education: A call for increased ethics, collaboration, and rigour. Int. J. Educ. Technol. High. Educ. 2024, 21, 4. [Google Scholar] [CrossRef]
  12. Mouta, A.; Pinto-Llorente, A.M. Uncovering Blind Spots in Education Ethics: Insights from a Systematic Literature Review on Artificial Intelligence in Education. Int. J. Artif. Intell. Educ. 2023, 34, 1166–1205. [Google Scholar] [CrossRef]
  13. Chang, X.; Wong, G.K. A systematic review of how educators integrate ethics into artificial intelligence curriculum. J. Res. Technol. Educ. 2025, 1–18. [Google Scholar] [CrossRef]
  14. Yu, L. Qualitative and quantitative analyses of Artificial Intelligence ethics in education using VOSviewer and CitNetExplorer. Front. Psychol. 2023, 14, 1061778. [Google Scholar] [CrossRef]
  15. Mori, M.; Sassetti, S.; Cavaliere, V. A systematic literature review on Artificial Intelligence in recruiting and selection: A matter of ethics. Pers. Rev. 2024, 54, 854–878. [Google Scholar] [CrossRef]
  16. Kulkarni, A.V.; Joseph, S.; Patil, K. Artificial Intelligence technology readiness for social sustainability and business ethics: Evidence from MSMEs in developing nations. Int. J. Inf. Manag. Data Insights 2024, 4, 100250. [Google Scholar] [CrossRef]
  17. Vakkuri, V.; Kemell, K.; Kultanen, J. The Current State of Industrial Practice in Artificial Intelligence Ethics. Proc. IEEE Softw. 2020, 37, 50–57. [Google Scholar] [CrossRef]
  18. Schiff, D.S.; Kelley, S. The emergence of Artificial Intelligence ethics auditing. Big Data Soc. 2024, 11, 20539517241299732. [Google Scholar] [CrossRef]
  19. Peterson, C. Embedding Ethics into Artificial Intelligence: Understanding What Can Be Done, What Can’t, and What Is Done. Int. Flairs Conf. Proc. 2024, 37. [Google Scholar] [CrossRef]
  20. Norren, D. The ethics of Artificial Intelligence, UNESCO and the African Ubuntu perspective. J. Inf. Commun. Ethics Soc. 2022, 21, 112–128. [Google Scholar] [CrossRef]
  21. Ghaly, M. What Makes Work Good in the Age of Artificial Intelligence (AI)? Islamic Perspectives on AI-Mediated Work Ethics. J. Ethics 2023, 28, 429–453. [Google Scholar] [CrossRef]
  22. Gozum, I.E.A.; Flake, C.C. Human Dignity and Artificial Intelligence in Healthcare: A Basis for a Catholic Ethics on AI. J. Relig. Health 2024. [Google Scholar] [CrossRef] [PubMed]
  23. Baum, S.D. Artificial Intelligence Needs Environmental Ethics. Ethics Policy Environ. 2022, 26, 139–143. [Google Scholar] [CrossRef]
  24. Lehuedé, S. An elemental ethics for Artificial Intelligence: Water as resistance within AI value chain. AI Soc. 2024, 40, 1761–1774. [Google Scholar] [CrossRef]
  25. Rathkopf, C. Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics. Camb. Q. Healthc. Ethics 2023, 33, 333–345. [Google Scholar] [CrossRef]
  26. Slota, S.C.; Fleischmann, K.R.; Greenberg, S.; Verma, N.; Cummings, B.; Li, L. Locating the work of Artificial Intelligence ethics. J. Assoc. Inf. Sci. Technol. 2022, 74, 311–322. [Google Scholar] [CrossRef]
  27. Nath, R. From posthumanism to ethics of Artificial Intelligence. AI Soc. 2021, 38, 185–196. [Google Scholar] [CrossRef]
  28. Gunkel, D. Duty Now and for the Future: Communication, Ethics and Artificial Intelligence. J. Media Ethics 2023, 38, 198–210. [Google Scholar] [CrossRef]
  29. Bostrom, N.; Yudkowsky, E. The Ethics of Artificial Intelligence. In The Cambridge Handbook of Artificial Intelligence; Frankish, K., Ramsey, W.M., Eds.; Cambridge University Press: Cambridge, UK, 2014; pp. 316–334. [Google Scholar] [CrossRef]
  30. Floridi, L. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities; Oxford University Press: Oxford, UK, 2023. [Google Scholar] [CrossRef]
  31. Etzioni, A.; Etzioni, O. Incorporating ethics into Artificial Intelligence. J. Ethics 2017, 21, 403–418. [Google Scholar] [CrossRef]
  32. Dignum, V. Ethics in Artificial Intelligence: Introduction to the special issue. Ethics Inf. Technol. 2018, 20, 1–3. [Google Scholar] [CrossRef]
  33. Liao, S.M. (Ed.) Ethics of Artificial Intelligence; Oxford University Press: Oxford, UK, 2020. [Google Scholar]
  34. Müller, V.C. Ethics of Artificial Intelligence and Robotics. In The Stanford Encyclopedia of Philosophy, Fall 2023 ed.; Zalta, E.N., Nodelman, U., Eds.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2023. [Google Scholar]
  35. Boddington, P. Towards a Code of Ethics for Artificial Intelligence; Artificial Intelligence: Foundations, Theory, and Algorithms; Springer: Cham, Switzerland, 2017. [Google Scholar] [CrossRef]
  36. Yu, H.; Shen, Z.; Miao, C.; Leung, C.; Lesser, V.R.; Yang, Q. Building ethics into Artificial Intelligence. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 5527–5533. [Google Scholar]
  37. Russell, S.; Hauert, S.; Altman, R.; Veloso, M. Robotics: Ethics of Artificial Intelligence. Nature 2015, 521, 415–418. [Google Scholar] [CrossRef]
  38. Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
  39. Hagendorff, T. The ethics of AI ethics: An evaluation of guidelines. Minds Mach. 2020, 30, 99–120. [Google Scholar] [CrossRef]
  40. Coeckelbergh, M. AI Ethics; MIT Press Essential Knowledge Series; MIT Press: Cambridge, MA, USA, 2020. [Google Scholar]
  41. Kazim, E.; Koshiyama, A.S. A high-level overview of AI ethics. Patterns 2021, 2, 100314. [Google Scholar] [CrossRef] [PubMed]
  42. Huang, C.; Zhang, Z.; Mao, B.; Yao, X. An overview of Artificial Intelligence ethics. IEEE Trans. Artif. Intell. 2022, 4, 799–819. [Google Scholar] [CrossRef]
  43. Siau, K.; Wang, W. Artificial Intelligence (AI) ethics: Ethics of AI and ethical AI. J. Database Manag. 2020, 31, 74–87. [Google Scholar] [CrossRef]
  44. Borenstein, J.; Howard, A. Emerging challenges in AI and the need for AI ethics education. AI Ethics 2021, 1, 61–65. [Google Scholar] [CrossRef]
  45. Munn, L. The uselessness of AI ethics. AI Ethics 2023, 3, 869–877. [Google Scholar] [CrossRef]
  46. Heilinger, J.C. The Ethics of AI Ethics: A Constructive Critique. Philos. Technol. 2022, 35, 61. [Google Scholar] [CrossRef]
  47. Morley, J.; Elhalal, A.; Garcia, F.; Kinsey, L.; Mökander, J.; Floridi, L. Ethics as a service: A pragmatic operationalisation of AI ethics. Minds Mach. 2021, 31, 239–256. [Google Scholar] [CrossRef]
  48. Geis, J.R.; Brady, A.P.; Wu, C.C.; Spencer, J.; Ranschaert, E.; Jaremko, J.L.; Langer, S.G.; Kitts, A.B.; Birch, J.; Shields, W.F.; et al. Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement. J. Am. Coll. Radiol. 2019, 16, 1516–1521. [Google Scholar] [CrossRef]
  49. Larson, D.B.; Magnus, D.C.; Lungren, M.P.; Shah, N.H.; Langlotz, C. Ethics of Using and Sharing Clinical Imaging Data for Artificial Intelligence: A Proposed Framework. Radiology 2020, 295, 675–682. [Google Scholar] [CrossRef] [PubMed]
  50. Wynn, M. The ethics of non-explainable Artificial Intelligence: An overview for clinical nurses. Br. J. Nurs. 2025, 34, 294–297. [Google Scholar] [CrossRef] [PubMed]
  51. Ho, C.K.; Asrani, S. Ethics, Bias, and Governance in Artificial Intelligence for Hepatology: Toward Building a Safe and Fair Future. J. Clin. Exp. Hepatol. 2025, 15, 102628. [Google Scholar] [CrossRef] [PubMed]
  52. Kenny, L.M.; Nevin, M. Ethics and standards in the use of Artificial Intelligence in medicine on behalf of the Royal Australian and New Zealand College of Radiologists. J. Med Imaging Radiat. Oncol. 2021, 65, 486–494. [Google Scholar] [CrossRef]
  53. Abdullah, Y.I.; Schuman, J.S.; Shabsigh, R.; Caplan, A.; Al-Aswad, L. Ethics of Artificial Intelligence in Medicine and Ophthalmology. Asia-Pac. J. Ophthalmol. 2021, 10, 289–298. [Google Scholar] [CrossRef]
  54. Prochaska, M. Artificial Intelligence, ethics, and hospital medicine: Addressing challenges to ethical norms and patient centered care. J. Hosp. Med. 2024, 19, 1194–1196. [Google Scholar] [CrossRef]
  55. Park, S. Ethics for Artificial Intelligence: Focus on the Use of Radiology Images. J. Korean Soc. Radiol. 2022, 83, 759. [Google Scholar] [CrossRef]
  56. Wiwanitkit, S. Artificial Intelligence, Academic Publishing, Scientific Writing, Peer Review, and Ethics. Braz. J. Cardiovasc. Surg. 2024, 39, e20230377. [Google Scholar] [CrossRef]
  57. Lund, B.D.; Wang, T.; Mannuru, N.R.; Nie, B.; Shimray, S. ChatGPT and a new academic reality: Artificial Intelligence written research papers and the ethics of the large language models in scholarly publishing. J. Assoc. Inf. Sci. Technol. 2023, 74, 570–581. [Google Scholar] [CrossRef]
  58. FAHMINA, M.U. Exclusion within Inclusive Education Policy: The Challenges Facing Urban Refugees in Accessing Education in Thailand. Indones. Law Rev. 2024, 14, 6. [Google Scholar] [CrossRef]
  59. Lei, F.; Du, L.; Wang, W.; Dong, M. Effect of generative Artificial Intelligence on academic publishing ethics: The role of user agreements. J. Inf. Sci. 2025. [Google Scholar] [CrossRef]
  60. Arriagada Bruneau, G. ¿Cómo integrar la ética de la inteligencia artificial en el currículo? Análisis y recomendaciones desde el feminismo de la ciencia y de datos. Rev. Filos. 2024, 81, 137–160. [Google Scholar] [CrossRef]
  61. Deptula, A.; Hunter, P.T. Rhetorics of Authenticity: Ethics, Ethos, and Artificial Intelligence. J. Bus. Tech. Commun. 2024, 39, 51–74. [Google Scholar] [CrossRef]
  62. Zeb, A.; Rehman, F.U.; Bin Othayman, M. Artificial Intelligence and ChatGPT are fostering knowledge sharing, ethics, academia and libraries. Int. J. Inf. Learn. Technol. 2024, 42, 67–83. [Google Scholar] [CrossRef]
  63. Caliskan, A. Artificial Intelligence, Bias, and Ethics. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Macao, China, 19–25 August 2023; pp. 7007–7013. [Google Scholar]
  64. Caneda, B. Ethics and journalistic challenges in the age of artificial intelligence: Talking with professionals and experts. Front. Commun. 2024, 9, 1465178. [Google Scholar] [CrossRef]
  65. Cao, J. Ethics and governance of Artificial Intelligence in digital China: Evidence from online survey and social media data. Chin. J. Sociol. 2025, 11, 58–89. [Google Scholar] [CrossRef]
  66. Hayes, C. Law and Ethics of Generative Artificial Intelligence and Copyright. In Advances in Information and Communication; Springer: Berlin/Heidelberg, Germany, 2024; pp. 576–591. [Google Scholar]
  67. Hitti, S. Balancing innovation and ethics: The role of Artificial Intelligence in transforming B2B customer experience. Compet. Rev. Int. Bus. J. 2025, 35, 772–793. [Google Scholar] [CrossRef]
  68. Fukukawa, K.; Trivedi, R. Empathy, Ethics and Efficacy: The 3Es of Implementing Artificial Intelligence for Consumer Encounters. Psychol. Mark. 2025, 42, 2352–2368. [Google Scholar] [CrossRef]
  69. Chen, Z. Ethics and discrimination in Artificial Intelligence-enabled recruitment practices. Humanit. Soc. Sci. Commun. 2023, 10, 567. [Google Scholar] [CrossRef]
  70. Schuster, T. Maturity of Artificial Intelligence in SMEs: Privacy and Ethics Dimensions. Collab. Netw. Digit. Soc. 2022, 5, 274–286. [Google Scholar]
  71. Mahmoudian, H. Ethics and data governance in marketing analytics and artificial intelligence. Appl. Mark. Anal. Peer-Rev. J. 2021, 7, 17. [Google Scholar] [CrossRef]
  72. Xie, X.; Gu, K. The impact of technology ethics governance on the development of corporate Artificial Intelligence: A quasi-natural experiment based on technology ethics review. Financ. Res. Lett. 2025, 86, 108357. [Google Scholar] [CrossRef]
  73. Baldassarre, M.T.; Caivano, D.; Nieto, B. Ethics-Driven Incentives: Supporting Government Policies for Responsible Artificial Intelligence Innovation. Proc. IEEE Intell. Syst. 2025, 40, 55–63. [Google Scholar] [CrossRef]
  74. El Hassani, M. Leveraging Artificial Intelligence for Sustainable Marketing: The Mediating Role of Corporate Social Responsibility and Ethics. In Proceedings of the International Conference on Advanced Intelligent Systems for Sustainable Development (AI2SD 2024); Springer: Berlin/Heidelberg, Germany, 2025; pp. 912–921. [Google Scholar]
  75. Ryan, M. In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Sci. Eng. Ethics 2020, 26, 2749–2767. [Google Scholar] [CrossRef]
  76. Westerstrand, S. Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence. Sci. Eng. Ethics 2024, 30, 46. [Google Scholar] [CrossRef]
  77. Kieslich, K.; Keller, B. Artificial Intelligence ethics by design: Evaluating public perception on the importance of ethical design principles of Artificial Intelligence. Big Data Soc. 2022, 9. [Google Scholar] [CrossRef]
  78. Sartor, G. Artificial Intelligence and human rights: Between law and ethics. Maastricht J. Eur. Comp. Law 2020, 27, 705–719. [Google Scholar] [CrossRef]
  79. Harris, H. Artificial Intelligence, Policing and Ethics—A best practice model for AI enabled policing in Australia. In Proceedings of the 2021 IEEE 25th International Enterprise Distributed Object Computing Workshop (EDOCW); IEEE: New York, NY, USA, 2021; pp. 53–58. [Google Scholar]
  80. Roy, K. Artificial Intelligence, Warfare and Ethics in India. J. Mil. Ethics 2024, 23, 103–116. [Google Scholar] [CrossRef]
  81. Grady, K.L.; Harbour, S.D.; Abballe, A.R. Trust, Ethics, Consciousness, and Artificial Intelligence. In Proceedings of the 2022 IEEE/AIAA 41st Digital Avionics Systems Conference (DASC), Portsmouth, VA, USA, 18–22 September 2022; pp. 1–9. [Google Scholar]
  82. Rowe, N. The comparative ethics of artificial-intelligence methods for military applications. Front. Big Data 2022, 5, 991759. [Google Scholar] [CrossRef]
  83. Koch, J. On Digital Ethics for Artificial Intelligence and Information Fusion in the Defense Domain. Proc. IEEE Aerosp. Electron. Syst. Mag. 2021, 36, 94–111. [Google Scholar] [CrossRef]
  84. Chauhan, S. Standards, Ethics, Legal Implications & Challenges of Artificial Intelligence. In Proceedings of the 2022 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM); IEEE: New York, NY, USA, 2022; pp. 1048–1052. [Google Scholar]
  85. Malmio, I. Ethics as an enabler and a constraint—Narratives on technology development and Artificial Intelligence in military affairs through the case of Project Maven. Technol. Soc. 2023, 72, 102193. [Google Scholar] [CrossRef]
  86. Bradley, F. Representation of Libraries in Artificial Intelligence Regulations and Implications for Ethics and Practice. J. Aust. Libr. Inf. Assoc. 2022, 71, 189–200. [Google Scholar] [CrossRef]
  87. Michalak, R. From Ethics to Execution: The Role of Academic Librarians in Artificial Intelligence (AI) Policy-Making at Colleges and Universities. J. Libr. Adm. 2023, 63, 928–938. [Google Scholar] [CrossRef]
  88. Shaw, J.; Ali, J.; Atuire, C.A.; Cheah, P.Y.; Gichoya, J.W.; Hunt, A.; Jjingo, D.; Littler, K.; Paolotti, D. Research ethics and Artificial Intelligence for global health: Perspectives from the global forum on bioethics in research. BMC Med. Ethics 2024, 25, 46. [Google Scholar] [CrossRef] [PubMed]
  89. Mehta, A.; Nancy, N.; Sonkala, S.; Mishra, A. Balancing Act: Navigating the Ethics and Governance of Artificial Intelligence in Healthcare and WHO’s Role in Shaping the Future. Int. Tinnitus J. 2024, 28, 65–69. [Google Scholar] [CrossRef]
  90. Maris, M.T.; Willems, D.L.; Pols, J.; Tan, H.L.; Lindinger, G.L.; Bak, M.A. Correction: Ethical use of Artificial Intelligence to prevent sudden cardiac death: An interview study of patient perspectives. BMC Med. Ethics 2024, 25, 42. [Google Scholar] [CrossRef]
  91. Souza, R.; Surapaneni, K.M.; Regupathy, A.; Mathew, M.; Mishra, V.; Kalaimathi, A.G.; Sekkizhar, G.; Tandon, R.; Louis Palatty, P. Convergence of Diverse Expertise: A Multidisciplinary Training on the Ethics of Artificial Intelligence in Healthcare Technology and Research. J. Acad. Ethics 2024, 23, 885–899. [Google Scholar]
  92. Omorogiuwa, T.B.E.; Mugumbate, R.; Harms-Smith, L.; Naami, A. Ethical and transparent use of generative Artificial Intelligence (AI): Ethics letter three (3) from the African Independent Ethics Committee (AIEC). Afr. J. Soc. Work. 2025, 15, 100–103. [Google Scholar] [CrossRef]
  93. Rashidian, N.; Abu Hilal, M.; Frigerio, I.; Guerra, M.; Sterckx, S.; Tozzi, F.; Capelli, G.; Verdi, D.; Spolverato, G.; Gulla, A.; et al. Ethics and trustworthiness of Artificial Intelligence in Hepato-Pancreato-Biliary surgery: A snapshot of insights from the European-African Hepato-Pancreato-Biliary Association (E-AHPBA) survey. HPB 2025, 27, 502–510. [Google Scholar] [CrossRef]
  94. Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019, 366, 447–453. [Google Scholar] [CrossRef] [PubMed]
  95. Green, C. Ethics, Artificial Intelligence, and Critical Care Nursing. Am. J. Crit. Care 2025, 34, 404–406. [Google Scholar] [CrossRef] [PubMed]
  96. Paladino, M. Artificial Intelligence in nursing care: A reflection from the ethics of care. Nurs. Ethics 2025, 32, 2477–2489. [Google Scholar] [CrossRef] [PubMed]
  97. Agarwal, A. Ethics of using generative pretrained transformer and artificial intelligence systems for patient prior authorizations. J. Am. Acad. Dermatol. 2024, 90, 1121–1122. [Google Scholar] [CrossRef]
  98. Arora, T.; Muhammad-Kamal, H. Preserving medical ethics in the era of Artificial Intelligence: Challenges and opportunities in neonatology. Semin. Perinatol. 2025, 49, 152100. [Google Scholar] [CrossRef]
  99. Biller-Andorno, N.; Ferrario, A. In Search of a Mission: Artificial Intelligence in Clinical Ethics. Am. J. Bioeth. 2022, 22, 23–25. [Google Scholar] [CrossRef]
  100. Collins, B.X.; Bhatia, S.; Fanning, J. Adapting Clinical Ethics Consultations to Address Ethical Issues of Artificial Intelligence. J. Clin. Ethics 2025, 36, 167–183. [Google Scholar] [CrossRef]
  101. De Simone, B.; Deeken, G. Balancing Ethics and Innovation: Can Artificial Intelligence Safely Transform Emergency Surgery? A Narrative Perspective. J. Clin. Med. 2025, 14, 3111. [Google Scholar] [CrossRef]
  102. Kleebayoon, A. Ethics for Artificial Intelligence use in clinical pharmacology. Indian J. Pharmacol. 2024, 56, 224–225. [Google Scholar] [CrossRef]
  103. Mello-Thoms, C.; Mello, C.A. Clinical applications of Artificial Intelligence in radiology. Br. J. Radiol. 2023, 96, 20221031. [Google Scholar] [CrossRef]
  104. Pratiwi, H. Between Shortcut and Ethics: Navigating the Use of Artificial Intelligence in Academic Writing Among Indonesian Doctoral Students. Eur. J. Educ. 2025, 60, e70083. [Google Scholar] [CrossRef]
  105. Rogers, W.A.; Draper, H.; Carter, S. Evaluation of Artificial Intelligence clinical applications: Detailed case analyses show value of healthcare ethics approach in identifying patient care issues. Bioethics 2021, 35, 623–633. [Google Scholar] [CrossRef] [PubMed]
  106. Stokes, F. Artificial Intelligence and Robotics in Nursing: Ethics of Caring as a Guide to Dividing Tasks Between AI and Humans. Nurs. Philos. 2020, 21, e12306. [Google Scholar] [CrossRef] [PubMed]
  107. Hossmann, S.; Ballhausen, H. Framework of Artificial Intelligence in diabetology. Die Diabetol. 2025, 21, 687–694. [Google Scholar] [CrossRef]
  108. Tiribelli, S.; Monnot, A.; Shah, S.F.H.; Arora, A.; Toong, P.J. Ethics Principles for Artificial Intelligence-Based Telemedicine for Public Health. Am. J. Public Health 2023, 113, 577–584. [Google Scholar] [CrossRef]
  109. Kerasidou, A. Ethics of Artificial Intelligence in global health: Explainability, algorithmic bias and trust. J. Oral Biol. Craniofacial Res. 2021, 11, 612–614. [Google Scholar] [CrossRef]
  110. Eaton, S. Postplagiarism: Transdisciplinary ethics and integrity in the age of Artificial Intelligence and neurotechnology. Int. J. Educ. Integr. 2023, 19, 23. [Google Scholar] [CrossRef]
  111. Hooper, K.; Lunn, S. Values in Education: Exploration of Artificial Intelligence Ethics Syllabi Using Natural Language Processing Analyses. In Proceedings of the 2024 IEEE Frontiers in Education Conference (FIE), Washington, DC, USA, 13–16 October 2024; pp. 1–8. [Google Scholar]
  112. Jabar, M.; Chiong-Javier, E. Qualitative ethical technology assessment of Artificial Intelligence (AI) and the internet of things (IoT) among filipino Gen Z members: Implications for ethics education in higher learning institutions. Asia Pac. J. Educ. 2024, 45, 1344–1358. [Google Scholar] [CrossRef]
  113. Ekmekçi, P.E. Reflections on the Ethics Guideline for using Generative Artificial Intelligence in Scientific Research and Publication Process of Higher Education Institutions. Balk. Med. J. 2024, 42, 174. [Google Scholar]
  114. Ferreira de Menezes, J.B.; Cechinel, C.; Queiroga, E.M.; Ramos, V.; Primo, T.T.; Carvalho Nunes, J. Ethics, Big Data and Artificial Intelligence: Exploring Academic Works in the Educational Landscape. In Proceedings of the 18th Latin American Conference on Learning Technologies (LACLO 2023), Cuenca, Ecuador, 18–20 October 2023; pp. 38–48. [Google Scholar]
  115. Guerrero, M.J.; Alier-Forment, M.; Pereira-Varela, J. The ethics of generative Artificial Intelligence in education under debate: Perspectives from development, theory, practice and a case study. Rev. Española Pedagog. 2025, 83, 281–293. [Google Scholar]
  116. Laflamme, A.S. Redefining Academic Integrity in the Age of Generative Artificial Intelligence: The Essential Contribution of Artificial Intelligence Ethics. J. Sch. Publ. 2025, 56, 481–509. [Google Scholar] [CrossRef]
  117. Sanusi, I.T.; Olaleye, S. An Insight into Cultural Competence and Ethics in K-12 Artificial Intelligence Education. In Proceedings of the 2022 IEEE Global Engineering Education Conference (EDUCON), Tunis, Tunisia, 28–31 March 2022; pp. 790–794. [Google Scholar]
  118. Slimi, Z. Navigating the Ethical Challenges of Artificial Intelligence in Higher Education: An Analysis of Seven Global AI Ethics Policies. TEM J. 2023, 12, 590–602. [Google Scholar] [CrossRef]
  119. Koontz, M. Integrating Generative Artificial Intelligence in Midwifery Education: Balancing Innovation, Ethics, and Academic Integrity. J. Midwifery Womens Health 2025, 70, 946–951. [Google Scholar] [CrossRef] [PubMed]
  120. Hillis, C.; Bhattacharjee, M.; AlMousawi, B.; Eltanahy, T.; Ono, S.; Hui, M.; Pham, B.; Swab, M.; Cormack, G.V.; Grossman, M.R.; et al. Teaching postsecondary students about the ethics of artificial intelligence: A scoping review protocol. PLoS ONE 2025, 20, e0329020. [Google Scholar] [CrossRef] [PubMed]
  121. Tschoppe, N.; Katsarov, J.W.; Drews, P. Conveying the Ethics of Artificial Intelligence in K-12 and Academia: A Systematic Review of Teaching Methods. In Proceedings of the 57th Hawaii International Conference on System Sciences, Waikoloa, HI, USA, 7–10 January 2025. [Google Scholar]
  122. Kim, C.S.; Samaniego, C.S.; Sousa Melo, S.L.; Brachvogel, W.A.; Baskaran, K. Artificial Intelligence (A.I.) in dental curricula: Ethics and responsible integration. J. Dent. Educ. 2023, 87, 1570–1573. [Google Scholar] [CrossRef]
  123. Mamani Quispe, D.J.; Lazarte Vera, E.A.; Higueras Matos, M.M.; Moscoso Barrios, J. The Ethics of Artificial Intelligence in the development of research competencies in university students. Eur. Public Soc. Innov. Rev. 2025, 10, 1–15. [Google Scholar] [CrossRef]
  124. Reza Flores, R.A.; Reza-Flores, C.M.; Galafassi, C.; Acosta-Ochoa, A.; Vicari, R. Artificial Intelligence and Students: An Overview from Teaching-Learning, Ethics-Morality, Emotions, Training, Cognition-Creativity, Social Construct, Recreation-Entertainment. J. Pedagog. 2025, 16, 42–68. [Google Scholar] [CrossRef]
  125. Asiksoy, G. An Investigation of University Students’ Attitudes Towards Artificial Intelligence Ethics. Int. J. Eng. Pedagog. (IJEP) 2024, 14, 153–169. [Google Scholar] [CrossRef]
  126. Cheng, I.; Lee, S. The Impact of Ethics Instruction and Internship on Students’ Ethical Perceptions About Social Media, Artificial Intelligence, and ChatGPT. J. Media Ethics 2024, 39, 114–129. [Google Scholar] [CrossRef]
  127. Hosseini, M.; Resnik, D.B. The ethics of disclosing the use of Artificial Intelligence tools in writing scholarly manuscripts. Res. Ethics 2023, 19, 449–465. [Google Scholar] [CrossRef]
  128. Pearson, G. Artificial Intelligence and Publication Ethics. J. Am. Psychiatr. Nurses Assoc. 2024, 30, 453–455. [Google Scholar] [CrossRef] [PubMed]
  129. Kocak, Z. Publication Ethics in the Era of Artificial Intelligence. J. Korean Med Sci. 2024, 39, e249. [Google Scholar] [CrossRef] [PubMed]
  130. Hogan, N.R. Generative Artificial Intelligence in Violence Risk Assessment: Emerging Technology and the Ethics of the Inevitable. Behav. Sci. Law 2025, 43, 606–615. [Google Scholar] [CrossRef] [PubMed]
  131. Marshall, D.T.; Naff, D. The Ethics of Using Artificial Intelligence in Qualitative Research. J. Empir. Res. Hum. Res. Ethics 2024, 19, 92–102. [Google Scholar] [CrossRef]
  132. Duckett, J.; Westrick, N. Exploring the use, adoption, and ethics of generative artificial intelligence in the public relations and communication professions. Commun. Teach. 2024, 39, 33–41. [Google Scholar] [CrossRef]
  133. Qiao-Franco, G. China’s Artificial Intelligence Ethics: Policy Development in an Emergent Community of Practice. J. Contemp. China 2022, 33, 189–205. [Google Scholar] [CrossRef]
  134. Narang, N. Mentor Musings on Standards, Regulations & Policies Imperatives for Ethics & Governance of Artificial Intelligence. IEEE Internet Things Mag. 2025, 8, 4–9. [Google Scholar]
  135. Islam, G. Generative Artificial Intelligence as Hypercommons: Ethics of Authorship and Ownership. J. Bus. Ethics 2024, 192, 659–663. [Google Scholar] [CrossRef]
  136. Langenkamp, M.; Costa, A.; Cheung, C. Hiring Fairly in the Age of Algorithms. arXiv 2020, arXiv:cs.HC/2004.07132. [Google Scholar] [CrossRef]
  137. Fischer, I.; Beswick, C. Rho AI—Leveraging Artificial Intelligence to address climate change: Financing, implementation and ethics. J. Inf. Technol. Teach. Cases 2021, 11, 110–116. [Google Scholar] [CrossRef]
  138. Robles Carrillo, M. Artificial Intelligence: From ethics to law. Telecommun. Policy 2020, 44, 101937. [Google Scholar] [CrossRef]
  139. Casiraghi, S. Ethics reviews in the European Union: Implications for the governance of scientific research in times of data science and Artificial Intelligence. Law Innov. Technol. 2024, 16, 101–122. [Google Scholar]
  140. Ibrahim, F.M.; Alawsi, H.; Mohammed, M.N.; Edwar, M.E. Artificial Intelligence and Legal Ethics: Navigating Challenges and Opportunities. Tech Fusion Bus. Soc. 2025, 2, 445–454. [Google Scholar]
  141. Joseph, J. Balancing Innovation and Biomedical Ethics within National Institutes of Health: Integrative and Regulatory Reforms for Artificial Intelligence-Driven Biotechnology. Biotechnol. Law Rep. 2025, 44, 93–116. [Google Scholar] [CrossRef]
  142. White, G.R.T.; Samuel, A.; Jones, P.; Madhavan, N.; Afolayan, A.; Abdullah, A. Mapping the ethic theoretical foundations of artificial intelligence research. Thunderbird Int. Bus. Rev. 2024, 66, 171–183. [Google Scholar] [CrossRef]
  143. Nickel, P. The Prospect of Artificial Intelligence Supported Ethics Review. Ethics Hum. Res. 2024, 46, 25–28. [Google Scholar] [CrossRef]
  144. Papyshev, G. The limitation of ethics-based approaches to regulating artificial intelligence: Regulatory gifting in the context of Russia. AI Soc. 2022, 39, 1381–1396. [Google Scholar] [CrossRef]
  145. Uhumuavbi, I. An Adaptive Conceptualisation of Artificial Intelligence and the Law, Regulation and Ethics. Laws 2025, 14, 19. [Google Scholar] [CrossRef]
  146. Ibrahim, S.M.; Alshraideh, M.; Leiner, M.; AlDajani, I.M. Artificial Intelligence ethics: Ethical consideration and regulations from theory to practice. IAES Int. J. Artif. Intell. (IJ-AI) 2024, 13, 3703. [Google Scholar] [CrossRef]
  147. Textor, C.; Zhang, R.; Lopez, J.; Schelble, B.G.; McNeese, N.J.; Freeman, G.; Pak, R.; Tossell, C.; Visser, E. Exploring the Relationship Between Ethics and Trust in Human-Artificial Intelligence Teaming: A Mixed Methods Approach. J. Cogn. Eng. Decis. Mak. 2022, 16, 252–281. [Google Scholar] [CrossRef]
  148. Duenser, A.; Douglas, D. Whom to Trust, How and Why: Untangling Artificial Intelligence Ethics Principles, Trustworthiness, and Trust. IEEE Intell. Syst. 2023, 38, 19–26. [Google Scholar] [CrossRef]
  149. Lucio, R.; Harris, A.; Campbell, M.; Ricciardelli, L. Artificial Intelligence in Systematic Literature Reviews: Social Work Ethics, Application, and Feasibility. J. Evid. Based Soc. Work. 2025, 23, 135–149. [Google Scholar] [CrossRef] [PubMed]
  150. Mura, L. The ethics of Artificial Intelligence: Safeguarding human dignity, social justice and environmental stability in the age of AI. Equilibrium. Q. J. Econ. Econ. Policy 2025, 20, 479–507. [Google Scholar] [CrossRef]
  151. Zhang, B.; Anderljung, M.; Kahn, L.; Dreksler, N.; Horowitz, M.C. Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers. J. Artif. Intell. Res. 2021, 71, 591–666. [Google Scholar] [CrossRef]
  152. Zhang, B.; Anderljung, M.; Kahn, L.; Dreksler, N.; Horowitz, M.C. Ethics and Governance of Artificial Intelligence: A Survey of Machine Learning Researchers (Extended Abstract). In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, Vienna, Austria, 23–29 July 2022; pp. 5787–5791. [Google Scholar]
  153. Kerr, A.; Barry, M.; Kelleher, J. Expectations of Artificial Intelligence and the performativity of ethics: Implications for communication governance. Big Data Soc. 2020, 7, 205395172091593. [Google Scholar] [CrossRef]
  154. Palladino, N. A biased emerging governance regime for Artificial Intelligence? How AI ethics get skewed moving from principles to practices. Telecommun. Policy 2023, 47, 102479. [Google Scholar] [CrossRef]
  155. Mao, Y. Online public discourse on Artificial Intelligence and ethics in China: Context, content, and implications. AI Soc. 2021, 38, 373–389. [Google Scholar] [CrossRef]
  156. Giarmoleo, F.V.; Ferrero, I.; Rocchi, M.; Pellegrini, M. What ethics can say on Artificial Intelligence: Insights from a systematic literature review. Bus. Soc. Rev. 2024, 129, 258–292. [Google Scholar] [CrossRef]
  157. Stahl, B.C.; Schroeder, D. Ethics of Artificial Intelligence: Case Studies and Options for Addressing Ethical Challenge (excerpt). J. Econ. Sociol. 2024, 25, 85–95. [Google Scholar] [CrossRef]
  158. Wang, H. Why putting Artificial Intelligence ethics into practice is not enough: Towards a multi-level framework. Big Data Soc. 2025, 12, 20539517251340620. [Google Scholar] [CrossRef]
  159. Bickley, S.J. Cognitive architectures for Artificial Intelligence ethics. AI Soc. 2022, 38, 501–519. [Google Scholar] [CrossRef]
  160. Klockmann, V.; Schenk, A.; Villeval, M. Artificial Intelligence, ethics, and intergenerational responsibility. J. Econ. Behav. Organ. 2022, 203, 284–317. [Google Scholar] [CrossRef]
  161. Farisco, M.; Evers, K. On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence. Neuroethics 2022, 15, 4. [Google Scholar] [CrossRef]
  162. Weh, L. An Integrated Embodiment Concept Combines Neuroethics and AI Ethics—Relational Perspectives on Artificial Intelligence, Emerging Neurotechnologies and the Future of Work. NanoEthics 2024, 18, 8. [Google Scholar] [CrossRef]
  163. Bennett, S. Artificial Intelligence and the ethics of navigating ambiguity. Big Data Soc. 2025, 12, 20539517251347594. [Google Scholar] [CrossRef]
  164. Ponsiglione, A.; Stanzione, A.; Bluethgen, C.; Santinha, J.; Ugga, L.; Huisman, M.; Klontzas, M.E.; Cannella, R. Bias in Artificial Intelligence for medical imaging: Fundamentals, detection, avoidance, mitigation, challenges, ethics, and prospects. Diagn. Interv. Radiol. 2024, 31, 75. [Google Scholar] [CrossRef]
  165. Andrade-Hidalgo, G.; Mio-Cango, P. Exploring the Impact of Artificial Intelligence on Research Ethics—A Systematic Review. J. Acad. Ethics 2024, 23, 1053–1070. [Google Scholar] [CrossRef]
Figure 1. PRISMA-ScR style flow diagram of record identification, screening and inclusion.
Figure 2. Non-exhaustive, illustrative cross-sector coverage of AI ethics meta-dimensions (representative source studies: [3,4,5,6,11,12,15,20,21,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92]).
Figure 3. Multi-level and ecosystem governance.
Figure 4. Essential socio-technical and science and technology studies (STS)/philosophical perspectives. Colors distinguish conceptual components, and arrows indicate directional relationships and feedback links.
Figure 5. Intercultural and plural ethics. Colors denote major ethical traditions and stakeholder perspectives, while arrows indicate influence pathways and reciprocal interactions.
Table 1. Essential works in AI ethics.
| Reference | Focus/Topic | Key Contribution | Category |
|---|---|---|---|
| [29] | Long-term AI risks, safety | Introduces value alignment and potential risks of superintelligent AI | Book chapter |
| [30] | Principles and challenges | Comprehensive overview of AI ethics principles and emerging issues | Book |
| [31] | Embedding ethics | Argues for integrating ethical reasoning within AI systems | Journal article |
| [32] | Field overview | Introduces key themes of AI ethics in a special issue | Special issue editorial |
| [33] | Broad perspectives | Edited volume covering technical and philosophical ethics topics | Edited book |
| [34] | AI and robotics ethics | Stanford Encyclopedia entry reviewing ethics questions | Encyclopedia entry |
| [35] | Professional ethics | Proposes developing actionable AI ethics codes | Book |
| [36] | Technical solutions | Surveys approaches for implementing ethics in AI systems | Conference paper |
| [37] | Transparency and bias | Nature editorial highlighting importance of transparency and fairness | Editorial |
| [38] | Ethics guidelines | Surveys global AI ethics principles and frameworks | Journal article |
| [39] | Guidelines critique | Evaluates existing guidelines and notes common shortcomings | Journal article |
| [40] | Ethics narratives | Discusses AI ethics from philosophical and socio-technical perspectives | Book |
| [41] | Field overview | High-level conceptual introduction to AI ethics topics | Review article |
| [42] | Comprehensive review | Analyzes ethical issues, guidelines, and implementation methods | Journal article |
| [43] | Conceptual analysis | Differentiates “ethics of AI” vs. “ethical AI” frameworks | Journal article |
| [44] | Education | Highlights need for ethics training in AI curricula | Opinion article |
| [45] | Critical theory | Argues current AI ethics principles are ineffective | Journal article |
| [46] | Critical analysis | Offers constructive critique of AI ethics practices | Journal article |
| [47] | Operational ethics | Proposes “ethics as a service” framework for practical application | Journal article |
Table 2. Cross-sector coverage of AI ethics meta-dimensions.
| Sector | Trust/Transparency | Bias/Fairness | Governance/Regulation | Justice |
|---|---|---|---|---|
| Healthcare | [48,49,50] | [51] | [52,53] | [54,55] |
| Education/Research | [56] | [11,57] | [58,59] | [12,60] |
| Media/Democracy | [61,62] | [63,64] | [65,66] | [21] |
| Business/Finance | [67,68] | [15,63,69,70] | [6,71,72,73] | [67,74] |
| Law/Policy | [3,75] | [76,77] | [4,5,78] | [20] |
| Defense/Security | [79,80,81] | [63,82] | [83,84] | [82,85] |
| Social/Public Sector | [86,87] | [88,89,90] | [91] | [92] |
Table 3. Cross-cutting meta-dimensions: definitions, typical instruments, and key caveats.
| Dimension | Definition/Typical Manifestations | Typical Instruments | Limitations/Caveats |
|---|---|---|---|
| Trust & Transparency | Explainability, documentation, disclosure, proportionality of explanation | Model cards, byline/disclosure, documentation, explainability tools, SLAs | Surface-level transparency; may not improve outcomes without governance. |
| Bias & Fairness | Systemic/representational harms; subgroup performance gaps | Fairness-aware training, subgroup testing, dataset audits, community evaluation | Metric myopia; contextual fairness tradeoffs. |
| Governance & Regulation | Institutionalisation of duties, oversight, enforceability | Risk tiers, audits, procurement standards, conformity assessment, enforcement mechanisms | Ethics-only regimes risk capture; jurisdictional divergence. |
| Justice & Legitimacy | Rights, redress, plural ethical foundations (procedural + distributive) | Redress channels; participatory oversight; intercultural ethics alignment (e.g., Ubuntu) | Requires institutional capacity and rule-of-law backstops; cultural translation necessary. |
Table 4. Sector × meta-dimension priority matrix (H = high emphasis; M = medium; L = low).
| Sector | Trust/Transparency | Bias/Fairness | Governance/Regulation | Justice/Legitimacy |
|---|---|---|---|---|
| Healthcare | H | H | M | M |
| Education/Research | M | M | H | M |
| Media/Democracy | H | M | H | M |
| Business/Finance | M | H | H | M |
| Law/Policy | H | H | H | H |
| Defense/Security | M | H | H | H |
| Social/Public Sector | H | H | H | H |
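The priority matrix in Table 4 can also be treated as structured data. The following sketch (illustrative only; the sector and dimension labels follow the table, while the function names and ordinal weights are our own) encodes the matrix as a dictionary so that high-emphasis dimensions per sector can be queried programmatically:

```python
# Illustrative encoding of Table 4's sector-by-dimension priority matrix.
# H/M/L codes are mapped to ordinal weights for simple aggregate queries.
PRIORITY = {"H": 3, "M": 2, "L": 1}

MATRIX = {
    "Healthcare":           {"Trust/Transparency": "H", "Bias/Fairness": "H", "Governance/Regulation": "M", "Justice/Legitimacy": "M"},
    "Education/Research":   {"Trust/Transparency": "M", "Bias/Fairness": "M", "Governance/Regulation": "H", "Justice/Legitimacy": "M"},
    "Media/Democracy":      {"Trust/Transparency": "H", "Bias/Fairness": "M", "Governance/Regulation": "H", "Justice/Legitimacy": "M"},
    "Business/Finance":     {"Trust/Transparency": "M", "Bias/Fairness": "H", "Governance/Regulation": "H", "Justice/Legitimacy": "M"},
    "Law/Policy":           {"Trust/Transparency": "H", "Bias/Fairness": "H", "Governance/Regulation": "H", "Justice/Legitimacy": "H"},
    "Defense/Security":     {"Trust/Transparency": "M", "Bias/Fairness": "H", "Governance/Regulation": "H", "Justice/Legitimacy": "H"},
    "Social/Public Sector": {"Trust/Transparency": "H", "Bias/Fairness": "H", "Governance/Regulation": "H", "Justice/Legitimacy": "H"},
}

def high_emphasis(sector: str) -> list:
    """Return the meta-dimensions rated 'H' for a sector."""
    return [dim for dim, code in MATRIX[sector].items() if code == "H"]

def emphasis_score(sector: str) -> int:
    """Sum of ordinal weights across all four meta-dimensions."""
    return sum(PRIORITY[code] for code in MATRIX[sector].values())

print(high_emphasis("Healthcare"))   # ['Trust/Transparency', 'Bias/Fairness']
print(emphasis_score("Law/Policy"))  # 12
```

Such an encoding is only a convenience for triaging governance effort; it does not replace the qualitative judgments behind the table.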
Table 5. Healthcare AI ethics: focal issues by specialty/theme.
| Theme/Specialty | Core Issues | Key Refs. |
|---|---|---|
| Radiology | Accountability for errors; validation; deployment monitoring | [48,49,52] |
| Nursing/Care | Relational ethics; professional identity; responsible reliance | [50,95,96] |
| Hepatology/Diabetology | Fairness; autonomy; continuous monitoring risks | [51,107] |
| Surgery/Perinatal | Liability; consent; high-severity error tolerance | [93,98,101] |
| Cross-cutting | Bias; safety; preparedness; ethics integration | [54,90,99,100] |
Table 6. Education and research: integrity, pedagogy and governance.
| Cluster | Focus | Key Refs. |
|---|---|---|
| Meta-reviews | Field mapping; rigor; collaboration | [11,13,14] |
| Integrity and Misconduct | Authorship; plagiarism; disclosure norms | [56,59,110] |
| Pedagogy/Curricula | Embedding ethics; methods; K–12–HE pipeline | [111,117,120,121] |
| Positive Uses | Tools for responsible practice (reviews, detection) | [113,114,115] |
| Governance | Institutional policy; capacity; culture | [12,118,119] |
Table 7. Recurring risks and actionable safeguards across domains.
| Risk | Safeguards/Tools | Illustrative Refs. |
|---|---|---|
| Opacity in high-stakes AI | Model cards, documentation, post-deployment monitoring, incident reporting | [48,49,50] |
| Algorithmic bias | Dataset audits, subgroup performance checks, fairness KPIs, equity governance | [51,76] |
| Ethics-washing | Independent audits, certification, public reporting, separated oversight | [17,18,19] |
| Regulatory fragmentation | Standards harmonization, cross-border MoUs, rights-based anchors | [6,78] |
| Accountability gaps (defense) | Human-in-the-loop mandates, ROE updates, attribution logs, review boards | [82,83] |
| Integrity erosion (education/publishing) | Disclosure norms, authorship policies, tool-specific user agreements | [56,59,110,116] |
| Global health inequities | Equity impact assessments, community governance, differential validation | [88,89] |
| Environmental externalities | Green benchmarks, water/energy transparency, lifecycle assessments | [23,24,150] |
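One of Table 7’s safeguards, subgroup performance checks, admits a compact operational sketch. The code below is not drawn from any reviewed study; the data, group labels, and the 0.05 gap threshold are purely illustrative of how a fairness KPI of this kind could be computed:

```python
# Minimal sketch of a "subgroup performance check": compare a model's
# accuracy across demographic subgroups and flag gaps above a threshold.
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy per subgroup from parallel lists of labels/predictions."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

def fairness_gap(y_true, y_pred, groups, threshold=0.05):
    """Return (gap, flagged): gap = max - min subgroup accuracy."""
    acc = subgroup_accuracy(y_true, y_pred, groups)
    gap = max(acc.values()) - min(acc.values())
    return gap, gap > threshold

# Toy example: group "a" scores 0.75 accuracy, group "b" scores 0.50.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, flagged = fairness_gap(y_true, y_pred, groups)
print(gap, flagged)  # 0.25 True
```

In practice such a check would be one line item in a broader dataset audit, and the choice of metric, subgroup definition, and threshold is itself an ethical decision (the “metric myopia” caveat of Table 3).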
Table 8. Operational toolchain for ethical AI: audits, standards and compliance.
| Instrument | Purpose | Strengths | Limitations/Caveats |
|---|---|---|---|
| Ethics Audits | Assess alignment with policies/principles | External validation; repeatability | Checklist risk; needs independence [18,19] |
| Standards/Benchmarks | Measurable criteria for trust/fairness/safety | Interoperability; comparability | Metric myopia; context loss [1,3] |
| Ethics Review (hybrid) | Human + AI support for review workflows | Scale + deliberation | Tool bias; reviewer expertise [143] |
| Compliance/Enforcement | Binding guardrails, liability | Legitimacy; deterrence | Jurisdictional divergence [4,5,78] |
Table 9. Operational governance framework: lifecycle stages, controls, and accountable outputs.
| Stage | Core Controls | Minimum Outputs | Primary Accountability |
|---|---|---|---|
| Problem Framing & Use-Case Scoping | Rights-impact scoping; stakeholder mapping; context-sensitive harm taxonomy; stop/go criteria for high-risk use | Problem statement; risk register; affected-group map; justification memo | Product owner + ethics/legal lead |
| Data & Model Development | Data provenance checks; subgroup performance testing; fairness thresholds; documentation-by-design; energy/resource logging | Data/model cards; bias test report; validation protocol; sustainability log | ML lead + domain expert + assurance function |
| Pre-Deployment Review | Independent ethics/risk review; compliance checks; human-oversight design; contestability and appeal design; incident playbooks | Review decision; mitigation plan; escalation matrix; user-facing disclosures | Independent review board + compliance officer |
| Deployment & Operations | Monitoring for drift, harms and disparities; incident reporting; transparency updates; periodic revalidation; retraining triggers | Monitoring dashboard; incident ledger; update notices; periodic assurance report | Operations owner + risk/compliance |
| Post-Deployment Governance | External audit; accountability hearings; remedy/redress handling; model retirement or redesign decisions | Audit report; corrective-action log; redress outcomes; decommission record | Executive governance committee + regulator/public authority |
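The lifecycle framework of Table 9 lends itself to a simple stage-gate check: a system may not advance to the next stage until the minimum outputs of the current stage exist. The sketch below is hypothetical; the stage keys and artefact identifiers are invented shorthand for the table’s rows, not a prescribed schema:

```python
# Hypothetical stage-gate check for Table 9's governance lifecycle.
# Each stage lists its minimum outputs; advancement requires all of them.
REQUIRED_OUTPUTS = {
    "problem_framing": ["problem_statement", "risk_register",
                        "affected_group_map", "justification_memo"],
    "development": ["data_model_cards", "bias_test_report",
                    "validation_protocol", "sustainability_log"],
    "pre_deployment_review": ["review_decision", "mitigation_plan",
                              "escalation_matrix", "user_disclosures"],
    "operations": ["monitoring_dashboard", "incident_ledger",
                   "update_notices", "assurance_report"],
    "post_deployment": ["audit_report", "corrective_action_log",
                        "redress_outcomes", "decommission_record"],
}

def gate_check(stage, produced):
    """Return (may_advance, missing) for a stage given produced artefacts."""
    missing = [o for o in REQUIRED_OUTPUTS[stage] if o not in produced]
    return (not missing, missing)

ok, missing = gate_check("problem_framing",
                         {"problem_statement", "risk_register"})
print(ok, missing)  # False ['affected_group_map', 'justification_memo']
```

The value of such a mechanism is procedural rather than substantive: it makes the accountability trail auditable, but the quality of each artefact still depends on the human review the framework mandates.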
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Liapis, C.M.; Fazakis, N.; Kotsiantis, S.; Dimakopoulos, Y. Ethics in Artificial Intelligence: A Cross-Sectoral Review of 2019–2025. Informatics 2026, 13, 51. https://doi.org/10.3390/informatics13040051
