Ethics and Governance of Artificial Intelligence (AI) Systems

A special issue of Systems (ISSN 2079-8954). This special issue belongs to the section "Artificial Intelligence and Digital Systems Engineering".

Deadline for manuscript submissions: 31 July 2026

Special Issue Editor


Dr. Haris Alibašić
Guest Editor
Department of Business Administration, University of West Florida, Pensacola, FL 32514, USA
Interests: AI ethics and governance; strategic resilience and sustainability planning; adaptive leadership; disaster resilience and emergency management; hybrid intelligence applications; organizational sustainability; regional cooperation frameworks; public sector innovation; governance models for emerging technologies

Special Issue Information

Dear Colleagues,

Artificial intelligence systems are rapidly transforming operations across public and private sectors, fundamentally altering decision-making, resource allocation, and service delivery. As AI becomes embedded in critical infrastructure, financial systems, emergency response, and strategic planning, robust ethical frameworks and effective governance mechanisms are urgently needed.

This Special Issue explores comprehensive ethical and governance challenges in AI implementation, examining how organizations develop frameworks to ensure AI systems operate responsibly, transparently, and effectively. We seek contributions addressing algorithmic accountability, data security, system reliability, and the balance between automation and human oversight. Research areas include ethical frameworks for AI deployment, governance models and regulatory approaches, transparency in machine learning systems, cybersecurity considerations, risk assessment frameworks, and best practices for the responsible implementation of AI.

We invite scholars to submit original research articles, case studies, and reviews that advance understanding of how ethical principles can be operationalized in AI systems. The Special Issue aims to provide practical guidance to organizations navigating AI deployment challenges while maintaining public trust and regulatory compliance, ultimately promoting responsible innovation that benefits both organizations and the public.

Research areas may include (but are not limited to) the following topics:

  • Ethical frameworks for AI system design and deployment in public or private sectors;
  • Algorithmic accuracy, reliability, and accountability mechanisms;
  • AI governance models, regulatory approaches, and policy frameworks;
  • Case studies of ethical and governance challenges in AI implementation;
  • Data security, privacy protection, and system integrity in AI applications;
  • Cybersecurity considerations for AI systems;
  • Transparency and explainability in machine learning systems;
  • Human-AI interaction, governance structures, and operational considerations;
  • Performance standards, quality assurance, and validation methods for AI systems;
  • Risk assessment and management frameworks for AI deployment;
  • Organizational structures and procedures for AI oversight;
  • Legal and regulatory compliance in AI operations;
  • Decision-making protocols for AI-augmented systems;
  • Audit and monitoring mechanisms for AI systems;
  • Best practices for responsible AI implementation.

I look forward to receiving your contributions.

Dr. Haris Alibašić
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Systems is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI governance
  • algorithmic accountability
  • ethical AI frameworks
  • cybersecurity in AI systems
  • transparency and explainability
  • responsible AI deployment
  • data security and privacy
  • human-AI interaction
  • regulatory compliance
  • operational integrity

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the journal's website.

Published Papers (2 papers)

Research

28 pages, 2250 KB  
Article
Occupational Gender Bias in Chinese Generative AI Models: Cross-Model Evidence of Stereotypical Amplification and Systematic Underrepresentation
by Yunhong Liu, Aijun Lin, Sui Peng and Zelong Cai
Systems 2026, 14(3), 286; https://doi.org/10.3390/systems14030286 - 9 Mar 2026
Abstract
Occupational gender stereotypes are widely embedded in social cognition and increasingly reproduced through generative artificial intelligence (AI). Two mainstream Chinese generative AI models (DeepSeek V3 and Qwen 2.5) were audited by eliciting occupation–gender pronoun associations for 72 census-anchored occupations using a standardized questionnaire and an automated testing pipeline. Each occupation was queried in 1000 independent rounds, yielding 2,880,000 item-level observations. The results show that, for both models, the fitted relationship between census female shares and model-implied female pronoun associations follows an S-shaped pattern. This pattern is consistent with a dominance-amplifying mapping that pushes male-dominated occupations toward lower female attribution and female-dominated occupations toward higher female attribution. Meanwhile, women’s overall visibility is consistently shifted downward: when the census benchmark is 50% female, the predicted female proportion remains below parity at 48% in DeepSeek and 43% in Qwen. Cross-model comparisons reveal substantial heterogeneity in bias profiles: DeepSeek primarily compresses female attribution in male-dominated occupations, whereas Qwen amplifies female dominance in occupations where women already predominate. Overall, these findings characterize a multi-layered output-level bias pattern combining structural amplification with a system-wide downward shift in women’s aggregate visibility.
(This article belongs to the Special Issue Ethics and Governance of Artificial Intelligence (AI) Systems)

15 pages, 712 KB  
Article
Stage-Aware Governance of Large Language Models: Managing Uncertainty and Human Oversight in AI-Assisted Literature Review Systems
by Junic Kim and Haeyong Shin
Systems 2026, 14(2), 153; https://doi.org/10.3390/systems14020153 - 31 Jan 2026
Abstract
This study proposes a stage-aware governance framework for large language models (LLMs) that structures human oversight and accountability across different decision stages in AI-assisted literature review systems. LLMs are increasingly embedded in systematic review workflows, yet how human oversight and accountability should be structured across different decision stages remains unclear. This study evaluates three LLMs in a controlled two-stage literature review workflow—title-and-abstract screening and eligibility assessment—using identical evidence inputs and fixed inclusion criteria, with outputs benchmarked against expert consensus under fully reproducible conditions with standardized prompts and comprehensive logging. While LLMs closely matched expert decisions during screening (precision 0.83–0.91; F1 up to 0.89; Cohen’s κ 0.65–0.85), performance degraded substantially at the eligibility stage (F1 0.58–0.65; κ 0.52–0.62), indicating increased epistemic uncertainty when fine-grained criteria must be inferred from abstract-level information. Importantly, disagreements clustered in borderline cases rather than random error, supporting a stage-aware governance approach in which LLMs automate high-throughput screening while inter-model disagreement is operationalized as an actionable uncertainty signal that triggers human oversight in more consequential decision stages. These findings highlight the need for explicit oversight thresholds, responsibility allocation, and auditability in the responsible deployment of AI-assisted decision systems for evidence synthesis.
(This article belongs to the Special Issue Ethics and Governance of Artificial Intelligence (AI) Systems)
