Human-Computer Interaction and Human-Centered AI

A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "AI Systems: Theory and Applications".

Deadline for manuscript submissions: 30 June 2026

Special Issue Editor


Prof. Dr. Tihomir Orehovački
Guest Editor
Faculty of Informatics, Juraj Dobrila University of Pula, 52100 Pula, Croatia
Interests: human-computer interaction; empirical software engineering; technology adoption; social computing; human-centered AI

Special Issue Information

Dear Colleagues,

As artificial intelligence continues to transform how individuals work, learn, communicate, and receive services, it is essential to design AI systems that genuinely support human needs. Human-centered AI (HCAI) represents a paradigm in which AI must remain transparent, accountable, fair, and aligned with human goals. Instead of expecting people to adapt to AI systems, HCAI calls for designing systems that adapt to human cognition, limitations, values, social contexts, and lived experiences.

This Special Issue will focus on the intersection of HCI and AI, emphasizing how humans perceive, understand, trust, and interact with intelligent systems. Key challenges include the following:

  • Enabling meaningful human oversight;
  • Building trust without overreliance;
  • Creating transparent and explainable interaction mechanisms;
  • Addressing bias, fairness, and inclusivity;
  • Designing adaptive AI interfaces that support diverse users;
  • Ensuring that AI-driven systems enhance rather than undermine human autonomy.

We invite diverse contributions that examine the behavioral, psychological, ethical, social, and technological foundations of human-centered AI. Submissions may include conceptual analyses, empirical studies, design frameworks, methodological innovations, system prototypes, policy perspectives, or review papers (systematic, narrative, or scoping).

Topics of Interest

We seek contributions in areas including, but not limited to, the following:

A. Human-Centered AI Foundations:

  • Principles and frameworks for designing human-centered AI;
  • AI explainability, interpretability, and transparency in interaction;
  • Mental models of AI decision-making;
  • Trust calibration and user perceptions of uncertainty.

B. Ethics, Society, and Responsible AI:

  • Fairness, accountability, transparency, and ethics (FATE);
  • Bias mitigation in interactive AI systems;
  • Societal implications of AI in high-stakes domains;
  • Human oversight, accountability, and value-sensitive design.

C. Human–AI Collaboration and Interaction:

  • Interaction models for mixed-initiative systems;
  • User feedback loops, interactive machine learning, co-adaptive AI;
  • Collaborative decision-making in professional and operational contexts;
  • Behavioral and emotional responses to AI behavior.

D. Adaptive and Personalized AI:

  • Personalized algorithms and adaptive user interfaces;
  • Cognitive load-aware and emotion-aware AI;
  • Multilingual, cross-cultural, and inclusive interaction design.

E. Evaluation and Methodology:

  • Methods and metrics for evaluating human–AI interaction;
  • User studies, longitudinal evaluations, and trust calibration studies;
  • Hybrid methodological approaches (HCI + AI + psychology).

F. Applications of Human-Centered AI:

  • Education, healthcare, transportation, and public administration;
  • Accessibility-oriented applications;
  • AI for creative and collaborative work;
  • Social robots and conversational agents.

Prof. Dr. Tihomir Orehovački
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human-centered AI
  • human–computer interaction
  • explainable AI
  • responsible AI
  • trust in AI
  • mental models
  • adaptive systems
  • user experience
  • fairness and ethics
  • human–AI collaboration
  • interaction design
  • cognitive models
  • value-sensitive design
  • behavioral responses to AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

33 pages, 7893 KB  
Article
A Fuzzy and Explainable AI Framework for Comparing Physical and Perceptual Representations in Galaxy Morphology
by Gabriel Marín Díaz, Alvaro Manuel Rodriguez-Rodriguez and Eva María Andrés Núñez
AI 2026, 7(5), 159; https://doi.org/10.3390/ai7050159 - 30 Apr 2026
Abstract
Galaxy morphology combines measurable structural properties with subjective visual interpretation, limiting strictly hard-label classifications. This study proposes a framework designed to compare physically derived and human-based galaxy classifications while explicitly accounting for uncertainty and interpretability. Using photometric and structural features from the Sloan Digital Sky Survey (SDSS), physical groupings are obtained through Fuzzy C-Means clustering, enabling gradual transitions via soft memberships. Human clusters are constructed from Galaxy Zoo 2 debiased vote fractions, capturing aggregated perceptual judgments. Supervised models are trained to predict both physical and human cluster assignments from the same set of physical variables, providing a quantitative assessment of structural coherence and perceptual–physical alignment. SHAP-based explainability identifies the relative influence of color and concentration parameters in each scheme. Results show that physical clustering is driven by structural concentration and bulge dominance, while human classification exhibits smoother decision boundaries and greater sensitivity to photometric appearance. Discrepancies concentrate in transitional and orientation-sensitive systems. An interactive visualization layer supports traceable qualitative inspection. The framework provides a reproducible methodology for analyzing classification consistency, uncertainty, and human–model alignment.
(This article belongs to the Special Issue Human-Computer Interaction and Human-Centered AI)
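The abstract's central mechanism, soft cluster memberships from Fuzzy C-Means, can be illustrated with a minimal sketch. This is not the authors' code: it is a generic NumPy implementation of the standard Fuzzy C-Means algorithm applied to synthetic two-dimensional features, where each point receives graded memberships across clusters (summing to 1) rather than a single hard label.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard Fuzzy C-Means: returns memberships U (n x c) and centers (c x d)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # random initial memberships, each row normalized to sum to 1
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # cluster centers as membership-weighted means of the data
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distance from every point to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)  # avoid division by zero
        # membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return U_new, centers
        U = U_new
    return U, centers

# two well-separated synthetic feature blobs standing in for galaxy features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
U, centers = fuzzy_c_means(X, c=2)
print(U[:3])  # rows are soft memberships; each row sums to 1
```

Points near a cluster core get memberships close to 1, while transitional objects (the abstract's "transitional and orientation-sensitive systems") sit near 0.5 in both columns, which is exactly the uncertainty a hard classifier would discard.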
