Human-Centered Artificial Intelligence

A special issue of Future Internet (ISSN 1999-5903).

Deadline for manuscript submissions: 28 February 2026

Special Issue Editors


Dr. Rui Yu
Guest Editor
Department of Computer Science and Engineering, University of Louisville, Louisville, KY 40208, USA
Interests: computer vision; machine learning; human-computer interaction; robotics

Dr. Sooyeon Lee
Guest Editor
Department of Informatics, Ying Wu College of Computing, New Jersey Institute of Technology, Newark, NJ 07102, USA
Interests: human-computer interaction; accessibility; human-AI interaction; design research

Dr. Syed Masum Billah
Guest Editor
College of Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802, USA
Interests: human-computer interaction; intelligent interaction systems; AI for accessibility; accessible computing

Dr. John M. Carroll
Guest Editor
College of Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802, USA
Interests: human–computer interaction; computer-supported collaborative work; community informatics; design research; learning science

Special Issue Information

Dear Colleagues,

We are pleased to announce a Special Issue of Future Internet entitled “Human-Centered Artificial Intelligence”. This Special Issue explores the diverse aspects of designing and developing AI systems that prioritize human values, ethical considerations, and societal well-being. As AI continues to be integrated into many facets of our lives, from healthcare and education to transportation and entertainment, it is crucial that these systems are developed with a human-centric approach. Human-Centered Artificial Intelligence (HCAI) strives to create AI technologies that are innovative, efficient, responsible, and beneficial to society.

Unlike traditional AI approaches that focus primarily on technical performance, HCAI emphasizes transparency, fairness, and user empowerment. This holistic approach addresses risks such as bias in decision-making, lack of interpretability, and issues of trust and accountability. By fostering interdisciplinary collaboration, HCAI aims to develop AI systems that enhance human capabilities, support social good, and contribute to a more equitable and just society.

We invite submissions of original research papers, review articles, and short communications to this Special Issue, highlighting cutting-edge research and innovative approaches that advance HCAI. We welcome submissions on a broad range of topics within HCAI, including, but not limited to, the following:

  • Explainable AI (XAI)
  • Fairness and bias in AI
  • Human–AI collaboration
  • Ethical and trustworthy AI
  • User-centered design for AI
  • AI and accessibility
  • Human factors in AI
  • Societal impacts of AI
  • Human–robot interaction (HRI)
  • Adaptive learning systems
  • AI in education
  • Patient-centered healthcare AI

Dr. Rui Yu
Dr. Sooyeon Lee
Dr. Syed Masum Billah
Dr. John M. Carroll
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human-centered AI
  • explainable AI
  • fairness in AI
  • human–AI collaboration

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

55 pages, 4454 KiB  
Article
The Future of Education: A Multi-Layered Metaverse Classroom Model for Immersive and Inclusive Learning
by Leyli Nouraei Yeganeh, Nicole Scarlett Fenty, Yu Chen, Amber Simpson and Mohsen Hatami
Future Internet 2025, 17(2), 63; https://doi.org/10.3390/fi17020063 - 4 Feb 2025
Cited by 2
Abstract
Modern education faces persistent challenges, including disengagement, inequitable access to learning resources, and the lack of personalized instruction, particularly in virtual environments. In this perspective, we envision a transformative Metaverse classroom model, the Multi-layered Immersive Learning Environment (Meta-MILE), to address these critical issues. The Meta-MILE framework integrates essential components such as immersive infrastructure, personalized interactions, social collaboration, and advanced assessment techniques to enhance student engagement and inclusivity. By leveraging three-dimensional (3D) virtual environments, artificial intelligence (AI)-driven personalization, gamified learning pathways, and scenario-based evaluations, the Meta-MILE model offers tailored learning experiences that traditional virtual classrooms often struggle to achieve. Acknowledging potential challenges such as accessibility, infrastructure demands, and data security, the study proposes practical strategies to ensure equitable access and safe interactions within the Metaverse. Empirical findings from our pilot experiment demonstrated the framework’s effectiveness in improving engagement and skill acquisition, with broader implications for educational policy and competency-based, experiential learning approaches. Looking ahead, we advocate for ongoing research to validate long-term learning outcomes and technological advancements to make immersive learning more accessible and secure. Our perspective underscores the transformative potential of the Metaverse classroom in shaping inclusive, future-ready educational environments capable of meeting the diverse needs of learners worldwide.
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)

21 pages, 2476 KiB  
Article
Enhancing Human–Agent Interaction via Artificial Agents That Speculate About the Future
by Casey C. Bennett, Young-Ho Bae, Jun-Hyung Yoon, Say Young Kim and Benjamin Weiss
Future Internet 2025, 17(2), 52; https://doi.org/10.3390/fi17020052 - 21 Jan 2025
Abstract
Human communication in daily life entails not only talking about what we are currently doing or will do, but also speculating about future possibilities that may (or may not) occur, i.e., “anticipatory speech”. Such conversations are central to social cooperation and social cohesion in humans. This suggests that such capabilities may also be critical for developing improved speech systems for artificial agents, e.g., human–agent interaction (HAI) and human–robot interaction (HRI). However, to do so successfully, it is imperative that we understand how anticipatory speech may affect the behavior of human users and, subsequently, the behavior of the agent/robot. Moreover, it is possible that such effects may vary across cultures and languages. To that end, we conducted an experiment where a human and an autonomous 3D virtual avatar interacted in a cooperative gameplay environment. The experiment included 40 participants, comparing different languages (20 English, 20 Korean), where the artificial agent had anticipatory speech either enabled or disabled. The results showed that anticipatory speech significantly altered the speech patterns and turn-taking behavior of both the human and the agent, but those effects varied depending on the language spoken. We discuss how the use of such novel communication forms holds potential for enhancing HAI/HRI, as well as the development of mixed reality and virtual reality interactive systems for human users.
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)

30 pages, 10493 KiB  
Article
Visualisation Design Ideation with AI: A New Framework, Vocabulary, and Tool
by Aron E. Owen and Jonathan C. Roberts
Future Internet 2024, 16(11), 406; https://doi.org/10.3390/fi16110406 - 5 Nov 2024
Abstract
This paper introduces an innovative framework for visualisation design ideation, which includes a collection of terms for creative visualisation design, a five-step process, and an implementation called VisAlchemy. Throughout the visualisation ideation process, individuals engage in exploring various concepts, brainstorming, sketching ideas, prototyping, and experimenting with different methods to visually represent data or information. Sometimes, designers feel incapable of sketching, and the ideation process can be quite lengthy. In such cases, generative AI can provide assistance. However, even with AI, it can be difficult to know which vocabulary to use and how to strategically approach the design process. Our strategy prompts imaginative and structured narratives for generative AI use, facilitating the generation and refinement of visualisation design ideas. We aim to inspire fresh and innovative ideas, encouraging creativity and exploring unconventional concepts. VisAlchemy is a five-step framework: a methodical approach to defining, exploring, and refining prompts to enhance the generative AI process. The framework blends design elements and aesthetics with context and application. In addition, we present a vocabulary set of 300 words, underpinned by a corpus of visualisation design and art papers, along with a demonstration tool called VisAlchemy. The interactive interface of the VisAlchemy tool allows users to adhere to the framework and generate innovative visualisation design concepts. It is built using the SDXL Turbo text-to-image model. Finally, we demonstrate its use through case studies and examples and show the transformative power of the framework to create inspired and exciting design ideas through refinement, re-ordering, weighting of words, and word rephrasing.
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)
