Explainability in Human-Computer Interaction and Collaboration

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 May 2024

Special Issue Editors


Dr. Yazan Mualla
Guest Editor
Laboratoire Connaissance et Intelligence Artificielle Distribuées (CIAD), University of Technology of Belfort-Montbéliard (UTBM), 90010 Belfort, France
Interests: explainable artificial intelligence (XAI); human computer interaction (HCI); multiagent systems

Dr. Amro Najjar
Guest Editor
ITIS, Luxembourg Institute of Science and Technology (LIST), 4362 Esch-sur-Alzette, Luxembourg
Interests: XAI; HRI; HCI; multiagent systems

Dr. Igor Tchappi
Guest Editor
AI-Robolab/ICR, Computer Science and Communications, University of Luxembourg, 4365 Esch-sur-Alzette, Luxembourg
Interests: developing countries; transportation; multiagent systems; explainable AI; holonic systems

Dr. Joris Hulstijn
Guest Editor
Department of Computer Science, University of Luxembourg, 4365 Esch-sur-Alzette, Luxembourg
Interests: responsible AI; governance of AI; computational auditing; regulatory compliance

Prof. Dr. Leon van der Torre
Guest Editor
DCS, University of Luxembourg, 2 Av. de l'Université, 4365 Esch-sur-Alzette, Luxembourg
Interests: agents and reasoning; knowledge representation; HCI; XAI

Special Issue Information

Dear Colleagues,

Explainability, also known as interpretability in some contexts, in human–computer interaction (HCI) and collaboration refers to the ability of a computer system to provide users with information about its inner workings and about why particular decisions were made.

It is an important aspect of HCI because it helps users trust and understand the behavior of AI systems, thereby increasing user satisfaction and improving decision making.

In HCI, explainability is indispensable for AI systems such as decision-making systems, recommendation systems, and natural language processing systems, as these systems can be complex and difficult for users to understand. In human-computer collaboration, explainability supports coordination and cooperation between humans and computers by providing a clear understanding of the system's goals, actions, and decisions.

Explainability in HCI can be achieved through various techniques, such as the following:

  • Providing users with concise, understandable explanations of how a system operates, such as through visualizations or natural language explanations;
  • Allowing users to view the components behind an AI system's conclusion, such as the input data and the weights given to certain features (a minimal sketch of this appears after the list);
  • Enabling users to investigate, interact with, and explore the internal workings of a system, for example through interactive visualizations or “what if” simulations;
  • Empowering users to control and customize the behavior of a system, for example by changing settings or offering feedback.
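
As an illustration of the second technique above, the following is a minimal sketch, assuming a toy linear scoring model with hypothetical feature names and weights, of how the per-feature contributions behind a single conclusion can be surfaced and phrased as a natural-language explanation. A real system would instead expose the attributions of its actual model (e.g., learned coefficients or post hoc feature-attribution scores).

```python
# Minimal sketch (hypothetical feature names, weights, and scenario):
# expose the per-feature contributions behind a linear model's decision
# and turn them into a short natural-language explanation.

# Toy linear credit-scoring model: score = sum(weight_i * value_i) + bias
WEIGHTS = {"income": 0.8, "debt_ratio": -1.2, "years_employed": 0.4}
BIAS = -0.1

def explain_decision(features: dict[str, float]) -> str:
    """Return a plain-language explanation listing each feature's contribution."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values()) + BIAS
    decision = "approved" if score >= 0 else "declined"

    # Rank features by the magnitude of their influence on this decision.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = "; ".join(
        f"{name} {'raised' if c > 0 else 'lowered'} the score by {abs(c):.2f}"
        for name, c in ranked
    )
    return f"The application was {decision} (score {score:.2f}). {reasons}."

print(explain_decision({"income": 0.6, "debt_ratio": 0.5, "years_employed": 2.0}))
```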

Explainable AI (XAI) can bring several advantages to HCI: increased trust and satisfaction, improved accountability and transparency, better user engagement, and improved decision making. However, recent work in the literature has pointed out that the design of XAI systems should consider the user's background, cognitive skills, and prior knowledge. Thus, various challenges need to be considered: balancing explainability and performance; understanding user needs and preferences; addressing diversity, discrimination, and bias; managing complexity and overhead; accounting for ethical and social implications; conducting evaluation and validation; and ensuring privacy and security.

Explainability in HCI supplements the existing literature in various ways: trust, understandability, and acceptance; human-centered AI that considers the human perspective and context; AI ethics, including transparency and accountability; and human-robot/agent teams.

The overall focus of this Special Issue is on designing AI systems that are transparent and understandable to users. The scope of this Special Issue is broad and interdisciplinary as it encompasses different areas. The following are more specific suggested themes and article types for submission:

Theoretical foundations: This theme includes cognitive psychology, which can help understand how people process and understand explanations; computer science, which can help design algorithms and methods for generating explanations; and AI, which can help understand how to make AI systems transparent and interpretable.

Applications and domains: This theme includes healthcare, finance, education, entertainment, robotics, etc. Additionally, XAI is essential for the development of AI systems for non-expert users, such as children or older adults.

Evaluation and assessment: Evaluation is challenging because of the complexity of HCI and the diversity of users' backgrounds, cognitive skills, and prior knowledge.

Methods to evaluate explainability in HCI include user studies, experiments, and surveys, which can help characterize users' perception of, understanding of, and trust in an AI system's explanations. Additionally, a combination of objective and subjective measures can be used to evaluate the quality of the explanations generated by an AI system.
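
As an illustration of that last point, the following is a minimal sketch, using hypothetical data and measures, of how an objective measure (fidelity of the explanations to the model's outputs) can be reported alongside a subjective one (Likert-scale trust ratings from a post-task survey); it is not a prescribed evaluation protocol.

```python
# Illustrative sketch (hypothetical data and scales): combine an objective
# measure (fidelity of explanations to the model's behavior) with a
# subjective measure (Likert-scale trust ratings from a user study).

def fidelity(model_outputs: list[int], explanation_outputs: list[int]) -> float:
    """Fraction of instances where the explanation predicts the same label as the model."""
    matches = sum(m == e for m, e in zip(model_outputs, explanation_outputs))
    return matches / len(model_outputs)

def mean_trust(ratings: list[int]) -> float:
    """Average of 1-5 Likert trust ratings collected in a post-task survey."""
    return sum(ratings) / len(ratings)

# Hypothetical results: the model's labels, the labels implied by the
# explanations shown to users, and the trust ratings those users reported.
model_labels = [1, 0, 1, 1, 0, 1, 0, 0]
explained_labels = [1, 0, 1, 0, 0, 1, 0, 0]
survey_ratings = [4, 5, 3, 4, 4, 2, 5, 4]

print(f"Objective fidelity: {fidelity(model_labels, explained_labels):.2f}")
print(f"Subjective trust:   {mean_trust(survey_ratings):.2f} / 5")
```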

In summary, the combination of explainability and HCI/collaboration can help improve the user satisfaction, trustworthiness, effectiveness, and efficiency of AI systems by designing them in a way that considers the human perspective and context while reducing errors and biases.

We look forward to receiving your contributions.

Dr. Yazan Mualla
Dr. Amro Najjar
Dr. Igor Tchappi
Dr. Joris Hulstijn
Prof. Dr. Leon van der Torre
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainability and explainable artificial intelligence
  • human-computer interaction
  • human-robot/agent collaboration
  • human-centered AI
  • AI ethics
  • decision-making systems
  • recommender systems
  • explainability in multiagent systems
  • human-robot teams

Published Papers

This special issue is now open for submission.