Explainable User Models

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (20 February 2022)

Special Issue Editors


Prof. Dr. Nava Tintarev
Guest Editor
Explainable Artificial Intelligence, Maastricht University, Maastricht / TU Delft, Delft, The Netherlands
Interests: explanations; natural language generation; human-computer interaction; personalization (recommender systems); intelligent user interfaces; responsible data analytics

Ms. Oana Inel
Guest Editor
Web Information Systems, Delft University of Technology, Delft, The Netherlands
Interests: explanations; human computation; human-computer interaction; responsible use of data; bias; video summarization

Special Issue Information

Dear Colleagues,

This Special Issue addresses research on explainable user models. Because the actions and decisions of AI systems significantly affect their users, it is important to understand how an AI system represents its users. It is a well-known hurdle that many AI algorithms behave largely as black boxes. One key aim of explainability is therefore to make the inner workings of AI systems more accessible and transparent.

Such explanations are helpful when a system uses information about the user to develop a working representation of the user and then uses this representation to adjust or inform system behavior. For example, an educational system could detect whether students have a more internal or external locus of control, a music recommender system could adapt the music it plays to a user's current mood, or an aviation system could detect the visual memory capacity of its pilots. However, when adapting to such user models, it is crucial that these models are detected accurately. Furthermore, for such adaptations to be useful, the system needs to be able to explain or justify its representations of users in a human-understandable way. This creates a need for techniques that automatically generate satisfactory explanations that are intelligible to the human users interacting with the system.
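The call does not prescribe any implementation, but a minimal sketch may help illustrate the kind of system being described: a toy user model that accumulates per-topic interest weights from observed interactions (the "working representation") and can justify a recommendation in plain language. All class, method, and topic names below are illustrative assumptions, not part of the call or of any particular system.

```python
# Minimal illustrative sketch (hypothetical names): a user model that keeps
# per-topic interest weights inferred from interactions and can produce a
# human-readable justification for a recommendation.

from collections import defaultdict


class ExplainableUserModel:
    def __init__(self):
        # topic -> accumulated interest weight (the "working representation")
        self.interests = defaultdict(float)

    def observe(self, item_topics, weight=1.0):
        """Update the representation from one observed interaction."""
        for topic in item_topics:
            self.interests[topic] += weight

    def score(self, item_topics):
        """Score a candidate item by the user's accumulated interest in its topics."""
        return sum(self.interests[t] for t in item_topics)

    def explain(self, item_topics, top_k=2):
        """Return a plain-language justification for recommending an item."""
        matched = sorted(
            ((t, self.interests[t]) for t in item_topics if self.interests[t] > 0),
            key=lambda kv: kv[1],
            reverse=True,
        )[:top_k]
        if not matched:
            return "Recommended to help you explore topics outside your usual interests."
        topics = " and ".join(t for t, _ in matched)
        return f"Recommended because you frequently interacted with items about {topics}."


# Example usage with made-up interaction data:
model = ExplainableUserModel()
model.observe(["jazz", "piano"])
model.observe(["jazz"])
print(model.explain(["jazz", "ambient"]))
# -> Recommended because you frequently interacted with items about jazz.
```

Even in this toy form, the sketch surfaces the two questions the Special Issue targets: whether the inferred representation is accurate, and whether its justification is intelligible to the user it describes.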

The scope of the special issue includes but is not limited to:

Detection and Modelling

  • Novel ways of modeling user preferences
  • Types of information to model (Knowledge, Personality, Cognitive differences, etc.)
  • Distinguishing between stationary versus transient user models (e.g., Personality vs Mood)
  • Context modeling (e.g., at work versus at home, lean in versus lean out activities)
  • User models from heterogeneous sources (e.g., behavior, ratings, and reviews)
  • Enrichment and Crowdsourcing for Explainable User Models

Ethics

  • Detection of sensitive or rarely reported attributes (e.g., gender, race, sexual orientation)
  • Implicit user modeling versus explicit user modeling (e.g., questionnaires versus inference from behavior)
  • User modeling for self-actualization (e.g., user modeling to improve dietary or news consumption habits)

Human understandability

  • Metrics and methodologies for evaluating the fitness for purpose of explanations
  • Balancing completeness and understandability for complex user models
  • Explanations to mitigate human biases (e.g., confirmation bias, anchoring)
  • Effect of user model explanation on subsequent user interaction (e.g., simulations and novel evaluation methodologies)

Effectiveness

  • Analysis or comparison of context of use of explanation (e.g., risk, time pressure, error tolerance)
  • Analysis of context of use of system (e.g., decision support, prediction)
  • Analysis or comparison of effect of explaining in specific domains (e.g., education, health, recruitment, security)

Adaptive presentation of the explanations

  • For different types of user
  • Interactive explanations
  • Investigation of which presentational aspects are beneficial to tailor in the explanation (e.g., level of detail, terminology, modality such as text or graphics, level of interaction)

Important Dates & Facts:
Manuscripts due by: February 20, 2022
Notification to authors: March 15, 2022

Prof. Dr. Nava Tintarev
Ms. Oana Inel
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (1 paper)


Research

31 pages, 57438 KiB  
Article
Interactive Visualizations of Transparent User Models for Self-Actualization: A Human-Centered Design Approach
by Mouadh Guesmi, Mohamed Amine Chatti, Alptug Tayyar, Qurat Ul Ain and Shoeb Joarder
Multimodal Technol. Interact. 2022, 6(6), 42; https://doi.org/10.3390/mti6060042 - 30 May 2022
Cited by 7
Abstract
This contribution sheds light on the potential of transparent user models for self-actualization. It discusses the development of EDUSS, a conceptual framework for self-actualization goals of transparent user modeling. Drawing from a qualitative research approach, the framework investigates self-actualization from psychology and computer science disciplines and derives a set of self-actualization goals and mechanisms. Following a human-centered design (HCD) approach, the framework was applied in an iterative process to systematically design a set of interactive visualizations to help users achieve different self-actualization goals in the scientific research domain. For this purpose, an explainable user interest model within a recommender system is utilized to provide various information on how the interest models are generated from users’ publication data. The main contributions are threefold: First, a synthesis of research on self-actualization from different domains. Second, EDUSS, a theoretically-sound self-actualization framework for transparent user modeling consisting of five main goals, namely, Explore, Develop, Understand, Scrutinize, and Socialize. Third, an instantiation of the proposed framework to effectively design interactive visualizations that can support the different self-actualization goals, following an HCD approach. Full article
(This article belongs to the Special Issue Explainable User Models)