Article

Intuitive Recognition of a Virtual Agent’s Learning State Through Facial Expressions in VR

1 School of Computer Science and Electrical Engineering, Handong Global University, Pohang 37554, Republic of Korea
2 MyMeta, Seoul 08790, Republic of Korea
* Author to whom correspondence should be addressed.
Electronics 2025, 14(13), 2666; https://doi.org/10.3390/electronics14132666
Submission received: 15 May 2025 / Revised: 21 June 2025 / Accepted: 27 June 2025 / Published: 30 June 2025
(This article belongs to the Special Issue Advances in Human-Computer Interaction: Challenges and Opportunities)

Abstract

As artificial intelligence agents become integral to immersive virtual reality environments, their inherent opacity presents a significant challenge to transparent human–agent communication. This study aims to determine whether a virtual agent can effectively communicate its learning state to a user through facial expressions, and to empirically validate a set of designed expressions for this purpose. We designed three animated facial expression sequences for a stylized three-dimensional avatar, each corresponding to a distinct learning outcome: clear success (Case A), mixed performance (Case B), and moderate success (Case C). An initial online survey (n=93) confirmed the general interpretability of these expressions, followed by a main experiment in virtual reality (n=30) in which participants identified the agent's state based solely on these visual cues. The results strongly supported our primary hypothesis (H1), with participants achieving a high overall recognition accuracy of approximately 91%. While user background factors did not yield statistically significant differences, observable trends suggest they merit future investigation. These findings demonstrate that designed facial expressions serve as an effective and intuitive channel for real-time, affective explainable artificial intelligence (affective XAI), contributing a practical, human-centric method for enhancing agent transparency in collaborative virtual environments.
Keywords: virtual reality; facial-expression-based transparency; explainable AI (XAI); human–AI interaction

Share and Cite

MDPI and ACS Style

Lee, W.; Jin, D.H. Intuitive Recognition of a Virtual Agent’s Learning State Through Facial Expressions in VR. Electronics 2025, 14, 2666. https://doi.org/10.3390/electronics14132666

AMA Style

Lee W, Jin DH. Intuitive Recognition of a Virtual Agent’s Learning State Through Facial Expressions in VR. Electronics. 2025; 14(13):2666. https://doi.org/10.3390/electronics14132666

Chicago/Turabian Style

Lee, WonHyong, and Dong Hwan Jin. 2025. "Intuitive Recognition of a Virtual Agent’s Learning State Through Facial Expressions in VR" Electronics 14, no. 13: 2666. https://doi.org/10.3390/electronics14132666

APA Style

Lee, W., & Jin, D. H. (2025). Intuitive Recognition of a Virtual Agent’s Learning State Through Facial Expressions in VR. Electronics, 14(13), 2666. https://doi.org/10.3390/electronics14132666

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
