Article

Building an Emotionally Responsive Avatar with Dynamic Facial Expressions in Human–Computer Interactions

by Heting Wang 1,†, Vidya Gaddy 2,*,†, James R. Beveridge 2,† and Francisco R. Ortega 2,*,†
1 Computer Science Department, University of Florida, Gainesville, FL 32611, USA
2 Computer Science Department, Colorado State University, Fort Collins, CO 80521, USA
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Academic Editors: Kangsoo Kim, Sun Joo (Grace) Ahn and Gerd Bruder
Multimodal Technol. Interact. 2021, 5(3), 13; https://doi.org/10.3390/mti5030013
Received: 29 December 2020 / Revised: 1 March 2021 / Accepted: 16 March 2021 / Published: 20 March 2021
(This article belongs to the Special Issue Social Interaction and Psychology in XR)
The role of affect has long been studied in human–computer interactions. Unlike previous studies that focused on the seven basic emotions, this article introduces an avatar named Diana who expresses a higher level of emotional intelligence. To adapt to the user's various affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants turned to collaborate with Diana, their subjective responses were collected and their task completion times were recorded. Three modes of Diana were involved: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert scale questionnaire and the NASA TLX. Results from the questionnaires showed no statistically significant differences; however, the emotionally responsive Diana received more positive responses, and participants spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana's face as natural, while four mentioned discomfort caused by the Uncanny Valley effect.
Keywords: human–computer interaction; affective computing; facial expression
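The difference between the three modes described in the abstract can be illustrated with a minimal sketch. The emotion labels, mapping, and function names below are purely hypothetical assumptions for illustration; they do not reproduce the paper's actual expression-selection logic or any Affdex SDK integration.

```python
# Hypothetical sketch: selecting the avatar's facial expression from the
# user's detected affect under the three modes described in the abstract.
# Labels and the responsive mapping are illustrative assumptions only.

FLAT = "flat"
MIMICRY = "mimicry"
RESPONSIVE = "responsive"

# Assumed mapping for the emotionally responsive mode: the avatar reacts to
# the user's affect (e.g., showing concern at frustration) rather than
# copying it.
RESPONSIVE_MAP = {
    "joy": "smile",
    "frustration": "concern",
    "confusion": "attentive",
    "neutral": "neutral",
}

def select_expression(user_affect: str, mode: str) -> str:
    """Return the avatar's expression for a detected user affect."""
    if mode == FLAT:
        return "neutral"                      # flat-faced avatar never changes
    if mode == MIMICRY:
        return user_affect                    # mimicry avatar mirrors the user
    if mode == RESPONSIVE:
        return RESPONSIVE_MAP.get(user_affect, "neutral")
    raise ValueError(f"unknown mode: {mode}")

if __name__ == "__main__":
    for affect in ("joy", "frustration", "confusion"):
        print(affect, "->", select_expression(affect, RESPONSIVE))
```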
MDPI and ACS Style

Wang, H.; Gaddy, V.; Beveridge, J.R.; Ortega, F.R. Building an Emotionally Responsive Avatar with Dynamic Facial Expressions in Human–Computer Interactions. Multimodal Technol. Interact. 2021, 5, 13. https://doi.org/10.3390/mti5030013
