Article

Assessing Automated Facial Action Unit Detection Systems for Analyzing Cross-Domain Facial Expression Databases

1 Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
2 KOHINATA Limited Liability Company, 2-7-3 Tateba, Naniwa-ku, Osaka 556-0020, Japan
* Authors to whom correspondence should be addressed.
Academic Editors: Nilanjan Sarkar and Zhi Zheng
Sensors 2021, 21(12), 4222; https://doi.org/10.3390/s21124222
Received: 27 April 2021 / Revised: 15 June 2021 / Accepted: 17 June 2021 / Published: 20 June 2021
(This article belongs to the Special Issue Emotion Recognition in Human-Machine Interaction)
In the field of affective computing, accurate automatic detection of facial movements is an important goal, and considerable progress has already been made. However, a systematic evaluation of such systems against dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, and AFARtoolbox) that detect the facial movements corresponding to action units (AUs) defined by the Facial Action Coding System. All three systems detected the presence of AUs in a dynamic facial database at above-chance levels. Moreover, OpenFace and AFARtoolbox yielded higher values for the area under the receiver operating characteristic curve (AUC) than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and static-mode analysis was superior to dynamic-mode analysis for the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
Keywords: action unit; automatic facial detection; facial expressions; machine analysis; sensing dynamic face
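The comparison described in the abstract rests on scoring frame-level AU detections with the area under the ROC curve. Below is a minimal sketch of that scoring step, assuming per-frame binary ground-truth labels and an OpenFace-style output CSV with per-AU intensity columns (e.g., AU12_r); the file names and label-column layout are illustrative, not the authors' actual pipeline.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score


def au_auc(openface_csv: str, labels_csv: str, au: str = "AU12") -> float:
    """AUC for one action unit, scored frame by frame.

    openface_csv: per-frame OpenFace output with an intensity column f"{au}_r".
    labels_csv:   per-frame binary ground truth with a column named after the AU
                  (hypothetical layout for this sketch).
    """
    pred = pd.read_csv(openface_csv)
    pred.columns = pred.columns.str.strip()  # OpenFace headers may carry spaces
    truth = pd.read_csv(labels_csv)
    # Continuous intensity estimates are scored directly against binary
    # presence labels; no threshold is needed for an ROC-based comparison.
    return roc_auc_score(truth[au].astype(int), pred[f"{au}_r"])


if __name__ == "__main__":
    print(au_auc("openface_output.csv", "ground_truth.csv", au="AU12"))
```

Repeating this per AU and per system yields the kind of AUC table on which the cross-system comparison in the abstract is based.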
MDPI and ACS Style

Namba, S.; Sato, W.; Osumi, M.; Shimokawa, K. Assessing Automated Facial Action Unit Detection Systems for Analyzing Cross-Domain Facial Expression Databases. Sensors 2021, 21, 4222. https://doi.org/10.3390/s21124222
