
Viewpoint Robustness of Automated Facial Action Unit Detection Systems

Psychological Process Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
Faculty of Arts, Kyoto University of the Arts, 2-116 Uryuyama Kitashirakawa, Sakyo, Kyoto 606-8271, Japan
Authors to whom correspondence should be addressed.
Academic Editor: Monica Perusquia Hernandez
Appl. Sci. 2021, 11(23), 11171;
Received: 12 October 2021 / Revised: 5 November 2021 / Accepted: 21 November 2021 / Published: 25 November 2021
(This article belongs to the Special Issue Research on Facial Expression Recognition)
Automatic facial action detection is important, yet no previous study has evaluated how the accuracy of pre-trained models changes as the face rotates from a frontal to a profile view. Using static facial images captured at four angles (0°, 15°, 30°, and 45°), we investigated the performance of three automated facial action detection systems: FaceReader, OpenFace, and Py-Feat. Overall performance was best for OpenFace, followed by FaceReader and Py-Feat. The performance of FaceReader decreased significantly at 45° compared with the other angles, whereas the performance of Py-Feat did not differ among the four angles. The performance of OpenFace decreased gradually as the target face turned sideways. Prediction accuracy and robustness to angle changes varied with the target facial component and the detection system.
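The comparison described above amounts to scoring each detector's binary AU-occurrence output against manual coding, separately at each viewing angle. The sketch below illustrates that kind of per-angle evaluation in plain Python; the label arrays and angle set are hypothetical placeholders, not the paper's actual data, and F1 is used here only as one common agreement metric.

```python
def f1_score(truth, pred):
    """F1 for binary AU occurrence labels (1 = AU present, 0 = absent)."""
    tp = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical manual (FACS) coding for one AU across eight images,
# and hypothetical detector output for the same images at each angle.
ground_truth = [1, 1, 0, 1, 0, 1, 0, 0]
predictions = {
    0:  [1, 1, 0, 1, 0, 1, 0, 0],   # frontal view: perfect agreement
    15: [1, 1, 0, 1, 0, 0, 0, 0],   # one missed activation
    30: [1, 0, 0, 1, 1, 0, 0, 0],   # errors accumulate as the face turns
    45: [0, 0, 1, 1, 1, 0, 0, 1],
}

# One F1 score per viewing angle, mirroring the per-angle comparison
# used to assess viewpoint robustness.
scores = {angle: f1_score(ground_truth, pred)
          for angle, pred in predictions.items()}
```

Comparing `scores` across angles for each detector (and each AU) is what reveals whether a system degrades at oblique views, as FaceReader did at 45°, or stays flat, as Py-Feat did.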
Keywords: action unit; angle; automatic facial detection
MDPI and ACS Style

Namba, S.; Sato, W.; Yoshikawa, S. Viewpoint Robustness of Automated Facial Action Unit Detection Systems. Appl. Sci. 2021, 11, 11171.
