Article

AU-Guided Unsupervised Domain-Adaptive Facial Expression Recognition

by Xiaojiang Peng, Yuxin Gu and Panpan Zhang

1 College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China
2 Alibaba Group, Hangzhou 310011, China
3 National University of Singapore, Singapore 119077, Singapore
* Author to whom correspondence should be addressed.
Academic Editors: Tao Lei, Xianye Ben, Peng Zhang and Lei Chen
Appl. Sci. 2022, 12(9), 4366; https://doi.org/10.3390/app12094366
Received: 16 March 2022 / Revised: 21 April 2022 / Accepted: 21 April 2022 / Published: 26 April 2022
(This article belongs to the Special Issue Deep Learning for Facial Expression Analysis)
Domain diversities, including inconsistent annotations and varied image collection conditions, inevitably exist among facial expression recognition (FER) datasets, posing an evident challenge to adapting an FER model trained on one dataset to another. Recent works mainly focus on domain-invariant deep feature learning with adversarial mechanisms, ignoring the sibling facial action unit (AU) detection task, which has made great progress. Considering that AUs objectively determine facial expressions, this paper proposes an AU-guided unsupervised domain-adaptive FER (AdaFER) framework to relieve the annotation bias between different FER datasets. In AdaFER, we first leverage an advanced model for AU detection on both the source and target domains. We then compare the AU results to perform AU-guided annotating; i.e., target faces that share the same AUs as source faces inherit the corresponding labels from the source domain. Meanwhile, to achieve domain-invariant compact features, we employ AU-guided triplet training, which randomly samples anchor–positive–negative triplets across both domains according to their AUs. Extensive experiments on several popular benchmarks show that AdaFER achieves state-of-the-art results on all of them.
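The two AU-guided steps summarized above, label inheritance via matching AU patterns and AU-guided triplet training, can be illustrated with a minimal PyTorch-style sketch. All function names below, and the exact-match criterion for AU comparison, are assumptions made for illustration; they are a reading of the abstract, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def au_guided_annotate(src_aus, src_labels, tgt_aus):
    # Sketch of "AU-guided annotating" (assumed exact-match criterion):
    # a target face inherits the expression label of a source face whose
    # binary AU-activation vector is identical.
    # src_aus, tgt_aus: (N, num_aus) binary tensors; src_labels: (N,) long.
    pseudo = torch.full((tgt_aus.size(0),), -1, dtype=torch.long)  # -1 = no match
    for i in range(tgt_aus.size(0)):
        hits = (src_aus == tgt_aus[i]).all(dim=1).nonzero(as_tuple=True)[0]
        if hits.numel() > 0:
            pseudo[i] = src_labels[hits[0]]  # inherit the first matching source label
    return pseudo

def au_guided_triplet_loss(feats, aus, margin=0.5):
    # Sketch of AU-guided triplet training: anchor and positive share an AU
    # pattern, the negative does not; feats pools source and target features.
    losses = []
    n = feats.size(0)
    idx = torch.arange(n, device=feats.device)
    for a in range(n):
        same = (aus == aus[a]).all(dim=1)
        pos = idx[same & (idx != a)]
        neg = idx[~same]
        if pos.numel() == 0 or neg.numel() == 0:
            continue  # no valid triplet for this anchor
        p = pos[torch.randint(pos.numel(), (1,))]   # random positive
        q = neg[torch.randint(neg.numel(), (1,))]   # random negative
        d_ap = F.pairwise_distance(feats[a].unsqueeze(0), feats[p])
        d_an = F.pairwise_distance(feats[a].unsqueeze(0), feats[q])
        losses.append(F.relu(d_ap - d_an + margin))  # standard triplet margin loss
    return torch.cat(losses).mean() if losses else feats.new_zeros(())
```

In the full method, the pseudo-labeled target faces would presumably feed a standard classification loss alongside this triplet term; that combination is inferred from the abstract rather than taken from the paper's code.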
Keywords: facial expression recognition (FER); action units; unsupervised cross-domain FER
MDPI and ACS Style

Peng, X.; Gu, Y.; Zhang, P. AU-Guided Unsupervised Domain-Adaptive Facial Expression Recognition. Appl. Sci. 2022, 12, 4366. https://doi.org/10.3390/app12094366

AMA Style

Peng X, Gu Y, Zhang P. AU-Guided Unsupervised Domain-Adaptive Facial Expression Recognition. Applied Sciences. 2022; 12(9):4366. https://doi.org/10.3390/app12094366

Chicago/Turabian Style

Peng, Xiaojiang, Yuxin Gu, and Panpan Zhang. 2022. "AU-Guided Unsupervised Domain-Adaptive Facial Expression Recognition" Applied Sciences 12, no. 9: 4366. https://doi.org/10.3390/app12094366

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
