Open Access Article

Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model

Media Integration and Communication Center, University of Florence, 50134 Firenze, Italy
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(2), 589; https://doi.org/10.3390/s21020589
Received: 3 December 2020 / Revised: 31 December 2020 / Accepted: 12 January 2021 / Published: 15 January 2021
(This article belongs to the Special Issue Recent Advances in Depth Sensors and Applications)
Facial Action Units (AUs) correspond to the contraction or deformation of individual facial muscles or combinations thereof. As such, each AU affects only a small portion of the face, often with asymmetric deformations. Generating and analyzing AUs in 3D is particularly relevant for the applications it can enable. In this paper, we propose a solution for 3D AU detection and synthesis that builds on a newly defined 3D Morphable Model (3DMM) of the face. Unlike most 3DMMs in the literature, which mainly model global variations of the face and struggle to adapt to local and asymmetric deformations, the proposed model is specifically devised to cope with such difficult morphings. During a training phase, we learn the deformation coefficients that let the 3DMM fit 3D target scans of the same individual showing a neutral and an expressive face, thus decoupling expression deformations from identity deformations. These coefficients are then used in two ways: on the one hand, to train an AU classifier; on the other, they can be applied to a 3D neutral scan to generate AU deformations in a subject-independent manner. The proposed AU detection approach is validated on the Bosphorus dataset, reporting results competitive with the state of the art even in a challenging cross-dataset setting. We further show that the learned coefficients are general enough to synthesize realistic 3D face instances with activated AUs.
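The core mechanism the abstract describes is linear: a face scan is expressed as a neutral scan plus a weighted sum of learned deformation components, and the fitted weights (deformation coefficients) serve both as classifier features and as a recipe for synthesizing the expression on a new neutral face. The sketch below illustrates this idea only; all dimensions, names, and the ridge-regression fitting step are illustrative assumptions, not the paper's dictionary-learning formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a mesh with n_vertices 3D vertices and k deformation
# components (a stand-in for the 3DMM dictionary learned in the paper).
n_vertices, k = 500, 30
components = rng.standard_normal((k, 3 * n_vertices))  # deformation dictionary D
neutral = rng.standard_normal(3 * n_vertices)          # flattened neutral scan

def deform(neutral, components, alpha):
    """3DMM-style linear deformation: S = S_neutral + alpha @ D."""
    return neutral + alpha @ components

def fit_coefficients(neutral, target, components, lam=1e-6):
    """Recover deformation coefficients by ridge-regularized least squares
    (an illustrative substitute for the paper's dictionary-learning step)."""
    D = components.T                                    # (3n, k)
    A = D.T @ D + lam * np.eye(components.shape[0])
    b = D.T @ (target - neutral)
    return np.linalg.solve(A, b)

# Round-trip check: synthesize an "expressive" scan with known coefficients,
# then recover them from the neutral/target pair.
alpha_true = rng.standard_normal(k)
target = deform(neutral, components, alpha_true)
alpha_hat = fit_coefficients(neutral, target, components)
print(np.allclose(alpha_hat, alpha_true, atol=1e-4))
```

In this framing, the recovered `alpha_hat` is exactly what would be fed to an AU classifier, and applying it to a different subject's neutral scan via `deform` is the subject-independent synthesis step the abstract mentions.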
Keywords: 3D Morphable Model; dictionary learning; 3DMM deformation coefficients; Action Unit detection; Action Unit synthesis
MDPI and ACS Style

Ariano, L.; Ferrari, C.; Berretti, S.; Del Bimbo, A. Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model. Sensors 2021, 21, 589. https://doi.org/10.3390/s21020589

