Article

Cost-Effective CNNs for Real-Time Micro-Expression Recognition

ImViA EA 7535, University Bourgogne Franche-Comté, 21000 Dijon, France
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(14), 4959; https://doi.org/10.3390/app10144959
Received: 5 June 2020 / Revised: 10 July 2020 / Accepted: 16 July 2020 / Published: 19 July 2020
(This article belongs to the Special Issue Ubiquitous Technologies for Emotion Recognition)
Micro-Expression (ME) recognition is a hot topic in computer vision, as it offers a gateway to capturing and understanding everyday human emotions. It is nonetheless a challenging problem because MEs are typically transient (lasting less than 200 ms) and subtle. Recent advances in machine learning enable new and effective methods to be adopted for solving diverse computer vision tasks. In particular, deep learning techniques applied to large datasets outperform classical approaches based on hand-crafted features. Even though available datasets for spontaneous MEs are scarce and much smaller, off-the-shelf Convolutional Neural Networks (CNNs) still achieve satisfactory classification results. However, these networks are demanding in terms of memory consumption and computational resources. This poses great challenges for deploying CNN-based solutions in applications such as driver monitoring and comprehension recognition in virtual classrooms, which require fast and accurate recognition. Because these networks were initially designed for tasks in other domains, they are over-parameterized and need to be optimized for ME recognition. In this paper, we propose a new network based on the well-known ResNet18, which we optimized for ME classification in two ways. Firstly, we reduced the depth of the network by removing residual layers. Secondly, we introduced a more compact representation of the optical flow used as input to the network. We present extensive experiments demonstrating that the proposed network achieves accuracy comparable to state-of-the-art methods while significantly reducing the required memory. Our best classification accuracy was 60.17% on a challenging composite dataset containing five objective classes. Our method takes only 24.6 ms to classify an ME video clip, less than the duration of the shortest ME, which lasts 40 ms.
Our CNN design is therefore suitable for real-time embedded applications with limited memory and computing resources.
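The abstract's two optimizations can be illustrated with a minimal PyTorch sketch. This is a hypothetical reconstruction, not the authors' published architecture: the number of residual stages kept, the two-channel optical-flow input (horizontal and vertical displacement fields), and the layer widths are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    """ResNet-style residual block: two 3x3 convolutions plus a skip connection."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 projection so the skip path matches shape when it changes.
        self.down = None
        if stride != 1 or in_ch != out_ch:
            self.down = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        identity = x if self.down is None else self.down(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)


class SlimMENet(nn.Module):
    """Shallow ResNet18-like classifier over a compact optical-flow input.

    Keeps only two residual stages (ResNet18 has four) and takes a
    2-channel flow field (u, v) instead of an RGB frame -- both choices
    are illustrative assumptions, not the paper's exact configuration.
    """

    def __init__(self, num_classes=5, in_channels=2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, 64, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.stage1 = BasicBlock(64, 64)
        self.stage2 = BasicBlock(64, 128, stride=2)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.stage2(self.stage1(self.stem(x)))
        return self.fc(self.pool(x).flatten(1))


model = SlimMENet()
logits = model(torch.randn(1, 2, 224, 224))  # one 224x224 flow field
print(logits.shape)  # torch.Size([1, 5]) -- one score per objective class
```

Removing the deeper stages cuts both the parameter count and the per-clip inference cost, which is the trade-off the paper evaluates against full ResNet18.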
Keywords: computer vision; deep learning; optical flow; micro facial expressions; real-time processing computer vision; deep learning; optical flow; micro facial expressions; real-time processing
MDPI and ACS Style

Belaiche, R.; Liu, Y.; Migniot, C.; Ginhac, D.; Yang, F. Cost-Effective CNNs for Real-Time Micro-Expression Recognition. Appl. Sci. 2020, 10, 4959. https://doi.org/10.3390/app10144959

