Open Access Article
Appl. Sci. 2017, 7(2), 112

A Multi-Layer Fusion-Based Facial Expression Recognition Approach with Optimal Weighted AUs

Beijing Municipal Key Laboratory of Multimedia and Intelligent Software Technology, Beijing University of Technology, Beijing 100124, China
School of Computer Science, Engineering and Mathematics, Flinders University of South Australia, Adelaide 5001, Australia
School of Electrical, Electronic & Communications Engineering, University College Dublin, Dublin 4, Ireland
Author to whom correspondence should be addressed.
Academic Editor: Antonio Fernández-Caballero
Received: 29 November 2016 / Revised: 27 December 2016 / Accepted: 17 January 2017 / Published: 24 January 2017


Affective computing is an increasingly important outgrowth of Artificial Intelligence that deals with rich and subjective human communication. Given the complexity of affective expression, extracting discriminative features and selecting a correspondingly high-performance classifier remain major challenges. Specific features and classifiers perform differently on different datasets, and there is currently no consensus in the literature that any single expression feature or classifier performs best in all cases. Although deep learning methods, which learn features instead of constructing them manually, have recently appeared in expression recognition research, the limited availability of training samples remains an obstacle to practical application. In this paper, we seek an effective solution based on a fusion and association learning strategy built on typical hand-crafted features and classifiers. Taking these typical features and classifiers from the facial expression literature as a basis, we thoroughly analyse their fusion performance. Meanwhile, to emphasize the major attributes of affective computing, we select facial-expression-relevant Action Units (AUs) as basic components and employ association rules to mine the relationships between AUs and facial expressions. Based on this comprehensive analysis from different perspectives, we propose a novel facial expression recognition approach that embeds multiple features and multiple classifiers into a stacking framework based on AUs. Extensive experiments on two public datasets show that our multi-layer fusion system with optimal AU weighting achieves substantial improvements in facial expression recognition over individual features/classifiers and several state-of-the-art methods, including a recent deep-learning-based expression recognition approach.
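The multi-layer fusion described in the abstract can be sketched as a two-layer stacking scheme: several base classifiers each emit a probability distribution over expressions, and a second layer fuses those outputs with weights (here standing in for the AU-derived weights). The sketch below is purely illustrative under stated assumptions; the toy "classifiers", the weight values, and all names are hypothetical, not the authors' implementation.

```python
# Minimal sketch of a two-layer (stacking) fusion, assuming each base
# classifier maps a feature vector to a probability distribution over
# expressions. The base classifiers and AU-derived weights are toy
# placeholders, not the method from the paper.

EXPRESSIONS = ["happy", "sad", "surprise"]

def base_clf_a(feature_vec):
    """Toy base classifier: normalise the features into a probability vector."""
    total = sum(feature_vec) or 1.0
    return [v / total for v in feature_vec]

def base_clf_b(feature_vec):
    """Second toy base classifier with a different normalisation bias."""
    m = max(feature_vec) or 1.0
    scores = [v / m for v in feature_vec]
    s = sum(scores)
    return [x / s for x in scores]

def stack_predict(feature_vec, au_weights):
    """Layer 2: combine the base classifiers' probability outputs using
    (hypothetical) AU-derived weights, then return the argmax expression."""
    outputs = [base_clf_a(feature_vec), base_clf_b(feature_vec)]
    fused = [
        sum(w * out[i] for w, out in zip(au_weights, outputs))
        for i in range(len(EXPRESSIONS))
    ]
    return EXPRESSIONS[fused.index(max(fused))]

print(stack_predict([0.7, 0.2, 0.1], au_weights=[0.6, 0.4]))  # → happy
```

In the paper, the second layer is learned rather than fixed, and the weights are tied to AU relevance mined via association rules; this sketch only shows the structural idea of fusing heterogeneous classifier outputs in a stacked layer.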
Keywords: feature fusion; multi-layer ensemble; action units (AUs); association rules; facial expression recognition


This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


MDPI and ACS Style

Jia, X.; Liu, S.; Powers, D.; Cardiff, B. A Multi-Layer Fusion-Based Facial Expression Recognition Approach with Optimal Weighted AUs. Appl. Sci. 2017, 7, 112.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.


Appl. Sci. EISSN 2076-3417, published by MDPI AG, Basel, Switzerland