Open Access Article
Sensors 2017, 17(4), 712; doi:10.3390/s17040712

Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas

School of Control Science and Engineering, Shandong University, Jinan 250061, China
* Author to whom correspondence should be addressed.
Academic Editors: Xue-Bo Jin, Shuli Sun, Hong Wei and Feng-Bao Yang
Received: 23 January 2017 / Revised: 23 March 2017 / Accepted: 24 March 2017 / Published: 29 March 2017

Abstract

In the pattern recognition domain, deep architectures are currently in wide use and have achieved good results. However, these architectures impose particular demands, notably large training datasets and GPU resources. Aiming to obtain better results without deep networks, we propose a simplified algorithm framework based on fusion features extracted from the salient areas of faces; the proposed algorithm outperforms some deep architectures. To extract more effective features, this paper first defines the salient areas of the face. Salient areas at the same facial location are normalized to the same size, so that more similar features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fused features is reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions in a single pass. This paper proposes a method for determining the salient areas by comparing peak-expression frames with neutral faces, and introduces normalization of the salient areas to align the specific regions that express the different expressions; as a result, the salient areas found in different subjects have the same size. In addition, our framework is the first to apply gamma correction to the LBP features, which improves our recognition rates significantly. With this framework, our research achieves state-of-the-art performance on the CK+ and JAFFE databases.
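The feature-extraction step described above (an LBP histogram per salient area, followed by gamma correction of the feature vector) can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the 8-neighbor sampling pattern, the 256-bin histogram, and the gamma value of 0.5 are assumptions for the sketch.

```python
# Illustrative sketch: 8-neighbor LBP histogram over a grayscale patch,
# followed by gamma correction of the normalized histogram.
# The neighborhood layout and gamma=0.5 are assumptions, not the paper's
# exact settings.

def lbp_code(img, r, c):
    """8-bit LBP code: compare each of the 8 neighbors with the center pixel."""
    center = img[r][c]
    neighbors = [img[r - 1][c - 1], img[r - 1][c], img[r - 1][c + 1],
                 img[r][c + 1],     img[r + 1][c + 1], img[r + 1][c],
                 img[r + 1][c - 1], img[r][c - 1]]
    code = 0
    for i, n in enumerate(neighbors):
        if n >= center:          # neighbor thresholded against the center
            code |= 1 << i
    return code

def lbp_gamma_feature(img, gamma=0.5):
    """Normalized 256-bin LBP histogram with per-bin gamma correction."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    for r in range(1, h - 1):    # interior pixels only (full 8-neighborhood)
        for c in range(1, w - 1):
            hist[lbp_code(img, r, c)] += 1
    total = sum(hist)
    # Gamma < 1 compresses large bins and boosts small ones, reducing the
    # dominance of a few very frequent texture patterns.
    return [(v / total) ** gamma for v in hist]

# A flat 4x4 patch: every interior pixel matches all its neighbors,
# so all four interior pixels produce code 255.
patch = [[7, 7, 7, 7]] * 4
feat = lbp_gamma_feature(patch)
```

In the full pipeline, such per-area histograms would be concatenated with HOG descriptors, reduced by PCA, and passed to a classifier.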
Keywords: facial expression recognition; fusion features; salient facial areas; hand-crafted features; feature correction
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Liu, Y.; Li, Y.; Ma, X.; Song, R. Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas. Sensors 2017, 17, 712.



Sensors EISSN 1424-8220, Published by MDPI AG, Basel, Switzerland