Driver’s Facial Expression Recognition in Real-Time for Safe Driving

Department of Computer Engineering, Keimyung University, Daegu 42601, Korea
* Author to whom correspondence should be addressed.
Sensors 2018, 18(12), 4270; https://doi.org/10.3390/s18124270
Received: 6 November 2018 / Revised: 27 November 2018 / Accepted: 3 December 2018 / Published: 4 December 2018
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)
In recent years, research on deep neural network (DNN)-based facial expression recognition (FER) has reported results that overcome the limitations of conventional machine learning-based FER approaches. However, because DNN-based FER approaches require an excessive amount of memory and incur high processing costs, their application in various fields is very limited and depends on the hardware specifications. In this paper, we propose a fast FER algorithm for monitoring a driver's emotions that is capable of operating on the low-specification devices installed in vehicles. For this purpose, a hierarchical weighted random forest (WRF) classifier, trained on the similarity of sample data to improve its accuracy, is employed. In the first step, facial landmarks are detected in the input images and geometric features are extracted from the spatial positions between landmarks. These feature vectors are then fed to the proposed hierarchical WRF classifier to classify facial expressions. Our method was evaluated experimentally on three databases, the extended Cohn-Kanade database (CK+), MMI, and the Keimyung University Facial Expression of Drivers (KMU-FED) database, and its performance was compared with that of state-of-the-art methods. The results show that the proposed method achieves accuracy comparable to deep learning FER methods, 92.6% on CK+ and 76.7% on MMI, at a processing cost approximately 3731 times lower than that of the DNN method. These results confirm that the proposed method is well suited to real-time embedded applications with limited computing resources.
Keywords: facial expression recognition; deep neural networks; embedded application; ADAS; weighted random forest
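For illustration, the following is a minimal sketch of the kind of pipeline the abstract describes: detect facial landmarks, derive geometric features from the spatial relations between landmarks, and classify the feature vector with a weighted random forest. This is not the authors' implementation; dlib's 68-point landmark model, pairwise-distance features normalized by inter-ocular distance, and scikit-learn's class-weighted RandomForestClassifier are all assumptions standing in for the paper's detector, geometric features, and hierarchical, similarity-trained WRF.

```python
# Illustrative sketch only -- NOT the authors' implementation.
# dlib's 68-point model, pairwise-distance features, and a
# class-weighted sklearn random forest are stand-in assumptions.
import itertools
import numpy as np
import dlib
from sklearn.ensemble import RandomForestClassifier

detector = dlib.get_frontal_face_detector()
# Standard dlib landmark model file (assumed to be available locally).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def geometric_features(gray_image):
    """Detect a face, then build a feature vector from the spatial
    positions between its landmarks (here: pairwise distances)."""
    faces = detector(gray_image)
    if not faces:
        return None
    shape = predictor(gray_image, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=float)
    # Normalize by inter-ocular distance (outer eye corners, points
    # 36 and 45) so the features are scale-invariant.
    scale = np.linalg.norm(pts[36] - pts[45]) or 1.0
    dists = [np.linalg.norm(pts[i] - pts[j]) / scale
             for i, j in itertools.combinations(range(len(pts)), 2)]
    return np.array(dists)

# class_weight="balanced" crudely stands in for the weighting idea of
# the WRF; the paper's hierarchical classifier is more involved.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced")
# Training/inference would look like:
#   clf.fit(X_train, y_train)
#   label = clf.predict([geometric_features(frame)])
```

Such hand-crafted geometric features with a forest classifier avoid the memory footprint of a DNN, which is the trade-off motivating the paper's approach for in-vehicle embedded hardware.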