Article

Deep Coupling Recurrent Auto-Encoder with Multi-Modal EEG and EOG for Vigilance Estimation

1 College of Computer Science and Technology, Harbin Engineering University, Harbin 150001, China
2 Department of Information Engineering, Hulunbuir Vocational Technical College, Hulunbuir 021000, China
* Author to whom correspondence should be addressed.
Academic Editor: Francesco Carlo Morabito
Entropy 2021, 23(10), 1316; https://doi.org/10.3390/e23101316
Received: 11 September 2021 / Revised: 29 September 2021 / Accepted: 7 October 2021 / Published: 9 October 2021
Driver vigilance estimation is an active research topic in traffic safety. Wearable devices can monitor a driver's physiological state in real time; a data analysis model then estimates vigilance from these signals, so the model's accuracy directly determines the quality of the estimate. In this paper, we propose a deep coupling recurrent auto-encoder (DCRA) that combines electroencephalography (EEG) and electrooculography (EOG). The model uses a coupling layer to connect two single-modal auto-encoders and optimizes a joint objective that combines a single-modal loss with a multi-modal loss. The single-modal loss is measured by Euclidean distance, while the multi-modal loss uses a Mahalanobis distance learned through metric learning, which more accurately reflects the distance between the two modalities in the feature space induced by the metric matrix. To ensure gradient stability when learning over long sequences, a multi-layer gated recurrent unit (GRU) auto-encoder is adopted. The DCRA thus integrates feature extraction and feature fusion. Comparative experiments show that the DCRA outperforms single-modal methods and recent multi-modal fusion approaches, achieving a lower root mean square error (RMSE) and a higher Pearson correlation coefficient (PCC).
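The joint objective described in the abstract — per-modality Euclidean reconstruction loss plus a Mahalanobis coupling loss between the two modalities' latent codes — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the weighting factor `lam`, and the use of a fixed metric matrix `M` (in the paper, `M` is learned via metric learning) are all assumptions made for clarity.

```python
import numpy as np

def euclidean_loss(x, x_hat):
    # Single-modal reconstruction loss: squared Euclidean distance
    # between an input signal and its auto-encoder reconstruction.
    return float(np.sum((x - x_hat) ** 2))

def mahalanobis_loss(h_eeg, h_eog, M):
    # Multi-modal coupling loss: squared Mahalanobis distance between
    # the EEG and EOG latent codes under a (positive semi-definite)
    # metric matrix M. In the paper M is learned; here it is given.
    d = h_eeg - h_eog
    return float(d @ M @ d)

def joint_loss(x_eeg, xhat_eeg, x_eog, xhat_eog, h_eeg, h_eog, M, lam=0.5):
    # Joint objective: both reconstruction terms plus the weighted
    # coupling term. The weight lam is a hypothetical hyperparameter.
    return (euclidean_loss(x_eeg, xhat_eeg)
            + euclidean_loss(x_eog, xhat_eog)
            + lam * mahalanobis_loss(h_eeg, h_eog, M))

# Toy check: with the identity metric, the Mahalanobis coupling loss
# reduces to the squared Euclidean distance between the latent codes.
rng = np.random.default_rng(0)
h_a, h_b = rng.normal(size=4), rng.normal(size=4)
M = np.eye(4)
print(np.isclose(mahalanobis_loss(h_a, h_b, M), np.sum((h_a - h_b) ** 2)))
```

Note the design intuition: with `M` fixed to the identity, the coupling term is an ordinary Euclidean penalty; learning `M` lets the model reweight and correlate latent dimensions so that cross-modal distances are measured in a space where the two modalities are comparable.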
Keywords: vigilance estimation; electroencephalogram; electrooculogram; deep coupling recurrent auto-encoder; multi-modal fusion
MDPI and ACS Style

Song, K.; Zhou, L.; Wang, H. Deep Coupling Recurrent Auto-Encoder with Multi-Modal EEG and EOG for Vigilance Estimation. Entropy 2021, 23, 1316. https://doi.org/10.3390/e23101316

AMA Style

Song K, Zhou L, Wang H. Deep Coupling Recurrent Auto-Encoder with Multi-Modal EEG and EOG for Vigilance Estimation. Entropy. 2021; 23(10):1316. https://doi.org/10.3390/e23101316

Chicago/Turabian Style

Song, Kuiyong, Lianke Zhou, and Hongbin Wang. 2021. "Deep Coupling Recurrent Auto-Encoder with Multi-Modal EEG and EOG for Vigilance Estimation" Entropy 23, no. 10: 1316. https://doi.org/10.3390/e23101316

