Open Access Article

Optimal Feature Search for Vigilance Estimation Using Deep Reinforcement Learning

1. Intelligent Information System and Embedded Software Engineering, Kwangwoon University, Seoul 01897, Korea
2. Department of Computer Engineering, Kwangwoon University, Seoul 01897, Korea
3. Faculty of Science and Engineering, Rijksuniversiteit Groningen, 9727 Groningen, The Netherlands
4. Department of Computer Science and Engineering, Seoul National University, Seoul 08826, Korea
* Author to whom correspondence should be addressed.
Electronics 2020, 9(1), 142; https://doi.org/10.3390/electronics9010142
Received: 29 November 2019 / Revised: 2 January 2020 / Accepted: 6 January 2020 / Published: 11 January 2020
A low level of vigilance is one of the main causes of traffic and industrial accidents. We conducted experiments to induce low vigilance and recorded physiological data through single-channel electroencephalogram (EEG) and electrocardiogram (ECG) measurements. In this study, a deep Q-network (DQN) algorithm was designed, using conventional feature engineering and deep convolutional neural network (CNN) methods, to extract the optimal features. The DQN yielded four optimal features: two CNN features from the ECG and two conventional features from the EEG. The ECG features were more significant for tracking transitions within the alertness continuum with the DQN. Classification performed with this small number of features achieved results similar to those obtained using all of the features. This suggests that the DQN could be applied to investigating biomarkers of physiological responses and to optimizing classification systems to reduce input resources.
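The abstract frames feature selection as a reinforcement-learning problem: an agent sequentially picks features and is rewarded by how well the resulting subset performs. The sketch below illustrates that idea with tabular Q-learning as a deliberately simplified stand-in for the paper's DQN; the feature pool, the "usefulness" values, and the reward function are all synthetic placeholders, not the authors' EEG/ECG features or classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 6   # hypothetical candidate pool (stand-in for EEG/ECG features)
MAX_SELECT = 2   # pick a small subset, echoing the paper's few-feature result

# Synthetic per-feature "usefulness"; stands in for classifier-based reward.
TRUE_VALUE = np.array([0.05, 0.9, 0.1, 0.8, 0.02, 0.15])

def reward(mask):
    # Proxy reward: summed usefulness of the currently selected features.
    return TRUE_VALUE[mask].sum()

# Q-values per state (state = tuple of the selection mask); a DQN would
# replace this table with a neural network approximating Q(state, action).
Q = {}
def get_q(state):
    return Q.setdefault(state, np.zeros(N_FEATURES))

alpha, gamma, eps = 0.5, 0.9, 0.2
for episode in range(2000):
    mask = np.zeros(N_FEATURES, dtype=bool)
    state = tuple(mask)
    for _ in range(MAX_SELECT):
        q = get_q(state)
        avail = np.where(~mask)[0]           # only unselected features
        if rng.random() < eps:               # epsilon-greedy exploration
            a = rng.choice(avail)
        else:
            a = avail[np.argmax(q[avail])]
        mask[a] = True
        next_state = tuple(mask)
        next_q = get_q(next_state)
        # Standard Q-learning update toward reward + discounted best next value.
        q[a] += alpha * (reward(mask) + gamma * next_q.max() - q[a])
        state = next_state

# Greedy rollout: which subset does the learned policy select?
mask = np.zeros(N_FEATURES, dtype=bool)
state = tuple(mask)
for _ in range(MAX_SELECT):
    avail = np.where(~mask)[0]
    a = avail[np.argmax(get_q(state)[avail])]
    mask[a] = True
    state = tuple(mask)
print(sorted(np.where(mask)[0].tolist()))  # the two highest-value features
```

In the synthetic setup the greedy policy converges on the two most useful features (indices 1 and 3). The paper's actual system differs in scale, not shape: the state encodes which EEG/ECG features are selected, the reward comes from classifier performance, and a deep network approximates Q.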
Keywords: feature extraction; deep Q-network; electrocardiogram; electroencephalogram; optimal feature selection; vigilance estimation
MDPI and ACS Style

Seok, W.; Yeo, M.; You, J.; Lee, H.; Cho, T.; Hwang, B.; Park, C. Optimal Feature Search for Vigilance Estimation Using Deep Reinforcement Learning. Electronics 2020, 9, 142.

