Validating Deep Neural Networks for Online Decoding of Motor Imagery Movements from EEG Signals

1. Institute for Cognitive Systems, Technical University of Munich, 80333 Munich, Germany
2. Neuroscientific System Theory, Department of Electrical and Computer Engineering, Technical University of Munich, 80333 Munich, Germany
3. Research and Development, Integrated Research, Sydney 2060, Australia
* Author to whom correspondence should be addressed.
Sensors 2019, 19(1), 210; https://doi.org/10.3390/s19010210
Received: 25 September 2018 / Revised: 18 December 2018 / Accepted: 26 December 2018 / Published: 8 January 2019
Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) on motor imagery movements translate the subject’s motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory (LSTM); (2) a spectrogram-based convolutional neural network model (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG data collected from 20 subjects and on an existing dataset known as the 2b EEG dataset from “BCI Competition IV”. Overall, better classification performance was achieved with deep learning models compared to state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
Keywords: Brain-Computer Interfaces; spectrogram-based convolutional neural network model (pCNN); Deep Learning; electroencephalography (EEG); long short-term memory (LSTM); recurrent convolutional neural network (RCNN)
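To illustrate the end-to-end decoding idea described in the abstract, the following is a minimal PyTorch sketch of an LSTM classifier operating directly on raw EEG time series, with no hand-crafted features. This is a hypothetical illustration, not the authors' published architecture: the channel count (3, e.g., C3, Cz, C4), sampling rate (250 Hz), hidden size, and two-class output are assumed for demonstration only.

```python
import torch
import torch.nn as nn

class EEGLSTM(nn.Module):
    """Hypothetical end-to-end LSTM decoder for raw motor-imagery EEG.

    Illustrative sketch only; layer sizes are assumptions, not the
    architecture reported in the paper.
    """
    def __init__(self, n_channels=3, hidden_size=64, n_classes=2):
        super().__init__()
        # LSTM consumes the raw multichannel signal sample by sample.
        self.lstm = nn.LSTM(input_size=n_channels,
                            hidden_size=hidden_size,
                            batch_first=True)
        # Linear readout maps the final hidden state to class logits
        # (e.g., left-hand vs. right-hand imagery).
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time, channels) — raw EEG, no feature extraction.
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])  # classify from the last timestep

model = EEGLSTM()
# 4 trials, 250 samples (~1 s at an assumed 250 Hz), 3 channels.
x = torch.randn(4, 250, 3)
logits = model(x)
print(logits.shape)  # → torch.Size([4, 2])
```

In a real BCI pipeline, such a model would be trained per subject on cued motor-imagery trials and the logits thresholded to produce online control signals, as in the robotic-arm demonstration the abstract mentions.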
Tayeb, Z.; Fedjaev, J.; Ghaboosi, N.; Richter, C.; Everding, L.; Qu, X.; Wu, Y.; Cheng, G.; Conradt, J. Validating Deep Neural Networks for Online Decoding of Motor Imagery Movements from EEG Signals. Sensors 2019, 19, 210.
