
Classification of Vowels from Imagined Speech with Convolutional Neural Networks

Institute of Computer Science, University of Tartu, Ülikooli 18, 50090 Tartu, Estonia
Author to whom correspondence should be addressed.
Computers 2020, 9(2), 46;
Received: 12 May 2020 / Revised: 26 May 2020 / Accepted: 27 May 2020 / Published: 1 June 2020
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
Imagined speech is a relatively new electroencephalography (EEG) neuro-paradigm that has seen little use in brain–computer interface (BCI) applications. It can allow physically impaired patients to communicate and to control smart devices: the user imagines a desired command, and the system detects and executes it. The goal of this research is to verify previous classification attempts and then to design a new, noticeably less complex neural network (with fewer layers) that still achieves comparable classification accuracy. The classifiers are designed to distinguish between EEG signal patterns corresponding to imagined speech of different vowels and words. This research uses a dataset in which 15 subjects imagine saying the five main vowels (a, e, i, o, u) and six different words. Two previous studies on imagined-speech classification that used the same dataset are replicated, and the replicated results are compared. The main goal of this study is to take the convolutional neural network (CNN) model proposed in one of the replicated studies and make it substantially simpler while attempting to retain a similar accuracy. The pre-processing of the data is described, and a new CNN classifier with three different transfer learning methods is presented and used to classify the EEG signals. Classification accuracy is used as the performance metric. The new CNN, which uses half as many layers and simpler pre-processing, achieved a considerably lower accuracy than the replicated model, but still outperformed the initial model proposed by the authors of the dataset by a considerable margin. It is recommended that further studies on classifying imagined speech use more data and more powerful machine learning techniques. Transfer learning proved beneficial and should be used to improve the effectiveness of neural networks.
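The abstract describes a CNN operating on multi-channel EEG epochs but does not specify the architecture, channel count, epoch length, or filter sizes, so every value in the following sketch is a hypothetical assumption. It illustrates, in plain NumPy, the shape of the computation in one convolution–pooling block followed by a linear read-out over the five vowel classes:

```python
import numpy as np

# Illustrative sketch only: the paper's actual CNN architecture and
# pre-processing are not given in the abstract. All shapes and
# parameters below are hypothetical.

rng = np.random.default_rng(0)

# Simulated EEG epoch: 14 channels x 128 time samples (assumed values).
eeg = rng.standard_normal((14, 128))

def conv1d(x, kernels):
    """Valid-mode 1D convolution over the time axis.

    x: (channels, time); kernels: (filters, channels, width).
    Returns (filters, time - width + 1).
    """
    f, c, w = kernels.shape
    t = x.shape[1] - w + 1
    out = np.zeros((f, t))
    for i in range(f):
        for j in range(t):
            out[i, j] = np.sum(kernels[i] * x[:, j:j + w])
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size):
    """Non-overlapping max pooling along the time axis."""
    t = (x.shape[1] // size) * size
    return x[:, :t].reshape(x.shape[0], -1, size).max(axis=2)

# One conv block: 8 temporal filters of width 5, ReLU, pool by 4.
kernels = rng.standard_normal((8, 14, 5)) * 0.1
features = max_pool(relu(conv1d(eeg, kernels)), 4)   # -> (8, 31)

# Flatten and apply a linear read-out over the 5 vowel classes.
w_out = rng.standard_normal((5, features.size)) * 0.01
logits = w_out @ features.flatten()
pred = int(np.argmax(logits))   # predicted vowel index, 0..4
```

A real classifier would stack several such blocks and learn the filter and read-out weights by backpropagation; this untrained forward pass only shows how temporal convolution and pooling reduce a raw EEG epoch to class scores.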
Keywords: EEG; imagined speech; machine learning; convolutional neural networks; transfer learning
MDPI and ACS Style

Tamm, M.-O.; Muhammad, Y.; Muhammad, N. Classification of Vowels from Imagined Speech with Convolutional Neural Networks. Computers 2020, 9, 46.

