Open Access Article

Identifying Single Trial Event-Related Potentials in an Earphone-Based Auditory Brain-Computer Interface

by Eduardo Carabez *,†, Miho Sugi, Isao Nambu and Yasuhiro Wada
Department of Electrical Engineering, Nagaoka University of Technology, 1603-1 Kamitomioka, Nagaoka, Niigata 940-2188, Japan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Academic Editor: Vesa Välimäki
Appl. Sci. 2017, 7(11), 1197; https://doi.org/10.3390/app7111197
Received: 20 October 2017 / Accepted: 17 November 2017 / Published: 21 November 2017
(This article belongs to the Special Issue Sound and Music Computing)
As brain-computer interfaces (BCIs) must provide reliable ways for end users to accomplish a specific task, methods that best translate users' intentions are constantly being explored. In this paper, we propose and test a number of convolutional neural network (CNN) structures to identify and classify single-trial P300 responses in electroencephalogram (EEG) recordings from an auditory BCI. The recorded data correspond to nine subjects who took part in a series of experimental sessions in which auditory stimuli following the oddball paradigm were presented via earphones from six virtual directions at time intervals of 200, 300, 400 and 500 ms. Using three different approaches to the pooling process, we report the average accuracy for 18 CNN structures. The results obtained for most of the CNN models show a clear improvement over past studies in similar contexts, as well as over other commonly used classifiers. We found that models that consider data from both the time and space domains, and those that use overlapping pooling, usually offer better results regardless of the number of layers. Additionally, patterns of improvement can be observed with single-layered CNN models.
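For illustration, the listing below is a minimal sketch of a single-layer spatio-temporal CNN with overlapping pooling for binary (target vs. non-target) single-trial P300 classification, written in PyTorch. The electrode count (64), epoch length (128 samples), filter count and kernel sizes are hypothetical assumptions for the sketch, not the configurations evaluated in the paper.

import torch
import torch.nn as nn

class SingleTrialP300CNN(nn.Module):
    """Illustrative single-layer CNN for single-trial P300 detection.

    Input shape (batch, 1, n_channels, n_samples) is an assumption;
    the abstract does not specify the electrode or sample counts.
    """
    def __init__(self, n_channels=64, n_samples=128):
        super().__init__()
        # Spatio-temporal convolution: spans all electrodes (space) and a short time window
        self.conv = nn.Conv2d(1, 16, kernel_size=(n_channels, 10))
        # Overlapping pooling along time: stride (2) smaller than the pooling width (4)
        self.pool = nn.MaxPool2d(kernel_size=(1, 4), stride=(1, 2))
        # Length of the time axis after convolution and overlapping pooling
        pooled = (n_samples - 10 + 1 - 4) // 2 + 1
        self.fc = nn.Linear(16 * pooled, 2)  # two outputs: target vs. non-target

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = self.pool(x)
        return self.fc(x.flatten(1))

# Example: a batch of 8 synthetic single-trial EEG epochs
model = SingleTrialP300CNN()
scores = model(torch.randn(8, 1, 64, 128))
print(scores.shape)  # torch.Size([8, 2])

The stride being smaller than the pooling width is what makes the pooling "overlapping" in the sense described above; setting the stride equal to the width would give conventional non-overlapping pooling, and stacking further convolution/pooling pairs would give the multi-layer variants compared in the study.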
Keywords: convolutional neural networks (CNN); auditory brain-computer interface (BCI); P300; virtual sound; electroencephalogram (EEG); pool strategies; classification
MDPI and ACS Style

Carabez, E.; Sugi, M.; Nambu, I.; Wada, Y. Identifying Single Trial Event-Related Potentials in an Earphone-Based Auditory Brain-Computer Interface. Appl. Sci. 2017, 7, 1197.

