Open Access Article
Appl. Sci. 2017, 7(11), 1197; https://doi.org/10.3390/app7111197

Identifying Single Trial Event-Related Potentials in an Earphone-Based Auditory Brain-Computer Interface

Department of Electrical Engineering, Nagaoka University of Technology, 1603-1 Kamitomioka, Nagaoka, Niigata 940-2188, Japan
These authors contributed equally to this work.
* Author to whom correspondence should be addressed.
Academic Editor: Vesa Valimaki
Received: 20 October 2017 / Accepted: 17 November 2017 / Published: 21 November 2017
(This article belongs to the Special Issue Sound and Music Computing)

Abstract

As brain-computer interfaces (BCIs) must provide reliable ways for end users to accomplish a specific task, methods to secure the best possible translation of the users' intentions are constantly being explored. In this paper, we propose and test a number of convolutional neural network (CNN) structures to identify and classify single-trial P300 responses in electroencephalogram (EEG) readings of an auditory BCI. The recorded data correspond to nine subjects in a series of experimental sessions in which auditory stimuli following the oddball paradigm were presented via earphones from six different virtual directions at inter-stimulus intervals of 200, 300, 400, and 500 ms. Using three different approaches for the pooling process, we report the average accuracy for 18 CNN structures. The results obtained for most of the CNN models show clear improvement over past studies in similar contexts, as well as over other commonly used classifiers. We found that the models that consider data from both the time and space domains, and those that overlap windows in the pooling process, usually offer better results regardless of the number of layers. Additionally, patterns of improvement with single-layered CNN models can be observed.
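The distinction between overlapping and non-overlapping pooling mentioned above can be sketched with a minimal example. The function below is a hypothetical illustration, not the authors' actual architecture: it applies 1-D max pooling to a feature trace from a single EEG channel, where choosing a stride smaller than the window length produces overlapping pools.

```python
def max_pool_1d(trace, window, stride):
    """Slide a max-pooling window of length `window` over `trace`
    with step `stride`; stride < window yields overlapping pools."""
    return [max(trace[i:i + window])
            for i in range(0, len(trace) - window + 1, stride)]

# Toy feature trace from one EEG channel (values are illustrative).
signal = [0.1, 0.9, 0.2, 0.4, 0.8, 0.3, 0.5, 0.7]

# Non-overlapping pooling: window 2, stride 2.
print(max_pool_1d(signal, window=2, stride=2))  # [0.9, 0.4, 0.8, 0.7]

# Overlapping pooling: window 3, stride 2 -- adjacent pools share samples,
# so sharp transients (such as a P300 peak) are less likely to be split
# across a pool boundary.
print(max_pool_1d(signal, window=3, stride=2))  # [0.9, 0.8, 0.8]
```

In a full 2-D CNN the same idea applies along both the time axis (samples) and the space axis (electrodes), which is the distinction the abstract draws between models that pool over one or both domains.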
Keywords: convolutional neural networks (CNN); auditory brain-computer interface (BCI); P300; virtual sound; electroencephalogram (EEG); pool strategies; classification
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Share & Cite This Article

MDPI and ACS Style

Carabez, E.; Sugi, M.; Nambu, I.; Wada, Y. Identifying Single Trial Event-Related Potentials in an Earphone-Based Auditory Brain-Computer Interface. Appl. Sci. 2017, 7, 1197.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Appl. Sci. EISSN 2076-3417, published by MDPI AG, Basel, Switzerland.