Sensors 2018, 18(5), 1483;

Use of the Stockwell Transform in the Detection of P300 Evoked Potentials with Low-Cost Brain Sensors
Tecnológico Nacional de México-CENIDET, Interior Internado Palmira S/N, Col. Palmira, Cuernavaca, Morelos, C.P. 62490, México
Tecnológico Nacional de México-Instituto Tecnológico de Orizaba, Av. Oriente 9 N° 852, Col. Emiliano Zapata, Orizaba, C.P. 94320, México
Author to whom correspondence should be addressed.
Received: 23 February 2018 / Accepted: 21 April 2018 / Published: 9 May 2018


An evoked potential is neuronal activity that originates when a stimulus is presented. Its detection can be achieved with various brain-signal processing techniques. One of the most studied evoked potentials is the P300 brain wave, which usually appears between 300 and 500 ms after the stimulus. Currently, the detection of P300 evoked potentials is of great importance due to its unique properties, which allow the development of applications such as spellers, lie detectors, and the diagnosis of psychiatric disorders. The present study was developed to demonstrate the usefulness of the Stockwell transform in the process of identifying P300 evoked potentials using a low-cost electroencephalography (EEG) device with only two brain sensors. The acquisition of signals was carried out using the Emotiv EPOC® device, a wireless EEG headset. In the feature extraction, the Stockwell transform was used to obtain time-frequency information. The algorithms of linear discriminant analysis and a support vector machine were used in the classification process. The experiments were carried out with 10 participants: healthy men with an average age of 25.3 years. In general, a good performance (75–92%) was obtained in identifying P300 evoked potentials.
P300 evoked potentials; Stockwell transform; electroencephalograph; brain-computer interface; non-invasive brain sensors; signal processing; wireless device

1. Introduction

In recent years, technological progress has allowed brain-computer interfaces (BCI) to be used more frequently. Their main purpose is to control devices by means of brain signals. This is of great relevance in the area of rehabilitation because it provides an alternative method of communication for people with motor disabilities caused by conditions such as amyotrophic lateral sclerosis, Becker muscular dystrophy, Duchenne muscular dystrophy, Guillain-Barré syndrome, quadriplegia, brain injury, spinal cord injury, and so forth. BCI systems have been developed that allow the detection of mental fatigue [1,2], movement of joints [3], imaginary movement [4,5,6], mental tasks [7], emotions [8], and more. EEG signals are electrical potentials produced by a set of neurons when a brain process is performed. They are obtained directly from the scalp using an electroencephalograph. These signals are considered stochastic because they have great variability and a low signal-to-noise ratio. At present, several types of EEG signals have been classified, such as the sensorimotor rhythm (SMR) [9], slow cortical potential (SCP) [10], event-related potential (ERP) [11], and steady-state visual evoked potential (SSVEP) [12], among others.
The P300 wave is an ERP which is associated with cognition. It is a positive deflection of the electric potential which is generated approximately 300–500 ms after an infrequent stimulus related to a specific event [13]. It is most evident in the delta and theta frequency bands [14,15]. The stimuli can be visual [16], auditory [17], or somatosensory [18]. It has been shown that the less probable the stimulus, the greater the amplitude of the response peak [19]. The P300 evoked potential has been used in applications such as lie detectors, spellers, and the diagnosis of psychiatric disorders [20].
The P300 speller is one of the most commonly-used applications in BCI systems. This application allows the selection and display of different characters on a digital screen through the detection of P300 evoked potentials generated from visual stimuli. It was proposed by Farwell and Donchin in 1988 [21]. It has been reported that the electrodes PO7, PO8, Fz, Cz, Pz, and Oz are efficient in detecting P300 evoked potentials, with these regions being associated with memory, attention, and visual processes [22].
Currently, there are several methodologies that allow the detection of P300 evoked potentials [23,24,25]. Algorithms such as the wavelet transform extract patterns from the EEG signal in a time-frequency distribution and have performed excellently in classification [13]. However, most of the methodologies developed use high-resolution professional EEG equipment with several acquisition electrodes, which is not feasible for institutions that lack the resources to acquire this equipment. Recently, technological advancement has allowed the development of portable EEG devices that are economical compared to professional EEG equipment. However, these portable devices acquire lower-quality EEG signals with greater noise. For this reason, it is necessary to use appropriate methods to obtain an optimal classification performance.
In this study, different algorithms were investigated to define those that are more efficient at classifying P300 evoked potentials obtained using Emotiv EPOC®, a wireless EEG device. This equipment was manufactured by the company EMOTIV located in San Francisco, USA. The Stockwell transform was used as a feature extractor, as other studies have reported it has a good time-frequency resolution and is extensively used for the analysis of non-stationary signals, such as EEG signals [26]. In addition, the classifiers of linear discriminant analysis (LDA) and the support vector machine (SVM) were used, of which the SVM classifier performed better in the detection of P300 evoked potentials.
This article is organized as follows: Section 2 describes some works related to BCI systems, Section 3 details the characteristics of the Stockwell transform, Section 4 outlines the materials and methods used in this study, Section 5 analyzes the data and results and, finally, in Section 6 the discussion is presented.

2. Related Work

Some important features of the EEG signal are hidden in the time domain, so several investigations analyze the signal in the frequency domain [27,28]. The Fourier transform is a useful tool in the study of stationary signals. It converts a signal from the time domain into its frequency-domain equivalent. The Fourier transform is defined as:
X(f) = \int_{-\infty}^{+\infty} h(t)\, e^{-j 2 \pi f t}\, dt,
where h(t) represents the signal in the time domain, f is the frequency, t is the time, and X(f) is the signal in the frequency domain. In the paper by Güneysu et al. [29], SSVEPs were induced by groups of light-emitting diodes operating at different frequencies (7, 9, 11, and 15 Hz). Each subject focused on a specific group, and the dominant frequency component was then detected with the fast Fourier transform (FFT) and a Gaussian model. The detection performance was 75% on average. In another study [30], the FFT was used to detect mental commands in order to control a wheelchair, and the performance obtained was 76% on average.
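To make the approach concrete, the sketch below locates the dominant frequency of a synthetic, noisy 11 Hz tone with the FFT. The sampling rate, noise level, and 7–15 Hz search band are illustrative assumptions, not parameters from [29]:

```python
import numpy as np

fs = 128                      # sampling frequency (Hz), same rate used later in this study
t = np.arange(0, 4, 1 / fs)   # 4 s of signal -> 0.25 Hz frequency resolution
rng = np.random.default_rng(0)
# An 11 Hz "flicker response" buried in noise
signal = np.sin(2 * np.pi * 11 * t) + 0.5 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal))       # magnitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)    # frequency axis (Hz)

# Restrict the search to the stimulation band (7-15 Hz)
band = (freqs >= 7) & (freqs <= 15)
dominant = freqs[band][np.argmax(spectrum[band])]
print(dominant)  # 11.0
```

Because the tone lies exactly on the 0.25 Hz frequency grid, the peak bin recovers it exactly; an off-grid frequency would be found only to within one bin.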
The drawback of the Fourier transform is that it does not contain temporal information. Therefore, its use is not recommended for the analysis of non-stationary signals, where the frequency varies with time. The short-time Fourier transform (STFT) solves this problem by adding a window function to the Fourier transform. This provides a local spectrum that allows the analysis of the frequency in different time intervals. The equation is defined as follows:
\mathrm{STFT}(\tau, f) = \int_{-\infty}^{+\infty} h(t)\, g(\tau - t)\, e^{-j 2 \pi f t}\, dt,
where h(t) is the signal, f is the frequency, and g(τ − t) is the window function. In the paper by Phothisonothai et al. [31], the STFT showed that the coherence difference in the theta and alpha bands was statistically significant, and the duty cycle was suggested as a feature for SSVEP-based applications. The main drawback of the STFT is that the resolution in time and frequency remains constant, because the extension of the window function remains fixed throughout the analysis.
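The fixed-resolution trade-off can be seen in a short sketch (using SciPy's stft as an illustrative tool; the test signal and window length are assumptions): a single 64-sample Hann window sets both the 2 Hz frequency resolution and the 0.5 s time resolution for the entire analysis:

```python
import numpy as np
from scipy.signal import stft

fs = 128
t = np.arange(0, 4, 1 / fs)
# A non-stationary signal: 4 Hz for the first 2 s, then 20 Hz
x = np.where(t < 2, np.sin(2 * np.pi * 4 * t), np.sin(2 * np.pi * 20 * t))

# One 64-sample (0.5 s) window fixes BOTH resolutions for the whole
# analysis: 2 Hz in frequency and 0.5 s in time.
f, tau, Zxx = stft(x, fs=fs, nperseg=64)
power = np.abs(Zxx)

# Dominant frequency in an early frame (tau = 0.5 s) and a late one (tau = 3.5 s)
early = f[np.argmax(power[:, 2])]
late = f[np.argmax(power[:, -3])]
print(early, late)  # 4.0 20.0
```

Unlike the plain Fourier transform, the local spectra correctly place the 4 Hz content early in the signal and the 20 Hz content late, at the cost of the fixed window-length trade-off.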
The wavelet transform is used in the analysis of non-stationary signals. It represents the signal in its frequency components during a time interval. It is formed by a base function (mother wavelet) that can be modified in its scale and translation factor. The continuous wavelet transform (CWT) is defined as follows:
W(s, \tau) = \int_{-\infty}^{+\infty} h(t)\, \frac{1}{\sqrt{s}}\, \psi\!\left( \frac{t - \tau}{s} \right) dt,
where h(t) represents the signal, ψ(t) is the mother wavelet function, s is the scale factor, and τ is the translation factor. The function ψ(t) expands when s > 1 and contracts when s < 1. The relationship between scale and frequency is inverse; high scales correspond to low frequencies and low scales correspond to high frequencies.
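The definition above can be evaluated directly. The following didactic sketch (numpy only, with a Ricker mother wavelet and illustrative scale values) demonstrates the inverse scale-frequency relationship:

```python
import numpy as np

def ricker(t):
    # Mexican-hat mother wavelet (second derivative of a Gaussian)
    return (1 - t**2) * np.exp(-t**2 / 2)

def cwt(h, scales, dt):
    # Direct (unoptimized) evaluation of
    # W(s, tau) = (1/sqrt(s)) * integral h(t) psi((t - tau)/s) dt
    t = np.arange(h.size) * dt
    W = np.empty((len(scales), h.size))
    for i, s in enumerate(scales):
        for j, tau in enumerate(t):
            W[i, j] = np.sum(h * ricker((t - tau) / s)) * dt / np.sqrt(s)
    return W

fs = 64
t = np.arange(0, 2, 1 / fs)
scales = np.linspace(0.02, 0.3, 15)       # scale factors, in seconds
low = cwt(np.sin(2 * np.pi * 2 * t), scales, dt=1 / fs)   # 2 Hz tone
high = cwt(np.sin(2 * np.pi * 8 * t), scales, dt=1 / fs)  # 8 Hz tone

# The scale with the most energy is larger for the lower frequency
best_low = scales[np.argmax((low**2).sum(axis=1))]
best_high = scales[np.argmax((high**2).sum(axis=1))]
print(best_low > best_high)  # True: high scales <-> low frequencies
```

This direct evaluation is O(N²) per scale and only meant to illustrate the formula; practical implementations use FFT-based convolution.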
The wavelet transform has been used in several studies for the detection of P300 evoked potentials. In the paper by Motlagh et al. [32], the P300 components were detected with an efficiency greater than 90%. The continuous and discrete wavelet transforms were used as feature extractors and the SVM algorithm as a classifier. The EEG signals used were extracted from the “BCI competition 2003” database.
In research by Guo et al. [13], the discrete wavelet transform was used together with the Fisher criterion, obtaining a performance higher than 90% in the detection of P300 evoked potentials. In the paper by Costagliola et al. [33], different mother wavelets were compared, and it was concluded that the Daubechies 4, Biorthogonal 2.4, Biorthogonal 4.4, Biorthogonal 5.5, Coiflets 2, Symlets 4, and Symlets 6 functions provide greater efficiency in the detection of P300 components.
The Gabor transform is used to estimate the time-frequency distribution of a signal. It has been used in the analysis of EEG signals with patients with epilepsy [34]. The equation is defined as follows:
G(\tau, f) = \int_{-\infty}^{+\infty} h(t)\, w(\tau - t)\, e^{-j 2 \pi f t}\, dt,
where h(t) is the signal and f is the frequency. The expression w(τ − t) represents a window function that modifies its extension according to the values of the variance σ:
w(t) = \frac{1}{\sqrt{2 \pi \sigma^2}}\, e^{-\frac{t^2}{2 \sigma^2}}
The Stockwell transform has been used in image processing and in the biomedical field [35]. In research by Senapati et al. [36] and Upadhyay et al. [37] it was shown to be an efficient tool for removing ocular artifacts from EEG signals. In the paper by Vijean et al. [7] it was found to be effective in detecting mental tasks.

3. Stockwell Transform

The Stockwell transform allows the analysis of a signal in a time-frequency distribution. It has been shown to have a better resolution than the Gabor transform [38]. It is defined as:
S(\tau, f) = \int_{-\infty}^{+\infty} h(t)\, w(\tau - t, f)\, e^{-j 2 \pi f t}\, dt,
where h(t) represents the signal and w(τ − t, f) is generally defined as a normalized, positive Gaussian function [36]:
w(\tau - t, f) = \frac{|f|}{\sqrt{2 \pi}}\, e^{-\frac{(\tau - t)^2 f^2}{2}}
The window function w(τ − t, f) shortens as the frequency increases and lengthens as the frequency decreases. The Stockwell transform therefore provides a frequency-dependent resolution and maintains a direct relationship with the Fourier spectrum, obtaining local phase information with an absolute reference [39].
Phase “with absolute reference” means that the phase information is always referenced to time t = 0. This condition holds in each of the samples obtained from the Stockwell transform. Averaging the Stockwell transform over time yields the Fourier spectrum [40].
One of the drawbacks of the Stockwell transform is the redundant information of the time-frequency space that it generates, which causes greater consumption of computational resources.
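A minimal discrete Stockwell transform can be written in a few lines using the standard frequency-domain formulation (a didactic numpy sketch; the test signal is illustrative). The sketch also checks the property mentioned above, that the time average of the transform recovers the Fourier spectrum:

```python
import numpy as np

def stockwell(h):
    # For each frequency index n > 0:
    #   S[n, :] = IFFT( H[m + n] * exp(-2 * pi^2 * m^2 / n^2) ),
    # where H is the FFT of h and m are symmetric frequency offsets.
    # The n = 0 row is the signal mean.
    N = h.size
    H = np.fft.fft(h)
    m = np.arange(N)
    m = np.where(m <= N // 2, m, m - N)   # symmetric frequency offsets
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = h.mean()
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2 * np.pi**2 * m**2 / n**2)   # Gaussian window in frequency
        S[n, :] = np.fft.ifft(np.roll(H, -n) * gauss)
    return S

fs = 128
t = np.arange(fs) / fs                    # 1 s of signal at 128 Hz
x = np.where(t < 0.5, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 20 * t))
S = stockwell(x)
amp = np.abs(S)                           # instantaneous amplitude

# Time average of S equals the Fourier spectrum (scaled by 1/N)
print(np.allclose(S.mean(axis=1), np.fft.fft(x)[:65] / x.size))  # True
# The 20 Hz energy localizes in the second half of the signal
print(amp[20, 96] > amp[20, 32])  # True
```

Note the redundancy mentioned above: the output is an (N/2 + 1) × N complex matrix for an N-sample real signal, which is why the transform is comparatively expensive in memory and computation.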

4. Materials and Methods

4.1. Data Acquisition

The EEG signals of this study were acquired in the electronics department of the Centro Nacional de Investigación y Desarrollo Tecnológico (CENIDET). The 10 participants were men in good health with an average age of 25.3 years. The same experimental conditions were established for each subject so that the same number of samples was obtained from each of them.
The Emotiv EPOC® commercial electroencephalograph was used, which contains 16 electrodes positioned according to the international 10–20 system. The EEG signals are obtained from 14 sensors located at positions AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4. The other two sensors serve as references and are located at positions P3 and P4. Figure 1 shows the device and the distribution of its electrodes.
The EEG signals obtained had a sampling frequency of 128 Hz, and only the electrodes O1 and O2 and the references (R1 and R2) were used. The OpenViBE® [41] software, version 0.17.1, was used to obtain the P300 evoked potentials. This software allows the manipulation of experimental scenarios such as the P300 speller. In this study, we used the application known as “P300: Magic Card®” [42], which is similar to the P300 speller except that images are displayed instead of characters.
During the development of the experiment, each participant was asked to sit in front of the monitor at a distance of 1 m. Then, an array of images in three rows and four columns was projected onto the screen. The matrix of images is shown in Figure 2.
At the beginning of each trial, the subject was asked to focus his attention on a particular image. Then, these images began to appear and disappear at random. In one trial, each image appeared 12 times for a fixed duration of 200 ms, with a rest period of 100 ms between images. The complete experiment consisted of six trials; in total, each image of the matrix appeared 72 times, meaning that each subject generated a total of 72 evoked potentials. Each participant was asked to avoid blinking and eye movements as much as possible to prevent noise in the EEG signal. In a previous work [43], this same method of P300 evoked potential acquisition was used and obtained excellent results. The complete EEG signal was acquired during an online process through the OpenViBE® software. Then, in an offline process, the OpenViBE® software was used again to divide the signal into two groups: in Group 1, the signals were related to P300-type events, and in Group 2 they were not. To obtain Group 1, samples of the EEG signal (epochs of 700 ms) were taken at each instant that the target image was presented. The rest of the signal was considered Group 2. The data were then stored for offline processing with Matlab® software version R2012a. Figure 3 shows the complete process of the BCI system development.
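The epoching step described above can be sketched as follows; the recording length, onset times, and variable names are illustrative, not taken from the OpenViBE® pipeline:

```python
import numpy as np

fs = 128                                  # Emotiv EPOC sampling rate (Hz)
epoch_len = int(0.7 * fs)                 # 700 ms epoch -> 89 samples

rng = np.random.default_rng(1)
eeg = rng.standard_normal(60 * fs)        # 60 s of one channel (stand-in for O1)
onsets = np.array([5.0, 12.3, 20.1, 33.8, 47.2])   # target-stimulus onsets (s)

# Group 1: a 700 ms epoch cut at each target onset
p300_epochs = np.stack([eeg[int(o * fs): int(o * fs) + epoch_len]
                        for o in onsets])

# Group 2: everything outside the target epochs
mask = np.ones(eeg.size, dtype=bool)
for o in onsets:
    mask[int(o * fs): int(o * fs) + epoch_len] = False
non_p300 = eeg[mask]
print(p300_epochs.shape)  # (5, 89)
```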

4.2. Feature Extraction

From the acquired EEG signal, the mean was subtracted. Then, the Stockwell transform was used to obtain a time-frequency distribution of the EEG signal. Subsequently, the samples obtained were divided into different frequency bands: 1–5 Hz (delta), 5–8 Hz (theta), 8–15 Hz (alpha), 15–30 Hz (beta), and 30–64 Hz (gamma), in order to compare the identification success rate in each of these intervals. Each of the frequency bands was averaged, and five signals representative of the different cerebral rhythms (delta, theta, alpha, beta, and gamma) were obtained. Then, the signals were divided into time intervals (2 s duration with a 0.25 s displacement). Subsequently, different mathematical functions were applied, obtaining different feature vectors.
The mathematical functions used were the standard deviation, kurtosis, asymmetry coefficient, area under the curve, and average power. The different feature vectors obtained were used in the training and classification phase.
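A possible implementation of this windowed feature extraction is sketched below. The exact definition of the area under the curve is an assumption (here, the rectified signal integrated by a rectangle rule):

```python
import numpy as np
from scipy.stats import kurtosis, skew

def feature_vector(band_signal, fs=128, win=2.0, step=0.25):
    # Slide a 2 s window with a 0.25 s displacement over one averaged
    # band signal and compute the five features named above per window.
    w, s = int(win * fs), int(step * fs)
    feats = []
    for start in range(0, band_signal.size - w + 1, s):
        x = band_signal[start:start + w]
        feats.append([
            np.std(x),               # standard deviation
            kurtosis(x),             # kurtosis
            skew(x),                 # asymmetry coefficient
            np.abs(x).sum() / fs,    # area under the curve (assumed: rectified)
            np.mean(x ** 2),         # average power
        ])
    return np.array(feats)

rng = np.random.default_rng(2)
fv = feature_vector(rng.standard_normal(10 * 128))   # 10 s stand-in band signal
print(fv.shape)  # (33, 5): 33 windows, 5 features each
```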

4.3. Classification

In this study, the linear discriminant analysis (LDA) and support vector machine (SVM) classifiers were used. LDA is one of the most commonly used classification algorithms in BCI systems [44,45], as it is a simple but accurate method for the identification of EEG signals. The LDA algorithm determines the optimal axes for classification by increasing the variance between classes and decreasing the variance within each class [46].
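A two-class Fisher LDA of the kind described can be sketched in a few lines of numpy (the toy data are illustrative, not EEG features):

```python
import numpy as np

def lda_fit(X0, X1):
    # w maximizes between-class variance over within-class variance;
    # classify by projecting onto w and thresholding at the midpoint
    # of the projected class means.
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = w @ (mu0 + mu1) / 2
    return w, b

def lda_predict(X, w, b):
    return (X @ w > b).astype(int)

# Toy demonstration with two separable Gaussian clouds
rng = np.random.default_rng(3)
X0 = rng.standard_normal((100, 2))
X1 = rng.standard_normal((100, 2)) + [3, 3]
w, b = lda_fit(X0, X1)
acc = np.r_[lda_predict(X0, w, b) == 0, lda_predict(X1, w, b) == 1].mean()
print(acc)  # close to 1.0 for well-separated classes
```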
The SVM algorithm is robust in binary classification and is considered one of the most accurate classifiers for detecting P300 evoked potentials [25]. The SVM separates the data of two classes by finding a hyperplane with the maximum possible margin [47]. SVM can use different kernel functions, the most commonly used being [48]:
  • Radial basis function, K(x_i, x_j) = e^{-\frac{\| x_i - x_j \|^2}{2 \sigma^2}}, \quad \sigma \neq 0.
  • Polynomial, K(x_i, x_j) = (x_i \cdot x_j + 1)^d, \quad d > 0.
  • Sigmoidal, K(x_i, x_j) = \tanh(k\, x_i \cdot x_j - \delta).
  • Cauchy, K(x_i, x_j) = \left( 1 + \frac{\| x_i - x_j \|^2}{2 \sigma^2} \right)^{-1}, \quad \sigma \neq 0.
  • Logarithmic, K(x_i, x_j) = -\log\left( \| x_i - x_j \|^d + c \right), \quad d > 0.
In the classification process, only 100 s of the signal acquired in the experiment were used (50 s from each group). The feature vector obtained was divided into two equal parts: training and testing. Then, the training vector was divided into two groups (P300 and non-P300), which were used to train the classifier.
The test vector was formed with samples of type P300 and non-P300 distributed alternately (10 segments of 5 s) and was used to verify the efficiency of the classifier. The performance was established according to the number of samples correctly classified in the P300 and non-P300 groups with the LDA and SVM classifiers. The kernel functions used in the SVM were linear, quadratic, and radial basis. In the Gaussian radial basis function (RBF) kernel, a scale factor (sigma) of 1 and a penalty parameter (C) of 1 were used. The complete methodology with the different algorithms used in this study is shown in Figure 4.
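The classification step can be sketched with scikit-learn's SVC (an assumption: the study itself used Matlab®, and the feature vectors below are synthetic stand-ins). With the reported sigma = 1, the scikit-learn RBF width is gamma = 1/(2·sigma²) = 0.5:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
# Stand-in 2D feature vectors for the two groups (P300 = 1, non-P300 = 0)
X_p300 = rng.standard_normal((50, 2)) + [2, 2]
X_rest = rng.standard_normal((50, 2))
X = np.vstack([X_p300, X_rest])
y = np.r_[np.ones(50), np.zeros(50)]

# Half for training, half for testing, as in the study
idx = rng.permutation(100)
train, test = idx[:50], idx[50:]

# RBF kernel with C = 1 and sigma = 1 (i.e., gamma = 0.5)
clf = SVC(kernel="rbf", C=1, gamma=0.5).fit(X[train], y[train])
accuracy = clf.score(X[test], y[test])
print(accuracy)  # fraction of test samples correctly classified
```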

5. Results

The results showed a higher classification performance in the frequency ranges of 1–5 and 5–8 Hz, corresponding to the delta and theta rhythms. Figure 5 shows the averaged EEG signals of the subjects for the two conditions (Target/Non-Target).
The average Target signal shows a negative component (P1) with a minimum value of −2.384 μV at 141.6 ms and two positive components (P2 and P3) with maximum values of 1.44 and 2.211 μV at 251.7 and 495.5 ms, respectively. The EEG signals shown were filtered to a frequency band of 1–8 Hz. Table 1 shows the average peak values of the components P1, P2, and P3 obtained at different times of occurrence. The Target signals (epochs of 700 ms) that did not clearly present these components were excluded from the average.
The average amplitudes obtained were lower for P1 and higher for P2 and P3 compared to the average amplitudes of Figure 5. The standard deviations of the amplitudes (4.03, 3.26, and 4.17 μV) show that consistent values were obtained at the peaks of the components P1, P2, and P3, respectively. By means of the standard deviations of the times of occurrence of the amplitude peaks, the time ranges of 113.28–205.24, 203.14–329.2, and 402.12–555.18 ms were established for the components P1, P2, and P3, respectively. Component P3 shows a positive deflection of the electric potential in the time range of 402.12–555.18 ms, which is characteristic of the P300 evoked potential. Figure 6 shows the Stockwell spectrograms in the frequency ranges of 1–5 and 5–8 Hz obtained from the EEG signal of Subject 3.
The Y-axis represents the frequency distribution (Hertz) and the X-axis represents the time in seconds. Figure 6a shows a frequency range of 1–5 Hz representing the delta rhythm and Figure 6b shows a frequency range of 5–8 Hz representing the theta rhythm. Stockwell transform spectrograms were obtained from an EEG signal distributed alternately in 10 second segments with samples of P300 and non-P300. The color bar represents the instantaneous amplitude obtained by calculating the absolute value of the Stockwell transform. The blue color represents the low amplitudes and the red color the high amplitudes.
The feature vectors that obtained a better performance in the classifiers were the combinations of average power and area under the curve, and the asymmetry coefficient and standard deviation. Finally, the classifier with the best performance was SVM with the RBF kernel. The results shown in this study are based on the parameters that achieved the best performance. Figure 7 shows the result of the classification obtained from Subject 3, in the frequency range of 5–8 Hz, using the functions of asymmetry coefficient and standard deviation with the SVM algorithm.
In this particular case, 92% identification accuracy was obtained. The SVM algorithm divided the space into two groups by means of the training feature vectors. The blue area represents the P300 group and the red area represents the non-P300 group. In addition, blue and red triangles are displayed, which represent the test feature vectors of the P300 and non-P300 groups, respectively. Figure 8 shows the results of the classification obtained from Subject 2 in the frequency range of 5–8 Hz, using the functions of average power and area under the curve with the SVM algorithm.
In this particular case, 80% identification accuracy was obtained. The data could not be separated efficiently with a linear kernel. Therefore, to correctly separate the two classes of P300 and non-P300 (blue and red, respectively), the RBF kernel was used to create non-linear combinations of the original features and project them, through a mapping function, onto a higher-dimensional space in which they become linearly separable. The RBF kernel was used in each of the cases because it allowed a better separation of the two groups. Table 2 shows the methodologies that obtained the best performances with the different subjects.
The classification yields obtained for each of the subjects show that the parameters and algorithms used can correctly identify the P300 evoked potentials. The performance obtained in the frequency range of 1–5 Hz with the feature vectors of average power and area under the curve was 79.5% on average, and with the feature vectors of the asymmetry coefficient and standard deviation was 83.1% on average.
On the other hand, a better performance was obtained in the frequency range of 5–8 Hz with the feature vectors of the asymmetry coefficient and standard deviation (84.1% on average) than with the feature vectors of average power and area under the curve (80.5% on average). The highest percentages of classification were obtained in the frequency of 5–8 Hz, with the feature vectors of the asymmetry coefficient and standard deviation in Subjects 2 and 3 obtaining values of 90% and 92%, respectively.

6. Discussion

This study shows that the Stockwell transform is a useful algorithm for detecting P300 evoked potentials induced by visual stimuli. Identification was achieved with a commercial wireless electroencephalograph using only the channels O1 and O2. The electrodes were chosen based on other studies [22,49], which showed that the channels Fz, Cz, Pz, PO7, PO8, and Oz contain information that provides a better classification performance in the P300 speller. The Emotiv EPOC® device does not have any of the aforementioned electrodes; however, it has the channels O1 and O2, which are very close to the Oz channel of the 10–20 system. It is also important to mention that the chosen channels are located in the occipital area of the brain, which is associated with visual processes.
In general, an acceptable classification performance was obtained with the different subjects and the selected methods (above 75%). The highest percentage of classification obtained was 92%. It should be mentioned that other studies that involve the detection of P300 evoked potentials have obtained similar or greater performances; however, most of them use high-cost professional EEG equipment and several acquisition electrodes [50,51,52]. This is an important limitation in the development of research and applications of BCI systems, because institutions do not always have sufficient resources to obtain this equipment. For this reason, the Emotiv EPOC® device was used, which is a low-cost portable device. However, the signals obtained were of poorer quality and, therefore, had a lower signal-to-noise ratio. In addition, it is important to note that this portable device has a limited number of electrodes in a fixed distribution. Due to these deficiencies and limitations, it was necessary to implement a methodology that largely excluded noise, correctly extracted signal characteristics, and achieved an efficient identification of P300 evoked potentials.
Therefore, a time-frequency analysis was chosen, because such analyses are widely used with non-stationary signals. A good time-frequency distribution is only possible if a narrow window function is used during the analysis of high-frequency components of the signal and a wider window function is used during the analysis of low-frequency components. The window function of the Stockwell transform fulfills these requirements [38]. The Stockwell transform is a method of spectral localization that can be considered a generalization of the STFT and an extension of the CWT [20]. It also has a better resolution than the Gabor transform [38]. For this reason, the proposed BCI system included the Stockwell transform as a feature extractor, and it was shown that this technique allows the correct identification of P300 evoked potentials, even with low-cost equipment and when acquiring EEG signals from only the electrodes O1 and O2.
The frequency ranges that allowed better identification of the P300 evoked potentials were within the values of 1–5 and 5–8 Hz. This suggests that the P300 evoked potentials occur in the delta and theta brain rhythms. This has been found in other studies, and in papers by Kolev et al. [53] and Yordanova et al. [54] it was demonstrated by means of a time-frequency analysis that sub-components in the delta and theta bands coexist in the formation of P300 potentials.
In this study, the SVM classifier performed better than the LDA classifier, which suggests that the SVM algorithm is more suitable for the detection of P300 evoked potentials. SVM is an algorithm for pattern recognition that provides an excellent solution for discriminating between two classes. In Thulasidas’s work [55], it was used as a classifier in the P300 speller and achieved high performance. In the paper by Tayeb et al. [25], it is mentioned that several classification algorithms have been used for the detection of P300 evoked potentials, such as artificial neural networks, naive Gaussian Bayes, and the SVM; among them, the SVM algorithm is one of the most precise.
In conclusion, an adequate performance (in the range of 75–92%) was obtained in the detection of P300 evoked potentials by means of the Stockwell transform and using low-cost wireless EEG equipment with only two acquisition channels. For future work, a multidimensional analysis will be done with EEG signals from different electrodes. Algorithms, such as the two-dimensional Stockwell transform, will be used to improve performance in the identification of P300 evoked potentials.

Author Contributions

A.F.P.-V. coded all the Matlab® and OpenViBE® scripts, developed the experimental processes, and wrote the manuscript; C.D.G.-B. designed the methodology of the algorithms and supervised the experimental process; A.M.-S. designed part of the experimental process and taught how to use the EEG device correctly; and R.P.-G. participated in the design of the experimental process and provided valuable information to prepare this manuscript.

Funding
This research was funded by CONACYT grant number 600119 and TECNM grant number 6457.18-P.

Acknowledgments
The authors of this document would like to thank CONACYT (Consejo Nacional de Ciencia y Tecnología), TECNM (Tecnológico Nacional de México) and PRODEP, for the support provided during the development of this research.

Conflicts of Interest

The authors declare no conflict of interest.

References
  1. Zhang, X.; Li, J.; Liu, Y.; Zhang, Z.; Wang, Z.; Luo, D.; Zhou, X.; Zhu, M.; Salman, W.; Hu, G.; et al. Design of a fatigue detection system for high-speed trains based on driver vigilance using a wireless wearable EEG. Sensors 2017, 17, 486. [Google Scholar] [CrossRef] [PubMed]
  2. Li, G.; Chung, W.Y. A context-aware EEG headset system for early detection of driver drowsiness. Sensors 2015, 15, 20873–20893. [Google Scholar] [CrossRef] [PubMed]
  3. Marquez, L.A.P.; Munoz, G.R. Analysis and classification of electroencephalographic signals (EEG) to identify arm movements. In Proceedings of the 10th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, Mexico, 30 September–4 October 2013; pp. 138–143. [Google Scholar]
  4. Liu, Y.-H.; Huang, S.; Huang, Y.-D. Motor Imagery EEG Classification for Patients with Amyotrophic Lateral Sclerosis Using Fractal Dimension and Fisher’s Criterion-Based Channel Selection. Sensors 2017, 17, 1557. [Google Scholar] [CrossRef] [PubMed]
  5. Lee, D.; Park, S.H.; Lee, S.G. Improving the accuracy and training speed of motor imagery brain-computer interfaces using wavelet-based combined feature vectors and gaussian mixture model-supervectors. Sensors 2017, 17, 2282. [Google Scholar] [CrossRef] [PubMed]
  6. Lo, C.C.; Chien, T.Y.; Chen, Y.C.; Tsai, S.H.; Fang, W.C.; Lin, B.S. A wearable channel selection-based brain-computer interface for motor imagery detection. Sensors 2016, 16, 213. [Google Scholar] [CrossRef] [PubMed]
  7. Vijean, V.; Hariharan, M.; Saidatul, A.; Yaacob, S. Mental tasks classifications using S-transform for BCI applications. In Proceedings of the IEEE Conference on Sustainable Utilization and Development in Engineering and Technology (STUDENT), Piscataway, NJ, USA, 20–21 October 2011; pp. 69–73. [Google Scholar]
  8. Chai, X.; Wang, Q.; Zhao, Y.; Li, Y.; Liu, D.; Liu, X.; Bai, O. A fast, efficient domain adaptation technique for cross-domain electroencephalography(EEG)-based emotion recognition. Sensors 2017, 17, 1014. [Google Scholar] [CrossRef] [PubMed]
  9. Korik, A.; Sosnik, R.; Siddique, N.; Coyle, D. Imagined 3D hand movement trajectory decoding from sensorimotor EEG rhythms. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 4591–4596. [Google Scholar]
  10. Makary, M.M.; Bu-Omer, H.M.; Soliman, R.S.; Park, K.; Kadah, Y.M. Spectral Subtraction Denoising Preprocessing Block to Improve Slow Cortical Potential Based Brain–Computer Interface. J. Med. Biol. Eng. 2017, 38, 87–98. [Google Scholar] [CrossRef]
  11. Kim, K.; Lim, S.H.; Lee, J.; Choi, J.W.; Kang, W.S.; Moon, C. Joint maximum likelihood time delay estimation of unknown event-related potential signals for EEG sensor signal quality enhancement. Sensors 2016, 16, 891. [Google Scholar] [CrossRef] [PubMed]
  12. Floriano, A.; Diez, P.F.; Freire Bastos-Filho, T. Evaluating the Influence of Chromatic and Luminance Stimuli on SSVEPs from Behind-the-Ears and Occipital Areas. Sensors 2018, 18, 615. [Google Scholar] [CrossRef] [PubMed]
  13. Guo, S.; Lin, S.; Huang, Z. Feature extraction of P300s in EEG signal with discrete wavelet transform and fisher criterion. In Proceedings of the 8th International Conference on BioMedical Engineering and Informatics (BMEI), Shenyang, China, 14–16 October 2015; pp. 200–204. [Google Scholar]
  14. Schürmann, M.; Başar-Eroglu, C.; Kolev, V.; Başar, E. Delta responses and cognitive processing: Single-trial evaluations of human visual P300. Int. J. Psychophysiol. 2000, 39, 229–239. [Google Scholar] [CrossRef]
  15. Yordanova, J.; Rosso, O.A.; Kolev, V. A transient dominance of theta event-related brain potential component characterizes stimulus processing in an auditory oddball task. Clin. Neurophysiol. 2003, 114, 529–540. [Google Scholar] [CrossRef]
  16. Liu, Y.; Zhou, Z.; Hu, D. Comparison of stimulus types in visual P300 speller of brain-computer interfaces. In Proceedings of the 9th IEEE International Conference on Cognitive Informatics (ICCI), Beijing, China, 7–9 July 2010; pp. 273–279. [Google Scholar]
  17. Chang, M.; Makino, S.; Rutkowski, T.M. Classification improvement of P300 response based auditory spatial speller brain-computer interface paradigm. In Proceedings of the IEEE Region 10 Conference on TENCON, Xi’an, China, 22–25 October 2013; pp. 1–4. [Google Scholar]
  18. Kodama, T.; Makino, S. Convolutional Neural Network Architecture and Input Volume Matrix Design for ERP Classifications in a Tactile P300-Based Brain-Computer Interface. In Proceedings of the 39th IEEE International Conference on Engineering in Medicine and Biology Society (EMBC), Jeju, Korea, 11–15 July 2017; pp. 3814–3817. [Google Scholar]
  19. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279. [Google Scholar] [CrossRef] [PubMed]
  20. Jones, K.A.; Porjesz, B.; Chorlian, D.; Rangaswamy, M.; Kamarajan, C.; Padmanabhapillai, A.; Stimus, A.; Begleiter, H. S-transform time-frequency analysis of P300 reveals deficits in individuals diagnosed with alcoholism. Clin. Neurophysiol. 2006, 117, 2128–2143. [Google Scholar] [CrossRef] [PubMed]
  21. Farwell, L.A.; Donchin, E. Talking off the top of your head: Toward a mental prosthesis utilizing event-related brain potentials. Electroencephalogr. Clin. Neurophysiol. 1988, 70, 510–523. [Google Scholar] [CrossRef]
  22. Alvarado-Gonzalez, M.; Garduno, E.; Bribiesca, E.; Yanez-Suarez, O.; Medina-Banuelos, V. P300 Detection Based on EEG Shape Features. Comput. Math. Methods Med. 2016, 2016, 2029791. [Google Scholar] [CrossRef] [PubMed]
  23. Maddula, R.K.; Stivers, J.; Mousavi, M.; Ravindran, S.; de Sa, V.R. Deep Recurrent Convolutional Neural Networks for Classifying P300 BCI Signals. In Proceedings of the 7th Graz Brain-Computer Interface Conference, Graz, Austria, 18–22 September 2017. [Google Scholar]
  24. Onishi, A.; Takano, K.; Kawase, T.; Ora, H.; Kansaku, K. Affective stimuli for an auditory P300 brain-computer interface. Front. Neurosci. 2017, 11, 522. [Google Scholar] [CrossRef] [PubMed]
  25. Tayeb, S.; Mahmoudi, A.; Regragui, F.; Himmi, M.M. Efficient detection of P300 using Kernel PCA and support vector machine. In Proceedings of the Second World Conference on Complex Systems (WCCS), Agadir, Morocco, 10–12 November 2014; pp. 17–22. [Google Scholar]
  26. Hariharan, M.; Vijean, V.; Sindhu, R.; Divakar, P.; Saidatul, A.; Yaacob, S. Classification of mental tasks using Stockwell transform. Comput. Electr. Eng. 2014, 40, 1741–1749. [Google Scholar] [CrossRef]
  27. Grubov, V.V.; Sitnikova, E.Y.; Pavlov, A.N.; Khramova, M.V.; Koronovskii, A.A.; Hramov, A.E. Time-frequency analysis of epileptic EEG patterns by means of empirical modes and wavelets. Proc. SPIE 2014, 9448, 94481Q. [Google Scholar] [CrossRef]
  28. Sarker, T.; Paul, S.; Rayhan, A.; Zabir, I.; Shahnaz, C. Bi-spectral higher order statistics and time-frequency domain features for arithmetic task classification from EEG signals. In Proceedings of the IEEE International Conference on Imaging, Vision and Pattern Recognition (icIVPR), Dhaka, Bangladesh, 13–14 February 2017; pp. 1–4. [Google Scholar]
  29. Güneysu, A.; Akin, H.L. An SSVEP based BCI to control a humanoid robot by using portable EEG device. In Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 6905–6908. [Google Scholar]
  30. Swee, S.K.; You, L.Z. Fast Fourier analysis and EEG classification brainwave controlled wheelchair. In Proceedings of the 2nd International Conference on Control Science and Systems Engineering (ICCSSE), Singapore, 27–29 July 2016; pp. 20–23. [Google Scholar]
  31. Phothisonothai, M.; Watanabe, K. Time-frequency analysis of duty cycle changing on steady-state visual evoked potential: EEG recording. In Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Chiang Mai, Thailand, 9–12 December 2014; pp. 2010–2013. [Google Scholar]
  32. Motlagh, F.E.; Tang, S.H.; Motlagh, O. Combination of continuous and discrete wavelet coefficients in single-trial P300 detection. In Proceedings of the IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES), Langkawi, Malaysia, 17–19 December 2012; pp. 954–959. [Google Scholar]
  33. Costagliola, S.; Dal Seno, B.; Matteucci, M. Recognition and classification of P300s in EEG signals by means of feature extraction using wavelet decomposition. In Proceedings of the International Joint Conference on Neural Networks, Atlanta, GA, USA, 14–19 June 2009; pp. 597–603. [Google Scholar]
  34. Chen, L.; Zhao, E.; Wang, D.; Han, Z.; Zhang, S.; Xu, C. Feature extraction of EEG signals from epilepsy patients based on Gabor Transform and EMD decomposition. In Proceedings of the Sixth International Conference on Natural Computation (ICNC), Yantai, China, 10–12 August 2010; Volume 3, pp. 1243–1247. [Google Scholar]
  35. Brown, R.A.; Frayne, R. A fast discrete S-transform for biomedical signal processing. In Proceedings of the 30th Annual International Conference on Engineering in Medicine and Biology Society (EMBS), Vancouver, BC, Canada, 20–24 August 2008; Volume 2008, pp. 2586–2589. [Google Scholar]
  36. Senapati, K.; Kar, S.; Routray, A. A new technique for removal of ocular artifacts from EEG signals using S-transform. In Proceedings of the International Conference on Systems in Medicine and Biology (ICSMB), Kharagpur, India, 16–18 December 2010; pp. 113–116. [Google Scholar]
  37. Upadhyay, R.; Padhy, P.K.; Kankar, P.K. Ocular artifact removal from EEG signals using discrete orthonormal stockwell transform. In Proceedings of the Annual IEEE India Conference (INDICON), New Delhi, India, 17–20 December 2015; pp. 1–5. [Google Scholar]
  38. Shekar, B.H.; Rajesh, D.S. Stockwell Transform based Face Recognition: A Robust and an Accurate Approach. In Proceedings of the International Conference on Advances in Computing, Communications and Informatics (ICACCI), Jaipur, India, 21–24 September 2016; pp. 168–174. [Google Scholar]
  39. Stockwell, R.G.; Mansinha, L.; Lowe, R.P. Localization of the complex spectrum: The S transform. IEEE Trans. Signal Process. 1996, 44, 998–1001. [Google Scholar] [CrossRef]
  40. Stockwell, R.G. A basis for efficient representation of the S-transform. Digit. Signal Process. A Rev. J. 2007, 17, 371–393. [Google Scholar] [CrossRef]
  41. Renard, Y.; Lotte, F.; Gibert, G.; Congedo, M.; Maby, E.; Delannoy, V.; Bertrand, O.; Lécuyer, A. OpenViBE: An Open-Source Software Platform to Design, Test, and Use Brain–Computer Interfaces in Real and Virtual Environments. Presence Teleoper. Virtual Environ. 2010, 19, 35–53. [Google Scholar] [CrossRef]
  42. P300: Magic Card. Available online: (accessed on 19 March 2018).
  43. Perez Vidal, A.F.; Oliver Salazar, M.A.; Salas Lopez, G. Development of a Brain-Computer Interface Based on Visual Stimuli for the Movement of a Robot Joints. IEEE Lat. Am. Trans. 2016, 14, 477–484. [Google Scholar] [CrossRef]
  44. Robinson, N.; Vinod, A.P. Bi-Directional Imagined Hand Movement Classification Using Low Cost EEG-Based BCI. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Hong Kong, China, 9–12 October 2015; pp. 3134–3139. [Google Scholar]
  45. Wu, S.-L.; Wu, C.-W.; Pal, N.R.; Chen, C.-Y.; Chen, S.-A.; Lin, C.-T. Common spatial pattern and linear discriminant analysis for motor imagery classification. In Proceedings of the IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), Singapore, 16–19 April 2013; pp. 146–151. [Google Scholar]
  46. Bang, J.W.; Choi, J.S.; Park, K.R. Noise reduction in brainwaves by using both EEG signals and frontal viewing camera images. Sensors 2013, 13, 6272–6294. [Google Scholar] [CrossRef] [PubMed]
  47. Zhang, J.; Chen, M.; Zhao, S.; Hu, S.; Shi, Z.; Cao, Y. ReliefF-based EEG sensor selection methods for emotion recognition. Sensors 2016, 16, 1558. [Google Scholar] [CrossRef] [PubMed]
  48. Quitadamo, L.R.; Cavrini, F.; Sbernini, L.; Riillo, F.; Bianchi, L.; Seri, S.; Saggio, G. Support vector machines to detect physiological patterns for EEG and EMG-based human-computer interaction: A review. J. Neural Eng. 2017, 14, 11001. [Google Scholar] [CrossRef] [PubMed]
  49. Krusienski, D.J.; Sellers, E.W.; McFarland, D.J.; Vaughan, T.M.; Wolpaw, J.R. Toward enhanced P300 speller performance. J. Neurosci. Methods 2008, 167, 15–21. [Google Scholar] [CrossRef] [PubMed]
  50. Mautner, P.; Vareka, L. Off-line Analysis of the P300 Event-Related Potential using Discrete Wavelet Transform. In Proceedings of the 36th International Conference on Telecommunications and Signal Processing (TSP), Rome, Italy, 2–4 July 2013; pp. 569–572. [Google Scholar]
  51. Shi, K.; Gao, N.; Li, Q.; Bai, O. A P300 brain-computer interface design for virtual remote control system. In Proceedings of the 3rd IEEE International Conference on Control Science and Systems Engineering (ICCSSE), Beijing, China, 17–19 August 2017; pp. 326–329. [Google Scholar]
  52. Guger, C.; Ortner, R.; Dimov, S.; Allison, B. A comparison of face speller approaches for P300 BCIs. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 4809–4812. [Google Scholar]
  53. Kolev, V.; Demiralp, T.; Yordanova, J.; Ademoglu, A.; Isoglu-Alkaç, U. Time-frequency analysis reveals multiple functional components during oddball P300. Neuroreport 1997, 8, 2061–2065. [Google Scholar] [CrossRef] [PubMed]
  54. Yordanova, J.; Devrim, M.; Kolev, V.; Ademoglu, A.; Demiralp, T. Multiple time-frequency components account for the complex functional reactivity of P300. Neuroreport 2000, 11, 1097–1103. [Google Scholar] [CrossRef] [PubMed]
  55. Thulasidas, M.; Guan, C.; Wu, J. Robust classification of EEG signal for brain-computer interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 24–29. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Emotiv EPOC® wireless EEG headset: (a) Placement of the device on the head of the subject; (b) Distribution of the electrodes according to the international 10–20 system.
Figure 2. Matrix of images used to obtain P300 evoked potentials: (a) At the beginning of the experiment, all the images are displayed; and (b) during the development of the experiment, the images are hidden and appear randomly one by one.
Figure 3. Complete process of the BCI System.
Figure 4. Methodology used in the processing of EEG signals.
Figure 5. Average EEG signals for the two conditions (Target/Non-Target).
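Figure 5 shows the trial-averaged Target/Non-Target signals. The averaging step it depicts can be sketched as follows; the P300 amplitude, latency, noise level, and sampling rate used here are synthetic placeholders for illustration, not the recorded data:

```python
import numpy as np

def average_erp(epochs):
    """Average a (n_trials, n_samples) array of stimulus-locked epochs.

    Averaging across trials suppresses background EEG (which is not
    time-locked to the stimulus) and leaves the evoked response.
    """
    return np.asarray(epochs).mean(axis=0)

rng = np.random.default_rng(1)
fs, n_samples, n_trials = 128, 128, 50
t = np.arange(n_samples) / fs                        # 0..~1 s after the stimulus
# Synthetic P300-like deflection peaking near 350 ms (hypothetical parameters)
p300 = 5.0 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))
target_epochs = p300 + rng.normal(0.0, 5.0, (n_trials, n_samples))
non_target_epochs = rng.normal(0.0, 5.0, (n_trials, n_samples))

target_avg = average_erp(target_epochs)
non_target_avg = average_erp(non_target_epochs)
```

With 50 trials the background noise is attenuated by a factor of about sqrt(50), so the averaged Target trace shows a clear peak in the 300–500 ms window while the Non-Target trace stays near baseline.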
Figure 6. Stockwell transform spectrograms of the EEG signal of Subject 3: (a) frequency range of 1–5 Hz; and (b) frequency range of 5–8 Hz.
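The spectrograms of Figure 6 come from the Stockwell transform (S-transform) of reference [39]. A minimal sketch of its FFT-based discrete form, assuming a real input signal and returning only the non-negative frequency bins, is:

```python
import numpy as np

def stockwell_transform(x):
    """Discrete S-transform via the frequency-domain formulation of
    Stockwell, Mansinha and Lowe (1996).

    Returns an (N//2 + 1, N) complex matrix: rows are frequency bins
    (cycles per record length), columns are time samples.
    """
    x = np.asarray(x, dtype=float)
    N = x.size
    X = np.fft.fft(x)
    n_freq = N // 2 + 1
    S = np.zeros((n_freq, N), dtype=complex)
    S[0, :] = np.mean(x)                     # zero-frequency row: signal mean
    m = np.arange(N)
    for k in range(1, n_freq):
        # Frequency-domain Gaussian window, scaled with the frequency k;
        # the second term accounts for the periodic wrap-around.
        gauss = (np.exp(-2.0 * np.pi ** 2 * m ** 2 / k ** 2)
                 + np.exp(-2.0 * np.pi ** 2 * (m - N) ** 2 / k ** 2))
        # np.roll(X, -k)[m] = X[(m + k) mod N], i.e., the shifted spectrum H(alpha + f)
        S[k, :] = np.fft.ifft(np.roll(X, -k) * gauss)
    return S
```

For a pure sinusoid, the row whose index matches the sinusoid's frequency bin carries the largest magnitude, which is what produces the localized energy bands visible in the 1–5 Hz and 5–8 Hz panels.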
Figure 7. Classification obtained from Subject 3 with the SVM algorithm using the RBF kernel.
Figure 8. Classification obtained from Subject 2 with the SVM algorithm using the RBF kernel.
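Figures 7 and 8 show per-subject classifications obtained with an SVM using the RBF kernel. The following sketch reproduces only the general scheme on synthetic two-dimensional Target/Non-Target feature vectors; the class means, spreads, and SVM hyperparameters (`C`, `gamma`) are illustrative assumptions, not the values used in the study:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-trial feature vectors
# (e.g., average power and area under the curve)
target = rng.normal(loc=1.0, scale=0.5, size=(100, 2))      # class 1: Target
non_target = rng.normal(loc=-1.0, scale=0.5, size=(100, 2))  # class 0: Non-Target
X = np.vstack([target, non_target])
y = np.array([1] * 100 + [0] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

The RBF kernel lets the SVM draw a non-linear decision boundary between the two feature clusters; on well-separated synthetic data like this the held-out accuracy is near 100%, whereas the real EEG features of Table 2 yield the 75–92% range reported in the abstract.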
Table 1. Statistical values of the components P1, P2, and P3.
Component | Amplitude (µV): Mean, Standard Deviation | Time (ms): Mean, Standard Deviation
Table 2. Performance obtained in the classification process with the SVM algorithm (%).
Subject | Average Power, Area under the Curve: 1–5 Hz, 5–8 Hz | Asymmetry Coefficient, Standard Deviation: 1–5 Hz, 5–8 Hz
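Table 2 reports classification performance per feature pair and frequency band. One plausible way to compute such band-limited features (average power, area under the curve, asymmetry coefficient, standard deviation) from an S-transform magnitude spectrogram is sketched below; these are common definitions and may differ from the exact ones used by the authors:

```python
import numpy as np

def band_features(S, fs, f_lo, f_hi):
    """Illustrative band features from an S-transform spectrogram.

    S has rows = frequency bins spaced fs/N apart and columns = time
    samples. Features are computed on the band-averaged power-vs-time
    curve p(t).
    """
    n_freq, N = S.shape
    freqs = np.arange(n_freq) * fs / N
    rows = (freqs >= f_lo) & (freqs <= f_hi)
    p = (np.abs(S[rows, :]) ** 2).mean(axis=0)       # band power over time
    avg_power = p.mean()                             # average power
    auc = p.sum() / fs                               # area under the curve (rectangle rule)
    std = p.std()                                    # standard deviation
    skew = ((p - p.mean()) ** 3).mean() / std ** 3   # asymmetry coefficient (skewness)
    return avg_power, auc, skew, std
```

Calling `band_features(S, fs, 1.0, 5.0)` and `band_features(S, fs, 5.0, 8.0)` on each trial's spectrogram would yield feature vectors matching the two band columns of Table 2.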
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.