Article

Research on Unsupervised Classification Algorithm Based on SSVEP

1 School of Automation, Beijing Information Science and Technology University (BISTU), Beijing 100192, China
2 Institute of Intelligent Networked Things and Cooperative Control, Beijing Information Science and Technology University (BISTU), Beijing 100192, China
3 Intelligent Perception and Control of High-End Equipment Beijing International Science and Technology Cooperation Base, Beijing Information Science and Technology University (BISTU), Beijing 100192, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(16), 8274; https://doi.org/10.3390/app12168274
Submission received: 23 July 2022 / Revised: 15 August 2022 / Accepted: 16 August 2022 / Published: 18 August 2022
(This article belongs to the Special Issue AI Applications in the Industrial Technologies)

Abstract

Filter Bank Canonical Correlation Analysis (FBCCA) is used to classify electroencephalography (EEG) signals and thereby overcome the problem of insufficient training data for EEG classification. The approach is not constrained by the amount of training data or by training time, performs unsupervised Steady-State Visual Evoked Potential (SSVEP) classification within a short time, and is easy to extend and optimize. Evaluated on a data set from a Brain–Computer Interface (BCI) competition and compared with Canonical Correlation Analysis (CCA) under various parameter settings, FBCCA shows better classification performance than CCA. With four harmonics and five subbands, the identification rate for 40 targets separated by 0.2 Hz reaches 88.9%, and the maximum information transfer rate (ITR) reaches 88.64 bits/min, demonstrating good compatibility and practicability.

1. Introduction

Brain–Computer Interface (BCI) systems interact directly with external devices without using the brain's normal output pathways, enabling terminal equipment to be controlled consciously through brain activity. BCI offers significant scientific value and broad application potential in various domains, including the operation of an intelligent wheelchair via electroencephalography (EEG) signals, rehabilitation training and the development of wearable devices [1,2,3].
Three brain–computer interface paradigms are frequently used: Motor Imagery, the P300 signal [4,5], and the Steady-State Visual Evoked Potential (SSVEP) [6,7]. SSVEP is a potential generated in the brain's visual cortex and recorded over the occipital region of the scalp when the human eye is stimulated by a flash or image at a specific frequency [8,9]. BCI systems based on SSVEP have attracted considerable interest because of their advantages, including high classification accuracy, short training time and a high Information Transfer Rate (ITR) [10,11]. Zhu et al. [12] used ensemble learning to improve the performance of ear-EEG in SSVEP-based BCI by re-implementing a compact convolutional neural network. Cherloo et al. [13] presented the Spatio-Spectral Canonical Correlation Analysis (SS-CCA) algorithm, inspired by the common spatio-spectral patterns algorithm and built on filter bank canonical correlation analysis (FBCCA), which helps increase frequency recognition performance. Kwon et al. [14] proposed a novel multidomain signal-to-signal translation method to generate artificial SSVEP signals from resting EEG. Mao et al. [15] presented a recursive Bayes-based technique for enhancing the classification efficiency of high-frequency SSVEP. Lin et al. [16] suggested a Volterra-filter-based nonlinear model of SSVEP, in which the training cost is reduced through transfer learning and by reconstructing stable reference signals from a small number of training targets.
This paper focuses on the SSVEP algorithm. Practical SSVEP applications typically place a premium on the ease of use of brain–computer interface devices, and in certain application cases it is difficult to obtain enough individual data to pre-train the BCI system. Therefore, an unsupervised fast SSVEP classification algorithm based on FBCCA was developed. For 40 targets separated by 0.2 Hz, the minimum time required to collect EEG data from the 30 subjects can be lowered to 1.5 s. Additionally, the maximum accuracy reaches 89.1% when the sampling time approaches 3.5 s. The processing time of each set of trials is less than 2.5 s, which bodes well for the use of brain-controlled devices. When the data length is 2 s, the classification accuracy rises by more than 20% compared with the standard canonical correlation analysis (CCA) algorithm.

2. Materials and Methods

2.1. Unsupervised Classification Algorithm FBCCA

The FBCCA algorithm does not need training and can directly conduct unsupervised classification [17]. Traditional CCA requires pre–training of models with EEG data before online classification [18,19].
As shown in Figure 1, the main steps of FBCCA are as follows: (1) filter bank analysis; (2) canonical correlation analysis between SSVEP subband components and sinusoidal reference signals; and (3) target classification and recognition.
Filter bank analysis in FBCCA decomposes the SSVEP into several subband components, allowing the discriminative information contained in the SSVEP harmonic components to be extracted. As a result, FBCCA exploits more extensive and robust harmonic information to recognize SSVEP targets and achieves better classification performance.

2.1.1. Filter Bank Construction

First, an appropriate filter bank for multi-subband EEG signal filtering needs to be built. Digital filters are generally divided into FIR and IIR types. IIR filters offer short processing times even for large amounts of data and are therefore suitable for filtering EEG signals. Experimental results show that the Chebyshev type-1 filter, an IIR filter, processes SSVEP signals well, so this work also uses a Chebyshev type-1 filter. As shown in Figure 2, we constructed a Chebyshev type-1 filter bank with a high-frequency passband edge W_p of 90 Hz and stopband edge W_s of 100 Hz. The low-frequency W_p and W_s values are listed in Table 1.
Next, the range of each subband must be determined. In general, filtering a single channel's EEG signal with m subband filters produces m subband-filtered EEG signals; with n channels, the m filters generate an n × m matrix. Equation (1) represents the EEG data of the n channels.
$$C = \begin{bmatrix} \mathrm{channel}_1 \\ \mathrm{channel}_2 \\ \vdots \\ \mathrm{channel}_{n-1} \\ \mathrm{channel}_n \end{bmatrix} \tag{1}$$
After each channel is processed by the Chebyshev type-1 filter bank consisting of m subbands, the subband-filtered EEG signals are as follows:
$$C_{n \times m} = \begin{bmatrix} \mathrm{channel}_{11} & \cdots & \mathrm{channel}_{1m} \\ \mathrm{channel}_{21} & \cdots & \mathrm{channel}_{2m} \\ \vdots & \ddots & \vdots \\ \mathrm{channel}_{(n-1)1} & \cdots & \mathrm{channel}_{(n-1)m} \\ \mathrm{channel}_{n1} & \cdots & \mathrm{channel}_{nm} \end{bmatrix} \tag{2}$$
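As a concrete illustration, the sketch below builds such a Chebyshev type-1 filter bank in Python with scipy and filters an n-channel segment into the n × m subband structure of (2). The W_p/W_s edges follow Table 1 and the 90/100 Hz high-frequency edges given above; the passband ripple, stopband attenuation and the use of zero-phase filtering (sosfiltfilt) are our assumptions, since the paper does not specify these settings.

```python
import numpy as np
from scipy import signal

FS = 250                                  # sampling rate of the competition data (Hz)
WP_LOW = [6, 14, 22, 30, 38, 46, 54]      # low-frequency passband edges, Table 1 (Hz)
WS_LOW = [4, 10, 16, 24, 32, 40, 48]      # low-frequency stopband edges, Table 1 (Hz)
WP_HIGH, WS_HIGH = 90, 100                # common high-frequency edges (Hz)

def design_filter_bank(m, fs=FS, rp=0.5, rs=40):
    """Design m Chebyshev type-1 bandpass filters (second-order sections).
    rp/rs (passband ripple / stopband attenuation in dB) are assumed values."""
    bank = []
    nyq = fs / 2.0
    for k in range(m):
        wp = [WP_LOW[k] / nyq, WP_HIGH / nyq]
        ws = [WS_LOW[k] / nyq, WS_HIGH / nyq]
        order, wn = signal.cheb1ord(wp, ws, rp, rs)
        bank.append(signal.cheby1(order, rp, wn, btype="bandpass", output="sos"))
    return bank

def filter_subbands(eeg, bank):
    """eeg: (n_channels, n_samples) array; returns (n_channels, m, n_samples),
    i.e. the n x m subband-filtered structure of Equation (2)."""
    return np.stack([[signal.sosfiltfilt(sos, ch) for sos in bank] for ch in eeg], axis=0)

# minimal usage example with random data (9 channels, 1.5 s at 250 Hz)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((9, int(1.5 * FS)))
    subbands = filter_subbands(eeg, design_filter_bank(m=5))
    print(subbands.shape)   # (9, 5, 375)
```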

2.1.2. CCA Analysis

After obtaining the n × m subband-filtered matrix, CCA analysis is performed. CCA is a multivariate statistical method that maximizes the correlation between two sets of data and is widely used in SSVEP-based BCI frequency classification and detection [20,21]. The method calculates the correlation coefficients between the multi-channel EEG signal and predefined sine–cosine reference signals to determine the frequency with the maximum correlation. Figure 3 shows the process.
As shown in Figure 3, CCA can analyze multi–channel EEG signals simultaneously, providing more channel information and more successfully extracting features from SSVEP EEG data. When CCA is used to extract the SSVEP response frequency, two sets of multivariate variables are defined as X and Y. Among them, X is the multi–channel EEG signal of SSVEP, and Y is the reference signal associated with the stimulation frequency f.
$$Y = \begin{bmatrix} \sin(2\pi f t) \\ \cos(2\pi f t) \\ \vdots \\ \sin(2\pi N f t) \\ \cos(2\pi N f t) \end{bmatrix} \tag{3}$$
As shown in (3), N denotes the number of harmonics of f, and the rows of X correspond to the EEG channels. Assuming there are i stimulation frequencies, Y_i is as follows:
$$Y_i = \begin{bmatrix} \sin(2\pi f_1 t) & \cdots & \sin(2\pi f_i t) \\ \cos(2\pi f_1 t) & \cdots & \cos(2\pi f_i t) \\ \vdots & \ddots & \vdots \\ \sin(2\pi N f_1 t) & \cdots & \sin(2\pi N f_i t) \\ \cos(2\pi N f_1 t) & \cdots & \cos(2\pi N f_i t) \end{bmatrix} \tag{4}$$
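The reference sets of (3) and (4) can be generated directly from the stimulation frequencies. The following sketch assumes the 40 frequencies of the data set (8–15.8 Hz in 0.2 Hz steps) and a 250 Hz sampling rate; the function and variable names are illustrative.

```python
import numpy as np

FS = 250                                           # sampling rate (Hz)
FREQS = np.round(np.arange(8.0, 16.0, 0.2), 1)     # 40 stimulation frequencies, 8-15.8 Hz

def reference_signals(freq, n_harmonics, n_samples, fs=FS):
    """Sine/cosine reference matrix of Equation (3): shape (2*n_harmonics, n_samples)."""
    t = np.arange(n_samples) / fs
    rows = []
    for h in range(1, n_harmonics + 1):
        rows.append(np.sin(2 * np.pi * h * freq * t))
        rows.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(rows)

# one reference set per candidate frequency (the Y_i of Equation (4))
refs = {f: reference_signals(f, n_harmonics=4, n_samples=int(1.5 * FS)) for f in FREQS}
```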
CCA finds a pair of weight vectors W_X and W_Y for the multidimensional variables X and Y and maximizes the correlation between the projected variables x = X^T W_X and y = Y^T W_Y, as shown in the following formula:
$$\max_{W_X, W_Y} \rho(W_X, W_Y) = \frac{E[x^T y]}{\sqrt{E[x^T x]\,E[y^T y]}} = \frac{E[W_X^T X Y^T W_Y]}{\sqrt{E[W_X^T X X^T W_X]\,E[W_Y^T Y Y^T W_Y]}} \tag{5}$$
The maximum correlation coefficient ρ between X and Y is obtained from (5). Different frequencies f are tested and the corresponding ρ values are calculated; the frequency with the largest ρ is taken as the SSVEP response frequency.
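Numerically, the maximum of (5) can be obtained without explicit iteration: after centering X and Y, the canonical correlations are the singular values of Q_X^T Q_Y, where Q_X and Q_Y are orthonormal bases from QR decompositions. The sketch below uses this to score every candidate frequency and, for FBCCA, combines the per-subband correlations with the weighting w(k) = k^(-1.25) + 0.25 that is common in the FBCCA literature; the paper does not state its combination weights, so this choice is an assumption.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation (the rho of Equation (5)) between
    X: (n_samples, n_channels) and Y: (n_samples, 2N) reference matrix."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def fbcca_classify(subbands, refs, a=1.25, b=0.25):
    """subbands: (n_channels, m, n_samples) subband-filtered EEG (Equation (2));
    refs: {frequency: (2N, n_samples)} reference sets (Equation (4)).
    The subband weights w(k) = k**(-a) + b are an assumed, commonly used choice."""
    n_ch, m, _ = subbands.shape
    w = np.array([(k + 1) ** (-a) + b for k in range(m)])
    scores = {}
    for f, Y in refs.items():
        rho = np.array([max_canonical_corr(subbands[:, k, :].T, Y.T) for k in range(m)])
        scores[f] = float(np.sum(w * rho ** 2))   # weighted combination of rho_k^2
    return max(scores, key=scores.get)            # frequency with the largest score

# usage (with the earlier sketches): target_freq = fbcca_classify(subbands, refs)
```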

2.2. The Experiment

2.2.1. Test Datasets Source

The data were acquired with Neuracle 64-lead equipment during the third "China Brain–Computer Interface Competition" using the SSVEP paradigm. As shown in Figure 4, the stimulus paradigm has 40 targets. The stimulation frequencies range from 8 to 15.8 Hz with an interval of 0.2 Hz.

2.2.2. Experimental Paradigm Process

The experimental data were gathered in blocks, with EEG data recorded sequentially in each block; each block contained an indefinite number of stimulation trials presented in random order, so the frequency of each stimulus trial within a block is not fixed. A single trial lasts 3.5∼4.5 s. The target prompt phase starts at 0.5 s and lasts 0.5 s, during which a red block appears on the screen to indicate the target position.
During the stimulation, all 40 targets flashed simultaneously, and the brightness of each target was modulated sinusoidally at its predetermined frequency. Subjects were instructed to fixate strictly on the cued target, eliciting a steady-state visual evoked response in their EEG signals. A trigger is recorded at the start of the stimulation stage of each trial. The experimental data were downsampled to 250 Hz without any further filtering; the visual latency was 0.13 s, and the confidence level was 0.95.
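For a simulated online analysis, a trial window would typically be cut from the continuous 250 Hz recording starting 0.13 s (the visual latency) after the stimulation trigger. The sketch below illustrates this step; how triggers are stored in the competition files is not described here, so the trigger handling is an assumption.

```python
import numpy as np

FS = 250            # sampling rate after downsampling (Hz)
LATENCY = 0.13      # visual latency (s)

def extract_epoch(eeg, trigger_sample, window_s):
    """Cut a window of length window_s starting LATENCY after the trigger.
    eeg: (n_channels, n_samples) continuous recording;
    trigger_sample: sample index of the stimulation onset trigger (assumed available)."""
    start = trigger_sample + int(round(LATENCY * FS))
    stop = start + int(round(window_s * FS))
    return eeg[:, start:stop]

# e.g. a 1.5 s analysis window for a trial whose stimulation trigger is at sample 1000:
# epoch = extract_epoch(eeg, trigger_sample=1000, window_s=1.5)
```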

2.2.3. Channel Selection

When SSVEP signals are acquired, the response is mainly located in the posterior occipital area of the brain [22]. For a commonly used 64-channel EEG acquisition device, the main channels for SSVEP signal processing and classification are Pz, POz, PO3, PO4, PO5, PO6, Oz, O1 and O2, as shown in Figure 5; these channels were selected for signal processing and classification in this study [23].

3. Results and Analysis

3.1. Evaluation Metrics

3.1.1. Accuracy Calculation Process

This study evaluates the performance of the algorithm in terms of classification accuracy, as defined in (6).
$$A_c = \frac{100}{n} \sum_{t=1}^{n} \begin{cases} 1, & \hat{y}_t = y_t \\ 0, & \hat{y}_t \neq y_t \end{cases} \tag{6}$$
where A_c represents the classification accuracy, n is the total number of samples, t is the sample index, y_t is the true label of sample t, and ŷ_t is the predicted label.
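Equation (6) translates directly into code; the sketch below assumes the predicted and true target labels are available as arrays.

```python
import numpy as np

def accuracy(y_pred, y_true):
    """Classification accuracy A_c of Equation (6), in percent."""
    y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
    return 100.0 * np.mean(y_pred == y_true)

# e.g. accuracy([8.0, 8.2, 8.4], [8.0, 8.2, 8.6])  ->  66.66...
```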

3.1.2. Information Transmission Rate Calculation Process

The information transfer rate (ITR) is a frequently used metric for evaluating the performance of a BCI, and its formula is as follows:

$$ITR = \left[ \log_2 N + A_c \log_2 A_c + (1 - A_c) \log_2\!\left(\frac{1 - A_c}{N - 1}\right) \right] \times \frac{60}{T}$$
where A_c represents the classification accuracy, N represents the number of targets and T represents the entire detection time, including the target fixation time and the gaze-shifting time. The optimal average data length and classification accuracy can be obtained by simulating online trials, from which the theoretical ITR value follows. Consequently, improving the ITR amounts to acting on these three quantities.
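The ITR formula can likewise be implemented directly. As a rough consistency check, with N = 40, A_c = 0.715 and T = 2.0 s (1.5 s of data plus an assumed 0.5 s gaze-shifting allowance, which the paper does not state explicitly), the function below returns roughly 88.6 bits/min, close to the maximum ITR in Table 2.

```python
import math

def itr_bits_per_min(n_targets, acc, t_seconds):
    """Information transfer rate (bits/min) for n_targets classes,
    accuracy acc in [0, 1], and total detection time t_seconds per selection."""
    if acc <= 1.0 / n_targets:
        return 0.0                      # at or below chance level, ITR is not meaningful
    if acc >= 1.0:
        bits = math.log2(n_targets)
    else:
        bits = (math.log2(n_targets)
                + acc * math.log2(acc)
                + (1 - acc) * math.log2((1 - acc) / (n_targets - 1)))
    return bits * 60.0 / t_seconds

# e.g. itr_bits_per_min(40, 0.715, 2.0) is approximately 88.6
```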

3.2. Parameter Setting and Evaluation Standard

3.2.1. Subband Number

The primary purpose of the filter bank analysis is to decompose SSVEP into subband components to extract independent information from harmonics more efficiently. This approach separates the SSVEP into equal–bandwidth subbands and assesses the accuracy by varying the number of subbands. The number of subbands is set to 3∼7, and all other variables are strictly controlled. The time duration is 1.5 s with three harmonics. The accuracy is shown in Figure 6.
The highest accuracy, 87.9%, was obtained with five subbands. When the number of subbands exceeds 5, the accuracy decreases to some extent, so the number of subbands is set to 5. With five subbands, the subband decomposition at 8 Hz is shown in Figure 7, where the time–frequency diagrams of six blocks of a single channel (Pz) are drawn.
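A time–frequency view like the one in Figure 7 can be produced, for example, with a short-time Fourier transform of the single-channel (Pz) signal; the window length and overlap in the sketch below are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

FS = 250  # sampling rate (Hz)

def plot_time_frequency(x, fs=FS, nperseg=125, noverlap=100):
    """Spectrogram of one EEG channel (e.g., Pz); window settings are illustrative."""
    f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
    plt.ylim(0, 90)                 # the frequency range covered by the filter bank
    plt.xlabel("Time (s)")
    plt.ylabel("Frequency (Hz)")
    plt.colorbar(label="Power (dB)")
    plt.show()
```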

3.2.2. Harmonic Number

The main components of SSVEP are the stimulation frequency and its harmonics, and the harmonics can provide useful information for identifying the frequency. After determining the number of subbands, we need to determine the number of harmonics. As the harmonic order grows, the harmonic signal strength weakens, so adding more harmonics increases computation time without improving accuracy.
We therefore compared different harmonic numbers to find the most suitable value. The number of harmonics is set to 3∼7, and all other variables are strictly controlled. The time duration is 1.5 s with five subbands. The accuracy is shown in Figure 8.
As shown in Figure 8, the maximum accuracy of 85.9% is obtained with four harmonics. Beyond four harmonics, the accuracy decreases to some extent, so the number of harmonics is set to 4.

3.3. Comparison of FBCCA and CCA

The evaluation criteria are the average accuracy and ITR of the 30 subjects. Different application scenarios emphasize different criteria: ITR is more suitable for online testing and evaluation and has a considerable effect on the practical application of SSVEP-based BCI, while accuracy remains the critical criterion in scenarios that require higher precision rather than a shorter acquisition time. This study therefore analyzes both commonly used criteria.
The FBCCA algorithm is compared with the traditional CCA algorithm using the best harmonic and subband numbers determined above. The extracted data length is 2 s, and the comparative results are shown in Figure 9.
As shown in Figure 9, the accuracy of FBCCA is nearly 20% higher than that of CCA, and its ITR is nearly 30% higher, indicating a significant improvement in algorithm performance.

3.4. The Effect of Data Length

To evaluate the effect of extraction time on SSVEP performance, we conducted a comparison experiment in terms of the average accuracy A_ave, the average information transfer rate ITR_ave and the minimum correct number of subjects S_min for acquisition time lengths of 1.5, 2, 2.5, 3 and 3.5 s. S_1 and S_2 indicate the numbers of subjects whose accuracy rates are above 80% and below 50%, respectively.
As shown in Table 2, the accuracy reaches its maximum of 88.9% as the time increases. For an SSVEP system with 40 targets and a frequency difference of 0.2 Hz, a recognition rate of nearly 90% can already meet the requirements of various control systems with lower real-time demands, and a longer sampling time can be used to ensure the accuracy. Moreover, increasing the extraction time length mitigates abnormal samples to some extent, as reflected by S_min in Table 2.
Since EEG intensity differs between subjects, and the acquisition environment and interference level also vary across application scenarios, a minimum number of correct classifications or a minimum accuracy must be guaranteed. Theoretically, the longer the time length the better, as confirmed by S_1 and S_2 in Table 2. However, if the window is too long, the data acquired after the stimulus-response period revert to the EEG baseline, which may interfere with the results.

4. Discussion

In many application scenarios, such as brain-controlled wheelchairs and BCI spelling, a higher ITR is needed to meet demanding requirements on control response time and signal transmission rate, so it is more practical to consider the ITR than the accuracy alone. ITR behaves somewhat like a derivative: if the accuracy does not grow correspondingly when the time increases by a certain amount, the ITR decreases, and vice versa.
As shown in Table 2, the ITR exceeds 80 bits/min only when the data length is less than or equal to 2 s. Although the accuracy is then only about 70%, abnormal data are limited in actual applications, which means the ITR can increase further and meet the fundamental requirements of BCI wheelchair control.
As shown in Figure 10, all metrics are time-dependent and most balanced at 2.5 s. A_ave, S_min and S_1 are positively correlated with the time length, while ITR_ave and S_2 are negatively correlated with it. The accuracy-related quantities A_ave, S_min, S_1 and S_2 improve as the time length increases, whereas the efficiency-related ITR_ave decreases. The acquisition time length should therefore be chosen according to the requirements of the application scenario.
Two criteria evaluate the performance of the model: the accuracy and the ITR. Accuracy is the appropriate criterion for trials or application scenarios that need higher precision rather than a shorter acquisition time; the experimental data in this paper show that the highest accuracy of 88.9% is achieved with a time length of 3.5 s, four harmonics and five subbands. ITR is more relevant for online testing; for the unsupervised FBCCA algorithm, Table 2 also lists the corresponding ITRs, and the maximum ITR of 88.64 bits/min is achieved with a time length of 1.5 s, four harmonics and five subbands.

5. Conclusions

SSVEP refers to the rhythmic EEG signal induced by continuous visual stimulation, and several methods now exist for studying it. The primary goal of this study was to achieve fast, unsupervised SSVEP classification using FBCCA. The unsupervised model required no training and was highly robust. The experimental analysis showed that the FBCCA algorithm outperformed the CCA algorithm in terms of both recognition accuracy and ITR. With five subbands and four harmonics, the acquisition time should be chosen based on the needs of the application scenario: longer acquisition times suit precision-critical scenarios, whereas shorter times suit scenarios with strict real-time requirements.
For 40 targets with a frequency difference of 0.2 Hz, the FBCCA algorithm achieved an 88.9% identification rate and a maximum ITR of 88.64 bits/min, providing a reference for subsequent SSVEP research. Compared with the CCA algorithm, the classification accuracy rises by more than 20% when the data length is 2 s. However, this research used offline datasets and only simulated online testing, which is a limitation; future work can conduct real-time online testing in different application scenarios to address this.

Author Contributions

Conceptualization, Y.W., R.Y. and X.L.; methodology, Y.W. and W.C.; software, R.Y. and X.L.; validation, J.N.; formal analysis, X.L. and R.Y.; investigation, J.N.; resources, X.L.; data curation, X.L.; writing—original draft preparation, X.L. and R.Y.; writing—review and editing, R.Y. and J.N.; visualization, R.Y.; supervision, Y.W. and W.C.; project administration, Y.W., W.C. and R.Y.; funding acquisition, Y.W. and W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Beijing Natural Science Foundation (4202026).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this article (the third “China Brain–Computer Interface Competition”) can be found online at https://www.datafountain.cn/competitions/338/datasets (accessed on 20 July 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deng, X.; Yu, Z.L.; Lin, C.; Gu, Z.; Li, Y. A Bayesian Shared Control Approach for Wheelchair Robot With Brain Machine Interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 28, 328–338. [Google Scholar] [CrossRef] [PubMed]
  2. Hernández-Rojas, L.G.; Montoya, O.M.; Antelis, J.M. Anticipatory Detection of Self-Paced Rehabilitative Movements in the Same Upper Limb From EEG Signals. IEEE Access 2020, 8, 119728–119743. [Google Scholar] [CrossRef]
  3. Mehreen, A.; Anwar, S.M.; Haseeb, M.; Majid, M.; Ullah, M.O. A Hybrid Scheme for Drowsiness Detection Using Wearable Sensors. IEEE Sens. J. 2019, 19, 5119–5126. [Google Scholar] [CrossRef]
  4. Jin, J.; Miao, Y.; Daly, I.; Zuo, C.; Hu, D.; Cichocki, A. Correlation-based channel selection and regularized feature optimization for MI-based BCI. Neural Netw. 2019, 118, 262–270. [Google Scholar] [CrossRef] [PubMed]
  5. Huang, W.; Zhang, P.; Yu, T.; Gu, Z.; Guo, Q.; Li, Y. A P300-Based BCI system using stereoelectroencephalography and its application in a brain mechanistic study. IEEE Trans. Biomed. Eng. 2020, 68, 2509–2519. [Google Scholar] [CrossRef] [PubMed]
  6. Wong, C.M.; Wang, B.; Wang, Z.; Lao, K.F.; Rosa, A.; Wan, F. Spatial filtering in SSVEP-based BCIs: Unified framework and new improvements. IEEE Trans. Biomed. Eng. 2020, 67, 3057–3072. [Google Scholar] [CrossRef] [PubMed]
  7. Hsu, H.T.; Shyu, K.K.; Hsu, C.C.; Lee, L.H.; Lee, P.L. Phase-Approaching Stimulation Sequence for SSVEP-Based BCI: A Practical Use in VR/AR HMD. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 2754–2764. [Google Scholar] [CrossRef] [PubMed]
  8. Chevallier, S.; Kalunga, E.K.; Barthélemy, Q.; Monacelli, E. Review of Riemannian distances and divergences, applied to SSVEP-based BCI. Neuroinformatics 2021, 19, 93–106. [Google Scholar] [CrossRef] [PubMed]
  9. Park, S.; Cha, H.S.; Kwon, J.; Kim, H.; Im, C.H. Development of an online home appliance control system using augmented reality and an ssvep-based brain–computer interface. In Proceedings of the 2020 Eighth International Winter Conference on Brain–Computer Interface (BCI), Gangwon, Korea, 26–28 February 2020; pp. 1–2. [Google Scholar]
  10. Venuto, D.D.; Mezzina, G. A single-trial P300 detector based on symbolized EEG and autoencoded-(1D) CNN to improve ITR performance in BCIs. Sensors 2021, 21, 3961. [Google Scholar] [CrossRef] [PubMed]
  11. Ke, Y.; Liu, P.; An, X.; Song, X.; Ming, D. An online SSVEP-BCI system in an optical see-through augmented reality environment. J. Neural Eng. 2020, 17, 016066. [Google Scholar] [CrossRef] [PubMed]
  12. Zhu, Y.; Li, Y.; Lu, J.; Li, P. EEGNet with ensemble learning to improve the cross-session classification of SSVEP based BCI from Ear-EEG. IEEE Access 2021, 9, 15295–15303. [Google Scholar] [CrossRef]
  13. Cherloo, M.N.; Amiri, H.K.; Daliri, M.R. A novel approach for frequency recognition in SSVEP-based BCI. J. Neurosci. Methods 2022, 371, 109499. [Google Scholar] [CrossRef] [PubMed]
  14. Kwon, J.; Im, C.H. Novel Signal-to-Signal Translation Method Based on StarGAN to Generate Artificial EEG for SSVEP-Based Brain–Computer Interfaces. Expert Syst. Appl. 2022, 203, 117574. [Google Scholar] [CrossRef]
  15. Mao, X.; Li, W.; Hu, H.; Jin, J.; Chen, G. Improve the classification efficiency of high-frequency phase-tagged SSVEP by a recursive bayesian-based approach. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 561–572. [Google Scholar] [CrossRef] [PubMed]
  16. Lin, J.; Liang, L.; Han, X.; Yang, C.; Chen, X.; Gao, X. Cross-target transfer algorithm based on the volterra model of SSVEP-BCI. Tsinghua Sci. Technol. 2021, 26, 505–522. [Google Scholar] [CrossRef]
  17. Bolaños, M.C.; Ballestero, S.B.; Puthusserypady, S. Filter bank approach for enhancement of supervised Canonical Correlation Analysis methods for SSVEP-based BCI spellers. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Online, 1–5 November 2021; pp. 337–340. [Google Scholar]
  18. Liu, Q.; Jiao, Y.; Miao, Y.; Zuo, C.; Wang, X.; Cichocki, A.; Jin, J. Efficient representations of EEG signals for SSVEP frequency recognition based on deep multiset CCA. Neurocomputing 2020, 378, 36–44. [Google Scholar] [CrossRef]
  19. Zhang, H.Y.; Stevenson, C.E.; Jung, T.P.; Ko, L.W. Stress-induced effects in resting EEG spectra predict the performance of SSVEP-based BCI. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1771–1780. [Google Scholar] [CrossRef] [PubMed]
  20. Wei, Q.; Zhu, S.; Wang, Y.; Gao, X.; Guo, H.; Wu, X. A training data-driven canonical correlation analysis algorithm for designing spatial filters to enhance performance of SSVEP-based BCIs. Int. J. Neural Syst. 2020, 30, 2050020. [Google Scholar] [CrossRef] [PubMed]
  21. Zhao, J.; Zhang, W.; Wang, J.H.; Li, W.; Lei, C.; Chen, G.; Liang, Z.; Li, X. Decision-making selector (DMS) for integrating CCA-based methods to improve performance of SSVEP-based BCIs. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1128–1137. [Google Scholar] [CrossRef] [PubMed]
  22. Chen, Y.; Yang, C.; Chen, X.; Wang, Y.; Gao, X. A novel training-free recognition method for SSVEP-based BCIs using dynamic window strategy. J. Neural Eng. 2021, 18, 036007. [Google Scholar] [CrossRef] [PubMed]
  23. Ravi, A.; Pearce, S.; Zhang, X.; Jiang, N. User-specific channel selection method to improve SSVEP BCI decoding robustness against variable inter-stimulus distance. In Proceedings of the 2019 Ninth International IEEE/EMBS Conference on Neural Engineering (NER), San Francisco, CA, USA, 20–23 March 2019; pp. 283–286. [Google Scholar]
Figure 1. The process of FBCCA in EEG signal analysis.
Figure 2. Bode diagram of the filter bank.
Figure 3. Process of CCA in EEG signal analysis.
Figure 4. A 40-target stimulation interface.
Figure 5. Channel locations.
Figure 6. Accuracy of different subband numbers.
Figure 7. Time–frequency diagram and subband decomposition.
Figure 8. The accuracy of different harmonic numbers.
Figure 9. The accuracy and ITR of FBCCA and CCA.
Figure 10. Comparison of different data lengths.
Table 1. W_p and W_s at the low-frequency side of the filter bank.

Filter Bank   CF1   CF2   CF3   CF4   CF5   CF6   CF7
W_p (Hz)       6    14    22    30    38    46    54
W_s (Hz)       4    10    16    24    32    40    48
Table 2. Comparison of different data lengths.

Acquisition Time (s)   A_ave (%)   ITR_ave (bits/min)   S_min   S_1   S_2
1.5                    71.5        88.64                 10     13    6
2.0                    79.1        80.09                 12     17    5
2.5                    83.1        75.31                 30     18    4
3.0                    86.8        69.62                 41     19    4
3.5                    88.9        63.48                 48     21    2