Trends and Limitations in Transformer-Based BCI Research
Abstract
1. Introduction
2. Methodology
- The work was not published in a journal or conference.
- The Results section of the original article is not reported clearly or with sufficient transparency.
- The work is a literature review or otherwise not an original study, and hence not a primary source of the data presented.
1. Works by mental task (paradigm-based);
2. Works by dataset selection (metric-based);
3. Works and technologies by signal processing application (i.e., signal classification vs. signal denoising).
3. Results
3.1. Meta Analysis
3.2. Qualitative Assessment
3.2.1. Proposed Framework Improvements for Task-Related Processing
3.2.2. Classification Performance
3.2.3. Artifact Removal
# | Study | Dataset | Artifact Type | Architecture | Symbiont | CC | MSE | RRMSE | tRRMSE | sRRMSE | RE | PRD | SNR |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
EEGDenoiseNet | |||||||||||||
1 | Bellamkonda et al. [49] | EEGDenoiseNet | muscular | hybrid | CNN | 0.9212 | NR | 0.353 | NR | NR | NR | 2.412 | 35.39 |
2 | Tiwari et al. [50] | EEGDenoiseNet | not specified | hybrid | LSTM/GRU | – | NR | NR | NR | NR | NR | NR | NR |
3 | Pu et al. [51] | EEGDenoiseNet | muscular | pure | – | 0.732 | NR | NR | NR | 0.677 | 0.626 | NR | NR |
4 | Pu et al. [51] | EEGDenoiseNet | ocular | pure | – | 0.868 | NR | NR | NR | 0.497 | 0.491 | NR | NR |
5 | Wang et al. [52] | EEGDenoiseNet | muscular 1 | hybrid | GRU | 0.844–0.982 | NR | NR | NR | NR | NR | NR | 10.06–35.13 |
6 | Wang et al. [52] | EEGDenoiseNet | ocular 1 | hybrid | GRU | 0.922–0.987 | NR | NR | NR | NR | NR | NR | 19.92–39.93 |
7 | Huang et al. [48] | EEGDenoiseNet | muscular | hybrid | DDIM | 0.989 | NR | NR | NR | 0.171 | 0.154 | NR | NR |
8 | Huang et al. [48] | EEGDenoiseNet | ocular | hybrid | DDIM | 0.983 | NR | NR | NR | 0.182 | 0.188 | NR | NR |
SSED | |||||||||||||
9 | Yin et al. [8] | SSED | ocular | pure | – | 0.978 ± 0.007 | – | 0.156 ± 0.016 | 0.164 ± 0.019 | 0.163 ± 0.013 | – | – | 16.914 ± 0.948 |
10 | Yin et al. [8] | SSED | muscular | pure | – | 0.988 ± 0.003 | – | 0.135 ± 0.020 | NR | NR | – | – | 17.73 ± 1.236 |
11 | Huang et al. [48] | SSED | ocular | hybrid | DDIM | 0.992 | NR | NR | NR | 0.121 | 0.127 | NR | NR |
BCIC IV 2a / 2b | |||||||||||||
12 | Chen et al. [53] | BCIC IV 2a | non-specific | hybrid | DDIM | 58.4 ± 1.3 | NR | NR | NR | NR | NR | NR | NR |
13 | Yin et al. [8] | BCIC IV 2a | non-specific | pure | – | NR | – | – | – | – | – | – | – |
14 | Yin et al. [8] | BCIC IV 2b | non-specific | pure | – | NR | – | – | – | – | – | – | – |
Other / Miscellaneous | |||||||||||||
15 | Tiwari et al. [50] | VEP | not specified | hybrid | LSTM/GRU | 0.9513 | 0.033 | NR | NR | NR | NR | NR | 10.56 |
16 | Tiwari et al. [50] | MNIST | not specified | hybrid | LSTM/GRU | 0.813 | 0.0286 | NR | NR | NR | NR | NR | NR |
17 | Yin et al. [8] | MNE Sample | non-specific | pure | – | 91.34 ± 3.87 | NR | NR | NR | NR | NR | NR | NR |
18 | Alzahab et al. [54] | AMIGOS | non-specific | pure | – | 93.04 ± 2.72 | 0.9665 | 0.0004 | 0.0192 | NR | NR | NR | NR |
19 | Chen et al. [53] | DEAP | non-specific | hybrid | DDIM | 58.3 ± 1.4 | NR | NR | NR | NR | NR | NR | NR |
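To make the metric columns above concrete, the sketch below shows how the temporal metrics are conventionally computed on EEGDenoiseNet-style data. This is an illustrative implementation of the standard definitions, not code from any cited study; `clean` and `denoised` are assumed to be equal-length 1-D arrays, and the spectral variant (sRRMSE) applies the same ratio to power spectral densities rather than raw samples.

```python
# Minimal sketch of common EEG denoising metrics (standard definitions;
# individual studies may normalize differently).
import numpy as np

def cc(clean: np.ndarray, denoised: np.ndarray) -> float:
    """Pearson correlation coefficient between clean and denoised signals."""
    return float(np.corrcoef(clean, denoised)[0, 1])

def rrmse(clean: np.ndarray, denoised: np.ndarray) -> float:
    """Relative RMSE: RMS of the residual divided by RMS of the clean signal."""
    residual = denoised - clean
    return float(np.sqrt(np.mean(residual**2)) / np.sqrt(np.mean(clean**2)))

def snr_db(clean: np.ndarray, denoised: np.ndarray) -> float:
    """Output SNR in decibels: clean-signal power over residual-noise power."""
    residual = denoised - clean
    return float(10 * np.log10(np.sum(clean**2) / np.sum(residual**2)))
```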
4. Discussion
4.1. Computational Efficiency and Real-Time Considerations
4.2. Research Directions and Key Observations
- Standardization and reproducibility: Transformer-based EEG studies vary dramatically in preprocessing pipelines, channel configurations, and evaluation splits. To establish comparability, future research should adopt shared benchmark datasets with fixed train/test partitions, transparent preprocessing scripts, and mandatory reporting of key metrics (accuracy, F1, Cohen's κ, CC, RMSE, SNR); a minimal fixed-split reporting sketch is given after this list. A community-agreed benchmarking protocol that integrates both classification and denoising pipelines would enable cumulative progress rather than isolated performance reports.
- Efficiency and real-time applicability: The computational footprint of transformer models remains prohibitive for live BCI use. Future architectures should prioritize efficient attention mechanisms (linear, kernelized, or windowed attention), model compression, and adaptive windowing. Inference-time reporting, hardware benchmarking, and real-time latency validation should become standardized requirements for all BCI transformer publications (see the latency-measurement sketch after this list).
- Pretraining and transferability: Most EEG transformers are trained from scratch, forfeiting the benefits of pretraining common in NLP and vision. Future work must leverage self-supervised or contrastive pretraining across large, diverse EEG corpora, enabling generalizable embeddings transferable to MI, ERP, or cognitive tasks. Cross-task transfer and domain adaptation frameworks should be benchmarked on fixed cross-subject splits to quantify real generalization rather than overfitting to dataset idiosyncrasies.
- Automated and adaptive architecture optimization: Manually designed hybrids are reaching diminishing returns. Recent work using genetic and evolutionary optimization to co-tune CNN–Transformer topologies, attention depth, and preprocessing parameters demonstrates clear gains in cross-subject robustness. Extending this paradigm with multi-objective optimization—balancing accuracy, latency, and stability—can accelerate the discovery of individualized and hardware-aware architectures.
- Denoising integration and task relevance: The field of transformer-based EEG denoising is expanding rapidly yet remains disconnected from downstream performance evaluation. Most studies report signal-level metrics (CC, RMSE, SNR) without validating how denoising affects decoding accuracy or information transfer. Future pipelines must adopt proxy-task evaluations, training denoising and classification jointly or sequentially, ensuring that signal cleaning translates to functional improvement in cognitive or motor decoding (see the denoise-then-decode sketch after this list).
- Dataset realism and artifact taxonomy: EEGDenoiseNet remains the de facto benchmark for transformer-based EEG denoising, yet its reliance on synthetic, single-artifact data fundamentally limits ecological validity. A growing portion of recent denoising research is therefore methodologically detached from real-world benefit, as improvements in correlation or error metrics on synthetic signals do not necessarily translate to enhanced decoding in practical BCI tasks. To ensure translational relevance, future studies must couple denoising evaluation with task-driven benchmarks—for instance, retraining or testing MI classifiers on denoised BCIC IV 2a data as a proxy for assessing functional impact. In realistic settings where no artifact-free ground truth exists, performance improvement in downstream decoding (e.g., motor imagery accuracy or kappa) should serve as the principal validation metric. Benchmark updates should thus include real, multi-artifact recordings with synchronized EOG, EMG, and motion channels, standardized artifact taxonomies, and shared denoise–decode baselines to allow a reproducible and functionally meaningful comparison across architectures.
- Cross-subject and adaptive generalization: Inter-subject domain shifts continue to degrade performance in both MI classification and denoising. Meta-learning, conditional normalization, and domain-adversarial training offer promising adaptation mechanisms but lack standardized evaluation. Future benchmarks should require subject-agnostic validation, reporting both absolute accuracy and adaptation cost per new user.
- Integrative and hybrid paradigms: Diffusion–transformer hybrids, temporal variational models, and cross-modal fusion architectures (EEG–EOG, EEG–fNIRS) are emerging but lack principled justification for their added complexity. Future studies should evaluate hybrid gains using ablation-based efficiency metrics, ensuring that each added mechanism contributes measurable benefit to denoising quality or classification robustness.
- From offline to closed-loop BCIs: A critical step toward translation lies in online validation. Offline pipelines must evolve into real-time adaptive systems with latency-aware inference, dynamic feedback, and cross-session persistence. Integrating lightweight transformer variants (e.g., Performer, Longformer) into embedded hardware or wearable platforms will mark the transition from research prototypes to deployable neurotechnologies.
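Following the standardization bullet above, this is a minimal sketch of what frozen-split, full-metric reporting could look like. The scikit-learn-style `model` interface and the session-wise `X_train`/`X_test` arrays are assumptions for illustration; the essential property is that the partition and the metric set are fixed by the benchmark, not chosen per study.

```python
# Hedged sketch of a fixed-split evaluation harness (illustrative only).
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

def evaluate_fixed_split(model, X_train, y_train, X_test, y_test) -> dict:
    """Train on the official training partition (e.g., BCIC IV 2a session 1),
    test on the held-out session, and report the full metric set."""
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "f1_macro": f1_score(y_test, y_pred, average="macro"),
        "kappa": cohen_kappa_score(y_test, y_pred),
    }
```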
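For the efficiency bullet, a sketch of single-trial latency measurement in PyTorch. `model` is any trained decoder; the 22-channel, 1000-sample window is an assumption mirroring a typical BCIC IV 2a trial. Warm-up passes and explicit synchronization are required for honest GPU timings, since CUDA kernels launch asynchronously.

```python
# Hedged sketch of inference-latency reporting (illustrative only).
import time
import torch

@torch.no_grad()
def median_latency_ms(model: torch.nn.Module, n_channels: int = 22,
                      n_samples: int = 1000, runs: int = 100) -> float:
    """Median single-trial inference latency in milliseconds."""
    model.eval()
    device = next(model.parameters()).device
    x = torch.randn(1, n_channels, n_samples, device=device)  # one trial window
    for _ in range(10):  # warm-up: autotuning and cache effects
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()  # wait for async kernels before timing
        times.append((time.perf_counter() - t0) * 1e3)
    return float(sorted(times)[len(times) // 2])
```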
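Finally, for the denoising-integration and dataset-realism bullets, the sketch below operationalizes proxy-task evaluation: a denoiser is credited only with the decoding accuracy it adds downstream, which remains measurable even when no artifact-free ground truth exists. `denoiser` (a callable mapping trial arrays to cleaned trial arrays) and the scikit-learn-style `classifier_factory` are placeholders.

```python
# Hedged sketch of denoise-then-decode evaluation (illustrative only).
def denoise_then_decode_gain(denoiser, classifier_factory,
                             X_train, y_train, X_test, y_test) -> float:
    """Accuracy gain from decoding denoised rather than raw trials; positive
    values indicate the denoiser helps the downstream task."""
    clf_raw = classifier_factory()
    clf_raw.fit(X_train, y_train)
    acc_raw = clf_raw.score(X_test, y_test)

    clf_den = classifier_factory()
    clf_den.fit(denoiser(X_train), y_train)
    acc_den = clf_den.score(denoiser(X_test), y_test)
    return acc_den - acc_raw
```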
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
BCI | Brain–Computer Interface |
CNN | Convolutional Neural Network |
DDIM | Denoising Diffusion Implicit Models |
DL | Deep Learning |
EEG | Electroencephalography |
GAN | Generative Adversarial Network |
GRU | Gated Recurrent Unit |
HLT | High-Level Transformer |
HT | Hierarchical Transformer |
LLT | Low-Level Transformer |
LSTM | Long Short-Term Memory |
MI | Motor Imagery |
ML | Machine Learning |
MLP | Multi-Layer Perceptron |
NLP | Natural Language Processing |
SNR | Signal-to-Noise Ratio |
TCN | Temporal Convolution Network |
XGBoost | eXtreme Gradient Boosting |
Appendix A
# | Study Name | Dataset | Classes (n) | Acc [%] | Year | Country |
---|---|---|---|---|---|---|
1 | Multiscale Convolutional Transformer for EEG Classification of Mental Imagery in Different Modalities [33] | Arizona State University Dataset | 2 | 72.0 | 2023 | South Korea |
2 | A hybrid network using transformer with modified locally linear embedding and sliding window convolution for EEG decoding [16] | BCI Competition IV 2a | 2 | 84.44 | 2024 | China |
3 | A two-stage transformer based network for motor imagery classification [11] | BCI Competition IV 2a | 2 | 88.5 | 2024 | India |
4 | Three-stage transfer learning for motor imagery EEG recognition [59] | BCI Competition IV 2a | 2 | 72.24 | 2024 | China |
5 | Temporal–spatial transformer based motor imagery classification for BCI using independent component analysis [15] | BCI Competition IV 2a | 2 | 88.75 | 2024 | Saudi Arabia |
6 | Hierarchical Transformer for Motor Imagery-Based Brain Computer Interface [10] | BCI Competition IV 2a | 2 | 90.0 | 2023 | South Korea |
7 | DeepEnsemble: A Novel Brain Wave Classification in MI-BCI using Ensemble of Deep Learners [14] | BCI Competition IV 2a | 2 | 96.07 | 2023 | Canada |
8 | EEG classification algorithm of motor imagery based on CNN-Transformer fusion network [13] | BCI Competition IV 2a | 2 | 99.29 | 2022 | China |
9 | Compact convolutional transformer for subject-independent motor imagery EEG-based BCIs [12] | BCI Competition IV 2a | 4 | 70.12 | 2024 | Kazakhstan |
10 | CTNet: a convolutional transformer network for EEG-based motor imagery classification [19] | BCI Competition IV 2a | 4 | 82.52 | 2024 | China |
11 | EEG-VTTCNet: A loss joint training model based on the vision transformer and the temporal convolution network for EEG-based motor imagery classification [17] | BCI Competition IV 2a | 4 | 84.58 | 2024 | China |
12 | EEG-TCNTransformer: A Temporal Convolutional Transformer for Motor Imagery Brain–Computer Interfaces [18] | BCI Competition IV 2a | 4 | 83.41 | 2024 | Australia |
13 | Swin-CANet: A Novel Integration of Swin Transformer with Channel Attention for Enhanced Motor Imagery Classification [21] | BCI Competition IV 2a | 4 | 78.78 | 2024 | China |
14 | BDAN-SPD: A Brain Decoding Adversarial Network Guided by Spatiotemporal Pattern Differences for Cross-Subject MI-BCI [70] | BCI Competition IV 2a | 4 | 77.49 | 2024 | China |
15 | Temporal Focal Modulation Networks for EEG-Based Cross-Subject Motor Imagery Classification [60] | BCI Competition IV 2a | 4 | 84.57 | 2024 | Tunisia |
16 | MSVTNet: Multi-Scale Vision Transformer Neural Network for EEG-Based Motor Imagery Decoding [31] | BCI Competition IV 2a | 4 | 82.56 | 2024 | China |
17 | EEG Motor Imagery Classification using Integrated Transformer-CNN for Assistive Technology Control [22] | BCI Competition IV 2a | 4 | 75.3 | 2024 | United States |
18 | SCTrans: Motor Imagery EEG Classification Method based on CNN-Transformer Structure [23] | BCI Competition IV 2a | 4 | 68.61 | 2024 | China |
19 | Deep temporal networks for EEG-based motor imagery recognition [61] | BCI Competition IV 2a | 4 | 84.0 | 2024 | India |
20 | Classification Algorithm for Electroencephalogram-based Motor Imagery Using Hybrid Neural Network with Spatio-temporal Convolution and Multi-head Attention Mechanism [62] | BCI Competition IV 2a | 4 | 83.3 | 2023 | China |
21 | A shallow mirror transformer for subject-independent motor imagery BCI [24] | BCI Competition IV 2a | 4 | 70.41 | 2023 | China |
22 | Research on Motor Imagery EEG Classification Method based on Improved Transformer [63] | BCI Competition IV 2a | 4 | 94.24 | 2023 | China |
23 | Transformer-Based Network with Optimization for Cross-Subject Motor Imagery Identification [64] | BCI Competition IV 2a | 4 | 63.56 | 2023 | China |
24 | A Spatial-Temporal Transformer based on Domain Generalization for Motor Imagery Classification [65] | BCI Competition IV 2a | 4 | 57.705 | 2023 | China |
25 | Front-End Replication Dynamic Window (FRDW) for Online Motor Imagery Classification [55] | BCI Competition IV 2a | 4 | 66.51 | 2023 | China |
26 | Exploring the Potential of Attention Mechanism-Based Deep Learning for Robust Subject-Independent Motor-Imagery Based BCIs [27] | BCI Competition IV 2a | 4 | 74.73 | 2023 | Kazakhstan |
27 | Local and global convolutional transformer-based motor imagery EEG classification [20] | BCI Competition IV 2a | 4 | 80.2 | 2023 | China |
28 | Global Adaptive Transformer for Cross-Subject Enhanced EEG Classification [25] | BCI Competition IV 2a | 4 | 76.58 | 2023 | China |
29 | A Channel Selection Method for Motor Imagery EEG Based on Fisher Score of OVR-CSP [66] | BCI Competition IV 2a | 4 | 85.54 | 2023 | China |
30 | EEG Conformer: Convolutional Transformer for EEG Decoding and Visualization [26] | BCI Competition IV 2a | 4 | 78.66 | 2023 | China |
31 | Excellent fine-tuning: From specific-subject classification to cross-task classification for motor imagery [67] | BCI Competition IV 2a | 4 | 86.11 | 2023 | China |
32 | A novel hybrid CNN-Transformer model for EEG Motor Imagery classification [68] | BCI Competition IV 2a | 4 | 83.91 | 2022 | China |
33 | Three-stage transfer learning for motor imagery EEG recognition [59] | BCI Competition IV 2b | 2 | 69.29 | 2024 | China |
34 | ConTraNet: A hybrid network for improving the classification of EEG and EMG signals with limited training data [69] | BCI Competition IV 2b | 2 | 83.61 | 2024 | Germany |
35 | Compact convolutional transformer for subject-independent motor imagery EEG-based BCIs [12] | BCI Competition IV 2b | 3 | 70.12 | 2024 | Kazakhstan |
36 | CTNet: a convolutional transformer network for EEG-based motor imagery classification [19] | BCI Competition IV 2b | 3 | 76.27 | 2024 | China |
37 | EEG-VTTCNet: A loss joint training model based on the vision transformer and the temporal convolution network for EEG-based motor imagery classification [17] | BCI Competition IV 2b | 3 | 90.94 | 2024 | China |
38 | A two-stage transformer based network for motor imagery classification [11] | BCI Competition IV 2b | 3 | 88.3 | 2024 | India |
39 | BDAN-SPD: A Brain Decoding Adversarial Network Guided by Spatiotemporal Pattern Differences for Cross-Subject MI-BCI [70] | BCI Competition IV 2b | 3 | 85.19 | 2024 | China |
40 | Temporal Focal Modulation Networks for EEG-Based Cross-Subject Motor Imagery Classification [60] | BCI Competition IV 2b | 3 | 82.22 | 2024 | Tunisia |
41 | MSVTNet: Multi-Scale Vision Transformer Neural Network for EEG-Based Motor Imagery Decoding [31] | BCI Competition IV 2b | 3 | 70.3 | 2024 | China |
42 | Temporal–spatial transformer based motor imagery classification for BCI using independent component analysis [15] | BCI Competition IV 2b | 3 | 84.2 | 2024 | Saudi Arabia |
43 | A shallow mirror transformer for subject-independent motor imagery BCI [24] | BCI Competition IV 2b | 3 | 77.36 | 2023 | China |
44 | A Spatial-Temporal Transformer based on Domain Generalization for Motor Imagery Classification [65] | BCI Competition IV 2b | 3 | 75.089 | 2023 | China |
45 | Exploring the Potential of Attention Mechanism-Based Deep Learning for Robust Subject-Independent Motor-Imagery Based BCIs [27] | BCI Competition IV 2b | 3 | 72.0 | 2023 | Kazakhstan |
46 | Global Adaptive Transformer for Cross-Subject Enhanced EEG Classification [25] | BCI Competition IV 2b | 3 | 92.08 | 2023 | China |
47 | EEG Conformer: Convolutional Transformer for EEG Decoding and Visualization [26] | BCI Competition IV 2b | 3 | 84.63 | 2023 | China |
48 | Excellent fine-tuning: From specific-subject classification to cross-task classification for motor imagery [67] | BCI Competition IV 2b | 3 | 88.39 | 2023 | China |
49 | MI-MBFT: Superior Motor Imagery Decoding of Raw EEG Data Based on a Multibranch and Fusion Transformer Framework [29] | BCI Competition IV 3a | 2 | 94.64 | 2024 | China |
50 | Deep temporal networks for EEG-based motor imagery recognition [61] | BCI Competition IV 3a | 2 | 99.7 | 2024 | India |
51 | Hierarchical Transformer for Motor Imagery-Based Brain Computer Interface [10] | Cho Dataset | - | 84.6 | 2023 | South Korea |
52 | Continual Learning of a Transformer-Based Deep Learning Classifier Using an Initial Model from Action Observation EEG Data to Online Motor Imagery Classification [32] | Custom | 2 | 77.0 | 2023 | Taiwan |
53 | A Novel Algorithmic Structure of EEG Channel Attention Combined With Swin Transformer for Motor Patterns Classification [71] | Custom | 2 | 87.67 | 2023 | China |
54 | Utilizing the Transformer Architecture Combined with EEGNet to Achieve Real-Time Manipulation of EEG in the Metaverse [34] | Custom | 3 | 71.31 | 2024 | China |
55 | Multimodal brain-controlled system for rehabilitation training: Combining asynchronous online brain–computer interface and exoskeleton [35] | Custom | 4 | 91.25 | 2024 | China |
56 | Multiscale Convolutional Transformer for EEG Classification of Mental Imagery in Different Modalities [33] | Custom | 4 | 62.0 | 2023 | South Korea |
57 | Hierarchical Transformer for Brain Computer Interface [10] | Lee Dataset | 2 | 81.3 | 2023 | South Korea |
58 | Hierarchical Transformer for Motor Imagery-Based Brain Computer Interface [10] | Lee Dataset | 2 | 82.1 | 2023 | South Korea |
59 | ConTraNet: A hybrid network for improving the classification of EEG and EMG signals with limited training data [69] | Mendeley sEMG | 10 | 77.15 | 2024 | Germany |
60 | ConTraNet: A hybrid network for improving the classification of EEG and EMG signals with limited training data [69] | Mendeley sEMG V1 | 7 | 85.0 | 2024 | Germany |
61 | BDAN-SPD: A Brain Decoding Adversarial Network Guided by Spatiotemporal Pattern Differences for Cross-Subject MI-BCI [70] | OpenBMI | 2 | 79.37 | 2024 | China |
62 | MSVTNet: Multi-Scale Vision Transformer Neural Network for EEG-Based Motor Imagery Decoding [31] | OpenBMI | 2 | 75.93 | 2024 | China |
63 | SCTrans: Motor Imagery EEG Classification Method based on CNN-Transformer Structure [23] | OpenBMI | 2 | 73.33 | 2024 | China |
64 | A shallow mirror transformer for subject-independent motor imagery BCI [24] | OpenBMI | 2 | 80.54 | 2023 | China |
65 | Local and global convolutional transformer-based motor imagery EEG classification [20] | OpenBMI | 2 | 81.04 | 2023 | China |
66 | MI-MBFT: Superior Motor Imagery Decoding of Raw EEG Data Based on a Multibranch and Fusion Transformer Framework [29] | PhysioNet MI | 2 | 84.07 | 2024 | China |
67 | Study of an Optimization Tool Avoided Bias for Brain-Computer Interfaces Using a Hybrid Deep Learning Model [30] | PhysioNet MI | 2 | 74.54 | 2024 | Spain |
68 | Hierarchical Transformer for Motor Imagery-Based Brain Computer Interface [10] | PhysioNet MI | 2 | 83.5 | 2023 | South Korea |
69 | A Transformer-Based Approach Combining Deep Learning Network and Spatial-Temporal Information for Raw EEG Classification [28] | PhysioNet MI | 2 | 83.31 | 2022 | China |
70 | ConTraNet: A hybrid network for improving the classification of EEG and EMG signals with limited training data [69] | PhysioNet MI | 3 | 74.38 | 2024 | Germany |
71 | Exploring the Potential of Attention Mechanism-Based Deep Learning for Robust Subject-Independent Motor-Imagery Based BCIs [27] | PhysioNet MI | 3 | 86.47 | 2023 | Kazakhstan |
72 | A Transformer-Based Approach Combining Deep Learning Network and Spatial-Temporal Information for Raw EEG Classification [28] | PhysioNet MI | 3 | 74.44 | 2022 | China |
73 | A Transformer-Based Approach Combining Deep Learning Network and Spatial-Temporal Information for Raw EEG Classification [28] | PhysioNet MI | 4 | 64.22 | 2022 | China |
74 | Motor Imagery and Mental Arithmetic Classification Based on Transformer Deep Learning Network [72] | Shin, Blankertz | 2 | 88.67 | 2024 | China |
75 | Classification of EEG signals based on CNN-Transformer model [73] | Shin, Blankertz | 2 | 87.23 | 2023 | China |
References
- Burnham, J.F. Scopus database: A review. Biomed. Digit. Libr. 2006, 3, 1. [Google Scholar] [CrossRef] [PubMed]
- Fantozzi, P.; Naldi, M. The explainability of transformers: Current status and directions. Computers 2024, 13, 92. [Google Scholar] [CrossRef]
- Vidyasagar, K.C.; Kumar, K.R.; Sai, G.A.; Ruchita, M.; Saikia, M.J. Signal to image conversion and convolutional neural networks for physiological signal processing: A review. IEEE Access 2024, 12, 66726–66764. [Google Scholar] [CrossRef]
- Su, L.; Zuo, X.; Li, R.; Wang, X.; Zhao, H.; Huang, B. A systematic review for transformer-based long-term series forecasting. Artif. Intell. Rev. 2025, 58, 80. [Google Scholar] [CrossRef]
- Pfeffer, M.A.; Ling, S.S.H.; Wong, J.K.W. Exploring the Frontier: Transformer-Based Models in EEG Signal Analysis for Brain-Computer Interfaces. Comput. Biol. Med. 2024, 178, 108705. [Google Scholar] [CrossRef]
- BCI Competition IV. 2008. Available online: http://www.bbci.de/competition/iv (accessed on 2 September 2025).
- Schalk, G.; McFarland, D.J.; Hinterberger, T.; Birbaumer, N.; Wolpaw, J.R. BCI2000: A General-Purpose Brain-Computer Interface (BCI) System. IEEE Trans. Biomed. Eng. 2004, 51, 1034–1043. [Google Scholar] [CrossRef]
- Yin, J.; Liu, A.; Wang, L.; Qian, R.; Chen, X. Integrating spatial and temporal features for enhanced artifact removal in multi-channel EEG recordings. J. Neural Eng. 2024, 21, 056018. [Google Scholar] [CrossRef]
- Ouahidi, Y.E.; Mohammadi, P. A Strong and Simple Deep Learning Baseline for BCI Classification from EEG. arXiv 2023, arXiv:2309.07159. [Google Scholar]
- Deny, P.; Cheon, S.; Son, H.; Choi, K.W. Hierarchical transformer for motor imagery-based brain computer interface. IEEE J. Biomed. Health Inform. 2023, 27, 5459–5470. [Google Scholar] [CrossRef]
- Chaudhary, P.; Dhankhar, N.; Singhal, A.; Rana, K. A two-stage transformer based network for motor imagery classification. Med. Eng. Phys. 2024, 128, 104154. [Google Scholar] [CrossRef]
- Keutayeva, A.; Fakhrutdinov, N.; Abibullaev, B. Compact convolutional transformer for subject-independent motor imagery EEG-based BCIs. Sci. Rep. 2024, 14, 25775. [Google Scholar] [CrossRef] [PubMed]
- Liu, H.; Liu, Y.; Wang, Y.; Liu, B.; Bao, X. EEG classification algorithm of motor imagery based on CNN-Transformer fusion network. In Proceedings of the 2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), Wuhan, China, 9–11 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1302–1309. [Google Scholar]
- Mehtiyev, A.; Al-Najjar, A.; Sadreazami, H.; Amini, M. Deepensemble: A novel brain wave classification in MI-BCI using ensemble of deep learners. In Proceedings of the 2023 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 6–8 January 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar]
- Hameed, A.; Fourati, R.; Ammar, B.; Ksibi, A.; Alluhaidan, A.S.; Ayed, M.B.; Khleaf, H.K. Temporal–spatial transformer based motor imagery classification for BCI using independent component analysis. Biomed. Signal Process. Control 2024, 87, 105359. [Google Scholar] [CrossRef]
- Li, K.; Chen, P.; Chen, Q.; Li, X. A hybrid network using transformer with modified locally linear embedding and sliding window convolution for EEG decoding. J. Neural Eng. 2024, 21, 066049. [Google Scholar] [CrossRef] [PubMed]
- Shi, X.; Li, B.; Wang, W.; Qin, Y.; Wang, H.; Wang, X. EEG-VTTCNet: A loss joint training model based on the vision transformer and the temporal convolution network for EEG-based motor imagery classification. Neuroscience 2024, 556, 42–51. [Google Scholar] [CrossRef]
- Nguyen, A.H.P.; Oyefisayo, O.; Pfeffer, M.A.; Ling, S.H. EEG-TCNTransformer: A Temporal Convolutional Transformer for Motor Imagery Brain–Computer Interfaces. Signals 2024, 5, 605–632. [Google Scholar] [CrossRef]
- Zhao, W.; Jiang, X.; Zhang, B.; Xiao, S.; Weng, S. CTNet: A convolutional transformer network for EEG-based motor imagery classification. Sci. Rep. 2024, 14, 20237. [Google Scholar] [CrossRef]
- Zhang, J.; Li, K.; Yang, B.; Han, X. Local and global convolutional transformer-based motor imagery EEG classification. Front. Neurosci. 2023, 17, 1219988. [Google Scholar] [CrossRef]
- Shi, Y.; Wang, M. Swin-CANet: A Novel Integration of Swin Transformer with Channel Attention for Enhanced Motor Imagery Classification. In Proceedings of the 2024 IEEE 4th International Conference on Software Engineering and Artificial Intelligence (SEAI), Xiamen, China, 21–23 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 57–62. [Google Scholar]
- Zare, S.; Sun, Y. EEG Motor Imagery Classification using Integrated Transformer-CNN for Assistive Technology Control. In Proceedings of the 2024 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), Wilmington, DE, USA, 19–21 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 189–190. [Google Scholar]
- Sun, B.; Wang, Q.; Li, S.; Deng, Q. SCTrans: Motor Imagery EEG Classification Method based on CNN-Transformer Structure. In Proceedings of the 2024 5th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT), Nanjing, China, 29–31 March 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 2001–2004. [Google Scholar]
- Luo, J.; Wang, Y.; Xia, S.; Lu, N.; Ren, X.; Shi, Z.; Hei, X. A shallow mirror transformer for subject-independent motor imagery BCI. Comput. Biol. Med. 2023, 164, 107254. [Google Scholar] [CrossRef]
- Song, Y.; Zheng, Q.; Wang, Q.; Gao, X.; Heng, P.A. Global adaptive transformer for cross-subject enhanced EEG classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 2767–2777. [Google Scholar] [CrossRef]
- Song, Y.; Zheng, Q.; Liu, B.; Gao, X. EEG conformer: Convolutional transformer for EEG decoding and visualization. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 31, 710–719. [Google Scholar] [CrossRef]
- Keutayeva, A.; Abibullaev, B. Exploring the potential of attention mechanism-based deep learning for robust subject-independent motor-imagery based BCIs. IEEE Access 2023, 11, 107562–107580. [Google Scholar] [CrossRef]
- Xie, J.; Zhang, J.; Sun, J.; Ma, Z.; Qin, L.; Li, G.; Zhou, H.; Zhan, Y. A transformer-based approach combining deep learning network and spatial-temporal information for raw EEG classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 2126–2136. [Google Scholar] [CrossRef] [PubMed]
- Luo, J.; Cheng, Q.; Wang, H.; Du, Q.; Wang, Y.; Li, Y. MI-MBFT: Superior Motor Imagery Decoding of Raw EEG Data Based on a Multi-Branch and Fusion Transformer Framework. IEEE Sens. J. 2024, 24, 34879–34891. [Google Scholar] [CrossRef]
- Ajali-Hernández, N.I.; Travieso-González, C.M.; Bermudo-Mora, N.; Reino-Cacho, P.; Rodríguez-Saucedo, S. Study of an Optimization Tool Avoided Bias for Brain-Computer Interfaces Using a Hybrid Deep Learning Model. IRBM 2024, 45, 100836. [Google Scholar] [CrossRef]
- Liu, K.; Yang, T.; Yu, Z.; Yi, W.; Yu, H.; Wang, G.; Wu, W. MSVTNet: Multi-Scale Vision Transformer Neural Network for EEG-Based Motor Imagery Decoding. IEEE J. Biomed. Health Inform. 2024, 28, 7126–7137. [Google Scholar] [CrossRef]
- Lee, P.L.; Chen, S.H.; Chang, T.C.; Lee, W.K.; Hsu, H.T.; Chang, H.H. Continual learning of a transformer-based deep learning classifier using an initial model from action observation EEG data to online motor imagery classification. Bioengineering 2023, 10, 186. [Google Scholar] [CrossRef]
- Ahn, H.J.; Lee, D.H.; Jeong, J.H.; Lee, S.W. Multiscale convolutional transformer for EEG classification of mental imagery in different modalities. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 31, 646–656. [Google Scholar] [CrossRef]
- Li, P.L.; Yuan, J.J. Utilizing the Transformer Architecture Combined with EEGNet to Achieve Real-Time Manipulation of EEG in the Metaverse. In Proceedings of the 2024 International Conference on System Science and Engineering (ICSSE), Hsinchu, Taiwan, 26–28 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–8. [Google Scholar]
- Liu, L.; Li, J.; Ouyang, R.; Zhou, D.; Fan, C.; Liang, W.; Li, F.; Lv, Z.; Wu, X. Multimodal brain-controlled system for rehabilitation training: Combining asynchronous online brain–computer interface and exoskeleton. J. Neurosci. Methods 2024, 406, 110132. [Google Scholar] [CrossRef]
- Hossain, K.M.; Islam, M.A.; Hossain, S.; Nijholt, A.; Ahad, M.A.R. Status of deep learning for EEG-based brain–computer interface applications. Front. Comput. Neurosci. 2023, 16, 1006763. [Google Scholar] [CrossRef]
- Elashmawi, W.H.; Ayman, A.; Antoun, M.; Mohamed, H.; Mohamed, S.E.; Amr, H.; Talaat, Y.; Ali, A. A Comprehensive Review on Brain–Computer Interface (BCI)-Based Machine and Deep Learning Algorithms for Stroke Rehabilitation. Appl. Sci. 2024, 14, 6347. [Google Scholar] [CrossRef]
- Khademi, Z.; Ebrahimi, F.; Kordy, H.M. A review of critical challenges in MI-BCI: From conventional to deep learning methods. J. Neurosci. Methods 2023, 383, 109736. [Google Scholar] [CrossRef] [PubMed]
- Brookshire, G.; Kasper, J.; Blauch, N.M.; Wu, Y.C.; Glatt, R.; Merrill, D.A.; Gerrol, S.; Yoder, K.J.; Quirk, C.; Lucero, C. Data leakage in deep learning studies of translational EEG. Front. Neurosci. 2024, 18, 1373515. [Google Scholar] [CrossRef] [PubMed]
- Varoquaux, G. Assessing and tuning brain decoders: Cross-validation, caveats, and guidelines. NeuroImage 2017, 145, 166–179. [Google Scholar] [CrossRef]
- Brunner, C.; Leeb, R.; Müller-Putz, G.P.; Schlögl, A.; Pfurtscheller, G. BCI Competition 2008—Graz Data Set A (Data Set 2a); Technical report; Graz University of Technology: Graz, Austria, 2008. [Google Scholar]
- Tangermann, M.; Müller, K.R.; Aertsen, A.; Birbaumer, N.; Braun, C.; Brunner, C.; Leeb, R.; Mehring, C.; Miller, K.J.; Müller-Putz, G.R.; et al. Review of the BCI Competition IV. Front. Neurosci. 2012, 6, 55. [Google Scholar] [CrossRef]
- Zhang, C.; Liu, Y.; Wu, X. TFANet: A temporal fusion attention neural network for motor imagery decoding. Front. Neurosci. 2025, 19, 1635588. [Google Scholar] [CrossRef]
- Liao, W.; Miao, Z.; Liang, S.; Zhang, L.; Li, C. A composite improved attention convolutional network for motor imagery EEG classification. Front. Neurosci. 2025, 19, 1543508. [Google Scholar] [CrossRef]
- Zhao, W.; Zhang, B.; Zhou, H.; Wei, D.; Huang, C.; Lan, Q. Multi-scale convolutional transformer network for motor imagery brain–computer interface. Sci. Rep. 2025, 15, 96611. [Google Scholar] [CrossRef]
- Liao, W.; Liu, H.; Wang, W. Advancing BCI with a transformer-based model for motor imagery decoding. Sci. Rep. 2025, 15, 06364. [Google Scholar] [CrossRef]
- Song, J.; Zhai, Q.; Wang, C.; Liu, J. EEGGAN-Net: Enhancing EEG signal classification through data augmentation based on generative adversarial networks. Front. Hum. Neurosci. 2024, 18, 1430086. [Google Scholar] [CrossRef]
- Huang, X.; Li, C.; Liu, A.; Qian, R.; Chen, X. EEGDfus: A conditional diffusion model for fine-grained EEG denoising. IEEE J. Biomed. Health Inform. 2024, 29, 2557–2569. [Google Scholar] [CrossRef]
- Bellamkonda, N.L.; Goru, H.K.; Solasuttu, B.; Gangu, V.R. A Hybrid Residual CNN and Multi-Head Self-Attention Network for Denoising Muscle Artifacts in EEG Signals. In Proceedings of the 2025 6th International Conference on Data Intelligence and Cognitive Informatics (ICDICI), Tirunelveli, India, 9–11 July 2025; IEEE: Piscataway, NJ, USA, 2025; pp. 21–27. [Google Scholar]
- Tiwari, N.; Anwar, S. BiGRU-TFA: An Attention-Enhanced Model for EEG Signal Reconstruction Using Temporal and Frequency Features. IEEE Sens. J. 2025, 25, 27077–27085. [Google Scholar] [CrossRef]
- Pu, X.; Yi, P.; Chen, K.; Ma, Z.; Zhao, D.; Ren, Y. EEGDnet: Fusing non-local and local self-similarity for EEG signal denoising with transformer. Comput. Biol. Med. 2022, 151, 106248. [Google Scholar] [CrossRef]
- Wang, W.; Li, B.; Wang, H. A novel end-to-end network based on a bidirectional GRU and a self-attention mechanism for denoising of electroencephalography signals. Neuroscience 2022, 505, 10–20. [Google Scholar] [CrossRef]
- Chen, J.; Pi, D.; Jiang, X.; Gao, F.; Wang, B.; Chen, Y. EEGCiD: EEG Condensation Into Diffusion Model. IEEE Trans. Autom. Sci. Eng. 2024, 22, 8502–8518. [Google Scholar] [CrossRef]
- Alzahab, N.A.; Alshawa, N. Automatic Reconstruction of Noisy Electroencephalography (EEG) Channels with Transformer-Based Architectures for Sustainable Systems. In Proceedings of the 2024 10th International Conference on Computing, Engineering and Design (ICCED), Jeddah, Saudi Arabia, 11–12 December 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
- Chen, X.; An, J.; Wu, H.; Li, S.; Liu, B.; Wu, D. Front-end Replication Dynamic Window (FRDW) for Online Motor Imagery Classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 3906–3914. [Google Scholar] [CrossRef]
- Pfeffer, M.A.; Nguyen, A.H.P.; Kim, K.; Wong, J.K.W.; Ling, S.H. Evolving optimized transformer-hybrid systems for robust BCI signal processing using genetic algorithms. Biomed. Signal Process. Control 2025, 108, 107883. [Google Scholar] [CrossRef]
- Wilson, J.A.; Mellinger, J.; Schalk, G.; Williams, J. A Procedure for Measuring Latencies in Brain–Computer Interfaces. Front. Neurosci. 2010, 4, 306. [Google Scholar] [CrossRef] [PubMed]
- LaRocco, J.; Le, M.; Paeng, D.H. Optimizing Computer–Brain Interface Parameters for Non-Invasive Applications. Front. Neuroinform. 2020, 14, 1. [Google Scholar] [CrossRef]
- Li, J.; She, Q.; Meng, M.; Du, S.; Zhang, Y. Three-stage transfer learning for motor imagery EEG recognition. Med. Biol. Eng. Comput. 2024, 62, 1689–1701. [Google Scholar] [CrossRef]
- Hameed, A.; Fourati, R.; Ammar, B.; Sanchez-Medina, J.; Ltifi, H. Temporal Focal Modulation Networks for EEG-Based Cross-Subject Motor Imagery Classification. In Proceedings of the International Conference on Computational Collective Intelligence, Leipzig, Germany, 9–11 September 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 445–457. [Google Scholar]
- Sharma, N.; Upadhyay, A.; Sharma, M.; Singhal, A. Deep temporal networks for EEG-based motor imagery recognition. Sci. Rep. 2023, 13, 18813. [Google Scholar] [CrossRef]
- Shi, X.; Li, B.; Wang, W.; Qin, Y.; Wang, H.; Wang, X. Classification algorithm for electroencephalogram-based motor imagery using hybrid neural network with spatio-temporal convolution and multi-head attention mechanism. Neuroscience 2023, 527, 64–73. [Google Scholar] [CrossRef] [PubMed]
- Liu, Y.; Liu, Z.; Huang, L. Research on Motor Imagery EEG Classification Method based on Improved Transformer. In Proceedings of the Fifth International Conference on Image Processing and Intelligent Control (IPIC 2025), Qingdao, China, 9–11 May 2025; Volume 13782, pp. 195–201. [Google Scholar]
- Tan, X.; Wang, D.; Chen, J.; Xu, M. Transformer-based network with optimization for cross-subject motor imagery identification. Bioengineering 2023, 10, 609. [Google Scholar] [CrossRef] [PubMed]
- Liu, S.; An, L.; Zhang, C.; Jia, Z. A spatial-temporal transformer based on domain generalization for motor imagery classification. In Proceedings of the 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Honolulu, HI, USA, 1–4 October 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 3789–3794. [Google Scholar]
- Mu, W.; Wang, J.; Wang, L.; Wang, P.; Han, J.; Niu, L.; Bin, J.; Liu, L.; Zhang, J.; Jia, J.; et al. A channel selection method for motor imagery EEG based on Fisher score of OVR-CSP. In Proceedings of the 2023 11th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Republic of Korea, 20–22 February 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–4. [Google Scholar]
- Jia, X.; Song, Y.; Xie, L. Excellent fine-tuning: From specific-subject classification to cross-task classification for motor imagery. Biomed. Signal Process. Control 2023, 79, 104051. [Google Scholar] [CrossRef]
- Ma, Y.; Song, Y.; Gao, F. A novel hybrid CNN-transformer model for EEG motor imagery classification. In Proceedings of the 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 18–23 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–8. [Google Scholar]
- Ali, O.; Saif-ur Rehman, M.; Glasmachers, T.; Iossifidis, I.; Klaes, C. ConTraNet: A hybrid network for improving the classification of EEG and EMG signals with limited training data. Comput. Biol. Med. 2024, 168, 107649. [Google Scholar] [CrossRef]
- Wei, F.; Xu, X.; Li, X.; Wu, X. BDAN-SPD: A brain decoding adversarial network guided by spatiotemporal pattern differences for cross-subject MI-BCI. IEEE Trans. Ind. Inform. 2024, 20, 14321–14329. [Google Scholar] [CrossRef]
- Wang, H.; Cao, L.; Huang, C.; Jia, J.; Dong, Y.; Fan, C.; De Albuquerque, V.H.C. A novel algorithmic structure of EEG Channel Attention combined with Swin Transformer for motor patterns classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 3132–3141. [Google Scholar] [CrossRef]
- Ye, Y.; Tong, J.; Yang, S.; Chang, Y.; Du, S. Motor Imagery and Mental Arithmetic Classification Based on Transformer Deep Learning Network. In Proceedings of the 2024 IEEE International Conference on Mechatronics and Automation (ICMA), Tianjin, China, 4–7 August 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 357–362. [Google Scholar]
- Liu, J.; Dong, E.; Tong, J.; Yang, S.; Du, S. Classification of EEG signals based on CNN-Transformer model. In Proceedings of the 2023 IEEE International Conference on Mechatronics and Automation (ICMA), Harbin, China, 6–9 August 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 2095–2099. [Google Scholar]
BCI Approach and Subcategories | Count | Denoising Study Document Query | Count |
---|---|---|---|
General Signal Processing | 2788 | No Exclusion Criteria | 5167 |
General Motor Imagery | 2602 | Transformers | 146 |
CNNs | 951 | Diffusion | 78 |
Transformers | 205 | BCI and Diffusion | 36 |
CNNs and MI | 511 | GANs | 45 |
Transformers and MI | 54 | Self-Attention | 65 |
By BCI Feature Paradigm | Count | By Signal Processing Methodology | Count |
---|---|---|---|
Motor Imagery | 2602 | No Exclusion Criteria | 6841 |
P300 | 388 | Traditional ML | 1421 |
Error-related Potential(s) | 314 | Deep Learning | 1597 |
Imagined/Inner Speech | 138 | | |
Mental Workload Estimation | 107 | | |
# | Study (First Author) | Model | Type | Classes (n) | Acc [%] | Evaluation Protocol |
---|---|---|---|---|---|---|
BCI Competition IV 2a | ||||||
1 | Deny (2023) [10] | Hierarchical Transformer | Pure | 2 | 90.00 | subject-dependent |
2 | Chaudhary (2024) [11] | Two-stage Transformer | Pure | 2 | 88.50 | subject-dependent |
3 | Keutayeva (2024) [12] | Compact Conv. Transformer | Pure | 4 | 70.12 | subject-independent |
4 | Liu (2022) [13] | CNN–Transformer Fusion | Hybrid | 2 | 99.29 | non-standard split |
5 | Mehtiyev (2023) [14] | DeepEnsemble (ViT + CNN) | Hybrid | 2 | 96.07 | subject-dependent |
6 | Hameed (2024) [15] | Transformer + ICA | Hybrid | 2 | 88.75 | subject-dependent |
7 | Li (2024) [16] | Transformer + M-LLE + SW-CNN | Hybrid | 2 | 84.44 | subject-dependent |
8 | Shi (2024) [17] | EEG-VTTCNet (ViT + TCN) | Hybrid | 4 | 84.58 | subject-dependent |
9 | Nguyen (2024) [18] | EEG-TCNTransformer | Hybrid | 4 | 83.41 | subject-dependent |
10 | Zhao (2024) [19] | CTNet | Hybrid | 4 | 82.52 | subject-dependent |
11 | Zhang (2023) [20] | Local + Global Conformer/Transformer | Hybrid | 4 | 80.20 | subject-dependent |
12 | Shi (2024) [21] | Swin-CANet | Hybrid | 4 | 78.78 | subject-dependent |
13 | Zare (2024) [22] | Integrated Transformer–CNN | Hybrid | 4 | 75.30 | subject-dependent |
14 | Sun (2024) [23] | SCTrans | Hybrid | 4 | 68.61 | subject-dependent |
BCI Competition IV 2b | ||||||
15 | Luo (2023) [24] | Shallow Mirror Transformer | Pure | 3 | 77.36 | subject-independent |
16 | Song (2023) [25] | Global Adaptive Transformer | Pure | 3 | 92.08 | subject-dependent |
17 | Keutayeva (2024) [12] | Compact Conv. Transformer | Pure | 3 | 70.12 | subject-independent |
18 | Shi (2024) [17] | EEG-VTTCNet (ViT + TCN) | Hybrid | 3 | 90.94 | subject-dependent |
19 | Song (2023) [26] | EEG Conformer | Hybrid | 3 | 84.63 | subject-dependent |
20 | Zhao (2024) [19] | CTNet | Hybrid | 3 | 76.27 | subject-dependent |
PhysioNet MI | ||||||
21 | Keutayeva (2023) [27] | Attention-based Transformer | Pure | 3 | 86.47 | subject-independent |
22 | Xie (2022) [28] | Transformer + DL (2-class) | Hybrid | 2 | 83.31 | subject-dependent |
23 | Xie (2022) [28] | Transformer + DL (3-class) | Hybrid | 3 | 74.44 | subject-dependent |
24 | Xie (2022) [28] | Transformer + DL (4-class) | Hybrid | 4 | 64.22 | subject-dependent |
25 | Luo (2024) [29] | MI-MBFT (Multi-branch Transformer) | Hybrid | 2 | 84.07 | subject-dependent |
26 | Ajali (2024) [30] | Optimization + DL | Hybrid | 2 | 74.54 | subject-dependent |
OpenBMI | ||||||
27 | Luo (2023) [24] | Shallow Mirror Transformer | Pure | 2 | 80.54 | subject-independent |
28 | Liu (2024) [31] | MSVTNet (ViT) | Pure | 2 | 75.93 | subject-dependent |
29 | Zhang (2023) [20] | Local + Global Conformer/Transformer | Hybrid | 2 | 81.04 | subject-dependent |
30 | Sun (2024) [23] | SCTrans | Hybrid | 2 | 73.33 | subject-dependent |
Other / Custom or Additional Benchmarks | ||||||
31 | Lee (2023) [32] | Continual Transformer (online) | Pure | 2 | 77.00 | online/real-time |
32 | Ahn (2023) [33] | Multiscale Conv-Transformer | Hybrid | 2 | 72.00 | subject-dependent |
33 | Li (2024) [34] | Transformer + EEGNet (metaverse) | Hybrid | 3 | 71.31 | online/real-time |
34 | Liu (2024) [35] | Multimodal Transformer (exoskeleton) | Hybrid | 4 | 91.25 | online/real-time |