Search Results (156)

Search Parameters:
Keywords = motor imagery EEG (MI-EEG)

20 pages, 2115 KiB  
Article
GAH-TNet: A Graph Attention-Based Hierarchical Temporal Network for EEG Motor Imagery Decoding
by Qiulei Han, Yan Sun, Hongbiao Ye, Ze Song, Jian Zhao, Lijuan Shi and Zhejun Kuang
Brain Sci. 2025, 15(8), 883; https://doi.org/10.3390/brainsci15080883 - 19 Aug 2025
Viewed by 211
Abstract
Background: Brain–computer interfaces (BCIs) based on motor imagery (MI) offer promising solutions for motor rehabilitation and communication. However, electroencephalography (EEG) signals are often characterized by low signal-to-noise ratios, strong non-stationarity, and significant inter-subject variability, which pose significant challenges for accurate decoding. Existing methods often struggle to simultaneously model the spatial interactions between EEG channels, the local fine-grained features within signals, and global semantic patterns. Methods: To address this, we propose the graph attention-based hierarchical temporal network (GAH-TNet), which integrates spatial graph attention modeling with hierarchical temporal feature encoding. Specifically, we design the graph attention temporal encoding block (GATE). The graph attention mechanism is used to model spatial dependencies between EEG channels and encode short-term temporal dynamic features. Subsequently, a hierarchical attention-guided deep temporal feature encoding block (HADTE) is introduced, which extracts local fine-grained and global long-term dependency features through two-stage attention and temporal convolution. Finally, a fully connected classifier is used to obtain the classification results. The proposed model is evaluated on two publicly available MI-EEG datasets. Results: Our method outperforms multiple existing state-of-the-art methods in classification accuracy. On the BCI IV 2a dataset, the average classification accuracy reaches 86.84%, and on BCI IV 2b, it reaches 89.15%. Ablation experiments validate the complementary roles of GATE and HADTE in modeling. Additionally, the model exhibits good generalization ability across subjects. Conclusions: This framework effectively captures the spatio-temporal dynamic characteristics and topological structure of MI-EEG signals. 
This hierarchical and interpretable framework provides a new approach for improving decoding performance in EEG motor imagery tasks. Full article
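The graph attention described in this abstract follows the standard GAT formulation: attention logits from a learned vector over concatenated projected channel features, softmax-normalized over neighbors. A minimal numpy sketch of one such layer over EEG channels; all shapes, weights, and the fully connected montage are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention_layer(H, W, a, adj):
    """One GAT-style layer. H: (C channels, F features), W: (F, F'),
    a: attention vector of length 2F', adj: (C, C) binary adjacency."""
    Z = H @ W                                  # project channel features
    C = Z.shape[0]
    e = np.empty((C, C))                       # pairwise attention logits
    for i in range(C):
        for j in range(C):
            e[i, j] = np.dot(a, np.concatenate([Z[i], Z[j]]))
    e = np.where(e > 0, e, 0.2 * e)            # LeakyReLU
    e = np.where(adj > 0, e, -1e9)             # mask non-neighbor channels
    alpha = softmax(e, axis=1)                 # normalize over neighbors
    return alpha @ Z                           # aggregate neighbor features

rng = np.random.default_rng(0)
C, F, Fp = 4, 8, 6                             # toy sizes: 4 channels
H = rng.standard_normal((C, F))
W = rng.standard_normal((F, Fp))
a = rng.standard_normal(2 * Fp)
adj = np.ones((C, C))                          # hypothetical fully connected montage
out = graph_attention_layer(H, W, a, adj)
print(out.shape)                               # (4, 6)
```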

26 pages, 3497 KiB  
Article
A Multi-Branch Network for Integrating Spatial, Spectral, and Temporal Features in Motor Imagery EEG Classification
by Xiaoqin Lian, Chunquan Liu, Chao Gao, Ziqian Deng, Wenyang Guan and Yonggang Gong
Brain Sci. 2025, 15(8), 877; https://doi.org/10.3390/brainsci15080877 - 18 Aug 2025
Viewed by 300
Abstract
Background: Efficient decoding of motor imagery (MI) electroencephalogram (EEG) signals is essential for the precise control and practical deployment of brain-computer interface (BCI) systems. Owing to the complex nonlinear characteristics of EEG signals across spatial, spectral, and temporal dimensions, efficiently extracting multidimensional discriminative features remains a key challenge to improving MI-EEG decoding performance. Methods: To address the challenge of capturing complex spatial, spectral, and temporal features in MI-EEG signals, this study proposes a multi-branch deep neural network, which jointly models these dimensions to enhance classification performance. The network takes as inputs both a three-dimensional power spectral density tensor and two-dimensional time-domain EEG signals and incorporates four complementary feature extraction branches to capture spatial, spectral, spatial-spectral joint, and temporal dynamic features, thereby enabling unified multidimensional modeling. The model was comprehensively evaluated on two widely used public MI-EEG datasets: EEG Motor Movement/Imagery Database (EEGMMIDB) and BCI Competition IV Dataset 2a (BCIIV2A). To further assess interpretability, gradient-weighted class activation mapping (Grad-CAM) was employed to visualize the spatial and spectral features prioritized by the model. Results: On the EEGMMIDB dataset, it achieved an average classification accuracy of 86.34% and a kappa coefficient of 0.829 in the five-class task. On the BCIIV2A dataset, it reached an accuracy of 83.43% and a kappa coefficient of 0.779 in the four-class task. Conclusions: These results demonstrate that the network outperforms existing state-of-the-art methods in classification performance. Furthermore, Grad-CAM visualizations identified the key spatial channels and frequency bands attended to by the model, supporting its neurophysiological interpretability. Full article
(This article belongs to the Section Neurotechnology and Neuroimaging)
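The three-dimensional power spectral density tensor used as input above can be illustrated with a simple periodogram estimate per trial, channel, and band. A sketch under assumed parameters; the sampling rate, band edges, and shapes are hypothetical, and the authors' exact PSD estimator is not specified here:

```python
import numpy as np

def band_power_tensor(eeg, fs, bands):
    """eeg: (trials, channels, samples). Returns a (trials, channels, bands)
    tensor of mean periodogram power per frequency band."""
    n = eeg.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2 / (fs * n)
    out = np.empty(eeg.shape[:2] + (len(bands),))
    for k, (lo, hi) in enumerate(bands):
        sel = (freqs >= lo) & (freqs < hi)     # bins inside this band
        out[..., k] = psd[..., sel].mean(axis=-1)
    return out

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 22, 256))          # 5 trials, 22 channels, 1 s @ 256 Hz
bands = [(4, 8), (8, 13), (13, 30)]            # theta, alpha/mu, beta
T = band_power_tensor(x, fs=256, bands=bands)
print(T.shape)                                 # (5, 22, 3)
```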

29 pages, 1397 KiB  
Review
Artificial Intelligence Approaches for EEG Signal Acquisition and Processing in Lower-Limb Motor Imagery: A Systematic Review
by Sonia Rocío Moreno-Castelblanco, Manuel Andrés Vélez-Guerrero and Mauro Callejas-Cuervo
Sensors 2025, 25(16), 5030; https://doi.org/10.3390/s25165030 - 13 Aug 2025
Viewed by 425
Abstract
Background: Motor imagery (MI) is defined as the cognitive ability to simulate motor movements while suppressing muscular activity. The electroencephalographic (EEG) signals associated with lower limb MI have become essential in brain–computer interface (BCI) research aimed at assisting individuals with motor disabilities. Objective: This systematic review aims to evaluate methodologies for acquiring and processing EEG signals within brain–computer interface (BCI) applications to accurately identify lower limb MI. Methods: A systematic search in Scopus and IEEE Xplore identified 287 records on EEG-based lower-limb MI using artificial intelligence. Following PRISMA guidelines (non-registered), 35 studies met the inclusion criteria after screening and full-text review. Results: Among the selected studies, 85% applied machine or deep learning classifiers such as SVM, CNN, and LSTM, while 65% incorporated multimodal fusion strategies, and 50% implemented decomposition algorithms. These methods improved classification accuracy, signal interpretability, and real-time application potential. Nonetheless, methodological variability and a lack of standardization persist across studies, posing barriers to clinical implementation. Conclusions: AI-based EEG analysis effectively decodes lower-limb motor imagery. Future efforts should focus on harmonizing methods, standardizing datasets, and developing portable systems to improve neurorehabilitation outcomes. This review provides a foundation for advancing MI-based BCIs. Full article

24 pages, 4294 KiB  
Article
Post Hoc Event-Related Potential Analysis of Kinesthetic Motor Imagery-Based Brain-Computer Interface Control of Anthropomorphic Robotic Arms
by Miltiadis Spanos, Theodora Gazea, Vasileios Triantafyllidis, Konstantinos Mitsopoulos, Aristidis Vrahatis, Maria Hadjinicolaou, Panagiotis D. Bamidis and Alkinoos Athanasiou
Electronics 2025, 14(15), 3106; https://doi.org/10.3390/electronics14153106 - 4 Aug 2025
Viewed by 287
Abstract
Kinesthetic motor imagery (KMI), the mental rehearsal of a motor task without its actual performance, constitutes one of the most common techniques used for brain–computer interface (BCI) control for movement-related tasks. The effect of neural injury on motor cortical activity during execution and imagery remains under investigation in terms of activations, processing of motor onset, and BCI control. The current work aims to conduct a post hoc investigation of the event-related potential (ERP)-based processing of KMI during BCI control of anthropomorphic robotic arms by spinal cord injury (SCI) patients and healthy control participants in a completed clinical trial. For this purpose, we analyzed 14-channel electroencephalography (EEG) data from 10 patients with cervical SCI and 8 healthy individuals, recorded through Emotiv EPOC BCI, as the participants attempted to move anthropomorphic robotic arms using KMI. EEG data were pre-processed by band-pass filtering (8–30 Hz) and independent component analysis (ICA). ERPs were calculated at the sensor space, and analysis of variance (ANOVA) was used to determine potential differences between groups. Our results showed no statistically significant differences between SCI patients and healthy control groups regarding mean amplitude and latency (at the p < 0.05 significance level) across the recorded channels at various time points during stimulus presentation. Notably, no significant differences were observed in ERP components, except for the P200 component at the T8 channel. These findings suggest that brain circuits associated with motor planning and sensorimotor processes are not disrupted due to anatomical damage following SCI. The temporal dynamics of motor-related areas, particularly in channels like F3, FC5, and F7, indicate that essential motor imagery (MI) circuits remain functional.
Limitations include the relatively small sample size that may hamper the generalization of our findings, the sensor-space analysis that restricts anatomical specificity and neurophysiological interpretations, and the use of a low-density EEG headset, lacking coverage over key motor regions. Non-invasive EEG-based BCI systems for motor rehabilitation in SCI patients could effectively leverage intact neural circuits to promote neuroplasticity and facilitate motor recovery. Future work should include validation against larger, longitudinal, high-density, source-space EEG datasets. Full article
(This article belongs to the Special Issue EEG Analysis and Brain–Computer Interface (BCI) Technology)
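The preprocessing chain described (8–30 Hz band-pass, then averaging epochs per channel to form sensor-space ERPs) can be sketched as follows. For self-containment this uses a crude FFT-mask band-pass; a real pipeline would use a proper filter design (e.g., Butterworth) plus ICA, both omitted here, and the shapes are illustrative:

```python
import numpy as np

def fft_bandpass(x, fs, lo, hi):
    """Crude zero-phase band-pass: zero FFT bins outside [lo, hi] Hz."""
    X = np.fft.rfft(x, axis=-1)
    f = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    X[..., (f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, n=x.shape[-1], axis=-1)

def erp(trials):
    """trials: (n_trials, channels, samples) -> per-channel average."""
    return trials.mean(axis=0)

rng = np.random.default_rng(2)
epochs = rng.standard_normal((40, 14, 128))    # 40 epochs, 14 channels (EPOC-like)
filtered = fft_bandpass(epochs, fs=128, lo=8, hi=30)
avg = erp(filtered)
print(avg.shape)                               # (14, 128)
```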

29 pages, 2830 KiB  
Article
BCINetV1: Integrating Temporal and Spectral Focus Through a Novel Convolutional Attention Architecture for MI EEG Decoding
by Muhammad Zulkifal Aziz, Xiaojun Yu, Xinran Guo, Xinming He, Binwen Huang and Zeming Fan
Sensors 2025, 25(15), 4657; https://doi.org/10.3390/s25154657 - 27 Jul 2025
Viewed by 476
Abstract
Motor imagery (MI) electroencephalograms (EEGs) are pivotal signals reflecting cortical activity during imagined motor actions, widely leveraged for brain-computer interface (BCI) system development. However, effective decoding of these MI EEG signals is often hampered by flawed signal processing methods, deep learning methods that are clinically unexplained, and highly inconsistent performance across different datasets. We propose BCINetV1, a new framework for MI EEG decoding to address the aforementioned challenges. BCINetV1 utilizes three innovative components: a temporal convolution-based attention block (T-CAB) and a spectral convolution-based attention block (S-CAB), both driven by a new convolutional self-attention (ConvSAT) mechanism to identify key non-stationary temporal and spectral patterns in the EEG signals. Lastly, a squeeze-and-excitation block (SEB) intelligently combines those identified tempo-spectral features for accurate, stable, and contextually aware MI EEG classification. Evaluated on four diverse datasets containing 69 participants, BCINetV1 consistently achieved the highest average accuracies of 98.6% (Dataset 1), 96.6% (Dataset 2), 96.9% (Dataset 3), and 98.4% (Dataset 4). This research demonstrates that BCINetV1 is computationally efficient, extracts clinically vital markers, effectively handles the non-stationarity of EEG data, and shows a clear advantage over existing methods, marking a significant step forward for practical BCI applications. Full article
(This article belongs to the Special Issue Advanced Biomedical Imaging and Signal Processing)
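The squeeze-and-excitation block (SEB) mentioned above is a standard gating pattern: globally pool each feature channel, pass the pooled vector through a small bottleneck network, and rescale channels by the resulting sigmoid gates. A numpy sketch with random, untrained weights (all sizes hypothetical):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def squeeze_excitation(x, W1, W2):
    """x: (channels, time) feature map. Squeeze: global average over time.
    Excite: bottleneck ReLU layer, then sigmoid gates in (0, 1) per channel."""
    s = x.mean(axis=1)                          # squeeze -> (channels,)
    g = sigmoid(W2 @ np.maximum(W1 @ s, 0))     # excitation gates
    return x * g[:, None]                       # rescale each channel

rng = np.random.default_rng(3)
C, T, r = 16, 100, 4                            # 16 channels, reduction ratio 4
x = rng.standard_normal((C, T))
W1 = rng.standard_normal((C // r, C))           # bottleneck down-projection
W2 = rng.standard_normal((C, C // r))           # up-projection back to C gates
y = squeeze_excitation(x, W1, W2)
print(y.shape)                                  # (16, 100)
```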

34 pages, 3704 KiB  
Article
Uncertainty-Aware Deep Learning for Robust and Interpretable MI EEG Using Channel Dropout and LayerCAM Integration
by Óscar Wladimir Gómez-Morales, Sofia Escalante-Escobar, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Appl. Sci. 2025, 15(14), 8036; https://doi.org/10.3390/app15148036 - 18 Jul 2025
Viewed by 442
Abstract
Motor Imagery (MI) classification plays a crucial role in enhancing the performance of brain–computer interface (BCI) systems, thereby enabling advanced neurorehabilitation and the development of intuitive brain-controlled technologies. However, MI classification using electroencephalography (EEG) is hindered by spatiotemporal variability and the limited interpretability of deep learning (DL) models. To mitigate these challenges, dropout techniques are employed as regularization strategies. Nevertheless, the removal of critical EEG channels, particularly those from the sensorimotor cortex, can result in substantial spatial information loss, especially under limited training data conditions. This issue, compounded by high EEG variability in subjects with poor performance, hinders generalization and reduces the interpretability and clinical trust in MI-based BCI systems. This study proposes a novel framework integrating channel dropout—a variant of Monte Carlo dropout (MCD)—with class activation maps (CAMs) to enhance robustness and interpretability in MI classification. This integration represents a significant step forward by offering, for the first time, a dedicated solution to concurrently mitigate spatiotemporal uncertainty and provide fine-grained neurophysiologically relevant interpretability in motor imagery classification, particularly demonstrating refined spatial attention in challenging low-performing subjects. We evaluate three DL architectures (ShallowConvNet, EEGNet, TCNet Fusion) on a 52-subject MI-EEG dataset, applying channel dropout to simulate structural variability and LayerCAM to visualize spatiotemporal patterns. Results demonstrate that among the three evaluated deep learning models for MI-EEG classification, TCNet Fusion achieved the highest peak accuracy of 74.4% using 32 EEG channels. At the same time, ShallowConvNet recorded the lowest peak at 72.7%, indicating TCNet Fusion’s robustness in moderate-density montages. 
Incorporating MCD notably improved model consistency and classification accuracy, especially in low-performing subjects where baseline accuracies were below 70%; EEGNet and TCNet Fusion showed accuracy improvements of up to 10% compared to their non-MCD versions. Furthermore, LayerCAM visualizations enhanced with MCD transformed diffuse spatial activation patterns into more focused and interpretable topographies, aligning more closely with known motor-related brain regions and thereby boosting both interpretability and classification reliability across varying subject performance levels. Our approach offers a unified solution for uncertainty-aware and interpretable MI classification. Full article
(This article belongs to the Special Issue EEG Horizons: Exploring Neural Dynamics and Neurocognitive Processes)
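Monte Carlo channel dropout at inference, as described above, amounts to repeatedly zeroing random channels, re-running the classifier, and averaging the predictions; the spread across passes serves as an uncertainty estimate. A sketch in which a toy linear "model" stands in for the actual networks (the dropout rate and shapes are assumptions):

```python
import numpy as np

def mc_channel_dropout_predict(model, x, p=0.2, n_samples=50, rng=None):
    """x: (channels, samples). Keep dropout active at test time: zero each
    channel with probability p, repeat, then average the softmax outputs."""
    if rng is None:
        rng = np.random.default_rng()
    preds = []
    for _ in range(n_samples):
        keep = rng.random(x.shape[0]) > p       # channels to keep this pass
        preds.append(model(x * keep[:, None]))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # prediction + uncertainty

rng = np.random.default_rng(4)
W = rng.standard_normal((4, 22))                # toy 4-class linear scorer

def model(x):                                   # softmax over channel means
    z = W @ x.mean(axis=1)
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.standard_normal((22, 256))
mean_p, std_p = mc_channel_dropout_predict(model, x, rng=rng)
print(mean_p.shape)                             # (4,)
```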

24 pages, 890 KiB  
Article
MCTGNet: A Multi-Scale Convolution and Hybrid Attention Network for Robust Motor Imagery EEG Decoding
by Huangtao Zhan, Xinhui Li, Xun Song, Zhao Lv and Ping Li
Bioengineering 2025, 12(7), 775; https://doi.org/10.3390/bioengineering12070775 - 17 Jul 2025
Viewed by 468
Abstract
Motor imagery (MI) EEG decoding is a key application in brain–computer interface (BCI) research. In cross-session scenarios, the generalization and robustness of decoding models are particularly challenging due to the complex nonlinear dynamics of MI-EEG signals in both temporal and frequency domains, as well as distributional shifts across different recording sessions. While multi-scale feature extraction is a promising approach for generalized and robust MI decoding, conventional classifiers (e.g., multilayer perceptrons) struggle to perform accurate classification when confronted with high-order, nonstationary feature distributions, which have become a major bottleneck for improving decoding performance. To address this issue, we propose an end-to-end decoding framework, MCTGNet, whose core idea is to formulate the classification process as a high-order function approximation task that jointly models both task labels and feature structures. By introducing a group rational Kolmogorov–Arnold Network (GR-KAN), the system enhances generalization and robustness under cross-session conditions. Experiments on the BCI Competition IV 2a and 2b datasets demonstrate that MCTGNet achieves average classification accuracies of 88.93% and 91.42%, respectively, outperforming state-of-the-art methods by 3.32% and 1.83%. Full article
(This article belongs to the Special Issue Brain Computer Interfaces for Motor Control and Motor Learning)

22 pages, 4882 KiB  
Article
Dual-Branch Spatio-Temporal-Frequency Fusion Convolutional Network with Transformer for EEG-Based Motor Imagery Classification
by Hao Hu, Zhiyong Zhou, Zihan Zhang and Wenyu Yuan
Electronics 2025, 14(14), 2853; https://doi.org/10.3390/electronics14142853 - 17 Jul 2025
Viewed by 356
Abstract
The decoding of motor imagery (MI) electroencephalogram (EEG) signals is crucial for motor control and rehabilitation. However, as feature extraction is the core component of the decoding process, traditional methods, often limited to single-feature domains or shallow time-frequency fusion, struggle to comprehensively capture the spatio-temporal-frequency characteristics of the signals, thereby limiting decoding accuracy. To address these limitations, this paper proposes a dual-branch neural network architecture with multi-domain feature fusion, the dual-branch spatio-temporal-frequency fusion convolutional network with Transformer (DB-STFFCNet). The DB-STFFCNet model consists of three modules: the spatiotemporal feature extraction module (STFE), the frequency feature extraction module (FFE), and the feature fusion and classification module. The STFE module employs a lightweight multi-dimensional attention network combined with a temporal Transformer encoder, capable of simultaneously modeling local fine-grained features and global spatiotemporal dependencies, effectively integrating spatiotemporal information and enhancing feature representation. The FFE module constructs a hierarchical feature refinement structure by leveraging the fast Fourier transform (FFT) and multi-scale frequency convolutions, while a frequency-domain Transformer encoder captures the global dependencies among frequency domain features, thus improving the model’s ability to represent key frequency information. Finally, the fusion module effectively consolidates the spatiotemporal and frequency features to achieve accurate classification. To evaluate the feasibility of the proposed method, experiments were conducted on the BCI Competition IV-2a and IV-2b public datasets, achieving accuracies of 83.13% and 89.54%, respectively, outperforming existing methods. This study provides a novel solution for joint time-frequency representation learning in EEG analysis. Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Biomedical Data Processing)

14 pages, 1563 KiB  
Article
High-Resolution Time-Frequency Feature Selection and EEG Augmented Deep Learning for Motor Imagery Recognition
by Mouna Bouchane, Wei Guo and Shuojin Yang
Electronics 2025, 14(14), 2827; https://doi.org/10.3390/electronics14142827 - 14 Jul 2025
Viewed by 394
Abstract
Motor imagery (MI)-based brain–computer interfaces (BCIs) have promising applications in neurorehabilitation for individuals who have lost mobility and control over parts of their body due to brain injuries, such as stroke patients. Accurately classifying MI tasks is essential for effective BCI performance, but this task remains challenging due to the complex and non-stationary nature of EEG signals. This study aims to improve the classification of left- and right-hand MI tasks by utilizing high-resolution time-frequency features extracted from EEG signals, enhanced with deep learning-based data augmentation techniques. We propose a novel deep learning framework named the Generalized Wavelet Transform-based Deep Convolutional Network (GDC-Net), which integrates multiple components. First, EEG signals recorded from the C3, C4, and Cz channels are transformed into detailed time-frequency representations using the Generalized Morse Wavelet Transform (GMWT). The selected features are then expanded using a Deep Convolutional Generative Adversarial Network (DCGAN) to generate additional synthetic data and address data scarcity. Finally, the augmented feature maps are fed into a hybrid CNN-LSTM architecture, enabling both spatial and temporal feature learning for improved classification. The proposed approach is evaluated on the BCI Competition IV dataset 2b. Experimental results showed that the mean classification accuracy and kappa value are 89.24% and 0.784, respectively, the highest among state-of-the-art algorithms. The integration of GMWT and DCGAN significantly enhances feature quality and model generalization, thereby improving classification performance. These findings demonstrate that GDC-Net delivers superior MI classification performance by effectively capturing high-resolution time-frequency dynamics and enhancing data diversity.
This approach holds strong potential for advancing MI-based BCI applications, especially in assistive and rehabilitation technologies. Full article
(This article belongs to the Section Computer Science & Engineering)
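The kappa value reported alongside accuracy here (and in several abstracts above) is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A pure-Python sketch with made-up labels:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n     # observed agreement
    ct, cp = Counter(y_true), Counter(y_pred)
    labels = set(ct) | set(cp)
    pe = sum(ct[l] * cp[l] for l in labels) / n ** 2         # chance agreement
    return (po - pe) / (1 - pe)

y_true = [0, 0, 1, 1, 2, 2, 0, 1]
y_pred = [0, 0, 1, 2, 2, 2, 0, 1]
print(round(cohens_kappa(y_true, y_pred), 3))                # 0.814
```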

25 pages, 6826 KiB  
Article
Multi-Class Classification Methods for EEG Signals of Lower-Limb Rehabilitation Movements
by Shuangling Ma, Zijie Situ, Xiaobo Peng, Zhangyang Li and Ying Huang
Biomimetics 2025, 10(7), 452; https://doi.org/10.3390/biomimetics10070452 - 9 Jul 2025
Viewed by 466
Abstract
Brain–Computer Interfaces (BCIs) enable direct communication between the brain and external devices by decoding motor intentions from EEG signals. However, the existing multi-class classification methods for motor imagery EEG (MI-EEG) signals are hindered by low signal quality and limited accuracy, restricting their practical application. This study focuses on rehabilitation training scenarios, aiming to capture the motor intentions of patients with partial or complete motor impairments (such as stroke survivors) and provide feedforward control commands for exoskeletons. This study developed an EEG acquisition protocol specifically for use with lower-limb rehabilitation motor imagery (MI). It systematically explored preprocessing techniques, feature extraction strategies, and multi-classification algorithms for multi-task MI-EEG signals. A novel 3D EEG convolutional neural network (3D EEG-CNN) that integrates time/frequency features is proposed. Evaluations on a self-collected dataset demonstrated that the proposed model achieved a peak classification accuracy of 66.32%, substantially outperforming conventional approaches and demonstrating notable progress in the multi-class classification of lower-limb motor imagery tasks. Full article
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces 2025)

27 pages, 1883 KiB  
Article
Advancing Fractal Dimension Techniques to Enhance Motor Imagery Tasks Using EEG for Brain–Computer Interface Applications
by Amr F. Mohamed and Vacius Jusas
Appl. Sci. 2025, 15(11), 6021; https://doi.org/10.3390/app15116021 - 27 May 2025
Viewed by 620
Abstract
The ongoing exploration of brain–computer interfaces (BCIs) provides deeper insights into the workings of the human brain. Motor imagery (MI) tasks, such as imagining movements of the tongue, left and right hands, or feet, can be identified through the analysis of electroencephalography (EEG) signals. The development of BCI systems opens up opportunities for their application in assistive devices, neurorehabilitation, and brain stimulation and brain feedback technologies, potentially helping patients to regain the ability to eat and drink without external help, move, or even speak. In this context, the accurate recognition and deciphering of a patient’s imagined intentions is critical for the development of effective BCI systems. Therefore, to distinguish motor tasks in a manner differing from the commonly used methods in this context, we propose a fractal dimension (FD)-based approach, which effectively captures the self-similarity and complexity of EEG signals. For this purpose, all four classes provided in the BCI Competition IV 2a dataset are utilized with nine different combinations of seven FD methods: Katz, Petrosian, Higuchi, box-counting, MFDFA, DFA, and correlation dimension. The resulting features are then used to train five machine learning models: linear, Gaussian, polynomial support vector machine, regression tree, and stochastic gradient descent. As a result, the proposed method obtained top-tier results, achieving 79.2% accuracy when using the Katz vs. box-counting vs. correlation dimension FD combination (KFD vs. BCFD vs. CDFD) classified by LinearSVM, thus outperforming the state-of-the-art TWSB method (achieving 79.1% accuracy). These results demonstrate that fractal dimension features can be applied to achieve higher classification accuracy for online/offline MI-BCIs, when compared to traditional methods. 
The application of these findings is expected to facilitate the enhancement of motor imagery brain–computer interface systems, which is a key issue faced by neuroscientists. Full article
(This article belongs to the Section Applied Neuroscience and Neural Engineering)
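Of the seven FD methods listed, Katz's is the simplest to state: FD = log10(n) / (log10(n) + log10(d/L)), where L is the total path length of the waveform, d its maximum excursion from the first sample, and n the number of steps. A sketch with illustrative example signals (not the paper's data):

```python
import math
import random

def katz_fd(signal):
    """Katz fractal dimension of a 1-D signal: 1.0 for a straight line,
    larger for more jagged, complex waveforms."""
    n = len(signal) - 1                                         # number of steps
    L = sum(abs(signal[i + 1] - signal[i]) for i in range(n))   # total path length
    d = max(abs(s - signal[0]) for s in signal[1:])             # max excursion
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

random.seed(0)
ramp = list(range(100))                         # straight line
noise = [random.random() for _ in range(100)]   # jagged waveform
print(round(katz_fd(ramp), 3))                  # 1.0
print(katz_fd(noise) > 1.0)                     # True
```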

18 pages, 1850 KiB  
Article
Cross-Subject Motor Imagery Electroencephalogram Decoding with Domain Generalization
by Yanyan Zheng, Senxiang Wu, Jie Chen, Qiong Yao and Siyu Zheng
Bioengineering 2025, 12(5), 495; https://doi.org/10.3390/bioengineering12050495 - 7 May 2025
Viewed by 826
Abstract
Decoding motor imagery (MI) electroencephalogram (EEG) signals in the brain–computer interface (BCI) can assist patients in accelerating motor function recovery. To realize the implementation of plug-and-play functionality for MI-BCI applications, cross-subject models are employed to alleviate time-consuming calibration and avoid additional model training for target subjects by utilizing EEG data from source subjects. However, the diversity in data distribution among subjects limits the model’s robustness. In this study, we investigate a cross-subject MI-EEG decoding model with domain generalization based on a deep learning neural network that extracts domain-invariant features from source subjects. Firstly, a knowledge distillation framework is adopted to obtain the internally invariant representations based on spectral features fusion. Then, the correlation alignment approach aligns mutually invariant representations between each pair of sub-source domains. In addition, we use distance regularization on two kinds of invariant features to enhance generalizable information. To assess the effectiveness of our approach, experiments are conducted on the BCI Competition IV 2a and the Korean University dataset. The results demonstrate that the proposed model achieves 8.93% and 4.4% accuracy improvements on two datasets, respectively, compared with current state-of-the-art models, confirming that the proposed approach can effectively extract invariant features from source subjects and generalize to the unseen target distribution, hence paving the way for effective implementation of the plug-and-play functionality in MI-BCI applications. Full article
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)
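The correlation alignment step described above has a well-known closed form: whiten the source features and re-color them with the target covariance. The sketch below is a minimal NumPy illustration of that transform, not the authors' implementation; the function name, feature dimensions, and the `eps` regularizer are assumptions for the example.

```python
import numpy as np

def coral_transform(source, target, eps=1e-5):
    """Align second-order statistics of `source` features to `target`.

    source, target: (n_samples, n_features) feature matrices.
    Returns source features re-colored to match the target covariance.
    """
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def mat_pow(m, p):
        # Matrix power of a symmetric PSD matrix via eigendecomposition.
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(np.maximum(vals, eps) ** p) @ vecs.T

    # A = Cs^{-1/2} Ct^{1/2}: whiten with source stats, re-color with target stats.
    a = mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)
    return (source - source.mean(0)) @ a + target.mean(0)

rng = np.random.default_rng(0)
src = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))  # correlated "sub-source" features
tgt = rng.normal(size=(200, 4))                            # another sub-source domain
aligned = coral_transform(src, tgt)
```

In a domain-generalization setting such as the one described, this alignment would be applied between each pair of sub-source domains so that the learned representations share second-order statistics.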

24 pages, 3207 KiB  
Article
A Novel 3D Approach with a CNN and Swin Transformer for Decoding EEG-Based Motor Imagery Classification
by Xin Deng, Huaxiang Huo, Lijiao Ai, Daijiang Xu and Chenhui Li
Sensors 2025, 25(9), 2922; https://doi.org/10.3390/s25092922 - 5 May 2025
Viewed by 941
Abstract
Motor imagery (MI) is a crucial research field within the brain–computer interface (BCI) domain. It enables patients with muscle or neural damage to control external devices and achieve movement functions simply by imagining bodily motions. Despite the significant clinical and application value of MI-BCI technology, accurately decoding high-dimensional, low signal-to-noise ratio (SNR) electroencephalography (EEG) signals remains challenging. Moreover, traditional deep learning approaches exhibit limitations in processing EEG signals, particularly in capturing the intrinsic correlations between electrode channels and long-distance temporal dependencies. To address these challenges, this research introduces a novel end-to-end decoding network that integrates convolutional neural networks (CNNs) and a Swin Transformer, aiming to enhance the classification accuracy of the MI paradigm in EEG signals. The approach transforms EEG signals into a three-dimensional data structure, applying one-dimensional convolutions along the temporal dimension and two-dimensional convolutions across the EEG electrode distribution for initial spatio-temporal feature extraction, followed by deep feature exploration using a 3D Swin Transformer module. Experimental results show that on the BCI Competition IV-2a dataset, the proposed method achieves 83.99% classification accuracy, significantly outperforming existing deep learning methods. This finding underscores the efficacy of combining a CNN and Swin Transformer in a 3D data space for processing high-dimensional, low-SNR EEG signals, offering a new perspective for the future development of MI-BCI. Future research could explore the applicability of this method across various BCI tasks and its potential clinical implementations.
(This article belongs to the Section Intelligent Sensors)
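The "3D data structure" idea above amounts to placing the electrodes on a 2D scalp grid and stacking time as the third axis, so that 2D convolutions see spatial neighborhoods and 1D convolutions see the time course. The sketch below illustrates this with a hypothetical 6×7 grid for the 22 electrodes of BCI IV-2a and a moving-average stand-in for the temporal convolution; the actual montage mapping and kernels in the paper may differ.

```python
import numpy as np

# Hypothetical 6x7 scalp grid for the 22 electrodes of BCI IV-2a
# (-1 marks grid positions with no electrode; the real montage may differ).
GRID = np.array([
    [-1, -1, -1,  0, -1, -1, -1],
    [-1,  1,  2,  3,  4,  5, -1],
    [ 6,  7,  8,  9, 10, 11, 12],
    [-1, 13, 14, 15, 16, 17, -1],
    [-1, -1, 18, 19, 20, -1, -1],
    [-1, -1, -1, 21, -1, -1, -1],
])

def to_3d(eeg):
    """Map (n_trials, 22, n_samples) EEG onto a (n_trials, n_samples, 6, 7) tensor."""
    n_trials, _, n_samples = eeg.shape
    out = np.zeros((n_trials, n_samples, *GRID.shape))
    for r in range(GRID.shape[0]):
        for c in range(GRID.shape[1]):
            ch = GRID[r, c]
            if ch >= 0:
                out[:, :, r, c] = eeg[:, ch, :]
    return out

def temporal_conv(x, k=5):
    """Depthwise 1-D convolution along the time axis ('valid' moving average)."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda t: np.convolve(t, kernel, mode="valid"), 1, x)

eeg = np.random.default_rng(1).normal(size=(4, 22, 250))
vol = to_3d(eeg)           # spatial grid preserved -> (4, 250, 6, 7)
feat = temporal_conv(vol)  # time shrinks by k-1   -> (4, 246, 6, 7)
```

The payoff of this layout is that a subsequent 2D (or windowed 3D attention) operator sees physically adjacent electrodes as adjacent pixels, which a flat channel ordering cannot guarantee.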

21 pages, 7640 KiB  
Article
MCL-SWT: Mirror Contrastive Learning with Sliding Window Transformer for Subject-Independent EEG Recognition
by Qi Mao, Hongke Zhu, Wenyao Yan, Yu Zhao, Xinhong Hei and Jing Luo
Brain Sci. 2025, 15(5), 460; https://doi.org/10.3390/brainsci15050460 - 27 Apr 2025
Viewed by 777
Abstract
Background: In brain–computer interfaces (BCIs), transformer-based models have found extensive application in motor imagery (MI)-based EEG signal recognition. However, for subject-independent EEG recognition, these models face two challenges: low sensitivity to the spatial dynamics of neural activity and difficulty balancing high temporal resolution features with manageable computational complexity. This work aims to address both issues. Methods: We introduce Mirror Contrastive Learning with Sliding Window Transformer (MCL-SWT). Inspired by the fact that left/right hand motor imagery induces event-related desynchronization (ERD) in the contralateral sensorimotor cortex, we develop a mirror contrastive loss function. It separates the feature spaces of EEG signals from contralateral ERD locations while curtailing variability among signals sharing similar ERD locations. The Sliding Window Transformer computes self-attention scores over high temporal resolution features, enabling efficient capture of global temporal dependencies. Results: Evaluated on benchmark datasets for subject-independent MI EEG recognition, MCL-SWT achieves classification accuracies of 66.48% and 75.62%, outperforming state-of-the-art models by 2.82% and 2.17%, respectively. Ablation studies validate the efficacy of both the mirror contrastive loss and the sliding window mechanism. Conclusions: These findings underscore MCL-SWT’s potential as a robust, interpretable framework for subject-independent EEG recognition. By addressing these challenges, MCL-SWT could significantly advance BCI technology.
(This article belongs to the Special Issue The Application of EEG in Neurorehabilitation)
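The mirror idea above can be made concrete: swapping left/right electrode pairs turns a left-hand trial into something resembling a right-hand trial, and a margin-based contrastive loss then pulls same-side features together and pushes contralateral ones apart. The sketch below is a minimal NumPy illustration of that pairing logic, not the authors' MCL-SWT loss; the electrode-pair indices and the margin value are assumptions for the example.

```python
import numpy as np

def mirror_eeg(x, left_idx, right_idx):
    """Swap left/right electrode pairs; mirroring a left-hand trial makes it
    resemble a right-hand trial (and vice versa)."""
    y = x.copy()
    y[..., left_idx, :], y[..., right_idx, :] = x[..., right_idx, :], x[..., left_idx, :]
    return y

def mirror_contrastive_loss(feats, labels, margin=1.0):
    """Pull together features with the same (assumed) ERD side; push apart
    features from contralateral sides by at least `margin`."""
    loss, n = 0.0, 0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            d = np.linalg.norm(feats[i] - feats[j])
            if labels[i] == labels[j]:
                loss += d ** 2                      # same side: minimize distance
            else:
                loss += max(0.0, margin - d) ** 2   # contralateral: enforce margin
            n += 1
    return loss / n
```

In training, each mirrored trial would enter the batch with its class label flipped, supplying extra positive and negative pairs for the loss.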

25 pages, 2026 KiB  
Article
EEG Signal Prediction for Motor Imagery Classification in Brain–Computer Interfaces
by Óscar Wladimir Gómez-Morales, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and Cesar German Castellanos-Dominguez
Sensors 2025, 25(7), 2259; https://doi.org/10.3390/s25072259 - 3 Apr 2025
Cited by 1 | Viewed by 2442
Abstract
Brain–computer interfaces (BCIs) based on motor imagery (MI) generally require EEG signals recorded from a large number of electrodes distributed across the cranial surface to achieve accurate MI classification. Not only does this entail long preparation times and high costs, but it also carries the risk of losing valuable information when an electrode is damaged, further limiting practical applicability. In this study, a signal prediction-based method is proposed to achieve high accuracy in MI classification using EEG signals recorded from only a small number of electrodes. The signal prediction model was constructed using the elastic net regression technique, allowing the full set of 22 EEG channels to be estimated from just 8 centrally located channels. The predicted full-channel EEG signals were then used for feature extraction and MI classification. The results indicate notable efficacy of the proposed prediction method, with an average classification accuracy of 78.16%. The proposed method outperformed the traditional approach based on few-channel EEG and also achieved better results than the traditional method based on full-channel EEG. Although accuracy varies among subjects, from 62.30% to an impressive 95.24%, these results indicate the method’s capability to provide accurate estimates from a reduced set of electrodes. This performance highlights its potential for practical MI-based BCI applications, mitigating the time and cost constraints associated with systems that require a high density of electrodes.
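The channel-prediction step above reduces to fitting one elastic-net regression per full-montage channel, with the few recorded channels as predictors. The sketch below is a minimal NumPy illustration using plain coordinate descent, not the authors' pipeline; the `alpha`/`l1_ratio` values and the absence of an intercept are assumptions for the example.

```python
import numpy as np

def elastic_net_cd(X, y, alpha=0.01, l1_ratio=0.5, n_iter=200):
    """Elastic-net regression via coordinate descent (no intercept).
    Objective: 1/(2n)||y - Xw||^2 + alpha*(l1_ratio*||w||_1
               + 0.5*(1 - l1_ratio)*||w||^2).
    """
    n, p = X.shape
    w = np.zeros(p)
    l1 = alpha * l1_ratio
    l2 = alpha * (1 - l1_ratio)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]   # residual excluding feature j
            rho = X[:, j] @ r / n
            # Soft-threshold (L1), then shrink (L2).
            w[j] = np.sign(rho) * max(abs(rho) - l1, 0.0) / (col_sq[j] + l2)
    return w

def predict_missing_channels(few, full, few_test):
    """Fit one elastic-net model per full-montage channel from the few recorded
    channels, then reconstruct all channels for unseen trials."""
    W = np.column_stack([elastic_net_cd(few, full[:, k]) for k in range(full.shape[1])])
    return few_test @ W

rng = np.random.default_rng(2)
few = rng.normal(size=(200, 8))       # 8 recorded (central) channels
W_true = rng.normal(size=(8, 22))     # synthetic linear mixing to 22 channels
full = few @ W_true
few_test = rng.normal(size=(50, 8))
pred = predict_missing_channels(few, full, few_test)
```

On real EEG the mapping is of course only approximately linear, so reconstruction quality varies by subject, consistent with the per-subject accuracy spread reported above.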
