Search Results (227)

Search Parameters:
Keywords = Motor Imagery (MI)

24 pages, 4294 KiB  
Article
Post Hoc Event-Related Potential Analysis of Kinesthetic Motor Imagery-Based Brain-Computer Interface Control of Anthropomorphic Robotic Arms
by Miltiadis Spanos, Theodora Gazea, Vasileios Triantafyllidis, Konstantinos Mitsopoulos, Aristidis Vrahatis, Maria Hadjinicolaou, Panagiotis D. Bamidis and Alkinoos Athanasiou
Electronics 2025, 14(15), 3106; https://doi.org/10.3390/electronics14153106 - 4 Aug 2025
Abstract
Kinesthetic motor imagery (KMI), the mental rehearsal of a motor task without its actual performance, constitutes one of the most common techniques used for brain–computer interface (BCI) control for movement-related tasks. The effect of neural injury on motor cortical activity during execution and imagery remains under investigation in terms of activations, processing of motor onset, and BCI control. The current work aims to conduct a post hoc investigation of the event-related potential (ERP)-based processing of KMI during BCI control of anthropomorphic robotic arms by spinal cord injury (SCI) patients and healthy control participants in a completed clinical trial. For this purpose, we analyzed 14-channel electroencephalography (EEG) data from 10 patients with cervical SCI and 8 healthy individuals, recorded through Emotiv EPOC BCI, as the participants attempted to move anthropomorphic robotic arms using KMI. EEG data were pre-processed by band-pass filtering (8–30 Hz) and independent component analysis (ICA). ERPs were calculated at the sensor space, and analysis of variance (ANOVA) was used to determine potential differences between groups. Our results showed no statistically significant differences between SCI patients and healthy control groups regarding mean amplitude and latency (p < 0.05) across the recorded channels at various time points during stimulus presentation. Notably, no significant differences were observed in ERP components, except for the P200 component at the T8 channel. These findings suggest that brain circuits associated with motor planning and sensorimotor processes are not disrupted due to anatomical damage following SCI. The temporal dynamics of motor-related areas—particularly in channels like F3, FC5, and F7—indicate that essential motor imagery (MI) circuits remain functional. Limitations include the relatively small sample size that may hamper the generalization of our findings, the sensor-space analysis that restricts anatomical specificity and neurophysiological interpretations, and the use of a low-density EEG headset, lacking coverage over key motor regions. Non-invasive EEG-based BCI systems for motor rehabilitation in SCI patients could effectively leverage intact neural circuits to promote neuroplasticity and facilitate motor recovery. Future work should include validation against larger, longitudinal, high-density, source-space EEG datasets. Full article
(This article belongs to the Special Issue EEG Analysis and Brain–Computer Interface (BCI) Technology)
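The processing chain described in the abstract above (8–30 Hz band-pass filtering, ICA, sensor-space ERPs, per-channel ANOVA) maps onto standard EEG tooling. The following is a minimal MNE-Python sketch of that generic pipeline, not the authors' code; file names, event codes, and the ERP time window are assumptions.

```python
# Sketch of a sensor-space ERP pipeline: band-pass filter, ICA cleanup,
# epoching, averaging, and a per-channel one-way ANOVA between groups.
import mne
import numpy as np
from scipy.stats import f_oneway

def erp_per_subject(raw_fname, event_id={"kmi": 1}, tmin=-0.2, tmax=0.8):
    raw = mne.io.read_raw_fif(raw_fname, preload=True)  # hypothetical file name
    raw.filter(8.0, 30.0)                               # band-pass 8-30 Hz
    ica = mne.preprocessing.ICA(n_components=14, random_state=0)
    ica.fit(raw)
    raw = ica.apply(raw)   # removes components listed in ica.exclude (none by default)
    events = mne.find_events(raw)
    epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
                        baseline=(None, 0), preload=True)
    return epochs.average()                              # sensor-space ERP (Evoked)

def channelwise_anova(evokeds_sci, evokeds_ctrl, t_window=(0.15, 0.25)):
    """One-way ANOVA on mean ERP amplitude per channel between two groups."""
    def mean_amp(ev):
        sel = (ev.times >= t_window[0]) & (ev.times <= t_window[1])
        return ev.data[:, sel].mean(axis=1)               # one value per channel
    a = np.array([mean_amp(ev) for ev in evokeds_sci])
    b = np.array([mean_amp(ev) for ev in evokeds_ctrl])
    return [f_oneway(a[:, ch], b[:, ch]) for ch in range(a.shape[1])]
```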

29 pages, 2830 KiB  
Article
BCINetV1: Integrating Temporal and Spectral Focus Through a Novel Convolutional Attention Architecture for MI EEG Decoding
by Muhammad Zulkifal Aziz, Xiaojun Yu, Xinran Guo, Xinming He, Binwen Huang and Zeming Fan
Sensors 2025, 25(15), 4657; https://doi.org/10.3390/s25154657 - 27 Jul 2025
Viewed by 365
Abstract
Motor imagery (MI) electroencephalograms (EEGs) are pivotal cortical potentials reflecting cortical activity during imagined motor actions, widely leveraged for brain-computer interface (BCI) system development. However, effectively decoding these MI EEG signals is often overshadowed by flawed methods in signal processing, deep learning methods that are clinically unexplained, and highly inconsistent performance across different datasets. We propose BCINetV1, a new framework for MI EEG decoding to address the aforementioned challenges. The BCINetV1 utilizes three innovative components: a temporal convolution-based attention block (T-CAB) and a spectral convolution-based attention block (S-CAB), both driven by a new convolutional self-attention (ConvSAT) mechanism to identify key non-stationary temporal and spectral patterns in the EEG signals. Lastly, a squeeze-and-excitation block (SEB) intelligently combines those identified tempo-spectral features for accurate, stable, and contextually aware MI EEG classification. Evaluated upon four diverse datasets containing 69 participants, BCINetV1 consistently achieved the highest average accuracies of 98.6% (Dataset 1), 96.6% (Dataset 2), 96.9% (Dataset 3), and 98.4% (Dataset 4). This research demonstrates that BCINetV1 is computationally efficient, extracts clinically vital markers, effectively handles the non-stationarity of EEG data, and shows a clear advantage over existing methods, marking a significant step forward for practical BCI applications. Full article
(This article belongs to the Special Issue Advanced Biomedical Imaging and Signal Processing)

34 pages, 3704 KiB  
Article
Uncertainty-Aware Deep Learning for Robust and Interpretable MI EEG Using Channel Dropout and LayerCAM Integration
by Óscar Wladimir Gómez-Morales, Sofia Escalante-Escobar, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Appl. Sci. 2025, 15(14), 8036; https://doi.org/10.3390/app15148036 - 18 Jul 2025
Viewed by 298
Abstract
Motor Imagery (MI) classification plays a crucial role in enhancing the performance of brain–computer interface (BCI) systems, thereby enabling advanced neurorehabilitation and the development of intuitive brain-controlled technologies. However, MI classification using electroencephalography (EEG) is hindered by spatiotemporal variability and the limited interpretability of deep learning (DL) models. To mitigate these challenges, dropout techniques are employed as regularization strategies. Nevertheless, the removal of critical EEG channels, particularly those from the sensorimotor cortex, can result in substantial spatial information loss, especially under limited training data conditions. This issue, compounded by high EEG variability in subjects with poor performance, hinders generalization and reduces the interpretability and clinical trust in MI-based BCI systems. This study proposes a novel framework integrating channel dropout—a variant of Monte Carlo dropout (MCD)—with class activation maps (CAMs) to enhance robustness and interpretability in MI classification. This integration represents a significant step forward by offering, for the first time, a dedicated solution to concurrently mitigate spatiotemporal uncertainty and provide fine-grained neurophysiologically relevant interpretability in motor imagery classification, particularly demonstrating refined spatial attention in challenging low-performing subjects. We evaluate three DL architectures (ShallowConvNet, EEGNet, TCNet Fusion) on a 52-subject MI-EEG dataset, applying channel dropout to simulate structural variability and LayerCAM to visualize spatiotemporal patterns. Results demonstrate that among the three evaluated deep learning models for MI-EEG classification, TCNet Fusion achieved the highest peak accuracy of 74.4% using 32 EEG channels. At the same time, ShallowConvNet recorded the lowest peak at 72.7%, indicating TCNet Fusion’s robustness in moderate-density montages. Incorporating MCD notably improved model consistency and classification accuracy, especially in low-performing subjects where baseline accuracies were below 70%; EEGNet and TCNet Fusion showed accuracy improvements of up to 10% compared to their non-MCD versions. Furthermore, LayerCAM visualizations enhanced with MCD transformed diffuse spatial activation patterns into more focused and interpretable topographies, aligning more closely with known motor-related brain regions and thereby boosting both interpretability and classification reliability across varying subject performance levels. Our approach offers a unified solution for uncertainty-aware, and interpretable MI classification. Full article
(This article belongs to the Special Issue EEG Horizons: Exploring Neural Dynamics and Neurocognitive Processes)
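The channel dropout described above is a Monte Carlo dropout variant applied to whole EEG channels. Below is an illustrative PyTorch sketch of that idea, not the authors' implementation: channels are zeroed stochastically over several test-time passes and the softmax outputs are averaged; the dropout rate and number of passes are arbitrary choices.

```python
# Monte Carlo channel dropout at inference: average class probabilities over
# T stochastic forward passes in which whole EEG channels are randomly zeroed.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_channel_dropout_predict(model, x, n_passes=30, p_drop=0.2):
    """x: (batch, n_channels, n_samples) EEG tensor -> (mean probs, std across passes)."""
    model.eval()                                   # stochasticity comes from the mask below
    probs = []
    for _ in range(n_passes):
        keep = (torch.rand(x.shape[0], x.shape[1], 1, device=x.device) > p_drop).float()
        x_masked = x * keep / (1.0 - p_drop)       # inverted-dropout rescaling
        probs.append(F.softmax(model(x_masked), dim=-1))
    probs = torch.stack(probs)                     # (n_passes, batch, n_classes)
    return probs.mean(dim=0), probs.std(dim=0)     # predictive mean and uncertainty
```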

24 pages, 890 KiB  
Article
MCTGNet: A Multi-Scale Convolution and Hybrid Attention Network for Robust Motor Imagery EEG Decoding
by Huangtao Zhan, Xinhui Li, Xun Song, Zhao Lv and Ping Li
Bioengineering 2025, 12(7), 775; https://doi.org/10.3390/bioengineering12070775 - 17 Jul 2025
Viewed by 363
Abstract
Motor imagery (MI) EEG decoding is a key application in brain–computer interface (BCI) research. In cross-session scenarios, the generalization and robustness of decoding models are particularly challenging due to the complex nonlinear dynamics of MI-EEG signals in both temporal and frequency domains, as well as distributional shifts across different recording sessions. While multi-scale feature extraction is a promising approach for generalized and robust MI decoding, conventional classifiers (e.g., multilayer perceptrons) struggle to perform accurate classification when confronted with high-order, nonstationary feature distributions, which have become a major bottleneck for improving decoding performance. To address this issue, we propose an end-to-end decoding framework, MCTGNet, whose core idea is to formulate the classification process as a high-order function approximation task that jointly models both task labels and feature structures. By introducing a group rational Kolmogorov–Arnold Network (GR-KAN), the system enhances generalization and robustness under cross-session conditions. Experiments on the BCI Competition IV 2a and 2b datasets demonstrate that MCTGNet achieves average classification accuracies of 88.93% and 91.42%, respectively, outperforming state-of-the-art methods by 3.32% and 1.83%. Full article
(This article belongs to the Special Issue Brain Computer Interfaces for Motor Control and Motor Learning)

22 pages, 4882 KiB  
Article
Dual-Branch Spatio-Temporal-Frequency Fusion Convolutional Network with Transformer for EEG-Based Motor Imagery Classification
by Hao Hu, Zhiyong Zhou, Zihan Zhang and Wenyu Yuan
Electronics 2025, 14(14), 2853; https://doi.org/10.3390/electronics14142853 - 17 Jul 2025
Viewed by 266
Abstract
The decoding of motor imagery (MI) electroencephalogram (EEG) signals is crucial for motor control and rehabilitation. However, as feature extraction is the core component of the decoding process, traditional methods, often limited to single-feature domains or shallow time-frequency fusion, struggle to comprehensively capture the spatio-temporal-frequency characteristics of the signals, thereby limiting decoding accuracy. To address these limitations, this paper proposes a dual-branch neural network architecture with multi-domain feature fusion, the dual-branch spatio-temporal-frequency fusion convolutional network with Transformer (DB-STFFCNet). The DB-STFFCNet model consists of three modules: the spatiotemporal feature extraction module (STFE), the frequency feature extraction module (FFE), and the feature fusion and classification module. The STFE module employs a lightweight multi-dimensional attention network combined with a temporal Transformer encoder, capable of simultaneously modeling local fine-grained features and global spatiotemporal dependencies, effectively integrating spatiotemporal information and enhancing feature representation. The FFE module constructs a hierarchical feature refinement structure by leveraging the fast Fourier transform (FFT) and multi-scale frequency convolutions, while a frequency-domain Transformer encoder captures the global dependencies among frequency domain features, thus improving the model’s ability to represent key frequency information. Finally, the fusion module effectively consolidates the spatiotemporal and frequency features to achieve accurate classification. To evaluate the feasibility of the proposed method, experiments were conducted on the BCI Competition IV-2a and IV-2b public datasets, achieving accuracies of 83.13% and 89.54%, respectively, outperforming existing methods. This study provides a novel solution for joint time-frequency representation learning in EEG analysis. Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Biomedical Data Processing)
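The FFE branch above derives its features from the FFT. As a rough stand-in for that idea, the sketch below computes Welch band powers per channel in the mu and beta bands; the band edges, sampling rate, and window length are assumptions rather than details from the paper.

```python
# Spectral features for MI EEG: Welch PSD per channel, reduced to mean power
# in the mu (8-13 Hz) and beta (13-30 Hz) bands.
import numpy as np
from scipy.signal import welch

def band_power_features(eeg, fs=250.0, bands=((8, 13), (13, 30))):
    """eeg: (n_channels, n_samples) array -> (n_channels * n_bands,) feature vector."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs))   # psd: (n_channels, n_freqs)
    feats = []
    for lo, hi in bands:
        idx = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, idx].mean(axis=1))        # mean PSD in the band
    return np.concatenate(feats)
```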

14 pages, 1563 KiB  
Article
High-Resolution Time-Frequency Feature Selection and EEG Augmented Deep Learning for Motor Imagery Recognition
by Mouna Bouchane, Wei Guo and Shuojin Yang
Electronics 2025, 14(14), 2827; https://doi.org/10.3390/electronics14142827 - 14 Jul 2025
Viewed by 300
Abstract
Motor Imagery (MI) based Brain Computer Interfaces (BCIs) have promising applications in neurorehabilitation for individuals who have lost mobility and control over parts of their body due to brain injuries, such as stroke patients. Accurately classifying MI tasks is essential for effective BCI performance, but this task remains challenging due to the complex and non-stationary nature of EEG signals. This study aims to improve the classification of left and right-hand MI tasks by utilizing high-resolution time-frequency features extracted from EEG signals, enhanced with deep learning-based data augmentation techniques. We propose a novel deep learning framework named the Generalized Wavelet Transform-based Deep Convolutional Network (GDC-Net), which integrates multiple components. First, EEG signals recorded from the C3, C4, and Cz channels are transformed into detailed time-frequency representations using the Generalized Morse Wavelet Transform (GMWT). The selected features are then expanded using a Deep Convolutional Generative Adversarial Network (DCGAN) to generate additional synthetic data and address data scarcity. Finally, the augmented feature maps data are subsequently fed into a hybrid CNN-LSTM architecture, enabling both spatial and temporal feature learning for improved classification. The proposed approach is evaluated on the BCI Competition IV dataset 2b. Experimental results showed that the mean classification accuracy and Kappa value are 89.24% and 0.784, respectively, making them the highest compared to the state-of-the-art algorithms. The integration of GMWT and DCGAN significantly enhances feature quality and model generalization, thereby improving classification performance. These findings demonstrate that GDC-Net delivers superior MI classification performance by effectively capturing high-resolution time-frequency dynamics and enhancing data diversity. This approach holds strong potential for advancing MI-based BCI applications, especially in assistive and rehabilitation technologies. Full article
(This article belongs to the Section Computer Science & Engineering)
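The front end above builds time-frequency maps with the generalized Morse wavelet transform. PyWavelets does not provide Morse wavelets, so the sketch below substitutes a complex Morlet CWT for the C3/C4/Cz channels as a stand-in; the sampling rate, frequency grid, and wavelet parameters are assumptions.

```python
# Time-frequency scalograms for a 3-channel (C3, C4, Cz) MI trial using a
# complex Morlet CWT as a stand-in for the generalized Morse wavelet.
import numpy as np
import pywt

def tf_maps(eeg, fs=250.0, fmin=8.0, fmax=30.0, n_freqs=45, wavelet="cmor1.5-1.0"):
    """eeg: (3, n_samples) -> (3, n_freqs, n_samples) magnitude scalograms."""
    freqs = np.linspace(fmin, fmax, n_freqs)
    scales = pywt.central_frequency(wavelet) * fs / freqs   # map target freqs to scales
    maps = []
    for ch in eeg:
        coef, _ = pywt.cwt(ch, scales, wavelet, sampling_period=1.0 / fs)
        maps.append(np.abs(coef))
    return np.stack(maps)
```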

25 pages, 6826 KiB  
Article
Multi-Class Classification Methods for EEG Signals of Lower-Limb Rehabilitation Movements
by Shuangling Ma, Zijie Situ, Xiaobo Peng, Zhangyang Li and Ying Huang
Biomimetics 2025, 10(7), 452; https://doi.org/10.3390/biomimetics10070452 - 9 Jul 2025
Viewed by 381
Abstract
Brain–Computer Interfaces (BCIs) enable direct communication between the brain and external devices by decoding motor intentions from EEG signals. However, the existing multi-class classification methods for motor imagery EEG (MI-EEG) signals are hindered by low signal quality and limited accuracy, restricting their practical application. This study focuses on rehabilitation training scenarios, aiming to capture the motor intentions of patients with partial or complete motor impairments (such as stroke survivors) and provide feedforward control commands for exoskeletons. This study developed an EEG acquisition protocol specifically for use with lower-limb rehabilitation motor imagery (MI). It systematically explored preprocessing techniques, feature extraction strategies, and multi-classification algorithms for multi-task MI-EEG signals. A novel 3D EEG convolutional neural network (3D EEG-CNN) that integrates time/frequency features is proposed. Evaluations on a self-collected dataset demonstrated that the proposed model achieved a peak classification accuracy of 66.32%, substantially outperforming conventional approaches and demonstrating notable progress in the multi-class classification of lower-limb motor imagery tasks. Full article
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces 2025)

18 pages, 2404 KiB  
Article
Treatment-Associated Neuroplastic Changes in People with Stroke-Associated Ataxia—An fMRI Study
by Patricia Meier, Christian Siedentopf, Lukas Mayer-Suess, Michael Knoflach, Stefan Kiechl, Gudrun Sylvest Schönherr, Astrid E. Grams, Elke R. Gizewski, Claudia Lamina, Malik Galijasevic and Ruth Steiger
Neurol. Int. 2025, 17(6), 84; https://doi.org/10.3390/neurolint17060084 - 29 May 2025
Viewed by 1091
Abstract
Background/Objectives: In consideration of the significance of the pursuit of training-induced neuroplastic changes in the stroke population, who are reliant on neurorehabilitation treatment for the restoration of neuronal function, the objectives of this trial were to investigate fMRI paradigms for acute stroke patients with ataxic symptoms, to follow up on changes in motor function and balance due to recovery and rehabilitation, and to investigate the different effects of two treatment methods on neuronal plasticity. Methods: Therefore, fMRI-paradigms foot tapping and the motor imagery (MI) of a balancing task (tandem walking) were employed. Results: The paradigms investigated were suitable for ataxic stroke patients to monitor changes in neuroplasticity while revealing increased activity in the primary motor cortex (M1) and the cerebellum over 3 months of treatment. Furthermore, analysis of the more complex balance task revealed augmented activation of association areas due to training. Coordination exercises, constituting a specific treatment of ataxic symptoms, indicate more consolidated brain activations, corresponding to a faster motor learning process. Activation within Brodmann Area 7 has been prominent among all paradigms, indicating a special importance of this region for coordinative functions. Conclusions: Further studies are needed to confirm our results in larger patient groups. Clinical Trial Registration: German Clinical Trials Registry (drks.de). Identifier: DRKS00020825. Registered 16.07.2020. Full article

27 pages, 1883 KiB  
Article
Advancing Fractal Dimension Techniques to Enhance Motor Imagery Tasks Using EEG for Brain–Computer Interface Applications
by Amr F. Mohamed and Vacius Jusas
Appl. Sci. 2025, 15(11), 6021; https://doi.org/10.3390/app15116021 - 27 May 2025
Viewed by 534
Abstract
The ongoing exploration of brain–computer interfaces (BCIs) provides deeper insights into the workings of the human brain. Motor imagery (MI) tasks, such as imagining movements of the tongue, left and right hands, or feet, can be identified through the analysis of electroencephalography (EEG) signals. The development of BCI systems opens up opportunities for their application in assistive devices, neurorehabilitation, and brain stimulation and brain feedback technologies, potentially helping patients to regain the ability to eat and drink without external help, move, or even speak. In this context, the accurate recognition and deciphering of a patient’s imagined intentions is critical for the development of effective BCI systems. Therefore, to distinguish motor tasks in a manner differing from the commonly used methods in this context, we propose a fractal dimension (FD)-based approach, which effectively captures the self-similarity and complexity of EEG signals. For this purpose, all four classes provided in the BCI Competition IV 2a dataset are utilized with nine different combinations of seven FD methods: Katz, Petrosian, Higuchi, box-counting, MFDFA, DFA, and correlation dimension. The resulting features are then used to train five machine learning models: linear, Gaussian, polynomial support vector machine, regression tree, and stochastic gradient descent. As a result, the proposed method obtained top-tier results, achieving 79.2% accuracy when using the Katz vs. box-counting vs. correlation dimension FD combination (KFD vs. BCFD vs. CDFD) classified by LinearSVM, thus outperforming the state-of-the-art TWSB method (achieving 79.1% accuracy). These results demonstrate that fractal dimension features can be applied to achieve higher classification accuracy for online/offline MI-BCIs, when compared to traditional methods. The application of these findings is expected to facilitate the enhancement of motor imagery brain–computer interface systems, which is a key issue faced by neuroscientists. Full article
(This article belongs to the Section Applied Neuroscience and Neural Engineering)
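Among the seven fractal dimension measures listed above, the Katz FD is the most compact to reproduce. The sketch below is the textbook Katz (1988) formula applied to a single channel, not code from the paper; one value per channel gives a simple feature vector.

```python
# Katz fractal dimension of a 1-D signal: D = log10(n) / (log10(n) + log10(d / L)),
# where L is the total curve length, d the maximum distance from the first
# sample, and n the number of steps.
import numpy as np

def katz_fd(x):
    x = np.asarray(x, dtype=float)
    n = x.size - 1                              # number of steps in the waveform
    L = np.sum(np.abs(np.diff(x)))              # total curve length
    d = np.max(np.abs(x - x[0]))                # max distance from the first sample
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

# Example: feats = np.array([katz_fd(ch) for ch in eeg])  # eeg: (n_channels, n_samples)
```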

18 pages, 1850 KiB  
Article
Cross-Subject Motor Imagery Electroencephalogram Decoding with Domain Generalization
by Yanyan Zheng, Senxiang Wu, Jie Chen, Qiong Yao and Siyu Zheng
Bioengineering 2025, 12(5), 495; https://doi.org/10.3390/bioengineering12050495 - 7 May 2025
Viewed by 744
Abstract
Decoding motor imagery (MI) electroencephalogram (EEG) signals in the brain–computer interface (BCI) can assist patients in accelerating motor function recovery. To realize the implementation of plug-and-play functionality for MI-BCI applications, cross-subject models are employed to alleviate time-consuming calibration and avoid additional model training for target subjects by utilizing EEG data from source subjects. However, the diversity in data distribution among subjects limits the model’s robustness. In this study, we investigate a cross-subject MI-EEG decoding model with domain generalization based on a deep learning neural network that extracts domain-invariant features from source subjects. Firstly, a knowledge distillation framework is adopted to obtain the internally invariant representations based on spectral features fusion. Then, the correlation alignment approach aligns mutually invariant representations between each pair of sub-source domains. In addition, we use distance regularization on two kinds of invariant features to enhance generalizable information. To assess the effectiveness of our approach, experiments are conducted on the BCI Competition IV 2a and the Korean University dataset. The results demonstrate that the proposed model achieves 8.93% and 4.4% accuracy improvements on two datasets, respectively, compared with current state-of-the-art models, confirming that the proposed approach can effectively extract invariant features from source subjects and generalize to the unseen target distribution, hence paving the way for effective implementation of the plug-and-play functionality in MI-BCI applications. Full article
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)
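The correlation alignment step above is commonly implemented as the CORAL loss, the scaled Frobenius distance between the feature covariance matrices of two domains. The sketch below is that standard formulation in PyTorch, not the authors' exact training code.

```python
# CORAL loss between two feature batches: ||C_a - C_b||_F^2 / (4 d^2).
import torch

def coral_loss(feat_a, feat_b):
    """feat_a, feat_b: (n_samples, d) feature batches from two (sub-)source domains."""
    d = feat_a.size(1)

    def covariance(f):
        f = f - f.mean(dim=0, keepdim=True)
        return (f.t() @ f) / (f.size(0) - 1)

    diff = covariance(feat_a) - covariance(feat_b)
    return (diff * diff).sum() / (4.0 * d * d)
```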

24 pages, 3207 KiB  
Article
A Novel 3D Approach with a CNN and Swin Transformer for Decoding EEG-Based Motor Imagery Classification
by Xin Deng, Huaxiang Huo, Lijiao Ai, Daijiang Xu and Chenhui Li
Sensors 2025, 25(9), 2922; https://doi.org/10.3390/s25092922 - 5 May 2025
Viewed by 863
Abstract
Motor imagery (MI) is a crucial research field within the brain–computer interface (BCI) domain. It enables patients with muscle or neural damage to control external devices and achieve movement functions by simply imagining bodily motions. Despite the significant clinical and application value of MI-BCI technology, accurately decoding high-dimensional and low signal-to-noise ratio (SNR) electroencephalography (EEG) signals remains challenging. Moreover, traditional deep learning approaches exhibit limitations in processing EEG signals, particularly in capturing the intrinsic correlations between electrode channels and long-distance temporal dependencies. To address these challenges, this research introduces a novel end-to-end decoding network that integrates convolutional neural networks (CNNs) and a Swin Transformer, aiming at enhancing the classification accuracy of the MI paradigm in EEG signals. This approach transforms EEG signals into a three-dimensional data structure, utilizing one-dimensional convolutions along the temporal dimension and two-dimensional convolutions across the EEG electrode distribution for initial spatio-temporal feature extraction, followed by deep feature exploration using a 3D Swin Transformer module. Experimental results show that on the BCI Competition IV-2a dataset, the proposed method achieves 83.99% classification accuracy, which is significantly better than the existing deep learning methods. This finding underscores the efficacy of combining a CNN and Swin Transformer in a 3D data space for processing high-dimensional, low-SNR EEG signals, offering a new perspective for the future development of MI-BCI. Future research could further explore the applicability of this method across various BCI tasks and its potential clinical implementations. Full article
(This article belongs to the Section Intelligent Sensors)

21 pages, 7640 KiB  
Article
MCL-SWT: Mirror Contrastive Learning with Sliding Window Transformer for Subject-Independent EEG Recognition
by Qi Mao, Hongke Zhu, Wenyao Yan, Yu Zhao, Xinhong Hei and Jing Luo
Brain Sci. 2025, 15(5), 460; https://doi.org/10.3390/brainsci15050460 - 27 Apr 2025
Viewed by 714
Abstract
Background: In brain–computer interfaces (BCIs), transformer-based models have found extensive application in motor imagery (MI)-based EEG signal recognition. However, for subject-independent EEG recognition, these models face challenges: low sensitivity to spatial dynamics of neural activity and difficulty balancing high temporal resolution features with manageable computational complexity. The overarching objective is to address these critical issues. Methods: We introduce Mirror Contrastive Learning with Sliding Window Transformer (MCL-SWT). Inspired by left/right hand motor imagery inducing event-related desynchronization (ERD) in the contralateral sensorimotor cortex, we develop a mirror contrastive loss function. It segregates feature spaces of EEG signals from contralateral ERD locations while curtailing variability in signals sharing similar ERD locations. The Sliding Window Transformer computes self-attention scores over high temporal resolution features, enabling efficient capture of global temporal dependencies. Results: Evaluated on benchmark datasets for subject-independent MI EEG recognition, MCL-SWT achieves classification accuracies of 66.48% and 75.62%, outperforming State-of-the-Art models by 2.82% and 2.17%, respectively. Ablation studies validate the efficacy of both the mirror contrastive loss and sliding window mechanism. Conclusions: These findings underscore MCL-SWT’s potential as a robust, interpretable framework for subject-independent EEG recognition. By addressing existing challenges, MCL-SWT could significantly advance BCI technology development. Full article
(This article belongs to the Special Issue The Application of EEG in Neurorehabilitation)
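The mirror idea above rests on the fact that swapping symmetric left/right electrodes turns a left-hand MI trial into a right-hand-like one, since ERD is contralateral. The sketch below illustrates that swap together with a simple margin-based contrastive term; the electrode pairs, margin, and loss form are illustrative assumptions, not the paper's exact loss.

```python
# Mirror augmentation (swap symmetric electrode pairs) and a margin-based
# contrastive term that pushes a trial's embedding away from its mirrored copy.
import torch
import torch.nn.functional as F

MIRROR_PAIRS = [(0, 2), (3, 5), (6, 8)]   # hypothetical (left, right) channel indices

def mirror_eeg(x):
    """x: (batch, channels, samples) -> copy with left/right channels swapped."""
    x_m = x.clone()
    for left, right in MIRROR_PAIRS:
        x_m[:, [left, right]] = x[:, [right, left]]
    return x_m

def mirror_contrastive_loss(encoder, x, margin=1.0):
    z = F.normalize(encoder(x), dim=-1)                 # embeddings of original trials
    z_m = F.normalize(encoder(mirror_eeg(x)), dim=-1)   # embeddings of mirrored trials
    dist = (z - z_m).pow(2).sum(dim=-1).sqrt()
    return F.relu(margin - dist).mean()                 # push mirrored pairs apart
```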

13 pages, 2762 KiB  
Article
Research on Adaptive Discriminating Method of Brain–Computer Interface for Motor Imagination
by Jifeng Gong, Huitong Liu, Fang Duan, Yan Che and Zheng Yan
Brain Sci. 2025, 15(4), 412; https://doi.org/10.3390/brainsci15040412 - 18 Apr 2025
Viewed by 666
Abstract
(1) Background: Brain–computer interface (BCI) technology represents a cutting-edge field that integrates brain intelligence with machine intelligence. Unlike BCIs that rely on external stimuli, motor imagery-based BCIs (MI-BCIs) generate usable brain signals based on an individual’s imagination of specific motor actions. Due to the highly individualized nature of these signals, identifying individuals who are better suited for MI-BCI applications and improving its efficiency is critical. (2) Methods: This study collected four motor imagery tasks (left hand, right hand, foot, and tongue) from 50 healthy subjects and evaluated MI-BCI adaptability through classification accuracy. Functional networks were constructed using the weighted phase lag index (WPLI), and relevant graph theory parameters were calculated to explore the relationship between motor imagery adaptability and functional networks. (3) Results: Research has demonstrated a strong correlation between the network characteristics of tongue imagination and MI-BCI adaptability. Specifically, the nodal degree and characteristic path length in the right hemisphere were found to be significantly correlated with classification accuracy (p < 0.05). (4) Conclusions: The findings of this study offer new insights into the functional network mechanisms of motor imagery, suggesting that tongue imagination holds potential as a predictor of MI-BCI adaptability. Full article
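The functional networks above are built from the weighted phase lag index (WPLI) and summarized with graph measures such as nodal degree and characteristic path length. Below is a rough NumPy/NetworkX sketch of that construction under the assumption of band-filtered, epoched EEG; the threshold and epoch handling are not taken from the paper.

```python
# WPLI connectivity matrix from epoched, band-filtered EEG, followed by
# nodal degree and characteristic path length on the thresholded graph.
import numpy as np
import networkx as nx
from scipy.signal import hilbert

def wpli_matrix(epochs):
    """epochs: (n_epochs, n_channels, n_samples) -> (n_channels, n_channels) WPLI."""
    analytic = hilbert(epochs, axis=-1)
    n_ch = epochs.shape[1]
    wpli = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            im = np.imag(analytic[:, i] * np.conj(analytic[:, j]))  # imaginary cross-spectrum
            wpli[i, j] = wpli[j, i] = np.abs(im.mean()) / (np.abs(im).mean() + 1e-12)
    return wpli

def graph_measures(wpli, threshold=0.2):
    adj = np.where(wpli >= threshold, wpli, 0.0)
    g = nx.from_numpy_array(adj)
    degree = dict(g.degree())                        # nodal degree per channel
    path_len = nx.average_shortest_path_length(g)    # assumes the graph is connected
    return degree, path_len
```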

10 pages, 944 KiB  
Article
Motor Imagery Training Improves Interoception and Satisfaction with Performance
by Chiara Di Tella and Enrica L. Santarcangelo
Medicina 2025, 61(4), 734; https://doi.org/10.3390/medicina61040734 - 16 Apr 2025
Cited by 1 | Viewed by 829
Abstract
Background and Objectives: Sport practice, performance satisfaction, and interoception influence physical and mental health. Motor imagery (MI) training improves sensorimotor and cognitive–emotional functions. This study aimed to (a) compare sedentary and artistic gymnastics-practicing young females and (b) evaluate the changes in interoception and performance satisfaction occurring in gymnastics-practicing participants after one month of motor imagery training. Materials and Methods: The difference in interoceptive accuracy (IA) and sensibility (IS) between young sedentary females (Control group, C, n = 27) and age-matched females practicing artistic gymnastics (Experimental group, E, n = 27) were studied using the Interoceptive Accuracy Scale (IAS), the Multisensory Assessment of Interoceptive Awareness (MAIA), and Body Perception Questionnaire (BPQ). The capacity for focusing one’s attention on specific tasks (absorption) was assessed by the Tellegen Absorption Scale (TAS). Groups were compared at T0 (before motor imagery training). In group E, the same variables and satisfaction with performance were rated before and after 1 month of motor imagery training. The years of practice and absorption were used as covariates in analyses. Results: (a) Group E exhibited significantly higher scores in the MAIA dimensions than group C and similar BPQ and IAS scores; (b) group E’s satisfaction with performance, MAIA, IAS, and BPQ scores increased significantly from T0 to T1. The increase in performance satisfaction became non-significant when using years of practice as the control. The improvement in MAIA dimensions became non-significant when using TAS as the control. Conclusions: Despite the limitations as a result of the absence of an objective evaluation of the performance and physiological correlations of mental imagery and interoceptive accuracy, the baseline differences between the two groups confirm that practicing artistic gymnastics improves interoception. The experience undergone by group E of better performance after training is associated with further improvement in interoceptive intermingled pathways and shared relay stations of sensorimotor and interoceptive information. The results are relevant to the setting up of personalized mental training to improve physical and mental health. Full article
(This article belongs to the Section Sports Medicine and Sports Traumatology)

25 pages, 2026 KiB  
Article
EEG Signal Prediction for Motor Imagery Classification in Brain–Computer Interfaces
by Óscar Wladimir Gómez-Morales, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and Cesar German Castellanos-Dominguez
Sensors 2025, 25(7), 2259; https://doi.org/10.3390/s25072259 - 3 Apr 2025
Cited by 1 | Viewed by 2077
Abstract
Brain–computer interfaces (BCIs) based on motor imagery (MI) generally require EEG signals recorded from a large number of electrodes distributed across the cranial surface to achieve accurate MI classification. Not only does this entail long preparation times and high costs, but it also carries the risk of losing valuable information when an electrode is damaged, further limiting its practical applicability. In this study, a signal prediction-based method is proposed to achieve high accuracy in MI classification using EEG signals recorded from only a small number of electrodes. The signal prediction model was constructed using the elastic net regression technique, allowing for the estimation of EEG signals from 22 complete channels based on just 8 centrally located channels. The predicted EEG signals from the complete channels were used for feature extraction and MI classification. The results obtained indicate a notable efficacy of the proposed prediction method, showing an average performance of 78.16% in classification accuracy. The proposed method demonstrated superior performance compared to the traditional approach that used few-channel EEG and also achieved better results than the traditional method based on full-channel EEG. Although accuracy varies among subjects, from 62.30% to an impressive 95.24%, these data indicate the capability of the method to provide accurate estimates from a reduced set of electrodes. This performance highlights its potential to be implemented in practical MI-based BCI applications, thereby mitigating the time and cost constraints associated with systems that require a high density of electrodes. Full article
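The prediction step above is a linear mapping, learned with elastic net regression, from 8 recorded channels to the full 22-channel montage. The sketch below fits one scikit-learn regressor per target channel on calibration time points; the hyperparameters and data layout are assumptions, not values from the paper.

```python
# Elastic net channel prediction: learn 22 linear maps from 8 recorded channels,
# one per target channel, then reconstruct the full montage.
import numpy as np
from sklearn.linear_model import ElasticNet

def fit_channel_predictors(x_few, x_full, alpha=0.1, l1_ratio=0.5):
    """x_few: (n_times, 8) reduced-montage samples; x_full: (n_times, 22) targets."""
    models = []
    for ch in range(x_full.shape[1]):
        m = ElasticNet(alpha=alpha, l1_ratio=l1_ratio)
        m.fit(x_few, x_full[:, ch])                  # linear map: 8 channels -> 1 channel
        models.append(m)
    return models

def predict_full_montage(models, x_few):
    return np.column_stack([m.predict(x_few) for m in models])   # (n_times, 22)
```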
