Search Results (60)

Search Parameters:
Keywords = BCI competition IV

24 pages, 890 KiB  
Article
MCTGNet: A Multi-Scale Convolution and Hybrid Attention Network for Robust Motor Imagery EEG Decoding
by Huangtao Zhan, Xinhui Li, Xun Song, Zhao Lv and Ping Li
Bioengineering 2025, 12(7), 775; https://doi.org/10.3390/bioengineering12070775 - 17 Jul 2025
Viewed by 363
Abstract
Motor imagery (MI) EEG decoding is a key application in brain–computer interface (BCI) research. In cross-session scenarios, the generalization and robustness of decoding models are particularly challenging due to the complex nonlinear dynamics of MI-EEG signals in both temporal and frequency domains, as well as distributional shifts across different recording sessions. While multi-scale feature extraction is a promising approach for generalized and robust MI decoding, conventional classifiers (e.g., multilayer perceptrons) struggle to perform accurate classification when confronted with high-order, nonstationary feature distributions, which have become a major bottleneck for improving decoding performance. To address this issue, we propose an end-to-end decoding framework, MCTGNet, whose core idea is to formulate the classification process as a high-order function approximation task that jointly models both task labels and feature structures. By introducing a group rational Kolmogorov–Arnold Network (GR-KAN), the system enhances generalization and robustness under cross-session conditions. Experiments on the BCI Competition IV 2a and 2b datasets demonstrate that MCTGNet achieves average classification accuracies of 88.93% and 91.42%, respectively, outperforming state-of-the-art methods by 3.32% and 1.83%. Full article
(This article belongs to the Special Issue Brain Computer Interfaces for Motor Control and Motor Learning)
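The abstract does not spell out the GR-KAN layer, but the core ingredient of rational Kolmogorov–Arnold networks is a learnable rational (Padé-style) activation shared across groups of channels. A minimal NumPy sketch under that assumption — the pole-free denominator form and all coefficients below are illustrative, not taken from the paper:

```python
import numpy as np

def rational_activation(x, p, q):
    """Safe Pade-style rational activation R(x) = P(x) / (1 + |Q(x)|).

    p: numerator coefficients [p0, p1, ..., pm] (lowest order first)
    q: denominator coefficients [q1, ..., qn] (no constant term)
    The absolute value in the denominator keeps R free of poles.
    """
    num = np.polynomial.polynomial.polyval(x, p)
    den = 1.0 + np.abs(np.polynomial.polynomial.polyval(x, np.concatenate(([0.0], q))))
    return num / den

def group_rational(x, groups, ps, qs):
    """Apply one shared rational activation per channel group.

    x: (channels, time) feature map; groups: list of channel-index arrays.
    """
    out = np.empty_like(x, dtype=float)
    for g, p, q in zip(groups, ps, qs):
        out[g] = rational_activation(x[g], p, q)
    return out
```

With numerator coefficients `[0, 1]` and a zero denominator polynomial this reduces to the identity, which makes a convenient sanity check.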

22 pages, 4882 KiB  
Article
Dual-Branch Spatio-Temporal-Frequency Fusion Convolutional Network with Transformer for EEG-Based Motor Imagery Classification
by Hao Hu, Zhiyong Zhou, Zihan Zhang and Wenyu Yuan
Electronics 2025, 14(14), 2853; https://doi.org/10.3390/electronics14142853 - 17 Jul 2025
Viewed by 266
Abstract
The decoding of motor imagery (MI) electroencephalogram (EEG) signals is crucial for motor control and rehabilitation. However, as feature extraction is the core component of the decoding process, traditional methods, often limited to single-feature domains or shallow time-frequency fusion, struggle to comprehensively capture the spatio-temporal-frequency characteristics of the signals, thereby limiting decoding accuracy. To address these limitations, this paper proposes a dual-branch neural network architecture with multi-domain feature fusion, the dual-branch spatio-temporal-frequency fusion convolutional network with Transformer (DB-STFFCNet). The DB-STFFCNet model consists of three modules: the spatiotemporal feature extraction module (STFE), the frequency feature extraction module (FFE), and the feature fusion and classification module. The STFE module employs a lightweight multi-dimensional attention network combined with a temporal Transformer encoder, capable of simultaneously modeling local fine-grained features and global spatiotemporal dependencies, effectively integrating spatiotemporal information and enhancing feature representation. The FFE module constructs a hierarchical feature refinement structure by leveraging the fast Fourier transform (FFT) and multi-scale frequency convolutions, while a frequency-domain Transformer encoder captures the global dependencies among frequency domain features, thus improving the model’s ability to represent key frequency information. Finally, the fusion module effectively consolidates the spatiotemporal and frequency features to achieve accurate classification. To evaluate the feasibility of the proposed method, experiments were conducted on the BCI Competition IV-2a and IV-2b public datasets, achieving accuracies of 83.13% and 89.54%, respectively, outperforming existing methods. This study provides a novel solution for joint time-frequency representation learning in EEG analysis. Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Biomedical Data Processing)
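The FFE module's first step, FFT-based frequency features, can be illustrated with a band-power sketch. The mu (8–13 Hz) and beta (13–30 Hz) bands below are conventional motor-imagery choices, not necessarily the bands DB-STFFCNet uses:

```python
import numpy as np

def band_power_features(eeg, fs, bands=((8, 13), (13, 30))):
    """Mean spectral power per (channel, band) via the FFT.

    eeg: (channels, samples) array; fs: sampling rate in Hz.
    Returns an array of shape (channels, len(bands)).
    """
    n = eeg.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2 / n          # one-sided power spectrum
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in bands]
    return np.stack(feats, axis=-1)
```

A 10 Hz sinusoid should dominate the mu-band feature and a 20 Hz sinusoid the beta-band feature, which is an easy way to check the band bookkeeping.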

14 pages, 1563 KiB  
Article
High-Resolution Time-Frequency Feature Selection and EEG Augmented Deep Learning for Motor Imagery Recognition
by Mouna Bouchane, Wei Guo and Shuojin Yang
Electronics 2025, 14(14), 2827; https://doi.org/10.3390/electronics14142827 - 14 Jul 2025
Viewed by 300
Abstract
Motor Imagery (MI) based Brain Computer Interfaces (BCIs) have promising applications in neurorehabilitation for individuals who have lost mobility and control over parts of their body due to brain injuries, such as stroke patients. Accurately classifying MI tasks is essential for effective BCI performance, but this task remains challenging due to the complex and non-stationary nature of EEG signals. This study aims to improve the classification of left- and right-hand MI tasks by utilizing high-resolution time-frequency features extracted from EEG signals, enhanced with deep learning-based data augmentation techniques. We propose a novel deep learning framework named the Generalized Wavelet Transform-based Deep Convolutional Network (GDC-Net), which integrates multiple components. First, EEG signals recorded from the C3, C4, and Cz channels are transformed into detailed time-frequency representations using the Generalized Morse Wavelet Transform (GMWT). The selected features are then expanded using a Deep Convolutional Generative Adversarial Network (DCGAN) to generate additional synthetic data and address data scarcity. Finally, the augmented feature maps are fed into a hybrid CNN-LSTM architecture, enabling both spatial and temporal feature learning for improved classification. The proposed approach is evaluated on the BCI Competition IV dataset 2b. Experimental results showed that the mean classification accuracy and Kappa value are 89.24% and 0.784, respectively, the highest among the compared state-of-the-art algorithms. The integration of GMWT and DCGAN significantly enhances feature quality and model generalization, thereby improving classification performance. These findings demonstrate that GDC-Net delivers superior MI classification performance by effectively capturing high-resolution time-frequency dynamics and enhancing data diversity. 
This approach holds strong potential for advancing MI-based BCI applications, especially in assistive and rehabilitation technologies. Full article
(This article belongs to the Section Computer Science & Engineering)
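The abstract does not detail the GMWT itself, but generalized Morse wavelets have a standard frequency-domain form, ψ(ω) ∝ ω^β e^(−ω^γ), which peaks at ω = (β/γ)^(1/γ). A sketch that builds such a filter on an rFFT frequency grid; β = γ = 3 are illustrative defaults, not the paper's parameters:

```python
import numpy as np

def morse_filter(freqs_hz, peak_hz, beta=3.0, gamma=3.0):
    """Generalized Morse wavelet filter, normalized to peak value 2 (analytic form).

    Unnormalized psi(w) = w**beta * exp(-w**gamma) peaks at wp = (beta/gamma)**(1/gamma);
    the frequency axis is rescaled so that the peak lands at `peak_hz`.
    """
    wp = (beta / gamma) ** (1.0 / gamma)          # dimensionless peak frequency
    w = freqs_hz * (wp / peak_hz)                 # map peak_hz onto wp
    psi = np.zeros_like(w, dtype=float)
    pos = w > 0
    psi[pos] = 2.0 * (w[pos] / wp) ** beta * np.exp(-(w[pos] ** gamma) + wp ** gamma)
    return psi
```

Multiplying a signal's rFFT by this filter and inverting gives one scale of a wavelet transform; sweeping `peak_hz` over a grid of center frequencies yields a full time-frequency image.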

15 pages, 13180 KiB  
Article
Channel-Dependent Multilayer EEG Time-Frequency Representations Combined with Transfer Learning-Based Deep CNN Framework for Few-Channel MI EEG Classification
by Ziang Liu, Kang Fan, Qin Gu and Yaduan Ruan
Bioengineering 2025, 12(6), 645; https://doi.org/10.3390/bioengineering12060645 - 12 Jun 2025
Viewed by 494
Abstract
The study of electroencephalogram (EEG) signals is crucial for understanding brain function and has extensive applications in clinical diagnosis, neuroscience, and brain–computer interface technology. This paper addresses the challenge of recognizing motor imagery EEG signals with few channels, which is essential for portable and real-time applications. A novel framework is proposed that applies a continuous wavelet transform to convert time-domain EEG signals into two-dimensional time-frequency representations. These images are then concatenated into channel-dependent multilayer EEG time-frequency representations (CDML-EEG-TFR), incorporating multidimensional information of time, frequency, and channels, allowing for a more comprehensive and enriched brain representation under the constraint of few channels. By adopting a deep convolutional neural network with EfficientNet as the backbone and utilizing pre-trained weights from natural image datasets for transfer learning, the framework can simultaneously learn temporal, spatial, and channel features embedded in the CDML-EEG-TFR. Moreover, the transfer learning strategy effectively addresses the issue of data sparsity in the context of a few channels. Our approach enhances the classification accuracy of motor imagery EEG signals in few-channel scenarios. Experimental results on the BCI Competition IV 2b dataset show a significant improvement in classification accuracy, reaching 80.21%. This study highlights the potential of CDML-EEG-TFR and the EfficientNet-based transfer learning strategy in few-channel EEG signal classification, laying a foundation for practical applications and further research in medical and sports fields. Full article
(This article belongs to the Special Issue Artificial Intelligence for Biomedical Signal Processing, 2nd Edition)
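The stacking step behind CDML-EEG-TFR — one time-frequency image per channel, concatenated along a depth axis — can be sketched with a Hann-windowed short-time FFT standing in for the paper's continuous wavelet transform:

```python
import numpy as np

def stft_mag(x, win=64, hop=32):
    """Magnitude spectrogram of a 1-D signal via a Hann-windowed short-time FFT."""
    window = np.hanning(win)
    starts = range(0, len(x) - win + 1, hop)
    frames = np.stack([x[s:s + win] * window for s in starts])
    return np.abs(np.fft.rfft(frames, axis=-1)).T     # (freq_bins, time_frames)

def multilayer_tfr(eeg, win=64, hop=32):
    """Stack per-channel time-frequency images into a (channels, freq, time) tensor,
    the layer-per-channel layout a 2-D CNN backbone can consume."""
    return np.stack([stft_mag(ch, win, hop) for ch in eeg])
```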

21 pages, 1134 KiB  
Article
Dynamic Ensemble Selection for EEG Signal Classification in Distributed Data Environments
by Małgorzata Przybyła-Kasperek and Jakub Sacewicz
Appl. Sci. 2025, 15(11), 6043; https://doi.org/10.3390/app15116043 - 27 May 2025
Viewed by 476
Abstract
This study presents a novel approach to EEG signal classification in distributed environments using dynamic ensemble selection. In scenarios where data dispersion arises due to privacy constraints or decentralized data collection, traditional global modelling is impractical. We propose a framework where classifiers are trained locally on independent subsets of EEG data without requiring centralized access. A dynamic coalition-based ensemble strategy is employed to integrate the outputs of these local models, enabling adaptive and instance-specific decision-making. Coalitions are formed based on conflict analysis between model predictions, allowing either consensus (unified) or diversity (diverse) to guide the ensemble structure. Experiments were conducted on two benchmark datasets: an epilepsy EEG dataset comprising 150 segmented EEG time series from ten patients, and the BCI Competition IV Dataset 1, with continuous recordings from seven subjects performing motor imagery tasks, for which a total of 1400 segments were extracted. In the study, we also evaluated the non-distributed (centralized) approach to provide a comprehensive performance baseline. Additionally, we tested a convolutional neural network specifically designed for EEG data, ensuring our results are compared against advanced deep learning methods. Gradient Boosting combined with measurement-level fusion and unified coalitions consistently achieved the highest performance, with an F1-score, accuracy, and balanced accuracy of 0.987 (for nine local tables). The results demonstrate the effectiveness and scalability of dynamic coalition-based ensembles for EEG diagnosis in distributed settings, highlighting their potential in privacy-sensitive clinical and telemedicine applications. Full article
(This article belongs to the Special Issue EEG Signal Processing in Medical Diagnosis Applications)
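One way to read the unified-coalition, measurement-level fusion described above: local models whose top predictions agree form a coalition, and the largest coalition's probability vectors are averaged. A toy sketch of that reading — the paper's conflict analysis is richer than this simple argmax vote:

```python
import numpy as np

def unified_coalition_fusion(probs):
    """Measurement-level fusion over the largest agreeing coalition.

    probs: (n_models, n_classes) class probabilities from the local models
    for one instance. Models sharing the same top prediction form a
    coalition; the largest coalition's probabilities are averaged.
    """
    votes = probs.argmax(axis=1)
    classes, counts = np.unique(votes, return_counts=True)
    top = classes[counts.argmax()]          # majority top choice
    coalition = probs[votes == top]         # the agreeing local models
    return coalition.mean(axis=0)
```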

27 pages, 1883 KiB  
Article
Advancing Fractal Dimension Techniques to Enhance Motor Imagery Tasks Using EEG for Brain–Computer Interface Applications
by Amr F. Mohamed and Vacius Jusas
Appl. Sci. 2025, 15(11), 6021; https://doi.org/10.3390/app15116021 - 27 May 2025
Viewed by 534
Abstract
The ongoing exploration of brain–computer interfaces (BCIs) provides deeper insights into the workings of the human brain. Motor imagery (MI) tasks, such as imagining movements of the tongue, left and right hands, or feet, can be identified through the analysis of electroencephalography (EEG) signals. The development of BCI systems opens up opportunities for their application in assistive devices, neurorehabilitation, and brain stimulation and brain feedback technologies, potentially helping patients to regain the ability to eat and drink without external help, move, or even speak. In this context, the accurate recognition and deciphering of a patient’s imagined intentions is critical for the development of effective BCI systems. Therefore, to distinguish motor tasks in a manner differing from the commonly used methods in this context, we propose a fractal dimension (FD)-based approach, which effectively captures the self-similarity and complexity of EEG signals. For this purpose, all four classes provided in the BCI Competition IV 2a dataset are utilized with nine different combinations of seven FD methods: Katz, Petrosian, Higuchi, box-counting, MFDFA, DFA, and correlation dimension. The resulting features are then used to train five machine learning models: linear, Gaussian, polynomial support vector machine, regression tree, and stochastic gradient descent. As a result, the proposed method obtained top-tier results, achieving 79.2% accuracy when using the Katz vs. box-counting vs. correlation dimension FD combination (KFD vs. BCFD vs. CDFD) classified by LinearSVM, thus outperforming the state-of-the-art TWSB method (achieving 79.1% accuracy). These results demonstrate that fractal dimension features can be applied to achieve higher classification accuracy for online/offline MI-BCIs, when compared to traditional methods. 
The application of these findings is expected to facilitate the enhancement of motor imagery brain–computer interface systems, which is a key issue faced by neuroscientists. Full article
(This article belongs to the Section Applied Neuroscience and Neural Engineering)
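Of the seven FD methods listed, Katz's is the most compact to state: with L the summed absolute successive differences, d the maximum deviation from the first sample, and n the number of steps, KFD = log10(n) / (log10(n) + log10(d/L)). A direct NumPy implementation:

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D signal.

    L: total 'path length' (sum of absolute successive differences),
    d: maximum distance from the first sample, n: number of steps.
    """
    x = np.asarray(x, dtype=float)
    steps = np.abs(np.diff(x))
    L = steps.sum()
    d = np.abs(x - x[0]).max()
    n = len(steps)
    return np.log10(n) / (np.log10(n) + np.log10(d / L))
```

A straight ramp gives exactly 1 (d equals L), while irregular signals, whose path length grows faster than their extent, score higher.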

18 pages, 1850 KiB  
Article
Cross-Subject Motor Imagery Electroencephalogram Decoding with Domain Generalization
by Yanyan Zheng, Senxiang Wu, Jie Chen, Qiong Yao and Siyu Zheng
Bioengineering 2025, 12(5), 495; https://doi.org/10.3390/bioengineering12050495 - 7 May 2025
Viewed by 744
Abstract
Decoding motor imagery (MI) electroencephalogram (EEG) signals in the brain–computer interface (BCI) can assist patients in accelerating motor function recovery. To realize the implementation of plug-and-play functionality for MI-BCI applications, cross-subject models are employed to alleviate time-consuming calibration and avoid additional model training for target subjects by utilizing EEG data from source subjects. However, the diversity in data distribution among subjects limits the model’s robustness. In this study, we investigate a cross-subject MI-EEG decoding model with domain generalization based on a deep learning neural network that extracts domain-invariant features from source subjects. Firstly, a knowledge distillation framework is adopted to obtain the internally invariant representations based on spectral features fusion. Then, the correlation alignment approach aligns mutually invariant representations between each pair of sub-source domains. In addition, we use distance regularization on two kinds of invariant features to enhance generalizable information. To assess the effectiveness of our approach, experiments are conducted on the BCI Competition IV 2a and the Korean University dataset. The results demonstrate that the proposed model achieves 8.93% and 4.4% accuracy improvements on two datasets, respectively, compared with current state-of-the-art models, confirming that the proposed approach can effectively extract invariant features from source subjects and generalize to the unseen target distribution, hence paving the way for effective implementation of the plug-and-play functionality in MI-BCI applications. Full article
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)
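The correlation alignment step has a standard closed form: whiten the source features with Cs^(−1/2), then re-color with Ct^(1/2), so the source covariance matches the target's. A sketch of that classic recipe — the paper applies it between pairs of sub-source domains, while plain source/target matrices are used here:

```python
import numpy as np

def coral_align(source, target, eps=1e-8):
    """Correlation alignment: whiten source features, re-color with target covariance.

    source, target: (n_samples, n_features). Returns source features whose
    covariance (and mean) match the target's.
    """
    def sqrt_and_inv_sqrt(X):
        c = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])   # regularized covariance
        vals, vecs = np.linalg.eigh(c)
        return (vecs * np.sqrt(vals)) @ vecs.T, (vecs / np.sqrt(vals)) @ vecs.T

    _, cs_inv_sqrt = sqrt_and_inv_sqrt(source)
    ct_sqrt, _ = sqrt_and_inv_sqrt(target)
    centered = source - source.mean(axis=0)
    return centered @ cs_inv_sqrt @ ct_sqrt + target.mean(axis=0)
```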

24 pages, 3207 KiB  
Article
A Novel 3D Approach with a CNN and Swin Transformer for Decoding EEG-Based Motor Imagery Classification
by Xin Deng, Huaxiang Huo, Lijiao Ai, Daijiang Xu and Chenhui Li
Sensors 2025, 25(9), 2922; https://doi.org/10.3390/s25092922 - 5 May 2025
Viewed by 863
Abstract
Motor imagery (MI) is a crucial research field within the brain–computer interface (BCI) domain. It enables patients with muscle or neural damage to control external devices and achieve movement functions by simply imagining bodily motions. Despite the significant clinical and application value of MI-BCI technology, accurately decoding high-dimensional and low signal-to-noise ratio (SNR) electroencephalography (EEG) signals remains challenging. Moreover, traditional deep learning approaches exhibit limitations in processing EEG signals, particularly in capturing the intrinsic correlations between electrode channels and long-distance temporal dependencies. To address these challenges, this research introduces a novel end-to-end decoding network that integrates convolutional neural networks (CNNs) and a Swin Transformer, aiming at enhancing the classification accuracy of the MI paradigm in EEG signals. This approach transforms EEG signals into a three-dimensional data structure, utilizing one-dimensional convolutions along the temporal dimension and two-dimensional convolutions across the EEG electrode distribution for initial spatio-temporal feature extraction, followed by deep feature exploration using a 3D Swin Transformer module. Experimental results show that on the BCI Competition IV-2a dataset, the proposed method achieves 83.99% classification accuracy, which is significantly better than the existing deep learning methods. This finding underscores the efficacy of combining a CNN and Swin Transformer in a 3D data space for processing high-dimensional, low-SNR EEG signals, offering a new perspective for the future development of MI-BCI. Future research could further explore the applicability of this method across various BCI tasks and its potential clinical implementations. Full article
(This article belongs to the Section Intelligent Sensors)

22 pages, 2411 KiB  
Article
A Synergy of Convolutional Neural Networks for Sensor-Based EEG Brain–Computer Interfaces to Enhance Motor Imagery Classification
by Souheyl Mallat, Emna Hkiri, Abdullah M. Albarrak and Borhen Louhichi
Sensors 2025, 25(2), 443; https://doi.org/10.3390/s25020443 - 13 Jan 2025
Viewed by 1624
Abstract
Enhancing motor disability assessment and its imagery classification is a significant concern in contemporary medical practice, necessitating reliable solutions to improve patient outcomes. One promising avenue is the use of brain–computer interfaces (BCIs), which establish a direct communication pathway between users and machines. This technology holds the potential to revolutionize human–machine interaction, especially for individuals diagnosed with motor disabilities. Despite this promise, extracting reliable control signals from noisy brain data remains a critical challenge. In this paper, we introduce a novel approach leveraging the collaborative synergy of five convolutional neural network (CNN) models to improve the classification accuracy of motor imagery tasks, which are essential components of BCI systems. Our method demonstrates exceptional performance, achieving an accuracy of 79.44% on the BCI Competition IV 2a dataset, surpassing existing state-of-the-art techniques in using multiple CNN models. This advancement offers significant promise for enhancing the efficacy and versatility of BCIs in a wide range of real-world applications, from assistive technologies to neurorehabilitation, thereby providing robust solutions for individuals with motor disabilities. Full article

22 pages, 2977 KiB  
Article
Motor Imagery EEG Classification Based on Multi-Domain Feature Rotation and Stacking Ensemble
by Xianglong Zhu, Ming Meng, Zewen Yan and Zhizeng Luo
Brain Sci. 2025, 15(1), 50; https://doi.org/10.3390/brainsci15010050 - 7 Jan 2025
Cited by 3 | Viewed by 1416
Abstract
Background: Decoding motor intentions from electroencephalogram (EEG) signals is a critical component of motor imagery-based brain–computer interfaces (MI–BCIs). In traditional EEG signal classification, effectively utilizing the valuable information contained within the electroencephalogram is crucial. Objectives: To further optimize the use of information from various domains, we propose a novel framework based on multi-domain feature rotation transformation and stacking ensemble for classifying MI tasks. Methods: Initially, we extract features from the time, frequency, time-frequency, and spatial domains of the EEG signals, and perform feature selection for each domain to identify significant features that possess strong discriminative capacity. Subsequently, local rotation transformations are applied to the significant feature set to generate a rotated feature set, enhancing the representational capacity of the features. Next, the rotated features are fused with the original significant features from each domain to obtain composite features for each domain. Finally, we employ a stacking ensemble approach, where the prediction results of base classifiers corresponding to different domain features and the set of significant features undergo linear discriminant analysis for dimensionality reduction, yielding discriminative feature integration as input for the meta-classifier for classification. Results: The proposed method achieves average classification accuracies of 92.92%, 89.13%, and 86.26% on the BCI Competition III Dataset IVa, BCI Competition IV Dataset I, and BCI Competition IV Dataset 2a, respectively. Conclusions: Experimental results show that the method proposed in this paper outperforms several existing MI classification methods, such as the Common Time-Frequency-Spatial Patterns and the Selective Extract of the Multi-View Time-Frequency Decomposed Spatial, in terms of classification accuracy and robustness. Full article
(This article belongs to the Section Neurotechnology and Neuroimaging)
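The "local rotation transformation" is not specified in the abstract; a common reading (as in Rotation Forest) rotates disjoint feature subsets with random orthogonal matrices, which changes the representation while preserving per-sample norms. A sketch under that assumption:

```python
import numpy as np

def rotate_features(X, n_subsets=3, seed=0):
    """Local rotation transformation: split features into disjoint subsets and
    rotate each with a random orthogonal matrix (QR of a Gaussian draw).

    X: (n_samples, n_features). Returns a rotated feature set of the same
    shape; each sample's Euclidean norm is preserved.
    """
    rng = np.random.default_rng(seed)
    subsets = np.array_split(rng.permutation(X.shape[1]), n_subsets)
    out = np.empty_like(X, dtype=float)
    for sub in subsets:
        Q, _ = np.linalg.qr(rng.standard_normal((len(sub), len(sub))))
        out[:, sub] = X[:, sub] @ Q       # orthogonal Q: norm-preserving rotation
    return out
```

In the paper's pipeline these rotated features would then be concatenated with the original significant features before the stacking ensemble.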

24 pages, 6974 KiB  
Article
Performance Improvement with Reduced Number of Channels in Motor Imagery BCI System
by Ali Özkahraman, Tamer Ölmez and Zümray Dokur
Sensors 2025, 25(1), 120; https://doi.org/10.3390/s25010120 - 28 Dec 2024
Viewed by 1459
Abstract
Classifying Motor Imagery (MI) Electroencephalogram (EEG) signals is of vital importance for Brain–Computer Interface (BCI) systems, but challenges remain. A key challenge is to reduce the number of channels to improve flexibility, portability, and computational efficiency, especially in multi-class scenarios where more channels are needed for accurate classification. This study demonstrates that combining Electrooculogram (EOG) channels with a reduced set of EEG channels is more effective than relying on a large number of EEG channels alone. EOG channels provide useful information for MI signal classification, countering the notion that they only introduce eye-related noise. The study uses advanced deep learning techniques, including multiple 1D convolution blocks and depthwise-separable convolutions, to optimize classification accuracy. The findings in this study are tested on two datasets: dataset 1, the BCI Competition IV Dataset IIa (4-class MI), and dataset 2, the Weibo dataset (7-class MI). The performance for dataset 1, utilizing 3 EEG and 3 EOG channels (6 channels total), is 83% accuracy, while dataset 2, with 3 EEG and 2 EOG channels (5 channels total), achieves an accuracy of 61%, demonstrating the effectiveness of the proposed channel reduction method and deep learning model. Full article
(This article belongs to the Section Biomedical Sensors)
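The depthwise-separable convolutions mentioned above factor a full convolution into a per-channel temporal filter plus a 1×1 cross-channel mix, cutting parameters from out·in·k to in·k + out·in. A NumPy sketch with illustrative shapes:

```python
import numpy as np

def depthwise_separable_conv1d(x, depth_k, point_w):
    """Depthwise-separable 1-D convolution.

    x: (in_ch, time); depth_k: (in_ch, k), one temporal kernel per channel;
    point_w: (out_ch, in_ch), the 1x1 mixing weights.
    Parameter count: in_ch*k + out_ch*in_ch, versus out_ch*in_ch*k for a
    full convolution with the same receptive field.
    """
    depth = np.stack([np.convolve(x[c], depth_k[c], mode="valid")
                      for c in range(x.shape[0])])   # per-channel temporal filtering
    return point_w @ depth                           # 1x1 cross-channel mixing
```

For 6 input channels, a kernel of length 7, and 4 output channels this is 6·7 + 4·6 = 66 parameters instead of 4·6·7 = 168 for the full convolution, which is why the factorization suits few-channel, portable settings.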

27 pages, 2015 KiB  
Article
Developing Innovative Feature Extraction Techniques from the Emotion Recognition Field on Motor Imagery Using Brain–Computer Interface EEG Signals
by Amr F. Mohamed and Vacius Jusas
Appl. Sci. 2024, 14(23), 11323; https://doi.org/10.3390/app142311323 - 4 Dec 2024
Cited by 5 | Viewed by 1268
Abstract
Research on brain–computer interfaces (BCIs) advances the way scientists understand how the human brain functions. The BCI system, which is based on the use of electroencephalography (EEG) signals to detect motor imagery (MI) tasks, enables opportunities for various applications in stroke rehabilitation, neuroprosthetic devices, and communication tools. BCIs can also be used in emotion recognition (ER) research to depict the sophistication of human emotions by improving mental health monitoring, human–computer interactions, and neuromarketing. To address the low accuracy of MI-BCI, which is a key issue faced by researchers, this study employs a new approach that has been proven to have the potential to enhance motor imagery classification accuracy. The basic idea behind the approach is to apply feature extraction methods from the field of emotion recognition to the field of motor imagery. Six feature sets and four classifiers were explored using four MI classes (left and right hands, both feet, and tongue) from the BCI Competition IV 2a dataset. Statistical, wavelet analysis, Hjorth parameters, higher-order spectra, fractal dimensions (Katz, Higuchi, and Petrosian), and a five-dimensional combination of all five feature sets were implemented. GSVM, CART, LinearSVM, and SVM with polynomial kernel classifiers were considered. Our findings show that 3D fractal dimensions predominantly outperform all other feature sets, specifically during LinearSVM classification, accomplishing nearly 79.1% mean accuracy, superior to the state-of-the-art results obtained from the referenced MI paper, where CSP reached 73.7% and Riemannian methods reached 75.5%. It even performs as well as the latest TWSB method, which also reached approximately 79.1%. These outcomes emphasize that the new hybrid approach in the motor imagery/emotion recognition field improves classification accuracy when applied to motor imagery EEG signals, thus enhancing MI-BCI performance. Full article
(This article belongs to the Section Applied Neuroscience and Neural Engineering)

26 pages, 7119 KiB  
Article
MACNet: A Multidimensional Attention-Based Convolutional Neural Network for Lower-Limb Motor Imagery Classification
by Ling-Long Li, Guang-Zhong Cao, Yue-Peng Zhang, Wan-Chen Li and Fang Cui
Sensors 2024, 24(23), 7611; https://doi.org/10.3390/s24237611 - 28 Nov 2024
Viewed by 1268
Abstract
Decoding lower-limb motor imagery (MI) is highly important in brain–computer interfaces (BCIs) and rehabilitation engineering. However, it is challenging to classify lower-limb MI from electroencephalogram (EEG) signals, because lower-limb motions (LLMs) including MI are excessively close to physiological representations in the human brain and generate low-quality EEG signals. To address this challenge, this paper proposes a multidimensional attention-based convolutional neural network (CNN), termed MACNet, which is specifically designed for lower-limb MI classification. MACNet integrates a temporal refining module and an attention-enhanced convolutional module by leveraging the local and global feature representation abilities of CNNs and attention mechanisms. The temporal refining module adaptively investigates critical information from each electrode channel to refine EEG signals along the temporal dimension. The attention-enhanced convolutional module extracts temporal and spatial features while refining the feature maps across the channel and spatial dimensions. Owing to the scarcity of public datasets available for lower-limb MI, a specified lower-limb MI dataset involving four routine LLMs is built, consisting of 10 subjects over 20 sessions. Comparison experiments and ablation studies are conducted on this dataset and a public BCI Competition IV 2a EEG dataset. The experimental results show that MACNet achieves state-of-the-art performance and outperforms alternative models for the subject-specific mode. Visualization analysis reveals the excellent feature learning capabilities of MACNet and the potential relationship between lower-limb MI and brain activity. The effectiveness and generalizability of MACNet are verified. Full article

32 pages, 1980 KiB  
Article
Transforming Motor Imagery Analysis: A Novel EEG Classification Framework Using AtSiftNet Method
by Haiqin Xu, Waseem Haider, Muhammad Zulkifal Aziz, Youchao Sun and Xiaojun Yu
Sensors 2024, 24(19), 6466; https://doi.org/10.3390/s24196466 - 7 Oct 2024
Viewed by 2023
Abstract
This paper presents an innovative feature extraction approach using self-attention, combined with various feature selection techniques and known as the AtSiftNet method, to enhance the classification performance of motor imagery activities using electroencephalography (EEG) signals. Initially, the EEG signals were sorted and then denoised using multiscale principal component analysis to obtain clean EEG signals; a non-denoised variant of the experiment was also conducted for comparison. Subsequently, the clean EEG signals underwent self-attention feature extraction to compute the features of each trial (i.e., 350×18). The top 1 or 15 features were then selected through eight different feature selection techniques. Finally, five different machine learning and neural network classification models were employed to calculate the accuracy, sensitivity, and specificity of this approach. The BCI Competition III dataset IV-a, encompassing the data of five volunteers who participated in the competition, was used for all experiments. The experimental findings reveal that the average classification accuracy is highest for ReliefF (99.946%), Mutual Information (98.902%), Independent Component Analysis (99.62%), and Principal Component Analysis (98.884%), across both the 1 and 15 best-selected features from each trial. These accuracies were obtained for motor imagery using a support vector machine (SVM) as the classifier. In addition, five-fold cross-validation was performed to assess fair performance estimation and the robustness of the model; the average accuracy obtained through five-fold validation is 99.89%. The findings indicate that the suggested framework provides a resilient biomarker with minimal computational complexity, making it a suitable choice for advancing motor imagery brain–computer interfaces (BCIs). Full article
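The self-attention feature extraction step described above can be sketched as scaled dot-product self-attention over a single EEG trial in NumPy. The identity query/key/value projections and the random 350×18 trial are assumptions for illustration; the actual AtSiftNet model uses learned projections.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over an EEG trial of shape
    (time_steps, channels). Identity projections stand in for the
    learned query/key/value weight matrices of a real model."""
    q, k, v = x, x, x                                # hypothetical: no learned projections
    d = x.shape[1]
    scores = q @ k.T / np.sqrt(d)                    # (time_steps, time_steps)
    scores -= scores.max(axis=1, keepdims=True)      # subtract row max for numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # row-wise softmax
    return attn @ v                                  # attended features, same shape as x

trial = np.random.default_rng(0).standard_normal((350, 18))  # matches the 350x18 trial size
features = self_attention(trial)
print(features.shape)  # (350, 18)
```

Each output row is a convex combination of all time steps, so the attended features emphasize globally consistent temporal patterns before the feature selection stage.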
(This article belongs to the Section Biomedical Sensors)

16 pages, 584 KiB  
Article
Enhancing Motor Imagery Classification in Brain–Computer Interfaces Using Deep Learning and Continuous Wavelet Transform
by Yu Xie and Stefan Oniga
Appl. Sci. 2024, 14(19), 8828; https://doi.org/10.3390/app14198828 - 1 Oct 2024
Cited by 1 | Viewed by 1545
Abstract
In brain–computer interface (BCI) systems, motor imagery (MI) electroencephalogram (EEG) is widely used to interpret the human brain. However, MI classification is challenging due to weak signals and a lack of high-quality data. While deep learning (DL) methods have shown significant success in pattern recognition, their application to MI-based BCI systems remains limited. To address these challenges, we propose a novel deep learning algorithm that leverages EEG signal features through a two-branch parallel convolutional neural network (CNN). Our approach incorporates different input signals, such as continuous wavelet transform, short-time Fourier transform, and common spatial patterns, and employs various classifiers, including support vector machines and decision trees, to enhance system performance. We evaluate our algorithm using the BCI Competition IV dataset 2B, comparing it with other state-of-the-art methods. Our results demonstrate that the proposed method excels in classification accuracy, offering improvements for MI-based BCI systems. Full article
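The continuous wavelet transform input used by the two-branch CNN above can be sketched as a Morlet scalogram computed in pure NumPy. The sampling rate, scale range, and synthetic 10 Hz signal are assumptions for illustration; a real pipeline would typically use `pywt.cwt` or a similar library routine.

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform with a Morlet wavelet, yielding the
    time-frequency scalogram that serves as a CNN input image.
    Pure-NumPy sketch for illustration only."""
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)                       # wavelet support
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)                                  # scale normalization
        coeffs = np.convolve(signal, np.conj(wavelet)[::-1], mode="same")
        out[i] = np.abs(coeffs)                                # magnitude row per scale
    return out

fs = 250                                     # assumed EEG sampling rate
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t)             # synthetic 10 Hz mu-band oscillation
scalogram = morlet_cwt(eeg, scales=np.arange(2, 32))
print(scalogram.shape)  # (30, 250)
```

The resulting (scales × time) magnitude map can be fed to a 2D convolutional branch like any image, which is what makes the CWT a convenient bridge between raw EEG and standard CNN architectures.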
