Search Results (190)

Search Parameters:
Keywords = motor imagery EEG signal

29 pages, 1440 KB  
Article
Efficient EEG-Based Person Identification: A Unified Framework from Automatic Electrode Selection to Intent Recognition
by Yu Pan, Jingjing Dong and Junpeng Zhang
Sensors 2026, 26(2), 687; https://doi.org/10.3390/s26020687 - 20 Jan 2026
Viewed by 162
Abstract
Electroencephalography (EEG) has attracted significant attention as an effective modality for interaction between the physical and virtual worlds, with EEG-based person identification serving as a key gateway to such applications. Despite substantial progress in EEG-based person identification, several challenges remain: (1) how to design an end-to-end EEG-based identification pipeline; (2) how to perform automatic electrode selection for each user to reduce redundancy and improve discriminative capacity; (3) how to enhance the backbone network’s feature extraction capability by suppressing irrelevant information and better leveraging informative patterns; and (4) how to leverage higher-level information in EEG signals to achieve intent recognition (i.e., EEG-based task/activity recognition under controlled paradigms) on top of person identification. To address these issues, this article proposes, for the first time, a unified deep learning framework that integrates automatic electrode selection, person identification, and intent recognition. We introduce a novel backbone network, AES-MBE, which integrates automatic electrode selection (AES) and intent recognition. The network combines a channel-attention mechanism with a multi-scale bidirectional encoder (MBE), enabling adaptive capture of fine-grained local features while modeling global temporal dependencies in both forward and backward directions. We validate our approach using the PhysioNet EEG Motor Movement/Imagery Dataset (EEGMMIDB), which contains EEG recordings from 109 subjects performing 4 tasks. Compared with state-of-the-art methods, our framework achieves superior performance. Specifically, our method attains a person identification accuracy of 98.82% using only 4 electrodes and an average intent recognition accuracy of 91.58%. In addition, our approach demonstrates strong stability and robustness as the number of users varies, offering insights for future research and practical applications.
(This article belongs to the Section Biomedical Sensors)
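
The abstract does not spell out how electrodes are scored; as a rough illustration of the automatic-electrode-selection idea, the sketch below applies squeeze-and-excitation-style channel attention over electrodes and keeps the top-scoring ones. The module name, layer sizes, and data shapes are assumptions, not the authors' implementation.

```python
# Hypothetical channel-attention sketch: score each electrode, re-weight the
# signal, and prune low-scoring channels (all names and sizes are invented).
import torch
import torch.nn as nn

class ElectrodeAttention(nn.Module):
    """Scores each electrode; low-scoring channels can be pruned."""
    def __init__(self, n_channels: int, reduction: int = 2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction),
            nn.ReLU(),
            nn.Linear(n_channels // reduction, n_channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, channels, time)
        weights = self.fc(x.mean(dim=-1))           # squeeze time -> (batch, channels)
        return x * weights.unsqueeze(-1), weights   # re-weight each electrode

x = torch.randn(8, 64, 640)                    # 8 trials, 64 electrodes, 4 s at 160 Hz
attended, w = ElectrodeAttention(64)(x)
top4 = torch.topk(w.mean(dim=0), k=4).indices  # e.g., keep the 4 strongest electrodes
```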

20 pages, 2616 KB  
Article
MS-TSEFNet: Multi-Scale Spatiotemporal Efficient Feature Fusion Network
by Weijie Wu, Lifei Liu, Weijie Chen, Yixin Chen, Xingyu Wang, Andrzej Cichocki, Yunhe Lu and Jing Jin
Sensors 2026, 26(2), 437; https://doi.org/10.3390/s26020437 - 9 Jan 2026
Viewed by 191
Abstract
Motor imagery signal decoding is an important research direction in the field of brain–computer interfaces, which aim to infer an individual's motor imagery state by analyzing electroencephalogram (EEG) signals. Deep learning, which can extract features automatically, has gradually been applied to EEG classification. However, when processing complex EEG signals, existing decoding models cannot effectively fuse features at different levels, resulting in limited classification performance. This study proposes a multi-scale spatiotemporal efficient feature fusion network (MS-TSEFNet), which learns the dynamic changes in EEG signals at different time scales through multi-scale convolution modules and combines a spatial attention mechanism to efficiently capture the spatial correlation between electrodes in EEG signals. In addition, the network adopts an efficient feature fusion strategy to deeply fuse features at different levels, thereby improving the model's representational capacity. In the task of motor imagery signal decoding, MS-TSEFNet shows higher accuracy and robustness. We use the public BCIC-IV2a, BCIC-IV2b and ECUST datasets for evaluation. The experimental results show that the average classification accuracy of MS-TSEFNet reaches 80.31%, 86.69% and 71.14%, respectively, which is better than current state-of-the-art algorithms. We conducted an ablation experiment to further verify the effectiveness of the model; the results showed that each module played an important role in the final performance. In particular, the combination of the multi-scale convolution module and the feature fusion module significantly improved the model's ability to extract the spatiotemporal features of EEG signals.
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—3rd Edition)
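
For readers unfamiliar with the pattern, here is a minimal PyTorch sketch of multi-scale temporal convolution as the abstract describes it: parallel temporal convolutions with different kernel lengths whose outputs are concatenated. Kernel sizes and channel counts are illustrative, not taken from MS-TSEFNet.

```python
# Minimal multi-scale temporal convolution block (layout is an assumption).
import torch
import torch.nn as nn

class MultiScaleTemporalConv(nn.Module):
    def __init__(self, out_per_branch: int = 8, kernel_sizes=(15, 31, 63)):
        super().__init__()
        # One temporal convolution per time scale, applied along the time axis.
        self.branches = nn.ModuleList(
            nn.Conv2d(1, out_per_branch, (1, k), padding=(0, k // 2))
            for k in kernel_sizes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, electrodes, time); concatenate scales along features.
        return torch.cat([b(x) for b in self.branches], dim=1)

x = torch.randn(4, 1, 22, 1000)            # BCIC-IV2a-like: 22 electrodes, 4 s at 250 Hz
print(MultiScaleTemporalConv()(x).shape)   # -> torch.Size([4, 24, 22, 1000])
```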

33 pages, 9268 KB  
Article
Gaussian Connectivity-Driven EEG Imaging for Deep Learning-Based Motor Imagery Classification
by Alejandra Gomez-Rivera, Diego Fabian Collazos-Huertas, David Cárdenas-Peña, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Sensors 2026, 26(1), 227; https://doi.org/10.3390/s26010227 - 29 Dec 2025
Viewed by 485
Abstract
Electroencephalography (EEG)-based motor imagery (MI) brain–computer interfaces (BCIs) hold considerable potential for applications in neuro-rehabilitation and assistive technologies. Yet, their development remains constrained by challenges such as low spatial resolution, vulnerability to noise and artifacts, and pronounced inter-subject variability. Conventional approaches, including common spatial patterns (CSP) and convolutional neural networks (CNNs), often exhibit limited robustness, weak generalization, and reduced interpretability. To overcome these limitations, we introduce EEG-GCIRNet, a Gaussian connectivity-driven EEG imaging representation network coupled with a regularized LeNet architecture for MI classification. Our method integrates raw EEG signals with topographic maps derived from functional connectivity into a unified variational autoencoder framework. The network is trained with a multi-objective loss that jointly optimizes reconstruction fidelity, classification accuracy, and latent space regularization. The model’s interpretability is enhanced through its variational autoencoder design, allowing for qualitative validation of its learned representations. Experimental evaluations demonstrate that EEG-GCIRNet outperforms state-of-the-art methods, achieving the highest average accuracy (81.82%) and lowest variability (±10.15) in binary classification. Most notably, it effectively mitigates BCI illiteracy by completely eliminating the “Bad” performance group (<60% accuracy), yielding substantial gains of ∼22% for these challenging users. Furthermore, the framework demonstrates good scalability in complex 5-class scenarios, achieving competitive classification accuracy (75.20% ± 4.63) with notable statistical superiority (p = 0.002) against advanced baselines. Extensive interpretability analyses, including analysis of the reconstructed connectivity maps, latent space visualizations, Grad-CAM++ and functional connectivity patterns, confirm that the model captures genuine neurophysiological mechanisms, correctly identifying integrated fronto-centro-parietal networks in high performers and compensatory midline circuits in mid-performers. These findings suggest that EEG-GCIRNet provides a robust and interpretable end-to-end framework for EEG-based BCIs, advancing the development of reliable neurotechnology for rehabilitation and assistive applications.
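
The abstract names the three terms of the multi-objective loss without giving its form; a generic composite of those stated terms (reconstruction fidelity, classification, latent-space regularization) would look like the following, with the weights λ as placeholders rather than the paper's values:

```latex
\mathcal{L} \;=\; \underbrace{\lVert \hat{x} - x \rVert_2^2}_{\text{reconstruction}}
\;+\; \lambda_{\mathrm{cls}}\,\mathcal{L}_{\mathrm{CE}}(\hat{y}, y)
\;+\; \lambda_{\mathrm{KL}}\, D_{\mathrm{KL}}\!\big(q(z \mid x)\,\Vert\,\mathcal{N}(0, I)\big)
```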

21 pages, 2686 KB  
Article
A Deep Learning Approach to Classifying User Performance in BCI Gaming
by Aimilia Ntetska, Anastasia Mimou, Katerina D. Tzimourta, Pantelis Angelidis and Markos G. Tsipouras
Electronics 2025, 14(24), 4974; https://doi.org/10.3390/electronics14244974 - 18 Dec 2025
Viewed by 431
Abstract
Brain–Computer Interface (BCI) systems are rapidly evolving and increasingly integrated into interactive environments such as gaming and Virtual/Augmented Reality. In such applications, user adaptability and engagement are critical. This study applies deep learning to predict user performance in a 3D BCI-controlled game using pre-game Motor Imagery (MI) electroencephalographic (EEG) recordings. A total of 72 EEG recordings were collected from 36 participants, 17 using the Muse 2 headset and 19 using the Emotiv Insight device, during left and right hand MI tasks. The signals were preprocessed and transformed into time–frequency spectrograms, which served as inputs to a custom convolutional neural network (CNN) designed to classify users into three performance levels: low, medium, and high. The model achieved classification accuracies of 83% and 95% on Muse 2 and Emotiv Insight data, respectively, at the epoch level, and 75% and 84% at the subject level, using leave-one-subject-out cross-validation (LOSO-CV). These findings demonstrate the feasibility of using deep learning on MI EEG data to forecast user performance in BCI gaming, enabling adaptive systems that enhance both usability and user experience.
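
The time-frequency inputs described above can be reproduced in outline with SciPy. The sampling rate below matches the Muse 2 (256 Hz); the window length, overlap, and frequency band are assumptions rather than the study's exact parameters.

```python
# Sketch: turn one MI-EEG epoch into a log-power spectrogram "image" for a CNN.
import numpy as np
from scipy.signal import spectrogram

fs = 256.0                                   # Muse 2 sampling rate
epoch = np.random.randn(int(4 * fs))         # placeholder 4 s single-channel epoch

f, t, Sxx = spectrogram(epoch, fs=fs, nperseg=128, noverlap=96)
band = (f >= 4) & (f <= 40)                  # keep an MI-relevant theta-to-gamma range
image = 10 * np.log10(Sxx[band] + 1e-12)     # log-power image fed to the CNN
print(image.shape)                           # (freq_bins, time_bins)
```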

19 pages, 6764 KB  
Article
A Dual-Validation Framework for Temporal Robustness Assessment in Brain–Computer Interfaces for Motor Imagery
by Mohamed A. Hanafy, Saykhun Yusufjonov, Payman SharafianArdakani, Djaykhun Yusufjonov, Madan M. Rayguru and Dan O. Popa
Technologies 2025, 13(12), 595; https://doi.org/10.3390/technologies13120595 - 18 Dec 2025
Viewed by 625
Abstract
Brain–computer interfaces using motor imagery (MI-BCIs) offer a promising noninvasive communication pathway between humans and engineered equipment such as robots. However, for MI-BCIs based on electroencephalography (EEG), the reliability of the interface across recording sessions is limited by temporal non-stationarity. Overcoming this barrier is critical to translating MI-BCIs from controlled laboratory environments to practical uses. In this paper, we present a comprehensive dual-validation framework to rigorously evaluate the temporal robustness of EEG signals of an MI-BCI. We collected data from six participants performing four motor imagery tasks (left/right hand and foot). Features were extracted using Common Spatial Patterns, and ten machine learning classifiers were assessed within a unified pipeline. Our method integrates within-session evaluation (stratified K-fold cross-validation) with cross-session testing (bidirectional train/test), complemented by stability metrics and performance heterogeneity assessment. Findings reveal minimal performance loss between conditions, with an average accuracy drop of just 2.5%. The AdaBoost classifier achieved the highest within-session performance (84.0% system accuracy, F1-score: 83.8%/80.9% for hand/foot), while the K-nearest neighbors (KNN) classifier demonstrated the best cross-session robustness (81.2% system accuracy, F1-score: 80.5%/80.2% for hand/foot, 0.663 robustness score). This study shows that robust performance across sessions is attainable for MI-BCI evaluation, supporting the pathway toward reliable, real-world clinical deployment.
(This article belongs to the Collection Selected Papers from the PETRA Conference Series)
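
A hedged sketch of the dual-validation idea follows: within-session stratified K-fold plus bidirectional cross-session train/test, assuming MNE's CSP implementation and scikit-learn, on synthetic stand-in data with binary hand-vs-foot labels for simplicity. It is a structural illustration, not the authors' pipeline.

```python
# Dual-validation sketch: within-session K-fold + bidirectional cross-session.
import numpy as np
from mne.decoding import CSP
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def fake_session(n=80, ch=16, t=500):
    """Placeholder for one recording session: (epochs, channels, samples)."""
    return rng.standard_normal((n, ch, t)), rng.integers(0, 2, n)

Xa, ya = fake_session()
Xb, yb = fake_session()
clf = make_pipeline(CSP(n_components=6, log=True), AdaBoostClassifier())

within = cross_val_score(clf, Xa, ya, cv=StratifiedKFold(5)).mean()
cross_ab = clf.fit(Xa, ya).score(Xb, yb)   # train on session A, test on B
cross_ba = clf.fit(Xb, yb).score(Xa, ya)   # and the reverse direction
print(within, cross_ab, cross_ba)
```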

16 pages, 2128 KB  
Article
Robust Motor Imagery–Brain–Computer Interface Classification in Signal Degradation: A Multi-Window Ensemble Approach
by Dong-Geun Lee and Seung-Bo Lee
Biomimetics 2025, 10(12), 832; https://doi.org/10.3390/biomimetics10120832 - 12 Dec 2025
Viewed by 568
Abstract
Electroencephalography (EEG)-based brain–computer interface (BCI) mimics the brain’s intrinsic information-processing mechanisms by translating neural oscillations into actionable commands. In motor imagery (MI) BCI, imagined movements evoke characteristic patterns over the sensorimotor cortex, forming a biomimetic channel through which internal motor intentions are decoded. However, this biomimetic interaction is highly vulnerable to signal degradation, particularly in mobile or low-resource environments where low sampling frequencies obscure these MI-related oscillations. To address this limitation, we propose a robust MI classification framework that integrates spatial, spectral, and temporal dynamics through a filter bank common spatial pattern with time segmentation (FBCSP-TS). This framework classifies motor imagery tasks into four classes (left hand, right hand, foot, and tongue), segments EEG signals into overlapping time domains, and extracts frequency-specific spatial features across multiple subbands. Segment-level predictions are combined via soft voting, reflecting the brain’s distributed integration of information and enhancing resilience to transient noise and localized artifacts. Experiments performed on BCI Competition IV datasets 2a (250 Hz) and 1 (100 Hz) demonstrate that FBCSP-TS outperforms CSP and FBCSP. A paired t-test confirms that accuracy at 100 Hz is not significantly different from that at 250 Hz (p > 0.05), supporting the robustness of the proposed framework. Optimal temporal parameters (window length = 3.5 s, moving length = 0.5 s) further stabilize transient-signal capture and improve the signal-to-noise ratio (SNR). External validation yielded a mean accuracy of 0.809 ± 0.092 and Cohen’s kappa of 0.619 ± 0.184, confirming strong generalizability. By preserving MI-relevant neural patterns under degraded conditions, this framework advances practical, biomimetic BCI suitable for wearable and real-world deployment.
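
The time-segmentation and soft-voting steps can be sketched compactly. The window and step lengths come from the abstract; the filter bank and CSP stages are omitted, and the per-segment probabilities below are stand-ins for a real classifier's output.

```python
# Sketch of FBCSP-TS's segmentation + soft voting: slide a 3.5 s window in
# 0.5 s steps, classify each segment, average the class probabilities.
import numpy as np

def segment(epoch, fs, win_s=3.5, step_s=0.5):
    """Yield overlapping (channels, win) views of one EEG epoch."""
    win, step = int(win_s * fs), int(step_s * fs)
    for start in range(0, epoch.shape[-1] - win + 1, step):
        yield epoch[:, start:start + win]

def soft_vote(proba_per_segment):
    """Average segment-level class probabilities, then pick the argmax."""
    return np.mean(proba_per_segment, axis=0).argmax()

fs, epoch = 250, np.random.randn(22, 1000)                    # 4 s trial, 22 channels
segs = list(segment(epoch, fs))                               # two overlapping windows
fake_proba = [np.random.dirichlet(np.ones(4)) for _ in segs]  # stand-in outputs
print(len(segs), soft_vote(fake_proba))
```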

26 pages, 5681 KB  
Article
Physiological Artifact Suppression in EEG Signals Using an Efficient Multi-Scale Depth-Wise Separable Convolution and Variational Attention Deep Learning Model for Improved Neurological Health Signal Quality
by Vandana Akshath Raj, Tejasvi Parupudi, Vishnumurthy Kedlaya K, Ananthakrishna Thalengala and Subramanya G. Nayak
Technologies 2025, 13(12), 578; https://doi.org/10.3390/technologies13120578 - 9 Dec 2025
Viewed by 604
Abstract
Artifacts remain a major challenge in electroencephalogram (EEG) recordings, often degrading the accuracy of clinical diagnosis, brain computer interface (BCI) systems, and cognitive research. Although recent deep learning approaches have advanced EEG denoising, most still struggle to model long-range dependencies, maintain computational efficiency, and generalize to unseen artifact types. To address these challenges, this study proposes MDSC-VA, an efficient denoising framework that integrates multi-scale (M) depth-wise separable convolution (DSConv), variational autoencoder-based (VAE) latent encoding, and a multi-head self-attention mechanism. This unified architecture effectively balances denoising accuracy and model complexity while enhancing generalization to unseen artifact types. Comprehensive evaluations on three open-source EEG datasets, including EEGdenoiseNet, a Motion Artifact Contaminated Multichannel EEG dataset, and the PhysioNet EEG Motor Movement/Imagery dataset, demonstrate that MDSC-VA consistently outperforms state-of-the-art methods, achieving a higher signal-to-noise ratio (SNR), lower relative root mean square error (RRMSE), and stronger correlation coefficient (CC) values. Moreover, the model preserved over 99% of the dominant neural frequency band power, validating its ability to retain physiologically relevant rhythms. These results highlight the potential of MDSC-VA for reliable clinical EEG interpretation, real-time BCI systems, and advancement towards sustainable healthcare technologies in line with SDG-3 (Good Health and Well-Being).
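
The three reported metrics (SNR, RRMSE, CC) have standard definitions in the EEG-denoising literature; they are sketched here for a clean reference and a denoised estimate, with toy signals as placeholders.

```python
# Standard denoising metrics between a clean reference and an estimate.
import numpy as np

def snr_db(clean, est):
    return 10 * np.log10(np.sum(clean**2) / np.sum((clean - est)**2))

def rrmse(clean, est):
    return np.sqrt(np.mean((clean - est)**2)) / np.sqrt(np.mean(clean**2))

def cc(clean, est):
    return np.corrcoef(clean, est)[0, 1]

t = np.linspace(0, 2, 512)
clean = np.sin(2 * np.pi * 10 * t)            # toy 10 Hz "alpha" reference
est = clean + 0.1 * np.random.randn(t.size)   # imperfect denoised output
print(snr_db(clean, est), rrmse(clean, est), cc(clean, est))
```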

15 pages, 3074 KB  
Article
An SSVEP-Based Brain–Computer Interface Device for Wheelchair Control Integrated with a Speech Aid System
by Abdulrahman Mohammed Alnour Ahmed, Yousef Al-Junaidi, Abdulaziz Al-Tayar, Ammar Qaid and Khurram Karim Qureshi
Eng 2025, 6(12), 343; https://doi.org/10.3390/eng6120343 - 1 Dec 2025
Viewed by 713
Abstract
This paper presents a brain–computer interface (BCI) system based on steady-state visual evoked potential (SSVEP) for controlling an electric wheelchair integrated with a speech aid module. The system targets individuals with severe motor disabilities, such as amyotrophic lateral sclerosis (ALS) or multiple sclerosis (MS), who may experience limited mobility and speech impairments. EEG signals from the occipital lobe are recorded using wet electrodes and classified using deep learning models, including ResNet50, InceptionV4, and VGG16, as well as Canonical Correlation Analysis (CCA). The ResNet50 model demonstrated the best performance for nine-class SSVEP signal classification, achieving an offline accuracy of 81.25% and a real-time accuracy of 72.44%; these results correspond to SSVEP-based analysis rather than motor imagery. The classified outputs are used to trigger predefined wheelchair movements and vocal commands using an Arduino-controlled system. The prototype was successfully implemented and verified through experimental evaluation, demonstrating promising results for mobility and communication assistance.
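
Of the listed classifiers, CCA is the classic training-free SSVEP baseline: correlate the EEG window with sine/cosine references at each candidate stimulus frequency and pick the best match. The nine frequencies and the harmonic count below are assumptions, not the paper's settings.

```python
# CCA baseline for SSVEP frequency detection (scikit-learn's CCA).
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, ref):
    """Largest canonical correlation between EEG (time, ch) and a reference."""
    u, v = CCA(n_components=1).fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

fs, dur = 250, 2.0
t = np.arange(int(fs * dur)) / fs
eeg = np.random.randn(t.size, 8)              # placeholder occipital window
freqs = [8, 9, 10, 11, 12, 13, 14, 15, 16]    # nine stimulus frequencies (assumed)
refs = {f: np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t),
                            np.sin(4*np.pi*f*t), np.cos(4*np.pi*f*t)])  # + 2nd harmonic
        for f in freqs}
best = max(freqs, key=lambda f: cca_score(eeg, refs[f]))
print("detected stimulus:", best, "Hz")
```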

21 pages, 4829 KB  
Article
Multi-Modal EEG–Fusion Neurointerface Wheelchair Control System
by Rongrong An, Yijie Zhou, Hongwei Chen and Xin Xu
Appl. Sci. 2025, 15(23), 12577; https://doi.org/10.3390/app152312577 - 27 Nov 2025
Viewed by 440
Abstract
The development of effective and user-friendly brain–computer interface (BCI) systems is essential for enhancing mobility and autonomy among individuals with physical disabilities. Recent studies have demonstrated significant advances in BCI technologies, particularly in the areas of motor imagery (MI), blink detection, and attention-level analysis. However, existing systems often face limitations, such as low classification accuracy, high latency, and poor robustness in dynamic, real-world environments. Furthermore, most traditional BCIs rely on single-modality approaches, which restrict their adaptability and real-time performance. This paper aims to address these challenges by presenting a multi-modal Electroencephalography (EEG)–fusion neurointerface wheelchair system integrating MI, intentional blink detection, and attention-level analysis. The proposed system improves on previous methods by employing a novel eight-channel needle-shaped dry electrode EEG headset, which significantly enhances signal quality through better electrode–skin contact without the need for conductive gels. Additionally, the system processes EEG signals in real time using a Jetson Nano platform, incorporating a dual-threshold blink detection algorithm for emergency stops, an optimized random forest classifier for decoding directional MI, and a support vector machine (SVM) for attention-level assessment. Experimental evaluations involving classification accuracy, response latency, and trajectory-following precision confirmed robust system performance. MI classification accuracy averaged around 80%, with optimized attention-level analysis reaching up to 94.1%. Trajectory control tests demonstrated minimal deviation from predefined paths (typically less than 0.25 m). These results highlight the system’s advancements over existing single-modality BCIs, showcasing its potential to significantly improve the quality of life for mobility-impaired users. Future studies should focus on enhancing lateral MI detection accuracy, expanding datasets, and validating system robustness across diverse real-world scenarios.
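
The dual-threshold detector is not specified beyond its name; one plausible reading is sketched below, where a blink counts only if a frontal-channel deflection crosses a high onset threshold and returns below a lower offset threshold within a bounded time. Thresholds, channel, and widths are assumptions.

```python
# Hypothetical dual-threshold blink detector for the emergency-stop path.
import numpy as np

def detect_blinks(sig, fs, hi=120e-6, lo=40e-6, max_width_s=0.4):
    blinks, i, n = [], 0, len(sig)
    while i < n:
        if abs(sig[i]) > hi:                      # first threshold: blink onset
            j = i
            while j < n and abs(sig[j]) > lo:     # second threshold: blink offset
                j += 1
            if (j - i) / fs <= max_width_s:       # reject slow drifts / artifacts
                blinks.append(i / fs)
            i = j
        else:
            i += 1
    return blinks                                 # blink onset times in seconds

fs = 250
sig = np.zeros(fs * 4)
sig[fs:fs + 40] = 200e-6                          # synthetic blink at t = 1 s
print(detect_blinks(sig, fs))                     # -> [1.0]
```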

25 pages, 3379 KB  
Article
LPGGNet: Learning from Local–Partition–Global Graph Representations for Motor Imagery EEG Recognition
by Nanqing Zhang, Hongcai Jian, Xingchen Li, Guoqian Jiang and Xianlun Tang
Brain Sci. 2025, 15(12), 1257; https://doi.org/10.3390/brainsci15121257 - 23 Nov 2025
Viewed by 544
Abstract
Objectives: Existing motor imagery electroencephalography (MI-EEG) decoding approaches are constrained by their reliance on a single brain-connectivity graph representation, insufficient utilization of multi-scale information, and lack of adaptability. Methods: To address these constraints, we propose a novel Local–Partition–Global Graph learning Network (LPGGNet). The Local Learning module first constructs functional adjacency matrices using partial directed coherence (PDC), effectively capturing causal dynamic interactions among electrodes. It then employs two layers of temporal convolutions to capture high-level temporal features, followed by Graph Convolutional Networks (GCNs) to capture local topological features. In the Partition Learning module, EEG electrodes are divided into four partitions through a task-driven strategy. For each partition, a novel Gaussian median distance is used to construct adjacency matrices, and Gaussian graph filtering is applied to enhance feature consistency within each partition. After merging the local and partitioned features, the model proceeds to the Global Learning module. In this module, a global adjacency matrix is dynamically computed based on cosine similarity, and residual graph convolutions are then applied to extract highly task-relevant global representations. Finally, two fully connected layers perform the classification. Results: Experiments were conducted on both the BCI Competition IV-2a dataset and a laboratory-recorded dataset, achieving classification accuracies of 82.9% and 87.5%, respectively, which surpass several state-of-the-art models. The contribution of each module was further validated through ablation studies. Conclusions: This study demonstrates the superiority of integrating multi-view brain connectivity with dynamically constructed graph structures for MI-EEG decoding. Moreover, the proposed model offers a novel and efficient solution for EEG signal decoding.
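
The dynamically computed global adjacency from the Global Learning module reduces, in outline, to pairwise cosine similarity between per-electrode feature vectors; the sketch below shows that step alone, with feature dimensions as placeholders and a simple non-negativity clamp as one possible post-processing choice.

```python
# Cosine-similarity adjacency over per-electrode feature vectors.
import numpy as np

def cosine_adjacency(feats):
    """feats: (electrodes, d) -> (electrodes, electrodes) similarity matrix."""
    norm = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    return norm @ norm.T

feats = np.random.randn(22, 64)      # 22 electrodes, 64-dim learned features
A = cosine_adjacency(feats)
A = np.maximum(A, 0)                 # keep edge weights non-negative (one option)
print(A.shape, A.diagonal()[:3])     # diagonal is ~1 (self-similarity)
```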

23 pages, 6005 KB  
Article
Takens-Based Kernel Transfer Entropy Connectivity Network for Motor Imagery Classification
by Alejandra Gomez-Rivera, Andrés M. Álvarez-Meza, David Cárdenas-Peña and Alvaro Orozco-Gutierrez
Sensors 2025, 25(22), 7067; https://doi.org/10.3390/s25227067 - 19 Nov 2025
Viewed by 616
Abstract
Reliable decoding of motor imagery (MI) from electroencephalographic signals remains a challenging problem due to their nonlinear, noisy, and non-stationary nature. To address this issue, this work proposes an end-to-end deep learning model, termed TEKTE-Net, that integrates time embeddings with a kernelized Transfer Entropy estimator to infer directed functional connectivity in MI-based brain–computer interface (BCI) systems. The proposed model incorporates a customized convolutional module that performs Takens’ embedding, enabling the decoding of the underlying EEG activity without requiring explicit preprocessing. Further, the architecture estimates nonlinear and time-delayed interactions between cortical regions using Rational Quadratic kernels within a differentiable framework. Evaluation of TEKTE-Net on semi-synthetic causal benchmarks and the BCI Competition IV 2a dataset demonstrates robustness to low signal-to-noise conditions and interpretability through temporal, spatial, and spectral analyses of learned connectivity patterns. In particular, the model automatically highlights contralateral activations during MI and promotes spectral selectivity for the beta and gamma bands. Overall, TEKTE-Net offers a fully trainable estimator of functional brain connectivity for decoding EEG activity, supporting MI-BCI applications, and promoting interpretability of deep learning models.
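
Takens' delay embedding, which TEKTE-Net realizes as a fixed convolutional module, maps each sample to a vector of delayed copies of the signal; a plain NumPy version follows, with embedding dimension m and delay tau as illustrative choices rather than the paper's values.

```python
# Takens' delay-coordinate embedding of a 1-D signal.
import numpy as np

def takens_embedding(x, m=5, tau=4):
    """x: (time,) -> (time - (m-1)*tau, m) delay-coordinate matrix."""
    n = len(x) - (m - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(m)], axis=1)

t = np.linspace(0, 4, 1000)
x = np.sin(2 * np.pi * 10 * t)     # toy 10 Hz oscillation
emb = takens_embedding(x)
print(emb.shape)                   # (984, 5): reconstructed state vectors
```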

36 pages, 2484 KB  
Review
Signal Preprocessing, Decomposition and Feature Extraction Methods in EEG-Based BCIs
by Bandile Mdluli, Philani Khumalo and Rito Clifford Maswanganyi
Appl. Sci. 2025, 15(22), 12075; https://doi.org/10.3390/app152212075 - 13 Nov 2025
Viewed by 1169
Abstract
Brain–Computer Interface (BCI) technology facilitates direct communication between the human brain and external devices by interpreting brain wave patterns associated with specific motor imagery tasks, which are derived from EEG signals. Although BCIs allow applications such as robotic arm control and smart assistive environments, they face major challenges, mainly due to the large variation in EEG characteristics between and within individuals. This variability is driven by a low signal-to-noise ratio (SNR) arising from both physiological and non-physiological artifacts, which severely degrades the intention detection rate (IDR) in BCIs. Advanced multi-stage signal processing pipelines, including efficient filtering and decomposition techniques, have been developed to address these problems. Additionally, numerous feature engineering techniques have been developed to identify highly discriminative features, mainly to enhance IDRs in BCIs. In this review, several preprocessing techniques and feature extraction algorithms are critically evaluated, with an emphasis on deep learning pipelines. The review comparatively discusses methods such as wavelet-based thresholding and independent component analysis (ICA), including empirical mode decomposition (EMD) and its more sophisticated variants, such as Self-Adaptive Multivariate EMD (SA-MEMD) and Ensemble EMD (EEMD). These methods are examined based on machine learning models using SVM, LDA, and deep learning techniques such as CNNs and PCNNs, highlighting key limitations and findings, including different performance metrics. The paper concludes by outlining future directions.
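
Of the surveyed preprocessing methods, wavelet-based thresholding is the easiest to illustrate; the sketch below uses PyWavelets with the common universal (VisuShrink) threshold. Wavelet choice, decomposition level, and the toy signal are assumptions.

```python
# Wavelet soft-thresholding denoise with the universal threshold.
import numpy as np
import pywt

def wavelet_denoise(sig, wavelet="db4", level=4):
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise level from finest scale
    thr = sigma * np.sqrt(2 * np.log(len(sig)))          # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(sig)]

t = np.linspace(0, 2, 1024)
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.5 * np.random.randn(t.size)
print(np.std(noisy - clean), np.std(wavelet_denoise(noisy) - clean))
```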

26 pages, 1351 KB  
Review
Trends and Limitations in Transformer-Based BCI Research
by Maximilian Achim Pfeffer, Johnny Kwok Wai Wong and Sai Ho Ling
Appl. Sci. 2025, 15(20), 11150; https://doi.org/10.3390/app152011150 - 17 Oct 2025
Cited by 1 | Viewed by 1979
Abstract
Transformer-based models have accelerated EEG motor imagery (MI) decoding by using self-attention to capture long-range temporal structures while complementing spatial inductive biases. This systematic survey of Scopus-indexed works from 2020 to 2025 indicates that reported advances are concentrated in offline, protocol-heterogeneous settings; inconsistent preprocessing, non-standard data splits, and sparse efficiency reporting frequently cloud claims of generalization and real-time suitability. Under session- and subject-aware evaluation on the BCIC IV 2a/2b datasets, typical performance clusters are in the high-80% range for binary MI and the mid-70% range for multi-class tasks, with gains of roughly 5–10 percentage points achieved by strong hybrids (CNN/TCN–Transformer; hierarchical attention) rather than by extreme figures often driven by leakage-prone protocols. In parallel, transformer-driven denoising—particularly diffusion–transformer hybrids—yields strong signal-level metrics but remains weakly linked to task benefit; denoise → decode validation is rarely standardized despite being the most relevant proxy when artifact-free ground truth is unavailable. Three priorities emerge for translation: protocol discipline (fixed train/test partitions, transparent preprocessing, mandatory reporting of parameters, FLOPs, per-trial latency, and acquisition-to-feedback delay); task relevance (shared denoise → decode benchmarks for MI and related paradigms); and adaptivity at scale (self-supervised pretraining on heterogeneous EEG corpora and resource-aware co-optimization of preprocessing and hybrid transformer topologies). Evidence from subject-adjusting evolutionary pipelines that jointly tune preprocessing, attention depth, and CNN–Transformer fusion demonstrates reproducible inter-subject gains over established baselines under controlled protocols. Implementing these practices positions transformer-driven BCIs to move beyond inflated offline estimates toward reliable, real-time neurointerfaces with concrete clinical and assistive relevance.
(This article belongs to the Special Issue Brain-Computer Interfaces: Development, Applications, and Challenges)

29 pages, 3490 KB  
Article
Lower-Limb Motor Imagery Recognition Prototype Based on EEG Acquisition, Filtering, and Machine Learning-Based Pattern Detection
by Sonia Rocío Moreno-Castelblanco, Manuel Andrés Vélez-Guerrero and Mauro Callejas-Cuervo
Sensors 2025, 25(20), 6387; https://doi.org/10.3390/s25206387 - 16 Oct 2025
Viewed by 1177
Abstract
Advances in brain–computer interface (BCI) research have explored various strategies for acquiring and processing electroencephalographic (EEG) signals to detect motor imagery (MI) activities. However, the complexity of multichannel clinical systems and processing techniques can limit their accessibility outside specialized centers, where complex setups are not feasible. This paper presents a proof-of-concept prototype of a single-channel EEG acquisition and processing system designed to identify lower-limb motor imagery. The proposed proof-of-concept prototype enables the wireless acquisition of raw EEG values, signal processing using digital filters, and the detection of MI patterns using machine learning algorithms. Experimental validation in a controlled laboratory with participants performing resting, MI, and movement tasks showed that the best performance was obtained by combining Savitzky–Golay filtering with a Random Forest classifier, reaching 87.36% ± 4% accuracy and an F1-score of 87.18% ± 3.8% under five-fold cross-validation. These findings confirm that, despite limited spatial resolution, MI patterns can be detected using appropriate AI-based filtering and classification. The novelty of this work lies in demonstrating that a single-channel, portable EEG prototype can be effectively used for lower-limb MI recognition. The portability and noise resilience achieved with the prototype highlight its potential for research, clinical rehabilitation, and assistive device control in non-specialized environments. Full article
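
The winning combination reported above maps directly onto SciPy and scikit-learn; the sketch below chains Savitzky–Golay smoothing into a Random Forest under five-fold cross-validation. The window length, polynomial order, and the crude band-power features are assumptions, and the data are synthetic stand-ins.

```python
# Savitzky-Golay smoothing + Random Forest, five-fold cross-validation.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_raw = rng.standard_normal((120, 512))   # 120 single-channel epochs (placeholder)
y = rng.integers(0, 3, 120)               # rest / MI / movement labels

X_smooth = savgol_filter(X_raw, window_length=31, polyorder=3, axis=1)
# Crude feature vector per epoch: mean power in consecutive 64-sample chunks.
X_feat = (X_smooth.reshape(120, -1, 64) ** 2).mean(axis=2)

scores = cross_val_score(RandomForestClassifier(200), X_feat, y, cv=5)
print(scores.mean())
```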

33 pages, 3983 KB  
Article
Real-Time EEG Decoding of Motor Imagery via Nonlinear Dimensionality Reduction (Manifold Learning) and Shallow Classifiers
by Hezzal Kucukselbes and Ebru Sayilgan
Biosensors 2025, 15(10), 692; https://doi.org/10.3390/bios15100692 - 13 Oct 2025
Cited by 1 | Viewed by 1179
Abstract
This study introduces a real-time processing framework for decoding motor imagery EEG signals by integrating manifold learning techniques with shallow classifiers. EEG recordings were obtained from six healthy participants performing five distinct wrist and hand motor imagery tasks. To address the challenges of high dimensionality and inherent nonlinearity in EEG data, five nonlinear dimensionality reduction methods (t-SNE, ISOMAP, LLE, Spectral Embedding, and MDS) were comparatively evaluated. Each method was combined with three shallow classifiers (k-NN, Naive Bayes, and SVM) to investigate performance across binary, ternary, and five-class classification settings. Among all tested configurations, the t-SNE + k-NN pairing achieved the highest accuracies, reaching 99.7% (two-class), 99.3% (three-class), and 89.0% (five-class). ISOMAP and MDS also delivered competitive results, particularly in multi-class scenarios. The presented approach builds upon our previous work involving EEG datasets from individuals with spinal cord injury (SCI), where the same manifold techniques were examined extensively. Comparative findings between healthy and SCI groups reveal consistent advantages of t-SNE and ISOMAP in preserving class separability, despite higher overall accuracies in healthy subjects due to improved signal quality. The proposed pipeline demonstrates low-latency performance, completing signal processing and classification in approximately 150 ms per trial, thereby meeting real-time requirements for responsive BCI applications. These results highlight the potential of nonlinear dimensionality reduction to enhance real-time EEG decoding, offering a low-complexity yet high-accuracy solution applicable to both healthy users and neurologically impaired individuals in neurorehabilitation and assistive technology contexts.
(This article belongs to the Section Wearable Biosensors)
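
An offline sketch of the t-SNE + k-NN pairing follows. Note that scikit-learn's TSNE has no transform() for unseen trials, so this version embeds all epochs first and then cross-validates k-NN in the embedded space, a common offline workaround; the data shapes and hyperparameters are assumptions, not the study's settings.

```python
# t-SNE embedding followed by k-NN classification (offline evaluation).
import numpy as np
from sklearn.manifold import TSNE
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 320))   # 200 flattened MI epochs (placeholder)
y = rng.integers(0, 5, 200)           # five wrist/hand imagery classes

Z = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), Z, y, cv=5)
print(scores.mean())
```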
