Search Results (125)

Search Parameters:
Keywords = motor imagery electroencephalography

41 pages, 5539 KB  
Article
Robust Covert Spatial Attention Decoding from Low-Channel Dry EEG by Hybrid AI Model
by Doyeon Kim and Jaeho Lee
AI 2026, 7(1), 9; https://doi.org/10.3390/ai7010009 - 30 Dec 2025
Viewed by 554
Abstract
Background: Decoding covert spatial attention (CSA) from dry, low-channel electroencephalography (EEG) is key for gaze-independent brain–computer interfaces (BCIs). Methods: We evaluate, on sixteen participants and three tasks (CSA, motor imagery (MI), Emotion), a four-electrode, subject-wise pipeline combining leak-safe preprocessing, multiresolution wavelets, and a compact Hybrid encoder (CNN-LSTM-MHSA) with robustness-oriented training (noise/shift/channel-dropout and supervised consistency). Results: Online, the Hybrid All-on-Wav achieved 0.695 accuracy with end-to-end latency ~2.03 s per 2.0 s decision window; the pure model inference latency is ≈185 ms on CPU and ≈11 ms on GPU. The same backbone without defenses reached 0.673, a CNN-LSTM 0.612, and a compact CNN 0.578. Offline subject-wise analyses showed a CSA median Δ balanced accuracy (BAcc) of +2.9%p (paired Wilcoxon p = 0.037; N = 16), with usability-aligned improvements (error 0.272 → 0.268; information transfer rate (ITR) 3.120 → 3.240). Effects were smaller for MI and present for Emotion. Conclusions: Even with simple hardware, compact attention-augmented models and training-time defenses support feasible, low-latency left–right CSA control above chance, suitable for embedded or laptop-class deployment. Full article
(This article belongs to the Section Medical & Healthcare AI)
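
The usability numbers above (accuracy, ~2 s decisions, ITR in bits/min) are linked by a standard formula. The abstract does not state which ITR definition the authors use, so the sketch below assumes the common Wolpaw formula; the function name and arguments are illustrative.

```python
import math

def wolpaw_itr(accuracy: float, n_classes: int, trial_seconds: float) -> float:
    """Information transfer rate in bits/min under the Wolpaw definition:
    B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1)) bits/selection."""
    p, n = accuracy, n_classes
    if p <= 1.0 / n:                     # at or below chance: no information
        return 0.0
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / trial_seconds)

# The reported online operating point: 2-class left-right CSA, 0.695 accuracy,
# ~2.03 s end-to-end per decision; this lands in the same ballpark as the
# abstract's ITR figures.
print(round(wolpaw_itr(0.695, n_classes=2, trial_seconds=2.03), 3))
```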

33 pages, 9268 KB  
Article
Gaussian Connectivity-Driven EEG Imaging for Deep Learning-Based Motor Imagery Classification
by Alejandra Gomez-Rivera, Diego Fabian Collazos-Huertas, David Cárdenas-Peña, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Sensors 2026, 26(1), 227; https://doi.org/10.3390/s26010227 - 29 Dec 2025
Viewed by 424
Abstract
Electroencephalography (EEG)-based motor imagery (MI) brain–computer interfaces (BCIs) hold considerable potential for applications in neuro-rehabilitation and assistive technologies. Yet, their development remains constrained by challenges such as low spatial resolution, vulnerability to noise and artifacts, and pronounced inter-subject variability. Conventional approaches, including common spatial patterns (CSP) and convolutional neural networks (CNNs), often exhibit limited robustness, weak generalization, and reduced interpretability. To overcome these limitations, we introduce EEG-GCIRNet, a Gaussian connectivity-driven EEG imaging representation network coupled with a regularized LeNet architecture for MI classification. Our method integrates raw EEG signals with topographic maps derived from functional connectivity into a unified variational autoencoder framework. The network is trained with a multi-objective loss that jointly optimizes reconstruction fidelity, classification accuracy, and latent space regularization. The model’s interpretability is enhanced through its variational autoencoder design, allowing for qualitative validation of its learned representations. Experimental evaluations demonstrate that EEG-GCIRNet outperforms state-of-the-art methods, achieving the highest average accuracy (81.82%) and lowest variability (±10.15) in binary classification. Most notably, it effectively mitigates BCI illiteracy by completely eliminating the “Bad” performance group (<60% accuracy), yielding substantial gains of ∼22% for these challenging users. Furthermore, the framework demonstrates good scalability in complex 5-class scenarios, achieving competitive classification accuracy (75.20% ± 4.63) with notable statistical superiority (p = 0.002) against advanced baselines. Extensive interpretability analyses, including analysis of the reconstructed connectivity maps, latent space visualizations, Grad-CAM++, and functional connectivity patterns, confirm that the model captures genuine neurophysiological mechanisms, correctly identifying integrated fronto-centro-parietal networks in high performers and compensatory midline circuits in mid-performers. These findings suggest that EEG-GCIRNet provides a robust and interpretable end-to-end framework for EEG-based BCIs, advancing the development of reliable neurotechnology for rehabilitation and assistive applications. Full article
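
A minimal PyTorch sketch of the kind of multi-objective loss the abstract describes, jointly weighting reconstruction fidelity, classification, and latent-space regularization in a variational autoencoder; the MSE reconstruction term, the loss weights, and the demo shapes are assumptions, not EEG-GCIRNet's published configuration.

```python
import torch
import torch.nn.functional as F

def multi_objective_loss(x, x_hat, logits, labels, mu, logvar,
                         w_rec=1.0, w_cls=1.0, w_kl=0.1):
    """Reconstruction + classification + KL regularization, as in a
    classification-guided VAE. The w_* weights are illustrative."""
    rec = F.mse_loss(x_hat, x)                 # reconstruction fidelity
    cls = F.cross_entropy(logits, labels)      # classification term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # latent prior
    return w_rec * rec + w_cls * cls + w_kl * kl

# Dummy tensors just to show the call signature.
x = torch.randn(8, 22, 500); x_hat = torch.randn_like(x)
logits = torch.randn(8, 2); labels = torch.randint(0, 2, (8,))
mu, logvar = torch.randn(8, 16), torch.randn(8, 16)
print(multi_objective_loss(x, x_hat, logits, labels, mu, logvar))
```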

21 pages, 2686 KB  
Article
A Deep Learning Approach to Classifying User Performance in BCI Gaming
by Aimilia Ntetska, Anastasia Mimou, Katerina D. Tzimourta, Pantelis Angelidis and Markos G. Tsipouras
Electronics 2025, 14(24), 4974; https://doi.org/10.3390/electronics14244974 - 18 Dec 2025
Viewed by 389
Abstract
Brain–Computer Interface (BCI) systems are rapidly evolving and increasingly integrated into interactive environments such as gaming and Virtual/Augmented Reality. In such applications, user adaptability and engagement are critical. This study applies deep learning to predict user performance in a 3D BCI-controlled game using pre-game Motor Imagery (MI) electroencephalographic (EEG) recordings. A total of 72 EEG recordings were collected from 36 participants, 17 using the Muse 2 headset and 19 using the Emotiv Insight device, during left and right hand MI tasks. The signals were preprocessed and transformed into time–frequency spectrograms, which served as inputs to a custom convolutional neural network (CNN) designed to classify users into three performance levels: low, medium, and high. The model achieved classification accuracies of 83% and 95% on Muse 2 and Emotiv Insight data, respectively, at the epoch level, and 75% and 84% at the subject level, using leave-one-subject-out cross-validation (LOSO-CV). These findings demonstrate the feasibility of using deep learning on MI EEG data to forecast user performance in BCI gaming, enabling adaptive systems that enhance both usability and user experience. Full article
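
A hedged sketch of the input-preparation step described above: each EEG epoch becomes a stack of per-channel log spectrograms for the CNN. The four channels, 256 Hz rate, and window length are Muse-2-like placeholders, not the study's actual parameters.

```python
import numpy as np
from scipy.signal import spectrogram

def eeg_to_spectrograms(epochs: np.ndarray, fs: float) -> np.ndarray:
    """epochs: (n_epochs, n_channels, n_samples) -> time-frequency images
    of shape (n_epochs, n_channels, n_freqs, n_times)."""
    out = []
    for epoch in epochs:
        chans = []
        for ch in epoch:
            f, t, sxx = spectrogram(ch, fs=fs, nperseg=int(fs))  # 1 s windows
            chans.append(np.log1p(sxx))        # log scaling compresses dynamics
        out.append(np.stack(chans))
    return np.asarray(out)

X = eeg_to_spectrograms(np.random.randn(40, 4, 2560), fs=256.0)  # dummy epochs
print(X.shape)
```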

19 pages, 6764 KB  
Article
A Dual-Validation Framework for Temporal Robustness Assessment in Brain–Computer Interfaces for Motor Imagery
by Mohamed A. Hanafy, Saykhun Yusufjonov, Payman SharafianArdakani, Djaykhun Yusufjonov, Madan M. Rayguru and Dan O. Popa
Technologies 2025, 13(12), 595; https://doi.org/10.3390/technologies13120595 - 18 Dec 2025
Viewed by 465
Abstract
Brain–computer interfaces using motor imagery (MI-BCIs) offer a promising noninvasive communication pathway between humans and engineered equipment such as robots. However, for MI-BCIs based on electroencephalography (EEG), the reliability of the interface across recording sessions is limited by temporal non-stationarity. Overcoming this barrier is critical to translating MI-BCIs from controlled laboratory environments to practical use. In this paper, we present a comprehensive dual-validation framework to rigorously evaluate the temporal robustness of an EEG-based MI-BCI. We collected data from six participants performing four motor imagery tasks (left/right hand and foot). Features were extracted using Common Spatial Patterns, and ten machine learning classifiers were assessed within a unified pipeline. Our method integrates within-session evaluation (stratified K-fold cross-validation) with cross-session testing (bidirectional train/test), complemented by stability metrics and performance heterogeneity assessment. Findings reveal minimal performance loss between conditions, with an average accuracy drop of just 2.5%. The AdaBoost classifier achieved the highest within-session performance (84.0% system accuracy, F1-score: 83.8%/80.9% for hand/foot), while the K-nearest neighbors (KNN) classifier demonstrated the best cross-session robustness (81.2% system accuracy, F1-score: 80.5%/80.2% for hand/foot, 0.663 robustness score). This study shows that robust performance across sessions is attainable for MI-BCI evaluation, supporting the pathway toward reliable, real-world clinical deployment. Full article
(This article belongs to the Collection Selected Papers from the PETRA Conference Series)
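
A minimal sketch of the within-session arm of such a framework, CSP features feeding a classifier under stratified K-fold, using MNE-Python and scikit-learn. The binary labels, trial and channel counts, and CSP settings are assumptions; the study itself used four tasks and ten classifiers.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

# Dummy band-passed MI epochs: (n_trials, n_channels, n_samples), binary labels.
X = np.random.randn(120, 32, 500)
y = np.random.randint(0, 2, 120)

clf = make_pipeline(CSP(n_components=6, log=True), AdaBoostClassifier())
within = cross_val_score(clf, X, y, cv=StratifiedKFold(5))   # within-session
print(within.mean())

# Cross-session testing would instead fit on session A and score on session B:
# clf.fit(X_sess_a, y_sess_a); clf.score(X_sess_b, y_sess_b)
```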

16 pages, 2128 KB  
Article
Robust Motor Imagery–Brain–Computer Interface Classification in Signal Degradation: A Multi-Window Ensemble Approach
by Dong-Geun Lee and Seung-Bo Lee
Biomimetics 2025, 10(12), 832; https://doi.org/10.3390/biomimetics10120832 - 12 Dec 2025
Viewed by 532
Abstract
Electroencephalography (EEG)-based brain–computer interface (BCI) mimics the brain’s intrinsic information-processing mechanisms by translating neural oscillations into actionable commands. In motor imagery (MI) BCI, imagined movements evoke characteristic patterns over the sensorimotor cortex, forming a biomimetic channel through which internal motor intentions are decoded. However, this biomimetic interaction is highly vulnerable to signal degradation, particularly in mobile or low-resource environments where low sampling frequencies obscure these MI-related oscillations. To address this limitation, we propose a robust MI classification framework that integrates spatial, spectral, and temporal dynamics through a filter bank common spatial pattern with time segmentation (FBCSP-TS). This framework classifies motor imagery tasks into four classes (left hand, right hand, foot, and tongue), segments EEG signals into overlapping time domains, and extracts frequency-specific spatial features across multiple subbands. Segment-level predictions are combined via soft voting, reflecting the brain’s distributed integration of information and enhancing resilience to transient noise and localized artifacts. Experiments performed on BCI Competition IV datasets 2a (250 Hz) and 1 (100 Hz) demonstrate that FBCSP-TS outperforms CSP and FBCSP. A paired t-test confirms that accuracy at 110 Hz is not significantly different from that at 250 Hz (p > 0.05), supporting the robustness of the proposed framework. Optimal temporal parameters (window length = 3.5 s, moving length = 0.5 s) further stabilize transient-signal capture and improve the signal-to-noise ratio (SNR). External validation yielded a mean accuracy of 0.809 ± 0.092 and Cohen’s kappa of 0.619 ± 0.184, confirming strong generalizability. By preserving MI-relevant neural patterns under degraded conditions, this framework advances practical, biomimetic BCIs suitable for wearable and real-world deployment. Full article
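
Two mechanical pieces of the FBCSP-TS recipe lend themselves to a short sketch: overlapping time segmentation (with the 3.5 s window and 0.5 s step reported as optimal) and sub-band filtering, followed by segment-level soft voting. The 4 Hz band grid and the array shapes are assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def segment(X, fs, win_s=3.5, step_s=0.5):
    """Overlapping windows over the last axis of X (trials, channels, samples),
    using the window/step the abstract reports as optimal."""
    w, s = int(win_s * fs), int(step_s * fs)
    return [X[..., i:i + w] for i in range(0, X.shape[-1] - w + 1, s)]

def filter_bank(X, fs, bands=((4, 8), (8, 12), (12, 16), (16, 20),
                              (20, 24), (24, 28), (28, 32), (32, 36))):
    """Band-pass into FBCSP-style sub-bands (a common 4 Hz grid; not
    necessarily the paper's bands)."""
    out = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out.append(sosfiltfilt(sos, X, axis=-1))
    return out

def soft_vote(segment_probas):
    """Average per-segment class probabilities, then take the argmax."""
    return np.mean(segment_probas, axis=0).argmax(axis=-1)

X = np.random.randn(10, 22, 1000)            # 10 trials, 22 ch, 4 s @ 250 Hz
segs = segment(X, fs=250)
print(len(segs), len(filter_bank(segs[0], fs=250)))
```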

21 pages, 4829 KB  
Article
Multi-Modal EEG–Fusion Neurointerface Wheelchair Control System
by Rongrong An, Yijie Zhou, Hongwei Chen and Xin Xu
Appl. Sci. 2025, 15(23), 12577; https://doi.org/10.3390/app152312577 - 27 Nov 2025
Viewed by 368
Abstract
The development of effective and user-friendly brain–computer interface (BCI) systems is essential for enhancing mobility and autonomy among individuals with physical disabilities. Recent studies have demonstrated significant advances in BCI technologies, particularly in the areas of motor imagery (MI), blink detection, and attention-level analysis. However, existing systems often face limitations, such as low classification accuracy, high latency, and poor robustness in dynamic, real-world environments. Furthermore, most traditional BCIs rely on single-modality approaches, which restrict their adaptability and real-time performance. This paper aims to address these challenges by presenting a multi-modal Electroencephalography (EEG)–fusion neurointerface wheelchair system integrating MI, intentional blink detection, and attention-level analysis. The proposed system improves on previous methods by employing a novel eight-channel needle-shaped dry electrode EEG headset, which significantly enhances signal quality through better electrode–skin contact without the need for conductive gels. Additionally, the system processes EEG signals in real-time using a Jetson Nano platform, incorporating a dual-threshold blink detection algorithm for emergency stops, an optimized random forest classifier for decoding directional MI, and a support vector machine (SVM) for attention-level assessment. Experimental evaluations involving classification accuracy, response latency, and trajectory-following precision confirmed robust system performance. MI classification accuracy averaged around 80%, with optimized attention-level analysis reaching up to 94.1%. Trajectory control tests demonstrated minimal deviation from predefined paths (typically less than 0.25 m). These results highlight the system’s advancements over existing single-modality BCIs, showcasing its potential to significantly improve the quality of life for mobility-impaired users. Future studies should focus on enhancing lateral MI detection accuracy, expanding datasets, and validating system robustness across diverse real-world scenarios. Full article
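
A toy sketch of a dual-threshold blink detector in the spirit of the emergency-stop rule above: one threshold on amplitude and one on duration, so only deliberate, sustained blinks trigger a stop. Every threshold here is illustrative, not the system's calibrated value.

```python
import numpy as np

def detect_intentional_blink(frontal_eeg, fs, amp_thresh=100e-6,
                             min_width_s=0.15, max_width_s=0.5):
    """Toy dual-threshold detector: an amplitude threshold plus a duration
    window, so only deliberate, sustained blinks count. frontal_eeg is one
    channel assumed to be in volts."""
    above = np.abs(frontal_eeg) > amp_thresh
    edges = np.flatnonzero(np.diff(above.astype(int)))
    if above[0]:
        edges = np.r_[0, edges]
    if above[-1]:
        edges = np.r_[edges, above.size - 1]
    for start, stop in zip(edges[::2], edges[1::2]):
        if min_width_s <= (stop - start) / fs <= max_width_s:
            return True                      # plausible intentional blink
    return False

fs = 256.0
sig = 1e-6 * np.random.randn(int(fs))
sig[100:160] += 150e-6                       # synthetic ~0.23 s "blink"
print(detect_intentional_blink(sig, fs))     # True
```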

25 pages, 3379 KB  
Article
LPGGNet: Learning from Local–Partition–Global Graph Representations for Motor Imagery EEG Recognition
by Nanqing Zhang, Hongcai Jian, Xingchen Li, Guoqian Jiang and Xianlun Tang
Brain Sci. 2025, 15(12), 1257; https://doi.org/10.3390/brainsci15121257 - 23 Nov 2025
Viewed by 513
Abstract
Objectives: Existing motor imagery electroencephalography (MI-EEG) decoding approaches are constrained by their reliance on a single representation of brain connectivity, insufficient utilization of multi-scale information, and lack of adaptability. Methods: To address these constraints, we propose a novel Local–Partition–Global Graph learning Network (LPGGNet). The Local Learning module first constructs functional adjacency matrices using partial directed coherence (PDC), effectively capturing causal dynamic interactions among electrodes. It then employs two layers of temporal convolutions to capture high-level temporal features, followed by Graph Convolutional Networks (GCNs) to capture local topological features. In the Partition Learning module, EEG electrodes are divided into four partitions through a task-driven strategy. For each partition, a novel Gaussian median distance is used to construct adjacency matrices, and Gaussian graph filtering is applied to enhance feature consistency within each partition. After merging the local and partitioned features, the model proceeds to the Global Learning module. In this module, a global adjacency matrix is dynamically computed based on cosine similarity, and residual graph convolutions are then applied to extract highly task-relevant global representations. Finally, two fully connected layers perform the classification. Results: Experiments were conducted on both the BCI Competition IV-2a dataset and a laboratory-recorded dataset, achieving classification accuracies of 82.9% and 87.5%, respectively, which surpass several state-of-the-art models. The contribution of each module was further validated through ablation studies. Conclusions: This study demonstrates the superiority of integrating multi-view brain connectivities with dynamically constructed graph structures for MI-EEG decoding. Moreover, the proposed model offers a novel and efficient solution for EEG signal decoding. Full article
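
A compact sketch of the Global Learning idea: a cosine-similarity adjacency computed on the fly, followed by one symmetric-normalized graph convolution (Kipf-and-Welling style; the paper's residual GCN variant will differ). The shapes and the nonnegativity clamp are assumptions.

```python
import torch
import torch.nn.functional as F

def cosine_adjacency(x):
    """Dynamic adjacency from cosine similarity between channel feature
    vectors; clamped to keep edge weights nonnegative. x: (channels, feats)."""
    xn = F.normalize(x, dim=-1)
    return (xn @ xn.t()).clamp(min=0)

def graph_conv(x, adj, weight):
    """One symmetric-normalized graph convolution: relu(D^-1/2 A D^-1/2 X W)."""
    deg = adj.sum(-1).clamp(min=1e-6)
    d = torch.diag(deg.rsqrt())
    return F.relu(d @ adj @ d @ x @ weight)

x = torch.randn(22, 64)                    # e.g. 22 channels of BCI IV-2a
h = graph_conv(x, cosine_adjacency(x), torch.randn(64, 32))
print(h.shape)                             # torch.Size([22, 32])
```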

29 pages, 3490 KB  
Article
Lower-Limb Motor Imagery Recognition Prototype Based on EEG Acquisition, Filtering, and Machine Learning-Based Pattern Detection
by Sonia Rocío Moreno-Castelblanco, Manuel Andrés Vélez-Guerrero and Mauro Callejas-Cuervo
Sensors 2025, 25(20), 6387; https://doi.org/10.3390/s25206387 - 16 Oct 2025
Viewed by 1132
Abstract
Advances in brain–computer interface (BCI) research have explored various strategies for acquiring and processing electroencephalographic (EEG) signals to detect motor imagery (MI) activities. However, the complexity of multichannel clinical systems and processing techniques can limit their accessibility outside specialized centers, where complex setups are not feasible. This paper presents a proof-of-concept prototype of a single-channel EEG acquisition and processing system designed to identify lower-limb motor imagery. The proposed proof-of-concept prototype enables the wireless acquisition of raw EEG values, signal processing using digital filters, and the detection of MI patterns using machine learning algorithms. Experimental validation in a controlled laboratory with participants performing resting, MI, and movement tasks showed that the best performance was obtained by combining Savitzky–Golay filtering with a Random Forest classifier, reaching 87.36% ± 4% accuracy and an F1-score of 87.18% ± 3.8% under five-fold cross-validation. These findings confirm that, despite limited spatial resolution, MI patterns can be detected using appropriate filtering and machine learning-based classification. The novelty of this work lies in demonstrating that a single-channel, portable EEG prototype can be effectively used for lower-limb MI recognition. The portability and noise resilience achieved with the prototype highlight its potential for research, clinical rehabilitation, and assistive device control in non-specialized environments. Full article
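
A minimal sketch of the best-performing combination reported above, Savitzky-Golay smoothing followed by a Random Forest under five-fold cross-validation. The window length, polynomial order, and the shortcut of feeding smoothed samples straight to the forest are assumptions; the prototype presumably extracts features from the filtered signal first.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Dummy single-channel epochs (n_epochs, n_samples); labels: rest/MI/movement.
X = np.random.randn(150, 512)
y = np.random.randint(0, 3, 150)

X_smooth = savgol_filter(X, window_length=31, polyorder=3, axis=-1)
scores = cross_val_score(RandomForestClassifier(), X_smooth, y, cv=5)
print(scores.mean())
```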

21 pages, 2248 KB  
Article
TSFNet: Temporal-Spatial Fusion Network for Hybrid Brain-Computer Interface
by Yan Zhang, Bo Yin and Xiaoyang Yuan
Sensors 2025, 25(19), 6111; https://doi.org/10.3390/s25196111 - 3 Oct 2025
Viewed by 1166
Abstract
Unimodal brain–computer interfaces (BCIs) often suffer from the inherent limitations of relying on a single modality. While hybrid BCIs combining electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) offer complementary advantages, effectively integrating their spatiotemporal features remains a challenge due to inherent signal asynchrony. This study aims to develop a novel deep fusion network to achieve synergistic integration of EEG and fNIRS signals for improved classification performance across different tasks. We propose a novel Temporal-Spatial Fusion Network (TSFNet), which consists of two key sublayers: the EEG-fNIRS-guided Fusion (EFGF) layer and the Cross-Attention-based Feature Enhancement (CAFÉ) layer. The EFGF layer extracts temporal features from EEG and spatial features from fNIRS to generate a hybrid attention map, which is utilized to achieve more effective and complementary integration of spatiotemporal information. The CAFÉ layer enables bidirectional interaction between fNIRS and fusion features via a cross-attention mechanism, which enhances the fusion features and selectively filters informative fNIRS representations. Through the two sublayers, TSFNet achieves deep fusion of multimodal features. Finally, TSFNet is evaluated on motor imagery (MI), mental arithmetic (MA), and word generation (WG) classification tasks. Experimental results demonstrate that TSFNet achieves superior classification performance, with average accuracies of 70.18% for MI, 86.26% for MA, and 81.13% for WG, outperforming existing state-of-the-art multimodal algorithms. These findings suggest that TSFNet provides an effective solution for spatiotemporal feature fusion in hybrid BCIs, with potential applications in real-world BCI systems. Full article
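
A minimal PyTorch sketch of bidirectional cross-attention in the spirit of the CAFÉ layer: fused features query fNIRS features and vice versa. The dimensions, head count, and additive combination are placeholders, not TSFNet's actual design.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Bidirectional cross-attention between fNIRS and fused features;
    a sketch, not the published architecture."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.fused_q = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fnirs_q = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, fnirs, fused):
        # fnirs, fused: (batch, seq_len, dim)
        enhanced, _ = self.fused_q(fused, fnirs, fnirs)   # fused queries fNIRS
        filtered, _ = self.fnirs_q(fnirs, fused, fused)   # fNIRS queries fused
        return enhanced + filtered

m = CrossAttentionFusion()
out = m(torch.randn(8, 20, 64), torch.randn(8, 20, 64))
print(out.shape)   # torch.Size([8, 20, 64])
```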

18 pages, 1949 KB  
Article
EEG-Based Analysis of Motor Imagery and Multi-Speed Passive Pedaling: Implications for Brain–Computer Interfaces
by Cristian Felipe Blanco-Diaz, Aura Ximena Gonzalez-Cely, Denis Delisle-Rodriguez and Teodiano Freire Bastos-Filho
Signals 2025, 6(4), 52; https://doi.org/10.3390/signals6040052 - 1 Oct 2025
Viewed by 1150
Abstract
Decoding motor imagery (MI) of lower-limb movements from electroencephalography (EEG) signals remains a challenge due to the involvement of deep cortical regions, limiting the applicability of Brain–Computer Interfaces (BCIs). This study proposes a novel protocol that combines passive pedaling (PP) as sensory priming with MI at different speeds (30, 45, and 60 rpm) to improve EEG-based classification. Ten healthy participants performed PP followed by MI tasks while EEG data were recorded. An increase in relative spectral power around Cz, associated with both PP and MI, was observed, varying with speed and suggesting that PP may enhance cortical engagement during MI. Furthermore, our classification strategy, based on Convolutional Neural Networks (CNNs), achieved an accuracy of 0.87–0.89 across four classes (three speeds and rest). This performance was also compared with the standard Common Spatial Patterns (CSP) and Linear Discriminant Analysis (LDA), which achieved an accuracy of 0.67–0.76. These results demonstrate the feasibility of multiclass decoding of imagined pedaling velocities and lay the groundwork for speed-adaptive BCIs, supporting future personalized and user-centered neurorehabilitation interventions. Full article
(This article belongs to the Special Issue Advances in Biomedical Signal Processing and Analysis)
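
The relative-power analysis around Cz has a standard form: band power divided by broadband power from a Welch PSD. A sketch under assumed band edges and sampling rate follows; none of these numbers come from the paper.

```python
import numpy as np
from scipy.signal import welch

def relative_band_power(x, fs, band=(8.0, 30.0), total=(1.0, 40.0)):
    """Relative spectral power of one channel: power in `band` divided by
    power in `total`. Band edges here are illustrative."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    b = (f >= band[0]) & (f <= band[1])
    t = (f >= total[0]) & (f <= total[1])
    return np.trapz(pxx[b], f[b]) / np.trapz(pxx[t], f[t])

cz = np.random.randn(10 * 250)             # dummy 10 s of Cz at 250 Hz
print(relative_band_power(cz, fs=250.0))
```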

18 pages, 711 KB  
Review
Exploring Imagined Movement for Brain–Computer Interface Control: An fNIRS and EEG Review
by Robert Finnis, Adeel Mehmood, Henning Holle and Jamshed Iqbal
Brain Sci. 2025, 15(9), 1013; https://doi.org/10.3390/brainsci15091013 - 19 Sep 2025
Cited by 3 | Viewed by 3527
Abstract
Brain–Computer Interfaces (BCIs) offer a non-invasive pathway for restoring motor function, particularly for individuals with limb loss. This review explored the effectiveness of Electroencephalography (EEG) and functional Near-Infrared Spectroscopy (fNIRS) in decoding Motor Imagery (MI) movements for both offline and online BCI systems. EEG has been the dominant non-invasive neuroimaging modality due to its high temporal resolution and accessibility; however, it is limited by high susceptibility to electrical noise and motion artifacts, particularly in real-world settings. fNIRS offers improved robustness to electrical and motion noise, making it increasingly viable in prosthetic control tasks; however, it has an inherent physiological delay. The review categorizes experimental approaches based on modality, paradigm, and study type, highlighting the methods used for signal acquisition, feature extraction, and classification. Results show that while offline studies achieve higher classification accuracy due to fewer time constraints and richer data processing, recent advancements in machine learning—particularly deep learning—have improved the feasibility of online MI decoding. Hybrid EEG–fNIRS systems further enhance performance by combining the temporal precision of EEG with the spatial specificity of fNIRS. Overall, the review finds that predicting online imagined movement is feasible, though still less reliable than motor execution, and continued improvements in neuroimaging integration and classification methods are essential for real-world BCI applications. Broader dissemination of recent advancements in MI-based BCI research is expected to stimulate further interdisciplinary collaboration among roboticists, neuroscientists, and clinicians, accelerating progress toward practical and transformative neuroprosthetic technologies. Full article
(This article belongs to the Special Issue Exploring the Neurobiology of the Sensory-Motor System)

20 pages, 6116 KB  
Article
Automated Detection of Motor Activity Signatures from Electrophysiological Signals by Neural Network
by Onur Kocak
Symmetry 2025, 17(9), 1472; https://doi.org/10.3390/sym17091472 - 6 Sep 2025
Viewed by 912
Abstract
The aim of this study is to analyze the signal generated in the brain for a specific motor task and to identify the region where it occurs. For this purpose, electroencephalography (EEG) signals were divided into delta, theta, alpha, and beta frequency sub-bands, and feature extraction was performed on the time-frequency characteristics of the resulting sub-band signals. The epoch corresponding to motor imagery or action and the signal source in the brain were determined by power spectral density features. This study focused on a hand open–close motor task as an example. A machine learning structure was used for signal recognition and classification. The highest accuracy of 92.9% was obtained with the neural network for signal recognition and action detection. In addition to the classification framework, this study also incorporated advanced preprocessing and energy analysis techniques. Eye blink artifacts were automatically detected and removed using independent component analysis (ICA), enabling more reliable spectral estimation. Furthermore, a detailed channel-based and sub-band energy analysis was performed using fast Fourier transform (FFT) and power spectral density (PSD) estimation. The results revealed that frontal electrodes, particularly Fp1 and AF7, exhibited dominant energy patterns during both real and imagined motor tasks. Delta band activity was found to be most pronounced during rest (T0) compared with T1 and T2, while higher-frequency bands, especially beta, showed increased activity during motor imagery, indicating cognitive and motor planning processes. Although 30 s epochs were initially used, event-based selection was applied within each epoch to mark short task-related intervals, ensuring methodological consistency with the 2–4 s windows commonly emphasized in the literature. After artifact removal, motor activity typically associated with the C3 region was also observed with greater intensity over the frontal electrode sites Fp1, Fp2, AF7, and AF8, demonstrating hemispheric symmetry. The delta band power was found to be higher than that of other frequency bands across T0, T1, and T2 conditions. However, a marked decrease in delta power was observed from T0 to T1 and T2. In contrast, beta band power increased by approximately 20% from T0 to T2, with a similar pattern also evident in gamma band activity. These changes indicate cognitive and motor planning processes. The novelty of this study lies in identifying the electrode that exhibits the strongest signal characteristics for a specific motor activity among 64-channel EEG recordings and subsequently achieving high-performance classification of the corresponding motor activity. Full article
(This article belongs to the Section Computer)
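
A hedged MNE-Python sketch of the preprocessing chain described above: ICA-based eye-blink removal guided by a frontal channel, then a Welch PSD from which channel and sub-band energies follow. The file name, component count, and the use of Fp1 as the EOG proxy are placeholders.

```python
import mne

# Placeholder 64-channel motor-task recording; substitute a real file/montage.
raw = mne.io.read_raw_edf("subject01_task.edf", preload=True)
raw.filter(1.0, 40.0)                       # band limit before ICA

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
# Flag blink components by correlation with a frontal channel (Fp1 assumed
# present), then remove them before spectral estimation.
eog_idx, _ = ica.find_bads_eog(raw, ch_name="Fp1")
ica.exclude = eog_idx
clean = ica.apply(raw.copy())

# Per-channel Welch PSD; band energies follow by integrating frequency bins.
psd = clean.compute_psd(method="welch", fmin=1.0, fmax=40.0)
print(psd)
```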

20 pages, 2115 KB  
Article
GAH-TNet: A Graph Attention-Based Hierarchical Temporal Network for EEG Motor Imagery Decoding
by Qiulei Han, Yan Sun, Hongbiao Ye, Ze Song, Jian Zhao, Lijuan Shi and Zhejun Kuang
Brain Sci. 2025, 15(8), 883; https://doi.org/10.3390/brainsci15080883 - 19 Aug 2025
Viewed by 1761
Abstract
Background: Brain–computer interfaces (BCIs) based on motor imagery (MI) offer promising solutions for motor rehabilitation and communication. However, electroencephalography (EEG) signals are often characterized by low signal-to-noise ratios, strong non-stationarity, and significant inter-subject variability, which pose significant challenges for accurate decoding. Existing methods often struggle to simultaneously model the spatial interactions between EEG channels, the local fine-grained features within signals, and global semantic patterns. Methods: To address this, we propose the graph attention-based hierarchical temporal network (GAH-TNet), which integrates spatial graph attention modeling with hierarchical temporal feature encoding. Specifically, we design the graph attention temporal encoding block (GATE). The graph attention mechanism is used to model spatial dependencies between EEG channels and encode short-term temporal dynamic features. Subsequently, a hierarchical attention-guided deep temporal feature encoding block (HADTE) is introduced, which extracts local fine-grained and global long-term dependency features through two-stage attention and temporal convolution. Finally, a fully connected classifier is used to obtain the classification results. The proposed model is evaluated on two publicly available MI-EEG datasets. Results: Our method outperforms multiple existing state-of-the-art methods in classification accuracy. On the BCI IV 2a dataset, the average classification accuracy reaches 86.84%, and on BCI IV 2b, it reaches 89.15%. Ablation experiments validate the complementary roles of GATE and HADTE in modeling. Additionally, the model exhibits good generalization ability across subjects. Conclusions: This framework effectively captures the spatio-temporal dynamic characteristics and topological structure of MI-EEG signals. This hierarchical and interpretable framework provides a new approach for improving decoding performance in EEG motor imagery tasks. Full article
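
A minimal PyTorch sketch of the short-term temporal encoding that a GATE-like block performs: stacked 1-D convolutions over (batch, channels, time) EEG. The kernel size, width, and input shape are placeholders, not GAH-TNet's configuration, and the graph-attention stage is omitted.

```python
import torch
import torch.nn as nn

class TemporalConvBlock(nn.Module):
    """Two stacked temporal convolutions over EEG (batch, channels, time);
    a sketch of short-term temporal encoding, not the published block."""
    def __init__(self, in_ch=22, hidden=32, k=15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, hidden, k, padding=k // 2),
            nn.BatchNorm1d(hidden), nn.ELU(),
            nn.Conv1d(hidden, hidden, k, padding=k // 2),
            nn.BatchNorm1d(hidden), nn.ELU(),
        )

    def forward(self, x):
        return self.net(x)

x = torch.randn(16, 22, 1000)       # e.g. BCI IV-2a: 22 channels, 4 s @ 250 Hz
print(TemporalConvBlock()(x).shape) # torch.Size([16, 32, 1000])
```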

24 pages, 4294 KB  
Article
Post Hoc Event-Related Potential Analysis of Kinesthetic Motor Imagery-Based Brain-Computer Interface Control of Anthropomorphic Robotic Arms
by Miltiadis Spanos, Theodora Gazea, Vasileios Triantafyllidis, Konstantinos Mitsopoulos, Aristidis Vrahatis, Maria Hadjinicolaou, Panagiotis D. Bamidis and Alkinoos Athanasiou
Electronics 2025, 14(15), 3106; https://doi.org/10.3390/electronics14153106 - 4 Aug 2025
Cited by 1 | Viewed by 853
Abstract
Kinesthetic motor imagery (KMI), the mental rehearsal of a motor task without its actual performance, constitutes one of the most common techniques used for brain–computer interface (BCI) control for movement-related tasks. The effect of neural injury on motor cortical activity during execution and imagery remains under investigation in terms of activations, processing of motor onset, and BCI control. The current work aims to conduct a post hoc investigation of the event-related potential (ERP)-based processing of KMI during BCI control of anthropomorphic robotic arms by spinal cord injury (SCI) patients and healthy control participants in a completed clinical trial. For this purpose, we analyzed 14-channel electroencephalography (EEG) data from 10 patients with cervical SCI and 8 healthy individuals, recorded through Emotiv EPOC BCI, as the participants attempted to move anthropomorphic robotic arms using KMI. EEG data were pre-processed by band-pass filtering (8–30 Hz) and independent component analysis (ICA). ERPs were calculated at the sensor space, and analysis of variance (ANOVA) was used to determine potential differences between groups. Our results showed no statistically significant differences between SCI patients and healthy control groups regarding mean amplitude and latency (p > 0.05) across the recorded channels at various time points during stimulus presentation. Notably, no significant differences were observed in ERP components, except for the P200 component at the T8 channel. These findings suggest that brain circuits associated with motor planning and sensorimotor processes are not disrupted due to anatomical damage following SCI. The temporal dynamics of motor-related areas—particularly in channels like F3, FC5, and F7—indicate that essential motor imagery (MI) circuits remain functional. Limitations include the relatively small sample size that may hamper the generalization of our findings, the sensor-space analysis that restricts anatomical specificity and neurophysiological interpretations, and the use of a low-density EEG headset, lacking coverage over key motor regions. Non-invasive EEG-based BCI systems for motor rehabilitation in SCI patients could effectively leverage intact neural circuits to promote neuroplasticity and facilitate motor recovery. Future work should include validation against larger, longitudinal, high-density, source-space EEG datasets. Full article
(This article belongs to the Special Issue EEG Analysis and Brain–Computer Interface (BCI) Technology)
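
Sensor-space ERP amplitude and latency extraction, as used in the analysis above, reduces to averaging epochs and locating the peak inside a component window (e.g., around the P200 near 0.2 s). The sampling rate, epoch shape, and search window below are assumptions.

```python
import numpy as np

def erp_peak(epochs, fs, tmin, window=(0.15, 0.25)):
    """Mean-ERP peak amplitude and latency for one channel.
    epochs: (n_trials, n_samples); tmin: epoch start relative to stimulus (s)."""
    erp = epochs.mean(axis=0)                       # average over trials
    times = tmin + np.arange(erp.size) / fs
    mask = (times >= window[0]) & (times <= window[1])
    i = np.argmax(np.abs(erp[mask]))                # largest deflection
    return erp[mask][i], times[mask][i]             # amplitude, latency (s)

# Dummy epochs: 60 trials, 2 s at an EPOC-like 128 Hz, starting 0.5 s pre-stimulus.
amp, lat = erp_peak(np.random.randn(60, 256), fs=128.0, tmin=-0.5)
print(amp, lat)
```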

29 pages, 2830 KB  
Article
BCINetV1: Integrating Temporal and Spectral Focus Through a Novel Convolutional Attention Architecture for MI EEG Decoding
by Muhammad Zulkifal Aziz, Xiaojun Yu, Xinran Guo, Xinming He, Binwen Huang and Zeming Fan
Sensors 2025, 25(15), 4657; https://doi.org/10.3390/s25154657 - 27 Jul 2025
Cited by 2 | Viewed by 1681
Abstract
Motor imagery (MI) electroencephalograms (EEGs) are pivotal recordings of cortical activity during imagined motor actions, widely leveraged for brain-computer interface (BCI) system development. However, effectively decoding these MI EEG signals is often hampered by flawed signal-processing methods, deep learning models that lack clinical interpretability, and highly inconsistent performance across datasets. We propose BCINetV1, a new framework for MI EEG decoding to address the aforementioned challenges. The BCINetV1 utilizes three innovative components: a temporal convolution-based attention block (T-CAB) and a spectral convolution-based attention block (S-CAB), both driven by a new convolutional self-attention (ConvSAT) mechanism to identify key non-stationary temporal and spectral patterns in the EEG signals. Lastly, a squeeze-and-excitation block (SEB) intelligently combines those identified tempo-spectral features for accurate, stable, and contextually aware MI EEG classification. Evaluated on four diverse datasets containing 69 participants, BCINetV1 consistently achieved the highest average accuracies of 98.6% (Dataset 1), 96.6% (Dataset 2), 96.9% (Dataset 3), and 98.4% (Dataset 4). This research demonstrates that BCINetV1 is computationally efficient, extracts clinically vital markers, effectively handles the non-stationarity of EEG data, and shows a clear advantage over existing methods, marking a significant step forward for practical BCI applications. Full article
(This article belongs to the Special Issue Advanced Biomedical Imaging and Signal Processing)
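
A sketch of one plausible reading of convolutional self-attention (ConvSAT): query, key, and value projections implemented as 1-D convolutions instead of linear maps, so attention weights reflect local tempo-spectral context. This is an interpretation for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvSelfAttention(nn.Module):
    """Self-attention whose query/key/value projections are 1-D convolutions
    rather than linear maps; a hypothetical reading of the ConvSAT idea."""
    def __init__(self, dim=32, k=7):
        super().__init__()
        self.to_q = nn.Conv1d(dim, dim, k, padding=k // 2)
        self.to_k = nn.Conv1d(dim, dim, k, padding=k // 2)
        self.to_v = nn.Conv1d(dim, dim, k, padding=k // 2)

    def forward(self, x):                    # x: (batch, dim, time)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        scores = q.transpose(1, 2) @ k / x.shape[1] ** 0.5   # (batch, T, T)
        attn = torch.softmax(scores, dim=-1)
        return v @ attn.transpose(1, 2)      # weighted sum over time steps

x = torch.randn(8, 32, 250)                  # dummy: 8 trials, 32 feats, 1 s
print(ConvSelfAttention()(x).shape)          # torch.Size([8, 32, 250])
```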
