EEG-Based Brain–Computer Interface: Trends, Challenges and Advancements

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 October 2025) | Viewed by 10577

Special Issue Editor


Dr. Imène Jraidi
Guest Editor
Department of Educational and Counselling Psychology, McGill University, Montréal, QC H3A 1Y2, Canada
Interests: artificial intelligence; human and machine learning; multimodal interaction; cognitive and affective modeling

Special Issue Information

Dear Colleagues,

In recent years, the development of brain–computer interface (BCI) technology has enhanced the ability of the human brain to interact with its environment through measured neural activity. BCIs can also enable new neurorehabilitation methods for people with physical disabilities (such as paralyzed patients and amputees) and brain injuries (such as stroke patients).

Advances in artificial intelligence have accelerated the development of electroencephalography (EEG)-based BCI technologies. Intelligent EEG-based BCI systems can continuously monitor fluctuations in a person's cognitive state during monotonous tasks, which is of great value both to people requiring medical support and to researchers. Many current BCI studies focus on EEG signals related to motor imagery of whole-body kinematics and to the various senses. It is therefore necessary to study the range of experimental paradigms used in EEG-based BCI systems.

This Special Issue will focus on the latest research progress in EEG-based BCIs. Researchers are welcome to submit original research on common BCI paradigms, signal processing methods, and their applications in target patient populations.

Dr. Imène Jraidi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

Jump to: Review

23 pages, 6005 KB  
Article
Takens-Based Kernel Transfer Entropy Connectivity Network for Motor Imagery Classification
by Alejandra Gomez-Rivera, Andrés M. Álvarez-Meza, David Cárdenas-Peña and Alvaro Orozco-Gutierrez
Sensors 2025, 25(22), 7067; https://doi.org/10.3390/s25227067 - 19 Nov 2025
Viewed by 380
Abstract
Reliable decoding of motor imagery (MI) from electroencephalographic signals remains a challenging problem due to their nonlinear, noisy, and non-stationary nature. To address this issue, this work proposes an end-to-end deep learning model, termed TEKTE-Net, that integrates time embeddings with a kernelized Transfer Entropy estimator to infer directed functional connectivity in MI-based brain–computer interface (BCI) systems. The proposed model incorporates a customized convolutional module that performs Takens’ embedding, enabling the decoding of the underlying EEG activity without requiring explicit preprocessing. Further, the architecture estimates nonlinear and time-delayed interactions between cortical regions using Rational Quadratic kernels within a differentiable framework. Evaluation of TEKTE-Net on semi-synthetic causal benchmarks and the BCI Competition IV 2a dataset demonstrates robustness to low signal-to-noise conditions and interpretability through temporal, spatial, and spectral analyses of learned connectivity patterns. In particular, the model automatically highlights contralateral activations during MI and promotes spectral selectivity for the beta and gamma bands. Overall, TEKTE-Net offers a fully trainable estimator of functional brain connectivity for decoding EEG activity, supporting MI-BCI applications, and promoting interpretability of deep learning models.
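
The Takens' embedding step at the heart of this model can be illustrated with a minimal delay-embedding routine (a generic NumPy sketch, not the paper's TEKTE-Net module; the `dim` and `delay` values below are arbitrary illustrations):

```python
import numpy as np

def takens_embedding(x, dim=3, delay=2):
    """Delay-embed a 1-D signal x into points [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * delay
    if n <= 0:
        raise ValueError("signal too short for this (dim, delay)")
    # Column i holds the signal shifted by i*delay samples.
    return np.stack([x[i * delay : i * delay + n] for i in range(dim)], axis=1)

# Toy example: embed a noisy sine wave (stand-in for one EEG channel).
t = np.linspace(0, 4 * np.pi, 500)
x = np.sin(t) + 0.05 * np.random.randn(500)
emb = takens_embedding(x, dim=3, delay=10)
print(emb.shape)  # (480, 3)
```

Each row of `emb` is one point of the reconstructed state space; a learned connectivity estimator would then operate on these embedded trajectories rather than on raw samples.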

25 pages, 5973 KB  
Article
An Attention-Residual Convolutional Network for Real-Time Seizure Classification on Edge Devices
by Peter A. Akor, Godwin Enemali, Usman Muhammad, Rajiv Ranjan Singh and Hadi Larijani
Sensors 2025, 25(22), 6855; https://doi.org/10.3390/s25226855 - 10 Nov 2025
Viewed by 553
Abstract
Epilepsy affects over 50 million people globally, with accurate seizure type classification directly influencing treatment selection as different seizure types respond to specific antiepileptic medications. Manual electroencephalogram (EEG) interpretation remains time-intensive and requires specialized expertise, creating clinical workflow bottlenecks. This work presents EEG-ARCNet, an attention-residual convolutional network integrating residual connections with channel attention mechanisms to extract discriminative temporal and spectral features from multi-channel EEG recordings. The model combines nine statistical temporal features with five frequency-band power measures through Welch’s spectral decomposition, processed through attention-enhanced convolutional pathways. Evaluated on the Temple University Hospital Seizure Corpus, EEG-ARCNet achieved 99.65% accuracy with 99.59% macro-averaged F1-score across five seizure types (absence, focal non-specific, simple partial, tonic-clonic, and tonic). To validate practical deployment, the model was implemented on Raspberry Pi 4, achieving a 2.06 ms average inference time per 10 s segment with 35.4% CPU utilization and 499.4 MB memory consumption. The combination of high classification accuracy and efficient edge deployment demonstrates technical feasibility for resource-constrained seizure-monitoring applications.
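
Welch-based frequency-band power features of the kind described above can be sketched as follows (a minimal illustration using `scipy.signal.welch`; the five canonical band edges are an assumption, not taken from the paper):

```python
import numpy as np
from scipy.signal import welch

# Canonical EEG bands in Hz (assumed; the paper's exact five bands are not given here).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(x, fs):
    """Return approximate power per band from Welch's PSD estimate."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
    df = freqs[1] - freqs[0]
    return {name: float(np.sum(psd[(freqs >= lo) & (freqs < hi)]) * df)
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(0)
fs = 250  # a typical EEG sampling rate
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))  # 10 Hz tone + noise
bp = band_powers(x, fs)
print(max(bp, key=bp.get))  # alpha
```

The resulting five numbers per channel and segment would be concatenated with temporal statistics before entering the convolutional pathways.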

29 pages, 3490 KB  
Article
Lower-Limb Motor Imagery Recognition Prototype Based on EEG Acquisition, Filtering, and Machine Learning-Based Pattern Detection
by Sonia Rocío Moreno-Castelblanco, Manuel Andrés Vélez-Guerrero and Mauro Callejas-Cuervo
Sensors 2025, 25(20), 6387; https://doi.org/10.3390/s25206387 - 16 Oct 2025
Viewed by 912
Abstract
Advances in brain–computer interface (BCI) research have explored various strategies for acquiring and processing electroencephalographic (EEG) signals to detect motor imagery (MI) activities. However, the complexity of multichannel clinical systems and processing techniques can limit their accessibility outside specialized centers, where complex setups are not feasible. This paper presents a proof-of-concept prototype of a single-channel EEG acquisition and processing system designed to identify lower-limb motor imagery. The proposed proof-of-concept prototype enables the wireless acquisition of raw EEG values, signal processing using digital filters, and the detection of MI patterns using machine learning algorithms. Experimental validation in a controlled laboratory with participants performing resting, MI, and movement tasks showed that the best performance was obtained by combining Savitzky–Golay filtering with a Random Forest classifier, reaching 87.36% ± 4% accuracy and an F1-score of 87.18% ± 3.8% under five-fold cross-validation. These findings confirm that, despite limited spatial resolution, MI patterns can be detected using appropriate AI-based filtering and classification. The novelty of this work lies in demonstrating that a single-channel, portable EEG prototype can be effectively used for lower-limb MI recognition. The portability and noise resilience achieved with the prototype highlight its potential for research, clinical rehabilitation, and assistive device control in non-specialized environments.
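
A Savitzky–Golay-plus-Random-Forest pipeline of this general shape can be sketched as below (a toy illustration on synthetic single-channel data; the features, filter parameters, and class structure are hypothetical, not the authors' implementation):

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250
n_trials, n_samples = 120, fs * 2  # 2 s single-channel epochs

# Synthetic stand-in data: class-1 ("MI") trials carry a weak 12 Hz component.
t = np.arange(n_samples) / fs
y = rng.integers(0, 2, n_trials)
X_raw = rng.standard_normal((n_trials, n_samples))
X_raw[y == 1] += 0.8 * np.sin(2 * np.pi * 12 * t)

# Smooth each epoch (31-sample window ~ 124 ms at 250 Hz), then extract
# simple per-epoch statistical features.
X_smooth = savgol_filter(X_raw, window_length=31, polyorder=3, axis=1)
feats = np.column_stack([X_smooth.mean(1), X_smooth.std(1),
                         np.abs(np.diff(X_smooth, axis=1)).mean(1)])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, feats, y, cv=5)
print(scores.mean())
```

Even this crude feature set separates the synthetic classes, which is the point of the sketch: after smoothing, a handful of statistics per epoch can be enough for a tree ensemble to latch onto.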

21 pages, 2248 KB  
Article
TSFNet: Temporal-Spatial Fusion Network for Hybrid Brain-Computer Interface
by Yan Zhang, Bo Yin and Xiaoyang Yuan
Sensors 2025, 25(19), 6111; https://doi.org/10.3390/s25196111 - 3 Oct 2025
Viewed by 973
Abstract
Unimodal brain–computer interfaces (BCIs) often suffer from inherent limitations due to the characteristic of using single modalities. While hybrid BCIs combining electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) offer complementary advantages, effectively integrating their spatiotemporal features remains a challenge due to inherent signal asynchrony. This study aims to develop a novel deep fusion network to achieve synergistic integration of EEG and fNIRS signals for improved classification performance across different tasks. We propose a novel Temporal-Spatial Fusion Network (TSFNet), which consists of two key sublayers: the EEG-fNIRS-guided Fusion (EFGF) layer and the Cross-Attention-based Feature Enhancement (CAFÉ) layer. The EFGF layer extracts temporal features from EEG and spatial features from fNIRS to generate a hybrid attention map, which is utilized to achieve more effective and complementary integration of spatiotemporal information. The CAFÉ layer enables bidirectional interaction between fNIRS and fusion features via a cross-attention mechanism, which enhances the fusion features and selectively filters informative fNIRS representations. Through the two sublayers, TSFNet achieves deep fusion of multimodal features. Finally, TSFNet is evaluated on motor imagery (MI), mental arithmetic (MA), and word generation (WG) classification tasks. Experimental results demonstrate that TSFNet achieves superior classification performance, with average accuracies of 70.18% for MI, 86.26% for MA, and 81.13% for WG, outperforming existing state-of-the-art multimodal algorithms. These findings suggest that TSFNet provides an effective solution for spatiotemporal feature fusion in hybrid BCIs, with potential applications in real-world BCI systems.
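
The cross-attention operation underlying a layer like CAFÉ can be illustrated with a bare scaled dot-product sketch (plain NumPy, single head, no learned projections; a simplification for intuition, not the paper's layer):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention: one feature stream queries the other.
    queries:      (n_q, d) array, e.g. fusion features
    keys_values:  (n_k, d) array, e.g. fNIRS features"""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (n_q, n_k) similarity
    weights = softmax(scores, axis=-1)              # attention over the other stream
    return weights @ keys_values                    # (n_q, d) attended output

rng = np.random.default_rng(0)
fusion = rng.standard_normal((8, 16))   # 8 fusion tokens, dim 16 (arbitrary sizes)
fnirs = rng.standard_normal((20, 16))   # 20 fNIRS tokens
out = cross_attention(fusion, fnirs)
print(out.shape)  # (8, 16)
```

In a trained network, `queries` and `keys_values` would pass through learned linear projections first, and the attention would run in both directions to realize the bidirectional interaction described above.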

15 pages, 1937 KB  
Article
Improving the Performance of Electrotactile Brain–Computer Interface Using Machine Learning Methods on Multi-Channel Features of Somatosensory Event-Related Potentials
by Marija Novičić, Olivera Djordjević, Vera Miler-Jerković, Ljubica Konstantinović and Andrej M. Savić
Sensors 2024, 24(24), 8048; https://doi.org/10.3390/s24248048 - 17 Dec 2024
Cited by 2 | Viewed by 1459
Abstract
Traditional tactile brain–computer interfaces (BCIs), particularly those based on steady-state somatosensory evoked potentials, face challenges such as lower accuracy, reduced bit rates, and the need for spatially distant stimulation points. In contrast, using transient electrical stimuli offers a promising alternative for generating tactile BCI control signals: somatosensory event-related potentials (sERPs). This study aimed to optimize the performance of a novel electrotactile BCI by employing advanced feature extraction and machine learning techniques on sERP signals for the classification of users’ selective tactile attention. The experimental protocol involved ten healthy subjects performing a tactile attention task, with EEG signals recorded from five EEG channels over the sensorimotor cortex. We employed sequential forward selection (SFS) of features from temporal sERP waveforms of all EEG channels. We systematically tested classification performance using machine learning algorithms, including logistic regression, k-nearest neighbors, support vector machines, random forests, and artificial neural networks. We explored the effects of the number of stimuli required to obtain sERP features for classification and their influence on accuracy and information transfer rate. Our approach indicated significant improvements in classification accuracy compared to previous studies. We demonstrated that the number of stimuli for sERP generation can be reduced while increasing the information transfer rate without a statistically significant decrease in classification accuracy. In the case of the support vector machine classifier, we achieved a mean accuracy over 90% for 10 electrical stimuli, while for 6 stimuli, the accuracy decreased by less than 7%, and the information transfer rate increased by 60%. This research advances methods for tactile BCI control based on event-related potentials. This work is significant since tactile stimulation is an understudied modality for BCI control, and electrically induced sERPs are the least studied control signals in reactive BCIs. Exploring and optimizing the parameters of sERP elicitation, as well as feature extraction and classification methods, is crucial for addressing the accuracy versus speed trade-off in various assistive BCI applications where the tactile modality may have added value.
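
The accuracy-versus-speed trade-off discussed above is conventionally quantified with the Wolpaw information transfer rate (ITR). The formula below is the standard one from the BCI literature; the example numbers are illustrative, not the paper's results:

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_time_s):
    """Wolpaw ITR in bits/min; assumes equiprobable classes, uniform errors, 0 < accuracy <= 1."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_time_s

# Illustrative 2-class scenario: fewer stimuli shorten the trial, so ITR can
# rise even if accuracy drops a little (hypothetical trial times).
print(round(wolpaw_itr(2, 0.90, 5.0), 2))  # 6.37  (longer trial, higher accuracy)
print(round(wolpaw_itr(2, 0.84, 3.0), 2))  # 7.31  (shorter trial, lower accuracy)
```

This is the same mechanism the abstract reports: cutting the stimulus count from 10 to 6 cost under 7% accuracy but raised the ITR by 60%, because trial duration enters the denominator directly.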

19 pages, 9860 KB  
Article
High-Density Electroencephalogram Facilitates the Detection of Small Stimuli in Code-Modulated Visual Evoked Potential Brain–Computer Interfaces
by Qingyu Sun, Shaojie Zhang, Guoya Dong, Weihua Pei, Xiaorong Gao and Yijun Wang
Sensors 2024, 24(11), 3521; https://doi.org/10.3390/s24113521 - 30 May 2024
Cited by 5 | Viewed by 2075
Abstract
In recent years, there has been a considerable amount of research on visual evoked potential (VEP)-based brain–computer interfaces (BCIs). However, it remains a big challenge to detect VEPs elicited by small visual stimuli. To address this challenge, this study employed a 256-electrode high-density electroencephalogram (EEG) cap with 66 electrodes in the parietal and occipital lobes to record EEG signals. An online BCI system based on code-modulated VEP (C-VEP) was designed and implemented with thirty targets modulated by a time-shifted binary pseudo-random sequence. A task-discriminant component analysis (TDCA) algorithm was employed for feature extraction and classification. The offline and online experiments were designed to assess EEG responses and classification performance for comparison across four different stimulus sizes at visual angles of 0.5°, 1°, 2°, and 3°. By optimizing the data length for each subject in the online experiment, information transfer rates (ITRs) of 126.48 ± 14.14 bits/min, 221.73 ± 15.69 bits/min, 258.39 ± 9.28 bits/min, and 266.40 ± 6.52 bits/min were achieved for 0.5°, 1°, 2°, and 3°, respectively. This study further compared the EEG features and classification performance of the 66-electrode layout from the 256-electrode EEG cap, the 32-electrode layout from the 128-electrode EEG cap, and the 21-electrode layout from the 64-electrode EEG cap, elucidating the pivotal importance of a higher electrode density in enhancing the performance of C-VEP BCI systems using small stimuli.
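
Time-shifted binary pseudo-random codes of the kind used in C-VEP paradigms can be sketched with a maximal-length LFSR sequence (a generic 63-bit m-sequence from the primitive polynomial x^6 + x + 1; the paper's actual code, length, and shift step are not specified here):

```python
import numpy as np

def m_sequence(length=63, seed=0b000001):
    """Binary m-sequence from a 6-stage LFSR (period 2^6 - 1 = 63).
    Shift-left Fibonacci form; feedback XORs bits 6 and 5, which realizes
    the primitive polynomial x^6 + x + 1, so any nonzero seed gives full period."""
    state = seed
    out = []
    for _ in range(length):
        out.append((state >> 5) & 1)              # emit the MSB
        fb = ((state >> 5) ^ (state >> 4)) & 1    # XOR bits 6 and 5
        state = ((state << 1) | fb) & 0x3F        # shift left, keep 6 bits
    return np.array(out, dtype=np.uint8)

seq = m_sequence()
# Each of the thirty targets flickers with a circularly shifted copy of the same code.
n_targets, shift_step = 30, 2  # shift step is an assumption for illustration
codes = np.stack([np.roll(seq, k * shift_step) for k in range(n_targets)])
print(codes.shape)  # (30, 63)
```

The near-ideal autocorrelation of an m-sequence is what lets a single template, shifted in time, discriminate all targets; the decoder (here, TDCA in the paper) matches the measured response against each shifted code.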

Review

Jump to: Research

29 pages, 1397 KB  
Review
Artificial Intelligence Approaches for EEG Signal Acquisition and Processing in Lower-Limb Motor Imagery: A Systematic Review
by Sonia Rocío Moreno-Castelblanco, Manuel Andrés Vélez-Guerrero and Mauro Callejas-Cuervo
Sensors 2025, 25(16), 5030; https://doi.org/10.3390/s25165030 - 13 Aug 2025
Cited by 1 | Viewed by 2805
Abstract
Background: Motor imagery (MI) is defined as the cognitive ability to simulate motor movements while suppressing muscular activity. The electroencephalographic (EEG) signals associated with lower limb MI have become essential in brain–computer interface (BCI) research aimed at assisting individuals with motor disabilities. Objective: This systematic review aims to evaluate methodologies for acquiring and processing EEG signals within brain–computer interface (BCI) applications to accurately identify lower limb MI. Methods: A systematic search in Scopus and IEEE Xplore identified 287 records on EEG-based lower-limb MI using artificial intelligence. Following PRISMA guidelines (non-registered), 35 studies met the inclusion criteria after screening and full-text review. Results: Among the selected studies, 85% applied machine or deep learning classifiers such as SVM, CNN, and LSTM, while 65% incorporated multimodal fusion strategies, and 50% implemented decomposition algorithms. These methods improved classification accuracy, signal interpretability, and real-time application potential. Nonetheless, methodological variability and a lack of standardization persist across studies, posing barriers to clinical implementation. Conclusions: AI-based EEG analysis effectively decodes lower-limb motor imagery. Future efforts should focus on harmonizing methods, standardizing datasets, and developing portable systems to improve neurorehabilitation outcomes. This review provides a foundation for advancing MI-based BCIs.
