Search Results (775)

Search Parameters:
Keywords = brain–computer interfaces (BCIs)

13 pages, 1546 KB  
Article
Specificity of Pairing Afferent and Efferent Activity for Inducing Neural Plasticity with an Associative Brain–Computer Interface
by Kirstine Schultz Dalgaard, Emma Rahbek Lavesen, Cecilie Sørenbye Sulkjær, Andrew James Thomas Stevenson and Mads Jochumsen
Sensors 2026, 26(2), 549; https://doi.org/10.3390/s26020549 - 14 Jan 2026
Abstract
Brain–computer interface-based (BCI) training induces neural plasticity and promotes motor recovery in stroke patients by pairing movement intentions with congruent electrical stimulation of the affected limb, eliciting somatosensory afferent feedback. However, this training can potentially be refined further to enhance rehabilitation outcomes. It is not known how specific the afferent feedback needs to be with respect to the efferent activity from the brain. This study investigated how corticospinal excitability, a marker of neural plasticity, was modulated by four types of BCI-like interventions that varied in the specificity of afferent feedback relative to the efferent activity. Fifteen able-bodied participants performed four interventions: (1) wrist extensions paired with radial nerve peripheral electrical stimulation (PES) (matching feedback), (2) wrist extensions paired with ulnar nerve PES (non-matching feedback), (3) wrist extensions paired with sham radial nerve PES (no feedback), and (4) palmar grasps paired with radial nerve PES (partially matching feedback). Each intervention consisted of 100 pairings between visually cued movements and PES. The PES was triggered at the peak of maximal negativity of the movement-related cortical potential associated with the visually cued movement. Before, immediately after, and 30 min after the intervention, transcranial magnetic stimulation-elicited motor-evoked potentials were recorded to assess corticospinal excitability. Only wrist extensions paired with radial nerve PES significantly increased corticospinal excitability, by 57 ± 49% and 65 ± 52% immediately and 30 min after the intervention, respectively, compared to the pre-intervention measurement. In conclusion, maximizing the induction of neural plasticity with an associative BCI requires that the afferent feedback be precisely matched to the efferent brain activity.
(This article belongs to the Special Issue Sensors for Biomechanical and Rehabilitation Engineering)
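
The trigger described in this abstract, delivering PES at the peak of maximal negativity of the movement-related cortical potential (MRCP), can be sketched roughly as follows; the function name, the Gaussian toy epoch, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mrcp_peak_negativity(epoch, fs):
    """Return the time (s) of maximal negativity in a baseline-corrected
    MRCP epoch; a hypothetical helper, not the paper's code."""
    idx = int(np.argmin(epoch))          # index of the most negative sample
    return idx / fs

# Toy MRCP: a slow negative deflection peaking at 1.0 s.
fs = 250
t = np.arange(0, 2, 1 / fs)
epoch = -np.exp(-((t - 1.0) ** 2) / 0.05)
print(mrcp_peak_negativity(epoch, fs))   # → 1.0
```

In the study, this timing is what triggers the peripheral electrical stimulation, so that the afferent volley arrives during peak cortical activation.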

18 pages, 1165 KB  
Review
Bridging Silence: A Scoping Review of Technological Advancements in Augmentative and Alternative Communication for Amyotrophic Lateral Sclerosis
by Filipe Gonçalves, Carla S. Fernandes, Margarida I. Teixeira, Cláudia Melo and Cátia Dias
Sclerosis 2026, 4(1), 2; https://doi.org/10.3390/sclerosis4010002 - 13 Jan 2026
Abstract
Background: Amyotrophic lateral sclerosis (ALS) progressively impairs motor function, compromising speech and limiting communication. Augmentative and alternative communication (AAC) is essential to maintain autonomy, social participation, and quality of life for people with ALS (PALS). This review maps technological developments in AAC, from low-tech tools to advanced brain–computer interface (BCI) systems. Methods: We conducted a scoping review following the PRISMA extension for scoping reviews. PubMed, Web of Science, SciELO, MEDLINE, and CINAHL were screened for studies published up to 31 August 2025. Peer-reviewed RCT, cohort, cross-sectional, and conference papers were included. Single-case studies of invasive BCI technology for ALS were also considered. Methodological quality was evaluated using JBI Critical Appraisal Tools. Results: Thirty-seven studies met the inclusion criteria. High-tech AAC, particularly eye-tracking systems and non-invasive BCIs, was the most frequently studied. Eye tracking showed high usability but was limited by fatigue, calibration demands, and ocular impairments. EMG- and EOG-based systems demonstrated promising accuracy and resilience to environmental factors, though evidence remains limited. Invasive BCIs showed the highest performance in late-stage ALS and locked-in syndrome, but with small samples and uncertain long-term feasibility. No studies focused exclusively on low-tech AAC interventions. Conclusions: AAC technologies, especially BCI, EMG, and eye-tracking systems, show promise in supporting autonomy in PALS. Implementation gaps persist, including limited attention to caregiver burden, healthcare provider training, and the real-world use of low-tech and hybrid AAC. Further research is needed to ensure that communication solutions are timely, accessible, and effective, and that they are tailored to functional status, daily needs, social participation, and interaction with the environment.

41 pages, 3213 KB  
Review
Generative Adversarial Networks for Modeling Bio-Electric Fields in Medicine: A Review of EEG, ECG, EMG, and EOG Applications
by Jiaqi Liang, Yuheng Zhou, Kai Ma, Yifan Jia, Yadan Zhang, Bangcheng Han and Min Xiang
Bioengineering 2026, 13(1), 84; https://doi.org/10.3390/bioengineering13010084 - 12 Jan 2026
Abstract
Bio-electric fields, recorded as the Electroencephalogram (EEG), Electrocardiogram (ECG), Electromyogram (EMG), and Electrooculogram (EOG), are fundamental to modern medical diagnostics but often suffer from severe data imbalance, scarcity, and environmental noise. Generative Adversarial Networks (GANs) offer a powerful, nonlinear solution to these modeling hurdles. This review presents a comprehensive survey of GAN methodologies specifically tailored for bio-electric signal processing. We first establish a theoretical foundation by detailing GAN principles, training mechanisms, and critical structural variants, including advancements in loss functions and conditional architectures. Subsequently, the paper extensively analyzes applications ranging from high-fidelity signal synthesis and noise reduction to multi-class classification. Special attention is given to clinical anomaly detection, specifically covering epilepsy, arrhythmia, depression, and sleep apnea. Furthermore, we explore emerging applications such as modal transformation, Brain–Computer Interfaces (BCIs), de-identification for privacy, and signal reconstruction. Finally, we critically evaluate the computational trade-offs and stability issues inherent in current models. The study concludes by delineating prospective research avenues, emphasizing the necessity of interdisciplinary synergy to advance personalized medicine and intelligent diagnostic systems.
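
As a minimal numeric sketch of the adversarial objective this review surveys (standard binary cross-entropy GAN losses with a non-saturating generator term), assuming made-up discriminator outputs in place of a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def bce(preds, targets):
    """Binary cross-entropy, the loss both GAN players minimize."""
    p = np.clip(preds, 1e-7, 1 - 1e-7)
    return float(-np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p)))

# Hypothetical discriminator outputs for real vs. generated signal windows.
d_real = rng.uniform(0.6, 0.9, 32)      # D should push these toward 1
d_fake = rng.uniform(0.1, 0.4, 32)      # ... and these toward 0
d_loss = bce(np.r_[d_real, d_fake], np.r_[np.ones(32), np.zeros(32)])
g_loss = bce(d_fake, np.ones(32))       # non-saturating: G wants D(fake) near 1
print(d_loss < g_loss)                  # → True (D is currently ahead of G)
```

In practice the two losses are minimized alternately by gradient descent on the discriminator and generator networks; the arithmetic above only shows how the scores enter the objective.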

41 pages, 5539 KB  
Article
Robust Covert Spatial Attention Decoding from Low-Channel Dry EEG by Hybrid AI Model
by Doyeon Kim and Jaeho Lee
AI 2026, 7(1), 9; https://doi.org/10.3390/ai7010009 - 30 Dec 2025
Abstract
Background: Decoding covert spatial attention (CSA) from dry, low-channel electroencephalography (EEG) is key for gaze-independent brain–computer interfaces (BCIs). Methods: We evaluate, on sixteen participants and three tasks (CSA, motor imagery (MI), Emotion), a four-electrode, subject-wise pipeline combining leak-safe preprocessing, multiresolution wavelets, and a compact Hybrid encoder (CNN-LSTM-MHSA) with robustness-oriented training (noise/shift/channel-dropout and supervised consistency). Results: Online, the Hybrid All-on-Wav achieved 0.695 accuracy with end-to-end latency ~2.03 s per 2.0 s decision window; the pure model inference latency is ≈185 ms on CPU and ≈11 ms on GPU. The same backbone without defenses reached 0.673, a CNN-LSTM 0.612, and a compact CNN 0.578. Offline subject-wise analyses showed a CSA median Δ balanced accuracy (BAcc) of +2.9%p (paired Wilcoxon p = 0.037; N = 16), with usability-aligned improvements (error 0.272 → 0.268; information transfer rate (ITR) 3.120 → 3.240). Effects were smaller for MI and present for Emotion. Conclusions: Even with simple hardware, compact attention-augmented models and training-time defenses support feasible, low-latency left–right CSA control above chance, suitable for embedded or laptop-class deployment.
(This article belongs to the Section Medical & Healthcare AI)
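
The information transfer rate quoted in this abstract is conventionally computed with the Wolpaw formula; here is a sketch for a binary task with a 2.0 s decision window at the reported 0.695 accuracy (the exact windowing and trial accounting the authors used may differ):

```python
import math

def itr_bits_per_min(n_classes, acc, trial_s):
    """Wolpaw information transfer rate in bits per minute."""
    if acc <= 1 / n_classes:
        return 0.0                        # at/below chance carries no information
    bits = (math.log2(n_classes) + acc * math.log2(acc)
            + (1 - acc) * math.log2((1 - acc) / (n_classes - 1)))
    return bits * 60 / trial_s

print(round(itr_bits_per_min(2, 0.695, 2.0), 2))   # → 3.38
```

which lands in the same range as the ITR of ~3.2 bits reported above.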

33 pages, 9268 KB  
Article
Gaussian Connectivity-Driven EEG Imaging for Deep Learning-Based Motor Imagery Classification
by Alejandra Gomez-Rivera, Diego Fabian Collazos-Huertas, David Cárdenas-Peña, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Sensors 2026, 26(1), 227; https://doi.org/10.3390/s26010227 - 29 Dec 2025
Abstract
Electroencephalography (EEG)-based motor imagery (MI) brain–computer interfaces (BCIs) hold considerable potential for applications in neuro-rehabilitation and assistive technologies. Yet, their development remains constrained by challenges such as low spatial resolution, vulnerability to noise and artifacts, and pronounced inter-subject variability. Conventional approaches, including common spatial patterns (CSP) and convolutional neural networks (CNNs), often exhibit limited robustness, weak generalization, and reduced interpretability. To overcome these limitations, we introduce EEG-GCIRNet, a Gaussian connectivity-driven EEG imaging representation network coupled with a regularized LeNet architecture for MI classification. Our method integrates raw EEG signals with topographic maps derived from functional connectivity into a unified variational autoencoder framework. The network is trained with a multi-objective loss that jointly optimizes reconstruction fidelity, classification accuracy, and latent space regularization. The model’s interpretability is enhanced through its variational autoencoder design, allowing for qualitative validation of its learned representations. Experimental evaluations demonstrate that EEG-GCIRNet outperforms state-of-the-art methods, achieving the highest average accuracy (81.82%) and lowest variability (±10.15) in binary classification. Most notably, it effectively mitigates BCI illiteracy by completely eliminating the “Bad” performance group (<60% accuracy), yielding substantial gains of ∼22% for these challenging users. Furthermore, the framework demonstrates good scalability in complex 5-class scenarios, achieving competitive classification accuracy (75.20% ± 4.63) with notable statistical superiority (p = 0.002) against advanced baselines. Extensive interpretability analyses, including analysis of the reconstructed connectivity maps, latent space visualizations, Grad-CAM++ and functional connectivity patterns, confirm that the model captures genuine neurophysiological mechanisms, correctly identifying integrated fronto-centro-parietal networks in high performers and compensatory midline circuits in mid-performers. These findings suggest that EEG-GCIRNet provides a robust and interpretable end-to-end framework for EEG-based BCIs, advancing the development of reliable neurotechnology for rehabilitation and assistive applications.
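
A multi-objective loss of the kind described (reconstruction + classification + latent regularization) can be sketched as below; the weights, shapes, and function name are invented for illustration and are not taken from EEG-GCIRNet.

```python
import numpy as np

def multi_objective_loss(x, x_rec, logits, y, mu, logvar, a=1.0, b=1.0, c=0.1):
    """Composite loss: MSE reconstruction + cross-entropy classification
    + KL regularization of the latent space (weights a, b, c are invented)."""
    rec = np.mean((x - x_rec) ** 2)                 # reconstruction fidelity
    p = np.exp(logits - logits.max())               # softmax over class logits
    p /= p.sum()
    ce = -np.log(p[y])                              # classification accuracy term
    kl = -0.5 * np.mean(1.0 + logvar - mu ** 2 - np.exp(logvar))  # latent prior
    return a * rec + b * ce + c * kl

x = np.ones(64)
mu, logvar = np.zeros(8), np.zeros(8)               # latent already matches prior
loss = multi_objective_loss(x, x, np.array([8.0, 0.0]), 0, mu, logvar)
print(loss < 0.01)  # → True: perfect reconstruction, confident correct class
```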

19 pages, 5214 KB  
Article
TF-Denoiser: A Time-Frequency Domain Joint Method for EEG Artifact Removal
by Yinghui Meng, Changxiang Yuan, Wen Feng, Duan Li, Jiaofen Nan, Yongquan Xia, Fubao Zhu and Jiaoshuai Song
Electronics 2026, 15(1), 132; https://doi.org/10.3390/electronics15010132 - 27 Dec 2025
Abstract
Electroencephalography (EEG) signal acquisition is often affected by artifacts, challenging applications such as brain disease diagnosis and Brain-Computer Interfaces (BCIs). This paper proposes TF-Denoiser, a deep learning model using a joint time-frequency optimisation strategy for artifact removal. The proposed method first employs a position embedding module to process EEG data, enhancing temporal feature representation. Then, the EEG signals are transformed from the time domain to the complex frequency domain via Fourier transform, and the real and imaginary parts are denoised separately. The multi-attention denoising module (MA-denoise) is used to extract both local and global features of EEG signals. Finally, joint optimisation of time-frequency features is performed to improve artifact removal performance. Experimental results demonstrate that TF-Denoiser outperforms the compared methods in terms of correlation coefficient (CC), relative root mean square error (RRMSE), and signal-to-noise ratio (SNR) on electromyography (EMG) and electrooculography (EOG) datasets. It effectively reduces ocular and muscular artifacts and improves EEG denoising robustness and system stability.
(This article belongs to the Section Bioelectronics)
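
The core idea of denoising the real and imaginary spectral parts separately can be illustrated with a toy soft-threshold stand-in for the paper's learned MA-denoise module; the threshold, signal, and noise level below are invented:

```python
import numpy as np

def freq_domain_denoise(x, thresh):
    """Toy stand-in for a frequency-branch denoiser: transform to the complex
    spectrum, shrink real and imaginary parts separately, then invert."""
    def shrink(v):                                   # soft-thresholding
        return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)
    X = np.fft.rfft(x)
    return np.fft.irfft(shrink(X.real) + 1j * shrink(X.imag), n=len(x))

fs = 256
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 10 * t)                   # 10 Hz alpha-band tone
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(fs)
den = freq_domain_denoise(noisy, thresh=5.0)
print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))  # → True
```

The learned model replaces the fixed threshold with attention-based mappings on each spectral part, but the transform-shrink-invert skeleton is the same.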

21 pages, 2686 KB  
Article
A Deep Learning Approach to Classifying User Performance in BCI Gaming
by Aimilia Ntetska, Anastasia Mimou, Katerina D. Tzimourta, Pantelis Angelidis and Markos G. Tsipouras
Electronics 2025, 14(24), 4974; https://doi.org/10.3390/electronics14244974 - 18 Dec 2025
Abstract
Brain–Computer Interface (BCI) systems are rapidly evolving and increasingly integrated into interactive environments such as gaming and Virtual/Augmented Reality. In such applications, user adaptability and engagement are critical. This study applies deep learning to predict user performance in a 3D BCI-controlled game using pre-game Motor Imagery (MI) electroencephalographic (EEG) recordings. A total of 72 EEG recordings were collected from 36 participants, 17 using the Muse 2 headset and 19 using the Emotiv Insight device, during left and right hand MI tasks. The signals were preprocessed and transformed into time–frequency spectrograms, which served as inputs to a custom convolutional neural network (CNN) designed to classify users into three performance levels: low, medium, and high. The model achieved classification accuracies of 83% and 95% on Muse 2 and Emotiv Insight data, respectively, at the epoch level, and 75% and 84% at the subject level, using leave-one-subject-out cross-validation (LOSO-CV). These findings demonstrate the feasibility of using deep learning on MI EEG data to forecast user performance in BCI gaming, enabling adaptive systems that enhance both usability and user experience.
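
The time–frequency inputs described here are typically short-time Fourier magnitude spectrograms; a minimal sketch follows (the window/hop sizes and sampling rate are assumptions, not the paper's settings):

```python
import numpy as np

def stft_spectrogram(x, win=64, hop=32):
    """Hann-windowed magnitude spectrogram (freq bins x time frames),
    the kind of 2-D image a CNN classifier consumes."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

fs = 128                                            # assumed sampling rate
x = np.sin(2 * np.pi * 12 * np.arange(2 * fs) / fs)  # 12 Hz mu-band tone
S = stft_spectrogram(x)
print(S.shape)   # → (33, 7): 33 frequency bins x 7 overlapping frames
```

With a 64-sample window at 128 Hz the bin spacing is 2 Hz, so the 12 Hz tone lands exactly in bin 6, which is where the CNN would see the mu-band energy.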

19 pages, 6764 KB  
Article
A Dual-Validation Framework for Temporal Robustness Assessment in Brain–Computer Interfaces for Motor Imagery
by Mohamed A. Hanafy, Saykhun Yusufjonov, Payman SharafianArdakani, Djaykhun Yusufjonov, Madan M. Rayguru and Dan O. Popa
Technologies 2025, 13(12), 595; https://doi.org/10.3390/technologies13120595 - 18 Dec 2025
Abstract
Brain–computer interfaces using motor imagery (MI-BCIs) offer a promising noninvasive communication pathway between humans and engineered equipment such as robots. However, for MI-BCIs based on electroencephalography (EEG), the reliability of the interface across recording sessions is limited by temporal non-stationarity. Overcoming this barrier is critical to translating MI-BCIs from controlled laboratory environments to practical use. In this paper, we present a comprehensive dual-validation framework to rigorously evaluate the temporal robustness of EEG signals of an MI-BCI. We collected data from six participants performing four motor imagery tasks (left/right hand and foot). Features were extracted using Common Spatial Patterns, and ten machine learning classifiers were assessed within a unified pipeline. Our method integrates within-session evaluation (stratified K-fold cross-validation) with cross-session testing (bidirectional train/test), complemented by stability metrics and performance heterogeneity assessment. Findings reveal minimal performance loss between conditions, with an average accuracy drop of just 2.5%. The AdaBoost classifier achieved the highest within-session performance (84.0% system accuracy, F1-score: 83.8%/80.9% for hand/foot), while the K-nearest neighbors (KNN) classifier demonstrated the best cross-session robustness (81.2% system accuracy, F1-score: 80.5%/80.2% for hand/foot, 0.663 robustness score). This study shows that robust performance across sessions is attainable for MI-BCI evaluation, supporting the pathway toward reliable, real-world clinical deployment.
(This article belongs to the Collection Selected Papers from the PETRA Conference Series)
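
The bidirectional cross-session protocol (train on session A, test on session B, and vice versa) can be sketched with a toy nearest-centroid classifier standing in for the paper's CSP + classifier pipeline; the data, the session-to-session shift, and all names are synthetic:

```python
import numpy as np

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    """Train on one session, test on the other; nearest-centroid is a
    stand-in for the study's ten evaluated classifiers."""
    cents = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
    pred = [min(cents, key=lambda c: np.linalg.norm(x - cents[c])) for x in Xte]
    return float(np.mean(np.array(pred) == yte))

rng = np.random.default_rng(0)

def session(shift):
    """Two well-separated classes plus a small whole-session mean shift,
    mimicking temporal non-stationarity between recording days."""
    X = np.r_[rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))] + shift
    return X, np.r_[np.zeros(50), np.ones(50)]

(X1, y1), (X2, y2) = session(0.0), session(0.3)
fwd = nearest_centroid_acc(X1, y1, X2, y2)   # session 1 → session 2
bwd = nearest_centroid_acc(X2, y2, X1, y1)   # session 2 → session 1
print(fwd > 0.8 and bwd > 0.8)               # → True for this synthetic data
```

Averaging the two directions, plus within-session K-fold scores, is what the dual-validation framework reports as its robustness summary.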

16 pages, 2128 KB  
Article
Robust Motor Imagery–Brain–Computer Interface Classification in Signal Degradation: A Multi-Window Ensemble Approach
by Dong-Geun Lee and Seung-Bo Lee
Biomimetics 2025, 10(12), 832; https://doi.org/10.3390/biomimetics10120832 - 12 Dec 2025
Abstract
Electroencephalography (EEG)-based brain–computer interface (BCI) mimics the brain’s intrinsic information-processing mechanisms by translating neural oscillations into actionable commands. In motor imagery (MI) BCI, imagined movements evoke characteristic patterns over the sensorimotor cortex, forming a biomimetic channel through which internal motor intentions are decoded. However, this biomimetic interaction is highly vulnerable to signal degradation, particularly in mobile or low-resource environments where low sampling frequencies obscure these MI-related oscillations. To address this limitation, we propose a robust MI classification framework that integrates spatial, spectral, and temporal dynamics through a filter bank common spatial pattern with time segmentation (FBCSP-TS). This framework classifies motor imagery tasks into four classes (left hand, right hand, foot, and tongue), segments EEG signals into overlapping time domains, and extracts frequency-specific spatial features across multiple subbands. Segment-level predictions are combined via soft voting, reflecting the brain’s distributed integration of information and enhancing resilience to transient noise and localized artifacts. Experiments performed on BCI Competition IV datasets 2a (250 Hz) and 1 (100 Hz) demonstrate that FBCSP-TS outperforms CSP and FBCSP. A paired t-test confirms that accuracy at 100 Hz is not significantly different from that at 250 Hz (p > 0.05), supporting the robustness of the proposed framework. Optimal temporal parameters (window length = 3.5 s, moving length = 0.5 s) further stabilize transient-signal capture and improve the signal-to-noise ratio (SNR). External validation yielded a mean accuracy of 0.809 ± 0.092 and Cohen’s kappa of 0.619 ± 0.184, confirming strong generalizability. By preserving MI-relevant neural patterns under degraded conditions, this framework advances practical, biomimetic BCI suitable for wearable and real-world deployment.
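
Segment-level soft voting, as used in FBCSP-TS, simply averages per-window class probabilities before taking the argmax; the probabilities below are made up to show how one noisy window gets outvoted:

```python
import numpy as np

def soft_vote(segment_probs):
    """Average class probabilities across overlapping segments, then pick
    the class with the highest mean probability (soft voting)."""
    return int(np.argmax(np.mean(segment_probs, axis=0)))

# Three overlapping windows, four MI classes (left, right, foot, tongue).
probs = np.array([[0.40, 0.30, 0.20, 0.10],
                  [0.25, 0.45, 0.20, 0.10],   # a noisy window disagrees
                  [0.50, 0.25, 0.15, 0.10]])
print(soft_vote(probs))   # → 0: the left-hand class wins despite the outlier
```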

26 pages, 5681 KB  
Article
Physiological Artifact Suppression in EEG Signals Using an Efficient Multi-Scale Depth-Wise Separable Convolution and Variational Attention Deep Learning Model for Improved Neurological Health Signal Quality
by Vandana Akshath Raj, Tejasvi Parupudi, Vishnumurthy Kedlaya K, Ananthakrishna Thalengala and Subramanya G. Nayak
Technologies 2025, 13(12), 578; https://doi.org/10.3390/technologies13120578 - 9 Dec 2025
Abstract
Artifacts remain a major challenge in electroencephalogram (EEG) recordings, often degrading the accuracy of clinical diagnosis, brain computer interface (BCI) systems, and cognitive research. Although recent deep learning approaches have advanced EEG denoising, most still struggle to model long-range dependencies, maintain computational efficiency, and generalize to unseen artifact types. To address these challenges, this study proposes MDSC-VA, an efficient denoising framework that integrates multi-scale (M) depth-wise separable convolution (DSConv), variational autoencoder-based (VAE) latent encoding, and a multi-head self-attention mechanism. This unified architecture effectively balances denoising accuracy and model complexity while enhancing generalization to unseen artifact types. Comprehensive evaluations on three open-source EEG datasets, including EEGdenoiseNet, a Motion Artifact Contaminated Multichannel EEG dataset, and the PhysioNet EEG Motor Movement/Imagery dataset, demonstrate that MDSC-VA consistently outperforms state-of-the-art methods, achieving a higher signal-to-noise ratio (SNR), lower relative root mean square error (RRMSE), and stronger correlation coefficient (CC) values. Moreover, the model preserved over 99% of the dominant neural frequency band power, validating its ability to retain physiologically relevant rhythms. These results highlight the potential of MDSC-VA for reliable clinical EEG interpretation, real-time BCI systems, and advancement towards sustainable healthcare technologies in line with SDG-3 (Good Health and Well-Being).
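
The three evaluation metrics named here (SNR, RRMSE, CC) are standard and easy to state in code; the test signal below is synthetic:

```python
import numpy as np

def denoise_metrics(clean, denoised):
    """SNR (dB), relative RMSE, and correlation coefficient: the three
    scores commonly used to compare EEG denoisers."""
    err = denoised - clean
    snr = 10 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))
    rrmse = np.sqrt(np.mean(err ** 2) / np.mean(clean ** 2))
    cc = np.corrcoef(clean, denoised)[0, 1]
    return snr, rrmse, cc

t = np.arange(1000) / 250.0
clean = np.sin(2 * np.pi * 10 * t)                 # 10 Hz reference rhythm
denoised = clean + 0.05 * np.random.default_rng(0).standard_normal(1000)
snr, rrmse, cc = denoise_metrics(clean, denoised)
print(snr > 20 and rrmse < 0.15 and cc > 0.99)     # → True
```

Higher SNR and CC and lower RRMSE all indicate that the denoiser output stays closer to the clean reference, which is the direction of improvement the abstract reports.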

24 pages, 4080 KB  
Article
MCRBM–CNN: A Hybrid Deep Learning Framework for Robust SSVEP Classification
by Depeng Gao, Yuhang Zhao, Jieru Zhou, Haifei Zhang and Hongqi Li
Sensors 2025, 25(24), 7456; https://doi.org/10.3390/s25247456 - 8 Dec 2025
Abstract
The steady-state visual evoked potential (SSVEP), a non-invasive EEG modality, is a prominent approach for brain–computer interfaces (BCIs) due to its high signal-to-noise ratio and minimal user training. However, its practical utility is often hampered by susceptibility to noise, artifacts, and concurrent brain activities, complicating signal decoding. To address this, we propose a novel hybrid deep learning model that integrates a multi-channel restricted Boltzmann machine (RBM) with a convolutional neural network (CNN). The framework comprises two main modules: a feature extraction module and a classification module. The former employs a multi-channel RBM to learn latent feature representations from multi-channel EEG data in an unsupervised manner, effectively capturing inter-channel correlations to enhance feature discriminability. The latter leverages convolutional operations to further extract spatiotemporal features, constructing a deep discriminative model for the automatic recognition of SSVEP signals. Comprehensive evaluations on multiple public datasets demonstrate that our proposed method achieves competitive performance compared to various benchmarks, particularly exhibiting superior effectiveness and robustness in short-time window scenarios.
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—3rd Edition)
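
The unsupervised RBM pre-training at the heart of this model rests on contrastive divergence; a minimal CD-1 weight update for a Bernoulli RBM (biases omitted for brevity, shapes invented) looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

def cd1_step(v0, W, lr=0.1):
    """One contrastive-divergence (CD-1) update for a Bernoulli RBM:
    the unsupervised feature-learning step behind RBM-based extractors."""
    def sig(z):
        return 1 / (1 + np.exp(-z))
    h0 = sig(v0 @ W)                                   # hidden activations
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sig(h_sample @ W.T)                           # one-step reconstruction
    h1 = sig(v1 @ W)
    # Positive-phase minus negative-phase correlations, averaged over the batch.
    return W + lr * (v0.T @ h0 - v1.T @ h1) / len(v0)

W = rng.normal(0, 0.01, (8, 4))                        # 8 visible x 4 hidden
v = (rng.random((16, 8)) < 0.5).astype(float)          # toy binary batch
W = cd1_step(v, W)
print(W.shape)   # → (8, 4)
```

The hidden activations learned this way then serve as the feature representation passed to the CNN classification module.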

20 pages, 4204 KB  
Systematic Review
A Multidimensional Benchmark of Public EEG Datasets for Driver State Monitoring in Brain–Computer Interfaces
by Sirine Ammar, Nesrine Triki, Mohamed Karray and Mohamed Ksantini
Sensors 2025, 25(24), 7426; https://doi.org/10.3390/s25247426 - 6 Dec 2025
Abstract
Electroencephalography (EEG)-based brain-computer interfaces (BCIs) hold significant potential for enhancing driver safety through real-time monitoring of cognitive and affective states. However, the development of reliable BCI systems for Advanced Driver Assistance Systems (ADAS) depends on the availability of high-quality, publicly accessible EEG datasets collected during driving tasks. Existing datasets lack standardized parameters and contain demographic biases, which undermine their reliability and prevent the development of robust systems. This study presents a multidimensional benchmark analysis of seven publicly available EEG driving datasets. We compare these datasets across multiple dimensions, including task design, modality integration, demographic representation, accessibility, and reported model performance. This benchmark synthesizes existing literature without conducting new experiments. Our analysis reveals critical gaps, including significant age and gender biases, overreliance on simulated environments, insufficient affective monitoring, and restricted data accessibility. These limitations hinder real-world applicability and reduce ADAS performance. To address these gaps and facilitate the development of generalizable BCI systems, this study provides a structured, quantitative benchmark analysis of publicly available driving EEG datasets, suggesting criteria and recommendations for future dataset design and use. Additionally, we emphasize the need for balanced participant distributions, standardized emotional annotation, and open data practices.
(This article belongs to the Section Cross Data)

22 pages, 12089 KB  
Article
A Brain–Computer Interface for Control of a Virtual Prosthetic Hand
by Ángel del Rosario Zárate-Ruiz, Manuel Arias-Montiel and Christian Eduardo Millán-Hernández
Computation 2025, 13(12), 287; https://doi.org/10.3390/computation13120287 - 6 Dec 2025
Abstract
Brain–computer interfaces (BCIs) have emerged as an option for improving communication between humans and technological devices. This article presents a BCI based on the steady-state visual evoked potentials (SSVEP) paradigm and low-cost hardware to control a virtual prototype of a robotic hand. An LED-based device is proposed as a visual stimulator, and the OpenBCI Ultracortex biosensing headset is used to acquire the electroencephalographic (EEG) signals for the BCI. The processing and classification of the obtained signals are described. Classifiers based on artificial neural networks (ANNs) and support vector machines (SVMs) are compared, demonstrating that the SVM-based classifiers outperform those based on ANNs. The classified EEG signals are used to implement different movements in a virtual prosthetic hand using a co-simulation approach, showing the feasibility of implementing BCIs in the control of robotic hands.
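
Independent of the SVM/ANN classifiers compared in the paper, SSVEP decoding amounts to identifying which stimulation frequency dominates the EEG spectrum; a naive spectral-peak baseline (the frequencies, signal, and noise below are synthetic) illustrates the principle:

```python
import numpy as np

def ssvep_target(x, fs, stim_freqs):
    """Pick the LED flicker frequency with the largest spectral magnitude;
    a baseline detector, not the paper's trained classifiers."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    powers = [spec[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return stim_freqs[int(np.argmax(powers))]

fs = 250
t = np.arange(2 * fs) / fs                       # 2 s of data, 0.5 Hz bins
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 15 * t) + 0.5 * rng.standard_normal(len(t))
print(ssvep_target(x, fs, [10, 12, 15]))         # → 15
```

Each detected frequency is then mapped to a hand movement command, which is where trained classifiers earn their keep over this naive peak picker in noisy, short-window conditions.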

21 pages, 866 KB  
Review
Using VR and BCI to Improve Communication Between a Cyber-Physical System and an Operator in the Industrial Internet of Things
by Adrianna Piszcz, Izabela Rojek, Nataša Náprstková and Dariusz Mikołajewski
Appl. Sci. 2025, 15(23), 12805; https://doi.org/10.3390/app152312805 - 3 Dec 2025
Abstract
The Industry 5.0 paradigm places humans and the environment at the center. New communication methods based on virtual reality (VR) and brain–computer interfaces (BCIs) can improve system–operator interaction in multimedia communications, providing immersive environments where operators can more intuitively manage complex systems. The [...] Read more.
The Industry 5.0 paradigm places humans and the environment at the center. New communication methods based on virtual reality (VR) and brain–computer interfaces (BCIs) can improve system–operator interaction in multimedia communications, providing immersive environments where operators can more intuitively manage complex systems. The study was conducted through a systematic literature review combined with bibliometric and thematic analyses to map the current landscape of VR-BCI communication frameworks in IIoT environments. The methodology employed included structured resource selection, comparative assessment of interaction modalities, and cross-domain synthesis to identify patterns, gaps, and emerging technology trends. Key challenges identified include reliable signal processing, real-time integration of neural data with immersive interfaces, and the scalability of VR-BCI solutions in industrial applications. The study concludes by outlining future research directions focused on hybrid multimodal interfaces, adaptive cognition-based automation, and standardized protocols for evaluating human–cyber-physical system communication. VR interfaces enable operators to visualize and interact with network data in 3D, improving their monitoring and troubleshooting in real time. By integrating BCI technology, operators can control systems using neural signals, reducing the need for physical input devices and streamlining operation (including touchless technology). BCI-based protocols enable touchless control, which can be particularly useful in situations where operators must multitask, bypassing traditional input methods such as keyboards or mice. VR environments can simulate network conditions, allowing operators to practice and refine their responses to potential problems in a controlled, safe environment. 
Combining VR with BCI allows for the creation of adaptive interfaces that respond to the operator’s cognitive load, adjusting the complexity of the displayed information based on real-time neural feedback. This integration can lead to more personalized and effective training programs for operators, enhancing their skills and decision-making. VR and BCI-based solutions also have the potential to reduce operator fatigue by enabling more natural and intuitive interaction with complex systems. The use of these advanced technologies in multimedia telecommunications can translate into more efficient, precise, and user-friendly system management, ultimately improving service quality. Full article
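The adaptive-interface idea described above can be sketched in a few lines. The snippet below is a hypothetical illustration, not an implementation from any reviewed study: it derives a cognitive-load index from EEG band powers (a frontal theta / parietal alpha ratio is one commonly used proxy), smooths it to avoid flickering, and maps it to a display-complexity level. All band choices, thresholds, and the smoothing factor are illustrative assumptions.

```python
# Hypothetical sketch: adapting displayed detail in a VR dashboard to an
# EEG-derived cognitive-load index. Thresholds and smoothing are assumed.

def load_index(theta_power: float, alpha_power: float) -> float:
    """Theta/alpha power ratio; higher values suggest higher cognitive load."""
    return theta_power / max(alpha_power, 1e-9)

def detail_level(index: float, low: float = 1.0, high: float = 2.0) -> str:
    """Map the load index to a coarse display-complexity setting."""
    if index < low:
        return "full"      # spare capacity: show all panels
    if index < high:
        return "reduced"   # moderate load: hide secondary metrics
    return "minimal"       # high load: show only critical alarms

class AdaptiveDisplay:
    """Exponentially smooth the index so the interface does not flicker."""
    def __init__(self, smoothing: float = 0.2):
        self.smoothing = smoothing
        self.smoothed = None

    def update(self, theta_power: float, alpha_power: float) -> str:
        idx = load_index(theta_power, alpha_power)
        if self.smoothed is None:
            self.smoothed = idx
        else:
            self.smoothed = (self.smoothing * idx
                             + (1 - self.smoothing) * self.smoothed)
        return detail_level(self.smoothed)
```

In practice the thresholds would need per-operator calibration, since absolute EEG band powers vary widely between individuals and recording sessions.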
(This article belongs to the Special Issue Brain-Computer Interfaces: Development, Applications, and Challenges)
15 pages, 1245 KB  
Article
Comparison of Classifier Calibration Schemes for Movement Intention Detection in Individuals with Cerebral Palsy for Inducing Plasticity with Brain–Computer Interfaces
by Mads Jochumsen, Cecilie Sørenbye Sulkjær and Kirstine Schultz Dalgaard
Sensors 2025, 25(23), 7347; https://doi.org/10.3390/s25237347 - 2 Dec 2025
Abstract
Brain–computer interfaces (BCIs) have successfully been used for stroke rehabilitation by pairing movement intentions with, e.g., functional electrical stimulation. It has also been proposed that BCI training is beneficial for people with cerebral palsy (CP). To develop BCI training for CP patients, movement intentions must be detected from single-trial EEG. The aim of this study was to detect movement intentions in CP patients and able-bodied participants using different classification scenarios to show the technical feasibility of BCI training in CP patients. Five CP patients and fifteen able-bodied participants performed wrist extensions and ankle dorsiflexions while EEG was recorded. All but one participant repeated the experiment on 1–2 additional days. The EEG was divided into movement intention and idle epochs that were classified with a random forest classifier using temporal, spectral, and template matching features to estimate movement intention detection performance. When calibrating the classifier on data from the same day and participant, 75% and 85% classification accuracies were obtained for CP and able-bodied participants, respectively. The performance dropped by 5–15 percentage points when training the classifier on data from other days and other participants. In conclusion, movement intentions can be detected from single-trial EEG, indicating the technical feasibility of using BCIs for motor training in people with CP. Full article
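The classification step described in the abstract can be sketched as follows. This is a minimal illustration under assumed parameters (sampling rate, window length, feature set), not the paper's pipeline: epochs are reduced to simple temporal features (mean, slope, peak negativity, capturing the movement-related cortical potential) and spectral band powers, then classified as movement intention versus idle with a random forest. The paper additionally uses template-matching features, which are omitted here. The data below are synthetic stand-ins, with a slow negative drift added to the "movement" epochs to mimic the cortical potential.

```python
# Minimal sketch of movement-intention vs. idle classification from
# single-channel EEG epochs. Sampling rate, features, and data are
# illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 250  # assumed sampling rate (Hz)

def extract_features(epoch: np.ndarray) -> np.ndarray:
    """Temporal (mean, slope, peak negativity) and spectral (band power) features."""
    t = np.arange(len(epoch)) / FS
    slope = np.polyfit(t, epoch, 1)[0]          # drift of the potential
    freqs, psd = welch(epoch, fs=FS, nperseg=min(len(epoch), 128))
    delta = psd[(freqs >= 0.5) & (freqs < 4)].sum()   # delta band power
    mu = psd[(freqs >= 8) & (freqs < 13)].sum()       # mu band power
    return np.array([epoch.mean(), slope, epoch.min(), delta, mu])

rng = np.random.default_rng(0)
# Synthetic epochs (2 s at 250 Hz); "movement" epochs carry a slow
# negative drift mimicking the movement-related cortical potential.
idle = [rng.normal(0, 1, 500) for _ in range(40)]
move = [rng.normal(0, 1, 500) - np.linspace(0, 5, 500) for _ in range(40)]
X = np.array([extract_features(e) for e in idle + move])
y = np.array([0] * 40 + [1] * 40)

# Train on every other epoch, test on the rest (same-session scenario).
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[::2], y[::2])
accuracy = clf.score(X[1::2], y[1::2])
```

The reported drop in accuracy when calibrating on other days or other participants would correspond here to training and testing on epochs drawn from different recording sessions, where feature distributions shift.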
