Search Results (68)

Search Parameters:
Keywords = steady-state visual evoked potential signals

27 pages, 3728 KB  
Article
Improved SSVEP Classification Through EEG Artifact Reduction Using Auxiliary Sensors
by Marcin Kołodziej, Andrzej Majkowski and Przemysław Wiszniewski
Sensors 2026, 26(3), 917; https://doi.org/10.3390/s26030917 - 31 Jan 2026
Viewed by 253
Abstract
Steady-state visual evoked potentials (SSVEPs) are one of the key paradigms used in brain–computer interface (BCI) systems. Their performance, however, is substantially degraded by EEG artifacts of muscular, motion-related, and ocular origin. This issue is particularly pronounced in individuals exhibiting increased facial muscle tension or involuntary eye movements. The aim of this study was to develop and evaluate an EEG artifact reduction method based on auxiliary channels, including central (Cz), frontal (Fp1), electrooculographic (HEOG), and muscular electrodes (neck, cheek, jaw). Signals from these channels were used to model the physical sources of interference recorded concurrently with occipital brain activity (O1, O2, Oz). EEG signal cleaning was performed using linear regression in 1-s windows, followed by frequency-domain analysis to extract features related to stimulation frequencies and SSVEP classification using SVM and CNN algorithms. The experiment involved three visual stimulation frequencies (7, 8, and 9 Hz) generated by LEDs and the recording of controlled facial and jaw-related artifacts. Experiments conducted on 12 participants demonstrated a 9% increase in classification accuracy after artifact removal. Further analysis indicated that the Cz and jaw channels contributed most significantly to effective artifact suppression. The results confirm that the use of auxiliary channels substantially improves EEG signal quality and enhances the reliability of BCI systems under real-world conditions. Full article
(This article belongs to the Special Issue Advances in EEG Sensors: Research and Applications)
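The regression-based cleaning step described in the abstract above can be sketched as follows. This is a minimal illustration on synthetic data; the sampling rate, mixing weights, and auxiliary-channel count are assumptions for demonstration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 256                      # sampling rate (Hz), assumed for illustration
t = np.arange(fs) / fs        # one 1-s window, as in the paper's cleaning step

# Synthetic 8 Hz SSVEP at an occipital channel plus artifact leakage
ssvep = np.sin(2 * np.pi * 8 * t)
aux = rng.standard_normal((fs, 3))        # auxiliary channels (e.g. Cz, HEOG, jaw)
leak = aux @ np.array([0.8, 0.5, 0.3])    # artifact mixed into the EEG
contaminated = ssvep + leak

# Least-squares fit of the auxiliary channels to the contaminated EEG,
# then subtraction of the modelled artifact (one fit per 1-s window)
w, *_ = np.linalg.lstsq(aux, contaminated, rcond=None)
cleaned = contaminated - aux @ w

print(np.mean((cleaned - ssvep) ** 2) < np.mean((contaminated - ssvep) ** 2))
```

Because the auxiliary channels record the artifact sources directly, the fitted weights capture the leakage into the occipital channel and the residual is much closer to the underlying SSVEP.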

24 pages, 4080 KB  
Article
MCRBM–CNN: A Hybrid Deep Learning Framework for Robust SSVEP Classification
by Depeng Gao, Yuhang Zhao, Jieru Zhou, Haifei Zhang and Hongqi Li
Sensors 2025, 25(24), 7456; https://doi.org/10.3390/s25247456 - 8 Dec 2025
Viewed by 574
Abstract
The steady-state visual evoked potential (SSVEP), a non-invasive EEG modality, is a prominent approach for brain–computer interfaces (BCIs) due to its high signal-to-noise ratio and minimal user training. However, its practical utility is often hampered by susceptibility to noise, artifacts, and concurrent brain activities, complicating signal decoding. To address this, we propose a novel hybrid deep learning model that integrates a multi-channel restricted Boltzmann machine (RBM) with a convolutional neural network (CNN). The framework comprises two main modules: a feature extraction module and a classification module. The former employs a multi-channel RBM to unsupervisedly learn latent feature representations from multi-channel EEG data, effectively capturing inter-channel correlations to enhance feature discriminability. The latter leverages convolutional operations to further extract spatiotemporal features, constructing a deep discriminative model for the automatic recognition of SSVEP signals. Comprehensive evaluations on multiple public datasets demonstrate that our proposed method achieves competitive performance compared to various benchmarks, particularly exhibiting superior effectiveness and robustness in short-time window scenarios. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—3rd Edition)
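The paper's multi-channel RBM is a custom model, but the unsupervised feature-extraction stage it describes can be sketched with scikit-learn's `BernoulliRBM` as a stand-in; the data shapes and hyperparameters below are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(5)
# Toy EEG-like trials scaled to [0, 1], as BernoulliRBM expects
X = rng.random((100, 32))

# Unsupervised RBM learns latent feature representations;
# in the paper these feed a CNN classification module
rbm = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=10, random_state=0)
H = rbm.fit_transform(X)
print(H.shape)                 # latent features, one row per trial
```

In the hybrid framework, such latent features would then pass through convolutional layers that extract spatiotemporal structure before classification.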

22 pages, 12089 KB  
Article
A Brain–Computer Interface for Control of a Virtual Prosthetic Hand
by Ángel del Rosario Zárate-Ruiz, Manuel Arias-Montiel and Christian Eduardo Millán-Hernández
Computation 2025, 13(12), 287; https://doi.org/10.3390/computation13120287 - 6 Dec 2025
Viewed by 1407
Abstract
Brain–computer interfaces (BCIs) have emerged as an option that allows better communication between humans and some technological devices. This article presents a BCI based on the steady-state visual evoked potentials (SSVEP) paradigm and low-cost hardware to control a virtual prototype of a robotic hand. A LED-based device is proposed as a visual stimulator, and the Open BCI Ultracortex Biosensing Headset is used to acquire the electroencephalographic (EEG) signals for the BCI. The processing and classification of the obtained signals are described. Classifiers based on artificial neural networks (ANNs) and support vector machines (SVMs) are compared, demonstrating that the classifiers based on SVM have superior performance to those based on ANN. The classified EEG signals are used to implement different movements in a virtual prosthetic hand using a co-simulation approach, showing the feasibility of BCI being implemented in the control of robotic hands. Full article

15 pages, 3074 KB  
Article
An SSVEP-Based Brain–Computer Interface Device for Wheelchair Control Integrated with a Speech Aid System
by Abdulrahman Mohammed Alnour Ahmed, Yousef Al-Junaidi, Abdulaziz Al-Tayar, Ammar Qaid and Khurram Karim Qureshi
Eng 2025, 6(12), 343; https://doi.org/10.3390/eng6120343 - 1 Dec 2025
Viewed by 741
Abstract
This paper presents a brain–computer interface (BCI) system based on steady-state visual evoked potential (SSVEP) for controlling an electric wheelchair integrated with a speech aid module. The system targets individuals with severe motor disabilities, such as amyotrophic lateral sclerosis (ALS) or multiple sclerosis (MS), who may experience limited mobility and speech impairments. EEG signals from the occipital lobe are recorded using wet electrodes and classified using deep learning models, including ResNet50, InceptionV4, and VGG16, as well as Canonical Correlation Analysis (CCA). The ResNet50 model demonstrated the best performance for nine-class SSVEP signal classification, achieving an offline accuracy of 81.25% and a real-time performance of 72.44%, thereby clarifying that these results correspond to SSVEP-based analysis rather than motor imagery. The classified outputs are used to trigger predefined wheelchair movements and vocal commands using an Arduino-controlled system. The prototype was successfully implemented and verified through experimental evaluation, demonstrating promising results for mobility and communication assistance. Full article
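The CCA baseline named in this abstract compares each EEG epoch against sine/cosine reference templates at every candidate frequency and picks the frequency with the highest canonical correlation. A minimal NumPy sketch on synthetic data (sampling rate, window length, and frequencies are assumptions for illustration):

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    qx, _ = np.linalg.qr(Xc)
    qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

fs, dur = 250, 2.0                      # assumed sampling rate and window
t = np.arange(int(fs * dur)) / fs
freqs = [7, 8, 9]                       # candidate stimulus frequencies (assumed)

# Synthetic 8 Hz trial across 3 occipital channels with additive noise
rng = np.random.default_rng(1)
trial = np.outer(np.sin(2 * np.pi * 8 * t), [1.0, 0.8, 0.6])
trial += 0.5 * rng.standard_normal(trial.shape)

def refs(f):
    """Sine/cosine reference templates for 2 harmonics of frequency f."""
    return np.column_stack([g(2 * np.pi * f * h * t)
                            for h in (1, 2) for g in (np.sin, np.cos)])

scores = [max_canon_corr(trial, refs(f)) for f in freqs]
print(freqs[int(np.argmax(scores))])    # expected to pick the 8 Hz target
```

The QR/SVD formulation used here is a standard numerically stable way to obtain canonical correlations without a dedicated CCA library.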

21 pages, 424 KB  
Article
MultiHeadEEGModelCLS: Contextual Alignment and Spatio-Temporal Attention Model for EEG-Based SSVEP Classification
by Vangelis P. Oikonomou
Electronics 2025, 14(22), 4394; https://doi.org/10.3390/electronics14224394 - 11 Nov 2025
Viewed by 693
Abstract
Steady-State Visual Evoked Potentials (SSVEPs) offer a robust basis for brain–computer interface (BCI) systems due to their high signal-to-noise ratio, minimal user training requirements, and suitability for real-time decoding. In this work, we propose MultiHeadEEGModelCLS, a novel Transformer-based architecture that integrates context-aware representation learning into SSVEP decoding. The model employs a dual-stream spatio-temporal encoder to process both the input EEG trial and a contextual signal (e.g., template or reference trial), enhanced by a learnable classification ([CLS]) token. Through self-attention and cross-attention mechanisms, the model aligns trial-level representations with contextual cues. The architecture supports multi-task learning via signal reconstruction and context-informed classification heads. Evaluation on benchmark datasets (Speller and BETA) demonstrates state-of-the-art performance, particularly under limited data and short time window scenarios, achieving higher classification accuracy and information transfer rates (ITR) compared to existing deep learning methods such as the multi-branch CNN (ConvDNN). Our method achieved ITRs of 283 bits/min and 222 bits/min on the Speller and BETA datasets, respectively, compared with 238 bits/min and 181 bits/min for ConvDNN. These results highlight the effectiveness of contextual modeling in enhancing the robustness and efficiency of SSVEP-based BCIs. Full article
(This article belongs to the Special Issue Digital Signal and Image Processing for Multimedia Technology)
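ITR figures like those quoted above are conventionally computed with the Wolpaw formula, which combines the number of targets, classification accuracy, and selection time. A sketch with illustrative parameters (not the paper's actual target count or trial length):

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_secs):
    """Wolpaw ITR: bits per selection, scaled to selections per minute."""
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60 / trial_secs

# Illustrative values only: 40 targets, 90% accuracy, 1 s per selection
print(round(itr_bits_per_min(40, 0.90, 1.0), 1))
```

Note how sensitive the result is to the selection time `trial_secs`, which is why short-time-window performance dominates ITR comparisons.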

21 pages, 1995 KB  
Article
A Feasibility Study on Enhanced Mobility and Comfort: Wheelchairs Empowered by SSVEP BCI for Instant Noise Cancellation and Signal Processing in Assistive Technology
by Chih-Tsung Chang, Kai-Jun Pai, Ming-An Chung and Chia-Wei Lin
Electronics 2025, 14(21), 4338; https://doi.org/10.3390/electronics14214338 - 5 Nov 2025
Viewed by 584
Abstract
Steady-state visual evoked potential (SSVEP)-based brain–computer interface (BCI) technology offers a promising solution for wheelchair control by translating neural signals into navigation commands. A major challenge—signal noise caused by eye blinks—is addressed in this feasibility study through real-time blink detection and correction. The proposed design utilizes sensors to capture both SSVEP and blink signals, enabling the isolation and compensation of interference, which improves control accuracy by 14.68%. Real-time correction during blinks significantly enhances system reliability and responsiveness. Furthermore, user data and global positioning system (GPS) trajectories are uploaded to the cloud via Wi-Fi 6E for continuous safety monitoring. This approach not only restores mobility for users with physical disabilities but also promotes independence and spatial autonomy. Full article
(This article belongs to the Special Issue Innovative Designs in Human–Computer Interaction)

28 pages, 1036 KB  
Review
Recent Advances in Portable Dry Electrode EEG: Architecture and Applications in Brain-Computer Interfaces
by Meihong Zhang, Bocheng Qian, Jianming Gao, Shaokai Zhao, Yibo Cui, Zhiguo Luo, Kecheng Shi and Erwei Yin
Sensors 2025, 25(16), 5215; https://doi.org/10.3390/s25165215 - 21 Aug 2025
Cited by 3 | Viewed by 11318
Abstract
As brain–computer interface (BCI) technology continues to advance, research on human brain function has gradually transitioned from theoretical investigation to practical engineering applications. To support EEG signal acquisition in a variety of real-world scenarios, BCI electrode systems must demonstrate a balanced combination of electrical performance, wearing comfort, and portability. Dry electrodes have emerged as a promising alternative for EEG acquisition due to their ability to operate without conductive gel or complex skin preparation. This paper reviews the latest progress in dry electrode EEG systems, summarizing key achievements in hardware design with a focus on structural innovation and material development. It also examines application advances in several representative BCI domains, including emotion recognition, fatigue and drowsiness detection, motor imagery, and steady-state visual evoked potentials, while analyzing system-level performance. Finally, the paper critically assesses existing challenges and identifies critical future research priorities. Key recommendations include developing a standardized evaluation framework to bolster research reliability, enhancing generalization performance, and fostering coordinated hardware-algorithm optimization. These steps are crucial for advancing the practical implementation of these technologies across diverse scenarios. With this survey, we aim to offer a comprehensive reference and roadmap for researchers engaged in the development and implementation of next-generation dry electrode EEG-based BCI systems. Full article

16 pages, 1786 KB  
Article
Enhanced SSVEP Bionic Spelling via xLSTM-Based Deep Learning with Spatial Attention and Filter Bank Techniques
by Liuyuan Dong, Chengzhi Xu, Ruizhen Xie, Xuyang Wang, Wanli Yang and Yimeng Li
Biomimetics 2025, 10(8), 554; https://doi.org/10.3390/biomimetics10080554 - 21 Aug 2025
Viewed by 976
Abstract
Steady-State Visual Evoked Potentials (SSVEPs) have emerged as an efficient means of interaction in brain–computer interfaces (BCIs), achieving bioinspired efficient language output for individuals with aphasia. To address the underutilization of SSVEP frequency information and the redundant computation of existing Transformer-based deep learning methods, this paper analyzes signals in both the time and frequency domains and proposes a stacked encoder–decoder (SED) network architecture based on an xLSTM model and a spatial attention mechanism, termed SED-xLSTM, which is the first application of xLSTM to the SSVEP speller field. This model takes the low-channel spectrogram as input and employs the filter bank technique to make full use of harmonic information. By leveraging a gating mechanism, SED-xLSTM effectively extracts and fuses high-dimensional spatial-channel semantic features from SSVEP signals. Experimental results on three public datasets demonstrate the superior performance of SED-xLSTM in terms of classification accuracy and information transfer rate, particularly outperforming existing methods under cross-validation across various temporal scales. Full article
(This article belongs to the Special Issue Exploration of Bioinspired Computer Vision and Pattern Recognition)

12 pages, 1330 KB  
Article
Steady-State Visual-Evoked-Potential–Driven Quadrotor Control Using a Deep Residual CNN for Short-Time Signal Classification
by Jiannan Chen, Chenju Yang, Rao Wei, Changchun Hua, Dianrui Mu and Fuchun Sun
Sensors 2025, 25(15), 4779; https://doi.org/10.3390/s25154779 - 3 Aug 2025
Viewed by 892
Abstract
In this paper, we study the classification problem of short-time-window steady-state visual evoked potentials (SSVEPs) and propose a novel deep convolutional network named EEGResNet based on the idea of residual connection to further improve the classification performance. Since the frequency-domain features extracted from short-time-window signals are difficult to distinguish, the EEGResNet starts from the filter bank (FB)-based feature extraction module in the time domain. The FB designed in this paper is composed of four sixth-order Butterworth filters with different bandpass ranges, and the four bandwidths are 19–50 Hz, 14–38 Hz, 9–26 Hz, and 3–14 Hz, respectively. Then, the extracted four feature tensors with the same shape are directly aggregated together. Furthermore, the aggregated features are further learned by a six-layer convolutional neural network with residual connections. Finally, the network output is generated through an adaptive fully connected layer. To prove the effectiveness and superiority of our designed EEGResNet, necessary experiments and comparisons are conducted over two large public datasets. To further verify the application potential of the trained network, a virtual simulation of brain computer interface (BCI) based quadrotor control is presented through V-REP. Full article
(This article belongs to the Special Issue Intelligent Sensor Systems in Unmanned Aerial Vehicles)
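The filter bank described above (four sixth-order Butterworth bandpass filters at 19–50 Hz, 14–38 Hz, 9–26 Hz, and 3–14 Hz) can be sketched with SciPy; the sampling rate and dummy trial below are assumptions for illustration. Note that `butter(6, ...)` specifies order 6 as in the paper, though SciPy's bandpass design doubles the effective filter order:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 250                                          # sampling rate, assumed
bands = [(19, 50), (14, 38), (9, 26), (3, 14)]    # bandpass ranges from the paper

# One sixth-order Butterworth bandpass per sub-band, as second-order sections
sos_bank = [butter(6, band, btype="bandpass", fs=fs, output="sos")
            for band in bands]

# Apply each filter to a dummy multi-channel trial (channels x samples);
# the four outputs share the input's shape and can be aggregated directly
trial = np.random.default_rng(2).standard_normal((8, 500))
features = np.stack([sosfiltfilt(sos, trial, axis=-1) for sos in sos_bank])
print(features.shape)          # four same-shape feature tensors, stacked
```

Zero-phase filtering via `sosfiltfilt` avoids introducing phase distortion, which matters when the downstream network learns temporal features.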

22 pages, 1350 KB  
Article
Optimization of Dynamic SSVEP Paradigms for Practical Application: Low-Fatigue Design with Coordinated Trajectory and Speed Modulation and Gaming Validation
by Yan Huang, Lei Cao, Yongru Chen and Ting Wang
Sensors 2025, 25(15), 4727; https://doi.org/10.3390/s25154727 - 31 Jul 2025
Viewed by 1105
Abstract
Steady-state visual evoked potential (SSVEP) paradigms are widely used in brain–computer interface (BCI) systems due to their reliability and fast response. However, traditional static stimuli may reduce user comfort and engagement during prolonged use. This study proposes a dynamic stimulation paradigm combining periodic motion trajectories with speed control. Using four frequencies (6, 8.57, 10, 12 Hz) and three waveform patterns (sinusoidal, square, sawtooth), speed was modulated at 1/5, 1/10, and 1/20 of each frequency’s base rate. An offline experiment with 17 subjects showed that the low-speed sinusoidal and sawtooth trajectories matched the static accuracy (85.84% and 83.82%) while reducing cognitive workload by 22%. An online experiment with 12 subjects participating in a fruit-slicing game confirmed its practicality, achieving recognition accuracies above 82% and a System Usability Scale score of 75.96. These results indicate that coordinated trajectory and speed modulation preserves SSVEP signal quality and enhances user experience, offering a promising approach for fatigue-resistant, user-friendly BCI application. Full article
(This article belongs to the Special Issue EEG-Based Brain–Computer Interfaces: Research and Applications)

7 pages, 808 KB  
Proceeding Paper
Performance of a Single-Flicker SSVEP BCI Using Single Channels
by Gerardo Luis Padilla and Fernando Daniel Farfán
Eng. Proc. 2024, 81(1), 19; https://doi.org/10.3390/engproc2024081019 - 6 Jun 2025
Viewed by 1107
Abstract
This study investigated performance characteristics and channel selection strategies for single-flicker steady-state visual evoked potential (SSVEP) brain–computer interfaces (BCIs) using minimal recording channels. SSVEP clustering patterns from seven subjects, who focused on four static targets while being exposed to a central 15 Hz stimulus, were analyzed. Using a single-channel approach, signal energy patterns were examined, and principal component analysis (PCA) was performed, which explained over 90% of the data variance. The Calinski–Harabasz Index quantified state separability, identifying channels and comparisons with maximum clustering efficiency. The results demonstrate the feasibility of implementing single-flicker SSVEP BCIs with reduced recording channels, contributing to more practical and efficient BCI systems. Full article
(This article belongs to the Proceedings of The 1st International Online Conference on Bioengineering)
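The analysis pipeline described above (PCA explaining over 90% of the variance, then the Calinski–Harabasz index to quantify state separability) can be sketched with scikit-learn; the toy feature matrix and four-state labels are assumptions standing in for the single-channel energy patterns:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(3)
# Toy energy features for four gaze states (30 trials each, 6 features)
X = np.vstack([rng.normal(loc=4 * k, scale=1.0, size=(30, 6)) for k in range(4)])
y = np.repeat(np.arange(4), 30)

# PCA retaining at least 90% of the variance, as in the study
Z = PCA(n_components=0.90).fit_transform(X)

# Calinski-Harabasz index: ratio of between- to within-cluster dispersion;
# higher values indicate better-separated states
print(calinski_harabasz_score(Z, y) > 1.0)
```

In the study, this index was computed per channel to identify which single channels yield the most separable gaze states.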

18 pages, 4639 KB  
Article
Using Hybrid Feature and Classifier Fusion for an Asynchronous Brain–Computer Interface Framework Based on Steady-State Motion Visual Evoked Potentials
by Bo Hu, Jun Xie, Huanqing Zhang, Junjie Liu and Hu Wang
Appl. Sci. 2025, 15(11), 6010; https://doi.org/10.3390/app15116010 - 27 May 2025
Viewed by 958
Abstract
This study proposes an asynchronous brain–computer interface (BCI) framework based on steady-state motion visual evoked potentials (SSMVEPs), designed to enhance the accuracy and robustness of control state recognition. The method integrates filter bank common spatial patterns (FBCSPs) and filter bank canonical correlation analysis (FBCCA) to extract complementary spatial and frequency domain features from EEG signals. These multimodal features are then fused and input into a dual-classifier structure consisting of a support vector machine (SVM) and extreme gradient boosting (XGBoost). A weighted fusion strategy is applied to combine the probabilistic outputs of both classifiers, allowing the system to leverage their respective strengths. Experimental results demonstrate that the fused FB(CSP + CCA)-(SVM + XGBoost) model achieves superior performance in distinguishing intentional control (IC) and non-control (NC) states compared to models using a single feature type or classifier. Furthermore, the visualization of feature distributions using UMAP shows improved inter-class separability when combining FBCSP and FBCCA features. These findings confirm the effectiveness of both feature-level and classifier-level fusion in asynchronous BCI systems. The proposed approach offers a promising and practical solution for developing more reliable and user-adaptive BCI applications, particularly in real-world environments requiring flexible control without external cues. Full article
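The weighted fusion of the two classifiers' probabilistic outputs can be sketched in a few lines; the probability matrices and the fusion weight here are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical class probabilities (IC vs NC) from two classifiers, 3 trials
p_svm = np.array([[0.60, 0.40], [0.30, 0.70], [0.55, 0.45]])
p_xgb = np.array([[0.80, 0.20], [0.40, 0.60], [0.35, 0.65]])

w = 0.5                                   # fusion weight (tunable; 0.5 assumed)
p_fused = w * p_svm + (1 - w) * p_xgb     # weighted average of probabilities
labels = p_fused.argmax(axis=1)           # fused IC/NC decision per trial
print(labels.tolist())
```

In practice the weight would be tuned (e.g., by cross-validation) so that each classifier contributes in proportion to its reliability.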

29 pages, 4973 KB  
Article
A Hybrid CNN-LSTM Approach for Muscle Artifact Removal from EEG Using Additional EMG Signal Recording
by Marcin Kołodziej, Marcin Jurczak, Andrzej Majkowski, Andrzej Rysz and Bartosz Świderski
Appl. Sci. 2025, 15(9), 4953; https://doi.org/10.3390/app15094953 - 29 Apr 2025
Cited by 4 | Viewed by 3289
Abstract
Removing artifacts from electroencephalography (EEG) signals is a common technique. Although numerous algorithms have been proposed, most rely solely on EEG data. In this study, we introduce a novel approach utilizing a hybrid convolutional neural network–long short-term memory (CNN-LSTM) architecture alongside simultaneous recording of facial and neck EMG signals. This setup enables the precise elimination of artifacts from the EEG signal. To validate the method, we collected a dataset from 24 participants who were presented with a light-emitting diode (LED) stimulus that elicited steady-state visual evoked potentials (SSVEPs) while they performed strong jaw clenching, an action known to induce significant artifacts. We then assessed the algorithm’s ability to remove artifacts while preserving SSVEP responses. The results were compared against other commonly used algorithms, such as independent component analysis and linear regression. The findings demonstrate that the proposed method exhibits excellent performance, effectively removing artifacts while retaining the EEG signal’s useful components. Full article
(This article belongs to the Section Biomedical Engineering)

20 pages, 2133 KB  
Article
Real-Time Mobile Robot Obstacles Detection and Avoidance Through EEG Signals
by Karameldeen Omer, Francesco Ferracuti, Alessandro Freddi, Sabrina Iarlori, Francesco Vella and Andrea Monteriù
Brain Sci. 2025, 15(4), 359; https://doi.org/10.3390/brainsci15040359 - 30 Mar 2025
Cited by 3 | Viewed by 2991
Abstract
Background/Objectives: The study explores the integration of human feedback into the control loop of mobile robots for real-time obstacle detection and avoidance using EEG brain–computer interface (BCI) methods. The goal is to assess the possible paradigms applicable to the most current navigation system to enhance safety and interaction between humans and robots. Methods: The research explores passive and active brain–computer interface (BCI) technologies to enhance a wheelchair-mobile robot's navigation. In the passive approach, error-related potentials (ErrPs), neural signals triggered when users commit or perceive errors, enable automatic correction of the robot's navigation mistakes without direct input or command from the user. In contrast, the active approach leverages steady-state visually evoked potentials (SSVEPs), where users focus on flickering stimuli to control the robot's movements directly. This study evaluates both paradigms to determine the most effective method for integrating human feedback into assistive robotic navigation. This study involves experimental setups where participants control a robot through a simulated environment, and their brain signals are recorded and analyzed to measure the system's responsiveness and the user's mental workload. Results: The results show that a passive BCI requires lower mental effort but suffers from lower engagement, with a classification accuracy of 72.9%, whereas an active BCI demands more cognitive effort but achieves 84.9% accuracy. Despite this, task achievement accuracy is higher in the passive method (e.g., 71% vs. 43% for subject S2) as a single correct ErrP classification enables autonomous obstacle avoidance, whereas SSVEP requires multiple accurate commands. Conclusions: This research highlights the trade-offs between accuracy, mental load, and engagement in BCI-based robot control.
The findings support the development of more intuitive assistive robotics, particularly for disabled and elderly users. Full article
(This article belongs to the Special Issue Multisensory Perception of the Body and Its Movement)

14 pages, 13932 KB  
Article
Dual-Mode Visual System for Brain–Computer Interfaces: Integrating SSVEP and P300 Responses
by Ekgari Kasawala and Surej Mouli
Sensors 2025, 25(6), 1802; https://doi.org/10.3390/s25061802 - 14 Mar 2025
Cited by 2 | Viewed by 4385
Abstract
In brain–computer interface (BCI) systems, steady-state visual-evoked potentials (SSVEP) and P300 responses have achieved widespread implementation owing to their superior information transfer rates (ITR) and minimal training requirements. These neurophysiological signals have exhibited robust efficacy and versatility in external device control, demonstrating enhanced precision and scalability. However, conventional implementations predominantly utilise liquid crystal display (LCD)-based visual stimulation paradigms, which present limitations in practical deployment scenarios. This investigation presents the development and evaluation of a novel light-emitting diode (LED)-based dual stimulation apparatus designed to enhance SSVEP classification accuracy through the integration of both SSVEP and P300 paradigms. The system employs four distinct frequencies—7 Hz, 8 Hz, 9 Hz, and 10 Hz—corresponding to forward, backward, right, and left directional controls, respectively. Oscilloscopic verification confirmed the precision of these stimulation frequencies. Real-time feature extraction was accomplished through the concurrent analysis of maximum Fast Fourier Transform (FFT) amplitude and P300 peak detection to ascertain user intent. Directional control was determined by the frequency exhibiting maximal amplitude characteristics. The visual stimulation hardware demonstrated minimal frequency deviation, with error differentials ranging from 0.15% to 0.20% across all frequencies. The implemented signal processing algorithm successfully discriminated between all four stimulus frequencies whilst correlating them with their respective P300 event markers. Classification accuracy was evaluated based on correct task intention recognition. The proposed hybrid system achieved a mean classification accuracy of 86.25%, coupled with an average ITR of 42.08 bits per minute (bpm). 
These performance metrics notably exceed the conventional 70% accuracy threshold typically employed in BCI system evaluation protocols. Full article
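The "maximum FFT amplitude" decision rule described in this abstract (pick the stimulus frequency whose spectral bin has the largest magnitude) can be sketched on synthetic data; the sampling rate and epoch length are assumptions, while the four frequencies match the paper's:

```python
import numpy as np

fs, dur = 256, 4.0                    # assumed sampling rate and epoch length
t = np.arange(int(fs * dur)) / fs
stim_freqs = [7, 8, 9, 10]            # the paper's four LED frequencies (Hz)

# Synthetic epoch: a 9 Hz SSVEP buried in noise
rng = np.random.default_rng(4)
epoch = np.sin(2 * np.pi * 9 * t) + rng.standard_normal(t.size)

# FFT magnitude spectrum; choose the stimulus bin with the largest amplitude
spectrum = np.abs(np.fft.rfft(epoch))
freq_axis = np.fft.rfftfreq(epoch.size, 1 / fs)
amps = [spectrum[np.argmin(np.abs(freq_axis - f))] for f in stim_freqs]
print(stim_freqs[int(np.argmax(amps))])
```

A 4 s epoch gives 0.25 Hz frequency resolution, so each stimulus frequency falls on an exact FFT bin; in the hybrid system this decision is additionally gated by P300 peak detection.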
