Search Results (111)

Search Parameters:
Keywords = SSVEP

21 pages, 2065 KB  
Article
Improving Individual-Specific SSVEP-BCI with Adaptive Channel and Subspace Selection in TRCA
by Hui Li, Guanghua Xu, Shanzheng Feng, Chenghang Du, Chengcheng Han, Jiachen Kuang and Sicong Zhang
Sensors 2026, 26(4), 1123; https://doi.org/10.3390/s26041123 - 9 Feb 2026
Viewed by 365
Abstract
The individual-specific steady-state visual evoked potential (SSVEP)-based brain–computer interface (BCI) is characterized by individual calibration data, resulting in satisfactory performance. However, existing individual-specific SSVEP-BCIs employ generalized channels and task-related subspaces, which seriously limit their potential advantages and lead to suboptimal solutions. In this study, AS-TRCA was proposed to develop a purely individual-specific SSVEP-BCI by fully exploiting individual-specific knowledge. AS-TRCA involves optimal channel learning and selection (OCLS) as well as optimal subspace selection (OSS). OCLS aims to pick the optimal subject-specific channels by employing sparse learning with spatial distance constraints. Meanwhile, OSS adaptively determines the appropriate number of optimal subject-specific task-related subspaces by maximizing profile likelihood. The extensive experimental results demonstrate that AS-TRCA can acquire meaningful channels and determine the proper number of task-related subspaces for each subject compared to traditional methods. Furthermore, combining AS-TRCA with existing advanced calibration-based SSVEP decoding methods, including deep learning methods, to establish a purely individual-specific SSVEP-BCI can further enhance the decoding performance of these methods. Specifically, AS-TRCA improved the average accuracy as follows: TRCA 7.21%, SSCOR 7.61%, TRCA-R 6.58%, msTRCA 7.70%, scTRCA 4.47%, TDCA 2.91%, and bi-SiamCA 3.23%. AS-TRCA is promising for further advancing the performance of SSVEP-BCI and promoting its practical applications. Full article
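For readers unfamiliar with the TRCA baseline that AS-TRCA extends, the sketch below shows the standard task-related component analysis spatial filter. It is illustrative only: the function and variable names are assumptions, and the paper's OCLS and OSS extensions are not implemented here.

```python
import numpy as np

def trca_filter(trials):
    """Standard TRCA spatial filter for one stimulus class.

    trials: array of shape (n_trials, n_channels, n_samples).
    Returns w of shape (n_channels,), the spatial filter that
    maximizes inter-trial (task-related) covariance.
    """
    n_trials, n_ch, _ = trials.shape
    trials = trials - trials.mean(axis=2, keepdims=True)  # de-mean each trial
    # S: sum of covariances between all pairs of distinct trials
    S = np.zeros((n_ch, n_ch))
    for i in range(n_trials):
        for j in range(n_trials):
            if i != j:
                S += trials[i] @ trials[j].T
    # Q: covariance of all trials concatenated in time
    concat = trials.transpose(1, 0, 2).reshape(n_ch, -1)
    Q = concat @ concat.T
    # Leading eigenvector of Q^{-1} S maximizes the Rayleigh quotient
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Q, S))
    return np.real(eigvecs[:, np.argmax(np.real(eigvals))])
```

In TRCA-based decoding, one such filter is learned per stimulus frequency, and test trials are scored by correlating the filtered signal with filtered class templates.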
(This article belongs to the Section Biomedical Sensors)

27 pages, 3728 KB  
Article
Improved SSVEP Classification Through EEG Artifact Reduction Using Auxiliary Sensors
by Marcin Kołodziej, Andrzej Majkowski and Przemysław Wiszniewski
Sensors 2026, 26(3), 917; https://doi.org/10.3390/s26030917 - 31 Jan 2026
Viewed by 442
Abstract
Steady-state visual evoked potentials (SSVEPs) are one of the key paradigms used in brain–computer interface (BCI) systems. Their performance, however, is substantially degraded by EEG artifacts of muscular, motion-related, and ocular origin. This issue is particularly pronounced in individuals exhibiting increased facial muscle tension or involuntary eye movements. The aim of this study was to develop and evaluate an EEG artifact reduction method based on auxiliary channels, including central (Cz), frontal (Fp1), electrooculographic (HEOG), and muscular electrodes (neck, cheek, jaw). Signals from these channels were used to model the physical sources of interference recorded concurrently with occipital brain activity (O1, O2, Oz). EEG signal cleaning was performed using linear regression in 1-s windows, followed by frequency-domain analysis to extract features related to stimulation frequencies and SSVEP classification using SVM and CNN algorithms. The experiment involved three visual stimulation frequencies (7, 8, and 9 Hz) generated by LEDs and the recording of controlled facial and jaw-related artifacts. Experiments conducted on 12 participants demonstrated a 9% increase in classification accuracy after artifact removal. Further analysis indicated that the Cz and jaw channels contributed most significantly to effective artifact suppression. The results confirm that the use of auxiliary channels substantially improves EEG signal quality and enhances the reliability of BCI systems under real-world conditions. Full article
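The per-window regression cleaning step described above can be sketched as an ordinary-least-squares fit of the auxiliary channels onto an occipital channel, with the fitted artifact estimate subtracted. The names and window handling below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def clean_window(eeg, aux):
    """Remove artifact components from one 1-s EEG window by linear regression.

    eeg: (n_samples,) one occipital channel (e.g., Oz).
    aux: (n_aux, n_samples) auxiliary channels (e.g., Cz, Fp1, HEOG, jaw EMG).
    Returns the residual after subtracting the best linear fit of aux onto eeg.
    """
    A = aux.T                                   # design matrix (n_samples, n_aux)
    beta, *_ = np.linalg.lstsq(A, eeg, rcond=None)
    return eeg - A @ beta                       # EEG minus estimated artifact
```

A full pipeline would slide this over consecutive 1-s windows of each occipital channel before the frequency-domain feature extraction.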
(This article belongs to the Special Issue Advances in EEG Sensors: Research and Applications)

15 pages, 1697 KB  
Article
Online Compensation of Systematic Effects in Stimuli Generation for XR-Based SSVEP BCIs
by Leopoldo Angrisani, Egidio De Benedetto, Matteo D’Iorio, Luigi Duraccio, Fabrizio Lo Regio and Annarita Tedesco
Sensors 2026, 26(3), 766; https://doi.org/10.3390/s26030766 - 23 Jan 2026
Viewed by 343
Abstract
Background: Brain–Computer Interfaces (BCIs) based on Steady-State Visually Evoked Potentials (SSVEPs) and Extended Reality (XR) offer promising solutions for highly wearable applications, but their classification performance can be affected by systematic effects in stimulus presentation. Novelty: This study introduces a novel online method to compensate for systematic effects in the Refresh Rate (RR) of XR displays, enhancing SSVEP classification without requiring additional training or invasive measurements. Methods: A non-invasive monitoring module was incorporated into the developed BCI pipeline to measure frame rate variations in the XR display, allowing deviations between the nominal RR and measured values to be automatically detected and compensated for. Classification performance was evaluated using Filter Bank Canonical Correlation Analysis (FBCCA). Statistical significance was assessed using Student’s t-test. Materials: Two datasets were used: a dataset based on the Moverio BT-350, including 9 subjects, and a dataset based on the HoloLens 2, including 30 subjects, all collected by the authors. Results: The proposed compensation method led to significant improvements in SSVEP classification accuracy, proportional to the magnitude of fps deviations. In some cases, classification accuracy increased by up to 300% relative to its original value. Statistical analyses confirmed the reliability of the results across subjects and datasets. Conclusions: These findings show that the proposed method effectively enhances SSVEP-based BCIs in XR environments and provides a robust foundation for practical applications requiring high reliability. Full article
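As a rough illustration of the CCA scoring at the core of FBCCA, the sketch below correlates a multichannel EEG segment with sine/cosine reference templates at each candidate frequency; under the compensation idea described above, each nominal frequency would be rescaled by the measured refresh-rate deviation before the references are built. All names here are assumptions, not the authors' code, and the filter-bank sub-band weighting of full FBCCA is omitted.

```python
import numpy as np

def canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_classify(eeg, freqs, fs, n_harm=3):
    """Return the candidate frequency whose sine/cosine reference set
    best correlates with the EEG segment.

    eeg: (n_samples, n_channels); freqs: candidate frequencies in Hz.
    """
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        # Reference: sin/cos pairs at the frequency and its harmonics
        ref = np.column_stack(
            [fn(2 * np.pi * f * h * t)
             for h in range(1, n_harm + 1) for fn in (np.sin, np.cos)])
        scores.append(canonical_corr(eeg, ref))
    return freqs[int(np.argmax(scores))]
```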

20 pages, 7063 KB  
Article
Effective Brain Connectivity Analysis During Endogenous Selective Attention Based on Granger Causality
by Walter Escalante Puente de la Vega and Alexander N. Pisarchik
Appl. Sci. 2026, 16(1), 101; https://doi.org/10.3390/app16010101 - 22 Dec 2025
Cited by 3 | Viewed by 931
Abstract
Endogenous selective attention, the cognitive process of selectively attending to non-literal, ambiguous, or multistable interpretations of sensory input, remains poorly understood at the network level. To address this gap, we applied Granger causality (GC) analysis to electroencephalographic (EEG) recordings to characterize effective connectivity during sustained attention to ambiguous visual stimuli. Participants viewed the Necker cube, whose left and right faces were modulated at 6.67 Hz and 8.57 Hz, respectively, enabling objective tracking of perceptual dominance via steady-state visually evoked potentials (SSVEPs). GC analysis revealed robust directed connectivity between frontal and occipito-parietal areas during sustained perception of a specific cube orientation. We found that the magnitude of the GC-derived F-statistics correlated positively with attention performance indices during the left-face orientation task and negatively during the right-face orientation task, indicating that interregional causal influence scales with cognitive engagement in ambiguous interpretation. These results establish GC as a sensitive and reliable approach for characterizing dynamic, directional neural interactions during perceptual ambiguity, and, most notably, reveal, for the first time, an occipito-frontal effective connectivity architecture specifically recruited in support of endogenous selective attention. The methodology and findings hold translational potential for applications in neuroadaptive interfaces, cognitive diagnostics, and the study of disorders involving impaired symbolic processing. Full article
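A bivariate Granger-causality F-statistic of the kind referenced above compares a restricted autoregression of one signal on its own past against an unrestricted one that also includes the other signal's past. The sketch below is a textbook illustration under assumed variable names, not the authors' pipeline.

```python
import numpy as np

def _rss(A, b):
    """Residual sum of squares of the least-squares fit of A onto b."""
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = b - A @ beta
    return float(r @ r)

def granger_f(x, y, p=2):
    """F-statistic for 'y Granger-causes x' with autoregressive order p."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    # Lagged design matrices: column k holds the (k+1)-step-back samples
    lag_x = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    lag_y = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    ones = np.ones((n - p, 1))
    target = x[p:]
    rss_r = _rss(np.hstack([ones, lag_x]), target)          # x's past only
    rss_u = _rss(np.hstack([ones, lag_x, lag_y]), target)   # plus y's past
    df_denom = (n - p) - (2 * p + 1)
    return ((rss_r - rss_u) / p) / (rss_u / df_denom)
```

A large F in one direction but not the other indicates directed influence, which is the quantity the study correlates with attention performance.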

24 pages, 4080 KB  
Article
MCRBM–CNN: A Hybrid Deep Learning Framework for Robust SSVEP Classification
by Depeng Gao, Yuhang Zhao, Jieru Zhou, Haifei Zhang and Hongqi Li
Sensors 2025, 25(24), 7456; https://doi.org/10.3390/s25247456 - 8 Dec 2025
Viewed by 658
Abstract
The steady-state visual evoked potential (SSVEP), a non-invasive EEG modality, is a prominent approach for brain–computer interfaces (BCIs) due to its high signal-to-noise ratio and minimal user training. However, its practical utility is often hampered by susceptibility to noise, artifacts, and concurrent brain activities, complicating signal decoding. To address this, we propose a novel hybrid deep learning model that integrates a multi-channel restricted Boltzmann machine (RBM) with a convolutional neural network (CNN). The framework comprises two main modules: a feature extraction module and a classification module. The former employs a multi-channel RBM to unsupervisedly learn latent feature representations from multi-channel EEG data, effectively capturing inter-channel correlations to enhance feature discriminability. The latter leverages convolutional operations to further extract spatiotemporal features, constructing a deep discriminative model for the automatic recognition of SSVEP signals. Comprehensive evaluations on multiple public datasets demonstrate that our proposed method achieves competitive performance compared to various benchmarks, particularly exhibiting superior effectiveness and robustness in short-time window scenarios. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—3rd Edition)

22 pages, 12089 KB  
Article
A Brain–Computer Interface for Control of a Virtual Prosthetic Hand
by Ángel del Rosario Zárate-Ruiz, Manuel Arias-Montiel and Christian Eduardo Millán-Hernández
Computation 2025, 13(12), 287; https://doi.org/10.3390/computation13120287 - 6 Dec 2025
Viewed by 1759
Abstract
Brain–computer interfaces (BCIs) have emerged as an option that allows better communication between humans and some technological devices. This article presents a BCI based on the steady-state visual evoked potentials (SSVEP) paradigm and low-cost hardware to control a virtual prototype of a robotic hand. A LED-based device is proposed as a visual stimulator, and the Open BCI Ultracortex Biosensing Headset is used to acquire the electroencephalographic (EEG) signals for the BCI. The processing and classification of the obtained signals are described. Classifiers based on artificial neural networks (ANNs) and support vector machines (SVMs) are compared, demonstrating that the classifiers based on SVM have superior performance to those based on ANN. The classified EEG signals are used to implement different movements in a virtual prosthetic hand using a co-simulation approach, showing the feasibility of BCI being implemented in the control of robotic hands. Full article

15 pages, 3074 KB  
Article
An SSVEP-Based Brain–Computer Interface Device for Wheelchair Control Integrated with a Speech Aid System
by Abdulrahman Mohammed Alnour Ahmed, Yousef Al-Junaidi, Abdulaziz Al-Tayar, Ammar Qaid and Khurram Karim Qureshi
Eng 2025, 6(12), 343; https://doi.org/10.3390/eng6120343 - 1 Dec 2025
Viewed by 883
Abstract
This paper presents a brain–computer interface (BCI) system based on steady-state visual evoked potential (SSVEP) for controlling an electric wheelchair integrated with a speech aid module. The system targets individuals with severe motor disabilities, such as amyotrophic lateral sclerosis (ALS) or multiple sclerosis (MS), who may experience limited mobility and speech impairments. EEG signals from the occipital lobe are recorded using wet electrodes and classified using deep learning models, including ResNet50, InceptionV4, and VGG16, as well as Canonical Correlation Analysis (CCA). The ResNet50 model demonstrated the best performance for nine-class SSVEP signal classification, achieving an offline accuracy of 81.25% and a real-time performance of 72.44%, thereby clarifying that these results correspond to SSVEP-based analysis rather than motor imagery. The classified outputs are used to trigger predefined wheelchair movements and vocal commands using an Arduino-controlled system. The prototype was successfully implemented and verified through experimental evaluation, demonstrating promising results for mobility and communication assistance. Full article

21 pages, 424 KB  
Article
MultiHeadEEGModelCLS: Contextual Alignment and Spatio-Temporal Attention Model for EEG-Based SSVEP Classification
by Vangelis P. Oikonomou
Electronics 2025, 14(22), 4394; https://doi.org/10.3390/electronics14224394 - 11 Nov 2025
Cited by 1 | Viewed by 827
Abstract
Steady-State Visual Evoked Potentials (SSVEPs) offer a robust basis for brain–computer interface (BCI) systems due to their high signal-to-noise ratio, minimal user training requirements, and suitability for real-time decoding. In this work, we propose MultiHeadEEGModelCLS, a novel Transformer-based architecture that integrates context-aware representation learning into SSVEP decoding. The model employs a dual-stream spatio-temporal encoder to process both the input EEG trial and a contextual signal (e.g., template or reference trial), enhanced by a learnable classification ([CLS]) token. Through self-attention and cross-attention mechanisms, the model aligns trial-level representations with contextual cues. The architecture supports multi-task learning via signal reconstruction and context-informed classification heads. Evaluation on benchmark datasets (Speller and BETA) demonstrates state-of-the-art performance, particularly under limited data and short time window scenarios, achieving higher classification accuracy and information transfer rates (ITR) compared to existing deep learning methods such as the multi-branch CNN (ConvDNN). Our method achieved ITRs of 283 bits/min and 222 bits/min on the Speller and BETA datasets, respectively, versus 238 bits/min and 181 bits/min for ConvDNN. These results highlight the effectiveness of contextual modeling in enhancing the robustness and efficiency of SSVEP-based BCIs. Full article
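ITR figures like those quoted above are conventionally computed with the standard Wolpaw formula, which depends only on the number of targets N, the classification accuracy P, and the time per selection T: ITR = (log2 N + P log2 P + (1−P) log2((1−P)/(N−1))) · 60/T. A minimal implementation (the example parameters are illustrative, not taken from the paper):

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits/min for an N-target BCI."""
    n, p = n_targets, accuracy
    if p <= 1.0 / n:
        return 0.0  # at or below chance, no information is transferred
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

# e.g., a 40-target speller at 90% accuracy with 1 s per selection
print(round(itr_bits_per_min(40, 0.9, 1.0), 1))  # → 259.5
```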
(This article belongs to the Special Issue Digital Signal and Image Processing for Multimedia Technology)

21 pages, 1995 KB  
Article
A Feasibility Study on Enhanced Mobility and Comfort: Wheelchairs Empowered by SSVEP BCI for Instant Noise Cancellation and Signal Processing in Assistive Technology
by Chih-Tsung Chang, Kai-Jun Pai, Ming-An Chung and Chia-Wei Lin
Electronics 2025, 14(21), 4338; https://doi.org/10.3390/electronics14214338 - 5 Nov 2025
Viewed by 685
Abstract
Steady-state visual evoked potential (SSVEP)-based brain–computer interface (BCI) technology offers a promising solution for wheelchair control by translating neural signals into navigation commands. A major challenge—signal noise caused by eye blinks—is addressed in this feasibility study through real-time blink detection and correction. The proposed design utilizes sensors to capture both SSVEP and blink signals, enabling the isolation and compensation of interference, which improves control accuracy by 14.68%. Real-time correction during blinks significantly enhances system reliability and responsiveness. Furthermore, user data and global positioning system (GPS) trajectories are uploaded to the cloud via Wi-Fi 6E for continuous safety monitoring. This approach not only restores mobility for users with physical disabilities but also promotes independence and spatial autonomy. Full article
(This article belongs to the Special Issue Innovative Designs in Human–Computer Interaction)

34 pages, 1172 KB  
Article
Stimulus-Evoked Brain Signals for Parkinson’s Detection: A Comprehensive Benchmark Performance Analysis on Cross-Stimulation and Channel-Wise Experiments
by Krishna Patel, Rajendra Gad, Marissa Lourdes de Ataide, Narayan Vetrekar, Teresa Ferreira and Raghavendra Ramachandra
Bioengineering 2025, 12(11), 1185; https://doi.org/10.3390/bioengineering12111185 - 30 Oct 2025
Viewed by 966
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder that affects both motor and cognitive functions, often resulting in misdiagnosis during its early stages. The condition severely impacts daily living, diminishing an individual’s ability to work and carry out routine tasks independently. Consequently, the development of automated methods for reliable PD detection has gained growing research interest. Among the available approaches, Electroencephalography (EEG) has emerged as a promising non-invasive and cost-effective tool. Nevertheless, most existing studies have predominantly focused on resting-state EEG, which constrains the generalizability and robustness of the proposed detection models. This study introduces a cross-stimulation evaluation framework to assess its impact on Parkinson’s disease detection algorithms and conducts channel-wise analysis to identify the most discriminative brain regions for accurate diagnosis. To support this research, we present the newly introduced Parkinson’s disease EEG (ParEEG) database, comprising 203,520 EEG samples from 60 subjects recorded based on Resting-State Visual Evoked Potential (RSVEP) and Steady-State Visually Evoked Potential (SSVEP) stimuli. In this study, we evaluate the performance of individual EEG channels using two handcrafted and two deep learning-based methods, employing a 10-fold cross-validation strategy, to ensure statistical reliability and establish benchmark results. Experimental results show that CRC and LSTM consistently achieved high accuracies (95–100%) with low variability (standard deviation < 2%). The analysis indicates that EEG channels in the frontal, fronto-central, and central–parietal regions consistently yield higher classification accuracy in Parkinson’s disease detection. Our findings offer valuable insights into channel-specific neural alterations for better interpretability in PD, and the cross-stimulation evaluation enhances the generalizability of EEG-based PD detection for practical diagnostic purposes. Full article

16 pages, 1786 KB  
Article
Enhanced SSVEP Bionic Spelling via xLSTM-Based Deep Learning with Spatial Attention and Filter Bank Techniques
by Liuyuan Dong, Chengzhi Xu, Ruizhen Xie, Xuyang Wang, Wanli Yang and Yimeng Li
Biomimetics 2025, 10(8), 554; https://doi.org/10.3390/biomimetics10080554 - 21 Aug 2025
Viewed by 1053
Abstract
Steady-State Visual Evoked Potentials (SSVEPs) have emerged as an efficient means of interaction in brain–computer interfaces (BCIs), achieving bioinspired efficient language output for individuals with aphasia. Addressing the underutilization of frequency information of SSVEPs and redundant computation by existing transformer-based deep learning methods, this paper analyzes signals from both the time and frequency domains, proposing a stacked encoder–decoder (SED) network architecture based on an xLSTM model and spatial attention mechanism, termed SED-xLSTM, which firstly applies xLSTM to the SSVEP speller field. This model takes the low-channel spectrogram as input and employs the filter bank technique to make full use of harmonic information. By leveraging a gating mechanism, SED-xLSTM effectively extracts and fuses high-dimensional spatial-channel semantic features from SSVEP signals. Experimental results on three public datasets demonstrate the superior performance of SED-xLSTM in terms of classification accuracy and information transfer rate, particularly outperforming existing methods under cross-validation across various temporal scales. Full article
(This article belongs to the Special Issue Exploration of Bioinspired Computer Vision and Pattern Recognition)

12 pages, 1330 KB  
Article
Steady-State Visual-Evoked-Potential–Driven Quadrotor Control Using a Deep Residual CNN for Short-Time Signal Classification
by Jiannan Chen, Chenju Yang, Rao Wei, Changchun Hua, Dianrui Mu and Fuchun Sun
Sensors 2025, 25(15), 4779; https://doi.org/10.3390/s25154779 - 3 Aug 2025
Viewed by 954
Abstract
In this paper, we study the classification problem of short-time-window steady-state visual evoked potentials (SSVEPs) and propose a novel deep convolutional network named EEGResNet based on the idea of residual connection to further improve the classification performance. Since the frequency-domain features extracted from short-time-window signals are difficult to distinguish, the EEGResNet starts from the filter bank (FB)-based feature extraction module in the time domain. The FB designed in this paper is composed of four sixth-order Butterworth filters with different bandpass ranges, and the four bandwidths are 19–50 Hz, 14–38 Hz, 9–26 Hz, and 3–14 Hz, respectively. Then, the extracted four feature tensors with the same shape are directly aggregated together. Furthermore, the aggregated features are further learned by a six-layer convolutional neural network with residual connections. Finally, the network output is generated through an adaptive fully connected layer. To prove the effectiveness and superiority of our designed EEGResNet, necessary experiments and comparisons are conducted over two large public datasets. To further verify the application potential of the trained network, a virtual simulation of brain computer interface (BCI) based quadrotor control is presented through V-REP. Full article
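The filter bank described above (four 6th-order Butterworth band-pass filters at 19–50, 14–38, 9–26, and 3–14 Hz) can be sketched with SciPy as follows; the sampling rate and the use of zero-phase filtering are assumptions, not details stated in the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

BANDS = [(19, 50), (14, 38), (9, 26), (3, 14)]  # Hz, the four bandpass ranges

def filter_bank(eeg, fs=250):
    """Apply the four 6th-order Butterworth band-pass filters and stack
    the outputs into a (n_bands, n_channels, n_samples) feature tensor.

    eeg: (n_channels, n_samples) EEG segment.
    """
    out = []
    for lo, hi in BANDS:
        sos = butter(6, [lo, hi], btype='bandpass', fs=fs, output='sos')
        out.append(sosfiltfilt(sos, eeg, axis=-1))  # zero-phase filtering
    return np.stack(out)
```

The resulting same-shape feature tensors are what the network aggregates before the residual convolutional layers.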
(This article belongs to the Special Issue Intelligent Sensor Systems in Unmanned Aerial Vehicles)

22 pages, 1350 KB  
Article
Optimization of Dynamic SSVEP Paradigms for Practical Application: Low-Fatigue Design with Coordinated Trajectory and Speed Modulation and Gaming Validation
by Yan Huang, Lei Cao, Yongru Chen and Ting Wang
Sensors 2025, 25(15), 4727; https://doi.org/10.3390/s25154727 - 31 Jul 2025
Viewed by 1191
Abstract
Steady-state visual evoked potential (SSVEP) paradigms are widely used in brain–computer interface (BCI) systems due to their reliability and fast response. However, traditional static stimuli may reduce user comfort and engagement during prolonged use. This study proposes a dynamic stimulation paradigm combining periodic motion trajectories with speed control. Using four frequencies (6, 8.57, 10, 12 Hz) and three waveform patterns (sinusoidal, square, sawtooth), speed was modulated at 1/5, 1/10, and 1/20 of each frequency’s base rate. An offline experiment with 17 subjects showed that the low-speed sinusoidal and sawtooth trajectories matched the static accuracy (85.84% and 83.82%) while reducing cognitive workload by 22%. An online experiment with 12 subjects participating in a fruit-slicing game confirmed its practicality, achieving recognition accuracies above 82% and a System Usability Scale score of 75.96. These results indicate that coordinated trajectory and speed modulation preserves SSVEP signal quality and enhances user experience, offering a promising approach for fatigue-resistant, user-friendly BCI application. Full article
(This article belongs to the Special Issue EEG-Based Brain–Computer Interfaces: Research and Applications)

22 pages, 4200 KB  
Article
Investigation of Personalized Visual Stimuli via Checkerboard Patterns Using Flickering Circles for SSVEP-Based BCI System
by Nannaphat Siribunyaphat, Natjamee Tohkhwan and Yunyong Punsawad
Sensors 2025, 25(15), 4623; https://doi.org/10.3390/s25154623 - 25 Jul 2025
Cited by 1 | Viewed by 2655
Abstract
In this study, we conducted two steady-state visual evoked potential (SSVEP) studies to develop a practical brain–computer interface (BCI) system for communication and control applications. The first study introduces a novel visual stimulus paradigm that combines checkerboard patterns with flickering circles configured in single-, double-, and triple-layer forms. We tested three flickering frequency conditions: a single fundamental frequency, a combination of the fundamental frequency and its harmonics, and a combination of two fundamental frequencies. The second study utilizes personalized visual stimuli to enhance SSVEP responses. SSVEP detection was performed using power spectral density (PSD) analysis by employing Welch’s method and relative PSD to extract SSVEP features. Command classification was carried out using a proposed decision rule–based algorithm. The results were compared with those of a conventional checkerboard pattern with flickering squares. The experimental findings indicate that single-layer flickering circle patterns exhibit comparable or improved performance when compared with the conventional stimuli, particularly when customized for individual users. Conversely, the multilayer patterns tended to increase visual fatigue. Furthermore, individualized stimuli achieved a classification accuracy of 90.2% in real-time SSVEP-based BCI systems for six-command generation tasks. The personalized visual stimuli can enhance user experience and system performance, thereby supporting the development of a practical SSVEP-based BCI system. Full article
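A Welch-based relative-PSD feature of the kind described above can be sketched as the power near a target stimulation frequency divided by the total power in a broad band; the band widths and normalization range below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import welch

def relative_psd(eeg, fs, target_hz, band=0.5, total=(4, 40)):
    """Relative PSD feature for one channel and one candidate frequency.

    eeg: (n_samples,) EEG segment; fs: sampling rate in Hz.
    Returns power within +/- band Hz of target_hz, normalized by the
    total power in the broad band `total`, via Welch's method.
    """
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 0.5 Hz resolution
    near = (f >= target_hz - band) & (f <= target_hz + band)
    broad = (f >= total[0]) & (f <= total[1])
    return pxx[near].sum() / pxx[broad].sum()
```

A simple decision rule then selects the stimulation frequency with the largest relative-PSD score across the candidate set.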

41 pages, 2631 KB  
Systematic Review
Brain-Computer Interfaces and AI Segmentation in Neurosurgery: A Systematic Review of Integrated Precision Approaches
by Sayantan Ghosh, Padmanabhan Sindhujaa, Dinesh Kumar Kesavan, Balázs Gulyás and Domokos Máthé
Surgeries 2025, 6(3), 50; https://doi.org/10.3390/surgeries6030050 - 26 Jun 2025
Cited by 3 | Viewed by 5629
Abstract
Background: BCI and AI-driven image segmentation are revolutionizing precision neurosurgery by enhancing surgical accuracy, reducing human error, and improving patient outcomes. Methods: This systematic review explores the integration of AI techniques—particularly DL and CNNs—with neuroimaging modalities such as MRI, CT, EEG, and ECoG for automated brain mapping and tissue classification. Eligible clinical and computational studies, primarily published between 2015 and 2025, were identified via PubMed, Scopus, and IEEE Xplore. The review follows PRISMA guidelines and is registered with the OSF (registration number: J59CY). Results: AI-based segmentation methods have demonstrated Dice similarity coefficients exceeding 0.91 in glioma boundary delineation and tumor segmentation tasks. Concurrently, BCI systems leveraging EEG and SSVEP paradigms have achieved information transfer rates surpassing 22.5 bits/min, enabling high-speed neural decoding with sub-second latency. We critically evaluate real-time neural signal processing pipelines and AI-guided surgical robotics, emphasizing clinical performance and architectural constraints. Integrated systems improve targeting precision and postoperative recovery across select neurosurgical applications. Conclusions: This review consolidates recent advancements in BCI and AI-driven medical imaging, identifies barriers to clinical adoption—including signal reliability, latency bottlenecks, and ethical uncertainties—and outlines research pathways essential for realizing closed-loop, intelligent neurosurgical platforms. Full article
