Search Results (6)

Search Parameters:
Keywords = code-modulated visual evoked potentials (c-VEP)

17 pages, 1921 KB  
Article
Streamlining cVEP Paradigms: Effects of a Minimized Electrode Montage on Brain–Computer Interface Performance
by Milán András Fodor, Atilla Cantürk, Gernot Heisenberg and Ivan Volosyak
Brain Sci. 2025, 15(6), 549; https://doi.org/10.3390/brainsci15060549 - 23 May 2025
Cited by 4 | Viewed by 1315
Abstract
(1) Background: Brain–computer interfaces (BCIs) enable direct communication between the brain and external devices using electroencephalography (EEG) signals, offering potential applications in assistive technology and neurorehabilitation. Code-modulated visual evoked potential (cVEP)-based BCIs employ code-pattern-based stimulation to evoke neural responses, which can then be classified to infer user intent. While increasing the number of EEG electrodes across the visual cortex enhances classification accuracy, it simultaneously reduces user comfort and increases setup complexity, duration, and hardware costs. (2) Methods: This online BCI study, involving thirty-eight able-bodied participants, investigated how reducing the electrode count from 16 to 6 affected performance. Three experimental conditions were tested: a baseline 16-electrode configuration, a reduced 6-electrode setup without retraining, and a reduced 6-electrode setup with retraining. (3) Results: Our results indicate that, on average, performance declines with fewer electrodes; nonetheless, retraining restored near-baseline mean Information Transfer Rate (ITR) and accuracy for those participants for whom the system remained functional. The results reveal that for a substantial number of participants, the classification pipeline fails after electrode removal, highlighting individual differences in the cVEP response characteristics or inherent limitations of the classification approach. (4) Conclusions: Ultimately, this suggests that minimal cVEP-BCI electrode setups capable of reliably functioning across all users might only be feasible through other, more flexible classification methods that can account for individual differences. These findings aim to serve as a guideline for what is currently achievable with this common cVEP paradigm and to highlight where future research should focus in order to move closer to a practical and user-friendly system.
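The studies in this list compare conditions via the Information Transfer Rate (ITR). As a quick reference, the standard Wolpaw ITR formula can be sketched as follows; the target counts and timing values in the usage example are illustrative, not taken from any of the papers above:

```python
import math

def itr_bits_per_min(n_targets: int, accuracy: float, selection_time_s: float) -> float:
    """Wolpaw ITR in bits/min for an N-target BCI.

    n_targets: number of selectable targets (N)
    accuracy: classification accuracy P in (0, 1]
    selection_time_s: average time per selection, in seconds
    """
    n, p = n_targets, accuracy
    if p <= 1.0 / n:  # at or below chance level, ITR is conventionally 0
        return 0.0
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# Illustrative: a 32-target speller at 95% accuracy, 2 s per selection
example_itr = itr_bits_per_min(32, 0.95, 2.0)
```

Note how ITR rewards both accuracy and speed: halving the selection time doubles the ITR at fixed accuracy, which is why shorter data lengths are optimized per subject in the online experiments above.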
16 pages, 664 KB  
Article
Evaluation of Different Visual Feedback Methods for Brain–Computer Interfaces (BCI) Based on Code-Modulated Visual Evoked Potentials (cVEP)
by Milán András Fodor, Hannah Herschel, Atilla Cantürk, Gernot Heisenberg and Ivan Volosyak
Brain Sci. 2024, 14(8), 846; https://doi.org/10.3390/brainsci14080846 - 22 Aug 2024
Cited by 6 | Viewed by 2773
Abstract
Brain–computer interfaces (BCIs) enable direct communication between the brain and external devices using electroencephalography (EEG) signals. BCIs based on code-modulated visual evoked potentials (cVEPs) are based on visual stimuli, thus appropriate visual feedback on the interface is crucial for an effective BCI system. Many previous studies have demonstrated that implementing visual feedback can improve information transfer rate (ITR) and reduce fatigue. This research compares a dynamic interface, where target boxes change their sizes based on detection certainty, with a threshold bar interface in a three-step cVEP speller. In this study, we found that both interfaces perform well, with slight variations in accuracy, ITR, and output characters per minute (OCM). Notably, some participants showed significant performance improvements with the dynamic interface and found it less distracting compared to the threshold bars. These results suggest that while average performance metrics are similar, the dynamic interface can provide significant benefits for certain users. This study underscores the potential for personalized interface choices to enhance BCI user experience and performance. By improving user friendliness, performance, and reducing distraction, dynamic visual feedback could optimize BCI technology for a broader range of users.

19 pages, 9860 KB  
Article
High-Density Electroencephalogram Facilitates the Detection of Small Stimuli in Code-Modulated Visual Evoked Potential Brain–Computer Interfaces
by Qingyu Sun, Shaojie Zhang, Guoya Dong, Weihua Pei, Xiaorong Gao and Yijun Wang
Sensors 2024, 24(11), 3521; https://doi.org/10.3390/s24113521 - 30 May 2024
Cited by 8 | Viewed by 2459
Abstract
In recent years, there has been a considerable amount of research on visual evoked potential (VEP)-based brain–computer interfaces (BCIs). However, it remains a big challenge to detect VEPs elicited by small visual stimuli. To address this challenge, this study employed a 256-electrode high-density electroencephalogram (EEG) cap with 66 electrodes in the parietal and occipital lobes to record EEG signals. An online BCI system based on code-modulated VEP (C-VEP) was designed and implemented with thirty targets modulated by a time-shifted binary pseudo-random sequence. A task-discriminant component analysis (TDCA) algorithm was employed for feature extraction and classification. The offline and online experiments were designed to assess EEG responses and classification performance for comparison across four different stimulus sizes at visual angles of 0.5°, 1°, 2°, and 3°. By optimizing the data length for each subject in the online experiment, information transfer rates (ITRs) of 126.48 ± 14.14 bits/min, 221.73 ± 15.69 bits/min, 258.39 ± 9.28 bits/min, and 266.40 ± 6.52 bits/min were achieved for 0.5°, 1°, 2°, and 3°, respectively. This study further compared the EEG features and classification performance of the 66-electrode layout from the 256-electrode EEG cap, the 32-electrode layout from the 128-electrode EEG cap, and the 21-electrode layout from the 64-electrode EEG cap, elucidating the pivotal importance of a higher electrode density in enhancing the performance of C-VEP BCI systems using small stimuli.
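The stimulus sizes in the abstract above are given as visual angles (0.5° to 3°). Converting a visual angle to an on-screen size depends only on viewing distance; a minimal helper, where the 60 cm viewing distance in the example is an assumed value, not taken from the paper:

```python
import math

def stimulus_size_cm(visual_angle_deg: float, viewing_distance_cm: float) -> float:
    """On-screen stimulus size subtending a given visual angle.

    Uses size = 2 * d * tan(angle / 2), the exact relation for a flat
    screen viewed head-on.
    """
    return 2.0 * viewing_distance_cm * math.tan(math.radians(visual_angle_deg) / 2.0)

# Illustrative: at an assumed 60 cm viewing distance, 1 degree of visual
# angle corresponds to roughly 1.05 cm on screen.
size_1deg = stimulus_size_cm(1.0, 60.0)
```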

13 pages, 655 KB  
Article
cVEP Training Data Validation—Towards Optimal Training Set Composition from Multi-Day Data
by Piotr Stawicki and Ivan Volosyak
Brain Sci. 2022, 12(2), 234; https://doi.org/10.3390/brainsci12020234 - 8 Feb 2022
Cited by 12 | Viewed by 2845
Abstract
This paper investigates the effects of the repetitive block-wise training process on the classification accuracy for a code-modulated visual evoked potentials (cVEP)-based brain–computer interface (BCI). The cVEP-based BCIs are popular thanks to their autocorrelation feature. The cVEP-based stimuli are generated by a specific code pattern, usually the m-sequence, which is phase-shifted between the individual targets. Typically, the cVEP classification requires a subject-specific template (individually created from the user’s own pre-recorded EEG responses to the same stimulus target), which is compared to the incoming electroencephalography (EEG) data, using the correlation algorithms. The amount of the collected user training data determines the accuracy of the system. In this offline study, previously recorded EEG data collected during an online experiment with 10 participants from multiple sessions were used. A template matching target identification, with similar models as the task-related component analysis (TRCA), was used for target classification. The spatial filter was generated by the canonical correlation analysis (CCA). When comparing the training models from one session with the same session’s data (intra-session) and the model from one session with the data from the other session (inter-session), the accuracies were (94.84%, 94.53%) and (76.67%, 77.34%) for intra-sessions and inter-sessions, respectively. In order to investigate the most reliable configuration for accurate classification, the training data blocks from different sessions (days) were compared interchangeably. In the best training set composition, the participants achieved an average accuracy of 82.66% for models based only on two training blocks from two different sessions. Similarly, at least five blocks were necessary for the average accuracy to exceed 90%. The presented method can further improve cVEP-based BCI performance by reusing previously recorded training data.
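The abstract above describes the core cVEP pipeline: an m-sequence stimulus code, phase-shifted per target, and correlation of incoming EEG against a subject-specific template. A minimal sketch of both steps, assuming a 63-bit sequence from a 6-bit LFSR and a four-target layout (the generator polynomial and target count are illustrative assumptions; the CCA spatial filtering step is omitted):

```python
import numpy as np

def m_sequence(nbits: int = 6, taps: tuple = (6, 5), seed: int = 1) -> np.ndarray:
    """Binary m-sequence of length 2**nbits - 1 from a Fibonacci LFSR.

    Taps (6, 5) correspond to the primitive polynomial x^6 + x^5 + 1
    (an assumed choice; the paper does not specify its generator).
    """
    lfsr, seq = seed, []
    for _ in range(2 ** nbits - 1):
        seq.append(lfsr & 1)
        fb = 0
        for t in taps:
            fb ^= (lfsr >> (nbits - t)) & 1  # XOR the tapped bits
        lfsr = (lfsr >> 1) | (fb << (nbits - 1))
    return np.array(seq, dtype=int)

def classify_target(epoch: np.ndarray, template: np.ndarray, n_targets: int) -> int:
    """Correlate one (spatially filtered) EEG epoch with circularly
    shifted copies of the subject template; return the best target index."""
    shift = len(template) // n_targets  # per-target phase shift of the code
    corrs = [np.corrcoef(epoch, np.roll(template, k * shift))[0, 1]
             for k in range(n_targets)]
    return int(np.argmax(corrs))
```

Because shifted m-sequences are nearly uncorrelated with each other (autocorrelation ≈ −1/63 at nonzero lags), the correct phase shift stands out sharply even with noisy epochs.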

14 pages, 2596 KB  
Article
A BCI Gaze Sensing Method Using Low Jitter Code Modulated VEP
by Ibrahim Kaya, Jorge Bohórquez and Özcan Özdamar
Sensors 2019, 19(17), 3797; https://doi.org/10.3390/s19173797 - 2 Sep 2019
Cited by 2 | Viewed by 4485
Abstract
Visual evoked potentials (VEPs) are used in clinical applications in ophthalmology, neurology, and extensively in brain–computer interface (BCI) research. Many BCI implementations utilize steady-state VEP (SSVEP) and/or code modulated VEP (c-VEP) as inputs, in tandem with sophisticated methods to improve information transfer rates (ITR). There is a gap in knowledge regarding the adaptation dynamics and physiological generation mechanisms of the VEP response, and the relation of these factors with BCI performance. A simple, dual pattern display setup was used to evoke VEPs and to test signatures elicited by non-isochronic, non-singular, low jitter stimuli at the rates of 10, 32, 50, and 70 reversals per second (rps). Non-isochronic, low-jitter stimulation elicits quasi-steady-state VEPs (QSS-VEPs) that are utilized for the simultaneous generation of transient VEP and QSS-VEP. QSS-VEP is a special case of c-VEPs, and it is assumed that it shares similar generators of the SSVEPs. Eight subjects were recorded, and the performance of the overall system was analyzed using receiver operating characteristic (ROC) curves, accuracy plots, and ITRs. In summary, QSS-VEPs performed better than transient VEPs (TR-VEP). It was found that in general, 32 rps stimulation had the highest ROC area, accuracy, and ITRs. Moreover, QSS-VEPs were found to lead to higher accuracy by template matching compared to SSVEPs at 32 rps. To investigate the reasons behind this, adaptation dynamics of transient VEPs and QSS-VEPs at all four rates were analyzed and speculated.
(This article belongs to the Section Biomedical Sensors)

12 pages, 693 KB  
Article
A Novel Dictionary-Driven Mental Spelling Application Based on Code-Modulated Visual Evoked Potentials
by Felix Gembler and Ivan Volosyak
Computers 2019, 8(2), 33; https://doi.org/10.3390/computers8020033 - 30 Apr 2019
Cited by 14 | Viewed by 6623
Abstract
Brain–computer interfaces (BCIs) based on code-modulated visual evoked potentials (c-VEPs) typically utilize a synchronous approach to identify targets (i.e., after preset time periods the system produces command outputs). Hence, users have only a limited amount of time to fixate a desired target. This hinders the usage of more complex interfaces, as these require the BCI to distinguish between intentional and unintentional fixations. In this article, we investigate a dynamic sliding window mechanism as well as the implementation of software-based stimulus synchronization to enable the threshold-based target identification for the c-VEP paradigm. To further improve the usability of the system, an ensemble-based classification strategy was investigated. In addition, a software-based approach for stimulus on-set determination is proposed, which allows for an easier setup of the system, as it reduces additional hardware dependencies. The methods were tested with an eight-target spelling application utilizing an n-gram word prediction model. The performance of eighteen participants without disabilities was tested; all participants completed word- and sentence spelling tasks using the c-VEP BCI with a mean information transfer rate (ITR) of 75.7 and 57.8 bpm, respectively.
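A threshold-based, asynchronous decision rule of the kind the abstract above investigates can be sketched as follows: per-window correlation scores are accumulated per target, and a command is emitted only once the running mean for some target clears a threshold. The threshold, minimum window count, and four-target layout here are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def sliding_window_decision(corr_stream, threshold: float = 0.4, min_windows: int = 3):
    """Asynchronous target selection over a stream of per-window scores.

    corr_stream: iterable of per-window correlation vectors, one value
                 per target. Returns (target_index, windows_used), or
                 (None, windows_seen) if no target becomes confident.
    """
    running = None
    for i, corrs in enumerate(corr_stream, start=1):
        corrs = np.asarray(corrs, dtype=float)
        running = corrs if running is None else running + corrs
        mean = running / i  # running mean correlation per target
        if i >= min_windows and mean.max() >= threshold:
            return int(np.argmax(mean)), i
    return None, i if running is not None else 0
```

The point of the sliding window is that confident fixations produce early decisions while ambiguous (possibly unintentional) fixations simply never cross the threshold, which is what lets the speller distinguish intentional from unintentional gaze.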
(This article belongs to the Special Issue Computer Technologies for Human-Centered Cyber World)
