
Dual-Mode Visual System for Brain–Computer Interfaces: Integrating SSVEP and P300 Responses

Ekgari Kasawala and Surej Mouli
Engineering for Health Research Group, Biomedical Engineering, Aston University, Aston Street, Birmingham B4 7ET, UK
* Author to whom correspondence should be addressed.
Sensors 2025, 25(6), 1802; https://doi.org/10.3390/s25061802
Submission received: 29 December 2024 / Revised: 20 February 2025 / Accepted: 11 March 2025 / Published: 14 March 2025

Abstract

In brain–computer interface (BCI) systems, steady-state visual-evoked potentials (SSVEP) and P300 responses have achieved widespread implementation owing to their superior information transfer rates (ITR) and minimal training requirements. These neurophysiological signals have exhibited robust efficacy and versatility in external device control, demonstrating enhanced precision and scalability. However, conventional implementations predominantly utilise liquid crystal display (LCD)-based visual stimulation paradigms, which present limitations in practical deployment scenarios. This investigation presents the development and evaluation of a novel light-emitting diode (LED)-based dual stimulation apparatus designed to enhance SSVEP classification accuracy through the integration of both SSVEP and P300 paradigms. The system employs four distinct frequencies—7 Hz, 8 Hz, 9 Hz, and 10 Hz—corresponding to forward, backward, right, and left directional controls, respectively. Oscilloscopic verification confirmed the precision of these stimulation frequencies. Real-time feature extraction was accomplished through the concurrent analysis of maximum Fast Fourier Transform (FFT) amplitude and P300 peak detection to ascertain user intent. Directional control was determined by the frequency exhibiting maximal amplitude characteristics. The visual stimulation hardware demonstrated minimal frequency deviation, with error differentials ranging from 0.15% to 0.20% across all frequencies. The implemented signal processing algorithm successfully discriminated between all four stimulus frequencies whilst correlating them with their respective P300 event markers. Classification accuracy was evaluated based on correct task intention recognition. The proposed hybrid system achieved a mean classification accuracy of 86.25%, coupled with an average ITR of 42.08 bits per minute (bpm). These performance metrics notably exceed the conventional 70% accuracy threshold typically employed in BCI system evaluation protocols.

1. Introduction

Brain–computer interface (BCI) systems, also termed neural interfaces, establish a direct communication pathway between the Central Nervous System (CNS) and external devices through the detection, analysis, and translation of neural signals, particularly in the field of neurorehabilitation [1,2,3]. All BCI systems adhere to a similar functional model, albeit with minor variations in their precise implementation and technical specifications. The assessment of cerebral activity can be accomplished through diverse neuroimaging and neurophysiological modalities, including functional near-infrared spectroscopy (fNIRS), electrocorticography (ECoG), functional magnetic resonance imaging (fMRI), and electroencephalography (EEG). EEG has emerged as the predominant methodology in BCI applications owing to its economically viable instrumentation, non-invasive signal acquisition, and superior portability, attributes that have contributed significantly to its widespread adoption in the field [4,5]. These systems, whether non-invasive (utilising surface electrodes) or invasive (employing implanted electrodes), fundamentally operate through a structured sequence of signal acquisition and processing stages [6]. Initially, raw electrophysiological signals from the user’s brain activity are detected through electrode arrays (typically ranging from microvolts to millivolts in amplitude) and digitised via analogue-to-digital conversion before transmission to a computing system. These signals then undergo pre-processing protocols to minimise physiological and environmental artefacts (such as electromagnetic interference, muscle activity, and ocular movements) and enhance the signal-to-noise ratio (SNR) through spatial and temporal filtering techniques [7].
Specific signal characteristics are isolated during the feature extraction phase, including amplitude variations, frequency band powers, and temporal patterns. These features undergo classification through computational algorithms to identify distinct brain activity patterns associated with intended user commands. The classified signals are then decoded and translated into control inputs for the external interface, generating real-time feedback to the user through visual, auditory, or tactile channels, at which point the acquisition-processing cycle recommences [8,9]. The systematic workflow of these processes is depicted in Figure 1.
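To make this acquisition–processing cycle concrete, the following minimal Python sketch expresses the stages described above as a generic loop; every function is a placeholder standing in for the stage it names, not the authors' implementation.

```python
# Skeleton of the canonical BCI processing loop described above; each
# callable is a placeholder for one stage of the pipeline in Figure 1.
def bci_loop(acquire, preprocess, extract_features, classify, actuate):
    while True:
        raw = acquire()                  # electrode signals, digitised (ADC)
        clean = preprocess(raw)          # artefact removal, spatial/temporal filtering
        feats = extract_features(clean)  # band powers, amplitudes, latencies
        command = classify(feats)        # map brain pattern -> intended command
        actuate(command)                 # drive device, give feedback to the user
```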
Multi-paradigm approaches have emerged as an improvement over traditional BCI platforms, which predominantly relied on a single EEG paradigm [10]. The limitation of single-paradigm systems lies in their potential incompatibility with certain users and their susceptibility to erroneous signal recognition. Recent advancements in BCI systems have demonstrated enhanced performance metrics through the integration of multiple paradigms, yielding superior accuracy and increased response rates in the control of peripheral applications [11,12].
BCI systems principally utilise EEG signals derived from steady-state visual-evoked potentials (SSVEP) or transient-evoked potentials, particularly the P300 component [13,14]. The preferential employment of these neurophysiological paradigms stems from their demonstrated capacity to achieve superior recognition accuracy in practical applications. SSVEP-based paradigms have garnered substantial adoption in BCI systems owing to their advantageous characteristics, including elevated information transfer rates (ITR), robust signal-to-noise ratios (SNR), rapid response latency, and minimal user training requirements. SSVEPs manifest as periodic neural responses elicited by rhythmic photic stimulation at predetermined frequencies [15,16]. These responses exhibit frequency-locked oscillations corresponding to both the fundamental frequency of the visual stimulus and its harmonic components. This phenomenon, termed frequency tagging, involves the presentation of distinct visual stimuli oscillating at known frequencies. Upon attentional engagement by the subject, these stimuli generate frequency-specific SSVEP responses that can be quantitatively analysed following neural signal acquisition and digitalisation [17].
Although SSVEP-based BCIs exhibit advantageous characteristics, they present several significant limitations. The photic stimulation inherent to these systems poses a risk of triggering photosensitive epilepsy, thereby restricting their applicability across the broader population due to potential adverse health implications. Prolonged exposure to SSVEP stimuli necessitates sustained visual fixation on flickering light sources, which can induce optical fatigue [18,19]. This fatigue subsequently compromises user performance and system accuracy. Moreover, the SSVEP response varies amongst users, with performance fluctuating based on low or high visual stimulus flickering frequencies [19]. Individual variations in SSVEP response strength create obstacles for standardising the system for widespread adoption [20]. Furthermore, SSVEP signal strength and frequency typically decrease with age or due to neurological conditions, impacting performance [21]. Whilst recent studies have investigated various methodological approaches to address these constraints, considerable scope remains for optimising both stimulus presentation paradigms and signal processing algorithms to enhance system precision and user experience [22,23].
Studies demonstrate that LED-based visual stimuli consistently produce more robust SSVEP neural responses compared to LCD displays, due to their superior temporal precision and luminance control [21]. LEDs offer precise frequency control without refresh rate limitations, enabling the exploration of optimal stimulation frequencies (typically in the 6–30 Hz range), while larger stimuli enhance SSVEP amplitude by recruiting greater populations of neurons across the primary visual cortex, improving signal-to-noise ratios. Standard LCD refresh rates (60 Hz) restrict available stimulation frequencies to divisors of 60, while LED systems can generate any frequency within physiological constraints; this understanding has led to the development of hybrid systems.
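As a quick illustration of the refresh-rate constraint, the short Python sketch below enumerates the frequencies exactly realisable on a 60 Hz display (integer divisors of the refresh rate) and checks them against the 7–10 Hz set used in this study; the range of divisors shown is arbitrary.

```python
# Frequencies realisable by flipping frames on a 60 Hz LCD are limited to
# refresh_rate / k for integer k; an LED driver has no such restriction.
refresh_hz = 60
lcd_freqs = sorted({refresh_hz / k for k in range(2, 11)})
print(lcd_freqs)  # 6.0, 6.67, 7.5, 8.57, 10.0, 12.0, 15.0, 20.0, 30.0 (approx.)

led_targets = [7.0, 8.0, 9.0, 10.0]  # the stimulation set used in this study
unreachable = [f for f in led_targets if f not in lcd_freqs]
print(unreachable)  # 7, 8 and 9 Hz have no exact 60 Hz-refresh equivalent
```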
Inter-stimulus distance significantly influences SSVEP performance, with greater distances improving classification accuracy [24]. Whilst lower and medium frequencies yield higher signal-to-noise ratios, they can induce visual fatigue during extended use, compromising system accuracy [19]. Higher frequencies offer enhanced comfort but potentially reduced SNR. Regarding colour, green light minimises eye strain and maintains high ITR during prolonged use, whereas red light poses risks for photosensitive epileptic seizures [25].
Conversely, the P300 component manifests as a positive deflection in event-related potentials (ERPs) of human cerebral activity, occurring approximately 300 ms subsequent to the presentation of a stochastic stimulus [26,27,28]. This endogenous potential, predominantly observed in the parietal cortex, represents the cognitive processing of contextually significant stimuli within a sequence of standard events. This neurophysiological response enables BCI applications to exploit the temporal characteristics of the P300 event to deduce user intent based on the precise latency of the positive deflection. This temporal specificity facilitates the development of robust classification algorithms for real-time intent detection [29]. The P300 paradigm demonstrates information transfer rates comparable to SSVEP systems whilst requiring abbreviated training intervals, making it particularly suitable for applications demanding rapid user adaptation and consistent performance metrics [30].
Hybrid BCIs combine multiple input methods or paradigms to boost performance, integrating EEG with other physiological signals, utilising multi-sensory approaches, or merging different EEG patterns. These systems overcome individual paradigm limitations, enhancing accuracy, reliability, ITR, and user performance whilst reducing false positives. However, they incur higher costs and greater operational complexity [31].
Scientific investigations have demonstrated that BCI systems employing singular EEG paradigms exhibit diminished accuracy compared to hybrid implementations, owing to potential user incompatibility and susceptibility to erroneous signal classification [32,33]. This limitation arises from the inherent variability in individual neurophysiological responses and the complex nature of brain signal patterns. Hybrid architectures, which integrate multiple neurophysiological paradigms such as SSVEP and P300 responses, demonstrate enhanced capability in discriminating distinct cognitive intentions, whilst simultaneously reducing response latency and elevating information transfer rates. Notably, Bai et al. [34] achieved 94.29% accuracy and 28.64 bits/min ITR in their speller system, whilst Kapgate et al. [12,35] demonstrated strong accuracy and viability in virtual reality gaming and avatar control applications. The synergistic integration of complementary paradigms facilitates robust signal processing and classification methodologies. These improved performance metrics, encompassing accuracy, speed, and reliability, underscore the advantages of multi-paradigmatic approaches in contemporary BCI system design, particularly for applications requiring precise user intent detection and reliable command execution.
This research developed and tested a hybrid SSVEP + P300 BCI system for improved classification accuracy and reliability. The system utilised a portable dual-stimulus design, enabling sequential validation of user intention. Primary classification employed Power Spectral Density analysis for SSVEP frequency identification, whilst P300 event markers provided secondary verification to minimise false positives. The design prioritised computational efficiency and practical usability, making it suitable for real-world applications requiring robust command verification.

2. Materials and Methods

2.1. Hardware Design

The visual stimuli comprised a geometrically optimised array of eight light-emitting diodes (LEDs), specifically engineered to maximise visual-evoked potential amplitude and signal quality. The primary stimulation elements consisted of four radially arranged green COB (Chip on Board) LEDs, each with a diameter of 80 mm (wavelength: 520–530 nm), selected for SSVEP elicitation due to the heightened photoreceptor sensitivity and superior cortical response [25,36,37]. Four high-power 1-watt red LEDs (wavelength: 620–625 nm) were concentrically positioned within the COB array to facilitate P300 event-related potential responses.
The precision control architecture was implemented via a Teensy 3.2 microcontroller (PJRC, Sherwood, Portland, OR, USA; sourced from Mouser UK, Buckinghamshire, UK), featuring an ARM Cortex-M4 processor operating at a 72 MHz clock frequency. The system employed a multithreaded architecture to generate precisely timed parallel outputs, enabling the simultaneous control of four LEDs at distinct frequencies (7 Hz, 8 Hz, 9 Hz, and 10 Hz) for SSVEP elicitation, as shown in Figure 2. Each frequency was generated with a base timing resolution of 13.89 nanoseconds (one 72 MHz clock cycle), ensuring precise phase control and temporal stability crucial for reliable steady-state visual-evoked responses. The multithreaded architecture ensured deterministic timing through independent thread execution, maintaining precise frequency generation and phase relationships between stimuli, which are essential for optimal SSVEP response discrimination.
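The firmware itself runs in C/C++ on the Teensy and is not reproduced in the paper; the following Python sketch merely mirrors the described multithreaded logic, with one thread per LED toggling a 50% duty-cycle square wave at its target frequency. The hardware write is stubbed out and the structure is illustrative only.

```python
# Host-side Python analogue of the firmware's multithreaded flicker control:
# one thread per LED, each toggling its output every half period.
import threading
import time

def flicker(led_id: int, freq_hz: float, stop: threading.Event) -> None:
    half_period = 1.0 / (2.0 * freq_hz)     # seconds between on/off transitions
    state = False
    next_toggle = time.perf_counter()
    while not stop.is_set():
        state = not state
        # set_led(led_id, state)  # hardware write on the real device (stub)
        next_toggle += half_period
        time.sleep(max(0.0, next_toggle - time.perf_counter()))

stop = threading.Event()
threads = [threading.Thread(target=flicker, args=(i, f, stop), daemon=True)
           for i, f in enumerate([7.0, 8.0, 9.0, 10.0])]
for t in threads:
    t.start()
time.sleep(2.0)   # run for one 2 s stimulation epoch
stop.set()
```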
For P300 event-related potential elicitation, four red LEDs were programmed with a pseudorandom stimulus presentation protocol, with each LED generating a single flash at random intervals within a 2000 ms epoch. Each LED flash event was temporally marked through ASCII character transmission (‘o’, ‘p’, ‘q’, ‘r’) via serial communication (baud rate: 9600 bits/s), enabling precise temporal synchronisation between stimulus onset and electroencephalographic data acquisition. This configuration enabled the precise temporal marking of stimulus events for subsequent P300 component extraction and analysis, with each marker uniquely identifying the corresponding LED stimulus source. Figure 3 illustrates the SSVEP and P300 hybrid stimuli.
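A minimal sketch of this pseudorandom flash-and-marker protocol is given below, assuming a 100 ms flash duration (not specified in the text) and printing markers rather than writing them to the 9600 baud serial link.

```python
# Pseudorandom P300 flash schedule: within each 2000 ms epoch, every red LED
# flashes exactly once at a random onset, and its ASCII marker is emitted at
# flash onset for synchronisation with the EEG stream.
import random
import time

MARKERS = {0: 'o', 1: 'p', 2: 'q', 3: 'r'}  # LED index -> event marker
EPOCH_S = 2.0
FLASH_S = 0.1  # assumed flash duration; not stated in the paper

def run_epoch() -> None:
    onsets = sorted((random.uniform(0.0, EPOCH_S - FLASH_S), led)
                    for led in MARKERS)
    start = time.perf_counter()
    for onset, led in onsets:
        time.sleep(max(0.0, onset - (time.perf_counter() - start)))
        print(f"{time.perf_counter() - start:.3f}s  marker {MARKERS[led]!r}")

run_epoch()
```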
Serial data transmission between the Teensy microcontroller and the host computer was implemented via an FTDI FT232R USB-to-UART interface controller, sourced from Mouser UK, achieving reliable event marker transmission with a latency of less than 1 ms. This FTDI-based serial interface enabled precise temporal synchronisation between stimulus presentation events and EEG data acquisition through microsecond-level temporal resolution, essential for accurate event-related potential analysis.

2.2. Signal Acquisition and Processing

EEG signal acquisition was performed using a g.tec Unicorn Hybrid Black (www.gtec.at) wireless amplification system (sampling rate: 250 Hz, resolution: 24-bit). Six electrodes were positioned according to the International 10–20 system, comprising three midline locations: frontal (Fz), central (Cz), and parietal (Pz); two parieto-occipital sites: left (PO7) and right (PO8); and midline occipital (Oz). Electrode-scalp impedances were maintained below 5 kΩ through the application of conductive gel. The EEG was recorded with reference to the left mastoid electrode and a ground electrode positioned at AFz. Signal acquisition utilised wireless data transmission. A schematic representation of the complete hardware, including the visual stimuli, data acquisition hardware, and robotic control system, is presented in Figure 4. The hybrid stimuli implementation workflow is shown in Figure 5.
Signal processing protocols incorporated sequential filtering operations: initially, a 50 Hz notch filter was applied to eliminate power line interference from the raw EEG data. Subsequently, signal-specific filtering was implemented: SSVEP data underwent bandpass filtering (6.5–30 Hz, Butterworth, 4th order), whilst P300 data were processed using a 15 Hz low-pass filter (Butterworth, 4th order). For SSVEP feature extraction, power spectral density (PSD) estimation was performed using Welch’s method (Hamming window, 50% overlap), with maximum amplitude values identified at the target frequencies (7, 8, 9, and 10 Hz). P300 response analysis involved the temporal alignment of event markers with their corresponding timestamps, followed by peak detection within a 290–500 ms post-stimulus window. This window was selected to encompass the characteristic P300 component latency range for visual stimuli.
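The following scipy-based sketch mirrors the stated filter and feature-extraction parameters; the Welch segment length, channel handling, and helper names are assumptions rather than reported implementation details.

```python
# Minimal sketch of the described pipeline: 50 Hz notch, 4th-order Butterworth
# band-pass (6.5-30 Hz) for SSVEP, 4th-order 15 Hz low-pass for P300, Welch
# PSD (Hamming window, 50% overlap), and a 290-500 ms post-stimulus peak search.
import numpy as np
from scipy import signal

FS = 250  # Hz, Unicorn Hybrid Black sampling rate

def preprocess(eeg: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    b, a = signal.iirnotch(w0=50.0, Q=30.0, fs=FS)  # mains interference removal
    eeg = signal.filtfilt(b, a, eeg)
    sos_ssvep = signal.butter(4, [6.5, 30.0], 'bandpass', fs=FS, output='sos')
    sos_p300 = signal.butter(4, 15.0, 'lowpass', fs=FS, output='sos')
    return signal.sosfiltfilt(sos_ssvep, eeg), signal.sosfiltfilt(sos_p300, eeg)

def ssvep_feature(x: np.ndarray) -> float:
    nper = 2 * FS  # 2 s Welch segments (assumed, not stated)
    f, pxx = signal.welch(x, fs=FS, window='hamming',
                          nperseg=nper, noverlap=nper // 2)
    targets = [7.0, 8.0, 9.0, 10.0]
    powers = [pxx[np.argmin(np.abs(f - t))] for t in targets]
    return targets[int(np.argmax(powers))]  # frequency with maximal PSD

def p300_peak(epoch: np.ndarray, onset_idx: int) -> float:
    lo, hi = onset_idx + int(0.29 * FS), onset_idx + int(0.50 * FS)
    return float(np.max(epoch[lo:hi]))  # peak amplitude, 290-500 ms window
```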

2.3. Methodological Validation of Experimental Design

Study participants were recruited according to predefined inclusion and exclusion criteria. The inclusion criteria specified individuals aged 18 years or above with no prior BCI experience. Exclusion criteria comprised any history or diagnosis of photosensitive epilepsy, ensuring participant safety during visual stimulation protocols. The final cohort comprised 12 participants (7 female, 5 male; mean age = 21.0 years), all with normal or corrected-to-normal vision.
The visual stimulation apparatus was positioned at a fixed distance of 60 cm from the participant’s nasion, corresponding to a visual angle of approximately 5° for each LED. Participants were instructed to maintain a fixed gaze on individual LEDs sequentially, with transitions between stimuli guided by auditory cues. Five experimental trials were conducted per participant, with mandatory five-minute rest intervals between trials to minimise visual fatigue and maintain optimal attention levels. All trials were conducted under controlled ambient illumination (approximately 250 lux) to ensure consistent visual stimulus contrast.
All experimental procedures adhered to the ethical principles established by the World Medical Association’s Declaration of Helsinki (2013) for human participant research. The study protocol, participant information sheets, and consent documentation received formal approval from the Research Ethics Committee at Aston University. All participants were provided with comprehensive written and verbal information regarding the experimental procedures, and written informed consent was obtained before study participation.
Output control implementation utilised a LEGO® MINDSTORMS® EV3 robotic platform, with directional navigation (forward, backward, left, and right) determined by the processed EEG signals. Control decisions were based on two criteria: maximum amplitude detection in the SSVEP frequency spectrum and P300 event-related potential peak identification within the designated temporal window (290–500 ms post-stimulus). Upon successful feature extraction and classification, command signals were transmitted to the EV3 robotic platform via Bluetooth protocol. The mapping between extracted neurophysiological features and corresponding robotic directional commands is presented in Table 1. Real-time auditory feedback was implemented through the EV3’s integrated speaker system, with successful command execution indicated by a single auditory pulse (frequency: 1 kHz, duration: 200 ms) and failed command execution signalled by a double auditory pulse (frequency: 1 kHz, duration: 200 ms, inter-pulse interval: 100 ms).
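A compact sketch of this dual-criterion decision logic, using the frequency-to-marker-to-command mapping of Table 1, is shown below; the function names and the suppression rule when no P300 peak is found are illustrative, and the Bluetooth transport to the EV3 is stubbed out.

```python
# Dual-criterion command gate: a command fires only when the dominant SSVEP
# frequency and the P300 event marker agree on the same direction (Table 1).
COMMAND_MAP = {
    (7.0, 'o'): 'forward',
    (8.0, 'p'): 'right',
    (9.0, 'q'): 'backward',
    (10.0, 'r'): 'left',
}

def classify(dominant_freq: float, p300_marker: str, p300_detected: bool):
    """Return a robot command only if both modalities corroborate."""
    if not p300_detected:
        return None  # secondary verification failed: suppress the output
    return COMMAND_MAP.get((dominant_freq, p300_marker))

cmd = classify(9.0, 'q', p300_detected=True)
print(cmd)  # 'backward' -> would be sent to the EV3 over Bluetooth
```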

3. Results

To validate the frequency detection algorithms, power spectral density (PSD) analysis was performed on the EEG data during isolated visual stimulation from each LED. Spectral analysis was conducted using Welch’s periodogram method to identify the dominant frequency components in the EEG signal. Maximum amplitude values were extracted within frequency bands of interest (7 ± 0.5 Hz, 8 ± 0.5 Hz, 9 ± 0.5 Hz, and 10 ± 0.5 Hz) to verify the accurate detection of the stimulus frequencies and validate the signal processing pipeline. The resultant power spectral density distributions and frequency-specific response characteristics are presented in Figure 6 and Figure 7. Additionally, Figure 8 presents the P300 event marker alongside its corresponding timestamp on an EEG stream.
System performance validation was conducted through a quantitative assessment of directional command accuracy across all trials and participants. Command accuracy was defined as the successful correlation between the participant’s intended directional input (indicated by focused attention on the corresponding LED stimulus) and the resultant robotic platform movement. For each participant (n = 12), directional accuracy metrics were computed per trial (5 trials) for all four navigational commands (forward, backward, left, and right). A successful trial was recorded when the EV3 robotic platform executed the correct directional movement corresponding to the participant’s attended LED stimulus. The classification accuracy was calculated as the ratio of successful commands to total command attempts, expressed as a percentage.
The comprehensive analysis of system performance, comprising directional command accuracies across five experimental trials for all participants (n = 12), including individual and mean performance metrics with corresponding standard deviations, is summarised in Table 2. An analysis of individual participant performance metrics, as illustrated in Figure 9, reveals notable inter-subject variability in classification accuracy.
The data demonstrate that participants S9 and S11 achieved superior performance (>95% accuracy), whilst the remaining participants exhibited accuracies ranging between 75% and 90%. This heterogeneity in performance may be attributed to several factors, including variations in individual photoreceptor sensitivity, cognitive attention levels, and neurophysiological response characteristics. Notably, all participants (n = 12) maintained classification accuracies above the established 70% threshold criterion for practical BCI implementations, substantiating the robustness of the dual-mode paradigm across a diverse user cohort. These findings suggest that individual differences in visual processing and cognitive engagement significantly influence BCI performance outcomes.
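For context on the headline figures, the snippet below evaluates the standard Wolpaw ITR formula at the reported mean accuracy (86.25%, N = 4 targets). The per-selection time is not stated in the text, so it is left as a parameter here: roughly 1.7 s reproduces the reported 42.08 bits per minute, whereas the 2 s stimulation epoch gives about 36.

```python
# Wolpaw ITR for an N-target system with accuracy P and selection time T.
from math import log2

def wolpaw_itr(p: float, n: int, t_sec: float) -> float:
    bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * 60.0 / t_sec  # bits per minute

print(f"{wolpaw_itr(0.8625, 4, 1.72):.2f} bpm")  # ~42 bpm (assumed T)
print(f"{wolpaw_itr(0.8625, 4, 2.00):.2f} bpm")  # ~36 bpm (2 s epoch)
```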

4. Discussion

The empirical findings substantiate the effectiveness of the proposed dual-modality visual stimulation hardware, which integrates steady-state visual-evoked potentials and P300 event-related potentials. The system achieved a mean classification accuracy of 86.25% across the participant cohort (n = 12), markedly surpassing the established 70% threshold criterion conventionally employed in brain–computer interface implementations. Spectral analysis via power density estimation validated the precise generation and detection of the four designated stimulation frequencies (7 Hz, 8 Hz, 9 Hz, and 10 Hz). Oscilloscopic measurements demonstrated negligible frequency deviation ranging from 0.15% to 0.20%, substantiating the robustness of the hardware architecture. The quantitative assessment revealed that forward and backward commands (7 Hz and 9 Hz) showed marginally higher accuracy than left and right directional controls, as shown in Figure 10.
The observed 100% accuracy for forward and backward movements, contrasted with variations in left and right accuracy, likely stemmed from the visual stimulus positioning and viewing angles. The experimental setup placed stimuli 60 cm from users, with 8.9° vertical viewing angles for the vertical LEDs (forward/backward movements) and 13.4° horizontal angles (left/right movements). Whilst the design considered peripheral vision and aligned with research indicating optimal reduction of stimulus frequency interference between 4° and 13° [38], the asymmetry in horizontal viewing angles may have introduced perceptual variations. These findings suggest scope for optimisation through refined stimulus placement and systematic investigation of how subtle angular variations affect BCI control performance across directional movements.
The temporal progression of classification accuracy, as illustrated in Figure 11, reveals a noteworthy pattern wherein performance metrics demonstrate consistent improvement from session 1 (79.17%) through session 4 (91.67%), followed by a marginal decline in session 5 (89.58%). This observed degradation in performance during the final session, despite implemented rest intervals, suggests the manifestation of visual fatigue, a known phenomenon in SSVEP-based paradigms. This deterioration is likely attributable to prolonged exposure to repetitive visual stimulation, manifesting in decreased photoreceptor sensitivity and diminished cortical responses. Notably, despite these constraints, all participants (n = 12) maintained classification accuracies above the established 70% threshold criterion for practical BCI implementations, substantiating the robustness of the dual-mode paradigm across a diverse user cohort. These findings underscore the critical importance of optimising session durations and rest protocols in practical BCI implementations, particularly for applications requiring extended periods of user engagement. Future investigations would benefit from incorporating physiological markers of visual fatigue and implementing adaptive stimulus parameters to maintain consistent performance across extended operational periods.

5. Conclusions

This investigation has successfully demonstrated the efficacy of a novel dual-mode visual stimulation system that integrates SSVEP and P300 responses for brain–computer interface applications. The system achieved a mean classification accuracy of 86.25% across participants, notably exceeding the conventional 70% threshold for practical BCI implementations, whilst maintaining minimal frequency deviation (0.15–0.20%) in stimulus generation. The technical validation confirmed the successful integration of LED-based SSVEP stimulation (7–10 Hz) with P300 event markers, supported by robust signal processing for concurrent feature extraction. An analysis of user performance revealed consistent improvement across initial sessions, though the impact of visual fatigue became apparent in extended use scenarios. Despite these fatigue effects, the system maintained acceptable accuracy levels throughout testing. Future development should focus on implementing adaptive stimulus parameters, optimising session durations and rest protocols, incorporating physiological markers of fatigue, and enhancing system robustness for extended operational periods. These results demonstrate that this hybrid approach offers a promising direction for practical BCI applications, particularly in assistive technology and device control scenarios, providing enhanced reliability and accuracy compared to single-modality systems whilst maintaining user accessibility.

Author Contributions

S.M. conceptualised the idea, with E.K. leading the work. S.M. designed the experimental hardware with E.K.’s assistance. E.K. collected the data. S.M. and E.K. analysed and interpreted the data. All authors contributed to writing the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Partnership grant (grant number EP/W5245661/1).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, J.; Li, J.; Huang, Z.; Huang, D.; Yu, H.; Li, Z. Recent Progress in Wearable Brain–Computer Interface (BCI) Devices Based on Electroencephalogram (EEG) for Medical Applications: A Review. Health Data Sci. 2023, 3, 0096. [Google Scholar] [CrossRef]
  2. Mane, R.; Chouhan, T.; Guan, C. BCI for stroke rehabilitation: Motor and beyond. J. Neural Eng. 2020, 17, 041001. [Google Scholar] [CrossRef] [PubMed]
  3. Kawala-Sterniuk, A.; Browarska, N.; Al-Bakri, A.; Pelc, M.; Zygarlicki, J.; Sidikova, M.; Martinek, R.; Gorzelanczyk, E.J. Summary of over Fifty Years with Brain-Computer Interfaces—A Review. Brain Sci. 2021, 11, 43. [Google Scholar] [CrossRef]
  4. Yadav, H.; Maini, S. Electroencephalogram based brain-computer interface: Applications, challenges, and opportunities. Multimed. Tools Appl. 2023, 82, 47003–47047. [Google Scholar] [CrossRef] [PubMed]
  5. Mouli, S.; Palaniappan, R.; Molefi, E.; McLoughlin, I. In-Ear Electrode EEG for Practical SSVEP BCI. Technologies 2020, 8, 63. [Google Scholar] [CrossRef]
  6. Aljalal, M.; Ibrahim, S.; Djemal, R.; Ko, W. Comprehensive review on brain-controlled mobile robots and robotic arms based on electroencephalography signals. Intell. Serv. Robot. 2020, 13, 539–563. [Google Scholar] [CrossRef]
  7. Janapati, R.; Dalal, V.; Sengupta, R. Advances in modern EEG-BCI signal processing: A review. Mater. Today Proc. 2023, 80, 2563–2566. [Google Scholar] [CrossRef]
  8. Mondini, V.; Mangia, A.L.; Cappello, A. EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures. Comput. Intell. Neurosci. 2016, 2016, 4562601. [Google Scholar] [CrossRef]
  9. Müller-Putz, G.; Schwarz, A.; Pereira, J.; Ofner, P.; Hessing, B.; Schneiders, M.; Stein, S.; Ramsay, A.; Williamson, J.H.; Murray-Smith, R.; et al. Non-invasive Brain–Computer Interfaces for Control of Grasp Neuroprosthesis: The European MoreGrasp Initiative. In Neuroprosthetics and Brain-Computer Interfaces in Spinal Cord Injury: A Guide for Clinicians and End Users; Müller-Putz, G., Rupp, R., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 307–352. [Google Scholar] [CrossRef]
  10. Rezeika, A.; Benda, M.; Stawicki, P.; Gembler, F.; Saboor, A.; Volosyak, I. Brain–Computer Interface Spellers: A Review. Brain Sci. 2018, 8, 57. [Google Scholar] [CrossRef]
  11. Mai, X.; Ai, J.; Ji, M.; Zhu, X.; Meng, J. A hybrid BCI combining SSVEP and EOG and its application for continuous wheelchair control. Biomed. Signal Process. Control 2024, 88, 105530. [Google Scholar] [CrossRef]
  12. Kapgate, D.D. Application of hybrid SSVEP + P300 brain computer interface to control avatar movement in mobile virtual reality gaming environment. Behav. Brain Res. 2024, 472, 115154. [Google Scholar] [CrossRef] [PubMed]
  13. Wang, M.; Daly, I.; Allison, B.Z.; Jin, J.; Zhang, Y.; Chen, L.; Wang, X. A new hybrid BCI paradigm based on P300 and SSVEP. J. Neurosci. Methods 2015, 244, 16–25. [Google Scholar] [CrossRef] [PubMed]
  14. Pan, J.; Wang, L.; Huang, H.; Xiao, J.; Wang, F.; Liang, Q.; Xu, C.; Li, Y.; Xie, Q. A Hybrid Brain–Computer Interface Combining P300 Potentials and Emotion Patterns for Detecting Awareness in Patients with Disorders of Consciousness. IEEE Trans. Cogn. Dev. Syst. 2023, 15, 1386–1395. [Google Scholar] [CrossRef]
  15. Shi, M.; Liu, X.; Zhou, C.; Chao, F.; Liu, C.; Jiao, X.; An, Y.; Nwachukwu, S.E.; Jiang, M. Towards portable SSVEP-based brain-computer interface using Emotiv EPOC and mobile phone. In Proceedings of the 2018 Tenth International Conference on Advanced Computational Intelligence (ICACI), Xiamen, China, 29–31 March 2018; pp. 249–253. [Google Scholar] [CrossRef]
  16. Kancaoğlu, M.; Kuntalp, M. Low-cost, mobile EEG hardware for SSVEP applications. HardwareX 2024, 19, e00567. [Google Scholar] [CrossRef] [PubMed]
  17. Cohen, M.X. Analyzing Neural Time Series Data; The MIT Press: Cambridge, MA, USA, 2014. [Google Scholar] [CrossRef]
  18. Ladouce, S.; Darmet, L.; Torre Tresols, J.J.; Velut, S.; Ferraro, G.; Dehais, F. Improving user experience of SSVEP BCI through low amplitude depth and high frequency stimuli design. Sci. Rep. 2022, 12, 8865. [Google Scholar] [CrossRef]
  19. Siribunyaphat, N.; Punsawad, Y. Steady-State Visual-Evoked Potential-Based Brain–Computer Interface Using a Novel Visual Stimulus with Quick Response (QR) Code Pattern. Sensors 2022, 22, 1439. [Google Scholar] [CrossRef]
  20. Guger, C.; Allison, B.Z.; Großwindhager, B.; Prückl, R.; Hintermüller, C.; Kapeller, C.; Bruckner, M.; Krausz, G.; Edlinger, G. How Many People Could Use an SSVEP BCI? Front. Neurosci. 2012, 6, 169. [Google Scholar] [CrossRef]
  21. Chin, S.S.; Mah, W.L.; Mok, S.Y.; Ng, D.W.K.; Tan, L.F.; Tan, Y.Q.; Ramli, N.; Goh, K.J.; Goh, S.Y. Age-dependent changes in steady-state visual-evoked potentials. Neurol. Asia 2022, 27, 745–752. [Google Scholar] [CrossRef]
  22. Liu, Q.; Chen, K.; Ai, Q.; Xie, S. Review: Recent Development of Signal Processing Algorithms for SSVEP-based Brain Computer Interfaces. J. Med. Biol. Eng. 2014, 34, 299–309. [Google Scholar] [CrossRef]
  23. Wang, S.; Ji, B.; Shao, D.; Chen, W.; Gao, K. A Methodology for Enhancing SSVEP Features Using Adaptive Filtering Based on the Spatial Distribution of EEG Signals. Micromachines 2023, 14, 976. [Google Scholar] [CrossRef]
  24. Gao, Y.; Ravi, A.; Jiang, N. Does Inter-Stimulus Distance Influence the Decoding Performance of SSVEP and SSMVEP BCI? In Proceedings of the 10th International IEEE/EMBS Conference on Neural Engineering (NER), Virtual, 4–6 May 2021; pp. 507–510. [Google Scholar] [CrossRef]
  25. Duart, X.; Quiles, E.; Suay, F.; Chio, N.; García, E.; Morant, F. Evaluating the Effect of Stimuli Color and Frequency on SSVEP. Sensors 2020, 21, 117. [Google Scholar] [CrossRef] [PubMed]
  26. Havaei, P.; Zekri, M.; Mahmoudzadeh, E.; Rabbani, H. An efficient deep learning framework for P300 evoked related potential detection in EEG signal. Comput. Methods Programs Biomed. 2023, 229, 107324. [Google Scholar] [CrossRef]
  27. Bianchi, L.; Liti, C.; Liuzzi, G.; Piccialli, V.; Salvatore, C. Improving P300 Speller performance by means of optimization and machine learning. Ann. Oper. Res. 2021, 312, 1221–1259. [Google Scholar] [CrossRef]
  28. Delijorge, J.; Mendoza-Montoya, O.; Gordillo, J.L.; Caraza, R.; Martinez, H.R.; Antelis, J.M. Evaluation of a P300-Based Brain-Machine Interface for a Robotic Hand-Orthosis Control. Front. Neurosci. 2020, 14, 589659. [Google Scholar] [CrossRef] [PubMed]
  29. Xiao, X.; Xu, M.; Wang, Y.; Jung, T.P.; Ming, D. A comparison of classification methods for recognizing single-trial P300 in brain-computer interfaces. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 3032–3035. [Google Scholar] [CrossRef]
  30. Joshi, R.K.; S, M.K.; S, H.R.; Jayachandra, M.; Pandya, H.J. Design, Development and Validation of a Portable Visual P300 Event-Related Potential Extraction System. In Proceedings of the 2022 IEEE Biomedical Circuits and Systems Conference (BioCAS), Taipei, Taiwan, 13–15 October 2022; pp. 409–413. [Google Scholar] [CrossRef]
  31. Choi, I.; Rhiu, I.; Lee, Y.; Yun, M.H.; Nam, C.S. A Systematic Review of Hybrid Brain–Computer Interfaces: Taxonomy and Usability Perspectives. PLoS ONE 2017, 12, e0176674. [Google Scholar] [CrossRef]
  32. Han, Y.; Park, S.; Ha, J.; Kim, L. Hybrid approach of SSVEP and EEG-based eye-gaze tracking for enhancing BCI performance. In Proceedings of the 2023 11th International Winter Conference on Brain-Computer Interface (BCI), Jeongseon, Republic of Korea, 20–22 February 2023; pp. 1–4. [Google Scholar] [CrossRef]
  33. Olesen, S.D.T.; Das, R.; Olsson, M.D.; Khan, M.A.; Puthusserypady, S. Hybrid EEG-EOG-based BCI system for Vehicle Control. In Proceedings of the 2021 9th International Winter Conference on Brain-Computer Interface (BCI), Jeongseon, Republic of Korea, 22–24 February 2021; pp. 1–6. [Google Scholar] [CrossRef]
  34. Bai, X.; Li, M.; Qi, S.; Ng, A.C.M.; Ng, T.; Qian, W. A Hybrid P300-SSVEP Brain–Computer Interface Speller with a Frequency Enhanced Row and Column Paradigm. Front. Neurosci. 2023, 17, 1133933. [Google Scholar] [CrossRef] [PubMed]
  35. Kapgate, D.D. Hybrid SSVEP + P300 Brain–Computer Interface Can Deal with Non-Stationary Cerebral Responses with the Use of Adaptive Classification. J. Neurorestoratology 2024, 12, 100109. [Google Scholar] [CrossRef]
  36. Tantisatirapong, S.; Dechwechprasit, P.; Senavongse, W.; Phothisonothai, M. Time-frequency based coherence analysis of red and green flickering visual stimuli for EEG-controlled applications. In Proceedings of the 2017 9th International Conference on Knowledge and Smart Technology (KST), Chonburi, Thailand, 1–4 February 2017; pp. 279–283. [Google Scholar] [CrossRef]
  37. Mouli, S.; Palaniappan, R.; Sillitoe, I.P.; Gan, J.Q. Performance analysis of multi-frequency SSVEP-BCI using clear and frosted colour LED stimuli. In Proceedings of the 13th IEEE International Conference on BioInformatics and BioEngineering, Chania, Greece, 10–13 November 2013; pp. 1–4. [Google Scholar] [CrossRef]
  38. Zambalde, E.P.; Borges, L.R.; Jablonski, G.; Barros de Almeida, M.; Naves, E.L.M. Influence of Stimuli Spatial Proximity on a SSVEP-Based BCI Performance. IRBM 2022, 43, 621–627. [Google Scholar] [CrossRef]
Figure 1. Fundamental components of BCI system.
Figure 2. Oscilloscopic validation of LED stimuli frequencies.
Figure 3. SSVEP and P300 event markers for hybrid stimuli.
Figure 4. Prototype BCI real-time system for robot direction control.
Figure 5. Hybrid stimuli implementation workflow.
Figure 6. Power spectral density analysis of 7 Hz and 8 Hz.
Figure 7. Power spectral density analysis of 9 Hz and 10 Hz.
Figure 8. P300 potential after event marker ‘q’ for backward direction.
Figure 9. BCI control accuracy across participants.
Figure 10. Average accuracy for each direction of control.
Figure 11. BCI control accuracy across sessions.
Table 1. Robot direction and corresponding features.

SSVEP Frequency (Hz) | P300 Event Marker | Robot Navigation
7                    | o                 | Forward
8                    | p                 | Right
9                    | q                 | Backward
10                   | r                 | Left
Table 2. BCI control performance data for all participants.

Participant 1                      Participant 2
Trial  F B L R  A (%)              Trial  F B L R  A (%)
1      1 1 1 0  75                 1      1 1 1 0  75
2      1 1 0 1  100                2      1 1 0 1  75
3      1 1 1 1  100                3      1 1 1 1  100
4      1 1 1 1  75                 4      1 1 1 1  100
5      1 1 1 1  100                5      1 1 1 0  75

Participant 3                      Participant 4
Trial  F B L R  A (%)              Trial  F B L R  A (%)
1      1 1 1 0  75                 1      1 1 1 0  75
2      1 1 0 1  75                 2      1 1 1 0  75
3      1 1 1 1  100                3      1 1 1 1  100
4      1 1 1 1  100                4      1 1 1 1  100
5      1 1 0 1  75                 5      1 1 0 1  75

Participant 5                      Participant 6
Trial  F B L R  A (%)              Trial  F B L R  A (%)
1      1 1 0 1  100                1      1 1 1 0  75
2      1 1 1 0  75                 2      1 1 1 1  100
3      1 1 1 1  75                 3      1 1 0 1  75
4      1 1 1 1  100                4      1 1 0 1  75
5      1 1 1 1  100                5      1 1 1 1  100

Participant 7                      Participant 8
Trial  F B L R  A (%)              Trial  F B L R  A (%)
1      1 1 1 0  75                 1      1 1 1 0  75
2      1 1 1 1  100                2      1 1 0 1  75
3      1 1 1 1  100                3      1 1 1 1  100
4      1 1 1 1  100                4      1 1 1 1  100
5      1 1 0 1  75                 5      1 1 1 1  100

Participant 9                      Participant 10
Trial  F B L R  A (%)              Trial  F B L R  A (%)
1      1 1 1 1  100                1      1 1 0 1  75
2      1 1 1 1  100                2      1 1 1 1  100
3      1 1 0 1  75                 3      1 1 1 1  100
4      1 1 1 0  75                 4      1 1 0 1  75
5      1 1 1 1  100                5      1 1 1 0  75

Participant 11                     Participant 12
Trial  F B L R  A (%)              Trial  F B L R  A (%)
1      1 1 1 1  100                1      1 1 1 0  75
2      1 1 0 1  75                 2      1 1 1 0  75
3      1 1 0 1  75                 3      1 1 1 1  100
4      1 1 1 1  100                4      1 1 1 0  100
5      1 1 1 1  100                5      1 1 1 1  100

F = Front, B = Back, L = Left, R = Right, and A = Accuracy. 1 indicates successful control, and 0 indicates unsuccessful control in the respective direction.
