Micromachines
  • Article
  • Open Access

8 November 2025

Intersession Robust Hybrid Brain–Computer Interface: Safe and User-Friendly Approach with LED Activation Mechanism

1 Department of Electronics and Automation, Gumushane University, Gumushane 29100, Turkey
2 Department of Software Engineering, Gumushane University, Gumushane 29100, Turkey
3 Department of Electrical and Electronics Engineering, Tokat Gaziosmanpasa University, Tokat 60100, Turkey
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Bioelectronics and Its Limitless Possibilities

Abstract

This study introduces a hybrid Brain–Computer Interface (BCI) system with a robust and secure activation mechanism that remains stable between sessions, aiming to minimize the negative effects of visual stimulus-based BCI systems on user eye health. The system is based on the integration of Electroencephalography (EEG) signals and Electrooculography (EOG) artefacts, and includes an LED stimulus flickering at 7 Hz for safe activation together with objects moving in different directions. While the LED functions as an activation switch that reduces the visual fatigue caused by traditional visual stimuli, the moving objects generate commands according to the user’s intention. To evaluate the stability of the system against changing physiological and psychological conditions, data were collected from 15 participants in two separate sessions. The Correlation Alignment (CORAL) method was applied to the data to reduce intersession variance and increase stability. A Bootstrap Aggregating (Bagging) algorithm was used for classification, and with the CORAL method the system accuracy rate increased from 81.54% to 94.29%. Compared to similar BCI approaches, the proposed system offers a safe activation mechanism that reduces visual fatigue and adapts effectively to users’ changing cognitive states throughout the day, despite using a low number of EEG channels, and it demonstrates its practicality by performing on par with or better than other systems in terms of accuracy and stability.

1. Introduction

Today, Brain–Computer Interface (BCI) systems are attracting intense attention due to their potential to help individuals with severe motor impairments by enabling control of peripheral devices through brain signals. BCI provides a communication or movement channel for individuals who have lost voluntary muscle control by translating brain signals into control commands. These systems aim to increase autonomy and quality of life by allowing individuals to perform tasks such as letter selection, device control, or wheelchair use with their brain activities [1]. In Electroencephalography (EEG)-based BCI studies, several widely adopted paradigms are employed during signal acquisition, including Visual Evoked Potentials (VEPs) [2], Motor Imagery (MI) [3], and P300 [4] responses. The P300 paradigm is characterized by a positive peak in the EEG signal that occurs approximately 300 to 600 milliseconds following the presentation of a task-relevant stimulus, such as a flashing light, a sound, or a specific visual cue. This component is typically elicited when the user consciously recognizes a stimulus, and is easily detectable through non-invasive EEG recordings. In contrast, the MI paradigm relies on the mental rehearsal of motor actions rather than actual physical movement. When a person imagines a specific motion like moving a hand or foot, distinct brainwave patterns are produced, which can be recorded and analyzed using EEG systems. These imagined movements generate neural signals in motor-related cortical areas, making MI a powerful technique for voluntary control in BCIs [5]. The VEP approach, on the other hand, involves presenting users with rhythmic visual stimuli such as blinking lights or oscillating patterns at specific frequencies. These stimuli induce synchronized voltage changes in the visual cortex [6]. When the stimulation frequency exceeds approximately 6 Hz, the response transitions into a Steady-State Visual Evoked Potential (SSVEP), where the brain’s electrical activity synchronizes with the frequency of the stimulus. SSVEP allows for rapid and reliable classification of user intent based on frequency-locked neural responses [7]. All of these paradigms generate EEG data that must be further processed through signal processing algorithms, enabling the system to classify and interpret user intentions effectively. Each paradigm has unique advantages and is selected based on application context, user capability, and system requirements. The basic BCI schematic is shown in Figure 1.
Figure 1. The basic scheme of BCI systems.
Although BCI systems were developed to benefit people with motor impairments, the paradigms they use impose cognitive load on the user, and the systems may experience control lapses due to the variable nature of EEG signals and their sensitivity to interference [8]. Hybrid BCI systems developed to overcome these limitations aim to increase performance, reliability, and availability by combining multiple signal sources [9]. While EEG is powerful in detecting mental intention, Electrooculography (EOG) offers fast and precise control with its high signal-to-noise ratio. The combination of these two signals provides an interaction that is both voluntary and natural. It has been reported in the literature that hybrid systems offer higher accuracy and flexibility than single BCI systems [10,11]. In particular, integrating EOG into the system increases resistance to signal interruptions and provides intuitive use with minimal training [12]. Thus, EEG–EOG-based hybrid BCI systems are promising for real-world applications by providing faster, more accurate, and more user-friendly solutions [13].
Eye movements frequently create EOG artefacts in EEG recordings, particularly at electrodes placed over the frontal cortex. These artefacts are considered non-neural interferences that can distort the interpretation of brain activity and potentially be misclassified as unwanted commands in BCI systems [14]. In applications where such an effect is undesirable, various filtering techniques are applied to suppress EOG components. However, in cases where eye activity is consciously monitored [15], these artefacts can be reused as informative signals. Movements such as blinks, vertical and horizontal gaze shifts, and eye closures can be detected and distinguished from EEG data. Once identified, these signals can be used as control inputs for applications such as BCI-based typing systems [16].
Visual stimulus-based BCI systems such as SSVEP and P300 have significant limitations in terms of usability, comfort, and eye health. Since these systems require focusing on flickering Light-Emitting Diodes (LEDs) for a long time, they can cause complaints such as eye fatigue, visual exhaustion, and headaches [17,18]. Visual stimuli flickering at certain frequencies have the potential to pose risks to eye health in individuals with light sensitivity. High-contrast stimuli, especially in the 15–25 Hz range, increase the risk of epileptic seizures in photosensitive individuals. Moreover, some people are sensitive even to single flashes or frequencies as high as 65 Hz, which raises concerns about the safety of such systems for certain user groups [19]. It has also been reported that repetitive stimuli cause dry eyes and loss of attention [20]. In traditional designs, emphasis is placed on system performance and Information Transfer Rate (ITR), while user experience remains in the background. In conclusion, although visual BCI systems based on flickering stimuli are effective in principle, their long-term use may be uncomfortable or unhealthy for users [21]. This clearly demonstrates the need for more comfortable stimulus approaches.
Another important challenge faced by BCI systems is that system performance varies between sessions. Factors such as the unstable nature of EEG signals, repositioning of electrodes, impedance change, user fatigue, or distraction change signal patterns between sessions. Therefore, a classifier trained on data obtained in one session may exhibit lower performance in subsequent sessions. This reduces the usability and reliability of the BCI system and, in practical terms, means that users must recalibrate before each use. Calibrating at each session is both laborious and makes continuous use difficult. From a user perspective, BCI behaviour is unpredictable from one session to the next, harming trust and usability. It is not acceptable for independent use if a system that works very well early in the day works poorly when used again later in the day [22]. Addressing variability across sessions and even across individuals is critical to moving any designed BCI system from laboratory settings to everyday life.
The BCI problems that this study focuses on are summarized as follows:
  • Flickering stimuli: Visual stimulus-based BCI systems may cause user discomfort such as eye fatigue, headache, and risk of epileptic seizures due to flickering visual stimuli [17]. This limits long-term use of the system, reduces the comfort level, and causes some users to be excluded for safety reasons.
  • Intersession instability: Intersession variability that occurs due to the unstable nature of EEG signals negatively affects system performance, and the need for recalibration at each session reduces the usability of the system [22]. This situation significantly limits the use of BCI systems in real life.
  • Reliability and accuracy: Single BCI systems are not reliable in terms of system security due to their dependence on EEG signals that vary frequently between sessions. EEG alone can experience control lapses due to its variable nature and sensitivity to interference [8]. This situation shows that there is a need for the existence of multi-source hybrid BCI systems.
  • System control and security: In BCI systems, activation is critical to prevent unintentional commands and safely initiate user control. Especially in real-world applications, it is important for user experience that not every signal is perceived as a command [13]. It is important to prevent random eye/muscle movements from being perceived as wrong commands in designed systems.
In this study, a two-stage hybrid BCI system was developed in which SSVEP and EOG artefacts were used together. The user is presented with a 7 Hz LED and objects moving in four different directions on a single screen. The system is structured in two stages to ensure conscious activation. In the first stage, it was detected whether the user produced an SSVEP response via the LED by looking at the screen. At this stage, the LED functioned as a “brain-controlled safety switch”. When the LED was detected, the second stage of the system was activated, thus greatly reducing the risk of incorrect commands. The 7 Hz frequency was strategically chosen to reduce the risk of triggering seizures in photosensitive individuals while still ensuring adequate SSVEP production. In the second stage of the system, the trajectory of the moving object that the user was looking at was determined using the EOG artefacts evident in the frontal lobe. Thus, the activation intention was detected via EEG, and the system output command intention was detected via EOG. In the first stage, proportional trapezoidal features were extracted in the frequency domain using Power Spectral Density (PSD). The feature data was classified by Random Forest (RF), Support Vector Machine (SVM), and Bootstrap Aggregating (Bagging) algorithms, and accuracy rates of 98.67%, 98.63%, and 99.12% were obtained, respectively. In the second classification stage, only samples with correct LED activation were evaluated; time-domain power, energy, and 20th-degree polynomial features were extracted from these signals. The feature data obtained in the time domain was classified with the RF, SVM, and Bagging algorithms, and average accuracy rates of 79.87%, 76.31%, and 81.54% were obtained, respectively. Then, the Correlation Alignment (CORAL) method was applied to the feature data in order to reduce the distribution differences between sessions. CORAL ensures statistical fit between source and target data by aligning covariance matrices [23]. With CORAL, the Bagging algorithm improved from an average accuracy rate of 81.54% to 94.29%, providing the best performance between sessions despite individual variations.
The main contributions that the designed hybrid BCI system aims to provide are presented below.
  • Visual comfort: The only flickering stimulus used in the system is the 7 Hz LED, and this frequency lies outside the high-risk 15–25 Hz range [19], making it a comparatively safe choice. All other stimuli in the system are motion-based and do not flicker. This largely eliminates the visual fatigue problem that visual stimulus-based systems cause in users.
  • Intersession stability: In the designed system, the dataset was recorded in two different sessions. By applying the CORAL method to the recorded data, differences between sessions were minimized and the system was aimed to have a stable structure between sessions.
  • Reliability and accuracy: The designed BCI system benefits from the high signal clarity of EOG, as opposed to the unstable structure of EEG signals. By processing eye movements with high precision, complex orbits can be classified accurately by the system. The overall effect is a hybrid BCI system that is both user-friendly and reliable.
  • System security and activation: In the designed hybrid system, the fact that SSVEP and EOG require approval together via 7 Hz LED provides a natural security control. Since the system switches to control mode only after the safe activation of the first stage has occurred, the user can freely look or blink while the LED is not active; there is no risk of making an unintentional choice.
In this study, it was aimed to provide robustness against intersession performance changes in order to increase the adaptability and generalizability of the previously proposed hybrid BCI system [24] to real-world conditions. The designed system was tested in two different time periods (morning and evening), and the CORAL-based domain adaptation method was used. In this context, the need for recalibration of the system was reduced and not only the instantaneous accuracy rates but also the time-varying cognitive and physiological states of the users were evaluated. In this respect, a system closer to real-world applications has been presented. In addition, consistent results obtained by increasing the number of participants in the study showed that the system can be adapted to different user profiles. Presenting all visual stimuli on a single screen facilitated the integration of the system into real life and increased user ergonomics. In addition, the system, which was tested using fewer features and data with the same number of channels, has been made suitable for portable applications with low resource requirements. All these elements have revealed that the system can maintain high performance, reliability, and ease of use despite its simplified structure.

3. Materials and Methods

3.1. EEG Device

Studies show that Emotiv (EMOTIV Inc., San Francisco, CA, USA), Quik-Cap (Compumedics Neuroscan, Abbotsford, Victoria, Australia), and MindWave (NeuroSky Inc., San Jose, CA, USA) EEG devices are frequently used; among these, Emotiv is often preferred due to its low cost, sufficient number of channels, and ease of use. The Emotiv Flex EEG device used in the study has 32 channels and 2 reference electrodes, and the electrode positions comply with the international 10/20 system. Offering approximately 9 h of uninterrupted use with its wireless structure and rechargeable battery, the device provides a 128–256 Hz sampling rate and 16–32 bit resolution [41].

3.2. Participants

This study was conducted with the participation of 15 healthy subjects (9 males and 6 females) between the ages of 20 and 30 (avg. 22.5), without any addiction or chronic disease. The participants were informed in detail about the study and were included in the study by filling out the informed consent form. The experiments were conducted with the approval of the ethics committee of Trabzon Kanuni Education and Research Hospital numbered 23618724.

3.3. Normalization

Normalization is a fundamental pre-processing step that directly affects model performance by making data at different scales comparable [42]. In this study, Z-score normalization was used in order to reduce the amplitude and signal-to-noise ratio (SNR) differences that may occur between two sessions. Z-score is a standardization method that expresses the distance of each data point from the mean in its distribution in terms of standard deviation and is calculated with Equation (1).
$z = \frac{x - \mu}{\sigma}$  (1)
In Equation (1), $x$ represents the data point, $\mu$ the mean of the dataset, and $\sigma$ its standard deviation [43].
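As a concrete illustration, a minimal Matlab sketch of Equation (1) applied channel-wise to an EEG segment is given below; the matrix dimensions and variable names are illustrative, not taken from the study’s implementation.

X  = randn(768, 4);           % placeholder: one 3 s segment at 256 Hz, 4 channels
Xz = zscore(X);               % built-in column-wise (x - mean) ./ std

% Explicit equivalent of Equation (1):
mu    = mean(X, 1);
sigma = std(X, 0, 1);
Xz2   = (X - mu) ./ sigma;    % implicit expansion (R2016b and later)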

3.4. Correlation Alignment (CORAL)

Distribution differences between biological signals recorded in different sessions may negatively affect the generalization success of machine learning-based classifiers. In this study, the CORAL method was used to reduce this problem. CORAL provides statistical fit without the need for label information by aligning the covariance matrices of the source and target datasets. This method, which is based on whitening the source data and rescaling it according to the target distribution, improves transfer performance by increasing intersession compatibility [23].
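A minimal Matlab sketch of this alignment, following the whitening/re-coloring description above and [23], is given below; the diagonal regularization term and the variable names are assumptions, not the study’s exact implementation.

function XsAligned = coralAlign(Xs, Xt)
    % Xs: source (training) features, Xt: target (test) features,
    % both [samples x features] and assumed already normalized.
    d  = size(Xs, 2);
    Cs = cov(Xs) + eye(d);                        % regularized source covariance
    Ct = cov(Xt) + eye(d);                        % regularized target covariance
    XsAligned = Xs * sqrtm(inv(Cs)) * sqrtm(Ct);  % whiten, then re-color
end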

3.5. Power Spectral Density (PSD)

In the first classification stage of the study, where 7 Hz LED was detected using SSVEP, the PSD method was used to analyze the frequency components of EEG signals. With PSD, EEG data, which is difficult to analyze in the time domain, is made more meaningful by displaying the power distribution of the signal on the frequency axis. The Welch method was preferred because it provides low variance and stable results. The Welch method works by dividing the data into K sub-segments of equal length that partially overlap each other. The PSD estimate is expressed by Equation (2).
$P_{\mathrm{welch}}(f) = \frac{1}{K \cdot L \cdot U} \sum_{i=0}^{K-1} \left| \sum_{n=0}^{L-1} x_i[n]\, \omega[n]\, e^{-j 2 \pi f n} \right|^2$  (2)
In Equation (2), $K$ is the number of segments into which the data is divided, $L$ is the length of each segment, $\omega[n]$ is the window function, and $x_i[n]$ is the $n$th sample of the $i$th segment. $U$ is the normalization factor corresponding to the power of the window function [44].
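In practice, this estimate is available through Matlab’s pwelch function. The sketch below uses the 640-sample Hamming window and 639-sample overlap reported later in Section 4, applied to a placeholder signal.

fs  = 256;                                % sampling rate (Hz)
x   = randn(768, 1);                      % placeholder 3 s single-channel segment
win = hamming(640);
[pxx, f] = pwelch(x, win, 639, [], fs);   % PSD over overlapping windows
plot(f, pxx); xlim([0 15]); xlabel('Frequency (Hz)'); ylabel('PSD');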

3.6. Numerical Integration (Trapezoidal)

The trapezoidal method, one of the numerical integration methods, was used to approximately calculate the area under the PSD values in the 4–10 Hz and 6–8 Hz frequency ranges for the detection of 7 Hz LED. The trapezoidal method calculates the area under the function by approximating the positive function f to be integrated with piecewise linear curves in the range [a, b]. The trapezoidal method is frequently preferred in power analysis of biological signals such as EEG/EOG due to its ease of application and low computational cost, as well as providing sufficient accuracy. Using the trapezoidal method, the area under a function curve is calculated with Equation (3).
$\int_a^b f(x)\,dx \approx \frac{h}{2} \left[ f(x_0) + 2 \sum_{i=1}^{n-1} f(x_i) + f(x_n) \right]$  (3)
In Equation (3), the parameters $a$ and $b$ represent the integration limits of the function $f$, $(x_0, x_1, \ldots, x_n)$ represent the equally spaced points into which the integration interval is divided, and $h$ represents the width of each trapezoid [45,46].
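Continuing the previous sketch, the band areas can be computed with Matlab’s trapz function. The direction of the final ratio is an assumption, since the text only states that the two areas are proportioned to each other.

idxWide    = f >= 4 & f <= 10;                      % 4-10 Hz band
idxNarrow  = f >= 6 & f <= 8;                       % 6-8 Hz band
areaWide   = trapz(f(idxWide),   pxx(idxWide));     % area under PSD, 4-10 Hz
areaNarrow = trapz(f(idxNarrow), pxx(idxNarrow));   % area under PSD, 6-8 Hz
ratioFeature = areaNarrow / areaWide;               % proportional trapezoidal feature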

3.7. Polynomial Curve Fitting Method

In this study, the polynomial curve fitting method was used to classify EOG artefacts in the time domain. This method creates a polynomial model over the dataset based on the least squares principle; the most appropriate polynomial coefficients are obtained by minimizing the total squared error between the data points and the model [47]. The polynomial model $p(x)$ fitted to $y = f(x)$ over the interval $[x_0, x_1]$ is expressed by Equation (4).
$p(x) = a_0 x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n$  (4)
In Equation (4), $n$ represents the degree of the polynomial and $a_i$ the polynomial coefficients. In this study, the Matlab polyfit function was used to fit a polynomial curve to the data points.
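A minimal sketch of the fit with polyfit/polyval follows; the 3 s time axis is an assumption, and the centering/scaling output is used to avoid the ill-conditioning warnings that degrees this high otherwise trigger.

t = linspace(0, 3, 768)';             % assumed time axis of a 3 s segment
y = randn(768, 1);                    % placeholder signal
n = 20;                               % polynomial degree used in Section 4
[p, ~, muScale] = polyfit(t, y, n);   % p holds the n+1 coefficients
yFit = polyval(p, t, [], muScale);    % reconstructed trend curve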

3.8. Data Acquisition

In the data acquisition, an Emotiv Flex EEG device, a laptop with an Intel Core i7–12650 processor, and a 31.5-inch LCD screen with a 260 Hz refresh rate were used as hardware infrastructure. For visual stimulation, an LED flashing at a frequency of 7 Hz was placed at the upper middle point of the screen, and the subject was positioned on a chair approximately 90 cm away from the screen. The procedure was explained to each subject in detail before the experiment, and their consent was obtained. The orbital task order that the subjects were asked to follow is shown in Figure 2.
Figure 2. Experimental protocol and task order of movement trajectories applied in this study.
The subjects were fitted with an EEG cap, with electrode contact established using saline solution. The contact level of the electrodes was verified via the EmotivPRO software (v3.5.3), aiming for at least 98% signal quality. Electrode placement was made in accordance with the international 10–20 system, and contact quality was optimized with saline solution before each session. In the study, only four EEG channels (Fp1, F7, F8, and Fp2) in the frontal lobe region were actively used. This channel selection was made both to reduce system complexity and to target the regions where artefacts associated with eye movements are strongest [48]. Data recording in the study was carried out in two sessions.

3.8.1. Session 1—Morning Trials

The first session was held to record the cognitive performance differences of users in the morning hours (08:00–10:00). The session started with activating the 7 Hz LED. The experiment was started with a 3 s initial warning sound. Moving balls were activated, and the participants were first asked to follow the left-cross motion trajectory moving in the upper left corner of the screen for 10 s. The experiment ended with a 3 s final warning sound. The visual of the data recording stage is shown in Figure 3.
Figure 3. Experimental setup during the signal recording phase, showing EEG device attachment, subject posture, and LED stimulus configuration.
A total of 16 s of data were recorded while the subjects watched the target with their eyes. The same protocol was repeated for other movement trajectories, and 10 repetitions were performed for each class. After the LED was turned off, recordings were taken for the passive viewing class representing random eye movements. Subjects were asked to perform random gazes without focusing on any target. A total of 50 records were obtained from each subject with 10 repetitions for 5 different classes. The average value graphs of the movement trajectory data of the first session obtained from a randomly selected subject (Subject-2) are shown in Figure 4.
Figure 4. Average value graphs of Session 1: (a) left-cross movement trajectory; (b) right-cross movement trajectory; (c) right–left movement trajectory; (d) up and down movement trajectory.

3.8.2. Session 2—Evening Trials

The second session was held in the evening (16:00–18:00) in order to evaluate the stability of the system against individuals’ cognitive and neurophysiological changes during the day. The EEG headset was repositioned, preserving the channel placements and impedance levels used in the first session, and the electrodes were again wetted with saline solution. To avoid signal differences between sessions, special care was taken to place the EEG electrodes in exactly the same positions. The data collection procedure was repeated exactly as in the first session, and 50 records were obtained with 10 repetitions for each of the 5 classes. The average value graphs of the movement trajectory data of the second session obtained from a randomly selected subject (Subject-2) are shown in Figure 5.
Figure 5. Average value graphs of Session 2: (a) left-cross movement trajectory; (b) right-cross movement trajectory; (c) right–left movement trajectory; (d) up and down movement trajectory.
The signals obtained in each session were recorded with a sampling frequency of 256 Hz, and the raw data was stored in “.csv” format. A total of 1500 records were obtained from all subjects. Data were transferred to Matlab (R2023b) environment for signal processing stages.

4. Results

The dataset created using the proposed approach consists of five classes. Four of these classes were recorded while users followed visually guided moving object trajectories while the LED flickering at a frequency of 7 Hz was active. The fifth class was obtained when the LED was off and the user performed random eye movements. This structure enabled the system to be constructed in a two-stage classification structure. The scheme of the designed hybrid BCI system is shown in Figure 6.
Figure 6. Scheme of the proposed hybrid BCI system illustrating the signal acquisition, processing, and classification stages.
In the first stage, LED on (SSVEP present) and LED off (SSVEP absent) states were distinguished. In the second stage, the data of four different moving trajectories recorded only when the LED was on were classified. The stability of the system between sessions was tested by using all the data collected in the morning session as training data and all the data in the evening session as test data. In each session, a total of 16 s of data (4096 samples × 4 channels) were recorded, with 3 s audio warnings at the beginning and end.
In order to prevent the system from being affected by the beginning and ending warning sounds, these sections were removed from the signal and only 10 s of signal (2560 samples × 4 channels) were evaluated. These raw signals were then divided into 3 s segments with 1 s overlap, and five 3 s signals (768 samples × 4 channels) were obtained from each trial. The signals were passed through a 5th-order Butterworth bandpass filter in the range of 1–15 Hz via Matlab’s filtfilt function. Butterworth filters are frequently preferred in EEG processing applications because they provide a flat frequency response in the passband and preserve the amplitude components of the signal [49]. Filters of different orders were tested, and optimal performance was achieved with the 5th-order filter, considering the balance between passband sharpness and signal distortion. Example signals of the Fp1 channel before (Figure 7a) and after (Figure 7b) filtering are shown in Figure 7.
Figure 7. Signals acquired from the Fp1 channel: (a) unfiltered signal; (b) signal filtered with a 1–15 Hz, 5th-order Butterworth bandpass filter.
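A minimal Matlab sketch of this preprocessing chain is given below; raw stands for one trial of 2560 samples × 4 channels after removing the warning-tone sections, and the variable names are illustrative.

fs = 256;
[b, a] = butter(5, [1 15] / (fs/2), 'bandpass');   % 5th-order, 1-15 Hz bandpass
xFilt  = filtfilt(b, a, raw);                      % zero-phase filtering, per channel
winLen = 3 * fs;                                   % 3 s windows
step   = winLen - 1 * fs;                          % 1 s overlap -> 2 s step
starts = 1:step:(size(xFilt, 1) - winLen + 1);     % segment start indices
segments = arrayfun(@(s) xFilt(s:s + winLen - 1, :), starts, 'UniformOutput', false);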
Filtered signals were converted from the time domain to the frequency domain using the Welch PSD method [44]. Using a Hamming window, windowing was performed with a length of 640 samples and an overlap of 639 samples. The Hamming windowing parameters used in the PSD analysis were determined experimentally in accordance with the data structure. It is stated in the literature that the correct selection of window length and overlap ratio plays an important role in establishing a balance between frequency resolution and temporal sensitivity [50,51]. Frequency ranges of 4–10 Hz and 6–8 Hz were determined on the spectrum obtained from the PSD, and the areas of these two frequency bands were calculated with the trapezoidal integration method [52], which is frequently used in the literature to calculate band power. The calculated band areas were then proportioned to each other. Using the obtained proportional trapezoidal features, it was aimed to detect the SSVEP activity generated by the 7 Hz LED. The SSVEP created when the LED is on is represented in Figure 8.
Figure 8. SSVEP potentials occurring in the LED on position.
In Figure 8, the L1 length represents the SSVEP response occurring in the 4–10 Hz range, while the L2 length represents the SSVEP response occurring in the 6–8 Hz range. By proportioning these two lengths to each other, a proportional trapezoidal feature was obtained. Z-score normalization was applied to the obtained feature data in order to balance inter-individual and intersession amplitude changes, making the mean 0 and the standard deviation 1 [53]. The normalized feature data was classified using the RF, SVM, and Bagging algorithms. The SVM algorithm was configured using the fitcecoc function in the Matlab environment. The Bagging algorithm, implemented through the fitcensemble function, aimed to increase classification performance by training multiple weak learners on data subsets. The RF algorithm was implemented using the Matlab TreeBagger function. The classification accuracy rates obtained are presented in Table 1. In the table, Class-1 represents the moving trajectory data recorded in the LED active position and Class-2 represents the random gaze data recorded in the LED off position.
Table 1. Accuracy rates of the first classification stage where the system detects LED activation.
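For reference, a minimal sketch of the three classifier configurations named above is given below; X, y, Xtest, and yTest are placeholder feature matrices and label vectors (y assumed categorical), and the tree count shown is illustrative.

svmModel = fitcecoc(X, y);                                    % multi-class SVM (ECOC)
bagModel = fitcensemble(X, y, 'Method', 'Bag');               % Bootstrap Aggregating
rfModel  = TreeBagger(50, X, y, 'Method', 'classification');  % Random Forest
yPred    = predict(bagModel, Xtest);                          % predicted labels
acc      = mean(yPred == yTest) * 100;                        % accuracy (%)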
When Table 1 is examined, it is seen that the Bagging learning algorithm exhibits the best performance for the first classification stage with an accuracy rate of 99.12%. The second classification stage was carried out on the dataset created from the raw forms of the data correctly classified by the algorithms in the first stage. At this stage, a 4th-order Butterworth filter, which filters the signals without distorting their amplitude components thanks to its flat frequency response in the passband [49], was applied to the signals in the range of 0.5–32 Hz. After the filtering process, time-domain feature extraction methods were used: signal power, signal energy, and polynomial fitting. The polynomial fit was extended to the 20th degree, and all obtained polynomial coefficients were included in the feature vector. Higher-order polynomials are effective in improving classification performance by representing the structural trends of the signal in more detail [47]. Figure 9 shows the curve fitted with a 20th-degree polynomial to the Fp1-channel signal of the right-cross motion trajectory.
Figure 9. Polynomial curve fitted using 20th-degree polynomial for the signal of Fp1 channel.
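A minimal sketch of the per-channel feature computation described above is given below; x is a placeholder single-channel segment, and the power/energy definitions are the standard ones, assumed here since the text does not spell them out.

fs = 256;
t  = (0:numel(x) - 1)' / fs;               % time axis
powerFeat  = mean(x .^ 2);                 % average signal power (assumed definition)
energyFeat = sum(x .^ 2);                  % signal energy (assumed definition)
[pCoef, ~, muScale] = polyfit(t, x, 20);   % 21 polynomial coefficients
featVec = [powerFeat, energyFeat, pCoef];  % 23 features per channel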
The obtained feature data was first classified with the RF algorithm, implemented via the TreeBagger function with 50 decision trees in the model. The number of trees was fixed at the point where the accuracy performance remained stable within the 95% confidence interval in the preliminary tests. The feature subsets for each tree were randomly selected, which aimed to reduce the risk of overfitting. The feature data were also classified with a multi-class SVM model using the fitcecoc function in the Matlab environment. The SVM hyperparameters were automatically tuned with the Bayesian optimization method, and the combination that provided the best cross-validation performance was selected. The Bagging algorithm was structured as an ensemble model consisting of 100 decision trees, each with a maximum depth of 15, using the fitcensemble function. The number and depth of trees were determined in the preliminary tests to provide the optimum balance between model complexity and processing time. The average accuracy rates of classifications performed over 10 independent repetitions are given in Figure 10.
Figure 10. Average accuracy rates of the second classification stage (without CORAL applied).
Figure 10 shows the average accuracy rates of the second classification stage. Average accuracy rates of 79.87%, 76.31%, and 81.54% were obtained for the RF, SVM, and Bagging algorithms, respectively. It can be observed that the Bagging algorithm provides better performance than other algorithms.
In signal processing applications, overfitting is the situation where the model learns the patterns in the training data in too much detail and fails to capture the general structure underlying these patterns. Higher-order polynomials can pose a risk of overfitting, especially in small datasets. Therefore, the 20th-order polynomial used in this study was systematically tested for its impact on classification performance. Figure 11 compares the approximations obtained with the 15th-, 20th-, and 25th-order polynomials with the original signal.
Figure 11. Comparison of polynomial fitting curves with the original signal. (a) The 15th-degree polynomial captures the general low-frequency trend of the signal while tracking short-term fluctuations to a limited extent. (b) The 20th-degree polynomial exhibits a better fit by compensating for both the slower trend and the local task-related variations in the signal; this degree was considered a candidate to optimize the generalization-fitting balance. (c) The 25th-degree polynomial tends to overfit local fluctuations, especially in the extreme regions of the window and at sharp change points, indicating a potential risk of overfitting.
While the lower-order (15th-degree) polynomial reflects the underlying signal trend, it was found to underrepresent high-frequency components. On the other hand, the 25th-degree polynomial tracks local fluctuations too closely, suggesting that it fits noise components rather than the physiological characteristics of the signal. To quantitatively demonstrate the performance impact of the polynomial degree, features obtained for three different degrees (15, 20, and 25) were classified using the Bagging classifier. The accuracy rates of the classification process are given in Table 2. In Table 2, Class-1 represents the left-cross movement trajectory, Class-2 represents the right-cross movement trajectory, Class-3 represents the right–left movement trajectory, Class-4 represents the up and down movement trajectory, and n represents the polynomial degree.
Table 2. Accuracy rates of the classification of feature data generated using the 15th, 20th, and 25th polynomial degrees with the Bagging algorithm.
The results (Table 2) show that the average accuracy rate (81.95%) achieved with the 20th-degree polynomial is higher than both the 15th degree (74.10%) and 25th degree (80.80%). This demonstrates an optimal point where the model complexity is not excessive but rather better represents the meaningful variations in the signal. In particular, the 87.80% accuracy of the 20th-degree polynomial on the Class-4 samples supports the power of this approach in capturing task-relevant components of the signal.
The purpose of the polynomial approach is to represent a slow trend overlaid on short-term fluctuations in the signal. Very low degrees lead to underrepresentation due to high bias, while very high degrees lead to overfitting due to high variance. The selection of the 20th-degree polynomial was based on both this theoretical trade-off and the empirical accuracy increase shown in Table 2. Thus, model performance was systematically tested for overfitting. This additional analysis supports the fact that the 20th-degree polynomial was not chosen randomly but rather through experimental observation and statistical validation. The results demonstrate that physiologically meaningful signal trends can be captured and that classification accuracy is significantly improved with this optimization.

4.1. CORAL Adaptation

In order to reduce the statistical distribution differences between the two sessions, the CORAL method was applied to the feature data of the second classification stage. CORAL provides cross-domain statistical agreement by aligning the covariance matrices of the source (training) and target (test) datasets. Within the scope of this method, the covariance matrix of the normalized training data was whitened and then re-colored according to the covariance structure of the test data, making the source data statistically compatible with the target distribution. Thus, the generalization ability of the model across sessions is increased [23]. In order to see the distribution of the obtained features, a scatter plot of the randomly selected Fp2 channel is given in Figure 12. In the figure, Class-1 represents the left-cross movement trajectory, Class-2 represents the right-cross movement trajectory, Class-3 represents the right–left movement trajectory, and Class-4 represents the up and down movement trajectory.
Figure 12. Three-dimensional scatter plot of power, energy, and polynomial curve fitting features of the Fp2 channel.
As can be inferred from Figure 12, the features can be clearly distinguished. The feature data was then classified using the RF, SVM, and Bagging machine learning algorithms, as in the first classification stage. All of the data recorded in the first session (early in the day), scaled using the CORAL adaptation method, was used as training data, and all of the data recorded in the second session (late in the day) was used as test data. The RF model was constructed using the TreeBagger function in the Matlab environment and implemented with a total of 50 decision trees. This number was determined according to typical values suggested in the literature [54,55] and the accuracy–stability balance observed in preliminary experiments. Each tree was trained on randomly selected feature subsets. Model outputs were converted from cell format to numeric form to make the class labels suitable for numerical analysis. The SVM algorithm was constructed with a one-vs-one strategy using the fitcecoc function. Hyperparameter optimization was performed with the Bayesian optimization algorithm; the KernelFunction, KernelScale, and BoxConstraint parameters were optimized, and the best model was determined after 30 iterations. The Bagging algorithm was implemented using the fitcensemble function, and a model consisting of 100 decision trees, each with a maximum split depth of 15, was constructed in order to reduce the risk of overfitting and increase the generalizability of the model. These parameters were determined by examining previous similar studies [12,56] and experimental accuracy analyses. All classification processes were carried out over 10 independent repetitions, and the average accuracy, ITR, precision, recall, and F1-measure values obtained are given in Table 3. In Table 3, Class-1 represents the left-cross movement trajectory, Class-2 represents the right-cross movement trajectory, Class-3 represents the right–left movement trajectory, and Class-4 represents the up and down movement trajectory.
Table 3. Accuracy, ITR, precision, recall, and F1-measure values of the data to which the CORAL adaptation method was applied.
When Table 3 is examined, it is seen that the RF algorithm performs well with an average accuracy rate of 93.80%, an ITR value of 37.54 (bits/min), and a precision of 94.07%. The algorithm stands out with its 96.53% accuracy rate, especially in Class-4. The SVM algorithm showed a performance close to RF, with an average accuracy rate of 92.02%, an ITR value of 35.38 (bits/min), and a precision of 92.82%. While a high accuracy rate of 98.66% was achieved in Class-4, the algorithm fell behind the others in Class-1 with an accuracy rate of 87.18%. The Bagging method showed the highest overall performance with an average accuracy of 94.29%, an ITR of 38.35 (bits/min), and a precision of 94.55%. The algorithm stands out with accuracy rates of 96.84% and 95.00% in Class-2 and Class-3, respectively. Overall, the Bagging algorithm exhibited the best performance with a high accuracy rate and ITR. While the RF algorithm performed close to Bagging, SVM fell behind the other two algorithms despite achieving high accuracy rates in some classes. The results revealed that CORAL is an effective domain adaptation method for classifying the EOG artefact signals found in EEG signals. In addition, the performance of the Bagging algorithm showed that ensemble learning methods were effective for the current study.
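A common way to compute ITR values such as those in Table 3 is the standard Wolpaw formula; the sketch below reproduces it. The selection time T is not stated explicitly in the text and is chosen here so that the result approximates the reported 38.35 bits/min.

N = 4; P = 0.9429; T = 2.5;      % classes, mean accuracy, assumed seconds/selection
bitsPerSel = log2(N) + P * log2(P) + (1 - P) * log2((1 - P) / (N - 1));
itr = bitsPerSel * (60 / T);     % approx. 38 bits/min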

4.2. Feature Extraction

Given that the 20th-degree polynomial expansion produces a large number of features, and that many of the generated features may be correlated and increase noise, a feature selection step was applied to the dataset to reduce the high dimensionality. A one-way Analysis of Variance (ANOVA) F-test approach was chosen, which evaluates each feature by testing for significant differences in its mean value across classes. This technique effectively scores how well each feature discriminates between classes, helping to identify and remove irrelevant features. The process and outcome stages are outlined below, followed by a minimal sketch of the selection step:
  • ANOVA test on all features: A one-way ANOVA was applied to all 92 features (23 features × 4 channels) extracted from 4 channels (Fp1, F7, F8, and Fp2) to assess their significance in distinguishing between target classes. The ANOVA examined the inter- and intra-class variance of each feature to determine whether the class means of that feature differed significantly.
  • Selection criterion (p < 0.05): Using a significance threshold of p < 0.05, 66 features that showed statistically significant differences between class means were selected. Features falling outside the threshold were removed from the data. In other words, significant features with p-values below 0.05 were retained for the model.
  • Retraining with selected features: The classification model was retrained using only the features selected by ANOVA. The average accuracies obtained for the RF, SVM, and Bagging algorithms were close to the performance using all 92 features, with no significant decreases observed. The RF algorithm decreased from 93.80% to 90.93%, the SVM algorithm from 92.02% to 90.11%, and the Bagging algorithm from 94.29% to 92.09%. The highest accuracy rate of 92.09% was obtained with the Bagging algorithm. All classifiers exhibited slightly lower, but still high, accuracies. Despite these modest decreases, the results obtained with the reduced feature set remained robust and competitive, showing that the most useful components of the data were preserved.
  • Impact on performance and robustness: The accuracy rates obtained with the reduced feature set indicate that the removed features were relatively uninformative. Removing these less informative features did not significantly degrade model performance; on the contrary, it increased model robustness by eliminating redundant information. Reducing the feature set also reduced noise that could hinder class identification. This additional step validated the robustness of our approach and increased reproducibility by focusing on the most useful features while largely preserving model performance. The resulting accuracy rates are shown in Table 4. In Table 4, Class-1 represents the left-cross movement trajectory, Class-2 represents the right-cross movement trajectory, Class-3 represents the right–left movement trajectory, and Class-4 represents the up and down movement trajectory.
    Table 4. Accuracy and ITR values obtained with 66 features selected as a result of applying the ANOVA feature selection method to the data adapted with CORAL.
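As referenced above, a minimal sketch of the selection step follows; X is the [samples × 92] feature matrix and y the class labels, with names illustrative rather than taken from the implementation.

pVals = zeros(1, size(X, 2));
for k = 1:size(X, 2)
    pVals(k) = anova1(X(:, k), y, 'off');   % one-way ANOVA per feature
end
selected = pVals < 0.05;                    % significance threshold
Xreduced = X(:, selected);                  % reduced set (66 features here)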
ANOVA-based feature selection reduced the size of the feature set from 92 to 66 while maintaining high classification performance. The classifier achieved 92.09% accuracy rate with this reduced set, demonstrating that the model retained the most informative and discriminative features. This result confirms that the proposed system is not overly sensitive to noisy or correlated inputs. Furthermore, the selection process increased both the interpretability and reproducibility of the model, improving its generalization ability.

4.3. Cross-Session Evaluation (Wilcoxon Test)

In this study, EEG data were collected from participants in two separate recording sessions conducted at different times of the day (morning and evening) under consistent experimental conditions. The entire dataset from the morning experiments was used to train the Bagging classifier, while the entire dataset from the evening experiments was used solely to test the trained model. This clear temporal separation between the training and testing phases allows for the assessment of the system’s robustness to non-stationarity and diurnal signal variability, a critical factor for practical BCI deployment.
The Wilcoxon signed-rank test was applied to evaluate the temporal robustness and generalizability of the proposed hybrid BCI system and to examine whether there was a significant difference between the classification accuracies obtained. This test was chosen because of its ability to statistically evaluate the median difference between two paired measurement groups when the normal distribution assumption is not met. A p < 0.05 value, widely used in the literature, was used as the threshold for statistical significance in the analysis. The Bagging algorithm, which provides the highest accuracy rate within the scope of the study, was used in the tests. The model was trained with data from the morning session; the accuracy rate for the first dataset (morning) was obtained by testing with morning data, and the accuracy rate for the second dataset (evening) was obtained by testing with data from the evening session. These accuracies were subjected to the Wilcoxon signed-rank test, and the results are presented in Table 5. In Table 5, Class-1 represents the left-cross movement trajectory, Class-2 represents the right-cross movement trajectory, Class-3 represents the right–left movement trajectory, and Class-4 represents the up and down movement trajectory.
Table 5. Wilcoxon p value of accuracy rates obtained from training and test data using the Bagging algorithm.
The Wilcoxon signed-rank test yielded p = 0.125, indicating that class-based accuracies did not differ significantly across sessions and demonstrating consistent, stable classification performance.
Although a slight decrease in accuracy was observed, performance remained consistently high across sessions. A p value of 0.125 (p > 0.05) indicated that there was no statistically significant difference between training and testing accuracies across sessions. This result demonstrates that the proposed system maintains robust classification performance across recording times, confirming its potential for real-world use where recording variability is unavoidable.
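A minimal sketch of this paired comparison with Matlab’s signrank function is given below; the accuracy vectors are placeholders standing in for the class-wise accuracies of the morning and evening evaluations, not the study’s actual values.

accMorning = [96.8 97.5 95.2 98.1];     % placeholder class-wise accuracies (%)
accEvening = [94.1 96.8 95.0 96.5];     % placeholder class-wise accuracies (%)
p = signrank(accMorning, accEvening);   % paired, non-parametric test
noSigDiff = p > 0.05;                   % true -> no significant session effect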

5. Discussion and Future Work

Studies in the literature have revealed that hybrid BCI systems created with the integrated use of EEG and EOG artefacts offer significant advantages in terms of accuracy and flexibility [9]. SSVEP-based systems stand out with their high ITR rates; however, they have limiting factors such as user comfort, individual differences, and intersession variability [17,22]. Recently, properly processed EOG artefacts have attracted attention as a low-cost and fast-reacting alternative control interface [12]. Domain adaptation methods, especially for eliminating statistical differences between sessions, have come to the fore in the literature. In addition, it has been observed that systems developed with a low number of channels provide significant advantages to the user in terms of portability, ease of use, and hardware simplicity [30].
In this study, a hybrid BCI system that uses EEG signals and EOG artefacts is proposed to overcome the problems of session variability, visual stimulus-induced discomfort, reliability, and involuntary system activation encountered in traditional EEG-based systems. The aim was a structure resistant to users’ physiological and psychological fluctuations during the day. Accordingly, data recordings were carried out in two different sessions, morning and evening. All moving objects in the designed interface are presented to the user simultaneously, aiming to ensure that the system is suitable for use in real-life conditions. The designed system has a two-stage classification structure. In the first stage, proportional trapezoidal features were extracted in the frequency domain, and the SSVEP activation corresponding to the 7 Hz LED was detected as a safe trigger mechanism. In the second stage, using the raw EOG artefacts in the EEG signals, power, energy, and 20th-degree polynomial coefficients were extracted as time-domain features in order to classify four different movement trajectories. A comprehensive evaluation was conducted to ensure the robustness and generalization of the proposed hybrid BCI system. The analysis of polynomial fitting revealed that model accuracy strongly depended on the polynomial order, with the 20th-degree polynomial providing the optimal balance between bias and variance, achieving the highest overall accuracy (81.95%) without overfitting. Models were trained using the Bagging, SVM, and RF algorithms, and the second-session data was used to test them. The results revealed that the Bagging algorithm achieved the highest success with 99.12% accuracy in the first classification stage, demonstrating the reliability of the ensemble learning approach for the detection of low-frequency visual stimuli. In the second stage, the data was first classified without applying any adaptation method; the classification process was then repeated by applying the CORAL adaptation method, and the results were compared. The Bagging algorithm achieved the best performance across sessions, reaching an accuracy rate of 94.29% from an average accuracy rate of 81.54%, despite individual variations. Feature selection using the ANOVA F-test (p < 0.05) effectively reduced the feature set from 92 to 66 while maintaining high classification performance. The Bagging classifier achieved 92.09% accuracy with the reduced set, confirming that redundant and noisy features were successfully removed without compromising model robustness. Finally, temporal robustness was verified using the Wilcoxon signed-rank test (p = 0.125), which showed no statistically significant difference between morning and evening session accuracies. This confirmed that the proposed system exhibits stable and reliable performance across sessions and is suitable for real-world BCI applications. Basic information about similar studies conducted in the field is given in Table 6.
Table 6. Comparison of the current study with and without CORAL applied with studies conducted in the field.
In the study combining SSVEP and eye movements, researchers [57] reported 81.67% accuracy with their proposed approach. This rate is below the accuracy level of the proposed study, which can be attributed to the Bayesian update mechanism used not being robust enough to individual differences. In another study, researchers [12] presented a system that provides over 80% accuracy by combining MI and EOG, but this system has limitations in practical applications due to both the training requirement and low ITR. Compared to these studies, the proposed system combines the high ITR advantage of SSVEP with the fast and involuntary responses of EOG and offers an intuitive control infrastructure that does not require training. In addition, the two-stage control (SSVEP + EOG confirmation) offered by our system eliminates the risk of users issuing unintentional commands, increasing the level of reliability for real-world applications. With these aspects, the proposed system offers a hybrid BCI approach that prioritizes both user comfort and technical accuracy, and is relatively more balanced, safe, and applicable compared to existing studies in the literature.
Again, when Table 6 is examined, it is seen that high accuracy rates and ITR values were obtained in the developed systems [15,27]. However, these systems require complex processing steps due to their high channel counts and are insufficient in terms of comfort. In the proposed study, a similar accuracy rate (94.29%) was achieved using only four channels (Fp1, F7, F8, and Fp2), and a speed sufficient for practical applications was achieved with an ITR of 38.35 (bits/min). Another notable example is the 21-channel EEG-based system developed by researchers [59], which achieved 98.8% accuracy and 44.1 (bits/min) ITR. However, the equipment such systems require is quite complex and costly. In this context, obtaining high accuracy and sufficient ITR with only four channels and minimal hardware in the proposed study is an important advantage in terms of hardware cost and user ergonomics.
In our study, we aimed to increase the adaptability and generalizability of the hybrid BCI system previously presented by researchers [24] to real-world applications. The proposed BCI system stands out with its resilience to performance losses that occur between sessions. In order to test the stability of the proposed system in real-world usage conditions, data were collected in two separate sessions, morning and evening, and the CORAL-based domain adaptation method was applied to reduce the distribution differences between sessions. Thus, the distribution differences that occurred between sessions were statistically balanced, and the need for recalibration of the system was eliminated. In this context, an approach closer to real-world applications is presented by evaluating the time-varying cognitive and physiological states of system users. In addition, by increasing the number of participants, it was shown that the developed system can exhibit similar performance on different individuals, which significantly strengthened the generalizability of the system. When participant diversity was evaluated together with the consistent results obtained despite biological and cognitive variations, it was revealed that the system is applicable not only for certain individuals but also for wider user groups. In addition, by showing all moving object trajectories to the user on a single screen during the data recording phase, it was aimed to integrate the system more easily into daily life. This holistic stimulus presentation both increased user ergonomics and simplified the system installation process, providing its suitability for practical applications. Another particularly striking point is that, while the same number of channels (four) were used in the classification process, the system was tested using fewer features (four) and recording less data. This situation is advantageous especially for low-resource portable systems and makes the system applicable even under hardware limitations.
Despite the promising results of the designed system, this study has some limitations. For example, the electrode positions were placed to be exactly the same in both sessions in order to minimize the distribution differences between sessions during the data recording phase, and to eliminate impedance differences, the electrodes were wetted with saline solution in each session. Additionally, the system provides intersession stability for a single user: the trained models provided high accuracy and ITR for the same person, but a model trained on one user was not evaluated on data from other users. If the electrode positions deviate, system performance may be negatively affected. Another limitation is that the system was not tested in real time; a system intended to provide maximum suitability for real-time applications must still be validated under real-time conditions. A problem observed during the data recording phase is that the Emotiv Flex EEG headset creates pressure-related discomfort in users after several minutes of use, and the device cannot be adjusted according to head size. In this study, which aimed to highlight the concept of comfort, the EEG headset caused problems especially for people with large heads. Three different machine learning algorithms were used in the classification stage and their results compared; however, testing the proposed method with deep learning approaches could further increase system performance. Moreover, although the movement trajectories appear intuitive and comfortable, more studies are needed to evaluate the cognitive workload and fatigue caused by these tasks. With the improvements identified and the proposed solutions, the designed hybrid BCI system has the potential to become a more comfortable system that is ready for real-life use.

6. Conclusions

In this study, to overcome the basic limitations of EEG-based systems in user comfort, intersession instability, reliability, and system activation, a two-stage hybrid BCI system is proposed in which EOG artefacts present in the EEG signals and the SSVEP response evoked by a 7 Hz LED are used, combining time-domain and frequency-domain features. In this system, the user can safely activate the interface with a single low-frequency (7 Hz) LED stimulus, which remains outside the 15–25 Hz range associated with an increased risk of photosensitive epileptic seizures. Subsequently, four-way object tracking is performed using eye movements derived from the EOG artefacts. This structure provides both a security layer that prevents involuntary commands and the opportunity to interact with low cognitive load. Data were collected in two separate sessions, and the stability of the system was tested at the intersession level. The CORAL method, applied within the scope of domain adaptation, reduced the statistical differences between sessions and enabled the system to operate without recalibration. In the first classification stage, the signals were filtered in the 1–15 Hz range and proportional trapezoidal features were then extracted through PSD analysis. The feature data were classified with the Bagging, SVM, and RF algorithms, yielding accuracy rates of 99.12%, 98.63%, and 98.67%, respectively.
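As an illustration of this first stage, the sketch below bandpass-filters each channel to 1–15 Hz, estimates the PSD with Welch's method [51], and forms a proportional feature by trapezoidal integration of the power around the 7 Hz stimulus relative to the total band power. The sampling rate, band edges, and classifier settings are assumptions, not the exact parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.ensemble import BaggingClassifier

FS = 128  # assumed headset sampling rate (Hz)

def stimulus_ratio_features(trial, fs=FS, stim_band=(6.0, 8.0)):
    """One proportional trapezoidal feature per channel: power around
    the 7 Hz LED frequency divided by the total 1-15 Hz power."""
    b, a = butter(4, [1.0, 15.0], btype="band", fs=fs)
    feats = []
    for ch in trial:                      # trial: (n_channels, n_samples)
        x = filtfilt(b, a, ch)            # zero-phase 1-15 Hz filtering
        f, pxx = welch(x, fs=fs, nperseg=2 * fs)
        stim = (f >= stim_band[0]) & (f <= stim_band[1])
        full = (f >= 1.0) & (f <= 15.0)
        feats.append(np.trapz(pxx[stim], f[stim]) / np.trapz(pxx[full], f[full]))
    return np.array(feats)                # four channels -> four features

# trials: list of (4, n_samples) arrays; y: activation labels (LED on/off)
# X = np.array([stimulus_ratio_features(t) for t in trials])
# clf = BaggingClassifier(n_estimators=50, random_state=0).fit(X, y)
```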
In the second classification stage, power, energy, and curve-fitting (20th-degree polynomial) features were extracted from the movement-trajectory data. A comparative analysis of polynomial orders was conducted to mitigate the risk of overfitting, and it revealed that model performance depends strongly on the order of the polynomial used for signal fitting. A lower-order fit (15th order) underfitted the data, achieving an average accuracy of 74.10%, while a higher-order fit (25th order) slightly overfitted, reducing overall performance to 80.80% despite local improvements. The 20th-order polynomial struck the best balance between bias and variance, achieving an average accuracy of 81.95% and the highest class-specific accuracy (87.80% for Class-4). These results confirm that this order provides sufficient model flexibility to capture task-related variation without fitting noise components. Consequently, the 20th-order polynomial was determined to be the optimal configuration, effectively improving classification accuracy and signal interpretability while preventing overfitting.
The extracted 20th-order features were classified with the RF, Bagging, and SVM algorithms, yielding average accuracy rates of 79.87%, 81.54%, and 76.31%, respectively. The classification was then repeated after applying the CORAL method to the data. An average accuracy of 93.80%, an ITR of 37.54 bits/min, and a precision of 94.07% were obtained for the RF algorithm, and an average accuracy of 92.02%, an ITR of 35.38 bits/min, and a precision of 92.82% for the SVM algorithm. The Bagging algorithm showed the highest performance with 94.29% accuracy, 38.35 bits/min ITR, 94.55% precision, and 94.42% F1-measure. Combined with CORAL domain adaptation, Bagging thus achieved the best intersession performance, improving from an average accuracy of 81.54% to 94.29% despite individual variations.
An ANOVA analysis was performed to evaluate system performance based on the extracted features and to determine whether reducing the feature matrix would maintain model robustness. Initially, 92 features (23 per channel × 4 channels) were extracted from the EEG data. After applying a one-way ANOVA F-test with a significance threshold of p < 0.05, 66 features were found to be statistically significant and retained for classification. When the classifiers were retrained on the reduced feature set, only a small decrease in performance was observed compared to the full 92-feature results: average accuracies decreased from 93.80% to 90.93% for RF, from 92.02% to 90.11% for SVM, and from 94.29% to 92.09% for Bagging. Despite this small reduction, all models maintained high and consistent accuracy, confirming that the removed features contributed little to class separation. The Bagging classifier again achieved the highest accuracy (92.09%), demonstrating that the subset selected by ANOVA retained the most discriminative features. Furthermore, removing redundant and correlated components made the classification more robust to noise and reduced the tendency to overfit. Overall, ANOVA-based feature selection reduced the feature space by approximately 30% (from 92 to 66 features) while maintaining high classification performance.
This confirmed that the proposed hybrid BCI framework effectively captures essential signal features, and its performance remains robust, interpretable, and reproducible even under reduced feature conditions.
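A minimal sketch of this second-stage feature construction and the ANOVA-based reduction follows; the segment shapes, the feature layout (21 polynomial coefficients plus energy and mean power per channel, giving the 23 × 4 = 92 features above), and the helper names are illustrative assumptions rather than the authors' exact code.

```python
import numpy as np
from sklearn.feature_selection import f_classif

def trajectory_features(segment, order=20):
    """23 features per channel: the 21 coefficients of a 20th-order
    polynomial fit plus the signal energy and mean power."""
    t = np.linspace(0.0, 1.0, segment.size)
    coeffs = np.polyfit(t, segment, order)            # order + 1 = 21 values
    energy = float(np.sum(segment ** 2))
    power = energy / segment.size
    return np.concatenate([coeffs, [energy, power]])

def anova_reduce(X, y, alpha=0.05):
    """Keep only features whose one-way ANOVA F-test is significant
    at p < alpha (66 of 92 survived in this study)."""
    _, p_values = f_classif(X, y)
    mask = p_values < alpha
    return X[:, mask], mask

# X: (n_trials, 92) from stacking trajectory_features over four channels
# X_reduced, mask = anova_reduce(X, y)
```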
To assess the temporal robustness of the proposed hybrid BCI system, the classification accuracies obtained from two independent recording sessions (morning and evening) were compared quantitatively. The Bagging classifier trained on the morning dataset was tested on both the morning and evening data to examine the stability of performance under session variability. A Wilcoxon signed-rank test applied to the paired per-class accuracies from the two sessions yielded a p-value of 0.125, above the significance threshold (p < 0.05), confirming that there was no statistically significant difference between the classification accuracies of the morning and evening sessions. The proposed hybrid BCI system therefore demonstrated temporal stability and strong generalization, maintaining high accuracy despite diurnal variations and session-to-session non-stationarity. These findings support the robustness of the system for practical BCI applications in which recording conditions and user states change over time. In addition, the comfort-oriented selection of visual stimuli and the careful design of the movement-based direction-determination structure provide an effective solution to problems such as visual fatigue. Whereas user experience often remains in the background in traditional systems, the proposed system offers a holistic approach that prioritizes both technical success and user ergonomics. As a result, the proposed hybrid BCI system, with its low channel count, intersession stability, security structure that prevents unintentional commands, and relatively high classification performance, is promising for the transition from the laboratory environment to real-world applications.
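For reproducibility, the sketch below shows how the stability test and the ITR figures of merit reported above can be computed. The per-class accuracy arrays are placeholders rather than the study's data, and the trial duration in the ITR helper is an assumption; the standard Wolpaw formula is used.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder paired per-class accuracies, not the study's data
morning_acc = np.array([0.95, 0.93, 0.96, 0.94])
evening_acc = np.array([0.93, 0.92, 0.95, 0.93])
stat, p = wilcoxon(morning_acc, evening_acc)
print(f"Wilcoxon p = {p:.3f}")  # p > 0.05 -> no significant session effect

def wolpaw_itr(acc, n_classes=4, trial_time_s=4.0):
    """Information transfer rate in bits/min (Wolpaw formula);
    trial_time_s is an assumed command duration."""
    if acc >= 1.0:
        bits = np.log2(n_classes)
    elif acc <= 1.0 / n_classes:
        return 0.0
    else:
        bits = (np.log2(n_classes) + acc * np.log2(acc)
                + (1 - acc) * np.log2((1 - acc) / (n_classes - 1)))
    return bits * 60.0 / trial_time_s
```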

Author Contributions

Conceptualization, S.A. and M.M.; methodology, S.A. and M.M.; software, S.A. and M.M.; validation, L.G. and M.M.; formal analysis, S.A.; investigation, L.G.; resources, S.A.; data curation, M.M.; writing—original draft preparation, S.A.; writing—review and editing, S.A.; visualization, L.G.; supervision, L.G.; project administration, M.M.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Health Institutes of Turkey (TUSEB), grant number 36126, and the APC was funded by the authors.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Ethics Committee of Trabzon Kanuni Training and Research Hospital (protocol code 23618724 and date of approval 06/2023).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
Acc	Accuracy
Avg	Average
Bagging	Bootstrap Aggregating
BCI	Brain–Computer Interface
CCA	Canonical Correlation Analysis
CORAL	Correlation Alignment
EEG	Electroencephalography
EOG	Electrooculography
FBCCA	Filter-Bank Canonical Correlation Analysis
ITR	Information Transfer Rate
LDA	Linear Discriminant Analysis
LED	Light-Emitting Diode
MI	Motor Imagery
PSD	Power Spectral Density
RF	Random Forest
SSVEP	Steady-State Visual Evoked Potential
Sub	Subject
SVM	Support Vector Machine
VEP	Visual Evoked Potential

References

1. Awuah, W.A.; Ahluwalia, A.; Darko, K.; Sanker, V.; Tan, J.K.; Tenkorang, P.O.; Ben-Jaafar, A.; Ranganathan, S.; Aderinto, N.; Mehta, A.; et al. Bridging Minds and Machines: The Recent Advances of Brain-Computer Interfaces in Neurological and Neurosurgical Applications. World Neurosurg. 2024, 189, 138–153.
2. Dong, Y.; Zheng, L.; Pei, W.; Gao, X.; Wang, Y. A 240-target VEP-based BCI system employing narrow-band random sequences. J. Neural Eng. 2025, 22, 026024.
3. Cueva, V.M.; Bougrain, L.; Lotte, F.; Rimbert, S. Reliable predictor of BCI motor imagery performance using median nerve stimulation. J. Neural Eng. 2025, 22, 026039.
4. Pitt, K.M.; Boster, J.B. Identifying P300 brain-computer interface training strategies for AAC in children: A focus group study. AAC Augment. Altern. Commun. 2025, 1–10.
5. Padfield, N.; Zabalza, J.; Zhao, H.; Masero, V.; Ren, J. EEG-Based Brain-Computer Interfaces Using Motor-Imagery: Techniques and Challenges. Sensors 2019, 19, 1423.
6. Suefusa, K.; Tanaka, T. A comparison study of visually stimulated brain-computer and eye-tracking interfaces. J. Neural Eng. 2017, 14, 036009.
7. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279.
8. Jiang, J.; Zhou, Z.; Yin, E.; Yu, Y.; Hu, D. Hybrid Brain-Computer Interface (BCI) based on the EEG and EOG signals. Biomed. Mater. Eng. 2014, 24, 2919–2925.
9. Wolpaw, J.R.; Wolpaw, E.W. Brain–Computer Interfaces: Principles and Practice; Oxford University Press: Oxford, UK, 2012; pp. 1–424.
10. Kinney-Lang, E.; Kelly, D.; Floreani, E.D.; Jadavji, Z.; Rowley, D.; Zewdie, E.T.; Anaraki, J.R.; Bahari, H.; Beckers, K.; Castelane, K.; et al. Advancing Brain-Computer Interface Applications for Severely Disabled Children Through a Multidisciplinary National Network: Summary of the Inaugural Pediatric BCI Canada Meeting. Front. Hum. Neurosci. 2020, 14, 593883.
11. Orlandi, S.; House, S.C.; Karlsson, P.; Saab, R.; Chau, T. Brain-Computer Interfaces for Children with Complex Communication Needs and Limited Mobility: A Systematic Review. Front. Hum. Neurosci. 2021, 15, 643294.
12. Huang, Q.; Zhang, Z.; Yu, T.; He, S.; Li, Y. An EEG-/EOG-Based Hybrid Brain-Computer Interface: Application on Controlling an Integrated Wheelchair Robotic Arm System. Front. Neurosci. 2019, 13, 459140.
13. Mussi, M.G.; Adams, K.D. EEG hybrid brain-computer interfaces: A scoping review applying an existing hybrid-BCI taxonomy and considerations for pediatric applications. Front. Hum. Neurosci. 2022, 16, 1007136.
14. Fatourechi, M.; Bashashati, A.; Ward, R.K.; Birch, G.E. EMG and EOG artifacts in brain computer interface systems: A survey. Clin. Neurophysiol. 2007, 118, 480–494.
15. Zhu, Y.; Li, Y.; Lu, J.; Li, P. A Hybrid BCI Based on SSVEP and EOG for Robotic Arm Control. Front. Neurorobot. 2020, 14, 583641.
16. Liu, X.; Hu, B.; Si, Y.; Wang, Q. The role of eye movement signals in non-invasive brain-computer interface typing system. Med. Biol. Eng. Comput. 2024, 62, 1981–1990.
17. Zhu, D.; Bieger, J.; Molina, G.G.; Aarts, R.M. A survey of stimulation methods used in SSVEP-based BCIs. Comput. Intell. Neurosci. 2010, 2010, 702357.
18. Brennan, C.P.; McCullagh, P.J.; Galway, L.; Lightbody, G. Promoting autonomy in a smart home environment with a smarter interface. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), Milan, Italy, 25–29 August 2015; pp. 5032–5035.
19. Fisher, R.S.; Harding, G.; Erba, G.; Barkley, G.L.; Wilkins, A. Photic- and pattern-induced seizures: A review for the epilepsy foundation of America working group. Epilepsia 2005, 46, 1426–1441.
20. Chu, C.; Luo, J.; Tian, X.; Han, X.; Guo, S. A P300 Brain-Computer Interface Paradigm Based on Electric and Vibration Simple Command Tactile Stimulation. Front. Hum. Neurosci. 2021, 15, 641357.
21. Punsawad, Y.; Wongsawat, Y. Motion visual stimulus for SSVEP-based BCI system. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), San Diego, CA, USA, 28 August–1 September 2012; pp. 3837–3840.
22. Sung, D.J.; Kim, K.-T.; Jeong, J.-H.; Kim, L.; Lee, S.J.; Kim, H.; Kim, S.-J. Improving inter-session performance via relevant session-transfer for multi-session motor imagery classification. Heliyon 2024, 10, e37343.
23. Sun, B.; Saenko, K. Deep CORAL: Correlation alignment for deep domain adaptation. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 9915, pp. 443–450.
24. Aydin, S.; Melek, M.; Gökrem, L. A Safe and Efficient Brain–Computer Interface Using Moving Object Trajectories and LED-Controlled Activation. Micromachines 2025, 16, 340.
25. Ramli, R.; Arof, H.; Ibrahim, F.; Mokhtar, N.; Idris, M.Y.I. Using finite state machine and a hybrid of EEG signal and EOG artifacts for an asynchronous wheelchair navigation. Expert Syst. Appl. 2015, 42, 2451–2463.
26. Kubacki, A. Use of Force Feedback Device in a Hybrid Brain-Computer Interface Based on SSVEP, EOG and Eye Tracking for Sorting Items. Sensors 2021, 21, 7244.
27. Zhang, J.; Gao, S.; Zhou, K.; Cheng, Y.; Mao, S. An online hybrid BCI combining SSVEP and EOG-based eye movements. Front. Hum. Neurosci. 2023, 17, 1103935.
28. Karas, K.; Pozzi, L.; Pedrocchi, A.; Braghin, F.; Roveda, L. Brain-computer interface for robot control with eye artifacts for assistive applications. Sci. Rep. 2023, 13, 17512.
29. Kumar, D.; Sharma, A. Electrooculogram-based virtual reality game control using blink detection and gaze calibration. In Proceedings of the 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Jaipur, India, 21–24 September 2016; pp. 2358–2362.
30. Özkahraman, A.; Ölmez, T.; Dokur, Z. Performance Improvement with Reduced Number of Channels in Motor Imagery BCI System. Sensors 2024, 25, 120.
31. Chen, X.; Wang, Y.; Gao, S.; Jung, T.P.; Gao, X. Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain-computer interface. J. Neural Eng. 2015, 12, 046008.
32. Nakanishi, M.; Wang, Y.; Chen, X.; Wang, Y.T.; Gao, X.; Jung, T.P. Enhancing detection of SSVEPs for a high-speed brain speller using task-related component analysis. IEEE Trans. Biomed. Eng. 2018, 65, 104–112.
33. Ladouce, S.; Darmet, L.; Tresols, J.J.T.; Velut, S.; Ferraro, G.; Dehais, F. Improving user experience of SSVEP BCI through low amplitude depth and high frequency stimuli design. Sci. Rep. 2022, 12, 8865.
34. Chai, X.; Zhang, Z.; Guan, K.; Liu, G.; Niu, H. A radial zoom motion-based paradigm for steady state motion visual evoked potentials. Front. Hum. Neurosci. 2019, 13, 451739.
35. Stawicki, P.; Volosyak, I. Comparison of Modern Highly Interactive Flicker-Free Steady State Motion Visual Evoked Potentials for Practical Brain–Computer Interfaces. Brain Sci. 2020, 10, 686.
36. Peguero, J.D.C.; Hernández-Rojas, L.G.; Mendoza-Montoya, O.; Caraza, R.; Antelis, J.M. SSVEP detection assessment by combining visual stimuli paradigms and no-training detection methods. Front. Neurosci. 2023, 17, 1142892.
37. Reitelbach, C.; Oyibo, K. Optimal Stimulus Properties for Steady-State Visually Evoked Potential Brain–Computer Interfaces: A Scoping Review. Multimodal Technol. Interact. 2024, 8, 6.
38. Esteves, A.; Velloso, E.; Bulling, A.; Gellersen, H. Orbits: Gaze interaction for smart watches using smooth pursuit eye movements. In Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology (UIST 2015), Charlotte, NC, USA, 8–11 November 2015; pp. 457–466.
39. Saha, S.; Baumert, M. Intra- and Inter-subject Variability in EEG-Based Sensorimotor Brain Computer Interface: A Review. Front. Comput. Neurosci. 2020, 13, 87.
40. Giles, J.; Ang, K.K.; Phua, K.S.; Arvaneh, M. A Transfer Learning Algorithm to Reduce Brain-Computer Interface Calibration Time for Long-Term Users. Front. Neuroergonomics 2022, 3, 837307.
41. Värbu, K.; Muhammad, N.; Muhammad, Y. Past, Present, and Future of EEG-Based BCI Applications. Sensors 2022, 22, 3331.
42. Bishop, C. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006; Available online: https://link.springer.com/book/9780387310732 (accessed on 9 June 2025).
43. Han, J.; Kamber, M.; Pei, J. Data Mining: Concepts and Techniques, 3rd ed.; Morgan Kaufmann: San Francisco, CA, USA, 2012; Available online: http://books.google.com/books?id=pQws07tdpjoC&pgis=1 (accessed on 10 June 2025).
44. Tiwari, S.; Goel, S.; Bhardwaj, A. MIDNN: A classification approach for the EEG based motor imagery tasks using deep neural network. Appl. Intell. 2022, 52, 4824–4843.
45. Burden, R.L.; Faires, J.D. Numerical Analysis, 9th ed.; Brooks Cole Cengage: New York, NY, USA, 2011.
46. Chapra, S.C.; Canale, R.P.; Chapra, S. Numerical Methods for Engineers: With Personal Computer Applications, 6th ed.; McGraw-Hill: Columbus, OH, USA, 2010; Available online: https://www.researchgate.net/publication/44398580 (accessed on 10 June 2025).
47. Mahmood, H.R.; Gharkan, D.K.; Jamil, G.I.; Jaish, A.A.; Yahya, S.T. Eye Movement Classification using Feature Engineering and Ensemble Machine Learning. Eng. Technol. Appl. Sci. Res. 2024, 14, 18509–18517.
48. Liu, S.; Zhang, D.; Liu, Z.; Liu, M.; Ming, Z.; Liu, T.; Suo, D.; Funahashi, S.; Yan, T. Review of brain–computer interface based on steady-state visual evoked potential. Brain Sci. Adv. 2022, 8, 258–275.
49. Sörnmo, L.; Laguna, P. Bioelectrical Signal Processing in Cardiac and Neurological Applications; 2005. Available online: https://www.sciencedirect.com/book/9780124375529/bioelectrical-signal-processing-in-cardiac-and-neurological-applications (accessed on 4 June 2025).
50. Mitra, P.; Bokil, H. Observed Brain Dynamics; Oxford University Press: Oxford, UK, 2007; pp. 1–404.
51. Welch, P.D. The Use of Fast Fourier Transform for the Estimation of Power Spectra: A Method Based on Time Averaging Over Short, Modified Periodograms. IEEE Trans. Audio Electroacoust. 1967, 15, 70–73.
52. Avots, E.; Jermakovs, K.; Bachmann, M.; Päeske, L.; Ozcinar, C.; Anbarjafari, G. Ensemble Approach for Detection of Depression Using EEG Features. Entropy 2022, 24, 211.
53. Lehmann, C.; Koenig, T.; Jelic, V.; Prichep, L.; John, R.E.; Wahlund, L.-O.; Dodge, Y.; Dierks, T. Application and comparison of classification algorithms for recognition of Alzheimer's disease in electrical brain activity (EEG). J. Neurosci. Methods 2007, 161, 342–350.
54. Zhou, Z.H.; Wu, J.; Tang, W. Ensembling neural networks: Many could be better than all. Artif. Intell. 2002, 137, 239–263.
55. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
56. Zhang, Y.; Zhou, G.; Jin, J.; Wang, X.; Cichocki, A. SSVEP recognition using common feature analysis in brain-computer interface. J. Neurosci. Methods 2015, 244, 8–15.
57. Mai, X.; Ai, J.; Ji, M.; Zhu, X.; Meng, J. A hybrid BCI combining SSVEP and EOG and its application for continuous wheelchair control. Biomed. Signal Process. Control 2020, 88, 105530.
58. Saravanakumar, D.; Reddy, M.R. A virtual speller system using SSVEP and electrooculogram. Adv. Eng. Inform. 2020, 44, 101059.
59. Chen, W.; Chen, S.K.; Liu, Y.H.; Chen, Y.J.; Chen, C.S. An Electric Wheelchair Manipulating System Using SSVEP-Based BCI System. Biosensors 2022, 12, 772.
60. Ishizuka, K.; Kobayashi, N.; Saito, K. High Accuracy and Short Delay 1ch-SSVEP Quadcopter-BMI Using Deep Learning. J. Robot. Mechatron. 2020, 32, 738–744.
