Article

Using Hybrid Feature and Classifier Fusion for an Asynchronous Brain–Computer Interface Framework Based on Steady-State Motion Visual Evoked Potentials

Bo Hu, Jun Xie, Huanqing Zhang, Junjie Liu and Hu Wang
1 School of Mechanical Engineering, Xinjiang University, Urumqi 830017, China
2 School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(11), 6010; https://doi.org/10.3390/app15116010
Submission received: 18 April 2025 / Revised: 23 May 2025 / Accepted: 26 May 2025 / Published: 27 May 2025

Abstract

This study proposes an asynchronous brain–computer interface (BCI) framework based on steady-state motion visual evoked potentials (SSMVEPs), designed to enhance the accuracy and robustness of control state recognition. The method integrates filter bank common spatial patterns (FBCSPs) and filter bank canonical correlation analysis (FBCCA) to extract complementary spatial and frequency domain features from EEG signals. These multimodal features are then fused and input into a dual-classifier structure consisting of a support vector machine (SVM) and extreme gradient boosting (XGBoost). A weighted fusion strategy is applied to combine the probabilistic outputs of both classifiers, allowing the system to leverage their respective strengths. Experimental results demonstrate that the fused FB(CSP + CCA)-(SVM + XGBoost) model achieves superior performance in distinguishing intentional control (IC) and non-control (NC) states compared to models using a single feature type or classifier. Furthermore, the visualization of feature distributions using UMAP shows improved inter-class separability when combining FBCSP and FBCCA features. These findings confirm the effectiveness of both feature-level and classifier-level fusion in asynchronous BCI systems. The proposed approach offers a promising and practical solution for developing more reliable and user-adaptive BCI applications, particularly in real-world environments requiring flexible control without external cues.

1. Introduction

Brain–computer interface (BCI) technology is a rapidly evolving interdisciplinary field that enables direct communication between the central nervous system (CNS) and external devices, bypassing traditional neuromuscular pathways [1,2]. By decoding brain signals into interpretable control commands, BCI systems have been widely applied in human–machine interaction, neurorehabilitation, and communication support for individuals with severe motor impairments [3,4,5].
Among various BCI paradigms, steady-state visual evoked potentials (SSVEPs) have been widely adopted due to their high signal-to-noise ratio (SNR), low training requirements, and strong cross-subject consistency [6,7,8]. In SSVEP-based systems, users focus on flickering visual stimuli to evoke frequency-specific EEG responses for multi-target selection. However, prolonged exposure to high-frequency flickering stimuli can induce visual fatigue, reducing user comfort and long-term usability [9]. To improve user experience, researchers have proposed the steady-state motion visual evoked potential (SSMVEP) paradigm. This approach uses periodic motion stimuli—such as expanding or contracting rings—instead of flashing lights, preserving the frequency-specific response while improving visual comfort [10]. Previous studies have shown that SSMVEPs are more suitable for long-term use, offering better user acceptance and greater signal stability in practical environments [11].
In terms of operational mode, BCI systems can be categorized into synchronous and asynchronous types [12]. Synchronous BCIs require users to issue commands within predefined temporal windows, which simplifies signal processing but limits user autonomy and interaction flexibility [13]. In contrast, asynchronous BCIs allow users to initiate commands at any arbitrary time, aligning more closely with natural interaction. However, such systems must continuously monitor EEG signals to distinguish between intentional control (IC) and non-control (NC) states in real time—a technically demanding task in real-world scenarios [14,15,16].
State detection methods for asynchronous BCIs can be broadly divided into threshold-based and classifier-based approaches [17]. Threshold-based methods, such as canonical correlation analysis (CCA), are efficient but sensitive to individual variability and environmental noise, which may degrade performance [18,19]. Recently, more studies have introduced supervised learning classifiers to enhance system adaptability and robustness. Existing methods have explored convolutional neural networks [20], complex network analysis [21], and multimodal feature fusion techniques [22]. For example, Zhang X et al. [20] proposed an FFT-CNN architecture that achieved improved accuracy in asynchronous SSVEP classification, but the model was prone to overfitting due to its high complexity. Zhang W et al. [21] employed filter banks and an optimized complex network (OCN) to extract attention-related features from multiband EEG data, followed by classification using a support vector machine (SVM). However, this approach focused primarily on attention state representation and did not fully leverage frequency-specific evoked responses inherent to SSVEP/SSMVEP paradigms. Du et al. [22] introduced a feature fusion technique combining task-related component analysis (TRCA) coefficients and power spectral density (PSD) features into a composite TRPSD descriptor, achieving enhanced performance with a stepwise linear discriminant analysis (SWLDA) classifier. Despite the progress in feature fusion techniques, many existing methods still rely on either spatial or frequency domain features alone, overlooking the benefits of their integration. Furthermore, the lack of consideration for the compatibility between specific feature types and classifier architectures often leads to suboptimal model performance.
In the context of SSMVEP-based asynchronous state detection, the core assumption is that NC-state EEG lacks pronounced frequency components corresponding to stimulus frequencies, whereas IC-state EEG exhibits clear responses at stimulus frequencies and their harmonics [23]. Therefore, a hybrid feature extraction strategy that integrates frequency domain correlation and spatial energy distribution is desirable. Canonical correlation analysis is particularly effective for quantifying the similarity between EEG signals and reference templates across multiple frequencies, requiring no training and offering high efficiency for real-time applications [24]. In parallel, the common spatial pattern (CSP) algorithm can identify spatial filters that maximize the variance difference between IC and NC states, enabling the extraction of discriminative spatial features [25]. Motivated by these insights, this study proposes a dual-feature fusion framework combining CCA and CSP-derived features, designed to capture both frequency and spatial domain information.
Moreover, different classifiers exhibit varying degrees of compatibility with different types of features [26]. SVMs have been widely applied to motor-related EEG classification tasks, showing robust performance in high-dimensional, sparse, and linearly separable data contexts [27]. In contrast, extreme gradient boosting (XGBoost) has demonstrated strong capability in modeling nonlinear relationships and handling redundant features, making it well suited for complex EEG tasks such as seizure detection and emotion recognition [28,29]. The features extracted by FBCSPs and FBCCA differ substantially in their statistical characteristics, representational structures, and dimensional profiles. FBCSPs extract channel-wise variance through linear spatial filtering, producing relatively low-dimensional features with a linear structure. These properties make FBCSP features well suited for boundary-based classifiers such as SVMs [30]. In contrast, FBCCA generates high-dimensional features by measuring frequency domain correlation with reference templates. These features are inherently nonlinear and more complex in structure, aligning well with XGBoost, which excels in nonlinear modeling and embedded feature selection [31]. The direct concatenation of these heterogeneous feature sets into a single classifier could introduce issues such as dimensionality mismatch, overfitting, and compromised generalization. To address this, the proposed framework adopts a structured feature–classifier pairing approach: the SVM is paired with FBCSP features, and XGBoost is paired with FBCCA features. This strategy leverages the strengths of each classifier in handling its respective feature type. A weighted probabilistic fusion mechanism is then employed to integrate the outputs from the two classifiers. This fusion enables the system to exploit the complementary strengths of both classifiers without diminishing discriminative capability. By employing this dual-branch classifier fusion scheme—an SVM for FBCSPs and XGBoost for FBCCA—the framework achieves both classifier-specific adaptability and coordinated optimization of feature–model compatibility, thereby enhancing robustness and generalization in asynchronous BCI state detection.
In summary, this study proposes a dual-level fusion framework for asynchronous SSMVEP-based BCI systems, incorporating a three-tier collaborative structure that integrates heterogeneous features, heterogeneous classifiers, and decision-level fusion. At the feature level, spatial domain and frequency domain features are extracted using FBCSP and FBCCA, enabling joint modeling of spatial distributions and frequency-locked responses in EEG signals. At the classifier level, SVM and XGBoost are respectively employed to process the FBCSP and FBCCA features, and their outputs are combined using a weighted probabilistic fusion strategy to achieve structured feature–classifier alignment and collaborative optimization.
This study systematically introduces a novel dual-feature fusion mechanism based on FBCSPs and FBCCA into the asynchronous SSMVEP framework, achieving joint modeling of spatial and frequency information. Furthermore, a classifier ensemble is constructed based on feature–classifier compatibility, where an SVM and XGBoost are paired with their respective feature types, enhancing the model’s adaptability to heterogeneous features and improving generalization performance. The proposed method offers a novel and practical approach for developing high-performance, scalable asynchronous BCI systems.

2. Materials and Methods

2.1. Subjects

A total of 10 participants aged between 23 and 26 years were recruited for this study. All participants had normal or corrected-to-normal vision and no history of psychiatric disorders. Written informed consent was obtained from all participants after a full explanation of the experimental procedures.

2.2. Paradigm Design

This study employed a ring-shaped contraction–expansion SSMVEP paradigm. The SSMVEP paradigm consisted of a high-contrast background and a low-contrast annular stimulus. The primary parameters of the low-contrast ring included its inner and outer diameters. The high-contrast background determined the maximum diameter of the entire visual paradigm. The area ratio (C) between the ring and the background was defined as follows:
$C = \dfrac{S_1}{S}$ (1)
In this equation, $S_1$ represents the total area of the annular stimulus, and $S$ denotes the total area of the background.
At the onset of the paradigm, the outer diameters of the concentric rings were arranged in an arithmetic sequence. Given the maximum diameter of the visual field ($r_{max}$), the outer diameter ($r_1^i$) and inner diameter ($r_2^i$) of each ring were related to the area as follows:
$r_1^i = (2i - 1)\, r_{max} / 2n$ (2)
$S^i = \pi (r_1^i)^2 - \pi (r_1^{i-1})^2$ (3)
$S_1^i = \pi (r_1^i)^2 - \pi (r_2^i)^2$ (4)
In these equations, $i$ denotes the index of the $i$-th ring counted from the center outward, and $n$ represents the total number of rings. $S^i$ indicates the area enclosed between the outer diameter of the $i$-th ring and that of the $(i-1)$-th ring, while $S_1^i$ refers to the area of the $i$-th annular ring.
To ensure that the overall luminance of the paradigm remains constant during motion, the area ratio $C$ between the annular rings and the background must be maintained at a fixed value. Therefore, the inner diameter of each ring needs to be determined through calculation. Based on Equations (1)–(4), the inner diameter $r_2^i$ of the $i$-th ring can be computed as follows:
$r_2^i = \sqrt{(1 - C)(r_1^i)^2 + C\,(r_1^{i-1})^2}$ (5)
Let the variation range of the outer diameter of each annular ring be defined as $[\,i\, r_{max}/n,\ (i+1)\, r_{max}/n\,]$. The outer diameter follows a sinusoidal motion trajectory. Accordingly, the time-dependent change in the outer diameter $r_1^i$ of the $i$-th ring can be expressed as:
$r_1^i = \left( i + \dfrac{1 - \cos(2\pi f t)}{2} \right) r_{max} / n$ (6)
In this equation, $f$ represents the stimulation frequency, and $t$ denotes the stimulation time.
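To make the stimulus geometry concrete, the following is a minimal NumPy sketch of Equations (1)–(6) as reconstructed above. The parameter values ($n = 4$ rings, $C = 0.5$, $r_{max} = 1.0$, $f = 3$ Hz) are illustrative assumptions, not the paper's reported settings.

```python
import numpy as np

def ring_radii(t, n=4, C=0.5, r_max=1.0, f=3.0):
    """Outer (r1) and inner (r2) radii of the n rings at time t (seconds).

    n, C, r_max, and f are illustrative values, not the paper's settings.
    """
    i = np.arange(1, n + 1)
    # Eq. (6): each outer radius oscillates sinusoidally at frequency f
    # within its band [i*r_max/n, (i+1)*r_max/n].
    r1 = (i + (1 - np.cos(2 * np.pi * f * t)) / 2) * r_max / n
    # Outer radius of the (i-1)-th ring; r1^0 = 0 at the center.
    r1_prev = np.concatenate(([0.0], r1[:-1]))
    # Eq. (5): inner radius chosen so the ring/background area ratio C
    # (Eq. 1) stays constant, keeping overall luminance unchanged.
    r2 = np.sqrt((1 - C) * r1**2 + C * r1_prev**2)
    return r1, r2

r1, r2 = ring_radii(t=0.1)
print(np.round(r1, 3), np.round(r2, 3))
```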
In this study, eight types of ring-shaped stimulation paradigms with distinct flickering frequencies (i.e., 3.00 Hz, 3.25 Hz, 3.50 Hz, 3.75 Hz, 4.00 Hz, 4.25 Hz, 4.50 Hz, and 4.75 Hz) were employed, as illustrated in Figure 1. In addition, to evaluate the proposed algorithm’s ability to distinguish between idle and control states, a designated idle region was included to collect EEG data corresponding to the idle state. Participants were instructed to fixate within a dashed rectangular frame during idle trials to ensure that no control commands were generated. As a result, a total of nine target classes were collected in this study, including eight stimulation conditions and one idle condition.

2.3. Data Acquisition

EEG signals were recorded using the g.USBamp system (g.tec medical engineering GmbH, Schiedlberg, Austria). EEG electrodes were positioned in accordance with the international 10–20 system. The reference electrode (A1) was placed on the unilateral earlobe of the subject, while the ground electrode was located at Fpz, over the prefrontal cortex. Since SSMVEPs elicit the strongest responses in the occipital region of the brain, six electrodes—O1, Oz, O2, PO3, POz, and PO4—were selected for data acquisition in this area, as illustrated in Figure 2. EEG signals were sampled at 1200 Hz, followed by preprocessing steps that included a 4th-order Butterworth notch filter (48–52 Hz) to eliminate powerline interference and an 8th-order Butterworth band-pass filter (2–100 Hz) to suppress low- and high-frequency noise. The filtered signals were subsequently downsampled to 240 Hz. Throughout the recording session, electrode impedance was maintained below 10 kΩ.
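As a concrete illustration of this preprocessing chain, the sketch below implements it with SciPy. The use of zero-phase `filtfilt` and SciPy's `decimate` (with its built-in anti-aliasing filter) are implementation assumptions, since the paper does not specify these details.

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

FS = 1200  # acquisition sampling rate (Hz)

def preprocess(eeg):
    """eeg: array of shape (n_channels, n_samples) sampled at 1200 Hz."""
    # 4th-order Butterworth band-stop, 48-52 Hz (butter doubles the order
    # for band filters, so N=2 yields a 4th-order notch).
    b, a = butter(2, [48, 52], btype="bandstop", fs=FS)
    x = filtfilt(b, a, eeg, axis=-1)
    # 8th-order Butterworth band-pass, 2-100 Hz (N=4 -> order 8).
    b, a = butter(4, [2, 100], btype="bandpass", fs=FS)
    x = filtfilt(b, a, x, axis=-1)
    # Downsample 1200 Hz -> 240 Hz; decimate applies anti-aliasing first.
    return decimate(x, 5, axis=-1)
```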

2.4. Experimental Procedure

During the experiment, participants were instructed to sequentially gaze at eight target stimuli, each corresponding to a different stimulation frequency (3.00 Hz, 3.25 Hz, 3.50 Hz, 3.75 Hz, 4.00 Hz, 4.25 Hz, 4.50 Hz, and 4.75 Hz). Each frequency-specific target was presented 20 times, with idle fixation trials randomly interspersed among the eight stimulation conditions to simulate the non-control state. Each trial lasted 5 s, consisting of a 1 s cue period, a 3 s stimulation period, and a 1 s rest period. The entire experiment comprised 10 blocks, with each block containing 16 trials. After the completion of each block, participants were given a 2 min break to minimize fatigue.

2.5. Feature Extraction and Classification Algorithms

2.5.1. Common Spatial Patterns

The CSP is a spatial filtering algorithm specifically designed for binary classification tasks, and it is widely used in the field of brain signal processing. This algorithm effectively extracts spatial distribution features from multichannel EEG data. CSPs have been extensively applied in brain–computer interface systems and other neural signal analysis tasks, demonstrating particularly strong performance in decoding motor imagery, such as hand movement intentions. The core principle of CSPs involves the use of matrix diagonalization techniques to identify an optimal set of spatial filters. These filters are then used to project the EEG data in a way that maximizes the variance difference between two classes. This approach enables the extraction of feature vectors with enhanced class separability.
Assume two classes of EEG data, $X_1$ and $X_2$, with each trial having dimensions of $N \times T$ (number of channels × number of time points). By applying Equations (7) and (8) to $X_1$ and $X_2$, the average sample covariance matrices for the two classes, $\bar{R}_1$ and $\bar{R}_2$, can be obtained.
$\bar{R}_1 = \dfrac{1}{M_1} \sum_{i=1}^{M_1} \dfrac{X_{1,i} X_{1,i}^T}{\mathrm{trace}(X_{1,i} X_{1,i}^T)}$ (7)
$\bar{R}_2 = \dfrac{1}{M_2} \sum_{i=1}^{M_2} \dfrac{X_{2,i} X_{2,i}^T}{\mathrm{trace}(X_{2,i} X_{2,i}^T)}$ (8)
In these equations, $M_1$ and $M_2$ represent the total number of samples in $X_1$ and $X_2$, respectively. $X_{1,i}^T$ and $X_{2,i}^T$ denote the transpose of the $i$-th sample of $X_1$ and $X_2$, respectively. $\mathrm{trace}(X)$ refers to the sum of the elements along the main diagonal of matrix $X$.
Next, the composite spatial covariance matrix is calculated using Equation (9), and eigenvalue decomposition is performed on the resulting composite covariance matrix $R$ using Equation (10):
$R = \bar{R}_1 + \bar{R}_2$ (9)
$R = U \lambda U^T$ (10)
In these equations, $\lambda$ denotes the diagonal matrix of eigenvalues of the covariance matrix $R$, $U$ represents the matrix of corresponding eigenvectors, and $U^T$ is the transpose of $U$.
After obtaining the eigenvalue matrix $\lambda$ and eigenvector matrix $U$ through decomposition, the whitening transformation matrix $P$ is computed as follows:
$P = \lambda^{-1/2}\, U^T$ (11)
The whitening matrix $P$ is then applied to transform the average covariance matrices $\bar{R}_1$ and $\bar{R}_2$, as follows:
$S_1 = P \bar{R}_1 P^T$ (12)
$S_2 = P \bar{R}_2 P^T$ (13)
Subsequently, principal component decomposition is performed on $S_1$ and $S_2$:
$S_1 = B \Lambda_1 B^T$ (14)
$S_2 = B \Lambda_2 B^T$ (15)
Using Equations (14) and (15), it can be demonstrated that the eigenvector matrices of $S_1$ and $S_2$ are identical. Under this condition, the eigenvalue matrices of $S_1$ and $S_2$ satisfy the following relationship:
$\Lambda_1 + \Lambda_2 = I$ (16)
According to Equation (16), the eigenvalue matrices of $S_1$ and $S_2$ are complementary and sum to the identity matrix. This implies that when an eigenvalue of $S_1$ reaches its maximum, the corresponding eigenvalue of $S_2$ attains its minimum. Therefore, the eigenvalues of $S_1$ are sorted in descending order, while those of $S_2$ are sorted in ascending order. The eigenvectors corresponding to the $m$ largest eigenvalues of $S_1$ and the $m$ smallest eigenvalues of $S_2$ are selected and combined to form $\hat{B}$. Consequently, the final spatial filter projection matrix $W$ is obtained as:
$W = \hat{B}^T P$ (17)
Let $X$ denote the input EEG signal. By applying the spatial filter, the filtered signal $Z$ can be obtained as follows:
$Z = W X$ (18)
Subsequently, the variance of the filtered signal is calculated, normalized, and then log-transformed to obtain the corresponding feature vector $f$ for the input signal $X$, as shown below:
$f = \log\left( \dfrac{\mathrm{var}(Z)}{\mathrm{sum}(\mathrm{var}(Z))} \right)$ (19)
In this equation, $\mathrm{var}(Z)$ denotes the variance of the filtered signal $Z$.
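The pipeline of Equations (7)–(19) can be summarized in the compact NumPy sketch below. It is a generic CSP implementation rather than the authors' code, and $m$ (the number of filter pairs retained from each end of the eigenvalue spectrum) is an illustrative choice.

```python
import numpy as np

def train_csp(X1, X2, m=3):
    """Train CSP filters. X1, X2: arrays of shape (trials, channels, samples)."""
    def avg_norm_cov(X):
        # Trace-normalized covariance, averaged over trials (Eqs. 7-8).
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)

    R1, R2 = avg_norm_cov(X1), avg_norm_cov(X2)
    R = R1 + R2                                   # composite covariance, Eq. (9)
    lam, U = np.linalg.eigh(R)                    # eigendecomposition, Eq. (10)
    P = np.diag(lam ** -0.5) @ U.T                # whitening matrix, Eq. (11)
    S1 = P @ R1 @ P.T                             # whitened class covariance, Eq. (12)
    lam1, B = np.linalg.eigh(S1)                  # Eq. (14); S2 shares eigenvectors B
    # By Eq. (16) the largest eigenvalues of S1 pair with the smallest of S2,
    # so take the m eigenvectors from each end of the (ascending) spectrum.
    B_hat = np.hstack([B[:, -m:], B[:, :m]])
    return B_hat.T @ P                            # spatial filters W, Eq. (17)

def csp_features(W, X):
    """Log-variance features of one trial X (channels, samples), Eqs. (18)-(19)."""
    Z = W @ X
    var = Z.var(axis=-1)
    return np.log(var / var.sum())
```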

2.5.2. Canonical Correlation Analysis

CCA is an unsupervised machine learning algorithm used to explore the relationship between two sets of variables, $X = (X_1, X_2, X_3, \ldots, X_n)$ and $Y = (Y_1, Y_2, Y_3, \ldots, Y_n)$. The fundamental idea of CCA is to find two sets of non-zero vectors, $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$ and $\beta = (\beta_1, \beta_2, \ldots, \beta_n)$, which are used to linearly combine the variables in each set into new variables, $U$ and $V$. Specifically, the linear combinations are defined as $U = \alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_n X_n$ and $V = \beta_1 Y_1 + \beta_2 Y_2 + \cdots + \beta_n Y_n$. By applying the coefficient vectors $\alpha$ and $\beta$, the analysis shifts from examining the relationship between the original variable sets, $X$ and $Y$, to investigating the correlation between the derived variables, $U$ and $V$.
In the process of using CCA to identify SSMVEP signals, a set of sine and cosine functions must be constructed as reference signals, as defined by the following equation:
$Y = \begin{pmatrix} \sin(2\pi f_i t) \\ \cos(2\pi f_i t) \\ \vdots \\ \sin(2\pi H f_i t) \\ \cos(2\pi H f_i t) \end{pmatrix}, \quad t = \dfrac{1}{F_s}, \dfrac{2}{F_s}, \ldots, \dfrac{N}{F_s}$ (20)
In this equation, $H$ denotes the number of harmonics, $N$ represents the number of sampling points, and $F_s$ is the sampling frequency.
The recorded evoked EEG signal is denoted as $X = (X_1, X_2, X_3, \ldots, X_C)$, where $C$ is the number of electrode channels. By solving Equation (21), a set of canonical correlation coefficients between the EEG signal $X$ and the reference signal $Y$ can be obtained, represented as $\rho_{f_i} = (\rho_1, \rho_2, \rho_3, \ldots, \rho_C)$.
$\rho(X, Y) = \dfrac{E(\alpha^T X Y^T \beta)}{\sqrt{E(\alpha^T X X^T \alpha)\, E(\beta^T Y Y^T \beta)}}$ (21)
After each data segment is processed, a correlation coefficient vector $\rho_{f_i}$ is obtained for each flickering frequency $f_i$. The frequency associated with the maximum correlation coefficient $\rho_x$ across these vectors is selected as the final identified target frequency, denoted as $f_{target}$, as shown in Equation (22):
$f_{target} = \underset{f_i}{\mathrm{argmax}}\ \rho_x$ (22)
Since the CCA algorithm employs the correlation coefficient corresponding to each stimulus frequency as the basis for classification, its accuracy can to some extent reflect the quality of the evoked signals induced by the paradigm.
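As an illustration, the sketch below reproduces this recognition procedure (Equations (20)–(22)) with scikit-learn's CCA. The harmonic count $H = 3$ is an assumed value, while the candidate frequencies follow Section 2.2.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def make_reference(f, n_samples, fs=240, H=3):
    """Sine/cosine reference matrix of Eq. (20), shape (N, 2H). H is assumed."""
    t = np.arange(1, n_samples + 1) / fs
    return np.vstack([fn(2 * np.pi * h * f * t)
                      for h in range(1, H + 1)
                      for fn in (np.sin, np.cos)]).T

def cca_coeff(X, Y):
    """Largest canonical correlation between EEG X (N, channels) and reference Y."""
    cca = CCA(n_components=1)
    U, V = cca.fit_transform(X, Y)
    return np.corrcoef(U[:, 0], V[:, 0])[0, 1]

def identify_frequency(eeg, freqs=(3.0, 3.25, 3.5, 3.75, 4.0, 4.25, 4.5, 4.75)):
    """eeg: (channels, samples); returns the frequency with the largest rho (Eq. 22)."""
    X = eeg.T
    rhos = [cca_coeff(X, make_reference(f, X.shape[0])) for f in freqs]
    return freqs[int(np.argmax(rhos))]
```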

2.5.3. Filter Bank Combination Characteristics

Filter banks (FBs) are a commonly used signal processing technique that decomposes an input signal into multiple frequency sub-bands. The technique has been widely applied in EEG signal analysis and has given rise to algorithms such as filter bank common spatial patterns (FBCSPs) and filter bank canonical correlation analysis (FBCCA). In this approach, the original EEG signal is first decomposed into five sub-band signals using a set of predefined band-pass filters. Each sub-band covers a specific frequency range and facilitates the effective extraction of frequency-specific neural activity features while preserving the phase information of the signal. The five sub-bands span the frequency range of 2–40 Hz and are defined as follows: 2–8 Hz, 8–14 Hz, 14–20 Hz, 20–30 Hz, and 30–40 Hz. This configuration encompasses commonly observed EEG rhythms, including the delta, theta, alpha, beta, and low-gamma bands, thereby providing a comprehensive frequency representation for subsequent feature extraction and classification tasks.
Features are then extracted from the filtered signals using CSP and CCA methods. Finally, the CSP and CCA features are combined through a cross-feature fusion strategy. The detailed procedure is as follows:
First, the input data $X$ are processed using a set of predefined filters, resulting in a filter bank denoted as $W_z$:
$W_z = (w_1, w_2, \ldots, w_n)$ (23)
In this equation, $w_i$ denotes the $i$-th filter, and $n$ represents the total number of filters in the filter bank.
The input data are then transformed using the filter bank $W_z$, resulting in the filtered data $\tilde{X}$, as expressed by the following equation:
$\tilde{X} = W_z X$ (24)
Feature extraction is performed on the filter bank outputs using the CCA algorithm (Equation (21)) and the CSP algorithm (Equation (19)), resulting in the following expressions:
$\rho(\tilde{X}, Y) = \big(\rho(w_1 X, Y), \ldots, \rho(w_n X, Y)\big)$ (25)
$f(\tilde{X}) = \big(f(w_1 X), \ldots, f(w_n X)\big)$ (26)
The combined feature $T_{f,\rho}$ is obtained by cross-fusing the features extracted using the CSP and CCA algorithms:
$T_{f,\rho} = \mathrm{Concat}\big(f(w_1 X), \rho(w_1 X, Y), \ldots, f(w_n X), \rho(w_n X, Y)\big)$ (27)
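The following sketch ties Equations (23)–(27) together, reusing the `csp_features`, `make_reference`, and `cca_coeff` helpers from the earlier sketches. For brevity a single spatial filter matrix `W` is applied to every sub-band, whereas the full FBCSP method trains one per band; the sub-band edges follow Section 2.5.3.

```python
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = [(2, 8), (8, 14), (14, 20), (20, 30), (30, 40)]  # five sub-bands, Sec. 2.5.3

def filter_bank(eeg, fs=240):
    """Decompose (channels, samples) EEG into the five sub-band signals (Eqs. 23-24)."""
    out = []
    for lo, hi in BANDS:
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        out.append(filtfilt(b, a, eeg, axis=-1))
    return out

def fused_features(eeg, W, freqs, fs=240):
    """Concatenate per-band CSP and CCA features into T_{f,rho} (Eqs. 25-27)."""
    feats = []
    for xb in filter_bank(eeg, fs):
        feats.append(csp_features(W, xb))                          # f(w_i X)
        feats.append([cca_coeff(xb.T, make_reference(f, xb.shape[-1], fs))
                      for f in freqs])                             # rho(w_i X, Y)
    return np.concatenate([np.ravel(f) for f in feats])
```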

2.5.4. Classifier Design and Fusion Strategy

Considering that the extracted features require classification, two classifiers were selected based on the characteristics of the respective features. For features extracted using the CSP algorithm, an SVM was adopted, as the CSP aims to maximize variance between two classes, making an SVM a suitable choice for binary classification. In contrast, the features derived from the CCA algorithm tend to have higher dimensionality due to the presence of multiple stimulation frequencies; therefore, XGBoost, an ensemble learning method, was chosen to better handle this complexity. Accordingly, in this study, an SVM and XGBoost were used to classify features extracted by the CSP and CCA algorithms, respectively.
XGBoost is well suited for handling a large number of features and can automatically learn nonlinear relationships among them, making it particularly effective for complex and high-dimensional datasets. In contrast, SVMs excel at identifying optimal decision boundaries, especially in binary classification tasks. Given the nature of the combined features—CCA-derived features represent multi-class frequency information, while CSP-derived features capture binary spatial energy differences—XGBoost may outperform an SVM when classifying CCA features, whereas an SVM may be more effective than XGBoost for CSP features. Therefore, a weighted (SVM + XGBoost) classifier is proposed to predict the combined FB(CSP + CCA) features. This hybrid approach enables the classifier to learn the relative importance of different feature types, potentially enhancing the model’s ability to recognize asynchronous states. The detailed design is as follows:
$p = (1 - q)\,\mathrm{SVM}(T_{f,\rho}) + q\,\mathrm{XGBoost}(T_{f,\rho})$ (28)
In this equation, $p$ represents the final predicted probability for the combined feature $T_{f,\rho}$, and $q$ denotes the weighting coefficient. $T_{f,\rho}$ is the fused feature vector constructed from both CSP and CCA features. $\mathrm{SVM}(T_{f,\rho})$ and $\mathrm{XGBoost}(T_{f,\rho})$ correspond to the predicted probabilities generated by the SVM and XGBoost classifiers, respectively, based on the input feature $T_{f,\rho}$.
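A minimal realization of the weighted fusion in Equation (28) with scikit-learn and the xgboost package is sketched below; the classifier hyperparameters are placeholders, not the authors' reported settings. Setting `probability=True` on the SVC is what exposes the probabilistic output the fusion requires.

```python
import numpy as np
from sklearn.svm import SVC
from xgboost import XGBClassifier

svm = SVC(kernel="rbf", probability=True)    # enables predict_proba
xgb = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")

def fit_fusion(T_train, y_train):
    """Fit both classifiers on the fused feature vectors T_{f,rho}."""
    svm.fit(T_train, y_train)
    xgb.fit(T_train, y_train)

def predict_fusion(T, q=0.45):
    """Eq. (28): p = (1 - q) * SVM(T) + q * XGBoost(T), then argmax over classes."""
    p = (1 - q) * svm.predict_proba(T) + q * xgb.predict_proba(T)
    return p.argmax(axis=1)
```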

2.6. Statistical Analyses

For the classification accuracy of the different methods for each subject, we adopted the paired t-test to evaluate the statistical significance of the differences between methods, and Bonferroni correction was applied to adjust the p values. A corrected p value of less than 0.05 was taken as the criterion for statistical significance.
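For concreteness, the sketch below shows this procedure with SciPy. The per-subject accuracy arrays are dummy placeholder values, and the number of comparisons used for the Bonferroni factor is an assumption.

```python
import numpy as np
from scipy.stats import ttest_rel

# Dummy per-subject accuracies for two methods (10 subjects); not real data.
acc_fused = np.array([0.95, 0.91, 0.94, 0.92, 0.93, 0.94, 0.92, 0.93, 0.94, 0.93])
acc_svm   = np.array([0.93, 0.92, 0.92, 0.90, 0.92, 0.92, 0.91, 0.91, 0.92, 0.91])

t, p = ttest_rel(acc_fused, acc_svm)       # paired t-test across subjects
n_comparisons = 3                          # assumed number of pairwise comparisons
p_corrected = min(p * n_comparisons, 1.0)  # Bonferroni adjustment
print(f"t = {t:.2f}, corrected p = {p_corrected:.4f}")
```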

3. Results

3.1. Combinatorial Characterization

First, the features extracted by the CCA algorithm from 2 s data segments were analyzed. Dimensionality reduction was performed using the UMAP algorithm on the feature sets derived from both CCA and FBCCA, and the results are shown in Figure 3. Figure 3a presents the reduced features obtained from the FBCCA algorithm, while Figure 3b illustrates those from the standard CCA algorithm. As shown in Figure 3a, although a small number of features corresponding to the eight stimulation frequencies and the idle state overlap, the FBCCA-derived features still exhibit relatively clear boundaries. In contrast, Figure 3b shows that the boundaries between features extracted by the CCA algorithm are more blurred, and the separation between different frequencies becomes less distinct. Overall, the features extracted by FBCCA exhibit smaller intra-class distances and larger inter-class distances.
Subsequently, features extracted by the CSP algorithm using 2 s data segments were analyzed. UMAP was applied to both the CSP and FBCSP feature sets, and the results are shown in Figure 4. Figure 4a corresponds to the dimensionality-reduced features from the CSP, and Figure 4b corresponds to those from the FBCSP. As observed in Figure 4, both methods show some degree of boundary confusion between the idle and control states after dimensionality reduction. However, the FBCSP exhibits smaller intra-class distances and fewer instances of feature overlap, indicating better class separability compared to the CSP.
The features extracted by FBCSP and FBCCA were then standardized separately and fused. UMAP was applied to the combined features, and the results are presented in Figure 5. Compared to Figure 4a, Figure 5 shows that the class separability between different states improved significantly after combining FBCCA and FBCSP features, with an increase in inter-class distances and a reduction in feature confusion. This indicates that the fusion of features from FBCCA and the FBCSP yields superior results in the low-dimensional space.
This improvement can be attributed to the complementary nature of the two algorithms: the FBCSP emphasizes the spatial characteristics of EEG signals, while FBCCA focuses on frequency-specific information related to the stimulus response. Although the FBCSP can effectively distinguish between idle and control states based on spatial energy differences, it cannot explicitly determine whether these differences arise from stimulus-induced brain oscillations. In contrast, CCA can precisely capture correlations between EEG signals and target stimulation frequencies. Therefore, combining both algorithms leverages their respective strengths and results in better feature boundary formation in the dimensionality-reduced space than using either FBCCA or the FBCSP alone.
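A minimal sketch of this visualization step with the umap-learn package is shown below; the UMAP hyperparameters are illustrative defaults rather than values reported in the paper.

```python
import numpy as np
import umap  # umap-learn package

def embed_features(T, labels):
    """T: (n_trials, n_features) fused feature matrix; returns a 2-D embedding.

    Plot emb[:, 0] vs. emb[:, 1], colored by class label, to reproduce the
    kind of scatter shown in Figures 3-5.
    """
    reducer = umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.1,
                        random_state=0)  # illustrative defaults
    emb = reducer.fit_transform(T)
    return emb
```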

3.2. Analysis of Weighted Results

Following the above feature dimensionality reduction analysis, the FB(CSP + CCA) combination was shown to produce more discriminative low-dimensional features. To further evaluate the classification performance of the FB(CSP + CCA) features, both XGBoost and SVM classifiers were applied. The results are illustrated in Figure 6.
As shown in Figure 6a,b, the FB(CSP + CCA) features achieved the highest classification accuracy under both XGBoost and SVM classifiers when compared with other individual feature sets, which is consistent with the findings from the previous dimensionality reduction analysis. Furthermore, it can be observed that the classification performance of CCA and CSP features varies slightly depending on the classifier used. To further explore this effect, classification results for the same feature types under different classifiers were plotted in the same figure, as shown in Figure 7.
In Figure 7a, the classification accuracy of FBCSP-SVM is higher than that of FBCSP-XGBoost, and CSP-SVM outperforms CSP-XGBoost. Conversely, in Figure 7b, FBCCA-XGBoost and CCA-XGBoost achieve higher accuracies than their respective SVM counterparts. These results suggest that for features extracted using CSPs and FBCSPs, the SVM provides better classification performance, while for features extracted using CCA and FBCCA, XGBoost is more effective.
From a theoretical standpoint, an SVM utilizes kernel functions to map data into a higher-dimensional space, where it becomes linearly separable. CCA features, projected based on different stimulus frequencies, tend to be sparse in the original feature space. When these sparse features are mapped into a high-dimensional space, the increased sparsity may hinder the performance of the SVM in terms of learning and generalization. In contrast, CSP features are designed to maximize inter-class differences, making them well suited for binary classification. The distribution of CSP features is more compact in the feature space, thereby enhancing the effectiveness of SVM classification. On the other hand, XGBoost handles high-dimensional data more efficiently and captures complex nonlinear relationships through its tree-based structure, making it well suited for multi-frequency CCA-based feature vectors.
Considering the complementary strengths of the SVM and XGBoost for different types of features, a weighted ensemble classifier combining the outputs of both models was proposed. The combined output was calculated using a weighting coefficient, where the sum of the weights assigned to XGBoost and SVM equals 1. When the XGBoost weight was set to 1, the SVM weight was 0, and vice versa. The average classification performance for different XGBoost weights is shown in Figure 8.
As shown in Figure 8, the highest classification accuracy was achieved when the weight assigned to the XGBoost classifier was 0.45 and the weight assigned to the SVM classifier was 0.55. Based on this result, the ensemble model was configured with XGBoost and SVM weights set to 0.45 and 0.55, respectively.
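The weight selection behind Figure 8 can be reproduced with a simple grid search, sketched below using the fitted `svm` and `xgb` models from the earlier fusion sketch; the step size of 0.05 is an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def sweep_weights(T_val, y_val, step=0.05):
    """Evaluate Eq. (28) over a grid of XGBoost weights q and return the best."""
    p_svm = svm.predict_proba(T_val)
    p_xgb = xgb.predict_proba(T_val)
    best_q, best_acc = 0.0, 0.0
    for q in np.arange(0, 1 + step, step):
        y_pred = ((1 - q) * p_svm + q * p_xgb).argmax(axis=1)
        acc = accuracy_score(y_val, y_pred)
        if acc > best_acc:
            best_q, best_acc = q, acc
    return best_q, best_acc  # the paper reports q = 0.45 as optimal
```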
The combined features were then classified using the SVM, XGBoost, and the weighted ensemble classifiers. The classification results are presented in Figure 9, and the average performance was statistically evaluated using a paired-sample t-test.
As shown in Figure 9, due to individual variability, the classification accuracy for participant S2 did not improve when using the SVM + XGBoost ensemble classifier; in fact, it was slightly lower than that achieved with the standalone SVM classifier. For all other participants, the ensemble classifier yielded a noticeable improvement. On average, the SVM + XGBoost classifier achieved an accuracy of 93.1%, representing a 1.5% improvement over the SVM classifier (p < 0.05) and a 3.7% improvement over the XGBoost classifier (p < 0.001).
These results indicate that the ensemble classifier combining the SVM and XGBoost significantly enhances the overall classification performance. This improvement can be attributed to the complementary strengths of the two classifiers: the SVM is particularly effective for binary spatial features extracted by CSPs, while XGBoost performs better on multi-frequency features derived from CCA. To further validate this observation, the classifiers were also applied to the raw EEG signals, and the results are shown in Figure 10.
As shown in Figure 10, for most participants, the highest classification accuracy was achieved using the SVM classifier. Only participant S1 showed a 1.9% improvement in accuracy when using the ensemble classifier. On average, the SVM classifier outperformed both the ensemble and the XGBoost classifiers, with the SVM + XGBoost ensemble achieving a mean accuracy of 74.9%, which was 2% lower than that of the SVM classifier alone (p < 0.05). These findings suggest that the ensemble classifier does not provide additional benefits when applied directly to raw EEG data and may, in fact, reduce model performance.
Furthermore, a comparison between classification results using fused features and those using raw EEG data demonstrates a significant performance gain. Specifically, the accuracy improved by 23.7% under XGBoost (p < 0.001), 14.8% under SVM (p < 0.001), and 18.3% under the SVM + XGBoost ensemble classifier (p < 0.001). These findings provide strong evidence that using fused FB(CCA + CSP) features as classifier inputs substantially improves classification accuracy compared to using raw EEG signals.

4. Discussion

This study addresses the challenge of reliably distinguishing between IC and NC states in asynchronous BCIs. It proposes a hybrid SSMVEP-BCI system that integrates multidimensional features with a structurally complementary classification strategy. The system combines FBCSPs and FBCCA to jointly extract spatial and frequency information from EEG signals, effectively enhancing inter-class separability during the feature modeling phase. For classification, the SVM and XGBoost are fused with weighted integration to improve both the discriminative power and stability of the model. Experimental results demonstrate that under asynchronous conditions, the proposed system significantly outperforms methods based on single-feature extraction or traditional classification architectures, validating its effectiveness and practicality.
FBCSPs and FBCCA represent two fundamental processing pathways for extracting information from EEG signals: spatial patterns and frequency features, respectively. FBCSPs, which are derived from the CSP method, apply spatial filtering to maximize the covariance difference between two classes. This reflects category-specific differences in the spatial distribution of cortical activity and is particularly effective for capturing cooperative patterns across multiple channels in tasks such as attentional shifts or motor imagery [32,33]. FBCCA, on the other hand, is based on CCA, which assumes that the brain exhibits phase-locked responses at specific frequencies to periodic external stimuli (e.g., SSVEPs/SSMVEPs). It extracts neural coherence features by maximizing the correlation between EEG signals and reference sinusoidal sequences [34,35]. These two feature types model distinct neural mechanisms: FBCSPs focus on spatial distribution patterns, while FBCCA emphasizes frequency-locked coupling. They are not redundant but rather represent highly complementary neural encoding pathways.
Moreover, both FBCSPs and FBCCA employ a filter bank strategy to decompose and reconstruct signals across multiple frequency sub-bands. This approach helps mitigate the distribution instability caused by individual frequency drift and channel jitter, and it has become a key trend in recent steady-state visual evoked potential (SSVEP) research [36,37]. By integrating features from different frequency bands and decoding pathways, this study achieves a robust and biologically grounded feature representation. This enriched signal foundation enhances the discrimination of complex IC/NC states in asynchronous tasks and improves resistance to noise. The fusion of FBCSPs and FBCCA enables synergistic modeling of the multichannel encoding characteristics in SSMVEP signals, enhancing the structural representation of neural information. Visualization results demonstrate superior inter-class separability for the fused features, supporting our modeling hypothesis regarding the spatial–frequency coupling mechanism. By grounding its fusion strategy in the complementarity of these neural processing pathways, this study offers advantages in physiological plausibility and model interpretability.
In terms of classifier design, this study adopts the principle of feature structure–classifier preference matching to construct a dual-classifier framework integrating an SVM and XGBoost. The SVM, which is grounded in the theory of margin maximization, demonstrates a strong generalization ability when dealing with structured, low-dimensional, and boundary-concentrated data. It is particularly suited for spatially projected features that are linearly or nearly linearly separable [38]. In EEG signal processing, SVMs are widely used for classification tasks, such as motor imagery and SSVEP, due to their robustness and resistance to overfitting in high-dimensional spaces [39,40]. In contrast, XGBoost, as a representative of ensemble tree models, excels in handling nonlinear, high-dimensional data with strong feature interdependencies [41]. It is especially effective for modeling complex frequency domain features [42], such as the high-dimensional sub-band response features produced by FBCCA.
The weighted fusion mechanism we designed goes beyond traditional majority voting. Instead, it establishes a structurally synergistic decision-making process by analyzing the performance weights of each classifier across different feature types. This approach balances discriminative power and resistance to interference, avoiding the interpretability challenges often associated with deep learning models. Compared to recent “black-box” deep learning-based EEG decoding methods [43], our dual-classifier structure maintains high interpretability and flexible parameter tuning, making it particularly well suited for high-reliability, asynchronous systems where minimizing false activations is critical. SVMs excel at learning geometric boundaries in the low-dimensional space shaped by FBCSPs, while XGBoost effectively handles the high-dimensional, nonlinear feature distributions derived from FBCCA. By fusing the outputs of both classifiers with learned weights, the system achieves enhanced discriminative stability in high-dimensional feature space and avoids overfitting to specific feature sets that can occur with single models. This classifier fusion represents not only an improvement in model accuracy but also an architectural optimization of the fusion strategy, highlighting the system’s innovation in information integration mechanisms.
Despite the encouraging results achieved in this study, several issues remain that warrant further investigation. First, the current fusion weights are empirically set and cannot dynamically adapt to real-world interaction conditions such as user state fluctuations and signal variability. Future research could explore adaptive weighting mechanisms based on reinforcement learning, Bayesian optimization, or neural architecture search, aiming to enhance the system’s long-term applicability and robustness. Second, the system has not yet been deployed in a real-time asynchronous interaction environment, lacking comprehensive evaluation in terms of response latency, computational resource consumption, and user feedback. Asynchronous BCI systems are highly sensitive to response time [44]. Therefore, future efforts should focus on embedding the model into real-time systems and conducting system-level online testing.
The current experimental validation was conducted only with healthy subjects, and there is a lack of application evaluation in clinical populations such as individuals with motor disorders or cognitive impairments. Given the known differences in neural response timing and activation patterns among these groups [45,46], it is important to develop personalized EEG feature extraction strategies to improve the model’s generalizability across diverse populations. Additionally, this study has not yet conducted a direct quantitative comparison of classification performance with other mainstream methods, such as TRCA or CNN-based classifiers. Therefore, in future work, we plan to integrate our fusion strategy into more advanced models to evaluate its generalizability, robustness, and applicability across different classification frameworks. These insights would provide a theoretical foundation for developing highly trustworthy, interpretable, and auditable BCI systems.
In conclusion, this study demonstrates that combining multi-band EEG features (FBCSP and FBCCA) with dual classifier ensembles (SVM and XGBoost) provides a promising solution for improving asynchronous state detection in SSMVEP-BCI systems. The proposed methodology enhances feature separability and classification reliability, which is a key step in the development of practical, user-friendly, and high-performance asynchronous BCIs.

5. Conclusions

This study effectively tackled the challenge of accurately detecting control states in asynchronous BCI systems without relying on external cues, offering a valuable advancement in SSMVEP-based brain–computer interface research. The core innovation lies in the proposed dual-layer fusion framework, which systematically integrates filter bank CSP and CCA features with an ensemble classifier combining an SVM and XGBoost. This hybrid approach achieved a high average classification accuracy of 93.1% and significantly improved inter-class feature separability. Looking ahead, future work will explore the extension of this fusion strategy to multi-paradigm BCI systems and investigate adaptive weighting mechanisms to enhance system responsiveness and facilitate broader clinical applications.

Author Contributions

B.H.: writing—original draft preparation, writing—review and editing, and methodology; J.X.: investigation and funding acquisition; H.Z.: formal analysis; J.L. and H.W.: investigation. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (52365040), the Open Funding Project of National Key Laboratory of Human Factors Engineering (SYFD061903K), and the Foundation of the Key Laboratory for Equipment Advanced Research (6142222200209).

Institutional Review Board Statement

This study was approved by the Ethics Committee of Xi’an Jiaotong University, and all procedures were conducted in accordance with institutional guidelines and the Declaration of Helsinki.

Informed Consent Statement

All participants involved in this study provided written informed consent prior to the experiment. They were fully informed of the purpose, procedures, and potential risks of the study and voluntarily agreed to participate.

Data Availability Statement

The raw data supporting the conclusions of this study will be made available by the authors, without undue reservation.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Saha, S.; Mamun, K.A.; Ahmed, K.; Mostafa, R.; Naik, G.R.; Darvishi, S.; Khandoker, A.H.; Baumert, M. Progress in brain computer interface: Challenges and opportunities. Front. Syst. Neurosci. 2021, 15, 578875. [Google Scholar] [CrossRef]
  2. Tang, X.; Shen, H.; Zhao, S.; Li, N.; Liu, J. Flexible brain–computer interfaces. Nat. Electron. 2023, 6, 109–118. [Google Scholar] [CrossRef]
  3. Esposito, D.; Centracchio, J.; Andreozzi, E.; Gargiulo, G.D.; Naik, G.R.; Bifulco, P. Biosignal-based human–machine interfaces for assistance and rehabilitation: A survey. Sensors 2021, 21, 6863. [Google Scholar] [CrossRef]
  4. Zhang, H.; Jiao, L.; Yang, S.; Li, H.; Jiang, X.; Feng, J.; Zou, S.; Xu, Q.; Gu, J.; Wang, X.; et al. Brain–computer interfaces: The innovative key to unlocking neurological conditions. Int. J. Surg. 2024, 110, 5745–5762. [Google Scholar] [CrossRef]
  5. Awuah, W.A.; Ahluwalia, A.; Darko, K.; Sanker, V.; Tan, J.K.; Tenkorang, P.O.; Ben-Jaafar, A.; Ranganathan, S.; Aderinto, N.; Mehta, A.; et al. Bridging minds and machines: The recent advances of brain-computer interfaces in neurological and neurosurgical applications. World Neurosurg. 2024, 189, 138–153. [Google Scholar] [CrossRef] [PubMed]
  6. Al-Qaysi, Z.T.; Albahri, A.S.; Ahmed, M.A.; Hamid, R.A.; Alsalem, M.A.; Albahri, O.S.; Alamoodi, A.H.; Homod, R.Z.; Shayea, G.G.; Duhaim, A.M. A comprehensive review of deep learning power in steady-state visual evoked potentials. Neural Comput. Appl. 2024, 36, 16683–16706. [Google Scholar] [CrossRef]
  7. Ravi, A.; Lu, J.; Pearce, S.; Jiang, N. Enhanced system robustness of asynchronous BCI in augmented reality using steady-state motion visual evoked potential. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 85–95. [Google Scholar] [CrossRef] [PubMed]
  8. Wong, C.M.; Wang, Z.; Rosa, A.C.; Chen, C.L.P.; Jung, T.-P.; Hu, Y. Transferring subject-specific knowledge across stimulus frequencies in SSVEP-based BCIs. IEEE Trans. Autom. Sci. Eng. 2021, 18, 552–563. [Google Scholar] [CrossRef]
  9. Azadi Moghadam, M.; Maleki, A. Fatigue factors and fatigue indices in SSVEP-based brain-computer interfaces: A systematic review and meta-analysis. Front. Hum. Neurosci. 2023, 17, 1248474. [Google Scholar] [CrossRef]
  10. Reitelbach, C.; Oyibo, K. Optimal stimulus properties for steady-state visually evoked potential brain–computer interfaces: A scoping review. Multimodal Technol. Interact. 2024, 8, 6. [Google Scholar] [CrossRef]
  11. Gao, Z.; Yuan, T.; Zhou, X.; Ma, C.; Ma, K.; Hui, P. A deep learning method for improving the classification accuracy of SSMVEP-based BCI. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 3447–3451. [Google Scholar] [CrossRef]
  12. Wang, H.; Yan, F.; Xu, T.; Yin, H.; Chen, P.; Yue, H. Brain-controlled wheelchair review: From wet electrode to dry electrode, from single modal to hybrid modal, from synchronous to asynchronous. IEEE Access 2021, 9, 55920–55938. [Google Scholar] [CrossRef]
  13. Rezvani, S.; Hosseini-Zahraei, S.H.; Tootchi, A.; Guger, C.; Chaibakhsh, Y.; Saberi, A.; Chaibakhsh, A. A review on the performance of brain-computer interface systems used for patients with locked-in and completely locked-in syndrome. Cogn. Neurodyn. 2024, 18, 1419–1443. [Google Scholar] [CrossRef]
  14. Gutierrez-Martinez, J.; Mercado-Gutierrez, J.A.; Carvajal-Gámez, B.E.; Rosas-Trigueros, J.L.; Contreras-Martinez, A.E. Artificial intelligence algorithms in visual evoked potential-based brain-computer interfaces for motor rehabilitation applications: Systematic review and future directions. Front. Hum. Neurosci. 2021, 15, 772837. [Google Scholar] [CrossRef]
  15. Shukla, P.K.; Chaurasiya, R.K.; Verma, S.; Sinha, G.R. A thresholding-free state detection approach for home appliance control using P300-based BCI. IEEE Sens. J. 2021, 21, 16927–16936. [Google Scholar] [CrossRef]
  16. Santamaría-Vázquez, E.; Martínez-Cagigal, V.; Pérez-Velasco, S.; Marcos-Martínez, D.; Hornero, R. Robust Asynchronous Control of ERP-Based Brain–Computer Interfaces Using Deep Learning. Comput. Methods Programs Biomed. 2022, 215, 106623. [Google Scholar] [CrossRef]
  17. Chen, W.; Chen, S.-K.; Liu, Y.-H.; Chen, Y.-J.; Chen, C.-S. An electric wheelchair manipulating system using SSVEP-based BCI system. Biosensors 2022, 12, 772. [Google Scholar] [CrossRef]
  18. Rostami, E.; Ghassemi, F.; Tabanfar, Z. Canonical Correlation Analysis of Task-Related Components as a Noise-Resistant Method in Brain–Computer Interface Speller Systems Based on Steady-State Visual Evoked Potential. Biomed. Signal Process. Control 2022, 73, 103449. [Google Scholar] [CrossRef]
  19. Zhang, X.; Xu, G.; Mou, X.; Ravi, A.; Li, M.; Wang, Y. A convolutional neural network for the detection of asynchronous steady state motion visual evoked potential. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 1303–1311. [Google Scholar] [CrossRef]
  20. Zhang, W.; Zhou, T.; Zhao, J.; Ji, B.; Wu, Z. Recognition of the idle state based on a novel IFB-OCN method for an asynchronous brain–computer interface. J. Neurosci. Methods 2020, 341, 108776. [Google Scholar] [CrossRef]
  21. Du, J.; Ke, Y.; Liu, P.; Liu, W.; Kong, L.; Wang, N. A two-step idle-state detection method for SSVEP BCI. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 3095–3098. [Google Scholar]
  22. Yang, C.; Yan, X.; Wang, Y.; Chen, Y.; Zhang, H.; Gao, X. Spatio-temporal equalization multi-window algorithm for asynchronous SSVEP-based BCI. J. Neural Eng. 2021, 18, 0460b7. [Google Scholar] [CrossRef] [PubMed]
  23. Wang, X.; Liu, A.; Wu, L.; Li, C.; Liu, Y.; Chen, X. A generalized zero-shot learning scheme for SSVEP-based BCI system. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 863–874. [Google Scholar] [CrossRef]
  24. Pei, Y.; Luo, Z.; Zhao, H.; Xu, D.; Li, W.; Yan, Y. A tensor-based frequency features combination method for brain–computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 30, 465–475. [Google Scholar] [CrossRef] [PubMed]
  25. Mirjalili, S.; Powell, P.; Strunk, J.; James, T.; Duarte, A. Evaluation of classification approaches for distinguishing brain states predictive of episodic memory performance from electroencephalography: Abbreviated title: Evaluating methods of classifying memory states from EEG. NeuroImage 2022, 247, 118851. [Google Scholar] [CrossRef]
  26. Wang, J.; Wang, M. Review of the Emotional Feature Extraction and Classification Using EEG Signals. Cogn. Robot. 2021, 1, 29–40. [Google Scholar] [CrossRef]
  27. Kim, H.; Yoshimura, N.; Koike, Y. Characteristics of Kinematic Parameters in Decoding Intended Reaching Movements Using Electroencephalography (EEG). Front. Neurosci. 2019, 13, 1148. [Google Scholar] [CrossRef]
  28. Wu, J.; Zhou, T.; Li, T. Detecting Epileptic Seizures in EEG Signals with Complementary Ensemble Empirical Mode Decomposition and Extreme Gradient Boosting. Entropy 2020, 22, 140. [Google Scholar] [CrossRef] [PubMed]
  29. Khan, M.S.; Salsabil, N.; Alam, M.G.R.; Dewan, M.A.A.; Uddin, M.Z. CNN-XGBoost Fusion-Based Affective State Recognition Using EEG Spectrogram Image Analysis. Sci. Rep. 2022, 12, 14122. [Google Scholar] [CrossRef]
  30. Demir, S.; Şahin, E.K. Liquefaction prediction with robust machine learning algorithms (SVM, RF, and XGBoost) supported by genetic algorithm-based feature selection and parameter optimization from the perspective of data processing. Environ. Earth Sci. 2022, 81, 459. [Google Scholar] [CrossRef]
  31. Duan, L.; Wang, Z.; Qiao, Y.; Wang, Y.; Huang, Z.; Zhang, B. An Automatic Method for Epileptic Seizure Detection Based on Deep Metric Learning. IEEE J. Biomed. Health Inform. 2021, 26, 2147–2157. [Google Scholar] [CrossRef]
  32. Ye, J.; Zhu, J.; Huang, S. Weighted Filter Bank and Regularization Common Spatial Pattern-Based Decoding Algorithm for Brain-Computer Interfaces. Appl. Sci. 2025, 15, 5159. [Google Scholar] [CrossRef]
  33. Lin, C.L.; Chen, L.T. Improvement of Brain–Computer Interface in Motor Imagery Training through the Designing of a Dynamic Experiment and FBCSP. Heliyon 2023, 9, e13745. [Google Scholar] [CrossRef]
  34. Lee, T.; Nam, S.; Hyun, D.J. Adaptive Window Method Based on FBCCA for Optimal SSVEP Recognition. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 31, 78–86. [Google Scholar] [CrossRef]
  35. Chen, X.; Wang, Y.; Gao, S.; Jung, T.P.; Gao, X. Filter Bank Canonical Correlation Analysis for Implementing a High-Speed SSVEP-Based Brain–Computer Interface. J. Neural Eng. 2015, 12, 046008. [Google Scholar] [CrossRef]
  36. Yao, H.; Liu, K.; Deng, X.; Tang, X.; Yu, H. FB-EEGNet: A Fusion Neural Network across Multi-Stimulus for SSVEP Target Detection. J. Neurosci. Methods 2022, 379, 109674. [Google Scholar] [CrossRef]
  37. Li, Z.; Liu, K.; Deng, X.; Wang, G. Spatial Fusion of Maximum Signal Fraction Analysis for Frequency Recognition in SSVEP-Based BCI. Biomed. Signal Process. Control 2020, 61, 102042. [Google Scholar] [CrossRef]
  38. Jakkula, V. Tutorial on Support Vector Machine (SVM). Sch. EECS Wash. State Univ. 2006, 37, 3. [Google Scholar]
  39. Sha’Abani, M.N.A.H.; Fuad, N.; Jamal, N.; Ismail, M.F. kNN and SVM Classification for EEG: A Review. In Proceedings of the 5th International Conference on Electrical, Control & Computer Engineering (InECCE2019), Kuantan, Pahang, Malaysia, 29 July 2019; Springer: Singapore, 2020; pp. 555–565. [Google Scholar]
  40. Rajalakshmi, A.; Sridhar, S.S. Classification of Yoga, Meditation, Combined Yoga–Meditation EEG Signals Using L-SVM, KNN, and MLP Classifiers. Soft Comput. 2024, 28, 4607–4619. [Google Scholar] [CrossRef]
  41. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  42. Xiong, X.; Ren, S.; Yi, S.; Wang, C.; Liu, R.; He, J. Detection of Sleep K-Complexes Using BOHB-Tuned XGBoost Based on Multi-Domain Feature Extraction and KMeansSMOTE. Eng. Res. Express 2025, 7, 015235. [Google Scholar] [CrossRef]
  43. Craik, A.; He, Y.; Contreras-Vidal, J.L. Deep Learning for Electroencephalogram (EEG) Classification Tasks: A Review. J. Neural Eng. 2019, 16, 031001. [Google Scholar] [CrossRef]
  44. Xia, B.; Li, X.; Xie, H.; Yang, W.; Li, J.; He, L. Asynchronous Brain–Computer Interface Based on Steady-State Visual-Evoked Potential. Cogn. Comput. 2013, 5, 243–251. [Google Scholar] [CrossRef]
  45. Croce, P.; Quercia, A.; Costa, S.; Zappasodi, F. EEG Microstates Associated with Intra- and Inter-Subject Alpha Variability. Sci. Rep. 2020, 10, 2469. [Google Scholar] [CrossRef] [PubMed]
  46. Saha, S.; Baumert, M. Intra- and Inter-Subject Variability in EEG-Based Sensorimotor Brain Computer Interface: A Review. Front. Comput. Neurosci. 2020, 13, 87. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The SSMVEP paradigm based on the ring contraction–expansion form.
Figure 2. Electrode arrangement positions.
Figure 3. CCA feature dimensionality reduction maps.
Figure 4. CSP feature dimensionality reduction maps.
Figure 5. FB(CSP + CCA) dimensionality-reduced feature map.
Figure 6. Classification accuracy of different classifiers.
Figure 7. Classification results of the same type of features under different classifiers.
Figure 8. Effect of weighting values on output accuracy.
Figure 9. Classification accuracy of combined features under the combined classifiers.
Figure 10. Classification accuracy of the original signal under the combined classifier.


