Article

EEG Feature Extraction and Classification for Upper Limb Flexion and Extension Motor Imagery Based on Discriminative Filter Bank Common Spatial Pattern

School of Information Science and Technology, Nantong University, Nantong 226019, China
*
Author to whom correspondence should be addressed.
Brain Sci. 2026, 16(2), 217; https://doi.org/10.3390/brainsci16020217
Submission received: 7 January 2026 / Revised: 3 February 2026 / Accepted: 9 February 2026 / Published: 11 February 2026
(This article belongs to the Section Neurorehabilitation)

Abstract

Background: Traditional common spatial pattern (CSP) algorithms for upper limb neural rehabilitation face inherent challenges of overlapping cortical representations and frequency sensitivity, which hinder the decoding performance of motor imagery (MI) electroencephalogram (EEG) signals. Objective: To address these issues, this study adopts an improved discriminative filter bank CSP (DFBCSP) framework and applies it to the decoding of upper limb MI-EEG signals, achieving remarkable classification performance. Methods: EEG data were acquired from sixteen participants performing two-class (left upper limb flexion-extension vs. relaxing) and three-class (left upper limb flexion vs. right upper limb extension vs. relaxing) MI tasks. The acquired EEG data were then decomposed into nine distinct sub-bands, followed by the adoption of a mutual information-based feature selection strategy to optimize the feature sets. These optimized feature sets were subsequently input into three classification models, namely multilayer perceptron (MLP), support vector machine (SVM), and linear discriminant analysis (LDA), for MI task classification. Results: Experimental results demonstrate that the DFBCSP + MLP method significantly outperforms the traditional CSP approach. Specifically, it achieves an accuracy of 94.83% (Kappa coefficient: 0.890) in two-class MI tasks and 86.20% (Kappa coefficient: 0.775) in three-class MI tasks. Conclusion: The DFBCSP + MLP framework exhibits high robustness and provides a potential technical framework and theoretical basis for future research on the rehabilitation of patients with upper limb motor dysfunction.

1. Introduction

In recent years, the incidence of limb disability and paralysis resulting from amyotrophic lateral sclerosis (ALS) and spinal cord injury (SCI) has been steadily increasing. Patients with such motor disabilities often retain intact cognitive function, yet the link between the brain and the peripheral nervous system is disrupted, leaving them unable to exert voluntary control over their muscles, limbs, and trunk. Although traditional clinical approaches have made some progress in limb function recovery, they still struggle to fully restore complex motor functions. Brain–Computer Interface (BCI) technology offers the possibility of establishing a bidirectional communication pathway between the brain and external devices. By acquiring and decoding neurophysiological signals such as Electroencephalography (EEG), this technology can directly convert the subject’s motor intentions into physical commands, thereby enabling the control of assistive devices [1,2]. This technology brings new hope to patients with motor impairments but intact cognitive abilities.
In neural rehabilitation, the Motor Imagery (MI) experimental paradigm is widely used as a control source due to its ability to map active intentions without requiring actual limb movements. It not only promotes the repair or reconstruction of damaged motor pathways and awakens some dormant neural synapses but also can interact with actual movements to achieve better motor cortex reorganization effects [3]. However, distinguishing between different fine movements within the upper limb (Extension and Flexion) still poses significant challenges [4]. The representational areas of such movements in the cerebral cortex highly overlap, and EEG signals are characterized by low signal-to-noise ratio (SNR) and significant individual differences, which place extremely high demands on the discriminative power of feature extraction algorithms.
Commonly utilized feature extraction methods for MI can be categorized into several primary types based on the nature of the classification features. First, time-domain analysis focuses on waveform characteristics as they evolve over time, employing techniques such as statistical features (mean, variance, and standard deviation) [5], autoregressive (AR) models [6], and event-related potentials (ERP) [7]. Second, frequency-domain analysis investigates EEG signal characteristics across different frequency bands corresponding to various physiological states, utilizing methods such as power spectral density (PSD) [8,9], differential entropy (DE) [10], and energy ratios [11]. Third, spatial-domain analysis focuses on exploring the activation relationships among different brain regions, mainly extracting features through methods such as Common Spatial Pattern (CSP) [12,13], Independent Component Analysis (ICA) [14], and Surface Laplacian (SL) [15]. In addition, time-frequency analysis can capture the time-frequency domain features of EEG signals using techniques including the Wavelet Transform (WT) [16] and Hilbert-Huang Transform-Empirical Mode Decomposition (HHT-EMD) [17]. MI-EEG signals exhibit the most distinct spatial distribution differences, making the CSP widely used for extracting discriminative spatial features. Numerous researchers have conducted extensive studies to derive more prominent CSP features [18]. For instance, Ana [19] proposed an adaptive CSP (ACSP) algorithm, which significantly reduced the training time required for new subjects and achieved successful three-class classification for complex MI tasks. Tang et al. [20] proposed a method based on the Bhattacharyya distance to select the optimal frequency band for each electrode across different subjects, followed by feature extraction using an improved B-CSP algorithm to achieve the classification of motor imagery tasks. Fu et al. [21] addressed the issue that the CSP algorithm repeatedly selects feature patterns in the feature space by proposing a sparse CSP algorithm. This approach embeds sparse technology and iterative search into the CSP framework, selecting EEG signals from a few channels with the most prominent features and thereby improving the accuracy of feature classification. Peterson et al. [22] proposed a novel classification method integrating multi-band and time window techniques. This method extracts features from each frequency band using the CSP algorithm and incorporates a priori discriminative information into the model via a fast feature selection and classification approach based on elastic net regression, thereby improving the classification accuracy of MI-based BCI systems. Convolutional networks exhibit excellent automatic feature extraction capability in complex signal processing scenarios [23]. Fu et al. [24] proposed a convolutional transformer network integrated with an adaptive learning module, which not only enhances individual motor imagery classification performance but also shortens the calibration time for new subjects.
The CSP is a spatial filtering feature extraction algorithm designed for binary classification tasks. It is capable of extracting the spatial distribution components of each class from multi-channel BCI data. But the traditional CSP algorithm is highly sensitive to noise, and its performance is significantly dependent on the selection of frequency bands [25]. To address these limitations, we introduce the discriminative filter bank common spatial pattern (DFBCSP), which enables the precise elimination of redundant information while retaining features with the highest discriminative contribution. We aim to explore the potential of the DFBCSP algorithm in the classification of EEG signals during upper-limb extension and flexion MI. We developed a feature extraction model based on DFBCSP and integrated it with classification techniques, including multi-layer perceptron (MLP), support vector machine (SVM), and linear discriminant analysis (LDA), to analyze EEG signals from subjects performing upper-limb imagery tasks. By comparing with the traditional CSP, this study evaluates the classification performance of the DFBCSP in upper limb motor imagery tasks using statistical metrics such as classification accuracy, Kappa value, and Receiver Operating Characteristic (ROC) curve.

2. Materials and Methods

The study encompasses three core components: EEG signal acquisition, signal processing, and classification performance evaluation. The detailed implementation procedures are illustrated in Figure 1, and the entire workflow was executed using the OpenViBE platform (https://openvibe.inria.fr, OpenViBE 3.5.0).

2.1. Subjects, Data Acquisition, and Experimental Procedure

Sixteen healthy participants (labeled S1–S16) were recruited for the study, comprising ten males and six females with an age range of 21–27 years. All subjects were right-handed and possessed normal or corrected-to-normal vision. Furthermore, none of the participants reported a history of psychiatric or neurological disorders.
The OpenBCI platform (openbci.com, New York) was selected for EEG signal acquisition, with the sampling rate configured at 250 Hz. The electrode arrangement strictly adhered to the International 10–20 system, as illustrated in Figure 2. The red regions represent the data acquisition electrodes, and the blue regions denote the reference electrodes positioned at the earlobes. When subjects perform unilateral limb Motor Imagery (MI) tasks, regular potential changes are generated in relevant regions of the cerebral cortex. Specifically, the μ rhythm (8–12 Hz) and β rhythm (13–30 Hz) in the contralateral primary sensorimotor cortex exhibit a significant decrease in energy, known as Event-Related Desynchronization (ERD); simultaneously, the energy of the corresponding rhythms in the ipsilateral regions increases significantly, referred to as Event-Related Synchronization (ERS) [26]. Therefore, only C3, Cz, and C4, which are most closely associated with the primary sensorimotor cortex, were selected as measurement channels in this experiment. This streamlined channel scheme not only effectively reduces computational load but also significantly improves the subject’s wearing comfort. Prior to data collection, to optimize signal transmission quality, the experimenters cleaned each subject’s scalp with medical alcohol and applied conductive gel to minimize electrode impedance, thereby ensuring that the collected EEG signals had a high SNR.
To evaluate the performance of the DFBCSP algorithm under different complexity levels, this study designed two-class and three-class MI experiments. The two-class task included left upper limb flexion and extension versus the relaxing state, with 10 blocks and a total of 200 trials; the three-class task comprised left upper limb flexion, right upper limb extension, and the relaxing state, involving 10 blocks and a total of 300 trials. The task design was intended to elicit distinct ERD/ERS features, thereby providing a high-quality neurophysiological foundation for the multi-band decomposition and spatial feature extraction of the DFBCSP algorithm. The experimental timing sequence is illustrated in Figure 3: at t = 0 s, a green cross was displayed to guide subjects into a focused state and calibrate the baseline signal; at t = 1 s, the target task was prompted via icons: for the two-class task, a red left-pointing arrow corresponded to left upper limb flexion and extension, and a red right-pointing arrow to the relaxing state; for the three-class task, a red left-pointing arrow denoted left upper limb flexion, a red right-pointing arrow right upper limb extension, and a red upward-pointing arrow the relaxing state. Subjects performed 4 s of MI from t = 1 s to t = 5 s, and the EEG signals in this time window served as the core input for the DFBCSP algorithm. Random inter-task rest periods were inserted between consecutive tasks. To suppress the interference of physiological artifacts on filter training, subjects maintained complete bodily stillness throughout the experiment to eliminate electromyographic (EMG) interference; additionally, they minimized blinking and swallowing within the task window to suppress physiological artifacts such as electrooculographic (EOG) activity.

2.2. EEG Data Preprocessing

We first performed Independent Component Analysis (ICA) on the continuous EEG signals; artifactual components such as electrooculographic (EOG) and electromyographic (EMG) artifacts were identified and removed by analyzing the temporal waveforms, scalp topographies, and power spectra of each component, after which the remaining components were back-projected to reconstruct artifact-free EEG signals. Subsequently, 4-s EEG segments were extracted from each trial as the full motor imagery (MI) cycle, and aberrant trials with amplitudes exceeding ±100 μV were excluded. Finally, zero-phase band-pass filtering at 4–40 Hz was applied to the extracted epochs using an 8th-order Butterworth filter to suppress low-frequency drifts and high-frequency noise, while preserving the mu (8–12 Hz) and beta (12–30 Hz) rhythmic components that primarily reflect MI-related event-related desynchronization (ERD) and event-related synchronization (ERS). The processed 4-s EEG segments were used as the input for subsequent DFBCSP feature extraction and classification.
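As a rough sketch of the filtering and artifact-rejection stages (assuming epochs are stored as a trials × channels × samples NumPy array; the ICA step is omitted, since it depends on the component decomposition), one might write:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate (Hz), as used during acquisition

def bandpass_epochs(epochs, fs=FS, low=4.0, high=40.0, order=4):
    """Zero-phase 4-40 Hz band-pass. filtfilt runs the order-4 Butterworth
    design forward and backward, giving an effective 8th-order zero-phase
    response. epochs: (n_trials, n_channels, n_samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def reject_artifacts(epochs, threshold_uv=100.0):
    """Exclude trials whose amplitude exceeds +/-100 uV at any sample."""
    keep = (np.abs(epochs) <= threshold_uv).all(axis=(1, 2))
    return epochs[keep], keep
```

The rejection criterion here (any sample beyond ±100 μV) is one plausible reading of the threshold described above.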

2.3. Method for Feature Extraction

2.3.1. Common Spatial Pattern

The basic principle of the CSP algorithm is to utilize matrix diagonalization to find a set of optimal spatial filters for projection, maximize the variance difference between the two types of signals, and thus obtain feature vectors with high discriminability.
Assume that $X_1$ and $X_2$ are the spatiotemporal signal matrices of multi-channel evoked responses in the two-class MI task, both of dimension $N \times T$, where $N$ is the number of EEG channels and $T$ is the number of samples collected from each channel. To calculate their covariance matrices, it is assumed that $N < T$. For the two types of EEG MI tasks, a composite-source mathematical model is generally adopted to describe the EEG signal, so $X_1$ and $X_2$ can be expressed separately as follows:
$$X_1 = \begin{bmatrix} C_1 & C_M \end{bmatrix} \begin{bmatrix} S_1 \\ S_M \end{bmatrix}, \qquad X_2 = \begin{bmatrix} C_2 & C_M \end{bmatrix} \begin{bmatrix} S_2 \\ S_M \end{bmatrix} \tag{1}$$
In Equation (1), $S_1$ and $S_2$ represent the source signals specific to the two tasks, and $S_M$ denotes the common source signal shared by both. $C_1$ and $C_2$ are composed of the spatial patterns associated with $S_1$ and $S_2$; since each spatial pattern is a vector of dimension $N \times 1$, it represents the distribution weights, across the $N$ channels, of the signal induced by a single source. $C_M$ denotes the common spatial pattern corresponding to $S_M$. The normalized covariance matrices $R_1$ and $R_2$ of $X_1$ and $X_2$ are expressed as follows:
$$R_1 = \frac{X_1 X_1^{T}}{\operatorname{trace}(X_1 X_1^{T})}, \qquad R_2 = \frac{X_2 X_2^{T}}{\operatorname{trace}(X_2 X_2^{T})}$$
$X^{T}$ denotes the transpose of the matrix $X$, and $\operatorname{trace}(X)$ denotes the sum of the diagonal elements of $X$. Next, the mixed spatial covariance matrix $R$ is calculated and subjected to eigenvalue decomposition:
$$R = \bar{R}_1 + \bar{R}_2, \qquad R = U \lambda U^{T}$$
$\bar{R}_i$ ($i = 1, 2$) denotes the average covariance matrix over the trials of Task 1 and Task 2, respectively. $U$ is the eigenvector matrix of $R$, and $\lambda$ is the diagonal matrix formed by the corresponding eigenvalues, which are sorted in descending order. The whitening transformation and decomposition are then applied to $R_1$ and $R_2$:
$$P = \lambda^{-1/2} U^{T}, \qquad S_1 = P R_1 P^{T} = B_1 \lambda_1 B_1^{T}, \qquad S_2 = P R_2 P^{T} = B_2 \lambda_2 B_2^{T}$$
From the above, it can be concluded that:
$$B_1 = B_2 = B, \qquad \lambda_1 + \lambda_2 = I$$
Since the paired eigenvalues of $S_1$ and $S_2$ always sum to 1, the eigenvector associated with the largest eigenvalue of $S_1$ is associated with the smallest eigenvalue of $S_2$, and vice versa. When the eigenvalues in $\lambda_1$ are sorted in descending order, the corresponding eigenvalues in $\lambda_2$ appear in ascending order. It follows that $\lambda_1$ and $\lambda_2$ take the forms:
$$\lambda_1 = \operatorname{diag}(I_1,\ \sigma_M,\ 0), \qquad \lambda_2 = \operatorname{diag}(0,\ \sigma_M,\ I_2)$$
For a test trial $x_i$, the corresponding projection matrix $W$ (serving as the spatial filter) and the resulting feature vector $f_i$ are given as follows. The class of the $i$-th motor imagery trial is determined by comparing $f_i$ with the class templates $f_L$ and $f_R$:
$$W = B^{T} P, \qquad Z_i = W x_i, \qquad f_i = \frac{\operatorname{var}(Z_i)}{\operatorname{sum}(\operatorname{var}(Z_i))}$$
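The whitening-and-rotation derivation above can be condensed into a short NumPy sketch (an illustrative simplification, not the authors' implementation): joint diagonalization of the two class covariances is equivalent to solving the generalized eigenvalue problem $R_1 w = \lambda (R_1 + R_2) w$, and the features are the normalized variances of the spatially filtered signals.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_1, trials_2, n_pairs=1):
    """CSP spatial filters for two classes.
    trials_k: (n_trials, n_channels, n_samples) arrays.
    Returns W of shape (2 * n_pairs, n_channels)."""
    def avg_norm_cov(trials):
        covs = [(x @ x.T) / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    R1, R2 = avg_norm_cov(trials_1), avg_norm_cov(trials_2)
    # Whitening + rotation is equivalent to the generalized eigenproblem
    # R1 w = lambda (R1 + R2) w; eigh returns eigenvalues in ascending order.
    vals, vecs = eigh(R1, R1 + R2)
    # Keep filters from both ends of the eigenvalue spectrum.
    idx = np.concatenate([np.arange(n_pairs), np.arange(-n_pairs, 0)])
    return vecs[:, idx].T

def csp_features(W, trials):
    """Normalized variance of the spatially filtered signals."""
    Z = np.einsum("fc,tcs->tfs", W, trials)
    var = Z.var(axis=-1)
    return var / var.sum(axis=1, keepdims=True)
```

The log of the normalized variance is also commonly used in CSP pipelines; the plain ratio above follows the feature definition in the text.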

2.3.2. Discriminative Filter Bank Common Spatial Pattern

Traditional CSP algorithms typically extract features across a relatively broad frequency band (e.g., 4–40 Hz). However, the most discriminative neural oscillations, specifically the μ and β rhythms, often reside within narrow sub-bands that exhibit significant inter-subject and intra-subject variability. Using a wide frequency range tends to introduce noise from irrelevant spectral components, which may obscure task-related information. Moreover, CSP filters are prone to overfitting when the available training samples are limited, leading to suboptimal generalization on unseen data. DFBCSP addresses these limitations systematically by integrating a filter bank architecture with a discriminative feature selection strategy.
In the study, the 4–40 Hz frequency band is decomposed into nine sub-bands using a bank of Butterworth filters, specifically: 4–8 Hz, 8–12 Hz, 12–16 Hz, 16–20 Hz, 20–24 Hz, 24–28 Hz, 28–32 Hz, 32–36 Hz, and 36–40 Hz. The methodology not only ensures the preservation of narrow-band components that may contain essential task-related information but also facilitates the extraction of band-specific spatial filter features. The flowchart of the DFBCSP algorithm is shown in Figure 4.
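A minimal sketch of the nine-band decomposition follows (the filter order and zero-phase application are assumptions, since the text does not specify them for the bank; second-order sections are used for numerical stability in the narrow bands):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_bank(epochs, fs=250, order=4):
    """Split epochs into nine 4 Hz-wide sub-bands covering 4-40 Hz.
    epochs: (n_trials, n_channels, n_samples).
    Returns: (9, n_trials, n_channels, n_samples)."""
    bands = [(lo, lo + 4) for lo in range(4, 40, 4)]  # (4,8), ..., (36,40)
    filtered = []
    for lo, hi in bands:
        sos = butter(order, [lo / (fs / 2), hi / (fs / 2)],
                     btype="band", output="sos")
        filtered.append(sosfiltfilt(sos, epochs, axis=-1))
    return np.stack(filtered)
```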
We employ the CSP algorithm to extract features from the signal components obtained after decomposition into multiple sub-bands. For the $k$-th sub-band ($k = 1, 2, \ldots, 9$), let $x_{k,i}$ denote the test data in that sub-band during the $i$-th imagery trial. The proposed algorithm applies CSP independently to each sub-band. Specifically, spatial covariance matrices are computed for the two task categories (Class 1 and Class 2) to determine the average normalized covariance matrices, denoted as $R_{k,1}$ and $R_{k,2}$, for each respective class within the $k$-th sub-band:
$$R_{k,1} = \operatorname{avg}_{\text{Class 1}}\!\left(\frac{X_{k,1} X_{k,1}^{T}}{\operatorname{trace}(X_{k,1} X_{k,1}^{T})}\right), \qquad R_{k,2} = \operatorname{avg}_{\text{Class 2}}\!\left(\frac{X_{k,2} X_{k,2}^{T}}{\operatorname{trace}(X_{k,2} X_{k,2}^{T})}\right)$$
$X_{k,1}$ and $X_{k,2}$ denote the trial signal matrices of the two classes within the $k$-th sub-band. The subsequent processing steps mirror the CSP procedure described above:
$$R_k = R_{k,1} + R_{k,2}, \qquad R_k = U_k \lambda_k U_k^{T}$$
$$P_k = \lambda_k^{-1/2} U_k^{T}, \qquad S_{k,1} = P_k R_{k,1} P_k^{T}, \qquad S_{k,2} = P_k R_{k,2} P_k^{T}$$
$$W_k = B_k^{T} P_k, \qquad Z_{k,i} = W_k x_{k,i}$$
$$f_{k,i} = \frac{\operatorname{var}(Z_{k,i})}{\operatorname{sum}(\operatorname{var}(Z_{k,i}))}$$
For feature selection, we opted for mutual information rather than the Fisher ratio method. Let $f_k = \{f_{k,1}, f_{k,2}, \ldots, f_{k,n}\}$ denote the feature set of sub-band $k$ over the $n$ training trials, and $Y = \{y_1, y_2\}$ the label set. After discretizing the continuous features $f_{k,i}$ into equal-width intervals, the mutual information is computed as:
$$MI_k = \sum_{f \in f_k} \sum_{y \in Y} p(f, y) \log_2 \frac{p(f, y)}{p(f)\, p(y)}$$
Arrange the sub-bands by $MI_k$ in descending order, select the top $m$ sub-bands with the highest mutual information, and concatenate their feature vectors to produce the final feature vector:
$$F_k = [\,f_1^{T},\ f_2^{T},\ \ldots,\ f_m^{T}\,]^{T}$$
Input the above $F_k$ into a classifier (such as an SVM) to obtain discriminant scores. Finally, fuse the discriminant scores of the nine sub-bands to obtain the final imagery classification.
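The mutual-information selection step can be sketched as follows (a histogram estimate with equal-width bins, as described above; the bin count `n_bins` is an assumption, since the text does not fix it):

```python
import numpy as np

def mutual_information(feature, labels, n_bins=10):
    """MI (bits) between one continuous feature and discrete class labels,
    using equal-width discretization of the feature."""
    edges = np.histogram_bin_edges(feature, bins=n_bins)
    f_disc = np.clip(np.digitize(feature, edges[1:-1]), 0, n_bins - 1)
    mi = 0.0
    for fv in np.unique(f_disc):
        for y in np.unique(labels):
            p_fy = np.mean((f_disc == fv) & (labels == y))
            if p_fy > 0:
                mi += p_fy * np.log2(
                    p_fy / (np.mean(f_disc == fv) * np.mean(labels == y)))
    return mi

def select_subbands(mi_scores, m):
    """Indices of the m sub-bands with the highest MI, in descending order."""
    return np.argsort(mi_scores)[::-1][:m]
```

A label-predictive feature yields MI near the label entropy (1 bit for balanced binary labels), while an irrelevant feature yields MI near zero, which is what ranks the sub-bands.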

2.4. Classification

To evaluate the efficacy of the feature extraction algorithms and construct robust EEG recognition models, the study employs three highly representative and widely utilized classifiers within the field of EEG signal processing: Multi-layer perceptron (MLP) [27], support vector machine (SVM) [28], and linear discriminant analysis (LDA) [29]. By coupling these classifiers with both DFBCSP and conventional CSP, we aim to comprehensively investigate the performance of distinct model configurations across two-class and three-class classification tasks.

2.4.1. SVM

The primary objective of the SVM is to map input vectors into a high-dimensional feature space, constructing an optimal separating hyperplane within this space to achieve the precise partitioning of feature vectors. During the classification evaluation for the motor imagery tasks in this study, we compared the performance of linear, nonlinear, and polynomial kernel functions. Empirical results indicated that the linear kernel yielded the optimal performance; consequently, it was selected as the final kernel, with the penalty parameter $C$ in the C-SVC classifier set to 1. For a feature vector $F_k \in \mathbb{R}^{D}$ and label $y \in \{-1, 1\}$, the SVM learns the hyperplane $w^{T} F_k + b = 0$. The optimization objective is as follows:
$$\min_{w, b} \ \frac{1}{2}\|w\|^{2} \quad \text{s.t.} \quad y_i \left(w^{T} F_{k,i} + b\right) \ge 1, \ \forall i$$
Introducing Lagrange multipliers $\alpha_i \ge 0$ transforms this into the dual problem:
$$\max_{\alpha} \ \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{N} \alpha_i \alpha_j y_i y_j K(F_{k,i}, F_{k,j}) \quad \text{s.t.} \quad \sum_{i=1}^{N} \alpha_i y_i = 0$$
$K(F_{k,i}, F_{k,j})$ is the kernel function. After solving the dual problem, the classifier is obtained:
$$f_{\mathrm{SVM}}(F_k) = \operatorname{sign}\!\left(\sum_{i=1}^{N} \alpha_i y_i K(F_{k,i}, F_k) + b\right)$$
The bias $b$ is determined from the support vector samples. The SVM discriminant scores of the nine sub-bands are then fused (e.g., averaged) to produce the final classification.
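In scikit-learn terms, this classifier stage might be configured as below (a hypothetical stand-in trained on synthetic two-class feature vectors, using the linear kernel and C = 1 stated above):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for DFBCSP feature vectors of two MI classes.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 4)),
               rng.normal(2.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# Linear-kernel C-SVC with the penalty parameter C = 1, as in the text.
svm = SVC(kernel="linear", C=1.0)
svm.fit(X, y)
acc = svm.score(X, y)
```

`decision_function` would supply the per-trial discriminant scores used for fusion.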

2.4.2. MLP

MLP represents a quintessential feedforward artificial neural network characterized by robust nonlinear mapping capabilities, enabling the capture of intricate functional relationships inherent in EEG signal features. In this study, the MLP classifier is structured with an input layer, a hidden layer, and an output layer. We configured a hidden layer comprising 64 neurons and adopted the rectified linear unit as the activation function to expedite gradient convergence. The model parameters are updated via the Backpropagation algorithm combined with the Adam optimizer, utilizing the cross-entropy loss function to minimize classification error. The specific process is as follows. Here, $h$ and $\hat{y}$ are the outputs of the hidden and output layers for the input feature vector $F_k$; $W_1$ and $W_2$ are the corresponding weight matrices, $b_1$ and $b_2$ the corresponding biases, and $L$ the cross-entropy loss.
$$h = \max\!\left(0,\ W_1 F_k + b_1\right)$$
$$\hat{y} = \frac{1}{1 + \exp\!\left(-(W_2 h + b_2)\right)}$$
$$L = -\frac{1}{N} \sum_{i=1}^{N} \left[\, y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \,\right]$$
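This configuration maps onto scikit-learn's `MLPClassifier`, which minimizes exactly this log (cross-entropy) loss (sketch on synthetic stand-in features; `max_iter` is an assumption, since the text does not report the training budget):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for DFBCSP feature vectors of two MI classes.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 4)),
               rng.normal(2.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# One hidden layer of 64 ReLU units, trained with Adam on the
# cross-entropy (log) loss, as configured in the text.
mlp = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                    solver="adam", max_iter=1000, random_state=0)
mlp.fit(X, y)
acc = mlp.score(X, y)
```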

2.4.3. LDA

LDA’s fundamental principle involves identifying the optimal projection direction to reduce the dimensionality of high-dimensional data, ensuring that the transformed low-dimensional representation satisfies the discriminant criterion of minimizing intra-class scatter while maximizing inter-class scatter. In this study, we employ the classical LDA algorithm configured with an automatic shrinkage estimator to enhance the model’s generalization capability in scenarios involving limited sample sizes. The specific process is as follows. Here, $\mu_c$, $S_w$, $S_b$, $w$, and $\hat{y}$ represent the class mean, within-class scatter, between-class scatter, optimal projection direction, and final decision output, respectively.
$$\mu_c = \frac{1}{N_c} \sum_{i=1}^{N_c} F_{k,i}^{(c)}$$
$$S_w = \sum_{c=1}^{2} \sum_{i=1}^{N_c} \left(F_{k,i}^{(c)} - \mu_c\right)\left(F_{k,i}^{(c)} - \mu_c\right)^{T}$$
$$S_b = (\mu_1 - \mu_2)(\mu_1 - \mu_2)^{T}$$
$$w = S_w^{-1} (\mu_1 - \mu_2)$$
$$\hat{y} = \operatorname{sign}\!\left(w^{T} F_k - w^{T}\, \frac{\mu_1 + \mu_2}{2}\right)$$
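A scikit-learn sketch of this shrinkage-LDA configuration (automatic Ledoit-Wolf shrinkage requires the `lsqr` or `eigen` solver; the synthetic features are a stand-in):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-in for DFBCSP feature vectors of two MI classes.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 4)),
               rng.normal(2.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# 'auto' selects the shrinkage intensity analytically (Ledoit-Wolf),
# which stabilizes the within-class covariance for small samples.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
lda.fit(X, y)
acc = lda.score(X, y)
```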

3. Results and Analysis

Figure 5 shows the time course of ERD/ERS from 0 s to 8 s. Clear ERD and ERS are present in the cortical EEG signals, occurring between 3.5 s and 5.5 s in each trial. When imagining flexion and extension movements of the left upper limb, the C3 channel shows an elevated potential reflecting ERS, while the C4 channel shows a reduced potential reflecting ERD, illustrating the brain’s dynamic neural regulation during information processing.
To obtain reliable performance estimates, the dataset is evenly divided into five subsets; in each fold, four subsets are used as the training set and one as the validation set, yielding a five-fold cross-validation. The specific training accuracies for the respective classification tasks are detailed in Table 1 and Table 2. Subsequently, the performance of six algorithm combinations (DFBCSP + MLP, DFBCSP + SVM, DFBCSP + LDA, CSP + MLP, CSP + SVM, and CSP + LDA) was rigorously evaluated on the test set using classification accuracy, the Kappa coefficient, and receiver operating characteristic (ROC) curves.
For subjects S1–S16, the mean accuracy and standard deviation reported in the tables were used to calculate the 95% confidence interval for each algorithm, according to:
$$\text{CI} = \bar{x} \pm t_{\alpha/2,\, n-1}\, \frac{s}{\sqrt{n}}$$
Here, $\bar{x}$ is the mean accuracy, $n$ the number of subjects, $s$ the standard deviation, and $t_{\alpha/2,\, n-1}$ the critical value of the $t$ distribution with $n - 1$ degrees of freedom.
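The interval can be computed directly from per-subject accuracies, for example (a generic sketch, not tied to the reported tables):

```python
import numpy as np
from scipy import stats

def mean_confidence_interval(accuracies, level=0.95):
    """Two-sided t-based confidence interval for the mean accuracy."""
    a = np.asarray(accuracies, dtype=float)
    n = a.size
    m, s = a.mean(), a.std(ddof=1)  # sample mean and sample std
    t_crit = stats.t.ppf(1.0 - (1.0 - level) / 2.0, df=n - 1)
    half = t_crit * s / np.sqrt(n)
    return m - half, m + half
```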
To quantify the differences in classification performance among the various algorithms, we first conducted a one-way repeated-measures analysis of variance (ANOVA [30]) on the classification accuracies of the 16 subjects obtained with the six algorithms. The results demonstrated a highly significant main effect of algorithm type on classification accuracy, indicating overall differences among the accuracies of the six algorithms, as detailed in Table 3 and Table 4. To further identify the specific sources of these differences, we performed post hoc multiple comparisons using Bonferroni-corrected paired t-tests. The tests showed that the DFBCSP-based algorithms achieved significantly higher accuracies than their corresponding conventional CSP-based algorithms. Additionally, under the DFBCSP framework, the Multi-Layer Perceptron (MLP) classifier significantly outperformed the Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) classifiers, confirming the performance advantage of the DFBCSP + MLP algorithm.
To conduct an in-depth analysis of the classification performance of each algorithm, we introduced the Kappa coefficient as a pivotal evaluation metric. An elevation in this value directly reflects enhancements in both model accuracy and stability [31]. Drawing upon experimental data from subjects S1 through S16, Table 5 and Table 6 delineate the specific performance of the six comparative methods across two-class and three-class classification tasks, respectively. Through the comparison of these quantitative data, performance disparities among the models when addressing classification tasks of varying complexities can be clearly observed.
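For reference, the Kappa coefficient can be computed with scikit-learn on a small hypothetical three-class example (the labels and predictions below are illustrative, not experimental data):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical three-class trials (0 = relax, 1 = left flexion,
# 2 = right extension). Kappa discounts chance-level agreement:
# here observed agreement is 0.8 and chance agreement is 0.34.
y_true = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2, 0, 1, 2, 1])
kappa = cohen_kappa_score(y_true, y_pred)  # (0.8 - 0.34) / (1 - 0.34)
```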
The ROC curve not only dynamically depicts the trade-off between the true positive rate (TPR) and false positive rate (FPR) across a continuum of decision thresholds but also offers a robust characterization of a model’s discriminative performance in motor imagery electroencephalography (MI-EEG) classification. In this study, clear classification labels were defined a priori: for the two-class task, MI of left upper limb extension/flexion was designated as the positive class, while the resting (relaxation) state was assigned as the negative class. For the three-class task, to enable a streamlined evaluation of multidimensional discriminative performance and objectively reflect the model’s overall ability to recognize the three specific movement states—Relaxation, Left Limb Flexion, and Right Limb Extension—we employed the macro-average ROC curve, which is derived by macro-averaging the per-class ROC curves computed for each individual class against all other classes combined. As a canonical global metric for assessing classifier performance, the AUC is highly sensitive to the distinctness of the learned classification boundaries; an AUC value approaching 1 (corresponding to an ROC curve that nears the top-left corner of the coordinate plane) signifies the model’s superior capacity to extract and discriminate task-relevant MI-EEG signal features [32]. Figure 6 and Figure 7 present the ROC curve profiles for representative subjects across different experimental paradigms, while Table 7 reports the mean AUC values (including the macro-average AUC for the three-class task) for each classification method.
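The macro-averaged one-vs-rest AUC described here is available directly in scikit-learn (hypothetical score matrix; for the multiclass case, each row must be probability-like and sum to 1):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-class probability scores for six three-class trials.
y_true = np.array([0, 1, 2, 0, 1, 2])
scores = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.1, 0.2, 0.7],
                   [0.6, 0.3, 0.1],
                   [0.3, 0.5, 0.2],
                   [0.2, 0.2, 0.6]])

# One ROC/AUC per class against the rest, then unweighted averaging.
macro_auc = roc_auc_score(y_true, scores, multi_class="ovr",
                          average="macro")
```

In this toy example every class is ranked perfectly against the rest, so the macro AUC is 1.0.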
Based on the data in the above figures and tables, the DFBCSP + MLP algorithm demonstrates clear performance advantages in processing EEG signals: it delivered superior classification performance in upper-limb motor-intention recognition, and the comparative experiments show a significant improvement over the conventional CSP algorithm.
Figure 8 shows the confusion matrix of the real-time three-class results for one subject. The main diagonal of the confusion matrix reflects the correct predictions for the three categories; the columns represent the actual classes, and the rows represent the predicted classes. Darker colors indicate higher prediction rates.

4. Discussion

The performance of DFBCSP and conventional CSP in motor imagery (MI) EEG decoding is systematically evaluated through two-class (relaxing vs. left upper limb MI) and three-class tasks, with results corroborated by accuracy, Kappa coefficient, ROC curves, and AUC metrics.
Experimental results demonstrate that the selection of feature extraction methods and classifiers exerts a significant influence on the recognition performance of electroencephalogram (EEG) signals. In terms of feature extraction, DFBCSP exhibited superior performance to the traditional CSP in both binary and ternary classification tasks, with an average accuracy improvement of approximately 5% to 8% and a significantly lower standard deviation. This verifies the robustness of multi-band discriminative features in capturing the non-stationary information of EEG signals and suppressing individual differences. Regarding classifiers, under the same feature conditions, the performance of the three classifiers followed the order MLP > SVM > LDA, reflecting that nonlinear classifiers represented by the MLP possess stronger feature mapping and discriminative capabilities than linear classifiers (LDA) when processing high-dimensional EEG features. In summary, the DFBCSP + MLP combination achieved the highest recognition accuracy in both the binary (94.83%) and ternary (86.20%) classification tasks, with the most concentrated 95% confidence intervals, indicating that this combination is the strongest recognition framework among those tested for complex multi-class EEG tasks.
BCI illiteracy refers to the phenomenon in motor imagery (MI) based brain–computer interface (BCI) training and experiments where certain individuals fail to achieve sufficiently high and usable control performance over an extended period [33]. Its causes may be associated with multiple factors, such as neurophysiological differences, yet no unified conclusion has been reached to date. In our experiments, Subject S4 exhibited a markedly lower accuracy than the other participants in the ternary classification task, a typical instance of this phenomenon. The DFBCSP method can amplify weak and scattered discriminative information through multi-band filtering and mutual information-based discriminative feature selection. Even for Subject S4, DFBCSP combined with a multi-layer perceptron (MLP) achieved an accuracy increase of approximately 10.38% compared with the CSP + MLP framework in the three-class task; in the two-class and three-class tasks, it further yielded increases of 16.92% and 15.54%, respectively, relative to the CSP + LDA framework. This demonstrates the DFBCSP’s robustness and compensatory ability for users with weak signals or low controllability, and its potential to enhance the generalizability of BCI systems to illiterate subjects. This study only investigates the manifestations and performance differences of BCI illiteracy, without exploring its underlying neural mechanisms or enabling a rigorous clinical diagnosis of BCI illiteracy. Moreover, DFBCSP cannot fundamentally eradicate this phenomenon. Future research will conduct in-depth investigations into the causes and intervention strategies of BCI illiteracy, analyze the characteristics of low-controllability users via multi-dimensional assessment, and explore the improvement effects of combined methodological frameworks.
The computational complexity and system response time are summarized in Table 8, where N, K, T, F, S, and H denote, respectively, the number of EEG signal channels, the number of DFBCSP sub-bands, the number of sampling points per trial, the number of extracted features, the number of training samples, and the number of neurons in the hidden layer of the MLP.
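Plugging representative values into the Table 8 terms makes the relative costs concrete. K = 9 matches the paper's sub-band count, but the remaining parameter values below are illustrative assumptions of ours, not figures reported in the paper.

```python
# Assumed, illustrative parameter values (only K = 9 comes from the paper):
# channels, sub-bands, samples per trial, features, training trials, hidden units.
N, K, T, F, S, H = 32, 9, 1000, 36, 120, 64

csp_cost = N**3 + T * N**2      # covariance accumulation + eigendecomposition, one band
dfbcsp_cost = K * csp_cost      # the same spatial-filtering work repeated per sub-band
mlp_cost = S * F * H            # hidden-layer multiply-accumulates for one training pass

print(csp_cost, dfbcsp_cost, mlp_cost)  # 1056768 9510912 276480
```

Under these assumptions the K-fold repetition of the spatial filtering dominates, while the MLP term stays small, which is consistent with the identical 0.3–0.5 s response times reported across methods in Table 8.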
While the proposed approach yields promising results, several limitations merit discussion. First, the study’s sample size (sixteen participants) is relatively small, and the MI tasks (e.g., left flexion vs. right extension) are simplified compared to real-world rehabilitation scenarios. Future research should expand the cohort to include more diverse populations and validate the method on more complex, ecologically valid MI paradigms. Second, the current DFBCSP framework uses a fixed set of nine sub-bands, which may not be optimal for all individuals or MI tasks. Adaptive sub-band selection (e.g., personalized frequency partitioning based on individual EEG characteristics) could further enhance performance, particularly for subjects with “BCI illiteracy”.
In terms of clinical relevance and translational potential, the method’s feasibility hinges on addressing key real-world constraints. Temporally, the DFBCSP + MLP’s offline processing latency is acceptable for offline personalized rehabilitation planning but requires optimization (e.g., adaptive sub-band pruning, lightweight MLP quantization) to meet the <100 ms latency threshold for real-time BCI-guided training, critical for patient-machine interaction in clinical settings. System-wise, the moderate computational demands (Table 8) are compatible with portable EEG devices, supporting deployment in rehabilitation centers or home-based care, though power consumption optimization is needed for long-term wearable use. Clinically, the simplified MI tasks limit direct translation; future validation on patients with upper limb dysfunction should incorporate functional, task-specific movements (e.g., reaching, grasping) to align with real rehabilitation goals.
In recent years, deep learning methods such as CNNs and their variants [34,35] have achieved excellent performance on public MI-EEG datasets by automatically learning spatiotemporal features from raw or minimally preprocessed signals in an end-to-end manner. They show great potential for modeling complex nonlinear EEG patterns, but rely on sufficient data and computing power. By contrast, the DFBCSP + MLP method proposed in this study integrates classical neurophysiological priors with a lightweight nonlinear classifier: DFBCSP exploits μ- and β-band ERD/ERS priors and improves feature interpretability and robustness through multi-sub-band filtering and discriminative feature selection, while the MLP performs nonlinear mapping in feature space with far fewer parameters than deep CNNs or EEGNet. The method therefore demands less training data and hardware, making it better suited to the few-shot learning, portability, and real-time requirements of rehabilitation scenarios. In future work, we will incorporate lightweight CNNs/EEGNet or attention modules into the framework to build a "DFBCSP features + micro deep networks" hybrid architecture, and combine transfer learning with adaptive frequency band selection to compare and fuse the performance and deployability of mainstream deep learning methods in clinical populations. Notably, the present study does not consider the integration of physical modeling (e.g., the finite element method, FEM) with artificial intelligence (AI), although numerous studies have shown that such integration can effectively enhance the generalization ability and interpretability of AI systems for upper limb rehabilitation [36]. This represents a promising direction for further improving the interpretability and robustness of rehabilitation-oriented brain–computer interface (BCI) systems built on the proposed method.
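The "far fewer parameters" claim can be made tangible with a rough parameter count. The layer sizes below (36 DFBCSP features, one 64-unit hidden layer, 3 output classes, and a naive dense baseline on 32 channels × 1000 samples of raw EEG) are our assumptions for illustration, not the paper's reported architecture.

```python
# Rough parameter-count comparison: a small MLP on compact DFBCSP features
# versus a naive fully connected network applied directly to raw EEG.
def mlp_params(n_in, hidden, n_out):
    """Weights + biases of a fully connected network with the given layer sizes."""
    sizes = [n_in, *hidden, n_out]
    return sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))

feature_mlp = mlp_params(36, [64], 3)         # DFBCSP features -> small MLP
raw_mlp = mlp_params(32 * 1000, [64], 3)      # same MLP on flattened raw EEG

print(feature_mlp, raw_mlp)  # 2563 2048259
```

Under these assumptions the feature-space MLP is roughly three orders of magnitude smaller than a dense network on raw signals, which is the practical argument for pairing hand-crafted spatial filtering with a lightweight classifier in data-scarce rehabilitation settings.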
Despite these limitations, the findings highlight DFBCSP’s potential as a robust feature extraction tool for upper limb MI-EEG decoding, providing a technical foundation for precision neurorehabilitation. For clinical applications, the DFBCSP + MLP model could be integrated into wearable BCI devices to deliver personalized rehabilitation training, particularly for patients with upper limb motor dysfunction.

5. Conclusions

We investigated six algorithm combinations to optimize the classification strategy and performance of the MI-BCI system in distinguishing upper limb extension and flexion movements: DFBCSP + MLP, DFBCSP + SVM, and DFBCSP + LDA, alongside their conventional CSP-based counterparts CSP + MLP, CSP + SVM, and CSP + LDA. By incorporating multi-band filtering and mutual information-based discriminative feature selection, the DFBCSP algorithm effectively removes redundant information while retaining the features that contribute most to classification. Methods employing the DFBCSP algorithm consistently outperformed those based on conventional CSP. Specifically, in the two-class classification task, the DFBCSP + MLP method achieved a remarkable average accuracy of 94.83%, an improvement of approximately 8.23% over traditional methods; the average Kappa coefficient reached 0.890, and the average AUC value was 0.954. In the three-class classification task, the Kappa coefficient improved by approximately 0.10. Moreover, our results indicate that the MLP classifier, owing to its nonlinear mapping capability, performed significantly better than SVM and LDA. The DFBCSP + MLP method is therefore identified as the optimal combination of spatial filtering and classification algorithms, with substantial potential for enhancing the performance of upper limb motor imagery systems.

Author Contributions

Y.Z.: Methodology, Software, Data curation, Validation, Writing—original draft, Visualization. X.S.: Conceptualization, Resources, Writing—review & editing, Supervision, Project administration, Funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

Funded by Basic Research Program of Jiangsu (BK20251914) and the General Program of Natural Science Foundation of Nantong (JC2023072).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are available from the lead contact upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BCI	Brain–Computer Interface
CSP	Common Spatial Pattern
DFBCSP	Discriminative Filter Bank Common Spatial Pattern
MI	Motor Imagery
MLP	Multilayer Perceptron
SVM	Support Vector Machine
LDA	Linear Discriminant Analysis
ALS	Amyotrophic Lateral Sclerosis
SCI	Spinal Cord Injury
ERD	Event-Related Desynchronization
ERS	Event-Related Synchronization
EMG	Electromyogram
EOG	Electrooculogram
ROC	Receiver Operating Characteristic
TPR	True Positive Rate
FPR	False Positive Rate
AUC	Area Under the Curve

References

  1. Teng, J.; Cho, S.; Lee, M.S. Tri-manual interaction in hybrid BCI-VR systems: Integrating gaze, EEG control for enhanced 3D object manipulation. Front. Neurorobotics 2025, 19, 1628968. [Google Scholar] [CrossRef]
  2. Kosmyna, N.; Hauptmann, E.; Hmaidan, Y. A Brain-Controlled Quadruped Robot: A Proof-of-Concept Demonstration. Sensors 2023, 24, 80. [Google Scholar] [CrossRef]
  3. Butler, J.A.; Page, J.S. Mental Practice with Motor Imagery: Evidence for Motor Recovery and Cortical Reorganization After Stroke. Arch. Phys. Med. Rehabil. 2006, 87, 2–11. [Google Scholar] [CrossRef]
  4. Gruenwald, J.; Znobishchev, A.; Kapeller, C.; Kamada, K.; Scharinger, J.; Guger, C. Time-variant linear discriminant analysis improves hand gesture and finger movement decoding for invasive brain-computer interfaces. Front. Neurosci. 2019, 13, 901. [Google Scholar] [CrossRef]
  5. Ghaffar, M.S.B.A.; Khan, U.S.; Iqbal, J.; Rashid, N.; Hamza, A.; Qureshi, W.S.; Tiwana, M.I.; Izhar, U. Improving classification performance of four class FNIRS-BCI using Mel Frequency Cepstral Coefficients (MFCC). Infrared Phys. Technol. 2021, 112, 103589. [Google Scholar] [CrossRef]
  6. Moraes, C.P.A.; dos Santos, L.H.; Fantinato, D.G.; Neves, A.; Adali, T. Independent Vector Analysis for Feature Extraction in Motor Imagery Classification. Sensors 2024, 24, 5428. [Google Scholar] [CrossRef]
  7. Spanos, M.; Gazea, T.; Triantafyllidis, V.; Mitsopoulos, K.; Vrahatis, A.; Hadjinicolaou, M.; Bamidis, P.D.; Athanasiou, A. Post Hoc Event-Related Potential Analysis of Kinesthetic Motor Imagery-Based Brain-Computer Interface Control of Anthropomorphic Robotic Arms. Electronics 2025, 14, 3106. [Google Scholar] [CrossRef]
  8. Wang, D.; Zhang, X.; Yue, S.; Guo, D.; Jiang, L.; Feng, C.; Leng, J.; Huang, S.; Zhang, Y.; Xu, F. Analysis of electroencephalography oscillation characteristics in spinal cord injury patients with neuropathic pain. Brain Res. Bull. 2025, 229, 111438. [Google Scholar] [CrossRef]
  9. Tiwari, S.; Goel, S.; Bhardwaj, A. MIDNN- a classification approach for the EEG based motor imagery tasks using deep neural network. Appl. Intell. 2021, 52, 4824–4843. [Google Scholar] [CrossRef]
  10. Z, Y.; Z, Q. Differential entropy feature signal extraction based on activation mode and its recognition in convolutional gated recurrent unit network. Front. Phys. 2021, 8, 629620. [Google Scholar] [CrossRef]
  11. Fauzi, H.; Azzam, M.A.; Shapiai, M.I.; Kyoso, M.; Khairuddin, U.; Komura, T. Energy extraction method for EEG channel selection. TELKOMNIKA (Telecommun. Comput. Electron. Control) 2019, 17, 2561. [Google Scholar] [CrossRef]
  12. Moufassih, M.; Tarahi, O.; Hamou, S.; Agounad, S.; Azami, H.I. Boosting motor imagery brain-computer interface classification using multiband and hybrid feature extraction. Multimed. Tools Appl. 2023, 83, 49441–49472. [Google Scholar] [CrossRef]
  13. Al Shiam, A.; Hassan, K.M.; Islam, R.; Almassri, A.M.M.; Wagatsuma, H.; Molla, K.I. Motor Imagery Classification Using Effective Channel Selection of Multichannel EEG. Brain Sci. 2024, 14, 462. [Google Scholar] [CrossRef]
  14. Al-Qazzaz, N.K.; Aldoori, A.A.; Ali, S.H.B.M.; Ahmad, S.A.; Mohammed, A.K.; Mohyee, M.I. EEG Signal Complexity Measurements to Enhance BCI-Based Stroke Patients’ Rehabilitation. Sensors 2023, 23, 3889. [Google Scholar] [CrossRef]
  15. Kapralov, N.; Idaji, M.J.; Stephani, T.; Studenova, A.; Vidaurre, C.; Ros, T.; Villringer, A.; Nikulin, V. Sensorimotor brain-computer interface performance depends on signal-to-noise ratio but not connectivity of the mu rhythm in a multiverse analysis of longitudinal data. J. Neural Eng. 2024, 21, 056027. [Google Scholar] [CrossRef]
  16. Huang, Z.; Wei, Q. Tensor decomposition-based channel selection for motor imagery-based brain-computer interfaces. Cogn. Neurodyn. 2023, 18, 877–892. [Google Scholar] [CrossRef]
  17. Gong, P.; Chen, M.Y.; Zhang, L.; Jian, W.J. HHT-Based Selection of Optimal Time-Frequency Patterns for Motor Imagery. Appl. Mech. Mater. 2013, 2617, 3522–3525. [Google Scholar] [CrossRef]
  18. Haresh, M.V.; Kannadasan, K.; Shameedha Begum, B. An EEG-based imagined speech recognition using CSP-TP feature fusion for enhanced BCI communication. Behav. Brain Res. 2025, 493, 115652. [Google Scholar] [CrossRef]
  19. Costa, A.; Møller, J.; Iversen, H.; Puthusserypady, S. An adaptive CSP filter to investigate user independence in a 3-class MI-BCI paradigm. Comput. Biol. Med. 2018, 103, 24–33. [Google Scholar] [CrossRef]
  20. Tang, Z.-C.; Li, C.; Wu, J.-F.; Liu, P.-C.; Cheng, S.-W. Classification of EEG-based single-trial motor imagery tasks using a B-CSP method for BCI. Front. Inform. Technol. Electron. Eng. 2019, 20, 1087–1098. [Google Scholar] [CrossRef]
  21. Fu, R.; Han, M.; Tian, Y.; Shi, P. Improvement motor imagery EEG classification based on sparse common spatial pattern and regularized discriminant analysis. J. Neurosci. Methods 2020, 343, 108833. [Google Scholar] [CrossRef]
  22. Peterson, V.; Wyser, D.; Lambercy, O.; Spies, R.; Gassert, R. A penalized time-frequency band feature selection and classification procedure for improved motor intention decoding in multichannel EEG. J. Neural Eng. 2019, 16, 016019. [Google Scholar] [CrossRef] [PubMed]
  23. Pratticò, D.; Laganà, F. Infrared Thermographic Signal Analysis of Bioactive Edible Oils Using CNNs for Quality Assessment. Signals 2025, 6, 38. [Google Scholar] [CrossRef]
  24. Fu, R.; Shen, L.; Lu, B.; Cai, M.; Wen, G.; Chen, J.; Hua, C. A convolutional transformer network with adaptation learning modules for enhancing motor imagery classification. Expert Syst. Appl. 2025, 269, 126381. [Google Scholar] [CrossRef]
  25. Meng, M.; Dong, Z.; Gao, Y.; She, Q. Optimal channel and frequency band-based feature selection for motor imagery electroencephalogram classification. Int. J. Imaging Syst. Technol. 2022, 33, 670–679. [Google Scholar] [CrossRef]
  26. Liu, Y.; Yu, S.; Li, J.; Ma, J.; Wang, F.; Sun, S.; Yao, D.; Xu, P.; Zhang, T. Brain state and dynamic transition patterns of motor imagery revealed by the bayes hidden markov model. Cogn. Neurodynamics 2024, 18, 2455–2470. [Google Scholar] [CrossRef]
  27. de Menezes, J.A.A.; Gomes, J.C.; Hazin, V.d.C.; Dantas, J.C.S.; Rodrigues, M.C.A.; dos Santos, W.P. Classification based on sparse representations of attributes derived from empirical mode decomposition in a multiclass problem of motor imagery in EEG signals. Health Technol. 2023, 13, 747–767. [Google Scholar] [CrossRef]
  28. Kanagaluru, V.; Sasikala, M. Two Class Motor Imagery EEG Signal Classification for BCI Using LDA and SVM. Trait. Signal 2024, 41, 2743–2749. [Google Scholar] [CrossRef]
  29. Kabir, H.; Akhtar, N.I.; Tasnim, N.; Miah, A.S.M.; Lee, H.-S.; Jang, S.-W.; Shin, J. Exploring Feature Selection and Classification Techniques to Improve the Performance of an Electroencephalography-Based Motor Imagery Brain-Computer Interface System. Sensors 2024, 24, 4989. [Google Scholar] [CrossRef]
  30. Laganà, F.; Faccì, A.R. Parametric optimisation of a pulmonary ventilator using the Taguchi method. J. Electr. Eng. 2025, 76, 265–274. [Google Scholar] [CrossRef]
  31. Wu, P.; Fei, K.; Chen, B.; Pan, L. MSEI-ENet: A Multi-Scale EEG-Inception Integrated Encoder Network for Motor Imagery EEG Decoding. Brain Sci. 2025, 15, 129. [Google Scholar] [CrossRef]
  32. Cui, Y.; Xie, S.; Fu, Y.; Xie, X. Predicting Motor Imagery BCI Performance Based on EEG Microstate Analysis. Brain Sci. 2023, 13, 1288. [Google Scholar] [CrossRef] [PubMed]
  33. Kim, D.-H.; Shin, D.-H.; Kam, T.-E. Bridging the BCI illiteracy gap: A subject-to-subject semantic style transfer for EEG-based motor imagery classification. Front. Hum. Neurosci. 2023, 17, 1194751. [Google Scholar] [CrossRef] [PubMed]
  34. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef]
  35. Wu, X.; Chu, Y.; Li, Q.; Luo, Y.; Zhao, Y.; Zhao, X. AMEEGNet: Attention-based multiscale EEGNet for effective motor imagery EEG decoding. Front. Neurorobotics 2025, 19, 1540033. [Google Scholar] [CrossRef] [PubMed]
  36. Pratticò, D.; De Carlo, D.; Silipo, G.; Laganà, F. Hybrid FEM-AI Approach for Thermographic Monitoring of Biomedical Electronic Devices. Computers 2025, 14, 344. [Google Scholar] [CrossRef]
Figure 1. Basic steps of research content.
Figure 2. Electrode distribution.
Figure 3. Experimental paradigm.
Figure 4. Flow chart of the DFBCSP algorithm processing.
Figure 5. The time course of ERD/ERS from second 0 to 8.
Figure 6. ROC curves of the six classification methods in the two-class classification task: (a) results for subject S4; (b) results for subject S6; (c) results for subject S11.
Figure 7. Macro-ROC curves of the six classification methods in the three-class classification task: (a) results for subject S2; (b) results for subject S7; (c) results for subject S13.
Figure 8. Three-class classification confusion matrix.
Table 1. Accuracy (%) of different methods (test set) in a two-class classification task.

Subject | CSP + LDA | CSP + SVM | CSP + MLP | DFBCSP + LDA | DFBCSP + SVM | DFBCSP + MLP
S1 | 92.57 | 93.64 | 92.78 | 94.27 | 96.36 | 98.75
S2 | 87.62 | 92.33 | 93.46 | 91.75 | 92.05 | 95.28
S3 | 83.14 | 84.51 | 86.30 | 85.27 | 91.94 | 95.48
S4 | 74.95 | 79.25 | 81.37 | 84.62 | 87.45 | 91.87
S5 | 78.38 | 80.34 | 83.61 | 86.45 | 89.83 | 93.24
S6 | 81.82 | 82.74 | 83.54 | 86.21 | 90.59 | 93.66
S7 | 84.57 | 85.49 | 84.92 | 87.73 | 89.47 | 94.08
S8 | 86.48 | 88.73 | 91.23 | 91.39 | 93.76 | 97.34
S9 | 79.56 | 81.46 | 81.77 | 84.69 | 87.51 | 92.57
S10 | 85.79 | 87.34 | 88.42 | 90.94 | 91.76 | 94.84
S11 | 84.63 | 86.85 | 85.68 | 88.64 | 92.45 | 95.18
S12 | 80.62 | 83.05 | 85.41 | 84.75 | 87.68 | 93.71
S13 | 82.37 | 83.77 | 84.37 | 85.76 | 90.16 | 94.14
S14 | 79.74 | 81.19 | 82.64 | 84.98 | 88.78 | 93.71
S15 | 85.31 | 88.54 | 89.79 | 90.29 | 93.46 | 96.59
S16 | 89.48 | 90.62 | 91.34 | 92.44 | 92.97 | 96.87
Average accuracy (%) | 83.56 ± 4.30 | 85.62 ± 4.20 | 86.60 ± 3.85 | 88.14 ± 3.15 | 91.01 ± 2.45 | 94.83 ± 1.79
95% confidence interval (%) | (81.29, 85.83) | (83.43, 87.81) | (84.58, 88.62) | (86.38, 89.90) | (89.68, 92.34) | (93.78, 95.88)
Table 2. Accuracy (%) of different methods (test set) in three-class classification tasks.

Subject | CSP + LDA | CSP + SVM | CSP + MLP | DFBCSP + LDA | DFBCSP + SVM | DFBCSP + MLP
S1 | 76.41 | 76.85 | 79.37 | 82.51 | 82.94 | 85.73
S2 | 76.62 | 78.39 | 81.54 | 80.23 | 81.76 | 86.41
S3 | 78.83 | 81.49 | 83.71 | 82.74 | 84.61 | 88.25
S4 | 63.59 | 65.64 | 70.28 | 71.35 | 74.24 | 79.13
S5 | 77.42 | 76.83 | 79.41 | 81.54 | 83.62 | 84.39
S6 | 68.51 | 71.45 | 74.84 | 75.69 | 80.37 | 83.46
S7 | 67.27 | 70.60 | 73.16 | 74.37 | 77.82 | 82.37
S8 | 73.39 | 76.31 | 79.52 | 81.25 | 85.39 | 90.29
S9 | 80.31 | 79.87 | 81.33 | 79.91 | 80.17 | 85.57
S10 | 69.84 | 74.62 | 79.25 | 80.34 | 83.51 | 87.64
S11 | 78.25 | 81.13 | 83.61 | 87.58 | 89.77 | 92.74
S12 | 67.43 | 69.57 | 74.25 | 78.36 | 83.04 | 85.36
S13 | 79.22 | 78.46 | 80.34 | 79.77 | 85.27 | 87.18
S14 | 68.31 | 70.35 | 75.43 | 79.85 | 80.79 | 84.74
S15 | 66.58 | 69.71 | 71.76 | 76.97 | 79.64 | 83.53
S16 | 81.47 | 82.79 | 84.39 | 87.79 | 91.57 | 92.48
Average accuracy (%) | 73.34 ± 5.67 | 75.25 ± 4.96 | 78.26 ± 4.28 | 80.02 ± 4.13 | 82.78 ± 4.10 | 86.20 ± 3.47
95% confidence interval (%) | (70.28, 76.40) | (72.72, 77.78) | (76.02, 80.50) | (77.85, 82.19) | (79.94, 84.62) | (84.25, 88.15)
Table 3. Significance testing for two-class classification results.

t (df = 15), p | CSP + LDA | CSP + SVM | CSP + MLP | DFBCSP + LDA | DFBCSP + SVM | DFBCSP + MLP
CSP + LDA | / | t = 1.73, p = 0.295 | t = 3.15, p < 0.05 | t = 6.34, p < 0.05 | t = 9.27, p < 0.05 | t = 12.15, p < 0.05
CSP + SVM | / | / | t = 1.42, p = 0.407 | t = 5.58, p < 0.05 | t = 8.51, p < 0.05 | t = 11.38, p < 0.05
CSP + MLP | / | / | / | t = 4.82, p < 0.05 | t = 7.75, p < 0.05 | t = 10.62, p < 0.05
DFBCSP + LDA | / | / | / | / | t = 5.63, p < 0.05 | t = 7.45, p < 0.05
DFBCSP + SVM | / | / | / | / | / | t = 4.89, p < 0.05
Table 4. Significance testing for three-class classification results.

t (df = 15), p | CSP + LDA | CSP + SVM | CSP + MLP | DFBCSP + LDA | DFBCSP + SVM | DFBCSP + MLP
CSP + LDA | / | t = 1.59, p = 0.367 | t = 2.94, p < 0.05 | t = 5.82, p < 0.05 | t = 8.45, p < 0.05 | t = 10.87, p < 0.05
CSP + SVM | / | / | t = 1.35, p = 0.438 | t = 5.07, p < 0.05 | t = 7.71, p < 0.05 | t = 10.12, p < 0.05
CSP + MLP | / | / | / | t = 4.31, p < 0.05 | t = 6.96, p < 0.05 | t = 9.37, p < 0.05
DFBCSP + LDA | / | / | / | / | t = 5.03, p < 0.05 | t = 6.74, p < 0.05
DFBCSP + SVM | / | / | / | / | / | t = 4.26, p < 0.05
Table 5. Kappa coefficients of the two-class classification methods.

Subject | CSP + LDA | CSP + SVM | CSP + MLP | DFBCSP + LDA | DFBCSP + SVM | DFBCSP + MLP
S1 | 0.734 | 0.749 | 0.777 | 0.798 | 0.826 | 0.861
S2 | 0.652 | 0.687 | 0.712 | 0.762 | 0.857 | 0.906
S3 | 0.738 | 0.767 | 0.786 | 0.817 | 0.834 | 0.875
S4 | 0.631 | 0.685 | 0.725 | 0.785 | 0.821 | 0.881
S5 | 0.702 | 0.714 | 0.742 | 0.783 | 0.832 | 0.893
S6 | 0.655 | 0.697 | 0.721 | 0.794 | 0.836 | 0.884
S7 | 0.724 | 0.741 | 0.793 | 0.824 | 0.857 | 0.912
S8 | 0.708 | 0.722 | 0.761 | 0.811 | 0.852 | 0.896
S9 | 0.687 | 0.713 | 0.748 | 0.787 | 0.838 | 0.892
S10 | 0.741 | 0.789 | 0.814 | 0.861 | 0.892 | 0.921
S11 | 0.722 | 0.753 | 0.792 | 0.824 | 0.871 | 0.931
S12 | 0.652 | 0.686 | 0.727 | 0.765 | 0.813 | 0.856
S13 | 0.665 | 0.714 | 0.734 | 0.767 | 0.807 | 0.867
S14 | 0.737 | 0.751 | 0.781 | 0.812 | 0.846 | 0.878
S15 | 0.648 | 0.687 | 0.716 | 0.758 | 0.818 | 0.863
S16 | 0.711 | 0.769 | 0.810 | 0.835 | 0.875 | 0.924
Average Kappa coefficient | 0.694 ± 0.04 | 0.727 ± 0.03 | 0.759 ± 0.03 | 0.799 ± 0.03 | 0.842 ± 0.02 | 0.890 ± 0.02
Table 6. Kappa coefficients of three-class classification methods.

Subject | CSP + LDA | CSP + SVM | CSP + MLP | DFBCSP + LDA | DFBCSP + SVM | DFBCSP + MLP
S1 | 0.647 | 0.659 | 0.690 | 0.719 | 0.726 | 0.772
S2 | 0.594 | 0.627 | 0.658 | 0.703 | 0.739 | 0.793
S3 | 0.615 | 0.643 | 0.674 | 0.712 | 0.725 | 0.759
S4 | 0.462 | 0.524 | 0.602 | 0.637 | 0.693 | 0.757
S5 | 0.544 | 0.603 | 0.614 | 0.633 | 0.688 | 0.762
S6 | 0.623 | 0.648 | 0.696 | 0.722 | 0.747 | 0.806
S7 | 0.493 | 0.544 | 0.618 | 0.627 | 0.651 | 0.734
S8 | 0.484 | 0.527 | 0.609 | 0.635 | 0.662 | 0.748
S9 | 0.513 | 0.572 | 0.624 | 0.674 | 0.713 | 0.771
S10 | 0.537 | 0.571 | 0.618 | 0.662 | 0.710 | 0.763
S11 | 0.521 | 0.593 | 0.629 | 0.668 | 0.734 | 0.788
S12 | 0.473 | 0.542 | 0.617 | 0.649 | 0.681 | 0.734
S13 | 0.583 | 0.615 | 0.654 | 0.714 | 0.762 | 0.815
S14 | 0.647 | 0.673 | 0.708 | 0.728 | 0.753 | 0.803
S15 | 0.524 | 0.568 | 0.628 | 0.691 | 0.737 | 0.807
S16 | 0.497 | 0.557 | 0.610 | 0.677 | 0.721 | 0.784
Average Kappa coefficient | 0.546 ± 0.06 | 0.592 ± 0.04 | 0.641 ± 0.03 | 0.678 ± 0.03 | 0.715 ± 0.03 | 0.775 ± 0.02
Table 7. Average AUC values of different classification methods.

AUC value | CSP + LDA | CSP + SVM | CSP + MLP | DFBCSP + LDA | DFBCSP + SVM | DFBCSP + MLP
Two-class classification (average) | 0.795 | 0.833 | 0.857 | 0.894 | 0.928 | 0.954
Three-class classification (macro-average) | 0.754 | 0.781 | 0.803 | 0.847 | 0.862 | 0.897
Table 8. Analysis of computational complexity and response time.

Item | CSP + LDA | CSP + SVM | CSP + MLP | DFBCSP + LDA | DFBCSP + SVM | DFBCSP + MLP
Computational complexity | O(N³ + T·N² + F³) | O(N³ + T·N² + S²·F) | O(N³ + T·N² + S·F·H) | O(K·(N³ + T·N²) + F³) | O(K·(N³ + T·N²) + S²·F) | O(K·(N³ + T·N²) + S·F·H)
Response time | 0.3–0.5 s | 0.3–0.5 s | 0.3–0.5 s | 0.3–0.5 s | 0.3–0.5 s | 0.3–0.5 s

Share and Cite

MDPI and ACS Style

Zhang, Y.; Shen, X. EEG Feature Extraction and Classification for Upper Limb Flexion and Extension Motor Imagery Based on Discriminative Filter Bank Common Spatial Pattern. Brain Sci. 2026, 16, 217. https://doi.org/10.3390/brainsci16020217
