Article

TPHFC-Net—A Triple-Path Heterogeneous Feature Collaboration Network for Enhancing Motor Imagery Classification

1 School of Intelligent Science and Control Engineering, Jinling Institute of Technology, Nanjing 211199, China
2 College of Information Science and Technology & Artificial Intelligence, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
Technologies 2026, 14(2), 96; https://doi.org/10.3390/technologies14020096
Submission received: 18 November 2025 / Revised: 3 January 2026 / Accepted: 7 January 2026 / Published: 2 February 2026

Abstract

Electroencephalography-based motor imagery (EEG-MI) classification is a cornerstone of Brain–Computer Interface (BCI) systems, enabling the identification of motor intentions by decoding neural patterns within EEG signals. However, conventional methods, predominantly reliant on convolutional neural networks (CNNs), are proficient at extracting local temporal features but struggle to capture long-range dependencies and global contextual information. To address this limitation, we propose a Triple-path Heterogeneous Feature Collaboration Network (TPHFC-Net), which synergistically integrates three distinct temporal modeling pathways: a multi-scale Temporal Convolutional Network (TCN) to capture fine-grained local dynamics, a Transformer branch to model global dependencies via multi-head self-attention, and a Long Short-Term Memory (LSTM) network to track sequential state evolution. These heterogeneous features are subsequently fused adaptively by a dynamic gating mechanism. In addition, the model’s robustness and discriminative power are further augmented by a lightweight front-end denoising diffusion model for enhanced noisy feature representation and a back-end prototype attention mechanism to bolster the inter-class separability of non-stationary EEG features. Extensive experiments on the BCI Competition IV-2a and IV-2b datasets validate the superiority of the proposed model, achieving mean classification accuracies of 82.45% and 89.49%, respectively, on the subject-dependent MI task and significantly outperforming existing mainstream baselines.

1. Introduction

Brain–Computer Interface (BCI) technology establishes a direct communication pathway between the human brain and external devices, emerging as a transformative force in fields such as rehabilitative engineering, robotics, and cognitive neuroscience [1]. As a prominent paradigm within this domain, Motor Imagery-based BCI (MI-BCI) operates by decoding the specific neural patterns generated during imagined limb movements, while concurrently holding significant promise for practical applications [2,3]. Among various modalities for monitoring neural activities, electroencephalography (EEG) is the predominant method due to its non-invasive nature, which records bioelectric signals from cortical neurons via scalp-mounted electrodes and simultaneously offers an ideal balance of a high safety profile, excellent temporal resolution, and low cost, thus establishing it as the standard for both research and application in MI-BCI [4,5].
The technical workflow of an MI-BCI system typically involves the acquisition and preprocessing of EEG signals from a specific mental task, followed by feature extraction and classification to recognize the user’s motor intent. Despite its structured process, EEG-MI classification faces significant challenges arising from the inherent electrophysiological properties of EEG signals. At the signal level, the challenge lies in the extremely low signal-to-noise ratio (SNR), which is often below −10 dB. This poor SNR arises because the microvolt-level motor-related cortical potentials (MRCPs) are heavily contaminated by artifacts such as electromyography (EMG) and ocular interference, resulting in a weak signal that impairs the efficacy of feature extraction and classification [6]. At the feature level, the key event-related desynchronization/synchronization (ERD/ERS) patterns exhibit high non-stationarity and substantial inter-subject variability, making it difficult for traditional methods reliant on hand-crafted features like Common Spatial Patterns (CSP) or Power Spectral Density (PSD) to capture these complex non-linear dynamics [7,8]. Most critically, conventional approaches suffer from significant performance degradation across different sessions and subjects, with clinical data revealing accuracy fluctuations of up to 10–20% for such classic classifiers as Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) [9], thereby severely hampering the practical deployment of MI-BCI systems.
To overcome the heavy reliance on manual feature engineering and the poor generalization of traditional methods, deep learning has emerged as a new paradigm for motor imagery decoding in MI-BCI systems. Unlike their conventional counterparts, deep neural networks can automatically learn complex non-linear mappings and demonstrate superior robustness against inter-subject variability and low SNR. Early explorations, such as Joseph et al.’s 1994 use of neural networks to classify clinical neurophysiological data, paved the way for this shift [10]. Following this work, the advent of novel spatio-temporal convolutional architectures like Shallow ConvNet and Deep ConvNet, which directly extract ERD/ERS features from EEG μ / β rhythms, marked the transition of EEG-MI classification to an era of end-to-end modeling [11]. Subsequent research has focused on architectural refinements to advance classification performance. EEGNet introduced depthwise separable convolutions to reduce model complexity [12], while EEG-TCNet enhanced temporal dependency modeling by integrating a Temporal Convolutional Network (TCN) into the EEGNet framework [13]. Further advancements came from models like CIACNet [14], SMT [15], and ASiBLS [16], which augmented the TCN with techniques such as multi-branch structures, multi-scale convolutions, and attention mechanisms. Despite these improvements, the fixed-dilation convolutions of TCNs still struggle to capture the aperiodic rhythms, temporal jitter, and cross-phase latency variations inherent in EEG signals. This limitation has spurred the development of hybrid modeling architectures. One prominent direction involves integrating the Transformer’s self-attention mechanism with the TCN, enabling models like EEG-Conformer [17], M-FANet [18] and MCTD [19] to synergistically model both local rhythms and long-range dependencies. Another approach leverages the Long Short-Term Memory (LSTM) architecture for its capacity to track dynamic state evolution. 
By combining convolutions with the gated units of LSTM, models such as CNN-LSTM [20] and FBLSTM [21] have improved the representation of phase-like changes in EEG signals, thereby boosting the accuracy of classification.
Owing to their distinct designs, different temporal modeling architectures present unique strengths and limitations in capturing the complex temporal features of EEG signals. TCN, for instance, is capable of efficiently extracting short-term local features via dilated convolutions, making it well-suited for transient responses like ERD/ERS in μ / β rhythms. However, its fixed receptive field is ill-equipped for non-uniform rhythmic variations. Transformer excels at modeling global dependencies using self-attention but is less sensitive to abrupt local events. Meanwhile, LSTM adeptly tracks gradual rhythmic trends by modeling state evolution, yet its inherent short-term memory restricts its capacity for global dependency modeling. The complementary nature of these architectures provides a compelling foundation for building robust, high-accuracy classification models. Nevertheless, naive fusion strategies have yielded limited gains. TCN-Transformer hybrids often suffer from poor modular coupling and inadequate fusion of global and local features [22]. Similarly, TCN-LSTM models, constrained by their serial design and parameter redundancy, struggle to simultaneously capture both long-range dependencies and sudden local rhythms [23]. These limitations reveal a critical insight: merely stacking modules is insufficient to unlock their respective structural advantages. This underscores the need for a sophisticated synergistic framework that organically integrates and dynamically complements the local perception of TCN, the global correlation of Transformer, and the state tracking of LSTM. Such a framework promises a more profound and comprehensive modeling of the complex temporal dynamics within EEG-MI signals.
Building upon the insights above, we introduce the Triple-path Heterogeneous Feature Collaboration Network (TPHFC-Net), an end-to-end model for enhanced motor imagery classification. As depicted in Figure 1, TPHFC-Net is architected with a four-stage progressive framework: (1) Progressive Feature Extractor (PFE): A composite front-end, integrating multi-scale temporal and depthwise separable convolutions, performs initial feature extraction. Subsequently, a denoising diffusion model is employed to bolster the noise robustness of these features. (2) Triple-Path Collaborative Temporal Architecture (TPCTA): The features extracted by PFE are channeled into three parallel streams and processed independently by TCN, Transformer, and LSTM modules. This tripartite design concurrently captures local rhythmic dynamics, models global cross-stage dependencies, and tracks the continuous evolution of signal states. (3) Dynamic Gating Fusion Module (DGFM): A dynamic gating mechanism adaptively learns the importance weights of the heterogeneous features from each stream, followed by a weighted sum fusion that achieves synergistic complementarity and yields an optimized, unified representation. (4) Prototype-Guided Classifier (PGC): In the final stage, a prototype-based attention mechanism guides features toward their corresponding class centers to enhance inter-class separability, after which a fully connected layer performs the final classification.
The primary contributions of this work are threefold:
  • We introduce a synergistic triple-path temporal modeling mechanism that concurrently leverages TCN, Transformer, and LSTM. This approach holistically models the short-term, global, and state-evolutionary characteristics of EEG-MI signals, thereby enhancing the representational power of the model.
  • We architect TPHFC-Net, an end-to-end neural network featuring a four-stage progressive framework for accurate motor intent recognition from EEG-MI signals.
  • We conduct comprehensive experiments on the BCI Competition IV-2a and IV-2b datasets under both subject-dependent and subject-independent settings, demonstrating that the proposed TPHFC-Net consistently outperforms mainstream baseline methods in EEG-MI classification.

2. Related Works

2.1. Classification of Motor-Imagery EEG

EEG-MI classification is a task fundamentally composed of two stages: data preprocessing, followed by model construction and training. The primary goal of preprocessing is to enhance the quality of raw EEG data, a critical step that directly governs the accuracy of all subsequent feature extraction and classification processes. Standard preprocessing techniques include filtering (e.g., band-pass filtering to mitigate high-frequency noise and baseline drift), artifact removal (to correct for ocular interference) and signal normalization, all of which serve to improve the SNR [24,25]. Once the data is cleaned, the subsequent stage involves constructing and training a model to extract discriminative features and ultimately classify the user’s intended motor task. It should be noted that the design of this model must be closely adapted to the intrinsic characteristics of the EEG signal.
In the context of MI, EEG signals present three crucial temporal characteristics: (1) Short-term local features: MI-induced neural phenomena, such as the suppression of μ (8–13 Hz) and β (13–30 Hz) rhythms, are often transient and concentrated within narrow time windows. Capturing these rapid, localized dynamics is therefore essential for EEG-MI classification [26]. (2) Global dependency features: Complex imaginary movements can elicit synergistic activity across distant brain regions, characterized by large temporal spans and strong global interdependencies. This manifests as phenomena like delayed synchronization and time-lagged coupling between signals from central and parietal areas [27]. (3) State-evolutionary features: The MI process itself is not static but unfolds through distinct phases (e.g., preparation, initiation, maintenance, termination). This phased evolution results in an unstable temporal structure where rhythms evolve slowly, exhibiting clear state continuity and long-range temporal dependencies. For instance, the β rhythm might be enhanced during initiation and return to baseline upon termination [28].
To address the challenge of EEG-MI classification, early research heavily relied on traditional machine learning pipelines. A typical workflow would involve first handcrafting features from the preprocessed data such as frequency energy, PSD or CSP, and then feeding them into a classic classifier like LDA, SVM, or k-Nearest Neighbors (k-NN) [29,30]. Among these, the Filter Bank Common Spatial Pattern (FBCSP) framework, proposed by Kai et al. [31], is arguably the most iconic. By integrating multi-band filtering with spatial feature extraction, FBCSP proved highly effective at enhancing task-relevant discriminative patterns and has seen widespread adoption in practical MI-BCI systems [32].
Despite their successes, these traditional methods’ heavy reliance on handcrafted features makes them struggle to capture the intricate, non-linear temporal structures inherent in EEG-MI signals, such as the short-term local features, global dependencies, and state-evolutionary dynamics discussed earlier. Furthermore, these methods are often sensitive to noise and exhibit poor generalization. Consequently, they are increasingly unable to meet the stringent demands for accuracy and robustness required in practical application scenarios.

2.2. Motor-Imagery Classification with CNN

The limitations of traditional machine learning spurred the adoption of deep learning, with CNNs yielding significant performance gains in MI classification. Pioneering work by Joseph et al. [10] in 1994 first applied neural networks to clinical neurophysiological data, developing a classification model that not only outperformed LDA but also demonstrated a distinct advantage in processing non-linear features. Though constrained by the computational bottlenecks and prohibitive training times of the era, this research paved the way for the later dominance of deep learning in MI classification.
A major breakthrough was the design of Shallow ConvNet by Schirrmeister et al. [11], inspired by the highly successful FBCSP method. This model ingeniously emulated FBCSP’s core components, using temporal convolutions to replicate band-pass filtering and spatial convolutions to mimic CSP’s spatial transformations; it thereby effectively isolated frequency-specific features and highlighted critical channel combinations, yielding accuracies that rivaled or surpassed FBCSP and firmly validating the superiority of CNNs in this domain. The drive for efficiency led Lawhern et al. [12] to develop EEGNet, a compact CNN architecture tailored for EEG data. By employing depthwise separable convolutions to decouple temporal and spatial feature learning, EEGNet drastically reduced its parameters while maintaining accuracy comparable to much larger models, establishing it as a cornerstone for lightweight MI classification. Building upon this foundation, Riyad et al. [33] proposed MI-EEGNet, which integrated an Inception-style architecture for multi-scale feature extraction and an Xception-like structure for enhanced modeling efficiency, thereby achieving stronger generalization across multiple datasets. A subsequent paradigm shift came when Ingolfsson et al. [13] addressed the persistent issue of causality in time-series modeling. Their EEG-TCNet model marked the first systematic application of TCN to MI classification. By incorporating causal and dilated convolutions, EEG-TCNet efficiently captured long-range temporal dependencies within a compact framework, solidifying TCN as a mainstream technique by delivering high accuracy with minimal parameters.
The advent of EEG-TCNet spurred a new wave of research aimed at extending and refining its architecture, which primarily targeted various aspects of the model. One major thrust was architectural innovation. CIACNet by Liao et al. [14] introduced a dual-branch convolutional structure with an enhanced convolutional block attention module (CBAM), empowering the TCN to model temporal features at varying semantic levels. Another approach, seen in ASiBLS by Yang et al. [16], employed a primary-auxiliary branch design to extract global and differential features, using a similarity-guided loss to foster complementary learning and boost generalization. A second area of focus was the optimization of convolutional units to better capture multi-scale features. Salami et al. [34] augmented the TCN with Inception modules in their EEG-ITNet model, enabling joint spectral-temporal modeling and significantly improving cross-subject recognition. Similarly, the SMT model from Yu et al. [15] featured a multi-branch separable convolution (MSC) module, where parallel branches with different kernel sizes captured short- and long-term temporal patterns that were subsequently integrated by a unified TCN. The integration of attention mechanisms emerged as another key strategy for refining feature relevance. For example, ETCNet by Qin et al. [35] synergistically combined an Efficient Channel Attention (ECA) module with a TCN. In this design, the ECA module first refines channel-wise representations, which the TCN then processes for temporal modeling, ultimately yielding higher classification accuracy.
As this body of work illustrates, the exceptional capacity of TCN for modeling local temporal dynamics solidifies it as a cornerstone of modern MI classification. Consequently, innovating upon this TCN foundation, whether through novel architectures, advanced feature modeling techniques, or other enhancements, remains the primary frontier for advancing the accuracy, generalization, and robustness of MI classification models.

2.3. TCN Combined with Transformer/LSTM

While TCNs demonstrate a marked ability to extract local temporal features from EEG signals, such as ERD/ERS, their inherent fixed receptive fields constrain the capacity to model long-range dependencies. This limitation makes it difficult for TCN to effectively capture cross-phase, long-term dynamic information within EEG signals. To circumvent this, some studies have integrated the self-attention mechanism of the Transformer model, which can directly model global dependencies across arbitrary time points, thereby enabling a sharper focus on critical temporal information. Song et al. [17] introduced the EEG Conformer model, which combines convolutional modules for local feature extraction with a Transformer to capture long-distance temporal dependencies, thus judiciously balancing local and global feature modeling capabilities. Expanding on this, Qin et al. [18] developed M-FANet, which incorporates multiple attention mechanisms to selectively emphasize frequency, spatial, and feature map dimensions for comprehensive multi-feature extraction, while simultaneously using regularization to suppress feature redundancy and bolster robustness and generalization. Furthermore, researchers have explored extending single convolutions to multi-scale variants, integrating them with Transformer to further enhance the model’s ability to represent EEG temporal characteristics. Hang et al. [19] presented the MCTD model, which extracts local features across diverse frequency ranges using dynamic convolutions, subsequently employing self-attention to model global temporal dependencies, thereby enriching the model’s capacity to express complex temporal features. In comparison, Zhu et al. [36] proposed IMCTNet, which adopts a more sophisticated multi-scale convolutional architecture and incorporates a channel attention mechanism to adaptively augment the representation capability of features at different scales, ultimately demonstrating superior feature expression and generalization performance.
Despite the Transformer’s notable strengths in modeling global dependencies, it still lacks an efficient mechanism for capturing the dynamic state evolution processes inherent in EEG signals. Long Short-Term Memory (LSTM) networks, as time-series modeling architectures endowed with memory mechanisms, are proficient at continuously tracking rhythmic changes in brain electrical signals. This makes them particularly well-suited for characterizing the evolving patterns from the initiation to the termination phases within MI tasks. Consequently, researchers have also endeavored to incorporate LSTMs to bolster models’ ability to characterize signal state evolution features. Early investigations by Saputra et al. [37] directly applied LSTMs for classification following CSP feature extraction to verify their basic utility; however, their experimental results revealed suboptimal adaptability to complex and high-noise EEG signals. Ghinoiu et al. [20] subsequently introduced a CNN-LSTM-based architecture that leverages convolutional layers to directly extract spatial features from multi-channel EEG signals, with LSTM then modeling their temporal evolution. This hybrid approach considerably enhanced the models’ joint spatio-temporal modeling capabilities. Gui et al. [21] designed the FBLSTM model, which utilizes filter banks for multi-frequency band information extraction, integrates convolutions for spatial feature extraction, and then employs an attention-equipped LSTM module to model temporal variations. This holistic strategy facilitates the joint learning of frequency, spatial, and temporal domain information, thereby effectively enhancing the synergistic expressive power across multi-modal features.
Evidently, constructing hybrid temporal feature modeling structures that judiciously integrate TCN, Transformer, and LSTM, by capitalizing on their complementary strengths, will enable the comprehensive, joint modeling of short-term local features, global dependencies, and state evolution characteristics of EEG signals. Such an approach holds significant promise for yielding more flexible, refined, and accurate temporal feature representations and classification capabilities.

3. Methodology

In this section, we detail the proposed end-to-end neural network model based on Triple-path Heterogeneous Feature Collaboration (TPHFC-Net). It adopts a four-stage progressive architecture: a Progressive Feature Extractor with integrated noise diffusion modeling (PFE), a Triple-Path Collaborative Temporal Architecture (TPCTA) comprising TCN, Transformer, and LSTM, a Dynamic Gating mechanism for adaptive fusion of heterogeneous temporal features, and a Feature Classifier with a prototype attention mechanism. The overall preprocessing, model construction, and training pipeline of the proposed model is illustrated in Figure 2.

3.1. Data Pre-Processing

The initial pre-processing stage focuses on preparing the raw EEG signals while retaining their intrinsic spectral characteristics. Specifically, a broad-band Finite Impulse Response (FIR) filter with a passband of 0.5–100 Hz is applied to remove extremely low-frequency baseline drift and suppress high-frequency noise, while largely preserving the original EEG information content. Owing to its inherent linear-phase property, the FIR filter ensures that the temporal structure of the EEG signals remains intact without introducing phase distortion.
This research utilizes the BCI Competition IV-2a and IV-2b datasets, each of which contains recordings from nine subjects. In the BCI Competition IV-2a dataset, each subject completed 72 trials for each of four distinct MI tasks: left hand, right hand, feet, and tongue. Each trial comprises T = 1000 time samples recorded from C = 22 EEG channels. Consequently, the labeled sample set for any given subject can be defined as S_k = {(X_i^k, y_i^k)}_{i=1}^{M}, where X_i^k ∈ R^{C×T} is the data matrix of subject k for the i-th trial, y_i^k is the corresponding class label from {left hand, right hand, feet, tongue}, and M = 288 is the total number of trials per subject. In contrast, the BCI Competition IV-2b dataset also comprises EEG recordings from nine subjects but focuses on a binary MI classification task involving left-hand and right-hand imagery. For each trial, EEG signals are recorded over T = 1000 time samples from C = 3 EEG channels (C3, Cz, and C4), which are closely associated with motor-related cortical activity. The corresponding labeled sample set provides a complementary evaluation scenario characterized by limited spatial information.
To finalize the pre-processing pipeline, the labeled sample set undergoes global Z-score normalization to ensure a consistent data distribution across all channels and time points, followed by reshaping into the required tensor format to yield a pre-processed dataset ready for model training.
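The final normalization and reshaping step can be sketched as follows. This is a minimal numpy illustration, not the authors’ implementation: the FIR filtering stage is assumed to have already been applied, and the function name and singleton-axis layout are illustrative choices.

```python
import numpy as np

def zscore_and_reshape(trials):
    """Globally Z-score a stack of EEG trials and add a singleton
    'depth' axis so each sample is shaped (1, C, T) for 2D convolutions.

    trials: array of shape (M, C, T) -- M trials, C channels, T samples.
    """
    mu = trials.mean()           # global mean over all trials/channels/samples
    sigma = trials.std() + 1e-8  # global std (epsilon guards against /0)
    normed = (trials - mu) / sigma
    return normed[:, np.newaxis, :, :]  # shape (M, 1, C, T)

# Toy example with the IV-2a dimensions (M = 288 trials, C = 22, T = 1000):
rng = np.random.default_rng(0)
X = rng.normal(size=(288, 22, 1000))
X_pre = zscore_and_reshape(X)
print(X_pre.shape)  # (288, 1, 22, 1000)
```

Global (rather than per-channel) statistics are used here to match the paper’s description of a single consistent distribution across all channels and time points.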

3.2. Progressive Feature Extractor

The proposed PFE derives information-dense, dimensionally compact, and robust spatiotemporal features from the high-dimensional raw data through a three-stage process: decoupling of spatiotemporal features, multi-scale pattern capture, and diffusion-driven feature enhancement. This process ultimately generates an optimized feature tensor for the subsequent triple-path collaborative temporal architecture.

3.2.1. Decoupling of Spatiotemporal Features

For a given input sample tensor X_input ∈ R^{C×T}, the spatiotemporal decoupling process commences by applying a 2D temporal convolution layer with a (1, 32) kernel and L = 16 output channels to capture localized temporal patterns, yielding an initial feature map F_init ∈ R^{L×C×T}. To prevent premature entanglement of spatiotemporal information, this map is then fed into a depthwise separable convolution module consisting of a depthwise spatial convolution and a pointwise convolution. Specifically, the depthwise layer utilizes a (C, 1) kernel to independently model inter-channel spatial relationships at each time step. This is followed by a pointwise layer that facilitates cross-channel information exchange and expands the feature channel dimension from L = 16 to G = 32. The final output of this decoupling module is formulated as:

F_DST = ELU( W_p · W_s · BatchNorm(F_init) )

where F_DST ∈ R^{G×T}, and W_s and W_p represent the kernel weights for the depthwise spatial convolution and the pointwise convolution, respectively. This design ensures that each output feature is a non-linear combination of all input channel features, achieving effective cross-channel fusion while preserving the critical separation of spatiotemporal information.

3.2.2. Multi-Scale Pattern Capture

The core of our multi-scale pattern capture strategy is the Temporal Inception module, which enhances feature richness and discriminative power by employing multiple parallel temporal convolution paths. These paths utilize different kernel sizes to achieve varying receptive fields, enabling efficient modeling of multi-temporal-resolution features within the signal. For computational efficiency and to broaden the temporal context, the process begins with an average pooling layer with a kernel size of (1, 8), which compresses the time dimension from T = 1000 to T = 125. The pooled feature map F_DST ∈ R^{G×T} is then fed into four parallel branches with a unified output channel dimension H = G/4 to capture temporal patterns at different time scales: three grouped-convolutional branches with varying kernel sizes (1, K_i), K_i ∈ {3, 5, 7}, and a max-pooling branch with a kernel size of (1, K_p), K_p = 3. Unlike standard convolutions, these convolutional branches employ grouped convolutions, dividing the input channels into H groups for independent computation, which significantly reduces the parameter count. The outputs of the four branches, P_i ∈ R^{H×T}, can be expressed as:

P_i = Dropout( ELU( W_i ∗ F_DST ) ),                i = 1, 2, 3
P_4 = Dropout( ELU( W_4 ∗ MaxPooling(F_DST) ) )

where W_i denotes the convolution kernel of branch i and ∗ denotes temporal convolution. Subsequently, a second average pooling operation further compresses the time dimension to T = 15. This progressive dimensionality reduction strategy ensures that critical discriminative information is effectively encoded into the feature representation prior to extensive dimensionality reduction. Finally, the resulting feature tensors from the four branches are concatenated along the channel dimension, forming a unified and comprehensive multi-scale feature representation F_MMC ∈ R^{G×T}, denoted as:

F_MMC = AvgPooling( ELU( Concat(P_1, P_2, P_3, P_4) ) )
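The four-branch structure can be illustrated with a minimal numpy sketch. This is not the authors’ implementation: ELU/Dropout are omitted, the grouped convolutions are approximated by per-channel ("depthwise") convolutions followed by a random pointwise mix down to H = G/4 channels, and the averaging kernels are toy stand-ins for learned weights.

```python
import numpy as np

def conv1d_same(x, kernel):
    """Per-channel 1D convolution with 'same' padding; x: (C, T)."""
    return np.stack([np.convolve(row, kernel, mode='same') for row in x])

def maxpool1d_same(x, k=3):
    """Per-channel sliding max with 'same' padding; x: (C, T)."""
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)), mode='edge')
    T = x.shape[1]
    return np.stack([xp[:, j:j + T] for j in range(k)], axis=0).max(axis=0)

def temporal_inception(F, kernels=(3, 5, 7), rng=None):
    """Four parallel branches (three convs with K_i in {3,5,7} plus one
    max-pool branch), each mixed to H = G/4 channels, then concatenated."""
    if rng is None:
        rng = np.random.default_rng(0)
    G, T = F.shape
    H = G // 4
    outs = []
    for K in kernels:                              # convolutional branches
        branch = conv1d_same(F, np.ones(K) / K)    # toy averaging kernel
        W = rng.normal(size=(H, G)) / np.sqrt(G)   # pointwise mix G -> H
        outs.append(W @ branch)
    Wp = rng.normal(size=(H, G)) / np.sqrt(G)      # max-pooling branch
    outs.append(Wp @ maxpool1d_same(F, 3))
    return np.concatenate(outs, axis=0)            # back to (G, T)

F = np.random.default_rng(2).normal(size=(32, 125))
print(temporal_inception(F).shape)  # (32, 125)
```

The point of the sketch is the shape bookkeeping: each branch contributes H channels, so concatenation restores the original G channels while mixing three receptive-field sizes and one pooled view.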

3.2.3. Diffusion-Driven Feature Enhancement

To address the challenges of significant noise and inter-subject/session variability inherent in EEG signals, we introduce a diffusion-driven feature enhancement mechanism. In contrast to conventional methods like Dropout and additive noise that inject static noise, this mechanism dynamically adapts to the feature state by iteratively refining the noise distribution, thereby enabling a more robust recovery of the underlying signal representation. The mechanism operates iteratively, with each iteration comprising a forward noising phase and a reverse denoising phase based on the Denoising Diffusion Probabilistic Model (DDPM). In the forward phase, controlled Gaussian noise ε_t is injected into the input feature map F_MMC to construct a noisy version F̂_t:

F̂_t = √(α_t) · F_MMC + √(1 − α_t) · ε_t,    t ∈ {1, 2, …, T_S}

where α_t = 1 − β_t is defined as the fidelity coefficient, β_t is a predefined linear noise scale parameter, and t represents the iteration timestep. In the reverse phase, a lightweight network f_θ estimates the injected noise, ε_pred = f_θ(F̂_t), which is then used to progressively denoise the feature map:

ε_{t−1} = ε_t − ( β_t / √(1 − α_t) ) · ε_pred

After T_S rounds of iterative denoising, this module obtains the final noise correction result ε_0 and yields the final feature enhancement term, which is integrated back into the original feature map via a residual connection:

F_aug = F_MMC + λ_d · σ(F_MMC) · ε_0

This formulation scales the final injected noise based on both a fixed hyperparameter λ_d = 0.1 and the standard deviation of the original features, σ(F_MMC). This adaptively matches the perturbation’s energy to the feature’s intrinsic scale, functioning as a stable and effective regularization method.
Through the entire progressive feature extraction process, the raw input data is compressed and encoded into a highly compact and refined feature tensor F_aug ∈ R^{G×T}.
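The data flow of the forward-noising and reverse-denoising loop can be sketched as below. This is a hedged illustration only: the learned noise estimator f_θ is replaced by a hypothetical stand-in (it simply echoes the current noise), and the schedule endpoints are illustrative assumptions, so the sketch shows the update rules rather than the trained behavior.

```python
import numpy as np

def diffusion_enhance(F, T_S=4, lam=0.1, rng=None):
    """Sketch of forward noising, reverse noise correction, and the
    residual enhancement F_aug = F + lam * std(F) * eps_0."""
    if rng is None:
        rng = np.random.default_rng(0)
    betas = np.linspace(1e-4, 2e-2, T_S)   # linear noise schedule (assumed)
    eps = rng.normal(size=F.shape)          # initial Gaussian noise
    for t in reversed(range(T_S)):
        alpha_t = 1.0 - betas[t]
        # forward phase: construct the noisy feature map F_hat_t
        F_hat = np.sqrt(alpha_t) * F + np.sqrt(1.0 - alpha_t) * eps
        eps_pred = eps                      # stand-in for f_theta(F_hat)
        # reverse phase: eps_{t-1} = eps_t - beta_t / sqrt(1 - alpha_t) * eps_pred
        eps = eps - betas[t] / np.sqrt(1.0 - alpha_t) * eps_pred
    return F + lam * F.std() * eps          # residual enhancement with eps_0

F = np.random.default_rng(3).normal(size=(32, 15))
F_aug = diffusion_enhance(F)
print(F_aug.shape)  # (32, 15)
```

Because β_t/√(1 − α_t) = √(β_t), each reverse step shrinks the residual noise slightly, so the final ε_0 is a damped perturbation whose energy is then matched to the feature scale by λ_d · σ(F_MMC).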

3.3. Triple-Path Collaborative Temporal Architecture

Upon completion of the progressive feature extraction, the model engages its core computational engine: the Triple-Path Collaborative Temporal Architecture (TPCTA). The TPCTA is founded on the premise that any single temporal modeling paradigm has inherent inductive biases, preventing it from comprehensively capturing all dependencies within a signal [38]. To overcome this, the TPCTA deploys three parallel paths tailored to target the distinct temporal properties coexisting in EEG signals: short-term local patterns, long-range global dependencies, and continuous state evolution. This multi-path approach ensures a holistic and robust representation of the signal’s intricate temporal characteristics.

3.3.1. Lite-MSTCN: Capturing Local Multi-Scale Dependencies

The first path, Lite-MSTCN, leverages a multi-scale TCN to capture local dependencies across various time scales, such as the μ/β rhythmic signatures in EEG signals. Unlike traditional TCNs that sequentially stack dilated convolutions, Lite-MSTCN employs a parallel structure to broaden its multi-scale perception and incorporates an attention mechanism for adaptive, cross-scale feature integration. Initially, a lightweight convolution (LiteConv) layer performs a foundational transformation on the input feature tensor $F_{\mathrm{aug}}$. By synergizing depthwise and pointwise convolutions for intra-channel temporal modeling and inter-channel feature integration, respectively, followed by a channel shuffle operation to enhance cross-channel information exchange, this design boosts the model's feature representation while markedly reducing computational overhead. The transformed feature tensor $F_{\mathrm{base}} \in \mathbb{R}^{G \times T}$ is then channeled into three parallel dilated convolution branches. For branch $i \in \{0, 1, 2\}$, the dilation factor $d_i = 2^i$ increases exponentially while the kernel size remains fixed at $K_b = 3$. This architectural choice allows the model to achieve an exponentially expanding receptive field without added parametric complexity, facilitating the efficient capture of temporal dependencies at diverse scales. The output of each branch $B_i \in \mathbb{R}^{G \times T}$ is further refined through Batch Normalization and a GELU activation function:
$B_i = \mathrm{GELU}\left(\mathrm{BN}\left(\mathrm{CausalConv1D}(F_{\mathrm{base}}, K_b, d_i)\right)\right)$
where $\mathrm{CausalConv1D}(\cdot)$ denotes a causal one-dimensional convolution.
Departing from the static fusion methods (e.g., summation or concatenation) of conventional TCNs, Lite-MSTCN introduces a lightweight channel attention module to dynamically fuse the multi-scale features. The mechanism computes a global context vector via temporal average pooling, which then informs a compact two-layer convolutional network to generate adaptive attention weights $W_{\mathrm{DC}}^{i}$ for the three parallel branches. The final output of Lite-MSTCN, $H_{\mathrm{TCN}} \in \mathbb{R}^{G \times T}$, is a dynamically weighted combination of the branch features, tailored to the specific characteristics of each $B_i$:
$H_{\mathrm{TCN}} = \sum_{i=0}^{2} W_{\mathrm{DC}}^{i} \cdot B_i$
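The parallel dilated branches and their attention-weighted fusion can be sketched in PyTorch as follows. This is a simplified sketch: the channel count, the attention bottleneck width, and the omission of the LiteConv/channel-shuffle front end are illustrative assumptions, not the paper's exact configuration. Causality is obtained by padding with $(K_b - 1) \cdot d_i$ and trimming the output back to the original length.

```python
import torch
import torch.nn as nn

class LiteMSTCNSketch(nn.Module):
    """Sketch of the parallel multi-scale causal branches with
    channel-attention fusion; layer sizes are illustrative."""
    def __init__(self, channels=32, kernel=3):
        super().__init__()
        self.branches = nn.ModuleList()
        for i in range(3):
            d = 2 ** i                                   # dilation d_i = 2^i
            self.branches.append(nn.Sequential(
                nn.Conv1d(channels, channels, kernel, dilation=d,
                          padding=(kernel - 1) * d),     # pad, then trim -> causal
                nn.BatchNorm1d(channels),
                nn.GELU(),
            ))
        # compact two-layer conv net producing one weight per branch
        self.attn = nn.Sequential(
            nn.Conv1d(channels, channels // 4, 1), nn.GELU(),
            nn.Conv1d(channels // 4, 3, 1),
        )

    def forward(self, x):                                # x: (B, G, T)
        T = x.size(-1)
        outs = [b(x)[..., :T] for b in self.branches]    # keep first T steps (causal)
        ctx = x.mean(dim=-1, keepdim=True)               # temporal average pooling
        w = torch.softmax(self.attn(ctx), dim=1)         # (B, 3, 1) branch weights
        return sum(w[:, i:i + 1] * outs[i] for i in range(3))

x = torch.randn(4, 32, 100)                              # batch of (G, T) features
h_tcn = LiteMSTCNSketch()(x)
```

With dilations 1, 2, and 4 and a kernel of 3, the three branches see receptive fields of 3, 5, and 9 timesteps, respectively, before the attention weights combine them.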

3.3.2. Lite-Transformer: Capturing Global Contextual Dependencies

To capture global contextual dependencies that extend beyond the fixed receptive fields of TCNs, such as the long-range association between task cues and motor execution, the TPCTA incorporates a second path: a lightweight Transformer (Lite-Transformer). Standard Transformers are prone to overfitting when applied to short-sequence, small-sample EEG datasets, primarily due to their lack of inductive bias. To mitigate this issue, Lite-Transformer fortifies the standard architecture by incorporating convolutional inductive biases and a dynamic gating mechanism. Distinct from variants that rely solely on self-attention or convolutional bias, Lite-Transformer introduces a dynamic fusion mechanism that orchestrates a parallel interplay between global self-attention and local convolutional attention. This allows the model to capture global context while retaining sensitivity to local rhythmic patterns, enhancing its adaptability to non-stationary EEG signals.
The process begins by projecting the input tensor F aug into a stable feature space via a 1 × 1 convolution and BatchNorm1d layer, yielding the projected feature map:
$F_{\mathrm{MAP}} = \mathrm{BN}(W_{\mathrm{MAP}} \cdot F_{\mathrm{aug}})$
where $F_{\mathrm{MAP}} \in \mathbb{R}^{C_T \times G}$ and $C_T$ denotes the number of Transformer channels. $F_{\mathrm{MAP}}$ is then fed into two parallel branches within Lite-Transformer. The global context branch employs multi-head self-attention ($H_{\mathrm{head}} = 4$) to capture non-local dependencies across the entire sequence:
$F_{\mathrm{TRANS}}^{g} = \mathrm{MultiHeadAttention}(F_{\mathrm{MAP}}^{\top})$
Concurrently, the local structure branch processes the tensor F MAP with a LiteConv module. This step is crucial for injecting key convolutional inductive biases (e.g., translation invariance) into the model, enabling a more robust extraction of local structural features:
$F_{\mathrm{TRANS}}^{l} = \left(\mathrm{LiteConv}(F_{\mathrm{MAP}})\right)^{\top}$
Finally, a Linear Attention Gating unit composed of a multi-layer perceptron and a Sigmoid function is utilized to perform a weighted fusion of the features from the two branches. Critically, this unit takes the output of the global context branch as its input to dynamically generate a gating value between 0 and 1 for each time step and feature dimension, which then modulates the combination of the global and local feature streams:
$H_{\mathrm{TRANS}} = \left( G(F_{\mathrm{TRANS}}^{g}) \odot F_{\mathrm{TRANS}}^{g} + \left(1 - G(F_{\mathrm{TRANS}}^{g})\right) \odot F_{\mathrm{TRANS}}^{l} \right)^{\top}$
where $G(\cdot)$ and $\odot$ represent the linear gating operation and element-wise multiplication, respectively. This design empowers the model to autonomously arbitrate between the discriminative global context from self-attention and the robust local features from convolutions, based on the input data pattern, thereby resulting in a dynamic and complementary synergy between the two modeling paradigms.
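A compact PyTorch sketch of the two branches and the gating fusion follows. The dimensions are illustrative, and LiteConv is approximated here by a depthwise-plus-pointwise pair without channel shuffle; the gate is a two-layer perceptron with a Sigmoid output, as described above.

```python
import torch
import torch.nn as nn

class LiteTransformerSketch(nn.Module):
    """Sketch of the parallel global/local branches with linear attention
    gating; the LiteConv stand-in and sizes are illustrative assumptions."""
    def __init__(self, c_in=32, c_t=32, heads=4):
        super().__init__()
        self.proj = nn.Sequential(nn.Conv1d(c_in, c_t, 1), nn.BatchNorm1d(c_t))
        self.mha = nn.MultiheadAttention(c_t, heads, batch_first=True)
        # stand-in for LiteConv: depthwise + pointwise convolution
        self.local = nn.Sequential(
            nn.Conv1d(c_t, c_t, 3, padding=1, groups=c_t),   # depthwise
            nn.Conv1d(c_t, c_t, 1),                          # pointwise
        )
        self.gate = nn.Sequential(nn.Linear(c_t, c_t), nn.GELU(),
                                  nn.Linear(c_t, c_t), nn.Sigmoid())

    def forward(self, x):                        # x: (B, C, T)
        f = self.proj(x)                         # projected feature map F_MAP
        g_in = f.transpose(1, 2)                 # (B, T, C) for attention
        f_g, _ = self.mha(g_in, g_in, g_in)      # global context branch
        f_l = self.local(f).transpose(1, 2)      # local structure branch
        g = self.gate(f_g)                       # gating values in (0, 1)
        return (g * f_g + (1 - g) * f_l).transpose(1, 2)

x = torch.randn(4, 32, 100)
h_trans = LiteTransformerSketch()(x)
```

Note that the gate is computed from the global branch only, so the self-attention output decides, per timestep and feature, how much of the convolutional branch to admit.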

3.3.3. Lite-LSTM: Modeling State Evolution Dynamics

In contrast to the stateless TCN and non-recurrent Transformer, the stateful architecture of LSTM offers a distinct advantage in modeling temporal dynamics and non-stationarity. This rationale underpins the third path, Lite-LSTM, which is included not for architectural novelty but to serve as a dedicated “state evolution expert”. Leveraging its internal cell state and sophisticated gating mechanism, Lite-LSTM models the continuous narrative of the cognitive task, thereby filling a functional void left by the other two paths.
Lite-LSTM consists of a two-layer unidirectional LSTM architecture: the first layer maps the input sequence to a sequence of hidden states, which in turn serves as the input for the second layer to generate the final hidden state sequence. The state transition process can be concisely expressed as:
$h^{(1)} = \mathrm{LSTM}_1(F_{\mathrm{aug}})$
$h^{(2)} = \mathrm{LSTM}_2(h^{(1)})$
The output sequence from the second layer serves directly as the final feature representation of Lite-LSTM:
$H_{\mathrm{LSTM}} = h^{(2)} = [h_1^{(2)}, h_2^{(2)}, \ldots, h_T^{(2)}]$
Within the TPCTA framework, Lite-LSTM provides a modeling perspective that is orthogonal to Lite-MSTCN and Lite-Transformer. It offers a Markovian view of state evolution, enabling the model to capture state evolution memory, such as the continuous progression of brain states during an MI task, which stateless or non-recurrent architectures are inherently ill-equipped to handle. The inclusion of Lite-LSTM is therefore vital for ensuring the architecture's robustness, further complementing and enhancing the comprehensive feature learning capabilities of the model.
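In PyTorch, the two-layer stack described above reduces to a single `nn.LSTM` with `num_layers=2`, whose per-timestep outputs of the second layer form $H_{\mathrm{LSTM}}$. The sizes below are illustrative stand-ins for $G$ and $T$.

```python
import torch
import torch.nn as nn

# Sketch of Lite-LSTM: a two-layer unidirectional LSTM whose second-layer
# hidden states, one per timestep, form H_LSTM. Sizes are illustrative.
G, T, B = 32, 100, 4
lstm = nn.LSTM(input_size=G, hidden_size=G, num_layers=2, batch_first=True)

x = torch.randn(B, T, G)          # F_aug with time as the sequence axis
h_lstm, (h_n, c_n) = lstm(x)      # h_lstm: (B, T, G) = [h_1^(2), ..., h_T^(2)]
```

Because the recurrence carries a cell state across timesteps, each $h_t^{(2)}$ summarizes the entire trajectory up to $t$, which is precisely the state-evolution view the other two paths lack.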

3.4. Dynamic Gating Fusion Module

As detailed in Section 3.3, the TPCTA architecture yields three heterogeneous feature tensors: $H_{\mathrm{TCN}}$, $H_{\mathrm{TRANS}}$, and $H_{\mathrm{LSTM}}$. While dimensionally identical, these tensors encapsulate temporal information derived from three distinct modeling paradigms: convolutional, self-attentional, and recurrent. This heterogeneity demands their fusion into a single, more discriminative representation. To this end, we introduce a dynamic gating fusion module designed to adaptively weight the contribution of each path at every timestep, enabling a context-aware synthesis of these diverse features.
The process begins by concatenating the three heterogeneous features along the channel dimension to form an aggregated feature tensor $H_a = \mathrm{Concat}[H_{\mathrm{TCN}}; H_{\mathrm{TRANS}}; H_{\mathrm{LSTM}}]$. This provides a holistic input to a dedicated lightweight gating network $F_g$, which is composed of two 1D convolutional layers with a unified kernel size of 3 and ELU activations. The output of $F_g$ is then passed through a Softmax function to yield the dynamic gating weights for the three paths, $W_g \in \mathbb{R}^{3 \times T}$:
$W_g = \mathrm{Softmax}(F_g(H_a))$
where $W_g = [W_{\mathrm{TCN}}; W_{\mathrm{TRANS}}; W_{\mathrm{LSTM}}]$ and each slice quantifies the relative importance of the path features across all timesteps. The final fused representation $H_{\mathrm{fused}}$ is then computed by performing a timestep-wise weighted summation of the path features with these dynamic weights:
$H_{\mathrm{fused}} = W_{\mathrm{TCN}} \odot H_{\mathrm{TCN}} + W_{\mathrm{TRANS}} \odot H_{\mathrm{TRANS}} + W_{\mathrm{LSTM}} \odot H_{\mathrm{LSTM}}$
This dynamic gating fusion mechanism is essentially a data-driven arbitration strategy that empowers the model to learn complex fusion policies directly from the input data. For instance, the model can learn to amplify the contribution of H TCN when local signal rhythms are prominent, or conversely, prioritize H TRANS when long-range dependencies are more critical.
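The gating network and the weighted summation can be sketched in PyTorch as below; the channel count is an illustrative assumption, while the two conv layers with kernel size 3, the ELU activations, and the Softmax over the three paths follow the description above.

```python
import torch
import torch.nn as nn

class GatedFusionSketch(nn.Module):
    """Sketch of the dynamic gating fusion: concatenated path features
    drive a small conv net that emits per-timestep weights for 3 paths."""
    def __init__(self, channels=32):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv1d(3 * channels, channels, 3, padding=1), nn.ELU(),
            nn.Conv1d(channels, 3, 3, padding=1),
        )

    def forward(self, h_tcn, h_trans, h_lstm):           # each (B, G, T)
        h_a = torch.cat([h_tcn, h_trans, h_lstm], dim=1) # aggregated tensor H_a
        w = torch.softmax(self.gate(h_a), dim=1)         # (B, 3, T), sums to 1
        return (w[:, 0:1] * h_tcn + w[:, 1:2] * h_trans
                + w[:, 2:3] * h_lstm)                    # timestep-wise fusion

hs = [torch.randn(4, 32, 100) for _ in range(3)]
h_fused = GatedFusionSketch()(*hs)
```

Since the Softmax normalizes across the path dimension at every timestep, the three weights always form a convex combination, so the fused feature stays on the same scale as the inputs.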
Following this dynamic fusion, the model proceeds to a final feature integration stage. A 1 × 1 convolution first projects the dimension of the fused feature $H_{\mathrm{fused}}$ from $G = 32$ to a more expressive dimension of $G_{\mathrm{fused}} = 48$, followed by normalization and a non-linear activation to enhance the feature representation. The resulting temporal sequence is then condensed into a fixed-dimension feature vector $\chi_{\mathrm{final}} \in \mathbb{R}^{1 \times G_{\mathrm{fused}}}$ by applying Global Average Pooling across the temporal dimension, preparing it for the ultimate classification.

3.5. Prototype-Guided Classifier

The significant non-stationarity and distribution shifts inherent in EEG signals pose a considerable challenge, often rendering linear classifiers insufficient to establish stable inter-class decision boundaries. We address this limitation by introducing a Prototype-Guided Classifier (PGC) that precedes the final linear classifier. The core principle of PGC is to enhance feature separability through a refinement step that leverages a set of learnable class prototypes to optimize feature representations prior to the classification decision.
The PGC maintains a set of learnable class prototypes $\rho = [\rho_1, \rho_2, \ldots, \rho_N]$, where $N$ is the number of classes (four in this task), and the prototype vector $\rho_n \in \mathbb{R}^{1 \times G_{\mathrm{fused}}}$ can be viewed as a learnable centroid or a canonical exemplar of class $n$ within the feature space. For any given input feature tensor $\chi_{\mathrm{final}} \in \mathbb{R}^{1 \times G_{\mathrm{fused}}}$, the module processes it via a two-phase procedure:
Phase 1: Attention Weight Generation. Rather than directly computing input-prototype similarities, the module first feeds the input feature $\chi_{\mathrm{final}}$ into a dedicated feed-forward attention network to generate a set of dynamic, input-specific attention weights. These weights then undergo channel-wise refinement via a lightweight depthwise separable convolution before being normalized by a Softmax function to produce the final prototype fusion weights $w = [w_1, w_2, \ldots, w_N]$:
$w = \mathrm{Softmax}\left(\mathrm{DepthwiseConv}\left(\mathrm{AttentionNet}(\chi_{\mathrm{final}})\right)\right)$
Phase 2: Prototype-Guided Feature Refinement. Using the weights computed in the previous phase, the model performs a weighted sum over the prototype space to construct $\chi_{\mathrm{proto}}$, a prototype context vector highly relevant to the current sample:
$\chi_{\mathrm{proto}} = \sum_{n=1}^{N} w_n \cdot \rho_n$
This vector embodies a context-aware representation synthesized from the global class structure but guided by the sample's specific affinities. It is then integrated back into the original feature via a scaled residual connection with a learnable scaling factor $\lambda_p$, forming the refined feature vector $\chi_{\mathrm{refined}} \in \mathbb{R}^{1 \times G_{\mathrm{fused}}}$:
$\chi_{\mathrm{refined}} = \chi_{\mathrm{final}} + \lambda_p \cdot \chi_{\mathrm{proto}}$
This refinement process can be interpreted as an adaptive modulation of the original feature vector. It leverages the global manifold of the feature space, as defined by the prototypes, to gently steer each sample’s representation towards its corresponding class region. This process guides the model to learn an enhanced feature space characterized by greater intra-class compactness and inter-class separability.
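The two-phase refinement reduces to a few lines of linear algebra. In the sketch below, the attention network and depthwise convolution are collapsed into a single hypothetical linear map `W_a`, and all weights are random stand-ins rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Sketch of the prototype-guided refinement (Phases 1 and 2). W_a is an
# illustrative stand-in for AttentionNet + DepthwiseConv; all values are
# random placeholders for learned parameters.
N, G_fused = 4, 48
prototypes = rng.standard_normal((N, G_fused))   # learnable prototypes rho_n
W_a = rng.standard_normal((N, G_fused))          # hypothetical attention map
lam_p = 0.5                                      # learnable scaling factor

chi_final = rng.standard_normal(G_fused)
w = softmax(W_a @ chi_final)                     # Phase 1: fusion weights
chi_proto = w @ prototypes                       # Phase 2: prototype context
chi_refined = chi_final + lam_p * chi_proto      # scaled residual integration
```

Because the Softmax weights sum to one, $\chi_{\mathrm{proto}}$ always lies in the convex hull of the prototypes, so the residual nudges the sample toward the class region its affinities favor.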
Finally, the prototype-guided feature vector $\chi_{\mathrm{refined}}$ is fed into a standard fully-connected layer with weight matrix $W_c \in \mathbb{R}^{N \times G_{\mathrm{fused}}}$ and a Softmax function to compute the final posterior class probabilities:
$\hat{y}_i = \mathrm{Softmax}(W_c \cdot \chi_{\mathrm{refined}})$
This entire pipeline, from the dynamic gating fusion to the prototype-guided classification, collectively ensures that the model maximally utilizes the heterogeneous temporal features from multiple paths and leads to highly robust and accurate classification.

3.6. Loss Functions and Training Strategy

Given the model’s predicted probabilities from the Softmax layer and the one-hot encoded ground-truth labels, the loss is formulated as:
$L_{CE} = -\frac{1}{B} \sum_{i=1}^{B} \sum_{n=1}^{N} y_{i,n} \log(\hat{y}_{i,n})$
where B is the number of samples in a batch, N is the number of classes, and y ^ i , n is the predicted probability that sample i belongs to class n. This formulation is equivalent to the negative log-likelihood of the true class and serves to minimize the Kullback–Leibler (KL) divergence between the predicted and true distributions, which effectively reduces their statistical distance and strengthens the discriminative power of the model.
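For concreteness, the batch-averaged cross-entropy can be verified on a toy example. With one-hot targets, only the log-probability of the true class survives the inner sum, so the loss is the mean negative log-likelihood; the probabilities below are made-up values.

```python
import numpy as np

def cross_entropy(y_true, y_prob):
    """Mean cross-entropy: -(1/B) * sum_i sum_n y_{i,n} * log(yhat_{i,n})."""
    return -np.mean(np.sum(y_true * np.log(y_prob), axis=1))

# Toy batch: two samples, four classes, one-hot targets (illustrative values).
y_true = np.array([[1, 0, 0, 0],
                   [0, 0, 1, 0]], dtype=float)
y_prob = np.array([[0.7, 0.1, 0.1, 0.1],
                   [0.2, 0.1, 0.6, 0.1]])

loss = cross_entropy(y_true, y_prob)   # = -(log 0.7 + log 0.6) / 2
```

Here only the entries where $y_{i,n} = 1$ contribute, giving $-(\ln 0.7 + \ln 0.6)/2 \approx 0.434$.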
The model’s parameters are optimized by using Stochastic Gradient Descent (SGD). We selected SGD for its well-documented stability and predictable convergence, qualities that are particularly beneficial for maintaining strong generalization performance in models with complex feature fusion architectures. The update rule for the trainable parameters θ t is given by:
$\theta_{t+1} = \theta_t - \eta \left( \nabla L_{CE}(\theta_t) + \lambda \theta_t \right)$
where $\eta$ denotes the learning rate, $\nabla L_{CE}(\theta_t)$ is the gradient of the loss function with respect to $\theta_t$, and $\lambda$ is the weight decay coefficient that enforces L2 regularization.
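The update rule is straightforward to implement directly; the sketch below applies one step with the paper's learning rate of 0.001 and an illustrative weight decay value (the paper does not state $\lambda$).

```python
import numpy as np

def sgd_step(theta, grad, eta=0.001, lam=1e-4):
    """One SGD update with L2 weight decay:
    theta <- theta - eta * (grad + lam * theta). lam here is illustrative."""
    return theta - eta * (grad + lam * theta)

theta = np.array([1.0, -2.0])     # toy parameter vector
grad = np.array([0.5, 0.5])       # toy gradient of L_CE
theta_new = sgd_step(theta, grad)
```

The decay term `lam * theta` pulls each parameter toward zero in proportion to its magnitude, which is exactly the L2 regularization described above.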

4. Experiments Details

4.1. Experiment Setup

4.1.1. Experiment Preparations

In our experiments, we utilize a software stack consisting of Python 3.8, PyTorch 2.4.0, and CUDA 12.6, running on a Windows 11 OS. The hardware platform is a workstation equipped with an Intel i7-14700KF CPU (Intel, Santa Clara, CA, USA), 32 GB of DDR4 RAM (Gloway, Shenzhen, China), and a Tesla P40 GPU (NVIDIA, Santa Clara, CA, USA).

4.1.2. Dataset and Evaluation Metrics

To evaluate the performance of our proposed model, we conduct extensive experiments on the BCI Competition IV-2a and IV-2b datasets, widely adopted benchmarks for MI classification. The BCI Competition IV-2a dataset contains 9 × 4 × 72 = 2592 samples collected from 9 subjects, each performing 72 trials for four distinct MI tasks (left hand, right hand, both feet, and tongue), with each trial's data constituting a single sample of 22 × 1000 = 22,000 points acquired from 22 EEG channels over 1000 time points. By contrast, the BCI Competition IV-2b dataset consists of EEG recordings from nine subjects performing two motor imagery tasks (left hand and right hand). Each subject completes multiple trials per session, with EEG signals acquired from three central channels closely related to MI-related activity. For each trial, the EEG data are segmented into 1000 time points, resulting in a single sample of 3 × 1000 = 3000 points. Compared with BCI IV-2a, it provides substantially fewer spatial channels, thereby serving as a more challenging benchmark for evaluating the model's ability to extract discriminative temporal features under limited spatial information.
We selected Accuracy and Cohen’s Kappa as the primary metrics for performance evaluation, which are considered among the most prevalent and significant indicators in EEG classification. Accuracy, denoted as p o , is formally defined as the ratio of the number of correct predictions to the total number of classification trials:
$p_o = \frac{\sum_{i=1}^{N} TP_i}{M}$
where N represents the number of classes (specifically, N = 4 for the four-class MI task), M is the total number of samples, and T P i is the count of true positives for class i, i.e., instances of class i correctly identified as such.
Cohen’s Kappa, a metric particularly effective for imbalanced datasets, evaluates the consistency between model predictions and true labels while explicitly correcting for chance agreement. This makes Kappa a fairer metric for comparing algorithm performance, as it avoids the misleadingly high scores often achieved by models that rely on naive or biased classification strategies. The Kappa coefficient ( κ ) is calculated as follows:
$\kappa = \frac{p_o - p_e}{1 - p_e}$
In this equation, $p_o$ denotes the overall accuracy (observed agreement). The term $p_e$ quantifies the hypothetical probability of chance agreement and is defined as $p_e = \sum_{i=1}^{N} \frac{A_i \times B_i}{M^2}$, where $A_i$ and $B_i$ denote the total number of actual instances and predicted instances for class $i$, respectively.
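Both metrics follow directly from a confusion matrix, as the short sketch below shows; the 4 × 4 counts are hypothetical, not results from the paper.

```python
import numpy as np

def accuracy_and_kappa(C):
    """Accuracy p_o and Cohen's kappa from an N x N confusion matrix
    (rows: actual class, columns: predicted class)."""
    M = C.sum()                                           # total samples
    p_o = np.trace(C) / M                                 # observed agreement
    p_e = (C.sum(axis=1) * C.sum(axis=0)).sum() / M**2    # chance agreement
    return p_o, (p_o - p_e) / (1 - p_e)

# Hypothetical 4-class confusion matrix (illustrative counts only).
C = np.array([[50,  5,  3,  2],
              [ 4, 48,  5,  3],
              [ 2,  6, 47,  5],
              [ 3,  2,  4, 51]])

p_o, kappa = accuracy_and_kappa(C)
```

With these counts, $p_o = 196/240 \approx 0.817$ while $p_e = 0.25$ (balanced classes), so $\kappa \approx 0.756$: lower than the raw accuracy, because chance agreement has been discounted.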

4.1.3. Implementation Details

The hyperparameters for model training are detailed in Table 1. The model was trained using the Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.001. A batch size of 32 was selected to balance maximizing the GPU’s parallel processing capability for higher computational throughput against the constraint of the hardware memory capacity, thereby preventing out-of-memory (OOM) errors. With these settings, the model consistently achieved convergence within 2000 epochs.

4.2. Comparison with SOTA

Table 2 presents the quantitative results for subject-dependent classification on the BCI Competition IV-2a dataset, where the proposed model achieves a remarkable average accuracy of 82.45%. This performance represents a significant gain over classic baselines, such as EEGNet (+10.05%), EEG-ITNet (+5.71%), and EEG-TCNet (+5.10%). This superiority stems from overcoming the limitations of conventional methods, which rely on convolutional networks that primarily capture short-term local features (e.g., μ/β rhythms) while overlooking other critical temporal dynamics. This advantage also extends beyond classic benchmarks to recent state-of-the-art (SOTA) works, including MBCNN-EATCFNet (2025), DMSACNN (2025), and MSSAN (2024). While these advanced methods enhance the TCN framework with techniques like multi-branch structures, multi-scale convolutions, or attention mechanisms, they still adopt a one-sided approach, failing to achieve comprehensive temporal modeling. In contrast, our model introduces a three-path synergistic architecture that uniquely integrates three distinct modeling paradigms: leveraging TCN for short-term local features, Transformer for long-range global dependencies, and LSTM for state evolution dynamics. By adaptively fusing these complementary features derived from convolutional, self-attention, and recurrent paradigms, our model generates a more discriminative representation, leading to a substantial boost in classification performance.
The model’s superiority is further corroborated by its Kappa coefficient. With an average Kappa value of 0.77, it surpasses all other models listed in Table 2 and confirms a higher degree of agreement between its predictions and the true labels. This metric is particularly insightful because it quantifies agreement while accounting for chance, meaning the improved Kappa score indicates substantially enhanced reliability and stability in the model predictions. This implies that our model excels not only in capturing latent data patterns and minimizing misclassifications but also in demonstrating robust performance in practical applications. Consequently, the proposed model is distinguished by its dual advantages in accuracy and reliability, highlighting its significant practical utility and strong potential for widespread adoption.
To further validate the robustness of the proposed framework, we extended the evaluation to the BCI Competition IV-2b dataset. As presented in Table 3, the model achieves a state-of-the-art average subject-dependent classification accuracy of 89.49% and a Kappa coefficient of 0.78, consistently outperforming all competing approaches. This represents a substantial improvement over the second-best SMT model (87.67%) and a significant margin over classic baselines such as EEGNet (85.24%) and Shallow ConvNet (83.98%). Notably, the model demonstrates exceptional adaptability to individual subjects, attaining near-perfect classification rates on Subject 4 (99.17%) and Subject 5 (98.33%). This finding is particularly significant given that the BCI IV-2b dataset contains only three EEG channels (C3, Cz, C4), providing limited spatial information. The results indicate that even in scenarios with sparse spatial features, our proposed three-path synergistic architecture effectively compensates by extracting high-quality heterogeneous temporal features. This confirms the model’s efficacy and stability across varying EEG acquisition configurations.
In addition to subject-dependent evaluations, we employed a Leave-One-Subject-Out (LOSO) cross-validation protocol to rigorously assess the model’s generalization capability. Table 4 presents a comprehensive comparison of the classification accuracy and Kappa scores against several state-of-the-art baseline methods on the BCI Competition IV-2a and IV-2b datasets. As illustrated in the table, the proposed model consistently outperforms all competing approaches across both datasets. Specifically, on the BCI IV 2a dataset, our model achieves a leading accuracy of 67.36% and a Kappa coefficient of 0.56, surpassing the second-best performer, EEG-TCNet. The performance advantage is even more pronounced on the BCI IV 2b dataset, where the proposed method attains an accuracy of 83.74% and a Kappa of 0.67, demonstrating a significant margin over established models such as EEGNet. These results underscore the superior capability of the proposed architecture in capturing subject-invariant features, thereby exhibiting strong robustness against inter-subject variability.
Figure 3 provides a visual performance assessment via box plots, which illustrates the accuracy distribution of our model and competing models across all subjects. An analysis of the plots reveals that our model demonstrates comprehensive superiority to other models across key statistical metrics including the median (horizontal line), quartiles (box edges), and range (whiskers). However, it is noteworthy that certain models exhibit strong outlier performance. For instance, the EEG Conformer achieves a higher maximum accuracy on some subjects, while MSSAN and M-FANet exhibit greater stability (i.e., lower variance) in some cases, indicating their efficacy under particular conditions. Despite these isolated strengths, our model achieves a superior overall balance between peak performance and consistency across the cohort, which lies in an innovative temporal architecture that effectively harmonizes diverse modeling paradigms.
Figure 4 details the classification outcomes for each subject on MI task through confusion matrices, where the main diagonal signifies correct predictions and off-diagonal elements indicate misclassifications between true (Y-axis) and predicted (X-axis) labels. The results reveal a significant performance divergence among subjects. Subjects 3, 7, and 9 demonstrated robust performance, characterized by high accuracy for the left/right-hand classes and minimal inter-class confusion. Conversely, Subjects 2, 5, and 4 exhibited suboptimal results, with particularly high error rates for the ‘Feet’ class. This is exemplified by Subject 2, who showed the most pronounced performance degradation, with seven ‘Feet’ trials misclassified as ‘Left Hand’. We attribute this inter-subject performance variance to three primary factors: signal quality, individual neurophysiological differences, and the representativeness of the training data. Superior performance in certain subjects likely correlates with high SNR and more distinct features, whereas poor results may stem from noise-corrupted signals that impede effective feature extraction, a challenge especially prominent for the more complex ‘Feet’ and ‘Tongue’ MI tasks. Furthermore, inherent variability in individual brainwave patterns and muscle artifacts can lead to subject-specific model performance. The success with Subject 3, for instance, may be due to their highly discernible and stable EEG patterns. Finally, the comprehensiveness of the training data is critical. If the training set inadequately captures the full spectrum of a subject’s unique EEG signatures, the model’s generalization capabilities will inevitably be compromised.

4.3. Computational Cost

Apart from classification accuracy, computational efficiency is a critical criterion for evaluating the practicality of deep learning models in real-world BCI systems. To rigorously assess the proposed model’s deployment feasibility, we conducted a quantitative analysis of its complexity using three key metrics: the number of trainable parameters (Params), floating-point operations (FLOPs), and average inference latency per trial. Table 5 presents a comparative summary of these metrics against state-of-the-art baseline methods. As shown in Table 5, the proposed model comprises 48.47 k parameters and requires approximately 62.07 M FLOPs per forward inference. While this indicates a moderate increase in computational load compared to ultra-lightweight architectures like EEGNet (3.44 k Params, 24.44 M FLOPs) or EEG-TCNet, our model maintains a significantly lower footprint than deeper or Transformer-based networks. Specifically, its computational cost is substantially reduced compared to DeepConvNet (283.25 M FLOPs) and EEG-Conformer (789.8 k Params), balancing structural complexity with resource efficiency. In terms of execution speed, inference latency is a decisive factor for online decoding. The proposed model achieves an average latency of 2.743 ms, which is faster than complex models such as SMT (3.065 ms) and EEG-Conformer (4.67 ms). Although slightly higher than that of shallow networks, this latency remains negligible within the context of BCI feedback loops. These results demonstrate that the proposed model effectively trades off a marginal increase in computational cost for enhanced representation capability, ensuring it remains lightweight enough for real-time applications.

4.4. t-SNE Visualization of the Extracted Features

t-Distributed Stochastic Neighbor Embedding (t-SNE) is a dimensionality reduction technique designed to visualize high-dimensional data in a lower-dimensional space while preserving local structural relationships. In this study, t-SNE was employed to analyze the feature distributions learned by the model, serving as a qualitative assessment of its classification efficacy. Figure 5 illustrates the feature embeddings for Subject 1 of the BCI IV 2a dataset across various stages: the Lite-MSTCN, Lite-Transformer, and Lite-LSTM paths, followed by the dynamic gated fusion module and the prototype-enhanced classifier.
In t-SNE visualization, a distinct separation between clusters of different classes suggests significant divergence in the high-dimensional space, indicating that the model has effectively captured discriminative features. Conversely, dense clustering within the same class reflects high intra-class consistency, implying strong feature similarity among samples of the same category. As shown in the results, features processed through the dynamic gated fusion stage and subsequent prototype-enhanced classifier exhibit superior inter-class separability and intra-class compactness. This demonstrates the model’s capability to optimally integrate heterogeneous temporal features extracted from multiple pathways, ultimately ensuring robust and accurate classification performance.

4.5. Ablation Study

An ablation study was conducted to systematically evaluate the contributions of the TCN, Transformer, and LSTM modules. Quantitative results (mean accuracy and kappa) are summarized in Table 6, with per-subject accuracy detailed in Figure 6. The baseline model, stripped of all three temporal feature extraction paths, established a performance floor at 73.27% accuracy and 0.66 kappa. Individually enabling each path validated their distinct and complementary roles: the TCN path yielded the largest gain (+4.74% accuracy, +0.05 kappa) by capturing local temporal patterns; the Transformer path contributed by modeling global dependencies (+2.73% accuracy, +0.02 kappa); and the LSTM path offered benefits by tracking state evolution (+2.19% accuracy, +0.01 kappa). The synergy between these paradigms was evident in dual-path configurations. Fusing TCN with either Transformer or LSTM via dynamic gating consistently outperformed single-path models, boosting accuracy by at least 1.89%. Figure 6 corroborates this, showing that the dual-path combinations surpass single-path models across most subjects. This synergy suggests that the integration of diverse temporal modeling paradigms effectively overcomes the inherent blind spots of any single approach. Notably, the TCN module was crucial for stabilizing performance on non-stationary subjects, where standalone Transformer or LSTM models faltered. This stabilizing effect is also reflected in the reduced cross-subject performance volatility observed in the TCN+Transformer combination.
The full tripartite architecture, leveraging adaptive fusion of all three paths, culminated in the highest performance, reaching a mean accuracy of 82.45% and a kappa of 0.77. This configuration not only surpassed all sub-models in aggregate metrics but also delivered a more balanced and robust performance profile across all nine subjects, as seen in Figure 6. This demonstrates the architecture’s superior adaptability in feature fusion, which mitigates dependency on any signal modeling paradigm. Collectively, the ablation study provides compelling evidence for the architectural rationale of our model, validating the potent synergy achieved by the fusion of TCN, Transformer, and LSTM for MI classification.

5. Conclusions

In this paper, we present TPHFC-Net, an end-to-end neural network built upon a triple-path collaborative temporal architecture for the four-class MI classification task. The model concurrently leverages TCN, Transformer, and LSTM to capture short-term local, long-range global, and state evolution features from EEG-MI signals, respectively. By integrating these heterogeneous yet complementary features through an adaptive fusion module, TPHFC-Net creates a highly discriminative representation. This advanced representation enables superior classification performance by effectively addressing the limitations of incomplete temporal modeling and suboptimal performance in prior methods. Extensive experiments on the BCI Competition IV-2a and IV-2b datasets validated our approach, demonstrating that TPHFC-Net significantly outperforms existing mainstream models.
The central finding of this study is that the synergistic integration of diverse temporal modeling paradigms, rather than their simple concatenation, can unlock a new performance ceiling for EEG-MI classification. However, despite its strong performance, TPHFC-Net has two primary limitations. First, its parallel architecture introduces a significant computational overhead. Second, its feature modeling is predominantly confined to the temporal domain. These limitations point to clear directions for future research. Future work should focus on multi-domain fusion, integrating spatial and frequency-domain information to complement the temporal features. Furthermore, optimizing the model through techniques like network pruning or knowledge distillation could enhance its computational efficiency, making it more viable for real-world MI-BCI applications.

Author Contributions

Conceptualization, Y.J.; data curation, D.W.; formal analysis, Y.J. and C.L.; methodology, C.D. and Y.J.; software, Y.J.; supervision, C.L.; validation, D.W.; writing—original draft, Y.J. and C.D.; writing—review and editing, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the Jiangsu Province Industry-University-Research Collaboration Project (Grant No. BY20230186).

Institutional Review Board Statement

The study that collected and published the original dataset (Brunner et al., 2008 [43]) stated that the data collection protocol was approved by the local ethics committee of Graz University of Technology. All participants provided written informed consent before the experiments, as detailed in the original publication. Corresponding information can be verified in the official dataset description paper available at: http://www.bbci.de/competition/iv/desc_2a.pdf, accessed on 17 November 2025.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in [BCIIVdataset2a] at [https://bnci-horizon-2020.eu/database/data-sets], accessed on 17 November 2025, reference number [001-2014]. These data were derived from the following resources available in the public domain: [https://www.bbci.de/competition/iv/#dataset2a], accessed on 17 November 2025.

Acknowledgments

We are grateful to the anonymous reviewers for their careful, unbiased, and constructive suggestions on the original manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shenoy Handiru, V.; Vinod, A.; Guan, C. EEG Source Imaging of Movement Decoding: The State of the Art and Future Directions. IEEE Syst. Man Cybern. Mag. 2018, 4, 14–23. [Google Scholar] [CrossRef]
  2. Liang, W.; Jin, J.; Xu, R.; Wang, X.; Cichocki, A. Variance characteristic preserving common spatial pattern for motor imagery BCI. Front. Hum. Neurosci. 2023, 17, 1243750. [Google Scholar] [CrossRef]
  3. Tung, S.W.; Guan, C.; Ang, K.K.; Phua, K.S.; Wang, C.; Zhao, L.; Teo, W.P.; Chew, E. Motor imagery BCI for upper limb stroke rehabilitation: An evaluation of the EEG recordings using coherence analysis. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; Volume 2013, pp. 261–264. [Google Scholar] [CrossRef]
  4. Khademi, Z.; Ebrahimi, F.; Kordy, H.M. A review of critical challenges in MI-BCI: From conventional to deep learning methods. J. Neurosci. Methods 2023, 383, 109736. [Google Scholar] [CrossRef] [PubMed]
  5. Orban, M.; Elsamanty, M.; Guo, K.; Zhang, S.; Yang, H. A Review of Brain Activity and EEG-Based Brain–Computer Interfaces for Rehabilitation Application. Bioengineering 2022, 9, 768. [Google Scholar] [CrossRef]
  6. Saha, S.; Mamun, K.A.; Ahmed, K.; Mostafa, R.; Naik, G.R.; Darvishi, S.; Khandoker, A.H.; Baumert, M. Progress in Brain Computer Interface: Challenges and Opportunities. Front. Syst. Neurosci. 2021, 15, 578875. [Google Scholar] [CrossRef]
  7. Samek, W.; Kawanabe, M.; Müller, K.R. Divergence-Based Framework for Common Spatial Patterns Algorithms. IEEE Rev. Biomed. Eng. 2014, 7, 50–72. [Google Scholar] [CrossRef] [PubMed]
  8. Lotte, F.; Congedo, M.; Lécuyer, A.; Lamarche, F.; Arnaldi, B. A review of classification algorithms for EEG-based brain–computer interfaces. J. Neural Eng. 2007, 4, R1. [Google Scholar] [CrossRef]
  9. dos Santos, E.M.; San-Martin, R.; Fraga, F.J. Comparison of subject-independent and subject-specific EEG-based BCI using LDA and SVM classifiers. Med. Biol. Eng. Comput. 2023, 61, 835–845. [Google Scholar] [CrossRef]
  10. Sgro, J. Neural network classification of clinical neurophysiological data for acute care monitoring. In A Decade of Neural Networks: Practical Applications and Prospects; Alacron, Inc.: Nashua, NH, USA, 1994; pp. 95–106. [Google Scholar]
  11. Tibor Schirrmeister, R.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Hutter, F.; Burgard, W.; Ball, T. Deep learning with convolutional neural networks for EEG decoding and visualization. arXiv 2017, arXiv:1703.05051. [Google Scholar] [CrossRef]
  12. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef]
  13. Ingolfsson, T.M.; Hersche, M.; Wang, X.; Kobayashi, N.; Cavigelli, L.; Benini, L. EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain–Machine Interfaces. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 2958–2965. [Google Scholar] [CrossRef]
  14. Liao, W.; Miao, Z.; Liang, S.; Zhang, L.; Li, C. A composite improved attention convolutional network for motor imagery EEG classification. Front. Neurosci. 2025, 19, 1543508. [Google Scholar] [CrossRef]
  15. Yu, Z.; Cao, D.; Zhou, P. Motor Imagery EEG Decoding Based on Multi-Branch Separable Temporal Convolutional Network. In Proceedings of the 2024 China Automation Congress (CAC), Qingdao, China, 1–3 November 2024; pp. 6058–6063. [Google Scholar] [CrossRef]
  16. Yang, Y.; Li, M.; Wang, L. An adaptive session-incremental broad learning system for continuous motor imagery EEG classification. Med. Biol. Eng. Comput. 2025, 63, 1059–1079. [Google Scholar] [CrossRef]
  17. Song, Y.; Zheng, Q.; Liu, B.; Gao, X. EEG Conformer: Convolutional Transformer for EEG Decoding and Visualization. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 710–719. [Google Scholar] [CrossRef]
  18. Qin, Y.; Yang, B.; Ke, S.; Liu, P.; Rong, F.; Xia, X. M-FANet: Multi-Feature Attention Convolutional Neural Network for Motor Imagery Decoding. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 401–411. [Google Scholar] [CrossRef] [PubMed]
  19. Hang, W.; Wang, J.; Liang, S.; Lei, B.; Wang, Q.; Li, G.; Chen, B.; Qin, J. Multiscale Convolutional Transformer with Diverse-aware Feature Learning for Motor Imagery EEG Decoding. IEEE Trans. Cogn. Dev. Syst. 2025, 17, 1389–1400. [Google Scholar] [CrossRef]
  20. Ghinoiu, B.; Vlădăreanu, V.; Travediu, A.M.; Vlădăreanu, L.; Pop, A.; Feng, Y.; Zamfirescu, A. EEG-Based Mobile Robot Control Using Deep Learning and ROS Integration. Technologies 2024, 12, 261. [Google Scholar] [CrossRef]
  21. Gui, Y.; Tian, Z.; Liu, X.; Hu, B.; Wang, Q. FBLSTM: A Filter-Bank LSTM-based deep learning method for MI-EEG classification. In Proceedings of the International Conference on Signal Processing and Communication Technology (SPCT 2022), Harbin, China, 23–25 December 2022; Proceedings of SPIE, the International Society for Optical Engineering; SPIE: Bellingham, WA, USA, 2023; Volume 12615, pp. 470–475. [Google Scholar] [CrossRef]
  22. Chen, H.; Tian, A.; Zhang, Y.; Liu, Y. Early Time Series Classification Using TCN-Transformer. In Proceedings of the 2022 IEEE 4th International Conference on Civil Aviation Safety and Information Technology (ICCASIT), Dali, China, 12–14 October 2022; pp. 1079–1082. [Google Scholar] [CrossRef]
  23. Xiong, F.; Fan, M.; Yang, X.; Li, Y.; Yang, C.; Zheng, J.; Wang, C.; Zhou, J. Research on Emotion Recognition Model Based on ConvTCN-LSTM-DCAN Model with Sparse EEG Channels. Res. Sq. 2024. [Google Scholar] [CrossRef]
  24. Jiang, X.; Bian, G.B.; Tian, Z. Removal of Artifacts from EEG Signals: A Review. Sensors 2019, 19, 987. [Google Scholar] [CrossRef]
  25. Abibullaev, B.; Keutayeva, A.; Zollanvari, A. Deep Learning in EEG-Based BCIs: A Comprehensive Review of Transformer Models, Advantages, Challenges, and Applications. IEEE Access 2023, 11, 127271–127301. [Google Scholar] [CrossRef]
  26. McFarland, D.J.; Miner, L.A.; Vaughan, T.M.; Wolpaw, J.R. Mu and Beta Rhythm Topographies During Motor Imagery and Actual Movements. Brain Topogr. 2000, 12, 177–186. [Google Scholar] [CrossRef]
  27. Vafaei, E.; Hosseini, M. Transformers in EEG Analysis: A Review of Architectures and Applications in Motor Imagery, Seizure, and Emotion Classification. Sensors 2025, 25, 1293. [Google Scholar] [CrossRef] [PubMed]
  28. Liu, Y.; Yu, S.; Li, J.; Ma, J.; Wang, F.; Sun, S.; Yao, D.; Xu, P.; Zhang, T. Brain state and dynamic transition patterns of motor imagery revealed by the Bayes hidden Markov model. Cognitive Neurodyn. 2024, 18, 2455–2470. [Google Scholar] [CrossRef]
  29. Narayan, Y. Motor-Imagery EEG Signals Classification using SVM, MLP and LDA Classifiers. Turk. J. Comput. Math. Educ. (TURCOMAT) 2021, 12, 3339–3344. [Google Scholar] [CrossRef]
  30. Aggarwal, S.; Chugh, N. Signal processing techniques for motor imagery brain computer interface: A review. Array 2019, 1–2, 100003. [Google Scholar] [CrossRef]
  31. Ang, K.K.; Chin, Z.Y.; Zhang, H.; Guan, C. Filter Bank Common Spatial Pattern (FBCSP) in Brain-Computer Interface. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 2390–2397. [Google Scholar] [CrossRef]
  32. Avelar, M.C.; Almeida, P.; Faria, B.M.; Reis, L.P. Applications of Brain Wave Classification for Controlling an Intelligent Wheelchair. Technologies 2024, 12, 80. [Google Scholar] [CrossRef]
  33. Riyad, M.; Khalil, M.; Adib, A. MI-EEGNET: A novel convolutional neural network for motor imagery classification. J. Neurosci. Methods 2021, 353, 109037. [Google Scholar] [CrossRef] [PubMed]
  34. Salami, A.; Andreu-Perez, J.; Gillmeister, H. EEG-ITNet: An Explainable Inception Temporal Convolutional Network for Motor Imagery Classification. IEEE Access 2022, 10, 36672–36685. [Google Scholar] [CrossRef]
  35. Qin, Y.; Li, B.; Wang, W.; Shi, X.; Wang, H.; Wang, X. ETCNet: An EEG-based motor imagery classification model combining efficient channel attention and temporal convolutional network. Brain Res. 2024, 1823, 148673. [Google Scholar] [CrossRef] [PubMed]
  36. Zhu, L.; Wang, Y.; Huang, A.; Tan, X.; Zhang, J. An improved multi-scale convolution and Transformer network for EEG-based motor imagery decoding. Int. J. Mach. Learn. Cybern. 2025, 16, 4997–5012. [Google Scholar] [CrossRef]
  37. Saputra, M.; Setiawan, N.A.; Ardiyanto, I. Deep Learning Methods for EEG Signals Classification of Motor Imagery in BCI. Int. J. Inf. Technol. Electr. Eng. (IJITEE) 2019, 3, 80. [Google Scholar] [CrossRef]
  38. Kim, J.; Kim, H.; Kim, H.; Lee, D.; Yoon, S. A comprehensive survey of deep learning for time series forecasting: Architectural diversity and open challenges. Artif. Intell. Rev. 2025, 58, 216. [Google Scholar] [CrossRef]
  39. Liu, K.; Xing, X.; Yang, T.; Yu, Z.; Xiao, B.; Wang, G.; Wu, W. DMSACNN: Deep Multiscale Attentional Convolutional Neural Network for EEG-Based Motor Decoding. IEEE J. Biomed. Health Inform. 2025, 29, 4884–4896. [Google Scholar] [CrossRef] [PubMed]
  40. Chunduri, V.; Aoudni, Y.; Khan, S.; Aziz, A.; Rizwan, A.; Deb, N.; Keshta, I.; Soni, M. Multi-scale spatiotemporal attention network for neuron based motor imagery EEG classification. J. Neurosci. Methods 2024, 406, 110128. [Google Scholar] [CrossRef]
  41. Xiong, S.; Wang, L.; Xia, G.; Deng, J. MBCNN-EATCFNet: A multi-branch neural network with efficient attention mechanism for decoding EEG-based motor imagery. Robot. Auton. Syst. 2025, 185, 104899. [Google Scholar] [CrossRef]
  42. Chen, X.; Teng, X.; Chen, H.; Pan, Y.; Geyer, P. Toward reliable signals decoding for electroencephalogram: A benchmark study to EEGNeX. Biomed. Signal Process. Control 2024, 87, 105475. [Google Scholar] [CrossRef]
  43. Brunner, C.; Leeb, R.; Müller-Putz, G. BCI Competition 2008–Graz Data Set A. IEEE Dataport 2024. [Google Scholar] [CrossRef]
Figure 1. The Overall Architecture of TPHFC-Net. The architecture initiates by progressively extracting robust features from EEG signals, a process that incorporates a denoising diffusion model. The extracted features are then channeled into three parallel streams, where TCN, Transformer, and LSTM modules concurrently model the inherent heterogeneous temporal dynamics. To effectively integrate these complementary representations, a dynamic gating module adaptively fuses the features from all three pathways, before feeding the resulting unified representation into a prototype-attention-based classifier for the final classification task.
Figure 2. The flowchart of our methodology.
Figure 3. Boxplot of accuracy distribution for different models on BCI Competition IV-2a.
Figure 4. The confusion matrices for all 9 subjects on BCI Competition IV-2a.
Figure 5. The distribution of feature vectors for S01 from BCI Competition IV-2a. All feature vectors are mapped to the 2D space using the t-SNE method. (a) Raw Signal. (b) TCN Features. (c) Transformer Features. (d) LSTM Features. (e) Fused Features. (f) Prototype-Attention Features.
Figure 6. The accuracy for each subject of the ablation experiment on BCI Competition IV-2a.
Table 1. Model Training Parameter Configuration.

Configuration Item | Parameter
Batch size | 32
Learning rate | 0.001
Epochs | 2000
Optimizer | SGD
Table 2. Subject-dependent classification accuracy (%) and Kappa scores of different models on the BCI Competition IV-2a dataset.

Method | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 | S9 | AVG | Kappa
EEGNet [12] | 84.34 | 54.06 | 87.54 | 63.59 | 67.39 | 54.88 | 88.8 | 76.75 | 74.24 | 72.40 | 0.63
Shallow ConvNet [11] | 79.51 | 56.25 | 88.89 | 80.9 | 57.29 | 53.82 | 91.67 | 81.25 | 79.17 | 74.31 | 0.66
EEG-TCNet [13] | 85.77 | 65.02 | 94.51 | 64.91 | 75.36 | 61.4 | 87.36 | 83.76 | 78.03 | 77.35 | 0.70
EEG-ITNet [34] | 84.38 | 62.85 | 89.93 | 69.1 | 74.31 | 57.64 | 88.54 | 83.68 | 80.21 | 76.74 | –
DMSACNN [39] | 86.81 | 61.11 | 92.71 | 67.01 | 72.57 | 70.83 | 87.5 | 85.07 | 80.21 | 78.20 | 0.71
EEG Conformer [17] | 88.19 | 61.46 | 93.40 | 78.13 | 52.08 | 65.28 | 92.36 | 88.19 | 88.89 | 78.66 | 0.72
MSSAN [40] | 83.19 | 69.97 | 93.44 | 70.97 | 79.31 | 67.28 | 81.22 | 84.66 | 83.33 | 79.26 | –
M-FANet [18] | 86.81 | 75.00 | 91.67 | 73.61 | 76.39 | 61.46 | 85.76 | 75.69 | 87.15 | 79.39 | 0.73
ASiBLS [16] | 85.17 | 75.83 | 86.71 | 73.71 | 79.20 | 68.78 | 82.91 | 83.2 | 83.46 | 79.89 | 0.72
ETCNet [35] | 90.62 | 64.93 | 93.75 | 78.47 | 79.51 | 66.32 | 87.85 | 81.94 | 82.99 | 80.71 | 0.74
MBCNN-EATCFNet [41] | 84.72 | 67.71 | 94.58 | 74.17 | 81.74 | 69.31 | 90.35 | 83.68 | 85.83 | 81.34 | –
SMT [15] | 83.33 | 68.41 | 92.93 | 83.33 | 76.65 | 74.65 | 94.09 | 82.56 | 83.68 | 82.18 | 0.76
Ours | 87.50 | 62.50 | 96.53 | 78.47 | 78.82 | 69.44 | 90.62 | 87.85 | 90.28 | 82.45 | 0.77
Table 3. Subject-dependent classification accuracy (%) and Kappa scores of different models on the BCI Competition IV-2b dataset.

Method | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 | S9 | AVG | Kappa
EEGNet [12] | 73.56 | 69.21 | 85.81 | 96.94 | 91.44 | 77.94 | 91.13 | 93.06 | 88.06 | 85.24 | 0.71
Shallow ConvNet [11] | 71.25 | 63.93 | 77.81 | 96.56 | 94.06 | 87.81 | 87.19 | 91.56 | 85.63 | 83.98 | 0.68
Deep ConvNet [11] | 72.69 | 67.79 | 81.75 | 94.25 | 90.81 | 85.44 | 90.50 | 91.31 | 86.12 | 84.50 | 0.69
EEG-TCNet [13] | 75.08 | 70.43 | 84.38 | 96.38 | 95.25 | 78.44 | 88.31 | 92.69 | 84.19 | 85.01 | 0.70
SMT [15] | 77.93 | 72.35 | 86.88 | 97.56 | 94.38 | 85.63 | 92.82 | 95.07 | 86.52 | 87.67 | 0.74
Ours | 83.33 | 72.50 | 77.92 | 99.17 | 98.33 | 90.83 | 93.33 | 96.25 | 93.75 | 89.49 | 0.78
Table 4. Comparison of classification accuracy (%) and Kappa scores for subject-independent MI tasks across various models on BCI Competition IV-2a and IV-2b datasets.

Method | IV-2a Accuracy (%) | IV-2a Kappa | IV-2b Accuracy (%) | IV-2b Kappa
EEGNet [12] | 56.85 | 0.42 | 76.11 | 0.52
Shallow ConvNet [11] | 56.75 | 0.42 | 74.92 | 0.50
EEG Conformer [17] | 57.43 | 0.43 | 73.61 | 0.47
EEGNeX [42] | 63.02 | 0.51 | 74.47 | 0.48
EEG-TCNet [13] | 65.12 | 0.52 | 75.14 | 0.50
Ours | 67.36 | 0.56 | 83.74 | 0.67
Table 5. Parameter number, FLOPs and Mean latency comparison.

Method | FLOPs (M) | Params (k) | Mean Latency (ms)
EEGNet [12] | 24.44 | 3.44 | 0.71
Shallow ConvNet [11] | 113.92 | 46.12 | 0.45
DeepConvNet [11] | 283.25 | 67.05 | 0.79
EEG-TCNet [13] | 13.74 | 4.04 | 1.56
EEG-Conformer [17] | 63.86 | 789.80 | 4.67
SMT [15] | 56.20 | 297.49 | 3.065
Ours | 62.07 | 48.47 | 2.743
Table 6. The average accuracy and kappa of the ablation experiment on BCI Competition IV-2a. The symbol ✓ indicates that the corresponding module is included, while a blank entry indicates that the module is not included.

TCN | Transformer | LSTM | Accuracy (%) | Kappa
✓ | ✓ | ✓ | 82.45 | 0.77
 |  |  | 79.86 | 0.73
 |  |  | 79.90 | 0.73
 |  |  | 78.86 | 0.72
 |  |  | 78.01 | 0.71
 |  |  | 76.00 | 0.68
 |  |  | 75.46 | 0.67
 |  |  | 73.27 | 0.66
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Jin, Y.; Dou, C.; Wang, D.; Liu, C. TPHFC-Net—A Triple-Path Heterogeneous Feature Collaboration Network for Enhancing Motor Imagery Classification. Technologies 2026, 14, 96. https://doi.org/10.3390/technologies14020096


