Article

LPGGNet: Learning from Local–Partition–Global Graph Representations for Motor Imagery EEG Recognition

1
School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
2
School of Engineering and Technology, Zunyi Normal University, Zunyi 563006, China
3
Chongqing Key Laboratory of Complex Systems and Bionic Control, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
4
School of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China
*
Author to whom correspondence should be addressed.
Brain Sci. 2025, 15(12), 1257; https://doi.org/10.3390/brainsci15121257
Submission received: 20 October 2025 / Revised: 15 November 2025 / Accepted: 21 November 2025 / Published: 23 November 2025

Abstract

Objectives: Existing motor imagery electroencephalography (MI-EEG) decoding approaches are constrained by their reliance on a single representation of brain connectivity graphs, insufficient utilization of multi-scale information, and lack of adaptability. Methods: To address these constraints, we propose a novel Local–Partition–Global Graph learning Network (LPGGNet). The Local Learning module first constructs functional adjacency matrices using partial directed coherence (PDC), effectively capturing causal dynamic interactions among electrodes. It then employs two layers of temporal convolutions to extract high-level temporal features, followed by Graph Convolutional Networks (GCNs) to capture local topological features. In the Partition Learning module, EEG electrodes are divided into four partitions through a task-driven strategy. For each partition, a novel Gaussian median distance is used to construct adjacency matrices, and Gaussian graph filtering is applied to enhance feature consistency within each partition. After merging the local and partitioned features, the model proceeds to the Global Learning module. In this module, a global adjacency matrix is dynamically computed based on cosine similarity, and residual graph convolutions are then applied to extract highly task-relevant global representations. Finally, two fully connected layers perform the classification. Results: Experiments were conducted on both the BCI Competition IV-2a dataset and a laboratory-recorded dataset, achieving classification accuracies of 82.9% and 87.5%, respectively, surpassing several state-of-the-art models. The contribution of each module was further validated through ablation studies. Conclusions: This study demonstrates the superiority of integrating multi-view brain connectivity with dynamically constructed graph structures for MI-EEG decoding. Moreover, the proposed model offers a novel and efficient solution for EEG signal decoding.

1. Introduction

Brain–computer interfaces (BCIs) refer to systems that directly capture and interpret neural activity to facilitate information exchange between humans and machines [1]. They hold significant application value and promise in the medical field, aiding patients with severe neurological disorders in functional recovery, improved quality of life, and neurological rehabilitation. Representative applications include restoring communication [2], stroke rehabilitation [3], and mental health interventions [4,5].
Electroencephalography (EEG) is broadly adopted in BCI systems [6] because of its non-invasiveness and high temporal resolution. Among EEG paradigms, motor imagery (MI) can trigger the corresponding neural regions without physical movement. As a result, it is especially suitable for rehabilitation training and prosthetic control in patients with motor impairments. MI-EEG signals mainly show rhythmic activity in the sensorimotor cortex, especially event-related desynchronization/synchronization (ERD/ERS) in the μ and β bands [7]. For MI-BCI systems, accurate classification and robust recognition depend on efficient extraction and representation of these features. However, precise decoding is challenged by the inherent low signal-to-noise ratio, non-stationarity, and inter-individual variability of MI signals [8]. Improvements can be pursued in both signal processing and decoding frameworks.
In signal processing, we can enhance signal quality by improving brain connectivity and partition approaches, both of which have been extensively discussed in prior research. For instance, phase-based connectivity measures (e.g., PLV/PLI) overcome artifacts and inter-individual amplitude variations, effectively distinguishing different imagery tasks [9]. Similarly, PDC connectivity reflects regional directionality, outperforming single-channel features and improving classification tasks [10]. Furthermore, fusing connectivity measures, such as spectral coherence, imaginary coherence, and phase difference, enables a comprehensive capture of overall brain network structure [11].
Simultaneously, electrode partitioning based on functional-anatomical correspondence reduces spatial variability and improves the physiological interpretability of EEG features [12]. Regional aggregation not only enhances signal-to-noise ratio but also yields more stable functional connectivity estimates [13]. Partitioning achieved through spectral clustering methods (e.g., K-layer Laplacian averaging or eigenvector averaging) exhibits spatially coherent and distinct lateralization patterns, aligning more closely with the anatomical characteristics of motor imagery tasks [11].
These approaches not only enhance the reliability of EEG feature representation but also lay a solid foundation for constructing brain networks with higher neurophysiological relevance.
In EEG decoding, existing MI-EEG decoding frameworks often rely on combining manually extracted features with shallow classifiers. For instance, discriminative spatial features are commonly extracted using the CSP algorithm and subsequently classified with methods such as LDA [14]. Lee et al. [15] first obtain features through continuous and discrete wavelet transforms (CWT/DWT), followed by dimensionality reduction via PCA. Subsequently, a Gaussian mixture model–universal background model (GMM-UBM) and the EM algorithm are used to optimize the training set, and the final classification is achieved through an SVM; however, this approach requires pre-setting critical time–frequency parameters. Manual feature design heavily relies on expert prior knowledge and struggles to fully capture the inherent complex nonlinear spatiotemporal structures of EEG signals.
With the development of deep learning, researchers have begun exploring data-driven approaches to automatically learn discriminative features from EEG signals [16,17]. Convolutional Neural Networks (CNNs), leveraging their strengths in spatio-temporal representation, have been widely used in MI-EEG decoding tasks. For example, temporal information has been extensively studied using 1-DCNN [18,19] and multi-scale 1-D CNN [20]. For spatial information, earlier methods typically adopted one of two strategies: applying 1-D CNNs across the sensor channels to capture global spatial patterns [18,19,20], or employing compact 2-D CNN on EEG topography to extract localized spatial features [21,22]. However, these approaches may fail to effectively learn the inherent spatial topological relationships within EEG signals.
Over the past few years, Graph Convolutional Networks (GCNs) have introduced graph-based modeling by embedding spatial topological or functional connectivity among electrodes [23,24]. It is worth noting that establishing appropriate adjacency relationships enables more effective propagation and aggregation of information across the graph structure. This mechanism helps capture interactions between different brain regions and reveal characteristics of the overall network. Compared to CNNs, GCNs offer unique advantages in modeling non-Euclidean spatial data and demonstrate strong potential in MI-EEG decoding tasks.
Despite these advantages, existing GCN methods still face several challenges. Firstly, most models rely on static adjacency matrices [25], which cannot adequately represent the time-varying characteristics of EEG signals. Secondly, single spatial topology approaches (such as geometric distance or functional connectivity) often fail to comprehensively capture the multidimensional interaction patterns between brain regions, limiting feature discriminability [9,26]. Furthermore, many GCNs employ global adjacency modeling [27,28], overlooking task-driven local relationships and failing to capture complex interactions between localized regions. Simultaneously, due to volume conduction effects [29], EEG signals contain overlapping contributions from different brain regions, reducing spatial resolution.
To address these challenges, we propose a novel LPGGNet framework. The Local Learning module extracts local temporal and spatial features from EEG signals, while the Partition Learning module captures information within partitions and inter-partitioned features. The Global Learning module extracts global features from EEG signals.
This work offers the following main contributions:
(1)
We propose a novel LPGGNet framework with hierarchical architecture designed to capture local, partitioned, and global brain activities in MI-EEG.
(2)
A novel partition method for EEG electrodes is introduced to spatially isolate task-relevant brain activities and reduce inter-partition interference.
(3)
A novel Gaussian median distance (GMD)-based method is proposed to quantify inter-electrode relationships, which better aligns with the physiological characteristics of EEG signal propagation in the brain.
(4)
A BCI-based intelligent wheelchair system is developed to validate the usefulness of the proposed LPGGNet.
The remainder of the paper is arranged as follows: In Section 2, we review related work on feature extraction from EEG data; In Section 3, the proposed methodology is explained; In Section 4, extensive experiments are presented on two datasets; Section 5 provides discussions, and Section 6 concludes the paper.

2. Related Works

In BCI studies, extracting features from EEG signals is challenging due to their complex spatial arrangement and non-Euclidean nature. In this section, we provide a comprehensive review of brain connectivity and partitions, CNNs, and GCNs in EEG signal decoding. We suggest that, by simultaneously incorporating two complementary brain connectivity approaches, GCNs can better capture spatial representations in EEG recordings.

2.1. Brain Connectivity and Partitions of EEG Signals

In recent years, extensive research has demonstrated that incorporating brain connectivity and electrode partitioning strategies into EEG analysis can significantly enhance classification performance. Functional connectivity and effective connectivity features between brain regions capture cross-regional interaction information, which is often more discriminative than single-channel features. For example, Leeuwis et al. [30] analyzed multi-scale functional connectivity patterns in motor imagery BCI studies, revealing that connection dynamics at local, large-scale, and global levels are closely correlated with classification performance. Maghsoudi et al. [31] achieved significant accuracy improvements by combining effective connectivity features with hierarchical machine learning methods for hand movement imagery classification.
Furthermore, region-based analysis methods demonstrate strong performance within hemispheric partitioning frameworks. Lun et al. [32] proposed a motor imagery classification method based on left–right EEG differences, validating the advantages of partition strategies for feature discrimination. Zhang et al. [33] examined the effects of electrode density and distribution on motor imagery source localization and classification. The study showed that rational electrode partitioning enhances spatial representation and overall decoding performance.
Due to the scarcity of studies on task-driven partitioning in motor imagery, this paper proposes a four-partition scheme based on the four motor imagery tasks. Integrating this partitioning strategy with functional connectivity analysis improves EEG classification performance while enhancing the model’s neurophysiological interpretability.

2.2. Convolutional Neural Networks (CNNs)

Due to their end-to-end learning framework and efficient local representation capability, CNNs are extensively employed for modeling the spatiotemporal characteristics of EEG signals. Early studies often reorganized EEG signals into 2D matrices (channel × time) or constructed 3D tensors (e.g., frequency × space × time) via spatial interpolation; these representations were then processed using 2D or 3D convolution operations [34,35]. As an example, Schirrmeister et al. [34] designed an end-to-end CNN architecture that applies 2D convolution to effectively decode raw EEG signals. Zhao et al. [35] introduced a novel framework for MI-EEG decoding that leverages a 3D representation and specially designed 3D CNNs to enhance performance.
However, these methods suffer from a fundamental limitation: they rely on discrete convolution defined in Euclidean space, overlooking the intrinsic non-Euclidean spatial relationships and functional connectivity patterns among electrodes [36].
In recent years, the Transformer architecture has also been introduced into EEG signal decoding research, leveraging self-attention mechanisms to capture long-range temporal dependencies and global spatial relationships. Wan et al. [37] proposed EEGformer, utilizing Transformer modules to enhance multi-scale temporal dependency modeling; Liu et al. [38] developed MSVTNet, which fuses multi-scale visual Transformers with EEG features, achieving outstanding performance in motor imagery EEG classification.
These attention-based models offer a new direction for EEG decoding while serving as an important complement to traditional CNNs. However, the standard Transformer architecture lacks explicit modeling of electrode spatial topology and functional connectivity, making it difficult to directly capture structural associations between brain regions.

2.3. Graph Convolutional Networks (GCNs)

To reduce these limitations of CNNs and Transformers, GCNs have been introduced for EEG signal processing. GCNs can handle non-Euclidean data and explicitly model relationships between electrodes, making them more suitable for representing the brain’s functional connectivity networks [39]. A key aspect of GCNs lies in constructing adjacency relationships between electrodes. Early studies predominantly relied on prior knowledge (e.g., physical distance between electrodes) to build static adjacency matrices. For example, Du et al. [40] used Euclidean distance to construct the adjacency matrix, while Hou et al. [36] utilized the absolute Pearson correlation matrix to build a graph for MI task recognition.
Nevertheless, these methods still exhibit notable shortcomings. On one hand, predefined or manually designed adjacency matrices make it difficult to adapt to individual variability and task-dependent changes [41]. On the other hand, traditional functional connectivity metrics (such as Pearson correlation or phase-locking value) often fail to effectively capture nonlinear or high-frequency dynamic interactions [42].
To overcome these problems, our approach integrates spatial distribution features alongside functional coupling characteristics. Specifically, we introduce a novel adjacency relationship based on Gaussian Median Distance (GMD) to characterize the spatial connectivity between EEG electrodes. This approach better aligns with the physiological characteristics of EEG signal propagation in the brain. Furthermore, to capture directional interactions between signals, we incorporate Partial Directed Coherence (PDC) to reliably obtain causal relationships.

3. Materials and Methods

3.1. Datasets

Extensive experimental validation on the BCI Competition IV 2a dataset (Dataset A) and a laboratory-collected dataset (Dataset B) was conducted to examine the feasibility of the proposed method. The two datasets are described in Section 3.1.1 and Section 3.1.2.

3.1.1. Dataset A

This dataset originates from the 2008 Brain–Computer Interface Competition [43]. Comprising EEG data from nine subjects, the dataset involves four motor imagery tasks: imagining movements of the left hand, the right hand, both feet, and the tongue. Each subject participated in two sessions recorded on different days, with the first session designated for training and the second for testing, constituting a cross-session setup.
Each session contained six runs of 48 trials each, with 12 trials dedicated to each motor imagery task. Thus, each session totals 288 trials. Data acquisition includes 22 EEG channels, with electrode placement as shown in Figure 1.

3.1.2. Dataset B

This dataset was collected independently by our laboratory and comprises EEG recordings from five subjects. All subjects are healthy, right-handed males aged 22–30. The experiment was approved by the university ethics committee and conducted in a relatively quiet environment. The EEG data for each subject are divided into three segments. The experimental paradigm involves four motor imagery tasks: imagining movements of the left hand, right hand, both feet, and tongue. EEG data were recorded with the ActiCHamp system developed by Brain Products GmbH, using thirty-two Ag/AgCl electrodes. Fz served as the reference electrode, while the remaining electrodes recorded EEG data. Electrode placement is illustrated in Figure 2a.
Each trial lasts 11 s, consisting of four stages: 0–4 s of rest and relaxation, 4–6 s of task instruction, 6–10 s of motor imagery, and a task completion cue at the 11th second (a voice announcement signaling the end of the task). The subsequent trial begins immediately afterward. The temporal sequence of the experimental paradigm is shown in Figure 2b. We collected a total of four sets of motor imagery signals from each subject, with each set comprising 100 trials. Training was performed using the first three sets, and testing with the final set, constituting a Within-Session setup.

3.1.3. Data Processing

Both datasets were collected in an experimental setting. For each trial within each dataset, we acquired 4 s of motor imagery data at a sampling rate of 250 Hz. To address background noise such as electromyography (EMG) and electrooculography (EOG) in the EEG signals, the following measures were implemented: (1) filtering using a 6th-order Butterworth band-pass filter with a bandwidth of 0.5–40 Hz [44]; (2) removing irrelevant EOG signals; (3) applying Z-score normalization to reduce sample variability.
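A minimal sketch of these three steps in Python (using SciPy; the function name and default settings beyond those stated above are illustrative assumptions, not the authors' code):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def preprocess_trial(x, fs=250.0, band=(0.5, 40.0), order=6):
    """Band-pass filter one EEG trial (channels x samples), then z-score it.

    Illustrative sketch of the preprocessing described above; removal of
    EOG channels is assumed to have happened before this call.
    """
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, x, axis=-1)          # zero-phase 0.5-40 Hz band-pass
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sd + 1e-8)             # per-channel z-score normalization

rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 1000))       # 22 channels, 4 s at 250 Hz
clean = preprocess_trial(trial)
```

The second-order-sections form is used here only for numerical stability of the 6th-order filter; zero-phase filtering avoids introducing a phase lag into the MI window.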

3.2. LPGGNet

LPGGNet employs a three-layer strategy, with its overall framework illustrated in Figure 3. The specific details are as follows:
(1)
Local Learning Module: Adjacency matrices are constructed based on PDC to capture directional dependencies between EEG channels. Two layers of temporal convolutional neural networks (TC) are then applied to capture advanced temporal features from the channel signals. Finally, local topological features of the signals are captured using a graph convolutional network (GCN), which integrates temporal dynamics with their corresponding local graph structures.
(2)
Partition Learning Module: An adjacency matrix is first constructed for each partition based on the Gaussian median distance (GMD), upon which a graph filter is built to optimize partition-level signal representations. Subsequently, two layers of temporal CNNs are employed to capture high-level temporal dynamics within each partition. Finally, features from all partitions are integrated using the arithmetic mean method to form a new partition-based representation.
(3)
Global Learning Module: Node features obtained from the Local Learning and Partition Learning modules are first fused into a unified feature representation. A dynamic global adjacency matrix is then constructed based on cosine similarity to capture inter-node dependencies across all electrodes. To effectively exploit these relationships, two residual graph convolutional layers are employed, which not only extract inter-electrode global features but also alleviate overfitting through residual connections. Finally, the learned global representations are fed into two fully connected (FC) layers to perform classification.

3.3. Local Learning Module

To more effectively capture local EEG features, we first apply temporal convolutions to extract their characteristics. Then, PDC is employed to establish relationships between electrodes, enabling directional signal transmission. Finally, a single GCN layer is applied to propagate EEG features across nodes and extract local topological features.

3.3.1. PDC-Based Electrode Relationships

In GCN-based feature extraction, the adjacency matrix plays a crucial role. Equally important is the direction of information flow, which is essential for analyzing relationships between brain regions. In this study, a partial directed coherence (PDC) algorithm is employed to evaluate the interactions between electrodes. PDC is a widely used frequency domain metric for assessing effective connectivity, derived from the multivariate autoregressive (MVAR) model [45]. It quantifies the directed transmission of data between nodes in multi-channel signals (e.g., EEG) and reveals causal influences at specific frequencies. The core principle of PDC involves frequency domain normalization to eliminate interference from other nodes and highlight directional connections. This approach substantially reduces indirect influences among information streams, emphasizing the most critical relationships of electrodes.
The PDC from node j to node i at frequency f can be represented as:
$$PDC_{ij}(f) = \frac{|\bar{A}_{ij}(f)|}{\sqrt{\sum_{m=1}^{N} |\bar{A}_{mj}(f)|^{2}}}$$
Here, $|\bar{A}_{ij}(f)|$ represents the strength of the direct transmission path from node $j$ to node $i$, and $\sqrt{\sum_{m=1}^{N} |\bar{A}_{mj}(f)|^{2}}$ is the total outflow of node $j$ to all nodes (including node $i$). $PDC_{ij}(f) \in (0, 1)$.
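Given fitted MVAR coefficient matrices, the PDC of Equation (1) can be computed as follows (an illustrative NumPy sketch; fitting the MVAR model itself, e.g. by least squares, is not shown):

```python
import numpy as np

def pdc(A_coeffs, f, fs):
    """Partial directed coherence at frequency f from MVAR coefficients.

    A_coeffs: (p, N, N) array of MVAR matrices A_1..A_p for N channels.
    Returns P with P[i, j] = PDC from node j to node i.
    """
    p, N, _ = A_coeffs.shape
    Abar = np.eye(N, dtype=complex)
    for k in range(1, p + 1):            # Abar(f) = I - sum_k A_k e^{-i 2*pi f k / fs}
        Abar = Abar - A_coeffs[k - 1] * np.exp(-2j * np.pi * f * k / fs)
    denom = np.sqrt((np.abs(Abar) ** 2).sum(axis=0))   # outflow of each source column j
    return np.abs(Abar) / denom

rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((2, 4, 4))   # p = 2 lags, 4 channels (toy values)
P = pdc(A, f=10.0, fs=250.0)
```

By construction each column of squared PDC values sums to one, reflecting the column-wise normalization over node $j$'s total outflow.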

3.3.2. Temporal Graph Convolution Network (TGCN)

EEG signals are multi-channel time series that not only contain rich temporal dynamics but also reflect brain activity recorded across different electrodes, influenced by the spatial topology of the scalp. To effectively capture this inherent spatiotemporal coupling in EEG signals, this paper proposes a TGCN that combines temporal convolution (TC) and graph convolution (GCN) components, as illustrated in Figure 3a. Firstly, this module performs temporal feature extraction on raw EEG signals using two TC layers. The core computation of the TC operation can be expressed as:
$$Z_{local}^{T(l+1)} = F_{maxpool}\left( F_{relu}\left( F_{bn}\left( Z_{local}^{T(l)} * W_{local}^{T(l+1)} + b_{local}^{T(l+1)} \right) \right) \right)$$
Here, $Z_{local}^{T(l)}$ is the input to the $(l+1)$-th layer. When $l = 0$, $Z_{local}^{T(0)}$ represents the preprocessed EEG signal. The superscript $T$ represents Temporal, the subscript $local$ refers to the Local Learning Module, and $*$ denotes a 2D convolution (with spatial dimension equal to 1). $W_{local}^{T(l+1)}$ represents the convolutional weight, $b_{local}^{T(l+1)}$ denotes the bias term, $F_{bn}(\cdot)$ denotes the batch normalization function, $F_{maxpool}(\cdot)$ refers to the max pooling process, and $F_{relu}(\cdot)$ denotes the ReLU nonlinear activation. The specific design is as follows: the first TC layer uses a convolutional kernel of size 1 × 85, aiming to cover a longer time window to capture macroscopic temporal features and reduce computational complexity; the second TC layer uses a smaller convolutional kernel of size 1 × 30 to focus on extracting localized features.
Secondly, the output features $Z_{local}^{T(2)}$ are input into the GCN. This layer considers each EEG channel as a node within a graph, where the node features correspond to the temporal representations derived from the TC layers. For simplicity, we define $x = (x_{1}, x_{2}, \ldots, x_{N})^{T} = Z_{local}^{T(2)}$.
Spectral graph theory forms the basis of GCN. Given an undirected graph $G = (V, E)$ with adjacency matrix $A \in \mathbb{R}^{N \times N}$, degree matrix $D$ with $D_{ii} = \sum_{j} A_{ij}$, and identity matrix $I$, we can define the normalized graph Laplacian $L$:
$$L = I - D^{-\frac{1}{2}} A D^{-\frac{1}{2}} = U\, \mathrm{diag}(\lambda_{1}, \ldots, \lambda_{N})\, U^{T}$$
Here, $U = (u_{1}, u_{2}, \ldots, u_{N})$ denotes the matrix composed of the eigenvectors of $L$, $\lambda_{i}$ is the $i$-th eigenvalue of $L$, and $N$ is the number of nodes.
The matrix U encodes the structural information of the graph signal. For classification tasks on graphs, our focus lies in identifying an appropriate convolution kernel g to minimize the classification loss. Thus, the convolution operation, connecting the graph signal with the convolution kernel, is defined as:
$$(x * g_{\theta})_{G} = U g_{\theta}(\Lambda) U^{T} x$$
By approximating the convolution kernel using polynomials, we introduce spatial localization. Here, K is referred to as the receptive field, meaning that the embedding update of each node involves aggregating embeddings only from its neighbors within K-hops. To reduce the computational complexity of convolutions, Chebyshev polynomials are used to approximate the convolution kernel, defined as:
$$g_{\theta}(\Lambda) \approx \sum_{k=0}^{K} \theta_{k} T_{k}(\tilde{\Lambda}), \qquad T_{k}(\tilde{\Lambda}) = \begin{cases} I & k = 0 \\ \tilde{\Lambda} & k = 1 \\ 2 \tilde{\Lambda}\, T_{k-1}(\tilde{\Lambda}) - T_{k-2}(\tilde{\Lambda}) & k \geq 2 \end{cases}$$
Here, $\theta_{k}$ denotes the Chebyshev polynomial coefficients, and $T_{k}(\tilde{\Lambda})$ corresponds to the Chebyshev polynomial with $\tilde{\Lambda} = \frac{2\Lambda}{\lambda_{max}} - I$.
Therefore, Equation (6) represents the convolution operation on graph signals.
$$x * g_{\theta}|_{G} = U g_{\theta}(\Lambda) U^{T} x \approx U \sum_{k=0}^{K} \theta_{k} T_{k}(\tilde{\Lambda}) U^{T} x = \sum_{k=0}^{K} \theta_{k} T_{k}(\tilde{L}) x = \theta_{0} x - \theta_{1} D^{-\frac{1}{2}} A D^{-\frac{1}{2}} x = \theta \left( I + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \right) x = \theta\, \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} x$$
where $\tilde{L} = \frac{2L}{\lambda_{max}} - I$, $\lambda_{max} \approx 2$, $K = 1$, $\theta = \theta_{0} = -\theta_{1}$, $\tilde{A} = A + I$, and $\tilde{D}_{ii} = \sum_{j} \tilde{A}_{ij}$; the final equality applies the renormalization trick.
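The renormalized operator $\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$ on the right-hand side can be built directly (a small NumPy sketch; `normalized_adjacency` is an illustrative helper name, not from the paper's code):

```python
import numpy as np

def normalized_adjacency(A):
    """Compute D~^{-1/2} (A + I) D~^{-1/2}, the renormalized adjacency
    used by the graph-convolution update (A is a dense N x N matrix)."""
    A_tilde = A + np.eye(A.shape[0])                   # add self-loops: A~ = A + I
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))    # D~_ii = sum_j A~_ij
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])                           # a 3-node path graph
A_hat = normalized_adjacency(A)
```

For a symmetric $A$, the result is symmetric with spectrum contained in $[-1, 1]$, which keeps repeated propagation through stacked layers numerically stable.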
Thus, the update rule of graph convolution (GCN) is as follows.
$$Z^{G(l)} = F_{gelu}\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}\, Z^{G(l-1)} W^{G(l)} + b^{G(l)} \right)$$
Here, at the $l$-th layer, $Z^{G(l)}$ represents the extracted features, while $W^{G(l)}$ and $b^{G(l)}$ denote the corresponding trainable weight matrix and bias term, respectively. $F_{gelu}(\cdot)$ represents the GeLU activation function.
In this module, the relationships between electrodes computed via PDC form a directed weighted graph. A single GCN layer is applied, with input derived from the output $Z_{local}^{T(2)}$ of the two consecutive temporal convolutions. The adjacency matrix of the TGCN is constructed based on PDC and obtained by averaging over all trials for each subject. The output of the TGCN can be expressed as:
$$Z_{local}^{G(1)} = F_{gelu}\left( \check{A}_{pdc}\, Z_{local}^{T(2)}\, W^{G(1)} + b^{G(1)} \right)$$
Here, $\check{A}_{pdc} = \tilde{D}_{pdc}^{-\frac{1}{2}} \tilde{A}_{pdc} \tilde{D}_{pdc}^{-\frac{1}{2}}$.
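Put together, one TGCN propagation step has the following shape (illustrative tensor sizes; the identity matrix stands in for the pre-normalized $\check{A}_{pdc}$ so the sketch stays self-contained):

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GeLU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def tgcn_step(A_hat, Z, W, b):
    """One graph-convolution step of Eq. (8): gelu(A_hat @ Z @ W + b)."""
    return gelu(A_hat @ Z @ W + b)

rng = np.random.default_rng(2)
N, F_in, F_out = 22, 16, 8                   # 22 electrodes; feature sizes are made up
A_hat = np.eye(N)                            # stand-in for the normalized PDC adjacency
Z = rng.standard_normal((N, F_in))           # temporal features from the TC layers
W = 0.1 * rng.standard_normal((F_in, F_out))
out = tgcn_step(A_hat, Z, W, np.zeros(F_out))
```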

3.4. Partition Learning Module

This module first adopts a task-driven partitioning strategy to divide EEG electrodes into four partitions. Then, a Gaussian Median Distance-based graph filter is applied to the electrode signals within each partition. Finally, the signals from the four partitions are fused using an arithmetic mean algorithm.

3.4.1. Partition Strategy

Different brain regions are highly interconnected, and different motor imagery (MI) tasks correspond to distinct sensitive areas. Prior knowledge from neuroscience indicates that the four MI tasks have their own sensitive regions. Referencing the key EEG electrodes associated with the four MI tasks and the contralateral organization of motor control—C3 is typically related to the right-hand task and C4 to the left-hand task [46], CPz and Pz to the tongue task [47], and Fz to the feet task [48]. In addition, Cz is closely related to all four types of MI.
Based on this, we divided the 22 electrodes of Dataset A into four partitions, containing 7, 10, 7, and 10 electrodes, respectively, as shown in Figure 4b. Specifically: P1 = {FC3, FC1, C5, C3, C1, CP3, CP1}, P2 = {Fz, FC1, FCz, FC2, C1, Cz, C2, CP1, CPz, CP2}, P3 = {FC2, FC4, C6, C4, C2, CP4, CP2}, and P4 = {POz, P1, Pz, P2, C1, Cz, C2, CP1, CPz, CP2}. As shown in Figure 4a, we use a color coding scheme where the number of colors in a cell corresponds to how many partitions an electrode belongs to. Accordingly, the electrodes shared by three partitions are $P1 \cap P2 \cap P4 = \{C1, CP1\}$ and $P2 \cap P3 \cap P4 = \{C2, CP2\}$; those shared by exactly two partitions are FC1 (P1 and P2), FC2 (P2 and P3), and Cz, CPz (P2 and P4).
The foremost principle for defining partition boundaries is the non-overlap of core electrodes. Consider the right-hand MI partition (P1): it must include the C3 electrode. Its right boundary should not extend beyond column a4 (Cz), its upper boundary should not exceed row r2 (Fz), and its lower boundary should not go beyond row r6 (Pz). For the foot area (P2), the left boundary should not exceed column a2 (C3), the right boundary should not exceed column a6 (C4), and the lower boundary should not exceed row r6 (Pz). To ensure symmetry with the tongue area, the lower boundary is designed to include row r5 (CPz). Using the same partitioning method, we divided the 32 electrodes in Dataset B into four partitions. All partitions are shown in Figure 4.
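Written out explicitly, the Dataset A partitions and their overlaps are (plain Python sets; the variable names are ours, the electrode lists are from the text above):

```python
# The four task-driven partitions of Dataset A's 22 electrodes.
P1 = {"FC3", "FC1", "C5", "C3", "C1", "CP3", "CP1"}                       # right-hand area
P2 = {"Fz", "FC1", "FCz", "FC2", "C1", "Cz", "C2", "CP1", "CPz", "CP2"}   # feet area
P3 = {"FC2", "FC4", "C6", "C4", "C2", "CP4", "CP2"}                       # left-hand area
P4 = {"POz", "P1", "Pz", "P2", "C1", "Cz", "C2", "CP1", "CPz", "CP2"}     # tongue area

shared_by_three = (P1 & P2 & P4) | (P2 & P3 & P4)                  # electrodes in 3 partitions
shared_by_two = ((P1 & P2) | (P2 & P3) | (P2 & P4)) - shared_by_three  # in exactly 2
```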

3.4.2. Gaussian Median Distance (GMD) Method

The correlation of EEG electrodes i and j is given by:
$$E[\Phi_{i} \Phi_{j}] = E\left[ \int_{V} G(r_{i}, r_{s}) I_{m}(r_{s})\, dr_{s} \int_{V} G(r_{j}, r_{t}) I_{m}(r_{t})\, dr_{t} \right]$$
where the Green's function from the source point $r_{s}$ to electrode $r_{i}$ is represented by $G(r_{i}, r_{s})$, $I_{m}$ denotes the underlying source current distribution, and $\Phi_{i}$ represents the potential recorded by the $i$-th electrode.
On the scalp surface, the electric potential can be viewed as the projection of the three-dimensional (3D) Green’s function onto the curved scalp geometry. Since the scalp surface is essentially a two-dimensional (2D) manifold, we need to project the 3D Green’s function onto a 2D plane to achieve a 2D approximation. In this study, we approximate the 3D Green’s function as a Gaussian function along the normal direction:
$$G(r, r_{s}) \approx g_{0} \exp\left( -\frac{\| r - r_{s} \|^{2}}{2 l^{2}} \right)$$
Here, $l$ is the correlation length, and $g_{0}$ is a constant.
Normalized electrode correlations:
$$\rho_{ij} = \frac{E[\Phi_{i} \Phi_{j}]}{\sqrt{E[\Phi_{i}^{2}]\, E[\Phi_{j}^{2}]}} = \exp\left( -\frac{d_{ij}^{2}}{4 l^{2}} \right)$$
Let $\delta^{2} = 4 l^{2}$, where $\delta$ is the median distance. Therefore, the Gaussian median distance of electrodes can be expressed as:
$$G_{ij}(d_{ij}) = \rho_{ij}(d_{ij}) = \exp\left( -\frac{d_{ij}^{2}}{\delta^{2}} \right)$$
From Equation (12), it is evident that the correlation between electrodes decays with the square of their distance, consistent with the propagation pattern of EEG signals within the brain.
In this study, due to the substantial noise present in EEG signals, $\delta$ is defined as the median of the Euclidean distances between all electrode pairs in each trial to enhance robustness against outliers. $d_{ij}$ denotes the Euclidean distance between electrodes $i$ and $j$, expressed as $d_{ij} = \sqrt{(x_{i} - x_{j})^{2} + (y_{i} - y_{j})^{2} + (z_{i} - z_{j})^{2}}$.
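Equation (12) and the median-based choice of $\delta$ translate directly into code (NumPy sketch with made-up 3D electrode coordinates; the helper name is ours):

```python
import numpy as np

def gmd_adjacency(coords):
    """Gaussian median distance weights G_ij = exp(-d_ij^2 / delta^2),
    where delta is the median pairwise Euclidean distance (coords: N x 3)."""
    diff = coords[:, None, :] - coords[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))                    # pairwise distances d_ij
    delta = np.median(d[np.triu_indices(len(coords), k=1)])  # robust scale delta
    return np.exp(-(d ** 2) / delta ** 2)

rng = np.random.default_rng(3)
xyz = rng.standard_normal((6, 3))    # 6 fictitious electrode positions
G = gmd_adjacency(xyz)
```

Using the median rather than the mean keeps $\delta$ insensitive to the few large inter-hemispheric distances.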

3.4.3. Partitioned Feature Fusion

Partitioned feature fusion is designed to process and integrate features extracted from overlapping electrodes across the four partitions. Each partition calculates the relationship between electrodes using GMD to construct a topological graph. Based on Equation (12), the adjacency matrix for this module is:
$$A_{P}(i, j) = \begin{cases} 0 & d_{ij} > \delta \\ \exp\left( -\frac{d_{ij}^{2}}{\delta^{2}} \right) & 0 < d_{ij} \leq \delta \\ 1 & d_{ij} = 0 \end{cases}$$
The Gaussian filter is expressed as follows:
$$F = D_{out}^{-1} L_{P} = I - D_{out}^{-1} A_{P}$$
where $L_{P} = D_{out} - A_{P}$ is the Laplacian matrix of $A_{P}$.
The filtered signal is:
$$\tilde{X} = F X$$
Then, $\tilde{X}$ undergoes two consecutive time domain convolutions adopting kernel sizes 1 × 85 and 1 × 30, respectively. The feature extraction is based on Equation (2) but omits the outermost max-pooling operation.
$$Z_{partition}^{T(1)} = F_{relu}\left( F_{bn}\left( \tilde{X} * W_{partition}^{T(1)} + b_{partition}^{T(1)} \right) \right)$$
$$Z_{partition}^{T(2)} = F_{relu}\left( F_{bn}\left( Z_{partition}^{T(1)} * W_{partition}^{T(2)} + b_{partition}^{T(2)} \right) \right)$$
In this module, the outputs of the four partitions are $Z_{partition}^{T(2)}(P_1)$, $Z_{partition}^{T(2)}(P_2)$, $Z_{partition}^{T(2)}(P_3)$, and $Z_{partition}^{T(2)}(P_4)$, respectively.
As analyzed in Section 3.4.1, the set of electrodes appearing in exactly two partitions is $\Omega_{12} \cup \Omega_{23} \cup \Omega_{24} = \{FC1, FC2, Cz, CPz\}$, while the set appearing in three partitions is $\Omega_{124} \cup \Omega_{234} = \{C1, C2, CP1, CP2\}$, where $\Omega$ subscripted by partition indices denotes the electrodes shared by those partitions. The features of these repeated electrodes are processed with an arithmetic averaging algorithm to obtain a single new feature per electrode, simultaneously eliminating duplicate electrodes. Taking C1 as an example, it belongs to partitions P1, P2, and P4, so its final feature can be expressed as:
$$Z_{partition}^{T(2)}(C1) = \Gamma\left(Z_{partition}^{T(2)}(P_1, C1),\; Z_{partition}^{T(2)}(P_2, C1),\; Z_{partition}^{T(2)}(P_4, C1);\; 3\right)$$
Here, Γ ( f e a t u r e s ; n ) denotes the arithmetic mean of n features.
The partitions after feature processing are labeled $P_1^{new}$, $P_2^{new}$, $P_3^{new}$, and $P_4^{new}$. Therefore, the fused feature can be represented as:
$$Z_{partition}^{fused(2)} = \Pi\left(Z_{partition}^{T(2)}(P_1^{new}),\; Z_{partition}^{T(2)}(P_2^{new}),\; Z_{partition}^{T(2)}(P_3^{new}),\; Z_{partition}^{T(2)}(P_4^{new});\; channel\right)$$
Here, Π ( f e a t u r e s ; c h a n n e l ) denotes concatenation along the channel dimension.
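The averaging-and-merging step above can be sketched as follows; the dictionary-based layout and function name are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def fuse_partitions(partitions):
    """partitions: dict mapping partition name -> dict of electrode name ->
    feature vector. Electrodes appearing in several partitions receive the
    arithmetic mean of their features (the Gamma operator); the de-duplicated
    electrodes are then stacked along the channel axis (the Pi operator)."""
    pooled = {}                               # electrode -> list of features
    order = []                                # preserve first-seen order
    for part in partitions.values():
        for elec, feat in part.items():
            if elec not in pooled:
                pooled[elec] = []
                order.append(elec)
            pooled[elec].append(np.asarray(feat, dtype=float))
    # arithmetic mean over the n partitions containing each electrode
    return np.stack([np.mean(pooled[e], axis=0) for e in order])
```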

3.5. Global Learning Module

The global learning module fuses the node features from the Local Learning module with those from the Partition Learning module. It then computes the cosine similarity between electrodes to construct the adjacency matrix of the Global Learning module. Finally, two residual GCNs are applied to extract global spatial features while mitigating the risk of overfitting.
The fused features between nodes are as follows:
$$Z_{fuse} = \Pi\left(Z_{local}^{G(1)},\; Z_{partition}^{T(2)};\; features\right)$$
Here, $\Pi(\cdot\,;\, features)$ denotes concatenation along the feature dimension.
The cosine similarity between two vectors is:
$$\cos(\eta_i, \eta_j) = \frac{\eta_i \cdot \eta_j}{\|\eta_i\|\,\|\eta_j\|}$$
The adjacency matrix of the global module can be represented as:
$$A_{global} = \begin{bmatrix} \cos(\eta_1, \eta_1) & \cdots & \cos(\eta_1, \eta_N) \\ \vdots & \ddots & \vdots \\ \cos(\eta_N, \eta_1) & \cdots & \cos(\eta_N, \eta_N) \end{bmatrix}$$
Here, N represents the total number of channels, and η i denotes the feature of the i -th channel. A g l o b a l is dynamically updated in each training iteration to enable adaptive learning.
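A short PyTorch sketch of this dynamic adjacency (the function name is ours): normalizing the per-channel features and taking their inner products yields exactly the matrix of pairwise cosine similarities, recomputed from the current features at every forward pass:

```python
import torch

def cosine_adjacency(h):
    """h: (N, F) tensor of per-channel features eta_i. Returns the N x N
    matrix of pairwise cosine similarities, which serves as the global
    adjacency and adapts as the features change during training."""
    h_norm = torch.nn.functional.normalize(h, p=2, dim=1)  # eta_i / ||eta_i||
    return h_norm @ h_norm.t()
```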
Finally, according to the GCN update Equation (7), the output of the first layer of the dual-layer residual GCN is:
$$Z_{global}^{G(1)} = F_{gelu}\left(\tilde{D}_1^{-\frac{1}{2}} \tilde{A}_1 \tilde{D}_1^{-\frac{1}{2}} Z_{fuse} W^{G(1)} + b^{G(1)}\right)$$
Here, $\tilde{A}_1 = A_{global}^{(1)} + I$ denotes the adjacency matrix of the first residual GCN layer, and $\tilde{D}_1$ is the corresponding diagonal degree matrix, with $\tilde{D}_1[i,i] = \sum_j \tilde{A}_1[i,j]$.
The second layer of the residual GCNs yields:
$$Z_{global}^{G(2)} = F_{gelu}\left(\tilde{D}_2^{-\frac{1}{2}} \tilde{A}_2 \tilde{D}_2^{-\frac{1}{2}} \left(Z_{fuse} + Z_{global}^{G(1)}\right) W^{G(2)} + b^{G(2)}\right) + Z_{fuse}$$
Here, $\tilde{A}_2 = A_{global}^{(2)} + I$ denotes the adjacency matrix of the second residual GCN layer, and $\tilde{D}_2$ is the corresponding diagonal degree matrix, with $\tilde{D}_2[i,i] = \sum_j \tilde{A}_2[i,j]$.
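A simplified PyTorch sketch of the two residual layers (for brevity, a single normalized adjacency is shared by both layers, whereas the paper uses per-layer matrices $\tilde{A}_1$ and $\tilde{A}_2$; the class and function names are ours):

```python
import torch

def norm_adj(a):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} with self-loops."""
    a_tilde = a + torch.eye(a.size(0))
    d_inv_sqrt = a_tilde.sum(dim=1).pow(-0.5)
    return d_inv_sqrt[:, None] * a_tilde * d_inv_sqrt[None, :]

class ResidualGCN(torch.nn.Module):
    """Two GCN layers with GELU activations: the second layer propagates
    Z_fuse plus the first layer's output, and a skip connection from
    Z_fuse is added back to the final result."""
    def __init__(self, dim):
        super().__init__()
        self.w1 = torch.nn.Linear(dim, dim)
        self.w2 = torch.nn.Linear(dim, dim)

    def forward(self, a_global, z_fuse):
        a_hat = norm_adj(a_global)
        z1 = torch.nn.functional.gelu(a_hat @ self.w1(z_fuse))
        z2 = torch.nn.functional.gelu(a_hat @ self.w2(z_fuse + z1)) + z_fuse
        return z2
```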

3.6. Classification Module

The feature $Z_{global}^{G(2)}$ is fed into two successive FC layers, and the final output is obtained via a Softmax function. The cross-entropy loss function is used in the LPGGNet model:
$$\mathrm{Loss} = -\sum_{i=1}^{C} y_i \log p_i$$
Here, $p_i$ denotes the predicted probability of class $i$, $C$ is the total number of classes, and $y_i$ is the ground-truth label. The Adam optimizer, used to train the model, combines the advantages of RMSProp and AdaGrad by incorporating momentum and adaptive learning rates, which accelerates convergence and makes it an efficient approach for updating network weights during training.
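A minimal PyTorch sketch of the classification head and one training step: layer sizes, batch size, and random inputs are illustrative assumptions, while the learning rate (0.001) and weight decay (0.01) follow the hyperparameters reported in Section 4.2. Note that `CrossEntropyLoss` combines log-softmax with the negative log-likelihood, so no explicit Softmax layer is needed in the module stack.

```python
import torch

# Illustrative classification head: two FC layers mapping 64-dim fused
# features (a stand-in for Z_global^{G(2)}) to 4 MI classes.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(64, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 4),
)
criterion = torch.nn.CrossEntropyLoss()      # cross-entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.01)

features = torch.randn(8, 64)                # dummy batch of fused features
labels = torch.randint(0, 4, (8,))           # dummy ground-truth labels

optimizer.zero_grad()
loss = criterion(model(features), labels)    # forward pass + loss
loss.backward()                              # backpropagation
optimizer.step()                             # Adam weight update
```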

4. Experiment and Results

4.1. Evaluation Metrics

We employ several common metrics to evaluate the proposed model. We first introduce accuracy [49], which represents the proportion of correctly predicted samples. Its calculation is given by Equation (26):
$$Acc = \frac{TP + TN}{TP + FN + FP + TN}$$
Additionally, classification accuracy is further assessed using the Kappa coefficient [50]. This metric accounts for the possibility of chance agreement in the classification results and is particularly well suited to MI-EEG tasks, as it reflects the stability and reliability of the model in decoding motor imagery signals. It is expressed by Equation (27):
$$Kap = \frac{P_a - P_e}{1 - P_e}$$
Here, P a denotes the actual observed consistency ratio, while P e represents the expected random consistency ratio.
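Both metrics can be computed directly from label arrays; the NumPy sketch below (the function name is ours) assumes integer class labels, with $P_a$ taken as the observed agreement and $P_e$ derived from the marginal class frequencies:

```python
import numpy as np

def accuracy_and_kappa(y_true, y_pred, n_classes):
    """Compute accuracy (Eq. 26) and the Kappa coefficient (Eq. 27) from
    integer label arrays. P_a is the observed agreement; P_e is the
    agreement expected by chance from the marginal class frequencies."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    acc = float(np.mean(y_true == y_pred))                # P_a
    p_true = np.bincount(y_true, minlength=n_classes) / len(y_true)
    p_pred = np.bincount(y_pred, minlength=n_classes) / len(y_pred)
    p_e = float(p_true @ p_pred)                          # P_e
    kappa = (acc - p_e) / (1.0 - p_e)
    return acc, kappa
```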

4.2. Model Parameters

All models in this paper were developed using the PyTorch deep learning framework. All experimental procedures were carried out on a desktop computer powered by an NVIDIA GeForce RTX 4070 GPU and an Intel Core™ i7-12700F CPU. The development environment comprised Python 3.8.1, PyTorch 1.10.1, and CUDA 10.2. The hyperparameters were set as follows: learning rate 0.001, dropout 0.5, regularization coefficient 0.069, batch size 64, and weight decay 0.01. The number of training epochs was set to 300 for both Dataset A and Dataset B. All these parameters were determined through a grid search to ensure optimal model performance on the validation set. To mitigate the impact of randomness during model training, results are reported as the average of one hundred independent runs.

4.3. Experiment on Data A and Data B

4.3.1. Overview Performance

To comprehensively evaluate the proposed model, we benchmarked it against a range of classical and state-of-the-art methods on Dataset A and Dataset B. The compared models spanned from FBCSP (2012) and Shallow/Deep ConvNet (2017) to more recent ones like TS-SEFFNet (2021), Conformer (2022), and GECNN (2024). Table 1 and Table 2 present a summary of the experimental results.
A brief description of the compared models is as follows.
FBCSP [51]: A feature extraction method based on CSP.
Shallow ConvNet [34]: Consisting of two convolutional layers, this compact convolutional neural network is specifically developed for efficient EEG signal classification.
Deep ConvNet [34]: A deep convolutional network structured with five modules, each responsible for distinct stages of spatial and temporal feature processing.
TS-SEFFNet [52]: A time spectrum-based compressed excitation feature fusion network.
SWLDA [53]: SWLDA selects significant features step by step based on statistical criteria to construct an optimal linear classifier.
GAT [54]: GAT introduces a self-attention mechanism to dynamically compute the importance weights of neighboring nodes.
Conformer [55]: A compact convolutional transformer combining the advantages of CNN and Transformer.
GECNN [56]: A graph embedding convolutional neural network.
As shown in Table 1, our model significantly outperforms other comparison methods on Subjects 2, 3, and 6, achieving accuracy rates of 72.9%, 95.5%, and 76.7%, respectively. Regarding the overall metrics of average accuracy (Avg) and Kappa coefficient (Kap), the proposed model achieved the highest values (82.9% and 0.772), indicating superior overall classification performance and consistency. Despite slight underperformance against Conformer or GECNN on specific subjects (e.g., 1, 4, 7, 8), it exhibited exceptional stability across all subjects, confirming its strong generalization and robustness.
As shown in Table 2, our model also performed well, achieving the highest accuracy on four out of five subjects. The Avg reached 87.50%, with a Kap of 0.8392, both significantly outperforming other comparison models.
To demonstrate the statistical significance of the classification accuracy achieved by the proposed model, we employed the nonparametric Wilcoxon signed-rank test [57] on Dataset A. Using the proposed LPGGNet as the baseline, we conducted pairwise comparisons for each reproduced model. The p-values from the tests are listed in Table 3. The table shows that the p-values for the proposed method versus each reproduced method are all less than 0.05. This indicates that the proposed method achieves significantly higher recognition accuracy compared to other methods.
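As a hedged illustration, such a pairwise comparison can be run with `scipy.stats.wilcoxon` on per-subject accuracies of the proposed model versus a baseline; the numbers below are made up for demonstration and are not the paper's actual results:

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative per-subject accuracies (%) for nine subjects; the arrays
# are paired, i.e., entry k of each array belongs to the same subject.
lpggnet_acc  = np.array([85.2, 72.9, 95.5, 78.0, 76.1, 76.7, 88.4, 84.3, 86.9])
baseline_acc = np.array([83.1, 65.4, 90.2, 74.5, 70.8, 70.1, 86.0, 82.7, 84.0])

# Wilcoxon signed-rank test on the paired differences (two-sided)
stat, p_value = wilcoxon(lpggnet_acc, baseline_acc)
significant = p_value < 0.05
```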
Combining the experimental results from Table 1, Table 2 and Table 3, our model maintains stable and high performance across different subjects and datasets. The results confirm that the proposed LPGGNet is both effective and advanced for MI-EEG classification.
To further evaluate the stability and distribution characteristics of each model, we plotted violin plots of classification accuracy on Dataset A and Dataset B. These plots integrate the features of box plots and density distributions, providing an intuitive visualization of the model’s central tendency, dispersion, and overall distribution.
According to Figure 5, the proposed LPGGNet provides enhanced classification performance and robustness for both datasets. The violin plots show that the median accuracy of LPGGNet is notably higher than that of the comparison models, with a more compact and skewed distribution. This demonstrates that LPGGNet consistently maintains high accuracy with minimal fluctuations across subjects. Compared to the comparison models, LPGGNet not only attains higher average accuracy but also exhibits stronger generalization ability and robustness. Moreover, it effectively alleviates performance variability induced by individual differences, thereby validating the effectiveness of its architecture.

4.3.2. Ablation Experiments

In this section, we conducted ablation experiments at two levels to validate the effectiveness of the proposed LPGGNet. First, we performed ablation experiments on Dataset A and Dataset B for each major module (i.e., local learning, partition learning, and global learning modules) to evaluate their contributions. The experimental results are shown in Table 4.
The findings can be analyzed from three perspectives:
Single-Module Performance: When used independently, the Partition-only variant performs best among the three standalone modules, achieving accuracies of 80.1% and 84.92% on the two datasets. This result significantly outperforms both the Local-only and Global-only variants, indicating that capturing interactions between different partitions is crucial for MI tasks.
Module Necessity: Removing any module from the complete model results in performance degradation, demonstrating that each module is indispensable. Performance degradation was most pronounced when removing the Partition learning module. The complete model achieves 82.9% vs. 73.8% after removal on Dataset A, further validating its critical role. Similarly, removing either the Local Learning or Global Learning module also reduces model performance by 0.6% and 3.5%, respectively, on Dataset A. This indicates all three modules possess irreplaceable functions.
Synergistic Effect: The full model (Ours) achieves the best performance on both datasets (82.9% on Dataset A, 87.50% on Dataset B), surpassing all ablated variants. Notably, its performance surpasses the second-best configuration (Local removed) by 0.6% and 0.97% on Datasets A and B, respectively. These findings confirm that the three learning mechanisms are complementary, and their synergistic integration is crucial for achieving optimal and robust classification performance.
In conclusion, the ablation studies solidly verify that each proposed module captures distinct features, and that their combination within LPGGNet is both necessary and effective for attaining optimal performance.
Second, we conducted further ablation experiments on the components within each major module across Datasets A and B to validate their effectiveness. The results are shown in Table 5.
In the local learning module, replacing the PDC adjacency component with Pearson correlation coefficient led to accuracy decreases of 0.8% and 1.2% on Datasets A and B, respectively. In the partition learning module, replacing the GMD component with an inverse-square component reduced the accuracy by 1.6% and 2.3%, respectively.
Furthermore, when removing the partitioning strategy from the partition learning module, accuracy decreased by 1.2% and 1.9% on Datasets A and B, respectively. When removing the residual connection component from the global learning module, accuracy decreased by 0.6% and 0.5% on Datasets A and B, respectively.
These results indicate that both component replacement and removal lead to model performance degradation, with the most significant impact observed when replacing the GMD component or removing the partitioning strategy. This finding aligns with the conclusions in Table 4, further validating that the partition learning module contributes most significantly to overall performance.

4.3.3. Online Wheelchair Experiment

To evaluate the applicability of our model in rehabilitation, an intelligent wheelchair control system grounded in a motor imagery BCI was designed. As shown in Figure 6, it includes signal acquisition, LPGGNet decoding, and wheelchair control. The system first acquires raw EEG signals using BrainAmp equipment (Brain Products GmbH, Gilching, Germany) and transmits them via TCP/IP. Subsequently, data preprocessing is performed, followed by training with LPGGNet. Finally, the trained outputs are mapped to wheelchair control commands, enabling real-time control of the smart wheelchair via Bluetooth.
We invited five subjects from Dataset B to take part in an online experiment. Each subject completed a total of 120 MI tasks across three phases, with 40 tasks per phase. Each task commenced with a clenching signal serving as the starting point for recording MI-EEG signals. Following the clenching signal, subjects maintained a 4 s state of MI. Subsequently, LPGGNet performed decoding and mapped the results to wheelchair commands, with the preprocessing steps following Section 3.1.3 Data Processing. The MI tasks map to four operational commands for the smart wheelchair. Specifically, imagining the left hand corresponds to a left turn, imagining the right hand to a right turn, imagining both feet to forward movement, and imagining the tongue to backward movement.
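As a hypothetical illustration of this class-to-command mapping (the command strings and class-index assignment are our assumptions, not the system's actual protocol):

```python
# Hypothetical mapping from LPGGNet's predicted MI class index to a
# wheelchair command, following the task-to-command description above.
MI_TO_COMMAND = {
    0: "TURN_LEFT",    # left-hand imagery
    1: "TURN_RIGHT",   # right-hand imagery
    2: "FORWARD",      # both-feet imagery
    3: "BACKWARD",     # tongue imagery
}

def decode_to_command(class_index):
    """Translate a decoded MI class index into a wheelchair control command."""
    return MI_TO_COMMAND[class_index]
```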
Table 6 shows the classification accuracy of the online wheelchair experiment. Compared with the offline data listed in Table 2, we observe a lower accuracy in the online experiment due to the more complex environmental conditions during online data collection.
Specifically, challenges in online environments include environmental noise interference, fatigue and attention fluctuations in subjects, system latency, and potential signal loss or anomalies during transmission or processing. To address these issues, future improvement directions may include: (1) optimizing signal preprocessing and filtering methods to enhance noise resistance, (2) designing more robust models to accommodate individual variations and fatigue states, (3) reducing system latency and improving real-time signal processing workflows, and (4) introducing adaptive correction mechanisms to dynamically adjust model performance in online settings.

5. Discussion

5.1. Effect of Gaussian Median Distance (GMD)

In the Partition Learning module, we propose a novel GMD to measure the relationship between electrodes. Compared with traditional methods that use an inverse square function of the Euclidean distance (ISED) [58] to measure electrode relationships, GMD effectively suppresses long-range connections, resulting in smoother attenuation. To demonstrate the advantages of GMD, we analyzed data from subject A01 in the BCI IV 2a dataset. Adjacency matrices were constructed using both methods across four partitions, followed by heatmap visualization.
Analysis of Figure 7 reveals that GMD with exponential decay causes weights between long-range electrodes to rapidly approach zero, resulting in a sparser graph structure. In contrast, ISED exhibits slower decay and preserves numerous weak long-range connections, leading to a dense graph structure with poor discriminability. Since connections between long-range nodes persist, ISED often induces a global smoothing effect that introduces noise and weakens local functional relationships. Conversely, GMD emphasizes connections within local neighborhoods, promoting signal smoothing within localized partitions and better preserving the spatial specificity of EEG signals. Overall, the adjacency matrix constructed using GMD not only aligns more closely with the physiological characteristics of EEG signals but also proves more suitable for subsequent graph convolutional learning tasks.

5.2. Visualization Analysis of LPGGNet

5.2.1. Visualization of Feature Separation

To further investigate the ability of the models to represent EEG features, this paper utilizes the t-SNE [59] algorithm to visualize high-dimensional EEG features learned by the Deep ConvNet, Shallow ConvNet, GAT, TS-SEFFNet, and LPGGNet in a two-dimensional space. As shown in Figure 8, the different models exhibit significant differences in feature separability and clarity of inter-class boundaries.
Specifically, features extracted by Shallow ConvNet and TS-SEFFNet exhibit considerable overlap across the four classes, particularly with highly conflated distributions between Class 2 and Class 3 samples. The fuzzy class boundaries and large distances between samples within the same class indicate feature redundancy and limited discriminative power. Deep ConvNet and GAT demonstrate improved feature clustering, with markedly increased inter-class distances and more compact intra-class samples. However, small transitional regions exist between categories, which indicates that the discriminability of features still needs to be improved.
In contrast, LPGGNet exhibits the clearest feature distribution, forming distinct and stable boundaries between categories. This model significantly enhances feature separability, enabling samples within the same category to achieve higher clustering in low-dimensional space while effectively widening the distance between different categories. This demonstrates that LPGGNet more fully captures task-relevant discriminative information in EEG signals, thereby forming a more structured and recognizable representation in the feature space.
Further analysis from the perspective of category confusion reveals that Shallow ConvNet and GAT models exhibit significant overlap between categories 2 and 3. TS-SEFFNet shows insufficient distinction between categories 0 and 1, while Deep ConvNet improves separation for both pairs of categories. LPGGNet effectively mitigates multi-class confusion, exhibiting only minor overlap between Class 0 and Class 1 while maintaining the clearest overall classification boundaries. This outcome reflects its superiority in capturing distinct activation patterns across brain regions corresponding to different motor imagery tasks.

5.2.2. Visualization of Electrode Contributions

To demonstrate the contribution of electrodes in the LPGGNet decision-making process, we visualized Dataset A using BrainNet Viewer [60] (Figure 9). Node sizes in the figure reflect the magnitude of each electrode’s contribution to the model’s classification. Noticeable differences in node sizes among electrodes indicate uneven contributions to the model’s decision-making.
Specifically, electrodes Fz, FC1, C3, Cz, C4, P1, and Pz exhibit large node sizes, indicating their higher contribution to model classification. Among these, C3, Cz, and C4 are located in the bilateral motor cortex and central parietal regions; Fz and FC1 are in the anterior central region; and P1 and Pz are in the parietal midline region. These brain areas align closely with the neural cortex associated with the motor imagery task. In contrast, electrodes such as FCz, CP1, CP2, CP4, C6, and FC4 exhibit medium node sizes, playing a supplementary role in model classification. Electrodes located in marginal or occipital regions show smaller node sizes, exerting limited influence on model output.
Overall, the spatial distribution of electrodes indicates that LPGGNet primarily relies on signals from central and central-parietal regions for discrimination in motor imagery tasks. This aligns closely with known motor cortex activity, further validating the physiological interpretability of the model’s spatial feature extraction.

5.3. Practicality of LPGGNet

To further evaluate the practicality of the proposed model, we compared its parameters, training time, and computational complexity (FLOPs). As shown in Table 7, LPGGNet exhibits a slight increase in parameters (0.513 M) and computational cost (486.2 M) compared to Shallow ConvNet, TS-SEFFNet, and GECNN. However, this moderate computational overhead yields significant improvements in classification performance. Meanwhile, the model’s training time increases only slightly to 14.361 min. Given the corresponding performance gains, this additional overhead is acceptable, indicating that LPGGNet achieves a reasonable balance between accuracy and computational efficiency.
These results demonstrate that the proposed multi-level graph learning and feature fusion mechanism effectively enhances feature representation capabilities. It achieves this improvement without significantly increasing the computational burden. This finding validates the model’s practicality and scalability. However, when deployed on resource-constrained embedded systems or mobile devices, the higher FLOPs may cause inference delays or reduced usability. This issue remains an important aspect for further optimization in future research.

5.4. Relationship to Existing Hierarchical GCN and Spatio-Temporal GNN Models

While the proposed LPGGNet is conceptually similar to hierarchical and spatio-temporal graph models such as DiffPool, T-GCN, and EEGFormer, it distinguishes itself through a unique structural design.
DiffPool [61] achieves hierarchical graph representations through learnable pooling. In contrast, LPGGNet constructs partitioned adjacency based on task-driven partitioning and Gaussian median distance, offering greater neurobiological interpretability. T-GCN [62] combines GCN with GRU to jointly model spatio-temporal dependencies. By comparison, LPGGNet employs a hierarchical decoupling approach. It first extracts local temporal and topological features, and then integrates partitioned and global dependencies. EEGFormer [37] learns multi-layer spatio-temporal relationships through attention mechanisms. In contrast, LPGGNet explicitly constructs a multi-layer graph structure based on PDC, Gaussian distance, and cosine similarity.
Overall, LPGGNet integrates the strengths of these models into a three-stage graph structure learning framework tailored to EEG characteristics.

6. Conclusions

A novel LPGGNet is proposed in this paper to enhance the decoding performance of MI-EEG signals by capturing brain task information across different granularities of EEG graph data. Within LPGGNet, three hierarchical modules are designed to extract EEG signal features. The Local Learning module extracts local features, the Partition Learning module acquires task-relevant partitioned features, and the Global Learning module learns the overall characteristics of the fused features. These three modules effectively improve the distinguishability between different motor tasks and enhance the clarity of distinct brain regions. Although the proposed LPGGNet achieves state-of-the-art performance on both a public dataset and a private dataset, it still has some limitations. The current framework relies on predefined GMD and PDC adjacency matrices, which to some extent restricts its flexibility and adaptability in modeling local node features and complex structures. Future work will focus on: (1) incorporating Transformer mechanisms to better capture inter-partition features and dynamically learn adjacency matrices; (2) exploring adaptive scaling factors in the Gaussian median distance from a data-driven perspective; (3) integrating multi-brain-region prior knowledge to improve motor imagery classification accuracy.

Author Contributions

Conceptualization, N.Z., X.L. and X.T.; methodology, N.Z. and H.J.; software, N.Z. and H.J.; validation, N.Z.; writing—original draft preparation, N.Z.; writing—review and editing, N.Z.; supervision, G.J.; project administration, X.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology (Grant No. SKLBIK2025009), Zunyi Normal University Academic Emerging Scholars Cultivation and Innovative Exploration Special Project (Cultivation Project) [Grant No. ZunShiXM [2023]-1-02], Zunyi City Science and Technology Program Project [Grant No. ZunShiKeHe [2023]-161].

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee for Research Involving Human Participants at Chongqing University of Posts and Telecommunications, Chongqing (protocol 16/2024/2025, approved on 20 February 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

A publicly available dataset (Dataset A) was used in this study. These data can be found here: https://www.bbci.de/competition/download/competition_iv/BCICIV_2a_gdf.zip (accessed on 20 March 2025). A private dataset (Dataset B) was used in this study. The dataset presented in this article are not readily available because the data are part of an ongoing study. Requests to access the datasets should be directed to corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Su, J.; Wang, J.; Wang, W.; Wang, Y.; Bunterngchit, C.; Zhang, P.; Hou, Z.G. An adaptive hybrid brain computer interface for hand function rehabilitation of stroke patients. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 2950–2960.
2. Oh, E.; Shin, S.; Kim, S.P. Brain–computer interface in critical care and rehabilitation. Acute Crit. Care 2024, 39, 24.
3. Ma, Z.Z.; Wu, J.J.; Cao, Z.; Hua, X.Y.; Zheng, M.X.; Xing, X.X.; Xu, J.G. Motor imagery-based brain–computer interface rehabilitation programs enhance upper extremity performance and cortical activation in stroke patients. J. Neuroeng. Rehabil. 2024, 21, 91.
4. Wang, A.; Sun, J. Personalized EEG-guided brain stimulation targeting in major depression via network controllability and multi-objective optimization. BMC Psychiatry 2025, 25, 723.
5. Tozzi, L.; Zhang, X.; Pines, A.; Olmsted, A.M.; Zhai, E.S.; Anene, E.T.; Williams, L.M. Personalized brain circuit scores identify clinically distinct biotypes in depression and anxiety. Nat. Med. 2024, 30, 2076–2087.
6. Mikołajewska, E.; Mikołajewski, D. Non-invasive EEG-based brain-computer interfaces in patients with disorders of consciousness. Mil. Med. Res. 2014, 1, 14.
7. Yang, B.; Rong, F.; Xie, Y.; Li, D.; Zhang, J.; Li, F.; Gao, X. A multi-day and high-quality EEG dataset for motor imagery brain-computer interface. Sci. Data 2025, 12, 488.
8. Degirmenci, M.; Yuce, Y.K.; Perc, M.; Isler, Y. EEG channel and feature investigation in binary and multiple motor imagery task predictions. Front. Hum. Neurosci. 2024, 18, 1525139.
9. Almohammadi, A.; Wang, Y.-K. Revealing brain connectivity: Graph embeddings for EEG representation learning and comparative analysis of structural and functional connectivity. Front. Neurosci. 2024, 17, 1288433.
10. Awais, M.A.; Yusoff, M.Z. Partial Directed Coherence for the Classification of Motor Imagery-Based Brain–Computer Interface. In Proceedings of the Multimedia University Engineering Conference (MECON 2022); Atlantis Press: Dordrecht, The Netherlands, 2022; pp. 121–131.
11. Cattai, T.; Colonnese, S.; Barbarossa, S. Robust graph topology inference for multiple brain EEG networks. IEEE Trans. Signal Inf. Process. Netw. 2025, 11, 1317–1331.
12. Scrivener, C.L.; Reader, A.T. Variability of EEG electrode positions and their underlying brain regions: Visualizing gel artifacts from a simultaneous EEG–fMRI dataset. Brain Behav. 2022, 12, e2476.
13. Stier, C.; Loose, M.; Loew, C.; Segovia-Oropeza, M.; Baek, S.; Lerche, H.; Focke, N.K. Comprehensive evaluation of EEG spatial sampling, head modeling, and parcellation effects on network alterations in idiopathic generalized epilepsy. medRxiv 2024.
14. Wang, L.; Li, Z.X. EEG classification based on common spatial pattern and LDA. In Proceedings of the 7th International Conference on Artificial Life and Robotics (ICAROB), Oita, Japan, 13–16 January 2020; pp. 1–6.
15. Lee, D.; Park, S.H.; Lee, S.G. Improving the accuracy and training speed of motor imagery brain–computer interfaces using wavelet-based combined feature vectors and Gaussian mixture model-supervectors. Sensors 2017, 17, 2282.
16. Ma, W.; Zheng, Y.; Li, T.; Li, Z.; Li, Y.; Wang, L. A comprehensive review of deep learning in EEG-based emotion recognition: Classifications, trends, and practical implications. PeerJ Comput. Sci. 2024, 10, e2065.
17. Saibene, A.; Ghaemi, H.; Dagdevir, E. Deep learning in motor imagery EEG signal decoding: A systematic review. Neurocomputing 2024, 610, 128577.
18. Tabar, Y.R.; Halici, U. A novel deep learning approach for classification of EEG motor imagery signals. J. Neural Eng. 2016, 14, 016003.
19. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 2018, 15, 056013.
20. Li, Y.; Zheng, W.; Zong, Y.; Cui, Z.; Zhang, T.; Zhou, X. A bi-hemisphere domain adversarial neural network model for EEG emotion recognition. IEEE Trans. Affect. Comput. 2018, 12, 494–505.
21. Lomelin-Ibarra, V.A.; Gutierrez-Rodriguez, A.E.; Cantoral-Ceballos, J.A. Motor imagery analysis from extensive EEG data representations using convolutional neural networks. Sensors 2022, 22, 6093.
22. Tang, X.; Yang, C.; Sun, X.; Zou, M.; Wang, H. Motor imagery EEG decoding based on multi-scale hybrid networks and feature enhancement. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 1208–1218.
23. Hamidi, A.; Kiani, K. Motor imagery EEG signals classification using a Transformer-GCN approach. Appl. Soft Comput. 2025, 170, 112686.
24. Shan, X.; Cao, J.; Huo, S.; Chen, L.; Sarrigiannis, P.G.; Zhao, Y. Spatial–temporal graph convolutional network for Alzheimer classification based on brain functional connectivity imaging of electroencephalogram. Hum. Brain Mapp. 2022, 43, 5194–5209.
25. Graña, M.; Morais-Quilez, I. A review of graph neural networks for electroencephalography data analysis. Neurocomputing 2023, 562, 126901.
26. Tian, W.; Li, M.; Ju, X.; Liu, Y. Applying multiple functional connectivity features in GCN for EEG-based human identification. Brain Sci. 2022, 12, 1072.
27. Mohammadi, H.; Karwowski, W. Graph neural networks in brain connectivity studies: Methods, challenges, and future directions. Brain Sci. 2024, 15, 17.
28. Klepl, D.; Wu, M.; He, F. Graph neural network-based EEG classification: A survey. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 493–503.
29. Henry, J.C. Electroencephalography: Basic principles, clinical applications, and related fields. Neurology 2006, 67, 2092.
30. Leeuwis, N.; Yoon, S.; Alimardani, M. Functional connectivity analysis in motor-imagery brain–computer interfaces. Front. Hum. Neurosci. 2021, 15, 732946.
31. Maghsoudi, A.; Shalbaf, A. Hand motor imagery classification using effective connectivity and hierarchical machine learning in EEG signals. J. Biomed. Phys. Eng. 2022, 12, 161.
32. Lun, X.; Liu, J.; Zhang, Y.; Hao, Z.; Hou, Y. A motor imagery signals classification method via the difference of EEG signals between left and right hemispheric electrodes. Front. Neurosci. 2022, 16, 865594.
33. Yazıcı, M.; Ulutaş, M.; Okuyan, M. Effect of EEG electrode numbers on source estimation in motor imagery. Brain Sci. 2025, 15, 685.
34. Schirrmeister, R.T.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Ball, T. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 2017, 38, 5391–5420.
35. Zhao, X.; Zhang, H.; Zhu, G.; You, F.; Kuang, S.; Sun, L. A multi-branch 3D convolutional neural network for EEG-based motor imagery classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 2164–2177.
36. Hou, Y.; Jia, S.; Lun, X.; Hao, Z.; Shi, Y.; Li, Y.; Lv, J. GCNs-Net: A graph convolutional neural network approach for decoding time-resolved EEG motor imagery signals. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 7312–7323.
37. Wan, Z.; Li, M.; Liu, S.; Huang, J.; Tan, H.; Duan, W. EEGformer: A transformer-based brain activity classification method using EEG signal. Front. Neurosci. 2023, 17, 1148855.
  38. Liu, K.; Yang, T.; Yu, Z.; Yi, W.; Yu, H.; Wang, G.; Wu, W. MSVTNet: Multi-scale vision transformer neural network for EEG-based motor imagery decoding. IEEE J. Biomed. Health Inf. 2024, 28, 7126–7137. [Google Scholar] [CrossRef] [PubMed]
  39. Jin, M.; Du, C.; He, H.; Cai, T.; Li, J. PGCN: Pyramidal graph convolutional network for EEG emotion recognition. IEEE Trans. Multimed. 2024, 26, 9070–9082. [Google Scholar] [CrossRef]
  40. Du, G.; Su, J.; Zhang, L.; Su, K.; Wang, X.; Teng, S.; Liu, P.X. A multi-dimensional graph convolution network for EEG emotion recognition. IEEE Trans. Instrum. Meas. 2022, 71, 2518311. [Google Scholar] [CrossRef]
  41. Aung, H.W.; Li, J.J.; Shi, B.; An, Y.; Su, S.W. EEG_GLT-Net: Optimising EEG graphs for real-time motor imagery signals classification. Biomed. Signal Process. Control 2025, 104, 107458. [Google Scholar] [CrossRef]
  42. Chiarion, G.; Sparacino, L.; Antonacci, Y.; Faes, L.; Mesin, L. Connectivity analysis in EEG data: A tutorial review of the state of the art and emerging trends. Bioengineering 2023, 10, 372. [Google Scholar] [CrossRef]
  43. Brunner, C.; Leeb, R.; Müller-Putz, G.; Schlögl, A.; Pfurtscheller, G. BCI Competition 2008–Graz data set A. Inst. Knowl. Discov. Graz Univ. Technol. 2008, 16, 34. [Google Scholar]
  44. Altaheri, H.; Muhammad, G.; Alsulaiman, M.; Amin, S.U.; Altuwaijri, G.A.; Abdul, W.; Bencherif, M.A.; Faisal, M. Deep learning techniques for classification of electroencephalogram (EEG) motor imagery (MI) signals: A review. Neural Comput. Appl. 2023, 35, 14681–14722. [Google Scholar] [CrossRef]
  45. Awais, M.A.; Yusoff, M.Z.; Khan, D.M.; Yahya, N.; Kamel, N.; Ebrahim, M. Effective connectivity for decoding electroencephalographic motor imagery using a probabilistic neural network. Sensors 2021, 21, 6570. [Google Scholar] [CrossRef]
  46. Tian, G.; Liu, Y. Simple convolutional neural network for left-right hands motor imagery EEG signals classification. Int. J. Cogn. Inform. Nat. Intell. 2019, 13, 36–49. [Google Scholar] [CrossRef]
  47. Giannopulu, I.; Mizutani, H. Neural kinesthetic contribution to motor imagery of body parts: Tongue, hands, and feet. Front. Hum. Neurosci. 2021, 15, 602723. [Google Scholar] [CrossRef]
  48. Sauvage, C.; Jissendi, P.; Seignan, S.; Manto, M.; Habas, C. Brain areas involved in the control of speed during a motor sequence of the foot: Real movement versus mental imagery. J. Neuroradiol. 2013, 42, 115–125. [Google Scholar] [CrossRef]
  49. Zhao, W.; Jiang, X.; Zhang, B.; Xiao, S.; Weng, S. CTNet: A convolutional transformer network for EEG-based motor imagery classification. Sci. Rep. 2024, 14, 20237. [Google Scholar] [CrossRef]
  50. Chen, J.; Yu, Z.; Gu, Z.; Li, Y. Deep temporal-spatial feature learning for motor imagery-based brain–computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 2356–2366. [Google Scholar] [CrossRef]
  51. Ang, K.K.; Chin, Z.Y.; Wang, C.; Guan, C.; Zhang, H. Filter bank common spatial pattern algorithm on BCI competition IV datasets 2a and 2b. Front. Neurosci. 2012, 6, 39. [Google Scholar] [CrossRef]
  52. Li, Y.; Guo, L.; Liu, Y.; Liu, J.; Meng, F. A temporal-spectral-based squeeze-and-excitation feature fusion network for motor imagery EEG decoding. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 1534–1545. [Google Scholar] [CrossRef] [PubMed]
  53. Mohammadi, E.; Daneshmand, P.G.; Khorzooghi, S.M.S.M. Electroencephalography-based brain–computer interface motor imagery classification. J. Med. Signals Sens. 2022, 12, 40–47. [Google Scholar] [PubMed]
  54. Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903. [Google Scholar]
  55. Song, Y.; Zheng, Q.; Liu, B.; Gao, X. EEG conformer: Convolutional transformer for EEG decoding and visualization. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 31, 710–719. [Google Scholar] [CrossRef] [PubMed]
  56. Shi, J.; Tang, J.; Lu, Z.; Zhang, R.; Yang, J.; Guo, Q.; Zhang, D. A brain topography graph embedded convolutional neural network for EEG-based motor imagery classification. Biomed. Signal Process. Control 2024, 95, 106401. [Google Scholar] [CrossRef]
  57. Woolson, R.F. Wilcoxon signed-rank test. Wiley Encycl. Clin. Trials. 2007, 1–3. [Google Scholar] [CrossRef]
  58. Jia, J.; Zhang, B.; Lv, H.; Xu, Z.; Hu, S.; Li, H. CR-GCN: Channel-Relationships-Based Graph Convolutional Network for EEG Emotion Recognition. Brain Sci. 2022, 12, 987. [Google Scholar] [CrossRef]
  59. Maaten, L.v.d.; Hinton, G.E. Visualizing Data Using T-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  60. Xia, M.; Wang, J.; He, Y. BrainNet Viewer: A network visualization tool for human brain connectomics. PLoS ONE 2013, 8, e68910. [Google Scholar] [CrossRef]
  61. Ying, Z.; You, J.; Morris, C.; Ren, X.; Hamilton, W.; Leskovec, J. Hierarchical graph representation learning with differentiable pooling. Adv. Neural Inf. Process. Syst. 2018, 31, 4805–4815. [Google Scholar]
  62. Zhao, L.; Song, Y.; Zhang, C.; Liu, Y.; Wang, P.; Lin, T.; Deng, M.; Li, H. T-GCN: A temporal graph convolutional network for traffic prediction. IEEE Trans. Intell. Transp. Syst. 2019, 21, 3848–3858. [Google Scholar] [CrossRef]
Figure 1. Standard 10–20 electrode distribution of Dataset A.
Figure 2. Laboratory Private Dataset. (a) Electrode Distribution of Dataset B. (b) Experimental Paradigm of Dataset B, illustrating the timing sequence of each motor imagery trial.
Figure 3. Framework of LPGGNet. (a) Learning local features of EEG signals; (b) Learning partitioned features of EEG signals; (c) Learning global features of EEG signals; (d) Classifying EEG signals.
Figure 4. Schematic diagram of electrode partitioning. (a) Electrode color coding, where each color represents a partition; electrodes shown in multiple colors belong to multiple partitions. (b) Electrode partitions in Dataset A. (c) Electrode partitions in Dataset B.
Figure 5. Violin plots of classification accuracy for Datasets A and B. (a) Distribution of classification accuracy for four neural network models on Dataset A. (b) Distribution of classification accuracy for four neural network models on Dataset B. The red line represents the median accuracy, the white box indicates the interquartile range, and the kernel density shape reflects the overall distribution of accuracy values across subjects.
Figure 6. Online Wheelchair Experiment.
Figure 7. Heatmaps of adjacency matrices constructed based on GMD and ISED. (a–d) show the adjacency matrices constructed based on ISED for partitions P1–P4, respectively; (e–h) show the adjacency matrices constructed based on GMD for partitions P1–P4, respectively. (P1) 1 denotes the first electrode in the first partition.
Figure 8. t-SNE visualization of features learned for Subject 3 in Datasets A and B using different methods. The numbers 0, 1, 2, and 3 denote the category labels corresponding to imagined movements of the left hand, the right hand, both feet, and the tongue, respectively.
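Embedding plots like those in Figure 8 are produced with t-SNE [59]. The sketch below shows the general recipe with scikit-learn; the feature matrix, label coding, and dimensions are placeholders, not the actual features learned by LPGGNet.

```python
# Minimal t-SNE projection sketch; the input features are random stand-ins
# for learned MI-EEG trial representations (200 trials x 64 dimensions).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))
labels = rng.integers(0, 4, size=200)  # 0=left hand, 1=right hand, 2=feet, 3=tongue

# Project to 2-D; perplexity must be smaller than the number of samples.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(features)
print(embedding.shape)  # one 2-D point per trial, colored by `labels` when plotted
```

Each row of `embedding` is then scattered in the plane and colored by its class label to reveal how well the learned features separate the four imagery categories.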
Figure 9. Visualization of electrode contributions for Subject 3 in Dataset A using BrainNet Viewer. Larger nodes indicate higher electrode contribution during LPGGNet’s decision-making process.
Table 1. Evaluation Results of Models on Dataset A.

| Subject | FBCSP 2012 | Shallow ConvNet * 2017 | Deep ConvNet * 2017 | GAT * 2017 | TS-SEFFNet * 2021 | SWLDA 2022 | Conformer 2022 | GECNN 2024 | Ours |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 76.00 | 74.75 | 80.15 | 82.36 | 76.17 | 71.43 | 88.19 | 87.90 | 87.5 |
| 2 | 56.50 | 55.32 | 60.39 | 67.12 | 54.62 | 43.87 | 61.46 | 67.49 | 72.9 |
| 3 | 81.25 | 80.14 | 88.53 | 90.33 | 85.76 | 77.35 | 93.40 | 93.41 | 95.5 |
| 4 | 61.00 | 67.00 | 70.42 | 68.33 | 61.63 | 59.44 | 78.13 | 71.49 | 74.3 |
| 5 | 55.00 | 66.56 | 71.33 | 75.67 | 64.38 | 51.74 | 52.08 | 83.70 | 82.3 |
| 6 | 45.25 | 63.12 | 64.66 | 62.08 | 58.75 | 48.20 | 65.28 | 60.93 | 76.7 |
| 7 | 82.75 | 83.11 | 85.89 | 88.01 | 80.69 | 69.10 | 92.36 | 90.61 | 87.8 |
| 8 | 81.25 | 75.20 | 83.16 | 80.90 | 78.82 | 74.48 | 88.19 | 83.76 | 81.9 |
| 9 | 70.75 | 76.82 | 85.31 | 82.31 | 79.55 | 83.97 | 88.89 | 84.85 | 87.2 |
| Avg ± Std (%) | 67.75 ± 13.73 | 71.35 ± 8.93 | 76.64 ± 10.2 | 77.45 ± 9.79 | 71.15 ± 11.31 | 64.40 ± 14.11 | 78.66 ± 15.3 | 80.46 ± 11.16 | **82.9 ± 7.38** |
| Kap (%) | 57.00 | 61.88 | 68.56 | 69.66 | 62.12 | 53.00 | 71.55 | 74.00 | 77.2 |

* indicates that the model was reproduced, and the bold font indicates the highest average accuracy.
Table 2. Evaluation Results of Models on Dataset B.

| Subject | Shallow ConvNet * 2017 | Deep ConvNet * 2017 | GAT * 2017 | TS-SEFFNet * 2021 | Ours |
|---|---|---|---|---|---|
| 1 | 81.42 | 83.09 | 85.34 | 77.38 | 87.50 |
| 2 | 79.55 | 83.02 | 78.83 | 72.13 | 85.91 |
| 3 | 87.10 | 92.54 | 89.05 | 86.26 | 94.05 |
| 4 | 82.30 | 72.74 | 82.01 | 76.79 | 81.33 |
| 5 | 81.89 | 84.04 | 83.55 | 80.08 | 88.75 |
| Avg ± Std (%) | 82.45 ± 2.8 | 83.08 ± 7.02 | 83.75 ± 3.8 | 78.52 ± 8.93 | **87.50 ± 4.61** |
| Kap (%) | 76.85 | 78.47 | 79.02 | 71.05 | 83.92 |

* indicates that the model was reproduced, and the bold font indicates the highest average accuracy.
Table 3. The p-value between our proposed model and other models.

| Method | Shallow ConvNet | Deep ConvNet | GAT | TS-SEFFNet | GECNN |
|---|---|---|---|---|---|
| p-value | 0.008 | 0.015 | 0.032 | 0.046 | 0.020 |
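The p-values in Table 3 come from the Wilcoxon signed-rank test [57] applied to paired per-subject accuracies. A minimal SciPy sketch, using the Dataset A per-subject accuracies of GECNN and LPGGNet from Table 1 as the paired samples (the published p-values may be computed over different runs or folds, so this illustration need not reproduce them exactly):

```python
# Paired nonparametric comparison of two models across subjects.
from scipy.stats import wilcoxon

# Per-subject Dataset A accuracies from Table 1 (Subjects 1-9).
gecnn = [87.90, 67.49, 93.41, 71.49, 83.70, 60.93, 90.61, 83.76, 84.85]
ours  = [87.5, 72.9, 95.5, 74.3, 82.3, 76.7, 87.8, 81.9, 87.2]

# Two-sided test on the per-subject accuracy differences.
stat, p = wilcoxon(ours, gecnn)
print(f"W = {stat}, p = {p:.3f}")
```

A p-value below 0.05, as in every column of Table 3, indicates that the per-subject improvement of LPGGNet over the compared model is unlikely under the null hypothesis of no difference.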
Table 4. Ablation Experiments on Modules of LPGGNet.

| Module | Local Learning | Partition Learning | Global Learning | Dataset A | Dataset B |
|---|---|---|---|---|---|
| Local only | ● | ○ | ○ | 74.3 | 80.02 |
| Partition only | ○ | ● | ○ | 80.1 | 84.92 |
| Global only | ○ | ○ | ● | 70.6 | 77.00 |
| Local removed | ○ | ● | ● | 82.3 | 86.53 |
| Partition removed | ● | ○ | ● | 73.8 | 77.18 |
| Global removed | ● | ● | ○ | 79.4 | 83.88 |
| Ours | ● | ● | ● | **82.9** | **87.50** |

● indicates that the module is used, while ○ indicates that the module is removed.
Table 5. Ablation Experiments on Components of LPGGNet.

| Method | PDC | Pearson | GMD | Inverse Square | Partition | Residual Links | Dataset A ACC (%) | Dataset B ACC (%) |
|---|---|---|---|---|---|---|---|---|
|  |  |  |  |  |  |  | 82.1 | 86.3 |
|  |  |  |  |  |  |  | 81.3 | 85.2 |
|  |  |  |  |  |  |  | 82.3 | 87.0 |
|  |  |  |  |  |  |  | 81.7 | 85.6 |
| Ours | ● | ○ | ● | ○ | ● | ● | **82.9** | **87.5** |

PDC and Pearson are components of Local Learning; GMD, Inverse Square, and Partition belong to Partition Learning; Residual Links belong to Global Learning. ● indicates that the component is used, while ○ indicates that it is removed.
Table 6. Online Control Performance of Intelligent Wheelchair Using LPGGNet.

| Subject | Left | Right | Foot | Tongue | Mean |
|---|---|---|---|---|---|
| 1 | 70.66 | 68.66 | 88.30 | 70.12 | 74.43 |
| 2 | 66.71 | 61.36 | 70.56 | 89.33 | 71.99 |
| 3 | 73.23 | 80.00 | 81.45 | 63.33 | 74.50 |
| 4 | 70.12 | 66.15 | 86.44 | 65.00 | 71.92 |
| 5 | 68.08 | 71.45 | 80.10 | 83.88 | 75.87 |
| Mean | 69.76 | 69.52 | 81.37 | 74.33 | 73.74 |
Table 7. Comparison of parameters and computational cost on Dataset A.

| Models | Parameters (M) | Training Time (min) | FLOPs (M) | ACC (%) |
|---|---|---|---|---|
| * Shallow ConvNet [34] | 0.048 | 9.836 | 48.3 | 71.35 |
| * TS-SEFFNet [52] | 0.283 | 10.210 | 301.2 | 71.15 |
| * GECNN [56] | 0.193 | 11.053 | 263.12 | 78.83 |
| Proposed | 0.513 | 14.361 | 486.2 | **82.90** |

M: million; min: minutes. * indicates that the model was reproduced.
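The parameter counts in Table 7 can be reproduced generically for any network. A PyTorch sketch follows; the toy layer sizes are illustrative stand-ins, not LPGGNet's actual architecture (22 channels × 250 samples merely echoes a typical MI-EEG input shape):

```python
# Count trainable parameters of a model, reported in millions as in Table 7.
import torch.nn as nn

def count_params_millions(model: nn.Module) -> float:
    """Sum of all trainable parameter tensors, divided by 1e6."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# Illustrative stand-in network, not the actual LPGGNet.
toy = nn.Sequential(nn.Linear(22 * 250, 128), nn.ReLU(), nn.Linear(128, 4))
print(f"{count_params_millions(toy):.3f} M")  # prints "0.705 M"
```

FLOPs, by contrast, depend on the input shape as well as the weights, and are typically measured with a profiling tool rather than a one-line sum.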
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zhang, N.; Jian, H.; Li, X.; Jiang, G.; Tang, X. LPGGNet: Learning from Local–Partition–Global Graph Representations for Motor Imagery EEG Recognition. Brain Sci. 2025, 15, 1257. https://doi.org/10.3390/brainsci15121257