Article

Improved Automatic Deep Model for Automatic Detection of Movement Intention from EEG Signals

1 Biomedical Engineering Department, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz 51666-16471, Iran
2 Miyaneh Faculty of Engineering, University of Tabriz, Miyaneh 51666-16471, Iran
3 College of Engineering, Design and Physical Sciences, Brunel University London, Uxbridge UB8 3PH, UK
* Authors to whom correspondence should be addressed.
Biomimetics 2025, 10(8), 506; https://doi.org/10.3390/biomimetics10080506
Submission received: 12 May 2025 / Revised: 11 June 2025 / Accepted: 30 July 2025 / Published: 4 August 2025

Abstract

Automated detection of movement intention is crucial for brain–computer interface (BCI) applications, as it can assist patients with movement problems in regaining their mobility. This study introduces a novel approach for the automatic identification of movement intention through finger tapping. For this work, a database of EEG signals was compiled from left finger taps, right finger taps, and a resting condition. Following the requisite pre-processing, the captured signals are input into the proposed model, which is constructed based on graph theory and deep convolutional networks. We introduce a novel architecture based on six deep graph convolutional layers, specifically designed to effectively capture and extract essential features from EEG signals. The proposed model demonstrates a remarkable performance, achieving an accuracy of 98% in the binary classification task of distinguishing between left and right finger tapping. Furthermore, in the more complex three-class classification scenario, which adds the resting state to left and right finger tapping, the model attains an accuracy of 92%. These results highlight the effectiveness of the architecture in decoding motor-related brain activity from EEG data. Furthermore, relative to recent studies, the proposed model exhibits significant resilience in noisy situations, making it suitable for online BCI applications.

1. Introduction

The brain–computer interface (BCI) allows direct interaction between the human brain and external devices. Electroencephalography (EEG) data utilized in motor imagery BCIs enable users to accomplish a variety of tasks without requiring physical movement [1]. EEG signals are used in many applications, including the detection of emotions, sleep stages, driver fatigue, epilepsy, and depression [2]. This approach’s impact on the rehabilitation of persons with disabilities has raised it to a prominent interdisciplinary issue in recent years. Motor imagery (MI)-EEG analyzes and interprets signals from imagined tasks to control peripherals, wheelchairs, and prostheses [1].
The foundation for the majority of BCIs is established by evoked activity paradigms, including steady-state visually evoked potentials (SSVEPs) [3,4], event-related potentials (ERPs) [5], and motor-related paradigms like motor imagery [6]. Visual and attentional processes are necessary for the reliable elicitation of a quantifiable response in SSVEP and ERP paradigms. Conversely, neural correlates of movement enable the voluntary generation of movement intents without the need for external stimuli, thereby facilitating the intuitive control of BCIs [7,8]. Power changes across numerous EEG frequency bands are employed to assess movement intent; however, this method disregards the movement-related information present in the broader EEG spectrum and temporal domain due to the non-stationarity of the EEG signal. Neural movement correlates, such as the motor-related cortical potential (MRCP) and event-related desynchronization/synchronization (ERD/ERS), are frequently employed to assess voluntary movement intention, execution, and visualization using EEG [9]. ERD and ERS are frequently employed to assess movement intention and imagery, with ERD manifesting as a decrease in μ [10] and β [11] band power and ERS as a subsequent increase in power. To identify tasks associated with movement, numerous features are extracted from the EEG spectral domain. The most prevalent method of evaluating ERD is through the analysis of power spectral density (PSD) and time-frequency representations [12,13]. The MRCP, a progressive negative cortical potential, is detected at low frequencies and manifests approximately two seconds prior to voluntary movement. Its identification is challenging because the amplitude of the MRCP is small (8–10 μV) in comparison to spontaneous EEG activity (100 μV) [14]. Averaging numerous voluntary movement EEG trials is a widely used method for determining the MRCP. In the following, recent computational techniques for evaluating and tracking movement intention, developed automatically from EEG data, are reviewed.
Haw et al. [15] used a single-channel EEG signal to automatically identify the movement intentions of five healthy individuals. Movement intention was categorized using the Bereitschaftspotential (BP) component, with a two-stage classification based on error thresholds and correlation. The technique achieved an accuracy of 70%. One of the study’s weaknesses was the variation in the proposed method’s performance across individuals, although the use of a single-channel EEG signal proved favorable. Yom et al. [16] employed a sample of five healthy adults to determine movement intention automatically, using nine EEG channels during the experiment. A finger-tapping action was used to capture the signal. The researchers employed the movement-related potential (MRP) component to characterize movement intentions. The signals were pre-processed using a low-pass filter set at 10 Hz, and classification was accomplished using the K-nearest neighbor (KNN) and support vector machine (SVM) algorithms. Bai et al. [17] conducted an experiment with 12 participants to assess the automated identification of movement intention, recording 122 channels of EEG signals. Finger tapping provided the framework for the movement performed in their experiment. The researchers employed the MRP and ERD components to characterize movement intentions, and the signals were pre-processed using a third-order Butterworth low-pass filter. The two-class classification accuracy using artificial neural networks (ANNs) was 75%. One of the method’s disadvantages is that it employs 122 EEG channels, which may be unpleasant for patients and increases power consumption in prosthetic devices. Kato et al. [18] used a single-channel EEG signal to automatically identify movement intention in seven healthy persons. Their experiment involved tapping-based movement. The contingent negative variation (CNV) component was used to classify movement intentions, and the SVM was utilized for classification. Boye et al. [19] employed a single volunteer to automatically determine movement intention. The EEG signal was obtained during finger tapping. The researchers employed the MRP component to characterize movement intentions. In the pre-processing step, a low-pass filter and principal component analysis (PCA) were applied. The classification task was performed using the KNN and SVM algorithms, yielding a 96% classification sensitivity for the two classes. Testing on a single subject was a limitation of the study. Lew et al. [20] utilized eight healthy participants and two stroke survivors to automatically measure movement intention. Sixty-four EEG channels were employed for signal recording, and arm motion served as the foundation for movement in their experiment. In the pre-processing step, an IIR filter with a cutoff frequency of 0.1 Hz was used, and the KNN was used for classification. Their technique distinguished movement intention with 76% overall accuracy: 82% for healthy participants and 64% for stroke patients. In the study by Niazi et al. [21], the movement intentions of 16 healthy volunteers were automatically recognized. The data were collected utilizing 10 EEG channels, with leg movement serving as the movement type. The researchers employed the BP and MRCP components to classify movement intentions, and the Neyman–Pearson Lemma (NPL) was utilized for categorization. Niazi et al.
[22] conducted studies on twenty healthy people and five stroke sufferers to automatically assess movement intention. The study utilized ten EEG channels for recording and focused on limb movements. The researchers employed the MRP component to characterize movement intentions. During the data-processing step, a band-pass filter was applied in the frequency range of 0.05 to 10 Hz. Ahmadian et al. [23] conducted experiments with three healthy participants, acquiring data for the automated identification of movement intention from 128 EEG channels. The signal was captured during a tapping motion with the fingertips. The researchers employed the BP component to characterize movement intentions. Pre-processing used an ideal filter with a frequency range of 0.5 to 70 Hz. Furthermore, the dimensionality of the feature vector was reduced using the independent component analysis (ICA) technique; it took around 51 s for the algorithm to discriminate between the blind sources. The study’s limitations were the large number of EEG channels employed and the small number of samples collected. To automatically recognize movement intention, Jochumsen et al. [24] conducted an experiment with 12 healthy volunteers, recording the signal from 10 EEG channels. The movement type was determined by leg movement during the trial. An ideal filter of 0.5 to 10 Hz was employed for pre-processing. The researchers reduced the feature-vector dimension using the common spatial patterns (CSP) technique, and the SVM was employed for classification. Their approach was reported to have an 80% overall effectiveness rate in differentiating movement intention. Xu et al. [25] included nine healthy participants and used nine EEG channels to capture the signal. They used the MRCP component in the experiment, with the movement determined by foot movement. To pre-process the data, they used a band-pass filter with a frequency range of 0.5 to 3 Hz. They reported 75% classification accuracy for two classes using the KNN. Jiang et al. [26] studied nine healthy people, using nine EEG channels to record the signal for automatically recognizing movement intention. They used the MRCP component to classify movement intention, with the movement type based on leg movement during the trial, and increased the SNR by using an LSF. Their two-class classification was estimated to be 76% accurate. Wairagkar et al. [27] recruited nine healthy participants, six men and eight women, aged 22 to 30, and used the autocorrelation function in their analysis. The researchers used the ERD component to categorize movement intentions, and the KNN was used for classification. They reported that their two-class classification was 78% sensitive. Shahini et al. [28] presented a new method for automatically detecting movement intention using EEG signals. They used basic convolutional neural networks (CNNs) for feature selection/extraction and classification, achieving significant accuracy for two and three different classes of finger strokes. Their network architecture included 10 convolutional layers and two fully connected layers. Jochumsen et al. [29] used EEG and electromyography (EMG) signals to automatically detect movement intention in Parkinson’s disease patients.
These researchers used engineering methods for feature selection/extraction and collected and classified a database under three distinct scenarios. The three-class mode had the highest reported classification accuracy, at around 89%. Lutes et al. [30] used EEG signals to automatically detect movement intentions. In this study, they combined convolutional networks and achieved an accuracy of 98.50% for negative skew. When compared to other networks such as EEGNet and GraphNet, their model outperformed them all. Choi et al. [31] used EEG signals to classify movement intentions. They selected and extracted features from an offline dataset using a proposed pipeline and used the SVM classifier for classification. The final classification accuracy was reported to be 86%. Dong et al. [32] proposed automatically detecting movement intention using EEG signals. These researchers used transfer learning to identify movement intention based on the affected arm’s bidirectional movement. They collected a database of 12 healthy individuals and recognized movement intention using machine learning techniques with virtual reality (VR) induction. The average accuracy reported in this study was 85%.
Analysis of prior studies indicates that the majority necessitate a large number of EEG channels for optimal efficacy. This requirement increases the computational burden of the algorithm and can render its application in artificial prostheses unfeasible. Furthermore, the processes of feature selection, extraction, and classification in the majority of studies are conducted manually, through engineering techniques. This necessitates prior understanding of the problem and is inappropriate for BCI applications. Moreover, research utilizing deep learning is not without its limitations. A fundamental limitation of these studies is the absence of a substantial database for network training, as deep learning networks require extensive data. Furthermore, owing to the low SNR of EEG signals, the robustness of the deep models proposed for movement intention detection has not been assessed in noisy environments. This research aims to address the challenges associated with recent studies and offer a dependable method for the automatic classification of movement intention. In this study, a comprehensive database of EEG signals was collected under three distinct conditions in two scenarios: resting state, right finger movement, and left finger movement. Upon completion of the pre-processing phase, the recorded data proceed to the automated feature selection and extraction phase, utilizing a combination of graph theory and deep convolutional networks to classify the categories within the specified scenarios. The main contributions of this study are as follows:
A. Preparation of a database of EEG signals during movement intention testing in two separate scenarios;
B. Presentation of an intelligent system for the automatic classification of movement intention based on the combination of graph theory and deep convolutional networks;
C. Presentation of a new model with high speed and accuracy for classifying left finger stroke, right finger stroke, and the resting state;
D. Achievement of the highest classification accuracy for the two-class mode compared to recent research;
E. Applicability of the algorithm in noisy environments, enabling the use of the proposed model in online applications.
The article’s remaining sections are arranged as follows: the model used in this study is mathematically analyzed in Section 2. The proposed method is thoroughly described in Section 3, which includes the suggested architecture, data-recording techniques, and other relevant information. The simulation results are presented in Section 4, along with a comparison with current research findings. The conclusion is given in Section 5.

2. Materials and Procedures

In this section, the mathematical basis of the algorithms used in this research, which include generative adversarial networks (GANs) and graph neural networks (GNNs), is fully examined.

2.1. Generative Adversarial Networks (GANs)

In 2014, Ian J. Goodfellow and his associates proposed the GAN. In machine learning, GANs perform unsupervised learning tasks. These networks comprise two models, called the generator and the discriminator, that learn the patterns in the input data on their own. The discriminator and the generator compete to identify, record, and reproduce the variations within the dataset. GANs can thereby generate new samples that are statistically representative of the original dataset [33].
A neural network functions as the generator, creating artificial data for the discriminator’s training. Through training, the generator gains the capacity to produce plausible data. For the discriminator, the generated instances are considered negative training examples. The generator creates a sample after receiving a fixed-length random noise vector as input. The generator’s main goal is to fool the discriminator into thinking that its output is “genuine”. The part of the GAN responsible for training the generator includes (a) a random noise input vector; (b) a generator network that converts the random input into a data sample; (c) a discriminator network, which categorizes the produced data; and (d) the generator loss, which penalizes the generator when it fails to fool the discriminator.
The backpropagation algorithm adjusts each weight appropriately by assessing the impact of the weight on the output. It is employed to obtain the gradients used to update the generator weights.
The discriminator is a neural network that differentiates authentic data from synthetic data produced by the generator. Its training data is sourced from two distinct origins: (a) authentic data samples, including medical images, medical signals, human subjects, and currency notes, used as positive examples during training; and (b) counterfeit samples produced by the generator, used as negative examples.
Throughout the training process, the discriminator is linked to two loss functions. In training the discriminator network, the generator loss is disregarded and only the discriminator loss is used. The discriminator classifies both the authentic data and the fabricated data provided by the generator, and the discriminator loss penalizes the misclassification of a genuine data sample as counterfeit or vice versa. The discriminator adjusts its weights by backpropagating this loss through its network.
In GANs, the following minimax objective is optimized during the training stage:
$$\min_{G} \max_{D} V(G, D) = \mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_{z}(z)}\left[\log\left(1 - D(G(z))\right)\right]$$
To properly distinguish between real and fake data, the discriminator D in the above equation must be trained appropriately. The equation has no closed-form solution and therefore requires iterative algorithms. To avoid overfitting, the generator G is optimized once for every k optimization steps of the discriminator D [33].
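To make the alternating optimization concrete, the following is a minimal PyTorch sketch of the training loop described above. The layer sizes, learning rates, and the value of k are illustrative assumptions, not the settings used in this study.

```python
# A minimal PyTorch sketch of the alternating GAN update described above.
# All sizes, learning rates, and k are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, signal_dim, k = 100, 512, 3   # k: discriminator steps per generator step

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, signal_dim))
D = nn.Sequential(nn.Linear(signal_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)
    # k discriminator updates per single generator update
    for _ in range(k):
        fake = G(torch.rand(b, latent_dim)).detach()   # uniform noise input
        loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator update: rewarded when D labels its samples as genuine
    loss_g = bce(D(G(torch.rand(b, latent_dim))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

losses = train_step(torch.randn(8, signal_dim))        # one update on dummy data
```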

2.2. Graph Convolutional Network

A GNN is designed to operate directly on graphs, which are data structures composed of nodes (also known as vertices) and the edges that connect them [34]. GNNs have radically changed how we use and assess graph-structured data.
GNNs are typically used to learn an embedding of the graph structure, in which the GNN records the features of the nodes (i.e., what they contain) as well as the topology of the graph. These representations may then be used to perform various tasks such as classifying whole graphs, predicting the existence of an edge between two nodes, and determining a node’s label. The following section discusses some of the themes linked to GNNs [34].
Vertices and edges: a graph comprises a collection of points (vertices) interconnected by lines (edges). Vertices denote entities, objects, or concepts, whereas edges signify relationships or connections among them. Directed versus undirected: in a directed graph, the edges possess a direction that signifies the flow of the relationship. Weighted graph: in these graphs, edges possess associated weights. Graph representation encodes the structure and attributes of a graph for neural network processing; since graphs incorporate node data along with the interconnections between data points, a representation is required to capture the connections among the nodes. Several prevalent graph representations used in deep learning are as follows. Adjacency matrix: an N × N matrix indicating, for each pair of nodes, whether (or how strongly) they are connected by an edge. Incidence matrix: an N × M matrix, where N represents the number of nodes and M the number of edges; an entry is 1 if the node is incident to the corresponding edge and 0 otherwise. Degree matrix: a diagonal matrix that counts the edges attached to each node [35].
An adjacency matrix encodes the connections between the vertices of the graph. Furthermore, the degree matrix can be derived from the adjacency matrix: it is a diagonal matrix whose diagonal elements equal the sum of the edge weights connected to the respective vertex. Denoting the degree matrix as $D \in \mathbb{R}^{N \times N}$ and the adjacency (weight) matrix as $W \in \mathbb{R}^{N \times N}$, the i-th diagonal element of the degree matrix is defined as follows:
$$D_{ii} = \sum_{j} W_{ij}$$
The Laplacian matrix and its eigendecomposition are defined as follows:
$$L = D - W \in \mathbb{R}^{N \times N}$$
$$L = U \Lambda U^{T}$$
As per the preceding relation, the Laplacian matrix is formed by subtracting the adjacency matrix from the degree matrix. Graph basis functions are calculated using this matrix: Singular Value Decomposition (SVD) of the Laplacian matrix may be used to produce them. Additionally, the Laplacian matrix may be expressed through the matrix of singular values and the matrix of eigenvectors, as in the eigendecomposition above, whose eigenvector-matrix columns are the eigenvectors of the Laplacian matrix. Based on these eigenvectors, it is also feasible to define a Fourier transform. The diagonal eigenvalues, $\Lambda = \mathrm{diag}([\lambda_0, \ldots, \lambda_{N-1}])$, determine the Fourier bases in the relation shown below:
$$U = [u_0, \ldots, u_{N-1}] \in \mathbb{R}^{N \times N}$$
To enhance comprehension, the following two relations define the Fourier transform and the inverse Fourier transform of a graph signal, such that
$$\hat{q} = U^{T} q$$
$$q = U \hat{q} = U U^{T} q$$
In the first relation, $\hat{q}$ stands for the graph Fourier transform of the signal $q$; the second relation shows that a signal such as $q$ can be recovered from its Fourier coefficients using the Fourier bases. The graph convolution operator can also be computed by performing the convolution of two signals in the graph Fourier domain. For ease of comprehension, the convolution of two signals, $z$ and $y$, under the operator $*_g$ is as follows:
$$z *_{g} y = U\left((U^{T} z) \odot (U^{T} y)\right)$$
In the relation above, $\odot$ denotes the element-wise product, and the filter function $g(\cdot)$ describes the graph convolution operator combined with neural networks. Based on this relationship, $y$ is the version of $z$ filtered by $g(L)$:
$$y = g(L) z$$
The following definition of graph convolution may be obtained by substituting the decomposition of the Laplacian matrix into its singular values and eigenvectors [34,35]:
$$y = g(L) z = U g(\Lambda) U^{T} z = U\left(g(\Lambda) \odot (U^{T} z)\right) = U\left(\left(U^{T} (U g(\Lambda))\right) \odot \left(U^{T} z\right)\right) = z *_{g} \left(U g(\Lambda)\right)$$
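The following is a minimal NumPy sketch of the spectral filtering derived above: a signal $z$ on a toy four-node graph is filtered by $g(L)$ via the Laplacian eigenbasis. The low-pass filter $g(\lambda) = e^{-\lambda/2}$ is an illustrative choice, not a filter from this study.

```python
# A minimal NumPy sketch of spectral graph filtering: y = U g(Lambda) U^T z.
import numpy as np

W = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])        # toy symmetric adjacency (weight) matrix
D = np.diag(W.sum(axis=1))              # degree matrix, D_ii = sum_j W_ij
L = D - W                               # graph Laplacian

lam, U = np.linalg.eigh(L)              # L = U diag(lam) U^T for symmetric L

z = np.array([1.0, 0.0, 2.0, -1.0])     # a graph signal, one value per node
z_hat = U.T @ z                         # graph Fourier transform, q_hat = U^T q
y = U @ (np.exp(-lam / 2) * z_hat)      # y = g(L) z with g(lambda) = exp(-lambda/2)
```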

3. Proposed Model

This section delineates the suggested methodology of this work for detecting movement intention from EEG data. This section addresses the procedures for database recording, data pre-processing, network architecture design, optimization of architectural parameters, and the allocation of training and testing data. Figure 1 visually illustrates the proposed flowchart of the investigation.

3.1. Data Acquisition

This study involved the collection of an extensive database of EEG signals for the automatic classification of movement intention. Sixteen undergraduate and graduate students (eight women and eight men), aged between 20 and 33 years with an average BMI of 22, were recruited to participate in the movement intention signal recording experiment. The experiment was described to all participants, and informed consent was obtained from them. Furthermore, ethics permit number IR.TBZ.1399.5.4 was granted by the ethics committee of Tabriz University for the recording of EEG data. An OpenBCI amplifier from an American company was utilized in this test to record the signal according to the 10–20 standard. The sampling frequency of this 21-channel amplifier was 1024 Hz, with the A1 and A2 channels utilized as references. This experiment aimed to classify three states: resting, right-hand finger tapping, and left-hand finger tapping. Consequently, two distinct scenarios were evaluated for classification: the initial scenario comprises the right-hand finger tap and left-hand finger tap classes, while the second scenario encompasses right-hand finger taps, left-hand finger taps, and the resting state.
The signal recording comprised 40 repetitions, with each condition lasting 5 s, so that each condition provided 5 × 1024 = 5120 sampling points per repetition. Among the participants, 12 were right-handed and 4 were left-handed. Following previous studies [27,28], only 6 electrode pairs were considered for signal recording (F3-C3, Fz-Cz, F4-C4, C3-P3, Cz-Pz, and C4-P4); the remaining electrodes were not used for recording or processing. This choice substantially reduces the computational complexity of the classification operation. Thus, the data for each class (left finger tap, resting state, and right finger tap) amounted to 40 (repetitions) × 5120 (sampling points) = 204,800 points per participant. The devices used for testing are shown in Figure 2.

3.2. Pre-Processing of EEG Data

All processing in this study was performed offline. This subsection describes the pre-processing pertinent to this study. Before the gathered data can proceed to the classification stage, it must undergo pre-processing, which here comprises the application of a notch filter, a second-order Butterworth filter, data augmentation, and data normalization. Each of these steps is elucidated individually below (a minimal code sketch of the filtering and normalization steps follows the list):
I. To eliminate the 50 Hz power line interference, a notch filter was applied to the EEG data collected from the F3-C3, Fz-Cz, F4-C4, C3-P3, Cz-Pz, and C4-P4 channel pairs.
II. The recorded data were then processed through a second-order Butterworth filter targeting the frequency range of 0.05 to 60 Hz for the respective channels of the recorded signals.
III. The recorded data were augmented through a GAN to mitigate overfitting. As previously stated, data augmentation in the GAN is performed by the generator and the discriminator. The generator network transforms a uniformly distributed 100-dimensional input vector into a 1 × 204,800-dimensional signal. The generator network consists of six convolutional layers, with dimensions of 512, 1024, 2048, 4096, 8192, and 204,800, and batch normalization and ReLU activation in each layer. The number of iterations and the learning rate were set to 150 and 0.01, respectively. The discriminator network receives a one-dimensional vector as input and assesses its authenticity; it consists of six dense layers. Employing the adversarial generative network, the data were augmented from 204,800 to 250,000 points.
IV. In the final step, data normalization to the range of 0 to 1 is performed to facilitate the training process.
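Under the stated assumptions, a minimal SciPy sketch of steps I, II, and IV is as follows; the notch quality factor Q is an assumed value, and the GAN augmentation of step III follows the training loop sketched in Section 2.1.

```python
# A minimal SciPy sketch of pre-processing steps I, II, and IV, assuming the
# 1024 Hz sampling rate of Section 3.1 (Q is an illustrative assumption).
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

fs = 1024.0                                    # sampling frequency (Hz)

def preprocess(eeg):
    # I. notch filter against 50 Hz power line interference
    b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)
    eeg = filtfilt(b, a, eeg)
    # II. second-order Butterworth band-pass filter, 0.05-60 Hz
    b, a = butter(N=2, Wn=[0.05, 60.0], btype="bandpass", fs=fs)
    eeg = filtfilt(b, a, eeg)
    # IV. min-max normalization to the range [0, 1]
    return (eeg - eeg.min()) / (eeg.max() - eeg.min() + 1e-12)

trial = preprocess(np.random.randn(5 * 1024))  # one 5 s single-channel trial
```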

3.3. Graph Design

Following the determination of the functional relationship between the EEG channels, a proximity matrix is created. This may be achieved by assessing the correlation between channels and expressing the findings as an EEG channel connectivity matrix. To obtain the network adjacency matrix, the connectivity matrix is thresholded to yield a sparse approximation. The proposed model uses the constructed graph as input to select, extract, and classify information. Figure 3 shows an overview of the proposed architecture.
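The following is a minimal sketch of this graph design step under the stated assumptions: an absolute-correlation connectivity matrix is computed over the six channel pairs and thresholded to a sparse adjacency matrix. The threshold value is illustrative, not the one used in this study.

```python
# A minimal sketch of the graph design: correlation connectivity matrix,
# thresholded to a sparse weighted adjacency matrix (threshold is assumed).
import numpy as np

X = np.random.randn(6, 5120)          # 6 EEG channel pairs x time samples
C = np.abs(np.corrcoef(X))            # channel-by-channel connectivity matrix
np.fill_diagonal(C, 0.0)              # remove self-loops

tau = 0.3                             # sparsity threshold (assumed value)
A = np.where(C >= tau, C, 0.0)        # sparse weighted adjacency matrix
```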

3.4. Customized Architecture

This subsection delineates the deep architecture developed for classifying movement intention into two and three distinct classes. Figure 4 illustrates the designed architecture. As the figure indicates, after a dropout layer, the data enters the first graph convolutional layer, which is accompanied by a max pooling layer and batch normalization with the Leaky-ReLU activation function. For automatic feature selection/extraction, this block is repeated five additional times. The data is subsequently passed through another dropout layer and then flattened. Finally, a Softmax activation function classifies the right finger stroke, resting state, and left finger stroke classes into two or three distinct categories.
In the proposed deep architecture, the graph nodes correspond to the EEG channels considered, and each vertex receives 1000 samples. The Chebyshev polynomial coefficients X1–X6 for each layer are presented in Table 1.
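The following is a minimal NumPy sketch of one Chebyshev graph convolution of order K, the operation underlying each of the six layers (the X values in Table 1). The filter weights theta would be learned in practice; here they are random placeholders.

```python
# A minimal sketch of a Chebyshev graph convolution of order K, using the
# standard recurrence T_k = 2 L~ T_{k-1} - T_{k-2} on the rescaled Laplacian.
import numpy as np

def cheb_conv(A, x, theta):
    """A: adjacency matrix (N x N); x: node signal (N,); theta: K+1 weights."""
    L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
    L_tilde = 2.0 * L / np.linalg.eigvalsh(L).max() - np.eye(len(A))
    T_prev, T_curr = x, L_tilde @ x                # T_0(L~)x and T_1(L~)x
    out = theta[0] * T_prev + theta[1] * T_curr
    for k in range(2, len(theta)):                 # Chebyshev recurrence
        T_prev, T_curr = T_curr, 2.0 * L_tilde @ T_curr - T_prev
        out += theta[k] * T_curr
    return out

A = np.ones((6, 6)) - np.eye(6)                    # toy fully connected graph
y = cheb_conv(A, np.random.randn(6), np.random.randn(6))   # order K = 5
```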

3.5. Training, Validation, and Test Sets

This subsection delineates the methodology for partitioning the data into training and evaluation sets. Specifically, 70% of the data is allocated for network training, 20% for the validation set, and 10% for the test set. The design methodology employs a trial-and-error approach concerning variables and algorithms. Consequently, Table 2 presents the parameters, variables, and various optimization algorithms pertinent to the development of the proposed deep architecture.
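The following is a minimal scikit-learn sketch of the 70/20/10 partition described above; the trial and label arrays are illustrative placeholders, not the recorded database.

```python
# A minimal sketch of the 70/20/10 train/validation/test split.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.randn(200, 6, 1000)              # trials x channels x samples
y = np.random.randint(0, 3, size=200)          # three-class labels

X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.30,
                                              stratify=y, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=1/3,
                                            stratify=y_rest, random_state=0)
# 70% training, 20% validation, 10% test (one third of the remaining 30%)
```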

4. Experimental Results

This section delineates the results pertaining to the proposed model. It comprises multiple subsections: the first presents the results of the proposed network optimization, the second details the simulation results, and the third offers a comparison with recent research. The simulations of this study were conducted on the Python version 3.10 programming platform within the Google Colab Pro environment, utilizing 32 GB of RAM and a GPU.

4.1. Optimization Results

This subsection presents the optimization results for the proposed deep architecture. Figure 5 illustrates the outcomes pertaining to the selection of the number of layers in the proposed deep network. This figure indicates that six graph convolutional layers proved optimal for accurately classifying movement intention: increasing the number of layers further has minimal impact on classification accuracy while significantly increasing the algorithm’s computational complexity. Figure 6 illustrates the outcomes of the Chebyshev polynomial selection for the classification of movement intentions, showing that selecting X = 5 accelerates the network’s convergence to the target value.

4.2. Results of the Simulation

This subsection will present the simulation results of the proposed model.
Figure 7 illustrates the accuracy and error of movement intention classification across 150 iterations of the network for two classes (left finger tap and right finger tap) and three classes (left finger tap, resting state, and right finger tap). Figure 7a indicates that the network’s accuracy reaches 98% after 150 iterations for the two-class problem and stabilizes there. Furthermore, as indicated by the same figure, the accuracy for the three-class mode stabilizes at approximately 92% after 120 iterations. Figure 7b illustrates the classification error for both the two-class and three-class modes: as the number of iterations increases, the network error attains its minimum value. Table 3 analyzes the outcomes pertaining to the assessment criteria for the binary and ternary classifications. This study employs evaluation criteria comprising accuracy, precision, sensitivity, specificity, and the kappa coefficient. In the two-class mode, all evaluation metrics exceed 97%, demonstrating the efficacy of the proposed deep network. Figure 8 illustrates the receiver operating characteristic (ROC) curve analysis for the classification of the various classes; the area under the curve for both scenarios falls within the range of 0.9 to 1, indicating optimal performance in the automatic classification of movement intention across two and three distinct classes. Figure 9 illustrates instances of the left finger tap, right finger tap, and resting state in both the initial and final layers of the proposed deep network. It is evident that, in the two-class mode, nearly all samples are distinctly separated in the final layer of the network.
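The following is a minimal scikit-learn sketch of the evaluation criteria reported in Table 3; specificity, which scikit-learn does not provide directly, is derived per class from the confusion matrix. The label vectors are illustrative placeholders.

```python
# A minimal sketch of the evaluation criteria: accuracy, precision,
# sensitivity, specificity, and the kappa coefficient.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             cohen_kappa_score, confusion_matrix)

y_true = np.array([0, 0, 1, 1, 2, 2, 1, 0])    # illustrative labels
y_pred = np.array([0, 0, 1, 2, 2, 2, 1, 0])

acc   = accuracy_score(y_true, y_pred)
prec  = precision_score(y_true, y_pred, average="macro")
sens  = recall_score(y_true, y_pred, average="macro")      # sensitivity
kappa = cohen_kappa_score(y_true, y_pred)

cm = confusion_matrix(y_true, y_pred)
tn = cm.sum() - cm.sum(axis=0) - cm.sum(axis=1) + np.diag(cm)
fp = cm.sum(axis=0) - np.diag(cm)
spec = np.mean(tn / (tn + fp))                             # macro specificity
```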

4.3. Comparison with Current Methodologies and Research

This subsection contrasts the proposed method with current methodologies and studies. Table 4 delineates the methodologies and accuracy of contemporary studies focused on classifying movement intention. The proposed model demonstrates enhanced accuracy compared to recent studies, attaining a classification accuracy of 92% for three classes, whereas studies [31,32] report accuracies of approximately 85% and 86%, respectively. It is essential to recognize that the databases employed in recent studies vary, making direct comparison unfair: the parameters for recording brain signals, the number of participants in each study, the sampling frequencies, and additional factors differ among studies. To guarantee fair comparison conditions, we therefore applied recent methodologies to our recorded database and compared the results with our proposed model, using the pre-trained networks Inception [36], VGG [37], and U-net [38], as well as a basic CNN [28]. Figure 10 displays the results obtained for the classification of two categories; the proposed model has clearly converged to the target value more swiftly. Additionally, we implemented a supplementary comparative strategy employing manual feature selection/extraction and feature learning techniques. For the manual approach, the mean, variance, peak coefficient, power, kurtosis, and skewness were derived from the recorded EEG signals, and classification was executed utilizing the KNN [39], SVM [40], multi-layer perceptron (MLP) [41], basic CNN [28], and the proposed method. The feature learning approach entailed classifying the recorded signals with the designated classifiers, without separate feature selection or extraction. Table 5 delineates the comparative results of the manual technique and the feature learning approach. The proposed model employing the feature learning approach exhibited an enhanced performance; conversely, the manual methods demonstrate low accuracy in comparison to the proposed model. The proposed model, which amalgamates graph theory and deep convolutional networks, autonomously learns salient features from the recorded signals end-to-end, differentiating between two and three classes. Manual methods, while simple, require prior understanding of the problem and may be impractical for online applications.
EEG signals exhibit a very low SNR. This problem can obstruct classification in online applications: even minimal motion and ambient noise can hinder the accurate discernment of movement intention, so the classification algorithm employed must exhibit significant resilience to environmental noise. We therefore synthetically added Gaussian white noise of random variance to the recorded signals at various dB levels to assess the efficacy of our proposed deep model under noisy conditions. The results obtained are illustrated in Figure 11. The figure clearly shows that the proposed network exhibits a significantly shallower decline in classification accuracy with increasing noise compared to the other networks analyzed, which illustrates the benefit of integrating graph theory with deep convolutional networks.
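The following is a minimal sketch of this robustness test: Gaussian white noise is scaled to a target SNR in dB and added to a clean trial (the trial here is a synthetic placeholder).

```python
# A minimal sketch of adding Gaussian white noise at a target SNR in dB
# (0 dB means the noise power equals the signal power).
import numpy as np

def add_noise(signal, snr_db):
    p_signal = np.mean(signal ** 2)                    # average signal power
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))     # noise power for target SNR
    return signal + np.sqrt(p_noise) * np.random.randn(signal.size)

noisy = add_noise(np.random.randn(5120), snr_db=5.0)   # one noisy trial
```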
As we know, machine learning [46,47], a subset of artificial intelligence [42,43,44,45], has been very useful in various applications, including vibration [48], deep learning [49,50,51,52,53,54], fuzzy networks [55,56], financial markets [57,58,59,60,61], leadership [62], education [63,64,65], mathematics [66,67], architecture [68,69,70], history [71], civil engineering [72,73], optimization [74], economics [75,76,77], chemistry [78], law [79], aerospace [80], image processing [81], transportation [82,83,84,85], intelligent systems [86,87,88,89], supply chain [90], computers [91], business [92], etc.
Notwithstanding its exceptional performance, the proposed model exhibits certain deficiencies. Deploying the proposed deep model requires an expansion of the database dimensions. Furthermore, it is necessary to employ classical augmentation techniques, such as overlapping windows, and to evaluate their efficacy against GANs. Finally, to assess the proposed model in real-time settings, it is essential to use dry electrodes for signal acquisition, eliminating issues related to gel desiccation during recording.

5. Conclusions

This study introduced a novel method for the automatic detection of movement intention across two categories (left finger tap and right finger tap) as well as three categories (left finger tap, resting state, and right finger tap). To achieve this objective, EEG signals from 16 participants were collected during the movement intention test. Following essential pre-processing, feature selection/extraction and automatic classification were conducted utilizing a combination of graph theory and deep convolutional networks. The proposed network included six graph convolutional layers and performs end-to-end classification. The classification outcomes of movement intention in this study are highly promising, even amidst environmental noise, and rival the results of recent research. The suggested method is applicable in numerous domains within the field of BCI.

Author Contributions

Conceptualization, S.M.; methodology, R.A., L.Z.L. and S.M.; software, L.Z.L. and S.M.; validation, L.Z.L. and S.M.; writing—original draft preparation, R.A., S.M. and S.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Tabriz University Ethics Committee (protocol code IR.TBZ.1399.5.4 and date of approval 20 March 2021).

Informed Consent Statement

Written informed consent has been obtained from the participant(s) to publish this paper.

Data Availability Statement

The original contributions presented in this study are included in the article material. Further inquiries can be directed to the corresponding authors. The image in Figure 2 was captured directly by the authors during the EEG data recording session of our study. It is an original image and not obtained from any external source or public database. The image is not available for public use and is restricted to this research purpose only.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nawaser, K.; Jafarkhani, F.; Khamoushi, S.; Yazdi, A.; Mohsenifard, H.; Gharleghi, B. The Dark Side of Digitalization: A Visual Journey of Research through Digital Game Addiction and Mental Health. IEEE Eng. Manag. Rev. 2024, 1, 1–27. [Google Scholar] [CrossRef]
  2. Li, C.; Zhang, Z.; Zhang, X.; Huang, G.; Liu, Y.; Chen, X. EEG-Based Emotion Recognition via Transformer Neural Architecture Search. IEEE Trans. Ind. Inform. 2022, 19, 6016–6025. [Google Scholar] [CrossRef]
  3. Khalil, M.A.; Can, J.; George, K. Deep Learning Applications in Brain Computer Interface Based Lie Detection. In Proceedings of the 2023 IEEE 13th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 8–11 March 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 189–192. [Google Scholar]
  4. Kang, Q.; Li, F.; Gao, J. Exploring the Functional Brain Network of Deception in Source-Level EEG via Partial Mutual Information. Electronics 2023, 12, 1633. [Google Scholar] [CrossRef]
  5. Li, F.; Zhu, H.; Xu, J.; Gao, Q.; Guo, H.; Wu, S.; Li, X.; He, S. Lie detection using fNIRS monitoring of inhibition-related brain regions discriminates infrequent but not frequent liars. Front. Hum. Neurosci. 2018, 12, 71. [Google Scholar] [CrossRef]
  6. Sheykhivand, S.; Yousefi Rezaii, T.; Mousavi, Z.; Meshini, S. Automatic Stage Scoring of Single-Channel Sleep EEG Using CEEMD of Genetic Algorithm and Neural Network. Comput. Intell. Electr. Eng. 2018, 9, 15–28. [Google Scholar]
  7. Delmas, H.; Denault, V.; Burgoon, J.K.; Dunbar, N.E. A review of automatic lie detection from facial features. J. Nonverbal Behav. 2024, 48, 93–136. [Google Scholar] [CrossRef]
  8. Kanna, R.K.; Kripa, N.; Vasuki, R. Systematic Design Of Lie Detector System Utilizing EEG Signals Acquisition. Int. J. Sci. Technol. Res. 2021, 9, 610–612. [Google Scholar]
  9. Abootalebi, V.; Moradi, M.H.; Khalilzadeh, M.A. A new approach for EEG feature extraction in P300-based lie detection. Comput. Methods Programs Biomed. 2009, 94, 48–57. [Google Scholar] [CrossRef]
  10. Amir, S.; Ahmed, N.; Chowdhry, B.S. Lie detection in interrogations using digital signal processing of brain waves. In Proceedings of the 2013 3rd International Conference on Instrumentation, Communications, Information Technology and Biomedical Engineering (ICICI-BME), Bandung, Indonesia, 7–8 November 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 209–214. [Google Scholar]
  11. Mohammed, I.J.; George, L.E. A Survey for Lie Detection Methodology Using EEG Signal Processing. J. Al-Qadisiyah Comput. Sci. Math. 2022, 14, 42–54. [Google Scholar] [CrossRef]
  12. Gao, J.; Tian, H.; Yang, Y.; Yu, X.; Li, C.; Rao, N. A novel algorithm to enhance P300 in single trials: Application to lie detection using F-score and SVM. PLoS ONE 2014, 9, e109700. [Google Scholar] [CrossRef]
  13. Simbolon, A.I.; Turnip, A.; Hutahaean, J.; Siagian, Y.; Irawati, N. An experiment of lie detection based EEG-P300 classified by SVM algorithm. In Proceedings of the 2015 International Conference on Automation, Cognitive Science, Optics, Micro Electro-Mechanical System, and Information Technology (ICACOMIT), Bandung, Indonesia, 29–30 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 68–71. [Google Scholar]
  14. Saini, N.; Bhardwaj, S.; Agarwal, R. Classification of EEG signals using hybrid combination of features for lie detection. Neural Comput. Appl. 2020, 32, 3777–3787. [Google Scholar] [CrossRef]
  15. Dodia, S.; Edla, D.R.; Bablani, A.; Cheruku, R. Lie detection using extreme learning machine: A concealed information test based on short-time Fourier transform and binary bat optimization using a novel fitness function. Comput. Intell. 2020, 36, 637–658. [Google Scholar] [CrossRef]
  16. Yohan, K. Using EEG and Machine Learning to Perform Lie Detection. Bachelor’s Thesis, University of Moratuwa, Moratuwa, Sri Lanka, 2019. [Google Scholar]
  17. Baghel, N.; Singh, D.; Dutta, M.K.; Burget, R.; Myska, V. Truth identification from EEG signal by using convolution neural network: Lie detection. In Proceedings of the 2020 43rd International Conference on Telecommunications and Signal Processing (TSP), Milan, Italy, 7–9 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 550–553. [Google Scholar]
  18. Zhang, S.; Tong, H.; Xu, J.; Maciejewski, R. Graph convolutional networks: A comprehensive review. Comput. Soc. Netw. 2019, 6, 11. [Google Scholar] [CrossRef]
  19. Iqbal, T.; Ali, H. Generative adversarial network for medical images (MI-GAN). J. Med. Syst. 2018, 42, 231. [Google Scholar] [CrossRef] [PubMed]
  20. Chen, M.; Wei, Z.; Huang, Z.; Ding, B.; Li, Y. Simple and deep graph convolutional networks. In Proceedings of the International Conference on Machine Learning, 13–18 July 2020; PMLR: New York, NY, USA, 2020; pp. 1725–1735. [Google Scholar]
  21. Murugeswari, P.; Vijayalakshmi, S. A New Method of Interval Type-2 Fuzzy-Based CNN for Image Classification. In Computational Vision and Bio-Inspired Computing: ICCVBIC 2020; Springer: Singapore, 2021; pp. 733–746. [Google Scholar]
  22. Somers, L.P.; Bosten, J.M. Predicted effectiveness of EnChroma multi-notch filters for enhancing color perception in anomalous trichromats. Vis. Res. 2024, 218, 108381. [Google Scholar] [CrossRef] [PubMed]
  23. Suescún-Díaz, D.; Ule-Duque, G.; Cardoso-Páez, L.E. Butterworth filter to reduce reactivity fluctuations. Karbala Int. J. Mod. Sci. 2024, 10, 8. [Google Scholar] [CrossRef]
  24. Henderi, H.; Wahyuningsih, T.; Rahwanto, E. Comparison of Min-Max normalization and Z-Score Normalization in the K-nearest neighbor (kNN) Algorithm to Test the Accuracy of Types of Breast Cancer. Int. J. Inform. Inf. Syst. 2021, 4, 13–20. [Google Scholar] [CrossRef]
  25. Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Van Esesn, B.C.; Awwal, A.A.S.; Asari, V.K. The history began from alexnet: A comprehensive survey on deep learning approaches. arXiv 2018, arXiv:1803.01164. [Google Scholar] [CrossRef]
  26. Nouleho, S.; Barth, D.; Quessette, F.; Weisser, M.-A.; Watel, D.; David, O. A new graph modelisation for molecule similarity. arXiv 2018, arXiv:1807.04528. [Google Scholar] [CrossRef]
  27. Targ, S.; Almeida, D.; Lyman, K. Resnet in resnet: Generalizing residual architectures. arXiv 2016, arXiv:1603.08029. [Google Scholar] [CrossRef]
  28. Shahini, N.; Bahrami, Z.; Sheykhivand, S.; Marandi, S.; Danishvar, M.; Danishvar, S.; Roosta, Y. Automatically identified EEG signals of movement intention based on CNN network (End-To-End). Electronics 2022, 11, 3297. [Google Scholar] [CrossRef]
  29. Jochumsen, M.; Poulsen, K.B.; Sørensen, S.L.; Sulkjær, C.S.; Corydon, F.K.; Strauss, L.S.; Roos, J.B. Single-trial movement intention detection estimation in patients with Parkinson’s disease: A movement-related cortical potential study. J. Neural Eng. 2024, 21, 046036. [Google Scholar] [CrossRef]
  30. Lutes, N.; Nadendla, V.S.S.; Krishnamurthy, K. Convolutional spiking neural networks for intent detection based on anticipatory brain potentials using electroencephalogram. Sci. Rep. 2024, 14, 8850. [Google Scholar] [CrossRef]
  31. Choi, H.J.; Das, S.; Peng, S.; Bajcsy, R.; Figueroa, N. On the Feasibility of EEG-based Motor Intention Detection for Real-Time Robot Assistive Control. arXiv 2024, arXiv:2403.08149. [Google Scholar]
  32. Dong, R.; Zhang, X.; Li, H.; Masengo, G.; Zhu, A.; Shi, X.; He, C. EEG generation mechanism of lower limb active movement intention and its virtual reality induction enhancement: A preliminary study. Front. Neurosci. 2024, 17, 1305850. [Google Scholar] [CrossRef]
  33. You, A.; Kim, J.K.; Ryu, I.H.; Yoo, T.K. Application of generative adversarial networks (GAN) for ophthalmology image domains: A survey. Eye Vis. 2022, 9, 6. [Google Scholar] [CrossRef] [PubMed]
  34. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph cnn for learning on point clouds. ACM Trans. Graph. (Tog) 2019, 38, 1–12. [Google Scholar] [CrossRef]
  35. Bogaerts, T.; Masegosa, A.D.; Angarita-Zapata, J.S.; Onieva, E.; Hellinckx, P. A graph CNN-LSTM neural network for short and long-term traffic forecasting based on trajectory data. Transp. Res. Part C Emerg. Technol. 2020, 112, 62–77. [Google Scholar] [CrossRef]
  36. Si, C.; Yu, W.; Zhou, P.; Zhou, Y.; Wang, X.; Yan, S. Inception transformer. Adv. Neural Inf. Process. Syst. 2022, 35, 23495–23509. [Google Scholar]
  37. Wen, L.; Li, X.; Li, X.; Gao, L. A new transfer learning based on VGG-19 network for fault diagnosis. In Proceedings of the 2019 IEEE 23rd international conference on computer supported cooperative work in design (CSCWD), Porto, Portugal, 6–8 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 205–209. [Google Scholar]
  38. Du, G.; Cao, X.; Liang, J.; Chen, X.; Zhan, Y. Medical Image Segmentation based on U-Net: A Review. J. Imaging Sci. Technol. 2020, 64, 020508–1–020508-12. [Google Scholar] [CrossRef]
  39. Guo, G.; Wang, H.; Bell, D.; Bi, Y.; Greer, K. KNN model-based approach in classification. In Proceedings of the On The Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE: OTM Confederated International Conferences, CoopIS, DOA, and ODBASE 2003, Catania, Italy, 3–7 November 2003; Springer: Berlin/Heidelberg, Germany, 2003; pp. 986–996. [Google Scholar]
  40. Huang, S.; Cai, N.; Pacheco, P.P.; Narrandes, S.; Wang, Y.; Xu, W. Applications of support vector machine (SVM) learning in cancer genomics. Cancer Genom. Proteom. 2018, 15, 41–51. [Google Scholar]
  41. Popescu, M.-C.; Balas, V.E.; Perescu-Popescu, L.; Mastorakis, N. Multilayer perceptron and neural networks. WSEAS Trans. Circuits Syst. 2009, 8, 579–588. [Google Scholar]
  42. Sajjadi Mohammadabadi, S.M. From Generative AI to Innovative AI: An Evolutionary Roadmap. arXiv 2025, arXiv:2503.11419. [Google Scholar] [CrossRef]
  43. Sadeghi, S.; Niu, C. Augmenting Human Decision-Making in K-12 Education: The Role of Artificial Intelligence in Assisting the Recruitment and Retention of Teachers of Color for Enhanced Diversity and Inclusivity. Leadersh. Policy Sch. 2024, 1–21. [Google Scholar] [CrossRef]
  44. Golkarfard, A.; Sadeghmalakabadi, S.; Talebian, S.; Basirat, S.; Golchin, N. Ethical Challenges of AI Integration in Architecture and Built Environment. Curr. Opin. 2025, 5, 1136–1147. [Google Scholar] [CrossRef]
  45. Ahmadirad, Z. Evaluating the Influence of AI on Market Values in Finance: Distinguishing Between Authentic Growth and Speculative Hype. Int. J. Adv. Res. Humanit. Law 2024, 1, 50–57. [Google Scholar] [CrossRef]
  46. Dehghanpour Abyaneh, M.; Narimani, P.; Javadi, M.S.; Golabchi, M.; Attarsharghi, S.; Hadad, M. Predicting Surface Roughness and Grinding Forces in UNS S34700 Steel Grinding: A Machine Learning and Genetic Algorithm Approach to Coolant Effects. Physchem 2024, 4, 495–523. [Google Scholar] [CrossRef]
  47. Narimani, P.; Abyaneh, M.D.; Golabchi, M.; Golchin, B.; Haque, R.; Jamshidi, A. Digitalization of Analysis of a Concrete Block Layer Using Machine Learning as a Sustainable Approach. Sustainability 2024, 16, 7591. [Google Scholar] [CrossRef]
  48. Mahdavimanshadi, M.; Anaraki, M.G.; Mowlai, M.; Ahmadirad, Z. A Multistage Stochastic Optimization Model for Resilient Pharmaceutical Supply Chain in COVID-19 Pandemic Based on Patient Group Priority. In Proceedings of the 2024 Systems and Information Engineering Design Symposium (SIEDS), Charlottesville, VA, USA, 3 May 2024; pp. 382–387. [Google Scholar] [CrossRef]
  49. Afsharfard, A.; Jafari, A.; Rad, Y.A.; Tehrani, H.; Kim, K.C. Modifying Vibratory Behavior of the Car Seat to Decrease the Neck Injury. J. Vib. Eng. Technol. 2023, 11, 1115–1126. [Google Scholar] [CrossRef]
  50. Sadeghi, S.; Marjani, T.; Hassani, A.; Moreno, J. Development of Optimal Stock Portfolio Selection Model in the Tehran Stock Exchange by Employing Markowitz Mean-Semivariance Model. J. Financ. Issues 2022, 20, 47–71. [Google Scholar] [CrossRef]
  51. Mohammedi, M.; Naseri, M.; Mehrabi Jorshary, K.; Golchin, N.; Akhmedov, S.; Uglu, V.S.O. Economic, Environmental, and Technical Optimal Energy Scheduling of Smart Hybrid Energy System Considering Demand Response Participation. Oper. Res. Forum 2025, 6, 83. [Google Scholar] [CrossRef]
  52. Naseri, S.; Eshraghi, S.; Talebian, S. Innovative Sustainable Architecture: A Lesson Learned from Amphibious House in the UK. Curr. Opin. 2024, 4, 766–777. [Google Scholar] [CrossRef]
  53. Du, T.X.; Jorshary, K.M.; Seyedrezaei, M.; Uglu, V.S.O. Optimal Energy Scheduling of Load Demand with Two-Level Multi-Objective Functions in Smart Electrical Grid. Oper. Res. Forum 2025, 6, 66.
  54. Amiri, N.; Honarmand, M.; Dizani, M.; Moosavi, A.; Kazemzadeh Hannani, S. Shear-Thinning Droplet Formation inside a Microfluidic T-Junction under an Electric Field. Acta Mech. 2021, 232, 2535–2554.
  55. Mohammadabadi, S.M.S.; Zawad, S.; Yan, F.; Yang, L. Speed Up Federated Learning in Heterogeneous Environments: A Dynamic Tiering Approach. IEEE Internet Things J. 2024, 12, 5026–5035.
  56. Basirat, S.; Raoufi, S.; Bazmandeh, D.; Khamoushi, S.; Entezami, M. Ranking of AI-Based Criteria in Health Tourism Using Fuzzy SWARA Method. Comput. Decis. Mak. 2025, 2, 530–545.
  57. Khatami, S.S.; Shoeibi, M.; Salehi, R.; Kaveh, M. Energy-Efficient and Secure Double RIS-Aided Wireless Sensor Networks: A QoS-Aware Fuzzy Deep Reinforcement Learning Approach. J. Sens. Actuator Netw. 2025, 14, 18.
  58. Nezhad, K.K.; Ahmadirad, Z.; Mohammadi, A.T. The Dynamics of Modern Business: Integrating Research Findings into Practical Management; Nobel Sciences: Stockholm, Sweden, 2024.
  59. Ahmadirad, Z. The Beneficial Role of Silicon Valley's Technological Innovations and Venture Capital in Strengthening Global Financial Markets. Int. J. Mod. Achiev. Sci. Eng. Technol. 2024, 1, 9–17.
  60. Saremi, S.Y.; Taghizadeh, M. RFID Adoption by Supply Chain Organizations in Malaysia. Int. Proc. Econ. Dev. Res. 2013, 59, 178.
  61. Pazouki, S.; Jamshidi, M.B.; Jalali, M.; Tafreshi, A. The Integration of Big Data in FinTech: Review of Enhancing Financial Services through Advanced Technologies. World J. Adv. Res. Rev. 2025, 25, 546–556.
  62. Mansouri, S.; Mohammed, H.; Korchiev, N.; Anyanwu, K. Taming Smart Contracts with Blockchain Transaction Primitives: A Possibility? In Proceedings of the 2024 IEEE International Conference on Blockchain (Blockchain), Trinity College Dublin, Dublin, Ireland, 19–22 August 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 575–582.
  63. Mansouri, S.; Samatova, V.; Korchiev, N.; Anyanwu, K. DeMaTO: An Ontology for Modeling Transactional Behavior in Decentralized Marketplaces. In Proceedings of the 2023 IEEE/WIC International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), Venice, Italy, 26–29 October 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 171–180.
  64. Khorsandi, H.; Mohsenibeigzadeh, M.; Tashakkori, A.; Kazemi, B.; Khorashadi Moghaddam, P.; Ahmadirad, Z. Driving Innovation in Education: The Role of Transformational Leadership and Knowledge Sharing Strategies. Curr. Opin. 2024, 4, 505–515.
  65. Mohammadabadi, S.M.S.; Yang, L.; Yan, F.; Zhang, J. Communication-Efficient Training Workload Balancing for Decentralized Multi-Agent Learning. In Proceedings of the 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), Jersey City, NJ, USA, 23–26 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 680–691.
  66. Asadi, M.; Taheri, R. Enhancing Peer Assessment and Engagement in Online IELTS Writing Course through a Teacher's Multifaceted Approach and AI Integration. Technol. Assist. Lang. Educ. 2024, 2, 94–117.
  67. Abbasi, E.; Dwyer, E. The Efficacy of Commercial Computer Games as Vocabulary Learning Tools for EFL Students: An Empirical Investigation. Sunshine State TESOL J. 2024, 16, 24–35.
  68. Mohaghegh, S.; Kondo, S.; Yemiscioglu, G.; Muhtaroglu, A. A Novel Multiplier Hardware Organization for Finite Fields Defined by All-One Polynomials. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 5084–5088.
  69. Mohaghegh, A.; Huang, C. Feature-Guided Sampling Strategy for Adaptive Model Order Reduction of Convection-Dominated Problems. arXiv 2025.
  70. Talebian, S.; Golkarieh, A.; Eshraghi, S.; Naseri, M.; Naseri, S. Artificial Intelligence Impacts on Architecture and Smart Built Environments: A Comprehensive Review. Adv. Civ. Eng. Environ. Sci. 2025, 2, 45–56.
  71. Karkehabadi, A.; Sadeghmalakabadi, S. Evaluating Deep Learning Models for Architectural Image Classification: A Case Study on the UC Davis Campus. In Proceedings of the 2024 IEEE 8th International Conference on Information and Communication Technology (CICT), Prayagraj, UP, India, 6–8 December 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6.
  72. Naseri, S. AI in Architecture and Urban Design and Planning: Case Studies on Three AI Applications. GSC Adv. Res. Rev. 2024, 21, 565–577.
  73. Hanif, E.; Hashemnejad, H.; Ghafourian, M. Factors Affecting the Augmentation of Spatial Dynamics in Social Sustainability with an Emphasis on Human-Orientation of Space. J. Hist. Cult. Art Res. 2017, 6, 419–435.
  74. Barati Nia, A.; Moug, D.M.; Huffman, A.P.; DeJong, J.T. Numerical Investigation of Piezocone Dissipation Tests in Clay: Sensitivity of Interpreted Coefficient of Consolidation to Rigidity Index Selection. In Cone Penetration Testing 2022; CRC Press: Boca Raton, FL, USA, 2022; pp. 282–287.
  75. Barati-Nia, A.; Parrott, A.E.; Sorenson, K.; Moug, D.M.; Khosravifar, A. Comparing Cyclic Direct Simple Shear Behavior of Fine-Grained Soil Prepared with SHANSEP or Recompression Approaches. In Proceedings of the Geotechnical Frontiers 2025, Louisville, KY, USA, 2–5 March 2025; ASCE: Reston, VA, USA, 2025; pp. 419–429.
  76. Splechtna, R.; Behravan, M.; Jelovic, M.; Gracanin, D.; Hauser, H.; Matkovic, K. Interactive Design-of-Experiments: Optimizing a Cooling System. IEEE Trans. Vis. Comput. Graph. 2024, 31, 44–53.
  77. Entezami, M.; Basirat, S.; Moghaddami, B.; Bazmandeh, D.; Charkhian, D. Examining the Importance of AI-Based Criteria in the Development of the Digital Economy: A Multi-Criteria Decision-Making Approach. J. Soft Comput. Decis. Anal. 2025, 3, 72–95.
  78. Roshdieh, N.; Farzad, G. The Effect of Fiscal Decentralization on Foreign Direct Investment in Developing Countries: Panel Smooth Transition Regression. Int. Res. J. Econ. Manag. Stud. (IRJEMS) 2024, 3, 133–140.
  79. Pazouki, S.; Jamshidi, M.B.; Jalali, M.; Tafreshi, A. Artificial Intelligence and Digital Technologies in Finance: A Comprehensive Review. J. Econ. Financ. Account. Stud. 2025, 7, 54–69.
  80. Ahmadirad, Z. The Role of AI and Machine Learning in Supply Chain Optimization. Int. J. Mod. Achiev. Sci. Eng. Technol. 2025, 2, 1–8.
  81. Motta de Castro, E.; Bozorgmehrian, F.; Carrola, M.; Koerner, H.; Samouei, H.; Asadi, A. Sulfur-Driven Reactive Processing of Multiscale Graphene/Carbon Fiber–Polyether Ether Ketone (PEEK) Composites with Tailored Crystallinity and Enhanced Mechanical Performance. Compos. Part B Eng. 2025, 295, 112180.
  82. Pazouki, S.; Jamshidi, M.B.; Jalali, M.; Tafreshi, A. Transformative Impact of AI and Digital Technologies on the FinTech Industry: A Comprehensive Review. Int. J. Adv. Res. Humanit. Law 2025, 2, 1–27.
  83. Azadmanesh, M.; Roshanian, J.; Georgiev, K.; Todrov, M.; Hassanalian, M. Synchronization of Angular Velocities of Chaotic Leader–Follower Satellites Using a Novel Integral Terminal Sliding Mode Controller. Aerosp. Sci. Technol. 2024, 150, 109211.
  84. Kavianpour, S.; Haghighi, F.; Sheykhfard, A.; Das, S.; Fountas, G.; Oshanreh, M.M. Assessing the Risk of Pedestrian Crossing Behavior on Suburban Roads Using Structural Equation Model. J. Traffic Transp. Eng. (Engl. Ed.) 2024, 11, 853–866.
  85. Pourasghar, A.; Mehdizadeh, E.; Wong, T.C.; Hoskoppal, A.K.; Brigham, J.C. A Computationally Efficient Approach for Estimation of Tissue Material Parameters from Clinical Imaging Data Using a Level Set Method. J. Eng. Mech. 2024, 150, 04024075.
  86. Espahbod, S. Intelligent Freight Transportation and Supply Chain Drivers: A Literature Survey. In Proceedings of the Seventh International Forum on Decision Sciences; Springer: Singapore, 2020; pp. 49–56.
  87. Mirbakhsh, A.; Lee, J.; Besenski, D. Spring–Mass–Damper-Based Platooning Logic for Automated Vehicles. Transp. Res. Rec. 2023, 2677, 1264–1274.
  88. Sajjadi Mohammadabadi, S.M.; Entezami, M.; Karimi Moghaddam, A.; Orangian, M.; Nejadshamsi, S. Generative Artificial Intelligence for Distributed Learning to Enhance Smart Grid Communication. Int. J. Intell. Netw. 2024, 5, 267–274.
  89. Mirbakhsh, A.; Lee, J.; Besenski, D. Development of a Signal-Free Intersection Control System for CAVs and Corridor Level Impact Assessment. Future Transp. 2023, 3, 552–567.
  90. Dokhanian, S.; Sodagartojgi, A.; Tehranian, K.; Ahmadirad, Z.; Moghaddam, P.K.; Mohsenibeigzadeh, M. Exploring the Impact of Supply Chain Integration and Agility on Commodity Supply Chain Performance. World J. Adv. Res. Rev. 2024, 22, 441–450.
  91. Gudarzi Farahani, Y.; Mirarab Baygi, S.A.; Abbasi Nahoji, M.; Roshdieh, N. Presenting the Early Warning Model of Financial Systemic Risk in Iran's Financial Market Using the LSTM Model. Int. J. Finance Manag. Account. 2026, 11, 29–38.
  92. Ghoreishi, E.; Abolhassani, B.; Huang, Y.; Acharya, S.; Lou, W.; Hou, Y.T. Cyrus: A DRL-Based Puncturing Solution to URLLC/eMBB Multiplexing in O-RAN. In Proceedings of the 2024 33rd International Conference on Computer Communications and Networks (ICCCN), Kailua-Kona, HI, USA, 29–31 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–9.
Figure 1. The main framework of this study used to automatically detect movement intention from EEG signals.
Figure 2. Example of a tapping device and recording of EEG signals.
Figure 3. An overview of the architecture presented in this study.
Figure 4. Detailed architecture of the proposed deep model along with dimensions of samples in each layer.
Figure 5. Accuracy and time related to movement intention classification considering different graphConv layers.
Figure 6. Classification accuracy of movement intention by considering different polynomial variables.
Figure 7. Accuracy and error of the proposed model in two-class and three-class scenarios for 150 network iterations. (a) Accuracy of network, (b) Loss function of network.
Figure 8. Analysis of ROC curves for different classes of movement intention.
Figure 9. Recorded EEG samples related to left finger tap, right finger tap, and resting state in two different scenarios for the input and output of the proposed network.
Figure 10. Evaluating the efficacy of the constructed network against pre-trained networks.
Figure 11. Evaluation of the intended deep network's performance in noisy situations.
Table 1. Details of hyperparameters in the proposed deep architecture.

Layer | Shape of Weight Tensor | Shape of Bias | Number of Parameters
Graph 1 | (x1, 250,000, 250,000) | 250,000 | 62,500,000,000 × x1 + 250,000
Graph 2 | (x2, 250,000, 125,000) | 125,000 | 31,250,000,000 × x2 + 125,000
Graph 3 | (x3, 125,000, 62,500) | 62,500 | 7,812,500,000 × x3 + 62,500
Graph 4 | (x4, 31,250, 15,625) | 15,625 | 488,281,250 × x4 + 15,625
Graph 5 | (x5, 15,625, 7813) | 7813 | 122,078,125 × x5 + 7813
Graph 6 | (x6, 7813, 3907) | 3907 | 30,525,391 × x6 + 3907
Flattening Layer | – | 2 | 3907
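The parameter counts in Table 1 follow directly from the layer shapes. The minimal sketch below recomputes them, assuming a Chebyshev-style graph convolution whose weight tensor has shape (K, n_in, n_out) and whose bias has shape (n_out,), so each layer holds K × n_in × n_out + n_out parameters; the layer sizes come from the table, while the formula itself is our reading of the shapes, not the authors' code.

```python
# Sketch: reproduce the parameter arithmetic of Table 1, assuming
# params = K * n_in * n_out (weights) + n_out (bias), where K is the
# polynomial order (x1..x6 in the table).
layers = [
    ("Graph 1", 250_000, 250_000),
    ("Graph 2", 250_000, 125_000),
    ("Graph 3", 125_000, 62_500),
    ("Graph 4", 31_250, 15_625),
    ("Graph 5", 15_625, 7_813),
    ("Graph 6", 7_813, 3_907),
]

for name, n_in, n_out in layers:
    weight_coeff = n_in * n_out  # coefficient multiplying the order K
    print(f"{name}: {weight_coeff:,} x K + {n_out:,} parameters")
```

Running this reproduces the coefficients listed in the last column, which is also why the additive constant in each row should equal that layer's bias size.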
Table 2. Selected variables in the proposed architecture.

Model | Parameters | Values | Optimal Value
GAN | Batch Size | 4, 6, 8, 10, 12 | 10
GAN | Optimizer | Adam, SGD, Adamax | Adam
GAN | Conv Layers | 3, 4, 5, 6 | 6
GAN | Learning Rate | 0.1, 0.01, 0.001, 0.0001 | 0.01
GAN | Number of GConv | 2, 3, 4, 5, 6, 7 | 6
ConvGraph | Batch Size in DFCGN | 8, 16, 32 | 32
ConvGraph | Batch Normalization | ReLU, Leaky-ReLU | Leaky-ReLU
ConvGraph | Learning Rate in DFCGN | 0.1, 0.01, 0.001, 0.0001, 0.00001 | 0.0001
ConvGraph | Dropout Rate | 0.1, 0.2, 0.3 | 0.1
ConvGraph | Weight of Optimizer | 10 × 10⁻³, 10 × 10⁻⁴, 10 × 10⁻⁵, 10 × 10⁻⁶ | –
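The "Optimal Value" column in Table 2 implies a sweep over the candidate sets. As an illustration only, the sketch below shows what such a grid search could look like for the ConvGraph branch; `train_and_evaluate` is a hypothetical stand-in for the authors' training loop (stubbed here with a random score so the snippet runs standalone).

```python
import random
from itertools import product

# Candidate values taken from the ConvGraph rows of Table 2.
search_space = {
    "batch_size": [8, 16, 32],
    "activation": ["ReLU", "Leaky-ReLU"],
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4, 1e-5],
    "dropout_rate": [0.1, 0.2, 0.3],
}

def train_and_evaluate(cfg):
    # Hypothetical placeholder: in practice this would train the graph
    # network with `cfg` and return validation accuracy.
    return random.random()

best_acc, best_cfg = 0.0, None
for values in product(*search_space.values()):
    cfg = dict(zip(search_space, values))
    acc = train_and_evaluate(cfg)
    if acc > best_acc:
        best_acc, best_cfg = acc, cfg

print(f"Best configuration: {best_cfg} (accuracy {best_acc:.3f})")
```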
Table 3. Different evaluation indices used to automatically classify movement intention.

Measurement Index | Accuracy (%) | Sensitivity (%) | Precision (%) | Specificity (%) | Kappa Coefficient
2-class | 98.1 | 97.4 | 97.4 | 97.8 | 0.88
3-class | 92.2 | 91.7 | 89.4 | 91.4 | 0.81
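All of the indices in Table 3 can be derived from a confusion matrix. The snippet below computes them for an illustrative 2 × 2 matrix; the counts are invented for demonstration and are not the paper's data.

```python
import numpy as np

# Rows are the true class, columns the predicted class (illustrative counts).
cm = np.array([[488, 12],
               [13, 487]])

tp, fn = cm[0, 0], cm[0, 1]
fp, tn = cm[1, 0], cm[1, 1]
n = cm.sum()

accuracy = (tp + tn) / n
sensitivity = tp / (tp + fn)      # true-positive rate (recall)
precision = tp / (tp + fp)
specificity = tn / (tn + fp)      # true-negative rate

# Cohen's kappa: observed agreement corrected for chance agreement.
p_o = accuracy
p_e = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n**2
kappa = (p_o - p_e) / (1 - p_e)

print(f"acc={accuracy:.3f} sens={sensitivity:.3f} prec={precision:.3f} "
      f"spec={specificity:.3f} kappa={kappa:.3f}")
```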
Table 4. Comparison of the suggested model with other recent investigations.

Research | Method Used | ACC (%)
Jochumsen et al. [24] | CSP + SVM | 80
Xu et al. [25] | MRCP Component + KNN | 75
Jiang et al. [26] | MRCP Component | 76
Wairagkar et al. [27] | ERD Component + KNN | 78
Shahini et al. [28] | CNN | 89
Jochumsen et al. [29] | Handcrafted Features + KNN | 89
Lutes et al. [30] | CNN | 98.50 (two-class)
Choi et al. [31] | Handcrafted Features + SVM | 86
Dong et al. [32] | Transfer Learning | 85
Our Model | GAN + Graph Theory + CNN | 98.2 (two-class), 92 (three-class)
Table 5. Comparison of the proposed model with manual methods.

Method | Feature Learning (ACC) | Handcrafted Features (ACC)
KNN | 76% | 82%
SVM | 80% | 85%
CNN | 84% | 60%
MLP | 75% | 79%
P-M (proposed model) | 92% | 69%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
