Proceeding Paper

Multi-Class Electroencephalography Motor Imagery Classification of Limb Movements Using Convolutional Neural Network †

by Yean Ling Chan 1,*, Yiqi Tew 1, Ching Pang Goh 1 and Choon Kit Chan 2
1 Faculty of Computing and Information Technology, Tunku Abdul Rahman University of Management and Technology (TAR UMT), Setapak, Kuala Lumpur 53300, Malaysia
2 Centre for Sustainable Engineering Solutions, INTI International University, Nilai 71800, Negeri Sembilan, Malaysia
* Author to whom correspondence should be addressed.
Presented at 2025 IEEE International Conference on Computation, Big-Data and Engineering (ICCBE), Penang, Malaysia, 27–29 June 2025.
Eng. Proc. 2026, 128(1), 20; https://doi.org/10.3390/engproc2026128020
Published: 11 March 2026

Abstract

We classified essential motor actions, dorsal and plantar flexion (lower limb) and arm movement (upper limb), from electroencephalography (EEG)-based brain–computer interface (BCI) signals using a convolutional neural network (CNN). Unlike previous research that studied upper or lower limb motor imagery in isolation, we integrated both categories in a unified framework to cover a broader range of movements and applications. These motor actions are fundamental to daily activities such as walking, running, maintaining balance, lifting, reaching, and exercising. Upper limb EEG data were provided by INTI International University, whereas lower limb data were obtained from a publicly available dataset; the two were recorded using 16-channel Emotiv and OpenBCI systems, respectively, each with distinct sampling rates and signal formats. To improve signal quality and facilitate joint model training, all signals were downsampled to 125 Hz, standardized to 16 channels, segmented using sliding windows, normalized via StandardScaler, and labelled according to action class. The processed data were used to train a CNN model configured with a kernel size of 3 and rectified linear unit activation functions. Training was terminated early at epoch 11 using an early stopping strategy, resulting in approximately 67% accuracy for both the training and validation sets. Although this accuracy is moderate by deep learning standards, it is a promising outcome for EEG-based multi-class motor imagery classification, given the challenges posed by limited data availability, low inter-class feature discriminability, and the inherently noisy nature of non-invasive EEG signals. The results of this study underscore the potential of CNN-based models for future real-time BCI applications. Future work will expand the dataset, refine the deep learning architecture, improve signal preprocessing techniques, and integrate the system with prosthetic devices to validate it in practical scenarios.

1. Introduction

According to the World Health Organization (WHO), 16% of the global population experienced a significant disability in 2023 [1]. Among these, limb disabilities represent a major category, characterized by substantial impairment or loss of function in one or more limbs. Such conditions arise from genetic disorders, accidents or trauma, vascular diseases, cancer, and other medical conditions. Limb impairments affect an individual’s mobility, independence, psychological well-being, employment opportunities, and overall quality of life [2].
Prosthetic limbs play a vital role in restoring function and mobility for individuals with limb disabilities or amputations. These artificial devices replicate the appearance and functionality of natural limbs, enabling users to perform daily activities with greater autonomy and comfort [3]. Traditional prosthetic technologies, such as body-powered, myoelectric, and switch/button-controlled systems, rely on indirect methods to interpret user intent, often through muscle contractions or mechanical inputs rather than direct neural signals [4]. Consequently, these systems face limitations in terms of control precision, movement naturalness, and overall functionality, which can contribute to low user acceptance and limited adoption.
Recent advancements in brain–computer interface (BCI) technology and robotics offer promising avenues for enhancing prosthetic limb performance. BCI systems facilitate direct communication between the brain and external devices, with electroencephalography (EEG) widely adopted for recording brain activity due to its non-invasive nature, high temporal resolution, and cost-effectiveness [5]. Recent developments have shown that EEG is effective not only for motor tasks but also for identifying internal cognitive states, providing a versatile platform for real-time brain signal analysis [6]. Motor imagery, in which users mentally simulate limb movements, has gained significant attention for its applicability across both clinical and non-clinical domains [7]. Integrating EEG-based BCI with motor imagery enables intuitive and natural control of prosthetic limbs by interpreting brain signals directly, thereby offering substantial improvements over traditional muscle-based control systems.
However, most existing EEG-based BCI research has focused on upper or lower limb movements in isolation. This narrow scope limits the practical use of such systems for individuals with combined upper and lower limb disabilities, as it does not support coordinated or simultaneous control across multiple limbs. To address this limitation, we developed a unified framework that includes both upper and lower limb motor imagery tasks. Specifically, the developed framework includes dorsal and plantar flexion for the lower limbs (Figure 1), and elbow flexion and extension for the upper limbs (Figure 2). These movements are essential for a wide range of daily activities, including walking, running, maintaining balance, lifting, reaching, and exercising.

2. Literature Review

The history of prosthetic limbs has undergone significant advancements, evolving from rudimentary wooden and metal constructs to more refined peg legs and hand hooks during the Middle Ages [10]. The Renaissance period introduced durable materials such as iron and steel, laying the foundation for functional prosthetics developed during the American Civil War in 1863. Between the 1970s and 1990s, lighter materials, including plastics, polycarbonates, resins, and carbon fibre, were introduced to enhance durability and user comfort. From the 2000s to 2014, highly specialized innovations, such as motorized prosthetics controlled by sensors and microprocessors, enabled the development of responsive running blades and terrain-adaptive limbs, all of which significantly improved mobility and performance.
The effectiveness of traditional control methods for upper and lower limb prosthetics varies. Common approaches for upper limb prosthetics include myoelectric, body-powered, and switch/button-controlled systems. Body-powered prosthetics, which rely on the user’s physical movements, perform basic grasping tasks but are limited in functionality, sensory feedback, and ease of use [11,12,13]. Myoelectric prosthetics utilize electromyography (EMG) signals from residual muscles to enable more intuitive control; however, they struggle to replicate the multi-joint coordination characteristic of natural limb movement [14,15]. Switch and button control systems provide discrete input commands but tend to result in slower response times and are less effective for managing multi-joint movements simultaneously [16,17].
For lower limb prosthetics, passive devices remain the most widely used. Nonetheless, powered prosthetics have potential in reducing energy expenditure and improving gait symmetry through advanced control mechanisms designed to assist users across varied terrains, including slopes and stairs [18]. These systems incorporate torque sensors and motorized assemblies to adapt movement based on ground reaction forces and surface conditions [19]. Despite these advancements, limitations persist. Most powered prosthetics lack stumble-recovery capabilities, reducing their effectiveness during unexpected motions such as slips [18]. Additionally, the absence of adequate sensory feedback in current designs contributes to increased cognitive load and mobility challenges, which may lead to user abandonment [20].
EEG-based BCI systems represent a significant leap forward in prosthetic limb control, surpassing traditional methods that rely on mechanical interfaces between the residual limb and the prosthetic device. Unlike conventional systems, which constrain users to basic motion and positioning via stump-based control, EEG-BCIs interpret the brain’s electrical activity, enabling users to initiate movement through thought alone [9,21]. EEG signals are effectively decoded for arm movements, allowing for more fluid and natural interaction with the environment compared to the rigid control mechanisms of traditional prosthetics [22]. The use of non-invasive wireless EEG systems enhances the practicality of BCI applications, supporting deployment in diverse real-world contexts [21].
Motor imagery is widely used in EEG-based BCIs due to its ability to translate mental representations of movement into actionable control signals without requiring physical motion. By mentally simulating motor actions, users generate distinct neural patterns detectable via EEG, making motor imagery a practical and non-invasive approach for controlling prosthetic limbs and rehabilitation devices [23]. However, previous EEG-based BCI research has focused on classifying either upper or lower limb movements in isolation [24]. While these results have advanced the field, such unidimensional approaches limit the applicability of these systems for individuals with combined upper and lower limb impairments. The absence of coordinated or simultaneous multi-limb control limits the practicality of current solutions in real-world assistive applications, where integrated and comprehensive functionality is essential.
In summary, EEG-based BCI systems have demonstrated considerable promise in enabling intuitive control of prosthetic limbs, particularly through motor imagery techniques. Nevertheless, their focus on isolated limb movements restricts their utility for users with extensive motor impairments. To address this limitation, BCI systems capable of supporting simultaneous multi-limb control need to be developed. Therefore, we developed an EEG-based BCI that classifies motor imagery tasks involving both arm and leg movements, to advance versatile and practical assistive technologies.

3. Methodology

We classified EEG signals for different limb movements using a one-dimensional CNN, implemented in Python version 3.8.8 within a Jupyter Notebook version 6.3.0 environment. Figure 3 illustrates the overview of the classification process throughout this study.

3.1. Data Acquisition

EEG recordings for both upper and lower limb movements were acquired from six subjects. Two distinct EEG devices were utilized across the two categories, and the data were obtained in CSV format.
  • Upper limb–arm movement data: EEG data corresponding to arm movements were acquired using an Emotiv device with 16 channels at a sampling rate of 256 Hz. The data were sourced from the Centre for Sustainable Engineering Solutions, INTI International University [25].
  • Lower limb–dorsal and plantar flexion data: EEG data for these movements were recorded using an OpenBCI device with 16 channels at a sampling rate of 125 Hz. The data were obtained from publicly available Mendeley Data [26].

3.2. Data Preprocessing

The EEG data preprocessing was conducted to ensure consistency and suitability for multi-class motor imagery classification. Initially, EEG signals were loaded from individual CSV files using the Pandas library. Only the 16 columns corresponding to EEG channels were retained, while all non-relevant columns were excluded to focus the analysis on neural activity. To address variability in recording durations across datasets, temporal alignment was performed. The OpenBCI recordings for lower limb movements (dorsal and plantar flexion) were performed for 20 s, whereas the Emotiv recordings for upper limb movements (arm flexion and extension) exhibited longer and inconsistent durations. To ensure temporal uniformity and enable effective joint training of the CNN model, the Emotiv data were truncated to 20 s. This standardization of input duration minimized temporal bias and enhanced the model’s ability to generalize across different motor imagery tasks. Sampling rate discrepancies were resolved by resampling. The Emotiv data, originally recorded at 256 Hz, were resampled to 125 Hz using polyphase resampling via the resample_poly function from the SciPy library. This adjustment ensured a consistent temporal resolution across both datasets, as the OpenBCI data were already recorded at 125 Hz.
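The loading, truncation, and resampling steps above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the helper name `preprocess_emotiv` and the assumption that the first 16 CSV columns are the EEG channels are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.signal import resample_poly

def preprocess_emotiv(csv_path, n_channels=16, fs_in=256, fs_out=125, duration_s=20):
    """Load an Emotiv CSV, keep the EEG channels, truncate to 20 s,
    and resample from 256 Hz to 125 Hz via polyphase resampling."""
    df = pd.read_csv(csv_path)
    # Assumption: the first 16 columns hold the EEG channels;
    # any extra (timestamp, marker) columns are dropped.
    eeg = df.iloc[:, :n_channels].to_numpy(dtype=float)
    eeg = eeg[: fs_in * duration_s]          # truncate to 20 s (5120 samples at 256 Hz)
    # 256 Hz -> 125 Hz: up=125, down=256, along the time axis.
    return resample_poly(eeg, up=fs_out, down=fs_in, axis=0)
```

After this step both datasets share the same duration (20 s) and rate (125 Hz), i.e., 2500 samples per recording.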

3.3. Data Segmentation and Labelling

A sliding window was employed to segment the continuous EEG data into overlapping epochs, and each epoch was treated as an individual sample for the CNN. A window size of 128 data points was used with a stride of 32 data points, resulting in a 75% overlap between consecutive windows. Labels were then assigned for each action class: 0 for dorsal flexion, 1 for plantar flexion, and 2 for arm movement.
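The segmentation step can be sketched as follows (the helper name is hypothetical; the window and stride values match the text):

```python
import numpy as np

def segment(signal, window=128, stride=32):
    """Slice a (samples, channels) EEG array into overlapping epochs.
    window=128 and stride=32 give 75% overlap between consecutive windows."""
    n_epochs = (signal.shape[0] - window) // stride + 1
    return np.stack([signal[i * stride : i * stride + window]
                     for i in range(n_epochs)])

# Label map from the text: 0 = dorsal flexion, 1 = plantar flexion, 2 = arm movement.
LABELS = {"dorsal_flexion": 0, "plantar_flexion": 1, "arm_movement": 2}
```

For a 20 s recording at 125 Hz (2500 samples), this yields (2500 − 128) // 32 + 1 = 75 epochs per recording.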

3.4. Data Normalization

Before training the CNN model, the EEG values from 16 channels, i.e., input features, were normalized using the StandardScaler from the scikit-learn library. This standardization process ensures that each feature has a mean of 0 and a standard deviation of 1, which improves the training stability and performance of the neural network.
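A minimal sketch of this normalization step, assuming the scaler is fit per channel across all epochs (the exact fitting axis is not specified in the text, so this is one plausible reading):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

def normalize_epochs(X):
    """Standardize each of the 16 channels to mean 0 and std 1.
    X has shape (n_epochs, window, channels); StandardScaler expects 2D,
    so the time axis is flattened, scaled, then restored."""
    n, w, c = X.shape
    scaler = StandardScaler()
    flat = scaler.fit_transform(X.reshape(-1, c))
    return flat.reshape(n, w, c), scaler
```

Keeping the fitted scaler allows the same transform to be applied to unseen data at inference time.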

3.5. Data Splitting

The normalized data were randomly split into training and testing sets using a stratified train–test split with an 80/20 ratio. Stratification was applied to maintain the class distribution in both sets. A random state of 42 was used to ensure reproducibility.
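This split corresponds directly to scikit-learn's `train_test_split`; the arrays below are dummy stand-ins for the normalized epochs and labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins: 300 epochs of shape (128, 16) and balanced labels 0/1/2.
X = np.random.randn(300, 128, 16)
y = np.repeat([0, 1, 2], 100)

# 80/20 stratified split with a fixed random state for reproducibility.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
```

With `stratify=y`, each class keeps the same proportion in both sets (here, 80 training and 20 testing epochs per class).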

3.6. Model Training

The configuration of the proposed CNN model is summarized in Table 1. The CNN model was trained using the Adam optimizer and the sparse categorical cross-entropy loss function. The convolutional layers used a kernel size of 3 and the ReLU activation function. The model was trained for 50 epochs with a batch size of 16. An Early Stopping callback was implemented to prevent overfitting, monitoring the validation loss with a patience of 5 epochs and restoring the best-performing weights.
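Assuming a standard Keras implementation, the configuration in Table 1 can be sketched as follows; hyperparameters not stated in the paper (e.g., pool size, output activation) follow Keras defaults or common practice and are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

def build_model(window=128, channels=16, n_classes=3):
    """1D CNN matching the layer list in Table 1."""
    model = models.Sequential([
        layers.Input(shape=(window, channels)),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(),                 # default pool size 2 (assumed)
        layers.Dropout(0.3),
        layers.Conv1D(128, kernel_size=3, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),  # output activation assumed
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Early stopping as described: monitor validation loss, patience 5, keep best weights.
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=50, batch_size=16, callbacks=[early_stop])
```

Sparse categorical cross-entropy lets the integer labels (0, 1, 2) be used directly, without one-hot encoding.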

3.7. Performance Evaluation

The performance of the trained model was evaluated based on the testing set. The training and validation accuracy, the epoch at which training stopped due to early stopping, and the epoch with the best validation loss were recorded.
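Such an evaluation can be sketched with scikit-learn metrics. The predictions below are dummy values so the snippet runs standalone; in practice `y_prob` would come from `model.predict(X_test)`:

```python
import numpy as np
from sklearn.metrics import accuracy_score, classification_report

# Dummy ground truth and (pretend-perfect) softmax outputs for illustration.
y_test = np.array([0, 1, 2, 2, 1, 0])
y_prob = np.eye(3)[y_test]
y_pred = y_prob.argmax(axis=1)   # predicted class = highest softmax probability

print(f"test accuracy: {accuracy_score(y_test, y_pred):.4f}")
print(classification_report(
    y_test, y_pred,
    target_names=["dorsal flexion", "plantar flexion", "arm movement"]))
```

Per-class precision and recall from `classification_report` can reveal whether the moderate overall accuracy is driven by confusion between specific movement classes.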

4. Results and Discussion

The CNN model was trained to classify three motor imagery tasks: arm movement, dorsal flexion, and plantar flexion. The model was trained for a maximum of 50 epochs with early stopping enabled to prevent overfitting. Training terminated at epoch 11, achieving a final training accuracy of 66.94% and validation accuracy of 66.67% (Figure 4).
Figure 4a illustrates the loss over epochs for both the training and validation datasets. Initially, a sharp decline in training loss is observed from epoch 1 to 2, indicating rapid learning of key features. Over subsequent epochs, training and validation losses converge and fluctuate minimally, remaining around 0.465. This suggests the model is not overfitting and can generalize relatively well to unseen data. The close alignment between training and validation loss curves further reinforces this conclusion. Figure 4b shows the accuracy over epochs. The training accuracy exhibits some fluctuations, ranging roughly between 65 and 69%, which is expected for the complexity and noisiness of EEG data. In contrast, the validation accuracy remains remarkably consistent at approximately 66.67% throughout the training process. This plateaued performance implies that improvements in classification accuracy are limited by the inherent challenges of EEG-based motor imagery tasks, while the model learns stable patterns.
Although this level of accuracy might appear moderate compared with other deep learning tasks, it is reasonable for EEG-based multi-class motor imagery classification. Specific challenges include limited data availability, the inherent noise and non-stationarity of EEG signals, and low inter-class discriminability during motor imagery tasks [27,28]. These challenges become even more pronounced when data from several limb regions and hardware sources are combined.
Despite these challenges, the model demonstrated stable and consistent performance, indicating its capability to learn meaningful patterns from the EEG data. Achieving approximately 67% for training and validation accuracy supports the feasibility of employing a CNN for generalized decoding of upper and lower limb movements in a unified BCI framework.

5. Conclusions

A CNN model was developed to classify both upper and lower limb motor imagery tasks using EEG signals in a unified framework. By integrating data from distinct hardware platforms and standardizing signal characteristics, the feasibility of joint training for arm movement, dorsal flexion, and plantar flexion was validated. The developed CNN model achieved approximately 67% accuracy on the training and validation datasets, underscoring the potential of deep learning methods for multi-class motor imagery classification. These results are promising, considering the inherent challenges posed by inter-subject variability, signal noise, and the complexity of non-invasive EEG data, especially in applications requiring the decoding of multi-limb movements for real-world assistive technologies such as brain-controlled prosthetics or wheelchairs for individuals with multiple motor impairments. Future work should expand the dataset to enhance model generalizability, improve EEG signal quality through advanced preprocessing techniques, and optimize model performance by exploring alternative deep learning architectures and hyperparameter tuning. Additionally, the system should be validated on actual prosthetic devices to confirm its practical applicability.

Author Contributions

Conceptualization, Y.L.C., C.P.G. and Y.T.; data collection, C.K.C.; methodology, Y.L.C., C.P.G. and Y.T.; software, Y.L.C.; validation, Y.L.C., C.P.G. and Y.T.; formal analysis, Y.L.C.; investigation, Y.L.C.; resources, C.K.C. and C.P.G.; writing—original draft preparation, Y.L.C.; writing—review and editing, C.P.G. and Y.T.; visualization, Y.L.C.; supervision, C.P.G.; project administration, C.P.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Internal Research Grant of TAR UMT (Tunku Abdul Rahman University of Management and Technology), Malaysia under Grant No. UC/I/G2024-00141.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Ethics Committee of INTI International University (INTI/CPS/2025/007).

Informed Consent Statement

Written informed consent has been obtained from the participant(s) to publish this paper.

Data Availability Statement

The upper limb dataset was provided by the Centre for Sustainable Engineering Solutions, INTI International University. The lower limb dataset used in this study is publicly available and cited in the reference list [26].

Acknowledgments

The authors would like to express sincere gratitude to the Centre for Sustainable Engineering Solutions, INTI International University, for providing the upper limb EEG dataset used in this study, and to the Centre of Computational Intelligence, TAR UMT, for providing the equipment and resources to conduct this study. Their support and contribution were instrumental in facilitating the research and enabling the development and evaluation of the proposed system.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Health Organization. Disability. Available online: https://www.who.int/news-room/fact-sheets/detail/disability-and-health (accessed on 4 May 2025).
  2. Bhayana, H.; Bu, S.; Saini, U.C.; Mehra, A. Prevalence and factors associated with psychological morbidity, phantom limb Pain in lower limb amputees. Injury 2024, 55, 111828. [Google Scholar] [CrossRef] [PubMed]
  3. Lechler, K.; Frossard, B.; Whelan, L.; Langlois, D.; Müller, R.; Kristjansson, K. Motorized Biomechatronic Upper and Lower Limb Prostheses—Clinically Relevant Outcomes. PM&R 2018, 10, S207–S219. [Google Scholar] [CrossRef] [PubMed]
  4. Wong, S.; Gui, C. Brain controlled robotic arms-advancements in prosthetic technology. Univ. West. Ont. Med. J. 2019, 87, 59–61. [Google Scholar] [CrossRef]
  5. Padfield, N.; Zabalza, J.; Zhao, H.; Masero, V.; Ren, J. EEG-Based Brain-Computer Interfaces Using Motor-Imagery: Techniques and Challenges. Sensors 2019, 19, 1423. [Google Scholar] [CrossRef] [PubMed]
  6. Hasibuan, M.S.; Isnanto, R.R.; Dewi, D.A.; Kurniawan, T.B.; Yeh, M.-L.; Wijaya, A. A Proposed Model for Detecting Learning Styles Based on the Felder–Silverman Model Using KNN and LR with Electroencephalography (EEG). J. Appl. Data Sci. 2025, 6, 1129–1139. [Google Scholar] [CrossRef]
  7. Värbu, K.; Muhammad, N.; Muhammad, Y. Past, Present, and Future of EEG-Based BCI Applications. Sensors 2022, 22, 3331. [Google Scholar] [CrossRef] [PubMed]
  8. Connexions. Wikimedia Commons. Available online: https://commons.wikimedia.org/wiki/File:Dorsiplantar.jpg (accessed on 5 March 2026).
  9. Wikimedia Commons. Available online: https://commons.wikimedia.org/wiki/File:Flexion_Extension_Arm.png (accessed on 5 March 2026).
  10. Prosthetics Through the Ages|NIH MedlinePlus Magazine. Available online: https://magazine.medlineplus.gov/article/prosthetics-through-the-ages (accessed on 1 June 2025).
  11. Aman, M.; Sporer, M.E.; Gstoettner, C.; Prahm, C.; Hofer, C.; Mayr, W.; Farina, D.; Aszmann, O.C. Bionic hand as artificial organ: Current status and future perspectives. Artif. Organs 2019, 43, 109–118. [Google Scholar] [CrossRef] [PubMed]
  12. Salminger, S.; Roche, A.D.; Sturma, A.; Mayer, J.A.; Aszmann, O.C. Hand Transplantation Versus Hand Prosthetics: Pros and Cons. Curr. Surg. Rep. 2016, 4, 8. [Google Scholar] [CrossRef] [PubMed]
  13. Cheesborough, J.; Smith, L.; Kuiken, T.; Dumanian, G. Targeted Muscle Reinnervation and Advanced Prosthetic Arms. Semin. Plast. Surg. 2015, 29, 062–072. [Google Scholar] [CrossRef] [PubMed]
  14. Jiang, N.; Rehbaum, H.; Vujaklija, I.; Graimann, B.; Farina, D. Intuitive, Online, Simultaneous, and Proportional Myoelectric Control Over Two Degrees-of-Freedom in Upper Limb Amputees. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 501–510. [Google Scholar] [CrossRef] [PubMed]
  15. Young, A.J.; Smith, L.H.; Rouse, E.J.; Hargrove, L.J. A comparison of the real-time controllability of pattern recognition to conventional myoelectric control for discrete and simultaneous movements. J. Neuroeng. Rehabil. 2014, 11, 5. [Google Scholar] [CrossRef] [PubMed]
  16. Trapp, S.; Lepsien, J.; Sehm, B.; Villringer, A.; Ragert, P. Changes of Hand Switching Costs during Bimanual Sequential Learning. PLoS ONE 2012, 7, e45857. [Google Scholar] [CrossRef] [PubMed]
  17. Guo, J.-Y.; Zheng, Y.-P.; Xie, H.-B.; Koo, T.K. Towards the application of one-dimensional sonomyography for powered upper-limb prosthetic control using machine learning models. Prosthet. Orthot. Int. 2013, 37, 43–49. [Google Scholar] [CrossRef] [PubMed]
  18. Danforth, S.M.; Holmes, P.D.; Vasudevan, R. Trip Recovery in Lower-Limb Prostheses using Reachable Sets of Predicted Human Motion. arXiv 2020, arXiv:2010.11228. [Google Scholar] [CrossRef]
  19. Butt, A.M.; Qureshi, K.K. Smart Lower Limb Prostheses with a Fiber Optic Sensing Sole: A Multicomponent Design Approach. Sens. Mater. 2019, 31, 2965. [Google Scholar] [CrossRef]
  20. Petrini, F.M.; Valle, G.; Bumbasirevic, M.; Barberi, F.; Bortolotti, D.; Cvancara, P.; Hiairrassary, A.; Mijovic, P.; Sverrisson, A.Ö.; Pedrocchi, A.; et al. Enhancing functional abilities and cognitive integration of the lower limb prosthesis. Sci. Transl. Med. 2019, 11, eaav8939. [Google Scholar] [CrossRef] [PubMed]
  21. Kuo, C.-C.; Knight, J.L.; Dressel, C.A.; Chiu, A.W.L. Non-Invasive BCI for the Decoding of Intended Arm Reaching Movement in Prosthetic Limb Control. Am. J. Biomed. Eng. 2012, 2, 155–162. [Google Scholar] [CrossRef]
  22. Makwanda, A.B.; Ikhile, A.O. Advancements of Upper Limb Prostheses can Improve Patient Quality of Life: A Technology Review. Undergrad. Res. Nat. Clin. Sci. Technol. (URNCST) J. 2023, 7, 1–8. [Google Scholar] [CrossRef]
  23. Yang, E.; Shankar, K.; Perumal, E.; Seo, C. Optimal Fuzzy Logic Enabled EEG Motor Imagery Classification for Brain Computer Interface. IEEE Access 2024, 12, 46002–46011. [Google Scholar] [CrossRef]
  24. AL-Quraishi, M.S.; Elamvazuthi, I.; Daud, S.A.; Parasuraman, S.; Borboni, A. EEG-Based Control for Upper and Lower Limb Exoskeletons and Prostheses: A Systematic Review. Sensors 2018, 18, 3342. [Google Scholar] [CrossRef] [PubMed]
  25. Centre for Sustainable Engineering Solutions-INTI ESG. Available online: https://newinti.edu.my/esg/index.php/centre-for-sustainable-engineering-solutions/ (accessed on 30 May 2025).
  26. Asanza, V.; Lorente-Leyva, L.L.; Peluffo-Ordóñez, D.H.; Montoya, D.; Gonzalez, K. MILimbEEG: A dataset of EEG signals related to upper and lower limb execution of motor and motor imagery tasks. Data Brief 2023, 50, 109540. [Google Scholar] [CrossRef] [PubMed]
  27. Sakhavi, S.; Guan, C.; Yan, S. Parallel convolutional-linear neural network for motor imagery classification. In 2015 23rd European Signal Processing Conference (EUSIPCO); IEEE: Nice, France, 2015; pp. 2736–2740. [Google Scholar] [CrossRef]
  28. Dose, H.; Møller, J.S.; Iversen, H.K.; Puthusserypady, S. An end-to-end deep learning approach to MI-EEG signal classification for BCIs. Expert Syst. Appl. 2018, 114, 532–542. [Google Scholar] [CrossRef]
Figure 1. Example of dorsal and plantar flexion (lower limb) [8].
Figure 2. Example of elbow flexion and extension (upper limb) [9].
Figure 3. Multi-class EEG motor imagery classification.
Figure 4. (a) Losses over epochs; (b) accuracy over epochs.
Table 1. CNN model for EEG classification.

Component: CNN Architecture
  • 1D convolutional layer (64 filters, kernel size 3, ReLU activation).
  • Max pooling layer.
  • Dropout layer (0.3).
  • 1D convolutional layer (128 filters, kernel size 3, ReLU activation).
  • Global average pooling layer.
  • Dense layer (64 units, ReLU activation).
  • Output layer.
Component: Compile CNN Model
  • Adam optimizer.
  • Loss function: sparse categorical cross-entropy.
Component: CNN Model Training
  • Input: training EEG data, training labels.
  • Validation data: validation EEG data, validation labels.
  • Epochs: 50.
  • Batch size: 16.
  • Early stopping: monitor validation loss, patience 5.
Component: Evaluate Performance
  • Output: training accuracy, validation accuracy.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

