Article

Uncertainty-Aware Deep Learning for Robust and Interpretable MI EEG Using Channel Dropout and LayerCAM Integration

by
Óscar Wladimir Gómez-Morales
1,2,*,
Sofia Escalante-Escobar
2,
Diego Fabian Collazos-Huertas
2,
Andrés Marino Álvarez-Meza
2 and
German Castellanos-Dominguez
2
1
Faculty of Systems and Telecommunications, Universidad Estatal Península de Santa Elena, La Libertad 240204, Ecuador
2
Signal Processing and Recognition Group, Universidad Nacional de Colombia sede Manizales, Km 7 vía al Magdalena, Manizales 170003, Colombia
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(14), 8036; https://doi.org/10.3390/app15148036
Submission received: 5 June 2025 / Revised: 14 July 2025 / Accepted: 16 July 2025 / Published: 18 July 2025
(This article belongs to the Special Issue EEG Horizons: Exploring Neural Dynamics and Neurocognitive Processes)

Abstract

Motor Imagery (MI) classification plays a crucial role in enhancing the performance of brain–computer interface (BCI) systems, thereby enabling advanced neurorehabilitation and the development of intuitive brain-controlled technologies. However, MI classification using electroencephalography (EEG) is hindered by spatiotemporal variability and the limited interpretability of deep learning (DL) models. To mitigate these challenges, dropout techniques are employed as regularization strategies. Nevertheless, the removal of critical EEG channels, particularly those from the sensorimotor cortex, can result in substantial spatial information loss, especially under limited training data conditions. This issue, compounded by high EEG variability in subjects with poor performance, hinders generalization and reduces the interpretability of and clinical trust in MI-based BCI systems. This study proposes a novel framework integrating channel dropout—a variant of Monte Carlo dropout (MCD)—with class activation maps (CAMs) to enhance robustness and interpretability in MI classification. This integration offers, for the first time, a dedicated solution that concurrently mitigates spatiotemporal uncertainty and provides fine-grained, neurophysiologically relevant interpretability in MI classification, demonstrating refined spatial attention particularly in challenging low-performing subjects. We evaluate three DL architectures (ShallowConvNet, EEGNet, TCNet Fusion) on a 52-subject MI-EEG dataset, applying channel dropout to simulate structural variability and LayerCAM to visualize spatiotemporal patterns. Results demonstrate that among the three evaluated DL models, TCNet Fusion achieved the highest peak accuracy of 74.4% using 32 EEG channels, while ShallowConvNet recorded the lowest peak at 72.7%, indicating TCNet Fusion’s robustness in moderate-density montages.
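As a rough illustration of the channel-dropout idea described above (not the authors' implementation; the pass count, dropout rate, and the toy classifier are all illustrative assumptions), whole-channel Monte Carlo dropout can be sketched in NumPy: entire EEG channels are zeroed at inference time, predictions are averaged over several stochastic passes, and their spread serves as a simple uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_dropout(eeg, p=0.2, rng=rng):
    """Zero out whole EEG channels with probability p.
    eeg: (channels, samples) array."""
    mask = rng.random(eeg.shape[0]) >= p
    return eeg * mask[:, None]

def mc_channel_dropout_predict(model, eeg, n_passes=20, p=0.2, rng=rng):
    """Monte Carlo inference: average class probabilities over
    stochastic channel-dropout passes; per-class standard deviation
    acts as a simple uncertainty estimate."""
    preds = np.stack([model(channel_dropout(eeg, p, rng))
                      for _ in range(n_passes)])
    return preds.mean(axis=0), preds.std(axis=0)

def toy_model(eeg):
    """Hypothetical stand-in classifier: mean power of the first two
    channels, mapped through a softmax over two MI classes."""
    feats = (eeg ** 2).mean(axis=1)[:2]
    z = feats - feats.max()            # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

eeg = rng.standard_normal((32, 256))   # 32 channels, 256 samples
mean_pred, unc = mc_channel_dropout_predict(toy_model, eeg)
```

In a real pipeline the `toy_model` would be replaced by one of the evaluated networks (e.g., EEGNet) kept in stochastic mode at test time; the averaging step is what distinguishes MCD inference from a single deterministic forward pass.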
Incorporating MCD notably improved model consistency and classification accuracy, especially in low-performing subjects with baseline accuracies below 70%; EEGNet and TCNet Fusion showed accuracy improvements of up to 10% over their non-MCD versions. Furthermore, LayerCAM visualizations enhanced with MCD transformed diffuse spatial activation patterns into more focused and interpretable topographies, aligning more closely with known motor-related brain regions and thereby boosting both interpretability and classification reliability across varying subject performance levels. Our approach offers a unified solution for uncertainty-aware and interpretable MI classification.
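The LayerCAM step referenced above can likewise be sketched in a few lines. This is a generic NumPy rendering of the published LayerCAM rule (element-wise positive gradients weight the activations, which are then summed over feature channels and rectified), not the authors' code; the array shapes and random inputs are illustrative assumptions.

```python
import numpy as np

def layercam(activations, gradients):
    """LayerCAM heatmap from one conv layer.
    activations, gradients: (channels, H, W) arrays for the target class.
    Each activation element is weighted by its own positive gradient,
    then feature channels are summed and the result is rectified."""
    weights = np.maximum(gradients, 0.0)        # element-wise ReLU on grads
    cam = (weights * activations).sum(axis=0)   # sum over feature maps
    cam = np.maximum(cam, 0.0)                  # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                   # normalize to [0, 1]
    return cam

rng = np.random.default_rng(1)
acts = rng.standard_normal((8, 6, 6))           # hypothetical layer outputs
grads = rng.standard_normal((8, 6, 6))          # hypothetical class gradients
heatmap = layercam(acts, grads)
```

For EEG models, the resulting map would be projected back onto the electrode montage to obtain the scalp topographies discussed in the abstract.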
Keywords: motor imagery; channel dropout; class activation maps; spatiotemporal uncertainty

Share and Cite

MDPI and ACS Style

Gómez-Morales, Ó.W.; Escalante-Escobar, S.; Collazos-Huertas, D.F.; Álvarez-Meza, A.M.; Castellanos-Dominguez, G. Uncertainty-Aware Deep Learning for Robust and Interpretable MI EEG Using Channel Dropout and LayerCAM Integration. Appl. Sci. 2025, 15, 8036. https://doi.org/10.3390/app15148036

AMA Style

Gómez-Morales ÓW, Escalante-Escobar S, Collazos-Huertas DF, Álvarez-Meza AM, Castellanos-Dominguez G. Uncertainty-Aware Deep Learning for Robust and Interpretable MI EEG Using Channel Dropout and LayerCAM Integration. Applied Sciences. 2025; 15(14):8036. https://doi.org/10.3390/app15148036

Chicago/Turabian Style

Gómez-Morales, Óscar Wladimir, Sofia Escalante-Escobar, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza, and German Castellanos-Dominguez. 2025. "Uncertainty-Aware Deep Learning for Robust and Interpretable MI EEG Using Channel Dropout and LayerCAM Integration" Applied Sciences 15, no. 14: 8036. https://doi.org/10.3390/app15148036

APA Style

Gómez-Morales, Ó. W., Escalante-Escobar, S., Collazos-Huertas, D. F., Álvarez-Meza, A. M., & Castellanos-Dominguez, G. (2025). Uncertainty-Aware Deep Learning for Robust and Interpretable MI EEG Using Channel Dropout and LayerCAM Integration. Applied Sciences, 15(14), 8036. https://doi.org/10.3390/app15148036

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
