Search Results (84)

Search Parameters:
Keywords = Monte Carlo Dropout

28 pages, 7150 KB  
Article
Distress-Level Prediction of Pavement Deterioration with Causal Analysis and Uncertainty Quantification
by Yifan Sun, Qian Gao, Feng Li and Yuchuan Du
Appl. Sci. 2025, 15(20), 11250; https://doi.org/10.3390/app152011250 - 21 Oct 2025
Abstract
Pavement performance prediction serves as a core basis for maintenance decision-making. Although numerous studies have been conducted, most focus on road segments and aggregate indicators such as IRI and PCI, with limited attention to the daily deterioration of individual distresses. Subject to the combined influence of multiple factors, pavement distress deterioration exhibits pronounced nonlinear and time-lag characteristics, making distress-level predictions prone to disturbances and highly uncertain. To address this challenge, this study investigates the distress-level deterioration of three representative distresses—transverse cracks, alligator cracks, and potholes—with causal analysis and uncertainty quantification. Based on two years of high-frequency road inspection data, a continuous tracking dataset comprising 164 distress sites and 9038 records was established using a three-step matching algorithm. Convergent cross mapping was applied to quantify the causal strength and lag days of environmental factors, which were subsequently embedded into an encoder–decoder framework to construct a BayesLSTM model. Monte Carlo Dropout was employed to approximate Bayesian inference, enabling probabilistic characterization of predictive uncertainty and the construction of prediction intervals. Results indicate that integrating causal and time-lag characteristics improves the model’s capacity to identify key drivers and anticipate deterioration inflection points. The proposed BayesLSTM achieved high predictive accuracy across all three distress types, with a prediction interval coverage of 100%, thereby enhancing the reliability of prediction by providing both deterministic results and interval estimates. These findings facilitate the identification of high-risk distresses and their underlying mechanisms, offering support for rational allocation of maintenance resources. Full article
(This article belongs to the Special Issue New Technology for Road Surface Detection, 2nd Edition)
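The interval construction here follows the standard Monte Carlo Dropout recipe: keep dropout active at inference and treat repeated stochastic forward passes as approximate posterior samples. A minimal sketch in PyTorch; the model handle, 100 passes, and 95% level are illustrative assumptions, not the paper's configuration:

    import torch

    def mc_dropout_interval(model, x, n_passes=100, alpha=0.05):
        # Keep dropout stochastic at inference. Note that train() also
        # unfreezes batch-norm statistics, so toggle per-module for BN models.
        model.train()
        with torch.no_grad():
            samples = torch.stack([model(x) for _ in range(n_passes)])
        mean = samples.mean(dim=0)
        lower = samples.quantile(alpha / 2, dim=0)
        upper = samples.quantile(1 - alpha / 2, dim=0)
        return mean, lower, upper  # point prediction plus interval bounds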

20 pages, 1250 KB  
Article
Symmetric 3D Convolutional Network with Uncertainty Estimation for MRI-Based Striatal DaT-Uptake Assessment in Parkinson’s Disease
by Walid Abdullah Al, Il Dong Yun and Yun Jung Bae
Appl. Sci. 2025, 15(20), 10977; https://doi.org/10.3390/app152010977 - 13 Oct 2025
Abstract
Dopamine transporter (DaT) imaging is commonly used for monitoring Parkinson’s disease (PD), where the amount of striatal DaT uptake serves as the PD severity indicator. MRI of the nigral region has recently emerged as a safer and more available alternative. This work introduces a 3D convolutional network-based symmetric regressor for predicting the DaT-uptake amount from nigral MRI patches. Unlike typical deep networks, the proposed model leverages the lateral symmetry between right and left nigrae by incorporating a paired input–output architecture that concurrently predicts DaT uptakes for both the right and left striata, while employing a symmetric loss that constrains the difference between right-to-left predictions. To improve model reliability, we also propose a symmetric Monte Carlo dropout strategy that provides informative uncertainty estimates for each prediction. Evaluated on 734 3D nigral patches, our symmetric regressor demonstrated a 12.11% reduction in prediction error compared to standard deep-learning models. Furthermore, the reliability was enhanced, resulting in a 5% reduction in the prediction uncertainty interval at a 95% coverage probability for the true DaT-uptake amount. Our findings demonstrate that integrating structural symmetry into model design is a powerful strategy for achieving accurate and reliable predictions for PD severity analysis. Full article
(This article belongs to the Section Biomedical Engineering)
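One plausible reading of the paired-output symmetric loss is a per-side regression term plus a penalty tying the predicted right-to-left gap to the true gap. The weighting below is an assumption, not the authors' value:

    import torch.nn.functional as F

    def symmetric_loss(pred_r, pred_l, target_r, target_l, lam=0.1):
        # Ordinary regression error for each striatum...
        fit = F.mse_loss(pred_r, target_r) + F.mse_loss(pred_l, target_l)
        # ...plus a constraint on the right-to-left difference (lam is illustrative).
        sym = F.mse_loss(pred_r - pred_l, target_r - target_l)
        return fit + lam * sym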

22 pages, 9163 KB  
Article
Unsupervised Convolutional Transformer Autoencoder for Robust Health Indicator Construction and RUL Prediction in Rotating Machinery
by Amrit Dahal, Hong-Zhong Huang, Cheng-Geng Huang, Tudi Huang, Smaran Khanal and Sajawal Gul Niazi
Appl. Sci. 2025, 15(20), 10972; https://doi.org/10.3390/app152010972 - 13 Oct 2025
Abstract
Prognostics for rotating machinery, particularly bearings, encounter significant challenges in constructing reliable health indicators (HIs) that accurately reflect degradation trajectories, thereby enabling precise remaining useful life (RUL) predictions. This article proposes a novel integrated approach for predicting the RUL of bearings without manual feature engineering. Specifically, a sequential autoencoder integrating a convolutional neural network (CNN) and vision Transformer (ViT) is employed to capture the local spatial patterns and global temporal correlations of time-domain vibration signals. The Wasserstein distance is introduced to quantify the divergence between healthy and degraded signal embeddings, resulting in a robust HI metric. Subsequently, the derived HI is fed into a CNN-bidirectional long short-term memory regressor with Monte Carlo dropout to provide RUL predictions and Bayesian uncertainty estimates. Experimental results from the Xi’an Jiaotong University bearing dataset demonstrate that the proposed method surpasses conventional techniques in HI construction and RUL prediction accuracy, confirming its efficacy for complex industrial systems with minimal data preprocessing. Full article
(This article belongs to the Section Mechanical Engineering)
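The HI idea, a distance between healthy and degraded embedding distributions, can be sketched with SciPy's one-dimensional Wasserstein distance averaged over embedding dimensions. The aggregation is a simplification, not necessarily the paper's exact measure:

    import numpy as np
    from scipy.stats import wasserstein_distance

    def health_indicator(healthy_emb, current_emb):
        # Embeddings: (n_windows, n_dims). Mean per-dimension 1-D Wasserstein
        # distance stands in for the paper's divergence measure.
        return np.mean([
            wasserstein_distance(healthy_emb[:, d], current_emb[:, d])
            for d in range(healthy_emb.shape[1])
        ])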

26 pages, 4780 KB  
Article
Uncertainty Quantification Based on Block Masking of Test Images
by Pai-Xuan Wang, Chien-Hung Liu and Shingchern D. You
Information 2025, 16(10), 885; https://doi.org/10.3390/info16100885 - 11 Oct 2025
Abstract
In image classification tasks, models may occasionally produce incorrect predictions, which can lead to severe consequences in safety-critical applications. For instance, if a model mistakenly classifies a red traffic light as green, it could result in a traffic accident. Therefore, it is essential to assess the confidence level associated with each prediction. Predictions accompanied by high confidence scores are generally more reliable and can serve as a basis for informed decision-making. To address this, the present paper extends the block-scaling approach—originally developed for estimating classifier accuracy on unlabeled datasets—to compute confidence scores for individual samples in image classification. The proposed method, termed block masking confidence (BMC), applies a sliding mask filled with random noise to occlude localized regions of the input image. Each masked variant is classified, and predictions are aggregated across all variants. The final class is selected via majority voting, and a confidence score is derived based on prediction consistency. To evaluate the effectiveness of BMC, we conducted experiments comparing it against Monte Carlo (MC) dropout and a vanilla baseline across image datasets of varying sizes and distortion levels. While BMC does not consistently outperform the baselines under standard (in-distribution) conditions, it shows clear advantages on distorted and out-of-distribution (OOD) samples. Specifically, on the level-3 distorted iNaturalist 2018 dataset, BMC achieves a median expected calibration error (ECE) of 0.135, compared to 0.345 for MC dropout and 0.264 for the vanilla approach. On the level-3 distorted Places365 dataset, BMC yields an ECE of 0.173, outperforming MC dropout (0.290) and vanilla (0.201). For OOD samples in Places365, BMC achieves a peak entropy of 1.43, higher than the 1.06 observed for both MC dropout and vanilla. Furthermore, combining BMC with MC dropout leads to additional improvements. On distorted Places365, the median ECE is reduced to 0.151, and the peak entropy for OOD samples increases to 1.73. Overall, the proposed BMC method offers a promising framework for uncertainty quantification in image classification, particularly under challenging or distribution-shifted conditions. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
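The BMC procedure is easy to prototype: slide a noise-filled block over the image, classify each occluded variant, then take the majority class and its agreement ratio as the confidence score. Block size, stride, noise range, and the `classify` callable below are assumptions:

    import numpy as np

    def bmc_confidence(classify, image, block=32, stride=32, seed=0):
        # classify: hypothetical callable mapping an HxWxC array to a class id.
        rng = np.random.default_rng(seed)
        h, w, c = image.shape
        votes = []
        for top in range(0, h - block + 1, stride):
            for left in range(0, w - block + 1, stride):
                variant = image.copy()
                variant[top:top + block, left:left + block] = \
                    rng.uniform(0, 255, (block, block, c))
                votes.append(classify(variant))
        classes, counts = np.unique(votes, return_counts=True)
        # Majority class and the fraction of variants that agree with it.
        return classes[np.argmax(counts)], counts.max() / len(votes)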

20 pages, 1579 KB  
Article
Towards Trustworthy and Explainable-by-Design Large Language Models for Automated Teacher Assessment
by Yuan Li, Hang Yang and Quanrong Fang
Information 2025, 16(10), 882; https://doi.org/10.3390/info16100882 - 10 Oct 2025
Abstract
Conventional teacher assessment is labor-intensive and subjective. Prior LLM-based systems improve scale but rely on post hoc rationales and lack built-in trust controls. We propose an explainable-by-design framework that couples (i) Dual-Lens Hierarchical Attention—a global lens aligned to curriculum standards and a local lens aligned to subject-specific rubrics—with (ii) a Trust-Gated Inference module that combines Monte-Carlo-dropout calibration and adversarial debiasing, and (iii) an On-the-Spot Explanation generator that shares the same fused representation and predicted score used for decision making. Thus, explanations are decision-consistent and curriculum-anchored rather than retrofitted. On TeacherEval-2023, EdNet-Math, and MM-TBA, our model attains an Inter-Rater Consistency of 82.4%, Explanation Credibility of 0.78, Fairness Gap of 1.8%, and Expected Calibration Error of 0.032. Faithfulness is verified via attention-to-rubric alignment (78%) and counterfactual deletion tests, while trust gating reduces confidently wrong outputs and triggers reject-and-refer when uncertainty is high. The system retains 99.6% accuracy under cross-domain transfer and degrades only 4.1% with 15% ASR noise, reducing human review workload by 41%. This establishes a reproducible path to trustworthy and pedagogy-aligned LLMs for high-stakes educational evaluation. Full article
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)
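The reject-and-refer behaviour of trust gating reduces, in its simplest form, to a threshold on the predictive entropy of the MC-dropout-averaged class distribution. A sketch with an illustrative cutoff, not the paper's calibrated gate:

    import numpy as np

    def trust_gate(prob_samples, max_entropy=1.0):
        # prob_samples: (n_dropout_passes, n_classes) softmax outputs.
        mean_probs = prob_samples.mean(axis=0)
        entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum()
        if entropy > max_entropy:          # illustrative threshold
            return "refer_to_human", entropy
        return int(mean_probs.argmax()), entropy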

14 pages, 1932 KB  
Article
Skin Cancer Detection and Classification Through Medical Image Analysis Using EfficientNet
by Sima Das and Rishabh Kumar Addya
NDT 2025, 3(4), 23; https://doi.org/10.3390/ndt3040023 - 26 Sep 2025
Abstract
Skin cancer is one of the most prevalent and potentially lethal cancers worldwide, highlighting the need for accurate and timely diagnosis. Convolutional neural networks (CNNs) have demonstrated strong potential in automating skin lesion classification. In this study, we propose a multi-class classification model using EfficientNet-B0, a lightweight yet powerful CNN architecture, trained on the HAM10000 dermoscopic image dataset. All images were resized to 224 × 224 pixels and normalized using ImageNet statistics to ensure compatibility with the pre-trained network. Data augmentation and preprocessing addressed class imbalance, resulting in a balanced dataset of 7512 images across seven diagnostic categories. The baseline model achieved 77.39% accuracy, which improved to 89.36% with transfer learning by freezing the convolutional base and training only the classification layer. Full network fine-tuning with test-time augmentation increased the accuracy to 96%, and the final model reached 97.15% when combined with Monte Carlo dropout. These results demonstrate EfficientNet-B0’s effectiveness for automated skin lesion classification and its potential as a clinical decision support tool. Full article
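A sketch of the final step, test-time MC dropout on EfficientNet-B0: re-enable only the Dropout modules so batch-norm statistics stay frozen, then average softmax outputs over repeated passes. The 30-pass count and the untrained, randomly initialized weights are placeholders:

    import torch
    from torchvision.models import efficientnet_b0

    model = efficientnet_b0(num_classes=7)     # seven HAM10000 classes
    model.eval()
    for m in model.modules():                  # keep BN frozen, dropout live
        if isinstance(m, torch.nn.Dropout):
            m.train()

    x = torch.randn(1, 3, 224, 224)            # stand-in for a normalized image
    with torch.no_grad():
        probs = torch.stack(
            [model(x).softmax(dim=1) for _ in range(30)]
        ).mean(dim=0)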

11 pages, 975 KB  
Article
Cost Effectiveness of Adjunctive Neurofeedback vs. Psychotherapy or Pharmacotherapy for Post-Traumatic Stress Disorder
by Jeffrey D. Voigt, Aron Tendler, Carl Marci and Linda L. Carpenter
Healthcare 2025, 13(19), 2388; https://doi.org/10.3390/healthcare13192388 - 23 Sep 2025
Abstract
Background: Neurofeedback shows promise as an adjunctive therapy for post-traumatic stress disorder (PTSD), but its cost effectiveness has not been studied. Objectives: To assess the cost and effectiveness of neurofeedback plus other therapies (NF + OT) vs. guideline therapies alone. Methods: TreeAge software was used to develop Markov models comparing NF + OT therapy to psychotherapy and pharmacotherapy over 1–3 years. Costs were derived from Medicare rates and literature. Effectiveness was measured using CAPS-5 score reductions converted to quality-adjusted life years (QALYs) using regression analysis. Dropout and relapse rates were derived from systematic reviews and meta-analysis. Results: NF + OT resulted in greater improvements in CAPS-5 scores and was less costly than OT. In the base case, NF + OT was less expensive (on average) for years 1–3 by USD 2568−USD 4140 (vs. psychotherapy) and USD 2282−USD 7217 (vs. pharmacotherapy). QALYs improved by 0.04 compared to psychotherapy and 0.24 compared to pharmacotherapy. NF + OT dominated (lower cost, better outcomes) psychotherapy 12% of the time and pharmacotherapy 26.5% of the time in Monte Carlo simulation. Further, Monte Carlo simulation did not demonstrate dominance at any point in time for either pharmacotherapy or psychotherapy over NF + OT. Conclusions: Based on lower costs and improved effectiveness, NF + OT should be considered for treating PTSD. Full article
(This article belongs to the Special Issue Healthcare Economics, Management, and Innovation for Health Systems)
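The dominance percentages come from probabilistic sensitivity analysis: sample cost and QALY differences, then count draws where NF + OT is both cheaper and more effective. A toy version with invented distributions; the paper's Markov model parameters are not reproduced here:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    cost_diff = rng.normal(-3354, 2000, n)   # NF+OT minus psychotherapy, USD (toy)
    qaly_diff = rng.normal(0.04, 0.05, n)    # QALY gain over psychotherapy (toy)
    p_dominant = np.mean((cost_diff < 0) & (qaly_diff > 0))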

17 pages, 5431 KB  
Article
Localization Meets Uncertainty: Uncertainty-Aware Multi-Modal Localization
by Hye-Min Won, Jieun Lee and Jiyong Oh
Technologies 2025, 13(9), 386; https://doi.org/10.3390/technologies13090386 - 1 Sep 2025
Abstract
Reliable localization is critical for robot navigation in complex indoor environments. In this paper, we propose an uncertainty-aware localization method that enhances the reliability of localization outputs without modifying the prediction model itself. This study introduces a percentile-based rejection strategy that filters out unreliable 3-degree-of-freedom pose predictions based on aleatoric and epistemic uncertainties the network estimates. We apply this approach to a multi-modal end-to-end localization that fuses RGB images and 2D LiDAR data, and we evaluate it across three real-world datasets collected using a commercialized serving robot. Experimental results show that applying stricter uncertainty thresholds consistently improves pose accuracy. Specifically, the mean position error, calculated as the average Euclidean distance between the predicted and ground-truth (x, y) coordinates, is reduced by 41.0%, 56.7%, and 69.4%, and the mean orientation error, representing the average angular deviation between the predicted and ground-truth yaw angles, is reduced by 55.6%, 65.7%, and 73.3%, when percentile thresholds of 90%, 80%, and 70% are applied, respectively. Furthermore, the rejection strategy effectively removes extreme outliers, resulting in better alignment with ground truth trajectories. To the best of our knowledge, this is the first study to quantitatively demonstrate the benefits of percentile-based uncertainty rejection in multi-modal and end-to-end localization tasks. Our approach provides a practical means to enhance the reliability and accuracy of localization systems in real-world deployments. Full article
(This article belongs to the Special Issue AI Robotics Technologies and Their Applications)
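The rejection rule itself is nearly a one-liner: keep the poses whose estimated uncertainty falls below a chosen percentile of the batch. A sketch assuming a combined scalar uncertainty per prediction (how the aleatoric and epistemic terms are fused is not shown here):

    import numpy as np

    def percentile_reject(poses, uncertainty, keep_percentile=90):
        # Retain the keep_percentile% most certain 3-DoF pose predictions.
        cutoff = np.percentile(uncertainty, keep_percentile)
        mask = uncertainty <= cutoff
        return poses[mask], mask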

19 pages, 5315 KB  
Article
Style-Aware and Uncertainty-Guided Approach to Semi-Supervised Domain Generalization in Medical Imaging
by Zineb Tissir, Yunyoung Chang and Sang-Woong Lee
Mathematics 2025, 13(17), 2763; https://doi.org/10.3390/math13172763 - 28 Aug 2025
Abstract
Deep learning has significantly advanced medical image analysis by enabling accurate, automated diagnosis across diverse clinical tasks such as lesion classification and disease detection. However, the practical deployment of these systems is still hindered by two major challenges: the limited availability of expert-annotated data and substantial domain shifts caused by variations in imaging devices, acquisition protocols, and patient populations. Although recent semi-supervised domain generalization (SSDG) approaches attempt to address these challenges, they often suffer from two key limitations: (i) reliance on computationally expensive uncertainty modeling techniques such as Monte Carlo dropout, and (ii) inflexible shared-head classifiers that fail to capture domain-specific variability across heterogeneous imaging styles. To overcome these limitations, we propose MultiStyle-SSDG, a unified semi-supervised domain generalization framework designed to improve model generalization in low-label scenarios. Our method introduces a multi-style ensemble pseudo-labeling strategy guided by entropy-based filtering, incorporates prototype-based conformity and semantic alignment to regularize the feature space, and employs a domain-specific multi-head classifier fused through attention-weighted prediction. Additionally, we introduce a dual-level neural-style transfer pipeline that simulates realistic domain shifts while preserving diagnostic semantics. We validated our framework on the ISIC2019 skin lesion classification benchmark using 5% and 10% labeled data. MultiStyle-SSDG consistently outperformed recent state-of-the-art methods such as FixMatch, StyleMatch, and UPLM, achieving statistically significant improvements in classification accuracy under simulated domain shifts including style, background, and corruption. Specifically, our method achieved 78.6% accuracy with 5% labeled data and 80.3% with 10% labeled data on ISIC2019, surpassing FixMatch by 4.9–5.3 percentage points and UPLM by 2.1–2.4 points. Ablation studies further confirmed the individual contributions of each component, and t-SNE visualizations illustrate enhanced intra-class compactness and cross-domain feature consistency. These results demonstrate that our style-aware, modular framework offers a robust and scalable solution for generalizable computer-aided diagnosis in real-world medical imaging settings. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
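The multi-style ensemble pseudo-labeling with entropy-based filtering can be sketched as: average the per-style softmax outputs, then keep only samples whose predictive entropy is below a threshold. The threshold value is an assumption:

    import numpy as np

    def filter_pseudo_labels(style_probs, max_entropy=0.5):
        # style_probs: (n_styles, n_samples, n_classes) softmax outputs.
        mean_probs = style_probs.mean(axis=0)
        entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
        keep = entropy < max_entropy       # illustrative cutoff
        return mean_probs.argmax(axis=1)[keep], keep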

27 pages, 5818 KB  
Article
Scenario-Based Stochastic Optimization for Renewable Integration Under Forecast Uncertainty: A South African Power System Case Study
by Martins Osifeko and Josiah Munda
Processes 2025, 13(8), 2560; https://doi.org/10.3390/pr13082560 - 13 Aug 2025
Abstract
South Africa’s transition to a renewable-powered grid faces critical challenges due to the inherent variability of wind and solar generation as well as the need for economically viable and reliable dispatch strategies. This study proposes a scenario-based stochastic optimization framework that integrates machine learning forecasting and uncertainty modeling to enhance operational decision making. A hybrid Long Short-Term Memory–XGBoost model is employed to forecast wind, photovoltaic (PV) power, concentrated solar power (CSP), and electricity demand, with Monte Carlo dropout and quantile regression used for uncertainty quantification. Scenarios are generated using appropriate probability distributions and are reduced via Temporal-Aware K-Means Scenario Reduction for tractability. A two-stage stochastic program then optimizes power dispatch under uncertainty, benchmarked against Deterministic, Rule-Based, and Perfect Information models. Simulation results over 7 days using five years of real-world South African energy data show that the stochastic model strikes a favorable balance between cost and reliability. It incurs a total system cost of ZAR 1.748 billion, with 1625 MWh of load shedding and 1283 MWh of curtailment, significantly outperforming the deterministic model (ZAR 1.763 billion; 3538 MWh load shedding; 59 MWh curtailment) and the rule-based model (ZAR 1.760 billion; 1809 MWh load shedding; 1475 MWh curtailment). The proposed stochastic framework demonstrates strong potential for improving renewable integration, reducing system penalties, and enhancing grid resilience in the face of forecast uncertainty. Full article
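As a stand-in for the Temporal-Aware K-Means step, plain K-Means over sampled forecast trajectories already shows the shape of scenario reduction: cluster the draws, keep centroids as representative scenarios, and weight them by cluster size. Dimensions and cluster count are illustrative:

    import numpy as np
    from sklearn.cluster import KMeans

    def reduce_scenarios(scenarios, k=10, seed=0):
        # scenarios: (n_draws, horizon) sampled forecast trajectories.
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(scenarios)
        weights = np.bincount(km.labels_, minlength=k) / len(scenarios)
        return km.cluster_centers_, weights   # representatives and probabilities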

26 pages, 8736 KB  
Article
Uncertainty-Aware Fault Diagnosis of Rotating Compressors Using Dual-Graph Attention Networks
by Seungjoo Lee, YoungSeok Kim, Hyun-Jun Choi and Bongjun Ji
Machines 2025, 13(8), 673; https://doi.org/10.3390/machines13080673 - 1 Aug 2025
Abstract
Rotating compressors are foundational in various industrial processes, particularly in the oil-and-gas sector, where reliable fault detection is crucial for maintaining operational continuity. While Graph Attention Network (GAT) frameworks are widely available, this study advances the state of the art by introducing a Bayesian GAT method specifically tailored for vibration-based compressor fault diagnosis. The approach integrates domain-specific digital-twin simulations built with Rotordynamic software (1.3.0), and constructs dual adjacency matrices to encode both physically informed and data-driven sensor relationships. Additionally, a hybrid forecasting-and-reconstruction objective enables the model to capture short-term deviations as well as long-term waveform fidelity. Monte Carlo dropout further decomposes prediction uncertainty into aleatoric and epistemic components, providing a more robust and interpretable model. Comparative evaluations against conventional Long Short-Term Memory (LSTM)-based autoencoder and forecasting methods demonstrate that the proposed framework achieves superior fault-detection performance across multiple fault types, including misalignment, bearing failure, and unbalance. Moreover, uncertainty analyses confirm that fault severity correlates with increasing levels of both aleatoric and epistemic uncertainty, reflecting heightened noise and reduced model confidence under more severe conditions. By enhancing GAT fundamentals with a domain-tailored dual-graph strategy, specialized Bayesian inference, and digital-twin data generation, this research delivers a comprehensive and interpretable solution for compressor fault diagnosis, paving the way for more reliable and risk-aware predictive maintenance in complex rotating machinery. Full article
(This article belongs to the Section Machines Testing and Maintenance)
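The aleatoric/epistemic split follows the usual MC-dropout decomposition for regression: epistemic uncertainty is the spread of the sampled means, aleatoric is the average predicted noise. The (mean, log-variance) output head below is a common recipe and an assumption about the authors' network:

    import torch

    def decompose_uncertainty(model, x, n_passes=50):
        # Assumes model(x) returns a (mean, log_variance) pair per pass.
        model.train()                          # keep dropout stochastic
        with torch.no_grad():
            outs = [model(x) for _ in range(n_passes)]
        means = torch.stack([m for m, _ in outs])
        aleatoric = torch.stack([lv.exp() for _, lv in outs]).mean(dim=0)
        epistemic = means.var(dim=0)
        return means.mean(dim=0), aleatoric, epistemic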

34 pages, 3704 KB  
Article
Uncertainty-Aware Deep Learning for Robust and Interpretable MI EEG Using Channel Dropout and LayerCAM Integration
by Óscar Wladimir Gómez-Morales, Sofia Escalante-Escobar, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Appl. Sci. 2025, 15(14), 8036; https://doi.org/10.3390/app15148036 - 18 Jul 2025
Abstract
Motor Imagery (MI) classification plays a crucial role in enhancing the performance of brain–computer interface (BCI) systems, thereby enabling advanced neurorehabilitation and the development of intuitive brain-controlled technologies. However, MI classification using electroencephalography (EEG) is hindered by spatiotemporal variability and the limited interpretability of deep learning (DL) models. To mitigate these challenges, dropout techniques are employed as regularization strategies. Nevertheless, the removal of critical EEG channels, particularly those from the sensorimotor cortex, can result in substantial spatial information loss, especially under limited training data conditions. This issue, compounded by high EEG variability in subjects with poor performance, hinders generalization and reduces the interpretability and clinical trust in MI-based BCI systems. This study proposes a novel framework integrating channel dropout—a variant of Monte Carlo dropout (MCD)—with class activation maps (CAMs) to enhance robustness and interpretability in MI classification. This integration represents a significant step forward by offering, for the first time, a dedicated solution to concurrently mitigate spatiotemporal uncertainty and provide fine-grained neurophysiologically relevant interpretability in motor imagery classification, particularly demonstrating refined spatial attention in challenging low-performing subjects. We evaluate three DL architectures (ShallowConvNet, EEGNet, TCNet Fusion) on a 52-subject MI-EEG dataset, applying channel dropout to simulate structural variability and LayerCAM to visualize spatiotemporal patterns. Results demonstrate that among the three evaluated deep learning models for MI-EEG classification, TCNet Fusion achieved the highest peak accuracy of 74.4% using 32 EEG channels. At the same time, ShallowConvNet recorded the lowest peak at 72.7%, indicating TCNet Fusion’s robustness in moderate-density montages. Incorporating MCD notably improved model consistency and classification accuracy, especially in low-performing subjects where baseline accuracies were below 70%; EEGNet and TCNet Fusion showed accuracy improvements of up to 10% compared to their non-MCD versions. Furthermore, LayerCAM visualizations enhanced with MCD transformed diffuse spatial activation patterns into more focused and interpretable topographies, aligning more closely with known motor-related brain regions and thereby boosting both interpretability and classification reliability across varying subject performance levels. Our approach offers a unified solution for uncertainty-aware, and interpretable MI classification. Full article
(This article belongs to the Special Issue EEG Horizons: Exploring Neural Dynamics and Neurocognitive Processes)
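Channel dropout, as opposed to element-wise dropout, zeroes entire EEG channels at once. A minimal inverted-dropout sketch; the rate is illustrative, and the paper's variant may differ:

    import torch

    def channel_dropout(eeg, p=0.2):
        # eeg: (batch, channels, time); drop whole channels, rescale the rest.
        keep = (torch.rand(eeg.size(0), eeg.size(1), 1,
                           device=eeg.device) > p).float()
        return eeg * keep / (1 - p)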

23 pages, 2250 KB  
Article
Machine Learning Techniques for Uncertainty Estimation in Dynamic Aperture Prediction
by Carlo Emilio Montanari, Robert B. Appleby, Davide Di Croce, Massimo Giovannozzi, Tatiana Pieloni, Stefano Redaelli and Frederik F. Van der Veken
Computers 2025, 14(7), 287; https://doi.org/10.3390/computers14070287 - 18 Jul 2025
Abstract
The dynamic aperture is an essential concept in circular particle accelerators, providing the extent of the phase space region where particle motion remains stable over multiple turns. The accurate prediction of the dynamic aperture is key to optimising performance in accelerators such as the CERN Large Hadron Collider and is crucial for designing future accelerators like the CERN Future Circular Hadron Collider. Traditional methods for computing the dynamic aperture are computationally demanding and involve extensive numerical simulations with numerous initial phase space conditions. In our recent work, we have devised surrogate models to predict the dynamic aperture boundary both efficiently and accurately. These models have been further refined by incorporating them into a novel active learning framework. This framework enhances performance through continual retraining and intelligent data generation based on informed sampling driven by error estimation. A critical attribute of this framework is the precise estimation of uncertainty in dynamic aperture predictions. In this study, we investigate various machine learning techniques for uncertainty estimation, including Monte Carlo dropout, bootstrap methods, and aleatory uncertainty quantification. We evaluated these approaches to determine the most effective method for reliable uncertainty estimation in dynamic aperture predictions using machine learning techniques. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
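Of the compared techniques, the bootstrap variant is the simplest to show: refit on resampled training data and read the ensemble spread as uncertainty. A generic scikit-learn regressor stands in for the paper's surrogate models:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def bootstrap_uncertainty(X, y, X_test, n_models=20, seed=0):
        rng = np.random.default_rng(seed)
        preds = []
        for _ in range(n_models):
            idx = rng.integers(0, len(X), len(X))     # resample with replacement
            preds.append(GradientBoostingRegressor()
                         .fit(X[idx], y[idx]).predict(X_test))
        preds = np.asarray(preds)
        return preds.mean(axis=0), preds.std(axis=0)  # estimate and spread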

17 pages, 18350 KB  
Article
Physics-Informed Deep Learning for Karst Spring Prediction: Integrating Variational Mode Decomposition and Long Short-Term Memory with Attention
by Liangjie Zhao, Stefano Fazi, Song Luan, Zhe Wang, Cheng Li, Yu Fan and Yang Yang
Water 2025, 17(14), 2043; https://doi.org/10.3390/w17142043 - 8 Jul 2025
Abstract
Accurately forecasting karst spring discharge remains a significant challenge due to the inherent nonstationarity and multi-scale hydrological dynamics of karst hydrological systems. This study presents a physics-informed variational mode decomposition long short-term memory (VMD-LSTM) model, enhanced with an attention mechanism and Monte Carlo dropout for uncertainty quantification. Hourly discharge data (2013–2018) from the Zhaidi karst spring in southern China were decomposed using VMD to extract physically interpretable temporal modes. These decomposed modes, alongside precipitation data, were input into an attention-augmented LSTM incorporating physics-informed constraints. The model was rigorously evaluated against a baseline standalone LSTM using an 80% training, 15% validation, and 5% testing data partitioning strategy. The results demonstrate substantial improvements in prediction accuracy for the proposed framework compared to the standard LSTM model. Compared to the baseline LSTM, the RMSE during testing decreased dramatically from 0.726 to 0.220, and the NSE improved from 0.867 to 0.988. The performance gains were most significant during periods of rapid conduit flow (the peak RMSE decreased by 67%) and prolonged recession phases. Additionally, Monte Carlo dropout, using 100 stochastic realizations, effectively quantified predictive uncertainty, achieving over 96% coverage in the 95% confidence interval (CI). The developed framework provides robust, accurate, and reliable predictions under complex hydrological conditions, highlighting substantial potential for supporting karst groundwater resource management and enhancing flood early-warning capabilities. Full article
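The reported 96% coverage of the 95% confidence interval is an empirical count over the 100 stochastic realizations; a sketch of that check:

    import numpy as np

    def interval_coverage(mc_samples, y_true, alpha=0.05):
        # mc_samples: (n_realizations, n_steps) MC-dropout predictions.
        lower = np.quantile(mc_samples, alpha / 2, axis=0)
        upper = np.quantile(mc_samples, 1 - alpha / 2, axis=0)
        return np.mean((y_true >= lower) & (y_true <= upper))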

15 pages, 2722 KB  
Article
Predicting the Evolution of Capacity Degradation Histograms of Rechargeable Batteries Under Dynamic Loads via Latent Gaussian Processes
by Daocan Wang, Xinggang Li and Jiahuan Lu
Energies 2025, 18(13), 3503; https://doi.org/10.3390/en18133503 - 2 Jul 2025
Abstract
Accurate prediction of lithium-ion battery capacity degradation under dynamic loads is crucial yet challenging due to limited data availability and high cell-to-cell variability. This study proposes a Latent Gaussian Process (GP) model to forecast the full distribution of capacity fade in the form of high-dimensional histograms, rather than relying on point estimates. The model integrates Principal Component Analysis with GP regression to learn temporal degradation patterns from partial early-cycle data of a target cell, using a fully degraded reference cell. Experiments on the NASA dataset with randomized dynamic load profiles demonstrate that Latent GP enables full-lifecycle capacity distribution prediction using only early-cycle observations. Compared with standard GP, long short-term memory (LSTM), and Monte Carlo Dropout LSTM baselines, it achieves superior accuracy in terms of Kullback–Leibler divergence and mean squared error. Sensitivity analyses further confirm the model’s robustness to input noise and hyperparameter settings, highlighting its potential for practical deployment in real-world battery health prognostics. Full article
(This article belongs to the Section D: Energy Storage and Application)
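The Latent GP pipeline (PCA to a low-dimensional latent trajectory, GP regression over cycles, inverse transform back to histograms) can be sketched with scikit-learn on toy shapes; the dimensions, cycle counts, and random data are invented:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.gaussian_process import GaussianProcessRegressor

    ref_hists = np.random.rand(200, 30)        # reference cell: cycles x bins (toy)
    pca = PCA(n_components=3).fit(ref_hists)
    latent = pca.transform(ref_hists)

    cycles = np.arange(200).reshape(-1, 1)
    gp = GaussianProcessRegressor().fit(cycles[:50], latent[:50])  # early cycles
    latent_mean, latent_std = gp.predict(cycles, return_std=True)
    hist_pred = pca.inverse_transform(latent_mean)  # full-lifecycle histograms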
