Search Results (39)

Search Parameters:
Keywords = aleatoric

26 pages, 8736 KiB  
Article
Uncertainty-Aware Fault Diagnosis of Rotating Compressors Using Dual-Graph Attention Networks
by Seungjoo Lee, YoungSeok Kim, Hyun-Jun Choi and Bongjun Ji
Machines 2025, 13(8), 673; https://doi.org/10.3390/machines13080673 (registering DOI) - 1 Aug 2025
Abstract
Rotating compressors are foundational in various industrial processes, particularly in the oil-and-gas sector, where reliable fault detection is crucial for maintaining operational continuity. While Graph Attention Network (GAT) frameworks are widely available, this study advances the state of the art by introducing a Bayesian GAT method specifically tailored for vibration-based compressor fault diagnosis. The approach integrates domain-specific digital-twin simulations built with Rotordynamic software (1.3.0), and constructs dual adjacency matrices to encode both physically informed and data-driven sensor relationships. Additionally, a hybrid forecasting-and-reconstruction objective enables the model to capture short-term deviations as well as long-term waveform fidelity. Monte Carlo dropout further decomposes prediction uncertainty into aleatoric and epistemic components, providing a more robust and interpretable model. Comparative evaluations against conventional Long Short-Term Memory (LSTM)-based autoencoder and forecasting methods demonstrate that the proposed framework achieves superior fault-detection performance across multiple fault types, including misalignment, bearing failure, and unbalance. Moreover, uncertainty analyses confirm that fault severity correlates with increasing levels of both aleatoric and epistemic uncertainty, reflecting heightened noise and reduced model confidence under more severe conditions. By enhancing GAT fundamentals with a domain-tailored dual-graph strategy, specialized Bayesian inference, and digital-twin data generation, this research delivers a comprehensive and interpretable solution for compressor fault diagnosis, paving the way for more reliable and risk-aware predictive maintenance in complex rotating machinery. Full article
(This article belongs to the Section Machines Testing and Maintenance)
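
As a rough illustration of the Monte Carlo dropout decomposition described above (not the paper's dual-graph attention network), the sketch below keeps dropout active at inference and reads epistemic uncertainty from the spread of predicted means and aleatoric uncertainty from a learned variance head; the regressor, layer sizes, and sample count are placeholder assumptions.

# Minimal sketch: decomposing predictive uncertainty with Monte Carlo dropout.
# The stand-in regressor below is not the paper's dual-graph attention model.
import torch
import torch.nn as nn

class MCDropoutRegressor(nn.Module):
    def __init__(self, in_dim, hidden=64, p=0.2):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p))
        self.mean_head = nn.Linear(hidden, 1)      # predicted signal value
        self.logvar_head = nn.Linear(hidden, 1)    # per-sample (aleatoric) log-variance

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def mc_dropout_uncertainty(model, x, n_samples=50):
    model.train()                                   # keep dropout stochastic at test time
    means, variances = [], []
    with torch.no_grad():
        for _ in range(n_samples):
            mu, logvar = model(x)
            means.append(mu)
            variances.append(logvar.exp())
    means = torch.stack(means)                      # (T, N, 1)
    aleatoric = torch.stack(variances).mean(dim=0)  # average predicted noise variance
    epistemic = means.var(dim=0)                    # spread of the means across passes
    return means.mean(dim=0), aleatoric, epistemic

model = MCDropoutRegressor(in_dim=8)
x = torch.randn(16, 8)
mu, aleatoric, epistemic = mc_dropout_uncertainty(model, x)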

32 pages, 1666 KiB  
Article
Dimension-Adaptive Machine Learning for Efficient Uncertainty Quantification in Geological Carbon Storage Models
by Seyed Kourosh Mahjour, Ali Saleh and Seyed Saman Mahjour
Processes 2025, 13(6), 1834; https://doi.org/10.3390/pr13061834 - 10 Jun 2025
Viewed by 824
Abstract
Carbon capture and storage (CCS) plays a role in mitigating climate change, but effective implementation requires accurate prediction of CO2 behavior in geological formations. This study introduces a novel machine learning framework for quantifying uncertainty across 2D and 3D carbon storage models. We develop a dimension-adaptive Bayesian neural network architecture that enables efficient knowledge transfer between dimensional representations while maintaining physical consistency. The framework incorporates aleatoric uncertainty from inherent geological variability and epistemic uncertainty from model limitations. Trained on over 5000 high-fidelity simulations across multiple geological scenarios, our approach demonstrates superior computational efficiency, reducing analysis time for 3D models by 87% while maintaining prediction accuracy within 5% of full simulations. The framework effectively captures complex uncertainty patterns in spatiotemporal CO2 plume evolution. It identifies previously unrecognized parameter interdependencies, particularly between vertical permeability anisotropy and capillary entry pressure, which significantly impact plume migration in 3D models but are often overlooked in 2D representations. Compared with traditional Monte Carlo methods, our approach provides more accurate uncertainty bounds and enhanced identification of high-risk scenarios. This multidimensional framework enables rapid assessment of storage capacity and leakage risk under uncertainty, providing a practical tool for CCS site selection and operational decision-making across dimensional scales. Full article
(This article belongs to the Section Environmental and Green Processes)
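
A minimal sketch of the heteroscedastic Gaussian negative log-likelihood that lets a surrogate express input-dependent (aleatoric) noise, one ingredient of frameworks like the one above; the tensors are placeholders and the paper's dimension-adaptive architecture is not reproduced here.

# Heteroscedastic (aleatoric) training objective: learn a per-input log-variance.
import torch

def gaussian_nll(mu, logvar, y):
    # Negative log-likelihood of y under N(mu, exp(logvar)), up to a constant;
    # minimising it lets the network express input-dependent noise.
    return 0.5 * (logvar + (y - mu) ** 2 / logvar.exp()).mean()

mu = torch.randn(32, 1, requires_grad=True)       # stand-in predicted means
logvar = torch.zeros(32, 1, requires_grad=True)   # stand-in predicted log-variances
y = torch.randn(32, 1)                            # stand-in observations
loss = gaussian_nll(mu, logvar, y)
loss.backward()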

54 pages, 6418 KiB  
Review
Navigating Uncertainty: Advanced Techniques in Pedestrian Intention Prediction for Autonomous Vehicles—A Comprehensive Review
by Alireza Mirzabagheri, Majid Ahmadi, Ning Zhang, Reza Alirezaee, Saeed Mozaffari and Shahpour Alirezaee
Vehicles 2025, 7(2), 57; https://doi.org/10.3390/vehicles7020057 - 9 Jun 2025
Viewed by 1401
Abstract
The World Health Organization reports approximately 1.35 million fatalities annually due to road traffic accidents, with pedestrians constituting 23% of these deaths. This highlights the critical need to enhance pedestrian safety, especially given the significant role human error plays in road accidents. Autonomous vehicles present a promising solution to mitigate these fatalities by improving road safety through advanced prediction of pedestrian behavior. With the autonomous vehicle market projected to grow substantially and offer various economic benefits, including reduced driving costs and enhanced safety, understanding and predicting pedestrian actions and intentions is essential for integrating autonomous vehicles into traffic systems effectively. Despite significant advancements, replicating human social understanding in autonomous vehicles remains challenging, particularly in predicting the complex and unpredictable behavior of vulnerable road users like pedestrians. Moreover, the inherent uncertainty in pedestrian behavior adds another layer of complexity, requiring robust methods to quantify and manage this uncertainty effectively. This review provides a structured and in-depth analysis of pedestrian intention prediction techniques, with a unique focus on how uncertainty is modeled and managed. We categorize existing approaches based on prediction duration, feature type, and model architecture, and critically examine benchmark datasets and performance metrics. Furthermore, we explore the implications of uncertainty types—epistemic and aleatoric—and discuss their integration into autonomous vehicle systems. By synthesizing recent developments and highlighting the limitations of current methodologies, this paper aims to advance the understanding of Pedestrian intention Prediction and contribute to safer and more reliable autonomous vehicle deployment. Full article

20 pages, 1172 KiB  
Article
Uncertainty-Aware Parking Prediction Using Bayesian Neural Networks
by Alireza Nezhadettehad, Arkady Zaslavsky, Abdur Rakib and Seng W. Loke
Sensors 2025, 25(11), 3463; https://doi.org/10.3390/s25113463 - 30 May 2025
Viewed by 776
Abstract
Parking availability prediction is a critical component of intelligent transportation systems, aiming to reduce congestion and improve urban mobility. While traditional deep learning models such as Long Short-Term Memory (LSTM) networks have been widely applied, they lack mechanisms to quantify uncertainty, limiting their robustness in real-world deployments. This paper proposes a Bayesian Neural Network (BNN)-based framework for parking occupancy prediction that explicitly models both epistemic and aleatoric uncertainty. Although BNNs have shown promise in other domains, they remain underutilised in parking prediction—likely due to the computational complexity and the absence of real-time context integration in earlier approaches. Our approach leverages contextual features, including temporal and environmental factors, to enhance uncertainty-aware predictions. The framework is evaluated under varying data conditions, including data scarcity (90%, 50%, and 10% of training data) and synthetic noise injection to simulate aleatoric uncertainty. Results demonstrate that BNNs outperform other methods, achieving an average accuracy improvement of 27.4% in baseline conditions, with consistent gains under limited and noisy data. Applying uncertainty thresholds at 20% and 30% further improves reliability by enabling selective, confidence-based decision making. This research shows that modelling both types of uncertainty leads to significantly improved predictive performance in intelligent transportation systems and highlights the potential of uncertainty-aware approaches as a foundation for future work on integrating BNNs with hybrid neuro-symbolic reasoning to enhance decision making under uncertainty. Full article
(This article belongs to the Special Issue Sensors in 2025)
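
The selective, confidence-based decision rule described above can be sketched as acting only on predictions whose estimated uncertainty falls below a threshold (20% or 30% in the abstract); the arrays below are synthetic placeholders, not the paper's data or exact thresholding scheme.

# Confidence-based selective prediction on synthetic parking data.
import numpy as np

rng = np.random.default_rng(0)
pred = rng.random(1000)                            # predicted probability a bay is occupied
truth = (rng.random(1000) < pred).astype(int)      # synthetic ground truth
uncertainty = rng.random(1000)                     # e.g. predictive std from a BNN, scaled to [0, 1]

for threshold in (0.2, 0.3):
    accepted = uncertainty < threshold
    coverage = accepted.mean()
    accuracy = ((pred[accepted] > 0.5) == truth[accepted]).mean() if accepted.any() else float("nan")
    print(f"threshold={threshold:.2f} coverage={coverage:.2%} accuracy={accuracy:.2%}")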

18 pages, 6030 KiB  
Article
Uncertainty Quantification to Assess the Generalisability of Automated Masonry Joint Segmentation Methods
by Jack M. W. Smith and Chrysothemis Paraskevopoulou
Infrastructures 2025, 10(4), 98; https://doi.org/10.3390/infrastructures10040098 - 18 Apr 2025
Viewed by 586
Abstract
Masonry-lined tunnels form a vital part of the world’s operational railway networks. However, in many cases their structural condition is deteriorating, so it is vital to undertake regular condition assessments to ensure their safety. In order to reduce costs and improve the repeatability of these assessments, automated deep learning-based tunnel analysis workflows have been proposed. However, for such methods to be applied in practice to a safety-critical situation, it is necessary to validate their conclusions. This study analysed how uncertainty quantification methods can be used to assess the test time performance of neural networks trained for masonry joint segmentation without the laborious labelling of additional ground truths. It applies test-time augmentation (TTA) and Monte Carlo dropout (MCD) to evaluate both the aleatoric and epistemic uncertainties of a selection of trained models. It then shows how these can be used to generate uncertainty maps to aid an engineer’s interpretation of the neural network output. Full article
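
A minimal sketch of how test-time augmentation yields a per-pixel uncertainty map of the kind described above; the flip set, the function name tta_uncertainty, and the toy stand-in network are illustrative assumptions rather than the authors' pipeline.

# Test-time augmentation (TTA) uncertainty map for a segmentation model.
import torch

def tta_uncertainty(model, image):
    model.eval()
    flips = [lambda t: t, lambda t: torch.flip(t, dims=[-1]), lambda t: torch.flip(t, dims=[-2])]
    preds = []
    with torch.no_grad():
        for f in flips:
            p = model(f(image))
            preds.append(f(p))              # each flip is its own inverse, restoring orientation
    preds = torch.stack(preds)              # (T, N, 1, H, W)
    return preds.mean(dim=0), preds.var(dim=0)   # mean mask, per-pixel uncertainty map

class _ToyNet(torch.nn.Module):
    # Stand-in for a trained joint-segmentation network returning a probability map.
    def forward(self, t):
        return torch.sigmoid(t.mean(dim=1, keepdim=True))

mean_mask, uncertainty_map = tta_uncertainty(_ToyNet(), torch.randn(1, 3, 64, 64))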

20 pages, 8363 KiB  
Article
Predicting Stress–Strain Curve with Confidence: Balance Between Data Minimization and Uncertainty Quantification by a Dual Bayesian Model
by Tianyi Li, Zhengyuan Chen, Zhen Zhang, Zhenhua Wei, Gan-Ji Zhong, Zhong-Ming Li and Han Liu
Polymers 2025, 17(4), 550; https://doi.org/10.3390/polym17040550 - 19 Feb 2025
Cited by 1 | Viewed by 767
Abstract
Driven by polymer processing–property data, machine learning (ML) presents an efficient paradigm in predicting the stress–strain curve. However, it is generally challenged by (i) the deficiency of training data, (ii) the one-to-many issue of processing–property relationship (i.e., aleatoric uncertainty), and (iii) the unawareness of model uncertainty (i.e., epistemic uncertainty). Here, leveraging a Bayesian neural network (BNN) and a recently proposed dual-architected model for curve prediction, we introduce a dual Bayesian model that enables accurate prediction of the stress–strain curve while distinguishing between aleatoric and epistemic uncertainty at each processing condition. The model is trained using a Taguchi array dataset that minimizes the data size while maximizing the representativeness of 27 samples in a 4D processing parameter space, significantly reducing data requirements. By incorporating hidden layers and output-distribution layers, the model quantifies both aleatoric and epistemic uncertainty, aligning with experimental data fluctuations, and provides a 95% confidence interval for stress–strain predictions at each processing condition. Overall, this study establishes an uncertainty-aware framework for curve property prediction with reliable, modest uncertainty at a small data size, thus balancing data minimization and uncertainty quantification. Full article
(This article belongs to the Special Issue Simulation and Calculation of Polymer Composite Materials)
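
Once aleatoric and epistemic variances are available at each strain point, a 95% confidence band can be assembled as in the sketch below; the curves and variance profiles are invented placeholders, not outputs of the paper's dual Bayesian model.

# Combining aleatoric and epistemic variance into a 95% confidence band.
import numpy as np

strain = np.linspace(0, 0.5, 100)
mean_stress = 30 * (1 - np.exp(-10 * strain))      # stand-in mean prediction (MPa)
aleatoric_var = np.full_like(strain, 0.5)          # stand-in data scatter between repeats
epistemic_var = 0.2 + 2.0 * strain                 # stand-in model uncertainty

total_std = np.sqrt(aleatoric_var + epistemic_var)
lower, upper = mean_stress - 1.96 * total_std, mean_stress + 1.96 * total_std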

35 pages, 4267 KiB  
Article
Uncertainty-Aware Multimodal Trajectory Prediction via a Single Inference from a Single Model
by Ho Suk and Shiho Kim
Sensors 2025, 25(1), 217; https://doi.org/10.3390/s25010217 - 2 Jan 2025
Viewed by 1624
Abstract
In the domain of autonomous driving, trajectory prediction plays a pivotal role in ensuring the safety and reliability of autonomous systems, especially when navigating complex environments. Unfortunately, trajectory prediction suffers from uncertainty problems due to the randomness inherent in the driving environment, yet uncertainty quantification in trajectory prediction is not widely addressed, and most studies rely on deep ensemble methods. This study presents a novel uncertainty-aware multimodal trajectory prediction (UAMTP) model that quantifies aleatoric and epistemic uncertainties through a single forward inference. Our approach employs deterministic single forward pass methods, optimizing computational efficiency while retaining robust prediction accuracy. By decomposing trajectory prediction into velocity and yaw components and quantifying uncertainty in both, the UAMTP model generates multimodal predictions that account for environmental randomness and intention ambiguity. Evaluation on datasets collected with the CARLA simulator demonstrates that our model not only outperforms a deep ensemble-based multimodal trajectory prediction method on accuracy metrics such as minFDE and miss rate but also offers more time to react in collision avoidance scenarios. This research marks a step forward in integrating efficient uncertainty quantification into multimodal trajectory prediction tasks within resource-constrained autonomous driving platforms. Full article
(This article belongs to the Section Vehicular Sensing)

25 pages, 8887 KiB  
Article
A Gaussian Process-Enhanced Non-Linear Function and Bayesian Convolution–Bayesian Long Term Short Memory Based Ultra-Wideband Range Error Mitigation Method for Line of Sight and Non-Line of Sight Scenarios
by A. S. M. Sharifuzzaman Sagar, Samsil Arefin, Eesun Moon, Md Masud Pervez Prince, L. Minh Dang, Amir Haider and Hyung Seok Kim
Mathematics 2024, 12(23), 3866; https://doi.org/10.3390/math12233866 - 9 Dec 2024
Viewed by 1279
Abstract
Relative positioning accuracy between two devices is dependent on the precise range measurements. Ultra-wideband (UWB) technology is one of the popular and widely used technologies to achieve centimeter-level accuracy in range measurement. Nevertheless, harsh indoor environments, multipath issues, reflections, and bias due to antenna delay degrade the range measurement performance in line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios. This article proposes an efficient and robust method to mitigate range measurement error in LOS and NLOS conditions by combining the latest artificial intelligence technology. A GP-enhanced non-linear function is proposed to mitigate the range bias in LOS scenarios. Moreover, NLOS identification based on the sliding window and Bayesian Conv-BLSTM method is utilized to mitigate range error due to the non-line-of-sight conditions. A novel spatial–temporal attention module is proposed to improve the performance of the proposed model. The epistemic and aleatoric uncertainty estimation method is also introduced to determine the robustness of the proposed model for environment variance. Furthermore, moving average and min-max removing methods are utilized to minimize the standard deviation in the range measurements in both scenarios. Extensive experimentation with different settings and configurations has proven the effectiveness of our methodology and demonstrated the feasibility of our robust UWB range error mitigation for LOS and NLOS scenarios. Full article
(This article belongs to the Special Issue Modeling and Simulation in Engineering, 3rd Edition)
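
The moving-average and min-max removal step mentioned above can be sketched as a sliding window that drops the extreme readings before averaging; the window size and synthetic range trace below are assumptions, not the paper's configuration.

# Sliding-window moving average with min/max removal for UWB range readings.
import numpy as np

def smooth_ranges(ranges, window=7):
    ranges = np.asarray(ranges, dtype=float)
    out = ranges.copy()
    for i in range(window, len(ranges) + 1):
        w = np.sort(ranges[i - window:i])
        out[i - 1] = w[1:-1].mean()        # drop the min and max, average the rest
    return out

raw = 3.0 + 0.05 * np.random.default_rng(1).standard_normal(200)   # synthetic ranges (metres)
raw[50] += 0.8                                                      # NLOS-style outlier
print(raw.std(), smooth_ranges(raw).std())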

23 pages, 1060 KiB  
Article
Uncertainty-Aware Time Series Anomaly Detection
by Paul Wiessner, Grigor Bezirganyan, Sana Sellami, Richard Chbeir and Hans-Joachim Bungartz
Future Internet 2024, 16(11), 403; https://doi.org/10.3390/fi16110403 - 31 Oct 2024
Cited by 3 | Viewed by 3662
Abstract
Traditional anomaly detection methods in time series data often struggle with inherent uncertainties like noise and missing values. Indeed, current approaches mostly focus on quantifying epistemic uncertainty and ignore data-dependent uncertainty. However, consideration of noise in data is important as it may have the potential to lead to more robust detection of anomalies and a better capability of distinguishing between real anomalies and anomalous patterns provoked by noise. In this paper, we propose LSTMAE-UQ (Long Short-Term Memory Autoencoder with Aleatoric and Epistemic Uncertainty Quantification), a novel approach that incorporates both aleatoric (data noise) and epistemic (model uncertainty) uncertainties for more robust anomaly detection. The model combines the strengths of LSTM networks for capturing complex time series relationships and autoencoders for unsupervised anomaly detection and quantifies uncertainties based on the Bayesian posterior approximation method Monte Carlo (MC) Dropout, enabling a deeper understanding of noise recognition. Our experimental results across different real-world datasets show that consideration of uncertainty effectively increases the robustness to noise and point outliers, making predictions more reliable for longer periodic sequential data. Full article
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies)
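
A rough sketch of the general pattern (not the exact LSTMAE-UQ architecture): an LSTM autoencoder with dropout left active at test time, so repeated reconstructions give an epistemic spread alongside the usual reconstruction error. Layer sizes, window shape, and the scoring below are illustrative assumptions.

# LSTM autoencoder with MC dropout; anomaly score from reconstruction error
# plus the disagreement between stochastic reconstructions.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features, hidden=32, p=0.2):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.dropout = nn.Dropout(p)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):
        _, (h, _) = self.encoder(x)                      # summary of the window
        z = self.dropout(h[-1]).unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z)
        return self.out(dec)

def anomaly_score(model, x, n_samples=30):
    model.train()                                        # keep dropout stochastic
    with torch.no_grad():
        recon = torch.stack([model(x) for _ in range(n_samples)])
    error = (recon.mean(0) - x).abs().mean(dim=(1, 2))   # reconstruction error per window
    epistemic = recon.var(0).mean(dim=(1, 2))            # disagreement across passes
    return error, epistemic

x = torch.randn(8, 50, 3)                                # 8 windows, 50 steps, 3 sensors
print(anomaly_score(LSTMAutoencoder(3), x))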

40 pages, 2712 KiB  
Article
Improving Re-Identification by Estimating and Utilizing Diverse Uncertainty Types for Embeddings
by Markus Eisenbach, Andreas Gebhardt, Dustin Aganian and Horst-Michael Gross
Algorithms 2024, 17(10), 430; https://doi.org/10.3390/a17100430 - 26 Sep 2024
Viewed by 1156
Abstract
In most re-identification approaches, embedding vectors are compared to identify the best match for a given query. However, this comparison does not take into account whether the encoded information in the embedding vectors was extracted reliably from the input images. We propose the first attempt that illustrates how all three types of uncertainty, namely model uncertainty (also known as epistemic uncertainty), data uncertainty (also known as aleatoric uncertainty), and distributional uncertainty, can be estimated for embedding vectors. We provide evidence that we do indeed estimate these types of uncertainty, and that each type has its own value for improving re-identification performance. In particular, while the few state-of-the-art approaches that employ uncertainty for re-identification during inference utilize only data uncertainty to improve single-shot re-identification performance, we demonstrate that the estimated model uncertainty vector can be utilized to modify the feature vector. We explore the best method for utilizing the estimated model uncertainty based on the Market-1501 dataset and demonstrate that we are able to further enhance the performance above the already strong baseline UAL. Additionally, we show that the estimated distributional uncertainty resembles the degree to which the current sample is out-of-distribution. To illustrate this, we divide the distractor set of the Market-1501 dataset into four classes, each representing a different degree of out-of-distribution. By computing a score based on the estimated distributional uncertainty vector, we are able to correctly order the four distractor classes and to differentiate them from an in-distribution set to a significant extent. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (2nd Edition))

30 pages, 2097 KiB  
Article
Incoherence: A Generalized Measure of Complexity to Quantify Ensemble Divergence in Multi-Trial Experiments and Simulations
by Timothy Davey
Entropy 2024, 26(8), 683; https://doi.org/10.3390/e26080683 - 13 Aug 2024
Viewed by 1263
Abstract
Complex systems pose significant challenges to traditional scientific and statistical methods due to their inherent unpredictability and resistance to simplification. Accurately detecting complex behavior and the uncertainty that comes with it is therefore essential. Building on previous studies, we introduce a new information-theoretic measure, termed “incoherence”. By using an adapted Jensen-Shannon Divergence across an ensemble of outcomes, we quantify the aleatoric uncertainty of the system. We first compare this measure to established statistical tests using both continuous and discrete data, before demonstrating how incoherence can be applied to identify key characteristics of complex systems, including sensitivity to initial conditions, criticality, and response to perturbations. Full article
(This article belongs to the Special Issue Information and Self-Organization III)
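
A generalized Jensen-Shannon divergence across an ensemble of outcome histograms, the quantity the incoherence measure adapts, can be sketched as follows; this generic implementation and its toy inputs are assumptions, not the paper's exact normalisation.

# Generalized Jensen-Shannon divergence of an ensemble of outcome distributions.
import numpy as np
from scipy.stats import entropy

def ensemble_jsd(distributions):
    p = np.asarray(distributions, dtype=float)
    p = p / p.sum(axis=1, keepdims=True)              # normalise each trial's histogram
    mixture = p.mean(axis=0)
    # Entropy of the mixture minus the mean entropy of the members.
    return entropy(mixture) - np.mean([entropy(row) for row in p])

identical = [[1, 2, 3, 4]] * 5                         # trials agree -> divergence near 0
divergent = [[10, 0, 0, 0], [0, 10, 0, 0], [0, 0, 10, 0]]
print(ensemble_jsd(identical), ensemble_jsd(divergent))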

24 pages, 9831 KiB  
Article
A Novel Computational Instrument Based on a Universal Mixture Density Network with a Gaussian Mixture Model as a Backbone for Predicting COVID-19 Variants’ Distributions
by Yas Al-Hadeethi, Intesar F. El Ramley, Hiba Mohammed, Nada M. Bedaiwi and Abeer Z. Barasheed
Mathematics 2024, 12(8), 1254; https://doi.org/10.3390/math12081254 - 20 Apr 2024
Viewed by 1892
Abstract
Various published COVID-19 models have been used in epidemiological studies and healthcare planning to model and predict the spread of the disease and appropriately realign health measures and priorities given the resource limitations in the field of healthcare. However, a significant issue arises when these models fail to identify the distribution of the constituent variants of COVID-19 infections. Left unaddressed, this shortcoming makes health planning ineffective under limited healthcare resources and costs lives. This work presents a universal neural network (NN) computational instrument for predicting the mainstream symptomatic infection rate of COVID-19 and modelling the distribution of its associated variants. The NN is based on a mixture density network (MDN) with a Gaussian mixture model (GMM) object as a backbone. Twelve use cases were used to demonstrate the validity and reliability of the proposed MDN. The use cases included COVID-19 data for Canada and Saudi Arabia, two date ranges (300 and 500 days), two input data modes, and three activation functions, each with different implementations of the batch size and epoch value. This array of scenarios provided an opportunity to investigate the impacts of epistemic uncertainty (EU) and aleatoric uncertainty (AU) on the prediction model’s fitting. The model accuracy readings were in the high nineties based on a tolerance margin of 0.0125. The primary outcome of this work indicates that this easy-to-use universal MDN helps provide reliable predictions of COVID-19 variant distributions and the corresponding synthesized profile of the mainstream infection rate. Full article
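
A minimal sketch of a mixture density network with a Gaussian mixture head, the general construction the abstract builds on; layer sizes, component count, and the toy training step are illustrative assumptions rather than the authors' model.

# Mixture density network (MDN) with a Gaussian mixture output and its NLL loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDN(nn.Module):
    def __init__(self, in_dim=1, hidden=32, n_components=3):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.pi = nn.Linear(hidden, n_components)         # mixture weights (logits)
        self.mu = nn.Linear(hidden, n_components)         # component means
        self.log_sigma = nn.Linear(hidden, n_components)  # component scales (log)

    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    # Negative log-likelihood of y under the predicted Gaussian mixture.
    log_pi = F.log_softmax(pi_logits, dim=-1)
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = comp.log_prob(y.unsqueeze(-1))             # per-component log-likelihood
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

model = MDN()
x, y = torch.rand(64, 1), torch.rand(64)
loss = mdn_nll(*model(x), y)
loss.backward()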

29 pages, 6144 KiB  
Article
BayesNet: Enhancing UAV-Based Remote Sensing Scene Understanding with Quantifiable Uncertainties
by A. S. M. Sharifuzzaman Sagar, Jawad Tanveer, Yu Chen, L. Minh Dang, Amir Haider, Hyoung-Kyu Song and Hyeonjoon Moon
Remote Sens. 2024, 16(5), 925; https://doi.org/10.3390/rs16050925 - 6 Mar 2024
Cited by 6 | Viewed by 1979
Abstract
Remote sensing stands as a fundamental technique in contemporary environmental monitoring, facilitating extensive data collection and offering invaluable insights into the dynamic nature of the Earth’s surface. The advent of deep learning, particularly convolutional neural networks (CNNs), has further revolutionized this domain by enhancing scene understanding. However, despite the advancements, traditional CNN methodologies face challenges such as overfitting in imbalanced datasets and a lack of precise uncertainty quantification, crucial for extracting meaningful insights and enhancing the precision of remote sensing techniques. Addressing these critical issues, this study introduces BayesNet, a Bayesian neural network (BNN)-driven CNN model designed to normalize and estimate uncertainties, particularly aleatoric and epistemic, in remote sensing datasets. BayesNet integrates a novel channel–spatial attention module to refine feature extraction processes in remote sensing imagery, thereby ensuring a robust analysis of complex scenes. BayesNet was trained on four widely recognized unmanned aerial vehicle (UAV)-based remote sensing datasets, UCM21, RSSCN7, AID, and NWPU, and demonstrated good performance, achieving accuracies of 99.99%, 97.30%, 97.57%, and 95.44%, respectively. Notably, it has showcased superior performance over existing models in the AID, NWPU, and UCM21 datasets, with enhancements of 0.03%, 0.54%, and 0.23%, respectively. This improvement is significant in the context of complex scene classification of remote sensing images, where even slight improvements mark substantial progress against complex and highly optimized benchmarks. Moreover, a self-prepared remote sensing testing dataset is also introduced to test BayesNet against unseen data, and it achieved an accuracy of 96.39%, which showcases the effectiveness of the BayesNet in scene classification tasks. Full article
(This article belongs to the Section AI Remote Sensing)
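
A generic channel-spatial attention block of the kind mentioned above can be sketched as follows; this follows the common CBAM-style pattern and is an assumption, not necessarily the authors' exact module.

# Generic channel-spatial attention block for convolutional feature maps.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention: gate each feature map from its global average.
        gap = x.mean(dim=(2, 3))                              # (N, C)
        ca = torch.sigmoid(self.channel_mlp(gap))[..., None, None]
        x = x * ca
        # Spatial attention: gate each location from pooled channel statistics.
        pooled = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        sa = torch.sigmoid(self.spatial_conv(pooled))         # (N, 1, H, W)
        return x * sa

features = torch.randn(2, 64, 32, 32)
print(ChannelSpatialAttention(64)(features).shape)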

20 pages, 3084 KiB  
Article
Natural Gradient Boosting for Probabilistic Prediction of Soaked CBR Values Using an Explainable Artificial Intelligence Approach
by Esteban Díaz and Giovanni Spagnoli
Buildings 2024, 14(2), 352; https://doi.org/10.3390/buildings14020352 - 26 Jan 2024
Cited by 10 | Viewed by 2246
Abstract
The California bearing ratio (CBR) value of subgrade is the most used parameter for dimensioning flexible and rigid pavements. The test for determining the CBR value is typically conducted under soaked conditions and is costly, labour-intensive, and time-consuming. Machine learning (ML) techniques have been recently implemented in engineering practice to predict the CBR value from the soil index properties with satisfactory results. However, they provide only deterministic predictions, which do not account for the aleatoric uncertainty linked to input variables and the epistemic uncertainty inherent in the model itself. This work addresses this limitation by introducing an ML model based on the natural gradient boosting (NGBoost) algorithm, becoming the first study to estimate the soaked CBR value from this probabilistic perspective. A database of 2130 soaked CBR tests was compiled for this study. The NGBoost model showcased robust predictive performance, establishing itself as a reliable and effective algorithm for predicting the soaked CBR value. Furthermore, it produced probabilistic CBR predictions as probability density functions, facilitating the establishment of reliable confidence intervals, representing a notable improvement compared to conventional deterministic models. Finally, the Shapley additive explanations method was implemented to investigate the interpretability of the proposed model. Full article
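
A hedged sketch of probabilistic regression with natural gradient boosting, assuming the open-source ngboost package's NGBRegressor interface; the synthetic features stand in for soil index properties and are not the paper's 2130-test database.

# Probabilistic prediction with NGBoost: each prediction is a full Normal distribution.
import numpy as np
from ngboost import NGBRegressor
from ngboost.distns import Normal

rng = np.random.default_rng(0)
X = rng.random((200, 5))                                   # stand-in soil index properties
y = 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 200)     # stand-in soaked CBR values

ngb = NGBRegressor(Dist=Normal, n_estimators=300).fit(X, y)
dist = ngb.pred_dist(X[:5])
mean, std = dist.params["loc"], dist.params["scale"]
lower, upper = mean - 1.96 * std, mean + 1.96 * std        # per-sample confidence interval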

12 pages, 1861 KiB  
Article
Cohesion: A Measure of Organisation and Epistemic Uncertainty of Incoherent Ensembles
by Timothy Davey
Entropy 2023, 25(12), 1605; https://doi.org/10.3390/e25121605 - 30 Nov 2023
Cited by 1 | Viewed by 1258
Abstract
This paper offers a measure of how organised a system is, as defined by self-consistency. Complex dynamics such as tipping points and feedback loops can cause systems with identical initial parameters to vary greatly by their final state. These systems can be called non-ergodic or incoherent. This lack of consistency (or replicability) of a system can be seen to drive an additional form of uncertainty, beyond the variance that is typically considered. However, certain self-organising systems can be shown to have some self-consistency around these tipping points, when compared with systems that find no consistent final states. Here, we propose a measure of this self-consistency that is used to quantify our confidence in the outcomes of agent-based models, simulations or experiments of dynamical systems, which may or may not contain multiple attractors. Full article
(This article belongs to the Special Issue Information and Self-Organization III)
