Search Results (23)

Search Parameters:
Keywords = epistemic modeling errors

29 pages, 11583 KB  
Article
The MTA-TPACK Dynamic Collaboration Spiral: Making Pedagogical Thinking Visible in Human–AI Scientific Visualization for Sustainable Teacher Innovation
by Hung-Cheng Chen and Lung-Hsiang Wong
Sustainability 2026, 18(6), 2718; https://doi.org/10.3390/su18062718 - 11 Mar 2026
Viewed by 143
Abstract
Generative AI (GenAI) challenges traditional technology integration frameworks by introducing agentic systems that actively participate in meaning-making, requiring educators to shift from tool operation to cognitive orchestration. This study introduces the MTA–TPACK Dynamic Collaboration Spiral, a theoretical framework that integrates Meta-Task Awareness (MTA) to explain how static knowledge resources are dynamically activated during human–AI collaboration. We empirically illustrate this framework through a two-phase scientific visualization task concerning typhoon–terrain interactions, utilizing Midjourney for human-led orchestration and GPT-4o for closed-loop refinement. The results demonstrate that successful integration requires translating abstract disciplinary knowledge into precise, AI-intelligible visual constraints rather than relying solely on technical prompting skills. Furthermore, we document how evaluation practices evolve from simple error correction to structured, AI-assisted diagnosis. The resulting visual artifacts embody Visible Pedagogical Thinking (VPT)—externalized cognitive constructs that make expert reasoning inspectable and reusable. By foregrounding evaluation-centered task design, this study provides a transferable, theoretically grounded account of how human–AI collaboration can remain pedagogically meaningful. The model contributes to sustainable pedagogical innovation by offering a roadmap for strengthening teachers’ long-term epistemic agency in AI-mediated design environments. Full article

34 pages, 13605 KB  
Article
BUM: Bayesian Uncertainty Minimization for Transferable Adversarial Examples in SAR Recognition
by Hongqiang Wang, Yuqing Lan, Fuzhan Yue, Zhenghuan Xia and Tao Zhang
Remote Sens. 2026, 18(5), 693; https://doi.org/10.3390/rs18050693 - 26 Feb 2026
Viewed by 207
Abstract
Adversarial examples pose a significant threat to Deep Neural Networks (DNNs) underpinning Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems, as these models exhibit acute susceptibility to such malicious inputs. While white-box attacks achieve high success rates, their transferability to unknown black-box models—particularly across different network architectures (e.g., from CNNs to Vision Transformers)—remains a significant challenge. Existing gradient-based iterative methods often overfit the specific decision boundary of the surrogate model, resulting in poor generalization. To address this, we propose a novel generative attack framework termed BUM. Instead of merely maximizing the classification error, BUM explicitly models and minimizes the epistemic uncertainty of the surrogate model. By leveraging Monte Carlo (MC) Dropout to simulate a Bayesian ensemble, we train a generator to craft perturbations that are consistently adversarial across stochastic sub-models. This regularization forces the attack to target high-level, structure-aware semantic features shared among architectures, rather than low-level, model-specific artifacts. Extensive experiments on the MSTAR and FUSAR datasets demonstrate the superior black-box transferability of BUM. Full article
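The core BUM idea, rewarding perturbations that remain adversarial across MC-Dropout sub-models rather than exploiting one decision boundary, can be sketched in a few lines. This is an illustrative assumption, not the paper's code: the toy linear surrogate, the hinge-style loss, and the mean-minus-variance objective are all stand-ins for the real generator training loss.

```python
import random

def mc_dropout_losses(x, weights, drop_p=0.5, passes=8, seed=0):
    """Toy surrogate: one linear unit whose weights are randomly zeroed on
    each stochastic forward pass (MC Dropout), giving an implicit Bayesian
    ensemble of sub-models."""
    rng = random.Random(seed)
    losses = []
    for _ in range(passes):
        kept = [w if rng.random() > drop_p else 0.0 for w in weights]
        logit = sum(wi * xi for wi, xi in zip(kept, x))
        losses.append(max(0.0, 1.0 - logit))  # hinge-style adversarial loss
    return losses

def bum_style_objective(losses, lam=1.0):
    """Reward a high mean loss across sub-models while penalising its
    variance: a perturbation that fools every stochastic sub-model equally
    should transfer better than one tuned to a single boundary."""
    n = len(losses)
    mean = sum(losses) / n
    var = sum((l - mean) ** 2 for l in losses) / n
    return mean - lam * var
```

Under this objective, a perturbation with uniform per-sub-model losses scores higher than one with the same mean loss but high variance.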

15 pages, 712 KB  
Article
Stage-Aware Governance of Large Language Models: Managing Uncertainty and Human Oversight in AI-Assisted Literature Review Systems
by Junic Kim and Haeyong Shin
Systems 2026, 14(2), 153; https://doi.org/10.3390/systems14020153 - 31 Jan 2026
Viewed by 543
Abstract
This study proposes a stage-aware governance framework for large language models (LLMs) that structures human oversight and accountability across different decision stages in AI-assisted literature review systems. LLMs are increasingly embedded in systematic review workflows, yet how such oversight and accountability should be allocated across stages remains unclear. This study evaluates three LLMs in a controlled two-stage literature review workflow (title-and-abstract screening and eligibility assessment) using identical evidence inputs and fixed inclusion criteria, with outputs benchmarked against expert consensus under fully reproducible conditions with standardized prompts and comprehensive logging. While the LLMs closely matched expert decisions during screening (precision 0.83–0.91; F1 up to 0.89; Cohen’s κ 0.65–0.85), performance degraded substantially at the eligibility stage (F1 0.58–0.65; κ 0.52–0.62), indicating increased epistemic uncertainty when fine-grained criteria must be inferred from abstract-level information. Importantly, disagreements clustered in borderline cases rather than arising from random error, supporting a stage-aware governance approach in which LLMs automate high-throughput screening while inter-model disagreement is operationalized as an actionable uncertainty signal that triggers human oversight in more consequential decision stages. These findings highlight the need for explicit oversight thresholds, responsibility allocation, and auditability in the responsible deployment of AI-assisted decision systems for evidence synthesis. Full article
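The stage-aware routing logic described above can be sketched as a small decision function. The rules here are illustrative assumptions, not the authors' exact thresholds: unanimous votes at the high-throughput screening stage are automated, while any inter-model disagreement, or a decision at the more consequential eligibility stage, escalates to a human reviewer.

```python
from collections import Counter

def route_decision(model_votes, stage):
    """Stage-aware routing sketch: automate only unanimous screening votes;
    treat inter-model disagreement as the actionable uncertainty signal
    that triggers human oversight."""
    unanimous = len(Counter(model_votes)) == 1
    if stage == "screening" and unanimous:
        return ("auto", model_votes[0])
    return ("human", None)  # disagreement, or a consequential stage
```

For example, three concordant "include" votes at screening are auto-accepted, while a 2–1 split on the same record is routed to a reviewer.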
(This article belongs to the Special Issue Ethics and Governance of Artificial Intelligence (AI) Systems)

24 pages, 4607 KB  
Article
Cross-Modal Interaction Fusion-Based Uncertainty-Aware Prediction Method for Industrial Froth Flotation Concentrate Grade by Using a Hybrid SKNet-ViT Framework
by Fanlei Lu, Weihua Gui, Yulong Wang, Jiayi Zhou and Xiaoli Wang
Sensors 2026, 26(1), 150; https://doi.org/10.3390/s26010150 - 25 Dec 2025
Viewed by 452
Abstract
In froth flotation, the features of froth images are an important source of information for predicting the concentrate grade. However, the froth structure is influenced by multiple factors, such as air flowrate, slurry level, ore properties, and reagents, which leads to highly complex and dynamic changes in the image features. In addition, issues such as the immeasurability of ore properties and measurement errors introduce significant uncertainties, both aleatoric (intrinsic variability from ore fluctuations and sensor noise) and epistemic (incomplete feature representation and local data heterogeneity), as well as generalization challenges for prediction models. This paper proposes an uncertainty quantification regression framework based on cross-modal interaction fusion, which integrates the complementary advantages of Selective Kernel Networks (SKNet) and Vision Transformers (ViT). A cross-modal interaction module achieves deep fusion of local and global features, reducing the epistemic uncertainty caused by incomplete feature expression in single models. Adaptive calibrated quantile regression, which uses an exponential moving average (EMA) to track real-time coverage and adjust parameters dynamically, optimizes prediction interval coverage, addressing the inability of static quantile regression to adapt to aleatoric uncertainty. A localized conformal prediction module enhances sensitivity to local data distributions, avoiding the limitation of global conformal methods, which ignore local heterogeneity. Experimental results demonstrate that this method significantly improves the robustness of uncertainty estimation while maintaining high prediction accuracy, providing strong support for intelligent optimization and decision-making in industrial flotation processes. Full article
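The EMA-based coverage tracking mentioned above can be sketched as a simple feedback loop. The update rule, parameter names, and gains below are illustrative assumptions, not the paper's calibration scheme: an exponential moving average tracks empirical coverage, and the interval width widens when coverage runs below target and narrows when it runs above.

```python
def ema_calibrated_width(hits, target=0.9, alpha=0.05, width=1.0, gain=0.5):
    """Adaptive interval calibration sketch: track coverage with an EMA and
    adjust the prediction-interval width toward the target coverage."""
    coverage = target
    for hit in hits:  # hit: did the interval contain the observed value?
        coverage = (1 - alpha) * coverage + alpha * (1.0 if hit else 0.0)
        width *= 1.0 + gain * (target - coverage)  # under-coverage -> widen
    return coverage, width
```

A run of misses drives the tracked coverage down and the width up; a run of hits does the opposite, which is the adaptivity static quantile regression lacks.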

39 pages, 26945 KB  
Article
Deep Learning-Based Prediction of Ship Roll Motion with Monte Carlo Dropout
by Gi-yong Kim, Chaeog Lim, Sang-jin Oh, In-hyuk Nam, Yu-mi Lee and Sung-chul Shin
J. Mar. Sci. Eng. 2025, 13(12), 2378; https://doi.org/10.3390/jmse13122378 - 15 Dec 2025
Cited by 1 | Viewed by 1167
Abstract
Accurate prediction of ship roll motion is essential for safe and autonomous navigation. This study presents a deep learning framework that estimates both roll motion and epistemic uncertainty using Monte Carlo (MC) Dropout. Two architectures, a Long Short-Term Memory (LSTM) network and a Transformer encoder, were trained on HydroD–Wasim simulations covering various sea states, speeds, and damage conditions, and validated with real voyage data from two ferries. Model performance was evaluated by mean squared error (MSE), prediction interval coverage probability (PICP), and prediction interval normalized average width (PINAW). The two architectures proved complementary: the LSTM achieved lower MSE, showing superior deterministic accuracy, while the Transformer produced higher PICP and wider PINAW, indicating more reliable uncertainty estimation. The results confirm that MC Dropout effectively quantifies epistemic uncertainty, improving the reliability of deep learning-based ship motion forecasting for intelligent maritime operations. Full article
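The two interval metrics used above are easy to compute from predicted bounds. These are the standard definitions; normalising PINAW by the target range is one common convention, assumed here.

```python
def picp_pinaw(y_true, lower, upper):
    """PICP: fraction of targets falling inside their prediction interval.
    PINAW: average interval width, normalised by the target range."""
    n = len(y_true)
    picp = sum(1 for y, lo, hi in zip(y_true, lower, upper) if lo <= y <= hi) / n
    y_range = max(y_true) - min(y_true)
    pinaw = sum(hi - lo for lo, hi in zip(lower, upper)) / (n * y_range)
    return picp, pinaw
```

A well-calibrated model keeps PICP near the nominal coverage with PINAW as small as possible; the trade-off between the two is exactly the LSTM-versus-Transformer contrast reported above.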
(This article belongs to the Special Issue Machine Learning for Prediction of Ship Motion)

28 pages, 4625 KB  
Article
Hybrid PCA-Based and Machine Learning Approaches for Signal-Based Interference Detection and Anomaly Classification Under Synthetic Data Conditions
by Sebastián Čikovský, Patrik Šváb and Peter Hanák
Sensors 2025, 25(24), 7585; https://doi.org/10.3390/s25247585 - 14 Dec 2025
Viewed by 709
Abstract
This article addresses anomaly detection in multichannel spatiotemporal data under strict low-false-alarm constraints (e.g., 1% False Positive Rate, FPR), a requirement essential for safety-critical applications such as signal interference monitoring in sensor networks. We introduce a lightweight, interpretable pipeline that deliberately avoids deep learning dependencies, implemented solely in NumPy and scikit-learn. The core innovation lies in fusing three complementary anomaly signals in an ensemble: (i) Principal Component Analysis (PCA) Reconstruction Error (MSE) to capture global structure deviations, (ii) Local Outlier Factor (LOF) on residual maps to detect local rarity, and (iii) Monte Carlo Variance as a measure of epistemic uncertainty in model predictions. These signals are combined via learned logistic regression (F*) and specialized Neyman–Pearson optimized fusion (F** and F***) to rigorously enforce bounded false alarms. Evaluated on synthetic benchmarks that simulate realistic anomalies and extensive SNR shifts (±12 dB), the fusion approach demonstrates exceptional robustness. While the best single baseline (MC-variance) achieves a True Positive Rate (TPR) of ≈0.60 at 1% FPR on the 0 dB hold-out, the fusion significantly raises this to ≈0.74 (F**), avoiding the performance collapse of baselines under degraded SNR (maintaining ≈ 0.62 TPR at −12 dB). This deployable solution provides a transparent, edge-ready anomaly detection capability that is highly effective at operating points critical for reliable monitoring in dynamic environments. Full article
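The bounded-false-alarm operating point described above can be sketched directly. Both functions are illustrative assumptions rather than the paper's implementation: the threshold is placed at the (1 − FPR) quantile of held-out normal scores, and the fusion uses fixed weights where the paper learns them with logistic regression.

```python
def np_threshold(normal_scores, fpr=0.01):
    """Neyman-Pearson-style operating point sketch: choose the score
    threshold so that at most `fpr` of held-out normal samples exceed it."""
    s = sorted(normal_scores)
    k = min(len(s) - 1, int((1.0 - fpr) * len(s)))
    return s[k]

def fused_score(mse, lof, mc_var, w=(1.0, 1.0, 1.0)):
    """Toy linear fusion of the three anomaly signals: PCA reconstruction
    error, LOF on residuals, and MC variance."""
    return w[0] * mse + w[1] * lof + w[2] * mc_var
```

Calibrating the threshold on normal data only is what enforces the 1% FPR constraint regardless of how rare or strange the anomalies are.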
(This article belongs to the Section Intelligent Sensors)

24 pages, 625 KB  
Article
The Regress of Uncertainty and the Forecasting Paradox
by Nassim Nicholas Taleb and Pasquale Cirillo
Risks 2025, 13(12), 247; https://doi.org/10.3390/risks13120247 - 10 Dec 2025
Viewed by 3797
Abstract
We show that epistemic uncertainty, our iterated ignorance about our own ignorance, inevitably thickens statistical tails, even in environments that past realizations made appear thin-tailed. Any claim of precise risk carries a margin of error, and that margin itself is uncertain, in an infinite regress of doubt. This “errors-on-errors” mechanism rules out thin-tailed certainty: predictive laws must be heavier-tailed than their in-sample counterparts. The result is the Forecasting Paradox: the future is structurally more extreme than the past. This insight collapses branching scenarios into a single heavy-tailed forecast, with direct implications for risk management, scientific modeling, and AI safety. Full article
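The tail-thickening effect is easy to see in a one-line simulation: draw Gaussians whose standard deviation is itself random, i.e. the stated margin of error carries its own error. The lognormal scale model and parameters are illustrative assumptions, not the paper's construction; the point is only that any non-degenerate scale mixture of normals has kurtosis above the Gaussian value of 3.

```python
import math
import random

def sample_uncertain_scale(n, meta_sd=0.5, seed=42):
    """Errors-on-errors in miniature: each draw is Gaussian, but its
    standard deviation is lognormally distributed rather than known."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, math.exp(rng.gauss(0.0, meta_sd))) for _ in range(n)]

def kurtosis(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2  # equals 3 for an exact Gaussian

# A single layer of uncertainty about the scale already fattens the tails:
heavy = kurtosis(sample_uncertain_scale(50_000))
```

Iterating the regress (uncertainty about `meta_sd`, and so on) fattens the tails further, which is the mechanism behind the Forecasting Paradox.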
(This article belongs to the Special Issue Innovative Quantitative Methods for Financial Risk Management)

21 pages, 11842 KB  
Article
Optimizing Fuel Consumption Prediction Model Without an On-Board Diagnostic System in Deep Learning Frameworks
by Rıdvan Keskin, Egemen Belge and Senol Hakan Kutoglu
Sensors 2025, 25(22), 7031; https://doi.org/10.3390/s25227031 - 18 Nov 2025
Viewed by 849
Abstract
Real-time prediction of the instantaneous fuel consumption rate (FCR) of any vehicle is key to improving energy efficiency and reducing emissions. Conventional prediction methods, which rely on an on-board diagnostic (OBD) system, require specific vehicle parameters and environmental conditions such as air density. We propose a data-driven FCR prediction model, a long short-term memory network combined with Bayesian optimization and Monte Carlo (MC) Dropout (BMC-LSTM), using only the vehicle’s throttle position, velocity, and acceleration data. The cost-effective LSTM network-based solution enhances high-resolution prediction accuracy within a deep learning framework. The network is integrated with Bayesian optimization and MC-Dropout to ensure a probabilistically optimal hyperparameter set and robust networks. The resulting FCR model provides calibrated predictions and reliability against distribution drift by probabilistically tuning hyperparameters with Bayesian optimization and quantifying epistemic uncertainty with MC-Dropout. Our approach requires only vehicle speed, longitudinal acceleration, and throttle position at inference time. Note, however, that the reference FCR used to train and validate the models was obtained from OBD during data acquisition. The performance of the proposed method is compared with conventional LSTM- and Bidirectional LSTM-based multidimensional models, XGBoost- and support vector regression-based models, and first- and fourth-order polynomials derived using the least-squares method. Prediction performance is evaluated using the Mean Squared Error, Root Mean Squared Error, Mean Absolute Error, and R-squared statistical metrics. The proposed method achieves a superior R2 score and substantially reduces the conventional error metrics. Full article
(This article belongs to the Section Electronic Sensors)

13 pages, 1426 KB  
Article
Bayesian Neural Networks for Quantifying Uncertainty in Solute Transport Through Saturated Porous Media
by Seyed Kourosh Mahjour
Processes 2025, 13(10), 3324; https://doi.org/10.3390/pr13103324 - 17 Oct 2025
Viewed by 1281
Abstract
Uncertainty quantification (UQ) is critical for predicting solute transport in heterogeneous porous media, with applications in groundwater management and contaminant remediation. Traditional UQ methods, such as Monte Carlo (MC) simulations, are computationally expensive and impractical for real-time decision-making. This study introduces a novel machine learning framework to address these limitations. We developed a surrogate model for a 2D advection-dispersion solute transport model using a Bayesian Neural Network (BNN). The BNN was trained on a synthetic dataset generated by simulating solute transport across various stochastic permeability and dispersivity fields. Uncertainty was quantified through variational inference, capturing both data-related (aleatoric) and model-related (epistemic) uncertainties. We evaluated the framework’s performance against traditional MC simulations. Our BNN model accurately predicts solute concentration distributions with a mean squared error (MSE) of 9.8 × 10⁻⁵, significantly outperforming other machine learning surrogates. The framework successfully quantifies uncertainty, providing calibrated confidence intervals that align closely with the spread of the MC results. The proposed approach achieved a 98.5% reduction in computational time compared to a standard Monte Carlo simulation with 1000 realizations, representing a 65-fold speed-up. A sensitivity analysis revealed that permeability field heterogeneity is the dominant source of uncertainty in plume migration. The developed machine learning framework offers a computationally efficient and robust alternative for quantifying uncertainty in solute transport models. By accurately predicting solute concentrations and their associated uncertainties, our approach can inform risk-based decision-making in environmental and hydrogeological applications. The method shows promise for scaling to more complex, three-dimensional systems. Full article
(This article belongs to the Section Chemical Processes and Systems)

14 pages, 319 KB  
Article
Carbon Price Prediction and Risk Assessment Considering Energy Prices Based on Uncertain Differential Equations
by Di Gao, Bingqing Wu, Chengmei Wei, Hao Yue, Jian Zhang and Zhe Liu
Mathematics 2025, 13(17), 2834; https://doi.org/10.3390/math13172834 - 3 Sep 2025
Viewed by 1000
Abstract
Against the backdrop of escalating atmospheric carbon dioxide concentrations, carbon emission trading systems (ETS) have emerged as pivotal policy instruments, with China’s ETS playing a prominent role globally. The carbon price, central to ETS functionality, guides resource allocation and corporate strategies. Due to unexpected events, political conflicts, limited access to data, and the insufficient cognitive levels of market participants, there are epistemic uncertainties in the fluctuations of carbon and energy prices. Existing studies often lack effective handling of these epistemic uncertainties in energy prices and carbon prices. Therefore, the core objective of this study is to reveal the dynamic linkage patterns between energy prices and carbon prices, and to quantify the impact mechanism of epistemic uncertainties on their relationship with the help of uncertain differential equations. Methodologically, a dynamic model of carbon and energy prices was constructed, and analytical solutions were derived and their mathematical properties analyzed to characterize the linkage between carbon and energy prices. Furthermore, based on the observation data of coal prices in Qinhuangdao Port and national carbon prices, the unknown parameters of the proposed model were estimated, and uncertain hypothesis tests were conducted to verify the rationality of the proposed model. Results showed that the mean squared error of the established model for fitting the linkage relationship between carbon and energy prices was 0.76, with the fitting error controlled within 3.72%. Moreover, the prediction error was 1.88%. Meanwhile, the 5% value at risk (VaR) of the logarithmic return rate of carbon prices was predicted to be 0.0369. The research indicates that this methodology provides a feasible framework for capturing the uncertain interactions in the carbon–energy market. The revealed price linkage mechanism helps market participants optimize their risk management strategies and provides more accurate decision-making references for policymakers. Full article
(This article belongs to the Special Issue Uncertainty Theory and Applications)

17 pages, 5431 KB  
Article
Localization Meets Uncertainty: Uncertainty-Aware Multi-Modal Localization
by Hye-Min Won, Jieun Lee and Jiyong Oh
Technologies 2025, 13(9), 386; https://doi.org/10.3390/technologies13090386 - 1 Sep 2025
Viewed by 1633
Abstract
Reliable localization is critical for robot navigation in complex indoor environments. In this paper, we propose an uncertainty-aware localization method that enhances the reliability of localization outputs without modifying the prediction model itself. This study introduces a percentile-based rejection strategy that filters out unreliable 3-degree-of-freedom pose predictions based on the aleatoric and epistemic uncertainties estimated by the network. We apply this approach to a multi-modal end-to-end localization model that fuses RGB images and 2D LiDAR data, and we evaluate it across three real-world datasets collected using a commercialized serving robot. Experimental results show that applying stricter uncertainty thresholds consistently improves pose accuracy. Specifically, the mean position error, calculated as the average Euclidean distance between the predicted and ground-truth (x, y) coordinates, is reduced by 41.0%, 56.7%, and 69.4%, and the mean orientation error, representing the average angular deviation between the predicted and ground-truth yaw angles, is reduced by 55.6%, 65.7%, and 73.3%, when percentile thresholds of 90%, 80%, and 70% are applied, respectively. Furthermore, the rejection strategy effectively removes extreme outliers, resulting in better alignment with ground-truth trajectories. To the best of our knowledge, this is the first study to quantitatively demonstrate the benefits of percentile-based uncertainty rejection in multi-modal, end-to-end localization tasks. Our approach provides a practical means to enhance the reliability and accuracy of localization systems in real-world deployments. Full article
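The percentile-based rejection strategy described above reduces to a few lines: keep only the predictions whose estimated uncertainty lies at or below a chosen percentile of the batch's uncertainties. The function below is a minimal sketch under that assumption; the pairing of errors with uncertainties and the percentile indexing are illustrative, not the authors' code.

```python
def percentile_reject(preds, percentile=70):
    """Reject predictions above the given uncertainty percentile.
    `preds` is a list of (error, uncertainty) pairs; only the low-uncertainty
    fraction is kept, trading coverage for accuracy."""
    unc = sorted(u for _, u in preds)
    k = max(0, int(percentile / 100 * len(unc)) - 1)
    threshold = unc[k]
    return [(e, u) for e, u in preds if u <= threshold]
```

When uncertainty correlates with error, the mean error of the kept subset drops as the threshold tightens, which is the effect the 90%/80%/70% results above quantify.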
(This article belongs to the Special Issue AI Robotics Technologies and Their Applications)

25 pages, 15183 KB  
Article
Permittivity Measurement in Multi-Phase Heterogeneous Concrete Using Evidential Regression Deep Network and High-Frequency Electromagnetic Waves
by Zhaojun Hou, Hui Liu, Jianchuan Cheng, Qifeng Zhang and Zheng Tong
Materials 2025, 18(16), 3766; https://doi.org/10.3390/ma18163766 - 11 Aug 2025
Cited by 2 | Viewed by 845
Abstract
Permittivity measurements of concrete materials benefit from the application of high-frequency electromagnetic waves (HF-EMWs), but they still suffer from aleatory and epistemic uncertainty, originating from multi-phase heterogeneous materials and limited knowledge of HF-EMW propagation. This limitation restricts the precision of non-destructive testing. This study proposes an evidential regression deep network for conducting permittivity measurements with uncertainty quantification. The method first uses a finite-difference time-domain (FDTD) model of multi-phase heterogeneous concrete to simulate HF-EMW propagation in a concrete sample or structure, obtaining an HF-EMW echo that contains aleatory uncertainty owing to the limited knowledge of wave propagation. A U-net-based model is then proposed to denoise the HF-EMW, where the difference between a pair of observed and denoised HF-EMWs characterizes the aleatory uncertainty owing to measurement noise. Finally, a Dempster–Shafer theory-based (DST-based) evidential regression network is proposed to compute permittivity, quantifying both types of uncertainty using a Gaussian random fuzzy number (GRFN), a type of fuzzy set that combines the characteristics of a Gaussian fuzzy number and a Gaussian random variable. An experiment with 1500 samples indicates that the proposed method measures permittivity with a mean square error of 7.50% and a permittivity uncertainty value of 74.70% across four types of concrete materials. Additionally, the proposed method can quantify the uncertainty in permittivity measurements using a GRFN-based belief measurement interval. Full article

54 pages, 6418 KB  
Review
Navigating Uncertainty: Advanced Techniques in Pedestrian Intention Prediction for Autonomous Vehicles—A Comprehensive Review
by Alireza Mirzabagheri, Majid Ahmadi, Ning Zhang, Reza Alirezaee, Saeed Mozaffari and Shahpour Alirezaee
Vehicles 2025, 7(2), 57; https://doi.org/10.3390/vehicles7020057 - 9 Jun 2025
Cited by 5 | Viewed by 5912
Abstract
The World Health Organization reports approximately 1.35 million fatalities annually due to road traffic accidents, with pedestrians constituting 23% of these deaths. This highlights the critical need to enhance pedestrian safety, especially given the significant role human error plays in road accidents. Autonomous vehicles present a promising solution to mitigate these fatalities by improving road safety through advanced prediction of pedestrian behavior. With the autonomous vehicle market projected to grow substantially and offer various economic benefits, including reduced driving costs and enhanced safety, understanding and predicting pedestrian actions and intentions is essential for integrating autonomous vehicles into traffic systems effectively. Despite significant advancements, replicating human social understanding in autonomous vehicles remains challenging, particularly in predicting the complex and unpredictable behavior of vulnerable road users like pedestrians. Moreover, the inherent uncertainty in pedestrian behavior adds another layer of complexity, requiring robust methods to quantify and manage this uncertainty effectively. This review provides a structured and in-depth analysis of pedestrian intention prediction techniques, with a unique focus on how uncertainty is modeled and managed. We categorize existing approaches based on prediction duration, feature type, and model architecture, and critically examine benchmark datasets and performance metrics. Furthermore, we explore the implications of uncertainty types, epistemic and aleatoric, and discuss their integration into autonomous vehicle systems. By synthesizing recent developments and highlighting the limitations of current methodologies, this paper aims to advance the understanding of pedestrian intention prediction and contribute to safer and more reliable autonomous vehicle deployment. Full article

22 pages, 15137 KB  
Article
Sensitivity Analysis on the Impact of Input Parameters on Seismic Hazard Results: A Case Study of Central America
by Carlos Gamboa-Canté, Mario Arroyo-Solórzano, Alicia Rivas-Medina and Belén Benito
Geosciences 2025, 15(1), 4; https://doi.org/10.3390/geosciences15010004 - 29 Dec 2024
Cited by 3 | Viewed by 3206
Abstract
We present a sensitivity analysis on the impact of input parameters and methods used on the results of a probabilistic seismic hazard assessment (PSHA). The accurate estimation of the parameters in recurrence models (declustering and fitting methods), along with the selection of scaling relationships for determining maximum magnitude and the selection of ground motion models (GMMs), enhance control over epistemic uncertainties when constructing the logic tree, minimizing final calculation errors and producing credible results for the study region. This study focuses on Central America, utilizing recent data from seismic, geological, and geophysical studies to improve uncertainty analyses through classic statistical methods. The results demonstrate that proper fitting of the recurrence model can stabilize acceleration variations regardless of the declustering method or b-value fitting method used. Regarding scaling relationships, their low impact on the final results is noted, provided the models are tailored to the tectonic regime under study. Finally, it is shown that the GMM contributes the most variability to seismic hazard results; therefore, their selection should be conditioned on calibration with observed data through residual analysis where region-specific models are not available. Full article
(This article belongs to the Section Geophysics)

25 pages, 8887 KB  
Article
A Gaussian Process-Enhanced Non-Linear Function and Bayesian Convolution–Bayesian Long Term Short Memory Based Ultra-Wideband Range Error Mitigation Method for Line of Sight and Non-Line of Sight Scenarios
by A. S. M. Sharifuzzaman Sagar, Samsil Arefin, Eesun Moon, Md Masud Pervez Prince, L. Minh Dang, Amir Haider and Hyung Seok Kim
Mathematics 2024, 12(23), 3866; https://doi.org/10.3390/math12233866 - 9 Dec 2024
Cited by 2 | Viewed by 2080
Abstract
Relative positioning accuracy between two devices depends on precise range measurements. Ultra-wideband (UWB) technology is one of the most popular and widely used technologies for achieving centimeter-level range measurement accuracy. Nevertheless, harsh indoor environments, multipath effects, reflections, and bias due to antenna delay degrade range measurement performance in line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios. This article proposes an efficient and robust method to mitigate range measurement error in LOS and NLOS conditions by combining recent artificial intelligence techniques. A Gaussian process (GP)-enhanced non-linear function is proposed to mitigate range bias in LOS scenarios. Moreover, NLOS identification based on a sliding window and a Bayesian Conv-BLSTM method is utilized to mitigate range error due to non-line-of-sight conditions. A novel spatial–temporal attention module is proposed to improve the performance of the proposed model. An epistemic and aleatoric uncertainty estimation method is also introduced to assess the robustness of the proposed model to environmental variance. Furthermore, moving average and min–max removal methods are utilized to minimize the standard deviation of the range measurements in both scenarios. Extensive experimentation with different settings and configurations has proven the effectiveness of our methodology and demonstrated the feasibility of our robust UWB range error mitigation for LOS and NLOS scenarios. Full article
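The min-max removal combined with averaging mentioned above can be sketched as a trimmed moving average. The exact windowing and trimming scheme below is an illustrative assumption, not the paper's implementation: within each sliding window, the minimum and maximum samples are dropped before averaging, which damps outlier spikes in the range stream.

```python
def trimmed_moving_average(ranges, window=5):
    """Smooth a UWB range stream: in each sliding window, discard the min
    and max sample, then average the remainder, reducing the standard
    deviation contributed by multipath outliers."""
    out = []
    for i in range(len(ranges) - window + 1):
        w = sorted(ranges[i:i + window])
        out.append(sum(w[1:-1]) / (window - 2))  # min and max removed
    return out
```

A single 10 m multipath spike in an otherwise 1 m range series is removed entirely, where a plain moving average would smear it across the window.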
(This article belongs to the Special Issue Modeling and Simulation in Engineering, 3rd Edition)
