Search Results (22)

Search Parameters:
Keywords = fully probabilistic technique

24 pages, 743 KB  
Article
Tensor Train Completion from Fiberwise Observations Along a Single Mode
by Shakir Showkat Sofi and Lieven De Lathauwer
Mathematics 2026, 14(5), 922; https://doi.org/10.3390/math14050922 - 9 Mar 2026
Viewed by 327
Abstract
Tensor completion is an extension of matrix completion aimed at recovering a multiway data tensor by leveraging a given subset of its entries (observations) and the pattern of observation. The low-rank assumption is key in establishing a relationship between the observed and unobserved entries of the tensor. The low-rank tensor completion problem is typically solved using numerical optimization techniques, where the rank information is used either implicitly (in the rank minimization approach) or explicitly (in the error minimization approach). Current theories concerning these techniques often study probabilistic recovery guarantees under conditions such as random uniform observations and incoherence requirements. However, if an observation pattern exhibits some low-rank structure that can be exploited, more efficient algorithms with deterministic recovery guarantees can be designed by leveraging this structure. This work shows how to use only standard linear algebra operations to compute the tensor train decomposition of a specific type of “fiber-wise” observed tensor, where some of the fibers of a tensor (along a single specific mode) are either fully observed or entirely missing, unlike the usual entry-wise observations. From an application viewpoint, this setting is relevant when it is easier to sample or collect a multiway data tensor along a specific mode (e.g., temporal). The proposed completion method is fast and is guaranteed to work under reasonable deterministic conditions on the observation pattern. Through numerical experiments, we showcase interesting applications and use cases that illustrate the effectiveness of the proposed approach. Full article
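The core idea above, computing a tensor train decomposition using only standard linear algebra operations, can be illustrated with the classical TT-SVD on a fully observed tensor. This is a baseline sketch in NumPy, not the paper's fiberwise-observation algorithm; the test tensor and rank cap are illustrative.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Tensor train decomposition via a sequence of reshapes and truncated
    SVDs (TT-SVD). Returns 3-way cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims = tensor.shape
    d = len(dims)
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, int(np.sum(s > 1e-12)))          # truncate rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into a full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([full.ndim - 1], [0]))
    return full.squeeze(axis=(0, full.ndim - 1))

# A tensor that is a sum of two rank-1 terms, so its TT ranks are at most 2.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 6))
T = np.einsum('i,j,k->ijk', a, b, c) + np.einsum('i,j,k->ijk', b, c, a)
G = tt_svd(T, max_rank=2)
err = np.linalg.norm(tt_reconstruct(G) - T) / np.linalg.norm(T)
```

With exact low TT rank, the reconstruction error is at the level of floating-point noise.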

26 pages, 24257 KB  
Article
Selection of Optimal Vector-Valued Intensity Measures for Seismic Fragility Analysis in Shield Tunnels Based on LSTM Neural Networks
by Jinghan Zhang, Meng Zhang, Tao Du and Yang Wang
Buildings 2026, 16(5), 1085; https://doi.org/10.3390/buildings16051085 - 9 Mar 2026
Viewed by 231
Abstract
This research introduces a novel approach for seismic fragility assessment by employing a long short-term memory (LSTM) neural network to identify the most effective scalar and vector intensity measures (IMs). This approach enables the rapid and accurate plotting of vector fragility surfaces for shield tunnels embedded in layered soils and subjected to seismic actions. First, an extensive suite of two-dimensional, fully nonlinear soil–structure interaction analyses was executed to generate ground–motion–structure response pairs. These records were subsequently leveraged to train the LSTM network, which received free-field acceleration time histories and directly output critical engineering demand parameters along the tunnel lining. The developed framework significantly mitigates computational expenses while maintaining an acceptable level of fidelity relative to the reference finite element results. Consequently, it serves as an alternative to traditional time history evaluation techniques. Second, we conducted an IM screening process using the results of the LSTM predictions. On the basis of criteria such as relevance, efficiency, practicality, and professionalism, we benchmarked 17 scalar IM and 3 vector IM candidate schemes. The findings indicate that the peak ground velocity (PGV) serves as the most effective scalar IM, whereas the combination of peak ground acceleration (PGA) and PGV forms the optimal vector IM. Finally, probabilistic demand and capacity models are integrated within a fully analytical fragility formulation to derive both scalar and vector fragility estimates. Comparative evaluation reveals that vector IM based fragility surfaces markedly reduce epistemic uncertainty and furnish refined probabilistic descriptions of damage states (DSs) across the seismic demand space. Full article
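As background for the fragility-surface step, here is a minimal sketch of a vector-IM (PGA, PGV) fragility surface under a lognormal demand model. The demand-model coefficients, dispersions, and capacity below are hypothetical, not values from the paper.

```python
import numpy as np
from math import erf, sqrt

def std_norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def fragility_surface(pga, pgv, b, beta_d, ln_capacity, beta_c):
    """P(demand >= capacity | PGA, PGV) for a lognormal demand model
    ln(EDP) = b0 + b1*ln(PGA) + b2*ln(PGV); all coefficients hypothetical."""
    ln_med_d = b[0] + b[1] * np.log(pga) + b[2] * np.log(pgv)
    beta = sqrt(beta_d**2 + beta_c**2)   # combined lognormal dispersion
    z = (ln_med_d - ln_capacity) / beta
    return np.vectorize(std_norm_cdf)(z)

# Grid over the vector IM (PGA in g, PGV in cm/s) for one damage state.
PGA, PGV = np.meshgrid(np.linspace(0.05, 1.0, 20), np.linspace(5.0, 100.0, 20))
surf = fragility_surface(PGA, PGV, b=(-1.0, 0.6, 0.8),
                         beta_d=0.35, ln_capacity=2.0, beta_c=0.3)
```

The surface rises monotonically in both IMs; a scalar-IM fragility curve is the special case with one log-term.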
(This article belongs to the Special Issue Applications of Computational Methods in Structural Engineering)

32 pages, 4759 KB  
Article
Development of a Bayesian Network and Information Gain-Based Axis Dynamic Mechanism for Ankle Joint Rehabilitation
by Huiguo Ma, Yuqi Bao, Jingfu Lan, Xuewen Zhu, Pinwei Wan, Raquel Cedazo León, Shuo Jiang, Fangfang Chen, Jun Kang, Qihan Guo, Peng Zhang and He Li
Biomimetics 2025, 10(12), 823; https://doi.org/10.3390/biomimetics10120823 - 9 Dec 2025
Viewed by 765
Abstract
In response to the personalized and precise rehabilitation needs for motor injuries and stroke associated with population aging, this study proposes a design method for an intelligent rehabilitation trainer that integrates Bayesian information gain (BIG) and axis matching techniques. Grounded in the biomechanical characteristics of the human ankle joint, the design fully draws upon biomimetic principles, constructing a 3-PUU-R hybrid serial–parallel bionic mechanism. By mimicking the dynamic variation of the ankle’s instantaneous motion axis and its balance between stiffness and compliance, a three-dimensional digital model was developed, and multi-posture human factor simulations were conducted, thereby achieving a rehabilitation process more consistent with natural human movement patterns. Natural randomized disability grade experimental data were collected for 100 people to verify the validity of the design results. On this basis, a Bayesian information gain framework was established by quantifying the reduction of uncertainty in rehabilitation outcomes through characteristic parameters, enabling the dynamic optimization of training strategies for personalized and precise ankle rehabilitation. The rehabilitation process was modeled as a problem of uncertainty quantification and information gain optimization. Prior distributions were constructed using surface EMG (electromyography) signals and motion trajectory errors, and mutual information was used to drive the dynamic adjustment of training strategies, ultimately forming a closed-loop control architecture of “demand perception–strategy optimization–execution adaptation.” This innovative integration of probabilistic modeling and cross-joint bionic design overcomes the limitations of single-joint rehabilitation and provides a new paradigm for the development of intelligent rehabilitation devices. 
The deep integration of mechanism-based dynamic axis matching with Bayesian information gain holds significant theoretical value and engineering application prospects for enhancing the effectiveness of neural plasticity training. Full article
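The information-gain criterion described above can be sketched as the mutual information between a recovery state and an observation: the expected reduction in uncertainty that a measurement (e.g., EMG-based) provides. The two-state example and observation models below are hypothetical.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_information_gain(prior, likelihood):
    """I(state; observation) = H(prior) - E_obs[H(posterior)].
    likelihood[s, o] = P(observation o | state s)."""
    joint = prior[:, None] * likelihood        # P(s, o)
    p_obs = joint.sum(axis=0)                  # P(o)
    posterior = joint / p_obs                  # columns are P(s | o)
    exp_post_entropy = sum(p_obs[o] * entropy(posterior[:, o])
                           for o in range(likelihood.shape[1]))
    return entropy(prior) - exp_post_entropy

# Hypothetical comparison: which training strategy's observation model is
# expected to tell us more about the patient's recovery state?
prior = np.array([0.5, 0.5])
informative = np.array([[0.9, 0.1], [0.2, 0.8]])    # strategy A
uninformative = np.array([[0.5, 0.5], [0.5, 0.5]])  # strategy B
gain_a = expected_information_gain(prior, informative)
gain_b = expected_information_gain(prior, uninformative)
```

A strategy whose observations are independent of the state yields zero gain, so the framework always prefers discriminative measurements.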
(This article belongs to the Special Issue Advanced Service Robots: Exoskeleton Robots 2025)

20 pages, 16406 KB  
Article
Stochastic Behaviour of Directional Fire Spread: A Segmentation-Based Analysis of Experimental Burns
by Ladan Tazik, Willard J. Braun, John R. J. Thompson and Geoffrey Goetz
Fire 2025, 8(10), 384; https://doi.org/10.3390/fire8100384 - 25 Sep 2025
Viewed by 1856
Abstract
Understanding the dynamics of fire propagation is essential for improving predictive models and developing effective fire management strategies. This study applies computer vision techniques to complement traditional fire behaviour modelling. We employ the Segment Anything Model to achieve the accurate segmentation of experimental fire videos, enabling the frame-by-frame segmentation of fire perimeters, quantification of the rate of spread (ROS) in multiple directions, and explicit analysis of slope effects. Our laboratory experiments reveal that the ROS increases exponentially with slope, but with coefficients differing from those prescribed in the Canadian Fire Behaviour Prediction System, reflecting differences in field conditions. Complementary field data from prescribed burns in coniferous fuels (C-7) further demonstrate that slope effects vary under operational conditions, suggesting field-dependent dynamics not fully captured by existing deterministic models. Our experiments show that, even under controlled laboratory conditions, substantial variability in spread rate is observed, underscoring the inherent stochasticity of fire spread. Together, these findings highlight the value of vision-based perimeter extraction in generating precise spread data and reinforce the need for probabilistic modelling approaches that explicitly account for uncertainty and emergent dynamics in fire behaviour. Full article
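The exponential slope dependence can be sketched as a log-linear least-squares fit of ROS = a·exp(b·slope). The data points below are illustrative, not the paper's measurements.

```python
import numpy as np

# Hypothetical laboratory-style data: rate of spread (m/min) at several
# slope angles (degrees), following the form ROS = a * exp(b * slope).
slope = np.array([0.0, 10.0, 20.0, 30.0])
ros = np.array([1.1, 1.9, 3.6, 6.5])

# Fit in log space: ln(ROS) = ln(a) + b * slope (ordinary least squares).
b, ln_a = np.polyfit(slope, np.log(ros), 1)
a = np.exp(ln_a)
predicted = a * np.exp(b * slope)
```

Comparing the fitted b against the coefficient a spread model prescribes is exactly the kind of check the study performs against the Canadian Fire Behaviour Prediction System.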

46 pages, 47184 KB  
Article
Goodness of Fit in the Marginal Modeling of Round-Trip Times for Networked Robot Sensor Transmissions
by Juan-Antonio Fernández-Madrigal, Vicente Arévalo-Espejo, Ana Cruz-Martín, Cipriano Galindo-Andrades, Adrián Bañuls-Arias and Juan-Manuel Gandarias-Palacios
Sensors 2025, 25(17), 5413; https://doi.org/10.3390/s25175413 - 2 Sep 2025
Viewed by 1708
Abstract
When complex computations cannot be performed on board a mobile robot, sensory data must be transmitted to a remote station to be processed, and the resulting actions must be sent back to the robot to execute, forming a repeating cycle. This involves stochastic round-trip times in the case of non-deterministic network communications and/or non-hard real-time software. Since robots need to react within strict time constraints, modeling these round-trip times becomes essential for many tasks. Modern approaches for modeling sequences of data are mostly based on time-series forecasting techniques, which impose a computational cost that may be prohibitive for real-time operation, do not consider all the delay sources existing in the sw/hw system, or do not work fully online, i.e., within the time of the current round-trip. Marginal probabilistic models, on the other hand, often have a lower cost, since they discard temporal dependencies between successive measurements of round-trip times, a suitable approximation when regime changes are properly handled given the typically stationary nature of these round-trip times. In this paper we focus on the hypothesis tests needed for marginal modeling of the round-trip times in remotely operated robotic systems with the presence of abrupt changes in regimes. We analyze in depth three common models, namely Log-logistic, Log-normal, and Exponential, and propose some modifications of parameter estimators for them and new thresholds for well-known goodness-of-fit tests, which are aimed at the particularities of our setting. We then evaluate our proposal on a dataset gathered from a variety of networked robot scenarios, both real and simulated; through >2100 h of high-performance computer processing, we assess the statistical robustness and practical suitability of these methods for these kinds of robotic applications. Full article
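A minimal sketch of the marginal-model workflow: fit a log-normal to a synthetic round-trip-time sample and compute the Kolmogorov–Smirnov distance. This uses plain moment estimators and the textbook KS statistic, not the paper's modified estimators or adjusted thresholds.

```python
import numpy as np
from math import erf, sqrt

def ks_statistic(sample, cdf):
    """Kolmogorov-Smirnov distance between an empirical sample and a model CDF."""
    x = np.sort(sample)
    n = len(x)
    F = cdf(x)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - F), np.max(F - ecdf_lo))

def lognormal_cdf(x, mu, sigma):
    """CDF of a log-normal with log-space mean mu and std sigma."""
    z = (np.log(x) - mu) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 + np.vectorize(erf)(z))

# Hypothetical RTT sample (seconds), log-normal with ~50 ms median.
rng = np.random.default_rng(42)
rtt = np.exp(rng.normal(loc=-3.0, scale=0.4, size=500))

mu_hat, sigma_hat = np.log(rtt).mean(), np.log(rtt).std(ddof=1)
D = ks_statistic(rtt, lambda x: lognormal_cdf(x, mu_hat, sigma_hat))
```

Note that because the parameters are estimated from the same sample, the standard KS critical values are conservative; that is precisely why the paper derives new thresholds for its setting.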

23 pages, 9603 KB  
Article
Label-Efficient Fine-Tuning for Remote Sensing Imagery Segmentation with Diffusion Models
by Yiyun Luo, Jinnian Wang, Jean Sequeira, Xiankun Yang, Dakang Wang, Jiabin Liu, Grekou Yao and Sébastien Mavromatis
Remote Sens. 2025, 17(15), 2579; https://doi.org/10.3390/rs17152579 - 24 Jul 2025
Viewed by 2092
Abstract
High-resolution remote sensing imagery plays an essential role in urban management and environmental monitoring, providing detailed insights for applications ranging from land cover mapping to disaster response. Semantic segmentation methods are among the most effective techniques for comprehensive land cover mapping, and they commonly rely on ImageNet-based pre-training. However, traditional fine-tuning processes exhibit poor transferability across different downstream tasks and require large amounts of labeled data. To address these challenges, we introduce Denoising Diffusion Probabilistic Models (DDPMs) as a generative pre-training approach for semantic feature extraction in remote sensing imagery. We pre-trained a DDPM on extensive unlabeled imagery, obtaining features at multiple noise levels and resolutions. To integrate and optimize these features efficiently, we designed a multi-layer perceptron module with residual connections. It performs channel-wise optimization to suppress feature redundancy and refine representations. Additionally, we froze the feature extractor during fine-tuning. This strategy significantly reduces computational cost and facilitates fast transfer and deployment across various interpretation tasks on homogeneous imagery. Our comprehensive evaluation on the sparsely labeled dataset MiniFrance-S and the fully labeled Gaofen Image Dataset achieved mean intersection over union scores of 42.7% and 66.5%, respectively, outperforming previous works. This demonstrates that our approach effectively reduces reliance on labeled imagery and increases transferability to downstream remote sensing tasks. Full article
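The channel-wise residual MLP refinement can be sketched as follows. The layer sizes and weights are hypothetical, and the forward pass is written in plain NumPy rather than a deep learning framework.

```python
import numpy as np

def residual_channel_mlp(features, w1, b1, w2, b2):
    """Channel-wise refinement of per-pixel feature vectors with a residual
    MLP: out = x + relu(x @ W1 + b1) @ W2 + b2, applied independently per
    pixel. features: (n_pixels, channels)."""
    hidden = np.maximum(features @ w1 + b1, 0.0)   # ReLU
    return features + hidden @ w2 + b2             # residual connection

# Hypothetical shapes: 4096 pixel vectors with 32 diffusion-feature channels.
rng = np.random.default_rng(1)
x = rng.standard_normal((4096, 32))
w1 = rng.standard_normal((32, 64)) * 0.05
b1 = np.zeros(64)
w2 = rng.standard_normal((64, 32)) * 0.05
b2 = np.zeros(32)
y = residual_channel_mlp(x, w1, b1, w2, b2)
```

The residual path means the module starts near the identity, so refinement perturbs the frozen diffusion features rather than replacing them.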
(This article belongs to the Special Issue AI-Driven Mapping Using Remote Sensing Data)

24 pages, 9633 KB  
Article
Assessment of Knot-Induced Degradation in Timber Beams: Probabilistic Modeling and Data-Driven Prediction of Load Capacity Loss
by Peixuan Wang, Guoming Liu, Fanrong Li, Shengcai Li, Gabriele Milani and Donato Abruzzese
Buildings 2025, 15(12), 2058; https://doi.org/10.3390/buildings15122058 - 15 Jun 2025
Cited by 1 | Viewed by 1162
Abstract
Timber structural performance is significantly influenced by natural knots, which serve as critical indicators in ancient architectural heritage preservation and modern sustainable building design. However, existing studies lack a comprehensive quantitative analysis of how the randomness of timber knot parameters relates to load-bearing capacity degradation. This study introduces a multiscale evaluation framework that integrates physical testing, probabilistic modeling, and data-driven techniques. Firstly, static tests on full-scale timber beams with artificially introduced knots reveal the failure mechanisms and load capacity reduction associated with knots in the tension zone. Subsequently, a three-dimensional Monte Carlo simulation, modeling random distributions of knot position and size, demonstrates that the midspan region is most sensitive to knot effects, with load capacity loss being more pronounced on the tension side than on the compression side. Finally, a predictive model based on a fully connected neural network is developed; feature analysis indicates that the longitudinal position of knots exerts a stronger nonlinear influence on load capacity than radial depth or diameter. The results establish a mapping between knot characteristics, stress field distortion, and ultimate load capacity, providing a theoretical basis for safety evaluation of historic timber structures and the design of defect-tolerant timber beams in modern engineering. Full article
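A toy Monte Carlo sketch of the knot study: sample random knot positions, depths, and diameters and push them through a capacity-reduction model. The reduction model and all coefficients below are invented for illustration, not the paper's fitted model.

```python
import numpy as np

def capacity_reduction(pos, depth, diameter, span=4.0):
    """Hypothetical reduction model: knots hurt most near midspan (peak
    bending moment) and on the tension side; loss scales with diameter."""
    moment_factor = 1.0 - (2.0 * np.abs(pos / span - 0.5)) ** 2   # 1 at midspan
    size_factor = diameter / 0.05                  # relative to a 50 mm knot
    side_factor = np.where(depth > 0.5, 1.0, 0.4)  # tension vs compression side
    return 0.15 * moment_factor * size_factor * side_factor

rng = np.random.default_rng(7)
n = 10_000
pos = rng.uniform(0.0, 4.0, n)          # knot position along a 4 m span (m)
depth = rng.uniform(0.0, 1.0, n)        # normalised depth (> 0.5 = tension side)
diameter = rng.uniform(0.01, 0.05, n)   # knot diameter (m)
loss = capacity_reduction(pos, depth, diameter)

midspan = loss[np.abs(pos - 2.0) < 0.4].mean()
near_support = loss[np.minimum(pos, 4.0 - pos) < 0.4].mean()
```

Even this crude model reproduces the qualitative finding: sampled losses concentrate at midspan and on the tension side.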
(This article belongs to the Section Building Structures)

23 pages, 5815 KB  
Article
Enhanced Landslide Risk Assessment Through Non-Probabilistic Stability Analysis: A Hybrid Framework Integrating Space–Time Distribution and Vulnerability Models
by Suxun Shu, Kang Pi, Wenhui Gong, Chunmei Zhou, Jiajun Qian and Zhiquan Yang
Sustainability 2025, 17(9), 4146; https://doi.org/10.3390/su17094146 - 3 May 2025
Cited by 1 | Viewed by 1434
Abstract
Landslide risk assessment can quantify the potential damage caused by landslides to disaster-bearing bodies, which can help to reduce casualties and economic losses. It is not only a tool for disaster prevention and mitigation, but also a key step to achieve the coordinated development of the environment, economy, and society, and it provides important support for the realization of the global sustainable development goals (SDGs). In this study, a risk assessment method is proposed for an individual landslide based on the non-probabilistic reliability theory. The method represents an improvement to and innovation in existing risk assessment methods, which can obtain more accurate assessment results with fewer sample data points, refines the methods and steps of landslide risk assessment, and fully considers the destabilization mechanism of the landslide and the interaction with disaster-bearing bodies. A non-probabilistic reliability analysis of the slope was conducted, and the possibility of landslide occurrence was characterized by the failure value of the slope. Moreover, the influence range of the landslide was predicted using empirical formulas; space–time distribution probabilities of the disaster-bearing bodies were estimated by combining their location and activity patterns; and the vulnerability of the disaster-bearing bodies was calculated according to the landslide intensity and the resistance or susceptibility index of the disaster-bearing bodies. The method’s feasibility was verified through its application to the Xiatudiling landslide as a case study. In the process of performing slope stability calculations, it was found that the calculation results of the Monte Carlo method were consistent with those of the non-probabilistic reliability approach proposed in this paper, which was able to obtain more accurate results with less sample data. 
The personnel life and economic risks were 1.8499 persons/year and CNY 184,858/year (USD 25,448/year), respectively, under heavy rainfall conditions. The results were compared with the risk judgment criteria for geological disasters, and both risk values were unacceptable. After landslide treatment, the possibility of landslide occurrence was reduced, and the personnel life risk and economic risk of the landslide were also reduced. Both risk values then became acceptable. The effect of landslide treatment was obvious. The proposed method provides a new technique for assessing landslide risks and can help in designing mitigation strategies. This method can be applied to landslide risk surveys conducted by geological disaster prevention institutions, demonstrating enhanced applicability in data-scarce regions to improve risk assessment efficiency. It is particularly suitable for emergency management authorities, enabling rapid and comprehensive assessment of landslide risk levels to support informed decision making during critical response scenarios. Full article
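The reported risk figures combine factors in the standard quantitative form R = P(landslide) × P(spatiotemporal impact) × vulnerability × exposure. A sketch with hypothetical inputs (not the Xiatudiling case-study values):

```python
# Individual-landslide annual risk in the general multiplicative form used
# in quantitative risk assessment. All numbers below are hypothetical.

def annual_risk(p_landslide, p_spatiotemporal, vulnerability, exposure):
    """R = P(occurrence) * P(element present in path) * vulnerability * value."""
    return p_landslide * p_spatiotemporal * vulnerability * exposure

# Life risk (persons/year) and economic risk (currency/year) before mitigation.
life_risk = annual_risk(0.2, 0.5, 0.8, exposure=10)         # 10 people exposed
econ_risk = annual_risk(0.2, 0.9, 0.6, exposure=2_000_000)  # asset value

# Mitigation works lower the occurrence probability.
life_risk_after = annual_risk(0.02, 0.5, 0.8, exposure=10)
```

Comparing the before/after values against acceptability thresholds mirrors the paper's pre- and post-treatment assessment.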
(This article belongs to the Section Hazards and Sustainability)

18 pages, 5048 KB  
Review
Clustering with Uncertainty: A Literature Review to Address a Cross-Domain Perspective
by Salvatore Flavio Pileggi
Informatics 2025, 12(2), 38; https://doi.org/10.3390/informatics12020038 - 9 Apr 2025
Viewed by 3707
Abstract
Clustering is a very popular computational technique that, because of imperfect data, is often applied in the presence of some kind of uncertainty. Taking such uncertainty (and its model) into account accordingly increases the accuracy of the computations and their effectiveness in context. However, there are challenges. This paper presents a literature review on the topic. It aims to identify and discuss the associated body of knowledge according to a cross-domain perspective. A semi-systematic methodology has allowed for the selection of 68 papers, prioritizing the most recent contributions and an intrinsic application-oriented approach. The analysis has underscored the relevance of the topic in the last two decades, in which computation has become somewhat pervasive in the context of inherent data complexity. Furthermore, it has identified a trend of domain-specific solutions over general-purpose approaches. On one side, this trend enables a more specific set of solutions within specific communities; on the other side, the resulting distributed approach is not always well integrated with the mainstream. The latter aspect may generate a further fragmentation of the body of knowledge, mostly because of some lack of abstraction in the definition of specific problems. While in general terms these gaps are largely understandable within the research community, a lack of implementations to provide ready-to-use resources is critical overall. In more technical terms, solutions in the literature present a certain inclination toward mixed methods, in addition to the classic application of Fuzzy Logic and other probabilistic approaches. Last but not least, the propagation of uncertainty in the current technological context, characterised by data- and computation-intensive solutions, is not fully analysed and critically discussed in the literature.
The conducted analysis intrinsically suggests consolidation and enhanced operationalization through Open Software, which is crucial for establishing scientifically sound computational frameworks. Full article
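As one concrete instance of the fuzzy approaches the review surveys, the fuzzy c-means membership update expresses clustering uncertainty directly as soft memberships; the data points and centers below are illustrative.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy c-means membership update:
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
    Rows sum to 1; a point near a cluster boundary gets split membership."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # (n, c)
    d = np.maximum(d, 1e-12)            # guard against zero distance
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

# Two well-separated blobs plus one ambiguous midpoint.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [2.5, 2.5]])
centers = np.array([[0.05, 0.0], [5.05, 5.0]])
U = fcm_memberships(X, centers)
```

Unlike hard k-means labels, the ~0.5/0.5 membership of the midpoint makes the assignment uncertainty explicit, which is the review's central theme.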

25 pages, 5486 KB  
Article
Optimisation and Calibration of Bayesian Neural Network for Probabilistic Prediction of Biogas Performance in an Anaerobic Lagoon
by Benjamin Steven Vien, Thomas Kuen, Louis Raymond Francis Rose and Wing Kong Chiu
Sensors 2024, 24(8), 2537; https://doi.org/10.3390/s24082537 - 15 Apr 2024
Cited by 10 | Viewed by 3400
Abstract
This study aims to enhance diagnostic capabilities for optimising the performance of the anaerobic sewage treatment lagoon at Melbourne Water’s Western Treatment Plant (WTP) through a novel machine learning (ML)-based monitoring strategy. This strategy employs ML to make accurate probabilistic predictions of biogas performance by leveraging diverse real-life operational and inspection sensor and other measurement data for asset management, decision making, and structural health monitoring (SHM). The paper commences with data analysis and preprocessing of complex irregular datasets to facilitate efficient learning in an artificial neural network. Subsequently, a Bayesian mixture density neural network model incorporating an attention-based mechanism in bidirectional long short-term memory (BiLSTM) was developed. This probabilistic approach uses a distribution output layer based on the Gaussian mixture model and Monte Carlo (MC) dropout technique in estimating data and model uncertainties, respectively. Furthermore, systematic hyperparameter optimisation revealed that the optimised model achieved a negative log-likelihood (NLL) of 0.074, significantly outperforming other configurations. It achieved an accuracy approximately 9 times greater than the average model performance (NLL = 0.753) and 22 times greater than the worst performing model (NLL = 1.677). Key factors influencing the model’s accuracy, such as the input window size and the number of hidden units in the BiLSTM layer, were identified, while the number of neurons in the fully connected layer was found to have no significant impact on accuracy. Moreover, model calibration using the expected calibration error was performed to correct the model’s predictive uncertainty. The findings suggest that the inherent data significantly contribute to the overall uncertainty of the model, highlighting the need for more high-quality data to enhance learning. 
This study lays the groundwork for applying ML in transforming high-value assets into intelligent structures and has broader implications for ML in asset management, SHM applications, and renewable energy sectors. Full article
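The NLL figures quoted above are the loss of a mixture density output layer. Here is a sketch of the Gaussian-mixture negative log-likelihood on synthetic bimodal data; all values are hypothetical, and the attention/BiLSTM machinery is omitted.

```python
import numpy as np

def gmm_nll(y, weights, means, sigmas):
    """Mean negative log-likelihood of 1-D observations under a Gaussian
    mixture: the quantity a mixture density network minimises."""
    y = np.asarray(y)[:, None]                           # (n, 1)
    log_comp = (np.log(weights)
                - 0.5 * np.log(2.0 * np.pi * sigmas**2)
                - 0.5 * ((y - means) / sigmas) ** 2)     # (n, k)
    # log-sum-exp over mixture components, averaged over observations
    m = log_comp.max(axis=1, keepdims=True)
    log_lik = m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1))
    return -log_lik.mean()

# Hypothetical bimodal "biogas flow" observations and a two-component fit
# versus a single-Gaussian fit.
rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(2.0, 0.3, 400), rng.normal(5.0, 0.5, 600)])
good = gmm_nll(y, np.array([0.4, 0.6]), np.array([2.0, 5.0]), np.array([0.3, 0.5]))
bad = gmm_nll(y, np.array([1.0]), np.array([y.mean()]), np.array([y.std()]))
```

A mixture matched to the bimodal data achieves a clearly lower NLL than a single Gaussian, which is why NLL is the natural model-selection metric in the study.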

16 pages, 1301 KB  
Article
Probabilistic Shear Strength Prediction for Deep Beams Based on Bayesian-Optimized Data-Driven Approach
by Mao-Yi Liu, Zheng Li and Hang Zhang
Buildings 2023, 13(10), 2471; https://doi.org/10.3390/buildings13102471 - 28 Sep 2023
Cited by 12 | Viewed by 2130
Abstract
To ensure the safety of buildings, accurate and robust prediction of a reinforced concrete deep beam’s shear capacity is necessary to avoid unpredictable accidents caused by brittle failure. However, the failure mechanism of reinforced concrete deep beams is very complicated, has not been fully elucidated, and cannot be accurately described by simple equations. To solve this issue, machine learning techniques have been utilized and corresponding prediction models have been developed. Nevertheless, these models can only provide deterministic prediction results of the scalar type, and the confidence level is uncertain. Thus, these prediction results cannot be used for the design and assessment of deep beams. Therefore, in this paper, a probabilistic prediction approach of the shear strength of reinforced concrete deep beams is proposed based on the natural gradient boosting algorithm trained on a collected database. A database of 267 deep beam experiments was utilized, with 14 key parameters identified as the inputs related to the beam geometry, material properties, and reinforcement details. The proposed NGBoost model was compared to empirical formulas from design codes and other machine learning methods. The results showed that the NGBoost model achieved higher accuracy in mean shear strength prediction, with an R2 of 0.9045 and an RMSE of 38.8 kN, outperforming existing formulas by over 50%. Additionally, the NGBoost model provided probabilistic predictions of shear strength as probability density functions, enabling reliable confidence intervals. This demonstrated the capability of the data-driven NGBoost approach for robust shear strength evaluation of RC deep beams. Overall, the results illustrated that the proposed probabilistic prediction approach dramatically surpassed the current formulas adopted in design codes and machine learning models in both prediction accuracy and robustness. Full article
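Probabilistic predictions of this kind are typically scored on both point accuracy (R², RMSE) and interval calibration. A sketch with synthetic per-beam Normal predictions, as NGBoost outputs; the data and noise level are hypothetical, not the paper's 267-beam database.

```python
import numpy as np
from math import sqrt

def evaluate_probabilistic(y_true, mu, sigma, z90=1.645):
    """Score probabilistic shear-strength predictions given per-sample Normal
    distributions: RMSE, R-squared, and 90% prediction-interval coverage."""
    rmse = sqrt(np.mean((y_true - mu) ** 2))
    ss_res = np.sum((y_true - mu) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    lo, hi = mu - z90 * sigma, mu + z90 * sigma
    coverage = np.mean((y_true >= lo) & (y_true <= hi))
    return rmse, r2, coverage

# Hypothetical predictions for 200 beams (kN): true strengths, noisy mean
# estimates, and a well-calibrated predictive standard deviation.
rng = np.random.default_rng(5)
y = rng.uniform(200.0, 900.0, 200)
sigma = np.full(200, 40.0)
mu = y + rng.normal(0.0, 40.0, 200)
rmse, r2, coverage = evaluate_probabilistic(y, mu, sigma)
```

For a calibrated model the empirical coverage should sit near the nominal 90%; under- or over-coverage signals that the predicted density functions are too narrow or too wide.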
(This article belongs to the Topic AI Enhanced Civil Infrastructure Safety)

25 pages, 12380 KB  
Article
Efficient Calibration of Groundwater Contaminant Transport Models Using Bayesian Optimization
by Hao Deng, Shengfang Zhou, Yong He, Zeduo Lan, Yanhong Zou and Xiancheng Mao
Toxics 2023, 11(5), 438; https://doi.org/10.3390/toxics11050438 - 6 May 2023
Cited by 10 | Viewed by 3660
Abstract
Numerical modeling is a significant tool for understanding the dynamic characteristics of contaminant transport in groundwater. The automatic calibration of highly parametrized and computationally intensive numerical models for the simulation of contaminant transport in the groundwater flow system is a challenging task. While existing methods use general optimization techniques to achieve automatic calibration, the large numbers of numerical model evaluations required in the calibration process lead to high computing overhead and limit the efficiency of model calibration. This paper presents a Bayesian optimization (BO) method for efficient calibration of numerical models of groundwater contaminant transport. A Bayesian model is built to fully represent calibration criteria and derive the objective function for model calibration. The efficiency of model calibration is made possible by the probabilistic surrogate model and the expected improvement acquisition function in BO. The probabilistic surrogate model approximates the computationally expensive objective function with a closed-form expression that can be computed efficiently, while the expected improvement acquisition function proposes the most promising model parameters to improve the fitness to the calibration criteria and reduce the uncertainty of the surrogate model. These schemes allow us to find the optimized model parameters effectively by using a small number of numerical model evaluations. Two case studies for the calibration of the Cr(VI) transport model demonstrate that the BO method is effective and efficient in the inversion of hypothetical model parameters, the minimization of the objective function, and the adaptation to different model calibration criteria. Specifically, this promising performance is achieved within 200 numerical model evaluations, which substantially reduces the computing budget for model calibration. Full article
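The BO loop described above (probabilistic surrogate plus expected improvement) can be sketched end-to-end with a tiny Gaussian-process surrogate and a cheap stand-in objective. In a real calibration, `objective` would run the full transport model; everything here is a one-parameter illustration.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_q, jitter=1e-6):
    """GP posterior mean and standard deviation at query points."""
    K = rbf(x_tr, x_tr) + jitter * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_q)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)  # k(x, x) = 1
    return Ks.T @ alpha, np.sqrt(var)

def expected_improvement(mu, sd, best):
    """EI for minimisation: (best - mu) * Phi(z) + sd * phi(z)."""
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))
    phi = np.exp(-0.5 * z**2) / sqrt(2.0 * pi)
    return (best - mu) * Phi + sd * phi

objective = lambda x: (x - 2.3) ** 2   # cheap stand-in for a model-misfit function
x_train = np.array([0.0, 1.0, 4.0])    # initial parameter guesses
y_train = objective(x_train)
grid = np.linspace(0.0, 5.0, 501)

for _ in range(6):                     # six BO iterations
    mu, sd = gp_posterior(x_train, y_train, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y_train.min()))]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, objective(x_next))

x_best = x_train[np.argmin(y_train)]
```

The loop finds a near-optimal parameter with only a handful of objective evaluations, which is the budget argument the paper makes at a larger scale.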

15 pages, 8593 KB  
Article
Missing Structural Health Monitoring Data Recovery Based on Bayesian Matrix Factorization
by Shouwang Sun, Sheng Jiao, Qi Hu, Zhiwen Wang, Zili Xia, Youliang Ding and Letian Yi
Sustainability 2023, 15(4), 2951; https://doi.org/10.3390/su15042951 - 6 Feb 2023
Cited by 9 | Viewed by 2847
Abstract
The exposure of bridge health-monitoring systems to extreme conditions often results in missing data, which prevents the health-monitoring system from working properly. Therefore, there is an urgent need for an efficient data-cleaning method. With the development of big data and machine-learning techniques, several methods for missing-data recovery have emerged. However, optimization-based methods may overfit and demand extensive parameter tuning, and trained models may still have substantial errors when applied to unseen datasets. Furthermore, many methods can only process monitoring data from a single sensor at a time, so the spatiotemporal dependence among monitoring data from different sensors cannot be exploited to recover missing data. Monitoring data from multiple sensors can be organized in the form of a matrix; matrix factorization is therefore an appropriate way to handle such data. To this end, a hierarchical probabilistic model for matrix factorization is formulated under a fully Bayesian framework by incorporating a sparsity-inducing prior over spatiotemporal factors. The spatiotemporal dependence is modeled to reconstruct the monitoring-data matrix and thereby recover the missing entries. In experiments using continuous monitoring data from an in-service bridge, the proposed method shows good missing-data recovery performance. Furthermore, the interaction between missing data and the preset matrix rank is also investigated. The results show that the model achieves higher missing-data recovery accuracy with a higher preset rank for the same missing-data case. Full article
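The matrix-factorization idea can be illustrated with a simplified sketch. The paper's model is fully Bayesian with a sparsity-inducing prior; the version below keeps only the MAP core, where Gaussian priors on the factors reduce to ridge-regularized alternating least squares over the observed entries (the rank, `lam`, and iteration count here are illustrative assumptions):

```python
import numpy as np

def recover_missing(Y, observed, rank=3, lam=0.1, n_iter=100, seed=0):
    """MAP matrix factorization: fit Y ~ U @ V.T on observed entries only.

    `observed` is a boolean mask (True = entry was measured). Gaussian
    priors on the factor matrices reduce to the ridge term `lam`.
    """
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    reg = lam * np.eye(rank)
    for _ in range(n_iter):
        for i in range(n):                  # update row factors
            obs = observed[i]
            Vo = V[obs]
            U[i] = np.linalg.solve(Vo.T @ Vo + reg, Vo.T @ Y[i, obs])
        for j in range(m):                  # update column factors
            obs = observed[:, j]
            Uo = U[obs]
            V[j] = np.linalg.solve(Uo.T @ Uo + reg, Uo.T @ Y[obs, j])
    return U @ V.T                          # reconstruction fills the gaps
```

Because every sensor contributes to the shared low-rank factors, the reconstruction pools spatiotemporal dependence across channels rather than interpolating each sensor separately.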
(This article belongs to the Special Issue Artificial Intelligence (AI) in Structural Health Monitoring)

21 pages, 9586 KB  
Article
An Application of Relative Entropy in Structural Safety Analysis of Elastoplastic Beam under Fire Conditions
by Marcin Kamiński and Michał Strąkowski
Energies 2023, 16(1), 207; https://doi.org/10.3390/en16010207 - 25 Dec 2022
Cited by 2 | Viewed by 2338
Abstract
The main aim of this work is to propose a new reliability-assessment algorithm for steel civil engineering structures subjected to fire temperatures. The algorithm is based upon the relative probabilistic entropy concept elaborated by Bhattacharyya, and this probabilistic distance is sought between the extreme and admissible deformations of a structural beam subjected to elevated temperatures. Similar to the First Order Reliability Method (FORM), this approach uses the first two probabilistic characteristics of the structural response when the structural output can be modelled with a Gaussian distribution. The probabilistic structural response is found using a hybrid computational technique: the Finite Element Method system ABAQUS, with its fully coupled thermo-elastic analysis and 3D solid elements, combined with probabilistic modules implemented in the computer algebra system MAPLE. The probabilistic response is determined via a triple stochastic analysis based on classical Monte Carlo simulation, the iterative generalized stochastic perturbation technique, and semi-analytical probabilistic calculus. Final determination of the relative entropy in the Serviceability Limit State of the given structure, and its comparison with the results obtained using FORM analysis, enables this new technique to be calibrated against the numerical values proposed in engineering design codes. Hence, a more accurate probabilistic method may use experimentally based admissible values included in existing design code provisions. Full article
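For Gaussian structural responses, the Bhattacharyya distance used as the probabilistic distance has a closed form. A minimal univariate sketch follows; the structural interpretation (comparing, say, an extreme-deflection distribution against an admissible-limit distribution) and the example values are only suggestive, not taken from the paper:

```python
import math

def bhattacharyya_gauss(mu1, sig1, mu2, sig2):
    """Bhattacharyya distance between N(mu1, sig1^2) and N(mu2, sig2^2):
    D_B = (mu1 - mu2)^2 / (4 (sig1^2 + sig2^2))
        + 0.5 * ln((sig1^2 + sig2^2) / (2 sig1 sig2))
    """
    var_sum = sig1 ** 2 + sig2 ** 2
    return (0.25 * (mu1 - mu2) ** 2 / var_sum
            + 0.5 * math.log(var_sum / (2.0 * sig1 * sig2)))

# Hypothetical use: distance between an extreme-deflection distribution
# and an admissible-deflection distribution (values are illustrative).
d = bhattacharyya_gauss(mu1=24.0, sig1=2.0, mu2=30.0, sig2=1.5)
```

The distance is zero only when the two distributions coincide and grows as the response distribution separates from the admissible one, which is what makes it usable as a FORM-like safety measure once calibrated against code values.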

18 pages, 6353 KB  
Article
Towards Informed Water Resources Planning and Management
by Paolo Reggiani, Amal Talbi and Ezio Todini
Hydrology 2022, 9(8), 136; https://doi.org/10.3390/hydrology9080136 - 30 Jul 2022
Cited by 6 | Viewed by 3774
Abstract
In Water Resources Planning and Management, decision makers, although unsure of future outcomes, must make the most reliable and reassuring decisions. Deterministic and probabilistic prediction techniques, combined with optimization tools, have been widely used to improve both planning and management. Bayesian decision approaches are available to link probabilistic predictions to optimized decision schemes, but scientists are not fully able to express themselves in a language familiar to decision makers, who fear basing their decisions on "uncertain" forecasts in the vain belief that deterministic forecasts are more informative and reliable. This situation is even worse in the case of climate change projections, which bring additional degrees of uncertainty into the picture. Therefore, a need emerges for a common approach and means of communication between scientists, who deal with optimization tools, probabilistic predictions, and long-term projections, and operational decision makers, who must be helped to understand, accept, and acknowledge the benefits arising from operational water resources management based on probabilistic predictions and projections. Our aim here is to formulate the terms of the problem and the rationale for explaining it to and involving decision makers, with the final objective of having them use probabilistic predictions/projections in their decision-making processes. Full article
(This article belongs to the Collection Feature Papers of Hydrology)