Search Results (25)

Search Parameters:
Keywords = Markov chain random field

24 pages, 534 KiB  
Article
Inference for Two-Parameter Birnbaum–Saunders Distribution Based on Type-II Censored Data with Application to the Fatigue Life of Aluminum Coupon Cuts
by Omar M. Bdair
Mathematics 2025, 13(4), 590; https://doi.org/10.3390/math13040590 - 11 Feb 2025
Cited by 1 | Viewed by 655
Abstract
This study addresses the problem of parameter estimation and prediction for type-II censored data from the two-parameter Birnbaum–Saunders (BS) distribution. The BS distribution is commonly used in reliability analysis, particularly in modeling fatigue life. Accurate estimation and prediction are crucial in many fields where censored data frequently appear, such as material science, medical studies and industrial applications. This paper presents both frequentist and Bayesian approaches to estimate the shape and scale parameters of the BS distribution, along with the prediction of unobserved failure times. Random data are generated from the BS distribution under type-II censoring, where a pre-specified number of failures (m) is observed. The generated data are used to compute maximum likelihood estimates (MLEs) and Bayesian estimates and to evaluate their performance. The Bayesian method employs Markov Chain Monte Carlo (MCMC) sampling for point predictions and credible intervals. We apply the methods to both datasets generated under type-II censoring and real-world data on the fatigue life of 6061-T6 aluminum coupons. Although the results show that the two methods yield similar parameter estimates, the Bayesian approach offers more flexible and reliable prediction intervals. Extensive R code is used to explain the practical application of these methods. Our findings confirm the advantages of Bayesian inference in handling censored data, especially when prior information is available for estimation. This work not only supports the theoretical understanding of the BS distribution under type-II censoring but also provides practical tools for analyzing real data in reliability and survival studies. Future research will discuss extensions of these methods to the multi-sample progressive censoring model with larger datasets and the integration of degradation models commonly encountered in industrial applications. Full article
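A minimal sketch of the estimation setting this abstract describes, assuming SciPy's `fatiguelife` parameterization of the Birnbaum–Saunders distribution (shape α, scale β). Under type-II censoring only the first m order statistics are observed, and the remaining n − m units contribute a survival term at the m-th failure time; all numerical values below are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import fatiguelife  # SciPy's name for the Birnbaum-Saunders distribution

rng = np.random.default_rng(42)
n, m = 100, 80                       # n units on test, observation stops at the m-th failure
alpha_true, beta_true = 0.5, 2.0     # shape and scale (illustrative values)

lifetimes = fatiguelife.rvs(alpha_true, scale=beta_true, size=n, random_state=rng)
t = np.sort(lifetimes)[:m]           # type-II censored sample: first m order statistics

def neg_loglik(theta):
    a, b = theta
    if a <= 0 or b <= 0:
        return np.inf
    ll = fatiguelife.logpdf(t, a, scale=b).sum()
    ll += (n - m) * fatiguelife.logsf(t[-1], a, scale=b)  # survivors beyond t_(m)
    return -ll

res = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
alpha_hat, beta_hat = res.x
```

The same censored likelihood is what an MCMC sampler would target (times a prior) for the Bayesian analysis.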

28 pages, 3873 KiB  
Article
Bayesian Inference for Long Memory Stochastic Volatility Models
by Pedro Chaim and Márcio Poletti Laurini
Econometrics 2024, 12(4), 35; https://doi.org/10.3390/econometrics12040035 - 27 Nov 2024
Viewed by 1565
Abstract
We explore the application of integrated nested Laplace approximations (INLA) for the Bayesian estimation of stochastic volatility models characterized by long memory. The logarithmic variance persistence in these models is represented by a Fractional Gaussian Noise process, which we approximate as a linear combination of independent first-order autoregressive processes, lending itself to a Gaussian Markov Random Field representation. Our results from Monte Carlo experiments indicate that this approach exhibits small sample properties akin to those of Markov Chain Monte Carlo estimators. Additionally, it offers the advantages of reduced computational complexity and the mitigation of posterior convergence issues. We employ this methodology to estimate volatility dependency patterns for both the S&P 500 index and major cryptocurrencies. We thoroughly assess the in-sample fit and extend our analysis to the construction of out-of-sample forecasts. Furthermore, we propose multi-factor extensions and apply this method to estimate volatility measurements from high-frequency data, underscoring its exceptional computational efficiency. Our simulation results demonstrate that the INLA methodology achieves comparable accuracy to traditional MCMC methods for estimating latent parameters and volatilities in LMSV models. The proposed model extensions show strong in-sample fit and out-of-sample forecast performance, highlighting the versatility of the INLA approach. This method is particularly advantageous in high-frequency contexts, where the computational demands of traditional posterior simulations are often prohibitive. Full article
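The AR(1)-mixture approximation of Fractional Gaussian Noise described above can be sketched by matching autocovariances: the FGN autocovariance γ(k) = ½(|k+1|^{2H} − 2|k|^{2H} + |k−1|^{2H}) is fitted, in the least-squares sense, by a weighted sum of AR(1) autocorrelation functions φ_j^k. The Hurst parameter and the fixed AR(1) coefficients below are assumptions for illustration, not the paper's basis:

```python
import numpy as np

H = 0.8                                 # Hurst parameter of the fractional Gaussian noise
lags = np.arange(0, 50)
# FGN autocovariance: gamma(k) = 0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H)
gamma = 0.5 * (np.abs(lags + 1.0)**(2*H) - 2*np.abs(lags)**(2*H) + np.abs(lags - 1.0)**(2*H))

# Approximate gamma(k) by a weighted sum of AR(1) autocorrelations phi_j^k
phis = np.array([0.3, 0.7, 0.9, 0.99])  # fixed AR(1) coefficients (an assumption)
basis = phis[None, :] ** lags[:, None]  # shape (n_lags, n_components)
w, *_ = np.linalg.lstsq(basis, gamma, rcond=None)
max_err = np.abs(basis @ w - gamma).max()
```

Each AR(1) component is itself a Gaussian Markov process, which is what makes the sparse GMRF representation available to INLA.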

12 pages, 1657 KiB  
Article
Developing Theoretical Models for Atherosclerotic Lesions: A Methodological Approach Using Interdisciplinary Insights
by Amun G. Hofmann
Life 2024, 14(8), 979; https://doi.org/10.3390/life14080979 - 5 Aug 2024
Viewed by 1208
Abstract
Atherosclerosis, a leading cause of cardiovascular disease, necessitates advanced and innovative modeling techniques to better understand and predict plaque dynamics. The present work introduces two distinct hypothetical models inspired by different research fields: the logistic map from chaos theory and Markov models from stochastic processes. The logistic map effectively models the nonlinear progression and sudden changes in plaque stability, reflecting the chaotic nature of atherosclerotic events. In contrast, Markov models, including traditional Markov chains, spatial Markov models, and Markov random fields, provide a probabilistic framework to assess plaque stability and transitions. Spatial Markov models, visualized through heatmaps, highlight the spatial distribution of transition probabilities, emphasizing local interactions and dependencies. Markov random fields incorporate complex spatial interactions, inspired by advances in physics and computational biology, but present challenges in parameter estimation and computational complexity. While these hypothetical models offer promising insights, they require rigorous validation with real-world data to confirm their accuracy and applicability. This study underscores the importance of interdisciplinary approaches in developing theoretical models for atherosclerotic plaques. Full article
(This article belongs to the Special Issue Microvascular Dynamics: Insights and Applications)
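The first of the two ingredients is easy to make concrete: the logistic map x_{t+1} = r·x_t(1 − x_t), whose parameter r moves the dynamics between a stable fixed point and chaos (the parameter values below are illustrative, not calibrated to plaque data):

```python
def logistic_map(x0, r, n):
    """Iterate x_{t+1} = r * x_t * (1 - x_t) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

stable = logistic_map(0.2, 2.8, 200)   # converges to the fixed point 1 - 1/r
chaotic = logistic_map(0.2, 3.9, 200)  # chaotic regime: sensitive to initial conditions
```

In the paper's framing, the stable regime would correspond to a quiescent plaque and the chaotic regime to unpredictable progression events.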

20 pages, 16972 KiB  
Article
Sideband Vibro-Acoustics Suppression and Numerical Prediction of Permanent Magnet Synchronous Motor Based on Markov Chain Random Carrier Frequency Modulation
by Yong Chen, Bingxiao Yan, Liming Zhang, Kefu Yao and Xue Jiang
Appl. Sci. 2024, 14(11), 4808; https://doi.org/10.3390/app14114808 - 2 Jun 2024
Cited by 1 | Viewed by 1048
Abstract
This paper presents a Markov chain random carrier frequency modulation (MRCFM) technique for suppressing sideband vibro-acoustic responses caused by discontinuous pulse-width modulation (DPWM) in permanent magnet synchronous motors (PMSMs) for new energy vehicles. Firstly, the spectral and order distributions of the sideband current harmonics and radial electromagnetic forces introduced by DPWM are characterized and identified. Then, the principle and implementation method of three-state Markov chain random number generation are proposed, and a particle swarm optimization (PSO) algorithm is chosen to quickly find the key parameters of transition probability and random gain. A Simulink and JMAG multi-physics field co-simulation model is built to simulate and predict the suppression effect of the MRCFM method on the sideband vibro-acoustic response. Finally, a 12-slot-10-pole PMSM test platform is built for experimental testing. The results show that the sideband current harmonics and vibro-acoustic response are effectively suppressed after optimization by the Markov chain algorithm. The constructed multi-physics field co-simulation model can accurately predict the amplitude characteristics of the sideband current harmonics and vibro-acoustic response. Full article
(This article belongs to the Section Acoustics and Vibrations)
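The core idea can be sketched as drawing carrier frequencies from a three-state Markov chain instead of i.i.d.: the transition probabilities (here symmetric, favoring state changes, which spreads switching energy across the spectrum) are exactly the kind of parameters the paper tunes via PSO. The frequencies and matrix below are illustrative assumptions:

```python
import numpy as np

freqs = np.array([8.0, 10.0, 12.0])   # assumed carrier frequencies (kHz)
P = np.array([[0.2, 0.4, 0.4],        # assumed transition probabilities;
              [0.4, 0.2, 0.4],        # low self-transition probability encourages
              [0.4, 0.4, 0.2]])       # frequent switching between carriers

rng = np.random.default_rng(0)
state, seq = 0, []
for _ in range(10_000):
    state = rng.choice(3, p=P[state])
    seq.append(freqs[state])
seq = np.array(seq)
```

Because this P is doubly stochastic, the chain spends one third of the time at each carrier, while the Markov structure shapes how the spectral energy of the PWM sidebands is spread.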

22 pages, 651 KiB  
Article
Optimization of Active Learning Strategies for Causal Network Structure
by Mengxin Zhang and Xiaojun Zhang
Mathematics 2024, 12(6), 880; https://doi.org/10.3390/math12060880 - 17 Mar 2024
Viewed by 1632
Abstract
Causal structure learning is one of the major fields in causal inference. Only the Markov equivalence class (MEC) can be learned from observational data; to fully orient unoriented edges, experimental data need to be introduced from external intervention experiments to improve the identifiability of causal graphs. Finding suitable intervention targets is key to intervention experiments. We propose a causal structure active learning strategy based on graph structures. In the context of randomized experiments, the central nodes of the directed acyclic graph (DAG) are considered as the alternative intervention targets. In each stage of the experiment, we decompose the chain graph by removing the existing directed edges; then, each connected component is oriented separately through intervention experiments. Finally, all connected components are merged to obtain a complete causal graph. We compare our algorithm with previous work in terms of the number of intervention variables, convergence rate and model accuracy. The experimental results show that the performance of the proposed method in restoring the causal structure is comparable to that of previous works. The strategy of finding the optimal intervention target is simplified, which improves the speed of the algorithm while maintaining the accuracy. Full article
(This article belongs to the Special Issue Research Progress and Application of Bayesian Statistics)
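The "central nodes as intervention targets" step can be sketched minimally as picking the highest-degree node of the undirected skeleton of the current component (a toy graph, not one from the paper):

```python
from collections import Counter

# hypothetical undirected skeleton of one connected component
edges = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "E")]

deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

# the most central node (by degree) is the candidate intervention target
target = max(deg, key=deg.get)
```

Intervening on such a hub orients the largest number of incident edges per experiment, which is the intuition behind reducing the number of interventions needed.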

18 pages, 2272 KiB  
Review
Application of Mass Service Theory to Economic Systems Optimization Problems—A Review
by Farida F. Galimulina and Naira V. Barsegyan
Mathematics 2024, 12(3), 403; https://doi.org/10.3390/math12030403 - 26 Jan 2024
Cited by 3 | Viewed by 1811
Abstract
An interdisciplinary approach to management allows for the integration of knowledge and tools of different fields of science into a unified methodology in order to improve the efficiency of resource management of different kinds of systems. In the conditions of global transformations, it is economic systems that have been significantly affected by external destabilizing factors. This determines the focus of attention on the need to develop tools for the modeling and optimization of economic systems, both in terms of organizational structure and in the context of resource management. The purpose of this review study is to identify the current gaps (shortcomings) in the scientific literature devoted to the issues of the modeling and optimization of economic systems using the tools of mass service theory (queueing theory). This article presents a critical analysis of approaches for the formulation of provisions on mass service systems in the context of resource management. On the one hand, modern works are characterized by the inclusion of an extensive number of random factors that determine the performance and efficiency of economic systems: the probability of delays and interruptions in mobile networks; the integration of order, inventory, and production management processes; the cost estimation of multi-server system operation; and randomness factors, customer activity, and resource constraints, among others. On the other hand, controversial points are identified. The analytical study carried out allows us to state that the overwhelming majority of mass service models applied in relation to economic systems and resource supply optimization are devoted to Markov chain modeling. In terms of the chronology of the problems studied, there is a marked transition from modeling simple systems to complex mass service networks. 
In addition, we conclude that the complex architecture of modern economic systems opens up a wide research field for finding a methodology for assessing the dependence of the enterprise performance on the effect of optimization provided by using the provisions of mass service theory. This statement can be the basis for future research. Full article
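As a concrete instance of the Markov chain models the review surveys, the simplest mass service (queueing) system, M/M/1, is a birth-death Markov chain with the closed-form stationary distribution π_n = (1 − ρ)ρⁿ, ρ = λ/μ, from which performance measures follow directly (the rates below are illustrative):

```python
# M/M/1 queue: arrivals at rate lam, service at rate mu, utilization rho = lam/mu
lam, mu = 2.0, 3.0
rho = lam / mu

# stationary distribution of the queue-length Markov chain, truncated for computation
pi = [(1 - rho) * rho**n for n in range(50)]

# mean number in system; analytically L = rho / (1 - rho)
L = sum(n * p for n, p in enumerate(pi))
```

For ρ = 2/3 this gives L ≈ 2 customers in the system on average, matching ρ/(1 − ρ).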

12 pages, 284 KiB  
Article
Stochastic Process Leading to Catalan Number Recurrence
by Mariusz Białecki
Mathematics 2023, 11(24), 4953; https://doi.org/10.3390/math11244953 - 14 Dec 2023
Cited by 1 | Viewed by 2465
Abstract
Motivated by a simple model of earthquake statistics, a finite random discrete dynamical system is defined in order to obtain the Catalan number recurrence by describing the stationary state of the system in the limit of its infinite size. Equations describing the dynamics of the system, represented by partitions of a subset of {1, 2, …, N}, are derived using basic combinatorics. The existence and uniqueness of a stationary state are shown using Markov chain terminology. A well-defined mean-field-type approximation is used to obtain the block size distribution, and the consistency of the approach is verified. It is shown that this recurrence asymptotically takes the form of the Catalan number recurrence for particular dynamics parameters of the system. Full article
(This article belongs to the Special Issue Mathematical Modeling in Geophysics: Concepts and Practices)
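The Catalan number recurrence the abstract refers to, C_{n+1} = Σ_{i=0}^{n} C_i C_{n−i} with C_0 = 1, can be computed directly:

```python
def catalan(n_max):
    """Catalan numbers via the recurrence C_{n+1} = sum_{i=0}^{n} C_i * C_{n-i}, C_0 = 1."""
    C = [1]
    for n in range(n_max):
        C.append(sum(C[i] * C[n - i] for i in range(n + 1)))
    return C
```

The first few values are 1, 1, 2, 5, 14, 42, 132, which is the sequence the paper's stationary block-size distribution reproduces asymptotically.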
15 pages, 4761 KiB  
Article
Inversion of Rayleigh Wave Dispersion Curve Extracting from Ambient Noise Based on DNN Architecture
by Qingsheng Meng, Yuhong Chen, Fei Sha and Tao Liu
Appl. Sci. 2023, 13(18), 10194; https://doi.org/10.3390/app131810194 - 11 Sep 2023
Cited by 6 | Viewed by 2563
Abstract
The inversion of the Rayleigh wave dispersion curve is a crucial step in obtaining the shear wave velocity (VS) of near-surface structures. Because the problem is ill-posed and nonlinear, existing inversion methods suffer from low efficiency and ambiguity. To address these challenges, we describe a six-layer deep neural network algorithm for the inversion of 1D VS from dispersion curves of fundamental-mode Rayleigh surface waves. Our method encompasses several key advancements: (1) we use finer layering to construct the 1-D VS model of the subsurface, which can describe more complex near-surface geological structures; (2) considering the ergodicity and orderliness of strata evolution, a constrained Markov chain is employed to reconstruct the complex velocity model; (3) we build a practical and complete dispersion curve inversion workflow. We tested the model's performance on a random synthetic dataset and examined the influence of different factors, including the number of training samples, the learning rate, and the selection of the optimal artificial neural network architecture. Finally, field dispersion data were used to further verify the method's effectiveness. Our synthetic dataset demonstrated the diversity and rationality of the random VS models. Training and prediction showed high accuracy and sped up the inversion process (to only ~15 s), and we quantified the effect of the different factors. The outcomes derived from applying this technique to measured dispersion data in the Yellow River Delta correlate strongly with those obtained from the combination of the very fast simulated annealing method and the downhill simplex method, as well as with the statistically derived shear wave velocity data of the sedimentary layers in the Yellow River Delta. From a long-term perspective, our method can provide an alternative for deriving VS models of complex near-surface structures. Full article
(This article belongs to the Special Issue Machine Learning Approaches for Geophysical Data Analysis)
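A minimal sketch of using a Markov chain over ordered velocity classes to generate random layered VS training models, in the spirit of advancement (2); the velocity classes and the transition matrix, biased toward velocity increasing with depth, are illustrative assumptions rather than the paper's calibrated values:

```python
import numpy as np

vs_classes = np.array([150.0, 250.0, 400.0, 600.0])  # VS classes (m/s), ordered
P = np.array([[0.50, 0.30, 0.15, 0.05],              # transition probabilities between
              [0.10, 0.50, 0.30, 0.10],              # classes from one layer to the next,
              [0.05, 0.15, 0.50, 0.30],              # biased toward higher VS with depth
              [0.05, 0.05, 0.20, 0.70]])

rng = np.random.default_rng(1)

def random_vs_profile(n_layers, start=0):
    """Draw a layered VS model: each layer's class depends on the layer above."""
    states = [start]
    for _ in range(n_layers - 1):
        states.append(rng.choice(4, p=P[states[-1]]))
    return vs_classes[states]

profile = random_vs_profile(10)
```

Such profiles, paired with their forward-modeled dispersion curves, would form the synthetic training set for the network.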

11 pages, 278 KiB  
Communication
Equivalence between LC-CRF and HMM, and Discriminative Computing of HMM-Based MPM and MAP
by Elie Azeraf, Emmanuel Monfrini and Wojciech Pieczynski
Algorithms 2023, 16(3), 173; https://doi.org/10.3390/a16030173 - 21 Mar 2023
Cited by 3 | Viewed by 2385
Abstract
Practitioners have used hidden Markov models (HMMs) in different problems for about sixty years. Moreover, conditional random fields (CRFs) are an alternative to HMMs and appear in the literature as different and somewhat competing models. We propose two contributions: First, we show that the basic linear-chain CRFs (LC-CRFs), considered as different from HMMs, are in fact equivalent to HMMs in the sense that for each LC-CRF there exists an HMM, which we specify, whose posterior distribution is identical to the given LC-CRF. Second, we show that it is possible to reformulate the generative Bayesian classifiers maximum posterior mode (MPM) and maximum a posteriori (MAP), used in HMMs, as discriminative ones. The last point is of importance in many fields, especially in natural language processing (NLP), as it shows that in some situations dropping HMMs in favor of CRFs is not necessary. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications IV)
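The MPM classifier discussed here is the argmax of the posterior marginals, computable in an HMM by the forward-backward recursions; a tiny two-state sketch with assumed parameters:

```python
import numpy as np

# tiny HMM: 2 hidden states, 2 observation symbols (all parameters assumed)
pi0 = np.array([0.6, 0.4])               # initial distribution
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission matrix B[state, symbol]
obs = [0, 0, 1]

T, S = len(obs), 2
alpha = np.zeros((T, S))
beta = np.ones((T, S))
alpha[0] = pi0 * B[:, obs[0]]
for t in range(1, T):                    # forward pass
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
for t in range(T - 2, -1, -1):           # backward pass
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

post = alpha * beta
post /= post.sum(axis=1, keepdims=True)  # posterior marginals p(x_t | obs)
mpm = post.argmax(axis=1)                # MPM decision at each position
```

The paper's point is that these posterior marginals depend only on the posterior distribution, so an LC-CRF with the matching posterior yields the same MPM decisions.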
19 pages, 6532 KiB  
Article
Application of the Coupled Markov Chain in Soil Liquefaction Potential Evaluation
by Hsiu-Chen Wen, An-Jui Li, Chih-Wei Lu and Chee-Nan Chen
Buildings 2022, 12(12), 2095; https://doi.org/10.3390/buildings12122095 - 29 Nov 2022
Cited by 1 | Viewed by 2174
Abstract
The evaluation of localized soil-liquefaction potential is based primarily on the individual evaluation of the liquefaction potential in each borehole, followed by calculating the liquefaction-potential index between boreholes through Kriging interpolation, and then plotting the liquefaction-potential map. However, misjudgments in design, construction, and operation may occur due to the complexity and uncertainty of actual geologic structures. In this study, the coupled Markov chain (CMC) method was used to create and analyze stratigraphic profiles and to grid the stratum between each borehole so that the stratum consisted of several virtual boreholes. The soil-layer parameters were established using homogenous and random field models, and the subsequent liquefaction-potential-evaluation results were compared with those derived using the Kriging method. The findings revealed that within the drilling data range in this study, the accuracy of the CMC model in generating stratigraphic profiles was greater than that of the Kriging method. Additionally, if the CMC method incorporating random field parameters were to be used in engineering practice, we recommend that, after calculating the curve of the mean, the coefficient of variation (COV) be set to 0.25 as a conservative estimate of the liquefaction-potential interval that takes the evaluation results of the Kriging method into account. Full article
(This article belongs to the Special Issue Advances in Soils and Foundations)
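The idea of filling virtual boreholes between two observed ones can be sketched with a single horizontal chain conditioned on both its left neighbour and the known state at the far borehole, via P(Z_i = k | Z_{i−1} = l, Z_N = q) ∝ p_{lk} (P^{N−1−i})_{kq}; this is a simplified one-directional version of the coupled Markov chain, and the three soil classes and transition matrix are assumptions:

```python
import numpy as np

# three soil classes; horizontal transition matrix (assumed, strongly persistent)
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

rng = np.random.default_rng(0)

def conditioned_profile(n, start, end):
    """Simulate soil classes between two boreholes with known states at both ends."""
    states = [start]
    for i in range(1, n - 1):
        steps_left = n - 1 - i
        Pk = np.linalg.matrix_power(P, steps_left)  # k-step probabilities to the boundary
        w = P[states[-1]] * Pk[:, end]              # condition on left neighbour AND far borehole
        states.append(rng.choice(3, p=w / w.sum()))
    states.append(end)
    return states

profile = conditioned_profile(12, start=0, end=2)
```

Repeating this for each depth row yields a stratigraphic profile consistent with both bounding boreholes.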

15 pages, 790 KiB  
Article
Prediction and Surveillance Sampling Assessment in Plant Nurseries and Fields
by Nora C. Monsalve and Antonio López-Quílez
Appl. Sci. 2022, 12(18), 9005; https://doi.org/10.3390/app12189005 - 8 Sep 2022
Cited by 1 | Viewed by 1454
Abstract
In this paper, we propose a structured additive regression (STAR) model for modeling the occurrence of a disease in fields or nurseries. The methodological approach involves a Gaussian field (GF) affected by a spatial process represented by an approximation to a Gaussian Markov random field (GMRF). This modeling allows the building of maps with prediction probabilities regarding the presence of a disease in plants using Bayesian kriging. The advantage of this modeling is its computational benefit when compared with known spatial hierarchical models and with the Bayesian inference based on Markov chain Monte Carlo (MCMC) methods. Inference through the use of the integrated nested Laplace approximation (INLA) with the stochastic partial differential equation (SPDE) approach facilitates the handling of large datasets in excellent computation times. Our approach allows the evaluation of different sampling strategies, from which we obtain inferences and prediction maps with similar behaviour to those obtained when we consider all subjects in the study population. The analysis of the different sampling strategies allows us to recognize the relevance of spatial components in the studied phenomenon. We demonstrate how Bayesian kriging can incorporate sources of uncertainty associated with the prediction parameters, which leads to more realistic and accurate estimation of the uncertainty. We illustrate the methodology with samplings of Citrus macrophylla affected by the tristeza virus (CTV) grown in a nursery. Full article
(This article belongs to the Special Issue Spatial Analysis of Agricultural Data)
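The computational advantage of the GMRF representation comes from its sparse precision matrix: prediction at unobserved sites is a sparse solve, −Q_mm⁻¹ Q_mo z_o, rather than a dense covariance inversion. A 1-D sketch with an assumed CAR-type precision (not the paper's SPDE model; all values illustrative):

```python
import numpy as np

# GMRF prior on a line of sites: precision Q = tau*(D - rho*W) plus a small ridge
n = 30
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0        # neighbour structure (a path graph)
D = np.diag(W.sum(axis=1))
Q = 2.0 * (D - 0.9 * W) + 0.01 * np.eye(n)  # diagonally dominant, hence positive definite

obs_idx = np.array([0, 10, 20, 29])         # "borehole"-like observed sites
z_obs = np.array([0.0, 1.0, 0.5, -0.5])     # hypothetical latent disease intensities
miss = np.setdiff1d(np.arange(n), obs_idx)

# conditional mean at missing sites: mu_m = -Q_mm^{-1} Q_mo z_o (sparse solve in practice)
Q_mm = Q[np.ix_(miss, miss)]
Q_mo = Q[np.ix_(miss, obs_idx)]
z_pred = -np.linalg.solve(Q_mm, Q_mo @ z_obs)
```

Mapping the latent field through a link function would give the prediction probabilities the abstract describes.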

8 pages, 2145 KiB  
Article
Patterns Simulations Using Gibbs/MRF Auto-Poisson Models
by Stelios Zimeras
Technologies 2022, 10(3), 69; https://doi.org/10.3390/technologies10030069 - 6 Jun 2022
Cited by 1 | Viewed by 2244
Abstract
Pattern analysis is the process by which characteristics of big data can be recognized using specific methods. Recognition of the data, especially images, can be achieved by applying spatial models that explain the neighborhood structure of the patterns. Such models can be introduced as Markov random field (MRF) models, where the conditional distribution of the pixels is defined by a specific distribution. Various spatial models can be introduced to explain the real patterns of the data; one class of these models, based on the Poisson distribution, is called the auto-Poisson models. The main advantage of these models is their consideration of the local characteristics of the image. Based on this local analysis, various patterns can be introduced and models that better explain the real data can be estimated using advanced statistical techniques such as Markov chain Monte Carlo (MCMC) methods. These methods are based on simulations in which the proposed distribution must converge to the original (final) one. In this work, an MRF model under a Poisson distribution is analyzed, and simulations based on an MCMC process, the Gibbs sampler, are illustrated. Results are presented using simulated and real pattern data. Full article
(This article belongs to the Special Issue 10th Anniversary of Technologies—Recent Advances and Perspectives)
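A minimal Gibbs sampler for an auto-Poisson MRF: each pixel is redrawn from its Poisson full conditional, whose log-rate is linear in the neighbouring counts; b < 0 (inhibition) keeps the auto-Poisson model well defined. The grid size and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
X = rng.poisson(2.0, size=(n, n))  # initial pattern
a, b = 1.0, -0.1                   # b < 0: neighbouring counts inhibit each other

def neighbor_sum(X, i, j):
    """Sum of the 4-neighbourhood, respecting the image boundary."""
    s = 0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < n and 0 <= nj < n:
            s += X[ni, nj]
    return s

for sweep in range(200):           # Gibbs sampler: sweep the lattice, pixel by pixel
    for i in range(n):
        for j in range(n):
            lam = np.exp(a + b * neighbor_sum(X, i, j))  # Poisson full conditional rate
            X[i, j] = rng.poisson(lam)
```

After burn-in, successive sweeps are (dependent) draws from the auto-Poisson field, from which pattern statistics can be estimated.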

26 pages, 18513 KiB  
Article
HMM-Based Dynamic Mapping with Gaussian Random Fields
by Hongjun Li, Miguel Barão, Luís Rato and Shengjun Wen
Electronics 2022, 11(5), 722; https://doi.org/10.3390/electronics11050722 - 26 Feb 2022
Cited by 1 | Viewed by 2031
Abstract
This paper focuses on the mapping problem for mobile robots in dynamic environments where the state of every point in space may change, over time, between free or occupied. The dynamical behaviour of a single point is modelled by a Markov chain, which has to be learned from the data collected by the robot. Spatial correlation is based on Gaussian random fields (GRFs), which correlate the Markov chain parameters according to their physical distance. Using this strategy, one point can be learned from its surroundings, and unobserved space can also be learned from nearby observed space. The map is a field of Markov matrices that describe not only the occupancy probabilities (the stationary distribution) but also the dynamics at every point. The estimation of transition probabilities of the whole space is factorised into two steps: the parameter estimation for training points and the parameter prediction for test points. The parameter estimation in the first step is solved by the expectation maximisation (EM) algorithm. Based on the estimated parameters of training points, the parameters of test points are obtained by the predictive equation in Gaussian processes with noise-free observations. Finally, this method is validated in experimental environments. Full article
(This article belongs to the Special Issue Recent Advanced Applications of Rehabilitation and Medical Robotics)
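The per-point Markov chain can be sketched by counting transitions in one point's free/occupied history and reading the occupancy probability off the stationary distribution; the GRF coupling across points is omitted here, and the observation sequence is made up:

```python
import numpy as np

# one map point's observed history: 0 = free, 1 = occupied (hypothetical data)
obs = np.array([0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1])

# count transitions and normalise rows to get the 2x2 Markov matrix
counts = np.zeros((2, 2))
for s, t in zip(obs[:-1], obs[1:]):
    counts[s, t] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

# stationary distribution (long-run occupancy probability) via the left eigenvector
w, v = np.linalg.eig(P_hat.T)
stat = np.real(v[:, np.argmax(np.real(w))])
stat /= stat.sum()
```

In the paper, the GRF then shrinks such per-point estimates toward those of nearby points, so sparsely observed space borrows strength from its surroundings.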

15 pages, 5749 KiB  
Article
“Realistic Choice of Annual Matrices Contracts the Range of λS Estimates” under Reproductive Uncertainty Too
by Dmitrii O. Logofet, Leonid L. Golubyatnikov, Elena S. Kazantseva and Nina G. Ulanova
Mathematics 2021, 9(23), 3007; https://doi.org/10.3390/math9233007 - 24 Nov 2021
Cited by 4 | Viewed by 1735
Abstract
Our study is devoted to a subject popular in the field of matrix population models, namely, estimating the stochastic growth rate, λS, a quantitative measure of long-term population viability, for a discrete-stage-structured population monitored over many years. “Reproductive uncertainty” refers to a feature inherent in the data and the life cycle graph (LCG) when the LCG has more than one reproductive stage but the progeny cannot be associated with a parent stage in a unique way. Reproductive uncertainty complicates the estimation of λS, which is defined from the limit of a sequence of population projection matrices (PPMs) chosen randomly from a given set of annual PPMs. To construct a Markov chain that governs the choice of PPMs for a local population of Eritrichium caucasicum, a short-lived perennial alpine plant species, we found a local weather index that is correlated with the variations in the annual PPMs, and we considered its long time series as a realization of the Markov chain to be constructed. Reproductive uncertainty required a proper modification of how the transition matrix is restored from a long realization of the chain, and the restored matrix then governed the random choice in several series of Monte Carlo simulations of long-enough sequences. The resulting ranges of λS estimates turn out to be narrower than those obtained by the popular i.i.d. (independent and identically distributed matrices) methods of random choice; hence, we obtain a more accurate and reliable forecast of population viability. Full article
(This article belongs to the Special Issue Advances in the Mathematics of Ecological Modelling)
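Estimating λS as the growth rate of random PPM products driven by a Markov chain over year types can be sketched as follows; the two annual matrices and the transition matrix are illustrative assumptions, not the paper's Eritrichium caucasicum data:

```python
import numpy as np

rng = np.random.default_rng(0)
# two hypothetical annual population projection matrices ("good" vs "bad" year)
A_good = np.array([[0.2, 1.8], [0.6, 0.7]])
A_bad  = np.array([[0.1, 0.6], [0.3, 0.5]])
mats = [A_good, A_bad]
P = np.array([[0.7, 0.3], [0.5, 0.5]])  # Markov chain over year types (assumed)

x = np.ones(2)
state = 0
log_growth = 0.0
T = 20_000
for _ in range(T):                      # lambda_S = lim (1/T) * log ||A_T ... A_1 x||
    state = rng.choice(2, p=P[state])
    x = mats[state] @ x
    s = x.sum()
    log_growth += np.log(s)
    x /= s                              # renormalise to avoid overflow/underflow
lambda_S = np.exp(log_growth / T)
```

Driving the choice with a fitted Markov chain, rather than i.i.d. draws, is exactly what narrows the range of λS estimates in the paper.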

28 pages, 8063 KiB  
Review
gPCE-Based Stochastic Inverse Methods: A Benchmark Study from a Civil Engineer’s Perspective
by Filippo Landi, Francesca Marsili, Noemi Friedman and Pietro Croce
Infrastructures 2021, 6(11), 158; https://doi.org/10.3390/infrastructures6110158 - 5 Nov 2021
Cited by 15 | Viewed by 2989
Abstract
In civil and mechanical engineering, Bayesian inverse methods may serve to calibrate the uncertain input parameters of a structural model given the measurements of the outputs. Through such a Bayesian framework, a probabilistic description of parameters to be calibrated can be obtained; this approach is more informative than a deterministic local minimum point derived from a classical optimization problem. In addition, building a response surface surrogate model could allow one to overcome computational difficulties. Here, the general polynomial chaos expansion (gPCE) theory is adopted with this objective in mind. Because the ability of these methods to identify uncertain inputs depends on several factors linked to the model under investigation, as well as the experiment carried out, the understanding of results is not univocal, often leading to doubtful conclusions. In this paper, the performances and the limitations of three gPCE-based stochastic inverse methods are compared: the Markov Chain Monte Carlo (MCMC), the polynomial chaos expansion-based Kalman Filter (PCE-KF) and a method based on the minimum mean square error (MMSE). Each method is tested on a benchmark comprising seven models: four analytical abstract models, a one-dimensional static model, a one-dimensional dynamic model and a finite element (FE) model. The benchmark allows the exploration of relevant aspects of problems usually encountered in civil, bridge and infrastructure engineering, highlighting how the degree of non-linearity of the model, the magnitude of the prior uncertainties, the number of random variables characterizing the model, the information content of measurements and the measurement error affect the performance of Bayesian updating. 
The intention of this paper is to highlight the capabilities and limitations of each method, as well as to promote their critical application to complex case studies in the wider field of smarter and more informed infrastructure systems. Full article
(This article belongs to the Special Issue Inspection, Assessment and Retrofit of Transport Infrastructure)
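Of the three methods compared, MCMC is the easiest to sketch: a random-walk Metropolis sampler calibrating a single stiffness-like parameter of a linear model against synthetic measurements. The model, prior, noise level, and proposal scale below are assumptions for illustration, not the benchmark's:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic measurements: y = k * x + noise; we calibrate the parameter k
k_true = 3.0
x = np.linspace(0.0, 1.0, 20)
y = k_true * x + rng.normal(0.0, 0.1, size=x.size)

def log_post(k):
    # Gaussian likelihood (noise sigma = 0.1) with a wide N(0, 10^2) prior on k
    return -0.5 * np.sum((y - k * x) ** 2) / 0.1**2 - 0.5 * (k / 10.0) ** 2

samples, k = [], 1.0
lp = log_post(k)
for _ in range(5000):                   # random-walk Metropolis
    prop = k + rng.normal(0.0, 0.2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob min(1, ratio)
        k, lp = prop, lp_prop
    samples.append(k)

k_mean = np.mean(samples[1000:])        # posterior mean after burn-in
```

The gPCE-based alternatives in the paper replace the expensive forward model inside `log_post` with a polynomial surrogate, which is where their computational advantage comes from.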
