Buildings
  • Article
  • Open Access

10 December 2025

Structural Damage Identification with Machine Learning Based Bayesian Model Selection for High-Dimensional Systems

Faculty of Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
* Author to whom correspondence should be addressed.
This article belongs to the Section Building Structures

Abstract

Identifying structural damage in high-dimensional systems remains a major challenge due to the curse of dimensionality and the inherent sparsity of real-world damage scenarios. Traditional Bayesian or optimization-based approaches often become computationally intractable when applied to structures with a large number of uncertain parameters, of which only a few members are actually damaged. To address this problem, this study proposes a machine learning (ML) and Widely Applicable Information Criterion (WAIC) based Bayesian framework for efficient and accurate damage identification in high-dimensional systems. In the proposed approach, an ML classifier is first trained on simulated modal responses under randomly generated damage patterns. The classifier predicts the most likely damaged members from the measured responses, effectively reducing the high-dimensional search space to a small subset of candidates. Subsequently, the WAIC is employed to evaluate the Bayesian models assembled from these candidates and to automatically select the optimal damage model. By combining the localization capability of ML with the uncertainty quantification of Bayesian inference, the proposed method achieves high identification accuracy with a significantly reduced computational cost of model selection. Numerical experiments on a high-dimensional truss system demonstrate that the method can accurately locate and quantify multiple damages even under noise contamination. The results confirm that the hybrid framework effectively mitigates the curse of dimensionality and provides a robust solution for structural damage identification in large-scale structural systems.

1. Introduction

Structural health monitoring (SHM) aims to ensure the safety and reliability of engineering structures by identifying and quantifying potential damage at an early stage [1]. As modern infrastructures grow in scale and complexity, traditional inspection-based approaches become insufficient due to their dependence on accessibility and subjective judgment [2]. Structural model updating methods have therefore garnered significant attention in recent years as advanced SHM techniques.
Model updating techniques can generally be divided into deterministic and probabilistic approaches. In deterministic model updating, the problem is formulated as an optimization task in which an objective function quantifying the discrepancy between measured and simulated structural responses is minimized to obtain the best-fit model. Various optimization algorithms, such as genetic algorithms [3,4], particle swarm optimization [5], and simulated annealing and its derivatives [6], have been widely applied in this context. However, deterministic methods yield only a single optimal solution, whereas model updating problems, like most ill-posed inverse problems, often possess multiple plausible solutions due to measurement noise, incomplete observations, or model uncertainties [7,8]. In contrast, probabilistic approaches, particularly Bayesian model updating, treat model parameters as random variables and provide posterior probability distributions, thereby offering a more comprehensive quantification of uncertainty and parameter correlation. However, for large-scale and complex structures with many potentially uncertain parameters, Bayesian updating may still lead to large errors in damage identification [9].
To address these difficulties with high-dimensional problems in Bayesian model updating, numerous stochastic simulation techniques, such as importance sampling [10], Gibbs sampling [11], Markov Chain Monte Carlo (MCMC) [12], and their derivative algorithms [13,14,15,16], have been proposed to tackle the computational complexity of the required integrals. However, some sampling-based Bayesian updating methods have been reported to struggle to accurately identify the true structural damage location and severity when the parameter dimensionality is high, sensor density is limited, or the measurements are contaminated by high-level noise [17,18,19]. Some researchers have attempted to reduce the dimension of the posterior function in Bayesian inference, with model selection being one such approach. Model selection methods based on stochastic simulation, such as nested sampling [20], evolutionary nested sampling [21], and the widely applicable information criterion (WAIC) [22], successfully select the optimal model and damage location. However, these methods all rely on a known set of candidate models or on comparisons among low-dimensional models; selecting the optimal model within a high-dimensional model space remains extremely challenging.
In recent years, machine learning (ML) and pattern recognition techniques have increasingly been incorporated into vibration-based structural health monitoring to improve model updating. Data-driven classifiers and regression models, such as neural networks, support vector machines, and ensemble boosting algorithms, have been successfully used to map modal characteristics to structural damage indicators, demonstrating strong potential for reducing reliance on explicit modeling [23,24,25]. Several studies have combined ML and dimensionality-reduction techniques with Bayesian updating or model selection to improve inference stability for high-dimensional problems; examples include transfer-learning-based Bayesian updating, ML-based feature mappings for Bayesian model updating, and surrogate-assisted Bayesian inference for large-scale structures [26,27,28,29]. While such approaches simplify Bayesian inference by projecting the high-dimensional posterior onto a reduced feature space, they typically ignore the comparison among multiple candidate models. For high-dimensional structures, however, the number of possible models increases combinatorially, and model selection becomes a fundamental yet unresolved challenge.
To overcome this limitation, this study introduces an ML-assisted Bayesian framework that integrates model updating with model selection based on the WAIC. The proposed method employs a LogitBoost classifier [30,31,32] trained on synthetic modal features to rank the likelihood that each parameter belongs to the damage scenario. Only the top-ranked candidates are then included in the Bayesian posterior, which greatly reduces the number of candidate Bayesian models. The resulting Bayesian models are evaluated using the WAIC, and the model with the lowest WAIC value is selected as the optimal representation of the damaged structure.
The effectiveness of the proposed approach is validated on a 31-bar truss structure model and compared with a direct Metropolis-Hastings (MH) sampling method without model selection on the same numerical example. The results demonstrate that the proposed approach achieves higher efficiency and stability in identifying damage while maintaining robustness under different noise levels, providing an effective and generalizable framework for vibration-based structural health monitoring.

2. Theoretical Background

2.1. Bayesian Theory for Model Updating

Bayesian theory is a widely recognized method for constructing probability models based on collected data, comprising a prior distribution, a likelihood function, and the evidence [33]. The measured responses, processed through stochastic subspace identification [34], the Hilbert-Huang transform [35], NExT-ITD [36], etc., yield the $r$-th circular natural frequency and mode shape, collected as $D = \{\tilde{\omega}_r, \tilde{\varphi}_r\}$. Suppose the vector $\theta = \{\theta_1, \theta_2, \ldots, \theta_n\}$ collects the unknown parameters, where $n$ is the number of unknown structural parameters. The predicted $r$-th circular natural frequency and mode shape obtained from $\theta$ are denoted $\omega_r(\theta)$ and $\varphi_r(\theta)$. The errors between the measured and the predicted modal data are $\varepsilon = \{\varepsilon_\omega, \varepsilon_\varphi\}$, where
$\varepsilon_{\omega,r} = \dfrac{\tilde{\omega}_r - \omega_r(\theta)}{\tilde{\omega}_r}$ (1)
$\varepsilon_{\varphi,r} = \dfrac{\left\lVert \tilde{\varphi}_r - \varphi_r(\theta) \right\rVert}{\left\lVert \tilde{\varphi}_r \right\rVert}$ (2)
Assuming the normalized errors $\varepsilon$ follow a zero-mean Gaussian distribution with variance $\sigma^2$, the following equations can be given [37,38,39]:
$p(\tilde{\omega}_r \mid \theta, M_j) = \left(2\pi\sigma^2\right)^{-0.5} \exp\!\left[-\dfrac{1}{2\sigma^2}\left(\dfrac{\tilde{\omega}_r - \omega_r(\theta)}{\tilde{\omega}_r}\right)^2\right]$ (3)
$p(\tilde{\varphi}_r \mid \theta, M_j) = \left(2\pi\sigma^2\right)^{-0.5} \exp\!\left[-\dfrac{1}{2\sigma^2}\,\dfrac{\left\lVert \tilde{\varphi}_r - \varphi_r(\theta) \right\rVert^2}{\left\lVert \tilde{\varphi}_r \right\rVert^2}\right]$ (4)
where $M_j$ denotes the selected model class. Thus, the likelihood function in Bayesian inference can be expressed as the product of Equations (3) and (4) over the $m$ measured modes:
$p(D \mid \theta, M_j) = \prod_{r=1}^{m} p(\tilde{\omega}_r \mid \theta, M_j)\, p(\tilde{\varphi}_r \mid \theta, M_j) = \left(2\pi\sigma^2\right)^{-m} \exp\!\left(-\dfrac{J(\theta)}{2\sigma^2}\right)$ (5)
where
$J(\theta) = \sum_{r=1}^{m} \left[ \left(\dfrac{\tilde{\omega}_r - \omega_r(\theta)}{\tilde{\omega}_r}\right)^2 + \dfrac{\left\lVert \tilde{\varphi}_r - \varphi_r(\theta) \right\rVert^2}{\left\lVert \tilde{\varphi}_r \right\rVert^2} \right]$ (6)
According to Bayesian inference, the posterior probability density function (PDF) of the uncertain parameters can then be obtained as
$p(\theta \mid D, M_j) = \dfrac{p(D \mid \theta, M_j)\, p(\theta \mid M_j)}{p(D \mid M_j)}$ (7)
where $p(\theta \mid M_j)$ is the prior distribution. When accurate prior information cannot be obtained, adopting a uniform prior distribution effectively avoids the subjective error caused by assuming an inaccurate prior. $p(D \mid M_j)$ is the evidence; it is a normalizing constant that makes the posterior PDF integrate to unity over the parameter space. In summary, the posterior can be written as
$p(\theta \mid D, M_j) \propto \left(2\pi\sigma^2\right)^{-m} \exp\!\left(-\dfrac{J(\theta)}{2\sigma^2}\right)$ (8)
It is often difficult to solve for the posterior distribution directly. In practice, the distribution of the uncertain parameters is characterized by an MCMC sampling algorithm, and either the most probable value (MPV) or the mean value is taken as the identified value of the uncertain parameters. Both estimators are derived from the posterior distribution approximated by the MCMC samples. After running the MCMC algorithm, a large collection of samples is obtained that approximates the posterior probability distribution of the uncertain parameters; based on these samples, the posterior density can be reconstructed (e.g., via kernel density estimation), from which different point estimators are extracted. The posterior mean is the arithmetic average of all MCMC samples, while the MPV is the point at which the reconstructed posterior density attains its maximum, representing the parameter value with the highest posterior probability. In this study, the mean value is used as the identified parameter estimate.
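As an illustration, the following minimal Python sketch shows how the posterior of Equation (8) could be sampled with a random-walk Metropolis-Hastings algorithm and how the posterior mean is extracted. The forward function `predict_modal`, the step size, and the sample count are hypothetical placeholders, not the exact implementation used in this study.

```python
import numpy as np

def log_posterior(theta, sigma2, omega_meas, phi_meas, predict_modal):
    """Unnormalized log-posterior of Eq. (8) under a uniform prior.

    `predict_modal(theta)` is a hypothetical user-supplied function that
    returns (omega_pred, phi_pred) for a candidate parameter vector.
    """
    omega_pred, phi_pred = predict_modal(theta)
    # Misfit J(theta) of Eq. (6): frequency and mode-shape residuals.
    J = np.sum(((omega_meas - omega_pred) / omega_meas) ** 2)
    for r in range(len(omega_meas)):
        J += (np.linalg.norm(phi_meas[r] - phi_pred[r]) ** 2
              / np.linalg.norm(phi_meas[r]) ** 2)
    m = len(omega_meas)
    return -m * np.log(2.0 * np.pi * sigma2) - J / (2.0 * sigma2)

def metropolis_hastings(log_post, theta0, n_samples=10_000, step=0.02, rng=None):
    """Random-walk Metropolis-Hastings sampler for the posterior of Eq. (8)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = np.empty((n_samples, theta.size))
    for s in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
            theta, lp = proposal, lp_prop
        samples[s] = theta
    return samples

# The posterior mean used as the identified value is simply
# samples[burn_in:].mean(axis=0); the MPV can be read off a kernel
# density estimate of the same samples.
```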

2.2. ML-Based Candidate Parameters Generation

In Equation (8), the model class $M_j$ should be determined by model selection before solving the posterior function. Model selection is usually conducted by computing the evidence $p(D \mid M_j)$ to evaluate the reliability of a given model. However, when dealing with high-dimensional posterior functions and an unknown model, it is extremely difficult to exhaustively enumerate and compare all possible models, making it challenging to determine the unknown parameters that need to be updated [40]. Some ML-based methods attempt to determine both the model and the unknown parameters directly by training on high-dimensional models. Nevertheless, such approaches are prone to overfitting and may yield inaccurate parameter estimates [41,42,43].
In this paper, LogitBoost is implemented with shallow decision trees (maximum 10 splits) and 100 boosting iterations to rank all uncertain parameters $\theta = \{\theta_1, \theta_2, \ldots, \theta_n\}$ according to the measured data $D = \{\tilde{\omega}_r, \tilde{\varphi}_r\}$. These hyperparameters are selected following common boosting practice and prior empirical guidelines for weak learners [44]. The top $K$ parameters are then carried forward to model selection.
To construct the training set for model selection, $N$ synthetic samples are first generated under various random parameter configurations. For each sample, the model is perturbed by assigning random variations to 1 to $K$ components of the parameter vector $\theta$, while keeping the remaining components unchanged. The corresponding modal values $D$ are extracted to form the observation dataset, which represents the modal response of the model under different parameter conditions.
During training, the LogitBoost algorithm iteratively fits weak decision trees $h_t(D)$ to the pseudo-residuals of the logistic loss. The ensemble model is updated as
$F_t(D) = F_{t-1}(D) + v\, h_t(D)$ (9)
where $v$ is the shrinkage factor controlling the learning rate. After $T$ boosting iterations, the final ensemble output is transformed into a score for each parameter through the logistic function:
$\mathrm{Score} = \dfrac{1}{1 + \exp\!\left(-F_T(D)\right)}$ (10)
The resulting scores quantify the likelihood that each uncertain model parameter $\theta_i$ contributes to the measured modal data $D$; the higher the score, the more likely parameter $\theta_i$ is damaged.
In the synthetic training dataset, the damage condition of each parameter is randomly generated within the specified range (0, 1). For each training sample, 1 to $K$ parameters are randomly selected, assigned as damaged, and labeled as 1, while the remaining parameters are labeled as 0.
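The following sketch illustrates one possible realization of the synthetic data generation and parameter scoring described above. Because scikit-learn does not ship a LogitBoost implementation, a gradient-boosted classifier with logistic loss is used here as a stand-in, and one binary classifier is trained per parameter; the helper `forward_model` and the hyperparameter values are assumptions for illustration only, not the exact configuration of this study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for LogitBoost

def make_training_set(n_samples, n_params, k_max, forward_model, rng):
    """Generate synthetic (modal features, damage labels) pairs.

    `forward_model(theta)` is a hypothetical function returning the flattened
    modal feature vector (frequencies + mode-shape components) for a given
    vector of reduction factors `theta`.
    """
    X = []                                               # modal features
    Y = np.zeros((n_samples, n_params), dtype=int)       # per-parameter damage labels
    for i in range(n_samples):
        theta = np.zeros(n_params)
        n_damaged = rng.integers(1, k_max + 1)           # 1 to K damaged parameters
        idx = rng.choice(n_params, size=n_damaged, replace=False)
        theta[idx] = rng.uniform(0.0, 1.0, size=n_damaged)  # reduction factors in (0, 1)
        Y[i, idx] = 1
        X.append(forward_model(theta))
    return np.asarray(X), Y

def score_parameters(X, Y, d_measured):
    """Train one boosted classifier per parameter and score the measured data."""
    scores = np.zeros(Y.shape[1])
    for i in range(Y.shape[1]):
        clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
        clf.fit(X, Y[:, i])
        # Probability that parameter i is damaged, given the measured modal data.
        scores[i] = clf.predict_proba(d_measured.reshape(1, -1))[0, 1]
    return scores  # higher score -> more likely damaged
```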

2.3. Model Selection Based on WAIC

After obtaining from the ML the likelihood that each uncertain model parameter $\theta_i$ is damaged, the top $K$ parameters are ranked as candidates. Each possible combination among these top $K$ parameters is regarded as a separate structural model $M_j$, with the maximum dimension of $M_j$ limited to $N_e$. To determine which model best explains the measured modal values, model selection is conducted using the WAIC [22], a fully Bayesian criterion that approximates the expected predictive accuracy of each $M_j$.
For a given model $M_j$, the WAIC is computed from the likelihood of the measured modal data $D$ evaluated at the MCMC samples $\theta_s$ drawn from the posterior $p(\theta \mid D, M_j)$:
$\mathrm{WAIC}_j = -2\left(\mathrm{lppd}_j - p_{\mathrm{WAIC},j}\right)$ (11)
where
$\mathrm{lppd}_j = \log\!\left(\dfrac{1}{S}\sum_{s=1}^{S} p\!\left(D \mid \theta_s, M_j\right)\right)$ (12)
is the log pointwise predictive density computed from the sampling results, and
$p_{\mathrm{WAIC},j} = \mathrm{Var}_s\!\left[\log p\!\left(D \mid \theta_s, M_j\right)\right]$ (13)
represents the effective number of parameters penalizing model complexity.
A smaller WAIC value indicates better generalization and higher predictive performance [45]. Therefore, the model with the minimum WAIC, denoted $M_\tau$, is selected as the most plausible representation of the structural Bayesian model, and the identified results from its samples are taken as the identified values of the uncertain model parameters. This approach provides a more robust alternative to classical evidence-based model selection methods, especially in high-dimensional parameter spaces where posterior estimation is computationally intractable.
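A compact sketch of the WAIC computation of Equations (11)-(13) from MCMC samples is given below; it assumes that the per-sample log-likelihoods $\log p(D \mid \theta_s, M_j)$ have already been stored during sampling, and the variable names are illustrative.

```python
import numpy as np

def waic_from_samples(log_lik_samples):
    """Compute WAIC of Eqs. (11)-(13) from per-sample log-likelihoods.

    `log_lik_samples` holds log p(D | theta_s, M_j) for each of the S
    posterior samples of one candidate model M_j.
    """
    log_lik = np.asarray(log_lik_samples)
    # lppd: log of the posterior-averaged likelihood (log-sum-exp for stability).
    lppd = np.logaddexp.reduce(log_lik) - np.log(log_lik.size)
    # Effective number of parameters: variance of the log-likelihood over samples.
    p_waic = np.var(log_lik, ddof=1)
    return -2.0 * (lppd - p_waic)

# Model selection: evaluate every candidate model and keep the minimum, e.g.
# best_model = min(candidate_models, key=lambda Mj: waic_from_samples(loglik[Mj]))
```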
It is worth noting that WAIC evaluates not only the goodness of fit but also the predictive uncertainty of each candidate model. Because the uncertainty penalty differs across models, it is uncommon for distinct damage scenarios to produce identical WAIC values even when their modal responses are highly correlated. In rare cases where multiple models yield nearly identical WAIC values, the criterion tends to favor the model whose posterior distribution exhibits greater stability and lower effective complexity. This behavior reflects the inherent preference of WAIC for parsimonious and well-identified models under correlated modal measurements.

3. Flowchart

The overall workflow of the proposed framework is illustrated in Figure 1, and a code sketch of the same workflow is given after the figure. First, the modal data $D$ are measured from the actual structure. They are used by the ML-based candidate parameter generation, which ranks the structural uncertain model parameters $\theta$. The top $K$ parameters with the highest scores are selected and combined into different Bayesian models $M_j$. Next, MCMC sampling is performed to estimate the posterior distribution of each model. The predictive performance of each model $M_j$ is then evaluated using the WAIC. Finally, the model $M_\tau$ with the smallest WAIC value is chosen as the most plausible representation of the current structural Bayesian model.
Figure 1. Flowchart of the proposed method.
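The workflow can be summarized by the following hedged Python sketch, in which `run_mcmc` and `waic_from_samples` stand for the MCMC sampler and WAIC routine of Section 2; the helper names and argument conventions are illustrative assumptions, not the exact code used in this study.

```python
from itertools import combinations
import numpy as np

def select_best_model(scores, K, N_e, run_mcmc, waic_from_samples):
    """End-to-end sketch of the proposed workflow (hypothetical helpers).

    `scores`   : ML scores for all uncertain parameters (Section 2.2)
    `run_mcmc` : assumed helper returning log-likelihood samples for a model
                 defined by a tuple of candidate parameter indices
    """
    # Step 1: keep the top-K ranked parameters as the candidate pool.
    pool = np.argsort(scores)[::-1][:K]
    # Step 2: enumerate every subset of the pool up to dimension N_e.
    candidates = [c for k in range(1, N_e + 1) for c in combinations(pool, k)]
    # Step 3: sample each candidate model and score it with WAIC.
    waic = {c: waic_from_samples(run_mcmc(c)) for c in candidates}
    # Step 4: the model with the minimum WAIC is the selected model M_tau.
    return min(waic, key=waic.get), waic
```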

4. Numerical Model

A numerical example is employed to verify the performance of the presented damage identification method. The example is a 2-D planar truss structure consisting of 31 bars that has been studied by many researchers [46,47,48], as shown in Figure 2. A randomly selected damage scenario with three damaged bars (No. 6, 22, and 27) is defined for the structure under different noise conditions. Both conventional damage identification based on the Bayesian method without model selection and the proposed ML-assisted Bayesian framework with WAIC-based model selection are applied to identify the structural damage under different noise levels.
Figure 2. A planar truss structure.
The finite element model of the planar truss structure is built using 2-D truss elements in Matlab, with 25 degrees of freedom in total. The model consists of 14 nodes and 31 bar elements. Node $n_1$ at the left end is fixed in the horizontal and vertical directions, while the right-end node $n_7$ is constrained in the vertical direction. All bars share the same elastic modulus of 70 GPa, density $\rho = 7850\ \mathrm{kg/m^3}$, and cross-sectional area $A = 2.0 \times 10^{-4}\ \mathrm{m^2}$. The system is assumed to be lightly damped, with a proportional damping ratio of $\zeta = 0.5\%$ for all modes.
The unknown parameters to be updated in this example are the relative reduction factors of the elasticity modulus of the randomly selected bars. It is supposed that there are three uncertain reduction factors of the elasticity modulus; thus, a total of three unknown parameters need to be identified, given the shared variance $\sigma^2$ introduced in Equation (8).
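To make the parameterization concrete, the sketch below shows how element-wise reduction factors could enter a forward modal analysis through a generalized eigenvalue problem. The element stiffness list `element_K` and mass matrix `M` are assumed to come from a separate FE assembly routine and are hypothetical here; this is a minimal sketch, not the Matlab model used in the study.

```python
import numpy as np
from scipy.linalg import eigh

def modal_analysis(element_K, M, reduction, n_modes=5):
    """Natural frequencies and mode shapes of a damaged model (illustrative sketch).

    `element_K` : list of global-size stiffness contributions, one per bar
                  (hypothetical output of an FE assembly routine)
    `M`         : global mass matrix
    `reduction` : reduction factor of the elasticity modulus per bar; a damaged
                  bar i contributes (1 - reduction[i]) * element_K[i]
    """
    K = sum((1.0 - r) * Ke for r, Ke in zip(reduction, element_K))
    # Generalized eigenvalue problem K * phi = omega^2 * M * phi.
    eigvals, eigvecs = eigh(K, M)
    omega = np.sqrt(eigvals[:n_modes])   # circular natural frequencies
    phi = eigvecs[:, :n_modes]           # corresponding mode shapes
    return omega, phi
```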
The assigned exact values for the three reduction factors are given in the third column of Table 1, and the model specified by those exact values is regarded as the reference model. Following Qian and Zheng [21], five sensors are assumed to be deployed at nodes $n_2$, $n_4$, $n_6$, $n_{10}$, and $n_{12}$ to capture the measurements of the reference model and the responses of the candidate models parameterized by the unknown parameters. The first five frequencies and the modal vectors at the five observation points compose the simulated measurement data $D$. Considering the high dimension of the truss model parameters, the proposed method is initialized with $S = 10{,}000$, $N_e = 4$, and $K = 8$. A preliminary robustness assessment indicated that the LogitBoost ranking becomes stable once the number of synthetic training samples exceeds approximately 3000; therefore, the number of randomly generated training samples is set to $N_{\mathrm{train}} = 5000$.
Table 1. Identified damaged bar locations and reduction factors for different noise levels.
The robustness and reliability of the presented method for damage identification in a noisy environment are first tested in this example. To this end, noise at different levels is added to the simulated measurements, resulting in the contaminated measurement $D_{i,j} = \tilde{D}_{i,j}\,(1 + \eta \cdot \mathrm{rand})$ [18], where $D_{i,j}$ is the $ij$-th noisy modal response, $\tilde{D}_{i,j}$ is the $ij$-th measured modal component without noise, and $\eta$ is the noise level; $\eta = 5\%$, $10\%$, and $20\%$ are considered in this study.
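A minimal sketch of this noise contamination is given below; the distribution assumed for `rand` (a standard normal draw per component) is an assumption of the sketch, since the source does not state it explicitly.

```python
import numpy as np

def add_noise(D_clean, eta, rng=None):
    """Contaminate each modal component as D_ij = D~_ij * (1 + eta * rand)."""
    rng = np.random.default_rng() if rng is None else rng
    # rand is modeled here as one standard normal draw per modal component.
    return D_clean * (1.0 + eta * rng.standard_normal(D_clean.shape))

# Example usage for the 5% noise case: D_5pct = add_noise(D_clean, eta=0.05)
```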
After the measured data $D$ are input, as described in Section 2, all parameters of the structure are scored by the ML. In this numerical example, the 31 bar reduction factors are scored using 5000 training samples. Under the $\eta = 5\%$ noise condition, the scoring results for the full set of 31 uncertain parameters, together with the top $K$ selection, are shown in Figure 3.
Figure 3. Top K score by ML and the measured modal value.
As shown in Figure 3, although bar No. 6 is damaged, its score indicates that its correlation with the measured modal values is weaker than that of bar No. 25 and comparable to that of bar No. 7. Therefore, performing damage identification directly with ML or Bayesian approaches without model selection would result in significant errors or reduced computational efficiency. For comparison, direct Bayesian updating and ML regression results are presented in Section 5; both show larger errors than the proposed framework. Based on the top $K$ scored uncertain parameters in Figure 3, a family of Bayesian candidate models is instantiated by enumerating all subsets of size $k \le N_e$ within the pool containing the top $K$ parameters. For each candidate model $M_j$ formed from parameters in the pool, the posterior $p(\theta \mid D, M_j)$ is sampled and its WAIC is calculated. The model with the minimum WAIC is then selected as the final model. The WAIC results for models of dimension one to three are shown in Figure 4.
Figure 4. Model selection with WAIC algorithm.
In Figure 4, each point represents one Bayesian model; the three axes correspond to the bar numbers included in the model, and the color scale indicates the corresponding WAIC value. Models with lower WAIC values are plotted in cooler colors, indicating better predictive performance. The different point symbols represent the number of uncertain parameters in the model (only models with one, two, or three uncertain parameters are shown in Figure 4). The yellow star marks the model with the minimum WAIC. The combination of bars No. 6, 22, and 27 exhibits the lowest WAIC, indicating that the model containing these damaged bars is the Bayesian model closest to the true scenario. This result agrees well with the true damage scenario, demonstrating that the proposed method can accurately locate multiple damages within a large structural system. The convergence of the posterior mean of the best model during the WAIC evaluation is shown in Figure 5.
Figure 5. Convergence of the posterior mean of the best model.
As shown in Figure 5, the mean values of the reduction factors gradually stabilize after approximately 300 iterations, indicating that the Markov chain has reached a stationary state. The three reduction factors of bars No. 6, 22, and 27 converge to their true values with minor fluctuations, confirming the stability and robustness of the sampling process.
The full identification results and errors of the damaged structure obtained by the proposed method for the three noise levels are presented in Table 1 and Figure 6. The results show that the proposed method accurately locates and estimates the uncertain parameters for damaged bars 6, 22, and 27. Bar 22 exhibits the most stable estimates, with deviations within 1% across all noise levels, while bar 27 shows slightly larger variations (3~9%), suggesting moderate sensitivity to high noise contamination. This is because the reduction factor of bar 22 has the most significant correlation with the measured modal values, as shown in Figure 3. The WAIC values indicate that the selected model achieves its optimal balance between goodness of fit and complexity under the different noise conditions.
Figure 6. Identified reduction factors under different noise levels.
Overall, these results demonstrate that the proposed method can effectively locate and quantify damages with high accuracy and robustness, maintaining reliable performance even under high noise conditions.

5. Comparison with MCMC&ML Without Model Selection

To further evaluate the performance of the proposed framework, two baseline approaches were implemented for comparison: (a) a direct MCMC sampling method operating in the full 31-dimensional parameter space, and (b) an ML direct regression method that predicts all reduction factors simultaneously without model selection.
In both baselines, all 31 structural parameters are treated as uncertain, and Bayesian inference or ML regression is directly applied to estimate the reduction factors under a 5% noise level. The corresponding identification results are presented in Figure 7.
Figure 7. Comparison of the exact and identified damage by MCMC&ML.
As shown in Figure 7, the 31D MCMC method is able to roughly locate the damaged regions; however, the posterior mean estimates exhibit substantial dispersion and lack sparsity, even under relatively low noise. The sampling chain shows slow convergence in the high-dimensional space, leading to inflated posterior uncertainty. The ML direct regression baseline performs even less favorably, producing significant estimation errors and many false alarms due to the ill-posed and highly nonlinear mapping between modal features and distributed element-wise damage parameters.
These results highlight the limitations of applying MCMC sampling or ML regression directly to high-dimensional parameter spaces. In contrast, the proposed ML-assisted WAIC-based Bayesian framework (Figure 6) demonstrates substantially improved accuracy, stability, and sparsity in identifying both the true damage location and its severity. This confirms the necessity of performing parameter ranking and model selection prior to Bayesian updating and underscores the advantage of integrating ML and Bayesian inference for high-dimensional structural damage identification.

6. Limitation and Future Direction

Although the proposed ML-assisted Bayesian model selection framework has demonstrated high accuracy and robustness in identifying structural damage, several limitations should be noted. First, the current approach assumes that the structural behavior remains within the linear elastic range, and thus the influence of nonlinear responses or contact effects is not considered. Future studies could extend the method to nonlinear or hysteretic systems [49] to improve its applicability to real-world structures under strong loading conditions. Second, the present study used simulated modal parameters with artificial noise, whereas actual field data often contain additional uncertainties arising from environmental variations, boundary conditions, and sensor errors. Experimental validation using real vibration measurements will therefore be an important next step. Third, the WAIC-based model comparison currently relies on a limited number of candidate models generated from the top-ranked parameters; determining an appropriate size for this candidate pool would further enhance identification reliability and calculation speed. In addition, the machine learning localizer used a single LogitBoost classifier for feature-to-damage mapping; integrating deep neural networks [50] or hybrid learning models [51] may improve predictive power and generalization. Finally, computational efficiency remains a concern for large-scale structures with high-dimensional parameter spaces. Future work could focus on parallelized MCMC sampling, surrogate-based Bayesian inference, or active learning strategies to reduce computational cost while maintaining accuracy.

7. Conclusions

This study presented an ML-assisted Bayesian framework for structural damage localization and model selection, integrating machine learning–based probability ranking with Bayesian inference and WAIC evaluation. By combining the interpretability of Bayesian model selection with the efficiency of machine learning, the proposed method effectively reduced the search space of uncertain parameters while maintaining accurate posterior estimation. The results demonstrated that the LogitBoost classifier successfully identified the most probable damaged parameters. The model with the minimum WAIC value was selected as the optimal model, indicating a reliable balance between model fit and complexity.
In comparison with traditional high-dimensional Bayesian sampling, the proposed hybrid framework achieved faster convergence and more stable posterior estimates, while maintaining robustness under different noise conditions. The WAIC-based model comparison also provided a principled criterion for selecting the most plausible structural model, avoiding overfitting and unnecessary complexity. These findings highlight the potential of combining stochastic inference with data-driven learning to enhance both the efficiency and accuracy of model updating.
Overall, this framework provides a robust and computationally efficient tool for model updating and damage detection in high-dimensional structures. Future work will focus on extending this approach to nonlinear systems, incorporating field-measured vibration data, and improving computational speed through advanced sampling algorithms.

Author Contributions

Conceptualization, K.W. and Y.K.; Methodology, K.W.; Software, K.W.; Writing—original draft, K.W.; Writing—review and editing, Y.K.; Visualization, K.W.; Supervision, Y.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Farrar, C.R.; Worden, K. Structural Health Monitoring: A Machine Learning Perspective; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  2. Sohn, H.; Farrar, C.R.; Hemez, F.M.; Shunk, D.D.; Stinemates, D.W.; Nadler, B.R.; Czarnecki, J.J. A Review of Structural Health Monitoring Literature: 1996–2001; Los Alamos National Laboratory: Los Alamos, NM, USA, 2003; Volume 1, pp. 1–7.
  3. Zimmerman, D.C.; Yap, K.; Hasselman, T. Evolutionary Approach for Model Refinement. Mech. Syst. Signal Process. 1999, 13, 609–625. [Google Scholar] [CrossRef]
  4. Deng, L.; Cai, C.S. Bridge model updating using response surface method and genetic algorithm. J. Bridge Eng. 2010, 15, 553–564. [Google Scholar] [CrossRef]
  5. Marwala, T. (Ed.) Finite-element-model Updating Using Particle-swarm Optimization. In Finite-Element-Model Updating Using Computional Intelligence Techniques: Applications to Structural Dynamics; Springer: London, UK, 2010; pp. 67–84. [Google Scholar] [CrossRef]
  6. Levin, R.I.; Lieven, N.A.J. Dynamic Finite Element Model Updating Using Simulated Annealing and Genetic Algorithms. Mech. Syst. Signal Process. 1998, 12, 91–120. [Google Scholar] [CrossRef]
  7. Beck, J.L.; Katafygiotis, L.S. Updating models and their uncertainties. I: Bayesian statistical framework. J. Eng. Mech. 1998, 124, 455–461. [Google Scholar] [CrossRef]
  8. Kouchmeshky, B.; Aquino, W.; Billek, A.E. Structural damage identification using co-evolution and frequency response functions. Struct. Control. Health Monit. 2008, 15, 162–182. [Google Scholar] [CrossRef]
  9. Zhang, E.L.; Feissel, P.; Antoni, J. A comprehensive Bayesian approach for model updating and quantification of modeling errors. Probabilistic Eng. Mech. 2011, 26, 550–560. [Google Scholar] [CrossRef]
  10. Yuan, C.; Druzdzel, M.J. Importance sampling algorithms for Bayesian networks: Principles and performance. Math. Comput. Model. 2006, 43, 1189–1207. [Google Scholar] [CrossRef]
  11. Ching, J.; Muto, M.; Beck, J.L. Structural Model Updating and Health Monitoring with Incomplete Modal Data Using Gibbs Sampler. Comput.-Aided Civ. Infrastruct. Eng. 2006, 21, 242–257. [Google Scholar] [CrossRef]
  12. Beck, J.L.; Au, S.-K. Bayesian Updating of Structural Models and Reliability using Markov Chain Monte Carlo Simulation. J. Eng. Mech. 2002, 128, 380–391. [Google Scholar] [CrossRef]
  13. Ching, J.; Chen, Y.-C. Transitional Markov Chain Monte Carlo Method for Bayesian Model Updating, Model Class Selection, and Model Averaging. J. Eng. Mech. 2007, 133, 816–832. [Google Scholar] [CrossRef]
  14. Cheung, S.H.; Beck, J.L. Bayesian Model Updating Using Hybrid Monte Carlo Simulation with Application to Structural Dynamic Models with Many Uncertain Parameters. J. Eng. Mech. 2009, 135, 243–255. [Google Scholar] [CrossRef]
  15. Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M.I.; Adhikari, S. Finite element model updating using the shadow hybrid Monte Carlo technique. Mech. Syst. Signal Process. 2015, 52–53, 115–132. [Google Scholar] [CrossRef]
  16. Chen, T.; Fox, E.; Guestrin, C. Stochastic Gradient Hamiltonian Monte Carlo. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 1683–1691. [Google Scholar]
  17. Zeng, J.; Yan, W.-J. High-Dimensional Bayesian inference for model updating with neural likelihood approximator powered by dimensionality-reducing flow-based generative model. Mech. Syst. Signal Process. 2025, 231, 112688. [Google Scholar] [CrossRef]
  18. Capellari, G.; Chatzi, E.; Mariani, S. Structural Health Monitoring Sensor Network Optimization through Bayesian Experimental Design. ASCE-ASME J. Risk Uncertain. Eng. Syst. Part A Civ. Eng. 2018, 4, 04018016. [Google Scholar] [CrossRef]
  19. Torzoni, M.; Manzoni, A.; Mariani, S. Enhancing Bayesian model updating in structural health monitoring via learnable mappings. arXiv 2025, arXiv:2405.13648. [Google Scholar] [CrossRef]
  20. Skilling, J. Nested sampling for general Bayesian computation. Bayesian Anal. 2006, 1, 833–859. [Google Scholar] [CrossRef]
  21. Qian, F.; Zheng, W. An evolutionary nested sampling algorithm for Bayesian model updating and model selection using modal measurement. Eng. Struct. 2017, 140, 298–307. [Google Scholar] [CrossRef]
  22. Watanabe, S. A widely applicable Bayesian information criterion. J. Mach. Learn. Res. 2013, 14, 867–897. [Google Scholar]
  23. Sohn, H. Effects of environmental and operational variability on structural health monitoring. Philos. Trans. R. Soc. A 2007, 365, 539–560. [Google Scholar] [CrossRef]
  24. Indhu, R.; Sundar, G.R.; Parveen, H.S. A Review of Machine Learning Algorithms for vibration-based SHM and vision-based SHM. In Proceedings of the 2022 Second International Conference on Artificial Intelligence and Smart Energy (ICAIS), Coimbatore, India, 23–25 February 2022; pp. 418–422. [Google Scholar] [CrossRef]
  25. Figueiredo, E.; Park, G.; Farrar, C.R.; Worden, K.; Figueiras, J. Machine learning algorithms for damage detection under operational and environmental variability. Struct. Health Monit. 2011, 10, 559–572. [Google Scholar] [CrossRef]
  26. Worden, K.; Burrows, A.P. Optimal sensor placement for fault detection. Eng. Struct. 2001, 23, 885–901. [Google Scholar] [CrossRef]
  27. Ierimonti, L.; Cavalagli, N.; Venanzi, I.; Garcia-Macias, E.; Ubertini, F. A transfer Bayesian learning methodology for structural health monitoring of monumental structures. Eng. Struct. 2021, 247, 113089. [Google Scholar] [CrossRef]
  28. Chen, H.; Huang, B.; Zhang, H.; Xue, K.; Sun, M.; Wu, Z. An efficient Bayesian method with intrusive homotopy surrogate model for stochastic model updating. Comput.-Aided Civ. Infrastruct. Eng. 2024, 39, 2500–2516. [Google Scholar] [CrossRef]
  29. Yu, X.; Li, X.; Bai, Y. Evaluating maximum inter-story drift ratios of building structures using time-varying models and Bayesian filters. Soil Dyn. Earthq. Eng. 2022, 162, 107496. [Google Scholar] [CrossRef]
  30. Otero, J.; Sánchez, L. Induction of descriptive fuzzy classifiers with the Logitboost algorithm. Soft Comput. 2006, 10, 825–835. [Google Scholar] [CrossRef]
  31. Kim, K.; Seo, M.; Kang, H.; Cho, S.; Kim, H.; Seo, K.-S. Application of LogitBoost Classifier for Traceability Using SNP Chip Data. PLoS ONE 2015, 10, e0139685. [Google Scholar] [CrossRef]
  32. Cuneyitoglu Ozkul, M.; Saranli, A.; Yazicioglu, Y. Acoustic surface perception from naturally occurring step sounds of a dexterous hexapod robot. Mech. Syst. Signal Process. 2013, 40, 178–193. [Google Scholar] [CrossRef]
  33. Bernardo, J.M.; Smith, A.F.M. Bayesian Theory; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 1994; pp. 240–376. [Google Scholar] [CrossRef]
  34. Peeters, B.; De Roeck, G. Reference-Based Stochastic Subspace Identification for Output-Only Modal Analysis. Mech. Syst. Signal Process. 1999, 13, 855–878. [Google Scholar] [CrossRef]
  35. Xu, Y.L.; Chen, S.W.; Zhang, R.C. Modal identification of Di Wang Building under Typhoon York using the Hilbert–Huang transform method. Struct. Des. Tall Spec. Build. 2003, 12, 21–47. [Google Scholar] [CrossRef]
  36. Ibrahim, S.R.; Pappa, R.S. Large Modal Survey Testing Using the Ibrahim Time Domain Identification Technique. J. Spacecr. Rocket. 1982, 19, 459–465. [Google Scholar] [CrossRef]
  37. Christodoulou, K.; Ntotsios, E.; Papadimitriou, C.; Panetsos, P. Structural model updating and prediction variability using Pareto optimal models. Comput. Methods Appl. Mech. Eng. 2008, 198, 138–149. [Google Scholar] [CrossRef]
  38. Christodoulou, K.; Papadimitriou, C. Structural identification based on optimally weighted modal residuals. Mech. Syst. Signal Process. 2007, 21, 4–23. [Google Scholar] [CrossRef]
  39. Zhang, F.-L.; Wei, J.-Y.; Ni, Y.-C.; Lam, H.-F. Efficient Bayesian model updating with surrogate model of a high-rise building based on MCMC and ambient data. Eng. Struct. 2025, 339, 120656. [Google Scholar] [CrossRef]
  40. Johnson, V.E.; Rossell, D. Bayesian Model Selection in High-Dimensional Settings. J. Am. Stat. Assoc. 2012, 107, 649–660. [Google Scholar] [CrossRef] [PubMed]
  41. Cuong-Le, T.; Nghia-Nguyen, T.; Khatir, S.; Trong-Nguyen, P.; Mirjalili, S.; Nguyen, K.D. An efficient approach for damage identification based on improved machine learning using PSO-SVM. Eng. Comput. 2022, 38, 3069–3084. [Google Scholar] [CrossRef]
  42. Lee, Y.; Kim, H.; Min, S.; Yoon, H. Structural damage detection using deep learning and FE model updating techniques. Sci. Rep. 2023, 13, 18694. [Google Scholar] [CrossRef]
  43. Pei, X.-Y.; Hou, Y.; Huang, H.-B.; Zheng, J.-X. A Deep Learning-Based Structural Damage Identification Method Integrating CNN-BiLSTM-Attention for Multi-Order Frequency Data Analysis. Buildings 2025, 15, 763. [Google Scholar] [CrossRef]
  44. Friedman, J.; Hastie, T.; Tibshirani, R. Additive logistic regression: A statistical view of boosting (with discussion and a rejoinder by the authors). Ann. Stat. 2000, 28, 337–407. [Google Scholar] [CrossRef]
  45. Gelman, A.; Hwang, J.; Vehtari, A. Understanding predictive information criteria for Bayesian models. Stat. Comput. 2014, 24, 997–1016. [Google Scholar] [CrossRef]
  46. Messina, A.; Williams, E.J.; Contursi, T. Structural Damage Detection by a Sensitivity and Statistical-Based Method. J. Sound Vib. 1998, 216, 791–808. [Google Scholar] [CrossRef]
  47. Seyedpoor, S.M. A two stage method for structural damage detection using a modal strain energy based index and particle swarm optimization. Int. J. Non-Linear Mech. 2012, 47, 1–8. [Google Scholar] [CrossRef]
  48. Nobahari, M.; Seyedpoor, S. An efficient method for structural damage localization based on the concepts of flexibility matrix and strain energy of a structure. Struct. Eng. Mech. 2013, 46, 231–244. [Google Scholar] [CrossRef]
  49. Morris, K.A. What is Hysteresis? Appl. Mech. Rev. 2012, 64, 050801. [Google Scholar] [CrossRef]
  50. Sze, V.; Chen, Y.-H.; Yang, T.-J.; Emer, J.S. Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Proc. IEEE 2017, 105, 2295–2329. [Google Scholar] [CrossRef]
  51. Mossavar-Rahmani, F.; Larson-Daugherty, C. Supporting the Hybrid Learning Model: A New Proposition. MERLOT J. Online Learn. Teach. 2007, 3, 67–78. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
