# Mechanical Reliability Assessment by Ensemble Learning


## Abstract


## 1. Introduction

## 2. Failure Probability Estimation: ML-Based Framework

## 3. MCS to Prepare the Training Data

#### 3.1. General MCS to Estimate CFPs

#### 3.2. Importance Sampling to Estimate Very Small CFPs


## 4. Train the ML Model by Ensemble Learning

- The bagging method builds several instances of a black-box estimator from bootstrap replicates of the original training set and then aggregates their individual predictions into a final prediction. It reduces the variance of a base estimator (e.g., a decision tree) by introducing randomization into its construction. Random Forest is a representative bagging method.
- Boosting is a widely used ensemble approach that combines a set of weak classifiers into a strong classifier by iteratively adjusting the weight distribution of the training samples and learning base classifiers from the reweighted set. At each round, the weights of misclassified samples are increased so that subsequent base classifiers focus on them. This is equivalent to inferring classifiers from training data sampled from the original data set according to the weight distribution. Gradient Boosting is among the most widely used boosting methods.
- Stacking trains a learning algorithm to combine the predictions of several other learning algorithms. First, all of the base algorithms are trained on the available data; then, a combiner algorithm is trained to make a final prediction using the base algorithms' predictions as inputs. Stacking typically performs better than any of the individual trained models.
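The three strategies above can be sketched with scikit-learn. This is an illustrative toy setup, not the paper's implementation: the synthetic dataset and all hyperparameter values here are assumptions.

```python
# Minimal sketch of bagging, boosting, and stacking with scikit-learn.
# Dataset and hyperparameters are illustrative, not those of the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging: Random Forest averages trees grown on bootstrap replicates.
rf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X_tr, y_tr)

# Boosting: each new tree focuses on the errors of the ensemble so far.
gb = GradientBoostingClassifier(n_estimators=20, random_state=0).fit(X_tr, y_tr)

# Stacking: a meta-learner combines the base learners' predictions.
stack = StackingClassifier(estimators=[('rf', rf), ('gb', gb)],
                           final_estimator=LogisticRegression()).fit(X_tr, y_tr)

for name, model in [('RF', rf), ('GB', gb), ('Stacking', stack)]:
    print(name, model.score(X_te, y_te))
```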

#### 4.1. Random Forest

#### 4.2. Gradient Boosting

#### 4.3. Stacking

## 5. Numerical Examples

#### 5.1. Three Test Examples

#### 5.2. A Benchmark Example: 10-DOF Duffing Oscillator

## 6. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

- Bertsche, B. Reliability in Automotive and Mechanical Engineering: Determination of Component and System Reliability; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008; pp. 1–2.
- Au, S.K.; Beck, J.L. First excursion probabilities for linear systems by very efficient importance sampling. Probabilistic Eng. Mech. **2001**, 16, 193–207.
- Valdebenito, M.A.; Jensen, H.A.; Labarca, A.A. Estimation of first excursion probabilities for uncertain stochastic linear systems subject to Gaussian load. Comput. Struct. **2014**, 138, 36–48.
- Lee, S.H.; Kwak, B.M. Response surface augmented moment method for efficient reliability analysis. Struct. Saf. **2006**, 28, 261–272.
- Hadidi, A.; Azar, B.F.; Rafiee, A. Efficient response surface method for high-dimensional structural reliability analysis. Struct. Saf. **2017**, 68, 15–27.
- Pan, Q.; Dias, D. An efficient reliability method combining adaptive support vector machine and Monte Carlo simulation. Struct. Saf. **2017**, 67, 85–95.
- Dubourg, V.; Sudret, B.; Deheeger, F. Metamodel-based importance sampling for structural reliability analysis. Probabilistic Eng. Mech. **2013**, 33, 47–57.
- Cardoso, J.B.; de Almeida, J.R.; Dias, J.M.; Coelho, P.G. Structural reliability analysis using Monte Carlo simulation and neural networks. Adv. Eng. Softw. **2008**, 39, 505–513.
- Su, G.; Peng, L.; Hu, L. A Gaussian process-based dynamic surrogate model for complex engineering structural reliability analysis. Struct. Saf. **2017**, 68, 97–109.
- Nitze, I.; Schulthess, U.; Asche, H. Comparison of machine learning algorithms random forest, artificial neural network and support vector machine to maximum likelihood for supervised crop type classification. In Proceedings of the 4th GEOBIA, Rio de Janeiro, Brazil, 7–9 May 2012; Volume 79, pp. 35–40.
- Kang, S.C.; Koh, H.M.; Choo, J.F. An efficient response surface method using moving least squares approximation for structural reliability analysis. Probabilistic Eng. Mech. **2010**, 25, 365–371.
- Breiman, L. Random forests. Mach. Learn. **2001**, 45, 5–32.
- Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. **2001**, 29, 1189–1232.
- Jensen, J.J. Fatigue damage estimation in nonlinear systems using a combination of Monte Carlo simulation and the First Order Reliability Method. Mar. Struct. **2015**, 44, 203–210.
- Der Kiureghian, A. The geometry of random vibrations and solutions by FORM and SORM. Probabilistic Eng. Mech. **2000**, 15, 81–90.
- Phoon, K.K.; Huang, S.P.; Quek, S.T. Simulation of second-order processes using Karhunen–Loeve expansion. Comput. Struct. **2002**, 80, 1049–1060.
- Wolpert, D.H. Stacked generalization. Neural Netw. **1992**, 5, 241–259.
- Friedman, J.; Hastie, T.; Tibshirani, R. The Elements of Statistical Learning; Springer Series in Statistics: New York, NY, USA, 2001; pp. 355–360.
- Mrabet, E.; Guedri, M.; Ichchou, M.N.; Ghanmi, S. Stochastic structural and reliability based optimization of tuned mass damper. Mech. Syst. Signal Process. **2015**, 60, 437–451.
- Schuëller, G.I.; Pradlwarter, H.J. Benchmark study on reliability estimation in higher dimensions of structural systems—An overview. Struct. Saf. **2007**, 29, 167–182.
- Au, S.K.; Ching, J.; Beck, J.L. Application of subset simulation methods to reliability benchmark problems. Struct. Saf. **2007**, 29, 183–193.
- Jensen, H.A.; Valdebenito, M.A. Reliability analysis of linear dynamical systems using approximate representations of performance functions. Struct. Saf. **2007**, 29, 222–237.
- Katafygiotis, L.S.; Cheung, S.H. Application of spherical subset simulation method and auxiliary domain method on a benchmark reliability study. Struct. Saf. **2007**, 29, 194–207.
- Pradlwarter, H.J.; Schueller, G.I.; Koutsourelakis, P.S.; Charmpis, D.C. Application of line sampling simulation method to reliability benchmark problems. Struct. Saf. **2007**, 29, 208–221.

**Figure 3.** Illustration of a symmetric elementary failure region [3].

**Figure 5.** An illustrative node splitting process. The symbol ‘?’ means that the variable used to carry out the next splitting needs to be determined.

**Table 1.** Pseudo-code for Stacking [17].

Input: Training data $D={\{({\mathbf{x}}_{i},{y}_{i})\}}_{i=1}^{m}$, ${\mathbf{x}}_{i}\in \mathcal{X}$, ${y}_{i}\in \mathcal{Y}$; a Stacking algorithm.

Output: A meta-learner $H$.

Step 1: Induce $T$ base learners ${h}_{1},{h}_{2},\dots,{h}_{T}$ from the training set: for $t\leftarrow 1$ to $T$, learn a base learner ${h}_{t}$ from $D$.

Step 2: Construct a new dataset ${D}^{\prime}={\left\{({\mathbf{x}}_{i}^{\prime},{y}_{i})\right\}}_{i=1}^{m}$, where ${\mathbf{x}}_{i}^{\prime}=\left[{h}_{1}({\mathbf{x}}_{i}),{h}_{2}({\mathbf{x}}_{i}),\dots,{h}_{T}({\mathbf{x}}_{i})\right]$.

Step 3: Build a meta-learner $H$ from ${D}^{\prime}$ and output $H$.
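The three steps of the pseudo-code can be rendered directly in Python. This is an illustrative sketch with assumed base learners and meta-learner; in practice, $\mathbf{x}'_{i}$ is usually built from out-of-fold predictions to reduce overfitting, a refinement the basic pseudo-code omits.

```python
# Direct sketch of the Stacking pseudo-code: base learners and
# meta-learner are illustrative choices, not the paper's models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, random_state=0)

# Step 1: induce T base learners h_1, ..., h_T from the training set D.
base_learners = [DecisionTreeClassifier(max_depth=d, random_state=0).fit(X, y)
                 for d in (1, 2, 3)]

# Step 2: construct D' = {(x'_i, y_i)}, where x'_i = [h_1(x_i), ..., h_T(x_i)].
X_prime = np.column_stack([h.predict(X) for h in base_learners])

# Step 3: build the meta-learner H from D'.
H = LogisticRegression().fit(X_prime, y)

def stacked_predict(x):
    """Predict by feeding the base learners' outputs into H."""
    features = np.column_stack([h.predict(x) for h in base_learners])
    return H.predict(features)
```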

| Parameters | RF | GB | ETs |
|---|---|---|---|
| nTrees | 20 | 20 | 20 |
| nFeatures | 3 | 3 | 3 |
| maxFeatures | 2 | 2 | 2 |
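If implemented with scikit-learn, the settings above could map onto the estimator constructors roughly as follows. The mapping nTrees → `n_estimators` and maxFeatures → `max_features` is an assumption; nFeatures is the input dimension of the test problems, not a constructor argument.

```python
# Assumed mapping of the table's settings to scikit-learn estimators
# (illustrative; the paper's actual implementation is not specified here).
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier,
                              ExtraTreesClassifier)

n_trees, max_features = 20, 2   # nTrees and maxFeatures from the table
rf = RandomForestClassifier(n_estimators=n_trees, max_features=max_features)
gb = GradientBoostingClassifier(n_estimators=n_trees, max_features=max_features)
ets = ExtraTreesClassifier(n_estimators=n_trees, max_features=max_features)
```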

| Variables | Mean ($\mu$) | SD | Ratio ($r$) | Range |
|---|---|---|---|---|
| ${m}_{1},\dots ,{m}_{10}$ | $10\times {10}^{3}$ kg | $1.0\times {10}^{3}$ kg | $0.1$ | $\mu \pm 5\mu r$ |
| ${k}_{1},{k}_{2},{k}_{3}$ | $40\times {10}^{6}$ N/m | $4.0\times {10}^{6}$ N/m | $0.1$ | $\mu \pm 5\mu r$ |
| ${k}_{4},{k}_{5},{k}_{6}$ | $36\times {10}^{6}$ N/m | $3.6\times {10}^{6}$ N/m | $0.1$ | $\mu \pm 5\mu r$ |
| ${k}_{7},{k}_{8},{k}_{9},{k}_{10}$ | $32\times {10}^{6}$ N/m | $3.2\times {10}^{6}$ N/m | $0.1$ | $\mu \pm 5\mu r$ |
| ${\zeta}_{1},\dots ,{\zeta}_{10}$ | $620\times {10}^{4}$ N s/m | $62\times {10}^{4}$ N s/m | $0.1$ | $\mu \pm 5\mu r$ |
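Since $\sigma = r\mu$ with $r = 0.1$, the range $\mu \pm 5\mu r$ amounts to truncating each variable at $\pm 5$ standard deviations. A minimal sampling sketch for one row of the table, under the assumption (made here for illustration, not stated in this excerpt) that the variables are Gaussians truncated at that range:

```python
# Sketch: sample one row of the variable table as a Gaussian truncated
# at mu +/- 5*mu*r (= +/- 5 sigma, since sigma = r*mu). The
# truncated-Gaussian reading of the "Range" column is an assumption.
from scipy.stats import truncnorm

mu, r = 10e3, 0.1     # masses m_1..m_10: mean 10e3 kg, ratio r = 0.1
sigma = r * mu        # SD = 1.0e3 kg, matching the table
# truncnorm takes bounds in standard-normal units: (lo - mu)/sigma = -5, etc.
samples = truncnorm.rvs(-5, 5, loc=mu, scale=sigma, size=1000, random_state=0)
```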

**Table 4.** Thresholds of interest to evaluate failure probability [20].

| Failure Defined by | Res. Threshold 1 | Res. Threshold 2 |
|---|---|---|
| First DOF | 0.057 m | 0.073 m |
| Tenth DOF | 0.013 m | 0.017 m |

| Parameters | RF | GB | ETs |
|---|---|---|---|
| nTrees | 30 | 30 | 30 |
| nFeatures | 30 | 30 | 30 |
| maxFeatures | 6 | 6 | 6 |

| Method | 1st DOF, 0.057 m | 1st DOF, 0.073 m | 10th DOF, 0.013 m | 10th DOF, 0.017 m |
|---|---|---|---|---|
| Standard MCS | 1.06E-4 | 8.07E-7 | 4.88E-5 | 2.52E-7 |
| (num. of samples) | 2.98E+7 | 2.98E+7 | 2.98E+7 | 2.98E+7 |
| SubsetSim/MCMC [21] | 1.20E-4 | 1.00E-6 | 6.60E-5 | 4.70E-7 |
| (num. of samples) | 1850 | 2750 | 2300 | 2750 |
| SubsetSim/Hybrid [21] | 1.10E-4 | 1.10E-6 | 5.90E-5 | 3.20E-7 |
| (num. of samples) | 2128 | 3163 | 2645 | 3680 |
| Complex Modal Ana. [22] | 1.00E-4 | 9.80E-7 | 6.00E-5 | 4.60E-7 |
| (num. of samples) | 300 | 300 | 300 | 300 |
| Spherical SubsetSim [23] | 9.20E-5 | 8.80E-7 | 4.60E-5 | 5.30E-7 |
| (num. of samples) | 3070 | 4200 | 3250 | 4900 |
| Line sampling [24] | 9.80E-5 | 9.70E-7 | 6.00E-5 | 4.60E-7 |
| (num. of samples) | 360 | 3600 | 360 | 360 |
| RF-based | 7.6E-5 | 1.0E-6 | 4.2E-5 | 1.1E-7 |
| (num. of samples) | 500 | 500 | 500 | 500 |
| GB-based | 8.48E-5 | 9.15E-7 | 4.24E-5 | 1.06E-7 |
| (num. of samples) | 500 | 500 | 500 | 500 |
| ETs-based | 8.73E-5 | 9.89E-7 | 4.09E-5 | 1.10E-7 |
| (num. of samples) | 500 | 500 | 500 | 500 |
| Stacking-based | 1.0E-4 | 9.2E-7 | 4.3E-5 | 1.1E-7 |
| (num. of samples) | 500 | 500 | 500 | 500 |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

You, W.; Saidi, A.; Zine, A.-m.; Ichchou, M.
Mechanical Reliability Assessment by Ensemble Learning. *Vehicles* **2020**, *2*, 126-141.
https://doi.org/10.3390/vehicles2010007
