Next Article in Journal
Velocity Prediction of a Pipeline Inspection Gauge (PIG) with Machine Learning
Next Article in Special Issue
Susceptibility Analysis of Glacier Debris Flow Based on Remote Sensing Imagery and Deep Learning: A Case Study along the G318 Linzhi Section
Previous Article in Journal
Frequency Domain Specifications Based Robust Decentralized PI/PID Control Algorithm for Benchmark Variable-Area Coupled Tank Systems
Previous Article in Special Issue
Scientometric Analysis of Artificial Intelligence (AI) for Geohazard Research
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Machine Learning Models for Slope Stability Classification of Circular Mode Failure: An Updated Database and Automated Machine Learning (AutoML) Approach

1
Badong National Observation and Research Station of Geohazards (BNORSG), China University of Geosciences, Wuhan 430074, China
2
Three Gorges Research Center for Geo-Hazards of the Ministry of Education, China University of Geosciences, Wuhan 430074, China
3
School of Economics and Management, China University of Geosciences, Wuhan 430074, China
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(23), 9166; https://doi.org/10.3390/s22239166
Submission received: 25 October 2022 / Revised: 16 November 2022 / Accepted: 23 November 2022 / Published: 25 November 2022

Abstract

:
Slope failures lead to large casualties and catastrophic societal and economic consequences, thus potentially threatening access to sustainable development. Slope stability assessment, offering potential long-term benefits for sustainable development, remains a challenge for the practitioner and researcher. In this study, for the first time, an automated machine learning (AutoML) approach was proposed for model development and slope stability assessments of circular mode failure. An updated database with 627 cases consisting of the unit weight, cohesion, and friction angle of the slope materials; slope angle and height; pore pressure ratio; and corresponding stability status has been established. The stacked ensemble of the best 1000 models was automatically selected as the top model from 8208 trained models using the H2O-AutoML platform, which requires little expert knowledge or manual tuning. The top-performing model outperformed the traditional manually tuned and metaheuristic-optimized models, with an area under the receiver operating characteristic curve (AUC) of 0.970 and accuracy (ACC) of 0.904 based on the testing dataset and achieving a maximum lift of 2.1. The results clearly indicate that AutoML can provide an effective automated solution for machine learning (ML) model development and slope stability classification of circular mode failure based on extensive combinations of algorithm selection and hyperparameter tuning (CASHs), thereby reducing human efforts in model development. The proposed AutoML approach has the potential for short-term severity mitigation of geohazard and achieving long-term sustainable development goals.

1. Introduction

Natural hazards like landslide and subsidence have been acknowledged as a major factor disturbing sustainable development in developing countries [1,2,3,4]. For example, a catastrophic landfill slope failure occurred on 20 December 2015, in Guangming, Shenzhen, China, took the lives of 69 people [5]. The risk assessment and management of natural hazard will have a short-term benefit for severity mitigation and a long-term benefit for achieving sustainable development goals [1].
The evaluation of slope stability is of primary importance for natural hazard risk assessment and management in mountain areas. Numerous efforts have been made for slope stability assessment [6,7,8,9]. However, slope stability assessment for circular mode failure, a typical problem, still remains a challenge for the practitioner and researcher due to inherent complexity and uncertainty [10]. An extensive body of literature exists regarding slope stability assessments of circular failure, and significant progress has been achieved. Three main categories of assessment approaches have emerged: analytical approaches, numerical approaches, and machine learning (ML)-based approaches [11,12,13]. Limited equilibrium methods, such as the simplified Bishop, Spencer, and Morgenstern-Price methods, are commonly used analytical approaches and have been routinely used in practice. Generally, geometrical data, physical and shear strength parameters (unit weight, cohesion, and friction angle), and the pore pressure ratio are required in limited equilibrium methods [14,15]. However, the results vary across different methods due to different assumptions [9]. Numerical approaches (e.g., finite element methods) have been widely adopted for slope stability assessment. However, due to the requirement of numerous expensive input parameters, these models can be applied only in limited cases [16]. Recently, ML-based approaches have led to giant strides in slope stability assessment. A summary of the slope stability assessments of circular failure using ML approaches is given in Table 1. Among the various ML approaches used, artificial neural networks (ANNs) are widely utilized for slope stability assessment due to their simple structure and acceptable accuracy [11,17,18]. Recently, sophisticated ML algorithms, including but not limited to support vector machine (SVM), decision tree (DT), extreme learning machine (ELM), random forest (RF), and gradient boosting machine (GBM) algorithms, have been utilized for slope stability assessment. Hyperparameter tuning is a fundamental step required for accurate ML modeling [19,20]. As listed in Table 1, grid search (GS) and metaheuristic methods, such as the artificial bee colony (ABC) algorithm, genetic algorithm (GA), and particle swarm optimization (PSO), have been utilized for hyperparameter tuning in ML-based slope stability assessment. For example, Qi and Tang [16] simultaneously trained six firefly algorithm (FA)-optimized ML models, including multilayer perceptron neural network, logistic regression (LR), DT, RF, SVM, and GBM models, based on 148 cases of circular mode failure. The FA-optimized SVM was selected as the final model, with an area under the receiver operating characteristic curve (AUC) of 0.967 for the testing dataset. The performance of eight ensemble learning approaches was compared by [12] based on a dataset with 444 cases of circular mode failure. A stacked model was selected as the final model, with an AUC of 0.9452 for the testing dataset.
Although ML-based models have been widely applied, some studies have been based on a small number of samples, which may affect the generalization ability of the classifier. Moreover, most ML models have been manually developed by researchers with expert knowledge in a trial-and-error approach. In fact, exhaustive steps, including data preprocessing [31], feature engineering [32], ML algorithm selection [33], and hyperparameter tuning, are involved in practical applications of ML. Among them, model selection and hyperparameter tuning remain challenges for successful ML-based modeling [34]. Based on the no-free-lunch theorem [35], there is no algorithm that outperforms all others in all problems. Therefore, at present, according to prior experience, candidate off-the-shelf models are trained with a training dataset and validated by researchers. The ML model that provides the best performance is considered the final model and tested with an out-of-box testing dataset. This traditional workflow makes the model development process knowledge-based and time-consuming [36], and might yield unsatisfactory results [37]. However, most practitioners and researchers lack the knowledge and expertise required to build satisfactory ML models. Hence, an objective workflow with less human effort is needed, providing a basis for the concept of automated ML (AutoML) [38].
From the perspective of automation, AutoML is a systematic framework that automates algorithm selection and hyperparameter tuning and explores different combinations of factors with minimal human intervention [34,39,40,41]. AutoML has been successfully applied for ML modeling in a variety of fields, including tunnel displacement prediction [36], tunnel boring machine performance prediction [34], and earthquake casualty and economic loss prediction [42]. Thus, the generalization ability of this approach has been confirmed.
In the present study, an updated database with 627 cases consisting of the unit weight, cohesion, and friction angle of the slope materials: slope angle and height, pore pressure ratio, and corresponding stability status of circular mode failure, has been collected. For the first time, an AutoML approach was proposed for slope stability classification. The top model was selected from 8208 trained ML models by exploring numerous combinations of algorithm selection and hyperparameter tuning (CASHs) with minimal human intervention.
The major contribution of this paper is highlighted as follows:
(a)
A large database consisting of 627 cases has been collected for slope stability classification.
(b)
Based on the updated dataset, an AutoML approach was proposed for slope stability classification without the need for manual trial and error. The proposed AutoML approach outperformed the existing ML models by achieving superior performance.
The rest of this paper is organized as follows: the updated database and methodology are presented in Section 2 and Section 3, respectively. Section 4 presents and discusses experimental results. Finally, the conclusions and further work are presented in Section 5.

2. Database

As listed in Table 1, the input features relevant to the slope stability assessment of the circular failure model (schematic illustrated in inset of Figure 1) mainly include the unit weight, cohesion, and friction angle of the slope materials, the slope angle and height, and the pore pressure ratio. Moreover, these features are fundamental input parameters for limit equilibrium methods, such as the simplified Bishop method [15,43]. Based on the previous research listed in Table 1, an updated database consisting of 627 cases was obtained from previous studies [11,12,16,24,30,44] and is listed in Appendix A. The database consists of the unit weight, cohesion, and friction angle of the slope materials, the slope angle and height, the pore pressure ratio, and the corresponding stability status. The numbers of positive (stable) and negative (failure) samples are 311 and 316, respectively. The statistics of the input features are summarized in Table 2. To better visualize the collected dataset, ridgeline plots showing the density distributions of the input features based on kernel density estimation [3] are presented in Figure 1. As shown, the collected dataset was distributed in a wide range of regions, and the distribution was not symmetric.
The Pearson correlation coefficient (R) was adopted to further reveal the linear correlations between input features and the slope stability status and is shown in the lower left half of the panels in Figure 2. As shown, relatively poor linear correlations with correlation coefficients lower than 0.5 were observed between the input features and the slope stability status. Significant linear correlations (R = 0.71, 0.71, and 0.68) were noted for the unit weight, friction angle, and slope angle. Additionally, a moderate correlation (R = 0.51) was found between the unit weight and slope height.
Furthermore, the multivariate principal component analysis (PCA) technique [45] was applied to enhance the visualization of the statistical relationships among features. The PCA results shown in Figure 3 demonstrate that the first three principal components (PC1-PC3) account for 79.09% of the entire multivariate variance in space. PC1 is mainly associated with the unit weight, friction angle, and slope angle. PC2 corresponds to the pore pressure ratio. Moreover, overlapping among failure and stability classes can be clearly observed. In other words, the decision boundary for separating slope failure and stability is highly nonlinear and complex.

3. Methodology

3.1. AutoML

From the perspective of automation, AutoML is a systematic model that automates the algorithm selection and hyperparameter tuning processes and explores different CASHs with minimal human intervention [34,39,40]. More formally, the CASH problem can be stated as follows. Let A = { A 1 , A 2 , , A R } be a set of ML algorithms, Λ = { Λ 1 , Λ 2 , , Λ R } be the corresponding hyperparameters, and L be the loss function. When adopting k-fold cross validation (CV), the training dataset D t r a i n i n g is divided into subsets { D t r a i n i n g ( 1 ) , D t r a i n i n g ( 2 ) , , D t r a i n i n g ( k ) } and { D v a l i d a t i o n ( 1 ) , D v a l i d a t i o n ( 2 ) , , D v a l i d a t i o n ( k ) } . The CASH problem is defined as
A * λ * arg min A ( j ) A , λ ( j ) Λ 1 k i = 1 k L ( A λ ( j ) , D t r a i n i n g ( i ) , D v a l i d a t i o n ( i ) )
Generally, AutoML consists of the following three key components: a search space, a search strategy, and a performance evaluation strategy [40] (schematically illustrated in Figure 4). The search space refers to a set of hyperparameters and the range of each hyperparameter. The search strategy refers to the strategy of selecting the optimal hyperparameters from the search space. Grid search and Bayesian optimization are commonly used search strategies. The performance evaluation strategy refers to the method used to evaluate the performance of the trained models.
Various open-source platforms, such as AutoKeras, AutoPyTorch, AutoSklearn, AutoGluon, and H2O AutoML, have been developed to facilitate the adoption of AutoML [46]. Previous studies [47,48] have demonstrated the strong feature of H2O AutoML for processing large and complicated datasets by quickly searching the optimal model without the need for manual trial and error. Moreover, H2O AutoML provides a user interface for non-experts to import and split datasets, identify the response column, and automatically train and tune models. Therefore, in the present study, the H2O AutoML platform was adopted for the automated assessment of slope.
The H2O AutoML platform includes the following commonly used ML algorithms: generalized linear model (GLM), distributed random forest (DRF), extremely randomized tree (XRT), deep neural network (DNN), and GBM algorithms [49]. The abovementioned ML algorithms in the H2O AutoML platform are briefly described as follows.
GLM is an extended form of a linear model. Given the input variable x, the conditional probability of the output class falling within the class c of observations is defined as follows:
y ^ c = P r ( y = c | x ) = e x T β c + β c 0 k = 1 K ( e x T β k + β k 0 )
where β c is the vector of coefficients for class c.
The DRF is an ensemble learning approach based on decision trees. In the DRF training process, multiple decision trees are built. To reduce the variance, the final prediction was obtained by aggregating the outputs from all decision trees.
Similar to the DRF, XRT is based on multiple decision trees, but randomization is strongly emphasized to reduce the variance with little influence on the bias. The following main innovations are involved in the XRT process: random division of split nodes using cut points and full adoption of the entire training dataset instead of a bootstrap sample for the growth of trees.
The DNN in H2O AutoML is based on a multilayer feedforward artificial neural network with multiple hidden layers. There are a large number of hyperparameters involved in DNN training, which makes it notoriously difficult to manually tune. Cartesian and random grid searches are available in H2O AutoML for DNN hyperparameter optimization.
GBM is an ensemble learning method. The basic idea of GBM is to combine weak base learners (usually decision trees) for the generation of strong learners. The objective is to minimize the error in the objective function through an iterative process using gradient descent.
In addition, stacked ensembles can be built using either the best-performing models or all the trained models.

3.2. Search Space and Search Strategy

In the present study, a random grid search was adopted for hyperparameter tuning in the search space. When adopting k-fold CV, the hyperparameter tuning process can be described as follows (schematically illustrated in Figure 5). First, possible combinations of the tuned parameters are generated. Then, CV is performed using a possible parameter combination. The training dataset is divided into k equal-sized subsets. A single subset is treated as the validation subset, while the remaining subsets are adopted for classification training. The average accuracy from k validation sets is computed and adopted as the performance measure of the k-CV classifier model. The above process is repeated for all possible parameter combinations. A ranking of all trained classifiers by model performance is obtained. The classifier that yields the highest accuracy is selected.

3.3. Performance Evaluation Measures

In the present study, widely applied criteria, including the accuracy (ACC), AUC, sensitivity (SEN), specificity (SPE), positive predictive value (PPV), negative predictive value (NPV), and Matthews correlation coefficient (MCC), were adopted for performance evaluation (Table 3). The AUC can be interpreted as follows: an AUC equal to 1.0 indicates perfect discriminative ability, an AUC value from 0.9 to 1.0 indicates highly accurate discriminative ability, an AUC value from 0.7 to 0.9 indicates moderately accurate discriminative ability, an AUC value from 0.5 to 0.7 demonstrates inaccurate discriminative ability, and an AUC less than 0.5 indicates no discriminative ability.

3.4. Slope Stability Assessment through AutoML

In the present study, the H2O AutoML approach was adopted for ML model development for slope stability classification (schematic illustrated in Figure 6). First, the database listed in Appendix A was randomly divided into training and testing datasets at a ratio of 80% to 20%, respectively. ML models, including GLM, DRF, XRT, DNN, and GBM were automated and developed (schematic illustrated in Figure 6). To enhance the reliability and performance, the common 10-fold CV was performed. A full list of tuned hyperparameters and the corresponding searchable values are given in Table 4. Stacked ensembles were developed based on the best-performing models and all the tuned models. A leaderboard ranking the mode performance accuracy was achieved. The leader models were saved and evaluated on the testing dataset.
The AutoML process was implemented using H2O AutoML (3.36.1.2) with an Intel(R) Xeon(R) E-2176M @ 2.70 GHz CPU with 64 GB RAM. The maximum time allotted to run generation classifiers, except for the stacked ensembles, was set to 3600 s.

4. Results and Discussions

4.1. Performance Analysis

A total of 8208 ML models, including bypass CV models, were trained with the H2O AutoML platform and saved. The top five models from the leaderboard were selected and listed in Table 4 for testing. The performance evaluation metrics for the top five models on the testing dataset are listed in Table 5.
As listed in Table 5, the stacked ensemble of the best 1000 models (H2O1) ranked as the top-performing model. The corresponding ROC curves are shown in Figure 7, which clearly indicates that the top-performing model is capable of providing highly accurate discriminative ability, with AUC of 0.999 and 0.970 for the training and testing dataset, respectively. The model performance was further evaluated using gain and lift charts (Figure 8). A gain chart measures the effectiveness of a classifier based on the percentage of correct classifications obtained with the model versus the percentage of correct classifications obtained by chance (i.e., the baseline). As shown, for the top model, only 30% of the population is required to achieve an accuracy of 60%, compared to 30% for the random model. The top classifier is capable of achieving a maximum lift of 2.1. In other words, when only 10% of the sample was selected, the average accuracy of the top model was approximately two times higher than that of the random model.
Figure 9 demonstrates the correlation between NPV and PPV for the obtained top-five classification models based on the testing dataset. As shown, the top-performing model (H2O1) falls within zone 2, in which the obtained NPV is greater than the PPV. This result indicates that the top-performing model (H2O1) tends to classify slope status as a failure (negative status) more often than stable (positive status). In other words, the top-performing model (H2O1) may overestimate stability.

4.2. Model Interpretation

In the present study, the partial dependence plot graphically revealing the input–output relationship was adopted for model interpretation. The partial dependence plot has been considered as one of the most popular model agnostic tools due to the advantages of simple definition and easy implementation. The partial dependence relations of the input features in the top-performing model (H2O1) are shown in Figure 10. In partial dependence plots, features with greater variability have more significant effects on the model [18,50]. As shown, the top-performing model (H2O1) is highly influenced by the slope height and friction angle.

4.3. Validation of the AutoML Model in ACADS Example

Furthermore, the predictive capacity of the top-performing model (H2O1) was validated on the Australian Association for Computer-Aided Design (ACADS) referenced slope example EX1, which is a simple homogeneous slope. The slope is 20 m long and 10 m high. The geometry and material properties are shown in Figure 11. With the parameters listed in Figure 11, the example slope was estimated to fail [43]. The top-performing model (H2O1) successfully classified the slope example as a failure case.

4.4. Comparison with Exiting Models

To further assess performance, the top-performing model (H2O1) from the AutoML approach was further compared with a manually derived ML model for slope stability assessment (Table 6). As shown in Table 6, in the previous studies, the firefly algorithm optimized SVM (FA-SVM) provides the best performance with an AUC of 0.967 [16], followed by ensemble classifiers on the extreme gradient boosting (XGB-CM) [11]. Obviously, the top-performing model (H2O1) is of better generalization ability than the existing models shown in Table 6 with the largest AUC and ACC values. These comparative results clearly indicate that the top-performing model (H2O1) from AutoML approach is capable of providing better generalization performance than the manually derived ML and metaheuristics-optimized model.

4.5. Advantages and Limitations of the Proposed Approach

Generally, the traditional ML models require workflows which encompass data preprocessing, feature engineering, ML algorithm selection, and hyperparameter tuning to be constructed, and are often developed based on prior experience. Due to varying levels of knowledge, the traditional ML model may not fully exploit the power of ML, resulting in less optimal results than those obtained with other models. Therefore, it is not objective to claim that one algorithm outperforms another without adjusting the hyperparameters. In contrast, AutoML is capable of automatically implementing the above processes and extensively exploring different workflows with minimal human intervention, resulting in a better model. In fact, previous studies [51,52] have reported that AutoML outperformed traditional ML models that were manually developed by data scientists. Moreover, it takes less computational time to train AutoML, with hundreds of optional pipelines, than it does to train a manually derived ML model, often requiring days to tune. In fact, based on the collected dataset, the computational time of AutoML with 8408 pipelines is one hour. Moreover, various commercial and open-source AutoML platforms have been developed, and many successful implementations have been reported. For example, an AutoML vision model was implemented for production recommendation using Google Cloud AutoML without hiring ML engineers [40]. These results may suggest that AutoML is preferred in some cases. However, due to the complex and involved process required to build an AutoML system from scratch, AutoML is still in an early stage of development. At present, AutoML is not fully automated [37,40]. For example, human efforts are still needed for data collection and data cleaning. For now, clear objectives based on high-quality data must be defined for AutoML. Nevertheless, the AutoML approach holds limitations such as black box, and is computationally expensive for large-scale datasets due to extensive searching of different pipelines.

5. Conclusions

In the present study, an updated database consisting of 627 cases was collected for slope stability classification of circular failure model. For the first time, an AutoML approach was proposed for ML model development. Instead of manually building a pipeline for ML algorithm selection and hyperparameter tuning, AutoML is capable of automatically implementing model development and performing extensive searches of different pipelines with minimal human intervention. The stacked ensemble of the best 1000 models was selected as the top model from 8208 ML trained models. The top-performing model provided highly accurate discriminative ability, with an AUC of 0.970 and an ACC of 0.904 for the testing dataset, achieving a maximum lift of 2.1. The trained AutoML model outperformed traditional manually tuned and metaheuristic-optimized models. AutoML was verified as an effective tool for automated ML model development and slope stability assessments of circular failure.
Given the successful use of AutoML for classification of slope stability for circular mode failure, it seems that such a methodology could be useful for short-term severity mitigation of geohazard and achieving long-term sustainable development goals.
Although the proposed AutoML approach shows promising results, it still has some limitations. Beyond the black box nature, among the major shortcomings of AutoML, a solution is their computational complexity. Future works should focus on developing explainable and interpretable ML models by coupling data-driven models with physical models.

Author Contributions

J.M.: Investigation, Methodology, Data curation, Formal analysis, Writing—original draft, Writing—review & editing, Funding acquisition. S.J.: Visualization, Software. Z.L.: Resources, Investigation. Z.R.: Resources, Investigation. D.L.: Resources, Investigation. C.T.: Resources, Investigation. H.G.: Visualization, Validation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Major Program of the National Natural Science Foundation of China (Grant No. 42090055), the National Natural Science Foundation of China (Grant Nos. 42177147 and 71874165), and the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (CUG2642022006).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used are contained in Appendix A.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AB: adaptive boost; ABC: artificial bee colony; ACC: accuracy; ACADS: Australian Association for Computer Aided Design; ADB: adaptive boosted decision tree; ANN: artificial neural network; AUC: area under the receiver operating characteristic curve; AutoML: automated machine learning; B-ANN: bagging artificial neural network; BC: bagging classifier; BDA: Bayes discriminant analysis; B-KNN: bagging k-nearest neighbors; BP: back-propagation; B-SVM: bagging support vector machine; CASHs: combinations of algorithm selection and hyperparameter tuning; CV: cross validation; DNN: deep neural network; DRF: distributed random forest; DT: decision tree; ELM: extreme learning machine; FA: firefly algorithm; GA: genetic algorithm; GBM: gradient boosting machine; GLM: generalized linear model; GNB: Gaussian naive bayes; GP: Gaussian process; GS: grid search; GSA: gravitational search algorithm; HGB: hist gradient boosting classifier; HS: harmony search, KNN: k-nearest neighbors; LDA: linear discriminant analysis; LM: Levenberg–Marquardt; LR: logistic regression; LSSVM: least squares support vector machine; MDMSE: margin distance minimization selective ensemble; ML: machine learning; MLP: multilayer perceptron; MO: metaheuristic optimized; NB: naive Bayes; NPV: negative predictive value; OEC: optimum ensemble classifier; PC: principal component; PCA: principal component analysis; PPV: positive predictive value; PSO: particle swarm optimization; QDA: quadratic discriminant analysis; RBF: radial basis function; RBP: resilient back-propagation; RF: random forest; RMV: relevance vector machine; SCG: scaled conjugate gradient; SEN: sensitivity; SGD: stochastic gradient descent; SPE: specificity; Std.: standard deviation; SVM: support vector machine; XRT: extremely randomized tree.

Appendix A. Updated Dataset for Slope Stability Assessments of Circular Mode Failure

No γ (kN/m3)c (kPa) φ (°) β (°) H (m) r u Status
117.984.9530.0219.9880.3Stable
2185302080.3Stable
321.476.930.0231.0176.80.38Failure
421.516.94303176.810.38Failure
521.788.553227.9812.80.49Failure
621.828.62322812.80.49Failure
722.4103530100Stable
821.41030.3430200Stable
922.4103545100.4Failure
1027.31039415110.25Stable
1127.31039404700.25Stable
1222.4103530100.25Stable
1321.41030.3430200.25Stable
14271039415110.25Stable
15271039404700.25Stable
1627.31039404800.25Stable
1721.3610.0530.3330200Stable
1819.9710.0528.9834.0360.3Stable
1922.3810.0535.0130100Stable
2022.3810.0535.0145100.4Failure
2119.0810.059.9925.02500.4Failure
2219.0810.0519.9830500.4Failure
2318.8310.3521.2934.03370.3Failure
2416.511.490303.660Failure
2516.4711.550303.60Failure
2619.0311.727.9934.98210.11Failure
2719.0611.72835210.11Failure
2819.0611.712835210.11Failure
2919.0611.752835210.11Failure
301411.972630880Failure
3119.6311.97202212.190.41Failure
321411.972630880.45Failure
3319.6311.97202221.190.4Failure
3418.51203060Failure
3518.51203060.25Failure
3619.61219.982212.20.41Failure
3713.971226.0130880Failure
3818.461203060Failure
3913.971226.0130880.45Failure
4027.31431411100.25Stable
41271431411100.25Stable
4218.8414.36252030.50Stable
4318.8414.36252030.50.45Failure
4418.8414.362520.3500.45Failure
4518.814.425.0219.9830.60Stable
4618.814.425.0219.9830.60.45Failure
4718.815.3130.0225.0210.60.38Stable
4818.8415.32302510.670.38Stable
4920.5616.2126.5130400Failure
5020.616.2826.530400Failure
5127.316.8285090.50.25Stable
522716.8285090.50.25Stable
5320.9619.9640.0140.02120Stable
5421.9819.963645500Failure
5519.9719.963645500.25Failure
5619.9719.963645500.5Failure
5718.7719.969.9925.02500.3Failure
5818.7719.9619.9830500.3Failure
5921.9819.9622.0119.981800Failure
6021.9819.9622.0119.981800.1Failure
6122203645500Failure
6220203645500.25Failure
6320203645500.5Failure
64182430.1545200.12Failure
6517.9824.0130.1545200.12Failure
6618.8324.7621.2929.2370.5Failure
6720.4124.9132210.670.35Stable
6820.3924.9113.012210.60.35Stable
6918.52503060Failure
7018.52503060.25Failure
7118.4625.0603060Failure
7218.7725.0619.9830500.2Failure
7318.7725.069.9925.02500.2Failure
7427.3263150920.25Stable
7527263150920.25Stable
7618.6826.3415358.230Failure
7718.6626.4114.9934.988.20Failure
7828.429.4135.0134.981000Stable
7928.4429.4235351000Stable
8018.7730.019.9925.02500.1Stable
8118.7730.0119.9830500.1Stable
8220.9630.0135.0140.02120.4Stable
8318.9730.0135.0134.98110.2Stable
8427.331.529.7411350.25Stable
852731.529.7411350.25Stable
8627323342.63010.25Failure
8727323342.42890.25Stable
88273233422890.25Stable
8920.3933.4610.9816.0145.80.2Failure
9020.4133.52111645.720.2Failure
9120.4133.52111645.70.2Failure
9220.9634.9627.9940.02120.5Stable
93273535423590.25Stable
942737.53537.83200.25Stable
952737.535383200.25Stable
9628.439.1637.9834.981000Stable
9728.4439.2338351000Stable
98274035434200.25Failure
9919.9740.0630.0230150.3Stable
10019.9740.0640.0140.02100.2Stable
10120.9645.0225.0249.03120.3Stable
10217.9845.0225.0225.02140.3Stable
103254635474430.25Stable
104254635444350.25Stable
105254635464320.25Stable
106254635463930.25Stable
107254840493300.25Stable
10826.435026.64092.20.15Stable
10926.75026.6501700.25Stable
110275040424070.25Stable
11125553645.52990.25Stable
112255536442990.25Stable
11318.8457.46202030.50Stable
11418.857.4719.9819.9830.60Stable
11526.86028.8591080.25Stable
11631.36837472130.25Failure
11731.36837463660.25Stable
11831.368.637473050.25Failure
119167020401150Failure
12015.9970.0719.9840.021150Failure
12122.3899.934545150.25Stable
12222.41004545150.25Stable
1232512045531200Stable
12424.96120.0445531200Stable
12526.491503345730.15Stable
12626.715033501300.25Stable
12726.8915033521200.25Stable
1282615045302000.25Stable
12926150.0545502000Stable
13025.96150.054549.982000Stable
13126.8120035581380.25Stable
13226.5730038.745.3800.15Failure
13326.7830038.7541550.25Failure
13419.965219.956653644.997500.25Failure
13525.638.83625260Stable
13622.88031.7836.8645.450.54Failure
13723.5252049.11150.41Stable
13816720401150Failure
13927.337.33130300Stable
1402203645500.25Stable
1412731.530411350.25Stable
14218.800814.404825.0219.98130.60Stable
14319.617.829.246.8201.20.37Stable
14418.8415.32302510.70.38Stable
14525463644.52990.25Stable
14619.6311.98202212.190Failure
147251245531200Stable
14818.772430.019.9925.016500.1Stable
149254635474430.29Stable
15018.772425.058359.9925.016500.2Failure
15130.9530.7927.0839.77131.220.22Stable
15217.414.9521.245150.4Failure
15323.125.229.236.561.90.4Stable
15421.516.94303176.80.38Failure
15520.959245.01525.0249.025120.3Stable
15627323342.63010.29Failure
15715.989270.0733519.9840.0151150Failure
158120304580.29Failure
159254635502850.25Stable
16013.972812.00426.0129.998880.45Failure
16118.6826.3415358.230.25Failure
16218.772430.0119.9829.998500.1Stable
163220403380.35Stable
1642003645500.25Failure
16531.368.637473050Failure
16622103545100.403Failure
16718526.515.52530.4Failure
16821.7322745600Failure
1691411.972630880.25Failure
17018.8414.36252030.50.25Stable
171120304540.25Stable
1721852215.52530.4Failure
17326.244.1432.2637.71359.040.21Stable
17419.965219.956653644.997500.5Failure
17522203645500.25Failure
176120303540Stable
1772512045531200.25Stable
17831.36837463660Failure
17926.536.13135390Stable
18020.959230.0135.0140.015120.4Stable
18127.31039404700.29Stable
18227.336150920.29Stable
18318.84020207.620.45Failure
18426.241.53635300Stable
18527.438.13125420Stable
18626.93041.1331.688.160.3Stable
18720.815.62030450Failure
1882727.329.134126.50.3Failure
18930.3315.6224.2152.585.760.25Failure
1901911.920.421.04540.75Stable
19118.89.82119.29390.25Failure
19221.134.22630750Failure
193200.13645500.29Failure
194240403380.3Failure
19524.4511.3439.3144.039.790.43Failure
196180303380.303Stable
19720.4124.91132210.670.35Stable
19821.831.22530600Failure
199200.13645500.503Failure
200240403380.303Stable
20126.7826.7930.6643.66249.70.25Stable
20231.2525.7327.9748.2391.550.21Failure
203120.03303540.29Failure
2042203645500.25Failure
205255536452390.25Stable
206232419.8233800Failure
20721.203523.751500.25Failure
20820.959234.9616527.9940.015120.5Stable
209120304580.25Failure
210277022.845600.32Stable
21118.772419.9566519.9829.998500.3Failure
21228.4429.4235351000.25Stable
21320.815.42130530Failure
21419.59612.00419.9821.99512.20.405Failure
21522.124.239.745.849.50.21Stable
21622.429.32650500Failure
21720024.52080.35Stable
21825553645.52990Stable
21917.5522.08034.995.880.35Failure
22020.5214.0626.2325.389.860.37Stable
22121.981619.9566522.00519.9811800.1Failure
22218.4612.004029.99860Failure
22320.45161530360.25Stable
22421.133.52840310Failure
22522203645300.29Failure
22617.6101621.890.4Stable
22731.368.637472700.25Failure
22823.41538.530.345.20.28Failure
22916.47211.55385029.9983.60Failure
23023.47032372140.25Failure
23124.8645.639.836.31386.080.21Stable
23217.21024.2517.07380.4Stable
23314.801720500Failure
23417.86024.3822.448.230.39Stable
23518.822514.620.32500.4Failure
23618.829210.3534521.28534.026370.3Failure
23718.8457.46202030.50.25Stable
23831.368.637473050.25Stable
23928.019.537.3641.86538.10.23Stable
24025633244.52390.25Stable
24118.603221.8460.25Stable
24225.838.23327400Stable
24331.3683749200.50.29Failure
244167020401150.25Failure
245220403380.393Stable
246254635502840.25Stable
24720.627.82735700Failure
248224030301960Stable
24918.971230.0135.0134.98110.2Stable
25026.243.83835680Stable
25117.97724.9516530.01519.98180.3Stable
25222.428.92428350Failure
25325.639.83630320Stable
25419.3619.838.4943.4148.880.43Failure
25520.4124.9132210.70.35Stable
25623.51027261900Failure
25717.4202418.43510.4Failure
25817.610821.890.4Stable
25922.379210.0533535.0129.998100Stable
26021.78288.5528531.99527.98412.80.49Failure
26119.6311.97202212.190.405Failure
262254840453300.25Stable
26325.839.43325450Stable
26427403547.12920.25Failure
265120303580.25Failure
26622.379299.93334544.997150.25Stable
26716.511.490303.660.25Failure
26825.834.73330500Stable
26926.62031.7842.7251.480.4Failure
270240403380.3Stable
27118.84020207.620Failure
27218.772425.0583519.9829.998500.2Failure
273222123302570Failure
27423.29.539.6939.3410.490.44Failure
27521.78034.2357.130.32Stable
27614.801720500.25Failure
27731.36837472130Failure
27821.832.72750500Failure
27921.828.82635990Failure
28026.242.83730370Stable
28122103530100.29Stable
28219.621.829.537.840.30.25Stable
28318.603226.5460.25Stable
28427.31039415110.29Stable
28528.073538.9344.54361.510.24Stable
28619.6311.97202212.20.41Failure
287275040424070.29Stable
28821.739.2130.633.0619.780.29Stable
28927.31431411100.29Stable
29026.695026.6501700.25Stable
29126.535.43230210Stable
29226.541.83642540Stable
29318.772419.956659.9925.016500.3Failure
29429.738.0932.9245.48410.40.26Stable
29526.242.33623360Stable
29620.616.2826.530400.25Failure
29720.959219.9566540.00540.015120Stable
29820.628.52740650Failure
29917.29037.2244.5542.30.28Failure
30012.34025.9246.828.080.43Failure
3012737.53537.83200.29Stable
30224.9636120.0445531200Stable
30322.145.849.545.849.50.21Stable
30411.94031.7532.493.920.11Stable
30517.977245.01525.0225.016140.3Stable
30620.632.42630420Failure
30718.882621.8400.4Failure
30831.3683747360.50.25Failure
30926.8313.9835.4643.596.140.23Stable
31021.203523.751500.25Stable
3112203645500Failure
31217.977224.00830.1544.997200.12Failure
31320203645300.503Failure
31424.579.9841.3135.46526.130.27Stable
31521.529.82640700Failure
31627.12218.625.61000.19Failure
31722103645500.29Failure
31821.66.51940500Failure
31920.9721.831.8138.0957.750.24Failure
32026.837.53230260Stable
32125.9576150.054549.9792000Stable
32219.965210.0533528.9834.02660.3Stable
32322.5429.420242100Stable
3242642.43738550Stable
32520.4124.9132210.670Stable
326212024215650Stable
32731.3683749200.50.25Failure
32820.632.42635550Failure
32916.0511.490303.660Failure
33025463644.52990Stable
33119.4311.16032.345.350.36Failure
3322030.32545530Failure
33321.981619.956653644.997500Failure
33427.331.529.703411350.293Stable
33521.5152941.5123.60.36Stable
33620.814.82130400Failure
33725.843.33730330Stable
33820.4133.52111645.720Failure
33927403547.12920Failure
3402440.83535500Stable
34122.41004545150.25Failure
342256332463000.25Stable
343182430.245200.12Failure
34426.816028.8591080.25Stable
34528.3544.9733.4943.16413.420.25Failure
34619.084810.053359.9925.016500.4Failure
3472727.329.1351500.26Failure
34831.368378305.50.25Failure
349254840493300Stable
35018.800857.4691519.9819.98130.60Stable
351273233423010.25Failure
352254635463930Stable
35318.84020207.60.45Failure
35420.391224.908313.00521.99510.60.35Stable
355261545502000Stable
35631.358.835.547.5438.50.25Failure
35718.658826.408814.98534.988.20Failure
35821.11030.3430200Stable
35925.841.23530400Stable
36021.47046.902330.01531.00576.80.38Failure
36123.47032372140Failure
362200202080.35Stable
363232020.346.240.30.25Stable
36431.358.835.547.5502.70.25Failure
3652639.43625300Stable
36627.31039404800Stable
36721.827.62535600Failure
36821.428.82050520Failure
36919.965240.0633530.01529.998150.3Stable
3702082010100Failure
37123.83138.747.523.50.31Stable
37226.642.43725520Stable
37328.439.1630537.9834.981000Stable
37421.5117.8231.7547.0349.920.52Failure
375220403380.35Failure
37623020201000.3Failure
37721.4302020610Failure
37826.640.73535600Stable
37927.8345.0135.9547.83456.380.25Stable
380254635444350.29Stable
38118.714.7528.1218.818.620.31Stable
38226.644.13835420Stable
38328.429.409835.0134.981000Stable
38419.02811.703927.9934.98210.11Failure
38518.45018.5817.827.550.43Failure
386273535423590.29Stable
38731.368.63747.5262.50.25Failure
38831.36837463660.25Failure
389274335434200.29Failure
390120303540.25Stable
39126.1815944.9331.5172.980.1Failure
39219.32019.4420.268.480.45Failure
3933027.3834.5743.46319.210.27Failure
394120304580Failure
39528.5142.3432.243.25453.60.25Stable
39611.82033.731.263.910.42Stable
39718.8415.32302510.670Stable
3982735.83230690Stable
399182121.3321.8400.4Failure
40017.821.213.9218.43510.4Stable
40127.316.2285090.50.29Stable
40222.320.13140.2880.19Stable
40322.52016252200Stable
40413.972812.00426.0129.998880Failure
405254635464320.29Stable
40620303645500.29Failure
40723.231.22330330Failure
40825.4333320350Failure
40926150.0545502000.25Stable
41019.965240.0633540.00540.015100.2Stable
41120.391233.4611510.9816.00645.80.2Failure
41228.4439.2338351000.25Stable
413211030.34330300.29Stable
414222915184000Failure
41527.827.827412360.1Stable
41626.542.93834360Stable
41718.829224.7582521.28529.203370.5Failure
41821.981619.9566522.00519.9811800Failure
41918.800815.305130.01525.01610.60.38Stable
42021.838.62322812.80Failure
42122.858.4638.1225.6711.340.56Stable
42218.5250306.0030.29Failure
4232738.43325220Stable
4242441.53630510Stable
42521.4302020610.5Failure
4262615045302300.29Stable
42718.5120306.0030.29Failure
42822.379210.0533535.0144.997100.4Failure
42920.561616.205426.50529.998400Failure
430316837463660.25Failure
43121.356810.0533530.3329.998200Stable
432254635502840Stable
43327323342.22390.29Stable
43425.636.83435600Stable
4352003645500.5Failure
43619.084810.0533519.9829.998500.4Failure
43733.1668.5441.1151.98188.150.44Failure
43821.203518.43730.25Stable
43920.626.312225350Failure
44018.4625.05835029.99860Failure
44122.304026.5780.25Stable
442120303540.29Stable
44318.1210.5730.8432.4521.770.11Failure
44419.629.62340580Failure
4452727.329.1371840.22Failure
446255536452990.25Stable
44722.51820202900Stable
44818.800814.404825.0219.98130.60.45Failure
449120304540Stable
45023.47032372140Stable
45120.4133.52111610.670.35Stable
45225.4333320350Stable
45327.331.530411350.25Stable
45421.4103030200.25Stable
45518.668.815358.20Failure
45628.49.835351000Stable
45725.965045502000Stable
45818.468.3503060Failure
45921.363.353030200Stable
46015.9923.3520401150Failure
46120.398.3132210.60.35Stable
46219.64202212.20.41Failure
46320.3911.15111645.80.2Failure
46419.033.92835210.11Failure
46517.981.65302080.3Stable
46620.966.654040120Stable
46720.9611.652840120.5Stable
46819.973.35293460.3Stable
46918.77101025500.1Stable
47018.77102030500.1Stable
47118.778.352030500.2Failure
47220.565.42730400Failure
47316.473.850303.60Failure
47418.84.8252030.60Stable
47518.819.15202030.60Stable
47628.413.0538351000Stable
47724.964045531200Stable
47818.46403060Failure
47922.383.353530100Stable
48021.986.653645500Failure
48118.85.1302510.60.38Stable
48218.84.8253176.80.38Failure
48321.472.33030880.45Failure
48413.9742645200.12Failure
48517.9883045150.25Failure
48622.3833.34545100.4Stable
48722.383.353545500.25Failure
48819.976.653645500.25Failure
48919.976.653645500.5Failure
49020.96152549120.3Stable
49120.96103540120.4Stable
49219.9713.353030150.3Stable
49317.98152525140.3Stable
49418.97103535110.2Stable
49519.9713.354040100.2Stable
49618.838.252121370.5Stable
49718.833.452134370.3Failure
49818.778.351025500.2Failure
49918.776.651025500.3Failure
50019.083.351025500.4Failure
50118.776.652030500.3Failure
50219.083.352030500.4Failure
50321.986.6522201800Failure
50421.986.6522201800.1Failure
50520203645500Failure
5062727.329.1215650.26Failure
5072727.329.1351500.22Failure
5082727.329.1371840.3Failure
5090.6570.1760.3330.660.0410Failure
51010.1960.7780.660.50Stable
5110.914110.94310Stable
5120.650.16700.5660.030Failure
5130.7520.0670.6740.5660.10Stable
5140.5630.4670.4440.7550.5750Failure
5150.7180.1660.2890.4150.0530.7Stable
5160.690.080.4440.4150.0610.81Failure
5170.7670.0570.7110.5280.0640.98Failure
5180.7180.2230.2440.3020.2290.4Failure
5190.670.0780.6220.660.1050.22Failure
5200.6330.0330.6670.3770.040.6Stable
5210.7380.1330.8890.7550.060Stable
5220.7380.2330.6220.7550.061Stable
5230.7030.0670.6440.6420.030.6Stable
5240.6610.20.2220.4720.250.2Stable
5250.6610.20.4440.5660.250.2Stable
5260.6610.1670.4440.5660.250.4Failure
5270.7240.1080.5890.5660.20Failure
5280.580.07700.5660.0180Failure
5290.6620.0960.5560.3770.1530Stable
5300.6620.3830.4440.3770.1530Stable
53110.2610.8440.660.50Stable
5320.4920.080.5780.5660.440Failure
5330.8790.8110.60Stable
5340.650.0800.5660.030Failure
5350.7880.0670.7780.5660.050Stable
5360.7740.1330.80.8490.250Failure
5370.6620.1020.6670.4720.0530.76Stable
5380.6620.0960.5560.3770.1530.9Failure
5390.7560.0460.6670.5850.3840.76Failure
5400.4920.080.5780.5660.440.9Failure
5410.6330.160.670.8490.10.24Failure
5420.7880.66610.8490.0750.5Stable
5430.7880.0670.7780.8490.050.8Failure
5440.7030.1330.80.8490.250.5Failure
5450.7030.1330.80.8490.251Failure
5460.7380.30.5560.9250.060.6Stable
5470.7380.20.7780.7550.060.8Stable
5480.7030.2670.6670.5660.0750.6Stable
5490.6330.30.5560.4720.070.6Stable
5500.6680.20.7780.660.0550.4Stable
5510.7030.2670.8890.7550.050.4Stable
5520.6330.1650.4730.5510.1851Failure
5530.6330.0690.4730.6420.1850.6Failure
5540.6610.1670.2220.4720.250.4Failure
5550.6610.1330.2220.4720.250.6Failure
5560.6720.0670.2220.4720.250.8Failure
5570.6610.1330.4440.5660.250.6Failure
5580.6720.0670.4440.5660.250.8Failure
5590.7740.1330.4890.3770.90Failure
5600.7740.1330.4890.3770.90.2Failure
56117.639.530.250380.04Stable
56217.3393050350.04Stable
56317.838.730.560260Stable
56417.93931.255250.15Stable
56517.3393050260.2Stable
56617.337.93045290.37Stable
56717.538.52950330.2Stable
56817.539.229.755310Stable
56917.839.831.345320.34Stable
57017.3393048300.03Stable
57118.357.238.638310.64Stable
57217.4543.558290.05Failure
57317.81444.265310.07Failure
57417.4043.760260.4Failure
57519.857.541.362230.19Stable
57620.56.512.542700Failure
57721.47.116.744701Failure
57821.59.511.540750Failure
57920.66.79.445300Failure
58020.99.718.539381Failure
58121.49.421.8301061Failure
58219.96.819.430801Failure
58320.214.918.540701Failure
58419915.245270Failure
58519.716.421.430551Failure
58621.27.822.445251Failure
58719.97.415.644301Failure
58819.97.121.230550Failure
58922.210.725.235451Failure
59021.87.217.840341Failure
59121.87.217.842411Failure
59221.9634.7714.1528600Stable
59321.9634.7714.15241150Stable
59422.9332.3319.7330501Stable
59522.1519.4713.29281101Stable
59623.420936.5500Stable
59721.818.059.7230400Failure
59823.9832.7717.28401000Failure
59920.5724.815.5340501Stable
60021.224.8817.2944520Failure
60122.1551945401Failure
60221.818.059.7235400Failure
60323.7536.7822.6342431Failure
60420.9823.592045650Failure
60522.624.0614.04261901Stable
60622.2927.5410.140700Stable
60722.124.6716.240701Stable
60820.2532.411.9945361Failure
60920.815.578.7429.7351Failure
61021.1715.441633321Failure
61122.9433.7723.29271701Stable
61222.9546.4925.1130421Stable
61321.9219.415.535801Failure
61421.4228.916.240301Stable
61520.840.2519.39451231Failure
61620.134.6124.6922940Stable
61719.1919.6917.6834431Failure
61819.1812.89.4545200Failure
61917.822.26.054051.61Failure
62019.615.5315.8835971Failure
62119.8133.7519.46201201Stable
62219.8119.9711.0835350Failure
62319.7179.3845201Failure
62420.221.219.8935621Failure
62517.9624.012840601Failure
62625553644.52990.25Stable
62721.9819.9622.0119.981800.01Failure

References

  1. Méheux, K.; Dominey-Howes, D.; Lloyd, K. Natural hazard impacts in small island developing states: A review of current knowledge and future research needs. Nat. Hazards 2007, 40, 429–446. [Google Scholar] [CrossRef]
  2. Iai, S. Geotechnics and Earthquake Geotechnics towards Global Sustainability; Springer: Dordrecht, The Netherlands, 2011; Volume 15. [Google Scholar]
  3. Ma, J.W.; Liu, X.; Niu, X.X.; Wang, Y.K.; Wen, T.; Zhang, J.R.; Zou, Z.X. Forecasting of Landslide Displacement Using a Probability-Scheme Combination Ensemble Prediction Technique. Int. J. Environ. Res. Public Health 2020, 17, 4788. [Google Scholar] [CrossRef]
  4. Niu, X.X.; Ma, J.W.; Wang, Y.K.; Zhang, J.R.; Chen, H.J.; Tang, H.M. A novel decomposition-ensemble learning model based on ensemble empirical mode decomposition and recurrent neural network for landslide displacement prediction. Appl. Sci. 2021, 11, 4684. [Google Scholar] [CrossRef]
  5. Ouyang, C.J.; Zhou, K.Q.; Xu, Q.; Yin, J.H.; Peng, D.L.; Wang, D.P.; Li, W.L. Dynamic analysis and numerical modeling of the 2015 catastrophic landslide of the construction waste landfill at Guangming, Shenzhen, China. Landslides 2017, 14, 705–718. [Google Scholar] [CrossRef]
  6. Duncan, J.M. Soil Slope Stability Analysis. In Landslides: Investigation and Mitigation, Transportation Research Board Special Report 247; National Academy Press: Washington, DC, USA, 1996; pp. 337–371. [Google Scholar]
  7. Duncan, J.M.; Wright, S.G. The accuracy of equilibrium methods of slope stability analysis. Eng. Geol. 1980, 16, 5–17. [Google Scholar] [CrossRef]
  8. Zhu, D.Y.; Lee, C.F.; Jiang, H.D. Generalised framework of limit equilibrium methods for slope stability analysis. Géotechnique 2003, 53, 377–395. [Google Scholar] [CrossRef]
  9. Liu, S.Y.; Shao, L.T.; Li, H.J. Slope stability analysis using the limit equilibrium method and two finite element methods. Comput. Geotech. 2015, 63, 291–298. [Google Scholar] [CrossRef]
  10. Li, A.J.; Merifield, R.S.; Lyamin, A.V. Limit analysis solutions for three dimensional undrained slopes. Comput. Geotech. 2009, 36, 1330–1351. [Google Scholar] [CrossRef]
  11. Pham, K.; Kim, D.; Park, S.; Choi, H. Ensemble learning-based classification models for slope stability analysis. Catena 2021, 196, 104886. [Google Scholar] [CrossRef]
  12. Lin, S.; Zheng, H.; Han, B.; Li, Y.; Han, C.; Li, W. Comparative performance of eight ensemble learning approaches for the development of models of slope stability prediction. Acta Geotech. 2022, 17, 1477–1502. [Google Scholar] [CrossRef]
  13. Kardani, N.; Zhou, A.; Nazem, M.; Shen, S.-L. Improved prediction of slope stability using a hybrid stacking ensemble method based on finite element analysis and field data. J. Rock Mech. Geotech. Eng. 2021, 13, 188–201. [Google Scholar] [CrossRef]
  14. Wang, H.B.; Xu, W.Y.; Xu, R.C. Slope stability evaluation using Back Propagation Neural Networks. Eng. Geol. 2005, 80, 302–315. [Google Scholar] [CrossRef]
  15. Wang, L.; Chen, Z.; Wang, N.; Sun, P.; Yu, S.; Li, S.; Du, X. Modeling lateral enlargement in dam breaches using slope stability analysis based on circular slip mode. Eng. Geol. 2016, 209, 70–81. [Google Scholar] [CrossRef]
  16. Qi, C.; Tang, X. Slope stability prediction using integrated metaheuristic and machine learning approaches: A comparative study. Comput. Ind. Eng. 2018, 118, 112–122. [Google Scholar] [CrossRef]
  17. Qi, C.; Tang, X. A hybrid ensemble method for improved prediction of slope stability. Int. J. Numer. Anal. Methods Geomech. 2018, 42, 1823–1839. [Google Scholar] [CrossRef]
  18. Zhou, J.; Li, E.; Yang, S.; Wang, M.; Shi, X.; Yao, S.; Mitri, H.S. Slope stability prediction for circular mode failure using gradient boosting machine approach based on an updated database of case histories. Saf. Sci. 2019, 118, 505–518. [Google Scholar] [CrossRef]
  19. Ma, J.; Xia, D.; Wang, Y.; Niu, X.; Jiang, S.; Liu, Z.; Guo, H. A comprehensive comparison among metaheuristics (MHs) for geohazard modeling using machine learning: Insights from a case study of landslide displacement prediction. Eng. Appl. Artif. Intell. 2022, 114, 105150. [Google Scholar] [CrossRef]
  20. Ma, J.; Xia, D.; Guo, H.; Wang, Y.; Niu, X.; Liu, Z.; Jiang, S. Metaheuristic-based support vector regression for landslide displacement prediction: A comparative study. Landslides 2022, 19, 2489–2511. [Google Scholar] [CrossRef]
  21. Feng, X.-T. Introduction of Intelligent Rock Mechanics; Science Press: Beijing, China, 2000. [Google Scholar]
  22. Lu, P.; Rosenbaum, M.S. Artificial Neural Networks and Grey Systems for the Prediction of Slope Stability. Nat. Hazards 2003, 30, 383–398. [Google Scholar] [CrossRef]
  23. Xue, X.; Yang, X.; Chen, X. Application of a support vector machine for prediction of slope stability. Sci. China Technol. Sci. 2014, 57, 2379–2386. [Google Scholar] [CrossRef]
  24. Hoang, N.-D.; Pham, A.-D. Hybrid artificial intelligence approach based on metaheuristic and machine learning for slope stability assessment: A multinational data analysis. Expert Syst. Appl. 2016, 46, 60–68. [Google Scholar] [CrossRef]
  25. Hoang, N.-D.; Tien Bui, D. Chapter 18—Slope Stability Evaluation Using Radial Basis Function Neural Network, Least Squares Support Vector Machines, and Extreme Learning Machine Slope Stability Evaluation Using Radial Basis Function Neural Network, Least Squares Support Vector Machines, and Extreme Learning Machine. In Handbook of Neural Computation; Samui, P., Sekhar, S., Balas, V.E., Eds.; Academic Press: Washington, DC, USA, 2017; pp. 333–344. [Google Scholar] [CrossRef]
  26. Feng, X.; Li, S.; Yuan, C.; Zeng, P.; Sun, Y. Prediction of Slope Stability using Naive Bayes Classifier. KSCE J. Civ. Eng. 2018, 22, 941–950. [Google Scholar] [CrossRef]
  27. Lin, Y.; Zhou, K.; Li, J. Prediction of Slope Stability Using Four Supervised Learning Methods. IEEE Access 2018, 6, 31169–31179. [Google Scholar] [CrossRef]
  28. Amirkiyaei, V.; Ghasemi, E. Stability assessment of slopes subjected to circular-type failure using tree-based models. Int. J. Geotech. Eng. 2020, 16, 301–311. [Google Scholar] [CrossRef]
  29. Haghshenas, S.S.; Haghshenas, S.S.; Geem, Z.W.; Kim, T.-H.; Mikaeil, R.; Pugliese, L.; Troncone, A. Application of Harmony Search Algorithm to Slope Stability Analysis. Land 2021, 10, 1250. [Google Scholar] [CrossRef]
  30. Zhang, H.; Wu, S.; Zhang, X.; Han, L.; Zhang, Z. Slope stability prediction method based on the margin distance minimization selective ensemble. CATENA 2022, 212, 106055. [Google Scholar] [CrossRef]
  31. Zou, Z.; Yang, Y.; Fan, Z.; Tang, H.; Zou, M.; Hu, X.; Xiong, C.; Ma, J. Suitability of data preprocessing methods for landslide displacement forecasting. Stoch. Environ. Res. Risk Assess. 2020, 34, 1105–1119. [Google Scholar] [CrossRef]
  32. Ma, J.W.; Wang, Y.K.; Niu, X.X.; Jiang, S.; Liu, Z.Y. A comparative study of mutual information-based input variable selection strategies for the displacement prediction of seepage-driven landslides using optimized support vector regression. Stoch. Environ. Res. Risk Assess. 2022, 36, 3109–3129. [Google Scholar] [CrossRef]
  33. Wang, Y.K.; Tang, H.M.; Huang, J.S.; Wen, T.; Ma, J.W.; Zhang, J.R. A comparative study of different machine learning methods for reservoir landslide displacement prediction. Eng. Geol. 2022, 298, 106544. [Google Scholar] [CrossRef]
  34. Zhang, Q.; Hu, W.; Liu, Z.; Tan, J. TBM performance prediction with Bayesian optimization and automated machine learning. Tunn. Undergr. Space Technol. 2020, 103, 103493. [Google Scholar] [CrossRef]
  35. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  36. Zhang, D.; Shen, Y.; Huang, Z.; Xie, X. Auto machine learning-based modelling and prediction of excavation-induced tunnel displacement. J. Rock Mech. Geotech. Eng. 2022, 14, 1100–1114. [Google Scholar] [CrossRef]
  37. Sun, Z.; Sandoval, L.; Crystal-Ornelas, R.; Mousavi, S.M.; Wang, J.; Lin, C.; Cristea, N.; Tong, D.; Carande, W.H.; Ma, X.; et al. A review of Earth Artificial Intelligence. Comput. Geosci. 2022, 159, 105034. [Google Scholar] [CrossRef]
  38. Jiang, S.; Ma, J.W.; Liu, Z.Y.; Guo, H.X. Scientometric Analysis of Artificial Intelligence (AI) for Geohazard Research. Sensors 2022, 22, 7814. [Google Scholar] [CrossRef]
  39. Fallatah, O.; Ahmed, M.; Gyawali, B.; Alhawsawi, A. Factors controlling groundwater radioactivity in arid environments: An automated machine learning approach. Sci. Total Environ. 2022, 830, 154707. [Google Scholar] [CrossRef]
  40. Quan, S.Q.; Feng, J.H.; Xia, H. Automated Machine Learning in Action. Manning Publications, Co.: New York, NY, USA, 2022. [Google Scholar]
  41. Mahjoubi, S.; Barhemat, R.; Guo, P.; Meng, W.; Bao, Y. Prediction and multi-objective optimization of mechanical, economical, and environmental properties for strain-hardening cementitious composites (SHCC) based on automated machine learning and metaheuristic algorithms. J. Clean. Prod. 2021, 329, 129665. [Google Scholar] [CrossRef]
  42. Chen, W.; Zhang, L. An automated machine learning approach for earthquake casualty rate and economic loss prediction. Reliab. Eng. Syst. Saf. 2022, 225, 108645. [Google Scholar] [CrossRef]
  43. Erzin, Y.; Cetin, T. The prediction of the critical factor of safety of homogeneous finite slopes using neural networks and multiple regressions. Comput. Geosci. 2013, 51, 305–313. [Google Scholar] [CrossRef]
  44. Sakellariou, M.G.; Ferentinou, M.D. A study of slope stability prediction using neural networks. Geotech. Geol. Eng. 2005, 23, 419–445. [Google Scholar] [CrossRef]
  45. Hoang, N.-D.; Bui, D.T. Spatial prediction of rainfall-induced shallow landslides using gene expression programming integrated with GIS: A case study in Vietnam. Nat. Hazards 2018, 92, 1871–1887. [Google Scholar] [CrossRef]
  46. Ferreira, L.; Pilastri, A.; Martins, C.M.; Pires, P.M.; Cortez, P. A Comparison of AutoML Tools for Machine Learning, Deep Learning and XGBoost. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–8. [Google Scholar]
  47. Sun, A.Y.; Scanlon, B.R.; Save, H.; Rateb, A. Reconstruction of GRACE Total Water Storage Through Automated Machine Learning. Water Resour. Res. 2021, 57, e2020WR028666. [Google Scholar] [CrossRef]
  48. Babaeian, E.; Paheding, S.; Siddique, N.; Devabhaktuni, V.K.; Tuller, M. Estimation of root zone soil moisture from ground and remotely sensed soil information with multisensor data fusion and automated machine learning. Remote Sens. Environ. 2021, 260, 112434. [Google Scholar] [CrossRef]
  49. Cook, D. Practical Machine Learning with H2O: Powerful, Scalable Techniques for Deep Learning and AI; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2016. [Google Scholar]
  50. Laakso, T.; Kokkonen, T.; Mellin, I.; Vahala, R. Sewer Condition Prediction and Analysis of Explanatory Factors. Water 2018, 10, 1239. [Google Scholar] [CrossRef] [Green Version]
  51. Padmanabhan, M.; Yuan, P.; Chada, G.; Nguyen, H.V. Physician-Friendly Machine Learning: A Case Study with Cardiovascular Disease Risk Prediction. J. Clin. Med. 2019, 8, 1050. [Google Scholar] [CrossRef] [Green Version]
  52. Ou, C.; Liu, J.; Qian, Y.; Chong, W.; Liu, D.; He, X.; Zhang, X.; Duan, C.-Z. Automated Machine Learning Model Development for Intracranial Aneurysm Treatment Outcome Prediction: A Feasibility Study. Front. Neurol. 2021, 12, 735142. [Google Scholar] [CrossRef]
Figure 1. Ridgeline plots showing the density distributions of the input features. The inset shows a schematic diagram of the circular failure model.
Figure 1. Ridgeline plots showing the density distributions of the input features. The inset shows a schematic diagram of the circular failure model.
Sensors 22 09166 g001
Figure 2. Scatter matrix showing the collected dataset. The panels in the upper right show the data points, and the lower left half of the figure shows the correlation coefficients between the features and the slope stability status.
Figure 2. Scatter matrix showing the collected dataset. The panels in the upper right show the data points, and the lower left half of the figure shows the correlation coefficients between the features and the slope stability status.
Sensors 22 09166 g002
Figure 3. 3D PCA score plot of the input features.
Figure 3. 3D PCA score plot of the input features.
Sensors 22 09166 g003
Figure 4. Schematic diagram showing the workflow of AutoML.
Figure 4. Schematic diagram showing the workflow of AutoML.
Sensors 22 09166 g004
Figure 5. Schematic diagram showing hyperparameter tuning based on the k-fold CV and random grid search methods.
Figure 5. Schematic diagram showing hyperparameter tuning based on the k-fold CV and random grid search methods.
Sensors 22 09166 g005
Figure 6. Flowchart of the AutoML-based slope stability classification.
Figure 6. Flowchart of the AutoML-based slope stability classification.
Sensors 22 09166 g006
Figure 7. ROC curve of the top-performing model (H2O1) from AutoML.
Figure 7. ROC curve of the top-performing model (H2O1) from AutoML.
Sensors 22 09166 g007
Figure 8. Cumulative gain and lift charts for the top-performing model (H2O1) based on testing data.
Figure 8. Cumulative gain and lift charts for the top-performing model (H2O1) based on testing data.
Sensors 22 09166 g008
Figure 9. Correlation between the NPV and PPV values of the classification models based on the testing dataset.
Figure 9. Correlation between the NPV and PPV values of the classification models based on the testing dataset.
Sensors 22 09166 g009
Figure 10. Partial dependence plots of the input features in the top-performing model (H2O1) for the classification of slope stability. (a) Unit weight and pore pressure ratio, (b) cohesion and friction angle, (c) slope angle and slope height, (d) unit weight, (e) cohesion, (f) friction angle, (g) slope angle, (h) slope height, and (i) pore pressure ratio.
Figure 10. Partial dependence plots of the input features in the top-performing model (H2O1) for the classification of slope stability. (a) Unit weight and pore pressure ratio, (b) cohesion and friction angle, (c) slope angle and slope height, (d) unit weight, (e) cohesion, (f) friction angle, (g) slope angle, (h) slope height, and (i) pore pressure ratio.
Sensors 22 09166 g010
Figure 11. ACADS reference slope example EX1 (Unit: m).
Figure 11. ACADS reference slope example EX1 (Unit: m).
Sensors 22 09166 g011
Table 1. Summary of the slope stability assessment of circular mode failure using MLs.
Table 1. Summary of the slope stability assessment of circular mode failure using MLs.
ReferenceData Size
(Stable/Failure)
Input FeaturesData PreprocessingML Algorithm SelectionHyperparameter TuningFinal Model and Performance
[21]82
(38/44)
γ , c , φ , β , H ,   r u /BPTrial and error
GA
GA-optimized BP was selected as the final model, with an AUC of 0.455 for the testing dataset.
[22]32
(14/18)
γ , c , φ , β , H ,   r u /ANNTrial and errorThe ANN achieved an ACC of 1.00 for the testing dataset in two cases.
[23]46
(17/29)
γ , c , φ , β , H ,   r u Data normalizationSVMPSOPSO-SVM achieved an ACC of 0.8125 for the testing dataset.
[24]168
(84/84)
γ , c , φ , β , H ,   r u Data normalizationLSSVMFAThe FA-optimized LSSVM achieved an AUC of 0.86 for the testing dataset.
[25]168
(84/84)
γ , c , φ , β , H ,   r u Data normalizationRBF
LSSVM
ELM
Orthogonal least squares
GA
Trial and error
The GA-ELM was selected as the final model, with an AUC of 0.8706 for the testing dataset.
[26]82
(49/33)
γ , c , φ , β , H ,   r u /NB/NB achieved an ACC of 0.846 for the testing dataset.
[27]107
(48/59)
γ , c , φ , β , H ,   r u /RF
SVM
Bayes
GSA
Ten-fold CVThe GSA was selected as the final model, with an AUC of 0.889 for the testing dataset.
[17]168
(84/84)
γ , c , φ , β , H ,   r u Data normalizationGP
QDA
SVM
ADB-DT
ANN
KNN
Classifier ensemble
GAThe optimum ensemble classifier was selected as the final model, with an AUC of 0.943 for the testing dataset.
[16]148
(78/70)
γ , c , φ , β , H ,   r u Data normalizationLR
DT
RF
GBM
SVM
BP
FA
GS
The FA-optimized SVM was selected as the final model, with an AUC of 0.967 for the testing dataset.
[18]221
(115/106)
γ , c , φ , β , H ,   r u Data normalizationANN
SVM
RF
GBM
Five-fold CVThe GBM-based model was selected as the final model, with an AUC of 0.900 for the testing dataset.
[28]87
(42/45)
γ , c , φ , β , H ,   r u /J48Trial and errorJ48 achieved an ACC of 0.9231 for the testing dataset.
[13]257
(123/134)
γ , c , φ , β , H ,   r u /XGB
RF
LR
SVM
BC
LDA
KNN
DT
MLP
GNB
XRT
Stacked ensemble
ABC
PSO
The stacked ensemble was selected as the final model, with an AUC of 0.904 for the testing dataset.
[11]153
(83/70)
γ , c , φ , β , H ,   r u Data normalization and outlier removingKNN
SVM
SGD
GP
QDA
GNB
DT
ANN
Bagging ensemble
Heterogeneous ensemble
GSAn ensemble classifier based on extreme gradient boosting was selected as the final model, with an AUC of 0.914 for the testing dataset.
[29]19
(13/6)
γ , c , φ , β , H ,   r u Data normalizationK-means clusterHSK-means clustering optimized by HS achieved an ACC of 0.89 for all datasets.
[12]444
(224/220)
γ , c , φ , β , H ,   r u Data normalizationAdaBoost
GBM
Bagging
XRT
RF
HGB
Voting
Stacked
GSA stacked model was selected as the final model, with an AUC of 0.9452 for the testing dataset.
[30]422
(226/196)
γ , c , φ , β , H ,   r u Data normalizationMDMSEGSThe MDMSE model achieved an AUC of 0.8810 for the testing dataset.
Note: Abbreviations in this table are explained in Abbreviations.
Table 2. Summary of the input feature statistics.
Table 2. Summary of the input feature statistics.
Input FeatureNotationRangeMedianMeanStd.
Unit weight (kN/m3) γ 0.492–33.16020.95920.1857.044
Cohesion (kPa) c 0–300.0019.69025.60031.036
Friction angle (°) φ 0–49.50028.80025.30812.331
Slope angle (°) β 0.302–65.00034.98032.60513.711
Slope height (m) H 0.018–565.00045.80090.289120.140
Pore pressure ratio r u 0–1.0000.2500.2540.260
Table 3. Confusion matrix and performance measures for slope stability assessment.
Table 3. Confusion matrix and performance measures for slope stability assessment.
PredictedStableFailure
Actual
StableTrue positive (TP)False negative (FN)Sensitivity: S E N = T P T P + F N
(The ideal value is 1, whereas the worst is zero.)
FailureFalse positive (FP)True negative (TN)Specificity S P E = T N F P + T N
(The ideal value is 1, whereas the worst is zero.)
Positive predictive value (PPV) P P V = T P T P + F P
(The ideal value is 1, whereas the worst is zero.)
Negative predictive value (NPV) N P V = T N F N + T N
(The ideal value is 1, whereas the worst is zero.)
Accuracy A C C = T P + T N T P + F N + F P + T N
(The ideal value is 1, whereas the worst is zero.)
Matthews correlation coefficient M C C = T P T N F P F N ( T P + F P ) ( T P + F N ) ( T N + F P ) ( T N + F N )
(The ideal value is 1.)
Table 4. The hyperparameter search space for GS optimization for AutoML-based slope stability classification.
Table 4. The hyperparameter search space for GS optimization for AutoML-based slope stability classification.
AlgorithmParameterSearchable values
DNNAdaptive learning rate time smoothing factor (epsilon) { 10 6 , 10 7 , 10 8 , 10 9 }
Hidden layer size (hidden)Grid search 1: {20}, {50}, {100}
Grid search 2: {20, 20}, {50, 50}, {100, 100}
Grid search 3: {20, 20, 20}, {50, 50, 50}, {100, 100, 100}
Hidden_dropout_ratioGrid search 1: {0.1}, {0.2}, {0.3}, {0.4}, {0.5}
Grid search 2: {0.1, 0.1}, {0.2, 0.2}, {0.3, 0.3}, {0.4, 0.4}, {0.5, 0.5}
Grid search 3: {0.1, 0.1, 0.1}, {0.2, 0.2, 0.2} {0.3, 0.3, 0.3}, {0.4, 0.4, 0.4}, {0.5, 0.5, 0.5}
Input_dropout_ratio{0.0, 0.05, 0.1, 0.15, 0.2}
Adaptive learning rate time decay factor (rho){0.9, 0.95, 0.99}
GLMRegularization distribution between L1 and L2 (alpha){0.0, 0.2, 0.4, 0.6, 0.8, 1.0}
GBMColumn sampling rate (col_sample_rate){0.4, 0.7, 1.0}
Column sample rate per tree (col_sample_rate_per_tree){0.4, 0.7, 1.0}
Maximum tree depth (max_depth){3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17}
Minimum number of observations for a leaf (min_rows){1, 5, 10, 15, 30, 100}
Minimum relative improvement in squared error reduction (min_split_improvement) { 10 4 , 10 5 }
Row sampling rate (sample_rate){0.50, 0.60, 0.70, 0.80, 0.90, 1.00}
Table 5. Comparison of the performance of the selected top-five models from AutoML in slope stability assessments of circular mode failure based on the selected test data.
Table 5. Comparison of the performance of the selected top-five models from AutoML in slope stability assessments of circular mode failure based on the selected test data.
Model IDModel TypeHyperparametersAUCConfusion MatrixPerformance Measures
H2O1Stacked ensembleThe base models are the top-1000 trained models, and the metalearner is a GLM. A logit transformation is used for the predicted probabilities.0.970 PredictedStableFailureSEN = 0.968
SPE = 0.841
PPV = 0.857
NPV = 0.964
ACC = 0.904
MCC = 0.815
Actual
Stable602
Failure1053
H2O2GBMscore_tree_interval = 5; ntrees = 105; max_depth = 7; stopping_metric = logloss; stopping_tolerance = 0.045; learn_rate = 0.1; learn_rate_annealing = 1; sample_rate = 1; col_sample_rate = 0.4; col_sample_rate_change_per_level = 1; col_sample_rate_per_tree = 0.70.968 PredictedStableFailureSEN = 0.903
SPE = 0.937
PPV = 0.933
NPV = 0.908
ACC = 0.920
MCC = 0.840
Actual
Stable566
Failure459
H2O3DRFNtrees = 50; max_depth = 200.963 PredictedStableFailureSEN = 0.839
SPE = 0.968
PPV = 0.963
NPV = 0.859
ACC = 0.904
MCC = 0.815
Actual
Stable5210
Failure261
H2O4XRscore_tree_interval = 5; max_after_balance_size = 5; max_confusion_matrix_size = 20; ntrees = 50; max_depth = 20; stopping_metric = logloss; stopping_tolerance = 0.045; sample_rate = 0.6320.963 PredictedStableFailureSEN = 0.871
SPE = 0.937
PPV = 0.931
NPV = 0.881
ACC = 0.904
MCC = 0.810
Actual
Stable548
Failure459
H2O5GBMscore_tree_interval = 5; ntrees = 97; max_depth = 7; stopping_metric = logloss; stopping_tolerance = 0.045; learn_rate = 0.1; learn_rate_annealing = 1; sample_rate = 0.8; col_sample_rate = 0.8; col_sample_rate_change_per_level = 1; col_sample_rate_per_tree = 0.80.960 PredictedStableFailureSEN = 0.968
SPE = 0.810
PPV = 0.833
NPV = 0.962
ACC = 0.888
MCC = 0.786
Actual
Stable602
Failure1251
Table 6. Comparison of different ML models for slope stability assessments of circular mode failure.
Table 6. Comparison of different ML models for slope stability assessments of circular mode failure.
ReferenceModelAUCACCReferenceModelAUCACC
[24]BDA
LM-ANN
SCG-ANN
RMV
SVM
RBP-ANN
MO-LSSVM
0.75
0.79
0.81
0.83
0.83
0.84
0.86
/[25]RBF
LSSVM
ELM
/0.81
0.8706
0.8400
[17]GA-GP
GA-QDA
GA-SVM
GA-ANN
GA-ADB-DT
GA-KNN
GA-OEC
0.893
0.798
0.908
0.877
0.936
0.908
0.943
/[27]RF
SVM
NB
GSA
0.833
0.556
0.667 0.886
/
[16]FA-LR
FA-DT
FA-MLP
FA-RF
FA-GBM
FA-SVM
0.822
0.854
0.864
0.957
0.962
0.967
/[18]ANN
SVM
RF
GBM
0.888
0.889
0.897
0.900
/
[13]XGB
RF
LR
SVM
BC
LDA
KNN
DT
MLP
GNB
XRT
Stacked ensemble
0.77
0.79
0.83
081
0.71
0.80
0.78
0.72
0.83
0.7.
0.74
0.90
/[11]KNN
SVM
SGD
GP
QDA
GNB
DT
ANN
B-KNN
B-SVM
B-ANN
RF
AB
GBM
XGB
Heterogeneous ensemble
0.931
0.796
0.688
0.933
0.817
0.775
0.829
0.817
0.938
0.892
0.933
0.904
0.910
0.929
0.950
0.950
0.839
0.806
0.710
0.839
0.774
0.806
0.774
0.806
0.871
0.871
0.839
0.806
0.839
0.774
0.903
0.806
[12]GBM
Bagging
Adaboost
XRT
RF
HGB
Voting
Stacked
0.9199
0.9291
0.9199
0.9519
0.9268
0.8970
0.9588
0.9382
/[30]SVM
DT
LR
NB
Boosting
MDMSE
/0.8452
0.8333
<0.75
<0.75
0.8214
0.8810
Current studyH2O1 (Stacked Ensemble_Best1000)0.9700.904
Note: The best results are shown in bold italics. The results for relatively small sample sets (less than 100) are not presented or compared.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Ma, J.; Jiang, S.; Liu, Z.; Ren, Z.; Lei, D.; Tan, C.; Guo, H. Machine Learning Models for Slope Stability Classification of Circular Mode Failure: An Updated Database and Automated Machine Learning (AutoML) Approach. Sensors 2022, 22, 9166. https://doi.org/10.3390/s22239166

AMA Style

Ma J, Jiang S, Liu Z, Ren Z, Lei D, Tan C, Guo H. Machine Learning Models for Slope Stability Classification of Circular Mode Failure: An Updated Database and Automated Machine Learning (AutoML) Approach. Sensors. 2022; 22(23):9166. https://doi.org/10.3390/s22239166

Chicago/Turabian Style

Ma, Junwei, Sheng Jiang, Zhiyang Liu, Zhiyuan Ren, Dongze Lei, Chunhai Tan, and Haixiang Guo. 2022. "Machine Learning Models for Slope Stability Classification of Circular Mode Failure: An Updated Database and Automated Machine Learning (AutoML) Approach" Sensors 22, no. 23: 9166. https://doi.org/10.3390/s22239166

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop