Article

Developing Hybrid DMO-XGBoost and DMO-RF Models for Estimating the Elastic Modulus of Rock

1 School of Resources and Safety Engineering, Central South University, Changsha 410083, China
2 Changsha Institute of Mining Research Co., Ltd., Changsha 410012, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(18), 3886; https://doi.org/10.3390/math11183886
Submission received: 17 August 2023 / Revised: 1 September 2023 / Accepted: 11 September 2023 / Published: 12 September 2023
(This article belongs to the Section Engineering Mathematics)

Abstract
Accurate estimation of the elastic modulus (E) of rock is critical for the design of geotechnical projects such as mining, slopes, and tunnels. However, the determination of rock mechanical parameters usually involves high budget and time requirements. To address this problem, numerous researchers have developed machine learning models to estimate the E of rock. In this study, two novel hybrid ensemble learning models were developed to estimate the E of rock by optimizing the extreme gradient boosting (XGBoost) and random forest (RF) algorithms through the dwarf mongoose optimization (DMO) approach. Firstly, 90 rock samples with porosity, dry density, P-wave velocity, slake durability, and water absorption as input indicators were collected. Subsequently, the hyperparameters of XGBoost and RF were tuned by DMO. Based on the optimal hyperparameter configuration, two novel hybrid ensemble learning models were constructed using the training set (80% of the data). Finally, the performance of the developed models was evaluated by the coefficient of determination (R2 score), root mean squared error (RMSE), mean absolute error (MAE), and variance accounted for (VAF) on the test set (20% of the data). The results show that the DMO-RF model achieved the best comprehensive performance with an R2 score of 0.967, RMSE of 0.541, MAE of 0.447, and VAF of 0.969 on the test set. The dry density and slake durability were more influential indicators than others. Moreover, the convergence curves suggested that the DMO-RF model can reduce the generalization error and avoid overfitting. The developed models can be regarded as viable and useful tools in estimating the E of rock.

1. Introduction

Elastic modulus (E) is one of the key parameters characterizing the properties of rock, and the accurate estimation of the E of rock is essential for the safe construction of geotechnical engineering [1,2,3,4,5]. The E of rock can be determined by in situ and laboratory tests, and the test process should comply with a series of operating specifications. However, it is sometimes difficult to extract high-quality core specimens from fragile, weak, and stratified rock masses. Additionally, the actual testing process is often limited by time and cost constraints [6,7,8]. Therefore, it is essential to develop economical, non-destructive, and indirect methods to estimate the E of rock.
In the early stage of the development of rock mechanics, several scholars attempted to estimate the E of rock from some easily accessible rock indexes, such as the point-load index, slake durability index, Schmidt hammer rebound number, and P-wave velocity [9,10,11,12]. As a result, several empirical formulas between E and the above indexes were established. For example, Sachpazis [13] calculated a correlation coefficient of 0.78 between E and the Schmidt hammer rebound number in carbonate rocks. Lashkaripour [14] analyzed the correlation between the porosity and E of mudrocks. Yasar and Erdogan [15] determined the statistical relations between the hardness and E of rocks. Moradian and Behnia [16] derived fit equations for E and the P-wave velocity of sedimentary rocks by using an ultrasonic test. Armaghani et al. [17] proposed several simple and multiple regression equations to calculate the E of granite rocks. Shan and Di [18] developed formulas to calculate the E of multiple-jointed basalt rocks by laboratory tests. Karakus et al. [19] adopted a multiple regression model to analyze the elastic properties of intact rock. However, these empirical formulas are only applicable to the specific rock data from which they were derived and therefore lack generality.
On the other hand, in recent years, machine learning (ML) approaches have garnered considerable interest in geotechnical engineering because of their powerful nonlinear data processing capabilities [20,21,22]. With the accumulation of data, some researchers have attempted to develop ML models to estimate the E of rock. For instance, Armaghani et al. [17] proposed an adaptive neuro-fuzzy inference system (ANFIS) for predicting the unconfined compressive strength (UCS) and E of granite rocks. Cao et al. [23] hybridized an extreme gradient boosting machine (XGBoost) with the firefly algorithm to estimate the E and UCS of granite rock. Meng and Wu [24] combined experimental, numerical, and random forest (RF) methods to predict the UCS and E of frozen fractured rock. Abdi et al. [25] investigated the feasibility of tree-based techniques, including RF, Adaboost, XGBoost, and Catboost models, in predicting the E of weak rock. Acar and Kaya [26] adopted the least square support vector machine (LS-SVM) model to predict the E of weak rock. Several scholars [27,28,29] adopted different artificial neural network (ANN) models to predict the E of different types of rocks. More related works on the estimation of E using ML methods, together with their details, are tabulated in Table 1.
In comparison to other methods, ML approaches can yield dependable results by establishing the nonlinear relationship between input and output variables [37,38], which makes them promising for estimating the E of rock. Recently, the XGBoost and RF algorithms have shown great potential to improve prediction accuracy and have been successfully applied in many fields, such as electricity consumption forecasting [39,40], infectious disease prediction [41,42], mining maximum subsidence prediction [43,44], and heavy metal contamination prediction [45,46]. XGBoost and RF are two efficient ensemble methods that combine multiple homogeneous weak learners in certain ways to reduce overfitting. However, ML algorithms require proper hyperparameter settings to achieve high accuracy. As a novel heuristic optimization algorithm, the dwarf mongoose optimization (DMO) algorithm has proven to have a strong global search capability and high search efficiency in solving optimization problems [47]. Therefore, it may be an efficient approach for optimizing the hyperparameters of XGBoost and RF models. To date, however, the DMO algorithm has not been integrated with the XGBoost and RF models to estimate the E of rock.
This study aims to investigate the feasibility of XGBoost and RF algorithms optimized by the DMO approach to estimate the E of rock. First, a dataset comprising 90 specimens with five indicators was compiled from available rock mechanics data. Next, the DMO-XGBoost and DMO-RF models were developed. The proposed models can be used to estimate the E of rock easily by avoiding the complexity of core specimen preparation in the laboratory.

2. Methodology

2.1. XGBoost Algorithm

XGBoost is a boosting ensemble algorithm with a decision tree (DT) as the base learner that improves the classical gradient boosting decision tree (GBDT) algorithm [48]. It is designed using an efficient, flexible, and portable distributed gradient boosting framework. Compared with GBDT, the calculation speed is faster while retaining excellent performance. The flowchart of this algorithm is indicated in Figure 1. The core principle of XGBoost is to build DTs one after another and train the subsequent DT with the residuals of the previous ones. The values computed by all the constructed DTs are integrated to achieve a better result. It has been regarded as an advanced evaluator with ultra-high performance in both classification and regression [21]. The XGBoost algorithm is explained as follows:
$$\hat{y}_i^{(t)} = \sum_{k=1}^{t} f_k(x_i) = \hat{y}_i^{(t-1)} + f_t(x_i)$$
where $\hat{y}_i^{(t)}$ denotes the final model, $\hat{y}_i^{(t-1)}$ is the previous model, $x_i$ represents the features corresponding to sample $i$, $f_t(x_i)$ is the newly generated DT model, and $t$ is the total number of DT models.
In order to avoid overfitting and improve iteration efficiency, XGBoost augments the conventional ensemble objective function with a measure of model complexity [49]. Accordingly, the objective function of XGBoost can be given by:
$$\mathrm{Obj}^{(t)} = \sum_{i=1}^{n} l\left(y_i, \hat{y}_i^{(t)}\right) + \sum_{i=1}^{t} \Omega(f_i)$$
where $y_i$ is the actual value, $l\left(y_i, \hat{y}_i^{(t)}\right)$ is a convex loss function describing how well the model fits the training data, and $\Omega(f_i)$ is the penalty term for regularization to avoid overfitting.
Gradient statistics are applied to the loss function through a second-order Taylor expansion, which effectively eliminates the constant terms. The simplified objective function is obtained as follows:
$$\mathrm{Obj}^{(t)} \simeq \sum_{i=1}^{n}\left[g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i)\right] + \Omega(f_t)$$
where $g_i = \partial_{\hat{y}_i^{(t-1)}} l\left(y_i, \hat{y}_i^{(t-1)}\right)$ and $h_i = \partial^2_{\hat{y}_i^{(t-1)}} l\left(y_i, \hat{y}_i^{(t-1)}\right)$ are the first- and second-order gradient statistics of the loss function, respectively. The penalty term $\Omega(f_t)$ is evaluated by:
$$\Omega(f_t) = \gamma T + \frac{1}{2}\alpha \left\|w\right\| + \frac{1}{2}\lambda \left\|w\right\|^2$$
where α and λ are the L1 and L2 regularization parameters, whose default values are 0 and 1, respectively; T is the number of leaf nodes, controlled by the parameter γ; and w denotes the corresponding weights of the leaf nodes.
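For illustration, the following minimal sketch shows how such a regularized XGBoost regressor can be configured in Python, assuming the open-source xgboost package. The hyperparameter values mirror the DMO-tuned configuration reported later in Table 4, and the synthetic data is a placeholder rather than the rock dataset.

```python
# Minimal sketch: an XGBoost regressor with the regularized objective above.
# Hyperparameter values mirror the DMO-tuned configuration in Table 4;
# the synthetic data is a placeholder, not the published rock dataset.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
X = rng.random((90, 5))                              # 90 samples, 5 indicators (A1-A5)
y = 1.0 + 12.0 * X[:, 1] + rng.normal(0, 0.5, 90)    # placeholder target (E, GPa)

model = XGBRegressor(
    n_estimators=12,     # number of boosted trees t
    max_depth=5,         # depth limit of each tree
    learning_rate=1.0,   # shrinkage applied to each newly added tree
    reg_alpha=1.0,       # L1 penalty coefficient (alpha above)
    reg_lambda=0.636,    # L2 penalty coefficient (lambda above)
    objective="reg:squarederror",
)
model.fit(X, y)
print(model.predict(X[:3]))
```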

2.2. Random Forest Algorithm

RF is a bagging ensemble algorithm with DT as the base learner [50]. Its principle diagram is shown in Figure 2. Different DTs are constructed to set up a forest by extracting sample and feature subsets from the original data. This stochastic process ensures the independence of each DT, which improves the generalization capacity and avoids overfitting [51]. Moreover, all constructed DTs grow fully without pruning, which makes the construction of each DT fast and improves the computational efficiency of RF. By integrating the results generated by all constructed DTs, the voting (for classification problems) or averaging (for regression problems) approach is used to determine the final results [21]. Since the output variable in this paper is continuous, the RF algorithm is expressed as follows:
$$y(x) = \frac{1}{T}\sum_{i=1}^{T} f_i(x)$$
where $y(x)$ represents the predicted value of the RF model, $f_i(x)$ is the predicted value of the $i$-th DT, and $T$ represents the number of constructed DTs.
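The averaging formula above can be verified directly, as in the following minimal sketch (assuming scikit-learn and placeholder data): the forest prediction coincides with the mean of its individual tree predictions.

```python
# Minimal sketch of the averaging rule above: a random forest prediction is
# the mean of the predictions of its T decision trees. Placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((90, 5))
y = X @ np.array([0.5, 3.0, 1.0, 2.0, 0.2]) + rng.normal(0, 0.1, 90)

forest = RandomForestRegressor(n_estimators=10, max_depth=4, random_state=0)
forest.fit(X, y)

# Average the individual tree outputs and compare with the forest output.
tree_mean = np.mean([tree.predict(X[:5]) for tree in forest.estimators_], axis=0)
print(np.allclose(tree_mean, forest.predict(X[:5])))  # True: forest = tree average
```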

2.3. Dwarf Mongoose Optimization

The DMO algorithm is a swarm intelligence optimization algorithm that simulates the seminomadic life of dwarf mongooses in nature. Dwarf mongooses are known for foraging and scouting as a unit, and the DMO algorithm finds the optimal solution by simulating their social behavior. The DMO algorithm divides the swarm into three groups: the alpha group, the scout group, and the babysitter group [47,52,53].
Firstly, the alpha group leads the scout group to forage and scout for a new sleeping mound, leaving the babysitter group to take care of the young dwarf mongooses in the old sleeping mound. Secondly, when a food source is not found, the alpha and scout groups return to the old sleeping mound to exchange members with the babysitter group. Thirdly, the alpha group continues to go out foraging and scouting for new sleeping mounds. The family returns intermittently to exchange babysitters and repeats the cycle until a food source is found.
The process of DMO for hyperparameter tuning can be broken down into the following steps [47]:
(1) Initialization: The DMO algorithm starts by initializing the population of solutions as follows:
$$X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d-1} & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d-1} & x_{2,d} \\ \vdots & \vdots & x_{i,j} & \vdots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,d-1} & x_{n,d} \end{bmatrix}$$
where X is the set of current candidate populations, n denotes the population size, and d is the dimension of the problem. These solutions represent different sets of hyperparameters for the XGBoost and RF models.
(2) Objective function: The DMO algorithm requires an objective function that quantifies the performance of each solution. In this study, the average root mean squared error (RMSE) value of five-fold cross-validation (CV) was employed as the fitness function (a code sketch of this fitness evaluation is given after the step list below). The foraging behavior of the dwarf mongoose is mimicked by dividing the population into alpha, scout, and babysitter groups. The probability of each individual becoming the alpha is computed by:
$$\alpha = \frac{fit_i}{\sum_{i=1}^{n} fit_i}$$
where $fit_i$ denotes the fitness value of the $i$-th individual.
(3) Behavior of dwarf mongooses: During the DMO optimization process, the dwarf mongooses work together to find food and a new sleeping mound. The movement of each dwarf mongoose is determined by the leader of its subgroup and is based on the position of the alpha and the current position of the solution, which is calculated as follows:
$$M = \sum_{i=1}^{n} \frac{X_i \times sm_i}{X_i}$$
where $X_i$ is the position of the alpha group, and $sm_i$ denotes the current position of the solution.
(4) Cooperative search: DMO encourages a cooperative search among the solutions in the population. This means that solutions collaborate and share information to improve the overall performance of the population. In the context of hyperparameter optimization, it implies that solutions with promising hyperparameters influence or guide other solutions towards better hyperparameter configurations.
(5) Communication and information sharing: In DMO, solutions share information about their performance and hyperparameters with others. This communication helps the population converge toward better solutions.
(6) Update hyperparameters: Based on the cooperative search and shared information, the hyperparameters of each solution are updated.
(7) Fitness evaluation: After updating the hyperparameters, the fitness of each solution is re-evaluated using the objective function. Solutions that perform better are favored and contribute more to the cooperative search process.
(8) Termination: The algorithm terminates when a stopping criterion is met.
(9) Final solution selection: Once the optimization process concludes, the solution with the best-performing hyperparameters, according to the objective function, is selected as the final model configuration.
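As a concrete illustration of step (2), the sketch below shows one way to write the fitness function (the average five-fold CV RMSE of a candidate hyperparameter set) so that it can be handed to a population-based optimizer such as DMO. The position-to-hyperparameter decoding is an illustrative assumption, not the authors' exact mapping, and the data is synthetic.

```python
# Sketch of the DMO fitness evaluation (step 2): each candidate position vector
# decodes to an RF hyperparameter set, scored by the mean five-fold CV RMSE.
# The decoding scheme below is illustrative, not the authors' exact mapping.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((90, 5))
y = X @ np.array([0.5, 3.0, 1.0, 2.0, 0.2]) + rng.normal(0, 0.1, 90)

def fitness(position):
    """Lower is better: average RMSE over five-fold cross-validation."""
    n_estimators, max_depth, min_samples_split = position
    model = RandomForestRegressor(
        n_estimators=int(n_estimators),
        max_depth=int(max_depth),
        min_samples_split=int(min_samples_split),
        random_state=0,
    )
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

print(fitness([10, 4, 5]))  # evaluate one candidate from the Table 4 search space
```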

2.4. Model Evaluation Metrics

In this study, four metrics were employed to assess the performance of the proposed models, including the coefficient of determination (R2 score), the RMSE, the mean absolute error (MAE), and the variance accounted for (VAF). The R2 score and VAF are maximization performance metrics, while RMSE and MAE are minimization performance metrics [54].
The R2 score represents the proportion of the variance in the actual values of the target variable that is explained by the model, which can be calculated by:
$$R^2 = 1 - \frac{\sum_{i}\left(y_i - \hat{y}_i\right)^2}{\sum_{i}\left(y_i - \bar{y}\right)^2}$$
where $y_i$ indicates the actual value, $\hat{y}_i$ indicates the predicted value of the model, and $\bar{y}$ represents the average of the actual values.
RMSE represents the standard deviation of the fitted error between the predicted value and the actual value, which can be calculated by:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}$$
where n indicates the total number of samples.
VAF characterizes the prediction performance by comparing the standard deviation of the fitted error with the standard deviation of the actual value, which is defined as:
$$\mathrm{VAF} = \left[1 - \frac{\mathrm{var}\left(y_i - \hat{y}_i\right)}{\mathrm{var}\left(y_i\right)}\right] \times 100$$
where var indicates the variance.
MAE represents the average absolute error between the predicted and actual values, which can be calculated by:
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$$
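For reference, the following sketch computes the four metrics; R2, RMSE, and MAE are available in scikit-learn, while VAF is implemented directly from the definition above. The numbers are illustrative only.

```python
# Sketch of the four evaluation metrics. R2, RMSE, and MAE come from
# scikit-learn; VAF is implemented from its definition above. Note that the
# result tables in this paper report VAF as a fraction (without the factor 100).
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

def vaf(y_true, y_pred):
    """Variance accounted for, in percent."""
    return (1.0 - np.var(y_true - y_pred) / np.var(y_true)) * 100.0

y_true = np.array([1.12, 3.59, 4.50, 13.23])   # illustrative E values (GPa)
y_pred = np.array([1.30, 3.40, 4.80, 12.90])

print("R2  :", r2_score(y_true, y_pred))
print("RMSE:", mean_squared_error(y_true, y_pred) ** 0.5)
print("MAE :", mean_absolute_error(y_true, y_pred))
print("VAF :", vaf(y_true, y_pred))
```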

2.5. Proposed Approach

In this study, two novel hybrid ensemble learning models were proposed to estimate the E of rock by optimizing the XGBoost and RF models through the DMO algorithm. The structure of the proposed approach is indicated in Figure 3. Firstly, a database including 90 rock samples with five indicators was established. Secondly, 80% of the samples were utilized for training, while the remaining 20% were reserved for testing [55]. Thirdly, to validate the superiority of the proposed hybrid ensemble learning models, a comparison against default XGBoost and RF models was performed. Finally, the R2 score, RMSE, MAE, and VAF were adopted to evaluate the performance of models on the test set.
The XGBoost and RF algorithms were implemented with the Python library scikit-learn [56], and the DMO algorithm was implemented with the Python library MEALPY [57,58]. All experiments were processed on a Windows 10, 64-bit computer with 8 GB of RAM running an Intel® Core™ i7-9700F CPU @ 3.00 GHz × 2.
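For illustration, a minimal end-to-end sketch of this workflow is given below, assuming scikit-learn and synthetic placeholder data in place of the rock dataset; the tuned RF hyperparameter values are taken from Table 4.

```python
# Sketch of the overall workflow in Figure 3: an 80/20 split, a default RF
# baseline, and an RF with the DMO-tuned hyperparameters from Table 4.
# Synthetic data stands in for the 90-sample rock dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.random((90, 5))
y = X @ np.array([0.5, 3.0, 1.0, 2.0, 0.2]) + rng.normal(0, 0.1, 90)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

default_rf = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
tuned_rf = RandomForestRegressor(
    n_estimators=10, max_depth=4, min_samples_split=5, random_state=0
).fit(X_tr, y_tr)

print("default RF test R2:", r2_score(y_te, default_rf.predict(X_te)))
print("tuned RF   test R2:", r2_score(y_te, tuned_rf.predict(X_te)))
```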

3. Data and Variables

According to the work of Abdi et al. [25], four types of rocks, including marl, siltstone, claystone, and limestone, were collected and cored in the laboratory to obtain standard core samples. A total of 90 rock samples were obtained for conducting physical tests and developing E-estimation models. Among the 90 samples, each set of data contains five indicators, including porosity (A1), dry density (A2), P-wave velocity (A3), slake durability (A4), and water absorption (A5). Among them, porosity (A1) is an important factor in determining the strength and deformation behavior of rock, and water absorption (A5) is related to porosity. Dry density (A2) and P-wave velocity (A3) are two common petrophysical properties related to the E of rock. Yagiz et al. [28] found that slake durability (A4) cycles have a significant effect on the prediction of UCS and modulus of elasticity for carbonate rocks. It is important that these five indicators can be conveniently measured in the laboratory, and therefore, they were selected as input indicators. The detailed descriptions of these indicators are displayed in Table 2, and the statistics of the dataset are illustrated in Table 3.
The violin plots of the five indicators and E are shown in Figure 4. These plots are a combination of box plots and density plots, offering insights into the overall distribution of the dataset. In each violin plot, the white dot at the center represents the median of the samples, the upper and lower extents of the thick black line denote the third and first quartiles of the samples, the top and bottom of the thin black line indicate the upper and lower adjacent values, and the black dots are outliers. From Figure 4, it can be seen that the distribution of all the data is relatively balanced, but there are some outliers, which indicates the complexity of estimating the E of rock.
To visualize the distribution of this dataset, the correlation pair plots and heatmap of the Pearson correlation coefficient between indicators and E are displayed in Figure 5. The results show that A2, A3, and A4 are positively correlated with E, while others are negatively correlated. Among them, the strongest correlation was found between A2 and E with an absolute value of the correlation coefficient of 0.78, and the weakest correlation was found between A1 and E with an absolute value of the correlation coefficient of 0.28.
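The correlation analysis underlying Figure 5 can be reproduced along the following lines, assuming the dataset is held in a pandas DataFrame with columns A1 to A5 and E; the values used here are placeholders, not the published measurements.

```python
# Sketch of the Pearson correlation analysis behind Figure 5, using pandas;
# the column values here are placeholders, not the published measurements.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame(rng.random((90, 6)), columns=["A1", "A2", "A3", "A4", "A5", "E"])

corr = df.corr(method="pearson")           # full 6x6 correlation matrix
print(corr["E"].drop("E").sort_values())   # correlation of each indicator with E
```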

4. Results and Analysis

4.1. Results of Hyperparameters Tuning

In contrast to other optimization methods, swarm intelligence algorithms need to determine the population size and the number of iterations, which directly affect the running time of the models and the ability to tune the optimal hyperparameters [59]. In this study, each model performed 100 iterations at population sizes of 10, 20, 30, 40, 50, 75, and 100, and the average RMSE values of five-fold CV were employed as the fitness function. The fitness curves and the running time of all models are indicated in Figure 6. With an increase in the number of iterations, the average fitness value gradually reaches a stable point. The minimum fitness value corresponds to the optimal population size of the models. However, an excessively large population size will significantly prolong the running time, which is not conducive to the practical application of these models in engineering [60,61]. Therefore, the best trade-off between performance and efficiency was achieved when the population sizes of the DMO-RF and DMO-XGBoost models were 75 and 20, respectively. After tuning the population size, the hyperparameters of the RF and XGBoost models were tuned based on the optimization algorithm. The scope, interval values, and ultimate optimization results of the hyperparameter values are presented in Table 4.
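This trade-off can be illustrated as in the sketch below, where plain random sampling stands in for DMO: each individual in a population costs one five-fold CV evaluation, so the runtime grows roughly linearly with the population size.

```python
# Sketch of the population-size trade-off: a larger population explores more
# candidates per iteration but multiplies the runtime, since every individual
# costs one five-fold CV evaluation. Random sampling stands in for DMO here.
import time
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.random((90, 5))
y = X @ np.array([0.5, 3.0, 1.0, 2.0, 0.2]) + rng.normal(0, 0.1, 90)

def cv_rmse(n_estimators, max_depth):
    model = RandomForestRegressor(n_estimators=n_estimators,
                                  max_depth=max_depth, random_state=0)
    return -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()

for pop_size in (10, 20, 50):
    start = time.time()
    candidates = [(int(rng.integers(10, 300)), int(rng.integers(1, 20)))
                  for _ in range(pop_size)]
    best = min(cv_rmse(n, d) for n, d in candidates)
    print(f"pop={pop_size:3d}  best RMSE={best:.3f}  time={time.time()-start:.1f}s")
```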

4.2. Model Comparison and Evaluation

To further compare the estimation performance of the two proposed hybrid ensemble learning models with the default XGBoost and RF models, the scatter plots of the four models in the training and testing phases are presented in Figure 7. The vertical and horizontal axes represent the predicted and actual values of E, respectively. The diagonal dashed line is the ideal regression line, indicating that the predicted and actual values are equal. Models with more data points lying on this line exhibit higher predictive performance. The solid blue lines are the 10% boundaries, which were set to better observe the distribution of points and compare models. The green and red scatters represent the training and test data points, respectively. At the same time, the values of the evaluation metrics (R2, RMSE, MAE, and VAF) of each model in the training and testing phases are recorded in Table 5. Some observations can be obtained from Figure 7 and Table 5. First, in the training phase, the XGBoost and DMO-XGBoost models performed better than the RF and DMO-RF models, with no green scatters falling outside the 10% boundaries, and the default XGBoost model performed the best in all evaluation metrics. Second, in contrast to the training phase, the predictive performance of each model decreased during the testing phase, but the DMO approach improved the performance of the default XGBoost and RF models in the testing phase. Third, in the testing phase, the DMO-RF and DMO-XGBoost models had fewer red scatters outside the 10% boundaries than the default RF and XGBoost models, and the DMO-RF model performed best in all evaluation metrics.

5. Discussion

5.1. Comparison with Different Classical Hybrid Models

To verify the effectiveness of the proposed DMO algorithm, the classical simulated annealing (SA) and Bayesian optimization (BO) algorithms were introduced to optimize the RF and XGBoost models for comparison [62,63,64,65]. The evaluation metrics of the SA-RF, SA-XGBoost, BO-RF, and BO-XGBoost models in the training and testing phases are shown in Table 6.
To compare the developed models more visually, the Taylor diagram was plotted, as seen in Figure 8. A complete Taylor diagram consists of three components: standard deviation (SD, black and green dashed lines), root mean square deviation (RMSD, red dashed line), and correlation coefficient (CC, blue dashed line) [66]. The red pentacle in Figure 8 is the reference point, which represents the actual E values, and the scatters represent the different prediction models. The scatter closest to the reference point (with the lowest RMSD value) corresponds to the optimal prediction model. The SD, RMSD, and CC values of all scatters are listed in Table 7. It can be seen that the DMO-RF model was the best model, lying closest to the reference point with an RMSD of 0.530, followed by the BO-RF, DMO-XGBoost, BO-XGBoost, SA-RF, and SA-XGBoost models.
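For clarity, the three Taylor diagram statistics can be computed as follows (with illustrative values, not the published predictions); the centered RMSD follows from the law-of-cosines relation among SD, CC, and RMSD.

```python
# Sketch of the three Taylor-diagram statistics: standard deviation (SD),
# Pearson correlation (CC), and centered root mean square deviation (RMSD).
import numpy as np

y_true = np.array([1.12, 3.59, 4.50, 6.10, 13.23])   # placeholder E values
y_pred = np.array([1.30, 3.40, 4.80, 5.70, 12.90])

sd_true, sd_pred = np.std(y_true), np.std(y_pred)
cc = np.corrcoef(y_true, y_pred)[0, 1]
# Centered RMSD via the law of cosines on the Taylor diagram:
rmsd = np.sqrt(sd_true**2 + sd_pred**2 - 2 * sd_true * sd_pred * cc)

print(f"SD={sd_pred:.3f}  CC={cc:.3f}  RMSD={rmsd:.3f}")
```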

5.2. Comparison with Other ML Models

To confirm the performance of the developed models, three other ML models were introduced for comparison, namely the SVM, decision tree (DT), and multilayer perceptron neural network (MLPNN) models. The hyperparameters of these models were kept at their default values. In addition, adaptive boosting (Adaboost) and categorical boosting (CatBoost) models were developed by Abdi et al. [25] on the same dataset. Furthermore, to visually compare the seven ML models, a scoring system was implemented to give corresponding scores to each model [67]. The values and scores of all models on the four evaluation metrics are calculated in Table 8. It can be seen that the overall score ranking for all models is DMO-RF (28) > DMO-XGBoost (24) > Adaboost (20) > CatBoost (16) > SVM (12) > MLPNN (8) > DT (4).
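The scoring system works as in the following sketch: for each metric, the m models are ranked from worst to best and assigned scores 1 to m, and the per-metric scores are then summed. The example below reproduces the scores for the test-set R2 column of Table 8.

```python
# Sketch of the Zorlu-style scoring system [67]: for each metric, models are
# ranked from worst to best and assigned scores 1..m, then scores are summed.
# The values reproduce the test-set R2 column of Table 8.
r2 = {"Adaboost": 0.841, "CatBoost": 0.784, "SVM": 0.737, "DT": 0.688,
      "MLPNN": 0.721, "DMO-RF": 0.967, "DMO-XGBoost": 0.935}

ranked = sorted(r2, key=r2.get)                  # ascending: higher R2, higher score
scores = {model: i + 1 for i, model in enumerate(ranked)}
print(scores)  # {'DT': 1, 'MLPNN': 2, ..., 'DMO-RF': 7}, matching Table 8
```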

5.3. Relative Importance of Indicators

In this study, the relative importance of each indicator was determined by combining the DMO-RF and DMO-XGBoost models with the permutation feature importance technique, which is indicated in Figure 9 [68,69]. Based on the DMO-RF model, the rank of the indicators’ importance was A2 > A4 > A3 > A5 > A1. According to the DMO-XGBoost model, the rank of the indicators’ importance was A2 > A4 > A5 > A3 > A1. The results were consistent with the calculations of the feature importance analysis module built into the XGBoost and RF models. Apparently, two indicators were the most important: A2 (dry density) and A4 (slake durability). The results can be used as a reference for developing more reliable E estimation models of rock in the future.
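A sketch of the permutation feature importance computation is given below, using scikit-learn's permutation_importance on placeholder data: each indicator is shuffled in turn, and the resulting drop in model score measures its importance.

```python
# Sketch of the permutation feature importance used for Figure 9: each
# indicator is shuffled in turn and the resulting drop in score is recorded.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(9)
X = rng.random((90, 5))
y = X @ np.array([0.5, 3.0, 1.0, 2.0, 0.2]) + rng.normal(0, 0.1, 90)

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["A1", "A2", "A3", "A4", "A5"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```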

5.4. Overfitting Validation

To further analyze whether the proposed DMO-RF and DMO-XGBoost models suffered from overfitting or underfitting, their convergence curves are plotted in Figure 10. The horizontal axis represents the number of samples, while the vertical axis represents the R2 score obtained through five-fold CV. It can be seen that the DMO-RF and DMO-XGBoost models tended to converge as the sample size increased. However, the DMO-RF model exhibited a smaller gap between the training and test sets than the DMO-XGBoost model, which indicates that the proposed DMO-RF model effectively mitigated the generalization error and exhibited a certain degree of resistance against overfitting.
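Convergence curves of this kind can be generated with scikit-learn's learning_curve utility, as sketched below on placeholder data.

```python
# Sketch of the convergence (learning) curves in Figure 10: five-fold CV R2
# for the training and validation folds as the training sample size grows.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(11)
X = rng.random((90, 5))
y = X @ np.array([0.5, 3.0, 1.0, 2.0, 0.2]) + rng.normal(0, 0.1, 90)

sizes, train_scores, val_scores = learning_curve(
    RandomForestRegressor(random_state=0), X, y,
    train_sizes=np.linspace(0.2, 1.0, 5), cv=5, scoring="r2")

for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"n={n:2d}  train R2={tr:.3f}  CV R2={va:.3f}")
```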

5.5. Limitations

Although the proposed hybrid ensemble approaches obtained excellent results in estimating the E of rock, there are still some limitations to consider:
(1) The dataset used in this study is relatively small, and the specimens in the original dataset were composed of four types of weak rocks, including marl, siltstone, claystone, and limestone. There is no consideration of other types of rock, such as granite, basalt, and other hard rocks. This situation might lead to the presence of a sampling bias that might affect the generalizability of the proposed ML models. Therefore, it is crucial to expand the database by collecting various types of rock cases to increase the quantity and quality of the dataset.
(2) More indicators should be considered. Although the five indicators in this study affect the E of rock significantly, other factors such as depth of coring, mineral composition, grain size distribution, and RQD index also have an effect on the E of rock. It is valuable to explore the influences of these indicators on the E of rock estimation.
Although the aforementioned limitations exist, the DMO-XGBoost and DMO-RF models developed in this study could be considered feasible and practical tools for estimating the E of rock. Compared to traditional expensive and time-consuming laboratory testing methods for rock mechanical parameters, the proposed models have potential advantages in terms of cost savings and time efficiency. Geotechnical engineers can significantly reduce the need for costly laboratory testing, providing project budget savings while enabling faster and more effective assessment of geologic and geotechnical risks. In addition, the developed models could be extended to estimate other geotechnical parameters, such as UCS, rock shear strength, and rock brittleness, among others.

6. Conclusions

E is one of the important parameters in rock mechanics. Accurately estimating the E of rock is significant for the design and construction of geotechnical projects. In this study, DMO-XGBoost and DMO-RF models were developed to estimate the E of rock. The effectiveness of the proposed models was verified using a database including 90 rock samples with five indicators. To avoid overfitting or selection bias, the five-fold CV method was combined with the DMO algorithm to tune the hyperparameters on the training set (80% of the dataset). The R2 score, RMSE, MAE, and VAF were adopted to evaluate the performance of models on the test set (20% of the dataset). In addition, two default ensemble models (XGBoost and RF) were introduced and compared with the proposed two hybrid models. Overall, the DMO algorithm improved the predictive performance of the default XGBoost and RF models, and the DMO-RF model performed best with an R2 of 0.967, RMSE of 0.541, MAE of 0.447, and VAF of 0.969 on the test set. Furthermore, the classical SA and BO algorithms were introduced to optimize the RF and XGBoost models for comparison, and the Taylor diagram was plotted to determine the comprehensive rank, which was DMO-RF > BO-RF > DMO-XGBoost > BO-XGBoost > SA-RF > SA-XGBoost. The permutation feature importance technique revealed that dry density and slake durability were more influential than other indicators in the evaluation results. Based on the convergence curves, it was verified that the DMO-RF model can reduce the generalization error and avoid overfitting.
In future work, a larger dataset containing higher-quality data should be established to improve the estimation performance. In conclusion, the proposed DMO-RF model in this paper can be considered a viable and useful tool in estimating the E of rock and can be recommended for the application of other geotechnical parameter estimations, such as UCS, rock shear strength, and rock brittleness, amongst others.

Author Contributions

Conceptualization, L.L.; methodology, L.L.; software, Z.J.; validation, W.L. and Z.J.; formal analysis, L.L.; investigation, L.L.; resources, L.L.; data curation, L.L.; writing—original draft preparation, W.L.; writing—review and editing, L.L.; visualization, W.L.; supervision, G.Z.; project administration, G.Z.; funding acquisition, G.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (2018YFC0604606).

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bieniawski, Z.T. Estimating the strength of rock materials. J. South. Afr. Inst. Min. 1974, 74, 312–320. [Google Scholar] [CrossRef]
  2. Zhao, Y.; Zhao, G.; Xu, L.; Zhou, J.; Huang, X. Mechanical property evolution model of cemented tailings-rock backfill considering strengthening and weakening effects. Constr. Build. Mater. 2023, 377, 131081. [Google Scholar] [CrossRef]
  3. Lin, L.; Xu, J.; Yuan, J.; Yu, Y. Compressive strength and elastic modulus of RBAC: An analysis of existing data and an artificial intelligence based prediction. Case Stud. Constr. Mater. 2023, 18, e02184. [Google Scholar] [CrossRef]
  4. Liu, J.; Jiang, Q.; Chen, T.; Yan, S.; Ying, J.; Xiong, X.; Zheng, H. Bayesian Estimation for Probability Distribution of Rock’s Elastic Modulus Based on Compression Wave Velocity and Deformation Warning for Large Underground Cavern. Rock Mech. Rock Eng. 2022, 55, 3749–3767. [Google Scholar] [CrossRef]
  5. Yang, J.P.; Chen, W.Z.; Yang, D.S.; Tian, H.M. Estimation of Elastic Moduli of Non-persistent Fractured Rock Masses. Rock Mech. Rock Eng. 2016, 49, 1977–1983. [Google Scholar] [CrossRef]
  6. Thomaz, W.d.A.; Miyaji, D.Y.; Possan, E. Comparative study of dynamic and static Young’s modulus of concrete containing basaltic aggregates. Case Stud. Constr. Mater. 2021, 15, e00645. [Google Scholar] [CrossRef]
  7. Sonmez, H.; Gokceoglu, C.; Nefeslioglu, H.A.; Kayabasi, A. Estimation of rock modulus: For intact rocks with an artificial neural network and for rock masses with a new empirical equation. Int. J. Rock Mech. Min. Sci. 2006, 43, 224–235. [Google Scholar] [CrossRef]
  8. Aksoy, C.O.; Aksoy, G.G.U.; Yaman, H.E. The Importance of deformation modulus on design of rocks with numerical modeling. Geomech. Geophys. Geo-Energy Geo-Resour. 2022, 8, 103. [Google Scholar] [CrossRef]
  9. Katz, O.; Reches, Z.; Roegiers, J.C. Evaluation of mechanical rock properties using a Schmidt Hammer. Int. J. Rock Mech. Min. Sci. 2000, 37, 723–728. [Google Scholar] [CrossRef]
  10. Cilli, P.A.; Chapman, M. The Power-Law Relation Between Inclusion Aspect Ratio and Porosity: Implications for Electrical and Elastic Modeling. J. Geophys. Res.-Solid Earth 2020, 125, e2019JB019187. [Google Scholar] [CrossRef]
  11. Heidari, M.; Khanlari, G.R.; Kaveh, M.T.; Kargarian, S. Predicting the Uniaxial Compressive and Tensile Strengths of Gypsum Rock by Point Load Testing. Rock Mech. Rock Eng. 2012, 45, 265–273. [Google Scholar] [CrossRef]
  12. Asem, P.; Gardoni, P. A generalized Bayesian approach for prediction of strength and elastic properties of rock. Eng. Geol. 2021, 289, 106187. [Google Scholar] [CrossRef]
  13. Sachpazis, C.I. Correlating schmidt hardness with compressive strength and young’s modulus of carbonate rocks. Bull. Int. Assoc. Eng. Geol. 1990, 42, 75–83. [Google Scholar] [CrossRef]
  14. Lashkaripour, G.R. Predicting mechanical properties of mudrock from index parameters. Bull. Eng. Geol. Environ. 2002, 61, 73–77. [Google Scholar] [CrossRef]
  15. Yasar, E.; Erdogan, Y. Estimation of rock physicomechanical properties using hardness methods. Eng. Geol. 2004, 71, 281–288. [Google Scholar] [CrossRef]
  16. Moradian, Z.A.; Behnia, M. Predicting the Uniaxial Compressive Strength and Static Young’s Modulus of Intact Sedimentary Rocks Using the Ultrasonic Test. Int. J. Geomech. 2009, 9, 14–19. [Google Scholar] [CrossRef]
  17. Armaghani, D.J.; Mohamad, E.T.; Momeni, E.; Narayanasamy, M.S.; Amin, M.F.M. An adaptive neuro-fuzzy inference system for predicting unconfined compressive strength and Young’s modulus: A study on Main Range granite. Bull. Eng. Geol. Environ. 2015, 74, 1301–1319. [Google Scholar] [CrossRef]
  18. Shan, Z.-G.; Di, S.-J. Loading-unloading test analysis of anisotropic columnar jointed basalts. J. Zhejiang Univ.-Sci. A 2013, 14, 603–614. [Google Scholar] [CrossRef]
  19. Karakus, M.; Kumral, M.; Kilic, O. Predicting elastic properties of intact rocks from index tests using multiple regression modelling. Int. J. Rock Mech. Min. Sci. 2005, 42, 323–330. [Google Scholar] [CrossRef]
  20. Liang, W.; Sari, Y.A.; Zhao, G.; McKinnon, S.D.; Wu, H. Probability Estimates of Short-Term Rockburst Risk with Ensemble Classifiers. Rock Mech. Rock Eng. 2021, 54, 1799–1814. [Google Scholar] [CrossRef]
  21. Zhang, W.; Zhang, R.; Wu, C.; Goh, A.T.C.; Lacasse, S.; Liu, Z.; Liu, H. State-of-the-art review of soft computing applications in underground excavations. Geosci. Front. 2020, 11, 1095–1106. [Google Scholar] [CrossRef]
  22. Min, C.; Xiong, S.; Shi, Y.; Liu, Z.; Lu, X. Early-age compressive strength prediction of cemented phosphogypsum backfill using lab experiments and ensemble learning models. Case Stud. Constr. Mater. 2023, 18, e02107. [Google Scholar] [CrossRef]
  23. Cao, J.; Gao, J.; Nikafshan Rad, H.; Mohammed, A.S.; Hasanipanah, M.; Zhou, J. A novel systematic and evolved approach based on XGBoost-firefly algorithm to predict Young’s modulus and unconfined compressive strength of rock. Eng. Comput. 2022, 38, 3829–3845. [Google Scholar] [CrossRef]
  24. Meng, W.Z.; Wu, W. Machine Learning-Aided Prediction of the Mechanical Properties of Frozen Fractured Rocks. Rock Mech. Rock Eng. 2023, 56, 261–273. [Google Scholar] [CrossRef]
  25. Abdi, Y.; Momeni, E.; Armaghani, D.J. Elastic modulus estimation of weak rock samples using random forest technique. Bull. Eng. Geol. Environ. 2023, 82, 176. [Google Scholar] [CrossRef]
  26. Acar, M.C.; Kaya, B. Models to estimate the elastic modulus of weak rocks based on least square support vector machine. Arab. J. Geosci. 2020, 13, 590. [Google Scholar] [CrossRef]
  27. Pappalardo, G.; Mineo, S. Static elastic modulus of rocks predicted through regression models and Artificial Neural Network. Eng. Geol. 2022, 308, 106829. [Google Scholar] [CrossRef]
  28. Yagiz, S.; Sezer, E.A.; Gokceoglu, C. Artificial neural networks and nonlinear regression techniques to assess the influence of slake durability cycles on the prediction of uniaxial compressive strength and modulus of elasticity for carbonate rocks. Int. J. Numer. Anal. Methods Geomech. 2012, 36, 1636–1650. [Google Scholar] [CrossRef]
  29. Dehghan, S.; Sattari, G.; Chehreh Chelgani, S.; Aliabadi, M.A. Prediction of uniaxial compressive strength and modulus of elasticity for Travertine samples using regression and artificial neural networks. Min. Sci. Technol. 2010, 20, 41–46. [Google Scholar] [CrossRef]
  30. Majdi, A.; Beiki, M. Evolving neural network using a genetic algorithm for predicting the deformation modulus of rock masses. Int. J. Rock Mech. Min. Sci. 2010, 47, 246–253. [Google Scholar] [CrossRef]
  31. Khandelwal, M.; Singh, T.N. Predicting elastic properties of schistose rocks from unconfined strength using intelligent approach. Arab. J. Geosci. 2011, 4, 435–442. [Google Scholar] [CrossRef]
  32. Ocak, I.; Seker, S.E. Estimation of Elastic Modulus of Intact Rocks by Artificial Neural Network. Rock Mech. Rock Eng. 2012, 45, 1047–1054. [Google Scholar] [CrossRef]
  33. Bejarbaneh, B.Y.; Bejarbaneh, E.Y.; Amin, M.F.M.; Fahimifar, A.; Armaghani, D.J.; Abd Majid, M.Z. Intelligent modelling of sandstone deformation behaviour using fuzzy logic and neural network systems. Bull. Eng. Geol. Environ. 2018, 77, 345–361. [Google Scholar] [CrossRef]
  34. Saedi, B.; Mohammadi, S.D.; Shahbazi, H. Prediction of uniaxial compressive strength and elastic modulus of migmatites using various modeling techniques. Arab. J. Geosci. 2018, 11, 574. [Google Scholar] [CrossRef]
  35. Rezaei, M. Indirect measurement of the elastic modulus of intact rocks using the Mamdani fuzzy inference system. Measurement 2018, 129, 319–331. [Google Scholar] [CrossRef]
  36. Yang, L.; Feng, X.; Sun, Y. Predicting the Young’s Modulus of granites using the Bayesian model selection approach. Bull. Eng. Geol. Environ. 2019, 78, 3413–3423. [Google Scholar] [CrossRef]
  37. Liu, L.; Zhao, G.; Liang, W. Slope Stability Prediction Using k-NN-Based Optimum-Path Forest Approach. Mathematics 2023, 11, 3071. [Google Scholar] [CrossRef]
  38. Jong, S.C.; Ong, D.E.L.; Oh, E. State-of-the-art review of geotechnical-driven artificial intelligence techniques in underground soil-structure interaction. Tunn. Undergr. Space Technol. 2021, 113, 103946. [Google Scholar] [CrossRef]
  39. Pang, X.F.; Luan, C.F.; Liu, L.; Liu, W.; Zhu, Y.C. Data-driven random forest forecasting method of monthly electricity consumption. Electr. Eng. 2022, 104, 2045–2059. [Google Scholar] [CrossRef]
  40. Wang, Y.Y.; Sun, S.F.; Chen, X.Q.; Zeng, X.J.; Kong, Y.; Chen, J.; Guo, Y.S.; Wang, T.Y. Short-term load forecasting of industrial customers based on SVMD and XGBoost. Int. J. Electr. Power Energy Syst. 2021, 129, 106830. [Google Scholar] [CrossRef]
  41. Guo, K.H.; Shen, C.C.; Zhou, X.K.; Ren, S.; Hu, M.; Shen, M.X.; Chen, X.; Guo, H.F. Traffic Data-Empowered XGBoost-LSTM Framework for Infectious Disease Prediction. IEEE Trans. Intell. Transp. Syst. 2023, 2, 1307–1318. [Google Scholar] [CrossRef]
  42. Meng, D.L.; Xu, J.; Zhao, J.J. Analysis and prediction of hand, foot and mouth disease incidence in China using Random Forest and XGBoost. PLoS ONE 2021, 16, e0261629. [Google Scholar] [CrossRef] [PubMed]
  43. Gu, Z.Y.; Cao, M.C.; Wang, C.G.; Yu, N.; Qing, H.Y. Research on Mining Maximum Subsidence Prediction Based on Genetic Algorithm Combined with XGBoost Model. Sustainability 2022, 14, 10421. [Google Scholar] [CrossRef]
  44. Zhou, X.Z.; Zhao, C.; Bian, X.C. Prediction of maximum ground surface settlement induced by shield tunneling using XGBoost algorithm with golden-sine seagull optimization. Comput. Geotech. 2023, 154, 105156. [Google Scholar] [CrossRef]
  45. Bhagat, S.K.; Tung, T.M.; Yaseen, Z.M. Heavy metal contamination prediction using ensemble model: Case study of Bay sedimentation, Australia. J. Hazard. Mater. 2021, 403, 123492. [Google Scholar] [CrossRef]
  46. Jia, X.L.; Fu, T.T.; Hu, B.F.; Shi, Z.; Zhou, L.Q.; Zhu, Y.W. Identification of the potential risk areas for soil heavy metal pollution based on the source-sink theory. J. Hazard. Mater. 2020, 393, 122424. [Google Scholar] [CrossRef]
  47. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf Mongoose Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570. [Google Scholar] [CrossRef]
  48. Chen, T.Q.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  49. Bentejac, C.; Csorgo, A.; Martinez-Munoz, G. A comparative analysis of gradient boosting algorithms. Artif. Intell. Rev. 2021, 54, 1937–1967. [Google Scholar] [CrossRef]
  50. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  51. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random Forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300. [Google Scholar] [CrossRef]
  52. Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Cheema, K.M.; Raja, M.A.Z.; Milyani, A.H.; Azhari, A.A. Dwarf Mongoose Optimization Metaheuristics for Autoregressive Exogenous Model Identification. Mathematics 2022, 10, 3821. [Google Scholar] [CrossRef]
  53. Agushaka, J.O.; Ezugwu, A.E.; Olaide, O.N.; Akinola, O.; Abu Zitar, R.; Abualigah, L. Improved Dwarf Mongoose Optimization for Constrained Engineering Design Problems. J. Bionic Eng. 2023, 20, 1263–1295. [Google Scholar] [CrossRef] [PubMed]
  54. Yin, X.; Gao, F.; Wu, J.; Huang, X.; Pan, Y.; Liu, Q. Compressive strength prediction of sprayed concrete lining in tunnel engineering using hybrid machine learning techniques. Undergr. Space. 2022, 7, 928–943. [Google Scholar] [CrossRef]
  55. Wang, M.; Zhao, G.; Liang, W.; Wang, N. A comparative study on the development of hybrid SSA-RF and PSO-RF models for predicting the uniaxial compressive strength of rocks. Case Stud. Constr. Mater. 2023, 18, e02191. [Google Scholar] [CrossRef]
  56. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  57. Thieu, N.V.; Mirjalili, S. MEALPY: An open-source library for latest meta-heuristic algorithms in Python. J. Syst. Archit. 2023, 139, 102871. [Google Scholar] [CrossRef]
  58. Thieu, N.V.; Barma, S.D.; Lam, T.V.; Kisi, O.; Mahesha, A. Groundwater level modeling using Augmented Artificial Ecosystem Optimization. J. Hydrol. 2023, 617, 129034. [Google Scholar] [CrossRef]
  59. Piotrowski, A.P.; Napiorkowski, J.J.; Piotrowska, A.E. Population size in Particle Swarm Optimization. Swarm Evol. Comput. 2020, 58, 100718. [Google Scholar] [CrossRef]
  60. Zhou, J.; Huang, S.; Qiu, Y. Optimization of random forest through the use of MVO, GWO and MFO in evaluating the stability of underground entry-type excavations. Tunn. Undergr. Space Technol. 2022, 124, 104494. [Google Scholar] [CrossRef]
  61. Zhao, G.Y.; Wang, M.; Liang, W.Z. A Comparative Study of SSA-BPNN, SSA-ENN, and SSA-SVR Models for Predicting the Thickness of an Excavation Damaged Zone around the Roadway in Rock. Mathematics 2022, 10, 1351. [Google Scholar] [CrossRef]
  62. Tsai, C.W.; Hsia, C.H.; Yang, S.J.; Liu, S.J.; Fang, Z.Y. Optimizing hyperparameters of deep learning in predicting bus passengers based on simulated annealing. Appl. Soft Comput. 2020, 88, 106068. [Google Scholar] [CrossRef]
  63. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
  64. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  65. Hasanpour, R.; Rostami, J.; Schmitt, J.; Ozcelik, Y.; Sohrabian, B. Prediction of TBM jamming risk in squeezing grounds using Bayesian and artificial neural networks. J. Rock Mech. Geotech. Eng. 2020, 12, 21–31. [Google Scholar] [CrossRef]
  66. Zhou, Q.M.; Chen, D.L.; Hu, Z.Y.; Chen, X. Decompositions of Taylor diagram and DISO performance criteria. Int. J. Climatol. 2021, 41, 5726–5732. [Google Scholar] [CrossRef]
  67. Zorlu, K.; Gokceoglu, C.; Ocakoglu, F.; Nefeslioglu, H.A.; Acikalin, S. Prediction of uniaxial compressive strength of sandstones using petrography-based models. Eng. Geol. 2008, 96, 141–158. [Google Scholar] [CrossRef]
  68. Gregorutti, B.; Michel, B.; Saint-Pierre, P. Correlation and variable importance in random forests. Stat. Comput. 2017, 27, 659–678. [Google Scholar] [CrossRef]
  69. Altmann, A.; Tolosi, L.; Sander, O.; Lengauer, T. Permutation importance: A corrected feature importance measure. Bioinformatics 2010, 26, 1340–1347. [Google Scholar] [CrossRef]
Figure 1. Principle diagram of XGBoost model.
Figure 2. Principle diagram of RF model.
Figure 3. Flowchart of the proposed approach.
Figure 4. Violin plots of the dataset.
Figure 5. Correlation pair plots and heatmap.
Figure 6. Results of development with DMO-RF and DMO-XGBoost models.
Figure 7. Regression diagrams of all models in the training and testing phases.
Figure 8. Comparison of models in Taylor diagram.
Figure 9. Relative importance of indicators.
Figure 10. Convergence curves of DMO-RF and DMO-XGBoost models.
Table 1. Related works on E estimation using ML approaches.

| Year | References | Models | Inputs | Number of Samples |
| --- | --- | --- | --- | --- |
| 2010 | Majdi and Beiki [30] | GA-ANN | UCS, GSI, RQD, ρ, n, NJ | 120 |
| 2010 | Dehghan et al. [29] | ANN | Vp, Is(50), Rn, n | 30 |
| 2011 | Khandelwal and Singh [31] | ANN | UCS, BTS | 120 |
| 2012 | Ocak and Seker [32] | ANN | UCS, γ | 195 |
| 2015 | Armaghani et al. [17] | ANFIS | ρ, Vp, content of Qtz, Kpr, Plg, and Bi | 45 |
| 2018 | Bejarbaneh et al. [33] | ANN | Is(50), Vp, Rn | 96 |
| 2018 | Saedi et al. [34] | ANFIS | BPI, BTS, Vp, Is(50) | 120 |
| 2018 | Rezaei [35] | MFIS | H, ρ, n, DI | 50 |
| 2019 | Yang et al. [36] | Bayesian | Is(50), Rn, Vp, n, UCS | 71 |
| 2020 | Acar and Kaya [26] | LS-SVM | Vp, γ, Is(50), BTS | 575 |
| 2022 | Cao et al. [23] | XGBoost-firefly | ρ, Vp, content of Qtz, Kpr, Plg, Bi | 45 |
| 2022 | Pappalardo and Mineo [27] | ANN | n, γ, Vp, Edyn, UCS | / |
| 2023 | Meng and Wu [24] | RF | PF, SF, IAF, TF, NF | / |
| 2023 | Abdi et al. [25] | RF | n, ρ, Vp, Id2, Abs | 90 |

Note: GSI is geological strength index; RQD is rock mass quality designation; GA is genetic algorithm; BTS is Brazilian tensile strength; γ is unit weight; ρ is density; n is porosity; NJ is number of joints per meter; Vp is P-wave velocity; Is(50) is point load index; Rn is the Schmidt hammer rebound number; Qtz is quartz; Kpr is alkali feldspar; Plg is plagioclase; Bi is biotite; BPI is block punch index; H is depth of coring; DI is durability index; Edyn is dynamic elastic modulus; Id2 is slake durability; Abs is water absorption; PF is persistence factor of ice-filled fractures; SF is spacing between fractures; IAF is inclination angle of fractures; TF is thickness of fractures; NF is number of fractures.
Table 2. Descriptions of input indicators.

| Indicator | Description |
| --- | --- |
| Porosity (%) | The ratio of void space to the total volume of rock. |
| Dry density (g/cm3) | The mass per unit volume of a rock sample without any water content. |
| P-wave velocity (m/s) | The speed at which compressional waves travel through a rock sample. |
| Slake durability (%) | A measure of the resistance of a rock sample to disintegration when exposed to wetting and drying cycles. |
| Water absorption (%) | The amount of water that a rock sample can absorb when it is immersed in water for a specified period of time. |
Table 3. Statistical values of the dataset.

| Indicators | Minimum | Median | Maximum | Mean | Standard Deviation |
| --- | --- | --- | --- | --- | --- |
| A1 (%) | 5.44 | 21.32 | 56.55 | 22.69 | 9.59 |
| A2 (g/cm3) | 1.61 | 2.06 | 2.98 | 2.17 | 0.24 |
| A3 (m/s) | 1011.53 | 2075.05 | 3250.45 | 1989.66 | 541.78 |
| A4 (%) | 22.75 | 80.52 | 96.54 | 74.87 | 16.40 |
| A5 (%) | 2.51 | 10.59 | 26.54 | 11.62 | 5.24 |
| E (GPa) | 1.12 | 3.59 | 13.23 | 4.50 | 2.92 |
Table 4. Scope and interval values of DMO-RF and DMO-XGBoost models.

| ML Algorithms | Hyperparameters | Scope of Values | Interval of Values | Optimal Values |
| --- | --- | --- | --- | --- |
| DMO-RF | n_estimators | (10, 300) | 10 | 10 |
| | max_depth | (1, 20) | 1 | 4 |
| | min_samples_split | (2, 10) | 0.001 | 5 |
| DMO-XGBoost | n_estimators | (10, 300) | 10 | 12 |
| | max_depth | (1, 15) | 1 | 5 |
| | learning_rate | (0.001, 1) | 0.001 | 1 |
| | reg_alpha | (0.001, 1) | 0.001 | 1 |
| | reg_lambda | (0.001, 1) | 0.001 | 0.636 |
Table 5. Performance comparison of different models.

| Models | R2 | RMSE | MAE | VAF |
| --- | --- | --- | --- | --- |
| *Training* | | | | |
| RF | 0.964 | 0.541 | 0.368 | 0.965 |
| XGBoost | **0.999** | **0.001** | **0** | **0.999** |
| DMO-RF | 0.968 | 0.513 | 0.353 | 0.968 |
| DMO-XGBoost | 0.999 | 0.016 | 0.012 | 0.999 |
| *Testing* | | | | |
| RF | 0.913 | 0.886 | 0.748 | 0.920 |
| XGBoost | 0.795 | 1.357 | 0.994 | 0.816 |
| DMO-RF | **0.967** | **0.541** | **0.447** | **0.969** |
| DMO-XGBoost | 0.935 | 0.763 | 0.674 | 0.936 |

The results presented in bold denote the best values.
Table 6. Performance comparison of different hybrid models.

| Models | R2 | RMSE | MAE | VAF |
| --- | --- | --- | --- | --- |
| *Training* | | | | |
| SA-RF | 0.949 | 0.646 | 0.491 | 0.949 |
| SA-XGBoost | 0.997 | 0.029 | 0.022 | 0.997 |
| BO-RF | 0.950 | 0.644 | 0.478 | 0.950 |
| BO-XGBoost | 0.996 | 0.183 | 0.138 | 0.996 |
| *Testing* | | | | |
| SA-RF | 0.933 | 0.774 | 0.641 | 0.935 |
| SA-XGBoost | 0.918 | 0.858 | 0.681 | 0.921 |
| BO-RF | 0.933 | 0.773 | 0.654 | 0.936 |
| BO-XGBoost | 0.953 | 0.648 | 0.559 | 0.954 |
Table 7. Taylor diagram values of all models.

| | Reference | DMO-RF | DMO-XGBoost | SA-RF | SA-XGBoost | BO-RF | BO-XGBoost |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SD | 2.998 | 2.7 | 2.999 | 2.519 | 2.513 | 2.671 | 2.562 |
| CC | 0 | 0.988 | 0.968 | 0.977 | 0.968 | 0.981 | 0.975 |
| RMSD | 1 | 0.530 | 0.757 | 0.764 | 0.843 | 0.641 | 0.758 |
Table 8. Performance comparison of different ML models on test set.

| Models | R2 | Score | RMSE | Score | MAE | Score | VAF | Score | Total Score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Adaboost | 0.841 | 5 | 1.188 | 5 | 0.920 | 5 | 0.849 | 5 | 20 |
| CatBoost | 0.784 | 4 | 1.382 | 4 | 1.075 | 4 | 0.788 | 4 | 16 |
| SVM | 0.737 | 3 | 1.410 | 3 | 1.132 | 3 | 0.745 | 3 | 12 |
| DT | 0.688 | 1 | 1.679 | 1 | 1.321 | 1 | 0.701 | 1 | 4 |
| MLPNN | 0.721 | 2 | 1.473 | 2 | 1.163 | 2 | 0.722 | 2 | 8 |
| DMO-RF | 0.967 | 7 | 0.541 | 7 | 0.447 | 7 | 0.969 | 7 | 28 |
| DMO-XGBoost | 0.935 | 6 | 0.763 | 6 | 0.674 | 6 | 0.936 | 6 | 24 |
