Rockburst Intensity Classification Prediction Based on Multi-Model Ensemble Learning Algorithms
Abstract
1. Introduction
2. Establishment of Rock Burst Prediction Database
2.1. Rockburst Event Data Sources
2.2. Analysis of Event Characteristic Indicators for Rockburst
3. Principles of Stacking and Voting Integration Algorithms
3.1. Principle of Stacking Integration Algorithm
3.2. Principle of Voting Integration Algorithm
4. Rockburst Risk Prediction Model Based on Integrated Algorithms
4.1. Rockburst Risk Prediction Process Based on Integrated Algorithms
- (1)
- To improve the prediction performance of the learner models and reduce the influence of sample imbalance, the SMOTE oversampling method was used to resample the original data so that the four rockburst intensity grades appeared in a 1:1:1:1 ratio, which in turn improved the accuracy of the prediction results (a minimal sketch of steps (1) and (2) follows this list).
- (2)
- Of the processed rockburst samples, 80% were randomly selected as the training set and the remaining 20% as the test set.
- (3)
- A base model library was established, containing two modules: integrated algorithms and basic algorithms. XGBoost, LightGBM, CatBoost, and random forest (RF) were selected as the integrated algorithms, while the basic algorithms included decision tree (DT), naive Bayes (NB), K-nearest neighbors (KNN), support vector machine (SVM), logistic regression (LR), linear discriminant analysis (LDA), multilayer perceptron classifier (MLPC), and Gaussian process (GP).
- (4)
- We determined the hyperparameters, search ranges, and search steps of the base models, and obtained the optimal hyperparameter values by 5-fold cross-validation combined with a randomized grid search.
- (5)
- The optimized base models were fused under an appropriate integration strategy. In this paper, the stacking and voting integration strategies were chosen as the integration rules. The stacking schemes, each composed of base models and a meta model, covered four base model and meta model pairings: basic with basic, basic with integrated, integrated with basic, and integrated with integrated. The voting method used three integration strategies, namely weighted integration of the eight basic algorithm models, weighted integration of the four integrated algorithm models, and weighted integration of the basic + integrated algorithm models, with accuracy, F1 score, recall, precision, and cv-mean used as the weight values.
- (6)
- According to the different integration strategies, different integrated classification models were obtained. The test set was applied to each integrated model, the classification performance of the various fusion algorithms was compared and analyzed using accuracy, F1 score, recall, and precision, and the optimal integration scheme was finally determined. The flow chart of rockburst risk prediction is shown in Figure 5.
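To make steps (1) and (2) concrete, the following minimal Python sketch shows how the SMOTE oversampling and the 80/20 split could be set up with imbalanced-learn and scikit-learn; the file name, column names, and random seeds are illustrative assumptions rather than the authors' actual code.

```python
# Minimal sketch of steps (1)-(2): SMOTE oversampling to a 1:1:1:1 class ratio,
# followed by an 80/20 train/test split. The file name, column names, and
# random seeds are hypothetical placeholders, not the authors' code.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

data = pd.read_csv("rockburst_cases.csv")   # hypothetical case database
X = data.drop(columns=["intensity"])        # characteristic indicators
y = data["intensity"]                       # rockburst intensity grade (4 classes)

# sampling_strategy="auto" oversamples every minority class up to the majority
# class size, i.e. the four grades end up in a 1:1:1:1 ratio.
X_res, y_res = SMOTE(sampling_strategy="auto", random_state=42).fit_resample(X, y)

# 80% training set, 20% test set; stratify keeps the balanced class ratio.
X_train, X_test, y_train, y_test = train_test_split(
    X_res, y_res, test_size=0.2, stratify=y_res, random_state=42
)
```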
4.2. Raw Data Processing
4.3. Hyperparameter Optimization of the Base Algorithms
4.4. Strategy and Process of the Stacking Integration Algorithm
- (1)
- Determine the hyperparameters of each base classifier (basic algorithms + integrated algorithms), obtain their optimal values using the randomized grid search method, and tune the classification models.
- (2)
- Use the tuned classification models to calculate the rockburst risk probability separately, and compute the accuracy and evaluation indexes of each model.
- (3)
- Build the stacking strategy in the scikit-learn (sklearn) module, and determine the base models and meta models according to the fusion strategies described above (a minimal sketch follows this list).
- (4)
- Calculate the accuracy and evaluation indexes of the 24 stacking integration models (Stacking1~Stacking24) to determine the best stacking integration strategy and model. The strategy and process of the stacking integration algorithm are shown in Figure 6.
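As a minimal sketch of step (3), the snippet below builds one stacking configuration in scikit-learn, with integrated algorithms as base models and a basic algorithm (logistic regression) as the meta model; the chosen estimators and settings are illustrative and do not reproduce any specific Stacking1~Stacking24 scheme.

```python
# Minimal sketch of one stacking configuration: integrated algorithms as base
# models and a basic algorithm (logistic regression) as the meta model. The
# estimators and settings are illustrative; intensity labels are assumed to be
# integer-encoded (0-3) for the gradient boosting libraries.
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

base_models = [
    ("rf", RandomForestClassifier(max_depth=25, random_state=42)),
    ("xgb", XGBClassifier(n_estimators=1700, learning_rate=0.05, max_depth=25)),
    ("lgbm", LGBMClassifier(n_estimators=1600, learning_rate=0.09, num_leaves=81)),
]

stacking = StackingClassifier(
    estimators=base_models,
    final_estimator=LogisticRegression(max_iter=1000),  # meta model (basic algorithm)
    stack_method="predict_proba",   # feed class probabilities to the meta model
    cv=5,
)
stacking.fit(X_train, y_train)
print(stacking.score(X_test, y_test))
```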
4.5. Strategy and Process of Voting Integration Algorithm
- (1)
- Determine the hyperparameters of each base classifier (basic algorithms + integrated algorithms) and tune the classification models.
- (2)
- Use the tuned classification models to calculate the rockburst risk probability, and compute the accuracy and evaluation indexes of each model.
- (3)
- Determine the voting models using a weighted combination strategy based on the mean, accuracy, F1 score, precision, recall, and cross-validation results of each base classifier (see the sketch after this list).
- (4)
- Calculate the accuracy and evaluation indexes of 18 voting algorithms (Voting1~Voting18) to determine the best voting integration strategy and model.
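A minimal sketch of the weighted voting fusion in step (3) is given below, using soft voting in scikit-learn with each base model's test accuracy as its weight; the estimators and weight values are illustrative assumptions, not the paper's Voting1~Voting18 configurations.

```python
# Minimal sketch of a weighted soft-voting fusion: each base model votes with
# its class probabilities, weighted here by an assumed per-model test accuracy.
# The estimators and weight values are illustrative placeholders.
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

estimators = [
    ("rf", RandomForestClassifier(random_state=42)),
    ("xgb", XGBClassifier(random_state=42)),
    ("knn", KNeighborsClassifier(n_neighbors=1)),
    ("svm", SVC(C=6.8, gamma=0.01, probability=True)),  # probabilities needed for soft voting
]
accuracy_weights = [0.80, 0.85, 0.71, 0.77]   # e.g. each model's test accuracy

voting = VotingClassifier(estimators=estimators, voting="soft", weights=accuracy_weights)
voting.fit(X_train, y_train)
print(voting.score(X_test, y_test))
```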
5. Analysis and Discussion of Prediction Results
5.1. Comparison and Evaluation of Base Algorithms
5.2. Classification Performance Evaluation of the Stacking Integration Algorithm
5.3. Classification Performance Evaluation of the Voting Integration Algorithm
5.4. Comparative Analysis of the Fusion Results of Other Combination Strategies
5.5. Comparative Analysis of Other Models with Different Sample Combinations
6. Conclusions
- (1)
- In this paper, machine learning algorithms and the 5-fold cross-validation method were applied to rockburst intensity prediction. Twelve base models were introduced, 632 sets of data were obtained using the SMOTE oversampling method, and the models were evaluated and compared. The results showed that, compared with the basic algorithms, the traditional integrated algorithms achieved good prediction performance and good generalization ability.
- (2)
- From the 35 stacking models obtained, it can be seen that different fusion strategies performed very differently on the rockburst level prediction problem, and their performance was not always better than that of the traditional integrated algorithms. When an integrated algorithm served as the meta model, the overall improvement in prediction performance was smaller than for stacking models that used a basic algorithm as the meta model; among the latter, models whose base classifiers were basic algorithms improved less overall than those whose base classifiers were integrated algorithms. The stacking algorithms achieved better prediction results for low-level rockburst events, and their overall prediction performance was higher than that of some individual integrated algorithms.
- (3)
- Stacking integration algorithms can fuse independent machine learning algorithms and give full play to the performance of each algorithm. When the meta model was a basic algorithm, the new feature information provided by the base models gave the meta model higher accuracy than the original feature information did. Because the basic algorithms process the feature information contained in the data differently from the integrated algorithms, and accept this feature information to different degrees, the stacking integrated algorithm could achieve better results. In addition, the stacking fusion strategy obtained better prediction results for low-intensity rockburst events, and its overall prediction performance was higher than that of some individual integrated algorithms.
- (4)
- Different voting strategies behaved differently. When the integrated algorithms were used as base models, the accuracy and other evaluation metrics of the new fused model decreased for each voting strategy. When the basic algorithms were used as base models, the accuracy and other evaluation metrics of the new fused model increased for each voting strategy. For the weighted fusion of the integrated algorithm models + basic algorithm models, the values of all metrics increased regardless of which metric was used for the weighting.
- (5)
- The influence of different rockburst event feature indicators on the model prediction results was analyzed. The four integrated algorithms and the best-performing fusion strategy model were selected for this analysis, among which the CatBoost algorithm model performed well in dealing with the rockburst prediction problem. When different feature parameters were selected as inputs, it was found that important feature parameters have a great influence on the predictive performance of the algorithms; the maximum tangential stress σθ and the elastic energy index Wet have a greater impact on rockburst proneness.
- (6)
- As the amount of rockburst data continues to grow, problems such as outliers, missing values, and data imbalance appear in the original data set. Therefore, this paper adopted an integrated fusion method that combines multiple machine learning algorithms to obtain a better-performing rockburst risk prediction model. Because different machine learning methods have their own advantages and disadvantages, combining appropriate algorithms can mitigate the problems that arise when processing the data. Therefore, considering multiple rockburst prediction indexes, common machine learning algorithms were combined at this stage to give full play to the advantages of each algorithm. In addition, the combination method chosen in this paper has not been reported in previous studies, which is also a new attempt.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Statistical Parameters | Characteristic Indicators | |||||
---|---|---|---|---|---|---|
Average | 59.97 | 115.32 | 7.04 | 20.54 | 4.99 | 0.64 |
S.D. | 49.23 | 47.76 | 4.28 | 2.95 | 0.78 | 13.03 |
Min. | 2.60 | 15.50 | 0.38 | 0.70 | 0.04 | 0.15 |
25% quantile | 32.08 | 83.60 | 3.72 | 2.92 | 0.29 | 11.54 |
50% quantile | 50.44 | 111.50 | 6.30 | 4.80 | 0.46 | 17.50 |
75% quantile | 68.00 | 144.40 | 9.25 | 6.31 | 0.64 | 25.00 |
Max. | 297.80 | 304.20 | 22.60 | 21.00 | 5.26 | 80.00 |
Classification | Methods | Hyperparameters | Search Scope and Step Size | Optimal Value of the Hyperparameters
---|---|---|---|---
Integration algorithm | Random Forest | max_depth | (10, 100, 5) | 25
Integration algorithm | Random Forest | min_samples_leaf | (2, 11, 1) | 3
Integration algorithm | Random Forest | min_samples_split | (2, 11, 1) | 9
Integration algorithm | LightGBM | num_leaves | (2, 100, 1) | 81
Integration algorithm | LightGBM | n_estimators | (200, 2000, 50) | 1600
Integration algorithm | LightGBM | subsample | (0.1, 1, 0.1) | 0.8
Integration algorithm | LightGBM | max_depth | (10, 100, 5) | 75
Integration algorithm | LightGBM | learning_rate | (0.01, 0.2, 0.01) | 0.09
Integration algorithm | XGBoost | n_estimators | (200, 2000, 50) | 1700
Integration algorithm | XGBoost | learning_rate | (0.01, 0.2, 0.01) | 0.05
Integration algorithm | XGBoost | max_depth | (10, 100, 5) | 25
Integration algorithm | XGBoost | colsample_bytree | (0.1, 1, 0.1) | 0.7
Integration algorithm | XGBoost | subsample | (0.1, 1, 0.1) | 0.8
Integration algorithm | CatBoost | random_strength | (2, 10, 1) | 3
Integration algorithm | CatBoost | learning_rate | (0.1, 1, 0.1) | 0.2
Integration algorithm | CatBoost | iterations | (200, 2000, 100) | 1900
Integration algorithm | CatBoost | random_seed | (10, 100, 10) | 30
Integration algorithm | CatBoost | depth | (1, 10, 1) | 6
Basic algorithm | Decision Tree | min_samples_split | (2, 11, 1) | 10
Basic algorithm | Decision Tree | max_depth | (10, 100, 5) | 85
Basic algorithm | Decision Tree | min_samples_leaf | (2, 11, 1) | 2
Basic algorithm | KNN | n_neighbors | (1, 20, 1) | 1
Basic algorithm | SVM | C | (0.1, 10, 0.1) | 6.8
Basic algorithm | SVM | gamma | [0.0001, 0.001, 0.01, 0.1, 1, 3, 5, 7] | 0.01
Basic algorithm | Logistic Regression | penalty | ['l1', 'l2'] | l2
Basic algorithm | Logistic Regression | C | (0.1, 10, 0.1) | 0.1
Basic algorithm | Multi-layer neural network | hidden_layer_sizes | (10, 100, 1) | 65
Basic algorithm | Multi-layer neural network | learning_rate_init | (0.0001, 0.02, 0.0001) | 0.0145
Basic algorithm | Multi-layer neural network | max_iter | (200, 2000, 100) | 1900
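The search ranges in the table above are written as (lower bound, upper bound, step size). A minimal sketch of how such a search could be run for one base model (Random Forest) with 5-fold cross-validation and scikit-learn's RandomizedSearchCV is shown below; the number of sampled candidates, the scoring metric, and the random seed are assumptions, and X_train/y_train are the training data from the split in Section 4.1.

```python
# Minimal sketch of the randomized search with 5-fold CV for one base model
# (Random Forest), using the (lower bound, upper bound, step) ranges listed in
# the table above; n_iter, scoring, and the random seed are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {
    "max_depth": np.arange(10, 100, 5),
    "min_samples_leaf": np.arange(2, 11, 1),
    "min_samples_split": np.arange(2, 11, 1),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions=param_distributions,
    n_iter=60,        # assumed number of sampled parameter combinations
    cv=5,             # 5-fold cross-validation
    scoring="accuracy",
    random_state=42,
    n_jobs=-1,
)
search.fit(X_train, y_train)   # training data from the split in Section 4.1
print(search.best_params_, search.best_score_)
```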
Algorithms | Accuracy | Precision | Recall | F1 score | cv_mean |
---|---|---|---|---|---|
clf1 | 0.8031 | 0.7977 | 0.8031 | 0.7952 | 0.7525 |
clf2 | 0.8110 | 0.8169 | 0.8110 | 0.8092 | 0.7743 |
clf3 | 0.8346 | 0.8349 | 0.8346 | 0.8296 | 0.7802 |
clf4 | 0.8504 | 0.8493 | 0.8504 | 0.8465 | 0.7802 |
clf5 | 0.7087 | 0.7135 | 0.7087 | 0.7036 | 0.6752 |
clf6 | 0.7717 | 0.7624 | 0.7717 | 0.7650 | 0.7426 |
clf7 | 0.7874 | 0.7823 | 0.7874 | 0.7838 | 0.7485 |
clf8 | 0.5591 | 0.5671 | 0.5591 | 0.5608 | 0.5228 |
clf9 | 0.6063 | 0.6158 | 0.6063 | 0.6082 | 0.5208 |
clf10 | 0.6535 | 0.6848 | 0.6535 | 0.6497 | 0.4535 |
clf11 | 0.7717 | 0.7659 | 0.7717 | 0.7599 | 0.7010 |
clf12 | 0.5748 | 0.6410 | 0.5748 | 0.5829 | 0.5748 |
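For reference, the metrics reported in this and the following tables can be reproduced along the lines of the sketch below; weighted averaging over the four intensity classes is an assumption, although it is consistent with the recall column equaling the accuracy column in the tables.

```python
# Sketch of how the reported metrics could be computed for one tuned model;
# weighted averaging across the four intensity classes is assumed.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import cross_val_score

clf = RandomForestClassifier(random_state=42).fit(X_train, y_train)  # any base model
y_pred = clf.predict(X_test)
metrics = {
    "Accuracy": accuracy_score(y_test, y_pred),
    "Precision": precision_score(y_test, y_pred, average="weighted"),
    "Recall": recall_score(y_test, y_pred, average="weighted"),
    "F1 score": f1_score(y_test, y_pred, average="weighted"),
    "cv_mean": cross_val_score(clf, X_train, y_train, cv=5).mean(),
}
print(metrics)
```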
Algorithm Type | Algorithm | Testing Accuracy | Stacking-I | Stacking-II |
---|---|---|---|---|
Integration Algorithms | clf1 | 0.8031 | 0.7714 | 0.8189 |
Integration Algorithms | clf2 | 0.8110 | 0.7953 | 0.8031
Integration Algorithms | clf3 | 0.8346 | 0.8189 | 0.8110
Integration Algorithms | clf4 | 0.8504 | 0.8031 | 0.8268
Basic Algorithms | clf5 | 0.7087 | 0.6772 | 0.7008
Basic Algorithms | clf6 | 0.7717 | 0.7165 | 0.7087
Basic Algorithms | clf7 | 0.7874 | 0.7874 | 0.7953
Basic Algorithms | clf8 | 0.5591 | 0.7874 | 0.7874
Basic Algorithms | clf9 | 0.6063 | 0.8031 | 0.8031
Basic Algorithms | clf10 | 0.6535 | 0.7717 | 0.6850
Basic Algorithms | clf11 | 0.7717 | 0.8110 | 0.7795
Basic Algorithms | clf12 | 0.5748 | 0.7874 | 0.7953
Algorithm Type | Algorithm | Precision | Stacking-I | Stacking-II |
---|---|---|---|---|
Integration Algorithms | clf1 | 0.7977 | 0.7853 | 0.8190 |
Integration Algorithms | clf2 | 0.8169 | 0.8069 | 0.7998
Integration Algorithms | clf3 | 0.8349 | 0.8293 | 0.8120
Integration Algorithms | clf4 | 0.8493 | 0.8073 | 0.8262
Basic Algorithms | clf5 | 0.7135 | 0.7093 | 0.7129
Basic Algorithms | clf6 | 0.7624 | 0.7160 | 0.7180
Basic Algorithms | clf7 | 0.7823 | 0.7907 | 0.7876
Basic Algorithms | clf8 | 0.5671 | 0.7895 | 0.7822
Basic Algorithms | clf9 | 0.6158 | 0.8128 | 0.8008
Basic Algorithms | clf10 | 0.6848 | 0.7810 | 0.7117
Basic Algorithms | clf11 | 0.7659 | 0.8179 | 0.7731
Basic Algorithms | clf12 | 0.6410 | 0.7907 | 0.7945
Algorithm Type | Algorithms | Recall | Stacking-I | Stacking-II |
---|---|---|---|---|
Integration Algorithms | clf1 | 0.8031 | 0.7717 | 0.8189 |
Integration Algorithms | clf2 | 0.8110 | 0.7953 | 0.8031
Integration Algorithms | clf3 | 0.8346 | 0.8189 | 0.8110
Integration Algorithms | clf4 | 0.8504 | 0.8031 | 0.8268
Basic Algorithms | clf5 | 0.7087 | 0.6772 | 0.7008
Basic Algorithms | clf6 | 0.7717 | 0.7165 | 0.7087
Basic Algorithms | clf7 | 0.7874 | 0.7874 | 0.7953
Basic Algorithms | clf8 | 0.5591 | 0.7874 | 0.7874
Basic Algorithms | clf9 | 0.6063 | 0.8031 | 0.8031
Basic Algorithms | clf10 | 0.6535 | 0.7717 | 0.6850
Basic Algorithms | clf11 | 0.7717 | 0.8110 | 0.7795
Basic Algorithms | clf12 | 0.5748 | 0.7874 | 0.7953
Algorithm Type | Algorithms | F1 Score | Stacking-I | Stacking-II |
---|---|---|---|---|
Integration Algorithms | clf1 | 0.7952 | 0.7748 | 0.8177 |
Integration Algorithms | clf2 | 0.8092 | 0.7957 | 0.8012
Integration Algorithms | clf3 | 0.8296 | 0.8198 | 0.8114
Integration Algorithms | clf4 | 0.8465 | 0.8020 | 0.8263
Basic Algorithms | clf5 | 0.7036 | 0.6808 | 0.6999
Basic Algorithms | clf6 | 0.7650 | 0.7148 | 0.7102
Basic Algorithms | clf7 | 0.7838 | 0.7883 | 0.7897
Basic Algorithms | clf8 | 0.5608 | 0.7877 | 0.7841
Basic Algorithms | clf9 | 0.6082 | 0.8062 | 0.8010
Basic Algorithms | clf10 | 0.6497 | 0.7721 | 0.6931
Basic Algorithms | clf11 | 0.7599 | 0.8133 | 0.7742
Basic Algorithms | clf12 | 0.5829 | 0.7883 | 0.7927
Model | Combination Strategy | Base Classifiers | Testing Accuracy | Precision | Recall | F1 score
---|---|---|---|---|---|---
voting 1 | Integrated algorithm models | clf1~clf4 | 0.7874 | 0.7895 | 0.7874 | 0.7877
voting 2 | Basic algorithm models | clf5~clf12 | 0.7559 | 0.7617 | 0.7559 | 0.7571
voting 3 | Integrated + basic algorithm models | clf1~clf12 | 0.8110 | 0.8104 | 0.8110 | 0.8106
Model | Combination Strategy | Base Classifiers | Testing Accuracy | Precision | Recall | F1 score
---|---|---|---|---|---|---
voting 4 | Integrated algorithm models | clf1~clf4 | 0.7874 | 0.7895 | 0.7874 | 0.7877
voting 5 | Basic algorithm models | clf5~clf12 | 0.7559 | 0.7639 | 0.7559 | 0.7582
voting 6 | Integrated + basic algorithm models | clf1~clf12 | 0.8031 | 0.8028 | 0.8031 | 0.8028
Model | Combination Strategy | Base Classifiers | Testing Accuracy | Precision | Recall | F1 score
---|---|---|---|---|---|---
voting 7 | Integrated algorithm models | clf1~clf4 | 0.7874 | 0.7895 | 0.7874 | 0.7877
voting 8 | Basic algorithm models | clf5~clf12 | 0.7795 | 0.7717 | 0.7795 | 0.7736
voting 9 | Integrated + basic algorithm models | clf1~clf12 | 0.8110 | 0.8112 | 0.8110 | 0.8110
Model | Combination Strategy | Base Classifiers | Testing Accuracy | Precision | Recall | F1 score
---|---|---|---|---|---|---
voting 10 | Integrated algorithm models | clf1~clf4 | 0.7874 | 0.7895 | 0.7874 | 0.7877
voting 11 | Basic algorithm models | clf5~clf12 | 0.7717 | 0.7635 | 0.7717 | 0.7658
voting 12 | Integrated + basic algorithm models | clf1~clf12 | 0.8110 | 0.8112 | 0.8110 | 0.8110
Model | Combination Strategy | Base Classifiers | Testing Accuracy | Precision | Recall | F1 score
---|---|---|---|---|---|---
voting 13 | Integrated algorithm models | clf1~clf4 | 0.7874 | 0.7895 | 0.7874 | 0.7877
voting 14 | Basic algorithm models | clf5~clf12 | 0.7795 | 0.7715 | 0.7795 | 0.7733
voting 15 | Integrated + basic algorithm models | clf1~clf12 | 0.8268 | 0.8275 | 0.8268 | 0.8266
Model | Combination Strategy | Base Classifiers | Testing Accuracy | Precision | Recall | F1 score
---|---|---|---|---|---|---
voting 16 | Integrated algorithm models | clf1~clf4 | 0.7874 | 0.7895 | 0.7874 | 0.7877
voting 17 | Basic algorithm models | clf5~clf12 | 0.7717 | 0.7645 | 0.7717 | 0.7667
voting 18 | Integrated + basic algorithm models | clf1~clf12 | 0.8268 | 0.8275 | 0.8268 | 0.8266
Model | First-Level Classifier | Second-Level Classifier | Testing Accuracy | Precision | Recall | F1 score
---|---|---|---|---|---|---|
Stacking 25 | clf1 | clf8 | 0.7795 | 0.7774 | 0.7795 | 0.7778 |
Stacking 26 | clf1, clf2 | clf8 | 0.8189 | 0.8236 | 0.8189 | 0.8200 |
Stacking 27 | clf1~clf3 | clf8 | 0.7795 | 0.7816 | 0.7795 | 0.7796 |
Stacking 28 | clf1~clf4 | clf8 | 0.7874 | 0.7895 | 0.7874 | 0.7877 |
Stacking 29 | clf1~clf5 | clf8 | 0.8031 | 0.8037 | 0.8031 | 0.8031 |
Stacking 30 | clf1~clf6 | clf8 | 0.7953 | 0.7964 | 0.7953 | 0.7953 |
Stacking 31 | clf1~clf7 | clf8 | 0.8110 | 0.8133 | 0.8110 | 0.8117 |
Stacking 32 | clf1~clf7, clf9 | clf8 | 0.8031 | 0.8045 | 0.8031 | 0.8035 |
Stacking 33 | clf1~clf7, clf9, clf10 | clf8 | 0.8031 | 0.8045 | 0.8031 | 0.8035 |
Stacking 34 | clf1~clf7, clf9~clf11 | clf8 | 0.8189 | 0.8199 | 0.8189 | 0.8192 |
Stacking 35 | clf1~clf3, clf9~clf12 | clf8 | 0.8031 | 0.8045 | 0.8031 | 0.8035 |