Study on the Geological Condition Analysis and Grade Division of High Altitude and Cold Stope Slope

Abstract: Analysis of the geological conditions of high-altitude and low-temperature stope slopes, together with their grade division, is the basis for evaluating slope stability. Based on the engineering background of the eastern slope of the Beizhan iron mine in Hejing County, Xinjiang, we comprehensively analyse and summarize the factors that affect the geological conditions of high-altitude and cold slopes and finally determine nine geological condition index parameters. Based on a back-propagation (BP) neural network algorithm, we establish a network model applicable to analysing the geological conditions of slopes in cold areas. The model is applied to the eastern slope to analyse and classify the geological conditions of the high-altitude and low-temperature slopes. The results show that the skarn rock layer in the eastern slope is in a stable state and not prone to landslides; its corresponding geological condition is Grade I. The monzonite porphyry rock layer is in a relatively stable state with a potential for landslides, corresponding to Grade II. The marble rock layer is in a generally stable state with a possibility of landslide accidents, corresponding to Grade III. The limestone rock layer is in an unstable state and prone to landslide accidents, corresponding to Grade IV. Therefore, the eastern slope can be divided into four geological condition regions, Zone I to Zone IV, with corresponding geological condition levels Grade I to Grade IV. These results may provide a basis for the stability evaluation of high-altitude and cold slopes.


Introduction
Slope instability is one of the world's major geological disasters [1][2][3][4]. Every year, the economic losses caused worldwide by geological disasters due to slope instability are immense [5][6][7]; there are currently no accurate statistics on these losses, but they are undoubtedly huge. Under the action of freeze-thaw cycles, blasting, weathering and other factors, the mechanical properties of rock slopes in cold areas are easily damaged, leading to slope instability in mines in these regions [8][9][10][11]. Open-pit mine landslides are therefore a potential hazard in harsh, high-altitude, cold environments. Consider, for example, the "3·29" landslide disaster: on 29 March 2013, a landslide occurred on Zeri Mountain in the Jiama mining area (Tibet) of the China National Gold Group, releasing more than 2 million cubic metres of slope material and burying 83 field workers. The subsequent investigation attributed the landslide to factors such as the freezing and thawing of ice and snow.
With the implementation of Western development and progress of engineering technology, difficult-to-mine mineral resources and hidden dangers left over by exploited mineral resources in the harsh environments of Tibet, Xinjiang and other cold regions have

Data Processing
This article uses a BP neural network-supervised algorithm to classify the data. The specific method is as follows: characteristic parameters are continuously selected from sample data that have been collected, trained, checked and filtered; then, according to criteria set in the classifier in advance, the further-screened and identified samples are summarized and sorted.
Before the data classification process begins, a certain amount of training data must be available, because the continuous and stable operation of the BP neural network strictly requires relevant training data as input. Only then can features be extracted from the input training data in the subsequent classification process to establish a scientific and rigorous classification model. The existing training data are then analysed by comparing the classification model with the verification model. Finally, the classification of the data is completed.

To eliminate the influence of other transformation functions on the transformed data as much as possible, it is necessary to normalize the collected data. Normalization refers to using the principle of invariant moments to convert an unquantifiable expression into a value in the range 0-1 so that the expression becomes a scalar.
The formula for data normalization is:

y = (x − Min) / (Max − Min)  (1)

where x is the value before conversion, y is the converted value, Min is the sample minimum and Max is the sample maximum. The BP neural network achieves the accuracy requirement through repeated training on multiple samples to find the minimum of the error function. A common method to determine whether the error satisfies the accuracy requirement is logistic regression. This article uses a binary logistic regression method to determine the two results (True/False) of the input data and the corresponding probabilities (P_True/P_False) to judge whether the accuracy of the network satisfies the requirements. The formula is as follows:

t = w·x + b  (2)

where x is the input sample parameter, t is a temporary variable, and w and b are model parameters.
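As a sketch, the normalization step described above can be written as follows (a minimal Python illustration; the function name and sample values are hypothetical, not from the paper):

```python
def min_max_normalize(values):
    """Min-max normalization: map each sample value into the range 0-1,
    following y = (x - Min) / (Max - Min)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Degenerate case: all samples identical; return zeros by convention.
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) for x in values]

# Example: normalizing a hypothetical column of slope heights (metres).
heights = [120.0, 180.0, 240.0, 300.0]
print(min_max_normalize(heights))  # 0, 1/3, 2/3, 1
```

Each column of index parameters would be normalized independently this way before being fed to the network.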
The sigmoid function is usually used as the conversion function. In the logical judgment, when h(t) > 0.5, y = 1. The formula is as follows:

h(t) = 1 / (1 + e^(−t))  (3)

From formula (3), the parameter curve shown in Figure 2 can be obtained.
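The sigmoid transfer function and the h(t) > 0.5 decision rule can be illustrated as follows (a hedged Python sketch; the parameter values w, b and x are invented for illustration):

```python
import math

def sigmoid(t):
    # Logistic (sigmoid) transfer function: h(t) = 1 / (1 + e^(-t))
    return 1.0 / (1.0 + math.exp(-t))

def binary_decision(x, w, b):
    """Binary logistic decision: y = 1 when h(w*x + b) > 0.5, else y = 0.
    Returns the label and the probability P(True)."""
    p_true = sigmoid(w * x + b)
    return (1 if p_true > 0.5 else 0), p_true

label, p = binary_decision(x=0.8, w=2.0, b=-1.0)  # t = 0.6 > 0, so p > 0.5
print(label, round(p, 3))
```

Because sigmoid(0) = 0.5, the rule h(t) > 0.5 is equivalent to checking whether t = w·x + b is positive.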

BP Neural Network Forward Transmission and Reverse Feedback
(1) Forward transmission. The input parameters of the neural network reach the output end through the input end and each node of the intermediate (hidden) layer; this process is forward transmission. The intermediate layer can be adjusted by changing the weight relationship between the intermediate layer and the output layer, updating the output threshold of the intermediate layer, and other methods to reduce the generalization error between each node and the actual value and thereby achieve the desired result.
A multi-layer perceptron is composed of one or more single-layer perceptrons and can handle nonlinear data. Between its input and output ends there may be multiple hidden layers [18]. However, thus far, opinions differ on the appropriate number of hidden layers.
The decision-making area of a single-layer perceptron is divided by an extended two-dimensional data plane. When the multi-layer perceptron contains only one hidden layer, the decision-making area can be an open convex area or a closed concave area. When it contains more than one hidden layer, its decision-making area can take diversified shapes and divisions. Figure 3 shows the change in the weight relationship during forward transmission.

Neural network training samples must introduce randomly assigned weights and biases. These randomly assigned weights and biases are not selected arbitrarily: the weights must be real numbers in the interval (−1, 1), and the biases must be real numbers in the interval (0, 1). Only after these conditions are satisfied can the network model be forward propagated. In this process, X1 and X2 are calculated by formulas (4)-(8).
For the neuron f(z1), the following calculations are performed when only the weight assignment is considered. In the formulas, w(x1)1 represents the weight from x1 to y1, as shown in Figure 3. The remaining nodes can be calculated similarly. In summary, the output value of each node can be calculated by the forward-transmission formulas. Accordingly, the actual output of the forward-transmission model can be obtained by calculation, and the final output y5 obtained from the above formulas is exactly the actual output of the forward-transmission model.
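A minimal forward-transmission sketch, assuming sigmoid activations and the initialization intervals stated above (weights in (−1, 1), biases in (0, 1)); the layer sizes, seed, and input values are arbitrary illustration choices, not the paper's model:

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def forward_pass(inputs, w_hidden, b_hidden, w_out, b_out):
    """One forward transmission: inputs -> hidden layer -> single output y.
    Each hidden neuron computes f(z) = sigmoid(sum(w * x) + b)."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

random.seed(1)
x = [0.4, 0.7]  # x1, x2: two normalized input parameters (hypothetical)
# Initial weights drawn from (-1, 1) and biases from (0, 1), as the text requires.
w_hidden = [[random.uniform(-1, 1) for _ in x] for _ in range(3)]
b_hidden = [random.uniform(0, 1) for _ in range(3)]
w_out = [random.uniform(-1, 1) for _ in range(3)]
b_out = random.uniform(0, 1)
y = forward_pass(x, w_hidden, b_hidden, w_out, b_out)
print(0.0 < y < 1.0)  # a sigmoid output always lies in (0, 1)
```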
(2) Back feedback. To allow the error to participate in subsequent calculations, this article assumes that t is the expected output value of the training data. Because y5 is the actual output value of the forward-propagation model, the difference between the actual and expected output values is δ = t − y5. This difference must be defined based on actual conditions. It is assumed that there is an error between the actual and expected output value of each node, and this error is defined as δi. Adjusting the weights by training on the error between the actual and expected values is a crucial step in the feedback adjustment of the BP neural network. The specific process is shown in Figure 4.
For the error, δ3 = w(y3)·δ, and similarly, δ4 = w(y4)·δ. The calculation method for δ1 and δ2 follows the same principle: using formula (9), the values of δ1, δ2, δ3 and δ4 can finally be obtained. The theoretical basis of back propagation is the relationship between the change in error and the weights. The variation Δwi obtained by adjusting a weight is calculated from the error:

Δwi = η·δi·xi

where η is the learning rate. The weight w(x1)1 can be adjusted accordingly, and similarly the weight w(x2)1. The weights are calculated and adjusted according to formula (12), and the final result is an update of the weights. A single back propagation includes the calculation, adjustment and updating of the weights of all nodes; only after these tasks are complete is one pass of back propagation considered finished. The essence of the reverse-transmission algorithm is to complete the parameter adjustment of the sample model. In this process, forward transmission and reverse feedback are performed repeatedly until the error, weights and accuracy of the model reach the desired values.
In summary, the training process of the neural network can be completed through forward transmission and reverse feedback. However, this training does not continue indefinitely: under certain conditions it stops. The BP network training model stops in two situations: after reaching the set maximum number of iterations, or after the error falls below a certain threshold.
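The forward-transmission/reverse-feedback loop with both stopping conditions can be sketched for a single sigmoid neuron (a simplified illustration of the delta-rule update Δw = η·δ·x; the toy data set and hyperparameters are invented, not the paper's):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def train_single_neuron(samples, eta=0.5, max_iters=1000, tol=1e-3):
    """Minimal back-feedback sketch for one sigmoid neuron.
    Weight update: w <- w + eta * delta * x, with delta = (t - y) * y * (1 - y).
    Training stops under either condition described in the text:
    reaching max_iters, or the total error dropping below tol."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(max_iters):           # stopping condition 1: iteration cap
        total_err = 0.0
        for x, t in samples:
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            delta = (t - y) * y * (1.0 - y)   # error term fed backwards
            w = [wi + eta * delta * xi for wi, xi in zip(w, x)]
            b += eta * delta
            total_err += (t - y) ** 2
        if total_err < tol:               # stopping condition 2: error threshold
            break
    return w, b

# Toy training set: learn to separate two normalized sample points.
data = [([0.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]
w, b = train_single_neuron(data, max_iters=5000)
```

A full BP network applies the same update to every node of every layer during each backward pass.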

Geological Condition Analysis and Network Output Parameter Setting
(1) Determination of the geological condition indices. In the process of using the BP neural network to classify and predict the geological conditions of the stope slope, it is necessary to establish a corresponding BP neural network model. The first step in establishing such a model is to evaluate the reliability of its input parameters and filter out unreliable ones, so that the final output parameters are as accurate and reliable as possible and can reflect the influence of different geological factors on the geological conditions of the slope.
Generally, the geological influencing factors of slopes include the slope angle, slope height, lithology, unit weight, internal friction angle, porosity, cohesion, freeze-thaw cycles, etc. The slope angle and slope height determine the geometry of the slope and are indispensable to its existence. The lithology, unit weight, internal friction angle, porosity, cohesion, etc. are important characteristics of the slope rock mass, as they characterize the quality of the rock mass composing it. As a feature unique to slopes in cold regions, the freeze-thaw cycle plays a major role in the classification of geological conditions there. Some of these geological factors are interrelated while others are not; nevertheless, all of them play a vital role in grading the geological conditions of a slope. Generally, the number of parameters has little effect on the neurons themselves: it only determines the number of input neurons. However, increasing the number of parameters increases the simulation and recognition time, and the actual engineering workload grows greatly. Therefore, to reduce the workload, this paper simplifies the input parameters of the model; according to the modelling data and simulation results, the slope geological condition indices are the freeze-thaw coefficient, hydrogeology, rock unit weight, cohesion, internal friction angle, slope angle, slope height, porosity, and other factors.
(2) Set model output parameters. The output parameters are the grades of the geological conditions of the slopes of the Beizhan iron mine; they are divided into 4 grades according to the four expected output values of Grade I, Grade II, Grade III, and Grade IV. The specific content is shown in Table 1.
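One common way to realize four-grade expected outputs is a one-hot target vector per grade; the sketch below is an illustrative assumption, since the paper's exact target values are given in Table 1:

```python
GRADES = ["Grade I", "Grade II", "Grade III", "Grade IV"]

def encode_grade(grade):
    """One-hot target vector for the four geological condition grades
    (a common encoding choice; hypothetical here, see Table 1 for the
    paper's actual expected output values)."""
    vec = [0.0] * len(GRADES)
    vec[GRADES.index(grade)] = 1.0
    return vec

def decode_output(outputs):
    """Map the 4 network outputs back to the grade with the largest activation."""
    return GRADES[outputs.index(max(outputs))]

assert encode_grade("Grade III") == [0.0, 0.0, 1.0, 0.0]
assert decode_output([0.1, 0.7, 0.15, 0.05]) == "Grade II"
```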

Determination of the Grid Structure
(1) Determination of the number of perceptrons. The input layer, hidden layer and output layer constitute the basic structure of the BP neural network. The number of hidden layers depends on the complexity of the parameter selection: the more complex the problem to be solved, the more hidden layers are used, and the more difficult the corresponding model convergence becomes. (2) Determination of the number of network nodes. The method of determining the numbers of nodes in the input and output layers is unified and clear: once the research problem is determined, the input and output layers are determined. However, there is no scientific and consistent method for determining the number of hidden-layer nodes.
However, the neural network model constructed based on the slope parameters contains only a single hidden layer, so simply calculating the number of nodes in this layer can reveal the number of hidden layer nodes in the entire neural network model, which greatly simplifies the calculation process.
Because the number of rows of the input vector equals the number of input-layer nodes, and the input vector has 8 rows, the input layer has 8 nodes. The number of output-layer nodes equals the number of output data points; because there are 4 output data points, the output layer has 4 nodes. With the number of hidden-layer nodes determined to be 12, the structure of the BP neural network is finally 8-12-4, as shown in Figure 5.
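The 8-12-4 structure can be sketched as weight matrices of the corresponding shapes (illustrative Python only; the original work presumably used a dedicated neural network toolbox, and the initialization intervals follow the earlier text):

```python
import random

def build_network(layer_sizes, seed=42):
    """Build weight matrices and bias vectors for a fully connected network.
    layer_sizes = [8, 12, 4] gives the 8-12-4 structure described above:
    8 input nodes, 12 hidden nodes, 4 output nodes."""
    rng = random.Random(seed)
    layers = []
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        # Weights in (-1, 1) and biases in (0, 1), per the initialization rule.
        weights = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
        biases = [rng.uniform(0, 1) for _ in range(n_out)]
        layers.append((weights, biases))
    return layers

net = build_network([8, 12, 4])
# Two weight layers: input->hidden (12 x 8) and hidden->output (4 x 12).
print(len(net), len(net[0][0]), len(net[0][0][0]), len(net[1][0]))
```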

Selection and Processing of Training Samples
Two key factors in selecting training samples for the network model relate to the complexity of the training samples. The first is the accuracy of the training samples: accuracy is positively correlated with sample complexity, so if the accuracy of the training samples increases, their complexity also increases, which ultimately increases the number of samples required. The second is noise in the data: noise is also positively correlated with sample complexity, so if the noise in the data increases, the complexity of the training samples increases significantly, which affects the final sample selection. Therefore, when training a neural network, it is necessary to balance sample complexity against training-data accuracy, and to provide as much key, useful information as possible while reducing the interference of redundant, useless information.
After consulting a large quantity of mine slope data, 54 neural network training samples were selected. The literature [21,22] mentions the necessity of checking the dependency of each parameter before applying an ANN. However, parameter dependency does not need to be discussed in this study: because all the parameters are randomly selected, there is no dependence among the nine main slope-influencing factors (according to the actual situation of the Beizhan iron mine). All parameters are shown in Table 2, and the normalized data are shown in Table 3.


Sample Training and Result Analysis
The training steps are shown in Figure 6. (1) Convergence graph. The convergence curve obtained from the data is shown in Figure 7; minimum momentum is added to the training so that the probability of the convergence curve falling into a local minimum is reduced after 1000 iterations of learning.
(2) Error distribution diagram. The error distribution histogram obtained from the data is shown in Figure 8. Comparing the predicted samples with the actual samples, the error values are mostly distributed between −6% and 6%, which indicates that the training result is reliable.
(3) Regression analysis graph. Regression diagrams are made according to the data: Figure 9 shows the regression diagram of the 70% training samples; Figure 10 that of the 15% validation samples; Figure 11 that of the 15% test samples; and Figure 12 that of the overall sample. The abscissa values 0 and 1 represent the target value, and the ordinate represents the sample value after debugging. A curve slope approaching 1 means the target value is very close to the theoretical value, implying that the regression is very accurate.
The sample data in Table 4 show the true value and predicted value of each sample and the error between them. The errors are small (the maximum is 6.1%), which shows that the accuracy of the network training is high.
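The Table 4 comparison of true and predicted values can be reproduced in miniature as follows (the sample values here are hypothetical and chosen only to illustrate a maximum error of about 6.1%):

```python
def max_error_percent(true_vals, pred_vals):
    """Largest absolute error (%) between predicted and true target values,
    mirroring the comparison reported in Table 4."""
    return max(abs(p - t) for t, p in zip(true_vals, pred_vals)) * 100.0

# Hypothetical target/prediction pairs on the normalized 0-1 scale.
true_vals = [1.0, 0.0, 1.0, 0.0]
pred_vals = [0.97, 0.04, 0.939, 0.061]
print(max_error_percent(true_vals, pred_vals))  # about 6.1
```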

Determination of Parameter Samples of Geological Condition Indicators
The current Beizhan iron mine comprises two main mining areas: open-pit mining and side-hanging mining. The slope area of this study is mainly the slope between the pit and the side-hanging mine, which forms after open-pit mining, i.e., the eastern slope area of the mine. Referring to the geological report of the area and the index data of the test items, a set of parameter samples containing the geological condition indices can be obtained, as shown in Table 5.

Calculation Results and Analysis
According to the training results of the BP neural network model based on the training samples in the previous section, the accuracy of the network is high, so it can be used to calculate the geological index parameter samples of the Beizhan iron mine. After normalizing the data in the geological condition parameter table (Table 5), the data are input into the neural network model for calculation, and the results are shown in Table 6. Table 7 is obtained after summarizing the samples of the same geological condition level among the 13 groups of samples, together with a description of each grade (e.g., Grade III: poor geological conditions that may cause damage; Grade IV: poor geological conditions that easily cause damage). Based on Table 7, the BP neural network analysis shows that among the 13 samples of the eastern slope in this cold area, 5 have geological conditions of Grade I, 3 of Grade II, 3 of Grade III, and 2 of Grade IV. The 13 samples are distributed at different locations on the eastern slope. After their positions are marked on the eastern slope, the distribution area map of the eastern slope samples shown in Figure 13 is obtained. The numbers 1 to 13 in the diagram are the sampling points, taken from different areas of the east slope of the Beizhan iron mine. Figure 13 shows that although the distribution positions of the 13 samples on the eastern slope are random, there is a certain distribution law: samples at identical or similar geological condition levels are distributed relatively densely, while samples of different geological conditions are far apart and sparsely distributed. As a result, the regions where samples with identical or similar geological condition levels are located can be statistically delineated, so that the eastern slope can be divided by geological condition. The specific divisions are shown in Figure 14.

Concluding Remarks
A BP neural network is used to classify the geological conditions of the eastern slope of the Beizhan iron mine and to divide the slope as a whole. The eastern slope is divided into four areas: Zone I, Zone II, Zone III, and Zone IV; the corresponding geological condition grades for Zones I to IV are Grades I, II, III, and IV, respectively. The rock formation in Zone I is mainly skarn, which is also the main occurrence area of the ore bodies. It has high unit weight, high hardness, undeveloped joints, high integrity, and good physical and mechanical properties, so its geological conditions are good: damage does not easily occur, and the corresponding geological condition is Grade I. The rock formation in Zone II is mainly monzonite porphyry. Compared with skarn, its unit weight and hardness are slightly lower; however, the rock layer is thick and the joints are less developed, so its physical and mechanical properties are still relatively good. The conditions are good, only potential destructive factors exist, and the corresponding geological condition is Grade II. The rock formations in Zone III are mainly marble. Compared with skarn and monzonite porphyry, marble has relatively poor lithology, low unit weight and hardness, and more joints. Its physical and mechanical properties are poor, but its large thickness and layered distribution slightly compensate for the weak lithology. Therefore, its geological conditions are average, there is a possibility of damage, and the corresponding geological condition is Grade III. The rock formation in Zone IV is mainly limestone, which has the worst lithology among the four rock formations, with low unit weight, low hardness, well-developed joints, and large porosity. After long-term weathering, erosion, and freeze-thaw cycles, its physical properties are degraded.
Therefore, the geological conditions in this area are poor and easily destroyed. The corresponding geological conditions are grade IV.
