Prediction of Silicon Content in the Hot Metal of a Blast Furnace Based on FPA-BP Model

Abstract: In the process of blast furnace smelting, the stability of the hearth thermal state is essential. The operating status of the blast furnace can be judged from the silicon content in the hot metal and its change trend, ensuring the stable and smooth operation of the blast furnace. Based on the error back-propagation (BP) neural network, the flower pollination algorithm (FPA) is used to optimize the weights and thresholds of the BP neural network, and a prediction model of silicon content is established. At the same time, principal component analysis is used to reduce the dimension of the input sequence and obtain the relevant indicators. These indicators are used as the input and the silicon content in the hot metal as the output; the model is trained on these data, and the trained model is then used for prediction. The results show that the hit rate of the prediction model is 16% higher than that of the non-optimized BP prediction model. At the same time, the evaluation indicators and running speed of the model are improved compared with the BP prediction model, so it can be applied more accurately to predict the silicon content of the hot metal.


Introduction
Blast furnace ironmaking is essential in iron and steel production. Iron ore is reduced and melted into slag and hot metal in the blast furnace through various physical and chemical changes. The stable thermal state of the hearth plays an essential role in the stable operation and reaction of the blast furnace [1,2]. In practice, the smelting process of the blast furnace is highly complex and closed, and it is difficult to measure the hearth temperature directly [3]. The change in the silicon content of hot metal is closely related to the thermal stability of the blast furnace hearth. Therefore, the change in silicon content in the hot metal is generally used to indirectly reflect the thermal state change of the blast furnace hearth [4][5][6]. Reasonable control of silicon content in the hot metal can not only maintain the stability of the blast furnace, predict the fuel ratio of the blast furnace, and improve the utilization factor, but can also reduce the smelting slag for subsequent steelmaking links [7][8][9]. Therefore, the accurate prediction of the silicon content in hot metal is an essential prerequisite for maintaining the smooth operation of blast furnaces and a necessary guarantee to achieve energy conservation and emission reduction.
To date, scholars at home and abroad have undertaken much work on predicting silicon content in hot metal. The prediction models adopted are generally divided into mechanism, empirical, and data-driven models [10]. The mechanism and empirical models rely on theoretical knowledge and artificial field experience to predict and guide. However, due to the complex internal reactions of the blast furnace and the strong influence of human subjectivity, they easily cause significant deviations in the prediction results. The data-driven model is mainly used by operators to analyze historical data, seek correlations between data, and fully mine the decision-making relationships behind the data; it can achieve better prediction accuracy and generalization performance, which is more in line with the blast furnace ironmaking process. In recent years, it has received widespread attention and made some progress. By using statistical knowledge, a prediction model of silicon content in hot metal based on rough set theory and a BP neural network [11], a weighted extreme learning machine prediction model [12], and a genetic algorithm-optimized BP neural network prediction model [13] have been established, respectively, and good results have been achieved. In a prediction model of silicon content in hot metal, the rationality of the input sequence and the prediction method determines the prediction accuracy, training speed, and industrial adaptability of the model [14,15]. Previous studies on the prediction of silicon content in hot metal have mainly used relatively stable operating data. Due to the complexity of blast furnace smelting, furnace condition fluctuations occur from time to time, and the prediction accuracy deteriorates in the case of significant instability in furnace conditions [16][17][18][19]. Therefore, developing a new prediction model for silicon content in hot metal with a high prediction hit rate, good stability, adaptability to furnace condition fluctuations, and a small prediction error is significant.
This paper proposes a prediction model based on the combination of principal component analysis (PCA) and a flower pollination algorithm-error backpropagation neural network (FPA-BP). The input sequence of the model is determined by PCA dimension reduction. The prediction model of silicon content in hot metal is established based on the complex furnace condition fluctuation data of a steel plant to provide a new strategy for the stability of the blast furnace.

Data Source
The data used are from the actual operation of an iron and steel plant. There are 300 groups of data, and each group contains 64 attributes. The data are divided into five parts: charge structure, blast furnace operating parameters, tapping composition, gas composition, and slag composition. The selection of input parameters plays an important role in the prediction structure and results. If all attributes are taken as inputs, the model structure is extremely complex and difficult to realize. If the attribute selection is insufficient, the key factors affecting the model will be missing, leading to the failure of the model. Therefore, it is crucial to select the input attributes of the model reasonably. Based on on-site data collection and combined with manual experience, 21 attributes closely related to the molten iron temperature were selected as influencing factors. The selected influencing factors included oxygen enrichment, coke ratio to furnace, hot air temperature, furnace top temperature, furnace top pressure, permeability index, comprehensive smelting strength, inlet temperature, tuyere area, hot air pressure, CO utilization rate, pressure difference, water temperature difference, molten iron temperature, material batch, coal injection ratio, wind speed, comprehensive coke ratio, outlet temperature, coal injection amount, and silicon content of the previous furnace. The above variables were labeled as z_i (i = 1, 2, . . ., 21), and the silicon content of the hot metal was selected as the target variable. Parts of the original data are shown in Table 1, below.

Data Missing Value Processing
Data recording can fail due to faults in the blast furnace sensors or operator error during the blast furnace smelting process. Missing data were generally filled using Lagrange interpolation.
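As a rough sketch (not the authors' code), a Lagrange interpolation fill can be implemented with a small helper that evaluates the Lagrange polynomial through the nearest known points; the function name and the choice of four neighboring points are assumptions for illustration:

```python
def lagrange_fill(y, k=2):
    """Fill None entries of a series using Lagrange interpolation
    over up to 2*k known points nearest to each gap."""
    known = [(i, v) for i, v in enumerate(y) if v is not None]
    filled = list(y)
    for i, v in enumerate(y):
        if v is not None:
            continue
        # choose the known points closest to the missing index
        pts = sorted(known, key=lambda p: abs(p[0] - i))[:2 * k]
        # evaluate the Lagrange interpolation polynomial at x = i
        est = 0.0
        for j, (xj, yj) in enumerate(pts):
            w = 1.0
            for m, (xm, _) in enumerate(pts):
                if m != j:
                    w *= (i - xm) / (xj - xm)
            est += w * yj
        filled[i] = est
    return filled

series = [0.45, 0.48, None, 0.52, 0.50]
print(lagrange_fill(series))
```

The filled value lies between its known neighbors, which is the behavior expected of interpolation over slowly varying process data.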

Data Normalization
The dimensions and units of the different parameters are not uniform and often differ significantly, which complicates the analysis process and affects the results. The influence of dimension between parameters can be avoided by data normalization. After normalization, all parameters are on the same scale, which is convenient for parameter comparison and analysis.
The Z-score standardization method was used for normalization; it is simple and fast in operation and has a certain anti-interference ability. The specific calculation formula is shown in Formula (1):

z* = (z − µ)/δ (1)

where z* is the value after normalization; z is the original value; µ is the mean value of the population sample; and δ is the standard deviation of the population sample.
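A minimal sketch of Formula (1) applied column-wise with NumPy (the function name is an assumption; the paper does not give an implementation):

```python
import numpy as np

def z_score(data):
    """Z-score standardization: subtract the column mean (mu)
    and divide by the column standard deviation (delta)."""
    mu = data.mean(axis=0)
    delta = data.std(axis=0)
    return (data - mu) / delta

# toy columns standing in for two blast furnace attributes
X = np.array([[1100.0, 2.1], [1150.0, 2.4], [1200.0, 2.7]])
Z = z_score(X)
print(Z.mean(axis=0))  # each column has mean ~0 after standardization
print(Z.std(axis=0))   # and standard deviation ~1
```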

Principal Component Analysis Screening Input Sequence
When using a neural network for prediction, not only the number of input sequence parameters but also the correlation and coupling between parameters must be considered [20]. PCA was used to screen the input parameters. The core idea of principal component analysis is to solve the correlation matrix of the input variables by orthogonal transformation and to obtain the cumulative variance contribution rate from the eigenvalues of the correlation matrix, thereby obtaining the principal components of the original variables. The multidimensional data were reduced to comprehensive indicators while retaining the integrity of the information and improving the prediction accuracy of the neural network model.
The steps of principal component analysis are as follows:
Step 1: Calculate the correlation coefficient matrix R of the normalized data. Use Equation (2) to calculate R = (r_ij)_(21×21), where r_ij is the correlation coefficient between factor i and factor j, i, j = 1, 2, . . ., 21.
Step 2: Calculate the eigenvalues and eigenvectors of the matrix R.
Step 3: Calculate the contribution rate of each principal component. The contribution rate of the k-th principal component is shown in Equation (4):

η_k = λ_k / (λ_1 + λ_2 + . . . + λ_21) (4)

and the cumulative contribution rate of the first k principal components is shown in Equation (5):

η = (λ_1 + λ_2 + . . . + λ_k) / (λ_1 + λ_2 + . . . + λ_21) (5)

Taking the eigenvalues λ_1, λ_2, . . ., λ_m whose cumulative contribution rate reaches 90%, the corresponding first m principal components were used to reduce the dimensionality of the 21 indicators. The results are shown in Table 2. From Table 2, it can be seen that the cumulative contribution rate of the first nine principal components reached 90%, so extracting the first nine principal components could fully reflect the impact of the input variables on the silicon content of the molten iron.
Step 4: Calculate principal component load and principal component score.
Based on the previous calculation results, use Equations (6) and (7) to calculate the principal component loads β_i and corresponding scores γ_i from the eigenvalues λ_i and eigenvectors e_i, respectively. The relationship between the nine principal components and the twenty-one variables is shown in Equation (8). In the formula, z_i (i = 1, 2, . . ., 21) is the normalized value of the input variable and a_ij (i = 1, 2, . . ., 9, j = 1, 2, . . ., 21) is the principal component coefficient matrix, as shown in Table 3.
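The four steps above can be sketched with NumPy as follows. This is an illustrative reconstruction, not the authors' code: it eigendecomposes the correlation matrix, keeps components up to the 90% cumulative contribution rate, and projects the standardized data onto them; the random matrix merely stands in for the 300 × 21 plant data.

```python
import numpy as np

def pca_components(Z, threshold=0.90):
    """PCA on standardized data Z (samples x features): keep the first m
    components whose cumulative contribution rate reaches threshold."""
    R = np.corrcoef(Z, rowvar=False)        # correlation matrix (Step 1)
    eigvals, eigvecs = np.linalg.eigh(R)    # eigen-decomposition (Step 2)
    order = np.argsort(eigvals)[::-1]       # sort eigenvalues descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    contrib = eigvals / eigvals.sum()       # contribution rate (Eq. 4)
    cum = np.cumsum(contrib)                # cumulative rate (Eq. 5)
    m = int(np.searchsorted(cum, threshold)) + 1
    scores = Z @ eigvecs[:, :m]             # principal component scores (Step 4)
    return m, cum, scores

rng = np.random.default_rng(0)
Z = rng.standard_normal((300, 21))          # stand-in for the plant data
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)
m, cum, scores = pca_components(Z)
print(m, cum[m - 1])
```

With the real, strongly correlated furnace data, m = 9 is reached much sooner than with this uncorrelated toy matrix.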

BP Neural Network Prediction Model
The BP (backpropagation) neural network is a multi-layer feedforward neural network trained by error backpropagation. The network has a simple structure and substantial computing power and comprises an input layer, a hidden layer, and an output layer [21]. The specific network is shown in Figure 1, where X_i represents the input parameters and Y represents the silicon content of the output hot metal. After the data are propagated forward through the three-layer network, the output is compared with the expected value. If the error does not meet the set value, it is backpropagated, and the weights and thresholds of the network are adjusted for recalculation. The calculation ends when the error meets the set value or the set number of iterations is reached [22].


Flower Pollination Algorithm
British scholar Yang proposed the flower pollination algorithm in 2012 based on a simplified model of flower pollination behavior in nature [23]. The reproduction of plants in nature generally depends on the pollination of flowers. According to the pollination objects, there are generally two kinds: self-pollination and cross-pollination. Self-pollination refers to pollination from the stamen to the pistil of the same flower, while cross-pollination refers to pollination between the flowers of different plants. According to the manner of pollination, it is generally divided into biological and abiotic pollination. Most flowers rely on bees, insects, and other organisms for pollination, while a few rely on wind, water, and other abiotic methods. Plants reproduce by pollination. The flower pollination algorithm is based on the following four principles [24,25]:
Principle 1: Biological cross-pollination is regarded as a global search behavior, and the propagation behavior is considered to follow a Levy flight distribution, as shown in Formula (9):

X_j^(i+1) = X_j^i + αL(X_j^i − X_best) (9)

where X_j^(i+1) is the solution of generation i + 1; X_j^i represents the solution of the i-th generation; X_best is the current best solution; α is a weighting factor, usually 0.01; and L is the step size obtained from the Levy flight.
The step length L obtained from the Levy flight is expressed as:

L = c · γΓ(γ)sin(πγ/2) / (πt^(1+γ)) (10)

where Γ(γ) is the standard gamma function; c is the regularization parameter of the distribution amplitude, which can be taken as 1 according to experience; and t is the step size generated by the nonlinear transformation.
The step size t generated by the nonlinear transformation is expressed by Equation (11):

t = U / |V|^(1/γ) (11)

where U ~ N(0, δ²) and V ~ N(0, 1).
In Formula (11), δ² satisfies the condition in Formula (12):

δ² = {Γ(1 + γ)sin(πγ/2) / [γΓ((1 + γ)/2) · 2^((γ−1)/2)]}^(2/γ) (12)

Principle 2: Biological self-pollination is regarded as a local search, and the process can be described by Formula (13):

X_j^(i+1) = X_j^i + ε(X_k^i − X_l^i) (13)

where X_k^i and X_l^i are randomly selected solutions, and ε represents the reproduction probability, taken as a random number in the closed interval [0, 1].
Principle 3: The value of reproduction probability is proportional to the approximation of two flowers during pollination.
Principle 4: The switching between global and local pollination is controlled by the random probability p ∈ [0, 1].
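The Levy step of Equations (10)-(12) is commonly drawn via Mantegna's algorithm; the sketch below (function names are illustrative, not from the paper) generates one such step and applies the global-pollination update of Equation (9):

```python
import math
import random

def levy_step(gamma=1.5):
    """Draw one Levy-flight step via Mantegna's algorithm:
    t = U / |V|^(1/gamma), U ~ N(0, delta^2), V ~ N(0, 1)."""
    num = math.gamma(1 + gamma) * math.sin(math.pi * gamma / 2)
    den = math.gamma((1 + gamma) / 2) * gamma * 2 ** ((gamma - 1) / 2)
    delta = (num / den) ** (1 / gamma)   # standard deviation of U (Eq. 12)
    u = random.gauss(0, delta)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / gamma)

def global_pollination(x, x_best, alpha=0.01):
    """Global search update of Eq. (9): X' = X + alpha * L * (X - X_best)."""
    return [xi + alpha * levy_step() * (xi - bi) for xi, bi in zip(x, x_best)]

random.seed(1)
print(global_pollination([0.5, -0.2], [0.4, -0.1]))
```

The heavy tail of the Levy distribution occasionally produces very large steps, which is what gives the global search its ability to escape local optima.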

Model Optimization
As the BP neural network is prone to becoming trapped in local optima and is unstable under manual parameter adjustment, the flower pollination algorithm is introduced to optimize the neural network. The specific process is as follows: (1) The basic parameters of the neural network and the flower pollination algorithm are set. The total number of training samples is 250, the maximum number of training iterations is 6000, the learning rate is 0.03, and the error target is 0.001. The weights and thresholds of the network are coded into the pollen individuals, so that each individual represents a network structure [26]. A single-hidden-layer BP neural network can map any function [27,28], so a single hidden layer was used in this paper. The logarithmic sigmoid function was used as the transfer function between the hidden layer and the output layer, as shown in Equation (14):

f(x) = 1 / (1 + e^(−x)) (14)
The principal component analysis of the input sequence determined the number of neurons in the network input layer, which was 9. The output parameter is the silicon content of hot metal, so the number of output neurons was 1. The number of neurons in the hidden layer is determined by the empirical Formula (15) [29]:

N = √(n + i) + a (15)

where N is the number of neurons in the hidden layer; n is the number of neurons in the input layer; i is the number of neurons in the output layer; and a is a constant between 1 and 10.
The relationship between the number of hidden neurons and the error is shown in Figure 2. When the number of neurons in the hidden layer is 12, the prediction error reaches its minimum. Therefore, 12 neurons were selected for the hidden layer.
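Formula (15) only narrows the search to a candidate range; a quick sketch (rounding of the square root is an assumption for illustration) shows the range that the Figure 2 error comparison then selects from:

```python
import math

def hidden_sizes(n, i, a_range=range(1, 11)):
    """Empirical rule (Eq. 15): N = sqrt(n + i) + a, with a in [1, 10]."""
    return [round(math.sqrt(n + i)) + a for a in a_range]

# 9 input neurons (principal components), 1 output neuron (silicon content)
candidates = hidden_sizes(9, 1)
print(candidates)  # candidate hidden-layer sizes to test against the error
```

The selected value of 12 falls within this candidate range; the final choice is made by comparing prediction errors, as in Figure 2.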
(2) During the operation of the algorithm, the positions of the pollen individuals are initialized randomly. Each pollen position is regarded as one assignment of the neural network weights. The fitness value of each individual is calculated using Equation (16), and the individual with the smallest fitness value is retained.
E = (1/N) Σ_(j=1)^(N) Σ_(i=1)^(C) (Z_(j,i) − Y_(j,i))² (16)

where N is the total number of training samples; Z_(j,i) is the target output value; Y_(j,i) is the actual output value; and C is the number of output neurons in the network.
(3) According to the randomly generated switching probability, the positions of all pollen individuals are updated by either the local or the global search. The fitness is then calculated and the optimal solution is found.
(4) The optimal solution is decoded into the weights and thresholds of the BP neural network, and the training calculation is carried out. Whether the training conditions are met is judged from the final results. If they are, the training ends, samples are input, and a prediction is made. If not, steps (2) and (3) are repeated.
The basic flow of the FPA-BP algorithm is shown in Figure 3.
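Steps (1)-(4) can be condensed into a minimal sketch of the FPA search loop. This is not the authors' implementation: a Gaussian step stands in for the Levy step, and a simple sphere function stands in for the network training error of Equation (16); all names are illustrative.

```python
import random

def fpa_minimize(fitness, dim, n_pollen=20, p=0.8, iters=200, alpha=0.01):
    """Minimal flower pollination algorithm: each pollen individual encodes
    one candidate weight/threshold vector; the switch probability p chooses
    global (toward the best) or local (difference of two random individuals)
    pollination, and improvements are kept greedily."""
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_pollen)]
    best = min(pop, key=fitness)
    for _ in range(iters):
        for k in range(n_pollen):
            x = pop[k]
            if random.random() < p:   # global pollination (Eq. 9)
                step = random.gauss(0, 1)   # stand-in for a Levy step
                cand = [xi + alpha * step * (xi - bi) for xi, bi in zip(x, best)]
            else:                     # local pollination (Eq. 13)
                a, b = random.sample(pop, 2)
                eps = random.random()
                cand = [xi + eps * (ai - bi) for xi, ai, bi in zip(x, a, b)]
            if fitness(cand) < fitness(x):
                pop[k] = cand
        best = min(pop, key=fitness)
    return best

# toy fitness standing in for the network training error of Eq. (16)
random.seed(0)
best = fpa_minimize(lambda w: sum(wi * wi for wi in w), dim=5)
print(sum(b * b for b in best))
```

In the actual model, the fitness evaluation would train/evaluate the BP network with the decoded weights, and the best individual found is decoded back into the network as described in step (4).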

Data Set Segmentation
After the model parameters had been determined, the data set was divided.Two hundred and fifty data groups were used as the training set, and fifty groups were used as the test set.Figure 4 shows hot metal silicon content data segmentation before and after abnormal value processing.


Selection of Model Evaluation Indicators
The model's running time and prediction accuracy were used as the evaluation indicators. The running time of the model determines the timeliness of the prediction results. The prediction accuracy is the core index of the effectiveness of the algorithm and should be measured and characterized from multiple aspects. In this model, the hit rate, mean absolute error, root mean square error, and mean absolute percentage error were used as evaluation indicators for the accuracy of the prediction model [30,31].
The hit rate (HR) was selected to characterize the reliability of the model prediction within the acceptable process range. The hit rate is the ratio of the number of samples whose absolute prediction error is less than or equal to p to the total number of test samples; in this model, p = 0.1 is adopted.
The mean absolute error (MAE) represents the overall deviation between the predicted values of the model and the measured values.
The root mean square error (RMSE) represents the fluctuation of the deviation between the predicted values of the model and the measured reference values.
The mean absolute percentage error (MAPE) represents the relative value of the overall deviation between the predicted values and the measured values of the model.
where m is the hit rate; n is the number of samples; e_i is the prediction error; i is the test sample number; p is the required error value; ŷ_i is the predicted value; and y_i represents the actual value.
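The four indicators above can be computed directly from the definitions; the following sketch (function name assumed for illustration) implements them for a prediction/measurement pair:

```python
import math

def evaluate(y_true, y_pred, p=0.1):
    """Evaluation indicators of the prediction model:
    hit rate (share of |error| <= p, in %), MAE, RMSE, and MAPE."""
    n = len(y_true)
    errors = [yp - yt for yt, yp in zip(y_true, y_pred)]
    hr = sum(1 for e in errors if abs(e) <= p) / n * 100
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mape = sum(abs(e / yt) for yt, e in zip(y_true, errors)) / n
    return hr, mae, rmse, mape

# toy silicon-content values, for illustration only
y_true = [0.45, 0.50, 0.55, 0.60]
y_pred = [0.47, 0.48, 0.70, 0.61]
print(evaluate(y_true, y_pred))
```

Note that MAPE is returned as a fraction here; multiply by 100 if a percentage is preferred.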

Model Prediction
The model was built on the Jupyter Notebook platform using the Python language and validated with some of the input data. The results are shown in Table 4. It can be seen that the model prediction is feasible, and the FPA-BP model prediction results are more accurate than those of the BP prediction model.
Bring the test set into the BP model as well as the FPA-BP model for prediction, and compare the prediction effect of the optimized model with that of the non-optimized model.As shown in Figures 5 and 6, it can be seen that the overall prediction trend of the BP prediction model is consistent with the actual situation.When the data were stable, the predicted value was close to the actual value, but the error at the inflection point was significant.When the furnace condition fluctuated, the difference between the expected and actual values was substantial.This means that the prediction effect could not meet the essential requirements and needs further improvement.From the prediction results of the FPA-BP model, we can see that the predicted value was closer to the actual value, and the prediction effect of the model was better when the blast furnace was stable or fluctuating.The FPA-BP prediction model was superior to the BP.The BP neural network uses a stepwise descent method for weights and thresholds, which can easily fall into local extremum and slow convergence speed.The use of FPA optimization can significantly improve these problems, while also improving the generalization ability and prediction accuracy.Bring the test set into the BP model as well as the FPA-BP model for prediction, and compare the prediction effect of the optimized model with that of the non-optimized model.As shown in Figures 5 and 6, it can be seen that the overall prediction trend of the BP prediction model is consistent with the actual situation.When the data were stable, the predicted value was close to the actual value, but the error at the inflection point was significant.When the furnace condition fluctuated, the difference between the expected and actual values was substantial.This means that the prediction effect could not meet the essential requirements and needs further improvement.From the prediction results of the FPA-BP model, we can see that the predicted value was closer to 
the actual value, and the prediction effect of the model was better when the blast furnace was stable or fluctuating.The FPA-BP prediction model was superior to the BP.The BP neural network uses a stepwise descent method for weights and thresholds, which can easily fall into local extremum and slow convergence speed.The use of FPA optimization can significantly improve these problems, while also improving the generalization ability and prediction accuracy.Observe the prediction effect of the BP and FPA-BP models by analyzing the absolute error.The fundamental mistake of the BP model prediction shown in Figure 7 fluctuates between −0.2415 and 0.3430, while the maximum error is 0.1386.It can be seen that the absolute error undulates in a wide range and high frequency.As presented in Figure 8,  Bring the test set into the BP model as well as the FPA-BP model for prediction, and compare the prediction effect of the optimized model with that of the non-optimized model.As shown in Figures 5 and 6, it can be seen that the overall prediction trend of the BP prediction model is consistent with the actual situation.When the data were stable, the predicted value was close to the actual value, but the error at the inflection point was significant.When the furnace condition fluctuated, the difference between the expected and actual values was substantial.This means that the prediction effect could not meet the essential requirements and needs further improvement.From the prediction results of the FPA-BP model, we can see that the predicted value was closer to the actual value, and the prediction effect of the model was better when the blast furnace was stable or fluctuating.The FPA-BP prediction model was superior to the BP.The BP neural network uses a stepwise descent method for weights and thresholds, which can easily fall into local extremum and slow convergence speed.The use of FPA optimization can significantly improve these problems, while also improving the 
generalization ability and prediction accuracy. The prediction performance of the BP and FPA-BP models can be compared by analyzing the absolute error. As shown in Figure 7, the absolute error of the BP model prediction fluctuates between −0.2415 and 0.3430, with a maximum error of 0.1386; the error fluctuates over a wide range and at a high frequency. As shown in Figure 8, the absolute error of the FPA-BP model prediction fluctuates between −0.1178 and 0.1729, with a maximum error of 0.1186; the error is generally lower than that of the BP model and fluctuates within a narrower range and at a lower frequency.
The prediction indicators and running times of the two models are shown in Table 5. The hit rate (HR) of the FPA-BP prediction model was 86%, superior to the 70% of the BP neural network prediction model. The MAPE, MAE, and RMSE of the FPA-BP prediction model were 0.1305, 0.0444, and 0.0599, respectively, lower than the 0.1999, 0.0614, and 0.0932 of the BP neural network prediction model. These results show that the prediction performance indicators of the FPA-BP model were significantly better than those of the BP model, and the model accuracy was higher. In addition, the running time of the FPA-BP prediction model was 0.3230 s, faster than the 0.8601 s of the BP neural network model.
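The four indicators used above (HR, MAPE, MAE, RMSE) can be computed with a short helper. The following is a minimal sketch in Python; the ±0.1 hit-rate tolerance is an illustrative assumption, since the exact hit criterion is not restated in this section.

```python
import numpy as np

def evaluate(y_true, y_pred, hit_tol=0.1):
    """Compute HR, MAPE, MAE, and RMSE for a set of predictions.

    hit_tol is an assumed hit-rate tolerance (a prediction within
    +/- hit_tol of the measured silicon content counts as a hit).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    hr = np.mean(np.abs(err) <= hit_tol)          # hit rate
    mape = np.mean(np.abs(err) / np.abs(y_true))  # mean absolute percentage error
    mae = np.mean(np.abs(err))                    # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))             # root mean square error
    return hr, mape, mae, rmse
```

A lower MAPE, MAE, and RMSE and a higher HR indicate a better model, which is the basis of the comparison in Table 5.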

Conclusions
In view of the difficulty in predicting the silicon content in hot metal under complex furnace conditions, a prediction model based on PCA and FPA-BP was proposed, and the following conclusions were drawn:
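The PCA dimension-reduction step used to obtain the model inputs can be sketched as follows. This is a hedged illustration only: the 85% cumulative-variance threshold is an assumed cutoff, not a value taken from the paper.

```python
import numpy as np

def pca_reduce(X, var_ratio=0.85):
    """Project standardized data onto the leading principal components
    that together explain at least `var_ratio` of the total variance."""
    X = np.asarray(X, dtype=float)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
    cov = np.cov(Xs, rowvar=False)              # covariance matrix
    vals, vecs = np.linalg.eigh(cov)            # eigen-decomposition (ascending)
    order = np.argsort(vals)[::-1]              # sort by descending variance
    vals, vecs = vals[order], vecs[:, order]
    # smallest k whose cumulative variance ratio reaches the threshold
    k = int(np.searchsorted(np.cumsum(vals / vals.sum()), var_ratio)) + 1
    return Xs @ vecs[:, :k]                     # scores of the kept components
```

The retained component scores then serve as the network inputs, with the silicon content of the hot metal as the output.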

Figure 1 .
Figure 1. The basic structure of the BP neural network.

Figure 2 .
Figure 2. Error comparison of the number of hidden layer neurons.

Figure 3 .
Figure 3. Schematic diagram of the optimization model of the flower pollination algorithm.

Figure 4 .
Figure 4. Segmentation of silicon content data in hot metal after outlier processing.

Figure 5 .
Figure 5. Comparison between the actual value and the BP prediction value.

Figure 6 .
Figure 6. Comparison between the actual value and the predicted value of FPA-BP.

Figure 7 .
Figure 7. Prediction error of the BP model.

Figure 8 .
Figure 8. Prediction error of the FPA-BP model. The characteristics and running times of the BP and FPA-BP models are shown in Table 5.

Table 1 .
Partial raw data table.

Table 2 .
Principal component analysis of the data.

Table 4 .
Model prediction results and some original data.

Table 5 .
Comprehensive quantitative characterization of prediction results of the FPA-BP and BP algorithms.
