Article

Prediction and Optimization of Matte Grade in ISA Furnace Based on GA-BP Neural Network

1 Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming 650093, China
2 Yunnan Copper Industry Co., Ltd., Kunming 650102, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(7), 4246; https://doi.org/10.3390/app13074246
Submission received: 31 January 2023 / Revised: 18 March 2023 / Accepted: 23 March 2023 / Published: 27 March 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

The control of matte grade determines the production cost of the copper smelting process. In this paper, an optimal matte-grade control model is established to derive the optimal matte grade, with the objective of minimizing the cost of the whole copper smelting process. Using the prediction capability of the BP (backpropagation) neural network, a BP neural network prediction model for matte grade is also established that considers the main factors affecting matte grade, including the amount and composition of the input copper concentrate, the air blast volume, the oxygen supply volume, the flux amount, and other process parameters. In addition, the optimal matte grade is used to optimize the dosing, air supply, and oxygen supply of the ISA furnace and the other furnaces. A BP network used alone solves a nonconvex problem by gradient descent, tends to fall into local minima, and therefore shows some bias in its predictions; this problem can be solved by optimizing its weights and thresholds with a GA (genetic algorithm) to find the optimal solution. The analysis results show that the mean absolute error of the GA-optimized BP neural network prediction model for matte grade is 0.51%, which is better than the 1.17% of the single BP neural network model.

1. Introduction

It is difficult to establish a mechanism model of the smelting process because of the complex furnace structure, the variety of materials in the furnace, and the rapid physical and chemical reactions taking place there. Matte grade determines the amount of ISA furnace slag, which directly affects the heat loss of the ISA furnace and the heat supply of the converter, so matte grade control is crucial in copper smelting. Meanwhile, the processing capacity for converter blowing slag and the planned cold material determine the matte grade required for smelting and, finally, the production organization mode and smelting cost of the smelting furnace and converter. However, the copper smelting system is a multidimensional, nonlinear, dynamic, and open complex system, and matte grade is related to many factors, which makes the prediction and modeling of matte grade an NP-complete (nondeterministic polynomial complete) problem.
The BP neural network [1,2] is widely used in production prediction because of its self-learning, self-organizing, and self-adapting capabilities and its strong nonlinear function approximation ability [3,4]. He et al. [5] proposed a hybrid model based on ladle thermal state and a BP neural network to capture the characteristics of the actual steelmaking process; it helps to overcome the difficulty that a single mechanism model of the steelmaking process cannot accurately predict the temperature of molten steel. Practical application shows that the model achieves good prediction performance. Senthilkumar et al. [6] developed a BP neural network model with a 3-2-2-1 network structure to predict the flow stresses in Al-Mg nanocomposites at high temperatures. Temperature, strain, and strain rate data collected during hot compression experiments were used as model inputs, and the predicted flow stress was compared with the Arrhenius hyperbolic and Johnson-Cook constitutive models; the results indicate that the BP neural network model had better prediction accuracy. Li et al. [7] used a BP neural network to predict the leaching rate of indium and to optimize the acid leaching process of ITO (indium tin oxide) waste targets for indium recovery. The results showed that the relative error between the output of the prediction model and the actual leaching rate was 1.0~1.3%. Cui et al. [8] established a BP neural network prediction model for variables that cannot be measured in real time to predict the silicon (Si) content during blast furnace smelting; combined with an expert system, the prediction model finally achieved optimal control of the coal injection process. Liu et al. [9] established a prediction model of end-point phosphorus content for the BOF (basic oxygen furnace) based on a monotone-constrained backpropagation (BP) neural network, and 200 sets of data were used to verify its accuracy. The hit ratios in the ranges of ±0.005% and ±0.003% are 94% and 74%, respectively, indicating better predictive accuracy than an ordinary BP neural network. Singha et al. [10] used the initial pH value, initial Pb2+ concentration, adsorbent dosage, and contact time as the input layer of a BP neural network to predict the removal rate of Pb2+ in a study of hydrometallurgical lead extraction, and the results showed that the model's predictions were excellent. Although the BP neural network provides high generalization ability in model prediction, it needs to be improved by an optimization algorithm because of its slow learning speed and tendency to fall into local minima [11].
The genetic algorithm (GA) [12,13] is a widely used optimization algorithm. Through iterative evolution of a population, individuals with higher fitness in the current population are inherited by the next generation and individuals with lower fitness are eliminated; finally, the individual with the highest fitness is found and the optimization result is obtained. Hoseinian et al. [14] studied the column-leaching process of copper oxide ore and designed a BP neural network model with an optimal 4-15-10-1 structure to predict copper recovery, using GA to optimize the network connection weights and thresholds; the mean square error decreased from 66.67 to 0.02. Liu et al. [15] proposed a power supply optimization strategy based on an improved BP neural network and compared a PSO-BP (particle swarm optimization-backpropagation) neural network with a GA-BP neural network; the prediction error of the GA-optimized BP (GA-BP) power supply strategy was the smallest. Liu et al. [16,17] established a BOF end-point temperature and carbon content prediction model and a BOF end-point P and O content prediction model based on a principal component analysis (PCA)-genetic algorithm (GA)-backpropagation (BP) neural network; the root mean square errors between predicted and actual values are 7.89 for temperature, 0.0030 for carbon content, 0.0015% for P content, and 0.0049% for O content. Chakraborty et al. [18] used a real-coded genetic algorithm to optimize a BP neural network to predict phase transition temperatures during continuous cooling of steel. The input layer included seven influencing factors, e.g., the chemical composition of the steel, the cooling rate, and the austenitizing temperature; the output layer gave six temperature values, e.g., the ferrite start and end transition temperatures and the pearlite start and end transition temperatures. The experimental results show that the optimized predictions are closer to the actual values. Wang et al. [19,20] and Zeng [21] established a copper flash smelting neural network model to predict matte grade, optimized its weights by GA, and then simulated and optimized the input-layer process parameters by GA. The optimization results are better than those of the single BP neural network model, but the model considers only the energy costs of the flash smelting process. Wang et al. [22] proposed a GA-Elman-Regularization neural network algorithm in which regularization is introduced to improve the generalization ability and accuracy of the network. Comparing the motion trajectory predictions of the Elman, GA-Elman, and GA-Elman-Regularization neural networks on a semiphysical dataset, the average prediction errors are 1.37%, 0.82%, and 0.556%, respectively. The experiments show that the optimized algorithm improves both generalization ability and prediction accuracy.
In this paper, a matte-grade optimization control model is established to solve for the optimal matte grade, with the objective of minimizing the cost of the whole copper smelting process. Meanwhile, based on the global optimization ability of GA and the strong generalization ability of the BP neural network, a GA-BP neural network matte-grade prediction model is established to predict the matte grade from the ISA smelting process parameters in real time. The prediction is then compared with the optimal matte grade to regulate each furnace so that production operates economically. In this model, the BP neural network prediction model is trained to obtain weights and thresholds, and GA is used to find the optimal weights and thresholds. The prediction results of the single BP neural network and the GA-BP neural network are compared with the expected results to verify the effectiveness of the prediction model.

2. Model

The matte-grade optimization control model was established in MATLAB, taking the lowest production cost of the whole copper smelting process as the objective function. At the same time, a BP neural network prediction model of matte grade was established to predict the matte grade in real time.

2.1. Matte-Grade Optimization Control Model

As one of the most important control parameters in copper ISA smelting, converter blowing, and anode furnace refining, matte grade determines the oxygen scheduling scheme of the ISA furnace and converter and the materials entering the ISA furnace, converter, and anode furnace. Optimization control of matte grade means finding the best matte grade for a given amount of copper concentrate, under the constraints of the converter slag processing capacity and the planned cold material processing capacity during converter blowing, while keeping the process parameters (including total oxygen, total fan blast, and flux amount) within their allowable ranges and minimizing the smelting cost. The nonlinear mathematical model for matte-grade optimization control is established as follows:
$$
\begin{aligned}
\min W = {} & C_{\mathrm{ISA}} P_{\mathrm{coal}} + \left( O_{\mathrm{ISA}} + O_{\mathrm{ps}} \right) P_{\mathrm{O_2}} + \left( A_{\mathrm{ISA}} + A_{\mathrm{ps}} + A_{\mathrm{an}} \right) P_{\mathrm{air}} + \left( F_{\mathrm{ISA}} + F_{\mathrm{ps}} \right) P_{\mathrm{flux}} + B_{\mathrm{an}} P_{\mathrm{re}} \\
\text{s.t.} \quad & G_{\min} \le G \le G_{\max} \\
& R_{\mathrm{O_2}\min} \le R_{\mathrm{O_2}} \le R_{\mathrm{O_2}\max} \\
& S_{\mathrm{Fe}\min} \le S_{\mathrm{Fe}} \le S_{\mathrm{Fe}\max} \\
& S_{\mathrm{CaO}\min} \le S_{\mathrm{CaO}} \le S_{\mathrm{CaO}\max} \\
& C_{i\min} \le C_i \le C_{i\max} \quad (i = 1, 2, 3, 4) \\
& S_{\mathrm{ps}\min} \le S_{\mathrm{ps}} \le S_{\mathrm{ps}\max} \\
& D_{\min} \le D_i \le D_{\max} \quad (i = 1, 2, 3)
\end{aligned}
$$
where C_ISA, O_ISA, A_ISA, and F_ISA are the coal feed amount (t), oxygen supply volume (Nm3), blast volume (Nm3), and flux amount (t) of the ISA furnace, respectively; O_ps, A_ps, and F_ps are the oxygen supply volume (Nm3), blast volume (Nm3), and flux amount (t) of the PS converter, respectively; A_an and B_an are the blast volume (Nm3) and reductant mass (t) of the anode furnace; P_coal is the unit price of coal (Yuan/t); P_O2 is the unit price of oxygen (Yuan/Nm3); P_air is the unit price of compressed air (Yuan/Nm3); P_flux is the unit price of flux (Yuan/t); P_re is the unit price of reductant (Yuan/t); G is the matte grade (%), and G_max and G_min are its upper and lower limits, respectively; R_O2 is the oxygen-enriched concentration (%), and R_O2max and R_O2min are its upper and lower limits, respectively; S_Fe is the SiO2/Fe slag ratio (%), and S_Femax and S_Femin are its upper and lower limits, respectively; S_CaO is the SiO2/CaO slag ratio (%), and S_CaOmax and S_CaOmin are its upper and lower limits, respectively; C_i (i = 1, 2, 3, 4) represents the amounts of cold material, residual anode, crude copper, and anode slag processed in each converter (t), and C_max and C_min are the upper and lower limits of C_i, respectively; S_ps is the amount of slag per PS furnace (t), and S_psmax and S_psmin are its upper and lower limits, respectively; D_i (i = 1, 2, 3) represents the amounts of cold material, residual anode, and purchased crude copper processed in each anode furnace (t), and D_max and D_min are the upper and lower limits of D_i, respectively. The relationships among these process parameters are nonlinear, and the best matte grade can be found by GA under the condition of the lowest cost for the whole process of copper smelting.
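Purely as an illustration of how such a constrained cost minimization could be set up numerically, the sketch below defines a toy cost function and box constraints and searches them with SciPy's differential evolution as a stand-in for the GA described above; all unit prices, consumption formulas, and bounds are hypothetical placeholders, not plant data.

```python
# Hypothetical sketch of the matte-grade cost-minimization setup (not the authors' code).
# Decision vector x = [G, R_O2, S_Fe, S_CaO], a simplified subset of the real variables.
import numpy as np
from scipy.optimize import differential_evolution

# Placeholder unit prices (Yuan per unit); real values come from plant accounting.
P_COAL, P_O2, P_AIR, P_FLUX, P_RE = 800.0, 0.6, 0.1, 300.0, 2500.0

def smelting_cost(x):
    """Toy cost W; the consumption terms are assumed simple functions of the
    decision variables and are NOT the plant's real mass/heat balance."""
    G, r_o2, s_fe, s_cao = x
    coal = 5.0 + 0.10 * (70.0 - G)             # t, assumed: lower grade -> more fuel
    oxygen = 1.2e5 + 1.5e3 * G + 0.5e3 * r_o2  # Nm3, assumed
    air = 2.4e5 - 1.0e3 * r_o2                 # Nm3, assumed: more enrichment -> less blast
    flux = 30.0 + 0.15 * s_fe + 0.25 * s_cao   # t, assumed
    reductant = 1.0 + 0.01 * (70.0 - G)        # t, assumed
    return (coal * P_COAL + oxygen * P_O2 + air * P_AIR
            + flux * P_FLUX + reductant * P_RE)

# Box constraints G_min <= G <= G_max, etc. (illustrative ranges only).
bounds = [(50.0, 65.0),   # matte grade G, %
          (60.0, 80.0),   # oxygen-enriched concentration R_O2, %
          (70.0, 90.0),   # SiO2/Fe slag ratio, %
          (10.0, 30.0)]   # SiO2/CaO slag ratio, %

result = differential_evolution(smelting_cost, bounds, seed=0)
print("optimal matte grade (toy model): %.2f%%, cost: %.0f Yuan" % (result.x[0], result.fun))
```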

2.2. Matte-Grade Prediction Model Based on BP Neural Network

In the production of matte in an ISA furnace, the matte grade is determined by process parameters such as the composition and feed of the raw materials (copper concentrate, flux, air, and oxygen) and the smelting characteristics. The nonlinear relationship between matte grade and its influencing factors can be mined from the production data of the copper smelting process using the characteristics of the BP neural network, such as parallel and distributed information processing, self-organization, self-adaptation, and the ability to approximate any complex nonlinear function. As different samples are input, the running state of the ISA furnace is dynamically tracked, and the matte grade under the current production conditions is predicted in real time. The real-time matte grade is then compared with the best matte grade from the matte-grade optimization control model to adjust and control the feeding of the ISA furnace, the converter, and the anode furnace. In this way, the cost of the whole copper smelting process can be minimized.

2.2.1. Model Structure

Based on the main factors X that affect the matte grade, the BP neural network can build a nonlinear model G = F(X) between the matte grade G and the influencing factors X, where the vector of influencing factors X is as follows:
X = [X1, X2, X3, X4, X5, X6, X7, X8]T
In this formula, X1 is Cu content, X2 is S content, X3 is Fe content, X4 is SiO2 content, X5 is CaO content in the copper concentrate, X6 is oxygen volume, X7 is blast volume, and X8 is flux amount.
In the model, the influence of each factor on matte grade is reflected by its weight, and the final network output is made to approach the actual value through the self-training, learning, and adjustment abilities of the BP network. The model uses the eight main influencing factors X as the feature set and the matte grade as the single output, with a complete dataset of 910 samples; 900 sets of data were used as the training set and 10 as the test set. The number of hidden layer units is determined to be 13 by the empirical formula $n_1 = \sqrt{n + m} + \alpha$ [23], where $n_1$ is the number of hidden layer units, $n$ is the number of input layer units, $m$ is the number of output layer units, and $\alpha$ is a constant between 1 and 10. The matte-grade prediction model with the 8-13-1 network structure is shown in Figure 1.
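As a minimal sketch of this structure, assuming the empirical sizing rule above and a generic random initialization (the 8-13-1 layer shapes follow the paper, but the initialization scheme is our own choice):

```python
import numpy as np

n_in, n_out = 8, 1                    # inputs X1..X8, output = matte grade
alpha = 10                            # constant in [1, 10]; chosen here so that n1 = 13
n_hidden = int(round(np.sqrt(n_in + n_out))) + alpha   # sqrt(8 + 1) + 10 = 13

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)                             # hidden-layer thresholds
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))  # hidden -> output weights
b2 = np.zeros(n_out)                                # output-layer threshold
print("network structure: %d-%d-%d" % (n_in, n_hidden, n_out))   # 8-13-1
```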

2.2.2. Model Algorithm Process

Nonlinear relationship fitting with a BP neural network is usually completed in three steps: network construction, network training, and network prediction. The training process consists of forward propagation and error backpropagation. The forward propagation direction is input layer → hidden layer → output layer, and the state of the neurons in each layer affects only the neurons in the next layer. If the desired output is not obtained at the output layer, the error is propagated backward. By alternating these two processes, a gradient descent strategy on the error function is applied in the weight and threshold vector space, and a set of weights and thresholds is iteratively searched for to minimize the error between the predicted and expected values, thereby completing the process of information extraction and memory. According to the structure of the matte-grade BP neural network and the relationship between matte grade and its influencing factors, the algorithm process is determined as shown in Figure 2.
The specific algorithm is described as follows:
(1)
Small weights and thresholds are randomly assigned to the network.
(2)
The training samples (input and output data) are imported, the network output values are calculated and compared with the expected output values, and the local errors of all layers are obtained.
(3)
The network is trained, and the weights and thresholds of each layer are adjusted by the iterative formulas. For the output layer and the hidden layer, the weight change from the ith input to the kth output is as follows:
$$\Delta w_{ki}(t) = -\eta \frac{\partial E(t)}{\partial w_{ki}(t)}$$
The threshold changes to:
$$\Delta b_{ki}(t) = -\eta \frac{\partial E(t)}{\partial b_{ki}(t)}$$
Here, E(t) is the mean square error; w(t) and b(t) are the weights and thresholds at time t, respectively; Δw(t) and Δb(t) are the weight and threshold adjustments at time t, respectively; and η is the learning rate.
The total sum of squared errors is then calculated. If its value is less than the set target or the number of iterations reaches the set maximum, training stops; otherwise, the network continues to iteratively adjust the weights and thresholds until one of these conditions is met.
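The following is a compact, self-contained sketch of this training loop (forward propagation, error backpropagation, and the weight and threshold updates above) for an 8-13-1 network with a sigmoid hidden layer and a linear output; the synthetic data, initialization, and stopping values are placeholders standing in for the plant data and the authors' settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 8, 13, 1
# Synthetic stand-in data: 900 samples of 8 normalized inputs and one target value.
X = rng.uniform(size=(900, n_in))
y = X @ rng.uniform(size=(n_in, n_out)) * 0.1 + 0.5   # placeholder "matte grade" targets

# Step (1): small random weights and thresholds.
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out)); b2 = np.zeros(n_out)

eta, epochs = 0.1, 100                 # learning rate and iteration limit (cf. Section 4.1)
for epoch in range(epochs):
    # Step (2): forward propagation input -> hidden -> output.
    H = sigmoid(X @ W1 + b1)           # hidden-layer activations
    O = H @ W2 + b2                    # linear output layer
    err = O - y                        # output error
    E = np.mean(err ** 2)              # mean square error E(t)
    if E < 4e-5:                       # training error target (cf. Ep in Section 4.1)
        break
    # Step (3): error backpropagation; delta_w = -eta * dE/dw, delta_b = -eta * dE/db.
    grad_O = 2.0 * err / len(X)
    dW2 = H.T @ grad_O;  db2 = grad_O.sum(axis=0)
    grad_H = (grad_O @ W2.T) * H * (1.0 - H)
    dW1 = X.T @ grad_H;  db1 = grad_H.sum(axis=0)
    W2 -= eta * dW2;  b2 -= eta * db2
    W1 -= eta * dW1;  b1 -= eta * db1

print("final training MSE: %.6f" % E)
```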

3. Model-Solving Algorithm

Based on the prediction model, the GA algorithm is used to optimize the weights and thresholds of the model to improve its prediction ability.

3.1. Model-Solving Process

According to the matte-grade optimization control model, the matte grade and the related process parameters of each furnace can be obtained under the condition of the lowest energy consumption cost for the whole copper smelting process, while the BP neural network prediction model predicts the matte grade in real time under the current production conditions. Combining the optimization control model and the prediction model, the predicted matte grade is compared with the optimal value, and the production process parameters are regulated so that the current matte grade approaches the optimal value, achieving the goal of optimal production cost.
In solving the BP neural network prediction model of matte grade, GA is also used to optimize the network weights and thresholds so that the predicted values are more accurate. The specific model-solving process is shown in Figure 3.
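As a purely illustrative sketch of the comparison-and-adjustment step (not the plant's actual control logic), the hypothetical function below nudges one process parameter toward the optimal matte grade; the gain, tolerance, and the choice of oxygen volume as the manipulated variable are assumptions.

```python
def regulate(pred_grade, opt_grade, oxygen, blast, flux, k_o2=500.0, tol=0.3):
    """Hypothetical proportional correction: if the predicted matte grade deviates from
    the optimal grade by more than tol (percentage points), adjust the oxygen volume (Nm3).
    The positive gain k_o2 assumes that more oxygen raises the matte grade."""
    error = opt_grade - pred_grade
    if abs(error) <= tol:
        return oxygen, blast, flux               # within tolerance: keep current settings
    return oxygen + k_o2 * error, blast, flux    # simple single-variable correction

# Example: predicted 56.4% vs. optimal 58.0% -> increase the oxygen supply.
print(regulate(56.4, 58.0, oxygen=205000, blast=165000, flux=45))
```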

3.2. GA-BP Neural Network Algorithm

Because the BP neural network contains hidden units and gradient descent is performed over the weight and threshold vector space, the global minimum of the error cannot be guaranteed, and therefore neither can the optimal weights and thresholds. Although the BP neural network model already has a certain ability to predict matte grade, to predict it more accurately this paper uses GA to optimize the weights and thresholds of the BP neural network model; the population-based search reduces the network's excessive dependence on the gradient and ultimately aims for the global minimum error.
The GA-optimized BP neural network consists of three parts: determination of the BP neural network structure, GA optimization, and BP neural network prediction. The design of the BP neural network structure has been discussed above; the number of nodes in each layer of the network determines the length of the GA individual. When GA is used to optimize the weights and thresholds of the BP neural network, each individual in the population contains all the weights and thresholds of one network; each individual's fitness value is computed by the fitness function, and the genetic algorithm finds the individual with the optimal fitness value through selection, crossover, and mutation operations. For BP neural network prediction, the optimal individual obtained by the genetic algorithm is used to assign the initial weights and thresholds of the network, the data are then used to train the network, and the result is the output of the prediction model. The specific algorithm flow chart is shown in Figure 4, and a consolidated code sketch of the operators in steps (1)-(6) is given after step (6):
(1)
Chromosome coding
According to the BP neural network for matte grade established previously, all of its weights and thresholds are used as genes to construct chromosomes, which are encoded with segmented real-number coding so that each chromosome is a real-number string. All genes form a chromosome vector H = [h1, h2, ···, hk, ···, hn], where [h1, h2, ···, hk] are the weight genes of the chromosome and [hk+1, ···, hn] are the threshold genes.
(2)
Population initialization
L chromosomes are generated randomly when the BP neural network weights and thresholds are initialized, and the initial population size is set to S.
(3)
Fitness function
The chromosomes in the genetic algorithm population are used as the weights and thresholds of the BP neural network, the trained BP neural network is used for model prediction, and the sum of the absolute errors between the predicted and desired outputs is taken as the individual fitness value F, calculated as follows:
$$F = \sum_{i} \left| y_i - o_i \right|$$
In this formula, yi is the expected output value of matte grade for the ith input sample of the BP neural network, and oi is the predicted output value of matte grade for the ith input sample. The genetic algorithm optimizes the weights and thresholds of the matte-grade BP neural network, with the goal of obtaining the weights and thresholds with the smallest F in the evolutionary process.
(4)
Selection operation
The genetic algorithm offers various selection methods, such as the roulette wheel method and the tournament method [24]. In this paper, the roulette wheel method, i.e., a selection strategy based on fitness proportion, is chosen, and the selection probability pi of each individual i is as follows:
$$f_i = 1 / F_i$$
$$p_i = \frac{f_i}{\sum_{j=1}^{N} f_j}$$
In this formula, Fi is the fitness value of individual i; since a smaller fitness value is better, the inverse of the fitness value is taken before individual selection, and fi is the inverse of Fi; N is the number of individuals in the population.
(5)
Crossover operations
Since the individuals are real-coded, the crossover operation uses a simple single-point crossover. The crossover of the uth chromosome au and the vth chromosome av at position w is performed as follows:
$$\begin{cases} a_{uw} = a_{uw}(1 - b) + a_{vw} b \\ a_{vw} = a_{vw}(1 - b) + a_{uw} b \end{cases}$$
In this formula, b is a random number between [0, 1].
(6)
Mutation operation
The jth gene of the ith chromosome was selected for mutation. The mutation is performed as follows:
$$a_{ij} = \begin{cases} a_{ij} + \left( a_{ij} - a_{\max} \right) \times f(g), & 0.5 P_m \le r \le P_m \\ a_{ij} + \left( a_{ij} - a_{\min} \right) \times f(g), & r < 0.5 P_m \end{cases}$$
In this formula, amax is the upper bound of gene aij; amin is the lower bound of gene aij; f(g) = r(1 − g/Gmax); r is a random number in [0, 1]; g is the current generation number; Gmax is the maximum number of generations; and Pm is the gene mutation probability.
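The sketch referenced above consolidates steps (1)-(6) into one runnable illustration; the population size, gene bounds, and especially the toy fitness target are placeholders (the real fitness would run the trained matte-grade BP network on the plant data), and the mutation branches follow the reading of the rule given in step (6).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- (1) chromosome coding: all weights and thresholds flattened into one real vector ---
N_GENES = 8 * 13 + 13 + 13 * 1 + 1           # weights + thresholds of an 8-13-1 network

# --- (2) population initialization ---
POP_SIZE, G_MAX, P_C, P_M = 10, 200, 0.3, 0.1
population = rng.normal(scale=0.1, size=(POP_SIZE, N_GENES))

# --- (3) fitness: sum of absolute prediction errors; a toy stand-in target is used here
#         instead of evaluating the trained BP network on the plant data ---
toy_target = rng.normal(scale=0.1, size=N_GENES)
def fitness(chrom):
    return np.sum(np.abs(toy_target - chrom))    # F = sum_i |y_i - o_i| (smaller is better)

# --- (4) roulette-wheel selection on the inverse fitness f_i = 1/F_i ---
def select(pop, F):
    p = (1.0 / F) / np.sum(1.0 / F)
    idx = rng.choice(len(pop), size=len(pop), replace=True, p=p)
    return pop[idx].copy()

# --- (5) single-point arithmetic crossover at position w with blend b in [0, 1] ---
def crossover(pop, p_c):
    for u in range(0, len(pop) - 1, 2):
        if rng.random() < p_c:
            w, b = rng.integers(N_GENES), rng.random()
            au, av = pop[u, w], pop[u + 1, w]
            pop[u, w] = au * (1 - b) + av * b
            pop[u + 1, w] = av * (1 - b) + au * b
    return pop

# --- (6) mutation with shrinking step f(g) = r * (1 - g / G_max) ---
def mutate(pop, g, p_m, a_min=-1.0, a_max=1.0):   # gene bounds are placeholders
    for chrom in pop:
        for j in range(N_GENES):
            r = rng.random()
            if r > p_m:                           # gene mutates only if r <= Pm
                continue
            f_g = r * (1.0 - g / G_MAX)
            if r >= 0.5 * p_m:
                chrom[j] += (chrom[j] - a_max) * f_g   # a_ij + (a_ij - a_max) f(g)
            else:
                chrom[j] += (chrom[j] - a_min) * f_g   # a_ij + (a_ij - a_min) f(g)
    return pop

best = None
for g in range(G_MAX):
    F = np.array([fitness(c) for c in population])
    if best is None or F.min() < best[0]:
        best = (F.min(), population[F.argmin()].copy())   # keep the best individual so far
    population = mutate(crossover(select(population, F), P_C), g, P_M)

print("best fitness found:", round(best[0], 4))
# best[1] would then initialize the BP network weights and thresholds for final training.
```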

4. Simulation and Analysis

To train the network, 900 sets of production data from a copper smelter were used as training samples, and another 10 sets were used to test the network model and compare its output with the desired values. Ten randomly selected groups of training samples are provided in Table 1.

4.1. BP Neural Network Simulation and Analysis

Several simulation experiments were carried out with the established BP neural network prediction model for matte grade, and the final optimal network parameters were obtained as follows: number of iterations I = 100, learning rate η = 0.1, and training error target Ep = 0.00004. The model prediction data and expected data are given in Table 2.
Figure 5 shows a line graph of the output data of the matte-grade neural network prediction model compared to the desired output data.
From the results in Table 2 and Figure 5, it can be seen that the BP neural network prediction model of matte grade has a good ability to mine the implicit relationships in the samples and that training on the samples gives the model a certain generalization ability. Meanwhile, the errors between the network outputs for the test samples and the expected outputs were kept within the error range.

4.2. GA-BP Neural Network Simulation and Analysis

Based on the BP neural network prediction model of matte grade with the 8-13-1 structure, a genetic algorithm was designed to optimize its weights and thresholds. The weight and threshold genes of all layers constitute a chromosome. The design parameters are: number of chromosomes L = 90, population size S = 10, number of generations M = 200, crossover probability Pc = 0.3, and mutation probability Pm = 0.1. After simulation experiments with this GA-BP neural network prediction model for matte grade, the model's predicted data are compared with the actual expected data in Table 3.
Figure 6 compares the network outputs of the test samples with the desired outputs after simulation of the matte-grade GA-BP neural network prediction model on actual production data.
From the test results of the GA-BP neural network prediction model of matte grade in Table 3 and Figure 6, it can be seen that the BP neural network model optimized by the genetic algorithm predicts matte grade more accurately than the single BP neural network. From Table 3, the mean absolute error of the matte-grade GA-BP neural network prediction model is 0.51%, which is 56.41% lower than that of the matte-grade BP neural network prediction model; the GA-BP predictions have smaller errors and are closer to the expected values, so the optimization effect is obvious.
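The reported relative improvement follows directly from the two mean absolute errors; a one-line check:

```python
mae_bp, mae_ga_bp = 1.17, 0.51                 # mean absolute errors (%) reported above
reduction = (mae_bp - mae_ga_bp) / mae_bp      # relative reduction
print("relative reduction: %.2f%%" % (100 * reduction))   # -> 56.41%
```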

5. Conclusions

(1)
An optimization control model for matte grade was developed, taking the lowest smelting cost W of the whole copper smelting process as the objective function, to find the best matte grade and the corresponding consumption of oxygen, air, flux, and other incoming materials for the ISA furnace, converter, and anode furnace.
(2)
A BP neural network prediction model of matte grade was established, and its weights and thresholds were optimized by GA. A stable and reasonable input-output relationship between matte grade and its influencing factors was obtained through sample training.
(3)
By combining the matte-grade optimization control model and the matte-grade BP neural network prediction model, the matte grade is predicted in real time and compared with the optimal matte grade, and the process parameters of each furnace are adjusted so that production operates economically.
(4)
In the future, based on the matte-grade optimization control model and the matte-grade GA-BP neural network prediction model, a three-flow optimization and prediction model coupling material flow, energy flow, and value flow can be established for copper smelting processes with various furnaces.

Author Contributions

Conceptualization, L.Z., D.Z., and D.L.; methodology, L.Z., D.Z., H.W., and L.J.; software, L.Z., D.Z., Z.X., and L.J.; validation, L.Z., D.Z., and D.L.; formal analysis, L.Z. and D.Z.; investigation, D.Z., D.L., and L.J.; resources, L.Z., D.Z., D.L., and H.W.; data curation, L.Z., D.Z., and Z.X.; writing—original draft preparation, L.J. and D.Z.; writing—review and editing, L.Z., D.Z., and Z.X.; visualization, L.Z. and D.Z.; supervision, D.Z., D.L., and H.W.; project administration, D.Z. and D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Yunnan Major Scientific and Technological Projects (grant No. 202202AG050002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The financial support by the Yunnan Major Scientific and Technological Projects and Dong Li (YEIG Dianzhong Electric Distribution and Retail Supply Co., Ltd., Kunming, China) is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hagan, M.T.; Demuth, H.B.; Beale, M. Neural Network Design; PWS Publishing Co.: Boston, MA, USA, 2007; Volume 3, pp. 154–196. [Google Scholar]
  2. Ding, S.; Su, C.; Yu, J. An Optimizing Bp Neural Network Algorithm Based on Genetic Algorithm; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2011. [Google Scholar]
  3. Ren, C.; An, N.; Wang, J.; Li, L.; Hu, B.; Shang, D. Optimal parameters selection for BP neural network based on particle swarm optimization: A case study of wind speed forecasting. Knowl. Based Syst. 2014, 56, 226–239. [Google Scholar] [CrossRef]
  4. Okun, M.S. Deep-brain stimulation—Entering the era of human neural-network modulation. N. Engl. J. Med. 2014, 371, 1369–1373. [Google Scholar] [CrossRef] [PubMed]
  5. He, F.; He, D.-F.; Xu, A.-J.; Wang, H.B.; Tian, N.Y. Hybrid Model of Molten Steel Temperature Prediction Based on Ladle Heat Status and Artificial Neural Network. J. Iron Steel Res. 2014, 21, 181–190. [Google Scholar] [CrossRef]
  6. Senthilkumar, V.; Balaji, A.; Arulkirubakaran, D. Application of constitutive and neural network models for prediction of high temperature flow behavior of Al/Mg based nanocomposite. Trans. Nonferrous Met. Soc. China 2013, 23, 1737–1750. [Google Scholar] [CrossRef]
  7. Li, R.-D.; Yuan, T.C.; Fan, W.B.; Qiu, Z.L.; Su, W.J.; Zhong, N.Q. Recovery of indium by acid leaching waste ITO target based on neural network. Trans. Nonferrous Met. Soc. China 2014, 24, 257–262. [Google Scholar] [CrossRef]
  8. Cui, G.M.; Hu, D.-F.; Xiang, M.A. Operational-Pattern Optimization in Blast Furnace PCI Based on Prediction Model of Neural Network. J. Iron Steel Res. 2014, 26, 8–12. [Google Scholar]
  9. Zhou, K.X.; Lin, W.H.; Sun, J.K.; Zhang, J.S.; Zhang, D.Z.; Feng, X.M.; Liu, Q. Prediction model of end-point phosphorus content for BOF based on monotone-constrained BP neural network. J. Iron Steel Res. Int. 2021, 29, 751–760. [Google Scholar] [CrossRef]
  10. Singha, B.; Bar, N.; Das, S.K. The use of artificial neural network (ANN) for modeling of Pb(II) adsorption in batch process. J. Mol. Liq. 2015, 211, 228–232. [Google Scholar] [CrossRef]
  11. Alabass, M.; Jaf, S.; Abdullah, A.H.M. Optimize BpNN using new breeder genetic algorithm. In Proceedings of the International Conference on Advanced Intelligent Systems and Informatics, Cairo, Egypt, 24–26 October 2016; Springer International Publishing: Cham, Switzerland, 2017; pp. 373–382. [Google Scholar]
  12. Horn, J.; Nafpliotis, N.; Goldberg, D.E. A Niched Pareto Genetic Algorithm for Multiobjective Optimization. In Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence, Orlando, FL, USA, 27–29 June 1994; Volume 1, pp. 82–87. [Google Scholar]
  13. Deb, K.; Agrawal, S.; Pratap, A.; Meyarivan, T. A Fast Elitist Non-dominated Sorting Genetic Algorithm for Multi-objective Optimization: NSGA-II. In Proceedings of the Parallel Problem Solving from Nature PPSN VI, Paris, France, 18–20 September 2000; Springer: Berlin/Heidelberg, Germany, 2000; pp. 849–858. [Google Scholar]
  14. Hoseinian, F.S.; Abdollahzade, A.; Mohamadi, S.S.; Hashemzadeh, M. Recovery prediction of copper oxide ore column leaching by hybrid neural genetic algorithm. Trans. Nonferrous Met. Soc. China 2017, 27, 686–693. [Google Scholar] [CrossRef]
  15. Liu, H.M.; Zhao, Y.L.; Cheng, Y.M.; Wu, J.; Al Shurafa, M.A.; Liu, C.; Lee, I.K. A new power supply strategy for high power rectifying units in electrolytic copper process. J. Electr. Eng. Technol. 2022, 17, 1143–1156. [Google Scholar] [CrossRef]
  16. Liu, Z.; Cheng, S.; Liu, P. Prediction model of BOF end-point temperature and carbon content based on PCA-GA-BP neural network. Metall. Res. Technol. 2022, 119, 1–11. [Google Scholar] [CrossRef]
  17. Liu, Z.; Cheng, S.; Liu, P. Prediction model of BOF end-point P and O contents based on PCA–GA–BP neural network. High Temp. Mater. Process. 2022, 41, 505–513. [Google Scholar] [CrossRef]
  18. Chakraborty, S.; Chattopadhyay, P.P.; Ghosh, S.K.; Datta, S. Incorporation of prior knowledge in neural network model for continuous cooling of steel using genetic algorithm. Appl. Soft Comput. 2017, 58, 297–306. [Google Scholar] [CrossRef]
  19. Wang, J.L.; Hong, L.U.; Zeng, Q. Application of GA-BP to the Matte Grade Model Based on Neural Network. Jiangxi Nonferrous Met. 2003, 17, 39–42. [Google Scholar]
  20. Wang, J.L.; Lu, H.; Zeng, Q.Y.; Zhang, C.F. Control optimization of copper flash smelting process based on genetic algorithms. Chin. J. Nonferrous Met. 2007, 17, 156–160. [Google Scholar]
  21. Zeng, Q.Y.; Wang, I.L. Developing of the Copper Flash Smelting Model based on Neural Network. J. South. Inst. Metall. 2003, 24, 15–18. [Google Scholar]
  22. Wang, M.; Wu, J.; Guo, J.; Su, L.; An, B. Multi-Point Prediction of Aircraft Motion Trajectory Based on GA-Elman-Regularization Neural Network. Integr. Ferroelectr. 2020, 210, 116–127. [Google Scholar]
  23. Zhang, L.M. The Model and Application of Artificial Neural Network; Fudan University Press: Shanghai, China, 1993. [Google Scholar]
  24. Zhou, M.; Sun, S.D. Genetic Algorithms Theory and Applications; National Defence Industry Press: Beijing, China, 1999. [Google Scholar]
Figure 1. A schematic diagram of the BP neural network prediction model for matte grade.
Figure 2. Flow chart of the BP neural network algorithm for matte grade.
Figure 3. GA-BP neural network ISA smelting matte-grade optimization and prediction model solution flow chart.
Figure 4. GA-BP algorithm flow chart.
Figure 5. Chart of the test results of the BP neural network prediction model for matte grade.
Figure 6. Matte-grade GA-BP neural network prediction model test result graph.
Table 1. Ten groups of training samples taken from a copper smelting enterprise.

| Cu (t) | S (t) | Fe (t) | SiO2 (t) | CaO (t) | Oxygen (Nm3) | Blast (Nm3) | Flux (t) | Matte Grade (%) |
|---|---|---|---|---|---|---|---|---|
| 22.916 | 25.297 | 23.620 | 9.241 | 1.663 | 209,300 | 152,660 | 35 | 56.65 |
| 20.918 | 22.631 | 22.209 | 11.559 | 2.268 | 199,852 | 173,140 | 39 | 61.69 |
| 21.245 | 24.429 | 23.853 | 10.598 | 1.333 | 212,448 | 151,890 | 42 | 56.43 |
| 21.780 | 24.488 | 23.971 | 10.460 | 1.181 | 206,088 | 161,237 | 47.7 | 57.06 |
| 22.154 | 23.558 | 22.906 | 10.914 | 1.555 | 207,382 | 170,853 | 49 | 60.09 |
| 22.039 | 23.965 | 23.195 | 10.627 | 1.651 | 205,373 | 164,912 | 50 | 57.72 |
| 20.966 | 23.745 | 22.541 | 10.527 | 1.583 | 209,857 | 170,418 | 46 | 62.72 |
| 22.945 | 23.208 | 22.385 | 11.197 | 2.114 | 203,196 | 162,327 | 40 | 51.46 |
| 22.612 | 25.328 | 24.269 | 9.597 | 1.158 | 202,332 | 169,972 | 43.5 | 57.81 |
| 22.297 | 23.572 | 22.688 | 10.721 | 1.791 | 199,668 | 162,570 | 42 | 51.79 |
Table 2. The simulated prediction data and expected output values of the BP neural network prediction model for matte grade.

| Number | Network Output | Expected Output | Absolute Error | Relative Error |
|---|---|---|---|---|
| 1 | 52.71% | 50.88% | 1.83% | 3.60% |
| 2 | 58.15% | 60.28% | 2.13% | 3.66% |
| 3 | 56.36% | 57.13% | 0.77% | 1.37% |
| 4 | 56.81% | 57.57% | 0.76% | 1.34% |
| 5 | 56.63% | 57.78% | 1.15% | 2.03% |
| 6 | 54.92% | 53.11% | 1.81% | 3.41% |
| 7 | 51.36% | 53.35% | 1.99% | 3.87% |
| 8 | 56.06% | 56.68% | 0.62% | 1.11% |
| 9 | 55.71% | 56.11% | 0.40% | 0.72% |
| 10 | 56.92% | 56.64% | 0.28% | 0.49% |
Table 3. GA-BP neural network prediction model simulation prediction data versus desired output data.

| Number | Network Output | Expected Output | Absolute Error | Relative Error |
|---|---|---|---|---|
| 1 | 50.40% | 50.88% | 0.48% | 0.95% |
| 2 | 60.10% | 60.28% | 0.18% | 0.30% |
| 3 | 57.35% | 57.13% | 0.22% | 0.39% |
| 4 | 57.48% | 57.57% | 0.09% | 0.16% |
| 5 | 57.68% | 57.78% | 0.10% | 0.17% |
| 6 | 52.99% | 53.11% | 0.12% | 0.23% |
| 7 | 53.78% | 53.35% | 0.43% | 0.81% |
| 8 | 56.47% | 56.68% | 0.21% | 0.37% |
| 9 | 55.57% | 56.11% | 0.54% | 0.97% |
| 10 | 57.15% | 56.64% | 0.51% | 0.90% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Zhao, L.; Zhu, D.; Liu, D.; Wang, H.; Xiong, Z.; Jiang, L. Prediction and Optimization of Matte Grade in ISA Furnace Based on GA-BP Neural Network. Appl. Sci. 2023, 13, 4246. https://doi.org/10.3390/app13074246
