# Hot Metal Temperature Forecasting at Steel Plant Using Multivariate Adaptive Regression Splines


## Abstract


## 1. Introduction

## 2. Materials and Methods

#### 2.1. Explanatory Variables

Five explanatory variables, X_i, were selected for predicting the final hot metal temperature at the BOF, Y. The initial temperature of the hot metal, X_1, is measured with disposable thermocouples in the iron runner of the blast furnace during tapping [39,40]. Three measurements are usually taken per cast: the first just after drilling the tap hole, the second when the slag arises, and the third approximately at the end of the cast [5]. The hot metal temperature for each torpedo is calculated by time interpolation between consecutive thermocouple measurements.
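This interpolation step can be sketched as follows. The reading times and temperatures are invented for illustration, and `torpedo_temperature` is a hypothetical helper, not code from the plant system:

```python
from datetime import datetime

def torpedo_temperature(t_fill, measurements):
    """Linearly interpolate the hot metal temperature at the torpedo
    filling time from consecutive thermocouple readings (time, temp_degC)."""
    # measurements are assumed sorted by time, with t_fill inside their span
    for (t0, T0), (t1, T1) in zip(measurements, measurements[1:]):
        if t0 <= t_fill <= t1:
            frac = (t_fill - t0).total_seconds() / (t1 - t0).total_seconds()
            return T0 + frac * (T1 - T0)
    raise ValueError("filling time outside measurement span")

# three readings of one cast: after drilling, slag arrival, end of cast
cast = [
    (datetime(2019, 1, 1, 8, 0), 1498.0),
    (datetime(2019, 1, 1, 8, 50), 1505.0),
    (datetime(2019, 1, 1, 9, 40), 1500.0),
]
print(torpedo_temperature(datetime(2019, 1, 1, 8, 25), cast))  # 1501.5
```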

The total holding time, X_2, that is, the time between the temperature measurements at the BF and at the BOF, was taken as the effective transport time. It comprises torpedo car operations, pouring of the hot metal into the ladle, and ladle transport. Since the hot metal pouring may extend over a significant period of time, there is no clear time limit between torpedo and ladle. Considering that the heat losses in torpedoes and ladles were found to be similar [21], both holding times can be grouped together without excessive simplification.

The pre-treatment duration, X_3, accounts for the thermal losses during the desulfurization phase. It is assumed that the mass flow rates of desulfurizing agents and inert gases are constant; therefore, the phase duration is the main aspect to be considered. Possible differences between desulfurizing agents are neglected, assuming that hot metal stirring causes the main effect on temperature.

The empty torpedo and empty ladle durations, X_4 and X_5, respectively, were chosen as a convenient way of describing the initial thermal condition of these vessels. Other aspects, such as the actual lining thickness or the amounts of slag and iron solidified inside the torpedo, cannot be accurately measured and are not considered in this model. Moreover, lining pre-heating is not considered because burner efficiency was not fully described in the available data. Since the number of pre-heated torpedoes and ladles was less than 5%, these cases were left outside the scope of the model.

#### 2.2. Process Dataset

The initial and final hot metal temperatures, x_1 and y, respectively, exhibit similar distributions and temporal evolution. Rather than random changes, local tendencies can be recognized; therefore, it seems appropriate to retain and exploit the time information contained in the data.

The x_1 and y curves and histograms reflect the expected correlation between both variables. However, several features of the y curve do not match well with corresponding features of x_1, as for example the drop in y that can be observed in the vicinity of t = 10,000. This indicates that other variables are also critical.

The time variables (x_2, x_4, and x_5) have lower daily means, showing that normal production times are usually short, with occasional longer times. The holding time, x_2, and the empty torpedo time, x_4, are very similar, both being dominated by torpedo logistics. Since empty torpedo movements are less critical than full ones, x_4 has more dispersion than x_2.

In addition, x_4 exhibits a multimodal distribution, showing the different process situations to which the steel plant reacts with a different number of hot metal ladles in service.

The pre-treatment time, x_3, is much more centered, as can be expected from prescribed desulfurization requirements. The bimodal histogram reveals that cases without pre-treatment (x_3 = 0) are frequent.

#### 2.3. Multivariate Adaptive Regression Splines with Moving Training Window

#### 2.3.1. Forward Phase

Each pair of basis functions has the form (x_i − c)_+ and (c − x_i)_+, where x_i is one of the input variables and c is one of the values of that input in the dataset (i.e., an observation of x_i), usually referred to as the knot of the pair of basis functions. In other words, a new pair of functions from this collection of candidate hinge functions is considered at each step. The coefficients β_m are estimated by minimizing the residual sum of squares, that is, by standard linear regression, so that the pair of basis functions giving the maximum reduction in error is found. This pair is added to the model, and the process of adding terms continues until the number of terms in the model reaches a prescribed limit.
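As an illustration only, one forward step can be sketched with ordinary least squares. The names `hinge` and `forward_step` are ours, and the exhaustive knot search is simplified relative to production MARS implementations:

```python
import numpy as np

def hinge(x, c):
    """Reflected pair of basis functions (x - c)_+ and (c - x)_+."""
    return np.maximum(x - c, 0.0), np.maximum(c - x, 0.0)

def forward_step(B, X, y):
    """Try every (variable, knot) pair; keep the pair whose least-squares
    fit gives the largest reduction in the residual sum of squares."""
    best = None
    for i in range(X.shape[1]):
        for c in np.unique(X[:, i]):          # candidate knots = observed values of x_i
            h1, h2 = hinge(X[:, i], c)
            Bc = np.column_stack([B, h1, h2])
            beta, *_ = np.linalg.lstsq(Bc, y, rcond=None)
            r = y - Bc @ beta
            rss = float(r @ r)
            if best is None or rss < best[0]:
                best = (rss, i, c, Bc)
    return best                               # (rss, variable, knot, new basis matrix)

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = np.maximum(X[:, 0] - 0.5, 0.0) + 0.01 * rng.normal(size=200)
B = np.ones((200, 1))                         # start from the intercept-only model
rss, i, c, B = forward_step(B, X, y)
print(i, round(c, 2))                         # selects variable 0, knot near 0.5
```

On this synthetic data the step recovers the hinge in x_0 because that pair gives the largest RSS reduction; a full implementation repeats this step until M_F functions are reached.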

#### 2.3.2. Backward Phase

The forward phase ends with a large model of M_F basis functions. This model typically overfits the data and will not generalize well to new data. Therefore, a backward deletion pass is applied, iteratively deleting the basis function whose removal causes the smallest increase in residual squared error, until the model contains only the intercept term. At each step, an estimated best model, ${\widehat{f}}_{\lambda}$, is obtained for each model size λ, where λ is the number of terms in the model. Cross-validation could be used to estimate the optimal model size, but for computational savings the generalized cross-validation (GCV) criterion is preferably used and works well in practice [23,43,44]:

$$\mathrm{GCV}(\lambda) = \frac{\frac{1}{N}\sum_{t=1}^{N}\left(y_t - {\widehat{f}}_{\lambda}(\overrightarrow{x}_t)\right)^2}{\left(1 - M(\lambda)/N\right)^2}$$

where N is the number of training observations and M(λ) is the effective number of model parameters, which grows with the number of terms and with the penalty d per knot.
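For illustration, the GCV score can be computed as below. The effective-parameter count `n_terms + d * (n_terms - 1)` is one common convention (one knot per non-intercept term); implementations differ in how knots are counted, so this is a sketch rather than the exact count used in the paper:

```python
import numpy as np

def gcv(y, y_hat, n_terms, d):
    """Generalized cross-validation: mean squared error inflated by a
    penalty on the effective number of parameters."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = y.size
    mse = float(np.mean((y - y_hat) ** 2))
    m_eff = n_terms + d * (n_terms - 1)   # coefficients plus d per knot
    return mse / (1.0 - m_eff / n) ** 2

# toy check: 10 residuals of 0.1 with a 2-term model and d = 2
print(gcv(np.zeros(10), np.full(10, 0.1), n_terms=2, d=2))  # ≈ 0.0278
```

In the backward pass, the candidate model with the lowest GCV over all sizes λ is retained.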

#### 2.3.3. Model Hyperparameters

The method involves six hyperparameters: S, I, M_F, d, L, and w. The first four are typical settings of the standard MARS technique, while the last two (L and w) are original to this study. In particular, w is a consequence of the novel continuous training approach adopted here, which is required to ensure the long-term performance of the model. The basic configuration of these six hyperparameters, together with the ranges to be tested and the finally adopted configuration, can be found in Table 2.

The configuration {S = 1, I = 1, M_F = 21, d = 2, L = 0, w = 1000} was taken as the base case. Self-interactions of variables were not allowed (S = 1) in order to avoid singularities near the boundaries of the domain [23,32]; this criterion was maintained throughout the study. The remaining parameters were varied to assess their effect on model performance. Interactions between variables were not allowed for the base case (I = 1) in order to start with a simple additive model. The maximum number of basis functions in the forward phase, M_F, was initially taken as max(21, 2N + 1), where N is the number of input variables, as suggested by Milborrow [45]. The GCV penalty per knot was chosen as d = 2, as suggested by Hastie [44] for additive models. Finally, the initial choices for w and L were based on previous experience of the authors [21,22].
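The continuous training scheme behind w and L can be sketched as follows. A plain linear least-squares learner stands in for MARS purely to keep the example self-contained, and `rolling_forecast` and the synthetic data are our own illustration, not the plant dataset:

```python
import numpy as np

def rolling_forecast(X, y, w, L, fit, predict):
    """One-step-ahead forecasts: for each heat t, retrain on the previous
    w heats, with the L previous outputs appended as extra inputs."""
    n = len(y)
    lagged = np.column_stack([np.roll(y, k) for k in range(1, L + 1)]) if L else np.empty((n, 0))
    Z = np.column_stack([X, lagged])
    preds = np.full(n, np.nan)
    for t in range(w + L, n):                 # first heat with a full, valid window
        model = fit(Z[t - w:t], y[t - w:t])   # moving training window of width w
        preds[t] = predict(model, Z[t])
    return preds

# linear least-squares stand-in for the MARS learner (illustration only)
fit = lambda A, b: np.linalg.lstsq(np.column_stack([np.ones(len(A)), A]), b, rcond=None)[0]
predict = lambda beta, z: float(beta[0] + z @ beta[1:])

rng = np.random.default_rng(1)
n = 600
X = rng.uniform(size=(n, 2))
y = np.zeros(n)
for t in range(1, n):                         # synthetic process with memory
    y[t] = 0.3 * y[t - 1] + 0.5 * X[t, 0] + 0.05 * rng.normal()

preds = rolling_forecast(X, y, w=200, L=2, fit=fit, predict=predict)
mae = float(np.nanmean(np.abs(preds - y)))
print(round(mae, 3))
```

Retraining at every heat is what makes w meaningful: older heats leave the window, so the model tracks slow drifts in the process.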

## 3. Results

#### 3.1. Computation

Overall, the hyperparameter study required fitting on the order of 10^6 different models. Results were compared in terms of the mean absolute error (MAE). This is a convenient choice, since the error of the hot metal temperature forecast is proportional to the excess or defect in hot metal consumption [38]. Therefore, a MAE reduction translates directly into economic savings and environmental improvements [36,37,38].
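For reference, the MAE is simply the average absolute deviation between forecasts and measurements; the numbers below are invented and are in the normalized units used throughout the paper:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, in the same normalized units as y."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

print(round(mae([0.50, 0.55, 0.60], [0.52, 0.50, 0.61]), 4))  # 0.0267
```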

#### 3.2. Study of Hyperparameters

The initial choice of M_F = 21 for the maximum number of functions in the forward phase is close to its optimum value when w ≥ 1000. In fact, the best MAE for w = 1000 and 2000 is obtained with M_F = 11, but the improvement in MAE is only around 0.0003 with regard to M_F = 21. This finding suggests that, for this particular problem, M_F = 2N + 1 is a good choice, but max(21, 2N + 1) = 21 can also be a reasonable selection for a large dataset [45]. The small effect of a further increase of M_F is logical, since the basis functions in excess at the end of the forward phase are pruned in the backward phase of the method. However, M_F still has some influence, because GCV is a good proxy, but only a proxy, of the forecasting capabilities of the model. Consequently, increasing M_F above the optimum gives slightly overfitted models.

Figure 4 shows the effect of the remaining hyperparameters (M_F, d, and w) when L = 4. It can be seen that the incorporation of four lagged observations not only improves the MAE but also makes it less dependent on the other hyperparameters. For w = 2000, any M_F > 15 gives the lowest MAE of 0.0506. Similarly, for w = 2000, any d ≥ 2 also gives the minimum MAE. It can be concluded that any model configuration within {S = 1, I = 1, M_F > 15, d ≥ 2, L = 4, w = 2000} gives the best results.

#### 3.3. Model Validation

The best configuration {S = 1, I = 1, M_F = 21, d = 2, L = 4, w = 2000} was applied to the validation dataset to assess the actual forecasting performance of the method. The model errors from t = 10,001 to t = 12,195 are shown in Figure 5.

## 4. Discussion

Application of the final model {S = 1, I = 1, M_F = 21, d = 2, L = 4, w = 2000} to the validation dataset provided 2195 evaluations of the method, from t = 10,001 to t = 12,195. The resulting basis functions and coefficients in Equation (2) at t = 11,755 are shown in Table 3. The model at this point is taken as an example to discuss its features. In this case, the model comprises 12 basis functions, including the intercept term. Considering that 21 functions were allowed in the forward phase (M_F = 21), it is inferred that the nine functions with the lowest contribution to GCV were pruned in the backward phase.

The most important variables are x_1, x_2, and x_4, that is, the initial hot metal temperature, the total holding time, and the empty torpedo time, respectively. The empty ladle time and the pre-treatment time have less importance; in fact, the actual temperature of the previous heat, x_6 = y_{t−1}, is a better predictor. Moreover, the MARS method automatically excludes the less important variables. For this particular heat (t = 11,755), the actual temperature four heats before, x_9 = y_{t−4}, is not included in the model. This indicates that its contribution to model performance is negligible or even adverse. This is not the general case for every heat; for example, at t = 10,656 a basis function is also included for x_9. In general, it is positive to include x_9 in the model to improve its performance, as indicated in Figure 3c by the curve w = 2000 at L = 4.

It can be seen that some basis functions appear in pairs, as is the case for x_1, x_2, and x_6. Other basis functions are individual, indicating that the corresponding symmetric functions were removed in the backward phase. A basis function without its symmetric variant indicates that the involved variable is relevant either only above a particular value (this is the case of x_3, x_5, x_7, and x_8) or only in the lower part of its range (as for x_4). As can be seen, the adaptive knot location of the MARS method succeeds in capturing the nonlinearities of the data using segmented linear regression.
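To make the structure of Table 3 concrete, the model at t = 11,755 can be evaluated directly. The code below transcribes the tabulated knots and coefficients (normalized units); the input vector is the one quoted in the caption of Figure 6:

```python
def h(u):
    """Positive part (u)_+ used by the MARS hinge functions."""
    return max(u, 0.0)

def predict_t11755(x):
    """Evaluate the MARS model of Table 3 (t = 11,755) at a normalized
    input vector x = (x1, ..., x9)."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9 = x
    return (0.4411
            - 0.5320 * h(x2 - 0.2672) + 0.4428 * h(0.2672 - x2)  # paired in x2
            + 0.3201 * h(0.6998 - x4)                            # lower range of x4 only
            + 0.3777 * h(x1 - 0.4643) - 0.5489 * h(0.4643 - x1)  # paired in x1
            - 0.1124 * h(0.6409 - x6) + 0.1408 * h(x6 - 0.6409)  # paired in x6
            - 0.0683 * h(x5 - 0.0312)
            + 0.0862 * h(x7 - 0.5818)
            - 0.0526 * h(x3 - 0.4571)
            + 0.0928 * h(x8 - 0.6818))  # x9 was pruned at this heat

x_t = (0.41, 0.14, 0.42, 0.38, 0.054, 0.53, 0.58, 0.61, 0.43)
print(round(predict_t11755(x_t), 4))  # 0.556
```

At this operating point only five basis functions are active; the others are zero because x_t lies on the flat side of their hinges.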

Each diagonal plot of Figure 6 represents the predicted hot metal temperature versus one input variable, x_i. The rest of the input variables are kept at their mid-points (x_k = 0.5 for k ≠ i).

The plot for x_1 indicates that the effect of the initial temperature tends to damp as it increases. This is a coherent result, since higher thermal losses are foreseen from higher initial temperatures in all the phases of the process. A similar reasoning can be applied to the empty torpedo time, x_4, considering that the thermal losses per unit time are expected to be higher for shorter times, since the lining temperature is higher. The effect of the holding time, x_2, and the empty ladle time, x_5, was found to be almost linear within the considered ranges, and with the anticipated slope, as can be seen in the related plots.

The slope of the model response with respect to x_6 is smaller for x_6 < 0.64. This behavior is more evident for x_7 and x_8, which are relevant only above 0.58 and 0.68, respectively.

The off-diagonal plots of Figure 6 represent the model response for each pair of input variables (x_i, x_j), with the remaining inputs kept at their mid-points (x_{tk} = 0.5 for k ≠ i and k ≠ j). It can be seen that the adaptive knot location succeeds in representing the non-linear features of the multivariate process dataset.

The base case MARS configuration {S = 1, I = 1, M_F = 21, d = 2, L = 0, w = 1000} gives a MAE of 0.0518, a 25% error reduction with reference to ARIMAX. The best performing MARS configuration {S = 1, I = 1, M_F = 21, d = 2, L = 4, w = 2000} provides a MAE of 0.0506, representing a 27% and a 30% error reduction with regard to ARIMAX and MAS, respectively. This model configuration is the new benchmark for this problem.

A configuration such as {S = 1, I = 1, M_F = 21, d = 2, L = 0} requires at least ten previous observations of the five input variables, as illustrated in Figure 7. Configurations with additional input variables would require more than one hundred previous observations of all the inputs. This limitation poses a potential problem when some registers are missing in the process database. However, a judicious implementation of the model should alleviate this problem, as indicated by the shaded region in Figure 7. This region delimits the lowest MAE that can be achieved by applying the best configuration for the available data window at execution time.

## 5. Conclusions

## Supplementary Materials

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

1. McLean, A. The science and technology of steelmaking—Measurements, models, and manufacturing. Metall. Mater. Trans. B 2006, 37, 319–332.
2. Ghosh, A.; Chatterjee, A. Iron Making and Steelmaking: Theory and Practice; PHI Learning Pvt. Ltd.: New Delhi, India, 2008.
3. Miller, T.W.; Jimenez, J.; Sharan, A.; Goldstein, D.A. Oxygen Steelmaking Processes. In The Making, Shaping, and Treating of Steel, 11th ed.; Fruehan, R.J., Ed.; The AISE Steel Foundation: Pittsburgh, PA, USA, 1998; pp. 475–524.
4. Williams, R.V. Control of oxygen steelmaking. In Control and Analysis in Iron and Steelmaking, 1st ed.; Butterworth Scientific Ltd.: London, UK, 1983; pp. 147–176.
5. Jiménez, J.; Mochón, J.; de Ayala, J.S.; Obeso, F. Blast furnace hot metal temperature prediction through neural networks-based models. ISIJ Int. 2004, 44, 573–580.
6. Martín, R.D.; Obeso, F.; Mochón, J.; Barea, R.; Jiménez, J. Hot metal temperature prediction in blast furnace using advanced model based on fuzzy logic tools. Ironmak. Steelmak. 2007, 34, 241–247.
7. Sugiura, M.; Shinotake, A.; Nakashima, M.; Omoto, N. Simultaneous Measurements of Temperature and Iron–Slag Ratio at Taphole of Blast Furnace. Int. J. Thermophys. 2014, 35, 1320–1329.
8. Jiang, Z.H.; Pan, D.; Gui, W.H.; Xie, Y.F.; Yang, C.H. Temperature measurement of molten iron in taphole of blast furnace combined temperature drop model with heat transfer model. Ironmak. Steelmak. 2018, 45, 230–238.
9. Pan, D.; Jiang, Z.; Chen, Z.; Gui, W.; Xie, Y.; Yang, C. Temperature Measurement Method for Blast Furnace Molten Iron Based on Infrared Thermography and Temperature Reduction Model. Sensors 2018, 18, 3792.
10. Pan, D.; Jiang, Z.; Chen, Z.; Gui, W.; Xie, Y.; Yang, C. Temperature Measurement and Compensation Method of Blast Furnace Molten Iron Based on Infrared Computer Vision. IEEE Trans. Instrum. Meas. 2018, 1–13.
11. Jin, S.; Harmuth, H.; Gruber, D.; Buhr, A.; Sinnema, S.; Rebouillat, L. Thermomechanical modelling of a torpedo car by considering working lining spalling. Ironmak. Steelmak. 2018, 1–5.
12. Frechette, M.; Chen, E. Thermal insulation of torpedo cars. In Proceedings of the Association for Iron and Steel Technology (AISTech) Conference, Charlotte, NC, USA, 9–12 May 2005.
13. Nabeshima, Y.; Taoka, K.; Yamada, S. Hot metal dephosphorization treatment in torpedo car. Kawasaki Steel Tech. Rep. 1991, 24, 25–31.
14. Niedringhaus, J.C.; Blattner, J.L.; Engel, R. Armco's Experimental 184 Mile Hot Metal Shipment. In Proceedings of the 47th Ironmaking Conference, Toronto, ON, Canada, 17–20 April 1988.
15. Goldwaser, A.; Schutt, A. Optimal torpedo scheduling. J. Artif. Intell. Res. 2018, 63, 955–986.
16. Wang, G.; Tang, L. A column generation for locomotive scheduling problem in molten iron transportation. In Proceedings of the 2007 IEEE International Conference on Automation and Logistics, Jinan, China, 18–21 August 2007.
17. He, F.; He, D.F.; Xu, A.J.; Wang, H.B.; Tian, N.Y. Hybrid model of molten steel temperature prediction based on ladle heat status and artificial neural network. J. Iron Steel Res. Int. 2014, 21, 181–190.
18. Du, T.; Cai, J.J.; Li, Y.J.; Wang, J.J. Analysis of Hot Metal Temperature Drop and Energy-Saving Mode on Techno-Interface of BF-BOF Route. Iron Steel 2008, 43, 83–86, 91.
19. Liu, S.W.; Yu, J.K.; Yan, Z.G.; Liu, T. Factors and control methods of the heat loss of torpedo-ladle. J. Mater. Metall. 2010, 9, 159–163.
20. Wu, M.; Zhang, Y.; Yang, S.; Xiang, S.; Liu, T.; Sun, G. Analysis of hot metal temperature drop in torpedo car. Iron Steel 2002, 37, 12–15.
21. Díaz, J.; Fernández, F.J.; Suárez, I. Hot Metal Temperature Prediction at Basic-Lined Oxygen Furnace (BOF) Converter Using IR Thermometry and Forecasting Techniques. Energies 2019, 12, 3235.
22. Díaz, J.; Fernández, F.J.; González, A. Prediction of hot metal temperature in a BOF converter using an ANN. In Proceedings of the IRCSEEME 2018: International Research Conference on Sustainable Energy, Engineering, Materials and Environment, Mieres, Spain, 25–27 July 2018.
23. Friedman, J.H. Multivariate adaptive regression splines. Ann. Stat. 1991, 19, 1–67.
24. Friedman, J.H.; Roosen, C.B. An Introduction to Multivariate Adaptive Regression Splines. Stat. Methods Med. Res. 1995, 4, 197–217.
25. Nieto, P.; Suárez, V.; Antón, J.; Bayón, R.; Blanco, J.; Fernández, A. A new predictive model of centerline segregation in continuous cast steel slabs by using multivariate adaptive regression splines approach. Materials 2015, 8, 3562–3583.
26. Mukhopadhyay, A.; Iqbal, A. Prediction of mechanical property of steel strips using multivariate adaptive regression splines. J. Appl. Stat. 2009, 36, 1–9.
27. Yu, W.H.; Yao, C.G.; Yi, X.D. A Predictive Model of Hot Rolling Flow Stress by Multivariate Adaptive Regression Spline. In Materials Science Forum; Trans Tech Publications Ltd.: Stafa-Zurich, Switzerland, 2017; Volume 898, pp. 1148–1155.
28. Mehdizadeh, S.; Behmanesh, J.; Khalili, K. Comprehensive modeling of monthly mean soil temperature using multivariate adaptive regression splines and support vector machine. Theor. Appl. Climatol. 2018, 133, 911–924.
29. Yang, C.C.; Prasher, S.O.; Lacroix, R.; Kim, S.H. Application of multivariate adaptive regression splines (MARS) to simulate soil temperature. Trans. ASAE 2004, 47, 881.
30. Krzemień, A. Fire risk prevention in underground coal gasification (UCG) within active mines: Temperature forecast by means of MARS models. Energy 2019, 170, 777–790.
31. Kuhn, M.; Johnson, K. Nonlinear regression models. In Applied Predictive Modeling, 1st ed.; Springer: New York, NY, USA, 2010; pp. 145–151.
32. Jekabsons, G. ARESLab: Adaptive Regression Splines Toolbox for Matlab/Octave. 2016. Available online: http://www.cs.rtu.lv/jekabsons/Files/ARESLab.pdf (accessed on 15 November 2019).
33. Saltelli, A.; Ratto, M.; Andres, T.; Campolongo, F.; Cariboni, J.; Gatelli, D.; Tarantola, S. Global Sensitivity Analysis: The Primer; John Wiley & Sons: Chichester, UK, 2008.
34. Mazumdar, D.; Evans, J.W. Elements of mathematical modeling. In Modeling of Steelmaking Processes, 1st ed.; CRC Press: Boca Raton, FL, USA, 2010; pp. 139–173.
35. Sickert, G.; Schramm, L. Long-time experiences with implementation, tuning and maintenance of transferable BOF process models. Rev. Metall. 2007, 104, 120–127.
36. Ares, R.; Balante, W.; Donayo, R.; Gómez, A.; Perez, J. Getting more steel from less hot metal at Ternium Siderar steel plant. Rev. Metall. 2010, 107, 303–308.
37. Bradarić, T.D.; Slović, Z.M.; Raić, K.T. Recent experiences with improving steel-to-hot-metal ratio in BOF steelmaking. Metall. Mater. Eng. 2016, 22, 101–106.
38. Díaz, J.; Fernández, F.J. The impact of hot metal temperature on CO2 emissions from basic oxygen converter. Environ. Sci. Pollut. Res. 2019, 1–10.
39. Geerdes, M.; Toxopeus, H.; van der Vliet, C. Casthouse Operation. In Modern Blast Furnace Ironmaking: An Introduction, 1st ed.; Verlag Stahleisen GmbH: Düsseldorf, Germany, 2015; pp. 97–103.
40. Kozlov, V.; Malyshkin, B. Accuracy of measurement of liquid metal temperature using immersion thermocouples. Metallurgist 1969, 13, 354–356.
41. Jekabsons, G.; Zhang, Y. Adaptive basis function construction: An approach for adaptive building of sparse polynomial regression models. In Machine Learning, 1st ed.; IntechOpen Ltd.: London, UK, 2010; pp. 127–156.
42. Smith, P.L. Curve Fitting and Modeling with Splines Using Statistical Variable Selection Techniques; Report NASA 166034; Langley Research Center: Hampton, VA, USA, 1982.
43. Craven, P.; Wahba, G. Smoothing noisy data with spline functions. Numer. Math. 1978, 31, 377–403.
44. Hastie, T.; Tibshirani, R.; Friedman, J. MARS: Multivariate Adaptive Regression Splines. In The Elements of Statistical Learning: Data Mining, Inference and Prediction, 2nd ed.; Springer: New York, NY, USA, 2009; pp. 241–249.
45. Milborrow, S. Package 'earth'. 9 November 2019. Available online: https://cran.r-project.org/web/packages/earth/earth.pdf (accessed on 18 January 2019).

**Figure 1.** Hot metal process from blast furnace (BF) to basic-lined oxygen furnace (BOF), in which the following phases are considered: BF tapping, pre-treatment, transport, and transfer to the BOF shop. To predict the final hot metal temperature, Y, five relevant variables are used as model inputs: initial temperature X_1, total elapsed time X_2, pre-treatment duration X_3, empty torpedo duration X_4, and empty ladle duration X_5.

**Figure 2.**Time evolution and histograms corresponding to the six involved variables. The dataset contains 12,195 registers covering one full production year. The first 10,000 heats were used for training and testing while the last 2195 were reserved for final validation. The minimum–maximum interval (gray area) and the average value (solid line) for groups of 30 heats are shown for clarity. The dashed boxes illustrate a moving training window of width w = 2000 for heat number t = 2500.

**Figure 3.** Comparison of the mean absolute error (MAE) as a function of model hyperparameters: (**a**) order of interaction between input variables, I; (**b**) maximum number of functions in the forward phase, M_F; (**c**) penalty for model complexity, d; and (**d**) lagged terms as additional inputs, L. Each single point in a curve comprises 8000 evaluations of the model, from t = 2000 to t = 10,000.

**Figure 4.** Effect of d, M_F, and w on the mean absolute error (MAE) for L = 4: (**a**) penalty for model complexity, d, and (**b**) maximum number of functions in the forward phase, M_F. Each single point in a curve comprises 8000 evaluations of the model, from t = 2000 to t = 10,000.

**Figure 5.** Prediction errors (with sign) of the final multivariate adaptive regression splines (MARS) model for the 2195 heats in the validation dataset: (**a**) time evolution, where each dot represents one heat and the solid curve represents the daily MAE (30-heat grouping), and (**b**) error distribution for individual heats.

**Figure 6.** Graphical representation of the final MARS model {S = 1, I = 1, M_F = 21, d = 2, L = 4, w = 2000} at heat number t = 11,755. The diagonal elements of the mosaic are the line plots of the predicted hot metal temperature, ${\widehat{y}}_{t}$, versus each individual input variable, x_i. The other mosaic elements are the response surfaces of the model for any combination of two variables (x_i, x_j); the contour plots below the diagonal are traced around the actual value of $\overrightarrow{x}$ at t = 11,755, ${\overrightarrow{x}}_{t}$ = (0.41, 0.14, 0.42, 0.38, 0.054, 0.53, 0.58, 0.61, 0.43), whereas the graphs above the diagonal are traced around the mid-point of the range of $\overrightarrow{x}$ = (0.5, 0.5, 0.5, 0.5, 0.5); axes limits are [0,1] for all the variables.

**Figure 7.**Comparison of the mean absolute error (MAE) as a function of the width of the training window, w. A single point in a curve comprises 8000 evaluations of the model, from t = 2001 to t = 10,000. The dashed bold line represents the best result obtained with a hybrid method based on moving average smoothing (MAS), and time series auto regressive integral moving average with exogenous predictors (ARIMAX). All methods were applied to the same dataset [21].

| Description | Symbol | Min | Max | Unit |
|---|---|---|---|---|
| Initial temperature | X_1 | 1400 | 1540 | °C |
| Total holding time | X_2 | 2 | 20 | h |
| Pre-treatment duration | X_3 | 0 | 40 | min |
| Empty torpedo duration | X_4 | 1 | 16 | h |
| Empty ladle duration | X_5 | 0 | 8 | h |
| Final temperature | Y | 1200 | 1420 | °C |

| Description | Symbol | Base | Min | Max | Final |
|---|---|---|---|---|---|
| Maximum self-interaction order | S | 1 | 1 | 1 | 1 |
| Maximum interaction order | I | 1 | 1 | 5 | 1 |
| Maximum functions in the forward phase | M_F | 21 | 3 | 49 | 21 |
| Penalty for model complexity | d | 2 | 0 | 12 | 2 |
| Width of moving training window | w | 1000 | 10 | 2000 | 2000 |
| Lagged outputs $y_{t-1}, y_{t-2}, \dots, y_{t-L}$ as additional predictors | L | 0 | 0 | 6 | 4 |

**Table 3.** Model equations for MARS {S = 1, I = 1, M_F = 21, d = 2, L = 4, w = 2000} at t = 11,755. The mean squared error (MSE) and the generalized cross-validation (GCV) of the model predictions for the training data are 0.00408 and 0.00418, respectively. The columns MSE and GCV indicate the new values obtained when the basis function is removed from the model.

| Basis Function | Expression | Coefficient | Value | MSE | GCV |
|---|---|---|---|---|---|
| ${B}_{0}$ | 1 | ${\beta}_{0}$ | 0.4411 | - | - |
| ${B}_{1}$ | (x_2 − 0.2672)_+ | ${\beta}_{1}$ | −0.5320 | 0.00541 | 0.00553 |
| ${B}_{2}$ | (0.6998 − x_4)_+ | ${\beta}_{2}$ | 0.3201 | 0.00526 | 0.00537 |
| ${B}_{3}$ | (x_1 − 0.4643)_+ | ${\beta}_{3}$ | 0.3777 | 0.00510 | 0.00521 |
| ${B}_{4}$ | (0.2672 − x_2)_+ | ${\beta}_{4}$ | 0.4428 | 0.00455 | 0.00465 |
| ${B}_{5}$ | (0.4643 − x_1)_+ | ${\beta}_{5}$ | −0.5489 | 0.00430 | 0.00439 |
| ${B}_{6}$ | (0.6409 − x_6)_+ | ${\beta}_{6}$ | −0.1124 | 0.00414 | 0.00423 |
| ${B}_{7}$ | (x_5 − 0.0312)_+ | ${\beta}_{7}$ | −0.0683 | 0.00412 | 0.00421 |
| ${B}_{8}$ | (x_6 − 0.6409)_+ | ${\beta}_{8}$ | 0.1408 | 0.00412 | 0.00421 |
| ${B}_{9}$ | (x_7 − 0.5818)_+ | ${\beta}_{9}$ | 0.0862 | 0.00411 | 0.00420 |
| ${B}_{10}$ | (x_3 − 0.4571)_+ | ${\beta}_{10}$ | −0.0526 | 0.00410 | 0.00419 |
| ${B}_{11}$ | (x_8 − 0.6818)_+ | ${\beta}_{11}$ | 0.0928 | 0.00409 | 0.00418 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Díaz, J.; Fernández, F.J.; Prieto, M.M.
Hot Metal Temperature Forecasting at Steel Plant Using Multivariate Adaptive Regression Splines. *Metals* **2020**, *10*, 41.
https://doi.org/10.3390/met10010041
