Optimisation of Operator Support Systems through Artificial Intelligence for the Cast Steel Industry: A Case for Optimisation of the Oxygen Blowing Process Based on Machine Learning Algorithms
Abstract
1. Introduction
2. Materials and Methods
2.1. Oxygen Blowing Process
2.1.1. Provided Data
2.1.2. Oxygen Blowing Process Model
2.2. Finite Markov Decision Process
2.3. Reinforcement Learning Method Selection
- State–value function for policy π, $v_\pi(s)$: This function calculates the expected return when starting in state $s$ and following $\pi$ thereafter;
- Action–value function for policy π, $q_\pi(s, a)$: This defines the expected return starting from $s$, taking action $a$ and thereafter following policy $\pi$.
- $R_{t+1}$: This is the reward that the agent receives after applying action $A_t$ in state $S_t$;
- γ: This parameter is called the discount factor, and it determines the present value of future rewards, that is, how strongly the agent weights rewards that arrive later in the process;
- $\max_a Q(S_{t+1}, a)$: This expression indicates the maximum q-value of the action–value function over the actions available in the next state $S_{t+1}$, i.e. the value of the greedy action for that state (these quantities are written out formally below).
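Written out in full, following the standard definitions of Sutton and Barto (textbook form, not notation specific to this work), these quantities are:

```latex
v_{\pi}(s) \doteq \mathbb{E}_{\pi}\!\left[ \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \;\middle|\; S_{t}=s \right],
\qquad
q_{\pi}(s,a) \doteq \mathbb{E}_{\pi}\!\left[ \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \;\middle|\; S_{t}=s,\; A_{t}=a \right],

% Q-learning update rule combining the three elements listed above
Q(S_{t},A_{t}) \leftarrow Q(S_{t},A_{t}) + \alpha \left[ R_{t+1} + \gamma \max_{a} Q(S_{t+1},a) - Q(S_{t},A_{t}) \right].
```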
Algorithm 1: Q-Learning Algorithm

    Initialisation
        Algorithm parameters: step size α and discount factor γ ∈ (0, 1], small ε > 0
        Initialise Q(s, a) arbitrarily for all states and actions, except Q(terminal, ·) = 0
    Loop for each episode:
        Initialise S
        Loop for each step of the episode:
            Choose A from S according to the behaviour policy (e.g. ε-greedy with respect to Q)
            Take action A; observe R and the next state S′
            Update the action–value function with the following expression:
                Q(S, A) ← Q(S, A) + α [R + γ max_a Q(S′, a) − Q(S, A)]
            S ← S′
        Until S is terminal
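A minimal tabular implementation of Algorithm 1 in Python could look like the following sketch. The `env` object, its `reset()`/`step()` interface and the discrete action list are hypothetical placeholders standing in for the oxygen blowing environment described later, not the code used in the project.

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=10_000, alpha=0.1, gamma=1.0, epsilon=0.1):
    """Tabular Q-learning following Algorithm 1.

    `env` is assumed to expose reset() -> state and
    step(state, action) -> (next_state, reward, terminal).
    """
    q = defaultdict(float)  # Q(s, a), initialised to 0 for every pair

    for _ in range(episodes):
        state = env.reset()
        terminal = False
        while not terminal:
            # epsilon-greedy behaviour policy derived from Q
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, terminal = env.step(state, action)
            # Bootstrapped target: best action value in the next state
            target = reward if terminal else reward + gamma * max(
                q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
    return q
```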
2.4. Data Processing
2.4.1. Assessment of the Historical Data
2.4.2. Adaptation to Finite Markov Decision Process Structure
State Signal
Action Signal
- The melt will not overheat, and consequently the waiting time for cooling will be avoided;
- This will increase the number of heats that require a single oxygen blowing to achieve the target state;
- Delays will be avoided as a consequence of the previous points;
- At the end of the process, the oxygen content will be negligible and the amount of chemical elements used to deoxidise the melt will be reduced.
Reward Signal
- +10: This is the largest positive reward, which is given when the “super-successful” target state is achieved;
- 0: This reward is received by the agent when the “simple-successful” target state is achieved;
- −10: This is the negative reward that the agent receives if none of the previous target states is reached;
- −50: This is an extra negative reward that the agent receives when the content of C is reduced in excess, that is, below 0.1%. When this occurs, the heat is automatically considered unsuccessful. Thus, this state and the target states explained above are the only final states (a code sketch of this reward signal is given after the list).
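Transcribed directly into code, the reward signal could be sketched as below. The five-letter state string and the letters "T" (target) and "L" (slightly exceeded target) follow the labelling used in the tables of this paper; whether the −50 penalty replaces or is added to the −10 reward is not spelled out here, so the sketch simply returns −50 for that case.

```python
def reward_signal(state_labels: str, carbon_pct: float) -> int:
    """Reward for reaching `state_labels` (letters for C, P, S, H, O in that order)."""
    if carbon_pct < 0.1:
        # Carbon reduced in excess: heat automatically unsuccessful (final state)
        return -50
    if state_labels == "TTTTT":   # "super-successful" target state
        return 10
    if state_labels == "LTTTT":   # "simple-successful" target state
        return 0
    return -10                    # any non-target state
```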
3. Development of the Agent and Training
3.1. Historical Dataset Creation with Finite Markov Decision Process Structure
3.2. Training with the Historical Database
- α (step size): A low value is assigned to the step-size parameter so that the agent does not simply imitate the performance of the operators, which is not optimal; if the agent is to improve on this performance, it must learn the environment dynamics;
- γ (discount factor): A discount factor is not necessary to train the agent in this phase because the training is based on historical data;
- Maximum accepted training error: this threshold reflects that a variation below the fourth decimal place is considered insignificant (an offline training sketch using these settings follows this list).
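Under these settings, training on the historical database amounts to repeatedly replaying the stored transitions and applying the Q-learning update until the largest change in any Q-value drops below the accepted error. The sketch below assumes the transitions are held in a pandas DataFrame with columns `state`, `action`, `next_state` and `reward`; the column names, the default values and the sweep-based convergence check are illustrative assumptions, not the project's implementation.

```python
from collections import defaultdict

import pandas as pd

def train_from_history(transitions: pd.DataFrame, actions, alpha=0.1,
                       gamma=0.0, tol=1e-4, max_sweeps=1_000):
    """Offline Q-learning over logged (state, action, next_state, reward) rows.

    gamma is kept as a parameter; the text above notes that a discount factor
    is not needed for this historical phase, so 0.0 is only a placeholder.
    """
    q = defaultdict(float)
    for _ in range(max_sweeps):
        max_delta = 0.0
        for row in transitions.itertuples(index=False):
            s, a = row.state, row.action
            best_next = max(q[(row.next_state, b)] for b in actions)
            delta = alpha * (row.reward + gamma * best_next - q[(s, a)])
            q[(s, a)] += delta
            max_delta = max(max_delta, abs(delta))
        if max_delta < tol:   # variation below the fourth decimal place: stop
            break
    return q
```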
3.3. Training with the Model
- If the state is a target state (“super-successful” or “simple-successful” target states), the heat ends and the agent receives a positive reward of 10 or 0, respectively;
- If the C content of the state is below 0.1%, the heat ends, regardless of the contents of the other chemical elements of the state, and the agent receives a very negative reward of −50;
- If the state does not fit the above criteria, the reward is −10. If the number of actions taken by the agent surpasses six, the heat ends; otherwise, another suitable action is taken (return to step 2).
- After the end of the heat simulation, whether successful or unsuccessful, the training continues with the initialisation of a new heat (return to step 1 of Figure 10). A code sketch of this simulation loop is given below.
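One heat of the training procedure described above could be sketched as follows. The names `sample_initial_state`, `process_model.predict`, `label_state` and `reward_signal` are placeholders for the heat initialisation, the oxygen blowing process model and the state-labelling and reward helpers sketched elsewhere in this document; `q` is a dict-like table mapping (state, action) pairs to values, e.g. `collections.defaultdict(float)`.

```python
import random

MAX_ACTIONS_PER_HEAT = 6   # the heat is abandoned after six blowing actions

def run_heat(q, actions, sample_initial_state, process_model,
             label_state, reward_signal, alpha=0.1, gamma=1.0, epsilon=0.1):
    """Simulate one heat against the process model and update Q in place."""
    raw = sample_initial_state()               # step 1: initialise a new heat
    state = label_state(raw["C"], raw["P"], raw["S"], raw["H"], raw["O"])
    for _ in range(MAX_ACTIONS_PER_HEAT):
        # step 2: choose an oxygen volume with an epsilon-greedy policy
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        # step 3: let the process model predict the melt after blowing
        raw = process_model.predict(raw, action)
        next_state = label_state(raw["C"], raw["P"], raw["S"], raw["H"], raw["O"])
        r = reward_signal(next_state, raw["C"])   # step 4: assign the reward
        terminal = r in (10, 0, -50)    # target reached or C reduced in excess
        target = r if terminal else r + gamma * max(q[(next_state, a)]
                                                    for a in actions)
        q[(state, action)] += alpha * (target - q[(state, action)])
        if terminal:
            break
        state = next_state
    # the caller then initialises the next heat (return to step 1)
```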
4. Results
- Time, materials and energy are saved, as the resources spent on the previous steps are less likely to be wasted;
- As a consequence of the above, there is an increase in productivity;
- Both of the above benefits would positively affect the profits of the company.
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Type of Process | Start Time | End Time | Requested Weight | Scrap Weight | Total Weight | Tapping Weight | Blowing O2 | Temperature | Sample Number | C | Si | Mn | P | S | N | Al | Cr | Cu | Mo | Ni | H | O |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
EAF Melting | 2019-02-01 01:26:21.630 | 2019-02-01 05:02:25.293 | 5000 | 5050 | 5322.2 | 5480 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
EAF Melting | 2019-02-01 02:59:01.513 | 2019-02-01 02:59:01.873 | - | - | - | - | - | 1461 | - | - | - | - | - | - | - | - | - | - | - | - | - | -
EAF Melting | 2019-02-01 03:11:17.187 | 2019-02-01 03:11:17.500 | - | - | - | - | - | 1570 | - | - | - | - | - | - | - | - | - | - | - | - | - | -
EAF Melting | 2019-02-01 03:15:58.000 | - | - | - | - | - | - | - | 1 | 0.946 | 0.26 | 1.11 | 0.022 | 0.0042 | 0.018 | 0.215 | 0.12 | 0.053 | 0.02 | 0.06 | - | 0.034
EAF Oxygen Blowing | 2019-02-01 03:22:29.157 | 2019-02-01 03:22:29.517 | - | - | - | - | - | 1577 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
EAF Oxygen Blowing | 2019-02-01 03:24:02.593 | 2019-02-01 03:34:33.213 | - | - | - | - | 50 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | -
EAF Oxygen Blowing | 2019-02-01 03:33:26.120 | 2019-02-01 03:33:26.433 | - | - | - | - | - | 1700 | - | - | - | - | - | - | - | - | - | - | - | - | - | -
EAF Oxygen Blowing | 2019-02-01 03:36:46.000 | - | - | - | - | - | - | - | 2 | 0.286 | 0 | 0.82 | 0.0147 | 0.004 | 0.007 | 0.371 | 0.12 | 0.051 | 0.02 | 0.06 | NULL | 0.032
EAF Oxygen Blowing | 2019-02-01 03:38:17.153 | 2019-02-01 03:41:30.263 | - | - | - | - | 6 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | -
EAF Oxygen Blowing | 2019-02-01 03:44:45.000 | - | - | - | - | - | - | - | 3 | 0.184 | 0 | 0.74 | 0.0138 | 0.0045 | 0.006 | 0.231 | 0.11 | 0.052 | 0.02 | 0.06 | NULL | 0.002
EAF Refining | 2019-02-01 03:52:21.247 | 2019-02-01 03:52:21.560 | - | - | - | - | - | 1707 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
EAF Refining | 2019-02-01 04:08:15.490 | 2019-02-01 04:08:15.850 | - | - | - | - | - | 1728 | - | - | - | - | - | - | - | - | - | - | - | - | - | -
EAF Refining | 2019-02-01 04:13:02.000 | - | - | - | - | - | - | NULL | 4 | 0.208 | 0.29 | 1.48 | 0.0199 | 0.0036 | 0.006 | 0.052 | 0.14 | 0.056 | 0.02 | 0.06 | 3.397 | 0.021
Type of Process | Start Time | End Time | Requested Weight | Blowing O2 | Temperature | Sample Number | C | P | S | H | O |
---|---|---|---|---|---|---|---|---|---|---|---|
EAF Melting | 2019-02-01 01:26:21.630 | 2019-02-01 05:02:25.293 | 5000 | - | - | - | - | - | - | - | - |
EAF Melting | 2019-02-01 02:59:01.513 | 2019-02-01 02:59:01.873 | - | - | 1461 | - | - | - | - | - | -
EAF Melting | 2019-02-01 03:11:17.187 | 2019-02-01 03:11:17.500 | - | - | 1570 | - | - | - | - | - | -
EAF Melting | 2019-02-01 03:15:58.000 | - | - | - | - | 1 | 0.946 | 0.022 | 0.0042 | - | 0.034
EAF Oxygen Blowing | 2019-02-01 03:22:29.157 | 2019-02-01 03:22:29.517 | - | - | 1577 | - | - | - | - | - | - |
EAF Oxygen Blowing | 2019-02-01 03:24:02.593 | 2019-02-01 03:34:33.213 | - | 50 | - | - | - | - | - | - | -
EAF Oxygen Blowing | 2019-02-01 03:33:26.120 | 2019-02-01 03:33:26.433 | - | - | 1700 | - | - | - | - | - | -
EAF Oxygen Blowing | 2019-02-01 03:36:46.000 | - | - | - | - | 2 | 0.286 | 0.0147 | 0.004 | NULL | 0.032
EAF Oxygen Blowing | 2019-02-01 03:38:17.153 | 2019-02-01 03:41:30.263 | - | 6 | - | - | - | - | - | - | -
EAF Oxygen Blowing | 2019-02-01 03:44:45.000 | - | - | - | - | 3 | 0.184 | 0.0138 | 0.0045 | NULL | 0.002
EAF Refining | 2019-02-01 03:52:21.247 | 2019-02-01 03:52:21.560 | - | - | 1707 | - | - | - | - | - | - |
EAF Refining | 2019-02-01 04:08:15.490 | 2019-02-01 04:08:15.850 | - | - | 1728 | - | - | - | - | - | -
EAF Refining | 2019-02-01 04:13:02.000 | - | - | - | - | 4 | 0.208 | 0.0199 | 0.0036 | 3.397 | 0.021
Appendix B
- Dynamic Programming (DP) [29]: these algorithms are the basis of RL, and they are capable of computing optimal policies provided that a perfect model of the environment is available. This requirement, together with their tremendous computational cost, limits the utility of this kind of algorithm.
- Monte Carlo (MC) methods [30]: in comparison with dynamic programming, these methods have lower computational expense, and they do not need a perfect model of the environment to attain an optimal policy. Knowing the dynamics of the environment is therefore not required, because the agent learns from direct interaction. These methods only need simulated or actual experience of this interaction in the form of state–action–reward sequences. Another distinctive feature is that they estimate value functions by averaging sample returns. Thus, MC methods are very suitable for episodic tasks, and they do not bootstrap because they base their estimates on the outcome of the interaction and not on other estimates.
- Temporal-Difference (TD) methods [26]: this group of methods combines the most striking features of the two previous ones. From MC methods, TD methods take the ability to learn directly from raw experience, whereas they share with DP methods the fact that they base their estimates on other learned estimates (they bootstrap). Therefore, TD methods do not require a complete model of the environment, and the learning is online and fully incremental. This last feature makes them very appropriate for continuing tasks and for episodic tasks whose episodes are very long. Although it has not been proved mathematically, TD methods generally converge faster than MC methods; a minimal TD(0) update is sketched after this list.
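As a concrete illustration of this bootstrapping, a single tabular TD(0) update for state values can be written as follows; this is the standard textbook form, not code from the present work.

```python
def td0_update(v, state, reward, next_state, alpha=0.1, gamma=1.0, terminal=False):
    """One TD(0) step: move V(state) toward the bootstrapped target R + gamma * V(next_state).

    An MC method would instead wait for the end of the episode and use the
    full observed return as the target.
    """
    target = reward if terminal else reward + gamma * v.get(next_state, 0.0)
    v[state] = v.get(state, 0.0) + alpha * (target - v.get(state, 0.0))
```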
References
1. Oztemel, E.; Gursev, S. Literature Review of Industry 4.0 and Related Technologies. J. Intell. Manuf. 2018, 31, 127–182.
2. Hozdić, E. Smart Factory for Industry 4.0: A Review. Int. J. Mod. Manuf. Technol. 2015, 7, 28–35.
3. Tsai, Y.T.; Lee, C.H.; Liu, T.Y.; Chang, T.J.; Wang, C.S.; Pawar, S.J.; Huang, P.H.; Huang, J.H. Utilization of a Reinforcement Learning Algorithm for the Accurate Alignment of a Robotic Arm in a Complete Soft Fabric Shoe Tongues Automation Process. J. Manuf. Syst. 2020, 56, 501–513.
4. Spielberg, S.; Tulsyan, A.; Lawrence, N.P.; Loewen, P.D.; Gopaluni, R.B. Toward Self-Driving Processes: A Deep Reinforcement Learning Approach to Control. AIChE J. 2019, 65, e16689.
5. Alves Goulart, D.; Dutra Pereira, R. Autonomous pH Control by Reinforcement Learning for Electroplating Industry Wastewater. Comput. Chem. Eng. 2020, 140, 106909.
6. Pane, Y.P.; Nageshrao, S.P.; Kober, J.; Babuška, R. Reinforcement Learning Based Compensation Methods for Robot Manipulators. Eng. Appl. Artif. Intell. 2019, 78, 236–247.
7. Qiu, S.; Li, Z.; Li, Z.; Li, J.; Long, S.; Li, X. Model-Free Control Method Based on Reinforcement Learning for Building Cooling Water Systems: Validation by Measured Data-Based Simulation. Energy Build. 2020, 218, 110055.
8. Cassol, G.O.; Campos, G.V.K.; Thomaz, D.M.; Capron, B.D.O.; Secchi, A.R. Reinforcement Learning Applied to Process Control: A Van Der Vusse Reactor Case Study. Comput. Aided Chem. Eng. 2018, 44, 553–558.
9. Zarandi, M.H.F.; Ahmadpour, P. Fuzzy Agent-Based Expert System for Steel Making Process. Expert Syst. Appl. 2009, 36, 9539–9547.
10. Cavaliere, P. Electric Arc Furnace: Most Efficient Technologies for Greenhouse Emissions Abatement. In Clean Ironmaking and Steelmaking Processes; Cavaliere, P., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 303–375.
11. Marchiori, F.; Belloni, A.; Benini, M.; Cateni, S.; Colla, V.; Ebel, A.; Lupinelli, M.; Nastasi, G.; Neuer, M.; Pietrosanti, C.; et al. Integrated Dynamic Energy Management for Steel Production. Energy Procedia 2017, 105, 2772–2777.
12. Aksyonov, K.; Antonova, A.; Goncharova, N. Analysis of the Electric Arc Furnace Workshop Logistic Processes Using Multiagent Simulation. Adv. Intell. Syst. Comput. 2017, 678, 390–397.
13. Carlsson, L.S.; Samuelsson, P.B.; Jönsson, P.G. Predicting the Electrical Energy Consumption of Electric Arc Furnaces Using Statistical Modeling. Metals 2019, 9, 959.
14. Ruiz, E.; Ferreño, D.; Cuartas, M.; Lloret, L.; Ruiz del Árbol, P.M.; López, A.; Esteve, F.; Gutiérrez-Solana, F. Machine Learning Methods for the Prediction of the Inclusion Content of Clean Steel Fabricated by Electric Arc Furnace and Rolling. Metals 2021, 11, 914.
15. Chen, C.; Wang, N.; Chen, M. Optimization of Dephosphorization Parameter in Consteel Electric Arc Furnace Using Rule Set Model. Steel Res. Int. 2021, 92, 200719.
16. Cherukuri, H.; Perez-Bernabeu, E.; Selles, M.; Schmitz, T. Machining Chatter Prediction Using a Data Learning Model. J. Manuf. Mater. Process. 2019, 3, 45.
17. Vimpari, J.; Lilja, J.; Paananen, T.; Leyva, C.; Stubbe, G.; Kleimt, B.; Fuchs, P.; Gassner, G.; Helaakoski, H.; Heiskanen, M.; et al. D1.1 Use Case Descriptions and Analysis. 2018. Available online: https://ec.europa.eu/research/participants/documents/downloadPublic?documentIds=080166e5a0fdfd41&appId=PPGMS (accessed on 29 March 2021).
18. Pierre, R.; Kleimt, B.; Schlinge, L.; Unamuno, I.; Arteaga, A. Modelling of EAF Slag Properties for Improved Process Control. In Proceedings of the 11th European Electric Steelmaking Conference, Venice, Italy, 25–27 May 2016; p. 10.
19. Kleimt, B.; Pierre, R.; Dettmer, B.; Jianxiong, D.; Schlinge, L.; Schliephake, H. Continuous Dynamic EAF Process Control for Increased Energy and Resource Efficiency. In Proceedings of the 10th European Electric Steelmaking Conference, Graz, Austria, 25–28 September 2012; p. 10.
20. Rekersdrees, T.; Snatkin, H.; Schlinge, L.; Pierre, R.; Kordel, T.; Kleimt, B.; Gogolin, S.; Haverkamp, V. Adaptative EAF Online Control Based on Innovative Sensors and Comprehensive Models. In Proceedings of the 3rd European Steel Technology & Application Days, Vienna, Austria, 26–29 June 2017; p. 12.
21. Dankar, F.K.; Ibrahim, M. Fake It till You Make It: Guidelines for Effective Synthetic Data Generation. Appl. Sci. 2021, 11, 2158.
22. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction, 2nd ed.; Bach, F., Ed.; The MIT Press: Cambridge, MA, USA, 2018.
23. Martin, L.P. Chapter 8: Markov Decision Processes. Stoch. Model. 1990, 2, 331–434.
24. Stork, J.; Zaefferer, M.; Bartz-Beielstein, T.; Eiben, A.E. Understanding the Behavior of Reinforcement Learning Agents. In Proceedings of the 9th International Conference on Bioinspired Optimisation Methods and Their Applications, BIOMA 2020, Brussels, Belgium, 19–20 November 2020; pp. 148–160.
25. Buchli, J.; Farshidian, F.; Winkler, A.; Sandy, T.; Giftthaler, M. Optimal and Learning Control for Autonomous Robots. arXiv 2017, arXiv:1708.09342.
26. Kaisaravalli Bhojraj, G.; Surya Achyut, M.Y. Policy-Based Reinforcement Learning Control for Window Opening and Closing in an Office Building. Master's Thesis, Dalarna University, Falun, Sweden, 2020.
27. The Python Standard Library—Python 3.7.10 Documentation. Available online: https://docs.python.org/3.7/library/index.html (accessed on 29 March 2021).
28. User Guide—Pandas 1.0.1 Documentation. Available online: https://pandas.pydata.org/pandas-docs/version/1.0.1/user_guide/index.html (accessed on 29 March 2021).
29. Barto, A.G. Reinforcement Learning and Dynamic Programming. IFAC Proc. Vol. 1995, 28, 407–412.
30. Mondal, A.K. A Survey of Reinforcement Learning Techniques: Strategies, Recent Development, and Future Directions. arXiv 2020, arXiv:2001.06921.
Components \ State Label | Far Exceeded Target | Slightly Exceeded Target | Target | Close to Target | Far from Target
---|---|---|---|---|---
C (%) | [0.0, 0.1) | [0.1, 0.195) | [0.195, 0.235] | (0.235, 0.35) | [0.35, ∞)
P (%) | - | - | [0.0, 0.0219] | (0.0219, 0.03) | [0.03, ∞)
S (%) | - | - | [0.0, 0.01] | (0.01, 0.0125) | [0.0125, ∞)
H (ppm) | - | - | [0.0, 6.0] | (6.0, 10) | [10, ∞)
O (%) | - | - | [0.0, 0.1] | (0.1, 0.25) | [0.25, ∞)
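A possible encoding of this discretisation in code is sketched below. The interval boundaries come from the table above, and the single-letter codes (T, C, F, L) follow the state strings used in the transition tables that follow; the letter chosen for "far exceeded target" is an assumption, since it does not appear in the excerpts shown.

```python
def label_carbon(c_pct: float) -> str:
    """Map a carbon content (%) to one of the state labels of the table above."""
    if c_pct < 0.1:
        return "E"   # far exceeded target (letter assumed; heat becomes unsuccessful)
    if c_pct < 0.195:
        return "L"   # slightly exceeded target
    if c_pct <= 0.235:
        return "T"   # target
    if c_pct < 0.35:
        return "C"   # close to target
    return "F"       # far from target

def label_low_is_better(value: float, target_max: float, close_max: float) -> str:
    """Generic labelling for P, S, H and O, where a lower content is better."""
    if value <= target_max:
        return "T"
    if value < close_max:
        return "C"
    return "F"

def label_state(c: float, p: float, s: float, h_ppm: float, o: float) -> str:
    """Five-letter state string in the order C, P, S, H, O (e.g. 'FCTFT')."""
    return (label_carbon(c)
            + label_low_is_better(p, 0.0219, 0.03)
            + label_low_is_better(s, 0.01, 0.0125)
            + label_low_is_better(h_ppm, 6.0, 10.0)
            + label_low_is_better(o, 0.1, 0.25))
```

Applied to the first raw transition listed below (C = 0.9464, P = 0.022, S = 0.0042, H = 10.0 ppm, O = 0.034), this sketch returns "FCTFT", which matches the labelled state shown in the corresponding table.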
Sequence | C | P | S | H | O | Action | Next C | Next P | Next S | Next H | Next O |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0.9464 | 0.022 | 0.0042 | 10.0 | 0.034 | 50.0 | 0.2862 | 0.0147 | 0.004 | 10.0 | 0.032 |
2 | 0.2862 | 0.0147 | 0.004 | 10.0 | 0.032 | 4.0 | 0.1837 | 0.0138 | 0.0045 | 3.397 | 0.002 |
Sequence | C | P | S | H | O | Action | Next C | Next P | Next S | Next H | Next O | Reward |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | F | C | T | F | T | 50.0 | C | T | T | F | T | −10 |
2 | C | T | T | F | T | 4.0 | L | T | T | T | T | 0 |
State \ Action (m³) | 2 | 4 | 8 | 10 | 20 | 30 | 35 | 40 | 45 | 50 | 55 | 60
---|---|---|---|---|---|---|---|---|---|---|---|---
CTTFT | −6.667 | −2.161 | 0.0 | 9.999 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | −9.999
FTTFT | −6.7 | −3.368 | 0.0 | −14.85 | −4.3 | −15.88 | −22.06 | −12.73 | −15.74 | −31.71 | −10.0 | −22.08
FTCFT | 0.0 | 0.0 | 0.0 | 0.0 | −9.999 | −10.0 | −0.011 | −20.23 | 0.0 | −30.0 | 9.999 | −19.85
FCTFT | 0.0 | 4.974 | −5.025 | 0.0 | 0.0 | −3.266 | −4.975 | −11.28 | −5.05 | −1.442 | −5.025 | −8.848
FCCFT | 0.0 | −10.0 | 0.0 | 0.0 | 0.0 | 0.0 | −9.999 | −5.025 | −24.88 | 0.0 | 0.0 | −9.999
State \ Action (m³) | 2 | 4 | 8 | 10 | 20 | 30 | 35 | 40 | 45 | 50 | 55 | 60
---|---|---|---|---|---|---|---|---|---|---|---|---
CTTFT | 3 | 7 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
FTTFT | 3 | 3 | 0 | 4 | 7 | 5 | 15 | 19 | 13 | 12 | 2 | 5
FTCFT | 0 | 0 | 0 | 0 | 1 | 2 | 1 | 6 | 0 | 3 | 1 | 2
FCTFT | 0 | 2 | 2 | 0 | 0 | 3 | 2 | 7 | 4 | 7 | 2 | 9
FCCFT | 0 | 2 | 0 | 1 | 1 | 0 | 1 | 2 | 2 | 3 | 0 | 1
Chemical Element | Mean | Standard Deviation |
---|---|---|
C | 0.8536 | 0.1579 |
Si | 0.1225 | 0.0839 |
Mn | 0.8603 | 0.1988 |
P | 0.0206 | 0.007 |
S | 0.0084 | 0.0025 |
N | 0.0119 | 0.003 |
Al | 0.2557 | 0.1714 |
Cr | 0.1222 | 0.0517 |
Cu | 0.0479 | 0.0086 |
Mo | 0.0150 | 0.0109 |
Ni | 0.0540 | 0.0244 |
H | 0.0 | 0.0 |
O | 0.0246 | 0.0199 |
Simulated Heats | Super-Successful Heats (Final State: TTTTT) | Simple-Successful Heats (Final State: LTTTT) | Rejected Heats Due to the C Content | Rejected Heats Due to the Number of Actions | Total Successful Heats |
---|---|---|---|---|---|
<15,000 (when ε > 0) | 1888 | 5645 | 7382 | 482 | 7533 |
>15,000 (when ε = 0) | 2157 | 6444 | 6331 | 68 | 8601 |
Total (30,000 heats) | 4045 | 11,989 | 13,713 | 550 | 16,034 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).