Computer Vision for Safety Management in the Steel Industry
Abstract
1. Introduction
2. Background
2.1. Overview of the Steelmaking Process
2.2. Occupational Hazards in Steel Manufacturing
2.3. Current Safety Practices in the Steel Manufacturing Industry
2.4. Computer Vision Applications in Safety Management and Review of YOLO Models
2.5. Research Need Statement
3. Materials and Methods
3.1. Characterization of Computer Vision Applications for Safety in Steel Manufacturing
3.2. Evaluation and Selection of Commercially Available CV Systems for Safety Management
3.2.1. Computer Vision System Search
3.2.2. Computer Vision System Evaluation and Selection
3.3. Pilot Study on Safety Hard Hat Detection
3.3.1. Pilot Study Context
3.3.2. Detection Models
3.3.3. Dataset Collection and Processing
3.3.4. Evaluation Metrics
4. Results and Discussion
4.1. Safety Hazard Characterization and TOPSIS Analysis
4.1.1. Safety Hazard Characterization for CV Applications in Steel Manufacturing
4.1.2. TOPSIS Analysis
4.2. Pilot Case Study Results: Safety Hard Hat Detection Using Candidate CV System
4.2.1. Experimental Environment
4.2.2. Detection Results across Models
Precision, Recall, F1-Score, and mAP Results
Specificity and AUC Results
5. Contributions and Limitations of This Study
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Khahro, S.H.; Khahro, Q.H.; Ali, T.H.; Memon, Z.A. Industrial Accidents and Key Causes: A Case Study of the Steel Industry. ACM Int. Conf. Proceeding Ser. 2023, 66–70. [Google Scholar] [CrossRef]
- Sacks, R.; Perlman, A.; Barak, R. Construction safety training using immersive virtual reality. Constr. Manag. Econ. 2013, 31, 1005–1017. [Google Scholar] [CrossRef]
- Zhang, S.; Sulankivi, K.; Kiviniemi, M.; Romo, I.; Eastman, C.M.; Teizer, J. BIM-based fall hazard identification and prevention in construction safety planning. Saf. Sci. 2015, 72, 31–45. [Google Scholar] [CrossRef]
- Nordlöf, H.; Wiitavaara, B.; Winblad, U.; Wijk, K.; Westerling, R. Safety culture and reasons for risk-taking at a large steel-manufacturing company: Investigating the worker perspective. Saf. Sci. 2015, 73, 126–135. [Google Scholar] [CrossRef]
- Zhou, D.; Xu, K.; Lv, Z.; Yang, J.; Li, M.; He, F.; Xu, G. Intelligent Manufacturing Technology in the Steel Industry of China: A Review. Sensors 2022, 22, 8194. [Google Scholar] [CrossRef] [PubMed]
- Chai, J.; Zeng, H.; Li, A.; Ngai, E.W.T. Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Mach. Learn. Appl. 2021, 6, 100134. [Google Scholar] [CrossRef]
- Feng, X.; Jiang, Y.; Yang, X.; Du, M.; Li, X. Computer vision algorithms and hardware implementations: A survey. Integration 2019, 69, 309–320. [Google Scholar] [CrossRef]
- Lan, R.; Awolusi, I.; Cai, J. Digital computer vision for safety management in steel manufacturing. In Proceedings of the Iron & Steel Technology Conference, Detroit, MI, USA, 8–11 May 2023; pp. 31–42. [Google Scholar]
- Marks, E.D.; Teizer, J. Method for testing proximity detection and alert technology for safe construction equipment operation. Constr. Manag. Econ. 2013, 31, 636–646. [Google Scholar] [CrossRef]
- Awolusi, I.; Song, S.; Marks, E. Forklift safety: Sensing the dangers with technology. Prof. Saf. 2017, 62, 36–39. [Google Scholar]
- Kursunoglu, N.; Onder, S.; Onder, M. The Evaluation of Personal Protective Equipment Usage Habit of Mining Employees Using Structural Equation Modeling. Saf. Health Work. 2022, 13, 180–186. [Google Scholar] [CrossRef]
- Zhang, M.; Cao, Z.; Yang, Z.; Zhao, X. Utilizing Computer Vision and Fuzzy Inference to Evaluate Level of Collision Safety for Workers and Equipment in a Dynamic Environment. J. Constr. Eng. Manag. 2020, 146, 04020051. [Google Scholar] [CrossRef]
- Ghosh, A.; Chatterjee, A. Ironmaking and Steelmaking: Theory and Practice; PHI Learn. Priv. Ltd.: Delhi, India, 2008. [Google Scholar]
- Xu, Z.J.; Zheng, Z.; Gao, X.Q. Operation optimization of the steel manufacturing process: A brief review. Int. J. Miner. Metall. Mater. 2021, 28, 1274–1287. [Google Scholar] [CrossRef]
- Bae, J.; Li, Y.; Ståhl, N.; Mathiason, G.; Kojola, N. Using Machine Learning for Robust Target Prediction in a Basic Oxygen Furnace System. Metall. Mater. Trans. B 2020, 51, 1632–1645. [Google Scholar] [CrossRef]
- Nutting, J.; Edward, F.; Wondris, E. Steel. Encyclopedia Britannica. Available online: https://www.britannica.com/technology/steel (accessed on 4 October 2022).
- Burchart-Korol, D. Life cycle assessment of steel production in Poland: A case study. J. Clean. Prod. 2013, 54, 235–243. [Google Scholar] [CrossRef]
- Nair, A.T.; Mathew, A.; Archana, A.R.; Akbar, M.A. Use of hazardous electric arc furnace dust in the construction industry: A cleaner production approach. J. Clean. Prod. 2022, 377, 134282. [Google Scholar] [CrossRef]
- World Steel Association. Safety and Health in the Steel Industry: Data Report 2023. 2023. Available online: https://worldsteel.org/steel-topics/safety-and-health/safety-and-health-in-the-steel-industry-data-report-2023/ (accessed on 18 November 2023).
- Ali, M.X.M.; Arifin, K.; Abas, A.; Ahmad, M.A.; Khairil, M.; Cyio, M.B.; Samad, M.A.; Lampe, I.; Mahfudz, M.; Ali, M.N. Systematic Literature Review on Indicators Use in Safety Management Practices among Utility Industries. Int. J. Environ. Res. Public Health 2022, 19, 6198. [Google Scholar] [CrossRef] [PubMed]
- Tang, B.; Chen, L.; Sun, W.; Lin, Z.K. Review of surface defect detection of steel products based on machine vision. IET Image Process 2022, 17, 303–322. [Google Scholar] [CrossRef]
- International Labour Organization. Sectoral Activities Programme. In Proceedings of the Code of Practice on Safety and Health in the Iron and Steel Industry: Meeting of Experts to Develop a Revised Code of Practice on Safety and Health in the Iron and Steel Industry, Geneva, Switzerland, 1–9 February 2005. [Google Scholar]
- Kifle, M.; Engdaw, D.; Alemu, K.; Sharma, H.R.; Amsalu, S.; Feleke, A.; Worku, W. Work related injuries and associated risk factors among iron and steel industries workers in Addis Ababa, Ethiopia. Saf. Sci. 2014, 63, 211–216. [Google Scholar] [CrossRef]
- National Institute for Occupational Safety and Health. Hierarchy of Controls. CDC. 2015. Available online: https://www.cdc.gov/niosh/hierarchy-of-controls/about/index.html (accessed on 20 November 2023).
- Houette, B.; Mueller-Hirth, N. Practices, preferences, and understandings of rewarding to improve safety in high-risk industries. J. Saf. Res. 2022, 80, 302–310. [Google Scholar] [CrossRef]
- Berhan, E. Prevalence of occupational accident; and injuries and their associated factors in iron, steel and metal manufacturing industries in Addis Ababa. Cogent Eng. 2020, 7, 1723211. [Google Scholar] [CrossRef]
- Awolusi, I.; Marks, E.; Hallowell, M. Wearable technology for personalized construction safety monitoring and trending: Review of applicable devices. Autom. Constr. 2018, 85, 96–106. [Google Scholar] [CrossRef]
- Márquez-Sánchez, S.; Campero-Jurado, I.; Herrera-Santos, J.; Rodríguez, S.; Corchado, J.M. Intelligent platform based on smart ppe for safety in workplaces. Sensors 2021, 21, 4652. [Google Scholar] [CrossRef]
- Nnaji, C.; Awolusi, I.; Park, J.W.; Albert, A. Wearable sensing devices: Towards the development of a personalized system for construction safety and health risk mitigation. Sensors 2021, 21, 682. [Google Scholar] [CrossRef]
- Hong, X.; Lv, B. Application of Training Simulation Software and Virtual Reality Technology in Civil Engineering. In Proceedings of the 2022 IEEE International Conference on Electrical Engineering, Big Data and Algorithms (EEBDA), Changchun, China, 25–27 February 2022; pp. 520–524. [Google Scholar]
- Velev, D.; Zlateva, P. Virtual Reality Challenges in Education and Training. Int. J. Learn. 2017, 3, 33–37. [Google Scholar] [CrossRef]
- Chai, X.; Lee, B.G.; Pike, M.; Wu, R.; Chieng, D.; Chung, W.Y. Pre-impact Firefighter Fall Detection Using Machine Learning on the Edge. IEEE Sens. J. 2023, 23, 14997–15009. [Google Scholar] [CrossRef]
- Zhou, L.; Zhang, L.; Konz, N. Computer Vision Techniques in Manufacturing. IEEE Trans. Syst. Man. Cybern. Syst. 2022, 53, 105–117. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. arXiv 2015, arXiv:1506.02640. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar] [CrossRef]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Terven, J.; Córdova-Esparza, D.M.; Romero-González, J.A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
- Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo Algorithm Developments. Procedia Comput. Sci. 2021, 199, 1066–1073. [Google Scholar] [CrossRef]
- Li, Y.; Wang, H.; Dang, L.M.; Nguyen, T.N.; Han, D.; Lee, A.; Jang, I.; Moon, H. A deep learning-based hybrid framework for object detection and recognition in autonomous driving. IEEE Access 2020, 8, 194228–194239. [Google Scholar] [CrossRef]
- Narejo, S.; Pandey, B.; Vargas, D.E.; Rodriguez, C.; Anjum, M.R. Weapon Detection Using YOLO V3 for Smart Surveillance System. Math. Probl. Eng. 2021, 2021, 9975700. [Google Scholar] [CrossRef]
- Ragab, M.G.; Abdulkader, S.J.; Muneer, A.; Alqushaibi, A.; Sumiea, E.H.; Qureshi, R.; Al-Selwi, S.M.; Alhussian, H. A Comprehensive Systematic Review of YOLO for Medical Object Detection (2018 to 2023). IEEE Access 2024, 12, 57815–57836. [Google Scholar] [CrossRef]
- Lippi, M.; Bonucci, N.; Carpio, R.F.; Contarini, M.; Speranza, S.; Gasparri, A. A YOLO-based pest detection system for precision agriculture. In Proceedings of the 2021 29th Mediterranean Conference on Control and Automation, MED 2021, Puglia, Italy, 22–25 June 2021; pp. 342–347. [Google Scholar] [CrossRef]
- Kim, K.; Kim, K.; Jeong, S. Application of YOLO v5 and v8 for Recognition of Safety Risk Factors at Construction Sites. Sustainability 2023, 15, 15179. [Google Scholar] [CrossRef]
- Qiu, Q.; Lau, D. Real-time detection of cracks in tiled sidewalks using YOLO-based method applied to unmanned aerial vehicle (UAV) images. Autom. Constr. 2023, 147, 104745. [Google Scholar] [CrossRef]
- Gai, R.; Chen, N.; Yuan, H. A detection algorithm for cherry fruits based on the improved YOLO-v4 model. Neural Comput. Appl. 2023, 35, 13895–13906. [Google Scholar] [CrossRef]
- Jung, H.; Rhee, J. Application of YOLO and ResNet in Heat Staking Process Inspection. Sustainability 2022, 14, 15892. [Google Scholar] [CrossRef]
- Mushtaq, F.; Ramesh, K.; Deshmukh, S.; Ray, T.; Parimi, C.; Tandon, P.; Jha, P.K. YOLO-v5 and image processing based component identification system. Eng. Appl. Artif. Intell. 2023, 118, 105665. [Google Scholar] [CrossRef]
- Kou, X.; Liu, S.; Cheng, K.; Qian, Y. Development of a YOLO-V3-based model for detecting defects on steel strip surface. Measurement 2021, 182, 109454. [Google Scholar] [CrossRef]
- Pitts, H. Warehouse Robot Detection for Human Safety Using YOLOv8. In Proceedings of the SoutheastCon 2024, Atlanta, GA, USA, 15–24 March 2024; pp. 1184–1188. [Google Scholar] [CrossRef]
- Hao, Z.; Wang, Z.; Bai, D.; Tao, B.; Tong, X.; Chen, B. Intelligent Detection of Steel Defects Based on Improved Split Attention Networks. Front. Bioeng. Biotechnol. 2022, 9, 810876. [Google Scholar]
- Martins, L.A.O.; Pádua, F.L.C.; Almeida, P.E.M. Automatic detection of surface defects on rolled steel using Computer Vision and Artificial Neural Networks. In Proceedings of the IECON 2010-36th Annual Conference on IEEE Industrial Electronics Society, Glendale, AZ, USA, 7–10 November 2010; pp. 1081–1086. [Google Scholar]
- Sizyakin, R.; Voronin, V.; Gapon, N.; Zelensky, A.; Pižurica, A. Automatic detection of welding defects using the convolutional neural network. In Automated Visual Inspection and Machine Vision III; SPIE: Philadelphia, PA, USA, 2019. [Google Scholar]
- Bolderston, A. Conducting a research interview. J. Med. Imaging Radiat. Sci. 2012, 43, 66–76. [Google Scholar] [CrossRef]
- Guest, G.; Namey, E.; Taylor, J.; Eley, N.; McKenna, K. Comparing focus groups and individual interviews: Findings from a randomized study. Int. J. Soc. Res. Methodol. 2017, 20, 693–708. [Google Scholar] [CrossRef]
- Creswell, J.W. Editorial: Mapping the field of mixed methods research. J. Mix. Methods Res. 2009, 3, 95–108. [Google Scholar] [CrossRef]
- Çelikbilek, Y.; Tüysüz, F. An in-depth review of theory of the TOPSIS method: An experimental analysis. J. Manag. Anal. 2020, 7, 281–300. [Google Scholar] [CrossRef]
- Kraujalienė, L. Comparative Analysis of Multicriteria Decision-Making Methods Evaluating the Efficiency of Technology Transfer. Bus. Manag. Educ. 2019, 17, 72–93. [Google Scholar] [CrossRef]
- Olson, D.L. Comparison of weights in TOPSIS models. Math. Comput. Model. 2004, 40, 721–727. [Google Scholar] [CrossRef]
- Chakraborty, S. TOPSIS and Modified TOPSIS: A comparative analysis. Decis. Anal. J. 2022, 2, 100021. [Google Scholar] [CrossRef]
- Yahya, M.N.; Gökçekuş, H.; Ozsahin, D.U.; Uzun, B. Evaluation of wastewater treatment technologies using topsis. Desalin. Water Treat. 2020, 177, 416–422. [Google Scholar] [CrossRef]
- Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; Ommer, B. High-Resolution Image Synthesis with Latent Diffusion Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
- Tharwat, A. Classification assessment methods. Appl. Comput. Inform. 2018, 17, 168–192. [Google Scholar] [CrossRef]
- Obi, J.C. A comparative study of several classification metrics and their performances on data. World J. Adv. Eng. Technol. Sci. 2023, 8, 308–314. [Google Scholar] [CrossRef]
- Yan, X.; Zhang, H.; Li, H. Computer vision-based recognition of 3D relationship between construction entities for monitoring struck-by accidents. Comput. Aided Civ. Infrastruct. Eng. 2020, 35, 1023–1038. [Google Scholar] [CrossRef]
- Anwar, Q.; Hanif, M.; Shimotoku, D.; Kobayashi, H.H. Driver awareness collision/proximity detection system for heavy vehicles based on deep neural network. J. Phys. Conf. Ser. 2022, 2330, 012001. [Google Scholar] [CrossRef]
- Wu, H.; Wu, D.; Zhao, J. An intelligent fire detection approach through cameras based on computer vision methods. Process Saf. Environ. Prot. 2019, 127, 245–256. [Google Scholar] [CrossRef]
- Krestenitis, M.; Orfanidis, G.; Ioannidis, K.; Avgerinakis, K.; Vrochidis, S.; Kompatsiaris, I. Oil spill identification from satellite images using deep neural networks. Remote Sens. 2019, 11, 1762. [Google Scholar] [CrossRef]
- Lee, H.; Lee, G.; Lee, S.H.; Ahn, C.R. Assessing exposure to slip, trip, and fall hazards based on abnormal gait patterns predicted from confidence interval estimation. Autom. Constr. 2022, 139, 104253. [Google Scholar] [CrossRef]
- Luo, H.; Wang, M.; Wong, P.K.Y.; Cheng, J.C.P. Full body pose estimation of construction equipment using computer vision and deep learning techniques. Autom. Constr. 2020, 110, 103016. [Google Scholar] [CrossRef]
- Balakreshnan, B.; Richards, G.; Nanda, G.; Mao, H.; Athinarayanan, R.; Zaccaria, J. PPE compliance detection using artificial intelligence in learning factories. Procedia Manuf. 2020, 45, 277–282. [Google Scholar] [CrossRef]
- Abd El-Rahiem, B.; Sedik, A.; El Banby, G.M.; Ibrahem, H.M.; Amin, M.; Song, O.Y.; Khalaf, A.A.M.; Abd El-Samie, F.E. An efficient deep learning model for classification of thermal face images. J. Enterp. Inf. Manag. 2020, 36, 706–717. [Google Scholar] [CrossRef]
- Gupta, A.; Ramanath, R.; Shi, J.; Keerthi, S.S. Adam vs. SGD: Closing the generalization gap on image classification. In Proceedings of the OPT2021: 13th Annual Workshop on Optimization for Machine Learning, Virtual, 22 October 2021. [Google Scholar]
Use Case | Sector | Classes | YOLO Network | Number of Images | Performance | Source |
---|---|---|---|---|---|---|
Detection of construction equipment | Construction | 4 | YOLO v5 & v8 | 4800 | mAP@0.5: 0.951 | [43] |
Detection of sidewalk cracks | Construction | 4 | YOLO v2, v3 and v4-tiny | 4000 | Accuracy 0.94 | [44] |
Detection of cherry fruits | Agriculture | 3 | YOLO v4 | 400 | F1-score 0.947 | [45] |
Heat staking process inspection | Automobile industry | 3 | YOLO v5 | 3000 | mAP@0.5: 0.95 | [46] |
Assembly component identification | Aerospace | 150 | YOLO v5 | 9450 | mAP@0.5: 0.99 | [47] |
Surface defects on steel surface | Steel | 6 | YOLO v3 | 4057 | mAP@0.5: 0.72 | [48] |
Warehouse robot detection | Logistics | 2 | YOLO v8 | 335 | mAP@0.5: 0.86 | [49] |
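The mAP@0.5 figures in the table above are averages of per-class average precision (AP), the area under the precision–recall curve at a fixed IoU threshold (here 0.5). A minimal sketch of all-point-interpolation AP for one class, assuming detections have already been matched to ground truth and sorted by descending confidence (the `average_precision` helper and its toy inputs are illustrative, not taken from the cited studies):

```python
def average_precision(tp_flags, n_gt):
    """All-point-interpolation AP for a single class.

    tp_flags: 1/0 per detection (true/false positive), sorted by
    descending confidence; n_gt: number of ground-truth objects.
    """
    tp = fp = 0
    precisions, recalls = [], []
    for flag in tp_flags:
        tp += flag
        fp += 1 - flag
        precisions.append(tp / (tp + fp))
        recalls.append(tp / n_gt)
    # Monotone envelope: each precision becomes the max precision
    # achieved at any equal-or-higher recall.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Integrate the enveloped precision over recall.
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# Toy example: three detections, two ground-truth objects.
ap = average_precision([1, 0, 1], n_gt=2)  # ≈ 0.833
```

Averaging this quantity over all classes (and, for mAP@0.5:0.95, over IoU thresholds from 0.5 to 0.95) yields the mAP values reported above.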
Features | YOLOv5 | YOLOv8 | YOLOv9 |
---|---|---|---|
Network Type | Fully Convolutional | Fully Convolutional | Fully Convolutional
Backbone for Feature Extraction | CSPDarknet53 | Custom CSPDarknet53 backbone with cross-stage partial connections | Generalized Efficient Layer Aggregation Network (GELAN)
Neck | Path Aggregation Network (PANet) | Path Aggregation Network (PANet) | Programmable Gradient Information (PGI) |
Head | YOLOv5 Head (Three Detection Layers) | YOLOv8 Head (Improved Anchor-free) | YOLOv8 Head (Adaptive Anchor-free) |
Safety Hazard | CV Task | Data Collection Device | Hazard Sample Scenarios | Computer Vision Application | Source |
---|---|---|---|---|---|
Struck-by | Object tracking, object detection | RGB Cameras, RGBD Cameras, Stereo Vision Cameras | (1) In mini-mills with shredders on site, workers are exposed to flying objects as raw materials are shredded and crushed. (2) Moving equipment such as dump trucks or gantry cranes can strike workers. (6) Workers can be struck by finished products such as sheared and bent rebars, and by the trucks used for loading. | CV can track personnel in real time and issue alerts when workers are in danger of being struck. | [64]
Caught in-between | Object tracking, object detection | RGB Cameras, RGBD Cameras, Stereo Vision Cameras | (1) In mini-mills with shredders on site, workers are exposed to the danger of being caught in between equipment, the shredder mill, and raw materials at the shredding yard. (2) There is a danger of workers being caught between loads transferred by a crane (within its swing radius) and stationary or moving objects in the steel shop. (6) Similarly, during finishing and transportation, there is a hazard of workers being caught between trucks and objects, such as stacked rebars. | CV tasks can help in classifying, detecting, and tracking people and equipment, thereby alerting workers in proximity to these hazardous scenarios. | [65]
Fire | Object detection, object classification | RGB Cameras, RGBD Cameras, Infrared Cameras, Flash LIDAR | (2) There is a very high risk of fire near furnaces: molten steel is usually at very high temperatures (>2800 °F), which increases the occurrence of fire hazards. (3) The LMS is a hotspot that includes molten steel undergoing purification, alloying, desulphurization, degassing, etc. (4) The solidification of liquid steel involves working at high temperatures, with a corresponding possibility of fire. | CV technologies can detect fire at its incipient stage, often earlier than smoke detectors, because DL models recognize flames directly in camera imagery. | [66]
Spills | Object classification, segmentation, and detection | RGB Camera, RGBD Cameras | (2) Spills from the molten metal at the furnace can pose a hazard to workers. (3) Spill hazard is also prominent during the tapping process from EAF to the refining station. (4) There is the likelihood of spills of molten steel, especially during the transfer of molten steel from the Ladle to the Tundish. | CV using segmentation tasks can detect these spills and alert workers in real time when in proximity to the spills. | [67] |
Slips/trips/falls | Object detection, action recognition | RGB Cameras, RGBD Cameras | (2) Slip, trip, and fall occurrences are high at the furnace due to the nature of activities during this process; visibility is often reduced at this location by the excess heat from the furnace, which increases the possibility of trips and falls. (5) Workers may trip at the rolling mill due to improperly stacked rolling equipment. (6) Workers are exposed to the danger of slips, trips, and falls during the finishing and transportation work process; stacked rebars can cause this, too. | CV can detect workers’ proximity to slip, trip, and fall hazards and detect when workers fall; this is viable in real time when combined with sensors embedded in safety vests, smart watches, or hard hats. | [68]
Bad worker posture | Pose estimation/action recognition | RGB Cameras, RGBD Cameras, Stereo Vision Cameras | This hazard is recognized and characterized in all phases of the work process. Bad worker posture is observed in all work areas that involve workers’ presence. There is an increasing need to observe human interaction with tools, equipment, and the environment. | CV techniques using images and videos can detect workers’ postures in every work process and send alerts when workers’ posture poses a risk to their safety. | [69] |
No PPE | Classification, object detection | RGB Cameras, RGBD Cameras | Personal protective equipment (PPE) is required in all work processes in the mini-mill and must be worn at all work process locations. | CV using images can detect workers not wearing their PPE, including safety helmets, vests, gloves, and, in some cases, safety glasses. | [70]
Extreme temperature | Image classification, segmentation, and object detection | Infrared Cameras (IR) | The entire mini-mill work process exposes workers to extremely high temperatures, which need to be critically monitored. | CV using heat maps from face detection can identify, based on varying color codes, when workers are experiencing heat stress. | [71]
Linguistic Value | PPED | DP | OA | EG | H | GF | PHE | STF | AS | RT | GUI |
---|---|---|---|---|---|---|---|---|---|---|---|
Available = 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
Not Available = 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
CV System | PPED | DP | OA | EG | H | GF | PHE | STF | AS | RT | GUI |
---|---|---|---|---|---|---|---|---|---|---|---|
Everguard | 2 | 2 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
Intenseye | 2 | 2 | 1 | 2 | 1 | 1 | 2 | 2 | 1 | 2 | 2 |
Cogniac | 2 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 |
Protex | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 |
Rhyton | 2 | 1 | 1 | 2 | 1 | 1 | 2 | 1 | 1 | 1 | 1 |
Chooch | 2 | 2 | 2 | 1 | 1 | 2 | 2 | 1 | 1 | 2 | 2 |
Kogniz | 2 | 1 | 2 | 1 | 1 | 1 | 2 | 2 | 1 | 2 | 2 |
Matroid | 2 | 2 | 2 | 1 | 1 | 2 | 1 | 1 | 1 | 2 | 2 |
Weights | 0.1 | 0.05 | 0.1 | 0.1 | 0.05 | 0.1 | 0.1 | 0.1 | 0.15 | 0.1 | 0.05 |
CV System | PPED | DP | OA | EG | H | GF | PHE | STF | AS | RT | GUI |
---|---|---|---|---|---|---|---|---|---|---|---|
Everguard | 0.354 | 0.392 | 0.224 | 0.485 | 0.603 | 0.485 | 0.417 | 0.485 | 0.603 | 0.371 | 0.371 |
Intenseye | 0.354 | 0.392 | 0.224 | 0.485 | 0.302 | 0.243 | 0.417 | 0.485 | 0.302 | 0.371 | 0.371 |
Cogniac | 0.354 | 0.392 | 0.447 | 0.243 | 0.302 | 0.243 | 0.209 | 0.243 | 0.302 | 0.371 | 0.371 |
Protex | 0.354 | 0.392 | 0.224 | 0.243 | 0.302 | 0.243 | 0.209 | 0.243 | 0.302 | 0.371 | 0.371 |
Rhyton | 0.354 | 0.196 | 0.224 | 0.485 | 0.302 | 0.243 | 0.417 | 0.243 | 0.302 | 0.186 | 0.186 |
Chooch | 0.354 | 0.392 | 0.447 | 0.243 | 0.302 | 0.485 | 0.417 | 0.243 | 0.302 | 0.371 | 0.371 |
Kogniz | 0.354 | 0.196 | 0.447 | 0.243 | 0.302 | 0.243 | 0.417 | 0.485 | 0.302 | 0.371 | 0.371 |
Matroid | 0.354 | 0.392 | 0.447 | 0.243 | 0.302 | 0.485 | 0.209 | 0.243 | 0.302 | 0.371 | 0.371 |
CV System | PPED | DP | OA | EG | H | GF | PHE | STF | AS | RT | GUI |
---|---|---|---|---|---|---|---|---|---|---|---|
Everguard | 0.035 | 0.020 | 0.022 | 0.049 | 0.030 | 0.049 | 0.042 | 0.049 | 0.090 | 0.037 | 0.019 |
Intenseye | 0.035 | 0.020 | 0.022 | 0.049 | 0.015 | 0.024 | 0.042 | 0.049 | 0.045 | 0.037 | 0.019 |
Cogniac | 0.035 | 0.020 | 0.045 | 0.024 | 0.015 | 0.024 | 0.021 | 0.024 | 0.045 | 0.037 | 0.019 |
Protex | 0.035 | 0.020 | 0.022 | 0.024 | 0.015 | 0.024 | 0.021 | 0.024 | 0.045 | 0.037 | 0.019 |
Rhyton | 0.035 | 0.010 | 0.022 | 0.049 | 0.015 | 0.024 | 0.042 | 0.024 | 0.045 | 0.019 | 0.009 |
Chooch | 0.035 | 0.020 | 0.045 | 0.024 | 0.015 | 0.049 | 0.042 | 0.024 | 0.045 | 0.037 | 0.019 |
Kogniz | 0.035 | 0.010 | 0.045 | 0.024 | 0.015 | 0.024 | 0.042 | 0.049 | 0.045 | 0.037 | 0.019 |
Matroid | 0.035 | 0.020 | 0.045 | 0.024 | 0.015 | 0.049 | 0.021 | 0.024 | 0.045 | 0.037 | 0.019 |
V+ | 0.035 | 0.020 | 0.045 | 0.049 | 0.030 | 0.049 | 0.042 | 0.049 | 0.090 | 0.037 | 0.019 |
V− | 0.035 | 0.010 | 0.022 | 0.024 | 0.015 | 0.024 | 0.021 | 0.024 | 0.045 | 0.019 | 0.009 |
CV System | S+ | S− | Pi | Ranking |
---|---|---|---|---|
Everguard | 0.022 | 0.071 | 0.760 | 1st |
Intenseye | 0.058 | 0.046 | 0.444 | 2nd |
Cogniac | 0.067 | 0.032 | 0.324 | 6th |
Protex | 0.071 | 0.023 | 0.246 | 8th |
Rhyton | 0.067 | 0.032 | 0.323 | 7th |
Chooch | 0.059 | 0.045 | 0.435 | 3rd |
Kogniz | 0.060 | 0.044 | 0.426 | 4th |
Matroid | 0.062 | 0.040 | 0.392 | 5th |
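The ranking above follows the standard TOPSIS steps: vector-normalize the decision matrix, apply the criterion weights, form the ideal (V+) and anti-ideal (V−) solutions, compute the Euclidean separations S+ and S−, and rank by relative closeness Pi = S− / (S+ + S−). A minimal sketch using the decision matrix and weights from the tables above (small differences from the published S+, S−, and Pi values come from rounding in the intermediate tables):

```python
import math

# Decision matrix from the tables above: 8 CV systems x 11 criteria
# (2 = available, 1 = not available); all criteria are benefit criteria.
systems = ["Everguard", "Intenseye", "Cogniac", "Protex",
           "Rhyton", "Chooch", "Kogniz", "Matroid"]
X = [
    [2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2],
    [2, 2, 1, 2, 1, 1, 2, 2, 1, 2, 2],
    [2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 2],
    [2, 2, 1, 1, 1, 1, 1, 1, 1, 2, 2],
    [2, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1],
    [2, 2, 2, 1, 1, 2, 2, 1, 1, 2, 2],
    [2, 1, 2, 1, 1, 1, 2, 2, 1, 2, 2],
    [2, 2, 2, 1, 1, 2, 1, 1, 1, 2, 2],
]
weights = [0.1, 0.05, 0.1, 0.1, 0.05, 0.1, 0.1, 0.1, 0.15, 0.1, 0.05]

m, n = len(X), len(weights)
# Step 1: vector-normalize each criterion column, then weight it.
norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
V = [[weights[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]
# Step 2: ideal and anti-ideal solutions (max/min, since all are benefit criteria).
v_pos = [max(V[i][j] for i in range(m)) for j in range(n)]
v_neg = [min(V[i][j] for i in range(m)) for j in range(n)]
# Step 3: Euclidean separations, then relative closeness Pi.
def dist(row, ref):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))
Pi = [dist(V[i], v_neg) / (dist(V[i], v_pos) + dist(V[i], v_neg))
      for i in range(m)]
ranking = sorted(zip(systems, Pi), key=lambda t: -t[1])
```

Running this reproduces the published ordering, with Everguard first (Pi ≈ 0.76) and Protex last.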
Parameter Name | Configuration |
---|---|
CPU | Intel i9-13900K ×1 |
GPU | NVIDIA RTX 4090 ×1
RAM | 64 GB |
Language | Python |
Operating System | Ubuntu 22.04 |
K-Fold | Precision YOLOv5m | Precision YOLOv8m | Precision YOLOv9c | Recall YOLOv5m | Recall YOLOv8m | Recall YOLOv9c | F1-Score YOLOv5m | F1-Score YOLOv8m | F1-Score YOLOv9c | mAP YOLOv5m | mAP YOLOv8m | mAP YOLOv9c
---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 0.96 | 0.97 | 0.94 | 0.96 | 0.98 | 0.98 | 0.97 | 0.97 | 0.98 | 0.93 | 0.93 | 0.94 |
2 | 0.97 | 0.98 | 0.98 | 0.99 | 0.98 | 0.97 | 0.958 | 0.98 | 0.97 | 0.94 | 0.92 | 0.94 |
3 | 0.99 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.94 | 0.98 | 0.98 | 0.95 | 0.95 | 0.95 |
4 | 0.98 | 0.98 | 0.99 | 0.97 | 0.97 | 0.97 | 0.98 | 0.98 | 0.98 | 0.95 | 0.94 | 0.95 |
5 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.93 | 0.98 | 0.98 | 0.93 | 0.94 | 0.95 |
Avg | 0.976 | 0.978 | 0.974 | 0.976 | 0.976 | 0.976 | 0.9556 | 0.982 | 0.978 | 0.938 | 0.936 | 0.944 |
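The per-fold values above all derive from confusion-matrix counts. A minimal sketch of how precision, recall, F1-score, and specificity follow from those counts (the counts below are illustrative, not from the pilot study); AUC, reported separately, is the area under the ROC curve of true-positive rate against false-positive rate across confidence thresholds:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard detection/classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)        # fraction of detections that were correct
    recall = tp / (tp + fn)           # fraction of ground-truth objects found
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)      # true-negative rate
    return precision, recall, f1, specificity

# Hypothetical counts for a hard-hat detector on one validation fold.
p, r, f1, spec = classification_metrics(tp=95, fp=5, fn=5, tn=45)
```

With these counts, precision and recall are both 0.95, F1 is 0.95, and specificity is 0.90, comparable in scale to the per-fold figures in the table above.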
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lan, R.; Awolusi, I.; Cai, J. Computer Vision for Safety Management in the Steel Industry. AI 2024, 5, 1192-1215. https://doi.org/10.3390/ai5030058