An Automatic Defect Detection System for Petrochemical Pipeline Based on Cycle-GAN and YOLO v5
Abstract
1. Introduction
- (1) Time-consuming and inefficient: it takes interpreters a long time to work through a large number of detection videos;
- (2) High false and missed detection rates: even professionals produce roughly 20% false and missed detections owing to long-term fatigue, misjudgment, operational errors, and missing data;
- (3) Non-uniform evaluation results: the interpretation results are heavily influenced by subjective factors, because the definitions of some defects are close to one another, and a single defect can even be interpreted as two or three different types;
- (4) Inaccurate judgment of defect grade: it is difficult for the interpreter to grade a defect from the commonly used video information, since the assessment relies on subjective judgment.
- (1) To address the distortion, noise, and uneven illumination caused by the hemispherical camera, the distorted pipe-wall image is unfolded by a geometric coordinate transformation, and the uneven illumination is equalized by a Retinex-based image enhancement method;
- (2) To exploit the high repetition between adjacent, contextually coherent video frames, an image stitching algorithm based on SIFT features is applied to the unfolded pipe inner-wall images to obtain a complete and coherent image;
- (3) To solve the problems of the small amount of sample data and the unbalanced number of defect categories, a sample enhancement strategy based on Cycle-GAN is proposed, in which defective areas are randomly generated on non-defective pipe walls to obtain pseudo-samples of defective pipe and enrich the sample diversity of the existing data sets;
- (4) A YOLO (You Only Look Once) v5-based object detection network is used to detect specific defective areas on the stitched pipeline inner-wall images and classify their categories. The network is first pre-trained on similar data sets, so the amount of required data is reduced through transfer learning. The Transformer attention mechanism is integrated into YOLO v5 to help the network focus on regions of interest and improve detection and recognition accuracy.
2. Related Works
2.1. Pipeline Internal Defect Detection Based on Computer Vision
2.2. Pipeline Internal Defect Detection Based on Deep Learning
3. Proposed Methods
3.1. Pipeline Data Preprocessing
3.1.1. Image Unfolding
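For illustration, the ring-to-rectangle unfolding can be sketched with OpenCV's warpPolar, assuming the pipe cross-section appears as a ring centred in the frame; the paper's exact coordinate transformation and parameter estimation may differ.

```python
import cv2
import numpy as np

def unfold_ring(frame: np.ndarray, center=None, radius=None) -> np.ndarray:
    """Unwrap the ring-shaped pipe-wall region into a rectangular strip.

    Minimal sketch: the real system derives center/radius from the camera
    geometry; here they default to the image centre and half the shorter side.
    """
    h, w = frame.shape[:2]
    if center is None:
        center = (w / 2.0, h / 2.0)
    if radius is None:
        radius = min(h, w) / 2.0

    # Output size: width = radial samples, height = angular samples.
    dsize = (int(radius), int(2 * np.pi * radius))
    unfolded = cv2.warpPolar(frame, dsize, center, radius, cv2.WARP_POLAR_LINEAR)
    # warpPolar maps angle to rows; rotate so the unfolded wall lies horizontally.
    return cv2.rotate(unfolded, cv2.ROTATE_90_COUNTERCLOCKWISE)
```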
3.1.2. Luminance Correction
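A minimal multi-scale Retinex sketch of the luminance-equalization idea is given below; the MSRCP variant used in this work additionally performs chromaticity preservation, which is omitted here.

```python
import cv2
import numpy as np

def multi_scale_retinex(image: np.ndarray, sigmas=(15, 80, 250)) -> np.ndarray:
    """Equalize uneven illumination with a simple multi-scale Retinex.

    Illustrative sketch: log(image) - log(Gaussian-smoothed image), averaged
    over several scales, then stretched back to the 8-bit range.
    """
    img = image.astype(np.float32) + 1.0          # avoid log(0)
    retinex = np.zeros_like(img)
    for sigma in sigmas:
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)
        retinex += np.log(img) - np.log(blurred)
    retinex /= len(sigmas)

    retinex = cv2.normalize(retinex, None, 0, 255, cv2.NORM_MINMAX)
    return retinex.astype(np.uint8)
```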
3.1.3. Image Stitching
- (1) Extract key points: search image locations over all scale spaces, and identify potential interest points with scale and rotation invariance through a difference-of-Gaussian function.
- (2) Locate key points and determine feature orientation: a fitted model is used to determine the position and scale of each candidate point, and feature descriptor sets are established for the two images to be matched, respectively.
- (3) Match feature point pairs to build the correspondence between images: the random sample consensus (RANSAC) algorithm is adopted to eliminate mismatched key points. The data are divided into two parts, with correct points treated as inner points (inliers) and anomalous points as outer points (outliers). An iterative procedure searches for the optimal parameter model over the data set, and points that do not conform to the optimal model are marked as outer points. Finally, two matched subsets are generated such that every feature point in one subset has a corresponding match in the other (a minimal OpenCV sketch of this matching step follows the list).
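The matching and outlier-rejection step above can be illustrated with the following OpenCV sketch (SIFT features, ratio-test matching, and RANSAC homography estimation). It is a simplified stand-in, not the authors' implementation; the blending of overlapping regions is reduced to a direct overwrite.

```python
import cv2
import numpy as np

def stitch_pair(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Minimal SIFT + RANSAC stitching sketch for two adjacent unfolded frames."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Match descriptors and keep good pairs via Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in raw if m.distance < 0.75 * n.distance]

    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC discards mismatched ("outer") points while estimating the homography.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_b.shape[:2]
    canvas = cv2.warpPerspective(img_a, H, (w * 2, h))  # map frame A into B's coordinates
    canvas[:h, :w] = img_b                              # naive overwrite instead of blending
    return canvas
```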
3.2. Sample Balance Strategy
3.2.1. Framework Structure of Cycle-GAN
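The defining constraint of Cycle-GAN is the cycle-consistency loss between its two generators. A minimal PyTorch sketch of this term is shown below, where generator_ab and generator_ba are placeholders for the two mapping networks; the weighting factor is the conventional default, not necessarily the value used in the paper.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(real_a: torch.Tensor,
                           real_b: torch.Tensor,
                           generator_ab,
                           generator_ba,
                           lambda_cyc: float = 10.0) -> torch.Tensor:
    """L1 cycle loss: A -> B -> A and B -> A -> B should reproduce the inputs."""
    fake_b = generator_ab(real_a)        # e.g., non-defect patch -> defect style
    recovered_a = generator_ba(fake_b)   # translate back to the original domain
    fake_a = generator_ba(real_b)
    recovered_b = generator_ab(fake_a)

    loss = F.l1_loss(recovered_a, real_a) + F.l1_loss(recovered_b, real_b)
    return lambda_cyc * loss
```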
3.2.2. Improved Loss Function of Cycle-GAN Based on Gradient Penalty
- (1) The training of the generator and the discriminator must be carefully designed and balanced; otherwise, severe gradient vanishing occurs in the generator once the discriminator is trained too well, making it difficult to continue training;
- (2) The loss functions of the generator and the discriminator cannot indicate the training progress, because they lack a meaningful metric associated with the quality of the generated images;
- (3) The generated images may lack diversity because of mode collapse. These defects arise for two reasons: first, the equivalent distance metric being minimized is unreasonable; second, the sample distribution produced by a randomly initialized generator has almost no non-negligible overlap with the real sample distribution. The gradient penalty introduced in this section alleviates these issues (a minimal sketch follows this list).
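The gradient penalty referenced above can be sketched in PyTorch as follows: a WGAN-GP-style term that pushes the discriminator's gradient norm towards 1 on samples interpolated between real and generated images. The weighting and its integration with the remaining Cycle-GAN losses are assumptions here, not the authors' exact settings.

```python
import torch

def gradient_penalty(discriminator, real: torch.Tensor, fake: torch.Tensor,
                     lambda_gp: float = 10.0) -> torch.Tensor:
    """Penalize the discriminator gradient norm on interpolated samples."""
    batch_size = real.size(0)
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interpolated = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

    scores = discriminator(interpolated)
    grads = torch.autograd.grad(outputs=scores,
                                inputs=interpolated,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True,
                                retain_graph=True)[0]
    grads = grads.view(batch_size, -1)
    # Two-sided penalty: gradient norm should stay close to 1.
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```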
3.3. Defect Detection Based on YOLO v5
3.3.1. Network Structure of YOLO v5
- (1) The image is divided into S × S grid cells, and each cell is responsible for predicting the objects whose centers fall within it;
- (2) B bounding boxes are predicted per grid cell, each with coordinates (x, y, w, h) and a confidence score, for a total of five values per bounding box;
- (3) The category is predicted in each grid cell over a total of C categories. The final output tensor therefore has dimension S × S × (B × 5 + C), encoding the locations and categories of the B bounding boxes of all grid cells (a small sketch of this dimension bookkeeping follows the list). The network structure of YOLO v5 is shown in Figure 5.
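The dimension bookkeeping for a single-scale YOLO-style head is shown below. The numbers in the example are only illustrative; YOLO v5 itself predicts at three scales with anchor boxes, so its actual detection heads differ.

```python
def yolo_output_shape(s: int, b: int, c: int) -> tuple:
    """Output tensor shape for a single-scale YOLO-style head:
    each of the S x S grid cells predicts B boxes (x, y, w, h, confidence)
    plus C class probabilities."""
    return (s, s, b * 5 + c)

# Example: a 7 x 7 grid, 2 boxes per cell, and the 4 defect classes used here
# (corrosion, peeling, sediment, penetration) give a 7 x 7 x 14 output tensor.
print(yolo_output_shape(7, 2, 4))   # (7, 7, 14)
```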
3.3.2. Transformer Self-Attention Module
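A simplified multi-head self-attention block of the kind typically inserted into the YOLO v5 backbone or neck is sketched below. The exact module placement, channel widths, and head counts used in the paper's network are not reproduced here; channels must be divisible by num_heads.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Flatten a feature map, apply multi-head self-attention plus an MLP,
    and reshape back: a simplified stand-in for a Transformer-style module."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(channels)
        self.norm2 = nn.LayerNorm(channels)
        self.mlp = nn.Sequential(nn.Linear(channels, channels * 4),
                                 nn.GELU(),
                                 nn.Linear(channels * 4, channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)      # (B, H*W, C) token sequence
        attn_out, _ = self.attn(seq, seq, seq)
        seq = self.norm1(seq + attn_out)        # residual + layer norm
        seq = self.norm2(seq + self.mlp(seq))
        return seq.transpose(1, 2).reshape(b, c, h, w)
```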
4. Implementation Details and Results
4.1. Pipeline Data Preprocessing
4.1.1. Analysis of the Pipeline Dataset
4.1.2. Preprocessing Results
4.2. Construction of Pipeline Defect Image Database
4.2.1. Defect Generation Based on Cycle-GAN
4.2.2. Pipeline Data Set for Defect Detection
4.3. YOLO v5 Training and Evaluation
4.3.1. Computer Configuration
4.3.2. Transfer Training
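The transfer-learning idea can be sketched generically in PyTorch: load weights pre-trained on a similar data set and freeze the feature-extraction layers before fine-tuning on the pipeline data. The function and the layer-name prefix below are hypothetical; the authors train within the YOLO v5 framework, whose specific options are not reproduced here.

```python
import torch

def prepare_for_finetuning(model: torch.nn.Module,
                           checkpoint_path: str,
                           freeze_prefixes=("backbone",)) -> torch.nn.Module:
    """Load pre-trained weights and freeze the feature-extraction layers.

    `freeze_prefixes` is a hypothetical naming convention for the layers to
    keep fixed; only the remaining (head) parameters stay trainable.
    """
    state = torch.load(checkpoint_path, map_location="cpu")
    model.load_state_dict(state, strict=False)   # tolerate head-shape changes

    for name, param in model.named_parameters():
        if name.startswith(freeze_prefixes):
            param.requires_grad = False
    return model
```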
4.3.3. Hyperparametric Evolution
4.3.4. Results and Evaluation
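The precision, recall, and F1-score values reported in the confusion matrices below are obtained per class in the usual way, with a background row/column accounting for false and missed detections. A small illustrative sketch of this computation:

```python
def per_class_metrics(confusion: dict, classes: list) -> dict:
    """Compute precision, recall and F1 from a nested confusion dict
    confusion[predicted][ground_truth]; a 'background' key covers false
    and missed detections."""
    metrics = {}
    for cls in classes:
        tp = confusion[cls][cls]
        fp = sum(confusion[cls][gt] for gt in confusion[cls] if gt != cls)
        fn = sum(confusion[pred][cls] for pred in confusion if pred != cls)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        metrics[cls] = {"precision": precision, "recall": recall, "f1": f1}
    return metrics

# For the first confusion matrix below, corrosion precision = 813 / 881 ≈ 92.28%
# and corrosion recall = 813 / 892 ≈ 91.14%, matching the reported values.
```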
5. Discussion
5.1. Comparison with Different Preprocessing Strategies
5.2. Data Enhancement by Cycle-GAN
5.3. Comparison with Different Attention Mechanisms
5.4. Robust Analysis
6. Conclusions
- (1) Aiming at the image distortion during data collection, a geometric coordinate transformation is used to unfold the ring-shaped area of the pipe wall. For the uneven illumination caused by the robot's lighting equipment, the brightness of the image is equalized by the MSRCP algorithm;
- (2) In order to make full use of the context information in the video data, an image stitching algorithm based on SIFT features is applied to stitch the continuous unfolded images of the inner wall, obtaining a coherent, tiled image of the pipeline inner wall that helps the defect detection network identify defective areas accurately;
- (3) To address the complex pipeline environment, the small amount of data, and the unbalanced sample classes, a sample enhancement strategy based on Cycle-GAN is proposed, in which local defective areas are randomly generated on the original inner-wall images. A total of 3200 images are generated as the augmented dataset for defect detection, expanding it to about 5.5 times the original size, including 3565 corrosion areas, 2797 oxide-layer peeling areas, 3627 sediment areas, and 3124 penetration areas. This data enhancement strategy not only enriches the diversity of samples, but also alleviates the long-tail problem caused by the imbalance between sample classes;
- (4) For the pipe-wall defect detection task, a YOLO v5-based model is proposed. To solve the overfitting problem caused by small samples and large models, a transfer learning strategy is adopted to train the feature extraction network on similar data sets. To cope with the varying scale of the defective areas, the Transformer attention mechanism is integrated into YOLO v5 to help the network find regions of interest against the complex background and improve the detection and recognition of small-scale defects. The average detection precision of the proposed method on the test set is 93.10%, the average recall is 90.96%, and the F1-score is 0.920.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
References
| Predicted \ Ground Truth | Corrosion | Peeling | Sediment | Penetration | Background | Total | Precision |
|---|---|---|---|---|---|---|---|
| Corrosion | 813 | 13 | 33 | 17 | 5 | 881 | 92.28% |
| Peeling | 21 | 627 | 25 | 12 | 7 | 692 | 90.61% |
| Sediment | 26 | 27 | 817 | 21 | 3 | 894 | 91.39% |
| Penetration | 3 | 5 | 6 | 725 | 0 | 739 | 98.11% |
| Background | 29 | 26 | 25 | 7 | | | |
| Total | 892 | 698 | 906 | 782 | | | 93.10% |
| Recall | 91.14% | 89.83% | 90.18% | 92.71% | | | 90.96% |
| F1-score | 0.917 | 0.910 | 0.912 | 0.925 | | | 0.920 |
| Predicted \ Ground Truth | Corrosion | Peeling | Sediment | Penetration | Background | Total | Precision |
|---|---|---|---|---|---|---|---|
| Corrosion | 174 | 3 | 6 | 0 | 5 | 188 | 92.55% |
| Peeling | 7 | 110 | 2 | 0 | 6 | 125 | 88.00% |
| Sediment | 9 | 3 | 89 | 1 | 1 | 103 | 86.41% |
| Penetration | 1 | 0 | 0 | 14 | 0 | 15 | 93.33% |
| Background | 4 | 9 | 4 | 0 | | | |
| Total | 195 | 125 | 101 | 15 | | | 90.07% |
| Recall | 89.23% | 88.00% | 88.12% | 93.33% | | | 89.67% |
| F1-score | 0.909 | 0.902 | 0.903 | 0.929 | | | 0.899 |
| Predicted \ Ground Truth | Corrosion | Peeling | Sediment | Penetration | Background | Total | Precision |
|---|---|---|---|---|---|---|---|
| Corrosion | 639 | 10 | 27 | 17 | 0 | 693 | 92.21% |
| Peeling | 14 | 517 | 23 | 12 | 1 | 567 | 91.18% |
| Sediment | 17 | 24 | 728 | 20 | 2 | 791 | 92.04% |
| Penetration | 2 | 5 | 6 | 711 | 0 | 724 | 98.20% |
| Background | 25 | 17 | 21 | 7 | | | |
| Total | 697 | 573 | 805 | 767 | | | 93.41% |
| Recall | 91.68% | 90.23% | 90.43% | 92.70% | | | 91.26% |
| F1-score | 0.919 | 0.912 | 0.913 | 0.925 | | | 0.923 |
| Predicted \ Ground Truth | Corrosion | Peeling | Sediment | Penetration | Background | Total | Precision |
|---|---|---|---|---|---|---|---|
| Corrosion | 314 | 17 | 25 | 4 | 15 | 375 | 83.73% |
| Peeling | 5 | 384 | 3 | 1 | 23 | 416 | 92.31% |
| Sediment | 26 | 19 | 163 | 5 | 9 | 222 | 73.42% |
| Penetration | 5 | 6 | 7 | 83 | 3 | 104 | 79.81% |
| Background | 6 | 23 | 10 | 2 | | | |
| Total | 356 | 449 | 208 | 95 | | | 82.32% |
| Recall | 88.20% | 85.52% | 78.37% | 87.37% | | | 84.86% |
| F1-score | 0.859 | 0.846 | 0.810 | 0.855 | | | 0.836 |
| Predicted \ Ground Truth | Corrosion | Peeling | Sediment | Penetration | Background | Total | Precision |
|---|---|---|---|---|---|---|---|
| Corrosion | 319 | 13 | 19 | 2 | 11 | 364 | 87.64% |
| Peeling | 4 | 397 | 4 | 0 | 19 | 424 | 93.63% |
| Sediment | 22 | 18 | 172 | 5 | 13 | 230 | 74.78% |
| Penetration | 3 | 4 | 6 | 86 | 1 | 100 | 86.00% |
| Background | 8 | 17 | 7 | 2 | | | |
| Total | 356 | 449 | 208 | 95 | | | 85.51% |
| Recall | 89.61% | 88.42% | 82.69% | 90.53% | | | 87.81% |
| F1-score | 0.886 | 0.880 | 0.851 | 0.891 | | | 0.866 |
| | Original | Unfold | Ours (Unfold & Stitch) |
|---|---|---|---|
| Precision | 82.32% | 85.51% | 93.10% |
| Recall | 84.86% | 87.81% | 90.96% |
| F1-score | 0.836 | 0.866 | 0.920 |
| Predicted \ Ground Truth | Corrosion | Peeling | Sediment | Penetration | Background | Total | Precision |
|---|---|---|---|---|---|---|---|
| Corrosion | 15 | 1 | 2 | 0 | 6 | 24 | 62.50% |
| Peeling | 3 | 7 | 0 | 0 | 4 | 14 | 50.00% |
| Sediment | 6 | 1 | 6 | 1 | 3 | 17 | 35.29% |
| Penetration | 1 | 0 | 1 | 1 | 1 | 4 | 25.00% |
| Background | 4 | 4 | 2 | 0 | | | |
| Total | 29 | 13 | 11 | 2 | | | 43.20% |
| Recall | 51.72% | 53.85% | 54.55% | 50.00% | | | 52.53% |
| F1-score | 0.566 | 0.579 | 0.583 | 0.556 | | | 0.474 |
| Predicted \ Ground Truth | Corrosion | Peeling | Sediment | Penetration | Background | Total | Precision |
|---|---|---|---|---|---|---|---|
| Corrosion | 799 | 19 | 39 | 26 | 7 | 890 | 89.78% |
| Peeling | 23 | 597 | 47 | 10 | 11 | 688 | 86.77% |
| Sediment | 31 | 37 | 799 | 29 | 4 | 900 | 88.78% |
| Penetration | 5 | 6 | 9 | 703 | 2 | 725 | 96.97% |
| Background | 34 | 39 | 12 | 14 | | | |
| Total | 892 | 698 | 906 | 782 | | | 90.57% |
| Recall | 89.57% | 85.53% | 88.19% | 89.90% | | | 88.30% |
| F1-score | 0.897 | 0.876 | 0.890 | 0.898 | | | 0.894 |
| Predicted \ Ground Truth | Corrosion | Peeling | Sediment | Penetration | Background | Total | Precision |
|---|---|---|---|---|---|---|---|
| Corrosion | 805 | 17 | 38 | 21 | 7 | 888 | 90.65% |
| Peeling | 27 | 609 | 41 | 12 | 11 | 700 | 87.00% |
| Sediment | 29 | 33 | 803 | 21 | 4 | 890 | 90.22% |
| Penetration | 4 | 7 | 5 | 711 | 2 | 729 | 97.53% |
| Background | 27 | 32 | 19 | 17 | | | |
| Total | 892 | 698 | 906 | 782 | | | 91.35% |
| Recall | 90.25% | 87.25% | 88.63% | 90.92% | | | 89.26% |
| F1-score | 0.904 | 0.889 | 0.896 | 0.908 | | | 0.903 |
| Predicted \ Ground Truth | Corrosion | Peeling | Sediment | Penetration | Background | Total | Precision |
|---|---|---|---|---|---|---|---|
| Corrosion | 809 | 21 | 33 | 27 | 6 | 896 | 90.29% |
| Peeling | 23 | 603 | 49 | 9 | 14 | 698 | 86.39% |
| Sediment | 31 | 29 | 811 | 25 | 3 | 899 | 90.21% |
| Penetration | 4 | 8 | 4 | 709 | 3 | 728 | 97.39% |
| Background | 25 | 37 | 9 | 12 | | | |
| Total | 892 | 698 | 906 | 782 | | | 91.07% |
| Recall | 90.70% | 86.39% | 89.51% | 90.66% | | | 89.32% |
| F1-score | 0.905 | 0.883 | 0.899 | 0.905 | | | 0.902 |
| | None | CBAM | SE | Ours (Transformer) |
|---|---|---|---|---|
| Precision | 90.57% | 91.35% | 91.07% | 93.10% |
| Recall | 88.30% | 89.26% | 89.32% | 90.96% |
| F1-score | 0.894 | 0.903 | 0.902 | 0.920 |