Image Classification Method Based on Improved Deep Convolutional Neural Networks for the Magnetic Flux Leakage (MFL) Signal of Girth Welds in Long-Distance Pipelines
Abstract
1. Introduction
2. Principle
2.1. MFL In-Line Inspection
2.2. Convolutional Neural Network (CNN)
2.3. The Attention Mechanism
2.4. Deep Convolutional Generative Adversarial Network (DCGAN)
2.5. Residual Network (ResNet)
3. Methods
3.1. Establishment of the Original Data Set
3.2. Data Set Enhancement
3.3. Data Set Classification
4. Conclusions and Discussion
- (1) The DCGAN_GP can enhance the data set of girth weld signal images obtained via MFL in-line inspection. The improved ResNet-50_CBAM displays strong generalization ability and robustness and can effectively classify the girth weld signal images with an accuracy of over 80%. However, the model still confuses strip defects with circular defects, achieving a recall of only 67% for strip defects, so its recognition of strip defects needs to be improved in future work.
- (2) Incomplete fusion, incomplete penetration, cracks, pits, and undercuts pose greater threats to the safety of girth welds. However, these defect types were not selected as test objects in this study because they account for an insufficient proportion of the data set. Future efforts should focus on the challenging tasks of expanding the data set and realizing multi-class classification.
- (3) In follow-up studies, improved versions of the GoogLeNet, VGG, and other models should be applied to classify the database, and their classification results compared with those of this paper to select the better classification method.
- (4) This pipeline section will soon complete its second round of MFL in-line inspection. The new data set should be classified using the improved model so that operators can make comprehensive judgments based on both rounds of classification results, thereby strengthening the safety management of girth welds.
- (5) Other nondestructive testing results, such as ultrasonic testing and time-of-flight diffraction (TOFD), could also be used as labels for girth weld MFL signal images to establish new data sets. The resulting classifications would be of significance to pipeline safety management.
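Conclusion (1) rests on the CBAM attention module added to ResNet-50. As a rough, dependency-free illustration of the channel-attention half of CBAM (the learned shared MLP is omitted, so this is a toy sketch of the gating idea rather than the authors' implementation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(channels):
    """Gate each channel of a feature map by a CBAM-style attention weight.

    `channels` is a list of 2-D maps (lists of lists of floats). CBAM
    summarizes each channel with average- and max-pooling, passes both
    descriptors through a shared two-layer MLP, and applies a sigmoid.
    The learned MLP is omitted in this sketch, so the gate reduces to
    sigmoid(avg_pool + max_pool).
    """
    gated = []
    for fmap in channels:
        flat = [v for row in fmap for v in row]
        avg_pool = sum(flat) / len(flat)
        max_pool = max(flat)
        gate = sigmoid(avg_pool + max_pool)  # per-channel weight in (0, 1)
        gated.append([[v * gate for v in row] for row in fmap])
    return gated
```

Channels whose pooled responses are large are passed through almost unchanged, while weakly responding channels are suppressed, which is the effect CBAM exploits to emphasize informative MFL signal features.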
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
| Model | Residual Units (64 d) | Residual Units (128 d) | Residual Units (256 d) | Residual Units (512 d) | Parameters (M) | Computation (GMACs) |
|---|---|---|---|---|---|---|
| ResNet-18 | 2 | 2 | 2 | 2 | 11.7 | 1.8 |
| ResNet-34 | 3 | 4 | 6 | 3 | 21.8 | 3.7 |
| ResNet-50 | 3 | 4 | 6 | 3 | 25.6 | 4.1 |
| ResNet-101 | 3 | 4 | 23 | 3 | 44.6 | 7.9 |
| ResNet-152 | 3 | 8 | 36 | 3 | 60.2 | 11.6 |
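The nominal depth of each variant in the table can be recovered from its per-stage unit counts. A small sanity-check sketch (`resnet_depth` is a helper name introduced here, not taken from the paper):

```python
def resnet_depth(units_per_stage, convs_per_unit):
    """Weighted-layer count of a ResNet variant.

    units_per_stage: residual units at the 64/128/256/512-d stages.
    convs_per_unit:  2 for basic blocks (ResNet-18/34),
                     3 for bottleneck blocks (ResNet-50/101/152).
    The +2 counts the stem 7x7 convolution and the final fully
    connected layer.
    """
    return sum(units_per_stage) * convs_per_unit + 2
```

For example, `resnet_depth([3, 4, 6, 3], 3)` returns 50, matching ResNet-50's name, and `resnet_depth([3, 4, 23, 3], 3)` returns 101.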
Input image size: 64 × 64.

| No. | Layers | Output Size |
|---|---|---|
| 1 | Conv 7 × 7, 64 + MaxPool 3 × 3 | 32 × 32 |
| 2 | ResNet-Unit_CBAM (Conv 1 × 1, 256) × 3 | 16 × 16 |
| 3 | ResNet-Unit_CBAM (Conv 1 × 1, 512) × 4 | 8 × 8 |
| 4 | ResNet-Unit_CBAM (Conv 1 × 1, 1024) × 6 | 4 × 4 |
| 5 | ResNet-Unit_CBAM (Conv 1 × 1, 2048) × 3 | 2 × 2 |
| 6 | AvgPool | 1 × 1 |
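Per the table, each listed stage halves the spatial resolution of the 64 × 64 input until the average-pooling layer reduces it to 1 × 1. That bookkeeping can be sketched as (helper name is assumed, not from the paper):

```python
def stage_output_sizes(input_size, n_stages):
    """Spatial size after each stage, assuming every stage halves it."""
    sizes, size = [], input_size
    for _ in range(n_stages):
        size //= 2  # stride-2 downsampling halves height and width
        sizes.append(size)
    return sizes
```

With `input_size=64` and five halving stages this yields 32, 16, 8, 4, and 2, after which global average pooling produces the final 1 × 1 output.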
| No. | Label | Amount | Proportion |
|---|---|---|---|
| 1 | Circular defects | 1057 | 72.3% |
| 2 | Strip defects | 345 | 23.6% |
| 3 | Other defects (incomplete fusion, incomplete penetration, crack, pit, undercut) | 60 | 4.1% |
| | Total | 1462 | 100% |
| No. | Label | Amount (Original) | Proportion (Original) | Amount (Without Test Set) | Proportion (Without Test Set) | Amount (After Enhancement) | Proportion (After Enhancement) |
|---|---|---|---|---|---|---|---|
| 1 | Circular defects | 1057 | 75.4% | 997 | 76.3% | 1660 | 62.2% |
| 2 | Strip defects | 345 | 24.6% | 309 | 23.7% | 1010 | 37.8% |
| | Total | 1402 | | 1306 | | 2670 | |
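The enhancement step raises the strip-defect share from 23.7% to 37.8%. As an illustrative calculation only (the paper augments both classes with DCGAN_GP; this sketch assumes, for simplicity, that only the minority class is synthesized and the majority class is left unchanged):

```python
import math

def synthetic_needed(minority, majority, target_share):
    """Synthetic minority samples required for the minority class to
    reach `target_share` of the set, with the majority class fixed.

    Derived from (minority + x) / (minority + x + majority) >= target_share,
    which rearranges to minority + x >= target_share * majority / (1 - target_share).
    """
    required_minority = math.ceil(
        target_share * majority / (1.0 - target_share)
    )
    return max(0, required_minority - minority)
```

For instance, a set with 20 minority and 80 majority samples needs 60 synthetic minority samples to reach a 50/50 balance.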
| Fold | Class | Precision | Recall | F1-Score | Accuracy |
|---|---|---|---|---|---|
| 0 | Strip defects | 1.00 | 0.67 | 0.80 | 0.88 |
| 0 | Circular defects | 0.83 | 1.00 | 0.91 | |
| 1 | Strip defects | 1.00 | 0.64 | 0.78 | 0.86 |
| 1 | Circular defects | 0.82 | 1.00 | 0.90 | |
| 2 | Strip defects | 1.00 | 0.67 | 0.80 | 0.88 |
| 2 | Circular defects | 0.83 | 1.00 | 0.91 | |
| 3 | Strip defects | 1.00 | 0.67 | 0.80 | 0.88 |
| 3 | Circular defects | 0.83 | 1.00 | 0.91 | |
| 4 | Strip defects | 0.96 | 0.64 | 0.77 | 0.85 |
| 4 | Circular defects | 0.82 | 0.98 | 0.89 | |
| 5 | Strip defects | 0.96 | 0.67 | 0.79 | 0.86 |
| 5 | Circular defects | 0.83 | 0.98 | 0.90 | |
| 6 | Strip defects | 0.92 | 0.67 | 0.77 | 0.85 |
| 6 | Circular defects | 0.83 | 0.97 | 0.89 | |
| 7 | Strip defects | 0.96 | 0.67 | 0.79 | 0.86 |
| 7 | Circular defects | 0.83 | 0.98 | 0.90 | |
| 8 | Strip defects | 1.00 | 0.56 | 0.71 | 0.83 |
| 8 | Circular defects | 0.79 | 1.00 | 0.88 | |
| 9 | Strip defects | 0.92 | 0.67 | 0.77 | 0.85 |
| 9 | Circular defects | 0.83 | 0.97 | 0.89 | |
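The per-fold metrics above follow the standard binary definitions. A minimal sketch, treating strip defects as the positive class (the confusion-matrix counts below are hypothetical values consistent with a 96-image test split, reconstructed for illustration and not reported by the paper):

```python
def binary_metrics(tp, fp, fn, tn):
    """Precision, recall, and F1 for the positive class, plus overall accuracy."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical fold-0-like counts: 24 of 36 strip defects detected,
# no circular defects misclassified as strip.
p, r, f1, acc = binary_metrics(tp=24, fp=0, fn=12, tn=60)
```

With these assumed counts, precision is 1.00, recall 0.67, F1 0.80, and accuracy 0.88, matching the pattern of the strongest folds.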
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Geng, L.; Dong, S.; Qian, W.; Peng, D. Image Classification Method Based on Improved Deep Convolutional Neural Networks for the Magnetic Flux Leakage (MFL) Signal of Girth Welds in Long-Distance Pipelines. Sustainability 2022, 14, 12102. https://doi.org/10.3390/su141912102