# A Methodology to Automatically Segment 3D Ultrasonic Data Using X-ray Computed Tomography and a Convolutional Neural Network


## Abstract


## 1. Introduction

- A methodology to combine ultrasonic and XCT data, applied to porosity assessment, a use case in which UT data alone would not allow a supervised dataset to be formed;
- the construction of a dataset of ultrasonic porosity images;
- the optimization of phased-array ultrasonic inspections to enhance the level of detail in the analysis of porosity and of void shape, size, and distribution;
- the evaluation of model performance depending on which data were used for training or testing, yielding an F1 of 0.6–0.7 and an IoU of 0.4–0.5 on the test data. Furthermore, the results proved robust: the segmentation was equivalent for a given coupon's data whether it belonged to the training or the test dataset in different training runs.

## 2. Materials and Methods

#### 2.1. Materials

#### 2.2. Methodology

#### 2.2.1. NDT Inspections: XCT and Ultrasonic Phased Array

#### 2.2.2. Data Preprocessing

#### Obtaining Projection Images

- Let I = {1, ..., l} denote the indices of the rows of a 3D matrix A.
- Let J = {1, ..., m} denote the indices of the columns of matrix A.
- Let K = {1, ..., n} denote the indices of the slices of matrix A.
- I, J, and K correspond to the height, width, and thickness of the coupon, respectively.
- Let ${a}_{ijk}$ denote the value of the element A[i, j, k].
- Let F be the summation operation in the case of the XCT data and the maximum operation in the case of the ultrasonic data. The projection is then
$$projection = U_{ij} = F\left(a_{ijk}\right)_{k=k_1}^{k_2}, \quad i = 1, \dots, l, \quad j = 1, \dots, m,$$
with $k_1$ and $k_2$ found experimentally.
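The projection step above can be sketched in NumPy, assuming the volume is stored as a 3D array indexed (i, j, k) as defined in the list; the function name and arguments are illustrative:

```python
import numpy as np

def project(volume, k1, k2, op):
    """Project a (height, width, thickness) volume onto a 2D image.

    op="sum" corresponds to the summation used for the XCT data;
    op="max" to the maximum used for the ultrasonic data.
    k1 and k2 bound the slice range and are found experimentally.
    """
    sub = volume[:, :, k1:k2]       # keep only slices k1..k2-1
    if op == "sum":
        return sub.sum(axis=2)      # XCT: integrate along the thickness
    if op == "max":
        return sub.max(axis=2)      # UT: maximum-amplitude projection
    raise ValueError(f"unknown operation: {op}")
```

Both reductions collapse the K axis, so each projection has the same height and width as the original volume.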

#### 2D Registration of Projections

#### Labels

#### 2.2.3. Modeling

#### Segmentation of UT Projections

- Global thresholds: several global threshold values were applied to segment the projections. The pixel values of the ultrasonic projections were normalized to [0, 1], and the thresholds covered the range [0.25, 0.4] in steps of 0.05. We also explored local segmentation algorithms such as Sauvola's, but their results were found to be too noisy.
- Network architecture: the network is a slightly modified version of the one shown in [37]. The hyperparameters are shown in Table 1. The proposed network has four convolutional layers with two max-pooling layers and three FC layers. The network architecture is illustrated in Figure 3. It was trained from scratch using extracted patches. Convolutional layers have a 3 × 3 kernel, stride 1, and no padding. Max pooling is performed with a 2 × 2 window and stride 2. All the hidden layers, except the output units, are equipped with rectified linear units (ReLU).
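The layer operations described above can be illustrated in plain NumPy. This is a sketch of the individual building blocks only (3 × 3 valid convolution with stride 1, 2 × 2 max pooling with stride 2, and ReLU), not a reimplementation of the trained network:

```python
import numpy as np

def conv3x3_valid(x, kernel):
    # 3x3 "valid" convolution (cross-correlation), stride 1, no padding:
    # the output loses a 1-pixel border on each side.
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * kernel)
    return out

def maxpool2x2(x):
    # 2x2 max pooling with stride 2: keep the largest value in each window.
    h, w = x.shape
    trimmed = x[:h // 2 * 2, :w // 2 * 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def relu(x):
    # Rectified linear unit applied to all hidden layers.
    return np.maximum(x, 0.0)
```

Chaining four such convolutions with two pooling stages shrinks an input patch to the small feature map that the three FC layers then process.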

#### Training and Testing

#### Evaluation

- False positive (FP): negative class predicted as positive.
- True positive (TP): positive class predicted as positive.
- True negative (TN): negative class predicted as negative.
- False negative (FN): positive class predicted as negative.
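From these four counts, the precision, recall, F1, and IoU metrics used throughout the paper follow directly. A minimal sketch for binary masks, assuming NumPy boolean arrays (the function name is illustrative):

```python
import numpy as np

def evaluate(pred, truth):
    """Pixel-wise metrics for a predicted mask against a ground-truth mask."""
    tp = np.sum(pred & truth)      # positives predicted as positive
    fp = np.sum(pred & ~truth)     # negatives predicted as positive
    fn = np.sum(~pred & truth)     # positives predicted as negative
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1, iou
```

Note that true negatives do not enter F1 or IoU, which is why these metrics are preferred when the porosity class is a small fraction of the image.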

#### 2.3. Tools

## 3. Results

#### 3.1. Dataset

#### 3.2. CNN Training

#### Comparison of Segmentation Algorithms across the Dataset

## 4. Discussion

#### 4.1. Preprocessing

#### 4.2. Registration

#### 4.3. Evaluation and Labels

#### 4.4. Segmentation Results

## 5. Conclusions and Future Work

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## Abbreviations

| Abbreviation | Definition |
|---|---|
| MDPI | Multidisciplinary Digital Publishing Institute |
| DOAJ | Directory of Open Access Journals |
| NDT | Non-destructive testing |
| UT | Ultrasonic testing |
| XCT | X-ray computed tomography |
| CNN | Convolutional neural network |

## Appendix A. Results for Training on Coupon 2 Data and Testing on Coupon 1

| | XCT Labels | Manual Labels |
|---|---|---|
| Avg Training Precision | 0.32 | 0.51 |
| Avg Training Recall | 0.45 | 0.89 |
| Avg Training F1 | 0.25 | 0.62 |
| Avg Training IoU | 0.14 | 0.62 |

| | XCT Labels | Manual Labels |
|---|---|---|
| Avg Test Precision | 0.29 | 0.72 |
| Avg Test Recall | 0.36 | 0.78 |
| Avg Test F1 | 0.29 | 0.73 |
| Avg Test IoU | 0.17 | 0.62 |

**Figure A1.** Training and validation curves for the networks trained on each type of label. (**a**) Network trained with XCT labels. (**b**) Network trained with manual labels.

## Appendix B. Dataset of Projections

**Figure A2.** Dataset of projections and their imageID. From left to right: ultrasonic, XCT labels, and manual labels images. (**a**) Projections obtained from coupon 1. (**b**) Projections obtained from coupon 2.

## References

1. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. arXiv **2014**, arXiv:1409.4842.
2. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv **2015**, arXiv:1409.1556.
3. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv **2015**, arXiv:1512.03385.
4. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM **2017**, 60, 84–90.
5. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv **2017**, arXiv:1706.03762.
6. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv **2016**, arXiv:1506.01497.
7. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. arXiv **2016**, arXiv:1506.02640.
8. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. arXiv **2018**, arXiv:1703.06870.
9. Xiao, H.; Chen, D.; Xu, J.; Guo, S. Defects Identification Using the Improved Ultrasonic Measurement Model and Support Vector Machines. NDT E Int. **2020**, 111, 102223.
10. Ye, J.; Toyama, N. Benchmarking Deep Learning Models for Automatic Ultrasonic Imaging Inspection. IEEE Access **2021**, 9, 36986–36994.
11. Latête, T.; Gauthier, B.; Belanger, P. Towards Using Convolutional Neural Network to Locate, Identify and Size Defects in Phased Array Ultrasonic Testing. Ultrasonics **2021**, 115, 106436.
12. Medak, D.; Posilovic, L.; Subasic, M.; Budimir, M.; Loncaric, S. Automated Defect Detection From Ultrasonic Images Using Deep Learning. IEEE Trans. Ultrason. Ferroelectr. Freq. Control **2021**, 68, 3126–3134.
13. Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. arXiv **2020**, arXiv:1911.09070.
14. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. arXiv **2018**, arXiv:1708.02002.
15. Medak, D.; Posilovic, L.; Subasic, M.; Budimir, M.; Loncaric, S. Deep Learning-Based Defect Detection From Sequences of Ultrasonic B-Scans. IEEE Sens. J. **2022**, 22, 2456–2463.
16. Virkkunen, I.; Koskinen, T.; Jessen-Juhler, O.; Rinta-aho, J. Augmented Ultrasonic Data for Machine Learning. J. Nondestruct. Eval. **2021**, 40, 4.
17. Cantero-Chinchilla, S.; Wilcox, P.D.; Croxford, A.J. A Deep Learning Based Methodology for Artefact Identification and Suppression with Application to Ultrasonic Images. NDT E Int. **2022**, 126, 102575.
18. Meng, M.; Chua, Y.J.; Wouterson, E.; Ong, C.P.K. Ultrasonic Signal Classification and Imaging System for Composite Materials via Deep Convolutional Neural Networks. Neurocomputing **2017**, 257, 128–135.
19. Li, C.; He, W.; Nie, X.; Wei, X.; Guo, H.; Wu, X.; Xu, H.; Zhang, T.; Liu, X. Intelligent Damage Recognition of Composite Materials Based on Deep Learning and Ultrasonic Testing. AIP Adv. **2021**, 11, 125227.
20. Qiao, S.; Chen, L.C.; Yuille, A. DetectoRS: Detecting Objects with Recursive Feature Pyramid and Switchable Atrous Convolution. arXiv **2020**, arXiv:2006.02334.
21. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. arXiv **2014**, arXiv:1311.2524.
22. Smith, R.A. President, BINDT. Workshop on NDT and SHM Requirements for Aerospace, Composites. Available online: https://www.bindt.org/admin/Downloads/2016 (accessed on 10 January 2022).
23. Sket, F.; Seltzer, R.; Molina-Aldareguía, J.; Gonzalez, C.; LLorca, J. Determination of Damage Micromechanisms and Fracture Resistance of Glass Fiber/Epoxy Cross-Ply Laminate by Means of X-ray Computed Microtomography. Compos. Sci. Technol. **2012**, 72, 350–359.
24. Mutiargo, B. Evaluation of X-Ray Computed Tomography (CT) Images of Additively Manufactured Components Using Deep Learning. In Proceedings of the 3rd Singapore International Non-Destructive Testing Conference and Exhibition (SINCE2019), Singapore, 12 May 2019; p. 9.
25. Hernández, S.; Sket, F.; Molina-Aldareguía, J.; González, C.; LLorca, J. Effect of Curing Cycle on Void Distribution and Interlaminar Shear Strength in Polymer-Matrix Composites. Compos. Sci. Technol. **2011**, 71, 1331–1341.
26. Smith, R.A.; Nelson, L.J.; Mienczakowski, M.J.; Wilcox, P.D. Ultrasonic Tracking of Ply Drops in Composite Laminates. In Proceedings of the 42nd Annual Review of Progress in Quantitative Nondestructive Evaluation: Incorporating the 6th European-American Workshop on Reliability of NDE, Minneapolis, MN, USA, 26–31 July 2016; p. 050006.
27. Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward Fast, Flexible, and Robust Low-Light Image Enhancement. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5627–5636.
28. Liu, Y.; Yan, Z.; Tan, J.; Li, Y. Multi-Purpose Oriented Single Nighttime Image Haze Removal Based on Unified Variational Retinex Model. IEEE Trans. Circuits Syst. Video Technol. **2023**, 33, 1643–1657.
29. Sparkman, D.; Wallentine, S.; Flores, M.; Wertz, J.; Welter, J.; Schehl, N.; Dierken, J.; Zainey, D.; Aldrin, J.; Uchic, M. A Supervised Learning Approach for Prediction of X-Ray Computed Tomography Data from Ultrasonic Testing Data. In Proceedings of the 45th Annual Review of Progress in Quantitative Nondestructive Evaluation, Burlington, VT, USA, 15–19 July 2019; Volume 38, p. 030002.
30. Birt, E.A.; Smith, R.A. A Review of NDE Methods for Porosity. Insight-Non-Destr. Test. Cond. Monit. **2004**, 46, 681–686.
31. Ding, S.; Jin, S.; Luo, Z.; Liu, H.; Chen, J.; Lin, L.; Laboratory, E. Investigations on Relationship between Porosity and Ultrasonic Attenuation Coefficient in CFRP Laminates Based on RMVM. In Proceedings of the 7th International Symposium on NDT in Aerospace, Bremen, Germany, 16–18 November 2015.
32. Lin, L.; Luo, M.; Tian, H. Experimental Investigation on Porosity of Carbon Fiber-Reinforced Composite Using Ultrasonic Attenuation Coefficient. In Proceedings of the 17th World Conference on Nondestructive Testing, Shanghai, China, 25–28 October 2008; p. 9.
33. Mehdikhani, M.; Gorbatikh, L.; Verpoest, I.; Lomov, S.V. Voids in Fiber-Reinforced Polymer Composites: A Review on Their Formation, Characteristics, and Effects on Mechanical Performance. J. Compos. Mater. **2018**, 53, 1579–1669.
34. Bhat, S.S.; Zhang, J.; Larrosa, N. Sizing Limitations of Ultrasonic Array Images for Non-Sharp Defects and Their Impact on Structural Integrity Assessments. Theor. Appl. Fract. Mech. **2022**, 122, 103625.
35. Chapon, A.; Pereira, D.; Toews, M.; Belanger, P. Deconvolution of Ultrasonic Signals Using a Convolutional Neural Network. Ultrasonics **2021**, 111, 106312.
36. Arganda-Carreras, I.; Sorzano, C.O.S.; Marabini, R.; Carazo, J.M.; Ortiz-de-Solorzano, C.; Kybic, J. Consistent and Elastic Registration of Histological Sections Using Vector-Spline Regularization. In Computer Vision Approaches to Medical Image Analysis; Hutchison, D., Kanade, T., Kittler, J., Kleinberg, J.M., Mattern, F., Mitchell, J.C., Naor, M., Nierstrasz, O., Pandu Rangan, C., Steffen, B., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4241, pp. 85–95.
37. Fan, Z.; Wu, Y.; Lu, J.; Li, W. Automatic Pavement Crack Detection Based on Structured Prediction with the Convolutional Neural Network. arXiv **2018**, arXiv:1802.02208.
38. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv **2015**, arXiv:1603.04467.
39. Schindelin, J.; Arganda-Carreras, I.; Frise, E.; Kaynig, V.; Longair, M.; Pietzsch, T.; Preibisch, S.; Rueden, C.; Saalfeld, S.; Schmid, B.; et al. Fiji: An Open-Source Platform for Biological-Image Analysis. Nat. Methods **2012**, 9, 676–682.
40. van der Walt, S.; Schönberger, J.L.; Nunez-Iglesias, J.; Boulogne, F.; Warner, J.D.; Yager, N.; Gouillart, E.; Yu, T. Scikit-Image: Image Processing in Python. PeerJ **2014**, 2, e453.
41. Hunt, M. Jupyter Lab. nanoHUB. 2018. Available online: https://nanohub.org/resources/jupyterlab60/about (accessed on 5 May 2020).
42. Caswell, T.A.; Droettboom, M.; Lee, A.; Hunter, J.; Firing, E.; Stansby, D.; Klymak, J.; Hoffmann, T.; Andrade, E.S.D.; Varoquaux, N.; et al. Matplotlib/Matplotlib: REL: V3.2.1. Zenodo. 2020. Available online: https://matplotlib.org/stable/users/project/citing.html (accessed on 4 June 2019).
43. Oliphant, T.E. A Guide to NumPy. Available online: https://web.mit.edu/dvp/Public/numpybook.pdf (accessed on 25 January 2022).
44. Kemenade, H.V.; Wiredfool; Murray, A.; Clark, A.; Karpinsky, A.; Gohlke, C.; Dufresne, J.; Nulano; Crowell, B.; Schmidt, D.; et al. Python-Pillow/Pillow 7.1.2. Zenodo. 2020. Available online: https://buildmedia.readthedocs.org/media/pdf/pillow/latest/pillow.pdf (accessed on 4 February 2019).
45. Sofroniew, N.; Lambert, T.; Evans, K.; Nunez-Iglesias, J.; Solak, A.C.; Yamauchi, K.; Buckley, G.; Bokota, G.; Tung, T.; Freeman, J.; et al. Napari/Napari: 0.3.3. Zenodo. 2020. Available online: https://napari.org/stable/citing-napari (accessed on 12 May 2021).

**Figure 3.** Diagram of the CNN structure. The leftmost image is the input patch with one channel. The other cubes indicate the feature maps obtained from convolution (Conv) or max pooling. All convolutional layers have a 3 × 3 kernel, stride 1, and zero padding. Max pooling is performed with stride 2 over a 2 × 2 window.

**Figure 4.** Results for projection 1 of the test data (coupon 2). (**a**) Ultrasonic projection, XCT ground truth, and predicted image. (**b**) Ultrasonic input, manual labels, and predicted image.

**Figure 5.** Results for projection 3 of the test data. (**a**) Ultrasonic input, XCT ground truth, and predicted image. (**b**) Ultrasonic input, manual labels, and predicted image.

**Figure 6.** Training and validation curves for the networks trained on each type of label. (**a**) Network trained with XCT labels. (**b**) Network trained with manual labels.

**Figure 7.** Comparison of F1 for the 0.25, 0.3, and 0.35 global-threshold segmentations applied to the US projections on a [0, 1] gray scale, and for the CNN. For the network, the F1 value was obtained when the image formed part of the test dataset.

**Figure 8.** Comparison of IoU for the 0.25, 0.3, and 0.35 global-threshold segmentations applied to the US projections on a [0, 1] gray scale, and for the CNN. For the network, the IoU value was obtained when the image formed part of the test dataset.

**Figure 9.** Detail of a projection illustrating the differences between the two types of labels used. (**a**) Ultrasonic input. (**b**) XCT labels superimposed. (**c**) Manual labels superimposed.

| Labels | Ratio | Epoch | h | s |
|---|---|---|---|---|
| Manual | 11 | 20 | 5 | 1 |
| XCT | 6 | 25 | 5 | 1 |

| | XCT Labels | Manual Labels |
|---|---|---|
| Avg Training Precision | 0.29 | 0.90 |
| Avg Training Recall | 0.42 | 0.53 |
| Avg Training F1 | 0.31 | 0.66 |
| Avg IoU | 0.19 | 0.50 |

| | XCT Labels | Manual Labels |
|---|---|---|
| Avg Test Precision | 0.27 | 0.67 |
| Avg Test Recall | 0.49 | 0.72 |
| Avg Test F1 | 0.27 | 0.67 |
| Avg IoU | 0.16 | 0.51 |

**Table 4.** Average evaluation metrics for the manually annotated labels and projections from the two coupons. In the case of the CNN, the metrics were obtained when the projections belonged to the test dataset.

| | Avg. Precision | Avg. Recall | Avg. F1 | Avg. IoU |
|---|---|---|---|---|
| CNN | 0.74 | 0.65 | 0.66 | 0.50 |
| Thr: 0.25 | 0.35 | 0.92 | 0.44 | 0.30 |
| Thr: 0.3 | 0.58 | 0.79 | 0.60 | 0.44 |
| Thr: 0.35 | 0.77 | 0.59 | 0.60 | 0.44 |
| Thr: 0.4 | 0.86 | 0.39 | 0.49 | 0.34 |
| Thr: 0.45 | 0.92 | 0.24 | 0.35 | 0.22 |
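The global-threshold baselines compared above can be reproduced with a sketch like the following, assuming the ultrasonic projection is available as a NumPy array (the names are illustrative):

```python
import numpy as np

def threshold_segment(projection, thr):
    # Normalize pixel values to [0, 1], then apply a global threshold
    # to obtain a boolean porosity mask.
    p = projection.astype(float)
    p = (p - p.min()) / (p.max() - p.min())
    return p >= thr

# Thresholds explored in the paper: 0.25 to 0.4 in steps of 0.05.
thresholds = np.arange(0.25, 0.41, 0.05)
```

Each threshold yields one candidate mask; raising the threshold trades recall for precision, as the table shows.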

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Caballero, J.-I.; Cosarinsky, G.; Camacho, J.; Menasalvas, E.; Gonzalo-Martin, C.; Sket, F.
A Methodology to Automatically Segment 3D Ultrasonic Data Using X-ray Computed Tomography and a Convolutional Neural Network. *Appl. Sci.* **2023**, *13*, 5933.
https://doi.org/10.3390/app13105933
