Figure 1.
Methodological diagram of the proposed methods.
Figure 2.
Thermite weld image.
Figure 3.
The computation of the original LBP descriptor for the center pixel (highlighted in red). First, the value of the center pixel is compared with the gray values of the pixels in its neighbourhood region. Thereafter, a binary code is obtained, where neighbours with values greater than that of the center pixel are assigned a value of “1” and neighbours with smaller values are assigned a value of “0”. Finally, the binary code is converted into a decimal number, which represents the LBP feature of the center pixel.
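For clarity, a minimal NumPy sketch of this computation follows. It assumes 8-bit grayscale input and reads the eight neighbours clockwise from the top-left; the common LBP convention (neighbours greater than or equal to the centre map to “1”) is used, since the caption does not state how ties are handled.

```python
import numpy as np

def lbp_code(img, r, c):
    """Original 3x3 LBP code for the pixel at (r, c)."""
    center = img[r, c]
    # 8 neighbours, read clockwise starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        # Neighbours >= centre contribute a '1' bit, others a '0' bit.
        if img[r + dr, c + dc] >= center:
            code |= 1 << (7 - bit)
    return code  # decimal LBP label in [0, 255]
```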
Figure 4.
Examples of the modified LBP descriptor: the circular (8, 1), (12, 2), and (12, 3) neighbourhoods. Grey pixels represent the center pixels, and black pixels represent their neighbours.
Figure 5.
Computation of a histogram vector using the uniform LBP extractor. First, an image is divided into cells of equal size (red and blue, as shown). Thereafter, patterns, 58 of which are uniform, are extracted from each cell (represented by histograms). Finally, the patterns from all the cells are concatenated into a single histogram to obtain the final feature vector for the given image.
Figure 6.
(a) Keypoint extraction: an image is divided into grids of equal size, and the center pixel (highlighted in red) in each grid is considered a keypoint. (b) Keypoint description: a square region is placed and centered around every keypoint and oriented along the keypoint’s dominant orientation; this region is then divided into 16 sub-regions.
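A sketch of the dense keypoint grid in (a); the description step in (b) matches a SURF-style descriptor (oriented square region split into 16 sub-regions), which in OpenCV lives in the contrib package. The grid spacing and keypoint size here are illustrative assumptions.

```python
import cv2

def dense_keypoints(image, step=16, size=16):
    """One keypoint at the centre of each step x step grid cell."""
    h, w = image.shape[:2]
    return [cv2.KeyPoint(float(x), float(y), float(size))
            for y in range(step // 2, h, step)
            for x in range(step // 2, w, step)]

# Describe each keypoint with SURF (requires opencv-contrib-python):
# surf = cv2.xfeatures2d.SURF_create()
# kps, descriptors = surf.compute(image, dense_keypoints(image))
```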
Figure 7.
Image representation using BoVW. First, keypoints (highlighted in red) are detected (a); thereafter, the descriptor vectors are generated (b); the obtained vectors are then grouped into a codebook of visual words using the K-means clustering algorithm (c). Finally, coding and pooling steps are applied to represent each image in terms of codewords and to provide a global feature representation, respectively (d).
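A sketch of steps (c) and (d) with scikit-learn, assuming hard-assignment coding and sum pooling (the exact coding and pooling variants are not stated in the caption); the codebook sizes swept later in Tables 5–7 would be passed as `n_words`.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(train_descriptors, n_words=400):
    """(c) Cluster all training descriptors into visual words."""
    return KMeans(n_clusters=n_words, n_init=10).fit(train_descriptors)

def bovw_vector(descriptors, codebook):
    """(d) Hard-assignment coding, then sum pooling into one histogram."""
    words = codebook.predict(descriptors)          # nearest codeword ids
    hist = np.bincount(words, minlength=codebook.n_clusters)
    return hist / max(hist.sum(), 1)               # L1-normalised vector
```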
Figure 8.
SVM concept for a binary classification task: circular and square shapes represent instances belonging to two distinct class labels. The line lying between the two supporting planes is the optimal hyperplane, obtained by maximizing the distance d.
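In the standard formulation behind Figure 8, a separating hyperplane $w^\top x + b = 0$ with labels $y_i \in \{-1, +1\}$ is found by solving

$$\min_{w,\,b} \ \tfrac{1}{2}\lVert w \rVert^2 \quad \text{s.t.} \quad y_i\,(w^\top x_i + b) \ge 1 \ \ \forall i,$$

so the two supporting planes are $w^\top x + b = \pm 1$, and the maximized distance between them is $d = 2/\lVert w \rVert$.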
Figure 9.
DCNN architecture: The red line illustrates the transition from the convolution layer to the pooling layer.
Figure 13.
Shrinkage cavities.
Figure 14.
Original images (a,c,e) and their enhanced counterparts (b,d,f).
Figure 15.
Weld joint segmentation and RoI extraction. (a) The Chan–Vese active contour model is applied, and its energy is minimal at the weld joint boundaries; (b) the weld joint image is segmented; (c) a bounding box is placed on the boundaries of the segmented weld joint; and (d) the region inside the bounding box is cropped, and it represents the weld joint region of interest.
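A sketch of the Figure 15 pipeline using scikit-image's Chan–Vese implementation; the smoothing weight and the largest-region heuristic are assumptions, not settings stated in the caption.

```python
from skimage import img_as_float
from skimage.segmentation import chan_vese
from skimage.measure import label, regionprops

def extract_weld_roi(gray):
    """Segment the weld joint and crop its bounding-box RoI."""
    # (a, b) Evolve the Chan-Vese contour; its energy is minimal at
    # the weld joint boundaries, yielding a binary segmentation mask.
    mask = chan_vese(img_as_float(gray), mu=0.25)
    # (c) Bounding box of the largest connected segmented region.
    region = max(regionprops(label(mask)), key=lambda r: r.area)
    minr, minc, maxr, maxc = region.bbox
    # (d) Crop the region of interest.
    return gray[minr:maxr, minc:maxc]
```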
Figure 16.
Some of the obtained results: (a) application of Chan–Vese ACM and (b) segmented thermite weld images.
Figure 17.
Some of the post-processing results applied to wormhole defect images.
Figure 18.
Percentage of segmented weld joint images per class.
Figure 19.
Graphical User Interface for onsite defect investigation.
Table 1.
Accuracies of LBP features with KNN classifier.
Cell Size | Length | Parameter | Val. Acc (%) | Test Acc (%)
---|---|---|---|---
[] | 147,500 | K = 10 | 85 | 81
 | | K = 25 | 81 | 79
 | | K = 40 | 89 | 91
 | | K = 55 | 86 | 81
[] | 36,875 | K = 10 | 83 | 79
 | | K = 25 | 87 | 83
 | | K = 40 | 81 | 76
 | | K = 50 | 79 | 77
[] | 5900 | K = 10 | 83 | 85
 | | K = 25 | 85 | 85
 | | K = 40 | 77 | 79
 | | K = 55 | 81 | 79
[] | 1475 | K = 10 | 75 | 71
 | | K = 25 | 77 | 73
 | | K = 40 | 83 | 79
 | | K = 50 | 77 | 73
Table 2.
Accuracies of LBP features with SVM classifier.
Cell Size | Length | Parameter | Val. Acc (%) | Test Acc (%)
---|---|---|---|---
[] | 147,500 | = 0.25 | 83 | 85
 | | = 0.5 | 87 | 89
 | | = 2 | 83 | 81
 | | = 4 | 79 | 81
[] | 36,875 | = 0.25 | 85 | 85
 | | = 0.5 | 79 | 79
 | | = 2 | 81 | 77
 | | = 4 | 79 | 77
[] | 5900 | = 0.25 | 75 | 73
 | | = 0.5 | 77 | 75
 | | = 2 | 77 | 77
 | | = 4 | 75 | 77
[] | 1475 | = 0.25 | 77 | 77
 | | = 0.5 | 77 | 75
 | | = 2 | 75 | 73
 | | = 4 | 75 | 73
Table 3.
Accuracies of LBP features with Naive Bayes classifier.
Cell Size | Length | Val. Acc (%) | Test Acc (%)
---|---|---|---
[] | 147,500 | 73 | 71
[] | 36,875 | 75 | 75
[] | 5900 | 79 | 77
[] | 1475 | 81 | 81
Table 4.
Highest classification accuracy of each classifier for LBP features.
Method | Optimal Parameters | Length | Test Acc (%)
---|---|---|---
LBP + KNN | Cell size: [], K = 40 | 147,500 | 91
LBP + SVM | Cell size: [], = 0.5 | 147,500 | 89
LBP + NB | Cell size: [] | 1475 | 81
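The comparisons in Tables 1–4 can be reproduced with a loop of the following shape (a scikit-learn sketch; the RBF kernel and the Gaussian Naive Bayes variant are assumptions, and the feature matrices come from the LBP extractor sketched above):

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

models = {
    "LBP + KNN (K = 40)": KNeighborsClassifier(n_neighbors=40),
    "LBP + SVM": SVC(kernel="rbf"),   # kernel choice assumed
    "LBP + NB": GaussianNB(),         # Gaussian variant assumed
}

def evaluate(X_tr, y_tr, X_val, y_val, X_te, y_te):
    """Fit each candidate and report validation/test accuracy (%)."""
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        val = 100 * accuracy_score(y_val, model.predict(X_val))
        test = 100 * accuracy_score(y_te, model.predict(X_te))
        print(f"{name}: validation {val:.0f}%, test {test:.0f}%")
```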
Table 5.
Accuracies of BoVW approach with KNN classifier.
Codebooks | Length | Parameter | Val. Acc (%) | Test Acc (%)
---|---|---|---|---
400 | 400 | K = 10 | 85 | 83
 | | K = 25 | 89 | 89
 | | K = 40 | 85 | 87
 | | K = 55 | 83 | 85
1200 | 1200 | K = 10 | 83 | 81
 | | K = 25 | 83 | 85
 | | K = 40 | 87 | 87
 | | K = 50 | 87 | 83
2000 | 2000 | K = 10 | 81 | 83
 | | K = 25 | 83 | 83
 | | K = 40 | 85 | 87
 | | K = 55 | 85 | 83
3200 | 3200 | K = 10 | 81 | 81
 | | K = 25 | 85 | 83
 | | K = 40 | 83 | 83
 | | K = 50 | 77 | 79
Table 6.
Accuracies of BoVW features with SVM classifier.
Codebooks | Length | Parameter | Val. Acc (%) | Test Acc (%)
---|---|---|---|---
400 | 400 | = 0.25 | 85 | 83
 | | = 0.5 | 91 | 91
 | | = 2 | 87 | 87
 | | = 4 | 87 | 85
1200 | 1200 | = 0.25 | 87 | 89
 | | = 0.5 | 87 | 85
 | | = 2 | 89 | 89
 | | = 4 | 83 | 83
2000 | 2000 | = 0.25 | 85 | 83
 | | = 0.5 | 83 | 83
 | | = 2 | 85 | 85
 | | = 4 | 87 | 83
3200 | 3200 | = 0.25 | 81 | 81
 | | = 0.5 | 85 | 87
 | | = 2 | 85 | 83
 | | = 4 | 81 | 79
Table 7.
Accuracies of BoVW features with Naive Bayes classifier.
Codebooks | Length | Val. Acc (%) | Test Acc (%)
---|---|---|---
400 | 400 | 77 | 75
1200 | 1200 | 75 | 75
2000 | 2000 | 79 | 79
3200 | 3200 | 77 | 75
Table 8.
Highest classification accuracy of each classifier for BoVW features.
Method | Optimal Parameters | Length | Test Acc (%)
---|---|---|---
BoVW + KNN | Codewords: 400, K = 25 | 400 | 89
BoVW + SVM | Codewords: 400, = 0.5 | 400 | 91
BoVW + NB | Codewords: 2000 | 2000 | 79
Table 9.
Highest classification accuracy by each feature extractor.
Method | Vector Length | Comp. Time (s) | Test Acc (%)
---|---|---|---
LBP + KNN | 147,500 | 1.89 | 91
BoVW + SVM | 400 | 0.21 | 91
Table 10.
DCNN validation accuracies with varying convolution parameters and a single fully connected layer.
Layers | Parameter | First | Second | Third | Acc (%)
---|---|---|---|---|---
Convolution | Filter size | | | | 91
 | Stride | 1 | 1 | 1 |
 | Padding | 2 | 2 | 2 |
Pooling | Filter size | | | |
 | Stride | 1 | 1 | 1 |
Convolution | Filter size | | | | 89
 | Stride | 1 | 1 | 1 |
 | Padding | 2 | 2 | 2 |
Pooling | Filter size | | | |
 | Stride | 1 | 1 | 1 |
Convolution | Filter size | | | | 87
 | Stride | 1 | 1 | 1 |
 | Padding | 2 | 2 | 2 |
Pooling | Filter size | | | |
 | Stride | 1 | 1 | 1 |
Table 11.
DCNN validation accuracies with varying convolution parameters and two fully connected layers of same size.
Layers | Parameter | First | Second | Third | Acc (%)
---|---|---|---|---|---
Convolution | Filter size | | | | 97
 | Stride | 1 | 1 | 1 |
 | Padding | 2 | 2 | 2 |
Pooling | Filter size | | | |
 | Stride | 1 | 1 | 1 |
Convolution | Filter size | | | | 93
 | Stride | 1 | 1 | 1 |
 | Padding | 2 | 2 | 2 |
Pooling | Filter size | | | |
 | Stride | 1 | 1 | 1 |
Convolution | Filter size | | | | 93
 | Stride | 1 | 1 | 1 |
 | Padding | 2 | 2 | 2 |
Pooling | Filter size | | | |
 | Stride | 1 | 1 | 1 |
Table 12.
DCNN validation accuracies with varying convolution parameters and three fully connected layers of same size.
Layers | Parameter | First | Second | Third | Acc (%)
---|---|---|---|---|---
Convolution | Filter size | | | | 93
 | Stride | 1 | 1 | 1 |
 | Padding | 2 | 2 | 2 |
Pooling | Filter size | | | |
 | Stride | 1 | 1 | 1 |
Convolution | Filter size | | | | 91
 | Stride | 1 | 1 | 1 |
 | Padding | 2 | 2 | 2 |
Pooling | Filter size | | | |
 | Stride | 1 | 1 | 1 |
Convolution | Filter size | | | | 87
 | Stride | 1 | 1 | 1 |
 | Padding | 2 | 2 | 2 |
Pooling | Filter size | | | |
 | Stride | 1 | 1 | 1 |
Table 13.
Highest DCNN classification accuracies with increasing sizes of fully connected hidden layers.
No. of Layers | Layer Size | Convolution Kernel | Pooling Kernel | Test Acc (%) | Comp. Time (s)
---|---|---|---|---|---
1 | {80} | | | 89 | 0.97
2 | {80,80} | | | 95 | 1.17
3 | {80,80,80} | | | 93 | 1.39
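A PyTorch sketch of the best configuration from Tables 11 and 13 (three convolution/pooling blocks with stride 1 and padding 2, followed by two 80-unit fully connected layers). The 5 x 5 convolution kernel is inferred from padding 2 preserving spatial size; the channel counts, 2 x 2 pooling window, grayscale input, and class count are assumptions.

```python
import torch.nn as nn

class WeldDCNN(nn.Module):
    def __init__(self, num_classes=4, channels=(16, 32, 64)):
        super().__init__()
        blocks, in_ch = [], 1          # grayscale weld images assumed
        for out_ch in channels:        # three conv/pool blocks
            blocks += [
                nn.Conv2d(in_ch, out_ch, kernel_size=5, stride=1, padding=2),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=2, stride=1),  # stride 1 per table
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(80), nn.ReLU(),  # first 80-unit FC layer
            nn.Linear(80, 80), nn.ReLU(),  # second 80-unit FC layer
            nn.Linear(80, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```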
Table 14.
Best architecture for weld joint defect classification.
Architecture | Method | Test Acc (%) | Comp. Time (s)
---|---|---|---
One | BoVW + SVM | 91 | 0.21
Two | DCNN | 95 | 1.17