Author Contributions
Conceptualization, W.-L.M.; methodology, Y.-Y.C. and B.-H.L.; software, C.-C.W., Y.-T.W. and C.-Y.Y.; validation, C.-C.W., Y.-T.W. and C.-Y.Y.; investigation, Y.-R.C.; resources, W.-L.M.; data curation, Y.-R.C.; writing—original draft preparation, W.-L.M. and Y.-R.C.; writing—review and editing, W.-L.M.; visualization, Y.-R.C.; funding acquisition, W.-L.M.; project administration, W.-L.M. All authors have read and agreed to the published version of the manuscript.
Figure 1.
Photograph of a practical implementation of the proposed system.
Figure 2.
(a) Image of actual aluminum rim; (b) numerical rendering of the aluminum rim.
Figure 3.
Defects typical of aluminum rims: (a) dirt spot, (b) paint stain, (c) scratch, and (d) dent.
Figure 4.
Environment layout.
Figure 5.
Selection of the machined surface.
Figure 6.
Simulated detection path.
Figure 7.
Basic control interface.
Figure 8.
Automated detection interface.
Figure 9.
Test results interface.
Figure 10.
Flowchart showing the experiments conducted in this study.
Figure 11.
Basic architecture of the generative adversarial network (GAN).
Figure 12.
Schematic diagram showing the architecture of YOLO v3.
Figure 13.
Schematic diagram showing the YOLO v4 object detection architecture.
Figure 14.
Practical implementation of the proposed defect detection system.
Figure 15.
Distribution of defect types as percentages.
Figure 16.
Photographic examples of the three types of defects addressed in this study.
Figure 17.
Flowchart showing the implementation of the proposed generative adversarial network.
Figure 18.
Examples of cropped flawed images.
Figure 19.
GAN training results: (a) 10,000 iterations, (b) 20,000 iterations, and (c) 30,000 iterations.
Figure 20.
DCGAN training results: (a) 10,000 iterations; (b) 20,000 iterations; (c) 30,000 iterations.
Figure 21.
Comparison of actual photographic images and generated images.
Figure 22.
Flowchart showing experiments involving the application of original and generated images to YOLO v3 and YOLO v4.
Figure 23.
Schematic diagram illustrating the predicted and actual bounding boxes.
Figure 24.
Training results for YOLO v3: (a) original images only; (b) original images plus DCGAN synthetic images.
Figure 25.
Training results for YOLO v4: (a) original images only; (b) original images plus DCGAN synthetic images.
Figure 26.
Photographs showing the locations of defects.
Table 1.
Specifications of the industrial camera GS3-U3-51S5C-C.
| Item | Value | Item | Value |
|---|---|---|---|
| Firmware | 2.25.3 | Gain Range | 0 dB~48 dB |
| Resolution | 2448 × 2048 | Exposure Range | 0.006 ms~32 s |
| Frame Rate | 75 FPS | Interface | USB 3.1 |
| Chroma | Color | Dimensions/Mass | 44 mm × 29 mm × 58 mm/90 g |
| Sensor | Sony IMX250, CMOS, 2/3″ | Power Requirements | 5 V via USB 3.1 or 8~24 V via GPIO |
| Readout Method | Global shutter | Lens Mount | C-mount |
Table 2.
Dataset details used to evaluate image sets and CNNs.
| Experiment | Total Samples (Photos) | Training Samples (Photos) | Testing Samples (Photos) |
|---|---|---|---|
| YOLO v3, original images | 245 | 196 | 49 |
| YOLO v4, original images | 245 | 196 | 49 |
| YOLO v3, original images + DCGAN | 545 | 436 | 109 |
| YOLO v4, original images + DCGAN | 545 | 436 | 109 |
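The sample counts above follow a standard 80/20 train/test split; a minimal sketch (the helper `split_counts` is ours for illustration, not from the paper):

```python
# Hypothetical check of the 80/20 train/test split implied by Table 2.
def split_counts(total: int, train_frac: float = 0.8) -> tuple[int, int]:
    """Return (train, test) sample counts for a given total."""
    train = round(total * train_frac)
    return train, total - train

print(split_counts(245))  # original images only
print(split_counts(545))  # original plus DCGAN-generated images
```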
Table 3.
Model prediction data.
| Analysis\Methods | YOLO v3 | YOLO v4 | YOLO v3 + DCGAN | YOLO v4 + DCGAN |
|---|---|---|---|---|
| TP | 217 | 98 | 176 | 213 |
| FP | 268 | 67 | 153 | 56 |
| FN | 89 | 209 | 130 | 93 |
| TN | 562 | 770 | 677 | 774 |
Table 4.
Calculated results.
| Analysis\Methods | YOLO v3 | YOLO v4 | YOLO v3 + DCGAN | YOLO v4 + DCGAN |
|---|---|---|---|---|
| Total number of defects | 306 | 307 | 306 | 306 |
| Number of defects detected | 217 | 98 | 176 | 213 |
| Accuracy | 68.5% | 75.8% | 75.0% | 86.8% |
| Recall | 70.9% | 31.9% | 57.5% | 69.6% |
| Precision | 44.7% | 59.3% | 53.4% | 79.1% |
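The accuracy, recall, and precision rows follow from the confusion-matrix counts in Table 3 via the standard definitions; a minimal sketch for the YOLO v4 + DCGAN column (any small differences from the tabulated values are rounding):

```python
# Confusion-matrix counts for YOLO v4 + DCGAN, taken from Table 3.
TP, FP, FN, TN = 213, 56, 93, 774

accuracy  = 100 * (TP + TN) / (TP + TN + FP + FN)  # overall fraction classified correctly
recall    = 100 * TP / (TP + FN)                   # fraction of actual defects that were found
precision = 100 * TP / (TP + FP)                   # fraction of detections that were real defects

print(f"accuracy {accuracy:.1f}%, recall {recall:.1f}%, precision {precision:.1f}%")
```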
Table 5.
Detection accuracy as a function of the number of iterations.
| Methods\Analysis | Accuracy | Precision | Recall |
|---|---|---|---|
| YOLO v4 + DCGAN (5000) | 80.6% | 66.4% | 56.2% |
| YOLO v4 + DCGAN (4000) | 63.7% | 41.1% | 80.0% |
| YOLO v4 + DCGAN (3000) | 76.1% | 54.4% | 70.2% |
| YOLO v4 + DCGAN (2000) | 86.8% | 79.1% | 69.6% |
Table 6.
Computational efficiency of the proposed automated detection system.
| Methods\Time | Robot Motion | Detection | Total |
|---|---|---|---|
| YOLO v3 | 2 min 39 s | 56.3 s | 3 min 35.3 s |
| YOLO v3 + DCGAN | 2 min 39 s | 56.2 s | 3 min 35.2 s |
| YOLO v4 | 2 min 39 s | 56.3 s | 3 min 35.3 s |
| YOLO v4 + DCGAN (5000) | 2 min 39 s | 56.1 s | 3 min 35.1 s |
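The Total column is simply the robot-motion time plus the detection time; a quick check for the YOLO v3 row:

```python
# Table 6 total for YOLO v3: robot motion time plus detection time.
robot_s  = 2 * 60 + 39   # robot motion: 2 min 39 s
detect_s = 56.3          # YOLO v3 detection time in seconds
total_s  = robot_s + detect_s

minutes = int(total_s // 60)
seconds = total_s - 60 * minutes
print(f"{minutes} min {seconds:.1f} s")  # 3 min 35.3 s
```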