Article

Baru-Net: Surface Defects Detection of Highly Reflective Chrome-Plated Appearance Parts

College of Marine Equipment and Mechanical Engineering, Jimei University, Xiamen 361000, China
*
Author to whom correspondence should be addressed.
Coatings 2023, 13(7), 1205; https://doi.org/10.3390/coatings13071205
Submission received: 12 May 2023 / Revised: 26 June 2023 / Accepted: 4 July 2023 / Published: 5 July 2023
(This article belongs to the Section Corrosion, Wear and Erosion)

Abstract
Chrome-plated parts with highly reflective surfaces are widely used as appearance parts and must undergo strict defect inspection to ensure quality. Machine-vision-based defect detection is the natural choice, but the high reflectivity makes image acquisition difficult, and the diverse appearance of defects makes feature extraction with traditional algorithms hard. In this paper, a reasonable lighting scheme was designed to collect images effectively, and artificial defect images were created to expand the dataset and compensate for the shortage of defect samples. A network, Baru-Net (Bis-Attention Rule), based on the Unet architecture with the CBAM and ASPP modules, was designed, and a block-step training strategy was proposed. With hyperparameter tuning, semantic segmentation and classification of defects achieved an accuracy rate of 98.3%. Finally, the weight model was called from QT so that the AI model could be integrated into the automatic detection system.

1. Introduction

Chrome plating is a common process for appearance parts because of its excellent properties, such as corrosion resistance, high brightness and specular reflection. As living standards improve, consumers' requirements for product appearance become increasingly strict; slight appearance defects can lead to surface corrosion and hurt sales, so all appearance parts must be inspected before they are assembled into the product. At present, appearance defect inspection is mainly performed manually. Investigation found that appearance parts are often transported to a fixed detection area to await manual surface inspection; this delivery delays production and reduces manufacturing efficiency. Moreover, the reliability of manual inspection is influenced by multiple factors, such as environmental fluctuations, the eye strain caused by long-term work and a lack of worker experience, which easily lead to missed or false detections. Prolonged observation of highly reflective surfaces may also damage workers' eye health. Therefore, it is very important to improve the automation level of chrome-plated appearance part inspection with defect detection algorithms.
The detection of highly reflective surface defects has attracted the attention of many scholars, with both traditional feature extraction and AI techniques being adopted [1]. Neogi [2] adopted global adaptive percentile thresholding of the gradient image to identify surface defects on steel strips. Shao [3] proposed entropy, grid gray gradient and Hu invariant moment matching with morphological classification to detect defects on highly reflective roller surfaces. Feng [4] proposed an adaptive image chromaticity adjustment method, adaptive threshold segmentation and a Haar-like feature extraction algorithm to detect defects on high-reflective-metal surfaces. He [5] skillfully used a difference image, in which the defect features were highlighted, and found rail surface defects through adaptive threshold binarization. Yun [6] used a discrete wavelet transform and double adaptive local thresholds to detect defects in steel wire rods.
The above research was based on traditional computer vision technology, which distinguishes defects by image features, and obtained good experimental results. Traditional methods have strong advantages when defect samples are scarce and defect morphology is simple, but they share the following common problems:
  • High requirements for lighting technology. It is necessary to ensure the stability of lighting and avoid the influence of ambient light to obtain reliable digital image quality to meet the threshold condition.
  • Image preprocessing and feature extraction algorithms are complex; several thresholds are commonly needed in these sequential algorithms, which influence the reliability and robustness of detection.
Thus, traditional defect feature extraction often struggles with tasks involving unstable ambient light or varied defect morphology. Deep learning models, with their automatic feature extraction ability, have therefore been applied to such defect detection.
Guo [7] used a CNN to classify resistance welding spots with very complex and diverse features and obtained a 99.01% accuracy rate, far above human performance. Öztürk [8] introduced end-to-end training of a two-stage network with several extensions of the training process and achieved a 100% detection rate on the surface defect datasets DAGM and KolektorSDD. Park [9] used a CNN to inspect surface defects, achieving an inspection speed thousands of times higher than that of manual inspection with comparable accuracy. Zhang [10] combined multiple hierarchical features into one feature to locate defect details, showing potential for real-time detection. Baskaran [11] adopted MobileNet with transfer learning to detect steel frame structure defects and built an Android application incorporating the proposed model, achieving an accuracy rate of 91% for scratches. Ma [12] proposed a novel, lightweight detection method based on an attention mechanism and the YOLOv4 framework to inspect aluminum strip defects. The latter two studies focused on lightweight network design and network compression to deploy defect detection on mobile phones or embedded systems.
It can be seen from the above research that deep learning has good feasibility and a strong potential for complex surface defect detection. In this paper, a reasonable lighting scheme was adopted to acquire the right images. We introduced a network called Baru-Net. It was based on the Unet architecture, a lightweight attention module CBAM (Convolutional Block Attention Module) and an ASPP (Atrous Spatial Pyramid Pooling) module. The attention module and ASPP can improve the segmentation accuracy and performance of the network and achieve effective segmentation and classification of surface defects of chrome-plated appearance parts.

2. High Reflective Chrome-Plated Surface Image Acquisition

The acquisition of proper images is the prerequisite for detecting highly reflective appearance parts, and reasonable lighting significantly improves the detection effect [13,14]. Figure 1 and Figure 2 compare two lighting schemes. In Figure 1a, the traditional ring-shaped light source causes specular reflection, resulting in the distorted image shown in Figure 1b.
In Figure 2a, two LED strip backlight sources were used, placed on symmetrical sides of the camera. The light beams struck the measured surface at an angle so that the reflected light could enter the camera for imaging. The two separate images from the two light sources were fused based on OpenCV, as shown in Figure 2b. The fused images overcame the specular reflection problem and allowed effective defect detection.
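The fusion step above can be sketched as follows. The paper does not state the exact fusion operator, so this is a minimal sketch assuming a per-pixel maximum (each strip light leaves a dark band on its far side, and the maximum keeps the well-lit half of each exposure); `cv2.max(img_left, img_right)` would be the OpenCV equivalent.

```python
import numpy as np

def fuse_exposures(img_left: np.ndarray, img_right: np.ndarray) -> np.ndarray:
    """Fuse two images of the same part lit from opposite sides.

    Assumed fusion rule: per-pixel maximum, which keeps the well-lit
    region contributed by each light source.
    """
    assert img_left.shape == img_right.shape
    return np.maximum(img_left, img_right)

# Toy 1 x 4 grayscale example: the left exposure is bright on the left
# half, the right exposure on the right half.
left = np.array([[200, 180, 40, 30]], dtype=np.uint8)
right = np.array([[30, 40, 180, 200]], dtype=np.uint8)
fused = fuse_exposures(left, right)  # -> [[200, 180, 180, 200]]
```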

3. Identification and Location of Highly Reflective Surface Defects Based on Baru-Net

3.1. Network Structure

The proposed network Baru-Net was built on Unet architecture and consists of a CBAM module and an ASPP module, as shown in Figure 3.

3.1.1. Unet Architecture

Unet is a classic semantic segmentation network widely used in defect segmentation tasks; it is built on an encoder–decoder, end-to-end architecture [15]. Convolution and downsampling are used as the encoding method to obtain the feature map, which expands the receptive field and integrates context information. Upsampling and deconvolution decode the feature map and output a pixel-level classification result with the same size as the original image, achieving semantic segmentation. Among upsampling methods, bilinear interpolation is simple and effective, with fast computation, wide applicability and a strong ability to retain image detail, so it was chosen in this paper.
Unet adopts a skip connection to reduce the information loss in convolution and downsampling by concatenating the feature map in the upsampling process and the feature map with the same size in the downsampling. Therefore, Unet can combine low resolution information (for object class recognition) with high resolution information (for precise segmentation and positioning).
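The decoder step described above, bilinear upsampling followed by skip-connection concatenation, can be sketched in PyTorch; the tensor sizes here are illustrative, not the paper's actual layer widths.

```python
import torch
import torch.nn.functional as F

# Decoder step: bilinearly upsample the deep (low-resolution) feature map,
# then concatenate the same-size encoder feature map (the skip connection).
deep = torch.randn(1, 128, 16, 16)   # decoder input (illustrative sizes)
skip = torch.randn(1, 64, 32, 32)    # encoder feature map at the target size

up = F.interpolate(deep, size=skip.shape[2:], mode="bilinear",
                   align_corners=False)      # bilinear upsampling
merged = torch.cat([up, skip], dim=1)        # channels add: 128 + 64 = 192
```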

3.1.2. CBAM Module

Baru-Net added an attention mechanism and an ASPP module based on Unet architecture. The Unet encoder has four stages. At each stage, two 3 × 3 convolutional layers are used to obtain the feature map, which is then input into the CBAM module. The output feature map of CBAM is processed by the pooling layer and sent to the next stage. The final stage feature map is put into the ASPP module. CBAM is a lightweight attention module, and it includes two sub-modules, as shown in Figure 4a; one is the CAM (Channel Attention Module), and the other is the SAM (Spatial Attention Module), which allows CBAM to judge essential parts of an image in both channel and spatial dimensions.
The CAM (as shown in Figure 4b) performs global average pooling (AvgPool) and global maximum pooling (MaxPool) on the incoming feature map (C1 × W1 × H1) to obtain two C1 × 1 × 1 feature maps, then puts these two C1 × 1 × 1 feature maps into a shared fully connected layer (Shared MLP) and adds the two output result maps. Then, Sigmoid is used to obtain a weight between 0 and 1, which is the channel attention value. Finally, the channel attention value is multiplied by each channel of the original map F to obtain a new feature map F′, which has the same size as F (as shown in Figure 4a). CAM keeps the channels’ dimensions unchanged, compresses the spatial dimensions and extracts the channel attention value, usually for multi-classification detection [16].
The SAM is the complement of the CAM, as shown in Figure 4c. Average pooling and maximum pooling are applied to the input feature map F′ along the channel axis at each spatial position, producing F_avg^s ∈ R^(1×H1×W1) and F_max^s ∈ R^(1×H1×W1), the channel-wise average-pooling and maximum-pooling features. These two maps are concatenated into a 2 × H1 × W1 feature map, and a standard convolution layer with a single output channel reduces it to 1 × H1 × W1. Sigmoid is then applied to obtain a 2D feature map with weights between 0 and 1, which is the spatial attention value. Each element of the spatial attention map is multiplied by the corresponding element of F′ to obtain a new weighted feature map F″, which has the same size of C1 × H1 × W1 as the input feature map.
So SAM calculates the spatial attention value based on F′ and multiplies it by F′ to obtain F″. Then, F″ is added to the input feature to perform adaptive feature refinement and obtain the output feature of CBAM, as shown in Figure 4a. CBAM is a lightweight and general-purpose module that does not generate a large amount of redundant computing or parameters. While saving computational overhead, it analyzes which locations of the feature map need to be emphasized or suppressed, thereby quickly determining the highly focused area of the input image and achieving significant performance improvements [17].
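The CAM-then-SAM pipeline of this section can be sketched in PyTorch as follows. The residual add at the end follows the description above; the channel reduction ratio and the 7 × 7 spatial kernel are assumed hyperparameters (common CBAM defaults), not values stated in the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to both pooled descriptors
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling -> C
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling -> C
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w                          # F' = channel-weighted input

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)     # channel-wise average -> 1xHxW
        mx = x.amax(dim=1, keepdim=True)      # channel-wise max -> 1xHxW
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                          # F'' = spatially weighted F'

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        f2 = self.sa(self.ca(x))  # channel attention first, then spatial
        return x + f2             # residual add, as described in the text

x = torch.randn(2, 32, 24, 24)
y = CBAM(32, reduction=8)(x)      # output keeps the input size
```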

3.1.3. ASPP Module

Segmentation requires a large receptive field. A larger convolution kernel or a larger pooling step can enlarge the receptive field, but the former requires too much computation, while the latter causes resolution loss. ASPP expands the convolution kernel by inserting (dilation rate − 1) zeros between kernel elements, generating kernels with different sampling rates and realizing free multi-scale feature extraction, as shown in Figure 5. It combines the advantages of a large convolution kernel and a large pooling step.
In this paper, the encoder output, with feature map size C4 × H4 × W4, was put into the ASPP module for further processing. Three atrous convolutions with dilation rates of 6, 12 and 18 were used for multi-scale feature extraction. The extracted feature maps kept the same spatial size and were concatenated along the channel axis with the results of an ordinary convolution and of adaptive average pooling, the latter providing image-level global features. The concatenated feature map had a channel size of 5C4 × H4 × W4; to use it as the input of the next layer, a convolution reduced it back to C4 × H4 × W4. ASPP enlarges the receptive field without losing much resolution and without extra computation, so it is well suited to segmentation and detection tasks.
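The five-branch structure above (one ordinary convolution, three atrous convolutions at rates 6/12/18, and image-level pooling, concatenated and projected back to C4 channels) can be sketched in PyTorch; the channel and spatial sizes are illustrative.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling sketch: parallel dilated convolutions
    plus image-level pooling, concatenated and projected back to `channels`."""

    def __init__(self, channels: int, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 1)]          # ordinary 1x1 conv
            + [nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
               for r in rates]                           # atrous branches
        )
        self.pool = nn.AdaptiveAvgPool2d(1)              # image-level features
        self.pool_conv = nn.Conv2d(channels, channels, 1)
        # 5 branches concatenated (5*C4) -> project back to C4
        self.project = nn.Conv2d(channels * (len(rates) + 2), channels, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [branch(x) for branch in self.branches]
        g = self.pool_conv(self.pool(x))                 # 1x1 global feature
        feats.append(nn.functional.interpolate(
            g, size=(h, w), mode="bilinear", align_corners=False))
        return self.project(torch.cat(feats, dim=1))     # back to C4 x H4 x W4

x = torch.randn(1, 64, 12, 12)   # stands in for the C4 x H4 x W4 encoder output
y = ASPP(64)(x)
```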

3.2. Experiments Setup

3.2.1. Dataset Making

Labelme was used to label the collected images, as shown in Figure 6. A json file is generated for each image, which can be converted through a Python script into a label image with four categories: image background, hump, pit and scratch.
In deep learning, the dataset scale influences the accuracy, generalization ability and robustness of the model, but defective samples are limited in industry. To expand the dataset, Photoshop was used to cut the defect areas from the original images and paste them into non-defect images, creating artificial defect images and solving the problem of insufficient defect samples, as shown in Figure 7. Each artificial defect image contained only one type of defect; no combination of multiple defects was used. Digital image processing operations such as random rotation, cropping, shifting and deformation were then used to further enhance the dataset.
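Enhancement of this kind must apply the identical geometric transform to the image and its label mask, or the pixel-level labels stop matching. A minimal sketch (random 90-degree rotation and horizontal flip as stand-ins for the paper's rotation/cropping/shifting/deformation):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray, label: np.ndarray):
    """Apply the same random rotation/flip to an image and its label mask.

    A minimal stand-in for the enhancement operations in the paper; the
    key point is that image and mask receive identical transforms.
    """
    k = int(rng.integers(0, 4))            # random 90-degree rotation count
    image, label = np.rot90(image, k), np.rot90(label, k)
    if rng.random() < 0.5:                 # random horizontal flip
        image, label = np.fliplr(image), np.fliplr(label)
    return image.copy(), label.copy()

img = np.arange(16).reshape(4, 4)
msk = (img > 8).astype(np.uint8)           # toy "defect" mask
aug_img, aug_msk = augment(img, msk)       # mask still matches the image
```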
The dataset included three parts: original images, artificial defect images and data-enhanced images, with 100, 300 and 600 images, respectively, totaling 1000. The final dataset was divided into training, validation and test sets at a ratio of 6:2:2.

3.2.2. Model Training and Evaluation

The model was trained on an NVIDIA GeForce RTX3060 GPU; the initial learning rate was set to 0.0005 and the number of epochs to 50. The other hyperparameters are shown in Table 1.
Accuracy and Macro-F1 are adopted as the evaluation indexes of the model, as shown in Formulas (1)–(3).
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)
F1score_i = 2TP_i / (2TP_i + FP_i + FN_i)    (2)
Macro-F1 = (F1score_1 + F1score_2 + F1score_3 + F1score_4) / 4    (3)
where TPi, FPi and FNi represent the true positive, false positive and false negative of each category, respectively.
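Formulas (1)–(3) can be computed directly from the per-class counts; the counts below are made-up toy numbers for the four classes, not results from the paper.

```python
def macro_f1(tp, fp, fn):
    """Macro-F1 over per-class counts (Formulas (2)-(3)).

    tp, fp, fn are lists indexed by class (background, hump, pit, scratch).
    """
    f1 = [2 * t / (2 * t + p + n) if (2 * t + p + n) else 0.0
          for t, p, n in zip(tp, fp, fn)]
    return sum(f1) / len(f1)          # unweighted mean over classes

def accuracy(tp_total, tn_total, fp_total, fn_total):
    """Overall accuracy (Formula (1))."""
    return (tp_total + tn_total) / (tp_total + tn_total + fp_total + fn_total)

# Toy counts: Macro-F1 ~ 0.911, accuracy = 0.98
score = macro_f1(tp=[90, 8, 8, 8], fp=[2, 1, 1, 1], fn=[2, 1, 1, 1])
acc = accuracy(90, 8, 1, 1)
```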
Class imbalance generally affects model convergence and generalization on the test set. A special training strategy was adopted in this paper to deal with the class imbalance caused by the defects being much smaller than the background area. The original images were cut into block images containing defects and block background images without defects. As shown in Figure 8, each image was divided according to a preset block size. The defective blocks were used for stage 1 training, and the best model was saved as the pre-training model for stage 2. The background blocks were then used for stage 2 training starting from the pre-trained model, so that the model could segment the defects well.
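The block splitting that feeds the two stages can be sketched as follows; the rule that a tile counts as defective if its mask contains any defect pixel is an assumed criterion, as the paper does not state one.

```python
import numpy as np

def split_blocks(image: np.ndarray, mask: np.ndarray, size: int):
    """Cut an image and its label mask into size x size tiles.

    Returns (defect_tiles, background_tiles): stage-1 vs stage-2 training
    data. Assumed rule: a tile is "defective" if its mask has any nonzero
    pixel.
    """
    defect, background = [], []
    h, w = mask.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            tile = image[y:y + size, x:x + size]
            m = mask[y:y + size, x:x + size]
            (defect if m.any() else background).append((tile, m))
    return defect, background

img = np.zeros((8, 8))
msk = np.zeros((8, 8), dtype=np.uint8)
msk[1, 1] = 1                      # one defect pixel in the top-left tile
defect, background = split_blocks(img, msk, size=4)   # 1 defect, 3 background
```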
To verify the proposed training method, the Macro-F1 of the conventional method (putting all samples into training at once) and of the proposed method are compared in Figure 9. The proposed training method achieved a higher Macro-F1, proving more suitable for the situation in this paper.
To verify whether ASPP and CBAM modules can improve the network performance, Unet, Unet + CBAM, Unet + ASPP and Baru-Net (Unet + CBAM + ASPP) were compared. As shown in Figure 10, CBAM or ASPP modules could separately improve the validation accuracy and reduce loss; the simultaneous use of CBAM and ASPP could further enhance network performance.
In a word, Baru-Net extracts image features in the down-sampling based on Unet. Due to the use of CBAM, a lightweight attention module that can enhance useful features and suppress noise in both spatial and channel scales, Baru-Net finally obtains more accurate detailed features. The introduced ASPP module could expand the receptive field and extract features at multiple scales.
To verify the performance of the Baru-Net model, we compared it with commonly used models such as UNet, UNet++ [18], AttentionUNet [19] and Res_UNet [20]. As shown in Table 2, both the Macro-F1 score and the accuracy of Baru-Net are significantly higher than those of the other models.

3.3. Defect Identification and Experimental Result Analysis

Baru-Net was used to detect the surface defects of chrome-plated appearance parts, as shown in Figure 11. The results show that the proposed model can accurately classify and segment chrome-plated surface defects.
To prove the validity of the artificial defect dataset, Baru-Net was trained on the datasets with and without artificial defects, respectively. The segmentation and identification results for Figure 11a are shown in Figure 12. The model trained on the original dataset could not detect the target well, as shown in Figure 12a. The false detection rate of the model trained on the enhanced dataset was significantly reduced, but the defect contours were inaccurate, and stains and dust were wrongly identified as defects, as shown in Figure 12b. The model trained on the dataset with artificial defects obtained the best detection results, as shown in Figure 11d. For defect detection, it is therefore of practical significance to use artificial defect images to make up for the shortage of defect samples.
We also compared the proposed method with a traditional method based on OpenCV digital image processing to verify its effectiveness. The results show that the OpenCV method was insufficient for extracting small defects and hump defects, as shown in the blue and red wireframes in Figure 13a. Moreover, it is challenging to set thresholds that distinguish and identify different defects.
Meanwhile, the proposed method was compared with a YOLOv8 model trained on the same dataset as Baru-Net. Comparing Figure 14a,c with Figure 14b,d, YOLOv8 produced false or missed detections, whereas Baru-Net had significantly better detection accuracy for stains and small defects. This is because the original YOLOv8 model was designed for training on large datasets, and the small-sample dataset in this paper could not meet its requirements even with transfer learning. In contrast, the proposed Baru-Net is based on the Unet network and inherits its advantage of achieving high-quality segmentation from a small amount of data, making it suitable for the small-dataset problems common in industry.

3.4. Model Deployment and Practical Application

We used OpenVINO to deploy the trained model weights for the automatic detection system developed based on QT. The implementation process was as follows:
  • Transform the trained weight file into ONNX format and an OpenVINO intermediate representation (IR) file.
  • Configure the OpenVINO, OpenCV and libtorch environments in QT.
  • Load the converted ONNX and IR files into the OpenVINO interface via its API, and encapsulate this process into an inference and prediction class.
  • Use OpenCV to read the image to be predicted and pass it to the inference interface for prediction.
  • Convert the prediction result into the image type used in QT; the sorting mechanism is then triggered according to the defect detection result.
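The last step, turning the model's per-pixel output into a sorting decision, can be sketched as post-processing logic; the class ids, class names and minimum-area threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

DEFECT_CLASSES = {1: "hump", 2: "pit", 3: "scratch"}   # 0 = background (assumed ids)

def postprocess(logits: np.ndarray, min_area: int = 4):
    """Turn per-pixel class scores (C x H x W) into a sorting decision.

    `min_area` (in pixels) is an assumed threshold used to ignore
    one-off misclassified pixels.
    """
    mask = logits.argmax(axis=0)                       # pixel-wise class map
    found = [name for cls, name in DEFECT_CLASSES.items()
             if (mask == cls).sum() >= min_area]       # defects large enough
    return mask, found

# Toy 4-class score map (8 x 8) with a 2 x 3 "pit" region scored highest.
logits = np.zeros((4, 8, 8))
logits[0] += 1.0                   # background wins everywhere by default
logits[2, 2:4, 2:5] = 5.0          # pit region outranks background
mask, found = postprocess(logits, min_area=4)   # found == ["pit"]
```

In the deployed system this decision would feed the QT sorting mechanism; a part is rejected whenever `found` is non-empty.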
The automatic detection system was developed based on QT, with a stepper motor driving the conveyor belt. Defect detection was accurate and efficient, with a single-piece detection time of about 800 ms, meeting industrial inspection requirements. However, the loading and unloading mechanism needs further development, and for special-shaped parts, the detection path can be designed in cooperation with a robot.

4. Conclusions

The following conclusions can be drawn based on the results and discussion.
(1)
Firstly, we proposed a network for surface defect detection on highly reflective chrome-plated workpieces that combines dual attention mechanisms and semantic segmentation to detect three kinds of defects. Secondly, fusion of images taken under different lighting angles was used to avoid the effects of high reflectivity. In addition, the dataset was enhanced by creating artificial defect images to solve the problem of insufficient data. Finally, a step-by-step training strategy solved the class imbalance caused by the defects being much smaller than the background.
(2)
The model achieved a detection accuracy of 98.3% and a detection speed of around 800 ms on a single GPU. The Baru-Net and dual-angle light source approach can be applied to industrial scenes with small sample sizes and highly reflective surfaces.
(3)
In the future, the dataset will be further expanded in terms of the image number and defect type to ensure better generalization. The method based on transfer learning can also be implemented to improve the model performance. In addition, for the detection of special-shaped parts, we will further optimize the inspection device so that it can meet practical needs.

Author Contributions

Methodology, J.C.; Software, B.Z.; Formal analysis, Q.J.; Investigation, X.C.; Data curation, B.Z.; Writing—original draft, J.C. and B.Z.; Writing—review and editing, J.C. and B.Z.; Funding acquisition, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Fujian, China (Grant No. 2021J01855), the Doctoral Research Fund of Jimei University (Grant No. ZQ2021055) and the Jimei University Cultivation Program of the National Natural Science Foundation of China (Grant No. ZP2022013).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhou, A.; Ai, B.; Qu, P.; Shao, W. Defect Detection for Highly Reflective Rotary Surfaces: An Overview. Meas. Sci. Technol. 2021, 32, 062001. [Google Scholar] [CrossRef]
  2. Neogi, N.; Mohanta, D.K.; Dutta, P.K. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image. J. Inst. Eng. India Ser. B 2017, 98, 557–565. [Google Scholar] [CrossRef]
  3. Shao, W.; Peng, P.; Shao, Y.; Zhou, A. A Method for Identifying Defects on Highly Reflective Roller Surface Based on Image Library Matching. Math. Probl. Eng. 2020, 2020, 1837528. [Google Scholar] [CrossRef]
  4. Feng, W.; Liu, H.; Zhao, D.; Xu, X. Research on Defect Detection Method for High-Reflective-Metal Surface Based on High Dynamic Range Imaging. Optik 2020, 206, 164349. [Google Scholar] [CrossRef]
  5. He, Z.; Wang, Y.; Yin, F.; Liu, J. Surface Defect Detection for High-Speed Rails Using an Inverse P-M Diffusion Model. Sens. Rev. 2016, 36, 86–97. [Google Scholar] [CrossRef]
  6. Yun, J.P.; Choi, D.; Jeon, Y.; Park, C.; Kim, S.W. Defect Inspection System for Steel Wire Rods Produced by Hot Rolling Process. Int. J. Adv. Manuf. Technol. 2014, 70, 1625–1634. [Google Scholar] [CrossRef]
  7. Guo, Z.; Ye, S.; Wang, Y.; Lin, C. Resistance Welding Spot Defect Detection with Convolutional Neural Networks. In Computer Vision Systems; Liu, M., Chen, H., Vincze, M., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2017; Volume 10528, pp. 169–174. ISBN 978-3-319-68344-7. [Google Scholar]
  8. Öztürk, Ş.; Akdemir, B. Fuzzy Logic-Based Segmentation of Manufacturing Defects on Reflective Surfaces. Neural Comput. Appl. 2018, 29, 107–116. [Google Scholar] [CrossRef]
  9. Park, J.-K.; Kwon, B.-K.; Park, J.-H.; Kang, D.-J. Machine Learning-Based Imaging System for Surface Defect Inspection. Int. J. Precis. Eng. Manuf.-Green Technol. 2016, 3, 303–310. [Google Scholar] [CrossRef]
  10. Zhang, Z.; Wen, G.; Chen, S. Weld Image Deep Learning-Based on-Line Defects Detection Using Convolutional Neural Networks for Al Alloy in Robotic Arc Welding. J. Manuf. Process. 2019, 45, 208–216. [Google Scholar] [CrossRef]
  11. Baskaran, R.; Fernando, P. Steel Frame Structure Defect Detection Using Image Processing and Artificial Intelligence. In Proceedings of the 2021 International Conference on Smart Generation Computing, Communication and Networking (SMART GENCON), Pune, India, 29 October 2021; IEEE: Piscataway, NJ, USA; pp. 1–6. [Google Scholar]
  12. Ma, Z.; Li, Y.; Huang, M.; Huang, Q.; Cheng, J.; Tang, S. A Lightweight Detector Based on Attention Mechanism for Aluminum Strip Surface Defect Detection. Comput. Ind. 2022, 136, 103585. [Google Scholar] [CrossRef]
  13. Xu, L.M.; Yang, Z.Q.; Jiang, Z.H.; Chen, Y. Light Source Optimization for Automatic Visual Inspection of Piston Surface Defects. Int. J. Adv. Manuf. Technol. 2017, 91, 2245–2256. [Google Scholar] [CrossRef]
  14. Rosati, G.; Boschetti, G.; Biondi, A.; Rossi, A. Real-Time Defect Detection on Highly Reflective Curved Surfaces. Opt. Lasers Eng. 2009, 47, 379–384. [Google Scholar] [CrossRef]
  15. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2015; Volume 9351, pp. 234–241. ISBN 978-3-319-24573-7. [Google Scholar]
  16. Li, H.; Chen, H.; Jia, Z.; Zhang, R.; Yin, F. A Parallel Multi-Scale Time-Frequency Block Convolutional Neural Network Based on Channel Attention Module for Motor Imagery Classification. Biomed. Signal Process. Control 2023, 79, 104066. [Google Scholar] [CrossRef]
  17. Zhao, Z.; Chen, K.; Yamane, S. CBAM-Unet++: Easier to Find the Target with the Attention Module “CBAM”. In Proceedings of the 2021 IEEE 10th Global Conference on Consumer Electronics (GCCE), Kyoto, Japan, 12 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 655–657. [Google Scholar]
  18. Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Stoyanov, D., Taylor, Z., Carneiro, G., Syeda-Mahmood, T., Martel, A., Maier-Hein, L., Tavares, J.M.R.S., Bradley, A., Papa, J.P., Belagiannis, V., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11045, pp. 3–11. ISBN 978-3-030-00888-8. [Google Scholar]
  19. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
  20. Zhang, Z.; Liu, Q.; Wang, Y. Road Extraction by Deep Residual U-Net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Distortion of specular reflection. (a) Lighting scheme 1. (b) Image from ring-shaped lighting.
Figure 2. Symmetrical backlight sources scheme. (a) Lighting scheme 2. (b) Image from symmetrical backlight sources.
Figure 3. Network structure of Baru-Net.
Figure 4. Principle of CBAM module. (a) CBAM module structure diagram. (b) CAM module structure diagram. (c) SAM module structure diagram.
Figure 5. ASPP module structure diagram.
Figure 6. Label making.
Figure 7. Artificial defect image making.
Figure 8. Image block schematic diagram.
Figure 9. Comparison of the conventional and the proposed training methods.
Figure 10. Comparison of network performance with different structures. (a) Comparison of validation accuracy. (b) Comparison of loss.
Figure 11. The result of defect segmentation and identification. (a) Pit. (b) Hump. (c) Scratch. (d) Segmentation of pit. (e) Segmentation of hump. (f) Segmentation of scratch.
Figure 12. Test results of the training model under different datasets. (a) The result of the original dataset. (b) The result of the enhanced dataset.
Figure 13. The comparison results of OpenCV and Baru-Net. (a) OpenCV. (b) Baru-Net.
Figure 14. The comparison results of YOLOv8 and Baru-Net. (a,c) YOLOv8. (b,d) Baru-Net.
Table 1. Training hyperparameters setting.

Hyperparameter     Batch Size    Width    Height    Loss                  Optimizer
Parameter value    4             96       96        Cross Entropy Loss    Adam

Table 2. Accuracy of different network models.

Model            Accuracy (%)    Macro-F1 (%)
UNet++           96.1            89.9
UNet             91.4            89.4
AttentionUNet    97.8            90.1
Res_UNet         89.3            88.4
Baru-Net         98.3            91.3
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Chen, J.; Zhang, B.; Jiang, Q.; Chen, X. Baru-Net: Surface Defects Detection of Highly Reflective Chrome-Plated Appearance Parts. Coatings 2023, 13, 1205. https://doi.org/10.3390/coatings13071205
