Article

Nonlinear and Dotted Defect Detection with CNN for Multi-Vision-Based Mask Inspection

Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi-si 39177, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2022, 22(22), 8945; https://doi.org/10.3390/s22228945
Submission received: 21 September 2022 / Revised: 14 November 2022 / Accepted: 16 November 2022 / Published: 18 November 2022

Abstract

This paper addresses the problem of nonlinear and dotted defect detection for multi-vision-based mask inspection systems in mask manufacturing lines. As mask production increased with the worldwide spread of COVID-19, mask inspection systems came to require more efficient defect detection algorithms. Traditional computer vision algorithms, however, struggle with the wide variety and very small size of nonlinear and dotted defects on masks. This paper proposes a deep learning-based mask defect detection method that combines a convolutional neural network (CNN) with efficient preprocessing. The proposed method was developed for application to real manufacturing systems, so all training and inference were conducted with data produced by real mask manufacturing systems. Experimental results show that nonlinear and dotted defects were successfully detected by the proposed method and that its performance was higher than that of the previous method.

1. Introduction

As COVID-19 spread around the world, the use of masks increased, and many automated mask production factories were subsequently built. A mask production line includes a fabric supply device that supplies the fabrics for mask filters, a mask molding device that molds the fabrics, and a band attachment device that attaches a band to the mask filter. In automated production, after a mask is manufactured by attaching a band to a mask filter, an inspection process is required to check the mask for defects before packaging. The proposed algorithm is therefore used in the final process, with a vision system, in a mask manufacturing factory, as shown in Figure 1. The band attachment test checks whether the band is firmly attached to the mask filter. Foreign matter such as dust or hair can also stick to the mask; such masks must be removed through inspection because they cause discomfort to consumers. During this inspection process, normal and defective products are separated, and defective products are discarded. Human visual inspection is not effective, in terms of fatigue and cost, for determining whether a mask passes or fails. The method introduced in this study strives to resolve these pain points.
Techniques related to image-based inspection systems include the Hough transform [1,2,3], local binary patterns (LBP) [4,5], you only look once (YOLO) [6,7], the single-shot multibox detector (SSD) [8], RetinaNet [9], the scale-invariant feature transform (SIFT) [10,11,12], speeded-up robust features (SURF) [13,14], artificial neural networks (ANN) [15,16,17], and convolutional neural networks (CNN) [18,19,20,21,22]. The Hough transform detects straight lines in an image; the detected lines have maximum and minimum x and y coordinates, and various kinds of image processing can be conducted based on these values. LBP is a simple yet powerful rotation-invariant texture measure. YOLO detects and recognizes various objects in an image or video and also provides class probabilities for the detected objects. SSD is both faster and more accurate than YOLO; its main idea is to predict category scores and box offsets for a fixed set of default bounding boxes using small convolutional filters applied to feature maps. RetinaNet is composed of a backbone network and two task-specific subnetworks; it retains the fast detection time of a one-stage detector while mitigating the detection performance degradation typical of one-stage detectors. SIFT extracts features that are invariant to rotation and scale; it extracts features from two different images and matches the most similar ones to find the corresponding parts, but its processing speed is very low. SURF was designed to compensate for the shortcomings of SIFT: it approximates the Laplacian of Gaussian (LoG) with a box filter and achieves high processing speed by adding many characteristic elements in each step of computing the keypoints and descriptors.
However, when the viewpoint or the lighting changes, such hand-crafted features cannot be properly detected. Among these techniques, ANNs and CNNs use deep learning and are mainly applied to image processing. An ANN is an algorithm that mimics the information processing of the human brain; it is used for decision-making problems such as prediction and classification and consists of an input layer, hidden layers, and an output layer. A CNN consists of convolutional layers, ReLU activations, and pooling; given training data, it builds a learning model that ends with a fully connected layer. Verification is then carried out by training and testing the created model. CNNs are useful for finding patterns to recognize images, but it is hard to find a proper model for a given application.
The contributions of this paper are as follows:
  • The data for training and verification were produced in the real mask production lines.
  • An efficient preprocessing procedure was developed to handle very small dotted defects, which were otherwise difficult for the CNN to train on and infer.
  • Various types of nonlinear and dotted defects were successfully detected by the proposed method based on the CNN.
The remainder of this paper is organized as follows: Section 2 describes the training problem of the small dotted defect with the masks. Additionally, the reason for using multi-vision-based mask inspection is addressed. In Section 3, preprocessing methods of input data before training and the CNN model are proposed. In Section 4, the evaluation results of the proposed method are shown and compared with other methods quantitatively using a confusion matrix. Finally, Section 5 gives conclusions.

2. Problem Description

2.1. Multi-Vision-Based Mask Inspection

The visual inspection system uses an automated visual inspection device with a camera. This system can improve efficiency and perform objective and accurate quality control by automating the manual inspection of parts with computer-based camera imaging technology.
Several visual inspection methods exist, such as eddy current testing [23,24,25], thermography [26,27,28], and dye penetrant testing [29,30,31]. Eddy current testing uses an electronic sensor and a magnetic coil that induces a magnetic field; the interaction between the induced field and the component under examination produces an eddy current that can be measured with an electromagnetic sensor. This method is simple and easy, and the inspection can be performed without physical contact. Thermography uses a thermal sensor that measures the infrared radiation of the inspected component; the radiation flux is converted into a temperature, and the temperature distribution is represented as a thermal image. It is suitable for surface and interior inspection and has great advantages in detecting large voids or cracks. Dye penetrant testing detects discontinuities by applying a colored fluid penetrant to the inspection surface; a light source is then used by the inspector to highlight the defective features of the surface. The multi-vision system is a digital camera/software-based system that can visually recognize media registration marks and automatically compensate for distortion and image drift. Our mask production factory uses a multi-vision system for this reason, and it improves processing time compared with a single-vision system. When an image is taken in the multi-vision system, four mask images are captured at once. The algorithm then divides the image into quarters and processes each mask image one by one, as shown in Figure 2; to our knowledge, this is the first algorithm to perform mask inspection using a multi-vision-based system, as shown in Table 1.

2.2. Mask Defect Detection

If a nonlinear defect with a thickness of 100 μm is attached to a normal mask, as shown in Figure 3b, the mask is classified as a nonlinear defect mask. If a dotted defect with a radius of 100 μm is attached to a normal mask, as shown in Figure 3c, the mask is classified as a dotted defect mask. Additionally, white paint or dust may come off during the mask production process, as shown in Figure 3d. Such mask defects are difficult to distinguish using traditional computer image processing methods.
Our previous work [32] on detecting nonlinear and dotted defect masks was based on the Pearson correlation coefficient. It crops only the filter part of the mask and performs LoG processing to make the defective parts of the filter more visible. Then, by blurring the mask filter with an averaging filter, the difference between the normal mask image and the defective area is amplified, since the pixel values of the defective part and its surroundings change. The Pearson correlation coefficient was derived by extracting the histograms of the normal and defective masks. The method computed the Pearson correlation coefficient per kernel size, took the minimum value, and used it to determine whether the mask was normal or defective. This works well for detecting different types of nonlinear defects within the same mask image, but it is difficult across different mask images because the mask filter patterns may vary slightly, even for a single mask type. Moreover, when we acquired more data from the real vision-based system and tested it, the performance deteriorated. Therefore, we decided to apply a CNN-based method, which is robust to the various patterns of defective masks.

3. Methods

In an actual automated mask factory, when images are collected with the vision system, four masks are included in one image. Because the masks must be distinguished during the learning process, each image was divided into fourths. After a sufficient amount of data was collected, one-hot encoding was performed to convert the class labels into numerical data. The images were then classified as normal or defective mask images, and this labeling was used to train the model in this study. Finally, the trained model was tested on real images to verify its accuracy.
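As a sketch of the division and labeling steps above (assuming the four masks are laid out in a 2 × 2 grid; the function names are illustrative, not from the paper):

```python
import numpy as np

def split_into_fourths(frame: np.ndarray) -> list:
    """Divide one multi-vision frame containing four masks into four
    sub-images, assuming a 2 x 2 layout (the real layout may differ)."""
    h, w = frame.shape[:2]
    return [frame[:h // 2, :w // 2], frame[:h // 2, w // 2:],
            frame[h // 2:, :w // 2], frame[h // 2:, w // 2:]]

def one_hot(label: int, num_classes: int = 2) -> np.ndarray:
    """Encode a class label (0 = normal, 1 = defective) as a one-hot vector."""
    v = np.zeros(num_classes, dtype=np.float32)
    v[label] = 1.0
    return v
```

Each of the four sub-images is then paired with its one-hot label before training.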

3.1. Data Acquisition

Because no mask image data were publicly available, the mask images used in this experiment were produced by a mask manufacturing factory, as shown in Figure 4. After completing all the processes in the mask production line, the masks pass through the multi-vision system, and an image is taken for every 4 masks produced, as shown in Figure 5. The proposed method is then applied to distinguish normal masks from defective masks. In this study, 1000 of the 1300 normal mask images were used for training and 300 for testing; likewise, 1000 of the 1300 defective mask images were used for training and 300 for testing.

3.2. Nonlinear Defect Detection

The deep neural network model proposed in this paper was built as shown in Figure 6. In the nonlinear defect detection CNN model, the input image was 2592 × 1136 pixels, each convolutional layer used 3 × 3 filters, and the padding, which refers to the pixels added to the image border before it is processed by the kernel, was set to “same”. The convolutional layers learned features with 32, 32, 64, and 256 filters in order; each was followed by a ReLU activation function and max pooling. The output was then flattened, a fully connected layer with 256 outputs was applied with ReLU, and a dense layer with as many outputs as classes produced the final result through a softmax layer. When the model was compiled, binary cross-entropy loss was used because there were only two classes to distinguish: normal and defective. The Adam optimizer was used, and the metric was set to “accuracy”. The number of epochs was set to 15, as shown in Figure 7, and the batch size was set to 32.
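A minimal sketch of the layer configuration just described, written with the TensorFlow/Keras API the paper reports using (the function name and the single-channel input assumption are illustrative, not the authors' exact code):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_nonlinear_defect_cnn(input_shape=(1136, 2592, 1), num_classes=2):
    """Conv stacks of 32, 32, 64, 256 filters (3 x 3, padding 'same'),
    each followed by ReLU and max pooling, then Dense(256) and softmax."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in (32, 32, 64, 256):
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
        x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training would then call `model.fit(...)` with `epochs=15` and `batch_size=32` on the one-hot-encoded labels, as in Section 3.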

3.3. Dotted Defect Detection

Several steps are added in the preprocessing for dotted defect detection. If the entire mask image and background are trained together, it is difficult to learn the small dots. Therefore, template matching is used to crop only the mask filter region so that the small dots can be focused on. Template matching is an algorithm that finds the area of an original image that best matches a template image. The cross-correlation method (TM_CCORR) multiplies the template against the image, so a perfect match gives a large value and bad matches give small values or 0. The cross-coefficient method (TM_CCOEFF) applies TM_CCORR after correcting for brightness. The squared-difference method (TM_SQDIFF) subtracts the pixels of the original image from the template image, squares the differences, and sums them. For the template matching calculation, the squared difference (TM_SQDIFF) is defined as:
R(x, y) = \sum_{x', y'} \left( T(x', y') - I(x + x', y + y') \right)^2.
In this equation, x′ and y′ represent the coordinates within the template image. TM_SQDIFF measures the difference between the template image and the source image: the template is placed over the original image and shifted step by step, comparing at each position until the end of the image is reached. The area most similar to the template is thereby detected in the original image, as shown in Figure 8. After template matching, the morphological erosion operation in OpenCV is applied to make the black dots bigger. Erosion replaces the value of each pixel with the local minimum over the kernel area; applying it therefore shrinks the bright areas and enlarges the dark areas, as shown in Figure 9 and Figure 10. However, erosion cannot enhance white dotted defects, so the proposed method uses a CNN to detect the various types of dotted defect masks. The deep neural network model proposed in this paper was built as shown in Figure 11. In the dotted defect detection CNN model, the input image after preprocessing was 2592 × 1136 pixels, each convolutional layer used 3 × 3 filters, and the padding was set to “same”. The convolutional layers learned features with 32, 64, and 128 filters in order; each was followed by a ReLU activation function and max pooling. The output was then flattened, a fully connected layer with 256 outputs was applied with ReLU, and a softmax layer produced the final output. The Adam optimizer was used, and the metric was set to “accuracy”. The number of epochs was set to 15, as shown in Figure 12, and the batch size was set to 32.

4. Experiments and Results

4.1. Mask Defect Detection

The proposed method was implemented with Python 3.7.9 and TensorFlow 2.2.0 and ran on an NVIDIA GeForce RTX 3060. The feature maps created by the layers of the proposed neural network are shown in Figure 13 and Figure 14. Table 2 shows the confusion matrix for the nonlinear defect detection results. The recall, which is the proportion of samples whose actual value is true that were predicted as true, is 1. The precision, which is the proportion of samples predicted as true whose actual value is true, is 0.99. The accuracy, which evaluates how closely the predictions match the actual data, is 99.8%. The computation cost was about 0.06 s per single mask image; with a single-vision-based system, the computation cost was about 0.2 s per single mask image.
Table 3 shows the confusion matrix for the dotted defect detection results. The recall is 0.99, the precision is 1, and the accuracy is 99.8%. The computation cost was about 0.14 s per single mask image. Comparing the single-vision-based and multi-vision-based systems, the computation cost was improved from 0.45 s to 0.12 s.
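For clarity, the recall, precision, and accuracy figures above follow directly from the confusion-matrix counts; a small helper (illustrative, not from the paper) reproduces them:

```python
def confusion_metrics(tp: int, fn: int, fp: int, tn: int):
    """Recall, precision, and accuracy from confusion-matrix counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return recall, precision, accuracy

# Dotted defect detection counts reported in Table 3 for the proposed method
recall, precision, accuracy = confusion_metrics(tp=299, fn=1, fp=0, tn=300)
```

Here recall = 299/300 ≈ 0.99, precision = 1, and accuracy = 599/600 ≈ 99.8%, matching Table 3.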

4.2. Quantitative Comparison Results

A total of 1000 out of 1300 normal images were used for training and 300 for testing; likewise, 1000 out of 1300 defective images were used for training and 300 for testing. Since there is no existing defect detection algorithm for actual masks, we compared the proposed method with our previous work [32], SSD, RetinaNet, and YOLO v5. In the proposed method, the threshold for judging normal and defective masks was set to 0.8. We then compared the proposed method with the previous work as the threshold was varied, shown as receiver operating characteristic (ROC) curves in Figure 15 and Figure 16. The graphs show that our CNN model is robust even as the threshold changes. The accuracy differences between the method proposed in this paper, SSD, YOLO v5, RetinaNet, and our previous work are shown in Table 2 and Table 3. The specific YOLO v5 model was YOLO v5l, whose computation cost was about 0.3 s per 4 images. YOLO v5 gave relatively poor results in mask inspection because the linear and dotted defective parts are too small, making it hard to distinguish normal and defective masks. The SSD computation cost was about 0.23 s per 4 images; SSD performed better than YOLO v5, but the defective parts of the mask images were still too small to detect well. The RetinaNet computation cost was about 1.2 s per 4 images, and RetinaNet showed the lowest accuracy of all the methods. The computation cost of our previous work was about 0.4 s per 4 images; it also performed relatively poorly because masks are not always produced in a precise shape in the factory, and wrinkles can occur in various patterns on each mask.
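The ROC comparison is produced by sweeping the decision threshold and recomputing the true- and false-positive rates at each setting; a minimal sketch (the function name and the ≥-threshold convention are assumptions):

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    """Return (FPR, TPR) pairs for each threshold, where scores are the
    model's defect probabilities and labels are 1 (defective) or 0 (normal)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    points = []
    for t in thresholds:
        pred = scores >= t                     # classify as defective at/above t
        tp = int(np.sum(pred & (labels == 1)))
        fp = int(np.sum(pred & (labels == 0)))
        fn = int(np.sum(~pred & (labels == 1)))
        tn = int(np.sum(~pred & (labels == 0)))
        points.append((fp / (fp + tn), tp / (tp + fn)))
    return points
```

Plotting these pairs over a range of thresholds yields curves like those in Figure 15 and Figure 16.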
The Pearson correlation coefficient method detected defects with high accuracy only when the defective parts varied within the same image; when a different mask image was used, the difference between the normal and defective mask values was not large because of the different patterns in each image. In contrast, the method proposed in this paper achieved high accuracy regardless of the different patterns in each image.

5. Conclusions

In this paper, we proposed a suitable CNN structure and proper preprocessing methods for distinguishing normal masks from masks with various nonlinear and dotted defects in a mask production line. When a classical computer vision-based method was applied, it was difficult to detect defects due to the diversity of the masks, but the defects were successfully detected when the deep learning-based method with preprocessing was applied. The computation time of the multi-vision-based system was also improved over that of the single-vision-based system. The experiments were performed with actual mask image data and showed higher accuracy than the other methods for detecting normal and defective masks.

Author Contributions

Conceptualization, H.L.; Methodology, J.W.; Software, J.W.; Writing—original draft, J.W.; Writing—review & editing, H.L.; Visualization, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the MSIT (Ministry of Science and ICT), Korea, under the Grand Information Technology Research Center support program (IITP-2022-2020-0-01612) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation); in part by the project for “Customized technology partner” funded by the Korea Ministry of SMEs and Startups in 2020 (project No. G21S299392101); and in part by the Government-wide R&D Fund for Infectious Disease Research (GFID), funded by the Ministry of the Interior and Safety, Republic of Korea (grant number: 20014854).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mukhopadhyay, P.; Chaudhuri, B.B. A survey of Hough Transform. Pattern Recognit. 2015, 48, 993–1010. [Google Scholar] [CrossRef]
  2. Duan, D.; Xie, M.; Mo, Q.; Han, Z.; Wan, Y. An improved Hough transform for line detection. In Proceedings of the 2010 International Conference on Computer Application and System Modeling (ICCASM 2010), Taiyuan, China, 22–24 October 2010; pp. V2-354–V2-357. [Google Scholar] [CrossRef]
  3. Ye, H.; Shang, G.; Wang, L.; Zheng, M. A new method based on Hough transform for quick line and circle detection. In Proceedings of the 2015 8th International Conference on Biomedical Engineering and Informatics (BMEI), Shenyang, China, 14–16 October 2015; pp. 52–56. [Google Scholar] [CrossRef]
  4. Song, C.; Yang, F.; Li, P. Rotation Invariant Texture Measured by Local Binary Pattern for Remote Sensing Image Classification. In Proceedings of the 24th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 3–6. [Google Scholar]
  5. Huang, D.; Shan, C.; Ardabilian, M.; Wang, Y.; Chen, L. Local Binary Patterns and Its Application to Facial Image Analysis: A Survey. IEEE Trans. Syst. Man Cybern. Part C 2011, 41, 765–781. [Google Scholar] [CrossRef] [Green Version]
  6. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef] [Green Version]
  7. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  8. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot MultiBox Detector. arXiv 2015, arXiv:1512.02325. [Google Scholar]
  9. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. arXiv 2018, arXiv:1708.02002. [Google Scholar]
  10. Guo, F.; Yang, J.; Chen, Y.; Yao, B. Research on image detection and matching based on SIFT features. In Proceedings of the 2018 3rd International Conference on Control and Robotics Engineering (ICCRE), Nagoya, Japan, 20–23 April 2018; pp. 130–134. [Google Scholar]
  11. Satare, R.N.; Khot, S.R. Image matching with SIFT feature. In Proceedings of the 2018 2nd International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 19–20 January 2018; pp. 384–387. [Google Scholar]
  12. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  13. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. In Proceedings of the European Conference on Computer Vision—ECCV 2006, Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3951. [Google Scholar] [CrossRef]
  14. Juan, L.; Oubong, G. SURF applied in panorama image stitching. In Proceedings of the 2010 2nd International Conference on Image Processing Theory, Tools and Applications, Paris, France, 7–10 July 2010; pp. 495–499. [Google Scholar] [CrossRef]
  15. Mayannavar, S.; Wali, U.; Aparanji, V. A Novel ANN Structure for Image Recognition. arXiv 2020, arXiv:2010.04586. [Google Scholar]
  16. Madani, K. Artificial Neural Networks Based Image Processing & Pattern Recognition: From Concepts to Real-World Applications. In Proceedings of the 2008 First Workshops on Image Processing Theory, Tools and Applications, Sousse, Tunisia, 23–26 November 2008; pp. 1–9. [Google Scholar]
  17. Pourghahestani, F.A.; Rashedi, E. Object detection in images using artificial neural network and improved binary gravitational search algorithm. In Proceedings of the 2015 4th Iranian Joint Congress on Fuzzy and Intelligent Systems (CFIS), Zahedan, Iran, 9–11 September 2015; pp. 1–4. [Google Scholar] [CrossRef]
  18. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep learning for hyperspectral image classification: An overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709. [Google Scholar] [CrossRef]
  19. Chauhan, R.; Ghanshala, K.K.; Joshi, R.C. Convolutional Neural Network (CNN) for Image Detection and Recognition. In Proceedings of the 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC), Jalandhar, India, 15–17 December 2018; pp. 278–282. [Google Scholar]
  20. Yang, S.B.; Lee, S.J. Improved CNN algorithm for object detection in large images. J. Korea Soc. Comput. Inf. 2020, 25, 45–53. [Google Scholar]
  21. Galvez, R.L.; Bandala, A.A.; Dadios, E.P.; Vicerra, R.R.P.; Maningo, J.M.Z. Object Detection Using Convolutional Neural Networks. In Proceedings of the TENCON 2018—2018 IEEE Region 10 Conference, Jeju Island, Republic of Korea, 28–31 October 2018; pp. 2023–2027. [Google Scholar] [CrossRef]
  22. Srivastava, S.; Divekar, A.V.; Anilkumar, C.; Naik, I.; Kulkarni, V.; Pattabiraman, V. Comparative analysis of deep learning image detection algorithms. J. Big Data 2021, 8, 66. [Google Scholar] [CrossRef]
  23. Sophian, A.; Tian, G.; Fan, M. Pulsed eddy current non-destructive testing and evaluation: A review. Chin. J. Mech. Eng. 2017, 30, 500–514. [Google Scholar] [CrossRef] [Green Version]
  24. Deng, W.; Bao, J.; Ye, B. Defect Image Recognition and Classification for Eddy Current Testing of Titanium Plate Based on Convolutional Neural Network. Complexity 2020, 2020, 8868190. [Google Scholar] [CrossRef]
  25. Bartels, K.A.; Fisher, J.L. Multifrequency eddy current image processing techniques for nondestructive evaluation. In Proceedings of the International Conference on Image Processing, Washington, DC, USA, 23–26 October 1995; Volume 1, pp. 486–489. [Google Scholar] [CrossRef]
  26. Ebayyeh, A.A.R.M.A.; Mousavi, A. A review and analysis of automatic optical inspection and quality monitoring methods in electronics industry. IEEE Access 2020, 8, 183192–183271. [Google Scholar] [CrossRef]
  27. Massaro, A.; Panarese, A.; Dipierro, G.; Cannella, E.; Galiano, A. Infrared Thermography and Image Processing applied on Weldings Quality Monitoring. In Proceedings of the 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT, Virtual Conference, 3–5 June 2020; pp. 559–564. [Google Scholar]
  28. Saragadam, V.; Dave, A.; Veeraraghavan, A.; Baraniuk, R. Thermal Image Processing via Physics-Inspired Deep Networks. arXiv 2021, arXiv:2108.07973. [Google Scholar]
  29. Abend, K. Fully automated dye-penetrant inspection of automotive parts. J. Mater. Manuf. 1998, 107, 624–629. [Google Scholar]
  30. Endramawan, T.; Sifa, A. Non Destructive Test Dye Penetrant and Ultrasonic on Welding SMAW Butt Joint with Acceptance Criteria ASME Standard. IOP Conf. Ser. Mater. Sci. Eng. 2018, 306, 012122. [Google Scholar] [CrossRef]
  31. Kalambe, P.; Ikhar, S.; Dhopte, V. Dye Penetrant Inspection of Turbine Blade Root Attachment. Int. J. Innov. Res. Technol. 2020, 6, 783–785. [Google Scholar]
  32. Lee, H.; Lee, H. Average Blurring-Based Anomaly Detection for Vision-based Mask Inspection Systems. In Proceedings of the 2021 21st International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 12–15 October 2021; pp. 2144–2146. [Google Scholar] [CrossRef]
Figure 1. Mask production line with a multi-visual inspection system in mask manufacturing factory.
Figure 2. Image processing in multi-vision system.
Figure 3. (a) Normal mask image; (b) nonlinear defect mask image; (c) dotted defect mask image; and (d) dotted defect mask image.
Figure 4. Mask production factory. The non-woven fabric, nose support, and filter are made in a (a) materials supply unit. Mask forming is conducted in a (b) forming unit. Ear loops are attached in (c) ear loop unit 1 and (d) ear loop unit 2.
Figure 5. Mask image from a multi-vision system in a mask production line.
Figure 6. Nonlinear defect CNN model used in the proposed method.
Figure 7. Nonlinear defect CNN model training result.
Figure 8. Apply various template matching methods in the mask image, (a) apply the minimum cross coefficient method; (b) apply the minimum cross correlation method; and (c) apply the minimum square difference method.
Figure 9. Preprocessing in dotted defect detection. (a) Apply template matching to mask image; (b) apply morphology erode calculation to (a) image.
Figure 10. Preprocessing (template matching and erode operation) in dotted defect detection.
Figure 11. Dotted defect CNN model used in the proposed method.
Figure 12. Dotted defect CNN model training result.
Figure 13. Dotted defect model training results for convolution layers. (a) Feature map of the first convolution layer; (b) feature map of the second convolution layer; and (c) feature map of the third convolution layer.
Figure 14. Nonlinear defect model training results for convolution layers. (a) Feature map of the first convolution layer; (b) feature map of the second convolution layer; (c) feature map of the third convolution layer; and (d) feature map of the fourth convolution layer.
Figure 15. ROC curve for comparing nonlinear defect detection using CNN with previous work.
Figure 16. ROC curve for comparing dotted defect detection using CNN with previous work.
Table 1. Related works using inspection method.

| Related Works | Real Data | Inspection Method      | Mask Application |
|---------------|-----------|------------------------|------------------|
| [23,24]       | O, X      | Eddy current           | X, X             |
| [26,27]       | O, O      | Thermography           | X, X             |
| [29,30]       | O, X      | Dye penetrant testing  | X, X             |
| Proposed      | O         | Multi-vision system    | O                |
Table 2. Quantitative comparison results (nonlinear defect detection).

| Method             | TP  | FN  | FP | TN  | Recall | Precision | Accuracy |
|--------------------|-----|-----|----|-----|--------|-----------|----------|
| Proposed method    | 300 | 0   | 1  | 299 | 1      | 0.99      | 99.8%    |
| Previous work [32] | 256 | 44  | 63 | 237 | 0.85   | 0.8       | 82.1%    |
| YOLO v5            | 198 | 102 | 88 | 212 | 0.66   | 0.69      | 68.3%    |
| SSD                | 223 | 73  | 70 | 230 | 0.75   | 0.76      | 75.5%    |
| RetinaNet          | 182 | 118 | 86 | 214 | 0.6    | 0.67      | 66%      |
Table 3. Quantitative comparison results (dotted defect detection).

| Method             | TP  | FN  | FP  | TN  | Recall | Precision | Accuracy |
|--------------------|-----|-----|-----|-----|--------|-----------|----------|
| Proposed method    | 299 | 1   | 0   | 300 | 0.99   | 1         | 99.8%    |
| Previous work [32] | 244 | 56  | 89  | 211 | 0.81   | 0.73      | 75.8%    |
| YOLO v5            | 206 | 94  | 119 | 181 | 0.68   | 0.63      | 64.5%    |
| SSD                | 229 | 71  | 97  | 203 | 0.76   | 0.7       | 72%      |
| RetinaNet          | 185 | 115 | 141 | 159 | 0.61   | 0.56      | 57.3%    |