Inclusion Detection in Injection-Molded Parts with the Use of Edge Masking
Abstract
1. Introduction
1.1. Background and Related Work
1.2. The Main Contribution
2. Materials and Methods
2.1. The Laboratory Stand
2.2. Diagram of the Proposed Method
2.3. Preparation of the Reference Pair Database
- Edge mask calculation directly from the reference image. In the first step, edges are detected using the Canny filter. The filter parameters, particularly the standard deviation and the hysteresis thresholds, are adjusted to ensure correct detection and avoid false detections, which would exclude some areas from the inclusion detection algorithm. Then, morphological filters [23] are applied in order to extend the mask to the neighborhood of the edges detected by the Canny filter.
- Edge mask calculation from the 3D model. The method requires a 3D model of the object. The corresponding projection of the model edges is superimposed on the camera image of the object. This method is more robust than the first, as it is not sensitive to noise, shadows, etc.; however, it requires matching the model with the object observed by the camera.
2.3.1. Method 1: Edge Mask Calculation Directly from the Reference Image
- Edge detection using a Canny filter. A relatively high standard deviation allows for discarding small irregularities, which should not be included in the edge mask as potential flaws.
- Dilation of edges using a circle with diameter dd as a structuring element. The purpose is to exclude from calculations not only one-pixel-wide edges but also the surrounding area, since in later stages irregularities are searched for in the neighborhood of each non-masked pixel of the object.
- Inclusion in the edge mask of small, isolated areas surrounded by edges, which are assumed to be narrow hollow areas or small poles, where shadows are often present and may cause false alarms. This step is performed as morphological closing using a circle with diameter dc as a structuring element.
- Exclusion from the edge mask of small, isolated areas, which may be potential inclusions that appeared at the output of the Canny filter. This is achieved by removing all connected components that have fewer than p pixels.
- Standard deviation of Canny filter: σ = 5;
- Hysteresis thresholds of Canny filter: Θ1 = 0.02, Θ2 = 0.1;
- Size of the structuring element for dilation in Step 2: dd = 15;
- Size of the structuring element for closing in Step 3: dc = 10;
- Minimum number of pixels for connected components: p = 300.
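The four steps above, with the listed parameter values, can be sketched with scikit-image as follows (a minimal illustration; the function and variable names are ours, not the authors'):

```python
import numpy as np
from skimage import feature, morphology

def edge_mask_from_reference(gray, sigma=5.0, theta1=0.02, theta2=0.1,
                             dd=15, dc=10, p=300):
    """Build an edge mask from a reference grayscale image in [0, 1]."""
    # Step 1: Canny edge detection with a relatively high standard deviation
    edges = feature.canny(gray, sigma=sigma,
                          low_threshold=theta1, high_threshold=theta2)
    # Step 2: dilate the one-pixel-wide edges with a circular structuring
    # element of diameter dd (disk() takes a radius)
    mask = morphology.dilation(edges, morphology.disk(dd // 2))
    # Step 3: morphological closing absorbs small areas enclosed by edges
    mask = morphology.closing(mask, morphology.disk(dc // 2))
    # Step 4: drop small isolated components (potential inclusions) with
    # fewer than p pixels
    return morphology.remove_small_objects(mask, min_size=p)
```

The result is a Boolean mask of the same shape as the input image, marking the pixels excluded from inclusion detection.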
2.3.2. Method 2: Edge Mask Calculated from the 3D Model
- The reference image, which is an image of the object from the camera, taken, if possible, under conditions (illumination, camera parameters) similar to the target conditions;
- The reference mask—the binary mask of edges calculated from the STL file, aligned with the reference image based on a manual indication of corresponding points.
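The alignment based on manually indicated corresponding points can be illustrated with scikit-image's projective transform estimation. All coordinates and the placeholder mask below are made-up values for illustration only:

```python
import numpy as np
from skimage import transform

# Hypothetical manually indicated corresponding (x, y) points: locations
# in the rendered model projection vs. in the reference image.
model_pts = np.array([[10, 12], [250, 15], [245, 180], [14, 178]], dtype=float)
image_pts = np.array([[32, 40], [270, 38], [268, 205], [30, 208]], dtype=float)

# Estimate the model-to-image projective transform from the point pairs
tform = transform.estimate_transform('projective', model_pts, image_pts)

# Warp the binary edge mask rendered from the STL projection into the
# reference-image coordinate frame (placeholder mask for illustration)
model_mask = np.zeros((220, 300))
model_mask[20:60, 40:120] = 1.0
reference_mask = transform.warp(model_mask, tform.inverse,
                                output_shape=(256, 320)) > 0.5
```

Note that `warp` expects the inverse map (output coordinates to input coordinates), hence `tform.inverse`.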
2.3.3. Comparison of Edge Mask Calculation Methods
2.4. Inclusion Detection
- Part classification, necessary to apply the corresponding edge mask; see Section 2.4.1;
- Matching the reference mask with the image under inspection; see Section 2.4.2;
- Detection of irregularities in the surface grayscale, performed for the part of the object surface that has not been masked by the edge mask; see Section 2.4.3.
2.4.1. Part Classification
2.4.2. Matching the Reference Mask with the Image Under Inspection
- The reference object, i.e., the object seen by the camera in a fixed position, called the reference position;
- The reference mask, i.e., a mask calculated from a 3D object geometrically transformed to a position identical to the position of the reference object.
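As a simplified illustration of this matching step, a pure translation between the reference image and the image under inspection can be recovered with phase correlation; more general transforms would call for feature-based matching (e.g., SURF, cited in the references). The synthetic images below are placeholders:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
# Smoothed random noise stands in for a textured reference image
reference = gaussian_filter(rng.random((128, 128)), sigma=2)
# The inspected image is the reference shifted by (7, 3) pixels
inspected = np.roll(reference, shift=(7, 3), axis=(0, 1))

# Estimate the (row, col) shift between the two images
shift, error, phasediff = phase_cross_correlation(reference, inspected)
```

The recovered shift can then be applied to the reference mask before inclusion detection.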
2.4.3. Detection of Irregularities in the Surface Grayscale
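A generic sketch of this idea (our simplification, not the authors' exact algorithm): estimate the local background with a median filter and flag non-masked pixels whose grayscale deviates from it beyond a threshold. The window size and threshold below are made-up values:

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_irregularities(gray, edge_mask, win=15, thresh=0.08):
    """Flag grayscale irregularities outside the edge mask (gray in [0, 1])."""
    background = median_filter(gray, size=win)   # local background estimate
    deviation = np.abs(gray - background)        # local grayscale deviation
    return (deviation > thresh) & ~edge_mask     # ignore masked edge regions
```

Pixels under the edge mask are never flagged, which is exactly why the mask must cover the neighborhood of all edges.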
3. Results and Discussions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Luo, Q.; Fang, X.; Su, J.; Zhou, J.; Zhou, B.; Yang, C.; Liu, L.; Gui, W.; Tian, L. Automated Visual Defect Classification for Flat Steel Surface: A Survey. IEEE Trans. Instrum. Meas. 2020, 69, 9329–9349. [Google Scholar] [CrossRef]
- Fang, X.; Luo, Q.; Zhou, B.; Li, C.; Tian, L. Research Progress of Automated Visual Surface Defect Detection for Industrial Metal Planar Materials. Sensors 2020, 20, 5136. [Google Scholar] [CrossRef] [PubMed]
- Xie, X. A Review of Recent Advances in Surface Defect Detection Using Texture Analysis Techniques. ELCVIA Electron. Lett. Comput. Vis. Image Anal. 2008, 7, 1. [Google Scholar] [CrossRef]
- Tsai, D.M.; Chen, M.C.; Li, W.C.; Chiu, W.Y. A fast regularity measure for surface defect detection. Mach. Vis. Appl. 2012, 23, 869–886. [Google Scholar] [CrossRef]
- Ma, Y.; Li, Q.; Zhou, Y.; He, F.; Xi, S. A surface defects inspection method based on multidirectional gray-level fluctuation. Int. J. Adv. Robot. Syst. 2017, 14, 1–17. [Google Scholar] [CrossRef]
- Weyrich, M.; Wang, Y. A Real-time and Vision-based Methodology for Processing 3D Objects on a Conveyor Belt. Int. J. Syst. Appl. Eng. Dev. 2011, 5, 561–569. [Google Scholar]
- Zhiznyakov, A.L.; Privezentsev, D.G.; Zakharov, A.A. Using fractal features of digital images for the detection of surface defects. Pattern Recognit. Image Anal. 2015, 25, 122–131. [Google Scholar] [CrossRef]
- Zhang, M.; Shi, H.; Yu, Y.; Zhou, M. A computer vision based conveyor deviation detection system. Appl. Sci. 2020, 10, 2402. [Google Scholar] [CrossRef]
- Yang, Y.; Miao, C.; Li, X.; Mei, X. On-line conveyor belts inspection based on machine vision. Optik 2014, 125, 5803–5807. [Google Scholar] [CrossRef]
- Ren, Z.; Fang, F.; Yan, N.; Wu, Y. State of the Art in Defect Detection Based on Machine Vision. Int. J. Precis. Eng. Manuf.-Green Technol. 2021, 9, 661–691. [Google Scholar] [CrossRef]
- Bhatt, P.M.; Malhan, R.K.; Rajendran, P.; Shah, B.C.; Thakar, S.; Yoon, Y.J.; Gupta, S.K. Image-Based Surface Defect Detection Using Deep Learning: A Review. J. Comput. Inf. Sci. Eng. 2021, 21, 040801. [Google Scholar] [CrossRef]
- Ke, K.C.; Huang, M.S. Quality prediction for injection molding by using a multilayer perceptron neural network. Polymers 2020, 12, 1812. [Google Scholar] [CrossRef] [PubMed]
- Cha, Y.-J.; Choi, W.; Suh, G.; Mahmoudkhani, S.; Büyüköztürk, O. Autonomous Structural Visual Inspection Using Region-Based Deep Learning for Detecting Multiple Damage Types. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 731–747. [Google Scholar] [CrossRef]
- Kocon, M.; Malesa, M.; Rapcewicz, J. Ultra-Lightweight Fast Anomaly Detectors for Industrial Applications. Sensors 2024, 24, 161. [Google Scholar] [CrossRef]
- Zong, Y.; Liang, J.; Wang, H.; Ren, M.; Zhang, M.; Li, W.; Lu, W.; Ye, M. An intelligent and automated 3D surface defect detection system for quantitative 3D estimation and feature classification of material surface defects. Opt. Lasers Eng. 2021, 144, 106633. [Google Scholar] [CrossRef]
- Chen, Y.; Ding, Y.; Zhao, F.; Zhang, E.; Wu, Z.; Shao, L. Surface defect detection methods for industrial products: A review. Appl. Sci. 2021, 11, 7657. [Google Scholar] [CrossRef]
- Liu, L.; Wang, H.; Yu, B.; Xu, Y.; Shen, J. Improved algorithm of light scattering by a coated sphere. China Particuology 2007, 5, 230–236. [Google Scholar] [CrossRef]
- Li, B.; Wang, J.; Gao, Z.; Gao, N. Light Source Layout Optimization Strategy Based on Improved Artificial Bee Colony Algorithm. Math. Probl. Eng. 2021, 2021, 099757. [Google Scholar] [CrossRef]
- Kokka, A.; Pulli, T.; Ferrero, A.; Dekker, P.; Thorseth, A.; Kliment, P.; Klej, A.; Gerloff, T.; Ludwig, K.; Poikonen, T.; et al. Validation of the fisheye camera method for spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Metrologia 2019, 56, 045002. [Google Scholar] [CrossRef]
- Kokka, A.; Pulli, T.; Poikonen, T.; Askola, J.; Ikonen, E. Fisheye camera method for spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Metrologia 2017, 54, 577–583. [Google Scholar] [CrossRef]
- Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698. [Google Scholar] [CrossRef] [PubMed]
- Granlund, G.H. In Search of a General Picture Processing Operator. Comput. Graph. Image Process. 1978, 8, 155–173. [Google Scholar] [CrossRef]
- Serra, J. Image Analysis and Mathematical Morphology; Academic Press: New York, NY, USA, 1982. [Google Scholar]
- Iandola, F.N.; Moskewicz, M.W.; Ashraf, K.; Han, S.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1 MB model size. In Proceedings of the ICLR, Toulon, France, 24–26 April 2017. [Google Scholar]
- Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. Comput. Vis. Image Underst. (CVIU) 2008, 110, 346–359. [Google Scholar] [CrossRef]
| | Method Based on Edge Detector | Method Based on 3D Model |
|---|---|---|
| Accuracy | Low. Sensitive to noise from the edge detection filter: flaws recognized by the filter as edges cannot be detected, and undetected edges can later be reported as flaws. | Medium. Part details not included in the CAD model, such as injection marks, may cause false detections. |
| Automation | High. Fast and fully automated; no manual action needed. | Medium. Needs some manual adjustment to match the mask with the reference image when a model is introduced into the database. |
| Robustness | Low. Sensitive to the edge detection filter parameters; results depend on lighting conditions, noise, and image quality. Needs heavily dilated edges, so some areas have to be excluded from quality control. | High. No need to detect edges; the model edges are always well matched and excluded from quality control. |
| Indicator | Description | Value |
|---|---|---|
| TP (true positive) | The number of inclusions that were correctly detected | 261 |
| FP (false positive) | The number of false detections, i.e., a grayscale variation detected as an inclusion although human experts consider the sample acceptable | 40 |
| FN (false negative) | The number of undetected inclusions | 0 |
| Precision | TP/(TP + FP): the percentage of detections that are classified by experts as inclusions | 261/(261 + 40) = 87% |
| Recall | TP/(TP + FN): the percentage of inclusions that were detected | 261/(261 + 0) = 100% |
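The precision and recall values in the table follow directly from the reported counts:

```python
TP, FP, FN = 261, 40, 0

precision = TP / (TP + FP)   # fraction of detections confirmed as inclusions
recall = TP / (TP + FN)      # fraction of inclusions that were detected

print(f"precision = {precision:.1%}, recall = {recall:.1%}")
# → precision = 86.7%, recall = 100.0%
```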
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Rotter, P.; Klemiato, M.; Knapik, D.; Rosół, M.; Putynkowski, G. Inclusion Detection in Injection-Molded Parts with the Use of Edge Masking. Sensors 2024, 24, 7150. https://doi.org/10.3390/s24227150