Article

Visual Detection of Surface Defects Based on Self-Feature Comparison in Robot 3-D Printing

Hongyao Shen, Wangzhe Du, Weijun Sun, Yuetong Xu and Jianzhong Fu
1 The State Key Laboratory of Fluid Power and Mechatronic Systems, College of Mechanical Engineering, Zhejiang University, Hangzhou 310027, China
2 Key Laboratory of 3D Printing Process and Equipment of Zhejiang Province, College of Mechanical Engineering, Zhejiang University, Hangzhou 310027, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(1), 235; https://doi.org/10.3390/app10010235
Submission received: 3 December 2019 / Revised: 25 December 2019 / Accepted: 25 December 2019 / Published: 27 December 2019
(This article belongs to the Section Mechanical Engineering)

Abstract

Fused deposition modeling (FDM) additive manufacturing technology has been widely applied in recent years. However, many defects may arise in the printing process that affect the surface quality and accuracy of parts, or even cause them to collapse. With existing defect detection techniques, features of the parts themselves may be misjudged as defects. This paper presents a solution to the problem of distinguishing defects from the parts' own characteristics in robot 3-D printing: a self-feature extraction method for shape defect detection of 3-D printed products. The discrete point cloud obtained by model slicing is used both for path planning in 3-D printing and for self-feature extraction. During printing, it generates the G-code and controls the shooting direction of the camera. Once the current coordinates have been received, self-feature extraction begins; its key steps are retaining the visible point cloud of the printed part and projecting the feature points onto the picture under the equal mapping condition. Image processing then detects the contours of both the projected picture and the captured picture. Finally, defects are identified by evaluating contour similarity with an empirical formula. This work helps to detect defects online, improve detection accuracy, and reduce the false detection rate caused by the parts' own characteristics.

1. Introduction

Fused deposition modeling (FDM) is a widely used additive manufacturing technology (commonly known as 3-D printing) that fabricates parts by adding material layer by layer [1,2,3,4]. The quality and accuracy of the surface may be affected by defects that are unavoidably generated in the process; even worse, defects may cause the surface to collapse [5,6]. In 3-D printing, early defect detection enables the printer to take corrective measures, reducing the waste of material and printing resources.
Defect detection is an active research field. Existing defect detection methods can be divided into two groups: machine vision-based monitoring systems and laser scanning-based monitoring systems. The former mainly uses cameras to take pictures, while the latter can also measure the height of the object, which a monocular vision-based system cannot. Lin et al. [7] adopted laser scanning technology to detect overfill and underfill defects on the upper surface of deposited parts in the additive manufacturing process by comparing the measured point cloud with the pre-sliced stereolithography (STL) model. Liu et al. [8] proposed a stereo vision measurement system that simultaneously acquires the surface grayscale image and depth image without extra data registration or calibration, improving the accuracy of surface defect detection. Ren et al. [9] proposed a data-driven photometric stereo method, establishing a Gaussian process (GP) model to represent the nonlinear reflectance behavior of various materials based on measured reflectance datasets.
Defect detection algorithms can be classified as conventional methods and methods based on deep learning. The main difference between the two is that deep learning extracts features automatically with convolutional neural networks, while conventional machine learning methods require manually designed feature engineering. Many researchers have developed deep learning methods for their particular application fields. Chang et al. [10] designed an image acquisition module to capture surface images under bright-field illumination and proposed a deep learning model named TinyDefectNet to detect the location and classes of defects. Tabernik et al. [11] proposed a segmentation-based deep learning architecture to detect and segment surface cracks. Villalba-Diez et al. [12] developed a deep neural network to classify optical defects, demonstrating great application potential in Industry 4.0. Methods based on deep learning usually need large amounts of data. Zhang et al. [13] proposed a deep convolutional neural network named UCR to detect both common and rare defects on the surface of aluminum profiles. Du et al. [14] used a Feature Pyramid Network (FPN), combined with data augmentation and algorithmic improvements, to detect defects in X-ray images of die-cast aluminum automobile parts.
To improve the performance of deep learning models, as much labeled data as possible must be provided, which costs time and money, whereas conventional machine learning methods can achieve better performance on small datasets. Zhou et al. [15] proposed an automatic inspection system with a five-plane array of charge-coupled device (CCD) cameras and four LED light sources in a closed environment; a support vector machine was adopted to classify defects based on features extracted from candidate defect regions. Wang et al. [16] developed a three-step computational framework to detect the position, shape, number, and size of surface defects on complex components. Abul'khanov et al. [17] created visual and numerical tools for analyzing a rough surface, characterizing it by building an information pattern from images of the micro-roughnesses on the controlled surface. Chervyakov et al. [18] proposed two modified adaptive median filters for impulse noise in images; their experiments showed potential applications in processing satellite and medical imagery, geophysical data, and other areas of digital image processing.
In existing works, most researchers focus on detecting defects on the upper surface in FDM. Outer-surface detection has an advantage in FDM because the gaps between layers on the outer surface are normally regular; when these gaps become irregular, defects are likely present. In our previous work, a multiview, all-round vision detection system for the outer surface was presented based on this invariance of the interlayer gaps, which can detect defects on the outer surface online and monitor the 3-D printing process [19]. However, it is only applicable to parts whose outer surface changes gently. For a steeply changing outer surface in the field of view, it can hardly distinguish defects from the part's own features, such as the eyes on a face, which may be misidentified as defects.
This paper presents a self-feature extraction method for shape defect detection of robot 3-D printed products, which distinguishes defects from the parts' own characteristics by comparing the theoretical projection contours with the experimental contours of the products.
The remainder of this paper is organized as follows. Section 2.1 introduces the whole process of the method. Section 2.2 analyzes the implementation details of self-feature extraction. Section 3 proposes an evaluation function to judge the similarity between self-feature contours and defect contours.

2. Methodology

2.1. The Whole Process of Identifying Defects and Their Own Characteristics

In this research, the hardware platform is shown in Figure 1 and the program is implemented in C++ with OpenCV. The CCD camera is rigidly fixed to the nozzle and kept perpendicular to the outer surface throughout the printing process. The main software used to develop the image detection system is Visual Studio 2015. The robot 3-D printer is built on a Mitsubishi RV-6SD 6-DOF robot as the basic experimental platform, combined with a Basler acA1600-20gm camera and a computer.
The algorithm for visual detection of surface defects based on self-feature comparison is shown in Figure 2. First, the program slices the 3-D printing model into a discrete point cloud layer by layer; this point cloud is the common source of both the self-features and the machining path in 3-D printing, although their subsequent processing differs. A minimal slicing sketch is given below.
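As an illustration only, the following C++ sketch shows one common way such a slicing step can be realized: intersecting every mesh triangle with the plane z = h to collect per-layer segments, whose endpoints form the layer's discrete points. The types `Vec3`, `Segment`, and `sliceLayer` are hypothetical names, and the chaining of segments into closed contours is omitted; the paper does not publish its slicer.

```cpp
#include <array>
#include <vector>

struct Vec3 { double x, y, z; };
struct Segment { Vec3 a, b; };

// Linear interpolation of the crossing point on edge p-q at height h.
static Vec3 crossAt(const Vec3& p, const Vec3& q, double h) {
    double t = (h - p.z) / (q.z - p.z);
    return { p.x + t * (q.x - p.x), p.y + t * (q.y - p.y), h };
}

// Slice one layer: every triangle whose edges straddle z = h contributes a
// segment. Degenerate cases (a vertex exactly on the plane) are ignored here.
std::vector<Segment> sliceLayer(const std::vector<std::array<Vec3, 3>>& tris,
                                double h) {
    std::vector<Segment> segs;
    for (const auto& t : tris) {
        std::vector<Vec3> pts;
        for (int i = 0; i < 3; ++i) {
            const Vec3& p = t[i];
            const Vec3& q = t[(i + 1) % 3];
            if ((p.z - h) * (q.z - h) < 0.0)   // edge strictly crosses the plane
                pts.push_back(crossAt(p, q, h));
        }
        if (pts.size() == 2) segs.push_back({ pts[0], pts[1] });
    }
    return segs;
}
```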
For the 3-D printing, the processing steps are as follows (as shown in Figure 2I).
Generate the machining path and send the G-code to the robot;
The robot receives the instructions and starts printing. Then, at a certain interval, the host computer transmits the current coordinates to the theoretical model while sending acquisition instructions to the camera;
The captured picture is preprocessed with histogram equalization, Local Binary Pattern (LBP) transformation, and median filtering so that it can be identified and processed easily by the computer (a minimal sketch of this chain follows the list).
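The sketch below shows this preprocessing chain with OpenCV, under stated assumptions: the basic 8-neighbour LBP variant and the 5-pixel median kernel are our choices for illustration, since the paper does not specify them, and OpenCV has no built-in basic-LBP function, so `lbp` is our own helper.

```cpp
#include <opencv2/opencv.hpp>

// 8-neighbour basic LBP: each pixel becomes an 8-bit code comparing it
// with its neighbours (illustrative variant; the paper's is unspecified).
cv::Mat lbp(const cv::Mat& gray) {
    cv::Mat out = cv::Mat::zeros(gray.size(), CV_8UC1);
    for (int r = 1; r < gray.rows - 1; ++r)
        for (int c = 1; c < gray.cols - 1; ++c) {
            uchar center = gray.at<uchar>(r, c), code = 0;
            const int dr[8] = {-1,-1,-1, 0, 0, 1, 1, 1};
            const int dc[8] = {-1, 0, 1,-1, 1,-1, 0, 1};
            for (int k = 0; k < 8; ++k)
                code |= (gray.at<uchar>(r + dr[k], c + dc[k]) >= center) << k;
            out.at<uchar>(r, c) = code;
        }
    return out;
}

cv::Mat preprocess(const cv::Mat& captured) {      // assumes a colour input
    cv::Mat gray, eq, textured, denoised;
    cv::cvtColor(captured, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, eq);                    // histogram equalization
    textured = lbp(eq);                            // local binary pattern
    cv::medianBlur(textured, denoised, 5);         // median filter (kernel assumed)
    return denoised;
}
```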
For the self-feature extraction, the steps are as follows (as shown in Figure 2II).
The printed part is preserved according to the current coordinates, and the unprinted part is ignored;
According to the current camera direction, the visible point cloud is selected;
Under established rules, feature points are extracted and theoretical contours are reconstructed;
By calibrating the experimental platform, the mapping relationship between the part and the captured picture is obtained. The program then projects the extracted feature contours onto the picture under the equal mapping condition.
Then the contours in both the projected picture and the captured picture are detected by image processing, using the contour detection technique based on the laminated structure characteristics of FDM [7]. Finally, defects and the parts' own characteristics are distinguished by comparing the parameters of the contours in the picture captured from the platform with those in the picture projected from the theoretical model: when the similarity between a captured contour and one of the theoretical contours reaches a threshold, that contour is a self-feature rather than a defect, and vice versa. The processing of a part in this system is shown in Figure 3. A sketch of extracting the compared contour parameters is given below.
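A hedged OpenCV sketch of extracting the contour parameters used for comparison (centre coordinates, aspect ratio, pixel area; see Section 3). The 75-pixel contour filter is taken from Section 3; the retrieval mode and `ContourParams` struct are our illustrative choices, not the authors' published code.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

struct ContourParams { cv::Point2d center; double aspect; double area; };

std::vector<ContourParams> contourParams(const cv::Mat& binary) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::vector<ContourParams> out;
    for (const auto& c : contours) {
        double area = cv::contourArea(c);     // interior pixel statistic
        if (area < 75.0) continue;            // contour filtering (75 px, Section 3)
        cv::Rect box = cv::boundingRect(c);   // the outer (yellow) rectangle
        out.push_back({ { box.x + box.width / 2.0, box.y + box.height / 2.0 },
                        double(box.width) / box.height,   // aspect ratio W/H
                        area });
    }
    return out;
}
```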

2.2. Self-Feature Extraction of Model Itself Based on the Location Relation of Point Cloud

The specific process of self-feature extraction in Figure 2II is shown in Figure 4a. After model slicing, the model is divided into layers; each layer is composed of multiple contours, and each contour is composed of points connected end to end. Once the current coordinates are received, the program automatically extracts the printed part below layer N and screens the visible point cloud. Taking layer N as an example, the red lines in the top view of layer N consist of discrete points, of which the thicker ones are visible from the current camera direction. The red squares are the feature points, which must be visible points on the red bold lines.
Assuming the camera views the part under parallel projection, a point that is not obscured by other points or contours is a visible point. Contours containing visible points are therefore called visible contours, such as contours 1 and 2 in Figure 4b. The next step is to find the feature points among the visible points, as follows.
Adjust the coordinate system so that the camera direction is the y-axis;
Sort all visible points by their y values and then traverse the visible points layer by layer;
Judge whether each point is a feature point by the three-point feature judgment method, i.e., by analyzing the visual continuity and angle change of three adjacent points on the same contour (as shown in Figure 4c); a minimal sketch follows.
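A minimal sketch of the angle-change part of the three-point judgment: for three adjacent visible points on the same contour, a large turning angle at the middle point marks it as a feature point. The 30° threshold is an assumed parameter; the paper does not state its value, and the visual-continuity check is omitted here.

```cpp
#include <algorithm>
#include <cmath>

struct Pt { double x, y; };

bool isFeaturePoint(const Pt& prev, const Pt& cur, const Pt& next,
                    double angleThreshDeg = 30.0 /* assumed threshold */) {
    double ux = cur.x - prev.x, uy = cur.y - prev.y;
    double vx = next.x - cur.x, vy = next.y - cur.y;
    double nu = std::hypot(ux, uy), nv = std::hypot(vx, vy);
    if (nu == 0.0 || nv == 0.0) return false;
    double cosTurn = (ux * vx + uy * vy) / (nu * nv);
    cosTurn = std::max(-1.0, std::min(1.0, cosTurn));
    // Change of travel direction at cur, in degrees.
    double turnDeg = std::acos(cosTurn) * 180.0 / std::acos(-1.0);
    return turnDeg > angleThreshDeg;
}
```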
According to the imaging principle of an industrial CCD camera, the coordinates of a projected point are determined by the coordinates of the original point together with the position and parameters of the camera, as shown in Figure 5. Taking a' as an example, the coordinates of a' in the image are determined by the coordinates of a in reality, the optical center O, and the parameters of the camera. Therefore, the mapping relationship can be calculated by monocular camera calibration. The feature points extracted before are then projected onto the image through this mapping relationship, yielding the theoretical projection picture. Finally, the contours of the projected image are found by image processing.
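This projection under the pinhole model is exactly what OpenCV's `cv::projectPoints` provides, so a sketch of the step might look as follows. The intrinsic matrix, distortion coefficients, and pose (`rvec`, `tvec`) are placeholders obtained from a monocular calibration step such as `cv::calibrateCamera`.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Project extracted 3-D feature points into the image plane using the
// calibrated pinhole model (the "equal mapping condition" in the text).
std::vector<cv::Point2f> projectFeatures(const std::vector<cv::Point3f>& featurePts,
                                         const cv::Mat& rvec, const cv::Mat& tvec,
                                         const cv::Mat& cameraMatrix,
                                         const cv::Mat& distCoeffs) {
    std::vector<cv::Point2f> imagePts;
    cv::projectPoints(featurePts, rvec, tvec, cameraMatrix, distCoeffs, imagePts);
    return imagePts;
}
```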

3. Evaluation of Contour Similarity Based on Empirical Formula

This paper introduces the experiment using the model shown in Figure 6a and divides the collected pictures into three parts according to acquisition time. Some pictures are selected arbitrarily from each part as representatives of the online detection results during the experiment. Figure 6b–d show the experimental and theoretical results at randomly selected vertical acquisition nodes (certain height layers), where θ1, θ2, and θ3 are images from different angles at the current height layer during printing. At any time, the theoretical pictures can be generated from the received signals and compared against the experimental pictures. The last picture shows the whole experimental and theoretical images of the model when printing is completed.
As shown in Figure 7, the red-filled regions are the contours, and the yellow borders are the bounding rectangles of the contours. In this study, the contours are mainly characterized by their center coordinates, aspect ratio, and area. The parameters of Figure 7 are listed in Table 1 and Table 2.
Based on the above parameters, an evaluation function is proposed in this paper to evaluate contour similarity:
$$\mathrm{SOCP} = \beta_n \left( \alpha_1 \cdot \frac{l - \sqrt{(x_m - x_n)^2 + (y_m - y_n)^2}}{l} + \alpha_2 \cdot \frac{\min(r_m, r_n)}{\max(r_m, r_n)} + \alpha_3 \cdot \frac{\min(A_m, A_n)}{\max(A_m, A_n)} \right) \tag{1}$$
where SOCP is the similarity of contour parameters; x and y are the center coordinates representing the position of a contour; r is the width-to-height aspect ratio describing the shape of a contour; A is the pixel statistical value giving the area of a contour; m indexes the contours in the picture captured from the platform; n indexes the contours in the picture projected from the theoretical model; l is the error threshold for contour location; α1, α2, α3 are the weight coefficients of the contour parameters; and βn is the weight coefficient of the theoretical contours.
According to Formula (1), different thresholds and weight coefficients can be chosen to adapt to different detection environments and requirements. In this experiment, βn, α1, α2, α3, and l are set to 1, 0.4, 0.3, 0.3, and 80, respectively. Thus the maximum value of SOCP is 1, and the closer the value is to 1, the higher the similarity. The similarities of the contour parameters calculated for Figure 7 are given in Table 3 (the size of the picture is 1626 × 1236 pixels).
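The following sketch transcribes Formula (1) with the experiment's settings. Note that the min/max form of the shape and area terms is our reconstruction of the garbled original; with these settings it reproduces the matched-pair values in Table 3 (e.g., contour a vs. 1 gives 83.9%).

```cpp
#include <algorithm>
#include <cmath>

struct Contour { double x, y;   // centre coordinates (px)
                 double r;      // aspect ratio (width/height)
                 double A; };   // pixel area

double socp(const Contour& m /* captured */, const Contour& n /* theoretical */,
            double betaN = 1.0, double a1 = 0.4, double a2 = 0.3, double a3 = 0.3,
            double l = 80.0 /* location error threshold, px */) {
    double dist      = std::hypot(m.x - n.x, m.y - n.y);
    double posTerm   = (l - dist) / l;                           // location
    double shapeTerm = std::min(m.r, n.r) / std::max(m.r, n.r);  // shape
    double areaTerm  = std::min(m.A, n.A) / std::max(m.A, n.A);  // size
    return betaN * (a1 * posTerm + a2 * shapeTerm + a3 * areaTerm);
}
```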
In our previous work [19], a 5 × 5 pixel morphological operator was adopted and the contour filtering size was set to 75 pixels; the same experimental parameters are kept here. In this experiment, the minimum defect size our system can detect is 8.5 × 8.5 pixels, and the acquired image size is 1626 × 1236 pixels.
As seen from Table 3, the values in bold indicate that the similarity reaches the threshold. In other words, contours 1, 2, and 3 correspond to the self-feature contours a, b, and c, respectively, while contours 4, 5, and 6 are defect contours. The detection results agree well with Figure 7. An extra theoretical contour means that some feature has not been printed out, and an extra experimental contour means that a surface defect may have occurred. As long as the theoretical contours correspond one-to-one with the experimental contours, there are no defects in the picture. A hedged sketch of this matching logic follows.
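Building on the `socp` sketch above, the matching logic just described might look as follows. The 0.7 acceptance threshold and the greedy best-match assignment are our assumptions; the paper only states that a threshold is used.

```cpp
#include <cstdio>
#include <vector>

void classify(const std::vector<Contour>& captured,
              const std::vector<Contour>& theoretical,
              double threshold = 0.7 /* assumed value */) {
    std::vector<bool> matched(theoretical.size(), false);
    for (std::size_t m = 0; m < captured.size(); ++m) {
        double best = -1e9;
        std::size_t bestN = 0;
        for (std::size_t n = 0; n < theoretical.size(); ++n) {
            double s = socp(captured[m], theoretical[n]);
            if (s > best) { best = s; bestN = n; }
        }
        if (best >= threshold)
            matched[bestN] = true;   // matched: a self-feature, not a defect
        else
            std::printf("captured contour %zu: possible surface defect\n", m);
    }
    // Theoretical contours left unmatched: feature not printed out.
    for (std::size_t n = 0; n < theoretical.size(); ++n)
        if (!matched[n])
            std::printf("theoretical contour %zu unmatched: feature not printed\n", n);
}
```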
Self-feature extraction based on the positional relations of the point cloud can distinguish the model's own features from defects in the manufacturing process. The similarity formula, which takes the location, shape, and size of contours into consideration, effectively evaluates the similarity between real contours and theoretical feature contours, reducing the missed detection rate. The method proposed in this work can detect defects generated in the FDM process while distinguishing the objects' self-features from real defects, and it has great application potential for real-time quality inspection in FDM, especially for models with large gradients.

4. Conclusions and Future Work

Defects in the 3-D printing process may affect surface quality and accuracy and cause a waste of filament, power, and time. This paper develops a visual detection system for surface defects based on self-feature comparison in robot 3-D printing, which can distinguish defects from the parts' own characteristics. Three main achievements have been made in this work.
(1) A visual detection system for surface defects based on self-feature comparison has been designed, in which both the path planning for 3-D printing and the self-feature extraction derive from the discrete point cloud obtained by model slicing.
(2) A self-feature extraction method is introduced. The visible points of the printed part are selected according to the current coordinates and camera direction; the feature points, judged by the three-point feature judgment method, are then projected onto the theoretical pictures to obtain the self-feature pictures.
(3) An empirical-formula-based evaluation of contour similarity is presented, using the contour parameters detected by image processing.
The method proposed in this work can detect defects generated in the FDM process and distinguish the objects' self-features from real defects, reducing the missed detection rate. Note that, in theory, the minimum defect size the method can detect is 8.5 × 8.5 pixels; the corresponding physical size must be converted using the camera parameters and detection distance.
In future work, the processing time of the proposed algorithm should be further shortened to satisfy the need for real-time online detection. To improve the robustness of the detection system, the materials of the printed objects, the experimental environment, and the equipment should also be taken into consideration.

Author Contributions

Investigation, W.D. and W.S.; writing—original draft preparation, W.D.; visualization, W.D. and W.S.; project administration, H.S.; J.F. and Y.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National Natural Science Foundation of China (No. 51975518), the Science Fund for Creative Research Groups of the National Natural Science Foundation of China (No. 51821093), the Key Research and Development Plan of Zhejiang Province (No. 2018C01073), and the Fundamental Research Funds for the Central Universities (No. 2019QNA4004). The funding sponsors had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Acknowledgments

The authors thank Senxin Liu of our research group for partial technical support.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Galantucci, L.M.; Lavecchia, F.; Percoco, G. Study of compression properties of topologically optimized FDM made structured parts. CIRP Ann. Manuf. Technol. 2008, 57, 243–246.
2. Dudek, P. FDM 3D printing technology in manufacturing composite elements. Arch. Metall. Mater. 2013, 58, 1415–1418.
3. Novakova-Marcincinova, L.; Novak-Marcincin, J.; Barna, J.; Torok, J. Special materials used in FDM rapid prototyping technology application. In Proceedings of the IEEE International Conference on Intelligent Engineering Systems, Lisbon, Portugal, 13–15 June 2012; pp. 73–76.
4. Chua, C.K.; Leong, K.F. 3D Printing and Additive Manufacturing: Principles and Applications (with Companion Media Pack), 4th ed.; World Scientific Publishing Company: Singapore, 2014.
5. Bochmann, L.; Bayley, C.; Hel, M.; Transchel, R.; Wegener, K.; Dornfeld, D. Understanding error generation in fused deposition modeling. Surf. Topogr. 2015, 3, 014002.
6. Anitha, R.; Arunachalam, S.; Radhakrishnan, P. Critical parameters influencing the quality of prototypes in fused deposition modelling. J. Mater. Process. Technol. 2001, 118, 385–388.
7. Lin, W.; Shen, H.; Fu, J.; Wu, S. Online quality monitoring in material extrusion additive manufacturing processes based on laser scanning technology. Precis. Eng. 2019, 60, 76–84.
8. Liu, Z.; Wu, S.; Wu, Q.; Quan, C.; Ren, Y. A Novel Stereo Vision Measurement System Using Both Line Scan Camera and Frame Camera. IEEE Trans. Instrum. Meas. 2019, 68, 3563–3575.
9. Ren, M.; Wang, X.; Xiao, G.; Chen, M.; Fu, L. Fast Defect Inspection Based on Data-Driven Photometric Stereo. IEEE Trans. Instrum. Meas. 2019, 68, 1148–1156.
10. Chang, F.; Liu, M.; Dong, M.; Duan, Y. A mobile vision inspection system for tiny defect detection of smooth car-body surface based on deep ensemble learning. Meas. Sci. Technol. 2019, 30, 125905.
11. Tabernik, D.; Šela, S.; Skvarč, J.; Skočaj, D. Segmentation-based deep-learning approach for surface-defect detection. J. Intell. Manuf. 2019.
12. Villalba-Diez, J.; Schmidt, D.; Gevers, R.; Ordieres-Meré, J.; Buchwitz, M.; Wellbrock, W. Deep Learning for Industrial Computer Vision Quality Control in the Printing Industry 4.0. Sensors 2019, 19, 3987.
13. Zhang, D.; Song, K.; Xu, J.; He, Y.; Yan, Y. Unified detection method of aluminium profile surface defects: Common and rare defect categories. Opt. Lasers Eng. 2020, 126, 105936.
14. Du, W.; Shen, H.; Fu, J.; Zhang, G.; He, Q. Approaches for improvement of the X-ray image defect detection of automobile casting aluminum parts based on deep learning. NDT E Int. 2019, 107, 102144.
15. Zhou, Q.; Chen, R.; Huang, B.; Liu, C.; Yu, J.; Yu, X. An Automatic Surface Defect Inspection System for Automobiles Using Machine Vision Methods. Sensors 2019, 19, 644.
16. Wang, Z.; Zhu, D. An accurate detection method for surface defects of complex components based on support vector machine and spreading algorithm. Measurement 2019, 147, 106886.
17. Abul'khanov, S.R.; Kazanskiy, N.L. Information Pattern in Imaging of a Rough Surface. IOP Conf. Ser. Mater. Sci. Eng. 2018, 302, 012068.
18. Chervyakov, N.I.; Lyakhov, P.A.; Orazaev, A.R. Two methods of adaptive median filtering of impulse noise in images. Comput. Opt. 2018, 42, 667–678.
19. Shen, H.; Sun, W.; Fu, J. Multi-view online vision detection based on robot fused deposit modeling 3D printing technology. Rapid Prototyp. J. 2018, 25, 343–355.
Figure 1. Hardware system structure of the robot fused deposition modeling (FDM) system.
Figure 2. The flowchart of the visual detection algorithm of surface defects based on self-feature comparison in 3-D printing: I 3D printing processing steps; II Contour detection; III Identify defects and own characteristics.
Figure 3. The processing of parts in the system: I Contour detection from the real model; II Contour detection from the theoretical model.
Figure 4. Self-feature extraction: (a) The specific process of self-feature extraction; (b) Extraction rules of visual points and feature points in current camera direction; (c) Three-point judgment of feature point.
Figure 5. Principle of industrial camera imaging.
Figure 6. Some online test pictures during the experiment: (a) Theoretical model; (b–d) Experimental and theoretical results at randomly selected vertical acquisition nodes. θ1, θ2, and θ3 are images from different angles at the current height layer during printing.
Figure 7. The contours of the projected image (a) and the image captured (b) in the experiment.
Table 1. Parameters of contours in Figure 7a (unit: pixel).

| Rectangle/Contour | Center Coordinates | Width | Height | Aspect Ratio (W/H) | Contour Area (interior red pixels) |
|---|---|---|---|---|---|
| No. 1 | (543, 189) | 253 | 139 | 1.820 | 23,496 |
| No. 2 | (1084, 190) | 252 | 127 | 1.984 | 22,374 |
| No. 3 | (814, 604) | 512 | 251 | 2.040 | 41,760 |
Table 2. Parameters of contours in Figure 7b (unit: pixel).

| Rectangle/Contour | Center Coordinates | Width | Height | Aspect Ratio (W/H) | Contour Area (interior red pixels) |
|---|---|---|---|---|---|
| No. 1 | (514, 180) | 247 | 135 | 1.830 | 24,109 |
| No. 2 | (1067, 189) | 249 | 143 | 1.741 | 23,177 |
| No. 3 | (786, 619) | 522 | 266 | 1.962 | 39,168 |
| No. 4 | (758, 404) | 302 | 139 | 2.173 | 25,243 |
| No. 5 | (1162, 559) | 106 | 147 | 0.721 | 10,068 |
| No. 6 | (798, 882) | 221 | 105 | 2.105 | 15,249 |
Table 3. The similarity of the contour parameters in Figure 7 (bold values reach the threshold).

| SOCP | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| a | **83.9%** | −163.7% | −151.2% | −74.0% | −290.7% | −283.8% |
| b | −189.5% | **86.8%** | −164.9% | −114.1% | −120.5% | −285.7% |
| c | −175.5% | −160.8% | **71.1%** | −28.4% | −96.7% | −59.2% |
