Visual Detection of Surface Defects Based on Self-Feature Comparison in Robot 3-D Printing

Abstract: Fused deposition modeling (FDM) additive manufacturing technology has been widely applied in recent years. However, many defects may arise during printing that degrade surface quality and accuracy, or even cause parts to collapse. In existing defect detection technology, features of the parts themselves may be misjudged as defects. This paper presents a solution to the problem of distinguishing defects from the parts' own characteristics in robot 3-D printing. A self-feature extraction method for shape defect detection of 3-D printed products is introduced. The discrete point cloud obtained by model slicing is used both for path planning in 3-D printing and for self-feature extraction: it generates the G-code and controls the shooting direction of the camera. Once the current coordinates are received, self-feature extraction begins; its key steps are keeping the visible point cloud of the printed part and projecting the feature points onto the picture under the equal mapping condition. After image processing, the contours of both the projected picture and the captured picture are detected. Finally, defects are identified by evaluating contour similarity with an empirical formula. This work helps detect defects online, improves detection accuracy, and reduces the false detection rate without being affected by the parts' own characteristics.


Introduction
Fused deposition modeling (FDM) is a widely used additive manufacturing (commonly known as 3-D printing) technology which fabricates parts by adding material layer by layer [1][2][3][4]. Surface quality and accuracy may be affected by defects that are unavoidably generated in the process; even worse, defects may cause the surface to collapse [5,6]. In 3-D printing, early defect detection may allow the printer to take corrective measures, reducing the waste of printing material.
Defect detection is an active research field. Existing defect detection methods can be divided into two groups: machine vision-based monitoring systems and laser scanning-based monitoring systems. The former mainly uses cameras to take pictures, while the latter can measure the height of the object, which cannot be achieved by a monocular vision-based system. Lin et al. [7] adopted laser scanning technology to detect overfill and underfill defects on the upper surface of deposited parts in the additive manufacturing process by comparing the existing point cloud with the pre-sliced stereolithography (STL) model. Liu et al. [8] proposed a stereo vision measurement system that simultaneously acquires the surface grayscale image and depth image without extra data registration or calibration, improving the accuracy of surface defect detection. Ren et al. [9] proposed a data-driven photometric stereo method, establishing a Gaussian process (GP) model to represent the nonlinear reflectance behavior of various materials based on measured reflectance datasets.
Defect detection algorithms can be classified into conventional methods and methods based on deep learning. The main difference between the two is that deep learning can extract features automatically with convolutional neural networks, while conventional machine learning methods need manually designed feature engineering. Many researchers have developed deep learning methods for their particular application fields. Chang et al. [10] designed an image acquisition module to capture surface images under bright-field illumination; a deep learning model named TinyDefectNet was proposed to detect the locations and classes of defects. Tabernik et al. [11] proposed a segmentation-based deep learning architecture to detect and segment surface cracks. Villalba-Diez et al. [12] developed a deep neural network to classify optical defects, showing great application potential in Industry 4.0. Methods based on deep learning usually need a large amount of data. Zhang et al. [13] proposed a deep convolutional neural network named UCR to detect both common and rare defects on the surface of aluminum profiles. Du et al. [14] applied a Feature Pyramid Network (FPN), improving model performance through data augmentation and algorithmic refinements in detecting X-ray image defects of automobile die-cast aluminum parts.
To improve the performance of a model based on deep learning, as much labeled data as possible must be provided, which costs time and money, while methods based on machine learning can achieve better performance on small datasets. Zhou et al. [15] proposed an automatic inspection system with five plane-array charge-coupled device (CCD) cameras and four LED light sources in a closed environment; a support vector machine was adopted to classify defects based on features extracted from candidate defect regions. Wang et al. [16] developed a three-step computational framework to detect the position, shape, number, and size of complex component surface defects. Abul'khanov et al. [17] created visual and numerical tools to analyze a rough surface, characterizing it by building an information pattern from images of the micro-roughnesses on the controlled surface. Chervyakov et al. [18] proposed two modified adaptive median filters for impulse noise in images; their experiments showed potential applications in processing satellite and medical imagery, geophysical data, and other areas of digital image processing.
In existing works, most researchers focus on detecting defects on the upper surface in FDM. Outer surface detection has some advantages in FDM because of the invariance of its layers: when the gaps between layers are irregular, defects are likely to be present. In our previous work, a multi-view, all-round vision detection system for the outer surface was presented based on the invariance of the gaps between layers, which can detect defects on the outer surface online and monitor the 3-D printing process of an object [19]. However, it is only applicable to parts whose outer surface changes gently. For a steeply changing outer surface in the field of view, it can hardly distinguish defects from the part's own features; for example, the eyes on a printed face may be identified as defects.
This paper presents a self-feature extraction method for shape defect detection of robot 3-D printing products, which distinguishes defects from the parts' own characteristics by comparing the theoretical projection contours with the experimental contours of the products.
The rest of this paper is organized as follows. Section 2.1 introduces the whole process of the method. Section 2.2 analyzes the implementation details of the feature extraction of the model itself. Section 3 proposes an evaluation function to judge the similarity between the self-feature contours and the defect contours.

The Whole Process of Identifying Defects and Their Own Characteristics
In this research, the hardware platform is shown in Figure 1 and the program is implemented in C++ with OpenCV. The CCD camera is fixed to the nozzle and always kept perpendicular to the outer surface during the printing process. The main software used in the development of the image detection system is Visual Studio 2015. The robot 3-D printer in this research uses the Japanese Mitsubishi RV-6SD 6-DOF robot as the basic experimental platform, combined with a German Basler acA1600-20gm camera and a computer.
The algorithm for visual detection of surface defects based on self-feature comparison is shown in Figure 2. First, the program slices the 3-D printing model into a discrete point cloud layer by layer; this point cloud is the common source of both the self-features and the machining path in 3-D printing, but the subsequent processes differ. For 3-D printing, the processing steps are as follows (as shown in Figure 2I).
① Generate the machining path and send G-code to the robot. ② The robot receives the instructions and starts printing; then, at a certain interval, the host computer transmits the current coordinates to the theoretical model while sending acquisition instructions to the camera. ③ The captured picture is preprocessed by histogram equalization, Local Binary Pattern (LBP), and median filtering so that it can be identified and processed easily by the computer.
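The preprocessing in step ③ can be sketched as follows. This is a minimal NumPy illustration of the named operations, not the authors' C++/OpenCV implementation; the function names and the 3 × 3 neighborhood choices are our assumptions.

```python
import numpy as np

def hist_equalize(img):
    # Global histogram equalization for an 8-bit grayscale image.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255.0)
    return lut.astype(np.uint8)[img]

def lbp_code(patch):
    # 8-bit Local Binary Pattern code of a 3x3 patch: each neighbor
    # contributes one bit when it is >= the center (clockwise order assumed).
    c = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    return sum((1 << i) for i, (r, col) in enumerate(order) if patch[r, col] >= c)

def median_filter3(img):
    # 3x3 median filter; border pixels are handled by replicate padding.
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    windows = np.stack([padded[r:r + h, c:c + w] for r in range(3) for c in range(3)])
    return np.median(windows, axis=0).astype(img.dtype)
```

The median filter suppresses isolated salt-and-pepper pixels, which is why it is applied before contour detection.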
For the self-feature extraction, the steps are as follows (as shown in Figure 2II).
① The printed part is preserved according to the current coordinates, and the unprinted part is ignored. ② According to the current camera direction, the visible point cloud is selected. ③ Under established rules, feature points are extracted and theoretical contours are reconstructed.
④ By calibrating the experimental platform, the mapping relationship between the part and the captured picture is obtained; the program then projects the feature contours extracted above onto the picture under the equal mapping condition. The contours of both the projected picture and the captured picture are detected by image processing, namely the contour detection technology based on the laminate structure characteristics of FDM [7]. Finally, defects and the parts' own characteristics are identified by comparing the parameters of the contours in the picture captured from the platform with those in the picture projected from the theoretical model. When the similarity of parameters between a captured contour and one of the projected contours reaches a threshold, that contour is a self-feature, not a defect, and vice versa. For an example part, the processing in this system is shown in Figure 3.

Self-Feature Extraction of Model Itself Based on the Location Relation of Point Cloud
The specific process of self-feature extraction in Figure 2II is shown in Figure 4a. After slicing, the model is divided into layers; each layer is composed of multiple contours, and each contour is composed of points connected end to end. Once the current coordinates are received, the program automatically extracts the printed part below layer N and screens the visible point cloud. Taking layer N as an example, the red lines in the top view of layer N consist of discrete points, of which the thicker ones are visible from the current camera direction. The red squares are feature points, which must be visible points on the red bold lines.
Suppose the camera rays are parallel; a point that is not obscured by other points or contours is a visible point. Contours containing visible points are therefore called visible contours, such as contours 1 and 2 in Figure 4b. The next step is to find the feature points among the visible points; the specific steps are as follows.
① Adjust the coordinate system to take the camera direction as the y-axis. ② Sort all visible points by their y values and then traverse each visible point layer by layer. ③ Judge whether a point is a feature point by the three-point feature judgment method, i.e., the analysis of visual continuity and angle change of three adjacent points in the same contour (as shown in Figure 4c).
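The steps above can be sketched in Python. This is a simplified illustration under parallel-ray assumptions, not the authors' implementation: the x-binning occlusion test, the bin width, and the turning-angle threshold are our assumptions.

```python
import math

def visible_points(points, bin_width=1.0):
    # Parallel-ray visibility sketch: with the camera looking along +y,
    # only the point with the smallest y inside each x-bin is unoccluded.
    nearest = {}
    for i, (x, y) in enumerate(points):
        b = math.floor(x / bin_width)
        if b not in nearest or y < points[nearest[b]][1]:
            nearest[b] = i
    return sorted(nearest.values())

def is_feature_point(p_prev, p, p_next, angle_thresh_deg=30.0):
    # Three-point judgment: p counts as a feature point when the polyline
    # direction turns by more than angle_thresh_deg at p.
    a1 = math.atan2(p[1] - p_prev[1], p[0] - p_prev[0])
    a2 = math.atan2(p_next[1] - p[1], p_next[0] - p[0])
    turn = abs(math.degrees(a2 - a1)) % 360.0
    return min(turn, 360.0 - turn) > angle_thresh_deg
```

Collinear triples are rejected, while sharp direction changes (corners of the sliced contour) are kept as feature points.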
According to the principle of industrial CCD camera imaging, the coordinates of a projection point are determined by the coordinates of the original point together with the position and parameters of the camera, as shown in Figure 5. Taking a' as an example, the coordinates of a' in the image are determined by the coordinates of a in reality, the camera center O, and the camera parameters. Therefore, the mapping relationship can be calculated by monocular camera calibration.
Then the feature points extracted above are projected onto the image through this mapping relationship, yielding the theoretical projection picture. Finally, the contours of the projected image are found by image processing.
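The mapping can be illustrated with the standard pinhole model. This is a distortion-free sketch with hypothetical intrinsic values; in practice K, R, and t come from the monocular calibration described above.

```python
import numpy as np

def project_points(pts3d, K, R, t):
    # Pinhole model: u ~ K (R X + t).
    # pts3d: (N, 3) world points; K: 3x3 intrinsics; R, t: extrinsics.
    cam = R @ pts3d.T + t.reshape(3, 1)   # world frame -> camera frame
    uvw = K @ cam                         # camera frame -> homogeneous pixels
    return (uvw[:2] / uvw[2]).T           # perspective division -> (N, 2) pixels
```

For example, with focal length 100 px and principal point (320, 240), a point 5 units straight ahead of the camera projects to the principal point.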

Evaluation of Contour Similarity Based on Empirical Formula
This paper verifies the method with the model shown in Figure 6a, dividing the collected pictures into three parts according to acquisition time. Some pictures are selected arbitrarily from each part as representatives to reflect the online detection results during the experiment. Figure 6b-d show the experimental and theoretical results of randomly selected vertical acquisition nodes (layers at certain heights), together with images taken from different angles during printing at the current height layer. At any time, theoretical pictures can be generated from the collected signals and compared with the experimental pictures. The remaining pictures show the complete experimental and theoretical images of the model when printing is finished.
As shown in Figure 7, the red-filled regions are the contours, and the yellow borders are the bounding rectangles of the contours. In this study, contours are mainly described by their central coordinates, aspect ratio, and area; the parameters of Figure 7 are listed in Tables 1 and 2. Based on these parameters, an evaluation function is proposed to evaluate contour similarity.
where SOCP denotes the similarity of contour parameters; (x, y) are the center coordinates that represent the position of a contour; r is the aspect ratio of width to height that describes the shape of a contour; A is the pixel statistical value that measures the area of a contour; m is the number of contours in the picture captured from the platform; n is the number of contours in the picture projected from the theoretical model; l is the error threshold for contour location; ω₁, ω₂, and ω₃ are the weight coefficients of the contour parameters; and μ is the weight coefficient of the theoretical contours.
According to formula (1), different thresholds and weight coefficients can be chosen to adapt to different detection environments and requirements. In this experiment, μ, ω₁, ω₂, ω₃, and l are set to 1, 0.4, 0.3, 0.3, and 80, respectively. Thus, the maximum value of SOCP is 1, and the closer the value is to 1, the higher the similarity. The contour similarities calculated for Figure 7 are given in Table 3 (the size of the picture is 1626 × 1236 pixels). In our previous work [19], a 5 × 5 pixel morphological operator was adopted and the contour filtering size was set to 75 pixels; the same experimental parameters are kept here. Under these settings, the minimum defect size the system can detect is 8.5 × 8.5 pixels at an acquired image size of 1626 × 1236 pixels. In Table 3, red font means that the similarity reaches the threshold; in other words, contours 1, 2, and 3 are the self-feature contours a, b, and c, respectively, while contours 4, 5, and 6 are defect contours. The detection results agree well with Figure 7. When there is an extra theoretical contour, some features have not been printed out; when an experimental contour is redundant, a surface defect may have occurred. As long as the theoretical contours correspond one-to-one with the experimental contours, there are no defects in the picture.
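A sketch of the similarity evaluation and the resulting defect/self-feature decision is given below. Since the exact empirical formula is not reproduced here, the weighted form (a hard location gate with threshold l = 80 plus ratio terms weighted 0.4/0.3/0.3), the function names, and the matching threshold of 0.8 are our assumptions, chosen only to be consistent with the parameters described in the text.

```python
import math

def socp(theory, real, w=(0.4, 0.3, 0.3), l=80.0):
    # Each contour is (x, y, r, A): center, aspect ratio, pixel area.
    (x1, y1, r1, A1), (x2, y2, r2, A2) = theory, real
    s_pos = 1.0 if math.hypot(x1 - x2, y1 - y2) <= l else 0.0  # location gate
    s_shape = min(r1, r2) / max(r1, r2)                        # aspect-ratio term
    s_area = min(A1, A2) / max(A1, A2)                         # area term
    return w[0] * s_pos + w[1] * s_shape + w[2] * s_area       # SOCP in [0, 1]

def classify(theory_contours, real_contours, thresh=0.8):
    # A captured contour matching some theoretical contour above the
    # threshold is a self-feature; otherwise it is reported as a defect.
    defects = []
    for rc in real_contours:
        if max((socp(tc, rc) for tc in theory_contours), default=0.0) < thresh:
            defects.append(rc)
    return defects
```

Identical contours score exactly 1, and unmatched captured contours are returned as candidate defects, mirroring the one-to-one correspondence rule above.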
Self-feature extraction based on the location relation of the point cloud can distinguish the model's own features from defects in the manufacturing process. The similarity formula, which takes the location, shape, and size of the contours into consideration, can effectively evaluate the similarity between real contours and theoretical feature contours, reducing the missed detection rate. The proposed method can detect defects generated in the FDM process and distinguish the self-features of objects from real defects. It has great application potential for real-time quality inspection in FDM, especially for models with steep gradients.

Conclusions and Future Work
The defects in the 3-D printing process may affect surface quality and accuracy and cause a certain waste of filament, power, and time. This paper develops a visual detection system of surface defects based on self-feature comparison in robot 3-D printing, which can distinguish defects from the parts' own characteristics. Three main achievements have been made in this work.
(1) A visual detection system of surface defects based on self-feature comparison has been designed, in which both the path planning in 3-D printing and the self-feature extraction are derived from the discrete point cloud obtained by model slicing.
(2) A self-feature extraction method is introduced. The visible points of the printed part are selected according to the current coordinates and camera direction; the feature points, judged by the three-point feature judgment method, are then projected onto the theoretical pictures to obtain the self-feature pictures.
(3) An evaluation of contour similarity based on an empirical formula is presented, using the contour parameters detected by image processing.
The method proposed in this work can detect defects generated in the FDM process and distinguish the self-features of objects from real defects, reducing the missed detection rate. Note that, in theory, the minimum defect size the method can detect is 8.5 × 8.5 pixels; the corresponding physical size must be converted using the camera parameters and the detection distance.
In future work, the processing time of the proposed algorithm should be further shortened to satisfy the need for real-time online detection. To improve the robustness of the detection system, the materials of the printed objects, the experimental environment, and the equipment should also be taken into consideration.