Rotation Estimation and Segmentation for Patterned Image Vision Inspection

Abstract: Pattern images can be segmented in template units for efficient fabric vision inspection; however, the segmentation criteria critically affect segmentation and defect detection performance. To obtain undistorted criteria for rotated images, the absolute rotation angle must first be estimated. Given that conventional rotation estimation methods do not satisfy both rotation-error and computation-time requirements, patterned fabric defects are still detected by manual visual inspection. To solve these problems, this study proposes the application of segmentation reference point candidates (SRPC), generated based on a Euclidean distance map (EDM). The SRPC is used not only to extract criterion points but also to estimate the rotation angle. The rotation angle is predicted using the orientation vectors of the SRPC instead of all pixels to reduce estimation time. SRPC-based image segmentation increases robustness against the rotation angle and defects. The separation distance value for distinguishing SRPC areas is calculated automatically. The performance of the proposed method is similar to that of state-of-the-art rotation estimation methods, with an inspection time suitable for actual operations on patterned fabric. The similarity between the segmented images is better than that of conventional methods. The proposed method extends the target of vision inspection from plain fabric to checked or striped patterns.


Introduction
Most cameras used in machine vision support high-resolution image acquisition to detect detailed defects [1]. The acquired images should be cropped before use because deep-learning neural networks have limited input data sizes, both to keep memory usage low considering the low specifications of PCs in installed environments and to prevent image loss [2]. Figure 1 shows examples of images acquired using a machine vision camera.

The image is cropped using a matching technique such as correlation analysis based on a predefined template [3] if a specific area of the image is targeted, as with semiconductor chip inspection. When the template is not specified, such as for concrete cracks, data can be generated by dividing the image into sizes that are easy to learn. An elaborate segmentation criterion is required to ensure uniformity in the shapes of cropped patterns obtained from their images. Thus, for an image that is rotated or has a defect, many problems can occur when finding the split reference point.
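The correlation-based cropping mentioned above can be sketched with normalized cross-correlation. This is a minimal illustration under assumed toy data, not the specific procedure of [3]:

```python
import numpy as np

def match_template(img, tmpl):
    """Normalized cross-correlation: slide the template over the image and
    return the top-left coordinate of the window with the highest score."""
    H, W = img.shape
    h, w = tmpl.shape
    tz = tmpl - tmpl.mean()
    best, best_score = (0, 0), -np.inf
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            win = img[i:i + h, j:j + w]
            wz = win - win.mean()
            denom = np.sqrt((wz ** 2).sum() * (tz ** 2).sum())
            score = (wz * tz).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best, best_score = (i, j), score
    return best

# The template is the 5x5 patch cut from (6, 9); NCC recovers that position.
rng = np.random.default_rng(1)
img = rng.random((20, 20))
tmpl = img[6:11, 9:14].copy()
pos = match_template(img, tmpl)
```

In practice the cropped window at the matched position becomes the unit image fed to the inspection network.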
It is also possible to perform vision inspection by deep learning through data augmentation without cutting unit patterns based on a certain criterion. However, if images are augmented at various angles for various types of fabrics, the learning time increases proportionally. In addition, this study assumes that learning for fabric vision inspection is processed only with normal data in actual environments, as described in [4,5], where it is difficult to obtain abnormal samples for each new patterned fabric. For this reason, a defective area may be judged as normal, because the various unit shapes generated by the sliding window during augmentation are learned as normal even when there is an error with a different pattern area, as shown in Figure 2. Moreover, it is difficult to determine the template area of learning and inspection images only by data augmentation, so the segmentation reference point candidate (SRPC)-based scheme is proposed to obtain similar areas. If a template is determined, learning is performed on segmented images that have the same size and similar shapes. Otherwise, learning takes a long time because more images are cropped in different sizes and shapes, all of which must be used in the learning process. Extracting the SRPC not only removes the need for image augmentation, but also makes it possible to define templates. Therefore, it has advantages in both time and accuracy, as the cropped shapes of the learning and inspection images match only when the segmentation process is based on a criterion.
This study analyzes problems associated with segmenting pattern images to generate deep-learning input data and presents a solution for fabric defect detection (FDD) in many pattern images. Textile fabrics, classified by form and shape into lattice/striped and solid-color fabrics, are mostly used in apparel production.
Therefore, as mentioned in [6], most fabric defect detection research focuses on plain solid-color fabric. For a single-color fabric image, we can easily set the size and crop it like concrete images; however, fabrics with lattice and line patterns require split reference points for cropping. Methods to determine optimal vertical and horizontal split lines have been proposed, but they only support unrotated fabric images, as shown in Figure 3 [7]. The same interval distance between split lines can also be a problem when a fabric image has an abnormal pattern area, as in Figure 2. Therefore, a split reference point is required for rotated images, and a suitable rotation angle must be estimated in advance to extract a reference point without errors, thereby facilitating the collection of similar divided images. Algorithms that can predict rotation angles based on line detection techniques have already been proposed [8,9].
However, long computation times or large rotation-angle errors, depending on the detection resolution, are major drawbacks. Predicting the rotation angle cannot perfectly guarantee non-rotation; therefore, a split reference point still needs to be extracted. Owing to these problems, applying deep-learning techniques to the vision inspection of rotated pattern fabric is difficult.
This study proposes a method that applies extracted reference points, rather than split-line or line detection methods, to both rotation-angle estimation and image segmentation to ensure high inspection and segmentation performance for various pattern shapes within an appropriate processing time. As mentioned in [10], images can generally be segmented by comparing multiple pixels with a template predefined by the user. Because the proposed method uses only reference points rather than all pixels, the segmentation process is fast. Despite this low time consumption, the proposed system achieves high accuracy, as described in the simulation section below. Furthermore, we propose a method for increasing performance stability and operational convenience using automatically calculated variables such as thresholds. By applying the proposed scheme, deep learning-based FDD can be extended from existing solid-color images to pattern images.
The rest of this paper is organized as follows: Section 2 describes the Radon transformation (RT) and Hough transformation (HT) techniques used in rotation-angle estimation; furthermore, template-based correction (TC) [7] for segmenting images is also discussed. Section 3 introduces the overall structure of the proposed algorithm. Section 4 provides details of the proposed algorithm: the SRPC is extracted, the rotation angle is estimated using the SRPC, the reference points are confirmed, and the image is segmented. For each step, SRPC extraction using Euclidean distance mapping (EDM), a method for estimating the rotation angle quickly and accurately, and a method for segmenting images by merging or generating reference points are introduced. Section 5 presents the performance tests evaluating the error of the rotation-angle estimation algorithm and how well the images are segmented. Finally, Section 6 comprehensively analyzes the significance, applicability, and limitations of this study, as well as the additional research required in the future.

Rotation-Angle Estimation
This study proposes a method for applying the extracted reference points not using split line or line detection methods to both rotation-angle estimation and image segmentation to ensure high inspection and segmentation performance for various pattern shapes within an appropriate performance time. As mentioned in [10], in general, images can be segmented by comparing multiple pixels with a template predefined by the user. Because this method uses only reference points not all pixels, it takes a short time for segmentation process. Within this low time consumptions, the proposed system achieves high accuracy as described in the below simulation section. Furthermore, we propose a method for increasing the performance stability and operational convenience using automatically calculated variables such as a threshold. By applying the proposed scheme, deep learning-based FDD can be extended from an existing solid color to pattern images.
The rest of this paper is organized as follows: Section 2 describes the radon transformation (RT) and Hough transformation (HT) techniques used in rotation-angle estimation; furthermore, template-based correction (TC) [7] for segmenting image is also discussed. Section 3 introduces the overall structure of the proposed algorithm. Section 4 provides details of the proposed algorithm. The SRPC is extracted and the rotation angle is estimated using SRPC. Then, the reference point is confirmed, and the image is segmented. For each step, SRPC using Euclidean distance mapping (EDM)-a method for estimating the rotation angle quickly and accurately-and another method for segmenting images by merging or generating reference points are introduced. Section 5 presents the performance test for evaluating the error performance of the rotation-angle estimation algorithm and identifies how well the image was segmented. Finally, in Section 6, the significance, applicability, and limitations of this study, and the additional research required in the future are analyzed comprehensively.

Rotation-angle estimation methods can be divided into two types: those that obtain an absolute angle and those that obtain a relative angle using a reference image. Fabric defect detection needs absolute angle estimation because the inspection requires accurate segmentation over the entire image area. As mentioned in [11], the similarity-measure direct pixel matching (DPM) and principal axes matching (PAM) methods find the angle by measuring similarity with a registered reference. Reference images are also needed for histogram of oriented gradients (HOG)-based studies such as [12]. This study aims to develop a rotation-angle estimation method applicable to actual operations by considering both accuracy and time appropriately. HOG- and Radon-based methods are relatively accurate but estimate angles by evaluating all possible candidate angles. Hough is fast, but its accuracy is unstable depending on the case. This section analyzes the Radon and Hough transformations, which can estimate the absolute rotation angle and are widely used as representative basic algorithms because of their high accuracy and low execution time.
Radon Transformation

Figure 4 shows that the RT value R(r, θ) can be determined by projecting the line L(r, θ) at a distance r from the origin when the target image I(x, y) is viewed at angle θ and then calculating the line integral. Here, the line L(r, θ) is projected in the direction obtained by adding π/2 to the line that forms the angle θ with the x-axis. According to [13], the RT can be expressed as

R(r, θ) = ∬ I(x, y) δ(x cos θ + y sin θ − r) dx dy. (1)

This process is performed while rotating the projection direction within the angle range. Consequently, R(r, θ) accumulates when L(r, θ) passes through I(x, y). If the variance of the RT obtained along the line at a specific θm is the largest, then the image has a line component θL perpendicular to that angle [8]:

θL = θm ± π/2. (2)

Figure 5 shows the RT result for the lattice pattern image. Since θm is 84°, the image needs to be rotated by only 6° in the counterclockwise direction. Because rotation-angle detection by RT extracts the global linear component, the denser the projection angle spacing, the more accurate the obtained value. However, a long computation time is a major constraint when using machine vision for inspection, since processing speed is a critical requirement. For example, to achieve a resolution ten times more accurate, we must endure a computation time approximately ten times longer.
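The variance criterion above can be illustrated with a brute-force angle search. The sketch below approximates the Radon projection by rotating the image and summing its columns; it is a toy illustration of the idea, not the exact implementation of [8]:

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_rotation_radon(image, angles=np.arange(0.0, 180.0, 1.0)):
    """For each candidate angle, rotate the image and project it onto the
    x-axis (column sums), approximating R(r, theta); return the angle
    whose projection has the largest variance (theta_m)."""
    best_angle, best_var = 0.0, -1.0
    for theta in angles:
        rotated = rotate(image, theta, reshape=False, order=1)
        projection = rotated.sum(axis=0)   # line integrals for this angle
        var = projection.var()
        if var > best_var:
            best_angle, best_var = float(theta), var
    return best_angle

# Toy lattice: vertical stripes, then rotated clockwise by 6 degrees,
# so the variance peak should appear near theta = 6.
img = np.zeros((128, 128))
img[:, ::16] = 1.0
img = rotate(img, -6, reshape=False, order=1)
theta_m = estimate_rotation_radon(img)
```

The loop makes the cost structure explicit: halving the angle step doubles the number of rotations, which is exactly the accuracy/time trade-off discussed above.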

Hough Transformation

HT [9] is an algorithm that detects a straight line by converting a linear equation to a parameter space in a two-dimensional orthogonal coordinate system. As shown in Figure 6, for the straight line ρ = x cos θ + y sin θ, a point in the x-y plane is expressed as a curve in the θ-ρ plane. If two points 'a' and 'b' on a straight line in the x-y plane are converted to the θ-ρ plane, their curves intersect at the same point (θ0, ρ0). When each point is inserted into an accumulation array over θ-ρ using the edge components extracted with the Canny edge (CE) filter [14], the value at the intersection becomes greater. After finding such local maximum points, the points larger than a set threshold are detected as lines. When the accumulation array is configured, the precision and computation speed vary with the interval of the parameters: if the interval is small, the precision increases and the computation speed decreases; if the interval is large, the precision decreases and the computation speed increases.

Linear components extracted by applying HT after CE are shown in Figure 7a-c. Lattice patterns are occasionally extracted accurately; however, other linear components are also detected, as shown in Figure 7b. Lines at various angles can be detected in the pattern, and their average value is taken as the final rotation angle. Both CE and HT have a drawback: their performance changes depending on the threshold. Therefore, an appropriate threshold must be set; however, calibrating it for various fabric shapes is difficult. Consequently, using rotation-angle estimation with HT in machine vision inspection is inefficient.


TC Segmentation
Most methods for segmenting pattern images determine the optimal split lines by assuming an unrotated image, as shown in Figure 3. In [7], the authors defined the loss function f(r*, c*) and found the horizontal and vertical lengths (r, c) that minimize the function. Let S²(x, y) denote the variance between blocks after the image is cropped. This method finds the horizontal and vertical sizes that generate the maximum number of similar blocks while testing various sizes.
The conventional method causes problems in defect inspection in two respects: First, when an error area between patterns is added or partially lost, the pattern blocks of the inspected image are pushed or pulled, and all blocks after that area are judged as defective. Second, unlike the learning image, if the inspection image is rotated, the reference point itself is also rotated; therefore, all objects can be recognized as defective. To solve the second problem, unit blocks can be rotated within a certain range through data augmentation and learned together; however, this solution is inefficient because of the increased learning time and degraded performance.
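The tile-size search of [7] can be sketched as follows. The loss here is a simplified stand-in (mean pixelwise variance across non-overlapping blocks), not the exact f(r*, c*) of the paper:

```python
import numpy as np

def best_tile_size(img, sizes):
    """Pick the tile size (r, c) whose non-overlapping blocks are most
    similar to each other, i.e., whose blocks have the lowest mean
    pixelwise variance."""
    h, w = img.shape
    best, best_loss = None, np.inf
    for r, c in sizes:
        blocks = [img[i:i + r, j:j + c]
                  for i in range(0, h - r + 1, r)
                  for j in range(0, w - c + 1, c)]
        loss = np.stack(blocks).var(axis=0).mean()
        if loss < best_loss:
            best, best_loss = (r, c), loss
    return best

# A 32x32 image built by tiling an 8x8 template: the correct split is (8, 8),
# where all blocks are identical and the loss drops to zero.
tile = np.arange(64, dtype=float).reshape(8, 8)
img = np.tile(tile, (4, 4))
rc = best_tile_size(img, [(4, 4), (6, 6), (8, 8)])
```

This also shows why the method fails on rotated or locally distorted fabric: any misalignment between the fixed grid and the pattern inflates the loss for every block after the distortion.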
Hence, both the rotation angle and the segmentation reference point (SRP) are determined to solve the aforementioned problems. If the image is cropped at the reference points instead of at uniform intervals after calibrating the estimated rotation angle so that the image is unrotated, it is possible to avoid the additional learning caused by data augmentation and the performance degradation caused by image loss.

Figure 8 shows a conceptual diagram of a machine vision system for the automatic inspection of fabrics through image analysis applied to a fabric-inspecting machine. For fabric 1.5 m or larger in width, multiple cameras are employed to process the assigned areas simultaneously, and some parts overlap to account for rotation during inspection. Machine vision cameras acquire high-megapixel-resolution images. Since it is difficult to use single images directly as input data for deep learning, the image is converted to a low-resolution image or divided into detailed areas to detect fine defects. Considering machine vision inspection of fabrics using deep learning, a reference point extraction and rotation-angle estimation algorithm for image segmentation is proposed in this study. One roll of fabric usually has a size of 1.5 m × 50 m or more and consists of the same oriented pattern. Therefore, the rotation-angle estimation is performed only once at the beginning, and the result is applied to the rest of the region. The lattice and line patterns are the target images for application, and the performance results are verified using images generated for the purpose of each test or using existing fabric image data. Figure 9 shows the implementation process of the proposed image segmentation method, which is largely divided into the SRPC extraction, rotation-angle estimation, and image segmentation stages.
The SRPC is extracted by generating an EDM from the original image and then finding the local maxima. The minimum separation distance (MSD) value, which is automatically determined using the connected component labeling (CCL) technique for distinguishing local areas, influences the performance. The rotation angle is estimated from the coordinate system of the SRPC, the mutual distances, and the orientation vectors. The estimated angle is then corrected in the SRPC and the original image. A single SRP must exist in each area to accurately segment the image, so the SRPC is calibrated by merging or generating points and then arranged to determine the SRP. Finally, the image is resized to a fixed size that can be used as input data for training and inspection.
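How CCL can yield a separation scale automatically can be illustrated with a toy lattice; this is a simplified sketch of the idea, not the paper's exact MSD rule:

```python
import numpy as np
from scipy.ndimage import label

# Toy binary lattice: 16 white cells separated by black grid lines.
img = np.ones((61, 61), dtype=bool)
img[::15, :] = False   # horizontal grid lines
img[:, ::15] = False   # vertical grid lines

# CCL groups the white cells; the typical component size then gives a
# separation scale between neighbouring pattern areas.
labels, n = label(img)
areas = np.bincount(labels.ravel())[1:]   # drop the background count
msd = float(np.sqrt(areas.mean()))        # side length of a square cell
```

Deriving the scale from the labeled components themselves is what removes the need for a user-set parameter.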

EDM Generation
The EDM represents the distance of each pixel to the closest black pixel [15]. No parameters need to be set because the EDM process involves only Euclidean distance calculations between pixels. Figure 10 shows the image obtained by computing the EDM after binarizing the lattice pattern image. The white area shrinks compared with the binarized image because pixels whose distance to the black area is short, although originally the same white, receive lower values and are therefore rendered in colors close to black. Otsu's technique [16] is used for the binarization performed before generating the EDM. The Otsu algorithm automatically finds the threshold that best divides the image brightness into two classes based on the normalized histogram distribution, so the user does not need to set any parameter.
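The two steps above can be sketched as follows, using NumPy/SciPy (the paper's implementation uses OpenCV/SciPy/skimage); the minimal Otsu implementation, the function names, and the tiny synthetic lattice image are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(gray):
    """Minimal Otsu: pick the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

def generate_edm(gray):
    """Binarize with Otsu, then compute each white pixel's Euclidean
    distance to the nearest black pixel (the EDM)."""
    binary = gray > otsu_threshold(gray)
    return binary, ndimage.distance_transform_edt(binary)

# Tiny synthetic "lattice": two white blocks on a black background.
img = np.zeros((60, 60), np.uint8)
img[5:25, 5:25] = 200
img[35:55, 35:55] = 200
binary, edm = generate_edm(img)
# Block centers are farther from black than block edges, as described above.
print(edm[15, 15] > edm[5, 5])
```

In practice `cv2.threshold(..., cv2.THRESH_OTSU)` and `cv2.distanceTransform` provide the same two operations.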


MSD Auto Decision and Find Local Maxima
After maximum filter (MF) processing of the EDM image, the points whose filtered values equal the original values are obtained, and the SRPC is extracted by merging these points by area. An appropriate window size must be defined for MF processing. It can be specified by the user during the initial setup; however, this is cumbersome because a different size must be set for each inspected object, and the chance of mistakes grows as the spacing between areas narrows. Hence, a process for automatically determining the minimum distance for area distinction is required.
Therefore, the EDM image obtained above is first binarized using the Otsu technique. The connected components (CCs) are then distinguished by applying CCL, yielding the number of components and the center position and size of each.
For the CCL algorithm, block-based connected components labeling with decision trees (BBDT) [17], which minimizes memory accesses, was used. Figure 11 shows the CCs and labeling numbers obtained from the binarized EDM images of the lattice and line patterns.
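A rough sketch of this labeling step, using `scipy.ndimage.label` as a stand-in for the BBDT labeling the paper uses (the synthetic 2 × 2 lattice and the function name are illustrative):

```python
import numpy as np
from scipy import ndimage

def connected_components(binary):
    """Label 8-connected components and return their count, centroids,
    and bounding-box sizes (a stand-in for BBDT-based CCL)."""
    structure = np.ones((3, 3), int)        # 8-connectivity
    labels, n = ndimage.label(binary, structure=structure)
    centers = ndimage.center_of_mass(binary, labels, range(1, n + 1))
    sizes = [(s[1].stop - s[1].start, s[0].stop - s[0].start)  # (w, h)
             for s in ndimage.find_objects(labels)]
    return n, centers, sizes

# Binarized EDM of a 2 x 2 lattice: four separated white squares.
binary = np.zeros((40, 40), bool)
for r in (2, 22):
    for c in (2, 22):
        binary[r:r + 16, c:c + 16] = True
n, centers, sizes = connected_components(binary)
print(n, sizes[0])  # -> 4 (16, 16)
```

OpenCV's `cv2.connectedComponentsWithStats` (whose default algorithm is BBDT-based) returns the same count, centroid, and size information in one call.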

Lattice patterns must be filtered because unit elements can be too small or too large owing to noise or a defect in the target material. As shown in Equation (5), CCs whose size deviates beyond certain bounds relative to the average element size a_avg are filtered out and removed. The boundary values B_l and B_u for the width and height of an area were determined based on the probabilistic median value; however, they can change depending on the image characteristics. With the set of areas before filtering given as A = {a_1, a_2, ..., a_i}, the new set after filtering A_F satisfies

a_i ∈ A_F if B_l · a_avg < a_i < B_u · a_avg, (B_l = 0.5², B_u = 1.5²). (5)
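Equation (5) amounts to a relative-size band-pass on the component areas. A minimal sketch (the area values and the function name are illustrative; the defaults follow the boundary values quoted above, which the paper notes are image-dependent):

```python
import numpy as np

def filter_components(areas, b_l=0.5 ** 2, b_u=1.5 ** 2):
    """Keep components whose area lies strictly between b_l and b_u
    times the average area, discarding noise- or defect-induced
    outliers (Equation (5)); boundary values are image-dependent."""
    areas = np.asarray(areas, float)
    a_avg = areas.mean()
    keep = (areas > b_l * a_avg) & (areas < b_u * a_avg)
    return areas[keep]

# Nine similar lattice cells, one tiny noise blob, one merged pair.
areas = [256, 250, 260, 255, 248, 262, 251, 258, 254, 12, 700]
print(filter_components(areas))  # the nine regular cells survive
```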
After obtaining the average width w_avg and height h_avg of the CCs remaining after the filtering of Equation (5), the smaller of the two values is set as the minimum distance d_c for area distinction.

The filtering process is performed for both the line and lattice patterns. For line patterns, whichever of the average width w_avg and height h_avg of the remaining elements is longer defines the direction of the line pattern; this direction information is used later when obtaining the rotation angle. The line pattern determines the minimum distance for area distinction from the number of elements alone: dividing the width w_o and height h_o of the original image by the number of elements N_cc gives the minimum distance for the vertical and horizontal line patterns, respectively:

d_c = min(w_avg, h_avg) for the lattice pattern; d_c = w_o/N_cc or h_o/N_cc for the vertical or horizontal line pattern, respectively.

One example of obtaining the SRPC after MF processing of the EDM image, using the minimum distance for area distinction obtained in this way, is shown in Figure 12.
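The SRPC extraction itself (MF processing with window size d_c, then keeping the pixels the filter leaves unchanged) can be sketched as follows; `scipy.ndimage.maximum_filter` stands in for the MF step, and the toy EDM with two isolated peaks is an illustrative assumption:

```python
import numpy as np
from scipy import ndimage

def extract_srpc(edm, d_c):
    """Find EDM local maxima: pixels whose value is unchanged by a
    maximum filter whose window size is the automatically determined
    minimum separation distance d_c."""
    mf = ndimage.maximum_filter(edm, size=max(3, int(d_c)))
    peaks = (edm == mf) & (edm > 0)
    ys, xs = np.nonzero(peaks)
    return list(zip(xs.tolist(), ys.tolist()))

# EDM of two well-separated cells: one clear peak per cell.
edm = np.zeros((30, 30))
edm[7, 7] = 5.0
edm[22, 22] = 5.0
print(extract_srpc(edm, d_c=10))  # -> [(7, 7), (22, 22)]
```

`skimage.feature.peak_local_max` with `min_distance=d_c` offers the same operation in one call.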

Rotation-Angle Estimation
Candidate orientation vectors for estimating the rotation angle are obtained by connecting the SRPCs extracted from EDM. The line and lattice patterns have one and two orientation vectors, respectively, related to the rotation angle when there is no image distortion.
The rotation angle can be estimated once the orientation vector is obtained, considering only the nearest points in Quad_1 and Quad_4, the right-hand side when the image is divided into quadrants ("Quad") around each SRPC. Since the position of the nearest point changes with the image rotation direction, Quad_1 and Quad_4 are each divided in half diagonally, and the position of the nearest point is classified again so that the rotation angle can be estimated regardless of the rotation direction. In Figure 13a, the nearest point lies in Quad_11 or Quad_41, so the image must be rotated counterclockwise; in Figure 13b, which must be rotated clockwise, it lies in Quad_12 or Quad_42.
For every SRPC from P_1 to P_9, the orientation vector to the nearest point in Quad_1 or Quad_4, on the right-hand side, is found. Then, we determine to which of Quad_11, Quad_12, Quad_41, and Quad_42 the orientation vector belongs, and the orientation vectors so classified are accumulated. Finally, the directions of the vectors P_5P_9, P_5P_6, P_5P_3, and P_5P_2 are selected as the candidate vectors for Quad_12, Quad_11, Quad_42, and Quad_41, respectively.
Note that the Quad_1, Quad_4, Quad_11, Quad_12, Quad_41, and Quad_42 areas in each step are formed around the corresponding SRPC.
For both horizontal and vertical line patterns, the candidate orientation vector is obtained by the same process as for the lattice pattern; hence, they can be integrated into one algorithm. Pseudocode for the rotation-angle estimation, including the equation for finding the rotation angle, is presented in Algorithm 1.
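The idea of Algorithm 1 can be condensed into a short sketch: for each SRPC, take the orientation vector to its nearest right-hand neighbour, accumulate the vectors, and convert the summed direction to an angle. This is a simplification (the half-quadrant bookkeeping of Quad_11/Quad_12/Quad_41/Quad_42 is collapsed into a single ±45° cone test), and the 3 × 3 test grid is an illustrative assumption:

```python
import math

def estimate_rotation(points):
    """Estimate the absolute rotation angle from SRPC coordinates:
    sum the orientation vectors to each point's nearest right-hand
    neighbour and take the angle of the summed vector
    (a simplified sketch of the paper's Algorithm 1)."""
    vx = vy = 0.0
    for px, py in points:
        # candidates to the right, within +/-45 degrees of horizontal
        right = [(qx - px, qy - py) for qx, qy in points
                 if qx > px and abs(qx - px) >= abs(qy - py)]
        if not right:
            continue
        dx, dy = min(right, key=lambda v: v[0] ** 2 + v[1] ** 2)
        vx += dx
        vy += dy
    return math.degrees(math.atan2(vy, vx))

# A 3 x 3 lattice of reference points rotated by +5 degrees.
theta = math.radians(5.0)
grid = [(c * 20, r * 20) for r in range(3) for c in range(3)]
rot = [(x * math.cos(theta) - y * math.sin(theta),
        x * math.sin(theta) + y * math.cos(theta)) for x, y in grid]
print(round(estimate_rotation(rot), 2))  # -> 5.0
```

Because only the SRPCs are used rather than all pixels, the cost scales with the number of reference points, which is the source of the speed advantage claimed above.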

Correct Rotation Angle
Let the position of each pixel of the original image be (x, y), the image center be (x_0, y_0), and the position to which each pixel is moved by the rotation angle θ be (x′, y′); the conversion according to the affine transformation (AT) [18] is then given as

x′ = (x − x_0) cos θ − (y − y_0) sin θ + x_0, y′ = (x − x_0) sin θ + (y − y_0) cos θ + y_0.

If the horizontal and vertical sizes of the original image are (w_o, h_o), the image size expanded by the rotation, (w_r, h_r), can be expressed as

w_r = w_o|cos θ| + h_o|sin θ|, h_r = w_o|sin θ| + h_o|cos θ|.

Consequently, the final positions (x″, y″) of the new pixels, which keep the entire rotated image inside the canvas, are given as

x″ = x′ + (w_r − w_o)/2, y″ = y′ + (h_r − h_o)/2.

The new position of the SRPC is calculated in the same way as for the image pixels. Since a rotated position value can be a real number, it must be converted to an integer.
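The canvas-expanding rotation above is what `scipy.ndimage.rotate` with `reshape=True` performs (OpenCV users would combine `cv2.getRotationMatrix2D` with an adjusted translation); the wrapper name and the all-ones test image are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def correct_rotation(img, theta_deg):
    """Rotate the image back by the estimated angle, expanding the
    canvas so that no pixels are lost.  reshape=True grows the output
    to roughly w_r = w_o|cos t| + h_o|sin t|,
               h_r = w_o|sin t| + h_o|cos t|."""
    return ndimage.rotate(img, theta_deg, reshape=True, order=1)

img = np.ones((512, 768), np.uint8)   # h_o x w_o, the paper's image size
out = correct_rotation(img, 30.0)
t = np.radians(30.0)
w_r = int(round(768 * np.cos(t) + 512 * np.sin(t)))
h_r = int(round(768 * np.sin(t) + 512 * np.cos(t)))
print(out.shape, (h_r, w_r))  # expanded canvas matches (h_r, w_r)
```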

SRP Decision
The positions of the points must be arranged in a matrix to crop the image based on the SRP. For the lattice pattern, the points are first grouped into rows based on their y-axis coordinates, and within each row group they are sorted in ascending order of their x-axis coordinates. The row groups themselves are sorted in ascending order of the y-coordinate value P_y, ignoring the x-coordinates. The row group Row(i) is then changed whenever the difference in P_y between consecutive points exceeds the row minimum group separation distance (MGSD) D_r_min, which is determined by dividing the image height H by the number of SRPCs P_N, thereby assuming that the maximum possible number of rows equals the total number of points. In a lattice pattern, P_N can range from four at the minimum to H/2 at the maximum.
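The row-grouping rule can be sketched as below; the function name and the six hand-picked points (two rows of a lattice with slight y-jitter) are illustrative assumptions:

```python
def group_rows(points, height):
    """Group SRPs into rows: sort by y and start a new row whenever
    the y-gap exceeds the row MGSD  D_r_min = H / P_N."""
    d_r_min = height / len(points)
    pts = sorted(points, key=lambda p: p[1])
    rows = [[pts[0]]]
    for prev, cur in zip(pts, pts[1:]):
        if cur[1] - prev[1] > d_r_min:
            rows.append([cur])          # y-gap too large: new row group
        else:
            rows[-1].append(cur)
    # within each row, order the points by x for column-wise cropping
    return [sorted(r) for r in rows]

pts = [(10, 12), (40, 10), (70, 11), (12, 50), (41, 52), (69, 49)]
rows = group_rows(pts, height=100)
print(len(rows), rows[0])  # -> 2 [(10, 12), (40, 10), (70, 11)]
```

Swapping height for width and y for x gives the column grouping described next.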
When determining the SRP, multiple SRPC points need to be merged into a single point because each cropping criterion should exist as a single point. As shown in Figure 14, two SRPC points belonging to the same column group are converted into a single SRP.
To merge multiple SRPCs, column grouping is performed after row grouping in the same manner as Equations (10) and (11). The column MGSD D_c_min can be determined by finding the column group Column(j) after changing the height H to the width W in Equation (10) and P_y to the x-coordinate P_x in Equation (11). If multiple elements exist in a column group, either their average is taken or one point is selected as the representative point and the rest are merged into it.
Figure 15 shows the error tolerance calculated for detecting the rotation angle. If RA equals the MGSD when the point at the origin O (one of the outermost edge points) rotates to R around the center C, the angle θ at that time is the maximum tolerance angle. After the tolerance angles for the row and column are determined, as shown in Equations (12)-(14), the smaller value is selected.
The column case, Equation (15), can be derived in the same way as the row tolerance angle.

For the Tilda [19] lattice image of 768 × 512 pixels, θ_r and θ_c are calculated as 4.84° and 8.32°, respectively, meaning that even if the rotation-angle estimation error is as large as 4.84°, the SRP groups are maintained and the image can be segmented without problems.
The line pattern yields a larger number of SRPCs than the lattice pattern, and the difference between groups is larger than the difference between elements within a group. For grouping, the SRPCs of a vertical line are sorted in ascending order of their x-axis values, and those of a horizontal line in ascending order of their y-axis values. The differences between the sorted points are then squared and averaged; this average becomes the grouping reference value C_G.
The differences are squared to prevent an increase in the number of groups: if the number of SRPCs is much larger than the number of groups, averaging unsquared differences would pull the grouping reference value down. If the square of the difference between two sorted points is smaller than the grouping reference value, the points are assigned to the same group; if it is larger, they are separated. Grouping proceeds in this manner, and once it is complete, the final SRP is determined.
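The squared-gap grouping rule can be sketched as follows; the function name and the nine synthetic x-coordinates (three vertical lines, each producing several SRPCs) are illustrative assumptions:

```python
def group_line_srpc(coords):
    """Group sorted 1-D SRPC coordinates (x for vertical lines,
    y for horizontal lines).  The grouping reference C_G is the mean
    of the squared gaps; squaring keeps the many small within-group
    gaps from dragging the reference down when SRPCs greatly
    outnumber groups."""
    cs = sorted(coords)
    gaps = [b - a for a, b in zip(cs, cs[1:])]
    c_g = sum(g * g for g in gaps) / len(gaps)
    groups = [[cs[0]]]
    for gap, c in zip(gaps, cs[1:]):
        if gap * gap < c_g:
            groups[-1].append(c)        # small squared gap: same line
        else:
            groups.append([c])          # large squared gap: new line
    return groups

# Three vertical lines, each producing several SRPC x-coordinates.
xs = [20, 21, 22, 60, 61, 62, 100, 101, 102]
print([g[0] for g in group_line_srpc(xs)])  # -> [20, 60, 100]
```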
For the lattice pattern, once the SRP is generated, missing points are supplemented and surplus ones removed, as shown in Figure 16a, and the SRP to be used is finally determined. Figure 17 shows the SRP result for the vertical line pattern. The average value of the group elements, C, is the reference value in the corresponding pattern direction, and the average of the reference values is used as the segment length D in the direction orthogonal to the pattern. The same method is applicable to the horizontal line pattern.


Simulation
The simulation environment comprises an Intel® Core™ i7-7820HK CPU @ 2.9 GHz, 32 GB RAM, and an Nvidia GeForce GTX 1070 GPU as the hardware; the simulation is implemented in Python 3.7.7 using libraries such as OpenCV, SciPy, and skimage on a Windows 10 64-bit OS.
For the target image data, the lattice and line pattern images of the TILDA dataset are used. As shown in Figure 18, various positions and rotation angles are applied to the normal case and the seven defect types. A total of 806 images, about 50 images per pattern and error type, were used; each image consists of 768 × 512 gray-level pixels. The accuracy of the rotation-angle estimation cannot be measured with this image data alone: Tilda does not provide accurate rotation-angle information, and directly photographed fabric images can yield different linear-component information by area depending on the characteristics of the optical system, making the accuracy of the rotation-angle estimation subjective. Therefore, to measure the accuracy of the rotation-angle estimation, lattice and line pattern images are generated directly, as in Figure 7b,c. Furthermore, the performance of the conventional algorithms is measured on the same images to examine the relative difference in performance. By rotating in 0.1° steps in the range of −44° to 44°, 881 data points were generated for each pattern shape.
The image segmentation test applied the rotation-angle estimation algorithm to Tilda images and the correlation of the segmented images was calculated and analyzed.

Error of Rotation-Angle Estimation
This section discusses the results of the rotation-angle estimation obtained by applying RT, HT, and the proposed algorithm to the generated lattice, vertical, and horizontal line patterns containing the same rotation-angle error. Figure 19 shows the error of the rotation-angle estimation. RT shows good and stable performance, with errors in the range of 0.3~0.4° for every pattern. Since the test was performed in 0.1° steps, the error increases in 0.1° steps for every pattern.
For HT, the lattice and line patterns show distinct differences in performance. The maximum error for line patterns is below 1° within ±39° for both the vertical and horizontal directions. However, the lattice pattern shows low performance at specific rotation angles, and the error reaches several tens of degrees at many angles. The reason is illustrated in Figure 7a: viewing the pattern shape in detail, linear components exist at angles other than those of the lattice components of the pattern.
The proposed algorithm has an error of 0.8° or less within ±30° and 2° or less within ±41.5° for all patterns. Since actual fabrics have extreme rotation angles below ±40°, the proposed algorithm is suitable for the application. Even a large rotation angle does not affect the line patterns, and the lattice-pattern problem is solved by performing the proposed algorithm twice.
In summary, it is difficult to apply HT to lattice patterns. The proposed algorithm has the lowest error, similar to RT, in actual operating environments, and the accurate prediction of the rotation angle increases the probability of elaborate pattern segmentation. Although RT shows good performance in rotation estimation, it is not suitable for fabric vision inspection because of its long execution time. Figure 20 shows the measured times required for rotation-angle estimation. With the resolution set to 0.1°, RT took approximately 30 s in most cases regardless of the pattern type and rotation angle. This time can be reduced by lowering the detection resolution and grows if the resolution is increased; however, even if the time is cut to one tenth by lowering the resolution to 1°, the resulting 3~4 s is still unacceptable for machine vision inspection. The time required for HT-based rotation-angle estimation was the shortest among the compared methods. As with RT, the generated images were measured with a detection resolution of 0.1°. For every pattern, it was 0.05 s or less at most rotation angles and less than 0.2 s in the worst case. If only the required time were considered, HT would be the most suitable for actual system applications; however, with respect to the accuracy of rotation-angle estimation, HT is applicable only under specific conditions because of the large estimation errors on the lattice pattern and the errors introduced by its parameter settings.
The proposed algorithm shows stable results of 0.1~0.3 s within ±25°; the required time increases as the rotation angle approaches the boundary value of ±45°. The pattern rotations of images photographed within one roll of fabric do not differ significantly; hence, the method is applicable to actual operation because the rotation angle estimated from the first few images can be applied to the entire fabric.
The number of SRPCs has the largest effect on the required time of the proposed algorithm. This is because the larger the number of SRPCs, the more time it takes for the SRPCs to be generated and determined in extracting the orientation vector. However, the number of SRPCs depends on the pattern type. In this experiment, the maximum number of SRPCs is generated by assuming the segmentation of the image in pattern units of minimum size. However, in the actual case, especially for line patterns, a smaller time is expected because there is a high possibility of dividing several basic patterns into groups.
As such, SRPC-based estimation of the rotation angle is the best-suited method for vision inspection of patterned fabric, as it achieves the highest level of accuracy with a short computation time.

Segmented Images Similarity (SIS)
Image segmentation is performed after the estimated rotation angle of the Tilda images is corrected. Figure 21 shows a segmentation result using the proposed algorithm: even when normal and abnormal cases exhibit rotation or distortion, the segmentation is performed well. This study compared the image segmentation performance of the conventional TC method with that of SRP-based image segmentation using Equation (15) for normalized cross-correlation (NCC).
To determine the similarity between two images I_1 and I_2, all the pixels are multiplied element-wise, the results are summed, and the sum is divided by the image size for normalization. The resulting value is closer to 1 the more similar the two images are.
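A small sketch of this similarity measure and of the averaging over template pairs described next; the function names and the random test templates are illustrative assumptions, and the normalization here uses the standard product-of-norms form so that identical images score exactly 1 (the paper's Equation (15) normalizes by the image size):

```python
import numpy as np

def ncc(i1, i2):
    """Normalized cross-correlation of two equally sized templates:
    summed pixel-wise product, normalized so identical images score 1."""
    a = i1.astype(float).ravel()
    b = i2.astype(float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_sis(templates):
    """Average NCC over all ordered pairs of the N segmented
    templates, excluding self-pairs (in the spirit of Equation (16))."""
    n = len(templates)
    total = sum(ncc(templates[i], templates[j])
                for i in range(n) for j in range(n) if i != j)
    return total / (n * (n - 1))

rng = np.random.default_rng(0)
base = rng.integers(0, 255, (32, 32))
noisy = np.clip(base + rng.integers(-5, 5, (32, 32)), 0, 255)
print(mean_sis([base, base.copy(), noisy]) > 0.9)  # near-identical crops
```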
When there are N generated template images, the NCC values of the other N-1 images, excluding the image itself, are added. This calculation is performed for all N images, as shown in Equation (16), and the resulting value is divided by the total number of additions. Thus, we can measure how similarly the original image was segmented. Table 1 shows the NCC-based SIS performance results. The conventional method showed large performance differences between normal and defective images, whereas the proposed algorithm shows relatively even performance. The conventional algorithm is vulnerable to defects because the segmentation size is constant and feature-point information is not used. In particular, the correlation is low when there are many wrinkles or when distance distortion occurs. The proposed algorithm regenerates feature points lost to defects and removes duplicates based on the SRPC, thereby flexibly adjusting the segmentation size to minimize the information difference between segmented images. The proposed algorithm also showed relatively low performance on wrinkled images. In lattice patterns, both horizontal and vertical SRPCs exist, whereas for line patterns the SRPCs are extracted only in the pattern direction; thus, defects that cross them or span a wide area degrade performance.
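The averaging procedure above can be sketched as follows, under the same assumptions as before: `ncc` is the pairwise similarity of Equation (15) in a zero-mean form, and the function name `sis` is illustrative rather than from the paper.

```python
import numpy as np

def ncc(img1: np.ndarray, img2: np.ndarray) -> float:
    # Pairwise similarity in the spirit of Equation (15): zero-mean NCC.
    a = (img1 - img1.mean()) / (img1.std() + 1e-12)
    b = (img2 - img2.mean()) / (img2.std() + 1e-12)
    return float(np.sum(a * b) / a.size)

def sis(templates: list) -> float:
    """Segmented Images Similarity over N template images.

    For each template, the NCC with the other N-1 templates is summed;
    the grand total is divided by the number of additions, N * (N - 1),
    as described for Equation (16).
    """
    n = len(templates)
    total = sum(ncc(templates[i], templates[j])
                for i in range(n) for j in range(n) if i != j)
    return total / (n * (n - 1))
```

If all N templates are identical, the SIS is 1; the more the segmented pieces differ, the lower the average.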
Except for wrinkle images, where the proposed algorithm has a limitation, image segmentation with a mean correlation of 0.72 or higher is possible. These results demonstrate that the similarity between segmented images is high, meaning the quality of the input data is also high, which in turn increases defect-detection performance. Considering that satisfactory detection performance was achieved when deep learning was applied to images segmented by the TC method, a higher performance can be attained using the proposed method. The performance difference by image type is partly related to the error in rotation-angle estimation and the limited number of Tilda images. Therefore, the reliability of the performance results can be improved if more images are used.

Conclusions
This paper described a method for appropriately segmenting fabric images with rotated or defective lattice and line patterns, which can be used as deep-learning input data for machine vision inspection. Images segmented using conventional methods suffered large errors in estimating the rotation angle, or the processing time was long. Performance variation with the threshold, and the need to set such thresholds according to the fabric pattern shape, were other problems to be solved.
Since the absolute rotation angle is calculated by considering only the orientation vectors of the SRPCs instead of all pixels, the computation time is reduced while satisfying the required accuracy. Furthermore, the variables required for the algorithm, such as the MSD, were obtained automatically from the corresponding pattern shape.
The estimated rotation angle was similar to that of RT, which has the highest accuracy, for all pattern types. The processing time was longer than that of HT but considerably shorter than that of RT, which shows its suitability for actual vision inspection.
In the image segmentation step, conventional methods had a low SIS because the rotation angle of the image was not accurately estimated or a defect was present. In contrast, the proposed method achieves a relatively high SIS by increasing the robustness of image segmentation to rotation and defects, because it uses the SRPCs extracted based on the EDM.
If the results of this study are applied to deep learning-based machine vision inspection of fabrics, the objects of inspection can be expanded from single-color, unrotated patterns to rotated lattice and line patterns. Improved defect-detection accuracy is possible because rotation-angle estimation removes the need for data augmentation, reducing the time required for deep learning and enhancing the SIS. The proposed method can be applied to several fields that require rotation-angle estimation or pattern-based image segmentation; beyond fabric inspection, it can also be applied to defect inspection in smart factories and to geographic information systems.
The time required for line patterns is irregular compared with lattice patterns because the number of SRPCs is relatively large. Thus, a method that prevents an increase in the number of SRPCs according to the line pattern shape is additionally required. A limitation of this study is the difficulty of applying the proposed method to general repeated design patterns.
In the future, studies on the configuration of neural networks and the measurement of inspection performance using segmented images in deep learning-based fabric inspection are expected. Further research will also compare original and segmented images as deep-learning input with respect to overall time consumption, including training and inspection. Furthermore, algorithms that can be applied to more diverse fields and pattern types should be developed, and additional methods to enhance the performance of rotation-angle estimation and reduce the required time should be explored.