Research on Tire Marking Point Completeness Evaluation Based on K-Means Clustering Image Segmentation

The tire marking points for dynamic balance and uniformity play a crucial guiding role in tire installation. Incomplete marking points hinder the recognition of the tire marking points and thus affect the installation of tires, so it is usually necessary to evaluate the marking point completeness during the quality inspection of finished tires. In order to meet the high-precision requirements for the evaluation of tire marking point completeness in smart factories, the K-means clustering algorithm is introduced in this paper to segment the image of the marking points. On the basis of the image segmentation, the pixels within the contour of the marking point are weighted to calculate the marking point completeness, and the completeness is then rated and evaluated according to the calculated value. The experimental results show that the accuracy of the marking point completeness rating is 95%, and the accuracy of the marking point evaluation is 99%. The proposed method provides important practical guidance for evaluating the tire marking point completeness during machine-vision-based tire quality inspection.


Introduction
In recent years, with the improvement of people's living standards and the rapid development of the automobile industry, the popularity of automobiles has increased significantly, and more attention has been paid to their safety and comfort. Stringent requirements are placed on stability and quietness during high-speed driving [1]. Tires are important parts of the automobile, and their dynamic balance and uniformity are two factors affecting safety and comfort during driving [2]. In the detection of finished tires, marking points with different colors and shapes are used to mark the position information of the dynamic balance and uniformity tests on the tires for counterweighting during the installation of the tires and their accessories, so as to improve the balance of the car at high speed [3,4]. Poor printing during the printing process, or scratching of the tire by the conveying equipment during conveying, leads to incomplete marking points. Since every marking point is an important reference point for tire installation, an incomplete marking point makes it difficult for the tire installer to recognize the color, shape, and position of the marking point, which affects the accuracy of the tire and wheel hub installation. Therefore, the accuracy of the marking points should be checked before the finished tire leaves the factory. In addition, it is necessary to check the marking point completeness, which is also regarded as a criterion for whether the marking points are qualified. The traditional detection of tire marking point completeness is mainly performed through manual subjective evaluation of the tires moving on the conveyor line, and the marking points judged incomplete need to be reprinted. Manual detection of the marking point completeness involves high work intensity and is prone to visual fatigue.
In addition, because there is no objective basis for judging completeness, misjudgments occur easily, and neither the accuracy nor the efficiency of the marking point detection can be guaranteed.
In contrast, intelligent detection technology based on machine vision has been widely studied and applied in many fields [5][6][7]. Collecting images with industrial cameras, recognizing the colors and shapes of the tire marking points, and calculating their completeness are effective methods for the quantitative analysis of marking point completeness. Such methods avoid misjudgments caused by manual subjective factors and improve the accuracy and efficiency of the marking point completeness evaluation. In various fields, great progress has been made in the study of completeness. Chu Qichao and his colleagues used machine vision technology to judge the completeness of the dry package quickly and accurately on the packaging line [8]. He Chao used deep learning object classification to realize online automatic germ completeness calculation during germinated rice processing [9]. In the development of tire marking point recognition and detection systems, domestic and foreign researchers have focused on color and shape recognition. For example, the Quality Control System IRIS-M [10] invented by the German SICK-Sensor Intelligence company and the closed-loop control system identity CONTROL TMI 8303.I [11] invented by the German Micro-Epsilon company can perform online tire marking point recognition. The Japanese Bridgestone Corporation proposed a device to realize the printing and recognition of tire marking points [12]. The Portuguese scholar André P. Dias used the background subtraction method to realize tire positioning [13]. Wang Yong proposed using a support vector machine to recognize the color of the tire marking point [14]. Although some achievements have been made in tire marking point recognition systems based on machine vision, there are few reports on the calculation and evaluation of the marking point completeness after the recognition of the tire marking points.
In this paper, machine vision technology is used to calculate the marking point completeness quantitatively on the basis of the color and shape recognition of the tire marking point; then, according to the calculated completeness value, the marking point is rated and evaluated to achieve online automatic detection of the tire marking point completeness. The method proposed in this paper breaks new ground in marking point completeness rating, meets the strict requirements of smart factories for the recognition and detection of tire marking points, and lays a foundation for intelligent detection technology in the tire industry.

Image Acquisition of the Tire Marking Point
The online collection system for the tire marking point image is shown in Figure 1. A photoelectric switch sensor detects the tire on the conveyor line and triggers the light source and camera after a preset delay. When the tire passes directly under the color Charge Coupled Device (CCD) industrial camera, the host computer turns on the light source and triggers the industrial camera to collect the image of the tire marked with the marking points, and the image is sent to the host computer for processing. After locating and extracting the marking points, images of the tire marking points with a size of 48 × 48 pixels are obtained. The color and shape of the tire marking points are then recognized, and their completeness is calculated and evaluated. After that, the recognized result is transferred from the host computer to the Manufacturing Execution System (MES) and compared with the dynamic balance and uniformity detection results in the MES to determine whether the marking point is qualified. The detected result is sent to a Programmable Logic Controller (PLC) to shunt the tire: the unqualified tire is diverted into the repair stage, and the qualified tire is sent to the X-ray test and then classified and stored according to its tire category. The Central Processing Unit (CPU) of the host computer is an Intel (R) Core (TM) i5-3470M. There are three colors of tire marking points obtained by the online image collection system, namely red, yellow, and white, and four shapes, namely a solid circle, a hollow circle, a square, and a triangle. During the process of printing the marking points, poor printing, scratches between the tires and the conveying equipment, printing on the uneven texture on the side of the tires, or printing on fur on the tires can lead to incomplete marking points.
The comparisons between the complete and the incomplete tire marking points are shown in Figure 2.

Determination of the Marking Point Segmentation Scheme
The calculation of the marking point completeness can be divided into three steps: segmenting the marking point, extracting the contour of the complete marking point, and calculating the marking point completeness. The aim of segmenting the marking point is to separate the marking point from the background area, which is a prerequisite for the completeness calculation; the segmentation results directly affect the accuracy of the subsequent calculation. After extracting the contour of the complete marking point, a relatively simple way of calculating the completeness is to use the ratio of the pixels of the segmented marking point to those of the corresponding complete marking point. Segmentation algorithms based on edge contours or thresholds are widely used to divide an image into a foreground and a background. One of the most popular threshold-based segmentation algorithms is the maximum between-class variance method, also known as the Otsu segmentation method [15]. It is a global adaptive threshold segmentation algorithm: it searches for the optimal threshold based on the principle of the smallest variance within the target or background area and the largest variance between the two areas, and divides the image into a target area and a background area. Taking the Otsu segmentation method as an example, the marking points are segmented as shown in Figure 3. Since the marking point image is a Red-Green-Blue (RGB) three-channel image, shown in Figure 3a, it should first be converted into a single-channel image, with the conversion result shown in Figure 3b. Then, the single-channel image is segmented by the maximum between-class variance method; the white area is the segmented marking point and the black area is the segmented background, as shown in Figure 3c.
The color is the main feature of the marking point. However, the color information of the marking point image is not fully utilized when the image is converted from a three-channel RGB image into a gray one. The maximum between-class variance method strictly divides the gray image of the marking point into object and background by calculating a global adaptive threshold. In the segmentation results, for example Figure 3a,c, the blurred transition edges are segmented into the marking point foreground for the red solid circle and the white solid circle marking points. It can be seen that the calculation result of the marking point completeness depends entirely on the segmentation of the marking point. When the marking point is relatively blurred and its edge is not clear, an edge-based or threshold-based segmentation algorithm, which strictly divides the image into foreground and background, may fail to reflect the true marking point completeness, as shown in Figure 3.
In order to take the edge transition between the marking point and the background into account and reflect the influence of the edge area on the marking point completeness, a completeness calculation method is proposed based on weighting and clustering image segmentation. The purpose of image segmentation is to divide an image into regions with different characteristics; the purpose of clustering is to divide data points with the same attributes into one category. Their essences are the same. The clustering method is used to divide the pixels in the image into multiple discrete regions so that the pixels have a high degree of similarity within each region and a high contrast exists among the regions, thereby achieving image segmentation.

Preprocess of the Marking Point
The image collection system of the tire marking point collects the tire image and captures the marking points. The image size of the obtained marking point is 48 × 48 pixels, where the marking point accounts for about 1/4 to 1/3 of the entire image. The low resolution of the marking point leads to obvious jagged patterns after the marking point image is segmented, as shown in Figure 4a. Moreover, in the process of extracting the contour of the complete marking point, the contour parameters of the complete marking point are calculated as real numbers, and a large error occurs when these real numbers are converted into integers, which affects the accuracy of the completeness, as shown in Figure 4b. In order to make the calculation of the marking point completeness accurate, the image is enlarged before segmentation. As shown in Figure 4c, enlarging the image obviously reduces the error generated when the contour parameters of the complete marking point are converted into integers, but it also increases the image processing time. The experimental results show that when the magnification is greater than three times, the calculation error does not decrease significantly (from 2.61% to 1.85%), but the image processing time increases significantly (from 8.47 milliseconds (ms) to 28.20 ms), as shown in Figure 5. In summary, the image of the tire marking point is enlarged three times, which balances the image processing speed and the accuracy of the completeness calculation.

Image Segmentation of the Marking Point Based on the K-Means Clustering
The main clustering algorithms for image segmentation include K-means clustering [16], Fuzzy C-means (FCM) clustering [17,18], and so on. In this paper, the K-means and FCM clustering are both tested for their segmentation effect and processing speed. Although the FCM clustering image segmentation method has high segmentation accuracy, it requires a large amount of computation and has a slow processing speed. For each marking point image, the average processing time is 9.27 ms using K-means clustering versus 600.36 ms using FCM clustering, which does not meet the real-time requirements of the online marking point recognition system. By contrast, the K-means clustering image segmentation algorithm is simple, efficient, and of low complexity. In this paper, the K-means clustering image segmentation algorithm is therefore used to segment the marking point.
The basic process of the K-means clustering algorithm is as follows: the number of clusters k is given, and k data points are selected randomly from the data set as the initial centroids c_j^0, j = 1, 2, ..., k. Equation (1) is used to calculate the distance from each remaining data point in the data set to the centroids and assign it to the cluster with the closest centroid:

C_j^t = { f(i) : d(f(i), c_j^(t-1)) ≤ d(f(i), c_l^(t-1)), l = 1, 2, ..., k }, (1)

where t is the number of iterations, C_j^t, j = 1, 2, ..., k, is the j-th cluster generated by the t-th iteration, and f(i) represents the eigenvalue of data point i.
Then, the k clusters C_j^1, j = 1, 2, ..., k, are obtained. Equation (2) is used to calculate the centroid c_j^1 of each cluster C_j^1, and the data points in the data set are re-divided into the cluster closest to c_j^1, forming the clusters C_j^2:

c_j^t = (1 / |C_j^t|) Σ_{f(i)∈C_j^t} f(i), (2)

where |C_j^t| is the number of data points contained in the j-th cluster, and Σ f(i) is the sum of the eigenvalues of the data points in the j-th cluster.
The algorithm cyclically iterates the assignment of the data points and the updating of the cluster centroids. When the distance from the data points in each cluster to their centroid is the smallest, i.e., Equation (3) is satisfied, or the set number of clustering iterations is reached, the clustering centers c_j^t and the classes C_j^t of the data points in the data set to be segmented are output, and the clustering process ends:

SSE = Σ_{j=1}^{k} Σ_{f(i)∈C_j} d(f(i), c_j)^2, (3)

where SSE represents the sum of squares of the Euclidean distances between the data points and the centroids of their clusters, d(f(i), c_j) represents the Euclidean distance from a data point to its centroid, and f(i) represents the eigenvalue of the data point.

When using the algorithm, the number of clusters k should be determined first. The value of k directly determines the results of the clustering segmentation. The segmentation results of the tire marking points for k from 2 to 8 are shown in Figure 6. When k = 2, the image is divided into the foreground (marking point) and the background, and the blurred edges may be assigned to either; there is no edge transition between the foreground and the background, just as with the Otsu segmentation method in Section 2.2. When k is 3 to 8, as shown in Figure 6, the brightest area is the foreground (marking point) and the darkest area is the background, while the area between them is divided into several regions. The larger the k, the finer the clusters, and it is hard to select a suitable k by visual inspection alone.

The purpose of segmenting the marking point image is to capture the edge transition between the foreground (marking point) and the background. In fact, the dark part contributes little to the marking point completeness, so the dark part (background) does not need to be divided in detail. All of the dark parts can be regarded as the background with a weight of zero, which also simplifies the weighting process in the completeness calculation. Thus, in this paper, k = 4 is selected as the clustering category in the K-means clustering segmentation. To examine this choice, marking point samples with the same color and shape (for example, red solid circles) are selected and regarded as one image, and the K-means clustering method with different k values is used to segment it.
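The clustering loop of Equations (1)-(3) can be sketched in plain NumPy; the 1-D brightness samples below are illustrative, not the paper's data.

```python
# Compact NumPy sketch of the K-means loop in Equations (1)-(3):
# assign each point to its nearest centroid, recompute centroids as
# cluster means, and stop when SSE stops decreasing (or after max_iter).
import numpy as np

def kmeans(points, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    c = points[rng.choice(len(points), k, replace=False)]  # initial centroids c_j^0
    prev_sse = np.inf
    for _ in range(max_iter):
        # Equation (1): distance of every point to every centroid, nearest wins
        d = np.linalg.norm(points[:, None, :] - c[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Equation (3): SSE = sum of squared distances to assigned centroids
        sse = (d[np.arange(len(points)), labels] ** 2).sum()
        if sse >= prev_sse:          # converged: SSE no longer decreasing
            break
        prev_sse = sse
        # Equation (2): centroid = mean of the eigenvalues in its cluster
        for j in range(k):
            if np.any(labels == j):
                c[j] = points[labels == j].mean(axis=0)
    return labels, c

# Two well-separated 1-D brightness clusters
pts = np.array([[0.05], [0.08], [0.1], [0.9], [0.92], [0.95]])
labels, centers = kmeans(pts, k=2)
print(sorted(np.round(centers.ravel(), 3).tolist()))
```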
N_i is the number of pixels in region i (i = 1, 2, ..., k), and T is the total number of pixels in the whole image. P_i, the percentage of N_i relative to T, can then be calculated. For K-means clustering with k = 3 and k = 4, the areas with large values in the luminance channel (V channel) of the Hue-Saturation-Value (HSV) space are divided into several regions, while the area with a small V-channel value forms a single region, which can be regarded as the background. For example, when k = 4, P_1 = 0.08, P_2 = 0.10, P_3 = 0.08, and P_4 = 0.74, and their corresponding V-channel values are 0.92, 0.31, 0.62, and 0.08. The fourth region (0.08) is dark, which is the background; it makes up 74 percent of the total area of the image. When k is larger than 4, the background area is subdivided. For example, when k = 5, P_1 = 0.08, P_2 = 0.08, P_3 = 0.08, P_4 = 0.27, and P_5 = 0.50, and their corresponding V-channel values are 0.93, 0.64, 0.36, 0.12, and 0.07. The fourth and fifth regions (0.12 and 0.07) are dark, and the background makes up 77 (27 and 50) percent of the total area of the image. When k = 8, P_1 = 0.06, P_2 = 0.06, P_3 = 0.05, P_4 = 0.05, P_5 = 0.05, P_6 = 0.13, P_7 = 0.29, and P_8 = 0.31, and their corresponding V-channel values are 0.95, 0.58, 0.38, 0.76, 0.22, 0.14, 0.09, and 0.05. The sixth, seventh, and eighth regions (0.14, 0.09, and 0.05) are dark, and the background makes up 73 (13, 29, and 31) percent of the total area of the image. That is to say, the background may be divided into two or several parts when k is larger than 4.
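The P_i statistic above can be computed directly from the cluster label map; the label and V-channel arrays below are synthetic stand-ins chosen to mirror the k = 4 example, not real segmentation output.

```python
# P_i = N_i / T: the share of image pixels in each K-means region, plus
# each region's mean V-channel value (used to tell foreground from background).
import numpy as np

v = np.array([0.9] * 8 + [0.6] * 10 + [0.3] * 8 + [0.1] * 74)  # toy V values
labels = np.array([0] * 8 + [1] * 10 + [2] * 8 + [3] * 74)     # cluster of each pixel
total = labels.size                                            # T, total pixels

for j in range(4):
    n_j = np.count_nonzero(labels == j)    # N_i, pixels in region i
    p_j = n_j / total                      # P_i = N_i / T
    v_j = v[labels == j].mean()            # mean brightness of the region
    print(f"region {j + 1}: P = {p_j:.2f}, mean V = {v_j:.2f}")
```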

Extract the Contour of Complete Marking Points
After the image is segmented by the K-means clustering, the area with the highest brightness is the object area of the marking point. Before calculating the marking point completeness, the shape information of the marking point is used to extract the contour of the corresponding complete marking point, which is then regarded as a reference for calculating the marking point completeness.


Completeness Calculation of the Tire Marking Point
In the second part, the K-means clustering segmentation algorithm is used to segment the image into four regions R1, R2, R3, and R4, as shown in Figure 8. The four regions differ significantly in the luminance channel (V channel) of the HSV space. Table 1 lists the brightness of the four regions divided by the K-means clustering for the three marking points in Figure 8. Among them, the region with the highest brightness is R1, with brightness t1, corresponding to the object area of the marking point; the slightly darker regions R2 and R3, with brightness t2 and t3, respectively, correspond to the two-layer blurred edge between the object area of the marking point and the background region; the darkest region is R4, with brightness t4, corresponding to the background area of the image.

By extracting region R1, the contour of the complete marking point corresponding to the object area is obtained; the pixels within this contour are then weighted and summed to obtain the marking point completeness. The method proposed in this paper incorporates not only the object area of the marking point into the completeness calculation, but also the influence of the blurred edge, so that the calculated result is close to the actual marking point completeness. Taking a white solid circle marking point as an example, the method is illustrated in Figure 9a.
In Figure 9b, the region within the contour of the complete marking point is denoted as the M area. In order to accurately calculate the marking point completeness, the four regions that may be contained in the M area are defined, and the contribution of each region to the completeness is weighted according to its brightness. Region R4 is the background, and each background pixel in the M area contributes nothing to the completeness; the more background pixels the M area contains, the lower the completeness. Regions R2 and R3 are the blurred edges between the marking point and the background, so the contribution of each of their pixels in the M area lies between 0 and 1; since the brightness of the regions differs, these contributions are obtained by brightness interpolation.
The R1 included in the M region is defined as R1′, R1′ = R1 ∩ M, and its number of pixels is R1a′. Since R1 is the object area of the marking point, the contribution weight ξ1 of each of its pixels is 1.
The R2 included in the M region is R2′, R2′ = R2 ∩ M, with R2a′ pixels; the contribution weight ξ2 of each pixel is given by Equation (4):

ξ2 = (t2 − t4) / (t1 − t4)    (4)

The R3 included in the M region is R3′, R3′ = R3 ∩ M, with R3a′ pixels; the contribution weight ξ3 of each pixel is given by Equation (5):

ξ3 = (t3 − t4) / (t1 − t4)    (5)

The R4 included in the M region is R4′, R4′ = R4 ∩ M, with R4a′ pixels; its contribution weight ξ4 is 0, since the background contributes nothing to the completeness.
In a word, the marking point completeness, denoted D, can be calculated by Formula (6):

D = (ξ1·R1a′ + ξ2·R2a′ + ξ3·R3a′ + ξ4·R4a′) / (R1a′ + R2a′ + R3a′ + R4a′) × 100%    (6)
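The weighted completeness calculation can be written down directly. The following is a minimal sketch with synthetic region counts and brightness values (not the paper's measurements):

```python
import numpy as np

def completeness(labels, M, t):
    """Weighted marking point completeness D (sketch of Formula (6)).

    labels : per-pixel region index 0..3 for R1..R4
             (0 = brightest object region, 3 = background).
    M      : boolean mask of pixels inside the complete-marking-point contour.
    t      : brightness values (t1, t2, t3, t4) of the four regions.
    """
    t1, t2, t3, t4 = t
    # Contribution weights: object pixels count fully (xi1 = 1), background
    # pixels not at all (xi4 = 0), blurred-edge pixels by brightness
    # interpolation between t4 and t1 (Equations (4) and (5)).
    xi = np.array([1.0,
                   (t2 - t4) / (t1 - t4),
                   (t3 - t4) / (t1 - t4),
                   0.0])
    counts = np.array([(labels[M] == j).sum() for j in range(4)])  # R1a'..R4a'
    return 100.0 * (xi * counts).sum() / counts.sum()

# Toy example: 100 pixels inside the contour M, 80 of them object pixels.
labels = np.array([0] * 80 + [1] * 10 + [2] * 5 + [3] * 5)
M = np.ones(labels.size, dtype=bool)
D = completeness(labels, M, (0.9, 0.6, 0.3, 0.1))
print(D)  # 80*1 + 10*0.625 + 5*0.25 + 5*0 = 87.5
```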

Marking Point Completeness Evaluation
The purpose of quantifying the tire marking point completeness is to rate it against the completeness rating standards given by the enterprise and to shunt the tires on the conveyor line according to the decision rules of the system. Tires with unqualified marking points are sent back to the printing module.

Rating Standards
According to the requirements of the enterprise, the tire marking point completeness is divided into five levels: intact (qualified), minor missing (qualified), medium missing (unqualified), obvious missing (unqualified), and severe missing (unqualified). The more severe the missing part of the marking point is, the lower the corresponding completeness level becomes.
For the solid circle and hollow circle marking points, a completeness greater than 90% is qualified; otherwise it is unqualified, as shown in Table 2. The square and triangle marking points have sharp corners, so their images are less clear and the sharp corners are blurred, as shown in Figure 10a,b. The sharp corners of the marking point are divided into the transitional edge region during segmentation, as shown in Figure 10c,d. This leads to a calculated completeness lower than that of the solid circle and the hollow circle. Therefore, separate rating standards are formulated for the square and triangle marking points; that is, the rating thresholds for squares and triangles are 4% lower than those in Table 2. For example, the square marking points No. 20 and No. 27 and the triangle marking points No. 15 and No. 29 have been adjusted accordingly.

Marking Point Completeness Level    Marking Point Completeness D    Evaluation
Level 1    95% < D ≤ 100%    intact (qualified)
Level 2    90% < D ≤ 95%     minor missing (qualified)
Level 3    80% < D ≤ 90%     medium missing (unqualified)
Level 4    65% < D ≤ 80%     obvious missing (unqualified)
Level 5    D ≤ 65%           severe missing (unqualified)
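The Table 2 lookup can be coded as a small rating function. The sketch below is an assumption about how the standard would be applied in the shunting system; the 4% adjustment for squares and triangles is modeled as a uniform downward shift of all thresholds, which is one possible reading of the text:

```python
def rate(D, shape="circle"):
    """Map a completeness value D (percent) to (level, description, qualified)
    following the Table 2 standards. For square and triangle marking points
    the thresholds are shifted down by 4% as described in the text; the exact
    application of that adjustment is an assumption of this sketch.
    """
    adj = 4.0 if shape in ("square", "triangle") else 0.0
    standards = [(95.0, 1, "intact", True),
                 (90.0, 2, "minor missing", True),
                 (80.0, 3, "medium missing", False),
                 (65.0, 4, "obvious missing", False)]
    for lower, level, desc, qualified in standards:
        if D > lower - adj:
            return level, desc, qualified
    return 5, "severe missing", False

print(rate(96.7))            # (1, 'intact', True)
print(rate(87.5))            # (3, 'medium missing', False)
print(rate(87.5, "square"))  # thresholds shifted down by 4%
```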


Tests and Analysis of the Marking Point Completeness
In order to examine the effectiveness of the tire marking point completeness calculation method, the manual evaluation results of the marking point completeness are used as the standard group for a comparative test. In this paper, 100 samples of tire marking points are selected, their completeness is calculated, and they are rated according to the standards in Table 2. The evaluation of the marking point completeness by manual observation is subjective due to the visual errors of human observation and other manual factors. In order to obtain evaluation results that are as objective as possible, the manual ratings are obtained comprehensively from the ratings of the 100 samples by five quality inspectors of the enterprise. Due to limited space, 30 marking point images are selected and shown in Table 3. Moreover, in order to compare the K-means clustering segmentation algorithm proposed in this paper with a threshold-based segmentation algorithm, the representative maximum between-class variance (Otsu) method [15] is used. Table 3 lists the tire marking point completeness calculated with both the proposed method and the Otsu segmentation method.
The Otsu algorithm and the K-means clustering segmentation algorithm are both used to calculate the marking point completeness. Since the segmentation methods are different, the calculated completeness values are also different. There are large differences in the completeness values for eleven marking points in Table 3 (No. 3, 6, 10, 12, 15, 17, 20, 25, 27, 28, and 29), resulting in different ratings for these marking points. For the solid circle and hollow circle marking points, such as No. 3, 6, 10, and 28, the smearing of the marking point makes the segmented marking point slightly elliptical under the Otsu threshold segmentation, so the calculated completeness is lower than the normal value. For example, for marking point No. 28, the completeness is 93.1% using the Otsu algorithm, corresponding to rating level 2, while it is 96.7% using the K-means clustering segmentation algorithm, corresponding to rating level 1. The latter is closer to the manual rating (level 1) than the former. However, regardless of level 1 or 2, the marking point is qualified. For marking point No. 20, the completeness calculated by the Otsu algorithm is 87.5%, corresponding to level 3 (medium missing), which is unqualified. Its completeness is 92.8% using the K-means clustering segmentation algorithm, corresponding to rating level 2 (minor missing), which is qualified. The manual rating is level 2, which is qualified. In other words, marking point No. 20 is rated as unqualified by the Otsu algorithm, which is inconsistent with the manual rating. The reason is that the image of the square marking point is not clear, especially at the sharp corners. The Otsu algorithm divides the marking point into only foreground and background, so the segmentation of the blurred edge area is not accurate and the accuracy of the calculated completeness is low. This shows that a two-part foreground/background segmentation cannot reflect the influence of the different regions on the completeness. The weighted completeness calculation method based on the clustering hierarchy segmentation can not only segment the foreground of the marking point accurately, but also reflect the degree of contribution of the different regions to the completeness value.
Taking the manual rating as a reference, 28 of the 30 sample points are correctly rated by the method proposed in this paper. Attention should be paid to marking point No. 6, which is rated differently: the completeness calculated by the automatic recognition system is 91.5%, which belongs to the minor missing level and is qualified, while the manual rating is intact, which is also qualified; the evaluation result of the automatic rating system is therefore consistent with that of the manual rating. Attention should also be paid to marking point No. 26, which is rated differently: the completeness calculated by the automatic rating system is 90.3%, which belongs to the minor missing level and is qualified, whereas the manual rating is medium missing, which is unqualified; here the evaluation results of the automatic rating system and the manual rating are inconsistent. In comparison, for marking point No. 25, the completeness calculated by the automatic rating system is 90.5%, which belongs to the minor missing level and is qualified; the manual rating is also minor missing and qualified, so the two evaluations are consistent. The calculated completeness values of marking points No. 25 and No. 26 are almost the same, but the manual evaluations differ. The reason is that the missing area of marking point No. 25 is more scattered than that of No. 26, so its visual impact is weaker. This also shows that it is difficult for manual observation to quantify the completeness with a unified standard, and blurring of the level boundaries is inevitable.
Taking the manual ratings of the enterprise quality inspection group as the standard, the rating accuracy of the automatic recognition system is 95% on the 100 tire marking samples, and the accuracy of its qualified/unqualified evaluation is 99%.
In addition, the average processing time per marking point image is 9.27 ms over the above 100 marking point samples. Each tire always carries two marking points, and the completeness of both should be calculated, so the processing time for the two marking points is approximately 19 ms. The processing time of the recognition stage for the original image is 93.7 ms per image (the marking point recognition is omitted in this paper; see reference [19]). Therefore, the average processing time of each tire image in the entire marking point recognition system is about 112.7 ms, which is far less than the required 1 s and meets the real-time requirement of the tire marking point completeness processing.

Conclusions
A method is proposed to calculate and evaluate the tire marking point completeness based on the K-means clustering segmentation. It is as follows: (1) The image of the marking point is enlarged to improve the accuracy of the completeness calculation; the image is then segmented by the K-means clustering algorithm into four regions to obtain the object area of the marking point. (2) According to the shape of the marking point, the contour of the complete marking point is extracted; the pixels within this contour are weighted based on the brightness differences of the regions, and the marking point completeness is calculated.