Threshold Segmentation and Length Measurement Algorithms for Irregular Curves in Complex Backgrounds

How to quickly and accurately measure the length of irregular curves in complex background images is an urgent problem. To solve it, we first propose a quasi-bimodal threshold segmentation (QBTS) algorithm, which transforms the multimodal histogram into a quasi-bimodal histogram to achieve faster and more accurate segmentation of the target curve. We then propose a single-pixel skeleton length measurement (SPSLM) algorithm based on the 8-neighborhood model, which, for the first time, uses the 8-neighborhood feature to measure length and achieves a more accurate measurement of the curve length. Finally, the two algorithms are tested and analyzed in terms of accuracy and speed on the original datasets of this paper. The experimental results show that the proposed algorithms can quickly and accurately segment the target curve from neon design renderings with complex background interference and measure its length.


Introduction
As cities develop, energy-saving and environmentally friendly neon lights have become a meaningful way to enhance the image of a city [1,2] and an essential part of the urban night scene [3]. Although the patterns vary in style, the vital elements are all irregular curves. Measuring the length of the distinctive curves that make up the pattern from the neon design renderings is necessary before production. It significantly improves efficiency, saves raw materials, and guides production. In addition, in the field of construction work, measuring and analyzing the number and length of cracks on building surfaces based on captured pictures of bridges, tunnels, roads, and other facilities is a significant way to assess their risk and quality [4,5]. Since there are various background interferences in addition to the target curve in the design drawings and captured pictures, it is of great significance in engineering practice to measure the length of the curves in the images with background interference.
Since separating the background noise from the binarized image is difficult, it is necessary to remove the noise interference before the length measurement. The traditional segmentation method is manual tracing, which has low measurement efficiency and significant error in the results. In addition, the blue light and ultraviolet rays from the computer screen can damage the staff's eyes [6,7]. With the development of computer technology, image segmentation techniques that segment and extract target curves from design drawings have gradually become a better alternative to manual tracing. Threshold segmentation is the technique with the most straightforward principle and the broadest application range in image segmentation. Two classical threshold segmentation algorithms, the bimodal method [8] and OTSU [9], segment an image directly according to the grayscale difference between the target and the background; their principle is simple and easy to implement.

The main contributions of this paper are as follows: (1) We propose the QBTS algorithm, which transforms the multimodal grayscale histogram into a quasi-bimodal histogram and improves the speed and accuracy of target curve segmentation. (2) We propose the SPSLM algorithm based on the 8-neighborhood model, which improves the accuracy of irregular curve length measurement. (3) We construct three new image datasets for performance testing of the two proposed algorithms.
The rest of the paper is organized as follows. The steps of the proposed method are described in Section 2. Section 3 discusses experiments on two original datasets in this paper and analyzes the experimental results. Section 4 summarizes the work of this paper.

Proposed Method
Given a design rendering, we propose a curve segmentation and length measurement method, as shown in Figure 1. The method includes three steps: "Image Preprocessing," "Threshold Segmentation," and "Length Measurement." We first obtain the grayscale image, get the segmentation threshold, and measure the length according to the curve skeleton. The method is described in detail below.

Image Preprocessing
The object of the preprocessing is the original neon design rendering. The goal of the preprocessing is to obtain a grayscale image for threshold segmentation, including the three main steps of color space conversion, channel separation, and grayscale processing.

Image Color Space Conversion
On the one hand, the color space description should conform to the visual perception characteristics of the human eye; on the other hand, it should be convenient for image processing. The design renderings are usually in RGB color space, but RGB is a non-uniform color space [25]: the distances between pixel colors in this space are far from the differences perceived by human eyes, so it is not suitable for color image segmentation. The HSV color space, in contrast, is a uniform color space that reflects human visual perception of color. Its V component is independent of the color information of the image, and the H and S components are closely related to the way people perceive color. Therefore, images are converted from RGB color space to HSV color space by linear or non-linear transformations [26].
We can convert the (R, G, B) coordinates of a point in RGB color space to (H, S, V) coordinates in HSV color space using the following formula:
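A standard form of this RGB-to-HSV conversion (assuming R, G, and B are normalized to [0, 1], with V taken as the maximum channel value) is:

```latex
V = \max(R, G, B)
\qquad
S = \begin{cases}
\dfrac{V - \min(R, G, B)}{V}, & V \neq 0 \\[4pt]
0, & V = 0
\end{cases}
\qquad
H = \begin{cases}
\left(60^{\circ} \times \dfrac{G - B}{V - \min(R, G, B)}\right) \bmod 360^{\circ}, & V = R \\[6pt]
60^{\circ} \times \left(\dfrac{B - R}{V - \min(R, G, B)} + 2\right), & V = G \\[6pt]
60^{\circ} \times \left(\dfrac{R - G}{V - \min(R, G, B)} + 4\right), & V = B
\end{cases}
```

with H conventionally set to 0 when R = G = B.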

Images before and after the conversion of a design drawing are shown in Figure 2.

HSV Image Channel Separation and Grayscale Processing
Channel separation is the separation of a multi-channel composite image into multiple single-channel images, each of which represents one feature of the composite image. The HSV image consists of three single-channel images, H, S, and V, representing the image's Hue, Saturation, and Value, respectively. In the neon design renderings, the target line representing the neon light strip is brighter than the background noise, so we can select the V-channel image, which represents the "brightness" feature, to remove the background noise. The three single-channel images after channel separation are shown in Figure 3. Although the V-channel image obtained by channel separation can be used as a grayscale image, its quality is low. To obtain a high-quality grayscale image, the common method is to use RGB as an intermediate and perform grayscale processing with the following equation.

Gray(x, y) = 0.299R(x, y) + 0.587G(x, y) + 0.114B(x, y), where Gray(x, y) represents the grayscale value of the pixel whose coordinates are (x, y) on the image after grayscale processing, and R(x, y), G(x, y), and B(x, y) represent the pixel's R, G, and B channel components. The result of the grayscale processing is shown in Figure 4.
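For illustration, the weighted grayscale conversion above can be sketched with NumPy (the array names and the 1×2 test image are ours, not from the paper; an H×W×3 uint8 image in RGB channel order is assumed):

```python
import numpy as np

def to_gray(rgb):
    """Weighted grayscale conversion: Gray = 0.299 R + 0.587 G + 0.114 B."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    # Round and clamp back to the 8-bit range.
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)

# A 1x2 test image: one pure white pixel and one pure red pixel.
img = np.array([[[255, 255, 255], [255, 0, 0]]], dtype=np.uint8)
print(to_gray(img))  # white -> 255, red -> round(0.299 * 255) = 76
```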

Curve Extraction
Due to the variety of background interference in the neon light design, the grayscale histogram presents multimodal characteristics. Since the brightness of the target curve is higher than the background noise, the peaks of the target curve are always located at the far right of the histogram, and we can regard the remaining peaks as "background peaks." The entire histogram presents a "quasi-bimodal" feature.
This paper proposes a QBTS algorithm based on the "quasi-bimodal" feature of the grayscale histogram. The algorithm mainly includes obtaining the grayscale distribution chart and the segmentation threshold. We first obtain the grayscale histogram of the grayscale image and use the sliding smoothing filter to convolve it to get the grayscale distribution map, and then obtain the segmentation threshold by analyzing the characteristics of the peaks and troughs in the grayscale distribution chart. The specific implementation steps are shown in Figure 5.


Get Grayscale Distribution Chart by Sliding Filter Method
Perform a histogram analysis of Figure 4 to obtain its grayscale histogram, as shown in Figure 6. The x-axis represents the grayscale value, and the y-axis represents the number of pixels with each grayscale value in the grayscale image. Let N_[1×256] be the histogram vector of the grayscale image, where [1 × 256] is the size of the histogram vector and n_i represents the number of pixels whose grayscale value is i; then

N_[1×256] = [n_0, n_1, n_2, n_3, ..., n_254, n_255] (5)

The grayscale histogram has many spikes, so it is filtered by a sliding average: a suitable convolution kernel is selected and linearly convolved with the histogram to make it smoother. The smoothed histogram vector is

N'_[1×256] = [n'_0, n'_1, n'_2, n'_3, ..., n'_254, n'_255] (7)

where each element n'_i of the new histogram vector is obtained by convolving the original histogram with the chosen kernel. The grayscale distribution after the smoothing process is shown in Figure 7.
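The histogram-smoothing step can be sketched as follows (the paper's exact kernel is not reproduced here; a uniform 5-tap moving-average kernel and a random test image are assumed for illustration):

```python
import numpy as np

def smooth_histogram(hist, width=5):
    """Smooth a 256-bin grayscale histogram with a uniform moving-average
    kernel (an assumed kernel; the paper selects its own)."""
    kernel = np.ones(width) / width
    return np.convolve(hist, kernel, mode="same")

# Build the histogram vector N from a grayscale image, then smooth it.
gray = np.random.default_rng(0).integers(0, 256, size=(64, 64))
hist = np.bincount(gray.ravel(), minlength=256)  # n_i = count of gray value i
smoothed = smooth_histogram(hist)
print(smoothed.shape)  # still one bin per grayscale value: (256,)
```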

Get the Segmentation Threshold by the Quasi-Bimodal Characteristics of the Gray Distribution Chart
After obtaining the grayscale distribution of the V (Value) channel image, the segmentation threshold is obtained according to the basic idea of the bimodal method. First, mark the "target peak" and the "background peak" in the grayscale distribution chart, and then select the trough between the two peaks as the threshold. Since the "value" of the target curve is the highest, the rightmost peak in the grayscale distribution chart is marked as the "target peak." The most prominent of the remaining peaks is then chosen as the "background peak." When there is more than one trough between the "target peak" and the "background peak," the trough that is closest to the "target peak" and whose amplitude is less than the average of all the troughs is marked as the threshold.
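The threshold rule above can be sketched on a smoothed histogram as follows (a minimal sketch: peaks and troughs are taken as simple strict local extrema, and the synthetic two-peak histogram is our own test input; the paper's peak-detection details may differ):

```python
import numpy as np

def qbts_threshold(smoothed):
    """Sketch of the QBTS threshold rule on a smoothed 256-bin histogram."""
    h = np.asarray(smoothed, dtype=float)
    peaks = [i for i in range(1, 255) if h[i - 1] < h[i] >= h[i + 1]]
    troughs = [i for i in range(1, 255) if h[i - 1] > h[i] <= h[i + 1]]
    target = peaks[-1]                                   # rightmost peak
    background = max((p for p in peaks if p != target), key=lambda p: h[p])
    between = [t for t in troughs if background < t < target]
    mean_amp = float(np.mean([h[t] for t in troughs]))
    # Prefer the trough closest to the target peak whose amplitude is
    # below the average trough amplitude; fall back to the closest trough.
    candidates = [t for t in between if h[t] < mean_amp]
    return max(candidates) if candidates else max(between)

# Synthetic quasi-bimodal histogram: background bump near 100, target near 240.
x = np.arange(256)
h = 1000 * np.exp(-(x - 100) ** 2 / 800) + 300 * np.exp(-(x - 240) ** 2 / 200)
print(qbts_threshold(h))  # a trough index between the two peaks
```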
According to the above steps, mark the "target peak," "background peak," and "threshold value" in the grayscale distribution diagram, as shown in Figure 8. As can be seen from Figure 8, the "target peak," "background peak," and "threshold" are "245", "97," and "208," respectively. We segment Figure 4 with the threshold marked in Figure 8 and get the target curve, as shown in Figure 9.

Curve Refinement
The image refinement of the binary image obtained by threshold segmentation can obtain the single-pixel skeleton of the target curve. We use the improved Zhang-Suen refinement algorithm to refine the target curve [27]. On the one hand, the algorithm has simple logic and a fast running speed. On the other hand, it overcomes the disadvantage of the traditional Zhang-Suen algorithm, which misses some pixels and leaves parts of the refined texture wider than a single pixel.
Using this algorithm to refine Figure 9, we can obtain a single-pixel skeleton image, as shown in Figure 10.

Skeleton Length Measurement
After obtaining the single-pixel skeleton of the target curve, the method of directly counting the number of pixels as the skeleton length has a large error [15][16][17]. To improve the measurement accuracy, this paper proposes a single-pixel skeleton length measurement (SPSLM) algorithm based on the 8-neighborhood model. We label the 8 pixels adjacent to the pixel p1 and use the schematic diagram shown in Figure 11 to represent the 8-neighborhood model of p1. The specific implementation steps are shown in Figure 12.

B(p1), N, and C are three parameters determined by the 8-neighborhood of p1, as shown in Figure 12. B(p1) represents the number of foreground pixels in the 8-neighborhood of p1, while N and C represent the numbers of foreground pixels among the 4 pixels directly adjacent and the 4 pixels diagonally adjacent to p1, respectively. According to the 8-neighborhood model of p1 shown in Figure 11, B(p1), N, and C can be represented by the following equations.

B(p1) = p2 + p3 + p4 + p5 + p6 + p7 + p8 + p9
N = p2 + p4 + p6 + p8
C = p3 + p5 + p7 + p9
Next, we discuss the connection between L_p1 and the three parameters B(p1), N, and C. It is important to note that the following discussion of the distribution of pixels in the 8-neighborhood of p1 excludes the configurations that meet the labeling conditions of the improved Zhang-Suen refinement algorithm. Here, A denotes the side length of a single pixel, B denotes the diagonal length of a single pixel, and L_p1 denotes the actual length represented by the pixel p1. According to the different values of B(p1), the 8-neighborhood model of p1 is classified and discussed. The model diagrams are shown in Appendix A.
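The three parameters can be computed directly from a binary image (a sketch; the neighbor labels are assumed to follow the usual Zhang-Suen layout, with p2 above p1 and p3 through p9 continuing clockwise, which is our reading of Figure 11):

```python
import numpy as np

def neighborhood_params(img, r, c):
    """Compute B(p1), N, and C for the interior pixel p1 at (r, c)."""
    p2 = img[r - 1, c]; p3 = img[r - 1, c + 1]; p4 = img[r, c + 1]
    p5 = img[r + 1, c + 1]; p6 = img[r + 1, c]; p7 = img[r + 1, c - 1]
    p8 = img[r, c - 1]; p9 = img[r - 1, c - 1]
    B = p2 + p3 + p4 + p5 + p6 + p7 + p8 + p9  # all 8 neighbors
    N = p2 + p4 + p6 + p8                      # directly adjacent pixels
    C = p3 + p5 + p7 + p9                      # diagonally adjacent pixels
    return int(B), int(N), int(C)

# A tiny skeleton fragment: one horizontal neighbor (p4) and one diagonal (p7).
img = np.array([[0, 0, 0],
                [0, 1, 1],
                [1, 0, 0]])
print(neighborhood_params(img, 1, 1))  # -> (2, 1, 1)
```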

• If B(p1) = 1, the possible configurations and the corresponding expressions for L_p1 are shown in Figure A1;
• If B(p1) = 2, they are shown in Figure A2;
• If B(p1) = 3, they are shown in Figure A3;
• If B(p1) = 4, they are shown in Figure A4;
• If B(p1) = 5, they are shown in Figure A5;
• If B(p1) = 6, they are shown in Figure A6;
• If B(p1) = 7, they are shown in Figure A7;
• If B(p1) = 8, they are shown in Figure A8.
Traverse the single-pixel skeleton image, and let N_i denote the number of pixels that satisfy B(p1) = i. The pixel length L_TP of the target curve skeleton is then obtained by summing, over all i, the lengths contributed by the N_i pixels of each case.
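As a sketch in the spirit of the SPSLM idea (not the paper's exact Appendix A case analysis), a neighbor-based length estimate can count every pair of directly adjacent skeleton pixels as contributing A = 1 and every diagonally adjacent pair as contributing B = √2, with each edge counted once:

```python
import numpy as np

def skeleton_length(skel):
    """Approximate pixel length of a single-pixel skeleton (hedged sketch)."""
    s = (np.asarray(skel) > 0).astype(int)
    # Horizontal + vertical foreground pairs, each edge counted once.
    direct = np.sum(s[:, :-1] & s[:, 1:]) + np.sum(s[:-1, :] & s[1:, :])
    # Main-diagonal + anti-diagonal foreground pairs.
    diagonal = np.sum(s[:-1, :-1] & s[1:, 1:]) + np.sum(s[:-1, 1:] & s[1:, :-1])
    return direct * 1.0 + diagonal * np.sqrt(2)

# A 5-pixel diagonal line spans 4 diagonal steps.
diag = np.eye(5, dtype=int)
print(skeleton_length(diag))  # -> 4 * sqrt(2) ≈ 5.66
```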

Size Transformation
Assume α represents the scale factor from pixel size to actual size, in cm/pixel. Setting the length A of a single pixel to unit 1, the actual length of the pixel skeleton can be calculated by the following formula.
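A hedged sketch of this size transformation, using the symbols defined above (α in cm/pixel, L_TP the pixel length of the skeleton), is the straightforward scaling:

```latex
L_{\mathrm{real}} = \alpha \cdot L_{TP} \quad (\mathrm{cm})
```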

Experiments and Results
This section shows the experimental results of the proposed QBTS algorithm and SPSLM algorithm on the original datasets and compares them with the results of other algorithms. Furthermore, all experiments were performed on an Intel Core i5-9400 2.9 GHz desktop with 8 GB of RAM.

Performance Metrics
To evaluate the proposed method, we choose accuracy and running speed as evaluation metrics. The segmentation accuracy of the QBTS algorithm is defined as Acc_S = N_same / N_total, where N_same represents the number of pixels that have the same value in the binary image produced by the QBTS algorithm and in the binary image of the standard segmented image, and N_total represents the total number of pixels. From this expression, the range of Acc_S is [0, 1]. The measurement accuracy of the SPSLM algorithm is defined as Acc_M = L_M / L_R, where L_M represents the length measured by the SPSLM algorithm and L_R represents the reference value obtained by manual measurement. Since L_R is itself a manual measurement, it also carries a certain error, so in some cases the value of Acc_M may be greater than 1. For the entire dataset, the closer Acc_M is to 1, the higher the measurement accuracy of the SPSLM algorithm.
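The two metrics can be computed directly from their definitions (the example arrays and lengths are illustrative, not from the paper's datasets):

```python
import numpy as np

def acc_s(pred, ref):
    """Segmentation accuracy: fraction of pixels with the same value in the
    predicted binary image and the reference binary image."""
    pred, ref = np.asarray(pred), np.asarray(ref)
    return float(np.mean(pred == ref))

def acc_m(measured, reference):
    """Measurement accuracy: ratio of measured length to the manually
    measured reference length (may exceed 1)."""
    return measured / reference

pred = np.array([[1, 0], [1, 1]])
ref = np.array([[1, 0], [0, 1]])
print(acc_s(pred, ref))      # 3 of 4 pixels agree -> 0.75
print(acc_m(102.0, 100.0))   # -> 1.02
```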
In addition, the running speed of an algorithm is usually measured in terms of running time. The shorter the time it takes for the algorithm to complete the segmentation or length measurement, the faster it is.

Dataset
We conduct experiments on three original datasets to analyze these two algorithms' accuracy and running speed. Additionally, all images are less than 2000 × 2000 in size and have different pixel dimensions.
Mini. This dataset contains 12 original neon design renderings, as shown in Figure 13. In addition, the dataset also includes 12 standard target curves that were segmented manually by multiple researchers. This dataset is used to test the segmentation accuracy of the QBTS algorithm.

Neon Curve. This dataset contains 139 images of neon pattern curves without noise interference and 139 corresponding single-pixel skeleton images. Hunan Kangxuan Technology Co., Ltd. provided the original images and the corresponding curve length values. The single-pixel skeleton images are obtained by refining the original images with the improved Zhang-Suen algorithm. This dataset is used to test the measurement accuracy and running speed of the SPSLM algorithm.
Neon Rendering. This dataset contains 198 images, all sourced from the Internet. These images are actual neon design renderings containing patterned curves and backgrounds with many types of noise. This dataset is used to test the running speed of the QBTS algorithm.

• Accuracy of Segmentation
To test the segmentation accuracy of the QBTS algorithm, we compared it with the OTSU algorithm and the bimodal method. We used the three threshold segmentation algorithms to segment the images of the Mini dataset, and the experimental results are shown in Figure 14. Each segmentation result consists of, from left to right, the original image, the binary image of the standard target curve, and the binary images obtained by the QBTS algorithm, the OTSU algorithm, and the bimodal method, respectively. It can be seen intuitively from Figure 14 that the similarity between the binary image segmented by the QBTS algorithm and the standard binary image is the highest. The images segmented by the other two algorithms still have varying degrees of noise interference, and in some segmented images the target curve cannot be seen at all. The results show that the segmentation result of the QBTS algorithm is better: it can accurately remove noise interference and segment the target curve.
Figure 14. The original images, the standard binary images, and the binary images obtained by segmentation with the QBTS algorithm, the OTSU algorithm, and the bimodal method, respectively.
To conduct a more accurate quantitative analysis of the segmentation accuracy of the QBTS algorithm, we calculated the segmentation accuracy of the three algorithms according to the definition of Acc_S, as shown in Figure 15. The ordinate in the figure is the segmentation accuracy of the three algorithms.
The results show that the average segmentation accuracy of the QBTS algorithm is 97.9%, much higher than the 89.6% and 64.4% of the other two algorithms. The distribution of its segmentation accuracy is also more concentrated, indicating that the QBTS algorithm is more robust. At the same time, we noticed that the images segmented by the QBTS algorithm are not entirely accurate, for two main reasons. On the one hand, the QBTS algorithm is a global threshold segmentation algorithm and cannot segment every boundary of the target curve finely and accurately. On the other hand, the reference images of the dataset are segmented manually by researchers, and the errors that inevitably occur in this process affect the experimental results.
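The formal definition of Acc_S is given earlier in the paper. As an illustrative stand-in (an assumption of this sketch, not necessarily the paper's exact formula), the pixel-wise agreement between a segmented binary image and the manually labeled reference can be computed as:

```python
import numpy as np

def pixel_accuracy(pred, ref):
    """Fraction of pixels on which the segmented binary image agrees
    with the reference binary image (an illustrative stand-in for Acc_S)."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    return float((pred == ref).mean())
```

Under such a definition, residual noise pixels and imprecise curve boundaries both lower the score, which is consistent with the two error sources discussed above.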

• Running Speed of QBTS Algorithm
In addition to testing segmentation accuracy, we also tested the running speed of the QBTS algorithm on the Neon Rendering dataset. Figure 16 shows the time required by each of the three algorithms to segment the images in the Neon Rendering dataset. To compare segmentation speed more intuitively, we calculated the ratio of the time required by the QBTS algorithm and by the OTSU algorithm to the time required by the Bimodal algorithm, as shown in Figure 17.

We can see from Figure 16 that the time consumed by the QBTS algorithm to segment images is generally shorter than that of the other two algorithms. In addition, the figure contains some discrete points outside the 1.5 IQR range. The main reason is that the time complexity of the QBTS algorithm is O(mn), where m × n is the size of the image; the running time is closely related to the image size, so larger images take longer to process. We can see from Figure 17 that the average ratios of the segmentation times of the OTSU algorithm and of the QBTS algorithm to that of the Bimodal algorithm are 0.98 and 0.47, respectively. This shows that, for the same image, the QBTS algorithm segments in a shorter time: its segmentation speed is about 50% higher.
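The O(mn) behavior discussed above comes from the single pass that any global threshold method makes over an m × n image. The QBTS histogram transformation itself is specified earlier in the paper; the final thresholding step shared by all global methods can be sketched as:

```python
import numpy as np

def apply_global_threshold(gray, t):
    """One O(m*n) pass: every pixel of an m x n grayscale image is
    compared against a single global threshold t, so the running time
    scales linearly with the image size."""
    gray = np.asarray(gray)
    return (gray > t).astype(np.uint8)
```

For example, `apply_global_threshold(img, 128)` maps every pixel above 128 to foreground (1) and the rest to background (0); the cost of choosing t differs between the three algorithms, but this final pass is common to all of them.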
To compare the performance of these three image segmentation algorithms more intuitively and clearly, we summarize and extract the key data from Figures 15-17, as shown in Table 1. It can be seen from Table 1 that the performance of the QBTS algorithm is much better than that of the OTSU algorithm and the Bimodal algorithm in terms of average segmentation accuracy and segmentation speed. According to the above experimental results, the QBTS algorithm proposed in this paper performs better when segmenting the target curve from the neon sign design drawing with complex background noise interference. Compared with the OTSU and Bimodal algorithms, it has better segmentation accuracy, robustness, and faster segmentation speed.

• Accuracy of Measurement
Figure 18 shows the length measurement accuracy of the SPSLM algorithm and the other four methods. The SPSLM algorithm in the figure is the single-pixel skeleton length measurement algorithm based on the 8-neighborhood model proposed in this paper. Method 1, Method 2, Method 3, and Method 4 are the curve length measurement methods used in [13][14][15], [16][17][18][19][20], [12], and [21,22], respectively. We can see from Figure 18 that the average measurement accuracy of the SPSLM algorithm proposed in this paper is 99.1%, while the average measurement accuracies of the other four methods are 88.3%, 92.5%, 19.9%, and 88.1%, respectively; this shows that the SPSLM algorithm is more accurate. We can also see that some values are greater than 100%; this phenomenon is consistent with our analysis of Acc_M.
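Acc_M is defined earlier in the paper; treating it here simply as the measured-to-reference length ratio expressed as a percentage (an assumption of this sketch) makes clear how values above 100% arise when a method overestimates the length:

```python
def measurement_accuracy(measured, reference):
    """Measured-to-reference length ratio as a percentage (an assumed
    form of Acc_M); an overestimated length yields a value above 100%."""
    return 100.0 * measured / reference
```

Under this reading, a method that reports 105 mm for a 100 mm curve scores 105%, which is why some points in Figure 18 sit above the 100% line.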

• Running Speed of SPSLM Algorithm
After the accuracy test, we ran the SPSLM algorithm's speed test on the Neon Curve dataset. Figure 19 shows the running speed of the length measurement algorithms. The results show that the average measurement time of Method 3 is 0.001 s, much shorter than that of the other methods: because Method 3 mainly estimates the length through the area range, it has low time complexity, but its measurement accuracy is also very low. The average measurement time of Method 4 is 6.90 s, much higher than that of the other four methods: it includes Canny edge detection, image refinement, and skeleton length measurement, a processing pipeline that is cumbersome and time-consuming.

Figure 19. Running time of different length measurement methods.
Figure 20 shows the running speeds of SPSLM, Method 1, and Method 2 separately. Their average measurement times were 0.615 s, 0.565 s, and 0.577 s, respectively; these are comparable and have similar distributions. Because all three methods first thin the image to obtain a single-pixel skeleton and then measure the length of that skeleton, their time complexity is the same, and so are their measurement speeds. To compare the performance of these five length measurement methods more intuitively and clearly, we summarize and extract the pivotal data from Figures 18-20, as shown in Table 2.
As can be seen from Table 2, in terms of measurement accuracy, the average accuracy of the SPSLM algorithm is much higher than that of the other algorithms. In terms of measurement speed, while maintaining the necessary accuracy, the average running speed of the SPSLM algorithm is comparable to that of Method 1 and Method 2, and all three are much faster than Method 4.
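The full SPSLM algorithm is specified earlier in the paper. Its core idea of exploiting the 8-neighborhood of each skeleton pixel can be illustrated by the common skeleton-length estimate below, which weights axial neighbor pairs as 1 and diagonal pairs as √2 (this specific weighting is an assumption of the sketch, not necessarily the paper's exact rule):

```python
import numpy as np

def skeleton_length(skel):
    """Estimate the length of a single-pixel skeleton by summing the
    distances between 8-connected neighbor pairs: 1 for axial pairs,
    sqrt(2) for diagonal pairs. Each pair is counted exactly once."""
    skel = np.asarray(skel, dtype=bool)
    h, w = skel.shape
    axial = diagonal = 0
    for r, c in zip(*np.nonzero(skel)):
        # look only "forward" (east, south, and the two lower diagonals)
        # so that every neighbor pair is counted once
        if c + 1 < w and skel[r, c + 1]:
            axial += 1
        if r + 1 < h and skel[r + 1, c]:
            axial += 1
        if r + 1 < h and c + 1 < w and skel[r + 1, c + 1]:
            diagonal += 1
        if r + 1 < h and c - 1 >= 0 and skel[r + 1, c - 1]:
            diagonal += 1
    return axial + np.sqrt(2.0) * diagonal
```

For instance, a horizontal run of five skeleton pixels measures 4.0, while a three-pixel diagonal measures 2√2 ≈ 2.83, which is what distinguishes 8-neighborhood-aware estimates from naive pixel counting.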

Table 2. Pivotal data of the five length measurement methods, extracted from Figures 18-20.

Method      Average Accuracy (%)    Average Running Time (s)
SPSLM       99.1                    0.615
Method 1    88.3                    0.565
Method 2    92.5                    0.577
Method 3    19.9                    0.001
Method 4    88.1                    6.90

Based on the above analysis of running speed and measurement accuracy, the SPSLM algorithm dramatically improves the measurement accuracy while maintaining low algorithmic complexity, and its overall performance is better than that of the other length measurement methods.

Conclusions
This paper used digital image processing techniques to measure the length of irregular curves in neon design renderings. Firstly, a new QBTS algorithm was proposed to segment and extract the target curve. Then, a single-pixel skeleton length measurement algorithm based on the 8-neighborhood model was proposed to measure the length of the skeleton of the target curve. Finally, we conducted tests on the three original datasets of this paper, respectively. The results showed that the average segmentation accuracy of the QBTS algorithm was 97.9%, and the segmentation speed was more than 50% higher than the other two algorithms. The average measurement accuracy of the length measurement algorithm was 99.1%, higher than the four existing length measurement algorithms, and the measurement speed was comparable.
The above results demonstrate that the two algorithms proposed in this paper can be used for the problem of "accurately segmenting irregular target curves from images with background interference and measuring their length accurately", and can be applied in engineering practice. Subsequent research will further improve the segmentation accuracy and applicability of the threshold segmentation algorithm, laying a solid foundation for future applications in more areas of target curve segmentation.

Data Availability Statement:
The datasets that support the findings of this study are openly available at https://github.com/csuruan/Image-Processing-Datasets-Ruan.git (accessed on 28 June 2022). The datasets used in this paper were all created by the authors. Some of the pictures included in the datasets are from public resources on the Internet, and the rest are from Hunan Kangxuan Technology Co., Ltd. All images obtained from public sources on the Internet are used for scientific research purposes only and do not involve commercial interests.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Acknowledgments:
We are grateful to the High Performance Computing Center of Central South University for assistance with the computations.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A
In this appendix, we show the 8-neighborhood model diagrams of p 1 corresponding to different values of B p 1 .
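In the Zhang-Suen notation used here, B(p1) is the number of nonzero pixels among the 8 neighbors of p1. A minimal helper to compute it (for a pixel away from the image border) is:

```python
import numpy as np

def neighborhood_count(img, r, c):
    """B(p1): number of nonzero pixels among the 8 neighbors of (r, c).
    Assumes (r, c) is not on the image border."""
    window = np.asarray(img)[r - 1:r + 2, c - 1:c + 2]  # 3x3 patch
    return int(window.sum() - window[1, 1])             # exclude p1 itself
```

B(p1) ranges from 0 (isolated pixel) to 8 (interior pixel), which is what the diagrams in this appendix enumerate.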