SNR Analysis for Quantitative Comparison of Line Detection Methods

Abstract: The need for line detection in images is growing rapidly owing to its importance in many image processing applications. The selection of an appropriate line detection method is essential for accurate detection of line pixels, but few studies provide an analytical basis for selecting a specific line detection method. In this study, to address this problem, a method to analytically determine the signal-to-noise ratio (SNR) of line detection methods is proposed. Three line detection methods were selected for comparison: edge-detection (ED)-based, second-derivative (SD)-based, and sum of gradient angle differences (SGAD)-based line detection. This study then quantifies the SNR of the three line detectors through error propagation and signal-noise coupling. In addition, the derived SNRs are graphically visualized to explicitly compare the performance of the line detectors. The quantified SNRs were then validated by showing that they are highly correlated with the completeness and correctness observed in experiments with a set of natural images. The experimental results show that the proposed SNR analysis can be used to select or design a suitable line detector.


Introduction
In the field of image processing, edges and lines are important features used to detect object shapes in a scene. In the literature, lines are classified as "ridge" or "valley" depending on their intensity relative to that of their neighbors [1][2][3][4][5][6]. Lines are primary features observed in various types of images and are used to detect and recognize the appearance of objects in an image. Recently, with the development of various applications, the demand for efficient line extraction methods has been increasing rapidly [7][8][9][10][11][12][13].
Quality line extraction requires line detection with high signal-to-noise ratio (SNR) values as well as high-quality subpixel line localization and linking processes [5]. Moreover, although criteria for selecting a method for detecting linear features in noisy images are always needed, few studies have compared the performance of line detection methods to help establish such criteria. Therefore, in this study, a quantitative comparison of the effectiveness of line detection methods, and of their merits and demerits in detecting line features under varying conditions, is presented based on SNR analysis.
In this study, the influence of image smoothing on both the signal and noise strengths was first investigated. For the comparative study of line feature detectors, a second-derivative (SD)-based method [19] was first selected and compared with an edge-detection (ED)-based indirect method.
Moreover, the SD-based line detection method was shown to produce dislocalized line pixels for relatively large line widths [6]. A line detection method based on the sum of gradient angle differences (SGAD) was proposed to overcome these problems [6]. Evaluating line detectors based on experiments with a limited number of test images does not comprehensively demonstrate their performance under varying conditions. Thus, analytical quantification of the SNR of line detectors is required to determine their efficiency. An SNR analysis was used in [18] to identify an optimal edge detector. However, few studies have considered evaluating line detectors based on analytical quantification of their SNR. This primarily motivated the quantitative derivation of the SNRs of line detectors in this study.
A set of line detectors was compared based on their experimental characteristic curves in [20]. Moreover, multiple line detection methods were evaluated using multiple images in [21]. However, evaluating line detectors with a set of images does not comprehensively demonstrate their performance under varying conditions. In this study, the quantitative derivation of the SNRs of line detectors overcomes the limitations of the existing evaluation methods.
The rest of this study is organized as follows. Section 2 describes related studies on line detection. Section 3 compares the performance of ED-based and SD-based line detection based on SNR analysis. Section 4 describes the SNR of the SGAD-based line detection method. The results obtained using real images are described in Section 5, followed by the conclusion in Section 6.

Related Work
Pixels in an image were classified into edges, ridges, valleys, and other geometric features based on an analysis of the coefficients obtained by fitting a Legendre polynomial of degree two or less to local intensities in [22]. However, this approach has a nontrivial problem for line detection because it requires multiple steps to detect line pixels and a set of well-tuned thresholds. In another study, ridges and valleys were detected by finding zero crossings of the first-order directional derivatives [1]. The method was based on 10 coefficients obtained by fitting a bivariate cubic polynomial to local intensities. One limitation of this method is that it may produce nontrivial ringing artifacts, and the approach requires a large window size to calculate the 10 coefficients of the bivariate cubic polynomial. A Zernike-moment-based, parametric edge and line detection method was proposed in [23]. Differential geometry was used to detect line pixels in [2,3]. One limitation of this approach is that it requires a long processing time to calculate the first-, second-, and third-order derivatives. A line detection approach based on second-order derivatives was proposed in [19,24]. In similar studies, this SD-based approach was used to detect line pixels in aerial images and synthetic aperture radar (SAR) images [25,26]. However, as shown in the following sections, SD-based line detection methods have limitations for relatively large line widths. A multi-scale line detection approach was proposed in [24]. This approach may, however, face limitations when line features are located close to other non-homogeneous features, because neighboring features blend at larger scales.
A simple line detection method using relational operations was proposed in [4]. Although its computational cost is low, one of the limitations of this approach is that selecting an appropriate window size depends on the width of a line to be detected.
Multi-scale anisotropic Gaussian kernels were also used for line detection in [27]. This approach suffers from a high computational cost. Moreover, a multi-step line detection approach was proposed in [28]; its computation is cost-intensive because of the convolution of a set of filters and a sequence of steps. Recently, a convolutional neural network-based approach was proposed to detect the boundaries among walls, floors, and ceilings [29]. However, this approach requires a well-trained network to implement boundary detection and detects only predefined types of boundary pixels, such as wall-wall, wall-ceiling, and wall-floor.
Furthermore, a creaseness measure was used to detect line pixels in [21]. One limitation of this approach is that a simple sum of gradient differences may lose direction information when detecting line pixels. The SGAD-based line detection approach resolved this limitation by summing the absolute values of gradient angle differences [6].

Method-SNR of ED and SD for Line Detection
It is necessary to choose between edge and line detectors when extracting linear features from an image. To make the right decision, it is necessary to know their performance under various conditions, such as line width, noise level, and the smoothing factor used for noise suppression. Although this information is important for making the decision, very few studies have focused on this issue. To fill this gap, in this section, the performances of ED and SD are compared by investigating their respective SNRs under various conditions.

Line Model and Derivation of Its Derivatives
In camera-captured images, there is always a certain amount of blur and noise. An image I can be modeled as a signal F convoluted with a certain amount of blur b, plus noise n, which is described in various approaches [30][31][32][33][34][35][36][37][38] as follows:

I = F * b + n, (1)

where * indicates the convolution operator.
Variations of the blurring patterns around edges caused by varying contrast have been analyzed by modeling edge profiles with two blur parameters in [39]. For simplicity, however, the imaging model with one blur parameter in Equation (1) is used in the following derivation. In this study, the line profile is modeled by two factors, namely, width, w, and contrast, k, as shown in Figure 1. Then, according to Figure 1, the 1D line signal can be mathematically modeled as follows:

F(x) = h + k, if |x − L| ≤ w/2; F(x) = h, otherwise, (2)

where h, L, and x are the background intensity, the coordinate of the center of the line, and the coordinate of an arbitrary location, respectively. L and x are coordinates with reference to the line normal direction. To consider the blurring effect in the image formation process, a 1D Gaussian blur function is introduced as follows:

b(t) = (1/(√(2π) σ_b)) exp(−t²/(2σ_b²)), (3)

where σ_b and t are the blurring factor and the distance from the center of the Gaussian function, respectively. The dashed line in Figure 2 shows a scaled version of the blur function with variation of the location, x. Thus, from Figure 2, the intensity value at x captured by a camera can be derived as the convolution of the line signal with the blur function.

Figure 2. Gaussian blur for a line model.
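As a concrete illustration, the line model and the blur function above can be sketched numerically. The snippet below uses illustrative parameter values (not taken from the experiments) to build the ideal profile, a 1D Gaussian blur kernel, and their discrete convolution.

```python
import numpy as np

def line_profile(x, h, k, L, w):
    # Ideal 1D line model: background h, plus contrast k inside |x - L| <= w/2.
    return h + k * (np.abs(x - L) <= w / 2.0)

def gaussian_blur_kernel(t, sigma_b):
    # 1D Gaussian blur function with blurring factor sigma_b.
    return np.exp(-t**2 / (2.0 * sigma_b**2)) / (np.sqrt(2.0 * np.pi) * sigma_b)

# Illustrative parameters.
h, k, L, w, sigma_b = 0.2, 1.0, 0.0, 3.0, 1.0
dx = 0.01
x = np.arange(-10.0, 10.0, dx)
f = line_profile(x, h, k, L, w)
b = gaussian_blur_kernel(x, sigma_b)
blurred = np.convolve(f, b, mode="same") * dx  # discrete approximation of F * b
```

The blurred profile peaks at the line center with a value below h + k, because the Gaussian blur spreads the line contrast into the background.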
For noise removal, image smoothing is usually applied by convoluting an image with a 2D Gaussian function, which can be described as follows:

s(u, v) = (1/(2π σ_s²)) exp(−(u² + v²)/(2σ_s²)), (5)

where σ_s is the smoothing factor, and u and v are the distances from the center of the Gaussian function in the column and row directions, respectively. Accordingly, its 1D function can be written as follows:

s(t) = (1/(√(2π) σ_s)) exp(−t²/(2σ_s²)). (6)

Then, the application of the smoothing function s generates the image I_s, which can be written as follows:

I_s = I * s = F * b * s + n * s. (7)

Thus, after blurring and smoothing, the line signal in Equation (2) is reformed as per [40] as follows:

f_s(x) = h + k [Φ_σ(x − L + w/2) − Φ_σ(x − L − w/2)], with σ = √(σ_b² + σ_s²), (8)

where Φ_σ denotes the cumulative distribution function of a zero-mean Gaussian with standard deviation σ. Thus, the first derivative of the cross-section profile of the smoothed line with respect to x is derived from Equation (8) as follows:

f_s′(x) = k [g_σ(x − L + w/2) − g_σ(x − L − w/2)], (9)

where g_σ denotes the Gaussian density with standard deviation σ. Additionally, the second derivative of the cross-section profile of the smoothed line with respect to x is derived from Equation (9) as follows:

f_s″(x) = k [g_σ′(x − L + w/2) − g_σ′(x − L − w/2)]. (10)

Measure of Signal Strengths
Edge strengths are measured at the boundaries of the smoothed line model in terms of the values of the first and second derivatives. The first derivative indicates the gradient of the smoothed edge profile and is derived by substituting x = L − w/2 in Equation (9) as follows:

f_s′(L − w/2) = k [g_σ(0) − g_σ(w)]. (11)

To distinctly determine an edge pixel in the local areas of an image in a non-maxima suppression process, the absolute values of the gradients at its neighboring pixels should be sufficiently lower than the absolute value at the edge location, x = L − w/2. Thus, the absolute value of the second derivatives at the neighboring pixels should be high. Therefore, the neighboring pixel one pixel away from the edge location is selected, and the second derivative at the pixel location x = L − w/2 − 1 is derived as another measure of edge strength from Equation (10) as follows:

f_s″(L − w/2 − 1) = k [g_σ′(−1) − g_σ′(−w − 1)]. (12)

Alternatively, to distinctly determine a line pixel in a non-maxima suppression process, the absolute values of the first derivatives, or gradients, at the neighboring pixels of the line pixel should be high. Therefore, as one of the neighboring pixels, the pixel located at x = L − 1 is selected. The first derivative at this pixel, as a measure of line strength, is derived from Equation (9) as follows:

f_s′(L − 1) = k [g_σ(w/2 − 1) − g_σ(−w/2 − 1)]. (13)

Moreover, for detecting a line pixel, the absolute value of its second derivative at the line location x = L must be high, which can be derived from Equation (10) as follows:

f_s″(L) = k [g_σ′(w/2) − g_σ′(−w/2)]. (14)
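Under the smoothed line model, these signal strengths reduce to differences of Gaussian values at the sampling points named above. The sketch below assumes a total scale σ = √(σ_b² + σ_s²) and illustrative parameter values; for a ridge, the gradient at the edge location is positive and the second derivative at the line center is negative.

```python
import numpy as np

def g(t, s):
    # Gaussian density with standard deviation s.
    return np.exp(-t**2 / (2.0 * s * s)) / (np.sqrt(2.0 * np.pi) * s)

def dg(t, s):
    # First derivative of the Gaussian density.
    return -t / (s * s) * g(t, s)

# Illustrative parameters: contrast k, width w, blur and smoothing factors.
k, w, sigma_b, sigma_s = 1.0, 3.0, 1.0, 1.0
s = np.hypot(sigma_b, sigma_s)   # total Gaussian scale

# First derivative of the smoothed profile is k*[g(x-L+w/2) - g(x-L-w/2)].
edge_grad   = k * (g(0.0, s) - g(-w, s))                # at x = L - w/2
edge_second = k * (dg(-1.0, s) - dg(-w - 1.0, s))       # at x = L - w/2 - 1
line_grad   = k * (g(w/2 - 1.0, s) - g(-w/2 - 1.0, s))  # at x = L - 1
line_second = k * (dg(w/2, s) - dg(-w/2, s))            # at x = L
```

The signs match the assumptions used later in the SNR derivation: the second derivative one pixel outside the edge is positive, while the second derivative at a ridge center is negative.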

Measure of Noise Strengths
To measure the SNR for line detection in a smoothed image, it is necessary to quantify both the signal strengths and the amount of noise remaining after smoothing. However, there have been few studies on the quantification of the amount of noise after smoothing. Thus, in the following section, the residual noise after smoothing is first quantified, and the correlation between the pixels of a smoothed noise image is derived using an error propagation scheme.

Correlation of Noises in Smoothed Images
First, the amount of noise at an arbitrary location (r,c) in a noise image is denoted by n as follows: n = n(r, c).
Noise n is assumed to be symmetrically distributed around zero, and its expectation E{n} satisfies the following equation: Then, the dispersion, or variance, of the noise at a single pixel, σ²_n, is defined as follows: By extending the expectation and dispersion of the noise at one pixel to the noises of all the pixels in an image, the expectation and dispersion of the vector containing all noises can be written as follows: where vec(·) is an operation converting a matrix into a column vector and I_c is the continuous version of the identity matrix. The noise amount remaining after smoothing at an arbitrary location (r,c) can be modeled as follows: Furthermore, the expectation of the vector of the smoothed noises is derived as follows: Moreover, the dispersion of the vector of the smoothed noises is derived as follows: To quantify the correlation between the noises remaining after smoothing, the covariance between them is first derived for two arbitrary pixels at (r,c) and (r − α, c − β) as follows: If we let d denote the distance between the two positions (r,c) and (r − α, c − β), then d is calculated as follows: Now, according to Equation (22), the covariance between the two pixels can be calculated as follows: Then, the dispersion D{vec(n * s)}, which is the auto-covariance of the remaining noise at (r,c), is derived from Equation (24) as follows: Therefore, the correlation among the noises remaining after convolution with a smoothing function s, for two arbitrary pixels separated by a distance d, is derived from Equations (24) and (25) as follows:
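For a Gaussian smoothing kernel, the correlation between smoothed white noise at two pixels separated by a distance d takes the closed form exp(−d²/(4σ_s²)). The Monte Carlo sketch below (an empirical check under that assumption, not a reproduction of the paper's derivation) compares this prediction with measured correlations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# White noise smoothed with a Gaussian of scale sigma_s has spatial correlation
# rho(d) = exp(-d^2 / (4 sigma_s^2)) between pixels at distance d
# (closed form for a Gaussian kernel; an assumption of this sketch).
rng = np.random.default_rng(0)
sigma_s = 2.0
noise = rng.standard_normal((1024, 1024))
smoothed = gaussian_filter(noise, sigma_s, mode="wrap")

def correlation_at_lag(img, d):
    # Empirical correlation between pixels separated by d columns.
    return np.corrcoef(img.ravel(), np.roll(img, d, axis=1).ravel())[0, 1]

for d in (1, 2, 3):
    rho_pred = np.exp(-d**2 / (4.0 * sigma_s**2))
    print(d, correlation_at_lag(smoothed, d), rho_pred)
```

The measured and predicted correlations agree closely, confirming that smoothing leaves strongly correlated residual noise over distances comparable to σ_s.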

Measure of Noise in Derivatives
To extract edges or lines, the first and second derivatives are typically calculated after smoothing. While calculating the first and second derivatives, the post-smoothing residual noise propagates, which should be quantified to compute the SNR of ED-based and SD-based line detection. Calculating the first and second derivatives in an image is implemented by convoluting the image with certain kernels. In this study, kernels of size 3 × 3 are used for the implementation. Accordingly, to investigate the propagation of noise while calculating the first and second derivatives, it is necessary to consider the smoothed noises in the 3 × 3 neighborhood of each pixel as follows:

[(n * s)_0 (n * s)_1 (n * s)_2; (n * s)_3 (n * s)_4 (n * s)_5; (n * s)_6 (n * s)_7 (n * s)_8], (27)

where i in (n * s)_i is the unique sequential number assigned to each pixel within a 3 × 3 kernel. After rearranging the pixels in a vector ordered by their sequential numbers, their correlations are calculated by Equation (26) based on their distances and transformed into a correlation matrix, R_(n*s)3×3, as follows: To calculate the first derivative in the column direction, this study used a Sobel operator scaled such that it produces the gradient for one pixel unit as follows:

D_c = (1/8) [−1 0 1; −2 0 2; −1 0 1]. (29)

The kernel in Equation (29) can be rewritten in vector form as follows: Then, the dispersion σ²_(n*s*D_c) of the noise resulting from the convolution of the smoothed noise with the kernel in Equation (29) can be quantitatively derived as follows: Moreover, to calculate the second derivative in the column direction, this study used a kernel scaled such that it produces the second derivative for one pixel unit, defined as follows: The kernel in Equation (32) can be represented in vector form as follows: Then, the dispersion σ²_(n*s*D_cc) of the noise resulting from the convolution of the smoothed noise with the kernel in Equation (32) can be quantitatively derived as follows:
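The propagated noise variance can be obtained as vᵀRv·σ²_(n*s), where v is the kernel in vector form and R is the correlation matrix of the 3 × 3 window. The sketch below assumes the scaled Sobel kernel (division by 8; an assumed scaling, since the kernel equation is not reproduced in this extract) and the Gaussian correlation form exp(−d²/(4σ_s²)), and checks the prediction by Monte Carlo.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

sigma_s = 1.0
# Scaled Sobel kernel, divided by 8 so it yields the gradient per one pixel unit.
D_c = np.array([[-1.0, 0.0, 1.0],
                [-2.0, 0.0, 2.0],
                [-1.0, 0.0, 1.0]]) / 8.0

# 9x9 correlation matrix of the smoothed noise over a 3x3 window,
# using rho(d) = exp(-d^2 / (4 sigma_s^2)) for pixel distance d.
coords = [(r, c) for r in range(3) for c in range(3)]
R = np.array([[np.exp(-((r1 - r2)**2 + (c1 - c2)**2) / (4.0 * sigma_s**2))
               for (r2, c2) in coords] for (r1, c1) in coords])
v = D_c.ravel()
var_ratio_pred = v @ R @ v   # Var{(n*s)*D_c} in units of Var{n*s}

# Monte Carlo check of the propagated noise variance.
rng = np.random.default_rng(1)
noise = rng.standard_normal((1024, 1024))
smoothed = gaussian_filter(noise, sigma_s, mode="wrap")
deriv = convolve(smoothed, D_c, mode="wrap")
var_ratio_emp = deriv.var() / smoothed.var()
```

The quadratic-form prediction and the simulated variance ratio agree closely, illustrating how the correlation of the smoothed noise must be taken into account when propagating it through a derivative kernel.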

Derivation of SNRs
In the following section, the SNRs of ED-based and SD-based line detections are derived using the signal strengths quantified in Section 3.2 and the amount of noise resulting from the smoothing and convolution of specific kernels described in Section 3.3. To make the calculation simple, the normal direction of lines to be detected is assumed to be aligned in the column direction in the following derivations.
To detect edge pixels at the boundaries of a line, its SNR for the first derivative in the normal direction is derived from Equations (11) and (31) as follows: Similarly, the edge SNR for the second derivative in the normal direction can be derived from Equations (12) and (34). However, in this study, the second derivative of the edge profile model is assumed to be positive at x = L − w 2 − 1. Thus, the max(·) operation is applied to suppress the negative signals and the edge SNR for the second derivative in the normal direction is derived as follows: Given that the edge detection is performed based on combining the first and second derivatives in this study, the SNRs for both the derivatives are combined into the measure SNR C (edge) as follows: The SNR of the SD-based line detection for the first derivative in the normal direction can be derived from Equations (13) and (31) as follows: The SNR of SD-based line detection for the second derivative in the normal direction can be derived from Equations (14) and (34) as follows: Then, because the SD-based line detection is considered to be performed based on combining the first and second derivatives in this study, the SNRs of both the derivatives are combined into the measure SNR C (line SD ) as follows: where the subscript SD in SNR C (line SD ) represents the SNR measure for the SD-based line detection so that it can be distinguished from that of the SGAD-based line detection.
As the blurring and smoothing scales increase, the signals tend to mix with neighboring signals and degenerate. Based on this rationale, a penalty function is introduced in this study to measure the degeneration of signals with increasing blurring and smoothing scales. However, as the line width increases, the degeneration of the signal owing to blurring and smoothing decreases. Accordingly, the penalty function is modeled as follows: where λ is the power factor applied to the line width, w, so that the penalty reflects the reduced degeneration of signal strength for wider lines. From a set of brief experiments, the value of λ was set to 0.3 in this study. Thus, the SNR of edge detection with the penalty for blurring and smoothing, i.e., SNR_PS(edge), is calculated as follows: Moreover, the SNR of the SD-based line detection with the penalty for blurring and smoothing, i.e., SNR_PS(line_SD), is calculated as follows: Because edge pixels must be detected on either side of a line to find the line pixel, the SNR_PS of edge detection for the purpose of line detection is measured by halving SNR_PS(edge) in Equation (42) as follows: To compare the performance of the ED-based and SD-based line detection, a set of graphical investigations was used. The tests were performed with various values of the smoothing factor σ_s and the line width w. The smoothing factor was varied from 0.4 to 10.0 pixels with an interval of 0.1 pixels, and the line width was varied from 0.5 to 20.0 pixels with an interval of 0.1 pixels. The blurring factor σ_b was set to 1.0, which is a reasonable value to represent the amount of blur observed in many camera-captured images [39]. The standard deviation of noise and the contrast were set to σ_n = 0.05 and k = 1.0, respectively, in the following graphical investigations. Figure 3a shows the SNR_C of edge detection.
In Figure 3a, the SNR becomes very high when the smoothing factor is large, particularly when the line width is large. However, this is not realistic because the signals become interspersed with other signals as the smoothing factor increases. Thus, according to Equation (42), a realistic measure of SNR is obtained by applying the penalty function to SNR_C(edge). The resulting SNR_PS is shown in Figure 3b. Figure 4a shows SNR_C(line_SD). Similarly, in Figure 4a, the SNR becomes extremely high when the smoothing factor is large, and a realistic measure of SNR is obtained by applying the penalty function to SNR_C(line_SD) according to Equation (43). The resulting SNR_PS(line_SD) is shown in Figure 4b. SNR_PS(line_ED), calculated using Equation (44), is shown in Figure 5a. Figure 5b shows the difference SNR_PS(line_SD) − SNR_PS(line_ED). According to Figure 5b, when the line width is relatively small, SD-based line detection is more effective than ED-based line detection in terms of SNR. For example, SD-based line detection is more effective when the line width w is less than 5 pixels with a smoothing factor of 1.0, and when it is less than 11 pixels with a smoothing factor of 3.0. Moreover, the highest values of SNR_PS(line_SD) − SNR_PS(line_ED) over the varying line widths are observed when a smoothing factor in the range of 1.0 to 2.0 is applied to an image with a line width of around 4 pixels.

Method-SNR of SGAD for Line Detection
In this section, a line detection method, based on the sum of gradient angle differences [6], is introduced. Then, its performance is investigated based on its SNR derivations. Subsequently, the SGAD-based line detection can be compared with the SD-based line detection in terms of their SNRs.

Definition of SGAD
According to the work in [6], the gradient angle at each pixel can be derived as follows: where g_r_i and g_c_i are the gradients at pixel i in the row and column directions, respectively. Then, the gradient angle difference (GAD) of two pixels is defined as the minimum positive angle between their gradient vectors as follows: The angle difference is calculated for the pairs of gradient vectors shown in Figure 6. Then, the measure SGAD is derived in Equation (47) by adding the gradient angle differences of all the pairs.

Figure 6. Pairings for calculating gradient angle differences.
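A minimal sketch of the SGAD measure is given below, assuming scaled Sobel gradients and four opposite-neighbor pairings in the 3 × 3 neighborhood (the exact kernels and pairings of [6] may differ). For a symmetric ridge, three of the four pairs contribute an angle difference of about π at the line center, while the pair along the line contributes nothing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

def gradient_angle_difference(theta_i, theta_j):
    # Minimum positive angle between two gradient directions.
    d = np.abs(theta_i - theta_j) % (2.0 * np.pi)
    return np.minimum(d, 2.0 * np.pi - d)

def sgad(img, sigma_s=1.0):
    # Sum of gradient angle differences over four opposite-neighbor pairs
    # of the 3x3 neighborhood (assumed pairing, in the spirit of Figure 6).
    sm = gaussian_filter(img.astype(float), sigma_s)
    D_c = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]]) / 8.0
    g_c = convolve(sm, D_c)        # column-direction gradient
    g_r = convolve(sm, D_c.T)      # row-direction gradient
    theta = np.arctan2(g_r, g_c)   # gradient angle at each pixel
    out = np.zeros_like(sm)
    for dr, dc in ((1, 0), (0, 1), (1, 1), (1, -1)):
        t_i = np.roll(theta, (dr, dc), axis=(0, 1))
        t_j = np.roll(theta, (-dr, -dc), axis=(0, 1))
        out += gradient_angle_difference(t_i, t_j)
    return out

# A horizontal ridge of width 1: SGAD should peak at the line center.
img = np.zeros((21, 21))
img[10, :] = 1.0
s_img = sgad(img)
```

Because gradients on opposite sides of a ridge point in opposite directions, the summed angle differences peak sharply at the line center and vanish in homogeneous regions.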

Dispersion of Gradient Angle Differences
The dispersion of the single-angle difference between the ith and jth pixels, D{θ_i − θ_j}, is calculated as follows: The dispersion of the vector of the angles in Equation (48) is derived as follows: Then, the dispersion of the vector of the gradients in Equation (49) is derived as follows: The value of the gradient in the column direction at the ith pixel is calculated by convoluting the smoothed image with the kernel D_c as follows: To calculate the gradients in the row direction, a kernel D_r is defined as follows: Then, the value of the gradient in the row direction at the ith pixel is calculated by convoluting the smoothed image with the kernel D_r (Equation (53)). Figure 7 shows the windows for calculating the gradients, where the numbers within the pixels represent the pixel indices used to indicate the pixel locations in the Jacobian matrices below. In Figure 7a, the center pixel is located at 8, and the gradient angle difference is calculated for the pixels located at 5 and 11. In Figure 7b, the center pixel is located at 9, and the gradient angle difference is calculated for the pixels located at 5 and 13. Then, the Jacobian matrix J_g(i,j) in Equation (50) for i = 4 and j = 5 is derived using D_c and D_r. Moreover, the Jacobian matrix J_g(i,j) in Equation (50) for i = 1 and j = 8 is derived using D_c and D_r. The correlation matrix R_i,j in Equation (50) for i = 4 and j = 5 is derived as per Figure 7a and Equation (26). The correlation matrix R_i,j in Equation (50) for i = 1 and j = 8 is derived as per Figure 7b and Equation (26).
Thus, the dispersion of the vector of the gradients in Equation (50) for i = 4 and j = 5 is derived as follows: Furthermore, the dispersion of the vector of the gradients in Equation (50) for i = 1 and j = 8 is derived as follows: Thus, the dispersion of the angle difference in Equation (48), D{θ_i − θ_j}, for i = 4 and j = 5 is derived as follows: Moreover, the dispersion of the angle difference in Equation (48), D{θ_i − θ_j}, for i = 1 and j = 8 is derived as follows:
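The behavior of the angle-difference dispersion can be illustrated by linearized error propagation: for a true gradient (g, 0), the Jacobian of θ = atan2(g_r, g_c) gives Var{θ} ≈ Var{g_r}/g². The Monte Carlo sketch below uses independent gradient noise at the two pixels for simplicity (the full derivation above additionally accounts for the correlation between neighboring gradients) and illustrative values.

```python
import numpy as np

# First-order propagation: for gradient (g, 0), d(theta)/d(g_r) = 1/g and
# d(theta)/d(g_c) = 0, so Var{theta} ~ Var{g_r}/g^2.  For opposite gradients
# (g, 0) and (-g, 0) with uncorrelated noise, Var{theta_i - theta_j} ~ 2 Var{g_r}/g^2.
rng = np.random.default_rng(0)
g, sigma_g, N = 1.0, 0.05, 200_000

theta_i = np.arctan2(sigma_g * rng.standard_normal(N),  g + sigma_g * rng.standard_normal(N))
theta_j = np.arctan2(sigma_g * rng.standard_normal(N), -g + sigma_g * rng.standard_normal(N))

# theta_j fluctuates around +/-pi; fold the difference to the minimum positive angle.
diff = np.abs(theta_i - theta_j) % (2.0 * np.pi)
diff = np.minimum(diff, 2.0 * np.pi - diff)
```

The folded angle difference clusters tightly around π, with a spread on the order of √2·σ_g/g, which is the quantity the dispersion derivation above captures analytically.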

Derivation of SNRs
To effectively compare the performance of the SGAD-based line detection with that of the SD-based line detection, a line of width w aligned along the row direction was used. The line was assumed to have the following gradient values at pixels i and j, located one pixel to the left and right of the center of the line, respectively: g_c_i = g, g_r_i = 0, g_c_j = −g, and g_r_j = 0, where g denotes the gradient magnitude at these pixels. Then, at the center of the line, the standard deviation of the gradient angle difference in the column direction, σ_θ4−θ5, was derived from Equations (56) and (58) as follows: Moreover, at the center of the line, the standard deviation of the gradient angle difference in the lower-right diagonal direction, σ_θ1−θ8, was derived from Equations (57) and (58) as follows: For the line model assumed in the first paragraph of this section, the strength of the line signal based on the angle difference is π. Thus, the SNR of the angle difference in the column direction can be derived as follows: Moreover, the SNR of the angle difference in the lower-right diagonal direction can be derived as follows: For the SGAD-based line detection, the performance was measured by combining the SNRs of both the gradient angle difference and the first derivative.
Thus, the combined SNR for line detection using the gradients at the fourth and fifth pixels in Figure 6 can be modeled as follows: Moreover, the combined SNR for line detection using the gradients at the first and eighth pixels in Figure 6 can be modeled as follows: Considering the penalty for blurring, smoothing, and line width in Equation (41), SNR_C(line_θ4−θ5) in Equation (63) can be extended to SNR_PS(line_θ4−θ5) as follows: Similarly, SNR_C(line_θ1−θ8) in Equation (64) can be extended to SNR_PS(line_θ1−θ8) as follows: Then, the SNR_PS of the sum of the gradient angle differences for all four pairs in Figure 6 can be modeled as follows: Because of the symmetry between the pair of gradients at 1 and 8 and the pair of gradients at 3 and 6 in Figure 6, SNR_PS(line_θ3−θ6) can be written as follows: Because the gradients at 2 and 7 in Figure 6 are zero for the assumed line model, SNR_PS(line_θ2−θ7) can be written as follows: Furthermore, considering the SNR_PS of the four directions, the SNR_PS of SGAD for detecting the assumed line model can be derived as follows: The ratio of SNR_PS(line_SGAD) to SNR_PS(line_SD) can be derived from Equations (43) and (70) as follows: Furthermore, Equation (71) can be reduced using Equations (43), (65), and (66) as follows: Then, ratio(SGAD, SD) can be derived from Equations (14), (39), (61), (62), (72), (73), and (74) as follows: In Equation (75), g_f is derived from Equations (13) and (14) as follows: As shown in Equation (76), ratio(SGAD, SD) changes with variations in σ_b, σ_s, and w, but does not change with variation in σ_n.
To investigate the performance of the SGAD-based line detection, graphical plots of the SNR are used as shown below. The SNR values were generated under the same conditions as those applied in Section 3.4. Figure 8 shows the difference SNR_PS(line_SGAD) − SNR_PS(line_SD). According to Figure 8, the SGAD-based line detection has a higher SNR than the SD-based line detection under varying conditions of line width and smoothing factor. When a smoothing factor of 1.0 is applied, the advantage of the SGAD-based line detection becomes distinct for line widths of less than 8 pixels.

Experimental Results and Discussion
To validate the derived SNRs on natural images, their relationships with completeness and correctness were examined in this section. For the experiments with natural images, the Lena image, with a size of 512 × 512 pixels and gray levels in [0, 255], was first selected to describe the experimental procedure. Figure 9 shows the Lena image with three annotated regions for further investigation. Figure 10 shows the images resulting from each process for the annotated regions in Figure 9. To measure the SNRs, completeness, and correctness, ridge and valley pixels were detected in the smoothed version of the original image with σ_s = 1.0 by the SGAD-based line detection and considered as the ground-truth ridge and valley pixels. Then, the line width and contrast of each ground-truth pixel were calculated as follows. At each ground-truth pixel, the line width was searched in two directions: the normal direction and its opposite. For ridge pixels, in each direction, the line width was extended one pixel at a time until the intensity of the current pixel exceeded that of the previous pixel plus a specified tolerance. In this study, the tolerance was set to 0.05. Then, of the two search directions, the one whose last intensity had less contrast with the ground-truth line pixel was selected, and the line width in that direction was recorded as the line width of the ground-truth line pixel. The contrast in the selected direction was recorded as the contrast of the ground-truth line pixel. The method for finding the line width and contrast of valley pixels was the same as that for ridge pixels. The SNRs of the SD-based and SGAD-based line detections were calculated using the given line width, contrast, and noise strength. As shown in Figure 10, the SNRs of the SGAD-based line detection were much greater than those of the SD-based line detection.
Accordingly, as shown in the last two columns of Figure 10, the SGAD-based results are less noisy and more accurate than the SD-based results when compared with the ground truths shown in the third column of Figure 10. The completeness and correctness of the line detection results were measured as follows. First, ground-truth images were generated by applying the SGAD-based line detection to the smoothed version of the original image with σ_s = 1.0, as shown in the third column of Figure 10. Then, a set of noisy images was generated by adding Gaussian random noise of varying strengths, ranging from σ_n = 4.0 to 20.0 with an interval of 4.0, to the original images. Next, all the noisy images were denoised by applying a smoothing convolution with σ_s = 1.0, and the SD-based and SGAD-based line detections were applied to them. If the ground-truth image used as the reference is denoted by R and a line pixel image resulting from the application of a line detection method to a noisy image is denoted by T, then the image C, which contains the line pixels correctly detected by a line detector, can be written as

C = R ∧ T, (77)

where ∧ denotes the logical AND operator.
To find the incorrectly detected line pixels within a specified distance from the reference pixels, the reference line pixel image was dilated with a certain radius r as

R_d = R ⊕ S_r, (78)

where ⊕ denotes the dilation operator and S_r the structuring element with radius r. In this study, r was set to 3 pixels because it equals 3σ_s when σ_s = 1.0. Then, the image containing the incorrectly detected pixels, Q, was generated as

Q = T ∧ ¬R_d, (79)

where ¬ denotes the logical NOT operator. Next, the number of incorrectly detected pixels was counted at each ground-truth line pixel within a certain distance d_t in its line normal direction and the opposite direction, using the images R and Q, and recorded in an image V. In this study, d_t was set to 3 pixels, which is the same as the value of r used for the dilation of the reference image.
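The construction of the evaluation images can be sketched as follows (a toy example; the diamond-shaped structuring element used here only approximates a radius-r disk):

```python
import numpy as np
from scipy.ndimage import binary_dilation

# C = R AND T (correctly detected pixels); Q = T AND NOT dilated(R)
# (detections farther than r from any reference pixel).
def evaluation_images(R, T, r=3):
    C = R & T
    R_dilated = binary_dilation(R, iterations=r)  # diamond of radius r
    Q = T & ~R_dilated
    return C, Q

# Toy reference line and a partially detected, partly spurious result.
R = np.zeros((20, 20), bool); R[10, 2:18] = True   # 16 ground-truth line pixels
T = np.zeros((20, 20), bool); T[10, 2:12] = True   # 10 of them detected
T[0, 0] = True                                     # one detection far from R
C, Q = evaluation_images(R, T)
completeness = C.sum() / R.sum()
```

In this toy case, 10 of the 16 reference pixels are correctly detected (completeness 0.625), and the single far-away detection lands in Q.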
Subsequently, the SNRs at the ground truth line pixels were summarized into a histogram with a certain bin size. The bin size was determined by dividing the maximum SNR in each SNR image by a specified number n; in this study, n was set to 100. Then, new bins were generated in descending order so that each new bin contains approximately a certain percentage p_b of the total ground truth line pixels; p_b was set to 10 percent in this study. Figure 11 shows the resulting bins for the SNRs of the ridge pixels in the SD-based line detection when σ_n = 8.0. Next, the pixels whose SNRs fall within the boundaries of each SNR bin were selected, and the mean of their SNRs was recorded. Moreover, among the selected pixels, the total number of correctly detected pixels N_c was counted using the image C derived from Equation (77). Then, the completeness corresponding to the mean SNR was measured by dividing the number of correctly detected line pixels by the total number of ground truth line pixels N_gt, i.e., completeness = N_c / N_gt. Additionally, among the selected pixels, the total number of incorrectly detected pixels N_ic was counted using the image Q derived from Equation (79). Then, the correctness corresponding to the mean SNR was measured as correctness = N_c / (N_c + N_ic).

Figure 12 shows the observed relationships of SNR with completeness and correctness for the Lena image under varying noise strengths. As shown in the figure, the trend of these relationships is consistent across the tested noise strengths. Moreover, the SGAD-based line detection showed higher SNRs than the SD-based line detection, as well as higher completeness and correctness, under all the tested noise strengths. Figure 13 shows all the plots of Figure 12 together. As shown in Figure 13, the validity of the SNRs derived for the SD-based and SGAD-based line detections is supported by the overall strong and consistent relationships of SNR with completeness and correctness under varying noise strengths.
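The per-bin evaluation above can be sketched as follows. This is a simplified sketch: it groups ground truth pixels directly by descending SNR into bins of roughly p_b of the pixels each, skipping the intermediate 100-bin histogram, and it assumes correctness = N_c / (N_c + N_ic); all names are hypothetical.

```python
import numpy as np

def per_bin_quality(snr, gt_mask, C, Q_counts, p_b=0.10):
    """For ground truth line pixels grouped into descending-SNR bins of
    ~p_b of the pixels each, compute (mean SNR, completeness, correctness).

    snr      : SNR image (float)
    gt_mask  : boolean ground truth line pixel image
    C        : boolean image of correctly detected pixels
    Q_counts : per-pixel counts of nearby incorrect detections (image V)
    """
    vals = snr[gt_mask]
    order = np.argsort(vals)[::-1]                  # descending SNR
    vals = vals[order]
    correct = C[gt_mask][order]
    incorrect = Q_counts[gt_mask][order]
    n_per_bin = max(1, int(round(p_b * vals.size)))
    rows = []
    for start in range(0, vals.size, n_per_bin):
        sl = slice(start, start + n_per_bin)
        n_gt = vals[sl].size
        n_c = int(correct[sl].sum())
        n_ic = int(incorrect[sl].sum())
        completeness = n_c / n_gt
        correctness = n_c / (n_c + n_ic) if (n_c + n_ic) else 1.0
        rows.append((float(vals[sl].mean()), completeness, correctness))
    return rows
```

Plotting completeness and correctness against the mean SNR of each bin reproduces the kind of curves shown in Figures 12 and 13.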
The experiments were also applied to eight other natural images, as shown in Figure 14. Before applying the line detections, the natural images were converted into grayscale using MATLAB's rgb2gray function. As shown in Figure 15, the experiments with these images show distinct and strong relationships of SNR with completeness and correctness. The pattern of variation of completeness against SNR was similar to that of correctness. Although the content of the test images differs, the completeness and correctness against SNR showed similar patterns throughout, indicating the consistency of the SNR measure with respect to completeness and correctness under various line conditions. However, as shown in Figure 15, the SD-based line detection produced low completeness and correctness relative to its SNR when SNR > 3.0. This is caused by frequent bifurcations in the SD-based line detection. When the line width was relatively small but the contrast was relatively large, the calculated SNR was relatively large; however, the shapes of many line profiles under such conditions deviated from the ideal line profile, and the SD-based line detection could not overcome this problem, producing low completeness and correctness. In contrast, the SGAD-based line detection overcame this problem to some extent and produced high completeness and correctness when the SNR was high.

Conclusions
In this study, the performances of line detectors were evaluated through analytical quantification of their SNRs together with completeness and correctness. The correlations arising among pixels when a Gaussian smoothing filter is applied were first identified. Then, the amount of noise remaining in the derived values, such as gradients, SD, and SGAD, was derived based on error propagation. Furthermore, the SNRs of the line detectors were analytically derived from the derived signal and noise strengths. In addition, a penalty function was proposed to account for the influence of blur, smoothing, and line width on line detectors. Verification of the validity of the derived SNRs, based on the investigation of their relationships with completeness and correctness, indicates that the SNR derivations proposed in this study were effective in quantifying detector performance. The validation test of the SNRs was performed using nine color images and will be extended to larger image sets in future work.
Regarding feature extraction, it was observed that edge features can be accurately extracted in vector form using the methods proposed in [18,41]. Correspondingly, line features can be accurately extracted in vector form using the SGAD method followed by the subpixel line localization and linking methods described in [5].
Therefore, a set of methods for line detection, non-maxima suppression, subpixel line localization, and linking can produce high-quality ridge and valley features for various applications. Moreover, the error propagation scheme used in this study to derive the relevant theoretical SNRs can be used to develop high-performance operators for extracting features.