An Improved Method for Evaluating Image Sharpness Based on Edge Information

Abstract: In order to improve the subjective and objective consistency of image sharpness evaluation while meeting the requirement of image content irrelevance, this paper proposes an improved sharpness evaluation method without a reference image. First, the positions of the edge points are obtained by a Canny edge detection algorithm based on the activation mechanism. Then, an edge direction detection algorithm based on the grayscale information of the eight neighboring pixels is used to acquire the edge direction of each edge point. Further, the edge width is solved to establish the histogram of edge width. Finally, according to the performance of three distance factors based on the histogram information, the type 3 distance factor is introduced into the weighted average edge width solving model to obtain the sharpness evaluation index. The image sharpness evaluation method proposed in this paper was tested on the LIVE database. The test results were as follows: the Pearson linear correlation coefficient (CC) was 0.9346, the root mean square error (RMSE) was 5.78, the mean absolute error (MAE) was 4.9383, the Spearman rank-order correlation coefficient (ROCC) was 0.9373, and the outlier rate (OR) was 0. In addition, through a comparative analysis with two other methods and a real shooting experiment, the superiority and effectiveness of the proposed method were verified.


Introduction
With the significant advantages of being non-contact, flexible, and highly integrated, computer vision measurement has broad application prospects in electronic semiconductors, automotive manufacturing, food packaging, film, and other industrial fields. Image sharpness is the core index for measuring the quality of visual images; therefore, research on evaluation methods for visual image sharpness is one of the key technologies for achieving visual detection [1][2][3]. Moreover, as users demand ever higher sharpness in video chat, HDTV, and similar applications, developing a more efficient image sharpness evaluation method has become a pressing problem.
Generally, image sharpness evaluation methods can be divided into full-reference (FR) sharpness evaluation methods, reduced-reference (RR) sharpness evaluation methods, and no-reference (NR) sharpness evaluation methods. Among them, the FR sharpness evaluation methods judge the degree of deviation of the measured image from a sharp reference image [4]. The RR sharpness evaluation methods evaluate the measured image by extracting only part of the information of the reference image [5]. However, in practical applications, undistorted sharp reference images are usually difficult to obtain. Therefore, the NR sharpness evaluation methods have higher research value and wider applicability. Existing NR sharpness evaluation methods are formulated either in the transform domain or in the spatial domain [6]. Transform domain-based methods [7][8][9][10] need to transform images from the spatial domain to other domains for processing. However, their computational complexity is often too large. Therefore, such methods have poor real-time performance and are limited in many applications. Spatial domain-based methods [11][12][13][14][15] can be divided into two main types. One type is based on the fact that clear images have higher contrast compared to blurred images. Typical evaluation methods of this type are the various gradient function methods, such as the Tenengrad function method and the energy gradient function method [11]. The other type is based on the fact that image blurring leads to edge diffusion; a typical evaluation method of this type is the average edge width method [12]. It should be noted that, although both the contrast-based evaluation methods and the edge information-based evaluation methods have the advantage of low computational complexity, the former are more dependent on the image content than the latter; that is, the former tend to fail when the contents of the measured images are different.
Li et al. [13] proposed a no-reference image sharpness evaluation method for scanning electron microscopes. The method first extracts the edges of dark channel maps with a Sobel operator. It then removes the noise effect while preserving the edge information using an edge-preserving operator based on the weighted least squares (WLS) framework. Finally, it combines the maximum gradient of each edge point with the average gradient to form the sharpness evaluation index. Although this method extracts part of the edge information of the image by edge detection, it is still essentially an evaluation method based on the contrast principle. Wang [14] proposed an image sharpness evaluation method based on strong edge width. She convolved the measured image with a Sobel operator to obtain the horizontal and vertical gradient maps, respectively. By selecting a threshold, the horizontal and vertical strong edge points of the measured image were obtained, and the strong edge width was solved. Finally, the sharpness evaluation index was generated by introducing the histogram information. In summary, most current image sharpness evaluation methods based on edge information still extract edge points with a Sobel operator and often consider only the horizontal and vertical directions when determining the edge direction of edge points, which largely limits further improvement of the accuracy of this type of evaluation method. In addition, not all edge information is needed by evaluation methods, and few scholars distinguish the extracted edge information.
In this paper, we focus on the abovementioned problems. Firstly, a Canny edge detection algorithm with excellent comprehensive performance was improved to enhance the edge detection effect of the measured images. Then, we proposed an eight-neighborhood grayscale difference method to achieve a rapid and efficient determination of the edge points' four edge directions. Finally, by comparing three distance factors based on the histogram of the edge width, the image sharpness evaluation method proposed in this paper was obtained. With the abovementioned improvements, our proposed method has excellent performance in terms of content irrelevance, subjective-objective consistency, and computational speed, especially in the real-time evaluation of image sharpness, which has great potential for application.

Image Edge
The edge information of an image is crucial for vision and is also one of the important features of an image. Figure 1 uses a black-and-white image with drastically changing grayscale values to simulate the situation when an ideal step edge is blurred. It can be seen that, when the image is blurred, the edges of the image spread and the grayscale curve flattens accordingly. Obviously, there is a positive correlation between the degree of edge diffusion and the degree of image blurring.
It should be noted that the edges in a clear image are not always step edges; there are also impulse edges and roof edges, depending on the variation of grayscale values, as shown in Figure 2. However, a clear image is gradually smoothed after blurring, which leads to the disappearance of impulse edges and roof edges; this is obviously different from the relationship between step edges and the degree of image blurring. Therefore, the approach used to extract the step or approximate step edges is directly related to the accuracy of the sharpness evaluation method. Section 2.4 of this paper gives a detailed solution to this problem, which will not be discussed here for now.
Figure 1. Step edge image and its corresponding grayscale change curve of pixels on a row.


Edge Detection
A Canny operator has superior overall performance compared to other edge detection operators, but the effect of Canny edge detection depends heavily on the choice of its threshold value. If the threshold value is set too high, it will lead to missed detection and edge discontinuity. If the threshold value is set too low, there will be over-detection problems, such as the noise in measured images being wrongly detected as an edge. Therefore, in order to improve the edge detection effect, an improved Canny edge detection algorithm based on the activation mechanism is proposed in this paper. Plot (a) in Figure 3 assumes that the edge detection result is obtained under a high threshold; it can be seen that there are fewer edge points on it. Plot (b) shows the edge detection result obtained in the low-threshold case, with more edge points and the appearance of a noise point marked in green. By replicating the edge information in plot (a) to plot (b), plot (c) can be procured. This process is called activation; the activated edge points are marked in red. After that, the activated edge points will activate all the other edge points adjacent to them, as shown in plot (d). Because noise tends to exist in isolation, the isolated noise point is filtered out after the activation process, as shown in plot (e). The edge information obtained after detection with this improved algorithm has the characteristics of low noise and high accuracy.
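The activation mechanism described above can be sketched as a breadth-first propagation from the high-threshold edge map through the low-threshold edge map. This is a minimal illustration under stated assumptions (binary 0/1 maps as nested lists, 8-connectivity for "adjacent"), not the paper's implementation:

```python
from collections import deque

def activate_edges(high_map, low_map):
    """Combine high- and low-threshold Canny edge maps via the activation
    mechanism: every low-threshold edge pixel that is 8-connected (directly
    or transitively) to a high-threshold edge pixel is kept; isolated
    responses, which are typically noise, are dropped."""
    rows, cols = len(low_map), len(low_map[0])
    result = [[0] * cols for _ in range(rows)]
    queue = deque()
    # Step (c): copy the reliable high-threshold edges into the low-threshold
    # map's frame, "activating" them.
    for r in range(rows):
        for c in range(cols):
            if high_map[r][c] and low_map[r][c]:
                result[r][c] = 1
                queue.append((r, c))
    # Step (d): activated points activate adjacent low-threshold edge points.
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and low_map[nr][nc] and not result[nr][nc]):
                    result[nr][nc] = 1
                    queue.append((nr, nc))
    # Step (e): anything never activated (e.g. the isolated noise point)
    # remains 0 in the result.
    return result
```

Because propagation starts only from high-threshold pixels, a noise pixel that appears solely in the low-threshold map and touches no activated pixel is filtered out automatically.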



Figure 4 depicts the edge extraction process of the Lena test image (raw image, high-threshold result, low-threshold result, and final image) using our improved algorithm. It can be clearly seen that the edge extraction result processed by the improved algorithm is less noisy than that under the low threshold and more accurate than that under the high threshold.

Analysis of Edge Width
The essence of an edge is a collection of pixel points with drastically changing grayscale values. To calculate the edge width of an edge point, it is necessary to firstly determine the edge direction corresponding to the edge point and then calculate the edge width along the edge direction according to appropriate rules.



Determination of Edge Direction
Eeping et al. [16] calculated the gradient of each edge point in the measured image by a Sobel operator and defined the gradient direction (including the negative direction of the gradient) as the edge direction of that edge point. Different from the idea of using a gradient to determine the edge direction, this paper proposes a method based on the grayscale differences between the pixel points in the eight neighborhoods of the edge points. Compared with the gradient determining method, this method not only improves the accuracy of determining the edge direction but also substantially reduces the computational time. Figure 5 illustrates the calculation principle of the eight-neighborhood grayscale difference method. For each edge point (G_Edge denotes an edge point in the figure below), the four grayscale differences of its eight-neighborhood pixel points are calculated along the horizontal, vertical, 45°, and −45° directions, respectively, as given in Equation (1). The opposite direction of the direction corresponding to the minimum of the four differences in Equation (1) is the edge direction obtained by this method. For example, the opposite direction of the horizontal direction is the vertical direction, and the opposite direction of the 45° direction is the −45° direction.
The following figure shows the effect of determining the edge direction of the Lena test image using the gradient determining method in reference [14] and the eight-neighborhood grayscale difference method in this paper, respectively. The results of the gradient determining method for the edge directions of the edge points (where the pentagrams are located) in Figure 6a-d are the vertical, vertical, horizontal, and vertical directions, respectively. Accordingly, the edge direction determination results of the eight-neighborhood grayscale difference method are the −45°, horizontal, vertical, and −45° directions, respectively. It is thus clear that the edge directions determined by the eight-neighborhood grayscale difference method are more realistic and accurate.
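The determination rule above can be sketched as follows. Since the extracted text omits Equation (1), this sketch assumes each of the four differences is the absolute grayscale difference between the two opposite neighbors of the edge point in that direction; the paper's exact formulation may differ:

```python
def edge_direction(gray, r, c):
    """Determine the edge direction of the edge point at (r, c) by the
    eight-neighborhood grayscale difference method.  Returns one of
    'horizontal', 'vertical', '45', '-45': the direction opposite
    (perpendicular) to the smallest-difference direction, along which
    the edge width is later measured.  Assumes (r, c) is an interior
    pixel of the 2-D grayscale image `gray` (list of lists)."""
    # Four differences of the eight-neighborhood pixels, one per direction
    # (assumed form of Equation (1)).
    diffs = {
        'horizontal': abs(gray[r][c - 1] - gray[r][c + 1]),
        'vertical':   abs(gray[r - 1][c] - gray[r + 1][c]),
        '45':         abs(gray[r - 1][c + 1] - gray[r + 1][c - 1]),
        '-45':        abs(gray[r - 1][c - 1] - gray[r + 1][c + 1]),
    }
    # Grayscale changes least along the edge contour, so the edge
    # direction (for width measurement) is the opposite direction.
    opposite = {'horizontal': 'vertical', 'vertical': 'horizontal',
                '45': '-45', '-45': '45'}
    smallest = min(diffs, key=diffs.get)
    return opposite[smallest]
```

For a vertical step edge (grayscale constant down each column), the vertical difference is smallest, so the method returns 'horizontal', matching the intuition that the width of a vertical edge is measured horizontally.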
In addition to comparing the accuracy of the above two methods, this paper further compares the computational time of the gradient determining method and the eightneighborhood grayscale difference method by calculating four images with typical edge directions.
We performed the gradient determining operation on the four images in Figure 7 in Visual Studio 2019 using C++ under the Windows 10 operating system, and the average processing time was 10 ms per image, while the average processing time was 3 ms per image for the eight-neighborhood grayscale difference operation on the four images. It can be seen that the eight-neighborhood grayscale difference method proposed in this paper can quickly and efficiently determine the edge direction of each edge point, which creates the condition for accurate calculation of the edge width in the next step.

Solution of Edge Width
To calculate the edge width of an edge point, it is necessary to find the grayscale extreme points at the two ends closest to the edge point in the edge direction [10]. When the grayscale values of one side are larger than those of the other side, the maximum value point of the side with the larger grayscale values and the minimum value point of the side with the smaller grayscale values are selected as the start and end points of the edge width; the distance between the two end points is the corresponding edge width of the edge point. Figure 8 shows the variation of the grayscale values in the 257th row along the horizontal direction of the reference image "parrots" in the LIVE database [17] after Gaussian blurring. As can be seen in the figure below, the edge widths of the edge points P1 and P3 are P2-P2' and P4'-P4, respectively.
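The rule above can be sketched on a 1-D grayscale profile taken along the edge direction: walk outwards from the edge point on each side until the grayscale stops rising (or falling), i.e. until the nearest local extremum, and take the distance between the two extremum positions. This is an illustrative reading of the rule, not the paper's exact implementation:

```python
def edge_width(profile, p):
    """Edge width of the edge point at index p in a 1-D grayscale profile
    taken along the edge direction.  The profile is assumed to cross a
    (possibly blurred) step edge at p."""
    # Decide whether the step rises or falls across the edge point.
    rising = profile[p + 1] >= profile[p - 1]
    sign = 1 if rising else -1
    # Walk toward the low-grayscale side until the nearest extremum.
    left = p
    while left > 0 and sign * (profile[left] - profile[left - 1]) > 0:
        left -= 1
    # Walk toward the high-grayscale side until the nearest extremum.
    right = p
    while right < len(profile) - 1 and sign * (profile[right + 1] - profile[right]) > 0:
        right += 1
    # The distance between the two extremum positions is the edge width.
    return right - left
```

A sharp step yields a small width, while the same step after blurring spreads over more pixels and yields a larger width, which is exactly the monotone relationship the evaluation index relies on.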

In this paper, the above rule is also followed when calculating the edge width. The upper and lower rows of the images in Figure 9 are "parrots" and "planes" in the Gaussian blurred images of the LIVE database, respectively.
According to the method in this paper, the edge widths of the edge points in the upper and lower rows of the images were calculated separately; the calculation results are shown in Figure 10.
One hundred edge points of "parrots" and "planes" were randomly selected and distributed evenly over 360°, so that each edge point corresponds to an angle within 0-360°; this angle is the polar angle corresponding to that edge point.
Then, the edge width of the edge point is taken as the corresponding polar radius, so an edge point with a certain edge width can be mapped to the polar coordinate system. Plots (a) and (b) in Figure 10 correspond to the upper and lower rows of images in Figure 9, respectively (each row of images in Figure 9, from left to right, can be numbered as a, b, c, and d).
According to the definition of points in the polar coordinate system, the more the line is located outside, the more edge points with large edge widths will be in the image corresponding to the line. It can be clearly seen from Figure 10 that the pink line is the most inward, the blue line is outward, the yellow line is further outward, and the green line is the outermost, which correspond to the fact that the two rows of images in Figure 9 are getting blurred from left to right, indicating that the edge width calculation method proposed in this paper can adequately reflect the blurring degree of images.
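The mapping used for Figure 10 can be written down directly (the function name is illustrative; the k-th of K sampled edge points gets polar angle k·360/K degrees and its edge width as the polar radius):

```python
def to_polar(edge_widths):
    """Map a sample of edge widths to (polar angle in degrees, polar
    radius) pairs: the K sampled points are spread evenly over 360 degrees
    and each point's edge width becomes its radius, so blurrier images
    (larger widths) trace lines farther from the origin."""
    K = len(edge_widths)
    return [(k * 360.0 / K, w) for k, w in enumerate(edge_widths)]
```

Plotting these pairs for each image of a row in Figure 9 produces one closed line per image, and the outermost line corresponds to the most blurred image.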


Histogram of Edge Width
For the obtained edge widths of different edge points, the probability P(ω_i) that the edge width is ω_i can be calculated by Equation (2):

P(ω_i) = n_i / N	(2)

In the above equation, n_i is the number of edge points with edge width ω_i and N is the total number of edge points.
Once the probabilities of different edge widths are obtained, the histogram of the edge width can be established. Take the Gaussian blurred image "womanhat" in the LIVE database as an example; its corresponding histogram is shown in Figure 11.
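Equation (2) amounts to a normalized histogram of the edge widths, which can be computed directly (a straightforward sketch):

```python
from collections import Counter

def edge_width_histogram(widths):
    """Equation (2): P(w_i) = n_i / N, where n_i is the number of edge
    points with edge width w_i and N is the total number of edge points.
    Returns a dict mapping each edge width to its probability."""
    N = len(widths)
    counts = Counter(widths)  # n_i for each distinct edge width w_i
    return {w: n / N for w, n in sorted(counts.items())}
```

The probabilities sum to 1, and plotting them against the edge widths gives histograms like those in Figure 11.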
It can be seen from Figure 11 that, as the degree of blurring deepens, two phenomena appear in the corresponding histogram. (1) The peak shifts to the right, that is, the probability of large edge widths increases. (2) The histogram spreads and the peak value decreases, which means that the probability of larger edge widths generally increases. Reference [14] states, for these phenomena, that the edge widths corresponding to the peak portion of the histogram are more likely to be generated after the step edges or approximate step edges are blurred, which can more accurately reflect blurriness. In this regard, a distance factor, as shown in Equation (3), was introduced to enhance the contribution of the edge widths of the peak portion to the sharpness evaluation. The distance factor variation relationship corresponding to Equation (3) is shown in Figure 12.
In this paper, based on the previous study, two distance factors, as shown in Equations (4) and (5), are proposed, and their respective relationships with the edge width are shown in Figures 13 and 14, respectively. For the convenience of later description, the distance factors corresponding to Equations (3)-(5) are named the type 1 distance factor, type 2 distance factor, and type 3 distance factor, respectively. In the above equations, ω_mp is the edge width with the highest probability, ω_me is the longest edge width, ω_i is the edge width, and d(ω_i) is the distance factor of ω_i.
Figure 12. The distance factor variation relationship in Equation (3).

Sharpness Evaluation Model
After acquiring distance factors, the final sharpness evaluation value can be obtained by introducing them into Equation (6).
In the above equation, ω_minE and ω_maxE are the minimum and maximum edge widths, respectively.
Finally, we summarize the sharpness evaluation model proposed in this paper. The edge information of the measured image can be obtained after the edge detection. Then, the edge direction of the edge point can be determined by calculating the eight-neighborhood grayscale difference of the extracted edge point and the edge width can be calculated along the edge direction of the edge point. With the edge width, the histogram of edge width can be established. Then, the distance factor of each edge width can be acquired according to the distance factor calculation equation. Afterwards, the distance factor is introduced into the evaluation index to obtain the sharpness evaluation model of this paper. The above process is shown in Figure 15.
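The full pipeline summarized above can be sketched end-to-end. Since Equation (6) is not reproduced in this excerpt, the distance-factor-weighted mean of the edge widths below is an assumed form of the "weighted average edge width" model, not the paper's exact index:

```python
import numpy as np

def sharpness_index(edge_widths, d_of=None):
    """Hedged sketch of a weighted average edge width index (assumed form of
    Equation (6)). A smaller index corresponds to a sharper image, since
    sharper edges are narrower."""
    widths, counts = np.unique(np.asarray(edge_widths, float), return_counts=True)
    probs = counts / counts.sum()          # Equation (2): P(w_i) = n_i / N
    if d_of is None:
        d = np.ones_like(widths)           # uniform weighting fallback
    else:
        d = d_of(widths, probs)            # e.g. a type 3 distance factor
    w = d * probs                          # combined weight per edge width
    return float((w * widths).sum() / w.sum())
```

Here `d_of` stands in for whichever distance factor (type 1-3) is chosen; passing `None` reduces the index to the plain average edge width.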

Distance Factor Comparison Experiment
In order to fully compare the performances of the three distance factors and, thus, decide which distance factor should be introduced into the sharpness evaluation index, two experiments were conducted in this section, distinguished by whether the image contents were the same.
The first experiment was set up as follows. Firstly, 11 "cameraman" images with the same image contents but gradually increasing blur were selected, as shown in Figure 16. Then, these images were evaluated by the sharpness evaluation model after introducing each of the three distance factors. Finally, the obtained evaluation values were plotted as a scatter plot and fitted by least squares with polynomial functions, as shown in Figure 17.
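The least-squares fit of the evaluation-value scatter can be reproduced with NumPy's polynomial fitting. The data below are hypothetical stand-ins for the 11 evaluation values, not the paper's measurements:

```python
import numpy as np

# Hypothetical: image order (1..11) vs. sharpness evaluation values that
# grow roughly linearly with the applied blur, plus small scatter
order = np.arange(1, 12)
values = 2.0 + 0.8 * order + np.array([0.05, -0.1, 0.02, 0.0, 0.1,
                                       -0.05, 0.03, -0.02, 0.04, -0.06, 0.01])

coeffs = np.polyfit(order, values, deg=1)   # least-squares line (slope, intercept)
fitted = np.polyval(coeffs, order)          # fitted values along the line
residual = np.sqrt(np.mean((values - fitted) ** 2))
```

A higher polynomial degree can be passed via `deg` when the trend is visibly nonlinear, which is how the fitted curves in Figure 17 would be produced.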

In Figure 17, the blue, green, and red lines are the fitted lines of the scatter points of the image sharpness evaluation values after the introduction of the type 1, type 2, and type 3 distance factors, respectively. In Table 1, CC is the Pearson linear correlation coefficient and ROCC is the Spearman rank-order correlation coefficient. A higher CC value indicates that the evaluation method is more effective; a higher ROCC value indicates that the evaluation method is more monotonic. From the data in Table 1, it is clear that the evaluation method with the type 3 distance factor performed better in terms of both accuracy and predicted monotonicity. Therefore, when the contents of the measured images are the same, the type 3 distance factor performs better. The highest performances in Table 1 are shown in boldface.
The second experiment differed from the first in that its measured images were all Gaussian blurred images from the LIVE database. These images had different blurring degrees and were not correlated with each other, as shown in Figure 18. Figure 18. Content-independent images with increasing blur.
Using the same processing method as the first experiment, Figure 19 was obtained. It should be noted that the abscissa in Figure 19 was not the order of the measured images but the subjective evaluation DMOS values of the corresponding measured images. The reason is that the measured images in the first experiment were generated by artificially applying Gaussian blur evenly. Therefore, it is reasonable to perform a linear fit to the scatter points of the evaluation values. However, the measured images selected in the second experiment were Gaussian blurred images in the LIVE database; their blurring degrees did not increase uniformly. At this time, it was obviously wrong to linearly fit the scatter points. Therefore, it was better to select the subjective evaluation DMOS values in the LIVE database as the abscissa and then to perform a linear fit to the scatter points. Observing the data of the second experiment in Table 1, it is easy to find that the CC values of the evaluation methods corresponding to the three distance factors were almost the same, while the ROCC value of the evaluation method corresponding to the type 1 distance factor was smaller compared to those of the evaluation methods corresponding to the type 2 and type 3 distance factors. This indicates that, when the contents of the measured images are not correlated, there is no significant difference in the accuracy of the evaluation methods with different distance factors. However, the evaluation methods corresponding to type 2 and type 3 distance factors performed better in predicting monotonicity.
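The CC and ROCC statistics discussed around Table 1 are standard correlation measures and can be computed directly with SciPy. The arrays below are toy values, not the paper's data:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Toy data: subjective DMOS values vs. objective evaluation values
dmos = np.array([10.0, 20.0, 35.0, 42.0, 55.0, 63.0])
pred = np.array([11.5, 19.0, 33.0, 45.0, 52.0, 66.0])

cc, _ = pearsonr(dmos, pred)     # linear accuracy (CC)
rocc, _ = spearmanr(dmos, pred)  # rank-order monotonicity (ROCC)
```

Because `pred` is perfectly monotone in `dmos` here, ROCC is 1 even though the linear CC is slightly below 1, which illustrates why the two indicators measure different things.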
In conclusion, the evaluation method after introducing the type 3 distance factor has better accuracy and monotonicity prediction when evaluating images with the same contents or images with different contents. Therefore, the image sharpness evaluation index after the introduction of the type 3 distance factor will be chosen for subsequent experiments in this paper.

Content-Independent Experiment
This experiment was designed to show that the evaluation method proposed in this paper is superior to the evaluation method proposed in reference [14] and the traditional Tenengrad function evaluation method.
As in the second experiment in the previous subsection, the images in this experiment were again selected from the LIVE database; the difference was that the selected images were the 29 undistorted reference images with different contents. First, the order of these 29 images was randomly shuffled. After that, Gaussian blur was added to the images sequentially according to their shuffled order, with the Gaussian blur standard deviation increasing from 0.1 to 2.9 in steps of 0.1, resulting in 29 blurred measured images. Finally, the measured images were evaluated by the method in this paper, the method in reference [14], and the Tenengrad function method, respectively; the evaluation results are shown in Figure 20.
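The Tenengrad function used as the baseline here is the classical Sobel-gradient-energy sharpness measure; a minimal NumPy sketch (the unthresholded variant, with simple edge padding):

```python
import numpy as np

def tenengrad(img):
    """Tenengrad sharpness: mean squared Sobel gradient magnitude.
    Larger = sharper, which is why its curve in Figure 20 decreases
    as the Gaussian blur standard deviation grows."""
    img = np.asarray(img, float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x
    ky = kx.T                                                   # Sobel y
    def conv(im, k):
        out = np.zeros_like(im)
        p = np.pad(im, 1, mode="edge")
        for i in range(3):
            for j in range(3):
                out += k[i, j] * p[i:i + im.shape[0], j:j + im.shape[1]]
        return out
    gx, gy = conv(img, kx), conv(img, ky)
    return float(np.mean(gx ** 2 + gy ** 2))
```

Some formulations sum only gradient magnitudes above a threshold; the mean over all pixels is used here to keep the sketch short.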

As can be seen in Figure 20, as the Gaussian blur standard deviation of the images increases, the evaluation values of both the method in this paper and the method in reference [14] show an increasing trend, while the Tenengrad function evaluation method shows a decreasing trend; these opposite trends are caused by the respective calculation principles of the methods. In order to compare the accuracy of the three evaluation methods more comprehensively, this paper adds a comparison of the root mean square error (RMSE) and the mean absolute error (MAE) of the three evaluation methods to the calculation of the Pearson linear correlation coefficient (CC); the final results are shown in Table 2.
Figure 20. Comparative experiment of image content irrelevance (red represents the method of this paper, blue represents the method of reference [14], green represents the Tenengrad method).
A larger CC value and smaller RMSE and MAE values indicate better validity of a method. Therefore, it is clear from Table 2 that the evaluation method proposed in this paper was significantly superior to the other two methods.
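The RMSE and MAE indicators added in Table 2 are standard error measures; a short self-contained sketch with toy data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error: penalizes large deviations quadratically."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the deviations."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Toy check: unit-magnitude errors give RMSE 1.0 and MAE 1.0
print(rmse([0, 0, 0, 0], [1, -1, 1, -1]))  # 1.0
```

Because RMSE squares the errors before averaging, it is always at least as large as MAE on the same data, and the gap between the two indicates how uneven the error distribution is.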

Subjective and Objective Consistency Experiment
The above experiments fully proved that the sharpness evaluation method proposed in this paper can well determine the blurring degrees of images with different contents. However, the consistency between the evaluation method in this paper and the subjective evaluation has not been verified. Therefore, we verified the subjective and objective consistency of the proposed method by evaluating the sharpness of all 145 Gaussian blurred images in the LIVE database. In addition, it is important to note that all 145 images above have subjective evaluation DMOS values.
The sharpness evaluation values and DMOS values of these 145 measured images were fitted using Equation (7) [18], where Value_i is the sharpness evaluation value of the i-th image, DMOS_i is the corresponding subjective evaluation value, and β_1-β_4 are the model parameters to be fitted.

After obtaining the fitted model parameters of the above three methods, the subjective evaluation DMOS values of the Gaussian blurred images in the LIVE database could be predicted by Equation (7). The relationship between the predicted subjective evaluation value DMOS_pred and the subjective evaluation value DMOS is shown in Figure 22.
Figure 22. Relationship between predicted and subjective DMOS values: (a) method of this paper; (b) reference [14] method; (c) Tenengrad method.
As can be seen in Figure 22, the method of this paper was in better agreement with the subjective evaluation. To illustrate this point more fully, five additional technical indicators, which are widely used to measure the performance of evaluation methods, were calculated in this paper: the root mean square error (RMSE), the Pearson linear correlation coefficient (CC), the mean absolute error (MAE), the Spearman rank-order correlation coefficient (ROCC), and the outlier ratio (OR). The higher the ROCC value, the more obviously the predicted evaluation value of the model increases with the subjective evaluation value; thus, the ROCC value shows the predictive monotonicity of the model, and a larger value is better. A smaller OR value means that the model has better prediction ability for images with different contents; it shows the predictive consistency of the model, and a smaller value is better. The performance evaluation of the three methods is shown in Table 3. It is obvious from the table that the evaluation method proposed in this paper outperforms the other two methods on the above technical indicators. This proves that the proposed evaluation method satisfies the requirement that image sharpness evaluation should be independent of image content and has high consistency with subjective evaluation.
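Equation (7)'s exact form is not reproduced in this excerpt. IQA studies commonly use a four-parameter logistic for this fitting step, so the sketch below assumes that form with `scipy.optimize.curve_fit`; the (value, DMOS) pairs are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(value, b1, b2, b3, b4):
    """Assumed 4-parameter logistic mapping objective values to DMOS.
    This mirrors common IQA practice; the paper's Equation (7) may differ."""
    return b2 + (b1 - b2) / (1.0 + np.exp(-(value - b3) / abs(b4)))

# Synthetic (value, DMOS) pairs following a logistic trend
value = np.linspace(0, 10, 30)
dmos = logistic4(value, 80.0, 10.0, 5.0, 1.5)

# Fit beta_1..beta_4 from a reasonable initial guess, then predict DMOS
params, _ = curve_fit(logistic4, value, dmos, p0=[70, 20, 4, 1])
dmos_pred = logistic4(value, *params)
```

Once the parameters are fitted per method, the predicted DMOS_pred values can be compared against the true DMOS values with CC, RMSE, MAE, ROCC, and OR as in Table 3.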

Running Time Evaluation
In the previous subsections, we experimentally verified that our proposed method was superior to the other two methods in terms of content irrelevance and subjective-objective consistency. In the following, we evaluated the 29 clear reference images in the LIVE database with each of the three evaluation methods and then obtained, by averaging, the time required to process one image for each method, as shown in Table 4. It should be noted that the experiment in this section was conducted on a PC with a 2.9 GHz CPU, 16 GB RAM, and an NVIDIA GeForce RTX 2060.
From the experimental data in Table 4, it can be seen that the proposed method had an obvious advantage in running time over the other two methods. This is because we adopted the less computationally intensive eight-neighborhood grayscale difference method to determine the edge directions of the edge points, while reference [14] requires at least two Sobel convolution operations on the measured image, making its running time significantly longer than that of our method. Additionally, we noticed that both the method in this paper and the method in reference [14] were faster than the Tenengrad method. This illustrates that edge-information-based image sharpness evaluation methods are faster than the Tenengrad method, which is based on the contrast principle. Therefore, our method is more suitable for application in scenes with strict real-time requirements.
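The per-image timing described above can be measured with a simple wall-clock average; `evaluate` below is a hypothetical stand-in for any of the three sharpness evaluators:

```python
import time

def average_runtime(evaluate, images, repeats=1):
    """Average per-image running time of a sharpness evaluator.
    `evaluate` is any callable taking one image (hypothetical here);
    `repeats` re-runs the whole set to smooth out timer noise."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        for img in images:
            evaluate(img)
    return (time.perf_counter() - t0) / (repeats * len(images))

# Toy usage with a stand-in evaluator and 29 dummy "images"
avg = average_runtime(lambda img: sum(img), [[1, 2, 3]] * 29, repeats=3)
```

`time.perf_counter` is preferred over `time.time` for interval measurement because it is monotonic and has higher resolution.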

Real Shooting Experiment
Most of the measured images in the above experiments came from the LIVE database. In order to verify that the sharpness evaluation method proposed in this paper is also effective for real shot images, we conducted a real shooting experiment with a HUAWEI P40 Pro, taking a total of six images with different contents, as shown in Figure 23. The blurring degrees of these six images increase sequentially; the sharpness indexes obtained by applying the proposed evaluation method to these images are shown in Figure 24. The experimental results show that the evaluation method proposed in this paper is independent of image content and consistent with subjective evaluation.

Conclusions
This paper proposes an improved method based on edge information for evaluating image sharpness. Firstly, the Canny edge detection algorithm based on the activation mechanism was used to obtain the edge positions. Then, the edge direction of each edge point was determined by the eight-neighborhood grayscale difference method, and the histogram of edge width was established afterwards. Finally, a distance factor was introduced into the weighted average edge width solving model to obtain the sharpness evaluation index. By comparing the image evaluation performance when each of the three distance factors was applied, a comprehensive analysis showed that the type 3 distance factor possessed better accuracy and predictive monotonicity. In addition, to verify the superiority of the evaluation method proposed in this paper, three evaluation methods were compared on the LIVE database; together with the real shooting experiment, the results verified the superiority and effectiveness of the proposed method.
