
An Improved Method for Evaluating Image Sharpness Based on Edge Information

College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(13), 6712; https://doi.org/10.3390/app12136712
Submission received: 8 June 2022 / Revised: 29 June 2022 / Accepted: 30 June 2022 / Published: 2 July 2022
(This article belongs to the Topic Computer Vision and Image Processing)

Abstract

In order to improve the subjective and objective consistency of image sharpness evaluation while meeting the requirement of image content irrelevance, this paper proposes an improved sharpness evaluation method that requires no reference image. First, the positions of the edge points are obtained by a Canny edge detection algorithm based on an activation mechanism. Then, an edge direction detection algorithm based on the grayscale information of the eight neighboring pixels is used to acquire the edge direction of each edge point. Next, the edge widths are solved to establish the histogram of edge width. Finally, according to the performance of three distance factors based on the histogram information, the type 3 distance factor is introduced into the weighted average edge width model to obtain the sharpness evaluation index. The proposed method was tested on the LIVE database, with the following results: a Pearson linear correlation coefficient (CC) of 0.9346, a root mean square error (RMSE) of 5.78, a mean absolute error (MAE) of 4.9383, a Spearman rank-order correlation coefficient (ROCC) of 0.9373, and an outlier ratio (OR) of 0. In addition, a comparative analysis with two other methods and a real shooting experiment verified the superiority and effectiveness of the proposed method.

1. Introduction

With the significant advantages of being non-contact, flexible, and highly integrated, computer vision measurement has broad application prospects in electronic semiconductors, automotive manufacturing, food packaging, film, and other industrial fields. Image sharpness is the core index for measuring the quality of visual images; research on methods for evaluating visual image sharpness is therefore one of the key technologies of visual detection [1,2,3]. Moreover, as users demand ever-higher sharpness in video chat, HDTV, and similar applications, developing more efficient image sharpness evaluation methods has become a pressing problem.
Generally, image sharpness evaluation methods can be divided into full-reference (FR), reduced-reference (RR), and no-reference (NR) sharpness evaluation methods. Among them, FR sharpness evaluation methods judge the degree of deviation of the measured image from a sharp reference image [4]. RR sharpness evaluation methods evaluate the measured image by extracting only part of the information of the reference image [5]. However, in practical applications, undistorted sharp reference images are usually difficult to obtain. Therefore, NR sharpness evaluation methods have higher research value and wider applicability. Existing NR sharpness evaluation methods are formulated either in the transform domain or in the spatial domain [6]. Transform domain-based methods [7,8,9,10] must transform images from the spatial domain to other domains for processing; however, their computational complexity is often too large, so such methods perform poorly in real time and are limited in many applications. Spatial domain-based methods [11,12,13,14,15] can be divided into two main types. One type is based on the fact that clear images have higher contrast than blurred images; typical evaluation methods of this type are the various gradient function methods, such as the Tenengrad function method and the energy gradient function method [11]. The other type is based on the fact that image blurring leads to edge diffusion; a typical evaluation method of this type is the average edge width method [12]. It should be noted that, although both the contrast-based and the edge information-based evaluation methods have the advantage of low computational complexity, the former depends more heavily on the image content; that is, contrast-based methods tend to fail when the contents of the measured images differ.
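For concreteness, the Tenengrad measure mentioned above can be sketched in a few lines. The paper gives no implementation, so the following is a minimal NumPy/SciPy sketch of the common formulation (the mean squared Sobel gradient magnitude above an optional threshold); the function name and threshold handling are our own choices.

```python
import numpy as np
from scipy.ndimage import sobel

def tenengrad(gray, threshold=0.0):
    """Tenengrad sharpness: mean squared Sobel gradient magnitude.

    `gray` is a 2-D float array; gradients weaker than `threshold`
    are discarded before averaging.
    """
    gx = sobel(gray, axis=1)          # horizontal derivative
    gy = sobel(gray, axis=0)          # vertical derivative
    g2 = gx ** 2 + gy ** 2            # squared gradient magnitude
    g2 = np.where(g2 > threshold ** 2, g2, 0.0)
    return float(g2.mean())
```

A sharper image yields larger gradients and, hence, a larger Tenengrad value, which is consistent with the trend this method shows as blur increases in Section 3.2.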
Li et al. [13] proposed a no-reference image sharpness evaluation method for scanning electron microscope images. The method first extracts the edges of dark channel maps with a Sobel operator. It then removes the noise but preserves the edge information with an edge-preserving operator based on the weighted least squares (WLS) framework. Finally, it combines the maximum gradient of each edge point with the average gradient to form the sharpness evaluation index. Although this method extracts part of the edge information of the image by edge detection, it is still essentially an evaluation method based on the contrast principle. Wang [14] proposed an image sharpness evaluation method based on strong edge width. She convolved the measured image with a Sobel operator to obtain the horizontal and vertical gradient maps, selected thresholds to obtain the horizontal and vertical strong edge points of the measured image, and solved for the strong edge width. Finally, the sharpness evaluation index was generated by introducing the histogram information. In summary, most current image sharpness evaluation methods based on edge information still extract edge points with a Sobel operator and often consider only the horizontal and vertical directions when determining the edge direction of edge points, which largely limits further improvement in the accuracy of this type of evaluation method. In addition, not all edge information is needed by evaluation methods, yet few scholars distinguish among the extracted edge information.
In this paper, we focus on the abovementioned problems. Firstly, the Canny edge detection algorithm, which has excellent comprehensive performance, is improved to enhance the edge detection effect on the measured images. Then, we propose an eight-neighborhood grayscale difference method that rapidly and efficiently assigns each edge point one of four edge directions. Finally, by comparing three distance factors based on the histogram of edge width, the image sharpness evaluation method proposed in this paper is obtained. With these improvements, our method performs well in terms of content irrelevance, subjective–objective consistency, and computational speed, and it has great application potential, especially for real-time evaluation of image sharpness.

2. Principle and Design of the Sharpness Evaluation Method

2.1. Image Edge

The edge information of an image is crucial for vision and is one of the important features of an image. Figure 1 simulates the blurring of an ideal step edge using a black and white image with drastically changing grayscale values. It can be seen that, when the image is blurred, the edges of the image spread and the grayscale curve flattens accordingly. Clearly, the degree of edge diffusion is positively correlated with the degree of image blurring.
It should be noted that the edges in a clear image are not always step edges; there are also impulse edges and roof edges, depending on the variation of the grayscale values, as shown in Figure 2. However, a clear image is gradually smoothed as it blurs, which causes impulse edges and roof edges to disappear; their behavior is thus clearly different from the relationship between step edges and the degree of image blurring. Therefore, the approach used to extract step or approximately step edges directly affects the accuracy of the sharpness evaluation method. Section 2.4 gives a detailed solution to this problem, so it is not discussed further here.

2.2. Edge Detection

The Canny operator has superior overall performance compared to other edge detection operators, but the effect of Canny edge detection depends heavily on the choice of its threshold. If the threshold is set too high, edges will be missed and become discontinuous. If the threshold is set too low, over-detection problems arise, such as noise in the measured images being wrongly detected as edges. Therefore, to improve the edge detection effect, an improved Canny edge detection algorithm based on an activation mechanism is proposed in this paper.
Plot (a) in Figure 3 assumes that the edge detection result was obtained under a high threshold; it can be seen that it contains few edge points. Plot (b) shows the edge detection result obtained in the low-threshold case, with more edge points and a noise point marked in green. By replicating the edge information of plot (a) onto plot (b), plot (c) is obtained. This process is called activation; the activated edge points are marked in red. After that, the activated edge points activate all other edge points adjacent to them, as shown in plot (d). Because noise tends to exist in isolation, the isolated noise point is filtered out by the activation process, leaving plot (e). The edge information obtained by this improved algorithm has the characteristics of low noise and high accuracy.
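As a sketch of this activation step, the following NumPy fragment propagates activation from the high-threshold edge points through the low-threshold edge map by an 8-connected flood fill; isolated noise points in the low-threshold map are never reached and therefore drop out. The two input binary maps are assumed to come from running Canny detection twice (once per threshold); all names are ours.

```python
import numpy as np
from collections import deque

def activate_edges(edges_high, edges_low):
    """Keep only low-threshold edge points reachable (8-connected)
    from a high-threshold edge point; isolated noise is discarded."""
    h, w = edges_low.shape
    activated = np.zeros_like(edges_low, dtype=bool)
    # seed queue: edge points confirmed by the high threshold
    queue = deque(zip(*np.nonzero(edges_high & edges_low)))
    for r, c in queue:
        activated[r, c] = True
    while queue:                      # breadth-first activation
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < h and 0 <= cc < w
                        and edges_low[rr, cc] and not activated[rr, cc]):
                    activated[rr, cc] = True
                    queue.append((rr, cc))
    return activated
```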
Figure 4 depicts the edge extraction process of the Lena test image using our improved algorithm. It can be clearly seen that the edge extraction result processed by the improved algorithm is less noisy than that under low threshold and more accurate than that under high threshold.

2.3. Analysis of Edge Width

The essence of an edge is a collection of pixel points with drastically changing grayscale values. To calculate the edge width of an edge point, it is necessary to firstly determine the edge direction corresponding to the edge point and then calculate the edge width along the edge direction according to appropriate rules.

2.3.1. Determination of Edge Direction

Ong et al. [16] calculated the gradient of each edge point in the measured image with a Sobel operator and defined the gradient direction (including the negative gradient direction) as the edge direction of that edge point. Unlike the idea of using the gradient to determine the edge direction, this paper proposes a method based on the grayscale differences between the pixel points in the eight-neighborhood of each edge point. Compared with the gradient determining method, this method not only improves the accuracy of determining the edge direction but also reduces the computation time to roughly a third (see the timing comparison below).
Figure 5 illustrates the calculation principle of the eight-neighborhood grayscale difference method. For each edge point (GEdge denotes an edge point in the figure below), the four grayscale differences of its eight-neighborhood pixel points are calculated along the horizontal, vertical, 45°, and −45° directions, respectively, namely:
$$D_{\mathrm{horizontal}} = \left| G_{12} - G_{32} \right|, \quad D_{\mathrm{vertical}} = \left| G_{21} - G_{23} \right|, \quad D_{45^{\circ}} = \left| G_{31} - G_{13} \right|, \quad D_{-45^{\circ}} = \left| G_{11} - G_{33} \right| \tag{1}$$
The edge direction given by this method is the direction opposite (i.e., perpendicular) to the direction with the minimum of the four differences in Equation (1). For example, the opposite of the horizontal direction is the vertical direction, and the opposite of the 45° direction is the −45° direction.
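A compact sketch of this rule follows, assuming the grayscale subscripts $G_{ij}$ of Equation (1) denote row $i$, column $j$ of the 3 × 3 neighborhood centered on the edge point (so, e.g., $G_{12}$ sits directly above the center), and assuming the pixel lies in the image interior; the names are ours.

```python
import numpy as np

# the paper's "opposite" (perpendicular) direction for each difference
OPPOSITE = {"horizontal": "vertical", "vertical": "horizontal",
            "+45": "-45", "-45": "+45"}

def edge_direction(gray, r, c):
    """Eight-neighborhood grayscale difference method (Equation (1)).

    The grayscale varies least along an edge, so the edge direction
    is taken opposite to the minimum-difference direction."""
    g = gray.astype(np.float64)
    diffs = {
        "horizontal": abs(g[r - 1, c] - g[r + 1, c]),          # |G12 - G32|
        "vertical":   abs(g[r, c - 1] - g[r, c + 1]),          # |G21 - G23|
        "+45":        abs(g[r + 1, c - 1] - g[r - 1, c + 1]),  # |G31 - G13|
        "-45":        abs(g[r - 1, c - 1] - g[r + 1, c + 1]),  # |G11 - G33|
    }
    return OPPOSITE[min(diffs, key=diffs.get)]
```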
The following figure shows the edge directions of the Lena test image determined by the gradient determining method of reference [14] and by the eight-neighborhood grayscale difference method of this paper, respectively. For the edge points in Figure 6a–d (where the pentagrams are located), the gradient determining method yields the vertical, vertical, horizontal, and vertical directions, respectively. The eight-neighborhood grayscale difference method yields the −45°, horizontal, vertical, and −45° directions, respectively. The edge directions determined by the eight-neighborhood grayscale difference method are thus clearly more realistic and accurate.
In addition to comparing the accuracy of the above two methods, this paper further compares the computational time of the gradient determining method and the eight-neighborhood grayscale difference method by calculating four images with typical edge directions.
We performed the gradient determining operation on the four images in Figure 7 in Visual Studio 2019 using C++ under the Windows 10 operating system; the average processing time was 10 ms per image, whereas the eight-neighborhood grayscale difference operation averaged 3 ms per image. The eight-neighborhood grayscale difference method proposed in this paper can therefore determine the edge direction of each edge point quickly and efficiently, which creates the conditions for the accurate calculation of the edge width in the next step.

2.3.2. Solution of Edge Width

To calculate the edge width of an edge point, it is necessary to find the grayscale extreme points closest to the edge point on both sides along the edge direction [10]. When the grayscale values on one side are larger than those on the other, the maximum value point on the side with the larger grayscale values and the minimum value point on the side with the smaller grayscale values are selected as the start and end points of the edge width; the distance between these two points is the edge width of that edge point. Figure 8 shows the variation of the grayscale values in the 257th row, along the horizontal direction, of the reference image “parrots” in the LIVE database [17] after Gaussian blurring. As can be seen in the figure below, the edge widths of the edge points P1 and P3 are P2–P2′ and P4′–P4, respectively.
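The extremum-search rule can be sketched as a pair of monotone walks from the edge point, one in each sense of the edge direction; each walk stops when the grayscale stops rising (or falling), i.e., at the nearest extremum. This is our own minimal rendering of the rule, reusing the direction labels of the previous sketch; the step vectors and names are ours.

```python
import numpy as np

# one pixel step (row, col) for each edge direction
STEP = {"horizontal": (0, 1), "vertical": (1, 0),
        "+45": (-1, 1), "-45": (1, 1)}

def edge_width(gray, r, c, direction):
    """Distance between the two grayscale extrema flanking an edge
    point, measured along its edge direction (cf. Section 2.3.2)."""
    g = gray.astype(np.float64)
    h, w = g.shape
    dr, dc = STEP[direction]

    def walk(dr, dc, sign):
        # advance while the profile keeps strictly rising (sign=+1)
        # or falling (sign=-1); return the number of steps taken
        rr, cc, steps = r, c, 0
        while (0 <= rr + dr < h and 0 <= cc + dc < w
               and sign * (g[rr + dr, cc + dc] - g[rr, cc]) > 0):
            rr, cc = rr + dr, cc + dc
            steps += 1
        return steps

    # the profile rises on one side of the edge and falls on the other;
    # try the rising sense forward first, then the reverse orientation
    width = walk(dr, dc, +1) + walk(-dr, -dc, -1)
    if width == 0:
        width = walk(dr, dc, -1) + walk(-dr, -dc, +1)
    return width
```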
In this paper, the above rule is also followed when calculating the edge width. The upper and lower rows of the images in Figure 9 are “parrots” and “planes” in the Gaussian blurred images of the LIVE database, respectively. According to the method in this paper, the edge widths of the edge points in the upper and lower rows of the images were calculated separately; the calculation results are shown in Figure 10.
For each measured image, 100 edge points of “parrots” and “planes” were randomly selected and spread evenly over 360°, so that each edge point corresponds to an angle within 0–360°, namely its polar angle. The edge width of the edge point is taken as the corresponding polar radius, so each edge point with a given edge width can be mapped into the polar coordinate system. Plot (a) and plot (b) in Figure 10 correspond to the upper and lower rows of images in Figure 9, respectively (each row of images in Figure 9 is numbered a, b, c, and d from left to right). By the definition of points in the polar coordinate system, the farther out a line lies, the more edge points with large edge widths the corresponding image contains. It can be clearly seen from Figure 10 that the pink line lies innermost, then the blue, then the yellow, with the green line outermost, matching the fact that the two rows of images in Figure 9 become more blurred from left to right. This indicates that the edge width calculation method proposed in this paper adequately reflects the blurring degree of images.
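The polar mapping itself is a one-liner per image; the following Matplotlib sketch (our construction, not from the paper) reproduces the layout of Figure 10 given a dictionary of sampled edge widths per image.

```python
import numpy as np
import matplotlib.pyplot as plt

def polar_edge_plot(widths_per_image):
    """Plot sampled edge widths on polar axes: evenly spaced angles
    over 0-360 degrees, edge width as the radius (cf. Figure 10)."""
    ax = plt.subplot(projection="polar")
    for label, widths in widths_per_image.items():
        theta = np.linspace(0.0, 2.0 * np.pi, len(widths), endpoint=False)
        ax.plot(theta, widths, label=label)
    ax.legend(loc="upper right")
    plt.show()
```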

2.4. Histogram of Edge Width

For the obtained edge widths of the different edge points, the probability $P(\omega_i)$ that the edge width equals $\omega_i$ can be calculated by Equation (2).

$$P(\omega_i) = \frac{n_i}{N} \tag{2}$$

In the above equation, $n_i$ is the number of edge points with edge width $\omega_i$ and $N$ is the total number of edge points.
Once the probabilities of different edge widths are obtained, the histogram of the edge width can be established. Take the Gaussian blurred image “womanhat” in the LIVE database as an example; its corresponding histogram is shown in Figure 11.
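Establishing the histogram amounts to counting the occurrences of each distinct width; a two-line NumPy sketch (names ours):

```python
import numpy as np

def edge_width_histogram(widths):
    """Empirical probability P(w_i) = n_i / N of each edge width (Equation (2))."""
    values, counts = np.unique(np.asarray(widths), return_counts=True)
    return values, counts / counts.sum()
```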
It can be seen from Figure 11 that, as the degree of blurring deepens, two phenomena appear in the corresponding histogram: (1) the peak shifts to the right, that is, the probability of large edge widths increases; and (2) the histogram spreads and the peak value decreases, meaning that the probability of larger edge widths generally increases. Regarding these phenomena, reference [14] notes that the edge widths in the peak portion of the histogram are more likely to have been generated by blurring step edges or approximately step edges and, thus, reflect blurriness more accurately. Accordingly, a distance factor, as shown in Equation (3), was introduced to enhance the contribution of the edge widths of the peak portion to the sharpness evaluation. The distance factor variation relationship corresponding to Equation (3) is shown in Figure 12.
In this paper, based on the previous study, two distance factors, as shown in Equations (4) and (5), are proposed, and their respective relationships with the edge width are shown in Figure 13 and Figure 14, respectively. For the convenience of later description, the distance factors corresponding to Equations (3)–(5) are named as type 1 distance factor, type 2 distance factor, and type 3 distance factor, respectively.
$$d(\omega_i) = \begin{cases} \left( \dfrac{\omega_i}{\omega_{mp}} \right)^2 & \omega_i < \omega_{mp} \\ 1 & \omega_i = \omega_{mp} \\ \left( \dfrac{\omega_{me} - \omega_i}{\omega_{me} - \omega_{mp}} \right)^2 & \omega_i > \omega_{mp} \end{cases} \tag{3}$$

In the above equation, $\omega_{mp}$ is the edge width with the highest probability, $\omega_{me}$ is the longest edge width, $\omega_i$ is the edge width, and $d(\omega_i)$ is the distance factor of $\omega_i$.

$$d(\omega_i) = \begin{cases} \dfrac{\omega_i}{\omega_{mp}} & \omega_i < \omega_{mp} \\ 1 & \omega_i = \omega_{mp} \\ \dfrac{\omega_{me} - \omega_i}{\omega_{me} - \omega_{mp}} & \omega_i > \omega_{mp} \end{cases} \tag{4}$$

$$d(\omega_i) = \begin{cases} \dfrac{\omega_i \left( 2\omega_{mp} - \omega_i \right)}{\omega_{mp}^2} & \omega_i < \omega_{mp} \\ 1 & \omega_i = \omega_{mp} \\ \dfrac{\left( \omega_{me} - \omega_i \right)\left( \omega_i - 2\omega_{mp} + \omega_{me} \right)}{\left( \omega_{mp} - \omega_{me} \right)^2} & \omega_i > \omega_{mp} \end{cases} \tag{5}$$
Figure 13. The distance factor variation relationship in Equation (4).
Figure 14. The distance factor variation relationship in Equation (5).
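All three distance factors share the same structure: they equal 1 at the most probable width $\omega_{mp}$ and decay toward 0 at widths of 0 and $\omega_{me}$, differing only in the shape of the ramp. A single vectorized sketch of Equations (3)–(5) (our transcription; it assumes $\omega_{me} > \omega_{mp} > 0$):

```python
import numpy as np

def distance_factor(w, w_mp, w_me, kind=3):
    """Distance factors of Equations (3)-(5): 1 at the most probable
    width w_mp, decaying to 0 at w = 0 and at the longest width w_me.
    Assumes w_me > w_mp > 0."""
    w = np.atleast_1d(np.asarray(w, dtype=np.float64))
    below, above = w < w_mp, w > w_mp
    d = np.ones_like(w)                         # d = 1 where w == w_mp
    if kind == 1:    # Equation (3): quadratic ramps
        d[below] = (w[below] / w_mp) ** 2
        d[above] = ((w_me - w[above]) / (w_me - w_mp)) ** 2
    elif kind == 2:  # Equation (4): linear ramps
        d[below] = w[below] / w_mp
        d[above] = (w_me - w[above]) / (w_me - w_mp)
    else:            # Equation (5): inverted-parabola ramps
        d[below] = w[below] * (2 * w_mp - w[below]) / w_mp ** 2
        d[above] = ((w_me - w[above]) * (w[above] - 2 * w_mp + w_me)
                    / (w_mp - w_me) ** 2)
    return d
```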

2.5. Sharpness Evaluation Model

After acquiring distance factors, the final sharpness evaluation value can be obtained by introducing them into Equation (6).
$$\mathrm{Value} = \sum_{\omega_i = \omega_{minE}}^{\omega_{maxE}} d(\omega_i) P(\omega_i) \omega_i \tag{6}$$

In the above equation, $\omega_{minE}$ and $\omega_{maxE}$ are the minimum and maximum edge widths, respectively.
Finally, we summarize the sharpness evaluation model proposed in this paper. The edge information of the measured image can be obtained after the edge detection. Then, the edge direction of the edge point can be determined by calculating the eight-neighborhood grayscale difference of the extracted edge point and the edge width can be calculated along the edge direction of the edge point. With the edge width, the histogram of edge width can be established. Then, the distance factor of each edge width can be acquired according to the distance factor calculation equation. Afterwards, the distance factor is introduced into the evaluation index to obtain the sharpness evaluation model of this paper. The above process is shown in Figure 15.
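Putting the pieces together, Equation (6) is a distance-factor-weighted average of the edge widths. A sketch combining the histogram and distance-factor helpers above (ours; the type 3 factor is used, as selected in Section 3.1):

```python
import numpy as np

def sharpness_value(widths):
    """Sharpness index of Equation (6), computed from the edge widths
    of all detected edge points."""
    values, counts = np.unique(np.asarray(widths), return_counts=True)
    p = counts / counts.sum()            # P(w_i), Equation (2)
    w_mp = values[np.argmax(p)]          # most probable edge width
    w_me = values.max()                  # longest edge width
    d = distance_factor(values, w_mp, w_me, kind=3)  # see Section 2.4
    return float(np.sum(d * p * values))
```

Note that, because edge widths grow with blur, a larger Value indicates a more blurred image, which matches the increasing trend observed in Section 3.2.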

3. Experimental Results and Analysis

3.1. Distance Factor Comparison Experiment

In order to fully compare the performances of the three distance factors and, thus, decide which distance factor should be introduced into the sharpness evaluation index, two experiments were conducted for this section based on whether the image contents were the same.
The first experiment was set up as follows. Firstly, 11 “cameraman” images with the same contents but gradually increasing blur were selected, as shown in Figure 16. Then, these images were evaluated by the sharpness evaluation model with each of the three distance factors introduced in turn. Finally, the obtained evaluation values were plotted as a scatter plot and least squares fitted with polynomial functions, as shown in Figure 17.
In Figure 17, the blue, green, and red lines are the fitted lines of the scatter points of the image sharpness evaluation values after introducing the type 1, type 2, and type 3 distance factors, respectively. In Table 1, CC is the Pearson linear correlation coefficient and ROCC is the Spearman rank-order correlation coefficient. A higher CC value indicates that the evaluation method is more accurate; a larger ROCC value indicates that the evaluation method is more monotonic. From the data in Table 1, it is clear that the evaluation method with the type 3 distance factor performed better in terms of both accuracy and predicted monotonicity. Therefore, when the contents of the measured images are the same, the type 3 distance factor performs better.
The second experiment was different from the first experiment. Its selected images were all from the Gaussian blurred images of the LIVE database. These images had different blurring degrees and were not correlated with each other, as shown in Figure 18.
Using the same processing method as in the first experiment, Figure 19 was obtained. It should be noted that the abscissa in Figure 19 is not the order of the measured images but the subjective evaluation DMOS values of the corresponding measured images. The reason is that the measured images in the first experiment were generated by artificially applying Gaussian blur in even steps, so a linear fit of the evaluation value scatter points against image order is reasonable. The measured images selected in the second experiment, however, were Gaussian blurred images from the LIVE database, whose blurring degrees do not increase uniformly; linearly fitting the scatter points against image order would then clearly be wrong. It was therefore better to use the subjective evaluation DMOS values of the LIVE database as the abscissa and then perform a linear fit to the scatter points.
Observing the data of the second experiment in Table 1, it is easy to find that the CC values of the evaluation methods corresponding to the three distance factors were almost the same, while the ROCC value of the evaluation method corresponding to the type 1 distance factor was smaller compared to those of the evaluation methods corresponding to the type 2 and type 3 distance factors. This indicates that, when the contents of the measured images are not correlated, there is no significant difference in the accuracy of the evaluation methods with different distance factors. However, the evaluation methods corresponding to type 2 and type 3 distance factors performed better in predicting monotonicity.
In conclusion, the evaluation method after introducing the type 3 distance factor has better accuracy and monotonicity prediction when evaluating images with the same contents or images with different contents. Therefore, the image sharpness evaluation index after the introduction of the type 3 distance factor will be chosen for subsequent experiments in this paper.

3.2. Content-Independent Experiment

The experiment was designed to show that the evaluation method proposed in this paper is superior to the evaluation method proposed in reference [14] and the traditional Tenengrad function evaluation method.
As in the second experiment of the previous subsection, we again selected images from the LIVE database; the difference is that the selected images were the 29 undistorted reference images with different contents. First, these 29 images were randomly shuffled. After that, Gaussian blur was applied to the images in their shuffled order, with the Gaussian blur standard deviation running from 0.1 to 2.9 in steps of 0.1, resulting in 29 blurred measured images. Finally, the measured images were evaluated by the method of this paper, the method of reference [14], and the Tenengrad function method, respectively; the evaluation results are shown in Figure 20.
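The test-set construction can be sketched directly (ours; scipy.ndimage.gaussian_filter is assumed as the blur kernel, which matches the Gaussian blur described but is not stated in the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_blurred_set(images, seed=0):
    """Shuffle the reference images and blur the k-th image of the
    shuffled order with sigma = 0.1 * (k + 1), k = 0..28 (Section 3.2)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(images))
    return [gaussian_filter(images[idx], sigma=0.1 * (k + 1))
            for k, idx in enumerate(order)]
```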
As can be seen in Figure 20, as the Gaussian blur standard deviation of the images increases, the evaluation methods of this paper and of reference [14] both show an increasing trend while the Tenengrad function evaluation method shows a decreasing trend; these trends follow from their respective calculation principles. In order to compare the accuracy of the three evaluation methods more comprehensively, this paper adds a comparison of the root mean square error (RMSE) and mean absolute error (MAE) of the three evaluation methods to the calculation of the Pearson linear correlation coefficient (CC); the final results are shown in Table 2.
Larger CC values and smaller RMSE and MAE values in the above table indicate the better validity of the method. Therefore, it is clear from the above table that the evaluation method proposed in this paper had significant superiority compared to the other two methods.

3.3. Subjective and Objective Consistency Experiment

The above experiments fully demonstrated that the sharpness evaluation method proposed in this paper can reliably determine the blurring degrees of images with different contents. However, the consistency between the evaluation method of this paper and subjective evaluation has not yet been verified. Therefore, we verified the subjective and objective consistency of the proposed method by evaluating the sharpness of all 145 Gaussian blurred images in the LIVE database. Note that all 145 of these images have subjective evaluation DMOS values.
The sharpness evaluation values and DMOS values of these 145 measured images were fitted using Equation (7) [18], where $\mathrm{Value}_i$ is the sharpness evaluation value, $\mathrm{DMOS}_i$ is the corresponding subjective evaluation value, and $\beta_1 \sim \beta_4$ are the model parameters to be fitted.

$$\mathrm{DMOS}_i = \beta_2 + \frac{\beta_1 - \beta_2}{1 + e^{-\frac{\mathrm{Value}_i - \beta_3}{\left| \beta_4 \right|}}} \tag{7}$$
Figure 21 shows the fitting curves between the evaluation values of the three evaluation methods and the DMOS values. The model parameters of the fitting curve of the proposed method are $\beta_1 = 73.14$, $\beta_2 = 17.12$, $\beta_3 = 10{,}150$, and $\beta_4 = 1406$. Those of the reference [14] method are $\beta_1 = 64.56$, $\beta_2 = -1.817$, $\beta_3 = 1.857$, and $\beta_4 = -0.3279$. Those of the Tenengrad method are $\beta_1 = 29.22$, $\beta_2 = 227.7$, $\beta_3 = -1716$, and $\beta_4 = -1811$.
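The four-parameter logistic fit of Equation (7) can be reproduced with SciPy's nonlinear least squares; the following is a sketch, with initial guesses of our own choosing (the paper does not state its fitting procedure beyond citing [18]):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(value, b1, b2, b3, b4):
    """Four-parameter logistic of Equation (7)."""
    return b2 + (b1 - b2) / (1.0 + np.exp(-(value - b3) / abs(b4)))

def fit_dmos(values, dmos):
    """Fit beta_1..beta_4 and return them with a DMOS predictor."""
    p0 = [np.max(dmos), np.min(dmos), np.median(values), np.std(values)]
    params, _ = curve_fit(logistic4, values, dmos, p0=p0, maxfev=10000)
    return params, lambda v: logistic4(v, *params)
```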
After obtaining the fitted model parameters of the above three methods, the subjective evaluation DMOS values of the Gaussian blurred images in the LIVE database could be predicted by Equation (7). The relationship between the predicted subjective evaluation value $\mathrm{DMOS}_{pred}$ and the subjective evaluation value DMOS is shown in Figure 22.
As can be seen in Figure 22, the method of this paper agrees better with the subjective evaluation. To illustrate this point more fully, five technical indicators widely used to measure the performance of evaluation methods were calculated: the root mean square error (RMSE), Pearson linear correlation coefficient (CC), mean absolute error (MAE), Spearman rank-order correlation coefficient (ROCC), and outlier ratio (OR). A higher ROCC value indicates a more pronounced trend of the model's predicted evaluation value increasing with the subjective evaluation value; ROCC thus reflects the predictive monotonicity of the model, and larger is better. A smaller OR value means that the model predicts better across images with different contents; OR thus reflects the predictive consistency of the model, and smaller is better. The performance evaluation of the three methods is shown in Table 3. It is obvious from the table that the evaluation method proposed in this paper outperforms the other two methods on all of the above technical indicators. This proves that the proposed evaluation method satisfies the requirement that image sharpness evaluation be independent of image content and has high consistency with subjective evaluation.
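For reference, the five indicators can be computed as follows (our sketch; the paper does not define its outlier criterion, so the two-standard-deviation rule used here for OR is an assumption):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def performance_indicators(dmos_pred, dmos, outlier_sigma=2.0):
    """CC, RMSE, MAE, ROCC, and OR between predicted and true DMOS."""
    dmos_pred, dmos = np.asarray(dmos_pred), np.asarray(dmos)
    err = dmos_pred - dmos
    return {
        "CC": pearsonr(dmos_pred, dmos)[0],
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "MAE": float(np.mean(np.abs(err))),
        "ROCC": spearmanr(dmos_pred, dmos)[0],
        # OR: fraction of residuals beyond `outlier_sigma` std deviations
        "OR": float(np.mean(np.abs(err) > outlier_sigma * err.std())),
    }
```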

3.4. Running Time Evaluation

In the previous subsections, we experimentally verified that our proposed method is superior to the other two methods in terms of content irrelevance and subjective–objective consistency. Here, we evaluated the 29 clear reference images in the LIVE database with each of the three evaluation methods and obtained, by averaging, the time each method requires to process one image, as shown in Table 4. The experiment in this section was conducted on a PC with a 2.9 GHz CPU, 16 GB of RAM, and an NVIDIA GeForce RTX 2060.
From the experimental data in Table 4, it can be seen that the proposed method has a clear advantage in running time over the other two methods. This is because we adopted the computationally light eight-neighborhood grayscale difference method to determine the edge directions of the edge points, whereas the method of reference [14] requires at least two Sobel convolution operations on the measured image, making it significantly slower than ours. We also note that both the method of this paper and that of reference [14] are faster than the Tenengrad method, which illustrates that edge information-based image sharpness evaluation methods hold a speed advantage over the contrast-based Tenengrad method. Our method is therefore well suited to scenarios with strict real-time requirements.

3.5. Real Shooting Experiment

Most of the measured images in the above experiments came from the LIVE database. To verify that the sharpness evaluation method proposed in this paper is also effective for real shot images, we conducted a real shooting experiment with a HUAWEI P40 Pro, taking a total of six images with different contents, as shown in Figure 23. The blurring degrees of these six images increase sequentially; the sharpness indexes obtained by applying the proposed evaluation method to these images are shown in Figure 24. The experimental results show that the proposed evaluation method is independent of image content and consistent with subjective evaluation.

4. Conclusions

This paper proposes an improved method for evaluating image sharpness based on edge information. Firstly, a Canny edge detection algorithm based on an activation mechanism is used to obtain the edge positions. Then, the edge direction of each edge point is determined by the eight-neighborhood grayscale difference method, after which the histogram of edge width is established. Finally, a distance factor is introduced into the weighted average edge width model to obtain the sharpness evaluation index. A comparison of the image evaluation performance under the three distance factors showed that the type 3 distance factor possesses better accuracy and predictive monotonicity. In addition, to verify the superiority of the proposed evaluation method, three evaluation methods were compared on the LIVE database. The experimental results showed that, compared with the traditional Tenengrad function evaluation method, the method proposed in this paper greatly improves performance and meets the requirements of image content irrelevance and subjective–objective consistency.

Author Contributions

Conceptualization, Z.L. and Z.G.; methodology, Z.L.; software, Z.L.; validation, J.W.; formal analysis, Y.C.; investigation, Z.L.; resources, Z.L.; data curation, Z.L.; writing—original draft preparation, Z.L.; writing—review and editing, Z.G.; supervision, H.H.; project administration, H.H.; funding acquisition, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key Laboratory Fund Project (Grant No. 6142003190302) and University Scientific Research Plan Project (Grant No. ZK22-19).

Data Availability Statement

The data presented in this study are available on request from the author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ke, F.; Liu, H.; Zhao, D.; Sun, G.; Xu, W.; Feng, W. A high precision image registration method for measurement based on the stereo camera system. Optik 2020, 204, 164186.
2. Varga, D. No-Reference Image Quality Assessment with Convolutional Neural Networks and Decision Fusion. Appl. Sci. 2022, 12, 101.
3. Liu, T.J.; Liu, H.H.; Pei, S.C.; Liu, K.H. A high-definition diversity-scene database for image quality assessment. IEEE Access 2018, 6, 45427–45438.
4. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
5. Liu, Y.; Zhai, G.; Gu, K.; Liu, X.; Zhao, D.; Gao, W. Reduced-reference image quality assessment in free-energy principle and sparse representation. IEEE Trans. Multimed. 2018, 20, 379–391.
6. Liu, H.T.; Heynderickx, I. Issues in the design of a no-reference metric for perceived blur. In Proceedings of the SPIE Conference on Image Quality and System Performance, San Francisco, CA, USA, 24 January 2011; p. 78670C.
7. Marichal, X.; Ma, W.Y.; Zhang, H.J. Blur determination in the compressed domain using DCT information. In Proceedings of the International Conference on Image Processing, Kobe, Japan, 24–28 October 1999; pp. 386–390.
8. Caviedes, J.; Gurbuz, S. No-reference sharpness metric based on local edge kurtosis. In Proceedings of the International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002; pp. 53–56.
9. Caviedes, J.; Oberti, F. A new sharpness metric based on local kurtosis, edge and energy information. Signal Process. Image Commun. 2004, 19, 147–161.
10. Hassen, R.; Wang, Z.; Salama, M. No-reference image sharpness assessment based on local phase coherence measurement. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010; pp. 2434–2437.
11. Zhan, Y.B.; Zhang, R. No-reference image sharpness assessment based on maximum gradient and variability of gradients. IEEE Trans. Multimed. 2018, 20, 1796–1808.
12. Marziliano, P.; Dufaux, F.; Winkler, S.; Ebrahimi, T. Perceptual blur and ringing metrics: Application to JPEG2000. Signal Process. Image Commun. 2004, 19, 163–172.
13. Li, Q.Y.; Li, L.D.; Lu, Z.L.; Zhou, Y.; Zhu, H.C. No-reference sharpness index for scanning electron microscopy images based on dark channel prior. KSII Trans. Internet Inf. Syst. 2019, 13, 2529–2543.
14. Wang, Y.R. Research on Auto-Focus Methods Based on Digital Imaging Processing. Ph.D. Thesis, Zhejiang University, Hangzhou, China, 2018.
15. Ferzli, R.; Karam, L.J. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Trans. Image Process. 2009, 18, 717–728.
16. Ong, E.; Lin, W.; Lu, Z.; Yang, X.; Yao, S.; Pan, F.; Jiang, L.; Moschetti, F. A no-reference quality metric for measuring image blur. In Proceedings of the International Symposium on Signal Processing and Its Applications, Paris, France, 4 July 2003; pp. 469–472.
17. LIVE Image Quality Assessment Database Release 2. Available online: http://live.ece.utexas.edu/research/quality/ (accessed on 1 February 2022).
18. Seshadrinathan, K.; Soundararajan, R.; Bovik, A.C.; Cormack, L.K. Study of subjective and objective quality assessment of video. IEEE Trans. Image Process. 2010, 19, 1427–1440.
Figure 1. Step edge image and its corresponding grayscale change curve of pixels on a row.
Figure 2. Impulse and roof edge images and their corresponding grayscale change curves of pixels on a row: (a) impulse edge; (b) roof edge; (c) impulse edge grayscale change curve; (d) roof edge grayscale change curve.
Figure 3. The principle of the improved Canny edge detection algorithm: (a) edge detection result under a high threshold; (b) edge detection result under a low threshold; (c) edge detection result after initial activation; (d) edge detection result after multiple activations; (e) final edge detection result.
Figure 4. Improved Lena edge detection diagram.
Figure 5. Eight-neighborhood grayscale difference method.
Figure 6. Determination of edge direction: (a) −45° edge direction; (b) horizontal edge direction; (c) vertical edge direction; (d) −45° edge direction.
Figure 7. Typical edge direction diagram: (a) vertical edge direction; (b) 45° edge direction; (c) −45° edge direction; (d) horizontal edge direction.
Figure 8. “Parrots” grayscale value change curve.
Figure 9. Gaussian blurred images.
Figure 10. (a) “parrots” polar graph of edge information; (b) “planes” polar graph of edge information.
Figure 11. Histogram of edge width under different blurring degrees of “womanhat”.
Figure 12. The distance factor variation relationship in Equation (3).
Figure 15. Flow chart of the sharpness evaluation model.
Figure 16. “Cameraman” with increasing blur.
Figure 17. The relationship between the evaluation value and the degree of image blurring when the image contents are the same.
Figure 18. Content-independent images with increasing blur.
Figure 19. The relationship between the evaluation value and the degree of image blurring when the image contents are not the same.
Figure 20. Comparative experiment of image content irrelevance (red represents the method of this paper, blue the method of reference [14], and green the Tenengrad method).
Figure 21. Fitting curves of the three methods: (a) proposed method; (b) reference [14] method; (c) Tenengrad method.
Figure 22. The relationship between predicted subjective evaluation values and subjective evaluation values: (a) proposed method; (b) reference [14] method; (c) Tenengrad method.
Figure 23. Real shooting images: (a) toy; (b) vehicle; (c) building; (d) cup; (e) indoor; (f) door.
Figure 24. Evaluation values of real shooting images.
Table 1. Performance of the three distance factors.

Experiment   Distance Factor   CC       ROCC
I            Type 1            0.8869   0.9545
I            Type 2            0.9369   0.9727
I            Type 3            0.9568   0.9909
II           Type 1            0.9509   0.9818
II           Type 2            0.9514   0.9909
II           Type 3            0.9502   0.9909

The highest performances are shown in boldface.
Table 2. Performance of the three evaluation methods.

Method           CC        RMSE    MAE
Proposed         0.9684    0.6534  0.4746
Reference [14]   0.9627    0.7158  0.5275
Tenengrad        −0.7817   1.944   2.9083
Table 3. Performance evaluation of the three methods.

Method           RMSE    CC       MAE      ROCC     OR
Proposed         5.78    0.9346   4.9383   0.9373   0
Reference [14]   7.393   0.8906   5.9560   0.8754   0.0276
Tenengrad        8.297   0.8599   6.7808   0.8301   0.0456
Table 4. Running time comparison.

Method     Proposed   Reference [14]   Tenengrad
Time (s)   0.00697    0.0207           0.0247
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
