Article

An Objective Evaluation Method for Image Sharpness Under Different Illumination Imaging Conditions

by Huan He 1, Benchi Jiang 1,2, Chenyang Shi 1,3,*, Yuelin Lu 1 and Yandan Lin 4,*

1 School of Artificial Intelligence, Anhui Polytechnic University, Wuhu 241000, China
2 Industrial Innovation Technology Research Co., Ltd., Anhui Polytechnic University, Wuhu 241000, China
3 Anhui Engineering Research Center of Vehicle Display Integrated Systems, School of Integrated Circuits, Anhui Polytechnic University, Wuhu 241000, China
4 Department of Illuminating Engineering & Light Sources, School of Information Science and Technology, Fudan University, Shanghai 200433, China
* Authors to whom correspondence should be addressed.
Photonics 2024, 11(11), 1032; https://doi.org/10.3390/photonics11111032
Submission received: 18 September 2024 / Revised: 15 October 2024 / Accepted: 28 October 2024 / Published: 1 November 2024
(This article belongs to the Special Issue New Perspectives in Optical Design)

Abstract

Blurriness is a common problem in digital images captured under different illumination imaging conditions. To obtain an accurate blurred image quality assessment (IQA), a machine learning-based objective evaluation method for image sharpness under different illumination imaging conditions is proposed. In this method, visual saliency, color difference, and gradient information are selected as the image features, and the relevant feature information of these three aspects is extracted from the image as the feature values for blurred image evaluation under different illumination imaging conditions. Then, a particle swarm optimization-based general regression neural network (PSO-GRNN) is built and trained on the extracted feature values to determine the final blurred image evaluation result. The proposed method was validated on three databases containing real blurred images captured under different illumination imaging conditions, i.e., BID, CID2013, and CLIVE. The experimental results show that the proposed method performs well in evaluating the quality of images under different imaging conditions.

1. Introduction

With the continuous development of imaging science, imaging technology has been widely applied in many fields such as video conferencing, medical imaging, remote sensing, compressive sensing, and social media [1,2]. In machine vision systems, extreme external illumination can further lead to a decrease in the image quality [3]. Illumination plays a critical role in capturing the image procedure, and a change in the illumination is the main factor causing image blurring and distortion [4]. The evaluation of image quality consists of analyzing and quantifying the degree of distortion and developing a quantitative evaluation index. A subjective quality evaluation is relatively reliable. However, it is time-consuming, labor-intensive, and not conducive to application in intelligent evaluation systems [5]. Studies relating to objective image quality assessment (IQA), e.g., image sharpness assessments, are becoming increasingly important in assessing the impact of the variation in an image’s appearance on the resulting visual quality and ensuring the reliability of image processing systems [6,7].
According to the availability of a reference image, IQA can usually be divided into three categories: full-reference IQA (FR-IQA) [8,9], reduced-reference IQA (RR-IQA) [10], and no-reference IQA (NR-IQA) [11]. In practical applications, it is usually impossible to obtain undistorted images or their features as references, so NR-IQA has practical significance. In this field, many mature algorithms have achieved good results in image quality evaluation. Gaussian blur is one of the common and dominant types of distortion perceived in images when captured under low-light conditions. Therefore, a suitable and efficient blurriness or sharpness evaluation method should be explored [7].
Usually, statistical data on the structural features of an image are important in NR-IQA research. When the blurred image is captured under nonideal illumination conditions, the structural features will change accordingly. This structural change can be characterized by some specific structural statistics of the image, which can be used to evaluate the image quality [11,12,13]. Bahrami and Kot [11] proposed a model to measure sharpness based on the maximum local variation (MLV). Li [12] proposed a blind image blur evaluation (BIBLE) algorithm based on discrete orthogonal moments, where gradient images are divided into equally sized blocks and orthogonal moments are calculated to characterize the image sharpness. Gvozden [13] proposed a fast blind image sharpness/ambiguity evaluation model (BISHARP), and the local contrast information of the image was obtained by calculating the root mean square of the image. These methods are all built in the traditional way on the spatial domain/spectral domain of an image.
Unlike traditional methods, learning-based methods can improve the accuracy of the evaluation results [14]. Li [15] proposed a reference-free and robust image sharpness assessment (RISE) method, which evaluates image quality by learning multi-scale features extracted in spatial and spectral domains. A no-reference image sharpness metric based on structural information using sparse representation (SR) [16] was proposed. Yu [17] proposed a blind image sharpness assessment by using a shallow convolutional neural network (CNN). Kim [18] applied a deep CNN to the NR-IQA by separating the training of the NR-IQA into two stages: (1) an objective distortion part and (2) a part related to the human visual system. Liu [19] developed an efficient general-purpose no-reference NR-IQA model that utilizes local spatial and spectral entropy features on distorted images. Li [20] proposed a method based on semantic feature aggregation (SFA) to alleviate the impact of image content variation. Zhang [21] proposed a deep bilinear convolutional neural network (DB-CNN) model for blind image quality assessment that works for both synthetically and authentically distorted images.
These methods can solve simulated blur evaluation problems well, but the majority of them cannot accurately evaluate the realistic blur introduced during image capturing, especially under different illumination imaging conditions. Moreover, evaluating realistic blur is undoubtedly more significant than the evaluation of simulated blur. Therefore, it is necessary to design a sharpness assessment method that is effective for image sharpness under different illumination imaging conditions.
In this research, an NR-IQA method for blurred images under different illumination imaging conditions is proposed to evaluate image sharpness based on a particle swarm optimization-based general regression neural network (PSO-GRNN). Firstly, some basic image feature maps are extracted, i.e., the visual saliency (VS), color difference, and gradient, and the feature values of all maps are obtained by using statistical calculation. Secondly, the feature values are trained and optimized by a PSO-GRNN. Lastly, after the PSO-GRNN is determined, an evaluation result for the real blurry images will be calculated. The experimental results show that the evaluation performance of the proposed method on real blur databases, i.e., BID [22], CID2013 [23], and CLIVE [24], is better than the state-of-the-art and recently published NR methods.

2. Feature Extraction

2.1. Visual Saliency (VS) Index

The VS of an image can reflect how "salient" a local region is [25,26]. The relationship between VS and image quality has been integrated into IQA studies [27]. For a blurred image captured under non-ideal illumination conditions, the important areas of the scene are weakened and, consequently, the VS map of the image also changes. The VS extraction in this study is based on the SDSP method [27]. In Figure 1, blurred and distorted images with the same content under different lighting conditions and their VS maps (pseudo-color maps) are presented. It can be seen that the VS maps accurately extract the important regions. Figure 1c,d shows the VS maps of Figure 1a,b; they reflect the blur level of an image under specific lighting conditions, in line with human visual perception characteristics.

2.2. Color Difference (CD) Index

The previous section introduced the overall structural information of images, i.e., the VS index. For a color image, color information is also important for image quality. The CD index [28] can reflect the color distortion caused by different illumination imaging conditions. Therefore, the CD is used to evaluate the image quality in terms of color information. For an RGB image, the image is first mapped to a specific color space in which each pixel contains three components (a lightness channel S_1 and two color channels S_2 and S_3); the CD index between two adjacent pixels is then calculated using Equation (1):
\Delta E = \sqrt{(S_{11} - S_{12})^2 + (S_{21} - S_{22})^2 + (S_{31} - S_{32})^2}    (1)
where S_{11} and S_{12} are the lightness channel values of two neighboring pixels; S_{21} and S_{22} are the values of one color channel of the two pixels; and S_{31} and S_{32} are the values of the other color channel. The CD operators in the horizontal and vertical directions over the k color channels are defined as
\Delta E_h(i,j) = \sqrt{\sum_{n=1}^{k} \left[ S_n(i,j) - S_n(i,j+1) \right]^2}    (2)
\Delta E_v(i,j) = \sqrt{\sum_{n=1}^{k} \left[ S_n(i,j) - S_n(i+1,j) \right]^2}    (3)
where S_n(i, j) represents the intensity at pixel location (i, j) for color channel n, and k is the number of color channels.
By combining the local CD operators in the above two directions, the local CD (ΔE_L) for pixel (i, j) is obtained, which is given by Formula (4):
\Delta E_L(i,j) = \left[ \Delta E_h(i,j) + \Delta E_v(i,j) \right] / 2    (4)
Figure 2 shows the CD pseudo-color maps corresponding to images with the same content under different lighting conditions. A comparison shows that there are obvious differences between the CD maps of images captured under different lighting environments. Therefore, the CD index can be used to evaluate the quality of real blurred images.
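For concreteness, the following is a minimal NumPy sketch of the CD map computation in Equations (2)-(4). It assumes the image has already been converted to a three-channel lightness/color space (e.g., CIELAB) and is stored as an H × W × k array; the function name and the cropping to the common border region are choices of this sketch, not details taken from the paper.

import numpy as np

def local_color_difference(S):
    # S: H x W x k array holding the k color-space channels of one image.
    S = S.astype(np.float64)
    # Horizontal CD operator, Eq. (2): difference with the right-hand neighbor.
    dh = np.sqrt(np.sum((S[:, :-1, :] - S[:, 1:, :]) ** 2, axis=2))
    # Vertical CD operator, Eq. (3): difference with the neighbor below.
    dv = np.sqrt(np.sum((S[:-1, :, :] - S[1:, :, :]) ** 2, axis=2))
    # Average the two directions on the common (H-1) x (W-1) region, Eq. (4).
    h, w = min(dh.shape[0], dv.shape[0]), min(dh.shape[1], dv.shape[1])
    return (dh[:h, :w] + dv[:h, :w]) / 2.0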

2.3. Gradient Index

In IQA studies, the grayscale image gradient is also a commonly used feature [29]. The image gradient measures the magnitude of local changes in an image. At the edges of an image, the grayscale values change significantly, so the gradient values are large; in smoother regions, the grayscale values change little, and the corresponding gradient values are small. As shown in Figure 3, the changes are most prominent in the red rectangular area. The degree of blur in an image is positively correlated with the change at edge locations, and this change can be determined by calculating the image gradient. In this study, the Roberts operator is utilized to calculate the image gradient. The gradient value of the pixel at (i, j) is defined as G(i, j), and the gradient is calculated as
G(i,j) = \sqrt{\left[ I(i,j) - I(i,j+1) \right]^2 + \left[ I(i,j) - I(i+1,j) \right]^2}    (5)
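As an illustration of Equation (5), a minimal NumPy sketch is given below. It follows the formula as printed, i.e., horizontal and vertical forward differences of the grayscale image; the function name and the cropping to the common region are assumptions of this sketch.

import numpy as np

def gradient_map(I):
    # I: H x W grayscale image; forward differences as in Eq. (5).
    I = I.astype(np.float64)
    gx = I[:, :-1] - I[:, 1:]   # I(i, j) - I(i, j+1)
    gy = I[:-1, :] - I[1:, :]   # I(i, j) - I(i+1, j)
    # Combine the two differences on the common (H-1) x (W-1) region.
    return np.sqrt(gx[:-1, :] ** 2 + gy[:, :-1] ** 2)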

2.4. Image Feature Value

Based on the above analysis, the VS, CD, and gradient information are selected to extract image features in this study. As shown in Figure 4, for a blurred image, the VS map, CD map, and gradient map are computed first. Then, the maximum (Max), relative change (RC), and variance (Var) are calculated for each of these three feature maps, yielding nine feature values. These values constitute the feature vector of an image, which is input into the following parts of the proposed method.
The Max, RC, and Var feature values of an obtained feature map M, e.g., the VS, CD, or gradient map, are calculated as follows:
\mathrm{Max} = \max\big( M(i,j) \big)    (6)
\mathrm{RC} = \frac{\max\big( M(i,j) \big) - \min\big( M(i,j) \big)}{\mathrm{mean}\big( M(i,j) \big)}    (7)
\mathrm{Var} = \frac{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2 + \cdots + (x_n - \bar{x})^2}{n}    (8)
where x_1, x_2, …, x_n represent the pixel values of a feature map, \bar{x} represents the average pixel value of the feature map, and n represents the total number of pixels.
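The nine-dimensional feature vector of an image can then be assembled as in the sketch below, which evaluates Equations (6)-(8) for a single feature map; the function and variable names, and the assumption that the three maps have already been computed, are illustrative.

import numpy as np

def feature_values(M):
    # M: one feature map (VS, CD, or gradient); returns the (Max, RC, Var) triple.
    M = M.astype(np.float64)
    f_max = M.max()                               # Eq. (6)
    f_rc = (M.max() - M.min()) / M.mean()         # Eq. (7), relative change
    f_var = np.mean((M - M.mean()) ** 2)          # Eq. (8), population variance
    return f_max, f_rc, f_var

# Nine feature values for one image, assuming vs_map, cd_map, and grad_map exist:
# feats = np.hstack([feature_values(vs_map), feature_values(cd_map), feature_values(grad_map)])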

3. Algorithm Framework

3.1. Generalized Regression Neural Network (GRNN)

The GRNN is a powerful regression tool with a dynamic network structure [30]. It is a radial basis function neural network based on non-parametric estimation for nonlinear regression. The network structure of a GRNN is shown in Figure 5; it includes an input layer for the conditional samples, a corresponding pattern layer, a summation layer, and an output layer that produces the final network result. The number of neurons in the input layer is equal to the dimension of the input vector of a learning sample; in this paper, it equals the number of feature values extracted from an image. Each input neuron is a simple distribution unit that directly passes the input variables to the pattern layer. The number of neurons in the pattern layer is equal to the number of learning samples n, and each neuron corresponds to a different sample. Each neuron in the pattern layer is connected to two neurons in the summation layer, and the output layer calculates the quotient of the two summation-layer outputs to generate the prediction. For an input vector X, the network output is
Y(X) = \frac{\sum_{i=1}^{n} Y_i \exp\left( -D_i^2 / 2\sigma^2 \right)}{\sum_{i=1}^{n} \exp\left( -D_i^2 / 2\sigma^2 \right)}    (9)
where n is the number of observed samples, and X_i and Y_i are the input and output values of the i-th sample. D_i^2 = (X − X_i)^T (X − X_i) is the squared distance between the input X and the i-th sample, the transfer function of the pattern-layer neuron is p_i = exp(−D_i^2 / 2σ^2), and σ is the spread (transfer) parameter. The larger the value of σ, the smoother the function approximation.
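A minimal sketch of the GRNN forward pass of Equation (9) is shown below, assuming the training samples are stored as NumPy arrays; the small constant added to the denominator is a numerical safeguard of this sketch, not part of the paper.

import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma):
    # Gaussian-kernel GRNN, Eq. (9): each prediction is a weighted average of the
    # training targets with weights exp(-D_i^2 / (2 sigma^2)), D_i^2 = ||x - x_i||^2.
    preds = []
    for x in np.atleast_2d(X_test):
        d2 = np.sum((X_train - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        preds.append(np.dot(w, y_train) / (np.sum(w) + 1e-12))
    return np.array(preds)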
To achieve the optimal performance of a GRNN, it is necessary to determine the ideal value of the spread parameter σ. In IQA studies, the controlled-variable method is commonly used to determine such parameter values; however, an adaptive optimization method is adopted in the proposed method to obtain better performance. In Table 1, three adaptive optimization methods, i.e., the fruit fly optimization algorithm (FOA), the firefly algorithm (FA), and particle swarm optimization (PSO), are tested on the BID database; a GRNN without an adaptive optimization method is also tested. The best results are highlighted in boldface in Table 1. Based on the corresponding evaluation criteria (SROCC and PLCC), PSO performs better than the other methods. Therefore, the PSO-GRNN is selected as the main framework of the proposed IQA method.

3.2. Particle Swarm Optimization (PSO) Algorithm

The calculation steps of PSO are shown in Figure 6. The specific implementation steps are as follows (a minimal code sketch is given after the list):
(1) Initialize the population: randomly initialize the position (P_i) and velocity (v_i) of each particle in the population, the maximum number of iterations of the algorithm, etc.
(2) Calculate the fitness value of each particle based on the fitness function, save the best position of each particle and its individual best fitness value (pbest_i), and save the global best position of the population (gbest_i).
(3) Update the velocity and position of each particle according to the following equations:
v_i^{t+1} = \omega v_i^{t} + c_1 r_1 \left( pbest_i - P_i^{t} \right) + c_2 r_2 \left( gbest_i - P_i^{t} \right)    (10)
P_i^{t+1} = P_i^{t} + v_i^{t+1}    (11)
where c1 and c2 are the learning factors, also known as acceleration constants; r1 and r2 are uniform random numbers in the range [0, 1]; ω is the inertia weight; and t is the iteration number. After the update, calculate the fitness value of each particle and compare it with the fitness value of its historical best position; if it is better, take the particle's current position as its best position. Then, for each particle, compare the fitness value of its best position with the population's best fitness value; if it is better, update the population's best position and best fitness value.
(4) Determine whether the search results meet the stopping conditions (the maximum number of iterations is reached or the accuracy requirement is met). If the stopping conditions are met, output the optimal value; otherwise, return to step (2) and continue until the stopping conditions are met.
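The sketch below illustrates this PSO loop (Equations (10) and (11)) for tuning the scalar spread parameter σ of the GRNN. The search bounds, swarm size, inertia weight, and learning factors are illustrative values; the fitness function is assumed to return the prediction error of the GRNN for a candidate σ.

import numpy as np

def pso_optimize_sigma(fitness, bounds=(0.01, 2.0), n_particles=20, n_iter=50,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
    # Minimize fitness(sigma) over a scalar search space with standard PSO.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    P = rng.uniform(lo, hi, n_particles)              # positions (candidate sigmas)
    V = np.zeros(n_particles)                         # velocities
    pbest = P.copy()
    pbest_val = np.array([fitness(p) for p in P])
    gbest = pbest[np.argmin(pbest_val)]               # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        V = w * V + c1 * r1 * (pbest - P) + c2 * r2 * (gbest - P)   # Eq. (10)
        P = np.clip(P + V, lo, hi)                                  # Eq. (11)
        vals = np.array([fitness(p) for p in P])
        better = vals < pbest_val                     # update individual bests
        pbest[better], pbest_val[better] = P[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)]           # update global best
    return gbest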

3.3. PSO-GRNN Image Quality Evaluation Model

A GRNN has a strong nonlinear mapping ability and a flexible network structure. The PSO-GRNN prediction model was introduced in reference [31]. By using PSO to optimize the spread parameter of the GRNN, it alleviates the problems of easily converging to local minima and of slow convergence during network training, and it improves the generalization ability of the neural network.
This article extends the PSO-GRNN model to blurred image quality evaluation under different illumination imaging conditions. The design of the PSO-GRNN-based IQA method is shown in Figure 6. Firstly, the VS, CD, and gradient processing is performed on all color blurred images, and the feature values are obtained by Equations (6)–(8). Then, 80% of the images in the database are randomly selected, and the feature values and benchmark values (i.e., MOS or DMOS) of these images are input into the GRNN for training. Next, the spread parameter of the GRNN is optimized by PSO. Finally, the trained PSO-GRNN is used to evaluate the quality of the remaining 20% of images in the database and obtain the predicted image quality values.
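The following sketch outlines one round of this training and evaluation pipeline, reusing the grnn_predict and pso_optimize_sigma functions from the earlier sketches. The paper does not specify the internal criterion used to score a candidate σ during optimization, so the training-set error below is only a placeholder; the 80/20 split and the PLCC/SROCC computation follow the description above.

import numpy as np
from scipy.stats import pearsonr, spearmanr

def run_once(features, mos, train_ratio=0.8, seed=0):
    # One evaluation round: random 80/20 split, PSO-tuned sigma, GRNN prediction.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(mos))
    n_train = int(train_ratio * len(mos))
    tr, te = idx[:n_train], idx[n_train:]

    def fitness(sigma):
        # Placeholder criterion: mean squared error on the training set itself.
        pred = grnn_predict(features[tr], mos[tr], features[tr], sigma)
        return np.mean((pred - mos[tr]) ** 2)

    sigma = pso_optimize_sigma(fitness)
    pred = grnn_predict(features[tr], mos[tr], features[te], sigma)
    plcc, _ = pearsonr(pred, mos[te])
    srocc, _ = spearmanr(pred, mos[te])
    return plcc, srocc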

4. Experiments and Discussion

4.1. Database and Evaluation Indicators

In our study, experiments are performed on the BID [22], CID2013 [23], and CLIVE [24] databases. These three databases are all public realistic blur image databases, and their information is summarized in Table 2. Eight different scenes are included in CID2013: Scenes 1, 2, 3, and 6 include six datasets (I–VI), Scene 4 includes four datasets (I–IV), Scene 5 includes five datasets (I–V), Scene 7 includes one dataset (VI), and Scene 8 includes two datasets (V, VI). Table 3 shows the descriptions and example images for each scene. The BID database contains 586 images and the CLIVE database contains 1162 images, all of which are real blurry images captured in real-world environments. These three databases are commonly used collections in realistic IQA studies, covering a wide range of authentic distortions found in real-world applications.
The most commonly used sharpness performance evaluation indicators are the Pearson linear correlation coefficient (PLCC) and Spearman's rank-ordered correlation coefficient (SROCC). PLCC and SROCC values closer to 1 indicate better prediction performance of an objective method.
1. Prediction Accuracy
The PLCC is used to measure the prediction accuracy of an IQA method. To compute the PLCC, a logistic regression with five parameters is used to map the objective scores to the same scale as the subjective ratings [31]:
p(x) = \beta_1 \left( \frac{1}{2} - \frac{1}{1 + \exp\left( \beta_2 (x - \beta_3) \right)} \right) + \beta_4 x + \beta_5    (12)
where x denotes the objective quality scores directly from an IQA method, p(x) denotes the IQA scores after the regression step, and β1, …, β5 are model parameters that are found numerically to maximize the correlation between subjective and objective scores. The PLCC value of an IQA method is then calculated as
\mathrm{PLCC} = \frac{1}{n-1} \sum_{i=1}^{n} \left( \frac{x_i - \bar{x}}{\sigma_x} \right) \left( \frac{y_i - \bar{y}}{\sigma_y} \right)    (13)
where \bar{x} and \bar{y} are the mean values of x_i and y_i, respectively, and σ_x and σ_y are the corresponding standard deviations.
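A minimal SciPy sketch of this PLCC computation, including the five-parameter logistic mapping of Equation (12), is shown below; the initial parameter guesses passed to curve_fit are a common heuristic and are not taken from the paper.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def logistic5(x, b1, b2, b3, b4, b5):
    # Five-parameter logistic mapping, Eq. (12).
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def plcc_after_regression(objective, subjective):
    # Fit the logistic mapping, then compute the PLCC of Eq. (13).
    objective = np.asarray(objective, dtype=float)
    subjective = np.asarray(subjective, dtype=float)
    p0 = [np.max(subjective), 1.0, np.mean(objective), 0.0, np.mean(subjective)]
    params, _ = curve_fit(logistic5, objective, subjective, p0=p0, maxfev=20000)
    mapped = logistic5(objective, *params)
    return pearsonr(mapped, subjective)[0]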
2. Prediction Monotonicity
The SROCC is used to measure the prediction monotonicity of an IQA method. The SROCC value of an IQA method on a database with n images is calculated as [32]
\mathrm{SROCC} = 1 - \frac{6 \sum_{i=1}^{n} (r_{x_i} - r_{y_i})^2}{n (n^2 - 1)}    (14)
where r_{x_i} and r_{y_i} represent the ranks of the prediction score and the subjective score, respectively.
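A corresponding sketch for Equation (14) is given below; it assumes there are no tied ranks (in practice, scipy.stats.spearmanr handles ties as well).

import numpy as np

def srocc(objective, subjective):
    # Rank both score vectors and apply Eq. (14); assumes no tied ranks.
    objective, subjective = np.asarray(objective), np.asarray(subjective)
    n = len(objective)
    rx = np.argsort(np.argsort(objective)) + 1   # ranks of the predicted scores
    ry = np.argsort(np.argsort(subjective)) + 1  # ranks of the subjective scores
    return 1.0 - 6.0 * np.sum((rx - ry) ** 2) / (n * (n ** 2 - 1))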

4.2. Performance Comparison

In this section, real blurry images from the BID, CID2013, and CLIVE databases are tested to obtain the prediction scores for each image. The predicted scores are then linearly fitted with the subjective scores of the images to obtain the corresponding PLCC and SROCC values. Each database was tested 20 times by the proposed method, and the average value of 20 tests was taken as the final fitting result for the entire database. The PLCC and SROCC results of the proposed method and comparison methods are shown in Table 4. The comparison methods are related to the spatial domain, the frequency domain, machine learning, and deep learning. The best results are highlighted in boldface for the two indices in Table 4. RISE [15] (2017), SR [16] (2016), Yu’s CNN [17] (2017), DIQA [18] (2019), SSEQ [19] (2014), SFA [20] (2019), DB-CNN [21] (2020), DIVINE [33], and NIQE [34] are learning-based algorithms, while MLV [11] (2014), BIBLE [12] (2017), BISHARP [13] (2018), and GCDV [28] (2024) are the methods related to the spatial and frequency domains. Moreover, DIVINE [33] and NIQE [34] are general-purpose NR-IQA methods, and the other compared methods are all image sharpness assessment methods.
It can be seen that the results of the proposed method on these three databases are all above 0.85. Overall, learning-based algorithms perform better than the methods related to the spatial and frequency domains in evaluating the quality of real blurry and distorted images. The PSO-GRNN-based proposed method yields better performance than other advanced network structure-based methods, i.e., Yu’s CNN [17] (2017), DIQA [18] (2019), SFA [20] (2019), and DB-CNN [21] (2020). Therefore, a fully connected GRNN is suitable for dealing with IQA problems.

4.3. Performance of Image Feature Selection

In this section, the impact of feature selection on the performance of the proposed method is verified, and the features are the VS, CD, and gradient. Three feature value calculation methods, i.e., Max, RC, and Var, are selected for these features. In Table 5, seven different feature value combinations are tested and the best results are highlighted in boldface for the two indices.
From Table 5, it can be seen that the results of all combinations on these three databases are almost all above 0.8; thus, the proposed method yields good performance with every combination. In particular, the combination of all three feature value calculation methods gives the best overall results. Furthermore, comparing the data reveals that Var plays a slightly greater role in the feature combination than Max and RC.

4.4. Performance on Different Scenarios on CID2013

The CID2013 database consists of six datasets (I–VI) and each dataset contains six different scenes [23]. This section focuses on testing images from different groups in the CID2013 database to verify the evaluation effect of the proposed method on images with the same content under different lighting conditions. The test results are shown in Table 6. In Table 6, the results of the same scenarios with different subjects are set in the same background color.
A total of 36 groups were tested, and after analysis, more than 50% (20 groups) of the fitting results were above 0.90, and more than 75% (28 groups) of the fitting results were above 0.80. The worst SROCC and PLCC results from the above test were 0.7090 and 0.7326, respectively. In this part, the content of the test images was the same in each group, but the lighting conditions were different. From the test data, it can be concluded that the proposed method yields good and stable performance on IQA under different lighting conditions.
Based on the results from Table 6, two box charts of SROCC and the PLCC on different scenarios are shown in Figure 7. From Figure 7, the proposed method shows better performance in Scenes 3 and 4, which are all indoor scenarios with subject illuminance between 10 and 1000 lux. In addition, the proposed method also shows stable performance in Scenes 3 and 4. For the other scenarios, the proposed method yields similar performance.

4.5. Scatter Plot and Fitting Curve

A total of 20 experiments were conducted on the BID, CID2013, and CLIVE databases to obtain 20 PLCC and SROCC values. The average of these 20 values was taken as the final PLCC and SROCC of the proposed method, and the results are presented in Table 4. Here, a random test was conducted on these databases, and the scatter plots of the proposed method's prediction results against the subjective scores from the databases are shown in Figure 8. From Figure 8, it can be seen that the proposed method performs well in evaluating the quality of real blurry images, and its regression curve correlates well with the subjective observation values.

5. Conclusions

An NR image sharpness evaluation method for images under different lighting imaging conditions is proposed in this article. The proposed method consists of two parts, namely the feature value extraction part and the machine learning part using a PSO-GRNN. Firstly, the VS, CD, and gradient feature information is extracted from the test image and the related feature maps are obtained. Then, the Max, RC, and Var calculations are conducted on these feature maps to obtain the feature values. Lastly, the PSO algorithm is used to optimize the GRNN, and the image feature values are input into the PSO-GRNN to predict the image sharpness. Tests were conducted on real databases, i.e., the BID, CID2013, and CLIVE databases, and other state-of-the-art or widely cited learning-based IQA methods were selected for comparison with the proposed method. The results indicate that the proposed method produces better prediction accuracy than all of the competing methods. In the future, further studies can be conducted on the evaluation of different specific illumination parameters.

Author Contributions

Conceptualization, H.H. and C.S.; methodology, H.H.; software, H.H.; validation, H.H., B.J., and C.S.; formal analysis, C.S.; investigation, Y.L. (Yandan Lin); resources, B.J.; data curation, H.H.; writing—original draft preparation, H.H.; writing—review and editing, C.S.; visualization, C.S.; supervision, Y.L. (Yuelin Lu); project administration, C.S.; funding acquisition, B.J. and C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Key Project of Scientific Research in universities of Anhui Province, grant number 2022AH050983 and 2024AH050117, Anhui Future Technology Research Institute enterprise cooperation project, grant number 2023qyhz02, Anhui Engineering Research Center of Vehicle Display Integrated Systems Open Fund, grant number VDIS2023C01, Research Start-up Foundation for Introduction of Talents of AHPU, grant number 2021YQQ027, and Scientific Research Fund of AHPU, grant number Xjky20220003.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sachin; Kumar, R.; Sakshi; Yadav, R.; Reddy, S.G.; Yadav, A.K.; Singh, P. Advances in Optical Visual Information Security: A Comprehensive Review. Photonics 2024, 11, 99. [Google Scholar] [CrossRef]
  2. Xu, W.; Wei, L.; Yi, X.; Lin, Y. Spectral Image Reconstruction Using Recovered Basis Vector Coefficients. Photonics 2023, 10, 1018. [Google Scholar] [CrossRef]
  3. Sun, X.; Kong, L.; Wang, X.; Peng, X.; Dong, G. Lights off the Image: Highlight Suppression for Single Texture-Rich Images in Optical Inspection Based on Wavelet Transform and Fusion Strategy. Photonics 2024, 11, 623. [Google Scholar] [CrossRef]
  4. Qiu, J.; Xu, H.; Ye, Z.; Diao, C. Image quality degradation of object-color metamer mismatching in digital camera color reproduction. Appl. Opt. 2018, 57, 2851–2860. [Google Scholar] [CrossRef]
  5. Liu, C.; Zou, Z.; Miao, Y.; Qiu, J. Light field quality assessment based on aggregation learning of multiple visual features. Opt. Express 2022, 30, 38298–38318. [Google Scholar] [CrossRef]
  6. Kim, B.; Heo, D.; Moon, W.; Hahh, J. Absolute Depth Estimation Based on a Sharpness-assessment Algorithm for a Camera with an Asymmetric Aperture. Curr. Opt. Photonics 2021, 5, 514–523. [Google Scholar]
  7. Baig, M.A.; Moinuddin, A.A.; Khan, E. A simple spatial domain method for quality evaluation of blurred images. Multimed. Syst. 2024, 30, 28. [Google Scholar] [CrossRef]
  8. Wang, Z.; Bovik, A.; Sheikh, H. Image Quality Assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  9. Shi, C.; Lin, Y. Full reference image quality assessment based on visual salience with color appearance and gradient similarity. IEEE Access 2020, 8, 97310–97320. [Google Scholar] [CrossRef]
  10. Dost, S.; Saud, F.; Shabbir, M.; Khan, M.G.; Shahid, M.; Lovstrom, B. Reduced reference image and video quality assessments: Review of methods. EURASIP J. Image Video Process. 2022, 2022, 1–31. [Google Scholar] [CrossRef]
  11. Bahrami, K.; Kot, A.C. A fast approach for no-reference image sharpness assessment based on maximum local variation. IEEE Signal Process. Lett. 2014, 21, 751–755. [Google Scholar] [CrossRef]
  12. Li, L.; Lin, W.; Wang, X.; Yang, G.; Bahrami, K.; Kot, A.C. No-reference image blur assessment based on discrete orthogonal moments. IEEE Trans. Cybern. 2017, 46, 39–50. [Google Scholar] [CrossRef] [PubMed]
  13. Gvozden, G.; Grgic, S.; Grgic, M. Blind image sharpness assessment based on local contrast map statistics. J. Vis. Commun. Image Represent. 2018, 50, 145–158. [Google Scholar] [CrossRef]
  14. Zhu, M.; Yu, L.; Wang, Z.; Ke, Z.; Zhi, C. Review: A Survey on Objective Evaluation of Image Sharpness. Appl. Sci. 2023, 13, 2652. [Google Scholar] [CrossRef]
  15. Li, L.; Xia, W.; Lin, W.; Fang, Y.; Wang, S. No-Reference and Robust Image Sharpness Evaluation Based on Multiscale Spatial and Spectral Features. IEEE Trans. Multimed. 2017, 19, 1030–1040. [Google Scholar] [CrossRef]
  16. Lu, Q.; Zhou, W.; Li, H. A no-reference image sharpness metric based on structural information using sparse representation. Inf. Sci. 2016, 369, 334–346. [Google Scholar] [CrossRef]
  17. Yu, S.; Wu, S.; Wang, L.; Jiang, F.; Xie, Y.; Li, L. A shallow convolutional neural network for blind image sharpness assessment. PLoS ONE 2017, 12, e0176632. [Google Scholar] [CrossRef]
  18. Kim, J.; Nguyen, A.D.; Lee, S. Deep CNN-based blind image quality predictor. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 11–24. [Google Scholar] [CrossRef]
  19. Liu, L.; Liu, B.; Huang, H.; Bovik, A.C. No-reference image quality assessment based on spatial and spectral entropies. Signal Process. Image Commun. 2014, 29, 856–863. [Google Scholar] [CrossRef]
  20. Li, D.; Jiang, T.; Lin, W.; Jiang, M. Which has better visual quality: The clear blue sky or a blurry animal? IEEE Trans. Multimed. 2019, 21, 1221–1234. [Google Scholar] [CrossRef]
  21. Zhang, W.X.; Ma, K.D.; Yan, J.; Deng, D.; Wang, Z. Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 36–47. [Google Scholar] [CrossRef]
  22. Ciancio, A.; da Costa, A.L.N.T.; Silva, E.A.B.D.; Said, A.; Samadani, R.; Obrador, P. No-reference blur assessment of digital pictures based on multifeature classifiers. IEEE Trans. Image Process. 2011, 20, 64–75. [Google Scholar] [CrossRef] [PubMed]
  23. Virtanen, T.; Nuutinen, M.; Vaahteranoksa, M.; Oittinen, P.; Häkkinen, J. CID2013: A database for evaluating no-reference image quality assessment algorithms. IEEE Trans. Image Process. 2015, 24, 390–402. [Google Scholar]
  24. Ghadiyaram, D.; Bovik, A.C. Massive online crowdsourced study of subjective and objective picture quality. IEEE Trans. Image Process. 2016, 25, 372–387. [Google Scholar]
  25. Kim, W.; Kim, C. Saliency detection via textural contrast. Opt. Lett. 2012, 37, 1550–1552. [Google Scholar] [CrossRef]
  26. Zahra, S.S.; Karim, F. Visual saliency detection via integrating bottom-up and top-down information. Optik 2019, 178, 1195–1207. [Google Scholar]
  27. Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281. [Google Scholar] [CrossRef]
  28. Shi, C.; Lin, Y. No reference image sharpness assessment based on global color difference variation. Chin. J. Electron. 2024, 33, 293–302. [Google Scholar] [CrossRef]
  29. Varga, D. Full-Reference Image Quality Assessment Based on Grünwald–Letnikov Derivative, Image Gradients, and Visual Saliency. Electronics 2022, 11, 559. [Google Scholar] [CrossRef]
  30. Li, C.; Bovik, A.C.; Wu, X. Blind Image Quality Assessment Using a General Regression Neural Network. IEEE Trans. Neural Netw. 2011, 22, 793–799. [Google Scholar]
  31. Zhao, M.; Ji, S.; Wei, Z. Risk prediction and risk factor analysis of urban logistics to public security based on PSO-GRNN algorithm. PLoS ONE 2020, 15, e0238443. [Google Scholar] [CrossRef] [PubMed]
  32. Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451. [Google Scholar]
  33. Moorthy, A.K.; Bovik, A.C. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Trans. Image Process. 2011, 20, 3350–3364. [Google Scholar] [CrossRef]
  34. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a ‘completely blind’ image quality analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
Figure 1. Images of the same content under different lighting conditions and the corresponding VS maps. (a,b) are images of the same content under different lighting conditions [23], while (c,d) are the corresponding VS maps.
Figure 2. (a) and (b) are two CD pseudo-color maps of different images in CID2013.
Figure 3. Blurred images of the same content under different lighting conditions and corresponding gradient maps: (a,b) are blurry images of the same content under different lighting conditions [23], while (c,d) are corresponding gradient maps.
Figure 4. Flowchart for extracting image feature values.
Figure 5. The network structure of GRNN.
Figure 6. Overall framework diagram of PSO-GRNN algorithm.
Figure 7. Box chart results of different scenarios on CID2013.
Figure 8. Scatter plot and fitting curve of BID, CID2013, and CLIVE databases.
Table 1. Performance on different adaptive optimization methods.

Database | Criteria | GRNN | FOA-GRNN | FA-GRNN | PSO-GRNN
BID | SROCC | 0.880 | 0.880 | 0.876 | 0.885
BID | PLCC | 0.885 | 0.887 | 0.883 | 0.890
Table 2. Database information description.

Database | Blur Images | Subjective Scores | Typical Size | Score Range
BID | 586 | MOS | 1280 × 960 | [0, 5]
CID2013 | 474 | MOS | 1600 × 1200 | [0, 100]
CLIVE | 1162 | MOS | 500 × 500 | [0, 100]
Table 3. Introduction to CID2013.

Cluster | Subject Luminance (lux) | Subject–Camera Distance (m) | Scene Description | Image Set | Motivation
1 | 2 | 0.5 | Close-up in dark lighting conditions | I–VI | Bar and restaurant setting
2 | 100 | 1.5 | Close-up in typical indoor lighting conditions | I–VI | Living room environment, indoor portrait
3 | 10 | 4.0 | Small group in dim lighting conditions | I–VI | Living room environment, group picture
4 | 1000 | 1.5 | Studio image | I–IV | Studio image generally used in image quality testing
5 | >3400 | 3.0 | Small group in cloudy bright to sunny lighting conditions | I–V | Typical tourist image
6 | >3400 | >50 | Close-up in high dynamic range lighting conditions | I–VI | Landscape image
7 | >3400 | 3.0 | Small group in cloudy bright to sunny lighting conditions (~3× optical or digital zoom) | VI | General zooming
8 | >3400 (outdoors) and <100 (indoors) | 1.5 | Close-up in high dynamic range lighting conditions | V, VI | High dynamic range scene
Table 4. Comparison between the proposed method and others.

Method | BID [22] PLCC | BID [22] SROCC | CID2013 [23] PLCC | CID2013 [23] SROCC | CLIVE [24] PLCC | CLIVE [24] SROCC
BISHARP [13] | 0.356 | 0.307 | 0.678 | 0.681 | - | -
BIBLE [12] | 0.392 | 0.361 | 0.698 | 0.687 | 0.515 | 0.427
MLV [11] | 0.375 | 0.317 | 0.689 | 0.621 | 0.400 | 0.339
GCDV [28] | 0.338 | 0.294 | 0.681 | 0.596 | 0.405 | 0.334
RISE [15] | 0.602 | 0.584 | 0.793 | 0.769 | 0.555 | 0.515
SR [16] | 0.415 | 0.467 | 0.621 | 0.634 | - | -
Yu's CNN [17] | 0.560 | 0.557 | 0.715 | 0.704 | 0.501 | 0.502
DIQA [18] | 0.506 | 0.492 | 0.720 | 0.708 | 0.704 | 0.703
SSEQ [19] | 0.604 | 0.581 | 0.689 | 0.676 | - | -
SFA [20] | 0.840 | 0.826 | - | - | 0.833 | 0.812
DB-CNN [21] | 0.471 | 0.464 | 0.686 | 0.672 | 0.869 | 0.851
DIVINE [33] | 0.506 | 0.489 | 0.499 | 0.477 | 0.558 | 0.509
NIQE [34] | 0.471 | 0.469 | 0.693 | 0.633 | 0.478 | 0.421
Proposed | 0.890 | 0.885 | 0.924 | 0.913 | 0.873 | 0.867
Table 5. The performance of different feature selections.

Database | Feature Value | PLCC | SROCC
BID | Max | 0.828 | 0.825
BID | RC | 0.802 | 0.788
BID | Var | 0.852 | 0.850
BID | Max + RC | 0.846 | 0.844
BID | Max + Var | 0.886 | 0.877
BID | RC + Var | 0.877 | 0.869
BID | Max + RC + Var | 0.890 | 0.885
CID2013 | Max | 0.902 | 0.892
CID2013 | RC | 0.843 | 0.828
CID2013 | Var | 0.917 | 0.909
CID2013 | Max + RC | 0.902 | 0.888
CID2013 | Max + Var | 0.925 | 0.915
CID2013 | RC + Var | 0.922 | 0.916
CID2013 | Max + RC + Var | 0.924 | 0.913
CLIVE | Max | 0.834 | 0.821
CLIVE | RC | 0.785 | 0.769
CLIVE | Var | 0.851 | 0.858
CLIVE | Max + RC | 0.866 | 0.857
CLIVE | Max + Var | 0.871 | 0.863
CLIVE | RC + Var | 0.861 | 0.855
CLIVE | Max + RC + Var | 0.873 | 0.867
Table 6. Test results in different scenarios on CID2013.

Scenario | SROCC | PLCC | Scenario | SROCC | PLCC | Scenario | SROCC | PLCC
IS_I_C01 | 0.789 | 0.801 | IS_I_C02 | 0.878 | 0.934 | IS_I_C03 | 0.989 | 0.991
IS_II_C01 | 0.709 | 0.860 | IS_II_C02 | 0.781 | 0.745 | IS_II_C03 | 0.841 | 0.934
IS_III_C01 | 0.979 | 0.991 | IS_III_C02 | 0.772 | 0.741 | IS_III_C03 | 0.985 | 0.990
IS_IV_C01 | 0.908 | 0.890 | IS_IV_C02 | 0.955 | 0.997 | IS_IV_C03 | 0.960 | 0.960
IS_V_C01 | 0.902 | 0.890 | IS_V_C02 | 0.968 | 0.990 | IS_V_C03 | 0.993 | 0.999
IS_VI_C01 | 0.866 | 0.938 | IS_VI_C02 | 0.928 | 0.937 | IS_VI_C03 | 0.966 | 0.991
IS_I_C04 | 0.964 | 0.956 | IS_I_C05 | 0.884 | 0.930 | IS_I_C06 | 0.964 | 0.984
IS_II_C04 | 0.968 | 0.948 | IS_II_C05 | 0.877 | 0.836 | IS_II_C06 | 0.775 | 0.733
IS_III_C04 | 0.972 | 0.971 | IS_III_C05 | 0.844 | 0.871 | IS_III_C06 | 0.791 | 0.807
IS_IV_C04 | 0.908 | 0.968 | IS_IV_C05 | 0.977 | 0.996 | IS_IV_C06 | 0.952 | 0.969
IS_VI_C07 | 0.849 | 0.967 | IS_V_C05 | 0.975 | 0.961 | IS_V_C06 | 0.765 | 0.833
IS_V_C08 | 0.965 | 0.984 | IS_VI_C08 | 0.804 | 0.968 | IS_VI_C06 | 0.743 | 0.983