Article

Blind Image Deconvolution Algorithm Based on Sparse Optimization with an Adaptive Blur Kernel Estimation

1 Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(7), 2437; https://doi.org/10.3390/app10072437
Submission received: 6 February 2020 / Revised: 25 March 2020 / Accepted: 31 March 2020 / Published: 2 April 2020
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Image blurs are a major source of degradation in an imaging system. There are various blur types, such as motion blur and defocus blur, which significantly reduce image quality. Therefore, it is essential to develop methods for recovering approximated latent images from blurry ones to increase the performance of an imaging system. In this paper, an image blur removal technique based on sparse optimization is proposed. Most existing methods use various image priors to estimate the blur kernel but are unable to fully exploit local image information. The proposed method adopts an image prior based on a nonzero measurement in the image gradient domain and introduces an analytical solution, which converges quickly without additional search iterations during the optimization. First, the blur kernel is accurately estimated from a single input image with an alternating scheme and a half-quadratic optimization algorithm. Subsequently, the latent sharp image is recovered by a non-blind deconvolution algorithm with hyper-Laplacian distribution-based priors. Additionally, we analyze and discuss its solutions for different prior parameters. According to the tests we conducted, our method outperforms similar methods and could be suitable for dealing with image blurs in real-life applications.

1. Introduction

Image deblurring is important in many fields, such as surveillance, traffic control, astronomy, and remote sensing [1,2,3]. Blurs occur for a variety of reasons, such as moving objects, focus issues, and atmospheric turbulence, and they significantly deteriorate image quality. The blurring process is usually described by a point spread function (PSF), also known as the blur kernel. When the PSF is known, the blur can be removed by conventional deconvolution methods, such as Wiener filtering and the Lucy–Richardson algorithm. However, when the PSF is unknown, the issue constitutes a blind deconvolution problem, a notoriously ill-posed inverse problem that has challenged the scientific community for decades.
Therefore, recovering an approximated latent image from a blurred observation is essential for improving the performance of an imaging system, in addition to having several applications [4,5,6]. One approach to this problem is to build parameterized models for specific blur types, e.g., employing motion length and angle to describe a motion blur, the radius of a disk to model a defocus blur, and a Gaussian model to simulate atmospheric turbulence blur. For instance, Dash and Majhi suggested a radial basis function neural network with image features based on the magnitude of Fourier coefficients to estimate motion lengths [7]. Jalobeanu et al. exploited the maximum likelihood estimator on the entire available dataset to estimate the parameters of a Gaussian model [8]. Kumar et al. utilized the Tchebichef moment to estimate the variance of a Gaussian model [9]. Yin and Hussain combined non-Gaussianity measures with independent component analysis to estimate the motion length, disk radius, and turbulence degree [10].
However, methods based on parameter estimation fail when dealing with a random PSF. Because the blind deconvolution problem is ill-posed, it can be regularized by image priors that favor the original nature of images over noisy and blurry ones. From this perspective, numerous image priors have been proposed in recent years. The Gaussian prior is one of the simplest and most commonly used priors: it implements ridge regularization in the image gradient domain as the penalty term of the cost function and turns the problem into a quadratic convex optimization problem, which can be solved by a number of numerical methods [11]. However, the Gaussian prior is not sufficiently effective because natural images are mainly non-Gaussian [12,13].
Recent advances in digital image processing and sparse representation have led to the development of new methods based on the statistical characteristics of natural images. Fergus et al. observed that the distribution of gradient magnitudes of a natural image has a heavy tail and a sharp peak, and proposed an effective method based on the variational Bayesian approach [14]. Levin et al. introduced a hyper-Laplacian prior on the image gradients and used iterative reweighted least squares to solve the cost function [15]. Joshi et al. put forward a prior on local color statistics combined with the hyper-Laplacian prior for denoising and deblurring [16]. Wang et al. proposed a prior based on total variation [17]. Xu and Jia combined total variation with an $\ell_1$-constrained deconvolution method to efficiently reduce outliers and preserve image structures [18]. Pan et al. restored the latent image in the dark channel [19]. Yang et al. proposed an $\ell_1$-based constrained method combined with a genetic algorithm to restore a clear image [20]. These methods adopt $\ell_p$-norm or quasi-norm forms as the image priors and achieve high-quality results.
Furthermore, learning-based methods are also practical approaches in this field. Zhu et al. proposed a method that reveals the priors through Gibbs sampling [21]. Roth et al. adopted the Fields of Experts framework to learn image priors with contrastive divergence [22]. Raj et al. proposed an image prior based on a Markov random field [23]. Schuler et al. used a multi-layer perceptron to learn an image deconvolution scheme on a large dataset of natural images [24]. Zhang et al. trained a set of convolutional neural network priors, which act as constraints in model-based optimization methods to cope with image restoration problems [25].
In this paper, we propose a deblurring framework consisting of two stages. First, an accurate blur kernel is estimated from an input blurry image via a nonzero constrained function as the image prior. Compared to the conventional sparsity-based techniques, the optimization procedure of our method has a lower computational complexity due to the analytical solution of the mathematical model. Subsequently, we propose a deconvolution algorithm on sparse representation for latent image restoration. Experimental results and comparisons indicate that our method outperforms others.
This study makes a novel contribution to the literature as summarized below:
  • An image prior based on a nonzero measurement on four orientations of the image gradient domain is proposed. The image histogram charts show that the frequency of nonzero values in the gradient domain of a blurry image is far higher than that of a clear one, so the nonzero measurement is suitable as a constraint for image deblurring. The solution of the cost function with the proposed image prior is also analyzed and discussed.
  • The blur kernel is obtained under ridge regularization on the PSF, because the measurements available in an image are sufficient to estimate the blur kernel in the maximum a posteriori (MAP) framework. During the optimization, we propose a solution based on the conjugate gradient method combined with Newton's method, which avoids explicitly inverting the Hessian matrix and solves the cost function efficiently.
  • Considering the statistical features of natural images, we present a non-blind image deconvolution algorithm based on the hyper-Laplacian distribution-based prior, in which the target image is constrained by an $\ell_p$ quasi-norm in the cost function. We analyze and discuss the solutions for different $p$ values.
  • We tested our method on both simulated motion blurs and atmospheric turbulence blurs in real-life applications. In addition, we comparatively analyzed our method against several related blur removal approaches in terms of cost duration, estimation accuracy of the blur kernels, and quality assessment of the restored images.
The remainder of this paper is organized as follows: Section 2 introduces the imaging system and the principle of image blurs. Subsequently, we present the details of our proposed deblurring method, including the blur kernel estimation technique and the image deconvolution algorithm. The results of the experiments and comparative analysis performed are described in Section 3. Section 4 summarizes the study and presents concluding remarks.

2. Materials and Methods

Image blur is one of the most common image degradation phenomena and can generally be regarded as the output of a linear, shift-invariant system. It can be expressed by the following convolution model [14,15,16,17,18,19,20]:

$$g = k \otimes f + \eta \quad (1)$$

where $f$ and $g$ represent the original and observed blurry images, respectively; $k$ is the blur kernel; the symbol $\otimes$ represents the convolution operator; and $\eta$ stands for the additive noise generated during image acquisition or transmission.
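As a quick illustration of this degradation model, the following sketch synthesizes a blurry observation from a sharp image; the box PSF, the noise level, and the symmetric boundary handling are illustrative assumptions rather than settings taken from this paper.

```python
# Minimal sketch of the degradation model g = k (*) f + eta in Equation (1).
import numpy as np
from scipy.signal import convolve2d

def blur(f, k, noise_sigma=0.01, seed=0):
    """Convolve image f with kernel k and add i.i.d. Gaussian noise."""
    rng = np.random.default_rng(seed)
    g = convolve2d(f, k, mode="same", boundary="symm")
    return g + noise_sigma * rng.standard_normal(f.shape)

f = np.kron(np.eye(8), np.ones((8, 8)))   # toy checker-like "sharp" image
k = np.ones((5, 5)) / 25.0                # illustrative 5 x 5 box PSF
g = blur(f, k)
```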

2.1. Image Prior and Our Method’s Framework

Previous studies indicate that a practical framework for blind image deblurring is the iterative blind deconvolution algorithm [15,16,17,18,19]. Its fundamental principle is expressed as follows:

$$(\hat{f}, \hat{k}) = \arg\min_{f, k} \|k \otimes f - g\|_2^2 + \alpha_1 P_f + \alpha_2 P_k \quad (2)$$

where the first term, $\|k \otimes f - g\|_2^2$, is the data fidelity term, which reflects the prior of the additive noise $\eta$ in Equation (1); $P_f$ and $P_k$ represent the priors of the original image and the blur kernel, respectively; and $\alpha_1$ and $\alpha_2$ are their corresponding weights.
For the image prior model, the first-order derivatives of the image $f$ in four directions are defined in Equation (3) as:

$$\begin{cases} \nabla_1 f(x, y) = f(x, y) - f(x, y+1) \\ \nabla_2 f(x, y) = f(x, y) - f(x+1, y) \\ \nabla_3 f(x, y) = f(x, y) - f(x+1, y+1) \\ \nabla_4 f(x, y) = f(x, y) - f(x-1, y-1) \end{cases} \quad (3)$$

where $x$ and $y$ represent the coordinates of a digital image, and the operator $\nabla_n$ with $n \in N = \{1, 2, 3, 4\}$ indicates the image gradients in four orientations: $0$, $\pi/2$, $\pi/4$, and $3\pi/4$, respectively. In the following, images and blur kernels are represented in lexicographic order, i.e., every pixel is listed in raster scan order as one long vector [26], which is convenient for analysis and mathematical calculation.
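A direct way to compute these four directional differences is shown below; the periodic (wrap-around) boundary handling via np.roll is an assumption of the sketch, since the paper does not state its boundary rule here.

```python
# Sketch of the four first-order differences in Equation (3).
import numpy as np

def directional_gradients(f):
    g1 = f - np.roll(f, -1, axis=1)             # orientation 0:     f(x,y) - f(x,y+1)
    g2 = f - np.roll(f, -1, axis=0)             # orientation pi/2:  f(x,y) - f(x+1,y)
    g3 = f - np.roll(f, (-1, -1), axis=(0, 1))  # orientation pi/4:  f(x,y) - f(x+1,y+1)
    g4 = f - np.roll(f, (1, 1), axis=(0, 1))    # orientation 3pi/4: f(x,y) - f(x-1,y-1)
    return g1, g2, g3, g4
```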
In this paper, to measure image sparsity, we utilize the nonzero measurement in the image gradient domain over the four orientations as the image prior, defined as follows:

$$P_f = \sum_{i \in \Omega} \left( 1 - \delta\!\left( |\nabla_1 f_i| + |\nabla_2 f_i| + |\nabla_3 f_i| + |\nabla_4 f_i| \right) \right) \quad (4)$$

where the subscript $i$ denotes the $i$th element of the image gradient domain in the lexicographic order discussed above, and $\Omega$ is the support domain of the image. The notation $|\cdot|$ represents the magnitude. The function $\delta(m)$ is the Kronecker delta function, which returns $0$ if $m \neq 0$ and $1$ if $m = 0$. In short, Equation (4) counts the number of nonzero elements in the image gradient domain over the four orientations.
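Built on the gradient sketch above, the prior of Equation (4) reduces to counting the pixels with a nonzero combined gradient magnitude; the small eps tolerance is an implementation assumption to absorb floating-point round-off.

```python
# Sketch of the nonzero measurement P_f in Equation (4).
import numpy as np

def nonzero_measure(f, eps=1e-8):
    g1, g2, g3, g4 = directional_gradients(f)   # from the previous sketch
    mag = np.abs(g1) + np.abs(g2) + np.abs(g3) + np.abs(g4)
    # 1 - delta(mag) equals 1 wherever mag != 0, so P_f is a nonzero count.
    return int(np.count_nonzero(mag > eps))
```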
In Figure 1, the first column shows an example of a clear image and a blurry one. The statistical charts in the second column are their corresponding image histograms; they show that the pixel intensities are more concentrated in the blurred image. The third column shows the statistical charts of nonzero values in the gradient domain over the four orientations ($\nabla_1 + \nabla_2 + \nabla_3 + \nabla_4$), which confirm that the frequencies of nonzero values in the gradient domain of the blurred image are higher than those of the clear image, i.e., the gradient domain of the clear image is sparser.
From the MAP perspective, estimators approach the true values as more measurements become available. Since the size of the blur kernel is far smaller than that of the images, the observed image $g$ supplies enough measurements for estimating the blur kernel alone [27]. Thus, the blind deconvolution problem of Equation (2) can be transformed into an alternating scheme: the blur kernel is estimated first, and the latent sharp image is then acquired by a non-blind deconvolution method.

2.2. Estimation of Blur Kernel and the Intermediate Latent Image

To estimate the blur kernel, Equation (2) can be divided into two subproblems as follows:

$$\begin{cases} \hat{f} = \arg\min_f \|k \otimes f - g\|_2^2 + \alpha_1 P_f \\ \hat{k} = \arg\min_k \|k \otimes f - g\|_2^2 + \alpha_2 P_k \end{cases} \quad (5)$$

The two subproblems above are solved alternately; their solutions are presented in the following subsections. In addition, to prevent the algorithm from converging to local minima, we adopt a coarse-to-fine scheme during blur kernel estimation: the image pyramid technique is applied to generate the target image from the coarsest level to the finest [26]. Figure 2 shows the framework of the proposed method.
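The coarse-to-fine loop of Figure 2 can be sketched as follows. The two subproblem solvers are passed in as callables, and the number of levels, the 2x scale step, the delta-function initialization, and the fixed kernel size across levels are all illustrative simplifications, not settings taken from the paper.

```python
# Hedged sketch of the coarse-to-fine alternating framework (Figure 2).
import numpy as np
from scipy.ndimage import zoom

def coarse_to_fine(g, solve_f, solve_k, ksize=21, levels=4, inner_iters=5):
    k = np.zeros((ksize, ksize))
    k[ksize // 2, ksize // 2] = 1.0            # start from a delta kernel
    for s in [0.5 ** i for i in range(levels - 1, -1, -1)]:  # coarsest first
        g_s = zoom(g, s, order=1)              # down-sampled pyramid level
        for _ in range(inner_iters):
            f_s = solve_f(g_s, k)              # intermediate latent image (Section 2.2.1)
            k = solve_k(g_s, f_s)              # kernel update (Section 2.2.2)
        k = np.clip(k, 0.0, None)
        k /= k.sum() + 1e-12                   # keep the PSF positive and normalized
    return k
```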

2.2.1. Solve f with a Given k

As discussed in Section 2.1, we use the norm-style notation $\|f\|_c$ to represent the image prior $P_f$ proposed in Equation (4) for simplicity. The first subproblem of Equation (5) can then be written as:

$$\hat{f} = \arg\min_f \|k \otimes f - g\|_2^2 + \alpha_1 \|f\|_c \quad (6)$$

where the result $f$ is an intermediate variable used for estimating an accurate blur kernel. It cannot be regarded as the deblurring result because it loses many image details and retains only the outlines. The above minimization can be solved via the split Bregman iteration. With the auxiliary variable $u = (u_1, u_2, u_3, u_4)$, the cost function of Equation (6) turns into another optimization problem:

$$\min_{f, u} \|k \otimes f - g\|_2^2 + \beta_1 \|\nabla f - u\|_2^2 + \alpha_1 \|u\|_c \quad (7)$$

where $\nabla f = (\nabla_1 f, \nabla_2 f, \nabla_3 f, \nabla_4 f)$ represents the collection of the gradients in the four orientations, and the intermediate coefficient $\beta_1$ is varied during the optimization process. The optimization of $f$ in Equation (7) converges to the solution of Equation (6) as $\beta_1$ approaches infinity. The alternating scheme of the subproblem is described as follows:

$$\begin{cases} \hat{u} = \arg\min_u \|u\|_c + \frac{\beta_1}{\alpha_1} \|\nabla f - u\|_2^2 \\ \hat{f} = \arg\min_f \|k \otimes f - g\|_2^2 + \beta_1 \|\nabla f - u\|_2^2 \end{cases} \quad (8)$$

where the resulting $\hat{f}$ represents the intermediate latent image.
Considering the $i$th elements of $u$ and $\nabla f$, the $\hat{u}$-subproblem in the first part of Equation (8) can be expressed as:

$$\hat{u} = \sum_{i \in \Omega} \left( \arg\min_{\mathbf{u}_i} \; 1 - \delta\!\left( \sum_{n \in N} |u_i^n| \right) + \frac{\beta_1}{\alpha_1} \sum_{n \in N} \left( \nabla_n f_i - u_i^n \right)^2 \right) \quad (9)$$

where the bold variable $\mathbf{u}_i = (u_i^1, u_i^2, u_i^3, u_i^4)$ to be minimized under the $\arg\min$ function represents the $i$th element of $u$. It can be seen from the above equation that the solution of Equation (9) can be obtained via element-wise optimization.
Let $\Psi(\mathbf{u}_i)$ denote the objective of the $\arg\min$ on the right-hand side of Equation (9) for the $i$th element; it can be expanded as follows:

$$\Psi(\mathbf{u}_i) = \begin{cases} \frac{\beta_1}{\alpha_1} \sum_{n \in N} (\nabla_n f_i)^2, & \mathbf{u}_i = 0 \;\; (\forall n : u_i^n = 0) \\ 1 + \frac{\beta_1}{\alpha_1} \sum_{n \in N} (\nabla_n f_i - u_i^n)^2, & \mathbf{u}_i \neq 0 \;\; (\exists n : u_i^n \neq 0) \end{cases} \quad (10)$$

It can be seen from Equation (10) that when $\mathbf{u}_i \neq 0$, the inequality $\Psi(\mathbf{u}_i) \geq 1$ always holds. As a result, when $\Psi(0) > 1$, the minimum of $\Psi(\mathbf{u}_i)$ is $1$; when $\Psi(0) \leq 1$, the minimum of $\Psi(\mathbf{u}_i)$ is $\Psi(0)$.
The optimization of Equation (10) is therefore handled in two cases:
(1) When $\Psi(0) > 1$, the minimum $\Psi(\mathbf{u}_i) = 1$ is attained at $\mathbf{u}_i = \nabla f_i$.
(2) When $\Psi(0) \leq 1$, $\Psi(\mathbf{u}_i)$ reaches its minimum at $\mathbf{u}_i = 0$.
Overall, the solution of Equation (9) is given in Equation (11) as follows:

$$\hat{\mathbf{u}}_i = \begin{cases} (\nabla_1 f_i, \nabla_2 f_i, \nabla_3 f_i, \nabla_4 f_i), & \frac{\beta_1}{\alpha_1} \sum_{n \in N} (\nabla_n f_i)^2 > 1 \\ (0, 0, 0, 0), & \text{otherwise} \end{cases} \quad (11)$$
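Equation (11) amounts to an element-wise hard threshold on the gradient energy, which a few lines of array code can express; the function below is a sketch under that reading, not the authors' implementation.

```python
# Sketch of the closed-form u-update in Equation (11).
import numpy as np

def solve_u(grads, beta1, alpha1):
    """grads: the four directional gradient arrays of the current estimate f."""
    energy = sum(g ** 2 for g in grads)         # sum_n (grad_n f_i)^2 per pixel
    keep = (beta1 / alpha1) * energy > 1.0      # hard-threshold mask
    return tuple(np.where(keep, g, 0.0) for g in grads)
```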
As for the $\hat{f}$-subproblem in the second part of Equation (8), it is a convex quadratic optimization problem whose solution can be obtained in the frequency domain as follows:

$$\hat{F} = \frac{K^* \circ G + \beta_1 \sum_{n \in N} D_n^* \circ U_n}{K^* \circ K + \beta_1 \sum_{n \in N} D_n^* \circ D_n} \quad (12)$$

where $F$, $G$, $K$, and $U_n$ stand for the discrete Fourier transforms (DFTs) of $f$, $g$, $k$, and $u_n$, respectively; $D_n$ is the DFT of the differential operator in the $n$th orientation; the notation $\circ$ is the Hadamard (element-wise) product; the division is element-wise; and the superscript $*$ denotes complex conjugation. In practical implementations, the denominator should be offset by the floating-point relative accuracy of the platform to avoid division by zero.
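A sketch of this frequency-domain update follows. The psf2otf-style zero-padding and origin-centering of the small kernels, and the eps offset in the denominator, are implementation assumptions consistent with the remark above.

```python
# Sketch of the f-update of Equation (12) via FFTs.
import numpy as np

def otf(kernel, shape):
    """Zero-pad a small kernel to `shape`, center it at the origin, take its DFT."""
    pad = np.zeros(shape)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def update_f(g, k, u, d_filters, beta1, eps=1e-8):
    """u: the four auxiliary arrays; d_filters: the four difference kernels."""
    K, G = otf(k, g.shape), np.fft.fft2(g)
    num = np.conj(K) * G
    den = np.abs(K) ** 2
    for un, dn in zip(u, d_filters):
        Dn = otf(dn, g.shape)
        num += beta1 * np.conj(Dn) * np.fft.fft2(un)
        den += beta1 * np.abs(Dn) ** 2
    return np.real(np.fft.ifft2(num / (den + eps)))
```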

2.2.2. Solve k with a Given f

As the differential operations are linear, the blurs in the image gradient domain also satisfy the convolution model in Equation (1). Moreover, the blur kernel $k$ can be estimated more accurately in the gradient domain than by directly solving the second subproblem in Equation (5) with pixel intensities [28]. In the following, the analytical solution of the $k$-subproblem is discussed in matrix form, and the second part of Equation (5) can be rewritten as:

$$\hat{\mathbf{K}} = \arg\min_{\mathbf{K}} \; (\mathbf{F}\mathbf{K} - \mathbf{G})^T (\mathbf{F}\mathbf{K} - \mathbf{G}) + \alpha_2 \mathbf{K}^T \mathbf{K} \quad (13)$$

where $\mathbf{F} = (\mathbf{F}_1 + \mathbf{F}_2 + \mathbf{F}_3 + \mathbf{F}_4)$ and $\mathbf{G} = (\mathbf{G}_1 + \mathbf{G}_2 + \mathbf{G}_3 + \mathbf{G}_4)$; $\mathbf{F}_n$ and $\mathbf{G}_n$ with $n \in N$ are the block circulant matrices with circulant blocks generated from the gradients of $f$ and $g$, respectively; and $\mathbf{K}$ is $k$ rearranged in vector form so as to transform the convolution operation into a matrix multiplication [26,29].
The optimization of the above equation is a least squares problem, and its closed-form solution can be obtained with the help of the fast Fourier transform. However, the division and truncation in the frequency domain lead to the amplification of both noise and estimation error. In this paper, therefore, we solve the optimization problem in the pixel domain directly with an effective conjugate gradient method combined with Newton's method.
Let $B(\mathbf{K})$ denote the objective on the right-hand side of Equation (13); its gradient $\nabla B(\mathbf{K})$ and Hessian matrix $\mathbf{H}$ are given in Equation (14) as follows:

$$\begin{cases} \nabla B(\mathbf{K}) = 2 \mathbf{F}^T \mathbf{F} \mathbf{K} - 2 \mathbf{F}^T \mathbf{G} + 2 \alpha_2 \mathbf{K} \\ \mathbf{H} = 2 \mathbf{F}^T \mathbf{F} + 2 \alpha_2 \mathbf{I} \end{cases} \quad (14)$$

where $\mathbf{I}$ is an identity matrix of the same size as $\mathbf{F}^T \mathbf{F}$, and the variable $\mathbf{K}$ is eliminated from the Hessian matrix when taking the second-order partial derivatives. The solution for $\mathbf{K}$ can be obtained by Newton's descent method; since $B(\mathbf{K})$ is quadratic, the search converges within a single iteration [30]. Assuming the starting point $\mathbf{K}_0 = 0$, the minimum is attained at:

$$\hat{\mathbf{K}} = \mathbf{K}_0 - \mathbf{H}^{-1} \nabla B(\mathbf{K}_0) = 2 \mathbf{H}^{-1} \mathbf{F}^T \mathbf{G} \quad (15)$$
Considering that directly computing the inverse of the Hessian matrix $\mathbf{H}$ is complicated and impractical, and that the inversion $\mathbf{H}^{-1}$ is not the final aim, the solution of Equation (15) can be converted to the following form:

$$\mathbf{H} \hat{\mathbf{K}} = 2 \mathbf{F}^T \mathbf{G} \quad (16)$$

Equation (16) is a linear system of equations that can be solved effectively by the conjugate gradient method. During the alternating optimization, it should be noted that the blur kernel must be kept positive and normalized so as to satisfy the properties of a PSF. Accordingly, the negative elements of the estimated $\mathbf{K}$ are set to zero, and its elements $K_i$ are then normalized so that $\sum_i K_i = 1$.
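The sketch below solves Equation (16) with scipy's conjugate gradient routine, applying F and its adjoint through 'valid'-mode convolution and correlation instead of forming the block circulant matrices explicitly; this matrix-free formulation, the cropping of g to the valid region, and the hyperparameter values are assumptions of the sketch.

```python
# Sketch of the kernel update: solve H k = 2 F^T g (Equation (16)) by CG,
# then project onto the PSF constraints (nonnegative, sums to one).
import numpy as np
from scipy.signal import convolve2d, correlate2d
from scipy.sparse.linalg import LinearOperator, cg

def apply_F(fgrad, k):
    return convolve2d(fgrad, k, mode="valid")                # F k

def apply_Ft(fgrad, r):
    return correlate2d(fgrad, r, mode="valid")[::-1, ::-1]   # adjoint F^T r

def estimate_kernel(fgrad, ggrad, ksize=15, alpha2=1e-2):
    r = ksize // 2                                           # odd ksize assumed
    g_valid = ggrad[r:-r, r:-r]                              # valid region of g

    def hessian_matvec(kvec):
        k = kvec.reshape(ksize, ksize)
        hk = 2.0 * apply_Ft(fgrad, apply_F(fgrad, k)) + 2.0 * alpha2 * k
        return hk.ravel()

    H = LinearOperator((ksize * ksize,) * 2, matvec=hessian_matvec)
    b = 2.0 * apply_Ft(fgrad, g_valid).ravel()
    kvec, _ = cg(H, b, maxiter=200)

    k = np.clip(kvec.reshape(ksize, ksize), 0.0, None)       # positivity
    return k / (k.sum() + 1e-12)                             # normalization
```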

2.3. Image Restoration

With the blur kernel $k$ estimated by the above method, the latent image can be recovered by non-blind deconvolution algorithms such as Wiener filtering and the Lucy–Richardson method. These classic algorithms have been widely applied in many fields; however, they are sensitive to noise, especially when the blur kernel is estimated inaccurately, and the ringing effect is unavoidable. Therefore, inspired by the kernel estimation discussed above, sparse representation techniques can also be applied to the restoration of the latent image. According to the statistical characteristics of natural images reported in recent studies, the distributions of image gradients tend to have heavy tails and sharp peaks, i.e., most of the probability mass lies on small values, but the probability of large values is much greater than under a Gaussian distribution [14,15,16,17,18,19]. Therefore, the hyper-Laplacian distribution-based prior can be used as a regularization term in the cost function as follows:

$$\hat{f} = \arg\min_f \|k \otimes f - g\|_2^2 + \alpha_2 \|(\nabla_x f, \nabla_y f)\|_p^p \quad (17)$$

where $p$ is derived from the hyper-Laplacian model and signifies the slope of the exponential term of the density function; in Equation (17) it appears in the constraint term in $\ell_p$ quasi-norm form. $\nabla_x f$ and $\nabla_y f$ are the image gradients in the horizontal ($x$-axis) and vertical ($y$-axis) directions, respectively. The optimization can be solved by a half-quadratic penalty method similar to that of Equation (7). With the auxiliary variable $v = (v_x, v_y)$, the alternating scheme is as follows:

$$\begin{cases} \hat{v} = \arg\min_v \|v\|_p^p + \frac{\beta_2}{\alpha_2} \left\| (\nabla_x f, \nabla_y f) - (v_x, v_y) \right\|_2^2 \\ \hat{f} = \arg\min_f \|k \otimes f - g\|_2^2 + \beta_2 \left\| (\nabla_x f, \nabla_y f) - (v_x, v_y) \right\|_2^2 \end{cases} \quad (18)$$
The solution of the $\hat{v}$-subproblem varies with the value of $p$. In particular, when $p = 1$, the analytical solution is obtained in Equation (19) by the soft-thresholding algorithm [17] as follows:

$$\hat{v} = \mathrm{sgn}(\nabla f) \cdot \max \{ 0, \, |\nabla f| - T \} \quad (19)$$

where $T = \alpha_2 / (2 \beta_2)$ is the threshold and $\mathrm{sgn}(\cdot)$ denotes the signum function.
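For reference, the $p = 1$ update of Equation (19) is essentially one line of array code:

```python
# Sketch of the soft-thresholding solution (Equation (19)) for p = 1.
import numpy as np

def soft_threshold(grad_f, alpha2, beta2):
    T = alpha2 / (2.0 * beta2)
    return np.sign(grad_f) * np.maximum(np.abs(grad_f) - T, 0.0)
```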
It should be noted that the solutions in the horizontal and vertical orientations have the same form in Equations (20)–(22); therefore, the orientation subscripts are omitted for brevity.
In general, for an arbitrary $p$, the minimization problem can be solved by setting the derivative of the $\hat{v}$-subproblem to zero. For the $i$th elements of $v$ and $\nabla f$ in the horizontal orientation, we get:

$$p |v_i|^{p-1} \mathrm{sgn}(v_i) + \frac{2 \beta_2}{\alpha_2} (v_i - \nabla f_i) = 0 \quad (20)$$
In particular, when $p = 0.5$, the $\hat{v}$-subproblem in Equation (18) can be simplified to a cubic equation:

$$v_i^3 - 2 \nabla f_i \, v_i^2 + \nabla f_i^2 \, v_i - \left( \frac{\alpha_2}{4 \beta_2} \right)^2 \mathrm{sgn}(v_i) = 0 \quad (21)$$
which can be solved using Cardano’s formula [31].
When $p = 2/3$, the $\hat{v}$-subproblem in Equation (18) can be expanded as a quartic equation (the sign of the constant term follows from substituting $p = 2/3$ into Equation (20)):

$$v_i^4 - 3 \nabla f_i \, v_i^3 + 3 \nabla f_i^2 \, v_i^2 - \nabla f_i^3 \, v_i + \left( \frac{\alpha_2}{3 \beta_2} \right)^3 = 0 \quad (22)$$
which can be solved using Ferrari’s and Descartes’ solutions [31].
For some special cases where $p > 1$, analytical solutions can be found in [32]. For the remaining $p$ values, no analytical solution exists, and the Newton–Raphson root-finding method is more effective [30].
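A per-element Newton–Raphson iteration for Equation (20) might look as follows; starting from $\nabla f$, the fixed iteration count, and the eps safeguards are heuristic choices for the sketch, and convergence is not guaranteed for every input (the paper cites [30] for the numerical details).

```python
# Sketch of a vectorized Newton-Raphson solver for Equation (20), 0 < p < 1.
import numpy as np

def solve_v_newton(grad_f, p, alpha2, beta2, iters=10, eps=1e-12):
    c = 2.0 * beta2 / alpha2
    v = grad_f.astype(float).copy()              # heuristic starting point
    for _ in range(iters):
        av = np.maximum(np.abs(v), eps)
        phi = p * av ** (p - 1.0) * np.sign(v) + c * (v - grad_f)  # Eq. (20)
        dphi = p * (p - 1.0) * av ** (p - 2.0) + c                 # derivative
        dphi = np.where(np.abs(dphi) < eps, eps, dphi)             # avoid blow-up
        v = v - phi / dphi
    return v
```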
The $\hat{f}$-subproblem is similar to Equation (12); hence, its solution can also be obtained in the frequency domain, as given in Equation (23):

$$\hat{f} = \mathrm{IDFT} \left[ \frac{K^* \circ G + \beta_2 D_x^* \circ V_x + \beta_2 D_y^* \circ V_y}{K^* \circ K + \beta_2 D_x^* \circ D_x + \beta_2 D_y^* \circ D_y} \right] \quad (23)$$

where IDFT stands for the inverse discrete Fourier transform; $V_x$ and $V_y$ are the DFTs of $v_x$ and $v_y$, respectively; and $D_x$ and $D_y$ are the DFTs of the differential operators in the horizontal ($\nabla_x$) and vertical ($\nabla_y$) directions, respectively. The treatment of the zero denominator is the same as that for Equation (12).

2.4. Smooth the Image Boundaries

When a digital image is processed in the frequency domain via the DFT, the pixel domain is implicitly assumed to be periodic: the top and bottom of the image are connected, as are the left and right sides. In practice, the image should be expanded to a proper size and padded with zeros or other values to avoid aliasing, according to the Nyquist sampling criterion; however, such padding dramatically reduces the smoothness of the padded image at the borders.
One approach to this issue is to pad the image symmetrically and then apply the "edgetaper" function in MATLAB, which blurs only the edges of the input image with a specified PSF and keeps the center region unchanged. In this way, the pixel values at the boundary become a weighted sum of the original image and are therefore differentiable. However, this method cannot ensure $C^2$ continuity. In this paper, we padded the images by minimizing the sum of the second-order partial derivatives of the image borders to satisfy second-order smoothness. This technique was also used by Liu and Jia [33].
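As a simplified stand-in for that padding step, the sketch below reflect-pads the image and then blends blurred values into the border region, similar in spirit to MATLAB's edgetaper; the binary blending weight is a crude assumption, and the authors' approach instead solves for a C²-smooth border, which this sketch does not reproduce.

```python
# Hedged sketch of boundary pre-processing before FFT-based deconvolution.
import numpy as np
from scipy.signal import convolve2d

def pad_and_taper(img, psf):
    ph, pw = psf.shape[0] // 2, psf.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="reflect")
    blurred = convolve2d(padded, psf, mode="same", boundary="symm")
    weight = np.zeros_like(padded)
    weight[ph:-ph, pw:-pw] = 1.0          # keep the interior untouched
    return weight * padded + (1.0 - weight) * blurred
```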

3. Results and Discussion

In this section, the proposed deblurring method is verified by both simulated motion-blurred images and atmospheric turbulence blur in real-life applications. For simulations, the test images were taken from the Kodak Lossless True Color Image Suite [34], and were numbered as “Img01,” “Img02,” etc., for ease of comprehension. Figure 3 shows examples of the ground truth images for which the testing process is outlined in the following subsections.
We compared the proposed method with several other blind image deblurring methods. Shan et al. used both the image gradient domain and pixel domain in the cost function for the data fitting term, together with a natural image prior and uniform prior [35]. Cho and Lee used the shock filter and bilateral filter with multiple orientations of images to enhance the structures, with the aim of ensuring that the histograms of the gradient domain of the reconstructed images are similar to those of the original [36]. Krishnan et al. proposed an image prior model based on a piecewise function that approximates the statistical characteristics of natural images using a sparsity representation technique [37]. They solved the deblurring problem using an iterative shrinkage-thresholding algorithm. Wu and Su used the graph Laplacian matrix as the cost function and obtained the clear image by an iterative solution [38].

3.1. Comparisons of Blur Kernel Estimations

The ground truth blur kernels were taken from Levin et al. [27] and represent a variety of motion-blur PSFs encountered in daily life. The blur kernels are numbered "(a)," "(b)," etc., for convenience. Their sizes range from 13 × 13 to 27 × 27, and their support domains vary from 10 to 25 pixels. The labels and sizes are listed in the first two rows of Table 1.

The first row of Figure 4 shows the eight ground truth blur kernels. To evaluate the effectiveness of blur kernel estimation, a test image, namely "Img21," was artificially blurred by the eight ground truth blur kernels, thereby producing eight simulated blurred images. These blurred images were then used as inputs for the proposed method and the other methods discussed above.
To evaluate the blur kernel estimation more objectively, the mean square errors (MSE) of the estimated blur kernels were calculated and are listed in Table 1. The cost durations for the deblurring methods were also recorded. Our method was implemented in MATLAB, and the comparisons were assessed using a Windows platform with an Intel Xeon E5-2620 v4 CPU (2.1 GHz, 8 cores) and 32 GB RAM. The comparison results are shown in Figure 4.
Table 1 shows that the proposed method converges faster than most of the other methods. The method in [36] is the fastest because it adopts the fast Fourier transform both in kernel estimation and in the deblurring process, whereas we estimated the blur kernel in the pixel domain directly, using the conjugate gradient method. Although our estimation of the blur kernels therefore required more time, it avoided the truncation and division operations in the frequency domain, which effectively improved the estimation accuracy. In practice, the estimated blur kernels of our method achieved the best visual quality and the lowest MSE, as shown in the last row of Figure 4 and in Table 1.

3.2. Comparison of Deblurring Results

In our experiments, to evaluate our method and compare it with other similar ones, we used the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) as the full-reference image quality assessment (FRIQA) metrics, which are defined in Equations (24) and (25).
The PSNR [39] is a conventional indicator that is widely employed for measuring the quality of image reconstruction:
$$\mathrm{PSNR}(x, y) = 10 \, \lg \frac{L^2}{\mathrm{MSE}(x, y)} \quad (24)$$

where $x$ and $y$ represent the reference image and the deblurred result, respectively; $\mathrm{MSE}(x, y) = \|x - y\|_2^2 / N$ is the mean square error between them, with $N$ the number of pixels; and $L$ stands for the image dynamic range.
The SSIM [39] is a full-reference metric for measuring the similarity between two images, x and y , and is defined as follows:
$$\mathrm{SSIM}(x, y) = \frac{(2 \bar{x} \bar{y} + C_1)(2 \sigma_{xy} + C_2)}{(\bar{x}^2 + \bar{y}^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \quad (25)$$

where $\bar{x}$ and $\bar{y}$ are the mean values, $\sigma_x$ and $\sigma_y$ are the standard deviations, and $\sigma_{xy}$ is the cross-covariance. $C_1 = (k_1 L)^2$ and $C_2 = (k_2 L)^2$ are factors that ensure stability in the case of a small denominator, with $k_1 = 0.01$ and $k_2 = 0.03$ in general.
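Both metrics are easy to compute directly; the sketch below uses the global form of the SSIM statistics exactly as written in Equation (25), whereas the reference implementation in [39] averages them over local windows, and the dynamic range L = 255 is assumed for 8-bit images.

```python
# Sketch of the two FRIQA metrics of Equations (24) and (25).
import numpy as np

def psnr(x, y, L=255.0):
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 10.0 * np.log10(L ** 2 / mse)

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2.0 * mx * my + c1) * (2.0 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den
```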
An example of "Img02" blurred by kernel "(b)" and the corresponding deblurring results are shown in Figure 5. The white ball and the latch, as well as the wood grains and cracks, are blurred and indistinct in Figure 5a. The deblurring method described in [35] significantly reduces the blur in Figure 5b, but the result is still unsatisfactory. The methods described in [36,37] introduce the ringing effect around both the ball and the wood cracks, as is obvious in Figure 5c,d. This phenomenon is reduced by the method described in [38], as shown in Figure 5e; however, the wood grains are still not clear enough. In contrast, as can be seen in Figure 5f, the proposed method restores both the texture and the background.
Figure 6 shows an image, namely "Img07," blurred by kernel "(a)." As can be seen in Figure 6b, the deblurring method described in [35] reduced some of the blurriness; however, the visual effect is still lacking. In particular, the shutters in the background are still not clear enough, and the estimated blur kernel shown in the top left is somewhat rough compared with the others. Moreover, as can be seen in Figure 6c,d, the methods described in [36,37] restored the sharpness but introduced strong artifacts, especially around the petals and leaves. These limitations were overcome by the method described in [38] (see Figure 6e), as well as by the proposed method (see Figure 6f), which restored most of the details; the shape of its estimated blur kernel is the closest to the ground truth.
Figure 7 shows another example, "Img21," blurred by kernel "(f)." The lighthouse and cabins are blurred and unclear in Figure 7a. As can be seen in Figure 7b, the deblurring method described in [35] reduced some of the blurriness; however, the visual effect is still lacking. In addition, as can be seen in Figure 7c, the method described in [36] restored the sharpness but introduced strong artifacts along the roofs and the white tower. In Figure 7d, because the method described in [37] estimated a blur kernel with numerous outliers, its deblurring result is not as good as the others. This limitation was partially overcome by the method described in [38]; although some outliers remained in the estimated blur kernel shown in Figure 7e, the general outline of the kernel was estimated correctly. In contrast, the proposed method not only estimated the most accurate blur kernel but also produced a clear, high-quality result.
Table 2 lists the FRIQA data from the comparative analysis of these examples. For all of them ("Img02," "Img07," and "Img21"), the PSNR and SSIM were highest for the proposed method.
More comprehensive comparisons were carried out on all 24 test images in the Kodak Lossless True Color Image Suite introduced at the beginning of Section 3 [34]. The PSNR and SSIM statistics determined from a comparative analysis of all the simulated motion-blurred images are shown in Figure 8 and Figure 9, respectively. In the deconvolution procedure, any slight noise or outliers in an estimated blur kernel can cause errors in the image gray values, and the shapes and offsets of the blur kernels can also affect the pixel positions in the deblurred results. In addition, the ground truth test images were wrapped with gray or black boundaries. As the PSNR is based on the MSE of the images, it mainly depends on image contrast and gray scales, whereas the SSIM uses the absolute means and standard deviations of the images. The two metrics thus have complementary strengths, which is why we use both to evaluate the deblurring results. As can be seen in Figure 8 and Figure 9, the PSNR values of our results for "Img03" and "Img10" are lower than those of the other methods, but their SSIM values are the highest; similarly, "Img04" and "Img22" have lower SSIM values but the highest PSNR values, for the same reasons. Once again, the proposed method was superior to the other methods in most cases.

3.3. Real-Life Applications

In building and infrastructure construction, a theodolite is used to measure angles between designated visible points in the vertical and horizontal planes. In special tasks, however, such as long-range detection in the morning, the imaging quality is greatly reduced by atmospheric turbulence. In our project, a customized theodolite equipped with a high-speed camera was used to capture grayscale photo sequences of targets over 1.5 km away in the open air of a military base. The photographs were taken early in the morning on a sunny autumn day; the focal length of the camera was 2000 mm, the exposure time was 4 ms, and the frame rate was 50 frames per second. Some examples of the targets captured over long distances are shown in this paper, and the deblurring results are shown in Figure 10, Figure 11 and Figure 12. In Figure 10a, the numeric character on the watchtower is dim and illegible, and the stairs are unclear. In Figure 11a and Figure 12a, the edges of the chimney and the signal pole are indistinct. Figure 10f clearly shows that the character on the watchtower and the outlines of the stairs are more detailed, and the areas around the edges of the chimney and pole are clearer than in the images obtained using the other methods, as shown in Figure 11f and Figure 12f.
To evaluate the deblurring methods on images without ground truth, we used the Blind Image Quality Index (BIQI) and the Spatial-Spectral Entropy-based Quality (SSEQ) index as non-reference image quality assessment (NRIQA) metrics. Both output a score for an input 2-D image, where 100 represents the worst quality and 0 the best. The BIQI is based on distorted image statistics (DIS); its authors found that the distortions of natural images have unique DIS characteristics by which images can be classified into distortion categories; details can be found in [40]. The SSEQ index takes down-sampled responses as inputs, extracts a twelve-dimensional local entropy feature vector, and generates image quality scores from these features via a learning-based method [41]. Table 3 shows the NRIQA data of a comparative analysis performed on these examples.

4. Conclusions

In this paper, we proposed and evaluated an image deblurring method based on sparse representation. We constructed the image prior from the nonzero measurement in the gradient domain of natural images, which is an efficient prior for blur kernel estimation. We optimized the cost function via the split Bregman iteration, which turned the non-convex problem into an alternating scheme, and the blur kernel was estimated in a coarse-to-fine manner to avoid local minima. Subsequently, we recovered the latent image through a similar technique using the hyper-Laplacian distribution-based prior together with the estimated blur kernel. In addition, to reduce ringing effects, we preprocessed the image boundaries to achieve second-order smoothness. Moreover, the deblurring algorithm converges quickly owing to the analytical solutions of the subproblems. The results of the experiments and comparative analysis show that the proposed method obtains improved results compared with other similar deblurring methods, and that it is effective when dealing with atmospheric turbulence blurs in real-life applications. Because our deblurring framework is based on a linear system, it works well on spatially invariant blurs in most scenarios. In the future, the deblurring model can be extended to nonlinear blurs for wider applications.

Author Contributions

Conceptualization, H.Y., X.S. and S.C.; methodology, H.Y. and S.C.; software, H.Y. and S.C.; validation, H.Y., X.S. and S.C.; formal analysis, H.Y. and X.S.; investigation, H.Y. and S.C.; resources, H.Y.; data curation, H.Y.; writing—original draft preparation, H.Y. and X.S.; writing—review and editing, H.Y. and S.C.; visualization, H.Y. and X.S.; supervision, H.Y. and S.C.; project administration, H.Y. and X.S.; funding acquisition, X.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the High-tech Innovation Fund of the Chinese Academy of Sciences, grant number GQRC_19_19.

Acknowledgments

We would like to thank Editage (www.editage.cn) for English language editing.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. El-Sallam, A.A.; Boussaid, F. Spectral-based blind image restoration method for thin TOMBO imagers. Sensors 2008, 8, 6108–6124.
  2. Zhang, W.; Quan, W.; Guo, L. Blurred star image processing for star sensors under dynamic conditions. Sensors 2012, 12, 6712–6726.
  3. Manfredi, M.; Bearman, G.; Williamson, G.; Kronkright, D.; Doehne, E.; Jacobs, M.; Marengo, E. A new quantitative method for the non-invasive documentation of morphological damage in paintings using RTI surface normals. Sensors 2014, 14, 12271–12284.
  4. Luan, S.; Xie, S.; Wang, T.; Hao, X.; Yang, M.; Li, Y. A Space-Variant Deblur Method for Focal-Plane Microwave Imaging. Appl. Sci. 2018, 8, 2166.
  5. Zhang, H.; Yuan, B.; Dong, B.; Jiang, Z. No-Reference Blurred Image Quality Assessment by Structural Similarity Index. Appl. Sci. 2018, 8, 2003.
  6. Ali, U.; Mahmood, M.T. Analysis of blur measure operators for single image blur segmentation. Appl. Sci. 2018, 8, 807.
  7. Dash, R.; Majhi, B. Motion blur parameters estimation for image restoration. Optik 2014, 125, 1634–1640.
  8. Jalobeanu, A.; Blanc-Feraud, L.; Zerubia, J. An adaptive Gaussian model for satellite image deblurring. IEEE Trans. Image Process. 2004, 13, 613–621.
  9. Kumar, A.; Paramesran, R.; Lim, C.-L.; Dass, S.C. Tchebichef moment based restoration of Gaussian blurred images. Appl. Opt. 2016, 55, 9006–9016.
  10. Yin, H.; Hussain, I. Independent component analysis and nongaussianity for blind image deconvolution and deblurring. Integr. Comput. Aided Eng. 2008, 15, 219–228.
  11. Saleh, A.K.; Arashi, M.; Kibria, B.G. Theory of Ridge Regression Estimation with Applications; John Wiley & Sons: Hoboken, NJ, USA, 2019; ISBN 978-1-118-64461-4.
  12. Field, D.J. What is the Goal of Sensory Coding? Neural Comput. 1994, 6, 559–601.
  13. Weiss, Y.; Freeman, W.T. What makes a good model of natural images? In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
  14. Fergus, R.; Singh, B.; Hertzmann, A.; Roweis, S.T.; Freeman, W.T. Removing camera shake from a single photograph. ACM Trans. Graph. 2006, 25, 787–794.
  15. Levin, A.; Fergus, R.; Durand, F.; Freeman, W.T. Image and depth from a conventional camera with a coded aperture. ACM Trans. Graph. 2007, 26, 70-es.
  16. Joshi, N.; Zitnick, C.L.; Szeliski, R.; Kriegman, D.J. Image deblurring and denoising using color priors. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1550–1557.
  17. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A New Alternating Minimization Algorithm for Total Variation Image Reconstruction. SIAM J. Imaging Sci. 2008, 1, 248–272.
  18. Xu, L.; Jia, J. Two-Phase Kernel Estimation for Robust Motion Deblurring. In Computer Vision—ECCV 2010; Daniilidis, K., Maragos, P., Paragios, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6311, pp. 157–170.
  19. Pan, J.; Sun, D.; Pfister, H.; Yang, M.-H. Blind Image Deblurring Using Dark Channel Prior. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1628–1636.
  20. Yang, H.; Su, X.; Ju, C.; Wu, S. Efficient Self-Adaptive Image Deblurring Based on Model Parameter Optimization. In Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, China, 27–29 June 2018; pp. 384–388.
  21. Zhu, S.C.; Mumford, D.B. Prior learning and Gibbs reaction-diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 1236–1250.
  22. Roth, S.; Black, M.J. Fields of Experts: A framework for learning image priors. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 860–867.
  23. Raj, A.; Zabih, R. A graph cut algorithm for generalized image deconvolution. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV'05), Beijing, China, 17–21 October 2005; Volume 2, pp. 1048–1054.
  24. Schuler, C.J.; Burger, C.H.; Harmeling, S.; Scholkopf, B. A Machine Learning Approach for Non-blind Image Deconvolution. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1067–1074.
  25. Zhang, K.; Zuo, W.; Gu, S.; Zhang, L. Learning Deep CNN Denoiser Prior for Image Restoration. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2808–2817.
  26. Snyder, W.E.; Qi, H. Machine Vision; Cambridge University Press: New York, NY, USA, 2010; ISBN 9780521169813.
  27. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1964–1971.
  28. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Efficient marginal likelihood optimization in blind deconvolution. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 20–25 June 2011; pp. 2657–2664.
  29. Miao, L.; Qi, H. A Blind Source Separation Perspective on Image Restoration. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–7.
  30. Mathews, J.H.; Fink, K.D. Numerical Methods Using MATLAB, 4th ed.; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2004; ISBN 97871219074.
  31. Abramowitz, M. Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables; Dover Publications: Mineola, NY, USA, 1974; ISBN 0486612724.
  32. Wright, S.J.; Nowak, R.D.; Figueiredo, M.A. Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 2009, 57, 2479–2493.
  33. Liu, R.; Jia, J. Reducing boundary artifacts in image deconvolution. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; pp. 505–508.
  34. Franzen, R. Kodak Lossless True Color Image Suite, Kodak PhotoCD PCD0992 image samples in PNG file format. Available online: http://r0k.us/graphics/kodak/ (accessed on 1 January 2020).
  35. Shan, Q.; Jia, J.; Agarwala, A. High-quality motion deblurring from a single image. ACM Trans. Graph. 2008, 27, 1–10.
  36. Cho, S.; Lee, S. Fast motion deblurring. ACM Trans. Graph. 2009, 28, 1–8.
  37. Krishnan, D.; Tay, T.; Fergus, R. Blind deconvolution using a normalized sparsity measure. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 20–25 June 2011; pp. 233–240.
  38. Wu, J.; Su, X. Method of Image Quality Improvement for Atmospheric Turbulence Degradation Sequence Based on Graph Laplacian Filter and Nonrigid Registration. Math. Prob. Eng. 2018, 2018, 1–15.
  39. Hore, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369.
  40. Moorthy, A.K.; Bovik, A.C. A two-step framework for constructing blind image quality indices. IEEE Signal Process. Lett. 2010, 17, 513–516.
  41. Liu, L.; Liu, B.; Huang, H.; Bovik, A.C. No-reference image quality assessment based on spatial and spectral entropies. Signal Process. Image Commun. 2014, 29, 856–863.
Figure 1. Example of the histograms of a clear and a blurred image. First column: the upper is a clear image and the lower is a motion-blurred image. Second column: statistics of the pixel value distributions of the two images. Third column: statistics of the nonzero value distributions in the gradient domains of the images (the x-axis represents the pixel value; the y-axis represents the statistical frequency).
Figure 2. Framework of the proposed method in a coarse-to-fine scheme. The “Down sampling” step is accomplished through the Gaussian pyramid technique; the “Estimating” step is described in Section 2.2; the “Deblurring” step is discussed in Section 2.3.
Figure 3. Examples of the ground truth images. (a) Img02: "RedDoor.png"; (b) Img07: "Flowers.png"; (c) Img21: "Lighthouse.png".
Figure 4. Comparison results for blur kernel estimation. (ah): 8 blur kernels listed in Table 1. First row: ground truth blur kernel. Second row: method in [35]. Third row: method in [36]. Fourth row: method in [37]. Fifth row: method in [38]. Last row: proposed method.
Figure 5. Comparison results for the “RedDoor.png” image: (a) blurry image; (b) method in [35]; (c) method in [36]; (d) method in [37]; (e) method in [38]; (f) proposed method.
Figure 6. Comparison results for the “Flowers.png” image: (a) blurry image; (b) method in [35]; (c) method in [36]; (d) method in [37]; (e) method in [38]; (f) proposed method.
Figure 7. Comparison results for the “Lighthouse.png” image: (a) blurry image; (b) method in [35]; (c) method in [36]; (d) method in [37]; (e) method in [38]; (f) proposed method.
Figure 8. Statistics related to the average peak signal-to-noise ratios (PSNRs) for the comparison results of all the test images.
Figure 9. Statistics related to the average structural similarity indexes (SSIMs) for the comparison results of all the test images.
Figure 10. Comparison results for the “Tower.bmp” image: (a) blurry image; (b) method in [35]; (c) method in [36]; (d) method in [37]; (e) method in [38]; (f) proposed method.
Figure 11. Comparison results for the “Chimney.bmp” image: (a) blurry image; (b) method in [35]; (c) method in [36]; (d) method in [37]; (e) method in [38]; (f) proposed method.
Figure 12. Comparison results for the “Pole.bmp” image: (a) blurry image; (b) method in [35]; (c) method in [36]; (d) method in [37]; (e) method in [38]; (f) proposed method.
Table 1. Comparison results of the blur kernel estimation. The cost durations are recorded in seconds, and the best mean square error (MSE) results are shown in bold.

| PSF Number | (a) | (b) | (c) | (d) | (e) | (f) | (g) | (h) |
|---|---|---|---|---|---|---|---|---|
| Size | 19 × 19 | 17 × 17 | 15 × 15 | 27 × 27 | 13 × 13 | 21 × 21 | 23 × 23 | 23 × 23 |
| Method in [35], Duration | 64.572 | 63.431 | 56.714 | 81.508 | 50.356 | 67.938 | 69.447 | 74.999 |
| Method in [35], MSE | 0.6182 | 0.2592 | 0.2259 | 0.1062 | 0.6942 | 0.2656 | 0.1457 | 0.0975 |
| Method in [36], Duration | 6.9690 | 6.9370 | 6.8580 | 7.5930 | 6.1690 | 7.0620 | 7.0930 | 7.3120 |
| Method in [36], MSE | 0.4893 | 0.3763 | 0.3722 | 0.0977 | 0.7499 | 0.2036 | 0.0867 | 0.1339 |
| Method in [37], Duration | 95.979 | 56.916 | 35.043 | 211.07 | 25.636 | 110.29 | 131.32 | 143.96 |
| Method in [37], MSE | 0.6249 | 0.7617 | 0.2892 | 0.3221 | 0.4817 | 0.3404 | 0.1711 | 0.4252 |
| Method in [38], Duration | 71.577 | 70.950 | 65.525 | 126.08 | 58.953 | 77.165 | 110.45 | 119.04 |
| Method in [38], MSE | 0.5624 | 0.3650 | 0.1807 | 0.1364 | 0.2374 | 0.2312 | 0.1012 | 0.1525 |
| Proposed method, Duration | 14.702 | 14.327 | 13.796 | 28.076 | 13.171 | 18.061 | 21.624 | 21.653 |
| Proposed method, MSE | **0.4823** | **0.1540** | **0.1024** | **0.0675** | **0.1110** | **0.1161** | **0.0627** | **0.0821** |
Table 2. Full-reference image quality assessment data of the comparison results of Figure 5, Figure 6 and Figure 7.

| Test Images | FRIQA | Method in [35] | Method in [36] | Method in [37] | Method in [38] | Proposed Method |
|---|---|---|---|---|---|---|
| "RedDoor.png" | PSNR | 28.660 | 26.126 | 27.126 | 28.633 | 28.986 |
| "RedDoor.png" | SSIM | 0.9751 | 0.9570 | 0.9660 | 0.9764 | 0.9787 |
| "Flowers.png" | PSNR | 27.411 | 26.082 | 27.429 | 27.826 | 28.240 |
| "Flowers.png" | SSIM | 0.9049 | 0.9120 | 0.9173 | 0.9122 | 0.9177 |
| "Lighthouse.png" | PSNR | 25.197 | 24.297 | 24.887 | 25.577 | 25.841 |
| "Lighthouse.png" | SSIM | 0.8816 | 0.8724 | 0.8651 | 0.8791 | 0.8867 |
Table 3. Non-reference image quality assessment data of the comparison results of Figure 10, Figure 11 and Figure 12.

| Test Images | NRIQA | Method in [35] | Method in [36] | Method in [37] | Method in [38] | Proposed Method |
|---|---|---|---|---|---|---|
| "Tower.bmp" | BIQI | 46.347 | 38.829 | 41.496 | 39.855 | 37.293 |
| "Tower.bmp" | SSEQ | 37.831 | 46.839 | 40.268 | 36.637 | 35.693 |
| "Chimney.bmp" | BIQI | 57.800 | 52.974 | 58.437 | 52.889 | 51.953 |
| "Chimney.bmp" | SSEQ | 69.902 | 52.127 | 59.659 | 49.884 | 33.326 |
| "Pole.bmp" | BIQI | 52.357 | 45.612 | 46.507 | 45.246 | 41.586 |
| "Pole.bmp" | SSEQ | 45.693 | 52.588 | 44.672 | 39.171 | 37.514 |
