Blind Image Deconvolution Algorithm Based on Sparse Optimization with an Adaptive Blur Kernel Estimation

Image blurs are a major source of degradation in an imaging system. There are various blur types, such as motion blur and defocus blur, which reduce image quality significantly. Therefore, it is essential to develop methods for recovering approximated latent images from blurry ones to increase the performance of the imaging system. In this paper, an image blur removal technique based on sparse optimization is proposed. Most existing methods use different image priors to estimate the blur kernel but are unable to fully exploit local image information. The proposed method adopts an image prior based on nonzero measurement in the image gradient domain and introduces an analytical solution, which converges quickly without additional searching iterations during the optimization. First, a blur kernel is accurately estimated from a single input image with an alternating scheme and a half-quadratic optimization algorithm. Subsequently, the latent sharp image is revealed by a non-blind deconvolution algorithm with the hyper-Laplacian distribution-based priors. Additionally, we analyze and discuss its solutions for different prior parameters. According to the tests we conducted, our method outperforms similar methods and could be suitable for dealing with image blurs in real-life applications.


Introduction
Image deblurring is important in many fields, such as surveillance, traffic control, astronomy, and remote sensing [1][2][3]. Blurs occur due to a variety of reasons, such as moving objects, focus issues, and atmospheric turbulence, and they significantly deteriorate image quality. The blurring process is commonly described by a point spread function (PSF), also known as the blur kernel. When the PSF is known, the blur can be removed by conventional deconvolution methods, such as Wiener filtering and the Lucy-Richardson algorithm. However, when the PSF is unknown, the issue constitutes a blind-deconvolution problem, a notoriously ill-posed inverse problem that has perplexed the scientific community for decades.
Therefore, recovering an approximated latent image from a blurred observation is essential for improving the performance of an imaging system, in addition to having several applications [4][5][6]. One approach to this problem is to build parameterized models for specific blur types, e.g., employing motion length and angle to describe a motion blur, the radius of a disk to model a defocus blur, and a Gaussian model to simulate atmospheric turbulence blur. Along these lines, Dash and Majhi suggested a radial basis function neural network with image features based on the magnitude of Fourier coefficients to estimate the motion lengths [7]. Jalobeanu et al. exploited the maximum

•
An image prior based on nonzero measurement in four orientations of the image gradient domain is proposed. The image histograms show that the frequency of nonzero values in the gradient domain of a blurry image is far higher than that in a clear one, so the nonzero measurement is a suitable constraint for image deblurring. The solution of the cost function with the proposed image prior is also analyzed and discussed.

•
The blur kernel is obtained under a ridge regularization on the PSF, because the measurements on an image are sufficient to estimate the blur kernel in the maximum a posteriori (MAP) framework. During the optimization, we propose a solution based on a conjugate gradient method combined with Newton's method; it avoids computing the inverse of a Hessian matrix and solves the cost function efficiently.

•
Considering the statistical features of natural images, we present a non-blind image deconvolution algorithm based on a hyper-Laplacian distribution-based prior. Its target image is constrained by an ℓp quasi-norm in the cost function. We analyze and discuss the solutions for different p values.

•
We tested our method on both simulated motion blurs and atmospheric turbulence blurs in real-life applications. In addition, we comparatively analyzed our method against several related blur-removal approaches in terms of cost durations, estimation accuracy of the blur kernels, and quality assessment of the restored images.
The remainder of this paper is organized as follows: Section 2 introduces the imaging system and the principle of image blurs. Subsequently, we present the details of our proposed deblurring method, including the blur kernel estimation technique and the image deconvolution algorithm. The results of the experiments and comparative analysis performed are described in Section 3. Section 4 summarizes the study and presents concluding remarks.

Materials and Methods
Image blur is one of the most common image degradation phenomena, and it can generally be regarded as a linear, shift-invariant system. It can therefore be expressed by the following convolution model [14][15][16][17][18][19][20]:

g = k ⊗ f + η (1)

where f and g represent the original and the observed blurry images, respectively; k is the blur kernel; the symbol "⊗" denotes the convolution operator; and η stands for the additive noise generated during image acquisition or transmission.
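As a concrete illustration, the degradation model above can be simulated in a few lines. The sketch below assumes periodic boundary conditions so the convolution diagonalizes in the Fourier domain; the function name and defaults are ours, not from the paper.

```python
import numpy as np

def blur(f, k, noise_sigma=0.0, seed=0):
    """Apply the convolution model g = k (*) f + eta via the FFT.

    Hypothetical sketch: circular (periodic) boundaries are assumed,
    so convolution becomes element-wise multiplication of spectra.
    """
    K = np.fft.fft2(k, s=f.shape)                    # kernel padded to image size
    g = np.real(np.fft.ifft2(np.fft.fft2(f) * K))    # k (*) f
    rng = np.random.default_rng(seed)
    return g + noise_sigma * rng.standard_normal(f.shape)   # + eta
```

With a unit impulse as the kernel and zero noise, `blur` returns the input image unchanged, which is a quick sanity check of the model.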

Image Prior and Our Method's Framework
From previous studies, a practical framework for blind image deblurring is the blind iterative deconvolution algorithm [15][16][17][18][19]. Its fundamental principle is expressed as follows:

(f̂, k̂) = arg min_{f,k} ‖k ⊗ f − g‖₂² + α₁P_f + α₂P_k (2)

where the first term, ‖k ⊗ f − g‖₂², is the data fidelity term, which reflects the prior of the additive noise η in Equation (1); P_f and P_k represent the priors of the original image and the blur kernel, respectively; and α₁ and α₂ are their corresponding weights.
For the image prior model, we assume that the first-order derivatives of the image f in the four directions are given in Equation (3), where x and y represent the coordinates of a digital image, and the operator ∇_n with n ∈ N = {1, 2, 3, 4} indicates the image gradient in the four orientations 0, π/2, π/4, and 3π/4, respectively. In the following, images and blur kernels are represented in lexicographic order, i.e., every pixel is listed in raster-scan order as one long vector [26], which is convenient for analysis and mathematical calculation.
In this paper, to measure the image sparsity, we use the nonzero measurement in the image gradient domain in the four orientations as the image prior, defined as follows:

P_f = Σ_{i∈Ω} (1 − δ(Σ_{n∈N} |(∇_n f)_i|)) (4)

where the subscript i denotes the ith element of the image gradient domain in the lexicographic order discussed above, Ω is the support domain of the image, and |·| denotes the magnitude. The function δ(m) is the Kronecker delta function, which returns 0 if m ≠ 0 and 1 if m = 0. In short, Equation (4) counts the number of nonzero elements of the image gradient domain over the four orientations. In Figure 1, the first column shows an example of a clear image and a blurry one, and the statistical charts in the second column are their corresponding image histograms. The pixel intensities are more concentrated in the blurred image, which implies that the gradient domain of the clear image is sparser than that of the blurred one. The third column shows the statistical charts of nonzero values in the gradient domain in the four orientations (∇₁ + ∇₂ + ∇₃ + ∇₄), which confirms that the frequencies of nonzero values of the blurred image in the gradient domain are higher than those of the clear image.
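The counting prior can be prototyped directly. The sketch below uses forward differences with periodic wrap for the four orientations; the exact discretization of the diagonal derivatives is our assumption, not taken from the paper.

```python
import numpy as np

def nonzero_gradient_measure(f, tol=1e-6):
    """Count pixels with a nonzero gradient in any of the four
    orientations (0, pi/2, pi/4, 3*pi/4), as in Equation (4).

    Sketch: forward differences with periodic wrap; values below
    `tol` in magnitude are treated as zero.
    """
    g1 = np.roll(f, -1, axis=1) - f                      # 0 (horizontal)
    g2 = np.roll(f, -1, axis=0) - f                      # pi/2 (vertical)
    g3 = np.roll(np.roll(f, -1, 0), -1, 1) - f           # pi/4 diagonal
    g4 = np.roll(np.roll(f, -1, 0), 1, 1) - f            # 3*pi/4 diagonal
    mag = np.abs(g1) + np.abs(g2) + np.abs(g3) + np.abs(g4)
    return int(np.count_nonzero(mag > tol))              # pixels with any nonzero gradient
```

A constant image scores 0, while any edge raises the count, which is exactly the behavior the prior penalizes.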
In the view of MAP, the estimators can get closer to the true values when more measurements are available. Since the size of the blur kernel is far smaller than that of the images, the observed image g supplies enough measurements for estimating the blur kernel alone [27]. Thus, the blind deconvolution problem of solving Equation (2) can be transformed into an alternating scheme: the blur kernel is estimated first, and then the latent sharp image is acquired by a non-blind deconvolution method.


Estimation of Blur Kernel and the Intermediate Latent Image
To estimate the blur kernel, Equation (2) can be divided into two subproblems as follows:

f̂ = arg min_f ‖k ⊗ f − g‖₂² + α₁P_f
k̂ = arg min_k ‖k ⊗ f − g‖₂² + α₂‖k‖₂² (5)

where the two subproblems are solved alternately; their solutions are presented in the following sections. In addition, to avoid convergence to local minima, we adopt a coarse-to-fine scheme during the blur kernel estimation: the image pyramid technique is applied to generate the target image from the coarsest level to the finest level [26]. Figure 2 shows the framework of the proposed method.

Figure 2. Framework of the proposed method in a coarse-to-fine scheme. The "Down sampling" step is accomplished through the Gaussian pyramid technique; the "Estimating" step is described in Section 2.2; the "Deblurring" step is discussed in Section 2.3.
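The coarse-to-fine loop needs only a standard Gaussian pyramid. A sketch follows, with the binomial filter being our choice for illustration; the paper's exact pyramid construction may differ.

```python
import numpy as np

def gaussian_pyramid(img, levels):
    """Coarse-to-fine image pyramid for the kernel-estimation loop.

    Sketch: low-pass with a separable binomial filter, then subsample
    by 2; the list is returned coarsest-first, matching Figure 2.
    """
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # binomial low-pass filter
    pyr = [img]
    for _ in range(levels - 1):
        s = pyr[-1]
        s = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, s)
        s = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, s)
        pyr.append(s[::2, ::2])                      # subsample by 2
    return pyr[::-1]                                 # coarsest level first
```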


Solve f with a Given k
As discussed in Section 2.1, for simplicity we use the norm-style notation ‖∇f‖_C to represent the image prior P_f proposed in Equation (4). Then, the first subproblem of Equation (5) can be written as:

f̂ = arg min_f ‖k ⊗ f − g‖₂² + α₁‖∇f‖_C (6)

where the result f is an intermediate variable used for estimating an accurate blur kernel; it cannot be regarded as the deblurring result, because it loses many image details and retains only the outlines. The above minimization can be solved via the split Bregman iteration. With the auxiliary variable u = (u¹, u², u³, u⁴), the cost function of Equation (6) turns into another optimization problem:

min_{f,u} ‖k ⊗ f − g‖₂² + α₁‖u‖_C + β₁‖∇f − u‖₂² (7)

where ∇f = (∇₁f, ∇₂f, ∇₃f, ∇₄f) represents the collection of the gradients in the four orientations, and the intermediate coefficient β₁ is varied during the optimization process. The optimization of f in Equation (7) converges to the solution of Equation (6) as β₁ approaches infinity. The alternating scheme for this subproblem is as follows:

û = arg min_u α₁‖u‖_C + β₁‖∇f − u‖₂²
f̂ = arg min_f ‖k ⊗ f − g‖₂² + β₁‖∇f − u‖₂² (8)

where the result f̂ represents the intermediate latent image. Considering the ith elements of u and ∇f, the û-subproblem in the first part of Equation (8) can be expressed as:

û_i = arg min_{u_i} (1 − δ(|u_i|)) + (β₁/α₁)‖u_i − (∇f)_i‖₂² (9)

where the bold variable u_i = (u¹_i, u²_i, u³_i, u⁴_i) to be minimized under the "arg min" function represents the ith element of u. It can be seen from the above equation that the solution of Equation (9) can be acquired via element-wise optimization. Let Ψ(u_i) represent the objective of the "arg min" function for the ith element on the right-hand side of Equation (9); it can be expanded as follows:

Ψ(u_i) = (1 − δ(|u_i|)) + (β₁/α₁)‖u_i − (∇f)_i‖₂² (10)

It can be seen from Equation (10) that when u_i ≠ 0, the inequality Ψ(u_i) ≥ 1 always holds.
As a result, when u_i = 0, Ψ(u_i) = (β₁/α₁)‖(∇f)_i‖₂², whereas when u_i ≠ 0, the minimum Ψ(u_i) = 1 is reached at u_i = (∇f)_i. Comparing the two cases, the solution of Equation (9) is obtained in Equation (11) as follows:

û_i = (∇f)_i if ‖(∇f)_i‖₂² > α₁/β₁, and û_i = 0 otherwise (11)

As to the f̂-subproblem in the second part of Equation (8), it is a convex quadratic optimization problem whose solution can be obtained in the frequency domain as follows:

f̂ = IDFT( (K* ∘ G + β₁ Σ_{n∈N} D_n* ∘ U_n) / (K* ∘ K + β₁ Σ_{n∈N} D_n* ∘ D_n) ) (12)

where F, G, K, and U_n stand for the discrete Fourier transforms (DFT) of f, g, k, and u_n; D_n is the DFT of the differential operator in the nth orientation; the notation ∘ is the Hadamard product; and the superscript * denotes complex conjugation. In practice, the denominator should be clamped to the floating-point relative accuracy of the platform to avoid division by zero.
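Under this analysis, the u-update reduces to an element-wise hard-thresholding rule: keep the gradient vector at a pixel only when its squared magnitude exceeds α₁/β₁. A hypothetical sketch of that rule:

```python
import numpy as np

def update_u(grads, alpha1, beta1):
    """Element-wise u-update for the counting prior, Equation (11) style.

    grads: array of shape (4, H, W) stacking the four directional
    gradients of the current f. Sketch of the standard half-quadratic
    rule: keeping a gradient is only worth paying the counting cost
    alpha1 when the quadratic penalty of zeroing it is larger.
    """
    energy = np.sum(grads ** 2, axis=0)      # ||(grad f)_i||^2 per pixel
    keep = energy > alpha1 / beta1           # hard threshold
    return grads * keep                      # mask broadcast over orientations
```

Note the all-or-nothing behavior per pixel: the four orientation components are kept or zeroed together.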

Solve k with a Given f
As the differential operations are linear, the blurs in the image gradient domains also satisfy the convolution model in Equation (1). Moreover, the blur kernel k can be estimated more accurately in the gradient domain than by directly solving the second subproblem in Equation (5) with pixel intensities [28]. In the following, the analytical solution for the k-subproblem is discussed in matrix form, and the second part of Equation (5) can be rewritten as:

K̂ = arg min_K Σ_{n∈N} ‖F_n K − G_n‖₂² + α₂‖K‖₂² (13)

where F_n with n ∈ N represents the block circulant matrix with circulant blocks generated from ∇_n f, G_n is the vectorized form of ∇_n g, and K is rearranged from k in vector form so as to transform the convolution operation into matrix multiplication [26,29]. The optimization of the above equation is a least squares problem, and its closed-form solution can be obtained with the help of the fast Fourier transform. However, the division and truncation in the frequency domain lead to amplification of both noise and estimation error. Instead, in this paper, we solve the optimization problem in the pixel domain directly with an effective conjugate gradient method combined with Newton's method.
Let B(K) stand for the objective on the right-hand side of Equation (13); its gradient ∇B(K) and Hessian matrix H are calculated in Equation (14) as follows:

∇B(K) = 2 Σ_{n∈N} F_nᵀ(F_n K − G_n) + 2α₂K,  H = 2 Σ_{n∈N} F_nᵀF_n + 2α₂I (14)

where I is an identity matrix of the same size as F_nᵀF_n, and the variable K is eliminated in the Hessian matrix when taking the second-order partial derivatives. The solution of K can be achieved by Newton's descent method; moreover, since B(K) is quadratic, the search converges within a single iteration [30]. Assuming the starting point K₀ = 0, the minimum is achieved for:

K = K₀ − H⁻¹∇B(K₀) = 2H⁻¹ Σ_{n∈N} F_nᵀG_n (15)

Considering that computing the inverse of the Hessian matrix H directly is complicated and impracticable, and that the inversion H⁻¹ is not the final aim, the solution of Equation (15) can be converted to the following form:

HK = 2 Σ_{n∈N} F_nᵀG_n (16)

Equation (16) is a linear system of equations and can be solved effectively by the conjugate gradient method. During the alternating optimization, the blur kernel must be kept positive and normalized so as to satisfy the properties of a PSF. Consequently, the negative elements of the estimated K are set to zero, and the remaining elements K_i are normalized so that Σ_i K_i = 1.
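The matrix-free solve of the resulting linear system can be sketched as follows. The `apply_A_list` operator pairs and the identity-operator test are our illustrative stand-ins for the circulant convolution operators, not the paper's implementation.

```python
import numpy as np

def solve_kernel_cg(apply_A_list, g_list, alpha2, n, iters=50, tol=1e-10):
    """Solve (sum_n A_n^T A_n + alpha2*I) k = sum_n A_n^T g_n by conjugate
    gradients, never forming or inverting the Hessian.

    apply_A_list: (A, At) callable pairs computing A_n @ k and A_n^T @ r;
    g_list: the corresponding gradient-domain observations, flattened.
    """
    def hess_vec(k):                         # H @ k without materializing H
        out = alpha2 * k
        for A, At in apply_A_list:
            out = out + At(A(k))
        return out

    b = np.zeros(n)                          # right-hand side sum_n A_n^T g_n
    for (A, At), g in zip(apply_A_list, g_list):
        b = b + At(g)

    k = np.zeros(n); r = b - hess_vec(k); p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = hess_vec(p)
        a = rs / (p @ Hp)
        k += a * p; r -= a * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p; rs = rs_new
    k = np.maximum(k, 0.0)                   # a PSF must be non-negative
    return k / max(k.sum(), 1e-12)           # and normalized to unit sum
```

With the identity as the only operator and no regularization, the solver should return the (projected, normalized) observation itself.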

Image Restoration
With the blur kernel k estimated by the above method, the latent image can be recovered by non-blind deconvolution algorithms such as Wiener filtering and the Lucy-Richardson method, which were introduced decades ago. These classic deconvolution algorithms have been widely applied in many fields; however, they are sensitive to noise, especially when the blur kernel is estimated inaccurately, and the ringing effect is unavoidable. Therefore, inspired by the kernel estimation discussed above, sparse representation techniques can also be applied to the restoration of the latent image. According to the statistical character of natural images reported in recent studies, the distributions of image gradients tend to have heavy tails and sharp peaks, which means that most of the distribution lies on small values, but the probability of large values is much greater than under a Gaussian distribution [14][15][16][17][18][19]. Therefore, the hyper-Laplacian distribution-based prior can be used as a regularization term in the cost function as follows:

f̂ = arg min_f ‖k ⊗ f − g‖₂² + α₂(‖∇_x f‖_p^p + ‖∇_y f‖_p^p) (17)

where p is derived from the hyper-Laplacian model and signifies the slope of the exponential term of the density function; in Equation (17) it enters the constraint term as a quasi-norm, ‖·‖_p. Here, ∇_x f and ∇_y f are the image gradients in the horizontal (x-axis) and vertical (y-axis) directions, respectively. The optimization can be solved by a half-quadratic penalty method similar to that of Equation (7). With the auxiliary variable v = (v_x, v_y), the alternating scheme is as follows:

v̂ = arg min_v α₂(‖v_x‖_p^p + ‖v_y‖_p^p) + β₂‖∇f − v‖₂²
f̂ = arg min_f ‖k ⊗ f − g‖₂² + β₂‖∇f − v‖₂² (18)

The solutions of the v̂-subproblem vary with different p values. In particular, when p = 1, the analytical solution can be obtained in Equation (19) by the soft thresholding algorithm [17] as follows:

v̂_i = sgn((∇f)_i) · max(|(∇f)_i| − T, 0) (19)

where T = α₂/(2β₂) is the threshold and sgn(·) represents the signum function.
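For p = 1 the v-update is plain soft shrinkage. A minimal sketch with the threshold T = α₂/(2β₂):

```python
import numpy as np

def soft_threshold(grad, alpha2, beta2):
    """v-update for p = 1 (Equation (19)): soft shrinkage of the image
    gradient with threshold T = alpha2 / (2 * beta2)."""
    T = alpha2 / (2.0 * beta2)
    return np.sign(grad) * np.maximum(np.abs(grad) - T, 0.0)
```

Values whose magnitude falls below T are set exactly to zero, while larger values are shrunk toward zero by T.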
It should be noted that the solutions in the horizontal and vertical orientations have the same form in Equations (20)-(22); therefore, the subscripts related to orientation are omitted for brevity and convenience.
In general, for an arbitrary p, the minimization problem can be solved by setting the derivative of the v̂-subproblem to zero. Thus, for the ith elements of v and ∇f, we get:

α₂ p |v_i|^(p−1) sgn(v_i) + 2β₂(v_i − (∇f)_i) = 0 (20)

In particular, when p = 0.5, the v̂-subproblem in Equation (18) can be reduced to a cubic equation, which can be solved using Cardano's formula [31]; when p = 2/3, it can be expanded into a quartic equation, which can be solved using Ferrari's and Descartes' solutions [31]. For some special cases with p > 1, analytical solutions can be found in [32]. For the remaining p values, no analytical solution exists, and the Newton-Raphson root-finding method is more effective [30].
The f̂-subproblem is similar to Equation (12); hence, its solution can also be determined in the frequency domain, as in Equation (23):

f̂ = IDFT( (K* ∘ G + β₂(D_x* ∘ V_x + D_y* ∘ V_y)) / (K* ∘ K + β₂(D_x* ∘ D_x + D_y* ∘ D_y)) ) (23)

where "IDFT" stands for the inverse discrete Fourier transform; V_x and V_y are the DFTs of v_x and v_y, respectively; and D_x and D_y are the DFTs of the differential operators in the horizontal (∇_x) and vertical (∇_y) orientations, respectively. The treatment of a zero denominator is the same as that in Equation (12).
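The frequency-domain update can be sketched as below; the forward-difference filters and the small `eps` guard on the denominator are our choices for illustration.

```python
import numpy as np

def f_update(g, k, v_x, v_y, beta2, eps=1e-8):
    """Quadratic f-subproblem solved in the Fourier domain, Equation (23) style.

    Sketch assuming periodic boundaries; d_x, d_y are forward-difference
    filters, and eps guards the division as discussed in the text.
    """
    shape = g.shape
    K = np.fft.fft2(k, s=shape)
    Dx = np.fft.fft2(np.array([[1.0, -1.0]]), s=shape)
    Dy = np.fft.fft2(np.array([[1.0], [-1.0]]), s=shape)
    num = (np.conj(K) * np.fft.fft2(g)
           + beta2 * (np.conj(Dx) * np.fft.fft2(v_x)
                      + np.conj(Dy) * np.fft.fft2(v_y)))
    den = np.abs(K) ** 2 + beta2 * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2) + eps
    return np.real(np.fft.ifft2(num / den))
```

When k is a unit impulse and v already equals the gradient of g, the update should return g almost exactly, which makes a convenient consistency check.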

Smooth the Image Boundaries
When processing a digital image in the frequency domain via the DFT, the pixel domain is implicitly assumed to be periodic; that is, the top and bottom of the image are connected, as are the left and right sides. In practice, the image should be expanded to a proper size and padded with zeros or other values to avoid aliasing, according to the Nyquist sampling criterion; however, such padding dramatically reduces the smoothness of the padded image.
One approach to address this issue is to pre-process the image to be symmetric when padding and then use the "edgetaper" function in MATLAB, which blurs only the edges of the input image with a specified PSF while keeping the central region. In this way, the pixel values at the boundary equal a weighted sum of the original image and become differentiable. However, this method cannot ensure C² continuity. In this paper, we padded the images by minimizing the sum of second-order partial derivatives of the image borders to satisfy second-order smoothness. This technique was also used by Liu and Jia [33].
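A minimal padding sketch along these lines (mirror padding only; it restores continuity across the periodic boundary but not the C² smoothness the paper targets):

```python
import numpy as np

def pad_for_fft(img, kernel_shape):
    """Pad an image before FFT deconvolution so the periodic wrap-around
    does not create a sharp seam.

    Minimal sketch: symmetric (mirror) padding by half the kernel size
    in each direction.
    """
    ph, pw = kernel_shape[0] // 2, kernel_shape[1] // 2
    return np.pad(img, ((ph, ph), (pw, pw)), mode="symmetric")
```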

Results and Discussion
In this section, the proposed deblurring method is verified by both simulated motion-blurred images and atmospheric turbulence blur in real-life applications. For simulations, the test images were taken from the Kodak Lossless True Color Image Suite [34], and were numbered as "Img01," "Img02," etc., for ease of comprehension. Figure 3 shows examples of the ground truth images for which the testing process is outlined in the following subsections.
We compared the proposed method with several other blind image deblurring methods. Shan et al. used both the image gradient domain and pixel domain in the cost function for the data fitting term, together with a natural image prior and uniform prior [35]. Cho and Lee used the shock filter and bilateral filter with multiple orientations of images to enhance the structures, with the aim of ensuring that the histograms of the gradient domain of the reconstructed images are similar to those of the original [36]. Krishnan et al. proposed an image prior model based on a piecewise function that approximates the statistical characteristics of natural images using a sparsity representation technique [37]. They solved the deblurring problem using an iterative shrinkage-thresholding algorithm. Wu and Su used the graph Laplacian matrix as the cost function and obtained the clear image by an iterative solution [38].

Comparisons of Blur Kernel Estimations
The ground truth blur kernels were taken from Levin et al., who represent a variety of PSFs of motion blurs in daily life; the kernels are numbered "(a)," "(b)," etc., for convenience [27]. Their sizes range from 13 × 13 to 27 × 27, and their support domains vary from 10 to 25 pixels. The labels and sizes are listed in the first two rows of Table 1.

The first row of Figure 4 shows the eight ground truth blur kernels. To evaluate the effectiveness of blur kernel estimation, a test image, namely "Img21," was artificially blurred by the eight ground truth blur kernels, therefore eight simulated blurred images were obtained. Subsequently, these blurred images were used as the inputs of the proposed method and other methods discussed above.

To evaluate the blur kernel estimation more objectively, the mean square errors (MSE) of the estimated blur kernels were calculated and are listed in Table 1, and the cost durations of the deblurring methods were also recorded. Our method was implemented in MATLAB, and the comparisons were run on a Windows platform with an Intel Xeon E5-2620 v4 CPU (2.1 GHz, 8 cores) and 32 GB RAM. The comparison results are shown in Figure 4.

Figure 4. First row: ground truth blur kernels. Second row: method in [35]. Third row: method in [36]. Fourth row: method in [37]. Fifth row: method in [38]. Last row: proposed method.

Table 1 shows that the proposed method converges faster than most of the other methods. The method in [36] is faster because it adopts the fast Fourier transform both in the kernel estimation and in the deblurring process, whereas we estimated the blur kernel in the pixel domain directly using the conjugate gradient method. Although our estimation of the blur kernels required more time, it avoids truncation and division operations in the high-frequency domain, which effectively improves the estimation accuracy. In practice, the estimated blur kernels of our method achieved the best visual effects and the lowest MSE, as shown in the last row of Figure 4 and in Table 1.
Comparison of Deblurring Results

In our experiments, to evaluate our method and compare it with similar ones, we used the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) as full-reference image quality assessment (FRIQA) metrics, as shown in Equations (24) and (25).
The PSNR [39] is a conventional indicator widely employed for measuring the quality of image reconstruction:

PSNR(x, y) = 10 · log₁₀( L² / MSE(x, y) ) (24)

where x and y represent the reference image and the deblurred result, respectively; MSE(x, y) = (1/|Ω|)‖x − y‖₂² is the mean square error between them; and L stands for the image dynamic range.
The SSIM [39] is a full-reference metric measuring the similarity between two images, x and y, and is defined as follows:

SSIM(x, y) = [(2x̄ȳ + C₁)(2σ_xy + C₂)] / [(x̄² + ȳ² + C₁)(σ_x² + σ_y² + C₂)] (25)

where x̄ and ȳ are the mean values, σ_x and σ_y are the standard deviations, and σ_xy is the cross-covariance. Furthermore, C₁ = (k₁L)² and C₂ = (k₂L)² are factors used to ensure stability in case of division by a small denominator, with k₁ = 0.01 and k₂ = 0.03 in general.

An example of "Img02" blurred by kernel "(b)" and the deblurring results are shown in Figure 5. The white ball and the latch, as well as the wood grains and cracks, are blurred and indistinct, as shown in Figure 5a. The deblurring method described in [35] significantly reduces the blur in Figure 5b, but the result is still unsatisfying. The methods described in [36,37] introduce the ringing effect around both the ball and the wood cracks, as is obvious in Figure 5c,d. This phenomenon is reduced by the method described in [38], as shown in Figure 5e; however, the wood grains are still not clear enough. In contrast, as can be seen in Figure 5f, the proposed method restores both the texture and the background.
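Both metrics are easy to compute. The sketch below uses the per-pixel mean squared error for PSNR and a single global window for SSIM; the usual SSIM averages the same expression over small local windows, so this is only a coarse approximation.

```python
import numpy as np

def psnr(x, y, L=255.0):
    """PSNR in dB; MSE is the per-pixel mean squared error and L the
    dynamic range of the images."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 10.0 * np.log10(L ** 2 / mse)

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window (global) SSIM; a coarse sketch of Equation (25)."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

As sanity checks, SSIM of an image with itself is 1, and PSNR is 0 dB when every pixel differs by the full dynamic range.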
Figure 6 shows an image, namely "Img07," blurred by kernel "(a)." As can be seen in Figure 6b, the deblurring method described in [35] reduced some of the blurriness; however, the visual effect is still lacking. In particular, the shutters in the background are still not clear enough, and the estimated blur kernel shown in the top left is somewhat rough compared with the others.
Moreover, as it can be seen in Figure 6c,d, the method described in [36,37] restored the sharpness, but introduced strong artifacts, especially around the petals and leaves. These limitations were overcome by the method described in [38] (see Figure 6e), as well as by the proposed method (see Figure 6f), which restored most of the details, and the shape of its estimated blur kernel is the closest to the ground truth data.
Figure 7 shows another example, "Img21," blurred by kernel "(f)." The lighthouse and cabins are blurred and unclear in Figure 7a. As can be seen in Figure 7b, the deblurring method described in [35] reduced some of the blurriness; however, the visual effect is still lacking. In addition, as can be seen in Figure 7c, the method described in [36] restored the sharpness but introduced strong artifacts along the roofs and the white tower. In Figure 7d, because the method described in [37] estimated a blur kernel with numerous outliers, its deblurring result is not as good as the others. This limitation was partially overcome by the method described in [38]: although some outliers remain in the estimated blur kernel shown in Figure 7e, the general outline of the kernel was estimated correctly. In contrast, the proposed method not only estimated the most accurate blur kernel but also produced a clear, high-quality result.
Table 2 lists the original FRIQA data of the comparative analysis performed on these examples. For all of them ("Img02," "Img07," and "Img21"), the PSNR and SSIM were the highest for the proposed method.
More comprehensive comparisons were carried out on all 24 test images in the Kodak Lossless True Color Image Suite, which was introduced at the beginning of Section 3 [34]. The PSNR and SSIM statistics, as determined from a comparative analysis of all the simulated motion-blurred images, are shown in Figures 8 and 9, respectively. In the deconvolution procedure, any slight noise or outliers in an estimated blur kernel can cause errors in the image gray values; the shapes and offsets of the blur kernels can also affect the pixel positions in the deblurred results. In addition, the ground truth test images were wrapped with gray or black boundaries. As the PSNR is based on the MSE of the images, it mainly depends on image contrast and gray scales, whereas the SSIM uses the images' means and standard deviations. The two metrics thus have complementary strengths, and we therefore use both to evaluate the deblurring results. As can be seen in Figures 8 and 9, the PSNR values of our results for "Img03" and "Img10" are lower than those of the other methods, but their SSIM values are the highest. Conversely, "Img04" and "Img22" show lower SSIM values but the highest PSNR values, for the same reasons. Once again, the proposed method was superior to the other methods in most cases.

Real-Life Applications
In building and infrastructure construction, a theodolite is used to measure angles between designated visible points in the vertical and horizontal planes. In special tasks, however, such as long-range detection in the morning, the imaging quality is greatly reduced by atmospheric turbulence. In our project, a customized theodolite equipped with a high-speed camera was used to capture photo sequences (in gray scale) of targets over 1.5 kilometers away in the open air of a military base. The photographs were taken early in the morning on a sunny day in autumn. The focal length of the camera was 2000 mm, the exposure time was 4 ms, and the frame rate was 50 frames per second. Some examples of the targets captured over long distances are shown in this paper; the deblurring results obtained using this method are shown in Figures 10-12. In Figure 10a, the numeric character on the watchtower is dim and illegible, and the stairs are unclear. In Figures 11a and 12a, the edges of the chimney and the signal pole are indistinct. Figure 10f clearly shows that the character on the watchtower and the outlines of the stairs are more detailed. The areas around the edges of the chimney and pole are clearer than in the images obtained using the other methods, as shown in Figures 11f and 12f.
To evaluate the deblurring methods for images without ground truth, we used the Blind Image Quality Index (BIQI) and the Spatial-Spectral Entropy-based Quality (SSEQ) index as non-reference image quality assessments (NRIQA). Both constitute a scoring mechanism: a 2-D image is input and a score is output, where 100 represents the worst quality and 0 the best. The BIQI is based on distorted image statistics (DIS); its authors found that the distortions of natural images exhibit unique characteristics under DIS and that images can be classified into distortion categories via these characteristics; details can be found in [40]. The SSEQ index takes down-sampled responses as inputs, extracts a twelve-dimensional local entropy feature vector from them, and generates image quality scores from these features via a learning-based method [41]. Table 3 shows the original NRIQA data of a comparative analysis performed on the examples.
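To illustrate the kind of local-entropy feature that SSEQ builds on, the sketch below computes block-wise spatial entropy of a grayscale image. This is an illustrative simplification of the general idea, not the official SSEQ feature extraction (which pools entropies across scales and in the DCT domain before the learned scoring step):

```python
import numpy as np

def block_entropies(img, block=8):
    """Shannon entropy of the gray-level histogram over each
    non-overlapping block; blurring flattens texture and lowers entropy."""
    h, w = img.shape
    ents = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = img[i:i + block, j:j + block]
            hist, _ = np.histogram(patch, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]  # drop empty bins; 0 * log(0) is taken as 0
            ents.append(-np.sum(p * np.log2(p)))
    return np.array(ents)

# A flat patch has zero entropy; a textured patch has higher entropy.
flat = np.full((8, 8), 100)
textured = np.random.default_rng(1).integers(0, 256, (8, 8))
print(block_entropies(flat)[0])         # zero entropy
print(block_entropies(textured)[0])     # several bits of entropy
```

Statistics of such per-block entropies (means, percentile-pooled values) are the kind of features a learned regressor can map to a quality score, as SSEQ does [41].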

Conclusions
In this paper, we proposed and evaluated an image deblurring method based on sparse representation. We determined the image prior based on the nonzero measurement in the gradient domain of natural images, which is an efficient prior for blur kernel estimation. We optimized the cost function via split Bregman iteration, which turned the non-convex problem into an alternating scheme, and the blur kernel was estimated accurately in a coarse-to-fine manner to avoid local minima. Subsequently, we recovered the latent image with a similar technique, using the hyper-Laplacian distribution-based prior together with the estimated blur kernel. Meanwhile, to reduce ringing artifacts, we preprocessed the image boundaries to achieve second-order smoothness. Moreover, the deblurring algorithm converged quickly owing to the analytical solutions of the subproblems. The results of the experiments and the comparative analysis show that the proposed method obtains improved results compared with other similar deblurring methods. In addition, our method is effective when dealing with atmospheric turbulence blurs in real-life applications. Because our deblurring framework is based on a linear system, it works well on spatially invariant blurs in most scenarios. In the future, the deblurring model can be extended to nonlinear blurs for wider applications.
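The non-blind step summarized above can be illustrated with a compact half-quadratic splitting loop. The sketch below is not the paper's exact algorithm: it assumes an l1 gradient prior in place of the hyper-Laplacian prior, circular boundary conditions, and a fixed penalty weight beta, but it shows the alternation the text describes between a closed-form shrinkage step and an FFT-domain quadratic solve:

```python
import numpy as np

def psf2otf(k, shape):
    """Zero-pad the kernel and circularly shift its center to (0, 0)
    so that its FFT is the optical transfer function of the blur."""
    pad = np.zeros(shape)
    kh, kw = k.shape
    pad[:kh, :kw] = k
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def deconv_hqs(y, k, lam=2e-3, beta=1.0, iters=30):
    """min_x ||k*x - y||^2 + lam*|grad x|_1 via half-quadratic splitting."""
    K = psf2otf(k, y.shape)
    Dx = psf2otf(np.array([[1.0, -1.0]]), y.shape)    # horizontal gradient
    Dy = psf2otf(np.array([[1.0], [-1.0]]), y.shape)  # vertical gradient
    denom = np.abs(K) ** 2 + beta * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    Y = np.fft.fft2(y)
    x = y.copy()
    for _ in range(iters):
        X = np.fft.fft2(x)
        gx = np.real(np.fft.ifft2(Dx * X))
        gy = np.real(np.fft.ifft2(Dy * X))
        # w-subproblem: soft shrinkage, the closed form for the l1 prior.
        t = lam / (2 * beta)
        wx = np.sign(gx) * np.maximum(np.abs(gx) - t, 0)
        wy = np.sign(gy) * np.maximum(np.abs(gy) - t, 0)
        # x-subproblem: quadratic, solved exactly in the Fourier domain.
        num = np.conj(K) * Y + beta * (np.conj(Dx) * np.fft.fft2(wx)
                                       + np.conj(Dy) * np.fft.fft2(wy))
        x = np.real(np.fft.ifft2(num / denom))
    return x

# Synthetic check: blur a blocky image circularly, then deconvolve it.
rng = np.random.default_rng(0)
sharp = np.kron(rng.random((8, 8)), np.ones((8, 8)))  # 64x64 piecewise-flat
kernel = np.ones((5, 5)) / 25.0                       # box blur
blurred = np.real(np.fft.ifft2(psf2otf(kernel, sharp.shape) * np.fft.fft2(sharp)))
restored = deconv_hqs(blurred, kernel)
```

Because both subproblems have analytical solutions, each iteration costs a few FFTs, which is the source of the fast convergence noted above; the paper's hyper-Laplacian prior replaces the soft-shrinkage line with the corresponding non-convex proximal update.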