A Fractional-Order Fidelity-Based Total Generalized Variation Model for Image Deblurring

Abstract: Image deblurring is a fundamental image processing task, and the search for efficient deblurring methods remains a great challenge. Most existing methods focus on TV-based models and the construction of regularization terms; little effort has been devoted to models and algorithms for the fidelity term in fractional-order derivative space. In this paper, we propose a novel fractional-order variational model for image deblurring that can efficiently handle three different blur kernels. The objective functional contains a fractional-order gradient fidelity term and a total generalized variation (TGV) regularization term.


Introduction
Image deblurring aims to recover a clean, sharp image from a noisy, blurred observation. Blur arises in many fields, such as out-of-focus blur in X-ray imaging caused by poor localization of the point spread function, and motion blur caused by movement of the subject. Generally, the blurring process is modeled as the convolution of the original clear image with a shift-invariant blur kernel plus additive white Gaussian noise, i.e., f = Ku + n, where u : Ω ⊂ R² → R is the original clean image, K is the convolution operator, and n is Gaussian noise with zero mean and variance σ². According to whether the blur kernel K is known a priori, image deblurring divides into non-blind and blind deblurring. When K is known exactly, the problem is to obtain a clean image u from the observed image f and the prior knowledge K; when K is unknown, the problem is to estimate the blur kernel K first and then recover the clear image u from f. For the restoration of a blurred image, one exploits prior knowledge of the unknown image u by adding a regularization term, leading to the model

min_u (µ/2)‖Ku − f‖²₂ + Φ(u),

where ‖Ku − f‖²₂ is called the fidelity term, Φ(u) is called the regularization term, and µ is a tuning parameter balancing the two. Many methods have been proposed for image restoration. Blind deconvolution [1,2] performs restoration based on degradation models and prior knowledge, so it can adapt to different types and degrees of degradation. However, it usually requires complex models and algorithms to describe the degradation process, which makes it difficult to apply in practice; moreover, the ill-posedness of the inverse problem means that blind deconvolution cannot guarantee uniqueness of solutions. The PSF restoration method [3,4] aims to
reduce blur and noise by calculating and estimating the PSF, thus restoring the sharpness and detail of the image. However, it is sensitive to noise, which may be amplified and degrade the quality of the restored image. Image recursive filters [5,6] are effective smoothing and denoising filters that compute each output pixel as a weighted average of the current pixel and its neighbors; however, their design and optimization are relatively complex, noise suppression is weak, and artifacts or distortion are easily introduced. In variational image restoration, the most famous model is the TV model:

min_u (µ/2)‖Ku − f‖²₂ + ∫_Ω |∇u| dx.

TV was originally proposed in [7] for image denoising and later extended to image deblurring in [8]. The TV model effectively reduces noise and blur while preserving sharp edges and texture detail. However, TV regularization tends to produce piecewise constant solutions and therefore easily causes a staircase effect. To suppress staircase artifacts, numerous models with improved regularization terms have been proposed, including high-order partial differential equations [9], higher-order TV (HOTV) methods [10], total variation regularization methods [11,12], sparsity regularization models [13,14], and fractional-order TV (FOTV) models [15,16]. In addition, nonlocal total variation (NLTV) [17,18] and block-matching 3-D (BM3D) [19,20] are among the most promising deblurring methods for recovering texture. Although NLTV and BM3D perform well in image restoration, the NLTV functional minimization problem remains difficult because of its high computational complexity and non-differentiability, and BM3D is of limited effectiveness on high-noise images and
motion-blurred images. Another way to overcome the staircase effect is total generalized variation (TGV) regularization, first proposed by Bredies et al. as a penalty functional in [21]. As an extension of TV regularization, TGV enjoys good properties such as rotational invariance, lower semi-continuity, and convexity. Results show that TGV regularization preserves edge and texture details while suppressing the staircase effect. In addition, the second-order TGV model has a weight vector α = (α1, α0) of multiple parameters, and better restoration results can be achieved by tuning them, so the TGV model has been extensively studied in image restoration [22–25] and medical imaging [26]. The corresponding deblurring model can be formulated as:

min_u (1/2)‖Ku − f‖²₂ + TGV²_α(u).

Although TGV regularization has many advantages, it tends to amplify noise while restoring image detail, creating artifacts or distortions, and due to parameter sensitivity it may produce poorly deblurred or over-smoothed results (see [27,28]). In this paper, aiming at good restoration performance, we propose a fractional-order fidelity-based total generalized variation (FTGV) model for image deblurring. The objective functional uses TGV as the regularization term, which suppresses the staircase effect and preserves edges, together with a fractional-order gradient fidelity term that preserves more details; a trade-off between edge preservation and blur removal is obtained by adjusting the regularization parameters. Then, motivated by the non-smoothness of the regularization term and the non-convexity of the fractional fidelity term, we propose two optimization algorithms based on the primal-dual (PD) method and the alternating direction method of multipliers (ADMM), which transform the fractional problem into cheaper subproblems by introducing auxiliary variables and solve the minimization problem by an alternating iteration
strategy. By appropriately tuning the step-size and penalty parameters, the two algorithms are insensitive to the weights α = (α1, α0), the non-integer order γ, and the balance parameter β in the variational model, and they run fast, making the model more robust and efficient.
The rest of this paper is organized as follows. In Section 2, we briefly introduce the TGV model and then give the new deblurring model and the discrete form of the objective functional. We provide numerical schemes based on the PD and ADMM algorithms to solve the proposed model and analyze convergence in Section 3. Numerical experiments illustrating the performance of the proposed model are presented in Section 4.

Proposed Model
In this section, we first review the classic total generalized variation (TGV) method and fix some notation. Afterwards, we give the new model for image deblurring and its discrete form.

Review of TGV
The concept of total generalized variation (TGV) was proposed by Bredies et al. in [21]. The TGV of order k with positive weights α = (α0, α1, ..., α_{k−1}) is defined as

TGV^k_α(u) = sup { ∫_Ω u div^k v dx : v ∈ C^k_c(Ω, Sym^k(R^d)), ‖div^l v‖_∞ ≤ α_l, l = 0, ..., k − 1 },

where C^k_c(Ω, Sym^k(R^d)) denotes the space of compactly supported symmetric tensor fields and Sym^k(R^d) is the space of symmetric k-tensors on R^d. For k = 1, TGV¹_α(u) = α0 TV(u), so TGV is a genuine generalization of TV; for k = 2, Sym²(R^d) is the space of all symmetric d × d matrices. In the proposed model we take the second-order TGV [29]. In order to effectively use the PD and ADMM algorithms, we need the topologically equivalent ℓ¹-minimization form of the second-order TGV, obtained via the Legendre–Fenchel transform [27]:

TGV²_α(u) = min_w α1‖∇u − w‖₁ + α0‖E(w)‖₁,   (1)

where E(w) is the weak symmetrized derivative with the computational expression E(w) = ½(∇w + ∇wᵀ). The discrete gradient ∇, the symmetrized gradient E, and the corresponding divergence operators div and div_h are built from the classic first-order forward and backward discrete difference operators in the x- and y-directions, with the pointwise magnitudes and the infinity norm taken in the Euclidean (Frobenius) sense; U, V, and W denote the discrete spaces of images, vector fields, and symmetric matrix fields, respectively. When the periodic boundary condition is used in the discrete derivatives, it is easy to see that ∇ᵀ = −div and Eᵀ = −div_h.
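As a concrete illustration of the discrete operators above, here is a minimal NumPy sketch under periodic boundary conditions; the function names are ours, not the paper's:

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundary: returns (u_x, u_y)."""
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    return ux, uy

def sym_grad(w1, w2):
    """Symmetrized gradient E(w) = (grad w + (grad w)^T)/2 of the vector
    field w = (w1, w2); returns the three distinct entries of the
    symmetric 2x2 matrix field (e12 is stored once by symmetry)."""
    w1x, w1y = grad(w1)
    w2x, w2y = grad(w2)
    return w1x, w2y, 0.5 * (w1y + w2x)
```

For a constant field, both operators return zero, matching the continuous definitions.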

The Proposed Model
As is well known, the fidelity term measures the error between the restored image and the observed image, but it typically assumes that the signal or image is smooth and continuous, which is rarely true in the real world; the integer-order fidelity term is only suitable for modeling simple problems and cannot effectively model complex phenomena, so the restoration result is often unsatisfactory. To address these problems, researchers have proposed models that improve the fidelity term, such as the iterative denoising and back projection (IDBP) algorithm [30]. Ren et al. [31] proposed an efficient derivative-space alternating direction method of multipliers (D-ADMM) based on derivative space [32,33]. This algorithm reduces computation time and protects image details appropriately; however, its regularization term is TV-based, so the staircase effect is unavoidable. Moreover, most existing models perform poorly on textured image restoration.
Unlike the integer-order derivative operator, the fractional-order derivative at a point depends on the behavior of the whole function [34]; thus, the fractional derivative operator is non-local. This property is beneficial for texture preservation [35,36]. Therefore, we choose the fractional gradient (0 < γ < 1) in the fidelity term instead of the integer gradient. For comparison, we test both the integer gradient, namely the classic TGV model (γ = 0), and the fractional gradient (0 < γ < 1). We observe from Figure 1 that the integer-gradient model cannot remove blur completely, whereas the fractional-gradient model achieves good results in both deblurring and texture preservation.
Based on the TGV model (1), and to further improve restoration quality, we propose the fractional-order fidelity-based total generalized variation (FTGV) model for image deblurring:

min_{u,w} (β/2)‖∇^γ(Ku − f)‖²₂ + α1‖∇u − w‖₁ + α0‖E(w)‖₁,   (2)

where f is the noisy blurred image, K is the blurring matrix ((2) becomes a denoising model when K is the identity operator), u is the deblurred image, and β is the balance parameter. The fractional-order gradient fidelity term effectively protects the medium- and low-frequency components of the image while keeping the deblurred image close enough to the blurred data, so that more details are retained during denoising and deblurring, and the TGV term suppresses the staircase effect. There has been growing interest in fractional-order Sobolev–Poincaré inequalities (see, for instance, [37,38] and the references therein). Let 1 ≤ p < ∞, γ ∈ (0, 1), and Ω ⊂ Rⁿ (n ≥ 2) be a bounded domain; Jonsson et al. [39] combined the classical embedding theorems with fractional-order Sobolev spaces [40] and obtained the fractional Sobolev–Poincaré inequality

‖u − u_Ω‖_{L^p(Ω)} ≤ C |u|_{W^{γ,p}(Ω)},   (3)

where the constant C > 0, u_Ω = (1/|Ω|) ∫_Ω u(x) dx is the average of u over Ω, and the right-hand side is the W^{γ,p}(Ω) seminorm.
Letting u(x) = Ku − f in (3), we obtain

‖Ku − f − (Ku − f)_Ω‖_{L^p(Ω)} ≤ C |Ku − f|_{W^{γ,p}(Ω)}.

Thus, by the computation of u_Ω and the fractional Poincaré inequality (3), the integer-order fidelity term ‖Ku − f‖²₂ is controlled by its fractional gradient. This inequality provides theoretical support for our model.
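Spelling out this step (a sketch with p = 2; the constant C and the reduction to the mean value are as in the inequality above):

```latex
\|Ku-f\|_{L^{2}(\Omega)}
  \le \|Ku-f-(Ku-f)_{\Omega}\|_{L^{2}(\Omega)}
      + |\Omega|^{1/2}\bigl|(Ku-f)_{\Omega}\bigr|
  \le C\,|Ku-f|_{W^{\gamma,2}(\Omega)}
      + |\Omega|^{1/2}\bigl|(Ku-f)_{\Omega}\bigr|,
```

so that, up to the mean value of the residual (which is small when the noise is zero-mean and K preserves the mean), the integer-order fidelity is dominated by the fractional-order seminorm.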

Discrete Implementations of Gradient and Divergence
In this paper, gradient and divergence operators of fractional and integer order, together with their adjoints, appear throughout; we therefore give their discretizations in the following.
For a real function u : Ω → R, where Ω ⊂ R² is a bounded open set, define a rectangular spatial partition (x_m, y_n) (m, n = 0, 1, ..., N − 1) of the image domain Ω. In this paper, we adopt the Grünwald–Letnikov (G–L) definition of the fractional-order derivatives D^α_x u and D^α_y u, which can naturally be seen as a generalization of the finite difference scheme for partial derivatives:

D^α_x u(x, y) ≈ (1/h^α) Σ_{k=0}^{K−1} w^α_k u(x − kh, y),   D^α_y u(x, y) ≈ (1/h^α) Σ_{k=0}^{K−1} w^α_k u(x, y − kh),   (5)

where w^α_k = (−1)^k C^α_k, C^α_k = Γ(α + 1)/(Γ(k + 1)Γ(α − k + 1)), and Γ(·) is the gamma function. We apply these formulas at all points of Ω along the x- and y-directions. Using the relation (∇^α)ᵀ = (−1)^α div^α, the discrete fractional-order divergence of a vector function p(x, y) = (p1(x, y), p2(x, y)) is given by the corresponding adjoint sums with shifts of the opposite sign.   (7)
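The G–L coefficients obey the standard recursion w^α_0 = 1, w^α_k = (1 − (α + 1)/k) w^α_{k−1}, which avoids computing gamma functions directly. A minimal NumPy sketch (our helper names, periodic boundary, grid spacing h = 1):

```python
import numpy as np

def gl_coeffs(alpha, L):
    """Gruenwald-Letnikov coefficients w^alpha_k = (-1)^k binom(alpha, k),
    generated by the recursion w_0 = 1, w_k = (1 - (alpha+1)/k) w_{k-1}."""
    w = np.empty(L)
    w[0] = 1.0
    for k in range(1, L):
        w[k] = (1.0 - (alpha + 1.0) / k) * w[k - 1]
    return w

def frac_grad_x(u, alpha, L=20):
    """alpha-order G-L derivative along x (axis=1), truncated to L terms,
    with periodic boundary handling via np.roll."""
    w = gl_coeffs(alpha, L)
    out = np.zeros_like(u, dtype=float)
    for k in range(L):
        out += w[k] * np.roll(u, k, axis=1)  # contributes u(x - k, y)
    return out
```

For alpha = 1 and L = 2 the coefficients are (1, −1) and the sum reduces to the classical backward difference.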

Proposition 1 ([42]). For any 0 < α < 1, the coefficients {w^α_k}_{k=0}^∞ have the following properties: w^α_0 = 1; w^α_k < 0 for all k ≥ 1; and Σ_{k=0}^∞ w^α_k = 0, so that Σ_{k=0}^{K−1} w^α_k > 0 for any finite K ≥ 1.

In practice, all the fractional-order derivatives and their adjoint operators along the x-direction in (5) and (7) can be written in matrix form. Let U ∈ R^{N×N} denote the solution matrix at the nodes (mh, nh), m, n = 0, 1, ..., N − 1, corresponding to the x- and y-direction spatial discretizations. Then D^α_x U = B^α_N U, where B^α_N is the lower-triangular Toeplitz matrix with entries (B^α_N)_{i,i−k} = w^α_k. Similarly, the α-th order derivative along the y-direction of u(x, y), x, y ∈ [h, (N − 1)h], is written as D^α_y U = U(B^α_N)ᵀ. In addition, the adjoint operators can be written as (D^α_x)ᵀU = (B^α_N)ᵀU and (D^α_y)ᵀU = U B^α_N. Remark 2. When α = 1, C^1_k = 0 for k > 1, and the discrete fractional-order gradient and divergence reduce to the classical backward and forward differences.
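The matrix form can be checked numerically. The sketch below (a hypothetical helper, not the paper's code) builds the lower-triangular Toeplitz matrix B^α_N from the G–L coefficients and verifies that α = 1 reduces to the classical backward difference, as stated in Remark 2:

```python
import numpy as np

def frac_diff_matrix(alpha, N):
    """Lower-triangular Toeplitz matrix B^alpha_N with (B)_{i, i-k} = w^alpha_k,
    so that B @ U differentiates along columns and U @ B.T along rows
    (coefficients are simply truncated at the boundary here)."""
    w = np.empty(N)
    w[0] = 1.0
    for k in range(1, N):
        w[k] = (1.0 - (alpha + 1.0) / k) * w[k - 1]
    B = np.zeros((N, N))
    for k in range(N):
        B += w[k] * np.eye(N, k=-k)  # place w_k on the k-th subdiagonal
    return B
```

The adjoints (D^α_x)ᵀ and (D^α_y)ᵀ are then simply applications of B.T, consistent with the matrix identities above.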

Algorithms
For the implementation of (2), we consider the discrete form

min_{u,w} (β/2)‖∇^γ(Ku − f)‖²₂ + α1‖∇u − w‖₁ + α0‖E(w)‖₁,   (10)

where β > 0 is a weighting parameter.
Because of the non-linearity and non-convexity of the proposed model (10), conventional methods such as gradient descent, conjugate gradient, and Newton's method are not applicable. Thus, we propose two efficient numerical algorithms based on two popular optimization frameworks, PD and ADMM; both are fast, easy to compute, and not sensitive to parameters.
Remark 3. The operator B^α_N is linear; hence, the discrete form (10) is well defined.

Augmented Lagrangian Algorithm
We utilize the alternating direction method of multipliers (ADMM) [43] to solve the proposed model (10). For implementation, we introduce three auxiliary variables z, h, and g; the problem (10) is then transformed into the following constrained problem:

min_{u,w,z,h,g} (β/2)‖∇^γ g‖²₂ + α1‖z‖₁ + α0‖h‖₁   s.t.   z = ∇u − w,  h = E(w),  g = Ku − f.   (11)

The augmented Lagrangian functional (12) corresponding to (11) augments the objective with a Lagrange multiplier term and a quadratic penalty for each of the three constraints, where δ1, δ2, δ3 > 0 are the penalty parameters.
In fact, the primal variables (u, w, z, h, g) cannot easily be computed jointly, so we use an alternating iterative strategy to update them separately. Next, we solve each subproblem one by one.
Step 1: Computation of u. The u-subproblem in (13) is quadratic, and its minimization (14) reduces to a linear system that can be solved in closed form under the periodic boundary condition on u: ∇ᵀ∇ = ∇ᵀ_x∇_x + ∇ᵀ_y∇_y and KᵀK are block circulant matrices and can be diagonalized by the 2D discrete Fourier transform F, where ∇ᵀ is the adjoint of ∇ and Kᵀ is the adjoint kernel of K, obtained by rotating the kernel by 180°. Thus, we compute (14) by FFTs and IFFTs, where ∘ denotes componentwise multiplication.
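To make the spectral solve concrete, here is a minimal NumPy sketch (not the authors' implementation) of a quadratic subproblem of the form (δ1 KᵀK + δ2 ∇ᵀ∇)u = r under periodic boundary conditions; the right-hand side r, which collects the auxiliary variables and multipliers, is taken as given, and the function name and signature are hypothetical:

```python
import numpy as np

def solve_quadratic_fft(rhs, k_psf, delta1, delta2):
    """Solve (delta1 K^T K + delta2 grad^T grad) u = rhs via FFT
    diagonalization, valid with periodic boundaries (both operators are
    then block circulant). k_psf is the blur kernel zero-padded to the
    image size with its center at pixel (0, 0)."""
    Kf = np.fft.fft2(k_psf)
    N, M = rhs.shape
    # eigenvalues of grad^T grad (negative periodic Laplacian)
    dx = np.abs(1 - np.exp(-2j * np.pi * np.arange(M) / M)) ** 2
    dy = np.abs(1 - np.exp(-2j * np.pi * np.arange(N) / N)) ** 2
    lap = dy[:, None] + dx[None, :]
    denom = delta1 * np.abs(Kf) ** 2 + delta2 * lap
    denom[0, 0] = max(denom[0, 0], 1e-12)  # guard the zero frequency
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

When K is the identity (a delta kernel) and delta2 = 0, the solver reduces to a simple rescaling, which provides an easy sanity check.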
Step 2: Computation of w. For the w-subproblem in (13), the functional can be written componentwise for w1 and w2, giving a coupled linear system (17). Similar to the u-subproblem, we use FFTs to solve (17): applying FFTs to both sides yields a 2 × 2 block system whose blocks Θ_{i,j}, i, j = 1, 2 are diagonal matrices, and IFFTs then recover w1 and w2 from F(w1) and F(w2).
Step 3: Computation of z. The subproblem with respect to z is a typical ℓ²–ℓ¹ problem, which can be solved directly by the two-dimensional shrinkage operation z = max(|v| − α1/δ1, 0) · v/|v|, where v collects the current residual ∇u − w plus the scaled multiplier, |v| is the pointwise Euclidean magnitude, and the convention 0 · (0/0) = 0 is used. Step 4: Computation of h. The h-subproblem is solved directly by the analogous four-dimensional shrinkage operation with threshold α0/δ2, applied to the components of E(w). Step 5: Computation of g. To solve the g-subproblem in (13), we consider the Euler–Lagrange equation of (12) with respect to g, a linear equation involving the operator (∇^γ)ᵀ∇^γ; since (∇^γ)ᵀ∇^γ can be diagonalized by the FFT under the periodic boundary condition, g is obtained in closed form by FFTs and IFFTs. We summarize the FTGV-ADMM in Algorithm 1.
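The two-dimensional shrinkage of Step 3 has a standard closed form; a minimal sketch (the threshold t plays the role of α1/δ1, and the four-dimensional version for Step 4 is analogous):

```python
import numpy as np

def shrink2(v1, v2, t):
    """Isotropic two-dimensional shrinkage:
    (z1, z2) = max(|v| - t, 0) * v / |v|, |v| = sqrt(v1^2 + v2^2),
    with the convention that a zero vector maps to zero."""
    mag = np.sqrt(v1 ** 2 + v2 ** 2)
    scale = np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-12)
    return scale * v1, scale * v2
```

For example, the vector (3, 4) with threshold 1 has magnitude 5 and shrinks to (2.4, 3.2), while any vector shorter than the threshold is set to zero.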

Algorithm 1 (FTGV-ADMM) iterates Steps 1–5 together with the multiplier updates until convergence. Output: u.
It is straightforward to see that FTGV-ADMM does not satisfy the standard convergence theory of ADMM (proven by Eckstein and Bertsekas [44]), since the fidelity term of problem (10) is non-convex. However, we demonstrate the convergence of FTGV-ADMM empirically by plotting the energy and MSE values against the iteration count in Section 4.

Primal-Dual Algorithm
For the purpose of applying the Chambolle–Pock primal-dual (PD) algorithm [45,46], we need to rewrite the proposed model (10) as a minimax problem. Splitting the objective into F(Ax) + G(x), where A collects the operators (∇, E) and G is the fractional fidelity term, the proposed model (10) can be expressed via the convex conjugate F* as the minimax problem

min_{u,w} max_{p∈P, q∈Q} ⟨∇u − w, p⟩ + ⟨E(w), q⟩ + (β/2)‖∇^γ(Ku − f)‖²₂,   (26)

where P and Q are the pointwise balls P = {p : ‖p‖_∞ ≤ α1} and Q = {q : ‖q‖_∞ ≤ α0}, with the infinity norm taken over the pointwise vector (resp. matrix) magnitude. Applying the Chambolle–Pock algorithm to the minimax problem (26) yields an iterative scheme (27) with positive step-size parameters σ, τ. We then obtain the closed-form solutions of the subproblems in (27) one by one. Specifically, the p- and q-subproblems have closed-form solutions given by the Euclidean projections Proj_P and Proj_Q onto the convex sets P and Q, which can be easily calculated pointwise as p ← p / max(1, |p|/α1) and q ← q / max(1, |q|/α0).
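The projections onto P and Q are pointwise rescalings; a minimal sketch for the two-component case (the matrix-valued case for Q is analogous over its components):

```python
import numpy as np

def proj_ball(p1, p2, alpha):
    """Pointwise Euclidean projection onto {p : |p(x)| <= alpha}:
    p <- p / max(1, |p|/alpha), with |p| = sqrt(p1^2 + p2^2)."""
    mag = np.sqrt(p1 ** 2 + p2 ** 2)
    d = np.maximum(1.0, mag / alpha)
    return p1 / d, p2 / d
```

Points already inside the ball are left unchanged; points outside are radially rescaled onto its boundary.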
Solutions for the u-subproblem and the w-subproblem are obtained by solving the corresponding Euler–Lagrange equations, where Kᵀ is the adjoint operator of K and (∇^γ)ᵀ is the adjoint operator of ∇^γ as above.
The convergence of the PD algorithm was proven by Chambolle and Pock in [47,48] when G(x) is convex. However, since our fidelity term G(x) is non-convex, we cannot guarantee convergence of the PD algorithm for problem (10) in general unless G(x) is linearized; we therefore solve the linearized primal-dual problem (32) instead. We obtain the following convergence result for problem (32): Proposition 2. Let the sequence (x_k, y_k) be generated by (32). If the parameters τ and σ satisfy 1/τ ≥ σL²_A + L_G, where L_A = ‖A‖ and L_G is the Lipschitz constant of ∇G, then (x_k, y_k) converges to a saddle point of (10).

Proof of Proposition 2.
Based on the above analysis, we only need to estimate the parameters L_A and L_G to guarantee the convergence of the proposed FTGV-PD algorithm, and to guide the choice of appropriate values for τ and σ.
The norm of A can be estimated as L²_A = ‖A‖² < 12 (see details in [27]).
For the Lipschitz constant L_G, since G(u) = (β/2)‖∇^γ(Ku − f)‖²₂ and the G–L definition of the fractional-order derivatives ∇^γ_x u and ∇^γ_y u can be described as the convolution of the weight coefficients (−1)ˡC^γ_l (denoted k_l) with u, we have the estimate L_G ≤ β‖K‖²‖∇^γ‖², where k = (k_0, k_1, ..., k_{L−1}) and k_l = (−1)ˡC^γ_l. For the special case of TV, i.e., γ = 1 and L = 2, we have ‖∇‖² ≤ 8, which is consistent with [50]; hence, the upper bound of the fractional-order ‖∇^γ‖² is taken as 8. Remark 4. Proposition 2 is a special case of Theorem 1 in Chambolle and Pock [48]. Thus, we can simply choose τ = 1/(L_A + L_G) and σ = 1/L_A in our model.
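The bound ‖∇‖² ≤ 8 can be checked numerically by power iteration on ∇ᵀ∇; the sketch below (illustrative only, assuming a periodic grid of even size so that the checkerboard mode attains the bound) estimates the largest eigenvalue:

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundary."""
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def div(p1, p2):
    """Backward-difference divergence: the negative adjoint of grad."""
    return (p1 - np.roll(p1, 1, axis=1)) + (p2 - np.roll(p2, 1, axis=0))

def grad_norm_sq(n=8, iters=200, seed=0):
    """Power iteration on grad^T grad; returns an estimate of ||grad||^2,
    which equals 8 exactly on an even periodic grid."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n, n))
    lam = 0.0
    for _ in range(iters):
        v = -div(*grad(u))        # apply grad^T grad
        lam = np.sqrt(np.sum(v ** 2))
        u = v / lam               # renormalize the iterate
    return lam
```

The iteration converges to the largest eigenvalue, giving ‖∇‖² = 8 on this grid, in agreement with the bound cited above.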

Numerical Experiments
In this section, we test the performance of the proposed model (2) with the PD and ADMM algorithms, and compare with several efficient image deblurring methods, both visually and analytically: TGV [21], APE-TGV [28], D-TGV [33], BM3D [19], and NLTV [18]. To evaluate the restoration results, we use quantitative measures commonly used in image processing: the peak signal-to-noise ratio (PSNR), the mean square error (MSE), and the structural similarity (SSIM) metric. A better-quality image has higher PSNR and SSIM and lower MSE.
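For reference, the PSNR and MSE measures can be computed as follows (a standard implementation sketch, assuming an 8-bit peak value of 255; SSIM is omitted here as it is considerably more involved):

```python
import numpy as np

def mse(u, ref):
    """Mean square error between a restored image and a reference."""
    u = np.asarray(u, dtype=float)
    ref = np.asarray(ref, dtype=float)
    return np.mean((u - ref) ** 2)

def psnr(u, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    m = mse(u, ref)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

For instance, an image off by one gray level everywhere has MSE 1 and PSNR 20 log10(255) ≈ 48.13 dB.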
In the experiments, we consider three common blur scenarios: motion blur, disk blur, and average blur, generated by the MATLAB built-in functions fspecial('motion', 20, 50), fspecial('disk', 5), and ones(9)/81, respectively. In addition to the blur, we add Gaussian noise with standard deviation 0.1 to the blurry image. For illustration, six test images are presented in Figure 2.
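For readers reproducing the setup outside MATLAB, the following NumPy sketch builds rough stand-ins for the three kernels; note that fspecial('disk') anti-aliases boundary pixels and fspecial('motion') uses sub-pixel rasterization, so these approximations are not bit-exact:

```python
import numpy as np

def average_kernel(n=9):
    """n x n box blur, equivalent to MATLAB's ones(n)/n^2."""
    return np.ones((n, n)) / (n * n)

def disk_kernel(radius=5):
    """Rough stand-in for fspecial('disk', r): normalized indicator of the
    disk (MATLAB additionally anti-aliases the boundary pixels)."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    return k / k.sum()

def motion_kernel(length=20, angle_deg=50):
    """Rough stand-in for fspecial('motion', len, theta): a line segment of
    the given length and angle, rasterized to the nearest pixel."""
    n = int(np.ceil(length)) | 1            # odd support size
    k = np.zeros((n, n))
    c = n // 2
    th = np.deg2rad(angle_deg)
    for s in np.linspace(-(length - 1) / 2, (length - 1) / 2, 4 * n):
        i = int(round(c - s * np.sin(th)))
        j = int(round(c + s * np.cos(th)))
        if 0 <= i < n and 0 <= j < n:
            k[i, j] = 1.0
    return k / k.sum()
```

All three kernels are normalized to sum to one, so the blur preserves the mean intensity of the image.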

Comparison of Proposed Algorithms
In this subsection, we compare the efficiency of the two proposed algorithms, FTGV-ADMM and FTGV-PD, by minimizing the same objective function (10). We tune the algorithmic parameters of each algorithm to achieve the highest PSNR improvement; they are listed in Table 1. The visual results are provided in Figures 3–5, and Tables 2 and 3 report the restoration results in terms of PSNR and SSIM values alongside the other contrastive models. In addition, we plot the energy and MSE values against the iteration count in Figures 6 and 7, numerically demonstrating the convergence of each algorithm.

As noted in Section 3, FTGV-ADMM does not satisfy the standard convergence theory of ADMM, so a convergence guarantee valid for every penalty parameter δ > 0 is not available. In practice, different choices of δ influence the convergence speed of the algorithm. In our experiments, we use a rule of thumb for deblurring based on experience: δ1 = δ2 = δ3 = 0.01. From Tables 2 and 3, we find that FTGV-ADMM attains the highest PSNR value and a competitive SSIM value for the texture, butterfly, brain, and man images. FTGV-PD achieves slightly lower PSNR values than FTGV-ADMM for disk blur and average blur. Unfortunately, FTGV-PD performs poorly on motion blur for all test images, whether texture or natural. Visually, FTGV-ADMM achieves better recovery results than FTGV-PD overall.
The results provided in Figure 6 show that, for motion blur, FTGV-ADMM attains the lowest MSE value with the fewest iterations for all test images, whereas FTGV-PD requires more iterations and reaches a larger final MSE; FTGV-ADMM is thus the fastest algorithm to minimize the energy. For the disk blur presented in Figure 7, FTGV-PD attains the lowest MSE with fewer iterations than FTGV-ADMM for the texture image, but FTGV-ADMM reaches the lowest MSE fastest for the other three images; in this setting, FTGV-PD is the fastest and best algorithm at minimizing the energy. Although brief oscillations occur during the energy descent of FTGV-ADMM, they do not affect the convergence result. For average blur, we obtain the same conclusions as for disk blur, so the plots are omitted.
Next, we illustrate how the fractional order γ affects image restoration, referring to Figure 8, which plots the largest PSNR value as a function of γ. The experimental results show that a suitably chosen fractional order γ in the fidelity term avoids the staircase effect and recovers more detailed structures.
From Figure 8, we notice that for motion blur the optimal order lies in γ ∈ [0.1, 0.8]; the lowest PSNR value occurs at γ = 0.9 for all test images. For disk blur, the optimal order is γ = 0.8, where the test images attain the highest PSNR. The curve for average blur behaves roughly the same as for disk blur, so it is omitted. Thus, we use the fractional orders γ = 0.7 or γ = 0.8 in our experiments to obtain a higher PSNR with simple adjustment.
Overall, since FTGV-ADMM involves the fewest parameters and suits all blur types and images compared to FTGV-PD, we use it for the remaining experiments.

Comparison of Other TGV-Based Methods
To demonstrate that the model based on the fractional-order fidelity term restores texture better than other fidelity-based models, for both texture-rich and natural images, we compare the proposed model (2) with TGV [21], APE-TGV [28], and D-TGV [33] under three different blur kernels and Gaussian noise with standard deviation 0.1. For a fair comparison, the parameters of the comparison models are selected according to the recommendations of the corresponding papers and tuned appropriately for the best PSNR; the parameter choices are listed in Table 1. Results are given in Figures 3–5 and Tables 2 and 3.
As seen in Figures 3b, 4b, and 5b, TGV could not eliminate the blur completely. The images restored by APE-TGV in Figures 3c, 4c, and 5c are over-smoothed, losing the texture structure. The D-TGV model has advantages in maintaining structures, but it tends to lose some texture details and creates some artifacts. From Figures 3–5, the restoration result of D-TGV is visually inferior to APE-TGV for the texture image, which means that both D-TGV and APE-TGV are imperfect under certain circumstances. However, the proposed model with the ADMM algorithm (FTGV-ADMM) overcomes these drawbacks and achieves a better balance between complete deblurring and restoring image details; see Figures 3e, 4e, and 5e.
Tables 2 and 3 report that FTGV-ADMM is comprehensively superior to the other models in terms of PSNR. For example, its PSNR is 8.4 dB, 2.65 dB, and 1.69 dB higher than TGV, APE-TGV, and D-TGV, respectively, for the texture image degraded by motion blur. This demonstrates the superiority of FTGV-ADMM.

Comparison with BM3D and NLTV
In this subsection, we compare the performance of FTGV-ADMM with the well-known block-matching and 3D filtering (BM3D) method and nonlocal TV (NLTV) on the image deblurring problem. For NLTV, we use the preconditioned Bregmanized operator splitting (PBOS) method proposed by Zhang, Burger et al. [18]. The other parameters of each method are selected as suggested by the authors, and the regularization parameter µ is tuned to obtain the highest PSNR value.
Four images are used in this example; they are corrupted by motion blur or disk blur together with Gaussian noise. The experimental results are displayed in Figures 9–12 and Table 4.
From Table 4, we can see that FTGV-ADMM achieves the best PSNR and SSIM values, far higher than the BM3D and NLTV methods. The deblurred results of the NLTV method are over-smoothed and show obvious artifacts; see the enlarged views in Figures 9i, 10i, 11i, and 12i. The BM3D method is one of the best existing deblurring methods for Gaussian and out-of-focus blurs, but its restored images are still somewhat smooth and many details are lost, as seen in the enlarged views in Figures 9h, 10h, 11h, and 12h. Most importantly, FTGV-ADMM overcomes these difficulties and achieves higher visual quality.

Conclusions
This paper studies a novel image deblurring model applicable to three different blur kernels. The new model combines a fidelity term in fractional-order derivative space, which preserves details and eliminates the staircase effect, with a total generalized variation (TGV) regularization term. We propose two efficient numerical algorithms based on the PD and ADMM methods to overcome the non-convexity and non-differentiability of the new variational model. The experimental results show that our model is both quantitatively and qualitatively better than other advanced models.

Figure 1. Zoomed visualization of the Lena image. (a) Noisy blurred image (disk blur with radius 5 and Gaussian noise with σ = 0.1); (b) deblurred image by the integer model; (c) deblurred image by the fractional model.

Figure 8. The influence of the fractional order γ on the restored results. (a) PSNR values for different γ under motion blur. (b) PSNR values for different γ under disk blur.

Figure 9. Performance comparison of the proposed method with BM3D and NLTV for the Lena image. The red squares mark the regions enlarged for zooming. First row: (a) original image; (b) motion-blurred image; (c–e) deblurred images by BM3D, NLTV, and FTGV-ADMM. Second row: (f–j) corresponding enlarged versions of (a–e). Figures 10–12 present the analogous comparisons for the remaining test images.

Table 1. The parameter values for the numerical experiments on all test images.

Table 2. Quantitative comparison under three blur kernels. Bold values indicate the best result.

Table 3. Quantitative comparison under three blur kernels. Bold values indicate the best result.