Blind Image Deblurring Based on Local Edges Selection

Abstract: The edges of an image become less sparse when the image is blurred. Selecting effective image edges is therefore a vital step in image deblurring, as it helps to build deblurring models more accurately. Global edges selection methods tend to fail to capture dense image structures, and the edges they select are easily affected by noise and blur. In this paper, we propose an image deblurring method based on local edges selection. The local edges are selected as the difference between the bright channel and the dark channel of the image. A novel image deblurring model including a local edges regularization term is then established. The clear image and the blurring kernel are obtained by alternating iterations, in which the clear image is computed with the alternating direction method of multipliers (ADMM). In the experiments, tests are carried out on gray value images, synthetic color images, and natural color images. Compared with other state-of-the-art blind image deblurring methods, the visual results and quantitative performance verify the effectiveness of our method.


Introduction
Image deblurring has long been a challenging problem. The aim of image deblurring is to recover a clear image from a blurred image. Image deblurring can be separated into non-blind and blind cases. In non-blind image deblurring, the blurring kernel is known in advance and the clear image is obtained from the blurred image and the blurring kernel [1][2][3]. Different from non-blind image deblurring, blind image deblurring aims to obtain a clear image from a blurred image when the blurring kernel is unknown. Generally, the uniform blurring process [4] is modeled by:

y = H ⊗ x + n, (1)

where y is the blurred image, H is the blurring kernel, x is the clear image, n is additive noise, and ⊗ is the convolution operator. Blind image deblurring is an ill-posed problem, in which both the clear image and the blurring kernel are unknown. Given a blurred image, there are countless sets of estimated results for the clear image and the blurring kernel. Researchers have been working for many years to obtain better estimates of the clear image and the blurring kernel [5][6][7][8][9]. Many state-of-the-art image deblurring algorithms solve the ill-posed problem within the maximum a posteriori (MAP) framework [5]. In MAP based image deblurring methods, the process of estimating the clear image can be formulated as follows:

p(x, H|y) = p(y|x, H) · p(x) · p(H) / p(y), (2)

where p(x) and p(H) are the probability density functions of the clear image and the blurring kernel. In image deblurring, p(y) is known in advance, so Equation (2) can be simplified as follows:

p(x, H|y) ∝ p(y|x, H) · p(x) · p(H), (3)

where p(y|x, H), p(x), and p(H) are the likelihood term, the priors on the clear image, and the priors on the blurring kernel, respectively [5]. According to Equation (3), the image deblurring model [6] can be summarized as follows:

min_{x,H} P(x, H) + Q(x) + R(H), (4)

where P, Q, and R are the likelihood term, the regularization term based on image priors, and the regularization term based on the blurring kernel, respectively.
The likelihood term is usually formulated as follows [6]:

P(x, H) = ‖y − H ⊗ x‖₂², (5)

The regularization term Q(x) is built based on a large number of experiments on image priors [7][8][9]. R(H) is often formulated as the Lp-norm of H [7][8][9][10].
Numerous image deblurring algorithms have been proposed in the past years. Some of them use sparse priors on natural images [7,[9][10][11][12][13], but statistical priors on images or image gradients are not effective for all kinds of images [8]. The edges of an image are less sparse when the image becomes blurred [14], so some algorithms focus on building deblurring models using salient image edges. Joshi et al. [15] found the location and orientation of edges by a sub-pixel difference-of-Gaussians edge detector, then predicted sharp edges by propagating the maximum and minimum values along the edge profile. Jia [16] estimated the blurring kernel by using the transparency on the image boundary. Hu et al. [17] stated that smooth regions cannot contribute much to estimating the ground truth of the kernel, and proposed a method to extract suitable regions for blurring kernel estimation. Javaran et al. [18] extracted the main structure of the blurred object, then selected salient edges by shock filtering. Cho and Lee [19] obtained salient image edges by bilateral filtering. Xu and Jia [20] proposed a deblurring method whose edges selection is also based on shock filtering. In [21], color image gradients were added in the likelihood term instead of other operators. However, in these methods, the image edges are selected based on the global information of the image, dense structures cannot be captured, and the edges are easily affected by noise [4]. In addition, when an image becomes blurred, it is even harder for these methods to extract image edges well. Recently, image restoration methods based on deep learning [22,23] and super resolution [24][25][26] have been proposed, which can obtain good results when applied to image deblurring. However, they either require a large number of computations or a large amount of training images, which adds to the complexity of the algorithm.
Considering the limitations of global image edges, we propose a new blind image deblurring method based on local edges selection. The contributions of the proposed method are summarized as follows: (1) The proposed image deblurring model is built based on MAP, but different from traditional MAP based methods, we add a novel local image edges term to the deblurring model; the local edges are selected from the bright and dark channels of the image. (2) In most blind image deblurring methods, the blurring kernel is estimated first and the clear image is then obtained by non-blind deblurring methods. Different from these methods, the clear image and blurring kernel are obtained by alternating iteration in the proposed method. The rest of this paper is organized as follows. Section 2 consists of five parts. In Section 2.1, the proposed local edges selection method is introduced. Then the image deblurring model and blind deblurring process are introduced in Section 2.2. In Sections 2.3 and 2.4, we present the estimation of the blurring kernel and clear image, respectively. Section 2.5 introduces the stopping criterion. Section 3 provides image deblurring results and discussions, consisting of four parts. The results of image edges selection are shown in Section 3.1. In Sections 3.2 and 3.3, we discuss the parameters in the deblurring model and the convergence of the proposed algorithm. In Section 3.4, we provide the image deblurring results and compare them with other state-of-the-art methods to verify the effectiveness of the proposed method. Finally, Section 4 concludes this paper.

Local Edges Selection Method
Traditional methods obtain image edges by global filtering. Gradient filters consider two or three pixels in the neighborhood, which tends to ignore longer-range dependencies [4]. In contrast, image patches can model more complex image structures in larger neighborhoods [27]. So recently some patch-based image deblurring methods have been proposed, such as image deblurring based on patch priors [4,28] and internal patch recurrence [27], etc. The dark channel [29] and bright channel [30] are useful matrices based on the local information of an image, and some previous methods use them in haze removal [29] and image restoration [30,31]. Different from all these methods, in this paper, we innovatively select image edges using the dark channel and bright channel, and build the deblurring model on them.
The dark channel [29] is obtained by finding the minimum value in an image patch, which is defined as follows:

Φ1(x)(w) = min_{z∈N(w)} ( min_{c∈{r,g,b}} x^c(z) ), (6)

where Φ1(x) is the dark channel of x, N(w) is an image patch centered at pixel w, and x^c is the c-th channel of the RGB color image. Similarly, the bright channel [31] represents the maximum value in an image patch, which is defined as follows:

Φ2(x)(w) = max_{z∈N(w)} ( max_{c∈{r,g,b}} x^c(z) ). (7)

Based on the bright channel and dark channel, we propose a new image edges selection method. The edge of an image is composed of points whose brightness changes markedly, so the edge of an image can also be regarded as the junction of a "bright" area and a "dark" area. By subtracting the dark channel image from the bright channel image, the complete edges of an image can be obtained. So in the proposed method, the edges of image x can be obtained as follows:

x̂ = Φ2(x) − Φ1(x). (8)

The proposed method can select the edges of color images as well as gray value images. In Equations (6) and (7), when the input image is a gray value image, the dark channel and bright channel can be obtained by Equations (9) and (10):

Φ1(x)(w) = min_{z∈N(w)} x(z), (9)

Φ2(x)(w) = max_{z∈N(w)} x(z). (10)
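As a concrete illustration, the patch-wise minimum and maximum in Equations (6)–(10) map directly onto morphological minimum/maximum filters, so the bright-minus-dark edge map can be computed in a few lines. The sketch below is ours, not from the paper; the function name is illustrative and intensities are assumed to lie in [0, 1]:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def local_edges(img, patch=5):
    """Local edges as bright channel minus dark channel.

    img: H x W (gray) or H x W x 3 (RGB) float array in [0, 1].
    For color images the min/max is first taken over channels;
    for gray value images only the spatial patch is used.
    """
    if img.ndim == 3:                        # color: reduce over channels first
        dark_src = img.min(axis=2)
        bright_src = img.max(axis=2)
    else:                                    # gray value image
        dark_src = bright_src = img
    dark = minimum_filter(dark_src, size=patch)      # dark channel
    bright = maximum_filter(bright_src, size=patch)  # bright channel
    return bright - dark                     # local edge map
```

Flat regions give zero response, while the map peaks at brightness transitions, which matches the intuition that edges are the junction of "bright" and "dark" areas.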

Image Deblurring Model
As mentioned above, the edges of an image are less sparse when the image becomes blurred. Based on this change in sparsity, some methods add first-order or higher-order gradient operators to the likelihood term of the deblurring model [19]; other methods devise new criteria for selecting informative edges [20]. However, adding too many partial derivative operators increases computational complexity and reduces efficiency [21]. In addition, a failure to extract the edges of a blurred image can cause ringing artifacts. Since the proposed edges selection method can better detect the edges of the image, we use it to build the deblurring model.
The proposed image deblurring model is defined as follows:

min_{x,H} ‖ŷ − H ⊗ x̂‖₂² + α Q_TV(x) + β ‖H‖₂², (11)

where x̂ and ŷ are the local edges of the clear image and the blurred image, and α and β are regularization weights. Q_TV(x) is the optimized total variation term, which Section 2.4 covers in detail. Based on the proposed deblurring model, the image deblurring process is shown in Algorithm 1. The blurred image y and the blurring kernel size are the inputs; the outputs of the algorithm are the clear image x and the blurring kernel H. In the initial stage of the algorithm, the clear image x and blurring kernel H are unknown, so we need to initialize them. In the proposed method, we initialize the clear image by setting x = y. The blurring kernel H is initialized as a sparse matrix in which only a few pixels are nonzero [21]. In subsequent computations, the values of x and H are updated in each iteration until the algorithm stops. In each iteration, we obtain the renewed local edges of the intermediate clear image, which are then used to update the blurring kernel and the clear image.
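The alternating scheme of Algorithm 1 can be sketched as follows. Since the kernel step, image step, edge selector, and similarity measure are specified later in Sections 2.3–2.5, they are passed in here as callables; all names and the `max_iter` cap are illustrative placeholders of ours, not the paper's exact algorithm listing:

```python
import numpy as np

def blind_deblur(y, kernel_size, estimate_kernel, estimate_image,
                 local_edges, kernel_similarity, max_iter=50):
    """Sketch of the alternating iteration: kernel update, then image update.

    estimate_kernel(x_edges, y_edges) -> H      (FFT step, Section 2.3)
    estimate_image(y, H)              -> x      (ADMM step, Section 2.4)
    local_edges(img)                  -> edges  (Section 2.1)
    kernel_similarity(H_prev, H)      -> [0, 1] (Section 2.5)
    """
    x = y.copy()                                   # initialize x = y
    H = np.zeros((kernel_size, kernel_size))
    H[kernel_size // 2, kernel_size // 2] = 1.0    # sparse initial kernel
    H_prev = H
    y_edges = local_edges(y)
    for it in range(max_iter):
        x_edges = local_edges(x)                   # renewed local edges
        H = estimate_kernel(x_edges, y_edges)      # kernel update
        x = estimate_image(y, H)                   # image update
        # stopping criterion: after 10 iterations, stop once the kernel
        # barely changes between two contiguous iterations
        if it >= 10 and kernel_similarity(H_prev, H) > 0.95:
            break
        H_prev = H
    return x, H
```

Injecting the sub-steps as arguments keeps the control flow readable and lets each component be tested in isolation.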

Estimation of Blurring Kernel
In the proposed method, the blurring kernel is obtained by Equation (12). Because of the effectiveness of the proposed edges selection method, we do not impose further constraints on the blurring kernel. So, in each iteration, the intermediate blurring kernel in Algorithm 1 can be obtained efficiently by the fast Fourier transform (FFT), as defined in Equation (13):

H = F⁻¹( F^T(x̂) F(ŷ) / ( F^T(x̂) F(x̂) + β ) ), (13)

where the multiplication and division are element-wise, ŷ and x̂ are the edges selected by the proposed method, and x is the intermediate result of the clear image from the previous iteration. F(·) and F⁻¹(·) are the forward and inverse Fourier transforms, respectively, and F^T(·) is the complex conjugate of F(·).
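Assuming the kernel update takes the regularized Wiener form suggested by the Fourier-domain division in Equation (13), a minimal numpy sketch is given below. The crop to the kernel support and the clip-and-normalize post-processing are our assumptions, common in FFT-based kernel estimation but not stated in the text:

```python
import numpy as np

def estimate_kernel_fft(x_edges, y_edges, kernel_size, beta=0.7):
    """Closed-form Fourier-domain kernel update (assumed form of Eq. 13).

    Solves min_H ||y_edges - H * x_edges||^2 + beta ||H||^2; beta
    regularizes the element-wise division.
    """
    Fx = np.fft.fft2(x_edges)
    Fy = np.fft.fft2(y_edges)
    H_full = np.real(np.fft.ifft2(np.conj(Fx) * Fy / (np.conj(Fx) * Fx + beta)))
    # crop to the kernel support, clip negatives, normalize to sum 1
    H = np.maximum(H_full[:kernel_size, :kernel_size], 0)
    s = H.sum()
    return H / s if s > 0 else H
```

With identical input edges the recovered kernel is close to a delta, which is a quick sanity check for the implementation.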

Estimation of Clear Image
In the estimation of the clear image, we aim to solve the following function:

min_x ‖ŷ − H ⊗ x̂‖₂² + α Q_TV(x), with Q_TV(x) = Σ_{i=1}^{4} ‖∇_i x‖₁, (14)

where ∇_i is the image gradient filter [32] in the directions of 0°, 45°, 90°, and 135°; different from the method in [32], the operators are obtained by bilinear interpolation of the basic gradient operator [21]. In fact, when i = 2, ∇_i comprises the image gradient operators in the directions of 0° and 90°, and Q_TV(x) reduces to the classic total variation [33]. We add two more directions to the total variation term to reduce ringing artifacts. The dark channel and bright channel are obtained by a non-linear operation. In the method proposed by Pan et al. [31], the dark channel is equivalently transformed into the multiplication of the image and a linear operator, and the clear image is then obtained by the FFT method. However, in Pan's method, the linear operator is derived from the gray value image rather than the color image, so introducing a linear operator is not the best way to solve the problem. In the proposed method, the clear image is obtained within the alternating direction method of multipliers (ADMM) framework [32], where the clear image is updated as in Equations (15) and (16), with k the inner iteration index and a_k, b_k, c_k, and d_k intermediate variables. In Algorithm 2, a_k and c_k are obtained by the gradient descent method [14]. In each iteration, a_k, b_k, c_k, and d_k are obtained first, then the intermediate image x_k is obtained by Equation (16). When the iteration count reaches 20, the algorithm converges to a high accuracy, so we empirically set k = 20. The detailed convergence analysis is introduced in Section 3.3.
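A possible realization of the four-direction variation term is sketched below. The 2 × 2 diagonal filters are a simple stand-in for the bilinearly interpolated operators of [21], which the paper does not spell out; the 1/√2 scaling accounts for the diagonal step length:

```python
import numpy as np
from scipy.ndimage import convolve

# Difference filters for 0°, 90°, 45°, and 135°; the diagonal pairs
# approximate the bilinearly interpolated directional operators.
FILTERS = {
    0:   np.array([[-1.0, 1.0]]),                        # horizontal
    90:  np.array([[-1.0], [1.0]]),                      # vertical
    45:  np.array([[0.0, 1.0], [-1.0, 0.0]]) / np.sqrt(2),
    135: np.array([[1.0, 0.0], [0.0, -1.0]]) / np.sqrt(2),
}

def q_tv(x):
    """Anisotropic four-direction total variation (assumed form of Q_TV)."""
    return sum(np.abs(convolve(x, f)).sum() for f in FILTERS.values())
```

Constant images have zero variation; adding the two diagonal directions penalizes oblique oscillations that the classic two-direction total variation misses, which is the stated motivation for the extra directions.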

Stopping Criterion
In the proposed method, the intermediate clear image and blurring kernel are alternately obtained in each iteration. As the iteration count increases, the estimated intermediate clear image and blurring kernel become closer to the real ones. When the iteration count reaches a certain number, the results converge to a high accuracy, and the iteration can be stopped. In the proposed method, we use the kernel similarity to define the stopping criterion. Kernel similarity [17] is defined as follows:

S(H, H₁) = max_γ ( Σ_τ H(τ) · H₁(τ + γ) ) / ( ‖H‖₂ · ‖H₁‖₂ ), (17)

where H is the real kernel, H₁ is the intermediate estimated kernel, and γ is a possible shift between the two kernels. The value of the kernel similarity ranges from 0 to 1; a larger value reflects a better result.
In the proposed method, we set H and H₁ to be the estimated kernels in two contiguous iterations. In the first few iterations, the kernel similarity changes considerably. After a certain number of iterations, the change of the blurring kernel slows down, and the kernel similarity increases. Based on exhaustive experiments, the stopping criterion of the proposed method is summarized as follows: after the iteration number reaches 10, when the kernel similarity between H and H₁ is higher than 0.95, the iteration stops.

The Results of Image Edges Selection
First, tests are carried out to verify the effectiveness of the proposed edges selection method. Figure 1 shows the comparison of different edges selection methods. It can be seen from Figure 1 that the first-order image gradients [9] and the Laplace gradients cannot represent the edges well in each direction. Although the Canny edge detector [34] can better extract the details of image edges, its thinning operation on image edges introduces unnecessary noise. In contrast, the proposed method obtains more complete and smooth image edges in each direction. Moreover, because the proposed edges selection method is based on image patches, it is also rotation invariant. Figure 1e shows the edges of gray value images obtained by the proposed method, whose dark channel and bright channel are calculated by Equations (9) and (10). We can see from Figure 1e that the proposed method selects image edges effectively. Figure 1f shows the edges of color images, with Φ1(x) and Φ2(x) calculated by Equations (6) and (7). This variant utilizes the color information of the image, and the selected edges are more representative than those in Figure 1e. It can be concluded from the test that the proposed edges selection method based on local information is more effective than the others. In addition, salient edges and detailed texture information can be better obtained using color information.
The proposed edges selection method is also applicable to blurred images. Figure 2 shows the comparison of the selected edges in a patch of the image in Figure 1. The comparison methods obtain image gradients from the difference between adjacent pixels. When the image becomes blurred, the image structure becomes blurred as well; the difference between adjacent pixels near image edges is no longer large, so the edges cannot be extracted well. In contrast, the proposed edges selection method preserves the image structure as much as possible and avoids introducing unnecessary noise.

Discussion of the Parameters in the Deblurring Model
In this subsection, we discuss how the parameters in the model affect the deblurring results. First, the size of the local image patch affects the edges selection results and the final image deblurring results. When the size of the image patch N(w) in Equations (6) and (7) is less than 9 × 9, it has almost no influence on the deblurring results for normal images. However, for low illumination or saturated images, larger image patches prevent the edges from being selected well, which leads to poor kernel estimates. Figure 3 shows an example of how the window size influences the deblurring result. Based on exhaustive experiments, we set the patch size to 5 × 5. In addition, the proposed model involves two parameters, α and β. In order to analyze the effects of the two parameters on the proposed image deblurring method, we test the sensitivity of α and β. The sensitivity analysis is similar to the method proposed by Pan et al. [8]. In the analysis of each parameter, the other parameters remain unchanged. We vary α from 0.0001 to 0.2 with a step size of 0.005, and β from 0.1 to 3 with a step size of 0.05. In the sensitivity analysis test, kernel similarity is the metric used to measure the accuracy of the estimated kernels. Figure 4 shows the average kernel similarity over the 20 test images; the results show that the proposed method performs well over a wide range of parameter settings, so the algorithm is robust to parameter selection.

The Convergence
In order to verify the convergence of the proposed method, we test it empirically. Figure 5 shows the residual [35] of the proposed method as the iteration count increases. In Figure 5a, the residuals of the R, G, and B channels of the color image are calculated respectively. Figure 5b shows the residual of the blurring kernel. As the iteration count increases, the residual [23] gradually decreases. When the iteration count reaches 20, the proposed algorithm converges to a high precision, so the inner iteration counts of the image and blurring kernel are set to 20 in the deblurring process. From the convergence test, we can conclude that the proposed algorithm converges to the real clear image and blurring kernel with high probability.

Image Deblurring Results
In all the tests, the size of the image patch N(w) equals 5 × 5. The parameters in Equation (11) are set to the same values in all the experiments: α = 0.008, β = 0.7. The parameters of the comparison methods in this section are selected according to their respective references.
The first test is based on the dataset of Levin et al. [5], shown in Figure 6; the blurred images are obtained from eight blurring kernels and four ground truth gray value images. In the deblurring of gray value images, the dark channel and bright channel are obtained by Equations (9) and (10). The comparison algorithms include the methods in [7,9,19,20,22,27,31,33,36]. Figure 7 shows the visual results for one motion blurred image in Levin's dataset. Judging from the estimated clear images, the proposed method outperforms the others. The clarity of the images estimated by some methods, such as [7,19], is relatively poor. The clear images obtained by the methods in [22,31] are too smooth, which may lead to the loss of some details. The brightness of the image obtained by [36] has changed considerably. The blurring kernels estimated by the proposed method are also closer to the real kernels and have fewer noise points. Table 1 shows quantitative comparison results for the clear image and blurring kernel in Figure 7; the metrics include the structural similarity (SSIM) [37], the peak signal to noise ratio (PSNR) [38], and the kernel similarity (KS). It can be seen from the results that the proposed method outperforms the competing methods.
Then, tests are carried out on all 32 synthetic blurred images in Levin's dataset. In addition to SSIM, PSNR, and KS, success rates are obtained on the dataset. The success rate is measured by the error ratio [5], which is defined in Equation (18):

r = Σ_p (x^e_p − x^g_p)² / Σ_p (x^t_p − x^g_p)², (18)

where x^e_p and x^t_p are the clear images obtained with the estimated blurring kernel and the ground truth blurring kernel, respectively, for each pixel p, and x^g_p is the ground truth clear image at pixel p. Empirically, when the error ratio is lower than 3, the algorithm is considered successful. Figure 8 shows the average SSIM, PSNR, KS, and success rate. Then, experiments are carried out on the 48 color images in the dataset of Köhler et al. [39], which includes four ground truth color images and 12 blurring kernels. The 48 synthetic images in Köhler's dataset are deblurred by the proposed method and the methods in [1,7,9,19,20,22,31,40,41,42]. Figure 9 shows the visualization results for one blurred image in the dataset. The results show that the proposed method can effectively restore detailed information in images. Some methods, such as [1,41], cannot obtain clear images successfully: the image still suffers from serious blur. In some deblurred images, such as those estimated by [7,19,20,22], ringing artifacts affect the results. It can be seen more clearly from the magnified views that the ringing artifacts are better suppressed by the proposed method, and the estimated image is clearer than the others. Then, quantitative comparisons are carried out on the 48 images in Köhler's dataset. Figure 10 shows the comparisons of the average SSIM and PSNR values. The proposed method outperforms the others in the quality indices: the SSIM and PSNR are consistently higher. In addition, we test the performance on a dataset of real captured standing trees; the ground truth images are shown in Figure 11.
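The error ratio of Levin et al. [5] compares the squared error of the image recovered with the estimated kernel against that of the image recovered with the ground truth kernel. A short sketch, with variable names of our choosing:

```python
import numpy as np

def error_ratio(x_est, x_truth_kernel, x_gt):
    """Error ratio of Levin et al. [5] (Eq. 18).

    x_est:          image deconvolved with the *estimated* kernel
    x_truth_kernel: image deconvolved with the *ground truth* kernel
    x_gt:           ground truth clear image
    A run is counted a success when the ratio is below 3.
    """
    num = np.sum((x_est - x_gt) ** 2)
    den = np.sum((x_truth_kernel - x_gt) ** 2)
    return num / den
```

Normalizing by the ground-truth-kernel reconstruction means the metric measures only the quality of the kernel estimate, not the difficulty of the non-blind deconvolution itself.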
The images of living trees contain rich texture information, which makes them good typical images for testing deblurring results. We obtain 32 blurred images based on the blurring kernels in Levin's dataset and the four ground truth images. The comparison methods include those in [7,9,19,20,22,27,30,31,36]. Figure 12 shows one of the results on the 32 blurred images. From the visualization results, the proposed method preserves more texture information of the image compared with the other methods; the image details can be seen clearly in the magnified views in Figure 12l,m. Figure 13 shows the average SSIM values and the success rate. The proposed method still has advantages over the others: the SSIM values are consistently higher, and the success rate is the highest among the methods. The experimental results also show that the proposed method can deblur natural blurred images well. In Figure 14, we compare the deblurring results for a natural blurred image with other methods. The clear images, their magnified views, and the estimated blurring kernels are shown together in the figure. Compared with other methods, the clear image obtained by the proposed method has fewer ringing artifacts, more image details, and better color fidelity. Figure 15 shows other blind deblurring results. The images are chosen from Xu's dataset [20], Krishnan's dataset [7], and our own dataset. The results show that the proposed method can deblur natural images well: the clear images have good color fidelity and clarity. The proposed method is effective in blind image deblurring.

Conclusions
In this paper, we propose a blind image deblurring method based on local edges selection. The proposed edges selection method based on local information is new and proven to be more effective than methods using global information when handling images with dense structures, noise, and blur.
We use the new edges selection method to build the regularization term. Our model is simpler and more effective than other methods using image gradients. The image and blurring kernel are obtained simultaneously by alternating iteration. The experimental results show that the proposed blind deblurring method is effective for both gray value images and color images. Compared with other state-of-the-art deblurring methods, the quantitative results demonstrate the effectiveness of our method.
There are still some limitations to the proposed method. In the deblurring process, we assume that the blur is stationary, but in practical applications, blurring kernels are sometimes time-varying and space-varying, which is much more complex. We will focus on deblurring with more complex blurring kernels in future work.