Blind Image Deblurring via a Novel Sparse Channel Prior

Blind image deblurring (BID) is a long-standing challenging problem in low-level image processing. To achieve visually pleasing results, it is of utmost importance to select good image priors. In this work, we develop the ratio of the dark channel prior (DCP) to the bright channel prior (BCP) as an image prior for solving the BID problem. Specifically, the two channel priors obtained from RGB images are first used to construct an innovative sparse channel prior, and the learned prior is then incorporated into the BID task. The proposed sparse channel prior enhances the sparsity of the DCP and, at the same time, captures the inverse relationship between the DCP and BCP. We employ the auxiliary variable technique to integrate the proposed sparse prior into the iterative restoration procedure. Extensive experiments on real and synthetic blurry image sets show that the proposed algorithm is efficient and competitive with state-of-the-art methods and that the proposed sparse channel prior is effective for blind deblurring.


Introduction
The goal of blind image deblurring is to restore a sharp image and a blur kernel from the input degraded image. The degradation types include motion blur, noise, out-of-focus blur and camera shake. Assuming that the blur is uniform and spatially invariant, the blurring process can be modeled as

b = l * k + n,  (1)

where b is the blurry input, k is the blur kernel, n is the additive noise and * denotes the convolution operator. This problem is highly ill-posed because both the latent sharp image l and the blur kernel k are unknown. To make it well-posed, most existing methods utilize the statistics of natural images to estimate the blur kernel. For example, a heavy-tailed distribution [1], patch recurrence prior [2], nuclear norm [3,4], low-rank prior [5], sparse prior [6], multiscale latent prior [7] or additional information about a specific image [8-10] have been used to estimate a better kernel. Strong sparsity of image intensity and gradient has been widely used in low-level computer vision problems and has mature applications in image deblurring [6,11-13], such as the L1/L2 norm [14], the reweighted L1 norm [15], the L0 norm prior [16-19] and the local maximum gradient (LMG) sparse prior [20]. To favor clear images over blurry ones, edge selection methods [21-23] have been embedded in the blind deconvolution framework. However, strong edges are not always available. The channel prior was introduced by He et al. for image defogging in Ref. [24]. Pan et al. [18] then enforced the sparsity of the dark channel by the L0 norm for kernel estimation. Unfortunately, this prior does not work well on images with large noise or large numbers of bright pixels. To address this problem, Yan et al. [19] proposed an extreme channel prior (ECP), which utilizes both the dark channel and the bright channel for estimating the blur kernel.
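As a concrete illustration of Equation (1), the following minimal sketch (not the authors' code; the image size, kernel and noise level are arbitrary choices) synthesizes a blurry observation b from a sharp image l:

```python
import numpy as np

def convolve2d(l, k):
    """Plain 'same'-size 2D convolution with edge padding: computes l * k."""
    kh, kw = k.shape
    rh, rw = kh // 2, kw // 2
    padded = np.pad(l, ((rh, rh), (rw, rw)), mode='edge')
    out = np.zeros_like(l, dtype=float)
    for i in range(l.shape[0]):
        for j in range(l.shape[1]):
            # correlation with the flipped kernel equals convolution
            out[i, j] = (padded[i:i + kh, j:j + kw] * k[::-1, ::-1]).sum()
    return out

rng = np.random.default_rng(0)
l = rng.random((16, 16))                 # latent sharp image
k = np.ones((3, 3)) / 9.0                # normalized blur kernel, sum(k) = 1
n = 0.01 * rng.standard_normal(l.shape)  # additive noise
b = convolve2d(l, k) + n                 # blurry observation, Equation (1)
```

Note that averaging by a normalized kernel smooths the image, which is why the blurry observation has lower contrast than the latent sharp image.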
In this paper, a novel sparse channel prior is proposed for blind image deblurring. Inspired by [18,19,24], we take advantage of the DCP and BCP to construct a confrontation constraint D/B. We prove its properties from a mathematical perspective and explore how they can be used to estimate the blur kernel. Optimizing the proposed prior is challenging; we use the idea of auxiliary variables and alternating minimization to decompose the problem into independent subproblems solved by the alternating direction minimization (ADM) method. The main contributions of this work can be stated as follows:
• A new D/B prior is presented for kernel estimation, which fully explores the relationship between the DCP and BCP. We also verify the effectiveness of D/B.
• We develop an effective optimization strategy for kernel estimation based on the idea of auxiliary variables and the ADM method.
• Experiments on four databases show that the proposed method is competitive with state-of-the-art blind deblurring algorithms.
The rest of this paper is organized as follows. Section 2 introduces the related work. The proposed D/B is detailed in Section 3. Our blind deblurring model and optimization strategy are presented in Section 4. Section 5 shows the experimental results. Further discussion of our proposed deblurring algorithm is given in Section 6. Section 7 summarizes this paper.

Related Work
Blind image deblurring algorithms have made great progress due to the use of the proper kernel estimation model. In this part, we introduce the methods related to our work in an appropriate context.
The success of many blind image deblurring algorithms is based on the statistical characteristics of image intensity and gradient. Krishnan et al. [14] presented the L1/L2 norm based on the sparsity of image intensity; the L1/L2 norm is a normalized version of L1, which enhances the sparsity of L1. Levin et al. [1] observed the heavy-tailed distribution of image intensities and introduced a maximum a posteriori (MAP) framework. Shan et al. [25] introduced a probability model to fit the sparse gradient distribution of a natural image. Pan et al. [16] developed a method in which both intensity and gradient are regularized by the L0 norm for text image deblurring. These methods are limited in modeling more complex image structures and contexts.
Another group of blind image deblurring methods [22,23] employs a significant-edge detection step for kernel estimation. Specifically, Cho et al. [21] predicted sharp edges with bilateral and shock filters. Joshi et al. [26] detected image contours by locating subpixel extrema. These methods cannot capture sparse kernels and structures, which sometimes makes the restored image blurry and noisy. To address these problems, researchers have proposed better models to estimate the blur kernel. Xu et al. [27] presented a two-phase kernel estimation algorithm that separates kernel initialization from the iterative support detection (ISD)-based refinement step, giving an efficient estimation process while maintaining many small structures. Zoran and Weiss [28] proposed the expected patch log likelihood (EPLL) method, which imposes a prior on the patches of the final image but requires iteratively restoring the degraded image. Vardan et al. [29] exploited a multiscale prior to further improve the EPLL and reduce the error toward that of global modeling. Bai et al. [7] developed a multiscale latent structures (MSLS) prior; based on it, their deblurring algorithm consists of two stages: sharp image estimation at the coarse scales and a refinement process at the finest scale. For patch-based methods, global modeling remains a difficult problem.
With the rapid development of deep learning, remarkable results have been achieved in the field of blind image deblurring [30-34]. For example, convolutional neural networks (CNN) [35], Wasserstein generative adversarial networks (GAN) [36], deep hierarchical multipatch networks (DMPHN) [37], ConvLSTM [38] and scale-recurrent networks (SRN) [39] have all been designed for image deblurring. Zheng et al. [40] presented an edge-heuristic multiscale GAN, which utilizes edge information to conduct the deblurring process in a coarse-to-fine manner for nonuniform blur. Liang et al. [41] learned novel neural network structures from RAW images and achieved superb performance. Chang et al. [42] proposed a long-short-exposure fusion network (LSFNet) for low-light image restoration using pairs of long- and short-exposure images. The success of deep-learning-based methods mainly relies on the consistency between training and test data, which limits their generalization ability.
Recently, the classical dark channel prior (DCP) has proved effective for image deblurring. The DCP was introduced by He et al. [24] for image defogging. It is based on the observation that, in outdoor haze-free nonsky image patches, at least one color channel has very low, close-to-zero pixel values. Pan et al. [18] further found that most elements of the dark channel are zero for natural images and then enforced the sparsity of the dark channel for image deblurring. Inspired by the DCP, the bright channel prior (BCP) was proposed: in most natural image patches, at least one color channel has very high pixel values. Yan et al. [19] used the simple addition of the DCP and BCP to form an extreme channel prior (ECP) for blind image deblurring. However, the relationship between the BCP and DCP is not fully explored in the ECP.

Proposed Sparse Channel Prior
To explain how the proposed sparse channel varies after blurring, we model the blurring process as described in [43]. For an image I, assume the noise is small enough to be neglected. We then have

b(x) = Σ_{z∈Ψ(x)} k(z) l(x + [m/2] − z),  (2)

where x and m denote the coordinates of the pixel and the size of the blur kernel k, respectively, Ψ(x) represents an image patch centered at x, Σ_{z∈Ψ(x)} k(z) = 1, k(z) ≥ 0 and [·] is a rounding operator.

Inspired by the two channels (dark and bright) and the statistics of images, we observe that the more the dark channel of an image patch differs from its bright channel, the more salient the edges, which helps to estimate an accurate blur kernel. To formally describe this observation, the proposed sparse channel prior is defined by

R(x) = D(x) / (B(x) + ε),  (3)

where x and y denote the coordinates of the pixel, ε is a non-negative constant that avoids division by zero and Ψ(x) represents an image patch centered at x. I^c is the c-th color channel of image I. As described in Equation (3), B(x) = max_{y∈Ψ(x)} max_{c∈(r,g,b)} I^c(y) represents the BCP and D(x) = min_{y∈Ψ(x)} min_{c∈(r,g,b)} I^c(y) represents the DCP. The dark channel is obtained by two minimization operations, min_{c∈(r,g,b)} and min_{y∈Ψ(x)}; the bright channel is obtained by two maximization operations, max_{c∈(r,g,b)} and max_{y∈Ψ(x)}. In the implementations of the DCP and BCP, if I is a gray image, only the spatial operation is performed. A small value of R(x) implies that there are salient edges in the image patch; on the contrary, a large R(x) implies that there are fine structures. The reason is that when an edge is salient, the pixel values differ strongly between its two sides, so the minimum value of the patch is far from the maximum. Conversely, when the difference between the DCP and BCP is small, the image edge is unclear and the value of R(x) is large. Therefore, where the DCP is equal to or only slightly smaller than the BCP, the corresponding small edges can be accurately removed by minimizing Equation (3).
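The dark, bright and proposed sparse channels can be sketched as follows; this is an illustrative implementation, where the patch size and the constant `eps` (standing in for the non-negative constant in the definition of R(x)) are assumed values, not the authors' settings:

```python
import numpy as np

def dark_channel(img, patch=3):
    """D(x): minimum over the color channels, then over a patch around x."""
    pixel_min = img.min(axis=2)                  # min over c in (r, g, b)
    r = patch // 2
    padded = np.pad(pixel_min, r, mode='edge')
    H, W = pixel_min.shape
    return np.array([[padded[i:i + patch, j:j + patch].min()
                      for j in range(W)] for i in range(H)])

def bright_channel(img, patch=3):
    """B(x): maximum over the color channels, then over a patch around x."""
    pixel_max = img.max(axis=2)                  # max over c in (r, g, b)
    r = patch // 2
    padded = np.pad(pixel_max, r, mode='edge')
    H, W = pixel_max.shape
    return np.array([[padded[i:i + patch, j:j + patch].max()
                      for j in range(W)] for i in range(H)])

def sparse_channel(img, patch=3, eps=1e-3):
    """R(x) = D(x) / (B(x) + eps); eps avoids division by zero."""
    return dark_channel(img, patch) / (bright_channel(img, patch) + eps)
```

On a patch containing a pure-black pixel, D(x) = 0 and hence R(x) = 0, which is exactly the sparsity the prior exploits.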
Consider a natural image that is blurred by a blur kernel. Blurring reduces the maximum pixel value and increases the minimum pixel value of a patch; in other words, the DCP of the patch increases and its BCP decreases. Let R(b) and R(l) denote the proposed sparse channel of the blurred and clear image, respectively. Since each blurred pixel is a convex combination of sharp pixels (k(z) ≥ 0 and Σ k(z) = 1), we have D(b)(x) ≥ D(l)(x) and B(b)(x) ≤ B(l)(x), where the channels of the blurred image are computed over the enlarged patch Ψ̄(x) from which the blurred values are drawn; if S_Ψ and S_Ψ̄ denote the sizes of Ψ(x) and Ψ̄(x), respectively, and m is the size of the kernel, then S_Ψ̄ = S_Ψ + m. Applying this proposition to the definition of the proposed sparse channel gives

R(b)(x) ≥ R(l)(x).  (4)

Equation (4) shows that R(x) of the image patch centered at x after blurring is no less than the value of the original image patch centered at x.
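The proposition rests on the fact that convolution with a normalized, non-negative kernel forms a convex combination of pixel values, so it can neither exceed the patch maximum nor undercut the patch minimum. A quick numerical check (illustrative only; patch and kernel sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
patch = rng.random((9, 9))        # a sharp patch
k = np.ones((3, 3)) / 9.0         # normalized kernel: k >= 0, sum(k) = 1

# 'valid' convolution: each output pixel is a convex combination of inputs
out = np.array([[(patch[i:i + 3, j:j + 3] * k).sum()
                 for j in range(7)] for i in range(7)])

# blurring can only raise the minimum and lower the maximum of the patch
assert out.min() >= patch.min() - 1e-12   # the dark channel increases
assert out.max() <= patch.max() + 1e-12   # the bright channel decreases
```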
This means that after blurring, the difference between the DCP and the BCP is smaller than that of the corresponding patch in a sharp image. In other words, R(x) always favors the sharp image. We further validate our analysis on the dataset [44]. Figure 1a-c show the histograms of the average number of dark channel pixels, bright channel pixels and D/B channel pixels, respectively. As can be observed, a large portion of the pixels in the dark and bright channels possess very small or very large values, and our D/B channel pixels possess smaller values than those of the DCP and BCP. As shown in Figure 1, the proposed sparse channels of clear images have significantly more zero elements than those of blurred images. Thus, the sparsity of the proposed channel is a natural metric to distinguish clear images from blurred ones. This observation motivates us to introduce a new regularization term that enforces sparsity of the proposed channel in latent images.

Proposed Sparse Channel as an Image Prior
Equation (4) shows that after blurring, the difference between the DCP and BCP is smaller than that of the corresponding patch in a sharp image. Therefore, in order to generate sharp and reliable salient edges, we propose a novel sparse channel prior that combines the D/B ratio with the L0 norm:

P(l) = ||D(l)/B(l)||_0.  (5)

We define P(·) as the D/B prior, and the L0 norm is used to enforce sparsity. Let Ψ(x) denote one patch of the image I. If there exist some pixels x ∈ Ψ(x) such that I(x) = 0, we have

P(b) ≥ P(l),  (6)

where P(b) and P(l) denote the D/B prior of the blurred and clear image, respectively. This property directly follows from Equation (4): blurring can turn zero entries of the sparse channel into nonzero ones, but not the reverse. In the framework of MAP, by minimizing the sparse prior P(·), we obtain a result that favors a sharp image. This property is also validated on the dataset [44]: as shown in Figure 1c, the average D/B channel of clear images has significantly more zero elements than that of blurred ones.

Proposed Blind Deblurring Model
Based on the proposed D/B prior, we construct the blind deblurring model under the maximum a posteriori (MAP) framework.
min_{l,k} ||l * k − b||_2^2 + µ P(l) + ϑ ||∇l||_0 + γ ||k||_2^2,  (7)

where P(l) is our proposed prior, ∇ denotes the gradient operator and µ, ϑ and γ are non-negative weights. The data-fitting term of our model ensures that the latent sharp image is consistent with the observed image. ||∇l||_0 is the L0 norm of the image gradient, which is used to suppress ringing and artifacts. Finally, the L2 norm on k regularizes the blur kernel and stabilizes its estimation.
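For a grayscale image, the objective of the model can be sketched as follows. This is an illustrative energy computation only: the circular convolution via FFT, the loop-based patch filters, the nonzero tolerance and the kernel handling are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def l0_norm(a, tol=1e-8):
    """Count of entries whose magnitude exceeds a small tolerance."""
    return int(np.count_nonzero(np.abs(a) > tol))

def patch_min(a, p=3):
    r = p // 2
    pad = np.pad(a, r, mode='edge')
    return np.array([[pad[i:i + p, j:j + p].min() for j in range(a.shape[1])]
                     for i in range(a.shape[0])])

def patch_max(a, p=3):
    r = p // 2
    pad = np.pad(a, r, mode='edge')
    return np.array([[pad[i:i + p, j:j + p].max() for j in range(a.shape[1])]
                     for i in range(a.shape[0])])

def energy(l, k, b, mu=0.003, theta=0.003, gamma=2.0, eps=1e-3):
    """Value of the blind deblurring objective for grayscale l (sketch)."""
    # data term: circular convolution computed in the Fourier domain;
    # the kernel is zero-padded to the image size
    K = np.fft.fft2(k, s=l.shape)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(l) * K))
    data = np.sum((blurred - b) ** 2)
    # gradient sparsity term (sum of absolute forward differences)
    grad = np.abs(np.roll(l, -1, 0) - l) + np.abs(np.roll(l, -1, 1) - l)
    # proposed D/B prior: L0 norm of the sparse channel of l
    prior = l0_norm(patch_min(l) / (patch_max(l) + eps))
    return data + mu * prior + theta * l0_norm(grad) + gamma * np.sum(k ** 2)
```

As a sanity check, the correct (delta) kernel yields a lower energy on a sharp image than a wrong (box) kernel, since only the data term differs substantially.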

Optimization
In this part, we adopt the ADM method to obtain the solution to the objective function. By using the idea of alternating optimization, we can obtain two independent subproblems for l and k, respectively:

min_l ||l * k − b||_2^2 + µ P(l) + ϑ ||∇l||_0  (8)

and

min_k ||l * k − b||_2^2 + γ ||k||_2^2.  (9)

Equation (9) is a classical least squares problem with respect to k. By introducing the auxiliary variable g, which is related to ∇l, Equation (8) can be written as follows:

min_{l,g} ||l * k − b||_2^2 + µ P(l) + λ ||∇l − g||_2^2 + ϑ ||g||_0.  (10)

Equation (10) can be decomposed into

min_l ||l * k − b||_2^2 + µ P(l) + λ ||∇l − g||_2^2  (11)

and

min_g λ ||∇l − g||_2^2 + ϑ ||g||_0.  (12)

Equation (12) is an L0-norm minimization problem for g.

Estimating Intermediate Image l
For the k-th iteration, we treat B(l) estimated in the (k − 1)-th iteration as a constant and denote it by

B̄ = B(l^{(k−1)}).  (13)

Equation (11) can then be rewritten as follows:

min_l ||l * k − b||_2^2 + µ ||D(l)/B̄||_0 + λ ||∇l − g||_2^2.  (14)

By introducing an auxiliary variable, p, which is related to D(l), Equation (14) can be reformulated as follows:

min_{l,p} ||l * k − b||_2^2 + λ ||∇l − g||_2^2 + β ||D(l) − p||_2^2 + µ ||p/B̄||_0.  (15)

Using the idea of alternating optimization, we can obtain two independent subproblems to solve for l and p, respectively:

min_l ||l * k − b||_2^2 + λ ||∇l − g||_2^2 + β ||D(l) − p||_2^2  (16)

and

min_p β ||D(l) − p||_2^2 + µ ||p/B̄||_0.  (17)

Equation (16) contains only quadratic terms, and we can obtain its solution by the least squares method. In each iteration, the Fast Fourier Transform (FFT) is used to accelerate the computation. Following [18], the nonlinear operator D(·) is approximated by a linear selection map computed from the current estimate, and the closed-form solution is given as follows:

l = F^{-1}( (F̄(k)F(b) + λ F_g + β F(p̃)) / (F̄(k)F(k) + λ(F̄(∇_v)F(∇_v) + F̄(∇_h)F(∇_h)) + β) ),  (18)

where p̃ places each element of p back at the pixel location selected by the dark channel operator, F_g = F̄(∇_v)F(g_v) + F̄(∇_h)F(g_h), F(·) and F^{-1}(·) are the FFT and its inverse, respectively, F̄(·) denotes the complex conjugate of the FFT and ∇_v and ∇_h are the gradients in the vertical and horizontal directions, respectively.

Estimating p and g
Equations (12) and (17) are L0-norm minimization problems. Due to the difficulty of solving such problems exactly, we adopt the element-wise hard-thresholding strategy described in Ref. [13]. Since B̄ > 0, ||p/B̄||_0 = ||p||_0, and the solution of Equation (17) can be expressed as

p = { D(l), if |D(l)|^2 ≥ µ/β; 0, otherwise. }  (19)

Given l, the solution of Equation (12) can be expressed as

g = { ∇l, if |∇l|^2 ≥ ϑ/λ; 0, otherwise. }  (20)
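The hard-thresholding solution of both L0 subproblems can be written as one small helper; `weight` and `lam` are stand-ins for the weight pairs (µ, β) and (ϑ, λ), respectively:

```python
import numpy as np

def l0_shrink(v, weight, lam):
    """Element-wise minimizer of lam * ||v - g||^2 + weight * ||g||_0:
    keep v where v**2 >= weight / lam, zero it elsewhere (hard threshold)."""
    g = v.copy()
    g[v ** 2 < weight / lam] = 0.0
    return g
```

For example, `l0_shrink(grad_l, theta, lam)` would realize the gradient update, and `l0_shrink(dark, mu, beta)` the dark channel update, under the assumed variable names.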

Estimating Blur Kernel k
Since updating the blur kernel is an independent subproblem, we estimate k in the gradient space. Specifically, given the known intermediate image l, we obtain the solution for the blur kernel by minimizing the following problem:

min_k ||∇l * k − ∇b||_2^2 + γ ||k||_2^2,  (21)

where ∇ denotes the gradient operator. Note that we use Equation (21) to estimate the blur kernel instead of Equation (9), which helps to suppress ringing artifacts and eliminate noise. The closed-form solution to Equation (21) is obtained by FFT.
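A closed-form FFT solution of this least squares problem can be sketched as follows, assuming circular boundary conditions and simple forward differences for the gradients; this is an illustrative sketch, not the authors' implementation, and the clipping and normalization reflect the common practice of keeping the kernel non-negative with unit sum:

```python
import numpy as np

def estimate_kernel(l, b, gamma=2.0):
    """Minimize ||grad(l) * k - grad(b)||^2 + gamma * ||k||^2 in closed form
    via the FFT, accumulating vertical and horizontal gradient terms."""
    num = np.zeros(l.shape, dtype=complex)
    den = np.zeros(l.shape, dtype=float)
    for axis in (0, 1):                           # vertical / horizontal
        Gl = np.fft.fft2(np.roll(l, -1, axis) - l)
        Gb = np.fft.fft2(np.roll(b, -1, axis) - b)
        num += np.conj(Gl) * Gb
        den += np.abs(Gl) ** 2
    k = np.real(np.fft.ifft2(num / (den + gamma)))
    k = np.maximum(k, 0)                          # keep kernel non-negative
    if k.sum() > 0:
        k /= k.sum()                              # normalize: sum(k) = 1
    return k

# sanity check: if b == l, the recovered kernel is (close to) a delta
rng = np.random.default_rng(2)
l = rng.random((32, 32))
k = estimate_kernel(l, l, gamma=1e-6)
```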
The coarse-to-fine strategy is used in the process of blur kernel estimation, which is similar to that used in [26,45]. In the process of solving the problem, it is very important to restrict the small values of the blur kernel by thresholding at fine scale, which enhances the robustness of the algorithm to noise.

Estimating Latent Sharp Image
Although the latent sharp image can be estimated from Equation (18), this formulation is less effective for fine-texture details. For the purpose of suppressing ringing and artifacts, we fine-tune the final restored image. With the estimated blur kernel and the blurry input image b, we use a nonblind deconvolution method to obtain the final latent sharp image l_latent. Algorithm 1 summarizes the main steps of the final latent sharp image restoration. First, we estimate the restored image l_h by the method in Ref. [46] using the hyper-Laplacian prior. Then we restore the image l_r by the method in Ref. [47] using the total variation prior. Finally, the latent sharp image l_latent is calculated as the average of the two restored images, i.e., l_latent = (l_h + l_r)/2. The main steps of our proposed algorithm are summarized in Algorithm 2.

Algorithm 1 Final latent sharp image restoration.
Input: Blurry image b and estimated kernel k.
1: Estimate latent image l_h using the method described in [46] with the hyper-Laplacian prior;
2: Estimate latent image l_r using the method described in [47] with the total variation prior;
3: Compute l_latent = (l_h + l_r)/2.
Output: Sharp latent image l_latent.

Algorithm 2 Proposed blind deblurring algorithm.
Input: Blurry image b.
1: Initialize the intermediate image l and blur kernel k from the blurry input;
2: for each scale, from coarse to fine, do
3:    Alternately update l and k;
4:    Interpolate the solution to the finer level as initialization;
5: end for
6: Calculate the latent sharp image according to Algorithm 1.
Output: Sharp latent image l_latent.
We first initialize the intermediate image l and blur kernel k according to the blurry input. Then we alternately update l and k. In order to avoid falling into a local minimum, our algorithm is executed in a coarse-to-fine manner. The results of the coarse layer are up-sampled with the bilinear interpolation method as the initialization of the next fine layer. Finally, a latent sharp image is obtained by Algorithm 1 with the estimated blur kernel.
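The coarse-to-fine procedure can be sketched as a small driver loop. Here `update_l` and `update_k` are hypothetical stand-ins for the l- and k-subproblem solvers, and a nearest-neighbor resize replaces the bilinear interpolation used in the paper for brevity:

```python
import numpy as np

def resize_nearest(a, shape):
    """Minimal nearest-neighbor resize (the paper uses bilinear)."""
    ih = (np.arange(shape[0]) * a.shape[0] / shape[0]).astype(int)
    iw = (np.arange(shape[1]) * a.shape[1] / shape[1]).astype(int)
    return a[np.ix_(ih, iw)]

def coarse_to_fine(b, update_l, update_k, levels=4, scale=0.5, iters=5):
    """Skeleton of the multiscale loop: solve at the coarsest level first,
    then upsample the result to initialize the next finer level."""
    shapes = [(max(1, int(b.shape[0] * scale ** i)),
               max(1, int(b.shape[1] * scale ** i))) for i in range(levels)]
    l, k = None, None
    for shape in reversed(shapes):                 # coarsest level first
        b_s = resize_nearest(b, shape)
        l = b_s if l is None else resize_nearest(l, shape)  # upsampled init
        for _ in range(iters):                     # alternate l / k updates
            k = update_k(l, b_s)
            l = update_l(k, b_s)
    return l, k
```

The alternation inside each level mirrors the l- and k-subproblems above, while the outer loop supplies increasingly fine initializations to avoid poor local minima.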

Results
We examine our method and compare it with state-of-the-art BID methods on different image datasets, including synthetic image datasets and real-world blurred images. We then evaluate the quality of the deblurring models with different metrics, including the peak signal-to-noise ratio (PSNR, in dB), a measure of image quality, and the cumulative error ratio (CER); the higher the CER curve, the better the model.
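PSNR follows directly from the mean squared error; a minimal helper (assuming images scaled to a known peak value):

```python
import numpy as np

def psnr(x, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((x - ref) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```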
In all the experiments, the parameter settings of our model are as follows: µ = ϑ = 0.003, γ = 2 and the size of the image patch used to compute the D/B channel is set to 35. The maximum number of iterations is empirically set to 5 as a trade-off between accuracy and speed.

Synthetic Image Deblurring
We first test our method on the synthetic image dataset [44] for quantitative evaluation. This dataset includes 4 ground truth images and 12 different kernels. We compare our results with the state-of-the-art methods [11,14,18,19,21,27,48]; our algorithm performs favorably against them on this benchmark dataset. Additionally, we present a challenging example in Figure 2. In Figure 3, we record the largest PSNR computed by comparing each restored result with the 199 ground truth images captured along the camera shake trajectory. Since the proposed method considers not only the BCP and DCP information but also the relationship between them, the PSNR values of the restored images achieved by our method are higher than those of the state-of-the-art algorithms [11,14,18,19,25,45,48-50].

Figure 2. A challenging example from the dataset [44]. The image (a) is the blurry input; (b-h) are the deblurring results of Refs. [21,27,14,48,18,19] and our proposed method, respectively.
We also test our algorithm against the competing methods [6,14,18,19,21,48,51,52] on another benchmark dataset [12], which includes four ground truth images and eight different kernels. One example is shown in Figure 4 with a visual comparison against the state-of-the-art methods [18,19]. Although the image restored by Pan et al. [18] performs well against other approaches, the generated image still contains significant fake textures and blurred regions (Figure 4b). The algorithm proposed by Yan et al. [19] considers both the DCP and BCP, but the generated result still has unclear edges (Figure 4c). In contrast, our method generates a sharp image with fine textures (Figure 4d), which is more visually pleasing than the others. The main reason is that the enhanced edges in local patches help to remove small textures and fine details. Figure 5a plots the cumulative error ratios of our method and the other competing methods. Note that our D/B-based method achieves a 100% success rate at error ratio 2, outperforming the state-of-the-art algorithms. All the experimental results consistently show that our method is competitive on this dataset.

We further carry out experiments against the state-of-the-art approaches [16,19] on text images from the dataset [16]. This dataset consists of 15 images and eight different kernels ranging in size from 13 × 13 to 27 × 27. Figure 6 visually shows that our method performs well on a challenging blurry image in comparison with [19] and the method designed for text images [16]. As shown in the figure, the DCP and ECP also help the blind deblurring of text images. Our deblurred result in Figure 6d, obtained with the proposed D/B, has sharper edges and clearer text than the other results [16,19]. Another text example is shown in Figure 7.
Note that the text becomes much sharper after the deblurring process, which demonstrates that the proposed L0 norm based on the D/B is helpful for kernel estimation and image deblurring. In particular, sharp text images contain many salient edges in local patches, which makes our D/B perform well. Table 1 presents the average PSNR values of the deblurred results on the text image dataset [16] compared with the state-of-the-art methods; our method achieves the highest PSNR value.
Figure 5. Quantitative results of our method on two benchmark datasets [12,22]: (a) error ratio comparison between our approach and the other methods on the benchmark dataset [12]; (b) quantitative evaluations on the benchmark dataset [22].
Figure 6. A comparison of our method with state-of-the-art methods. The images (a-d) are the blurry input, the result of Pan et al. [16], the result of Yan et al. [19] and our result, respectively.

Real Image Deblurring
In this part, we test our method on real-world blurred images against the recent state-of-the-art blind single-image deblurring methods [11,14,18,19,21,48]. We analyze the deblurring results qualitatively, as the blur kernels and ground truth images are unknown. Figure 8 shows one challenging real-world blurred image. The images recovered by the proposed algorithm are sharper and clearer than those generated by [11,14,18,19,21,48]. As shown in Figure 8, the blurry image contains both large and small edges and textures, which causes trouble for the methods designed for natural images. Pan et al. [18] exploited the dark channel and achieved encouraging results, but the deblurred image still contains visible blurry artifacts. In contrast, by further utilizing the edge information in local patches, our method generates sharper and clearer image details than the other methods, as shown in Figure 8. As a second example, we present deblurring results on a challenging image in Figure 9. Note that our deblurred image has a clearer background and sharper edges than the other results.

Figure 8. An example of real-world image results. The images are the blurry input, the results of Refs. [11,14,18,19,21,48] and our proposed method, respectively. Figure 9. An example of real-world image results. The images (a-e) are the blurry input, the result of Krishnan et al. [14], the result of Pan et al. [18], the result of Yan et al. [19] and our result, respectively.

The Effectiveness of Proposed Sparse Channel Prior
In this subsection, experiments are conducted to verify the performance of the proposed D/B for blind image deblurring. As mentioned above, the proposed D/B regularization term considers the contrast and salient-edge information in local patches. To demonstrate the effectiveness of the proposed prior, we compare the proposed method with the DCP-based method [18] and the ECP-based method [19]. Figure 10 shows the changes of the DCP, the BCP and the proposed sparse channel prior at each stage of the restoration. Initially, the contrast and clarity of the DCP, the BCP and the proposed sparse channel prior of the blurred images are very low, while the contrast at the intermediate stage is significantly improved and the final restored images have higher contrast and sharper contours; at this point, the ringing and artifacts in the images are greatly reduced. Note that at each stage, the proposed sparse channel prior has a clearer outline than the DCP and BCP. Compared with [18], the proposed method estimates the blur kernels better, with fewer artifacts. Figure 11 shows the quantitative evaluations on the benchmark dataset [12] for the ECP and our method with and without the proposed D/B. The PSNR (Figure 11a) of the proposed D/B-based method is higher than that of the ECP and of our method without D/B. Moreover, our method with the proposed D/B prior performs more favorably in terms of error ratio (Figure 11b) than without the D/B regularization, which further demonstrates the effectiveness of the proposed D/B-based method.

Figure 11. Quantitative results of our method on the benchmark dataset [12]: (a) quantitative evaluations on the benchmark dataset [12] by the ECP and our method with and without D/B; (b) error ratio comparison between our approach and the other methods.
In addition, our method has a higher success rate on the dataset [22], as shown in Figure 5b. All the results consistently demonstrate that the proposed sparse channel prior improves the deblurring performance.

Comparison with Other Related Methods
In this part, we discuss the methods most closely related to the algorithm in this paper. The dark channel prior was used by Pan et al. [18] for blind image deblurring; they enhanced the sparsity of the DCP and achieved good results on low-light images. Yan et al. [19] used the ECP to address the weakness of the DCP on sky images. However, the ECP is a simple addition of the DCP and BCP, and the relationship between them has not been deeply studied. Figure 10 shows the intermediate images of the three different methods (Refs. [18,19] and ours). Although the intermediate results become clearer and sharper as the iterations increase, the images generated by our method (Figure 10c) have sharper edges and clearer contents than those of Refs. [18] (Figure 10a) and [19] (Figure 10b). Figure 12 shows the results generated by these three methods on some challenging images, including real blurred and low-light images; our results have fewer blurred areas and ringing artifacts and look more pleasant. Table 2 shows the error ratios of the two related approaches [18,19] and ours on the dataset [22]; the proposed method fails on only one image, for which the error ratio value is larger than 4. To analyze the three methods in more detail, we show the maps of the DCP, the BCP and our D/B in Figure 13. Although the dark channel, the bright channel and our D/B map of the recovered image all improve with respect to those of the corresponding blurry image, our D/B map improves the most. Moreover, our D/B map is clearer (higher contrast and sharper edges) than the dark channel and bright channel in both the blurry and recovered images.

Table 2. Quality evaluation of the competing methods on the dataset [22] in terms of error ratio.

Convergence Analysis
Blind deconvolution is a highly ill-posed problem, and in this paper we introduce a new sparse prior to make it produce feasible results. The optimization scheme of our model is challenging, and since it relies on auxiliary variables and the alternating direction minimization (ADM) method, one may question its convergence. Thus, we show the traces of the objective function (computed from Equation (8)) and of the kernel similarity [53] on the dataset [12] with respect to the iterations in Figure 14. Figure 14a shows that our method converges within 30 iterations, and Figure 14b shows that the kernel similarity [53] increases with more iterations.

Running Time
We briefly characterize the computational complexity through the running time of the algorithms. We selected several competing algorithms closely related to this paper and ran them on the same database as ours. All experiments were carried out on the same computer. The running times on images of different sizes are summarized in Table 3. As can be seen from the table, our algorithm is faster than [19] and slower than [14].

Conclusions
In this paper, a novel, simple yet efficient image prior D/B for blind image deblurring is proposed, which builds on the DCP and BCP. An extensive investigation on natural images shows that the DCP behaves inversely to the BCP, and a large difference between the DCP and BCP preserves salient edges. For blind image deblurring, salient edges are helpful to estimate the blur kernel. In order to utilize the advantages of the DCP and BCP and further exploit the edge information in a local patch, we propose the D/B prior for image deblurring. The D/B prior preserves the main edges and eliminates the fine textures of intermediate latent images. Meanwhile, it retains the advantages of the DCP and BCP. The feasibility and effectiveness of using the D/B prior to estimate the blur kernel are discussed. The experimental results show that our algorithm is competitive with the state-of-the-art algorithms. In addition, experiments with our proposed prior show that it can significantly improve the performance of the deblurring algorithm.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author.

Conflicts of Interest:
The authors declare no conflict of interest.