Spectral Norm Regularization for Blind Image Deblurring

Abstract: Blind image deblurring is a well-known ill-posed inverse problem in the computer vision field. To make the problem well-posed, this paper puts forward a plain but effective regularization method, namely spectral norm regularization (SN), which can be regarded as a symmetrical form of the spectral norm. This work is inspired by the observation that the SN value of an image increases after the image is blurred. Based on this observation, a blind deblurring algorithm (BDA-SN) is designed. BDA-SN builds a deblurring estimator for the image degradation process by investigating the inherent properties of SN and the image gradient. Compared with previous image regularization methods, SN shows a stronger ability to differentiate clear and degraded images. Therefore, the SN of an image can effectively help image deblurring in various scenes, such as text, face, natural, and saturated images. Qualitative and quantitative experimental evaluations demonstrate that BDA-SN achieves favorable performance on real and simulated images; on the benchmark dataset of Levin et al., its average PSNR reaches 31.41.


Introduction
Blind deblurring, or blind deconvolution, has received considerable attention in the fields of image processing and computer vision. The most typical example is the motion blur caused by a mobile phone shaking when taking pictures. In addition, the movement of the target object, bad weather, poor focus, insufficient light, etc., are all causes of image degradation. The blur kernel is assumed to be space-invariant, so the blurred image g(x, y) is expressed as the convolution of the kernel h(x, y) and the clear image o(x, y). The kernel is also referred to as the point spread function (PSF) [1], which describes the degradation. The blurring process can be modeled as follows [2]: g(x, y) = o(x, y) * h(x, y) + n(x, y), where "*" stands for the convolution operator; o(x, y) and g(x, y) represent the clear image and its blurred version, respectively; h(x, y) denotes the kernel representing the degradation induced in the spatial domain; and n(x, y) stands for the inevitable noise.
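As a quick illustration of the degradation model above, the following sketch convolves a toy image with a box kernel and adds Gaussian noise. The kernel, image, and noise level are illustrative stand-ins, not the PSFs used in this paper.

```python
import numpy as np
from scipy.signal import convolve2d

def degrade(o, h, noise_sigma=0.01, seed=0):
    """Simulate g = o * h + n (space-invariant blur plus additive noise)."""
    rng = np.random.default_rng(seed)
    g = convolve2d(o, h, mode="same", boundary="symm")
    return g + noise_sigma * rng.standard_normal(o.shape)

h = np.ones((9, 9)) / 81.0                     # box kernel as a stand-in PSF
o = np.zeros((64, 64)); o[20:44, 20:44] = 1.0  # toy "clear" image
g = degrade(o, h)                              # blurred, noisy observation
```

Recovering both o and h from g alone is exactly the blind problem discussed next.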
In blind deblurring, only the blurred version g(x, y) is known; thus, the kernel h(x, y) and the clear image o(x, y) must be calculated simultaneously from the observed blurred image. Obviously, this problem is highly ill-posed: in theory, infinitely many solution pairs o(x, y) and h(x, y) correspond to the same g(x, y), the pair of the delta kernel and the blurred image itself being the most typical trivial solution. To alleviate this inherently ill-posed problem, image priors and appropriate regularization are employed [3], and various statistical priors are incorporated into the associated variational model. These priors mainly include image gradient sparse priors [4][5][6], L0 regularized priors [7][8][9], low-rank priors [10,11], dark channel priors [9], and deep discrimination priors [12]. The core contributions of this paper are as follows: (1) This paper proposes a prior, named spectral norm regularization (SN). Different from existing image gradient priors, SN is a prior on the image domain. The SN value becomes larger when the image becomes blurred; as a result, SN can easily distinguish between degraded and clear images. (2) This paper proposes a novel algorithm, named BDA-SN, to utilize this property of SN.
BDA-SN can use not only the information from the image gradient domain but also the information from the image domain; therefore, it can better deal with blind deblurring. (3) Extensive experiments demonstrate that BDA-SN achieves good performance on real and simulated images. Qualitative and quantitative evaluations indicate that BDA-SN is superior to other state-of-the-art methods.

Related Work
In the past ten years, deblurring algorithms for single images have made great progress. There are two main lines of methods: one relies on statistical priors of natural images, and the other on deep learning.
Scholars have developed various statistical priors on image distributions in order to efficiently calculate the kernel. After investigating variational Bayesian inference, Fergus et al. [4] introduced a mixture of Gaussian models to fit the gradient distribution. To better fit the heavy-tailed gradient distribution, a piecewise function was adopted by Shan et al. [6]. Levin et al. [5] found that the maximum a posteriori (MAP) method often produces trivial solutions and introduced an effective marginalization strategy. Krishnan et al. [19] exploited the L1/L2 function to restrict the sparsity of the gradient. Xu et al. [7] found a sparser prior, namely the generalized L0 regularization prior, which not only improves the restoration quality but also speeds up algorithm efficiency. For text images, Pan et al. [9] investigated the sparsity of image pixel intensity. Jin et al. [20] designed a blind deblurring strategy with high accuracy and robustness to noise. Bai et al. [21] exploited the re-weighted graph total variation (RGTV) prior, which derives the blur kernel efficiently. L0 regularization is widely used in image restoration and has achieved excellent results; Li et al. [12] utilized L0 regularization to constrain the blur kernel. In this paper, the L0 regularization prior is also adopted in the proposed blind deblurring model.
In the MAP framework, the estimation of the kernel benefits from sharp edges. Therefore, algorithms that use explicit edge extraction [22,23] have received widespread attention. Using a gradient threshold to retrieve strong edges is currently the main edge extraction method. However, explicit edge extraction has an obvious defect: some images have no obvious edges to retrieve [24]. This method not only leads to over-sharpening of the image but also amplifies noise.
The gradient prior and the intensity prior are mainly applied to a single pixel or adjacent pixels, ignoring relationships over a larger range. In order to better reflect relationships within the image, many patch-based algorithms have been exploited. Inspired by the statistical priors of natural images, Sun et al. [25] adopted two priors based on patch edges. Ren et al. [10] developed a blind deblurring method combining the self-similarity of image patches with a low-rank prior. By combining a low-rank constraint and salient edge selection, Dong et al. [11] developed an algorithm that can protect edges while removing blur. Hsieh et al. [26] proposed a strongly imposed zero patch-minimum constraint for blind image deblurring. These patch-based methods require a patch search, so more running time is required. Tang et al. [15] used sparse representation with external patch priors for image deblurring. Pan et al. [9] analyzed the change in the dark channel after an image is blurred and introduced a blind deblurring algorithm via a dark channel prior, which achieves good performance in different scenes. Yan et al. [13] combined a bright channel with a dark channel and utilized the extreme channel for image restoration. Although Pan et al. [9] and Yan et al. [13] achieved good results, their methods encounter certain limitations: when an image has no obvious dark or bright pixels, the blur kernel cannot be effectively estimated. Inspired by the dark channel prior, Wen and Ying [27] proposed sparse regularization using the local minimum pixel, which improves the speed of the algorithm. At the same time, Chen et al. [16] proposed the local maximum gradient prior (LMG) for blind deblurring, which reaches satisfactory performance in a variety of scenes. Xu et al. [24] simplified LMG and derived the patch maximum gradient prior (PMG), which lowers the cost of calculation.
It is difficult for algorithms based on generic image priors to restore images of specific scenes [28]. Therefore, some algorithms for special scenes have been exploited, such as for text [9], saturated [29], and face images [29]. However, these specific algorithms often lack generalization and restore other special scene images poorly. Table 1 summarizes the strengths and weaknesses of BDA-SN and previous methods.

Table 1. Strengths and weaknesses of BDA-SN and previous methods.

Krishnan et al. [19]
Strengths: Uses L1/L2 regularization to constrain the sparsity of the image gradient; the algorithm is efficient.
Weaknesses: L1/L2 is non-convex; the restored image has strong artifacts.

Xu et al. [7]
Strengths: Uses the generalized L0 regularization prior, which improves restoration quality and speeds up the algorithm.
Weaknesses: L0 is non-convex; the deblurring effect is poor.

Pan et al. [9]
Strengths: Uses the dark channel, which can easily distinguish between clear and degraded images.
Weaknesses: Performs poorly on images without obvious dark pixels.

Yan et al. [13]
Strengths: Combines both dark channel and bright channel information; no complicated processing techniques or edge selection steps are required.
Weaknesses: Performs poorly on images without obvious dark or bright pixels.

Jin et al. [20]
Strengths: Uses a normalization constraint on ‖k‖ and ‖∇x‖ to fix the scale ambiguity, yielding a blind deblurring strategy with high accuracy and robustness to noise.
Weaknesses: High computational cost.

Bai et al. [21]
Strengths: Uses the re-weighted graph total variation (RGTV) prior, which derives the blur kernel efficiently.
Weaknesses: Leads to a non-convex and non-differentiable optimization problem that requires additional strategies.

Wen et al. [27]
Strengths: Uses the patch-wise minimal pixels (PMP) prior, which is very effective in discriminating between clear and blurred images; the algorithm is efficient.
Weaknesses: Performs poorly on images with large pixel values.

BDA-SN
Strengths: Uses the prior SN on the image domain, which has a strong ability to distinguish clear and blurred images.
Weaknesses: High computational cost.
In recent years, deep neural networks have developed rapidly, and data-driven methods have made great progress. Nah and Hyun [30] adopted a convolutional neural network (CNN) with multiple scales, which does not need any assumptions about the kernel and recovers images in an end-to-end manner. Su et al. [31] used a trained CNN to deblur video. Kupyn et al. [32] exploited an end-to-end learning approach that utilizes conditional generative adversarial networks (GAN) to remove motion blur. Zhao et al. [33] developed an improved deep multi-patch hierarchical network with a powerful and complex representation for dynamic scene deblurring. Almansour et al. [34] investigated the impact of a super-resolution reconstruction technique using deep learning on abdominal magnetic resonance imaging. Li et al. [35] developed a single-image high-fidelity blind deblurring method that embeds a CNN prior into MAP. Although these data-driven methods reach excellent results, their effectiveness depends heavily on the similarity between the test dataset and the training dataset. Therefore, the generalization of data-driven strategies is poor, and their computational cost is huge.
This section has reviewed the progress of image restoration over the last decade. The rest of this work is organized as follows. Section 3 introduces the blind deblurring algorithm using spectral norm regularization (BDA-SN) in detail. Section 4 presents experimental results for performance evaluation, compared with the latest methods. Section 5 provides an analysis and discussion of the effectiveness of BDA-SN. Section 6 summarizes this paper.

Spectral Normalization
This section first describes spectral norm regularization (SN) and then its advantage in blind image deblurring. The spectral norm of a matrix A is defined by

‖A‖₂ = σ₁(A) = √(λmax(A^H A)),

where λmax is the maximum eigenvalue of A^H A, σ₁ is the maximum singular value of A, and A^H is the conjugate transpose of A. For an image o(x, y), the spectral norm regularization (SN) is defined by

SN(o) = ‖o‖₁ / σ(o),

where σ(o) denotes the spectral norm of o. SN is based on the observation that the SN value of an image becomes larger after the blurring process. To better illustrate this property, an example of different regularization losses is shown in Figure 1, which involves degradation caused by atmospheric turbulence. The blur kernels are simulated by a random phase screen [36].
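These definitions can be checked numerically. The sketch below computes the spectral norm via the SVD and the SN value as the ratio of the L1 norm to the spectral norm (the form used later in the optimization model); the test matrix is illustrative.

```python
import numpy as np

def spectral_norm(a):
    """sigma_1(A): the largest singular value, equal to sqrt(lambda_max(A^H A))."""
    return np.linalg.svd(a, compute_uv=False)[0]

def sn(o):
    """Spectral norm regularization of an image: ||o||_1 / sigma(o)."""
    return np.abs(o).sum() / spectral_norm(o)

a = np.array([[3.0, 0.0], [4.0, 5.0]])
# The induced 2-norm agrees with both characterizations above.
assert np.isclose(spectral_norm(a), np.linalg.norm(a, 2))
assert np.isclose(spectral_norm(a) ** 2, np.linalg.eigvalsh(a.T @ a).max())
```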
As shown in Figure 1, the L1 and L2 regularization losses decrease as the degree of blur increases; that is, L1 and L2 regularization favor blurred images, so they are not proper regularizers for deblurring [5]. Krishnan et al. [19] proposed L1/L2 regularization, which favors clear images over blurred images. Inspired by L1/L2 regularization, this paper adopts SN, which shows a stronger ability to differentiate clear and degraded images. Next, a detailed comparison between SN and other regularizations is given.

Comparison with Other Regularizations
Different from gradient domain regularizations, this paper presents an image domain regularization: SN describes the image domain rather than the gradient domain. BDA-SN combines gradient domain and image domain regularization and is an enhanced sparse method. Therefore, BDA-SN can better distinguish between clear images and blurred images.
Comparison with L2 regularization: L2 regularization is a well-known regularization in blind image deblurring. It makes the model better satisfy Lipschitz continuity, thus reducing the sensitivity of the model to input perturbations and enhancing its generalization performance. L2 regularization can therefore be considered to reduce the sum of squared singular values [37]. Although a model using L2 regularization is insensitive to perturbations, L2 regularization loses important information about the image, because under this constraint the image acts as an operator contracting in all directions. In contrast, spectral norm regularization focuses only on the first singular value, so the image matrix does not significantly shrink in the directions orthogonal to the first right singular vector. SN can thus retain more information of the image itself; in other words, BDA-SN can achieve greater complexity and can better describe image information.
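The contrast drawn here is easy to verify: the squared Frobenius (L2) norm of a matrix equals the sum of its squared singular values, while the spectral norm is the first singular value alone. A minimal numerical check on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((6, 8))
s = np.linalg.svd(a, compute_uv=False)   # singular values, descending

# L2 (Frobenius) regularization penalizes every singular value ...
assert np.isclose(np.linalg.norm(a, "fro") ** 2, (s ** 2).sum())
# ... whereas the spectral norm depends only on the first one.
assert np.isclose(np.linalg.norm(a, 2), s[0])
```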
Comparison with L1/L2 regularization: SN is similar to L1/L2 regularization in form, but they are two utterly different regularization methods. As mentioned above, SN is applied in the image domain, while L1/L2 regularization is applied in the gradient domain; moreover, BDA-SN uses the spectral norm instead of the L2 norm.
Comparison with spectral normalization: Here, we emphasize the difference between spectral norm regularization and spectral normalization. Spectral norm regularization, like L1 regularization and L2 regularization, adds an explicit regularization term to the loss function to penalize the spectral norm. Spectral normalization, in contrast, can be regarded as a normalized version of spectral norm regularization: it attempts to keep the spectral norm within a specified interval by constraining the spectral norm of the image after each iteration [38]. Therefore, BDA-SN can deal well with images in a variety of different scenarios, and the use of spectral norm regularization makes BDA-SN more robust.

BDA-SN
Based on the property that SN can easily differentiate degraded and clear images, a novel deblurring algorithm, BDA-SN, is devised by adopting SN. The least-squares algorithm is almost insensitive to whether the noise is Gaussian or Poissonian [39]. For Poissonian noise, there is no significant difference between the effects of RLA and ISRA, while for Gaussian noise, ISRA achieves better results than RLA [40]. Here, due to the robustness of the Gaussian noise assumption, the likelihood probability function [41] can be modeled as

p(g | o, h) ∝ exp(−‖g − o * h‖² / (2σ²)),

where σ² denotes the variance of the noise, g(x, y) represents the degraded image, o(x, y) denotes the clear image, and h(x, y) represents the kernel. The corresponding negative log-likelihood multiplied by σ² is

J(o, h) = (1/2)‖g − o * h‖² + C,

where C denotes a constant and J(o, h) represents the loss function. Obviously, the problem is heavily ill-posed because numerous different solution pairs (o, h) give rise to the same g(x, y) [9]. To make blind deblurring well-posed, this paper adopts sparsity priors to restrain the image and blur kernel [7]. This paper adopts ‖h‖₁ instead of the ‖h‖₂ employed in [7], which forces the blur kernel to be sparse [6,42], and uses L0 regularization [9] and SN to constrain the image:

J(o, h) = (1/2)‖g − o * h‖² + α‖∇o‖₀ + ε‖o‖₁/σ(o) + γ‖h‖₁,
where α, γ, and ε denote penalty parameters and "∇" represents the gradient operator. This paper uses a numerical function from [43] to approximate the L0 norm, i.e., ‖∇o‖₀ ≈ Σ (∇o)² / ((∇o)² + β), summed over pixels, where β = 0.001 is a modulation parameter, and ‖o‖₁/σ(o) is the spectral norm regularization. In the MAP framework, blind deblurring amounts to minimizing this energy function, as reported in References [44,45]. The partial derivatives of J(o, h) with respect to o(x, y) and h(x, y) are given in Equations (11) and (12), where the function f_c() is the adjoint function of f() and the gradient of α‖∇o‖₀ is −α∇·(2β∇o / ((∇o)² + β)²) [25]. The new regularization term is non-convex; however, if the denominator of the regularizer is fixed to its value from the previous iteration, the problem becomes a convex L1 regularization problem [19]. Forcing Equations (11) and (12) to zero yields the maximum of the log-likelihood. Both sides of Equations (13) and (14) are then multiplied by a positive real number λ, a parameter that adjusts the convergence speed of the algorithm: when λ is large, the algorithm converges rapidly. Finally, the sigmoid function is applied to the equations, as in [25], to keep the image non-negative during iteration.
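The numerical approximation of the L0 norm from [43] behaves as an elementwise counter of nonzero entries as β shrinks; each element contributes x²/(x² + β), which tends to 1 for nonzero x and is 0 for zero x. A small sketch:

```python
import numpy as np

def l0_surrogate(x, beta=0.001):
    """Smooth surrogate for ||x||_0: each element contributes x^2 / (x^2 + beta)."""
    return (x ** 2 / (x ** 2 + beta)).sum()

x = np.array([0.0, 0.0, 0.5, -2.0, 3.0])
# With a tiny beta the surrogate approaches the true count of nonzeros (3).
assert abs(l0_surrogate(x, beta=1e-6) - np.count_nonzero(x)) < 1e-3
```

Unlike the true L0 count, this surrogate is differentiable, which is what makes the gradient expression above well defined.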
Multiplying Equations (15) and (16) by o(x, y) and h(x, y), respectively, the blind deblurring estimators can be written as Equations (17) and (18). Due to insufficient prior information, this paper initializes o(x, y) and h(x, y) as matrices of all ones. To protect the edges of the image while removing noise during deblurring, Equations (17) and (18) are rewritten as Equations (19) and (20), where h_GaussianLP(x, y) denotes the Gaussian low-pass filter, h_SobelV(x, y) denotes the Sobel vertical edge detector impulse response, and h_SobelH(x, y) denotes the Sobel horizontal edge detector impulse response [25]. µ ∈ [0.15, 0.35] is the edge protection parameter: this paper chooses a larger value when the image contains more details and a smaller value when it contains fewer details. The parameter λ ∈ [600, 1200] adjusts the speed of convergence; provided convergence is ensured, a larger value of λ can be selected to speed it up. In this paper, the size of the Gaussian low-pass filter [25] is 5 × 5. For the sake of simplicity, "(x, y)" is dropped in Equations (19) and (20).
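The filters named here can be written down directly; a minimal sketch follows. The Gaussian width sigma is an assumed value, since the paper fixes only the 5 × 5 support.

```python
import numpy as np

def gaussian_lp(size=5, sigma=1.0):
    """h_GaussianLP: a size x size Gaussian low-pass filter, normalized to sum to 1.
    (sigma is an assumption; the paper specifies only the 5 x 5 support.)"""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return h / h.sum()

# Sobel impulse responses for horizontal/vertical edge detection.
sobel_h = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
sobel_v = sobel_h.T
```

Note that the Gaussian kernel sums to 1 (it smooths without changing overall intensity), while the Sobel kernels sum to 0 (they respond only to edges), which is what makes the combination suitable for edge-preserving noise removal.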
Obviously, o_k(x, y) and h_k(x, y) are estimated by iterating Equations (21) and (22), which yields the maximum of Equation (5) and the best estimate of the original image. Algorithms 1 and 2 show the main steps of the BDA-SN proposed in this paper.

Experimental Results
First, BDA-SN is evaluated on two natural image datasets [5,46] and compared with several other SotA algorithms: those of Krishnan et al. [19], Xu et al. [7], Pan et al. [9], Yan et al. [13], Jin et al. [20], Bai et al. [21], and Wen et al. [27]. Second, BDA-SN is evaluated on domain-specific images, such as face images [29], saturated images [29], text images [8], and natural images [9], and compared with methods specially designed for these specific scenarios. Finally, BDA-SN is tested on nonuniformly blurred images. This paper sets α = 0.04, γ = 2, µ = 0.25, ε = 0.004, λ1 = 800, and λ2 = 1000. The number of iterations was set to Jmax = 5 to balance speed and precision. The complexity of the algorithm is O(n log n). The experiments were carried out in MATLAB R2014a on a Windows 10 desktop computer with an Intel Core i5-7200U CPU at 2.7 GHz and 12 GB RAM.

Performance Evaluation
The peak signal-to-noise ratio (PSNR) [47], cumulative error ratio (ER) [5], and structural similarity (SSIM) [48] were used to evaluate the performance of the algorithms.
The peak signal-to-noise ratio (PSNR) is defined by

PSNR = 10 log₁₀( MAX_o² / MSE(o, ô) ),

where MSE(o, ô) is the mean squared error between o and ô, o represents the latent image, ô represents the restored image, and MAX_o denotes the maximum value of the image o. Structural similarity (SSIM) is used to evaluate the similarity between the restored image and the ground-truth image. SSIM is defined by

SSIM(o, ô) = (2µ_o µ_ô + c₁)(2σ_oô + c₂) / ((µ_o² + µ_ô² + c₁)(σ_o² + σ_ô² + c₂)),

where µ_o and µ_ô are the means of o and ô, respectively; σ_o² and σ_ô² represent the variances of o and ô, respectively; σ_oô is the covariance; and c₁ and c₂ are small constants that stabilize the division.
The cumulative error ratio (ER) is used to evaluate the difference between the restored image and the ground-truth sharp image; a lower ER indicates that the estimated image is closer to the ground truth. ER is defined by

ER = ‖L − L_t‖² / ‖L_k − L_t‖²,

where L, L_t, and L_k denote the restored latent image, the ground-truth sharp image, and the image deconvolved with the ground-truth kernel, respectively.
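The metrics can be implemented in a few lines. This sketch follows the standard PSNR definition and the error-ratio form above; function names are illustrative.

```python
import numpy as np

def psnr(o, o_hat, max_val=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((o - o_hat) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def error_ratio(l, l_t, l_k):
    """ER: SSD of the restored image vs. SSD of the image deconvolved with the
    ground-truth kernel, both measured against the sharp image l_t."""
    return np.sum((l - l_t) ** 2) / np.sum((l_k - l_t) ** 2)

o = np.zeros((8, 8))
noisy = o + 0.1          # MSE = 0.01, so PSNR = 10*log10(1/0.01) = 20 dB
assert np.isclose(psnr(o, noisy), 20.0)
```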

Dataset of Levin et al.
This experiment was conducted on the dataset of Levin et al. [5], containing 32 blurred images generated from four clear images and eight blur kernels, with kernel sizes ranging from 13 × 13 to 27 × 27. The other state-of-the-art methods involved in the comparison are those of Krishnan et al. [19], Xu et al. [7], Pan et al. [9], Yan et al. [13], Jin et al. [20], Bai et al. [21], and Wen et al. [27]. Figure 2 shows the kernels estimated by BDA-SN on this dataset [5]; they are evidently close to the ground-truth kernels. Figure 3 illustrates the average SSIM and PSNR: BDA-SN reaches a higher PSNR than BDA-SN without SN. Figure 4a shows that BDA-SN without SN has a lower success rate than BDA-SN, and Figure 4b demonstrates that BDA-SN achieves the highest success rate among the compared SotA methods; at an error ratio of 2.5, BDA-SN achieves 100% success. As illustrated in Figure 5, BDA-SN achieves the highest average PSNR among the most advanced methods. To show the effects of these algorithms more intuitively, Figure 6 visually compares BDA-SN with other SotA methods. The image recovered by BDA-SN is visually more pleasing, whereas the algorithms of [13,19,20] exhibit strong ringing artifacts and the image deblurred by BDA-SN without SN contains severe blur residues. Table 2 provides the quantitative evaluation corresponding to Figure 6 and demonstrates that the image restored by BDA-SN has the highest PSNR and SSIM. Table 2. Quantitative evaluations of the image in Figure 6.

Dataset of Kohler et al.
The second experiment was carried out on the dataset of Kohler et al. [46], containing 48 blurred images generated from 4 clear images and 12 blur kernels. The compared algorithms include those of Krishnan et al. [19], Xu et al. [7], Pan et al. [9], Yan et al. [13], Jin et al. [20], Bai et al. [21], and Wen et al. [27]. Figure 7 reveals that the kernels estimated by BDA-SN are close to the true kernels, and Figure 8 presents the corresponding quantitative results. For a better comparison, Figure 10 shows a challenging, severely blurred visual example. BDA-SN yields the best visual effect: the number "4" in the red box in the lower left corner has a sharp edge, which is visually more pleasing. The result of the algorithm [20] produces ringing artifacts, the results of the algorithms [7,19] have severe blur effects, and the results of the methods [9,13,27] are too smooth. Table 3, corresponding to Figure 10, shows that the PSNR and SSIM of BDA-SN are the highest. Table 3. Quantitative evaluations of the image in Figure 10.

Domain-Specific Images
Additionally, this paper evaluates BDA-SN on face image [29], saturated image [29], text image [8], and natural image [9]. This paper gives typical results for each category. This paper also extended BDA-SN to nonuniform blurred images. Finally, the run times of different methods are compared in this paper.
Natural image: The real natural image that comes from the dataset [9] is used to further test BDA-SN. As shown in Figure 11, BDA-SN produces results comparable to or better than methods [9,27]. The image restored by methods [7,20], and BDA-SN without SN displayed obvious ringing artifacts, suggesting the effectiveness of SN. The methods of [19] produced strong artifacts and blur effects, while BDA-SN generated a clearer image. Face image: Face images lack sufficient structural edges and textures, making kernel estimation challenging. A visual comparison is shown in Figure 12. It can be inferred from Figure 12 that BDA-SN yields the best result, whereas BDA-SN without SN produces severe distortions. The restored image by BDA-SN is visually pleasing, while other SotA methods [9,20,21] produced strong artifacts, particularly in the eye region. Text image: Most text images have two tones (black and white), which do not obey the heavy tail distribution of natural images. For most deblurring methods, dealing with text images is a daunting task. Figure 13 shows a challenging image from [8]. For this example, BDA-SN yields the best visual effect, while most other methods [7,13,19] produce severe artifacts and blur residuals. Saturated image: For most deblurring methods, the deblurring of saturated images is particularly challenging because saturated images usually have saturated pixels that affect the estimation of the kernel. Figure 14 displays a visual comparison on a saturated image. Due to the saturation pixels, the kernel estimated by [7,[19][20][21] looks similar to a delta kernel. BDA-SN obviously has fewer ringing artifacts and has the best visual effect on the restoration of the light source in the image. Nonuniform deblurring: BDA-SN very easily extends to nonuniform blur. Figure 15 shows the result of a degraded image due to spatially variant blur. 
It can be inferred from Figure 15 that BDA-SN gives comparable visual results to the other algorithms [7,49]. Figure 15. Comparisons on an image with nonuniform blur. For visualization, the kernels were resized. BDA-SN is visually equivalent to the algorithm in [7]; the results of the algorithms in [9,13,27] contain strong ringing artifacts. (a) Input; the algorithms of (b) Whyte et al. [49], (c) Xu et al. [7], (d) Pan et al. [9], (e) Yan et al. [13], and (f) Wen et al. [27]; (g) BDA-SN; and (h) kernels.
Computation complexity: Finally, this paper compares the computational cost of BDA-SN with other SotA methods [7,9,13,19,20,21,27]. The simulation was performed on Windows 10 with an Intel Core i5-7200U CPU at 2.7 GHz and 12 GB RAM. The natural image size was 360 × 480, the face image size 900 × 896, the text image size 410 × 180, and the saturated image size 606 × 690. The run time of the non-blind deblurring step is included in the total time. Table 4 demonstrates that the method in [19] is the fastest; however, its results are not as good as those of BDA-SN, as illustrated above. BDA-SN is two times faster than the method in [20]. The results in this paper are derived from the code supplied by the respective authors on their websites.

Analysis and Discussion
In this section, we provide a further analysis and discussion on the effectiveness of BDA-SN, the convergence of BDA-SN, and the limitations of BDA-SN.

Effectiveness of BDA-SN
This paper quantitatively evaluates BDA-SN on two benchmark datasets [5,46] and, moreover, on a face image [29], a saturated image [29], a text image [8], and a natural image [9]. As reported in Section 4, numerous experimental comparisons prove that BDA-SN performs favorably against, or even better than, other SotA methods [7,9,13,19,20,21,27]. The evaluation indexes PSNR and SSIM are used to assess image quality; Tables 2 and 3 show that BDA-SN achieves SotA performance on the benchmark datasets. Figure 10 demonstrates that BDA-SN can protect edge details and texture features thanks to the Sobel-filter-based edge protection (µ = 0.25).
To better illustrate the validity of SN, this paper disables the SN in the implementation. Figure 16 shows the intermediate results corresponding to Figure 11. The intermediate results recovered by BDA-SN contain more sharp edges and texture features, which facilitates kernel estimation. The results in Figure 16 demonstrate that SN consistently improves deblurring. All of these results demonstrate the effectiveness of SN.

Convergence Property
Since the loss function in this paper is nonlinear, a natural question is whether BDA-SN converges. To evaluate convergence quantitatively, the change in residual error during the iteration process is observed on the dataset of Levin et al. [5]. It can be seen from Figure 17 that BDA-SN converges after about 40 iterations, which verifies the effectiveness of the algorithm.

Limitation
This paper establishes the likelihood function under the assumption that noise obeys a Gaussian distribution; if the image has non-Gaussian noise, BDA-SN cannot obtain satisfactory results. As shown in Figure 18, when BDA-SN processes images degraded by salt-and-pepper noise, it does not perform well. Another disadvantage of BDA-SN is that it is not fast enough: as can be seen from Table 4, BDA-SN is slower than the algorithms of [13,19]. The impact of various noise types (such as salt-and-pepper noise) will be considered in future work.

Conclusions
Based on the observation that the SN value of a degraded image is greater than that of a clear image, a new iterative algorithm for image restoration based on SN is proposed, namely BDA-SN. SN captures the change in the degraded image during the blurring process and tends toward a clear image during the deblurring process. BDA-SN naturally maintains the non-negativity constraint of the solution during deblurring, and it adds a low-pass filter and an edge-preserving process to the iterative formula to protect the edges of the image while removing noise. Furthermore, BDA-SN extends very easily to non-uniform blur. The experimental results demonstrate that BDA-SN reaches a state-of-the-art level on both natural images and specific scenarios, and quantitative and qualitative evaluations demonstrate that BDA-SN performs favorably against other SotA methods.
Author Contributions: J.Z. proposed the original idea and supervised the project. S.S. fabricated the samples and performed the measurements. Z.X. revisited and supervised the whole process. All authors have read and agreed to the published version of the manuscript.