Article

A Fast Nonlinear Sparse Model for Blind Image Deblurring

1
School of Physics, Nanjing University of Science and Technology, Nanjing 210094, China
2
Nanjing University of Science and Technology Tangshan Test Center, Nanjing University of Science and Technology, Nanjing 210000, China
3
Faculty of Mathematics and Physics, Huaiyin Institute of Technology, Huai’an 223003, China
4
Engineering Research Center of Semiconductor Device Optoelectronic Hybrid Integration in Jiangsu Province, Nanjing 210000, China
*
Authors to whom correspondence should be addressed.
J. Imaging 2025, 11(10), 327; https://doi.org/10.3390/jimaging11100327
Submission received: 17 August 2025 / Revised: 9 September 2025 / Accepted: 19 September 2025 / Published: 23 September 2025

Abstract

Blind image deblurring, which requires simultaneous estimation of the latent image and the blur kernel, is a classic ill-posed problem. To address it, priors based on L_2, L_1, and L_p regularization have been widely adopted. Building on this foundation and on successful experience from previous work, this paper introduces L_N regularization, a novel nonlinear sparse regularization that couples the L_p and L_∞ norms nonlinearly. Statistical probability analysis demonstrates that L_N regularization achieves stronger sparsity than traditional regularizations such as L_2, L_1, and L_p. Furthermore, building upon L_N regularization, we propose a novel nonlinear sparse model for blind image deblurring. To optimize the proposed L_N regularization, we introduce an Adaptive Generalized Soft-Thresholding (AGST) algorithm and further develop an efficient optimization strategy by integrating AGST with the Half-Quadratic Splitting (HQS) strategy. Extensive experiments conducted on synthetic datasets and real-world images demonstrate that the proposed nonlinear sparse model achieves superior deblurring performance while maintaining competitive computational efficiency.

1. Introduction

Recent advances in computer vision have intensified research on image processing, particularly image deblurring, a fundamental component of low-level image processing. In the context of space-invariant blur kernel modeling, a blurred image B can be mathematically represented as:
B = I ⊗ k + n,  (1)
where B denotes the blurred image, I denotes the sharp image corresponding to B, ⊗ denotes the convolution operator, k denotes the blur kernel, and n denotes the inevitable additive noise. Image deblurring encompasses two distinct categories: non-blind deblurring, where the kernel k is known, and blind deblurring, where the kernel k is unknown. This research addresses the latter.
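As a concrete illustration, the degradation model above can be simulated in a few lines. The following NumPy sketch uses circular convolution via the FFT; the box kernel, noise level, and boundary handling are illustrative choices of ours, not details from the paper:

```python
import numpy as np

def blur(I, k, noise_sigma=0.01, rng=None):
    """Simulate B = I (x) k + n with circular (wrap-around) convolution.
    Kernel, noise level, and boundary handling are illustrative assumptions."""
    rng = np.random.default_rng(0) if rng is None else rng
    pad = np.zeros_like(I)
    pad[:k.shape[0], :k.shape[1]] = k
    # Center the kernel so its FFT acts as circular convolution.
    pad = np.roll(pad, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    B = np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(pad)))
    return B + noise_sigma * rng.standard_normal(I.shape)

# A sharp synthetic image (a bright square) and a normalized 9x9 box kernel.
I = np.zeros((64, 64))
I[20:40, 20:40] = 1.0
k = np.ones((9, 9)) / 81.0
B = blur(I, k)
```

Because the kernel sums to one, the blurred image preserves the mean intensity of the sharp image up to the zero-mean noise.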
Blind deblurring algorithms aim to direct the optimization process toward the desired solution, with traditional optimization approaches employing regularization terms to enhance constraints on both the latent image I and blur kernel k , facilitating convergence to the appropriate blur kernel and sharp latent image. The standard formulation of blind deblurring is expressed as:
(I, k) = argmin_{I,k} F(I, k) + α R_1(I) + β R_2(k),  (2)
where F(I, k) denotes the data fidelity term, and R_1(I) and R_2(k) denote the regularization terms for the latent image I and the blur kernel k, respectively.
Regarding the regularization term R_1(I), researchers have developed numerous effective image priors, with gradient sparsity-based priors receiving extensive attention and implementation. For instance, Xu et al. [1] introduced an L_0 norm-based image smoothing algorithm and subsequently expanded L_0 regularization to the field of image deblurring [2], revealing that optimizing L_0 regularization presents a Nondeterministic Polynomial (NP)-hard problem, rendering direct solutions impractical. To address this limitation, Xu et al. [2] developed an unnatural distribution to approximate the L_0 regularization solution. Concurrently, the compressed sensing community typically addressed this challenge by relaxing the L_0 norm to a convex surrogate. Earlier approaches commonly replaced the L_0 norm with the L_1 norm [3,4]; however, the limited sparsity of the L_1 norm results in performance that fails to match that of L_0 regularization in image processing applications.
To address this issue, Wang et al. [3,4,5,6] introduced a series of norm-ratio-based regularizations, e.g., L_1/L_2 and L_1/L_∞, establishing that L_1/L_∞ can effectively approximate the L_0 norm [7]. Given that the L_1 norm is a special case of the L_p norm with p = 1, we extend Wang et al.'s work by generalizing L_1/L_∞ regularization into a broader framework: L_p/L_∞ (0 < p < 1) regularization (termed nonlinear sparse regularization, denoted L_N regularization). To evaluate the effectiveness of L_N regularization, we perform a sparsity analysis of various regularizations using gradient distribution statistics, as presented in Figure 1. The results indicate that the proposed L_N regularization achieves superior sparsity, offering preliminary evidence of its advantage over L_1/L_∞ regularization.
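To make the sparsity argument concrete, the following sketch evaluates an L_p/L_∞-style measure on a sparse and a dense gradient vector that have identical L_1 norms, so the L_1 norm alone cannot distinguish them. The helper name `ln_reg` and the "sum of p-th powers" reading of ‖·‖_p (a common quasi-norm convention for 0 < p < 1) are our assumptions:

```python
import numpy as np

def ln_reg(x, p=0.8):
    """L_p/L_inf-style sparsity measure: sum(|x|^p) / max(|x|).
    Hypothetical helper illustrating the ratio regularizer; smaller = sparser."""
    x = np.abs(np.asarray(x, dtype=float))
    return np.sum(x ** p) / np.max(x)

sparse = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # one strong edge
dense  = np.array([0.2, 0.2, 0.2, 0.2, 0.2])   # spread-out gradients
# Both vectors have L1 norm 1.0, but the ratio measure penalizes
# the dense vector far more heavily, favoring the sparse solution.
```

A lower penalty on the sparse vector is exactly the behavior a sparsity-promoting regularizer should exhibit.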
Due to its high sparsity, the L_0 gradient prior exhibits strong filtering capabilities against harmful artifacts in blurred images, although this restricts its performance when processing detail-rich images. To overcome this limitation, researchers have developed an L_0 + X deblurring prior framework that combines the L_0 gradient prior with complementary prior terms, a hybrid approach which aims to filter detrimental artifacts effectively while preserving fine image structures. Current X-prior terms frequently incorporate various patch-based image priors [9,10,11,12,13]; however, these methods encounter challenges due to their high computational complexity, excessive resource consumption, and low operational efficiency. In response, Chen et al. [14] introduced an enhanced sparse model utilizing the L_1 gradient prior as the X-term, substantially improving computational speed. Building on this work, we incorporate the proposed L_N-regularized gradient prior as the X-term and combine it with the L_0 gradient prior, developing a novel fast nonlinear sparse model.
The main contributions of this paper are as follows:
  • We propose a novel nonlinear sparse regularization (L_N) that nonlinearly couples the L_p norm with the L_∞ norm.
  • An Adaptive Generalized Soft-Thresholding (AGST) algorithm is developed to optimize the L N regularization problem.
  • Building upon L N -regularization, we design a novel nonlinear sparse model for blind deblurring and develop an efficient optimization algorithm based on AGST and HQS.
The rest of this paper is organized as follows: Section 2 provides a comprehensive review of existing deblurring methods; Section 3 introduces the proposed nonlinear sparse regularization and the corresponding fast nonlinear sparse model; Section 4 experimentally evaluates the model on synthetic datasets and real-world blurred images; Section 5 analyzes ablation studies in order to validate the components of our fast nonlinear sparse model, including runtime performance tests; and Section 6 gives our conclusion.

2. Related Work

Generally, all existing methods can be classified into two categories: optimization-based and deep-learning-based methods. This section provides an overview of these distinct methodologies.

2.1. Optimization-Based Methods

Optimization-based image deblurring algorithms originated from the Richardson-Lucy method [15,16]. Subsequently, various approaches emerged, including the Gaussian mixture model (Fergus et al. [17]) and a fast L_2 norm-based deblurring method (Cho et al. [18]). However, L_2 regularization demonstrates limited effectiveness in blur kernel estimation due to its insufficient sparsity; a sparser regularized prior is therefore needed to meet the performance requirements of image deblurring. Yang et al. [19] and Candes et al. [20] proposed several L_1-norm methods; however, these approaches failed to satisfy the sparsity requirements of the deblurring prior term.
To obtain superior restoration performance, Daniele et al. [21] tried an L_p (0 < p < 1) gradient prior. Since kernel estimation based on L_p sparse regularization is a non-convex problem, the resulting L_p optimization is difficult to solve directly. To address this, Daniele et al. [21] transformed the non-convex L_p optimization into a quadratic optimization problem by taking the log of the L_p regularization term. Gasso et al. [22] and Zou et al. [23] extended Iteratively Reweighted L_1 minimization (IRL1) [20] to the non-convex domain of L_p minimization; Rao and Kreutz-Delgado [24] proposed an Iteratively Reweighted Least Squares (IRLS) approach to L_p minimization; and She et al. [25] proposed the Iterative Thresholding Method (ITM), which is only suitable for unconstrained problems. In 2013, Zuo et al. [26] proposed a GST operator to solve the L_p minimization problem, and following this, Zuo et al. [27] applied L_p regularization to the field of image deblurring.
In pursuit of enhanced norm sparsity during the deblurring process, research focus shifted toward the L_0 norm. Xu et al. [1] introduced an L_0 image smoothing method in 2011, and building upon this, Xu et al. [2] developed an L_0 gradient prior by applying L_0 norm constraints to image gradients, enhancing kernel estimation and large-scale optimization. Extensive research has demonstrated that the generalized L_0 sparse gradient prior can effectively extract strong edges, and Xu et al.'s [2] groundbreaking work inspired numerous deblurring methods based on L_0 regularization: Pan et al. [28] implemented an L_0-regularized intensity and gradient prior for text image deblurring, and Li et al. [29] applied the L_0 norm to constrain the blur kernel intensity.
Despite the L_0 regularization prior's proven effectiveness in removing harmful artifacts from images and its widespread adoption in blind deblurring, it often underperforms when processing images with complex structural details. To address this limitation, researchers developed the L_0 + X paradigm, combining the L_0 gradient prior with supplementary image priors. A representative example is the Dark Channel Prior (DCP) developed by Pan et al. [9], who discovered that sharp images exhibit sparser dark channels than blurred ones, and combined the DCP with the L_0 gradient prior to good effect. Similarly, Yan et al. [30] extended the applicability of the DCP by combining it with the bright channel prior. Additionally, Eqtedaei et al. [31] developed a deblurring prior based on the difference between the local maximum and minimum pixel values within an image region, developing two distinct deblurring algorithms utilizing L_1 and L_0 regularization, respectively.
The aforementioned image patch-based priors rely on overlapping patches, which substantially increases their computational complexity. To address this limitation, researchers have explored non-overlapping patches as an alternative approach. Notable examples include the Patch-wise Minimum Pixel (PMP) prior proposed by Wen et al. [11] and the Patch-wise Maximum Gradient (PMG) prior developed by Xu et al. [13]. These non-overlapping patch priors demonstrate significant computational acceleration while maintaining restoration accuracy; however, despite their improved efficiency, these methods still require individual patch processing. Additionally, many patch-based priors necessitate the introduction of large, sparse matrices during optimization, consuming considerable computational resources and reducing algorithmic efficiency.
On the other hand, edge detection-based deblurring algorithms have emerged as a viable technical approach. Joshi et al. [32] implemented direct detection and prediction of latent sharp edges to enhance blur kernel estimation; Cho et al. [18] integrated bilateral filtering, shock filtering, and edge gradient thresholding for salient edge prediction; Xu et al. [33] developed a two-phase robust kernel estimation framework with an effective edge selection strategy; and Pan et al. [34] proposed a self-adaptive edge selection algorithm, while Liu et al. [35] implemented a surface-aware approach. Although explicit edge prediction methods demonstrate validity in blind deblurring, they remain dependent on heuristic filters; these methods therefore tend to amplify noise, potentially compromising the deblurring process and producing over-sharpened images, and, furthermore, natural images do not consistently contain salient edges. Additionally, some scholars have begun exploring the integration of learning mechanisms with traditional optimization frameworks [36,37,38].

2.2. Learning-Based Methods

In the last decade, the rapid advancement of deep learning technology has prompted researchers to investigate its applications in image deblurring tasks.
In 2015, Sun et al. [39] pioneered the application of Convolutional Neural Networks (CNNs) to non-uniform image deblurring, marking an early successful integration of deep learning techniques. Subsequently, numerous CNN-based methods have emerged. For instance, Chakrabarti et al. [40] modified initial network layer connectivity using multi-frequency decomposition; Gong et al. [41] proposed a CNN-based approach for direct motion flow estimation from blur kernels; Ren et al. [42] designed a Maximum a Posteriori (MAP) deep learning hybrid framework, utilizing dual-branch networks for the alternating optimization of latent sharp images and blur kernels. Feng et al. [43] proposed Ghost-UNet, incorporating lightweight sub-networks for enhanced computational efficiency while preserving feature representation capacity; Mao et al. [44] developed a Residual Fast Fourier Transform with Convolution Block (ResFFT-Conv) module; and Mou et al. [45] transformed the Proximal Gradient Descent algorithm into a learnable deep architecture.
Beyond CNNs, other neural network architectures, including Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and Feed-Forward Networks (FFNs), have demonstrated success in image deblurring. Zhang et al. [46] developed a Hybrid Deblur Net incorporating RNNs for non-uniform deblurring; Wang et al. [5] proposed a real-time deblurring algorithm utilizing GANs; and Kong et al. [47] developed a Frequency domain-based Self-Attention Solver (FSAS) to address the limitations of FFNs in image deblurring.
Despite their superior deblurring capabilities, neural networks face two significant limitations: (1) Their substantial data dependency means they require extensive training samples for optimal performance, resulting in generalization failures with distributionally shifted data. (2) Computational requirements are significant during both training and inference phases, particularly for architectures with numerous parameters.

3. Proposed Method

In this section, we present our improved sparse regularization and develop an effective deblurring algorithm based on this model.

3.1. Definition of Nonlinear Sparse Regularization

The nonlinear sparse regularization ( L N ) is defined as:
‖·‖_N = ‖·‖_p / ‖·‖_∞.  (3)
Given a corrupted signal A , we assume the latent sharp signal B is sparse. With a basic quadratic penalty, the objective energy function can be written as:
B^k = argmin_B ‖A − B‖_2^2 + λ‖B‖_N,  (4)
where λ denotes the regularization parameter, and the superscript k denotes the iteration level. Substituting the definition of L_N from Equation (3), Equation (4) can be rewritten as:
B^k = argmin_B ‖A − B‖_2^2 + λ ‖B‖_p / ‖B^{k−1}‖_∞,  (5)
where B^k denotes the result of the current iteration level, while B^{k−1} denotes the result of the previous iteration level. The ‖·‖_∞ notation represents the infinity norm, mathematically defined as the maximum absolute value of a matrix: ‖·‖_∞ = max{|·|}. Building on previous successful practices [3,4,6,7], we utilize the infinity norm ‖B^{k−1}‖_∞ of the signal from the previous iteration as a weighting factor to adjust the regularization parameter. To facilitate the solution of our model, we decompose the signals A and B into a series of independent elementwise subproblems:
B_i^k = argmin_{B_i} ‖A_i − B_i‖_2^2 + λ |B_i|^p / ‖B^{k−1}‖_∞,  (6)
where i denotes the location of an element. The nonlinear sparse regularization term in Equation (6) presents a non-convex optimization problem; following prior successful practice, we transform the L_N optimization into an L_p optimization with an adaptively adjusted regularization parameter. Based on the GST algorithm widely employed for L_p norm optimization, we further develop an AGST algorithm. To illustrate the nonlinear sparse regularization, we abstract the L_N-related component of Equation (6) as a function:
F(A_i, B_i) = (A_i − B_i)^2 + λ |B_i|^p / ‖B^{k−1}‖_∞.  (7)
The curves of the function F under different values of the variable A_i are displayed in Figure 2.
The curves in Figure 2 illustrate the existence of a threshold τ : when A i falls below this threshold, the minimum value of the function in Equation (7) occurs at B i = 0 ; when A i exceeds this threshold, the function reaches its minimum at a non-zero value. These properties indicate that the threshold τ satisfies the following condition:
F(τ, B_i^τ) = F(τ, 0),  (8)
F′(τ, B_i^τ) = 0.  (9)
Here, B_i^τ denotes the value of B_i at which the function in Equation (7) achieves its non-zero minimum, and F′ denotes the first-order derivative of F with respect to B_i. These conditions yield the following solutions:
B_i^τ = (λ(1 − p) / ‖B^{k−1}‖_∞)^{1/(2−p)},  (10)
τ = (λ(1 − p) / ‖B^{k−1}‖_∞)^{1/(2−p)} + (λp / (2‖B^{k−1}‖_∞)) · (λ(1 − p) / ‖B^{k−1}‖_∞)^{(p−1)/(2−p)}.  (11)
As shown in Equation (11), since ‖B^{k−1}‖_∞ varies with the optimization target B, which directly affects the threshold τ, the proposed AGST algorithm adapts its threshold dynamically according to the intrinsic characteristics of the input variables. The solution B_i is expressed as:
B_i = AGST(A_i, ‖B^{k−1}‖_∞, λ, p).  (12)
The workflow of AGST is outlined in Algorithm 1.
Algorithm 1: The Adaptive Generalized Soft-Thresholding algorithm
Input: A_i, ‖B^{k−1}‖_∞, λ, p, J
   τ = (λ(1 − p)/‖B^{k−1}‖_∞)^{1/(2−p)} + (λp/(2‖B^{k−1}‖_∞)) · (λ(1 − p)/‖B^{k−1}‖_∞)^{(p−1)/(2−p)}
   if |A_i| ≤ τ
      B_i = 0
   else
      b^(0) = |A_i|
      for t = 1, 2, …, J
         b^(t) = |A_i| − (λp/(2‖B^{k−1}‖_∞)) · (b^(t−1))^{p−1}
      end
      B_i = sgn(A_i) · b^(J)
   end
Output: B_i
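A vectorized NumPy sketch of Algorithm 1 follows. The function signature and the default J = 3 are our choices; following the quadratic fidelity in Equation (7), we use the effective weight λ/(2‖B^{k−1}‖_∞) in both the threshold and the fixed-point step (our reading of the derivation, stated as an assumption):

```python
import numpy as np

def agst(a, b_inf, lam, p, J=3):
    """Adaptive Generalized Soft-Thresholding (sketch of Algorithm 1).

    a      : array of inputs A_i
    b_inf  : ||B^{k-1}||_inf, infinity norm of the previous iterate
    lam, p : regularization parameter and power (0 < p < 1)
    J      : number of fixed-point refinement steps
    """
    a = np.asarray(a, dtype=float)
    lam_eff = lam / (2.0 * b_inf)                        # effective GST weight
    x1 = (2.0 * lam_eff * (1.0 - p)) ** (1.0 / (2.0 - p))
    tau = x1 + lam_eff * p * x1 ** (p - 1.0)             # adaptive threshold
    out = np.zeros_like(a)                               # |A_i| <= tau -> 0
    big = np.abs(a) > tau
    b = np.abs(a[big])                                   # b^(0) = |A_i|
    for _ in range(J):                                   # fixed-point iteration
        b = np.abs(a[big]) - lam_eff * p * b ** (p - 1.0)
    out[big] = np.sign(a[big]) * b                       # restore the sign
    return out
```

For small inputs the operator returns exactly zero, while large inputs are only mildly shrunk, which is the qualitative behavior described for the threshold τ.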

3.2. Deblurring Model and Optimization

This subsection describes the proposed deblurring model and its optimization procedure. In our formulation, we employ the L_2 norm, commonly utilized in traditional algorithms, to regularize the fidelity term and the blur kernel term. For the image-related regularization terms, building upon previous successful approaches [9,10,11,13,14], we combine the L_N gradient prior with the widely adopted L_0 gradient prior, thereby constructing a novel fast nonlinear sparse model. The complete model is expressed as:
min_{I,k} ‖I ⊗ k − B‖_2^2 + α‖∇I‖_N + β‖∇I‖_0 + γ‖k‖_2^2,  (13)
where ∇ denotes the gradient operators in the vertical and horizontal dimensions (i.e., ∇ = {∇_h, ∇_v}), and α, β, and γ denote the weight parameters. We solve Equation (13) by alternately updating I and k with the other held fixed. The sub-problems for I and k are given by:
min_I ‖I ⊗ k − B‖_2^2 + α‖∇I‖_N + β‖∇I‖_0,  (14)
min_k ‖I ⊗ k − B‖_2^2 + γ‖k‖_2^2.  (15)

3.2.1. Updating Latent Image I

The latent image I is updated while keeping the kernel k fixed. Since Equation (14) presents a highly non-convex problem, the HQS method is employed. Two auxiliary variables, u and g, are introduced to represent ∇I in the second and third terms of Equation (14), respectively, transforming Equation (14) into:
min_{I,u,g} ‖I ⊗ k − B‖_2^2 + α‖u‖_N + β‖g‖_0 + λ_1‖u − ∇I‖_2^2 + λ_2‖g − ∇I‖_2^2,  (16)
where λ 1 and λ 2 denote the penalty parameters. Similarly to Equation (13), Equation (16) is decomposed into three subproblems associated with I , u , and g :
min_I ‖I ⊗ k − B‖_2^2 + λ_1‖u − ∇I‖_2^2 + λ_2‖g − ∇I‖_2^2,  (17)
min_u α‖u‖_N + λ_1‖u − ∇I‖_2^2,  (18)
min_g β‖g‖_0 + λ_2‖g − ∇I‖_2^2.  (19)
Solving  I . Equation (17) represents a classical quadratic optimization problem solvable via Fast Fourier Transform (FFT), with its closed-form solution expressed as:
I = F⁻¹( ( F̄(k)F(B) + λ_1 F̄(∇)F(u) + λ_2 F̄(∇)F(g) ) / ( F̄(k)F(k) + (λ_1 + λ_2) F̄(∇)F(∇) ) ),  (20)
where F(·) and F⁻¹(·) denote the Fourier transform and its inverse, and F̄(·) denotes the complex conjugate of F(·).
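The FFT closed-form update of Equation (20) translates directly into NumPy. In the sketch below, `psf2otf` and the forward-difference filters are standard helpers we implement ourselves (assumptions, not details from the paper); u and g each hold a horizontal and a vertical gradient component:

```python
import numpy as np

def psf2otf(k, shape):
    """Zero-pad a kernel to `shape` and circularly center it so that its
    FFT implements circular convolution (standard helper, our version)."""
    pad = np.zeros(shape)
    kh, kw = k.shape
    pad[:kh, :kw] = k
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def solve_I(B, k, u, g, lam1, lam2):
    """FFT closed-form latent-image update (sketch of Equation (20)).
    u, g: pairs (horizontal, vertical) of auxiliary gradient fields."""
    Fk = psf2otf(k, B.shape)
    Fdx = psf2otf(np.array([[1.0, -1.0]]), B.shape)    # horizontal difference
    Fdy = psf2otf(np.array([[1.0], [-1.0]]), B.shape)  # vertical difference
    num = (np.conj(Fk) * np.fft.fft2(B)
           + lam1 * (np.conj(Fdx) * np.fft.fft2(u[0])
                     + np.conj(Fdy) * np.fft.fft2(u[1]))
           + lam2 * (np.conj(Fdx) * np.fft.fft2(g[0])
                     + np.conj(Fdy) * np.fft.fft2(g[1])))
    den = np.abs(Fk) ** 2 + (lam1 + lam2) * (np.abs(Fdx) ** 2 + np.abs(Fdy) ** 2)
    return np.real(np.fft.ifft2(num / den))
```

As a sanity check, with a delta kernel and auxiliaries set to the exact gradients of B, the update returns B itself.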
Solving  u . Based on the definition in Equation (3), Equation (18) is reformulated as:
min_u λ_1‖u − ∇I‖_2^2 + α ‖u‖_p / ‖∇I_pre‖_∞,  (21)
where I p r e denotes the I obtained from the previous iteration level. Adopting the AGST algorithm, the solution for variable u is expressed as:
u = AGST(∇I, ‖∇I_pre‖_∞, α/λ_1, p).  (22)
Solving g. The objective function for the auxiliary variable g in Equation (19) represents an L_0 gradient prior [2]. The solution employs the unnatural distribution method, yielding the closed-form solution for g:
g = ∇I, if |∇I|^2 > β/λ_2; g = 0, otherwise.  (23)
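Equation (23) is a simple elementwise hard threshold. A minimal sketch follows; applying it per gradient component (rather than to a joint gradient magnitude) is a simplification we assume:

```python
import numpy as np

def solve_g(grad_I, beta, lam2):
    """L0 subproblem (Equation (19)): keep a gradient entry only where its
    squared magnitude exceeds beta/lam2, and zero it elsewhere."""
    g = np.asarray(grad_I, dtype=float).copy()
    g[g ** 2 <= beta / lam2] = 0.0   # hard threshold at beta/lam2
    return g
```

Small gradients (likely noise or ringing) are suppressed to exactly zero, while strong edges pass through unchanged.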
The principal steps for estimating the latent image I are summarized in Algorithm 2.
Algorithm 2: Latent image estimation
Input: Blurred image B, initialized k from the coarser level.
   I ← B, λ_1 ← α, λ_2 ← β
   repeat
      Compute u using Equation (22)
      Compute g using Equation (23)
      Compute I using Equation (20)
      λ_1 ← 2λ_1, λ_2 ← 2λ_2
   until λ_1 > α_max
Output: Intermediate latent image I.

3.2.2. Updating Blur Kernel k

The objective function for the blur kernel k presents a quadratic optimization problem similar to Equation (15). While Equation (15) emphasizes image intensity information, previous advanced methods [9,13,18,28] demonstrate that blur kernel estimation achieves higher accuracy when based on image gradient. Therefore, Equation (15) is modified into a gradient-based form:
min_k ‖∇I ⊗ k − ∇B‖_2^2 + γ‖k‖_2^2,  (24)
and can be effectively solved through FFT:
k = F⁻¹( F̄(∇I)F(∇B) / ( F̄(∇I)F(∇I) + γ ) ).  (25)
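Equation (25) likewise admits a direct FFT implementation. In the sketch below, the crop to the kernel support, the nonnegativity clip, and the unit-sum normalization are common post-processing steps we assume rather than details from the paper:

```python
import numpy as np

def solve_k(grad_I, grad_B, gamma, ksize):
    """FFT solution of the gradient-based kernel subproblem (sketch of
    Equation (25)). grad_I / grad_B are tuples of gradient images."""
    num = np.zeros(grad_I[0].shape, dtype=complex)
    den = np.full(grad_I[0].shape, gamma, dtype=float)
    for gi, gb in zip(grad_I, grad_B):       # accumulate over h/v gradients
        Fi = np.fft.fft2(gi)
        num += np.conj(Fi) * np.fft.fft2(gb)
        den += np.abs(Fi) ** 2
    k = np.real(np.fft.ifft2(num / den))
    k = k[:ksize, :ksize]                    # crop to assumed kernel support
    k[k < 0] = 0.0                           # enforce nonnegativity
    s = k.sum()
    if s > 0:
        k /= s                               # normalize to unit sum
    return k
```

As a sanity check, when the "blurred" gradients equal the sharp ones, the estimate collapses to an (approximate) delta kernel.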
The essential steps of blur kernel estimation are summarized in Algorithm 3.
Algorithm 3: Blur kernel estimation
Input: Blurred image B, initialized k from the previous level of the image pyramid.
   for i = 1, 2, …, max_iter do
      Estimate I using Algorithm 2
      Estimate k using Equation (25)
   end
Output: Blur kernel k

4. Experimental Results

This section presents an evaluation of the proposed method using natural image datasets [8,48,49], real-world images, and a specific domain dataset [50], comparing it with several state-of-the-art methods, including traditional methods and deep learning methods. For all uniform image deblurring experiments, the parameters are set as α = 0.0013, β = 0.0023, γ = 9, p = 0.8, max_iter = 5, and α_max = 10^5; for fair comparison, the other algorithms utilize the default settings from the authors' codes. Throughout the experiments, blur kernel estimation employed the different blind deblurring methods, followed by the same non-blind deblurring method as the final step. The model was implemented in MATLAB R2022a, with efficiency assessment conducted on an Intel Core i7-11800H CPU with 16 GB RAM (Intel Corporation, Santa Clara, CA, USA).

4.1. Natural Images

This subsection demonstrates our method’s performance on two synthetic datasets from Levin et al. [8] and Sun et al. [49]. The restoration results undergo quantitative evaluation using three standard metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and cumulative error ratio. The mathematical definitions of each evaluation metric are expressed as follows:
PSNR = 10 · log_10(MAX_S^2 / MSE),  s.t.  MSE = (1/(mn)) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} [I(i,j) − I_t(i,j)]^2,  (26)
SSIM(I, S) = ( (2μ_I μ_S + C_1)(2σ_IS + C_2) ) / ( (μ_I^2 + μ_S^2 + C_1)(σ_I^2 + σ_S^2 + C_2) ),  (27)
error ratio = ‖I_t − I‖_2^2 / ‖I_t − I_k‖_2^2,  (28)
where I denotes the restored image, I_t denotes the reference ground-truth image used for quality assessment, and I_k denotes the image deblurred using the real blur kernel. In Equation (26), m and n denote the size of the image, MAX_S indicates the maximum pixel value of image S, and MSE denotes the mean squared error. In Equation (27), μ_I and μ_S denote the mean values of images I and S, σ_I and σ_S denote their standard deviations, σ_IS denotes the covariance between the two images, and C_1 and C_2 are two constants.
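For reference, the PSNR and error-ratio metrics translate directly into code. A sketch (taking MAX_S = 1.0 for images scaled to [0, 1], an assumption on our part):

```python
import numpy as np

def psnr(I, I_t, max_val=1.0):
    """Peak Signal-to-Noise Ratio, per Equation (26)."""
    mse = np.mean((np.asarray(I) - np.asarray(I_t)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def error_ratio(I, I_k, I_t):
    """Error ratio, per Equation (28): restoration error relative to the
    error obtained when deblurring with the ground-truth kernel."""
    return np.sum((I_t - I) ** 2) / np.sum((I_t - I_k) ** 2)
```

An error ratio of 1 means the estimated kernel restores as well as the true kernel; the cumulative curves in the experiments report the fraction of images below a given ratio.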

4.1.1. Levin’s Dataset

The initial evaluation utilizes the dataset reported by Levin et al. [8], comprising 32 blurred images generated from 4 images filtered with eight blur kernels, each maintaining a uniform resolution of 255 × 255 pixels. The PSNR, SSIM, and cumulative error ratio metrics of our method are compared with several state-of-the-art methods [9,11,14,30,31,33,51,52,53], with the results presented in Figure 3. As demonstrated in Figure 3a,b, the proposed model achieves superior average PSNR and SSIM metrics (32.234 dB in PSNR and 0.909 in SSIM), surpassing the L_e model by 0.866 dB in PSNR and 0.024 in SSIM, respectively. Additionally, Figure 3c displays the cumulative error ratio curves of the comparative methods, with the results demonstrating that the proposed method consistently outperforms competing approaches, achieving a 90.625% success rate when the error ratio is ≤1.5, and a 100% success rate when the error ratio is ≤2.0.
Figure 4 presents a particularly challenging image—exhibiting a large blur kernel and complex texture details, posing significant difficulties for deblurring algorithms—alongside the restoration results of the compared methods, with corresponding PSNR and SSIM values annotated in the upper-left corner. The proposed method achieves superior kernel estimation accuracy and visual quality, producing the highest PSNR (30.046 dB).

4.1.2. Sun’s Dataset

To expand the comparative analysis, evaluation was conducted on Sun’s dataset [49], containing 640 high-resolution blurred images, using the cumulative error ratio as the comparison metric. Our method was evaluated against several established deblurring methods [9,14,18,31,33,52,54,55,56], with the quantitative results presented in Figure 5. For equitable comparison, an identical non-blind deblurring approach [57] was applied for all competing methods. As the figure illustrates, the proposed method achieves an 87.500% success rate when the error ratio is ≤2, exceeding Chen et al.’s [14] enhanced sparse model, which achieves an 85.469% success rate.
Following an established protocol, a representative case is presented in Figure 6 to illustrate the advantages of the proposed method over comparative approaches, with quantitative metrics annotated in the upper-left corner. The selected example features an interior architectural space with intricate structural details, and the results demonstrate that the proposed method has superior blur kernel accuracy compared to alternative approaches. Notably, the method achieves a 0.489 dB PSNR improvement and 0.011 SSIM gain over the L e model proposed by Chen et al. [14].

4.2. Specific Images

With Section 4.1 having demonstrated the superior performance of the proposed method on natural images, this subsection presents the results of targeted experiments using representative scenarios from the dataset in [50], specifically evaluating performance on two distinct scenarios: human face images and text images.

4.2.1. Human Face Images

Face image processing constitutes a fundamental research area in this field, with images of faces presenting unique challenges due to their frequent absence of dominant structural information, complicating blur kernel estimation. Figure 7 presents quantitative comparison results of face images from the dataset in [50]. The results indicate the superior PSNR and SSIM metrics (27.638 dB PSNR and 0.860 SSIM) of the proposed method, showing an improvement of 1.337 dB in PSNR and 0.030 in SSIM compared to Chen et al.’s L e model [14].
The first row of Figure 8 presents a challenging face image example from the dataset [50], while column (b) displays the ground-truth sharp image. This example demonstrates that our method reconstructs the most accurate blur kernel while producing results with minimal ringing artifacts. Following standard practice, the PSNR and SSIM metrics are displayed in the lower-left corner, indicating the substantial improvement achieved by the proposed method over the L_e model, obtaining a PSNR of 28.085 dB.

4.2.2. Text Images

Text image processing represents another significant application domain; it differs from other tasks in that text images are predominantly two-tone and do not follow the heavy-tailed gradient distribution of natural images, making them particularly challenging for most deblurring methods. Figure 9 presents the average PSNR and SSIM for text images from the dataset in [50], indicating that our method achieves the highest quantitative evaluation metrics, surpassing the second-highest method by 0.533 dB in PSNR and 0.066 in SSIM. For visual comparison, the second row of Figure 8 illustrates an exemplar text image from the dataset [19], containing abundant image details. The comparative experimental results demonstrate that our method generates reconstructed results with superior detail preservation, and the quantitative metrics indicated in the upper-left corner reveal a PSNR improvement of 0.374 dB over the compared methods.

4.3. Comparison Against Deep Learning Methods

The recent decade has witnessed rapid advancement in deep learning, leading to numerous deep learning-based deblurring methods being proposed. To validate the effectiveness of our approach, we performed comparative experiments with several state-of-the-art deep learning models [44,45,47,51,52,54,58,59,60] on the dataset proposed by Köhler et al. [48]. The quantitative analysis results, presented in Figure 10, demonstrate that our model achieves superior performance compared to several deep learning approaches in terms of both PSNR and Mean SSIM (MSSIM), surpassing the best-performing deep learning model [45] by 2.237 dB and 0.057, respectively.
For qualitative evaluation, we illustrate a challenging example from the dataset in Figure 11. The results indicate that most deep learning methods encounter difficulties in producing satisfactory restoration results when processing images with large blur kernels. Our method, however, maintains accurate reconstruction quality even in this challenging case, achieving a PSNR of 28.064 dB and an MSSIM of 0.907.

4.4. Real-World Images

Following evaluation on synthetic datasets, we tested our algorithm using real-world blurred images, inputs that present greater randomness and uncertainty compared to synthetic images, thus imposing higher requirements on algorithmic stability and adaptability. Figure 12 illustrates an example of a real-world blurred image, featuring rich structural details and exhibiting a relatively complex blur kernel size and motion trajectory. To ensure fair comparison, we applied the same non-blind image deblurring algorithm [8] with identical parameter settings throughout all experiments, and the restoration parameters for the other methods were configured using the combinations published by their respective authors. The results demonstrate that our proposed fast nonlinear sparse model achieves the most accurate blur kernel estimation among all compared methods, producing a final restored image with superior detail preservation and minimal ringing artifacts.

5. Analysis and Discussion

In this section, we conduct a series of ablation experiments to systematically validate the effectiveness of the proposed fast nonlinear sparse model, accompanied by discussions of key parameter influences and computational efficiency.

5.1. The Effectiveness of the Fast Nonlinear Sparse Model

This section presents a series of ablation experiments conducted to validate the effectiveness of the proposed fast nonlinear sparse model, evaluating various combinations of regularization norms commonly employed in deblurring methods; the quantitative results are listed in Table 1. Specifically, columns 1–3 present the quantitative deblurring evaluations for three distinct norm-ratio regularizations (L1/L2, L1/L∞, and Lp/L∞); column 4 presents the average PSNR and SSIM obtained by standalone L0 regularization; and columns 5–7 report the corresponding performance when these norm-ratio regularizations are coupled with the L0 norm.
The comparative analysis reveals three significant findings. First, within columns 1–3, the proposed nonlinear sparse regularization (Lp/L∞) exhibits substantial advantages over the other norm-ratio regularizations; compared to the L1/L∞ prior, it achieves an average PSNR improvement of 0.591 dB. Second, the L0-coupled versions (columns 5–7) perform markedly better than their standalone counterparts (columns 1–3): incorporating the L0 prior yields an average improvement of 1.629 dB in PSNR and 0.038 in SSIM, demonstrating that the L0 regularization prior significantly enhances algorithm performance. Third, the results in the last four columns indicate that combining norm-ratio priors with the L0 prior improves deblurring more effectively than using the L0 prior alone, with PSNR gains of 0.184 dB, 0.091 dB, and 0.679 dB, respectively. Overall, the proposed fast nonlinear sparse model, which integrates the LN prior with the L0 prior, achieves the best deblurring performance, with a PSNR of 32.234 dB and an SSIM of 0.909.
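The intuition behind norm-ratio priors can be seen in a few lines: a scale-invariant ratio such as ||·||p/||·||∞ scores a sparse gradient field lower than a dense one of equal magnitude. The sketch below uses a plain Lp-over-L∞ ratio for illustration only; the paper's LN regularization couples the norms nonlinearly, so this simplified form is our assumption, not the proposed prior.

```python
import numpy as np

def ratio_prior(grad: np.ndarray, p: float = 0.5, eps: float = 1e-8) -> float:
    """Illustrative norm-ratio sparsity measure ||g||_p / ||g||_inf on a gradient
    field (lower = sparser). A simplified stand-in, not the paper's LN coupling."""
    g = np.abs(grad).ravel()
    lp = np.sum(g ** p) ** (1.0 / p)   # (quasi-)norm ||g||_p
    linf = g.max() + eps               # ||g||_inf, guarded against all-zero input
    return lp / linf

sparse = np.array([1.0, 0.0, 0.0, 0.0])  # one dominant edge
dense = np.array([0.5, 0.5, 0.5, 0.5])   # gradients spread over all pixels
print(ratio_prior(sparse) < ratio_prior(dense))  # True
```

Because sharp images have sparser gradient distributions than blurred ones, minimizing such a measure steers the latent image toward sharpness.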

5.2. Effect of Main Parameters

The proposed fast nonlinear sparse model incorporates four key parameters: α, β, γ, and p. The parameter optimization proceeds in two stages: first, a two-dimensional grid search for the key parameters α and β, using the average PSNR as the selection criterion, and second, a separate grid search for the parameter p, given its relative independence, as illustrated in Figure 13c.
First, based on existing empirical knowledge, we construct a grid search for parameters α and β over the interval [1 × 10⁻³, 10 × 10⁻³] with a step size of 1 × 10⁻³, as illustrated in Figure 13a, where the boxed coordinate marks the position yielding the maximum average PSNR. This primary grid identifies the approximate optimal ranges for α and β. Subsequently, we perform a secondary grid search within the optimal range identified by the initial screen, employing a refined step size of 10⁻⁴ (Figure 13b), enabling precise determination of the final parameter values.
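The two-stage procedure can be sketched as a coarse-to-fine loop. The score function below is a hypothetical stand-in for the average PSNR over a validation set; the grid bounds and steps follow the values stated above.

```python
import itertools
import numpy as np

def grid_search(score, alphas, betas):
    """Return the (alpha, beta) pair that maximizes a scoring function."""
    return max(itertools.product(alphas, betas), key=lambda ab: score(*ab))

# Hypothetical smooth stand-in for "average PSNR over a validation set",
# peaking at alpha = 4e-3, beta = 7e-3.
score = lambda a, b: -((a - 4e-3) ** 2 + (b - 7e-3) ** 2)

# Stage 1: coarse grid over [1e-3, 10e-3], step 1e-3.
coarse = np.arange(1e-3, 10e-3 + 1e-4, 1e-3)
a0, b0 = grid_search(score, coarse, coarse)

# Stage 2: refined grid (step 1e-4) around the coarse optimum.
fine_a = np.arange(a0 - 1e-3, a0 + 1e-3, 1e-4)
fine_b = np.arange(b0 - 1e-3, b0 + 1e-3, 1e-4)
a1, b1 = grid_search(score, fine_a, fine_b)
print(round(a1, 4), round(b1, 4))  # (0.004, 0.007) up to grid resolution
```

The coarse stage bounds the search cost while the refined stage recovers near-optimal values at one-tenth the step size.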
Parameter stability is essential for robust optimization-based deblurring, so we evaluated the stability of the fast nonlinear sparse model using blur kernel similarity (Figure 14). Parameters α and β demonstrate stable performance across the refined grid range illustrated in Figure 13b, with kernel similarity variances of 5.62 × 10⁻⁵ and 1.78 × 10⁻⁷, respectively. All three parameters show minimal fluctuations in their similarity curves, confirming stable performance within reasonable ranges. Parameter γ exhibits particularly low sensitivity, maintaining stable kernel similarity across its 3–20 operating range (Figure 14c).
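Kernel similarity is commonly computed as the maximum normalized cross-correlation between the estimated and ground-truth kernels over all integer translations; we assume that convention in the sketch below (the paper may use a different variant).

```python
import numpy as np

def kernel_similarity(k1: np.ndarray, k2: np.ndarray) -> float:
    """Maximum normalized cross-correlation between two blur kernels over all
    integer shifts (1.0 = identical up to translation). Assumed convention."""
    k1 = k1 / np.linalg.norm(k1)
    k2 = k2 / np.linalg.norm(k2)
    h, w = k2.shape
    padded = np.pad(k1, ((h - 1, h - 1), (w - 1, w - 1)))
    best = 0.0
    for i in range(padded.shape[0] - h + 1):   # slide k2 over every shift
        for j in range(padded.shape[1] - w + 1):
            best = max(best, float(np.sum(padded[i:i + h, j:j + w] * k2)))
    return best

k = np.array([[1.0, 0.0], [0.0, 0.0]])
shifted = np.array([[0.0, 0.0], [0.0, 1.0]])  # the same kernel, translated
print(round(kernel_similarity(k, shifted), 3))  # 1.0
```

The shift-invariance matters because blind deblurring can only recover the kernel up to an unknown translation.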

5.3. Runtime Analysis

Computational efficiency is a crucial metric for evaluating deblurring algorithms, with shorter runtimes indicating higher efficiency. In this experiment, we measure the computational time of our algorithm at three image resolutions (255 × 255, 600 × 600, and 800 × 800 pixels), keeping the blur kernel size fixed at 27 × 27 throughout the trials. The timing results are presented in Table 2.
We compared runtimes between our method and several classical patch-based deblurring approaches [9,10,11,13,31]. As Table 2 shows, non-overlapping patch priors [11,13] offer substantial efficiency improvements over overlapping-patch priors [9,10,31], and our method achieves an approximately 50% runtime reduction compared to standard non-overlapping patch-based methods. This improvement results from our pixel-wise optimization strategy, which eliminates the patch-extraction step in each iteration.
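The design point, avoiding explicit patch extraction, can be illustrated with a toy local-minimum map. This is not the paper's prior; it only contrasts a per-patch loop against equivalent vectorized shifted views, which is where the runtime saving comes from.

```python
import numpy as np

def patch_min(img: np.ndarray, s: int = 3) -> np.ndarray:
    """Local minimum computed by explicitly extracting every s x s patch."""
    h, w = img.shape
    out = np.empty((h - s + 1, w - s + 1))
    for i in range(h - s + 1):
        for j in range(w - s + 1):
            out[i, j] = img[i:i + s, j:j + s].min()
    return out

def pixelwise_min(img: np.ndarray, s: int = 3) -> np.ndarray:
    """The same local minimum from s*s vectorized shifted views -- no per-patch
    extraction is performed inside the loop over pixels."""
    h, w = img.shape
    views = [img[di:h - s + 1 + di, dj:w - s + 1 + dj]
             for di in range(s) for dj in range(s)]
    return np.minimum.reduce(views)

img = np.random.rand(256, 256)
print(np.allclose(patch_min(img), pixelwise_min(img)))  # True
```

Both functions produce identical outputs, but the vectorized form touches each pixel a constant number of times instead of materializing every patch.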

5.4. Limitations

In the previous sections, we demonstrated the superior performance and computational efficiency of our proposed nonlinear sparse model compared to existing state-of-the-art methods on both synthetic datasets and real-world images. However, the model still exhibits certain limitations. First, owing to its pixel-wise computation during optimization, it is poorly resistant to salt-and-pepper noise; as shown in Figure 15 and Figure 16, restoration performance deteriorates significantly when such noise is present. Additionally, the model performs weakly on locally blurred images, such as that in Figure 17.

6. Conclusions

This paper introduces a novel nonlinear sparse regularization (LN regularization) based on the nonlinear coupling of the Lp and L∞ norms and, to facilitate effective optimization, develops an AGST algorithm. By integrating the LN and L0 regularization priors, this work establishes a new fast nonlinear sparse model. Statistical analyses demonstrate that LN regularization achieves the strongest sparsity among the compared regularizations. Comprehensive experiments on synthetic datasets and real-world blurred images validate that the fast nonlinear sparse model delivers superior deblurring performance. Quantitatively, the proposed model achieves approximately 1 dB higher PSNR and 0.04 higher SSIM than state-of-the-art optimization-based deblurring methods, while its pixel-wise optimization strategy reduces computational time by about 50% relative to conventional patch-based approaches.

Author Contributions

Conceptualization, Z.Z.; methodology, Z.Z., Z.X. and H.C.; software, Z.Z. and Z.X.; validation, Z.Z., Z.X. and Z.L.; formal analysis, Z.Z.; investigation, Z.Z. and Z.G.; resources, Z.G., Z.X. and H.C.; data curation, Z.Z., Z.X. and H.C.; writing—original draft preparation, Z.Z.; writing—review and editing, Z.G., Z.X., H.C., J.L. and Z.L.; visualization, Z.Z.; supervision, Z.X., C.W., Y.S., Y.J., J.L. and Z.L.; project administration, Z.X., J.L. and Z.L.; funding acquisition, H.C., Y.S., Y.J., J.L. and Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “the National Natural Science Foundation of China, grant number 62221004 and 62175110”, “The Funding of Nanjing University of Science and Technology, grant number TSXK2022D00x”, and “the Humanities and Social Science Project of the Ministry of Education of China, grant number 23YJCZH013”.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xu, L.; Lu, C.; Xu, Y.; Jia, J. Image smoothing via L0 gradient minimization. In Proceedings of the 2011 SIGGRAPH Asia Conference, Hong Kong, China, 12–15 December 2011. [Google Scholar]
  2. Xu, L.; Zheng, S.; Jia, J. Unnatural L0 Sparse Representation for Natural Image Deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013; IEEE: Piscataway, NJ, USA, 2013. [Google Scholar]
  3. Wang, J.; Ma, Q. The variant of the iterative shrinkage-thresholding algorithm for minimization of the ℓ1 over ℓ∞ norms. Signal Process. 2023, 211, 109104. [Google Scholar] [CrossRef]
  4. Wang, C.; Yan, M.; Yu, J. Sorted L1/L2 Minimization for Sparse Signal Recovery. J. Sci. Comput. 2023, 99, 32. [Google Scholar] [CrossRef]
  5. Wang, H.; Hu, C.; Qian, W.; Wang, Q. RT-Deblur: Real-time image deblurring for object detection. Vis. Comput. 2023, 40, 2873–2887. [Google Scholar] [CrossRef]
  6. Wang, C.; Tao, M.; Nagy, J.; Lou, Y. Limited-angle CT reconstruction via the L1/L2 minimization. arXiv 2020, arXiv:2006.00601. [Google Scholar]
  7. Wang, J. A wonderful triangle in compressed sensing. Inf. Sci. 2022, 611, 95–106. [Google Scholar] [CrossRef]
  8. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1964–1971. [Google Scholar]
  9. Pan, J.; Sun, D.; Pfister, H.; Yang, M.H. Blind Image Deblurring Using Dark Channel Prior. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  10. Chen, L.; Fang, F.; Wang, T.; Zhang, G. Blind Image Deblurring with Local Maximum Gradient Prior. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar]
  11. Wen, F.; Ying, R.; Liu, Y.; Liu, P.; Truong, T.K. A Simple Local Minimal Intensity Prior and an Improved Algorithm for Blind Image Deblurring. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 2923–2937. [Google Scholar] [CrossRef]
  12. Feng, X.; Tan, J.; Ge, X.; Liu, J.; Hu, D. Blind Image Deblurring via Weighted Dark Channel Prior. Circuits Syst. Signal Process. CSSP 2023, 42, 5478–5499. [Google Scholar] [CrossRef]
  13. Xu, Z.; Chen, H.; Li, Z. Fast blind deconvolution using a deeper sparse patch-wise maximum gradient prior. Signal Process. Image Commun. 2021, 90, 116050. [Google Scholar] [CrossRef]
  14. Chen, L.; Fang, F.; Lei, S.; Li, F.; Zhang, G. Enhanced Sparse Model for Blind Deblurring. In Proceedings of the 16th European Conference Computer Vision—ECCV 2020, Glasgow, UK, 23–28 August 2020. [Google Scholar]
  15. Richardson, W.H. Bayesian-Based Iterative Method of Image Restoration. J. Opt. Soc. Am. 1972, 62, 55–59. [Google Scholar] [CrossRef]
  16. Lucy, L.B. An Iterative Technique for the Rectification of Observed Distributions. Astron. J. 1974, 79, 745. [Google Scholar] [CrossRef]
  17. Fergus, R.; Singh, B.; Hertzmann, A.; Roweis, S.; Freeman, W. Removing camera shake from a single photograph. ACM Trans. Graph. 2006, 25, 787–794. [Google Scholar] [CrossRef]
  18. Cho, S.; Lee, S. Fast Motion Deblurring. ACM Trans. Graph. 2009, 28, 1–8. [Google Scholar] [CrossRef]
  19. Yang, A.Y.; Zhou, Z.; Ganesh, A.; Sastry, S.S.; Ma, Y. Fast L1-Minimization Algorithms For Robust Face Recognition. IEEE Trans. Image Process. 2010, 22, 3234–3246. [Google Scholar] [CrossRef] [PubMed]
  20. Candès, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing Sparsity by Reweighted ℓ1 Minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [Google Scholar] [CrossRef]
  21. Perrone, D.; Diethelm, R.; Favaro, P. Blind Deconvolution via Lower-Bounded Logarithmic Image Priors. In Energy Minimization Methods in Computer Vision and Pattern Recognition. EMMCVPR 2015; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015. [Google Scholar]
  22. Gasso, G.; Rakotomamonjy, A.; Canu, S. Recovering sparse signals with a certain family of non-convex penalties and DC programming. IEEE Trans. Signal Process. 2009, 57, 4686–4698. [Google Scholar] [CrossRef]
  23. Zou, H.; Li, R. One-step sparse estimates in nonconcave penalized likelihood models. Ann. Stat. 2008, 36, 1509–1533. [Google Scholar]
  24. Rao, B.D.; Kreutz-Delgado, K. An affine scaling methodology for best basis selection. IEEE Trans. Signal Process. 1999, 47, 187–200. [Google Scholar] [CrossRef]
  25. She, Y. Thresholding-based Iterative Selection Procedures for Model Selection and Shrinkage. Electron. J. Stat. 2009, 3, 384–415. [Google Scholar] [CrossRef]
  26. Zuo, W.; Meng, D.; Zhang, L.; Feng, X.; Zhang, D. A generalized iterated shrinkage algorithm for non-convex sparse coding. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; IEEE: Piscataway, NJ, USA, 2013. [Google Scholar]
  27. Zuo, W.; Ren, D.; Zhang, D.D.; Gu, S.; Zhang, L. Learning Iteration-wise Generalized Shrinkage–Thresholding Operators for Blind Deconvolution. IEEE Trans. Image Process. 2016, 25, 1751–1764. [Google Scholar] [CrossRef]
  28. Pan, J.; Hu, Z.; Su, Z.; Yang, M.H. L0-Regularized Intensity and Gradient Prior for Deblurring Text Images and Beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 342–355. [Google Scholar] [CrossRef]
  29. Li, J.; Lu, W. Blind image motion deblurring with L 0 -regularized priors. J. Vis. Commun. Image Represent. 2016, 40, 14–23. [Google Scholar] [CrossRef]
  30. Yan, Y.; Ren, W.; Guo, Y.; Wang, R.; Cao, X. Image Deblurring via Extreme Channels Prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017. [Google Scholar]
  31. Eqtedaei, A.; Ahmadyfard, A. Blind image deblurring using both L0 and L1 regularization of Max-min prior. Neurocomputing 2024, 592, 127727. [Google Scholar] [CrossRef]
  32. Joshi, N.; Szeliski, R.; Kriegman, D.J. PSF estimation using sharp edge prediction. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008. [Google Scholar]
  33. Xu, L.; Jia, J. Two-Phase Kernel Estimation for Robust Motion Deblurring. In Proceedings of the Computer Vision—ECCV 2010 11th European Conference on Computer Vision, Crete, Greece, 5–11 September 2010. [Google Scholar]
  34. Pan, J.; Liu, R.; Su, Z.; Gu, X. Kernel Estimation from Salient Structure for Robust Motion Deblurring. Signal Process. Image Commun. 2013, 28, 1156–1170. [Google Scholar] [CrossRef]
  35. Liu, J.; Yan, M.; Zeng, T. Surface-Aware Blind Image Deblurring. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 1041–1055. [Google Scholar] [CrossRef]
  36. Xue, J.; Zhao, Y.Q.; Wu, T.; Chan, J.C.W. Tensor Convolution-Like Low-Rank Dictionary for High-Dimensional Image Representation. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 13257–13270. [Google Scholar] [CrossRef]
  37. Wu, T.; Gao, B.; Fan, J.; Xue, J.; Woo, W.L. Low-Rank Tensor Completion Based on Self-Adaptive Learnable Transforms. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 8826–8838. [Google Scholar] [CrossRef]
  38. Bu, Y.; Zhao, Y.; Xue, J.; Yao, J.; Chan, J.C.W. Transferable Multiple Subspace Learning for Hyperspectral Image Super-Resolution. IEEE Geosci. Remote Sens. Lett. 2024, 21, 5501005. [Google Scholar] [CrossRef]
  39. Sun, J.; Cao, W.; Xu, Z.; Ponce, J. Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; IEEE: Piscataway, NJ, USA, 2015. [Google Scholar]
  40. Chakrabarti, A. A Neural Approach to Blind Motion Deblurring. In Proceedings of the Computer Vision—ECCV 2016 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016. [Google Scholar]
  41. Gong, D.; Yang, J.; Liu, L.; Zhang, Y.; Reid, I.; Shen, C.; Hengel, A.V.D.; Shi, Q. From Motion Blur to Motion Flow: A Deep Learning Solution for Removing Heterogeneous Motion Blur. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  42. Ren, D.; Zhang, K.; Wang, Q.; Hu, Q.; Zuo, W. Neural Blind Deconvolution Using Deep Priors. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3338–3347. [Google Scholar]
  43. Feng, Z.; Zhang, J.; Ran, X.; Li, D.; Zhang, C. Ghost-Unet: Multi-stage network for image deblurring via lightweight subnet learning. Vis. Comput. 2025, 41, 141–155. [Google Scholar] [CrossRef]
  44. Mao, X.; Liu, Y.; Shen, W.; Li, Q.; Wang, Y. Deep Residual Fourier Transformation for Single Image Deblurring. arXiv 2021, arXiv:2111.11745. [Google Scholar]
  45. Mou, C.; Wang, Q.; Zhang, J. Deep Generalized Unfolding Networks for Image Restoration. arXiv 2022, arXiv:2204.13348. [Google Scholar] [CrossRef]
  46. Zhang, L.; Zhang, H.; Chen, J.; Wang, L. Hybrid Deblur Net: Deep Non-uniform Deblurring with Event Camera. IEEE Access 2020, 8, 148075–148083. [Google Scholar] [CrossRef]
  47. Kong, L.; Dong, J.; Li, M.; Ge, J.; Pan, J.-s. Efficient Frequency Domain-based Transformers for High-Quality Image Deblurring. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 5886–5895. [Google Scholar]
  48. Köhler, R.; Hirsch, M.; Mohler, B.; Schölkopf, B.; Harmeling, S. Recording and Playback of Camera Shake: Benchmarking Blind Deconvolution with a Real-World Database. In Proceedings of the Computer Vision—ECCV 2012 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012. [Google Scholar]
  49. Sun, L.; Cho, S.; Wang, J.; Hays, J. Edge-based blur kernel estimation using patch priors. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Cambridge, MA, USA, 19–21 April 2013. [Google Scholar]
  50. Lai, W.S.; Huang, J.B.; Hu, Z.; Ahuja, N.; Yang, M.H. A Comparative Study for Single Image Blind Deblurring. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  51. Zhang, H.; Dai, Y.; Li, H.; Koniusz, P. Deep Stacked Hierarchical Multi-patch Network for Image Deblurring. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
  52. Kupyn, O.; Martyniuk, T.; Wu, J.; Wang, Z. DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 7 October–2 November 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
  53. Chen, L.; Lu, X.; Zhang, J.; Chu, X.; Chen, C. HINet: Half Instance Normalization Network for Image Restoration. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  54. Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Multi-Stage Progressive Image Restoration. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  55. Krishnan, D.; Tay, T.; Fergus, R. Blind deconvolution using a normalized sparsity measure. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011. [Google Scholar]
  56. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Efficient Marginal Likelihood Optimization in Blind Deconvolution. In Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011. [Google Scholar]
  57. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the International Conference on Computer Vision, Tokyo, Japan, 25–27 May 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 479–486. [Google Scholar]
  58. Fang, Z.; Wu, F.; Dong, W.; Li, X.; Wu, J.; Shi, G. Self-supervised Non-uniform Kernel Estimation with Flow-based Motion Prior for Blind Image Deblurring. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 18105–18114. [Google Scholar]
  59. Liu, C.X.; Wang, X.; Xu, X.; Tian, R.; Li, S.; Qian, X.; Yang, M.H. Motion-Adaptive Separable Collaborative Filters for Blind Motion Deblurring. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 25595–25605. [Google Scholar]
  60. Mao, X.; Li, Q.; Wang, Y. AdaRevD: Adaptive Patch Exiting Reversible Decoder Pushes the Limit of Image Deblurring. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 25681–25690. [Google Scholar]
Figure 1. Log probability curves of the gradients of the intermediate latent images obtained from different sparse regularizations on the dataset in [8].
Figure 2. Curves of Equation (7) under different values of A i , where λ = 2 and p = 0.5.
Figure 3. Quantitative evaluations of different algorithms on Levin et al.’s dataset. (a) Average PSNR; (b) average SSIM; (c) cumulative error ratio [9,11,14,30,31,33,51,52,53].
Figure 4. A visual comparison of the deblurring results for one image from the dataset [8]. (a) Blurred image; (b) Clear image; (c) Yan et al. [30]; (d) Pan et al. [9]; (e) Wen et al. [11]; (f) Eqtedaei et al. [31]; (g) Chen et al. [14]; (h) Ours.
Figure 5. The cumulative error ratio statistics curve on Sun’s dataset [9,14,18,31,33,52,54,55,56].
Figure 6. A visual example from Sun’s dataset [49]. (a) Blurred image; (b) Clear image; (c) Pan et al. [9]; (d) Eqtedaei et al. [31]; (e) Chen et al. [14]; (f) Ours.
Figure 7. A quantitative comparison of restoration results for face images from the dataset of Lai et al. (a) Average PSNR; (b) average SSIM [9,11,14,31,51,52,53,54].
Figure 8. Visual examples of two classic scenarios (face and text) from the dataset of Lai et al. (a) Blurred image; (b) Clear image; (c) Eqtedaei et al. [31]; (d) Chen et al. [14]; (e) Ours.
Figure 9. A quantitative comparison of restoration results for text images in the dataset of Lai et al. (a) Average PSNR; (b) average SSIM [9,11,13,14,31,51,52,54].
Figure 10. A quantitative comparison of our method with other deep learning methods on the dataset proposed by Köhler et al. (a) PSNR; (b) MSSIM.
Figure 11. A visual example from the dataset proposed by Köhler et al. (a) Blurred image; (b) clear image; (c) Kupyn et al. [52]; (d) Zamir et al. [54]; (e) Zhang et al. [51]; (f) Fang et al. [58]; (g) Mao et al. [44]; (h) Kong et al. [47]; (i) Mou et al. [45]; (j) Ours.
Figure 12. A comparative analysis of restoration results for a real-world blurred image. (a) Blurred image; (b) Pan et al. [9]; (c) Chen et al. [10]; (d) Xu et al. [13]; (e) Eqtedaei et al. [31]; (f) Wen et al. [11]; (g) Chen et al. [14]; (h) Ours.
Figure 13. Parameter search grids. (a,b) Primary and secondary search grids for α and β; (c) independent search grid for parameter p.
Figure 14. Impact of three key parameters on our model’s restoration performance.
Figure 15. Quantitative analysis of our method under different noise levels. (a) Average PSNR; (b) average kernel similarity.
Figure 16. An example of noise image deblurring. (a) Noise-free blurred image; (b) Blurred image with salt-and-pepper noise; (c) Noise-free deblurring result; (d) Deblurring result with salt-and-pepper noise.
Figure 17. An example of a highly non-uniform locally blurred image.
Table 1. Quantitative analysis of the ablation experiment.
Regularization     L1/L2    L1/L∞    Lp/L∞    L0       L1/L2 + L0   L1/L∞ + L0   Lp/L∞ + L0
Average PSNR (dB)  28.481   30.829   31.420   31.555   31.739       31.646       32.234
Average SSIM       0.811    0.884    0.894    0.889    0.895        0.895        0.909
Table 2. Comparison of runtime performance (in seconds) across three different resolutions for various methods.
Method                 255 × 255   600 × 600   800 × 800
Pan et al. [9]         63.652      327.051     659.056
Wen et al. [11]        10.229      22.034      59.721
Xu et al. [13]         9.321       35.021      82.246
Chen et al. [10]       33.041      179.554     327.993
Eqtedaei et al. [31]   27.079      114.625     198.729
Ours                   4.657       20.871      35.443

Share and Cite

Zhang, Z.; Guo, Z.; Xu, Z.; Chen, H.; Wang, C.; Song, Y.; Lai, J.; Ji, Y.; Li, Z. A Fast Nonlinear Sparse Model for Blind Image Deblurring. J. Imaging 2025, 11, 327. https://doi.org/10.3390/jimaging11100327

