Article

Blind Deblurring Based on Sigmoid Function

Shuhan Sun, Lizhen Duan, Zhiyong Xu and Jianlin Zhang *

1 Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu 610209, China
2 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
3 Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(10), 3484; https://doi.org/10.3390/s21103484
Submission received: 23 April 2021 / Revised: 10 May 2021 / Accepted: 12 May 2021 / Published: 17 May 2021
(This article belongs to the Section Sensing and Imaging)

Abstract

Blind image deblurring, also known as blind image deconvolution, is a long-standing challenge in image processing and low-level vision. To restore a clear version of a severely degraded image, this paper proposes a blind deblurring algorithm based on the sigmoid function, which constructs novel blind deblurring estimators for both the original image and the degradation process by exploiting the excellent properties of the sigmoid function and by considering image derivative constraints. Owing to these symmetric, non-linear estimators of low computational complexity, the algorithm can obtain high-quality images. The algorithm is also extended to image sequences. The sigmoid function enables the proposed algorithm to achieve state-of-the-art performance in various scenarios, including natural, text, face, and low-illumination images, and the method extends naturally to non-uniform deblurring. Quantitative and qualitative experimental evaluations indicate that the algorithm can remove the blur effect and improve the image quality of actual and simulated images. Finally, the use of the sigmoid function provides a new approach to performance optimization in the field of image restoration.

1. Introduction

Digital images are an important source of information for humans. However, due to defects of the imaging equipment (optical aberration, defocusing, etc.) and limitations of the shooting conditions (insufficient light, bad weather, and atmospheric turbulence), the images obtained are often of low visual quality. Recovering the scene, or restoring the clear picture from its blurred counterpart when the blur parameters are unknown, is a blind deconvolution problem. Blind deconvolution is a well-known, ill-posed problem. This paper also takes the effects of noise into account; to obtain an image of high visual quality, it is necessary to strike a balance between resolution and noise suppression. In the deblurring literature, the observed blurred image g(x, y) is modeled as the convolution of a clear image o(x, y) with the point spread function (PSF) h(x, y), plus additive noise n(x, y). The PSF, also known as the blur kernel [1], causes the image degradation. In the image restoration literature, image degradation is commonly modeled as follows [2]
$$g(x, y) = o(x, y) * h(x, y) + n(x, y) \tag{1}$$
where "*" is the convolution operator, o(x, y) and g(x, y) stand for the given clear image and its degraded counterpart, respectively, h(x, y) denotes the point spread function (PSF) representing the degradation induced in the spatial domain, and n(x, y) represents the additive noise.
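To make the model in Equation (1) concrete, the following is a minimal sketch of the degradation process in Python with NumPy/SciPy (the paper's own code is in MATLAB, so this is an illustrative translation; the Gaussian kernel shape and noise level are assumptions, not values from the paper).

```python
# Minimal sketch of the degradation model g = o * h + n (Equation (1)).
# The Gaussian PSF and the noise level are illustrative choices only.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, sigma=2.0):
    """An isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()

def degrade(o, h, noise_std=0.01, rng=None):
    """Blur a clear image o (values in [0, 1]) with PSF h, add Gaussian noise n."""
    rng = np.random.default_rng() if rng is None else rng
    g = fftconvolve(o, h, mode="same")           # o(x, y) * h(x, y)
    g = g + rng.normal(0.0, noise_std, o.shape)  # + n(x, y)
    return np.clip(g, 0.0, 1.0)
```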
Image blurring is a significant detriment to subsequent tasks such as object recognition and object tracking. Therefore, image restoration technology has attracted extensive attention, and many academics have presented meaningful work. Categorized by problem-solving approach, there are four main types of image restoration methods: spatial-domain algorithms, frequency-domain algorithms, image modeling and estimation algorithms, and neural network algorithms. Spatial-domain algorithms are the most common type and were the first to be proposed and used; the most representative is the regularization method [3]. For an ill-posed problem, the condition number is large; by adding regularization to the loss function, the approach uses priors on the original image to reduce the condition number. Image restoration algorithms in the frequency domain [4] can obtain good results quickly. They exploit the different frequency characteristics of flat regions and edge regions, converting the image to the frequency domain through a transformation model [5,6]; after processing the data in the frequency domain, they convert the results back to the spatial domain. Filtering methods are widely used here, the most typical being Wiener filtering, which essentially minimizes the mean square error (MSE). The study of stochastic processes is another active direction: Gaussian random field theory and Markov field theory are well known and apply Bayesian theory to image restoration. The most important probabilistic estimators are maximum likelihood estimation (MLE) and maximum a posteriori (MAP) estimation [7], by which the image restoration problem is converted to probability estimation through Bayesian inference; the maximum likelihood estimation algorithm and the Richardson–Lucy (RL) algorithm [8] are the most representative. A multiplicative iterative approach (MIA) [9,10] was proposed based on a probabilistic model; it naturally preserves the non-negativity constraint on the iterative solutions when the initial estimates are non-negative, producing restored images of high quality. At present, neural networks are the most popular approach in computer vision. An artificial neural network [11,12] offers a new way to find the minimum of the loss function; however, artificial neural networks tend to be more expensive in terms of computational complexity.
When restoring image sequences, it is usually assumed that the target in the sequence does not change significantly over a short time. The estimates from adjacent frames of a short-exposure sequence are used to approximate the current frame and to obtain a better estimate of the target image. Owing to this redundancy of information, image sequences provide supplementary information for image recovery; compared with single-image restoration, they can reduce meaningless solutions and improve restoration stability. However, these benefits come at a cost: image sequences require more storage memory, and the use of information from adjacent frames leads to more computation.
In this paper, an efficient scheme for blind deblurring is introduced via the sigmoid function, which was inspired by the multiplicative iterative algorithm (MIA) [9,10]. The MIA, as reported in [9,10], is efficient but limited to weak degradation. To overcome this drawback and deal with the severe degradation problem, a new form of iteration strategy is adopted in this work, which employs the sigmoid function, leading to a novel blind deconvolution algorithm for restoration of seriously degraded and blurred images.
The contributions are as follows. First, this paper proposes an image restoration model based on the sigmoid function. The new iteration model intrinsically keeps the image non-negative during the iteration process, so no additional constraints are needed to keep the pixel values non-negative. Second, the approach can effectively restore severely degraded images using the sigmoid function and the information shared between frames of a sequence; compared with classical and state-of-the-art methods, experiments show that the new method is highly competitive on severely degraded images. Third, to better evaluate the algorithm's performance, this paper presents extensive blind deblurring results, which demonstrate that the new algorithm matches the performance of state-of-the-art methods.

2. Related Work

In recent years, significant progress has been made in image deblurring [13]. In particular, using prior information about the image to deblur has attracted significant attention from academics. Many contributions reported in the literature are based on the maximum a posteriori (MAP) framework and variational Bayesian methods [14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32]. These methods often involve two steps: first, the blur kernel is estimated from the observed image; second, the latent image is estimated from the estimated blur kernel through a non-blind deconvolution method [33,34,35,36]. Since the simplest MAP method cannot always estimate the blur kernel effectively, it is not easy to obtain a satisfactory image.
The key to image deblurring is to use the image's prior information to constrain the blur kernel and the image. The most widely used prior is the gradient sparsity prior [37,38,39,40]. However, in reference [18], the authors found that the gradient sparsity prior often favors blurred images over clear ones. In references [14,15,16,29,41,42], the sharp edges of the image are constrained in order to alleviate this shortcoming. However, it has to be acknowledged that images do not always have sharp edges; for example, many natural images have unsharp edges. Other image priors are also widely used, for example, the intensity prior [22], the normalized sparsity prior [19], the dark channel prior [23], and data-driven learned priors [28]. These image priors have also achieved remarkable results.
With the popularity of deep neural networks, data-driven methods have also achieved great success [43,44,45,46,47,48]. In reference [43], Sun et al. adopted a convolutional neural network (CNN) to remove motion blur. Nah et al. [45] designed a multi-scale convolutional neural network that can restore the image without estimating the blur kernel. Furthermore, Kupyn et al. [46] designed a generative adversarial network (GAN) to restore images end-to-end. Su et al. [47] applied an improved convolutional neural network to video deblurring, and Yang et al. [48] designed a 3D convolutional encoder–decoder network for video deblurring. However, data-driven methods do not always generalize well when the test images differ from the training dataset.
Having reviewed the image restoration progress of the last decade, the rest of this article is organized as follows. In Section 3, a new blind deblurring algorithm based on the sigmoid function (BDA-SF for short) is introduced in detail, together with practical applications. In Section 4, experimental results are presented for performance evaluation and compared with those of existing algorithms. Section 5 provides a summary of this paper.

3. Methods

3.1. Image Restoration Model

Based on the idea of the multiplicative iterative algorithm (MIA), which is efficient but limited to weak degradation, a novel blind deconvolution algorithm employing the sigmoid function, the BDA-SF, is devised for the restoration of seriously degraded images, overcoming MIA's limitation. The algorithm converges well with simple parameter selection, avoids numerical instability, and naturally satisfies the non-negativity constraint. It has been shown that the performance of the least-squares algorithm is almost insensitive to whether the noise is Poissonian or Gaussian [49], and that, for Poissonian noise, no strong difference exists between the results of the ISRA and those of the RLA, while for Gaussian noise the ISRA produces much better results than the RLA [50]. Here, owing to the robustness of the Gaussian noise hypothesis, the likelihood function [51] can be established as
$$P(g \mid o, h) = \prod_{x,y} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{[g(x,y) - h(x,y) * o(x,y)]^2}{2\sigma^2}\right) \tag{2}$$
where σ² is the noise variance, g(x, y) is the blurred image, o(x, y) is the original image, and h(x, y) is the point spread function (PSF). The corresponding log-likelihood [51], multiplied by σ², is
$$\sigma^2 \log[P(g \mid o, h)] = \sum_{x,y} \sigma^2 \log\left[\frac{1}{\sqrt{2\pi}\,\sigma}\right] - \sum_{x,y} \frac{[g(x,y) - h(x,y) * o(x,y)]^2}{2} \tag{3}$$
$$J(o, h) = -\sigma^2 \log[P(g \mid o, h)] = \sum_{x,y} \frac{[g(x,y) - h(x,y) * o(x,y)]^2}{2} + C = \frac{1}{2}\|g(x,y) - h(x,y) * o(x,y)\|_2^2 + C \tag{4}$$
where C is a constant independent of o(x, y) and h(x, y), and J(o, h) is the loss function. The problem is highly ill-posed: many different solution pairs (o, h) give rise to the same g [22]. In order to make the problem well-posed, this paper uses sparsity priors to constrain the image and the kernel [20]. This paper uses ‖h‖₁ instead of the ‖h‖₂ used in [20], which works to constrain the kernel to be sparse [17,52].
$$p(o) = \alpha \|\nabla o\|_0 \tag{5}$$
$$p(h) = \gamma \|h\|_1 \tag{6}$$
$$p(o, h) = p(o) + p(h) \tag{7}$$
where α and γ are penalty parameters, "∇" is the gradient operator, and the L0 norm is modeled by the numerical approximation of [53], i.e., $\|\nabla o\|_0 \approx \frac{\|\nabla o\|_2^2}{\|\nabla o\|_2^2 + \beta}$, where β is a modulation parameter (set to 0.001 in this paper). The loss function can then be written as
$$J(o, h) = \|g(x,y) - h(x,y) * o(x,y)\|^2 + p(o, h) \tag{8}$$
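As a concrete reading of Equation (8), the following sketch evaluates the data term and the two priors. It treats ‖∇o‖₂² as a global scalar, matching the approximation above; the forward-difference discretization of the gradient is an illustrative assumption rather than the paper's stated choice.

```python
# Sketch of the loss J(o, h) in Equation (8): the data term plus
# p(o) = alpha * ||grad o||_0 (approximated as in [53]) and p(h) = gamma * ||h||_1.
# The forward-difference gradient is an illustrative discretization.
import numpy as np
from scipy.signal import fftconvolve

def image_grad(o):
    """Forward differences along both axes (edge rows/columns repeated)."""
    gx = np.diff(o, axis=1, append=o[:, -1:])
    gy = np.diff(o, axis=0, append=o[-1:, :])
    return gx, gy

def loss(g, o, h, alpha=0.04, gamma=2.0, beta=1e-3):
    data = np.sum((g - fftconvolve(o, h, mode="same")) ** 2)  # ||g - h*o||^2
    gx, gy = image_grad(o)
    grad_sq = np.sum(gx**2 + gy**2)                 # ||grad o||_2^2
    p_o = alpha * grad_sq / (grad_sq + beta)        # approximate L0 prior
    p_h = gamma * np.sum(np.abs(h))                 # L1 prior on the kernel
    return data + p_o + p_h
```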
As in the MIA [10], blind deconvolution minimizes the loss function. Taking the partial derivatives of J(o, h) with respect to o(x, y) and h(x, y), respectively, gives
$$\frac{\partial J(o,h)}{\partial o} = -h^c(x,y) * [g(x,y) - h(x,y) * o(x,y)] + \nabla_o p(o,h) \tag{9}$$
$$\frac{\partial J(o,h)}{\partial h} = -o^c(x,y) * [g(x,y) - h(x,y) * o(x,y)] + \nabla_h p(o,h) \tag{10}$$
where f^c(·) denotes the adjoint of f(·), $\nabla_o p(o,h) = \alpha \cdot \frac{2\beta \nabla o}{(\|\nabla o\|_2^2 + \beta)^2}$, and $\nabla_h p(o,h) = \gamma \cdot \frac{h}{\|h\|_2}$. Setting (9) and (10) to zero yields the maximum log-likelihood equations:
$$-h^c(x,y) * [g(x,y) - h(x,y) * o(x,y)] + \nabla_o p(o,h) = 0 \tag{11}$$
$$-o^c(x,y) * [g(x,y) - h(x,y) * o(x,y)] + \nabla_h p(o,h) = 0 \tag{12}$$
Multiplying both sides of (11) and (12) by a positive real number λ, a parameter that adjusts the convergence rate of the algorithm (a large λ makes the algorithm converge quickly), this paper adopts the sigmoid function to improve the optimization:
$$2\,\mathrm{Sigmoid}\big(\lambda_1 \big(h^c(x,y) * [g(x,y) - h(x,y) * o(x,y)] - \nabla_o p(o,h)\big)\big) = 1 \tag{13}$$
$$2\,\mathrm{Sigmoid}\big(\lambda_2 \big(o^c(x,y) * [g(x,y) - h(x,y) * o(x,y)] - \nabla_h p(o,h)\big)\big) = 1 \tag{14}$$
Multiply both sides of (13) and (14) by the estimates of o(x, y) and h(x, y), respectively, to arrive at the final iterative formulae for image restoration.
$$o_{k+1}(x,y) = 2\,o_k(x,y)\,\mathrm{Sigmoid}\!\left(-\lambda_1 \frac{\partial J(o_k,h_k)}{\partial o_k(x,y)}\right) = 2\,o_k(x,y)\,\mathrm{Sigmoid}\big(\lambda_1\big(h_k^c(x,y) * [g(x,y) - h_k(x,y) * o_k(x,y)] - \nabla_o p(o,h)\big)\big), \quad \lambda_1 > 0 \tag{15}$$
$$h_{k+1}(x,y) = 2\,h_k(x,y)\,\mathrm{Sigmoid}\!\left(-\lambda_2 \frac{\partial J(o_k,h_k)}{\partial h_k(x,y)}\right) = 2\,h_k(x,y)\,\mathrm{Sigmoid}\big(\lambda_2\big(o_k^c(x,y) * [g(x,y) - h_k(x,y) * o_k(x,y)] - \nabla_h p(o,h)\big)\big), \quad \lambda_2 > 0 \tag{16}$$
For (15) and (16), this paper initializes o(x, y) and h(x, y) as all-ones matrices, since no better initial estimates are available. In order to make the result converge and to protect the edge information of the image while removing noise, Equations (15) and (16) are rewritten as
$$o_{k+1}(x,y) = 2\,o_k(x,y)\,\mathrm{Sigmoid}\big(\lambda_1\big(h_k^c(x,y) * \big[(g(x,y) - h_k(x,y) * o_k(x,y)) * (1 + \mu\, h_{SobelV}(x,y) * h_{SobelH}(x,y)) * h_{GaussianLP}(x,y)\big] - \nabla_o p(o,h)\big)\big) \tag{17}$$
$$h_{k+1}(x,y) = 2\,h_k(x,y)\,\mathrm{Sigmoid}\big(\lambda_2\big(o_k^c(x,y) * \big[(g(x,y) - h_k(x,y) * o_k(x,y)) * (1 + \mu\, h_{SobelV}(x,y) * h_{SobelH}(x,y)) * h_{GaussianLP}(x,y)\big] - \nabla_h p(o,h)\big)\big) \tag{18}$$
Here, h_GaussianLP(x, y) is the Gaussian low-pass filter, h_SobelV(x, y) is the Sobel vertical edge detector impulse response, and h_SobelH(x, y) is the Sobel horizontal edge detector impulse response. μ ∈ [0.15, 0.35] is the edge protection factor; a larger value is chosen when the image contains many details, and a smaller one otherwise. λ ∈ [600, 1200] is the coefficient that controls the convergence rate; a larger λ gives faster convergence.
The Sobel vertical edge detector impulse response, h_SobelV(x, y), is defined as
$$h_{SobelV}(x,y) = \begin{cases} \operatorname{sgn}(x), & |y| = 1 \text{ and } |x| = 1 \\ 2\operatorname{sgn}(x), & |y| = 0 \text{ and } |x| = 1 \\ 0, & \text{otherwise} \end{cases} \tag{19}$$
and the Sobel horizontal edge detector impulse response, h_SobelH(x, y), is defined as
$$h_{SobelH}(x,y) = \begin{cases} \operatorname{sgn}(y), & |x| = 1 \text{ and } |y| = 1 \\ 2\operatorname{sgn}(y), & |x| = 0 \text{ and } |y| = 1 \\ 0, & \text{otherwise} \end{cases} \tag{20}$$
where sgn(·) denotes the sign function, i.e.,
$$\operatorname{sgn}(x) = \begin{cases} 1, & x > 0 \\ 0, & x = 0 \\ -1, & x < 0 \end{cases} \tag{21}$$
The Gaussian low-pass filter, h_GaussianLP(x, y), is defined as
$$h_{GaussianLP}(x,y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}} \tag{22}$$
The support of the Gaussian low-pass filter is fixed at 5 × 5, and the standard deviation σ is set between 0.5 and 2.0. The value of σ is a user parameter related to the noise level of the input image; when the image contains much noise, a large σ is chosen. For simplicity, dropping "(x, y)" in (17) and (18) gives
$$o_{k+1} = 2\,o_k\,\mathrm{Sigmoid}\big(\lambda_1\big(h_k^c * \big[(g - h_k * o_k) * (1 + \mu\, h_{SobelV} * h_{SobelH}) * h_{GaussianLP}\big] - \nabla_o p(o,h)\big)\big) \tag{23}$$
$$h_{k+1} = 2\,h_k\,\mathrm{Sigmoid}\big(\lambda_2\big(o_k^c * \big[(g - h_k * o_k) * (1 + \mu\, h_{SobelV} * h_{SobelH}) * h_{GaussianLP}\big] - \nabla_h p(o,h)\big)\big) \tag{24}$$
Iterating Equations (23) and (24) to alternately estimate o_k(x, y) and h_k(x, y) maximizes Equation (3) and yields the best estimate of the original image. The main steps of the proposed BDA-SF are shown in Algorithms 1 and 2.
Algorithm 1 Estimate latent image
Input: Blurred image g, kernel estimate h_0, regularization weights α, γ, parameter λ, iteration counts J, K.
1: o_k ← g, h_k ← h_0.
2: while iter < K do
3:     if iter < J then
4:         for iter = 0 : J − 1 do
5:             Compute o_{k+1} via (17) using h_k, o_k;
6:             Compute h_{k+1} via (18) using h_k, o_k;
7:         end for
8:     else (J ≤ iter < K)
9:         for iter = 0 : K − 1 do
10:             Compute o_{k+1} via (17) using h_k, o_k;
11:             Compute h_{k+1} via (18) using h_k, o_k;
12:         end for
13:     end if
14: end while
Output: Intermediate latent image o; blur kernel h.
Algorithm 2 Estimate blur kernel
Input: Blurred image g, maximum iterations K.
1: while iter < K do
2:     Update latent image o via Algorithm 1;
3:     Update blur kernel h via (18);
4: end while
Output: Intermediate latent image o; blur kernel h.
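To make the update concrete, here is a compact sketch of one alternating BDA-SF step following Equations (23) and (24). This is an illustrative reading, not the authors' released code: the sign conventions follow the reconstructed equations above, the prior-gradient terms are omitted for brevity, the adjoint is realized as correlation with a flipped kernel, and the center-crop and kernel renormalization are common stabilizing conventions assumed here rather than stated in the paper.

```python
# A compact sketch of one BDA-SF update (Equations (23)-(24)).
# Prior-gradient terms are omitted; parameter defaults follow the ranges
# quoted in the text (mu in [0.15, 0.35], lambda in [600, 1200]).
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import expit  # numerically stable sigmoid

# Sobel impulse responses (Equations (19)-(20)).
h_sobel_v = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
h_sobel_h = h_sobel_v.T

def gaussian_lp(size=5, sigma=1.0):
    """5x5 Gaussian low-pass filter of Equation (22), normalized."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()

def adjoint(f):
    """Adjoint of convolution with f: correlation, i.e., f flipped."""
    return f[::-1, ::-1]

def crop_center(a, shape):
    """Center crop, used to restrict the kernel gradient to kernel support."""
    r = (a.shape[0] - shape[0]) // 2
    c = (a.shape[1] - shape[1]) // 2
    return a[r:r + shape[0], c:c + shape[1]]

def bdasf_step(g, o_k, h_k, lam1=800.0, lam2=1000.0, mu=0.25, sigma_lp=1.0):
    """One alternating update of the image o and the kernel h."""
    residual = g - fftconvolve(o_k, h_k, mode="same")
    # Edge-protecting term (1 + mu * h_SobelV * h_SobelH) on the residual...
    edge = fftconvolve(fftconvolve(residual, h_sobel_v, mode="same"),
                       h_sobel_h, mode="same")
    # ...followed by the Gaussian low-pass filter.
    filt = fftconvolve(residual + mu * edge, gaussian_lp(5, sigma_lp), mode="same")
    o_next = 2.0 * o_k * expit(lam1 * fftconvolve(filt, adjoint(h_k), mode="same"))
    grad_h = crop_center(fftconvolve(filt, adjoint(o_k), mode="same"), h_k.shape)
    h_next = 2.0 * h_k * expit(lam2 * grad_h)
    return o_next, h_next / max(h_next.sum(), 1e-12)  # keep kernel normalized
```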

3.2. Sigmoid Function

It is the sigmoid function of the proposed BDA-SF that provides the critical difference from the MIA, which uses the exponential function, and significantly improves the blind deconvolution performance. For comparison, the two functions are plotted in Figure 1: Figure 1a shows exponential functions with different coefficients, while Figure 1b shows a cluster of sigmoid functions. Figure 1 shows that the exponential function is asymmetric: it changes slowly for negative arguments and steeply for positive ones. That is to say, when h^c(x, y) * [g(x, y) − h(x, y) * o(x, y)] is much less than zero, the exponential factor tends towards zero and may fail to update the estimator, while when it is much greater than zero, the estimator may over-update, incurring an enormous value in the next iteration. We suppose this behavior of the exponential function is why the MIA cannot be applied to the blind deconvolution of severely degraded images. Conversely, the sigmoid function is symmetric and free from these problems; furthermore, its saturation property helps it cope with very large arguments. Benefiting from these properties of the sigmoid function, the proposed blind deconvolution algorithm performs well, especially on severely degraded images.
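The asymmetry argument above is easy to verify numerically; the snippet below compares the exponential update factor with the bounded factor 2·Sigmoid(λx) used by BDA-SF (λ = 1 is chosen only for illustration).

```python
# Numeric illustration: exp(lam * x) vanishes for x << 0 and explodes for
# x >> 0, while 2 * sigmoid(lam * x) stays within (0, 2) and equals 1 at
# x = 0, so the multiplicative update can neither die out nor blow up.
import numpy as np
from scipy.special import expit

lam = 1.0
for x in (-5.0, -1.0, 0.0, 1.0, 5.0):
    print(f"x = {x:+.1f}   exp factor: {np.exp(lam * x):10.4f}   "
          f"2*sigmoid factor: {2.0 * expit(lam * x):6.4f}")
```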
Image sequence: The target scene changes little when imaging with a short exposure (the imaging system tends to have a high frame rate). Therefore, this paper treats adjacent frames as the same target image degraded by different point spread functions (PSFs). Figure 2 describes the image degradation process.
It is reasonable to assume that adjacent frames do not change significantly over a short period [51]. For image degradation by atmospheric turbulence, the displacement in the target image mainly comes from the turbulence-induced degradation rather than from the target itself changing. Figure 3 simulates the image degradation caused by atmospheric turbulence; the blur kernels are generated by random phase screens [54]. The parameters of the simulated atmospheric turbulence were chosen to create images similar to those recorded by a telescope (D = 1.50 m) through turbulence with r0 = 0.045–0.055. The blur occupies nearly 25 × 25 pixels in the 128 × 128 pixel image plane. Similarly, the assumption that the target image does not change in the short term also applies to other situations where the imaging frame rate is high, such as removing motion blur.
The specific process of the algorithm is as follows. The work divides the iterative reconstruction into two stages: the first stage restores the initial sub-sequence, and the second stage restores the remaining frames. As shown in Figure 4, this paper assumes that the short-exposure image sequence does not change much over a short period. This paper treats the first five frames of the input image sequence as a sub-sequence and iterates on them several times to restore them; the appropriate number of frames should be selected according to the target scene. (When dealing with a single image, set J = 0.) The method is suitable for short-exposure images in which the target changes little. BDA-SF averages the results over the sub-sequence; using the average helps suppress unknown noise interference. BDA-SF uses this averaged result as the initial estimate for the subsequent frames, and then uses the result of each frame as the initial estimate for the next frame. In this way, BDA-SF obtains good results with fewer iterations, as sketched below.
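The following sketch wires the two-stage scheme together, reusing the hypothetical bdasf_step() from the earlier snippet; the iteration counts echo the 200/40 iterations reported in Section 4.2, and the all-ones image initialization follows the text.

```python
# Two-stage sequence restoration as described above (a sketch; bdasf_step
# is the hypothetical per-iteration update from the earlier snippet).
import numpy as np

def restore_sequence(frames, h0, n_first=200, n_rest=40):
    """Stage 1: restore the first five frames and average the results.
    Stage 2: warm-start each later frame from the previous estimate."""
    estimates = []
    for g in frames[:5]:                    # stage 1: initial sub-sequence
        o, h = np.ones_like(g), h0.copy()
        for _ in range(n_first):
            o, h = bdasf_step(g, o, h)
        estimates.append(o)
    o = np.mean(estimates, axis=0)          # averaging suppresses unknown noise
    results = list(estimates)
    for g in frames[5:]:                    # stage 2: warm-started frames
        h = h0.copy()
        for _ in range(n_rest):
            o, h = bdasf_step(g, o, h)      # previous o is the initial estimate
        results.append(o)
    return results
```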

4. Experimental Results and Analysis

First, this paper provides a practical application of the algorithm and analyzes its convergence. Second, it compares the algorithm with traditional algorithms. Third, it compares the algorithm with state-of-the-art methods.

4.1. Performance Evaluation

To evaluate the restored images, this paper uses the peak signal-to-noise ratio (PSNR) [55] and the structural similarity index (SSIM) [56].
PSNR is the ratio between the peak signal power and the noise power in the image:
$$\mathrm{PSNR} = 10 \log_{10} \frac{\mathrm{MAX}_o^2}{\|\hat{o} - o\|_2^2} \tag{25}$$
where o is the latent image, ô is the restored image, and MAX_o is the maximum value of the image o.
SSIM evaluates how similar the geometric structure of the restored image is to that of the latent image:
$$\mathrm{SSIM} = \frac{(2\mu_o \mu_{\hat{o}} + C_1)(2\sigma_{o\hat{o}} + C_2)}{(\mu_o^2 + \mu_{\hat{o}}^2 + C_1)(\sigma_o^2 + \sigma_{\hat{o}}^2 + C_2)} \tag{26}$$
where μ_o and μ_ô denote the means of o and ô, respectively, σ_o² and σ_ô² their variances, and σ_oô the covariance between the two images.
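For reference, the two metrics can be computed as follows; the PSNR here uses the mean squared error in the denominator (the common convention behind Equation (25)), and SSIM is taken from scikit-image, whose windowed implementation refines the global form of Equation (26).

```python
# Evaluation metrics: PSNR per Equation (25) (with MSE in the denominator,
# the usual convention) and SSIM via scikit-image's windowed implementation.
import numpy as np
from skimage.metrics import structural_similarity

def psnr(o, o_hat):
    mse = np.mean((o_hat - o) ** 2)
    return 10.0 * np.log10(o.max() ** 2 / mse)

# Usage (o and o_hat are float images on the same scale):
# p = psnr(o, o_hat)
# s = structural_similarity(o, o_hat, data_range=o.max() - o.min())
```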

4.2. Convergence Property

Figure 5 shows frames from a video sequence of a flying plane. For convenience, this paper converts the frames to 256-level grayscale and crops them to 256 × 256 pixels. Figure 5a is the initial frame. The parameters are set as μ = 0.25, λ1 = 800, λ2 = 1000, α = 0.04, γ = 2. After 200 iterations of the algorithm, we obtain Figure 5d; at this point the picture quality is not improved significantly, but the goal of this step is only to obtain an initial estimate. Figure 5b is the 20th frame of the sequence; restoring it with 40 iterations of BDA-SF yields Figure 5e, where we can vaguely see numbers on the fuselage of the plane and the picture quality has improved to some extent. Figure 5c is the 40th frame of the sequence; restoring it yields Figure 5f, where the picture quality is greatly improved and the number "126" on the fuselage is visible. As the iteration deepens, BDA-SF restores the image sequence efficiently. The most time-consuming part of the algorithm is the Fourier transform, so the complexity per iteration is O(n log n). The simulations are carried out on Windows 10 with an Intel Core i5-7200U CPU at 2.7 GHz and 12 GB RAM; the algorithm takes about 0.04 s per iteration on a 256 × 256 image. Using the previous frame as the initial estimate saves many iterations and improves the algorithm's efficiency.
Figure 6 shows the iteration curves: the horizontal axis is the number of iterations and the vertical axis is the residual. The black line is the direct iterative algorithm. The curve marked by red stars represents the first stage of the algorithm; this stage does not reach the optimal point, but that does not matter, since all it needs to provide is an initial estimate. The green line represents the second stage, which needs only a few iterations: BDA-SF achieves convergence in no more than 20. Although the first stage requires many iterations, it reduces the number of iterations needed for the later work.
To further illustrate the convergence of the algorithm, this paper randomly selects four pixels in Figure 5f and tracks how their values change with the number of iterations, together with the residual curve of the image. Figure 7 shows the results: Figure 7a plots the pixel values against the number of iterations, and Figure 7b is the residual curve of the image.

4.3. Compared with Traditional Methods

It can be seen from Figure 8 that the proposed algorithm can protect the edge details of the image while removing the blur. BDA-SF restores and extends the spectrum, and the image quality is improved. The algorithm achieves high-resolution restoration.
Another example: This paper restores the tower from an actual video. Figure 9a–c are the blurred images and Figure 9d–f are the restored images. Figure 9 shows that the texture information obtained is abundant. Even the lines on the top of the tower are clear.

4.4. Compared with State-of-the-Art Methods

To better evaluate the algorithm, this paper selects severely degraded images from the public dataset [18], which contains four images and eight kernels. Figure 10 shows the comparison between the proposed algorithm and other iterative algorithms based on MAP estimation. The algorithms involved in the comparison are Krishnan et al. [19], Xu et al. [20], Pan et al. [22], Yan et al. [27], Jin et al. [31], and Bai et al. [32]. This paper uses PSNR and SSIM to evaluate the image quality. Table 1 provides a quantitative evaluation of Figure 10 and shows that the image restored by the proposed method has the highest PSNR and SSIM. The result of [19] has the best visual effect, but it is over-sharpened compared to the original image, which hurts its quantitative scores. This paper also shows the error ratios of the various algorithms in Figure 11, which shows that BDA-SF achieves 100% success at an error ratio of 2.
Figure 12 is from the dataset by Köhler et al. [57], which contains four images and twelve kernels; this paper chooses a severely degraded image from it. The compared algorithms include Xu et al. [15], Krishnan et al. [19], Whyte et al. [30], Xu et al. [20], Pan et al. [23], Pan et al. [22], Yan et al. [27], Jin et al. [31], and Bai et al. [32]. Table 2 provides a quantitative evaluation of Figure 12. Figure 13 investigates the effectiveness of the sigmoid function; the results demonstrate that the sigmoid function brings significant SSIM (Figure 13b) and PSNR (Figure 13a) improvements. Figure 14 presents the PSNR results of the compared algorithms and shows that BDA-SF achieves state-of-the-art performance. It can be inferred from Figures 12 and 14 that BDA-SF achieves visual results comparable to the state-of-the-art methods [22,27]. Reference [27] is slightly superior to BDA-SF in PSNR and SSIM: Figure 12 is a dark scene with lights, and reference [27] uses the dark channel and the bright channel at the same time, so it achieves the best results there. However, reference [27] is less robust and may perform poorly on other images, such as Figure 15e.
This paper evaluates the method on natural, face, text, and low-illumination images. This paper also reports results on images with non-uniform blur. This paper provides typical results for each class. Finally, this paper also compares the running time of different algorithms.
Natural image: The natural images are from the dataset [57]. Figure 15 presents a visual comparison. The algorithm achieves competitive results against the method of [23]; furthermore, it produces better local texture details than the other state-of-the-art methods.
Face image: Face image deblurring is a challenge for algorithms designed for natural images. The lack of textures and edges in face images makes kernel estimation challenging. It can be inferred from Figure 16 that the method can achieve comparable visual results to other methods [23,27].
Text image: Figure 17 illustrates the results of the state-of-the-art methods on a text image. The algorithm achieves superior performance compared with existing methods. Visually, BDA-SF recovers better texture features than the method of [23], and while the methods of [19,20,31] produce heavy ringing artifacts, BDA-SF yields cleaner images.
Low-illumination image: Low-illumination images are particularly challenging for most deblurring methods because they often contain saturated pixels that interfere with kernel estimation. Figure 18 shows the results of the state-of-the-art methods on a low-illumination image. The method achieves a result comparable to that of [21], which is specifically designed for low-illumination images.
Non-uniform deblurring: This paper also applies the method to non-uniform blur. Figure 19 presents the results on images degraded by spatially variant blur; it can be inferred that BDA-SF gives visual results comparable to the state-of-the-art non-uniform deblurring methods [20,30]. Figure 20 shows the results and their corresponding intermediate results. With the sigmoid function, the results contain sharper edges and more texture features.
Computation complexity: This paper compares the computational cost of BDA-SF with existing state-of-the-art methods [19,20,23,27,31]. The simulations are carried out on Windows 10 with an Intel Core i5-7200U CPU at 2.7 GHz and 12 GB RAM. The natural image size is 280 × 325, the face image size is 284 × 365, the text image size is 1097 × 1094, and the low-illumination image size is 800 × 533. The total time includes the runtime of the non-blind deblurring step. As Table 3 shows, the method of Krishnan et al. [19] is the fastest, but its results are inferior to BDA-SF, as illustrated above. BDA-SF is slower than the method of [23] but twice as fast as the method of [31].

4.5. Effectiveness of BDA-SF

BDA-SF is based on the sigmoid function, which constructs novel blind deconvolution estimators for both the original image and the degradation process. Figures 5 and 9 show applications of BDA-SF. Figure 8 demonstrates that BDA-SF can protect edge details through the Sobel filter (μ = 0.25).
To better evaluate the effectiveness of the sigmoid function, this paper selects severely degraded images from public datasets [18,57]. Methods involved in the comparison are Xu et al. [15], Krishnan et al. [19], Whyte et al. [30], Xu et al. [20], Pan et al. [23], Pan et al. [22], Yan et al. [27], Jin et al. [31], Bai et al. [32]. Figure 11 and Figure 12 show the results. This paper uses evaluation indexes PSNR and SSIM to evaluate the image quality. Table 1 and Table 2 demonstrate that BDA-SF using sigmoid function can achieve a state-of-the-art performance on severely degraded images.
This paper evaluates the method on natural, face, text, and low-illumination images. To better evaluate the effectiveness of the sigmoid function, ablation experiments were performed. As is shown in Figure 15, Figure 16, Figure 17 and Figure 18, the images recovered using sigmoid function are more visually pleasing. Figure 19 shows that BDA-SF using sigmoid function generates intermediate results with more sharp edges. All the results demonstrate the effectiveness of the sigmoid function.

4.6. Limitation

This paper establishes the likelihood function assuming that the noise follows a Gaussian distribution. When the image has significant non-Gaussian noise, the algorithm cannot achieve satisfactory results. Figure 21 shows an example of the method applied to an image degraded by salt-and-pepper noise: as shown there, the method does not work well on images degraded by non-Gaussian noise. Another drawback is that the running speed is not fast enough; Table 3 shows that the algorithm is slower than [19,27]. In the future, we will consider the effects of various kinds of noise (such as salt-and-pepper noise) and extend the algorithm to video deblurring.

5. Conclusions

This paper proposes a new iterative algorithm based on the sigmoid function for image restoration. The algorithm naturally maintains the non-negativity of the solution during the restoration process. It can effectively enhance the high-frequency spectrum and achieve high-resolution restoration, even when images are severely degraded. Since all operations in the algorithm are multiplicative, the method avoids numerical instability. A low-pass filter and an edge-preserving process are added to the iteration formulae to protect the image's edges while removing noise sufficiently. For image sequences, the method uses inter-frame information, from which satisfactory results can be obtained with fewer iterations. Extensive experiments demonstrate that the method achieves state-of-the-art performance for both natural images and images acquired under specific scenarios. It is expected that the success of deploying the sigmoid function in the construction of a blind deblurring algorithm will motivate further research in the field of image restoration.

Author Contributions

J.Z. proposed the original idea and supervised the project. S.S. and L.D. fabricated the samples and performed the measurements. Z.X. revisited and supervised the whole process. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the West Light Foundation for Innovative Talents of the Chinese Academy of Sciences, grant number YA18K001, and the Frontier Research Foundation of the Chinese Academy of Sciences, grant number Z20H04.

Conflicts of Interest

The authors declare no competing financial interests.

References

  1. Kwan, C.; Dao, M.; Chou, B.; Kwan, L.; Ayhan, B. Mastcam image enhancement using estimated point spread functions. In Proceedings of the 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), New York, NY, USA, 19–21 October 2017; pp. 186–191. [Google Scholar]
  2. Jain, A.K. Fundamentals of Digital Image Processing; Prentice Hall: Hoboken, NJ, USA, 1989. [Google Scholar]
  3. Pitas, I. Digital Image Processing Algorithms and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2000. [Google Scholar]
  4. Lane, R.G. Blind deconvolution of speckle images. JOSA A 1992, 9, 1508–1514. [Google Scholar] [CrossRef]
  5. Yan, H.; Liu, J.; Li, D.Z.; Sun, J.D. Incremental Wiener filter and space-adaptive regularization based super-resolution image restoration. J. Electron. Inf. Technol. 2005, 27, 35–38. [Google Scholar]
  6. Shui, P.L. Image denoising algorithm via doubly local Wiener filtering with directional windows in wavelet domain. IEEE Signal Process. Lett. 2005, 12, 681–684. [Google Scholar] [CrossRef]
  7. Evans, J. Canadian Data May Portend Steeper Rise in Diabetes Rates. Clin. Endocrinol. News 2007, 2, 6. [Google Scholar]
  8. Burrows, T. Nuclear data sheets for A = 50. Nucl. Data Sheets 1995, 75, 1–98. [Google Scholar] [CrossRef]
  9. Zhang, J.; Zhang, Q. Noniterative blind image restoration based on estimation of a significant class of point spread functions. Opt. Eng. 2007, 46, 077005. [Google Scholar] [CrossRef]
  10. Zhang, J.; Zhang, Q. Blind image restoration using improved APEX method with pre-denoising. In Proceedings of the Fourth International Conference on Image and Graphics (ICIG 2007), Chengdu, China, 22–24 August 2007; pp. 164–168. [Google Scholar]
  11. Steriti, R.; Fiddy, M. Blind deconvolution of images by use of neural networks. Opt. Lett. 1994, 19, 575–577. [Google Scholar] [CrossRef] [PubMed]
  12. Chakrabarti, A. A neural approach to blind motion deblurring. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 221–235. [Google Scholar]
  13. Jia, J. Mathematical Models and Practical Solvers for Uniform Motion Deblurring; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  14. Fergus, R.; Singh, B.; Hertzmann, A.; Roweis, S.T.; Freeman, W.T. Removing camera shake from a single photograph. In ACM SIGGRAPH 2006 Papers; 2006; pp. 787–794. Available online: http://people.csail.mit.edu/billf/publications/Removing_Camera_Shake.pdf (accessed on 2 April 2021).
  15. Xu, L.; Jia, J. Two-phase kernel estimation for robust motion deblurring. In Proceedings of the European Conference on Computer Vision, Heraklion, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 157–170. [Google Scholar]
  16. Cho, S.; Lee, S. Fast motion deblurring. In ACM SIGGRAPH Asia 2009 Papers; 2009; pp. 1–8. Available online: https://dl.acm.org/doi/abs/10.1145/1618452.1618491?download=true (accessed on 2 April 2021).
  17. Shan, Q.; Jia, J.; Agarwala, A. High-quality motion deblurring from a single image. ACM Trans. Graph. 2008, 27, 1–10. [Google Scholar]
  18. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1964–1971. [Google Scholar]
  19. Krishnan, D.; Tay, T.; Fergus, R. Blind deconvolution using a normalized sparsity measure. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 233–240. [Google Scholar]
  20. Xu, L.; Zheng, S.; Jia, J. Unnatural l0 sparse representation for natural image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1107–1114. [Google Scholar]
  21. Hu, Z.; Cho, S.; Wang, J.; Yang, M.H. Deblurring low-light images with light streaks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 3382–3389. [Google Scholar]
  22. Pan, J.; Sun, D.; Pfister, H.; Yang, M.H. Blind image deblurring using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1628–1636. [Google Scholar]
  23. Pan, J.; Hu, Z.; Su, Z.; Yang, M.H. Deblurring text images via L0-regularized intensity and gradient prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 2901–2908. [Google Scholar]
  24. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Efficient marginal likelihood optimization in blind deconvolution. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2657–2664. [Google Scholar]
  25. Pan, J.; Sun, D.; Pfister, H.; Yang, M.H. Deblurring images via dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 2315–2328. [Google Scholar] [CrossRef]
  26. Cai, J.F.; Ji, H.; Liu, C.; Shen, Z. Framelet-based blind motion deblurring from a single image. IEEE Trans. Image Process. 2011, 21, 562–572. [Google Scholar] [PubMed]
  27. Yan, Y.; Ren, W.; Guo, Y.; Wang, R.; Cao, X. Image deblurring via extreme channels prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4003–4011. [Google Scholar]
  28. Li, L.; Pan, J.; Lai, W.S.; Gao, C.; Sang, N.; Yang, M.H. Learning a discriminative prior for blind image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 6616–6625. [Google Scholar]
  29. Cho, T.S.; Paris, S.; Horn, B.K.; Freeman, W.T. Blur kernel estimation using the radon transform. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 241–248. [Google Scholar]
  30. Whyte, O.; Sivic, J.; Zisserman, A.; Ponce, J. Non-uniform deblurring for shaken images. Int. J. Comput. Vis. 2012, 98, 168–186. [Google Scholar] [CrossRef] [Green Version]
  31. Jin, M.; Roth, S.; Favaro, P. Normalized blind deconvolution. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 668–684. [Google Scholar]
  32. Bai, Y.; Cheung, G.; Liu, X.; Gao, W. Graph-based blind image deblurring from a single photograph. IEEE Trans. Image Process. 2018, 28, 1404–1418. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 2008, 1, 248–272. [Google Scholar] [CrossRef]
  34. Krishnan, D.; Fergus, R. Fast image deconvolution using hyper-Laplacian priors. Adv. Neural Inf. Process. Syst. 2009, 22, 1033–1041. [Google Scholar]
  35. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 479–486. [Google Scholar]
  36. Dong, W.; Zhang, L.; Shi, G.; Li, X. Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process. 2012, 22, 1620–1630. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Guo, Y.; Ma, H. Image blind deblurring using an adaptive patch prior. Tsinghua Sci. Technol. 2018, 24, 238–248. [Google Scholar] [CrossRef]
  38. Li, L.; Pan, J.; Lai, W.S.; Gao, C.; Sang, N.; Yang, M.H. Blind image deblurring via deep discriminative priors. Int. J. Comput. Vis. 2019, 127, 1025–1043. [Google Scholar] [CrossRef]
  39. Ma, L.; Xu, L.; Zeng, T. Low rank prior and total variation regularization for image deblurring. J. Sci. Comput. 2017, 70, 1336–1357. [Google Scholar] [CrossRef]
  40. Tang, Y.; Xue, Y.; Chen, Y.; Zhou, L. Blind deblurring with sparse representation via external patch priors. Digit. Signal Process. 2018, 78, 322–331. [Google Scholar] [CrossRef]
  41. Money, J.H.; Kang, S.H. Total variation minimizing blind deconvolution with shock filter reference. Image Vis. Comput. 2008, 26, 302–314. [Google Scholar] [CrossRef]
  42. Dong, J.; Pan, J.; Su, Z. Blur kernel estimation via salient edges and low rank prior for blind image deblurring. Signal Process. Image Commun. 2017, 58, 134–145. [Google Scholar] [CrossRef]
  43. Sun, J.; Cao, W.; Xu, Z.; Ponce, J. Learning a convolutional neural network for non-uniform motion blur removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 769–777. [Google Scholar]
  44. Zhang, J.; Pan, J.; Ren, J.; Song, Y.; Bao, L.; Lau, R.W.; Yang, M.H. Dynamic scene deblurring using spatially variant recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2521–2529. [Google Scholar]
  45. Nah, S.; Hyun Kim, T.; Mu Lee, K. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3883–3891. [Google Scholar]
  46. Kupyn, O.; Budzan, V.; Mykhailych, M.; Mishkin, D.; Matas, J. Deblurgan: Blind motion deblurring using conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8183–8192. [Google Scholar]
  47. Su, S.; Delbracio, M.; Wang, J.; Sapiro, G.; Heidrich, W.; Wang, O. Deep video deblurring for hand-held cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1279–1288. [Google Scholar]
  48. Yang, F.; Xiao, L.; Yang, J. Video Deblurring Via 3d CNN and Fourier Accumulation Learning. In Proceedings of the ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–9 May 2020; pp. 2443–2447. [Google Scholar]
  49. Vio, R.; Bardsley, J.; Wamsteker, W. Least-squares methods with Poissonian noise: An analysis and a comparison with the Richardson-Lucy algorithm. Astron. Astrophys. 2005, 436, 741–756. [Google Scholar] [CrossRef] [Green Version]
  50. Lantéri, H.; Soummer, R.; Aime, C. Comparison between ISRA and RLA algorithms. Use of a Wiener Filter based stopping criterion. Astron. Astrophys. Suppl. Ser. 1999, 140, 235–246. [Google Scholar] [CrossRef] [Green Version]
  51. Jia, G.; Peng, X.; Zhang, J.; Fu, C. Blind Image Deblurring for Multiply Image Frames Based on an Iterative Algorithm. J. Comput. Theor. Nanosci. 2016, 13, 6531–6538. [Google Scholar]
  52. Chan, T.F.; Wong, C.K. Total variation blind deconvolution. IEEE Trans. Image Process. 1998, 7, 370–375. [Google Scholar] [CrossRef] [Green Version]
  53. Wei, Z.; Zhang, J.; Xu, Z.; Huang, Y.; Liu, Y.; Fan, X. Gradient projection with approximate L0 norm minimization for sparse reconstruction in compressed sensing. Sensors 2018, 18, 3373. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Roggemann, M.C.; Welsh, B. Imaging Through Turbulence. Opt. Eng. 1996, 35. [Google Scholar] [CrossRef]
  55. Wang, Z.; Bovik, A.C. Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Process. Mag. 2009, 26, 98–117. [Google Scholar] [CrossRef]
  56. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  57. Köhler, R.; Hirsch, M.; Mohler, B.; Schölkopf, B.; Harmeling, S. Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 27–40. [Google Scholar]
Figure 1. Comparison between exponential function and Sigmoid function.
Figure 2. Image degradation under short exposure conditions.
Figure 3. The image is degraded by the point spread function. (a) The original image; resolution is 128 × 128. Point spread functions are shown in (b–e). The degraded images corresponding to (b–e) are shown in (f–i).
Figure 4. Two-step iterative algorithm. The iteration time J is greater than K.
Figure 5. Frames in a video sequence.
Figure 6. Iteration curves.
Figure 7. Pixel value curve and residual curve. The horizontal direction represents the number of iterations. The vertical direction represents the pixel value.
Figure 8. The proposed algorithm compared with other general algorithms. (a) An actual blurred image; (b) the multiplicative iterative algorithm; (c) Wiener-IBD; (d) the proposed BDA-SF. (e–h) The corresponding spectra.
Figure 9. The tower in the actual video.
Figure 10. Deblurred results from the dataset [18]. The PSNR and SSIM values are shown in Table 1. BDA-SF has the highest PSNR and SSIM. (Best viewed on high-resolution display with zoom-in).
Figure 11. Comparisons in terms of cumulative error ratio.
Figure 12. Deblurred results from the dataset [57]. The PSNR and SSIM values are shown in Table 2. BDA-SF has the second highest PSNR and SSIM. The deblurred image estimated by BDA-SF is visually more pleasing. (Best viewed on high-resolution display with zoom-in).
Figure 13. Quantitative evaluation results of BDA-SF.
Figure 14. Quantitative evaluation results on the dataset [57].
Figure 15. Visual comparison on a real natural image. BDA-SF achieves finer edges and details, as shown in the red boxes. (Best viewed on a high-resolution display with zoom-in).
Figure 16. Visual comparison on a face image. BDA-SF achieves visual results comparable to methods [23,27,31]. (Best viewed on a high-resolution display with zoom-in).
Figure 17. Visual comparison on a text image. BDA-SF achieves visual results comparable to methods [23,27]. (Best viewed on a high-resolution display with zoom-in).
Figure 18. Visual comparison on a low-illumination image. BDA-SF achieves visual results comparable to method [21], which is specifically designed for low-illumination images. (Best viewed on a high-resolution display with zoom-in).
Figure 19. Visual comparison on images with non-uniform blur. Kernels are resized for visualization. BDA-SF is visually comparable to method [20]. Method [23] contains ringing artifacts and residual blurs.
Figure 20. Deblurred results and their corresponding intermediate results over iterations. With the sigmoid function, the proposed BDA-SF produces intermediate results containing sharper edges and more texture features.
Figure 21. Limitation of the proposed model.
Table 1. Quantitative evaluations on the image from Figure 10.

Methods                 PSNR      SSIM
Krishnan et al. [19]    21.2398   0.7588
Xu et al. [20]          20.8402   0.6921
Pan et al. [22]         19.2688   0.6089
Yan et al. [27]         24.2150   0.7683
Jin et al. [31]         23.8377   0.7542
Bai et al. [32]         26.4120   0.8174
BDA-SF                  27.2434   0.8859
Table 2. Quantitative evaluations on the image from Figure 12.

Methods                 PSNR      SSIM
Xu et al. [15]          19.0964   0.6987
Krishnan et al. [19]    21.9974   0.8330
Whyte et al. [30]       20.6246   0.8254
Xu et al. [20]          21.8491   0.8373
Pan et al. [22]         21.7723   0.8250
Pan et al. [23]         23.9403   0.8047
Yan et al. [27]         25.5430   0.8507
Jin et al. [31]         22.0974   0.8376
Bai et al. [32]         22.0311   0.8401
BDA-SF                  25.0137   0.8413
Table 3. Runtime (in seconds) of different methods. The code is implemented in MATLAB.

Methods                 280 × 325   284 × 365   1097 × 1094   800 × 533
Krishnan et al. [19]    20.41       23.71       156.40        74.37
Xu et al. [20]          226.51      468.56      4033.79       1655.50
Pan et al. [23]         319.34      295.60      4078.68       1201.16
Yan et al. [27]         47.99       46.50       1077.90       294.89
Jin et al. [31]         561.12      620.04      14187.65      2814.74
BDA-SF                  255.45      249.95      3115.04       1075.10