Robust Image Restoration for Motion Blur of Image Sensors

Blind image restoration algorithms for motion blur have been extensively studied in recent years. Although great progress has been made, blurred images containing large blur and rich, small details still cannot be restored perfectly. To deal with these problems, we present a robust image restoration algorithm for motion blur of general image sensors. Firstly, we propose a self-adaptive structure extraction method based on the total variation (TV) to separate the reliable structures from the textures and small details of a blurred image, which may damage kernel estimation and interim latent image restoration. Secondly, we combine the reliable structures with priors of the blur kernel, such as sparsity and continuity, in a two-step method that removes noise during the estimation iterations to improve the precision of the estimated blur kernel. Finally, we use an MR-based Wiener filter as the non-blind deconvolution algorithm to restore the final latent image. Experimental results demonstrate that our algorithm can effectively restore images with large blur and rich, small details.


Introduction
Motion blur widely exists in digital photography and leads to disappointing blurry images with inevitable information loss. Due to the mechanism of image sensors that integrate incoming lights for an amount of time to produce images, if a relative motion happens between the subject and the image sensors during the integration time, a blurred image will be produced as shown in Figure 1.
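The degradation referenced later as Equation (1), B = I * k + n, can be sketched with a toy example (our illustration; the image, kernel, and noise model are hypothetical stand-ins for a real sensor capture):

```python
import numpy as np

def blur(image, kernel, noise_sigma=0.0, seed=0):
    """Simulate B = I * k + n: correlate a sharp image with a motion-blur
    kernel (equivalent to convolution for the symmetric kernel used here,
    edge padding) and add Gaussian sensor noise."""
    H, W = image.shape
    kh, kw = kernel.shape
    pad = np.pad(image, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * pad[i:i + H, j:j + W]
    rng = np.random.default_rng(seed)
    return out + rng.normal(0.0, noise_sigma, image.shape)

# 1x5 horizontal kernel (sums to 1), mimicking constant-velocity motion
k = np.ones((1, 5)) / 5.0
I = np.zeros((8, 8)); I[:, 4] = 1.0   # a sharp vertical line
B = blur(I, k)                        # noiseless blurred observation
```

The sharp one-pixel line is smeared over five columns at 0.2 intensity each, which is exactly the information loss that kernel estimation must undo.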


Our main contributions are twofold:
1. In order to eliminate the effect of noise and ambiguous structures which may damage kernel estimation, we propose a novel structure extraction method with which the reliable structures can be selected adaptively and effectively;
2. As the motion blur kernel is sparse and delineates the motion trace between the subject and the image sensors, we introduce a two-step method for the kernel estimation process to eliminate noise and guarantee the kernel's sparsity and continuity.
The rest of this paper is organized as follows: Section 2 reviews related work; Section 3 describes the proposed kernel estimation and non-blind deconvolution algorithms; Section 4 illustrates experimental results; and Section 5 concludes.

Related Work
Blurred image restoration is a fundamental problem in enhancing images acquired by various types of image sensors [9][10][11][12]. Although various signal processing techniques for image sensors have been proposed, restoration of blurred images modeled by Equation (1) is still a challenging task because the latent sharp image and blur kernel are highly unconstrained and there is no unique combination of them whose convolution equals the blurred image. Previous works generally impose constraints on the blur kernel and represent it in a simple parametric form. However, as shown in [3], real blur kernels are too complicated to be formulated parametrically. Fergus et al. [3] used a zero-mean mixture of Gaussians to approximate the gradient distribution of natural images and proposed a variational Bayesian inference algorithm to deblur images. Shan et al. [4] used a series of optimization techniques to avoid trivial solutions and achieve robustness to noise. Krishnan et al. [13] introduced a new normalized sparsity measure as the regularization term in their MAP framework to estimate the blur kernel. Xu et al. [14] introduced an image decomposition scheme to make the deblurring process more robust. Levin et al. [15] showed the limitations of the naive MAP approach and suggested in [16] estimating the MAP of the blur kernel alone while marginalizing over the latent sharp image. Oh et al. [17] adopted a piecewise-linear model to approximate the curves for blur kernel estimation. Shao et al. [18] imposed a non-stationary Gaussian prior on the gradient fields of sharp images to estimate the blur kernel. Although extensive work has been done, the estimated kernels still contain some noise, and selecting a kernel with a hard threshold destroys the intrinsic structures of the blur kernel.
Another group of algorithms [6][7][8][19] uses predicted sharp edges to estimate the blur kernel. Money and Kang [19] used a shock filter to sharpen edges. Joshi et al. [8] used edge profiles instead of shock filtering, but those methods only work for small blurs. Cho and Lee [6] combined a bilateral filter with the shock filter to predict sharp edges from the blurred image iteratively. However, as described in their paper, that method cannot predict sharp edges correctly for large blurs. Xu and Jia [7] introduced a mask to select useful gradients of the blurred image for kernel estimation. Although the performance improved greatly, the continuity and sparsity of the blur kernels still cannot be guaranteed, and the estimated kernels occasionally contain some noise.
With a known blur kernel, the blind deconvolution problem reduces to non-blind deconvolution. However, as latent sharp image restoration is very sensitive to noise and may produce undesirable artifacts, non-blind deconvolution is still ill-conditioned. Many works [4,20,21] have addressed this problem. Levin et al. [20] used a heavy-tailed function, based on a sparse image-derivative distribution model, to alleviate ringing artifacts. Yuan et al. [21] proposed a multi-scale bilateral Richardson-Lucy algorithm to reduce ringing artifacts. Shan et al. [4] adopted a local smoothness prior to suppress artifacts in smooth regions.

Single Image Blind Deconvolution Using Reliable Structure
Many algorithms restore the latent sharp image using image structures, and the deblurred results depend heavily on the reliability of the extracted structures. We propose a new method to extract reliable structures, discussed in Section 3.1. The blur kernels estimated by existing methods usually contain some noise which may damage latent sharp image estimation; in Section 3.2 we introduce a two-step method which guarantees the continuity and sparsity of the blur kernel and eliminates this noise. The latent sharp image restoration methods are discussed in Sections 3.3 and 3.4, and the multi-scale implementation of our algorithm is given in Section 3.5. The overall framework of our approach is shown in Figure 2, and the deblurred result of a synthetic blurred image is shown in Figure 3.

Reliable Structure Extraction from Blurred Image
As discussed in [7], the structures of the blurred image do not always improve blur kernel estimation. On the contrary, edges whose size is smaller than that of the blur kernel deteriorate the estimation. Inspired by the total variation-based noise removal algorithm [22], we treat textures and small details as "noise" and introduce a salient structure extraction method to separate the structures of the blurred image that are helpful for kernel estimation from the detrimental textures and small details. The energy function is formulated as Equation (2), where ||I_s||_TV2 is the isotropic TV norm of the salient structure I_s defined as Equation (3), which preserves edges, and M is the gradient attenuation function [23] defined as Equation (4), a self-adaptive weight for attenuating textures and small details. In Equation (4), L is the image to be measured; the parameter α_s controls which gradient magnitudes remain unchanged, and β determines how much larger magnitudes are attenuated (assuming β < 1), while gradients of magnitude smaller than α_s are slightly magnified. As M(B) is the self-adaptive weight in Equation (2), the salient structures are largely preserved while the flat areas and narrow strips are smoothed, so the salient structure I_s can be obtained by minimizing Equation (2). We set the coarsest version of the blurred image B as the initial value of the latent image I for Equation (2). We empirically set β between 0.8 and 0.9, and α_s to s times the average gradient magnitude of the blurred image at each level. Figure 4 shows the self-adaptive weight M(B) with different parameters and the corresponding extracted salient structures: a larger α_s strongly attenuates the smooth regions, while a smaller α_s preserves more details.
After obtaining the salient structure I_s, we enhance it with the shock filter [24] to recover strong edges, as in Equation (5), where t is the evolution time of the shock filter, and ∆I_s and ∇I_s are the Laplacian and gradient of I_s, respectively. The enhanced structure Ĩ_s contains not only sharp edges but also enhanced noise. In order to remove the noise, we use a mask to select the reliable structure ∇S, which will be used to estimate the blur kernel, as in Equation (6), where H(·) denotes the Heaviside step function, which outputs one for positive values and zero otherwise, and M is defined as Equation (4). As M(I_s) is large in smooth regions and small near strong edges, while M(Ĩ_s) is small near both strong edges and enhanced noise, one can set an appropriate value of the threshold τ, referring to the value of M(Ĩ_s) near strong edges and the difference between M(I_s) and M(Ĩ_s) in smooth regions, to eliminate the noise and obtain the reliable structures.
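The shock-filter evolution of Equation (5) moves each sample against the sign of the Laplacian, scaled by the gradient magnitude (I_t = −sign(∆I)·|∇I|). A minimal 1-D sketch of that idea (our own central-difference discretization with illustrative step size and iteration count, not the paper's exact scheme):

```python
import numpy as np

def shock_filter_1d(signal, dt=0.4, iters=30):
    """Sharpen a smoothed edge by evolving s_t = -sign(lap s) * |grad s|:
    samples left of the inflection move down, samples right move up."""
    s = signal.astype(float).copy()
    for _ in range(iters):
        grad = np.gradient(s)                  # central-difference gradient
        lap = np.gradient(np.gradient(s))      # discrete Laplacian
        s = s - dt * np.sign(lap) * np.abs(grad)
    return s

x = np.linspace(-3, 3, 61)
smooth_edge = 1.0 / (1.0 + np.exp(-2.0 * x))   # blurred step edge
sharp = shock_filter_1d(smooth_edge)
```

After a few iterations the transition steepens toward a step while flat regions, where both the gradient and Laplacian are small, stay nearly untouched; the enhanced noise this same mechanism produces around weak structures is what the mask of Equation (6) subsequently removes.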
It is known that the fewer salient edges are used in kernel estimation, the less reliable the estimated blur kernel is. We take the following strategies to guarantee the reliability of the estimated blur kernel. Firstly, as the initial value of τ is critical to kernel estimation [7], we adopt the method of [6] to set τ adaptively at the beginning of the iterative restoration process. Four directions of the image gradients are taken into account to guarantee that enough salient-edge information is used to estimate the blur kernel. Additionally, the value of τ for later iterations is set so that at least 0.5·√(P_I·P_k) pixels take part in the kernel estimation in each group, where P_k and P_I are the total numbers of pixels of the blur kernel and the input image, respectively. Secondly, as more edges are needed to estimate the blur kernel at higher levels of the pyramid, the parameters θ and α_s are decreased to bring more edge information into the kernel estimation. These strategies allow the recovery of fine structures during kernel refinement.
Figure 5d-f show some interim ∇S maps at different levels. It is obvious that the higher the level, the more sharp edges participate in kernel estimation.

Kernel Estimation and Refinement
As motion blur arises from the relative motion between the subject and the image sensor within the exposure time, the blur kernel delineates the motion trace between them and should be continuous and sparse. We employ a two-step method to guarantee the sparsity and continuity, respectively.
Estimation. We combine the strictly-selected edges ∇S with a Hyper-Laplacian prior regularization term under the MAP estimation criterion to estimate a sparse blur kernel. The energy function is formulated as Equation (7), subject to k(x, y) ≥ 0, where ω_* is the weight for each partial derivative, ζ is the weight for the Hyper-Laplacian regularization term with 0 < γ < 1, and S_* and B_* are the partial derivatives of the selected reliable structures and the blurred image, respectively.

We run the constrained iterative reweighted least squares (IRLS) method [14] for two iterations to minimize Equation (7) and use the conjugate gradient (CG) method for the inner IRLS system.
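The data term of Equation (7) fits k * ∇S to ∇B in the least-squares sense. Dropping the Hyper-Laplacian prior and using a simple quadratic (Tikhonov) regularizer instead, the estimate has a closed form in the frequency domain. The sketch below uses that simplification (the function name, regularization weight, and top-left kernel anchoring are our assumptions; the paper's actual solver is IRLS with CG):

```python
import numpy as np

def estimate_kernel(sharp_grad, blur_grad, ksize, reg=1e-3):
    """Quadratic stand-in for Eq. (7):
    k = F^-1( conj(F(S)) F(B) / (|F(S)|^2 + reg) ), cropped to ksize,
    with the non-negativity constraint k(x, y) >= 0 enforced afterwards."""
    S = np.fft.fft2(sharp_grad)
    B = np.fft.fft2(blur_grad)
    k_full = np.real(np.fft.ifft2(np.conj(S) * B / (np.abs(S) ** 2 + reg)))
    k = k_full[:ksize[0], :ksize[1]]     # kernel assumed anchored at top-left
    k = np.maximum(k, 0.0)               # project onto k(x, y) >= 0
    return k / k.sum()                   # renormalize to sum to one

rng = np.random.default_rng(1)
S = rng.normal(size=(32, 32))            # stand-in for selected gradient map
true_k = np.zeros((32, 32)); true_k[0, :3] = 1.0 / 3.0   # 1x3 horizontal blur
B = np.real(np.fft.ifft2(np.fft.fft2(S) * np.fft.fft2(true_k)))  # circular blur
k_est = estimate_kernel(S, B, (1, 3))
```

On this noiseless synthetic pair the closed form recovers the 1×3 box kernel almost exactly; the paper's Hyper-Laplacian term replaces `reg` to promote sparsity, which is why IRLS iterations are needed there.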
Refinement. As the estimated blur kernel may contain some noise which can deteriorate the subsequent interim latent image and kernel estimates (e.g., the deblurred image and estimated kernel shown in Figure 6), we eliminate the noise by checking the continuity of the estimated blur kernel's pixels. The energy function is defined as follows and can be minimized using the alternating optimization method [25].
E(k̃) = Σ_{p ∈ k̃} (k̃_p − k_p)² + λ·C(k̃)   (10)
where λ is the weight for the continuity constraint term C(k̃), which is defined as Equation (11). During the kernel estimation process, the two steps above are taken alternately three times (Itr = 3) with γ = 0.5, set empirically, at each level of the pyramid. The value of λ can be set according to the size of the blur kernel. Algorithm 1 illustrates the blur kernel estimation algorithm.
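To make the continuity idea concrete, a simple stand-in is to threshold weak kernel pixels and keep only the largest connected component of what remains, then renormalize. This is our illustration of what the refinement achieves, not the alternating optimization of Equations (10) and (11); the threshold ratio is an assumption:

```python
import numpy as np

def refine_kernel(k, thresh_ratio=0.05):
    """Zero out weak, disconnected kernel pixels: threshold relative to the
    peak, keep the largest 4-connected component, and renormalize."""
    mask = k > thresh_ratio * k.max()
    labels = np.zeros(k.shape, dtype=int)
    cur = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        cur += 1
        stack = [seed]
        labels[seed] = cur
        while stack:                         # flood-fill one component
            i, j = stack.pop()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < k.shape[0] and 0 <= nj < k.shape[1]
                        and mask[ni, nj] and not labels[ni, nj]):
                    labels[ni, nj] = cur
                    stack.append((ni, nj))
    best = max(range(1, cur + 1), key=lambda c: k[labels == c].sum())
    out = np.where(labels == best, k, 0.0)
    return out / out.sum()

k = np.zeros((7, 7))
k[3, 1:6] = 0.18                             # a continuous motion trace
k[0, 6] = 0.10                               # an isolated noise spike
k_ref = refine_kernel(k)
```

The isolated spike, which a plain hard threshold on magnitude would keep, is removed because it is disconnected from the motion trace, while the trace itself survives intact.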

Algorithm 1. Blur Kernel Estimation.
Input: blurred image, reliable structures, and the initial value of k from the previous iteration or previous level;
for n = 1 to Itr (Itr: number of iterations) do
    Estimate the blur kernel using the reliable structures (Equation (7)).
    Refine the blur kernel by solving Equations (10) and (11).
end for
Output: estimated blur kernel k.

We use the dataset of [15] to verify the effectiveness of our kernel estimation method, and a comparison of the kernels estimated by several previous works and by our algorithm is illustrated in Figure 7. Benefiting from both the reliable structures and the two-step estimation method, our kernel estimation performs better. We then adopt the SSDE (sum of squared differences error, defined as Equation (12)) to evaluate the accuracy of the estimated kernels shown in Figure 7 and compare the results in Figure 8,
where k and k_GT denote the estimated blur kernel and the ground-truth blur kernel, respectively.
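The SSDE of Equation (12) reduces to a one-line computation over the two kernels (the kernel values below are hypothetical):

```python
import numpy as np

def ssde(k_est, k_gt):
    """Sum of squared differences error between an estimated kernel and the
    ground-truth kernel (both assumed normalized to sum to one)."""
    return float(np.sum((np.asarray(k_est) - np.asarray(k_gt)) ** 2))

k_gt  = np.array([[0.0, 0.5, 0.5]])
k_est = np.array([[0.1, 0.4, 0.5]])
err = ssde(k_est, k_gt)                  # 0.1**2 + 0.1**2 = 0.02
```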

Interim Latent Image Restoration
As attention is paid to recovering the salient edges during the interim latent image restoration process, we use the strictly-selected edges ∇S as a spatial prior to restore a coarse version of the latent image. We minimize the energy function in Equation (13) to obtain the latent image, where κ is the weight for the spatial prior regularization term, which suppresses noise and avoids ringing artifacts. Applying FFTs to Equation (13) and setting the partial derivative with respect to I to zero yields the closed form of the latent image given by Equation (14), where F and F⁻¹ denote the forward and inverse Fourier transforms, respectively, and F̄(·) is the complex conjugate of F(·).
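A closed form of this type divides, elementwise in the frequency domain, a prior-weighted numerator by |F(k)|² plus the weighted spectra of the derivative filters. The sketch below is our illustrative reading of such a closed form, with the weight κ, the forward-difference filters, and the function name as assumptions; it is not claimed to match Equation (14) term for term:

```python
import numpy as np

def interim_latent(B, k, Sx, Sy, kappa=0.05):
    """Closed-form interim latent image (Wiener-like, with a gradient prior):
    I = F^-1( (conj(F(k))F(B) + kappa*(conj(F(dx))F(Sx) + conj(F(dy))F(Sy)))
              / (|F(k)|^2 + kappa*(|F(dx)|^2 + |F(dy)|^2)) )
    where Sx, Sy are the selected sharp-edge gradient maps (spatial prior)."""
    shape = B.shape
    def otf(psf):
        p = np.zeros(shape); p[:psf.shape[0], :psf.shape[1]] = psf
        return np.fft.fft2(p)
    K = otf(k)
    Dx = otf(np.array([[1.0, -1.0]]))        # horizontal difference filter
    Dy = otf(np.array([[1.0], [-1.0]]))      # vertical difference filter
    num = (np.conj(K) * np.fft.fft2(B)
           + kappa * (np.conj(Dx) * np.fft.fft2(Sx)
                      + np.conj(Dy) * np.fft.fft2(Sy)))
    den = np.abs(K) ** 2 + kappa * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(num / den))

rng = np.random.default_rng(0)
I_true = rng.normal(size=(16, 16))
kpad = np.zeros((16, 16)); kpad[0, :3] = 1.0 / 3.0
B = np.real(np.fft.ifft2(np.fft.fft2(kpad) * np.fft.fft2(I_true)))  # circular blur
Sx = I_true - np.roll(I_true, 1, axis=1)     # ideal edge maps for the demo
Sy = I_true - np.roll(I_true, 1, axis=0)
I_rec = interim_latent(B, np.ones((1, 3)) / 3.0, Sx, Sy)
```

When the edge maps are consistent with the true image, as in this synthetic demo, the recovery is exact; in practice ∇S covers only the selected strong edges, and the prior term mainly suppresses noise and ringing elsewhere.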

Final Non-Blind Deconvolution
After obtaining the blur kernel, the final latent image is restored from the full-scale blurred image, which contains more noise and makes the process time consuming. In order to achieve high robustness and processing speed, we adopt the MR-based Wiener filter [26] to restore the final latent sharp image in the frequency domain; its transfer function, formulated as Equation (15), is applied to F(B), with the regularization term Γ determined by |F(k)(u_MH, 0)|, max_{u∈D_T} |F(k)(u, 0)|, and the exponent 2η. The symbols D_T and u_MH denote the cut-off frequency and the boundary between the high-frequency and mid-frequency regions of the blur kernel frequency spectrum F(k)(u, v), respectively, where (u, v) is the index in the frequency domain. Their values are calculated with the method proposed in [26]. The parameter η controls the compromise between image-detail recovery and noise suppression; we set η = 0.8 in our experiments.
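The MR-based filter differs from the classic Wiener filter mainly in how the regularization term is chosen from the kernel spectrum. With Γ fixed to a constant (our simplification; the paper computes Γ from F(k) as described above), the restoration reduces to:

```python
import numpy as np

def wiener_restore(B, k, gamma=1e-4):
    """Wiener-type restoration with a constant regularizer:
    I = F^-1( conj(F(k)) F(B) / (|F(k)|^2 + gamma) ).
    The MR-based variant replaces the constant gamma with a data-driven
    term Gamma computed from the kernel spectrum."""
    kpad = np.zeros(B.shape)
    kpad[:k.shape[0], :k.shape[1]] = k
    K = np.fft.fft2(kpad)
    W = np.conj(K) / (np.abs(K) ** 2 + gamma)    # transfer function
    return np.real(np.fft.ifft2(W * np.fft.fft2(B)))

rng = np.random.default_rng(2)
I_true = rng.normal(size=(16, 16))
k = np.ones((1, 3)) / 3.0
kpad = np.zeros((16, 16)); kpad[0, :3] = k[0]
B = np.real(np.fft.ifft2(np.fft.fft2(kpad) * np.fft.fft2(I_true)))
I_rec = wiener_restore(B, k)
```

A small constant gamma suffices for this noiseless demo; on real, noisy full-scale images the data-driven Γ lets the filter suppress noise at frequencies where the kernel spectrum is weak without over-smoothing the rest, which is the robustness/speed compromise the section describes.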

Multi-Scale Implementation
In order to deal with large blur kernels and make the restoration both effective and efficient, we build a coarse-to-fine pyramid of images with a down-sampling factor of √2 and estimate the blur kernel at multiple scales. The number of pyramid levels is determined by the size of the blur kernel such that the width or height of the blur kernel at the coarsest level is about 3-7 pixels. The blur kernel and interim latent image are estimated alternately for a few iterations at each level. After obtaining the full-scale blur kernel, the final latent sharp image is restored using the MR-based Wiener filter. Algorithm 2 outlines our approach.
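The level count follows from repeatedly shrinking the kernel by √2 until its smaller side reaches a few pixels; a sketch of that bookkeeping (the 3-pixel floor is our reading of the "about 3-7 pixels" rule):

```python
import math

def num_pyramid_levels(kernel_size, coarsest=3, factor=math.sqrt(2)):
    """Count coarse-to-fine levels: divide the kernel's smaller side by
    sqrt(2) per level until it would drop below `coarsest` pixels."""
    size = float(min(kernel_size))
    levels = 1
    while size / factor >= coarsest:
        size /= factor
        levels += 1
    return levels

levels = num_pyramid_levels((27, 27))    # 7 levels for a 27x27 kernel
```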

Algorithm 2. Overall Algorithm.
Input: blurred image B, parameters θ, α_s, and the size of the blur kernel;
Build an image pyramid {B_s} and an all-zero kernel pyramid {k_s} with level index {1, 2, ..., n} according to the size of the blur kernel;
1. Blind estimation of the blur kernel
for l = 1 to n do
    Compute the adaptive weight M(B_s) (Equation (4)).
    for i = 1 to m (m iterations) do
        Extract the salient structure I_s (Equation (2)).
        Select the reliable structure ∇S for kernel estimation (Equation (6)).
        Estimate the blur kernel according to Algorithm 1.
        Restore the interim latent image L (Equation (14)).
        θ ← θ/1.1, α_s ← α_s/1.1
    end for
    Up-sample the latent image: L_{l+1} ← L_l ↑.
    Project k_l onto the constraints (Equation (8)) and up-sample the blur kernel: k_{l+1} ← k_l ↑.
end for
2. Image restoration using the MR-based Wiener filter
- Recover I using k from B at full-scale resolution (Equation (15)).
Output: blur kernel k and latent sharp image I.

Experiments
In order to demonstrate the effectiveness of our algorithm, we compare it with several state-of-the-art approaches on synthetic and real blurred images. Here, we give some implementation details. Before kernel estimation, we convert all color images to grayscale and set the initial value of θ to 1 based on numerous experiments. The initial value of α_s is set to 0.8 times the average gradient magnitude of the coarsest version of the blurred image. The values of θ and α_s at higher levels are obtained by dividing their values at the previous level by 1.1. During kernel estimation, before up-sampling the blur kernel estimated at the previous level, its negative elements are set to 0 and the kernel is renormalized. In Algorithm 2, the iteration count m is set to 5, empirically. In the final non-blind restoration, we deblur each color channel separately. All experiments are run on a PC with MS Windows 7 64-bit, an Intel Core i5 560M CPU, and 8 GB RAM; the implementation platform is MATLAB 2014a.

Experimental Results and Evaluation
It is known that the more textures and small details a blurred image contains, the harder the restoration. We first give a synthetic example, shown in Figure 9, which contains rich textures and small details such as flowers and leaves, to demonstrate the robustness and effectiveness of our algorithm. The deblurred results of Fergus et al. [3] and Shan et al. [4] still contain some blur and ringing artifacts due to inaccurate kernel estimation. The approach of Xu and Jia [7] performs better, but the imperfectly estimated kernel also leads to some artifacts in the restored result. In contrast, our algorithm gives the best performance on both the estimated blur kernel and the final restored image, shown in Figure 9e. In Table 1, we adopt the PSNR (peak signal-to-noise ratio, defined as Equation (17)) and the SSDE to give a quantitative comparison of the deblurred images and estimated blur kernels in Figure 9, respectively. As shown in Table 1, our algorithm gives a higher PSNR value for the deblurred image and a lower SSDE value for the estimated blur kernel.
PSNR = 10·log₁₀( R² / ( (1/(M·N)) Σ_{x=1}^{M} Σ_{y=1}^{N} (I(x, y) − I_GT(x, y))² ) )   (17)
where I and I_GT are the restored latent sharp image and the ground-truth image, respectively, whose width is M and height is N. The symbol R denotes the maximum value of the input image data type, e.g., R = 255 when the images have an 8-bit unsigned integer data type.
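The PSNR of Equation (17) is a direct computation (the image values below are a hypothetical 8-bit example):

```python
import numpy as np

def psnr(I, I_gt, R=255.0):
    """Peak signal-to-noise ratio: 10*log10(R^2 / MSE) over an MxN image."""
    mse = np.mean((np.asarray(I, float) - np.asarray(I_gt, float)) ** 2)
    return 10.0 * np.log10(R ** 2 / mse)

I_gt = np.full((4, 4), 100.0)
I = I_gt + 10.0                          # constant error of 10 gray levels
val = psnr(I, I_gt)                      # 10*log10(255**2 / 100) ~ 28.13 dB
```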
We next test our algorithm on real blurred images, shown in Figures 10-12, which were presented in previous works for comparison. Figure 10a is the real blurred image used in [3]. Due to inaccurate blur kernel estimation, the results of [3,4,6] contain some noise. The method of [7] shows a better result, but there is still some noise in the estimated blur kernel. Our result, shown in Figure 10f, performs best in both kernel estimation and latent image restoration. Figure 11a is the blurred fish image used in [13]. The result of [13] contains some noise in both the estimated blur kernel and the restored image. The result of [4] performs better in kernel estimation, but the restored image still contains some ringing artifacts. By contrast, our result shown in Figure 11d gives the best performance.
In addition to its capability to deal with blurred images containing rich textures and small details, our algorithm can also deal with large blur kernels, as shown in Figure 12. Figure 12a is the blurred wall image, which contains large motion blur and small details. Due to the large blur, the method of [3] cannot estimate the blur kernel accurately and the restored latent image still contains some blur. The restored result of [4] still contains some noise and ringing artifacts. Our algorithm gives a clearer image with finer details.

Operation Speed
During the restoration process, we estimate the blur kernel and latent sharp image alternately, which involves a few matrix-vector and convolution operations. Regarding speed, the MATLAB implementation of the proposed algorithm spends about 90 s estimating a 27 × 27 blur kernel from a 255 × 255 blurred image with an Intel i5 560M CPU @ 2.67 GHz and 8 GB RAM. In comparison, the methods of [3,13,16] need about 6 min, 3 min, and 4 min, respectively, measured using the authors' MATLAB source code. The algorithm of [4], implemented in C++, spends about 50 s. As the proposed algorithm involves non-convex models in kernel estimation, it needs more computational time than [6,7]. Since it uses the CG method and FFTs for optimization, we believe it is feasible to accelerate it on a GPU with the strategy of [6].

Conclusions
In this paper, we propose a robust image restoration algorithm for motion blur of image sensors. Our algorithm is composed of three steps: reliable structure extraction, blur kernel estimation, and non-blind deconvolution, implemented in a multi-scale coarse-to-fine manner. Benefiting from the self-adaptive reliable structure extraction method, structures which have an adverse effect on kernel estimation are removed. We then use the reliable structures of the blurred image and priors of the blur kernel, such as sparsity and continuity, to estimate and refine the blur kernel. After obtaining the blur kernel, we use the fast MR-based Wiener filter to restore the final latent image. Our algorithm can deal with large blur kernels even when the blurred images contain abundant textures and small details.
However, as saturated regions in blurred images destroy the linearity of the blur model, our algorithm cannot estimate the blur kernel accurately for, and fails to restore, blurred images containing saturated regions. Furthermore, our algorithm cannot deal with spatially-varying blur. Our future work is to resolve these limitations.
