Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor

This paper presents a fast adaptive image restoration method for removing spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of the space-variant point spread function (PSF) using the derivatives in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented as a combination of multiple FIR filters, which guarantees fast image restoration without the need for iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in terms of both objective and subjective performance measures. The proposed algorithm can be employed in a wide range of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing.


Introduction
Restoration of spatially varying out-of-focus blur is a fundamental problem in enhancing images acquired by various types of imaging sensors [1][2][3][4]. Despite the advances in digital imaging techniques, restoration of spatially varying degradation is still a challenging issue because of the inherent limitations of imaging devices and non-ideal image acquisition conditions. Although various image restoration algorithms have been proposed in the literature, most of them rely on an unrealistic assumption, for example, that there is only a single, space-invariant blur. High computational load and/or iterative optimization are further obstacles to the practical deployment of digital image restoration algorithms in a general imaging sensor.
As a well-known, basic image restoration algorithm, the constrained least-squares filter can remove space-invariant image degradation while suppressing noise amplification using an appropriately weighted smoothing constraint [5]. Kim et al. proposed a practically efficient restoration method by truncating the coefficients of the constrained least-squares filter [6][7][8], which can be implemented in the form of a finite impulse response (FIR) filter. However, the original version of the truncated constrained least-squares (TCLS) filter cannot deal with a spatially varying blur.
For restoring a space-variant degradation, Kuthirummal estimated the out-of-focus blur using a computational camera [9], and Pertuz proposed a spatially adaptive restoration method by estimating a local blur from multiple images [10]. Kuthirummal's method requires a special optical part that is not easily implemented in a general camera, whereas Pertuz's method is not suitable for fast image restoration because of the use of multiple input images. Whyte [11] and Xu [12] estimated blurring components in small blocks and proposed correspondingly adaptive restoration methods using sparse representation. Chan proposed a selective image restoration method by separating defocused areas [13]. Shen proposed a restoration method using ℓ1- and ℓ2-norm minimization [14].
The above-mentioned adaptive restoration algorithms commonly require an enormous amount of computation and an indefinite processing time for degradation-parameter estimation and optimization-based adaptive restoration. In order to minimize the computational overhead of space-variant image restoration while preserving the restoration performance, this work assumes that spatially varying degradation can be approximated as multiple, region-wise space-invariant Gaussian kernels, and that the restoration process can be approximated in the form of an FIR filter. In this context, the proposed algorithm consists of two functional modules: (i) estimation of the spatially varying two-dimensional Gaussian point spread functions (PSFs) by analyzing the relationship between the first and second derivatives of the corresponding region and (ii) spatially adaptive image restoration using optimally selected TCLS filters, as shown in Figure 1. The major advantage of the proposed method is the fast, robust restoration of spatially varying defocus blur using the parametric model of the PSF and a computationally efficient FIR restoration filter. Although the real PSF of an optical lens is not necessarily Gaussian, the proposed parametric model is a good approximation of most optical lenses, as demonstrated by the experiments.
Since spatially-varying image restoration is a fundamental problem in image filtering, enhancement, and restoration applications, much research has been reported in the literature. Some important techniques are summarized below with brief comparisons to the proposed work. Early efforts to enhance satellite images assumed that the PSF varies because of a geometric transformation. In this context, Sawchuk proposed a space-variant image restoration method that performs a geometric transformation to make the blur kernel space-invariant, and then performs space-invariant inverse filtering [15]. Flicker et al. proposed a modified maximum-likelihood deconvolution method for astronomical adaptive optics images [16]. It is an improved version of anisoplanatic deconvolution using a space-varying kernel and Richardson-Lucy restoration. Flicker's method, however, differs from this work in that it requires the space-varying PSF to be known in advance and relies on optimization-based iterative restoration. Hajlaoui et al. proposed a spatially-varying restoration approach to enhance satellite images acquired by a pushbroom-type sensor, where the PSF varies spatially in only one direction [17]. It uses wavelet decomposition with redundant wavelet frames to improve the restoration performance. However, MAP estimation and wavelet decomposition are not suitable for implementation in the form of simple linear filtering. Hirsch et al. proposed a class of linear transformations called Filter Flow for blind deconvolution of noisy motion blur, and demonstrated its practical significance with experimental results on removing geometric rotation, atmospheric turbulence, and random motion [18]. Although that work shares the motivation of efficient space-variant deconvolution, PSF estimation for motion blur differs from that for defocus blur. This paper is organized as follows.
In Section 2, a general space-variant image degradation process is approximated by a region-wise space-invariant model, which serves as the theoretical basis of the proposed space-adaptive image restoration. Sections 3 and 4, respectively, present the estimation of the space-variant blur and the corresponding adaptive image restoration algorithm. Experimental results are given in Section 5, and Section 6 concludes the paper.

Region-Wise Space-Invariant Image Degradation Model
If spherical and chromatic aberrations of a lens are ignored, a point on the object plane generates a space-invariant point spread function (PSF) in the image plane as shown in Figure 2. The corresponding linear space-invariant degradation model of the out-of-focus blur is expressed as a convolution sum [19]

g(x, y) = Σ_s Σ_t h(s, t) f(x − s, y − t) + η(x, y),   (1)

where g(x, y) represents the out-of-focus image, f(x, y) the virtually in-focus image assuming that the object plane is located at the in-focus position, η(x, y) the additive noise, and h(s, t) the space-invariant PSF. On the other hand, multiple objects at different distances from the lens generate different PSFs in the image plane, as shown in Figure 3. In the corresponding space-variant image formation model, the PSF is determined by the distance of the object point from the lens. Since different object points are projected onto different locations in the image plane, a parametric representation of the Gaussian PSF is given as

h(x, y) = (1 / (2πσ_xy²)) exp( −(x² + y²) / (2σ_xy²) ),   (2)

where σ_xy varies with the location in the image plane. Equation (2) is a simplified version of the PSF of a single lens proposed in [20]. Elaborate lens analysis and design are out of the scope of this paper, and the proposed parametric method can represent other types of PSF with proper modifications. The corresponding space-variant image degradation model is given as

g(x, y) = Σ_s Σ_t h(x, y; s, t) f(s, t) + η(x, y).   (3)

Assuming that an image includes multiple objects (or regions) at different distances from the focal plane and a background, the space-variant model in Equation (3) can be approximated by a region-wise space-invariant version as

g(x, y) = Σ_i A_i(x, y) (h_i ∗ f)(x, y) + η(x, y),   (4)

where the binary-valued function A_i(x, y) is equal to unity only if the pixel at (x, y) is degraded by the i-th PSF h_i. Figure 4 shows a real defocused image with multiple different PSFs. The tree and the gazebo are blurred by different PSFs because they are located at different distances from the camera.
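The region-wise model of Equation (4) can be illustrated with a short numerical sketch. The following Python code is a minimal illustration: the function names, region layout, and kernel radius are chosen for this example and are not taken from the paper. It synthesizes a degraded image as a sum of masked, Gaussian-blurred copies of the ideal image:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel; the 2-D PSF of Eq. (2) is separable."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Space-invariant Gaussian blur h_i * f using separable convolution."""
    if sigma <= 0:
        return img.copy()
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma) + 1)
    out = np.apply_along_axis(np.convolve, 0, img, k, mode='same')
    out = np.apply_along_axis(np.convolve, 1, out, k, mode='same')
    return out

def region_wise_blur(f, masks, sigmas, noise_std=0.0, rng=None):
    """Region-wise space-invariant degradation of Eq. (4):
    g = sum_i A_i(x, y) * (h_i * f)(x, y) + eta(x, y)."""
    g = np.zeros_like(f, dtype=float)
    for A_i, sigma_i in zip(masks, sigmas):
        g += A_i * gaussian_blur(f.astype(float), sigma_i)
    if noise_std > 0:
        rng = np.random.default_rng() if rng is None else rng
        g += rng.normal(0.0, noise_std, f.shape)
    return g

# Example: a vertical step-edge image split into two regions with
# different amounts of defocus.
f = np.zeros((64, 64)); f[:, 32:] = 1.0
A0 = np.zeros_like(f, dtype=bool); A0[:32] = True   # top half: sigma = 0.8
A1 = ~A0                                            # bottom half: sigma = 2.5
g = region_wise_blur(f, [A0, A1], [0.8, 2.5])
```

The edge in the weakly blurred region keeps a steeper transition than in the strongly blurred region, which is exactly the property the blur estimator of Section 3 exploits.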

Blur Estimation Based on the Derivative Distribution
In a sufficiently small region R_e containing only a single, vertical edge as shown in Figure 5a, the ideal edge profile along a horizontal line can be expressed as

f(x, y) = a u(x − x_c) + b,   (5)

where u(x − x_c) represents the unit step function, also called the Heaviside function, shifted by x_c, a the magnitude of the edge, and b the offset, as shown in Figure 5b. If the region R_e is defocused by the Gaussian PSF

h(x, y) = (1 / (2πσ²)) exp( −(x² + y²) / (2σ²) ),   (6)

the correspondingly blurred edge function is given as

g(x, y) = (h ∗ f)(x, y) + η(x, y),   (7)

where η(x, y) represents zero-mean additive white Gaussian noise. In the rest of this section, the shifting components x_c and y_c are omitted for simplicity. In order to estimate the variance of the Gaussian PSF, an edge-based analysis approach is used. This approach is a parametrically modified version of two-dimensional isotropic PSF estimation using one-dimensional edge profile analysis [21]. Although a one-dimensional edge can lie in any direction, we derive the PSF estimation procedure for the horizontal direction, since any non-horizontal edge can be expressed as a rotated version of a horizontal edge. From the degraded edge signal, the one-dimensional gradient of the blurred edge, that is, the derivative of Equation (7) with respect to the x-axis without loss of generality, provides an important clue using the fundamental properties of the derivative of a convolution and the normal distribution:

∇_x g(x, y) = (a / (√(2π) σ)) exp( −x² / (2σ²) ) + ∇_x η(x, y),   (8)

and the derivative operator ∇_x can be replaced by a forward difference operator in the horizontal direction:

∇_x g(x, y) ≈ g(x + 1, y) − g(x, y).   (9)

In the one-dimensional case, the Gaussian parameter σ can be directly computed from the derivative equation using the Lambert W function [22]. However, it is not easy to solve the two-dimensional derivative equation in Equation (8) for the Gaussian parameter.
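As a numerical check of Equations (8) and (9), the sketch below (illustrative code, not the authors' implementation; the edge parameters a, b, and σ are arbitrary) evaluates the noiseless blurred step edge analytically via the normal CDF and compares its forward difference against the Gaussian-shaped derivative predicted by Equation (8):

```python
import numpy as np
from math import erf, sqrt, pi

a, b, sigma = 2.0, 0.2, 1.5        # edge magnitude, offset, Gaussian PSF std
x = np.arange(-10, 11).astype(float)

# Noiseless blurred edge of Eq. (7): (step * Gaussian) = a * Phi(x / sigma) + b,
# where Phi is the standard normal CDF.
g = np.array([a * 0.5 * (1.0 + erf(xi / (sigma * sqrt(2.0)))) + b for xi in x])

# Forward difference of Eq. (9) approximates the derivative in Eq. (8);
# it is most accurate at the midpoint of each pixel pair.
grad = np.diff(g)                               # g(x+1) - g(x)
xm = x[:-1] + 0.5
expected = a / (sqrt(2.0 * pi) * sigma) * np.exp(-xm**2 / (2.0 * sigma**2))

print(float(np.abs(grad - expected).max()))     # small discretization error
```

The residual is only the discretization error of the forward difference, confirming that the gradient of a Gaussian-blurred step is itself Gaussian-shaped with the same σ.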
Instead of directly solving Equation (8) for the Gaussian parameter σ, an approximate estimate of the amount of out-of-focus blur on the horizontal edge is given by the ratio of the local variances of the first and second derivatives,

B(x, y) = var{ ∇_x g(x, y) } / var{ ∇_x² g(x, y) },   (10)

where var{·} represents the local variance, i.e., var{z} = E{(z − E{z})²}, and E{·} the local mean (average) operator over R_e. The discrete approximation of the second-order derivative operator ∇_x² can be expressed as

∇_x² g(x, y) ≈ g(x + 1, y) − 2 g(x, y) + g(x − 1, y).   (11)

In Equation (10), the local statistics are computed within the region R_e centered at (x, y), whose size is determined by the parameter p. The size of the local region R_e is related to the range of out-of-focus blur that can be estimated using local statistics: if the size of the local region increases, a larger PSF can be estimated at the cost of a potential mixture with other edges. In this work, p = 10 is used to estimate up to σ = 4.0. In flat areas, B(x, y) has a small value, which implies that a blurry object and a sharp object have similar B(x, y) values there. However, a flat area does not need to be restored regardless of the amount of defocus blur. Figure 6 shows step-by-step results of the proposed out-of-focus blur estimation process. Figure 6a shows a synthetic image with a gradually increasing amount of blur from left to right according to Equation (7). The variance of the Gaussian PSF changes from 0 to 2.5, and the variance of the additive noise is 0.0001. Figure 6b,c, respectively, show the first- and second-order derivatives of the image. Figure 6d,e, respectively, show the local variances of the first- and second-order derivatives. As shown in Figure 6f, the estimated amount of blur B(x, y) follows the real amount of blur regardless of the magnitude of the edges.
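The blur measure of Equation (10) can be sketched as follows. This is an illustrative Python rendering: the box-filter computation of local variance, the square window, and the stabilizing constant eps are implementation choices not specified in the paper.

```python
import numpy as np
from math import erf, sqrt

def local_variance(z, p):
    """Variance over a (2p+1) x (2p+1) sliding window via box filtering."""
    win = 2 * p + 1
    box = np.ones(win) / win
    def box2(a):
        a = np.apply_along_axis(np.convolve, 0, a, box, mode='same')
        return np.apply_along_axis(np.convolve, 1, a, box, mode='same')
    mean = box2(z)
    return np.maximum(box2(z * z) - mean * mean, 0.0)

def blur_map(g, p=10, eps=1e-8):
    """Blur measure of Eq. (10): ratio of local variances of the first
    (forward difference, Eq. (9)) and second (Eq. (11)) x-derivatives."""
    d1 = np.zeros_like(g); d1[:, :-1] = g[:, 1:] - g[:, :-1]
    d2 = np.zeros_like(g); d2[:, 1:-1] = g[:, 2:] - 2 * g[:, 1:-1] + g[:, :-2]
    return local_variance(d1, p) / (local_variance(d2, p) + eps)

# Example: two step edges blurred by different sigmas; the blurrier edge
# yields the larger B value near the edge location.
x = np.arange(-20, 21)
def edge(sigma):
    row = np.array([0.5 * (1 + erf(xi / (sigma * sqrt(2)))) for xi in x])
    return np.tile(row, (41, 1))

B_sharp = blur_map(edge(1.0), p=10)
B_blurry = blur_map(edge(3.0), p=10)
```

Because the second derivative shrinks faster than the first as σ grows, the ratio B increases monotonically with the blur, which is what makes the lookup of Section 4 possible.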
The frequency response of the constrained least-squares (CLS) restoration filter for a blur with frequency response H(u, v) is given as

H_CLS(u, v) = H*(u, v) / ( |H(u, v)|² + λ |C(u, v)|² ),

where C(u, v) represents a high-pass filter, and λ the regularization parameter controlling the smoothness of the restored image [19]. In order to avoid frequency-domain processing, which requires at least one additional frame memory, Kim proposed the original version of the truncated constrained least-squares (TCLS) filter [6] and also applied it to multi-focusing image restoration [8]. The TCLS filter is generated by truncating the spatial-domain coefficients of the CLS filter using a raised-cosine window. As a result, the TCLS filter can be realized in the form of a finite impulse response (FIR) filter in the spatial domain.
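One possible construction of a TCLS kernel is sketched below. The FFT size, truncation radius, Laplacian high-pass choice for C, and the exact raised-cosine profile are assumptions made for illustration; the paper only states that the spatial-domain CLS coefficients are truncated with a raised-cosine window.

```python
import numpy as np

def tcls_filter(sigma, lam=0.01, fft_size=64, radius=7):
    """Sketch of a TCLS FIR kernel: build the CLS frequency response
    R = H* / (|H|^2 + lam * |C|^2), transform it to the spatial domain,
    and truncate with a raised-cosine window."""
    # Gaussian PSF h and Laplacian high-pass c, zero-centered on the grid.
    y, x = np.mgrid[-fft_size//2:fft_size//2, -fft_size//2:fft_size//2]
    h = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)); h /= h.sum()
    c = np.zeros((fft_size, fft_size))
    c[fft_size//2-1:fft_size//2+2, fft_size//2-1:fft_size//2+2] = \
        np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])
    H = np.fft.fft2(np.fft.ifftshift(h))
    C = np.fft.fft2(np.fft.ifftshift(c))
    R = np.conj(H) / (np.abs(H)**2 + lam * np.abs(C)**2)
    r = np.real(np.fft.fftshift(np.fft.ifft2(R)))      # spatial CLS filter
    # Truncate to (2*radius+1)^2 taps with a raised-cosine window.
    ctr = fft_size // 2
    taps = r[ctr-radius:ctr+radius+1, ctr-radius:ctr+radius+1]
    d = np.sqrt((np.mgrid[-radius:radius+1, -radius:radius+1]**2).sum(0))
    w = 0.5 * (1 + np.cos(np.pi * np.minimum(d / (radius + 1), 1.0)))
    return taps * w

kernel = tcls_filter(1.5)        # FIR restoration filter for sigma = 1.5
```

The result is a small FIR kernel that can be applied by plain convolution, which is the property that removes the need for a frame memory.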
In order to design the TCLS restoration filter for the estimated blur size B(x, y), an arbitrary image is synthetically blurred by a Gaussian function with variance σ² to establish the relationship between B(x, y) and σ in a statistical manner. As shown in Figure 7, since B(x, y) and σ have a one-to-one correspondence, the optimal TCLS filter can be selected by calculating B(x, y) and choosing from a set of a priori generated TCLS filters. To reduce common restoration artifacts, such as noise clustering, ringing, and overshoot near edges, the spatially adaptive noise-smoothing algorithm [6,8] can also be used as needed. Figure 8 shows an example of the proposed region-adaptive image restoration algorithm. A spatially variant defocused input image is segmented for selecting optimal TCLS filters according to the blur map defined in Equation (10). The blurred image can be restored using Equation (15) together with the spatially adaptive noise smoothing algorithm.
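The one-to-one correspondence between B and σ (cf. Figure 7) can be reproduced with a small calibration sketch. The one-dimensional blur measure, the edge width, and the grid of σ values below are illustrative choices, not the paper's calibration protocol:

```python
import numpy as np
from math import erf, sqrt

def edge_profile(sigma, half=15):
    """Ideal step edge blurred by a Gaussian of std sigma (Eq. (7), no noise)."""
    x = np.arange(-half, half + 1)
    return np.array([0.5 * (1 + erf(xi / (sigma * sqrt(2)))) for xi in x])

def blur_measure(row):
    """1-D version of Eq. (10): variance of the first derivative divided
    by the variance of the second derivative."""
    d1 = np.diff(row)
    d2 = np.diff(row, n=2)
    return d1.var() / (d2.var() + 1e-12)

# Calibration table: B(sigma) is one-to-one, so the blur parameter can be
# looked up from a measured B value by nearest-neighbor search.
sigmas = np.arange(0.5, 4.01, 0.25)
table = np.array([blur_measure(edge_profile(s)) for s in sigmas])

measured = blur_measure(edge_profile(2.0))     # pretend sigma is unknown
sigma_hat = sigmas[np.argmin(np.abs(table - measured))]
```

Once such a table exists, each region's measured B value indexes directly into the bank of pre-generated TCLS filters.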
The process of the proposed image restoration is summarized in Algorithm 1. The proposed blur estimation corresponds to lines 1-2 of Algorithm 1, and the proposed region-adaptive restoration corresponds to lines 3-5.

Algorithm 1. Proposed space-adaptive image restoration.
Input: a spatially variant defocused image.
1. Compute the first- and second-order derivatives of the input image.
2. Estimate the blur map B(x, y) using Equation (10).
3. Segment the image according to the blur map and select the optimal TCLS filter for each region.
4. Restore each region using the selected TCLS filter.
5. Apply the spatially adaptive noise smoothing algorithm.
Output: a space-adaptively restored image.
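The region-adaptive restoration stage (lines 3-5 of Algorithm 1) amounts to masked FIR filtering. The sketch below is a placeholder illustration: the quantization thresholds and the stand-in identity/sharpening kernels are not the actual TCLS filter bank, and the noise-smoothing step is omitted.

```python
import numpy as np

def conv2_same(img, k):
    """Direct 2-D FIR filtering with 'same' output size (no iterations)."""
    r = k.shape[0] // 2
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += k[dy + r, dx + r] * pad[r + dy:r + dy + img.shape[0],
                                           r + dx:r + dx + img.shape[1]]
    return out

def region_adaptive_restore(g, blur_map, filter_bank, edges):
    """Quantize the blur map into classes and apply each class's FIR
    restoration filter only inside its region mask."""
    labels = np.digitize(blur_map, edges)      # region segmentation
    out = np.zeros_like(g, dtype=float)
    for i, k in enumerate(filter_bank):
        mask = labels == i
        if mask.any():
            out[mask] = conv2_same(g, k)[mask]
    return out

# Usage with stand-in filters: identity for "in focus", mild sharpening
# for "blurred" regions.
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
rng = np.random.default_rng(0)
g = rng.random((32, 32))
restored = region_adaptive_restore(g, np.zeros_like(g), [identity, sharpen],
                                   edges=[0.5])
```

Because every step is a finite convolution plus a mask, the whole stage runs in a fixed, data-independent time, which is the key claim of the paper.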

Experimental Results
For evaluating the performance of the proposed algorithm, three sets of test images, including a synthetic image, standard images of size 768 × 512, and outdoor images of size 1024 × 768 acquired by a digital single-lens reflex (DSLR) camera, were tested using the peak signal-to-noise ratio (PSNR), mean structural similarity (MSSIM) [23], and the CPU processing time in seconds on a PC equipped with a 3.40 GHz CPU and 16 GB RAM.
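For reference, the PSNR measure used throughout the experiments can be written compactly; this is a minimal implementation in which the peak value of 255 is assumed for 8-bit images:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between reference and restored images."""
    mse = np.mean((ref.astype(float) - test.astype(float))**2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```

MSSIM is computed with the structural-similarity index of [23] averaged over local windows; a standard implementation can be used.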
Experimental results using a synthetic image are shown in Figure 9. As shown in Figure 9c,d, the CLS filter and Dong's method result in significant ringing artifacts because of the mismatch between the real blur and the restoration filter. Although Yang's method, which minimizes the total variation, removes the defocus blur with better-suppressed ringing than the CLS filter and Dong's method, as shown in Figure 9e, it cannot avoid distortions in high-frequency regions, such as corners and ridges. Its iterative computational structure is another obstacle to implementation in a commercial digital camera. The result of Xu's method includes ringing artifacts, as shown in Figure 9f. Since Xu's method globally estimates a blur kernel in the entire input image and then adjusts the blur kernel for each divided region, it cannot accurately measure a spatially varying defocus blur that may change continuously throughout the image. Figure 9g,h show that both Shen's and the proposed restoration methods can successfully restore the spatially varying defocus blur without ringing artifacts. Although both methods perform restoration based on the blur map, Shen's method is strongly influenced by noise because it uses only the maximum and minimum values within a local window for generating the blur map, which results in artifacts at boundaries with discontinuities in the blur map. On the other hand, the proposed method is more robust to noise since it uses the ratio of the variances of the first and second derivatives in the corresponding region. The blur map precisely estimated by the proposed method results in fewer ringing artifacts, with the highest PSNR and MSSIM values. Figure 10a,b respectively show the averaged step responses of the restoration methods used in Figure 9. The restored step response of the proposed method is the most similar to that of the input image, with minimal undesired artifacts.
Figure 11 shows the results for another synthetic image, which was generated using commercial three-dimensional computer graphics software. The distances of the cylinder, cube, cone, and sphere from the camera lens are 15.5, 17, 24, and 25.5 cm, respectively, with an in-focus distance of 17 cm. The proposed restoration method shows the best result in terms of PSNR and MSSIM values and restores the spatially varying blur with minimal artifacts.
Figure 12. Comparison of different restoration algorithms using a real image with multiple objects at different distances from the camera: (a) the input image; (b) the restored image by Dong's method [24]; (c) the restored image by Yang's method [25]; (d) the restored image by Xu's method [12]; (e) the restored image by Shen's method [14]; and (f) the restored image by the proposed method.
Experimental results of the proposed method for naturally blurred images acquired by a DSLR camera are shown in Figures 12 and 13. Shen's method cannot avoid ringing artifacts at boundaries with discontinuities in the blur map. In contrast, as shown in Figures 12f and 13f, the proposed restoration method successfully removes the spatially varying blur with minimal artifacts.
Figure 13. Comparison of different restoration algorithms using another real image: (a) the input image; (b) the restored image by Dong's method [24]; (c) the restored image by Yang's method [25]; (d) the restored image by Xu's method [12]; (e) the restored image by Shen's method [14]; and (f) the restored image by the proposed method.

When the noise variance is large, Yang's method is effective because it can control the noise by minimizing the total variation of the image, whereas its performance is not acceptable when the noise is negligible. Xu's method also shows low PSNR and MSSIM values due to inappropriate blur estimation for the spatially varying defocus blur. Although Shen's method gives PSNR and MSSIM values similar to those of the proposed method, its performance is limited in the neighborhood of discontinuities in the blur map. The proposed method produces the best restored results in terms of both PSNR and MSSIM values. In addition, the proposed algorithm can be implemented in the form of an FIR filter.

Conclusions
In order to solve the long-standing digital multi-focusing problem in digital imaging technology, a region-wise linear approximation of the general space-variant image degradation model is presented. The proposed image degradation model covers the range from space-invariant to pixel-wise adaptive image restoration by adjusting the size of the region. Gaussian approximation of the point spread function (PSF) of an optical system enables parametric estimation of the blur parameter, which is an important factor for potential application to a wide variety of imaging sensors. Based on the region-wise space-invariant image degradation model, a novel Gaussian parameter estimation method is proposed by analyzing the relationship between the variance of a Gaussian PSF and the first and second derivatives of local edges. Since the proposed estimation method uses a set of a priori generated out-of-focus images in a statistical manner, the estimation process is stable and free from amplification of the estimation error due to the ill-posedness of derivative operations.
The restoration process is implemented in the form of a finite impulse response (FIR) filter, which can be embedded in a typical image signal processor (ISP) of a digital imaging device without additional hardware such as a frame memory. Although the truncation of the filter coefficients may degrade the restoration performance to a certain degree, the region-based processing minimizes this degradation, as demonstrated by experimental results using a set of real photographs.
The major contribution of this work is that the multi-focusing problem is decomposed into two separate sub-problems: (i) estimation of the PSF and (ii) spatially-adaptive image restoration. Since neither perfect estimation of the PSF nor ideal restoration is possible in practice, in a combined approach, such as blind deconvolution, the mixed errors from PSF estimation and restoration cannot be removed by an analytic method. On the other hand, in the proposed approach, PSF estimation can be improved by extending the Gaussian model to a more realistic one without affecting the restoration process. In the same manner, the restoration process can be either improved or replaced with any advanced one without affecting the PSF estimation process. For example, Wei et al. proposed a matrix source coding algorithm for efficiently computing space-varying convolution, and demonstrated its performance with space-variant image restoration results [26]. Since the proposed work completely decouples the PSF estimation and restoration sub-problems, Wei's convolution method can be applied to the restoration process to improve the performance.