A Deconvolutional Deblurring Algorithm Based on Short- and Long-Exposure Images

An iterative image restoration algorithm for the image deblurring problem, based on the concept of long- and short-exposure deblurring, is proposed under the image deconvolution framework by investigating the imaging principle and existing algorithms, thereby realizing the restoration of degraded images. The effective a priori side information provided by the short-exposure image is used to improve the accuracy of kernel estimation and, in turn, the quality of image restoration. For kernel estimation, an a priori filtering non-dimensional Gaussianity measure (BID-PFNGM) regularization term is proposed, and the fidelity term is corrected using short-exposure image information, improving the kernel estimation accuracy. For image restoration, a p-norm-constrained relative gradient regularization model is put forward, and a restoration result that achieves both edge preservation and texture recovery is acquired through further processing of the model output. The experimental results show that, in comparison with other algorithms, the proposed algorithm achieves a better restoration effect.


Introduction
In the exposure process of an imaging detector, the relative motion between the image surface and the object surface leads to image motion and the degradation of image quality [1]. This motion problem is usually solved by two means: (1) the relative speed between the camera and the object surface is measured, and the image motion is compensated for by placing a fast steering mirror in the light path [2]. By correcting the image motion directly through the principle of relative motion, this method is effective, but it increases the cost and complexity of the camera hardware system. (2) The optical imaging mechanism is combined with a signal processing algorithm, and the image motion is compensated for using a deblurring algorithm [3]; this method is low-cost and easy to implement, but the kernel estimation accuracy of the image motion is greatly affected by the algorithm, which impacts the correction effect.
A motion-blurred image can be expressed as the projection of the convolution of the imaging target and the image motion kernel on the image surface. Therefore, a deblurring algorithm can be denoted as a deconvolution process with an ill-posed problem [3]. If the motion kernel is estimated only from the blurred image itself [3][4][5][6], the ill-posed problem becomes serious in the equation-solving process, with a large kernel estimation error, and it is not easy to acquire good restoration results. To solve this problem, some scholars have proposed algorithms for multichannel deconvolution using two or more images.

Design Principle and Framework Basic Model of the Proposed Method
In general, the imaging models for long- and short-exposure images can be expressed as below [11] (Equation (1)), where N_L^Total and N_S^Total denote the additive noises incurred in the long-exposure and short-exposure imaging processes, respectively, both of which are zero-mean Gaussian noises by default. In the long- and short-exposure deconvolution framework, the ideal image can generally be expressed as below [11] (Equation (2)), where N_d^Total is the preprocessed image obtained after the registration [20], straight variance balancing, and denoising [21] operations on the short-exposure image Y_S^Total, aiming to enhance the information in the short-exposure image that is consistent with the corresponding part of the ideal image, for example the "Carton" part (areas with large gradients in the image). ΔX^Total is the residual error between X^Total and N_d^Total, and it is so small that it is usually ignored in the kernel estimation process. The deconvolutional algorithm model under the long- and short-exposure framework can be written in the following form (Equation (3)), where ‖·‖_2 is the norm operator, H_m and X denote the pools of allowed H_M^Total and X_iter^Total, X_opt is the preliminary restored image, H^Total is the iterative kernel in the process, and X_iter^Total is the iterative image in the process.
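To make the long- and short-exposure imaging models concrete, the following sketch simulates a blurred long-exposure frame Y_L = H ⊗ X + N_L and a sharp but dark, noisy short-exposure frame. The kernel length and direction loosely follow the paper's later simulation settings; the support size, exposure ratio, and noise levels here are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_kernel(length=6, size=15):
    """Linear motion kernel along the 45-degree diagonal, normalized to
    sum to 1 (direction/length match the paper's simulation; the support
    size is an illustrative choice)."""
    h = np.zeros((size, size))
    c = size // 2
    for t in range(length):
        h[c + t - length // 2, c + t - length // 2] = 1.0  # diagonal streak
    return h / h.sum()

def blur(x, h):
    """Circular convolution via FFT (periodic boundary assumption)."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h, s=x.shape)))

x = rng.uniform(0.0, 1.0, (64, 64))                    # ideal image X
h = motion_kernel()
y_long = blur(x, h) + rng.normal(0, 0.005, x.shape)    # Y_L: blurred, low noise
y_short = 0.3 * x + rng.normal(0, 0.05, x.shape)       # Y_S: sharp, dark, noisy
```

The short-exposure frame carries reliable edge information despite its low signal-to-noise ratio, which is what the preprocessing step exploits.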
Under normal circumstances, this problem is an ill-posed problem, which cannot be directly solved, so it should be constrained by introducing the regularization method.
As Figure 1 shows, the work of Equation (3) is divided into two parts: a kernel estimation part and an original image estimation part. The kernel estimation model in the gradient field was obtained from Equation (3) (Equation (4)), where λ is the regularization coefficient of kernel estimation, H denotes the pool of allowed H^Total, φ(h_iter^Total) is the regularization term function for kernel estimation, and h_iter^Total is the iterative kernel in the process (the same as H^Total). Because of its higher calculation efficiency, the kernel estimation model of Equation (4) was solved by sparsity [22] in the image gradient domain. Therefore, "∇" denotes the gradient operator.
The main work of Section 3.1 is to use short-exposure images to reconstruct the kernel estimation model of Equation (4). By introducing the short-exposure images into the iterative image estimation process, with a new regularization term model and the iterative optimization of the data terms, the accuracy of the kernel estimation was improved.
In detail, according to the solution framework in [3], the paper first built an image pyramid by processing the blurred and short-exposure images through stepwise downsampling. Then, the kernel was estimated from the low-resolution image in the bottom layer of the pyramid. Next, the kernel estimation result of the low-resolution layer was used as the initial term of the kernel in the estimation process of the upper, higher-resolution layer. Finally, by calculating layer by layer, the kernel estimate for the original image was obtained. Since the estimation process is the same for every layer, Section 3.1 introduces the kernel estimation method of this paper through the example of a single layer. The image deblurring model in the spatial domain was obtained from Equation (3) (Equation (5)), where γ_X is the regularization coefficient for the iteratively restored image, X_opt is the pool of allowed X_opt, X_iter^Total is the iterative image in the process, and R(X_iter^Total) is the regularization term function for image restoration. In the second part, the paper used short-exposure images to design a new regularization term, which uses the short-exposure image and the iteratively restored image to jointly constrain the iterative solution process, obtaining a better restoration effect.
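The coarse-to-fine pyramid procedure described above can be sketched as follows. Here `estimate_kernel_stub` is only a placeholder for the per-layer alternating estimation of Section 3.1, and the 2× block-average pyramid and nearest-neighbour kernel upsampling are assumed operators, not necessarily the paper's.

```python
import numpy as np

def downsample(img):
    """One pyramid level: 2x2 block averaging (an assumed operator)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_pyramid(img, levels=3):
    """First entry is full resolution, last is the coarsest layer."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def upsample_kernel(h):
    """Nearest-neighbour kernel upsample between layers, renormalized."""
    h2 = np.kron(h, np.ones((2, 2)))
    return h2 / h2.sum()

def estimate_kernel_stub(y, n, h_init):
    """Placeholder for the per-layer alternating image/kernel estimation
    of Section 3.1; a real implementation iterates Equation (6)."""
    return h_init

def coarse_to_fine(y_long, n_short, levels=3, ksize=3):
    pyr_y = build_pyramid(y_long, levels)[::-1]   # coarsest first
    pyr_n = build_pyramid(n_short, levels)[::-1]
    h = np.full((ksize, ksize), 1.0 / ksize**2)   # artificial initial kernel
    for i, (y, n) in enumerate(zip(pyr_y, pyr_n)):
        h = estimate_kernel_stub(y, n, h)         # estimate at this layer
        if i < levels - 1:
            h = upsample_kernel(h)                # seed the next, finer layer
    return h
```

Each layer's result seeds the next finer layer, which is what keeps the nonconvex per-layer problems well initialized.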

Main Works: Estimate Blur Kernel and Restore the Image Using Priori Side Information
As shown in the above section, in order to solve ill-conditioned problems, such as Equations (4) and (5), it is necessary to design appropriate models to highlight certain features of the image.
For the kernel estimation problem in Equation (4), traditional algorithms usually solve the equation with an easily designed regularization term. However, it is difficult to obtain a good solution this way because of the nonconvexity of the problem. Therefore, in this study, applying expectation maximization, we obtained the kernel estimate by the alternating layer-by-layer calculation of the layer iterative image and the layer iterative kernel. Section 3.1 introduces the details of the kernel estimation method in the selected layer with a priori side information from the short-exposure image.
To address the problem of Equation (5), Section 3.2 proposes a regularization term design method that uses short-exposure image information. The proposed regularization term, which retains information from the short-exposure image, improves the restoration effect while keeping the advantage of the Lp-norm gradient operator, which is closer to the distribution characteristics of natural images.

Kernel Estimation in This Layer: Based on a Priori Filter of Short-Exposure Images to Build the Model
In this section, the kernel estimation method for one example layer is introduced, representative of any layer of the image pyramid. As mentioned above, the main aim of the layer kernel estimation process is to apply the expectation maximization algorithm to estimate the layer iterative kernel and the layer iterative image alternately. Therefore, to improve the accuracy of the layer kernel estimation, we introduced the preprocessed layer short-exposure image as a correction template and used the JBF algorithm to correct the current layer iterative image with this template. The result is the filtered iterative image for the current iteration step. As shown in Sections 3.1.2 and 3.1.3, the a priori filtered iterative image can be applied to improve the regularization term design and optimize the fidelity term calculation.
For the layer-kernel estimation problem, Equation (4) can be written in the following iterative form (Equation (6)), where H is the result of the kernel estimation in this layer, Y_L is the blurred image in this layer, and the argmin over h represents the alternating iterative minimization of the layer iterative image X_iter and the layer iterative kernel h_iter. λ_X and λ_h are the regularization coefficients for iterative image estimation and kernel estimation in each layer. T(X_iter) and F(h_iter) are the regularization terms for the layer iterative image estimation and layer-kernel estimation models, respectively. As shown in Figure 2, the layer-kernel estimation process is mainly divided into three steps:
Step 1. Apply the joint bilateral filter (JBF) [23] algorithm to update the example layer's a priori filter image (Section 3.1.1).
Step 2. Apply the updated a priori filter image to obtain the layer iterative image (Section 3.1.2).
Step 3. Apply the a priori filter image to estimate the layer iterative kernel (Section 3.1.3).
The design requirement of the regularization term T(X_iter) in the layer iterative image solution process is to improve the similarity between the gradient map of the layer iterative result and the gradient map of the ideal image. In Section 3.1.2, we propose a layer iterative image estimation model with short-exposure images, optimizing the design of the regularization and data terms and improving the estimation accuracy of the iterative images.
The design requirements of the regularization term F(h_iter) in the layer iterative kernel solution process usually need to consider the features of the kernel itself. In this paper, the common regularization term ‖h_iter‖_2^2 is used to constrain the iterative kernel solution. The calculation process is simple, and the iterative result better controls overfitting.
Therefore, with the above improvements and an appropriate regularization term, by iterating alternately to convergence, the optimal solution of the layer kernel H can be obtained.

Calculate the Priori Filter Iterative Image
In this example layer, the gradient of the a priori filter iterative image, ∇Y_JBL^iter, is obtained by joint bilateral filtering of the iterative image X_iter and the preprocessed image N_d, where w_p is the normalization term, and f(·) and g(·) are the spatial distance weight template function and the similarity weight template function, respectively, both of which are Gaussian functions with default parameters δ in this algorithm. Φ is the block filter area, for example 3 × 3; p and q are pixel coordinates, with p the center pixel of the block filter area and q a neighboring pixel of p in the block filter region; and Ω is the entire image. The values of f(·) and g(·) both lie in the range [0, 1], so the result of the Hadamard product will be very small. Using this result directly would cause severe distortion, so it is necessary to multiply it by a normalization factor, namely the inverse of the sum of all elements of the Hadamard product of the two matrices of spatial distance weights and similarity weights. (∇X_iter)_p and (∇X_iter)_q are the pixel gradient values of the iterative image; (∇N_d)_p and (∇N_d)_q are the pixel gradient values at the corresponding positions in N_d. As Figure 3 shows, assuming a template matrix (a) and a target matrix (b), the same parameter settings are applied to obtain the bilateral filtering result (c) and the joint bilateral filtering result (d). It is evident that the distribution of (d) is closer to template (a) than (c) is, and that (d) shows more detail from (b) than template (a) does. This is determined by the principle of the joint bilateral algorithm. Therefore, it can be concluded that using N_d images, which are closer to the real distribution, in the calculation brings the distribution of the iterative results closer to the real distribution.
The main reason why the N d image is not directly used in the calculation as a priori filter image is that the signal-to-noise ratio of the short-exposure image is low, and the detail loss in the smooth part of the image intensity transition is large. Therefore, using the results of bilateral filtering to participate in the iteration can have a good correction effect on both the texture and edge parts of the image.
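The joint bilateral filtering step described above can be sketched as follows: spatial weights f(·) come from pixel distance, similarity weights g(·) from the guide image (the role played by the preprocessed N_d), and the result is normalized by w_p. The Gaussian parameters and window radius are illustrative defaults, not the paper's settings.

```python
import numpy as np

def joint_bilateral(target, guide, radius=1, sigma_s=1.0, sigma_r=0.1):
    """Joint bilateral filter sketch: spatial weights f() from pixel
    distance, similarity weights g() from the guide image, normalized
    by w_p = 1 / sum(weights)."""
    h, w = target.shape
    pad_t = np.pad(target, radius, mode="edge")
    pad_g = np.pad(guide, radius, mode="edge")
    out = np.zeros_like(target, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    f = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # spatial template f()
    for i in range(h):
        for j in range(w):
            block_t = pad_t[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            block_g = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            g = np.exp(-(block_g - guide[i, j]) ** 2 / (2 * sigma_r**2))
            wgt = f * g                      # Hadamard product of templates
            out[i, j] = (wgt * block_t).sum() / wgt.sum()  # w_p normalization
    return out
```

Because the range weights are taken from the guide image, edges present in the guide are preserved in the filtered target, which is exactly why the result inherits the template's distribution in Figure 3.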

Iterative Image Estimation
The main concept of this section is that the a priori filter non-dimensional Gaussianity measure regularization term was designed to constrain the layer iterative image estimation process, and the iterative updating of the fidelity term was realized using the a priori filtering term in the kernel estimation process. Compared with common kernel estimation algorithms in the literature [11,15,18], this algorithm constrains the iteration process through the partial high-frequency information provided by short-exposure images, thus improving the kernel estimation accuracy.
The iterative process is divided into two steps. Step 1: solve the iterative image model with the BID-PFNGM regularization term and an updated data term that introduces the a priori filter image for reconstruction.
The BID-PFNGM is as below, where E[∇Y_JBL^iter] denotes the mathematical expectation, whose value is the mean of the gradients over all pixels of the a priori filter image. As above, ∇Y_JBL^iter is the a priori filter term, and its initial value is N_d. ∇X_iter^pixel is the gradient value at the selected pixel location. As the constraint term, BID-PFNGM can be used to judge which gradient information needs to be reserved or attenuated: if the gradient of a pixel is greater than the global mean of the relative gradient, the pixel is reserved; otherwise it is abandoned. Through repeated updating of the threshold and iteration of Equation (9), only a small portion of pixels remains reserved at the end, which improves the kernel estimation accuracy while ensuring sparsity.
The original NGM from [24] was recalculated in each iteration based on the previous result. The blur kernel used at the beginning is an artificial initial value, so the initial iterative image deviates greatly from the final result. Although the final iterative result converges, if the mean value is adjusted in each iteration using the more reliable pixel gradient distribution provided by the short-exposure image, and the a priori distribution term and the fidelity term are calculated together (as shown in Equation (9)), the final estimated blur kernel is better. As described in Section 3.1.1, adding the joint bilateral filtering algorithm makes the corrected image carry both the characteristics of the template image and those of the image being corrected. If the template image has high reliability, then the corrected image is more credible than the uncorrected one.
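The reserve-or-attenuate rule described for BID-PFNGM can be sketched as a simple mean-gradient threshold. This shows only the selection logic, not the full regularization term, and the function name is hypothetical.

```python
import numpy as np

def select_salient_gradients(grad_x_iter, grad_y_jbl):
    """Sketch of the BID-PFNGM reserve/attenuate rule: a pixel's gradient
    is kept only when it exceeds the global mean gradient magnitude of
    the prior-filtered image (the threshold E[grad Y_JBL] in the text);
    otherwise it is attenuated to zero."""
    threshold = np.abs(grad_y_jbl).mean()   # global mean of prior gradients
    mask = np.abs(grad_x_iter) > threshold
    return grad_x_iter * mask, mask
```

Recomputing the threshold each iteration from the prior-filtered image is what keeps only the salient, sparse gradients in play for kernel estimation.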
Step 2: update the iterative image model. Applying the a priori filter image Y_JBL and the iterative image X_iter to reconstruct the iterative model of Equation (6) in the gradient domain, it can be written in the following form. As in [24], λ_x is a fixed value determined by experimental experience, and X_iter denotes the pool of allowed X_iter. As mentioned before, the physical meaning of this regularization term is to keep the relatively large portion of the result compared to the mean of the a priori filtered image. If the selection weight is too small, the final result will contain more noise; if it is too large, image information will be lost. Therefore, different images need different weight parameters according to their characteristics (sharpness, texture, etc.), and the selection of this term is mainly determined by experimental experience.
To solve this problem, this paper used the iterative shrinkage-thresholding (IST) algorithm, which has high calculation accuracy and a good restoration effect for non-uniform blur kernels [25]. After calculation, the iterative image solution X_iter is obtained [24].
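The IST solver mentioned above can be illustrated with a generic ISTA iteration for an L1-regularized circular deconvolution. This is a sketch of the algorithmic pattern only; the paper's actual data and regularization terms differ.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm (the shrinkage step of IST)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist_deblur(y, h_fft, lam=0.01, step=1.0, iters=200):
    """Generic ISTA for min_x 0.5*||H x - y||^2 + lam*||x||_1 with a
    circular blur H applied in the Fourier domain. Illustrative only."""
    x = y.copy()
    for _ in range(iters):
        resid = np.real(np.fft.ifft2(h_fft * np.fft.fft2(x))) - y
        grad = np.real(np.fft.ifft2(np.conj(h_fft) * np.fft.fft2(resid)))
        x = soft_threshold(x - step * grad, step * lam)  # gradient step + shrink
    return x
```

The step size must not exceed the reciprocal of the Lipschitz constant of the data term; with a normalized kernel, max|H_fft| ≤ 1, so a unit step is admissible.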

Iterative Kernel Estimation Algorithm
The iterative kernel h_iter can be obtained by updating the kernel iteration model [26] using the a priori filtering term Y_JBL and the iterative image X_iter, where λ_h is a hyperparameter of the regularization term, H denotes the pool of allowed h, and h_temp is the intermediate result of h_iter. The constraints are that the sum of all pixels of the kernel, Σ h_temp^pixel, is 1, and the energy h_temp^pixel of each point in the kernel is not less than 0. In this paper, we adopt an algorithm similar to [24], obtaining the iterative kernel by solving the equation with the gradient projection method [26]. It is worth noting that the algorithm uses the Dirichlet distribution to approximate the true distribution of the iterative kernel [27]. From a statistical perspective, solving the iterative kernel can be regarded as solving for the probability distribution of h_iter that maximizes the conditional distribution P(h | ∇Y_JBL). However, it is very difficult to solve this model directly, so the probability of a certain Dirichlet distribution P(D_Dirichlet) is used to approximate P(h | ∇Y_JBL), where D_Dirichlet represents the Dirichlet distribution to be determined. The relative entropy, or Kullback-Leibler (KL) distance, is usually used to judge the similarity of the two distributions. Table 1 shows the flow of the method. The proposed algorithm was compared with the algorithms in [11,24] in the aspect of kernel estimation, to verify its estimation accuracy. (All results for [24,27] were calculated with the code from https://sites.google.com/site/fbdhsgp, accessed on 6 December 2021, under the GNU 3.0 license; the result for [11] was calculated with the code from https://github.com/jtyuan/deblur, accessed on 6 December 2021.)
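The kernel constraints described above (non-negative entries summing to 1) define the probability simplex, and a gradient projection scheme needs a Euclidean projection onto that set. A sketch using the standard sort-based projection (not necessarily the exact routine of [26]):

```python
import numpy as np

def project_kernel(h):
    """Euclidean projection of an intermediate kernel h_temp onto the
    constraint set: entries non-negative and summing to 1 (the
    probability simplex), via the standard sort-based algorithm."""
    v = h.ravel()
    u = np.sort(v)[::-1]                                # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]  # last feasible index
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0).reshape(h.shape)
```

Applying this after each gradient step keeps every intermediate kernel a valid, energy-preserving blur kernel.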
In the simulation of the long-exposure image, the original image was blurred using a kernel with a size of 6 pixels in the direction of 45°. For the short-exposure image, the exposure of the original image was reduced via PS software, with parameters set to 30% of the original image, and Gaussian noise with a mean of 0 and variance of 0.05 was added to simulate the low-SNR characteristic of the short-exposure image. Figure 4 shows the kernel estimation results. It can be directly observed that the energy distribution of the kernel estimated by the proposed algorithm was closer to the original kernel. The results were evaluated using the quantitative indexes in Table 2: two objective criteria, peak signal-to-noise ratio (PSNR, unit: dB) and kernel structural similarity (SSIM). Figure 4e,f show the normalized PSNR and SSIM comparisons of the blur kernel estimates over the estimation process. In the iterative process based on the image pyramid, due to down-sampling, the initial image was small and the values were high, with an overall downward trend; this is an inevitable result of resolution normalization. However, it did not affect the estimation effect of this algorithm over the entire iterative process, which was better than the results in [24]. It can be seen that kernel estimation based on BID-PFNGM was better than kernel estimation based on NGM. According to Table 2, the proposed algorithm showed better performance in both indexes, indicating that its estimation result was superior to the algorithms in [11,24].
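For reference, the two evaluation indexes used above can be computed as follows. `ssim_global` is a simplified single-window SSIM; reported SSIM values usually average this statistic over local windows.

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def ssim_global(ref, img, peak=1.0):
    """Single-window (global) SSIM; the full index averages this over
    local windows, but the global form suffices for a quick check."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    vx, vy = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (vx + vy + c2))
```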
2. Estimate the convolution kernel at the current layer.
3. Update the fidelity term and regularization term through the united filtering algorithm.
4. Estimate the iterative image.
5. Estimate the convolution kernel.
6. Interpolate both the kernel and the image, extend to the next layer, and repeat from Step 2.
7. End the estimation and return the kernel H_opt.

Image Deblur: Deconvolutional Restoration Based on Relative Gradient Operator
In the deconvolutional image restoration process, [17] used the global gradient function of the to-be-restored image as the constraint term. Affected by the staircase effect, the texture in the image could not be well restored. References [11,18] used the RL algorithm to restore the image and obtain the original solution, but the RL algorithm has a poor image restoration effect, which also limited the final result.
To solve the above problems, in this paper we put forward an Lp-norm constraint function called the relative gradient operator (RGO) to construct the regularization term. A regularization term constrained by the Lp-norm has a good edge preservation effect [28,29]. When the RGO is applied to the image, besides the global gradient information, it contains the relative difference information between the iterative deblurred image and the short-exposure preprocessed image. Compared with the TV operator, which is nonconvex over the image area, the texture information of the restoration result is clearer and the effect is better. The definition and properties of the RGO are given below.
Any bounded Lipschitz region Ω in a two-dimensional image is defined as the domain interval [30], and the RGO can be written as follows, where a is a pixel in Ω, j is the location order of the pixel, and p is the power number used in Section 3.2. Observing the above equation, the RGO must be integrable in the two-dimensional bounded Lipschitz region (because this region is a two-dimensional real number field and N_d is a constant, the RGO must have an upper bound in this region, satisfying integrability) [31]; that is, it satisfies ∫_Ω |∇X_j − X_j + N_d^j|^p dΩ < ∞, and the RGO has homogeneity. Therefore, the RGO is a function defined on the domain Ω belonging to L^p space: CD ∈ L^p(Ω), where L^p(Ω) = {u : ∫_Ω |u(x)|^p dx < ∞} and u is any function satisfying ∫_Ω |u(x)|^p dx < ∞ [31].
From the above equations, the relative gradient operator can be expressed as the sum of a summation operator and a gradient operator; namely, the RGO contains the gradient information of the image itself and the difference information between the iterative deblurred image and the a priori image N_d.
It can therefore be used to rewrite Equation (5) into the form of Equation (12), where γ_R is a hyperparameter of the regularization term, X denotes the pool of allowed X_Riter, and X_temp is the iterative temporary value of X_opt. Equation (12) denotes the model of an objective function constrained by the p-norm relative gradient operator regularization term. It generally cannot be solved directly by gradient descent-like methods [32], so smooth approximation functions are usually used, such as the Huber function [33], where v is the independent variable and ε is the limiting parameter, which determines the similarity between the Huber function and the original function. Equation (12) is solved by approximately expressing it as a first-order continuously differentiable 2-norm function, Equation (14). As N_d is a constant, for any p > 0, Equation (14) is first-order continuously differentiable. When ε is sufficiently small, the solutions of Equations (12) and (14) are approximately equivalent, and CD(X_Riter, N_d) can be written as a function CD(X_Riter) of X_Riter. When Equation (12) reaches its minimum value, the following can be obtained through derivation (Equation (15)), where M is a diagonal matrix similar to diag(min(p ε^(p−2), p |CD(X_Riter)|^(p−2))). With the introduction of the auxiliary function T(a) [34] and based on the idea of majorization-minimization (MM) [35], this problem can be solved by the alternating direction method of multipliers (ADMM) [36], so as to obtain the initial solution X.
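The Huber function mentioned above replaces the non-differentiable magnitude with a quadratic near the origin. A sketch of the standard form (the paper's exact parameterization may differ):

```python
import numpy as np

def huber(v, eps):
    """Huber smoothing of |v|: quadratic inside [-eps, eps], linear
    outside, so the function and its first derivative are continuous
    at +/-eps."""
    a = np.abs(v)
    return np.where(a <= eps, v**2 / (2.0 * eps), a - eps / 2.0)
```

At v = ±ε both branches give ε/2 with matching slope ±1, which is what makes the approximated objective first-order continuously differentiable; as ε → 0 the function approaches |v|.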
After the initial solution was solved, the high-frequency detailed information of the image was extracted to enhance its texture presentation. The extraction of high-frequency detail information was solved by a similar method to that in [18].
First, the residual error image ∆Y_L = Y_L − H ⊗ N_d was used as the input. Then the residual error solution X_RRL and the gain control solution X_gcl were acquired through the residual Richardson-Lucy (RRL) [37] and gain-controlled Richardson-Lucy (GCRL) [38] algorithms, respectively. Secondly, united bilateral filtering of X_RRL and the initial solution X_opt was conducted to acquire the smooth image X̄, followed by the operation X_detail = X_gcl − X̄ to acquire the detail image solution X_detail. In the end, the initial solution and the detail solution were integrated (X = X_opt + X_detail) to obtain the final restored image X. The flow of the whole algorithm of Section 3.2 is shown in Table 3.
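The final fusion step described above can be sketched as below. Here `x_bar` stands in for the result of the united bilateral filtering of X_RRL and X_opt, and treating the detail layer as the residual of the gain-control solution over the smooth image is an assumption based on the text.

```python
import numpy as np

def fuse_details(x_opt, x_gcl, x_bar):
    """Sketch of the fusion step of Section 3.2: subtract the smooth
    image x_bar from the gain-control solution to isolate the
    high-frequency detail layer, then add it back onto the initial
    solution (X = X_opt + X_detail)."""
    x_detail = x_gcl - x_bar          # high-frequency detail image
    return x_opt + x_detail           # final restored image
```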

Deconvolutional Algorithm Flow Based on Long- and Short-Exposure
Input: Y_L, N_d and H
1. Calculate the initial value of the regularization term
2. Iteratively solve Equation (15) to acquire the initial solution X_opt
3. Calculate the result of the RRL algorithm, X_RRL
4. Calculate the result of the GCRL algorithm, X_gcl
5. Calculate the image details X_detail and acquire the restoration result X

As shown in Table 4 and Figure 5, in the case of using the same kernel, we used the PSNR and SSIM indicators to compare the restoration effects of the algorithm in this paper and the algorithm in [27]. Table 4 gives a comparison of the parameters of the final restoration result, and Figure 4 shows a comparison of the relevant parameters of the restored image at each iteration: the top row of Figure 4 shows the difference in PSNR, and the bottom row shows the comparison of SSIM. It can be seen that the proposed algorithm works well; both results were better than those in [27].

Results and Discussion
In this paper, three groups of experiments were used to examine the performance of our algorithm. The first group comprised simulation experiments, which aimed to examine the restoration effect of the algorithm by measuring quality evaluation indices against reference images. The second group of experiments, named the live shot group, was conducted to verify the practicability of the algorithm by examining its restoration effect on live-photography images. The live shot group consisted of experiment 1 and experiment 2: the algorithms in [11,17] constituted the control group in experiment 1, and those in [11,27] constituted the control group in experiment 2. In the third group, composed of the modulation transfer function (MTF) measurement experiments [39], the MTF curve of the restored simulated image was measured.

Group 1: Simulation Experiment
Experiment 1: the first group of the simulation experiment used remote sensing images with a size of 512 × 512 pixels, and the long-exposure images were blurred manually. The short-exposure image was simulated by using PS software to reduce the exposure of the original image: the exposure was set to 30% of the original image, and Gaussian noise with a mean of 0 and a variance of 0.05 was added to simulate the low signal-to-noise ratio of a short-exposure image.
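The short-exposure simulation just described (30% exposure plus zero-mean Gaussian noise of variance 0.05) can be sketched as follows; the function name and the assumption that intensities lie in [0, 1] are our own.

```python
import numpy as np

def simulate_short_exposure(img, exposure_ratio=0.3, noise_var=0.05, seed=0):
    # img: float array in [0, 1]. Scale intensities to the target exposure
    # ratio and add zero-mean Gaussian noise to model the low SNR of a
    # short exposure, then clip back to the valid range.
    rng = np.random.default_rng(seed)
    short = img * exposure_ratio
    short = short + rng.normal(0.0, np.sqrt(noise_var), img.shape)
    return np.clip(short, 0.0, 1.0)
```

The seed is fixed only so the simulation is reproducible across runs.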
The experimental results are shown in Figure 6. Direct observation of the restoration results shows that the methods in [11,27] were not accurate enough in the kernel estimation, whereas the proposed algorithm performed better in both edge preservation and texture restoration. As this group comprised simulation experiments with an available original image, full-reference image quality indices were used to measure the quality of the restored images; SSIM and PSNR were selected as the indicators. As shown in Table 5, this algorithm performed well in both metrics.

Experiment 2: the second group of the simulation experiment used Levin's image library, and the long-exposure images were blurred manually. The short-exposure image was simulated by using PS software to reduce the exposure of the original image to 30%, and Gaussian noise with a mean of 0 and a variance of 0.05 was added to simulate the low signal-to-noise ratio of a short-exposure image. The main purpose of this experiment was to verify the generalizability of the algorithm, so eight kernels were used to blur all four images of Levin's image library. PSNR and SSIM were selected as the test metrics to measure the overall performance of each algorithm on each image.
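PSNR, one of the two selected metrics, can be computed as follows for images normalized to [0, 1] (a minimal sketch; SSIM is more involved and is omitted here):

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    # peak signal-to-noise ratio in dB; higher means closer to the reference
    ref = np.asarray(ref, float)
    img = np.asarray(img, float)
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A uniform error of 0.1 over a unit-range image, for instance, gives an MSE of 0.01 and hence a PSNR of exactly 20 dB.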
The specific calculation used a summation method: the PSNR and SSIM values of the images recovered with the eight kernels were summed individually for each image, and the total value was used as the final evaluation result. Among them, the PSNR was computed after histogram matching with the original image. Figure 7 shows (a) the original images and (b) the self-designed simulated blur kernels. As shown in Tables 6 and 7, the algorithm in this paper performs well overall in both the PSNR and SSIM metrics of the images; combined with the recovery effect shown in Figure 6, it can be concluded that the proposed algorithm outperforms the comparison algorithms in most scenes, in both visual perception and objective measurements.

Group 2: Live Shot Group Experiment 1

This group of experiments comprised real-shot experiments, so a subjective evaluation method was used to assess the restoration quality of the images. In addition, several existing no-reference image quality assessment (IQA) indicators were used to evaluate the restoration quality from another perspective: the natural image quality evaluator (NIQE; the smaller the value, the closer the result is to a natural image and the better the quality) [40], the average gradient (the larger the value, the clearer the image texture), and the cumulative probability of blur detection (CPBD; the larger the value, the sharper the image) [41].
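Of the three no-reference indicators, the average gradient is the simplest; one common definition is sketched below. The paper does not give its exact formula, so the normalization by √2 here is an assumption.

```python
import numpy as np

def average_gradient(img):
    # mean local gradient magnitude; larger values indicate crisper texture
    img = np.asarray(img, float)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

A flat image scores 0, and sharpening an image increases the score, which is why the indicator serves as a sharpness proxy.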
The live shot data in [17] were used in the first control group, and the results were compared with the processing results in [11,17]. The live shot image was from [17], page 4, with a size of 498 × 498 pixels, and the copyright belongs to the original author, Tallon.
The results are shown in Figure 8 and Table 8. Direct observation of the restoration results shows that the methods in [11,17] were not accurate enough in the kernel estimation: the yellow spine on the left in the figure is quite fuzzy, as is the pattern edge on the cup, and the characters in the upper right corner can hardly be identified. The proposed algorithm had better performance in both edge preservation and texture restoration. Subjective observation, combined with the measurement of the IQA indices, shows that the recovery performance of this algorithm was better: the restored image texture is clearer, the high-gradient regions are sharper, and the visual effect is better.

Group 2: Live Shot Group Experiment 2
The second experimental picture was taken by an industrial camera (The Imaging Source DFK290). In order to test the restoration performance of the proposed algorithm on a natural scene under poor imaging conditions, we set the exposure time and ISO of the long-exposure image to 1/47 s and 100, respectively. The length and width of a single CMOS pixel were 3 µm, the focal length was 50 mm, and the object distance was 5 m. The image resolution was 1920 × 1080, and an area with a resolution of 400 × 360 was cropped for the restoration. The exposure time and ISO of the short-exposure image were 1/1600 s and 400, respectively. After calculating the camera and lens parameters, the upper speed limit, at which no relative motion blur would be generated within the shutter time, was determined to be about 1.7 m/s; in other words, no motion-induced blur would be generated if the relative motion speed were lower than this limit. In the test, the relative motion speed was much higher than 1.7 m/s, so the selected shutter time was suitable and conformed to the short-exposure imaging requirement.
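As a sanity check on the speed limit, the maximum relative speed for a given tolerable image motion follows from the object-space footprint of one pixel. The sketch below is our own reconstruction: with the stated parameters, one pixel of motion within 1/1600 s corresponds to about 0.48 m/s, so the paper's ≈1.7 m/s threshold appears to allow roughly 3–4 pixels of image motion (the exact blur tolerance used by the authors is not stated).

```python
def max_object_speed(pixel_m, focal_m, dist_m, t_exp_s, blur_px=1.0):
    # object-space size of one pixel (ground sample distance)
    gsd = pixel_m * dist_m / focal_m
    # fastest relative motion that keeps image motion within blur_px pixels
    return blur_px * gsd / t_exp_s

# paper's parameters: 3 um pixel, 50 mm lens, 5 m range, 1/1600 s shutter
v_one_pixel = max_object_speed(3e-6, 0.05, 5.0, 1 / 1600)  # about 0.48 m/s
```

The blur_px parameter is the assumption here; choosing it fixes how much image motion is accepted as "no blur".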
The motion blur was large in this experiment, and the algorithm in [11] follows principles different from those of the proposed restoration algorithm, so an obvious relative position difference existed in the restoration result.
The IQA results of the experiment are given in Table 9. Table 9. Final results of experiment 2.

The experimental results are shown in Figure 9. It can be directly observed that the image restored using the algorithm in [11] is over-smoothed, with a great loss of texture details. The restoration effect of [27] was somewhat improved, but the characters on the bottle still could not be identified. Subjective observation, combined with the measurement of the IQA indices, shows that the recovery performance of this algorithm was better: the image restored using the proposed algorithm presents clearer edges than the results obtained through the previous two algorithms; for example, the character profiles on the shot object are clearer, there are more texture details, and the image is more real and natural, without obvious ringing effects; moreover, it is richer in colors.
Figure 9. (d) image restored by the algorithm in [11]; (e) the restoration result of this algorithm; (f) result of kernel; (g) blurred image; (h) short-exposure image; (i) image restored by the algorithm in [27]; (j) image restored by the algorithm in [11]; (k) result of the algorithm proposed in this paper; (l) details of the Chinese characters.

Group 3: MTF Measurement Experiment
The sharpness of the image restored by the proposed algorithm was evaluated through the MTF in this group of experiments. The MTF parameters were measured through the method conforming to ISO 12233 [39]. The blade-edge image (420 × 650) was selected as the original image, and the motion blur of a long-exposure image was simulated using a 45° kernel with a length of 6 pixels. The short-exposure image was simulated by reducing the exposure of the original image via PS software: the input–output ratio of the brightness parameter curve was adjusted to 36:89, the exposure was reduced to 30% of the original image, and Gaussian noise with a mean of 0 and a variance of 0.05 was added to simulate the low-SNR characteristic of a short-exposure image. The measurement was implemented with the SFR_1.41 program compiled by the MITRE Corporation in 2006.
The experimental results are displayed in Figure 10. The main goal of this experiment was to evaluate the restoration effect of the algorithm by measuring the MTF of the deblurred image. The test program determines the MTF-related values by calculating the smoothness of the transition in the longitudinal direction of the edge area. Comparing the restored image (d) with the original image (c), the longitudinal transition of the deblurred image is more abrupt, while the original image is smoother and fuzzier. Accordingly, in the MTF comparison chart, the MTF of the image deblurred by this algorithm was higher than that of the original image. The results show that the proposed algorithm improved the edge sharpness effectively, with an excellent restoration effect, meeting the expectations of the algorithm design.
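The core of the edge-based MTF computation can be sketched as follows. This is a simplified version of the ISO 12233 procedure (no edge-angle estimation or oversampled binning), intended only to show the ESF → LSF → MTF chain that the SFR program carries out.

```python
import numpy as np

def mtf_from_edge(edge_profile):
    # ESF -> LSF by differentiation, then normalized |FFT| gives the MTF
    lsf = np.diff(np.asarray(edge_profile, float))
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]   # normalize so MTF(0) = 1

x = np.linspace(-1.0, 1.0, 128)
sharp = (x > 0).astype(float)            # ideal step edge
soft = 0.5 * (1.0 + np.tanh(x / 0.1))    # blurred (smoother) edge
# the blurred edge yields a lower MTF at mid frequencies than the ideal edge
```

A sharper edge transition produces a flatter MTF curve, which is exactly why the restored image's MTF lies above that of the blurred input in Figure 10.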

Conclusions
The advantages and disadvantages of single-image blind deconvolution algorithms and multi-image deconvolution algorithms were first analyzed, and the merits of the long- and short-exposure deblurring algorithm and the blind deblurring algorithm were then integrated to propose a deconvolutional deblurring solution with practical value. Through the joint kernel estimation from a short-exposure image and a long-exposure image, the priori side information provided by the short-exposure image was fully utilized, and the morphology of the kernel was accurately estimated in the estimation part. In the image restoration part, a new regularization term was designed using the short-exposure image, the image details were enriched by introducing detail image information, and the image edges were kept sharp, thus effectively improving the final image restoration effect. The experimental results also prove that the proposed algorithm basically meets the design expectations, and its restoration effect is clearly improved over existing algorithms.
