Article

Generalized Structure Preserving Preconditioners for Frame-Based Image Deblurring

by Davide Bianchi 1,† and Alessandro Buccini 2,*,†
1 Department of Science and High Technology, University of Insubria, 22100 Como, Italy
2 Department of Mathematics and Computer Science, University of Cagliari, 09124 Cagliari, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2020, 8(4), 468; https://doi.org/10.3390/math8040468
Submission received: 5 February 2020 / Revised: 14 March 2020 / Accepted: 21 March 2020 / Published: 27 March 2020
(This article belongs to the Special Issue Matrix Structures: Numerical Methods and Applications)

Abstract

We are interested in fast and stable iterative regularization methods for image deblurring problems with space invariant blur. The associated coefficient matrix has a Block Toeplitz with Toeplitz Blocks (BTTB)-like structure, plus a small-rank correction depending on the boundary conditions imposed on the imaging model. In the literature, several strategies have been proposed in the attempt to define proper preconditioners for iterative regularization methods that involve such linear systems. Usually, the preconditioner is chosen to be a Block Circulant with Circulant Blocks (BCCB) matrix, because it can efficiently exploit the Fast Fourier Transform (FFT) for any computation, including the (pseudo-)inversion. Nevertheless, for ill-conditioned problems, it is well known that BCCB preconditioners cannot provide a strong clustering of the eigenvalues. Moreover, in order to get an effective preconditioner, it is crucial to preserve the structure of the coefficient matrix. On the other hand, thresholding iterative methods have recently been successfully applied to image deblurring problems, exploiting the sparsity of the image in a proper wavelet domain. Motivated by the results of recent papers, the main novelty of this work is the combination of nonstationary structure preserving preconditioners with general regularizing operators that contain in their kernel the key features of the true solution that we wish to preserve. Several numerical experiments show the performance of our methods in terms of the quality of the restorations.

1. Introduction

In image deblurring we are concerned with reconstructing an approximation of an image from blurred and noisy measurements. This process can be modeled by an integral equation of the form
$g(x,y) = (\kappa \circledast f)(x,y) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \kappa(x,y,x',y')\,f(x',y')\,\mathrm{d}x'\,\mathrm{d}y' + \eta(x,y), \qquad (x,y)\in\Omega\subset\mathbb{R}^2,$ (1)
where $f:\mathbb{R}^2\to\mathbb{R}$ is the original image and $g:\mathbb{R}^2\to\mathbb{R}$ is the observed image, which is obtained from the combination of a (Hilbert–Schmidt) integral operator, represented by the kernel $\kappa:\mathbb{R}^4\to\mathbb{R}$, and the addition of some (unavoidable) noise $\eta:\mathbb{R}^2\to\mathbb{R}$ coming from, e.g., perturbations of the observed data, measurement errors, and approximation errors. Assuming the kernel $\kappa$ to be compactly supported and considering the ideal case $\eta = 0$, Equation (1) becomes
g = K · f ,
where K is a compact linear operator. In this context, $\kappa$ is generally called the point spread function (PSF).
Considering a uniform grid, images are represented by their color intensities measured on the grid (pixels). In this paper, for the sake of simplicity, we deal only with square, gray-scale images, even if all the techniques presented here carry over to images of different sizes and to color images as well.
Collected images are available only in a finite region, the field of view (FOV), and the measured intensities near the boundary are affected by data which lie outside the FOV; see Figure 1 for an illustration.
Denoting by g and f the stack ordered vectors corresponding to the observed image and the true image, respectively, the discretization of (1) becomes
$g = K f + \eta,$ (2)
where the matrix K is of size $m^2\times k^2$, $m^2$ and $k^2$ being the numbers of pixels of the observed and of the original picture, respectively. The matrix K is often called the blurring matrix. When imposing proper Boundary Conditions (BCs), the matrix K becomes square of size $m^2\times m^2$ and, in some cases, depending on the BCs and the symmetry of the PSF, it can be diagonalized by discrete trigonometric transforms. Indeed, specific BCs induce specific matrix structures that can be exploited to lessen the computational cost by means of fast algorithms. Of course, since BCs are artificially introduced, their advantages may come with drawbacks in terms of reconstruction accuracy, depending on the type of problem. The BCs approach forces a functional dependency between the elements of f external to the FOV and those internal to this area. If the BC model is not a good approximation of the real world outside the FOV, the reconstructed image can be severely affected by some unwanted artifacts near the boundary, called ringing effects; see, e.g., [1].
The choice among the different BCs can be driven by some additional knowledge on the true image and/or by the availability of fast transforms to diagonalize the matrix K within $O(m^2\log m)$ arithmetic operations. Indeed, the matrix-vector product can always be computed by the 2D FFT, after a proper padding of the image to be convolved (see, e.g., [2]), while the availability of fast transforms to diagonalize the matrix K depends on the BCs. Among the BCs present in the literature, we consider: Zero (Dirichlet), Periodic, Reflective, and Anti-Reflective, but our approach can be extended to other BCs like, e.g., synthetic BCs [3] or high order BCs [4,5]. See Figure 2 for an illustration of the described BCs.
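To make the role of the BCs concrete, the following minimal sketch (our own illustration, not the authors' code; the function and variable names are ours) computes the blurred image $Kf$ for a given centered PSF by padding the image according to the chosen BCs and convolving via the 2D FFT. Anti-reflective padding has no ready-made NumPy mode and is omitted here.

```python
import numpy as np

def blur(f, psf, bc="reflective"):
    """Apply a space-invariant blur to the image f under the chosen BCs."""
    pad = psf.shape[0] // 2                     # assume a square PSF, centered
    modes = {"zero": "constant", "periodic": "wrap", "reflective": "symmetric"}
    fp = np.pad(f, pad, mode=modes[bc])         # extend f outside the FOV
    h = np.zeros_like(fp)                       # embed the PSF ...
    h[:psf.shape[0], :psf.shape[1]] = psf
    h = np.roll(h, (-pad, -pad), axis=(0, 1))   # ... centered at pixel (0, 0)
    g = np.real(np.fft.ifft2(np.fft.fft2(fp) * np.fft.fft2(h)))
    return g[pad:pad + f.shape[0], pad:pad + f.shape[1]]  # crop back to the FOV

f = np.random.rand(64, 64)                      # hypothetical test image
psf = np.ones((5, 5)) / 25.0                    # hypothetical mean-filter PSF
print(blur(f, psf, "periodic").shape)           # (64, 64)
```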
On the other hand, since the matrix K in Equation (2) is the discretization of a compact operator, it is severely ill-conditioned and may be singular. Such linear systems are commonly referred to as linear discrete ill-posed problems; see, e.g., [6] for a discussion. Therefore, a good approximation of f cannot be obtained from the algebraic solution (e.g., the least-squares solution) of (2), and regularization methods are required. The basic idea of regularization is to replace the original ill-conditioned problem with a nearby well-conditioned problem, whose solution approximates the true solution. One of the most popular regularization techniques is Tikhonov regularization, which amounts to solving
$\min_f \left\{ \|Kf - g\|_2^2 + \mu \|f\|_2^2 \right\},$ (3)
where $\|\cdot\|_2$ denotes the vector 2-norm and $\mu > 0$ is a regularization parameter to be chosen. The first term in (3) is usually referred to as the fidelity term and the second as the regularization term. This translates into solving a linear problem, for which many efficient methods have been developed, both for computing the solution and for estimating the regularization parameter $\mu$; see [6]. This approach unfortunately comes with a drawback: the edges of the restored images are usually over-smoothed. Therefore, nonlinear strategies have been employed to overcome this unpleasant property, like total variation (TV) [7] and thresholding iterative methods [8,9]. That said, many nonlinear regularization methods typically have an inner step that applies a least-squares regularization, and can therefore benefit from strategies previously developed for this simpler model.
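For instance, under periodic BCs the matrix K in (3) is BCCB and is diagonalized by the 2D FFT, so the Tikhonov solution $f = (K^*K + \mu I)^{-1}K^*g$ can be computed pointwise in the Fourier domain. A small sketch (our own illustration, with hypothetical helper names):

```python
import numpy as np

def tikhonov_periodic(g, psf, mu):
    """Tikhonov restoration (3) assuming periodic BCs, computed via 2D FFTs."""
    p = psf.shape[0] // 2
    h = np.zeros_like(g)
    h[:psf.shape[0], :psf.shape[1]] = psf
    K_hat = np.fft.fft2(np.roll(h, (-p, -p), axis=(0, 1)))  # eigenvalues of K
    f_hat = np.conj(K_hat) * np.fft.fft2(g) / (np.abs(K_hat) ** 2 + mu)
    return np.real(np.fft.ifft2(f_hat))
```

The same Tikhonov filter, applied with a nonstationary parameter, is the computational core of the preconditioners discussed below.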
In the present paper, both of the regularization strategies that we propose share two common ingredients: a wavelet decomposition and an $\ell^1$-norm minimization on the regularization term. This is motivated by the fact that the wavelet coefficients (under some basis) of most real images are usually very sparse. In particular, here we consider the tight frame systems used in [10,11,12], but the results can be easily extended to any framelet/wavelet system. Let $W^*$ be a wavelet or tight-frame synthesis operator ($W^*W = I$); the wavelet or tight-frame coefficients of the original image f are the vector x such that
$f = W^* x,$ and the blurring operator becomes $A = KW^*$.
Within this framework, the model Equation (2) translates into
$g = A x.$ (4)
Recently, in [13], a new technique was proposed which directly applies a single preconditioning operator P to Equation (4). The new preconditioned system becomes
$P g = P A x.$
Combining this approach with a soft-thresholding technique, such as the modified linearized Bregman splitting algorithm [14], in order to mimic the $\ell^1$-norm minimization, leads to the following preconditioned iterative scheme
$x_{n+1} = x_n + \tau P \left( g - A S_\mu(x_n) \right),$
where $\tau$ is a positive relaxation parameter and $S_\mu(\cdot)$ is the soft-thresholding function as defined in (6). Hereafter we set $\tau = 1$: this is justified by an implicit rescaling of the preconditioned system matrix $PA$.
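A schematic sketch of this scheme (our own illustration; A and P are passed as generic callables, and $\tau = 1$ as discussed above):

```python
import numpy as np

def soft_threshold(u, mu):
    """Componentwise soft-thresholding, cf. (6) below."""
    return np.sign(u) * np.maximum(np.abs(u) - mu, 0.0)

def preconditioned_iteration(g, A, P, mu, n_iter=100):
    """Iterate x_{n+1} = x_n + P(g - A S_mu(x_n)) starting from x_0 = 0."""
    x = np.zeros_like(P(g))
    for _ in range(n_iter):
        x = x + P(g - A(soft_threshold(x, mu)))
    return soft_threshold(x, mu)
```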
The paper is organized as follows: in Section 2 we propose a generalization of an approximated iterative Tikhonov scheme that was first introduced in [15] and then developed and adapted to different settings in [16,17]. Here the preconditioner P takes the form
$P = B^* \left( BB^* + \alpha_n \Lambda\Lambda^* \right)^{-1},$
where B is an approximation of A, in the sense that $B = CW^*$, with C the discretization of the same problem (1) as the original blurring matrix K, but imposing Periodic BCs. The operator $\Lambda\Lambda^*$ can be a function of $CC^*$ or the discretization of a differential operator. The method is nonstationary and the parameter $\alpha_n$ is computed by solving a nonlinear problem with a computational cost of $O(m^2)$. Related work on this kind of preconditioner can be found in [18,19,20]. In Section 3 we define a class of preconditioners P endowed with the same structure as the system matrix A, as initially proposed in [21] and then further developed in [22]. It is called the structure preserving reblurring preconditioning strategy, and we combine it with the generalized regularization filtering approach of the preceding Section 2. The idea is to preserve both the information carried by the spectrum of the operator A and the structure of the operator induced by the best fitting BCs. Section 4 contains a selection of significant numerical examples which confirm the robustness and quality of the proposed regularization schemes. Section 5 provides a summary of the techniques presented in this work and draws some conclusions. Finally, Appendix A provides the proofs of the convergence and regularization properties of the proposed algorithms.

2. Preconditioned Iterated Soft-Thresholding Tikhonov with General Regularizing Operator

2.1. Preliminary Definitions

Before proceeding further, let us introduce some definitions and notation that will be used in the forthcoming sections. We consider
$K : (\mathbb{R}^{m^2}, \|\cdot\|) \to (\mathbb{R}^{m^2}, \|\cdot\|)$
to be the discretization of a compact linear operator
g = K f ,
where the Euclidean 2-norm $\|\cdot\|$ is induced by the standard Euclidean inner product
$\langle f^{(1)}, f^{(2)} \rangle_{\mathbb{R}^{m^2}} = \sum_{j=1}^{m^2} f^{(1)}_j f^{(2)}_j.$
Hereafter, we will specify the vector space on which the inner product $\langle\cdot,\cdot\rangle$ acts only when necessary for disambiguation. The analysis in the next sections will generally be performed on perturbed data $g^\delta$, namely
$g^\delta = K f + \eta,$
with $g^\delta = g + \eta$, where $\eta$ is a noise vector such that $\|\eta\| = \delta$; $\delta$ is called the noise level.
Let
$C : (\mathbb{R}^{m^2}, \|\cdot\|) \to (\mathbb{R}^{m^2}, \|\cdot\|)$
be the discretization of a compact linear operator that approximates K, in a sense that will be specified later. Let
$W : (\mathbb{R}^{m^2}, \|\cdot\|) \to (\mathbb{R}^{s}, \|\cdot\|)$
be such that
$W^* W = I,$
where $W^* : \mathbb{R}^s \to \mathbb{R}^{m^2}$ denotes the adjoint operator of W, i.e., $\langle Wf, u\rangle_{\mathbb{R}^s} = \langle f, W^*u\rangle_{\mathbb{R}^{m^2}}$ for each pair $f\in\mathbb{R}^{m^2}$, $u\in\mathbb{R}^s$. We define
$x = Wf, \qquad A = KW^*, \qquad B = CW^*.$
Let us introduce the following matrix norm. Given a generic linear operator
$L : (\mathbb{R}^{s}, \|\cdot\|_\infty) \to (\mathbb{R}^{m^2}, \|\cdot\|),$
where $\|\cdot\|_\infty$ is the sup norm, let us define the matrix norm $\|\cdot\|$ as
$\|L\| := \sup_{\|x\|_\infty \le 1} \|Lx\|_2.$ (5)
Finally, let $\mu \ge 0$ and let $S_\mu : \mathbb{R}^s \to \mathbb{R}^s$ be such that
$[S_\mu(u)]_i = S_\mu(u_i),$
with $S_\mu$ the soft-thresholding function
$S_\mu(u_i) = \mathrm{sgn}(u_i)\,\max\{|u_i| - \mu,\, 0\}.$ (6)

2.2. General Regularization Operator as h ( C C * )

Let $h : [0, \|CC^*\|_2] \to \mathbb{R}$ be a continuous function such that
$0 < c_1 \le h(\sigma^2) \le c_2,$
where $c_1, c_2$ are positive constants, and define $c := c_1/c_2$. By the continuity of h and by well-known facts from functional analysis [23], we can write $h(CC^*)$ as the operator defined by
$h(CC^*)[f] := \int h(\sigma^2)\,\mathrm{d}E_{\sigma^2}(f) = \sum_{k} h(\sigma_k^2)\,\langle f, u_k\rangle\, u_k,$
where $\{E_{\sigma^2}\}_{\sigma^2 \in \sigma(CC^*)}$ is the spectral decomposition of the (generic) self-adjoint operator $CC^*$ and $(\sigma_k, v_k, u_k)_k$ is the singular value expansion of C. We summarize the computations in Algorithm 1.
Algorithm 1 PISTA$_h$
  • Fix $z_0 \in \mathbb{R}^s$, $\delta > 0$ and set $x_0 = S_\mu(z_0)$, $r_0 = g - Ax_0$.
  • Set $\rho \in (0, c/2)$ and $q \in (2\rho, c)$.
  • Compute $\tau = \frac{1+2\rho}{c-2\rho}$ and $r_n = g - Ax_n$.
  • while $\|r_n\| > \tau\delta$ do
  •  Compute $\tau_n := \|r_n\|/\delta$.
  •  Compute $q_n := \max\{q,\, 2\rho + (1+\rho)/\tau_n\}$.
  •  Compute $\alpha_n$ such that
     $\alpha_n \left\| (CC^* + \alpha_n h(CC^*))^{-1} r_n \right\| = q_n c_1^{-1} \|r_n\|.$ (7)
  •  Compute
     $h_n = WC^* (CC^* + \alpha_n h(CC^*))^{-1} r_n.$ (8)
  •  Compute
     $z_{n+1} = z_n + h_n, \qquad x_{n+1} = S_\mu(z_{n+1}).$
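Since C is BCCB, every quantity in Algorithm 1 can be evaluated on the eigenvalues $|u_{i,j}|^2$ of $CC^*$ obtained from a 2D FFT of the PSF. As a minimal sketch (our own illustration; Lemma A1 in Appendix A uses a Newton iteration, here replaced by a plain bisection), the nonlinear equation (7) for $\alpha_n$ can be solved as follows:

```python
import numpy as np

def solve_alpha(r_hat, lam2, h_vals, target):
    """Find alpha with alpha * ||(CC* + alpha h(CC*))^{-1} r|| = target * ||r||.

    r_hat: 2D FFT of the residual; lam2: |u_ij|^2; h_vals: h(|u_ij|^2).
    By Parseval, the norm ratio can be evaluated directly in Fourier space.
    Requires target < 1, cf. q_n / c_1 < 1 in Appendix A.
    """
    phi = lambda a: (a * np.linalg.norm(r_hat / (lam2 + a * h_vals))
                     - target * np.linalg.norm(r_hat))
    lo, hi = 0.0, 1.0
    while phi(hi) < 0:            # phi is monotonically increasing in alpha
        hi *= 2.0
    for _ in range(60):           # bisection, O(m^2) work per evaluation
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)
```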
A rigorous and fully detailed analysis of the preceding algorithm is performed in Appendix A. In order to prove all the desired properties we need a couple of assumptions on the operators K and C and on the parameter $\mu$, which we present here below.
Assumption 1.
$\|(C - K)f\| \le \rho \|Kf\|, \qquad \forall\, f \in \mathbb{R}^{m^2},$ (9a)
and
$\mu \le \rho\,\delta\,\|B\|^{-1},$ (9b)
with a fixed $0 < \rho < c/2$, where $\delta = \|\eta\|$ is the noise level and where $\|\cdot\|$ is the operator norm defined in (5).
Let us observe that Equation (9a) translates into
$\|(B - A)u\| \le \rho \|Au\|, \qquad \forall\, u \in \mathbb{R}^s.$ (10)
Let us shed some light on the preceding conditions. Assumption (9a), or equivalently (10), is a strong assumption. It may be hard to satisfy for every specific problem, as it implies
$(1-\rho)\|Kv\| \le \|Cv\| \le (1+\rho)\|Kv\| \qquad \text{for all } v\in\mathbb{R}^{m^2},$
or equivalently
$(1-\rho)\|Au\| \le \|Bu\| \le (1+\rho)\|Au\| \qquad \text{for all } u\in\mathbb{R}^{s},$ (11)
that is, K and C are spectrally equivalent. Nevertheless, in image deblurring the boundary conditions have a very local effect, i.e., the approximation error C K can be decomposed as
$C - K = E + R,$
where E is a matrix of small norm (and the zero matrix if the PSF is compactly supported), and R is a matrix of small rank, compared to the dimension of the problem. This suggests that Assumption (9a) needs to be satisfied only on a relatively small subspace, supposedly of zero measure; in particular, only for the errors $e_n^\delta$, with $n \le N$ and N fixed, so that Proposition A1 in Appendix A can hold. All the numerical experiments are consistent with this observation, but for a deeper understanding and a full treatment of this aspect we refer the reader to ([15], Section 4).
On the other hand, Assumption (9b) is quite natural. It is indeed equivalent to requiring that
$\|B(u - S_\mu(u))\| \le \rho\delta \qquad \text{for every } u \in \mathbb{R}^s,$
that is, the soft-thresholding parameter $\mu = \mu(\delta)$ is continuously noise-dependent, and it holds that $\mu(\delta) \to 0$ as $\delta \to 0$.

2.3. General Regularization Operator as Λ Λ *

In image deblurring, in order to better preserve the edges of the reconstructed solution, a differential operator $\Lambda\Lambda^*$ is usually introduced, where $\Lambda : X \to Y$ is chosen as a first or second order differential operator which holds in its kernel all those functions which possess the key features of the true solution that we wish to preserve. In particular, since we are interested in recovering the edges and curves of discontinuities of the true image, it is a common choice to rely on the Laplace operator with Neumann BCs; see [24]. In the recent papers [25,26], observing the spectral distribution of the Laplacian, it was proposed to substitute $\Lambda\Lambda^*$ with
$h(CC^*) = I - \left( \frac{CC^*}{\|CC^*\|} \right)^{j},$
with $j \in \mathbb{N}$.
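As a small sketch (our own illustration), the action of this spectral filter on a vector can be computed through the SVD of C; for a BCCB C the same computation would use eigenvalues obtained from a 2D FFT instead of a dense SVD.

```python
import numpy as np

j = 4
C = np.random.rand(16, 16)          # stand-in for the discretized operator C
f = np.random.rand(16)
U, s, _ = np.linalg.svd(C)          # CC* = U diag(s^2) U^T
h_vals = 1.0 - (s**2 / np.max(s**2)) ** j   # h evaluated on the spectrum of CC*
hCCf = U @ (h_vals * (U.T @ f))     # h(CC*) f = U h(Sigma^2) U^T f
```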
Adding some new assumptions, we propose a modified version of the preceding Algorithm 1 that takes into account the operator $\Lambda$ directly.
Assumption 2.
$\mathrm{Ker}(K) \cap \mathrm{Ker}(\Lambda) = \{0\};$
$C|_{\mathrm{Ker}(\Lambda)} = K|_{\mathrm{Ker}(\Lambda)};$
C and $\Lambda$ are diagonalized by the same unitary transform.
We summarize the computations in Algorithm 2.
Algorithm 2 PISTA$_\Lambda$
  • Fix $z_0 \in \mathbb{R}^s$, $\delta > 0$ and set $x_0 = S_\mu(z_0)$, $r_0 = g - Ax_0$.
  • Set $\rho \in (0, 1/2)$ and $q \in (2\rho, 1)$.
  • Compute $\tau = \frac{1+2\rho}{1-2\rho}$ and $r_n = g - Ax_n$.
  • while $\|r_n\| > \tau\delta$ do
  •  Compute $\tau_n := \|r_n\|/\delta$.
  •  Compute $q_n := \max\{q,\, 2\rho + (1+\rho)/\tau_n\}$.
  •  Compute $\alpha_n$ such that
     $\alpha_n \left\| (CC^* + \alpha_n \Lambda\Lambda^*)^{-1} r_n \right\| = q_n \|r_n\|.$
  •  Compute
     $h_n = WC^* (CC^* + \alpha_n \Lambda\Lambda^*)^{-1} r_n.$ (12)
  •  Compute
     $z_{n+1} = z_n + h_n, \qquad x_{n+1} = S_\mu(z_{n+1}).$
We skip the proofs of convergence, since they can be easily recovered by adapting the proofs in Appendix A along the lines of ([16], Section 4).

3. Structured PISTA with General Regularizing Operator

The structured case is a generalization of what was developed in [21,22], merging these ideas with the general approach described in Section 2. We skip some details since they can be easily recovered from the aforementioned papers.
The blurring matrix K is determined by two ingredients: the PSF and the BCs inherited from the discretization. This latter choice gives rise to different types of structured matrices. Without loss of generality and for the sake of simplicity, we consider a square PSF $H_\kappa \in \mathbb{R}^{k\times k}$ and we suppose that the center of the PSF is known.
Given the pixels $\kappa_{i,j}$ of the PSF, we can define the following generating function $\kappa : \mathbb{R}^2 \to \mathbb{C}$:
$\kappa(x_1, x_2) = \sum_{i,j = -m+1}^{m-1} \kappa_{i,j}\, e^{\hat{\imath}(i x_1 + j x_2)},$ (13)
where $\hat{\imath}^2 = -1$ and we set $\kappa_{i,j} = 0$ if the entry $\kappa_{i,j}$ does not belong to $H_\kappa$ [5]. Observe that the $\kappa_{i,j}$ can be seen as the Fourier coefficients of $\kappa \in \mathrm{span}\{e^{\hat{\imath}(i x_1 + j x_2)} : i,j = -k,\ldots,k\}$, so that the same information is contained in the generating function $\kappa$ and in $H_\kappa$.
Summarizing the notation that we set in the Introduction about the BCs, we have
Zero BCs: $K = T_m(\kappa)$;
Periodic BCs: $K = C_m(\kappa) = T_m(\kappa) + B_m^{\mathrm{C}}(\kappa)$;
Reflective BCs: $K = R_m(\kappa) = T_m(\kappa) + B_m^{\mathrm{R}}(\kappa)$;
Anti-Reflective BCs: $K = AR_m(\kappa) = T_m(\kappa) + B_m^{\mathrm{AR}}(\kappa)$.
We notice that, since the continuous operator is shift-invariant, in all these four cases K has a Toeplitz structure $T_m(\kappa)$, which depends on $\kappa$, plus a correction term $B_m^{X}(\kappa)$, $X \in \{\mathrm{C}, \mathrm{R}, \mathrm{AR}\}$, which depends on the chosen BCs.
In conclusion, we employ the unified notation $K = \mathcal{M}_m(\kappa)$, where $\mathcal{M}(\cdot)$ can be any of the classes of matrices just introduced (i.e., $T$, $C$, $R$, $AR$). With this notation we wish to highlight the two crucial elements that determine K: the blurring phenomenon associated with the PSF, described by $\kappa$, and the chosen BCs, represented by $\mathcal{M}$.
Given the generating function $\kappa$ in (13) associated with the PSF $H_\kappa$, let us compute the eigenvalues $u_{i,j}$, $i,j = 0,\ldots,m-1$, of the corresponding BCCB matrix $C_m(\kappa) =: C$ by means of the 2D FFT. Fix a regularizing (differential) operator $\Lambda\Lambda^*$ as in Section 2, and suppose that Assumptions 1 and 2 hold. The regularizing operator can be of the form $\Lambda\Lambda^* = h(CC^*)$, as in Algorithm 1, as well. Let now
$v_{i,j} = \frac{\bar{u}_{i,j}}{|u_{i,j}|^2 + \alpha_n |\sigma_{i,j}|^2}, \qquad \bar{u}_{i,j} \text{ the complex conjugate of } u_{i,j},$
be the new eigenvalues after the application of the Tikhonov filter to $u_{i,j}$, where the $\sigma_{i,j}$ are the eigenvalues (singular values) of $\Lambda$ and $\alpha_n$ is computed as in Algorithms 1 and 2. Let us now compute the coefficients $\hat{\kappa}_{i,j}$ of
$\hat{\kappa}(x_1, x_2) = \sum_{i,j=-m+1}^{m-1} \hat{\kappa}_{i,j}\, e^{\hat{\imath}(i x_1 + j x_2)}$ (14)
by means of the 2D-iFFT and, finally, let us define
$P = \mathcal{M}_m(\hat{\kappa}),$ (15)
where M ( · ) corresponds to the most fitting BCs for the model problem (1).
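In practice, the mask of $\hat{\kappa}$ is obtained with one 2D FFT and one 2D inverse FFT per iteration. A minimal sketch (our own illustration; the final assembly of $\mathcal{M}_m(\hat{\kappa})$ for reflective or anti-reflective BCs is only indicated, since it depends on the chosen structure):

```python
import numpy as np

def preconditioner_mask(psf, m, alpha_n, sigma2):
    """Tikhonov-filtered coefficient mask of the structured preconditioner.

    psf: k x k PSF; sigma2: |sigma_ij|^2, eigenvalues of Lambda Lambda* on the
    m x m grid (e.g., from the symbol of a discrete Laplacian).
    """
    p = psf.shape[0] // 2
    h = np.zeros((m, m))
    h[:psf.shape[0], :psf.shape[1]] = psf
    u = np.fft.fft2(np.roll(h, (-p, -p), axis=(0, 1)))    # u_ij of C_m(kappa)
    v = np.conj(u) / (np.abs(u) ** 2 + alpha_n * sigma2)  # filtered eigenvalues
    return np.real(np.fft.ifft2(v))                       # coefficients of kappa-hat
# P = M_m(mask) is then assembled with the most fitting BCs and applied as h_n = P r_n.
```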
We are now ready to formulate the last method, whose computations are reported in Algorithm 3.
Algorithm 3 Struct-PISTA$_\Lambda$
  • Fix $H_\kappa$, the BCs, and $\Lambda$.
  • Set $C = C_m(\kappa)$.
  • Get $\{u_{i,j}\}_{i,j=0}^{m-1}$ by computing the 2D FFT of $H_\kappa$.
  • Fix $z_0 \in \mathbb{R}^s$, $\delta > 0$ and set $x_0 = S_\mu(z_0)$, $r_0 = g - Kx_0$.
  • Set $\rho \in (0, 1/2)$ and $q \in (2\rho, 1)$.
  • Compute $\tau = \frac{1+2\rho}{1-2\rho}$ and $r_n = g - Kx_n$.
  • while $\|r_n\| > \tau\delta$ do
  •  Compute $\tau_n := \|r_n\|/\delta$.
  •  Compute $q_n := \max\{q,\, 2\rho + (1+\rho)/\tau_n\}$.
  •  Compute $\alpha_n$ such that
     $\alpha_n \left\| (CC^* + \alpha_n \Lambda\Lambda^*)^{-1} r_n \right\| = q_n \|r_n\|.$
  •  Compute $v_{i,j} = \dfrac{\bar{u}_{i,j}}{|u_{i,j}|^2 + \alpha_n |\sigma_{i,j}|^2}$.
  •  Get the mask $\tilde{H}$ of the coefficients $\hat{\kappa}_{i,j}$ of $\hat{\kappa}$ in (14) by computing a 2D inverse FFT of $\{v_{i,j}\}_{i,j=0}^{m-1}$.
  •  Generate the matrix $P := \mathcal{M}_m(\hat{\kappa})$ from the coefficient mask $\tilde{H}$ and the BCs.
  •  Compute
     $h_n = P r_n.$
  •  Compute
     $z_{n+1} = z_n + h_n, \qquad x_{n+1} = S_\mu(z_{n+1}).$
In the case where $\Lambda\Lambda^* = h(CC^*)$, the algorithm is modified in the following way:
$\rho \in (0, c/2), \qquad q \in (2\rho, c), \qquad \alpha_n \left\| (CC^* + \alpha_n h(CC^*))^{-1} r_n \right\| = q_n c_1^{-1} \|r_n\|,$
where $0 < c_1 \le h(\sigma^2) \le c_2$ and $c := c_1/c_2$. We will denote this version by Struct-PISTA$_h$. We will not provide a direct proof of convergence for this last algorithm; let us just observe that the difference between (15) and the preconditioning operators applied in (8) and (12) is just a correction of small rank and small norm.

4. Numerical Experiments

We now compare the proposed algorithms with some methods from the literature. In particular, we consider the AIT-GP algorithm described in [16] and the ISTA algorithm described in [8]. The AIT-GP method can be seen as Algorithm 2 with μ = 0 , while the ISTA algorithm is equivalent to iterations of Algorithm 2 without the preconditioner. These comparisons allow us to show how the quality of the reconstructed solution is improved by the presence of both the soft-thresholding and the preconditioner.
The ISTA method and our proposals require the selection of a regularization parameter. For all these methods we select the parameter that minimizes the relative restoration error (RRE) defined by
$\mathrm{RRE}(f) = \frac{\|f - f_{\mathrm{true}}\|}{\|f_{\mathrm{true}}\|}.$
For the comparison of the algorithms we consider the Peak Signal to Noise Ratio (PSNR), defined by
$\mathrm{PSNR}(f) = 20 \log_{10} \frac{m M}{\|f - f_{\mathrm{true}}\|},$
where $m^2$ is the number of elements of f and M denotes the maximum value of $f_{\mathrm{true}}$. Moreover, we consider the Structural Similarity index (SSIM); the definition of the SSIM is involved, and here we simply recall that this index measures how accurately the computed approximation is able to reconstruct the overall structure of the image. The higher the value of the SSIM, the better the reconstruction, and the maximum value achievable is 1; see [27] for a precise definition of the SSIM.
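For reference, a small sketch (our own illustration) of the two measures above; the SSIM is available in standard packages, e.g., skimage.metrics.structural_similarity, and is not re-derived here.

```python
import numpy as np

def rre(f, f_true):
    """Relative restoration error."""
    return np.linalg.norm(f - f_true) / np.linalg.norm(f_true)

def psnr(f, f_true):
    """Peak Signal to Noise Ratio; f has m^2 entries and M = max(f_true)."""
    m = np.sqrt(f.size)
    return 20 * np.log10(m * f_true.max() / np.linalg.norm(f - f_true))
```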
We now describe how we construct the operator W. We use the tight frames determined by linear B-splines; see, e.g., [28]. For one-dimensional problems they are composed of a low-pass filter $W_0 \in \mathbb{R}^{m\times m}$ and two high-pass filters $W_1, W_2 \in \mathbb{R}^{m\times m}$. These filters are determined by the masks
$u^{(0)} = \frac{1}{4}[1, 2, 1], \qquad u^{(1)} = \frac{\sqrt{2}}{4}[1, 0, -1], \qquad u^{(2)} = \frac{1}{4}[-1, 2, -1].$
Imposing reflexive boundary conditions we determine the analysis operator W so that W * W = I . Define the matrices
$W_0 = \frac{1}{4}\begin{pmatrix} 3 & 1 & & & \\ 1 & 2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & 2 & 1 \\ & & & 1 & 3 \end{pmatrix}, \qquad W_1 = \frac{\sqrt{2}}{4}\begin{pmatrix} -1 & 1 & & & \\ -1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 0 & 1 \\ & & & -1 & 1 \end{pmatrix},$
and
$W_2 = \frac{1}{4}\begin{pmatrix} 1 & -1 & & & \\ -1 & 2 & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 2 & -1 \\ & & & -1 & 1 \end{pmatrix}.$
Then the operator W is defined by
$W = \begin{pmatrix} W_0 \\ W_1 \\ W_2 \end{pmatrix}.$
To construct the two-dimensional framelet analysis operator we use the tensor products
$W_{i,j} = W_i \otimes W_j, \qquad i,j = 0,1,2.$
The matrix $W_{0,0}$ is a low-pass filter; all the other matrices $W_{i,j}$ contain at least one high-pass filter. The analysis operator is given by
$W = \begin{pmatrix} W_{0,0} \\ W_{0,1} \\ \vdots \\ W_{2,2} \end{pmatrix}.$
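A small sketch (our own illustration; the sign convention follows the displayed matrices) that builds the 1D filters above as dense matrices with reflexive boundary conditions and verifies the tight-frame identity $W^TW = I$ numerically:

```python
import numpy as np

def framelet_filters(m):
    """Linear B-spline framelet filters W0, W1, W2 with reflexive BCs."""
    def filt(mask, scale):
        A = np.zeros((m, m))
        for i in range(m):
            for k, c in zip((-1, 0, 1), mask):
                j = i + k
                j = -j - 1 if j < 0 else 2 * m - j - 1 if j >= m else j  # reflect
                A[i, j] += scale * c
        return A
    return (filt((1, 2, 1), 0.25),
            filt((-1, 0, 1), np.sqrt(2) / 4),
            filt((-1, 2, -1), 0.25))

W0, W1, W2 = framelet_filters(8)
W = np.vstack((W0, W1, W2))
print(np.allclose(W.T @ W, np.eye(8)))   # True: W*W = I
```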
In PISTA$_h$, following [26], we set
$h(x) = 1 - \left( \frac{x}{\|A\|^2} \right)^{4} + 10^{-15}.$
All the computations were performed on MATLAB R2018b running on a laptop with an Intel i7-8750H @ 2.20 GHz CPU and 16 GB of RAM.

4.1. Cameraman

We first considered the cameraman image in Figure 3a and blurred it with the non-symmetric PSF in Figure 3b. We then added $2\%$ white Gaussian noise, obtaining the blurred and noisy image in Figure 3c. Note that we cropped the boundaries of the image to simulate real data; see [1] for more details. Since the image is generic, we imposed reflexive BCs.
In Table 1 we report the results obtained with the different methods. We can observe that Struct-PISTA$_h$ provided the best reconstruction among all considered algorithms. Moreover, we can observe that, in general, the introduction of the structured preconditioner improved the quality of the reconstructed solutions, especially in terms of SSIM. From the visual inspection of the reconstructions in Figure 4, we can observe that the introduction of the structured preconditioner allowed us to visibly reduce the boundary artifacts as well as to avoid the amplification of the noise.

4.2. Grain

We now considered the grain image in Figure 5a and blurred it with the PSF in Figure 5b, obtained by the superposition of two motion PSFs. After adding $3\%$ of white Gaussian noise and cropping the boundaries, we obtained the blurred and noisy image in Figure 5c. According to the nature of the image, we used reflexive BCs.
Again, in Table 1 we report all the results obtained with the considered methods. In this case ISTA provided the best reconstruction in terms of RRE and PSNR. However, Struct-PISTA$_h$ provided the best reconstruction in terms of SSIM and very similar results in terms of PSNR and RRE. In Figure 6 we report some of the reconstructed solutions. From the visual inspection of these reconstructions we can see that the introduction of the structured preconditioner reduced the ringing and boundary effects in the computed solutions.

4.3. Satellite

Our final example is the atmosphericBlur30 test problem from the MATLAB toolbox RestoreTools [2]. The true image, the PSF, and the blurred and noisy image are reported in Figure 7a–c, respectively. Since we knew the true image, we could estimate the noise level in the data, which was approximately $1\%$. Since this is an astronomical image, we imposed zero BCs.
From the comparison of the computed results in Table 1, we can see that the Struct-PISTA$_h$ method provided the best reconstruction among all considered methods. We can observe that, in this particular example, ISTA provided a very low quality reconstruction, both in terms of RRE and SSIM. We report some reconstructions in Figure 8. From the visual inspection of the computed solutions, we can observe that both the approximations obtained with PISTA$_h$ and Struct-PISTA$_h$ do not present heavy ringing effects, while the reconstruction obtained by AIT-GP presents very heavy ringing around the "arms" of the satellite. This shows the benefit of introducing the soft-thresholding into the AIT-GP method.

5. Conclusions

This work develops further and brings together the techniques studied in [16,17,21,22,29]. The idea is to combine thresholding iterative methods, an approximate Tikhonov regularization scheme depending on a general (differential) operator, and a structure preserving approach, with the main goal of reducing the boundary artifacts which appear in the deblurred image when artificial boundary conditions are imposed. The numerical results are promising and show improvements with respect to known state-of-the-art deblurring algorithms. There are still open problems, mainly concerning the theoretical assumptions and convergence proofs, which will be further investigated in future works.

Author Contributions

Writing–original draft, D.B. and A.B. These authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

Both authors are members of the INdAM-GNCS Gruppo Nazionale per il Calcolo Scientifico. The work of A.B. is partially funded by the Young Researcher Project "Reconstruction of sparse data" of the GNCS group of INdAM and by the Regione Autonoma della Sardegna research project "Algorithms and Models for Imaging Science [AMIS]" (RASSR57257, intervento finanziato con risorse FSC 2014-2020 - Patto per lo Sviluppo della Regione Sardegna).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs

Hereafter we analyze Algorithm 1, aiming to prove its convergence. The techniques carried out in the proofs of most of the following results can be traced back to the papers [15,16,17], and therefore bring few mathematical novelties other than the results themselves. Nevertheless, since the proofs are very technical, and even a slight change can produce nontrivial difficulties, we present a full treatment, leaving out no details, in order to make this paper self-contained and easily readable.
Following up on Section 2.1, we need to set some more notation. Let us consider the singular value decomposition (SVD) of C as the triple $(U, V, \Sigma)$ such that
$C = U\Sigma V^*, \qquad U, V \in O(m^2, \mathbb{R}), \qquad \Sigma = \mathrm{diag}_{j=1,\ldots,m^2}(\sigma_j) \text{ with } 0 \le \sigma_{m^2} \le \cdots \le \sigma_1,$
where $O(m^2, \mathbb{R})$ is the orthonormal group and $V^*$ is the adjoint of the operator V, i.e., $\langle Vf_1, f_2\rangle = \langle f_1, V^*f_2\rangle$ for every pair $f_1, f_2 \in \mathbb{R}^{m^2}$. We will denote the spectrum of $CC^*$ by
$\sigma(CC^*) = \{0\} \cup \bigcup_{j=1}^{m^2} \{\sigma_j^2\}.$
Hereafter, without loss of generality, we will assume that
$\|C\| = 1 \qquad \text{and} \qquad \|h(CC^*)\| = \max_{\sigma^2 \in [0,1]} h(\sigma^2) = 1.$
The first issue we have to consider is the existence of the sequence $\{\alpha_n\}_n$.
Lemma A1.
Let $\|r_n\| > \tau\delta$. Then for every fixed n there exists an $\alpha_n$ that satisfies (7). It can be computed by the following iteration:
$\alpha_n^{k+1} := \frac{(\alpha_n^k)^2\, \Phi'(\alpha_n^k)}{\alpha_n^k\, \Phi'(\alpha_n^k) + \Phi(\alpha_n^k) - q_n^2 c_1^{-2} \|r_n\|^2},$ (A1)
where
$\Phi(\alpha) := \left\| \alpha (CC^* + \alpha h(CC^*))^{-1} r_n \right\|^2, \qquad \Phi'(\alpha) := 2\alpha \left\| (CC^*)^{1/2} (CC^* + \alpha h(CC^*))^{-3/2} r_n \right\|^2.$
The convergence is locally quadratic. The existence of the regularization parameter $\alpha_n$ and the locally quadratic convergence of the iteration above are independent of, and uniform with respect to, the dimension $m^2$.
Proof. 
The existence of $\alpha_n$ is an easy consequence of the monotonicity, with respect to $\alpha$, of
$\phi_\alpha(\sigma^2) = \alpha\, (\sigma^2 + \alpha h(\sigma^2))^{-1}.$
Indeed, let us rewrite (7) as follows:
$\alpha^2 \left\| (CC^* + \alpha h(CC^*))^{-1} r_n \right\|^2 = \|\phi_\alpha(CC^*) r_n\|^2 = \int_{[0,1]} \phi_\alpha^2(\sigma^2)\, \mathrm{d}r_n(\sigma) = \int_{[0,1]} \frac{\alpha^2}{(\sigma^2 + \alpha h(\sigma^2))^2}\, \mathrm{d}r_n(\sigma) = \frac{q_n^2}{c_1^2} \int_{[0,1]} \mathrm{d}r_n(\sigma),$ (A2)
where $\mathrm{d}r_n(\cdot)$ is the discrete spectral measure associated to $r_n$ with respect to the SVD of C and $\sigma \in \sigma(C)$ are the singular values of C. Since $\frac{\mathrm{d}\phi_\alpha}{\mathrm{d}\alpha} > 0$ for every $\alpha \ge 0$, by monotone convergence it holds that
$\lim_{\alpha\to\infty} \int_{[0,1]} \frac{\alpha^2}{(\sigma^2 + \alpha h(\sigma^2))^2}\, \mathrm{d}r_n(\sigma) = \int_{[0,1]} \lim_{\alpha\to\infty} \frac{\alpha^2}{(\sigma^2 + \alpha h(\sigma^2))^2}\, \mathrm{d}r_n(\sigma) \ge \|r_n\|^2 > \frac{q_n^2}{c_1^2} \|r_n\|^2.$
Indeed, it is not difficult to prove that $q_n/c_1 < 1$ whenever $\rho \in (0, c_1/2)$ and $\|r_n\| > \tau\delta$, as assumed in the hypothesis. Since for $\alpha = 0$ the left-hand side of (A2) is zero, we conclude that there exists a unique $\alpha_n > 0$ such that equality holds in (7). Due to the generality of our proof, and to the fact that we could pass the limit under the integral sign, the existence of such an $\alpha_n$ is granted uniformly with respect to the dimension $m^2$.
Since
$\phi_\alpha(\sigma^2) = \alpha\, (\sigma^2 + \alpha h(\sigma^2))^{-1} = (\alpha^{-1}\sigma^2 + h(\sigma^2))^{-1},$
fixing $\gamma = \alpha^{-1}$, let us now define the following function:
$\psi_\gamma(\sigma^2) = (\gamma\sigma^2 + h(\sigma^2))^{-1}.$
Since
$\frac{\partial \psi_\gamma^2(\sigma^2)}{\partial\gamma} = \frac{-2\sigma^2}{(\gamma\sigma^2 + h(\sigma^2))^{3}}, \qquad \frac{\partial^2 \psi_\gamma^2(\sigma^2)}{\partial\gamma^2} = \frac{6\sigma^4}{(\gamma\sigma^2 + h(\sigma^2))^{4}},$ (A3)
there exist two constants $d_1, d_2$ independent of $\gamma$ such that
$\left| \frac{\partial \psi_\gamma^2(\sigma^2)}{\partial\gamma} \right| \le d_1,$ (A4)
$\left| \frac{\partial^2 \psi_\gamma^2(\sigma^2)}{\partial\gamma^2} \right| \le d_2,$ (A5)
and in particular $d_1, d_2 \in L^1([0,1], \mathrm{d}r_n)$ for every n and m. Therefore, if we define
$\Psi(\gamma) := \|\psi_\gamma(CC^*) r_n\|^2,$
it holds that
$\Psi'(\gamma) = \frac{\partial}{\partial\gamma} \int_{[0,1]} \psi_\gamma^2(\sigma^2)\, \mathrm{d}r_n(\sigma) = \int_{[0,1]} \frac{\partial \psi_\gamma^2(\sigma^2)}{\partial\gamma}\, \mathrm{d}r_n(\sigma),$
$\Psi''(\gamma) = \frac{\partial}{\partial\gamma} \int_{[0,1]} \frac{\partial \psi_\gamma^2(\sigma^2)}{\partial\gamma}\, \mathrm{d}r_n(\sigma) = \int_{[0,1]} \frac{\partial^2 \psi_\gamma^2(\sigma^2)}{\partial\gamma^2}\, \mathrm{d}r_n(\sigma).$
Then the Newton iteration applied to the equation $\Psi(\gamma) = q_n^2 c_1^{-2} \|r_n\|^2$ yields the iteration
$\gamma_n^{k+1} = \gamma_n^k + \frac{q_n^2 c_1^{-2}\|r_n\|^2 - \Psi(\gamma_n^k)}{\Psi'(\gamma_n^k)}, \qquad k \ge 0.$
By (A3), $\Psi(\gamma)$ is a decreasing convex function of $\gamma$. Since $\gamma_n = \lim_{k\to\infty} \gamma_n^{k+1} = 1/\alpha_n$, we obviously have
$\Psi(\gamma_n) = \frac{q_n^2}{c_1^2} \|r_n\|^2.$ (A6)
If
$\Psi'(\gamma_n) = -2 \left\| (CC^*)^{1/2} (\gamma_n CC^* + h(CC^*))^{-3/2} r_n \right\|^2 = 0,$
then necessarily we would have $CC^* r_n = 0$. Hence, $(\gamma_n CC^* + h(CC^*))^{-1} r_n = h(CC^*)^{-1} r_n$, and consequently
$\Psi(\gamma_n) = \|h(CC^*)^{-1} r_n\|^2 \ge \|r_n\|^2.$
From (A6) we would then deduce that $q_n \ge c_1$, but this is absurd since, as already observed above, $q_n < c_1$ if $\|r_n\| > \tau\delta$. Therefore $\Psi'(\gamma_n) \ne 0$ and, by standard properties of the Newton iteration, $\gamma_n^k$ converges to the solution $\gamma_n$ from below, and the convergence is locally quadratic. Finally, defining
$\Phi(\alpha) = \Psi(1/\alpha),$
we get (A1): $\alpha_n^k$ converges monotonically from above to $\alpha_n$, and the convergence is locally quadratic. Again, thanks to (A4) and (A5), the rate of convergence is uniform with respect to the dimension $m^2$. □
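In eigenvalue form, the iteration (A1) is a few lines of code. The sketch below (our own illustration, for a BCCB C, where lam2 holds the eigenvalues of $CC^*$ and r_hat2 the squared moduli of the Fourier coefficients of $r_n$, so that sums realize the spectral measure $\mathrm{d}r_n$) starts from a large $\alpha^0$, consistently with the monotone convergence from above.

```python
import numpy as np

def newton_alpha(r_hat2, lam2, h_vals, target2, alpha0=1e3, n_iter=100):
    """Newton iteration (A1): solve Phi(alpha) = target2.

    Phi(alpha)  = sum alpha^2 * r_hat2 / (lam2 + alpha*h_vals)^2,
    Phi'(alpha) = sum 2*alpha*lam2 * r_hat2 / (lam2 + alpha*h_vals)^3.
    """
    a = alpha0
    for _ in range(n_iter):
        d = lam2 + a * h_vals
        Phi = np.sum(a ** 2 * r_hat2 / d ** 2)
        dPhi = np.sum(2 * a * lam2 * r_hat2 / d ** 3)
        a_new = a ** 2 * dPhi / (a * dPhi + Phi - target2)  # update (A1)
        if abs(a_new - a) <= 1e-12 * a:
            return a_new
        a = a_new
    return a
```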
From now on, instead of working with the error $e_n^\delta = x - x_n^\delta$, in order to simplify the following proofs and notation, it is useful to consider the partial error with respect to $z_n^\delta$, namely
$\tilde{e}_n^\delta = x - z_n^\delta.$ (A7)
This will not affect the generality of our proofs, thanks to the continuity of $S_\mu(\cdot)$ with respect to the noise level $\delta$.
Proposition A1.
Under the assumptions (9), if $\|r_n^\delta\| > \tau\delta$ and we define $\tau_n = \|r_n^\delta\|/\delta$, then it follows that
$\|r_n^\delta - B\tilde{e}_n^\delta\| \le \left( \rho + \frac{1+2\rho}{\tau_n} \right) \|r_n^\delta\| < (1-\rho)\|r_n^\delta\|,$ (A8)
where $\tilde{e}_n^\delta$ is defined in (A7).
Proof. 
In the noise free case we have $g = Ax$. As a consequence,
$r_n^\delta - B\tilde{e}_n^\delta = g^\delta - Ax_n^\delta - B(x - z_n^\delta) + Bx_n^\delta - BS_\mu(z_n^\delta) = g^\delta - g + (A - B)e_n^\delta + B(z_n^\delta - S_\mu(z_n^\delta)).$
Using now assumptions (9), in particular (10), and $\|g^\delta - g\| \le \delta$, we derive the following estimate:
$\|r_n^\delta - B\tilde{e}_n^\delta\| \le \|g^\delta - g\| + \|(A-B)e_n^\delta\| + \|B(z_n^\delta - S_\mu(z_n^\delta))\| \le \|g^\delta - g\| + \rho\|Ae_n^\delta\| + \rho\delta \le \|g^\delta - g\| + \rho(\|r_n^\delta\| + \|g^\delta - g\| + \delta) \le (1+2\rho)\delta + \rho\|r_n^\delta\|.$
The first inequality in (A8) now follows from the hypothesis $\delta = \|r_n^\delta\|/\tau_n$. The second inequality follows from $\rho + \frac{1+2\rho}{\tau_n} < \rho + \frac{1+2\rho}{\tau} = 1 - \rho$, since $\tau_n > \tau$. □
Combining the preceding proposition with (7), we are going to show that the sequence $\{\|\tilde{e}_n^\delta\|\}_n$ is monotonically decreasing. We have the following result.
Proposition A2.
Let $\tilde{e}_n^\delta$ be defined as in (A7). If the assumptions (9) are satisfied, then $\|\tilde{e}_n^\delta\|$ of Algorithm 1 decreases monotonically for $n = 0, 1, \ldots, n_\delta - 1$. In particular, we deduce
$\|\tilde{e}_n^\delta\|^2 - \|\tilde{e}_{n+1}^\delta\|^2 \ge \frac{8\rho^2}{1+2\rho} \left\| (CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta \right\| \|r_n^\delta\| > 0.$ (A9)
Proof. 
Recalling that $WC^* = B^*$ and that $BB^* = CC^*$, we have
$\|\tilde{e}_n^\delta\|^2 - \|\tilde{e}_{n+1}^\delta\|^2 = 2\langle \tilde{e}_n^\delta, h_n\rangle - \|h_n\|^2$
$\quad = 2\langle B\tilde{e}_n^\delta, (CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta\rangle - \langle r_n^\delta,\, CC^*(CC^* + \alpha_n h(CC^*))^{-2} r_n^\delta\rangle$
$\quad = 2\langle r_n^\delta, (CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta\rangle - \langle r_n^\delta,\, CC^*(CC^* + \alpha_n h(CC^*))^{-2} r_n^\delta\rangle - 2\langle r_n^\delta - B\tilde{e}_n^\delta,\, (CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta\rangle$
$\quad \ge 2\langle r_n^\delta, (CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta\rangle - 2\langle r_n^\delta,\, CC^*(CC^* + \alpha_n h(CC^*))^{-2} r_n^\delta\rangle - 2\langle r_n^\delta - B\tilde{e}_n^\delta,\, (CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta\rangle$
$\quad = 2\alpha_n \langle r_n^\delta,\, h(CC^*)(CC^* + \alpha_n h(CC^*))^{-2} r_n^\delta\rangle - 2\langle r_n^\delta - B\tilde{e}_n^\delta,\, (CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta\rangle$
$\quad \ge 2\alpha_n \langle r_n^\delta,\, h(CC^*)(CC^* + \alpha_n h(CC^*))^{-2} r_n^\delta\rangle - 2\|r_n^\delta - B\tilde{e}_n^\delta\| \left\|(CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta\right\|$
$\quad \ge 2\left\|(CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta\right\| \left( c_1\alpha_n \left\|(CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta\right\| - \|r_n^\delta - B\tilde{e}_n^\delta\| \right)$
$\quad \ge 2\left\|(CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta\right\| \left( q_n\|r_n^\delta\| - \left(\rho + \frac{1+2\rho}{\tau_n}\right)\|r_n^\delta\| \right)$
$\quad \ge \frac{8\rho^2}{1+2\rho} \left\|(CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta\right\| \|r_n^\delta\| > 0,$
where the relevant inequalities are a consequence of Equation (7) and Proposition A1. The last inequality follows from (7) and $\tau_n > \tau = (1+2\rho)/(1-2\rho)$ for $\|r_n^\delta\| > \tau\delta$. □
Corollary A1.
Under the assumptions (9), there holds
$\|\tilde{e}_0^\delta\|^2 \ge \frac{8\rho^2}{1+2\rho} \sum_{n=0}^{n_\delta-1} \left\| (CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta \right\| \|r_n^\delta\| \ge c \sum_{n=0}^{n_\delta-1} \|r_n^\delta\|^2$ (A10)
for some constant c > 0 , depending only on ρ and q in (7).
Proof. 
The first inequality follows by taking the sum of the quantities in (A9) from n = 0 up to n = n δ 1 .
For the second inequality, note that for every
$\alpha > \frac{q_n}{c_1 - q_n}$
and every $\sigma \in \sigma(C) \subseteq [0,1]$, we have
$\frac{\alpha}{\sigma^2 + \alpha h(\sigma^2)} \ge \frac{\alpha}{1+\alpha} = (1 + 1/\alpha)^{-1} > \frac{q_n}{c_1},$
and hence
$\alpha \left\| (CC^* + \alpha h(CC^*))^{-1} r_n^\delta \right\| > q_n c_1^{-1} \|r_n^\delta\|,$
as $\|r_n^\delta\| > 0$ for $n < n_\delta$. This implies that $\alpha_n$ in (7) satisfies $0 < \alpha_n \le \frac{q_n}{c_1 - q_n}$, and thus
$\left\| (CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta \right\| = \frac{q_n}{c_1 \alpha_n} \|r_n^\delta\| \ge \frac{c_1 - q_n}{c_1} \|r_n^\delta\| \ge (c_1 - q_n)\|r_n^\delta\|.$
According to the choice of the parameters in Algorithm 1, we deduce
$c_1 - q_n = \min\{c_1 - q,\; c_1 - 2\rho - (1+\rho)/\tau_n\},$
and
$c_1 - 2\rho - (1+\rho)/\tau_n = \frac{1+2\rho}{\tau} - \frac{1+\rho}{\tau_n} > \frac{1+2\rho}{\tau} - \frac{1+\rho}{\tau} = \frac{\rho}{\tau}.$
Therefore, there exists $c > 0$, depending only on $\rho$ and q, such that
$c_1 - q_n \ge c \left( \frac{8\rho^2}{1+2\rho} \right)^{-1},$
and
$\left\| (CC^* + \alpha_n h(CC^*))^{-1} r_n^\delta \right\| \ge c \left( \frac{8\rho^2}{1+2\rho} \right)^{-1} \|r_n^\delta\| \qquad \text{for } n = 0, 1, \ldots, n_\delta - 1.$
Now the second inequality follows immediately. □
From (A10) it can be seen that the sum of the squares of the residual norms is bounded. Hence, if $\delta > 0$, there must be a first integer $n_\delta < \infty$ for which the discrepancy principle $\|r_{n_\delta}^\delta\| \le \tau\delta$ is fulfilled, i.e., Algorithm 1 terminates after finitely many iterations.
Finally, we are ready to prove a convergence and regularity result.
Theorem A1.
Assume that $z_0$ is not a solution of the linear system
$g = KW^* x,$ (A11)
and that $\{\delta_k\}_k$ is a sequence of positive real numbers such that $\delta_k \to 0$ as $k \to \infty$. Then, if Assumption 1 is valid, the sequence $\{x_{n(\delta_k)}^{\delta_k}\}_{k\in\mathbb{N}}$, generated by Algorithm 1 stopped according to the discrepancy principle, converges as $k\to\infty$ to the solution of (A11) which is closest to $z_0$ in the Euclidean norm.
Proof. 
We are going to show convergence of the sequence $\{z_{n(\delta_k)}^{\delta_k}\}_{k\in\mathbb{N}}$; the thesis will then follow easily from the continuity of $S_{\mu(\delta)}$, i.e.,
$\lim_{k\to\infty} x_{n(\delta_k)}^{\delta_k} = \lim_{k\to\infty} S_{\mu(\delta_k)}\big(z_{n(\delta_k)}^{\delta_k}\big) = S_{\lim_{k\to\infty}\mu(\delta_k)}\Big(\lim_{k\to\infty} z_{n(\delta_k)}^{\delta_k}\Big) = \lim_{k\to\infty} z_{n(\delta_k)}^{\delta_k},$
since $\mu(\delta_k)\to 0$ as $\delta_k \to 0$ by (9b).
The proof of the convergence of the sequence $\{z_{n(\delta_k)}^{\delta_k}\}$ can be divided into two steps. In step one, we show convergence in the noise free case $\delta = 0$; in particular, the sequence $\{z_n\}$ converges to the solution of (A11) that is closest to $z_0$. In step two, we show that, given a sequence of positive real numbers $\delta_k \to 0$ as $k\to\infty$, the corresponding sequence $\{z_{n(\delta_k)}^{\delta_k}\}$ converges as $k\to\infty$.
Step 1: Fix $\delta = 0$. It follows that $r_n^\delta = r_n$, and the iteration does not stop, i.e., $n_\delta = \infty$, since the discrepancy principle is not satisfied by any n; in particular, $n_\delta \to \infty$ for $\delta \to 0$. Let $n > l > k$, with $n, l, k \in \mathbb{N}$. It holds that
$\|z_n - z_l\|^2 = \|\tilde{e}_n - \tilde{e}_l\|^2 = \|\tilde{e}_n\|^2 - \|\tilde{e}_l\|^2 - 2\langle \tilde{e}_l, \tilde{e}_n - \tilde{e}_l\rangle = \|\tilde{e}_n\|^2 - \|\tilde{e}_l\|^2 + 2\langle \tilde{e}_l, z_n - z_l\rangle = \|\tilde{e}_n\|^2 - \|\tilde{e}_l\|^2 + 2\sum_{i=l}^{n-1} \langle \tilde{e}_l, h_i\rangle = \|\tilde{e}_n\|^2 - \|\tilde{e}_l\|^2 + 2\sum_{i=l}^{n-1} \langle B\tilde{e}_l, (CC^* + \alpha_i h(CC^*))^{-1} r_i\rangle \le \|\tilde{e}_n\|^2 - \|\tilde{e}_l\|^2 + 2\sum_{i=l}^{n-1} \|B\tilde{e}_l\| \left\|(CC^* + \alpha_i h(CC^*))^{-1} r_i\right\| \le \|\tilde{e}_n\|^2 - \|\tilde{e}_l\|^2 + 2(1+\rho)\sum_{i=l}^{n-1} \|r_l\| \left\|(CC^* + \alpha_i h(CC^*))^{-1} r_i\right\|,$ (A12)
where the last inequality comes from (11). At the same time, we have that
$\|z_l - z_k\|^2 = \|\tilde{e}_l - \tilde{e}_k\|^2 = \|\tilde{e}_k\|^2 - \|\tilde{e}_l\|^2 + 2\langle \tilde{e}_l, \tilde{e}_l - \tilde{e}_k\rangle = \|\tilde{e}_k\|^2 - \|\tilde{e}_l\|^2 - 2\langle \tilde{e}_l, z_l - z_k\rangle = \|\tilde{e}_k\|^2 - \|\tilde{e}_l\|^2 - 2\sum_{i=k}^{l-1} \langle \tilde{e}_l, h_i\rangle \le \|\tilde{e}_k\|^2 - \|\tilde{e}_l\|^2 + 2\sum_{i=k}^{l-1} \|B\tilde{e}_l\| \left\|(CC^* + \alpha_i h(CC^*))^{-1} r_i\right\| \le \|\tilde{e}_k\|^2 - \|\tilde{e}_l\|^2 + 2(1+\rho)\sum_{i=k}^{l-1} \|r_l\| \left\|(CC^* + \alpha_i h(CC^*))^{-1} r_i\right\|.$ (A13)
Combining (A12) and (A13) together, we obtain
$\|z_n - z_k\|^2 \le 2\|z_n - z_l\|^2 + 2\|z_l - z_k\|^2 \le 2\|\tilde{e}_n\|^2 + 2\|\tilde{e}_k\|^2 - 4\|\tilde{e}_l\|^2 + 4(1+\rho)\sum_{i=k}^{n-1} \|r_l\| \left\|(CC^* + \alpha_i h(CC^*))^{-1} r_i\right\|.$
This is valid for every $l \in \{k+1, \ldots, n-1\}$. Choosing l such that $\|r_l\| = \min_{i=k+1,\ldots,n-1} \|r_i\|$, it follows that
$\|z_n - z_k\|^2 \le 2\|\tilde{e}_n\|^2 + 2\|\tilde{e}_k\|^2 - 4\|\tilde{e}_l\|^2 + 4(1+\rho)\sum_{i=k}^{n-1} \|r_i\| \left\|(CC^* + \alpha_i h(CC^*))^{-1} r_i\right\|.$
From Proposition A2, $\{\|\tilde{e}_j\|^2\}_{j\in\mathbb{N}}$ is a convergent sequence, and from Corollary A1,
$\sum_{i=k}^{n-1} \|r_i\| \left\|(CC^* + \alpha_i h(CC^*))^{-1} r_i\right\| \to 0 \qquad \text{as } k, n \to \infty,$
since it is the tail of a convergent series. Therefore,
$\|z_n - z_k\|^2 \to 0 \qquad \text{as } k, n \to \infty,$
and $\{z_n\}_{n\in\mathbb{N}}$ is a Cauchy sequence, hence convergent.
Step 2: Let x be the limit of the sequence $\{z_n\}_{n\in\mathbb{N}}$ and let $\{\delta_k\}_k$ be a sequence of positive real numbers converging to 0. For every $\delta_k$, let $n = n(\delta_k)$ be the first positive integer for which the discrepancy principle is satisfied, whose existence is granted by Corollary A1, and let $\{z_{n(\delta_k)}^{\delta_k}\}$ be the corresponding sequence. For every fixed $\epsilon > 0$, there exists $\bar{n} = \bar{n}(\epsilon)$ such that
$\|x - z_n\| \le \epsilon/2 \qquad \text{for every } n > \bar{n}(\epsilon),$ (A14)
and there exists $\bar{\delta} = \bar{\delta}(\epsilon)$ for which
$\|z_{\bar{n}} - z_{\bar{n}}^\delta\| \le \epsilon/2 \qquad \text{for every } 0 < \delta < \bar{\delta},$ (A15)
due to the continuity of the map $g \mapsto z_n$ for every fixed n. Therefore, let us choose $\bar{k} = \bar{k}(\epsilon)$ large enough that $\delta_k < \bar{\delta}$ and $n(\delta_k) > \bar{n}$ for every $k > \bar{k}$. Such a $\bar{k}$ exists since $\delta_k \to 0$ and $n_\delta \to \infty$ for $\delta \to 0$. Hence, for every $k > \bar{k}$, we have
$\|x - z_{n(\delta_k)}^{\delta_k}\| = \|\tilde{e}_{n(\delta_k)}^{\delta_k}\| \le \|\tilde{e}_{\bar{n}}^{\delta_k}\| = \|x - z_{\bar{n}}^{\delta_k}\| \le \|x - z_{\bar{n}}\| + \|z_{\bar{n}} - z_{\bar{n}}^{\delta_k}\| \le \epsilon,$
where the first inequality comes from Proposition A2 and the last one from (A14) and (A15). □

References

  1. Hansen, P.C.; Nagy, J.G.; O'Leary, D.P. Deblurring Images: Matrices, Spectra, and Filtering; SIAM: Philadelphia, PA, USA, 2006; Volume 3.
  2. Nagy, J.G.; Palmer, K.; Perrone, L. Iterative methods for image deblurring: A Matlab object-oriented approach. Numer. Algorithms 2004, 36, 73–93.
  3. Almeida, M.S.; Figueiredo, M. Deconvolving images with unknown boundaries using the alternating direction method of multipliers. IEEE Trans. Image Process. 2013, 22, 3074–3086.
  4. Dell'Acqua, P. A note on Taylor boundary conditions for accurate image restoration. Adv. Comput. Math. 2017, 43, 1283–1304.
  5. Donatelli, M. Fast transforms for high order boundary conditions in deconvolution problems. BIT Numer. Math. 2010, 50, 559–576.
  6. Hanke, M.; Hansen, P.C. Regularization methods for large-scale problems. Surv. Math. Ind. 1993, 3, 253–315.
  7. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Physica D 1992, 60, 259–268.
  8. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457.
  9. Figueiredo, M.A.; Nowak, R.D. An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003, 12, 906–916.
  10. Chan, R.H.; Chan, T.F.; Shen, L.; Shen, Z. Wavelet algorithms for high-resolution image reconstruction. SIAM J. Sci. Comput. 2003, 24, 1408–1432.
  11. Cai, J.F.; Osher, S.; Shen, Z. Linearized Bregman iterations for frame-based image deblurring. SIAM J. Imag. Sci. 2009, 2, 226–252.
  12. Cai, J.F.; Osher, S.; Shen, Z. Split Bregman methods and frame based image restoration. Multiscale Model. Simul. 2010, 8, 337–369.
  13. Dell'Acqua, P.; Donatelli, M.; Estatico, C. Preconditioners for image restoration by reblurring techniques. J. Comput. Appl. Math. 2014, 272, 313–333.
  14. Yin, W.; Osher, S.; Goldfarb, D.; Darbon, J. Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing. SIAM J. Imag. Sci. 2008, 1, 143–168.
  15. Donatelli, M.; Hanke, M. Fast nonstationary preconditioned iterative methods for ill-posed problems, with application to image deblurring. Inverse Probl. 2013, 29, 095008.
  16. Buccini, A. Regularizing preconditioners by non-stationary iterated Tikhonov with general penalty term. Appl. Numer. Math. 2017, 116, 64–81.
  17. Cai, Y.; Donatelli, M.; Bianchi, D.; Huang, T.Z. Regularization preconditioners for frame-based image deblurring with reduced boundary artifacts. SIAM J. Sci. Comput. 2016, 38, B164–B189.
  18. Buccini, A.; Donatelli, M.; Reichel, L. Iterated Tikhonov regularization with a general penalty term. Numer. Linear Algebra Appl. 2017, 24, e2089.
  19. Buccini, A.; Pasha, M.; Reichel, L. Generalized singular value decomposition with iterated Tikhonov regularization. J. Comput. Appl. Math. 2020, 373, 112276.
  20. Huang, G.; Reichel, L.; Yin, F. Projected nonstationary iterated Tikhonov regularization. BIT Numer. Math. 2016, 56, 467–487.
  21. Dell'Acqua, P.; Donatelli, M.; Estatico, C.; Mazza, M. Structure preserving preconditioners for image deblurring. J. Sci. Comput. 2017, 72, 147–171.
  22. Bianchi, D.; Buccini, A.; Donatelli, M. Structure Preserving Preconditioning for Frame-Based Image Deblurring. In Computational Methods for Inverse Problems in Imaging; Springer: Berlin/Heidelberg, Germany, 2019; pp. 33–49.
  23. Rudin, W. Functional Analysis; International Series in Pure and Applied Mathematics; McGraw-Hill: New York, NY, USA, 1991.
  24. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1996; Volume 375.
  25. Bianchi, D.; Donatelli, M. On generalized iterated Tikhonov regularization with operator-dependent seminorms. Electron. Trans. Numer. Anal. 2017, 47, 73–99.
  26. Huckle, T.K.; Sedlacek, M. Tikhonov–Phillips regularization with operator dependent seminorms. Numer. Algorithms 2012, 60, 339–353.
  27. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  28. Buccini, A.; Reichel, L. An ℓ2−ℓq regularization method for large discrete ill-posed problems. J. Sci. Comput. 2019, 78, 1526–1549.
  29. Huang, J.; Donatelli, M.; Chan, R.H. Nonstationary iterated thresholding algorithms for image deblurring. Inverse Probl. Imaging 2013, 7, 717–736.
Figure 1. Field of view. We see what is inside the square box.
Figure 2. Examples of boundary conditions.
Figure 3. Cameraman test problem: (a) True image (238 × 238 pixels), (b) point spread function (PSF) (17 × 17 pixels), (c) Blurred and noisy image with 2% of white Gaussian noise (238 × 238 pixels).
Figure 4. Cameraman test problem reconstructions: (a) ISTA, (b) PISTA$_h$, (c) Struct-PISTA$_h$.
Figure 5. Grain test problem: (a) True image (246 × 246 pixels), (b) PSF (9 × 9 pixels), (c) Blurred and noisy image with 3% of white Gaussian noise (246 × 246 pixels).
Figure 6. Grain test problem reconstructions: (a) ISTA, (b) PISTA$_\Lambda$, (c) Struct-PISTA$_\Lambda$.
Figure 7. Satellite test problem: (a) True image (256 × 256 pixels), (b) PSF (256 × 256 pixels), (c) Blurred and noisy image with ≈1% of white Gaussian noise (256 × 256 pixels).
Figure 8. Satellite test problem reconstructions: (a) AIT-GP, (b) PISTA$_h$, (c) Struct-PISTA$_h$.
Table 1. Comparison of the quality of the reconstructions for all considered examples. We highlight in boldface the best result.

Example     Method            RRE           PSNR         SSIM
Cameraman   AIT-GP            0.111024      24.7798      0.729095
            ISTA              0.090921      26.5149      0.763217
            PISTA_h           0.096558      25.9924      0.790363
            PISTA_Λ           0.094853      26.1471      0.795061
            Struct-PISTA_h    **0.088796**  **26.7203**  **0.840145**
            Struct-PISTA_Λ    0.090182      26.5857      0.834532
Grain       AIT-GP            0.183796      25.9571      0.731407
            ISTA              **0.160655**  **27.1259**  0.845816
            PISTA_h           0.195516      25.4202      0.737254
            PISTA_Λ           0.181727      26.0554      0.748582
            Struct-PISTA_h    0.161715      27.0688      **0.859284**
            Struct-PISTA_Λ    0.168472      26.7133      0.830990
Satellite   AIT-GP            0.222783      26.6708      0.742416
            ISTA              0.286179      24.4956      0.657111
            PISTA_h           0.192146      27.9558      0.928584
            PISTA_Λ           0.193730      27.8844      0.916993
            Struct-PISTA_h    **0.187970**  **28.1466**  **0.934876**
            Struct-PISTA_Λ    0.189147      28.0924      0.924931
