Article

Directional TGV-Based Image Restoration under Poisson Noise

1 Department of Mathematics and Applications “R. Caccioppoli”, University of Naples Federico II, 80126 Naples, Italy
2 Department of Mathematics, University of Bologna, 40126 Bologna, Italy
3 Department of Mathematics and Physics, University of Campania “L. Vanvitelli”, 81100 Caserta, Italy
* Author to whom correspondence should be addressed.
J. Imaging 2021, 7(6), 99; https://doi.org/10.3390/jimaging7060099
Submission received: 30 April 2021 / Revised: 7 June 2021 / Accepted: 11 June 2021 / Published: 16 June 2021
(This article belongs to the Special Issue Inverse Problems and Imaging)

Abstract

We are interested in the restoration of noisy and blurry images where the texture mainly follows a single direction (i.e., directional images). Problems of this type arise, for example, in microscopy or computed tomography for carbon or glass fibres. In order to deal with these problems, the Directional Total Generalized Variation (DTGV) was developed by Kongskov et al. in 2017 and 2019, in the case of impulse and Gaussian noise. In this article we focus on images corrupted by Poisson noise, extending the DTGV regularization to image restoration models where the data fitting term is the generalized Kullback–Leibler divergence. We also propose a technique for the identification of the main texture direction, which improves upon the techniques used in the aforementioned work about DTGV. We solve the problem by an ADMM algorithm with proven convergence and subproblems that can be solved exactly at a low computational cost. Numerical results on both phantom and real images demonstrate the effectiveness of our approach.

1. Introduction

Poisson noise appears in processes where digital images are obtained by the count of particles (generally photons). This is the case of X-ray computed tomography, positron emission tomography, confocal and fluorescence microscopy, and optical/infrared astronomical imaging, to name just a few applications (see, e.g., [1] and the references therein). In this case, the object to be restored can be represented as a vector $u \in \mathbb{R}^n$ and the data can be assumed to be a vector $b \in \mathbb{N}_0^n$, whose entries $b_j$ are sampled from $n$ independent Poisson random variables $B_j$ with probability
$$P(B_j = b_j) = \frac{e^{-(Au+\gamma)_j}\,(Au+\gamma)_j^{\,b_j}}{b_j!}. \tag{1}$$
The matrix $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ models the observation mechanism of the imaging system and the following standard assumptions are made:
$$a_{ij} \geq 0 \;\; \text{for all } i, j, \qquad \sum_{i=1}^{n} a_{ij} = 1 \;\; \text{for all } j.$$
The vector $\gamma \in \mathbb{R}^n$, with $\gamma > 0$, models the background radiation detected by the sensors.
By applying a maximum-likelihood approach [1,2], we can estimate $u$ by minimizing the Kullback–Leibler (KL) divergence of $Au + \gamma$ from $b$:
$$D_{KL}(Au+\gamma, b) = \sum_{i=1}^{n} \left\{ b_i \ln \frac{b_i}{[Au+\gamma]_i} + [Au+\gamma]_i - b_i \right\}, \tag{2}$$
where we set
$$b_i \ln \frac{b_i}{[Au+\gamma]_i} = 0 \quad \text{if } b_i = 0.$$
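As a concrete illustration, the generalized KL divergence and the convention above translate directly into code. The sketch below is ours, not the paper's (the helper name `kl_divergence` and the argument `Au_gamma`, standing for $Au+\gamma$, are assumptions):

```python
import numpy as np

# Generalized Kullback-Leibler divergence D_KL(Au + gamma, b), with the
# convention b_i * ln(b_i / [Au + gamma]_i) = 0 whenever b_i = 0.
# Illustrative sketch; names are ours, not the paper's.
def kl_divergence(Au_gamma, b):
    """Au_gamma: positive vector A @ u + gamma; b: observed photon counts."""
    log_term = np.zeros_like(b, dtype=float)
    pos = b > 0                      # apply the b_i = 0 convention
    log_term[pos] = b[pos] * np.log(b[pos] / Au_gamma[pos])
    return float(np.sum(log_term + Au_gamma - b))
```

Note that the divergence vanishes when $Au+\gamma = b$ and is nonnegative for positive arguments, since each term $b_i \ln(b_i/x) + x - b_i$ is nonnegative.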
Regularization is usually introduced in (2) to deal with the ill-conditioning of this problem. The Total Variation (TV) regularization [3] has been widely used in this context, because it preserves edges and is able to smooth flat areas of the image. However, since it may produce staircase artifacts, other TV-based regularizers have been proposed. For example, the Total Generalized Variation (TGV) has been proposed and applied in [4,5,6,7] to overcome the staircasing effect while keeping the ability to identify edges. On the other hand, to improve the quality of restoration for directional images, the Directional TV (DTV) regularization has been considered in [8], in the discrete setting. In [9,10], a regularizer combining DTV and TGV, named Directional TGV (DTGV), has been successfully applied to directional images affected by impulse and Gaussian noise.
Given an image $u \in \mathbb{R}^n$, the discrete second-order Directional TGV of $u$ is defined as
$$\mathrm{DTGV}^2(u) = \min_{w \in \mathbb{R}^{2n}} \; \alpha_0 \left\| \tilde\nabla u - w \right\|_{2,1|\mathbb{R}^{2n}} + \alpha_1 \left\| \tilde E w \right\|_{2,1|\mathbb{R}^{4n}}, \tag{3}$$
where $w \in \mathbb{R}^{2n}$, $\tilde\nabla \in \mathbb{R}^{2n \times n}$ and $\tilde E \in \mathbb{R}^{4n \times 2n}$ are the discrete directional gradient operator and the directional symmetrized derivative, respectively, and $\alpha_0, \alpha_1 \in (0, +\infty)$. For any vector $v \in \mathbb{R}^{2n}$ we set
$$\| v \|_{2,1|\mathbb{R}^{2n}} = \sum_{j=1}^{n} \sqrt{v_j^2 + v_{n+j}^2}, \tag{4}$$
and for any vector $y \in \mathbb{R}^{4n}$ we set
$$\| y \|_{2,1|\mathbb{R}^{4n}} = \sum_{j=1}^{n} \sqrt{y_j^2 + y_{n+j}^2 + y_{2n+j}^2 + y_{3n+j}^2}. \tag{5}$$
Given an angle $\theta \in [-\pi, \pi]$ and a scaling parameter $a > 0$, the discrete directional gradient operator has the form
$$\tilde\nabla = \begin{bmatrix} D_\theta \\ D_{\theta^\perp} \end{bmatrix} = \begin{bmatrix} \cos(\theta)\, D_H + \sin(\theta)\, D_V \\ a \left( -\sin(\theta)\, D_H + \cos(\theta)\, D_V \right) \end{bmatrix},$$
where $D_\theta, D_{\theta^\perp} \in \mathbb{R}^{n \times n}$ represent the forward finite-difference operators along the directions determined by $\theta$ and $\theta^\perp = \theta + \frac{\pi}{2}$, respectively, and $D_H, D_V \in \mathbb{R}^{n \times n}$ represent the forward finite-difference operators along the horizontal and the vertical direction, respectively. Moreover, the directional symmetrized derivative is defined in block-wise form as
$$\tilde E = \begin{bmatrix} D_\theta & 0 \\ \tfrac12 D_{\theta^\perp} & \tfrac12 D_\theta \\ \tfrac12 D_{\theta^\perp} & \tfrac12 D_\theta \\ 0 & D_{\theta^\perp} \end{bmatrix}.$$
It is worth noting that, by fixing $\theta = 0$ and $a = 1$, we have $D_\theta = D_H$ and $D_{\theta^\perp} = D_V$, and the operators $\tilde\nabla$ and $\tilde E$ define the TGV$^2$ regularization [4].
We observe that the definitions of both the matrix $A$ and the finite-difference operators $D_H$ and $D_V$ depend on the choice of boundary conditions. We make the following assumption.
Assumption 1.
We assume that periodic boundary conditions are considered for $A$, $D_H$ and $D_V$. Therefore, those matrices are Block Circulant with Circulant Blocks (BCCB).
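Under Assumption 1 the forward finite differences act as circular shifts, and BCCB operators are diagonalized by the two-dimensional DFT. The NumPy sketch below (helper names are ours, not the paper's) checks this for $D_H$: applying the operator directly with periodic wrap-around matches multiplying the image spectrum by the eigenvalues obtained as the 2D FFT of the operator's convolution kernel.

```python
import numpy as np

# Periodic forward differences (Assumption 1); a sketch with our own helper
# names, illustrating that BCCB operators are diagonalized by the 2D DFT.
def d_h(U):  # horizontal forward difference, periodic boundary
    return np.roll(U, -1, axis=1) - U

def d_v(U):  # vertical forward difference, periodic boundary
    return np.roll(U, -1, axis=0) - U

def d_h_via_fft(U):
    # Circular-convolution kernel of D_H; its 2D FFT gives the eigenvalues.
    kernel = np.zeros_like(U, dtype=float)
    kernel[0, 0] = -1.0
    kernel[0, -1] = 1.0   # wraps around: contributes U[i, j+1]
    return np.real(np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(U)))
```

The same argument applies to $D_V$, $D_\theta$, $D_{\theta^\perp}$ and $A$, which is what makes the FFT-based solver of Section 4.1 possible.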
In this work we focus on directional images affected by Poisson noise, with the aim of assessing the behaviour of DTGV in this case. Besides extending the use of DTGV to Poisson noise, we introduce a novel technique for estimating the main direction of the image, which appears to be more efficient than the techniques applied in [9,10]. We solve the resulting optimization problem by using a customized version of the Alternating Direction Method of Multipliers (ADMM). We note that all the ADMM subproblems can be solved exactly at a low cost, thanks also to the use of FFTs, and that the method has proven convergence. Finally, we show the effectiveness of our approach on a set of test images, corrupted by out-of-focus and Gaussian blurs and noise with different signal-to-noise ratios. In particular, the KL-DTGV model of our problem is described in Section 2 and the technique for estimating the main direction is presented in Section 3. A detailed description of the ADMM version used for the minimization is given in Section 4 and the results of the numerical experiments are discussed in Section 5. Conclusions are given in Section 6.
Throughout this work we denote matrices with uppercase lightface letters, vectors with lowercase boldface letters and scalars with lowercase lightface letters. All the vectors are column vectors. Given a vector $v$, we use $v_i$ or $(v)_i$ to denote its $i$-th entry. We use $\mathbb{R}_+$ to indicate the set of real nonnegative numbers and $\|\cdot\|$ to indicate the two-norm. For brevity, given any vectors $v$ and $w$ we use the notation $(v, w)$ instead of $[v^\top \; w^\top]^\top$. Likewise, given any scalars $v$ and $w$, we use $(v, w)$ to indicate the vector $[v \;\, w]^\top$. We also use the notation $([v]_1, [v]_2)$ to highlight the subvectors $[v]_1$ and $[v]_2$ forming the vector $v$. Finally, by writing $v > 0$ we mean that all the entries of $v$ are nonnegative and at least one of them is positive.

2. The KL-DTGV$^2$ Model

We briefly describe the KL-DTGV$^2$ model for the restoration of directional images corrupted by Poisson noise. Let $b \in \mathbb{R}^n$ be the observed image. We want to recover the original image by minimizing a combination of the KL divergence (2) and the DTGV$^2$ regularizer (3), i.e., by solving the optimization problem
$$\begin{array}{cl} \min\limits_{u,\, w} & \lambda\, D_{KL}(Au+\gamma, b) + \alpha_0 \left\| \tilde\nabla u - w \right\|_{2,1|\mathbb{R}^{2n}} + \alpha_1 \left\| \tilde E w \right\|_{2,1|\mathbb{R}^{4n}} \\ \text{s.t.} & u \geq 0, \end{array} \tag{6}$$
where $u \in \mathbb{R}^n$, $A \in \mathbb{R}^{n \times n}$, $\gamma, b \in \mathbb{R}^n$, $w \in \mathbb{R}^{2n}$, and $\tilde\nabla \in \mathbb{R}^{2n \times n}$ and $\tilde E \in \mathbb{R}^{4n \times 2n}$ are the linear operators defining the DTGV$^2$ regularization. The parameters $\lambda \in (0, +\infty)$ and $\alpha_0, \alpha_1 \in (0, 1)$ determine the balance between the KL data-fidelity term and the two components of the regularization term.
We note that problem (6) is a nonsmooth convex optimization problem because of the properties of the KL divergence (see, e.g., [11]) and the DTGV operator (see, e.g., [10]).

3. Efficient Estimation of the Image Direction

An essential ingredient of the DTGV regularization is the estimation of the angle $\theta$ representing the image texture direction. In [10], an estimation algorithm based on the one in [12] is proposed, which first computes a pixelwise direction estimate and then takes $\theta$ as the average of those estimates. In [9], which focuses on impulse noise removal, a more efficient and robust algorithm for estimating the direction is presented, based on the Fourier transform. The main idea behind this algorithm is to exploit the fact that two-dimensional Fourier basis functions can be seen as images with one-directional patterns. However, despite being very efficient from a computational viewpoint, this technique does not appear to be fully reliable in our tests on Poissonian images (see Section 5.1). Therefore, we propose a different approach for estimating the direction, based on classical tools of image processing: the Sobel filter [13] and the Hough transform [14,15].
Our technique is based on the idea that if an image has a one-directional structure, i.e., its main pattern consists of stripes, then the edges of the image mainly consist of lines going in the direction of the stripes. The first stage of the proposed algorithm uses the Sobel filter to determine the edges of the noisy and blurry image. Then, the Hough transform is applied to the edge image in order to detect the lines. The Hough transform is based on the idea that each straight line can be identified by a pair $(r, \eta)$, where $r$ is the distance of the line from the origin and $\eta$ is the angle between the $x$ axis and the segment connecting the origin with its orthogonal projection onto the line. The output of the transform is a matrix in which each entry is associated with a pair $(r, \eta)$, i.e., with a straight line in the image, and its value is the sum of the values of the pixels lying on that line. Hence, the elements with the highest values in the Hough transform indicate the lines that are most likely to be present in the input image. Because of its definition, the Hough transform tends to overestimate diagonal lines in rectangular images (diagonal lines through the central part of the image contain the largest number of pixels); therefore, before computing the transform we apply a mask to the edge image, considering only the pixels inside the largest circle centered at the center of the image. After the Hough transform has been applied, we compute the square of the two-norm of each column of the matrix resulting from the transform, to determine a score for each angle from $-90°$ to $90°$. Intuitively, the score for each angle is related to the number of lines with that particular inclination which have been detected in the image. Finally, we set the direction estimate $\theta \in [-\pi, \pi]$ as
$$\theta = \begin{cases} (90 - \eta_{\max})\, \dfrac{\pi}{180}, & \eta_{\max} \geq 0, \\[4pt] -(90 + \eta_{\max})\, \dfrac{\pi}{180}, & \eta_{\max} < 0, \end{cases}$$
where $\eta_{\max}$ is the value of $\eta$ corresponding to the maximum score. A pseudocode for the estimation algorithm is provided in Algorithm 1 and an example of the algorithm workflow is given in Figure 1.
Algorithm 1 Direction estimation.
1: Use the Sobel operator to obtain the image $e$ of the edges of the noisy and blurry image $b$.
2: Apply a disk mask to cut out some diagonal edges in $e$, obtaining a new edge image $\tilde e$ (Figure 1b).
3: Compute the Hough transform $h(\tilde e)$ (Figure 1c).
4: Set $\eta_{\max}$ as the value of $\eta$ corresponding to the column of $h(\tilde e)$ with maximum two-norm (Figure 1d).
5: Set $\theta = (90 - \eta_{\max})\,\pi/180$ if $\eta_{\max} \geq 0$, and $\theta = -(90 + \eta_{\max})\,\pi/180$ otherwise (yellow line in Figure 1a).
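A compact NumPy version of Algorithm 1 follows. This is our sketch, not the paper's implementation (which relies on MATLAB routines); the Hough accumulator is built directly from the $(r, \eta)$ parameterization, the edge threshold of half the maximum magnitude is our choice, and angles $\eta$ are sampled in degrees over $[-90°, 90°]$.

```python
import numpy as np

# Sketch of Algorithm 1 (the paper uses MATLAB; helper names and the edge
# threshold are our choices). Angles eta are in degrees in [-90, 90].
def sobel_edges(img):
    # Sobel gradient magnitude with replicated borders
    p = np.pad(img, 1, mode='edge')
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.hypot(gx, gy)

def estimate_direction(img, n_angles=181):
    e = sobel_edges(img)                                   # step 1
    h, w = e.shape                                         # step 2: disk mask
    yy, xx = np.mgrid[:h, :w]
    r = min(h, w) / 2.0
    e = e * ((yy - h / 2.0) ** 2 + (xx - w / 2.0) ** 2 <= r ** 2)
    etas = np.linspace(-90.0, 90.0, n_angles)              # step 3: Hough
    ys, xs = np.nonzero(e > 0.5 * e.max())
    weights = e[ys, xs]
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag + 1, n_angles))
    for k, eta in enumerate(np.deg2rad(etas)):
        rho = np.round(xs * np.cos(eta) + ys * np.sin(eta)).astype(int)
        np.add.at(acc[:, k], rho + diag, weights)
    scores = (acc ** 2).sum(axis=0)                        # step 4: column norms
    eta_max = etas[np.argmax(scores)]
    if eta_max >= 0:                                       # step 5
        return (90.0 - eta_max) * np.pi / 180.0
    return -(90.0 + eta_max) * np.pi / 180.0
```

For instance, on a synthetic image made of vertical stripes the estimate is close to $\pi/2$.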

4. ADMM for Minimizing the KL-DTGV$^2$ Model

Although problem (6) is a bound-constrained convex optimization problem, the nondifferentiability of the DTGV$^2$ regularizer does not allow its solution by classical optimization methods for smooth problems, such as gradient methods (see [16,17,18] and the references therein). However, the problem can be solved by methods based on splitting techniques, such as [19,20,21,22,23]. Here we solve (6) by the Alternating Direction Method of Multipliers (ADMM) [20]. To this end, we first reformulate the problem as follows:
$$\begin{array}{cl} \min\limits_{u,\, w,\, z_1,\, z_2,\, z_3,\, z_4} & \lambda\, D_{KL}(z_1 + \gamma, b) + \alpha_0 \| z_2 \|_{2,1|\mathbb{R}^{2n}} + \alpha_1 \| z_3 \|_{2,1|\mathbb{R}^{4n}} + \chi_{\mathbb{R}_+^n}(z_4) \\ \text{s.t.} & z_1 = Au, \quad z_2 = \tilde\nabla u - w, \quad z_3 = \tilde E w, \quad z_4 = u, \end{array} \tag{7}$$
where $z_1 \in \mathbb{R}^n$, $z_2 \in \mathbb{R}^{2n}$, $z_3 \in \mathbb{R}^{4n}$, $z_4 \in \mathbb{R}^n$, and $\chi_{\mathbb{R}_+^n}(z_4)$ is the characteristic function of the nonnegative orthant in $\mathbb{R}^n$. A similar splitting has been used in [24] for TV-based deblurring of Poissonian images. By introducing the auxiliary variables $x = (u, w)$ and $z = (z_1, z_2, z_3, z_4)$ we can further reformulate the KL-DTGV$^2$ problem as
$$\min_{x,\, z} \; F_1(x) + F_2(z) \quad \text{s.t.} \quad Hx + Gz = 0, \tag{8}$$
where we set
$$F_1(x) = 0, \qquad F_2(z) = \lambda\, D_{KL}(z_1 + \gamma, b) + \alpha_0 \| z_2 \|_{2,1|\mathbb{R}^{2n}} + \alpha_1 \| z_3 \|_{2,1|\mathbb{R}^{4n}} + \chi_{\mathbb{R}_+^n}(z_4), \tag{9}$$
and we define the matrices $H \in \mathbb{R}^{8n \times 3n}$ and $G \in \mathbb{R}^{8n \times 8n}$ as
$$H = \begin{bmatrix} A & 0 \\ \tilde\nabla & -I_{2n} \\ 0 & \tilde E \\ I_n & 0 \end{bmatrix}, \qquad G = \begin{bmatrix} -I_n & 0 & 0 & 0 \\ 0 & -I_{2n} & 0 & 0 \\ 0 & 0 & -I_{4n} & 0 \\ 0 & 0 & 0 & -I_n \end{bmatrix}. \tag{10}$$
We consider the Lagrangian function associated with problem (8),
$$\mathcal{L}(x, z, \xi) = F_1(x) + F_2(z) + \xi^\top \left( Hx + Gz \right), \tag{11}$$
where $\xi \in \mathbb{R}^{8n}$ is a vector of Lagrange multipliers, and then the augmented Lagrangian function
$$\mathcal{L}_A(x, z, \xi; \rho) = F_1(x) + F_2(z) + \xi^\top \left( Hx + Gz \right) + \frac{\rho}{2} \left\| Hx + Gz \right\|_2^2, \tag{12}$$
where ρ > 0 .
Now we are ready to introduce the ADMM method for the solution of problem (8). Let $x^0 \in \mathbb{R}^{3n}$, $z^0 \in \mathbb{R}^{8n}$, $\xi^0 \in \mathbb{R}^{8n}$. At each step $k \geq 0$ the ADMM method computes the new iterate $(x^{k+1}, z^{k+1}, \xi^{k+1})$ as follows:
$$\begin{aligned} x^{k+1} &= \arg\min_{x \in \mathbb{R}^{3n}} \mathcal{L}_A(x, z^k, \xi^k; \rho), \\ z^{k+1} &= \arg\min_{z \in \mathbb{R}^{8n}} \mathcal{L}_A(x^{k+1}, z, \xi^k; \rho), \\ \xi^{k+1} &= \xi^k + \rho \left( Hx^{k+1} + Gz^{k+1} \right). \end{aligned} \tag{13}$$
Note that the functions $F_1(x)$ and $F_2(z)$ in (8) are closed, proper and convex. Moreover, the matrices $H$ and $G$ defined in (10) are such that $G = -I_{8n}$ and $H$ has full rank. Hence, the convergence of the method defined by (13) can be proved by applying a classical convergence result from the seminal paper by Eckstein and Bertsekas [25] (Theorem 8), which we report in a form that can be immediately applied to our reformulation of the problem.
Theorem 1.
Let us consider a problem of the form (8) where $F_1(x)$ and $F_2(z)$ are closed, proper and convex functions and $H$ has full rank. Let $x^0 \in \mathbb{R}^{3n}$, $z^0 \in \mathbb{R}^{8n}$, $\xi^0 \in \mathbb{R}^{8n}$, and $\rho > 0$. Suppose $\{\varepsilon_k\}, \{\nu_k\} \subset \mathbb{R}_+$ are summable sequences such that for all $k$
$$\begin{aligned} \left\| x^{k+1} - \arg\min_{x \in \mathbb{R}^{3n}} \mathcal{L}_A(x, z^k, \xi^k; \rho) \right\| &\leq \varepsilon_k, \\ \left\| z^{k+1} - \arg\min_{z \in \mathbb{R}^{8n}} \mathcal{L}_A(x^{k+1}, z, \xi^k; \rho) \right\| &\leq \nu_k, \\ \xi^{k+1} &= \xi^k + \rho \left( Hx^{k+1} + Gz^{k+1} \right). \end{aligned}$$
If there exists a saddle point $(x^*, z^*, \xi^*)$ of $\mathcal{L}(x, z, \xi)$, then $x^k \to x^*$, $z^k \to z^*$ and $\xi^k \to \xi^*$. If such a saddle point does not exist, then at least one of the sequences $\{z^k\}$ or $\{\xi^k\}$ is unbounded.
Since we are dealing with linear constraints, we can recast (13) in a more convenient form, by observing that the linear term in (12) can be included in the quadratic one. By introducing the vector of scaled Lagrange multipliers $\mu^k = \frac{1}{\rho} \xi^k$, the ADMM method becomes
$$x^{k+1} = \arg\min_{x \in \mathbb{R}^{3n}} \; \frac{\rho}{2} \left\| Hx - z^k + \mu^k \right\|_2^2, \tag{14}$$
$$z^{k+1} = \arg\min_{z \in \mathbb{R}^{8n}} \; F_2(z) + \frac{\rho}{2} \left\| Hx^{k+1} - z + \mu^k \right\|_2^2, \tag{15}$$
$$\mu^{k+1} = \mu^k + Hx^{k+1} + Gz^{k+1}. \tag{16}$$
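The structure of the scaled iteration (14)–(16) can be illustrated on a small self-contained example. The sketch below is our illustration, not the paper's solver: it applies the same two-block scheme to the toy problem $\min_x \frac12\|x - t\|^2 + c\|x\|_1$ with the splitting $x = z$ (so $H = I$ and $G = -I$); the $x$-step is a quadratic minimization in closed form and the $z$-step is a proximal (soft-thresholding) step, mirroring subproblems (14) and (15).

```python
import numpy as np

# Generic two-block ADMM in the scaled form, on the toy problem
#   min_x 0.5*||x - t||^2 + c*||x||_1   with the splitting x = z,
# so that H = I and G = -I. Our illustration, not the paper's solver.
def admm_toy(t, c, rho=1.0, iters=200):
    x = np.zeros_like(t)
    z = np.zeros_like(t)
    mu = np.zeros_like(t)              # scaled multipliers mu = xi / rho
    for _ in range(iters):
        # x-step: min_x 0.5||x - t||^2 + rho/2 ||x - z + mu||^2 (closed form)
        x = (t + rho * (z - mu)) / (1.0 + rho)
        # z-step: prox of (c/rho)*||.||_1, i.e. soft-thresholding of x + mu
        v = x + mu
        z = np.sign(v) * np.maximum(np.abs(v) - c / rho, 0.0)
        # multiplier update: mu += Hx + Gz = x - z
        mu = mu + x - z
    return x
```

At convergence the iterates approach the well-known soft-thresholding solution of the toy problem, which makes the scheme easy to check.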
In the next sections we show how the solutions to subproblems (14) and (15) can be computed exactly with a small computational effort.

4.1. Solving the Subproblem in x

Problem (14) is an overdetermined least-squares problem, since $H$ is a tall-and-skinny matrix with full rank. Hence, its solution can be computed by solving the normal equations system
$$H^\top H\, x = H^\top v_x^k, \tag{17}$$
where we set $v_x^k = z^k - \mu^k$. Starting from the definition of $H$ given in (10), we have
$$H^\top H = \begin{bmatrix} I_n + A^\top A + \tilde\nabla^\top \tilde\nabla & -\tilde\nabla^\top \\ -\tilde\nabla & I_{2n} + \tilde E^\top \tilde E \end{bmatrix} = \begin{bmatrix} I_n + A^\top A + \tilde\nabla^\top \tilde\nabla & -D_\theta^\top & -D_{\theta^\perp}^\top \\ -D_\theta & I_n + D_\theta^\top D_\theta + \tfrac12 D_{\theta^\perp}^\top D_{\theta^\perp} & \tfrac12 D_{\theta^\perp}^\top D_\theta \\ -D_{\theta^\perp} & \tfrac12 D_\theta^\top D_{\theta^\perp} & I_n + \tfrac12 D_\theta^\top D_\theta + D_{\theta^\perp}^\top D_{\theta^\perp} \end{bmatrix}.$$
System (17) may be quite large and expensive to solve, even for relatively small images. However, as pointed out in Assumption 1, $A$, $D_\theta$ and $D_{\theta^\perp}$ have a BCCB structure; hence all the blocks of $H^\top H$ retain that structure. By recalling that BCCB matrices can be diagonalized by means of two-dimensional Discrete Fourier Transforms (DFTs), we show how the solution to (17) can be computed expeditiously.
Let $F \in \mathbb{C}^{n \times n}$ be the matrix representing the two-dimensional DFT operator, and let $F^*$ denote its inverse, i.e., its adjoint. We can write $H^\top H$ as
$$H^\top H = \begin{bmatrix} F^* & 0 & 0 \\ 0 & F^* & 0 \\ 0 & 0 & F^* \end{bmatrix} \begin{bmatrix} \Gamma & -\Delta_\theta^* & -\Delta_{\theta^\perp}^* \\ -\Delta_\theta & \Phi_{11} & \Phi_{12} \\ -\Delta_{\theta^\perp} & \Phi_{21} & \Phi_{22} \end{bmatrix} \begin{bmatrix} F & 0 & 0 \\ 0 & F & 0 \\ 0 & 0 & F \end{bmatrix}, \tag{18}$$
where each block of the central matrix is the diagonal complex matrix associated with the corresponding block of $H^\top H$, and $\Delta_\theta^*$, $\Delta_{\theta^\perp}^*$ denote the (diagonal) adjoint matrices of $\Delta_\theta$, $\Delta_{\theta^\perp}$. By (18) and the definition of $x$, we can reformulate (17) as
$$\begin{bmatrix} \Gamma & -\Delta_\theta^* & -\Delta_{\theta^\perp}^* \\ -\Delta_\theta & \Phi_{11} & \Phi_{12} \\ -\Delta_{\theta^\perp} & \Phi_{21} & \Phi_{22} \end{bmatrix} \begin{bmatrix} F u \\ F w_1 \\ F w_2 \end{bmatrix} = \begin{bmatrix} F\, [H^\top v_x^k]_1 \\ F\, [H^\top v_x^k]_2 \\ F\, [H^\top v_x^k]_3 \end{bmatrix}, \tag{19}$$
where $w$ and $H^\top v_x^k$ are split into two and three blocks of size $n$, respectively.
Now we recall a result about the inversion of block matrices. Suppose that a square matrix M is partitioned into four blocks, i.e.,
$$M = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix};$$
then, if $M_{11}$ and $M_{22}$ are invertible, we have
$$M^{-1} = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}^{-1} = \begin{bmatrix} \left( M_{11} - M_{12} M_{22}^{-1} M_{21} \right)^{-1} & 0 \\ 0 & \left( M_{22} - M_{21} M_{11}^{-1} M_{12} \right)^{-1} \end{bmatrix} \begin{bmatrix} I & -M_{12} M_{22}^{-1} \\ -M_{21} M_{11}^{-1} & I \end{bmatrix}. \tag{20}$$
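Formula (20) can be sanity-checked numerically on a random matrix with $2 \times 2$ block structure (a standalone NumPy snippet of ours, not part of the paper's code):

```python
import numpy as np

# Numerical check of the block-inversion formula (20) on a random 2x2-block
# matrix with well-conditioned, invertible diagonal blocks.
rng = np.random.default_rng(0)
n = 3
M11 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
M22 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
M12 = 0.1 * rng.standard_normal((n, n))
M21 = 0.1 * rng.standard_normal((n, n))
M = np.block([[M11, M12], [M21, M22]])

# Inverses of the two Schur complements appearing in (20)
S1 = np.linalg.inv(M11 - M12 @ np.linalg.inv(M22) @ M21)
S2 = np.linalg.inv(M22 - M21 @ np.linalg.inv(M11) @ M12)
left = np.block([[S1, np.zeros((n, n))], [np.zeros((n, n)), S2]])
right = np.block([[np.eye(n), -M12 @ np.linalg.inv(M22)],
                  [-M21 @ np.linalg.inv(M11), np.eye(n)]])
Minv = left @ right   # should coincide with the direct inverse of M
```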
By applying (20) to the matrix consisting of the second and third block rows and columns of the matrix in (19), which we denote by $\Phi$, we get
$$\Phi^{-1} = \begin{bmatrix} \Phi_{11} & \Phi_{12} \\ \Phi_{21} & \Phi_{22} \end{bmatrix}^{-1} = \begin{bmatrix} \left( \Phi_{11} - \Phi_{12}\Phi_{22}^{-1}\Phi_{21} \right)^{-1} & -\left( \Phi_{11} - \Phi_{12}\Phi_{22}^{-1}\Phi_{21} \right)^{-1} \Phi_{12}\Phi_{22}^{-1} \\ -\left( \Phi_{22} - \Phi_{21}\Phi_{11}^{-1}\Phi_{12} \right)^{-1} \Phi_{21}\Phi_{11}^{-1} & \left( \Phi_{22} - \Phi_{21}\Phi_{11}^{-1}\Phi_{12} \right)^{-1} \end{bmatrix}. \tag{21}$$
To simplify the notation we set
$$\Psi = \begin{bmatrix} \Psi_{11} & \Psi_{12} \\ \Psi_{21} & \Psi_{22} \end{bmatrix} = \Phi^{-1}, \tag{22}$$
and observe that the matrices $\Psi_{ij} \in \mathbb{C}^{n \times n}$ are diagonal. Letting $\Delta = \begin{bmatrix} \Delta_\theta \\ \Delta_{\theta^\perp} \end{bmatrix}$, so that $\Delta^* = [\, \Delta_\theta^* \;\, \Delta_{\theta^\perp}^* \,]$, applying the inversion formula (20) to the whole matrix in (19), and using (21) and (22), we get
$$\begin{bmatrix} \Gamma & -\Delta^* \\ -\Delta & \Phi \end{bmatrix}^{-1} = \begin{bmatrix} \Xi^{-1} & 0 \\ 0 & \Omega^{-1} \end{bmatrix} \begin{bmatrix} I_n & \Delta^* \Psi \\ \Delta \Gamma^{-1} & I_{2n} \end{bmatrix}, \tag{23}$$
where
$$\Xi = \Gamma - \Delta^* \Psi \Delta = \Gamma - \begin{bmatrix} \Delta_\theta^* & \Delta_{\theta^\perp}^* \end{bmatrix} \begin{bmatrix} \Psi_{11} & \Psi_{12} \\ \Psi_{21} & \Psi_{22} \end{bmatrix} \begin{bmatrix} \Delta_\theta \\ \Delta_{\theta^\perp} \end{bmatrix}, \qquad \Omega = \Phi - \Delta \Gamma^{-1} \Delta^* = \begin{bmatrix} \Phi_{11} & \Phi_{12} \\ \Phi_{21} & \Phi_{22} \end{bmatrix} - \begin{bmatrix} \Delta_\theta \\ \Delta_{\theta^\perp} \end{bmatrix} \Gamma^{-1} \begin{bmatrix} \Delta_\theta^* & \Delta_{\theta^\perp}^* \end{bmatrix}.$$
We note that $\Xi \in \mathbb{C}^{n \times n}$ is diagonal (and its inversion is straightforward), while $\Omega \in \mathbb{C}^{2n \times 2n}$ has a $2 \times 2$ block structure with blocks that are diagonal matrices belonging to $\mathbb{C}^{n \times n}$. Thus, we can compute $\Upsilon = \Omega^{-1}$ by applying (20):
$$\Upsilon = \begin{bmatrix} \Upsilon_{11} & \Upsilon_{12} \\ \Upsilon_{21} & \Upsilon_{22} \end{bmatrix} = \begin{bmatrix} \Omega_{11} & \Omega_{12} \\ \Omega_{21} & \Omega_{22} \end{bmatrix}^{-1} = \begin{bmatrix} \left( \Omega_{11} - \Omega_{12}\Omega_{22}^{-1}\Omega_{21} \right)^{-1} & -\left( \Omega_{11} - \Omega_{12}\Omega_{22}^{-1}\Omega_{21} \right)^{-1} \Omega_{12}\Omega_{22}^{-1} \\ -\left( \Omega_{22} - \Omega_{21}\Omega_{11}^{-1}\Omega_{12} \right)^{-1} \Omega_{21}\Omega_{11}^{-1} & \left( \Omega_{22} - \Omega_{21}\Omega_{11}^{-1}\Omega_{12} \right)^{-1} \end{bmatrix}. \tag{24}$$
Summing up, by (19), (23) and (24), the solution to (17) can be obtained by computing
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} \Xi^{-1} & 0 \\ 0 & \Upsilon \end{bmatrix} \begin{bmatrix} I_n & \Delta^* \Psi \\ \Delta \Gamma^{-1} & I_{2n} \end{bmatrix} \begin{bmatrix} F\, [H^\top v_x^k]_1 \\ F\, [H^\top v_x^k]_2 \\ F\, [H^\top v_x^k]_3 \end{bmatrix}, \tag{25}$$
and setting
$$u^{k+1} = F^* y_1, \qquad w_1^{k+1} = F^* y_2, \qquad w_2^{k+1} = F^* y_3. \tag{26}$$
Remark 1.
The only quantity in (25) that varies at each iteration is $v_x^k$. Hence, the matrices $\Delta$, $\Gamma$, $\Psi$, $\Xi^{-1}$, and $\Upsilon$ can be computed only once, before the ADMM iterations start. This means that the overall cost of the exact solution of (14) at each iteration reduces to six two-dimensional DFTs and two matrix–vector products involving $3 \times 3$ block matrices with diagonal blocks of dimension $n$.

4.2. Solving the Subproblem in z

By looking at the form of $F_2(z)$ (see (9)) and by defining the vector $v_z^k = Hx^{k+1} + \mu^k$, we see that problem (15) can be split into the four problems
$$z_1^{k+1} = \arg\min_{z_1 \in \mathbb{R}^n} \; \lambda\, D_{KL}(z_1 + \gamma, b) + \frac{\rho}{2} \left\| z_1 - [v_z^k]_1 \right\|_2^2, \tag{27}$$
$$z_2^{k+1} = \arg\min_{z_2 \in \mathbb{R}^{2n}} \; \alpha_0 \| z_2 \|_{2,1|\mathbb{R}^{2n}} + \frac{\rho}{2} \left\| z_2 - [v_z^k]_2 \right\|_2^2, \tag{28}$$
$$z_3^{k+1} = \arg\min_{z_3 \in \mathbb{R}^{4n}} \; \alpha_1 \| z_3 \|_{2,1|\mathbb{R}^{4n}} + \frac{\rho}{2} \left\| z_3 - [v_z^k]_3 \right\|_2^2, \tag{29}$$
$$z_4^{k+1} = \arg\min_{z_4 \in \mathbb{R}^{n}} \; \chi_{\mathbb{R}_+^n}(z_4) + \frac{\rho}{2} \left\| z_4 - [v_z^k]_4 \right\|_2^2, \tag{30}$$
where $v_z^k = ([v_z^k]_1, [v_z^k]_2, [v_z^k]_3, [v_z^k]_4)$, with $[v_z^k]_1 \in \mathbb{R}^n$, $[v_z^k]_2 \in \mathbb{R}^{2n}$, $[v_z^k]_3 \in \mathbb{R}^{4n}$, and $[v_z^k]_4 \in \mathbb{R}^n$. Now we focus on the solution of the four subproblems.

4.2.1. Update of $z_1$

By the form of the Kullback–Leibler divergence in (2), the minimization problem (27) is equivalent to
$$\min_{z_1 \in \mathbb{R}^n} \; \lambda \sum_{i=1}^{n} \left\{ b_i \ln \frac{b_i}{(z_1)_i + \gamma_i} + (z_1)_i + \gamma_i - b_i \right\} + \frac{\rho}{2} \sum_{i=1}^{n} \left( (z_1)_i - d_i \right)^2, \tag{31}$$
where we set $d = [v_z^k]_1$ to ease the notation. From (31) it is clear that the problem in $z_1$ can be split into $n$ scalar problems of the form
$$\min_{z \in \mathbb{R}} \; \lambda \left( -b \ln(z + \gamma) + z \right) + \frac{\rho}{2} (z - d)^2, \tag{32}$$
where $b$, $\gamma$ and $d$ denote the corresponding entries of the vectors above.
Since the objective function of this problem is strictly convex, its solution can be determined by setting its derivative equal to zero, i.e., by solving
$$\lambda \left( -\frac{b}{z + \gamma} + 1 \right) + \rho (z - d) = 0,$$
which leads to the quadratic equation
$$z^2 + \left( \frac{\lambda}{\rho} + \gamma - d \right) z - \left( \frac{\lambda}{\rho} (b - \gamma) + \gamma d \right) = 0. \tag{33}$$
Since, by looking at the domain of the objective function in (32), $z + \gamma$ has to be strictly positive, we set each entry of $z_1^{k+1}$ equal to the largest solution of the corresponding quadratic equation (33).
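The $n$ scalar quadratics can be solved simultaneously in vectorized form. The sketch below is our code, not the paper's (the name `update_z1` is hypothetical); it builds the coefficients of (33) for all pixels at once and keeps the largest root, assuming the discriminant is nonnegative, as in the model's setting:

```python
import numpy as np

# Vectorized z1-update: solve, per pixel,
#   z^2 + (lam/rho + gamma - d) z - (lam/rho * (b - gamma) + gamma * d) = 0
# and keep the largest root (so that z + gamma > 0).
# Our sketch; names are hypothetical, not the paper's code.
def update_z1(d, b, gamma, lam, rho):
    t = lam / rho
    p = t + gamma - d                       # linear coefficient
    q = -(t * (b - gamma) + gamma * d)      # constant coefficient
    disc = np.sqrt(p ** 2 - 4.0 * q)        # assumed nonnegative here
    return (-p + disc) / 2.0                # largest root of z^2 + p z + q
```

Each returned entry satisfies the first-order optimality condition of the corresponding scalar problem.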

4.2.2. Update of $z_2$ and $z_3$

The minimization problems (28) and (29) correspond to the computation of the proximal operators of the functions $f(z_2) = \frac{\alpha_0}{\rho} \| z_2 \|_{2,1|\mathbb{R}^{2n}}$ and $g(z_3) = \frac{\alpha_1}{\rho} \| z_3 \|_{2,1|\mathbb{R}^{4n}}$, respectively.
By the definitions given in (4) and (5), we see that the two (2,1)-norms correspond to sums of two-norms of vectors in $\mathbb{R}^2$ and $\mathbb{R}^4$, respectively. This means that the computation of both proximal operators can be split into the computation of $n$ proximal operators of functions that are scaled two-norms in either $\mathbb{R}^2$ or $\mathbb{R}^4$.
The proximal operator of the function $f(y) = c \|y\|$, with $c > 0$, at a vector $d$ is
$$\mathrm{prox}_{c\|\cdot\|}(d) = \arg\min_{y} \; c \|y\| + \frac12 \|y - d\|^2.$$
It can be shown (see, e.g., [26] (Chapter 6)) that
$$\mathrm{prox}_{c\|\cdot\|}(d) = \left( 1 - \frac{c}{\max\{\|d\|,\, c\}} \right) d = \max\left\{ \frac{\|d\| - c}{\|d\|},\, 0 \right\} d.$$
Hence, for the update of $z_2$ we proceed as follows. By setting $d = [v_z^k]_2$ and $c = \alpha_0/\rho$, for each $i = 1, \dots, n$ we compute
$$\left( (z_2^{k+1})_i, \, (z_2^{k+1})_{n+i} \right) = \mathrm{prox}_{c\|\cdot\|}\left( (d_i, d_{n+i}) \right).$$
To update $z_3$, we set $d = [v_z^k]_3$ and $c = \alpha_1/\rho$ and compute
$$\left( (z_3^{k+1})_i, \, (z_3^{k+1})_{n+i}, \, (z_3^{k+1})_{2n+i}, \, (z_3^{k+1})_{3n+i} \right) = \mathrm{prox}_{c\|\cdot\|}\left( (d_i, d_{n+i}, d_{2n+i}, d_{3n+i}) \right).$$
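Both updates are instances of groupwise soft-thresholding and can be vectorized over the $n$ groups. The helper below is our sketch (names are ours, not the paper's); it handles the 2-vector case of $z_2$ with `group_size = 2` and the 4-vector case of $z_3$ with `group_size = 4`, using the block-wise storage $(d_i, d_{n+i}, \dots)$ described above:

```python
import numpy as np

# Groupwise soft-thresholding: prox of c*||.||_2 applied to each of the n
# stacked 2-vectors (z2 case) or 4-vectors (z3 case). Our sketch.
def prox_group_norm(d, c, group_size):
    """d has length group_size * n, stored block-wise as in the paper:
    group i is (d_i, d_{n+i}, ..., d_{(group_size-1)n+i})."""
    n = d.size // group_size
    g = d.reshape(group_size, n)         # rows: components, columns: groups
    norms = np.linalg.norm(g, axis=0)
    # shrink factor max{(||d|| - c)/||d||, 0}, with 0/0 treated as 0
    scale = np.maximum(norms - c, 0.0) / np.where(norms > 0, norms, 1.0)
    return (g * scale).reshape(-1)
```

For example, with $c = 1$ a group $(3, 4)$ of norm 5 is shrunk by the factor $4/5$, while a zero group stays at zero.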

4.2.3. Update of $z_4$

It is straightforward to verify that the update of $z_4$ in (30) can be obtained as
$$z_4^{k+1} = \Pi_{\mathbb{R}_+^n}\left( [v_z^k]_4 \right),$$
where $\Pi_{\mathbb{R}_+^n}$ is the Euclidean projection onto the nonnegative orthant in $\mathbb{R}^n$.

4.3. Summary of the ADMM Method

For the sake of clarity, in Algorithm 2 we sketch the ADMM version for solving problem (7).
In many image restoration applications, a reasonably good starting guess for $u$ is available. For example, if $A$ represents a blur operator, a common choice is to set $u^0$ equal to the noisy and blurry image, and we make this choice for $u^0$. By numerical experiments we also verified that, once $x = (u, w)$ has been initialized, it is convenient to set $u^1 = u^0$, $w_1^1 = w_1^0$ and $w_2^1 = w_2^0$ and to shift the order of the updates in the ADMM scheme (14)–(16), so that a “more effective” initialization of $z$ and $\mu$ is performed. We see from line 9 of Algorithm 2 that the algorithm stops when the relative change in the restored image $u$ goes below a certain threshold $tol \in (0, 1)$ or a maximum number of iterations $k_{\max}$ is reached. Finally, we note that in the case of the KL-TGV$^2$ model, corresponding to $\theta = 0$ and $a = 1$, we have $D_\theta = D_H$ and $D_{\theta^\perp} = D_V$; hence, we use the initialization $w_1^0 = D_H u^0$ and $w_2^0 = D_V u^0$.
Algorithm 2 ADMM for problem (7).
1: Let $u^0 \in \mathbb{R}^n$, $w_1^0 = D_\theta u^0$, $w_2^0 = D_{\theta^\perp} u^0$, $\mu^0 = 0$, $z^0 = 0$, $\lambda, \rho \in (0, +\infty)$, $\alpha_0, \alpha_1 \in (0, 1)$
2: Compute the matrices $\Delta$, $\Gamma$, $\Psi$, $\Xi^{-1}$, and $\Upsilon$ as specified in Section 4.1
3: Let $k = 0$, $u^1 = u^0$, $w^1 = w^0$, $stop = false$, $tol \in (0, 1)$, $k_{\max} \in \mathbb{N}$
4: while not $stop$ and $k \leq k_{\max}$ do
5:  Compute $z^{k+1}$ by solving the four subproblems (27)–(30)
6:  Compute $\mu^{k+1}$ as in (16)
7:  $k = k + 1$
8:  Compute $u^{k+1}$, $w_1^{k+1}$ and $w_2^{k+1}$ by (25) and (26)
9:  Set $stop = \left( \| u^{k+1} - u^k \| < tol\, \| u^k \| \right)$
10: end while

5. Numerical Results

All the experiments were carried out using MATLAB R2018a on a 3.50 GHz Intel Xeon E3 with 16 GB of RAM, under the Windows operating system. In this section, we first illustrate the effectiveness of Algorithm 1 for the estimation of the image direction, by comparing it with the algorithm given in [9] and by analysing its sensitivity to the degradation in the image to be restored. Then, we present numerical experiments that demonstrate the improvement of the KL-DTGV$^2$ model upon the KL-TGV$^2$ model for the restoration of directional images corrupted by Poisson noise.
Four directional images named phantom ( 512 × 512 ), grass ( 375 × 600 ), leaves ( 203 × 300 ) and carbon ( 247 × 300 ) were used in the experiments. The first image is a piecewise affine fibre phantom image obtained with the fibre_phantom_pa MATLAB function available from http://www2.compute.dtu.dk/~pcha/HDtomo/ (accessed on 20 September 2020). The second and third images represent grass and veins of leaves, respectively, which naturally exhibit a directional structure. The last image is a Scanning Electron Microscope (SEM) image of carbon fibres. The images are shown in Figure 2, Figure 3, Figure 4 and Figure 5.
To simulate experimental data, each reference image was convolved with two PSFs, one corresponding to a Gaussian blur with variance 2, generated by the psfGauss function from [27], and the other corresponding to an out-of-focus blur with radius 5, obtained with the function fspecial from the MATLAB Image Processing Toolbox. To take into account the existence of some background emission, a constant term $\gamma$ equal to $10^{-10}$ was added to all pixels of the blurry image. The resulting image was corrupted by Poisson noise, using the MATLAB function imnoise. The intensities of the original images were pre-scaled to get noisy and blurry images with Signal to Noise Ratio (SNR) equal to 43 and 37 dB. We recall that in the case of Poisson noise, which affects the photon counting process, the SNR is estimated as [28]
$$\mathrm{SNR} = 10 \log_{10} \frac{N_{\mathrm{exact}}}{\sqrt{N_{\mathrm{exact}} + N_{\mathrm{background}}}},$$
where N exact and N background are the total number of photons in the image to be recovered and in the background term, respectively. Finally, the corrupted images were scaled to have their maximum intensity values equal to 1. For each test problem, the noisy and blurry images are shown in Figure 2, Figure 3, Figure 4 and Figure 5.

5.1. Direction Estimation

In Figure 2, Figure 3, Figure 4 and Figure 5 we compare Algorithm 1 with the algorithm proposed in [9], showing that Algorithm 1 always correctly estimates the main direction of the four test images. We also test the robustness of our algorithm with respect to noise and blur. In Figure 6 we show the estimated main direction of the phantom image corrupted by Poisson noise with SNR = 35, 37, 39, 41, 43 dB and out-of-focus blurs with radius R = 5, 7, 9. In only one case (SNR = 35 dB, R = 7) does Algorithm 1 fail, returning as estimate the orthogonal direction, i.e., the direction corresponding to the large black line and the background color gradient. Finally, we test Algorithm 1 on a phantom image with vertical, horizontal and diagonal main directions, corresponding to $\theta = 0°, 90°, 45°$. The results, in Figure 7, show that our algorithm is not sensitive to the specific directional structure of the image.

5.2. Image Deblurring

We compare the quality of the restorations obtained by using the DTGV$^2$ and TGV$^2$ regularizers, using ADMM for the solution of both models. In all the tests, the value of the penalty parameter was set as $\rho = 10$ and the value of the stopping threshold as $tol = 10^{-4}$. A maximum number of $k_{\max} = 500$ iterations was allowed. Following [9,10], the weight parameters of DTGV were chosen as $\alpha_0 = \beta$ and $\alpha_1 = 1 - \beta$ with $\beta = 2/3$. For each test problem, the value of the regularization parameter $\lambda$ was tuned by a trial-and-error strategy, which consisted in running ADMM with initial guess $u^0 = b$ several times on each test image, varying the value of $\lambda$ at each execution. For all the runs, the stopping criterion for ADMM and the values of $\alpha_0$, $\alpha_1$ and $\rho$ were the same as described above. The value of $\lambda$ yielding the smallest Root Mean Square Error (RMSE) at the last iteration was chosen as the “optimal” value.
The numerical results are summarized in Table 1, where the RMSE, the Improved Signal to Noise Ratio (ISNR) [29], and the Structural Similarity (SSIM) index [30] are used to give a quantitative evaluation of the quality of the restorations. As a measure of the computational cost, the number of iterations and the time in seconds are reported. Table 1 also shows, for each test problem, the value of the regularization parameter $\lambda$. The restored images are shown in Figure 8, Figure 9, Figure 10 and Figure 11. For the carbon test problem, Figure 12 shows the error images, i.e., the images obtained as the absolute difference between the original image and the restored one. The values of the pixels of the error images have been scaled to the range $[m, M]$, where $m$ and $M$ are the minimum and maximum pixel values over the DTGV$^2$ and TGV$^2$ error images.
From the results, it is evident that the DTGV$^2$ model outperforms the TGV$^2$ one in terms of quality of the restoration. A visual inspection of the figures shows that the DTGV$^2$ regularization is very effective in removing the noise, while for high noise levels the TGV$^2$ reconstructions still exhibit noise artifacts. Finally, by observing the “Iters” column of the table, we can conclude that, on average, the TGV$^2$ regularization requires fewer ADMM iterations to achieve a relative change in the restoration below the fixed threshold. However, the computational time per iteration is very small, so ADMM is efficient for the KL-DTGV$^2$ model as well.
Finally, to illustrate the behaviour of ADMM, in Figure 13 we plot the RMSE history for the carbon test problem. A similar RMSE behaviour has been observed in all the numerical experiments.

6. Conclusions

We dealt with the use of the Directional TGV regularization in the case of directional images corrupted by Poisson noise. We presented the KL-DTGV$^2$ model and introduced a two-block ADMM version for its minimization. Finally, we proposed an effective strategy for the estimation of the main direction of the image. Our numerical experiments show that for Poisson noise the DTGV$^2$ regularization provides superior restoration performance compared with the standard TGV$^2$ regularization, thus highlighting the importance of taking into account the texture structure of the image. A crucial ingredient for the success of the model was the proposed direction estimation strategy, which proved to be more reliable than those proposed in the literature.
Possible future work includes the use of space-variant regularization terms and the analysis of automatic strategies for the selection of the regularization parameters.

Author Contributions

All authors have contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the Istituto Nazionale di Alta Matematica, Gruppo Nazionale per il Calcolo Scientifico (INdAM-GNCS). D. di Serafino and M. Viola were also supported by the V:ALERE Program of the University of Campania “L. Vanvitelli”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bertero, M.; Boccacci, P.; Desiderà, G.; Vicidomini, G. Image deblurring with Poisson data: From cells to galaxies. Inverse Probl. 2009, 25, 123006.
2. Shepp, L.A.; Vardi, Y. Maximum Likelihood Reconstruction for Emission Tomography. IEEE Trans. Med. Imaging 1982, 1, 113–122.
3. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Physica D 1992, 60, 259–268.
4. Bredies, K.; Kunisch, K.; Pock, T. Total generalized variation. SIAM J. Imaging Sci. 2010, 3, 492–526.
5. Bredies, K.; Holler, M. Regularization of linear inverse problems with total generalized variation. J. Inverse Ill-Posed Probl. 2014, 22, 871–913.
6. Gao, Y.; Liu, F.; Yang, X. Total generalized variation restoration with non-quadratic fidelity. Multidimens. Syst. Signal Process. 2018, 29, 1459–1484.
7. Di Serafino, D.; Landi, G.; Viola, M. TGV-based restoration of Poissonian images with automatic estimation of the regularization parameter. In Proceedings of the 21st International Conference on Computational Science and Its Applications (ICCSA), Cagliari, Italy, 5–8 July 2021.
8. Bayram, I.; Kamasak, M.E. Directional Total Variation. IEEE Signal Process. Lett. 2012, 19, 781–784.
9. Kongskov, R.D.; Dong, Y. Directional Total Generalized Variation Regularization for Impulse Noise Removal. In Scale Space and Variational Methods in Computer Vision; Lauze, F., Dong, Y., Dahl, A., Eds.; SSVM 2017; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; Volume 10302, pp. 221–231.
10. Kongskov, R.D.; Dong, Y.; Knudsen, K. Directional total generalized variation regularization. BIT 2019, 59, 903–928.
11. Di Serafino, D.; Landi, G.; Viola, M. ACQUIRE: An inexact iteratively reweighted norm approach for TV-based Poisson image restoration. Appl. Math. Comput. 2020, 364, 124678.
12. Setzer, S.; Steidl, G.; Teuber, T. Restoration of images with rotated shapes. Numer. Algorithms 2008, 48, 49–66.
13. Kanopoulos, N.; Vasanthavada, N.; Baker, R. Design of an image edge detection filter using the Sobel operator. IEEE J. Solid-State Circuits 1988, 23, 358–367.
14. Hough, P. Method and Means for Recognizing Complex Patterns. U.S. Patent 3069654, 18 December 1962.
15. Duda, R.O.; Hart, P.E. Use of the Hough transform to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15.
16. Bonettini, S.; Zanella, R.; Zanni, L. A scaled gradient projection method for constrained image deblurring. Inverse Probl. 2009, 25, 015002.
17. Birgin, E.G.; Martínez, J.M.; Raydan, M. Spectral Projected Gradient Methods: Review and Perspectives. J. Stat. Softw. 2014, 60, 1–21.
18. Di Serafino, D.; Ruggiero, V.; Toraldo, G.; Zanni, L. On the steplength selection in gradient methods for unconstrained optimization. Appl. Math. Comput. 2018, 318, 176–195.
19. Goldstein, T.; Osher, S. The Split Bregman Method for L1-Regularized Problems. SIAM J. Imaging Sci. 2009, 2, 323–343.
20. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
21. Parikh, N.; Boyd, S. Proximal Algorithms. Found. Trends Optim. 2014, 1, 123–231.
22. Bonettini, S.; Loris, I.; Porta, F.; Prato, M. Variable Metric Inexact Line-Search-Based Methods for Nonsmooth Optimization. SIAM J. Optim. 2016, 26, 891–921.
23. De Simone, V.; di Serafino, D.; Viola, M. A subspace-accelerated split Bregman method for sparse data recovery with joint ℓ1-type regularizers. Electron. Trans. Numer. Anal. 2020, 53, 406–425.
24. Setzer, S.; Steidl, G.; Teuber, T. Deblurring Poissonian images by split Bregman techniques. J. Vis. Commun. Image Represent. 2010, 21, 193–199.
25. Eckstein, J.; Bertsekas, D.P. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318.
26. Beck, A. First-Order Methods in Optimization; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2017.
27. Nagy, J.G.; Palmer, K.; Perrone, L. Iterative methods for image deblurring: A Matlab object-oriented approach. Numer. Algorithms 2004, 36, 73–93.
28. Bertero, M.; Boccacci, P.; Ruggiero, V. Inverse Imaging with Poisson Data; IOP Publishing: Bristol, UK, 2018.
29. Banham, M.; Katsaggelos, A. Digital image restoration. IEEE Signal Process. Mag. 1997, 14, 24–41.
30. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Workflow of Algorithm 1 on a random directional image.
Figure 2. Test problem phantom: original and corrupted images. The yellow dash-dotted line indicates the direction estimated by Algorithm 1 and the red dashed line the direction estimated by the method in [9].
Figure 3. Test problem grass: original and corrupted images. The yellow dash-dotted line indicates the direction estimated by Algorithm 1 and the red dashed line the direction estimated by the method in [9].
Figure 4. Test problem leaves: original and corrupted images. The yellow dash-dotted line indicates the direction estimated by Algorithm 1 and the red dashed line the direction estimated by the method in [9].
Figure 5. Test problem carbon: original and corrupted images. The yellow dash-dotted line indicates the direction estimated by Algorithm 1 and the red dashed line the direction estimated by the method in [9].
Figure 6. Direction estimation for phantom with SNR = 35, 37, 39, 41, 43 dB (from left to right) and out-of-focus blur with radius R = 5, 7, 9 (from top to bottom).
Figure 7. Direction estimation for phantom with SNR = 37, 43 (top, bottom) and out-of-focus blur with radius R = 5.
Figure 8. Test problem phantom: images restored with DTGV² (top) and TGV² (bottom).
Figure 9. Test problem grass: images restored with DTGV² (top) and TGV² (bottom).
Figure 10. Test problem leaves: images restored with DTGV² (top) and TGV² (bottom).
Figure 11. Test problem carbon: images restored with DTGV² (top) and TGV² (bottom).
Figure 12. Test problem carbon: difference images with DTGV² (top) and TGV² (bottom).
Figure 13. Test problem carbon: RMSE history for the KL-DTGV² (solid line) and KL-TGV² (dashed line) models.
Table 1. Numerical results for the test problems.
| Blur | SNR | Model | λ | RMSE | ISNR | MSSIM | Iters | Time |
|---|---|---|---|---|---|---|---|---|
| **phantom** | | | | | | | | |
| Out-of-focus | 43 | DTGV | 57.5 | 2.2558 × 10⁻² | 9.5472 | 9.3007 × 10⁻¹ | 86 | 10.95 |
| | | TGV | 275 | 2.8043 × 10⁻² | 7.6568 | 8.9887 × 10⁻¹ | 89 | 11.33 |
| | 37 | DTGV | 3.25 | 3.7573 × 10⁻² | 7.4431 | 8.5823 × 10⁻¹ | 122 | 15.45 |
| | | TGV | 22.5 | 4.1719 × 10⁻² | 6.5339 | 8.4061 × 10⁻¹ | 52 | 6.64 |
| Gaussian | 43 | DTGV | 25 | 1.5530 × 10⁻² | 9.1966 | 9.7829 × 10⁻¹ | 56 | 7.17 |
| | | TGV | 100 | 1.8100 × 10⁻² | 7.8667 | 9.7200 × 10⁻¹ | 45 | 5.76 |
| | 37 | DTGV | 3 | 2.5498 × 10⁻² | 9.0841 | 9.2994 × 10⁻¹ | 90 | 11.41 |
| | | TGV | 17.5 | 3.0674 × 10⁻² | 7.4788 | 9.0199 × 10⁻¹ | 53 | 6.76 |
| **grass** | | | | | | | | |
| Out-of-focus | 43 | DTGV | 60 | 3.6313 × 10⁻² | 7.7364 | 8.7262 × 10⁻¹ | 136 | 15.55 |
| | | TGV | 550 | 3.6575 × 10⁻² | 7.6738 | 8.7188 × 10⁻¹ | 179 | 20.39 |
| | 37 | DTGV | 50 | 5.6164 × 10⁻² | 4.7390 | 7.6165 × 10⁻¹ | 160 | 18.56 |
| | | TGV | 55 | 5.7604 × 10⁻² | 4.5191 | 7.4566 × 10⁻¹ | 72 | 8.31 |
| Gaussian | 43 | DTGV | 65 | 2.9883 × 10⁻² | 6.3343 | 9.2764 × 10⁻¹ | 106 | 12.08 |
| | | TGV | 650 | 3.0814 × 10⁻² | 6.0676 | 9.2523 × 10⁻¹ | 136 | 15.48 |
| | 37 | DTGV | 5.5 | 4.2274 × 10⁻² | 4.7973 | 8.5615 × 10⁻¹ | 98 | 11.13 |
| | | TGV | 35 | 4.3936 × 10⁻² | 4.4624 | 8.4795 × 10⁻¹ | 54 | 6.18 |
| **leaves** | | | | | | | | |
| Out-of-focus | 43 | DTGV | 125 | 6.2767 × 10⁻² | 7.4978 | 8.2099 × 10⁻¹ | 251 | 31.18 |
| | | TGV | 1100 | 8.2397 × 10⁻² | 5.1342 | 7.1557 × 10⁻¹ | 435 | 53.74 |
| | 37 | DTGV | 12.5 | 9.5597 × 10⁻² | 4.1497 | 6.3065 × 10⁻¹ | 257 | 31.87 |
| | | TGV | 90 | 1.1874 × 10⁻¹ | 2.2665 | 4.3294 × 10⁻¹ | 113 | 14.03 |
| Gaussian | 43 | DTGV | 150 | 7.3332 × 10⁻² | 4.8675 | 7.7456 × 10⁻¹ | 236 | 29.13 |
| | | TGV | 1750 | 8.0857 × 10⁻² | 4.0190 | 7.3001 × 10⁻¹ | 380 | 46.77 |
| | 37 | DTGV | 12.5 | 9.0999 × 10⁻² | 3.3907 | 6.6469 × 10⁻¹ | 148 | 18.36 |
| | | TGV | 100 | 1.0308 × 10⁻¹ | 2.3081 | 5.6534 × 10⁻¹ | 103 | 12.85 |
| **carbon** | | | | | | | | |
| Out-of-focus | 43 | DTGV | 150 | 1.8360 × 10⁻² | 1.2830 × 10¹ | 9.4734 × 10⁻¹ | 331 | 13.78 |
| | | TGV | 850 | 2.3825 × 10⁻² | 1.0567 × 10¹ | 9.3671 × 10⁻¹ | 233 | 9.73 |
| | 37 | DTGV | 20 | 3.1682 × 10⁻² | 8.2416 | 8.6294 × 10⁻¹ | 171 | 7.07 |
| | | TGV | 150 | 3.8840 × 10⁻² | 6.4723 | 8.2237 × 10⁻¹ | 155 | 6.55 |
| Gaussian | 43 | DTGV | 250 | 2.0453 × 10⁻² | 8.6178 | 9.5974 × 10⁻¹ | 305 | 12.53 |
| | | TGV | 950 | 2.4839 × 10⁻² | 6.9302 | 9.5698 × 10⁻¹ | 171 | 7.12 |
| | 37 | DTGV | 15 | 2.7995 × 10⁻² | 6.2017 | 9.3007 × 10⁻¹ | 128 | 5.36 |
| | | TGV | 150 | 3.3061 × 10⁻² | 4.7572 | 8.9690 × 10⁻¹ | 118 | 4.73 |
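For reference, the quality metrics reported in Table 1 can be computed as in the following sketch. These are the common definitions of relative RMSE and ISNR (in dB); the paper's exact normalization may differ, and the variable names `x_true`, `x_rec`, `y` are illustrative. MSSIM follows the structural-similarity index of Wang et al. [30] and is typically obtained from an existing implementation rather than re-coded.

```python
import numpy as np

def rmse(x_rec, x_true):
    # relative root-mean-square error: ||x_rec - x_true|| / ||x_true||
    return np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)

def isnr(y, x_rec, x_true):
    # Improved SNR (dB): gain of the restoration x_rec over the corrupted image y
    return 10.0 * np.log10(np.sum((y - x_true) ** 2) /
                           np.sum((x_rec - x_true) ** 2))
```

A positive ISNR means the restoration is closer to the ground truth than the corrupted data, so larger ISNR (and MSSIM) and smaller RMSE indicate better restorations.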
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
