Image Restoration Using Fixed-Point-Like Methods for New TVL1 Variational Problems

Abstract: In this paper, we first propose two TVL1 variational problems for restoring images degraded by blurring and impulse noise, and then we propose two fixed-point-like methods, based on proximal operators, for solving the newly proposed TVL1 problems. Numerical experiments for several test images blurred by a Gaussian kernel and corrupted by salt-and-pepper impulse noise are provided to demonstrate the efficiency and reliability of the proposed fixed-point-like methods. The numerical results show that the two fixed-point-like methods for solving the new TVL1 variational problems perform very well in both PSNR (peak signal-to-noise ratio) values and CPU time compared with the fixed-point-like methods for solving two existing TVL1 variational problems.


Introduction
In this paper, we consider the problem of restoring images degraded by blurring and impulse noise. Impulse noise is often generated by malfunctioning pixels in camera sensors, faulty memory locations in hardware, or erroneous transmission. Two common types of impulse noise are salt-and-pepper noise and random-valued noise. Assume that the intensity range of an image is [d_min, d_max]. Salt-and-pepper noise corrupts a portion of the pixels with only the two values d_min or d_max while keeping the other pixels unaffected. For random-valued noise, a portion of the pixels is corrupted in the same manner as for salt-and-pepper noise, except that the corrupted pixels can take any random value between d_min and d_max.
Let us assume that the true image U is an N × N square array. For convenience of exposition, the image U is represented by a long vector u of size m = N^2 defined by stacking the columns of U, i.e., u = (u_{*1}^T, u_{*2}^T, . . . , u_{*N}^T)^T, where u_{*ℓ} ∈ R^N denotes the ℓth column of U. In this paper, we only consider the reflexive boundary condition, which means that the scene outside the image boundaries is a mirror image of the scene inside the image boundaries, and we assume that an observed (or degraded) image f ∈ R^m is represented by

f = Au + η,   (1)

where A ∈ R^{m×m} is a blurring operator, u ∈ R^m is the original image, and η ∈ R^m denotes the impulse noise. For salt-and-pepper noise with noise level s, the noisy image f = (f_i) ∈ R^m is defined as

f_i = d_min with probability s/2,   f_i = d_max with probability s/2,   f_i = (Au)_i with probability 1 - s,

and, for random-valued noise, f_i = d_i with probability r and f_i = (Au)_i with probability 1 - r, where d_i is a uniformly distributed random variable in [d_min, d_max] and r is the noise level of the random-valued noise. The objective of this paper is to restore u from the blurred and noisy image f as well as possible. The classic TVL1 model for recovering a true image u from an observed image f with impulse noise is given by the following variational problem with an l1-norm data fidelity term and a total variation regularization term:

min_{u ∈ R^m} ||Au - f||_1 + ρ||u||_TV,   (2)

where ρ > 0 is a regularization parameter and ||u||_TV denotes the total variation (TV) of u. There are two possible definitions of ||u||_TV: the anisotropic TV and the isotropic TV. In this paper, we only consider the isotropic TV of u ∈ R^m, which is defined by

||u||_TV = Σ_{i=1}^m ( (∇_x u)_i^2 + (∇_y u)_i^2 )^{1/2},   (3)

where the discrete gradient operator ∇ : R^m → R^{2m} is defined by (∇u)_i = (∇_x u)_i and (∇u)_{m+i} = (∇_y u)_i for each i = 1, 2, . . . , m, with

(∇_x u)_i = 0 if i mod N = 1, and (∇_x u)_i = u_i - u_{i-1} otherwise,
(∇_y u)_i = 0 if i ≤ N, and (∇_y u)_i = u_i - u_{i-N} otherwise.

Note that, if the l1-norm is used instead of the l2-norm in Equation (3), the resulting TV norm is anisotropic.
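To make the degradation model f = Au + η concrete, the following sketch degrades a toy image; the 3 × 3 averaging kernel, the 8 × 8 size, and the 30% noise level are illustrative choices of ours, not the settings used in the experiments of this paper:

```python
import numpy as np

def add_salt_and_pepper(img, s, d_min=0.0, d_max=1.0, rng=None):
    """Corrupt a fraction s of pixels: half set to d_min, half to d_max."""
    rng = np.random.default_rng(rng)
    noisy = img.copy()
    r = rng.random(img.shape)
    noisy[r < s / 2] = d_min                  # "pepper" pixels
    noisy[(r >= s / 2) & (r < s)] = d_max     # "salt" pixels
    return noisy

# Degradation model f = A u + eta on a toy 8x8 image with a crude blur.
rng = np.random.default_rng(0)
u = rng.random((8, 8))
blur = np.ones((3, 3)) / 9.0                  # stand-in for the Gaussian kernel
# naive 'same'-size convolution with reflexive (mirror) boundary condition
pad = np.pad(u, 1, mode="symmetric")
Au = sum(blur[i, j] * pad[i:i + 8, j:j + 8] for i in range(3) for j in range(3))
f = add_salt_and_pepper(Au, s=0.3)            # 30% salt-and-pepper noise
```

Replacing the averaging kernel with a 15 × 15 Gaussian kernel would reproduce the blurring setting used in Section 6.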
Notice that many mathematical models other than the TVL1 model (2) have been proposed for recovering a true image from an observed image corrupted by blur and impulse noise (e.g., see [1,2] and the references therein). In this paper, we only consider fixed-point-like methods for solving several types of TVL1 variational models. In the last few decades, the problem of solving the classical TVL1 model Equation (2) has been studied by many researchers (see [3][4][5][6][7][8][9][10][11][12] and the references therein). It was shown in [9][10][11] that the TVL1 model works successfully in recovering blurred images corrupted by impulse noise. Notice that the TVL1 model is difficult to solve both mathematically and numerically since neither the l1-norm data fidelity term nor the regularization term is differentiable.
Recently, Lu et al. [13] proposed a fixed-point method for solving the following TVL1 variational problem:

min_{u ∈ R^m} λ||Au - f||_1 + (ρ/2)||u||_2^2 + ||u||_TV,   (4)

where λ and ρ are positive numbers. They showed that the fixed-point method performs remarkably better in image quality measured by PSNR and preserves more features than FTVd (fast total variation deconvolution) proposed in [11], at the expense of a substantial increase in computational time. Furthermore, Han and Yun [14] proposed a fixed-point-like method for solving the following TVL1 variational problem:

min_{u ∈ R^m} λ||Au - f||_1 + ρ||u||_2 + ||u||_TV,   (5)

where λ and ρ are positive numbers. They showed that the fixed-point-like method restores the true image much better than the fixed-point method proposed by Lu et al. [13]. These two approaches motivate us to propose the following two TVL1 variational problems:

min_{u ∈ R^m} λ||Au - f||_1 + (ρ/2)||Du||_2^2 + ||u||_TV,   (6)

min_{u ∈ R^m} λ||Au - f||_1 + ρ||Du||_2 + ||u||_TV,   (7)

where D = -∆ and ∆ denotes a discrete Laplacian operator. Under the reflexive boundary condition, the discrete Laplacian operator is represented by a singular matrix in R^{m×m} (see Section 6). The new TVL1 model Equation (6) uses the smooth term ||Du||_2^2 instead of the smooth term ||u||_2^2. The reason for using the term ||Du||_2^2 is that it may better recover a true image u from an observed image f by taking four neighboring pixels into account when updating the approximate image at each step of the iterative methods proposed in this paper. In addition, the new TVL1 model Equation (7) uses the non-smooth term ||Du||_2 instead of the smooth term ||Du||_2^2 for the purpose of better preserving the edges and corners in the restored images.
Notice that the TVL1 problem Equation (4) has a unique solution since its objective function is strictly convex, while the TVL1 problems Equations (5)-(7) may not have unique solutions since their objective functions are only convex, not strictly convex.
This paper is organized as follows. In Section 2, we provide some definitions and useful properties which are fundamental tools for developing numerical algorithms for solving the TVL1 variational problems Equations (6) and (7). In Section 3, we review the fixed-point algorithm proposed in [13] for solving the TVL1 problem Equation (4) and the fixed-point-like algorithm proposed in [14] for solving the TVL1 problem Equation (5), for the purpose of comparison. In Section 4, we propose a fixed-point-like method, using proximal operators, for solving the newly proposed TVL1 problem Equation (6). In Section 5, we propose a fixed-point-like method, using proximal operators, for solving the newly proposed TVL1 problem Equation (7). In Section 6, we provide numerical experiments for several test images blurred by a Gaussian kernel and corrupted by salt-and-pepper impulse noise in order to demonstrate the efficiency and reliability of the two fixed-point-like methods for solving the TVL1 problems Equations (6) and (7). The performance of these two methods is evaluated by comparing their numerical results with those of the two fixed-point-like methods for the existing TVL1 problems Equations (4) and (5). Lastly, we provide some concluding remarks.

Preliminaries
Some definitions and useful results which we refer to later in this paper are provided below. We first provide the proximal operator introduced by Moreau [15].

Definition 1.
Let ψ : R^m → R ∪ {+∞} be a proper, convex, and lower semi-continuous (l.s.c.) function. The proximal operator of ψ at v ∈ R^m is defined by

prox_ψ(v) = argmin_{x ∈ R^m} { ψ(x) + (1/2)||x - v||_2^2 }.

Definition 2.
Let ψ : R^m → R ∪ {+∞} be a proper, convex, and l.s.c. function. The subdifferential of ψ at v ∈ R^m is defined by

∂ψ(v) = { w ∈ R^m : ψ(x) ≥ ψ(v) + ⟨w, x - v⟩ for all x ∈ R^m }.

Elements of ∂ψ(v) are called subgradients.
It is well known that the subdifferential of a convex function ψ is a set-valued mapping from R^m into nonempty convex compact sets in R^m [16]. We now provide explicit formulas for the three proximal operators used in this paper: those of the l1-norm, the l2-norm, and the isotropic TV norm [14,16]. The first example gives the proximal operators of the l1-norm and l2-norm defined on R^m.

Example 1. Let || · ||_1 and || · ||_2 be the l1-norm and l2-norm defined on R^m, respectively. For any λ > 0 and v ∈ R^m,

prox_{(1/λ)||·||_1}(v) = sign(v) .* max{ |v| - 1/λ, 0 },
prox_{(1/λ)||·||_2}(v) = max{ ||v||_2 - 1/λ, 0 } v / ||v||_2   (taken to be 0 when v = 0),

where |v| denotes the elementwise absolute value of the vector v and .* denotes elementwise multiplication.
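These two proximal operators reduce to elementwise and blockwise soft thresholding, respectively; a minimal sketch (using threshold 1/λ, the scaling assumed here) is:

```python
import numpy as np

def prox_l1(v, lam):
    """prox of (1/lam)*||.||_1: elementwise soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - 1.0 / lam, 0.0)

def prox_l2(v, lam):
    """prox of (1/lam)*||.||_2: shrink the whole vector toward the origin."""
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return v.copy()
    return max(nv - 1.0 / lam, 0.0) * v / nv

v = np.array([1.5, -0.2, 0.7])
# with lam = 2 the threshold is 1/2, so prox_l1 gives (1.0, 0.0, 0.2)
```

Note that prox_l1 acts on each component independently, while prox_l2 only rescales the vector, so it never changes its direction.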
Notice that the isotropic TV of u ∈ R^m defined by Equation (3) can be expressed as

||u||_TV = (ϕ ∘ B)(u) = ϕ(Bu),

where ϕ : R^{2m} → R is a convex function defined by

ϕ(w) = Σ_{i=1}^m ( w_i^2 + w_{m+i}^2 )^{1/2},   (10)

and B is a d × m matrix which represents the discrete gradient operator with m = N^2 and d = 2m (see Section 6). The next example gives the proximal operator of the convex function ψ = (1/λ)ϕ on R^{2m}, known as the generalized shrinkage formula, where λ > 0.
Example 2. Let ϕ be defined by Equation (10) and let λ > 0. For any v ∈ R^{2m},

prox_{(1/λ)ϕ}(v) = ∏_{i=1}^m max{ ||v^{(i)}||_2 - 1/λ, 0 } v^{(i)} / ||v^{(i)}||_2,   with v^{(i)} = (v_i, v_{m+i})^T,

where ∏ denotes the Cartesian product of vector spaces, the ith factor is placed in components i and m + i, and the quotient is taken to be 0 when v^{(i)} = 0.
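A sketch of ϕ and its generalized-shrinkage proximal operator, pairing components i and m + i as in the isotropic TV definition (the 1/λ threshold is again our assumed scaling):

```python
import numpy as np

def phi(w):
    """phi(w) = sum_i sqrt(w_i^2 + w_{m+i}^2) for w in R^{2m}."""
    m = w.size // 2
    return np.sum(np.hypot(w[:m], w[m:]))

def prox_phi(v, lam):
    """Generalized shrinkage: shrink each pair (v_i, v_{m+i}) jointly."""
    m = v.size // 2
    pair_norm = np.hypot(v[:m], v[m:])
    # shrink each pair's norm by 1/lam, avoiding division by zero
    scale = np.maximum(pair_norm - 1.0 / lam, 0.0) / np.where(pair_norm > 0, pair_norm, 1.0)
    return np.concatenate([scale * v[:m], scale * v[m:]])

v = np.array([3.0, 0.0, 4.0, 0.0])     # m = 2, pairs (3, 4) and (0, 0)
shrunk = prox_phi(v, lam=1.0)          # pair norm 5 shrinks by 1: (3, 4) -> (2.4, 3.2)
```

Each pair is treated like a small 2-vector under the l2-norm shrinkage of Example 1, which is exactly what makes the TV isotropic.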
The following theorem outlines a relationship between the proximal operator and the subdifferential of a convex function.
Theorem 1 ([13,15]). If ψ is a proper, convex, and l.s.c. function on R^m and v ∈ R^m, then

prox_ψ(v) = (I + ∂ψ)^{-1}(v),

where I denotes the identity operator on R^m.

Fixed-Point Methods for the TVL1 Problems Equations (4) and (5)

In this section, we review the fixed-point algorithm proposed in [13] for solving the TVL1 problem Equation (4) and the fixed-point-like algorithm proposed in [14] for solving the TVL1 problem Equation (5), for the purpose of comparison with the two fixed-point-like algorithms proposed in this paper. The fixed-point method, called Algorithm 1, for the TVL1 problem Equation (4) and the fixed-point-like method, called Algorithm 2, for the TVL1 problem Equation (5) are described below (see [13,14] for detailed descriptions of the algorithms).

...
10: end if
11: end for
For all algorithms considered in this paper, maxit denotes the maximum number of iterations and tol denotes the tolerance value of the stopping criterion.

Fixed-Point-Like Method for the TVL1 Problem Equation (6)
In this section, we propose a fixed-point-like method, using proximal operators, for solving the newly proposed TVL1 variational problem Equation (6). Equation (6) can be expressed as

min_{u ∈ R^m} λ||Au - f||_1 + (ρ/2)||Du||_2^2 + ϕ(Bu),   (12)

where λ and ρ are positive numbers and ϕ and B are defined as in Equation (10). Using Theorem 1, we can obtain the following property of a solution to the TVL1 problem Equation (12).

Theorem 2.
If ϕ is a real-valued convex function on R^d, B is a d × m matrix, A is an m × m matrix, and u is a solution to the TVL1 problem Equation (12), then, for any α, β > 0, there exist vectors a ∈ R^m and b ∈ R^d satisfying Equations (13)-(15). Conversely, if there exist positive numbers α, β and vectors a ∈ R^m, b ∈ R^d, and u ∈ R^m satisfying Equations (13)-(15), then u is a solution to the TVL1 problem Equation (12).
By splitting Equations (13) and (14) of Theorem 2 and rearranging the resulting equations, we can obtain the following corollary.

Corollary 1.
If ϕ is a real-valued convex function on R^d, B is a d × m matrix, A is an m × m matrix, and u is a solution to the TVL1 problem Equation (12), then, for any α, β > 0, there exist vectors a ∈ R^m and b ∈ R^d satisfying Equations (18)-(22). Conversely, if there exist positive numbers α, β and vectors a ∈ R^m, b ∈ R^d, and u ∈ R^m satisfying Equations (18)-(22), then u is a solution to the TVL1 problem Equation (12).

Proof. Splitting Equations (13) and (14) and using the notations (18) and (19) for a* and b*, one obtains Equations (21) and (22). Substituting Equations (21) and (22) into Equation (15), Equation (20) is also obtained.
From Equations (18)-(22) of Corollary 1, we can obtain a fixed-point-like method, called Algorithm 3, using the proximal operators for the TVL1 problem Equation (6).

Algorithm 3 Fixed-point-like method for the TVL1 problem Equation (6)
...
11: end if
12: end for
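The algorithm listing is not reproduced in full here, but the flavor of such a proximal fixed-point iteration can be sketched on a simplified one-dimensional analogue. The sketch below is not the paper's Algorithm 3: it uses a standard proximal-gradient iteration on min_u λ||u - f||_1 + (ρ/2)||Du||_2^2 with A = I (pure denoising), and all sizes and parameters are our own illustrative choices:

```python
import numpy as np

def soft(v, t):
    """Elementwise soft thresholding with threshold t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# 1D analogue: min_u  lam*||u - f||_1 + (rho/2)*||D u||_2^2   (A = I)
n = 50
D = np.eye(n) - np.eye(n, k=-1)
D[0, :] = 0.0                                  # backward differences, zero first row
rng = np.random.default_rng(2)
truth = np.repeat([0.0, 1.0], n // 2)          # piecewise-constant signal
f = truth.copy()
idx = rng.choice(n, size=10, replace=False)    # 20% impulse corruption
f[idx] = rng.random(10)

lam, rho = 1.0, 2.0
L = rho * np.linalg.norm(D.T @ D, 2)           # Lipschitz constant of the smooth part
u = f.copy()
for _ in range(500):
    grad = rho * D.T @ (D @ u)                 # gradient of (rho/2)||Du||^2
    # prox of (lam/L)*||. - f||_1 is soft thresholding centered at f
    u_new = f + soft(u - grad / L - f, lam / L)
    if np.linalg.norm(u_new - u) <= 1e-10 * max(np.linalg.norm(u), 1.0):
        u = u_new
        break
    u = u_new
```

Each iteration alternates a gradient step on the smooth ||Du||_2^2 term with a proximal step on the non-smooth l1 fidelity term, which is the general pattern the fixed-point-like methods of this paper also follow.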
Notice that the linear system in line 6 of Algorithm 3 is equivalent to solving the least squares problem Equation (25). Hence, the linear system in line 6 of Algorithm 3 is solved by applying the CGLS method (conjugate gradient least squares method [17]) to the problem Equation (25). The following theorem provides a convergence analysis for Algorithm 3.
Theorem 3. Let {a_n}, {b_n}, and {u_n} be the sequences generated by Algorithm 3. If we can find two consecutive vectors u^k and u^{k+1} such that u^{k+1} = u^k for some positive values of α, β, λ, and ρ, then u^{k+1} is a solution to the TVL1 problem Equation (6).

Proof. Substituting Equations (13) and (14) into Equation (15), one obtains Equation (26). From Theorem 2, it can be easily seen that, if u, a, and b satisfy Equation (26) for some positive values of α, β, λ, and ρ, then u is a solution to the TVL1 problem Equation (6). In Algorithm 3, if u^{k+1} = u^k for some positive values of α, β, λ, and ρ, then it can be easily shown that Equations (27) and (28) hold. Simplifying the linear system in line 6 of Algorithm 3 using the relations in lines 7 and 8 of Algorithm 3, one can obtain Equation (29). Substituting Equations (27) and (28) into Equation (29), one obtains Equation (30). From Equation (30), it can be seen that u^{k+1}, a^k, and b^k satisfy Equation (26) for some positive values of α, β, λ, and ρ. Hence, u^{k+1} is a solution to the TVL1 problem Equation (6).
Theorem 3 suggests how to stop Algorithm 3. In practical applications, we do not have to find u^{k+1} that is exactly equal to u^k; instead, it suffices to find u^{k+1} that is reasonably close to u^k. Hence, we use the following stopping criterion in Algorithm 3:

||u^{k+1} - u^k||_2 / ||u^k||_2 ≤ tol,

where tol is a suitably chosen small tolerance value.
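Both the CGLS inner solver and the relative-change stopping test are short routines; the sketch below is the standard unpreconditioned CGLS (not the preconditioned variant used in Section 6), and the helper names are ours:

```python
import numpy as np

def cgls(M, g, maxit=200, tol=1e-12):
    """Conjugate gradient for least squares: minimizes ||M x - g||_2."""
    x = np.zeros(M.shape[1])
    r = g - M @ x                 # residual in the data space
    s = M.T @ r                   # normal-equations residual M^T r
    p = s.copy()
    gamma = s @ s
    for _ in range(maxit):
        q = M @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = M.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

def rel_change_stop(u_new, u_old, tol):
    """Relative-change stopping test ||u_new - u_old||_2 / ||u_old||_2 <= tol."""
    denom = np.linalg.norm(u_old)
    return np.linalg.norm(u_new - u_old) <= tol * max(denom, np.finfo(float).tiny)

# Toy overdetermined system: CGLS should match the normal-equations solution.
rng = np.random.default_rng(1)
M = rng.standard_normal((10, 4))
g = rng.standard_normal(10)
x_ls = cgls(M, g)
x_exact = np.linalg.solve(M.T @ M, M.T @ g)
```

CGLS avoids forming M^T M explicitly, which is the reason it is well suited to the large structured systems arising in lines 6-7 of the proposed algorithms.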

Fixed-Point-Like Method for the TVL1 Problem Equation (7)
In this section, we propose a fixed-point-like method, using proximal operators, for solving the newly proposed TVL1 variational problem Equation (7). Equation (7) can be expressed as

min_{u ∈ R^m} λ||Au - f||_1 + ρ||Du||_2 + ϕ(Bu),   (31)

where λ and ρ are positive numbers and ϕ and B are defined as in Equation (10). Using Theorem 1, we can obtain the following property of a solution to the TVL1 problem Equation (31).

Theorem 4.
If ϕ is a real-valued convex function on R^d, B is a d × m matrix, A is an m × m matrix, and u is a solution to the TVL1 problem Equation (31), then, for any α, β, γ > 0, there exist vectors a, c ∈ R^m and b ∈ R^d satisfying Equations (32)-(35). Conversely, if there exist positive numbers α, β, γ and vectors a, c ∈ R^m, b ∈ R^d, and u ∈ R^m satisfying Equations (32)-(35), then u is a solution to the TVL1 problem Equation (31).
By splitting Equations (32)-(34) of Theorem 4 and rearranging the resulting equations, we can obtain the following corollary.

Corollary 2.
If ϕ is a real-valued convex function on R^d, B is a d × m matrix, A is an m × m matrix, and u is a solution to the TVL1 problem Equation (31), then, for any α, β, γ > 0, there exist vectors a, c ∈ R^m and b ∈ R^d satisfying Equations (37)-(43). Conversely, if there exist positive numbers α, β, γ and vectors a, c ∈ R^m, b ∈ R^d, and u ∈ R^m satisfying Equations (37)-(43), then u is a solution to the TVL1 problem Equation (31). From Equations (37)-(43) of Corollary 2, we can obtain a fixed-point-like method, called Algorithm 4, using the proximal operators for the TVL1 problem Equation (7).

Proof. It is sufficient to show that Equations (37)-(43) are equivalent to Equations (32)-(35), which follows by splitting Equations (32)-(34) and rearranging the resulting equations.
Notice that the linear system in line 7 of Algorithm 4 is equivalent to solving the least squares problem Equation (47). Hence, the linear system in line 7 of Algorithm 4 is also solved by applying the CGLS to the problem Equation (47). The following theorem provides a convergence analysis for Algorithm 4.

Algorithm 4 Fixed-point-like method for the TVL1 problem Equation (7)
1: Given degraded image f, choose positive parameters α, β, γ, λ, ρ
2: Initialization: u^0 = 0, a^0 = 0, b^0 = 0 and c^0 = 0
3: for k = 0 to maxit do
...

Theorem 5. Let {a_n}, {b_n}, {c_n}, and {u_n} be the sequences generated by Algorithm 4. If we can find two consecutive vectors u^k and u^{k+1} such that u^{k+1} = u^k for some positive values of α, β, γ, λ, and ρ, then u^{k+1} is a solution to the TVL1 problem Equation (7).

Proof. Substituting Equations (32)-(34) into Equation (35), one obtains Equation (48). Theorem 4 implies that, if u, a, b, and c satisfy Equation (48) for some positive values of α, β, γ, λ, and ρ, then u is a solution to the TVL1 problem Equation (7). In Algorithm 4, if u^{k+1} = u^k for some positive values of α, β, γ, λ, and ρ, then it can be shown that Equations (49)-(51) hold. Simplifying the linear system in line 7 of Algorithm 4 using the relations in lines 8 to 10 of Algorithm 4, one can obtain Equation (52). Substituting Equations (49)-(51) into Equation (52), one obtains Equation (53). Equation (53) implies that u^{k+1}, a^k, b^k, and c^k satisfy Equation (48) for some positive values of α, β, γ, λ, and ρ. Hence, u^{k+1} is a solution to the TVL1 problem Equation (7).
Theorem 5 also suggests how to stop Algorithm 4. Thus, Algorithm 4 uses the same stopping criterion as Algorithm 3.

Numerical Experiments
In this section, we provide numerical experiments for several test problems to evaluate the efficiency of the two fixed-point-like methods, Algorithms 3 and 4, proposed in this paper. The performance of these two algorithms is analyzed by comparing their numerical results with those of Algorithms 1 and 2.
In Section 2, it was shown that the isotropic TV of u ∈ R^m can be represented by ||u||_TV = (ϕ ∘ B)(u), where ϕ and B are defined as in Equation (10). Let D_x = D_y be the first-order backward finite difference matrix of order N defined by

D_x = D_y =
[  0   0   0  ...   0   0
  -1   1   0  ...   0   0
   0  -1   1  ...   0   0
   .   .   .  ...   .   .
   0   0   0  ...  -1   1 ].

Then, B can be expressed as the d × m matrix

B = [ I_N ⊗ D_x ; D_y ⊗ I_N ],

where ⊗ denotes the Kronecker product, I_N denotes the identity matrix of order N, m = N^2, and d = 2m. The discrete negative Laplacian operator D = -∆ can be represented by the m × m matrix I_N ⊗ D_xx + D_yy ⊗ I_N, where D_xx = D_yy is the second-order finite difference matrix of order N defined by

D_xx = D_yy =
[  1  -1   0  ...   0   0
  -1   2  -1  ...   0   0
   .   .   .  ...   .   .
   0   0   0  ...   2  -1
   0   0   0  ...  -1   1 ].

In order to illustrate the efficiency and reliability of the two fixed-point-like methods, Algorithms 3 and 4, for solving the newly proposed TVL1 problems Equations (6) and (7), we provide numerical results for five test images: Cameraman, Lena, House, Boat, and Pepper (see Figure 1). The pixel size of the five test images is 256 × 256. All numerical tests were performed using Matlab R2019a (Mathworks, Natick, MA, USA) on a personal computer with a 3.2 GHz CPU and 8 GB RAM. maxit is set to 3500 (for Algorithms 1 and 2) or 500 (for Algorithms 3 and 4), and tol is set to 1 × 10^-5 (for Algorithm 1), 1.5 × 10^-4 (for Algorithm 2), 2 × 10^-4 (for Algorithm 3), or 2 × 10^-3 (for Algorithm 4). For the CGLS method, which is used to solve a linear system at every iteration of Algorithms 3 and 4, the tolerance for the stopping criterion is set to 5 × 10^-3 (for Algorithm 3) or 1 × 10^-2 (for Algorithm 4), and the maximum number of iterations is set to 25. Note that the CGLS uses a symmetric preconditioner to accelerate its convergence; to see how to construct the symmetric preconditioner, we refer to [18,19].
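The Kronecker-product construction of B and D can be checked numerically; the sketch below uses a small N and the reflexive-boundary conventions described above (zero first row in D_x, Neumann-type end rows in D_xx), which are our reading of the construction:

```python
import numpy as np

N = 4
m = N * N

# First-order backward difference with a zero first row (reflexive boundary).
Dx = np.eye(N) - np.eye(N, k=-1)
Dx[0, :] = 0.0
Dy = Dx

# Discrete gradient B: a d x m matrix with d = 2m.
I = np.eye(N)
B = np.vstack([np.kron(I, Dx), np.kron(Dy, I)])

# Second-order difference (1D negative Laplacian with Neumann-type ends).
Dxx = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
Dxx[0, 0] = 1.0
Dxx[-1, -1] = 1.0
Dyy = Dxx

# Discrete negative Laplacian D = -Delta: an m x m singular matrix.
D = np.kron(I, Dxx) + np.kron(Dyy, I)
```

Both B and D annihilate constant images (their rows sum to zero), which confirms the singularity of D noted in the Introduction.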
To evaluate the quality of the restored images, we use the PSNR (peak signal-to-noise ratio) value between the restored image and the original image, which is defined by

PSNR = 10 log_10 ( N^2 max_{i,j} |u_ij|^2 / ||U - Ũ||_F^2 ),

where || · ||_F refers to the Frobenius norm, U and Ũ are the original and restored images of size N × N, respectively, u_ij stands for the value of the original image U at the pixel point (i, j), and N^2 is the total number of pixels. A larger PSNR value generally indicates a better-quality restored image. For all numerical experiments, we used test images with an intensity range of [0, 1]. For all test problems, the degraded test images are obtained by blurring with a Gaussian kernel of size 15 × 15 with standard deviation 9 under the reflexive boundary condition and then corrupting with salt-and-pepper impulse noise at noise levels of 20%, 30%, 60%, or 70%. In Tables 1-4, P_0 represents the PSNR value of the blurred and noisy image f, Alg denotes the algorithm used, Cam denotes the Cameraman image, PSNR represents the PSNR value of the restored image, Iter denotes the number of iterations, and CPU denotes the elapsed CPU time in seconds. All parameters α, β, γ, λ, and ρ are chosen as the best values found by numerical trials.
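For images with intensity range [0, 1], the PSNR computation is a one-liner; peak = 1 reflects that intensity range:

```python
import numpy as np

def psnr(U, U_tilde, peak=1.0):
    """PSNR = 10 log10( N^2 * peak^2 / ||U - U_tilde||_F^2 ) for N x N images."""
    err = np.linalg.norm(U - U_tilde, "fro") ** 2
    return 10.0 * np.log10(U.size * peak ** 2 / err)

U = np.zeros((16, 16))
U_noisy = U + 0.01        # uniform error of 0.01 per pixel
# MSE = 1e-4, so PSNR = 10 * log10(1 / 1e-4) = 40 dB
```

As the example shows, a uniform per-pixel error of 0.01 corresponds to 40 dB, which gives a feel for the PSNR scale reported in Tables 1-4.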
Tables 1-4 contain the numerical results of Algorithms 1-4 for the degraded test images with 20%, 30%, 60%, or 70% salt-and-pepper impulse noise. In Table 1, we do not provide numerical results for Algorithms 1 and 2 since they do not converge within maxit = 3500 iterations. Figure 1 contains the true images for Cameraman, Lena, House, Boat, and Pepper. In Figures 2-6, the first row contains the images restored by Algorithms 1-4 for blurred images with 60% salt-and-pepper noise, and the second row contains the images restored by Algorithms 1-4 for blurred images with 30% salt-and-pepper noise. Figure 6 shows the image restoration for the blurred Pepper image with 60% or 30% salt-and-pepper noise. From Tables 1-4, Algorithm 4 for the TVL1 problem Equation (7) restores images better in PSNR than Algorithm 3 for the TVL1 problem Equation (6), while Algorithm 4 takes much more CPU time than Algorithm 3 in most cases. In addition, Algorithms 3 and 4 for the newly proposed TVL1 problems Equations (6) and (7) perform better than Algorithms 1 and 2 for the TVL1 problems Equations (4) and (5) in both PSNR values and CPU time. Based on the numerical results, Algorithm 4 for solving the new TVL1 problem Equation (7) is preferred over Algorithm 3 for solving the new TVL1 problem Equation (6), though it may take more CPU time than Algorithm 3 in most cases.

Conclusions
In this paper, we proposed two new TVL1 variational problems, Equations (6) and (7), for restoring images degraded by blurring and impulse noise, and then we proposed two fixed-point-like methods, called Algorithms 3 and 4, for solving the new TVL1 problems Equations (6) and (7). Numerical experiments showed that Algorithms 3 and 4 for solving the newly proposed TVL1 problems Equations (6) and (7) perform better than Algorithms 1 and 2 for solving the existing TVL1 problems Equations (4) and (5) in both PSNR values and CPU time, and that Algorithm 4 restores the true image much better than Algorithm 3 at the expense of some increase in CPU time. Hence, it can be concluded that Algorithms 3 and 4 are preferred over Algorithms 1 and 2, and Algorithm 4 for the new TVL1 problem Equation (7) is strongly recommended even though it may take more CPU time than Algorithm 3 for the new TVL1 problem Equation (6).
The fixed-point-like methods and the TVL1 problems Equations (6) and (7) proposed in this paper can be applied to image inpainting problems or image denoising problems with impulse noise; future work will study these kinds of problems. Notice that finding optimal parameters for fixed-point-like methods is a very challenging problem. Future work will also study how to choose optimal or near-optimal parameters for the fixed-point-like methods proposed in this paper.