Article

A Novel Image-Restoration Method Based on High-Order Total Variation Regularization Term

College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
* Author to whom correspondence should be addressed.
Electronics 2019, 8(8), 867; https://doi.org/10.3390/electronics8080867
Submission received: 25 June 2019 / Revised: 30 July 2019 / Accepted: 1 August 2019 / Published: 5 August 2019
(This article belongs to the Special Issue Signal Processing and Analysis of Electrical Circuit)

Abstract

This paper presents two new models for solving the image deblurring problem in the presence of impulse noise. The first incorporates a high-order total variation (TV) regularizer term into the corrected total variation L1 (CTVL1) model and is named high-order corrected TVL1 (HOCTVL1). This new model not only suppresses the staircase effect, but also improves the quality of image restoration. In most cases, the regularization parameter in such models is a fixed value, which may influence the processing results. To address this, a spatially adapted regularization parameter selection scheme is incorporated into the HOCTVL1 model, and the spatially adapted HOCTVL1 (SAHOCTVL1) model is proposed. When dealing with corrupted images, the regularization parameter in the SAHOCTVL1 model is updated automatically. Many numerical experiments are conducted in this paper, and the results show that the two models significantly improve both visual quality and signal-to-noise ratio (SNR) at the expense of a small increase in computational time. Compared to the HOCTVL1 model, the SAHOCTVL1 model can restore more texture details, though it may take more time.

1. Introduction

In the field of electronics and information, signal processing is a hot research topic, and, as a special class of signal, images have attracted the attention of scholars all over the world [1,2,3]. In image processing, image restoration is one of the most important issues and has received extensive attention in the past few decades [4,5,6,7,8,9,10,11]. Image restoration is a technology that uses degraded images and some prior information to restore and reconstruct clear images, thereby improving image quality. At present, this technology has been widely used in many fields, such as medical imaging [12,13], astronomical imaging [14,15], remote sensing imaging [16,17], and so on. In this paper, the problem of image deblurring under impulse noise is considered. Typically, camera shake and relative motion between the target and the imaging device cause image blurring, while impulse noise may be introduced during digital storage and image transmission.
In the process of image deblurring under impulse noise, the main task is to find the unknown true image $x \in \mathbb{R}^{n^2}$ from the observed image $f \in \mathbb{R}^{n^2}$ defined by

$$f = N_{\mathrm{imp}}(Kx) \qquad (1)$$

where $N_{\mathrm{imp}}$ denotes the process of image degradation by impulse noise, and $K \in \mathbb{R}^{n^2 \times n^2}$ is a blurring operator. When $K$ is unknown, the model deals with blind restoration, and when $K$ is the identity operator, it reduces to image denoising.
There are two main types of impulse noise: salt-and-pepper noise and random-valued noise. Suppose the dynamic range of $x$ is $[d_{\min}, d_{\max}]$; for all $(i,j) \in \Omega = \{1,2,\dots,n\} \times \{1,2,\dots,n\}$, $x_{i,j}$ is the gray value of the image $x$ at location $(i,j)$, and $d_{\min} \le f_{i,j} \le d_{\max}$. For 8-bit images, $d_{\min} = 0$ and $d_{\max} = 255$. Then, for salt-and-pepper noise, the noisy version $f$ at pixel location $(i,j)$ is defined as

$$f_{i,j} = \begin{cases} d_{\min}, & \text{with probability } s/2, \\ d_{\max}, & \text{with probability } s/2, \\ x_{i,j}, & \text{with probability } 1-s, \end{cases} \qquad (2)$$
where s is the noise level of the salt-and-pepper noise.
For random-valued noise, the noisy version $f$ at pixel location $(i,j)$ is defined as

$$f_{i,j} = \begin{cases} d_{i,j}, & \text{with probability } r, \\ x_{i,j}, & \text{with probability } 1-r, \end{cases} \qquad (3)$$

where $d_{i,j}$ is uniformly distributed in $[d_{\min}, d_{\max}]$ and $r$ is the noise level of the random-valued noise. Clearly, compared with salt-and-pepper noise, random-valued noise is more difficult to remove, since the corrupted value can be an arbitrary number in $[d_{\min}, d_{\max}]$.
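As a concrete illustration of the two degradation models in Equations (2) and (3), the following minimal numpy sketch (our illustration, not the authors' code) corrupts an already-blurred image Kx with salt-and-pepper or random-valued noise; it assumes a grayscale image scaled to [0, 1], and the function and parameter names are ours.

```python
import numpy as np

def add_salt_and_pepper(kx, s, d_min=0.0, d_max=1.0, rng=None):
    """Salt-and-pepper noise, Eq. (2): a fraction s of pixels is set to d_min or d_max."""
    rng = np.random.default_rng() if rng is None else rng
    f = kx.copy()
    u = rng.random(kx.shape)
    f[u < s / 2] = d_min                       # pepper, probability s/2
    f[(u >= s / 2) & (u < s)] = d_max          # salt, probability s/2
    return f                                   # the remaining pixels stay intact

def add_random_valued(kx, r, d_min=0.0, d_max=1.0, rng=None):
    """Random-valued noise, Eq. (3): a fraction r of pixels is drawn uniformly from [d_min, d_max]."""
    rng = np.random.default_rng() if rng is None else rng
    f = kx.copy()
    mask = rng.random(kx.shape) < r
    f[mask] = rng.uniform(d_min, d_max, size=int(mask.sum()))
    return f
```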
For image-restoration problems contaminated by impulse noise, the widely used model combines a data fidelity term measured by the ℓ1 norm with a TV regularization term and is called the TVL1 model [18,19,20]. The TVL1 model can effectively preserve image boundary information and eliminate the influence of outliers, so it is especially effective for non-Gaussian additive noise such as impulse noise. It has now been widely and successfully applied in medical imaging and computer vision.
However, the TVL1 model has its own shortcomings, which make it ineffective at high noise levels, such as 90% salt-and-pepper noise and 70% random-valued noise [21]. In recent years, a large number of scholars have devoted themselves to this problem, and many algorithms have been proposed [22,23,24,25,26,27]. In 2009, Cai et al. [22] proposed a two-phase method: in the first phase, damaged pixels of the contaminated image are detected, and in the second phase, the undamaged pixels are used to restore the image. Numerical experiments show that the two-phase method is superior to the TVL1 model; it can handle salt-and-pepper noise as high as 90% and random-valued noise as high as 55%, but it does not perform effectively when the level of random-valued noise exceeds 55%. Similarly, considering that the solutions of the TVL1 model may deviate from the data-acquisition model and the prior model, especially at high noise levels, Bai et al. [23] introduced an adaptive correction procedure into the TVL1 model and proposed a new model called the corrected TVL1 (CTVL1) model. The main idea is to improve the sparsity of the TVL1 model by introducing an adaptive correction procedure. The CTVL1 method also uses two steps to restore the corrupted image: the first step generates an initial estimator by solving the TVL1 model, and the second step produces a higher-accuracy recovery from this initial estimator. Moreover, for higher levels of salt-and-pepper and random-valued noise, repeating the correction step several times removes the noise very well. Numerical experiments show that the CTVL1 model can remove salt-and-pepper noise as high as 90% and random-valued noise as high as 70%, which is superior to the two-phase method.
Similar to the CTVL1 method, Gu et al. [24] combined TV regularization with the smoothly clipped absolute deviation (SCAD) penalty for data fitting and proposed the TVSCAD model; Gong et al. [25] used the minimax concave penalty (MCP) in combination with TV regularization for data fitting and proposed the TV-MCP model. Numerical experiments show that both the TVSCAD and TV-MCP models achieve better results than the two-phase method. However, compared with the CTVL1 method, their contributions mainly focus on the convergence rate, and they do not improve much in terms of impulse noise removal.
However, as described in [28], the TV norm may transform smooth areas into piecewise constants, the so-called staircase effect. An efficient way to overcome this deficiency is to replace the TV norm by a high-order TV norm [29]. In particular, second-order TV regularization schemes are widely studied for overcoming the staircase effect while preserving edges well in the restored image. In [30], Si Wang proposed a combined total variation and high-order total variation model to restore blurred images corrupted by impulse noise or mixed Gaussian plus impulse noise. In [27], based on the effective TV-based Poissonian image deblurring model of Chen and Cheng, Jun Liu introduced an extra high-order total variation (HTV) regularization term, which can effectively remove Poisson noise with better results than Chen and Cheng's model. In [31], Gang Liu combined the TV regularizer and a high-order TV regularizer term and proposed the HTVL1 model, which removes impulse noise better than the TVL1 model. However, since the TVL1 model has its own defects, the restoration quality of the HTVL1 model is limited. Besides, the author did not consider the removal of random-valued noise.
In this paper, we continue to study the problem of image deblurring under impulse noise. The main contributions of this paper are as follows: (1) Combining a high-order TV regularizer term with the CTVL1 model, a new model named the high-order corrected TVL1 (HOCTVL1) model is proposed, and the alternating direction method of multipliers (ADMM) is used to solve it. Compared with existing models, our model achieves a higher signal-to-noise ratio (SNR) in image deblurring under impulse noise. (2) A spatially adapted regularization parameter is introduced into the HOCTVL1 model and the SAHOCTVL1 model is proposed. Compared to the HOCTVL1 model, the SAHOCTVL1 model can further improve the restoration quality to some degree.
The rest of this paper is organized as follows. Section 2 gives a brief review of related work. Section 3 presents the HOCTVL1 model and summarizes the HOCTVL1 algorithm. Section 4 introduces the spatially adapted regularization parameter selection scheme and proposes the SAHOCTVL1 model. Numerical experiments are carried out in Section 5, and finally the conclusion is presented in Section 6.

2. Brief Review of Related Work

For recovering an image corrupted by blur and impulse noise, the classic method is the TVL1 model. Since a large body of literature [19,20,31,32,33] demonstrates that an ℓ1 fidelity term achieves good results for image restoration under impulse noise, the TVL1 model is expressed as

$$\min_{x}\ TV(x) + \lambda \|Kx - f\|_1 \qquad (4)$$

where $f$ is the observed image, $x$ denotes the restored image, $K$ is a blur matrix, $\lambda > 0$ is a regularization parameter, and $TV(x)$ represents the discrete TV norm, defined as $TV(x) = \sum_{1 \le i,j \le n} \|(Dx)_{i,j}\|$. Here, $D$ denotes the discrete gradient operator (under periodic boundary conditions). The norm in $\|(Dx)_{i,j}\|$ can be taken as the ℓ1 norm or the ℓ2 norm: when the ℓ2 norm is used, the resulting TV term is isotropic, and when the ℓ1 norm is used, it is anisotropic. For more details about the TV norm, readers can refer to [18].
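For illustration, the discrete TV norm of Equation (4) can be sketched in numpy as follows, assuming forward differences under the periodic boundary conditions mentioned above; the helper names are ours, not the authors'.

```python
import numpy as np

def gradient(x):
    """Forward differences with periodic boundaries: the two components of (Dx)_{i,j}."""
    dx = np.roll(x, -1, axis=1) - x   # horizontal difference
    dy = np.roll(x, -1, axis=0) - x   # vertical difference
    return dx, dy

def tv_norm(x, isotropic=True):
    """Discrete TV norm of Eq. (4): sum of the per-pixel l2 (isotropic) or l1 (anisotropic) norms."""
    dx, dy = gradient(x)
    if isotropic:
        return np.sum(np.sqrt(dx ** 2 + dy ** 2))
    return np.sum(np.abs(dx) + np.abs(dy))
```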
Since one unique characteristic of impulse noise is that an image corrupted by impulse noise still contains intact pixels, the impulse noise can be modeled as a sparse component, whereas the underlying intact image retains the original image characteristics [34]. Therefore, the TVL1 model can efficiently remove abnormal noise values, and some points of the TVL1 solution are close to those of the original image. However, Nikolova [21] pointed out, from the viewpoint of maximum a posteriori (MAP) estimation, that the solutions of the TVL1 model substantially deviate from both the data-acquisition model and the prior model, and Bai [23] further pointed out that the TVL1 model does not perform well with respect to the sparsity of $Kx - f$ and produces many biased estimates. To overcome this shortcoming, Bai et al. took a correction step to generate an estimator with better recovery performance.
Given a reasonable initial estimator $\tilde{x} \in \mathbb{R}^{n^2}$ generated by the TVL1 model, let $\tilde{z} = K\tilde{x} - f$; then Bai et al. established the CTVL1 model, defined as

$$\min_{x,z}\ \sum_{1 \le i,j \le n} \|(Dx)_{i,j}\| + \lambda\big(\|z\|_1 - \langle F(\tilde{z}), z\rangle\big) \quad \text{s.t.}\ z = Kx - f \qquad (5)$$

Compared with the TVL1 model, the CTVL1 model adds a correction term $\langle F(\tilde{z}), z\rangle$, where $F: \mathbb{R}^{n^2} \to \mathbb{R}^{n^2}$ is the operator that applies a scalar function component-wise,

$$F(z)_{i} = \begin{cases} \phi(z_{i}), & z_{i} \neq 0, \\ 0, & z_{i} = 0, \end{cases} \qquad i = 1, \dots, n^2, \qquad (6)$$

and the scalar function $\phi: \mathbb{R} \to \mathbb{R}$ takes the form

$$\phi(t) = \operatorname{sgn}(t)\,\frac{(1 + \varepsilon^{\tau})\,|t|^{\tau}}{|t|^{\tau} + \varepsilon^{\tau}}, \qquad t \in \mathbb{R}, \qquad (7)$$

where $\tau > 0$ and $\varepsilon > 0$. Numerical results show that the CTVL1 model greatly improves the sparsity of the data fidelity term $Kx - f$ for image deblurring under impulse noise, thus achieving a good denoising effect.
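The following minimal sketch evaluates the scalar function ϕ(t) of Equation (7) and applies it component-wise as in the reconstruction of Equation (6); the default parameters follow the values quoted in the caption of Figure 2 (ε² = 0.001, τ = 2), and the component-wise form of F is our assumption.

```python
import numpy as np

def phi(t, tau=2.0, eps=np.sqrt(0.001)):
    """Scalar correction function of Eq. (7)."""
    t = np.asarray(t, dtype=float)
    return np.sign(t) * (1 + eps ** tau) * np.abs(t) ** tau / (np.abs(t) ** tau + eps ** tau)

def F(z, tau=2.0, eps=np.sqrt(0.001)):
    """Component-wise correction vector F(z) entering the term <F(z~), z> in Eq. (5)."""
    return phi(z, tau, eps)
```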
Since the TV regularization term may produce staircase effects, in the past few years many researchers have devoted themselves to this problem, and they concluded that replacing the TV norm by a high-order TV norm gives better results. The majority of high-order norms involve second-order differential operators, because piecewise-vanishing second-order derivatives lead to piecewise-linear solutions that better fit smooth intensity changes [35]. The second-order TV norm is built from

$$(D^2 x)_{i,j} = \Big( (D^2 x)_{i,j}^{xx},\ (D^2 x)_{i,j}^{xy},\ (D^2 x)_{i,j}^{yx},\ (D^2 x)_{i,j}^{yy} \Big) \qquad (8)$$

where $(D^2 x)_{i,j}^{xx}$, $(D^2 x)_{i,j}^{xy}$, $(D^2 x)_{i,j}^{yx}$, $(D^2 x)_{i,j}^{yy}$ denote the second-order differences of the $\big((j-1)n + i\big)$th entry of the vector $x$. Here we only briefly mention the concept of the second-order TV norm; for more details, readers can refer to [36].
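The sketch below evaluates the four second-order differences of Equation (8) and the corresponding second-order TV term; the particular forward/backward discretization is an assumption on our part, since the paper does not fix it explicitly.

```python
import numpy as np

def second_order_diffs(x):
    """The four components of (D^2 x)_{i,j} in Eq. (8), built from periodic first-order differences."""
    fwd = lambda a, ax: np.roll(a, -1, axis=ax) - a   # forward difference
    bwd = lambda a, ax: a - np.roll(a, 1, axis=ax)    # backward difference
    dxx = bwd(fwd(x, 1), 1)        # xx component
    dyy = bwd(fwd(x, 0), 0)        # yy component
    dxy = fwd(fwd(x, 1), 0)        # xy component
    dyx = fwd(fwd(x, 0), 1)        # yx component (equals dxy on a periodic grid)
    return dxx, dxy, dyx, dyy

def second_order_tv(x):
    """Second-order TV term used in Eq. (9): sum of the pointwise l2 norms of (D^2 x)_{i,j}."""
    dxx, dxy, dyx, dyy = second_order_diffs(x)
    return np.sum(np.sqrt(dxx ** 2 + dxy ** 2 + dyx ** 2 + dyy ** 2))
```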
Figure 1 shows the diagram of image restoration. In this paper, two models named HOCTVL1 model and SAHOCTVL1 model are proposed and are used to recover the corrupted images.

3. New Method: The HOCTVL1 Algorithm

In this section, the HOCTVL1 model is proposed, and the selection of F(z) is discussed by comparing the scalar functions of the CTVL1, TVSCAD, and TV-MCP models. Then ADMM is used to solve the proposed model, and the HOCTVL1 algorithm is summarized.

3.1. Proposed New Model

The TV regularization norm $\|(Dx)_{i,j}\|$ can be taken as the ℓ2 norm or the ℓ1 norm, giving the isotropic or anisotropic case, respectively. In [23,24,25,31], the authors only consider the isotropic case, so in this paper we also treat only the isotropic case in detail; the anisotropic case can be handled similarly. Based on this premise, the proposed high-order corrected TVL1 (HOCTVL1) model is expressed as

$$\min_{x,z}\ \sum_{1 \le i,j \le n} \Big[ \alpha_{i,j}\,\|(Dx)_{i,j}\|_2 + (1 - \alpha_{i,j})\,\|(D^2 x)_{i,j}\|_2 \Big] + \lambda\big(\|z\|_1 - \langle F(\tilde{z}), z\rangle\big) \quad \text{s.t.}\ z = Kx - f \qquad (9)$$

where $f$ is the corrupted image, $x$ denotes the restored image, $K$ is a blur matrix, $\lambda$ is a regularization parameter, $\|(Dx)_{i,j}\|_2$ denotes the first-order TV norm, $\|(D^2 x)_{i,j}\|_2$ denotes the second-order TV norm, $\langle F(\tilde{z}), z\rangle$ is the correction term, and $F(z)$ is the operator built from the scalar function in Equation (7).

$\alpha_{i,j}$ is a weighting parameter that balances the TV and second-order TV penalties, and there are several ways to choose it. Here, the α in [31] is chosen, since it achieves better results in our experiments than the α in [37]. It is expressed as

$$\alpha_{i,j} = \begin{cases} 1, & \text{if } \|(Dx^{k+1})_{i,j}\|_2 \ge c, \\[2pt] \dfrac{1}{2}\cos\!\Big(\dfrac{2\pi \|(Dx^{k+1})_{i,j}\|_2}{c}\Big) + \dfrac{1}{2}, & \text{otherwise}, \end{cases} \qquad (10)$$

where $c$ is a constant with $0 \le c < 1$.
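A small sketch of the weight in Equation (10) follows, under the reconstruction given above (α = 1 on strong gradients and a cosine weight on weak ones); the default c = 0.1 is taken from the parameter settings quoted in Section 5.1, and the exact branch condition should be checked against [31].

```python
import numpy as np

def alpha_weight(x, c=0.1):
    """Edge-indicator weight alpha_{i,j} of Eq. (10)."""
    dx = np.roll(x, -1, axis=1) - x
    dy = np.roll(x, -1, axis=0) - x
    g = np.sqrt(dx ** 2 + dy ** 2)                 # ||(Dx)_{i,j}||_2
    a = 0.5 * np.cos(2.0 * np.pi * g / c) + 0.5    # smooth-region branch
    a[g >= c] = 1.0                                # edge branch: keep first-order TV
    return a
```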
Regarding the selection of F(z): in [24], Gu et al. used the SCAD penalty function for data fitting, and in [25], Gong et al. used the MCP penalty function. The SCAD function $\xi(t)$ and the MCP function $\varsigma(t)$ are described as

$$\xi(t) = \begin{cases} |t|, & \text{if } |t| \le \gamma_1, \\[2pt] \dfrac{-t^2 + 2\gamma_2 |t| - \gamma_1^2}{2(\gamma_2 - \gamma_1)}, & \text{if } \gamma_1 < |t| < \gamma_2, \\[2pt] \dfrac{\gamma_1 + \gamma_2}{2}, & \text{if } |t| \ge \gamma_2, \end{cases} \qquad (11)$$

$$\varsigma(t) = \begin{cases} \theta_1 |t| - \dfrac{t^2}{2\theta_2}, & \text{if } |t| \le \theta_1 \theta_2, \\[2pt] \dfrac{\theta_1^2 \theta_2}{2}, & \text{if } |t| > \theta_1 \theta_2, \end{cases} \qquad (12)$$

where $\gamma_1, \gamma_2, \theta_1, \theta_2$ are all greater than 0 and $0 \le |t| \le 1$.
It is easy to see that $\xi(t)$ and $\varsigma(t)$ are nonconvex and difficult to minimize directly. To address this, Gu et al. adopted a difference-of-convex-functions algorithm (DCA) to solve the nonconvex TVSCAD model, and Gong et al. likewise adopted the DCA to solve the nonconvex TV-MCP model. The resulting processed functions $\varphi(t)$ in [24] and $\psi(t)$ in [25] are defined respectively as

$$\varphi(t) = \begin{cases} 0, & \text{if } |t| \le \gamma_1, \\[2pt] \dfrac{t - \gamma_1 \operatorname{sgn}(t)}{\gamma_2 - \gamma_1}, & \text{if } \gamma_1 < |t| \le \gamma_2, \\[2pt] \operatorname{sgn}(t), & \text{if } |t| > \gamma_2, \end{cases} \qquad (13)$$

and

$$\psi(t) = \begin{cases} \dfrac{t}{\theta_2}, & \text{if } |t| \le \theta_1 \theta_2, \\[2pt] \theta_1 \operatorname{sgn}(t), & \text{if } |t| > \theta_1 \theta_2. \end{cases} \qquad (14)$$
Figure 2 shows the behavior of ϕ(t), φ(t), and ψ(t). In the TVL1 model, the regularization term enforces certain regularity conditions or prior constraints on the image, and the data fitting term penalizes the deviation of the observed data from the physical model. According to the analysis in Section 2 of [24], all three functions enforce less or even no data fitting, and hence more regularization, whenever Kx deviates significantly from f. However, in our experimental simulations the results are almost the same no matter which function is selected. Therefore, in this paper the scalar function is still chosen as ϕ(t).
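For completeness, the two processed functions of Equations (13) and (14) can be sketched as follows, with the parameter values of the Figure 2 caption; the sgn(t) factor in the outer branch of ψ is our reading of Equation (14).

```python
import numpy as np

def varphi(t, g1=0.08, g2=0.2):
    """DCA-processed TVSCAD function of Eq. (13); defaults follow the caption of Figure 2."""
    t = np.asarray(t, dtype=float)
    mid = (t - g1 * np.sign(t)) / (g2 - g1)
    return np.where(np.abs(t) <= g1, 0.0, np.where(np.abs(t) <= g2, mid, np.sign(t)))

def psi(t, th1=1.0, th2=0.15):
    """DCA-processed TV-MCP function of Eq. (14); defaults follow the caption of Figure 2."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= th1 * th2, t / th2, th1 * np.sign(t))
```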

3.2. The HOCTVL1 Algorithm

In this subsection, the solution procedure of the HOCTVL1 model by ADMM is presented, and the HOCTVL1 algorithm is summarized. For details of ADMM, readers can refer to [38]. Firstly, let $y_{i,j} = (Dx)_{i,j}$ and $w_{i,j} = (D^2 x)_{i,j}$, $i,j = 1,2,\dots,n$. Then Equation (9) can be rewritten as

$$\min_{x,y,w,z}\ \sum_{1 \le i,j \le n} \Big[ \alpha_{i,j}\,\|y_{i,j}\|_2 + (1 - \alpha_{i,j})\,\|w_{i,j}\|_2 \Big] + \lambda\big(\|z\|_1 - \langle F(\tilde{z}), z\rangle\big) \quad \text{s.t.}\ y_{i,j} = (Dx)_{i,j},\ w_{i,j} = (D^2 x)_{i,j},\ z = Kx - f,\ \ i,j = 1,2,\dots,n \qquad (15)$$

where, for each $i,j$, $y_{i,j} = \big((y_1)_{i,j}, (y_2)_{i,j}\big) \in \mathbb{R}^2$, $w_{i,j} = \big((w_{11})_{i,j}, (w_{12})_{i,j}, (w_{21})_{i,j}, (w_{22})_{i,j}\big) \in \mathbb{R}^4$, $\|y_{i,j}\|_2 = \sqrt{((y_1)_{i,j})^2 + ((y_2)_{i,j})^2}$, and $\|w_{i,j}\|_2 = \sqrt{((w_{11})_{i,j})^2 + ((w_{12})_{i,j})^2 + ((w_{21})_{i,j})^2 + ((w_{22})_{i,j})^2}$.
Thus, the augmented Lagrangian function of Equation (15) is

$$\begin{aligned} L(x, y, z, w, \mu_1, \mu_2, \mu_3) ={}& \sum_{i,j} \alpha_{i,j}\,\|y_{i,j}\|_2 - \mu_1^T (y - Dx) + \frac{\beta_1}{2} \sum_{i,j} \|y_{i,j} - (Dx)_{i,j}\|_2^2 \\ &+ \sum_{i,j} (1 - \alpha_{i,j})\,\|w_{i,j}\|_2 - \mu_2^T (w - D^2 x) + \frac{\beta_2}{2} \sum_{i,j} \|w_{i,j} - (D^2 x)_{i,j}\|_2^2 \\ &+ \lambda\big(\|z\|_1 - \langle F(\tilde{z}), z\rangle\big) - \mu_3^T \big(z - (Kx - f)\big) + \frac{\beta_3}{2} \|z - (Kx - f)\|_2^2 \end{aligned} \qquad (16)$$
where $\mu_1 \in \mathbb{R}^{2n^2}$, $\mu_2 \in \mathbb{R}^{4n^2}$, and $\mu_3 \in \mathbb{R}^{n^2}$ are the Lagrangian multipliers, and $\beta_1, \beta_2, \beta_3 > 0$ are the penalty parameters. ADMM then solves the model Equation (9) by updating $x$, $y$, $w$, $z$, and the multipliers as follows:

$$\begin{aligned} y^{k+1} &= \arg\min_{y} L(x^k, y, z^k, w^k, \mu_1^k, \mu_2^k, \mu_3^k) \\ w^{k+1} &= \arg\min_{w} L(x^k, y^{k+1}, z^k, w, \mu_1^k, \mu_2^k, \mu_3^k) \\ z^{k+1} &= \arg\min_{z} L(x^k, y^{k+1}, z, w^{k+1}, \mu_1^k, \mu_2^k, \mu_3^k) \\ x^{k+1} &= \arg\min_{x} L(x, y^{k+1}, z^{k+1}, w^{k+1}, \mu_1^k, \mu_2^k, \mu_3^k) \\ \mu_1^{k+1} &= \mu_1^k - \zeta \beta_1 \big(y^{k+1} - D x^{k+1}\big) \\ \mu_2^{k+1} &= \mu_2^k - \zeta \beta_2 \big(w^{k+1} - D^2 x^{k+1}\big) \\ \mu_3^{k+1} &= \mu_3^k - \zeta \beta_3 \big(z^{k+1} - (K x^{k+1} - f)\big) \end{aligned} \qquad (17)$$
where $\zeta > 0$ is the step length, which may vary in $\big(0, (\sqrt{5}+1)/2\big)$ [23]. For the $y$, $w$, and $z$ sub-problems, the minimizers are obtained by soft thresholding, and the $x$ sub-problem can be solved by the fast Fourier transform (FFT) under periodic boundary conditions. For the $y$ sub-problem, it can be obtained that

$$y_{i,j}^{k+1} = \max\left\{ \Big\| (Dx^k)_{i,j} + \frac{(\mu_1^k)_{i,j}}{\beta_1} \Big\|_2 - \frac{\alpha_{i,j}}{\beta_1},\ 0 \right\} \cdot \frac{(Dx^k)_{i,j} + (\mu_1^k)_{i,j}/\beta_1}{\big\| (Dx^k)_{i,j} + (\mu_1^k)_{i,j}/\beta_1 \big\|_2} \qquad (18)$$
where we adopt the convention $0 \cdot (0/0) = 0$, $i,j = 1,2,\dots,n$. For the $w$ sub-problem, there is

$$w_{i,j}^{k+1} = \max\left\{ \Big\| (D^2 x^k)_{i,j} + \frac{(\mu_2^k)_{i,j}}{\beta_2} \Big\|_2 - \frac{1 - \alpha_{i,j}}{\beta_2},\ 0 \right\} \cdot \frac{(D^2 x^k)_{i,j} + (\mu_2^k)_{i,j}/\beta_2}{\big\| (D^2 x^k)_{i,j} + (\mu_2^k)_{i,j}/\beta_2 \big\|_2} \qquad (19)$$
For $z$, there is

$$z^{k+1} = \max\left\{ \Big| K x^k - f + \frac{\mu_3^k + \lambda F(\tilde{z})}{\beta_3} \Big| - \frac{\lambda}{\beta_3},\ 0 \right\} \circ \operatorname{sgn}\!\left( K x^k - f + \frac{\mu_3^k + \lambda F(\tilde{z})}{\beta_3} \right) \qquad (20)$$
where $\circ$ denotes the pointwise product. The $x$ sub-problem can be solved by FFT, and the result is

$$x^{k+1} = \Big( \beta_1 D^T D + \beta_2 (D^2)^T D^2 + \beta_3 K^T K \Big)^{-1} \Big( D^T(\beta_1 y^{k+1} - \mu_1^k) + (D^2)^T(\beta_2 w^{k+1} - \mu_2^k) + K^T(\beta_3 z^{k+1} - \mu_3^k) + \beta_3 K^T f \Big) \qquad (21)$$

where the inverse is computed efficiently in the Fourier domain, since all the operators are block circulant under periodic boundary conditions.
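To make the update rules concrete, the following self-contained numpy sketch performs one round of the sub-problem solutions (18)–(21) and the multiplier updates (17) under periodic boundary conditions. It is an illustrative reconstruction, not the authors' MATLAB implementation: the operator handling (a simplified psf2otf, forward-difference kernels, and forward–forward second-order differences) and all function and argument names are our assumptions.

```python
import numpy as np

def otf_from_kernel(kern, shape):
    """FFT of a small kernel embedded at the origin of an image-sized array
    (a simplified psf2otf; assumes periodic boundary conditions)."""
    pad = np.zeros(shape)
    pad[:kern.shape[0], :kern.shape[1]] = kern
    return np.fft.fft2(pad)

def conv(otf, a):
    """Circular convolution of image a with the operator given by its transfer function."""
    return np.real(np.fft.ifft2(otf * np.fft.fft2(a)))

def shrink_rows(stack, thresh):
    """Per-pixel vector soft-shrinkage used in Eqs. (18)/(19), with the convention 0*(0/0)=0."""
    norm = np.sqrt(np.sum(stack ** 2, axis=0))
    scale = np.maximum(norm - thresh, 0.0) / np.maximum(norm, 1e-12)
    return scale[None, ...] * stack

def hoctvl1_iteration(x, f, otf_k, alpha, lam, F_ztilde, mu1, mu2, mu3,
                      beta=(5.0, 10.0, 25000.0), zeta=1.618):
    """One round of the sub-problem solutions (18)-(21) and the multiplier updates (17);
    otf_k is the transfer function of the blur kernel K."""
    b1, b2, b3 = beta
    odh = otf_from_kernel(np.array([[-1.0, 1.0]]), x.shape)    # first-order, horizontal
    odv = otf_from_kernel(np.array([[-1.0], [1.0]]), x.shape)  # first-order, vertical
    o2 = [odh * odh, odh * odv, odv * odh, odv * odv]           # second-order: xx, xy, yx, yy
    # y- and w-subproblems: shrink D x + mu1/b1 and D^2 x + mu2/b2
    dx = np.stack([conv(odh, x), conv(odv, x)])
    y = shrink_rows(dx + mu1 / b1, alpha / b1)
    d2x = np.stack([conv(o, x) for o in o2])
    w = shrink_rows(d2x + mu2 / b2, (1.0 - alpha) / b2)
    # z-subproblem: scalar shrinkage with the correction term, Eq. (20)
    v = conv(otf_k, x) - f + (mu3 + lam * F_ztilde) / b3
    z = np.maximum(np.abs(v) - lam / b3, 0.0) * np.sign(v)
    # x-subproblem: closed-form solve in the Fourier domain, Eq. (21)
    num = (np.conj(odh) * np.fft.fft2(b1 * y[0] - mu1[0])
           + np.conj(odv) * np.fft.fft2(b1 * y[1] - mu1[1])
           + sum(np.conj(o) * np.fft.fft2(b2 * w[i] - mu2[i]) for i, o in enumerate(o2))
           + np.conj(otf_k) * np.fft.fft2(b3 * (z + f) - mu3))
    den = (b1 * (np.abs(odh) ** 2 + np.abs(odv) ** 2)
           + b2 * sum(np.abs(o) ** 2 for o in o2)
           + b3 * np.abs(otf_k) ** 2)
    x_new = np.real(np.fft.ifft2(num / den))
    # multiplier updates, Eq. (17)
    mu1 = mu1 - zeta * b1 * (y - np.stack([conv(odh, x_new), conv(odv, x_new)]))
    mu2 = mu2 - zeta * b2 * (w - np.stack([conv(o, x_new) for o in o2]))
    mu3 = mu3 - zeta * b3 * (z - (conv(otf_k, x_new) - f))
    return x_new, y, w, z, mu1, mu2, mu3
```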
Now the complete HOCTVL1 procedure can be summarized as Algorithm 1.
Algorithm 1: The HOCTVL1 algorithm
  • Input: $f$, $K$, $\lambda$, $\beta_1$, $\beta_2$, $\beta_3$, $\zeta$, $c$, $\delta_{tol}$, Maxiter.
  • Initialization: $x^0 = f$, $\mu_1^0 = 0$, $\mu_2^0 = 0$, $\mu_3^0 = 0$, $k = 0$, $\alpha = 1$.
  • Step 1. Compute $y_{i,j}^{k+1}$, $w_{i,j}^{k+1}$, $z^{k+1}$ via Equations (18)–(20), respectively.
  • Step 2. Compute $x^{k+1}$ by solving Equation (21).
  • Step 3. Update $\mu_1^{k+1}$, $\mu_2^{k+1}$, $\mu_3^{k+1}$ via Equation (17).
  • Step 4. Update $\alpha_{i,j}$ via Equation (10).
  • Step 5. If $k <$ Maxiter and $\|x^{k+1} - x^k\|_2 / \|x^k\|_2 > \delta_{tol}$, set $k = k + 1$ and go to Step 1.
  • Output: $x^{k+1}$.

4. SAHOCTVL1 Model

It is known that the regularization parameter λ controls the trade-off between the fidelity and the smoothness of the solution. In most models, λ is a fixed value. In [39], Dong et al. developed a new automated spatially adapted regularization parameter selection method and obtained good results for Gaussian noise removal. In [27,40], the authors proposed spatially adapted regularization parameter selection schemes for Poissonian image deblurring. In [31], the authors used a spatially adapted regularization parameter selection scheme for impulse noise removal, but that method did not always give good results. In this section, the spatially adapted regularization parameter selection scheme described in [39] is adopted into the HOCTVL1 model, and the SAHOCTVL1 algorithm is derived.
Firstly, the SAHOCTVL1 model is defined as

$$\min_{x,z}\ \sum_{1 \le i,j \le n} \Big[ \alpha_{i,j}\,\|(Dx)_{i,j}\|_2 + (1 - \alpha_{i,j})\,\|(D^2 x)_{i,j}\|_2 \Big] + \big\| \lambda \circ \big(z - F(\tilde{z}) \circ z\big) \big\|_1 \quad \text{s.t.}\ z = Kx - f \qquad (22)$$

where $\circ$ represents the pointwise product. Here, λ is a matrix of the same size as $f$, and all of its elements are set to the same constant when the initial value is chosen.
As described in [39], the local window filter is defined as

$$\omega(a, b) = \begin{cases} \dfrac{1}{\omega^2}, & \text{if } \|b - a\|_{\infty} \le \dfrac{\omega}{2}, \\[2pt] 0, & \text{otherwise}, \end{cases} \qquad (23)$$

with $a \in \Omega$ fixed and $\int_{\Omega} \omega(a, b)\, db = 1$.
Let $r$ denote the noise level and $\nu$ the control constant for the fidelity term. For salt-and-pepper noise removal, we set

$$\nu = r/2 \qquad (24)$$

and the λ updating rule is expressed as

$$(\tilde{\lambda}_{p+1})_{i,j} = \eta \min\!\Big( (\tilde{\lambda}_{p})_{i,j} + \tau \max\big(\mathrm{LEAVE}_{i,j} - \nu,\ 0\big),\ L \Big) \qquad (25)$$

$$(\lambda_{p+1})_{i,j} = \frac{1}{\omega^2} \sum_{(s,t) \in \Omega^{\omega}_{i,j}} (\tilde{\lambda}_{p+1})_{s,t} \qquad (26)$$

where $L$ is a large constant ensuring that λ stays finite, $1 < \eta < 2$, $\Omega^{\omega}_{i,j}$ is the local window centered at $(i,j)$, $\tau = 2\lambda(:)/r$, and $\mathrm{LEAVE}_{i,j} = \frac{1}{\omega^2} \sum_{(s,t) \in \Omega^{\omega}_{i,j}} |Kx - f|_{s,t}$.
For random-valued noise removal, $\nu$ is defined as

$$\nu^{\omega}_{i,j} = \frac{1}{\omega^2} \sum_{(s,t) \in \Omega^{\omega}_{i,j}} r \cdot \Big( (Kx)_{s,t}^2 - (Kx)_{s,t} + \frac{1}{2} \Big) \qquad (27)$$

Then the λ updating rule is expressed as

$$(\tilde{\lambda}_{p+1})_{i,j} = \eta \min\!\Big( (\tilde{\lambda}_{p})_{i,j} + \tau \cdot \big(\mathrm{LEAVE}^{\omega}_{i,j} - \nu^{\omega}_{i,j}\big)_{+},\ L \Big) \qquad (28)$$

$$(\lambda_{p+1})_{i,j} = \frac{1}{\omega^2} \sum_{(s,t) \in \Omega^{\omega}_{i,j}} (\tilde{\lambda}_{p+1})_{s,t} \qquad (29)$$

where the parameters $\eta$, $L$, $\tau$, and $\mathrm{LEAVE}_{i,j}$ are the same as before.
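As an illustration of the update rules (24)–(26), the sketch below raises the per-pixel λ wherever the locally averaged residual LEAVE exceeds the target ν = r/2 and then smooths the result over the same window, using scipy's uniform_filter as the box filter. The image is assumed to be scaled to [0, 1], and the reading of τ and the default values of L and the window size are assumptions on our part.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def update_lambda_sp(lmb, x_restored, f, blur, r, eta=1.1, window=21, L=1e4):
    """One spatially adapted update (Eqs. (24)-(26)) of the per-pixel lambda for
    salt-and-pepper noise; `blur` applies K (e.g. an FFT-based convolution)."""
    kx = blur(x_restored)
    leave = uniform_filter(np.abs(kx - f), size=window, mode="wrap")   # LEAVE_{i,j}
    nu = r / 2.0                                                       # Eq. (24)
    tau = 2.0 * lmb / r                                                # assumed reading of tau = 2*lambda(:)/r
    lmb_tilde = eta * np.minimum(lmb + tau * np.maximum(leave - nu, 0.0), L)   # Eq. (25)
    return uniform_filter(lmb_tilde, size=window, mode="wrap")                 # Eq. (26), window smoothing
```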
Now the spatially adapted HOCTVL1 procedure can be summarized as Algorithm 2.
Algorithm 2: Spatially adapted algorithm for solving the SAHOCTVL1 model
  • Input: $f$, $K$, $\lambda$, $\beta_1$, $\beta_2$, $\beta_3$, $\zeta$, $c$, $\delta_{tol}$, Maxiter, $r$, $\omega$, $L$.
  • Initialization: $x^0 = f$, $\mu_1^0 = 0$, $\mu_2^0 = 0$, $\mu_3^0 = 0$, $k = 0$, $\alpha = 1$, $p = 0$.
  • Step 1. Solve the model Equation (22) by Algorithm 1 and obtain $x^p$.
  • Step 2. Update $\tilde{\lambda}_p$ via Equations (24)–(26) for salt-and-pepper noise, or via Equations (27)–(29) for random-valued noise.
  • Step 3. Stop, or set $p = p + 1$ and return to Step 1.
  • Output: $x^p$.

5. Numerical Results

In this section, numerical results are presented to illustrate the efficiency of the proposed models. Firstly, the HOCTVL1 model is compared with TVL1 [20], HTVL1 [31], and CTVL1 [23]; then four state-of-the-art methods are selected for comparison: LpTV-ADMM [26], the Adaptive Outlier Pursuit (AOP) method [41], the Penalty Decomposition Algorithm (PDA) [42], and L0TV-PADMM [43]. It should be noted that only the HOCTVL1 model is used in these comparison tests. In the last subsection, the efficiency of the SAHOCTVL1 model is compared with HOCTVL1 separately. The convergence of the HOCTVL1 model is also analyzed in this section. The test images are Lena, camera, pepper, and boat, shown in Figure 3. In the experiments, for ease of comparison, we only consider the "Gaussian" blurring kernel, since the model is also suitable for other blurring kernels. Besides, the signal-to-noise ratio (SNR) is used to evaluate the restoration quality, defined as

$$\mathrm{SNR}(x) = 10 \log_{10} \frac{\|\hat{x} - E(\hat{x})\|_2^2}{\|\hat{x} - x\|_2^2} \qquad (30)$$

where $\hat{x}$ and $x$ denote the original and restored images, respectively, and $E(\hat{x})$ represents the mean of $\hat{x}$. To evaluate the convergence rate, the running time of every algorithm is recorded. For fairness, the stopping criterion is the same for all algorithms in the experiments, and is expressed as

$$\frac{\|x^{k+1} - x^k\|_2^2}{\|x^k\|_2^2} < \delta_{tol} \qquad (31)$$

All experiments are run under Windows 10 and MATLAB R2018a on a Lenovo laptop (Beijing, China) with an Intel(R) Core(TM) i5-4200M CPU @ 2.50 GHz.
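The SNR of Equation (30) and the stopping rule of Equation (31) translate directly into code; the sketch below shows how the results in this section are measured, with an illustrative tolerance default since δ_tol is not specified.

```python
import numpy as np

def snr_db(x_hat, x):
    """SNR of Eq. (30); x_hat is the original image and x the restored one."""
    num = np.sum((x_hat - x_hat.mean()) ** 2)
    den = np.sum((x_hat - x) ** 2)
    return 10.0 * np.log10(num / den)

def converged(x_new, x_old, tol=1e-4):
    """Relative-change stopping rule of Eq. (31); the tolerance value is an assumption."""
    return np.sum((x_new - x_old) ** 2) / np.sum(x_old ** 2) < tol
```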

5.1. Parameter Setting

In this subsection, we set the values of some parameters used in the experiments. In [23], the author set (β1, β2) = (5, 350) for the TVL1 model, and (β1, β2) = (5, 20,000), ζ = 1.618, τ = 2 for the CTVL1 model. In [31], the author set (β1, β2, β3) = (5, 10, 1000), η = 1.1, c = 0.1, and local window size ω = 21 for the HTVL1 model. In this paper, we first determine the value of β3. We choose camera as the test image and add salt-and-pepper noise with noise level 30%; the blurring kernel is Gaussian (hsize = 7, standard deviation = 5). Setting λ = 500 (not the most appropriate λ) and varying β3 from 500 to 30,000, the trend of the SNR of the restored image is shown in Figure 4. It can be seen that the SNR increases with β3 and that the trend becomes stable when β3 > 20,000. To obtain good numerical results, in this paper we set β3 = 25,000.
The selection of λ is often troublesome. In most cases, an appropriate λ is obtained from experience or through many trials. In [24], a selection scheme for λ based on numerical experiments was defined, expressed as

$$\lambda = \frac{c\, \lambda^{*}}{1 - r} \qquad (32)$$

where $\lambda^{*}$ denotes the "best" λ found for the TVL1 model, $c$ is a constant, and $r$ represents the noise level. This means that we still need to search for a suitable λ for the TVL1 model. Through a large number of simulation experiments, we find that the main difficulty lies in choosing the initial value of λ at the 10% noise level; as the noise increases, the value of λ decreases. Figure 5 shows the results of the HOCTVL1 and CTVL1 models on an image corrupted by 10% salt-and-pepper noise with different λ, and it can be seen that the appropriate λ for HOCTVL1 is almost the same as the λ for CTVL1. Therefore, we can adopt the λ of the CTVL1 model for our HOCTVL1 model, and we set λ = 800 for impulse noise with noise level 10%.
The size of the local window ω is a factor that may influence the noise removal performance of the SAHOCTVL1 model. In [33], the author illustrated through experiments that more noise can be reduced and more details recovered when ω ≥ 11. Here, we also make an experiment: we choose camera as the test image and add 30% salt-and-pepper noise. Varying ω from 3 to 31, the SNR of the restored image is shown in Figure 6. Generally speaking, the SNR does not change much, and when ω ≥ 13 it tends to be stable. Therefore, in this paper we still set ω = 21, and the other parameters are as mentioned before.

5.2. Convergence Analysis of HOCTVL1 Model

In this subsection, the convergence of the HOCTVL1 algorithm is analyzed. In [23], it was proved that the CTVL1 model converges to an optimal solution and its dual. Let $\{y^k, w^k, z^k, x^k, \mu_1^k, \mu_2^k, \mu_3^k\}$ be the iterative sequence generated by the ADMM approach, and set $Q_1(y) = \sum_{1 \le i,j \le n} \|y_{i,j}\|_2$, $Q_2(w) = \sum_{1 \le i,j \le n} \|w_{i,j}\|_2$, and $Q_3(z) = \lambda(\|z\|_1 - \langle F(\tilde{z}), z\rangle)$. It is obvious that $Q_1: \mathbb{R}^{2n^2} \to \mathbb{R}$, $Q_2: \mathbb{R}^{4n^2} \to \mathbb{R}$, and $Q_3: \mathbb{R}^{n^2} \to \mathbb{R}$ are closed proper convex functions. Then, following Subsection 4.3 of [23], the convergence result of the HOCTVL1 model is easily obtained. Here, we also verify the convergence behavior of the HOCTVL1 model from another point of view: we observe the changes of SNR and of the function value F(z) over the iterations for the camera image of size 256 × 256 corrupted by Gaussian blur (hsize = 15, standard deviation = 5) and 50% salt-and-pepper noise, with λ = 500, as shown in Figure 7.
It can be seen that the SNR value increases monotonically and the function value F(z) decreases monotonically, which reflects the stable convergence behavior of the algorithm. Moreover, the SNR value remains stable after about the 130th iteration, and the function value remains unchanged after about the 70th iteration, which indicates that the model has converged to an optimal solution.

5.3. Comparisons of TVL1, HTVL1 and CTVL1 Models

In this subsection, some experiments are conducted to illustrate the superiority of the HOCTVL1 model in removing impulse noise and overcoming the staircase effect. By comparing its restoration results with those of the TVL1, HTVL1, and CTVL1 models, the superiority of our model is further illustrated. For the CTVL1 and HOCTVL1 models, the result of the TVL1 model is used as the initial value, so both start from the same initialization. The blurring kernel is Gaussian (hsize = 15, standard deviation = 5). The experiments are carried out from three aspects: (1) deblurring images under salt-and-pepper noise; (2) deblurring images under random-valued noise; (3) analysis of the convergence rate.

5.3.1. For Salt-and-Pepper Noise

Firstly, visual comparisons of the Lena image corrupted by Gaussian blur and salt-and-pepper noise with noise levels 30%, 50%, and 70% are carried out, and the results are shown in Figure 8, Figure 9 and Figure 10, respectively. SNR values are given in dB.
From Figure 8, Figure 9 and Figure 10, it can be found that, for noise levels 30% and 50%, all four models can remove the salt-and-pepper noise effectively, but the quality of the restored images differs: the image restored by the HOCTVL1 model is closest to the original, and its SNR is the highest. For noise level 70%, the image restored by the TVL1 model is not clear. The image restored by the HTVL1 model is clearer, but some noise points remain, while both the CTVL1 and HOCTVL1 models obtain a good result. Comparing the results of the TVL1 and HTVL1 models shows that image quality can indeed be improved by introducing the high-order TV regularizer term; however, because of the poor performance of the TVL1 model, the results of HTVL1 are still limited. The adaptive correction procedure introduced in the CTVL1 model greatly enhances the deblurring result, and combining it with the second-order TV regularizer term, as in our model, improves it further. For salt-and-pepper noise with levels from 30% to 70%, the HOCTVL1 model performs very well: it not only provides a good visual result, but also achieves a higher SNR. In particular, for noise level 30%, the SNR of the image restored by the HOCTVL1 model is more than 1 dB higher than that of the CTVL1 model, and for noise level 50% it is about 0.6 dB higher.
For noise level 90%, the HOCTVL1 model can also use correction steps to improve the result, as shown in Figure 11, which presents the results of the CTVL1 and HOCTVL1 models over five correction steps. It can be seen that, although the result of the TVL1 model is poor, after several correction steps both models improve the deblurring result. However, from the first to the fifth correction step, the SNR of our HOCTVL1 model is always higher than that of the CTVL1 model; after the first correction step, it is about 0.6 dB higher. Meanwhile, after three correction steps, the SNR of the restored image becomes stable and the noise is eliminated, which shows that the correction efficiency of the HOCTVL1 model is very high.
Table 1 shows the results of the four models for restoring images corrupted with noise levels 10%, 30%, 50%, 70%, and 90%; the test images are those shown in Figure 3. Note that the values of the CTVL1 and HOCTVL1 models for noise level 90% are the SNRs of the restored images after the first correction step. From Table 1, it can be seen that, compared with the other three models, the HOCTVL1 model achieves a higher SNR, and the improvement over the TVL1 and HTVL1 models is large regardless of the noise level. Compared with the CTVL1 model, the SNR is about 1 dB higher for the Lena, pepper, and boat images at noise levels 10% and 30%, and even for the camera image it is still at least 0.5 dB higher at these levels. For noise level 90%, the SNR of our HOCTVL1 model is the highest among the four models, though the improvement on the camera image is only slight.

5.3.2. For Random-Valued Noise

To test the performance of the HOCTVL1 model in removing random-valued noise, we also carry out a series of experiments. Figure 12 and Figure 13 show visual comparisons of the Lena image corrupted by Gaussian blur and random-valued noise with noise levels 30% and 50%. It can be seen that, for noise level 30%, all four models can restore the corrupted image, but the image restored by TVL1 is still somewhat blurred; the images recovered by the other three models are clearer, and the one recovered by the HOCTVL1 model is closest to the original, with the highest SNR. For noise level 50%, there is a little residual noise in Figure 13b,c, which shows that the TVL1 and HTVL1 models cannot completely remove 50% random-valued noise, although HTVL1 performs better. The CTVL1 and HOCTVL1 models remove the noise very well, and the image recovered by the HOCTVL1 model is clearer than that of the CTVL1 model, which illustrates that the high-order regularizer term effectively restrains the staircase effect; the SNR values also confirm this.
Figure 14 shows the results of TVL1, five correction steps of the CTVL1 model, and eight correction steps of the HOCTVL1 model for removing random-valued noise with noise level 70%. Although the performance of the TVL1 model is poor, after several correction steps both the CTVL1 and HOCTVL1 models improve the restoration result. Meanwhile, the SNR of the new model is always higher than that of the CTVL1 model after the same number of correction steps. Besides, after the eighth correction step, there are only a few noise points left and the restored image is very clear, which shows the superiority of the HOCTVL1 model.
Table 2 shows the results of the four models for restoring images corrupted with noise levels 10%, 30%, 50%, and 70%. The values of the CTVL1 and HOCTVL1 models for noise level 70% are again the SNRs of the restored images after the first correction step. The values show the superiority of our model for removing random-valued noise. In contrast to salt-and-pepper noise, there is only a small improvement over the CTVL1 model for noise as high as 70%, but when the noise level is lower than 70%, the improvement is remarkable.

5.3.3. Analysis of Convergence Rate

Now we analyze the convergence rate of the four models. We choose Lena as the test image and use the running time to evaluate the convergence rate. Figure 15 shows the time the four models spend restoring the corrupted Lena image under impulse noise at different levels. It can be seen that, for the same noise, the TVL1 and CTVL1 models cost relatively less time. Because the high-order TV regularizer term increases the computational complexity of the algorithm, the HTVL1 and HOCTVL1 models consume more time than the TVL1 and CTVL1 models; however, since the HOCTVL1 model effectively reduces the staircase effect and restores more details, the extra time is worthwhile. Figure 16 shows the change of the F(z) value with the iteration number for salt-and-pepper noise with noise levels 30% and 50%. It can be seen that the convergence rate of the HOCTVL1 model is slower than that of the CTVL1 model, and the number of iterations is about twice that of the CTVL1 model.

5.4. Comparisons of Some Other Methods

In this subsection, we compare the HOCTVL1 model with some other methods for image deblurring under impulse noise, mainly LpTV-ADMM [26], AOP [41], PDA [42], and L0TV-PADMM [43]. Since the superiority of the CTVL1 model over the two-phase method was already shown by numerical experiments in [23], we do not consider the two-phase method here. In this experiment, for ease of comparison, we choose the "Gaussian" blurring kernel with hsize = 9 and standard deviation = 7, the same as in [43]; the parameter settings of these methods follow the related papers, and readers can refer to them for details.
Firstly, Figure 17 and Figure 18 show the visual results for the pepper image corrupted by salt-and-pepper noise and random-valued noise, respectively, each with noise level 50%.
From Figure 17 and Figure 18, it can be seen that the restoration result of the HOCTVL1 model is remarkable. It is obvious that the HOCTVL1 model has the highest SNR, followed by the PDA and AOP methods, while the Lp-ADMM method has the lowest SNR. When removing 50% salt-and-pepper noise, as shown in Figure 17, the SNR of (f) is more than twice that of (b) obtained by the Lp-ADMM method, and it is also about 4 dB higher than that of the L0TV-PADMM method. When removing 50% random-valued noise, our model improves on the L0TV-PADMM method by only about 2 dB in SNR, less than for salt-and-pepper noise, but compared with the Lp-ADMM method the improvement is again more than 100%.
Table 3 shows the results of the five methods for restoring images corrupted by impulse noise at different noise levels. The value on the left of "/" is the result after the first correction step, and the value on the right is the result after multiple correction steps. For salt-and-pepper noise, the Lp-ADMM and PDA methods clearly perform poorly, and at the 90% noise level the Lp-ADMM method is the worst. When the noise level varies from 30% to 70%, the HOCTVL1 model has the highest SNR in most cases, except that L0TV-PADMM is higher for the camera image at noise level 70%. It can also be seen that the L0TV-PADMM method gives the best restoration, with an SNR higher than our model, when the noise level is 90%. For random-valued noise, our model has the highest SNR when the noise level varies from 30% to 50%. Similarly, at noise level 70% the L0TV-PADMM method has certain advantages; however, the Lena and boat images show that our model can achieve a higher SNR than L0TV-PADMM after multiple correction steps.
Figure 19 shows the running time of the five methods for restoring the corrupted pepper image. When dealing with 90% salt-and-pepper noise and 70% random-valued noise, the HOCTVL1 model needs multiple correction steps, which costs a lot of time, so here we only report the times for salt-and-pepper noise up to 70% and random-valued noise up to 50%. It can be seen that, compared with the other four methods, our HOCTVL1 model takes the least time at every noise level; the other four methods take several times as much time as our model, which illustrates the advantage of our model.

5.5. Comparisons between SAHOCTVL1 Model and HOCTVL1 Model

In this subsection, the restoration performance of the SAHOCTVL1 model is analyzed. We choose camera and Lena as the test images, use "Gaussian" blurring with hsize = 15 and standard deviation = 5, and set the number of spatially adapted iterations to 3. Since the superiority of the HOCTVL1 model has already been shown by many simulation experiments, here we only compare the SAHOCTVL1 and HOCTVL1 models. Moreover, since the advantage of the HOCTVL1 model over the HTVL1 model is large, and the SAHTVL1 model in [31] behaves similarly to HTVL1, we do not consider the SAHTVL1 model either. We also exclude 90% salt-and-pepper noise and 70% random-valued noise in this subsection, since the multiple correction steps take much time. The results are evaluated by SNR and running time and are shown in Table 4.
From Table 4, generally speaking, the SAHOCTVL1 model achieves at least the same result as the HOCTVL1 model. For the camera image, with 30% and 50% salt-and-pepper noise the SNR of the SAHOCTVL1 model is about 0.4 dB higher than that of HOCTVL1, and with 50% random-valued noise it is about 0.6 dB higher. For the Lena image, the advantage of the SAHOCTVL1 model is small, with an SNR improvement of about 0.1 dB at noise levels 50% and 70%. Meanwhile, because the number of spatially adapted iterations is 3, the algorithm runs three times, so the SAHOCTVL1 model takes about three times as long as the HOCTVL1 model. We nevertheless consider the higher SNR worth the extra running time.

6. Conclusions

This paper contributes to solving the problem of image deblurring under impulse noise, and two models, named HOCTVL1 and SAHOCTVL1, are proposed. Benefitting from the high-order TV regularizer term and the spatially adapted regularization parameter selection scheme, both models perform well in recovering corrupted images. A large number of experiments are carried out to show the superiority of the two models. Compared to the CTVL1 model, the HOCTVL1 model achieves better visual results and higher SNR values: for salt-and-pepper noise below 90% and random-valued noise below 70%, the improvement is about 0.5∼1 dB. For 90% salt-and-pepper noise and 70% random-valued noise, multiple correction steps are used to improve the restoration quality, and the HOCTVL1 model outperforms the CTVL1 model both in the SNR of each correction step and in the number of correction steps needed. Compared with four other state-of-the-art methods, the HOCTVL1 model always outperforms the Lp-ADMM, AOP, and PDA methods. Compared with the L0TV-PADMM method, the HOCTVL1 model performs better in most cases, achieving about 1∼4 dB improvement; for 90% salt-and-pepper noise and 70% random-valued noise, L0TV-PADMM performs well, but after several correction steps the HOCTVL1 model obtains a higher SNR in some cases while taking less time. In the last experiment, the HOCTVL1 and SAHOCTVL1 models are compared, and the results show that SAHOCTVL1 achieves about 0.1∼0.6 dB improvement over HOCTVL1. However, it takes about three times as long, which is a problem to be addressed in future work.

Author Contributions

All authors have made great contributions to the work. J.X., P.Y., L.W. and M.H. conceived and designed the experiments; P.Y. performed the experiments and analyzed the data; J.X. and P.Y. gave insightful suggestions for the work; P.Y. wrote the paper.

Funding

This paper is supported by the National Key Laboratory of Communication Anti-Jamming Technology (614210202030217), the Heilongjiang Education and Teaching Reform Project Funding in 2017 (SJGY20170515 and SJGY20170514), and the Fundamental Research Funds for the Central Universities (free exploration of basic scientific research).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tiwari, K.A.; Raisutis, R.; Tumsys, O. Defect Estimation in Non-Destructive Testing of Composites by Ultrasonic Guided Waves and Image Processing. Electronics 2019, 8, 315. [Google Scholar] [CrossRef]
  2. Turajlic, E. Adaptive Block-Based Approach to Image Noise Level Estimation in the SVD Domain. Electronics 2018, 7, 397. [Google Scholar] [CrossRef]
  3. Chervyakov, N.; Lyakhov, P.; Kaplun, D. Analysis of the Quantization Noise in Discrete Wavelet Transform Filters for Image Processing. Electronics 2018, 7, 135. [Google Scholar] [CrossRef]
  4. Xu, J.; Tai, X.; Wang, L. A two-level domain decomposition method for image restoration. Inverse Probl. 2017, 4, 523–545. [Google Scholar] [CrossRef]
  5. Guo, Z.; Sun, Y.; Jian, M.; Zhang, X. Deep Residual Network with Sparse Feedback for Image Restoration. Appl. Sci. 2018, 8, 2417. [Google Scholar] [CrossRef]
  6. Ogiela, L.; Tadeusiewicz, R.; Ogiela, M.R. Cognitive analysis in diagnostic DSS-type IT systems. In Proceedings of the Artificial Intelligence and Soft Computing, Zakopane, Poland, 12–16 June 2006; pp. 962–971. [Google Scholar]
  7. Simons, T.; Lee, D.J. Jet Features: Hardware-Friendly, Learned Convolutional Kernels for High-Speed Image Classification. Electronics 2019, 8, 588. [Google Scholar] [CrossRef]
  8. Sun, G.; Leng, J.; Huang, T. An Efficient Sparse Optimization Algorithm for Weighted ℓ0 Shearlet-Based Method for Image Deblurring. IEEE Access 2017, 5, 3085–3094. [Google Scholar] [CrossRef]
  9. Xiang, J.H.; Yue, H.H.; Yin, X.J. A Reweighted Symmetric Smoothed Function Approximating L0 Norm Regularized Sparse Reconstruction Method. Symmetry 2018, 10, 583. [Google Scholar] [CrossRef]
  10. Wang, L.Y.; Yin, X.J.; Yue, H.H. A Regularized Weighted Smoothed L0 Norm Minimization Method for Underdetermined Blind Source Separation. Sensors 2018, 18, 4260. [Google Scholar] [CrossRef]
  11. Ma, X.; Hu, S.; Liu, S. Remote Sensing Image Fusion Based on Sparse Representation and Guided Filtering. Electronics 2019, 8, 303. [Google Scholar] [CrossRef]
  12. Kittisuwan, P. Speckle Noise Reduction of Medical Imaging via Logistic Density in Redundant Wavelet Domain. Int. J. Artif. Intell. Tools 2018, 27, 1850006. [Google Scholar] [CrossRef]
  13. Zhang, Y.D.; Zhang, Y.; Dong, Z.C.; Yuan, T.F.; Han, L.X.; Yang, M.; Carlo, C.; Lu, H.M. Advanced Signal Processing Methods In Medical Imaging. IEEE Access 2018, 6, 61812–61818. [Google Scholar] [CrossRef]
  14. Vorontsov, S.; Jefferies, S. A new approach to blind deconvolution of astronomical images. Inverse Probl. 2017, 33, 055004. [Google Scholar] [CrossRef]
  15. Shi, X.; Rui, G.; Yi, Z. Astronomical image restoration using variational Bayesian blind deconvolution. J. Syst. Eng. Electron. 2017, 28, 1236–1247. [Google Scholar]
  16. Chen, S.; Sun, T.; Yang, F. An improved optimum-path forest clustering algorithm for remote sensing image segmentation. Comput. Geosci. 2018, 112, 38–46. [Google Scholar] [CrossRef]
  17. Yong, Y.; Wan, W.; Huang, S.; Yuan, F.; Yang, S.; Yue, Q. Remote Sensing Image Fusion Based on Adaptive IHS and Multiscale Guided Filter. IEEE Access 2017, 4, 4573–4582. [Google Scholar] [CrossRef]
  18. Guo, X.; Fang, L.; Ng, M.K. A Fast ℓ1-TV Algorithm for Image Restoration. SIAM J. Sci. Comput. 2009, 31, 2322–2341. [Google Scholar] [CrossRef]
  19. Goldstein, T.; Osher, S. The Split Bregman method for L1 regularized problems. SIAM J. Sci. Comput. 2009, 2, 323–343. [Google Scholar] [CrossRef]
  20. Yang, J.F.; Zhang, Y.; Yin, W.T. An Efficient Tvl1 Algorithm For Deblurring Multichannel Images Corrupted By Impulsive Noise. SIAM J. Sci. Comput. 2009, 31, 2842–2865. [Google Scholar] [CrossRef]
  21. Nikolova, M. Model distortions in Bayesian map reconstruction. Inverse Probl. Imaging 2007, 1, 399–422. [Google Scholar] [CrossRef]
  22. Cai, J.F.; Chan, R.H.; Nikolova, M. Fast Two-Phase Image Deblurring Under Impulse Noise. J. Math. Imaging Vis. 2008, 2, 187–204. [Google Scholar] [CrossRef]
  23. Bai, M.; Zhang, X.; Shao, Q. Adaptive correction procedure for TVL1 image deblurring under impulse noise. Inverse Probl. 2016, 32, 085004. [Google Scholar] [CrossRef] [Green Version]
  24. Yang, J.; Gu, G.; Jiang, S. A TVSCAD approach for image deblurring with impulsive noise. Inverse Probl. 2017, 33, 125008. [Google Scholar] [Green Version]
  25. Minru, B.; Shihuan, G. TV-MCP: A New Method for Image Restoration in the Presence of Impulse Noise. J. Hunan Nat. Sci. 2018, 45, 126–130. [Google Scholar]
  26. Xu, Z.; Chang, X.; Xu, F.; Zhang, H. L1/2 regularization: A thresholding representation theory and a fast solver. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1013–1027. [Google Scholar]
  27. Liu, J.; Huang, T.Z.; Lv, X.G.; Si, W. High-order total variation-based Poissonian image deconvolution with spatially adapted regularization parameter. Appl. Math. Model. 2017, 45, 516–529. [Google Scholar] [CrossRef]
  28. Chambolle, A.; Lions, P.L. Image recovery via total variation minimization and related problems. Numer. Math. 1997, 76, 167–188. [Google Scholar] [CrossRef]
  29. Chan, T.; Marquina, A.; Mulet, P. High-Order Total Variation-Based Image Restoration. SIAM J. Sci. Comput. 2000, 22, 503–516. [Google Scholar] [CrossRef]
  30. Wang, S.; Huang, T.Z.; Zhao, X.L.; Liu, J. An Alternating Direction Method for Mixed Gaussian Plus Impulse Noise Removal. Abstract Appl. Anal. 2013, 2, 233–255. [Google Scholar] [CrossRef]
  31. Liu, G.; Huang, T.Z.; Liu, J. High-order TVL1-based images restoration and spatially adapted regularization parameter selection. Comput. Math. Appl. 2014, 67, 2015–2026. [Google Scholar] [CrossRef]
  32. Clason, C.; Jin, B.; Kunisch, K. A Duality-Based Splitting Method for ℓ1-TV Image Restoration with Automatic Regularization Parameter Choice. SIAM J. Sci. Comput. 2010, 32, 1484–1505. [Google Scholar] [CrossRef]
  33. Hintermüller, M.; Rinconcamacho, M.M. Expected absolute value estimators for a spatially adapted regularization parameter choice rule in L1-TV-based image restoration. Inverse Probl. 2010, 26, 085005. [Google Scholar] [CrossRef]
  34. Jin, K.; Ye, J. Sparse and Low-Rank Decomposition of a Hankel Structured Matrix for Impulse Noise Removal. IEEE Trans. Image Process. 2018, 27, 1448–1461. [Google Scholar] [CrossRef]
  35. Stamatios, L.; Aurélien, B.; Michael, U. Hessian-based norm regularization for image restoration with biomedical applications. IEEE Trans. Image Process. 2012, 21, 983–995. [Google Scholar]
  36. Wu, C.; Tai, X.C. Augmented Lagrangian Method, Dual Methods, and Split Bregman Iteration for ROF, Vectorial TV, and High Order Models. SIAM J. Imaging Sci. 2012, 3, 300–339. [Google Scholar] [CrossRef]
  37. Lysaker, M.; Tai, X.C. Iterative Image Restoration Combining Total Variation Minimization and a Second-Order Functional. Int. J. Comput. 2006, 66, 5–18. [Google Scholar] [CrossRef]
  38. Ghadimi, E.; Teixeira, A.; Shames, I.; Johansson, M. Optimal Parameter Selection for the Alternating Direction Method of Multipliers (ADMM): Quadratic Problems. IEEE Trans. Autom. Control. 2015, 60, 644–658. [Google Scholar] [CrossRef]
  39. Dong, Y.; Rincon-Camacho, M.M. Automated Regularization Parameter Selection in Multi-Scale Total Variation Models for Image Restoration. J. Math. Imaging Vis. 2011, 40, 82–104. [Google Scholar] [CrossRef]
  40. Chen, D.Q.; Cheng, L.Z. Spatially adapted regularization parameter selection based on the local discrepancy function for Poissonian image deblurring. Inverse Probl. 2012, 28, 015004. [Google Scholar] [CrossRef]
  41. Ming, Y. Restoration of Images Corrupted by Impulse Noise and Mixed Gaussian Impulse Noise using Blind Inpainting. SIAM J. Imaging Sci. 2013, 6, 1227–1245. [Google Scholar]
  42. Lu, Z.; Yong, Z. Sparse Approximation via Penalty Decomposition Methods. SIAM J. Optim. 2012, 23, 2448–2478. [Google Scholar] [CrossRef]
  43. Yuan, G.; Ghanem, B. ℓ0TV: A Sparse Optimization Method for Impulse Noise Image Restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 352–364. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The block diagram of solving image deblurring problem under impulse noise.
Figure 2. Plots of ϕ(t) in CTVL1, φ(t) in TVSCAD and ψ(t) in TV-MCP. (a) ϕ(t): (ε², τ) = (0.001, 2); (b) φ(t): (γ1, γ2) = (0.08, 0.2); (c) ψ(t): (θ1, θ2) = (1, 0.15).
Figure 3. Original images. First column: image name. Second column: image size. (a) Lena: 512 × 512; (b) camera: 256 × 256; (c) pepper: 512 × 512; (d) boat: 512 × 512.
Figure 4. The SNR for results with different β3.
Figure 5. The SNR of HOCTVL1 and CTVL1 with different λ.
Figure 6. The SNR for results with different ω.
Figure 7. Changes of SNR and F(z) with the iterations for Camera image corrupted by Gaussian blur and 50% salt-and-pepper noise. (a) SNR; (b) F(z).
Figure 8. Comparisons of TVL1, HTVL1, CTVL1 and HOCTVL1 model on the Lena image corrupted by Gaussian blur and salt-and-pepper noise with noise level 30%. (a) Corrupted image. (b) TVL1 model. (c) HTVL1 model. (d) CTVL1 model. (e) HOCTVL1 model.
Figure 9. Comparisons of TVL1, HTVL1, CTVL1 and HOCTVL1 model on the Lena image corrupted by Gaussian blur and salt-and-pepper noise with noise level 50%. (a) Corrupted image. (b) TVL1 model. (c) HTVL1 model. (d) CTVL1 model. (e) HOCTVL1 model.
Figure 10. Comparisons of TVL1, HTVL1, CTVL1 and HOCTVL1 model on the Lena image corrupted by Gaussian blur and salt-and-pepper noise with noise level 70%. (a) Corrupted image. (b) TVL1 model. (c) HTVL1 model. (d) CTVL1 model. (e) HOCTVL1 model.
Figure 11. Restored images of TVL1, CTVL1 and HOCTVL1 models on the Lena image corrupted by Gaussian blur and salt-and-pepper noise with noise level 90%. (a) Corrupted image. (b) TVL1 model. (c–g) first to fifth correction step of CTVL1 model. (h–l) first to fifth correction step of HOCTVL1 model.
Figure 12. Comparisons of TVL1, HTVL1, CTVL1 and HOCTVL1 model on the Lena image corrupted by Gaussian blur and random-valued noise with noise level 30%. (a) Corrupted image. (b) TVL1 model. (c) HTVL1 model. (d) CTVL1 model. (e) HOCTVL1 model.
Figure 13. Comparisons of TVL1, HTVL1, CTVL1 and HOCTVL1 model on the Lena image corrupted by Gaussian blur and random-valued noise with noise level 50%. (a) Corrupted image. (b) TVL1 model. (c) HTVL1 model. (d) CTVL1 model. (e) HOCTVL1 model.
Figure 14. Restored images of TVL1, CTVL1 and HOCTVL1 model on the Lena image corrupted by Gaussian blur and random-valued noise with noise level 70%. (a) Corrupted image. (b) TVL1 model. (c–g) first to fifth correction step of CTVL1 model. (h–o) first to eighth correction step of HOCTVL1 model.
Figure 15. Running time of four different models for Lena corrupted by Gaussian blur and impulse noise. (a) Salt-and-pepper noise; (b) Random-valued noise.
Figure 16. Change of F(z) value with iteration number. (a) Corruption: 30%; (b) Corruption: 50%.
Figure 17. Recovered images on Pepper image corrupted by salt-and-pepper noise with noise level 50%. (a) Corrupted image. (b) Lp-ADMM. (c) AOP. (d) PDA. (e) L0TV-PADMM. (f) HOCTVL1.
Figure 18. Recovered images on Pepper image corrupted by random-valued noise with noise level 50%. (a) Corrupted image. (b) Lp-ADMM. (c) AOP. (d) PDA. (e) L0TV-PADMM. (f) HOCTVL1.
Figure 19. Running time of five methods for pepper corrupted by Gaussian blur and impulse noise. (a) Salt-and-pepper noise; (b) Random-valued noise.
Table 1. SNR of four different models for test images corrupted by Gaussian blur and salt-and-pepper noise. SNR values are given in dB.

| Image  | Noise Level | TVL1  | HTVL1 | CTVL1 | HOCTVL1 |
|--------|-------------|-------|-------|-------|---------|
| Lena   | 10%         | 15.68 | 16.83 | 18.93 | 19.79   |
| Lena   | 50%         | 14.82 | 15.14 | 18.01 | 18.60   |
| Lena   | 70%         | 12.39 | 13.59 | 17.43 | 17.71   |
| Lena   | 90%         | 7.32  | 6.77  | 12.13 | 12.77   |
| Camera | 10%         | 12.95 | 13.64 | 16.27 | 16.92   |
| Camera | 30%         | 12.30 | 12.99 | 15.73 | 16.21   |
| Camera | 50%         | 11.48 | 11.79 | 14.79 | 15.28   |
| Camera | 70%         | 10.26 | 10.27 | 12.44 | 12.54   |
| Camera | 90%         | 5.73  | 5.56  | 8.08  | 8.20    |
| Pepper | 10%         | 19.40 | 21.65 | 25.08 | 26.00   |
| Pepper | 30%         | 18.96 | 21.01 | 24.24 | 25.43   |
| Pepper | 50%         | 17.98 | 19.31 | 23.65 | 24.63   |
| Pepper | 70%         | 16.14 | 16.48 | 22.77 | 23.25   |
| Pepper | 90%         | 7.90  | 7.12  | 13.16 | 14.75   |
| Boat   | 10%         | 15.26 | 16.54 | 19.03 | 20.02   |
| Boat   | 30%         | 14.78 | 16.01 | 18.58 | 19.52   |
| Boat   | 50%         | 13.63 | 14.58 | 18.08 | 18.69   |
| Boat   | 70%         | 12.30 | 12.51 | 17.26 | 17.69   |
| Boat   | 90%         | 6.79  | 6.13  | 10.99 | 11.36   |
Table 2. SNR of four different models for test images corrupted by Gaussian blur and random-valued noise. SNR values are given in dB.

| Image  | Noise Level | TVL1  | HTVL1 | CTVL1 | HOCTVL1 |
|--------|-------------|-------|-------|-------|---------|
| Lena   | 10%         | 15.70 | 16.82 | 18.87 | 19.68   |
| Lena   | 30%         | 15.41 | 16.25 | 18.48 | 19.32   |
| Lena   | 50%         | 14.24 | 14.95 | 17.94 | 18.62   |
| Lena   | 70%         | 8.34  | 8.28  | 9.35  | 9.54    |
| Camera | 10%         | 12.97 | 13.64 | 16.22 | 16.76   |
| Camera | 30%         | 12.18 | 12.88 | 15.80 | 16.20   |
| Camera | 50%         | 9.35  | 9.37  | 11.20 | 11.46   |
| Camera | 70%         | 3.30  | 3.60  | 3.63  | 3.79    |
| Pepper | 10%         | 19.41 | 21.57 | 25.00 | 25.95   |
| Pepper | 30%         | 18.94 | 20.88 | 24.24 | 25.35   |
| Pepper | 50%         | 16.23 | 16.88 | 20.88 | 21.40   |
| Pepper | 70%         | 7.58  | 7.12  | 8.63  | 8.80    |
| Boat   | 10%         | 15.26 | 16.48 | 19.01 | 19.90   |
| Boat   | 30%         | 14.76 | 15.94 | 18.55 | 19.47   |
| Boat   | 50%         | 12.90 | 13.62 | 17.06 | 18.31   |
| Boat   | 70%         | 6.57  | 7.06  | 7.88  | 7.98    |
Table 3. SNR of five methods for test images corrupted by Gaussian blur and impulse noise. SNR values are given in dB; SP denotes salt-and-pepper noise and RV denotes random-valued noise.

| Image  | Algorithm  | SP 30% | SP 50% | SP 70% | SP 90%      | RV 30% | RV 50% | RV 70%     |
|--------|------------|--------|--------|--------|-------------|--------|--------|------------|
| Lena   | Lp-ADMM    | 14.18  | 12.72  | 8.25   | 2.21        | 14.24  | 12.79  | 7.21       |
| Lena   | AOP        | 14.75  | 14.34  | 13.79  | 13.18       | 14.49  | 14.03  | 5.56       |
| Lena   | PDA        | 14.22  | 13.75  | 12.96  | 9.12        | 14.07  | 13.41  | 11.77      |
| Lena   | L0TV-PADMM | 19.41  | 18.62  | 17.46  | 15.02       | 18.14  | 16.92  | 15.31      |
| Lena   | HOCTVL1    | 23.66  | 22.19  | 18.52  | 11.05/13.82 | 23.49  | 19.44  | 9.27/17.24 |
| Camera | Lp-ADMM    | 9.46   | 8.37   | 3.92   | −0.43       | 9.43   | 7.54   | 2.68       |
| Camera | AOP        | 11.32  | 10.63  | 9.52   | 8.37        | 11.08  | 8.65   | 2.57       |
| Camera | PDA        | 9.91   | 9.57   | 8.98   | 5.86        | 9.64   | 8.92   | 6.54       |
| Camera | L0TV-PADMM | 15.57  | 14.32  | 12.63  | 9.74        | 13.16  | 12.35  | 10.53      |
| Camera | HOCTVL1    | 22.27  | 17.89  | 12.55  | 6.22/6.46   | 21.67  | 14.79  | 3.65/9.51  |
| Pepper | Lp-ADMM    | 16.63  | 14.33  | 7.80   | 1.92        | 16.75  | 13.80  | 6.52       |
| Pepper | AOP        | 17.33  | 16.97  | 16.31  | 15.32       | 17.24  | 16.23  | 5.74       |
| Pepper | PDA        | 16.53  | 15.95  | 14.91  | 9.54        | 16.30  | 15.45  | 13.10      |
| Pepper | L0TV-PADMM | 26.18  | 24.62  | 21.55  | 17.53       | 22.66  | 20.82  | 18.00      |
| Pepper | HOCTVL1    | 32.07  | 30.64  | 22.58  | 9.80/11.25  | 30.34  | 22.94  | 8.06/16.09 |
| Boat   | Lp-ADMM    | 12.28  | 11.13  | 7.16   | 1.99        | 12.22  | 10.95  | 6.18       |
| Boat   | AOP        | 13.70  | 12.80  | 12.24  | 11.43       | 13.11  | 12.51  | 5.43       |
| Boat   | PDA        | 12.38  | 11.91  | 11.18  | 8.29        | 12.32  | 11.76  | 10.43      |
| Boat   | L0TV-PADMM | 19.45  | 18.12  | 16.58  | 13.26       | 17.38  | 15.76  | 14.04      |
| Boat   | HOCTVL1    | 24.21  | 22.55  | 18.15  | 9.25/10.85  | 23.84  | 18.46  | 7.91/15.45 |
Table 4. Comparisons of SAHOCTVL1 and HOCTVL1 model for Cameraman and Lena corrupted by Gaussian blur and impulse noise.

| Image  | Noise Type | Noise Level | SNR HOCTVL1 (dB) | SNR SAHOCTVL1 (dB) | Time HOCTVL1 (s) | Time SAHOCTVL1 (s) |
|--------|------------|-------------|------------------|--------------------|------------------|--------------------|
| Camera | SP         | 10%         | 16.85            | 16.98              | 2.32             | 6.76               |
| Camera | SP         | 30%         | 16.18            | 16.51              | 2.67             | 8.20               |
| Camera | SP         | 50%         | 15.33            | 15.72              | 3.02             | 9.32               |
| Camera | SP         | 70%         | 12.50            | 12.57              | 5.15             | 15.47              |
| Camera | RV         | 10%         | 16.73            | 16.78              | 2.06             | 6.07               |
| Camera | RV         | 30%         | 16.22            | 16.34              | 2.30             | 7.08               |
| Camera | RV         | 50%         | 11.39            | 11.99              | 3.02             | 8.74               |
| Lena   | SP         | 10%         | 19.78            | 19.82              | 11.19            | 31.45              |
| Lena   | SP         | 30%         | 19.35            | 19.44              | 13.97            | 39.27              |
| Lena   | SP         | 50%         | 18.63            | 18.78              | 16.13            | 47.12              |
| Lena   | SP         | 70%         | 17.72            | 17.85              | 18.82            | 55.57              |
| Lena   | RV         | 10%         | 19.68            | 19.72              | 9.18             | 30.60              |
| Lena   | RV         | 30%         | 19.29            | 19.32              | 11.39            | 36.77              |
| Lena   | RV         | 50%         | 18.56            | 18.68              | 12.07            | 41.35              |

SP denotes salt-and-pepper noise and RV denotes random-valued noise.
