Article

Investigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method

Department of Mathematics, University of Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany
J. Imaging 2018, 4(1), 12; https://doi.org/10.3390/jimaging4010012
Submission received: 2 October 2017 / Revised: 2 January 2018 / Accepted: 3 January 2018 / Published: 6 January 2018

Abstract

In this paper, we investigate the usefulness of adding a box-constraint to the minimization of functionals consisting of a data-fidelity term and a total variation regularization term. In particular, we show that in certain applications an additional box-constraint does not affect the solution at all, i.e., the solution is the same whether a box-constraint is used or not. For applications where a box-constraint may influence the solution, we investigate how much it affects the quality of the restoration, especially when the regularization parameter, which weights the data-fidelity term against the regularizer, is chosen suitably. In particular, for such applications we consider the case of a squared $L^2$ data-fidelity term. For computing a minimizer of the respective box-constrained optimization problems, a primal-dual semi-smooth Newton method is presented, which guarantees superlinear convergence.

1. Introduction

An observed image $g$, which contains additive Gaussian noise with zero mean and standard deviation $\sigma$, may be modeled as
$$ g = K \hat{u} + n, $$
where $\hat{u}$ is the original image, $K$ is a bounded linear operator and $n$ represents the noise. With the aim of preserving edges in images, total variation regularization in image restoration was proposed in [1]. Based on this approach and assuming that $g \in L^2(\Omega)$ and $K \in \mathcal{L}(L^2(\Omega))$, a good approximation of $\hat{u}$ is usually obtained by solving
$$ \min_{u \in BV(\Omega)} \int_\Omega |Du| \quad \text{such that (s.t.)} \quad \|Ku - g\|_{L^2(\Omega)}^2 \le \sigma^2 |\Omega|, \tag{1} $$
where $\Omega \subset \mathbb{R}^2$ is a simply connected domain with Lipschitz boundary and $|\Omega|$ its volume. Here $\int_\Omega |Du|$ denotes the total variation of $u$ in $\Omega$ and $BV(\Omega)$ is the space of functions of bounded variation, i.e., $u \in BV(\Omega)$ if and only if $u \in L^1(\Omega)$ and $\int_\Omega |Du| < \infty$; see [2,3] for more details. We recall that $BV(\Omega) \subset L^2(\Omega)$ if $\Omega \subset \mathbb{R}^2$.
Instead of considering (1), we may solve the penalized minimization problem
$$ \min_{u \in BV(\Omega)} \|Ku - g\|_{L^2(\Omega)}^2 + \alpha \int_\Omega |Du| \tag{2} $$
for a given constant $\alpha > 0$, which we refer to as the $L^2$-TV model. In particular, there exists a constant $\alpha \ge 0$ such that the constrained problem (1) is equivalent to the penalized problem (2) if $g \in K(BV(\Omega))$ and $K$ does not annihilate constant functions [4]. Moreover, under the latter condition the existence of a minimizer of problems (1) and (2) is also guaranteed [4]. There exist many algorithms that solve problem (1) or problem (2); see for example [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23] and references therein.
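As a concrete illustration of the $L^2$-TV model in the denoising case $K = I$, the following sketch applies Chambolle's dual projection algorithm (one of the many solvers in the literature cited above; not the method of this paper) to the problem $\min_u \tfrac12\|u - g\|^2 + \alpha\,\mathrm{TV}(u)$, which differs from (2) only by the usual $\tfrac12$ scaling of the data term. The step size `tau` and iteration count are illustrative choices:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with zero padding at the last row/column."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(g, alpha, n_iter=200, tau=0.248):
    """Minimize 0.5*||u - g||^2 + alpha*TV(u) via Chambolle's projection method."""
    px = np.zeros_like(g); py = np.zeros_like(g)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - g / alpha)
        denom = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / denom
        py = (py + tau * gy) / denom
    return g - alpha * div(px, py)
```

On a noisy piecewise-constant image, the reconstruction removes oscillations while keeping the edge, which is exactly the behavior motivating total variation regularization.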
If in problem (2) an $L^1$-norm is used instead of the squared $L^2$-norm, we refer to it as the $L^1$-TV model. The squared $L^2$-norm is usually used when the image is contaminated by Gaussian noise, while the $L^1$-norm is more suitable for impulse noise [24,25,26].
If we additionally know (or assume) that the original image lies in the dynamic range $[c_{\min}, c_{\max}]$, i.e., $c_{\min} \le u(x) \le c_{\max}$ for almost every (a.e.) $x \in \Omega$, we may incorporate this information into problems (1) and (2), leading to
$$ \min_{u \in BV(\Omega)} \int_\Omega |Du| \quad \text{s.t.} \quad \|Ku - g\|_{L^2(\Omega)}^2 \le \sigma^2 |\Omega| \ \text{ and } \ u \in C \tag{3} $$
and
$$ \min_{u \in BV(\Omega) \cap C} \|Ku - g\|_{L^2(\Omega)}^2 + \alpha \int_\Omega |Du|, \tag{4} $$
respectively, where $C := \{ u \in L^2(\Omega) : c_{\min} \le u(x) \le c_{\max} \text{ for a.e. } x \in \Omega \}$. In order to guarantee the existence of a minimizer of problems (3) and (4), we assume in the sequel that $K$ does not annihilate constant functions. Since the characteristic function $\chi_C$ is lower semicontinuous, this follows by the same arguments as in [4]. If additionally $g \in K(BV(\Omega) \cap C)$, then by [4] (Prop. 2.1) there exists a constant $\alpha \ge 0$ such that problem (3) is equivalent to problem (4).
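Pointwise, the feasible set $C$ is just a box, so projecting an image onto $C$ amounts to clamping its values. A minimal numpy sketch (the function name is ours, chosen for illustration):

```python
import numpy as np

def project_onto_C(u, c_min, c_max):
    """Pointwise projection onto C = {u : c_min <= u(x) <= c_max a.e.}."""
    return np.clip(u, c_min, c_max)
```

The projection is idempotent: applying it twice gives the same result as applying it once, and it leaves any image that already satisfies the box-constraint unchanged.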
For image restoration, box-constraints have been considered for example in [5,27,28,29]. In [29] a functional consisting of an $L^2$ data term and a Tikhonov-like regularization term (i.e., the $L^2$-norm of some derivative of $u$) with a box-constraint is presented, together with a Newton-like numerical scheme. For box-constrained total variation minimization, a fast gradient-based algorithm, called the monotone fast iterative shrinkage/thresholding algorithm (MFISTA), is proposed in [5] and a rate of convergence is proven. Based on the alternating direction method of multipliers (ADMM) [30], a solver for the box-constrained $L^2$-TV and $L^1$-TV models is derived in [27] and shown to be faster than MFISTA. In [28] a primal-dual algorithm for the box-constrained $L^1$-TV model and for box-constrained non-local total variation is presented. In order to obtain a constrained solution, which is positive and bounded from above by some intensity value, an exponential-type transform is applied to the $L^2$-TV model in [31]. Recently, in [32] a box-constraint has also been incorporated into a total variation model with a combined $L^1$-$L^2$ data fidelity, proposed in [33], for simultaneously removing Gaussian and impulse noise in images.
Setting the upper bound in the set $C$ to infinity and the lower bound to 0, i.e., $c_{\min} = 0$ and $c_{\max} = +\infty$, leads to a non-negativity constraint. Total variation minimization with a non-negativity constraint is a well-known technique to improve the quality of reconstructions in image processing; see for example [34,35] and references therein.
In this paper we are concerned with problems (3) and (4) when the lower bound $c_{\min}$ and the upper bound $c_{\max}$ in $C$ are finite. However, the analysis and the presented algorithms are easily adjustable to the situation when one of the bounds is set to $-\infty$ or $+\infty$, respectively. Note that a solution of problem (1) or problem (2) is in general not an element of the set $C$. However, since $g$ is an observation containing Gaussian noise with zero mean, a minimizer of problem (2) does lie in $C$ if $\alpha$ in problem (2) is sufficiently large and the original image $\hat{u} \in C$. This observation raises the question whether an optimal parameter $\alpha$ would lead to a minimizer that lies in $C$. If this were the case, then incorporating the box-constraint into the minimization problem would not improve the solution. In particular, there are situations in which a box-constraint does not affect the solution at all (see Section 3 below). Additionally, we expect that the box-constrained problems are more difficult to handle and numerically more costly to solve than problems (1) and (2).
In order to answer the question raised above, we numerically compute optimal values of $\alpha$ for box-constrained and non-box-constrained total variation minimization and compare the resulting reconstructions with respect to quality measures. By optimal values we mean parameters $\alpha$ such that the solutions of problems (1) and (2), or of problems (3) and (4), coincide. Note that several methods exist for computing the regularization parameter; see for example [36] for an overview of parameter selection algorithms for image restoration. Here we use the pAPS-algorithm proposed in [36] to compute reasonable $\alpha$ in problems (2) and (4). For minimizing problem (4) we derive a semi-smooth Newton method, which serves as a good method for quickly computing rather exact solutions. Second-order methods have already been proposed and used in image reconstruction; see [21,36,37,38,39]. However, to the best of our knowledge, until now semi-smooth Newton methods have not been presented for box-constrained total variation minimization. In this setting, in contrast to the aforementioned approaches, the box-constraint adds some additional difficulties in deriving the dual problems, which have to be calculated to obtain the desired method; see Section 4 for more details. The superlinear convergence of our method is guaranteed by the theory of semi-smooth Newton methods; see for example [21]. Note that our approach differs significantly from the Newton-like scheme presented in [29], where a smooth objective functional with a box-constraint is considered. This allows the authors of [29] to derive a Newton method without dualization. Here, our Newton method is based on dualization and may be viewed as a primal-dual (Newton) approach.
We remark, that a scalar regularization parameter might not be the best choice for every image restoration problem, since images usually consist of large uniform areas and parts with fine details, see for example [36,38]. It has been demonstrated, for example in [36,38,40,41] and references therein, that with the help of spatially varying regularization parameters one might be able to restore images visually better than with scalar parameters. In this vein we also consider
$$ \min_{u \in BV(\Omega)} \|Ku - g\|_{L^2(\Omega)}^2 + \int_\Omega \alpha |Du| \tag{5} $$
and
$$ \min_{u \in BV(\Omega) \cap C} \|Ku - g\|_{L^2(\Omega)}^2 + \int_\Omega \alpha |Du|, \tag{6} $$
where $\alpha : \Omega \to \mathbb{R}^+$ is a bounded continuous function [42]. We adapt our semi-smooth Newton method to approximately solve these two optimization problems and utilize the pLATV-algorithm of [36] to compute a locally varying $\alpha$.
Our numerical results show, see Section 6, that in many applications the quality of the restoration is more a question of how to choose the regularization parameter than of including a box-constraint. The solutions obtained by solving the box-constrained versions (3), (4) and (6) improve the restorations slightly, but not drastically. Nevertheless, we also report on a medical application where a non-negativity constraint significantly improves the restoration.
We observe that if the noise-level of the corrupted image is unknown, then we may use the information on the image intensity range (if known) to calculate a suitable parameter for problem (2). Note that in this situation the optimization problems (1) and (3) cannot be considered, since $\sigma$ is not at hand. We present a method which automatically computes the regularization parameter $\alpha$ in problem (2), provided the information that the original image $\hat{u} \in [c_{\min}, c_{\max}]$.
Hence the contribution of the paper is threefold: (i) We present a semi-smooth Newton method for the box-constrained total variation minimization problems (4) and (6). (ii) We investigate the influence of the box-constraint on the solution of the total variation minimization models with respect to the regularization parameter. (iii) In case the noise-level is not at hand, we propose a new automatic regularization parameter selection algorithm based on the box-constraint information.
The rest of the paper is organized as follows: In Section 2 we recall useful definitions and the Fenchel duality theorem, which will be used later. Section 3 is devoted to the analysis of box-constrained total variation minimization. In particular, we show that in certain cases adding a box-constraint to the considered problem does not change the solution at all. The semi-smooth Newton method for the box-constrained $L^2$-TV model (4) and its multiscale version (6) is derived in Section 4, and its numerical implementation is presented in Section 5. Numerical experiments investigating the usefulness of a box-constraint are shown in Section 6. In Section 7 we propose an automatic parameter selection algorithm using the box-constraint. Finally, in Section 8 conclusions are drawn.

2. Basic Terminology

Let $X$ be a Banach space. Its topological dual is denoted by $X^*$ and $\langle \cdot, \cdot \rangle$ denotes the canonical bilinear pairing over $X \times X^*$. A convex functional $J : X \to \overline{\mathbb{R}}$ is called proper if $\{v \in X : J(v) \ne +\infty\} \ne \emptyset$ and $J(v) > -\infty$ for all $v \in X$. A functional $J : X \to \overline{\mathbb{R}}$ is called lower semicontinuous if for every weakly convergent sequence $v^{(n)} \rightharpoonup \hat{v}$ we have
$$ \liminf_{v^{(n)} \rightharpoonup \hat{v}} J(v^{(n)}) \ge J(\hat{v}). $$
For a convex functional $J : X \to \overline{\mathbb{R}}$ we define the subdifferential of $J$ at $v \in X$ as the set-valued function
$$ \partial J(v) := \begin{cases} \emptyset & \text{if } J(v) = \infty, \\ \{ v^* \in X^* : \langle v^*, u - v \rangle + J(v) \le J(u) \ \ \forall u \in X \} & \text{otherwise.} \end{cases} $$
It is clear from this definition that $0 \in \partial J(v)$ if and only if $v$ is a minimizer of $J$.
The conjugate function (or Legendre transform) of a convex function $J : X \to \overline{\mathbb{R}}$ is defined as $J^* : X^* \to \overline{\mathbb{R}}$ with
$$ J^*(v^*) = \sup_{v \in X} \{ \langle v, v^* \rangle - J(v) \}. $$
From this definition we see that J * is the pointwise supremum of continuous affine functions and thus, according to [43] (Proposition 3.1, p. 14), convex, lower semicontinuous, and proper.
For an arbitrary set $S$ we denote by $\chi_S$ its characteristic function, defined by
$$ \chi_S(u) = \begin{cases} 0 & \text{if } u \in S, \\ \infty & \text{otherwise.} \end{cases} $$
We recall the Fenchel duality theorem; see, e.g., [43] for details.
Theorem 1 (Fenchel duality theorem).
Let $X$ and $Y$ be two Banach spaces with topological duals $X^*$ and $Y^*$, respectively, and $\Lambda : X \to Y$ a bounded linear operator with adjoint $\Lambda^* \in \mathcal{L}(Y^*, X^*)$. Further let $F : X \to \mathbb{R} \cup \{\infty\}$, $G : Y \to \mathbb{R} \cup \{\infty\}$ be convex, lower semicontinuous, and proper functionals. Assume there exists $u_0 \in X$ such that $F(u_0) < \infty$, $G(\Lambda u_0) < \infty$ and $G$ is continuous at $\Lambda u_0$. Then we have
$$ \inf_{u \in X} F(u) + G(\Lambda u) = \sup_{p \in Y^*} -F^*(\Lambda^* p) - G^*(-p) \tag{7} $$
and the problem on the right-hand side of (7) admits a solution $\bar{p}$. Moreover, $\bar{u}$ and $\bar{p}$ are solutions of the two optimization problems in (7), respectively, if and only if
$$ \Lambda^* \bar{p} \in \partial F(\bar{u}), \qquad -\bar{p} \in \partial G(\Lambda \bar{u}). $$

3. Limitation of Box-Constrained Total Variation Minimization

In this section we investigate the difference between the box-constrained problem (4) and the non-box-constrained problem (1). For the case when the operator K is the identity I, which is the relevant case in image denoising, we have the following obvious result:
Proposition 1.
Let $K = I$ and $g \in C$; then the minimizer $u^* \in BV(\Omega)$ of problem (2) also lies in the dynamic range $[c_{\min}, c_{\max}]$, i.e., $u^* \in BV(\Omega) \cap C$.
Proof of Proposition 1.
Assume $u^* \in BV(\Omega) \setminus C$ is a minimizer of problem (2). Define a function $\tilde{u}$ by
$$ \tilde{u}(x) := \begin{cases} u^*(x) & \text{if } c_{\min} \le u^*(x) \le c_{\max}, \\ c_{\max} & \text{if } u^*(x) > c_{\max}, \\ c_{\min} & \text{if } u^*(x) < c_{\min}, \end{cases} $$
for a.e. $x \in \Omega$. Then we have that
$$ \|u^* - g\|_{L^2(\Omega)} > \|\tilde{u} - g\|_{L^2(\Omega)} \quad \text{and} \quad \int_\Omega |Du^*| \ge \int_\Omega |D\tilde{u}|. \tag{8} $$
This implies that $u^*$ is not a minimizer of problem (2), which is a contradiction; hence $u^* \in BV(\Omega) \cap C$. ☐
This result easily extends to optimization problems of the type
$$ \min_{u \in BV(\Omega)} \alpha_1 \|u - g\|_{L^1(\Omega)} + \alpha_2 \|u - g\|_{L^2(\Omega)}^2 + \int_\Omega |Du|, \tag{9} $$
with $\alpha_1, \alpha_2 \ge 0$ and $\alpha_1 + \alpha_2 > 0$, since for $\tilde{u}$, defined as in the above proof, and a minimizer $u^* \in BV(\Omega) \setminus C$ of problem (9), the inequalities in (8) hold, as does $\|u^* - g\|_{L^1(\Omega)} > \|\tilde{u} - g\|_{L^1(\Omega)}$. Problem (9) has already been considered in [33,44,45] and can be viewed as a generalization of the $L^2$-TV model, since $\alpha_1 = 0$ in (9) yields the $L^2$-TV model and $\alpha_2 = 0$ in (9) yields the $L^1$-TV model.
Note that if an image is corrupted only by impulse noise, then the observed image $g$ lies in the dynamic range of the original image. For example, an image contaminated by salt-and-pepper noise may be written as
$$ g(x) = \begin{cases} c_{\min} & \text{with probability } s_1 \in [0,1), \\ c_{\max} & \text{with probability } s_2 \in [0,1), \\ \hat{u}(x) & \text{with probability } 1 - s_1 - s_2, \end{cases} $$
with $1 - s_1 - s_2 > 0$ [46], while for random-valued impulse noise $g$ is described by
$$ g(x) = \begin{cases} d & \text{with probability } s \in [0,1), \\ \hat{u}(x) & \text{with probability } 1 - s, \end{cases} $$
with $d$ being a uniformly distributed random variable in the image intensity range $[c_{\min}, c_{\max}]$. Hence, following Proposition 1, considering constrained total variation minimization in such cases would not change the minimizer, and no improvement in restoration quality can be expected.
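The two impulse noise models above are straightforward to simulate; the following numpy sketch (function names are ours) makes explicit that both corruptions keep $g$ inside $[c_{\min}, c_{\max}]$ whenever $\hat{u}$ is, which is exactly why the box-constraint is inactive in these cases:

```python
import numpy as np

def salt_and_pepper(u, s1, s2, c_min=0.0, c_max=1.0, rng=None):
    """Corrupt u: c_min w.p. s1, c_max w.p. s2, unchanged w.p. 1 - s1 - s2."""
    rng = np.random.default_rng() if rng is None else rng
    r = rng.random(u.shape)
    g = u.copy()
    g[r < s1] = c_min
    g[(r >= s1) & (r < s1 + s2)] = c_max
    return g

def random_valued_impulse(u, s, c_min=0.0, c_max=1.0, rng=None):
    """Corrupt u: uniform value in [c_min, c_max] w.p. s, unchanged w.p. 1 - s."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(u.shape) < s
    g = u.copy()
    g[mask] = rng.uniform(c_min, c_max, size=int(mask.sum()))
    return g
```

In both cases the corrupted observation never leaves the dynamic range of the original image, so by Proposition 1 the box-constrained and unconstrained minimizers coincide.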
This is the reason why we restrict ourselves in the rest of the paper to Gaussian white noise contaminated images and consider solely the L 2 -TV model.
It is clear that if a solution of the non-box-constrained optimization problem already fulfills the box-constraint, then it is equivalent to a minimizer of the box-constrained problem. However, note that the minimizer is in general not unique.
In the following we compare the solution of the box-constrained optimization problem (4) with the solution of the unconstrained minimization problem (2).
Proposition 2.
Let $u \in C \cap BV(\Omega)$ be a minimizer of
$$ J_C(u) := \tfrac12 \|u - g\|_2^2 + \alpha \int_\Omega |Du| + \chi_C(u) $$
and $w \in BV(\Omega)$ be a minimizer of
$$ J(w) := \tfrac12 \|w - g\|_2^2 + \alpha |Dw|(\Omega). $$
Then we have that
1. 
$$ J_C(w) \ge J_C(u) = J(u) \ge J(w). $$
2. 
$$ \tfrac12 \|u - w\|_2^2 \le J(u) - J(w) \le J_C(w) - J_C(u). $$
3. 
$$ \|u - w\|_2^2 \le 4 \|\xi - g\|_2^2 + 8 \alpha |D\xi|(\Omega) \quad \text{for any } \xi \in C \cap BV(\Omega). $$
Proof of Proposition 2.
 1. Follows directly from the optimality of $u$ and $w$, together with $J_C(u) = J(u)$ for $u \in C$.
 2. From [47] (Lemma 10.2) it follows that $\tfrac12 \|u - w\|_2^2 \le J(u) - J(w)$. For the second inequality we observe that
$$ J_C(w) - J_C(u) = \begin{cases} \infty & \text{if } w \notin C, \\ 0 & \text{if } w \in C, \end{cases} $$
where we used the fact that $w = u$ if $w \in C$. This implies that $J(u) - J(w) \le J_C(w) - J_C(u)$.
 3. For all $v \in C \cap BV(\Omega)$ we have
$$ \|u - w\|_2^2 \le 2\big(\|u - v\|_2^2 + \|v - w\|_2^2\big) \le 4\big(J(v) - J(u)\big) + 4\big(J(v) - J(w)\big) = 8 J(v) - 4 J(u) - 4 J(w), $$
where we used 2. and $(a+b)^2 \le 2(a^2 + b^2)$. For arbitrary $\xi \in C \cap BV(\Omega)$, let $v = \xi$; since $J(\xi) = \tfrac12 \|\xi - g\|_{L^2(\Omega)}^2 + \alpha |D\xi|(\Omega)$ and $J(u), J(w) \ge 0$, we get $\|u - w\|_2^2 \le 4 \|\xi - g\|_2^2 + 8 \alpha |D\xi|(\Omega)$. ☐
If in Proposition 2 $\xi \in C \cap BV(\Omega)$ is constant, then $|D\xi|(\Omega) = 0$, which implies that $\|u - w\|_2^2 \le 4 \|\xi - g\|_2^2$.

4. A Semi-Smooth Newton Method

4.1. The Model Problem

In general $K^*K$ is not invertible, which causes difficulties in deriving the dual problem of (4). In order to overcome this difficulty we penalize the $L^2$-TV model by considering the following neighboring problem:
$$ \min_{u \in C \cap H_0^1(\Omega)} \frac12 \|Ku - g\|_{L^2(\Omega)}^2 + \frac{\mu}{2} \int_\Omega |\nabla u|^2 \, dx + \alpha \int_\Omega |\nabla u|_2 \, dx, \tag{10} $$
where $\mu > 0$ is a very small constant such that problem (10) is a close approximation of the total variation regularized problem (4). Note that for $u \in H_0^1(\Omega)$ the total variation of $u$ in $\Omega$ equals $\int_\Omega |\nabla u|_2 \, dx$ [3]. A typical example for which $K^*K$ is indeed invertible is $K = I$, which is used for image denoising. In this case we may even set $\mu = 0$; see Section 6. The objective functional in problem (10) has already been considered, for example, in [21,48] for image restoration. In particular, in [21] a primal-dual semi-smooth Newton algorithm is introduced. Here, we adapt this approach to our box-constrained problem (10).
In the sequel we assume for simplicity that $-c_{\min} = c_{\max} =: c > 0$, which changes the set $C$ to $C := \{ u \in L^2(\Omega) : |u| \le c \}$. Note that any bounded image $\hat{u}$, i.e., one lying in the dynamic range $[a, b]$, can easily be transformed to an image $\tilde{u} \in [-c, c]$. Since this transform and $K$ are linear, the observation $g$ is also easily transformed to $\tilde{g} = K\tilde{u} + n$.
Example 1.
Let $\hat{u}$ be such that $a \le \hat{u}(x) \le b$ for all $x \in \Omega$. Then $|\hat{u}(x) - \frac{b+a}{2}| \le \frac{b-a}{2} =: c$ for all $x \in \Omega$, and we set $\tilde{u} = \hat{u} - \frac{b+a}{2}$. Hence, $\tilde{g} = K\hat{u} - K\frac{b+a}{2} + n = g - K\frac{b+a}{2}$.
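The shift in Example 1 can be sketched in a few lines of numpy (function names are ours; the inverse shift assumes the denoising case $K = I$, otherwise the observation must be shifted by $K\frac{b+a}{2}$ as in the example):

```python
import numpy as np

def to_symmetric_range(u_hat, a, b):
    """Shift an image with values in [a, b] into [-c, c], c = (b - a)/2."""
    c = (b - a) / 2.0
    return u_hat - (b + a) / 2.0, c

def from_symmetric_range(u_tilde, a, b):
    """Invert the shift, mapping [-c, c] back to [a, b]."""
    return u_tilde + (b + a) / 2.0
```

The shift is exactly invertible, so no information is lost by working in the symmetric range internally.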
Problem (10) can be equivalently written as
$$ \min_{u \in H_0^1(\Omega)} \frac12 \|Ku - g\|_{L^2(\Omega)}^2 + \frac{\mu}{2} \int_\Omega |\nabla u|^2 \, dx + \alpha \int_\Omega |\nabla u|_2 \, dx + \chi_C(u). \tag{11} $$
If $u^* \in H_0^1(\Omega)$ is a solution of problem (10) (and equivalently of problem (11)), then there exist $\lambda^* \in H_0^1(\Omega)^*$ and $\sigma^* \in \partial R(u^*)$, where $R(u) := \int_\Omega |\nabla u|_2 \, dx$, such that
$$ K^*K u^* - K^* g - \mu \Delta u^* + \alpha \sigma^* + \lambda^* = 0, \qquad \langle \lambda^*, u - u^* \rangle \le 0 $$
for all $u \in C \cap H_0^1(\Omega)$.
For implementation reasons (namely, to obtain a fast, second-order algorithm) we approximate the non-smooth characteristic function $\chi_C$ by a smooth function in the following way:
$$ \chi_C(u) \approx \frac{\eta}{2} \Big( \|\max\{u - c_{\max}, 0\}\|_{L^2(\Omega)}^2 + \|\max\{c_{\min} - u, 0\}\|_{L^2(\Omega)}^2 \Big) = \frac{\eta}{2} \|\max\{|u| - c, 0\}\|_{L^2(\Omega)}^2, $$
where $\eta > 0$ is large. This leads to the following optimization problem:
$$ \min_{u \in H_0^1(\Omega)} \frac12 \|Ku - g\|_{L^2(\Omega)}^2 + \frac{\mu}{2} \int_\Omega |\nabla u|^2 \, dx + \alpha \int_\Omega |\nabla u|_2 \, dx + \frac{\eta}{2} \|\max\{|u| - c, 0\}\|_{L^2(\Omega)}^2. \tag{12} $$
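For a discrete image, the smooth box-penalty above and its pointwise (semi-smooth) derivative, which reappears as $p_2$ in the optimality system of Section 4.2, can be evaluated as follows (a minimal sketch; function names are ours):

```python
import numpy as np

def box_penalty(u, c, eta):
    """Value of (eta/2)*||max(|u| - c, 0)||_2^2 for a discrete image u."""
    excess = np.maximum(np.abs(u) - c, 0.0)
    return 0.5 * eta * np.sum(excess**2)

def box_penalty_grad(u, c, eta):
    """Pointwise derivative eta*max(|u| - c, 0)*sign(u)."""
    return eta * np.maximum(np.abs(u) - c, 0.0) * np.sign(u)
```

The penalty vanishes exactly on the feasible set $|u| \le c$ and grows quadratically outside it, so increasing $\eta$ enforces the box-constraint more strictly, as observed in Section 6.1.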
Remark 1.
By the assumption $-c_{\min} = c_{\max}$ we have excluded the cases (i) $c_{\min} = 0$, $c_{\max} = +\infty$ and (ii) $c_{\min} = -\infty$, $c_{\max} = 0$. In these situations we just need to approximate $\chi_C$ by (i) $\frac{\eta}{2} \|\max\{-u, 0\}\|_{L^2(\Omega)}^2$ and (ii) $\frac{\eta}{2} \|\max\{u, 0\}\|_{L^2(\Omega)}^2$, respectively. With this in mind, a primal-dual semi-smooth Newton method can be derived for these two cases in a similar fashion as done below for problem (12).

4.2. Dualization

By a standard calculation one obtains that the dual of problem (12) is given by
$$ \sup_{p = (p_1, p_2) \in L^2(\Omega) \times L^2(\Omega)} -\frac12 ||| \Lambda^* p + K^* g |||_B^2 + \frac12 \|g\|_{L^2(\Omega)}^2 - \chi_A(p_1) - \frac{1}{2\eta} \|p_2\|_{L^2(\Omega)}^2 - c \|p_2\|_{L^1(\Omega)} \tag{13} $$
with $\Lambda^* p = -\operatorname{div} p_1 + p_2$ and $A := \{ v \in L^2(\Omega) : |v|_2 \le \alpha \}$. As the divergence operator does not have a trivial kernel, the solution of the optimization problem (13) is not unique. In order to render problem (13) strictly concave we add an additional term, yielding the following problem:
$$ \min_{p \in L^2(\Omega) \times L^2(\Omega)} \frac12 ||| \Lambda^* p + K^* g |||_B^2 - \frac12 \|g\|_{L^2(\Omega)}^2 + \chi_A(p_1) + \frac{1}{2\eta} \|p_2\|_{L^2(\Omega)}^2 + c \|p_2\|_{L^1(\Omega)} + \frac{\gamma}{2\alpha} \|p_1\|_{L^2(\Omega)}^2, \tag{14} $$
where $\gamma > 0$ is a fixed parameter.
Proposition 3.
The dual problem of problem (14) is given by
$$ \min_{u \in H_0^1(\Omega)} \frac12 \|Ku - g\|_{L^2(\Omega)}^2 + \frac{\mu}{2} \|\nabla u\|_{L^2(\Omega)}^2 + \alpha \int_\Omega \phi_\gamma(\nabla u)(x) \, dx + \frac{\eta}{2} \|\max\{|u| - c, 0\}\|_{L^2(\Omega)}^2 \tag{15} $$
with
$$ \phi_\gamma(q)(x) = \begin{cases} \frac{1}{2\gamma} |q(x)|_2^2 & \text{if } |q(x)|_2 < \gamma, \\ |q(x)|_2 - \frac{\gamma}{2} & \text{if } |q(x)|_2 \ge \gamma. \end{cases} $$
The proof of this statement is somewhat technical and is therefore deferred to Appendix A.
Similarly as in [21], one can show that the solution of problem (15) converges to the minimizer of (12) as $\gamma \to 0$.
From the Fenchel duality theorem we obtain the following characterization of the solutions $u$ and $p$ of problem (15) and problem (14):
$$ \operatorname{div} p_1 - p_2 = K^*K u - K^* g - \mu \Delta u \quad \text{in } H^{-1}(\Omega), \tag{16} $$
$$ p_1 = \frac{\alpha}{\gamma} \nabla u \quad \text{if } |p_1|_2 < \alpha, \quad \text{in } L^2(\Omega), \tag{17} $$
$$ p_1 = \alpha \frac{\nabla u}{|\nabla u|_2} \quad \text{if } |p_1|_2 = \alpha, \quad \text{in } L^2(\Omega), \tag{18} $$
$$ p_2 = \eta \max\{|u| - c, 0\} \operatorname{sign}(u) \quad \text{in } L^2(\Omega). \tag{19} $$
This system can be solved efficiently by a semi-smooth Newton algorithm. Moreover, Equations (17) and (18) can be condensed into $p_1 = \alpha \nabla u / \max\{\gamma, |\nabla u|_2\}$.
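The condensed formula for $p_1$ is a simple pointwise operation on the discrete gradient field; a numpy sketch (the function name is ours, and the gradient is assumed stacked component-wise as in Section 5):

```python
import numpy as np

def p1_from_gradient(grad_u, alpha, gamma):
    """Evaluate p1 = alpha * grad(u) / max(gamma, |grad(u)|_2) pointwise.

    grad_u: array of shape (2, N) holding the two gradient components.
    """
    norm = np.sqrt(grad_u[0]**2 + grad_u[1]**2)
    return alpha * grad_u / np.maximum(gamma, norm)
```

By construction the result is always dual-feasible: where $|\nabla u|_2 < \gamma$ one gets the linear branch $(\alpha/\gamma)\nabla u$, and everywhere $|p_1|_2 \le \alpha$ holds, consistent with the constraint set $A$.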

4.3. Adaptation to Non-Scalar α

For locally adaptive $\alpha$, i.e., $\alpha : \Omega \to \mathbb{R}^+$ a function, the minimization problem (12) changes to
$$ \min_{u \in H_0^1(\Omega)} \frac12 \|Ku - g\|_{L^2(\Omega)}^2 + \frac{\mu}{2} \int_\Omega |\nabla u|^2 \, dx + \int_\Omega \alpha(x) |\nabla u|_2 \, dx + \frac{\eta}{2} \|\max\{|u| - c, 0\}\|_{L^2(\Omega)}^2. \tag{20} $$
Its dual problem is given by
$$ \min_{p \in L^2(\Omega) \times L^2(\Omega)} \frac12 ||| \Lambda^* p + K^* g |||_B^2 - \frac12 \|g\|_{L^2(\Omega)}^2 + \chi_{\tilde{A}}(p_1) + \frac{1}{2\eta} \|p_2\|_{L^2(\Omega)}^2 + c \|p_2\|_{L^1(\Omega)}, $$
where $\tilde{A} := \{ v \in L^2(\Omega) : |v(x)|_2 \le \alpha(x) \}$. Similarly, but slightly differently than above, cf. problem (14), we penalize by
$$ \min_{p \in L^2(\Omega) \times L^2(\Omega)} \frac12 ||| \Lambda^* p + K^* g |||_B^2 - \frac12 \|g\|_{L^2(\Omega)}^2 + \chi_{\tilde{A}}(p_1) + \frac{1}{2\eta} \|p_2\|_{L^2(\Omega)}^2 + c \|p_2\|_{L^1(\Omega)} + \frac{\gamma}{2} \|p_1\|_{L^2(\Omega)}^2. $$
Then the dual of this problem turns out to be
$$ \min_{u \in H_0^1(\Omega)} \frac12 \|Ku - g\|_{L^2(\Omega)}^2 + \frac{\mu}{2} \|\nabla u\|_{L^2(\Omega)}^2 + \int_\Omega \phi_{\gamma,\alpha}(\nabla u)(x) \, dx + \frac{\eta}{2} \|\max\{|u| - c, 0\}\|_{L^2(\Omega)}^2 \tag{21} $$
with
$$ \phi_{\gamma,\alpha}(q)(x) = \begin{cases} \frac{1}{2\gamma} |q(x)|_2^2 & \text{if } |q(x)|_2 < \gamma \alpha(x), \\ \alpha(x) |q(x)|_2 - \frac{\gamma}{2} |\alpha(x)|^2 & \text{if } |q(x)|_2 \ge \gamma \alpha(x). \end{cases} $$
Denoting by $u$ a solution of problem (21) and by $p$ the solution of the associated pre-dual problem, the optimality conditions due to the Fenchel theorem [43] are given by
$$ \operatorname{div} p_1 - p_2 = K^*K u - K^* g - \mu \Delta u, \qquad p_1 = \frac{\alpha \nabla u}{\max\{\gamma \alpha, |\nabla u|_2\}}, \qquad p_2 = \eta \max\{|u| - c, 0\} \operatorname{sign}(u). $$

5. Numerical Implementation

Similarly as in the works [21,36,37,38,39], where semi-smooth Newton methods for non-smooth systems arising from image restoration models have been derived, we can solve the discrete version of the system (16)–(19), using finite differences, efficiently by a primal-dual algorithm. Let $u^h \in \mathbb{R}^N$, $p_1^h \in \mathbb{R}^{2N}$, $p_2^h \in \mathbb{R}^N$, $g^h \in \mathbb{R}^N$ denote the discrete image intensity, the dual variables, and the observed data vector, respectively, where $N \in \mathbb{N}$ is the number of elements (pixels) in the discrete image $\Omega^h$. Moreover, we denote by $\alpha^h > 0$ the regularization parameter. Correspondingly, we define $\nabla^h \in \mathbb{R}^{2N \times N}$ as the discrete gradient operator, $\Delta^h \in \mathbb{R}^{N \times N}$ as the discrete Laplace operator, $K^h \in \mathbb{R}^{N \times N}$ as a discrete operator, and $(K^h)^t$ its transpose. Moreover, $\operatorname{div}^h = -(\nabla^h)^t$. Here $|\cdot|$, $\max\{\cdot,\cdot\}$, and $\operatorname{sign}(\cdot)$ are understood component-wise for vectors. Moreover, we use the function $[|\cdot|] : \mathbb{R}^{2N} \to \mathbb{R}^{2N}$ with $[|v^h|]_i = [|v^h|]_{i+N} = \sqrt{(v_i^h)^2 + (v_{i+N}^h)^2}$ for $1 \le i \le N$.
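The discrete operators just introduced can be sketched in numpy for a small $n \times n$ image; dense matrices are used here purely for clarity (in practice one would use sparse storage), and function names are ours. The divergence is then obtained as $-(\nabla^h)^t$ by construction:

```python
import numpy as np

def forward_diff_1d(n):
    """1D forward-difference matrix; last row zero (homogeneous boundary)."""
    D = np.zeros((n, n))
    for i in range(n - 1):
        D[i, i] = -1.0
        D[i, i + 1] = 1.0
    return D

def discrete_gradient(n):
    """Gradient in R^{2N x N} for an n-by-n image (N = n*n, row-major)."""
    D = forward_diff_1d(n)
    I = np.eye(n)
    Dx = np.kron(D, I)   # differences along the first image axis
    Dy = np.kron(I, D)   # differences along the second image axis
    return np.vstack([Dx, Dy])

def pointwise_norm(v, N):
    """[|v|]_i = [|v|]_{i+N} = sqrt(v_i^2 + v_{i+N}^2) for v in R^{2N}."""
    m = np.sqrt(v[:N]**2 + v[N:]**2)
    return np.concatenate([m, m])
```

With `G = discrete_gradient(n)`, the discrete divergence is simply `-G.T`, so the adjoint relation $\operatorname{div}^h = -(\nabla^h)^t$ holds exactly, as required by the derivation above.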

5.1. Scalar α

The discrete version of (16)–(19) reads
$$ \begin{aligned} 0 &= -\operatorname{div}^h p_1^h + \eta\, m_0 + (K^h)^t K^h u^h - (K^h)^t g^h - \mu \Delta^h u^h, \\ 0 &= D^h(m_\gamma)\, p_1^h - \alpha^h \nabla^h u^h, \end{aligned} \tag{22} $$
where $D^h(v)$ is a diagonal matrix with the vector $v$ on its diagonal, $m_0 := \operatorname{sign}(u^h) \max\{|u^h| - c, 0\}$, and $m_\gamma := \max\{\gamma, [|\nabla^h u^h|]\}$. We define
$$ \chi_{A_\gamma} = D^h(t_\gamma), \quad (t_\gamma)_i = \begin{cases} 0 & \text{if } (m_\gamma)_i = \gamma, \\ 1 & \text{else}; \end{cases} \qquad \chi_{A_{c_{\max}}} = D^h(t_{c_{\max}}), \quad (t_{c_{\max}})_i = \begin{cases} 0 & \text{if } (m_{c_{\max}})_i = 0, \\ 1 & \text{else}; \end{cases} \qquad \chi_{A_{c_{\min}}} = D^h(t_{c_{\min}}), \quad (t_{c_{\min}})_i = \begin{cases} 0 & \text{if } (m_{c_{\min}})_i = 0, \\ 1 & \text{else}, \end{cases} $$
where $m_{c_{\max}} := \max\{u^h - c, 0\}$ and $m_{c_{\min}} := \max\{-u^h - c, 0\}$. Further, we set
$$ M^h(v) = \begin{pmatrix} D^h(v_x) & D^h(v_y) \\ D^h(v_x) & D^h(v_y) \end{pmatrix} \quad \text{with } v = (v_x, v_y)^t \in \mathbb{R}^{2N}. $$
Applying a generalized Newton step to solve (22) at $(u_k^h, p_{1,k}^h)$ yields
$$ \begin{pmatrix} \eta (\chi_{A_{c_{\max}}} + \chi_{A_{c_{\min}}}) + (K^h)^t K^h - \mu \Delta^h & -\operatorname{div}^h \\ C_k^h & D^h(m_\gamma) \end{pmatrix} \begin{pmatrix} \delta_u \\ \delta_{p_1} \end{pmatrix} = -\begin{pmatrix} F_1^k \\ F_2^k \end{pmatrix}, \tag{23} $$
where
$$ \begin{aligned} C_k^h &= \big( D^h(p_{1,k}^h)\, \chi_{A_\gamma}\, D^h(m_\gamma)^{-1} M^h(\nabla^h u_k^h) - \alpha^h D^h(e_{2N}) \big) \nabla^h, \\ F_1^k &= -\operatorname{div}^h p_{1,k}^h + \eta\, m_0 + (K^h)^t K^h u_k^h - (K^h)^t g^h - \mu \Delta^h u_k^h, \\ F_2^k &= D^h(m_\gamma)\, p_{1,k}^h - \alpha^h \nabla^h u_k^h, \end{aligned} $$
and $e_{2N} \in \mathbb{R}^{2N}$ denotes the vector of all ones. The diagonal matrix $D^h(m_\gamma)$ is invertible, i.e.,
$$ \delta_{p_1} = D^h(m_\gamma)^{-1} \big( -F_2^k - C_k^h \delta_u \big), $$
and hence we can eliminate $\delta_{p_1}$ from the Newton system, resulting in
$$ H_k \delta_u = f_k, \tag{24} $$
where
$$ H_k := \eta (\chi_{A_{c_{\max}}} + \chi_{A_{c_{\min}}}) + (K^h)^t K^h - \mu \Delta^h + \operatorname{div}^h D^h(m_\gamma)^{-1} C_k^h, \qquad f_k := -F_1^k - \operatorname{div}^h D^h(m_\gamma)^{-1} F_2^k. $$
If $H_k$ is positive definite, then the solution $\delta_u$ of (24) exists and is a descent direction for (15). However, in general we cannot expect $H_k$ to be positive definite. In order to ensure that $H_k$ is positive definite, we project $p_{1,k}^h$ onto its feasible set by setting $\big((p_{1,k}^h)_i, (p_{1,k}^h)_{i+N}\big)$ to
$$ \frac{\alpha^h}{\max\{\alpha^h, [|p_{1,k}^h|]_i\}} \big( (p_{1,k}^h)_i, (p_{1,k}^h)_{i+N} \big) \quad \text{for } i = 1, \dots, N, $$
which guarantees
$$ [|p_{1,k}^h|]_i \le \alpha^h \tag{25} $$
for $i = 1, \dots, 2N$. The modified system matrix, denoted by $H_k^+$, is then positive definite. Our semi-smooth Newton solver may then be written as:
Primal-dual Newton method (pdN): Initialize $(u_0^h, p_{1,0}^h) \in \mathbb{R}^N \times \mathbb{R}^{2N}$ and set $k := 0$.
 1. Determine the active sets $\chi_{A_{c_{\max}}} \in \mathbb{R}^{N \times N}$, $\chi_{A_{c_{\min}}} \in \mathbb{R}^{N \times N}$, $\chi_{A_\gamma} \in \mathbb{R}^{N \times N}$.
 2. If (25) is not satisfied, then compute $H_k^+$; otherwise set $H_k^+ := H_k$.
 3. Solve $H_k^+ \delta_u = f_k$ for $\delta_u$.
 4. Compute $\delta_{p_1}$ using $\delta_u$.
 5. Update $u_{k+1}^h := u_k^h + \delta_u$ and $p_{1,k+1}^h := p_{1,k}^h + \delta_{p_1}$.
 6. Stop, or set $k := k + 1$ and continue with step 1.
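The dual-feasibility projection used in step 2 to obtain $H_k^+$ is a pointwise rescaling of the stacked dual variable; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def project_p1(p1, alpha):
    """Rescale each pair ((p1)_i, (p1)_{i+N}) so that [|p1|]_i <= alpha.

    p1: stacked dual variable in R^{2N}; alpha: scalar regularization weight.
    """
    N = p1.size // 2
    norm = np.sqrt(p1[:N]**2 + p1[N:]**2)
    scale = alpha / np.maximum(alpha, norm)   # equals 1 where already feasible
    return p1 * np.concatenate([scale, scale])
```

Pairs that already satisfy the constraint are left untouched, while infeasible pairs are scaled radially back onto the boundary $[|p_1|]_i = \alpha$.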
This algorithm converges at a superlinear rate, which follows from standard theory; see [20,21]. The Newton method is terminated as soon as the initial residual is reduced by a factor of $10^4$.
Note that, since $\eta = 0$ implies $p_2 = 0$, in this case the proposed primal-dual Newton method reduces to the method in [21].

5.2. Non-Scalar α

A similar semi-smooth Newton method can be derived for the locally adaptive case by noting that then $\alpha^h \in \mathbb{R}^N$, and hence the second equation in (22) changes to
$$ 0 = D^h(m_\gamma)\, p_1^h - D^h\big((\alpha^h, \alpha^h)^t\big) \nabla^h u^h, $$
where $m_\gamma := \max\{\gamma \alpha^h, [|\nabla^h u^h|]\}$, leading to (23) with
$$ C_k^h = \big( D^h(p_{1,k}^h)\, \chi_{A_\gamma}\, D^h(m_\gamma)^{-1} M^h(\nabla^h u_k^h) - D^h\big((\alpha^h, \alpha^h)^t\big) \big) \nabla^h $$
and
$$ F_2^k = D^h(m_\gamma)\, p_{1,k}^h - D^h\big((\alpha^h, \alpha^h)^t\big) \nabla^h u_k^h. $$
The positive definite modified matrix $H_k^+$ is then obtained by setting $\big((p_{1,k}^h)_i, (p_{1,k}^h)_{i+N}\big)$ to $\frac{\alpha_i^h}{\max\{\alpha_i^h, [|p_{1,k}^h|]_i\}} \big((p_{1,k}^h)_i, (p_{1,k}^h)_{i+N}\big)$ for $i = 1, \dots, N$.

6. Numerical Experiments

For our numerical studies we consider the images shown in Figure 1, of size 256 × 256 pixels, and in Figure 2. The image intensity range of all original images considered in this paper is $[0, 1]$, i.e., $c_{\min} = 0$ and $c_{\max} = 1$. Our proposed algorithms automatically transform these images into the dynamic range $[-c, c]$, here with $c = 1/2$. That is, if $\hat{u} \in [0, 1]$ is the original image before any corruption, then $\hat{u}(x) - \frac12 \in [-\frac12, \frac12]$. Moreover, the solution generated by the semi-smooth Newton method is afterwards transformed back, i.e., the generated solution $\tilde{u}$ is transformed to $\tilde{u} + \frac12$. Note that $\tilde{u} + \frac12$ is not necessarily in $[0, 1]$, unless $\tilde{u} \in [-\frac12, \frac12]$.
As a comparison of the different restoration qualities of the restored images we use the PSNR [49] (peak signal-to-noise ratio), given by
$$ \mathrm{PSNR} = 20 \log_{10} \frac{1}{\|\hat{u} - u^*\|}, $$
where $\hat{u}$ denotes the original image before any corruption and $u^*$ the restored image, which is widely used as an image quality assessment measure, and the MSSIM [50] (mean structural similarity), which usually relates to perceived visual quality better than PSNR. For both PSNR and MSSIM, larger values indicate better reconstructions than small values.
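Reading $\|\hat{u} - u^*\|$ as the root-mean-square error and using a peak intensity of 1 (the image range in this paper), the PSNR formula can be sketched as follows (this normalization is our assumption, stated for illustration):

```python
import numpy as np

def psnr(u_hat, u_star, peak=1.0):
    """PSNR = 20*log10(peak / RMSE); larger values mean closer to the original."""
    rmse = np.sqrt(np.mean((u_hat - u_star) ** 2))
    return 20.0 * np.log10(peak / rmse)
```

For instance, a restoration that is uniformly off by 0.1 from the original on the intensity range $[0,1]$ yields a PSNR of 20 dB under this convention.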
In our experiments we also report on the computational time (in seconds) and the number of iterations (it) needed until the considered algorithms are terminated.
In all the following experiments the parameter $\mu$ is chosen to be 0 for image denoising (i.e., $K = I$), since then no additional smoothing is needed, and $\mu = 10^{-6}$ if $K \ne I$ (i.e., for image deblurring, image inpainting, reconstruction from partial Fourier data, and reconstruction from a sampled Radon transform).

6.1. Dependency on the Parameter η

We start by investigating the influence of the parameter $\eta$ on the behavior of the semi-smooth Newton algorithm and its generated solution. Let us recall that $\eta$ controls how strictly the box-constraint is adhered to. In order to visualize how well the box-constraint is fulfilled for a chosen $\eta$, in Figure 3 we depict $\max_x u(x) - c_{\max}$ and $\min_x u(x) - c_{\min}$ with $c_{\max} = 1$, $c_{\min} = 0$, and $u$ being the back-transformed solution, i.e., $u = \tilde{u} + \frac12$, where $\tilde{u}$ is obtained via the semi-smooth Newton method. As long as $\max_x u(x) - c_{\max}$ is positive or $\min_x u(x) - c_{\min}$ is negative, the box-constraint is not perfectly adhered to. From our experiments for image denoising and image deblurring, see Figure 3, we clearly see that the larger $\eta$, the more strictly the box-constraint is adhered to. In the rest of our experiments we choose $\eta = 10^6$, which seems sufficiently large to us, and the box-constraint then appears to hold accurately enough.

6.2. Box-Constrained Versus Non-Box-Constrained

In the rest of this section we investigate how much the solution (and its restoration quality) depends on the box-constraint, and whether this is a matter of how the regularization parameter is chosen. We start by comparing, for different values of $\alpha$, the solutions obtained by the semi-smooth Newton method without a box-constraint (i.e., $\eta = 0$) with the ones generated by the same algorithm with $\eta = 10^6$ (i.e., a box-constraint is incorporated). Our results are shown in Table 1 for image denoising and in Table 2 for image deblurring. We observe that for small $\alpha$ we obtain "much" better results with respect to PSNR and MSSIM with a box-constraint than without. The reason for this is that if no box-constraint is used and $\alpha$ is small, then nearly no regularization is performed, and hence noise, which violates the box-constraint, is still present. Therefore incorporating a box-constraint is reasonable for these choices of parameters. However, if $\alpha$ is sufficiently large, then we numerically observe that the solutions of the box-constrained and non-box-constrained problems are the same. This is not surprising, because there exists $\bar{\alpha} > 0$ such that for all $\alpha > \bar{\alpha}$ the solution of problem (2) is $\frac{1}{|\Omega|} \int_\Omega g$; see [4] (Lemma 2.3). That is, for such $\alpha$ the minimizer of problem (2) is the average of the observation, which lies in the image intensity range of the original image as long as the mean of the Gaussian noise is 0 (or sufficiently small). This implies that in such a case the minimizers of problem (2) and problem (4) are equivalent. Actually, this equivalence already holds if $\alpha$ is sufficiently large such that the respective solution of problem (2) lies in the dynamic range of the original image, which is the case in our experiments for $\alpha = 0.4$. Hence, whether or not it makes sense to incorporate a box-constraint into the considered model depends on the choice of parameters.
The third and fourth values of α in Table 1 and Table 2 are the ones which equalize problem (2) and problem (1), and problem (4) and problem (3), respectively. In the sequel we call such parameters optimal, since a solution of the penalized problem then also solves the related constrained problem. We note, however, that these α-values in general do not give the best results with respect to PSNR and MSSIM, although they are usually close to the results with the largest PSNR and MSSIM. For both types of applications, i.e., image denoising and image deblurring, these optimal α-values are nearly the same for problem (2) and problem (1), and for problem (4) and problem (3), respectively, and hence also the PSNR and MSSIM of the respective results are nearly the same. Nevertheless, we mention that for image deblurring the largest PSNR and MSSIM in these experiments are obtained for α = 0.01 with a box-constraint.
In Table 3 and Table 4 we also report on an additional strategy. In this approach we threshold (or project) the observation g such that the box-constraint holds in every pixel and then use the proposed Newton method with η = 0. For large α this is an inferior approach, but for small α it seems to work similarly to incorporating a box-constraint, at least for image denoising. Overall, however, it is outperformed by the other approaches.
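The thresholding strategy amounts to a pixelwise projection of the observation onto the box; a one-line numpy sketch (the helper name is hypothetical, and the range [0, 1] is assumed):

```python
import numpy as np

def project_to_box(g, c_min=0.0, c_max=1.0):
    # Pixelwise projection (thresholding) of the observation onto [c_min, c_max];
    # the projected image is then fed to the Newton method with eta = 0.
    return np.clip(g, c_min, c_max)

g = np.array([-0.10, 0.45, 1.20])
g_proj = project_to_box(g)       # -> [0.0, 0.45, 1.0]
```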

6.3. Comparison with Optimal Regularization Parameters

In order to determine the optimal parameters α for a range of different examples we assume that the noise-level σ is at hand and utilize the pAPS-algorithm presented in [36]. Alternatively, instead of computing a suitable α, we may solve the constrained optimization problems (1) and (3) directly by using the alternating direction method of multipliers (ADMM). An implementation of the ADMM for solving problem (1) is presented in [51], which we refer to as the ADMM in the sequel. For solving problem (3) a possible implementation is suggested in [27]. However, for comparison purposes we use a slightly different version, which uses the same succession of updates as the ADMM in [51]; see Appendix B for a description of this version. In the sequel we refer to this algorithm as the box-constrained ADMM. We do not expect the same results for the pAPS-algorithm and the (box-constrained) ADMM, since in the pAPS-algorithm we use the semi-smooth Newton method, which generates an approximate solution of problem (15), which is not equivalent to problem (1) or problem (3). In all experiments we set the initial regularization parameter of the pAPS-algorithm to 10^{-3}.

6.3.1. pdN versus ADMM

We start by comparing the performance of the proposed primal-dual semi-smooth Newton method (pdN) and the ADMM. In these experiments we assume that we know the optimal parameters α, which are then used in the pdN. Note that a fair comparison of these two methods is difficult, since they solve different optimization problems, as already mentioned above. However, we still compare them in order to better understand the performance of the algorithms in the following section.
The comparison is performed for image denoising and image deblurring and the respective findings are collected in Table 5 and Table 6. There we clearly observe that the proposed pdN with η = 10^6 reaches the desired reconstruction significantly faster than the box-constrained ADMM in all experiments. While the number of iterations for image denoising is approximately the same for both methods, for image deblurring the box-constrained pdN needs significantly fewer iterations than the other method. In particular, the pdN needs nearly the same number of iterations independently of the application, although more iterations are needed for small σ. Note that the pdN converges at a superlinear rate, and hence a faster convergence than that of the box-constrained ADMM is not surprising but supports the theory.

6.3.2. Image Denoising

In Table 7 and Table 8 we summarize our findings for image denoising. We observe that adding a box-constraint to the considered optimization problem leads to at most a slight improvement in PSNR and MSSIM. While in some cases there is some improvement (see for example the image “numbers”), in other examples no improvement is gained (see for example the image “barbara”). In order to make the overall improvement more visible, in the last row of Table 7 and Table 8 we add the average PSNR and MSSIM over all computed restorations. It shows that on average we may expect a gain of around 0.05 in PSNR and around 0.001 in MSSIM, which is nearly nothing. Moreover, we observe that the pAPS-algorithm computes the optimal α for the box-constrained problem on average faster than the one for the non-box-constrained problem. We remark that the box-constrained version needs fewer (or at most the same number of) iterations as the version with η = 0. The reason for this might be that in each iteration, due to the thresholding of the approximation by the box-constraint, a longer or better step towards the minimizer is performed than by the non-box-constrained pAPS-algorithm. At the same time the reconstructions of the box-constrained pAPS-algorithm also yield higher PSNR and MSSIM than the ones obtained by the pAPS-algorithm with η = 0. The situation seems to be different for the ADMM: on average, the box-constrained ADMM and the (non-box-constrained) ADMM need approximately the same run-time.
For several examples (i.e., the images “phantom”, “cameraman”, “barbara”, “house”) the choice of the regularization parameter by the box-constrained pAPS-algorithm with respect to the noise-level is depicted in Figure 4. Clearly, the parameter is selected to be smaller the less noise is present in the image.
We now ask whether a box-constraint is more important when the regularization parameter is non-scalar, i.e., when α : Ω → R₊ is a function. For computing a suitable locally varying α we use the pLATV-algorithm proposed in [36], whereby in all considered examples we set the initial (non-scalar) regularization parameter to be constantly 10^{-2}. Note that the continuity assumption on α in problem (5) and problem (6) is not needed in our discrete setting, since $\sum_{x \in \Omega^h} \alpha(x)\, |\nabla^h u^h(x)|$ is well defined for any α ∈ R^N. We approximate such α for problem (20) with η = 0 (unconstrained) and with η = 10^6 (box-constrained) and obtain also here that the gain with respect to PSNR and MSSIM is of the same order as in the scalar case, see Table 9.
For σ = 0.1 and the image “barbara” we show in Figure 5 the reconstructions generated by the considered algorithms. As indicated by the quality measures, all the reconstructions look nearly alike, whereby in the reconstructions produced by the pLATV-algorithm details, like the pattern of the scarf, are (slightly) better preserved. The spatially varying α of the pLATV-algorithm is depicted in Figure 6. There we clearly see that at the scarf around the neck and shoulder the values of α are small, allowing the details to be better preserved.

6.3.3. Image Deblurring

Now we consider the images in Figure 1a–c, convolve them with a Gaussian kernel of size 9 × 9 and standard deviation 3, and then add Gaussian noise with mean 0 and standard deviation σ. We again compare the results obtained by the pAPS-algorithm, the ADMM, and the pLATV-algorithm for the box-constrained and non-box-constrained problems. Our findings are summarized in Table 10. Also here we observe a slight improvement with respect to PSNR and MSSIM when a box-constraint is used. The choice of the regularization parameters by the box-constrained pAPS-algorithm is depicted in Figure 7. In Figure 8 we present for the image “cameraman” and σ = 0.01 the reconstructions produced by the respective methods. Also here, as indicated by the quality measures, all the restorations look nearly the same. The locally varying α generated by the pLATV-algorithm is depicted in Figure 9.

6.3.4. Image Inpainting

The problem of filling in and recovering missing parts of an image is called image inpainting. We call the missing part the inpainting domain and denote it by D ⊂ Ω. The linear bounded operator K is then a multiplier, i.e., Ku = 1_{Ω∖D} · u, where 1_{Ω∖D} is the indicator function of Ω∖D. Note that K is not injective and hence K*K is not invertible. Therefore, in this experiment we need to set μ > 0 so that we can use the proposed primal-dual semi-smooth Newton method. In particular, as mentioned above, we choose μ = 10^{-6}.
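For intuition, the inpainting operator and the reason for the μ-perturbation can be sketched as follows (the mask layout is hypothetical):

```python
import numpy as np

# K u = 1_{Omega\D} . u : keep observed pixels, zero out the inpainting domain D.
def apply_K(u, mask):
    # mask is 1 on Omega\D (observed) and 0 on D (missing)
    return mask * u

mask = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1]], dtype=float)
u = np.ones((3, 3))

# K is self-adjoint and idempotent here, so K*K u = mask * u.
# Wherever mask == 0, (K*K) u vanishes, i.e., K*K is singular,
# which is why mu > 0 is required in the semi-smooth Newton method.
KtK_diag = mask * mask
```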
In the considered experiments the inpainting domain consists of gray bars, as shown in Figure 10a, where additionally additive white Gaussian noise with σ = 0.1 is present. In particular, we consider examples with σ ∈ {0.3, 0.2, 0.1, 0.05, 0.01, 0.005}. The performance of the pAPS- and pLATV-algorithm with and without a box-constraint in reconstructing the considered examples is summarized in Table 11 and Table 12. We observe that adding a box-constraint does not seem to change the restoration considerably. However, as in the case of image denoising, the pAPS-algorithm with the box-constrained pdN needs fewer iterations, and hence less time, than the same algorithm without a box-constraint to reach the stopping criterion. Figure 10 shows a particular example for image inpainting and denoising with σ = 0.1. It demonstrates that visually there is nearly no difference between the restorations obtained by the considered approaches. Moreover, we observe that the pLATV-algorithm seems not to be suited to the task of image inpainting. A reason for this might be that the pLATV-algorithm does not take the inpainting domain correctly into account. This is visible in Figure 11, where the spatially varying α is chosen small in the inpainting domain, which does not necessarily seem to be a suitable choice.

6.3.5. Reconstruction from Partial Fourier-Data

In magnetic resonance imaging one wishes to reconstruct an image which is only given by partial Fourier data, additionally distorted by additive Gaussian noise with zero mean and standard deviation σ. Hence, the linear bounded operator is K = SF, where F is the 2D Fourier matrix and S is a downsampling operator which selects only a few output frequencies. The frequencies are usually sampled along radial lines in the frequency domain; in our experiments we use 32 radial lines, as visualized in Figure 12.
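A sketch of K = SF and its adjoint using numpy's unitary FFT; the binary sampling mask stands in for the 32 radial lines, whose exact construction is omitted here:

```python
import numpy as np

def apply_K(u, sample_mask):
    # K = S F: 2D Fourier transform, then keep only the sampled frequencies.
    return sample_mask * np.fft.fft2(u, norm="ortho")

def apply_K_adjoint(f, sample_mask):
    # K* = F^{-1} S: zero-fill unsampled frequencies, inverse transform.
    return np.fft.ifft2(sample_mask * f, norm="ortho")
```

With norm="ortho" the Fourier transform is unitary, so the adjoint relation ⟨Ku, f⟩ = ⟨u, K*f⟩ holds up to floating-point error.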
In our experiments we consider the images of Figure 2, transformed to their Fourier representation. As already mentioned, we sample the frequencies along 32 radial lines and add Gaussian noise with zero mean and standard deviation σ. In particular, we consider different noise-levels, i.e., σ ∈ {0.3, 0.2, 0.1, 0.05, 0.01, 0.005}. We reconstruct the obtained data via the pAPS- and pLATV-algorithm by using the semi-smooth Newton method first with η = 0 (no box-constraint) and then with η = 10^6 (with box-constraint). In Table 13 we collect our findings. We observe that the pLATV-algorithm seems not to be suitable for this task, since it generates inferior results. For scalar α we observe, as before, that a slight improvement with respect to PSNR and MSSIM can be expected when a box-constraint is used. In Figure 13 we present the reconstructions generated by the considered algorithms for a particular example, demonstrating the visual behavior of the methods.

6.3.6. Reconstruction from Sampled Radon-Data

In computerized tomography, instead of a Fourier-transform a Radon-transform is used in order to obtain a visual image from the measured physical data. Also here the data is obtained along radial lines. We consider the Shepp-Logan phantom, see Figure 14a, and a slice of a body, see Figure 15a. The sinograms in Figure 14b and Figure 15b are obtained by sampling along 30 and 60 radial lines, respectively. Note that the sinogram is in general noisy. Here the data is corrupted by Gaussian white noise with standard deviation σ, whereby σ = 0.1 for the data of the Shepp-Logan phantom and σ = 0.05 for the data of the body slice. Using the inverse Radon-transform we obtain Figure 16a,d, which are obviously suboptimal reconstructions. A more sophisticated approach utilizes the L²-TV model, which yields the reconstructions depicted in Figure 16b,e, where we use the pAPS-algorithm and the proposed primal-dual algorithm with η = 0. However, since an image can be assumed to have non-negative values, we may incorporate a non-negativity constraint via the box-constrained L²-TV model, yielding the results in Figure 16c,f, which are much better reconstructions. Also here the parameter α is automatically computed by the pAPS-algorithm and the non-negativity constraint is incorporated by setting η = 10^6 in the semi-smooth Newton method. In order to compute the Radon-matrix in our experiments we used the FlexBox [52].
Other applications where a box-constraint, and in particular a non-negativity constraint, improves the image reconstruction quality significantly include for example magnetic particle imaging, see for example [53] and the references therein.

7. Automated Parameter Selection

We recall that if the noise-level σ is not known, then problems (1) and (3) cannot be considered. Moreover, the selection of the parameter α in problem (2) cannot be achieved by using the pAPS-algorithm, since this algorithm is based on problem (1). Note that also other methods, like the unbiased predictive risk estimator method (UPRE) [54,55] and approaches based on the Stein unbiased risk estimator method (SURE) [56,57,58,59,60], use knowledge of the noise-level and hence cannot be used for selecting a suitable parameter if σ is unknown.
If we assume that σ is unknown but the image intensity range of the original image û is known, i.e., û ∈ [c_min, c_max], then we may use this information for choosing the parameter α in problem (2). This may be done by applying the following algorithm:
Box-constrained automatic parameter selection (bcAPS): Initialize $\alpha_0 > 0$ (sufficiently small) and set n := 0.
 
 1. Solve $u_n \in \arg\min_{u \in BV(\Omega)} \|Ku - g\|_{L^2(\Omega)}^2 + \alpha_n \int_\Omega |Du|$.
 2. If $u_n \notin [c_{\min}, c_{\max}]$, increase $\alpha_n$ (i.e., $\alpha_{n+1} := \tau \alpha_n$ with $\tau > 1$), else STOP.
 3. Set n := n + 1 and continue with step 1.
Here τ > 1 is an arbitrary parameter chosen manually such that the generated restoration u is not over-smoothed, i.e., there exists x ∈ Ω such that u(x) = c_min and/or u(x) = c_max. In our experiments it turned out that τ = 1.05 seems to be a reasonable choice, so that the generated solution has the desired property.
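The bcAPS loop can be sketched as follows; `solve_l2tv` stands for any solver of the step-1 problem, e.g. the proposed semi-smooth Newton method with η = 0 (the callable and the toy surrogate in the note below are assumptions, not the paper's implementation):

```python
import numpy as np

def bcaps(g, solve_l2tv, alpha0=1e-4, tau=1.05, c_min=0.0, c_max=1.0, max_iter=500):
    """Box-constrained automatic parameter selection (bcAPS), sketched.

    Increase alpha by the factor tau until the restoration u fulfills
    the box-constraint u in [c_min, c_max]; then stop.
    """
    alpha = alpha0
    u = solve_l2tv(g, alpha)
    for _ in range(max_iter):
        if c_min <= u.min() and u.max() <= c_max:
            break                 # box-constraint fulfilled: STOP
        alpha *= tau              # step 2: increase the regularization
        u = solve_l2tv(g, alpha)  # step 1: solve again with the new alpha
    return u, alpha
```

As a toy stand-in for the L²-TV solver one can use a shrinkage of g toward mid-gray, u(α) = 0.5 + (g − 0.5)/(1 + α), which is monotone in α; the loop then terminates once 1/(1 + α) is small enough.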

Numerical Examples

In our experiments the minimization problem in step 1 of the bcAPS-algorithm is approximately solved by the proposed primal-dual semi-smooth Newton method with η = 0. We set the initial regularization parameter to α₀ = 10^{-4} for image denoising and to α₀ = 10^{-3} for image deblurring. Moreover, in the bcAPS-algorithm we set τ = 1.05 to increase the regularization parameter.
Experiments for image denoising, see Table 14, show that the bcAPS-algorithm finds suitable parameters in the sense that the PSNR and MSSIM of these reconstructions are similar to the ones obtained with the pAPS-algorithm (when σ is known); compare also with Table 7 and Table 10. This is explained by the observation that the regularization parameters α calculated by the bcAPS-algorithm do not differ much from the ones obtained via the pAPS-algorithm. For image deblurring, see Table 15, the situation is less convincing. In particular, the obtained regularization parameters of the two considered methods differ more significantly than before, resulting in different PSNR and MSSIM. However, in the case σ = 0.05 the considered quality measures of the generated reconstructions are nearly the same.
We also remark that in all experiments the pAPS-algorithm generated reconstructions with larger PSNR and MSSIM than the ones obtained by the bcAPS-algorithm. From this observation it seems more useful to know the noise-level than the image intensity range. However, if the noise-level is unknown but the image intensity range is known, then the bcAPS-algorithm may be a suitable choice.

8. Conclusions

In this work we investigated the quality of restored images when the image intensity range of the original image is additionally incorporated into the L²-TV model as a box-constraint. We observe that this box-constraint may indeed improve the quality of the reconstructions. However, if the observation already fulfills the box-constraint, then it clearly does not change the solution at all. Moreover, in many applications the proper choice of the regularization parameter seems much more important than an additional box-constraint. Nevertheless, also then a box-constraint may improve the quality of the restored image, although the improvement is then only marginal. On the other hand, the additional box-constraint may influence the computational time significantly. In particular, for image deblurring and in magnetic resonance imaging the computational time of the pAPS-algorithm is about doubled when a box-constraint is used, while the quality of the restoration is basically not improved. This suggests that for these applications an additional box-constraint may not be reasonable. Note that the run-time of the ADMM is independent of whether a box-constraint is used or not.
For certain applications, as in computerized tomography, a box-constraint (in particular a non-negativity constraint) improves the reconstruction considerably. Hence, the question arises under which conditions an additional box-constraint indeed has significant influence on the reconstruction when the present parameters are chosen in a nearly optimal way.
If the noise-level of a corrupted image is unknown but the image intensity range of the original image is at hand, then the image intensity range may be used to calculate a suitable regularization parameter α. This can be done as explained in Section 7. Potential future research may consider different approaches, for example in an optimal control setting. Then one may want to solve
$$
\min_{u,\alpha}\ \|\max\{u - c_{\max},\, 0\}\|_{L^2(\Omega)}^2 + \|\max\{c_{\min} - u,\, 0\}\|_{L^2(\Omega)}^2 + \kappa\, J(\alpha)
\quad \text{s.t.} \quad u \in \operatorname*{arg\,min}_{u}\ \|Ku - g\|_{L^2(\Omega)}^2 + \alpha \int_\Omega |Du|,
$$
where κ > 0 and J is a suitable functional, cf. [61,62,63] for other optimal control approaches in image reconstruction.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Proof of Proposition 3

In order to compute the Fenchel dual of problem (14) we set q = p and define
$$
F(q) := \chi_{A}(q_1) + \frac{\gamma}{2\alpha}\,\|q_1\|_2^2 + \frac{1}{2\eta}\,\|q_2\|_2^2 + c\,\|q_2\|_1, \qquad
G(\Lambda q) := \frac{1}{2}\, |\!|\!| K^*g - \Lambda q |\!|\!|_B^2 - \frac{1}{2}\,\|g\|_2^2, \qquad
\Lambda q := q_2 - \operatorname{div} q_1,
$$
with $X = L^2(\Omega) \times L^2(\Omega)$ and $Y = H_0^1(\Omega)^* = H^{-1}(\Omega)$.
By the definition of the conjugate we have
$$
G^*(u^*) = \sup_{u \in Y} \Big\{ \langle u, u^* \rangle - \frac{1}{2}\, \langle B(K^*g - u),\, K^*g - u \rangle + \frac{1}{2}\,\|g\|_2^2 \Big\}.
$$
The supremum is attained at u if
$$
\partial_u \big\{ \langle u, u^* \rangle - G(u) \big\} = u^* + B(K^*g - u) = 0,
$$
which implies $u = B^{-1}u^* + K^*g$. Hence
$$
\begin{aligned}
G^*(u^*) &= \langle B^{-1}u^* + K^*g,\, u^* \rangle - \tfrac{1}{2}\, \langle B B^{-1} u^*,\, B^{-1}u^* \rangle + \tfrac{1}{2}\,\|g\|_2^2 \\
&= \langle u^*, B^{-1}u^* \rangle + \langle u^*, K^*g \rangle - \tfrac{1}{2}\, \langle u^*, B^{-1}u^* \rangle + \tfrac{1}{2}\,\|g\|_2^2 \\
&= \tfrac{1}{2}\, \langle u^*, (K^*K + \mu I)\, u^* \rangle + \langle u^*, K^*g \rangle + \tfrac{1}{2}\,\|g\|_2^2 \\
&= \tfrac{1}{2}\, \langle K u^*, K u^* \rangle + \tfrac{\mu}{2}\, \langle u^*, u^* \rangle + \langle K u^*, g \rangle + \tfrac{1}{2}\,\|g\|_2^2 \\
&= \tfrac{1}{2}\, \|K u^* + g\|_2^2 + \tfrac{\mu}{2}\, \|u^*\|_2^2.
\end{aligned}
$$
In order to compute the conjugate F* we split F into the two functionals F₁ and F₂ defined as
$$
F_1(q_1) := \chi_{A}(q_1) + \frac{\gamma}{2\alpha}\,\|q_1\|_2^2, \qquad F_2(q_2) := \frac{1}{2\eta}\,\|q_2\|_2^2 + c\,\|q_2\|_1,
$$
so that $F^*(q^*) = F_1^*(q_1^*) + F_2^*(q_2^*)$. We have
$$
F_1^*(q_1^*) = \sup_{q_1 \in L^2(\Omega)} \Big\{ \langle q_1, q_1^* \rangle - \chi_{A}(q_1) - \frac{\gamma}{2\alpha}\,\|q_1\|_2^2 \Big\}.
$$
The supremum is attained at a function q₁ with $|q_1|_2 \le \alpha$ if
$$
q_1^* - \frac{\gamma}{\alpha}\, q_1 = 0.
$$
This equality implies $q_1 = \frac{\alpha}{\gamma}\, q_1^*$, from which we deduce
$$
F_1^*(q_1^*)(x) = \begin{cases} \dfrac{\alpha}{2\gamma}\, |q_1^*(x)|_2^2 & \text{if } |q_1^*(x)|_2 < \gamma, \\[4pt] \alpha\, |q_1^*(x)|_2 - \dfrac{\alpha\gamma}{2} & \text{if } |q_1^*(x)|_2 \ge \gamma. \end{cases}
$$
For the conjugate F₂* of F₂ we get
$$
F_2^*(q_2^*) = \sup_{q_2 \in L^2(\Omega)} \Big\{ \langle q_2, q_2^* \rangle - \frac{1}{2\eta}\,\|q_2\|_2^2 - c\,\|q_2\|_1 \Big\}.
$$
Hence the supremum is attained at q₂ if
$$
q_2^* - \frac{1}{\eta}\, q_2 - c\,\sigma\cdot\mathbf{1} = 0 \quad \text{with } \sigma\cdot\mathbf{1} \in \partial\|q_2\|_1. \tag{A1}
$$
Thus, inserting $q_2 = \eta q_2^* - \eta c\,\sigma\cdot\mathbf{1}$,
$$
\begin{aligned}
F_2^*(q_2^*) &= \langle \eta q_2^* - \eta c\,\sigma\cdot\mathbf{1},\, q_2^* \rangle - \frac{1}{2\eta}\,\|\eta q_2^* - \eta c\,\sigma\cdot\mathbf{1}\|_2^2 - c\,\|\eta q_2^* - \eta c\,\sigma\cdot\mathbf{1}\|_1 \\
&= \langle \eta q_2^* - \eta c\,\sigma\cdot\mathbf{1},\, q_2^* - c\,\sigma\cdot\mathbf{1} \rangle + \langle \eta q_2^* - \eta c\,\sigma\cdot\mathbf{1},\, c\,\sigma\cdot\mathbf{1} \rangle - \frac{\eta}{2}\,\|q_2^* - c\,\sigma\cdot\mathbf{1}\|_2^2 - c\eta\,\|q_2^* - c\,\sigma\cdot\mathbf{1}\|_1 \\
&= \frac{\eta}{2}\,\|q_2^* - c\,\sigma\cdot\mathbf{1}\|_2^2 + \eta \int_{\{q_2 \ge 0\}} \big( (q_2^* - c)\,c - |c\, q_2^* - c^2| \big)\, dx + \eta \int_{\{q_2 < 0\}} \big( (q_2^* + c)(-c) - |c\, q_2^* + c^2| \big)\, dx.
\end{aligned}
$$
From (A1) we obtain that
$$
\text{if } q_2 = 0 \text{ then } q_2^* = c\,\sigma\cdot\mathbf{1}, \qquad \text{if } q_2 > 0 \text{ then } q_2^* > c, \qquad \text{if } q_2 < 0 \text{ then } q_2^* < -c.
$$
Using this observation yields
$$
F_2^*(q_2^*) = \frac{\eta}{2}\,\|q_2^* - c\,\sigma\cdot\mathbf{1}\|_2^2 = \frac{\eta}{2} \left( \int_{\{q_2 \ge 0\}} |q_2^* - c|^2\, dx + \int_{\{q_2 < 0\}} |q_2^* + c|^2\, dx \right) = \frac{\eta}{2}\, \big\|\max\{|q_2^*| - c,\, 0\}\big\|_2^2.
$$
By the Fenchel duality theorem the assertion follows.

Appendix B. Box-Constrained ADMM

In [51] an ADMM for solving the constrained problem (1) in a finite dimensional setting is presented. In a similar way we may solve the discrete version of problem (3), i.e.,
$$
\min_{u^h \in \mathbb{R}^N} \|\nabla^h u^h\|_1 \quad \text{s.t.} \quad u^h \in C^h, \quad \frac{1}{N}\, \|S^h H^h u^h - g^h\|_2^2 \le \sigma^2, \tag{A2}
$$
where we use the notation of Section 5 and $K^h = S^h H^h$ with $H^h \in \mathbb{R}^{N \times N}$ being a circulant matrix and $S^h \in \mathbb{R}^{N \times N}$ as in [51]. Moreover, $C^h := \{u^h \in \mathbb{R}^N : c_{\min} \le u_i^h \le c_{\max} \text{ for all } i \in \{1, \dots, N\}\}$, $\|\cdot\|_i$ refers to the standard definition of the $\ell_i$-norm, i.e., $\|u\|_i := \big(\sum_{j=1}^N |u_j|^i\big)^{1/i}$, and $\langle \cdot, \cdot \rangle$ denotes the $\ell_2$ inner product.
In order to apply the ADMM to problem (A2) we rewrite it as follows:
$$
\min_{w^h \in \mathbb{R}^N \times \mathbb{R}^N} \|w^h\|_1 \quad \text{s.t.} \quad w^h = \nabla^h u^h, \quad y^h = H^h u^h, \quad \frac{1}{N}\, \|S^h y^h - g^h\|_2^2 \le \sigma^2, \quad z^h = u^h, \quad z^h \in C^h,
$$
which is equivalent to
$$
\min_{w^h \in \mathbb{R}^N \times \mathbb{R}^N,\ y^h, z^h \in \mathbb{R}^N} \|w^h\|_1 + \chi_{Y^h}(y^h) + \chi_{C^h}(z^h) \quad \text{s.t.} \quad w^h = \nabla^h u^h, \quad y^h = H^h u^h, \quad z^h = u^h,
$$
where $Y^h := \{y^h \in \mathbb{R}^N : \frac{1}{N}\, \|S^h y^h - g^h\|_2^2 \le \sigma^2\}$.
The augmented Lagrangian of this problem is
$$
\mathcal{L}(u^h, v^h, \lambda^h) = f(v^h) + \langle \lambda^h, B^h u^h - v^h \rangle + \frac{\beta}{2}\, \|B^h u^h - v^h\|_2^2,
$$
with $v^h = (w^h, y^h, z^h)^T \in \mathbb{R}^{4N}$, $f(v^h) = \|w^h\|_1 + \chi_{Y^h}(y^h) + \chi_{C^h}(z^h)$, $B^h = (\nabla^h, H^h, D^h(e_N))^T \in \mathbb{R}^{4N \times N}$, and $\beta > 0$ denoting the penalty parameter. Hence the ADMM for solving problem (A2) runs as follows:
Box-constrained ADMM: Initialize $v_0^h \in \mathbb{R}^{4N}$, $\lambda_0^h \in \mathbb{R}^{4N}$ and set n = 0.
 
 1. Compute $u_{n+1}^h \in \arg\min_{u^h} \langle \lambda_n^h, B^h u^h - v_n^h \rangle + \frac{\beta}{2}\, \|B^h u^h - v_n^h\|_2^2$.
 2. Compute $v_{n+1}^h = \arg\min_{v^h} f(v^h) + \langle \lambda_n^h, B^h u_{n+1}^h - v^h \rangle + \frac{\beta}{2}\, \|B^h u_{n+1}^h - v^h\|_2^2$.
 3. Update $\lambda_{n+1}^h = \lambda_n^h + \beta\, (B^h u_{n+1}^h - v_{n+1}^h)$.
 4. Stop or set n = n + 1 and continue with step 1.
In order to obtain $u_{n+1}^h$ in step 1 a linear system has to be solved, which may be diagonalized by the DFT. The solution of the minimization problem in step 2 may be computed as described in [51] (Section 4.2). More precisely, we have
$$
v_{n+1}^h = \arg\min_{v^h}\ f(v^h) + \langle \lambda_n^h, B^h u_{n+1}^h - v^h \rangle + \frac{\beta}{2}\, \|B^h u_{n+1}^h - v^h\|_2^2 = \arg\min_{v^h}\ f(v^h) + \frac{\beta}{2}\, \Big\| v^h - \Big( B^h u_{n+1}^h + \frac{\lambda_n^h}{\beta} \Big) \Big\|_2^2 =: \operatorname{prox}_{f/\beta}\Big( B^h u_{n+1}^h + \frac{\lambda_n^h}{\beta} \Big),
$$
where $\operatorname{prox}_f$ is called the proximal operator of f. If we write $B^h u_{n+1}^h + \frac{\lambda_n^h}{\beta} = (w^h, y^h, z^h)^T$, we can decompose $\operatorname{prox}_{f/\beta}(\cdot)$ as
$$
\operatorname{prox}_{f/\beta} \begin{pmatrix} w^h \\ y^h \\ z^h \end{pmatrix} = \begin{pmatrix} \operatorname{prox}_{\|\cdot\|_1/\beta}(w^h) \\ \operatorname{prox}_{\chi_{Y^h}/\beta}(y^h) \\ \operatorname{prox}_{\chi_{C^h}/\beta}(z^h) \end{pmatrix}.
$$
From [51] we know that
$$
\operatorname{prox}_{\|\cdot\|_1/\beta}(w^h) = \begin{cases} w^h & \text{if } [\,|w^h|\,] = 0, \\[4pt] w^h - \min\big\{ \tfrac{1}{\beta},\, [\,|w^h|\,] \big\}\, \dfrac{w^h}{[\,|w^h|\,]} & \text{otherwise}, \end{cases}
$$
and $\operatorname{prox}_{\chi_{Y^h}/\beta}(y^h)$ is a projection onto a weighted $\ell_2$-ball, which may be implemented as described in [64]. From the definition of the proximal operator we see that
$$
\operatorname{prox}_{\chi_{C^h}/\beta}(z^h) = \arg\min_{\tilde{z}^h \in C^h} \|\tilde{z}^h - z^h\|_2
$$
is just the orthogonal projection of $z^h$ onto $C^h$.
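The three proximal maps of step 2 can be sketched as follows; note that the weighted ℓ₂-ball projection of [64] is replaced here by a plain Euclidean ball for brevity, so this is an illustrative simplification rather than the implementation used in the experiments:

```python
import numpy as np

def prox_l1(w1, w2, beta):
    # Pixelwise soft-shrinkage of the gradient field w = (w1, w2) from [51]:
    # w stays unchanged where its pointwise norm is 0,
    # otherwise the norm is shrunk by 1/beta.
    norm = np.sqrt(w1**2 + w2**2)
    scale = np.where(norm > 0,
                     np.maximum(norm - 1.0 / beta, 0.0) / np.maximum(norm, 1e-15),
                     0.0)
    return w1 * scale, w2 * scale

def prox_ball(y, g, r):
    # Simplified projection onto {y : ||y - g||_2 <= r} (unweighted ball).
    d = np.linalg.norm(y - g)
    return y if d <= r else g + (r / d) * (y - g)

def prox_box(z, c_min=0.0, c_max=1.0):
    # Orthogonal projection onto the box C^h.
    return np.clip(z, c_min, c_max)
```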
We recall that the ADMM converges for any β > 0, see for example [30,65,66]. In our numerical experiments we set β = 100 and use the same stopping criterion as suggested in [51].

References

  1. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D 1992, 60, 259–268. [Google Scholar] [CrossRef]
  2. Ambrosio, L.; Fusco, N.; Pallara, D. Functions of Bounded Variation and Free Discontinuity Problems; Oxford Mathematical Monographs, The Clarendon Press, Oxford University Press: New York, NY, USA, 2000; pp. xviii+434. [Google Scholar]
  3. Giusti, E. Minimal Surfaces and Functions of Bounded Variation; Vol. 80—Monographs in Mathematics; Birkhäuser Verlag: Basel, Switzerland, 1984; pp. xii+240. [Google Scholar]
  4. Chambolle, A.; Lions, P.L. Image recovery via total variation minimization and related problems. Numer. Math. 1997, 76, 167–188. [Google Scholar] [CrossRef]
  5. Beck, A.; Teboulle, M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 2009, 18, 2419–2434. [Google Scholar] [CrossRef] [PubMed]
  6. Boţ, R.I.; Hendrich, C. A Douglas–Rachford type primal-dual method for solving inclusions with mixtures of composite and parallel-sum type monotone operators. SIAM J. Optim. 2013, 23, 2541–2565. [Google Scholar] [CrossRef]
  7. Boţ, R.I.; Hendrich, C. Convergence analysis for a primal-dual monotone+ skew splitting algorithm with applications to total variation minimization. J. Math. Imaging Vis. 2014, 49, 551–568. [Google Scholar] [CrossRef]
  8. Burger, M.; Sawatzky, A.; Steidl, G. First order algorithms in variational image processing. In Splitting Methods in Communication, Imaging, Science, and Engineering; Springer: Cham, Switzerland, 2014; pp. 345–407. [Google Scholar]
  9. Chambolle, A. An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 2004, 20, 89–97. [Google Scholar]
  10. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011, 40, 120–145. [Google Scholar] [CrossRef]
  11. Chan, T.F.; Golub, G.H.; Mulet, P. A nonlinear primal-dual method for total variation-based image restoration. SIAM J. Sci. Comput. 1999, 20, 1964–1977. [Google Scholar] [CrossRef]
  12. Combettes, P.L.; Vũ, B.C. Variable metric forward–backward splitting with applications to monotone inclusions in duality. Optimization 2014, 63, 1289–1318. [Google Scholar] [CrossRef]
  13. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef]
  14. Condat, L. A primal–dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 2013, 158, 460–479. [Google Scholar] [CrossRef] [Green Version]
  15. Darbon, J.; Sigelle, M. A fast and exact algorithm for total variation minimization. In Pattern Recognition and Image Analysis; Springer: Berlin/Heidelberg, Germany, 2005; pp. 351–359. [Google Scholar]
  16. Darbon, J.; Sigelle, M. Image restoration with discrete constrained total variation. I. Fast and exact optimization. J. Math. Imaging Vis. 2006, 26, 261–276. [Google Scholar] [CrossRef]
  17. Daubechies, I.; Teschke, G.; Vese, L. Iteratively solving linear inverse problems under general convex constraints. Inverse Probl. Imaging 2007, 1, 29–46. [Google Scholar] [CrossRef]
  18. Dobson, D.C.; Vogel, C.R. Convergence of an iterative method for total variation denoising. SIAM J. Numer. Anal. 1997, 34, 1779–1791. [Google Scholar] [CrossRef]
  19. Goldstein, T.; Osher, S. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2009, 2, 323–343. [Google Scholar] [CrossRef]
  20. Hintermüller, M.; Kunisch, K. Total bounded variation regularization as a bilaterally constrained optimization problem. SIAM J. Appl. Math. 2004, 64, 1311–1333. [Google Scholar]
  21. Hintermüller, M.; Stadler, G. An infeasible primal-dual algorithm for total bounded variation-based inf-convolution-type image restoration. SIAM J. Sci. Comput. 2006, 28, 1–23. [Google Scholar] [CrossRef]
  22. Nesterov, Y. Smooth minimization of non-smooth functions. Math. Program. 2005, 103, 127–152. [Google Scholar] [CrossRef]
  23. Osher, S.; Burger, M.; Goldfarb, D.; Xu, J.; Yin, W. An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 2005, 4, 460–489. [Google Scholar] [CrossRef]
  24. Alliney, S. A property of the minimum vectors of a regularizing functional defined by means of the absolute norm. IEEE Trans. Signal Process. 1997, 45, 913–917. [Google Scholar] [CrossRef]
  25. Nikolova, M. Minimizers of cost-functions involving nonsmooth data-fidelity terms. Application to the processing of outliers. SIAM J. Numer. Anal. 2002, 40, 965–994. [Google Scholar] [CrossRef]
  26. Nikolova, M. A variational approach to remove outliers and impulse noise. J. Math. Imaging Vis. 2004, 20, 99–120. [Google Scholar] [CrossRef]
  27. Chan, R.H.; Tao, M.; Yuan, X. Constrained total variation deblurring models and fast algorithms based on alternating direction method of multipliers. SIAM J. Imaging Sci. 2013, 6, 680–697. [Google Scholar] [CrossRef]
  28. Ma, L.; Ng, M.; Yu, J.; Zeng, T. Efficient box-constrained TV-type-l1 algorithms for restoring images with impulse noise. J. Comput. Math. 2013, 31, 249–270. [Google Scholar] [CrossRef]
  29. Morini, B.; Porcelli, M.; Chan, R.H. A reduced Newton method for constrained linear least-squares problems. J. Comput. Appl. Math. 2010, 233, 2200–2212. [Google Scholar] [CrossRef]
  30. Gabay, D.; Mercier, B. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 1976, 2, 17–40. [Google Scholar] [CrossRef]
  31. Williams, B.M.; Chen, K.; Harding, S.P. A new constrained total variational deblurring model and its fast algorithm. Numer. Algorithms 2015, 69, 415–441. [Google Scholar] [CrossRef]
  32. Liu, R.W.; Shi, L.; Yu, S.C.H.; Wang, D. Box-constrained second-order total generalized variation minimization with a combined L1,2 data-fidelity term for image reconstruction. J. Electron. Imaging 2015, 24, 033026. [Google Scholar] [CrossRef]
  33. Hintermüller, M.; Langer, A. Subspace correction methods for a class of nonsmooth and nonadditive convex variational problems with mixed L1/L2 data-fidelity in image processing. SIAM J. Imaging Sci. 2013, 6, 2134–2173. [Google Scholar] [CrossRef]
  34. Bardsley, J.M.; Vogel, C.R. A nonnegatively constrained convex programming method for image reconstruction. SIAM J. Sci. Comput. 2004, 25, 1326–1343. [Google Scholar] [CrossRef]
  35. Vogel, C.R. Computational Methods for Inverse Problems; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2002; Volume 23. [Google Scholar]
  36. Langer, A. Automated Parameter Selection for Total Variation Minimization in Image Restoration. J. Math. Imaging Vis. 2017, 57, 239–268. [Google Scholar] [CrossRef]
  37. Dong, Y.; Hintermüller, M.; Neri, M. An efficient primal-dual method for L1 TV image restoration. SIAM J. Imaging Sci. 2009, 2, 1168–1189. [Google Scholar] [CrossRef]
  38. Dong, Y.; Hintermüller, M.; Rincon-Camacho, M.M. Automated regularization parameter selection in multi-scale total variation models for image restoration. J. Math. Imaging Vis. 2011, 40, 82–104. [Google Scholar] [CrossRef]
  39. Hintermüller, M.; Rincon-Camacho, M.M. Expected absolute value estimators for a spatially adapted regularization parameter choice rule in L1-TV-based image restoration. Inverse Probl. 2010, 26, 085005. [Google Scholar] [CrossRef]
  40. Cao, V.C.; Reyes, J.C.D.L.; Schönlieb, C.B. Learning optimal spatially-dependent regularization parameters in total variation image restoration. Inverse Probl. 2017, 33, 074005. [Google Scholar]
  41. Langer, A. Locally adaptive total variation for removing mixed Gaussian-impulse noise. 2017; submitted. [Google Scholar]
  42. Attouch, H.; Buttazzo, G.; Michaille, G. Variational Analysis in Sobolev and BV Spaces: Applications to PDEs and Optimization, 2nd ed.; MOS-SIAM Series on Optimization; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA; Mathematical Optimization Society: Philadelphia, PA, USA, 2014; pp. xii+793. [Google Scholar]
  43. Ekeland, I.; Témam, R. Convex Analysis and Variational Problems, English ed.; Vol. 28—Classics in Applied Mathematics; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 1999; pp. xiv+402. [Google Scholar]
  44. Alkämper, M.; Langer, A. Using DUNE-ACFEM for Non-smooth Minimization of Bounded Variation Functions. Arch. Numer. Softw. 2017, 5, 3–19. [Google Scholar]
  45. Langer, A. Automated parameter selection in the L1-L2-TV model for removing Gaussian plus impulse noise. Inverse Probl. 2017, 33, 074002. [Google Scholar] [CrossRef]
  46. Chan, R.H.; Ho, C.W.; Nikolova, M. Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization. IEEE Trans. Image Process. 2005, 14, 1479–1485. [Google Scholar] [CrossRef] [PubMed]
  47. Bartels, S. Numerical Methods for Nonlinear Partial Differential Equations; Springer: Cham, Switzerland, 2015; Volume 14. [Google Scholar]
  48. Ito, K.; Kunisch, K. An active set strategy based on the augmented Lagrangian formulation for image restoration. ESAIM 1999, 33, 1–21. [Google Scholar] [CrossRef]
  49. Bovik, A.C. Handbook of Image and Video Processing; Academic Press: Cambridge, MA, USA, 2010. [Google Scholar]
  50. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  51. Ng, M.K.; Weiss, P.; Yuan, X. Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods. SIAM J. Sci. Comput. 2010, 32, 2710–2736. [Google Scholar] [CrossRef]
  52. Dirks, H. A Flexible Primal-Dual Toolbox. ArXiv E-Prints, 2016; arXiv:1603.05835. [Google Scholar]
  53. Storath, M.; Brandt, C.; Hofmann, M.; Knopp, T.; Salamon, J.; Weber, A.; Weinmann, A. Edge preserving and noise reducing reconstruction for magnetic particle imaging. IEEE Trans. Med. Imaging 2017, 36, 74–85. [Google Scholar] [CrossRef] [PubMed]
  54. Lin, Y.; Wohlberg, B.; Guo, H. UPRE method for total variation parameter selection. Signal Process. 2010, 90, 2546–2551. [Google Scholar] [CrossRef]
  55. Mallows, C.L. Some comments on Cp. Technometrics 1973, 15, 661–675. [Google Scholar]
  56. Blu, T.; Luisier, F. The SURE-LET approach to image denoising. IEEE Trans. Image Process. 2007, 16, 2778–2786. [Google Scholar] [CrossRef] [PubMed]
  57. Donoho, D.L.; Johnstone, I.M. Adapting to unknown smoothness via wavelet shrinkage. J. Am. Stat. Assoc. 1995, 90, 1200–1224. [Google Scholar] [CrossRef]
  58. Deledalle, C.A.; Vaiter, S.; Fadili, J.; Peyré, G. Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection. SIAM J. Imaging Sci. 2014, 7, 2448–2487. [Google Scholar] [CrossRef]
  59. Eldar, Y.C. Generalized SURE for exponential families: Applications to regularization. IEEE Trans. Signal Process. 2009, 57, 471–481. [Google Scholar] [CrossRef]
  60. Giryes, R.; Elad, M.; Eldar, Y.C. The projected GSURE for automatic parameter tuning in iterative shrinkage methods. Appl. Comput. Harmon. Anal. 2011, 30, 407–422. [Google Scholar] [CrossRef]
  61. De los Reyes, J.C.; Schönlieb, C.B. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization. Inverse Probl. Imaging 2013, 7, 1183–1214. [Google Scholar] [CrossRef]
  62. Hintermüller, M.; Rautenberg, C.; Wu, T.; Langer, A. Optimal Selection of the Regularization Function in a Generalized Total Variation Model. Part II: Algorithm, its Analysis and Numerical Tests. J. Math. Imaging Vis. 2017, 59, 515–533. [Google Scholar] [CrossRef]
  63. Kunisch, K.; Pock, T. A bilevel optimization approach for parameter learning in variational models. SIAM J. Imaging Sci. 2013, 6, 938–983. [Google Scholar] [CrossRef]
  64. Weiss, P.; Blanc-Féraud, L.; Aubert, G. Efficient schemes for total variation minimization under constraints in image processing. SIAM J. Sci. Comput. 2009, 31, 2047–2080. [Google Scholar] [CrossRef]
  65. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  66. Glowinski, R. Numerical Methods for Nonlinear Variational Problems; Springer: New York, NY, USA, 1984. [Google Scholar]
Figure 1. Original images of size 256 × 256 . (a) Phantom; (b) Cameraman; (c) Barbara; (d) House; (e) Lena; (f) Bones; (g) Cookies; (h) Numbers.
Figure 2. Original images. (a) Shepp-Logan phantom of size 128 × 128 pixels; (b) knee of size 200 × 200 pixels; (c) slice of a human brain of size 128 × 128 pixels.
Figure 3. Reconstruction of the cameraman image corrupted by Gaussian white noise with σ = 0.1 (left), and corrupted by blurring and Gaussian white noise with σ = 0.1 (right), via the semi-smooth Newton method with α = 0.01 for different values of η.
Figure 4. Regularization parameter versus noise-level for the box-constrained pAPS in image denoising.
Figure 5. Reconstruction from noisy data. (a) Noisy observation; (b) pAPS with pdN with η = 0 (PSNR: 24.241; MSSIM: 0.58555); (c) pAPS with box-constrained pdN (PSNR: 24.241; MSSIM: 0.58564); (d) ADMM (PSNR: 24.224; MSSIM: 0.5883); (e) Box-constrained ADMM (PSNR: 24.215; MSSIM: 0.58921); (f) pLATV with pdN with η = 0 (PSNR: 24.503; MSSIM: 0.58885); (g) pLATV with box-constrained pdN (PSNR: 24.486; MSSIM: 0.58693).
Figure 6. Spatially varying regularization parameter generated by the respective pLATV-algorithm. (a) pLATV with pdN with η = 0 ; (b) pLATV with box-constrained pdN.
Figure 7. Regularization parameter versus noise-level for the box-constrained pAPS in image deblurring.
Figure 8. Reconstruction from blurry and noisy data. (a) Blurry and noisy observation; (b) pAPS with pdN with η = 0 (PSNR: 24.281; MSSIM: 0.34554); (c) pAPS with box-constrained pdN (PSNR: 24.293; MSSIM: 0.34618); (d) ADMM (PSNR: 24.236; MSSIM: 0.34073); (e) Box-constrained ADMM (PSNR: 24.237; MSSIM: 0.34082); (f) pLATV with pdN with η = 0 (PSNR: 24.599; MSSIM: 0.34708); (g) pLATV with box-constrained pdN (PSNR: 24.622; MSSIM: 0.34769).
Figure 9. Spatially varying regularization parameter generated by the respective pLATV-algorithm. (a) pLATV with pdN with η = 0 ; (b) pLATV-algorithm with box-constrained pdN.
Figure 10. Simultaneous image inpainting and denoising with σ = 0.1 . (a) Observation; (b) pAPS with pdN with η = 0 (PSNR: 24.922; MSSIM: 0.44992); (c) pAPS with box-constrained pdN (PSNR: 24.922; MSSIM: 0.44992); (d) pLATV with pdN with η = 0 (PSNR: 24.893; MSSIM: 0.4498); (e) pLATV with box-constrained pdN (PSNR: 24.868; MSSIM: 0.45004).
Figure 11. Spatially varying regularization parameter generated by the respective pLATV-algorithm. (a) pLATV with pdN with η = 0 ; (b) pLATV-algorithm with box-constrained pdN.
Figure 12. Sampling domain in the frequency plane, i.e., sampling operator S.
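The sampling operator S of Figure 12 acts as a binary mask on the Fourier coefficients of the image, so the forward model is K = S ∘ F. A minimal NumPy sketch of this idea (the function names and the zero-filled inverse are illustrative assumptions, not code from the paper):

```python
import numpy as np

def sample_fourier(u, mask):
    # Forward model K = S∘F: keep only the Fourier coefficients selected by mask.
    return np.fft.fft2(u) * mask

def zero_filled_inverse(data):
    # Naive reconstruction: inverse FFT of the zero-filled spectrum.
    return np.real(np.fft.ifft2(data))

# With a full mask the image is recovered exactly; a partial mask discards
# information, which is what the TV regularizer has to compensate for.
u = np.random.rand(8, 8)
assert np.allclose(zero_filled_inverse(sample_fourier(u, np.ones((8, 8)))), u)
```

With a partial mask (as in Figure 12) the zero-filled inverse shows artifacts, which motivates solving the regularized problem instead.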
Figure 13. Reconstruction from sampled Fourier data. (a) pAPS with pdN with η = 0 (PSNR: 25.017; MSSIM: 0.37061); (b) pAPS with box-constrained pdN (PSNR: 25.017; MSSIM: 0.37056); (c) pLATV with box-constrained pdN (PSNR: 23.64; MSSIM: 0.34945); (d) pLATV with pdN with η = 0 (PSNR: 23.525; MSSIM: 0.34652).
Figure 14. The Shepp-Logan phantom image of size 64 × 64 pixels and its measured sinogram. (a) Original image; (b) Sinogram.
Figure 15. Slice of a human head and its measured sinogram. (a) Original image; (b) Sinogram.
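The sinograms in Figures 14 and 15 collect the line integrals of the image along families of parallel rays (the Radon transform), one row or column per projection angle. As a toy NumPy-only illustration restricted to the two axis-aligned angles 0° and 90° (a hypothetical helper, not the discretization used in the experiments):

```python
import numpy as np

def sinogram_two_angles(u):
    # For angles 0° and 90° the line integrals reduce to pixel sums
    # along the image columns and rows, respectively.
    return np.vstack([u.sum(axis=0), u.sum(axis=1)])

u = np.array([[0.0, 1.0],
              [2.0, 3.0]])
s = sinogram_two_angles(u)
assert np.array_equal(s, np.array([[2.0, 4.0], [1.0, 5.0]]))
```

A real CT sinogram samples many angles; inverting this map from noisy data is the ill-posed problem that the (box-constrained) L2-TV model addresses below.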
Figure 16. Reconstruction from noisy data. (a) Inverse Radon-transform (PSNR: 29.08; MSSIM: 0.3906); (b) L 2 -TV (PSNR: 29.14; MSSIM: 0.4051); (c) Box-constrained L 2 -TV (PSNR: 33.31; MSSIM: 0.6128); (d) Inverse Radon-transform (PSNR: 31.75; MSSIM: 0.3699); (e) L 2 -TV (PSNR: 32.16; MSSIM: 0.3682); (f) Box-constrained L 2 -TV (PSNR: 36.08; MSSIM: 0.5856).
Table 1. Reconstruction of the cameraman-image corrupted by Gaussian white noise with σ = 0.1 for different regularization parameters α .
α        | pdN with η = 0                    | Box-Constrained pdN
         | PSNR    MSSIM    Time     It     | PSNR    MSSIM    Time     It
0.001    | 20.165  0.27649  183.29   365    | 20.635  0.28384  178.27   365
0.01     | 21.464  0.29712  55.598   117    | 21.905  0.30472  55.245   117
0.096029 | 27.134  0.35214  14.615   33     | 27.135  0.35221  14.465   33
0.096108 | 27.132  0.35201  14.91    33     | 27.133  0.35207  14.317   33
0.4      | 22.079  0.16816  14.779   34     | 22.079  0.16816  14.982   34
Average  | 23.5947 0.28919  56.6388  116.4  | 23.7773 0.2922   55.4557  116.4
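For reference, the PSNR values reported throughout the tables follow the standard definition 10·log10(peak²/MSE); a small sketch, assuming images scaled to [0, 1] (which is how the noise levels σ are given). MSSIM is the mean structural similarity index of [50], available e.g. as `structural_similarity` in scikit-image.

```python
import numpy as np

def psnr(u, u_ref, peak=1.0):
    # Peak signal-to-noise ratio in dB for images with intensities in [0, peak].
    mse = np.mean((np.asarray(u, dtype=float) - np.asarray(u_ref, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01 and hence PSNR = 20 dB.
clean = np.zeros((16, 16))
assert abs(psnr(clean + 0.1, clean) - 20.0) < 1e-12
```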
Table 2. Reconstruction of the cameraman-image corrupted by Gaussian blur and Gaussian white noise with σ = 0.1 for different regularization parameters α .
α        | pdN with η = 0                     | Box-Constrained pdN
         | PSNR    MSSIM     Time    It      | PSNR    MSSIM     Time    It
0.001    | 5.9375  0.039866  1197    143     | 11.568  0.083341  1946    145
0.01     | 21.908  0.18938   863     67      | 21.964  0.1946    2011    67
0.051871 | 21.815  0.18115   759     37      | 21.814  0.18113   1299    37
0.051868 | 21.814  0.18114   752     36      | 21.814  0.18114   1255    37
0.4      | 19.823  0.090709  1431    61      | 19.823  0.090709  1454    61
Average  | 18.2593 0.13645   1000    68.8    | 19.3966 0.14618   1593    69.4
Table 3. Reconstruction of the cameraman-image corrupted by Gaussian white noise with σ = 0.1 for different regularization parameters α using the pdN with thresholded g.
α        | PSNR    MSSIM    Time     It
0.001    | 20.64   0.28388  107.05   349
0.01     | 21.949  0.30504  32.336   113
0.096029 | 26.528  0.33078  8.9479   33
0.096108 | 26.526  0.33066  8.8966   33
0.4      | 21.666  0.15794  9.4218   35
Average  | 23.4617 0.28166  33.3304  112.6
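"Thresholded g" in Tables 3 and 4 refers, as we read it, to clipping the observation into the dynamic range [0, 1] before running the (unconstrained) pdN, i.e., enforcing the bounds on the data rather than on the solution. In code this is a one-liner (illustrative sketch, not the author's implementation):

```python
import numpy as np

def threshold_data(g, lo=0.0, hi=1.0):
    # Project the noisy observation onto the box [lo, hi] pixelwise.
    return np.clip(g, lo, hi)

assert np.allclose(threshold_data(np.array([-0.2, 0.5, 1.3])), [0.0, 0.5, 1.0])
```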
Table 4. Reconstruction of the cameraman-image corrupted by Gaussian blur and Gaussian white noise with σ = 0.1 for different regularization parameters α using the pdN with thresholded g.
α        | PSNR    MSSIM     Time   It
0.001    | 6.6758  0.046454  1091   140
0.01     | 21.929  0.19231   743    65
0.051871 | 21.709  0.1722    659    35
0.051868 | 21.709  0.1722    651    35
0.4      | 19.683  0.087447  1438   61
Average  | 18.3413 0.13412   916    67.2
Table 5. Reconstruction of the cameraman-image corrupted by Gaussian white noise with standard deviation σ .
σ     | α         | Box-Constrained pdN               | Box-Constrained ADMM
      |           | PSNR    MSSIM    Time    It      | PSNR    MSSIM    Time      It
0.3   | 0.34586   | 22.485  0.18619  18.844  41      | 22.2    0.17829  897.82    127
0.2   | 0.21408   | 24.054  0.24497  14.847  33      | 23.852  0.23917  808.67    100
0.1   | 0.096108  | 27.132  0.35201  14.91   33      | 26.963  0.35118  716.37    70
0.05  | 0.043393  | 30.567  0.47437  22.902  51      | 30.488  0.47503  656.66    48
0.01  | 0.0071847 | 40.417  0.75235  59.533  133     | 40.542  0.75864  454.45    24
0.005 | 0.0032996 | 45.164  0.8686   89.674  199     | 45.423  0.87718  501.59    24
Average |         | 31.6363 0.47975  36.785  81.667  | 31.5781 0.47991  672.5961  65.5
Table 6. Reconstruction of the cameraman-image corrupted by Gaussian blur and Gaussian white noise with standard deviation σ .
σ     | α          | Box-Constrained pdN                | Box-Constrained ADMM
      |            | PSNR    MSSIM    Time       It    | PSNR    MSSIM     Time       It
0.3   | 0.2342     | 20.382  0.10678  1551.5     55    | 20.361  0.099691  2217.3     256
0.2   | 0.13169    | 20.981  0.13262  1434       41    | 20.978  0.12702   2593.1     265
0.1   | 0.051871   | 21.814  0.18113  1407.4     37    | 21.825  0.17536   3404.5     292
0.05  | 0.01951    | 22.484  0.22905  2691.4     51    | 22.501  0.22423   4065.4     305
0.01  | 0.0012674  | 24.293  0.34618  2440.5     126   | 24.237  0.34082   7185.3     358
0.005 | 0.00031869 | 25.451  0.40405  1903.4     149   | 25.377  0.3987    26,985     1081
Average |          | 22.5674 0.2333   1904.6943  76.5  | 22.5464 0.22764   7741.7188  426.17
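To see where the box-constraint enters a first-order scheme such as the ADMM compared here, consider the following toy sketch. It minimizes a smooth H¹ surrogate 1/2‖u − g‖² + (α/2)‖∇u‖² by plain projected gradient descent, instead of the TV model and the paper's methods, so it only illustrates the projection step onto the box [0, 1] (all parameter values are illustrative assumptions):

```python
import numpy as np

def laplacian(u):
    # 5-point Laplacian with replicated (Neumann-type) boundary values.
    p = np.pad(u, 1, mode="edge")
    return p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2] - 4.0 * u

def box_projected_gradient(g, alpha=0.1, tau=0.2, iters=200):
    # Gradient of 1/2||u-g||^2 + (alpha/2)||grad u||^2 is (u - g) - alpha*laplacian(u).
    u = np.clip(g, 0.0, 1.0)
    for _ in range(iters):
        u = u - tau * ((u - g) - alpha * laplacian(u))  # gradient step
        u = np.clip(u, 0.0, 1.0)                        # projection onto the box [0,1]
    return u

# The iterates always satisfy the box constraint, whatever the data look like.
g = np.random.randn(32, 32) * 0.5 + 0.5  # data with values outside [0,1]
u = box_projected_gradient(g)
assert u.min() >= 0.0 and u.max() <= 1.0
```

The projection `np.clip(u, 0, 1)` is exactly the extra operation a box-constrained variant adds per iteration, which is why its cost per step is essentially the same as in the unconstrained case.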
Table 7. PSNR- and MSSIM-values of the reconstruction of different images corrupted by Gaussian white noise via pAPS using the primal-dual Newton method.
Image     | σ     | pAPS with pdN with η = 0                  | pAPS with Box-Constrained pdN
          |       | PSNR    MSSIM    Time    α          It   | PSNR    MSSIM    Time    α          It
phantom   | 0.3   | 19.274  0.30365  693.59  0.28721    40   | 19.302  0.30456  486.9   0.28544    20
          | 0.2   | 22.024  0.37461  553.44  0.19055    37   | 22.06   0.37521  393.67  0.18921    17
          | 0.1   | 27.471  0.44124  565.80  0.09621    37   | 27.518  0.4412   389.39  0.095365   17
          | 0.05  | 33.173  0.46878  858.33  0.048744   37   | 33.228  0.46847  421.18  0.04827    16
          | 0.01  | 46.409  0.50613  1605.4  0.010597   37   | 46.46   0.50559  791.34  0.010509   17
          | 0.005 | 51.805  0.52587  2245.3  0.0057362  35   | 51.846  0.52522  925.46  0.0056958  15
cameraman | 0.3   | 22.485  0.18619  1591    0.34586    102  | 22.485  0.18621  847.53  0.34579    50
          | 0.2   | 24.054  0.24497  919.63  0.21408    66   | 24.056  0.24508  528.85  0.21398    32
          | 0.1   | 27.132  0.35201  580.91  0.096108   39   | 27.135  0.35221  646.44  0.096029   41
          | 0.05  | 30.567  0.47437  549.23  0.043393   24   | 30.571  0.47458  561.43  0.04336    25
          | 0.01  | 40.417  0.75235  677.79  0.0071847  12   | 40.418  0.75238  645.32  0.0071837  12
          | 0.005 | 45.164  0.8686   745.67  0.0032996  9    | 45.164  0.8686   701.1   0.0032995  9
barbara   | 0.3   | 20.618  0.30666  1419.7  0.34118    84   | 20.618  0.30666  735.83  0.34118    39
          | 0.2   | 21.649  0.39964  788.49  0.20109    47   | 21.649  0.39971  358.6   0.20105    18
          | 0.1   | 24.241  0.58555  336.17  0.089758   23   | 24.241  0.58564  319.09  0.089746   23
          | 0.05  | 27.884  0.75178  326.59  0.04286    15   | 27.885  0.7518   286.61  0.042858   16
          | 0.01  | 38.781  0.9322   345.64  0.0079133  9    | 38.781  0.9322   311.25  0.0079133  9
          | 0.005 | 44.056  0.9681   395.8   0.0037205  7    | 44.056  0.9681   330.97  0.0037205  7
house     | 0.3   | 23.827  0.18392  750     0.41771    154  | 23.829  0.18392  1185.9  0.41763    75
          | 0.2   | 25.611  0.23397  1460.6  0.26795    116  | 25.611  0.23397  610.2   0.26796    45
          | 0.1   | 28.855  0.31916  690.33  0.11979    75   | 28.855  0.31916  348.92  0.11979    34
          | 0.05  | 32.04   0.4074   574.34  0.050505   39   | 32.041  0.40741  563.92  0.050502   40
          | 0.01  | 40.292  0.75174  493.84  0.0071206  11   | 40.292  0.75174  468.11  0.0071206  11
          | 0.005 | 44.989  0.86648  527     0.0033035  8    | 44.989  0.86648  502.38  0.0033035  8
lena      | 0.3   | 21.905  0.29155  1925    0.41731    120  | 21.905  0.29155  953.83  0.41731    57
          | 0.2   | 23.506  0.36317  1064.9  0.25937    85   | 23.506  0.36317  585.24  0.25937    39
          | 0.1   | 26.37   0.49351  537.69  0.11246    46   | 26.369  0.49351  302.32  0.11246    21
          | 0.05  | 29.615  0.62313  566.71  0.047771   27   | 29.615  0.62313  579.3   0.04777    28
          | 0.01  | 39.261  0.91371  526.89  0.00680969 7    | 39.262  0.91371  546.75  0.0068091  10
          | 0.005 | 44.672  0.97133  626.47  0.0032764  8    | 44.673  0.97133  652.97  0.0032762  8
bones     | 0.3   | 25.744  0.34395  5830.2  0.86048    310  | 25.743  0.34395  749.26  0.8605     35
          | 0.2   | 27.637  0.39821  2949    0.56086    238  | 27.637  0.39821  747.28  0.56086    45
          | 0.1   | 30.908  0.49398  1232.8  0.27216    141  | 30.908  0.49398  364.71  0.27216    32
          | 0.05  | 34.284  0.58386  612.74  0.12735    87   | 34.284  0.58386  362.62  0.12735    41
          | 0.01  | 43.174  0.7449   386.03  0.020815   33   | 43.174  0.7449   479.77  0.020814   33
          | 0.005 | 47.493  0.80124  340.02  0.0091423  23   | 47.493  0.80124  470.83  0.0091423  23
cookies   | 0.3   | 21.466  0.31394  1117.4  0.38254    87   | 21.466  0.31396  857.99  0.38252    42
          | 0.2   | 23.136  0.40787  709.8   0.25117    62   | 23.136  0.40787  320.84  0.25118    12
          | 0.1   | 26.498  0.55614  398.59  0.12598    42   | 26.498  0.55614  290.67  0.12598    18
          | 0.05  | 30.292  0.67741  382.29  0.06257    30   | 30.293  0.67741  523.47  0.06257    30
          | 0.01  | 40.482  0.85926  414.09  0.011324   15   | 40.482  0.85926  567.65  0.011324   15
          | 0.005 | 45.39   0.91128  470.54  0.0052653  12   | 45.39   0.91128  653.55  0.0052653  12
numbers   | 0.3   | 17.593  0.33171  520.53  0.27654    31   | 17.862  0.35167  358.5   0.26337    6
          | 0.2   | 20.658  0.39035  415.03  0.18442    30   | 20.936  0.40758  375.14  0.17526    9
          | 0.1   | 26.259  0.44549  365.01  0.092576   30   | 26.56   0.45925  318.58  0.087618   8
          | 0.05  | 32.061  0.47618  466.24  0.046674   30   | 32.382  0.48698  321.63  0.044044   8
          | 0.01  | 45.511  0.51733  1039    0.0099524  31   | 45.869  0.52176  467.07  0.0093273  7
          | 0.005 | 51.036  0.53022  1474.9  0.0053171  30   | 51.44   0.53149  463.78  0.0049401  5
Average   |       | 32.0368 0.5343   9,597,189          54.5833 | 32.0828 0.53575 534.877          23.75
Table 8. PSNR- and MSSIM-values of the reconstruction of different images corrupted by Gaussian white noise via the ADMM by solving the constrained versions.
Image     | σ     | ADMM                                | Box-Constrained ADMM
          |       | PSNR    MSSIM    Time      It      | PSNR    MSSIM    Time     It
phantom   | 0.3   | 18.956  0.29789  1245      118     | 18.922  0.29225  999.39   137
          | 0.2   | 21.615  0.37299  1203.8    108     | 21.538  0.36632  1068.9   123
          | 0.1   | 27.021  0.45439  868.12    79      | 26.941  0.44525  995.99   91
          | 0.05  | 32.823  0.5007   576.35    47      | 32.731  0.4847   650.9    53
          | 0.01  | 46.446  0.60023  444.34    23      | 47.36   0.54521  524.88   27
          | 0.005 | 53.734  0.5907   566.53    24      | 53.59   0.51937  913.28   38
cameraman | 0.3   | 22.33   0.18003  889.65    99      | 22.2    0.17829  897.82   127
          | 0.2   | 23.895  0.23961  836.26    80      | 23.852  0.23917  808.67   100
          | 0.1   | 27.002  0.35006  690.45    60      | 26.963  0.35118  716.37   70
          | 0.05  | 30.513  0.47359  564.73    43      | 30.488  0.47503  656.66   48
          | 0.01  | 40.486  0.75493  409.62    22      | 40.542  0.75864  454.45   24
          | 0.005 | 45.329  0.8728   494.94    23      | 45.423  0.87718  501.59   24
barbara   | 0.3   | 20.6    0.30878  891.68    94      | 20.604  0.31256  683.92   112
          | 0.2   | 21.655  0.40225  827.72    74      | 21.654  0.40468  628.45   91
          | 0.1   | 24.224  0.5883   583.96    50      | 24.215  0.58921  547.54   61
          | 0.05  | 27.889  0.75319  449.51    35      | 27.874  0.75405  470.8    41
          | 0.01  | 38.921  0.9338   441.87    21      | 38.978  0.9341   374.69   24
          | 0.005 | 44.413  0.97035  503.68    21      | 44.584  0.97166  425.92   23
house     | 0.3   | 23.761  0.19254  1072.7    108     | 23.689  0.19468  674.76   128
          | 0.2   | 25.659  0.24287  949.97    85      | 25.609  0.24367  709.26   108
          | 0.1   | 28.907  0.32034  678       54      | 28.875  0.32115  596.95   70
          | 0.05  | 32.054  0.40609  532.99    35      | 32.029  0.40732  478.67   42
          | 0.01  | 40.394  0.7555   422.07    19      | 40.438  0.75891  331.36   21
          | 0.005 | 45.201  0.87241  492.11    20      | 45.278  0.8758   408.17   22
lena      | 0.3   | 21.834  0.29247  950.81    102     | 21.85   0.2995   570.59   115
          | 0.2   | 23.455  0.36437  724       81      | 23.46   0.36828  670.96   97
          | 0.1   | 26.349  0.49495  584.08    55      | 26.333  0.49605  641.47   67
          | 0.05  | 29.604  0.62425  456.63    39      | 29.588  0.62541  554.51   44
          | 0.01  | 39.422  0.91694  377.76    22      | 39.486  0.91822  438.7    25
          | 0.005 | 45.021  0.97391  470.81    24      | 45.106  0.97436  517.97   26
bones     | 0.3   | 25.829  0.35686  750.12    110     | 25.051  0.36748  611.23   115
          | 0.2   | 27.689  0.40363  734.13    90      | 27.486  0.41281  652.2    94
          | 0.1   | 30.942  0.48823  624.02    58      | 31.02   0.49267  755.06   75
          | 0.05  | 34.319  0.57951  343.83    35      | 34.37   0.57521  624.7    49
          | 0.01  | 43.011  0.73032  193.87    14      | 42.9    0.72002  270.92   16
          | 0.005 | 47.446  0.79549  204.98    12      | 47.432  0.78519  285.72   14
cookies   | 0.3   | 21.436  0.31503  789.78    102     | 21.461  0.32264  788.24   118
          | 0.2   | 23.129  0.40742  713.25    82      | 23.128  0.41261  772.93   99
          | 0.1   | 26.527  0.55478  554.95    55      | 26.506  0.55871  659.5    70
          | 0.05  | 30.364  0.67447  422.84    37      | 30.338  0.67746  589.77   45
          | 0.01  | 40.513  0.85593  234.35    15      | 40.589  0.85815  316.64   19
          | 0.005 | 45.382  0.90918  255.79    14      | 45.307  0.90892  319.82   16
numbers   | 0.3   | 17.257  0.32157  1640.7    147     | 17.487  0.34599  1311.9   146
          | 0.2   | 20.279  0.38382  1594      133     | 20.49   0.42     1215.8   134
          | 0.1   | 25.901  0.44284  981.06    95      | 26.125  0.48656  874.87   85
          | 0.05  | 31.847  0.47604  719.72    52      | 32.412  0.51241  532.91   46
          | 0.01  | 45.726  0.52365  643.88    26      | 47.089  0.51102  1001.3   56
          | 0.005 | 52.512  0.53468  641.43    29      | 52.859  0.51942  1756.9   83
Average   |       | 32.0754 0.5386   671.7274  57.7292 | 32.1302 0.5389   671.957  67.8958
Table 9. PSNR- and MSSIM-values of the reconstruction of different images corrupted by Gaussian white noise via pLATV using the primal-dual Newton method.
Image     | σ     | pLATV with pdN with η = 0          | pLATV with Box-Constrained pdN
          |       | PSNR    MSSIM    Time     It      | PSNR    MSSIM    Time     It
phantom   | 0.3   | 19.629  0.31986  517.78   13      | 19.744  0.32143  710.59   16
          | 0.2   | 22.405  0.38578  286.66   13      | 22.525  0.38706  607.27   15
          | 0.1   | 27.802  0.44701  389.34   13      | 27.936  0.44762  552.49   14
          | 0.05  | 33.48   0.47301  526.89   13      | 33.57   0.4733   571.06   14
          | 0.01  | 46.546  0.50749  139.72   4       | 46.625  0.50646  98.973   3
          | 0.005 | 52.038  0.52569  112.1    3       | 52.062  0.52512  105.89   3
cameraman | 0.3   | 22.382  0.18185  775.5    15      | 22.393  0.18186  838.55   19
          | 0.2   | 24.032  0.24038  605.56   13      | 24.029  0.23937  593.9    16
          | 0.1   | 27.16   0.35301  304.83   12      | 27.175  0.35285  415.09   13
          | 0.05  | 30.702  0.4745   310.03   10      | 30.696  0.47315  262.59   10
          | 0.01  | 40.647  0.73386  135.88   4       | 40.647  0.73386  144.01   4
          | 0.005 | 45.292  0.86678  365.19   7       | 45.292  0.86678  357.42   7
barbara   | 0.3   | 20.527  0.30278  572.44   16      | 20.516  0.3023   652.83   18
          | 0.2   | 21.73   0.39872  411.5    13      | 21.729  0.39882  459.09   14
          | 0.1   | 24.503  0.58885  285.8    9       | 24.486  0.58693  302.46   10
          | 0.05  | 28.196  0.75135  309.53   7       | 28.198  0.75151  304.89   7
          | 0.01  | 38.898  0.92794  424.8    18      | 38.898  0.92794  425.12   18
          | 0.005 | 44.186  0.96871  147.57   5       | 44.186  0.96871  150.91   5
house     | 0.3   | 23.661  0.18529  593.22   18      | 23.704  0.18526  698.35   22
          | 0.2   | 25.507  0.23789  478.33   17      | 25.51   0.23741  522.86   18
          | 0.1   | 28.736  0.32695  304.43   13      | 28.741  0.32581  332.47   14
          | 0.05  | 31.94   0.4217   184.75   11      | 31.943  0.42182  320.22   11
          | 0.01  | 40.423  0.73752  451.47   13      | 40.423  0.73752  377.89   13
          | 0.005 | 45.118  0.85458  286.48   7       | 45.118  0.85458  222.54   7
lena      | 0.3   | 21.829  0.29245  624.96   19      | 21.828  0.29232  727.69   21
          | 0.2   | 23.442  0.3634   471.91   16      | 23.445  0.36326  538.81   17
          | 0.1   | 26.403  0.49438  347.04   13      | 26.406  0.49419  374.1    14
          | 0.05  | 29.703  0.62778  396.43   8       | 29.704  0.62784  403.49   8
          | 0.01  | 39.324  0.91256  652.2    16      | 39.324  0.91256  659.92   16
          | 0.005 | 44.736  0.97081  244.61   5       | 44.737  0.97081  237.28   5
bones     | 0.3   | 25.633  0.36194  869.22   37      | 25.518  0.35855  1122.8   38
          | 0.2   | 27.6    0.41895  665.85   33      | 27.506  0.41674  823.95   33
          | 0.1   | 30.964  0.51162  419.72   26      | 30.862  0.51085  435.6    24
          | 0.05  | 34.413  0.59991  291.13   19      | 34.408  0.59947  374.07   19
          | 0.01  | 43.521  0.7626   198.45   6       | 43.521  0.7626   193.765  6
          | 0.005 | 47.564  0.7997   103.29   6       | 47.564  0.7997   95.637   6
cookies   | 0.3   | 21.415  0.31722  597.85   17      | 21.371  0.3164   653.83   18
          | 0.2   | 23.126  0.40895  500.29   16      | 23.107  0.4087   485.2    15
          | 0.1   | 26.487  0.55758  237.87   12      | 26.501  0.55811  332.12   12
          | 0.05  | 30.301  0.68332  413.36   12      | 30.295  0.68292  175.11   9
          | 0.01  | 40.525  0.86052  261.76   8       | 40.525  0.86052  236.96   8
          | 0.005 | 45.499  0.90364  229.85   5       | 45.499  0.90364  174.58   5
numbers   | 0.3   | 17.627  0.33232  421.86   11      | 17.821  0.34957  654.22   20
          | 0.2   | 20.7    0.39072  313.32   10      | 20.888  0.40559  556.07   19
          | 0.1   | 26.296  0.4455   258.05   10      | 26.528  0.45766  440.13   17
          | 0.05  | 32.104  0.47624  339.03   10      | 32.369  0.48578  501.47   15
          | 0.01  | 45.527  0.51735  92.644   2       | 45.938  0.52169  59.513   2
          | 0.005 | 51.265  0.53078  207.86   5       | 51.536  0.53125  257.45   6
Average   |       | 32.1155 0.5365   374.549  12.2708 | 32.1531 0.5375   425.859  13.4167
Table 10. PSNR- and MSSIM-values of the reconstruction of different images corrupted by Gaussian blur and Gaussian white noise via pAPS or pLATV using the primal-dual Newton method or via the ADMM by solving the constrained versions.
Image     | σ     | pAPS with pdN with η = 0            | pAPS with Box-Constrained pdN
          |       | PSNR    MSSIM    Time       It     | PSNR    MSSIM    Time     It
phantom   | 0.05  | 16.41   0.21695  11,603     11     | 16.801  0.23085  33,002   11
          | 0.01  | 18.624  0.30997  12,663     10     | 18.861  0.32574  18,744   8
          | 0.005 | 20.554  0.36713  5892.9     6      | 21.01   0.38225  17,245   9
cameraman | 0.05  | 22.482  0.22888  328.5      11     | 22.484  0.22905  24,162   11
          | 0.01  | 24.281  0.34554  4799.8     5      | 24.293  0.34618  13,102   5
          | 0.005 | 25.436  0.40355  611.4      8      | 25.451  0.40405  12,033   7
barbara   | 0.05  | 21.363  0.37833  5422.8     8      | 21.363  0.37832  10,657   8
          | 0.01  | 22.052  0.49312  2820.6     5      | 22.052  0.49311  6188.1   5
          | 0.005 | 23.084  0.57223  4275.5     13     | 23.084  0.57223  11,121   13
Average   |       | 21.5874 0.36846  8248.5556         | 21.7108 0.37353  16,250   8.5556

Image     | σ     | ADMM                                | Box-Constrained ADMM
          |       | PSNR    MSSIM    Time     It       | PSNR    MSSIM    Time    It
phantom   | 0.05  | 16.427  0.2691   6243.1   465      | 16.811  0.27613  8186.6  581
          | 0.01  | 18.581  0.3094   34,683   1424     | 18.809  0.32544  33,141  1327
          | 0.005 | 20.505  0.3677   40,801   1426     | 20.906  0.38285  43,753  1485
cameraman | 0.05  | 22.51   0.2242   2870.1   214      | 22.501  0.22423  4065.4  305
          | 0.01  | 24.236  0.34073  8268.2   428      | 24.237  0.34082  7185.3  358
          | 0.005 | 25.373  0.3986   30,944   1200     | 25.377  0.3987   26,985  1081
barbara   | 0.05  | 21.417  0.38039  1996.6   169      | 21.421  0.38113  2807.4  237
          | 0.01  | 22.036  0.48965  13,282   678      | 22.038  0.48964  12,012  627
          | 0.005 | 23.043  0.56871  40,949   1527     | 23.038  0.56841  37,430  1423
Average   |       | 21.5699 0.37205  20,004   836.78   | 21.6821 0.37637  19,507  824.89

Image     | σ     | pLATV with pdN with η = 0           | pLATV with Box-Constrained pdN
          |       | PSNR    MSSIM    Time    It        | PSNR    MSSIM    Time     It
phantom   | 0.05  | 16.43   0.21966  14,732  12        | 16.824  0.23305  29,567   10
          | 0.01  | 18.967  0.31788  39,014  49        | 19.602  0.33679  11,950   52
          | 0.005 | 20.729  0.36861  49,658  67        | 22.041  0.39052  11,940   71
cameraman | 0.05  | 22.533  0.22964  26,380  22        | 22.534  0.22973  70,208   22
          | 0.01  | 24.599  0.34708  57,513  47        | 24.622  0.34769  113,175  47
          | 0.005 | 25.77   0.40389  39,662  60        | 25.796  0.40451  110,646  60
barbara   | 0.05  | 21.377  0.3808   26,779  24        | 21.377  0.3808   49,808   24
          | 0.01  | 22.432  0.50565  22,433  51        | 22.448  0.50625  78,720   52
          | 0.005 | 23.652  0.59532  16,168  61        | 23.658  0.59545  53,559   61
Average   |       | 21.8319 0.37428  32,482  43.667    | 22.1003 0.38053  81,893   44.333
Table 11. PSNR- and MSSIM-values for the application inpainting via pAPS.
Image   | σ     | pAPS with pdN with η = 0                  | pAPS with Box-Constrained pdN
        |       | PSNR   MSSIM    Time    α          It    | PSNR   MSSIM    Time    α          It
lena    | 0.3   | 21.151 0.26378  1709.9  0.37358    105   | 21.151 0.26378  958.59  0.37358    50
        | 0.2   | 22.555 0.33033  1075.8  0.23336    72    | 22.555 0.33032  578.35  0.23336    32
        | 0.1   | 24.922 0.44992  578.34  0.10369    41    | 24.922 0.44992  370.42  0.10369    18
        | 0.05  | 27.005 0.56734  513.1   0.044922   25    | 27.005 0.56735  507.37  0.044919   25
        | 0.01  | 29.618 0.82318  524.73  0.006614   9     | 29.618 0.82319  516.86  0.0066133  9
        | 0.005 | 29.912 0.87427  569.85  0.00319    8     | 29.912 0.87427  674.02  0.0031896  8
cookies | 0.3   | 20.761 0.27956  1189.8  0.34456    74    | 20.763 0.27963  806.65  0.34448    35
        | 0.2   | 22.138 0.36599  761.34  0.22529    55    | 22.138 0.36599  228.76  0.22529    10
        | 0.1   | 24.624 0.50595  419.34  0.11088    36    | 24.624 0.50595  283.69  0.11088    15
        | 0.05  | 26.967 0.62721  359.67  0.05467    26    | 26.967 0.62721  409.44  0.05467    26
        | 0.01  | 30.05  0.80847  481.89  0.01008    14    | 30.05  0.80847  547.4   0.01008    14
        | 0.005 | 30.438 0.85701  491.23  0.0047745  11    | 30.438 0.85701  590.77  0.0047745  11
Average |       | 25.83  0.57403  617.22             36    | 25.83  0.57404  477.78             18.5
Table 12. PSNR- and MSSIM-values for the application inpainting via pLATV.
Image   | σ     | pLATV with pdN with η = 0        | pLATV with Box-Constrained pdN
        |       | PSNR    MSSIM    Time    It     | PSNR    MSSIM    Time     It
lena    | 0.3   | 21.027  0.26266  684.18  14     | 21.036  0.26239  836.18   16
        | 0.2   | 22.45   0.32986  555.66  11     | 22.457  0.32939  584.66   12
        | 0.1   | 24.893  0.4498   446.84  12     | 24.868  0.45004  400.3    9
        | 0.05  | 26.982  0.56904  474.61  9      | 26.983  0.56912  460.22   9
        | 0.01  | 29.621  0.82242  796.53  16     | 29.621  0.82242  775.88   16
        | 0.005 | 29.987  0.87461  285.24  5      | 29.987  0.87461  284.79   5
cookies | 0.3   | 20.546  0.27499  660.05  11     | 20.548  0.27574  753.89   13
        | 0.2   | 21.965  0.36179  458.1   10     | 21.975  0.36259  578.94   11
        | 0.1   | 24.538  0.50627  193.2   10     | 24.547  0.50651  251.68   11
        | 0.05  | 26.862  0.63007  322.75  7      | 26.863  0.6301   381.66   7
        | 0.01  | 30.047  0.80869  83.563  2      | 30.047  0.80869  99.415   2
        | 0.005 | 30.254  0.84914  181.44  5      | 30.254  0.84914  213.15   5
Average |       | 25.7643 0.56161  428.51  9.3333 | 25.765  0.56173  468.396  9.6667
Table 13. PSNR- and MSSIM-values of the reconstruction of sampled Fourier data corrupted by Gaussian white noise via the pAPS- and pLATV-algorithm using the primal-dual Newton method.
Image      | σ     | pAPS with pdN with η = 0     | pAPS with Box-Constrained pdN
           |       | PSNR    MSSIM    CPU-Time   | PSNR    MSSIM    CPU-Time
Shepp-     | 0.3   | 18.888  0.16233  3787.2     | 19.000  0.1685   5509.2
Logan      | 0.2   | 20.524  0.21302  2844.9     | 20.696  0.22086  3673.5
phantom    | 0.1   | 24.256  0.2905   1884.7     | 24.496  0.29896  2582.3
           | 0.05  | 28.639  0.34972  2008.5     | 28.948  0.35833  2115
           | 0.01  | 40.168  0.42734  1993.3     | 40.711  0.43325  1349.5
           | 0.005 | 45.263  0.44714  2225.4     | 45.933  0.45199  951.49
knee       | 0.3   | 21.606  0.26553  22,466     | 21.606  0.26553  36,054
           | 0.2   | 22.985  0.30965  15,705     | 22.985  0.30965  31,072
           | 0.1   | 25.017  0.37061  11,561     | 25.017  0.37056  24,994
           | 0.05  | 26.443  0.41652  8803.4     | 26.445  0.41661  21,056
           | 0.01  | 27.912  0.47141  4996.9     | 27.959  0.47267  11,707
           | 0.005 | 28.035  0.47683  6076.9     | 28.089  0.47843  13,116
Average    |       | 27.4781 0.35005  7029.4365  | 27.657  0.35378  12,848.2064

Image      | σ     | pLATV with pdN with η = 0   | pLATV with Box-Constrained pdN
           |       | PSNR    MSSIM    CPU-Time  | PSNR    MSSIM    CPU-Time
Shepp-     | 0.3   | 18.99   0.16078  5445.3    | 17.148  0.11219  15,500
Logan      | 0.2   | 20.567  0.21006  3179.1    | 19.324  0.17395  11,719
phantom    | 0.1   | 24.376  0.29028  2491.5    | 23.51   0.27083  4623.8
           | 0.05  | 28.569  0.34645  1926.1    | 28.303  0.34392  7125.8
           | 0.01  | 39.475  0.41775  266.7     | 39.579  0.42053  695.74
           | 0.005 | 43.782  0.43085  465.09    | 43.627  0.43096  1373.9
knee       | 0.3   | 15.583  0.18089  17,413    | 16.011  0.186    17,750
           | 0.2   | 18.87   0.24419  11,640    | 19.227  0.25069  14,414
           | 0.1   | 23.525  0.34652  3663.5    | 23.64   0.34945  9220.4
           | 0.05  | 26.307  0.41393  1545.6    | 26.341  0.4159   4165.6
           | 0.01  | 27.044  0.46069  4091.1    | 27.055  0.4612   12,059
           | 0.005 | 24.773  0.41841  10,499    | 24.639  0.4172   34,409
Average    |       | 25.9885 0.3267   5218.8    | 25.7003 0.3194   11,088
Table 14. PSNR- and MSSIM-values of the reconstruction of the cameraman-image corrupted by Gaussian white noise with standard deviation σ via the bcAPS algorithm using the primal-dual Newton method with η = 0 .
σ     | bcAPS                                | pAPS
      | PSNR    MSSIM    Time    α          | PSNR    MSSIM    Time    α
0.3   | 22.230  0.17478  1065.8  0.381058   | 22.485  0.18619  1591    0.34586
0.2   | 23.637  0.22552  1084.9  0.245634   | 24.054  0.24497  919.63  0.21408
0.1   | 26.621  0.32588  840.2   0.112528   | 27.132  0.35201  580.91  0.096108
0.05  | 29.388  0.41062  817.9   0.059676   | 30.567  0.47437  549.23  0.043393
0.01  | 39.332  0.70321  552.4   0.009346   | 40.417  0.75235  677.79  0.0071847
0.005 | 44.508  0.84591  415.6   0.003883   | 45.164  0.8686   745.67  0.0032996
Table 15. PSNR- and MSSIM-values of the reconstruction of the cameraman-image corrupted by Gaussian blur and Gaussian white noise with standard deviation σ via the bcAPS algorithm using the primal-dual Newton method with η = 0 .
σ     | bcAPS                                  | pAPS
      | PSNR    MSSIM    Time      α          | PSNR    MSSIM    Time    α
0.05  | 22.304  0.21524  76,216.3  0.027598   | 22.482  0.22888  328.5   0.019529
0.01  | 22.956  0.26523  86,068.9  0.010401   | 24.281  0.34554  4799.8  0.0012714
0.005 | 23.024  0.27018  96,837.0  0.009434   | 25.436  0.40355  611.4   0.00031872