Article

Discretization of Learned NETT Regularization for Solving Inverse Problems

by Stephan Antholzer and Markus Haltmeier *
Department of Mathematics, University of Innsbruck, Technikerstrasse 13, 6020 Innsbruck, Austria
* Author to whom correspondence should be addressed.
J. Imaging 2021, 7(11), 239; https://doi.org/10.3390/jimaging7110239
Submission received: 11 October 2021 / Revised: 5 November 2021 / Accepted: 8 November 2021 / Published: 15 November 2021
(This article belongs to the Special Issue Inverse Problems and Imaging)

Abstract: Deep learning based reconstruction methods deliver outstanding results for solving inverse problems and are therefore becoming increasingly important. A recently introduced class of learning-based reconstruction methods is the so-called NETT (for Network Tikhonov regularization), which uses a trained neural network as a regularizer in generalized Tikhonov regularization. The existing analysis of NETT considers fixed operators and fixed regularizers and analyzes the convergence as the noise level in the data approaches zero. In this paper, we extend the framework and analysis considerably to reflect various practical aspects and take into account the discretization of the data space, the solution space, the forward operator and the neural network defining the regularizer. We show the asymptotic convergence of the discretized NETT approach for decreasing noise levels and discretization errors. Additionally, we derive convergence rates and present numerical results for a limited data problem in photoacoustic tomography.

1. Introduction

In this paper, we are interested in neural network based solutions to inverse problems of the form
Find $x$ from data $y^\delta = \mathbf{A} x + \eta$.
Here $\mathbf{A}$ is a potentially non-linear operator between Banach spaces $X$ and $Y$, $y^\delta$ are the given noisy data, $x$ is the unknown to be recovered, $\eta$ is the unknown noise perturbation and $\delta \geq 0$ indicates the noise level. Numerous image reconstruction problems, parameter identification tasks and geophysical applications can be stated as such inverse problems [1,2,3,4]. Special challenges in solving inverse problems are the non-uniqueness of the solutions and the instability of the solutions with respect to the given data. To overcome these issues, regularization methods are needed, which select specific solutions and at the same time stabilize the inversion process.

1.1. Reconstruction with Learned Regularizers

One of the most established classes of methods for solving inverse problems is variational regularization, where regularized solutions are defined as minimizers of the generalized Tikhonov functional [2,5,6]
$\mathcal{T}_{y^\delta,\alpha} \colon X \to [0,\infty] \colon x \mapsto \mathcal{D}(\mathbf{A} x, y^\delta) + \alpha \mathcal{R}(x).$  (2)
Here $\mathcal{D}$ is a distance-like function measuring closeness of the data, $\mathcal{R}$ is a regularization term enforcing regularity of the minimizer and $\alpha$ is the regularization parameter. Taking minimizers of this functional as regularized solutions is also called (generalized) Tikhonov regularization. In the case that $\mathcal{D}$ and the regularizer are defined by Hilbert space norms, (2) is classical Tikhonov regularization, for which the theory is quite complete [1,7]. In particular, in this case, convergence rates, which provide quantitative estimates for the distance between the true noise-free solution and regularized solutions from noisy data, are well known. Convergence rates for non-convex regularizers are derived in [8].
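For orientation, consider the classical Hilbert space setting just mentioned; the following closed-form expression for the Tikhonov minimizer is standard textbook material and is added here only as an illustration, not taken from the original text.

```latex
% Classical Tikhonov regularization for a bounded linear operator A between
% Hilbert spaces, with D(Ax, y^delta) = ||Ax - y^delta||^2 and R(x) = ||x||^2:
\mathcal{T}_{y^\delta,\alpha}(x) = \|\mathbf{A}x - y^\delta\|^2 + \alpha \|x\|^2,
\qquad
x_\alpha^\delta := \operatorname*{argmin}_x \mathcal{T}_{y^\delta,\alpha}(x)
                 = (\mathbf{A}^*\mathbf{A} + \alpha \operatorname{Id})^{-1} \mathbf{A}^* y^\delta .
```

NETT replaces the quadratic penalty $\|x\|^2$ in this formula by a learned functional $\mathcal{R}$, for which no such closed form is available.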
Typical regularization techniques are based on simple hand-crafted regularization terms such as the total variation $\|f\|_{\mathrm{TV}} = \int |\nabla f|$ or quadratic Sobolev norms $\|\nabla f\|_2^2 = \int |\nabla f|^2$ on some function space. However, these regularizers are quite simplistic and might not well reflect the actual complexity of the underlying class of functions. Therefore, it has recently been proposed and analyzed in [9] to use machine learning to construct regularizers in a data-driven manner. In particular, the strategy in [9] is to construct a data-driven regularizer via the following consecutive steps:
(T1) Choose a family of desired reconstructions $(x_i)_{i=1}^n$.
(T2) For some $\mathbf{B} \colon Y \to X$, construct undesired reconstructions $(\mathbf{B}\mathbf{A} x_i)_{i=1}^n$.
(T3) Choose a class $(\Phi_\theta)_{\theta \in \Theta}$ of functions (networks) $\Phi_\theta \colon X \to X$.
(T4) Determine $\theta \in \Theta$ with $\Phi_\theta(x_i) \approx x_i$ and $\Phi_\theta(\mathbf{B}\mathbf{A} x_i) \approx x_i$.
(T5) Define $\mathcal{R}(x) = r(x, \Phi(x))$ with $\Phi = \Phi_\theta$ for some $r \colon X \times X \to [0,\infty]$.
For imaging applications, the function class $(\Phi_\theta)_{\theta \in \Theta}$ can be chosen as convolutional neural networks, which have been demonstrated to give powerful classes of mappings between image spaces. The function $r$ measures the distance between a potential reconstruction $x$ and the output of the network $\Phi(x)$, and possibly contains additional regularization [10,11]. According to the training strategy in item (T4), the value of the regularizer will be small if the reconstruction is similar to elements in $(x_i)_{i=1}^n$ and large for elements in $(\mathbf{B}\mathbf{A} x_i)_{i=1}^n$. A simple example, which we will use for our numerical results, is the learned regularizer $\mathcal{R}(x) = \|x - \Phi(x)\|^2 + \|x\|_{\mathrm{TV}}$; a minimal code sketch of evaluating such a regularizer is given below.
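The following PyTorch sketch shows how a regularizer of this form can be evaluated. The small convolutional network `Phi` is only a stand-in for a trained network (the paper later uses a residual U-Net), and the anisotropic total variation term is one of several possible discretizations; none of this is the authors' code.

```python
import torch
import torch.nn as nn

# Stand-in for a trained network Phi: X -> X (the paper uses a residual U-Net).
Phi = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

def total_variation(x: torch.Tensor) -> torch.Tensor:
    """Anisotropic discrete total variation of an image batch of shape (B, 1, H, W)."""
    dv = (x[..., 1:, :] - x[..., :-1, :]).abs().sum()
    dh = (x[..., :, 1:] - x[..., :, :-1]).abs().sum()
    return dv + dh

def learned_regularizer(x: torch.Tensor) -> torch.Tensor:
    """R(x) = ||x - Phi(x)||_2^2 + TV(x): small if Phi approximately reproduces x
    (x resembles the desired reconstructions), large otherwise."""
    return ((x - Phi(x)) ** 2).sum() + total_variation(x)

# Example evaluation on a random image.
x = torch.rand(1, 1, 128, 128)
print(learned_regularizer(x).item())
```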
Convergence analysis and convergence rates for NETT (which stands for Network Tikhonov, referring to variants of (2) where the regularization term is given by a neural network), as well as training strategies, have been established in [9,11,12]. A different training strategy for learning a regularizer has been proposed in [13,14]. Note that learning the regularizer first and then minimizing the Tikhonov functional is different from variational and iterative networks [15,16,17,18,19,20], where an iterative scheme is applied to unroll the functional $\mathcal{D}_\theta(\mathbf{A} x, y^\delta) + \alpha \mathcal{R}_\theta(x)$, which is then trained in an end-to-end fashion. Training the regularizer first has the advantage of being more modular, sharing some similarity with plug-and-play techniques [21], and the network training is independent of the forward operator $\mathbf{A}$. Moreover, it enables deriving a convergence analysis as the noise level tends to zero and therefore comes with theoretical recovery guarantees.

1.2. Discrete NETT

The existing analysis of NETT considers minimizers of the Tikhonov functional (2) with regularizer of the form R ( x ) = r ( x , Φ ( x ) ) before discretization, typically in an infinite dimensional setting. However, in practice, only finite dimensional approximations of the unknown, the operator and the neural network are given. To address these issues, in this paper, we study discrete NETT regularization which considers minimizers of
$\mathcal{T}_{y^\delta,\alpha,n} \colon X_n \to [0,\infty] \colon x \mapsto \mathcal{D}(\mathbf{A}_n x, y^\delta) + \alpha \mathcal{R}_n(x).$  (3)
Here $(X_n)_{n \in \mathbb{N}}$, $(\mathbf{A}_n)_{n \in \mathbb{N}}$ and $(\mathcal{R}_n)_{n \in \mathbb{N}}$ are families of subspaces $X_n \subseteq X$, mappings $\mathbf{A}_n \colon X \to Y$ and regularizers $\mathcal{R}_n \colon X \to [0,\infty]$, respectively, which reflect the discretization of all involved operations. We present a full convergence analysis as the noise level $\delta$ converges to zero and $n$, $\alpha$ are chosen accordingly. Discretization of variational regularization has been studied in [22] for the case that $\mathcal{D}$ is given by the norm distance and the regularizer $\mathcal{R}$ is convex and fixed. However, in the case of discrete NETT regularization it is natural to consider the case where the regularizer depends on the discretization, as it is learned in a discretized setting based on actual data. For that purpose, our analysis includes non-convex regularizers that are allowed to depend on the discretization and the noise level.

1.3. Outline

The convergence analysis, including convergence rates, is presented in Section 2. In Section 3 we present numerical results for a non-standard limited data problem in photoacoustic tomography that can be considered as a simultaneous inpainting and artifact removal problem. We conclude with a short summary in Section 4.

2. Convergence Analysis

In this section we study the convergence of (3) and derive convergence rates.

2.1. Well-Posedness

First we state the assumptions that we will use for well-posedness (existence and stability) of minimizing NETT.
Assumption 1
(Conditions for well-posedness).
(W1) $X$, $Y$ are Banach spaces, $X$ is reflexive, and $\mathbb{D} \subseteq X$ is weakly sequentially closed.
(W2) The distance measure $\mathcal{D} \colon Y \times Y \to [0,\infty]$ satisfies
(a) $\exists \tau \geq 1\ \forall y_1, y_2, y_3 \in Y \colon \mathcal{D}(y_1,y_2) \leq \tau \mathcal{D}(y_1,y_3) + \tau \mathcal{D}(y_3,y_2)$.
(b) $\forall y_1, y_2 \in Y \colon \mathcal{D}(y_1,y_2) = 0 \Leftrightarrow y_1 = y_2$.
(c) $\forall y, \tilde{y} \in Y \colon \mathcal{D}(y,\tilde{y}) < \infty \Rightarrow \big( \|\tilde{y} - y_k\| \to 0 \Rightarrow \mathcal{D}(y,y_k) \to \mathcal{D}(y,\tilde{y}) \big)$.
(d) $\forall y \in Y \colon \|y_k - y\| \to 0 \Rightarrow \mathcal{D}(y_k,y) \to 0$.
(e) $\mathcal{D}$ is weakly sequentially lower semi-continuous (wslsc).
(W3) $\mathcal{R} \colon X \to [0,\infty]$ is proper and wslsc.
(W4) $\mathbf{A} \colon \mathbb{D} \subseteq X \to Y$ is weakly sequentially continuous.
(W5) $\forall y, \alpha, C \colon$ the set $\{x \in X \mid \mathcal{T}_{y,\alpha}(x) \leq C\}$ is nonempty and bounded.
(W6) $(X_n)_{n \in \mathbb{N}}$ is a sequence of subspaces of $X$.
(W7) $(\mathbf{A}_n)_{n \in \mathbb{N}}$ is a family of weakly sequentially continuous mappings $\mathbf{A}_n \colon \mathbb{D} \to Y$.
(W8) $(\mathcal{R}_n)_{n \in \mathbb{N}}$ is a family of proper wslsc regularizers $\mathcal{R}_n \colon X \to [0,\infty]$.
(W9) $\forall y, \alpha, C, n \colon$ the set $\{x \in X_n \mid \mathcal{T}_{y,\alpha,n}(x) \leq C\}$ is nonempty and bounded.
Conditions (W2)–(W5) are quite standard for Tikhonov regularization in Banach spaces to guarantee the existence and stability of minimizers of the Tikhonov functional, and the given conditions are similar to [2,8,9,10,12,23,24]. In particular, (W2) describes the properties that the distance measure $\mathcal{D}$ should have. Clearly, the norm distance on $Y$ fulfills these properties. Moreover, (W2a) holds for the norm with $\tau = 1$, since it then corresponds to the triangle inequality. Item (W2c) is the continuity of $\mathcal{D}(y, \cdot)$, while (W2d) considers the continuity of $\mathcal{D}(\cdot, y)$ at $y$. While (W2c) is not needed for existence and convergence of NETT, it is required for the stability result, as shown in [10] (Example 2.7). On the other hand, (W2e) implies that the Tikhonov functional is wslsc, which is needed for existence. Assumption (W5) is a coercivity condition; see [9] (Remark 2.4 f.) on how to achieve this for a regularizer defined by neural networks. Item (W8) poses some restrictions on the regularizers. For NETT this is not an issue, as neural networks used in practice are continuous. Note that for convergence and convergence rates we will require additional conditions that concern the discretization of the reconstruction space, the forward operator and the regularizer.
The references [8,9,10,23] all consider general distance measures and allow non-convex regularizers. However, existence and stability of minimizing (2) are shown under assumptions slightly different from (W1)–(W5). Below we therefore give a short proof of the existence and stability results.
Theorem 1
(Existence and Stability). Let Assumption 1 hold. Then for all $y \in Y$, $\alpha > 0$, $n \in \mathbb{N}$ the following assertions hold true:
(a) The set $\operatorname{argmin} \mathcal{T}_{y,\alpha,n}$ is nonempty.
(b) Let $(y_k)_{k \in \mathbb{N}} \in Y^{\mathbb{N}}$ with $y_k \to y$ and consider $x_k \in \operatorname{argmin} \mathcal{T}_{y_k,\alpha,n}$. Then:
  • $(x_k)_{k \in \mathbb{N}}$ has at least one weak accumulation point.
  • Every weak accumulation point of $(x_k)_{k \in \mathbb{N}}$ is a minimizer of $\mathcal{T}_{y,\alpha,n}$.
(c) The statements in (a), (b) also hold for $\mathcal{T}_{y,\alpha}$ in place of $\mathcal{T}_{y,\alpha,n}$.
Proof. 
Since (W1), (W6)–(W9) for $\mathcal{T}_{y,\alpha,n}$ with fixed $n \in \mathbb{N}$ give the same assumptions as (W1), (W3)–(W5) for the non-discrete counterpart $\mathcal{T}_{y,\alpha}$, it is sufficient to verify (a), (b) for the latter. Existence of minimizers follows from (W1), (W2e), (W3)–(W5), because these items imply that $\mathcal{T}_{y,\alpha}$ is a wslsc coercive functional defined on a nonempty weakly sequentially closed subset of a reflexive Banach space. To show stability one notes that, according to (W2a), for all $x \in X$ we have
$\mathcal{D}(\mathbf{A} x_k, y) + \alpha \mathcal{R}(x_k) \leq \tau \mathcal{D}(\mathbf{A} x_k, y_k) + \alpha \mathcal{R}(x_k) + \tau \mathcal{D}(y, y_k) \leq \tau \mathcal{D}(\mathbf{A} x, y_k) + \tau \alpha \mathcal{R}(x) + \tau \mathcal{D}(y, y_k).$
According to (W2c), (W2d), (W5) there exists $x \in X$ such that the right hand side is bounded, which by (W5) shows that $(x_k)_{k \in \mathbb{N}}$ has a weak accumulation point. Following the standard proof [2] (Theorem 3.23) shows that the weak accumulation points of $(x_k)_{k \in \mathbb{N}}$ are minimizers of $\mathcal{T}_{y,\alpha}$. This uses the fact that the weak topology is indeed weaker than the norm topology, and that the involved functionals are wslsc.    □
In the following we write $x_{\alpha,n}^\delta$ for minimizers of $\mathcal{T}_{y^\delta,\alpha,n}$. For $y \in Y$ we call $x^+ \in \operatorname{argmin} \{\mathcal{R}(x) \mid x \in X \wedge \mathbf{A} x = y\}$ an $\mathcal{R}$-minimizing solution of $\mathbf{A} x = y$.
Lemma 1
(Existence of $\mathcal{R}$-minimizing solutions). Let Assumption 1 hold. For any $y \in \mathbf{A}(\mathbb{D})$ an $\mathcal{R}$-minimizing solution of $\mathbf{A} x = y$ exists. Likewise, if $n \in \mathbb{N}$ and $y \in \mathbf{A}_n(\mathbb{D})$, an $\mathcal{R}_n$-minimizing solution of $\mathbf{A}_n x = y$ exists.
Proof. 
Again, it is sufficient to verify the claim for $\mathcal{R}$-minimizing solutions. Because $y \in \mathbf{A}(\mathbb{D})$, the set $\mathbf{A}^{-1}(\{y\}) = \{x \in X \mid \mathbf{A} x = y\}$ is non-empty. Hence we can choose a sequence $(x_k)_{k \in \mathbb{N}}$ in $\mathbf{A}^{-1}(\{y\})$ with $\mathcal{R}(x_k) \to \inf \{\mathcal{R}(x) \mid x \in X \wedge \mathbf{A} x = y\}$. Due to (W2b), $(x_k)_{k \in \mathbb{N}}$ is contained in $\{x \in X \mid \mathcal{D}(\mathbf{A} x, y) + \alpha \mathcal{R}(x) \leq C\}$ for some $C > 0$, which is bounded according to (W5). By (W1), $X$ is reflexive and therefore $(x_k)_{k \in \mathbb{N}}$ has a weak accumulation point $x^+$. From (W1), (W4), (W3) we conclude that $x^+$ is an $\mathcal{R}$-minimizing solution of $\mathbf{A} x = y$. The case of $\mathcal{R}_n$-minimizing solutions follows analogously.    □

2.2. Convergence

Next we prove that discrete NETT converges as the noise level goes to zero and the discretization as well as the regularization parameter are chosen properly. We write $\mathbb{D}_{n,M} := \{x \in \mathbb{D} \cap X_n \mid \mathcal{R}_n(x) \leq M\}$ and formulate the following approximation conditions for obtaining convergence.
Assumption 2
(Conditions for convergence).
The element $x^+ \in \mathbb{D}$ satisfies the following for all $M > 0$:
(C1) There exist $z_n \in \mathbb{D} \cap X_n$ for $n \in \mathbb{N}$ with $\lambda_n := |\mathcal{R}_n(z_n) - \mathcal{R}(x^+)| \to 0$.
(C2) $\rho_n := \sup_{x \in \mathbb{D}_{n,M}} |\mathcal{R}_n(x) - \mathcal{R}(x)| \to 0$.
(C3) $\gamma_n := \mathcal{D}(\mathbf{A}_n z_n, \mathbf{A} x^+) \to 0$.
(C4) $a_n := \sup_{x \in \mathbb{D}_{n,M}} |\mathcal{D}(\mathbf{A}_n x, \mathbf{A} x^+) - \mathcal{D}(\mathbf{A} x, \mathbf{A} x^+)| \to 0$.
Conditions (C1) and (C3) concern the approximation of the true unknown $x^+$ with elements of the discretization spaces that is compatible with the discretization of the forward operator and the regularizer. Conditions (C2) and (C4) are uniform approximation properties of the operator and the regularizer on $\mathcal{R}_n$-bounded sets; a simple illustrative special case is given below.
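To illustrate these conditions with a simple special case of our own (not taken from the original text), suppose $X$ is a Hilbert space, $(X_n)_{n \in \mathbb{N}}$ is an increasing sequence of finite dimensional subspaces with dense union, $P_n$ is the orthogonal projection onto $X_n$, and the operator and regularizer are not discretized, $\mathbf{A}_n = \mathbf{A}$ and $\mathcal{R}_n = \mathcal{R}$. Then (C2) and (C4) hold trivially, and choosing $z_n := P_n x^+$ (assuming $P_n x^+ \in \mathbb{D}$), conditions (C1) and (C3) reduce to

```latex
\lambda_n = \bigl|\mathcal{R}(P_n x^+) - \mathcal{R}(x^+)\bigr| \to 0,
\qquad
\gamma_n = \mathcal{D}\bigl(\mathbf{A} P_n x^+, \mathbf{A} x^+\bigr) \to 0 .
```

The second condition holds by (W2d) whenever $\mathbf{A}$ is norm continuous, since $P_n x^+ \to x^+$ in norm; the first one requires $\mathcal{R}$ to be continuous along this particular sequence.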
Theorem 2
(Convergence). Let (W1)–(W9) hold, let $y \in \mathbf{A}(\mathbb{D})$ and let $x^+$ be an $\mathcal{R}$-minimizing solution of $\mathbf{A} x = y$ that satisfies (C1)–(C4). Moreover, suppose $(\delta_k)_{k \in \mathbb{N}} \in (0,\infty)^{\mathbb{N}}$ converges to zero and $(y_k)_{k \in \mathbb{N}} \in Y^{\mathbb{N}}$ satisfies $\mathcal{D}(y, y_k) \leq \delta_k$. Choose $(\alpha_k)_{k \in \mathbb{N}}$ and $(n_k)_{k \in \mathbb{N}}$ such that as $k \to \infty$ we have
$\alpha_k \to 0$  (4)
$n_k \to \infty$  (5)
$(\delta_k + \mathcal{D}(\mathbf{A}_{n_k} z_{n_k}, y)) / \alpha_k \to 0.$  (6)
Then for $x_k \in \operatorname{argmin} \mathcal{T}_{y_k,\alpha_k,n_k}$ the following hold:
(a) $(x_k)_{k \in \mathbb{N}}$ has a weakly convergent subsequence $(x_{\sigma(k)})_{k \in \mathbb{N}}$.
(b) The weak limit of $(x_{\sigma(k)})_{k \in \mathbb{N}}$ is an $\mathcal{R}$-minimizing solution of $\mathbf{A} x = y$.
(c) $\mathcal{R}_{\sigma(k)}(x_{\sigma(k)}) \to \mathcal{R}(x)$, where $x$ is the weak limit of $(x_{\sigma(k)})_{k \in \mathbb{N}}$.
(d) If the $\mathcal{R}$-minimizing solution of $\mathbf{A} x = y$ is unique, then $(x_k)_{k \in \mathbb{N}}$ converges weakly to $x^+$.
Proof. 
For convenience and with some abuse of notation we use the abbreviations $\mathcal{R}_k := \mathcal{R}_{n_k}$, $\mathbf{A}_k := \mathbf{A}_{n_k}$, $a_k := a_{n_k}$, $z_k := z_{n_k}$ and $\rho_k := \rho_{n_k}$. Because $x_k$ is a minimizer of the discrete NETT functional $\mathcal{T}_{y_k,\alpha_k,n_k}$, by (W2) we have
$\mathcal{D}(\mathbf{A}_k x_k, y_k) + \alpha_k \mathcal{R}_k(x_k) \leq \mathcal{D}(\mathbf{A}_k z_k, y_k) + \alpha_k \mathcal{R}_k(z_k) \leq \tau \mathcal{D}(\mathbf{A}_k z_k, y) + \tau \mathcal{D}(y, y_k) + \alpha_k \mathcal{R}_k(z_k) \leq \tau \mathcal{D}(\mathbf{A}_k z_k, y) + \tau \delta_k + \alpha_k \mathcal{R}_k(z_k).$
According to (C1) and (4), we get
$\mathcal{D}(\mathbf{A}_k x_k, y_k) \leq \tau (\mathcal{D}(\mathbf{A}_k z_k, y) + \delta_k) + \alpha_k \mathcal{R}_k(z_k),$  (7)
$\mathcal{R}_k(x_k) \leq \tau \cdot \frac{\mathcal{D}(\mathbf{A}_k z_k, y) + \delta_k}{\alpha_k} + \mathcal{R}_k(z_k).$  (8)
According to (C1), (C3), (5), (6), the right hand side in (7) converges to zero and the right hand side in (8) converges to $\mathcal{R}(x^+)$. Together with (C2) we obtain $\limsup_k \mathcal{R}(x_k) \leq \limsup_k (\mathcal{R}_k(x_k) + \rho_k) \leq \mathcal{R}(x^+)$ and $\mathcal{D}(\mathbf{A} x_k, y) \leq \mathcal{D}(\mathbf{A}_k x_k, y) + a_k \leq \tau \mathcal{D}(\mathbf{A}_k x_k, y_k) + a_k + \tau \delta_k \to 0$. This shows that $(\mathcal{D}(\mathbf{A} x_k, y) + \mathcal{R}(x_k))_{k \in \mathbb{N}}$ is bounded and by (W1), (W9) there exists a weakly convergent subsequence $(x_{\sigma(k)})_{k \in \mathbb{N}}$. We denote the weak limit by $x \in X$. From (W2), (W4) we obtain $\mathbf{A} x = y$. The weak lower semi-continuity of $\mathcal{R}$ assumed in (W3) shows
$\mathcal{R}(x) \leq \liminf_{k} \mathcal{R}(x_{\sigma(k)}) \leq \limsup_{k} \mathcal{R}(x_{\sigma(k)}) \leq \limsup_{k} \big( \mathcal{R}_{\sigma(k)}(x_{\sigma(k)}) + \rho_{\sigma(k)} \big) \leq \mathcal{R}(x^+).$
Consequently, $x$ is an $\mathcal{R}$-minimizing solution of $\mathbf{A} x = y$ and $\mathcal{R}(x_{\sigma(k)}) \to \mathcal{R}(x)$. If the $\mathcal{R}$-minimizing solution is unique, then $x^+$ is the only weak accumulation point of $(x_k)_{k \in \mathbb{N}}$, which concludes the proof.    □

2.3. Convergence Rates

Next we derive quantitative error estimates (convergence rates) in terms of the absolute Bregman distance. Recall that a functional $\mathcal{R} \colon X \to [0,\infty]$ is Gâteaux differentiable at $x \in X$ if the directional derivative $\mathcal{R}'(x)(h) := \lim_{t \to 0} (\mathcal{R}(x + t h) - \mathcal{R}(x))/t$ exists for every $h \in X$. We denote by $\mathcal{R}'(x)$ the Gâteaux derivative of $\mathcal{R}$ at $x$. In [9] we introduced the absolute Bregman distance $\mathcal{B}_\mathcal{R}(\cdot, x) \colon X \to [0,\infty]$ of a Gâteaux differentiable functional $\mathcal{R} \colon X \to [0,\infty]$ at $x \in X$, defined by
$\forall z \in X \colon \quad \mathcal{B}_\mathcal{R}(z, x) := |\mathcal{R}(z) - \mathcal{R}(x) - \mathcal{R}'(x)(z - x)|.$
We write $\sup_{y^\delta} H(y^\delta) := \sup \{H(y^\delta) \mid y^\delta \in Y \wedge \mathcal{D}(\mathbf{A} x^+, y^\delta) \leq \delta\}$. Convergence rates in terms of the Bregman distance are derived under a smoothness assumption on the true solution in the form of a certain variational inequality. More precisely, we assume the following:
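As a simple illustration (ours, not from the original text), for the quadratic functional $\mathcal{R}(x) = \tfrac{1}{2}\|x\|^2$ on a Hilbert space the Gâteaux derivative is $\mathcal{R}'(x)(h) = \langle x, h \rangle$, and the absolute Bregman distance reduces to the squared norm distance:

```latex
\mathcal{B}_{\mathcal{R}}(z, x)
  = \bigl| \tfrac{1}{2}\|z\|^2 - \tfrac{1}{2}\|x\|^2 - \langle x, z - x \rangle \bigr|
  = \tfrac{1}{2}\|z - x\|^2 .
```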
Assumption 3
(Conditions for convergence rates).
The element $x^+ \in \mathbb{D}$ satisfies the following for all $M, \delta > 0$:
(R1) Items (C1), (C2) hold.
(R2) $\gamma_{n,\delta} := \sup_{y^\delta} |\mathcal{D}(\mathbf{A}_n z_n, y^\delta) - \mathcal{D}(\mathbf{A} x^+, y^\delta)| \to 0$.
(R3) $a_{n,\delta} := \sup_{y^\delta} \sup_{x \in \mathbb{D}_{n,M}} |\mathcal{D}(\mathbf{A}_n x, y^\delta) - \mathcal{D}(\mathbf{A} x, y^\delta)| \to 0$.
(R4) $\mathcal{R}$ is Gâteaux differentiable at $x^+$.
(R5) There exist a concave, continuous, strictly increasing $\varphi \colon [0,\infty) \to [0,\infty)$ with $\varphi(0) = 0$ and $\epsilon, \beta > 0$ such that for all $x \in X$
$|\mathcal{R}(x) - \mathcal{R}(x^+)| \leq \epsilon \;\Longrightarrow\; \beta \, \mathcal{B}_\mathcal{R}(x, x^+) \leq \mathcal{R}(x) - \mathcal{R}(x^+) + \varphi\big(\mathcal{D}(\mathbf{A} x, \mathbf{A} x^+)\big).$
According to (R5), the inverse function $\varphi^{-1} \colon [0,\infty) \to [0,\infty)$ exists and is convex. We denote by $\varphi^*(s) := \sup \{s t - \varphi^{-1}(t) \mid t \geq 0\}$ its Fenchel conjugate.
Proposition 1
(Error estimates). Let $y \in \mathbf{A}(\mathbb{D})$ and let $x^+$ be an $\mathcal{R}$-minimizing solution of $\mathbf{A} x = y$ such that (W1)–(W9) and (R1)–(R5) are satisfied. For $y^\delta \in Y$ with $\mathcal{D}(y, y^\delta) \leq \delta$ let $x_{\alpha,n}^\delta \in \operatorname{argmin} \mathcal{T}_{y^\delta,\alpha,n}$. Then, for sufficiently small $\delta, \alpha > 0$ and sufficiently large $n \in \mathbb{N}$, we have the error estimate
$\beta \, \mathcal{B}_\mathcal{R}(x_{\alpha,n}^\delta, x^+) \leq \frac{a_{n,\delta} + \gamma_{n,\delta} + \delta}{\alpha} + \rho_n + \lambda_n + \varphi(\tau \delta) + \frac{\varphi^*(\tau \alpha)}{\tau \alpha}.$  (10)
Proof. 
According to Theorem 2 we can assume $|\mathcal{R}(x_{\alpha,n}^\delta) - \mathcal{R}(x^+)| < \epsilon$, and with (R5) we obtain
$\alpha \beta \, \mathcal{B}_\mathcal{R}(x_{\alpha,n}^\delta, x^+) \leq \alpha \mathcal{R}(x_{\alpha,n}^\delta) - \alpha \mathcal{R}(x^+) + \alpha \varphi(\mathcal{D}(\mathbf{A} x_{\alpha,n}^\delta, y)) \leq \alpha \mathcal{R}_n(x_{\alpha,n}^\delta) - \alpha \mathcal{R}_n(z_n) + \alpha \rho_n + \alpha \lambda_n + \alpha \varphi(\mathcal{D}(\mathbf{A} x_{\alpha,n}^\delta, y)) \leq \mathcal{D}(\mathbf{A}_n z_n, y^\delta) - \mathcal{D}(\mathbf{A}_n x_{\alpha,n}^\delta, y^\delta) + \alpha \rho_n + \alpha \lambda_n + \alpha \varphi(\mathcal{D}(\mathbf{A} x_{\alpha,n}^\delta, y)) \leq \delta - \mathcal{D}(\mathbf{A} x_{\alpha,n}^\delta, y^\delta) + \gamma_{n,\delta} + a_{n,\delta} + \alpha \rho_n + \alpha \lambda_n + \alpha \varphi(\tau \delta) + \alpha \varphi(\tau \mathcal{D}(\mathbf{A} x_{\alpha,n}^\delta, y^\delta)) \leq \delta + \gamma_{n,\delta} + a_{n,\delta} + \alpha \rho_n + \alpha \lambda_n + \alpha \varphi(\tau \delta) + \tau^{-1} \varphi^*(\tau \alpha).$
For the second inequality we used (C1) and (C2). We have $\mathcal{D}(\mathbf{A}_n x_{\alpha,n}^\delta, y^\delta) + \alpha \mathcal{R}_n(x_{\alpha,n}^\delta) \leq \mathcal{D}(\mathbf{A}_n z_n, y^\delta) + \alpha \mathcal{R}_n(z_n)$ and thus obtain an estimate for $\mathcal{R}_n(x_{\alpha,n}^\delta) - \mathcal{R}_n(z_n)$, which we used for the third inequality. For the next inequality we used (R2) and (R3), together with (W2a) and the monotonicity and subadditivity of the concave function $\varphi$ (note $\varphi(0) = 0$), to split $\alpha \varphi(\mathcal{D}(\mathbf{A} x_{\alpha,n}^\delta, y)) \leq \alpha \varphi(\tau \delta) + \alpha \varphi(\tau \mathcal{D}(\mathbf{A} x_{\alpha,n}^\delta, y^\delta))$. Finally, we used Young's inequality $\alpha \varphi(\tau t) \leq t + \tau^{-1} \varphi^*(\tau \alpha)$ for the last step. Dividing by $\alpha$ yields (10).    □
Remark 1.
The error estimate (10) includes the approximation quality of the discrete or inexact forward operator $\mathbf{A}_n$ and the discrete or inexact regularizer $\mathcal{R}_n$, described by $a_{n,\delta}$ and $\rho_n$, respectively. What might be unexpected at first is the inclusion of the two additional quantities $\lambda_n$ and $\gamma_{n,\delta}$. These factors both arise from the approximation of $X$ by the finite dimensional spaces $X_n$, where $\gamma_{n,\delta}$ reflects the approximation accuracy in the image of the operator $\mathbf{A}$ and $\lambda_n$ the approximation accuracy with respect to the true regularization functional $\mathcal{R}$. Note that in the case where the forward operator, the regularizer and the solution space $X$ are given precisely, we have $a_{n,\delta} = \gamma_{n,\delta} = \lambda_n = \rho_n = 0$. In this particular case we recover the estimate derived for the NETT in [9].
Theorem 3
(Convergence rates). Let the assumptions of Proposition 1 hold, consider the parameter choice rule $\alpha(\delta) \sim \delta / \varphi(\delta)$ and let the approximation errors satisfy $a_{n,\delta} + \gamma_{n,\delta} = O(\delta)$ and $\rho_n + \lambda_n = O(\varphi(\tau \delta))$. Then we have the convergence rate
$\mathcal{B}_\mathcal{R}(x_{\alpha(\delta),n(\delta)}^\delta, x^+) = O(\varphi(\tau \delta)).$
Proof. 
Noting that $\varphi^*(\tau \delta / \varphi(\tau \delta)) / \delta$ remains bounded as $\delta \to 0$, this directly follows from Proposition 1.    □
Next we verify that a variational inequality of the form (R5) is satisfied with $\varphi(t) = c \sqrt{t}$ under a typical source-like condition.
Lemma 2
(Variational inequality under source condition). Let $\mathcal{R}$, $\mathbf{A}$ be Gâteaux differentiable at $x^+ \in X$, consider the distance measure $\mathcal{D}(y_1, y_2) = \|y_1 - y_2\|^2$ and assume there exist $\eta \in Y$ and $c_1, c_2, \epsilon > 0$ with $c_1 \|\eta\| < 1$ such that for all $x \in X$ with $|\mathcal{R}(x) - \mathcal{R}(x^+)| \leq \epsilon$ we have
$\mathcal{R}'(x^+) = \mathbf{A}'(x^+)^* \eta$
$\|\mathbf{A} x - \mathbf{A} x^+ - \mathbf{A}'(x^+)(x - x^+)\| \leq c_1 \, \mathcal{B}_\mathcal{R}(x, x^+)$
$\mathcal{R}(x^+) - \mathcal{R}(x) \leq c_2 \, \|\mathbf{A} x - \mathbf{A} x^+\|.$  (12)
Then (R5) holds with $\varphi(t) = (\|\eta\| + 2 c_2) \sqrt{t}$ and $\beta = 1 - c_1 \|\eta\|$.
Proof. 
Let $x \in X$ with $|\mathcal{R}(x) - \mathcal{R}(x^+)| \leq \epsilon$. Using the Cauchy–Schwarz inequality and Equation (12), we can estimate
$|\langle \mathcal{R}'(x^+), x - x^+ \rangle| \leq \|\mathbf{A}'(x^+)(x - x^+)\| \, \|\eta\| \leq \|\mathbf{A} x - \mathbf{A} x^+\| \, \|\eta\| + \|\mathbf{A} x - \mathbf{A} x^+ - \mathbf{A}'(x^+)(x - x^+)\| \, \|\eta\| \leq \|\mathbf{A} x - \mathbf{A} x^+\| \, \|\eta\| + c_1 \|\eta\| \, \mathcal{B}_\mathcal{R}(x, x^+).$
Additionally, if $\mathcal{R}(x) \geq \mathcal{R}(x^+)$, we have $|\mathcal{R}(x) - \mathcal{R}(x^+)| = \mathcal{R}(x) - \mathcal{R}(x^+)$, and on the other hand, if $\mathcal{R}(x) < \mathcal{R}(x^+)$, we have $|\mathcal{R}(x) - \mathcal{R}(x^+)| = \mathcal{R}(x) - \mathcal{R}(x^+) + 2 (\mathcal{R}(x^+) - \mathcal{R}(x)) \leq \mathcal{R}(x) - \mathcal{R}(x^+) + 2 c_2 \|\mathbf{A} x - \mathbf{A} x^+\|$. Putting this together we get
$\mathcal{B}_\mathcal{R}(x, x^+) \leq |\mathcal{R}(x) - \mathcal{R}(x^+)| + |\langle \mathcal{R}'(x^+), x - x^+ \rangle| \leq \mathcal{R}(x) - \mathcal{R}(x^+) + (\|\eta\| + 2 c_2) \|\mathbf{A} x - \mathbf{A} x^+\| + c_1 \|\eta\| \, \mathcal{B}_\mathcal{R}(x, x^+),$
and thus $(1 - c_1 \|\eta\|) \, \mathcal{B}_\mathcal{R}(x, x^+) \leq \mathcal{R}(x) - \mathcal{R}(x^+) + (\|\eta\| + 2 c_2) \|\mathbf{A} x - \mathbf{A} x^+\|$. Since $\mathcal{D}(\mathbf{A} x, \mathbf{A} x^+) = \|\mathbf{A} x - \mathbf{A} x^+\|^2$, this is (R5) with $\varphi(t) = (\|\eta\| + 2 c_2) \sqrt{t}$ and $\beta = 1 - c_1 \|\eta\|$.    □
Corollary 1
(Convergence rates under source condition). Let the conditions of Lemma 2 hold and suppose
$\alpha(\delta) \sim \sqrt{\delta}$
$|\mathcal{R}_{n(\delta)}(z_{n(\delta)}) - \mathcal{R}(x^+)| = O(\sqrt{\delta})$
$\sup \{|\mathcal{R}_{n(\delta)}(x) - \mathcal{R}(x)| \mid x \in \mathbb{D}_{n(\delta),M}\} = O(\sqrt{\delta})$
$\|\mathbf{A}_{n(\delta)} z_{n(\delta)} - \mathbf{A} x^+\| = O(\sqrt{\delta})$
$\sup \{\|\mathbf{A}_{n(\delta)} x - \mathbf{A} x\| \mid x \in \mathbb{D}_{n(\delta),M}\} = O(\sqrt{\delta})$
$\sup \{\|\mathbf{A}_{n(\delta)} x\| \mid x \in \mathbb{D}_{n(\delta),M}\} < \infty.$
Then we have the convergence rate
$\mathcal{B}_\mathcal{R}(x_{\alpha(\delta),n(\delta)}^\delta, x^+) = O(\sqrt{\delta}).$
Proof. 
This follows from Theorem 3 and Lemma 2. Note that the conditions above are formulated in terms of the norm $\|\cdot\|$, while $\mathcal{D}(y_1, y_2) = \|y_1 - y_2\|^2$ uses the squared norm; thus the approximation rates for the terms concerning $\mathbf{A}_{n(\delta)}$ are of order $\sqrt{\delta}$ instead of $\delta$ as in Theorem 3.    □
In Corollary 1, the approximation quality of the discrete operator $\mathbf{A}_n$ and of the discrete and inexact regularization functional $\mathcal{R}_n$ need to be of the same order.

3. Application to a Limited Data Problem in PAT

Photoacoustic tomography (PAT) is an emerging non-invasive coupled-physics biomedical imaging technique with high contrast and high spatial resolution [25,26]. It works by illuminating a semi-transparent sample with short optical pulses, which causes heating of the sample followed by expansion and the subsequent emission of an acoustic wave. Sensors on the outside of the sample measure the acoustic wave, and these measurements are then used to reconstruct the initial pressure $f \colon \mathbb{R}^d \to \mathbb{R}$, which provides information about the interior of the object. The cases $d = 2$ and $d = 3$ are relevant for applications in PAT. Here we only consider the case $d = 2$ and assume a circular measurement geometry. The 2D case arises, for example, when using integrating line detectors in PAT [26].

3.1. Discrete Forward Operator

The pressure data $p \colon \mathbb{R}^2 \times [0,\infty) \to \mathbb{R}$ satisfies the wave equation $(\partial_t^2 - \Delta) p(r,t) = 0$ for $(r,t) \in \mathbb{R}^2 \times (0,\infty)$ with initial data $p(\cdot,0) = f$ and $\partial_t p(\cdot,0) = 0$. In the case of circular measurement geometry one assumes that $f$ vanishes outside the unit disc $D_1 := \{r \in \mathbb{R}^2 \mid \|r\| < 1\}$ and that the measurement sensors are located on the boundary $\partial D_1 = S^1$. We assume that the phantom does not generate any data for some region $I \subseteq D_1$, for example when the acoustic pressure generated inside $I$ is too small to be recorded. This masked PAT problem consists in the recovery of the function $f$ from sampled noisy measurements of $g = \mathbf{W}(\mathbb{1}_{I^c} f)$, where $\mathbf{W}$ denotes the solution operator of the wave equation and $\mathbb{1}_{I^c}$ the indicator function of $I^c := \mathbb{R}^2 \setminus I$. Note that the resulting inverse problem can be seen as the combination of an inpainting problem and an inverse problem for the wave equation.
In order to implement the PAT forward operator we use a basis ansatz $f(r) = \sum_{i=1}^{N \times N} x_i \, \psi(r - r_i)$, where the $x_i \in \mathbb{R}$ are basis coefficients, $\psi \colon \mathbb{R}^2 \to \mathbb{R}$ is a generalized Kaiser–Bessel (KB) function and $r_i = (i - 1)/N$ with $i = (i_1, i_2) \in \{1, \dots, N\}^2$. The generalized KB functions are popular in tomographic inverse problems [27,28,29,30] and denote radially symmetric functions with support in $D_R$ defined by
$\psi(r) := \dfrac{\left(1 - \|r\|^2 / R^2\right)^{m/2} \, I_m\!\left(\gamma \sqrt{1 - \|r\|^2 / R^2}\right)}{I_m(\gamma)} \quad \text{for } \|r\| \leq R.$
Here $I_m$ is the modified Bessel function of the first kind of order $m \in \mathbb{N}$, and the parameters $\gamma > 0$ and $R$ denote the window taper and support radius, respectively. Since $\mathbf{W}$ is linear, we have $\mathbf{W} f = \sum_{i=1}^{N \times N} x_i \, \mathbf{W}(\psi(\cdot - r_i))$. For convenience we use a pseudo-3D approach, where we use the 3D solution of $\mathbf{W} \psi$ for which there exists an analytical representation [29]. Denote by $s_k$ uniformly spaced sensor locations on $S^1$ and by $t_j > 0$ uniformly sampled measurement times in $[0,2]$. Define the $N_t N_s \times N^2$ model matrix by $\mathbf{W}_{N_t(k-1)+j,\, N(i_1-1)+i_2} = \mathbf{W}(\psi(\cdot - r_i))(s_k, t_j)$ and an $N^2 \times N^2$ diagonal matrix by $(\mathbf{M}_I)_{N(i_1-1)+i_2,\, N(i_1-1)+i_2} = 1$ if $r_i \in I^c$ and zero otherwise. Let $\mathbf{W} \mathbf{M}_I = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^\top$ be the singular value decomposition. We then consider the discrete forward matrix $\mathbf{A} = \mathbf{U} \tilde{\boldsymbol{\Sigma}} \mathbf{V}^\top$, where $\tilde{\boldsymbol{\Sigma}}$ is the diagonal matrix derived from $\boldsymbol{\Sigma}$ by setting singular values smaller than some threshold $\sigma$ to zero. This allows us to easily calculate $\mathbf{A}^+ = \mathbf{V} \tilde{\boldsymbol{\Sigma}}^+ \mathbf{U}^\top$, where $\tilde{\boldsymbol{\Sigma}}^+$ is obtained by inverting all diagonal elements of $\tilde{\boldsymbol{\Sigma}}$ that are greater than zero. In our experiments we use $N = N_t = 128$, $N_s = 150$ and take $I$ fixed as a diagonal stripe of width $0.34$. A small code sketch of this truncation step is given below.
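The following NumPy sketch illustrates the truncation step described above. The model matrix `W` and the diagonal mask `M_I` are assumed to be given (their assembly from the analytical wave solution is not reproduced here), and all names are placeholders rather than the authors' implementation.

```python
import numpy as np

def truncated_forward_and_pinv(W: np.ndarray, M_I: np.ndarray, sigma: float):
    """Build A = U Sigma_sigma V^T from W @ M_I by zeroing singular values below sigma,
    and return A together with its pseudo-inverse A^+ = V Sigma_sigma^+ U^T."""
    U, s, Vt = np.linalg.svd(W @ M_I, full_matrices=False)
    s_trunc = np.where(s >= sigma, s, 0.0)                  # discard small singular values
    s_inv = np.divide(1.0, s_trunc, out=np.zeros_like(s_trunc), where=s_trunc > 0)
    A = U @ np.diag(s_trunc) @ Vt                           # regularized forward matrix
    A_pinv = Vt.T @ np.diag(s_inv) @ U.T                    # pseudo-inverse
    return A, A_pinv

# Hypothetical usage with small random stand-ins for the model matrix and the mask.
W_demo = np.random.randn(300, 256)
mask = np.diag((np.random.rand(256) > 0.2).astype(float))
A, A_pinv = truncated_forward_and_pinv(W_demo, mask, sigma=1e-3)
```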

3.2. Discrete NETT

We consider the discrete NETT with discrepancy term $\mathcal{D}(\mathbf{A} x, y^\delta) = \|\mathbf{A} x - y^\delta\|_2^2 / 2$ and regularizer given by
$\mathcal{R}^{(m)}(x) = \|x - \Phi^{(m)}(x)\|_2^2 + \beta \|x\|_{1,\epsilon},$
where $\|x\|_{1,\epsilon} := \sum_{i_1,i_2=1}^{128} \sqrt{|x_{i_1+1,i_2} - x_{i_1,i_2}|^2 + |x_{i_1,i_2+1} - x_{i_1,i_2}|^2 + \epsilon^2}$ with $\epsilon > 0$ is a smooth version of the total variation [31] and $\Phi^{(m)}$ is a learnable network. We take $\Phi^{(m)}$ as the U-Net [32] with residual connection, which was first applied to PAT image reconstruction in [33]. Here $m \in \mathbb{N}$ stands for the number of down-/upsampling steps performed in the U-Net (the original one had $m = 4$); larger $m$ thus yields a deeper network with more parameters. We generate training data consisting of square-shaped rings with random profile and random location. See Figure 1 for an example of one such phantom (note that all plots in signal space use the same colorbar) and the corresponding data. We obtain a set of phantoms $x_1, \dots, x_{1000}$ and corresponding basic reconstructions $h_a := \mathbf{A}^+(\mathbf{A} x_a + \eta_a)$, where $\mathbf{A}^+$ is the pseudo-inverse and $\eta_a$ is Gaussian noise with standard deviation $\sigma \|\mathbf{A} x_a\|$ with $\sigma = 0.01$. The networks are trained by minimizing $\sum_{a=1}^{1000} \|\Phi^{(m)}(h_a) - x_a\|_1 + \gamma \|\Phi^{(m)}(x_a) - x_a\|_1$, where we use the Adam optimizer with learning rate $0.01$ and $\gamma = 0.1$. The rationale behind this loss is that we want the trained regularizer to give small values for $x_a$ and large values for $h_a$; a hedged sketch of this training step is given below. The strategy is similar to [9], but we use the final output of the network for the regularizer as proposed in [34]. To minimize the resulting NETT functional we use Algorithm 1, which implements a forward-backward scheme [35]. The most expensive step of this algorithm is the matrix inversion, but since we use a constant stepsize one also has the option to calculate the inverse of the matrix only once and reuse it. Thus one only has to perform two matrix-vector multiplications, which are of order $O(N^2 N_t N_s)$ and $O(N^4)$, since $N^2$ is the dimension of our phantoms. On the other hand, calculating the gradient has similar complexity to applying the neural network, which is of order $O(F^2 L N^2)$, with $F$ the number of convolution channels and $L$ the number of layers.
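The following is a hedged sketch of one training step for $\Phi^{(m)}$ as described above; the network is replaced by a small placeholder, tensor names are hypothetical, and data loading, batching and the exact U-Net architecture are omitted.

```python
import torch
import torch.nn as nn

# Placeholder for the residual U-Net Phi^(m); the actual architecture is described in Section 3.2.
Phi = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
gamma = 0.1
optimizer = torch.optim.Adam(Phi.parameters(), lr=0.01)

def training_step(phantoms: torch.Tensor, basic_recons: torch.Tensor) -> float:
    """phantoms: x_a, basic_recons: h_a = A^+(A x_a + eta_a), both of shape (B, 1, 128, 128).
    Phi is trained to map both h_a and x_a close to x_a, so that ||x - Phi(x)|| is small
    for clean phantoms and large for reconstructions containing artifacts."""
    optimizer.zero_grad()
    loss = (Phi(basic_recons) - phantoms).abs().sum() \
         + gamma * (Phi(phantoms) - phantoms).abs().sum()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with random stand-in data.
loss_value = training_step(torch.rand(4, 1, 128, 128), torch.rand(4, 1, 128, 128))
```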
Algorithm 1: NETT optimization.
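The listing of Algorithm 1 is only available as an image in the source. The following is therefore a reconstruction sketch, under stated assumptions, of a forward-backward scheme of the kind described above: a gradient (forward) step on the regularizer is followed by an implicit (backward) step on the data term, whose linear system is factorized once and reused. The exact update rule, step size convention and variable names are assumptions, not a transcription of the authors' algorithm.

```python
import torch

def nett_forward_backward(A: torch.Tensor, y: torch.Tensor, R, x0: torch.Tensor,
                          alpha: float = 0.015, s: float = 0.25, n_iter: int = 15):
    """Minimize ||A x - y||^2 / 2 + alpha * R(x) by a forward-backward iteration.
    A acts on flattened images, R is a differentiable regularizer (e.g. R^(m))."""
    n = A.shape[1]
    # Backward (proximal) step on the data term: solve (A^T A + s Id) x = A^T y + s z.
    # The matrix is inverted only once and reused, as described in Section 3.3.
    M_inv = torch.linalg.inv(A.T @ A + s * torch.eye(n))
    ATy = A.T @ y
    x = x0.clone()
    for _ in range(n_iter):
        x = x.detach().requires_grad_(True)
        grad_R, = torch.autograd.grad(R(x), x)     # forward step: gradient of the regularizer
        z = x.detach() - (alpha / s) * grad_R
        x = M_inv @ (ATy + s * z)                  # backward step on ||A x - y||^2 / 2
    return x.detach()
```

With the values $N_{\mathrm{iter}} = 15$ and $s = 0.25$ reported in Section 3.3, this generic proximal-gradient iteration mirrors the described computational structure (one matrix factorization, two matrix-vector products and one network gradient per iteration), but it should be read as an illustration rather than the published algorithm.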

3.3. Numerical Results

For the numerical results we train two regularizers $\mathcal{R}^{(1)}$ and $\mathcal{R}^{(3)}$ as described in Section 3.2. The networks are implemented using PyTorch [36]. We also use PyTorch in order to calculate the gradient $\nabla_x \mathcal{R}^{(m)}$. We take $N_{\mathrm{iter}} = 15$, $s = 0.25$ and $x_0 = \Phi^{(m)}(\mathbf{A}^+ y)$ in Algorithm 1 and compute the inverse $(\mathbf{A}^\top \mathbf{A} + s \, \mathrm{Id})^{-1}$ only once and then use it for all examples. We set $\alpha = 0.015$ for the noise-free case, $\alpha = 0.016$ for the low noise case and $\alpha = 0.02$ for the high noise case, respectively, and selected a fixed $\beta = 15$. We expect that the NETT functional will yield better results due to data consistency, which is mainly helpful outside the masked center diagonal.
First we use the phantom from the test data shown in Figure 1. The results using post-processing and NETT are shown in Figure 2. One sees that all results for data with higher noise than used during training are not very good. This indicates that one should use similar noise levels as in the later application, even for the NETT. Figure 3 shows the average error over 10 test phantoms similar to the one in Figure 1. A careful numerical comparison of the numerical convergence rates with the theoretical results of Theorem 3 is an interesting aspect of further research. To investigate the stability of our method with respect to phantoms that are different from the training data, we create a phantom with different structures, as seen in Figure 4. As expected, the post-processing network $\Phi^{(3)}$ is not really able to reconstruct the circle objects, since they are quite different from the training data, but it also does not break down completely. On the other hand, the NETT approach yields good results due to data consistency.

4. Conclusions

We have analyzed the convergence of a discretized NETT approach and derived convergence rates under certain assumptions on the approximation quality of the involved operators. We performed numerical experiments using a limited data problem for PAT that is the combination of an inverse problem for the wave equation and an inpainting problem. To the best of our knowledge, this is the first time such a problem has been studied with deep learning. The NETT approach yields better results than post-processing for phantoms different from the training data. NETT still fails to recover some missing parts of the phantom in cases where the data contains more noise than the training data. This highlights the relevance of using different regularizers for different noise levels. Finding ways to make the regularizers less dependent on the noise level used during training is a possible future research direction. Another interesting question is whether these results can be combined with approximation error estimates for neural networks, e.g., [37,38]. It seems not obvious how these two approaches can be combined. Furthermore, studying how one can define neural network based regularizers that fulfill (12) might also be an interesting line of future research.

Author Contributions

M.H. proposed the conceptualization, framework and the long-term vision of the work. M.H. and S.A. developed the ideas, performed the formal analysis and have written and edited the paper. S.A. conducted the numerical experiments and has written the software. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by the Austrian Science Fund (FWF), project P 30747-N32.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data and code are freely available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems. In Mathematics and Its Applications; Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1996; Volume 375.
2. Scherzer, O.; Grasmair, M.; Grossauer, H.; Haltmeier, M.; Lenzen, F. Variational Methods in Imaging. In Applied Mathematical Sciences; Springer: New York, NY, USA, 2009; Volume 167.
3. Natterer, F.; Wübbeling, F. Mathematical Methods in Image Reconstruction. In Monographs on Mathematical Modeling and Computation; SIAM: Philadelphia, PA, USA, 2001; Volume 5.
4. Zhdanov, M.S. Geophysical Inverse Theory and Regularization Problems; Elsevier: Amsterdam, The Netherlands, 2002; Volume 36.
5. Morozov, V.A. Methods for Solving Incorrectly Posed Problems; Springer: New York, NY, USA, 1984.
6. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; John Wiley & Sons: Washington, DC, USA, 1977.
7. Ivanov, V.K.; Vasin, V.V.; Tanana, V.P. Theory of Linear Ill-Posed Problems and Its Applications, 2nd ed.; Inverse and Ill-Posed Problems Series; VSP: Utrecht, The Netherlands, 2002.
8. Grasmair, M. Generalized Bregman distances and convergence rates for non-convex regularization methods. Inverse Probl. 2010, 26, 115014.
9. Li, H.; Schwab, J.; Antholzer, S.; Haltmeier, M. NETT: Solving inverse problems with deep neural networks. Inverse Probl. 2020, 36, 065005.
10. Obmann, D.; Nguyen, L.; Schwab, J.; Haltmeier, M. Sparse q-regularization of Inverse Problems Using Deep Learning. arXiv 2019, arXiv:1908.03006.
11. Obmann, D.; Nguyen, L.; Schwab, J.; Haltmeier, M. Augmented NETT regularization of inverse problems. J. Phys. Commun. 2021, 5, 105002.
12. Haltmeier, M.; Nguyen, L.V. Regularization of Inverse Problems by Neural Networks. arXiv 2020, arXiv:2006.03972.
13. Lunz, S.; Öktem, O.; Schönlieb, C.B. Adversarial Regularizers in Inverse Problems; NIPS: Montreal, QC, Canada, 2018; pp. 8507–8516.
14. Mukherjee, S.; Dittmer, S.; Shumaylov, Z.; Lunz, S.; Öktem, O.; Schönlieb, C.B. Learned convex regularizers for inverse problems. arXiv 2020, arXiv:2008.02839.
15. Adler, J.; Öktem, O. Solving ill-posed inverse problems using iterative deep neural networks. Inverse Probl. 2017, 33, 124007.
16. Aggarwal, H.K.; Mani, M.P.; Jacob, M. MoDL: Model-based deep learning architecture for inverse problems. IEEE Trans. Med. Imaging 2018, 38, 394–405.
17. de Hoop, M.V.; Lassas, M.; Wong, C.A. Deep learning architectures for nonlinear operator functions and nonlinear inverse problems. arXiv 2019, arXiv:1912.11090.
18. Kobler, E.; Klatzer, T.; Hammernik, K.; Pock, T. Variational networks: Connecting variational methods and deep learning. In Proceedings of the German Conference on Pattern Recognition, Basel, Switzerland, 12–15 September 2017; Springer: Cham, Switzerland, 2017; pp. 281–293.
19. Yang, Y.; Sun, J.; Li, H.; Xu, Z. Deep ADMM-Net for Compressive Sensing MRI. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 10–18.
20. Shang, Y. Subspace confinement for switched linear systems. Forum Math. 2017, 29, 693–699.
21. Romano, Y.; Elad, M.; Milanfar, P. The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 2017, 10, 1804–1844.
22. Pöschl, C.; Resmerita, E.; Scherzer, O. Discretization of variational regularization in Banach spaces. Inverse Probl. 2010, 26, 105017.
23. Pöschl, C. Tikhonov Regularization with General Residual Term. Ph.D. Thesis, University of Innsbruck, Innsbruck, Austria, 2008.
24. Tikhonov, A.N.; Leonov, A.S.; Yagola, A.G. Nonlinear Ill-Posed Problems. In Applied Mathematics and Mathematical Computation; Translated from the Russian; Chapman & Hall: London, UK, 1998; Volumes 1, 2 and 14.
25. Kruger, R.; Lui, P.; Fang, Y.; Appledorn, R. Photoacoustic ultrasound (PAUS)—Reconstruction tomography. Med. Phys. 1995, 22, 1605–1609.
26. Paltauf, G.; Nuster, R.; Haltmeier, M.; Burgholzer, P. Photoacoustic tomography using a Mach-Zehnder interferometer as an acoustic line detector. Appl. Opt. 2007, 46, 3352–3358.
27. Matej, S.; Lewitt, R.M. Practical considerations for 3-D image reconstruction using spherically symmetric volume elements. IEEE Trans. Med. Imaging 1996, 15, 68–78.
28. Schwab, J.; Pereverzyev, S., Jr.; Haltmeier, M. A Galerkin least squares approach for photoacoustic tomography. SIAM J. Numer. Anal. 2018, 56, 160–184.
29. Wang, K.; Schoonover, R.W.; Su, R.; Oraevsky, A.; Anastasio, M.A. Discrete Imaging Models for Three-Dimensional Optoacoustic Tomography Using Radially Symmetric Expansion Functions. IEEE Trans. Med. Imaging 2014, 33, 1180–1193.
30. Wang, K.; Su, R.; Oraevsky, A.A.; Anastasio, M.A. Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography. Phys. Med. Biol. 2012, 57, 5399.
31. Acar, R.; Vogel, C.R. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Probl. 1994, 10, 1217.
32. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of MICCAI 2015, Munich, Germany, 5–9 October 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer: Cham, Switzerland, 2015; pp. 234–241.
33. Antholzer, S.; Haltmeier, M.; Schwab, J. Deep learning for photoacoustic tomography from sparse data. Inverse Probl. Sci. Eng. 2019, 27, 987–1005.
34. Antholzer, S.; Schwab, J.; Bauer-Marschallinger, J.; Burgholzer, P.; Haltmeier, M. NETT regularization for compressed sensing photoacoustic tomography. In Proceedings of Photons Plus Ultrasound: Imaging and Sensing 2019, San Francisco, CA, USA, 3–6 February 2019; Volume 10878, p. 108783B.
35. Combettes, P.L.; Pesquet, J.C. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer: Berlin/Heidelberg, Germany, 2011; pp. 185–212.
36. Paszke, A.; Gross, S. PyTorch: An Imperative Style, High-Performance Deep Learning Library; NIPS: Montreal, QC, Canada, 2018; pp. 8024–8035.
37. Hornik, K. Some new results on neural network approximation. Neural Netw. 1993, 6, 1069–1072.
38. Barron, A.R. Approximation and estimation bounds for artificial neural networks. Mach. Learn. 1994, 14, 115–133.
Figure 1. Top, from left to right: phantom, masked phantom, and initial reconstruction $\mathbf{A}^+ \mathbf{A} x$. The difference between the phantom on the left and the one in the middle shows the mask region $I \subseteq D_1$ where no data is generated. Bottom, from left to right: data without noise, low noise ($\sigma = 0.01$), and high noise ($\sigma = 0.1$).
Figure 2. Top row: reconstructions using the post-processing network $\Phi^{(1)}$. Middle row: NETT reconstructions using $\mathcal{R}^{(1)}$. Bottom row: NETT reconstructions using $\mathcal{R}^{(3)}$. From left to right: reconstructions from data without noise, low noise ($\sigma = 0.01$) and high noise ($\sigma = 0.1$).
Figure 3. Semilogarithmic plot of the mean squared errors of the NETT using $\mathcal{R}^{(1)}$ and $\mathcal{R}^{(3)}$ depending on the noise level. The crosses are the values for the phantoms in Figure 2.
Figure 4. Left column: phantom with a structure not contained in the training data (top) and pseudo-inverse reconstruction (bottom). Middle column: post-processing reconstructions with $\Phi^{(3)}$ using exact (top) and noisy data (bottom). Right column: NETT reconstructions with $\mathcal{R}^{(3)}$ using exact (top) and noisy data (bottom).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
