# Discretization of Learned NETT Regularization for Solving Inverse Problems


## Abstract


## 1. Introduction

#### 1.1. Reconstruction with Learned Regularizers

- (T1) Choose a family of desired reconstructions ${\left({x}_{i}\right)}_{i=1}^{n}$.
- (T2) For some $\mathbf{B}:\mathbb{Y}\to \mathbb{X}$, construct undesired reconstructions ${\left(\mathbf{B}\mathbf{A}{x}_{i}\right)}_{i=1}^{n}$.
- (T3) Choose a class ${\left({\Phi}_{\theta}\right)}_{\theta \in \Theta}$ of functions (networks) ${\Phi}_{\theta}:\mathbb{X}\to \mathbb{X}$.
- (T4) Determine ${\theta}^{\star}\in \Theta$ with ${\Phi}_{{\theta}^{\star}}\left({x}_{i}\right)\simeq {x}_{i}$ and ${\Phi}_{{\theta}^{\star}}\left(\mathbf{B}\mathbf{A}{x}_{i}\right)\simeq {x}_{i}$.
- (T5) Define $\mathcal{R}\left(x\right)=r\left(x,\Phi \left(x\right)\right)$ with $\Phi ={\Phi}_{{\theta}^{\star}}$ for some $r:\mathbb{X}\times \mathbb{X}\to [0,\infty ]$.
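The training steps (T1)–(T5) can be sketched numerically. The following is a minimal illustration only, not the authors' implementation: a linear map fitted by least squares stands in for the network class ${\Phi}_{\theta}$, the forward operator, back-projection $\mathbf{B}$, data model, and the choice $r(x,z)=\parallel x-z{\parallel}^{2}$ are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem in R^8: A keeps only the first four coordinates,
# B = A^T is a naive back-projection, so B A x zeroes out the second half.
d, h = 8, 4
A = np.eye(d)[:h]
B = A.T

# (T1) desired reconstructions: signals on a linear "image manifold"
# where the second half is determined by the first half, v = M u.
n = 200
M = rng.standard_normal((h, h))
U = rng.standard_normal((n, h))
X = np.hstack([U, U @ M.T])          # desired reconstructions x_i

# (T2) undesired reconstructions B A x_i (second half lost)
X_bad = X @ (B @ A).T

# (T3)+(T4) linear stand-in for the network: Phi(x) = x @ W, fitted by
# least squares so that Phi(x_i) ~ x_i and Phi(B A x_i) ~ x_i.
inputs = np.vstack([X, X_bad])
targets = np.vstack([X, X])
W, *_ = np.linalg.lstsq(inputs, targets, rcond=None)

def Phi(x):
    return x @ W

# (T5) learned regularizer with r(x, z) = ||x - z||^2
def R(x):
    return float(np.sum((x - Phi(x)) ** 2))

# R is near zero on desired reconstructions and large on the
# artifact-ridden ones, which is exactly the NETT design goal.
print(R(X[0]), R(X_bad[0]))
```

On this toy model the fitted $\Phi$ projects onto the training manifold, so $\mathcal{R}$ penalizes precisely the components destroyed by $\mathbf{B}\mathbf{A}$.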

#### 1.2. Discrete NETT

#### 1.3. Outline

## 2. Convergence Analysis

#### 2.1. Well-Posedness

**Assumption 1.**

- (W1) $\mathbb{X}$, $\mathbb{Y}$ are Banach spaces, $\mathbb{X}$ is reflexive, and $\mathbb{D}\subseteq \mathbb{X}$ is weakly sequentially closed.
- (W2) The distance measure $\mathcal{D}:\mathbb{Y}\times \mathbb{Y}\to [0,\infty ]$ satisfies
    - (a) $\exists \tau \ge 1\;\forall {y}_{1},{y}_{2},{y}_{3}\in \mathbb{Y}:\mathcal{D}({y}_{1},{y}_{2})\le \tau \mathcal{D}({y}_{1},{y}_{3})+\tau \mathcal{D}({y}_{3},{y}_{2})$.
    - (b) $\forall {y}_{1},{y}_{2}\in \mathbb{Y}:\mathcal{D}({y}_{1},{y}_{2})=0\iff {y}_{1}={y}_{2}$.
    - (c) $\forall y,\tilde{y}\in \mathbb{Y}$ and all sequences $\left({y}_{k}\right)$ in $\mathbb{Y}$: $\mathcal{D}(y,\tilde{y})<\infty \wedge \parallel \tilde{y}-{y}_{k}\parallel \to 0\Rightarrow \mathcal{D}(y,{y}_{k})\to \mathcal{D}(y,\tilde{y})$.
    - (d) $\forall y\in \mathbb{Y}$ and all sequences $\left({y}_{k}\right)$ in $\mathbb{Y}$: $\parallel {y}_{k}-y\parallel \to 0\Rightarrow \mathcal{D}({y}_{k},y)\to 0$.
    - (e) $\mathcal{D}$ is weakly sequentially lower semi-continuous (wslsc).
- (W3) $\mathcal{R}:\mathbb{X}\to [0,\infty ]$ is proper and wslsc.
- (W4) $\mathbf{A}:\mathbb{D}\subseteq \mathbb{X}\to \mathbb{Y}$ is weakly sequentially continuous.
- (W5) $\forall y,\alpha ,C:\{x\in \mathbb{X}\mid {\mathcal{T}}_{y,\alpha}\left(x\right)\le C\}$ is nonempty and bounded.
- (W6) ${\left({\mathbb{X}}_{n}\right)}_{n\in \mathbb{N}}$ is a sequence of subspaces of $\mathbb{X}$.
- (W7) ${\left({\mathbf{A}}_{n}\right)}_{n\in \mathbb{N}}$ is a family of weakly sequentially continuous operators ${\mathbf{A}}_{n}:\mathbb{D}\to \mathbb{Y}$.
- (W8) ${\left({\mathcal{R}}_{n}\right)}_{n\in \mathbb{N}}$ is a family of proper wslsc regularizers ${\mathcal{R}}_{n}:\mathbb{X}\to [0,\infty ]$.
- (W9) $\forall y,\alpha ,C,n:\{x\in {\mathbb{X}}_{n}\mid {\mathcal{T}}_{y,\alpha ,n}\left(x\right)\le C\}$ is nonempty and bounded.
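As a concrete example of a distance measure satisfying (W2a) (our illustration, not taken from the paper): the squared-norm distance $\mathcal{D}({y}_{1},{y}_{2})=\parallel {y}_{1}-{y}_{2}{\parallel}^{2}$ is not a metric, but it satisfies the quasi-triangle inequality with $\tau =2$, since ${(a+b)}^{2}\le 2{a}^{2}+2{b}^{2}$. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)

def D(y1, y2):
    """Squared-norm distance: satisfies (W2a) with tau = 2,
    because (a + b)^2 <= 2 a^2 + 2 b^2 for all real a, b."""
    return float(np.sum((y1 - y2) ** 2))

tau = 2.0
for _ in range(1000):
    y1, y2, y3 = rng.standard_normal((3, 16))
    # quasi-triangle inequality (W2a); small slack for rounding
    assert D(y1, y2) <= tau * D(y1, y3) + tau * D(y3, y2) + 1e-12
```

Conditions (W2b)–(W2e) also hold for this choice, since $\mathcal{D}$ is a continuous function of the norm.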

**Theorem 1.**

- (a) $\operatorname{argmin}{\mathcal{T}}_{y,\alpha ,n}\ne \varnothing$.
- (b) Let ${\left({y}_{k}\right)}_{k\in \mathbb{N}}\in {\mathbb{Y}}^{\mathbb{N}}$ with ${y}_{k}\to y$ and consider ${x}_{k}\in \operatorname{argmin}{\mathcal{T}}_{{y}_{k},\alpha ,n}$. Then ${\left({x}_{k}\right)}_{k\in \mathbb{N}}$ has at least one weak accumulation point, and every weak accumulation point of ${\left({x}_{k}\right)}_{k\in \mathbb{N}}$ is a minimizer of ${\mathcal{T}}_{y,\alpha ,n}$.
- (c) The statements in (a), (b) also hold for ${\mathcal{T}}_{y,\alpha}$ in place of ${\mathcal{T}}_{y,\alpha ,n}$.

**Proof.**

**Lemma 1.**

**Proof.**

#### 2.2. Convergence

**Assumption 2.**

- (C1) $\exists \left({z}_{n}\right)\in {\prod}_{n\in \mathbb{N}}(\mathbb{D}\cap {\mathbb{X}}_{n})$ with ${\lambda}_{n}:=\left|{\mathcal{R}}_{n}\left({z}_{n}\right)-\mathcal{R}\left({x}^{+}\right)\right|\to 0$.
- (C2) ${\rho}_{n}:={\sup}_{x\in {\mathbb{D}}_{n,M}}|{\mathcal{R}}_{n}\left(x\right)-\mathcal{R}\left(x\right)|\to 0$.
- (C3) ${\gamma}_{n}:=\mathcal{D}({\mathbf{A}}_{n}{z}_{n},\mathbf{A}{x}^{+})\to 0$.
- (C4) ${a}_{n}:={\sup}_{x\in {\mathbb{D}}_{n,M}}|\mathcal{D}({\mathbf{A}}_{n}x,\mathbf{A}{x}^{+})-\mathcal{D}(\mathbf{A}x,\mathbf{A}{x}^{+})|\to 0$.
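In NETT-type convergence results, Assumption 2 is typically combined with a coupling of the noise level, the regularization parameter, and the discretization index. The precise hypotheses of Theorem 2 are not reproduced in this excerpt; the following is the standard coupling, stated here as an assumption:

```latex
\mathcal{D}\bigl(\mathbf{A}x^{+}, y_{k}\bigr) \le \delta_{k},
\qquad \delta_{k} \to 0,
\qquad \alpha_{k} \to 0,
\qquad \frac{\delta_{k}}{\alpha_{k}} \to 0,
\qquad n_{k} \to \infty,
\qquad x_{k} \in \operatorname{argmin} \mathcal{T}_{y_{k},\alpha_{k},n_{k}} .
```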

**Theorem 2.**

- (a) ${\left({x}_{k}\right)}_{k\in \mathbb{N}}$ has a weakly convergent subsequence ${\left({x}_{\sigma \left(k\right)}\right)}_{k\in \mathbb{N}}$.
- (b) The weak limit of ${\left({x}_{\sigma \left(k\right)}\right)}_{k\in \mathbb{N}}$ is an $\mathcal{R}$-minimizing solution of $\mathbf{A}x=y$.
- (c) ${\mathcal{R}}_{\sigma \left(k\right)}\left({x}_{\sigma \left(k\right)}\right)\to \mathcal{R}\left({x}^{\star}\right)$, where ${x}^{\star}$ is the weak limit of ${\left({x}_{\sigma \left(k\right)}\right)}_{k\in \mathbb{N}}$.
- (d) If the $\mathcal{R}$-minimizing solution of $\mathbf{A}x=y$ is unique, then ${\left({x}_{k}\right)}_{k\in \mathbb{N}}\rightharpoonup {x}^{+}$.

**Proof.**

#### 2.3. Convergence Rates

**Assumption 3.**

- (R1) Items (C1), (C2) hold.
- (R2) ${\gamma}_{n,\delta}:={\sup}_{{y}^{\delta}}\left|\mathcal{D}({\mathbf{A}}_{n}{z}_{n},{y}^{\delta})-\mathcal{D}(\mathbf{A}{x}^{+},{y}^{\delta})\right|\to 0$.
- (R3) ${a}_{n,\delta}:={\sup}_{{y}^{\delta}}{\sup}_{x\in {\mathbb{D}}_{n,M}}|\mathcal{D}({\mathbf{A}}_{n}x,{y}^{\delta})-\mathcal{D}(\mathbf{A}x,{y}^{\delta})|\to 0$.
- (R4) $\mathcal{R}$ is Gâteaux differentiable at ${x}^{+}$.
- (R5) There exist a concave, continuous, strictly increasing $\phi :[0,\infty )\to [0,\infty )$ with $\phi \left(0\right)=0$ and $\epsilon ,\beta >0$ such that for all $x\in \mathbb{X}$: $$|\mathcal{R}\left(x\right)-\mathcal{R}\left({x}^{+}\right)|\le \epsilon \Rightarrow \beta {\mathcal{B}}_{\mathcal{R}}(x,{x}^{+})\le \mathcal{R}\left(x\right)-\mathcal{R}\left({x}^{+}\right)+\phi \left(\mathcal{D}(\mathbf{A}x,\mathbf{A}{x}^{+})\right)$$
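For orientation, a standard special case of (R5) (sketched here under the assumptions $\mathcal{D}({y}_{1},{y}_{2})=\parallel {y}_{1}-{y}_{2}\parallel$ and $\phi \left(t\right)=c\,t$; the paper's own choices may differ):

```latex
% With D(y_1, y_2) = ||y_1 - y_2|| and phi(t) = c t, condition (R5)
% reduces to the classical variational inequality
\beta \, \mathcal{B}_{\mathcal{R}}(x, x^{+})
   \le \mathcal{R}(x) - \mathcal{R}(x^{+}) + c \,\|\mathbf{A}x - \mathbf{A}x^{+}\| ,
% which, with the parameter choice alpha(delta) ~ delta, typically
% yields the Bregman-distance convergence rate
\mathcal{B}_{\mathcal{R}}\bigl(x_{\alpha(\delta)}^{\delta}, x^{+}\bigr)
   = \mathcal{O}(\delta).
```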

**Proposition 1.**

**Proof.**

**Remark 1.**

**Theorem 3.**

**Proof.**

**Lemma 2.**

**Proof.**

**Corollary 1.**

**Proof.**

## 3. Application to a Limited Data Problem in PAT

#### 3.1. Discrete Forward Operator

#### 3.2. Discrete NETT

**Algorithm 1:** NETT optimization.
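The body of Algorithm 1 is not reproduced in this excerpt. As a generic sketch of what NETT optimization amounts to (our illustration, with a made-up linear forward operator and a fixed linear stand-in for the trained network; the paper's algorithm and network differ), one can minimize the discrete NETT functional by plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy instance: underdetermined linear forward operator A and a fixed
# linear stand-in W for the pre-trained network Phi(x) = W x.
d, m = 16, 8
A = rng.standard_normal((m, d)) / np.sqrt(m)
W = 0.9 * np.eye(d)              # stand-in for a trained network
alpha = 0.1

x_true = rng.standard_normal(d)
y = A @ x_true                   # exact data

def T(x):
    """Discrete NETT functional ||A x - y||^2 + alpha ||x - Phi(x)||^2."""
    return float(np.sum((A @ x - y) ** 2) + alpha * np.sum((x - W @ x) ** 2))

E = np.eye(d) - W                # so x - Phi(x) = E x

def grad_T(x):
    return 2.0 * A.T @ (A @ x - y) + 2.0 * alpha * E.T @ (E @ x)

# Plain gradient descent; step size chosen below the stability limit
# for this toy problem.
x = np.zeros(d)
step = 0.05
for _ in range(5000):
    x = x - step * grad_T(x)
```

In practice the data-fit and regularization terms are nonsmooth or expensive, which is why the literature referenced in Section 3.2 uses proximal splitting schemes instead of plain gradient descent.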

#### 3.3. Numerical Results

## 4. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References


**Figure 1.** Top, from left to right: phantom, masked phantom, and initial reconstruction ${\mathbf{A}}^{+}\mathbf{A}x$. The difference between the left and middle phantoms shows the mask region $I\subseteq {D}_{1}$ where no data are generated. Bottom, from left to right: data without noise, low noise ($\sigma =0.01$), and high noise ($\sigma =0.1$).

**Figure 2.** Top row: reconstructions using the post-processing network ${\Phi}^{\left(1\right)}$. Middle row: NETT reconstructions using ${\mathcal{R}}^{\left(1\right)}$. Bottom row: NETT reconstructions using ${\mathcal{R}}^{\left(3\right)}$. From left to right: reconstructions from data without noise, low noise ($\sigma =0.01$), and high noise ($\sigma =0.1$).

**Figure 3.** Semilogarithmic plot of the mean squared errors of NETT using ${\mathcal{R}}^{\left(1\right)}$ and ${\mathcal{R}}^{\left(3\right)}$ as a function of the noise level. The crosses mark the values for the phantoms in Figure 2.

**Figure 4.** Left column: phantom with a structure not contained in the training data (**top**) and pseudo-inverse reconstruction (**bottom**). Middle column: post-processing reconstructions with ${\Phi}^{\left(3\right)}$ using exact (**top**) and noisy data (**bottom**). Right column: NETT reconstructions with ${\mathcal{R}}^{\left(3\right)}$ using exact (**top**) and noisy data (**bottom**).

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Antholzer, S.; Haltmeier, M.
Discretization of Learned NETT Regularization for Solving Inverse Problems. *J. Imaging* **2021**, *7*, 239.
https://doi.org/10.3390/jimaging7110239

**AMA Style**

Antholzer S, Haltmeier M.
Discretization of Learned NETT Regularization for Solving Inverse Problems. *Journal of Imaging*. 2021; 7(11):239.
https://doi.org/10.3390/jimaging7110239

**Chicago/Turabian Style**

Antholzer, Stephan, and Markus Haltmeier.
2021. "Discretization of Learned NETT Regularization for Solving Inverse Problems" *Journal of Imaging* 7, no. 11: 239.
https://doi.org/10.3390/jimaging7110239