# Deep Neural Network Model for Approximating Eigenmodes Localized by a Confining Potential

## Abstract


## 1. Introduction

#### The Motivation and the Contribution of this Paper

## 2. Theoretical Background

**Definition 1.**

**Definition 2.**

#### Algorithms

Algorithm 1: Certified Deep Ritz Algorithm.
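Algorithm 1 certifies a Deep Ritz approximation a posteriori: the Rayleigh quotient of the trained network is an upper bound on the ground-state energy, and a Kato–Temple-type residual estimate (cf. Kato's bounds, refs. [26,27]) yields a lower bound. A minimal NumPy sketch of this certification step, using a finite-difference harmonic-oscillator matrix and a Gaussian trial vector as stand-ins for the operator and the trained network (both are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Finite-difference discretization of -d^2/dx^2 + x^2 on (-5, 5)
# (an illustrative stand-in for the operator treated in the paper).
n = 200
x = np.linspace(-5.0, 5.0, n)
h = x[1] - x[0]
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2 + np.diag(x**2)

# Trial ground state (exact in the continuum), standing in for the
# trained network's output.
psi = np.exp(-x**2 / 2)
psi /= np.linalg.norm(psi)

rho = psi @ A @ psi                        # Rayleigh quotient: upper bound
res = np.linalg.norm(A @ psi - rho * psi)  # residual norm

lam = np.linalg.eigvalsh(A)                # reference spectrum for the gap
gap = lam[1] - rho                         # requires rho < lambda_1
lower = rho - res**2 / gap                 # Kato--Temple lower bound
```

The certified enclosure is then `lower <= lambda_0 <= rho`; in Algorithm 1 the gap would come from a coarse certified estimate rather than a full eigensolve.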

## 3. Results

#### 3.1. Direct Approximations of the Ground State in 1D

#### 3.2. Direct Approximations of the Ground State in Higher-Dimensional Spaces

#### 3.3. Approximations of the Landscape Function in 1D

#### 3.4. Direct VPINN Approximation of the Landscape Function in 2D

#### 3.5. Encoder–Decoder Network as a Reduced-Order Model for a Family of Landscape Functions

## 4. Discussion

## 5. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## Sample Availability

## Abbreviations

Abbreviation | Meaning
---|---
PDE | partial differential equation
ReLU | rectified linear unit
FEM | finite element method
DOF | degrees of freedom
VPINN | variational physics-informed neural network
FCNN | fully convolutional neural network

## Appendix A. Implementation Details

## Appendix B. Estimating Residuals

#### Appendix B.1. Finite Element Quadrature for 2D Problems

#### Appendix B.2. Direct Approximations for Higher Dimensional Problems

## Appendix C. Architecture of the VPINN Neural Network

**Figure A1.** VPINN architecture with k blocks, l layers in each block, and m neurons in each dense layer.
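The trainable-parameter count of this architecture can be reproduced with a short script. The wiring below is our reading of Figure A1 (each of the k blocks applies l dense layers of width m and then concatenates the block input to the block output, followed by a single linear output neuron); under this assumption the count matches the sizes reported in the paper, e.g., 30 parameters for ${\overrightarrow{N}}_{\mathtt{DenseNet}}=(1,2,2,2)$ and 803 for $(2,4,2,8)$ in Table 3.

```python
def vpinn_param_count(n, k, l, m):
    """Trainable parameters of the DenseNet-style VPINN (n, k, l, m):
    k blocks of l dense layers with m neurons each, where every block
    concatenates its input to its output, plus a final linear output
    neuron. The wiring is inferred from Figure A1, not taken from code."""
    total, dim = 0, n
    for _ in range(k):
        d = dim
        for _ in range(l):
            total += d * m + m   # weights + biases of one dense layer
            d = m
        dim += m                 # concatenate block input with block output
    total += dim * 1 + 1         # final linear output layer
    return total
```

For example, `vpinn_param_count(1, 4, 2, 10)` gives the size of the architecture used for Figure 1, and the Table 3 row sizes 1203, 1753, 2403, 4403, … follow from the corresponding $(2,k,2,m)$ tuples.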

**Figure A2.** FCNN encoder–decoder architecture inspired by the U-Net concept from [30].

## References

- Reed, M.; Simon, B. Methods of Modern Mathematical Physics III: Scattering Theory; Academic Press: New York, NY, USA; London, UK, 1979.
- Teschl, G. Mathematical Methods in Quantum Mechanics: With Applications to Schrödinger Operators; Graduate Studies in Mathematics, Volume 99; American Mathematical Society: Providence, RI, USA, 2009.
- Mills, K.; Spanner, M.; Tamblyn, I. Deep learning and the Schrödinger equation. Phys. Rev. A **2017**, 96, 042113.
- Anderson, P.W. Absence of Diffusion in Certain Random Lattices. Phys. Rev. **1958**, 109, 1492–1505.
- Arnold, D.N.; David, G.; Filoche, M.; Jerison, D.; Mayboroda, S. Computing spectra without solving eigenvalue problems. SIAM J. Sci. Comput. **2019**, 41, B69–B92.
- Arnold, D.N.; David, G.; Jerison, D.; Mayboroda, S.; Filoche, M. Effective Confining Potential of Quantum States in Disordered Media. Phys. Rev. Lett. **2016**, 116, 056602.
- Arnold, D.N.; David, G.; Filoche, M.; Jerison, D.; Mayboroda, S. Localization of eigenfunctions via an effective potential. Comm. Partial Differ. Equations **2019**, 44, 1186–1216.
- Khoromskij, B.N.; Oseledets, I.V. QTT approximation of elliptic solution operators in higher dimensions. Russ. J. Numer. Anal. Math. Model. **2011**, 26, 303–322.
- Orús, R. A practical introduction to tensor networks: Matrix product states and projected entangled pair states. Ann. Phys. **2014**, 349, 117–158.
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations. arXiv **2017**, arXiv:1711.10561.
- Mishra, S.; Molinaro, R. Estimates on the generalization error of Physics Informed Neural Networks (PINNs) for approximating PDEs. arXiv **2020**, arXiv:2006.16144.
- Lagaris, I.; Likas, A.; Fotiadis, D. Artificial neural network methods in quantum mechanics. Comput. Phys. Commun. **1997**, 104, 1–14.
- Steinerberger, S. Localization of quantum states and landscape functions. Proc. Am. Math. Soc. **2017**, 145, 2895–2907.
- Hermann, J.; Schätzle, Z.; Noé, F. Deep-neural-network solution of the electronic Schrödinger equation. Nat. Chem. **2020**, 12, 891–897.
- Graziano, G. Deep learning chemistry ab initio. Nat. Rev. Chem. **2020**, 4, 564.
- Han, J.; Jentzen, A.; Weinan, E. Solving high-dimensional partial differential equations using deep learning. Proc. Natl. Acad. Sci. USA **2018**, 115, 8505–8510.
- Han, J.; Zhang, L.; Weinan, E. Solving many-electron Schrödinger equation using deep neural networks. J. Comput. Phys. **2019**, 399, 108929.
- Beck, C.; Weinan, E.; Jentzen, A. Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations. J. Nonlinear Sci. **2019**, 29, 1563–1619.
- Ma, C.; Wang, J.; Weinan, E. Model reduction with memory and the machine learning of dynamical systems. Commun. Comput. Phys. **2019**, 25, 947–962.
- Weinan, E.; Yu, B. The Deep Ritz method: A deep learning-based numerical algorithm for solving variational problems. Commun. Math. Stat. **2018**, 6, 1–12.
- Kharazmi, E.; Zhang, Z.; Karniadakis, G.E. Variational Physics-Informed Neural Networks For Solving Partial Differential Equations. arXiv **2019**, arXiv:1912.00873.
- Zhang, L.; Han, J.; Wang, H.; Saidi, W.; Car, R.; Weinan, E. End-to-end Symmetry Preserving Inter-atomic Potential Energy Model for Finite and Extended Systems. In Advances in Neural Information Processing Systems; Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2018; Volume 31, pp. 4436–4446.
- Weinan, E.; Han, J.; Zhang, L. Integrating Machine Learning with Physics-Based Modeling. arXiv **2020**, arXiv:2006.02619.
- McFall, K.S.; Mahan, J.R. Artificial Neural Network Method for Solution of Boundary Value Problems With Exact Satisfaction of Arbitrary Boundary Conditions. IEEE Trans. Neural Netw. **2009**, 20, 1221–1233.
- Kato, T. Perturbation Theory for Linear Operators; Classics in Mathematics; Reprint of the 1980 Edition; Springer: Berlin, Germany, 1995.
- Kato, T. On the upper and lower bounds of eigenvalues. J. Phys. Soc. Jpn. **1949**, 4, 334–339.
- Grubišić, L. On eigenvalue and eigenvector estimates for nonnegative definite operators. SIAM J. Matrix Anal. Appl. **2006**, 28, 1097–1125.
- Grubišić, L.; Ovall, J.S. On estimators for eigenvalue/eigenvector approximations. Math. Comp. **2009**, 78, 739–770.
- Hesthaven, J.S.; Rozza, G.; Stamm, B. Certified Reduced Basis Methods for Parametrized Partial Differential Equations; SpringerBriefs in Mathematics; Springer: Cham, Switzerland; BCAM Basque Center for Applied Mathematics: Bilbao, Spain, 2016.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241.
- Müller, J.; Zeinhofer, M. Deep Ritz revisited. arXiv **2020**, arXiv:1912.03937.
- Golub, G.H.; Van Loan, C.F. Matrix Computations, 4th ed.; Johns Hopkins University Press: Baltimore, MD, USA, 2013.
- Arora, R.; Basu, A.; Mianjy, P.; Mukherjee, A. Understanding Deep Neural Networks with Rectified Linear Units. arXiv **2018**, arXiv:1611.01491.
- Grubišić, L.; Nakić, I. Error representation formula for eigenvalue approximations for positive definite operators. Oper. Matrices **2012**, 6, 793–808.
- Bank, R.E.; Grubišić, L.; Ovall, J.S. A framework for robust eigenvalue and eigenvector error estimation and Ritz value convergence enhancement. Appl. Numer. Math. **2013**, 66, 1–29.
- Davis, C.; Kahan, W.M. The rotation of eigenvectors by a perturbation. III. SIAM J. Numer. Anal. **1970**, 7, 1–46.
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv **2015**, arXiv:1412.6980.
- Feinberg, J.; Langtangen, H.P. Chaospy: An open source tool for designing methods of uncertainty quantification. J. Comput. Sci. **2015**, 11, 46–57.
- Sobol, I.M. Distribution of points in a cube and approximate evaluation of integrals. Ž. Vyčisl. Mat. Mat. Fiz. **1967**, 7, 784–802.
- Smoljak, S.A. Quadrature and interpolation formulae on tensor products of certain function classes. Dokl. Akad. Nauk SSSR **1963**, 148, 1042–1045.
- Mishra, S.; Molinaro, R. Estimates on the generalization error of Physics Informed Neural Networks (PINNs) for approximating PDEs II: A class of inverse problems. arXiv **2020**, arXiv:2007.01138.
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv **2016**, arXiv:1603.04467.
- Platte, R.B.; Trefethen, L.N. Chebfun: A new kind of numerical computing. In Progress in Industrial Mathematics at ECMI 2008; Springer: Heidelberg, Germany, 2010; Volume 15, pp. 69–87.
- Trefethen, L.N. Approximation Theory and Approximation Practice; SIAM: Philadelphia, PA, USA, 2013.
- Han, J.; Jentzen, A. Algorithms for Solving High Dimensional PDEs: From Nonlinear Monte Carlo to Machine Learning. arXiv **2020**, arXiv:2008.13333.
- Kazeev, V.; Oseledets, I.; Rakhuba, M.; Schwab, C. QTT-finite-element approximation for multiscale problems I: Model problems in one dimension. Adv. Comput. Math. **2017**, 43, 411–442.
- Chollet, F. Keras. 2015. Available online: https://keras.io (accessed on 7 January 2021).
- Logg, A.; Mardal, K.A.; Wells, G.N. Automated Solution of Differential Equations by the Finite Element Method; Springer: Berlin/Heidelberg, Germany, 2012.
- Sobol, I.M.; Shukhman, B.V. QMC integration errors and quasi-asymptotics. Monte Carlo Methods Appl. **2020**, 26, 171–176.
- Gribonval, R.; Kutyniok, G.; Nielsen, M.; Voigtlaender, F. Approximation spaces of deep neural networks. arXiv **2020**, arXiv:1905.01208.
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv **2018**, arXiv:1608.06993.

**Figure 1.** (**a**) Comparison of the ground state obtained in chebfun (${\psi}_{chebfun}\left(x\right)$) and as the VPINN solution (${\psi}_{NN}\left(x\right)$) with the architecture ${\overrightarrow{N}}_{\mathtt{DenseNet}}=(n,k,l,m)=(1,4,2,10)$; (**b**) residual and Rayleigh quotient error estimate metrics during the training process.

**Figure 2.** The effective potential and its six local minima, which define the localization of the first six eigenstates, are shown on the right. The eigenstates ${\psi}_{i}$, $i=0,1,\dots,5$, were computed in chebfun.
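The effective potential of Figure 2 derives from the landscape function $u$, the solution of $(-\Delta+V)u=1$ with Dirichlet boundary conditions, via $W=1/u$. A minimal finite-difference sketch in 1D (our own illustration; the paper's chebfun and FEniCS computations are not reproduced here):

```python
import numpy as np

def landscape_1d(V, a=0.0, b=1.0, N=400):
    """Solve -u'' + V u = 1 on (a, b) with u(a) = u(b) = 0 by second-order
    finite differences; the effective potential is W = 1/u."""
    h = (b - a) / (N + 1)
    x = a + h * np.arange(1, N + 1)        # interior grid points
    A = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h**2 + np.diag(V(x))
    u = np.linalg.solve(A, np.ones(N))
    return x, u, 1.0 / u

# Sanity check with V = 0, where the exact landscape function is
# u(x) = x(1 - x)/2, so min W = 8 at the midpoint.
x, u, W = landscape_1d(lambda t: np.zeros_like(t))
```

Local minima of `W` for a disordered `V` then mark the wells that localize the lowermost eigenstates, as in the figure.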

**Figure 3.** A surface plot of the effective potential $W=1/u$ (**a**) and the landscape function $u$ (**b**). In (**a**) we plot the boundaries of the sets $\{x\,:\,\epsilon\, u(x)\ge 1\}$ that localize the eigenstates. In (**b**) we plot the circles of radius $1/{\tilde{\epsilon}}_{i-1}$, where ${\tilde{\epsilon}}_{i-1}=3{W}_{\mathrm{min},i}/2$, $i=1,2,3$, centered at the $i$-th lowermost local minimum ${W}_{\mathrm{min},i}$.

**Figure 4.** A benchmarking comparison of the encoder–decoder prediction of the landscape function against the FEniCS solution.

**Figure 5.** Comparison of the Chebyshev series expansion with 149 terms and a VPINN solution with the architecture ${\overrightarrow{N}}_{\mathtt{DenseNet}}=(1,2,2,2)$ and 30 trainable parameters.
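A Chebyshev baseline of the kind compared in Figure 5 can be produced with NumPy's polynomial module; here a Gaussian stands in for the actual target function, which in the paper is the chebfun ground state:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Chebyshev interpolant of a smooth stand-in function on [-1, 1];
# for analytic targets the coefficients decay geometrically, so a
# moderate degree already reaches near machine precision.
f = lambda t: np.exp(-8.0 * t**2)
p = Chebyshev.interpolate(f, 60)

t = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(p(t) - f(t)))   # uniform error of the truncation
```

chebfun automates exactly this truncation, choosing the degree (149 terms in Figure 5) so that the tail coefficients fall below machine precision.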

**Table 1.** Convergence rates for the ground state energy of the harmonic oscillator in relation to the dimension. QMC: quasi-Monte Carlo.

n | ${\epsilon}_{0}$ | M for the Loss Function | Adam Optimizer Epochs | M for the Smolyak Quadrature | Smolyak Relative Error (%) | Relative Error for QMC with $M={10}^{5}$ Points (%)
---|---|---|---|---|---|---
1 | 1 | 100 | 50,000 | 127 | 0.004 | 0.003
2 | 2 | 1000 | 20,000 | 769 | 1.416 | 1.226
3 | 3 | 5000 | 50,000 | 2815 | 1.110 | 1.608
6 | 6 | 50,000 | 80,000 | 40,193 | – | 1.40
9 | 9 | 50,000 | 50,000 | 242,815 | 230.366 | 5.816

**Table 2.** We tested the accuracy of the predictor ${\tilde{\epsilon}}_{i-1}=\left(1+\frac{n}{4}\right){W}_{\mathrm{min},i}$ for the 16 lowermost eigenvalues. The chebfun solution was used to benchmark the error.

$i$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7
---|---|---|---|---|---|---|---|---
Minimum values of $W$ | 0.747201 | 0.918677 | 0.918754 | 0.933014 | 1.028903 | 1.057663 | 1.174706 | 1.245278
chebfun eigenvalues | 0.979730 | 1.071839 | 1.230230 | 1.282611 | 1.301724 | 1.485232 | 1.577349 | 1.588252
Relative error in % | 4.6675 | 7.1379 | 6.6481 | 9.0708 | 1.1981 | 1.9850 | 6.9082 | 1.9930

$i$ | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15
---|---|---|---|---|---|---|---|---
Minimum values of $W$ | 1.256498 | 1.273980 | 1.326926 | 1.613203 | 1.848415 | 1.868003 | 1.907063 | 1.931723
chebfun eigenvalues | 1.625253 | 1.758768 | 1.780166 | 2.095899 | 2.161778 | 2.265704 | 2.270798 | 2.278380
Relative error in % | 3.3614 | 9.4551 | 6.8257 | 3.7882 | 6.8805 | 3.05864 | 4.9776 | 5.9811
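The relative errors reported in Table 2 can be reproduced by direct arithmetic from the tabulated minima of $W$, assuming the landscape-law predictor $\tilde{\epsilon}=\left(1+\frac{n}{4}\right){W}_{\mathrm{min}}$ of Arnold et al. [7] with $n=1$ (factor $5/4$); checking the first two columns:

```python
# Landscape-law predictor (1 + n/4) * W_min in 1D (n = 1), checked
# against the first two columns of Table 2.
W_min = [0.747201, 0.918677]
E_chebfun = [0.979730, 1.071839]

pred = [1.25 * w for w in W_min]
rel_err_pct = [abs(p - e) / e * 100 for p, e in zip(pred, E_chebfun)]
# rel_err_pct ≈ [4.6675, 7.1379], matching the tabulated relative errors
```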

**Table 3.** A report on the convergence in k and m for the family of architectures ${\overrightarrow{N}}_{\mathtt{DenseNet}}=(2,k,2,m)$. We benchmark the error against the highly accurate ${P}_{3}$ FEniCS solution.

Parameters | k | m | Relative $L^2$ Error, 100,000 Epochs | Relative $H^1$ Error, 100,000 Epochs | Relative $L^2$ Error, 200,000 Epochs | Relative $H^1$ Error, 200,000 Epochs | Relative Error of the First Three Eigenvalues
---|---|---|---|---|---|---|---
803 | 4 | 8 | 2.5852% | 5.6216% | 2.0527% | 4.9876% | 0.1638%, 1.4479%, 1.1472%
1203 | 4 | 10 | 2.7487% | 5.3611% | 1.2354% | 3.6960% | 0.0839%, 2.3489%, 0.6341%
1753 | 5 | 10 | 1.9314% | 4.2386% | 1.0679% | 3.3851% | 0.5957%, 1.9264%, 0.3822%
2403 | 6 | 10 | 1.1745% | 3.0548% | 0.7998% | 2.6994% | 0.4539%, 1.7883%, 1.5112%
4403 | 4 | 20 | 1.9037% | 3.6929% | 0.7233% | 2.5757% | 0.3242%, 1.8831%, 1.2586%
9603 | 4 | 30 | 1.8217% | 3.7451% | 0.6689% | 2.3609% | 0.3639%, 2.0083%, 0.9685%
16,803 | 4 | 40 | 0.6372% | 1.9704% | 0.3920% | 1.5497% | 0.3269%, 1.8606%, 0.6983%
26,003 | 4 | 50 | 3.6993% | 7.3510% | 0.4207% | 1.6748% | 0.3127%, 1.5756%, 0.3559%

**Table 4.** Validation of the encoder–decoder representation of the mapping $\mathcal{L}:V\mapsto u$ on a collection of test examples. Recall that the effective potential is defined as $W=1/u$.

Metric | Value
---|---
Average $L^2$ error | 1.7545%
Maximal $L^2$ error | 2.9769% (example 58)
Average $H^1$ error | 9.2233%
Maximal $H^1$ error | 12.6765% (example 65)
Mean relative error in $1/{W}_{\mathrm{min},1}$ | 0.4887%
Maximal relative error in $1/{W}_{\mathrm{min},1}$ | 2.1402% (example 70)
The worst ten relative errors in $1/{W}_{\mathrm{min},1}$ (%) | 2.1402, 1.5909, 1.5560, 1.4816, 1.4151, 1.4626, 1.3441, 1.3377, 1.3181, 1.3132

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Grubišić, L.; Hajba, M.; Lacmanović, D.
Deep Neural Network Model for Approximating Eigenmodes Localized by a Confining Potential. *Entropy* **2021**, *23*, 95.
https://doi.org/10.3390/e23010095

**AMA Style**

Grubišić L, Hajba M, Lacmanović D.
Deep Neural Network Model for Approximating Eigenmodes Localized by a Confining Potential. *Entropy*. 2021; 23(1):95.
https://doi.org/10.3390/e23010095

**Chicago/Turabian Style**

Grubišić, Luka, Marko Hajba, and Domagoj Lacmanović.
2021. "Deep Neural Network Model for Approximating Eigenmodes Localized by a Confining Potential" *Entropy* 23, no. 1: 95.
https://doi.org/10.3390/e23010095