# Deep Theory of Functional Connections: A New Method for Estimating the Solutions of Partial Differential Equations


## Abstract


## 1. Introduction

## 2. Theory of Functional Connections

#### 2.1. n-Dimensional Constrained Expressions

- The element ${\mathcal{M}}_{111}=0$.
- The first-order sub-tensors of $\mathcal{M}$, specified by keeping one dimension's index free and setting all other dimensions' indices to 1, consist of the value 0 followed by the boundary conditions for that dimension. Mathematically,
  $$\mathcal{M}_{1,\dots,1,i_k,1,\dots,1}=\left\{0,\;{}^{k}c_{\mathbf{p}_{k}}^{\mathbf{d}_{k}}\right\}.$$
  Using the example boundary conditions,
  $$\begin{aligned}\mathcal{M}_{i_1 11}&=\left[0,\;c(0,x_2,x_3),\;c(1,x_2,x_3)\right]^{T}\\ \mathcal{M}_{1 i_2 1}&=\left[0,\;c(x_1,0,x_3),\;c_{x_2}(x_1,0,x_3)\right]^{T}\\ \mathcal{M}_{11 i_3}&=\left[0,\;c(x_1,x_2,0),\;c_{x_3}(x_1,x_2,0)\right]^{T}.\end{aligned}$$
- The remaining elements of the $\mathcal{M}$ tensor are those with at least two indices different from one. These elements are the geometric intersection of the boundary-condition elements of the first-order sub-tensors given in Equation (6), multiplied by a sign (+ or −) determined by the number of elements being intersected, $m$. Mathematically,
  $$\mathcal{M}_{i_1 i_2 \dots i_n}={}^{1}b_{\mathbf{p}^{1}_{i_1-1}}^{\mathbf{d}^{1}_{i_1-1}}\left[{}^{2}b_{\mathbf{p}^{2}_{i_2-1}}^{\mathbf{d}^{2}_{i_2-1}}\left[\dots\left[{}^{n}b_{\mathbf{p}^{n}_{i_n-1}}^{\mathbf{d}^{n}_{i_n-1}}\left[c(\mathbf{x})\right]\right]\dots\right]\right]{(-1)}^{m+1}.$$
  Using the example boundary conditions,
  $$\begin{aligned}\mathcal{M}_{133}&=-c_{x_2 x_3}(x_1,0,0)\\ \mathcal{M}_{221}&=-c(0,0,x_3)\\ \mathcal{M}_{332}&=c_{x_2}(1,0,0).\end{aligned}$$
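The sign rule ${(-1)}^{m+1}$ can be made concrete with a short helper (a hypothetical illustration, not code from the paper; `intersection_sign` is an assumed name): $m$ counts the indices greater than one, i.e., the number of boundary-condition elements being intersected.

```python
def intersection_sign(indices):
    """Sign of an M-tensor element, (-1)**(m + 1), where m is the number of
    boundary-condition elements being intersected (indices greater than 1)."""
    m = sum(1 for i in indices if i > 1)
    return (-1) ** (m + 1)
```

Applied to the example elements above: indices $(1,3,3)$ and $(2,2,1)$ intersect two elements each and carry a negative sign, while $(3,3,2)$ intersects three and is positive.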

#### 2.2. Two-Dimensional Example

- The first element is ${\mathcal{M}}_{11}=0$.
- The first-order sub-tensors of $\mathcal{M}$ are:
  $$\begin{aligned}\mathcal{M}_{i_1 1}&=\left\{0,\;c(0,y),\;c(1,y)\right\}\\ \mathcal{M}_{1 i_2}&=\left\{0,\;c(x,0),\;c(x,1)\right\}\end{aligned}$$
- The remaining elements of $\mathcal{M}$ are the geometric intersection of elements from the first-order sub-tensors:
  $$\begin{aligned}\mathcal{M}_{22}&=-c(0,0)&\qquad\mathcal{M}_{23}&=-c(1,0)\\ \mathcal{M}_{32}&=-c(0,1)&\qquad\mathcal{M}_{33}&=-c(1,1)\end{aligned}$$
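The two-dimensional construction can be checked numerically. The sketch below is a minimal constrained expression on the unit square, assuming linear switching functions (an assumption made here for brevity; `constrained_expression`, `g`, and `c` are hypothetical names). For any free function $g$, the result matches the boundary function $c$ on all four edges, with the corner terms carrying the negative sign from the intersection rule.

```python
import numpy as np

def constrained_expression(x, y, g, c):
    """2D TFC-style constrained expression on the unit square.

    g: free function g(x, y), any callable.
    c: boundary function c(x, y); the returned value equals c on the
       four edges of the unit square regardless of the choice of g.
    """
    def r(bx, by):
        # Residual between boundary data and the free function.
        return c(bx, by) - g(bx, by)

    return (g(x, y)
            + (1 - x) * r(0.0, y) + x * r(1.0, y)   # edges in x
            + (1 - y) * r(x, 0.0) + y * r(x, 1.0)   # edges in y
            - (1 - x) * (1 - y) * r(0.0, 0.0)       # corner intersections:
            - (1 - x) * y * r(0.0, 1.0)             # sign (-1)^(m+1) with m = 2
            - x * (1 - y) * r(1.0, 0.0)
            - x * y * r(1.0, 1.0))
```

Setting $x=0$, for example, cancels every term except $g(0,y)+\left[c(0,y)-g(0,y)\right]=c(0,y)$, mirroring the cancellation that the $\mathcal{M}$ tensor encodes.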

## 3. PDE Solution Methodology

#### Training the Neural Network

- Hybrid method: Combines the first two methods by applying them in series.
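A two-stage scheme of this kind can be sketched as follows. This is a hypothetical illustration, assuming the two stages are gradient-descent training followed by a linear least-squares solve for the output-layer weights (an extreme-learning-machine-style step, in the spirit of the cited ELM and least-squares references); the toy regression target stands in for the PDE-residual loss, and all names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 100)[:, None]
target = np.sin(np.pi * x[:, 0])

# Single-hidden-layer network: features tanh(x W + b), linear output beta.
W = rng.normal(size=(1, 20))
b = rng.normal(size=20)
beta = np.zeros(20)

def features(W, b):
    return np.tanh(x @ W + b)

def loss(W, b, beta):
    return np.mean((features(W, b) @ beta - target) ** 2)

# Stage 1: plain gradient descent on the output weights (in practice the
# hidden-layer parameters would also be updated via automatic differentiation).
for _ in range(200):
    H = features(W, b)
    grad_beta = 2.0 * H.T @ (H @ beta - target) / len(x)
    beta -= 0.01 * grad_beta

# Stage 2: with the hidden layer frozen the problem is linear in beta, so a
# least-squares solve jumps directly to its optimum.
H = features(W, b)
beta, *_ = np.linalg.lstsq(H, target, rcond=None)
```

Because stage 2 solves the linearized problem exactly, its loss can never exceed the loss reached by stage 1 alone; the first-order stage serves to position the nonlinear parameters before the direct solve.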

## 4. Results

#### 4.1. Problem 1

#### 4.2. Problem 2

#### 4.3. Problem 3

For this problem, $\mu =1$ Pa·s and $\frac{\partial P}{\partial x}=-5$ N/m³ were chosen. The constrained expressions for the $u$-velocity, ${f}^{u}(x,y,t;\theta )$, and $v$-velocity, ${f}^{v}(x,y,t;\theta )$, are shown in Equation (14).

#### 4.4. Problem 4

## 5. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## Abbreviations

Abbreviation | Definition
---|---
FEM | finite element method
IID | independent and identically distributed
PDE | partial differential equation
TFC | Theory of Functional Connections

## References

- Argyris, J.; Kelsey, S. Energy Theorems and Structural Analysis: A Generalized Discourse with Applications on Energy Principles of Structural Analysis Including the Effects of Temperature and Non-Linear Stress-Strain Relations. Aircr. Eng. Aerosp. Technol. **1954**, 26, 347–356.
- Turner, M.J.; Clough, R.W.; Martin, H.C.; Topp, L.J. Stiffness and Deflection Analysis of Complex Structures. J. Aeronaut. Sci. **1956**, 23, 805–823.
- Clough, R.W. The Finite Element Method in Plane Stress Analysis; American Society of Civil Engineers: Reston, VA, USA, 1960; pp. 345–378.
- Sirignano, J.; Spiliopoulos, K. DGM: A deep learning algorithm for solving partial differential equations. J. Comput. Phys. **2018**, 375, 1339–1364.
- Lagaris, I.E.; Likas, A.; Fotiadis, D.I. Artificial neural networks for solving ordinary and partial differential equations. IEEE Trans. Neural Netw. **1998**, 9, 987–1000.
- Yadav, N.; Yadav, A.; Kumar, M. An Introduction to Neural Network Methods for Differential Equations; Springer: Dordrecht, The Netherlands, 2015.
- Coons, S.A. Surfaces for Computer-Aided Design of Space Forms; Technical Report; Massachusetts Institute of Technology: Cambridge, MA, USA, 1967.
- Mortari, D. The Theory of Connections: Connecting Points. Mathematics **2017**, 5, 57.
- Mortari, D.; Leake, C. The Multivariate Theory of Connections. Mathematics **2019**, 7, 296.
- Leake, C.; Johnston, H.; Smith, L.; Mortari, D. Analytically Embedding Differential Equation Constraints into Least Squares Support Vector Machines Using the Theory of Functional Connections. Mach. Learn. Knowl. Extr. **2019**, 1, 1058–1083.
- Mortari, D. Least-Squares Solutions of Linear Differential Equations. Mathematics **2017**, 5, 48.
- Mortari, D.; Johnston, H.; Smith, L. Least-Squares Solutions of Nonlinear Differential Equations. In Proceedings of the 2018 AAS/AIAA Space Flight Mechanics Meeting, Kissimmee, FL, USA, 8–12 January 2018.
- Johnston, H.; Mortari, D. Linear Differential Equations Subject to Relative, Integral, and Infinite Constraints. In Proceedings of the 2018 AAS/AIAA Astrodynamics Specialist Conference, Snowbird, UT, USA, 19–23 August 2018.
- Leake, C.; Mortari, D. An Explanation and Implementation of Multivariate Theory of Connections via Examples. In Proceedings of the 2019 AAS/AIAA Astrodynamics Specialist Conference, Portland, ME, USA, 11–15 August 2019.
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: tensorflow.org (accessed on 30 January 2020).
- Baydin, A.G.; Pearlmutter, B.A.; Radul, A.A. Automatic differentiation in machine learning: A survey. arXiv **2015**, arXiv:1502.05767.
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv **2014**, arXiv:1412.6980.
- Duchi, J.; Hazan, E.; Singer, Y. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. J. Mach. Learn. Res. **2011**, 12, 2121–2159.
- Tieleman, T.; Hinton, G. Lecture 6.5—RMSProp, COURSERA: Neural Networks for Machine Learning; Technical Report; University of Toronto: Toronto, ON, Canada, 2012.
- Fletcher, R. Practical Methods of Optimization, 2nd ed.; Wiley: New York, NY, USA, 1987.
- Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; Teh, Y.W., Titterington, M., Eds.; PMLR: Sardinia, Italy, 2010; Volume 9, pp. 249–256.
- Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing **2006**, 70, 489–501.
- Johnston, H.; Mortari, D. Least-squares solutions of boundary-value problems in hybrid systems. arXiv **2019**, arXiv:1911.04390.

**Figure 3.** Problem 1 solution error using the solution form of Ref. [5].

**Table 1.** Comparison of Deep TFC, Ref. [5], and the finite element method (FEM).

Method | Training Set | Test Set
---|---|---
Deep TFC | $3\times {10}^{-7}$ | $3\times {10}^{-7}$
Ref. [5] | $5\times {10}^{-7}$ | $5\times {10}^{-7}$
FEM | $2\times {10}^{-8}$ | $1.5\times {10}^{-5}$

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Leake, C.; Mortari, D. Deep Theory of Functional Connections: A New Method for Estimating the Solutions of Partial Differential Equations. *Mach. Learn. Knowl. Extr.* **2020**, *2*, 37-55.
https://doi.org/10.3390/make2010004
