# Analytically Embedding Differential Equation Constraints into Least Squares Support Vector Machines Using the Theory of Functional Connections

## Abstract

## 1. Introduction

## 2. Background on the Theory of Functional Connections

## 3. The Support Vector Machine Technique

#### 3.1. An Overview of SVMs

#### 3.2. Constrained SVM (CSVM) Technique

#### 3.3. Nonlinear ODEs

#### 3.4. Linear PDEs

## 4. Numerical Results

All numerical results were computed in `MATLAB R2018b` (MathWorks, Natick, MA, USA) on a Windows 10 operating system running on an Intel® Core™ i7-7700 CPU at 3.60 GHz with 16.0 GB of RAM. Since all test problems have analytical solutions, the absolute error and the mean-squared error (MSE) were used to quantify the accuracy of each method. The MSE is defined as

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N} \big(y(x_i) - \hat{y}(x_i)\big)^2,$$

where $y$ is the analytical solution, $\hat{y}$ is the approximate solution, and $N$ is the number of evaluation points.
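In code, the two error measures (maximum absolute error and MSE) amount to a few lines. A minimal sketch, written here in Python rather than the authors' MATLAB, with made-up solution vectors standing in for the analytical and approximate solutions:

```python
import numpy as np

def error_metrics(y_approx, y_exact):
    """Return (maximum absolute error, mean-squared error) of an
    approximate solution against the analytical solution evaluated
    at the same points: MSE = mean((y_exact - y_approx)^2)."""
    abs_err = np.abs(np.asarray(y_approx) - np.asarray(y_exact))
    return abs_err.max(), np.mean(abs_err ** 2)

# Hypothetical solution vectors for illustration only
y_exact = np.array([0.0, 0.5, 1.0])
y_approx = np.array([0.0, 0.5 + 1e-6, 1.0])
max_err, mse = error_metrics(y_approx, y_exact)
```

The same metrics apply unchanged whether the evaluation points form the training set or the test set, which is how the appendix tables report both.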

#### 4.1. Problem #1

#### 4.2. Problem #2

#### 4.3. Problem #3

#### 4.4. Problem #4

## 5. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## Abbreviations

| Abbreviation | Meaning |
|---|---|
| BVP | boundary-value problem |
| CP | Chebyshev polynomial |
| CSVM | constrained support vector machines |
| DE | differential equation |
| IVP | initial-value problem |
| LS | least-squares |
| LS-SVM | least-squares support vector machines |
| MSE | mean-squared error |
| MVP | multi-value problem |
| ODE | ordinary differential equation |
| PDE | partial differential equation |
| RBF | radial basis function |
| SVM | support vector machines |
| TFC | Theory of Functional Connections |

## Appendix A Numerical Data

| Number of Training Points | Training Time (s) | Maximum Error on Training Set | MSE on Training Set | Maximum Error on Test Set | MSE on Test Set | m |
|---|---|---|---|---|---|---|
| 8 | 7.813 × ${10}^{-5}$ | 6.035 × ${10}^{-6}$ | 1.057 × ${10}^{-11}$ | 6.187 × ${10}^{-6}$ | 8.651 × ${10}^{-12}$ | 7 |
| 16 | 1.406 × ${10}^{-4}$ | 2.012 × ${10}^{-11}$ | 1.257 × ${10}^{-22}$ | 1.814 × ${10}^{-11}$ | 8.964 × ${10}^{-23}$ | 17 |
| 32 | 5.000 × ${10}^{-4}$ | 2.220 × ${10}^{-16}$ | 1.887 × ${10}^{-32}$ | 3.331 × ${10}^{-16}$ | 2.086 × ${10}^{-32}$ | 25 |
| 50 | 7.500 × ${10}^{-4}$ | 2.220 × ${10}^{-16}$ | 9.368 × ${10}^{-33}$ | 2.220 × ${10}^{-16}$ | 1.801 × ${10}^{-32}$ | 25 |
| 100 | 1.266 × ${10}^{-3}$ | 4.441 × ${10}^{-16}$ | 1.750 × ${10}^{-32}$ | 2.220 × ${10}^{-16}$ | 1.138 × ${10}^{-32}$ | 26 |

| Number of Training Points | Training Time (s) | Maximum Error on Training Set | MSE on Training Set | Maximum Error on Test Set | MSE on Test Set | $\mathit{\gamma}$ | $\mathit{\sigma}$ |
|---|---|---|---|---|---|---|---|
| 8 | 1.719 × ${10}^{-3}$ | 1.179 × ${10}^{-5}$ | 5.638 × ${10}^{-11}$ | 1.439 × ${10}^{-5}$ | 7.251 × ${10}^{-11}$ | 5.995 × ${10}^{17}$ | 3.162 × ${10}^{0}$ |
| 16 | 1.719 × ${10}^{-3}$ | 1.710 × ${10}^{-6}$ | 1.107 × ${10}^{-12}$ | 1.849 × ${10}^{-6}$ | 1.161 × ${10}^{-12}$ | 3.594 × ${10}^{15}$ | 6.813 × ${10}^{-1}$ |
| 32 | 2.188 × ${10}^{-3}$ | 9.792 × ${10}^{-8}$ | 3.439 × ${10}^{-15}$ | 9.525 × ${10}^{-8}$ | 3.359 × ${10}^{-15}$ | 3.594 × ${10}^{15}$ | 3.162 × ${10}^{-1}$ |
| 50 | 4.375 × ${10}^{-3}$ | 1.440 × ${10}^{-8}$ | 2.983 × ${10}^{-17}$ | 8.586 × ${10}^{-9}$ | 2.356 × ${10}^{-17}$ | 3.594 × ${10}^{15}$ | 3.162 × ${10}^{-1}$ |
| 100 | 1.031 × ${10}^{-2}$ | 3.671 × ${10}^{-9}$ | 3.781 × ${10}^{-18}$ | 3.673 × ${10}^{-9}$ | 3.947 × ${10}^{-18}$ | 2.154 × ${10}^{13}$ | 3.162 × ${10}^{-1}$ |

| Number of Training Points | Training Time (s) | Maximum Error on Training Set | MSE on Training Set | Maximum Error on Test Set | MSE on Test Set | $\mathit{\gamma}$ | $\mathit{\sigma}$ |
|---|---|---|---|---|---|---|---|
| 8 | 3.125 × ${10}^{-4}$ | 1.018 × ${10}^{-5}$ | 4.131 × ${10}^{-11}$ | 1.357 × ${10}^{-5}$ | 5.547 × ${10}^{-11}$ | 2.154 × ${10}^{13}$ | 3.162 × ${10}^{0}$ |
| 16 | 1.406 × ${10}^{-3}$ | 2.894 × ${10}^{-7}$ | 2.588 × ${10}^{-14}$ | 2.818 × ${10}^{-7}$ | 2.468 × ${10}^{-14}$ | 5.995 × ${10}^{17}$ | 6.813 × ${10}^{-1}$ |
| 32 | 5.313 × ${10}^{-3}$ | 2.283 × ${10}^{-8}$ | 1.355 × ${10}^{-16}$ | 2.576 × ${10}^{-8}$ | 1.494 × ${10}^{-16}$ | 3.594 × ${10}^{15}$ | 3.162 × ${10}^{-1}$ |
| 50 | 3.281 × ${10}^{-3}$ | 8.887 × ${10}^{-9}$ | 2.055 × ${10}^{-17}$ | 1.072 × ${10}^{-8}$ | 2.783 × ${10}^{-17}$ | 7.743 × ${10}^{8}$ | 3.162 × ${10}^{-1}$ |
| 100 | 1.078 × ${10}^{-2}$ | 2.230 × ${10}^{-9}$ | 5.571 × ${10}^{-19}$ | 2.163 × ${10}^{-9}$ | 5.337 × ${10}^{-19}$ | 3.594 × ${10}^{15}$ | 1.468 × ${10}^{-1}$ |

| Number of Training Points | Training Time (s) | Maximum Error on Training Set | MSE on Training Set | Maximum Error on Test Set | MSE on Test Set | m |
|---|---|---|---|---|---|---|
| 8 | 3.437 × ${10}^{-4}$ | 8.994 × ${10}^{-6}$ | 2.242 × ${10}^{-11}$ | 1.192 × ${10}^{-5}$ | 4.132 × ${10}^{-11}$ | 8 |
| 16 | 1.547 × ${10}^{-3}$ | 4.586 × ${10}^{-12}$ | 6.514 × ${10}^{-24}$ | 9.183 × ${10}^{-12}$ | 2.431 × ${10}^{-23}$ | 16 |
| 32 | 1.891 × ${10}^{-3}$ | 3.109 × ${10}^{-15}$ | 9.291 × ${10}^{-31}$ | 4.885 × ${10}^{-15}$ | 9.590 × ${10}^{-31}$ | 32 |
| 50 | 3.125 × ${10}^{-3}$ | 1.110 × ${10}^{-15}$ | 2.100 × ${10}^{-31}$ | 2.665 × ${10}^{-15}$ | 3.954 × ${10}^{-31}$ | 32 |
| 100 | 4.828 × ${10}^{-3}$ | 1.776 × ${10}^{-15}$ | 3.722 × ${10}^{-31}$ | 2.665 × ${10}^{-15}$ | 4.321 × ${10}^{-31}$ | 32 |

| Number of Training Points | Training Time (s) | Maximum Error on Training Set | MSE on Training Set | Maximum Error on Test Set | MSE on Test Set | $\mathit{\gamma}$ | $\mathit{\sigma}$ |
|---|---|---|---|---|---|---|---|
| 8 | 7.813 × ${10}^{-4}$ | 1.001 × ${10}^{-3}$ | 1.965 × ${10}^{-7}$ | 1.001 × ${10}^{-3}$ | 7.904 × ${10}^{-8}$ | 1.000 × ${10}^{10}$ | 3.704 × ${10}^{-1}$ |
| 16 | 1.250 × ${10}^{-3}$ | 4.017 × ${10}^{-3}$ | 4.909 × ${10}^{-6}$ | 3.872 × ${10}^{-3}$ | 4.514 × ${10}^{-6}$ | 1.000 × ${10}^{10}$ | 4.198 × ${10}^{-1}$ |
| 32 | 6.875 × ${10}^{-3}$ | 4.046 × ${10}^{-3}$ | 4.834 × ${10}^{-6}$ | 3.900 × ${10}^{-3}$ | 4.575 × ${10}^{-6}$ | 1.000 × ${10}^{10}$ | 4.536 × ${10}^{-1}$ |
| 50 | 1.203 × ${10}^{-2}$ | 4.048 × ${10}^{-3}$ | 4.792 × ${10}^{-6}$ | 3.902 × ${10}^{-3}$ | 4.580 × ${10}^{-6}$ | 1.000 × ${10}^{10}$ | 4.666 × ${10}^{-1}$ |
| 100 | 3.156 × ${10}^{-2}$ | 4.050 × ${10}^{-3}$ | 4.752 × ${10}^{-6}$ | 3.903 × ${10}^{-3}$ | 4.582 × ${10}^{-6}$ | 1.000 × ${10}^{10}$ | 4.853 × ${10}^{-1}$ |

| Number of Training Points | Training Time (s) | Maximum Error on Training Set | MSE on Training Set | Maximum Error on Test Set | MSE on Test Set | $\mathit{\gamma}$ | $\mathit{\sigma}$ |
|---|---|---|---|---|---|---|---|
| 8 | 1.250 × ${10}^{-3}$ | 1.556 × ${10}^{-3}$ | 7.644 × ${10}^{-7}$ | 1.480 × ${10}^{-3}$ | 5.325 × ${10}^{-7}$ | 1.000 × ${10}^{10}$ | 3.452 × ${10}^{-1}$ |
| 16 | 1.563 × ${10}^{-3}$ | 4.021 × ${10}^{-3}$ | 4.914 × ${10}^{-6}$ | 3.876 × ${10}^{-3}$ | 4.517 × ${10}^{-6}$ | 1.000 × ${10}^{10}$ | 4.719 × ${10}^{-1}$ |
| 32 | 2.594 × ${10}^{-2}$ | 4.047 × ${10}^{-3}$ | 4.834 × ${10}^{-6}$ | 3.901 × ${10}^{-3}$ | 4.575 × ${10}^{-6}$ | 1.000 × ${10}^{10}$ | 5.109 × ${10}^{-1}$ |
| 50 | 4.109 × ${10}^{-2}$ | 4.050 × ${10}^{-3}$ | 4.792 × ${10}^{-6}$ | 3.903 × ${10}^{-3}$ | 4.580 × ${10}^{-6}$ | 1.000 × ${10}^{10}$ | 5.252 × ${10}^{-1}$ |
| 100 | 9.219 × ${10}^{-2}$ | 4.051 × ${10}^{-3}$ | 4.753 × ${10}^{-6}$ | 3.904 × ${10}^{-3}$ | 4.583 × ${10}^{-6}$ | 1.000 × ${10}^{10}$ | 5.469 × ${10}^{-1}$ |

| Number of Training Points | Training Time (s) | Maximum Error on Training Set | MSE on Training Set | Maximum Error on Test Set | MSE on Test Set | m |
|---|---|---|---|---|---|---|
| 8 | 1.563 × ${10}^{-4}$ | 1.313 × ${10}^{-6}$ | 5.184 × ${10}^{-13}$ | 1.456 × ${10}^{-6}$ | 6.818 × ${10}^{-13}$ | 8 |
| 16 | 7.969 × ${10}^{-4}$ | 5.551 × ${10}^{-16}$ | 6.123 × ${10}^{-32}$ | 8.882 × ${10}^{-16}$ | 7.229 × ${10}^{-32}$ | 15 |
| 32 | 7.187 × ${10}^{-4}$ | 1.221 × ${10}^{-15}$ | 2.377 × ${10}^{-31}$ | 9.992 × ${10}^{-16}$ | 2.229 × ${10}^{-31}$ | 15 |
| 50 | 5.000 × ${10}^{-4}$ | 7.772 × ${10}^{-16}$ | 3.991 × ${10}^{-32}$ | 5.551 × ${10}^{-16}$ | 3.672 × ${10}^{-32}$ | 15 |
| 100 | 9.844 × ${10}^{-4}$ | 7.772 × ${10}^{-16}$ | 5.525 × ${10}^{-32}$ | 6.661 × ${10}^{-16}$ | 3.518 × ${10}^{-32}$ | 15 |

| Number of Training Points | Training Time (s) | Maximum Error on Training Set | MSE on Training Set | Maximum Error on Test Set | MSE on Test Set | $\mathit{\gamma}$ | $\mathit{\sigma}$ |
|---|---|---|---|---|---|---|---|
| 8 | 1.563 × ${10}^{-3}$ | 1.420 × ${10}^{-6}$ | 8.300 × ${10}^{-13}$ | 1.638 × ${10}^{-6}$ | 6.522 × ${10}^{-13}$ | 5.995 × ${10}^{17}$ | 6.813 × ${10}^{0}$ |
| 16 | 1.875 × ${10}^{-3}$ | 1.811 × ${10}^{-8}$ | 1.015 × ${10}^{-16}$ | 1.871 × ${10}^{-8}$ | 1.014 × ${10}^{-16}$ | 3.594 × ${10}^{15}$ | 3.162 × ${10}^{0}$ |
| 32 | 4.687 × ${10}^{-3}$ | 5.455 × ${10}^{-10}$ | 1.025 × ${10}^{-19}$ | 9.005 × ${10}^{-10}$ | 1.015 × ${10}^{-19}$ | 5.995 × ${10}^{17}$ | 1.468 × ${10}^{0}$ |
| 50 | 7.656 × ${10}^{-3}$ | 8.563 × ${10}^{-11}$ | 3.771 × ${10}^{-21}$ | 8.391 × ${10}^{-11}$ | 3.646 × ${10}^{-21}$ | 2.154 × ${10}^{13}$ | 1.468 × ${10}^{0}$ |
| 100 | 2.688 × ${10}^{-2}$ | 6.441 × ${10}^{-11}$ | 1.500 × ${10}^{-21}$ | 6.128 × ${10}^{-11}$ | 1.640 × ${10}^{-21}$ | 2.154 × ${10}^{13}$ | 1.468 × ${10}^{0}$ |

| Number of Training Points | Training Time (s) | Maximum Error on Training Set | MSE on Training Set | Maximum Error on Test Set | MSE on Test Set | $\mathit{\gamma}$ | $\mathit{\sigma}$ |
|---|---|---|---|---|---|---|---|
| 8 | 1.563 × ${10}^{-4}$ | 1.263 × ${10}^{-6}$ | 7.737 × ${10}^{-13}$ | 2.017 × ${10}^{-6}$ | 1.339 × ${10}^{-12}$ | 1.000 × ${10}^{20}$ | 6.813 × ${10}^{0}$ |
| 16 | 4.687 × ${10}^{-4}$ | 1.269 × ${10}^{-9}$ | 4.961 × ${10}^{-19}$ | 1.631 × ${10}^{-9}$ | 5.342 × ${10}^{-19}$ | 3.594 × ${10}^{15}$ | 3.162 × ${10}^{0}$ |
| 32 | 1.406 × ${10}^{-3}$ | 1.763 × ${10}^{-9}$ | 8.308 × ${10}^{-19}$ | 2.230 × ${10}^{-9}$ | 1.248 × ${10}^{-18}$ | 3.594 × ${10}^{15}$ | 3.162 × ${10}^{0}$ |
| 50 | 3.281 × ${10}^{-3}$ | 1.429 × ${10}^{-9}$ | 1.045 × ${10}^{-18}$ | 1.569 × ${10}^{-9}$ | 1.017 × ${10}^{-18}$ | 2.154 × ${10}^{13}$ | 1.468 × ${10}^{0}$ |
| 100 | 1.297 × ${10}^{-2}$ | 8.261 × ${10}^{-10}$ | 8.832 × ${10}^{-20}$ | 7.209 × ${10}^{-10}$ | 5.589 × ${10}^{-20}$ | 2.154 × ${10}^{13}$ | 1.468 × ${10}^{0}$ |

| Number of Training Points in Domain | Training Time (s) | Maximum Error on Training Set | MSE on Training Set | Maximum Error on Test Set | MSE on Test Set | m |
|---|---|---|---|---|---|---|
| 9 | 4.375 × ${10}^{-3}$ | 1.107 × ${10}^{-7}$ | 1.904 × ${10}^{-15}$ | 1.543 × ${10}^{-7}$ | 4.633 × ${10}^{-15}$ | 8 |
| 16 | 5.000 × ${10}^{-3}$ | 3.336 × ${10}^{-9}$ | 2.131 × ${10}^{-18}$ | 4.938 × ${10}^{-9}$ | 3.964 × ${10}^{-18}$ | 9 |
| 36 | 6.406 × ${10}^{-3}$ | 6.628 × ${10}^{-14}$ | 5.165 × ${10}^{-28}$ | 2.333 × ${10}^{-13}$ | 6.961 × ${10}^{-27}$ | 12 |
| 64 | 9.844 × ${10}^{-3}$ | 4.441 × ${10}^{-16}$ | 2.091 × ${10}^{-32}$ | 8.882 × ${10}^{-16}$ | 8.320 × ${10}^{-32}$ | 15 |
| 100 | 1.031 × ${10}^{-2}$ | 3.331 × ${10}^{-16}$ | 1.229 × ${10}^{-32}$ | 6.661 × ${10}^{-16}$ | 1.246 × ${10}^{-32}$ | 15 |

| Number of Training Points in Domain | Training Time (s) | Maximum Error on Training Set | MSE on Training Set | Maximum Error on Test Set | MSE on Test Set | $\mathit{\gamma}$ | $\mathit{\sigma}$ |
|---|---|---|---|---|---|---|---|
| 9 | 2.031 × ${10}^{-3}$ | 2.578 × ${10}^{-4}$ | 9.984 × ${10}^{-9}$ | 3.941 × ${10}^{-4}$ | 3.533 × ${10}^{-8}$ | 1.000 × ${10}^{14}$ | 6.635 × ${10}^{0}$ |
| 16 | 2.344 × ${10}^{-3}$ | 2.229 × ${10}^{-5}$ | 6.277 × ${10}^{-11}$ | 3.794 × ${10}^{-5}$ | 1.731 × ${10}^{-10}$ | 1.000 × ${10}^{14}$ | 3.577 × ${10}^{0}$ |
| 36 | 4.219 × ${10}^{-3}$ | 1.254 × ${10}^{-6}$ | 2.542 × ${10}^{-13}$ | 2.435 × ${10}^{-6}$ | 4.517 × ${10}^{-13}$ | 1.000 × ${10}^{14}$ | 1.894 × ${10}^{0}$ |
| 64 | 5.156 × ${10}^{-3}$ | 2.916 × ${10}^{-7}$ | 1.193 × ${10}^{-14}$ | 4.962 × ${10}^{-7}$ | 1.390 × ${10}^{-14}$ | 1.000 × ${10}^{14}$ | 1.589 × ${10}^{0}$ |
| 100 | 1.297 × ${10}^{-2}$ | 1.730 × ${10}^{-7}$ | 3.028 × ${10}^{-15}$ | 2.673 × ${10}^{-7}$ | 3.668 × ${10}^{-15}$ | 1.000 × ${10}^{14}$ | 9.484 × ${10}^{-1}$ |

| Number of Training Points in Domain | Training Time (s) | Maximum Error on Training Set | MSE on Training Set | Maximum Error on Test Set | MSE on Test Set | $\mathit{\gamma}$ | $\mathit{\sigma}$ |
|---|---|---|---|---|---|---|---|
| 9 | 5.000 × ${10}^{-3}$ | 1.305 × ${10}^{-5}$ | 1.936 × ${10}^{-11}$ | 3.325 × ${10}^{-5}$ | 8.262 × ${10}^{-11}$ | 1.000 × ${10}^{14}$ | 6.948 × ${10}^{0}$ |
| 16 | 1.172 × ${10}^{-2}$ | 2.121 × ${10}^{-6}$ | 7.965 × ${10}^{-13}$ | 5.507 × ${10}^{-6}$ | 2.530 × ${10}^{-12}$ | 1.000 × ${10}^{14}$ | 4.894 × ${10}^{0}$ |
| 36 | 1.891 × ${10}^{-2}$ | 2.393 × ${10}^{-7}$ | 6.242 × ${10}^{-15}$ | 3.738 × ${10}^{-7}$ | 1.341 × ${10}^{-14}$ | 1.000 × ${10}^{14}$ | 2.154 × ${10}^{0}$ |
| 64 | 3.156 × ${10}^{-2}$ | 9.501 × ${10}^{-8}$ | 1.021 × ${10}^{-15}$ | 1.251 × ${10}^{-7}$ | 1.165 × ${10}^{-15}$ | 1.000 × ${10}^{14}$ | 1.371 × ${10}^{0}$ |
| 100 | 8.453 × ${10}^{-2}$ | 4.362 × ${10}^{-8}$ | 2.687 × ${10}^{-16}$ | 5.561 × ${10}^{-8}$ | 2.951 × ${10}^{-16}$ | 1.000 × ${10}^{14}$ | 8.891 × ${10}^{-1}$ |
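In the tables above that report an m column, m is the number of basis functions in the expansion; given the CP (Chebyshev polynomial) abbreviation, these are presumably Chebyshev polynomials, as in the TFC least-squares literature. As a minimal sketch (assuming the problem domain has been mapped to $[-1, 1]$), such a basis can be evaluated with the standard three-term recurrence $T_{k+1}(x) = 2x\,T_k(x) - T_{k-1}(x)$:

```python
import numpy as np

def chebyshev_basis(x, m):
    """Evaluate the first m Chebyshev polynomials of the first kind,
    T_0 .. T_{m-1}, at points x in [-1, 1] via the recurrence
    T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x).
    Returns a (len(x), m) matrix with one basis function per column."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    T = np.empty((x.size, m))
    T[:, 0] = 1.0          # T_0(x) = 1
    if m > 1:
        T[:, 1] = x        # T_1(x) = x
    for k in range(2, m):
        T[:, k] = 2.0 * x * T[:, k - 1] - T[:, k - 2]
    return T

# Sanity check at x = 0.5: T_2(0.5) = 2 * 0.25 - 1 = -0.5
B = chebyshev_basis([0.5], 3)
```

The growth of m with the number of training points in the TFC tables then corresponds to enlarging this basis until machine-precision accuracy is reached.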

## Appendix B Nonlinear ODE LS-SVM and CSVM Derivation

## Appendix C Linear PDE CSVM Derivation


**Figure 1.** Accuracy gain for the Theory of Functional Connections (TFC) and constrained support vector machine (CSVM) methods over least-squares support vector machines (LS-SVMs) for problem #1 using 100 training points.

**Figure 3.** Accuracy gain for the TFC and CSVM methods over LS-SVMs for problem #2 using 100 training points.

**Figure 5.** Accuracy gain for the TFC and CSVM methods over LS-SVMs for problem #3 using 100 training points.

**Figure 7.** Accuracy gain for the TFC and CSVM methods over LS-SVMs for problem #4 using 100 training points in the domain.
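The accuracy-gain figures compare the methods pointwise. Assuming the gain is the ratio of the LS-SVM absolute error to the TFC (or CSVM) absolute error at each test point (this definition is an assumption, not stated in the captions), it can be computed as:

```python
import numpy as np

def accuracy_gain(err_baseline, err_method):
    """Pointwise accuracy gain of a method over a baseline, taken here
    (as an assumption about the figures) to be the ratio of the
    baseline's absolute error to the method's absolute error.
    Values > 1 mean the method is more accurate at that point."""
    return np.abs(err_baseline) / np.abs(err_method)

# Hypothetical error vectors for illustration only
gain = accuracy_gain(np.array([1e-6, 2e-6]),    # LS-SVM errors
                     np.array([1e-16, 1e-15]))  # TFC errors
```

A gain of $10^{10}$, for example, means the method's error is ten orders of magnitude smaller than the LS-SVM error at that point.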

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Leake, C.; Johnston, H.; Smith, L.; Mortari, D.
Analytically Embedding Differential Equation Constraints into Least Squares Support Vector Machines Using the Theory of Functional Connections. *Mach. Learn. Knowl. Extr.* **2019**, *1*, 1058–1083.
https://doi.org/10.3390/make1040060
