
# Least-Squares Solutions of Eighth-Order Boundary Value Problems Using the Theory of Functional Connections

by Hunter Johnston *, Carl Leake and Daniele Mortari
Department of Aerospace Engineering, Texas A&M University, College Station, TX 77843, USA
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(3), 397; https://doi.org/10.3390/math8030397
Submission received: 10 February 2020 / Revised: 5 March 2020 / Accepted: 7 March 2020 / Published: 11 March 2020

## Abstract

This paper shows how to obtain highly accurate solutions of eighth-order boundary-value problems of linear and nonlinear ordinary differential equations. The presented method is based on the Theory of Functional Connections and proceeds in two steps. First, the Theory of Functional Connections analytically embeds the differential equation constraints into a candidate function (called a constrained expression) containing a function that the user is free to choose. This expression satisfies the constraints no matter what the free function is. Second, the free function is expanded as a linear combination of orthogonal basis functions with unknown coefficients. The constrained expression and its derivatives are then substituted into the eighth-order differential equation, transforming the problem into an unconstrained optimization problem in which the coefficients of the linear combination of orthogonal basis functions are the optimization parameters. These parameters are then found by linear/nonlinear least-squares. The solution obtained from this method is a highly accurate analytical approximation of the true solution. Comparisons with alternative methods appearing in the literature validate the proposed approach.
MSC:
34K10; 34K28; 65D05; 65L10; 65L60

## 1. Introduction

This paper has been motivated by several articles dedicated to estimating the solutions of high-order boundary-value problems (BVPs), including fourth-order [1], sixth-order [2], eighth-order [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19], $2m$-order [20], and higher-order [21,22,23,24] BVPs. This paper focuses specifically on eighth-order BVPs because of the volume of research done on them, which is covered in Refs. [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]. These references list many existing scientific problems requiring solutions of high-order BVPs. For example, eighth-order BVPs appear in the physics of specific hydrodynamic stability problems (an infinite horizontal layer of fluid heated from below and under rotation) when instability sets in as overstability [25], and in orthotropic cylindrical shells under line load [26]. From a theoretical point of view, existence and uniqueness results for the solutions of high-order boundary-value problems are presented in Ref. [22] and studied further in Ref. [27].
The technique presented in this paper is rooted in functional interpolation expressions, which are particularly well suited to solving differential equations. This has been shown in Ref. [28], the seminal paper on the Theory of Functional Connections (TFC), and in subsequent articles applying it to linear [29] and nonlinear [30] ODEs. The TFC formalized the method of analytical constraint embedding (a.k.a. functional interpolation), because it provides expressions representing all functions satisfying a set of specified constraints.
The general equation to derive these interpolating expressions, named constrained expressions, follows as,
$y(x, g(x)) = g(x) + \sum_{k=1}^{n} \eta_k\, s_k(x)$
where $g ( x )$ is the free function, $η k$ are unknown coefficients to be solved by imposing the n constraint conditions, and $s k ( x )$ are “support functions,” which are a set of n linearly independent functions. In prior work [29,30] as well as in this paper, the $s k ( x )$ support function set has been selected as the monomial set.
The $η k$ coefficients are computed by imposing the constraints using Equation (1). Then, once the expressions of the $η k$ coefficients are obtained, they are back substituted into Equation (1) to produce the constrained expression, a functional representing all possible functions satisfying the specified set of constraints. The use of this constrained expression has already been applied to many areas of study, including solving low-order differential equations [29,30], hybrid systems [31], and optimal control problems, including energy-optimal landing, energy-optimal intercept [32], and fuel-optimal landing [33]. Furthermore, this technique has been successfully used to embed constraints into machine learning frameworks [34,35], in quadratic and nonlinear programming [36], and in a variety of other applications [37]. In addition, this technique has been generalized to n-dimensions [38,39], providing functionals representing all possible n-dimensional manifolds subject to constraints on the value and arbitrary order derivative of $n − 1$ dimensional manifolds.
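To make the mechanics concrete, here is a minimal sketch (our own illustrative Python, not code from the references) of Equation (1) with only $n = 2$ value constraints, $y(0) = 2$ and $y(1) = 5$, and support functions $s_1 = 1$, $s_2 = x$; the constraint values 2 and 5 are arbitrary choices for the demonstration:

```python
import numpy as np

# Two value constraints y(0) = 2 and y(1) = 5, support functions s_1 = 1, s_2 = x.
# Imposing the constraints in Equation (1) and solving for eta_1, eta_2 gives the
# constrained expression
#   y(x, g) = g(x) + (1 - x) * (2 - g(0)) + x * (5 - g(1)),
# which satisfies both constraints for ANY choice of the free function g.
def constrained(x, g):
    return g(x) + (1.0 - x) * (2.0 - g(0.0)) + x * (5.0 - g(1.0))

# Try wildly different free functions: the constraints always hold.
for g in (np.sin, np.exp, lambda x: 0.0 * x):
    assert abs(constrained(0.0, g) - 2.0) < 1e-14
    assert abs(constrained(1.0, g) - 5.0) < 1e-14
```

Back-substituting the $\eta_k$ produces the $(1 - x)$ and $x$ weights, which are the two-constraint analogues of the switching functions derived in the next section.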

## 2. Derivation of the Constrained Expression for Eighth-Order Boundary-Value Problems

In this paper, we consider the solution of an eighth-order BVP via the TFC. In general, the problem can be posed in its implicit form as,
$F\left(x, y, y', \dots, y^{(8)}\right) = 0 \quad \text{subject to:} \quad y^{(k)}(x_i) = y_i^{(k)}, \quad y^{(k)}(x_f) = y_f^{(k)} \quad \text{for} \quad k = 0, 1, 2, 3$
where the notation $y^{(k)}(x) := \frac{d^k y(x)}{dx^k}$ is used to denote the $k$-th derivative of $y(x)$ with respect to x. Now, in order to embed the eight constraints, we can set $n = 8$ in Equation (1), leading to the expression,
$y(x, g(x)) = g(x) + \eta^T s(x)$
where
$\eta = \left[\eta_1, \eta_2, \eta_3, \eta_4, \eta_5, \eta_6, \eta_7, \eta_8\right]^T$
and, using monomial support functions,
$s(x) = \left[1, x, x^2, x^3, x^4, x^5, x^6, x^7\right]^T$
Now, according to the theory developed in Ref. [28], a system of equations can be constructed by evaluating the candidate function defined by Equation (3) at the constraint locations and setting the function equal to the specified constraint value. For example, the constraint on the function at the initial value (i.e., $y ( x i ) = y i$) is applied as such,
$y_i = y(x_i, g(x_i)) = g_i + \eta_1 + \eta_2 x_i + \eta_3 x_i^2 + \eta_4 x_i^3 + \eta_5 x_i^4 + \eta_6 x_i^5 + \eta_7 x_i^6 + \eta_8 x_i^7.$
This can be done for the remaining seven constraint conditions, and the resulting system of equations can be expressed in a compact form,
$\begin{bmatrix} y_i - g_i \\ y_f - g_f \\ y_i^{(1)} - g_i^{(1)} \\ y_f^{(1)} - g_f^{(1)} \\ y_i^{(2)} - g_i^{(2)} \\ y_f^{(2)} - g_f^{(2)} \\ y_i^{(3)} - g_i^{(3)} \\ y_f^{(3)} - g_f^{(3)} \end{bmatrix} = \begin{bmatrix} 1 & x_i & x_i^2 & x_i^3 & x_i^4 & x_i^5 & x_i^6 & x_i^7 \\ 1 & x_f & x_f^2 & x_f^3 & x_f^4 & x_f^5 & x_f^6 & x_f^7 \\ 0 & 1 & 2x_i & 3x_i^2 & 4x_i^3 & 5x_i^4 & 6x_i^5 & 7x_i^6 \\ 0 & 1 & 2x_f & 3x_f^2 & 4x_f^3 & 5x_f^4 & 6x_f^5 & 7x_f^6 \\ 0 & 0 & 2 & 6x_i & 12x_i^2 & 20x_i^3 & 30x_i^4 & 42x_i^5 \\ 0 & 0 & 2 & 6x_f & 12x_f^2 & 20x_f^3 & 30x_f^4 & 42x_f^5 \\ 0 & 0 & 0 & 6 & 24x_i & 60x_i^2 & 120x_i^3 & 210x_i^4 \\ 0 & 0 & 0 & 6 & 24x_f & 60x_f^2 & 120x_f^3 & 210x_f^4 \end{bmatrix} \eta.$
This system of equations can be solved for the unknown $η$ coefficients and organized in the form,
$y(x, g(x)) = g(x) + \beta_1(x)\left(y_i - g_i\right) + \beta_2(x)\left(y_f - g_f\right) + \beta_3(x)\left(y_i^{(1)} - g_i^{(1)}\right) + \beta_4(x)\left(y_f^{(1)} - g_f^{(1)}\right) + \beta_5(x)\left(y_i^{(2)} - g_i^{(2)}\right) + \beta_6(x)\left(y_f^{(2)} - g_f^{(2)}\right) + \beta_7(x)\left(y_i^{(3)} - g_i^{(3)}\right) + \beta_8(x)\left(y_f^{(3)} - g_f^{(3)}\right),$
where the $β k ( x )$ terms, called switching functions, only depend on the independent variable. This technique is general for any domain $x ∈ [ x i , x f ]$, for example the general expression for $β 1 ( x )$ is,
$\beta_1(x) = \frac{(x - x_f)^4}{(x_f - x_i)^7}\left[20x^3 - 7x_i\left(10x^2 + 4x x_f + x_f^2\right) + 10x^2 x_f + 4x x_f^2 + 21x_i^2\left(4x + x_f\right) + x_f^3 - 35x_i^3\right]$
However, for ease of presentation, since all problems presented in this paper, except for Problem #5, are defined on the domain $x \in [0, 1]$, we will express these switching functions in terms of this integration range. For completeness, the switching functions for two general points $x_i$ and $x_f$ (i.e., the switching functions for Equation (4)) are provided in Appendix A. The $\beta_k$ terms for $x \in [0, 1]$ are summarized below in Equations (5)–(12).
$\beta_1(x) = 20x^7 - 70x^6 + 84x^5 - 35x^4 + 1$
$\beta_2(x) = -20x^7 + 70x^6 - 84x^5 + 35x^4$
$\beta_3(x) = 10x^7 - 36x^6 + 45x^5 - 20x^4 + x$
$\beta_4(x) = 10x^7 - 34x^6 + 39x^5 - 15x^4$
$\beta_5(x) = 2x^7 - \frac{15x^6}{2} + 10x^5 - 5x^4 + \frac{x^2}{2}$
$\beta_6(x) = -2x^7 + \frac{13x^6}{2} - 7x^5 + \frac{5x^4}{2}$
$\beta_7(x) = \frac{x^7}{6} - \frac{2x^6}{3} + x^5 - \frac{2x^4}{3} + \frac{x^3}{6}$
$\beta_8(x) = \frac{x^7}{6} - \frac{x^6}{2} + \frac{x^5}{2} - \frac{x^4}{6}$
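As a quick numerical sanity check (an illustrative script of ours, not part of the paper), each $\beta_k$ above should evaluate to 1 at its own constraint (the matching derivative order and endpoint) and to 0 at the other seven:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# beta_1..beta_8 on [0, 1], as ascending-power coefficient lists
# transcribed from the equations above.
betas = [
    [1, 0, 0, 0, -35, 84, -70, 20],
    [0, 0, 0, 0, 35, -84, 70, -20],
    [0, 1, 0, 0, -20, 45, -36, 10],
    [0, 0, 0, 0, -15, 39, -34, 10],
    [0, 0, 1/2, 0, -5, 10, -15/2, 2],
    [0, 0, 0, 0, 5/2, -7, 13/2, -2],
    [0, 0, 0, 1/6, -2/3, 1, -2/3, 1/6],
    [0, 0, 0, 0, -1/6, 1/2, -1/2, 1/6],
]

# Constraint j pairs derivative order j // 2 with endpoint x = 0 (j even)
# or x = 1 (j odd): the same ordering as beta_1..beta_8 in the text.
for k, bk in enumerate(betas):
    for j in range(8):
        order, point = j // 2, float(j % 2)
        val = P.polyval(point, P.polyder(bk, order))
        assert abs(val - (1.0 if j == k else 0.0)) < 1e-12
```

This is exactly the Kronecker-delta property that makes the constrained expression satisfy the eight boundary conditions for any free function.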
With the solution of the $β k ( x )$ terms, the constrained expression is fully solved and represents all possible functions satisfying the boundary-value constraints. More specifically, by substituting the constrained expression and its derivatives into the original differential equation a new differential equation in terms of $g ( x )$ and its derivatives is obtained. This new differential equation, which has no constraints, can be written in the compact form,
$F ˜ ( x , g , g ′ , … , g ( 8 ) ) = 0$
To solve this differential equation, prior work [29,30,32,33,37] has expanded $g ( x )$ as a linear combination of m basis functions,
$g ( x ) = ξ T h ( z )$
where $z = z(x)$, $\xi$ is an $m \times 1$ vector of unknown coefficients, and $h(z)$ is an $m \times 1$ vector containing the m basis functions (in this paper, Chebyshev orthogonal polynomials are used). Particular attention must be paid when combining this expansion with least-squares. The basis functions in $h(z)$ must be linearly independent of the support functions $s(x)$; otherwise, the matrix to be inverted in the least-squares step will be ill-conditioned. Thus, the terms that are not linearly independent of the support functions must be skipped in the expansion of $g(x)$. In this problem, the support functions span the monomials $x^0$ through $x^7$; therefore, the Chebyshev expansion must start at the degree-eight term. Furthermore, in general, the basis functions may not be defined on the same range as the problem domain (e.g., Chebyshev and Legendre polynomials are defined on $z \in [-1, +1]$, the Fourier basis on $z \in [-\pi, +\pi]$, etc.). Therefore, the basis domain (z) must be mapped onto the problem domain (x), which can be done using the simple linear maps,
$z = z_i + \frac{z_f - z_i}{x_f - x_i}\left(x - x_i\right) \quad \longleftrightarrow \quad x = x_i + \frac{x_f - x_i}{z_f - z_i}\left(z - z_i\right).$
Furthermore, all subsequent derivatives of the free-function $g ( x )$ are defined as,
$\frac{d^n g}{dx^n} = \xi^T \frac{d^n h(z)}{dz^n}\left(\frac{dz}{dx}\right)^n,$
where by defining,
$c := \frac{dz}{dx} = \frac{z_f - z_i}{x_f - x_i}$
the expression can be simplified to,
$\frac{d^n g}{dx^n} = c^n \xi^T \frac{d^n h(z)}{dz^n} = c^n \xi^T h^{(n)}(z).$
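This chain-rule scaling can be checked numerically. The sketch below (illustrative code with arbitrary coefficients $\xi$, not from the paper) differentiates a small Chebyshev expansion on $x \in [0, 1]$, where $c = 2$, and compares against a finite difference:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Free function g(x) = xi^T h(z) with z = 2x - 1 mapping [0, 1] -> [-1, 1],
# so c = dz/dx = 2 and d^n g/dx^n = c^n xi^T h^(n)(z).
xi_coef = np.array([0.0, 0.5, -0.25, 0.125])   # arbitrary illustrative coefficients
c = 2.0

def g_deriv(x, n):
    z = c * x - 1.0
    # chebder differentiates in z; the c**n factor maps the result back to x.
    return c**n * C.chebval(z, C.chebder(xi_coef, n))

# Check the first derivative against a centered finite difference of g itself.
x0, eps = 0.3, 1e-5
fd = (g_deriv(x0 + eps, 0) - g_deriv(x0 - eps, 0)) / (2 * eps)
assert abs(fd - g_deriv(x0, 1)) < 1e-8
```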
This defines all mappings from the basis domain into the problem domain. With the expression of $g ( x )$ in terms of a known basis, we can rewrite the constrained expression given in Equation (4) in the form,
$y ( x , ξ ) = a ( x , ξ ) + b ( x )$
where $a ( x , ξ )$ is a function that is zero where the constraints are defined and $b ( x )$ is a function that equals the constraints where they are defined. In fact, if $g ( x )$ is selected such that $g ( x ) = 0$ (meaning $ξ = 0$) the expression would simplify to an interpolating function $y ( x , 0 ) = b ( x )$. Furthermore, since the $ξ$ vector shows up linearly in the expression of $a ( x , ξ )$, Equation (16) can also be written as,
$y ( x , ξ ) = a ( x ) T ξ + b ( x ) ,$
where $a ( x , ξ )$ now becomes a vector equation $a ( x )$. This can be seen by expanding Equation (17),
$y(x, \xi) = \underbrace{\left(h(z) - \beta_1(x) h_i - \beta_2(x) h_f - \beta_3(x) h_i^{(1)} - \beta_4(x) h_f^{(1)} - \beta_5(x) h_i^{(2)} - \beta_6(x) h_f^{(2)} - \beta_7(x) h_i^{(3)} - \beta_8(x) h_f^{(3)}\right)^T}_{a(x)^T} \xi + \underbrace{\beta_1(x) y_i + \beta_2(x) y_f + \beta_3(x) y_i^{(1)} + \beta_4(x) y_f^{(1)} + \beta_5(x) y_i^{(2)} + \beta_6(x) y_f^{(2)} + \beta_7(x) y_i^{(3)} + \beta_8(x) y_f^{(3)}}_{b(x)}.$
The subsequent derivatives follow by simply taking the derivatives of the $h ( z )$ and $β k ( x )$ terms. That is, the form of subsequent derivatives of the constrained expression remains the same and we can generally write the constrained expression up to the eighth-order derivative as shown in Equation (16),
$y^{(n)}(x, \xi) = a^{(n)}(x)^T \xi + b^{(n)}(x), \qquad n = 1, 2, \dots, 8$
where $a ( n ) ( x ) : = d n a ( x ) d x n$ is the n-th derivative of the $a ( x )$ function; the $a ( x )$ function also includes the derivative of $h ( z )$, which follows Equation (15) such that $d n h ( z ) d x n = c n h ( n ) ( z )$ where c is defined in Equation (14). With this adjustment to the constrained expression, the transformed differential equation defined in Equation (13) can be reduced to a function of only x and the unknown vector $ξ$,
$F ˜ ( x , ξ ) = 0 ,$
which may be linear or nonlinear in the unknown parameter $\xi$. Lastly, in order to solve this equation numerically, we must discretize the domain into $N + 1$ points. Since this paper uses Chebyshev orthogonal polynomials for the linear basis $h(z)$, the optimal distribution of the $N + 1$ points is given by the Chebyshev-Gauss-Lobatto collocation points [40,41], defined as
$z_k = -\cos\left(\frac{k \pi}{N}\right) \quad \text{for} \quad k = 0, 1, \dots, N,$
and the map from $z → x$ has been previously defined. As compared to the uniform point distribution, the collocation point distribution allows a much slower increase of the condition number as the number of basis functions, m, increases. In general, we can define the residual of our differential equation in Equation (19) for each discretized point,
$F ˜ ( x k , ξ ) = 0 .$
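The collocation grid described above is a one-liner; a small sketch (our own illustrative code, with $[x_i, x_f] = [0, 1]$ as used in most problems below) that builds the Chebyshev-Gauss-Lobatto points and maps them to the problem domain:

```python
import numpy as np

# Chebyshev-Gauss-Lobatto points on z in [-1, 1], mapped to a problem
# domain [x_i, x_f] with the linear map from the previous section.
N = 10                                    # N + 1 = 11 collocation points
xi_dom, xf_dom = 0.0, 1.0

k = np.arange(N + 1)
z = -np.cos(k * np.pi / N)                # z_0 = -1, ..., z_N = +1
x = xi_dom + (xf_dom - xi_dom) / 2.0 * (z + 1.0)

assert abs(x[0] - xi_dom) < 1e-12 and abs(x[-1] - xf_dom) < 1e-12
# Points cluster near the boundaries, which is what tames the growth of the
# condition number as the number of basis functions increases.
assert x[1] - x[0] < x[N // 2] - x[N // 2 - 1]
```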
For a linear differential equation F (and therefore a linear differential equation $\tilde{F}$), the constrained expression and its derivatives appear linearly, so the system remains linear in the unknown $\xi$. This leads to the form
$A ( x ) ξ + b ( x ) = 0$
where the matrix $A$ is composed of a linear combination of $a(x)$ and its derivatives discretized over the points $x_k$, where $x = [x_0, \cdots, x_k, \cdots, x_N]^T$. Note that, by our definition, the domain is $x \in [x_i, x_f]$, where $x_0 = x_i$ is the initial value, $x_N = x_f$ is the final value, and k is the index defined in the description of the Chebyshev-Gauss-Lobatto collocation points. Similarly, the vector $b$ is composed of a linear combination of $b(x)$, its derivatives, and any forcing term $f(x)$, evaluated at the discrete values of x. This system can now be easily solved with any available least-squares technique. All numerical solutions in this paper utilize a scaled QR method to perform the least-squares.
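A QR-based least-squares solve of a linear system of this shape can be sketched as follows (a generic NumPy stand-in for the scaled QR solver mentioned above; $A$ and $b$ here are random placeholders, not an actual TFC system):

```python
import numpy as np

# Least-squares solution of A xi = -b via QR decomposition.
rng = np.random.default_rng(0)
A = rng.standard_normal((11, 10))        # placeholder 11 x 10 system
b = rng.standard_normal(11)

Q, R = np.linalg.qr(A)                   # A = Q R, Q has orthonormal columns
xi = np.linalg.solve(R, Q.T @ (-b))      # back-substitute R xi = Q^T (-b)

# Agrees with NumPy's SVD-based least-squares solver.
xi_ref, *_ = np.linalg.lstsq(A, -b, rcond=None)
assert np.allclose(xi, xi_ref)
```

The QR route avoids forming the normal equations $A^T A$, which squares the condition number; scaling the columns of $A$ first, as the paper's solver does, improves the conditioning further.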
In the case of a nonlinear differential equation, Equation (20) can be expressed as a loss function at each discretization point,
$L(\xi_i) = \begin{bmatrix} \tilde{F}(x_0, \xi) \\ \vdots \\ \tilde{F}(x_k, \xi) \\ \vdots \\ \tilde{F}(x_N, \xi) \end{bmatrix}_{\xi = \xi_i}$
and the system can be solved by an iterative least-squares method where the Jacobian is defined as,
$J(\xi_i) = \begin{bmatrix} \frac{\partial \tilde{F}(x_0, \xi)}{\partial \xi} \\ \vdots \\ \frac{\partial \tilde{F}(x_k, \xi)}{\partial \xi} \\ \vdots \\ \frac{\partial \tilde{F}(x_N, \xi)}{\partial \xi} \end{bmatrix}_{\xi = \xi_i}$
where $ξ i$ represents the current step’s estimated $ξ$ parameter. The parameter update is provided by,
$ξ i + 1 = ξ i − Δ ξ i$
where the $Δ ξ i$ can be defined using classic least-squares,
$\Delta \xi_i = \left(J(\xi_i)^T J(\xi_i)\right)^{-1} J(\xi_i)^T L(\xi_i)$
or, in this paper, through a QR decomposition method. This process is repeated until either the $L_2$ norm of the loss function falls below some tolerance $\epsilon$, or the $L_2$ norm of the loss function increases from one iteration to the next, as specified by the following conditions,
$L 2 [ L ( ξ i ) ] < ε or L 2 [ L ( ξ i + 1 ) ] > L 2 [ L ( ξ i ) ] .$
In this paper, this tolerance was set to twice machine epsilon for double-precision arithmetic, $\epsilon = 4.4409 \times 10^{-16}$.
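The iteration and stopping logic can be sketched on a toy residual (our own illustrative code; the loss and Jacobian of an actual TFC problem are assembled from the constrained expression, as shown for Problem #4 later):

```python
import numpy as np

# Gauss-Newton iteration with the two stopping conditions above, applied to a
# toy residual whose root is xi = (1, 2); NOT the BVP loss itself.
def loss(xi):
    return np.array([xi[0] + xi[1] - 3.0,
                     xi[0] * xi[1] - 2.0,
                     xi[0] - 1.0])

def jac(xi):
    return np.array([[1.0, 1.0],
                     [xi[1], xi[0]],
                     [1.0, 0.0]])

eps = 4.4409e-16            # twice double-precision machine epsilon
xi = np.zeros(2)            # zero initialization, as in Section 3
for _ in range(50):
    L = loss(xi)
    # Least-squares step J dxi = L (solved via QR/SVD internally).
    dxi, *_ = np.linalg.lstsq(jac(xi), L, rcond=None)
    xi_new = xi - dxi
    if np.linalg.norm(loss(xi_new)) > np.linalg.norm(L):
        break               # L2 norm of the loss increased: stop
    xi = xi_new
    if np.linalg.norm(loss(xi)) < eps:
        break               # converged below the tolerance

assert np.allclose(xi, [1.0, 2.0], atol=1e-10)
```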

## 3. Parameter Initialization for Nonlinear Problems

For nonlinear problems, the coefficient vector $\xi$ must be initialized at the beginning of the iterative least-squares process. In this paper, the initialization was chosen as $\xi_0 = 0$ for all nonlinear problems. Setting the coefficient vector to zero is synonymous with selecting $g(x) = 0$; in other words, the constrained expression reduces to the simplest interpolating polynomial satisfying all of the problem constraints. For the BVPs considered here, the true solution lies close to this initial guess. Although introduced and solved in a later section, consider Problem #4, which involves solving the differential equation,
$y^{(8)}(x) + y^{(3)}(x)\sin\left(y(x)\right) = e^x\left(1 + \sin(e^x)\right), \quad x \in [0, 1].$
This problem is highlighted in this section because it had the largest initialization error of the three nonlinear differential equations presented. Figure 1 displays the error of the solution due to the initialization $\xi = 0$. As can be seen in this figure, the error is on the order of $10^{-7}$ for this specific case. The iterative least-squares then reduces this error to near machine-level precision. An astute reader will notice that the TFC method at initialization already produces a more accurate solution than the techniques developed in [3,18]. A more detailed explanation of this result is given in the conclusion.

## 4. Numerical Solution

This section compares the TFC method with competing methods on a variety of problems, both linear and nonlinear. For each problem, the differential equation and the boundary conditions are presented, followed by a table that compares the absolute error of the two methods on a grid of 11 equidistantly spaced points that span the domain. In addition, each table includes the reference where the competing method’s solution error was found.
Although the TFC approach to solving differential equations has typically used $N \in [100, 200]$ and $m \in [20, 80]$ [29,30,34,37], in order to make a fair comparison with the techniques developed in Refs. [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19], a grid of $N = 11$ points was used and $m = 10$ basis terms were selected: one less than the number of points. The following sections introduce the six most commonly solved eighth-order differential equations in Refs. [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19], and compare the TFC solution accuracy with that of the most accurate solution from these references. Furthermore, to supplement the theory given in the previous sections, a step-by-step procedure for TFC is laid out for the first linear and nonlinear problems.

#### 4.1.1. Problem #1

Consider the linear eighth-order differential equation solved in Refs. [3,13,19]
$y^{(8)}(x) - y(x) = -8e^x, \quad x \in [0, 1]$
subject to
$y(0) = 1 \quad y(1) = 0 \quad y'(0) = 0 \quad y'(1) = -e$
$y''(0) = -1 \quad y''(1) = -2e \quad y'''(0) = -2 \quad y'''(1) = -3e$
which has the exact solution $y ( x ) = ( 1 − x ) e x$.
Equations (16) and (18) show that the estimated solution and its eighth-order derivative take the forms,
$y(x, \xi) = a(x)^T \xi + b(x)$
and
$y^{(8)}(x, \xi) = a^{(8)}(x)^T \xi + b^{(8)}(x).$
Thus, using TFC, the differential equation can be re-written as,
$\tilde{F}(x, \xi) = a^{(8)}(x)^T \xi + b^{(8)}(x) - a(x)^T \xi - b(x) + 8e^x = 0.$
Discretizing the problem into points, $x k$ where $k ∈ [ 0 , N ]$, and collecting terms yields,
$A ( x ) ξ + b ( x ) = 0 ,$
where,
$A = \begin{bmatrix} \left(a^{(8)}(x_0) - a(x_0)\right)^T \\ \vdots \\ \left(a^{(8)}(x_k) - a(x_k)\right)^T \\ \vdots \\ \left(a^{(8)}(x_N) - a(x_N)\right)^T \end{bmatrix} \quad \text{and} \quad b = \begin{bmatrix} b^{(8)}(x_0) - b(x_0) + 8e^{x_0} \\ \vdots \\ b^{(8)}(x_k) - b(x_k) + 8e^{x_k} \\ \vdots \\ b^{(8)}(x_N) - b(x_N) + 8e^{x_N} \end{bmatrix}.$
This system can be solved by least-squares to yield the unknown coefficients, $ξ$, which can then be substituted back into the constrained expression to give the TFC estimate of the solution.
Table 1 shows the absolute error of the TFC solution and the solution from Ref. [19] at each of the 11 points.
Table 1 shows that the TFC solution error is orders of magnitude lower than the solution from Ref. [19] at all of the points in the domain, except the boundaries. At the boundaries, each of the methods has zero error, because each of the methods satisfies the boundary conditions exactly.
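For readers who want to reproduce this setup, the whole procedure for Problem #1 fits in a short script. The following is our own illustrative NumPy implementation (a sketch, not the authors' code: it uses plain `np.linalg.lstsq` instead of the scaled QR solver, with m = 10 Chebyshev terms of degrees 8 through 17 and 11 collocation points):

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P

m = 10                                  # basis terms: Chebyshev degrees 8..17
x_i, x_f = 0.0, 1.0
c = 2.0 / (x_f - x_i)                   # dz/dx for z in [-1, 1]

# Switching functions beta_1..beta_8 on [0, 1] from Section 2,
# as ascending-power polynomial coefficients.
BETA = [
    [1, 0, 0, 0, -35, 84, -70, 20],
    [0, 0, 0, 0, 35, -84, 70, -20],
    [0, 1, 0, 0, -20, 45, -36, 10],
    [0, 0, 0, 0, -15, 39, -34, 10],
    [0, 0, 1/2, 0, -5, 10, -15/2, 2],
    [0, 0, 0, 0, 5/2, -7, 13/2, -2],
    [0, 0, 0, 1/6, -2/3, 1, -2/3, 1/6],
    [0, 0, 0, 0, -1/6, 1/2, -1/2, 1/6],
]

def beta_d(n, x):
    """n-th derivatives of the eight switching functions at x: shape (8, len(x))."""
    return np.array([P.polyval(x, P.polyder(bk, n)) for bk in BETA])

def h_d(n, x):
    """n-th x-derivatives of the basis functions T_8..T_17 at x: shape (m, len(x))."""
    z = c * (x - x_i) - 1.0
    rows = []
    for j in range(m):
        ej = np.zeros(9 + j); ej[-1] = 1.0          # coefficients of T_{8+j}
        rows.append(C.chebval(z, C.chebder(ej, n)) * c**n)
    return np.array(rows)

# Boundary data of the basis: h, h', h'', h''' at x_i and x_f.
ends = np.array([x_i, x_f])
hb = [h_d(p, ends) for p in range(4)]               # hb[p][:, 0] at x_i, [:, 1] at x_f

# Boundary conditions of Problem #1, in switching-function order.
bc = np.array([1.0, 0.0, 0.0, -np.e, -1.0, -2.0 * np.e, -2.0, -3.0 * np.e])

def a_d(n, x):
    """Rows a^{(n)}(x_k)^T of the constrained expression: shape (len(x), m)."""
    B = beta_d(n, x)
    A = h_d(n, x).T.copy()
    for k in range(8):                   # beta_k pairs with derivative k//2 at end k%2
        A -= np.outer(B[k], hb[k // 2][:, k % 2])
    return A

def b_d(n, x):
    return beta_d(n, x).T @ bc

# Collocation: 11 Chebyshev-Gauss-Lobatto points mapped to [0, 1].
N = 10
xk = x_i + (1.0 - np.cos(np.arange(N + 1) * np.pi / N)) / c

# Linear system from F = y^(8) - y + 8 e^x = 0.
A_sys = a_d(8, xk) - a_d(0, xk)
rhs = -(b_d(8, xk) - b_d(0, xk) + 8.0 * np.exp(xk))
xi_sol, *_ = np.linalg.lstsq(A_sys, rhs, rcond=None)

# Evaluate the TFC estimate on a fine grid and compare to y = (1 - x) e^x.
xs = np.linspace(0.0, 1.0, 101)
y_tfc = a_d(0, xs) @ xi_sol + b_d(0, xs)
err = np.max(np.abs(y_tfc - (1.0 - xs) * np.exp(xs)))
assert err < 1e-8                       # deliberately loose bound
```

In our runs of this sketch the recovered solution matches the exact $y(x) = (1 - x)e^x$ to well beyond the asserted bound; the boundary conditions are satisfied exactly by construction, since $a(x)$ vanishes and $b(x)$ reproduces the constraint values at the endpoints.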

#### 4.1.2. Problem #2

Consider the linear eighth-order differential equation solved in Refs. [3,13,14,15,18,19]
$y^{(8)}(x) + x\,y(x) = -e^x\left(48 + 15x + x^3\right), \quad x \in [0, 1]$
subject to
$y(0) = 0 \quad y(1) = 0 \quad y'(0) = 1 \quad y'(1) = -e$
$y''(0) = 0 \quad y''(1) = -4e \quad y'''(0) = -3 \quad y'''(1) = -9e$
which has the exact solution $y ( x ) = x ( 1 − x ) e x$.
Table 2 shows the absolute error of the TFC solution and the solution from Ref. [13] at each of the 11 points. Reference [13] did not report the solution at $x = 0.9$, so that entry in the table is labeled “not reported.”
Table 2 shows that the TFC solution error is orders of magnitude lower than the solution from Ref. [13] at all of the points in the domain, except the boundaries. At the boundaries, each of the methods has zero error, because each of the methods satisfies the boundary conditions exactly.

#### 4.1.3. Problem #3

Consider the linear eighth-order differential equation solved in Refs. [3,6,13,19]
$y^{(8)}(x) - y(x) = -8\left(2x\cos(x) + 7\sin(x)\right), \quad x \in [0, 1]$
subject to
$y(0) = 0 \quad y(1) = 0 \quad y'(0) = -1 \quad y'(1) = 2\sin(1)$
$y''(0) = 0 \quad y''(1) = 4\cos(1) + 2\sin(1) \quad y'''(0) = 7 \quad y'''(1) = 6\cos(1) - 6\sin(1)$
which has the exact solution $y ( x ) = ( x 2 − 1 ) sin ( x )$.
Table 3 shows the absolute error of the TFC solution and the solution from Ref. [19] at each of the 11 points.
Table 3 shows that the TFC solution error is orders of magnitude lower than the solution from Ref. [19] at all of the points in the domain, except the boundaries. At the boundaries, each of the methods has zero error, because each of the methods satisfies the boundary conditions exactly.

#### 4.2.1. Problem #4

Consider the nonlinear eighth-order differential equation solved in Refs. [3,18]
$y^{(8)}(x) + y^{(3)}(x)\sin\left(y(x)\right) = e^x\left(1 + \sin(e^x)\right), \quad x \in [0, 1]$
subject to
$y(0) = 1 \quad y(1) = e \quad y'(0) = 1 \quad y'(1) = e$
$y''(0) = 1 \quad y''(1) = e \quad y'''(0) = 1 \quad y'''(1) = e$
which has the exact solution $y ( x ) = e x$.
Equations (16) and (18) show that the estimated solution, together with its third-order and eighth-order derivative, take the form
$y(x, \xi) = a(x)^T \xi + b(x), \quad y^{(3)}(x, \xi) = a^{(3)}(x)^T \xi + b^{(3)}(x), \quad y^{(8)}(x, \xi) = a^{(8)}(x)^T \xi + b^{(8)}(x).$
Thus, using TFC, the differential equation can be re-written as,
$\tilde{F}(x, \xi) = a^{(8)}(x)^T \xi + b^{(8)}(x) + \left(a^{(3)}(x)^T \xi + b^{(3)}(x)\right)\sin\left(a(x)^T \xi + b(x)\right) - e^x\left(1 + \sin(e^x)\right) = 0.$
Discretizing the problem into points, $x k$ where $k ∈ [ 0 , N ]$, leads to the loss function
$L(\xi_i) = \begin{bmatrix} \tilde{F}(x_0, \xi) \\ \vdots \\ \tilde{F}(x_k, \xi) \\ \vdots \\ \tilde{F}(x_N, \xi) \end{bmatrix}_{\xi = \xi_i},$
for some values of $ξ = ξ i$. The Jacobian of the loss function with respect to $ξ i$ is,
$J(\xi_i) = \begin{bmatrix} \frac{\partial \tilde{F}(x_0, \xi)}{\partial \xi} \\ \vdots \\ \frac{\partial \tilde{F}(x_k, \xi)}{\partial \xi} \\ \vdots \\ \frac{\partial \tilde{F}(x_N, \xi)}{\partial \xi} \end{bmatrix}_{\xi = \xi_i}, \quad \text{where} \quad \frac{\partial \tilde{F}(x_k, \xi)}{\partial \xi} = a^{(8)}(x_k)^T + \sin\left(a(x_k)^T \xi + b(x_k)\right) a^{(3)}(x_k)^T + \left(a^{(3)}(x_k)^T \xi + b^{(3)}(x_k)\right)\cos\left(a(x_k)^T \xi + b(x_k)\right) a(x_k)^T.$
This system can be solved by an iterative least-squares method as shown in Section 2 to yield the unknown coefficients, $ξ$, which can then be substituted back into the constrained expression to give the TFC estimate of the solution.
Table 4 shows the absolute error of the TFC solution, which was obtained in three iterations, and the solution from Ref. [18] at each of the 11 points.
Table 4 shows that the TFC solution error is orders of magnitude lower than the solution from Ref. [18] at all of the points in the domain, except the boundaries. At the boundaries, each of the methods has zero error, because each of the methods satisfies the boundary conditions exactly.

#### 4.2.2. Problem #5

Consider the nonlinear eighth-order differential equation solved in Refs. [3,14,15,16,18]
$y^{(8)}(x) = 7!\left(e^{-8y(x)} - \frac{2}{(1+x)^8}\right), \quad x \in \left[0,\, e^{1/2} - 1\right]$
subject to
$y(0) = 0 \quad y\left(e^{1/2} - 1\right) = \frac{1}{2} \quad y'(0) = 1 \quad y'\left(e^{1/2} - 1\right) = e^{-1/2}$
$y''(0) = -1 \quad y''\left(e^{1/2} - 1\right) = -e^{-1} \quad y'''(0) = 2 \quad y'''\left(e^{1/2} - 1\right) = 2e^{-3/2}$
which has the exact solution $y ( x ) = ln ( 1 + x )$.
Table 5 shows the absolute error of the TFC solution, which converged in 2 iterations, and the solution from Ref. [15] at each of the 11 points.
Table 5 shows that the TFC solution error is orders of magnitude lower than the solution from Ref. [15] at all of the points in the domain, except the boundaries. At the boundaries, each of the methods has zero error, because each of the methods satisfies the boundary conditions exactly.

#### 4.2.3. Problem #6

Consider the nonlinear eighth-order differential equation solved in Refs. [3,15,18,19]
$y^{(8)}(x) + e^{-x} y^2(x) = e^{-x} + e^{-3x}, \quad x \in [0, 1]$
subject to
$y(0) = 1 \quad y(1) = e^{-1} \quad y'(0) = -1 \quad y'(1) = -e^{-1}$
$y''(0) = 1 \quad y''(1) = e^{-1} \quad y'''(0) = -1 \quad y'''(1) = -e^{-1}$
which has the exact solution $y ( x ) = e − x$.
Table 6 shows the absolute error of the TFC solution, which was obtained in 2 iterations, and the solution from Ref. [19] at each of the 11 points.
Table 6 shows that the TFC solution error is orders of magnitude lower than the solution from Ref. [19] at all of the points in the domain, except the boundaries. At the boundaries, each of the methods has zero error, because each of the methods satisfies the boundary conditions exactly.

## 5. Accuracy of the Derivatives

The previous section compared the absolute error of TFC at a number of points along the domain with the absolute error of previous methods. This section discusses the accuracy of the derivatives when using TFC. One of the major advantages of TFC compared to other methods is that the estimated solution is analytical. As a result, further manipulation of the estimated solution is easily achieved, such as taking derivatives. As an example, consider problem #5.
Table 7 shows the mean absolute error of y and its derivatives up to order eight. The second column of Table 7 used 10 basis functions to compute the solution and 11 equidistant points to compute the error, while the third column used 30 basis functions to compute the solution and 100 equidistant points to compute the error.
If enough Chebyshev orthogonal polynomials are used in the free function to estimate the solution, the error in subsequent derivatives should increase by an order of magnitude or less. Table 7 shows that when 10 basis functions are used, the error increases as the order of the derivative increases. In this case, there were not enough Chebyshev orthogonal polynomials used, as indicated by the large mean error in the eighth derivative. In other words, the number of basis functions used was not nearly enough to accurately estimate the solution of the eighth derivative.
When 30 basis functions are used, the mean error increases as the order of the derivative increases, until the eighth derivative is reached. In derivatives one through seven, the mean error increases approximately an order of magnitude or less when compared to the previous derivative. However, the eighth derivative has less error than the seventh derivative, because the eighth derivative shows up in the differential equation, and thus in the residual. Hence, the eighth derivative is directly affected when computing the solution, whereas the other derivatives are not.
In problem #5, the differential equation only contains the function and the eighth derivative. As a different example, consider the problem solved in [17],
$y^{(8)}(x) + y^{(7)}(x) + 2y^{(6)}(x) + 2y^{(5)}(x) + 2y^{(4)}(x) + 2y^{(3)}(x) + 2y^{(2)}(x) + y^{(1)}(x) + y(x) = 14\cos(x) - 16\sin(x) - 4x\sin(x), \quad x \in [0, 1]$
subject to
$y(0) = 0 \quad y(1) = 0 \quad y'(0) = -1 \quad y'(1) = 2\sin(1)$
$y''(0) = 0 \quad y''(1) = 4\cos(1) + 2\sin(1) \quad y'''(0) = 7 \quad y'''(1) = 6\cos(1) - 6\sin(1)$
which has the exact solution $y(x) = (x^2 - 1)\sin(x)$. From here on, we shall refer to this problem as problem #7.
Table 8 shows the mean absolute error of y and its derivatives up to order eight for problem #7. The second column of Table 8 used 10 basis functions to compute the solution and 11 points to compute the error, while the third column used 30 basis functions to compute the solution and 100 points to compute the error.
Table 8 shows that when all derivatives are included in the differential equation, the anomalous decrease in mean error as subsequent derivatives are taken disappears (i.e., the mean solution error from derivative seven to derivative eight increases as expected).

## 6. Conclusions

This paper explores the application of the techniques developed in [28,29,30] to the solution of high-order differential equations, namely eighth-order BVPs. In all the problems presented, which span the publications [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19], the solution accuracy ranges from $\mathcal{O}(10^{-13})$ to $\mathcal{O}(10^{-16})$. These results are similar to the results obtained in earlier studies on first- and second-order linear [29] and nonlinear [30] differential equations. This application to higher-order systems further highlights the power and robustness of this technique.
In Section 3, a discussion of the initialization of the TFC approach for nonlinear differential equations was provided. In the initialization, the coefficient vector $\xi$ is set to zero, which implies that $g(x) = 0$. It was found that this alone already solved the differential equation with an accuracy on the order of $\mathcal{O}(10^{-7})$. This can be explained by an equation first presented in the seminal paper on TFC [28]: its Equation (9) gives the general interpolating expression for a function and its first n derivatives, and that expression simplifies to a Taylor series when $g(x) = 0$. While this is not exactly the case for the boundary-value constraints in this paper, the nature of that equation suggests that when $g(x) = 0$, the constrained expression derived in this paper acts as a Taylor series approximation about two points. The relationship between the TFC method and Taylor series has yet to be explored, and the extension of this Taylor series-like expansion about n points will be the focus of future work.
Section 5 discussed the accuracy of the derivatives of the estimated solution. The solution accuracy was reduced with each subsequent derivative, but the final derivatives only lost a few orders of magnitude overall, resulting in an error on the order of $\mathcal{O}(10^{-10})$ to $\mathcal{O}(10^{-12})$, provided that enough basis functions were used when estimating the solution. In addition, it was shown that the accuracy of a given derivative depends, in part, on whether it explicitly appears in the differential equation.

## Author Contributions

Conceptualization, H.J.; software, H.J. and C.L.; supervision D.M.; writing—original draft, H.J. and C.L.; writing—review and editing, H.J., C.L., and D.M. All authors have read and agreed to the published version of the manuscript.

## Funding

This work was supported by NASA Space Technology Research Fellowships: Johnston [NSTRF 2019] Grant # 80NSSC19K1149 and Leake [NSTRF 2019] Grant # 80NSSC19K1152.

## Conflicts of Interest

The authors declare no conflict of interest.

## Abbreviations

The following abbreviations are used in this manuscript:
- BVP: boundary value problem
- ODE: ordinary differential equation
- TFC: Theory of Functional Connections

## Appendix A. Support Functions for General Points $x_i$ and $x_f$

This appendix provides the switching functions of the constrained expression shown in Equation (4) for a general domain $x \in [x_i, x_f]$. The switching functions are:
$$
\begin{aligned}
\beta_1(x) &= \frac{(x - x_f)^4 \left[20x^3 + 10x^2 x_f + 4x x_f^2 + x_f^3 - 7x_i\left(10x^2 + 4x x_f + x_f^2\right) + 21x_i^2\left(4x + x_f\right) - 35x_i^3\right]}{(x_f - x_i)^7} \\
\beta_2(x) &= -\frac{(x - x_i)^4 \left[20x^3 + 10x^2 x_i + 4x x_i^2 + x_i^3 - 7x_f\left(10x^2 + 4x x_i + x_i^2\right) + 21x_f^2\left(4x + x_i\right) - 35x_f^3\right]}{(x_f - x_i)^7} \\
\beta_3(x) &= \frac{(x - x_f)^4 (x - x_i) \left[10x^2 + 4x x_f + x_f^2 - 6x_i\left(4x + x_f\right) + 15x_i^2\right]}{(x_f - x_i)^6} \\
\beta_4(x) &= \frac{(x - x_f)(x - x_i)^4 \left[10x^2 + 4x x_i + x_i^2 - 6x_f\left(4x + x_i\right) + 15x_f^2\right]}{(x_f - x_i)^6} \\
\beta_5(x) &= \frac{(x - x_f)^4 (x - x_i)^2 \left(4x + x_f - 5x_i\right)}{2(x_f - x_i)^5} \\
\beta_6(x) &= -\frac{(x - x_f)^2 (x - x_i)^4 \left(4x + x_i - 5x_f\right)}{2(x_f - x_i)^5} \\
\beta_7(x) &= \frac{(x - x_f)^4 (x - x_i)^3}{6(x_f - x_i)^4} \\
\beta_8(x) &= \frac{(x - x_f)^3 (x - x_i)^4}{6(x_f - x_i)^4}
\end{aligned}
$$
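These expressions can be checked symbolically. The SymPy snippet below is our addition, not part of the original appendix: it verifies that each $\beta_j$ realizes exactly one unit endpoint condition, $\beta_{2k+1}^{(k)}(x_i) = 1$ and $\beta_{2k+2}^{(k)}(x_f) = 1$ for $k = 0, \dots, 3$, with all other derivatives through third order vanishing at both endpoints.

```python
import sympy as sp

# The eight switching functions, transcribed symbolically.
x, xi, xf = sp.symbols('x x_i x_f')
d = xf - xi

beta = [
    (x - xf)**4*(20*x**3 + 10*x**2*xf + 4*x*xf**2 + xf**3
                 - 7*xi*(10*x**2 + 4*x*xf + xf**2)
                 + 21*xi**2*(4*x + xf) - 35*xi**3)/d**7,
    -(x - xi)**4*(20*x**3 + 10*x**2*xi + 4*x*xi**2 + xi**3
                  - 7*xf*(10*x**2 + 4*x*xi + xi**2)
                  + 21*xf**2*(4*x + xi) - 35*xf**3)/d**7,
    (x - xf)**4*(x - xi)*(10*x**2 + 4*x*xf + xf**2
                          - 6*xi*(4*x + xf) + 15*xi**2)/d**6,
    (x - xf)*(x - xi)**4*(10*x**2 + 4*x*xi + xi**2
                          - 6*xf*(4*x + xi) + 15*xf**2)/d**6,
    (x - xf)**4*(x - xi)**2*(4*x + xf - 5*xi)/(2*d**5),
    -(x - xf)**2*(x - xi)**4*(4*x + xi - 5*xf)/(2*d**5),
    (x - xf)**4*(x - xi)**3/(6*d**4),
    (x - xf)**3*(x - xi)**4/(6*d**4),
]

# beta_{2k+1} must carry the k-th derivative at x_i, beta_{2k+2} at x_f;
# every other derivative through third order must vanish at both endpoints.
ok = True
for j, b in enumerate(beta):
    for k in range(4):
        want_i = 1 if (j % 2 == 0 and k == j//2) else 0
        want_f = 1 if (j % 2 == 1 and k == j//2) else 0
        dk = sp.diff(b, x, k)
        ok &= sp.simplify(dk.subs(x, xi) - want_i) == 0
        ok &= sp.simplify(dk.subs(x, xf) - want_f) == 0
```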

## References

1. Gupta, Y.; Kumar, M. B-Spline Based Numerical Algorithm for Singularly Perturbed Problem of Fourth Order. Am. J. Comput. Appl. Math. 2012, 1, 29–32. [Google Scholar] [CrossRef] [Green Version]
2. Khalid, A.; Naeem, M.N.; Agarwal, P.; Ghaffar, A.; Ullah, Z.; Jain, S. Numerical approximation for the solution of linear sixth order boundary value problems by cubic B-spline. Adv. Differ. Equations 2019, 2019, 492. [Google Scholar] [CrossRef] [Green Version]
3. Khalid, A.; Naeem, M.N.; Ullah, Z.; Ghaffar, A.; Baleanu, D.; Nisar, K.S.; Al-Qurashi, M.M. Numerical Solution of the Boundary Value Problems Arising in Magnetic Fields and Cylindrical Shells. Mathematics 2019, 7, 508. [Google Scholar] [CrossRef] [Green Version]
4. Boutayeb, A.; Twizell, E.H. Finite-difference methods for the solution of special eighth-order boundary-value problems. Int. J. Comput. Math. 1993, 48, 63–75. [Google Scholar] [CrossRef]
5. Wazwaz, A.M. The Numerical Solution of Special Fourth-Order Boundary Value Problems by the Modified Decomposition Method. Int. J. Comput. Math. 2002, 79, 345–356. [Google Scholar] [CrossRef]
6. Liu, G.; Wu, T. Differential quadrature solutions of eighth-order boundary-value differential equations. J. Comput. Appl. Math. 2002, 145, 223–235. [Google Scholar] [CrossRef]
7. Inc, M.; Evans, D.J. An efficient approach to approximate solutions of eighth-order boundary-value problems. Int. J. Comput. Math. 2004, 81, 685–692. [Google Scholar] [CrossRef]
8. Noor, M.; Mohyud-Din, S. Homotopy method for solving eighth order boundary value problem. J. Math. Anal. Approx. Theory 2006, 2, 161–169. [Google Scholar]
9. Golbabai, A.; Javidi, M. Application of homotopy perturbation method for solving eighth-order boundary value problems. Appl. Math. Comput. 2007, 191, 334–346. [Google Scholar] [CrossRef]
10. Meštrović, M. The modified decomposition method for eighth-order boundary value problems. Appl. Math. Comput. 2007, 188, 1437–1444. [Google Scholar] [CrossRef]
11. Haq, S.; Idrees, M.; Islam, S. Application Of Optimal Homotopy Asymptotic Method To Eighth Order Initial And Boundary Value Problems. Int. J. Appl. Math. Comput. 2010, 2, 73–80. [Google Scholar]
12. Viswanadham, K.; Raju, Y.S. Quintic B-Spline Collocation Method For Eighth Boundary Value Problems. Adv. Comput. Math. Its Appl. 2012, 1, 47–52. [Google Scholar]
13. Akram, G.; Rehman, H.U. Numerical solution of eighth order boundary value problems in reproducing Kernel space. Numer. Algorithms 2013, 62, 527–540. [Google Scholar] [CrossRef]
14. Viswanadham, K.; Raju, Y.S. Sextic B-Spline Collocation Method For Eighth Order Boundary Value Problems. Int. J. Appl. Sci. Eng. 2014, 12, 43–57. [Google Scholar]
15. Kasi, N.S.; Viswanadham, K.; Ballem, S. Numerical Solution of Eighth Order Boundary Value Problems by Galerkin Method with Quintic B-splines. Int. J. Comput. Appl. 2014, 89, 7–13. [Google Scholar] [CrossRef]
16. Ballem, S.; Viswanadham, K.K. Numerical Solution of Eighth Order Boundary Value Problems by Galerkin Method with Septic B-splines. Procedia Eng. 2015, 127, 1370–1377. [Google Scholar] [CrossRef] [Green Version]
17. Elahi, Z.; Akram, G.; Siddiqi, S.S. Numerical solution for solving special eighth-order linear boundary value problems using Legendre Galerkin method. Math. Sci. 2016, 10, 201–209. [Google Scholar] [CrossRef]
18. Reddy, S. Numerical Solution of Eighth Order Boundary Value Problems by Petrov-Galerkin Method with Quintic B-splines as basic functions and Septic B-Splines as weight functions. Int. J. Eng. Comput. Sci. 2016, 17894–17901. [Google Scholar] [CrossRef]
19. Reddy, A.P.; Harageri, M.; Sateesha, C. A numerical approach to solve eighth order boundary value problems by Haar wavelet collocation method. J. Math. Model. 2017, 5. [Google Scholar] [CrossRef]
20. Djidjeli, K.; Twizell, E.; Boutayeb, A. Numerical methods for special nonlinear boundary-value problems of order 2m. J. Comput. Appl. Math. 1993, 47, 35–45. [Google Scholar] [CrossRef] [Green Version]
21. Noor, M.; Mohyud-Din, S. Variational Iteration Method for Solving Higher-order Nonlinear Boundary Value Problems Using He’s Polynomials. Int. J. Nonlinear Sci. Numer. Simul. 2008, 9, 141–156. [Google Scholar] [CrossRef]
22. Agarwal, R.P. Boundary Value Problems from Higher Order Differential Equations; World Scientific: Singapore, 1986. [Google Scholar] [CrossRef]
23. Noor, M.; Mohyud-Din, S. Homotopy Perturbation Method for Solving Nonlinear Higher-order Boundary Value Problems. Int. J. Nonlinear Sci. Numer. Simul. 2008, 9, 395–408. [Google Scholar] [CrossRef]
24. Mohyud-Din, S.; Noor, M.; Noor, K. Exp-function method for solving higher-order boundary value problems. Bull. Inst. Math. Acad. Sin. (New Ser.) 2009, 4, 219–234. [Google Scholar]
25. Chandrasekhar, S. Hydrodynamic and Hydromagnetic Stability; Dover: New York, NY, USA, 1961. [Google Scholar]
26. Schwaighofer, J.; Microys, H.F. Orthotropic Cylindrical Shells Under Line Load. J. Appl. Mech. 1979, 46, 356–362. [Google Scholar] [CrossRef]
27. Shanmugam, T.; Muthiah, M.; Radenović, S. Existence of Positive Solution for the Eighth-Order Boundary Value Problem Using Classical Version of Leray-Schauder Alternative Fixed Point Theorem. Axioms 2019, 8, 129. [Google Scholar] [CrossRef] [Green Version]
28. Mortari, D. The Theory of Connections: Connecting Points. Mathematics 2017, 5, 57. [Google Scholar] [CrossRef] [Green Version]
29. Mortari, D. Least-Squares Solution of Linear Differential Equations. Mathematics 2017, 5, 48. [Google Scholar] [CrossRef]
30. Mortari, D.; Johnston, H.; Smith, L. High accuracy least-squares solutions of nonlinear differential equations. J. Comput. Appl. Math. 2019, 352, 293–307. [Google Scholar] [CrossRef]
31. Johnston, H.; Mortari, D. Least-squares solutions of boundary-value problems in hybrid systems. arXiv 2019, arXiv:math.OC/1911.04390. [Google Scholar]
32. Furfaro, R.; Mortari, D. Least-squares Solution of a Class of Optimal Space Guidance Problems via Theory of Connections. Acta Astronaut. 2019. [Google Scholar] [CrossRef]
33. Johnston, H.; Schiassi, E.; Furfaro, R.; Mortari, D. Fuel-Efficient Powered Descent Guidance on Large Planetary Bodies via Theory of Functional Connections. arXiv 2020, arXiv:math.OC/2001.03572. [Google Scholar]
34. Leake, C.; Johnston, H.; Smith, L.; Mortari, D. Analytically Embedding Differential Equation Constraints into Least Squares Support Vector Machines Using the Theory of Functional Connections. Mach. Learn. Knowl. Extr. 2019, 1, 1058–1083. [Google Scholar] [CrossRef] [Green Version]
35. Leake, C. Deep Theory of Functional Connections: A New Method for Estimating the Solutions of PDEs. arXiv 2018, arXiv:cs.NA/1812.08625. [Google Scholar]
36. Mai, T.; Mortari, D. Theory of Functional Connections Applied to Nonlinear Programming under Equality Constraints. Paper presented at the 2019 AAS/AIAA Astrodynamics Specialist Conference, Portland, ME, USA, 11–15 August 2019. [Google Scholar]
37. Johnston, H.; Leake, C.; Efendiev, Y.; Mortari, D. Selected Applications of the Theory of Connections: A Technique for Analytical Constraint Embedding. Mathematics 2019, 7, 537. [Google Scholar] [CrossRef] [Green Version]
38. Mortari, D.; Leake, C. The Multivariate Theory of Connections. Mathematics 2019, 7, 296. [Google Scholar] [CrossRef] [Green Version]
39. Leake, C.; Mortari, D. An Explanation and Implementation of Multivariate Theory of Functional Connections via Examples. In Proceedings of the AIAA/AAS Astrodynamics Specialist Conference, Portland, ME, USA, 11–15 August 2019. [Google Scholar]
40. Lanczos, C. Applied Analysis; Dover Publications Inc.: New York, NY, USA, 1957; p. 504. [Google Scholar]
41. Wright, K. Chebyshev Collocation Methods for Ordinary Differential Equations. Comput. J. 1964, 6, 358–365. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Initialization error of the solution of Problem #4 by imposing $\xi_0 = 0$.
Table 1. Problem #1: Absolute solution error.
| x | TFC Absolute Error | Ref. [19] Absolute Error |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0.1 | $2.2204 \times 10^{-16}$ | $6.3 \times 10^{-11}$ |
| 0.2 | $1.1102 \times 10^{-16}$ | $6.5 \times 10^{-10}$ |
| 0.3 | $1.1102 \times 10^{-16}$ | $2.0 \times 10^{-9}$ |
| 0.4 | $1.1102 \times 10^{-16}$ | $3.3 \times 10^{-9}$ |
| 0.5 | $1.1102 \times 10^{-16}$ | $3.9 \times 10^{-9}$ |
| 0.6 | $6.6613 \times 10^{-16}$ | $3.4 \times 10^{-9}$ |
| 0.7 | $2.7756 \times 10^{-15}$ | $2.0 \times 10^{-9}$ |
| 0.8 | $3.8858 \times 10^{-15}$ | $6.9 \times 10^{-10}$ |
| 0.9 | $8.4932 \times 10^{-15}$ | $7.6 \times 10^{-11}$ |
| 1 | 0 | 0 |
Table 2. Problem #2: Absolute solution error.
| x | TFC Absolute Error | Ref. [13] Absolute Error |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0.1 | 0 | $1.63 \times 10^{-10}$ |
| 0.2 | $8.3267 \times 10^{-17}$ | $1.63 \times 10^{-9}$ |
| 0.3 | 0 | $4.90 \times 10^{-9}$ |
| 0.4 | $1.1102 \times 10^{-16}$ | $8.46 \times 10^{-9}$ |
| 0.5 | $5.5511 \times 10^{-17}$ | $1.01 \times 10^{-8}$ |
| 0.6 | $3.8858 \times 10^{-16}$ | $8.68 \times 10^{-9}$ |
| 0.7 | $3.3307 \times 10^{-16}$ | $5.15 \times 10^{-9}$ |
| 0.8 | $3.3307 \times 10^{-16}$ | $1.76 \times 10^{-9}$ |
| 0.9 | $8.0769 \times 10^{-15}$ | Not reported |
| 1 | 0 | 0 |
Table 3. Problem #3: Absolute solution error.
| x | TFC Absolute Error | Ref. [19] Absolute Error |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0.1 | $2.7756 \times 10^{-17}$ | $6.6 \times 10^{-12}$ |
| 0.2 | $2.7756 \times 10^{-17}$ | $6.9 \times 10^{-11}$ |
| 0.3 | 0 | $2.1 \times 10^{-10}$ |
| 0.4 | $5.5511 \times 10^{-17}$ | $3.5 \times 10^{-10}$ |
| 0.5 | 0 | $4.1 \times 10^{-10}$ |
| 0.6 | $7.2164 \times 10^{-16}$ | $3.5 \times 10^{-10}$ |
| 0.7 | $1.3323 \times 10^{-15}$ | $2.1 \times 10^{-10}$ |
| 0.8 | $1.1102 \times 10^{-15}$ | $7.2 \times 10^{-11}$ |
| 0.9 | $3.4417 \times 10^{-15}$ | $8.0 \times 10^{-12}$ |
| 1 | 0 | 0 |
Table 4. Problem #4: Absolute solution error.
| x | TFC Absolute Error | Ref. [18] Absolute Error |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0.1 | $2.2204 \times 10^{-16}$ | $2.503395 \times 10^{-6}$ |
| 0.2 | 0 | $8.940697 \times 10^{-6}$ |
| 0.3 | $2.2204 \times 10^{-16}$ | $1.561642 \times 10^{-5}$ |
| 0.4 | $4.4409 \times 10^{-16}$ | $1.823902 \times 10^{-5}$ |
| 0.5 | $2.2204 \times 10^{-16}$ | $8.821487 \times 10^{-6}$ |
| 0.6 | $6.6613 \times 10^{-16}$ | $7.510185 \times 10^{-6}$ |
| 0.7 | $3.5527 \times 10^{-15}$ | $1.883507 \times 10^{-5}$ |
| 0.8 | $7.5495 \times 10^{-15}$ | $1.931190 \times 10^{-5}$ |
| 0.9 | $1.0214 \times 10^{-14}$ | $1.168251 \times 10^{-5}$ |
| 1 | 0 | 0 |
Table 5. Problem #5: Absolute solution error.
| x | TFC Absolute Error | Ref. [15] Absolute Error |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0.1 | $1.5266 \times 10^{-16}$ | $2.01 \times 10^{-7}$ |
| 0.2 | $1.5821 \times 10^{-15}$ | $4.54 \times 10^{-7}$ |
| 0.3 | $7.0083 \times 10^{-14}$ | $1.52 \times 10^{-6}$ |
| 0.4 | $2.5846 \times 10^{-13}$ | $4.07 \times 10^{-6}$ |
| 0.5 | $3.2330 \times 10^{-13}$ | $6.71 \times 10^{-6}$ |
| 0.6 | $1.3139 \times 10^{-13}$ | $9.06 \times 10^{-6}$ |
| 0.7 | $2.1261 \times 10^{-14}$ | $1.00 \times 10^{-5}$ |
| 0.8 | $2.0539 \times 10^{-14}$ | $5.45 \times 10^{-6}$ |
| 0.9 | $3.3307 \times 10^{-16}$ | $2.59 \times 10^{-6}$ |
| 1 | 0 | 0 |
Table 6. Problem #6: Absolute solution error.
| x | TFC Absolute Error | Ref. [19] Absolute Error |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0.1 | $1.1102 \times 10^{-16}$ | $2.9 \times 10^{-12}$ |
| 0.2 | $1.1102 \times 10^{-16}$ | $2.7 \times 10^{-11}$ |
| 0.3 | 0 | $7.6 \times 10^{-11}$ |
| 0.4 | 0 | $1.3 \times 10^{-10}$ |
| 0.5 | $1.1102 \times 10^{-16}$ | $1.5 \times 10^{-10}$ |
| 0.6 | $1.1102 \times 10^{-16}$ | $1.3 \times 10^{-10}$ |
| 0.7 | $2.2204 \times 10^{-16}$ | $7.6 \times 10^{-11}$ |
| 0.8 | $3.2196 \times 10^{-15}$ | $2.5 \times 10^{-11}$ |
| 0.9 | $9.9920 \times 10^{-16}$ | $2.4 \times 10^{-12}$ |
| 1 | 0 | 0 |
Table 7. Mean absolute error of all derivatives for Problem #5.
| Function | Mean Absolute Error: 10 Basis Functions | Mean Absolute Error: 30 Basis Functions |
| --- | --- | --- |
| $y$ | $7.5585 \times 10^{-14}$ | $9.6866 \times 10^{-16}$ |
| $y'$ | $1.0534 \times 10^{-12}$ | $7.5884 \times 10^{-15}$ |
| $y''$ | $2.0202 \times 10^{-11}$ | $5.0360 \times 10^{-14}$ |
| $y^{(3)}$ | $4.9228 \times 10^{-10}$ | $4.0456 \times 10^{-13}$ |
| $y^{(4)}$ | $1.3318 \times 10^{-8}$ | $2.8079 \times 10^{-12}$ |
| $y^{(5)}$ | $3.8469 \times 10^{-7}$ | $1.3927 \times 10^{-11}$ |
| $y^{(6)}$ | $1.3150 \times 10^{-5}$ | $5.5250 \times 10^{-11}$ |
| $y^{(7)}$ | $3.9359 \times 10^{-4}$ | $2.0221 \times 10^{-10}$ |
| $y^{(8)}$ | $1.9399 \times 10^{-2}$ | $1.4765 \times 10^{-12}$ |
Table 8. Mean absolute error of all derivatives for Problem #7.
| Function | Mean Absolute Error: 10 Basis Functions | Mean Absolute Error: 30 Basis Functions |
| --- | --- | --- |
| $y$ | $4.8255 \times 10^{-15}$ | $5.8919 \times 10^{-15}$ |
| $y'$ | $8.3368 \times 10^{-15}$ | $9.9755 \times 10^{-15}$ |
| $y''$ | $7.1054 \times 10^{-14}$ | $5.9525 \times 10^{-14}$ |
| $y^{(3)}$ | $4.8760 \times 10^{-13}$ | $4.8352 \times 10^{-13}$ |
| $y^{(4)}$ | $1.5118 \times 10^{-12}$ | $1.6443 \times 10^{-12}$ |
| $y^{(5)}$ | $4.1244 \times 10^{-12}$ | $4.4041 \times 10^{-12}$ |
| $y^{(6)}$ | $3.1934 \times 10^{-12}$ | $3.2911 \times 10^{-12}$ |
| $y^{(7)}$ | $8.0532 \times 10^{-12}$ | $7.3956 \times 10^{-12}$ |
| $y^{(8)}$ | $8.1927 \times 10^{-11}$ | $8.7722 \times 10^{-12}$ |
