Univariate Theory of Functional Connections Applied to Component Constraints

Abstract: This work presents a methodology to derive analytical functionals with embedded linear constraints among the components of a vector (e.g., coordinates) that is a function of a single variable (e.g., time). This work provides the background necessary for the indirect solution of optimal control problems via the application of the Pontryagin Maximum Principle. The methodology presented is part of the univariate Theory of Functional Connections, which has been developed to solve constrained optimization problems. To increase the clarity and practical usefulness of the proposed method, the work is mostly presented via examples of applications rather than via rigorous mathematical definitions and proofs.


Introduction
The Theory of Functional Connections (TFC) is an analytical framework developed to perform functional interpolation, that is, to derive analytical functionals, called constrained expressions, describing all functions satisfying a set of assigned constraints. This framework has been developed for univariate and multivariate rectangular domains and for a wide class of constraints, including point and derivative constraints, integral constraints, linear combinations of constraints, and, partially, component constraints. The TFC theory has been presented in detail in [1][2][3][4][5][6]. For instance, the extension to 2-dimensional space allows TFC to generate all surfaces connecting Dirichlet and Neumann boundary conditions. Recently, the domain mapping technique has been shown to be the first step toward extending TFC to any (non-rectangular) domain [7].
The first TFC application was to obtain least-squares solutions of linear and nonlinear ordinary (ODEs) and partial (PDEs) differential equations. Specifically, for ODEs the solutions have been obtained with the following features:
• solutions are approximate and analytical (this allows easier subsequent analysis and further manipulation);
• the approach solves initial-, boundary-, or multi-value problems by the same unified procedure;
• the approach is numerically robust (low condition number);
• solutions are usually provided at machine-error accuracy;
• solutions are usually obtained at the msec level (suitable for real-time applications); and
• the constraint range is independent of the integration range (solution accuracy is maintained outside the constraint range).
TFC applied to component constraints was initially presented in [12] to solve first-order ODEs. However, the solution provided in [12] is restricted to the specific cases presented there. In this article, the general theory of univariate component constraints is presented. This theory can be further applied to solve more complex differential equations or optimization problems subject to component constraints, such as those generated in indirect optimal control problems. Applications to optimization based on the theory presented in this work will not be considered in this study and will be the subject of future works. However, a simple optimal control problem is included as an example.
This study considers a vector, y(t) ∈ R^n, depending on a single independent variable, t (univariate case), whose components must satisfy a set of p constraints. Constraints can be absolute, relative, or any linear combination thereof. The general multivariate case of a vector y(t) : R^m → R^n depending on m independent variables, t := {t_1, t_2, ..., t_m}, will be the subject of future studies.
Before presenting the univariate Theory of Functional Connections for component constraints, a brief summary of univariate TFC and a summary of the initial (and incomplete) work on component constraints presented in [12] are provided in the next two sections.

Summary of Univariate Theory of Functional Connections
The univariate Theory of Functional Connections derives analytical functionals, called constrained expressions, satisfying p distinct linear constraints. These constraints can be point and derivative constraints (e.g., y(π) = 7 and y′(−1) = 1), integral constraints (e.g., ∫_{−5}^{−1} y(τ) dτ = 2), and linear combinations of constraints (e.g., 2y(π) − y′(−1) + 3 ∫_{−5}^{−1} y(τ) dτ = 0). These functionals can be analytically obtained using either of the following two equivalent formal expressions,

y(t, g(t)) = g(t) + Σ_{k=1}^{p} η_k(t, g(t)) s_k(t) = g(t) + Σ_{k=1}^{p} φ_k(t, s_k(t)) ρ_k(t, g(t)),    (1)

where g(t) is the free function; s_k(t) are p linearly independent support functions (e.g., monomials, Fourier terms, etc.); η_k(t, g(t)) are functional coefficients to be found by imposing the constraints (these are actually scalars in the univariate case and become functionals in the multivariate case); φ_k(t, s_k(t)) are switching functions satisfying φ_k = 1 at the k-th constraint and φ_k = 0 at the i-th constraint, with i ≠ k; ρ_k(t, g(t)) are projection functionals; and t is a vector specifying where the p constraints are defined. This means that η_k(t, g(t)) and ρ_k(t, g(t)) are not continuous functions of t. A rigorous mathematical definition of φ_k(t, s_k(t)) and ρ_k(t, g(t)) is given in [4].
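As a concrete numerical illustration of the first formal expression, the following Python sketch (not from the paper; the constraint values, support functions, and free functions are illustrative assumptions) embeds two point constraints, y(t_1) = y_1 and y(t_2) = y_2, into a constrained expression and checks that the constraints hold for any choice of the free function g(t).

```python
import numpy as np

# Sketch of a univariate TFC constrained expression (illustrative values and
# support functions, not from the paper) for the two point constraints
# y(t1) = y1 and y(t2) = y2, with support functions s1(t) = 1, s2(t) = t.
t1, y1 = 0.0, 3.0
t2, y2 = 2.0, -1.0

def constrained_expression(t, g):
    """y(t, g) = g(t) + eta1 * s1(t) + eta2 * s2(t), where the eta_k solve
    the 2x2 linear system obtained by imposing the two point constraints."""
    S = np.array([[1.0, t1],
                  [1.0, t2]])               # s_k evaluated at t1 and t2
    b = np.array([y1 - g(t1), y2 - g(t2)])  # constraint defects of g alone
    eta1, eta2 = np.linalg.solve(S, b)
    return g(t) + eta1 + eta2 * t

# Whatever free function g is chosen, both constraints remain satisfied.
for g in (np.sin, np.cosh, lambda t: t**3 - 5.0):
    assert abs(constrained_expression(t1, g) - y1) < 1e-12
    assert abs(constrained_expression(t2, g) - y2) < 1e-12
```

Changing g(t) sweeps through the whole family of functions satisfying the two constraints, which is the defining property of a constrained expression.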
By imposing the four constraints, a system of four equations in the four η_k unknowns is obtained. By selecting the support functions as monomials, s_1(t) = 1, s_2(t) = t, s_3(t) = t^2, and s_4(t) = t^3, the previous system can be solved, providing the functional (constrained expression) for the four specified constraints. This functional represents all functions simultaneously satisfying the four constraints. Furthermore, it highlights the switching/projection formal expression given in Equation (1), from which the corresponding switching functions and projection functionals can be identified.

Correct Functionals for the Component Constraints Previously Provided
The TFC functionals for component constraints provided in [12] were specifically obtained as the simplest functionals that can be used to solve first-order differential equations. However, while fit for the purpose of [12], the functionals provided cannot be adopted for the general case of component constraints. In this section, for completeness, we provide the correct expressions of these functionals, also highlighting the constraint requirements affecting the selected support functions. These correct expressions are derived using the general formalism provided in Section 4.

Two Absolute Constraints
The case of two absolute component constraints of a vector v(t) := {x(t), y(t)} ∈ R^2 generates a single, simple case, in which the support functions are subject to s_11(t_a) ≠ 0 and s_22(t_b) ≠ 0, respectively.
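A minimal Python sketch (illustrative, not the paper's code) of one such absolute constraint makes the support-function requirement explicit: imposing x(t_a) = x_a on x(t) = g_1(t) + η_1 s_11(t) gives η_1 = (x_a − g_1(t_a))/s_11(t_a), which is computable only when s_11(t_a) ≠ 0.

```python
import numpy as np

# Illustrative sketch (not the paper's code) of one absolute component
# constraint, x(ta) = xa, embedded as x(t) = g1(t) + eta1 * s11(t).
# Imposing the constraint yields eta1 = (xa - g1(ta)) / s11(ta), which
# exists only if the requirement s11(ta) != 0 holds.
ta, xa = 1.0, 5.0
g1 = np.cos              # arbitrary free function
s11 = lambda t: t        # support function with s11(ta) = 1 != 0

eta1 = (xa - g1(ta)) / s11(ta)
x = lambda t: g1(t) + eta1 * s11(t)

assert abs(x(ta) - xa) < 1e-12   # constraint satisfied for this (any) g1

# Choosing instead s11(t) = t - ta would give s11(ta) = 0, and eta1 would
# be undefined: that support function is inconsistent with the constraint.
```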

One Absolute and One Relative Constraint
The case of one absolute and one relative constraint generates two distinct cases, in which the support functions are subject to s_11(t_a) ≠ 0 and s_22(t_b) ≠ 0, respectively.

Two Relative Constraints
The general case of two relative constraints generates three distinct cases.

Univariate Theory of Functional Connections Subject to Component Constraints
Let us consider a set of p component constraints given as linear combinations of points, derivatives, or integrals of components, defined at specific values (points and derivatives) or over a domain range (integrals). To derive the constrained expression of each component, the following rules apply.

1. The x_i component appears in n_i constraints whose indices are the elements of the vector of integers I_i. For instance, if the x_i component appears only in the constraint equations identified as "2", "9", and "19", then I_i = {2, 9, 19} and n_i = 3, which is the length of the I_i vector.

2. The constrained expression of the x_i component is the sum of the free function g_i(t) and a linear combination of n_i functional coefficients, η_k(t, g(t)), and n_i linearly independent support functions, s_ik(t),

x_i(t, g_i(t)) = g_i(t) + Σ_{k ∈ I_i} η_k(t, g(t)) s_ik(t).    (2)

Note that the total number of distinct functional coefficients, η_k(t, g(t)), is the total number of constraints to be satisfied. The coefficients η_k(t, g(t)) are not continuous functions of t. They depend on the specific values of t where the constraints are defined. All these specific values are the elements of the vector t = {t_1, t_2, ...}. Each component, x_i, has its own constrained expression and its own free function, g_i(t). The following three examples help to clarify how to apply Equation (2).

Example 1. Consider the p = 2 constraints among the components of the vector x(t) ∈ R^4. Note that, since the last component, x_4(t), does not appear in any constraint equation, I_4 ≡ ∅, while the other index vectors are I_1 = {1} and I_2 = I_3 = {1, 2}, because x_1 appears in the first constraint equation only while x_2 and x_3 appear in both constraint equations. Therefore, for the two constraints given in Equation (3), the component constrained functionals follow from Equation (2), with t := {0, −1, 2, 3, π, 4}. Substituting these functionals into the two constraints and collecting the η_k terms yields a linear system in the η_k unknowns. The selected support functions are consistent if the functional coefficients η_k can be computed, that is, if the coefficient matrix of this system (which depends only on the selected support functions) is nonsingular. The matrix is nonsingular if a linear combination of the selected support functions can be adopted to interpolate the constraints (this interpolation problem is obtained by setting all the g_k(t) free functions to zero). This means that the consistency of the selected support functions is directly connected to an interpolation problem. Solving this interpolation problem also solves its generalization, the functional interpolation problem.
The following simple equivalent example clarifies this interpolation issue.
Let us consider adopting the function y(t) = η_1 s_1(t) + η_2 s_2(t), with s_1(t) = 1 and s_2(t) = t, to interpolate two known derivatives at two distinct points, y′(t_1) = ẏ_1 and y′(t_2) = ẏ_2. This problem cannot be solved using the selected support functions: the derivative of η_1 + η_2 t is the constant η_2, which cannot match two distinct derivative values. In this case, the selected support functions are inconsistent with the interpolation problem. The support functions become consistent with the constraints by selecting, for instance, s_1(t) = t and s_2(t) = t^2. Coming back to our example, by selecting s_11(t) = s_21(t) = s_31(t) = 1 and s_22(t) = s_32(t) = t, the previous equation provides the expressions for the η_k functional coefficients. This also shows the connection between the functional coefficients, η_k, and the projection functionals, ρ_k, appearing in Equation (1) (see [4] for the formal definition and properties of projection functionals). Therefore, the η_k functional coefficients for this specific example can be written in terms of the projection functionals.

Example 2. Consider the p = 3 constraints among the components of the vector v(t) ∈ R^3.
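The consistency argument above can be checked numerically. In this sketch (an illustration, with arbitrarily chosen points t_1 and t_2), the matrix of support-function derivatives at the constraint points is singular for s_1(t) = 1, s_2(t) = t and nonsingular for s_1(t) = t, s_2(t) = t^2.

```python
import numpy as np

# Numerical check of the interpolation-consistency argument (points chosen
# arbitrarily for illustration). Interpolating two derivative values with
# y(t) = eta1*s1(t) + eta2*s2(t) requires the matrix of support-function
# derivatives at the constraint points to be nonsingular.
t1, t2 = 0.5, 1.5

# s1(t) = 1, s2(t) = t: derivatives are 0 and 1 everywhere -> singular.
S_bad = np.array([[0.0, 1.0],
                  [0.0, 1.0]])

# s1(t) = t, s2(t) = t**2: derivatives are 1 and 2t -> nonsingular if t1 != t2.
S_good = np.array([[1.0, 2.0 * t1],
                   [1.0, 2.0 * t2]])

assert abs(np.linalg.det(S_bad)) < 1e-12   # inconsistent support functions
assert abs(np.linalg.det(S_good)) > 1e-12  # consistent support functions
```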

Application to a Simple Example of Optimal Control Problem
Consider the following one-dimensional optimal control problem. A unitary mass is in rectilinear motion subject to the one-dimensional control force, u(t), in the direction of motion. Initial state conditions are set to be x(t 0 ) = x 0 = 1 m and v(t 0 ) = v 0 = 2 m/s, where t 0 = 0. The goal is to find the optimal control, u(t), to bring the point mass sufficiently close to the origin at the final time, t f = 2 s, with minimum control effort. Overall, the optimal control problem can be mathematically formulated as follows.
Find the optimal control u(t) and the trajectory x(t) and v(t) that minimize the following cost function, subject to the dynamic equations

ẋ = v,    v̇ = u,

and the boundary conditions x(t_0) = x_0 and v(t_0) = v_0. The necessary conditions are determined by applying Pontryagin's Maximum Principle (PMP) [35]. First, we compute the Hamiltonian,

H = (1/2) u^2 + λ_x v + λ_v u.

We find the optimal control by applying the optimality condition, ∂H/∂u = u + λ_v = 0, i.e., u = −λ_v. Therefore, the Hamiltonian becomes

H = −(1/2) λ_v^2 + λ_x v.

State and costate dynamic equations are derived from the Hamiltonian as follows,

ẋ = ∂H/∂λ_x = v,    v̇ = ∂H/∂λ_v = −λ_v,    λ̇_x = −∂H/∂x = 0,    λ̇_v = −∂H/∂v = −λ_x.

The constraints are the initial conditions, while the remaining boundary conditions can be determined from the transversality conditions, in which µ_1 and µ_2 are the Lagrange multipliers associated with the initial states x(t_0) and v(t_0), respectively. The optimal control and trajectory solution can be found by solving the set of ODEs (necessary conditions, i.e., a two-point boundary value problem) provided by the Hamiltonian formulation, where λ_x and λ_v represent the costate components associated with x and v, respectively. This system of differential equations admits an analytical (polynomial) solution.
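Since the state/costate system above is autonomous, the Hamiltonian must remain constant along any of its solutions. The following Python sketch verifies this numerically; the explicit matrix form of the necessary conditions and the initial costate values are illustrative assumptions made here, and the reduced Hamiltonian used is H = −λ_v²/2 + λ_x v.

```python
import numpy as np

# Sanity check of the necessary conditions (a sketch; the matrix form and
# the initial costate values below are illustrative assumptions). With
# u = -lambda_v, the state/costate system is linear in y = [x, v, l_x, l_v]:
A = np.array([[0.0, 1.0,  0.0,  0.0],   # xdot   = v
              [0.0, 0.0,  0.0, -1.0],   # vdot   = -lambda_v
              [0.0, 0.0,  0.0,  0.0],   # lx_dot = 0
              [0.0, 0.0, -1.0,  0.0]])  # lv_dot = -lambda_x

def rk4_step(y, h):
    # classic fourth-order Runge-Kutta step for ydot = A @ y
    k1 = A @ y
    k2 = A @ (y + 0.5 * h * k1)
    k3 = A @ (y + 0.5 * h * k2)
    k4 = A @ (y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def hamiltonian(y):
    # reduced Hamiltonian after substituting u = -lambda_v
    _, v, l_x, l_v = y
    return -0.5 * l_v**2 + l_x * v

y = np.array([1.0, 2.0, 0.7, -0.3])   # x0, v0 and assumed costate values
H0 = hamiltonian(y)
h = 2.0 / 200
for _ in range(200):
    y = rk4_step(y, h)

# The system is autonomous, so H stays constant; lambda_x never changes.
assert abs(hamiltonian(y) - H0) < 1e-9
assert abs(y[2] - 0.7) < 1e-12
```

Because the system matrix is nilpotent, the true solution is polynomial (cubic in x), which is consistent with the analytical solution referenced in the text.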
Using the Theory of Functional Connections formulation for component constraints presented above, the constraints given in Equation (16) can be embedded into the following functionals, from which we immediately derive the expressions for η_2 and η_4, while the expressions for η_1 and η_3 are computed from the constraint expressions, providing the requirement for the support function selection. Table 1 provides three examples of support function selections and the requirements they are subject to. The system of differential equations given in Equation (16) can then be written in matrix form, where the expression of the H(t) matrix is derived by applying the constrained expressions given in Equation (18) to the differential equations given in Equation (16). Discretizing the time from the initial value, t_0, to the final value, t_f, an overdetermined linear system
is obtained. Then, the four unknown coefficient vectors (ξ_x, ξ_v, ξ_λx, and ξ_λv) can be computed by least squares.

Particular attention should be given to vectors, like the state vectors in dynamical systems, where some components are derivatives of other components (e.g., position, velocity, and acceleration). For instance, it would be a mistake to try to merge the x(t) and v(t) constrained expressions given in Equation (17) into a single constrained expression. While such an expression satisfies both initial constraints for x(t) and v(t), it does not satisfy the dynamical equations obtained from the Hamiltonian formulation, given in Equation (16), where the control is embedded in the costate terms.

Figure 1 shows the numerical results obtained in this simple 4 × 4 optimal control example. The left plots show the errors of the numerical least-squares solution for x(t), v(t), λ_x(t), and λ_v(t) with respect to the true (analytical) solution, using m = 4 basis functions (Chebyshev orthogonal polynomials of the first kind) and 16 collocation points. The top-right plot shows the control error, u(t), while the bottom-right plot shows the condition number of the (A^T A) matrix. The solution is obtained in 1.3 ms, using MATLAB on a standard commercial laptop. In this specific case, as the true solution is polynomial and the selected basis functions are also polynomial (Chebyshev orthogonal polynomials), the estimated solution fully captures the true solution with the minimum polynomial degree of the true solution (cubic). By increasing the number of basis functions, the solution does not change, and all the coefficients associated with polynomials of degree higher than 3 are estimated as zero.
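To make the least-squares procedure concrete, here is a simplified Python sketch. It is not the paper's implementation: it assumes hard terminal conditions x(t_f) = v(t_f) = 0 in place of the soft terminal penalty, uses a plain monomial basis instead of Chebyshev polynomials, and appends the boundary conditions as extra least-squares rows rather than embedding them through constrained expressions. Under these assumptions the analytical optimal control is u(t) = −5.5 + 4.5 t, which the sketch recovers.

```python
import numpy as np

# Least-squares collocation sketch for the PMP necessary conditions
#   xdot = v,  vdot = -lambda_v,  lx_dot = 0,  lv_dot = -lambda_x,
# assuming (for illustration only) hard terminal conditions
# x(tf) = v(tf) = 0 instead of the paper's soft terminal penalty.
t0, tf = 0.0, 2.0
x0, v0 = 1.0, 2.0
m = 4                                    # monomial basis 1, t, t^2, t^3
ts = np.linspace(t0, tf, 20)             # collocation points

P = np.vander(ts, m, increasing=True)    # basis values at collocation points
dP = np.zeros_like(P)                    # basis first derivatives
dP[:, 1:] = P[:, :-1] * np.arange(1, m)
Z = np.zeros_like(P)

# Unknown coefficient vectors, stacked as [xi_x, xi_v, xi_lx, xi_lv].
ode_rows = np.vstack([
    np.hstack([dP, -P,  Z,  Z]),   # xdot - v          = 0
    np.hstack([ Z, dP,  Z,  P]),   # vdot + lambda_v   = 0
    np.hstack([ Z,  Z, dP,  Z]),   # lx_dot            = 0
    np.hstack([ Z,  Z,  P, dP]),   # lv_dot + lambda_x = 0
])

def basis(t):
    return np.array([t**k for k in range(m)])

bc = np.zeros((4, 4 * m))
bc[0, 0:m]   = basis(t0)   # x(t0) = x0
bc[1, m:2*m] = basis(t0)   # v(t0) = v0
bc[2, 0:m]   = basis(tf)   # x(tf) = 0   (assumed hard constraint)
bc[3, m:2*m] = basis(tf)   # v(tf) = 0   (assumed hard constraint)

A = np.vstack([ode_rows, bc])
b = np.concatenate([np.zeros(ode_rows.shape[0]), [x0, v0, 0.0, 0.0]])

xi, *_ = np.linalg.lstsq(A, b, rcond=None)
u = lambda t: -(basis(t) @ xi[3*m:])     # optimal control u = -lambda_v

# Analytical optimal control under these assumptions: u(t) = -5.5 + 4.5*t.
assert abs(u(0.0) + 5.5) < 1e-6
assert abs(u(2.0) - 3.5) < 1e-6
```

Since the true solution is polynomial of degree at most three, the degree-3 basis captures it exactly, mirroring the behavior reported for the Chebyshev-based TFC solution.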

Conclusions
This paper provides a mathematical methodology to perform functional interpolation for the components of a vector that are subject to a set of linear constraints. The methodology adopts the framework of the Theory of Functional Connections, a mathematical method to derive functionals that always satisfy a set of linear constraints. These functionals reduce the whole search space of functions to just the subspace of functions satisfying the assigned constraints.
The main motivation of this study is to provide a new numerical method to solve indirect optimal control problems, where states and costates (components) are connected by constraints.
Several examples are provided showing how to derive the functionals fully satisfying the component constraints. This has been done for 2-dimensional time-varying vectors subject to three classic component constraint cases (two absolute, one absolute and one relative, and two relative) and for three more complex examples involving constraints given as linear combinations of points, derivatives, and integrals.
The study also includes a simple example of an indirect optimal control problem that is solved using the Pontryagin Maximum Principle and the proposed method. This example, whose solution is obtained by least squares, has been numerically tested. The results validate the proposed approach in terms of speed and accuracy.