
# The Theory of Connections: Connecting Points †

Aerospace Engineering, Texas A & M University, College Station, TX 77843-3141, USA
This paper is an extended version of our paper published in Mortari, D. The Theory of Connections. Part 1: Connecting Points, AAS 17-255, 2017 AAS/AIAA Space Flight Mechanics Meeting Conference, San Antonio, TX, USA, 5–9 February 2017; (dedicated to John Lee Junkins).
Mathematics 2017, 5(4), 57; https://doi.org/10.3390/math5040057
Received: 30 July 2017 / Revised: 17 September 2017 / Accepted: 24 October 2017 / Published: 1 November 2017

## Abstract

This study introduces a procedure to obtain all interpolating functions, $y = f(x)$, subject to linear constraints on the function and its derivatives defined at specified values. The paper first shows how to express the interpolating functions passing through a single point in three distinct ways: linear, additive, and rational. Then, using the additive formalism, interpolating functions with linear constraints on one, two, and $n$ points are introduced, as well as those satisfying relative constraints. In particular, for expressions passing through $n$ points, a generalization of Waring's interpolation form is introduced. An alternative approach to derive additive constrained interpolating expressions is introduced, requiring the inversion of a matrix with dimensions equal to the number of constraints. Finally, continuous and discontinuous interpolating periodic functions passing through a set of points with specified periods are provided. This theory has already been applied to obtain least-squares solutions of initial and boundary value problems for nonhomogeneous linear differential equations with nonconstant coefficients.

## 1. Introduction

This study shows how to derive analytical expressions, called “constrained expressions”, that can be used to represent functions subject to a set of linear constraints. The product of this paper is a procedure to derive these constrained expressions as a new mathematical tool, with the purpose of applying them to solve problems in computational science. Examples of potential applications of constrained expressions are: solving differential equations, performing constrained optimization, solving some types of optimal control problems, path planning, calculus of variations, etc. All of these applications will be the subject of future papers.
A resulting constrained expression, $f(x)$, is expressed in terms of a function, $g(x)$, which can be freely chosen. No matter how $g(x)$ is chosen, the resulting constrained expression always satisfies the set of linear constraints. Let us give an example to clarify the purpose of this study. The function
$$y(x) = g(x) + \frac{x(2x_2 - x)}{2(x_2 - x_1)}\left(\dot{y}_1 - \dot{g}_1\right) + \frac{x(x - 2x_1)}{2(x_2 - x_1)}\left(\dot{y}_2 - \dot{g}_2\right)$$
always satisfies the two following constraints, $\left.\dfrac{dy}{dx}\right|_{x = x_1} = \dot{y}_1$ and $\left.\dfrac{dy}{dx}\right|_{x = x_2} = \dot{y}_2$, for any function $g(x)$ that is differentiable at $x_1$ and $x_2$.
Constrained expressions are generalized interpolation formulae. The generalization rests on the fact that these expressions are not interpolating expressions for a class (or sub-class) of functions, but for all functions. Interpolation restricted to specific classes of functions is done, for instance, in Refs. [1,2,3], containing reviews of the classic interpolation methods applied in various areas, in [4] for distributed approximating functionals (DAFs), and in its recent improvement [5], using Hermite DAFs and Sinc DAFs. This paper provides interpolation formulae that are not restricted to a class of specific functions, as clarified by the following example.
The problem of writing the equation representing all linear functions passing through a specified point, $[x_0, y_0]$, is straightforward: $y(x) = m(x - x_0) + y_0$, with the line slope, $m$, which can be freely chosen. Constrained expressions represent all functions passing through $[x_0, y_0]$. This includes continuous, discontinuous, singular, algebraic, rational, transcendental, and periodic functions, just to mention some. These constrained expressions can be built in several ways. One expression is a direct extension of the linear equation
$$y(x) = p(x)\,(x - x_0) + y_0,$$
where $p(x)$ can be any function satisfying $p(x_0) \neq \infty$. A second expression, called additive, is
$$y(x) = g(x) + [y_0 - g(x_0)] = g(x) + (y_0 - g_0),$$
where $g(x_0) \neq \infty$, and a third expression, called rational, is
$$y(x) = \frac{h(x)}{h(x_0)}\, y_0 = \frac{h(x)}{h_0}\, y_0,$$
where $h(x)$ can be any function satisfying $h(x_0) \neq 0$. From Equations (1) and (2), the relationship $p(x) = \dfrac{g(x) - g_0}{x - x_0}$ is derived, showing that $p(x)$ is the slope of the secant of $g(x)$ passing through $[x_0, g_0]$.
The expressions introduced in Equations (1)–(3) can be used to represent all interpolating functions passing through the point $[x_0, y_0]$, with the only exceptions being those satisfying $p(x_0) = \infty$, $g(x_0) = \infty$, or $h(x_0) = 0$. The proof that Equation (2) provides all functions passing through the point $[x_0, y_0]$ is immediate: assume $f(x)$ is any function satisfying $f(x_0) = y_0$; then, by selecting $g(x) = f(x)$, we obtain $y(x) = f(x)$.
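As a quick sanity check, the three forms can be verified numerically. The following sketch uses a hypothetical point $[x_0, y_0]$ and arbitrary choices of $p(x)$, $g(x)$, and $h(x)$ (any choices satisfying the stated conditions would work equally well):

```python
import math

# Hypothetical point and free functions (any admissible choices would do):
x0, y0 = 1.5, 2.0
p = lambda x: math.sin(x)           # any p(x) with p(x0) finite
g = lambda x: math.exp(-x) + x**2   # any g(x) with g(x0) finite
h = lambda x: math.cosh(x)          # any h(x) with h(x0) != 0

y_linear   = lambda x: p(x) * (x - x0) + y0    # linear form, Equation (1)
y_additive = lambda x: g(x) + (y0 - g(x0))     # additive form, Equation (2)
y_rational = lambda x: h(x) / h(x0) * y0       # rational form, Equation (3)

# All three expressions pass through [x0, y0], regardless of p, g, h:
for y in (y_linear, y_additive, y_rational):
    assert abs(y(x0) - y0) < 1e-12
```

Changing $p$, $g$, or $h$ changes the function away from $x_0$, but the constraint at $x_0$ always holds.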
It is also possible to combine Equations (1)–(3) to obtain
$$\begin{aligned} y(x) &= p(x)(x - x_0) + \frac{h(x)}{h(x_0)}\, y_0, \\ y(x) &= p(x)(x - x_0) + g(x) + (y_0 - g_0), \\ y(x) &= g(x) + \frac{h(x)}{h(x_0)}\,(y_0 - g_0), \\ y(x) &= p(x)(x - x_0) + g(x) + \frac{h(x)}{h(x_0)}\,(y_0 - g_0). \end{aligned}$$
This study shows how to derive interpolating expressions subject to a variety of linear constraints, such as functions passing through multiple points with assigned derivatives, or subject to multiple relative constraints, as well as periodic functions subject to multiple point constraints. There are many potential applications for these types of expressions. For example, they can be used in optimization problems, with the constraints embedded in the expressions. One application of these constrained expressions is provided in Ref. [6], where least-squares solutions of initial and boundary value problems for linear nonhomogeneous differential equations of any order and with nonconstant coefficients are obtained. These constrained expressions are provided in terms of a new function, $g(x)$, which is completely free to be chosen. In fact, these interpolating expressions are particularly useful when solving linear differential equations because they can be rewritten using functions with embedded constraints (a.k.a. the “subject to” conditions). This has actually been done in Ref. [6], providing a unified least-squares framework to solve initial and boundary value problems for linear differential equations. This approach provides an accuracy gain of several orders of magnitude with respect to the most commonly used numerical integrators.
Particularly important is the fact that Equation (2), like many other equations provided in this study, can be immediately extended to vectors (bold lower case), to matrices (upper case), and to higher-dimensional tensors. Specifically, for vectors and matrices made of independent elements, Equation (2) becomes
$$\mathbf{y}(x) = \mathbf{g}(x) + (\mathbf{y}_0 - \mathbf{g}_0) \qquad \text{and} \qquad Y(x) = G(x) + (Y_0 - G_0),$$
with the only condition being that the vector $\mathbf{g}(x)$ and the matrix $G(x)$ are defined at $x_0$.
The paper is organized by providing, in the following sections, constrained expressions satisfying:
• constraints in one point;
• constraints in two and then in n points;
• multiple linear constraints;
• relative constraints;
• constraints on continuous and discontinuous periodic functions.

## 2. One Constraint in One Point

Functions subject to $y(x_0) = y_0$ can be derived using the general form
$$y(x) = g(x) + \eta\, h(x),$$
where $\eta$ is an unknown coefficient and $g(x)$ and $h(x)$ can be any two linearly independent functions satisfying $g(x_0) = g_0 \neq \infty$ and $h(x_0) = h_0 \neq 0$. The constraint $y(x_0) = y_0$ allows us to derive the expression of $\eta$, giving
$$y(x) = g(x) + \frac{h(x)}{h_0}\,(y_0 - g_0),$$
which is a combination of Equations (2) and (3). In particular, if $g(x) = 0$ or if $h(x) = 1$, we obtain the expressions provided in Equations (3) and (2), respectively.
Equation (4) can be used to obtain interpolating nonlinear functions satisfying the $n$-th derivative constraint, $\left.\dfrac{d^n y}{dx^n}\right|_{x_0} = y_0^{(n)}$, giving
$$y(x) = g(x) + \frac{h(x)}{h_0^{(n)}}\left(y_0^{(n)} - g_0^{(n)}\right),$$
where the $n$-th derivatives of the functions $g(x)$ and $h(x)$ must be continuous at $x_0$, with $h_0^{(n)} \neq 0$ and $g_0^{(n)} \neq \pm\infty$. In particular, if $h(x) = x^n$, the following simple constrained expression,
$$y(x) = g(x) + \frac{x^n}{n!}\left(y_0^{(n)} - g_0^{(n)}\right),$$
is obtained.
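This $n$-th derivative constraint can be checked symbolically; a minimal sketch, in which the point $x_0$, the value $y_0^{(n)}$, and the free function $g(x)$ are all hypothetical choices:

```python
import sympy as sp

x = sp.symbols('x')
n = 3                                   # constrain the third derivative
x0, y0n = sp.Rational(1, 2), 5          # hypothetical x0 and y0^(n)
g = sp.exp(x) * sp.cos(x)               # free function, smooth at x0

g0n = sp.diff(g, x, n).subs(x, x0)      # n-th derivative of g at x0
y = g + x**n / sp.factorial(n) * (y0n - g0n)

# The n-th derivative of y at x0 equals the prescribed value y0^(n):
assert sp.simplify(sp.diff(y, x, n).subs(x, x0) - y0n) == 0
```

The check works for any smooth $g(x)$, since the $n$-th derivative of $x^n/n!$ is identically one.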

#### 2.1. Two Constraints in One Point

Consider two constraints, the $k$-th and the $j$-th derivatives, evaluated at the same coordinate $x_0$; that is, $\left.\dfrac{d^k y}{dx^k}\right|_{x_0} = y_0^{(k)}$ and $\left.\dfrac{d^j y}{dx^j}\right|_{x_0} = y_0^{(j)}$, where $0 \le k < j$. The interpolating expression satisfying these two constraints can be searched for as
$$y(x) = g(x) + \eta_1\, p(x) + \eta_2\, q(x).$$
The two constraints allow the computation of the coefficients $\eta_1$ and $\eta_2$ by solving the system
$$\begin{Bmatrix} y_0^{(k)} - g_0^{(k)} \\ y_0^{(j)} - g_0^{(j)} \end{Bmatrix} = \begin{bmatrix} p_0^{(k)} & q_0^{(k)} \\ p_0^{(j)} & q_0^{(j)} \end{bmatrix} \begin{Bmatrix} \eta_1 \\ \eta_2 \end{Bmatrix},$$
whose solution is
$$\begin{Bmatrix} \eta_1 \\ \eta_2 \end{Bmatrix} = \frac{1}{p_0^{(k)} q_0^{(j)} - q_0^{(k)} p_0^{(j)}} \begin{bmatrix} q_0^{(j)} & -q_0^{(k)} \\ -p_0^{(j)} & p_0^{(k)} \end{bmatrix} \begin{Bmatrix} y_0^{(k)} - g_0^{(k)} \\ y_0^{(j)} - g_0^{(j)} \end{Bmatrix}.$$
This solution exists as long as the $k$-th and $j$-th derivatives of $p(x)$ and $q(x)$ exist at $x_0$ and $p_0^{(k)} q_0^{(j)} \neq q_0^{(k)} p_0^{(j)}$. Under this assumption, the searched-for interpolating expression is
$$y(x) = g(x) + \frac{q_0^{(j)}\, p(x) - p_0^{(j)}\, q(x)}{p_0^{(k)} q_0^{(j)} - q_0^{(k)} p_0^{(j)}} \left(y_0^{(k)} - g_0^{(k)}\right) + \frac{p_0^{(k)}\, q(x) - q_0^{(k)}\, p(x)}{p_0^{(k)} q_0^{(j)} - q_0^{(k)} p_0^{(j)}} \left(y_0^{(j)} - g_0^{(j)}\right).$$
The condition $p_0^{(k)} q_0^{(j)} \neq q_0^{(k)} p_0^{(j)}$ leads to the following considerations. For example, if $k = 0$ and $j = 3$, then $p(x)$ must be (at least) a nonzero constant and $q(x)$ must be (at least) cubic, e.g., $q(x) = x^3$. This requirement can always be satisfied by setting $p(x) = x^k/k!$ and $q(x) = x^j/j!$. In this case, $\eta_1$ and $\eta_2$ are derived from the constraints
$$\begin{Bmatrix} \eta_1 \\ \eta_2 \end{Bmatrix} = \begin{bmatrix} 1 & -\dfrac{x_0^{\,j-k}}{(j-k)!} \\ 0 & 1 \end{bmatrix} \begin{Bmatrix} y_0^{(k)} - g_0^{(k)} \\ y_0^{(j)} - g_0^{(j)} \end{Bmatrix},$$
and the solution simplifies to
$$y(x) = g(x) + \frac{x^k}{k!}\left(y_0^{(k)} - g_0^{(k)}\right) + \left(\frac{x^j}{j!} - \frac{x_0^{\,j-k}}{k!\,(j-k)!}\, x^k\right)\left(y_0^{(j)} - g_0^{(j)}\right).$$
In the common case where $k = 0$ (function) and $j = 1$ (first derivative), Equation (7) becomes
$$y(x) = g(x) + (y_0 - g_0) + (x - x_0)\,(\dot{y}_0 - \dot{g}_0).$$
Note that, by selecting $g(x) = a + bx$, Equation (8) reduces to $y(x) = y_0 + \dot{y}_0 (x - x_0)$, no matter what $a$ and $b$ are. The reason is that, to obtain Equation (8), we implicitly selected $p(x) = 1$ and $q(x) = x$; in order to provide variability to $y(x)$, $g(x)$ must be a function linearly independent from $p(x)$ and $q(x)$. In general, by setting $g(x) = a\,p(x) + b\,q(x)$, we obtain the same $y(x)$ derived from setting $g(x) = 0$. Therefore, any constant and/or linear component of the function $g(x)$ has no effect on $y(x)$.
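Both the constraints and this invariance under linear components of $g(x)$ can be checked symbolically; a sketch, with hypothetical constraint values and a hypothetical free function:

```python
import sympy as sp

x = sp.symbols('x')
x0, y0, yd0 = 1, 2, -3                  # hypothetical constraints y(x0), y'(x0)
g = sp.sin(x) + x**3                    # hypothetical free function

def constrained(g):
    """Function-and-first-derivative constrained expression."""
    g0  = g.subs(x, x0)
    gd0 = sp.diff(g, x).subs(x, x0)
    return g + (y0 - g0) + (x - x0)*(yd0 - gd0)

y = constrained(g)
assert sp.simplify(y.subs(x, x0) - y0) == 0              # y(x0)  = y0
assert sp.simplify(sp.diff(y, x).subs(x, x0) - yd0) == 0 # y'(x0) = yd0

# Adding any linear part a + b*x to g leaves y(x) unchanged:
a, b = sp.symbols('a b')
assert sp.simplify(constrained(g + a + b*x) - y) == 0
```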

#### 2.2. Constraints on Function and First N Derivatives

In this case, the constrained (interpolating) expression of a nonlinear function and its first $n$ derivatives can be written as
$$y(x) = g(x) + \sum_{k=0}^{n} \frac{(x - x_0)^k}{k!} \left.\left(\frac{d^k y}{dx^k} - \frac{d^k g}{dx^k}\right)\right|_{x_0},$$
where $g(x)$ must satisfy $\left.\dfrac{d^n g}{dx^n}\right|_{x_0} \neq \pm\infty$.
Equation (9) satisfies all the constraints, $y(x_0) = y_0$, $\dot{y}(x_0) = \dot{y}_0$, $\ddot{y}(x_0) = \ddot{y}_0$, $\dddot{y}(x_0) = \dddot{y}_0$, and so on, no matter what $g(x)$ is. In particular, for $n = \infty$, Equation (9) becomes the combination of the two Taylor series,
$$y(x) = \sum_{k=0}^{\infty} \left.\frac{d^k y}{dx^k}\right|_{x_0} \frac{(x - x_0)^k}{k!} \qquad \text{and} \qquad g(x) = \sum_{k=0}^{\infty} \left.\frac{d^k g}{dx^k}\right|_{x_0} \frac{(x - x_0)^k}{k!}.$$
Therefore, as $n \to \infty$, the set of potential functions that can be described by Equation (9) converges to the unique function, $y(x)$, defined in Equation (10). This means that, as $n$ increases, the variability of $y(x)$, caused by $g(x)$ variations, decreases. For vectors made of $m$ independent variables, $\mathbf{y}^T(x) = \{y_1(x), y_2(x), \ldots, y_m(x)\}$, subject to $n$ constraints at $x = x_0$, Equation (9) becomes
$$\mathbf{y}(x) = \mathbf{g}(x) + \sum_{k=0}^{n} \frac{(x - x_0)^k}{k!} \left.\left(\frac{d^k \mathbf{y}}{dx^k} - \frac{d^k \mathbf{g}}{dx^k}\right)\right|_{x_0}.$$
Equation (11) can be used to find an optimal $\mathbf{y}(x)$ satisfying all constraints. In this case, the $\mathbf{g}(x)$ vector can be expressed as a linear combination of $m$ basis functions, $\mathbf{h}^T(x) = \{h_1(x), h_2(x), \ldots, h_m(x)\}$,
$$\mathbf{g}(x) = [\boldsymbol{\eta}_1, \boldsymbol{\eta}_2, \ldots, \boldsymbol{\eta}_m]^T \mathbf{h}(x) = \Xi^T \mathbf{h}(x) \qquad \rightarrow \qquad \left.\frac{d^k \mathbf{g}}{dx^k}\right|_{x_0} = \Xi^T \left.\frac{d^k \mathbf{h}}{dx^k}\right|_{x_0},$$
where $\Xi$ is an $m \times n$ matrix of unknown coefficients. In this case, Equation (11) becomes
$$\mathbf{y}(x) = \Xi^T \left[\mathbf{h}(x) - \sum_{k=0}^{n} \frac{(x - x_0)^k}{k!}\, \mathbf{h}_0^{(k)}\right] + \sum_{k=0}^{n} \frac{(x - x_0)^k}{k!}\, \mathbf{y}_0^{(k)},$$
which is an equation linear in the unknown coefficient matrix, $\Xi$.
For vectors made of subsequent derivatives, $\mathbf{y}_d^T(x) = \{y, \dot{y}, \ddot{y}, \dddot{y}, \ldots\}$, where the subscript “d” identifies this specific kind of vector (often appearing in dynamics), as long as $\mathbf{g}_{d0}$ is defined, the interpolating expression is
$$\mathbf{y}_d(x) = \mathbf{g}_d + B(x, x_0)\,(\mathbf{y}_{d0} - \mathbf{g}_{d0}),$$
where the $B(x, x_0)$ matrix is
$$B(x, x_0) = \begin{bmatrix} 1 & (x - x_0) & \dfrac{(x - x_0)^2}{2!} & \dfrac{(x - x_0)^3}{3!} & \dfrac{(x - x_0)^4}{4!} & \cdots \\ 0 & 1 & (x - x_0) & \dfrac{(x - x_0)^2}{2!} & \dfrac{(x - x_0)^3}{3!} & \cdots \\ 0 & 0 & 1 & (x - x_0) & \dfrac{(x - x_0)^2}{2!} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}.$$
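For a truncated vector of a function and its first few derivatives, the $B(x, x_0)$ matrix can be generated programmatically; a minimal sketch, where the evaluation point and matrix size are arbitrary:

```python
import numpy as np
from math import factorial

def B(x, x0, size):
    """Upper-triangular matrix with (i, j) entry (x - x0)**(j - i) / (j - i)!."""
    M = np.zeros((size, size))
    for i in range(size):
        for j in range(i, size):
            M[i, j] = (x - x0)**(j - i) / factorial(j - i)
    return M

# Sanity checks at a hypothetical evaluation point:
M = B(1.7, 0.5, 5)
assert np.allclose(np.diag(M), 1.0)      # unit diagonal
assert np.allclose(M[0, 1], 1.7 - 0.5)   # first superdiagonal is (x - x0)
```

Each row of $B$ is the previous row shifted right, reflecting that the rows interpolate $y, \dot{y}, \ddot{y}, \ldots$ with truncated Taylor expansions.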
For applications, to use this constrained expression (for vectors made of subsequent derivatives), the $\mathbf{g}_d(x)$ vector can be expressed as a linear combination of a set of $m$ basis functions, $\mathbf{h}^T(x) = \{h_1(x), h_2(x), \ldots, h_m(x)\}$,
$$\mathbf{g}_d = [\mathbf{h}(x), \dot{\mathbf{h}}(x), \ddot{\mathbf{h}}(x), \dddot{\mathbf{h}}(x), \ldots]^T \boldsymbol{\xi} = H^T \boldsymbol{\xi},$$
where $\boldsymbol{\xi}$ is a vector of $m$ unknown coefficients that is then found by satisfying the equation(s) of the problem under analysis. Note that, when the vector is made of independent variables, the number of unknown coefficients is $n$ times higher (the $m \times n$ elements of matrix $\Xi$).
It is important to highlight that the intuitive extension
$$\mathbf{y}_d(x) = \mathbf{g}_d + (\mathbf{y}_{d0} - \mathbf{g}_{d0}),$$
where $\mathbf{y}_d = \{y, \dot{y}, \ddot{y}, \ldots\}^T$, is a mistake. In fact, consider just a function and its derivative, $y(x)$ and $\dot{y}(x)$, subject to $y(x_0) = y_0$ and $\dot{y}(x_0) = \dot{y}_0$. The previous vectorial equation in scalar form becomes
$$y(x) = g(x) + (y_0 - g_0) \qquad \text{and} \qquad \dot{y}(x) = \dot{g}(x) + (\dot{y}_0 - \dot{g}_0),$$
and the derivative of the first equation, $\dot{y}(x) = \dot{g}(x)$, contradicts the second equation.

## 3. Constraints in n Points

Waring polynomials [7], better known as “Lagrange polynomials,” are used for polynomial interpolation. This interpolation formula was first discovered in 1779 by Edward Waring, then rediscovered by Euler in 1783, and then published by Lagrange in 1795 [8]. A two-point Waring polynomial is the equation of a line passing through two points, $[x_1, y_1]$ and $[x_2, y_2]$,
$$y(x) = y_1\, \frac{x - x_2}{x_1 - x_2} + y_2\, \frac{x - x_1}{x_2 - x_1},$$
while the general $n$-point Waring polynomial is the unique polynomial of degree $n - 1$ passing through the $n$ points, $[x_k, y_k]$,
$$y(x) = \sum_{k=1}^{n} y_k \prod_{i \neq k} \frac{x - x_i}{x_k - x_i}.$$
Inspired by Waring polynomials, the additive expression of Equation (2) allows us to derive the interpolation of all functions passing through two or more distinct points. Using two points, $[x_1, y_1]$ and $[x_2, y_2]$, the Waring polynomial formula is generalized as
$$y(x) = g(x) + \frac{x - x_2}{x_1 - x_2}\,(y_1 - g_1) + \frac{x - x_1}{x_2 - x_1}\,(y_2 - g_2),$$
where $g(x)$ is a free function subject to the conditions $g(x_1) \neq \pm\infty$ and $g(x_2) \neq \pm\infty$. Again, if $g(x) = a + bx$, then the $g(x)$ contribution to $y(x)$ in Equation (18) disappears, and the expression becomes the classic two-point Waring interpolation form given in Equation (17). This is because Equation (18) is obtained using Equation (6) with $p(x) = 1$ and $q(x) = x$. Therefore, to obtain a nonlinear interpolating function, $y(x)$, we need $g(x) \neq a\,p(x) + b\,q(x)$.
Equation (18) describes all interpolating functions passing through two points. The generalization is immediate. The expression of all functions passing through a set of $n$ points is
$$y(x) = g(x) + \sum_{k=1}^{n} (y_k - g_k) \prod_{i \neq k} \frac{x - x_i}{x_k - x_i},$$
where $g(x)$ can be any function satisfying $g(x_k) \neq \pm\infty$ for $k \in [1, n]$. Equation (19) extends Waring's $n$-point interpolation form to any function. In particular, if $g(x) = 0$, the original formulation of Waring's interpolation form is obtained. This also happens if $g(x) = \sum_{k=0}^{n-1} c_k x^k$, since the product terms exactly reproduce any polynomial of degree up to $n - 1$. In fact, to provide an additional contribution to $y(x)$ in Equation (19), $g(x)$ must contain, at least, a monomial of degree $n$ or higher.
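Equation (19) is straightforward to implement; the sketch below uses hypothetical points and a hypothetical free function $g(x)$, and checks that the resulting $y(x)$ passes through every point:

```python
import numpy as np

def waring_constrained(xk, yk, g):
    """All functions through the points (xk[i], yk[i]), per Equation (19)."""
    xk, yk = np.asarray(xk, float), np.asarray(yk, float)
    def y(x):
        out = g(x)
        for k in range(len(xk)):
            # Lagrange/Waring weight for point k:
            L = np.prod([(x - xk[i]) / (xk[k] - xk[i])
                         for i in range(len(xk)) if i != k])
            out += (yk[k] - g(xk[k])) * L
        return out
    return y

# Hypothetical data and free function:
xk, yk = [0.0, 1.0, 2.5, 4.0], [1.0, -1.0, 0.5, 2.0]
y = waring_constrained(xk, yk, g=lambda x: np.sin(3*x) + np.exp(-x))
assert all(abs(y(x) - v) < 1e-10 for x, v in zip(xk, yk))
```

Swapping in a different $g$ changes $y(x)$ between the points but never at them.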
A time-varying $n$-dimensional vector, $\mathbf{y}$, passing through a set of $m$ points, $[\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_m]$, at corresponding times, $[t_1, t_2, \ldots, t_m]$, can be expressed as
$$\mathbf{y}(t) = \mathbf{g}(t) + \sum_{k=1}^{m} (\mathbf{y}_k - \mathbf{g}_k) \prod_{i \neq k} \frac{t - t_i}{t_k - t_i}.$$
For example, using the five points given in Table 1 and $\mathbf{g}(t) = \{\sin t,\; e^t,\; 1 - t^2\}^T$, the trajectory shown in Figure 1 is obtained.

## 4. Two Constraints in Two Points

Consider a function subject to the $k$-th derivative assigned at $x_1$ and the $j$-th derivative assigned at $x_2$,
$$\left.\frac{d^k y}{dx^k}\right|_{x_1} = y_1^{(k)} \qquad \text{and} \qquad \left.\frac{d^j y}{dx^j}\right|_{x_2} = y_2^{(j)}.$$
The constrained expression can be searched for as
$$y(x) = g(x) + \eta_1\, p(x) + \eta_2\, h(x).$$
The two constraints lead to the solution
$$\begin{Bmatrix} \eta_1 \\ \eta_2 \end{Bmatrix} = \begin{bmatrix} p_1^{(k)} & h_1^{(k)} \\ p_2^{(j)} & h_2^{(j)} \end{bmatrix}^{-1} \begin{Bmatrix} y_1^{(k)} - g_1^{(k)} \\ y_2^{(j)} - g_2^{(j)} \end{Bmatrix}.$$
As long as the four derivatives appearing in the matrix exist, $p(x)$ and $h(x)$ must satisfy $p_1^{(k)} h_2^{(j)} \neq p_2^{(j)} h_1^{(k)}$. A sufficient (but not necessary) condition is to select $p(x) = x^k/k!$ and $h(x) = x^j/j!$.

#### 4.1. Two Derivatives Example

Consider a function subject to $\dot{y}(x_1) = \dot{y}_1$ and $\dot{y}(x_2) = \dot{y}_2$. This function can be expressed as
$$y(x) = g(x) + \eta_1\, x + \eta_2\, x^2,$$
where $\eta_1$ and $\eta_2$ are two constants and $g(x)$ can be any nonlinear function. The two constraints imply solving the system
$$\begin{Bmatrix} \dot{y}_1 - \dot{g}_1 \\ \dot{y}_2 - \dot{g}_2 \end{Bmatrix} = \begin{bmatrix} 1 & 2x_1 \\ 1 & 2x_2 \end{bmatrix} \begin{Bmatrix} \eta_1 \\ \eta_2 \end{Bmatrix},$$
whose solution (which always exists as long as $x_1 \neq x_2$) gives
$$y(x) = g(x) + \frac{x(2x_2 - x)}{2(x_2 - x_1)}\,(\dot{y}_1 - \dot{g}_1) + \frac{x(x - 2x_1)}{2(x_2 - x_1)}\,(\dot{y}_2 - \dot{g}_2).$$
In this case, if $g(x) = bx + cx^2$, the resulting $y(x)$ is unchanged, no matter the values of $b$ and $c$. Again, this means that $g(x)$ must be linearly independent from $p(x)$ and $h(x)$.
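The two slope constraints can be verified symbolically; a sketch with hypothetical values of $x_1$, $x_2$, $\dot{y}_1$, $\dot{y}_2$, and a hypothetical free function $g(x)$:

```python
import sympy as sp

x = sp.symbols('x')
x1, x2 = 0, 2
yd1, yd2 = 1, -1                       # hypothetical slope constraints
g = sp.cos(x) + x**4                   # hypothetical free nonlinear function

gd = sp.diff(g, x)
gd1, gd2 = gd.subs(x, x1), gd.subs(x, x2)
y = (g + x*(2*x2 - x)/(2*(x2 - x1))*(yd1 - gd1)
       + x*(x - 2*x1)/(2*(x2 - x1))*(yd2 - gd2))

# The derivative of y matches both prescribed slopes:
yd = sp.diff(y, x)
assert sp.simplify(yd.subs(x, x1) - yd1) == 0
assert sp.simplify(yd.subs(x, x2) - yd2) == 0
```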

#### 4.2. Four Constraints Example

Consider finding a constrained expression subject to four constraints. Assume that these constraints are specified at the following three distinct points:
$$\left.\frac{d^2 y}{dx^2}\right|_{x_1} = \ddot{y}_{x_1}, \qquad y(x_2) = y_{x_2}, \qquad y(x_3) = y_{x_3}, \qquad \text{and} \qquad \left.\frac{dy}{dx}\right|_{x_3} = \dot{y}_{x_3}.$$
Let us find the constrained expression using monomials,
$$y(x) = g(x) + \eta_1 + \eta_2\, x + \eta_3\, x^2 + \eta_4\, x^3.$$
Using $x_1 = -1$, $x_2 = 0$, and $x_3 = 2$, the constraints imply
$$\begin{Bmatrix} \ddot{y}_{x_1} - \ddot{g}_{x_1} \\ y_{x_2} - g_{x_2} \\ y_{x_3} - g_{x_3} \\ \dot{y}_{x_3} - \dot{g}_{x_3} \end{Bmatrix} = \begin{bmatrix} 0 & 0 & 2 & 6x_1 \\ 1 & x_2 & x_2^2 & x_2^3 \\ 1 & x_3 & x_3^2 & x_3^3 \\ 0 & 1 & 2x_3 & 3x_3^2 \end{bmatrix} \begin{Bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \eta_4 \end{Bmatrix} = \begin{bmatrix} 0 & 0 & 2 & -6 \\ 1 & 0 & 0 & 0 \\ 1 & 2 & 4 & 8 \\ 0 & 1 & 4 & 12 \end{bmatrix} \begin{Bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \eta_4 \end{Bmatrix}.$$
If $\ddot{g}_{x_1}$, $g_{x_2}$, $g_{x_3}$, and $\dot{g}_{x_3}$ exist and the determinant of the matrix is different from zero, the expressions for the four unknown coefficients are
$$\begin{Bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \eta_4 \end{Bmatrix} = \frac{1}{28} \begin{bmatrix} 0 & 28 & 0 & 0 \\ -8 & -24 & 24 & -20 \\ 8 & 3 & -3 & 6 \\ -2 & 1 & -1 & 2 \end{bmatrix} \begin{Bmatrix} \ddot{y}_{x_1} - \ddot{g}_{x_1} \\ y_{x_2} - g_{x_2} \\ y_{x_3} - g_{x_3} \\ \dot{y}_{x_3} - \dot{g}_{x_3} \end{Bmatrix}.$$
Then, substituting these expressions into Equation (22), we obtain the searched-for constrained expression,
$$y(x) = g(x) + \frac{-8x + 8x^2 - 2x^3}{28}\left(\ddot{y}_{x_1} - \ddot{g}_{x_1}\right) + \frac{28 - 24x + 3x^2 + x^3}{28}\left(y_{x_2} - g_{x_2}\right) + \frac{24x - 3x^2 - x^3}{28}\left(y_{x_3} - g_{x_3}\right) + \frac{-20x + 6x^2 + 2x^3}{28}\left(\dot{y}_{x_3} - \dot{g}_{x_3}\right),$$
satisfying all constraints defined in Equation (24). Again, if $g(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3$, the resulting $y(x)$ does not change, no matter what the coefficients $c_0$, $c_1$, $c_2$, and $c_3$ are.
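Rather than hard-coding the inverse, the same constrained expression can be built by solving the $4 \times 4$ system symbolically; in the sketch below, the constraint values and the free function $g(x)$ are hypothetical choices:

```python
import sympy as sp

x = sp.symbols('x')
x1, x2, x3 = -1, 0, 2
c = sp.Matrix([2, 1, -1, 3])          # hypothetical y''(x1), y(x2), y(x3), y'(x3)
g = sp.sin(x) * sp.exp(x)             # hypothetical free function

basis = sp.Matrix([1, x, x**2, x**3])
# Rows = constraints applied to each basis monomial:
A = sp.Matrix([
    [sp.diff(b, x, 2).subs(x, x1) for b in basis],
    [b.subs(x, x2) for b in basis],
    [b.subs(x, x3) for b in basis],
    [sp.diff(b, x).subs(x, x3) for b in basis],
])
rhs = sp.Matrix([
    c[0] - sp.diff(g, x, 2).subs(x, x1),
    c[1] - g.subs(x, x2),
    c[2] - g.subs(x, x3),
    c[3] - sp.diff(g, x).subs(x, x3),
])
eta = A.LUsolve(rhs)
y = g + (basis.T * eta)[0]

# All four constraints are satisfied:
assert sp.simplify(sp.diff(y, x, 2).subs(x, x1) - c[0]) == 0
assert sp.simplify(y.subs(x, x2) - c[1]) == 0
assert sp.simplify(y.subs(x, x3) - c[2]) == 0
assert sp.simplify(sp.diff(y, x).subs(x, x3) - c[3]) == 0
```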

#### 4.3. Issues Using Monomials

The solution to the general problem of $n$ constraints on $m$ points exists as long as the function $g(x)$ is defined at the constraints' conditions and the matrix used to derive the $\eta_k$ coefficients is not singular. For instance, using the following constraints,
$$\left.\frac{d^3 y}{dx^3}\right|_{x_1} = y_{x_1}^{(3)}, \qquad y(x_2) = y_{x_2}, \qquad \left.\frac{d^3 y}{dx^3}\right|_{x_2} = y_{x_2}^{(3)}, \qquad \text{and} \qquad \left.\frac{d^3 y}{dx^3}\right|_{x_3} = y_{x_3}^{(3)},$$
the monomial formalism of Equation (22) cannot be adopted because the corresponding coefficient matrix
$$\begin{bmatrix} 0 & 0 & 0 & 6 \\ 1 & x_2 & x_2^2 & x_2^3 \\ 0 & 0 & 0 & 6 \\ 0 & 0 & 0 & 6 \end{bmatrix}$$
has rank 2 and cannot be inverted. In this case, the minimal monomial expression that can be used in Equation (22) is $y(x) = g(x) + \eta_1 + \eta_2 x^3 + \eta_3 x^4 + \eta_4 x^5$. Again, adopting this expression does not guarantee a solution because the determinant of the matrix to invert also depends on the values of $x_1$, $x_2$, and $x_3$. The use of monomials (which usually leads to simple constrained expressions) can still be adopted for most simple cases. However, to reduce the risk of a singular coefficient matrix, the selection of sets of infinitely differentiable functions, such as exponential, logarithmic, trigonometric, and rational functions, is suggested.
In the next section an alternative method to derive constrained expressions is provided for the general case of functions subject to n constraints on $m ≤ n$ points.

## 5. Coefficients Functions of Constrained Expressions

Consider the general case of a function with $n$ distinct constraints, $\left.\dfrac{d^{d_k} y}{dx^{d_k}}\right|_{x_k} = y_{x_k}^{(d_k)}$, where the $n$-element vector $\mathbf{d}$ contains the constraints' derivative orders and the $n$-element vector $\mathbf{x}$ indicates where the constraints are specified. For instance, for the constraints specified in Equation (24), these vectors are $\mathbf{d} = \{2, 0, 0, 1\}^T$ and $\mathbf{x} = \{-1, 0, 2, 2\}^T$, respectively.
In this section, we develop an alternative approach to derive constrained expressions. This is motivated by the fact that Equations (7), (18)–(20) and (23) all share the same formalism: a number of terms equal to the number of constraints, each expressed as $\left.\left(\dfrac{d^{d_k} y}{dx^{d_k}} - \dfrac{d^{d_k} g}{dx^{d_k}}\right)\right|_{x_k}$ and multiplied by a coefficient function that assumes the value one when evaluated at its own constraint's coordinate, while all the remaining coefficient functions are zero there, and vice versa. This property suggests that the constrained expressions should have the following formal structure:
$$y(x) = g(x) + \sum_{k=1}^{n} \beta_k(x, \mathbf{x}) \left(y_{x_k}^{(d_k)} - g_{x_k}^{(d_k)}\right), \qquad \text{where} \qquad \beta_i^{(d_k)}(x_k, \mathbf{x}) = \delta_{ki},$$
where $\delta_{ki}$ is the Kronecker delta and $n$ is the number of constraints. The coefficient functions, $\beta_k(x, \mathbf{x})$, are such that, when the $k$-th constraint, $\left.\dfrac{d^{d_k} y}{dx^{d_k}}\right|_{x_k} = y_{x_k}^{(d_k)}$, is verified, $\beta_k^{(d_k)}(x_k, \mathbf{x}) = 1$, while all the other coefficient functions satisfy $\beta_i^{(d_k)}(x_k, \mathbf{x}) = 0$ for $i \neq k$. Given a set of constraints, this property allows us to derive the $\beta_k(x, \mathbf{x})$ expressions, as explained in the next section.

#### Coefficient Functions Derivation

As the number of constraints ($n$) increases, the approach previously proposed (using monomials) to find constrained expressions becomes more complicated, always with the risk of obtaining a singular matrix when computing the coefficients, $\eta_k$. For this reason, this section proposes a new procedure to compute all the $\beta_k$ functions at once, provided that the $\beta_k$ functions are expressed as a scalar product involving a set of $n$ linearly independent functions contained in the vector $\mathbf{h}(x) = \{h_1(x), h_2(x), \ldots, h_n(x)\}^T$,
$$\beta_k(x) = \mathbf{h}^T(x)\, \boldsymbol{\eta}_k.$$
The functions $h_k(x)$ must be defined at the constraints' conditions (derivatives and locations). A sufficient condition is to use nonsingular, infinitely differentiable functions.
Let us derive the $\beta_k(x)$ coefficient functions for the following example with $n = 4$ constraints:
$$y(x_1) = y_1, \qquad \dddot{y}(x_1) = \dddot{y}_1, \qquad \dot{y}(x_2) = \dot{y}_2, \qquad \text{and} \qquad y(x_3) = y_3.$$
To compute the function $\beta_1(x)$ associated with the first constraint, $y(x_1) = y_1$, the following relationships
$$\beta_1(x_1) = \mathbf{h}_1^T \boldsymbol{\eta}_1 = 1, \qquad \dddot{\beta}_1(x_1) = \dddot{\mathbf{h}}_1^T \boldsymbol{\eta}_1 = 0, \qquad \dot{\beta}_1(x_2) = \dot{\mathbf{h}}_2^T \boldsymbol{\eta}_1 = 0, \qquad \text{and} \qquad \beta_1(x_3) = \mathbf{h}_3^T \boldsymbol{\eta}_1 = 0$$
must be satisfied. Selecting, for instance, $\mathbf{h}(x) = \{e^x, \sin x, \ln x, x^{-1}\}^T$, these relationships can be put in matrix form,
$$\begin{bmatrix} \mathbf{h}_1^T \\ \dddot{\mathbf{h}}_1^T \\ \dot{\mathbf{h}}_2^T \\ \mathbf{h}_3^T \end{bmatrix} \boldsymbol{\eta}_1 = \begin{bmatrix} e^{x_1} & \sin x_1 & \ln x_1 & x_1^{-1} \\ e^{x_1} & -\cos x_1 & 2\,x_1^{-3} & -6\,x_1^{-4} \\ e^{x_2} & \cos x_2 & x_2^{-1} & -x_2^{-2} \\ e^{x_3} & \sin x_3 & \ln x_3 & x_3^{-1} \end{bmatrix} \boldsymbol{\eta}_1 = \begin{Bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{Bmatrix},$$
allowing the coefficients vector, $\boldsymbol{\eta}_1$, of the $\beta_1(x)$ function to be obtained by matrix inversion. The other coefficients vectors, $\boldsymbol{\eta}_i$ for $i = 2, 3, 4$, can be computed analogously, obtaining the following final system:
$$H(\mathbf{x})\, \Xi = \begin{bmatrix} \mathbf{h}_1^T \\ \dddot{\mathbf{h}}_1^T \\ \dot{\mathbf{h}}_2^T \\ \mathbf{h}_3^T \end{bmatrix} \left[\boldsymbol{\eta}_1, \boldsymbol{\eta}_2, \boldsymbol{\eta}_3, \boldsymbol{\eta}_4\right] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = I_{4 \times 4}.$$
Therefore, the inversion of the $H(\mathbf{x})$ matrix provides all the $\boldsymbol{\eta}_k$ coefficient vectors,
$$\Xi(\mathbf{x}) = \left[\boldsymbol{\eta}_1, \boldsymbol{\eta}_2, \boldsymbol{\eta}_3, \boldsymbol{\eta}_4\right] = H^{-1}(\mathbf{x}),$$
and the $\beta_k(x, \mathbf{x})$ coefficient functions are
$$\left[\beta_1(x, \mathbf{x}),\; \beta_2(x, \mathbf{x}),\; \beta_3(x, \mathbf{x}),\; \beta_4(x, \mathbf{x})\right] = \mathbf{h}^T(x)\, \Xi(\mathbf{x}) = \mathbf{h}^T(x)\, H^{-1}(\mathbf{x}).$$
This method also fails if the $H(\mathbf{x})$ matrix is singular. In this case, different $\mathbf{h}(x)$ basis function vectors must be selected until an invertible $H(\mathbf{x})$ is obtained. Because of this, this alternative approach does not look more attractive or preferable than the approach previously described. What makes this approach interesting is that it allows an easy generalization to more general constraints, such as linear combinations of functions and derivatives. This is shown in the next subsection, starting from solving the simple case of relative constraints, still using the previous approach.
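A numerical sketch of this procedure for the four constraints above, using the basis $\mathbf{h}(x) = \{e^x, \sin x, \ln x, x^{-1}\}^T$ at hypothetical points $x_1 = 0.5$, $x_2 = 1$, $x_3 = 2$ (chosen so that $\ln x$ and $x^{-1}$ are defined), and verifying the Kronecker property of the resulting $\beta_k$ functions:

```python
import numpy as np

# Basis h(x) = {e^x, sin x, ln x, 1/x} and its derivatives up to order 3,
# in closed form (keyed by derivative order):
def h_der(xv, d):
    table = {
        0: [np.exp(xv),  np.sin(xv),  np.log(xv),    xv**-1],
        1: [np.exp(xv),  np.cos(xv),  xv**-1,       -xv**-2],
        2: [np.exp(xv), -np.sin(xv), -xv**-2,      2*xv**-3],
        3: [np.exp(xv), -np.cos(xv), 2*xv**-3,    -6*xv**-4],
    }
    return np.array(table[d])

# Hypothetical constraint set: y(x1), y'''(x1), y'(x2), y(x3)
x1, x2, x3 = 0.5, 1.0, 2.0
rows = [(0, x1), (3, x1), (1, x2), (0, x3)]     # (derivative order, point)

H = np.array([h_der(pt, d) for d, pt in rows])  # constraint matrix H(x)
Xi = np.linalg.inv(H)                           # columns are the eta_k vectors

def beta_der(xv, d):
    """d-th derivatives of [beta_1, ..., beta_4] evaluated at xv."""
    return h_der(xv, d) @ Xi

# Kronecker property: beta_i^(d_k)(x_k) = delta_ki
for k, (d, pt) in enumerate(rows):
    assert np.allclose(beta_der(pt, d), np.eye(4)[k], atol=1e-8)
```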

## 6. Relative Constraints

Sometimes, constraints are not defined in an absolute way (e.g., $\dot{y}(2) = 1$) but in a relative way, as $\ddot{y}(0) = y(1)$. Constrained expressions can also be derived for relative constraints. In general, a relative constraint can be written as
$$y_{x_i}^{(d_i)} = y_{x_j}^{(d_j)}, \qquad \text{where if } x_i = x_j \text{ then } d_i \neq d_j.$$
To give an example, consider the problem of finding an expression satisfying the two relative constraints,
$$y_1 = y_2 \qquad \text{and} \qquad \dot{y}_1 = \dot{y}_2.$$
The constrained expression can be searched for as
$y ( x ) = g ( x ) + η p p ( x ) + η r r ( x ) ,$
where $p ( x )$ and $r ( x )$ are two assigned functions and $η p$ and $η r$ two unknown coefficients. The two relative constraints imply solving the system,
$g 2 − g 1 g ˙ 2 − g ˙ 1 = p 1 − p 2 r 1 − r 2 p ˙ 1 − p ˙ 2 r ˙ 1 − r ˙ 2 η p η r ,$
to obtain the unknown coefficients, $η p$ and $η r$. This linear problem admits a solution if the matrix is not singular and if the selected functions, $p ( x )$ and $r ( x )$, are defined at the constraints’ conditions. Matrix singularity indeed occurs if, for instance, $p ( x ) = 1$ and $r ( x ) = x$, as previously highlighted. This means that the case of function and derivative relative constraints is different from the case of function and derivative absolute constraints. In this relative constraint case, in addition to spanning two independent function spaces, the functions $p ( x )$ and $r ( x )$ must admit, at least, a nonzero first derivative. Therefore, using monomials, a potential constrained expression can be searched for as,
$$y(x) = g(x) + \eta_p\, x + \eta_r\, x^2.$$
The two constraints give the system
$$\begin{Bmatrix} g_2 - g_1 \\ \dot{g}_2 - \dot{g}_1 \end{Bmatrix} = \begin{bmatrix} x_1 - x_2 & x_1^2 - x_2^2 \\ 0 & 2(x_1 - x_2) \end{bmatrix} \begin{Bmatrix} \eta_p \\ \eta_r \end{Bmatrix},$$
whose solution exists as long as $x_1 \neq x_2$, giving
$$y(x) = g(x) + \frac{x}{x_1 - x_2}\,(g_2 - g_1) + \frac{x^2 - x(x_1 + x_2)}{2(x_1 - x_2)}\,(\dot{g}_2 - \dot{g}_1),$$
which has, again, the formal expression of Equation (24). This expression provides all functions satisfying the two relative constraints.
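The relative-constraint expression above can be verified symbolically for a hypothetical free function $g(x)$ and hypothetical points:

```python
import sympy as sp

x = sp.symbols('x')
x1, x2 = -1, 3                               # hypothetical points
g = sp.exp(x) + sp.sin(2*x)                  # hypothetical free function
g1, g2 = g.subs(x, x1), g.subs(x, x2)
gd = sp.diff(g, x)
gd1, gd2 = gd.subs(x, x1), gd.subs(x, x2)

y = (g + x/(x1 - x2)*(g2 - g1)
       + (x**2 - x*(x1 + x2))/(2*(x1 - x2))*(gd2 - gd1))

# Both relative constraints hold: y(x1) = y(x2) and y'(x1) = y'(x2)
yd = sp.diff(y, x)
assert sp.simplify(y.subs(x, x1) - y.subs(x, x2)) == 0
assert sp.simplify(yd.subs(x, x1) - yd.subs(x, x2)) == 0
```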

#### 6.1. Coefficient Functions Derivation for Linear Combinations of Relative Constraints

Consider searching for the constrained expression as
$$y(x) = g(x) + \boldsymbol{\eta}^T \mathbf{h}(x).$$
A set of $n$ general linear constraints (this includes absolute and relative constraints) can be expressed by the $n$ linear equations
$$c_k = \boldsymbol{\alpha}_k^T \mathbf{y}_{x_k}^{(d_k)} = \boldsymbol{\alpha}_k^T \mathbf{g}_{x_k}^{(d_k)} + \boldsymbol{\alpha}_k^T H_{x_k}^{(d_k)} \boldsymbol{\eta}, \qquad \text{where} \qquad k = 1, \ldots, n,$$
and where $\boldsymbol{\eta}$ and $\boldsymbol{\alpha}_k$ are vectors of unknown and known coefficients, respectively. For instance, for the $k$-th constraint, $3 = 2y(x_1) - \pi\,\ddot{y}(x_3)$, we have $c_k = 3$, $\boldsymbol{\alpha}_k = \{2, -\pi\}^T$, $\mathbf{d}_k = \{0, 2\}^T$, $\mathbf{x}_k = \{x_1, x_3\}^T$, and $\mathbf{y}_{x_k}^{(d_k)} = \{y_1, \ddot{y}_3\}^T$. Using a set of $n$ independent basis functions (the size of vectors $\boldsymbol{\eta}$ and $\mathbf{h}$), the matrix $H_{x_k}^{(d_k)}$ has the expression
$$H_{x_k}^{(d_k)} = \left[\mathbf{h}_{x_{k1}}^{(d_{k1})}, \ldots, \mathbf{h}_{x_{kn}}^{(d_{kn})}\right]^T.$$
Then, the unknown vector $\boldsymbol{\eta}$ can be computed from the constraints' equations,
$$\begin{Bmatrix} c_1 \\ \vdots \\ c_n \end{Bmatrix} = \begin{Bmatrix} \boldsymbol{\alpha}_1^T \mathbf{g}_{x_1}^{(d_1)} \\ \vdots \\ \boldsymbol{\alpha}_n^T \mathbf{g}_{x_n}^{(d_n)} \end{Bmatrix} + \begin{bmatrix} \boldsymbol{\alpha}_1^T H_{x_1}^{(d_1)} \\ \vdots \\ \boldsymbol{\alpha}_n^T H_{x_n}^{(d_n)} \end{bmatrix} \begin{Bmatrix} \eta_1 \\ \vdots \\ \eta_n \end{Bmatrix},$$
from which the solution is
$$\begin{Bmatrix} \eta_1 \\ \vdots \\ \eta_n \end{Bmatrix} = \begin{bmatrix} \boldsymbol{\alpha}_1^T H_{x_1}^{(d_1)} \\ \vdots \\ \boldsymbol{\alpha}_n^T H_{x_n}^{(d_n)} \end{bmatrix}^{-1} \begin{Bmatrix} c_1 - \boldsymbol{\alpha}_1^T \mathbf{g}_{x_1}^{(d_1)} \\ \vdots \\ c_n - \boldsymbol{\alpha}_n^T \mathbf{g}_{x_n}^{(d_n)} \end{Bmatrix}.$$
Substituting in Equation (25),
$$y(x) = g(x) + \mathbf{h}^T(x) \begin{bmatrix} \boldsymbol{\alpha}_1^T H_{x_1}^{(d_1)} \\ \vdots \\ \boldsymbol{\alpha}_n^T H_{x_n}^{(d_n)} \end{bmatrix}^{-1} \begin{Bmatrix} c_1 - \boldsymbol{\alpha}_1^T \mathbf{g}_{x_1}^{(d_1)} \\ \vdots \\ c_n - \boldsymbol{\alpha}_n^T \mathbf{g}_{x_n}^{(d_n)} \end{Bmatrix} = g(x) + \boldsymbol{\beta}^T(x, \mathbf{x}) \begin{Bmatrix} c_1 - \boldsymbol{\alpha}_1^T \mathbf{g}_{x_1}^{(d_1)} \\ \vdots \\ c_n - \boldsymbol{\alpha}_n^T \mathbf{g}_{x_n}^{(d_n)} \end{Bmatrix},$$
we obtain the expressions for the $\beta_k(x, \mathbf{x})$ functions,
$$\boldsymbol{\beta}^T(x, \mathbf{x}) = \mathbf{h}^T(x) \begin{bmatrix} \boldsymbol{\alpha}_1^T H_{x_1}^{(d_1)} \\ \vdots \\ \boldsymbol{\alpha}_n^T H_{x_n}^{(d_n)} \end{bmatrix}^{-1},$$
an expression that is easy to derive for a given basis functions vector, $\mathbf{h}(x)$, and a given set of linear constraints.
Therefore, the case of general linear constraints can be expressed as
$$y(x) = g(x) + \sum_{k=1}^{n} \beta_k(x, \mathbf{x}) \left(c_k - \boldsymbol{\alpha}_k^T \mathbf{g}_{x_k}^{(d_k)}\right), \qquad \text{where} \qquad \beta_k^{(d_k)}(x_i, \mathbf{x}) = \delta_{ki}.$$
This equation generalizes Equation (24), where the $\beta_k(x, \mathbf{x})$ functions now multiply the linear constraints written in terms of the function $g(x)$.

#### 6.2. Example of Two Linear Constraints

Let us give a numerical example. Consider finding all functions satisfying the following two linear constraints:
$$3 = 2\,y(x_1) - \pi\,\ddot{y}(x_3) \qquad \text{and} \qquad \pi = e\,\dot{y}(x_1) + y(x_2) - 3\,\dot{y}(x_3),$$
for $x_1 = -1$, $x_2 = 1$, and $x_3 = 2$. In this case, we have
$$c_1 = 3, \quad \boldsymbol{\alpha}_1 = \{2, -\pi\}^T, \quad \mathbf{d}_1 = \{0, 2\}^T, \quad \mathbf{x}_1 = \{-1, 2\}^T \qquad \text{and} \qquad c_2 = \pi, \quad \boldsymbol{\alpha}_2 = \{e, 1, -3\}^T, \quad \mathbf{d}_2 = \{1, 0, 1\}^T, \quad \mathbf{x}_2 = \{-1, 1, 2\}^T.$$
Consider the basis functions selection
$$\mathbf{h} = \{e^x, \sin x\}^T \quad \rightarrow \quad \dot{\mathbf{h}} = \{e^x, \cos x\}^T \quad \rightarrow \quad \ddot{\mathbf{h}} = \{e^x, -\sin x\}^T.$$
Then, the coefficient functions are
$β T = h T ( x ) α 1 T H x 1 ( d 1 ) α 2 T H x 2 ( d 2 ) − 1 ,$
where
$H x 1 ( d 1 ) = e − 1 sin ( − 1 ) e 2 − sin ( 2 ) and H x 2 ( d 2 ) = e − 1 cos ( − 1 ) e 1 sin ( 1 ) e 2 cos ( 2 ) .$
That is,
$β T = { e x , sin x } { 2 , − π } T e − 1 sin ( − 1 ) e 2 − sin ( 2 ) { e , 1 , − 3 } T e − 1 cos ( − 1 ) e 1 sin ( 1 ) e 2 cos ( 2 ) − 1 ,$
and, finally,
$\beta^{\mathrm T} = \{ e^x, \sin x \} \begin{bmatrix} 2 e^{-1} - \pi e^{2} & 2 \sin(-1) + \pi \sin(2) \\ 1 + e - 3 e^{2} & e \cos(-1) + \sin(1) - 3 \cos(2) \end{bmatrix}^{-1} \approx \{ e^x, \sin x \} \begin{bmatrix} -0.0610 & 0.0201 \\ -0.3163 & 0.3853 \end{bmatrix}.$
Therefore, all explicit functions satisfying the constraints given in Equation (28) can be expressed by
$y(x) = g(x) - (0.0610\, e^x + 0.3163 \sin x)(3 - 2 g_1 + \pi \ddot g_3) + (0.0201\, e^x + 0.3853 \sin x)(\pi - e \dot g_1 - g_2 + 3 \dot g_3),$
where $g(x)$ can be freely chosen as long as $g_1$, $\dot g_1$, $g_2$, $\dot g_3$, and $\ddot g_3$ exist.
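As a check, the matrix inversion above can be reproduced numerically. The following NumPy sketch (variable names are mine) rebuilds the $2 \times 2$ matrix from the constraint definitions and verifies that, choosing the free function $g(x) = 0$, the resulting expression satisfies both constraints of Equation (28).

```python
import numpy as np

# Basis h = {e^x, sin x} and its first two derivatives, from Eq. (29).
h   = lambda x: np.array([np.exp(x), np.sin(x)])
hd  = lambda x: np.array([np.exp(x), np.cos(x)])
hdd = lambda x: np.array([np.exp(x), -np.sin(x)])

# Rows alpha_k^T H_{x_k}^{(d_k)} for the two constraints:
#   3  = 2 y(-1) - pi y''(2)   and   pi = e y'(-1) + y(1) - 3 y'(2)
M = np.array([2*h(-1) - np.pi*hdd(2),
              np.e*hd(-1) + h(1) - 3*hd(2)])
B = np.linalg.inv(M)      # beta^T(x) = h^T(x) @ B
print(np.round(B, 4))     # ~ [[-0.0610, 0.0201], [-0.3163, 0.3853]]

# With g(x) = 0 the constrained expression reduces to
# y(x) = h^T(x) B {3, pi}; both constraints must then hold exactly.
c   = np.array([3.0, np.pi])
y   = lambda x: h(x) @ B @ c
yd  = lambda x: hd(x) @ B @ c
ydd = lambda x: hdd(x) @ B @ c
assert np.isclose(2*y(-1) - np.pi*ydd(2), 3.0)
assert np.isclose(np.e*yd(-1) + y(1) - 3*yd(2), np.pi)
```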

## 7. Periodic Functions

Periodic functions, with assigned period T, can be provided in continuous and discontinuous forms,
$y_c(x) = g[\Psi(x - \delta x_c, T)] \quad (\text{continuous}), \qquad y_d(x) = g[(x - \delta x_d) \bmod T] \quad (\text{discontinuous}),$
where $Ψ ( x − δ x c , T )$ can be any analytic periodic function with period T (e.g., trigonometric functions), $δ x c$ and $δ x d$ are shifting parameters, and $g ( • )$ can be any function.
All periodic functions passing through the point $[x_k, y_k]$ can be expressed using the additive form given in Equation (2), $y(x) = g(x) + (y_k - g_k)$, that is,
$y_{c_k}(x) = g[\Psi(x - \delta x_c, T)] + \big\{ y_k - g[\Psi(x_k - \delta x_c, T)] \big\} \quad (\text{continuous}),$
$y_{d_k}(x) = g[(x - \delta x_d) \bmod T] + \big\{ y_k - g[(x_k - \delta x_d) \bmod T] \big\} \quad (\text{discontinuous}).$
The expressions provided in Equation (32) can be used along with the general Waring polynomial form given in Equation (18) to obtain all periodic functions passing through a set of n points. In this way, the periodicity rides on a line ($n = 2$), a quadratic ($n = 3$), a cubic ($n = 4$), and so on.
For instance, using two points, $[ − 0.7 , − 0.1 ]$ and $[ 1.7 , 0.2 ]$ (indicated by red markers in all three plots of Figure 2), we can obtain the continuous (black curves)
$y_c(x) = \frac{x - x_2}{x_1 - x_2}\, y_{c_1}(x) + \frac{x - x_1}{x_2 - x_1}\, y_{c_2}(x)$
and the discontinuous (blue curves)
$y_d(x) = \frac{x - x_2}{x_1 - x_2}\, y_{d_1}(x) + \frac{x - x_1}{x_2 - x_1}\, y_{d_2}(x)$
for the three free functions $g(x) = \cos(7x)$, $g(x) = 1 - e^x$, and $g(x) = 2 + 3x^3$, respectively. These plots clearly show how the periodicity rides on a line.
Using n points and the general Waring interpolation form, continuous or discontinuous periodic functions riding on polynomials of degree $n - 1$,
$y_c(x) = \sum_{k=1}^{n} y_{c_k}(x) \prod_{\substack{i \in [1, n] \\ i \ne k}} \frac{x - x_i}{x_k - x_i} \quad \text{and} \quad y_d(x) = \sum_{k=1}^{n} y_{d_k}(x) \prod_{\substack{i \in [1, n] \\ i \ne k}} \frac{x - x_i}{x_k - x_i},$
can be obtained, where the n functions $y_{c_k}(x)$ and $y_{d_k}(x)$ are those given in Equation (32).
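The periodic Waring construction above can be sketched numerically. The following NumPy snippet implements the discontinuous form for the two points of Figure 2; the period $T = 1$, the zero shift, and $g(x) = \cos(7x)$ are illustrative assumptions (the text does not fix $T$ or $\delta x_d$). It checks both the interpolation conditions and the "periodicity over a line" behavior.

```python
import numpy as np

# Illustrative choices: period, shift, and free function (first panel of Figure 2).
T, dxd = 1.0, 0.0
g = lambda x: np.cos(7.0*x)
p = lambda x: g((x - dxd) % T)            # T-periodic building block

pts = [(-0.7, -0.1), (1.7, 0.2)]          # the two assigned points

def lagrange(k, x):
    # Waring/Lagrange cardinal polynomial: equals 1 at x_k, 0 at the other nodes.
    xk = pts[k][0]
    return np.prod([(x - xi) / (xk - xi)
                    for i, (xi, _) in enumerate(pts) if i != k])

def y_d(x):
    # Blend the single-point periodic interpolants y_dk of Equation (32).
    return sum(lagrange(k, x) * (p(x) + yk - p(xk))
               for k, (xk, yk) in enumerate(pts))

# The interpolant passes through both assigned points ...
for xk, yk in pts:
    assert np.isclose(y_d(xk), yk)

# ... and "periodicity over a line": y_d(x) = p(x) + line(x), so the change
# over one period, y_d(x + T) - y_d(x), is the same everywhere (slope * T).
drift = lambda x: y_d(x + T) - y_d(x)
assert np.isclose(drift(0.0), drift(0.8))
```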

## 8. Conclusions

This study shows how to derive analytical constrained expressions representing all functions subject to a set of assigned linear constraints. These expressions are not unique and are defined in terms of a free function, $g(x)$, and its derivatives, evaluated at the constraints' coordinates. Constrained expressions can be derived for relative constraints as well as for linear constraints expressed as linear combinations of function values and derivatives at specific coordinates. Finally, the theory has been applied to periodic functions, continuous (using periodic functions) and discontinuous (using modular arithmetic), with an assigned period and required to pass through a set of n points.
Applications of the presented theory will be the subject of future papers. Potential applications include solving differential equations, optimization, optimal control, path planning, the calculus of variations, and other problems whose solution is a function subject to linear constraints and satisfying some optimality criterion. Because the linear constraints are embedded in the expressions, the constraint conditions are removed from the mathematical problem: the search space is restricted to the functions that already satisfy the constraints. An application already developed is solving linear nonhomogeneous differential equations with nonconstant coefficients for initial, boundary, and mixed value problems; Ref. [6] shows how least-squares solutions can be obtained for these problems using constrained expressions.
A natural extension of this theory, currently in progress, is a "Theory of Connections" for functions and differential equations (instead of points). This theory would describe all possible morphings from one function to another and from one differential equation to another. An important application of this kind of transformation is modeling phase and property transitions (laminar/turbulent flows, subsonic/supersonic regimes, properties of materials at different scales, etc.), with the goal of finding the best transformation that fits experimental data.

## Acknowledgments

The author would like to thank Hunter Johnston for checking and improving the use of English in the manuscript.

## Conflicts of Interest

The author declares no conflict of interest.

## References

1. Lam, N.S.-N. Spatial Interpolation Methods: A Review. Am. Cartogr. 1983, 10, 129–150.
2. Li, J.; Heap, A.D. A Review of Spatial Interpolation Methods for Environmental Scientists; Geoscience Australia: Canberra, Australia, 2008; pp. 137–145.
3. Lehmann, T.M.; Gonner, C.; Spitzer, K. Survey: Interpolation Methods in Medical Image Processing. IEEE Trans. Med. Imaging 1999, 18, 1049–1075.
4. Hoffman, D.K.; Wei, G.W.; Zhang, D.S.; Kouri, D.J. Interpolating Distributed Approximating Functionals. Phys. Rev. E 1998, 57, 6152–6160.
5. Wei, D.; Wang, H.; Kouri, D.J.; Papadakis, M.; Kakadiaris, I.A.; Hoffman, D.K. On the Mathematical Properties of Distributed Approximating Functionals. J. Math. Chem. 2001, 30, 83–107.
6. Mortari, D. Least-Squares Solutions of Linear Differential Equations. In Proceedings of the 2017 AAS/AIAA Space Flight Mechanics Meeting, San Antonio, TX, USA, 5–9 February 2017; AAS 17-255.
7. Waring, E. Problems Concerning Interpolations. Philos. Trans. R. Soc. 1779, 69, 59–67.
8. Hazewinkel, M. (Ed.) Waring Interpolation Formula. In Encyclopedia of Mathematics; Springer: New York, NY, USA, 2001; ISBN 978-1-55608-010-4.
Figure 1. Different views of the nonlinear interpolating trajectory obtained using the data given in Table 1. (a): x-y projection, (b): x-z projection, (c): y-z projection, and (d): axonometric view.
Figure 2. Examples of continuous and discontinuous periodic functions for different free functions, $g ( x )$. (a): $g ( x ) = cos ( 7 x )$, (b): $g ( x ) = 1 − e x$, and (c): $g ( x ) = 2 + 3 x 3$.
Table 1. Selected points' coordinates.

|       | $y_1$ | $y_2$ | $y_3$ | $y_4$ | $y_5$ |
|-------|-------|-------|-------|-------|-------|
| $t$   | 1     | 2     | 3     | 4     | 5     |
| $x$   | 2     | 0     | $-1$  | 1     | 1     |
| $y$   | 1     | 2     | 0     | $-1$  | 1     |
| $z$   | 2     | 1     | 2     | 0     | $-1$  |
