The Theory of Connections. Connecting Points

This study introduces a procedure to obtain general expressions, $y = f(x)$, subject to linear constraints on the function and its derivatives defined at specified values. These constrained expressions can be used to describe functions with specific embedded constraints. The paper first shows how to express the most general explicit function passing through a single point in three distinct ways: linear, additive, and rational. Then, functions with constraints on one, two, or multiple points are introduced, as well as those satisfying relative constraints. This capability makes it possible to obtain general expressions to solve linear differential equations with no need to separately satisfy the constraints (the "subject to:" conditions), as the constraints are already embedded in the constrained expression. In particular, for expressions passing through a set of points, a generalization of Waring's interpolation form is introduced. The general form of additive constrained expressions is presented, along with a procedure to derive its coefficient functions that requires the inversion of a matrix whose dimension equals the number of constraints.


I. INTRODUCTION
This study introduces some special expressions, called constrained expressions, representing all nonlinear functions subject to a set of linear constraints. These constraints are assigned at specified values of the independent variable (x) in terms of the function and/or its derivatives. The general expression of any (nonlinear) function subject to passing through one point, two points, or multiple points, with single or multiple constraints, is provided. An application of these constrained expressions is given in Ref. [1], where it has been shown how to obtain least-squares solutions of linear nonhomogeneous differential equations of any order with nonconstant coefficients. This is done for both initial and boundary value problems. Additional applications are in optimization as well as in path planning, just to name two. These constrained expressions can be derived using any set of functions spanning different function spaces. The resulting expression is provided in terms of a new (nonlinear) function, g(x), which is completely free to choose.
To show the simplest example of a constrained expression, consider the following. The problem of writing the equation of all linear functions passing through a specified point, [x_1, y_1], is straightforward: y(x) = m (x − x_1) + y_1, with the line slope, m, free to choose. The focus of this paper is to write the equation of all explicit functions passing through [x_1, y_1]. This includes all functions: nonlinear, continuous, discontinuous, singular, periodic, etc.
Three different constrained expressions are introduced here. The first expression is a direct extension of the "all-lines" equation,

y(x) = y_1 + (x − x_1) p(x),   (1)

where p(x) can be any function satisfying p(x_1) ≠ ∞. The second expression, called additive, is

y(x) = g(x) + [y_1 − g(x_1)] = g(x) + (y_1 − g_1),   (2)

where g(x_1) ≠ ∞, and the third expression, called rational, is

y(x) = y_1 h(x)/h(x_1) = y_1 h(x)/h_1,   (3)

where h(x) can be any function satisfying h(x_1) ≠ 0. From Eq. (1) and Eq. (2) the expression p(x) = [g(x) − g(x_1)]/(x − x_1) is derived. This equation is interesting as p(x) works as the derivative of g(x), but for finite variations. The expressions introduced in Eqs. (1,2,3) can be used to represent all explicit functions passing through the point [x_1, y_1], with the only exception of those satisfying p(x_1) = g(x_1) = ∞ and h(x_1) = 0. It is also possible to combine Eqs. (1,2,3), obtaining, for instance,

y(x) = g(x) + [y_1 − g(x_1)] h(x)/h(x_1).   (4)

This study investigates how to derive these expressions subject to a variety of linear constraints, such as functions passing through multiple points with assigned derivatives, or subject to multiple relative constraints, as well as periodic functions subject to multiple point constraints. Potential applications for these types of expressions are many. For example, they can be used in optimization problems through expressions with the assigned constraints already embedded. This is particularly useful when solving linear differential equations, especially for boundary value problems, where the integration techniques become complicated in order to satisfy the constraints. On the contrary, using the expressions provided in this study, linear differential equations can be rewritten in terms of functions with the constraints (a.k.a. the "subject to" conditions) already embedded. This approach provides a unified solving framework for initial and boundary value problems.
Specifically, for linear differential equations, a least-squares solution technique [1] has been developed with orders-of-magnitude accuracy gains with respect to the step-varying Runge-Kutta-Fehlberg method.
Particularly important is the fact that Eq. (2), and many other equations provided in this study, can be immediately extended to vectors (bold lower case), to matrices (upper case), and to higher-dimensional tensors. Specifically, for vectors and matrices, Eq. (2) becomes

y(x) = g(x) + (y_1 − g_1)   and   Y(x) = G(x) + (Y_1 − G_1).

In the following sections this study derives constrained expressions satisfying:
• constraints in one point;
• constraints in two points and the extension to n points;
• m constraints in n ≤ m points;
• single and multiple relative constraints;
• constraints on continuous and discontinuous periodic functions.

II. CONSTRAINTS ON ONE POINT
Functions subject to y(x_1) = y_1 can be derived using the general form y(x) = g(x) + ξ_1 h(x), where ξ_1 is an unknown coefficient and g(x) and h(x) can be any two functions satisfying g(x_1) = g_1 ≠ ∞ and h(x_1) = h_1 ≠ 0. The constraint, y(x_1) = y_1, allows us to derive the expression of ξ_1 = (y_1 − g_1)/h_1, getting

y(x) = g(x) + (y_1 − g_1) h(x)/h_1,   (5)

which is a combination of Eq. (2) and Eq. (3). In particular, if g(x) = 0 or if h(x) = 1, we obtain the expressions already provided in Eq. (3) and Eq. (2), respectively.
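As a concrete numerical check of Eq. (5), the sketch below verifies that the constraint is embedded for any admissible free function: the specific choices of g(x) and h(x) are illustrative assumptions, not prescribed by the theory.

```python
import math

def one_point(g, h, x1, y1):
    """Constrained expression of Eq. (5): y(x) = g(x) + (y1 - g(x1)) h(x)/h(x1).
    The constraint y(x1) = y1 is embedded for ANY free function g,
    provided h(x1) != 0."""
    g1, h1 = g(x1), h(x1)
    return lambda x: g(x) + (y1 - g1) * h(x) / h1

# h = 1 recovers the additive form of Eq. (2); g = 0 recovers the rational form of Eq. (3)
ya = one_point(math.exp, lambda x: 1.0, 0.5, 2.0)
yb = one_point(lambda x: 0.0, lambda x: x**2 + 1.0, 0.5, 2.0)
```

Both curves differ away from x_1 = 0.5 but agree on the constraint, illustrating that the free function g(x) sweeps the family of all functions through the point.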
A. Constraint on one derivative

Equation (5) can also be used to obtain a function satisfying a constraint on the n-th derivative, d^n y/dx^n |_{x=x_1} = y_1^(n), getting

y(x) = g(x) + [y_1^(n) − g_1^(n)] h(x)/h_1^(n),   (6)

where g_1^(n) and h_1^(n) ≠ 0 are the n-th derivatives of g(x) and h(x) evaluated at x = x_1.

B. Constraints on two derivatives

Consider two constraints on the derivatives of order p and q (p ≠ q), y^(p)(x_1) = y_1^(p) and y^(q)(x_1) = y_1^(q), and the expression y(x) = g(x) + ξ_p h_p(x) + ξ_q h_q(x). The two constraints allow us to compute the coefficients ξ_p and ξ_q by solving the linear system

h_p1^(p) ξ_p + h_q1^(p) ξ_q = y_1^(p) − g_1^(p)   and   h_p1^(q) ξ_p + h_q1^(q) ξ_q = y_1^(q) − g_1^(q).   (7)

A solution exists as long as the coefficient matrix of this system is not singular, which provides the following considerations. In order to admit a solution, h_p(x) and h_q(x) must have nonzero p-th and q-th derivatives, h_p^(p)(x) ≠ 0 and h_q^(q)(x) ≠ 0, respectively. For example, if p = 0 and q = 3, then h_p(x) must be (at least) a nonzero constant and h_q(x) must be (at least) cubic, e.g., h_q(x) = x^3. This requirement can always be met by setting h_p(x) = x^p/p! and h_q(x) = x^q/q!. Then ξ_p and ξ_q are derived from the constraints, and the searched constrained expression follows by substitution (Eq. (8)). In the common case where the function (p = 0) and the first derivative (q = 1) are assigned in one point, Eq. (8) becomes

y(x) = g(x) + (y_1 − g_1) + (x − x_1)(ẏ_1 − ġ_1).   (9)

Note that, if g(x) = a or g(x) = a + b x, then the previous equation reduces to y(x) = y_1 + ẏ_1 (x − x_1). The reason is that, to obtain Eq. (9), we have implicitly selected h_p(x) = 1 and h_q(x) = x. Therefore, in order for g(x) to provide additional variation, its expression must be at least quadratic, that is, at least one degree greater than h_p(x) and h_q(x).

C. Constraints on n derivatives
The constrained expression when the function and its first n derivatives are assigned in one point can be written as

y(x) = g(x) + Σ_{k=0}^{n} (x − x_1)^k / k! [y_1^(k) − g_1^(k)].   (10)

Equation (10) satisfies all the constraints, y(x_1) = y_1, ẏ(x_1) = ẏ_1, ÿ(x_1) = ÿ_1, and so on. In particular, for n = ∞, this equation becomes the combination of two Taylor series, expanded around x_1, one for y(x) and the other for g(x), respectively,

y(x) = Σ_{k=0}^{∞} (x − x_1)^k / k! y_1^(k) + [g(x) − Σ_{k=0}^{∞} (x − x_1)^k / k! g_1^(k)].   (11)

Therefore, as n → ∞ the set of potential functions that can be described by Eq. (10) converges to a single function only, the y(x) defined in Eq. (11). This means that, as n increases, the variations affecting y(x), due to the g(x) variations, decrease. Note that Eq. (10) is linear in g(x) and its derivatives and, therefore, it may be possible to take advantage of g(x) to improve predictions based on truncated Taylor series.
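A minimal numerical sketch of Eq. (10), assuming the derivatives of the free function g at x_1 are supplied analytically (here g = sin, an illustrative choice), and checking the embedded constraints by finite differences:

```python
from math import factorial, sin, cos

def n_deriv_constrained(g, g_ders_at_x1, x1, y_ders):
    """Sketch of Eq. (10): y(x) = g(x) + sum_k (x-x1)^k/k! * (y1^(k) - g1^(k)).
    g_ders_at_x1[k] is the k-th derivative of g at x1 (supplied analytically);
    y_ders[k] is the assigned k-th derivative value at x1."""
    def y(x):
        return g(x) + sum((x - x1) ** k / factorial(k) * (y_ders[k] - g_ders_at_x1[k])
                          for k in range(len(y_ders)))
    return y

x1 = 0.3
g_ders = [sin(x1), cos(x1), -sin(x1)]          # derivatives of g = sin at x1
y = n_deriv_constrained(sin, g_ders, x1, y_ders=[1.0, 0.0, -2.0])
```

For any choice of g, the resulting y satisfies y(x_1) = 1, ẏ(x_1) = 0, and ÿ(x_1) = −2.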
For vectors made of m independent variables, such as y^T(x) = {y_1(x), y_2(x), ..., y_m(x)}, and subject to n constraints at x = x_1, Eq. (10) becomes, componentwise,

y(x) = g(x) + Σ_{k=0}^{n−1} (x − x_1)^k / k! [y_1^(k) − g_1^(k)].   (12)

In this case the g(x) vector can be expressed as a linear combination of n common basis functions, g(x) = Ξ^T h(x), where Ξ is an n × m matrix of the unknown coefficients, and Eq. (12) can be rewritten accordingly. Equation (10) also has interesting applications for vectors made of subsequent derivatives, such as y_d^T = {y, ẏ, ÿ, ...}, where the subscript "d" identifies this specific kind of vector, which often appears in dynamics. To use this kind of constrained expression, the g_d(x) vector can be expressed as a linear combination of a set of n basis functions, g_d(x) = h^T(x) ξ, where ξ is a vector of n unknown coefficients to be found. Note that, when the vector is made of independent variables, the number of unknown coefficients is higher: all the elements of the Ξ matrix.

III. CONSTRAINTS IN n POINTS
Waring polynomials [2], better known as "Lagrange polynomials," are used for polynomial interpolation. This interpolation formula was first discovered in 1779 by Edward Waring, then rediscovered by Euler in 1783, and then published by Lagrange in 1795 [3]. A two-point Waring polynomial is the equation of a line passing through two points, [x_1, y_1] and [x_2, y_2], while the general n-point Waring polynomial is the (n − 1)-degree curve passing through n points. Inspired by Waring polynomials, the expression of a function subject to passing through two or more distinct points can be provided. In fact, Waring polynomials can be generalized to obtain all nonlinear functions connecting [x_1, y_1] and [x_2, y_2] using the additive expression,

y(x) = g(x) + (y_1 − g_1) (x − x_2)/(x_1 − x_2) + (y_2 − g_2) (x − x_1)/(x_2 − x_1),   (18)

where g_1 ≠ ∞ and g_2 ≠ ∞. Equation (18) describes all nonlinear functions passing through two points. The generalization is immediate. The expression representing all functions passing through a set of n points is

y(x) = g(x) + Σ_{k=1}^{n} (y_k − g_k) Π_{i≠k} (x − x_i)/(x_k − x_i).   (19)

Equation (19) extends the Waring interpolation form using n points to any function satisfying g(x_k) ≠ ∞, where k = 1, ..., n. In particular, for a time-varying n-dimensional vector, y, passing through a set of m points, [y_1, y_2, ..., y_m], at respective times, [t_1, t_2, ..., t_m], Eq. (19) becomes the corresponding vector expression. Using the five points given in Table III and g = {sin t, e^t, 1 − t^2}^T, the trajectory shown in Fig. 1 is obtained (Fig. 1: trajectory obtained using t ∈ [0, 1] and the data given in Table III). The next two subsections contain two examples of two-point function constraints. In the first example the function and the first derivative are assigned in two distinct points, while in the second example the first derivatives are assigned in two distinct points.
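The generalized Waring form of Eq. (19) can be sketched directly; the free function below (g = exp) is an arbitrary illustrative choice, and setting g = 0 recovers the classical Lagrange interpolating polynomial:

```python
import math

def waring_connected(g, xs, ys):
    """Sketch of Eq. (19): y(x) = g(x) + sum_k (y_k - g(x_k)) * W_k(x),
    where W_k are the Waring/Lagrange basis polynomials. The free
    function g can be anything finite at the nodes."""
    gs = [g(xk) for xk in xs]
    def W(k, x):
        out = 1.0
        for i, xi in enumerate(xs):
            if i != k:
                out *= (x - xi) / (xs[k] - xi)
        return out
    def y(x):
        return g(x) + sum((ys[k] - gs[k]) * W(k, x) for k in range(len(xs)))
    return y

# five illustrative nodes and values
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [1.0, -0.3, 0.7, 0.2, -1.1]
y = waring_connected(math.exp, xs, ys)
```

Any other g produces a different, generally non-polynomial curve through the same five points.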

A. Two constraints example #1
Consider a function subject to y(x_1) = y_1 and ẏ(x_2) = ẏ_2. This function can be expressed as y(x) = g(x) + ξ_0 h_0(x) + ξ_1 h_1(x). In particular, if h_0(x) = 1 and h_1(x) = x, then y(x) = g(x) + ξ_0 + ξ_1 x, where ξ_0 and ξ_1 are two constants and g(x) can be any nonlinear function. The two constraints imply solving the system

g_1 + ξ_0 + ξ_1 x_1 = y_1   and   ġ_2 + ξ_1 = ẏ_2,

from which we derive ξ_1 = ẏ_2 − ġ_2 and ξ_0 = y_1 − g_1 − (ẏ_2 − ġ_2) x_1, that is, the solution

y(x) = g(x) + (y_1 − g_1) + (x − x_1)(ẏ_2 − ġ_2).   (20)

Again, note that, if g(x) = a + b x, then Eq. (20) reduces to y(x) = y_1 + ẏ_2 (x − x_1), that is, an expression where g(x) disappears, leaving the constrained expression unaffected.
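A minimal sketch of Eq. (20), with the derivative of the free function supplied analytically (g = sin is an illustrative assumption), and the derivative constraint checked by a central difference:

```python
import math

def example1(g, dg, x1, y1, x2, dy2):
    """Sketch of Eq. (20): y(x) = g(x) + (y1 - g(x1)) + (x - x1)(dy2 - dg(x2)),
    embedding y(x1) = y1 and y'(x2) = dy2. dg is the analytic derivative
    of the free function g."""
    c0 = y1 - g(x1)
    c1 = dy2 - dg(x2)
    return lambda x: g(x) + c0 + (x - x1) * c1

# illustrative free function: g = sin, so dg = cos
y = example1(math.sin, math.cos, x1=0.0, y1=1.0, x2=2.0, dy2=0.5)
```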

B. Two constraints example #2
Consider a function subject to ẏ(x_1) = ẏ_1 and ẏ(x_2) = ẏ_2. Assuming h_0(x) = x and h_1(x) = x^2, this function can be expressed as y(x) = g(x) + ξ_0 x + ξ_1 x^2, where ξ_0 and ξ_1 are two constants and g(x) can be any nonlinear function. In this example we see another property affecting the h_k(x) functions. In addition to the property that they cannot span the same function space, all functions h_k(x) must admit a nonzero derivative of the same order as the constraint's minimum derivative order. This means we cannot adopt for these constraints, as in the previous example, h_0(x) = 1 and h_1(x) = x. The two constraints imply solving the system

ġ_1 + ξ_0 + 2 ξ_1 x_1 = ẏ_1   and   ġ_2 + ξ_0 + 2 ξ_1 x_2 = ẏ_2,

giving the solution

ξ_1 = [(ẏ_2 − ġ_2) − (ẏ_1 − ġ_1)] / [2 (x_2 − x_1)]   and   ξ_0 = (ẏ_1 − ġ_1) − 2 ξ_1 x_1.   (21)

Note that, if g(x) = b x + c x^2, then Eq. (21) reduces to the same equation, independently of what b and c are. This means that, in order for g(x) to play a role in the constrained expression, g(x) ≠ c_0 h_0(x) + c_1 h_1(x).

C. Four constraints example
Consider finding a constrained expression subject to four constraints. Assume these constraints are specified in the following 3 distinct points, and let us find the constrained expression using monomials,

y(x) = g(x) + ξ_0 + ξ_1 x + ξ_2 x^2 + ξ_3 x^3.   (23)

The constraints imply solving a linear system providing the expressions for the four coefficients ξ_k. Then, substituting these expressions into Eq. (23), we obtain the searched constrained expression, satisfying all the constraints defined in Eq. (25). Again, the free function must satisfy g(x) ≠ c_0 + c_1 x + c_2 x^2 + c_3 x^3, otherwise the minimal constrained expression is obtained.

D. Issues using monomials
The solution to the general problem of n constraints in m points exists as long as the matrix used to derive the ξ_k coefficients is not singular. There are constraint sets for which the monomial formalism of Eq. (23) cannot be adopted because the corresponding coefficient matrix has rank 2 and cannot be inverted. In such a case, the minimal monomial expression that can be used in Eq. (23) is y(x) = g(x) + ξ_0 + ξ_1 x^3 + ξ_2 x^4 + ξ_3 x^5. It is easy to prove that any smooth function f for which d^n f / dx^n = 0 is a polynomial of degree less than n; that is, the only smooth functions for which a derivative of some order is identically zero are polynomials. The use of monomials (which usually leads to simple constrained expressions) can still be adopted for simple cases, while for the most general case, and to avoid singular coefficient matrices, the selection of sets of functions that are infinitely differentiable with no identically vanishing derivatives, such as exponentials, logarithms, trigonometric functions, and rational functions, is preferred.
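The rank deficiency can be reproduced numerically. The sketch below uses a hypothetical constraint set, one function value plus third-derivative values at three distinct points, chosen only to be consistent with the rank-2 behavior described above: with the monomial basis {1, x, x^2, x^3} the coefficient matrix has rank 2, while the basis {1, x^3, x^4, x^5} has full rank.

```python
from math import factorial

def rank(M, tol=1e-10):
    """Numerical rank via Gauss-Jordan elimination with partial pivoting."""
    A = [row[:] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda i: abs(A[i][c]))
        if abs(A[piv][c]) < tol:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(rows):
            if i != r:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def mono(j, d, x):
    """d-th derivative of x**j evaluated at x."""
    return 0.0 if d > j else factorial(j) // factorial(j - d) * x ** (j - d)

# hypothetical constraints: (derivative order, location)
cons = [(0, 0.0), (3, 1.0), (3, 2.0), (3, 3.0)]
H_bad = [[mono(j, d, x) for j in (0, 1, 2, 3)] for (d, x) in cons]
H_ok  = [[mono(j, d, x) for j in (0, 3, 4, 5)] for (d, x) in cons]
print(rank(H_bad), rank(H_ok))  # → 2 4
```

With {1, x, x^2, x^3}, every third-derivative row is the same multiple of the last basis element, collapsing the matrix; the higher-degree monomials restore independence.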
In the next section a general method to derive constrained expressions is provided for the general case of functions with n constraints on m points.

IV. COEFFICIENT FUNCTIONS OF CONSTRAINED EXPRESSIONS
In this section a different approach to derive constrained expressions is presented, motivated by the fact that the equations derived so far become cumbersome as the number of constraints grows. Consider the general case of a function subject to n distinct constraints specified at the points x_k, where the n-element vector, d, collects the derivative orders of the constraints. The constrained expression can be searched in the form

y(x) = g(x) + Σ_{k=1}^{n} β_k(x, x) [y_k^(d_k) − g^(d_k)(x_k)],

where the second argument of β_k(x, x) denotes the vector of constraint coordinates, and the coefficient functions must satisfy

β_i^(d_k)(x_k, x) = δ_ki,
where δ_ki is the Kronecker delta and n is the number of constraints. The coefficient functions, β_k(x, x), of this expression are such that, when the k-th constraint at x_k is verified, β_k^(d_k)(x_k, x) = 1, while all the other coefficient functions satisfy β_i^(d_k)(x_k, x) = 0 for i ≠ k. Given a set of constraints, this property allows us to find the β_k(x, x) expressions. This is explained in the next section.

A. Coefficient functions derivation
As the number of constraints (n) increases, the approach previously proposed to find constrained expressions becomes complicated, with the risk of obtaining a singular matrix when computing the coefficient vector, ξ. For this reason, this section proposes a new procedure to compute all the β_k functions at once, provided that the β_k functions are expressed as scalar products of a vector of n linearly independent functions (preferably indefinitely differentiable), h(x) = {h_1(x), h_2(x), ..., h_n(x)}^T, with vectors of unknown coefficients, β_k(x, x) = h^T(x) ξ_k. To be clear, the procedure is shown using a four-constraint (n = 4) example. To compute the function β_1(x) associated with the first constraint, y(x_1) = y_1, the defining relationships require β_1 to evaluate to 1 at the first constraint and to 0 at the remaining three, allowing us to obtain the coefficient vector, ξ_1, of the β_1(x) function by matrix inversion. The other ξ_i coefficient vectors, where i = 2, 3, 4, can be computed analogously, obtaining the final system H [ξ_1, ξ_2, ξ_3, ξ_4] = H Ξ = I_4. Therefore, the inversion of the constraint matrix, Ξ = H⁻¹, provides all the ξ_k coefficient vectors at once, and the β_k(x, x) polynomials follow.
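A minimal numerical sketch of this procedure. The four constraints below (y(x_1), ẏ(x_1), y(x_2), ẏ(x_2) with x_1 = 0 and x_2 = 1) and the monomial basis {1, x, x^2, x^3} are illustrative assumptions; the constraint matrix H is built, inverted against the identity so each column gives one ξ_k, and the Kronecker property of the resulting β_k functions can then be verified.

```python
from math import factorial

def gauss_solve(A, B):
    """Solve A X = B by Gauss-Jordan elimination (A: n x n, B: n x m)."""
    n = len(A)
    M = [ra[:] + rb[:] for ra, rb in zip(A, B)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

def mono(j, d, x):
    """d-th derivative of the monomial x**j evaluated at x."""
    return 0.0 if d > j else factorial(j) // factorial(j - d) * x ** (j - d)

# hypothetical constraints: (derivative order d_k, location x_k)
cons = [(0, 0.0), (1, 0.0), (0, 1.0), (1, 1.0)]
n = len(cons)
H = [[mono(j, d, xk) for j in range(n)] for (d, xk) in cons]
Ident = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
Xi = gauss_solve(H, Ident)  # column k of Xi is the xi_k coefficient vector

def beta(k, d, x):
    """d-th derivative of beta_k evaluated at x."""
    return sum(Xi[j][k] * mono(j, d, x) for j in range(n))
```

For these particular assumptions the β_k coincide with the cubic Hermite basis functions on [0, 1]; for instance, the β associated with y(x_1) works out to 1 − 3x^2 + 2x^3.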

V. RELATIVE CONSTRAINTS
Sometimes constraints are not defined in an absolute way (e.g., ẏ(2) = 1) but in a relative way, as in ÿ(0) = y(1). Constrained expressions can also be derived for relative constraints. In general, a relative constraint equates the function or one of its derivatives evaluated at one point to the function or one of its derivatives evaluated at another point. To give an example, consider the problem of finding an expression satisfying two relative constraints. The constrained expression can be searched as y(x) = g(x) + ξ_p p(x) + ξ_r r(x), where p(x) and r(x) are two assigned functions and ξ_p and ξ_r are two unknown coefficients. The two relative constraints imply solving a linear system to obtain the unknown coefficients, ξ_p and ξ_r. This linear problem admits a solution if the coefficient matrix is not singular. Matrix singularity indeed occurs if, for instance, p(x) = 1 and r(x) = x, as done in the two constraints example #1, see Eq. (20). This means that the case of two relative constraints is different from the case of two absolute function constraints, or of one function and one derivative absolute constraint. In this relative-constraint case, the functions p(x) and r(x) must admit, at least, a nonzero first derivative (in addition to spanning two independent function spaces). Using monomials, a potential constrained expression can be searched as y(x) = g(x) + ξ_p x + ξ_r x^2. The two constraints give a linear system whose solution leads to an expression that has, again, the formal structure of Eq. (25). This expression provides all functions satisfying the two relative constraints.
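A minimal sketch with two hypothetical relative constraints (chosen for illustration; the paper's own example constraints are not reproduced here): y(x_2) − y(x_1) = ΔY and ẏ(x_3) − ẏ(x_1) = ΔV, using the monomials x and x^2, which admit the required nonzero first derivatives.

```python
import math

def relative_constrained(g, dg, x1, x2, x3, dY, dV):
    """Embed the hypothetical relative constraints
         y(x2) - y(x1) = dY   and   y'(x3) - y'(x1) = dV
    into y(x) = g(x) + xi0*x + xi1*x**2. dg is the analytic
    derivative of the free function g."""
    # y'(x) = dg(x) + xi0 + 2*xi1*x, so xi0 cancels in the derivative constraint
    xi1 = (dV - (dg(x3) - dg(x1))) / (2.0 * (x3 - x1))
    xi0 = (dY - (g(x2) - g(x1)) - xi1 * (x2**2 - x1**2)) / (x2 - x1)
    return lambda x: g(x) + xi0 * x + xi1 * x**2

# illustrative free function g = sin (dg = cos)
y = relative_constrained(math.sin, math.cos, 0.0, 1.0, 2.0, 0.5, -0.2)
```

Both relative constraints hold for any choice of the free function g, while g shapes the curve everywhere else.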

A. Coefficient functions derivation for linear combinations of relative constraints
Consider the following type of constrained expression, in which absolute and relative constraints are expressed by a set of n linear equations, where ξ and α_k are vectors of unknown and known coefficients, respectively. For instance, for the constraint ÿ(x_3) = 2 y(x_1), the vector of involved constraint values is x_k = {y_1, ÿ_3}. Using a set of n indefinitely differentiable basis functions (the common size of the vectors ξ and h), the matrix H of the constraint equations can be built, and the ξ vector can then be computed from the constraint equations.
Therefore, the case of linear relative constraints can be expressed in a form that generalizes Eq. (25), where the β_k(x, x) functions multiply the relative constraints written in terms of the function g(x).

B. Example of two linear combinations of relative constraints
Let us give a numerical example. Consider finding all functions satisfying the two relative constraints of Eq. (29), defined at x_1 = −1, x_2 = 1, and x_3 = 2. Selecting the coefficient functions, the β_i functions follow by the procedure of Section IV, and all explicit functions satisfying the constraints given in Eq. (29) can then be expressed accordingly.

VI. PERIODIC FUNCTIONS

Periodic functions with period T can be provided in continuous and discontinuous forms, Eq. (33) and Eq. (34), where Ψ(x − δx_c, T) can be any periodic function with period T (e.g., trigonometric functions), δx_c and δx_d can be any constants (shifting parameters), and g(•) can be any function. Figure 3 shows three examples of using the expressions provided by Eq. (33) and Eq. (34) with period T = 0.5, δx_c = 0.4, δx_d = 0.6, and Ψ(x − δx_c, T) ≡ sin[π(x − δx_c)/T]. These three functions are associated with g(x) = 1 − e^x (black), g(x) = 2 + 3 x^3 (red), and g(x) = cos(5 x) (blue), respectively. The top and bottom periodic functions given in Fig. 3 use the continuous and the discontinuous forms, Eq. (33) and Eq. (34), respectively.
All periodic functions passing through the point [x_k, y_k] can be expressed using the additive expression form given in Eq. (2), y(x) = g(x) + (y_k − g_k). Then, these expressions can be used along with the general Waring polynomial form given in Eq. (18) to obtain all periodic functions passing through a set of n points. By doing that, the periodicity is over a line (n = 2), a quadratic (n = 3), a cubic (n = 4), and so on. For instance, using the two points [−0.7, −0.1] and [1.7, 0.2] (indicated by red markers in all three plots of Fig. 4), we can obtain the continuous (black curves) and the discontinuous (blue curves) versions for the three different expressions g(x) = cos(7 x), g(x) = 1 − e^x, and g(x) = 2 + 3 x^3, respectively. These plots clearly show how the periodicity is over a line. In general, the expressions

y_c(x) = Σ_{k=1}^{n} y_ck Π_{i≠k, i∈[1,n]} (x − x_i)/(x_k − x_i)   and   y_d(x) = Σ_{k=1}^{n} y_dk Π_{i≠k, i∈[1,n]} (x − x_i)/(x_k − x_i)

can be obtained, where the n values of y_ck and y_dk are computed using Eq. (35), respectively.
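A minimal sketch of this construction, assuming a particular periodic free function of period T (an illustrative choice): the one-point additive form stays periodic and passes through the point, and blending such expressions with Waring weights yields a curve through every point whose baseline follows the interpolating polynomial (a line for two points).

```python
import math

T = 0.5  # assumed period for this illustrative sketch

def g(x):
    """An arbitrary periodic free function with period T."""
    return math.cos(2.0 * math.pi * x / T) + 0.3 * math.sin(4.0 * math.pi * x / T)

def periodic_through(xk, yk):
    """Additive one-point form of Eq. (2): periodic, and passing through (xk, yk)."""
    gk = g(xk)
    return lambda x: g(x) + (yk - gk)

def periodic_waring(points):
    """Blend the one-point periodic expressions with Waring weights, in the
    spirit of Eq. (18): the result passes through every point, with the
    periodicity riding over the interpolating polynomial of the points."""
    xs = [p[0] for p in points]
    fs = [periodic_through(xk, yk) for xk, yk in points]
    def y(x):
        total = 0.0
        for k in range(len(points)):
            w = 1.0
            for i in range(len(points)):
                if i != k:
                    w *= (x - xs[i]) / (xs[k] - xs[i])
            total += fs[k](x) * w
        return total
    return y

y1p = periodic_through(0.2, 1.5)
y2 = periodic_waring([(-0.7, -0.1), (1.7, 0.2)])
```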

CONCLUSIONS
This study introduces constrained expressions, providing all functions required to satisfy a set of assigned constraints. These expressions are not unique and are defined in terms of a new function, g(x), and/or its derivatives, evaluated at the constraints' coordinates. Constrained expressions can be introduced for relative constraints as well as for sets of linear relative constraints made of absolute and relative constraints on the function and/or its derivatives. Finally, the theory has been applied to periodic functions, continuous and discontinuous, with assigned period and subject to passing through a set of points.
Applications of the presented theory can be found in several fields of basic scientific research involving optimization. The fundamental property of this theory is to provide expressions that restrict the search space to just those functions satisfying the problem constraints. An immediate application of these constrained expressions is in solving linear nonhomogeneous differential equations with nonconstant coefficients, for both initial and boundary value problems. Reference [1] shows how, using constrained expressions, a least-squares solution can be obtained for these two problems.
This paper introduces the first part of the theory of connections, the one showing how to connect points and/or derivatives. A second and third part, to appear soon, will show how to connect functions and differential equations, respectively.