Next Article in Journal
The Evolution of Mathematical Thinking in Chinese Mathematics Education
Next Article in Special Issue
Prediction of Discretization of GMsFEM Using Deep Learning
Previous Article in Journal
Some Results on the Cohomology of Line Bundles on the Three Dimensional Flag Variety

Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

# The Multivariate Theory of Connections †

by
Daniele Mortari
* and
Carl Leake
*
Aerospace Engineering, Texas A&M University, College Station, TX 77843, USA
*
Authors to whom correspondence should be addressed.
This paper is an extended version of our paper published in Mortari, D. “The Theory of Connections: Connecting Functions.” IAA-AAS-SciTech-072, Forum 2018, Peoples’ Friendship University of Russia, Moscow, Russia, 13–15 November 2018.
Mathematics 2019, 7(3), 296; https://doi.org/10.3390/math7030296
Submission received: 4 January 2019 / Revised: 25 February 2019 / Accepted: 18 March 2019 / Published: 22 March 2019
(This article belongs to the Special Issue Computational Mathematics, Algorithms, and Data Processing)

## Abstract

:
This paper extends the univariate Theory of Connections, introduced in (Mortari, 2017), to the multivariate case on rectangular domains with detailed attention to the bivariate case. In particular, it generalizes the bivariate Coons surface, introduced by (Coons, 1984), by providing analytical expressions, called constrained expressions, representing all possible surfaces with assigned boundary constraints in terms of functions and arbitrary-order derivatives. In two dimensions, these expressions, which contain a freely chosen function, $g ( x , y )$, satisfy all constraints no matter what the $g ( x , y )$ is. The boundary constraints considered in this article are Dirichlet, Neumann, and any combinations of them. Although the focus of this article is on two-dimensional spaces, the final section introduces the Multivariate Theory of Connections, validated by mathematical proof. This represents the multivariate extension of the Theory of Connections subject to arbitrary-order derivative constraints in rectangular domains. The main task of this paper is to provide an analytical procedure to obtain constrained expressions in any space that can be used to transform constrained problems into unconstrained problems. This theory is proposed mainly to better solve PDE and stochastic differential equations.

## 1. Introduction

The Theory of Connections (ToC), as introduced in [1], consists of a general analytical framework to obtain constrained expressions, $f ( x )$, in one-dimension. A constrained expression is a function expressed in terms of another function, $g ( x )$, that is freely chosen and, no matter what the $g ( x )$ is, the resulting expression always satisfies a set of n constraints. ToC generalizes the one-dimensional interpolation problem subject to n constraints using the general form,
$f ( x ) = g ( x ) + ∑ k = 1 n η k p k ( x ) ,$
where $p k ( x )$ are n user-selected linearly independent functions, $η k$ are derived by imposing the n constraints, and $g ( x )$ is a freely chosen function subject to be defined and nonsingular where the constraints are specified. Besides this requirement, $g ( x )$ can be any function, including, discontinuous functions, delta functions, and even functions that are undefined in some domains. Once the $η k$ coefficients have been derived, then Equation (1) satisfies all the n constraints, no matter what the $g ( x )$ function is.
Constrained expressions in the form given in Equation (1) are provided for a wide class of constraints, including constraints on points and derivatives, linear combinations of constraints, as well as infinite and integral constraints [2]. In addition, weighted constraints [3] and point constraints on continuous and discontinuous periodic functions with assigned period can also be obtained [1]. How to extend ToC to inequality and nonlinear constraints is currently a work in progress.
The Theory of Connections framework can be considered the generalization of interpolation; rather than providing a class of functions (e.g., monomials) satisfying a set of n constraints, it derives all possible functions satisfying the n constraints by spanning all possible $g ( x )$ functions. This has been proved in Ref. [1]. A simple example of a constrained expression is,
This equation always satisfies and , as long as $g ˙ ( x 1 )$ and $g ˙ ( x 2 )$ are defined and nonsingular. In other words, the constraints are embedded into the constrained expression.
Constrained expressions can be used to transform constrained optimization problems into unconstrained optimization problems. Using this approach, fast least-squares solutions of linear [4] and nonlinear [5] ODE have been obtained at machine error accuracy and with low (actually, very low) condition number. Direct comparisons of ToC versus MATLAB’s ode45 [6] and Chebfun [7] have been performed on a small test of ODE with excellent results [4,5]. In particular, the ToC approach to solve ODE consists of a unified framework to solve IVP, BVP, and multi-value problems. The extension of differential equations subject to component constraints [8] has opened the possibility for ToC to solve in real-time a class of direct optimal control problems [9], where the constraints connect state and costate.
This study first extends the Theory of Connections to two-dimensions by providing, for rectangular domains, all surfaces that are subject to: (1) Dirichlet constraints; (2) Neumann constraints; and (3) any combination of Dirichlet and Neumann constraints. This theory is then generalized to the Multivariate Theory of Connections which provide in n-dimensional space all possible manifolds that satisfy boundary constraints on the value and boundary constraints on any-order derivative.
This article is structured as follows. First, it shows that the one-dimensional ToC can be used in two dimensions when the constraints (functions or derivatives) are provided along one axis only. This is a particular case, where the original univariate theory [1] can be applied with basically no modifications. Then, a two dimensional ToC version is developed for Dirichlet type boundary constraints. This theory is then extended to include Neumann and mixed type boundary constraints. Finally, the theory is extended to n-dimensions and to incorporate arbitrary-order derivative boundary constraints followed by a mathematical proof validating it.

## 2. Manifold Constraints in One Axis, Only

Consider the function, $f ( x )$, where $f : R n → R 1$, subject to one constraint manifold along the ith variable, $x i$, that is, $f ( x ) | x i = v = c ( x i v )$. For instance, in 3-D space, this can be the surface constraint, $f ( x , y , z ) | y = π = c ( x , π , z )$. All manifolds satisfying this constraint can be expressed using the additive form provided in Ref. [1],
where $g ( x )$ is a freely chosen function that must be defined and nonsingular at the constraint coordinates. When m manifold constraints are defined along the $x i$-axis, then the 1-D methodology [1] can be applied as it is. For instance, the constrained expression subject to m constraints along the $x i$ variable evaluated at $x i = w k$, where $k ∈ [ 1 , m ]$, that is, $f ( x ) | x i = w k = c ( x i w k )$, is,
Note that this equation coincides with the Waring interpolation form (better known as Lagrangian interpolation form) [10] if the free function vanishes, $g ( x ) = 0$.

#### 2.1. Example #1: Surface Subject to Four Function Constraints

The first example is designed to show how to use Equation (3) with mixed, continuous, discontinuous, and multiple constraints. Consider the following four constraints,
$c ( x , − 2 ) = sin ( 2 x ) , c ( x , 0 ) = 3 cos x [ ( x + 1 ) mod ( 2 ) ] , c ( x , 1 ) = 9 e − x 2 , and c ( x , 3 ) = 1 − x .$
This example highlights that the constraints and free-function may be discontinuous by using the modular arithmetic function. The result is a surface that is continuous in x at some coordinates (at y = −2, 1, and 3) and discontinuous at $y = 0$. The surfaces shown in Figure 1 and Figure 2 were obtained using two distinct expressions for the free function, $g ( x , y )$.

#### 2.2. Example #2: Surface Subject to Two Functions and One Derivative Constraint

This second example is provided to show how to use the general approach given in Equation (1) and described in [1], when derivative constraints are involved. Consider the following three constraints,
$c ( x , − 2 ) = sin ( 2 x ) , c y ( x , 0 ) = 0 , and c ( x , 1 ) = 9 e − x 2 .$
Using the functions $p 1 ( y ) = 1$, $p 2 ( y ) = y$, and $p 3 ( y ) = y 2$, the constrained expression form satisfying these three constraints assumes the form,
$f ( x , y ) = g ( x , y ) + η 1 ( x ) + η 2 ( x ) y + η 3 ( x ) y 2 .$
The three constraints imply the constraints,
$sin ( 2 x ) = g ( x , − 2 ) + η 1 − 2 η 2 + 4 η 3 0 = g y ( x , 0 ) + η 2 9 e − x 2 = g ( x , 1 ) + η 1 + η 2 + η 3 ,$
from which the values of the $η k$ coefficients,
$η 1 = 2 g y ( x , 0 ) + 12 e − x 2 − sin ( 2 x ) 3 + 1 3 g ( x , − 2 ) − 4 3 g ( x , 1 ) η 2 = − g y ( x , 0 ) η 3 = sin ( 2 x ) 3 − 1 3 g ( x , − 2 ) − g y ( x , 0 ) − 3 e − x 2 + 1 3 g ( x , 1 ) ,$
can be derived. After substituting these coefficients into Equation (4), the constrained expression that always satisfies the three initial constraints is obtained. Using this expression and two different free functions, $g ( x , y )$, we obtained the surfaces shown in Figure 3 and Figure 4, respectively. The constraint $c y ( x , 0 ) = 0$, difficult to see in both figures, can be verified analytically.

## 3. Connecting Functions in Two Directions

In this section, the Theory of Connections is extended to the two-dimensional case. Note that dealing with constraints in two (or more) directions (functions or derivatives) requires particular attention. In fact, two orthogonal constraint functions cannot be completely distinct as they intersect at one point where they need to match in value. In addition, if the formalism derived for the 1-D case is applied to 2-D case, some complications arise. These complications are highlighted in the following simple clarifying example.
Consider the two boundary constraint functions, $f ( x , 0 ) = q ( x )$ and $f ( 0 , y ) = h ( y )$. Searching the constrained expression as originally done for the one-dimensional case implies the expression,
$f ( x , y ) = g ( x , y ) + η 1 p 1 ( x , y ) + η 2 p 2 ( x , y ) .$
The constraints imply the two constraints,
To obtain the values of $η 1$ and $η 2$, the determinant of the matrix to invert is $p 1 ( x , 0 ) p 2 ( 0 , y ) − p 1 ( 0 , y ) p 2 ( x , 0 )$. This determinant is y by selecting $p 1 ( x , y ) = 1$ and $p 2 ( x , y ) = y$, or it is x by selecting $p 1 ( x , y ) = x$ and $p 2 ( x , y ) = 1$. Therefore, to avoid singularities, this approach requires paying particular attention to the domain definition and/or on the user-selected functions, $p k ( x , y )$. To avoid dealing with these issues, a new (equivalent) formalism to derive constrained expressions is devised for the higher dimensional case.
The Theory of Connections extension to the higher dimensional case (with constraints on all axes) can be obtained by re-writing the constrained expression into an equivalent form, highlighting a general and interesting property. Let us show this by an example. Equation (2) can be re-written as,
$f ( x ) = x ( 2 x 2 − x ) 2 ( x 2 − x 1 ) y ˙ 1 + x ( x − 2 x 1 ) 2 ( x 2 − x 1 ) y ˙ 2 ︸ A ( x ) + g ( x ) − x ( 2 x 2 − x ) 2 ( x 2 − x 1 ) g ˙ 1 − x ( x − 2 x 1 ) 2 ( x 2 − x 1 ) g ˙ 2 ︸ B ( x ) .$
These two components, $A ( x )$ and $B ( x )$, of a constrained expression have a specific general meaning. The term, $A ( x )$, represents an (any) interpolating function satisfying the constraints while the $B ( x )$ term represents all interpolating functions that are vanishing at the constraints. Therefore, the generation of all functions satisfying multiple orthogonal constraints in n-dimensional space can always be expressed by the general form, $f ( x ) = A ( x ) + B ( x )$, where $A ( x )$ is any function satisfying the constraints and $B ( x )$ must represent all functions vanishing at the constraints. Equation $f ( x ) = A ( x ) + B ( x )$ is actually an alternative general form to write a constrained expression, that is, an alternative way to generalize interpolation: rather than derive a class of functions (e.g., monomials) satisfying a set of constraints, it represents all possible functions satisfying the set of constraints.
To prove that this additive formalism can describe all possible functions satisfying the constraints is immediate. Let $f ( x )$ be all functions satisfying the constraints and $y ( x ) = A ( x ) + B ( x )$ be the sum of a specific function satisfying the constraints, $A ( x )$, and a function, $B ( x )$, representing all functions that are null at the constraints. Then, $y ( x )$ will be equal to $f ( x )$ iff $B ( x ) = f ( x ) − A ( x )$, representing all functions that are null at the constraints.
As shown in Equation (5), once the $A ( x )$ function is obtained, then the $B ( x )$ function can be immediately derived. In fact, $B ( x )$ can be obtained by subtracting the $A ( x )$ function, where all the constraints are specified in terms of the $g ( x )$ free function, from the free function $g ( x )$. For this reason, let us write the general expression of a constrained expression as,
$f ( x ) = A ( x ) + g ( x ) − A ( g ( x ) ) ,$
where $A ( g ( x ) )$ indicates the function satisfying the constraints where the constraints are specified in term of $g ( x )$.
The previous discussion serves to prove that the problem of extending Theory of Connections to higher dimensional spaces consists of the problem of finding the function, $A ( x )$, only. In two dimensions, the function $A ( x )$ is provided in literature by the Coons surface [11], $f ( x , y )$. This surface satisfies the Dirichlet boundary constraints,
$f ( 0 , y ) = c ( 0 , y ) , f ( 1 , y ) = c ( 1 , y ) , f ( x , 0 ) = c ( x , 0 ) , and f ( x , 1 ) = c ( x , 1 ) ,$
where the surface is contained in the $x , y ∈ [ 0 , 1 ] × [ 0 , 1 ]$ domain. This surface is used in computer graphics and in computational mechanics applications to smoothly join other surfaces together, particularly in finite element method and boundary element method, to mesh problem domains into elements. The expression of the Coons surface is,
$f ( x , y ) = ( 1 − x ) c ( 0 , y ) + x c ( 1 , y ) + ( 1 − y ) c ( x , 0 ) + y c ( x , 1 ) − x y c ( 1 , 1 ) − ( 1 − x ) ( 1 − y ) c ( 0 , 0 ) − ( 1 − x ) y c ( 0 , 1 ) − x ( 1 − y ) c ( 1 , 0 ) ,$
where the four subtracting terms are there for continuity. Note the constraint functions at boundary corners must have the same value, $c ( 0 , 0 )$, $c ( 0 , 1 )$, $c ( 1 , 0 )$, and $c ( 1 , 1 )$. This equation can be written in matrix form as,
$f ( x , y ) = 1 , 1 − x , x 0 c ( x , 0 ) c ( x , 1 ) c ( 0 , y ) − c ( 0 , 0 ) − c ( 0 , 1 ) c ( 1 , y ) − c ( 1 , 0 ) − c ( 1 , 1 ) 1 1 − y y ,$
or, equivalently,
$f ( x , y ) = v T ( x ) M ( c ( x , y ) ) v ( y ) ,$
where
$M ( c ( x , y ) ) = 0 c ( x , 0 ) c ( x , 1 ) c ( 0 , y ) − c ( 0 , 0 ) − c ( 0 , 1 ) c ( 1 , y ) − c ( 1 , 0 ) − c ( 1 , 1 ) and v ( z ) = 1 1 − z z .$
Since the $f ( x , y )$ boundaries match the boundaries of the $c ( x , y )$ constraint function, then the identity, $f ( x , y ) = v T ( x ) M ( f ( x , y ) ) v ( y )$, holds for any $f ( x , y )$ function. Therefore, the $B ( x )$ function can be set as,
$B ( x ) : = g ( x , y ) − v T ( x ) M ( g ( x , y ) ) v ( y ) ,$
representing all functions that are always zero at the boundary constraints, as $g ( x , y )$ is a free function.

## 4. Theory of Connections Surface Subject to Dirichlet Constraints

Equations (8) and (9) can be merged to provide all surfaces with the boundary constraints defined in Equation (7) in the following compact form,
$f ( x , y ) = v T ( x ) M ( c ( x , y ) ) v ( y ) ︸ A ( x , y ) + g ( x , y ) − v T ( x ) M ( g ( x , y ) ) v ( y ) ︸ B ( x , y ) .$
where, again, $A ( x , y )$ indicates an expression satisfying the boundary function constraints defined by $c ( x , y )$ and $B ( x , y )$ an expression that is zero at the boundaries. In matrix form, Equation (10) becomes,
$f ( x , y ) = 1 1 − x x T g ( x , y ) c ( x , 0 ) − g ( x , 0 ) c ( x , 1 ) − g ( x , 1 ) c ( 0 , y ) − g ( 0 , y ) g ( 0 , 0 ) − c ( 0 , 0 ) g ( 0 , 1 ) − c ( 0 , 1 ) c ( 1 , y ) − g ( 1 , y ) g ( 1 , 0 ) − c ( 1 , 0 ) g ( 1 , 1 ) − c ( 1 , 1 ) 1 1 − y y ,$
where $g ( x , y )$ is a freely chosen function. In particular, if $g ( x , y ) = 0$, then the ToC surface becomes the Coons surface.
Figure 5 (left) shows the Coons surface subject to the constraints,
$c ( x , 0 ) = sin ( 3 x − π / 4 ) cos ( π / 3 ) c ( x , 1 ) = sin ( 3 x − π / 4 ) cos ( 4 + π / 3 ) c ( 0 , y ) = sin ( − π / 4 ) cos ( 4 y + π / 3 ) c ( 1 , y ) = sin ( 3 − π / 4 ) cos ( 4 y + π / 3 ) ,$
and Figure 5 (right) shows a ToC surface that is obtained using the free function,
$g ( x , y ) = 1 3 cos ( 4 π x ) sin ( 6 π y ) − x 2 cos ( 2 π y ) .$
For generic boundaries defined in the rectangle $x , y ∈ [ x i , x f ] × [ y i , y f ]$, the ToC surface becomes,
Equation (12) can also be set in matrix form,
$f ( x , y ) = v x T ( x , x i , x f ) M ( x , y ) v y ( y , y i , y f )$
where
$M ( x , y ) = g ( x , y ) c ( x , y i ) − g ( x , y i ) c ( x , y f ) − g ( x , y f ) c ( x i , y ) − g ( x i , y ) g ( x i , y i ) − c ( x i , y i ) g ( x i , y f ) − c ( x i , y f ) c ( x f , y ) − g ( x f , y ) g ( x f , y i ) − c ( x f , y i ) g ( x f , y f ) − c ( x f , y f )$
and
$v x ( x , x i , x f ) = 1 x − x f x i − x f x − x i x f − x i and v y ( y , y i , y f ) = 1 y − y f y i − y f y − y i y f − y i .$
Note that all the ToC surfaces provided are linear in $g ( x , y )$, and, therefore, they can be used to solve, by linear/nonlinear least-squares, two-dimensional optimization problems subject to boundary function constraints, such as linear/nonlinear partial differential equations.

## 5. Multi-Function Constraints at Generic Coordinates

Equation (12) can be generalized to many function constraints (grid of functions). Assume a set of $n x$ function constraints $c ( x k , y )$ and a set of $n y$ function constraints $c ( x , y k )$ intersecting at the $n x n y$ points $p i j = c ( x i , y j )$, then all surfaces satisfying the $n x n y$ function constraints can be expressed by,
Again, Equation (13) can be written in compact form,
$f ( x , y ) = v T ( x ) M ( c ( x , y ) ) v ( y ) + g ( x , y ) − v T ( x ) M ( g ( x , y ) ) v ( y )$
where,
$v ( x ) = 1 ∏ i ≠ 1 x − x i x 1 − x i ⋮ ∏ i ≠ n x x − x i x n x − x i and v ( y ) = 1 ∏ i ≠ 1 y − y i y 1 − y i ⋮ ∏ i ≠ n y y − y i y n y − y i$
and
$M ( c ( x , y ) ) = 0 c ( x , y 1 ) … c ( x , y n y ) c ( x 1 , y ) − c ( x 1 , y 1 ) … − c ( x 1 , y N y ) ⋮ ⋮ ⋱ ⋮ c ( x n x , y ) − c ( x n x , y 1 ) … − c ( x n x , y n y )$
For example, two function constraints in x and three function constraints in y can be obtained using the matrix,
$M ( c ( x , y ) ) = 0 c ( x , y 1 ) c ( x , y 2 ) c ( x , y 3 ) c ( x 1 , y ) − c ( x 1 , y 1 ) − c ( x 1 , y 2 ) − c ( x 1 , y 3 ) c ( x 2 , y ) − c ( x 2 , y 1 ) − c ( x 2 , y 2 ) − c ( x 2 , y 3 )$
and the vectors,
$v ( x ) = 1 x − x 2 x 1 − x 2 x − x 1 x 2 − x 1 and v ( y ) = 1 ( y − y 2 ) ( y − y 3 ) ( y 1 − y 2 ) ( y 1 − y 3 ) ( y − y 1 ) ( y − y 3 ) ( y 2 − y 1 ) ( y 2 − y 3 ) ( y − y 2 ) ( y − y 1 ) ( y 3 − y 2 ) ( y 3 − y 1 ) .$
Two examples of ToC surfaces are given in Figure 6 in the $x , y ∈ [ − 2 , 1 ] × [ 1 , 3 ]$ domain.

## 6. Constraints on Function and Derivatives

The “Boolean sum formulation” was provided by Farin [12] (also called “Hermite–Coons formulation”) of the Coons surface that includes boundary derivatives,
$f ( x , y ) = v T ( y ) F x ( x ) + v T ( x ) F y ( y ) − v T ( x ) M x y v ( y )$
where
$v ( z ) : = { 2 z 3 − 3 z 2 + 1 , z 3 − 2 z 2 + z , − 2 z 3 + 3 z 2 , z 3 − z 2 } T F x ( x ) : = { c ( x , 0 ) , c y ( x , 0 ) , c ( x , 1 ) , c y ( x , 1 ) } T F y ( y ) : = { c ( 0 , y ) , c x ( 0 , y ) , c ( 1 , y ) , c x ( 1 , y ) } T$
and
$M x y ( x , y ) : = c ( 0 , 0 ) c y ( 0 , 0 ) c ( 0 , 1 ) c y ( 0 , 1 ) c x ( 0 , 0 ) c x y ( 0 , 0 ) c x ( 0 , 1 ) c x y ( 0 , 1 ) c ( 1 , 0 ) c y ( 1 , 0 ) c ( 1 , 1 ) c y ( 1 , 1 ) c x ( 1 , 0 ) c x y ( 1 , 0 ) c x ( 1 , 1 ) c x y ( 1 , 1 ) .$
The formulation provided in Equation (14) can be put in the matrix compact form,
$f ( x , y ) = v T ( x ) M ( c ( x , y ) ) v ( y ) ,$
where
$v ( z ) : = { 1 , 2 z 3 − 3 z 2 + 1 , z 3 − 2 z 2 + z , − 2 z 3 + 3 z 2 , z 3 − z 2 } T$
and the $5 × 5$ matrix, $M ( c ( x , y ) )$, has the expression,
$M ( c ( x , y ) ) : = 0 c ( x , 0 ) c y ( x , 0 ) c ( x , 1 ) c y ( x , 1 ) c ( 0 , y ) − c ( 0 , 0 ) − c y ( 0 , 0 ) − c ( 0 , 1 ) − c y ( 0 , 1 ) c x ( 0 , y ) − c x ( 0 , 0 ) − c x y ( 0 , 0 ) − c x ( 0 , 1 ) − c x y ( 0 , 1 ) c ( 1 , y ) − c ( 1 , 0 ) − c y ( 1 , 0 ) − c ( 1 , 1 ) − c y ( 1 , 1 ) c x ( 1 , y ) − c x ( 1 , 0 ) − c x y ( 1 , 0 ) − c x ( 1 , 1 ) − c x y ( 1 , 1 ) .$
To verify the boundary derivative constraints, the following partial derivatives of Equation (15) are used,
$f x ( x , y ) = [ v x T ( x ) M ( c ( x , y ) ) + v T ( x ) M x ( c ( x , y ) ) ] v ( y ) f y ( x , y ) = v T ( x ) [ M y T ( c ( x , y ) ) v ( y ) + M ( c ( x , y ) ) v y ( y ) ] ,$
where
$d v d z = 0 6 z ( z − 1 ) 3 z 2 − 4 z + 1 6 z ( 1 − z ) z ( 3 z − 2 ) , M y = 0 0 1 × 4 c y ( 0 , y ) 0 1 × 4 c x y ( 0 , y ) 0 1 × 4 c y ( 1 , y ) 0 1 × 4 c x y ( 1 , y ) 0 1 × 4 , and M x T = 0 0 1 × 4 c x ( x , 0 ) 0 1 × 4 c x y ( x , 0 ) 0 1 × 4 c x ( x , 1 ) 0 1 × 4 c x y ( x , 1 ) 0 1 × 4 .$
The ToC in 2D with function and derivative boundary constraints is simply,
$f ( x , y ) = v T ( x ) M ( c ( x , y ) ) v ( y ) ︸ A ( x , y ) + g ( x , y ) − v T ( x ) M ( g ( x , y ) ) v ( y ) ︸ B ( x , y )$
where the $M$ matrix and the $v$ vectors are provided by Equations (17) and (16), respectively.
Dirichlet/Neumann mixed constraints can be derived, as shown in the examples provided in Section 6.1 through Section 6.4. The matrix compact form is simply obtained from the matrix defined in Equation (17) by removing the rows and the columns associated with the boundary constraints not provided, while the vectors $v ( x )$ and $v ( y )$ are derived by specifying the constraints. Note that in general the vectors $v ( x )$ and $v ( y )$ are not unique. The reason the vectors $v ( x )$ and $v ( y )$ are not unique comes from the fact that the $A ( x )$ term in Equation (6) is not unique.
In the next subsections, four Dirichlet/Neumann mixed constraint examples providing the simplest expressions for $v ( x )$ and $v ( y )$ are derived. The Appendix A contains the expressions for the $v ( x )$ and $v ( y )$ vectors associated with all the combinations of Dirichlet and Neumann constraints.

#### 6.1. Constraints: $c ( 0 , y )$ and $c ( x , 0 )$

In this case, the Coons-type surface satisfying the boundary constraints can be expressed as,
$f ( x , y ) = 1 p ( x ) 0 c ( x , 0 ) c ( 0 , y ) − c ( 0 , 0 ) 1 q ( y )$
where $p ( x )$ and $q ( y )$ are unknown functions. Expanding, we obtain $f ( x , y ) = c ( x , 0 ) q ( y ) + p ( x ) [ c ( 0 , y ) − c ( 0 , 0 ) q ( y ) ]$. The two constraints are satisfied if,
$c ( 0 , y ) = c ( 0 , 0 ) q ( y ) + p ( 0 ) [ c ( 0 , y ) − c ( 0 , 0 ) q ( y ) ] c ( x , 0 ) = c ( x , 0 ) q ( 0 ) + p ( x ) [ c ( 0 , 0 ) − c ( 0 , 0 ) q ( 0 ) ] .$
Therefore, the $p ( x )$ and $q ( y )$ functions must satisfy $p ( 0 ) = 1$ and $q ( 0 ) = 1$. The simplest expressions satisfying these equations can be obtained by selecting $p ( x ) = 1$ and $q ( y ) = 1$. In this case, the associated ToC surface is given by,
$f ( x , y ) = 1 1 g ( x , y ) c ( x , 0 ) − g ( x , 0 ) c ( 0 , y ) − g ( 0 , y ) g ( 0 , 0 ) − c ( 0 , 0 ) 1 1$
Note that any functions satisfying $p ( 0 ) = 1$ and $q ( 0 ) = 1$ can be adopted to obtain the ToC surface satisfying the constraints $f ( 0 , y ) = c ( 0 , y )$ and $f ( x , 0 ) = c ( x , 0 )$. This is because there are infinite Coons-type surfaces satisfying the constraints. Consequently, the vectors $v ( x )$ and $v ( y )$ are not unique.

#### 6.2. Constraints: $c ( 0 , y )$ and $c y ( x , 0 )$

For these boundary constraints, the Coons-type surface is expressed by,
$f ( x , y ) = 1 p ( x ) 0 c y ( x , 0 ) c ( 0 , y ) − c y ( 0 , 0 ) 1 q ( y )$
= cy(x, 0)q(y) + p(x)[c (0, y) − cy (0, 0) q(y)].
The constraints are satisfied if,
$c ( 0 , y ) = c y ( 0 , 0 ) q ( y ) + p ( 0 ) [ c ( 0 , y ) − c y ( 0 , 0 ) q ( y ) ] , c y ( x , 0 ) = c y ( x , 0 ) q y ( 0 ) + p ( x ) [ c y ( 0 , 0 ) − c y ( 0 , 0 ) q y ( 0 ) ] .$
Therefore, the $p ( x )$ and $q ( y )$ functions must satisfy $p ( 0 ) = 1$ and $q y ( 0 ) = 1$. One solution is $p ( x ) = 1$ and $q ( y ) = y$. Therefore, the associated ToC surface is given by,
$f ( x , y ) = 1 1 g ( x , y ) c y ( x , 0 ) − g y ( x , 0 ) c ( 0 , y ) − g ( 0 , y ) g y ( 0 , 0 ) − c y ( 0 , 0 ) 1 y .$

#### 6.3. Neumann Constraints: $c x ( 0 , y )$, $c x ( 1 , y )$, $c y ( x , 0 )$, and $c y ( x , 1 )$

In this case, the Coons-type surface satisfying the boundary constraints can be expressed as,
$f ( x , y ) = 1 , p 1 ( x ) , p 2 ( x ) 0 c y ( x , 0 ) c y ( x , 1 ) c x ( 0 , y ) − c x y ( 0 , 0 ) − c x y ( 0 , 1 ) c x ( 1 , y ) − c x y ( 1 , 0 ) − c x y ( 1 , 1 ) 1 q 1 ( y ) q 2 ( y ) .$
The constraints are satisfied if,
$c x ( 0 , y ) = q 1 ( y ) c x y ( 0 , 0 ) + q 2 ( y ) c x y ( 0 , 1 ) + + p 1 x ( 0 ) [ c x ( 0 , y ) − q 1 ( y ) c x y ( 0 , 0 ) − q 2 ( y ) c x y ( 0 , 1 ) ] + + p 2 x ( 0 ) [ c x ( 1 , y ) − q 1 ( y ) c x y ( 1 , 0 ) − q 2 ( y ) c x y ( 1 , 1 ) ] c x ( 1 , y ) = q 1 ( y ) c x y ( 1 , 0 ) + q 2 ( y ) c x y ( 1 , 1 ) + + p 1 x ( 1 ) [ c x ( 0 , y ) − q 1 ( y ) c x y ( 0 , 0 ) − q 2 ( y ) c x y ( 0 , 1 ) ] + + p 2 x ( 1 ) [ c x ( 1 , y ) − q 1 ( y ) c x y ( 1 , 0 ) − q 2 ( y ) c x y ( 1 , 1 ) ] c y ( x , 0 ) = q 1 y ( 0 ) c y ( x , 0 ) + q 2 y ( 0 ) c y ( x , 1 ) + + p 1 ( x ) [ c x y ( 0 , 0 ) − q 1 y ( 0 ) c x y ( 0 , 0 ) − q 2 y ( 0 ) c x y ( 0 , 1 ) ] + + p 2 ( x ) [ c x y ( 1 , 0 ) − q 1 y ( 0 ) c x y ( 1 , 0 ) − q 2 y ( 0 ) c x y ( 1 , 1 ) ] c y ( x , 1 ) = q 1 y ( 1 ) c y ( x , 0 ) + q 2 y ( 1 ) c y ( x , 1 ) + + p 1 ( x ) [ c x y ( 0 , 1 ) − q 1 y ( 1 ) c x y ( 0 , 0 ) − q 2 y ( 1 ) c x y ( 0 , 1 ) ] + + p 2 ( x ) [ c x y ( 1 , 1 ) − q 1 y ( 1 ) c x y ( 1 , 0 ) − q 2 y ( 1 ) c x y ( 1 , 1 ) ] .$
These equations imply $p 1 x ( 0 ) = q 1 x ( 0 ) = 1$, $p 1 x ( 1 ) = q 1 x ( 1 ) = 0$, $p 2 x ( 0 ) = q 2 x ( 0 ) = 0$, and $p 2 x ( 1 ) = q 2 x ( 1 ) = 1$. Therefore, the simplest solution is $p 1 ( t ) = q 1 ( t ) = t − t 2 / 2$ and $p 2 ( t ) = q 2 ( t ) = t 2 / 2$. Then, the associated ToC surface satisfying the Neumann constraints is given by,
$f ( x , y ) = v T ( x ) g ( x , y ) c y ( x , 0 ) − g y ( x , 0 ) c y ( x , 1 ) − g y ( x , 1 ) c x ( 0 , y ) − g x ( 0 , y ) g x y ( 0 , 0 ) − c x y ( 0 , 0 ) g x y ( 0 , 1 ) − c x y ( 0 , 1 ) c x ( 1 , y ) − g x ( 1 , y ) g x y ( 1 , 0 ) − c x y ( 1 , 0 ) g x y ( 1 , 1 ) − c x y ( 1 , 1 ) v ( y )$
where
$v T ( x ) = 1 , x − x 2 2 , x 2 2 and v ( y ) = 1 , y − y 2 2 , y 2 2 .$

#### 6.4. Constraints: $c ( 0 , y )$, $c y ( x , 0 )$, and $c y ( x , 1 )$

In this case, the Coons-type surface satisfying the boundary constraints is in the form,
$f ( x , y ) = 1 p ( x ) T 0 c y ( x , 0 ) c y ( x , 1 ) c ( 0 , y ) − c y ( 0 , 0 ) − c y ( 0 , 1 ) 1 q 1 ( y ) q 2 ( y ) .$
The constraints are satisfied if $p ( 0 ) = 1$, $p 1 y ( 0 ) = 1$, $p 1 y ( 1 ) = 0$, $p 2 y ( 0 ) = 0$, and $p 2 y ( 1 ) = 1$. Therefore, the associated ToC surface is,
$f ( x , y ) = 1 1 T g ( x , y ) c y ( x , 0 ) − g y ( x , 0 ) c y ( x , 1 ) − g y ( x , 1 ) c ( 0 , y ) − g ( 0 , y ) g y ( 0 , 0 ) − c y ( 0 , 0 ) g y ( 0 , 1 ) − c y ( 0 , 1 ) 1 y − y 2 2 y 2 2 .$

#### 6.5. Generic Mixed Constraints

Consider the case of mixed constraints,
$f ( x , y 1 ) = c ( x , y 1 ) f x ( x , y 2 ) = c x ( x , y 2 ) f ( x , y 3 ) = c ( x , y 3 ) and f y ( x 1 , y ) = c y ( x 1 , y ) f y ( x 2 , y ) = c y ( x 2 , y ) f ( x 3 , y ) = c ( x 3 , y ) .$
In this case, the surface satisfying the boundary constraints is built using the matrix,
$M ( c ( x , y ) ) = 0 c ( x , y 1 ) c x ( x , y 2 ) c ( x , y 3 ) c y ( x 1 , y ) − c y ( x 1 , y 1 ) − c x y ( x 1 , y 2 ) − c y ( x 1 , y 3 ) c y ( x 2 , y ) − c y ( x 2 , y 1 ) − c x y ( x 2 , y 2 ) − c y ( x 2 , y 3 ) c ( x 3 , y ) − c ( x 3 , y 1 ) − c x ( x 3 , y 2 ) − c ( x 3 , y 3 )$
and all surfaces subject to the constraints defined in Equation (19) can be obtained by,
$f ( x , y ) = v ( x ) T M ( c ( x , y ) ) v ( y ) + g ( x , y ) − v ( x ) T M ( g ( x , y ) ) v ( y ) ,$
where
$v ( x ) = 1 p 1 ( x , x 1 , x 2 , x 3 ) p 2 ( x , x 1 , x 2 , x 3 ) p 3 ( x , x 1 , x 2 , x 3 ) and v ( y ) = 1 q 1 ( y , y 1 , y 2 , y 3 ) q 2 ( y , y 1 , y 2 , y 3 ) q 3 ( y , y 1 , y 2 , y 3 )$
are vectors made of the (not unique) function vectors $v ( x )$ and $v ( y )$ whose expressions can be found by satisfying the constraints (as done in the previous four subsections) along with a methodology similar to that given in Section 5.

## 7. Extension to $n$-Dimensional Spaces and Arbitrary-Order Derivative Constraints

This section provides the Multivariate Theory of Connections, as the generalization to n-dimensional rectangular domains with arbitrary-order boundary derivatives of what is presented above for two-dimensional space. Using tensor notation, this generalization is represented in the following compact form,
$F ( x ) = M ( c ( x ) ) i 1 i 2 … i n v i 1 v i 2 … v i n ︸ A ( x ) + g ( x ) − M ( g ( x ) ) i 1 i 2 … i n v i 1 v i 2 … v i n ︸ B ( x )$
where n is the number of orthogonal coordinates defined by the vector $x = { x 1 , x 2 , … , x n }$, $v i k ( x k )$ is the $i k$th element of a vector function of the variable $x k$, $M$ is an n-dimensional tensor that is a function of the boundary constraints defined in $c ( x )$, and $g ( x )$ is the free-function.
In Equation (20), the term $A ( x )$ represents any function satisfying the boundary constraints defined by $c ( x )$ and the term $B ( x )$ represents all possible functions that are zero on the boundary constraints. The subsections that follow explain how to construct the $M$ tensor and the $v i k$ vectors for assigned boundary constraints, and provides a proof that the tensor formulation of the ToC defined by Equation (20) satisfies all boundary constraints defined by $c ( x )$, independently of the choice of the free function, $g ( x )$.
Consider a generic boundary constraint on the $x k = p$ hyperplane, where $k ∈ [ 1 , n ]$. This constraint specifies the d-derivative of the constraint function $c ( x )$ evaluated at $x k = p$ and it is indicated by . Consider a set of $ℓ k$ constraints defined in various $x k$ hyperplanes. This set of constraints is indicated by $k c p k d k$, where $d k$ and $p k$ are vectors of $ℓ k$ elements indicating the order of derivatives and the values of $x k$ where the boundary constraints are defined, respectively. A specific boundary constraint, e.g. the mth boundary constraint, can then be written as $k c p m k d m k$.
Additionally, let us define an operator, called the boundary constraint operator, whose purpose is to take the dth derivative with respect to coordinate $x k$ and then evaluate that function at $x k = p$. Equation (21) shows the idea.
$k b p d [ f ] ≡ ∂ d f ∂ x k d | ( x 1 , … , x k − 1 , p , x k + 1 , … , x n )$
In general, for a function of $n$ variables, the boundary constraint operator identifies an $(n-1)$-dimensional manifold. Since the boundary constraint operator is used throughout this proof, it is important to note its properties when acting on sums and products of functions. Equation (22) gives its action on sums,

$${}^k b_p^d[f_1 + f_2] = {}^k b_p^d[f_1] + {}^k b_p^d[f_2],$$

and Equation (23) gives its action on products (the Leibniz rule, evaluated at $x_k = p$),

$${}^k b_p^d[f_1 f_2] = \sum_{j=0}^{d} \binom{d}{j}\, {}^k b_p^{\,j}[f_1]\; {}^k b_p^{\,d-j}[f_2].$$
This section shows how to build the $M$ tensor and the vectors $v$ given the boundary constraints defined by the boundary constraint operators. Moreover, this section contains a proof that, in Equation (20), the function $A(\mathbf{x})$ satisfies the boundary constraints defined by $c(\mathbf{x})$ and, by extension, that the function $B(\mathbf{x})$ projects the free function $g(\mathbf{x})$ onto the subspace of functions that are zero at the boundary constraints. It then follows that the ToC expression given in Equation (20) represents all possible functions that meet the boundary constraints defined by the boundary constraint operators.

#### 7.1. The $M$ Tensor

The $M$ tensor is constructed by the following step-by-step procedure.
• The element of $M$ with all indices equal to 1 is 0 (i.e., $M_{11\dots1} = 0$).
• The first-order tensor obtained by keeping the $k$th dimension's index and setting all other dimensions' indices to 1 can be written as,

$$M_{1,\dots,1,\,i_k,\,1,\dots,1} = {}^k \mathbf{c}_{\mathbf{p}_k}^{\mathbf{d}_k}, \quad \text{where} \quad i_k \in [2, \ell_k + 1],$$
where the vector ${}^k \mathbf{c}_{\mathbf{p}_k}^{\mathbf{d}_k}$ contains the $\ell_k$ boundary constraints specified along the $x_k$-axis. For example, consider the following $\ell_7 = 3$ constraints on the $k = 7$th axis,
• The generic element of the tensor is $M_{i_1 i_2 \dots i_n}$, where at least two indices are different from 1. Let $m$ be the number of indices different from 1; note that $m$ is also the number of constraint "intersections". In this case, the generic element of the $M$ tensor is provided by,
If $c(\mathbf{x}) \in C^s$, where $s = \sum_{k=1}^{n} d^{\,k}_{i_k - 1}$, then Clairaut's theorem states that the sequence of boundary constraint operators appearing in Equation (24) can be freely permuted. This permutation becomes obvious by repeated application of the theorem. For example,
$$f_{xyy} = (f_{xy})_y = (f_{yx})_y = (f_y)_{xy} = (f_y)_{yx} = f_{yyx}.$$
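This commutation of mixed partials is easy to verify symbolically; the example function below is ours, chosen only to be smooth enough for Clairaut's theorem to apply:

```python
import sympy as sp

# Clairaut's theorem on a smooth example: every ordering of the three
# mixed partial derivatives in f_xyy yields the same function.
x, y = sp.symbols('x y')
f = sp.exp(x * y) * sp.sin(y)

orders = [(x, y, y), (y, x, y), (y, y, x)]
results = [sp.diff(f, *o) for o in orders]
print(all(sp.simplify(r - results[0]) == 0 for r in results))  # True
```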
To better clarify how to use Equation (24), consider the example of the following constraints in three-dimensional space.
$$c(\mathbf{x})\big|_{x_1=0}, \quad c(\mathbf{x})\big|_{x_1=1}, \quad c(\mathbf{x})\big|_{x_2=0}, \quad \left.\frac{\partial c(\mathbf{x})}{\partial x_2}\right|_{x_2=0}, \quad c(\mathbf{x})\big|_{x_3=0}, \quad \text{and} \quad \left.\frac{\partial c(\mathbf{x})}{\partial x_3}\right|_{x_3=0}$$
• From Step 1: $M_{111} = 0$
• From Step 2:

$$\begin{aligned}
M_{i_1 1 1} &= \left\{0,\; c(0, x_2, x_3),\; c(1, x_2, x_3)\right\} = \left\{0,\; {}^1 b_0^0[c(\mathbf{x})],\; {}^1 b_1^0[c(\mathbf{x})]\right\} \\
M_{1 i_2 1} &= \left\{0,\; c(x_1, 0, x_3),\; \frac{\partial c}{\partial x_2}(x_1, 0, x_3)\right\} = \left\{0,\; {}^2 b_0^0[c(\mathbf{x})],\; {}^2 b_0^1[c(\mathbf{x})]\right\} \\
M_{1 1 i_3} &= \left\{0,\; c(x_1, x_2, 0),\; \frac{\partial c}{\partial x_3}(x_1, x_2, 0)\right\} = \left\{0,\; {}^3 b_0^0[c(\mathbf{x})],\; {}^3 b_0^1[c(\mathbf{x})]\right\}
\end{aligned}$$
• From Step 3, a single example is provided,
which, thanks to Clairaut’s theorem, can also be written as,
Three additional examples are given to help further illustrate the procedure,

#### 7.2. The v Vectors

Each vector, $v_k$, is associated with the $\ell_k$ constraints that are specified by ${}^k \mathbf{c}_{\mathbf{p}_k}^{\mathbf{d}_k}$. The $v_k$ vector is built as follows,

$$v_k = \left\{ 1,\; \sum_{i=1}^{\ell_k} \alpha_{i1}\, h_i(x_k),\; \sum_{i=1}^{\ell_k} \alpha_{i2}\, h_i(x_k),\; \dots,\; \sum_{i=1}^{\ell_k} \alpha_{i \ell_k}\, h_i(x_k) \right\},$$

where the $h_i(x_k)$ are $\ell_k$ linearly independent functions. The simplest set of linearly independent functions is the monomials, that is, $h_i(x_k) = x_k^{i-1}$. The $\ell_k \times \ell_k$ coefficients, $\alpha_{ij}$, can be computed by matrix inversion,

$$\begin{bmatrix}
{}^k b_{p_1}^{d_1}[h_1] & {}^k b_{p_1}^{d_1}[h_2] & \dots & {}^k b_{p_1}^{d_1}[h_{\ell_k}] \\
{}^k b_{p_2}^{d_2}[h_1] & {}^k b_{p_2}^{d_2}[h_2] & \dots & {}^k b_{p_2}^{d_2}[h_{\ell_k}] \\
\vdots & \vdots & \ddots & \vdots \\
{}^k b_{p_{\ell_k}}^{d_{\ell_k}}[h_1] & {}^k b_{p_{\ell_k}}^{d_{\ell_k}}[h_2] & \dots & {}^k b_{p_{\ell_k}}^{d_{\ell_k}}[h_{\ell_k}]
\end{bmatrix}
\begin{bmatrix}
\alpha_{11} & \alpha_{12} & \dots & \alpha_{1\ell_k} \\
\alpha_{21} & \alpha_{22} & \dots & \alpha_{2\ell_k} \\
\vdots & \vdots & \ddots & \vdots \\
\alpha_{\ell_k 1} & \alpha_{\ell_k 2} & \dots & \alpha_{\ell_k \ell_k}
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & \dots & 0 \\
0 & 1 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & 1
\end{bmatrix}.$$
To supplement the above explanation, consider the Dirichlet boundary conditions on $x_1$ from the example in Section 7.1. There are two boundary conditions, $c(\mathbf{x})|_{x_1=0}$ and $c(\mathbf{x})|_{x_1=1}$, and thus two linearly independent functions are needed,

$$v_{i_1} = \left\{ 1,\; \alpha_{11} h_1(x_1) + \alpha_{21} h_2(x_1),\; \alpha_{12} h_1(x_1) + \alpha_{22} h_2(x_1) \right\}.$$

Let us choose $h_1(x_1) = 1$ and $h_2(x_1) = x_1$. Then, following Equation (25),

$$\begin{bmatrix} {}^1 b_0^0[1] & {}^1 b_0^0[x_1] \\ {}^1 b_1^0[1] & {}^1 b_1^0[x_1] \end{bmatrix}
\begin{bmatrix} \alpha_{11} & \alpha_{12} \\ \alpha_{21} & \alpha_{22} \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} \alpha_{11} & \alpha_{12} \\ \alpha_{21} & \alpha_{22} \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\quad\Rightarrow\quad
\begin{bmatrix} \alpha_{11} & \alpha_{12} \\ \alpha_{21} & \alpha_{22} \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix},$$

and substituting the values of $\alpha_{ij}$, we obtain $v_{i_1} = \{1,\; 1 - x_1,\; x_1\}$.
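The inversion in Equation (25) is mechanical; a minimal numerical sketch for this Dirichlet example (the array names are ours) is:

```python
import numpy as np

# Matrix of boundary constraint operators applied to the support
# functions h1(x1) = 1 and h2(x1) = x1, as in Equation (25):
# rows are the constraints at x1 = 0 and x1 = 1; columns are h1, h2.
Bmat = np.array([[1.0, 0.0],
                 [1.0, 1.0]])
alpha = np.linalg.inv(Bmat)   # solves Bmat @ alpha = I

# Assemble the v vector at a sample point; it must equal {1, 1-x1, x1}.
x1 = 0.3
hs = np.array([1.0, x1])      # [h1(x1), h2(x1)]
v = np.concatenate(([1.0], hs @ alpha))
print(v)   # components: 1, 1 - x1, x1
```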

#### 7.3. Proof

This section demonstrates that the term $A(\mathbf{x})$ from Equation (20) generates a manifold satisfying the boundary constraints defined by the function $c(\mathbf{x})$. First, it is shown that $A(\mathbf{x})$ satisfies boundary constraints on the function value, and then that it satisfies boundary constraints on arbitrary-order derivatives.
Equation (23) for $d = 0$ allows us to write,
$${}^k b_{p_{q-1}}^0[A(\mathbf{x})] = {}^k b_{p_{q-1}}^0\!\left[M_{i_1 i_2 \dots i_k \dots i_n}\right] v_{i_1} v_{i_2} \cdots {}^k b_{p_{q-1}}^0\!\left[v_{i_k}\right] \cdots v_{i_n}.$$
The boundary constraint operator applied to $v k$ yields,
Since the only nonzero terms are associated with $i k = 1 , q$, we have,
Applying the boundary constraint operator to the $(n-1)$-dimensional slice of the $M$ tensor with index $i_k = q$ has no effect, because in those components the coordinate $x_k$ has already been replaced by the value $p_{q-1}$ (see Equation (24)). Moreover, applying the boundary constraint operator to the slice of the $M$ tensor with index $i_k = 1$ causes all terms in the sum within the parentheses in Equation (28) to cancel each other, except when all of the non-$i_k$ indices are equal to one. This leads to Equation (29).
Since $v j = 1$ when $j = 1$ and $M 11 … 1 = 0$ by definition, then,
$${}^k b_{p_{q-1}}^0[A(\mathbf{x})] = M_{1 1 \dots q \dots 1} = c(x_1, x_2, \dots, p_{q-1}, \dots, x_n),$$
which proves Equation (20) works for boundary constraints on the value.
Now, we show that Equation (20) holds for arbitrary-order derivative type boundary constraints. Equation (23) for $d > 0$ allows us to write,
$${}^k b_{p_{q-1}}^{d_{q-1}}[A(\mathbf{x})] = {}^k b_{p_{q-1}}^{d_{q-1}}\!\left[M_{i_1 i_2 \dots i_k \dots i_n}\right] v_{i_1} v_{i_2} \cdots v_{i_k} \cdots v_{i_n} + M_{i_1 i_2 \dots i_k \dots i_n} v_{i_1} v_{i_2} \cdots {}^k b_{p_{q-1}}^{d_{q-1}}\!\left[v_{i_k}\right] \cdots v_{i_n}.$$
From Equation (23), we note that boundary constraint operators that take a derivative follow the usual product rule when applied to a product. Moreover, none of the $v$ vectors except $v_{i_k}$ depends on $x_k$, so applying the derivative-taking boundary constraint operator to them results in a vector of zeros; only the two terms above survive. Applying the boundary constraint operator to $v_{i_k}$ yields,
and applying the boundary constraint operator to $M$ yields,
Substituting these simplifications into $A(\mathbf{x}) = M_{i_1 i_2 \dots i_k \dots i_n} v_{i_1} v_{i_2} \cdots v_{i_k} \cdots v_{i_n}$, after applying the boundary constraint operator, results in Equation (31).
Similar to the proof for value-based boundary constraints, and based on Equation (24), all terms in the sum within the parentheses in Equation (31) cancel each other, except when all of the non-$i_k$ indices are equal to one. Thus, Equation (31) can be simplified to Equation (32).
Again, all of the vectors $v$ were designed such that their first component is 1, and the value of the element of $M$ for all indices equal to 1 is 0. Therefore, Equation (32) simplifies to,
$${}^k b_{p_{q-1}}^{d_{q-1}}[A(\mathbf{x})] = M_{1 1 \dots q \dots 1} = \left.\frac{\partial^{d_{q-1}} c(\mathbf{x})}{\partial x_k^{d_{q-1}}}\right|_{x_k = p_{q-1}},$$
which proves Equation (20) works for arbitrary-order derivative boundary constraints.
In conclusion, the term $A(\mathbf{x})$ from Equation (20) generates a manifold satisfying boundary constraints given in terms of arbitrary-order derivatives in $n$-dimensional space. The term $B(\mathbf{x})$ from Equation (20) projects any free function $g(\mathbf{x})$ onto the space of functions that vanish at the specified boundary constraints. As a result, Equation (20) produces the family of all possible functions satisfying assigned boundary constraints (function values or derivatives) on rectangular domains in $n$-dimensional space.
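The proof can also be checked numerically in a simple case. The sketch below is our own construction: it assembles Equation (20) for Dirichlet constraints on all four sides of the unit square, assuming the sign convention of the bivariate Coons surface for the intersection (corner) entries of $M$, and verifies that the result matches $c$ on the boundary for an arbitrary choice of the free function $g$:

```python
import numpy as np

def toc_surface(c, g, x, y):
    """Bivariate constrained expression of Equation (20) for Dirichlet
    constraints c(x,0), c(x,1), c(0,y), c(1,y) on the unit square."""
    vx = np.array([1.0, 1.0 - x, x])          # v vector in x
    vy = np.array([1.0, 1.0 - y, y])          # v vector in y
    def M(h):
        # Steps 1-3: zero leading element, boundary functions on the
        # first row/column, minus the corner values at intersections.
        return np.array([[0.0,       h(x, 0.0),     h(x, 1.0)],
                         [h(0.0, y), -h(0.0, 0.0),  -h(0.0, 1.0)],
                         [h(1.0, y), -h(1.0, 0.0),  -h(1.0, 1.0)]])
    A = vx @ M(c) @ vy            # satisfies the boundary constraints
    B = g(x, y) - vx @ M(g) @ vy  # vanishes on the boundary
    return A + B

c = lambda x, y: np.sin(np.pi * x) + x * y**2    # constraint function
g = lambda x, y: np.cos(3.0 * x * y) - x * y**3  # arbitrary free function

# On every side of the square, f reproduces c regardless of g:
print(abs(toc_surface(c, g, 0.37, 0.0) - c(0.37, 0.0)) < 1e-12)  # True
```

With $g = 0$, the same routine returns the bilinear Coons surface, consistent with the earlier sections.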

## 8. Conclusions

This paper extends the univariate Theory of Connections (ToC), introduced in Ref. [1], to $n$-dimensional spaces. First, it provides a mathematical tool to express all possible surfaces subject to constraints on function values and arbitrary-order derivatives on the boundary of a rectangular domain; it then extends this result to the multivariate case by providing the Multivariate Theory of Connections, which allows one to obtain $n$-dimensional manifolds subject to any-order derivative boundary constraints.
In particular, if the constraints are provided along one axis only, this paper shows that the univariate ToC, as defined in Ref. [1], can be adopted to describe all possible surfaces satisfying the constraints. If the boundary constraints are defined on a rectangular domain, the constrained expression takes the form $f(\mathbf{x}) = A(\mathbf{x}) + B(\mathbf{x})$, where $A(\mathbf{x})$ is any function satisfying the constraints and $B(\mathbf{x})$ describes all functions that vanish at the constraints. The latter is obtained by introducing a free function, $g(\mathbf{x})$, into $B(\mathbf{x})$ in such a way that $B(\mathbf{x})$ is zero at the constraints regardless of $g(\mathbf{x})$. In this way, by spanning all possible $g(\mathbf{x})$ surfaces (even discontinuous, null, or piecewise-defined ones), $B(\mathbf{x})$ generates all surfaces that are zero at the constraints and, consequently, $f(\mathbf{x}) = A(\mathbf{x}) + B(\mathbf{x})$ describes all surfaces satisfying the constraints defined on the rectangular boundary domain. The function $A(\mathbf{x})$ has been selected as a Coons surface [11]; in particular, the Coons surface itself is obtained when $g(\mathbf{x}) = 0$. All possible combinations of Dirichlet and Neumann constraints are provided in Appendix A.
The last section provides the Multivariate Theory of Connections, a mathematical tool to transform $n$-dimensional optimization problems subject to constraints on boundary values and derivatives of any order into unconstrained optimization problems. The applications of the Multivariate Theory of Connections are many, especially in the area of partial and stochastic differential equations: the main subjects of our current research.
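As a small illustration of this application (our own sketch, not taken from the paper), the univariate constrained expression $y(x) = g(x) + (1-x)\,[y_0 - g(0)] + x\,[y_1 - g(1)]$ from Ref. [1] turns the boundary value problem $y'' + y = 0$, $y(0) = 0$, $y(1) = \sin 1$ into an unconstrained linear least-squares problem for the coefficients of a polynomial free function:

```python
import numpy as np

y0, y1 = 0.0, np.sin(1.0)       # boundary values of the exact solution sin(x)
deg = 10                        # number of monomial terms in g
xs = np.linspace(0.0, 1.0, 50)  # collocation points

def h(x, k):   return x**k                                   # basis for g
def hdd(x, k): return k*(k-1)*x**(k-2) if k >= 2 else 0.0*x  # 2nd derivative

# Constrained basis phi_k(x) = h_k(x) - (1-x) h_k(0) - x h_k(1):
# y(x) = sum_k xi_k phi_k(x) + (1-x) y0 + x y1 satisfies the boundary
# conditions for ANY xi, so they drop out of the optimization entirely.
Phi   = np.column_stack([h(xs, k) - (1-xs)*h(0.0, k) - xs*h(1.0, k)
                         for k in range(deg)])
Phidd = np.column_stack([hdd(xs, k) for k in range(deg)])

yp = (1 - xs)*y0 + xs*y1        # particular part (linear, so yp'' = 0)
# Residual of y'' + y = 0 is linear in xi: (Phidd + Phi) xi = -yp
xi, *_ = np.linalg.lstsq(Phidd + Phi, -yp, rcond=None)

y = Phi @ xi + yp
print(np.max(np.abs(y - np.sin(xs))))   # small approximation error
```

The boundary conditions are met exactly by construction, while the residual of the differential equation is minimized in the least-squares sense, as in Refs. [4,5].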

## Author Contributions

C.L. derived the table in Appendix A and the mathematical proof validating the tensor notation; all remaining parts were provided by D.M.

## Funding

This research received no external funding.

## Acknowledgments

The authors acknowledge Ergun Akleman for pointing out the Coons surface.

## Conflicts of Interest

The authors declare no conflict of interest.

## Abbreviations

The following abbreviations are used in this manuscript:

ToC Theory of Connections
PDE Partial Differential Equations
ODE Ordinary Differential Equations
IVP Initial Value Problems
BVP Boundary Value Problems

## Appendix A. All Combinations of Dirichlet and Neumann Constraints

 $c x , 0$ $c 0 , y$ $c x , 1$ $c 1 , y$ $c 0 , y x$ $c 1 , y x$ $c x , 0 y$ $c x , 1 y$ $v ( x )$ $v ( y )$ ✓ ✓ $1 1$ $1 1$ ✓ ✓ $1 1$ $1 y$ ✓ ✓ $1 x$ $1 y$ ✓ ✓ ✓ $1 1$ $1 1 − y 2 y 2$ ✓ ✓ ✓ $1 1$ $1 1 y$ ✓ ✓ ✓ $1 x$ $1 1 − y y$ ✓ ✓ ✓ $1 x$ $1 y − y 2 y 2$ ✓ ✓ $1 1$ $1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ $1 x$ $1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ ✓ $1 1 − x x$ $1 1 − y y$ ✓ ✓ ✓ ✓ $1 1 − x x$ $1 y − y 2 y 2$ ✓ ✓ ✓ ✓ $1 1 − x x$ $1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ ✓ $1 x − x 2 x 2$ $1 y − y 2 y 2$ ✓ ✓ ✓ ✓ $1 x − x 2 x 2$ $1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ ✓ $1 x − x 2 / 2 x 2 / 2$ $1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ $1 1$ $1 1 y$ ✓ ✓ ✓ $1 x$ $1 1 y$ ✓ ✓ ✓ ✓ $1 1 − x x$ $1 1 y$ ✓ ✓ ✓ ✓ $1 1$ $1 1 − y 2 y − y 2 y 2$ ✓ ✓ ✓ ✓ $1 x$ $1 1 − y 2 y − y 2 y 2$ ✓ ✓ ✓ ✓ $1 1$ $1 1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ ✓ $1 x$ $1 1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ ✓ $1 1 x$ $1 1 y$ ✓ ✓ ✓ ✓ $1 x − x 2 / 2 x 2 / 2$ $1 1 y$ ✓ ✓ ✓ ✓ $1 1 x$ $1 1 y$ ✓ ✓ ✓ ✓ ✓ $1 1 x$ $1 1 − y 2 y − y 2 y 2$ ✓ ✓ ✓ ✓ ✓ $1 1 x$ $1 1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ ✓ ✓ $1 1 − x x$ $1 1 − y 2 y − y 2 y 2$ ✓ ✓ ✓ ✓ ✓ $1 x − x 2 x 2$ $1 1 − y 2 y − y 2 y 2$ ✓ ✓ ✓ ✓ ✓ $1 1 − x x$ $1 1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ ✓ ✓ $1 x − x 2 x 2$ $1 1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ ✓ ✓ $1 x − x 2 / 2 x 2 / 2$ $1 1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ ✓ ✓ ✓ $1 1 − x 2 x − x 2 x 2$ $1 1 − y 2 y − y 2 y 2$ ✓ ✓ ✓ ✓ ✓ ✓ $1 1 − x 2 x − x 2 x 2$ $1 1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ ✓ ✓ ✓ $1 1 x − x 2 / 2 x 2 / 2$ $1 1 y − y 2 / 2 y 2 / 2$ ✓ ✓ ✓ ✓ ✓ ✓ $1 1 − x x$ $1 1 − 3 y 2 + 2 y 3 y − 2 y 2 + y 3 3 y 2 − 2 y 3 − y 2 + y 3$ ✓ ✓ ✓ ✓ ✓ ✓ $1 − 1 + x 1$ $1 1 − 3 y 2 + 2 y 3 y − 2 y 2 + y 3 3 y 2 − 2 y 3 − y 2 + y 3$ ✓ ✓ ✓ ✓ ✓ ✓ $1 x − x 2 / 2 x 2 / 2$ $1 1 − 3 y 2 + 2 y 3 y − 2 y 2 + y 3 3 y 2 − 2 y 3 − y 2 + y 3$ ✓ ✓ ✓ ✓ ✓ ✓ $1 1 x$ $1 1 − 3 y 2 + 2 y 3 y − 2 y 2 + y 3 3 y 2 − 2 y 3 − y 2 + y 3$ ✓ ✓ ✓ ✓ ✓ ✓ ✓ $1 1 − x 2 x − x 2 x 2$ $1 1 − 3 y 2 + 2 y 3 y − 2 y 2 + y 3 3 y 2 − 2 y 3 − y 2 + y 3$ ✓ ✓ ✓ ✓ ✓ ✓ ✓ $1 1 x − x 2 / 2 x 2 / 2$ $1 1 − 3 y 2 + 2 y 3 y − 2 y 2 + y 3 3 y 2 − 2 y 3 − y 2 + y 3$ ✓ ✓ ✓ ✓ 
✓ ✓ ✓ ✓ $1 1 − 3 x 2 + 2 x 3 x − 2 x 2 + x 3 3 x 2 − 2 x 3 − x 2 + x 3$ $1 1 − 3 y 2 + 2 y 3 y − 2 y 2 + y 3 3 y 2 − 2 y 3 − y 2 + y 3$

## References

1. Mortari, D. The theory of connections: Connecting points. Mathematics 2017, 5, 57. [Google Scholar] [CrossRef]
2. Johnston, H.; Mortari, D. Linear differential equations subject to relative, integral, and infinite constraints. In Proceedings of the AAS/AIAA Astrodynamics Specialist Conference, Snowbird, UT, USA, 19–23 August 2018; Paper AAS 18-273. [Google Scholar]
3. Johnston, H.; Mortari, D. Weighted least-squares solutions of over-constrained differential equations. In Proceedings of the IAA SciTech-081 Forum on Space Flight Mechanics and Space Structures and Materials, Moscow, Russia, 13–15 November 2018. [Google Scholar]
4. Mortari, D. Least-squares solutions of linear differential equations. Mathematics 2017, 5, 48. [Google Scholar] [CrossRef]
5. Mortari, D.; Johnston, H.; Smith, L. High accuracy least-squares solutions of nonlinear differential equations. J. Comput. Appl. Math. 2019, 352, 293–307. [Google Scholar] [CrossRef]
6. MATLAB and Statistics Toolbox Release 2012b; The MathWorks, Inc.: Natick, MA, USA, 2012.
7. Driscoll, T.A.; Hale, N.; Trefethen, L.N. (Eds.) Chebfun Guide; Pafnuty Publications: Oxford, UK, 2014. [Google Scholar]
8. Mortari, D.; Furfaro, R. Theory of connections applied to first-order system of ordinary differential equations subject to component constraints. In Proceedings of the 2018 AAS/AIAA Astrodynamics Specialist Conference, Snowbird, UT, USA, 19–23 August 2018. [Google Scholar]
9. Furfaro, R.; Mortari, D. Least-squares solution of a class of optimal guidance problems. In Proceedings of the 2018 AAS/AIAA Astrodynamics Specialist Conference, Snowbird, UT, USA, 19–23 August 2018. [Google Scholar]
10. Waring, E. Problems concerning interpolations. Philos. Trans. R. Soc. Lond. 1779, 69, 59–67. [Google Scholar]
11. Coons, S.A. Surfaces for Computer Aided Design; Technical Report; Massachusetts Institute of Technology: Cambridge, MA, USA, 1964. [Google Scholar]
12. Farin, G. Curves and Surfaces for CAGD: A Practical Guide, 5th ed.; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2002. [Google Scholar]
Figure 1. Surface obtained using function $g ( x , y ) = 0$ (simplest surface).
Figure 2. Surface obtained using function $g(x, y) = x^2 y - \sin(5x)\cos\big(4\,\mathrm{mod}(y, 1)\big)$.
Figure 3. Surface obtained using function $g ( x , y ) = 0$ (simplest surface).
Figure 4. Surface obtained using function $g(x, y) = 3x^2 y - 2\sin(15x)\cos(2y)$.
Figure 5. Coons surface (left); and ToC surface (right) using $g ( x , y )$ provided in Equation (11).
Figure 6. ToC surface subject to multiple constraints on two axes: using $g(x, y) = 0$ (left); and using $g(x, y) = \mathrm{mod}(x, 0.5)\cos(19y) - x\,\mathrm{mod}(3y, 0.4)$ (right).
