Theory of Functional Connections Extended to Fractional Operators

The theory of functional connections, an analytical framework generalizing interpolation, was extended and applied in the context of fractional-order operators (integrals and derivatives). The extension was performed and presented for univariate functions, with the aim of determining the whole set of functions satisfying some constraints expressed in terms of integrals and derivatives of non-integer order. The objective of these expressions was to solve fractional differential equations or other problems subject to fractional constraints. Although this work focused on the Riemann–Liouville definitions, the method is, however, more general, and it can be applied with different definitions of fractional operators just by changing the way they are computed. Three examples are provided showing, step by step, how to apply this extension for: (1) one constraint in terms of a fractional derivative, (2) three constraints (a function, a fractional derivative, and an integral), and (3) two constraints expressed in terms of linear combinations of fractional derivatives and integrals.


Introduction
During the past few decades, fractional calculus has been applied in different fields of science. Introductory surveys can be found in [1,2]. In engineering, it finds applications in fields such as control theory [3], the theory of viscoelasticity in mechanics [4,5], in sedimentology with diffusion and transport in porous rocks (using fractional derivatives with the theory of fractals) [6], in (bio)chemistry with the modeling of polymers and proteins [7], in finance with stochastic computation with respect to fractional Brownian motion [8], and in medicine with the modeling of human tissue under mechanical loads [9], as well as in many other fields such as anomalous diffusion [10], anomalous convection [11], power laws [12], probability [13], optimal control [14], allometric scaling laws [15], long-range interactions [16], the description of galaxy rotation [17], market price dynamics [18], and so on.
The use of fractional derivatives and integrals (the term "fractional" must actually be intended as an extension to real orders, rather than a limitation to integer orders) in applied science and engineering has been dormant for a long time. There is no reason why the laws of natural phenomena must be restricted to integer-order derivatives and integrals, and the idea of extending these operators to real orders is actually quite old, dating back to 1695 [19]. The reason for this delay is perhaps that the various approaches/definitions (each coinciding, in some specific domains, with the classical integer-order definitions) provide different results.
In addition, solving fractional-order problems turns out to be much more difficult than in the integer-order case, and sophisticated (analytical and numerical) methods are necessary; indeed, they are the subject of ever-growing ongoing investigation.
In this study, the theory of functional connections (TFC), a new mathematical framework generalizing interpolation using functionals [20], is extended for the first time to fractional derivatives and integrals. The capability to derive expressions representing all functions subject to fractional constraints (the main objective of this article) has a direct impact on solving fractional differential equations [21] and, in general, on any other problem subject to fractional constraints. In fact, this capability allows transforming these constrained optimization problems into unconstrained problems, which can then be solved using simpler, more robust, more accurate, faster, and more reliable methods. Therefore, the main aim of this article was to pave the way for devising advanced mathematical tools to solve fractional differential equations (FDEs) using the TFC.
In previous TFC applications, functional interpolation problems were solved by expanding the TFC free function in terms of orthogonal polynomials (Chebyshev, Legendre, etc.). Since fractional derivatives and integrals involve the computation of negative powers of the independent variable (a complex value for x < 0), shifted Chebyshev polynomials (SCPs), defined in the x ∈ [0, 1] range, were adopted, and a brief description of SCPs and how to evaluate their derivatives is provided.
To achieve these goals, we first provide, in Section 2, a brief survey of the background materials on fractional calculus. Then, Section 3 summarizes what the TFC is, where it has already been applied, and how it derives functionals representing all functions always satisfying a set of linear constraints. Shifted Chebyshev polynomials and their application to represent derivatives of fractional order are discussed in Section 4. Finally, some numerical experiments showing the application of the TFC with different constraints involving fractional integrals and fractional derivatives are presented. The Appendices at the end of the paper collect more technical details.

Background on Fractional Calculus
To allow the use of the TFC with fractional-order operators, it is necessary to preliminarily provide some basic material on fractional calculus.

The Gamma Function
The introduction of fractional-order operators requires recalling the Gamma function, Γ(z), and some of its main properties. The Gamma function is defined as

Γ(z) = ∫_0^∞ t^(z−1) e^(−t) dt, Re(z) > 0, (1)

and it is analytically continuable in C\{0, −1, −2, −3, . . .}.
The extension of the factorial definition using the Gamma function is supported by the Bohr-Mollerup theorem [22]: the Gamma function is indeed the unique function defined on (0, ∞) satisfying the following properties:
1. Γ(1) = 1;
2. Γ(x + 1) = x Γ(x) for any x > 0;
3. log Γ(x) is a convex function.
Property 2 implicitly states that Γ(z + 1) = z! when z ∈ N, and the similarity to the analogous property of the factorial, n! = n (n − 1)!, is evident.
Along with the definition given in (1), the Gamma function allows expressing factorials in terms of the Gamma function, namely n! = Γ(n + 1) when n ∈ N, and extending the factorial operator to non-integer values. For instance,

(1/2)! = Γ(3/2) = √π/2.

This is also true for functions derived from the factorial, as, for instance, the binomial coefficient (α j), which can, therefore, be defined also for non-integer values as

(α j) = Γ(α + 1) / (Γ(j + 1) Γ(α − j + 1)). (2)
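As a quick numerical check of these Gamma-based extensions, the following sketch (plain Python using `math.gamma`; the helper name `frac_binom` is our own, not from the paper) evaluates the factorial identity and the generalized binomial coefficient (2):

```python
from math import gamma, sqrt, pi

def frac_binom(alpha: float, j: int) -> float:
    """Generalized binomial coefficient (alpha over j), Equation (2)."""
    return gamma(alpha + 1) / (gamma(j + 1) * gamma(alpha - j + 1))

# Gamma reproduces the factorial at integer arguments: n! = Gamma(n + 1)
assert abs(gamma(5 + 1) - 120.0) < 1e-9

# ...and extends it to non-integer values, e.g. (1/2)! = Gamma(3/2) = sqrt(pi)/2
assert abs(gamma(1.5) - sqrt(pi) / 2) < 1e-12

# For integer alpha the generalized binomial matches the classical one,
# while non-integer orders become meaningful as well
assert abs(frac_binom(5, 2) - 10.0) < 1e-12
print(frac_binom(0.5, 2))  # = -0.125
```

Note that `math.gamma` evaluates Γ at negative non-integer arguments too (e.g., Γ(−1/2) = −2√π), which is what makes the non-integer binomial coefficient well defined.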

Riemann-Liouville Fractional Integral
Cauchy's formula allows representing the n-times repeated integration on [a, x] in terms of just one integral:

J^n_a f(x) = 1/(n − 1)! ∫_a^x (x − t)^(n−1) f(t) dt,

an expression valid for n ∈ N. By replacing (n − 1)! = Γ(n), this formula holds for any n ∈ C, Re(n) > 0. What is obtained is the Riemann-Liouville (RL) fractional integral:

J^α_a f(x) = 1/Γ(α) ∫_a^x (x − t)^(α−1) f(t) dt,

which coincides with the usual definition of the integral operator when α is an integer. This RL fractional operator satisfies the following semigroup properties:

J^α_a [J^β_a f(x)] = J^(α+β)_a f(x) and d^m/dx^m [J^(m+α)_a f(x)] = J^α_a f(x),

for real α, β > 0 and m ∈ N.
As in the integer-order case, RL fractional integration improves the smoothness properties of functions (see Appendix A for an explanation).
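A minimal numerical sketch of the RL integral (the routine `rl_integral` below is our own illustration, not a library call): the substitution u = (x − t)^α removes the kernel singularity, after which a composite Simpson rule suffices; the result is checked against the exact value J^α_0[t] at x = 1, namely Γ(2)/Γ(2 + α).

```python
from math import gamma

def rl_integral(f, alpha, x, a=0.0, n=2000):
    """Riemann-Liouville fractional integral J^alpha_a f(x).

    With u = (x - t)**alpha, the singular-kernel integral becomes
    J^alpha_a f(x) = 1/Gamma(alpha+1) * int_0^{(x-a)^alpha} f(x - u**(1/alpha)) du,
    whose integrand is smooth; it is evaluated by composite Simpson (n even panels).
    """
    b = (x - a) ** alpha
    h = b / n
    total = 0.0
    for i in range(n + 1):
        u = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)  # Simpson weights 1,4,2,...,4,1
        total += w * f(x - u ** (1.0 / alpha))
    return total * h / 3.0 / gamma(alpha + 1)

# Check against the power rule: J^0.5_0 [t] at x = 1 equals Gamma(2)/Gamma(2.5)
exact = gamma(2) / gamma(2.5)
approx = rl_integral(lambda t: t, 0.5, 1.0)
assert abs(approx - exact) < 1e-10
```

For α = 0.5 and f(t) = t, the transformed integrand is the polynomial 1 − u², so Simpson's rule is exact up to rounding, which is why such a tight tolerance holds.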

Riemann-Liouville Fractional Derivative
Different kinds of operators inverting the RL integral are possible. For a sufficiently smooth function f, the RL fractional derivative of order α > 0 is defined as

RL D^α_a f(x) = d^m/dx^m [ J^(m−α)_a f(x) ], where m = ⌈α⌉.

In view of the mentioned semigroup properties of J^α_a, one can immediately check that RL D^α_a [J^α_a f(x)] = f(x). The reverse composition of the RL derivative and integral instead involves initial conditions at x = a, namely [23] (Theorem 2.23)

J^α_a [ RL D^α_a f(x) ] = f(x) − Σ_(k=1)^m (x − a)^(α−k)/Γ(α − k + 1) · lim_(z→a+) d^(m−k)/dz^(m−k) J^(m−α)_a [ f(z) ],

provided that J^(m−α)_a [ f(x) ] possesses a sufficient regularity. There is a substantial difference with integer-order differential operators: the operator thus-obtained has a non-local character. This is known as the "memory effect", and this property is more deeply explained in Appendix B.
The fractional derivative is a linear operator:

RL D^α_a [ c1 f(x) + c2 h(x) ] = c1 RL D^α_a [ f(x) ] + c2 RL D^α_a [ h(x) ],

and the RL derivative of power functions f(x) = (x − a)^β, with β > −1, is given by

RL D^α_a (x − a)^β = Γ(β + 1)/Γ(β − α + 1) (x − a)^(β−α).

By setting a = 0, we readily obtain the RL fractional derivative of monomials as

RL D^α_0 x^β = Γ(β + 1)/Γ(β − α + 1) x^(β−α), (3)

which is known as the fractional power rule. This expression is useful when expanding a function in Taylor series or in terms of orthogonal polynomials, as, for instance, the shifted Chebyshev polynomials.
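The fractional power rule (3) is a one-liner to evaluate; the sketch below (the helper name `rl_deriv_monomial` is ours) also highlights a feature that often surprises newcomers: the RL derivative of a constant is not zero for non-integer α.

```python
from math import gamma

def rl_deriv_monomial(beta: float, alpha: float, x: float) -> float:
    """RL fractional derivative of x**beta, initialized at 0 (Equation (3))."""
    return gamma(beta + 1) / gamma(beta - alpha + 1) * x ** (beta - alpha)

# alpha = 1 reduces to the classical derivative: d/dx x^3 = 3x^2, so 12 at x = 2
assert abs(rl_deriv_monomial(3, 1, 2.0) - 12.0) < 1e-12

# Composing two half derivatives of x^2 reproduces the first derivative 2x:
# D^0.5 x^2 = (Gamma(3)/Gamma(2.5)) x^1.5 and D^0.5 x^1.5 carries Gamma(2.5)/Gamma(2)
coeff = gamma(3) / gamma(2.5) * gamma(2.5) / gamma(2.0)
assert abs(coeff - 2.0) < 1e-12

# The RL derivative of a constant is NOT zero: D^alpha x^0 = x^(-alpha)/Gamma(1-alpha)
print(rl_deriv_monomial(0, 0.5, 1.0))  # = 1/Gamma(0.5) = 1/sqrt(pi)
```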
In some cases (see [23] (Theorem 2.3)), the RL derivative may satisfy a subsequent derivation property such as

RL D^β_a f(x) = d^m/dx^m [ RL D^α_a f(x) ], with β = m + α, (4)

which allows reducing fractional derivatives to the restricted range α ∈ (0, 1); indeed, when β ∈ (m, m + 1), m ∈ N, one can find α ∈ (0, 1) such that β = m + α and apply (4) (the use of (4) must be done, however, with caution, since it does not hold for any function f and for any pair of orders α and β). On the basis of this observation, we restricted our investigation to the range α ∈ (0, 1).

Caputo Fractional Derivative
The RL derivative is not the only operator performing the inversion of the RL integral. A further definition is known as the Caputo fractional derivative:

C D^α_a f(x) = J^(m−α)_a [ f^(m)(x) ], m = ⌈α⌉,

and also in this case, it is easy to check that C D^α_a [J^α_a f(x)] = f(x). The Caputo derivative requires a stronger regularity of the function f, but its left inversion by means of the RL integral involves initial conditions expressed in terms of integer-order derivatives, since [23]

J^α_a [ C D^α_a f(x) ] = f(x) − Σ_(k=0)^(m−1) f^(k)(a)/k! (x − a)^k.

This property is of importance in fractional differential equations since it implies that initial-value problems with the Caputo fractional derivative are initialized by values of the integer-order derivatives of the solution (which are standard in physical problems) and not by the limits of fractional-order integrals as with the RL derivative. However, the two derivatives are strictly connected, since

C D^α_a f(x) = RL D^α_a [ f(x) − Σ_(k=0)^(m−1) f^(k)(a)/k! (x − a)^k ],

and therefore, focusing on just one of them does not appear too restrictive. In this preliminary work, our attention was mainly devoted to the RL definition.

Grünwald-Letnikov Definitions
It is worthwhile to mention the Grünwald-Letnikov (GL) definition of fractional derivatives [24], which provides discrete access to the fractional calculus, and it is particularly suitable for numerical treatment. Indeed, the GL derivative is defined as

GL D^α_a f(x) = lim_(h→0+) 1/h^α Σ_(j=0)^(⌊(x−a)/h⌋) (−1)^j (α j) f(x − jh),

where the binomial coefficient (α j) must be intended as in (2). It is immediate to see that GL D^α_a allows a numerical approximation once a constant (possibly small) step h > 0, rather than the limit as h → 0+, is selected. Reference [24] analyzed the numerical stability, convergence, and error of FDEs using the GL derivative, where the asymptotic and the absolute stability of these methods were proven.
The GL definition is obtained from a generalization of the definition of the integer-order derivative expressed as the limit of the difference quotient,

f^(n)(x) = lim_(h→0) 1/h^n Σ_(j=0)^n (−1)^j (n j) f(x − jh),

after imposing, in order to ensure the convergence of the series, that the function has a value of 0 before the initial point a (see, for instance, [25]). Under the necessary assumptions on the smoothness of f, it is possible to prove an equivalence between the GL and RL definitions.
An integral of the GL type can be defined as well after considering a negative order −α in GL D^α_a, i.e., GL J^α_a f(x) = GL D^(−α)_a f(x), which instead corresponds to the RL integral initialized at a.
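The truncated GL sum is straightforward to code; the sketch below (our own helper `gl_deriv`, not from [24]) generates the weights (−1)^j (α j) by the standard recurrence and compares the result with the exact RL power rule, illustrating the first-order convergence in h.

```python
from math import gamma

def gl_deriv(f, alpha, x, a=0.0, h=1e-4):
    """Grunwald-Letnikov approximation of the order-alpha fractional derivative.

    The limit h -> 0+ is truncated at a finite step h; the weights
    w_j = (-1)^j * binom(alpha, j) obey w_0 = 1, w_j = w_{j-1} * (j - 1 - alpha) / j.
    """
    n = int((x - a) / h)
    total, w = 0.0, 1.0
    for j in range(n + 1):
        total += w * f(x - j * h)
        w *= (j - alpha) / (j + 1)   # advance the weight to index j + 1
    return total / h ** alpha

# Compare with the exact power rule (3): D^0.5 x^2 = Gamma(3)/Gamma(2.5) x^1.5
exact = gamma(3) / gamma(2.5)
approx = gl_deriv(lambda t: t * t, 0.5, 1.0, h=1e-4)
assert abs(approx - exact) < 1e-3   # plain GL is first-order accurate in h
```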

Background on the Theory of Functional Connections
The theory of functional connections (TFC) performs linear functional interpolation, and functional interpolation is a generalization of interpolation.Instead of selecting a function (or a class of functions) satisfying a set of constraints, functional interpolation derives functionals (called constrained expressions) representing the whole set of functions satisfying the constraints; stated in a different way, TFC derives functionals that always satisfy the whole set of constraints (the constraints are embedded in the functional).In this way, functional interpolation identifies the subset of the function space that fully satisfies the constraints.
These functionals contain a free function, g(x), and by spanning all possible expressions of the free function, the whole space of functions satisfying the constraints is covered. In particular, the free function always appears linearly in the TFC, thus simplifying optimization processes. This approach was introduced in [26], and then, immediate applications appeared for solving differential equations [27][28][29][30] and for other mathematical problems [31][32][33][34].
To give a simple example of what functional interpolation is, consider the functional:

f(x, g(x)) = g(x) + (x² − 2x)/(15 + 2π − π²) [g(π) − g(−3)] + ((3 − π) x + x²)/(5 − π) [1 − ġ(1)], (5)

where ġ denotes the derivative of g. This functional always satisfies the two constraints, f(−3) = f(π) and ḟ(1) = 1, regardless of the function g(x). The function g(x) is a free function, subject only to being defined where the constraints are specified. Functionals such as (5) represent the whole set of functions satisfying the set of constraints they are derived for. These functionals project the whole space of functions onto just the subspace fully satisfying the constraints. This way, constrained optimization problems, such as differential equations, can be transformed into unconstrained problems and, consequently, be solved using simpler, faster, more robust, and more accurate methods. Univariate functionals such as (5) can be generated by either one of these two formal expressions, called constrained expressions (CEs):

f(x, g(x)) = g(x) + Σ_(j=1)^n η_j(x, g(x)) s_j(x) (6)

and

f(x, g(x)) = g(x) + Σ_(j=1)^n φ_j(x) ρ_j(x, g(x)), (7)

where n is the number of linear constraints, g(x) is the free function, s_j(x) is a set of n user-defined linearly independent support functions, η_j(x, g(x)) are coefficient functionals, φ_j(x) are switching functions (they are 1 when evaluated at the constraint they reference and 0 when evaluated at all other constraints), and ρ_j(x, g(x)) are projection functionals representing the constraints written in terms of the free function.
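A compact numerical illustration of a constrained expression for the two constraints f(−3) = f(π) and ḟ(1) = 1 can be sketched as follows. The support functions {x, x²} and all helper names are our own choices for this sketch (any pair of linearly independent support functions would do); the switching functions φ1 and φ2 are derived from those supports.

```python
from math import pi, sin, cos

# One valid constrained expression for the constraints f(-3) = f(pi) and
# f'(1) = 1, built with the (arbitrarily chosen) support functions x and x^2.
DELTA = 15 + 2 * pi - pi ** 2   # nonzero, so this pair of supports is admissible

def phi1(x):
    """Switching function of the relative constraint f(-3) = f(pi)."""
    return (x * x - 2 * x) / DELTA

def phi2(x):
    """Switching function of the derivative constraint f'(1) = 1."""
    return ((3 - pi) * x + x * x) / (5 - pi)

def f(x, g, dg):
    """Constrained expression: g is the free function, dg its derivative."""
    return g(x) + phi1(x) * (g(pi) - g(-3)) + phi2(x) * (1 - dg(1))

# Whatever free function we pick (here g = sin), both constraints hold
assert abs(f(-3, sin, cos) - f(pi, sin, cos)) < 1e-12
h = 1e-6   # central finite difference to check f'(1) = 1
assert abs((f(1 + h, sin, cos) - f(1 - h, sin, cos)) / (2 * h) - 1.0) < 1e-6
```

Replacing `sin` by any other differentiable free function leaves both assertions true, which is exactly the "constraints embedded in the functional" property described above.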
Numerically efficient applications of the TFC have already been implemented in optimization problems, outperforming the current methods, especially in solving differential equations. In this area, the TFC has unified initial-, boundary-, and multi-value problems, usually providing fast solutions with high accuracy.

A Simple Explanatory Example
Let us consider the problem of determining functions f satisfying the initial condition:

D^α_0 [ f(x) ]_(x_0) = f_0, (8)

where, to adopt a more concise notation, from now on, D^α_0 will denote the fractional RL derivative of order α, previously identified as RL D^α_0, and D^α_0 [ f(x) ]_(x_0) its value at x = x_0. Using the "η" formulation (6) with the single support function s(x) = x^p, the constrained expression is

f(x, g(x)) = g(x) + η x^p. (9)

After imposing (8) at x = x_0, the η coefficient can be derived as

η = Γ(p − α + 1)/(Γ(p + 1) x_0^(p−α)) ( f_0 − D^α_0 [ g(x) ]_(x_0) ),

and the constrained expression becomes

f(x, g(x)) = g(x) + Γ(p − α + 1)/(Γ(p + 1) x_0^(p−α)) ( f_0 − D^α_0 [ g(x) ]_(x_0) ) x^p. (10)

This constrained expression represents the whole set of functions satisfying the given constraint (8). It can be used, for instance, in a constrained optimization process, subject to satisfying (8). By means of (10), an initial constrained optimization problem can be transformed into the unconstrained problem of finding the expression of the free function, namely g(x), generating the optimal f(x) [20].
The most-simple function satisfying the constraint (8) is obtained from (10) after setting g(x) = 0:

f(x) = Γ(p − α + 1)/(Γ(p + 1) x_0^(p−α)) f_0 x^p,

and indeed, it is easy to check that D^α_0 [ f(x) ]_(x_0) = f_0. In Figure 1, we show, for three different choices of the free function g, namely g(x) = x², g(x) = e^(−x/2), and g(x) = cos(2x), the plots of f(x, g(x)) (left plot) and of its RL derivative D^α_0 f(x, g(x)) (right plot). We imposed the constraint (8) with α = 0.8, x_0 = 1, and f_0 = 1, and we considered the constrained expression (9) with p = 2. Although we obtained three different functionals f(x, g(x)), all of their RL derivatives satisfy the constraint (8): indeed, in the right plot, we observe that D^α_0 [ f(x) ]_(x_0) = f_0. For the evaluation of the RL derivatives, we used the formulas in Appendix A.

Figure 1. f(x, g(x)) (left plot) and its RL derivative D^α_0 f(x, g(x)) (right plot) for some choices of g(x). Here, α = 0.8, x_0 = 1, f_0 = 1, and p = 2.
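The constraint satisfaction can also be verified in closed form, without quadrature. In the sketch below (our own illustration), we pick the free function g(x) = x³, whose RL derivative is available exactly via the power rule (3), so that D^α_0 f at x_0 can be evaluated analytically through the linearity of the RL derivative.

```python
from math import gamma

# Constraint data as in Figure 1: alpha = 0.8, x0 = 1, f0 = 1, p = 2
alpha, x0, f0, p = 0.8, 1.0, 1.0, 2

def D(beta, x):
    """RL derivative of order alpha of x**beta (fractional power rule (3))."""
    return gamma(beta + 1) / gamma(beta - alpha + 1) * x ** (beta - alpha)

# Free function g(x) = x^3, chosen so that its RL derivative is known exactly
Dg_at_x0 = D(3, x0)

# eta coefficient of the constrained expression (10)
eta = gamma(p - alpha + 1) / (gamma(p + 1) * x0 ** (p - alpha)) * (f0 - Dg_at_x0)

# By linearity, D^alpha f at x0 = D^alpha g at x0 + eta * D^alpha x^p at x0
Df_at_x0 = Dg_at_x0 + eta * D(p, x0)
assert abs(Df_at_x0 - f0) < 1e-12   # the fractional constraint is met exactly
```

The cancellation is exact by construction: η multiplies D^α_0[x^p] at x_0 to give precisely f_0 − D^α_0[g] at x_0, whatever the free function.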

Shifted Chebyshev Polynomials
Chebyshev polynomials of the first kind are orthogonal polynomials defined for t ∈ [−1, 1] as T_k(t) = cos(k arccos t), k = 0, 1, . . .. They can be built in different ways, such as, for instance, by means of the three-term recurrence relation:

T_(k+1)(t) = 2t T_k(t) − T_(k−1)(t), k = 1, 2, . . .,

starting with T_0(t) = 1 and T_1(t) = t. To operate with fractional-order operators initialized at some point x_0, it is convenient to use shifted Chebyshev polynomials (SCPs) preserving orthogonality on the shifted interval. Appendix C provides a summary of various approaches to perform the least squares; they can be adopted in optimization to estimate the coefficients of the free function, which is expanded in terms of shifted Chebyshev polynomials.
To simplify the notation, we consider the fractional integral and the fractional RL derivative initialized at x = 0, and we assume we are interested in approximating functions on the interval [0, 1]. The generalization to different intervals is always possible without particular difficulties.
SCPs on [0, 1] are connected to Chebyshev polynomials by the relationship:

T*_k(x) = T_k(2x − 1), x ∈ [0, 1],

and the corresponding three-term recurrence relation is

T*_(k+1)(x) = 2(2x − 1) T*_k(x) − T*_(k−1)(x), with T*_0(x) = 1 and T*_1(x) = 2x − 1. (11)

In order to expand free functions in the TFC, it is convenient to express SCPs in terms of monomials:

T*_k(x) = Σ_(j=0)^k a_(k,j) x^j, (12)

where coefficients a_(k,j), j = 0, 1, . . ., k, mapping SCPs to monomials, are obtained from the recursive form (11) of SCPs; a closed form can be obtained, for instance, from [35] as

a_(k,j) = (−1)^(k−j) 2^(2j) k (k + j − 1)! / ((k − j)! (2j)!), k ≥ 1.

Therefore, it is possible to represent the SCPs up to degree m in the matrix form

[T*_0(x), . . ., T*_m(x)]^T = A [1, x, . . ., x^m]^T,

where A is the (m + 1) × (m + 1) lower-triangular matrix collecting the a_(k,j); for instance, the first few polynomials are easily evaluated as

T*_0 = 1, T*_1 = −1 + 2x, T*_2 = 1 − 8x + 8x², T*_3 = −1 + 18x − 48x² + 32x³.

The matrix expression of the SCPs allows us to provide the RL fractional derivatives of the SCPs after using (3) for the RL derivatives of monomials:

D^α_0 [T*_0(x), . . ., T*_m(x)]^T = A D^α [x^(−α), x^(1−α), . . ., x^(m−α)]^T, (13)

where D^α is a diagonal matrix made of ratios of Gamma functions:

D^α = diag( Γ(1)/Γ(1 − α), Γ(2)/Γ(2 − α), . . ., Γ(m + 1)/Γ(m + 1 − α) ).

It is well known that integer-order derivatives of SCPs can be expressed in terms of SCPs themselves according to

d/dx [T*_0, . . ., T*_m]^T = B [T*_0, . . ., T*_m]^T and d²/dx² [T*_0, . . ., T*_m]^T = C [T*_0, . . ., T*_m]^T,

where the lower-triangular matrices B and C can be easily built in a recursive way, column by column, starting from the expression of their first column. For instance, for m = 5, the rows of B encode

d/dx T*_1 = 2 T*_0, d/dx T*_2 = 8 T*_1, d/dx T*_3 = 6 T*_0 + 12 T*_2, d/dx T*_4 = 16 T*_1 + 16 T*_3, d/dx T*_5 = 10 T*_0 + 20 T*_2 + 20 T*_4.

To obtain a similar relationship with RL derivatives, we first observe that the inverse mapping of (12) is also available, namely

x^j = Σ_(k=0)^j ā_(j,k) T*_k(x),

where coefficients ā_(j,k), k = 0, 1, . . ., j, are given (after simple manipulations) by [35]

ā_(j,k) = (2^(1−2j)/c_k) (2j j−k), with c_0 = 2 and c_k = 1 for k ≥ 1,

where (2j j−k) is the binomial coefficient. Thus, writing Ā for the matrix collecting the ā_(j,k), we note that Ā = A^(−1), and therefore, thanks to (13), one obtains

D^α_0 [T*_0(x), . . ., T*_m(x)]^T = x^(−α) A D^α Ā [T*_0(x), . . ., T*_m(x)]^T. (14)
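The recursive construction of the matrix A and the term-by-term fractional differentiation of the SCPs can be sketched as follows (pure Python; the helper names are our own). The code builds the monomial coefficients of T*_0, . . ., T*_m from the three-term recurrence and cross-checks them against the closed-form a_(k,j).

```python
from math import gamma, factorial

def scp_matrix(m):
    """Rows of A: monomial coefficients of T*_0 .. T*_m from the recurrence
    T*_{k+1}(x) = 2(2x - 1) T*_k(x) - T*_{k-1}(x)."""
    A = [[1.0] + [0.0] * m, [-1.0, 2.0] + [0.0] * (m - 1)]
    for k in range(1, m):
        prev, cur = A[k - 1], A[k]
        nxt = [0.0] * (m + 1)
        for j in range(m + 1):
            nxt[j] = -2.0 * cur[j] - prev[j]      # -2*T*_k - T*_{k-1}
            if j > 0:
                nxt[j] += 4.0 * cur[j - 1]        # + 4x*T*_k
        A.append(nxt)
    return A

A = scp_matrix(5)

def a_closed(k, j):
    """Closed form a_{k,j} = (-1)^(k-j) 2^(2j) k (k+j-1)! / ((k-j)! (2j)!)."""
    return (-1) ** (k - j) * 4 ** j * k * factorial(k + j - 1) \
           / (factorial(k - j) * factorial(2 * j))

for k in range(1, 6):
    for j in range(k + 1):
        assert abs(A[k][j] - a_closed(k, j)) < 1e-9

def scp_frac_deriv(k, alpha, x):
    """RL fractional derivative of T*_k via the power rule applied term by term."""
    return sum(A[k][j] * gamma(j + 1) / gamma(j + 1 - alpha) * x ** (j - alpha)
               for j in range(k + 1))

# Spot check: D^0.5 T*_1 = 2 D^0.5 x - D^0.5 1, evaluated at x = 1
from math import gamma as G
assert abs(scp_frac_deriv(1, 0.5, 1.0) - (2 / G(1.5) - 1 / G(0.5))) < 1e-12
```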

Example
Let us consider, on the interval [0, 1], a sufficiently smooth function g(t) and its approximation g_m(t) ≈ g(t) obtained by the first m + 1 terms of the shifted Chebyshev expansion, namely

g_m(t) = Σ_(k=0)^m ξ_k T*_k(t),

where coefficients ξ_k can be easily evaluated after computing (by some accurate quadrature rule) the integrals:

ξ_k = 2/(π c_k) ∫_0^1 g(t) T*_k(t) / √(t − t²) dt, with c_0 = 2 and c_k = 1 for any k ≥ 1.

By using (14), we are, hence, able to evaluate

RL D^α_0 [g_m(t)] = Σ_(k=0)^m ξ_k RL D^α_0 [T*_k(t)],

which provides an approximation to RL D^α_0 [g(t)] in terms of shifted Chebyshev polynomials. In the left plot of Figure 2, we show the function g(t) and the error |g(t) − g_m(t)| for some values of m. The RL derivative of g_m(t), with m = 12, is instead presented in the right plot of Figure 2 for α ∈ {0.5, 0.7, 0.9}.
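The projection integrals for the ξ_k can be computed by Gauss-Chebyshev quadrature, which, after the substitution t = (1 + cos θ)/2, amounts to sampling g at the Chebyshev nodes. A minimal sketch (our own helper names; the test function g(t) = e^t is an assumption for illustration, not the one used in Figure 2):

```python
from math import cos, acos, pi, exp

def scp_coeffs(g, m, N=64):
    """Coefficients xi_k of the degree-m shifted Chebyshev expansion of g on
    [0, 1], via Gauss-Chebyshev quadrature of the projection integrals
    (equivalently, sampling at the nodes t_i = (1 + cos th_i)/2)."""
    th = [pi * (i + 0.5) / N for i in range(N)]
    gv = [g((1 + cos(t)) / 2) for t in th]
    xi = []
    for k in range(m + 1):
        s = sum(gv[i] * cos(k * th[i]) for i in range(N))
        xi.append(s / N if k == 0 else 2 * s / N)   # c_0 = 2 halves the k = 0 term
    return xi

def scp_eval(xi, t):
    """Evaluate sum_k xi_k T*_k(t), with T*_k(t) = cos(k * arccos(2t - 1))."""
    return sum(c * cos(k * acos(2 * t - 1)) for k, c in enumerate(xi))

# For a smooth g, the coefficients decay extremely fast: m = 12 already gives
# an approximation accurate to near machine precision
xi = scp_coeffs(exp, 12)
assert abs(scp_eval(xi, 0.3) - exp(0.3)) < 1e-10
```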

Numerical Examples
This section shows, by three distinct examples, how to apply the TFC to obtain functionals, called constrained expressions, with embedded fractional constraints. The first example involves, again, a single fractional derivative constraint, now handled using the switching-projection TFC formulation, while the second example shows how to derive, via the TFC η formulation [20], the constrained expression subject to three constraints specified by the values of the function, of its fractional derivative, and of its fractional integral. The third example is devoted to presenting the use of the switching-projection formulation [20] to derive the constrained expression for two constraints defined as linear combinations of fractional constraints specified at distinct locations.

Single Fractional Constraint
Let us derive, by the switching-projection TFC formulation (7), the functional representing all functions satisfying the single fractional derivative constraint:

D^(1/3)_0 [ f(x) ]_2 = f_2. (15)

The number of constraints is n = 1. Therefore, just one support function and one switching function are needed. Let s(x) = x² be the support function. The switching function is φ(x) = ξ s(x) = ξ x², and it is subject to

D^(1/3)_0 [ φ(x) ]_2 = ξ Γ(3)/Γ(8/3) 2^(5/3) = 1,

from which the unknown coefficient ξ = Γ(8/3)/(Γ(3) 2^(5/3)) is computed. Therefore, the expressions of the switching function and of the projection functional are

φ(x) = Γ(8/3)/(Γ(3) 2^(5/3)) x² and ρ(g(x)) = f_2 − D^(1/3)_0 [ g(x) ]_2,

and the constrained expression, f(x, g(x)) = g(x) + φ(x) ρ(g(x)), representing the whole set of functions satisfying (15), is

f(x, g(x)) = g(x) + Γ(8/3)/(Γ(3) 2^(5/3)) ( f_2 − D^(1/3)_0 [ g(x) ]_2 ) x². (16)

Note that, by setting g(x) = 0, the simplest interpolation result is obtained, i.e.,

f(x) = Γ(8/3)/(Γ(3) 2^(5/3)) f_2 x².

Equation (16) can also be validated using any expression of the free function. For example, by setting g(x) = c x + d cos x, thanks to (A2) and (A8), the constraint (15) is satisfied for any values of c and d.

Three Mixed Constraints
This example includes n = 3 constraints: a function value, a fractional derivative, and a fractional integral constraint, of the form

f(x_1) = f_1, D^α_0 [ f(x) ]_(x_2) = d_2, and J^β_0 [ f(x) ]_(x_3) = j_3. (17)

For these constraints, the constrained expression is derived using the η formulation (6). Let the support functions be s(x) = {1, x, x²}, so that f(x, g(x)) = g(x) + η_1 + η_2 x + η_3 x². Then, the constraints imply a 3 × 3 linear system in the η_j coefficients, whose matrix collects the values, the fractional derivatives, and the fractional integrals of the support functions at the constraint points. By inverting this matrix, the η_j coefficients can be computed in terms of the free function, and all functions simultaneously satisfying the constraints given in (17) can be represented by the resulting constrained expression.
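The derivation above can be sketched numerically. In the following (our own illustration), the constraint points and targets, f(0) = 1, D^(1/2)_0[f] at 1 equal to 0, and J^(1/2)_0[f] at 1 equal to 2, are illustrative assumptions; the free function g(x) = x³ is chosen so that every operator applied to it is available in closed form via the power rules.

```python
from math import gamma

alpha = beta = 0.5
targets = [1.0, 0.0, 2.0]   # assumed values: f(0), D^0.5 f at 1, J^0.5 f at 1

def D(b, x):
    """RL derivative of order alpha of x**b (power rule)."""
    return gamma(b + 1) / gamma(b - alpha + 1) * x ** (b - alpha)

def J(b, x):
    """RL integral of order beta of x**b."""
    return gamma(b + 1) / gamma(b + beta + 1) * x ** (b + beta)

# Rows: the three constraint operators applied to the supports {1, x, x^2}
M = [[1.0, 0.0, 0.0],                       # support values at x = 0
     [D(0, 1.0), D(1, 1.0), D(2, 1.0)],     # fractional derivatives at x = 1
     [J(0, 1.0), J(1, 1.0), J(2, 1.0)]]     # fractional integrals at x = 1

# Constraint operators applied to the free function g(x) = x^3
Cg = [0.0, D(3, 1.0), J(3, 1.0)]
rhs = [t - c for t, c in zip(targets, Cg)]

def solve3(M, b):
    """Cramer's rule for a 3x3 linear system."""
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                   - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                   + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(M)
    xs = []
    for col in range(3):
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][col] = b[r]
        xs.append(det(Mc) / d)
    return xs

eta = solve3(M, rhs)
# By linearity, each constraint applied to f = g + eta1 + eta2 x + eta3 x^2
# equals C_i(g) + sum_j eta_j C_i(s_j), which must hit the target values
for i in range(3):
    Cf = Cg[i] + sum(eta[j] * M[i][j] for j in range(3))
    assert abs(Cf - targets[i]) < 1e-12
```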

Two Linear Combinations of Fractional Constraints
This example is provided to show how to proceed with constraints given as linear combinations of fractional operators. Let the two constraints be written as the linear functionals

C_1(f) = c_1 and C_2(f) = c_2, (18)

each defined as a linear combination of fractional derivatives and integrals of f evaluated at distinct locations. The constrained expression for these two constraints is derived using the TFC switching-projection formulation given in (7). Using the support functions s(x) = {e^x, x³}, the switching functions are expressed in terms of the α_ij coefficients as φ_1(x) = α_11 e^x + α_21 x³ and φ_2(x) = α_12 e^x + α_22 x³.
To better clarify, let us write the constraints (18) as C_1(f) = c_1 and C_2(f) = c_2 and express the switching conditions as

C_i(φ_j) = δ_ij, i, j ∈ {1, 2},

that is, each switching function must return 1 when evaluated at its own constraint and 0 at the other. Therefore, in view of the linearity of C_1 and C_2, the switching conditions are given by a linear system in the α_ij. After evaluating the constraints C_1 and C_2 at the support functions e^x and x³, the coefficients α_ij are computed by the simple matrix inversion [20]:

[α_11 α_12; α_21 α_22] = [C_1(e^x) C_1(x³); C_2(e^x) C_2(x³)]^(−1), (19)

and the switching functions follow. The projection functionals are [20]

ρ_1(x, g(x)) = c_1 − C_1(g) and ρ_2(x, g(x)) = c_2 − C_2(g), (20)

and by using the expressions provided by (19) and (20), the constrained expression representing all functions satisfying the constraints given in (18) is

f(x, g(x)) = g(x) + φ_1(x) ρ_1(x, g(x)) + φ_2(x) ρ_2(x, g(x)). (21)

The constrained expressions must be validated by showing that they satisfy all constraints, no matter what the free function is. In particular, by selecting g(x) = 0, the functional interpolation problem is transformed into a simple interpolation problem that uses the selected support functions.
Equation (21) satisfies the constraints given in (18) for any free function. Let us prove it by selecting (for simplicity) g(x) = c x², where c is an unknown constant. Using this free function, the projection functionals become ρ_1 = c_1 − c C_1(x²) and ρ_2 = c_2 − c C_2(x²), and using the expressions of the switching functions and of the projection functionals, the constrained expression becomes

f(x, c) = c x² + φ_1(x) ( c_1 − c C_1(x²) ) + φ_2(x) ( c_2 − c C_2(x²) ). (22)

Thus, by replacing this expression in the constraints, we obtain the residuals:

R_1(f) = C_1(f) − c_1 and R_2(f) = C_2(f) − c_2,

which have values at the machine error level, as we show in Figure 3, where R_1(f) and R_2(f) are plotted as functions of c ∈ [0, 1] (note that all available digits provided by Matlab were used for the coefficients and not just the few digits displayed in (19) and (22)).

Discussion
This work extended the application of the theory of functional connections (TFC), an analytical framework to derive functionals representing all functions interpolating a set of constraints, to constraints made of fractional-order operators (derivatives and integrals). The TFC has been developed for constraints defined by points, integer derivatives and integrals, limits, components, and any linear combination of them, for the univariate and the multivariate case [20]. In this article, constraints made of fractional derivatives and fractional integrals for the univariate case were included in the TFC framework. Although the extension was presented for the Riemann-Liouville definition of the fractional derivative and integral, this choice is actually not restrictive, since the method presented can be used, as it is, for other definitions; the only difference is in the corresponding way to compute the fractional operators.
The representation by the TFC of the whole set of functions interpolating some linear constraints was obtained by deriving analytical functionals, called constrained expressions, containing a free function, g(x). No matter what g(x) is, the constrained expression analytically satisfies the constraints; in addition, by spanning all possible expressions of g(x), the whole set of functions interpolating the constraints is obtained.
The extension was validated by three illustrative examples.
Smoothing property: if Φ(x) = O((x − a)^μ) for x → a, with μ > −1, then

J^α_a [ Φ(x) ] = O((x − a)^(μ+α)) for x → a,

where O(•) indicates that J^α_a [Φ(x)] is of the same order as the infinitesimal (x − a)^(μ+α) for x → a. Furthermore, we have the following.

Memory effect property: let α ∈ (0, 1] and calculate the difference H between two evaluations of J^α_0 f(x) at x_1 and x_2 such that x_1 < x_2:

H = J^α_0 [ f(x_2) ] − J^α_0 [ f(x_1) ] = 1/Γ(α) ∫_0^(x_1) [ (x_2 − t)^(α−1) − (x_1 − t)^(α−1) ] f(t) dt + 1/Γ(α) ∫_(x_1)^(x_2) (x_2 − t)^(α−1) f(t) dt.

If α = 1, the first integral is canceled:

H = ∫_(x_1)^(x_2) f(t) dt.

Therefore, H depends only on what happens in [x_1, x_2], and no information about f(x) is needed for x in [0, x_1). This property expresses the locality of the integer-order operators.
For the fractional case, the situation is different.In general, the first integral is not zero, and therefore, to evaluate H, it is necessary to know the history of f (x) from the initial point 0 to the point x 2 of interest; this is to underline the non-locality of fractional differential and integral operators, which characterizes the memory effect in the process.
Interesting applications of the memory effect of the fractional integrals are the description of the behavior of a crowd of pedestrians, especially to characterize the competitive and cooperative interactions between pedestrians [37], and the generalization of the dynamical model of love/hate [38].

Figure 3. Residual of the constraints C_1 (left plot) and C_2 (right plot) when applied to the functional f(x, g(x)) given by (22).
There are several different approaches to solve by least squares the over-determined linear system A x = b, where A ∈ R^(n×m):
• The normal equations (the common solution): x = (A^T A)^(−1) A^T b;
• The QR decomposition: A = QR, then x = R^(−1) Q^T b, where Q has orthonormal columns and R is an upper-triangular matrix;
• The SVD decomposition: A = UΣV^T, then x = A^+ b = VΣ^+ U^T b, where U and V are orthogonal matrices and Σ^+ is the pseudo-inverse of Σ, which is formed by replacing every non-zero diagonal entry by its reciprocal and transposing the resulting matrix;
• The Cholesky decomposition: A^T A x = U^T U x = A^T b, then x = U^(−1) (U^(−T) A^T b), where U is an upper-triangular matrix.
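A minimal sketch contrasting the first two approaches on a small two-column problem (pure Python; the helper names are our own). Both routes must return the same least-squares solution for a full-rank A, although the QR route is better conditioned in general.

```python
def lstsq_normal(A, b):
    """Normal equations x = (A^T A)^{-1} A^T b for a tall matrix with m = 2
    columns, solving the resulting 2x2 system by hand."""
    n = len(A)
    AtA = [[sum(A[i][r] * A[i][c] for i in range(n)) for c in range(2)]
           for r in range(2)]
    Atb = [sum(A[i][r] * b[i] for i in range(n)) for r in range(2)]
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    return [(AtA[1][1] * Atb[0] - AtA[0][1] * Atb[1]) / det,
            (AtA[0][0] * Atb[1] - AtA[1][0] * Atb[0]) / det]

def lstsq_qr(A, b):
    """Thin QR by modified Gram-Schmidt (m = 2), then x = R^{-1} Q^T b."""
    n = len(A)
    q0 = [A[i][0] for i in range(n)]
    r00 = sum(v * v for v in q0) ** 0.5
    q0 = [v / r00 for v in q0]
    r01 = sum(q0[i] * A[i][1] for i in range(n))
    q1 = [A[i][1] - r01 * q0[i] for i in range(n)]
    r11 = sum(v * v for v in q1) ** 0.5
    q1 = [v / r11 for v in q1]
    c0 = sum(q0[i] * b[i] for i in range(n))
    c1 = sum(q1[i] * b[i] for i in range(n))
    x1 = c1 / r11
    x0 = (c0 - r01 * x1) / r00
    return [x0, x1]

# Fit b ~ x0 + x1*t over a few points (one point slightly perturbed)
ts = [0.0, 0.25, 0.5, 0.75, 1.0]
A = [[1.0, t] for t in ts]
b = [1.0 + 2.0 * t + (0.01 if i == 2 else 0.0) for i, t in enumerate(ts)]
assert all(abs(u - v) < 1e-10 for u, v in zip(lstsq_normal(A, b), lstsq_qr(A, b)))
```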