Article

Theory on Linear L-Fractional Differential Equations and a New Mittag–Leffler-Type Function

Departament de Matemàtiques, Universitat de València, 46100 Burjassot, Spain
Fractal Fract. 2024, 8(7), 411; https://doi.org/10.3390/fractalfract8070411
Submission received: 19 June 2024 / Revised: 9 July 2024 / Accepted: 11 July 2024 / Published: 13 July 2024
(This article belongs to the Special Issue Mittag-Leffler Function: Generalizations and Applications)

Abstract:
The L-fractional derivative is defined as a certain normalization of the well-known Caputo derivative, so alternative properties hold: smoothness and finite slope at the origin for the solution, velocity units for the vector field, and a differential form associated to the system. We develop a theory of this fractional derivative as follows. We prove a fundamental theorem of calculus. We deal with linear systems of autonomous homogeneous parts, which correspond to Caputo linear equations of non-autonomous homogeneous parts. The associated L-fractional integral operator, which is closely related to the beta function and the beta probability distribution, and the estimates for its norm in the Banach space of continuous functions play a key role in the development. The explicit solution is built by means of Picard’s iterations from a Mittag–Leffler-type function that mimics the standard exponential function. In the second part of the paper, we address autonomous linear equations of sequential type. We start with sequential order two and then move to arbitrary order by dealing with a power series. The classical theory of linear ordinary differential equations with constant coefficients is generalized, and we establish an analog of the method of undetermined coefficients. The last part of the paper is concerned with sequential linear equations of analytic coefficients and order two.

1. Introduction

1.1. Literature Review

Fractional calculus is concerned with non-integer differentiation, where the new derivative operator is often presented as an integral expression with respect to a kernel function. The operator depends on the fractional order or index, which may be a real number in (0,1), a real number with no bounds, or even a complex value, and the ordinary derivative is retrieved for order 1. Good expositions of the topic are given in the monographs [1,2,3,4,5,6]. There are many notions of fractional derivatives, and different approaches and rules have been followed to study these operators and associated differential equations [7,8,9,10,11,12,13]. Among all of the definitions, in this paper, we will consider the important Caputo fractional operator, with the consequent Caputo fractional differential equations. This operator was proposed nearly sixty years ago in [14] in the context of viscoelasticity theory. However, it is still of use in current mathematical and applied research; see for example the recent publications [15,16,17,18,19,20,21]. The operator is defined as a convolution with respect to a singular kernel so that a continuous delay is incorporated into the differential equation. This definition brings about a new kind of functional differential equations, which exhibit memory and hereditary effects that may capture different dynamics more flexibly. Compared to the Riemann–Liouville formulation, the ordinary derivative is placed within the integral so that initial conditions are posed as in the classical sense. Due to the applicability of fractional calculus and the similarities with ordinary calculus, some definitions and computations in the literature lack sufficient rigor, as pointed out in [10]; thus, we aim at giving precise results, in line with [3,10], for instance.
Throughout this article, we will be interested in explicit and closed-form solutions to fractional differential equations. In fact, we will build a theory on a new class of fractional differential equations and their corresponding solutions, but details will be given later. By explicit solution, we mean a state or response function that can be solved and isolated, whereas a closed-form solution refers to a more detailed final expression in terms of the input data. For Caputo fractional differential equations, there are many works that construct explicit solutions, often in the realm of applicable models. In the homogeneous and autonomous linear case, the solution depends on the most important function in fractional calculus, the (one-parameter) Mittag–Leffler function, which is defined as a power-series expansion that extends the Taylor series of the exponential function [22,23,24,25,26]. The theory of fractional Taylor series was first introduced in [27], where some examples of homogeneous linear equations were shown. For non-homogeneous linear models, the two-parameter Mittag–Leffler function appears in the solution’s expression too, within a convolution; this result can be deduced by means of Picard’s iterations [28]. When moving to nonlinear equations, fractional power series may be employed as well, albeit the recursive relation for the expansion’s coefficients is not solvable in closed form. Some examples, which were published quite recently, are the logistic equation [29], the Bernoulli equation [30], SIS equations [31], and general compartmental models with polynomial nonlinearity [32]. In fact, the Cauchy–Kovalevskaya theorem has just been proved for systems of Caputo fractional differential equations with analytic inputs [33], hence giving a theoretical justification of the method in general. Since fractional calculus differs from standard calculus (product rule, chain rule, etc. [34]), the contribution [33] circumvents the problems and employs the method of majorants and the implicit-function theorem to achieve a proof of the Cauchy–Kovalevskaya theorem. For variations and generalizations of the Caputo operator, which expand the possible kernel functions, power series also play an important role as well, for instance, for Prabhakar fractional logistic equations [35] and Caputo generalized proportional fractional logistic equations [36]. Further explicit expressions compared to power-series expansions are not usually available; see the discussions in [37,38,39]. There are alternative analytical techniques, such as the Laplace-transform method [18,40,41,42] (which is often applied with formal calculations), which has even been used in the stochastic sense together with other probabilistic tools [43]. As most Caputo models do not possess explicit solutions, numerical schemes have been implemented to compute approximations on mesh discretizations [44,45,46]. Building numerical solvers for fractional models is much more difficult than in the standard integer-order case due to persistent memory terms. Here, we will not use Laplace transforms or numerical resolutions; we will focus on power-series-related methods instead, with rigorous proofs of convergence.
Motivated by issues with the Caputo fractional derivative, in this paper, we investigate a variant that has been applied in mechanics, already called the L-fractional derivative by other authors, with associated L-fractional differential equations [47,48]. It has also been introduced in the logistic equation for growth processes [49]. The definition is based on normalizing the Caputo operator, so that the fractional derivative of the identity function is 1. With such an approach, as will be seen, the class of fractional differentiable functions is enlarged from absolute continuity to classical analyticity so that the calculus is less restrictive. It is true that the normalization of fractional derivatives has a straightforward definition, but it gives rise to distinct and interesting geometrical, physical, and qualitative features. Thus, it should be further investigated in theory and in modeling. See [49] and the recent arXiv preprint [50], for example. In contrast to the Caputo derivative, the ordinary derivative of an L-fractional solution is always finite at the initial instant, which likely makes more sense when modeling real dynamics. The L-fractional derivative can be interpreted in terms of differentials [51,52,53,54,55], with usual units of time 1 in the vector field in the model. Thus, the disadvantages of the Caputo derivative are overcome. Although the normalization is directly related to the original Caputo fractional derivative and numerical solvers available for Caputo fractional differential equations are readily extended to the L-fractional situation, the new L-fractional differential equations exhibit many properties, and the search for solutions thus deserves specific attention. We develop a complete theory on linear L-fractional differential equations, with ideas that might be adapted to other fractional operators. Interestingly, the theory provides a new insight into the classical exposition of linear ordinary differential equations, and it gives rise to the definition of a new Mittag–Leffler-type function with a certain power series. As in other treatments for the Caputo derivative [56,57], we deal with sequential-type models by composing the L-fractional derivative.
A related fractional derivative that could be investigated in the future is the Λ-fractional derivative, which normalizes the Riemann–Liouville operator instead [58,59].
In the article, we fix the fractional order $\alpha \in (0,1)$. The case $\alpha = 1$ is possible as well, and it corresponds to the classical integer-order setting.

1.2. Previous Context

This subsection is based on the references previously cited. In this paper, all integrals will be understood in the sense of Lebesgue, which may be interpreted as improper Riemann integrals or Riemann integrals under appropriate conditions, for example, the continuity of the integrand. Let $L^1[0,T]$ be the Lebesgue space of integrable functions on the interval $[0,T]$, $T>0$. If the function $x:[0,T]\to\mathbb{C}^d$ belongs to $L^1[0,T]$, then its Riemann–Liouville fractional integral is defined as [3,10]
\[ {}^{\mathrm{RL}}J^{\alpha} x(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} x(\tau)\, d\tau = \frac{1}{\Gamma(\alpha)}\, \big( t^{\alpha-1} * x \big)(t), \]
where $\alpha \in (0,1)$, $*$ denotes the convolution, and
\[ \Gamma(z) = \int_0^{\infty} \tau^{z-1} e^{-\tau}\, d\tau \]
is the gamma function. The gamma function generalizes the factorial: $\Gamma(n+1) = n!$ for integers $n \geq 0$. As $x \in L^1[0,T]$ and $t^{\alpha-1} \in L^1[0,T]$, a standard result tells us that the convolution in (1) defines an $L^1[0,T]$ function; in particular, it is pointwise defined almost everywhere on $[0,T]$ (i.e., everywhere except on a set of Lebesgue measure zero). Of course, there are functions for which the Riemann–Liouville integral exists for every $t \in [0,T]$. Some texts define (1) whenever the integral exists, but that is certainly imprecise.
We say that $x : [0,T] \to \mathbb{C}^d$ is absolutely continuous if its derivative $x'$ exists almost everywhere, $x' \in L^1[0,T]$, and
\[ x(t) = x(0) + \int_0^t x'(s)\, ds \]
for all $t \in [0,T]$; i.e., Barrow's rule holds in the Lebesgue sense. These conditions are weaker than the continuous differentiability demanded by Riemann integration. Essentially, we are saying that $x$ belongs to the Sobolev space $W^{1,1}[0,T]$ with values in $\mathbb{C}^d$. The identity (2) is necessary, as Cantor's function shows. According to [10] (Proposition 3.2), the operator ${}^{\mathrm{RL}}J^{\alpha}$ from (1) maps $W^{1,1}[0,T]$ into $W^{1,1}[0,T]$ (it does not map infinitely differentiable functions $C^{\infty}[0,T]$ into continuously differentiable functions $C^{1}[0,T]$ in general). For absolutely continuous functions on $[0,T]$, the Riemann–Liouville fractional derivative is defined as [3,10]
\[ {}^{\mathrm{RL}}D^{\alpha} x(t) = \frac{d}{dt}\, {}^{\mathrm{RL}}J^{1-\alpha} x(t) = \frac{1}{\Gamma(1-\alpha)}\, \frac{d}{dt} \int_0^t \frac{x(\tau)}{(t-\tau)^{\alpha}}\, d\tau, \]
where $\alpha \in (0,1)$ is the fractional order of differentiation. Note that ${}^{\mathrm{RL}}J^{1-\alpha} x$ is absolutely continuous on $[0,T]$; therefore, it makes sense to differentiate ${}^{\mathrm{RL}}J^{1-\alpha} x$ almost everywhere on $[0,T]$.
The Caputo fractional derivative is defined as [3,10]
\[ {}^{C}D^{\alpha} x(t) = {}^{\mathrm{RL}}J^{1-\alpha} x'(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{x'(\tau)}{(t-\tau)^{\alpha}}\, d\tau, \]
where $\alpha \in (0,1)$ is the fractional order of differentiation and $t \in [0,T]$. Compared to (3), the ordinary derivative is placed within the integral. The operator (4) is a convolution with continuous delay with respect to a singular kernel
\[ K(t-\tau) = (t-\tau)^{-\alpha}. \]
Since $x' \in L^1[0,T]$, the Caputo derivative ${}^{\mathrm{RL}}J^{1-\alpha} x'(t)$ exists almost everywhere on $[0,T]$, and it belongs to $L^1[0,T]$. The boundary values of the operator are
\[ {}^{C}D^{0^+} x(t) = x(t) - x(0), \]
for every $t \in [0,T]$, and, if $x$ is continuously differentiable on $[0,T]$ [3] (page 37),
\[ {}^{C}D^{1} x(t) = x'(t), \]
for all $t \in [0,T]$. Then, it interpolates between the difference $x(t) - x(0) = \int_0^t x'(\tau)\, d\tau$, which is related to the mean value of $x'$, and the ordinary derivative $x'(t)$.
Useful examples of computation for (4) are
\[ {}^{C}D^{\alpha} t^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}\, t^{\beta-\alpha}, \]
for powers β > 0 [60]. In particular,
\[ {}^{C}D^{\alpha} t = \frac{t^{1-\alpha}}{\Gamma(2-\alpha)} \]
and
\[ {}^{C}D^{\alpha} 1 = 0. \]
Therefore, while ${}^{C}D^{\alpha} c = 0$ holds for constants $c \in \mathbb{C}$, it is not true that ${}^{C}D^{\alpha} t = 1$.
Motivated by the definition of ordinary differential equations, a Caputo fractional differential equation is an equation of the form
\[ {}^{C}D^{\alpha} x(t) = f(t, x(t)), \]
with an initial condition or state $x(0) = x_0$, where $f : [0,T] \times \Omega \subseteq [0,T] \times \mathbb{R}^d \to \mathbb{R}^d$, or $f : [0,T] \times \Omega \subseteq [0,T] \times \mathbb{C}^d \to \mathbb{C}^d$, is a continuous function such that $x_0 \in \Omega$. Problem (10) can be interpreted in an almost-everywhere sense, considering that ${}^{C}D^{\alpha} x \in L^1[0,T]$. As usual, the equation is said to be autonomous if $f(t,x)$ does not depend on $t$ explicitly, i.e., $f(t,x) \equiv f(x)$, so that the involved input parameters are constant. Equation (10) exhibits non-local behavior due to the delay involved in ${}^{C}D^{\alpha} x(t)$. The units in (10) are time$^{-\alpha}$.
In general, the solution of (10) cannot be twice continuously differentiable on [ 0 , T ] . Indeed, if it were, then we could apply integration by parts on (4) so that the kernel would become non-singular:
\[ {}^{C}D^{\alpha} x(t) = \frac{1}{\Gamma(1-\alpha)} \left[ \frac{t^{1-\alpha}}{1-\alpha}\, x'(0) + \frac{1}{1-\alpha} \int_0^t (t-\tau)^{1-\alpha} x''(\tau)\, d\tau \right] = \frac{1}{\Gamma(2-\alpha)} \left[ t^{1-\alpha} x'(0) + \int_0^t (t-\tau)^{1-\alpha} x''(\tau)\, d\tau \right]. \]
Then, at $t = 0$, $f(0,x_0) = {}^{C}D^{\alpha} x(0) = 0$, which is not often the case. In practice, one has $|x'(0)| = \infty$. This comment highlights the need to consider absolutely continuous functions in the setting of Caputo fractional differential equations.
As occurs with classical differential equation problems, Caputo Equation (10) does not usually have explicit solutions, and numerical methods must be used. When possible, analytical or semi-analytical techniques that have been employed to derive solutions are Laplace transform and power series. For example, the simplest linear model
\[ {}^{C}D^{\alpha} x(t) = \lambda x(t), \]
where $\lambda \in \mathbb{C}$ and $x(0) = x_0$, can be solved with those techniques.
The fractional power-series solution (i.e., a power series evaluated at $t^{\alpha}$)
\[ x(t) = \sum_{n=0}^{\infty} x_n (t^{\alpha})^n = \sum_{n=0}^{\infty} x_n t^{\alpha n}, \]
where $x_n \in \mathbb{C}$ and $t \geq 0$, formally satisfies
\[ \lambda \sum_{n=0}^{\infty} x_n t^{\alpha n} = \sum_{n=0}^{\infty} x_n \cdot {}^{C}D^{\alpha} t^{\alpha n} = \sum_{n=0}^{\infty} x_{n+1}\, \frac{\Gamma((n+1)\alpha+1)}{\Gamma(n\alpha+1)}\, t^{\alpha n} \]
in (12), by (7). (We use a centered dot for the notation of the product when there may be confusion with superscripts.) After matching terms,
\[ x_{n+1} = \frac{\Gamma(n\alpha+1)}{\Gamma((n+1)\alpha+1)}\, \lambda\, x_n \]
is the first-order difference equation for the coefficients. Notice that the property (9) is key in the development. The closed-form solution to (15) is
\[ x_n = \frac{\lambda^n}{\Gamma(n\alpha+1)}\, x_0. \]
The solution (13) is then expressed as
\[ x(t) = E_{\alpha}(\lambda t^{\alpha})\, x_0, \]
where
\[ E_{\alpha}(s) = \sum_{n=0}^{\infty} \frac{s^n}{\Gamma(n\alpha+1)} \]
is the well-known Mittag–Leffler function [22,23,24,26]. It is an entire function on the complex plane C and extends the exponential function through its Taylor series.
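As a computational aside, the series (17) can be evaluated directly by truncation. The following Python sketch does this for the one- and two-parameter versions; the helper name, the truncation order, and the test values are arbitrary choices of this illustration, not taken from the paper.

```python
import math

def mittag_leffler(s, alpha, beta=1.0, N=200):
    """Truncated power series for the two-parameter Mittag-Leffler function
    E_{alpha,beta}(s) = sum_{n>=0} s^n / Gamma(n*alpha + beta).
    N is the truncation order (an ad hoc choice for this sketch)."""
    return sum(s**n / math.gamma(n * alpha + beta) for n in range(N))

# Sanity checks: E_1(s) = exp(s) and E_{1,2}(s) = (exp(s) - 1)/s.
s = 1.3
print(mittag_leffler(s, 1.0), math.exp(s))
print(mittag_leffler(s, 1.0, beta=2.0), (math.exp(s) - 1.0) / s)

# Solution of the scalar Caputo equation  C_D^alpha x = lambda * x,  x(0) = x0.
alpha, lam, x0 = 0.6, -1.0, 1.0
x = lambda t: mittag_leffler(lam * t**alpha, alpha) * x0
print([round(x(t), 6) for t in (0.0, 0.5, 1.0, 2.0)])
```

The two-parameter variant appears again below in the non-homogeneous solution (20)–(21).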
The Laplace-transform technique can also be used to derive (16) and (17). The Laplace transform is defined as
\[ \mathcal{L}[x](s) = \int_0^{\infty} x(t)\, e^{-st}\, dt. \]
The most important property of $\mathcal{L}$ is
\[ \mathcal{L}[{}^{C}D^{\alpha} x](s) = s^{\alpha}\, \mathcal{L}[x](s) - s^{\alpha-1} x(0); \]
see [1] (page 81). By applying (18) to (12),
\[ s^{\alpha}\, \tilde{x}(s) - s^{\alpha-1} x(0) = \lambda\, \tilde{x}(s), \]
where $\tilde{x} = \mathcal{L}x$ for simplicity. That is,
\[ \tilde{x}(s) = \frac{s^{\alpha-1}}{s^{\alpha} - \lambda}\, x_0. \]
It is known [1] (chapter 4) that
\[ \mathcal{L}[E_{\alpha}(\lambda t^{\alpha})](s) = \frac{s^{\alpha-1}}{s^{\alpha} - \lambda}. \]
Hence, (16) is obtained again.
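This transform identity is easy to test numerically. The sketch below (with arbitrary values of α, λ, and s, a finite upper integration limit in place of ∞, and a log-space summation of the series to avoid overflow, all choices of this illustration) compares a quadrature of the Laplace integral with the closed form.

```python
import math
from scipy.integrate import quad

alpha, lam, s = 0.75, 0.5, 2.0    # arbitrary test values with s**alpha > lam

def E(z, a=alpha, N=400):
    """One-parameter Mittag-Leffler series, summed in log space (z >= 0)."""
    if z == 0.0:
        return 1.0
    return sum(math.exp(n * math.log(z) - math.lgamma(n * a + 1.0)) for n in range(N))

# Laplace transform of E_alpha(lam * t^alpha): the infinite upper limit is
# replaced by T = 60, where e^{-s t} E_alpha(lam t^alpha) is already negligible.
lhs, _ = quad(lambda t: math.exp(-s * t) * E(lam * t**alpha), 0.0, 60.0, limit=200)
rhs = s**(alpha - 1.0) / (s**alpha - lam)
print(lhs, rhs)    # both approximately 0.711...
```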
Problem (12) and the solution (16) can be extended to the matrix case $\lambda = A \in \mathbb{C}^{d\times d}$. The Mittag–Leffler function (17) is defined for matrix arguments $s = A \in \mathbb{C}^{d\times d}$, with the same series.
A general result is the following: if
\[ {}^{C}D^{\alpha} x(t) = A x(t) + b(t), \]
where $A \in \mathbb{C}^{d\times d}$ is a matrix and $b : [0,T] \to \mathbb{C}^d$ is a continuous vector function, then
\[ x(t) = E_{\alpha}(A t^{\alpha})\, x_0 + \int_0^t \tau^{\alpha-1} E_{\alpha,\alpha}(A \tau^{\alpha})\, b(t-\tau)\, d\tau = E_{\alpha}(A t^{\alpha})\, x_0 + \big[ t^{\alpha-1} E_{\alpha,\alpha}(A t^{\alpha}) \big] * b(t), \]
where
\[ E_{\alpha,\beta}(s) = \sum_{n=0}^{\infty} \frac{s^n}{\Gamma(n\alpha+\beta)} \]
is the two-parameter Mittag–Leffler function. The procedure to derive (20) relies on solving Picard’s iterative scheme [28], via the associated Volterra integral operator
\[ {}^{C}J^{\alpha} x(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} x(s)\, ds = \frac{1}{\Gamma(\alpha)}\, t^{\alpha-1} * x(t) = {}^{\mathrm{RL}}J^{\alpha} x(t), \]
which is defined for integrable or continuous functions on [ 0 , T ] . Although the properties
\[ \big( {}^{C}J^{\alpha} \circ {}^{C}D^{\alpha} \big)\, x(t) = x(t) - x(0) \]
and
\[ \big( {}^{C}D^{\alpha} \circ {}^{C}J^{\alpha} \big)\, x(t) = x(t), \]
where ∘ denotes the composition of operators, are often used in the literature without detailed explanations, they deserve an in-depth discussion [3,10] (all this will be carried out in Lemma 1, Remarks 1 and 3). They are analogous to the relationship between the Lebesgue integral and the standard derivative (Barrow’s rule and the fundamental theorem of calculus, respectively). Only in that case, (10) would be equivalent to the fixed-point problem
\[ x(t) = x_0 + {}^{C}J^{\alpha} f(t, x(t)), \]
the details of which can be found in [10] (Remark 5.2 and Addendum). For the complete linear Equation (19), the authors of [28] define the Picard’s iterative scheme from (25) and then obtain (20) with (21).

1.3. Objectives

A great deal of research in applied mathematics is concerned with obtaining analytical or semi-analytical solutions of models. The present contribution continues this purpose, with the use of power series for fractional models.
The homogeneous part of Equation (19), ${}^{C}D^{\alpha} x(t) = A x(t)$, is autonomous, meaning that $A$ does not depend on $t$. An aim of our paper is to address a situation with time dependency of $A$; specifically,
\[ {}^{C}D^{\alpha} x(t) = t^{1-\alpha} A x(t) + b(t), \]
where $A \in \mathbb{C}^{d\times d}$ is a matrix and $b : [0,T] \to \mathbb{C}^d$ is a continuous vector function. To the best of our knowledge, this type of model has not previously been solved in the literature in closed form. We also deal with the case in which $b(t)$ is given by certain fractional-power functions, for which specific closed forms of the solution appear.
The key fact is that (26) can be transformed into a complete linear equation with an autonomous homogeneous part, but with respect to the other fractional derivative,   L D α . The L-fractional derivative, as will be seen, has many properties that may be advantageous compared with the conventional Caputo derivative. With (26) and this alternative derivative, a new Mittag–Leffler-type function E α emerges, with a similar structure to (17). This fact opens up a wide range of research possibilities.
To deal with linear L-fractional differential equations and build their solution with Picard’s iterations, the associated L-fractional integral operator, the fundamental theorem of L-fractional calculus, and the estimates for its norm have a relevant role in the development. Due to the form of the kernel function, many of the computations are related to the beta function and the beta probability distribution. Considering this fact, the form of the solution and the proposed Mittag–Leffler-type function are analyzed probabilistically.
In the second part of the paper, we address autonomous linear equations of sequential type to extend scalar homogeneous first-order linear models. We base it entirely on power-series expressions. The classical theory of linear ordinary differential equations is fully generalized, where the alternative Mittag–Leffler function substitutes the exponential function of the algebraic basis in a suitable way. In the non-homogeneous case, some forcing terms with a special form (polynomials and ordinary derivatives of the Mittag–Leffler-type function) are allowed to extend the well-known method of undetermined coefficients to the fractional context.
Finally, a class of sequential non-autonomous linear equations is studied of order two and analytic coefficients. The solutions are expressed by means of power series, where the coefficients satisfy recursive relations but are not given in closed form in general. Two important models are illustrated in the fractional sense: Airy’s and Hermite’s equation.
The techniques used in the article are essentially based on power series, integral equations and operators, norm estimates, Picard’s iterations, probability distributions, and the algebra of vector spaces and operators, in the setting of fractional calculus.
Some equations related to (26) have been investigated in the literature. For example, papers [61,62] study linear fractional differential equations with variable coefficients, of the Riemann–Liouville and Caputo type. The solutions are given by a convergent infinite series involving compositions of fractional integrals. Our methodology and results are distinct and more specific to L-fractional differential equations. In [56], the authors examine the problem   C D α x ( t ) = λ t α x ( t ) , where λ C , and formally build the fractional power-series solution. In [63], the authors solve the complete non-autonomous linear problem in symbolic form, with a distinct expression for the solution.

1.4. Organization

Concisely, the plan of the paper is the following. In Section 2, we introduce and work with the alternative L-fractional derivative and pose the linear-equation problem (26) in the setting of L-fractional calculus. In Section 3, we address L-fractional autonomous homogeneous linear equations with power series and define a new Mittag–Leffler-type function. In Section 4, we study the associated integral operator of the L-fractional derivative, with the fundamental theorem of calculus, explicit computations, and norm estimates. This is necessary to solve, in Section 5, the complete linear equation in the L-fractional sense with Picard’s iterations, which corresponds to (26). The form of the solution and the proposed Mittag–Leffler-type function are analyzed with probabilistic arguments. The concrete case of the fractional-power source term is addressed. The uniqueness of the L-fractional solutions is justified and discussed. In Section 6, we investigate linear L-fractional differential equations of sequential type, with constant coefficients. We start with sequential order two and then turn to any order. By using power series, the main result is the derivation of the algebraic basis of solutions for an arbitrary order, in terms of the alternative Mittag–Leffler function. This is a nice extension of the classical theory. Some non-homogeneous equations are solved, with a generalized method of undetermined coefficients. In Section 7, the investigation is concerned with linear L-fractional differential equations of the sequential type, with analytic coefficients and order two. Power series are employed again, where the coefficients of the solution satisfy recurrence relations. Lastly, Section 8 is devoted to future research lines.

2. The L-Fractional Derivative and Formulation of the Complete Linear Equation

The (Leibniz) L-fractional derivative of an absolutely continuous function $x : [0,T] \to \mathbb{C}^d$ is [47,48]
\[ {}^{L}D^{\alpha} x(t) = \frac{{}^{C}D^{\alpha} x(t)}{{}^{C}D^{\alpha} t}, \]
where $\alpha \in (0,1)$ is the fractional order of differentiation, $t \in (0,T]$, and ${}^{C}D^{\alpha}$ is the Caputo fractional derivative (4). We know that ${}^{L}D^{\alpha} x$ is defined almost everywhere on $[0,T]$, at least, by the properties of the Riemann–Liouville and Caputo operators. This fractional derivative (27) was envisioned to deal with fractional differentials in geometry [51,52],
\[ d^{\alpha} x(t) = {}^{L}D^{\alpha} x(t)\, d^{\alpha} t, \]
and it has recently been utilized in [49] for logistic growth.
By (8),
\[ {}^{L}D^{\alpha} x(t) = \frac{\Gamma(2-\alpha)}{t^{1-\alpha}}\, {}^{C}D^{\alpha} x(t). \]
Two important properties of the L-fractional derivative are
\[ {}^{L}D^{\alpha} 1 = 0, \]
by (9), and, in contrast to the Caputo derivative,
\[ {}^{L}D^{\alpha} t = 1. \]
Property (29) will be very important when dealing with initial states in fractional differential equations and with power series, to derive difference equations for the expansion’s coefficients. For the Riemann–Liouville or the Λ -derivative, the corresponding result (29) does not hold.
If
\[ \Delta_s x(t) = \frac{x(t) - x(s)}{t - s} \]
is the derivative discretization (mean past velocity over $[s,t]$), then the fractional derivative (28) interpolates between
\[ \Delta_0 x(t) = \frac{x(t) - x(0)}{t} = \frac{1}{t} \int_0^t x'(\tau)\, d\tau \quad (\text{mean value of } x'), \quad \text{when } \alpha \to 0^+, \]
and, if $x$ is continuously differentiable on $[0,T]$,
\[ \lim_{s \to t} \Delta_s x(t) = x'(t), \quad \text{when } \alpha \to 1^-; \]
see (5) and (6). We notice that, for the Caputo derivative, the value at $\alpha = 0^+$ is $x(t) - x(0)$ instead of $(x(t) - x(0))/t$, which is not the mean value on $[0,t]$ exactly.
Analogously to (10), an L-fractional differential equation is
\[ {}^{L}D^{\alpha} x(t) = f(t, x(t)), \]
for $t \in (0,T]$, with an initial condition or state $x(0) = x_0$, where $f : [0,T] \times \Omega \subseteq [0,T] \times \mathbb{R}^d \to \mathbb{R}^d$, or $f : [0,T] \times \Omega \subseteq [0,T] \times \mathbb{C}^d \to \mathbb{C}^d$, is a continuous function such that $x_0 \in \Omega$. We remove $t = 0$ from (30) owing to the division by $t^{1-\alpha}$ in (28). In fact, we can interpret (30) in the almost-everywhere sense. Due to the close relation between ${}^{L}D^{\alpha}$ and ${}^{C}D^{\alpha}$,
\[ {}^{C}D^{\alpha} x(t) = \frac{t^{1-\alpha}}{\Gamma(2-\alpha)}\, f(t, x(t)), \]
there are of course methods and results for Caputo fractional differential equations that readily apply to the L-fractional counterpart. For example, the finite difference scheme from [45] is suitable, taking t 1 α into account. The proof of the Cauchy–Kovalevskaya theorem from [33] works as well, just by modifying the gamma-function factor in [33] (expression (2.19)), which is bounded too. Despite these matching properties, other topics on L-fractional differential equations deserve further attention; for example, the analysis of associated geometrical/physical features, the attainment of explicit and closed-form solutions, or the applicability in modeling. This paper is devoted to obtaining solutions, which offers some insight into their behavior and the derivative.
By considering (28), the target Equation (26) can be transformed into
\[ {}^{L}D^{\alpha} x(t) = \tilde{A}\, x(t) + \vartheta(t), \]
where $\tilde{A} \in \mathbb{C}^{d\times d}$ is a matrix and $\vartheta : [0,T] \to \mathbb{C}^d$ is a continuous function. The relations
\[ \tilde{A} = \Gamma(2-\alpha)\, A, \qquad \vartheta(t) = \frac{\Gamma(2-\alpha)}{t^{1-\alpha}}\, b(t) \]
hold. The new system (31) has an autonomous homogeneous part, which is a key reduction to solve (28). Due to the equivalence between (28) and (31) through (32), we will work with (31).
As will be seen, solutions of (31) are $C^{\infty}$ and analytic, with power-series expansions expressed in terms of $t^n$, not $t^{\alpha n}$. For L-fractional differential equations, the units of the vector field $f$ are time$^{-1}$, instead of time$^{-\alpha}$.
As the solution x is smooth and not only absolutely continuous, we can conduct integration by parts on (4) and (28), so the equalities (11) and
\[ {}^{L}D^{\alpha} x(t) = x'(0) + \frac{1}{t^{1-\alpha}} \int_0^t (t-\tau)^{1-\alpha} x''(\tau)\, d\tau \]
hold, pointwise, on $(0,T]$. Thus, these fractional derivatives contain a non-singular kernel function that is continuous on $[0,T]$,
\[ \tilde{K}(t-\tau) = (t-\tau)^{1-\alpha}, \]
with the second-order derivative of $x$. Nevertheless, the L-fractional derivative has the denominator $t^{1-\alpha}$ that controls ${}^{L}D^{\alpha} x(t)$ when $t \to 0^+$, so
\[ {}^{C}D^{\alpha} x(0) = 0 \neq {}^{L}D^{\alpha} x(0) \]
in general, and no controversies arise at the initial instant. This is a relevant property, considering the documented deficiencies of certain fractional operators with non-singular kernels [11]. As an illustration, the Caputo–Fabrizio derivative [64]
\[ {}^{CF}D^{\alpha} x(t) = \frac{1}{1-\alpha} \int_0^t e^{-\frac{\alpha}{1-\alpha}(t-s)}\, x'(s)\, ds \]
is always subject to the restriction
\[ {}^{CF}D^{\alpha} x(0) = 0, \]
so for applications on fractional differential equations, one is forced to work with the Losada–Nieto integral problem, which is equivalent to a certain ordinary differential equation [65,66,67]. For the L-fractional derivative, the factor $1/t^{1-\alpha}$ avoids issues associated with bounded kernels and makes dimensionality consistent so that the vector field $f$ is a true velocity from a physical viewpoint. In fact, for smooth functions $x$ on $[0,T]$, we have ${}^{L}D^{\alpha} x(0) = x'(0)$ by (33), and we can consider $t = 0$ in Equation (30) as well. Indeed, by translation in the integral (commutativity of the convolution) and L'Hôpital's rule,
\[
\begin{aligned}
\lim_{t \to 0^+} \big( {}^{L}D^{\alpha} x(t) - x'(0) \big)
&= \lim_{t \to 0^+} \frac{1}{t^{1-\alpha}} \int_0^t (t-\tau)^{1-\alpha} x''(\tau)\, d\tau && (\text{by (33)}) \\
&= \lim_{t \to 0^+} \frac{1}{t^{1-\alpha}} \int_0^t \tau^{1-\alpha} x''(t-\tau)\, d\tau && (\text{convolution}) \\
&= \lim_{t \to 0^+} \frac{t^{\alpha}}{1-\alpha} \left[ t^{1-\alpha} x''(0) + \int_0^t \tau^{1-\alpha} x'''(t-\tau)\, d\tau \right] && (\text{L'H\^{o}pital}) \\
&= 0.
\end{aligned}
\]
In the third equality above, we differentiated the denominator, which gives $(1-\alpha)\, t^{-\alpha}$, and the numerator, which is a parametric integral. The function ${}^{L}D^{\alpha} x$ in (33) is then continuous on $[0,T]$.
Table 1 reports a schematized comparison between the L- and the Caputo fractional derivatives. It highlights the changes when normalizing the standard operator.
In the notation, we will follow the convention that $\sum_{j=1}^{0} = 0$ and $\prod_{j=1}^{0} = 1$; that is, an empty sum is zero and an empty product is one. In the power series, $s^0 = 1$ for every $s \in \mathbb{C}$, even for $s = 0$.

3. Homogeneous Linear Equation: A New Mittag–Leffler-Type Function

Let us consider the simplest problem of L-fractional differential equations:
\[ {}^{L}D^{\alpha} x = \lambda x, \]
where $\lambda \in \mathbb{C}$, $t \geq 0$, and the dimension is $d = 1$. Analogously to Section 1.2, which focused on the Caputo setting, we consider a Taylor-series solution, but now in terms of $t^n$ instead of $t^{\alpha n}$. The motivation is the dimensionality time$^{-1}$ of the problem, instead of time$^{-\alpha}$; see the previous section.
The candidate power-series solution
\[ x(t) = \sum_{n=0}^{\infty} x_n t^n \]
satisfies, in a formal sense,
\[ \lambda \sum_{n=0}^{\infty} x_n t^n = \sum_{n=0}^{\infty} x_n \cdot {}^{L}D^{\alpha}(t^n) = \sum_{n=0}^{\infty} x_{n+1}\, \frac{\Gamma(n+2)\, \Gamma(2-\alpha)}{\Gamma(n+2-\alpha)}\, t^n, \]
as per (7). After the terms are equated, the recursive equation for the coefficients is given by
\[ x_{n+1} = \frac{\Gamma(n+2-\alpha)}{\Gamma(n+2)\, \Gamma(2-\alpha)}\, \lambda\, x_n. \]
As it occurs with Caputo fractional equations, the fact that the L-fractional derivative of a constant is zero—see (29)—is key to deriving a first-order difference equation. The relation (38) can be solved:
\[ x_n = \frac{\lambda^n}{\Gamma(2-\alpha)^n \prod_{j=1}^{n} \frac{\Gamma(j+1)}{\Gamma(j+1-\alpha)}}\, x_0 = \frac{\lambda^n}{\Gamma(2-\alpha)^n\, \Gamma(1+\alpha)^n \prod_{j=1}^{n} \binom{j}{j-\alpha}}\, x_0, \]
where $x_0 = x(0) \in \mathbb{C}$ is the initial value. The solution of (35) is thus expressed as
\[ x(t) = E_{\alpha}(\lambda t)\, x_0, \]
where
\[ E_{\alpha}(s) = \sum_{n=0}^{\infty} \frac{s^n}{\Gamma(2-\alpha)^n \prod_{j=1}^{n} \frac{\Gamma(j+1)}{\Gamma(j+1-\alpha)}} = \sum_{n=0}^{\infty} \frac{s^n}{\Gamma(2-\alpha)^n\, \Gamma(1+\alpha)^n \prod_{j=1}^{n} \binom{j}{j-\alpha}}, \]
for $s \in \mathbb{C}$. This is a new extension of the exponential function, an alternative to the Mittag–Leffler formulation (17). It is related to the family of functions studied in [68], with a distinct motivation.
For $\alpha \in (0,1]$, convergence of the new function (40) holds on $\mathbb{C}$ by the ratio test:
\[ \lim_{n \to \infty} \frac{\Gamma(2-\alpha)^n \prod_{j=1}^{n} \frac{\Gamma(j+1)}{\Gamma(j+1-\alpha)}}{\Gamma(2-\alpha)^{n+1} \prod_{j=1}^{n+1} \frac{\Gamma(j+1)}{\Gamma(j+1-\alpha)}} = \frac{1}{\Gamma(2-\alpha)} \lim_{n \to \infty} \frac{\Gamma(n+2-\alpha)}{\Gamma(n+2)} = \frac{1}{\Gamma(2-\alpha)} \lim_{n \to \infty} \frac{1}{(n+2-\alpha)^{\alpha}} = 0. \]
The asymptotic relation
\[ \frac{\Gamma(y+\alpha)}{\Gamma(y)} \sim y^{\alpha}, \]
when $y \to \infty$, which is a consequence of Stirling's formula, has been used. For the standard Mittag–Leffler function (17), the corresponding quotient (41) behaves asymptotically as
\[ \frac{1}{\big( (n+1)\alpha + 1 - \alpha \big)^{\alpha}}, \]
which is lower by the factor $\Gamma(2-\alpha) \in (0,1)$ compared to our $E_{\alpha}$. The fastest rate of convergence occurs for the classical exponential function, when $\alpha = 1$, as the corresponding quotient (41) is $1/(n+1)$ asymptotically.
The boundary values of E α are
\[ E_0(s) = \frac{1}{1-s}, \quad |s| < 1, \]
and
\[ E_1(s) = e^s, \quad s \in \mathbb{C}. \]
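A direct way to evaluate the new function (40) numerically is to generate its coefficients with the first-order recurrence (38); the sketch below does this (the truncation order and test values are arbitrary choices of this illustration) and checks the two boundary cases just stated.

```python
import math

def E_new(s, alpha, N=200):
    """Mittag-Leffler-type function (40), summed via the coefficient recurrence
    c_{n+1} = c_n * Gamma(n+2-alpha) / (Gamma(n+2) * Gamma(2-alpha))  (cf. (38)),
    with c_0 = 1, so that E_alpha(s) = sum_n c_n * s^n."""
    total, c = 0.0, 1.0
    for n in range(N):
        total += c * s**n
        # update the coefficient in log space to avoid overflow of Gamma(n+2)
        c *= math.exp(math.lgamma(n + 2.0 - alpha) - math.lgamma(n + 2.0)) / math.gamma(2.0 - alpha)
    return total

s = 0.4
print(E_new(s, 1.0), math.exp(s))          # alpha = 1 recovers the exponential
print(E_new(s, 1e-9), 1.0 / (1.0 - s))     # alpha -> 0+ approaches 1/(1 - s), |s| < 1
```

The same coefficient recurrence applies verbatim to matrix arguments if the scalar powers $s^n$ are replaced by matrix powers, as in (44).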
Actually, although the solution (39) converges by (41), it is still formal; see (37). Later, through the integral operator associated with the L-fractional derivative, we will prove that (39) is indeed the solution for (35) (Theorem 1). For now, in this section, we are only interested in how the new Mittag–Leffler-type function (40) is built.
From (40), a nice identity is
\[ E_{1/2}(s) = \sum_{n=0}^{\infty} \frac{s^n}{2^{n^2}} \prod_{j=1}^{n} \binom{2j}{j}. \]
This gives a new interpretation of the product of central binomial coefficients,
\[ \prod_{j=1}^{n} \binom{2j}{j}, \]
in terms of the power-series solution to the fractional problem
\[ {}^{L}D^{1/2} x = x, \qquad x(0) = 1. \]
The development of this section can be readily adapted to matrix arguments. Let
\[ {}^{L}D^{\alpha} x = A x, \]
where $A \in \mathbb{C}^{d\times d}$ is a matrix and $x$ takes vector values in $\mathbb{C}^d$. Then, the power-series method can be employed, which yields
\[ x(t) = E_{\alpha}(A t)\, x_0, \]
where $x_0 \in \mathbb{C}^d$.

4. On the Associated Integral Operator

In this section, we study the integral operator associated with the L-fractional derivative.

4.1. Introduction

By (22) and (28), the integral operator associated with ${}^{L}D^{\alpha}$ is
\[ {}^{L}J^{\alpha} x(t) = \frac{1}{\Gamma(\alpha)\, \Gamma(2-\alpha)} \int_0^t (t-s)^{\alpha-1} s^{1-\alpha} x(s)\, ds = \frac{t^{\alpha-1}}{\Gamma(\alpha)} * \frac{t^{1-\alpha}}{\Gamma(2-\alpha)}\, x(t) = {}^{C}J^{\alpha} \left( \frac{t^{1-\alpha}}{\Gamma(2-\alpha)}\, x(t) \right). \]
If $x \in L^1[0,T]$, then ${}^{L}J^{\alpha} x \in L^1[0,T]$, by standard properties of the convolution. Note that, if $x$ is continuous on $[0,T]$, then ${}^{L}J^{\alpha} x$ is well defined everywhere on $[0,T]$ and poses no problem at $t = 0$. Indeed,
\[ | {}^{L}J^{\alpha} x(t) | \leq \max_{[0,T]} |x|\, \frac{T^{1-\alpha}}{\Gamma(\alpha)\, \Gamma(2-\alpha)} \int_0^t (t-s)^{\alpha-1}\, ds = \max_{[0,T]} |x|\, \frac{T^{1-\alpha}\, t^{\alpha}}{\Gamma(\alpha)\, \Gamma(2-\alpha)\, \alpha} \xrightarrow[t \to 0^+]{} 0. \]
The same occurs for ${}^{C}J^{\alpha} x$.
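The operator (45) can also be approximated directly by quadrature. The sketch below (parameters and test functions are arbitrary choices of this illustration) uses a weighted rule for the factor $(t-s)^{\alpha-1}$ and checks two exact values, ${}^{L}J^{\alpha}1(t) = t$ and ${}^{L}J^{\alpha}t\,(t) = \tfrac{2-\alpha}{2}\,t^2$, which follow from the computations of the next subsection.

```python
import math
from scipy.integrate import quad

def LJ(y, t, alpha):
    """L-fractional integral (45) by quadrature; the weight option of `quad`
    absorbs the algebraic factor (t - s)^(alpha - 1)."""
    val, _ = quad(lambda s: s ** (1.0 - alpha) * y(s), 0.0, t,
                  weight='alg', wvar=(0.0, alpha - 1.0))
    return val / (math.gamma(alpha) * math.gamma(2.0 - alpha))

alpha, t = 0.4, 1.3
print(LJ(lambda s: 1.0, t, alpha), t)                          # LJ^alpha 1 (t) = t
print(LJ(lambda s: s, t, alpha), (2.0 - alpha) / 2.0 * t**2)   # LJ^alpha t (t), cf. Lemma 2 with delta = 1
```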
We rigorously prove the L-fractional fundamental theorem of calculus in the following proposition. We first need a lemma on (23) and (24) concerning the Caputo fractional calculus. We emphasize here the important remarks of [10] about the conditions and assumptions in fractional computations, as well as the rigorous results in [3].
Lemma 1.
If $x : [0,T] \to \mathbb{C}$ is absolutely continuous, then (23) holds for all $t \in [0,T]$ and (24) holds for almost every $t \in [0,T]$. If $x$ is given by a fractional power series on $[0,T]$ (i.e., a power series evaluated at $t^{\alpha}$), then (24) is verified at every $t \in [0,T]$.
Proof. 
When x is absolutely continuous, we know that
\[ y = {}^{C}D^{\alpha} x = {}^{\mathrm{RL}}J^{1-\alpha} x' \in L^1[0,T] \]
exists almost everywhere. Indeed, recall that ${}^{\mathrm{RL}}J^{1-\alpha}$ maps $L^1[0,T]$ into $L^1[0,T]$. Then,
\[ {}^{C}J^{\alpha} y = {}^{\mathrm{RL}}J^{\alpha} y = {}^{\mathrm{RL}}J^{\alpha}\, {}^{\mathrm{RL}}J^{1-\alpha} x' = {}^{\mathrm{RL}}J^{1} x' = x - x_0, \]
for all $t \in [0,T]$. We used the integral operators (3) and (22), as well as [10] (Lemma 3.4) for the composition ${}^{\mathrm{RL}}J^{\alpha}\, {}^{\mathrm{RL}}J^{1-\alpha}$. The idea of this part has been taken from the first paragraph of the proof of [10] (Theorem 5.1).
On the other hand, we know that ${}^{\mathrm{RL}}J^{\alpha} x$ is absolutely continuous on $[0,T]$; see [10] (Proposition 3.2) (it states that ${}^{\mathrm{RL}}J^{\alpha}$ maps absolutely continuous functions onto absolutely continuous functions, among other results). We also know that
\[ {}^{C}\widehat{D}^{\alpha}\, {}^{C}J^{\alpha} x(t) = x(t) \]
for all $t$ in $[0,T]$, where
\[ {}^{C}\widehat{D}^{\alpha} x = {}^{\mathrm{RL}}D^{\alpha} [\, x - x_0 \,] \]
is a modified Caputo operator [3,10]. Since ${}^{C}J^{\alpha} x = {}^{\mathrm{RL}}J^{\alpha} x$ is absolutely continuous, Ref. [3] (Theorem 3.1) ensures that
\[ {}^{C}D^{\alpha}\, {}^{C}J^{\alpha} x(t) = {}^{C}\widehat{D}^{\alpha}\, {}^{C}J^{\alpha} x(t) = x(t) \]
holds almost everywhere.
We remark that, in the literature, one usually finds applications of (24) for every $t$ and when $x$ is merely continuous. This result is not true, because ${}^{\mathrm{RL}}J^{\alpha}$ does not necessarily map continuous functions into absolutely continuous functions (see [10] (Addendum (3)) on a paper by Hardy and Littlewood), and ${}^{C}D^{\alpha}$ is not identically equal to ${}^{C}\widehat{D}^{\alpha}$.
The case of x being given by a fractional power series on [ 0 , T ] is postponed to Remark 1 after Corollary 1. □
Proposition 1.
If $x : [0,T] \to \mathbb{C}$ is absolutely continuous, then
\[ {}^{L}J^{\alpha}\, {}^{L}D^{\alpha} x(t) = x(t) - x(0) \]
for all $t \in [0,T]$, and
\[ {}^{L}D^{\alpha}\, {}^{L}J^{\alpha} x(t) = x(t) \]
for almost every $t \in (0,T]$. If $x$ is real analytic at $t = 0$ with a radius of convergence $\geq T$, then (48) is verified at every $t \in [0,T]$.
Proof. 
On the one hand, by (23) (see Lemma 1),
\[
\begin{aligned}
{}^{L}J^{\alpha}\, {}^{L}D^{\alpha} x(t)
&= \frac{1}{\Gamma(\alpha)\, \Gamma(2-\alpha)} \int_0^t (t-s)^{\alpha-1} s^{1-\alpha} \cdot {}^{L}D^{\alpha} x(s)\, ds \\
&= \frac{1}{\Gamma(\alpha)\, \Gamma(2-\alpha)} \int_0^t (t-s)^{\alpha-1} s^{1-\alpha} \cdot \frac{\Gamma(2-\alpha)}{s^{1-\alpha}} \cdot {}^{C}D^{\alpha} x(s)\, ds \\
&= \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \cdot {}^{C}D^{\alpha} x(s)\, ds = {}^{C}J^{\alpha}\, {}^{C}D^{\alpha} x(t) = x(t) - x(0).
\end{aligned}
\]
On the other hand, clearly,
\[ \frac{t^{1-\alpha}}{\Gamma(2-\alpha)}\, x \]
is absolutely continuous on $[0,T]$. Then, for almost every $t$,
\[
\begin{aligned}
{}^{L}D^{\alpha}\, {}^{L}J^{\alpha} x(t)
&= {}^{L}D^{\alpha}\, {}^{C}J^{\alpha} \left( \frac{t^{1-\alpha}}{\Gamma(2-\alpha)}\, x \right)(t)
= \frac{\Gamma(2-\alpha)}{t^{1-\alpha}}\, {}^{C}D^{\alpha}\, {}^{C}J^{\alpha} \left( \frac{t^{1-\alpha}}{\Gamma(2-\alpha)}\, x \right)(t) \\
&= \frac{\Gamma(2-\alpha)}{t^{1-\alpha}} \cdot \frac{t^{1-\alpha}}{\Gamma(2-\alpha)}\, x(t) = x(t).
\end{aligned}
\]
We used (24) (see Lemma 1).
The part on x being real analytic will be justified in Corollary 1. □
We denote
\[ \underbrace{{}^{L}J^{\alpha} \circ \cdots \circ {}^{L}J^{\alpha}}_{m \text{ times}} = {}^{L}J^{m\alpha}, \qquad \underbrace{{}^{L}D^{\alpha} \circ \cdots \circ {}^{L}D^{\alpha}}_{m \text{ times}} = {}^{L}D^{m\alpha}, \]
for $m \geq 1$, whenever the compositions make sense. With this convention, ${}^{L}J^{m\alpha}$ and ${}^{L}D^{m\alpha}$ always denote iterated compositions; they must be distinguished from the operators (45) and (27) taken with the single fractional index $m\alpha$, as the following proposition shows. The main ideas are taken from the interesting note [69].
Proposition 2.
Let $0 < \alpha \leq 1/2$. The composition ${}^{L}J^{2\alpha} = {}^{L}J^{\alpha} \circ {}^{L}J^{\alpha}$ does not coincide with the operator (45) of fractional index $2\alpha$, and the composition ${}^{L}D^{2\alpha} = {}^{L}D^{\alpha} \circ {}^{L}D^{\alpha}$ does not coincide with the operator (27) of fractional index $2\alpha$.
Proof. 
Within this proof, let $J_{2\alpha}$ and $D_{2\alpha}$ denote the operators (45) and (27) with fractional index $2\alpha$ (not compositions). Suppose that
\[ {}^{L}J^{2\alpha} = J_{2\alpha}. \]
By definition (49), this means that
\[ {}^{L}J^{\alpha} \circ {}^{L}J^{\alpha} = J_{2\alpha}. \]
Then, by (47),
\[ {}^{L}J^{\alpha}\, {}^{L}J^{\alpha}\, D_{2\alpha} = J_{2\alpha}\, D_{2\alpha} = \mathrm{Id} - x_0. \]
By (48),
\[ {}^{L}D^{\alpha} \circ {}^{L}D^{\alpha} = D_{2\alpha}. \]
This is the negation of the second condition in the proposition. Let us see that we arrive at a contradiction, with an adequate set of functions. We consider the operators from $C^{\omega}$ to $C^{\omega}$, where $C^{\omega}$ is the vector space of real analytic functions at $t = 0$ with values in $\mathbb{C}$. According to Proposition 1, the fundamental theorem of calculus holds for every point $t$ with functions of $C^{\omega}$, so the above compositions are justified. Since
\[ {}^{L}D^{\beta} \sum_{n=0}^{\infty} y_n t^n = \sum_{n=0}^{\infty} y_n \cdot {}^{L}D^{\beta}(t^n) = \sum_{n=0}^{\infty} y_{n+1}\, \frac{\Gamma(n+2)\, \Gamma(2-\beta)}{\Gamma(n+2-\beta)}\, t^n, \]
for $0 < \beta \leq 1$ and $\sum_{n=0}^{\infty} |y_n| t^n < \infty$ (see the forthcoming Corollary 1 for rigorous details), the operator
\[ {}^{L}D^{\beta} : C^{\omega} \to C^{\omega} \]
is surjective and
\[ \mathrm{Ker}\big( {}^{L}D^{\beta} \big) = \{ k : k \in \mathbb{C} \}. \]
Consequently, given $1 \in C^{\omega}$, there exists $y \in C^{\omega}$ such that
\[ 1 = {}^{L}D^{\alpha} y. \]
Therefore, by (52) and (50),
\[ 0 = {}^{L}D^{\alpha} 1 = {}^{L}D^{\alpha}\, {}^{L}D^{\alpha}\, y = D_{2\alpha}\, y, \]
which implies that $y \in \mathrm{Ker}(D_{2\alpha})$. Then, $y$ is constant by (51) and
\[ {}^{L}D^{\alpha} y = 0, \]
contradicting (52) and completing the proof. □
Recall [70] that a linear map $\Lambda$ between normed spaces $X$ and $Y$, expressed by $\Lambda : X \to Y$, is continuous if and only if there exists a constant $K > 0$ such that
\[ \| \Lambda x \| \leq K \| x \| \]
for all $x \in X$. In such a case, the induced norm for $\Lambda$ is
\[ \| \Lambda \| = \sup_{\| x \| \leq 1} \| \Lambda x \| = \sup_{\| x \| = 1} \| \Lambda x \| = \min \{ K > 0 : \| \Lambda x \| \leq K \| x \|,\ \forall x \in X \}. \]
We denote by $\mathcal{L}(X,Y)$ the normed space of linear continuous maps from $X$ to $Y$, so that $\Lambda \in \mathcal{L}(X,Y)$. If $Y$ is a Banach space, then $\mathcal{L}(X,Y)$ is Banach too.
Let $| \cdot |$ be the usual Euclidean norm for vectors, which becomes the absolute value for real scalars and the modulus for complex scalars. The induced norm for matrices $A \in \mathbb{C}^{d\times d}$ is also denoted by $| \cdot |$:
\[ | A | = \sup_{v \in \mathbb{C}^d,\ |v| \leq 1} | A v | = \sup_{v \in \mathbb{C}^d,\ |v| = 1} | A v |. \]
It satisfies the submultiplicative property, namely $|AB| \leq |A|\, |B|$, for all matrices $A$ and $B$. We work with the specific case of $X = Y = C[0,T]$, which is the Banach space of continuous functions $y = (y_1, \ldots, y_d) : [0,T] \to \mathbb{C}^d$ with the supremum norm
\[ \| y \| = \max_{t \in [0,T]} | y(t) |. \]
The Banach space $\mathcal{L}(X,Y)$ is then denoted by $\mathcal{L}(C[0,T])$, with the induced operator's norm $\| \cdot \|$ defined by (54).
The set $C^p[0,T]$, for integers $p \geq 1$ or $p = \infty$, is given by the functions that have derivatives up to order $p$ that are continuous on $[0,T]$.

4.2. List of Results

We state and prove the results that are needed to solve (31). These are concerned with explicit computations, especially regarding the so-called beta function and norm estimates in L ( C [ 0 , T ] ) .
Lemma 2.
If $\delta > \alpha - 2$ and $t > 0$, then
\[ \int_0^t (t-s)^{\alpha-1} s^{1-\alpha+\delta}\, ds = t^{1+\delta}\, \frac{\Gamma(2-\alpha+\delta)\, \Gamma(\alpha)}{\Gamma(2+\delta)}. \]
Proof. 
We make the change of variable $s = tu$, $ds = t\, du$, and the resulting integral is related to the beta function
\[ B(z_1, z_2) = \int_0^1 u^{z_1 - 1} (1-u)^{z_2 - 1}\, du, \]
defined for complex numbers $z_1$ and $z_2$ such that $\mathrm{Re}(z_1) > 0$ and $\mathrm{Re}(z_2) > 0$. A key property [71] of the beta function is its connection with the gamma function:
\[ B(z_1, z_2) = \frac{\Gamma(z_1)\, \Gamma(z_2)}{\Gamma(z_1 + z_2)}. \]
In our case, we have
\[ \int_0^t (t-s)^{\alpha-1} s^{1-\alpha+\delta}\, ds = t \int_0^1 (t - tu)^{\alpha-1} (tu)^{1-\alpha+\delta}\, du = t^{1+\delta} \int_0^1 (1-u)^{\alpha-1} u^{1-\alpha+\delta}\, du = t^{1+\delta}\, \frac{\Gamma(2-\alpha+\delta)\, \Gamma(\alpha)}{\Gamma(2+\delta)}, \]
where $z_1 = 2-\alpha+\delta > 0$ and $z_2 = \alpha > 0$ in the notation of (55) and (56). □
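Lemma 2 is easy to confirm numerically; the following sketch (with arbitrary values of α, δ and t chosen for this illustration) compares a weighted quadrature of the left-hand side with the gamma-function expression.

```python
import math
from scipy.integrate import quad

alpha, delta, t = 0.7, 0.25, 2.0          # arbitrary values with delta > alpha - 2
# Left-hand side: the weight option of `quad` supplies the factor (t - s)^(alpha - 1).
lhs, _ = quad(lambda s: s ** (1.0 - alpha + delta), 0.0, t,
              weight='alg', wvar=(0.0, alpha - 1.0))
rhs = t ** (1.0 + delta) * math.gamma(2.0 - alpha + delta) * math.gamma(alpha) / math.gamma(2.0 + delta)
print(lhs, rhs)    # the two values should agree to quadrature accuracy
```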
Proposition 3.
If $C[0,T]$ is endowed with the supremum norm $\| \cdot \|$, then ${}^{L}J^{\alpha} : C[0,T] \to C[0,T]$ is a continuous operator. Thus, if $\mathcal{L}(C[0,T])$ denotes the Banach space of linear continuous maps from $C[0,T]$ to $C[0,T]$ with the induced norm $\| \cdot \|$, then ${}^{L}J^{\alpha} \in \mathcal{L}(C[0,T])$ and $\| {}^{L}J^{\alpha} \| \leq T$.
Proof. 
We first check that   L J α is well defined from C [ 0 , T ] into C [ 0 , T ] . Let y C [ 0 , T ] . We rewrite the convolution (45) as
  L J α y ( t ) = 1 Γ ( α ) Γ ( 2 α ) 0 t s α 1 ( t s ) 1 α y ( t s ) d s .
If 0 < h < 1 , then
  |   L J α y ( t + h )   L J α y ( t ) | =   1 Γ ( α ) Γ ( 2 α ) 0 t + h s α 1 ( t + h s ) 1 α y ( t + h s ) d s 0 t s α 1 ( t s ) 1 α y ( t s ) d s   1 Γ ( α ) Γ ( 2 α ) t t + h s α 1 ( t + h s ) 1 α | y ( t + h s ) | d s   + 1 Γ ( α ) Γ ( 2 α ) 0 t s α 1 | ( t + h s ) 1 α y ( t + h s ) ( t s ) 1 α y ( t s ) | d s
  1 Γ ( α ) Γ ( 2 α ) ( T + 1 ) y t t + h s α 1 d s
  + 1 Γ ( α ) Γ ( 2 α ) 0 t s α 1 | ( t + h s ) 1 α y ( t + h s ) ( t s ) 1 α y ( t s ) | d s .
The bound ( t + h s ) 1 α ( T + 1 ) 1 α T + 1 has been used. We analyze the limit of both (58) and (59) when h 0 . On the one hand, for (58),
t t + h s α 1 d s = ( t + h ) α t α α h 0 0 .
On the other hand, for (59),
| ( t + h s ) 1 α y ( t + h s ) ( t s ) 1 α y ( t s ) | h 0 0
and
  s α 1 | ( t + h s ) 1 α y ( t + h s ) ( t s ) 1 α y ( t s ) |   s α 1 ( t + h s ) 1 α | y ( t + h s ) | + ( t s ) 1 α | y ( t s ) |   2 ( T + 1 ) y s α 1 L 1 ( [ 0 , T ] , d s ) ,
so the dominated convergence theorem ensures that
0 t s α 1 | ( t + h s ) 1 α y ( t + h s ) ( t s ) 1 α y ( t s ) | d s h 0 0 .
Thus, from (57),
|   L J α y ( t + h )   L J α y ( t ) | h 0 0 .
For h < 0 , one proceeds analogously, and then   L J α C [ 0 , T ] , as wanted.
The linearity of ${}^{L}J^{\alpha}$ is clear, based on the properties of the integral. Now we prove continuity of ${}^{L}J^{\alpha}$ by using (53). If $y \in C[0,T]$, then
\[
\begin{aligned}
| {}^{L}J^{\alpha} y(t) |
&\leq \frac{1}{\Gamma(\alpha)\, \Gamma(2-\alpha)} \int_0^t (t-s)^{\alpha-1} s^{1-\alpha} | y(s) |\, ds
\leq \| y \|\, \frac{1}{\Gamma(\alpha)\, \Gamma(2-\alpha)} \int_0^t (t-s)^{\alpha-1} s^{1-\alpha}\, ds \\
&= \| y \|\, \frac{1}{\Gamma(\alpha)\, \Gamma(2-\alpha)}\, t\, \frac{\Gamma(2-\alpha)\, \Gamma(\alpha)}{\Gamma(2)}
= t\, \| y \|
\leq T\, \| y \|.
\end{aligned}
\]
In the first equality (60), Lemma 2 has been employed with $\delta = 0 > \alpha - 2$. Hence,
\[ \| {}^{L}J^{\alpha} y \| \leq T\, \| y \|, \]
${}^{L}J^{\alpha} \in \mathcal{L}(C[0,T])$, and
\[ \| {}^{L}J^{\alpha} \| \leq T, \]
by (54). □
Corollary 1.
If
\[ \sum_{n=0}^{\infty} | x_n |\, t^n < \infty \]
for all $t \in [0,\epsilon]$, where $\epsilon > 0$ and $x_n \in \mathbb{C}$, then
\[ {}^{L}D^{\alpha} \sum_{n=0}^{\infty} x_n t^n = \sum_{n=0}^{\infty} x_n \cdot {}^{L}D^{\alpha}(t^n) \]
on $[0,\epsilon]$. Furthermore, (48) holds for all $t \in [0,\epsilon]$ for $\sum_{n=0}^{\infty} x_n t^n$, not just almost everywhere, hence completing the statement of the fundamental theorem of L-fractional calculus; see Proposition 1.
Proof. 
Let x : [ 0 , ϵ ] C be defined by the power series,
\[ x(t) = \sum_{n=0}^{\infty} x_n t^n. \]
Consider new coefficients
\[ \tilde{x}_n = \frac{\Gamma(2-\alpha)\, \Gamma(n+2)}{\Gamma(2-\alpha+n)}\, x_{n+1}, \]
for $n \geq 0$. Notice that
\[ \sum_{n=0}^{\infty} | \tilde{x}_n |\, t^n < \infty \]
on $[0,\epsilon]$, because
\[ \lim_{n \to \infty} \frac{\Gamma(2-\alpha)\, \Gamma(n+2) / \Gamma(2-\alpha+n)}{\Gamma(2-\alpha)\, \Gamma(n+1) / \Gamma(1-\alpha+n)} = 1. \]
Then,
\[ {}^{L}J^{\alpha} \sum_{n=0}^{\infty} \tilde{x}_n t^n = \sum_{n=0}^{\infty} \tilde{x}_n \cdot {}^{L}J^{\alpha}(t^n) = \sum_{n=0}^{\infty} \tilde{x}_n\, \frac{\Gamma(2-\alpha+n)}{\Gamma(2-\alpha)\, \Gamma(n+2)}\, t^{n+1} = \sum_{n=0}^{\infty} x_{n+1}\, t^{n+1} = x(t) - x_0. \]
Equality (62) holds by Proposition 3 (the convergence of $\sum_{n=0}^{\infty} \tilde{x}_n t^n$ is uniform on $[0,\epsilon]$, i.e., in the space $C[0,\epsilon]$). In (63), the computation in Lemma 2 is used. Consequently,
\[ {}^{L}D^{\alpha} x(t) = {}^{L}D^{\alpha} (x - x_0)(t) = \sum_{n=0}^{\infty} \tilde{x}_n t^n = \sum_{n=0}^{\infty} \frac{\Gamma(2-\alpha)\, \Gamma(n+2)}{\Gamma(2-\alpha+n)}\, x_{n+1}\, t^n = \sum_{n=0}^{\infty} x_n \cdot {}^{L}D^{\alpha}(t^n), \]
almost everywhere. For (65), we use (64) and (48). Now, as $x$ and $t^n$ are smooth, both ${}^{L}D^{\alpha} x$ and (66) are continuous on $[0,\epsilon]$; see (33) and (34). Hence, the previous equality almost everywhere becomes a pointwise equality for every $t \in [0,\epsilon]$. The point $t = 0$ does not pose any problem, because
\[ {}^{L}D^{\alpha} x(0) = x'(0) = x_1 = \sum_{n=0}^{\infty} x_n \cdot {}^{L}D^{\alpha}(t^n) \Big|_{t=0}, \]
by (33) and (34).
Finally, we need to check that (48) holds for all $t \in [0,\epsilon]$, from the obtained results:
\[
\begin{aligned}
{}^{L}D^{\alpha}\, {}^{L}J^{\alpha} x(t)
&= {}^{L}D^{\alpha} \sum_{n=0}^{\infty} x_n\, \frac{\Gamma(2-\alpha+n)}{\Gamma(2-\alpha)\, \Gamma(n+2)}\, t^{n+1}
= \sum_{n=0}^{\infty} x_n\, \frac{\Gamma(2-\alpha+n)}{\Gamma(2-\alpha)\, \Gamma(n+2)} \cdot {}^{L}D^{\alpha} t^{n+1} \\
&= \sum_{n=0}^{\infty} x_n\, \frac{\Gamma(2-\alpha+n)}{\Gamma(2-\alpha)\, \Gamma(n+2)} \cdot \frac{\Gamma(2-\alpha)\, \Gamma(n+2)}{\Gamma(2-\alpha+n)}\, t^n
= \sum_{n=0}^{\infty} x_n t^n = x(t).
\end{aligned}
\]
Remark 1.
In the Caputo fractional calculus, the previous Corollary 1 reads as follows:
“If
\[ \sum_{n=0}^{\infty} | x_n |\, t^{\alpha n} < \infty \]
for all t [ 0 , ϵ ] , where ϵ > 0 and x n C , then
\[ {}^{C}D^{\alpha} \sum_{n=0}^{\infty} x_n t^{\alpha n} = \sum_{n=0}^{\infty} x_n \cdot {}^{C}D^{\alpha}(t^{\alpha n}) \]
on [ 0 , ϵ ] . Furthermore, (24) holds for all t [ 0 , ϵ ] for n = 0 x n t α n , not just almost everywhere, hence completing the statement of the fundamental theorem of Caputo fractional calculus, see Lemma 1”.
This property is often used in the literature when solving linear and nonlinear fractional models in the Caputo sense; see, for example, (14). Here, we validate it rigorously based on the operator’s theory.
We note that
\[ x(t) = \sum_{n=0}^{\infty} x_n t^{\alpha n} \]
is absolutely continuous on [ 0 , ϵ ] . Indeed, we decompose x as
\[ x(t) = \sum_{n=0}^{N_{\alpha}-1} x_n t^{\alpha n} + \sum_{n=N_{\alpha}}^{\infty} x_n t^{\alpha n}, \]
where $N_{\alpha} \geq 1$ is an integer satisfying $\alpha \cdot N_{\alpha} \geq 1$. The first sum in (68) is a finite combination of absolutely continuous functions and hence absolutely continuous. The second sum in (68) is $C^1[0,\epsilon]$, because the series of ordinary derivatives converges uniformly.
The proof of the corresponding formula
\[ {}^{C}J^{\alpha} \sum_{n=0}^{\infty} \tilde{x}_n t^{\alpha n} = x(t) - x_0, \]
where
\[ \tilde{x}_n = \frac{\Gamma((n+1)\alpha+1)}{\Gamma(n\alpha+1)}\, x_{n+1}, \]
is analogous to Corollary 1 until (64). This is due to the fact that   C J α is also an element of L ( C [ 0 , ϵ ] ) .
Now, the part of the proof until (66) in the Caputo setting, which justifies the equality (67) almost everywhere on [ 0 , ϵ ] , is analogous too. One needs to use (24) from Lemma 1. To finally prove that the equality almost everywhere becomes a pointwise equality for every t [ 0 , ϵ ] , we notice that the right-hand side of (67) is clearly continuous (by uniform convergence), and the left-hand side of (67) satisfies, at every t,
\[ {}^{C}D^{\alpha} x(t) = \sum_{n=0}^{N_{\alpha}} x_n \cdot {}^{C}D^{\alpha} t^{\alpha n} + {}^{C}D^{\alpha} \sum_{n=N_{\alpha}+1}^{\infty} x_n t^{\alpha n}, \]
as per (68). The first sum in (69) is finite and continuous. The second part of (69) is continuous too, because
\[ \sum_{n=N_{\alpha}+1}^{\infty} x_n t^{\alpha n} \in C^2[0,\epsilon] \]
and (11) holds. Therefore, both sides of (67) are continuous on [ 0 , ϵ ] , so we have the equality (67) at every t [ 0 , ϵ ] , as wanted. The remark is concluded.
Proposition 4.
If $\delta > \alpha - 2$, $m \geq 1$ and $t > 0$, then
\[ {}^{L}J^{m\alpha}\, t^{\delta} = \frac{\prod_{i=2}^{m+1} \Gamma(i-\alpha+\delta)}{\Gamma(2-\alpha)^m \prod_{i=2}^{m+1} \Gamma(i+\delta)}\, t^{m+\delta}. \]
Proof. 
By induction on m, for m = 1 , we have
  L J α t δ =   1 Γ ( α ) Γ ( 2 α ) 0 t ( t s ) α 1 s 1 α s δ d s =   1 Γ ( α ) Γ ( 2 α ) × t 1 + δ Γ ( 2 α + δ ) Γ ( α ) Γ ( 2 + δ ) =   Γ ( 2 α + δ ) Γ ( 2 α ) Γ ( 2 + δ ) t 1 + δ ,
after applying Lemma 2. Now suppose the result is true for m 1 (induction hypothesis). Then,
  L J m α t δ =     L J α   L J ( m 1 ) α t δ
=     L J α i = 2 m Γ ( i α + δ ) Γ ( 2 α ) m 1 i = 2 m Γ ( i + δ ) t m 1 + δ
=   i = 2 m Γ ( i α + δ ) Γ ( 2 α ) m 1 i = 2 m Γ ( i + δ )   L J α t m 1 + δ =   i = 2 m Γ ( i α + δ ) Γ ( 2 α ) m 1 i = 2 m Γ ( i + δ ) × 1 Γ ( α ) Γ ( 2 α ) 0 t ( t s ) α 1 s 1 α s m 1 + δ d s
=   i = 2 m Γ ( i α + δ ) Γ ( 2 α ) m 1 i = 2 m Γ ( i + δ ) × 1 Γ ( α ) Γ ( 2 α ) t 1 + ( m 1 + δ ) Γ ( 2 α + ( m 1 + δ ) ) Γ ( α ) Γ ( 2 + ( m 1 + δ ) )
=   i = 2 m + 1 Γ ( i α + δ ) Γ ( 2 α ) m i = 2 m + 1 Γ ( i + δ ) t m + δ .
The first equality (71) is the definition (49). In the second equality (72), the induction hypothesis is employed. In the fifth equality (73), Lemma 2 is used with $m-1+\delta$ instead of $\delta$. □
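Since each application of ${}^{L}J^{\alpha}$ sends $t^{\delta}$ to a multiple of $t^{1+\delta}$, the closed form of Proposition 4 can be confirmed by iterating the one-step factor from Lemma 2. The sketch below compares the iterated product with the stated formula; the order $m$ and the parameters are arbitrary choices of this illustration.

```python
import math

def one_step_factor(alpha, delta):
    # L J^alpha t^delta = factor * t^(1+delta), by Lemma 2
    return math.gamma(2.0 - alpha + delta) / (math.gamma(2.0 - alpha) * math.gamma(2.0 + delta))

def iterated_coefficient(alpha, delta, m):
    """Apply the one-step rule m times: L J^{m alpha} t^delta = coef * t^(m+delta)."""
    coef, d = 1.0, delta
    for _ in range(m):
        coef *= one_step_factor(alpha, d)
        d += 1.0
    return coef

def closed_form(alpha, delta, m):
    """Right-hand side of Proposition 4."""
    num = math.prod(math.gamma(i - alpha + delta) for i in range(2, m + 2))
    den = math.gamma(2.0 - alpha) ** m * math.prod(math.gamma(i + delta) for i in range(2, m + 2))
    return num / den

alpha, delta, m = 0.55, 0.3, 6
print(iterated_coefficient(alpha, delta, m), closed_form(alpha, delta, m))
```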
Proposition 5.
If $m \geq 1$, $t \in [0,T]$ and $y \in C[0,T]$, then
\[ | {}^{L}J^{m\alpha} y(t) | \leq \| y \|\, \frac{\prod_{i=2}^{m+1} \Gamma(i-\alpha)}{\Gamma(2-\alpha)^m \prod_{i=2}^{m+1} \Gamma(i)}\, t^m \]
and
\[ \| {}^{L}J^{m\alpha} \| \leq \frac{\prod_{i=2}^{m+1} \Gamma(i-\alpha)}{\Gamma(2-\alpha)^m \prod_{i=2}^{m+1} \Gamma(i)}\, T^m. \]
Proof. 
We first notice that   L J m α L ( C [ 0 , T ] ) , by Proposition 3 and definition (49). Second, (75) is a consequence of (74). For (74), we proceed by induction on m 1 . For m = 1 , the result is known by our previous estimate (61). Suppose the inequality for m 1 (induction hypothesis), and let us prove it for m. We have
  L J m α y ( t ) =     L J α   L J ( m 1 ) α y ( t ) =   1 Γ ( α ) Γ ( 2 α ) 0 t ( t s ) α 1 s 1 α ·   L J ( m 1 ) α y ( s ) d s .
By applying | · | , we have
|   L J m α y ( t ) |   1 Γ ( α ) Γ ( 2 α ) 0 t ( t s ) α 1 s 1 α |   L J ( m 1 ) α y ( s ) | d s
  y i = 2 m Γ ( i α ) Γ ( 2 α ) m 1 i = 2 m Γ ( i ) 1 Γ ( α ) Γ ( 2 α ) 0 t ( t s ) α 1 s 1 α + ( m 1 ) d s
=   y i = 2 m Γ ( i α ) Γ ( 2 α ) m 1 i = 2 m Γ ( i ) 1 Γ ( α ) Γ ( 2 α ) t m Γ ( 2 α + ( m 1 ) ) Γ ( α ) Γ ( 2 + ( m 1 ) ) =   y i = 2 m + 1 Γ ( i α ) Γ ( 2 α ) m i = 2 m + 1 Γ ( i ) t m .
In the second inequality (76), the induction hypothesis is used. In the first equality (77), Lemma 2 is employed with δ = m . □
Proposition 6.
The series of operators
\[ \sum_{j=0}^{\infty} A^j \cdot {}^{L}J^{(j+1)\alpha} \]
is convergent in $\mathcal{L}(C[0,T])$.
Proof. 
We first notice that A j ·   L J ( j + 1 ) α L ( C [ 0 , T ] ) , with
A j ·   L J ( j + 1 ) α = | A j |   L J ( j + 1 ) α | A | j   L J ( j + 1 ) α .
The submultiplicative property of the matrix norm has been used. Since L ( C [ 0 , T ] ) is a Banach space, for (78), it suffices to prove that
j = 0 | A | j   L J ( j + 1 ) α < .
By Proposition 5, specifically inequality (75), we bound the series in (79) as
j = 0 | A | j   L J ( j + 1 ) α j = 0 | A | j i = 2 j + 2 Γ ( i α ) Γ ( 2 α ) j + 1 i = 2 j + 2 Γ ( i ) T j + 1 .
To justify the convergence of the right-hand series in (80), we employ the ratio test:
lim j | A | j + 1 i = 2 j + 3 Γ ( i α ) Γ ( 2 α ) j + 2 i = 2 j + 3 Γ ( i ) T j + 2 | A | j i = 2 j + 2 Γ ( i α ) Γ ( 2 α ) j + 1 i = 2 j + 2 Γ ( i ) T j + 1   = | A | T Γ ( 2 α ) lim j Γ ( j + 3 α ) Γ ( j + 3 ) =   | A | T Γ ( 2 α ) lim j 1 Γ ( j + 3 α ) α = 0 .

5. Solution of the Complete Linear Equation

In this section, we solve the complete linear equation in the L-fractional sense with Picard’s iterations. Later, we give a probabilistic form to this solution by using the beta-distributed delay of the L-fractional operators. The new Mittag–Leffler-type function is connected with basic probability theory as well, via generalized moment-generating functions. The concrete case of fractional-power source term is addressed. Finally, the uniqueness of L-fractional solutions is justified and discussed.

5.1. General Equation and Explicit Solution

We give the explicit solution to (31). We remark on the difference between (31) and the integral problem (81), considering the absolutely continuous functions (Proposition 1); this is not usually carried out in the literature, which states an equivalence vaguely.
Proposition 7.
The new Mittag–Leffler-type function $E_{\alpha}$—see (40)—converges for matrix arguments $s = A \in \mathbb{C}^{d\times d}$. The convergence for $E_{\alpha}(At)$ is uniform on $[0,T]$, and hence (44) belongs to $C[0,T]$.
Proof. 
For $t \in [0,T]$, we have
\[ \sum_{n=0}^{\infty} \frac{| A^n |\, t^n}{\Gamma(2-\alpha)^n \prod_{j=1}^{n} \frac{\Gamma(j+1)}{\Gamma(j+1-\alpha)}} \leq \sum_{n=0}^{\infty} \frac{| A |^n\, t^n}{\Gamma(2-\alpha)^n \prod_{j=1}^{n} \frac{\Gamma(j+1)}{\Gamma(j+1-\alpha)}} \leq \sum_{n=0}^{\infty} \frac{| A |^n\, T^n}{\Gamma(2-\alpha)^n \prod_{j=1}^{n} \frac{\Gamma(j+1)}{\Gamma(j+1-\alpha)}}, \]
by the submultiplicative property of the matrix norm. The convergence of the last series, which is independent of t, is checked with the ratio test; see (41). Thus, the series of E α ( A t ) exhibits uniform convergence on [ 0 , T ] . In particular, for t = 1 , the function E α ( A ) is well defined. Finally, the continuity of t E α ( A t ) is clear, because it is the uniform limit of polynomials, which are continuous. □
Theorem 1.
The solution of
\[ x(t) = x_0 + {}^{L}J^{\alpha} \big( A x(t) + \vartheta(t) \big) = x_0 + \frac{1}{\Gamma(\alpha)\, \Gamma(2-\alpha)} \int_0^t (t-s)^{\alpha-1} s^{1-\alpha} \big( A x(s) + \vartheta(s) \big)\, ds \]
on $[0,T]$, with initial condition $x(0) = x_0$, is
\[ x(t) = E_{\alpha}(A t)\, x_0 + \sum_{j=0}^{\infty} A^j \cdot {}^{L}J^{(j+1)\alpha}\, \vartheta(t). \]
If $x$ and $\vartheta$ are absolutely continuous on $[0,T]$, then (82) solves (31) almost everywhere on $[0,T]$. If $x$ and $\vartheta$ are given by power series on $[0,T]$, then (82) solves (31) for every $t \in [0,T]$.
Proof. 
Since ϑ C [ 0 , T ] , the function x in (82) belongs to C [ 0 , T ] , by Propositions 6 and 7. We need to check that x in (82) is a fixed point of the associated Volterra integral operator (81). We build the solution to (81) with Picard’s iteration method:
\[ x_k(t) = x_0 + {}^{L}J^{\alpha} \big( A x_{k-1}(t) + \vartheta(t) \big) = x_0 + \frac{1}{\Gamma(\alpha)\, \Gamma(2-\alpha)} \int_0^t (t-s)^{\alpha-1} s^{1-\alpha} \big( A x_{k-1}(s) + \vartheta(s) \big)\, ds, \]
for k 1 .
Let us see by induction on k that
\[ x_k(t) = \sum_{j=0}^{k} t^j A^j \prod_{i=2}^{j+1} \frac{\Gamma(i-\alpha)}{\Gamma(2-\alpha)\, \Gamma(i)}\, x_0 + \sum_{j=0}^{k-1} A^j \cdot {}^{L}J^{(j+1)\alpha}\, \vartheta(t). \]
For k = 0 , it is clear because the identity x 0 = x 0 is obtained. Suppose that the expression is true for k 1 . We have
x k ( t ) =   x 0 +   L J α ( A x k 1 ( t ) + ϑ ( t ) )
=   x 0 + j = 0 k 1   L J α t j A j + 1 i = 2 j + 1 Γ ( i α ) Γ ( 2 α ) Γ ( i ) x 0 + j = 0 k 2 A j + 1 ·   L J ( j + 2 ) α ϑ ( t ) +   L J α ϑ ( t )
=   x 0 + j = 0 k 1 t 1 + j Γ ( 2 α + j ) Γ ( 2 α ) Γ ( 2 + j ) A j + 1 i = 2 j + 1 Γ ( i α ) Γ ( 2 α ) Γ ( i ) x 0
  + j = 0 k 2 A j + 1 ·   L J ( j + 2 ) α ϑ ( t ) +   L J α ϑ ( t )
=   x 0 + j = 0 k 1 t 1 + j A j + 1 i = 2 j + 2 Γ ( i α ) Γ ( 2 α ) Γ ( i ) x 0 + j = 0 k 2 A j + 1 ·   L J ( j + 2 ) α ϑ ( t ) +   L J α ϑ ( t ) =   j = 0 k t j A j i = 2 j + 1 Γ ( i α ) Γ ( 2 α ) Γ ( i ) x 0 + j = 0 k 1 A j ·   L J ( j + 1 ) α ϑ ( t ) ,
which is exactly (84). In the first equality (85), we use (83). The second equality (86) is the induction hypothesis. The third equality (87) is obtained from Lemma 2.
From the form of x k in (84) and Propositions 6 and 7, the convergence of x k toward x in (82) is guaranteed in C [ 0 , T ] . We need to check that this x indeed solves (81).
Since x k x in the sense of C [ 0 , T ] as k , we obtain that A x k 1 + ϑ A x + ϑ in C [ 0 , T ] . By Proposition 3, we know that   L J α L ( C [ 0 , T ] ) ; therefore,
\[ \lim_{k \to \infty} {}^{L}J^{\alpha}(A x_{k-1} + \vartheta) = {}^{L}J^{\alpha}(A x + \vartheta) \]
in C [ 0 , T ] . Thus, taking limits as k in the recurrence’s definition (83), the fixed-point identity (81) is established, as wanted.
By (48) in Proposition 1, if x and ϑ are absolutely continuous on [ 0 , T ] , then (82) solves (31) almost everywhere on [ 0 , T ] . If x and ϑ are given by power series on [ 0 , T ] , then (82) solves (31) for every t [ 0 , T ] . □
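For a scalar example, the series (82) can be evaluated term by term when ϑ is a power function, using Proposition 4 for the iterated integrals, and the result can be checked against the fixed-point Equation (81) by quadrature. The minimal sketch below does this; the data a, δ, x0, the truncation order, and the log-space evaluation are choices of this illustration.

```python
import math
from scipy.integrate import quad

alpha, a, delta, x0 = 0.6, -0.8, 0.5, 1.0   # scalar data: L D^alpha x = a*x + t^delta, x(0) = x0
N = 60                                       # truncation order (arbitrary)

def LJ_power_coef(m, dlt):
    """L J^{m alpha} t^dlt = coef * t^(m+dlt) (Proposition 4), computed in log space."""
    log_num = sum(math.lgamma(i - alpha + dlt) for i in range(2, m + 2))
    log_den = m * math.lgamma(2.0 - alpha) + sum(math.lgamma(i + dlt) for i in range(2, m + 2))
    return math.exp(log_num - log_den)

hom = [a**n * LJ_power_coef(n, 0.0) for n in range(N)]          # coefficients of E_alpha(a t)
par = [a**j * LJ_power_coef(j + 1, delta) for j in range(N)]    # series for the forcing term

def x(t):
    """Truncated series (82) for theta(t) = t^delta."""
    return x0 * sum(c * t**n for n, c in enumerate(hom)) + \
           sum(c * t**(j + 1 + delta) for j, c in enumerate(par))

def LJ(y, t):
    """L-fractional integral (45) by quadrature."""
    val, _ = quad(lambda s: s**(1.0 - alpha) * y(s), 0.0, t,
                  weight='alg', wvar=(0.0, alpha - 1.0))
    return val / (math.gamma(alpha) * math.gamma(2.0 - alpha))

t = 1.2
print(x(t), x0 + LJ(lambda s: a * x(s) + s**delta, t))   # fixed-point identity (81)
```

With the forcing term set to zero, the same routine reduces to the homogeneous solution $E_{\alpha}(at)\,x_0$ of Section 3.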

5.2. A Link with Probability Theory

For computations and proofs concerning ${}^{L}J^{\alpha}$, the incorporation of $t^{1-\alpha}$ in the convolution of (45) is obviously a handicap. For the Caputo fractional derivative, the fact that ${}^{C}J^{\alpha} y(t) = \frac{1}{\Gamma(\alpha)}\, t^{\alpha-1} * y(t)$ (see (22)) and the associative property of the convolution permit having the iterations of ${}^{C}J^{\alpha}$:
\[ \underbrace{{}^{C}J^{\alpha} \circ \cdots \circ {}^{C}J^{\alpha}}_{m \text{ times}}\, y(t) = \frac{1}{\Gamma(\alpha)^m}\, \big( \underbrace{t^{\alpha-1} * \cdots * t^{\alpha-1}}_{m \text{ times}} \big) * y(t) = \frac{1}{\Gamma(m\alpha)}\, t^{m\alpha-1} * y(t). \]
Unfortunately, this is not the case for the L-fractional derivative and its iterated integral operator   L J m α , which has an effect on the computation of the solution (82).
A probabilistic interpretation [72] may help us understand the structure of   L J m α more. From the definition (45), we notice that
  L J α y ( t ) = t E [ y ( t U ) ] ,
where U is a random variable with distribution Beta ( 2 α , α ) and E is the expectation operator. The L-fractional derivative (28) is
  L D α y ( t ) = E [ y ( t W ) ] ,
where W is a random variable with distribution Beta(1, 1 − α). Expression (90) emphasizes the memory property and the non-local behavior associated with the fractional derivative. Lemma 2 is, in fact, a statement about the statistical moments of the beta distribution. When α = 1, we recover the ordinary operators, which correspond to Uniform(0, 1) distributions. The iterations of (89) are the following:
  L J^{2α} y(t) = t E_{U_2}[  L J^α y(t U_2) ] = t E_{U_2}[ t U_2 E_{U_1}[ y(t U_1 U_2) ] ] = t^2 E_{U_2}[ U_2 E_{U_1}[ y(t U_1 U_2) ] ],
  L J^{3α} y(t) = t E_{U_3}[  L J^{2α} y(t U_3) ] = t E_{U_3}[ (t U_3)^2 E_{U_2}[ U_2 E_{U_1}[ y(t U_1 U_2 U_3) ] ] ] = t^3 E_{U_3}[ U_3^2 E_{U_2}[ U_2 E_{U_1}[ y(t U_1 U_2 U_3) ] ] ], …,
  L J^{mα} y(t) = t^m E_{U_m}[ U_m^{m−1} E_{U_{m−1}}[ U_{m−1}^{m−2} ⋯ E_{U_2}[ U_2 E_{U_1}[ y(t U_1 ⋯ U_m) ] ] ⋯ ] ],
where U 1 , U 2 , are Beta ( 2 α , α ) -distributed and independent. Here, E U [ g ( U , V ) ] = E [ g ( U , V ) | V ] denotes an expectation of g ( U , V ) with respect to U , as if we were conditioning on the other random quantity V . We arrive at the following theorem, which highlights the difficulty when dealing with   L J m α .
Theorem 2.
The solution of (81) on [ 0 , T ] , with initial condition x ( 0 ) = x 0 , is
x ( t ) = E α ( A t ) x 0 + j = 0 A j t j + 1 E U j + 1 [ U j + 1 j E U j [ U j j 1 E U 2 [ U 2 E U 1 [ ϑ ( t U 1 U j + 1 ) ] ] ] ] ,
where U 1 , U 2 , are Beta ( 2 α , α ) -distributed and independent. If x and ϑ are absolutely continuous on [ 0 , T ] , then x solves (31) almost everywhere on [ 0 , T ] . If x and ϑ are given by power series on [ 0 , T ] , then (82) solves (31) for every t [ 0 , T ] .
Proof. 
See (91) and the previous development. □
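As a sanity check of the representation (89), one may compare a direct quadrature of definition (45) with a Monte Carlo estimate based on Beta(2 − α, α) samples; the sketch below is ours, and the test function and sample size are arbitrary:

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, t = 0.6, 2.0
y = np.cos                      # any continuous test function

# Direct evaluation of LJ^alpha y(t) from definition (45); the endpoint singularity is integrable.
integrand = lambda s: (t - s) ** (alpha - 1) * s ** (1 - alpha) * y(s)
direct, _ = quad(integrand, 0.0, t, limit=200)
direct /= gamma(alpha) * gamma(2 - alpha)

# Probabilistic form (89): LJ^alpha y(t) = t E[y(t U)] with U ~ Beta(2 - alpha, alpha).
rng = np.random.default_rng(0)
u = rng.beta(2 - alpha, alpha, size=10**6)
monte_carlo = t * np.mean(y(t * u))

print(direct, monte_carlo)      # should agree up to Monte Carlo error (roughly 1e-3)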
The appearance of ϑ ( t U 1 U j + 1 ) in Theorem 2 makes us investigate what happens when ϑ is given by a power function. Indeed, in that case, the various expectations can be separated.
Remark 2.
The difference between the explicit form of (88) and (91) has an effect on the theory of Taylor series and their residuals as well. In the Caputo case, the mean-value theorem is
x(t) − x(0) = [1/Γ(α)]  C D^α x(ξ) · t^α,
where ξ ∈ (0, t), t > 0. For the L-fractional derivative,
x(t) − x(0) =  L J^α  L D^α x(t)    (92)
= [1/(Γ(α) Γ(2 − α))] ∫_0^t (t − s)^{α−1} s^{1−α} ·  L D^α x(s) ds    (93)
= [ L D^α x(ξ) / (Γ(α) Γ(2 − α))] ∫_0^t (t − s)^{α−1} s^{1−α} ds    (94)
=  L D^α x(ξ) · t.    (95)
In (92), the analog of Barrow’s rule (47) is used. In (93), definition (45) is applied. The mean-value theorem gives (94), by the continuity of    L D α x when x is smooth. Finally, Lemma 2 is utilized in the last equality (95). Observe, as a consequence, that
  L D^α x(0) = (dx/dt)(0) = x′(0) ∈ (−∞, ∞),
in contrast to the Caputo derivative (see also the justification (34)). Hence, locally, at the beginning of the dynamics around t = 0, the system (30) behaves very similarly to its ordinary-differential-equation analog, and the dependence on α is smoother than in the Caputo case.
The mean-value theorem may be seen as the residual of the zeroth-order Taylor series. When the order of the Taylor series is increased, the Caputo derivative has the residual
x ( t ) = i = 0 n x i t α i +   C D ( n + 1 ) α x ( ξ ) Γ ( ( n + 1 ) α + 1 ) t ( n + 1 ) α ,
where t > 0 and   C D ( n + 1 ) α =   C D α   C D α is the iterated derivative. This formula mimics the expression for the ordinary derivative ( α = 1 ), and it is a consequence of (88). Unfortunately, for the L-fractional derivative, one cannot expect a similar expression for
x ( t ) i = 0 n x i t i =   L J ( n + 1 ) α   L D ( n + 1 ) α x ( t ) ,
because   L J ( n + 1 ) α is not given in closed form, as a convolution. See [27] (expression (3.11)) for details in the context of the Caputo fractional calculus. These observations conclude the remark.
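A similar check is available for the derivative itself. The sketch below (our illustration; the power y(t) = t^3 and the sample size are arbitrary) compares the Beta(1, 1 − α) representation (90) with the closed-form value of  L D^α t^n used throughout the paper, and illustrates that the expectation approaches y′(0) as t → 0:

import numpy as np
from scipy.special import gamma

alpha, n, t = 0.4, 3, 1.7
rng = np.random.default_rng(1)

# Closed form used in the paper: LD^alpha t^n = Gamma(n+1) Gamma(2-alpha) / Gamma(n+1-alpha) * t^(n-1).
closed = gamma(n + 1) * gamma(2 - alpha) / gamma(n + 1 - alpha) * t ** (n - 1)

# Probabilistic form (90): LD^alpha y(t) = E[y'(t W)], W ~ Beta(1, 1-alpha); here y(s) = s^n, y'(s) = n s^(n-1).
w = rng.beta(1.0, 1.0 - alpha, size=10**6)
monte_carlo = np.mean(n * (t * w) ** (n - 1))
print(closed, monte_carlo)      # should agree up to Monte Carlo error

# As t -> 0, E[y'(t W)] -> y'(0), illustrating the finite-slope property LD^alpha x(0) = x'(0).
for t_small in (1.0, 0.1, 0.01):
    print(t_small, np.mean(n * (t_small * w) ** (n - 1)))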
We noticed that both operators   L J α and   L D α have a probabilistic interpretation in terms of the beta distribution. Does the new Mittag–Leffler-type function E α enjoy a connection with probability theory? If a is a random variable and
φ a ( s ) = E [ e a s ]
is its moment-generating function, it is known that [73,74]:
∃ C, n_0 > 0, 0 ≤ p < 1 :  E[|a|^{n+1}]/E[|a|^n] ≤ C n^p, ∀ n ≥ n_0   ⟹   φ_a(s) < ∞, ∀ s ∈ ℝ   ⟺   lim_{n→∞} ‖a‖_n/n = 0;    (96)
∃ C, n_0 > 0, 0 ≤ p ≤ 1 :  E[|a|^{n+1}]/E[|a|^n] ≤ C n^p, ∀ n ≥ n_0   ⟹   ∃ δ > 0 : φ_a(s) < ∞, ∀ s ∈ (−δ, δ)   ⟺   lim sup_{n→∞} ‖a‖_n/n < ∞,    (97)
where ‖a‖_n = E[|a|^n]^{1/n} is the n-th norm. In (96), the converse of the first implication is not true, as the Poisson distribution shows with its moments given by the Bell numbers. Let
φ a α ( s ) = E [ E α ( a s ) ] ,
s R , be the L-fractional moment-generating function of a, of order α ( 0 , 1 ] . This is an extension of the usual moment-generating function, which is retrieved for α = 1 . We obtain a partial version of (96) and (97) for φ a α , because we need to employ the ratio test of convergence instead of the Cauchy–Hadamard theorem.
Theorem 3.
The following implications hold:
∃ C, n_0 > 0, 0 ≤ p < 1 : E[|a|^{n+1}]/E[|a|^n] ≤ C n^{αp}, ∀ n ≥ n_0   ⟹   φ_a^α(s) < ∞, ∀ s ∈ ℝ;
∃ C, n_0 > 0, 0 ≤ p ≤ 1 : E[|a|^{n+1}]/E[|a|^n] ≤ C n^{αp}, ∀ n ≥ n_0   ⟹   ∃ δ > 0 : φ_a^α(s) < ∞, ∀ s ∈ (−δ, δ).
Proof. 
Considering our definition (40), we aim to prove that
∑_{n=0}^∞ E[|a|^n] |s|^n / ( Γ(2 − α)^n ∏_{j=1}^n Γ(j + 1)/Γ(j + 1 − α) ) < ∞.    (98)
According to (41), the ratio test gives
[ E[|a|^{n+1}] / ( Γ(2 − α)^{n+1} ∏_{j=1}^{n+1} Γ(j + 1)/Γ(j + 1 − α) ) ] / [ E[|a|^n] / ( Γ(2 − α)^n ∏_{j=1}^n Γ(j + 1)/Γ(j + 1 − α) ) ]
≤ C n^{αp} · ( Γ(2 − α)^n ∏_{j=1}^n Γ(j + 1)/Γ(j + 1 − α) ) / ( Γ(2 − α)^{n+1} ∏_{j=1}^{n+1} Γ(j + 1)/Γ(j + 1 − α) )
= C n^{αp} [1/Γ(2 − α)] Γ(n + 2 − α)/Γ(n + 2)
≤ C n^{αp} [1/Γ(2 − α)] (n + 2 − α)^{−α}.    (99)
If p < 1, then (99) tends to 0 and (98) holds for s ∈ ℝ. If p = 1, then (99) converges to
C/Γ(2 − α) > 0,
so (98) is satisfied for s ∈ (−δ, δ), where
δ = Γ(2 − α)/C.
If a is a bounded random variable, then
E[|a|^{n+1}]/E[|a|^n] ≤ C
and φ_a^α is finite on ℝ. If a is a Gaussian random variable, then
E[|a|^{n+1}]/E[|a|^n] ≤ C n^{1/2},
so φ_a^α is finite on the real line for α > 1/2, and it is finite on a neighborhood of zero for α = 1/2. Since the gamma distribution satisfies
E[|a|^{n+1}]/E[|a|^n] ∼ C n,
one cannot work with φ_a^α for α < 1. Finally, the Weibull distribution with shape parameter β has the ratio
E[|a|^{n+1}]/E[|a|^n] ≤ C n^{1/β},
therefore φ a α is finite on R for α > 1 / β , and it is finite around zero for α = 1 / β . For information on these distributions, see [73].
It would be of interest to investigate whether a sharper characterization of the finiteness of the fractional moment-generating function of random variables can be expected. One would likely need to use the Cauchy–Hadamard theorem rather than the ratio test. Since the new Mittag–Leffler-type function is defined with products of gamma functions, the ratio test is the most straightforward tool to check the convergence of the series.
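The fractional moment-generating function can be estimated by plain Monte Carlo. The following sketch (ours; the Gaussian choice of a, the order α > 1/2, and the truncation N are assumptions for the experiment) approximates φ_a^α(s) = E[E_α(as)]:

import numpy as np
from scipy.special import gamma

def E_alpha(z, alpha, N=60):
    # partial sums of the Mittag-Leffler-type series (40), vectorized in z
    z = np.asarray(z, dtype=float)
    out = np.ones_like(z)
    c = 1.0
    for n in range(1, N):
        c *= gamma(n + 1 - alpha) / (gamma(2 - alpha) * gamma(n + 1))
        out += c * z ** n
    return out

alpha, s = 0.8, 0.5            # alpha > 1/2, so the Gaussian case gives a finite value
rng = np.random.default_rng(3)
a = rng.standard_normal(10**6)
phi = np.mean(E_alpha(a * s, alpha))
print(phi)                      # Monte Carlo estimate of phi_a^alpha(s) = E[E_alpha(a s)]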

5.3. Fractional Powers and Closed-Form Solutions

For an example of a closed form of (82), let us consider
ϑ(t) = (ℓ_1 t^{δ_1}, …, ℓ_d t^{δ_d})^⊤,    (100)
where ℓ_1, …, ℓ_d ∈ ℂ and δ_1, …, δ_d ∈ (0, ∞). Equivalently,
b(t) = (κ_1 t^{μ_1}, …, κ_d t^{μ_d})^⊤,
where κ_1, …, κ_d ∈ ℂ and μ_1, …, μ_d > 1 − α satisfy
ℓ_j = Γ(2 − α) κ_j,  δ_j = μ_j − 1 + α;
see Section 2 and, specifically, the link conditions (32). Here, ⊤ denotes the transpose of the vectors, for column form. The powers δ_j or μ_j are not necessarily integers; therefore, they are called fractional.
Lemma 3.
(Analogous to Corollary 1) If
n = 0 | x n | t n + 1 + δ <
for all t [ 0 , ϵ ] , where ϵ > 0 , δ > 0 , and x n C , then
  L D α n = 0 x n t n + 1 + δ = n = 0 x n ·   L D α ( t n + 1 + δ )
on [ 0 , ϵ ] . Furthermore, (48) holds for all t [ 0 , ϵ ] for n = 0 x n t n + 1 + δ .
Proof. 
The proof is analogous to Corollary 1 and its subsequent Remark 1. Conduct the steps in the proof of Corollary 1, adapted to this case, until (66), which holds almost everywhere. To justify equality everywhere based on continuity at both sides, proceed as in Remark 1. Notice that
n = 0 x n t n + 1 + δ = x 0 t 1 + δ + x 1 t 2 + δ + n = 2 x n t n + 1 + δ ,
where
n = 2 x n t n + 1 + δ C 3 [ 0 , T ] ,
so the left-hand side of the corresponding Equation (66) is
  L D α n = 0 x n t n + 1 + δ C [ 0 , T ] ;
see (33) and (34). □
Theorem 4.
The solution of (31), with source term (100) and initial condition x ( 0 ) = x 0 , is
x ( t ) = E α ( A t ) x 0 + j = 0 A j ν j ( t ) ,
where
ν_j(t) = ( ℓ_1 [∏_{i=2}^{j+2} Γ(i − α + δ_1)] / [Γ(2 − α)^{j+1} ∏_{i=2}^{j+2} Γ(i + δ_1)] t^{j+1+δ_1}, …, ℓ_d [∏_{i=2}^{j+2} Γ(i − α + δ_d)] / [Γ(2 − α)^{j+1} ∏_{i=2}^{j+2} Γ(i + δ_d)] t^{j+1+δ_d} )^⊤,
for every t in [ 0 , T ] .
Proof. 
In the general form (82) from Theorem 1, use Proposition 4. By Lemma 3, we have a solution for all t in [ 0 , T ] . (Without Lemma 3, the conclusion would have been almost everywhere.) □
Theorem 5.
The solution of (31), with source term (100) and initial condition x(0) = x_0, dimension d = 1, A = a ∈ ℂ, ℓ_1 = ℓ and δ_1 = δ, is
x(t) = E_α(at) x_0 + ∑_{j=0}^∞ a^j ℓ [∏_{i=2}^{j+2} Γ(i − α + δ)] / [Γ(2 − α)^{j+1} ∏_{i=2}^{j+2} Γ(i + δ)] t^{j+1+δ},
for every t in [ 0 , T ] .
Proof. 
Apply Theorem 4 in the scalar case. □
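The closed form of Theorem 5 can be validated termwise: since  L D^α t^p = Γ(p + 1)Γ(2 − α)/Γ(p + 1 − α) t^{p−1}, applying the derivative to the truncated series must return a x(t) + ℓ t^δ. The sketch below is ours; the parameter values and truncation orders are arbitrary, and the coefficient products are accumulated factor by factor to avoid overflowing gamma functions:

import numpy as np
from scipy.special import gamma

alpha, a, x0, ell, delta = 0.6, -0.7, 1.0, 2.0, 1.5
t, J, N = 0.9, 60, 60

# Coefficients c_n of E_alpha, so E_alpha(a t) = sum_n c_n (a t)^n.
c = [1.0]
for n in range(1, N):
    c.append(c[-1] * gamma(n + 1 - alpha) / (gamma(2 - alpha) * gamma(n + 1)))

# Coefficients q_j = ell * prod_{i=2}^{j+2} Gamma(i-alpha+delta) / (Gamma(2-alpha)^(j+1) prod_{i=2}^{j+2} Gamma(i+delta)),
# built factor by factor (each new factor is Gamma(j+2-alpha+delta) / (Gamma(2-alpha) Gamma(j+2+delta))).
q = [ell * gamma(2 - alpha + delta) / (gamma(2 - alpha) * gamma(2 + delta))]
for j in range(1, J):
    q.append(q[-1] * gamma(j + 2 - alpha + delta) / (gamma(2 - alpha) * gamma(j + 2 + delta)))

def LD_factor(p):
    # LD^alpha t^p = LD_factor(p) * t^(p-1)
    return gamma(p + 1) * gamma(2 - alpha) / gamma(p + 1 - alpha)

x   = x0 * sum(c[n] * (a * t) ** n for n in range(N)) \
    + sum(a ** j * q[j] * t ** (j + 1 + delta) for j in range(J))
LDx = x0 * sum(c[n] * a ** n * LD_factor(n) * t ** (n - 1) for n in range(1, N)) \
    + sum(a ** j * q[j] * LD_factor(j + 1 + delta) * t ** (j + delta) for j in range(J))

print(LDx, a * x + ell * t ** delta)   # both sides of LD^alpha x = a x + ell t^delta should match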
For another example of a closed form of (82), now consider
ϑ(t) = ∑_{n=0}^∞ ϑ_n t^n    (101)
on [0, T], where ϑ_n ∈ ℂ^d. That is, ϑ is real analytic at t = 0 with values in ℂ^d. Equivalently,
b(t) = [t^{1−α}/Γ(2 − α)] ∑_{n=0}^∞ ϑ_n t^n,
by Section 2 and (32). In contrast to the previous case, the powers of ϑ are integer numbers. For b, the powers are fractional.
Theorem 6.
The solution of (31), with source term (101) and initial condition x ( 0 ) = x 0 , is
x ( t ) = E α ( A t ) x 0 + n = 0 k = 0 n 1 A k j = n k n Γ ( j α + 1 ) Γ ( 2 α ) k + 1 j = n k n Γ ( j + 1 ) ϑ n k 1 t n ,
for every t in [ 0 , T ] .
Proof. 
For j 0 , we perform the following computations:
  L J ( j + 1 ) α ϑ ( t ) =     L J ( j + 1 ) α n = 0 ϑ n t n
=   n = 0 ϑ n ·   L J ( j + 1 ) α t n
=   n = 0 ϑ n i = 2 j + 2 Γ ( i α + n ) Γ ( 2 α ) j + 1 i = 2 j + 2 Γ ( i + n ) t j + 1 + n
=   l = j + 1 ϑ l j 1 i = 2 j + 2 Γ ( i α + l j 1 ) Γ ( 2 α ) j + 1 i = 2 j + 2 Γ ( i + l j 1 ) t l
=   l = j + 1 ϑ l j 1 i = l j l Γ ( i α + 1 ) Γ ( 2 α ) j + 1 i = l j l Γ ( i + 1 ) t l .
In the equality from (103), the continuity of  L J^{(j+1)α} is used; see Proposition 3. In the equality from (104), the formula (70) in Proposition 4 is employed with m = j + 1 and δ = n > α − 2. In the equality from (105), the change of variable n + j + 1 = l is applied. The equality from (106) follows by expanding the product.
From Theorem 1, (82) and (106),
x ( t ) =   E α ( A t ) x 0 + j = 0 A j ·   L J ( j + 1 ) α ϑ ( t ) =   E α ( A t ) x 0 + j = 0 A j l = j + 1 ϑ l j 1 i = l j l Γ ( i α + 1 ) Γ ( 2 α ) j + 1 i = l j l Γ ( i + 1 ) t l =   E α ( A t ) x 0 + l = 0 j = 0 l 1 A j ϑ l j 1 i = l j l Γ ( i α + 1 ) Γ ( 2 α ) j + 1 i = l j l Γ ( i + 1 ) t l ,
which corresponds to (102). We finally note that x and the corresponding ϑ are analytic; hence, we have a solution for every t [ 0 , T ] and not just almost everywhere; see Theorem 1. □
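For a concrete instance of Theorem 6, take a constant scalar forcing ϑ(t) ≡ θ_0; then only the k = n − 1 terms survive in (102), and the double sum collapses to (θ_0/a)(E_α(at) − 1). The following sketch (ours; the values are arbitrary) checks this numerically:

from scipy.special import gamma

alpha, a, x0, theta0, t, N = 0.55, -1.3, 2.0, 0.7, 1.2, 70

# coefficients c_n of E_alpha
c = [1.0]
for n in range(1, N):
    c.append(c[-1] * gamma(n + 1 - alpha) / (gamma(2 - alpha) * gamma(n + 1)))

E = lambda z: sum(c[n] * z ** n for n in range(N))

# General formula (102) with theta_n = theta0 for n = 0 and 0 otherwise:
# the inner sum over k only keeps k = n - 1, giving a^(n-1) * theta0 * c_n.
general = E(a * t) * x0 + sum(a ** (n - 1) * theta0 * c[n] * t ** n for n in range(1, N))

# Collapsed closed form: E_alpha(a t) x0 + (theta0 / a) (E_alpha(a t) - 1).
closed = E(a * t) * x0 + (theta0 / a) * (E(a * t) - 1.0)

print(general, closed)   # identical up to truncation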
Remark 3.
In the Caputo case, we have the fact that (20) solves (19) almost everywhere on [0, T] if x and b are absolutely continuous on [0, T]. If b is given by a fractional power series on [0, T] (in terms of t^{αn}), then (20) is the solution of (19) everywhere on [0, T]. Otherwise, we only know that (20) solves the corresponding Volterra integral problem associated with (19), x(t) = x_0 +  C J^α (Ax + b)(t), for all t ∈ [0, T]. All these assertions are a consequence of Lemma 1. Thus, one should be careful when proposing solutions to fractional differential equations; imprecise statements may give rise to solutions of the integral problem or almost-everywhere solutions. If b only belongs to C[0, T], then (20) solves, for every t ∈ [0, T], the modified Caputo equation in which  C D^α is replaced by the operator defined in (46); see [10] (Lemma 4.5). If x in (20) is absolutely continuous on [0, T], then the operator from (46) applied to x coincides with  C D^α x(t) almost everywhere [10] (Lemma 4.12), and (19) holds almost everywhere on [0, T].

5.4. On Uniqueness

We notice that all of the obtained solutions are unique. For a general L-fractional Equation (30), where the input function f can be nonlinear, uniqueness holds if f is Lipschitz-continuous with respect to the second component on every compact set, independently of the size of the Lipschitz constant. Indeed, if there are two solutions   1 x ( t ) and   2 x ( t ) of (30) with   1 x ( 0 ) =   2 x ( 0 ) = x 0 , then
|   1 x ( t )   2 x ( t ) | =   |   L J α f ( t ,   1 x ( t ) )   L J α f ( t ,   2 x ( t ) ) |   1 Γ ( α ) Γ ( 2 α ) 0 t ( t s ) α 1 s 1 α | f ( s ,   1 x ( s ) ) f ( s ,   2 x ( s ) ) | d s ,
by (47). Since   1 x ( [ 0 , T ] ) and   2 x ( [ 0 , T ] ) are bounded, there exists a constant M > 0 such that
| f ( s ,   1 x ( s ) ) f ( s ,   2 x ( s ) ) | M |   1 x ( s )   2 x ( s ) | ,
for all s [ 0 , T ] . As a consequence, from (107),
|   1 x ( t )   2 x ( t ) | T 1 α M Γ ( α ) Γ ( 2 α ) 0 t ( t s ) α 1 |   1 x ( s )   2 x ( s ) | d s .
By Gronwall’s inequality with singularity [75], one concludes that |   1 x ( t )   2 x ( t ) | = 0 and   1 x =   2 x on [ 0 , T ] , as wanted.
The precise statement that has been proved is the following:
Proposition 8.
Given an L-fractional differential Equation (30), if f is Lipschitz-continuous with respect to the second component on every compact set (i.e., for every R > 0 , there exists M > 0 such that | f ( t ,   1 x ) f ( t ,   2 x ) | M |   1 x   2 x | for all | t | R , |   1 x | R and |   2 x | R ), then (30) has a unique solution for any initial condition ( 0 , x 0 ) (in the set of absolutely continuous functions).
We observe that Proposition 8 may be proved without relying on Gronwall’s inequality with singularity. This is a nice fact because proofs of uniqueness often use Gronwall’s lemmas. If z =   1 x   2 x on [ 0 , T ] , then
| z ( t ) | M ·   L J α | z ( t ) |
for every t [ 0 , T ] , by (107). If we iterate (108) m times,
| z ( t ) | M m ·   L J m α | z ( t ) | .
By Proposition 5, (109) continues with
| z ( t ) | M m ·   L J m α | z ( t ) | M m z i = 2 m + 1 Γ ( i α ) Γ ( 2 α ) m i = 2 m + 1 Γ ( i ) T m .
As m , the right-hand side of the inequality (110) tends to 0, because
m = 1 M m z i = 2 m + 1 Γ ( i α ) Γ ( 2 α ) m i = 2 m + 1 Γ ( i ) T m <
by the ratio test (see (41), for instance). Hence z ( t ) = 0 , and we are finished.
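The quantitative core of the argument is that the bound (110) tends to 0 as m grows, for any Lipschitz constant M and horizon T. A brief numerical illustration (ours; M and T are arbitrary) follows:

from scipy.special import gamma

alpha, M, T = 0.5, 2.0, 1.0      # illustrative Lipschitz constant and time horizon
bound = 1.0
for m in range(1, 41):
    # multiply in the m-th factor of M^m T^m prod_{i=2}^{m+1} Gamma(i - alpha) / (Gamma(2 - alpha) Gamma(i))
    bound *= M * T * gamma(m + 1 - alpha) / (gamma(2 - alpha) * gamma(m + 1))
    if m % 10 == 0:
        print(m, bound)           # the factors behave like M T m^(-alpha), so the bound tends to 0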
In spite of this, I am not aware of a proof that does not draw on the integral operator   L J α (or   C J α ). Let us consider the case of dimension d = 1 . If z =   1 x   2 x were non-zero at some point on ( 0 , T ] , then we could define
t_* = max{ t ∈ [0, T] : z([0, t]) = {0} } < T.
For some δ > 0 such that z(t) ≠ 0 on (t_*, t_* + δ), we would have
  L D^α z(t) = [ f(t, ₁x(t)) − f(t, ₂x(t)) ] / z(t) · z(t) = a(t) z(t)
on (t_*, t_* + δ). By extending a to [0, t_*] with the zero value, the equation
  L D^α z(t) = a(t) z(t)
would hold on [0, t_* + δ). That is, the initial problem is converted into a linear equation. The function a is bounded by M, by the Lipschitz condition of f; therefore, it is Lebesgue-integrable. In the case α = 1, one defines
ã(t) = ∫_0^t a(τ) dτ.
By using the integrating-factor method,
e^{−ã(t)} z′(t) = e^{−ã(t)} a(t) z(t),
i.e.,
(e^{−ã} z)′(t) = 0
almost everywhere. Hence e^{−ã(t)} z(t) = z(0) = 0 and z(t) = 0 on [0, t_* + δ). For α < 1, one cannot use the same reasoning, because the product rule is not satisfied.

6. Sequential Linear Equations with Constant Coefficients: Context and Solution

The aim of this section is to investigate autonomous linear L-fractional differential equations of the sequential type. The term “sequential” comes from the fact that higher-order derivatives are defined by composition, in a sequential manner, while maintaining the original index α in ( 0 , 1 ) . We define these equations and highlight some of the issues and problems that arise. We then proceed to find solutions, by exploiting the vector-space structure. We first elaborate on the case of sequential order two, which gives the necessary intuition to tackle the general case. The novel Mittag–Leffler-type function plays an essential role and gives rise to a new view of how the exponential function works in the setting of linear ordinary differential equations. Most of the development is concerned with the homogeneous model. Eventually, some forcing terms are possible, by extending the method of undetermined coefficients. Several examples illustrate the theory.

6.1. Definitions and Problems

Considering the composition of operators (49), and in analogy to ordinary differential equations, a sequential L-fractional differential equation of order m 1 and dimension d 1 is
  L D m α x ( t ) = f ( t , x ( t ) ,   L D α x ( t ) ,   L D 2 α x ( t ) , ,   L D ( m 1 ) α x ( t ) ) ,
where f : [ 0 , T ] × Ω [ 0 , T ] × R d m R d m , or f : [ 0 , T ] × Ω [ 0 , T ] × C d m C d m , is a continuous function. The initial data to be met are
x ( 0 ) = x 0 ,   L D α x ( 0 ) = x 0 , 1 ,   L D 2 α x ( 0 ) = x 0 , 2 , ,   L D ( m 1 ) α x ( 0 ) = x 0 , m 1 ,
where x 0 , x 0 , 1 , x 0 , 2 , , x 0 , m 1 C . Model (111) with (112) generalizes, in principle, (30), since m can be greater than 1. However, as occurs with the ordinary case α = 1 , one may see that (111) and (30) are equivalent.
Proposition 9.
Equations (111) and (30) are equivalent.
Proof. 
If x satisfies (111), then
x ˜ ( t ) = ( x ( t ) ,   L D α x ( t ) ,   L D 2 α x ( t ) , ,   L D ( m 1 ) α x ( t ) )
solves
  L D α x ˜ ( t ) =   (   L D α x ( t ) ,   L D 2 α x ( t ) ,   L D 3 α x ( t ) , ,   L D m α x ( t ) ) =   ( x ˜ 2 ( t ) , x ˜ 3 ( t ) , , x ˜ m ( t ) , f ( t , x ˜ ( t ) ) ) =   f ˜ ( t , x ˜ ( t ) ) ,
which is a first-order system of dimension d m . The initial condition is
x ˜ ( 0 ) = ( x 0 , x 0 , 1 , , x 0 , m 1 ) .
Although this proposition reduces (111) to (30), the dimension of the associated system (30) is greater, of size d m . Hence, in some situations, this procedure may not be convenient to derive explicit or closed-form solutions for (111).
A sequential linear L-fractional differential equation of order m 1 and dimension d = 1 is
  L D m α x ( t ) + a m 1   L D ( m 1 ) α x ( t ) + + a 1   L D α x ( t ) + a 0 x ( t ) = 0 .
The coefficients a 0 , , a m 1 are scalars in C and x : [ 0 , T ] C . The initial condition to be met is (112). Note that (113) is scalar, homogeneous, and autonomous.
By Proposition 9, (113) can be reduced to a linear system of the form (43), with matrix A C m × m and solution (44):
  L D^α x̂ = A x̂,    x̂ = ( x,  L D^α x, …,  L D^{(m−1)α} x )^⊤,    A =
( 0      1      0      ⋯      0 )
( 0      0      1      ⋯      0 )
( ⋮      ⋮      ⋮      ⋱      ⋮ )
( −a_0   −a_1   −a_2   ⋯   −a_{m−1} ).    (114)
Let S be the set of solutions of (113), equivalently (114), without specifying initial conditions. By Theorem 6, it is clear that S ⊆ C^ω. In the following proposition, we examine the algebraic structure of S.
Proposition 10.
The set S is a vector space over C , of dimension m.
Proof. 
Since   L D α is a linear operator, it is obvious that S satisfies the properties of a vector space. Another proof relies on the fact that S = Ker Λ , where
Λ : C ω C ω ,
Λ =   L D m α + a m 1   L D ( m 1 ) α + + a 1   L D α + a 0 .
The fact that dim S ≤ m follows from the injectivity of the linear map
Ξ : S → ℂ^m,
Ξ x = ( x ( 0 ) ,   L D α x ( 0 ) ,   L D 2 α x ( 0 ) , ,   L D ( m 1 ) α x ( 0 ) ) .
Indeed, since (113) can be expressed as a first-order system (114) by Proposition 9, and uniqueness for these models is known—see Proposition 8—we then have that initial conditions in C m give rise to at most one solution in S .
The surjectivity of (117)–(118), which is equivalent to the existence of a solution for any initial-value problem (113) with (112), is true by Proposition 9 (transformation to the first-order system (114)) and Theorem 1. It implies that dim S ≥ m. This completes the proof. □
Consider the polynomial
p ( λ ) = λ m + a m 1 λ m 1 + + a 1 λ + a 0 ,
which is the characteristic polynomial of the matrix A C m × m associated with the corresponding first-order linear system (114). By the fundamental theorem of algebra,
p ( λ ) = ( λ λ 1 ) n 1 ( λ λ r ) n r ,
where λ 1 , , λ r C are the distinct roots of p (eigenvalues of A ) with multiplicities n 1 , , n r 1 , respectively, and n 1 + + n r = m .
To solve (113), we express (113) as a sequential model of scalar first-order linear equations of the form (35). We rely on solving scalar linear problems iteratively, entirely based on power-series calculations, with no need for matrix variables. It is important to emphasize that, since we deal with power series, computations hold for every t, and not only almost everywhere; see Theorem 1 and Theorem 6. Equation (113) is rewritten as
(   L D α λ 1 ) n 1 (   L D α λ r ) n r x = 0 .
All those factors commute. To find x, one needs to consider, in order,
(   L D α λ 1 ) y 1 , λ 1 = 0 , (   L D α λ 1 ) y 2 , λ 1 = y 1 , λ 1 , , (   L D α λ 1 ) y n 1 , λ 1 = y n 1 1 , λ 1 ,
(   L D α λ 2 ) y 1 , λ 2 = y n 1 1 , λ 1 , (   L D α λ 2 ) y 2 , λ 2 = y 1 , λ 2 , , (   L D α λ 2 ) y n 2 , λ 2 = y n 2 1 , λ 2 ,
(   L D α λ r ) y 1 , λ r = y n r 1 1 , λ r 1 , (   L D α λ r ) y 2 , λ r = y 1 , λ r , , (   L D α λ r ) y n r , λ r = y n r 1 , λ r ,
x = y n r , λ r .
In the following parts, we investigate how to solve the sequential problems (121)–(124). We first address the order m = 2 and then generalize to any m. Besides the former case being easier, it permits establishing the methodology and deducing how the general solution should be.
Actually, we will only need the upper bound dim S m , which holds by uniqueness (Proposition 8). Note that dim S m can be justified alternatively, based on the sequential decomposition (120), by
dim S = dim Ker Λ j = 1 m n j · dim Ker (   L D α λ j ) E α ( λ j t ) = j = 1 m n j · 1 = m ,
where Λ was defined in (115) and (116). The first inequality in (125) comes from the fact that, if g_1, g_2 : V → V are two linear maps, then Ker(g_1 ∘ g_2) = g_2^{−1}(Ker g_1), and G : Ker(g_1 ∘ g_2) → Ker g_1, v ↦ g_2(v), is well defined and linear with Ker G = Ker g_2, so that Ker(g_1 ∘ g_2)/Ker g_2 ≅ Im G ⊆ Ker g_1 by the first isomorphism theorem and dim Ker(g_1 ∘ g_2) ≤ dim Ker g_1 + dim Ker g_2 holds.

6.2. Solution for Sequential Order Two

When m = 2 , Equation (113) is
(   L D 2 α + a 1   L D α + a 0 ) x = 0 ,
where a 0 , a 1 C . The associated polynomial p in (119),
p ( λ ) = λ 2 + a 1 λ + a 0 ,
has two roots, λ_1 and λ_2. Let S be the complete set of solutions of (126). From (112), we consider the initial states
x ( 0 ) = x 0 ,   L D α x ( 0 ) = x 0 , 1 .
With this notation, the following theorem gives the solution to (126).
Theorem 7.
If the roots of the associated polynomial, λ 1 and λ 2 , are distinct in R or in C , then
x(t) = [ (x_{0,1} − λ_2 x_0)/(λ_1 − λ_2) ] E_α(λ_1 t) + [ (λ_1 x_0 − x_{0,1})/(λ_1 − λ_2) ] E_α(λ_2 t)    (128)
is the solution of (126) with initial conditions (127), on [ 0 , ) . The set
{ E α ( λ 1 t ) , E α ( λ 2 t ) }
is a basis of S .
On the contrary, if λ_1 = λ_2 = λ (in ℝ or in ℂ), then
x(t) = x_0 E_α(λt) + (x_{0,1} − λ x_0) t E_α′(λt)    (130)
is the solution of (126) with initial conditions (127), on [0, ∞). The set
{ E_α(λt), t E_α′(λt) }    (131)
is a basis of S.
Here, E_α is the new Mittag–Leffler-type function (40) and E_α′ denotes its ordinary derivative.
Proof. 
Consider the roots λ 1 and λ 2 , irrespective of whether these are different or not. Problem (126) decomposes as
(   L D α λ 1 ) (   L D α λ 2 ) x = 0 ;
see (120) with m = n 1 + n 2 = 2 , n 1 , n 2 { 1 , 2 } .
First, we solve
(   L D α λ 1 ) y = 0 ,
which gives
y = E α ( λ 1 t ) c 1 ,
where c 1 C and t [ 0 , ) . See, for example, the general result of Theorem 1. Since
(   L D α λ 2 ) x = y ,
the constant c 1 is
c 1 = y ( 0 ) =   L D α x ( 0 ) λ 2 x ( 0 ) = x 0 , 1 λ 2 x 0 .
Second, from (132) and (133), we solve
(   L D α λ 2 ) x ( t ) = ϑ ( t ) = E α ( λ 1 t ) c 1 .
We need to use Theorem 6, which deals with power-series source terms. In this case,
ϑ n = c 1 λ 1 n j = 1 n Γ ( j + 1 α ) Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) ,
considering the expansion’s coefficients of the new Mittag–Leffler function (40). Therefore, by Theorem 6, the solution of (135) is
x ( t ) =   E α ( λ 2 t ) x 0 + n = 0 k = 0 n 1 λ 2 k j = n k n Γ ( j α + 1 ) Γ ( 2 α ) k + 1 j = n k n Γ ( j + 1 ) c 1 λ 1 n k 1
  × j = 1 n k 1 Γ ( j + 1 α ) Γ ( 2 α ) n k 1 j = 1 n k 1 Γ ( j + 1 ) t n
=   E α ( λ 2 t ) x 0 + ( x 0 , 1 λ 2 x 0 ) n = 0 j = 1 n Γ ( j + 1 α ) Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) t n k = 0 n 1 λ 1 n k 1 λ 2 k
where the constant (134) is substituted into (136). To deal with the sum
k = 0 n 1 λ 1 n k 1 λ 2 k ,
we distinguish between λ 1 λ 2 and λ 1 = λ 2 = λ . In the former case,
k = 0 n 1 λ 1 n k 1 λ 2 k = λ 1 n λ 2 n λ 1 λ 2
and, from (136),
x ( t ) =   E α ( λ 2 t ) x 0 + x 0 , 1 λ 2 x 0 λ 1 λ 2 E α ( λ 1 t ) E α ( λ 2 t ) =   x 0 , 1 λ 2 x 0 λ 1 λ 2 E α ( λ 1 t ) + λ 1 x 0 x 0 , 1 λ 1 λ 2 E α ( λ 2 t ) ,
which is (128). In the latter case,
k = 0 n 1 λ 1 n k 1 λ 2 k = n λ n 1 ,
and (130) is obtained.
We finally need to justify that (129) and (131) are bases of S , when λ 1 λ 2 and λ 1 = λ 2 = λ , respectively. Since they generate S , we need to prove linear independence. (Analogously, since dim S 2 by Proposition 10 or (125), the linear independence of the two elements suffices.)
For (129), consider a linear combination
β 1 E α ( λ 1 t ) + β 2 E α ( λ 2 t ) = 0
for all t R , where β 1 , β 2 C . Then,
β 1   L D α E α ( λ 1 t ) + β 2   L D α E α ( λ 2 t ) = 0 .
Since
det ( E_α(λ_1 t)            E_α(λ_2 t)
        L D^α E_α(λ_1 t)      L D^α E_α(λ_2 t) )
= det ( E_α(λ_1 t)          E_α(λ_2 t)
        λ_1 E_α(λ_1 t)      λ_2 E_α(λ_2 t) )
= (λ_2 − λ_1) E_α(λ_1 t) E_α(λ_2 t)
gives λ_2 − λ_1 ≠ 0 at t = 0, we conclude that β_1 = β_2 = 0 and that the linear independence of (129) holds.
Analogously, for (131), consider a linear combination
β 1 E α ( λ t ) + β 2 t E α ( λ t ) = 0
for all t R , where β 1 , β 2 C . Then,
β 1   L D α E α ( λ t ) + β 2   L D α ( t E α ( λ t ) ) = 0 .
Simple computations yield
  L D α ( t E α ( λ t ) ) = n = 1 n λ n 1 Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) Γ ( n + 1 ) Γ ( 2 α ) Γ ( n + 1 α ) t n 1 .
Since
det ( E_α(λt)               t E_α′(λt)
        L D^α E_α(λt)         L D^α (t E_α′(λt)) ) |_{t=0} = det ( 1   0
                                                                  λ   1 ) = 1 ≠ 0,
it follows β 1 = β 2 = 0 and the linear independence of (131). □
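Theorem 7 can be verified on truncated power series: represent E_α(λt) by its Taylor coefficients, apply  L D^α termwise, and check that the residual of (126) vanishes. The sketch below is ours; the coefficients a_1 = −3, a_0 = 2 (roots 1 and 2), the initial data, and the truncation order are arbitrary choices:

import numpy as np
from numpy.polynomial import polynomial as P
from scipy.special import gamma

alpha, N = 0.7, 80

def ml_coeffs(lam):
    # Taylor coefficients of E_alpha(lam t): c_n = lam^n prod_{j=1}^n Gamma(j+1-alpha)/(Gamma(2-alpha) Gamma(j+1)).
    c = np.zeros(N); c[0] = 1.0
    for n in range(1, N):
        c[n] = c[n - 1] * lam * gamma(n + 1 - alpha) / (gamma(2 - alpha) * gamma(n + 1))
    return c

def LD(coeffs):
    # termwise L-fractional derivative: LD^alpha t^(n+1) = Gamma(n+2) Gamma(2-alpha)/Gamma(n+2-alpha) t^n
    out = np.zeros_like(coeffs)
    for n in range(len(coeffs) - 1):
        out[n] = coeffs[n + 1] * gamma(n + 2) * gamma(2 - alpha) / gamma(n + 2 - alpha)
    return out

# LD^{2 alpha} x - 3 LD^alpha x + 2 x = 0, i.e., a_1 = -3, a_0 = 2, with roots 1 and 2
a1, a0, lam1, lam2 = -3.0, 2.0, 1.0, 2.0
x0, x01 = 1.0, 0.0                              # x(0) and LD^alpha x(0)
k1 = (x01 - lam2 * x0) / (lam1 - lam2)          # coefficients from (128)
k2 = (lam1 * x0 - x01) / (lam1 - lam2)
x = k1 * ml_coeffs(lam1) + k2 * ml_coeffs(lam2)

residual = LD(LD(x)) + a1 * LD(x) + a0 * x
print(P.polyval(0.8, residual))                 # ~ 0 (only series-truncation error)
print(P.polyval(0.0, x), P.polyval(0.0, LD(x))) # recovers the initial data 1 and 0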
Later, in Section 7, we will address (126) when the coefficients are analytic functions. The solution will be a power series, with coefficients defined recursively.

6.3. Solution for Arbitrary Sequential Order and Method of Undetermined Coefficients

Consider the general problem (113). The associated polynomial (119) has distinct roots λ 1 , , λ r C , with multiplicities n 1 , , n r 1 , m = n 1 + + n r . Let the complete set of solutions be S . Initial conditions are denoted by (112).
Theorem 7 provides the intuition to establish the following general result. Later, we will give several remarks, examples, and an immediate corollary on the method of undetermined coefficients (i.e., the resolution of (113) when it is perturbed by a certain source term).
Theorem 8.
For each eigenvalue λ l with multiplicity n l , l = 1 , , r , consider
B_{λ_l, n_l} = { E_α(λ_l t), t E_α′(λ_l t), t^2 E_α″(λ_l t), …, t^{n_l − 1} E_α^{(n_l − 1)}(λ_l t) },
where E α is the new Mittag–Leffler-type function (40) and E α ( k ) denotes its ordinary k-th derivative (for k { 1 , 2 , 3 } , we use primes). Let
B = l = 1 r B λ l , n l .
Then, B is a basis for S .
Proof. 
Fix 1 l r . Successive differentiation for (40) gives
t k E α ( k ) ( λ l t ) = n = k n ( n 1 ) ( n k + 1 ) t n λ l n k Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) .
Let us prove by induction on 0 q k that
(   L D α λ l ) q ( t k E α ( k ) ( λ l t ) ) = t k q i = 0 q 1 ( k i ) n = k i = q k 1 ( n i ) t n k λ l n k Γ ( 2 α ) n q j = 1 n q Γ ( j + 1 ) Γ ( j + 1 α ) .
For q = 0 , (141) corresponds to (140). Suppose the expression is true for q 1 (induction hypothesis), and we prove it for q. With detailed steps, we have:
  (   L D α λ l ) q ( t k E α ( k ) ( λ l t ) ) = (   L D α λ l ) (   L D α λ l ) q 1 ( t k E α ( k ) ( λ l t ) )
=   (   L D α λ l ) t k q + 1 i = 0 q 2 ( k i ) n = k i = q 1 k 1 ( n i ) t n k λ l n k Γ ( 2 α ) n q + 1 j = 1 n q + 1 Γ ( j + 1 ) Γ ( j + 1 α )
=   i = 0 q 2 ( k i ) n = k i = q 1 k 1 ( n i ) λ l n k Γ ( 2 α ) n q + 1 j = 1 n q + 1 Γ ( j + 1 ) Γ ( j + 1 α ) (   L D α λ l ) ( t n q + 1 ) =   i = 0 q 2 ( k i ) n = k i = q 1 k 1 ( n i ) λ l n k Γ ( 2 α ) n q + 1 j = 1 n q + 1 Γ ( j + 1 ) Γ ( j + 1 α )
  × Γ ( n q + 2 ) Γ ( 2 α ) Γ ( n q + 2 α ) t n q λ l t n q + 1
=   i = 0 q 2 ( k i ) n = k i = q 1 k 1 ( n i ) λ l n k t n q Γ ( 2 α ) n q j = 1 n q Γ ( j + 1 ) Γ ( j + 1 α )
  i = 0 q 2 ( k i ) n = k i = q 1 k 1 ( n i ) λ l n k + 1 t n q + 1 Γ ( 2 α ) n q + 1 j = 1 n q + 1 Γ ( j + 1 ) Γ ( j + 1 α )
=   i = 0 q 2 ( k i ) n = k i = q 1 k 1 ( n i ) λ l n k t n q Γ ( 2 α ) n q j = 1 n q Γ ( j + 1 ) Γ ( j + 1 α )
  i = 0 q 2 ( k i ) n = k + 1 i = q 1 k 1 ( n 1 i ) λ l n k t n q Γ ( 2 α ) n q j = 1 n q Γ ( j + 1 ) Γ ( j + 1 α )
=   i = 0 q 2 ( k i ) n = k + 1 i = q 1 k 1 ( n i ) i = q 1 k 1 ( n 1 i ) λ l n k t n q Γ ( 2 α ) n q j = 1 n q Γ ( j + 1 ) Γ ( j + 1 α )
  + i = 0 q 2 ( k i ) i = q 1 k 1 ( k i ) λ l k k t k q Γ ( 2 α ) k q j = 1 k q Γ ( j + 1 ) Γ ( j + 1 α )
=   i = 0 q 1 ( k i ) n = k + 1 i = q k 1 ( n i ) t n q λ l n k Γ ( 2 α ) n q j = 1 n q Γ ( j + 1 ) Γ ( j + 1 α )
  + i = 0 q 1 ( k i ) ( k q ) ! λ l k k t k q Γ ( 2 α ) k q j = 1 k q Γ ( j + 1 ) Γ ( j + 1 α )
=   t k q i = 0 q 1 ( k i ) n = k i = q k 1 ( n i ) t n k λ l n k Γ ( 2 α ) n q j = 1 n q Γ ( j + 1 ) Γ ( j + 1 α ) .
Equality (142) comes from the induction hypothesis. Equality (143) is merely the linearity of   L D α λ l . In (144), the fractional derivative of the power is computed. In (145) and (146), we just expand the previous expression. For expression (148), we just change the variable n in the sum, while (147) is unchanged. In (149), we join the two sums (147) and (148) from n = k + 1 , leaving the k-th term of (147) at (150). For (151), we apply the equality
i = 0 q 2 ( k i ) i = q 1 k 1 ( n i ) i = q 1 k 1 ( n 1 i ) = i = 0 q 1 ( k i ) i = q k 1 ( n i ) .
Expression (152) comes from
i = 0 q 2 ( k i ) i = q 1 k 1 ( k i ) = i = 0 q 1 ( k i ) ( k q ) ! .
Finally, for (153), we merge terms to derive (141).
Considering (141), for q = k , we obtain
(   L D α λ l ) k ( t k E α ( k ) ( λ l t ) ) = k ! n = k t n k λ l n k Γ ( 2 α ) n k j = 1 n k Γ ( j + 1 ) Γ ( j + 1 α ) = k ! E α ( λ l t ) .
Therefore,
(   L D α λ l ) k + 1 ( t k E α ( k ) ( λ l t ) ) = k ! (   L D α λ l ) ( E α ( λ l t ) ) = 0 .
Thus,
(   L D α λ l ) n l ( t k E α ( k ) ( λ l t ) ) = 0
for all k = 0 , , n l 1 , so the operator Λ from (115) and (116) vanishes at t k E α ( k ) ( λ l t ) . This result proves that
B λ l , n l S
and, in general,
B S .
Since B consists of n 1 + + n r = m elements, and dim S m by Proposition 10 or (125), it suffices to prove that the functions in B are linearly independent.
First, we prove that the functions in each B λ l , n l are linearly independent. Consider a linear combination
β 0 E α ( λ l t ) + β 1 t E α ( λ l t ) + β 2 t 2 E α ( λ l t ) + + β n l 1 t n l 1 E α ( n l 1 ) ( λ l t ) = 0 ,
for all t R , where β 0 , β 1 , , β n l 1 C . Then,
β 0   L D q α E α ( λ l t ) +   β 1   L D q α ( t E α ( λ l t ) ) +   β 2   L D q α ( t 2 E α ( λ l t ) ) + + β n l 1   L D q α ( t n l 1 E α ( n l 1 ) ( λ l t ) ) = 0 ,
for 1 q n l 1 . Now, by (140),
  L D q α   ( t k E α ( k ) ( λ l t ) ) = n = k n ( n 1 ) ( n k + 1 )   L D q α ( t n ) λ l n k Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) =   n = k n ( n 1 ) ( n k + 1 ) t n q λ l n k Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) Γ ( 2 α ) q i = n q + 2 n + 1 Γ ( i ) Γ ( i α ) ,
which vanishes at t = 0 for q + 1 k n l 1 and takes the value 1 at t = 0 for k = q . Consequently, the matrix
( E_α(λ_l t)                     t E_α′(λ_l t)                     ⋯    t^{n_l−1} E_α^{(n_l−1)}(λ_l t)
   L D^α E_α(λ_l t)               L D^α (t E_α′(λ_l t))             ⋯     L D^α (t^{n_l−1} E_α^{(n_l−1)}(λ_l t))
   L D^{2α} E_α(λ_l t)            L D^{2α} (t E_α′(λ_l t))          ⋯     L D^{2α} (t^{n_l−1} E_α^{(n_l−1)}(λ_l t))
  ⋮                               ⋮                                 ⋱    ⋮
   L D^{(n_l−1)α} E_α(λ_l t)      L D^{(n_l−1)α} (t E_α′(λ_l t))    ⋯     L D^{(n_l−1)α} (t^{n_l−1} E_α^{(n_l−1)}(λ_l t)) )    (157)
is upper-triangular at t = 0 , with non-zero elements at the diagonal. Its determinant is then non-zero, so necessarily β 0 = β 1 = = β n l 1 = 0 .
To conclude the proof, consider a linear combination of elements in the complete set B :
k = 0 n 1 1 β k , 1 t k E α ( k ) ( λ 1 t ) + + k = 0 n r 1 β k , r t k E α ( k ) ( λ r t ) = 0 ,
where β k , l C . Suppose that there are coefficients β k i , l i 0 for i = 1 , , I r , I 2 , 1 l i r distinct, 0 k i n l i 1 , that is, at least one non-zero coefficient for each root λ l i . Then, (158) can be rewritten as
i = 1 I β k i , l i e i = 0 ,
where e i B λ l i , n l i . We know that e i is a generalized eigenfunction of   L D α associated with λ l i ; see (155). Since λ 1 , , λ r are distinct, { e 1 , , e I } are linearly independent by standard theory. Therefore, the assumed condition with I 2 is impossible. Then, I = 1 and the linear combination (158) is actually for a single group B λ l , n l , for some l { 1 , , r } . But the elements within B λ l , n l are linearly independent, as already proved above. Hence, all of the coefficients of (158) are zero, and we are finished. □
Remark 4.
Analogously to the standard theory on linear ordinary differential equations, the determinant of the matrix (157) is the L-fractional wronskian of the elements in B λ l , n l . In general, we define the L-fractional wronskian of n real analytic functions ϕ 1 , , ϕ n as
  L W_α(ϕ_1, …, ϕ_n)(t) = det ( ϕ_1(t)                       ϕ_2(t)                       ⋯    ϕ_n(t)
                                  L D^α ϕ_1(t)                 L D^α ϕ_2(t)                ⋯     L D^α ϕ_n(t)
                                 ⋮                            ⋮                            ⋱    ⋮
                                  L D^{(n−1)α} ϕ_1(t)          L D^{(n−1)α} ϕ_2(t)         ⋯     L D^{(n−1)α} ϕ_n(t) ).
If there is a point t 1 where
  L W α ( ϕ 1 , , ϕ n ) ( t 1 ) 0 ,
then { ϕ 1 , , ϕ n } are linearly independent. This fact was justified in the previous proof. Reciprocally, if m functions in S are linearly independent, then their L-fractional wronskian is non-zero at t = 0 , because dim S = m , and the map Ξ in (117) and (118) is an isomorphism by Proposition 10. The wronskian appears when the coefficients of a linear combination in the basis B are found: if B = { e i } i = 1 m and x ( t ) = i = 1 m c i e i ( t ) S with initial conditions (112), where c i C , then
  L W_α(e_1, …, e_m)(0) · (c_1, c_2, …, c_m)^⊤ = (x_0, x_{0,1}, …, x_{0,m−1})^⊤.
For example, the coefficients in (128) came from the linear system
( 1     1   ) ( c_1 )   =   ( x_0 )
( λ_1   λ_2 ) ( c_2 )       ( x_{0,1} ),
and in (130), from
( 1   0 ) ( c_1 )   =   ( x_0 )
( λ   1 ) ( c_2 )       ( x_{0,1} ).
Example 1.
If λ C , let us see that the solution of
(   L D α λ ) x = t l E α ( l ) ( λ t ) ,
with l 0 and x ( 0 ) = x 0 C , is
x ( t ) = E α ( λ t ) x 0 + 1 l + 1 t l + 1 E α ( l + 1 ) ( λ t ) E α ( λ t ) , t l + 1 E α ( l + 1 ) ( λ t ) ,
which makes sense with Theorem 8.
By (140),
ϑ ( t ) = t l E α ( l ) ( λ t ) = n = l n ( n 1 ) ( n l + 1 ) t n λ n l Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) .
Considering (102), if ϑ n denotes the n-th term of this power series, we have
ϑ n k 1 = ( n k 1 ) ( n k l ) λ n k 1 l Γ ( 2 α ) n k 1 j = 1 n k 1 Γ ( j + 1 ) Γ ( j + 1 α )
for n k 1 l , and
ϑ n k 1 = 0
for n k 1 < l . By Theorem 6, the solution of (159) is
x ( t ) =   E α ( λ t ) x 0 + n = l + 1 k = 0 n 1 l λ k j = n k n Γ ( j α + 1 ) Γ ( 2 α ) k + 1 j = n k n Γ ( j + 1 ) t n ϑ n k 1
=   E α ( λ t ) x 0 + n = l + 1 k = 0 n 1 l ( n k 1 ) ( n k l ) j = 1 n Γ ( j α + 1 ) Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) t n λ k λ n k 1 l
=   E α ( λ t ) x 0 + n = l + 1 λ n 1 l j = 1 n Γ ( j α + 1 ) Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) t n k = 0 n 1 l ( n k 1 ) ( n k l )
=   E α ( λ t ) x 0 + 1 l + 1 t l + 1 n = l + 1 n ( n 1 ) ( n l ) λ n 1 l j = 1 n Γ ( j α + 1 ) Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) t n 1 l
=   E α ( λ t ) x 0 + 1 l + 1 t l + 1 E α ( l + 1 ) ( λ t ) .
The equality in (161) comes from (160). The equality in (162) follows from the identity
( l + 1 ) k = 0 n 1 l ( n k 1 ) ( n k l ) = n ( n 1 ) ( n l ) ,
which is easy to prove by computing from k = n 1 l to k = 0 , adding term by term and taking common factors.
Example 2.
For 0 λ C , let us see that the solution of
(   L D α λ ) x = t l E α ( l ) ( 0 ) = l ! Γ ( 2 α ) l j = 1 l Γ ( j + 1 ) Γ ( j + 1 α ) t l ,
with l 0 and x ( 0 ) = x 0 C , is
x ( t ) =   E α ( λ t ) x 0 + l ! λ l + 1 E α ( λ t ) q l ( λ t )   E α ( λ t ) , E α ( 0 · t ) , t E α ( 0 · t ) , , t l E α ( l ) ( 0 · t ) ,
where
q l ( λ t ) = n = 0 l λ n t n j = 1 n Γ ( j + 1 α ) Γ ( 2 α ) n j = 1 n Γ ( j + 1 )
is a polynomial of degree l. This result agrees with Theorem 8.
Considering the forcing term
ϑ ( t ) = t l E α ( l ) ( 0 )
and (102), we have
ϑ n k 1 = E α ( n k 1 ) ( 0 ) = ( n k 1 ) ! Γ ( 2 α ) n k 1 j = 1 n k 1 Γ ( j + 1 ) Γ ( j + 1 α )
for n k 1 = l , and
ϑ n k 1 = 0
for n k 1 l . By Theorem 6, the solution of (164) is
x ( t ) =   E α ( λ t ) x 0 + n = l + 1 k = 0 n 1 l λ k j = n k n Γ ( j α + 1 ) Γ ( 2 α ) k + 1 j = n k n Γ ( j + 1 ) t n ϑ n k 1
=   E α ( λ t ) x 0 + l ! n = l + 1 j = 1 n Γ ( j α + 1 ) Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) t n λ n 1 l
=   E α ( λ t ) x 0 + l ! λ l + 1 E α ( λ t ) q l ( λ t )
The equality in (167) is due to (166). In (168), the definition (165) is used. Notice that q l ( λ t ) is the l-th partial sum of E α ( λ t ) .
Corollary 2.
In the context of this section, consider the non-homogeneous equation
(   L D α λ 1 ) n 1 (   L D α λ r ) n r x ( t ) = j = 0 J β j t j E α ( j ) ( μ t ) ,
where β_j, μ ∈ ℂ and J ≥ 0.
If μ ≠ λ_l for every l = 1, …, r, then
x ∈ ⟨ B ∪ { t^j E_α^{(j)}(μt) : j = 0, …, J } ⟩.    (170)
On the contrary, if μ = λ_{l_0} for some l_0 ∈ {1, …, r}, then
x ∈ ⟨ ( ∪_{l ≠ l_0} B_{λ_l, n_l} ) ∪ { t^k E_α^{(k)}(λ_{l_0} t) : k = 0, …, J + n_{l_0} } ⟩.    (171)
Recall that B = ∪_{l=1}^{r} B_{λ_l, n_l} is the basis of the homogeneous part of (169); see (138) and (139).
Proof. 
The uniqueness of solution for (169) (given m = n 1 + + n r initial conditions (112)) is known by Proposition 9 and Proposition 8.
Consider the linear map Λ from (115) and (116), which represents the homogeneous part of (169).
If μ λ l for every l = 1 , , r , then (169) is equivalent to
(   L D α μ ) J + 1 Λ x = 0 ,
for adequate initial conditions, by Theorem 8. By Theorem 8 again, the solution to (172) satisfies (170).
On the other hand, if μ = λ l 0 for some l 0 { 1 , , r } , then (169) is equivalent to
(   L D α λ l 0 ) J + 1 Λ x = 0 ,
for adequate initial conditions, by Theorem 8. Notice that (   L D α λ l 0 ) J + 1 Λ has the factor (   L D α λ l 0 ) J + 1 + n l 0 . By Theorem 8 again, the solution to (173) satisfies (171). □
Example 3.
We work with a specific numerical case of (169):
  L D^{2α} x − 2 ·  L D^α x + x = 3 E_α(2t),    (174)
with initial states
x(0) = 3,    L D^α x(0) = −1.    (175)
According to Corollary 2,
x(t) = β_1 E_α(t) + β_2 t E_α′(t) + γ E_α(2t),
where β_1, β_2, γ ∈ ℝ are values to be determined. Since γ E_α(2t) is a particular solution of (174), we have
4γ E_α(2t) − 4γ E_α(2t) + γ E_α(2t) = 3 E_α(2t)  ⟹  γ = 3.
The other two coefficients are determined from the initial conditions (175). First, since E_α(0) = 1,
3 = x(0) = β_1 + γ  ⟹  β_1 = 0.
Second, since  L D^α (t E_α′(t))|_{t=0} = 1 (see (137)),
−1 =  L D^α x(0) = β_1 + β_2 + 2γ  ⟹  β_2 = −7.
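The undetermined-coefficients solution of this example can be checked on truncated series as well. The sketch below is ours; it assumes the initial data x(0) = 3 and  L D^α x(0) = −1 and the coefficients β_1 = 0, β_2 = −7, γ = 3 obtained above:

import numpy as np
from numpy.polynomial import polynomial as P
from scipy.special import gamma

alpha, N = 0.6, 80

def ml(lam, k=0):
    # Taylor coefficients (in t) of t^k E_alpha^{(k)}(lam t), following (140).
    c = np.zeros(N); c[0] = 1.0
    for n in range(1, N):
        c[n] = c[n - 1] * gamma(n + 1 - alpha) / (gamma(2 - alpha) * gamma(n + 1))
    out = np.zeros(N)
    for n in range(k, N):
        falling = 1.0
        for i in range(k):
            falling *= n - i           # n (n-1) ... (n-k+1)
        out[n] = falling * c[n] * lam ** (n - k)
    return out

def LD(c):
    # termwise L-fractional derivative on power-series coefficients
    out = np.zeros_like(c)
    for n in range(len(c) - 1):
        out[n] = c[n + 1] * gamma(n + 2) * gamma(2 - alpha) / gamma(n + 2 - alpha)
    return out

beta1, beta2, gam = 0.0, -7.0, 3.0
x = beta1 * ml(1.0) + beta2 * ml(1.0, 1) + gam * ml(2.0)

lhs, rhs = LD(LD(x)) - 2 * LD(x) + x, 3 * ml(2.0)
print(P.polyval(0.7, lhs), P.polyval(0.7, rhs))      # both sides of (174) coincide
print(P.polyval(0.0, x), P.polyval(0.0, LD(x)))      # initial data: 3 and -1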
Example 4.
Now, we deal with the numerical example (169)
  L D^{2α} x − 2 ·  L D^α x + x = 3 E_α(t),    (176)
with initial states (175). By Corollary 2,
x(t) = β_1 E_α(t) + β_2 t E_α′(t) + γ t^2 E_α″(t),
where β_1, β_2, γ ∈ ℝ are values to be determined. We have the fact that γ t^2 E_α″(t) is a particular solution of (176), which satisfies
3 E_α(t) = ( L D^α − 1)^2 (γ t^2 E_α″(t)) = 2γ E_α(t),
by the previous computation (154). Thus,
γ = 3/2.
By employing (175), we determine β_1 and β_2:
3 = x(0) = β_1,
−1 =  L D^α x(0) = β_1 ·  L D^α E_α(t)|_{t=0} + β_2 ·  L D^α (t E_α′(t))|_{t=0} + γ ·  L D^α (t^2 E_α″(t))|_{t=0} = β_1 + β_2  ⟹  β_2 = −4.
We used (140) to compute  L D^α (t^2 E_α″(t))|_{t=0} = 0 and (137) for  L D^α (t E_α′(t))|_{t=0} = 1.
Example 5.
Finally, in the complex field, consider (169) given by
  L D^{2α} x − 2i ·  L D^α x − x = 3 E_α(t),    (177)
with initial conditions (175) and the imaginary unit i = √−1. Corollary 2 states that
x(t) = β_1 E_α(it) + β_2 t E_α′(it) + γ E_α(t),
where β_1, β_2, γ ∈ ℂ. Since γ E_α(t) is a particular solution of (177), we have
γ E_α(t) − 2iγ E_α(t) − γ E_α(t) = 3 E_α(t)  ⟹  γ = (3/2) i.
For β_1 and β_2, we use (175), as in the other two examples:
3 = x(0) = β_1 + γ  ⟹  β_1 = 3 − (3/2) i,
−1 =  L D^α x(0) = β_1 i + β_2 + γ  ⟹  β_2 = −5/2 − (9/2) i.
Remark 5.
In the context of Examples 1 and 2, let us try to solve
(   L D α λ 1 ) x = t l E α ( l ) ( λ 2 t )
in closed form in general, where λ 1 λ 2 and λ 2 0 in C and l 0 . We will see that the fact of changing of space, from B λ 1 , n 1 to B λ 1 , n 1 B λ 2 , n 2 , complicates computations. According to the previous results (see Theorem 8 or Corollary 2), the solution of (178) takes the form
x ( t ) = E α ( λ 1 t ) c + i = 0 l β i t i E α ( i ) ( λ 2 t ) ,
where c , β i C . These parameters need to be determined. On the one hand,
t l E α ( l ) ( λ 2 t ) = n = l n ( n 1 ) ( n l + 1 ) λ 2 n l Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) t n ,
see (140). On the other hand, some computations on (179) with power series yield
(   L D α   λ 1 ) x = c ·   L D α ( E α ( λ 1 t ) ) + i = 0 l β i ·   L D α ( t i E α ( i ) ( λ 2 t ) )   c λ 1 E α ( λ 1 t ) λ 1 i = 0 l β i t i E α ( i ) ( λ 2 t ) =   β 0 n = 0 λ 2 n + 1 Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) t n   + i = 1 l β i n = i 1 ( n + 1 ) n ( n i + 2 ) λ 2 n + 1 i Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) t n   λ 1 i = 0 l β i n = i n ( n 1 ) ( n i + 1 ) λ 2 n i Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) t n =   β 0 n = 0 λ 2 n + 1 Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) t n   + n = 0 1 Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) i = 1 min { n + 1 , l } β i ( n + 1 ) n ( n i + 2 ) λ 2 n + 1 i t n   λ 1 n = 0 1 Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) i = 0 min { n , l } β i n ( n 1 ) ( n i + 1 ) λ 2 n i t n .
After equating (180) and (181), we obtain the relations
β 0 λ 2 n + 1 +   i = 1 n + 1 β i ( n + 1 ) n ( n i + 2 ) λ 2 n + 1 i   λ 1 i = 0 n β i n ( n 1 ) ( n i + 1 ) λ 2 n i = 0
for 0 n < l , and
β 0 λ 2 n + 1 +   i = 1 min { n + 1 , l } β i ( n + 1 ) n ( n i + 2 ) λ 2 n + 1 i   λ 1 i = 0 min { n , l } β i n ( n 1 ) ( n i + 1 ) λ 2 n i =   n ( n 1 ) ( n l + 1 ) λ 2 n l
for n l . The linear equations (182) can be rewritten, for 0 n < l :
β n + 1 = 1 ( n + 1 ) ! β 0 λ 2 n + 1 + i = 0 n β i n ( n 1 ) ( n i + 2 ) λ 2 n i λ 1 ( n i + 1 ) λ 2 ( n + 1 ) .
To determine β 0 , because it cannot be a free parameter, the Equation (183) is employed for n = l :
β 0 λ 2 l + 1 +   i = 1 l β i ( l + 1 ) l ( l i + 2 ) λ 2 l + 1 i   λ 1 i = 0 l β i l ( l 1 ) ( l i + 1 ) λ 2 l i = l !
For c, one simply uses the initial condition,
x 0 = x ( 0 ) = c + β 0 .
The l + 1 equations (184) and (185) cannot be decoupled, in general, for symbolic variables.
If Theorem 6 is employed to directly deal with (178) based on power series, as in Examples 1 and 2, we have the expression
x ( t ) =   E α ( λ 1 t ) x 0   + n = l + 1 j = 1 n Γ ( j + 1 α ) Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) t n k = 0 n 1 l ( n k 1 ) ( n k l ) λ 1 k λ 2 n k 1 l .
Compared to (163), the sum
k = 0 n 1 l ( n k 1 ) ( n k l ) λ 1 k λ 2 n k 1 l
does not seem to be solvable in explicit form for λ 1 λ 2 .
Remark 6.
An alternative development to Theorem 8 may be carried out, based upon the Jordan form of A in (114), to compute the solution
x ^ ( t ) = E α ( A t ) x ^ 0 ,
where
x ^ 0 = x ^ ( 0 ) = ( x 0 , x 0 , 1 , , x 0 , m 1 ) C m .
First of all, we notice in this remark that care must be exercised, since some methods for the standard case α = 1 do not apply when α < 1 . For example, the new Mittag–Leffler-type function (40) (the same occurs for the classical Mittag–Leffler function (17)) does not satisfy the product property of the exponential
e^{A_1 + A_2} = e^{A_1} e^{A_2},    (187)
E_α(A_1 + A_2) ≠ E_α(A_1) E_α(A_2),    (188)
E_α(A_1 + A_2) ≠ E_α(A_1) E_α(A_2),    (189)
when the matrices A 1 and A 2 commute, in general. The property (187), which is based on the binomial expansion and canceling out the factorial, is key to compute e A when A is not diagonalizable. For example, a Jordan block
J = μ Id + N ,
where μ C is an eigenvalue and N is a nilpotent matrix, satisfies
e J = e μ e N ;
hence, e J is a finite sum. However, in general,
E_α(J) ≠ E_α(μ) E_α(N).
Indeed, if N N 0 = 0 , then
E α ( J ) =   n = 0 J n Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) =   n = 0 ( μ Id + N ) n Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) =   n = 0 1 Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) k = 0 n n k μ n k N k =   n = 0 N 0 1 Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) k = 0 n n k μ n k N k   + n = N 0 + 1 1 Γ ( 2 α ) n j = 1 n Γ ( j + 1 ) Γ ( j + 1 α ) k = 0 N 0 n k μ n k N k ,
which is an infinite series. Likewise, if v is a generalized eigenvector of A associated with an eigenvalue μ C , then
e^A v = e^{μ Id + (A − μ Id)} v = e^μ e^{A − μ Id} v
is a finite sum again. Nonetheless, in general,
E_α(A) v = E_α(μ Id + (A − μ Id)) v ≠ E_α(μ) E_α(A − μ Id) v.
When α = 1 , Liouville’s formula states that
det e A = e trace A .
For α < 1 , this is not the case in general, not even for diagonalizable matrices A , on account of (188) and (189).
Another procedure can be followed to avoid the problematic fact (189).
When the eigenvalues λ 1 , , λ r of A are distinct, then A is diagonalizable. Let A = P D P 1 , where D is the diagonal matrix of eigenvalues and P is the invertible matrix of eigenvectors, of size m × m . If y ^ = P 1 x ^ , then   L D α y ^ = D y ^ and y ^ 0 = y ^ ( 0 ) = P 1 x ^ 0 . Therefore,
ŷ = E_α(Dt) ŷ_0 = diag( E_α(λ_1 t), …, E_α(λ_r t) ) ŷ_0.
This implies that
x ^ E α ( λ 1 t ) , , E α ( λ r t ) .
From
(   L D α λ i ) ( E α ( λ i t ) ) = 0 ,
it is clear that E α ( λ i t ) solves the sequential problem (120). Since the cardinality of
{ E α ( λ 1 t ) , , E α ( λ r t ) } S
is m and dim S = m —see Proposition 10—we obtain that (190) is a basis for S .
When there are repeated eigenvalues, the matrix A in (114) cannot be diagonalizable because the minimal polynomial coincides with the characteristic polynomial here. Then, A is expressed with Jordan blocks J 1 , , J r of size n 1 × n 1 , , n r × n r , respectively. Let A = P J P 1 , where J is the Jordan form and P is the invertible matrix of generalized eigenvectors. If y ^ = P 1 x ^ , then   L D α y ^ = J y ^ and y ^ 0 = y ^ ( 0 ) = P 1 x ^ 0 . Therefore,
y ^ = E α ( J t ) y ^ 0 = E α ( J 1 t ) E α ( J r t ) y ^ 0 .
For each E α ( J i t ) , where J i = λ i Id + N i , we use a matrix Taylor development:
E α ( J i t ) =   E α ( λ i t Id + N i t ) =   n = 0 1 n ! E α ( n ) ( λ i t ) t n N i n =   n = 0 n i 1 1 n ! E α ( n ) ( λ i t ) t n N i n .
Hence
x ^ { t n E α ( n ) ( λ i t ) : n = 0 , , n i 1 , i = 1 , , r } .
Nonetheless, to prove that
{ t n E α ( n ) ( λ i t ) : n = 0 , , n i 1 , i = 1 , , r } S ,
one needs to establish
(   L D α λ i ) n i ( t n E α ( n ) ( λ i t ) ) = 0
for n = 0 , , n i 1 . Then, one should proceed as in the proof of Theorem 8, from (140) until (155) and (156). For α = 1 , it is more or less straightforward that (191) holds, but further computations are needed for the fractional case. Once (191) is known, the fact that dim S = m from Proposition 10 entails that (191) is a basis for S .
I decided to conduct the methodology based on scalar power series because of the following facts.
  • It uses the decomposition (120) and scalar first-order linear problems iteratively, which enlightens the underlying structure of the problem. This is specially true for the order m = 2 .
  • Essentially, one needs to prove (155) and (156) in any case, to ensure that the functions belong to S . That is the difficult part.
  • With (120), only the upper bound dim S m is needed, which can be established from uniqueness or from the first isomorphism theorem; see (125). For (125), previous existence-and-uniqueness theory or results for linear differential systems are not a prerequisite.
  • Although well known, the equality between the minimal polynomial and the characteristic polynomial of A is a key step to distinguish between multiplicities equal to one and repeated eigenvalues. With our methodology, no Jordan forms, generalized eigenvectors or minimal polynomials are needed. Matrix Taylor series are not required either.
  • Our theory, based on (120), immediately gives the method of undetermined coefficients as a consequence; see Corollary 2. Non-homogeneous equations with certain forcing terms—see Examples 1 and 2—can be addressed.
  • Power series have gained importance in the study of fractional models in recent years; see the Introduction section. We show their use for arbitrary sequential problems.
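To complement the remark, the matrix series E_α(At) can be evaluated directly and compared with the eigen-decomposition route in the diagonalizable case; the same sketch (ours; the 2 × 2 companion matrix, the commuting pair, and the truncation are arbitrary) also exhibits numerically the failure of the product property (188) and (189):

import numpy as np
from scipy.special import gamma

alpha, t, N = 0.75, 0.5, 60

def ml_matrix(M):
    # matrix version of the Mittag-Leffler-type series (40): sum_n c_n M^n (truncated)
    out, term, c = np.eye(M.shape[0]), np.eye(M.shape[0]), 1.0
    for n in range(1, N):
        c *= gamma(n + 1 - alpha) / (gamma(2 - alpha) * gamma(n + 1))
        term = term @ M
        out = out + c * term
    return out

A = np.array([[0.0, 1.0], [-2.0, -3.0]])          # companion matrix of lambda^2 + 3 lambda + 2
eigvals, Pm = np.linalg.eig(A)                    # eigenvalues -1 and -2
ml_scalar = lambda z: ml_matrix(np.array([[z]]))[0, 0]

direct  = ml_matrix(A * t)
via_eig = Pm @ np.diag([ml_scalar(l * t) for l in eigvals]) @ np.linalg.inv(Pm)
print(np.max(np.abs(direct - via_eig)))           # ~ 0 (roundoff): E_alpha(A t) = P diag(E_alpha(lambda_i t)) P^{-1}

# the exponential product rule has no analog: E_alpha(B1 + B2) != E_alpha(B1) E_alpha(B2), even if B1 B2 = B2 B1
B1, B2 = 0.3 * np.eye(2), 0.2 * A
print(np.max(np.abs(ml_matrix(B1 + B2) - ml_matrix(B1) @ ml_matrix(B2))))   # clearly nonzero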

7. Sequential Linear Equations with Analytic Coefficients and Order Two: Context and Solution

The aim of this section is the study of non-autonomous linear L-fractional equations of sequential type to extend the analysis conducted in the earlier section. We focus on the case of order two, with analytic functions. First, we provide the context of the problem, and then we solve it.

7.1. Context

In the previous section, we solved the autonomous linear Equation (113), with the operator’s decomposition (120). When the coefficients are not constant, such a procedure cannot be carried out.
In this part, we address the following non-autonomous linear equation in dimension d = 1 :
  L D 2 α x ( t ) + p ( t ) ·   L D α x ( t ) + q ( t ) x ( t ) = c ( t ) ,
where
p ( t ) = n = 0 p n t n , q ( t ) = n = 0 q n t n , c ( t ) = n = 0 c n t n
are any power series that are convergent on an interval [ 0 , T ] , with terms p n , q n , c n C . Again,   L D 2 α is understood sequentially, as   L D α   L D α . Like in the classical model with ordinary derivative, we seek a power-series solution for (192). Compared to Theorem 8, the coefficients of this power series will not be given in the closed form (see (195)).
By Proposition 9, the Equation (192) can be written as a first-order equation of dimension 2. If S is the vector space of solutions of the homogeneous part of (192), then dim S 2 , by the uniqueness Proposition 8. Indeed, the linear map
Ξ ˜ : S C 2 ,
Ξ ˜ x = ( x ( 0 ) ,   L D α x ( 0 ) )
is injective. Since we have not tackled non-autonomous equations of the type   L D α z ( t ) = A ˜ ( t ) z ( t ) , where A ˜ is a continuous matrix function, we cannot ensure the surjectivity of Ξ ˜ for the moment. In what follows, we will establish two linearly independent power series that form a basis for S .
For (192), initial data are defined by (127).

7.2. Results

The main theorem of this section is the following. After proving it, we will discuss two examples: the L-fractional Airy equation and the L-fractional Hermite equation.
Theorem 9.
Given (192) with coefficients (193) on [ 0 , T ] , the general solution on [ 0 , T ) is given by
x ( t ) = n = 0 x n t n ,
where
x_{n+2} = [ Γ(n + 3 − α) Γ(n + 2 − α) / ( Γ(n + 3) Γ(n + 2) Γ(2 − α)^2 ) ] ( − ∑_{l=0}^n p_{n−l} x_{l+1} Γ(l + 2) Γ(2 − α)/Γ(l + 2 − α) − ∑_{l=0}^n q_{n−l} x_l + c_n ).    (195)
The coefficients x 0 and x 1 correspond to x ( 0 ) = x 0 and   L D α x ( 0 ) = x 0 , 1 , respectively. A basis of the homogeneous part ( c n = 0 ) is obtained for ( x 0 , x 0 , 1 ) = ( 1 , 0 ) and ( x 0 , x 0 , 1 ) = ( 0 , 1 ) , respectively.
Proof. 
Given (194), the following L-fractional derivatives apply:
  L D α x ( t ) = n = 0 x n + 1 Γ ( n + 2 ) Γ ( 2 α ) Γ ( n + 2 α ) t n ,
  L D 2 α x ( t ) = n = 0 x n + 2 Γ ( n + 3 ) Γ ( n + 2 ) Γ ( 2 α ) 2 Γ ( n + 3 α ) Γ ( n + 2 α ) t n ,
see Corollary 1. Placing these derivatives into (192), with Cauchy products, and matching terms of the expansions, the recurrence relation (195) is obtained. It remains to check that the series (194) actually converges on [ 0 , T ) .
Concerning (193), the coefficients are controlled as follows:
| p n | C T n , | q n | C T n , | c n | C T n ,
where C > 0 is a constant. By the triangular inequality and induction, the sequence { x n } n = 0 is “majorized” by
H n + 2 = Γ ( n + 3 α ) Γ ( n + 2 α ) Γ ( n + 3 ) Γ ( n + 2 ) Γ ( 2 α ) 2 C T n l = 0 n T l H l + 1 Γ ( l + 2 ) Γ ( 2 α ) Γ ( l + 2 α ) + H l + 1 ,
for n 0 ,
H 0 = | x 0 | , H 1 = | x 1 | .
By splitting the sum l = 0 n in (196) into l = 0 n 1 and the n-th term, one derives
H n + 2 =   Γ ( n + 3 α ) Γ ( n + 1 ) Γ ( n + 3 ) Γ ( n + 1 α ) T + C Γ ( n + 3 α ) Γ ( n + 3 ) H n + 1   + C Γ ( n + 3 α ) Γ ( n + 2 α ) Γ ( n + 3 ) Γ ( n + 2 ) Γ ( 2 α ) H n ,
for n 1 . Then, if we pick any v ( 0 , T ) ,
H n + 2 v n + 2 =   Γ ( n + 3 α ) Γ ( n + 1 ) Γ ( n + 3 ) Γ ( n + 1 α ) T v + C Γ ( n + 3 α ) Γ ( n + 3 ) v H n + 1 v n + 1   + C Γ ( n + 3 α ) Γ ( n + 2 α ) Γ ( n + 3 ) Γ ( n + 2 ) Γ ( 2 α ) v 2 H n v n .
By letting
K n = max 0 l n H l v l ,
one has the bound
H n + 2 v n + 2   Γ ( n + 3 α ) Γ ( n + 1 ) Γ ( n + 3 ) Γ ( n + 1 α ) T v + C Γ ( n + 3 α ) Γ ( n + 3 ) v   + C Γ ( n + 3 α ) Γ ( n + 2 α ) Γ ( n + 3 ) Γ ( n + 2 ) Γ ( 2 α ) v 2 K n + 1 .
Since
lim n   Γ ( n + 3 α ) Γ ( n + 1 ) Γ ( n + 3 ) Γ ( n + 1 α ) T v + C Γ ( n + 3 α ) Γ ( n + 3 ) v   + C Γ ( n + 3 α ) Γ ( n + 2 α ) Γ ( n + 3 ) Γ ( n + 2 ) Γ ( 2 α ) v 2 = v T < 1 ,
by (42), we deduce that K n + 2 = K n + 1 = K from a certain n 0 . As a consequence, if we take any 0 w < v < T , then
H n w n H n v n w v n L w v n .
Therefore,
n = 0 H n w n < .
This proves that (194) converges on [ 0 , T ) , as wanted.
Concerning the basis of the homogeneous part (with c n = 0 for n 0 ), let y and z be series in S with initial terms ( x 0 , x 0 , 1 ) = ( 1 , 0 ) and ( x 0 , x 0 , 1 ) = ( 0 , 1 ) , respectively. If β 1 y + β 2 z = 0 on [ 0 , T ) , for β 1 , β 2 C , then
0 = β 1 y ( 0 ) + β 2 z ( 0 ) = β 1
and
0 = β 1 ·   L D α y ( 0 ) + β 2 ·   L D α z ( 0 ) = β 2 ,
so { y , z } are linearly independent and form a basis of S . □
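As a consistency check of the recurrence (195), constant coefficients must reproduce the solutions of Theorem 7. The sketch below is ours; it takes p(t) = −3, q(t) = 2, c(t) = 0, whose associated polynomial has roots 1 and 2:

import numpy as np
from numpy.polynomial import polynomial as P
from scipy.special import gamma

alpha, N = 0.65, 60

# Recurrence (195) with constant coefficients p(t) = -3, q(t) = 2, c(t) = 0.
p = np.zeros(N); p[0] = -3.0
q = np.zeros(N); q[0] = 2.0
x = np.zeros(N); x[0], x[1] = 1.0, 0.0        # x(0) = 1, LD^alpha x(0) = 0
for n in range(N - 2):
    s1 = sum(p[n - l] * x[l + 1] * gamma(l + 2) * gamma(2 - alpha) / gamma(l + 2 - alpha) for l in range(n + 1))
    s2 = sum(q[n - l] * x[l] for l in range(n + 1))
    K = gamma(n + 3 - alpha) * gamma(n + 2 - alpha) / (gamma(n + 3) * gamma(n + 2) * gamma(2 - alpha) ** 2)
    x[n + 2] = K * (-s1 - s2)                  # c_n = 0 here

def mlc(lam):
    # Taylor coefficients of E_alpha(lam t)
    c = np.zeros(N); c[0] = 1.0
    for n in range(1, N):
        c[n] = c[n - 1] * lam * gamma(n + 1 - alpha) / (gamma(2 - alpha) * gamma(n + 1))
    return c

# reference: Theorem 7 with roots 1 and 2 and the same initial data
lam1, lam2, x0, x01 = 1.0, 2.0, 1.0, 0.0
ref = (x01 - lam2 * x0) / (lam1 - lam2) * mlc(lam1) + (lam1 * x0 - x01) / (lam1 - lam2) * mlc(lam2)
print(P.polyval(0.6, x), P.polyval(0.6, ref))  # the two power series coincide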
Example 6.
Let
  L D 2 α x ( t ) + a t x ( t ) = 0
be the L-fractional version of Airy’s equation, where a C . Here, p = c = 0 and q ( t ) = a t . By (195),
x 2 = 0
and
x_{n+2} = −a [ Γ(n + 3 − α) Γ(n + 2 − α) / ( Γ(n + 3) Γ(n + 2) Γ(2 − α)^2 ) ] x_{n−1},
for n 1 . This difference equation can be solved as follows:
x 3 n 1 = 0 ,
x 3 n = ( 1 ) n a n j = 1 n Γ ( 3 j α ) Γ ( 3 j + 1 α ) Γ ( 2 α ) 2 n j = 1 n Γ ( 3 j ) Γ ( 3 j + 1 ) x 0 ,
x 3 n + 1 = ( 1 ) n a n j = 1 n Γ ( 3 j + 1 α ) Γ ( 3 j + 2 α ) Γ ( 2 α ) 2 n j = 1 n Γ ( 3 j + 1 ) Γ ( 3 j + 2 ) x 0 , 1 ,
for n 1 . Hence,
y ( t ) = n = 0 ( 1 ) n a n j = 1 n Γ ( 3 j α ) Γ ( 3 j + 1 α ) Γ ( 2 α ) 2 n j = 1 n Γ ( 3 j ) Γ ( 3 j + 1 ) t 3 n
and
z ( t ) = n = 0 ( 1 ) n a n j = 1 n Γ ( 3 j + 1 α ) Γ ( 3 j + 2 α ) Γ ( 2 α ) 2 n j = 1 n Γ ( 3 j + 1 ) Γ ( 3 j + 2 ) t 3 n + 1
form the basis of solutions of (197), on [ 0 , ) .
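The Airy-type basis can be generated directly from the recurrence and verified on truncated series. The following sketch is ours; the value of a and the truncation order are arbitrary, and the multiplication by t is implemented as a shift of coefficients:

import numpy as np
from numpy.polynomial import polynomial as P
from scipy.special import gamma

alpha, a, N = 0.8, 1.0, 61

def LD(c):
    # termwise L-fractional derivative on power-series coefficients
    out = np.zeros_like(c)
    for n in range(len(c) - 1):
        out[n] = c[n + 1] * gamma(n + 2) * gamma(2 - alpha) / gamma(n + 2 - alpha)
    return out

# first basis solution of the L-fractional Airy equation: y(0) = 1, LD^alpha y(0) = 0
y = np.zeros(N); y[0] = 1.0            # y[1] = y[2] = 0
for n in range(1, N - 2):
    K = gamma(n + 3 - alpha) * gamma(n + 2 - alpha) / (gamma(n + 3) * gamma(n + 2) * gamma(2 - alpha) ** 2)
    y[n + 2] = -a * K * y[n - 1]

residual = LD(LD(y)) + a * np.concatenate(([0.0], y[:-1]))   # the shift implements multiplication by t
print(P.polyval(1.0, residual))                              # ~ 0 up to truncation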
Example 7.
Let
  L D 2 α x ( t ) 2 t ·   L D α x ( t ) + a x ( t ) = 0
be the L-fractional Hermite’s equation, where a C . The input polynomials are p ( t ) = 2 t , q ( t ) = a and c ( t ) = 0 . According to (195),
x_{n+2} = [ Γ(n + 3 − α) Γ(n + 2 − α) / ( Γ(n + 3) Γ(n + 2) Γ(2 − α)^2 ) ] ( 2 Γ(n + 1) Γ(2 − α)/Γ(n + 1 − α) − a ) x_n,
for n 0 . In closed form,
x 2 n + 1 = x 1 i = 3 2 n + 2 Γ ( i α ) Γ ( 2 α ) 2 n i = 3 2 n + 2 Γ ( i ) i = 1 n 2 Γ ( 2 i ) Γ ( 2 α ) Γ ( 2 i α ) a
and
x 2 n = x 0 i = 2 2 n + 1 Γ ( i α ) Γ ( 2 α ) 2 n i = 2 2 n + 1 Γ ( i ) i = 1 n 2 Γ ( 2 i 1 ) Γ ( 2 α ) Γ ( 2 i 1 α ) a .
As a consequence, the functions
y ( t ) = n = 0 t 2 n + 1 i = 3 2 n + 2 Γ ( i α ) Γ ( 2 α ) 2 n i = 3 2 n + 2 Γ ( i ) i = 1 n 2 Γ ( 2 i ) Γ ( 2 α ) Γ ( 2 i α ) a
and
z ( t ) = n = 0 t 2 n i = 2 2 n + 1 Γ ( i α ) Γ ( 2 α ) 2 n i = 2 2 n + 1 Γ ( i ) i = 1 n 2 Γ ( 2 i 1 ) Γ ( 2 α ) Γ ( 2 i 1 α ) a
form the basis of solutions of (198), on [ 0 , ) . Notice that, if
a = 2 λ ,
where
λ = Γ ( i ) Γ ( 2 α ) Γ ( i α ) , i 1 , i Z ,
then there exists a polynomial solution of (198):
  N y ( t ) = n = 0 N t 2 n + 1 i = 3 2 n + 2 Γ ( i α ) Γ ( 2 α ) 2 n i = 3 2 n + 2 Γ ( i ) i = 1 n 2 Γ ( 2 i ) Γ ( 2 α ) Γ ( 2 i α ) a ,
  N z ( t ) = n = 0 N t 2 n i = 2 2 n + 1 Γ ( i α ) Γ ( 2 α ) 2 n i = 2 2 n + 1 Γ ( i ) i = 1 n 2 Γ ( 2 i 1 ) Γ ( 2 α ) Γ ( 2 i 1 α ) a ,
for N 0 . These polynomials extend, in a fractional sense, the classical Hermite’s polynomials.
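The eigenvalue condition can also be checked numerically: choosing a = 2Γ(i)Γ(2 − α)/Γ(i − α) makes the corresponding chain of coefficients terminate, leaving a polynomial solution. The sketch below is ours, with the arbitrary choices i = 4 and α = 0.5:

import numpy as np
from scipy.special import gamma

alpha, i, N = 0.5, 4, 12
a = 2 * gamma(i) * gamma(2 - alpha) / gamma(i - alpha)   # eigenvalue-type choice of a

# odd basis solution of the L-fractional Hermite equation: x(0) = 0, LD^alpha x(0) = 1
x = np.zeros(N); x[1] = 1.0
for n in range(N - 2):
    K = gamma(n + 3 - alpha) * gamma(n + 2 - alpha) / (gamma(n + 3) * gamma(n + 2) * gamma(2 - alpha) ** 2)
    x[n + 2] = K * (2 * gamma(n + 1) * gamma(2 - alpha) / gamma(n + 1 - alpha) - a) * x[n]

print(np.round(x, 6))   # coefficients vanish from degree i = 4 on: the solution is a degree-3 polynomial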

8. Open Problems

We broadly list some questions, which also highlight the limitations of the work:
  • Would the L-fractional derivative have better performance than the Caputo fractional derivative in specific modeling problems? According to Section 2 and Table 1, the L-fractional derivative and its associated differential equations have many appealing properties. For example, the solution is smooth, its ordinary derivative at the initial instant is finite, the vector field of the equation is a velocity with units of time^{−1}, and a differential can be associated with the fractional derivative. The appropriateness of the L-fractional derivative shall be checked with applied models, simulations, and fitting to real data, beyond purely theoretical work.
  • Is it possible to derive more formulas, improper/contour integral representations, applications, and numerical algorithms for the new Mittag–Leffler-type function (40)? Obviously, the classical Mittag–Leffler function (17) is much more developed theoretically.
  • Can the “almost everywhere” condition in the fundamental theorem of L-fractional calculus (and in Caputo fractional calculus) be weakened? (See Lemma 1 and Proposition 1.) We know that, for analytic functions and variations of them, the fundamental theorem of L-fractional calculus holds at every point t, not just almost everywhere (Corollary 1 and Lemma 3). Analogously, for fractional analytic functions, the fundamental theorem of Caputo fractional calculus is satisfied at every point t, not only almost everywhere (Remark 1), hence the potential of power-series expansions in fractional calculus, both for applications and theory. However, it would be of relevance to investigate whether there exists a larger class of functions for which there is equality at every t. We highlight the need to conduct rigorous computations in fractional calculus to make it clear what kind of solutions one obtains (an everywhere solution, an almost-everywhere solution, a solution to the fixed-point integral problem, a solution to the modified Caputo equation, etc.; see Remark 3, for example).
  • Is it possible to find closed-form expressions for the composed integral operator ${}^L J^{m\alpha}$? A probabilistic structure was given to ${}^L J^{m\alpha}$ depending on beta-distributed delays (Section 5.2), and expressions were obtained for source terms based on power functions (Section 5.3). We wonder whether ${}^L J^{m\alpha}$ could be given as a convolution, like in the Caputo case, and whether the solution $x(t)$ would depend on some new two-parameter Mittag–Leffler-type function.
  • Would the Laplace transform have any role when solving L-fractional differential equations? The power-series method is a powerful tool for L-fractional differential equations, by the analyticity of the solutions. However, the use of the Laplace transform has not been checked. The increase in the nonlinearity in the equation with $t^{1-\alpha}$ may complicate the applicability or the usefulness of the transform. Furthermore, the use should be precise, under appropriate hypotheses.
  • Can the probability link (Section 5.2) established in the paper help understand and generalize the concept of fractional derivative further? The L-fractional derivative and the associated integral operator distribute the past time with a beta distribution. Hence, the L derivative includes history’s effects on the model, according to a fixed probability law; a short numerical sketch after this list illustrates this averaging. For the fractional order 1, the ordinary derivative is local, while the time of the Riemann integral is distributed uniformly. Given an interval, the uniform distribution maximizes the Shannon entropy, so the benefits of the fractional derivative in terms of memory shall be investigated.
  • Can the new Mittag–Leffler-type function (40) be used in other settings as a substitute for the exponential function, for example, to define novel probability distributions, such as a “Poisson distribution” with mass function related to the Mittag–Leffler-type function, or to study partial differential equations with exponentials involved, such as the heat equation? In the fractional case, the new Mittag–Leffler function would emerge.
  • Can we expect (Section 5.2) a better characterization of the finiteness of the fractional moment-generating function of random variables? One would probably need to apply the Cauchy–Hadamard theorem adequately, instead of the ratio test. Since the new Mittag–Leffler-type function is defined with products of gamma functions, the ratio test is the most straightforward tool to analyze the convergence of the series. On the other hand, the fractional moment-generating function may be of use to study some stochastic/random fractional differential equations.
  • Can the theory on m-th order autonomous linear equations be generalized to variable coefficients? Is it possible to find a variation-of-constants formula when forcing terms are present? This new research would continue the results from Section 6.
  • Can we build a theory about L-fractional dynamical systems? The corresponding fractional exponential, which is the proposed Mittag–Leffler-type function (40), should play a key role, as it solves the linearized problem. The monotonicity and asymptotic properties of the new function shall be investigated. Relevant applications, such as the study of the L-fractional SIR (susceptible–infected–recovered) epidemiological model, would come up.
  • Is the theory on linear L-fractional differential equations with analytic coefficients extensible to the case of regular singular points? The problems are that changes of variable and the product rule for the fractional derivative are not amenable to computation. This new research would continue the results from Section 7.
  • What are the properties of the fractional Hermite's polynomials defined in Example 7? Do they satisfy certain formulas or orthogonality conditions? A similar analysis would yield fractional Legendre's polynomials, fractional Laguerre's polynomials, and so on.
  • Does it make sense to rescale other fractional derivatives? For example, we commented that the Λ-fractional derivative normalizes the Riemann–Liouville derivative, and it shall be investigated mathematically. Would fractional operators $D^{\alpha}$ with continuous or bounded kernels improve their applicability if a factor $(D^{\alpha} t)^{-1}$ is included?
  • Can we explicitly solve other models, with nonlinearities, under the L-fractional derivative? With the experience of the Caputo derivative, the main tool shall be the power-series method, under analytic inputs. The solution will be local, as predicted by the Cauchy–Kovalevskaya theorem. It will be well defined and pointwise, according to Lemma 1, Proposition 1, Corollary 1, and Remark 1.
  • Finally, what about fractional partial differential equations? There are no studies for the L-fractional derivative. In the Caputo context, formal solutions have been found in terms of bivariate fractional power series, but rigorous theorems are yet to be investigated.
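Regarding the probability link mentioned above, the following Monte Carlo sketch (an added illustration, not a result of the paper: it assumes the representation ${}^L D^{\alpha} x(t) = \mathbb{E}[x'(tU)]$ with $U \sim \mathrm{Beta}(1, 1-\alpha)$, which is one way to make the beta-distributed delay concrete when ${}^L D^{\alpha}$ is taken as the Caputo derivative normalized by the Caputo derivative of $t$; the test function and the sample size are arbitrary) shows how the operator averages the ordinary slope over the past.

```python
import numpy as np
from math import gamma

def L_derivative_mc(x_prime, t, alpha, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the L-fractional derivative at time t from the
    assumed representation  ^L D^alpha x(t) = E[x'(t U)],  U ~ Beta(1, 1 - alpha)."""
    rng = np.random.default_rng(seed)
    u = rng.beta(1.0, 1.0 - alpha, size=n_samples)
    return float(np.mean(x_prime(t * u)))

# Test with x(t) = t^2: the power rule used in the examples gives
# ^L D^alpha t^2 = Gamma(3) Gamma(2 - alpha) / Gamma(3 - alpha) * t.
alpha, t = 0.6, 1.5
estimate = L_derivative_mc(lambda s: 2 * s, t, alpha)
exact = gamma(3) * gamma(2 - alpha) / gamma(3 - alpha) * t
print(estimate, exact)
```

As $\alpha \to 1$, the Beta$(1, 1-\alpha)$ weight concentrates at $u = 1$ and the local ordinary derivative is recovered, in line with Table 1.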

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares that there are no conflicts of interest.

References

  1. Podlubny, I. Fractional Differential Equations. An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications, 1st ed.; Academic Press: Cambridge, MA, USA, 1998; Volume 198. [Google Scholar]
  2. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of the Fractional Differential Equations. In North-Holland Mathematics Studies; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  3. Diethelm, K. The Analysis of Fractional Differential Equations. In An Application-Oriented Exposition Using Differential Operators of Caputo Type; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  4. Abbas, S.; Benchohra, M.; Lazreg, J.E.; Nieto, J.J.; Zhou, Y. Fractional Differential Equations and Inclusions. Classical and Advanced Topics; World Scientific: Singapore, 2023. [Google Scholar]
  5. Yong, Z. Basic Theory of Fractional Differential Equations, 3rd ed.; World Scientific: Singapore, 2023. [Google Scholar]
  6. Ascione, G.; Mishura, Y.; Pirozzi, E. Fractional Deterministic and Stochastic Calculus; De Gruyter Series in Probability and Stochastics; Walter de Gruyter: Berlin, Germany, 2024; Volume 4. [Google Scholar]
  7. Oliveira, E.C.D.; Machado, J.A.T. A review of definitions for fractional derivatives and integral. Math. Probl. Eng. 2014, 2014, 238459. [Google Scholar] [CrossRef]
  8. Ortigueira, M.D.; Machado, J.T. What is a fractional derivative? J. Comput. Phys. 2015, 293, 4–13. [Google Scholar] [CrossRef]
  9. Teodoro, G.S.; Machado, J.T.; Oliveira, E.C.D. A review of definitions of fractional derivatives and other operators. J. Comput. Phys. 2019, 388, 195–208. [Google Scholar] [CrossRef]
  10. Webb, J.R.L. Initial value problems for Caputo fractional equations with singular nonlinearities. Electron. J. Differ. Equ. 2019, 2019, 1–32. [Google Scholar]
  11. Diethelm, K.; Garrappa, R.; Giusti, A.; Stynes, M. Why fractional derivatives with nonsingular kernels should not be used. Fract. Calc. Appl. Anal. 2020, 23, 610–634. [Google Scholar] [CrossRef]
  12. Diethelm, K.; Kiryakova, V.; Luchko, Y.; Machado, J.T.; Tarasov, V.E. Trends, directions for further research, and some open problems of fractional calculus. Nonlinear Dyn. 2022, 107, 3245–3270. [Google Scholar] [CrossRef]
  13. Mieghem, P.V. The origin of the fractional derivative and fractional non-Markovian continuous time processes. Phys. Rev. Res. 2022, 4, 023242. [Google Scholar] [CrossRef]
  14. Caputo, M. Linear model of dissipation whose Q is almost frequency independent-II. Geophys. J. Int. 1967, 13, 529–539. [Google Scholar] [CrossRef]
  15. Dong, H.; Kim, D. An approach for weighted mixed-norm estimates for parabolic equations with local and non-local time derivatives. Adv. Math. 2021, 377, 107494. [Google Scholar] [CrossRef]
  16. Dong, H.; Kim, D. Time fractional parabolic equations with partially SMO coefficients. J. Differ. Equ. 2023, 377, 759–808. [Google Scholar] [CrossRef]
  17. Lastra, A.; Prisuelos-Arribas, C. Solutions of linear systems of moment differential equations via generalized matrix exponentials. J. Differ. Equ. 2023, 372, 591–611. [Google Scholar] [CrossRef]
  18. Cinque, F.; Orsingher, E. Analysis of fractional Cauchy problems with some probabilistic applications. J. Math. Anal. Appl. 2024, 536, 128188. [Google Scholar] [CrossRef]
  19. Hazarika, D.; Borah, J.; Singh, B.K. Existence and controllability of non-local fractional dynamical systems with almost sectorial operators. J. Math. Anal. Appl. 2024, 532, 127984. [Google Scholar] [CrossRef]
  20. Zhang, W.; Gu, W. Machine learning for a class of partial differential equations with multi-delays based on numerical Gaussian processes. Appl. Math. Comput. 2024, 467, 128498. [Google Scholar] [CrossRef]
  21. Cai, L.; Cao, J.; Jing, F.; Wang, Y. A fast time integral finite difference method for a space-time fractional FitzHugh-Nagumo monodomain model in irregular domains. J. Comput. Phys. 2024, 501, 112744. [Google Scholar] [CrossRef]
  22. Haubold, H.J.; Mathai, A.M.; Saxena, R.K. Mittag–Leffler functions and their applications. J. Appl. Math. 2011, 2011, 298628. [Google Scholar] [CrossRef]
  23. Mainardi, F. Why the Mittag–Leffler function can be considered the queen function of the fractional calculus? Entropy 2020, 22, 1359. [Google Scholar] [CrossRef]
  24. Gorenflo, R.; Kilbas, A.A.; Mainardi, F.; Rogosin, S.V. Mittag–Leffler Functions, Related Topics and Applications; Springer: Berlin, Germany, 2020. [Google Scholar]
  25. Mieghem, P.V. The Mittag–Leffler function. arXiv 2021, arXiv:2005.13330v4. [Google Scholar]
  26. Kürt, C.; Fernandez, A.; Özarslan, M.A. Two unified families of bivariate Mittag–Leffler functions. Appl. Math. Comput. 2023, 443, 127785. [Google Scholar] [CrossRef]
  27. Odibat, Z.M.; Shawagfeh, N.T. Generalized Taylor’s formula. Appl. Math. Comput. 2007, 186, 286–293. [Google Scholar] [CrossRef]
  28. Duan, J.; Chen, L. Solution of fractional differential equation systems and computation of matrix Mittag–Leffler functions. Symmetry 2018, 10, 503. [Google Scholar] [CrossRef]
  29. Area, I.; Nieto, J.J. Power series solution of the fractional logistic equation. Physica A 2021, 573, 125947. [Google Scholar] [CrossRef]
  30. D’Ovidio, M.; Lai, A.C.; Loreti, P. Solutions of Bernoulli equations in the fractional setting. Fractal Fract. 2021, 5, 57. [Google Scholar] [CrossRef]
  31. Balzotti, C.; D’Ovidio, M.; Loreti, P. Fractional SIS epidemic models. Fractal Fract. 2020, 4, 44. [Google Scholar] [CrossRef]
  32. Jornet, M. Power-series solutions of fractional-order compartmental models. Comput. Appl. Math. 2024, 43, 67. [Google Scholar] [CrossRef]
  33. Jornet, M. On the Cauchy–Kovalevskaya theorem for Caputo fractional differential equations. Phys. D Nonlinear Phenom. 2024, 462, 134139. [Google Scholar] [CrossRef]
  34. Shchedrin, G.; Smith, N.C.; Gladkina, A.; Carr, L.D. Fractional derivative of composite functions: Exact results and physical applications. arXiv 2018, arXiv:1803.05018. [Google Scholar]
  35. Area, I.; Nieto, J.J. Fractional-order logistic differential equation with Mittag–Leffler-type Kernel. Fractal Fract. 2021, 5, 273. [Google Scholar] [CrossRef]
  36. Nieto, J.J. Fractional Euler numbers and generalized proportional fractional logistic differential equation. Fract. Calc. Appl. Anal. 2022, 25, 876–886. [Google Scholar] [CrossRef]
  37. West, B.J. Exact solution to fractional logistic equation. Phys. Stat. Mech. Appl. 2015, 429, 103–108. [Google Scholar] [CrossRef]
  38. Area, I.; Losada, J.; Nieto, J.J. A note on the fractional logistic equation. Phys. Stat. Mech. Appl. 2016, 444, 182–187. [Google Scholar] [CrossRef]
  39. D’Ovidio, M.; Loreti, P.; Ahrabi, S.S. Modified fractional logistic equation. Phys. Stat. Mech. Appl. 2018, 505, 818–824. [Google Scholar] [CrossRef]
  40. Kexue, L.; Jigen, P. Laplace transform and fractional differential equations. Appl. Math. Lett. 2011, 24, 2019–2023. [Google Scholar] [CrossRef]
  41. Beghin, L.; Cristofaro, L.; Garrappa, R. Renewal processes linked to fractional relaxation equations with variable order. J. Math. Anal. Appl. 2024, 531, 127795. [Google Scholar] [CrossRef]
  42. Cruz-López, C.A.; Espinosa-Paredes, G. Fractional radioactive decay law and Bateman equations. Nucl. Eng. Technol. 2022, 54, 275–282. [Google Scholar] [CrossRef]
  43. Jornet, M. On the random fractional Bateman equations. Appl. Math. Comput. 2023, 457, 128197. [Google Scholar] [CrossRef]
  44. Garrappa, R. On linear stability of predictor-corrector algorithms for fractional differential equations. Int. J. Comput. Math. 2010, 87, 2281–2290. [Google Scholar] [CrossRef]
  45. Garrappa, R. Predictor-Corrector PECE Method for Fractional Differential Equations. MATLAB Central File Exchange, Version 1.4.0.0. 2012. Available online: https://www.mathworks.com/matlabcentral/fileexchange/32918-predictor-corrector-pece-method-for-fractional-differential-equations (accessed on 20 February 2024).
  46. Garrappa, R. Numerical solution of fractional differential equations: A survey and a software tutorial. Mathematics 2018, 6, 16. [Google Scholar] [CrossRef]
  47. Lazopoulos, A.K.; Karaoulanis, D. On L-fractional derivatives and L-fractional homogeneous equations. Int. J. Pure Appl. Math. 2016, 21, 249–268. [Google Scholar]
  48. Lazopoulos, K.A.; Karaoulanis, D.; Lazopoulos, A.K. On fractional modelling of viscoelastic mechanical systems. Mech. Res. Commun. 2016, 78, 1–5. [Google Scholar] [CrossRef]
  49. Jornet, M.; Nieto, J.J. Power-series solution of the L-fractional logistic equation. Appl. Math. Lett. 2024, 154, 109085. [Google Scholar] [CrossRef]
  50. Jornet, M. Theory on new fractional operators using normalization and probability tools. arXiv 2024, arXiv:2403.06198. [Google Scholar]
  51. Lazopoulos, K.A.; Lazopoulos, A.K. Fractional vector calculus and fractional continuum mechanics. Prog. Fract. Differ. Appl. 2016, 2, 67–86. [Google Scholar] [CrossRef]
  52. Lazopoulos, K.A.; Lazopoulos, A.K. Fractional differential geometry of curves & surfaces. Prog. Fract. Differ. Appl. 2016, 2, 169–186. [Google Scholar]
  53. Cottrill-Shepherd, K.; Naber, M. Fractional differential forms. J. Math. Phys. 2001, 42, 2203–2212. [Google Scholar] [CrossRef]
  54. Adda, F.B. The differentiability in the fractional calculus. Nonlinear Anal. 2001, 47, 5423–5428. [Google Scholar] [CrossRef]
  55. Tarasov, V. Liouville and Bogoliubov equations with fractional derivatives. Mod. Phys. Lett. 2007, 21, 237–248. [Google Scholar] [CrossRef]
  56. Vatsala, A.S.; Pageni, G. Series solution method for solving sequential Caputo fractional differential equations. AppliedMath 2023, 3, 730–740. [Google Scholar] [CrossRef]
  57. Uğurlu, E. On some even-sequential fractional boundary-value problems. Fract. Calc. Appl. Anal. 2024, 27, 353–392. [Google Scholar] [CrossRef]
  58. Lazopoulos, K.A.; Lazopoulos, A.K. Equilibrium of Λ-fractional liquid crystals. Mech. Res. Commun. 2024, 136, 104243. [Google Scholar] [CrossRef]
  59. Lazopoulos, K.A.; Lazopoulos, A.K. On the Λ-fractional continuum mechanics fields. Contin. Mech. Thermodyn. 2024, 36, 561–570. [Google Scholar] [CrossRef]
  60. Samko, S.; Kilbas, A.A.; Marichev, O. Fractional Integrals and Derivatives; Taylor & Francis: Abingdon, UK, 1993. [Google Scholar]
  61. Fernandez, A.; Restrepo, J.E.; Suragan, D. On linear fractional differential equations with variable coefficients. Appl. Math. Comput. 2022, 432, 127370. [Google Scholar] [CrossRef]
  62. Fernandez, A.; Restrepo, J.E.; Suragan, D. A new representation for the solutions of fractional differential equations with variable coefficients. Mediterr. J. Math. 2023, 20, 27. [Google Scholar] [CrossRef]
  63. Sambandham, B.; Vatsala, A.S. Basic results for sequential Caputo fractional differential equations. Mathematics 2015, 3, 76–91. [Google Scholar] [CrossRef]
  64. Caputo, M.; Fabrizio, M. A new definition of fractional derivative without singular kernel. Prog. Fract. Differ. Appl. 2015, 1, 73–78. [Google Scholar]
  65. Losada, J.; Nieto, J.J. Properties of a new fractional derivative without singular kernel. Prog. Fract. Differ. Appl. 2015, 1, 87–92. [Google Scholar]
  66. Losada, J.; Nieto, J.J. Fractional integral associated to fractional derivatives with nonsingular kernels. Prog. Fract. Differ. Appl. 2021, 7, 137–143. [Google Scholar]
  67. Area, I.; Nieto, J.J. On a quadratic nonlinear fractional equation. Fractal Fract. 2023, 7, 469. [Google Scholar] [CrossRef]
  68. Kiryakova, V. The multi-index Mittag–Leffler functions as an important class of special functions of fractional calculus. Comput. Math. Appl. 2010, 59, 1885–1895. [Google Scholar] [CrossRef]
  69. Cao Labora, D.; Nieto, J.J.; Rodríguez-López, R. Is it possible to construct a fractional derivative such that the index law holds? Prog. Fract. Differ. Appl. 2018, 4, 1–3. [Google Scholar] [CrossRef]
  70. Brezis, H. Functional Analysis, Sobolev Spaces and Partial Differential Equations; Springer: New York, NY, USA, 2011. [Google Scholar]
  71. Artin, E. The Gamma Function; Dover Books on Mathematics; Dover Publications Inc.: New York, NY, USA, 2015. [Google Scholar]
  72. Barros, L.C.D.; Lopes, M.M.; Pedro, F.S.; Esmi, E.; Santos, J.P.C.D.; Sánchez, D.E. The memory effect on fractional calculus: An application in the spread of COVID-19. Comput. Appl. Math. 2021, 40, 72. [Google Scholar] [CrossRef]
  73. Jornet, M. Beyond the hypothesis of boundedness for the random coefficient of the Legendre differential equation with uncertainties. Appl. Math. Comput. 2021, 391, 125638. [Google Scholar] [CrossRef]
  74. Jornet, M. On the mean-square solution to the Legendre differential equation with random input data. Math. Methods Appl. Sci. 2023, 47, 5341–5347. [Google Scholar] [CrossRef]
  75. Ye, H.; Gao, J.; Ding, Y. A generalized Gronwall inequality and its application to a fractional differential equation. J. Math. Anal. Appl. 2007, 328, 1075–1081. [Google Scholar] [CrossRef]
Table 1. Comparison between the Caputo and the L-fractional derivatives and their applications in differential equations.

|  | Caputo Derivative | L Derivative |
| --- | --- | --- |
| $\alpha = 1$ | $x'(t)$ | $x'(t)$ |
| $\alpha = 0$ | $x(t) - x(0)$ | $(x(t) - x(0))/t$ |
| $\alpha = 0$, $t = 0$ | $0$ | $x'(0)$ |
| derivative of constants | $0$ | $0$ |
| initial condition | $x(0) = x_0$ | $x(0) = x_0$ |
| derivative of $t$ | $t^{1-\alpha}/\Gamma(2-\alpha)$ | $1$ |
| power series | fractional ($t^{\alpha n}$) | classical ($t^{n}$) |
| regularity of solution | absolutely continuous | smooth |
| $\alpha \in (0,1)$, $x'(0^{+})$ | $\pm \infty$ | it is ${}^L D^{\alpha} x(0) \in (-\infty, \infty)$ |
| kernel | singular | singular and non-singular |
| issues at $t = 0$ | no | no |
| units | time$^{-\alpha}$ | time$^{-1}$ |
| differential form | $d^{\alpha} x(t)/(dt)^{\alpha}$ | $d^{\alpha} x(t)/d^{\alpha} t$ |
| velocity | no | yes |
| fluxes | no | yes |
| memory | yes | yes |
| “exponential” function | yes (Mittag–Leffler) | yes (another Mittag–Leffler) |