How Many Fractional Derivatives Are There?

Abstract: In this paper, we introduce a unified fractional derivative, defined by two parameters (order and asymmetry). From this, all the interesting derivatives can be obtained. We study the one-sided derivatives and show that most known derivatives are particular cases. We consider also some myths of Fractional Calculus and false fractional derivatives. The results are expected to help limit the appearance of derivatives that differ from existing ones only because they are defined on distinct domains, and to prevent the ambiguous use of the concept of fractional derivative.


Introduction
Four centuries after the first reference to the possibility of non-integer order derivatives, the presently termed Fractional Calculus (FC) has reached a crossroads where multiple definitions are mixed, causing a huge confusion that makes life very difficult for those who only intend to make applications in Science and Engineering. In fact, the first reference was found in a letter from Leibniz to J. Bernoulli [1]. Although Euler (1730), Fourier (1822), and Abel (1823) touched on the problem, the true father of FC was Liouville [2,3], in spite of his many difficulties in imposing his vision, due to a main obstacle: at that time, the inverse Laplace integral was unknown. Therefore, Liouville could not find a simple way of expressing a function in terms of exponentials, which were the basis for his findings. Anyway, the main definitions we find today are based on the formulae presented by Liouville, mainly the Riemann-Liouville [4], (Dzherbashian-)Caputo [5,6], and Grünwald-Letnikov [4] definitions. Based on these, new derivatives have been proposed, such as Hadamard's [6] or Marchaud's [4]. Consequently, the number of currently existing fractional derivatives (FDs) is now so high that it has become the biggest obstacle to the diffusion of FC in Science and Engineering.
If we also consider the pseudo-derivatives and the disguised integer order derivatives, we conclude that the situation is really confused and confusing. Trying to introduce some order in the field, Oliveira and Machado, first, and Teodoro et al., more recently [7,8], listed such derivatives and introduced a classification according to some specified criteria. However, these papers included some operators that can hardly be classified as FDs. On the other hand, in recent years a great discussion took place in forums, conferences, and articles, in which the concepts of system and derivative are often confused. In a sequence of papers, Ortigueira and Machado tried to clarify the situation by proposing a coherent definition of FD [9], a description of FD suitable for applications in Science and Engineering [10], and introducing the FD in the context of fractional linear systems [11].
In this text, we take a further step to clarify the situation, by introducing a step-down procedure. We recover the unified fractional derivative (UFD) obtained in [11] after a four-step unification, and proceed as if this UFD were a mother derivative, defined by two parameters (order and asymmetry), from which all the derivatives listed in [7,8] emerge as particular cases [12]. We will work in the context of the Laplace and Fourier transforms. This includes most of the interesting functions and distributions used in practical applications.
We could go further by considering the tempered FDs [13,14], but we will not do so, in order to remain in the context of the derivatives introduced in [7,8]. On the other hand, we direct our attention to what we can call shift-invariant derivatives, without considering the scale-invariant ones, such as the Hadamard [6] and the quantum [15] derivatives, or the discrete-time derivatives [16-18]. We will also not consider variable-order derivatives [17]. Some operators, sometimes called FDs, are analyzed and put in the correct framework.
The paper is outlined as follows. In Section 2, we define the notion of fractional derivative. The UFD and its main properties are introduced in Section 3. The derivatives referred to in [7,8] are then obtained sequentially as particular cases, through suitable choices of the parameters and working domain (Section 4). The operators that, according to our framework, cannot be considered FDs are treated in Section 5. Section 6 concludes the paper with a reflection on which definitions ought to be chosen.

Remark 1.
We adopt here the following assumptions:
• We work on R.
• We use the two-sided Laplace transform (LT)
$$F(s) = \int_{-\infty}^{+\infty} f(t)\, e^{-st}\, \mathrm{d}t,$$
where f(t) is any function defined on R and F(s) is its transform, provided that it has a non-empty region of convergence (ROC).
• The Fourier transform (FT), F[f(t)], is obtained from the LT through the substitution s = iκ, with κ ∈ R.

What Is a Fractional Derivative?
In Signal Processing, independently of the applications to Electrical, Mechanical, Biomedical, or any other Engineering field, there is a very simple way of defining a FD: a FD is a linear operator described by Bode diagrams that are straight lines [11]. In terms of the FT, we can write
$$\mathcal{F}\left[D_{\theta}^{\gamma} f(t)\right] = |\omega|^{\gamma}\, e^{i\theta \frac{\pi}{2} \operatorname{sgn}(\omega)}\, F(\omega), \qquad (2)$$
where γ, θ ∈ R, and D_θ^γ represents the derivative. The operator Ψ_θ^γ(ω) = |ω|^γ e^{iθ(π/2)sgn(ω)} = A(ω)e^{iφ(ω)} is the frequency response of the derivative. The functions A(ω) (amplitude) and φ(ω) (phase), when represented in a log scale for ω > 0, are expressed by straight lines. These can be used to define a FD. However, we need a criterion independent of any transform. As in [9], we define as FD an operator that verifies the following (wide sense) criterion.

Definition 1.
An operator is considered a FD in the wide sense if it satisfies the following properties:

P1 Linearity
The operator is linear.

P2 Identity
The zero order derivative of a function returns the function itself.

P3 Backward compatibility
When the order is integer, the FD gives the same result as the ordinary derivative.

P4 Index law
The index law (semigroup property)
$$D^{\alpha} D^{\beta} f(t) = D^{\alpha+\beta} f(t) \qquad (6)$$
holds for negative orders, α < 0 and β < 0.

P5 Generalized Leibniz rule
The generalized Leibniz rule holds. As is clear, when α = N ∈ Z⁺, we obtain the classical Leibniz rule.
The index law property can be modified to include positive orders. This leads to the strict sense criterion. This criterion has the same five conditions, but P4 is modified to: (6) holds for any α and β.
This is very important because it allows the existence of the inverse derivative: the anti-derivative. It is convenient to state the differences between anti-derivative and "primitive":
• The anti-derivative is unique.
• The anti-derivative is a left and right inverse, while any primitive is only a right inverse.
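The strict-sense index law can be verified on a concrete family of functions. The sketch below (Python; the power-rule formula for the RL integral with a = 0 is the standard one, while the helper names are ours) composes two anti-derivatives of orders 0.3 and 0.4 applied to t and compares the result with a single anti-derivative of order 0.7:

```python
from math import gamma

def frac_int_power(k, beta):
    """RL fractional integral (order beta > 0, lower terminal a = 0) of t**k:
    returns the (coefficient, exponent) of the resulting power function,
    using D^{-beta} t**k = Gamma(k+1)/Gamma(k+1+beta) * t**(k+beta)."""
    return gamma(k + 1) / gamma(k + 1 + beta), k + beta

# index law for negative orders: D^{-0.4} D^{-0.3} t == D^{-0.7} t
c2, e2 = frac_int_power(1.0, 0.3)      # first anti-derivative, order 0.3
c1, e1 = frac_int_power(e2, 0.4)       # second anti-derivative, order 0.4
coeff_two_steps = c2 * c1
coeff_one_step, e_one = frac_int_power(1.0, 0.7)
```

The Gamma-function factors telescope, so the two-step and one-step coefficients coincide, as the index law requires.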
These criteria allow us to clarify the situation of some "disguised" order 1 derivatives and to expose some frauds.

Unified Fractional Derivative
In [11], a UFD incorporating most of the useful derivatives was presented and its properties studied. It is described as follows.
Definition 2. Let f(t) be a function defined on R (C), and let α > −1 if θ ≠ ±α, or α ∈ R if θ = ±α. We define a UFD of GL type through (7), where α is the derivative order and θ the asymmetry parameter. We also define a general integral formulation for the unified anti-derivative, (8), where sgn(·) denotes the signum function. The integral in (8) can be regularized in order to become valid for positive orders [12]. This is done with a suitable substitution, with N = ⌊γ⌋ + 1.
Some known properties of this derivative can be drawn [12,15]. The main ones are:
1. Fourier transformation. It was introduced above in (2). It permits obtaining (8) from (7), using the convolution theorem.
2. Eigenfunctions. The sinusoids are the eigenfunctions of the UFD, with eigenvalue Ψ, as we observe from (2).

5. Existence of the inverse derivative. From (13), the anti-derivative exists when β₂ = −β₁ and θ₁ = −θ₂.
6. Identity operator. According to (13) and (20), the identity operator is obtained.

Suitable choices of these parameters allow us to recover the causal, anti-causal, and bilateral (acausal) derivatives. The particular, most interesting, cases are obtained from (7) and (8). In terms of the frequency response, we have:
• the Riesz derivative and potential, θ = 0;
• the Feller derivative and potential, θ = 1.
From these expressions and using (7), it is possible to devise numerous derivatives by:
1. choosing particular values of the parameters α and θ;
2. restricting the domain of the function at hand.
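To make the straight-line Bode behaviour and the eigenvalue relation concrete, here is a minimal Python sketch; the multiplier form |ω|^α e^{iθ(π/2)sgn(ω)} is assumed from the Riesz-Feller frequency response, and all function names are ours:

```python
import cmath
import math

def ufd_freq_response(omega, alpha, theta):
    """Assumed UFD frequency response: |omega|**alpha * exp(1j*theta*(pi/2)*sgn(omega))."""
    sgn = math.copysign(1.0, omega)
    return abs(omega) ** alpha * cmath.exp(1j * theta * (math.pi / 2) * sgn)

# amplitude: straight line with slope alpha per decade (log-log scale)
slope = math.log10(abs(ufd_freq_response(100.0, 0.5, 1.0))) \
      - math.log10(abs(ufd_freq_response(10.0, 0.5, 1.0)))

# alpha = theta = 1 recovers the ordinary-derivative multiplier i*omega
ordinary = ufd_freq_response(2.0, 1.0, 1.0)
```

With θ = 0 the phase vanishes and the real Riesz multiplier |ω|^α is recovered, consistent with the case list above.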

The GL Derivatives
The great importance of one-sided derivatives in applications leads us to study them in detail. Taking (7) with γ = α and θ = ±α, we obtain the forward (left) and backward (right) GL derivatives that, with some manipulation, can be written as
$$D_{\pm}^{\alpha} f(t) = \lim_{h\to 0^{+}} h^{-\alpha} \sum_{n=0}^{\infty} (-1)^{n} \binom{\alpha}{n} f(t \mp nh). \qquad (27)$$
As it is easy to verify, the forward derivative D₊ is causal, while the backward D₋ is anti-causal. This kind of derivative was proposed first by Liouville [2]. For functions with LT (or FT), we can write
$$\mathcal{L}\left[D_{+}^{\alpha} f(t)\right] = s^{\alpha} F(s), \quad \operatorname{Re}(s) > 0, \qquad (28)$$
$$\mathcal{L}\left[D_{-}^{\alpha} f(t)\right] = (-s)^{\alpha} F(s), \quad \operatorname{Re}(s) < 0, \qquad (29)$$
where F(s) = L[f(t)]. As is known from the study of the LT, the LT of a right (left) function, f(t) = 0, t < a (t > a), a ∈ R, has a ROC defined by Re(s) > 0 (Re(s) < 0). If f(t) is absolutely or square integrable, then it has a FT and we can write
$$\mathcal{F}\left[D_{\pm}^{\alpha} f(t)\right] = (\pm i\kappa)^{\alpha} F(\kappa).$$
The function Ψ_±^α(s) = (±s)^α, with suitable ROC, is the transfer function (TF) of the derivative (also called differintegrator).
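A direct numerical sketch of the forward GL series (27) in Python (the step h and the series truncation are our choices, so the result is only approximate at finite h): since D₊^α e^t = e^t, the exponential must be reproduced for any order:

```python
import math

def gl_forward(f, t, alpha, h=0.01, terms=5000):
    """Truncated Gruenwald-Letnikov forward derivative; the limit h -> 0+
    is not taken, so this is an approximation."""
    acc, c = 0.0, 1.0              # c = (-1)**n * binom(alpha, n)
    for n in range(terms):
        acc += c * f(t - n * h)
        c *= (n - alpha) / (n + 1) # recursion for the GL coefficients
    return acc / h ** alpha

# exponentials are eigenfunctions of the forward derivative:
# D^alpha exp(t) = exp(t) for any alpha
approx = gl_forward(math.exp, 1.0, 0.5)
exact = math.exp(1.0)
```

The recursion for the coefficients avoids evaluating Gamma functions of large arguments.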

The Impulse Response
The relations (28) and (29) suggest the existence of two operators that, convolved with suitable functions, give their FDs. For α > 0, the LT inverse of s^α does not exist as a regular function. However, it has a generalized inverse represented by the pseudo-function
$$\delta_{\pm}^{(\alpha)}(t) = \frac{(\pm t)^{-\alpha-1}}{\Gamma(-\alpha)}\, u(\pm t), \qquad (30)$$
where + (−) corresponds to the causal (anti-causal) case and u(t) denotes the Heaviside unit step. The pseudo-function (30) is called the impulse response (IR) of the differintegrator. The use of the convolution resulting from (28) would allow us to obtain the derivative D_±^α f(t). For α < 0, there is no particular difficulty and we obtain from (8)
$$D_{\pm}^{\alpha} f(t) = \frac{1}{\Gamma(-\alpha)} \int_{0}^{\infty} f(t \mp \tau)\, \tau^{-\alpha-1}\, \mathrm{d}\tau. \qquad (31)$$
The expression corresponding to the anti-causal case was proposed in almost the above form by Liouville [3]. The causal case was deduced from the anti-causal one by Serret [1,19]. Liouville's formula included a factor (±1)^{−α} to ensure the same transfer function for both expressions, although the ROC is defined by ±Re(s) > 0. In most texts, such a factor is removed. In time problems it must be kept but, in space applications, it plays no relevant role. Therefore, we will omit it in the following. The finite domain versions of (31) are called RL integrals:
$$D_{+}^{\alpha} f(t) = \frac{1}{\Gamma(-\alpha)} \int_{a}^{t} f(\tau)\,(t-\tau)^{-\alpha-1}\, \mathrm{d}\tau \qquad (32)$$
and
$$D_{-}^{\alpha} f(t) = \frac{1}{\Gamma(-\alpha)} \int_{t}^{b} f(\tau)\,(\tau-t)^{-\alpha-1}\, \mathrm{d}\tau. \qquad (33)$$
We will assume these forms in the following.
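The RL integral can be evaluated numerically once the endpoint singularity is removed. In the Python sketch below (the substitution s = u^{1/β} and the midpoint rule are our choices), the result is checked against the closed form D^{−1/2} t = Γ(2)/Γ(5/2) t^{3/2} for a = 0:

```python
import math

def rl_fractional_integral(f, t, beta, n=4000):
    """Forward RL integral of order beta in (0, 1), lower terminal a = 0.
    The substitution s = u**(1/beta) removes the endpoint singularity:
    D^{-beta} f(t) = 1/(Gamma(beta)*beta) * Int_0^{t**beta} f(t - u**(1/beta)) du,
    which is then evaluated with the midpoint rule (smooth integrand)."""
    upper = t ** beta
    du = upper / n
    acc = sum(f(t - ((i + 0.5) * du) ** (1.0 / beta)) for i in range(n))
    return acc * du / (math.gamma(beta) * beta)

# closed form for comparison: D^{-1/2} t = Gamma(2)/Gamma(5/2) * t**(3/2) at t = 1
approx = rl_fractional_integral(lambda x: x, 1.0, 0.5)
exact = math.gamma(2) / math.gamma(2.5)
```

For β = 1/2 the substitution turns the weakly singular kernel into the smooth integrand f(t − u²), so a low-order rule converges quickly.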

Liouville's Derivatives
Liouville [2] noted that (31) becomes singular when the order is positive (i.e., when it corresponds to a derivative). This problem can be solved with the regularization [15,17]
$$D_{+}^{\alpha} f(t) = \frac{1}{\Gamma(-\alpha)} \int_{0}^{\infty} \left[ f(t-\tau) - \sum_{k=0}^{N} \frac{(-\tau)^{k}}{k!}\, f^{(k)}(t) \right] \tau^{-\alpha-1}\, \mathrm{d}\tau, \qquad N = \lfloor\alpha\rfloor,$$
(and analogously for the backward case), that we will call the regularized Liouville (L) derivative. The first regularization of the Liouville integral (31) was done by Marchaud [4]. However, his regularization only verifies the derivative property of the LT if the order is less than 1.
Instead of a regularization, Liouville devised a trick for solving the singularity problem, which we can describe as
$$D_{\pm}^{\alpha} f(t) = D_{\pm}^{N}\left[ D_{\pm}^{\alpha-N} f(t) \right], \qquad (36)$$
where N ∈ Z⁺ verifies N > α. The most usual choice is N = ⌈α⌉. Basically, it consists of transferring the singular behavior to an integer order derivative. The first approach leads to the so-called Liouville derivatives [4], which can be rewritten as
$$D_{+}^{\alpha} f(t) = \frac{\mathrm{d}^{N}}{\mathrm{d}t^{N}} \left[ \frac{1}{\Gamma(N-\alpha)} \int_{0}^{\infty} f(t-\tau)\, \tau^{N-\alpha-1}\, \mathrm{d}\tau \right]$$
and
$$D_{-}^{\alpha} f(t) = (-1)^{N} \frac{\mathrm{d}^{N}}{\mathrm{d}t^{N}} \left[ \frac{1}{\Gamma(N-\alpha)} \int_{0}^{\infty} f(t+\tau)\, \tau^{N-\alpha-1}\, \mathrm{d}\tau \right].$$
This last one is sometimes called the "Weyl derivative" [8].
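Liouville's trick can be checked on power functions with elementary Gamma-function arithmetic (a Python sketch; the power rules used are the standard ones for the fractional integral and the integer-order derivative on t > 0):

```python
from math import gamma, isclose

# Liouville's trick (36) applied to t**k, k > 0, via the power rules:
# step 1: fractional anti-derivative of order N - alpha (exponent k -> k + N - alpha)
# step 2: N ordinary derivatives (exponent back down to k - alpha)
k, alpha, N = 2.0, 0.5, 1

coeff = gamma(k + 1) / gamma(k + 1 + N - alpha)           # after step 1
coeff *= gamma(k + N - alpha + 1) / gamma(k + 1 - alpha)  # after step 2

# direct power rule: D^alpha t**k = Gamma(k+1)/Gamma(k+1-alpha) * t**(k-alpha)
direct = gamma(k + 1) / gamma(k + 1 - alpha)
```

The intermediate Gamma factors cancel, so the two-step procedure reproduces the direct power rule exactly.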
Liouville's second procedure leads to what can be called Liouville-Caputo (LC) derivatives [20]:
$$D_{+}^{\alpha} f(t) = \frac{1}{\Gamma(N-\alpha)} \int_{0}^{\infty} f^{(N)}(t-\tau)\, \tau^{N-\alpha-1}\, \mathrm{d}\tau$$
and
$$D_{-}^{\alpha} f(t) = \frac{(-1)^{N}}{\Gamma(N-\alpha)} \int_{0}^{\infty} f^{(N)}(t+\tau)\, \tau^{N-\alpha-1}\, \mathrm{d}\tau.$$
Remark 2. We must remark that:
1. These derivative definitions are equivalent for functions with LT or FT.

2. For some particular classes of functions, this equivalence may not hold, with the L derivatives behaving better than the LC ones. Possible causes for this are:
• the convolution makes the functions smoother;
• the derivative may introduce roughness or spikes.
3. These 2 + 2 formulations lead to most derivatives described in [7,8]. We must reinforce something very important: all the above defined derivatives are valid for functions defined on any interval of R. This means that we do not need to change the definitions to accommodate the domain of the function at hand. One thing is the definition; another is the computation of the derivative. The situation is similar to the one we find with the LT or FT, where we do not change the definitions to agree with the domain of the function. Therefore, most derivative definitions in [7,8] have no reason to be considered autonomous derivatives.

4. These derivatives do not introduce any initial conditions.
5. For a given derivative, there is always an anti-derivative.
These derivative formulations, using (28) and the convolution, are related as Figure 1 illustrates, and are collected in Table 1.

RL and C Derivatives
Despite the high degree of generality exhibited by the above derivatives, they are not used in most papers. In fact, such papers use derivatives specialized for functions defined on intervals [a, b], −∞ < a < b < ∞ [4,6]. Usually a ≥ 0, so that right functions are assumed. When a = 0, we frequently use the designation "causal function". The GL derivatives assume the forms
$$D_{+}^{\alpha} f(t) = \lim_{h\to 0^{+}} h^{-\alpha} \sum_{n=0}^{\lfloor (t-a)/h \rfloor} (-1)^{n} \binom{\alpha}{n} f(t-nh)$$
and
$$D_{-}^{\alpha} f(t) = \lim_{h\to 0^{+}} h^{-\alpha} \sum_{n=0}^{\lfloor (b-t)/h \rfloor} (-1)^{n} \binom{\alpha}{n} f(t+nh).$$
This was the procedure of Grünwald [21] and Letnikov [22]. Therefore, they are nothing else than the application of (27) to bounded support functions, and so they are not new derivatives. For the integral formulations, the contributions of Liouville and Riemann [23] were joined to obtain the Riemann-Liouville (RL) derivative [4,6]; likewise, from the Liouville-Caputo derivative, the (Dzherbashian-)Caputo (C) derivative [5,6] is obtained. We have:
• Riemann-Liouville (RL) derivatives
$$D_{+}^{\alpha} f(t) = \frac{\mathrm{d}^{N}}{\mathrm{d}t^{N}} \left[ \frac{1}{\Gamma(N-\alpha)} \int_{a}^{t} f(\tau)\,(t-\tau)^{N-\alpha-1}\, \mathrm{d}\tau \right]$$
and
$$D_{-}^{\alpha} f(t) = (-1)^{N} \frac{\mathrm{d}^{N}}{\mathrm{d}t^{N}} \left[ \frac{1}{\Gamma(N-\alpha)} \int_{t}^{b} f(\tau)\,(\tau-t)^{N-\alpha-1}\, \mathrm{d}\tau \right];$$
• (Dzherbashian-)Caputo (C) derivatives
$$D_{+}^{\alpha} f(t) = \frac{1}{\Gamma(N-\alpha)} \int_{a}^{t} f^{(N)}(\tau)\,(t-\tau)^{N-\alpha-1}\, \mathrm{d}\tau$$
and
$$D_{-}^{\alpha} f(t) = \frac{(-1)^{N}}{\Gamma(N-\alpha)} \int_{t}^{b} f^{(N)}(\tau)\,(\tau-t)^{N-\alpha-1}\, \mathrm{d}\tau.$$
These formulations, collected in Table 2, are the ones most used, although they have several inconveniences, mainly the initial condition problem they introduce [17]. However, they form the basis for many applications.

Multistep Derivatives
The procedure introduced in (36) can be generalized by splitting the integer-order derivative into two integer parts, with k < N and N > α, and inserting the fractional anti-derivative between them. This procedure was proposed by Davidson and Essex [8]. However, the derivative they proposed is valid only for functions that are null for t < 0. Canavati [8] introduced an algorithm that is a particular case obtained with k = N − 1.
Another similar algorithm was presented by Hilfer [24], also for causal functions. It can be stated as
$$D^{\alpha,\mu} f(t) = D^{-\mu(1-\alpha)}\, D^{1}\, D^{-(1-\mu)(1-\alpha)} f(t), \qquad 0 < \alpha, \mu < 1,$$
that is, an anti-derivative of order µ(1 − α), followed by an ordinary derivative, followed by another anti-derivative of order (1 − µ)(1 − α). This approach can be generalized: for example, if 1 < α < 2, we can use D² instead of D¹, or split D² into two first-order derivatives interleaved with the anti-derivatives. This kind of reasoning shows that, following this method, we can invent billions of "derivatives". However, it is not clear whether this is useful in real-life applications. Indeed, we are not introducing a new derivative, but repeatedly using one basic derivative. Some such derivatives were proposed because they supposedly introduce more favorable initial conditions. However, this is in general incorrect, since the initial conditions do not depend on the derivatives, but on the physical structure of the system at hand [17,25].
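The point that such compositions do not create a new derivative can be illustrated on power functions. In the Python sketch below (helper names are ours; power functions are represented as (coefficient, exponent) pairs and the standard power rules are assumed), the Hilfer composition applied to t^k yields the same result for every µ, including the RL (µ = 0) and Caputo (µ = 1) ends:

```python
from math import gamma

def frac_int(c, e, beta):
    """RL integral of order beta >= 0 (a = 0) applied to c*t**e (power rule)."""
    return c * gamma(e + 1) / gamma(e + 1 + beta), e + beta

def d1(c, e):
    """Ordinary first derivative of c*t**e."""
    return c * e, e - 1

def hilfer(k, alpha, mu):
    """Hilfer composition D^{-mu(1-alpha)} D^1 D^{-(1-mu)(1-alpha)} on t**k."""
    c, e = frac_int(1.0, k, (1 - mu) * (1 - alpha))
    c, e = d1(c, e)
    return frac_int(c, e, mu * (1 - alpha))
```

Whatever µ is chosen, the exponents add up to k − α and the Gamma factors telescope, reproducing the single power rule Γ(k+1)/Γ(k+1−α) t^{k−α}.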

Second Generation Operators
The derivatives we introduced above are "shift invariant". However, there are other derivatives that do not enjoy this property and, therefore, cannot be obtained from the UFD introduced above. It is the case of the Hadamard derivatives [6], which are "scale invariant". Other examples are the "quantum derivatives" [15]. It is important to mention also the Marchaud derivatives [4], which exhibit a kind of regularization. On the other hand, some modifications and variable changes can be made in the above derivatives, leading to interesting operators that may not necessarily be considered derivatives when seen in the light of the above criterion [7,8]. They are introduced in Table 3.
Table 3. Operators (not necessarily derivatives) obtained from modified derivatives.

(Each row of the table gives the operator's name, definition, and domain; the entries include, among others, the Hadamard and k-Hilfer operators.)

Some Comments
In many quarters, the treatment of FDs and fractional systems was taken to be a quite difficult task. Therefore, simplified versions thereof were welcomed, even if some important features that characterize most systems were lost in the process. For example, most natural or human-made systems are essentially low-pass or band-pass; however, some proposed operators are high-pass systems and therefore have limited usefulness. In the following, we describe some of them. We must remark that sometimes the word "fractional" is used as a "trade mark" that helps to "sell a product". It is the case of the "memory-dependent derivative" [26], which is nothing else than a running average.

"Derivatives" That Are High-Pass Filters
Consider a simple differential equation
$$y'(t) + a\, y(t) = x'(t), \qquad a > 0.$$
This equation is easily recognized as the model of the classic high-pass filter [27]. Its transfer function is
$$H(s) = \frac{s}{s + a}.$$
One way of relating the input and output is
$$y(t) = \int_{0}^{t} x'(\tau)\, e^{-a(t-\tau)}\, \mathrm{d}\tau.$$
This expression was chosen with this form to resemble the Caputo derivative (41). Now set
$$a = \frac{\alpha}{1-\alpha}, \qquad 0 < \alpha < 1,$$
to obtain
$$y(t) = \int_{0}^{t} x'(\tau)\, e^{-\frac{\alpha}{1-\alpha}(t-\tau)}\, \mathrm{d}\tau. \qquad (60)$$
This is, apart from a normalizing factor, the expression of the "famous" fractional Caputo-Fabrizio "derivative". As is clear, it is neither a derivative, nor fractional.
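The claim is easy to check numerically: the operator's gain is bounded, whereas any genuine FD of order α has gain ω^α, growing without bound. A Python sketch (the normalizing factor is omitted, as above; names are ours):

```python
# The Caputo-Fabrizio operator acts as the first-order high-pass filter
# H(s) = s/(s + a) with a = alpha/(1 - alpha). Its gain saturates at high
# frequencies, unlike a genuine fractional derivative of order alpha.

def hp_gain(omega, alpha):
    a = alpha / (1.0 - alpha)
    return abs(1j * omega / (1j * omega + a))

alpha = 0.5
low_gain = hp_gain(1e-4, alpha)    # ~0: low frequencies are rejected
high_gain = hp_gain(1e4, alpha)    # ~1: gain saturates
frac_gain = 1e4 ** alpha           # a true half-derivative would give 100
```

The saturated gain at high frequencies is exactly the Bode signature of a first-order filter, not of a fractional differintegrator.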
Let us continue and substitute the exponential in (60) by the Mittag-Leffler function. We obtain also a fractional high-pass filter, with TF given (up to a constant factor) by
$$H(s) = \frac{s^{\alpha}}{(1-\alpha)\, s^{\alpha} + \alpha},$$
that is called the Atangana-Baleanu "derivative". Attending to its FT, it is a system, but not a derivative. There are many variations of this operator, but they remain high-pass filters, not derivatives [8].

"Disguised" Order 1 Derivatives
The "local fractional derivative" introduced by Kolwankar reads [28]
$$f^{(\alpha)}(t_0) = \lim_{t \to t_0} D_t^{\alpha}\left[ f(t) - f(t_0) \right],$$
where D_t^α is the RL derivative. V. Tarasov [29] showed that, if α < 1, this derivative is equivalent to the order 1 derivative and, therefore, is not fractional.
A similar result can be found for the "conformable" derivative [30],
$$D^{\alpha} f(t) = \lim_{\varepsilon \to 0} \frac{f\left(t + \varepsilon\, t^{1-\alpha}\right) - f(t)}{\varepsilon},$$
and for its variants. For differentiable functions, these definitions result in
$$D^{\alpha} f(t) = t^{1-\alpha} f'(t),$$
i.e., the order 1 derivative multiplied by a power of t. Several modified versions of these derivatives were proposed [8].
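The equivalence is easy to confirm numerically; in the Python sketch below (a small finite ε replaces the limit, which is our choice), the defining limit of the conformable derivative is compared with t^{1−α} f′(t) for f = sin:

```python
import math

def conformable(f, t, alpha, eps=1e-6):
    """Conformable derivative via its defining limit, with a small
    finite eps instead of the limit eps -> 0."""
    return (f(t + eps * t ** (1 - alpha)) - f(t)) / eps

t, alpha = 2.0, 0.6
numeric = conformable(math.sin, t, alpha)
closed_form = t ** (1 - alpha) * math.cos(t)   # t**(1-alpha) * f'(t)
```

The two values agree to the accuracy of the first-order difference quotient, illustrating that no genuinely fractional behavior is present.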
The so-called "fractal derivative" was introduced in [31]. It is a strange derivative that gives ∞ for differentiable functions, unless α = 1. For a list of these and similar operators, see Table 5.
Table 5. Local formulations of derivatives (α > 0).

(Each row of the table gives the operator's name, definition, and domain; the entries include the Kolwankar local derivative.)
Figures 2-4 illustrate the results obtained with different FD formulations, when applied to compute the α = 0.5 order derivative of the function f(x) = cos(ωx), for ω = {0.01π, 0.1π, 100π}, respectively. The derivatives are compared with the one obtained with the GL operator, which serves as baseline. In all cases we adopt a = 0, while the FD-specific parameters are as presented in the legends of the graphs. We verify that some formulations fail to compute the derivatives accurately, while others diverge. As is well known, the classic order 1 derivative of a sinusoid is the same sinusoid multiplied by the angular frequency and with a change of phase equal to π/2. Therefore, we expect something similar in the fractional case, at least after some time corresponding to the transient regime. This happens with those operators that we classified as derivatives. For the others, we have amplifications/attenuations and, in some cases, modulations, confirming that they are systems (filters), but not derivatives. Some have a non-acceptable behavior: they are unstable.
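This expected behaviour can be reproduced with the GL operator itself. A Python sketch (the step size and the series truncation are our choices, so only the steady-state value is checked approximately): the forward GL derivative of cos(ωt) should approach ω^α cos(ωt + απ/2):

```python
import math

def gl_forward(f, t, alpha, h, terms):
    """Truncated forward GL derivative (step h, finite truncation)."""
    acc, c = 0.0, 1.0              # c = (-1)**n * binom(alpha, n)
    for n in range(terms):
        acc += c * f(t - n * h)
        c *= (n - alpha) / (n + 1)
    return acc / h ** alpha

omega, alpha, t = 2 * math.pi, 0.5, 1.0
approx = gl_forward(lambda x: math.cos(omega * x), t, alpha, h=0.01, terms=100000)
# steady state of a true half-derivative: amplitude scaled by omega**0.5,
# phase advanced by pi/4
expected = omega ** alpha * math.cos(omega * t + alpha * math.pi / 2)
```

The amplitude scaling ω^α and the phase advance απ/2 are exactly the "straight-line Bode" behaviour that distinguishes derivatives from the filters discussed above.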

Which Derivatives?
After this journey into the world of FDs, it is important to answer the question: "Do we need so many different formulations?" In previous papers [10-12], some answers to this question were given.

1.
In problems involving time, we have almost always to use causal derivatives. Therefore, the GL or one of the integral versions (37), (39), or (41) should be used.

2.
In space problems, we can use the above formulae or the corresponding right-side versions, if there is any privileged direction. If this does not happen, we must use a two-sided derivative, preferably (2).
However, most derivatives described in [7,8] are particular cases, where the particularity is introduced by the domain. Therefore, do we need to define a new derivative each time the domain changes? We cannot keep increasing the number of derivatives that differ only because they are defined on distinct domains; doing so merely constrains the application fields.