
The Fractional Orthogonal Difference with Applications

Kooikersdreef 620, 7328 BS Apeldoorn, The Netherlands
Academic Editor: Hari M. Srivastava
Mathematics 2015, 3(2), 487-509; https://doi.org/10.3390/math3020487
Received: 4 March 2015 / Revised: 3 June 2015 / Accepted: 4 June 2015 / Published: 12 June 2015
(This article belongs to the Special Issue Recent Advances in Fractional Calculus and Its Applications)

Abstract

This paper is a follow-up of a previous paper of the author, published in Mathematics in 2015, which treats the so-called continuous fractional orthogonal derivative. In this paper, we treat the discrete case using the fractional orthogonal difference. The theory is illustrated with an application to a fractional differentiating filter. In particular, graphs are presented of the modulus of the frequency response. These make clear that for a good insight into the behavior of a fractional differentiating filter, one has to look at the modulus of its frequency response in a log-log plot, rather than at plots in the time domain.
Keywords: orthogonal difference; orthogonal polynomials; hypergeometric functions; Fourier transform; frequency response

1. Introduction

In [1], a comprehensive survey is given of the orthogonal derivative. This derivative has a long, but almost forgotten history. The fractional derivative also has a long history. In [2], these two subjects are combined in the fractional orthogonal derivative. This derivative makes use of orthogonal polynomials, both continuous (e.g., Jacobi polynomials) and discrete (Hahn polynomials). In this paper, the theory is applied to functions given on a discrete set of points. We use the fractional difference. Grünwald [3] and Letnikov [4] developed a theory of discrete fractional calculus for causal functions (f(x) = 0 for x < 0). We use the definition given by, e.g., Kuttner [5] or Isaacs [6]. We show that the Grünwald–Letnikov definition of the fractional difference is a special case of the definition of the fractional orthogonal difference for the Hahn polynomials for causal functions. Bhrawy et al. [7] use orthogonal polynomials to solve fractional differential equations. However, their formulation of the fractional derivative is entirely different from ours.
In Section 2, we give an overview of some results for the fractional derivative following [2] (Section 2).
In Section 3, the theory of the fractional derivative is applied to a discrete function, and so, the fractional difference is used as a basis for deriving a formula for the fractional orthogonal difference.
In Section 4, the theory of the fractional difference is applied using the discrete Hahn polynomials.
In Section 5, we derive the frequency response for the fractional difference with the discrete Hahn polynomials.
In Section 6, the theory is applied to a fractional differentiating filter. Analog as well as discrete filters are treated. The theory is illustrated by graphs of the modulus of the frequency responses.

2. Overview of the Results of the Fractional Derivative

In [2], we find the following formulas for the fractional integral: the Weyl and the Riemann–Liouville transform. In terms of these, we can define a fractional derivative.
Let µ ∈ ℂ with Re(µ) > 0. Let f be a function on (−∞, b), which is integrable on bounded subintervals. Then, the Riemann–Liouville integral of order µ is defined as:
$$R^{-\mu}[f](x) = f^{(-\mu)}(x) := \frac{1}{\Gamma(\mu)}\int_{-\infty}^{x} f(y)\,(x-y)^{\mu-1}\,dy$$
A sufficient condition for the absolute convergence of this integral is $f(-x) = O(x^{-\mu-\epsilon})$, $\epsilon > 0$, as $x \to \infty$.
Similarly, for Re(µ) > 0 and f a locally-integrable function on (a, ∞), such that $f(x) = O(x^{-\mu-\epsilon})$, $\epsilon > 0$, as $x \to \infty$, the Weyl integral of order µ is defined as:
$$W^{-\mu}[f](x) = f^{(-\mu)}(x) := \frac{1}{\Gamma(\mu)}\int_{x}^{\infty} f(y)\,(y-x)^{\mu-1}\,dy$$
Clearly:
$$R^{-\mu}[f(-\,\cdot\,)](-x) = W^{-\mu}[f](x)$$
The Riemann–Liouville integral is often given for a function f on [0, b) in the form:
$$R^{-\mu}[f](x) = f^{(-\mu)}(x) := \frac{1}{\Gamma(\mu)}\int_{0}^{x} f(y)\,(x-y)^{\mu-1}\,dy \qquad (x > 0)$$
This can be obtained from Equation (1) if we assume that f (x) = 0 for x ≤ 0 (then, f is called a causal function).
The fractional derivatives are defined by the following formulas, with Re(ν) < n, n a positive integer and $D = \frac{d}{dx}$:
$$R^{\nu}[f](x) = D^{n}\big[R^{\nu-n}[f]\big](x)$$
$$W^{\nu}[f](x) = (-1)^{n}\,D^{n}\big[W^{\nu-n}[f]\big](x)$$
Put $W^{0} = \mathrm{id} = R^{0}$. Under the assumption of sufficient differentiability and convergence, we have $W^{\mu}W^{\nu} = W^{\mu+\nu}$ and $R^{\mu}R^{\nu} = R^{\mu+\nu}$ for all µ, ν ∈ ℂ, and $R^{n} = D^{n}$, $W^{n} = (-1)^{n}D^{n}$.
For Re(ν) < 0, Formulas (2) and (3) are easily derived, and for Re(ν) > 0, they are definitions. This is under the condition that all derivatives of f of order less than n + 1 should exist.
Because we need the Fourier transform of Formulas (2) and (3), we use the following theorem. The proof is in [8] (Chapter 2, Section 7) for the Riemann–Liouville derivative, as well as for the Weyl derivative.
Theorem 1. Let f and g be functions on ℝ for which the Fourier transforms exist and are given by:
$$F(w) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-iwx}\,f(x)\,dx$$
$$G(w) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-iwx}\,g(x)\,dx$$
If $g = W^{\nu}[f]$, then:
$$\frac{G(w)}{F(w)} = (-iw)^{\nu}$$
Here and in the rest of this paper, $z^{\nu} = e^{\nu \ln z}$ with $-\pi < \arg(z) < \pi$. Therefore, $(-iw)^{\nu} = (-i(w+i0))^{\nu} = e^{-\pi i\nu/2}\,(w+i0)^{\nu}$. If z is on the cut (−∞, 0] and z ≠ 0, then we distinguish between $(z+i0)^{\nu} = e^{i\pi\nu}(-z)^{\nu}$ and $(z-i0)^{\nu} = e^{-i\pi\nu}(-z)^{\nu}$.
The quotient $H(w) = G(w)/F(w)$ will be called the frequency response, where we follow the terminology of filter theory.
In [1], the following theorem is proven:
Theorem 2. Let n be a positive integer. Let pn be an orthogonal polynomial of degree n with respect to the orthogonality measure µ for which all moments exist. Let x ∈ ℝ. Let I be a closed interval, such that, for some ε > 0, x + δu ∈ I, if 0 ≤ δ ≤ ε and u ∈ supp(µ). Let f be a continuous function on I, such that its derivatives of order 1, 2,…, n at x exist. In addition, if I is unbounded, assume that f is of at most polynomial growth on I. Then:
$$f^{(n)}(x) = \lim_{\delta \downarrow 0} D_{\delta}^{n}[f](x)$$
where:
$$D_{\delta}^{n}[f](x) = \frac{k_{n}\,n!}{h_{n}}\,\frac{1}{\delta^{n}}\int_{-\infty}^{\infty} f(x+\delta\xi)\,p_{n}(\xi)\,d\mu(\xi)$$
From this formula, with f absolutely integrable on ℝ and $g = D_{\delta}^{n}[f]$, we immediately derive for the frequency response $H_{\delta}^{n}(w)$:
$$H_{\delta}^{n}(w) = \frac{G(w)}{F(w)} = \frac{k_{n}\,n!}{h_{n}\,\delta^{n}}\int_{-\infty}^{\infty} p_{n}(\xi)\,e^{iw\delta\xi}\,d\mu(\xi)$$

3. The Fractional Orthogonal Difference

In this section, we treat the fractional difference. If we want to compute the fractional derivative of a function given at a set of equidistant points, we can use the fractional difference instead of the fractional derivative. Then, in the formula for the orthogonal derivative, we have to work with discrete orthogonal polynomials.
As an analog of the Riemann–Liouville fractional integral Equation (1), we can define a fractional summation (see, for example, [5] or [6]) as:
$$I_{\delta}^{\mu}[f](x) := \delta^{\mu}\sum_{k=0}^{\infty}\frac{(\mu)_{k}}{k!}\,f(x-k\delta) \qquad \delta > 0$$
In contrast to the Riemann–Liouville fractional integral, this summation has no a priori singularity for µ < 0. If f is a causal function, then the upper bound for the summation should be [x/δ], and the sum is certainly well-defined. For µ ≠ 0,− 1,− 2,…, we can write:
$$\frac{(\mu)_{k}}{k!} = \frac{1}{\Gamma(\mu)}\,\frac{\Gamma(\mu+k)}{\Gamma(1+k)} \sim \frac{k^{\mu-1}}{\Gamma(\mu)}$$
as k → ∞. Hence, the infinite sum converges if $f(x) = O(|x|^{-\lambda})$ as $x \to -\infty$ with λ > Re(µ). For µ = −n (n a nonnegative integer), we get:
$$I_{\delta}^{-n}[f](x) = \delta^{-n}\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\,f(x-k\delta) = (D_{\delta})^{n}[f](x)$$
where:
$$D_{\delta}[f](x) := \frac{f(x)-f(x-\delta)}{\delta}$$
Note that:
$$D_{\delta}\big[I_{\delta}^{-\mu}[f]\big](x) = I_{\delta}^{-(\mu+1)}[f](x)$$
More generally, we have:
$$I_{\delta}^{-\mu}\big[I_{\delta}^{-\nu}[f]\big](x) = I_{\delta}^{-(\mu+\nu)}[f](x)$$
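The definition and these composition properties are easy to check numerically. The following sketch (the helper name and test values are ours, not the paper's) truncates the sum at K terms, which is exact for causal f once Kδ exceeds x:

```python
def frac_sum(f, x, mu, delta, K=60):
    # I_delta^mu [f](x) = delta^mu * sum_{k>=0} (mu)_k / k! * f(x - k*delta),
    # with the coefficients (mu)_k / k! built by recurrence.
    c, s = 1.0, f(x)
    for k in range(1, K):
        c *= (mu + k - 1) / k
        s += c * f(x - k * delta)
    return delta ** mu * s

ramp = lambda t: t if t >= 0 else 0.0   # a causal test function
x, d = 1.0, 0.1

# mu = -1 reproduces the backward difference D_delta:
lhs = frac_sum(ramp, x, -1.0, d)
rhs = (ramp(x) - ramp(x - d)) / d

# composition: I^(-0.3)[ I^(-0.4)[f] ] = I^(-0.7)[f] for causal f
inner = lambda t: frac_sum(ramp, t, -0.4, d)
comp = frac_sum(inner, x, -0.3, d)
direct = frac_sum(ramp, x, -0.7, d)
```

For µ = −1 the coefficients are 1, −1, 0, 0, …, so the sum collapses to the backward difference; the composition property holds exactly (up to rounding) because the Cauchy product of the two binomial series is again a binomial series.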
For µ > 0, the fractional summation $I_{\delta}^{\mu}[f]$ formally approximates the Riemann–Liouville integral $R^{-\mu}[f]$. Indeed, putting y = kδ, we can rewrite the formula for $I_{\delta}^{\mu}[f](x)$ as:
$$I_{\delta}^{\mu}[f](x) = \frac{\delta^{\mu}}{\Gamma(\mu)}\sum_{y\in\delta\mathbb{Z}_{\geq 0}}\frac{\Gamma(\mu+y/\delta)}{\Gamma(1+y/\delta)}\,f(x-y) = \frac{\delta}{\Gamma(\mu)}\sum_{y\in\delta\mathbb{Z}_{\geq 0}} y^{\mu-1}\,\frac{\Gamma(\mu+y/\delta)}{(y/\delta)^{\mu-1}\,\Gamma(1+y/\delta)}\,f(x-y)$$
The work in [9] (5.11.12) gives, if δ ↓ 0:
$$\lim_{\delta\downarrow 0}\frac{\Gamma(y/\delta+\mu)}{\Gamma(y/\delta+1)\,(y/\delta)^{\mu-1}} = 1$$
Formally, we have:
$$\lim_{\delta\downarrow 0}\,\delta\sum_{y\in\delta\mathbb{Z}_{\geq 0}} g_{\delta}(y) = \int_{0}^{\infty} g(y)\,dy$$
with $g_{\delta}(y) \to g(y)$ as δ ↓ 0. Application of Equations (8) and (9) to Equation (7) yields:
$$\lim_{\delta\downarrow 0} I_{\delta}^{\mu}[f](x) = \frac{1}{\Gamma(\mu)}\int_{0}^{\infty} y^{\mu-1}\,f(x-y)\,dy$$
This is the fractional integral of Riemann–Liouville. Therefore, it is shown that the fractional summation formally tends to the fractional integral in the limit as δ ↓ 0.
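This limit can be observed numerically. For the causal ramp f(y) = y (our choice of test function), the Riemann–Liouville integral of order µ is x^(µ+1)/Γ(µ+2), and the fractional summation approaches it as δ ↓ 0:

```python
import math

def frac_sum_causal(f, x, mu, delta):
    # I_delta^mu [f](x) for causal f; the sum terminates at k = floor(x/delta)
    c, s = 1.0, f(x)
    for k in range(1, int(x / delta) + 1):
        c *= (mu + k - 1) / k        # update (mu)_k / k!
        s += c * f(x - k * delta)
    return delta ** mu * s

ramp = lambda y: y if y >= 0 else 0.0
x, mu = 1.0, 0.5
exact = x ** (mu + 1) / math.gamma(mu + 2)
errors = [abs(frac_sum_causal(ramp, x, mu, d) - exact)
          for d in (0.1, 0.01, 0.001)]
# the error shrinks as delta decreases
```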
Remark 1. Grünwald [3] and Letnikov [4] developed a theory of discrete fractional calculus for causal functions (f (x) = 0 for x < 0). For a good description of their theory, see [10]. They started with the backwards difference:
$$\Delta_{\delta}[f](x) = \frac{f(x)-f(x-\delta)}{\delta}$$
and next, they define the fractional derivative as:
$$\frac{d^{\nu}}{dx^{\nu}}f(x) := \frac{1}{\Gamma(-\nu)}\,\lim_{\delta\downarrow 0,\; N\to\infty}\,\frac{1}{\delta^{\nu}}\sum_{m=0}^{N}\frac{\Gamma(m-\nu)}{\Gamma(m+1)}\,f(x-m\delta)$$
where $N = [x/\delta]$. This definition has the advantage that ν is fully arbitrary. No use is made of derivatives or integrals. The disadvantage is that computation of the limit is often very difficult. However, it can be shown that Equation (11) is the same as the Riemann–Liouville derivative for causal functions for Re(ν) > 0 and f n times differentiable with n > Re(ν). At least formally, this also follows from our approach if we write $I_{\delta}^{-\nu}[f](x) = (D_{\delta})^{n}\big[I_{\delta}^{n-\nu}[f]\big](x)$ and then use Equation (10) and the fact that Equation (6) approaches the derivative as δ ↓ 0.
Comparing Equation (11) with Equation (5), the differences are the upper bound and the limit. For practical calculations, Formula (11) is often used without the limit. In the next section, we will develop the fractional orthogonal difference for the Hahn polynomials. In Section 5, we will see that the definition of Grünwald–Letnikov for the fractional difference is a special case of the definition of the fractional orthogonal difference for the Hahn polynomials for causal functions.
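As a sketch of how Formula (11) is used in practice without the limit (the helper below is ours), take the causal ramp f(x) = x, whose Riemann–Liouville derivative of order ν is x^(1−ν)/Γ(2−ν); the Grünwald–Letnikov sum already approximates it for small δ:

```python
import math

def grunwald_letnikov(f, x, nu, delta):
    # (1/delta^nu) * sum_{m=0}^{[x/delta]} Gamma(m-nu)/(Gamma(-nu) Gamma(m+1)) f(x - m*delta)
    # The ratio Gamma(m-nu)/(Gamma(-nu) Gamma(m+1)) equals (-nu)_m / m!,
    # generated by recurrence to avoid Gamma evaluations at large arguments.
    c, s = 1.0, f(x)
    for m in range(1, int(x / delta) + 1):
        c *= (m - 1 - nu) / m
        s += c * f(x - m * delta)
    return s / delta ** nu

ramp = lambda t: t if t >= 0 else 0.0
x, nu = 1.0, 0.5
exact = x ** (1 - nu) / math.gamma(2 - nu)   # the Riemann-Liouville value
approx = grunwald_letnikov(ramp, x, nu, 0.001)
```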
For obtaining the formula for the fractional orthogonal difference, we let $\{p_{n}\}_{n=0}^{N}$ be a system of orthogonal polynomials with respect to weights w(j) on the points j (j = 0, 1, …, N). Then, the approximate orthogonal derivative [2] (3.1b) takes the form:
$$D_{\delta}^{n}[f](x) = \frac{k_{n}\,n!}{h_{n}}\,\frac{1}{\delta^{n}}\sum_{j=0}^{N} f(x+j\delta)\,w(j)\,p_{n}(j) \qquad n = 0, 1, \ldots, N$$
Let Re(ν) < n with n a positive integer. The fractional derivative $R^{\nu}[f](x) = D^{n}\big[R^{\nu-n}[f]\big](x)$ can be formally approximated by:
$$I_{\delta,n,w}^{\nu}[f](x) := D_{\delta}^{n}\big[I_{\delta}^{n-\nu}[f]\big](x)$$
as δ ↓ 0. This is a motivation to compute I δ , n , w v [ f ] more explicitly, in particular for special choices of the weights w. Substitution of Equations (5) and (12) in Equation (13) gives:
$$I_{\delta,n,w}^{\nu}[f](x) = \frac{k_{n}\,n!}{h_{n}}\,\frac{1}{\delta^{\nu}}\sum_{k=0}^{\infty}\sum_{j=0}^{N}\frac{(n-\nu)_{k}}{k!}\,f(x+(j-k)\delta)\,w(j)\,p_{n}(j)$$
For the double sum, we can write:
$$\sum_{k=0}^{\infty}\sum_{j=0}^{N} g(k,j) = \sum_{j=0}^{N}\sum_{k=0}^{\infty} g(k,j) = \sum_{j=0}^{N}\sum_{m=-j}^{\infty} g(j+m,j) = \sum_{j=0}^{N}\sum_{m=1}^{\infty} g(j+m,j) + \sum_{j=0}^{N}\sum_{m=0}^{j} g(j-m,j) = \sum_{m=1}^{\infty}\sum_{j=0}^{N} g(j+m,j) + \sum_{m=0}^{N}\sum_{j=m}^{N} g(j-m,j)$$
Application of this formula to Equation (14) yields:
$$I_{\delta,n,w}^{\nu}[f] = \frac{k_{n}}{h_{n}}\,\Gamma(n+1)\,\frac{1}{\delta^{\nu}}\sum_{m=1}^{\infty} f(x-m\delta)\Bigg[\sum_{j=0}^{N}\frac{(n-\nu)_{j+m}}{(j+m)!}\,p_{n}(j)\,w(j)\Bigg] + \frac{k_{n}}{h_{n}}\,\Gamma(n+1)\,\frac{1}{\delta^{\nu}}\sum_{m=0}^{N} f(x+m\delta)\Bigg[\sum_{j=m}^{N}\frac{(n-\nu)_{j-m}}{(j-m)!}\,p_{n}(j)\,w(j)\Bigg]$$
The summations inside the square brackets are in principle known after a choice of the discrete orthogonal polynomials pn (x). Then, this formula can be used for approximating the fractional orthogonal derivative.
Remark 2. When taking the Fourier transform of the fractional summation, one can show that for δ ↓ 0, this Fourier transform formally tends to the Fourier transform of the fractional integral. For this purpose, we write Equation (5) as:
$$I_{\delta}^{\mu}[f](n\delta) = \delta^{\mu}\sum_{k=0}^{\infty}\frac{(\mu)_{k}}{k!}\,f((n-k)\delta)$$
Taking the discrete Fourier transform gives:
$$\sum_{n=-\infty}^{\infty} I_{\delta}^{\mu}[f](n\delta)\,e^{-in\delta w} = \delta^{\mu}\sum_{n=-\infty}^{\infty} e^{-in\delta w}\sum_{k=0}^{\infty}\frac{(\mu)_{k}}{k!}\,f((n-k)\delta)$$
Working formally, we interchange the summations on the right. This gives:
$$\sum_{n=-\infty}^{\infty} I_{\delta}^{\mu}[f](n\delta)\,e^{-in\delta w} = \delta^{\mu}\sum_{k=0}^{\infty}\frac{(\mu)_{k}}{k!}\,e^{-ik\delta w}\sum_{n=-\infty}^{\infty} f((n-k)\delta)\,e^{-i(n-k)\delta w} = \delta^{\mu}\sum_{k=0}^{\infty}\frac{(\mu)_{k}}{k!}\,e^{-ik\delta w}\sum_{n=-\infty}^{\infty} f(n\delta)\,e^{-in\delta w}$$
The first summation is well known. We obtain:
$$\sum_{n=-\infty}^{\infty} I_{\delta}^{\mu}[f](n\delta)\,e^{-in\delta w} = \left(\frac{1-e^{-i\delta w}}{\delta}\right)^{-\mu}\sum_{n=-\infty}^{\infty} f(n\delta)\,e^{-in\delta w} = w^{-\mu}\left(\frac{\sin(\delta w/2)}{\delta w/2}\right)^{-\mu} e^{i\mu(\delta w-\pi)/2}\sum_{n=-\infty}^{\infty} f(n\delta)\,e^{-in\delta w}$$
Multiply both sides by δ, and take the formal limit for δ ↓ 0. We obtain the well-known formula:
$$\int_{-\infty}^{\infty} R^{-\mu}[f](x)\,e^{-ixw}\,dx = e^{-i\pi\mu/2}\,w^{-\mu}\int_{-\infty}^{\infty} f(x)\,e^{-ixw}\,dx$$

4. The Fractional Orthogonal Difference for the Discrete Hahn Polynomials

In this section, we substitute the discrete Hahn polynomials $Q_{n}(x; \alpha, \beta, N)$ in Formula (14) to calculate the fractional difference. We do this analogously to the Jacobi polynomials in [2]. For the Hahn polynomials, we use the definition in [11]. They give:
$$p_{n}(j) = Q_{n}(j; \alpha, \beta, N) := {}_{3}F_{2}\!\left(\begin{matrix} -j,\ -n,\ n+\alpha+\beta+1 \\ \alpha+1,\ -N \end{matrix}; 1\right)$$
For the Hahn polynomials, there are the following properties, with n = 0, 1, …, N:
$$w(j; \alpha, \beta, N) = \frac{(\alpha+1)_{j}\,(\beta+1)_{N-j}}{j!\,(N-j)!} \qquad \frac{k_{n}}{h_{n}} = (-1)^{n}\,\frac{(2n+\alpha+\beta+1)\,(n+\alpha+\beta+1)_{n}\,N!}{(n+\alpha+\beta+1)_{N+1}\,(\beta+1)_{n}\,n!}$$
The work in [11] (9.5.10) gives:
$$Q_{n}(x; \alpha, \beta, N)\,w(x) = \frac{(-1)^{n}\,(\beta+1)_{n}}{(-N)_{n}}\,\nabla_{x}^{\,n}\!\left(\frac{(\alpha+n+1)_{x}}{x!}\,\frac{(\beta+n+1)_{N-n-x}}{(N-n-x)!}\right)$$
where:
$$\nabla_{x}\big(f(x)\big) := f(x) - f(x-1)$$
Substitution in Equation (14) gives:
$$I_{\delta,n,w}^{\nu}[f] = H_{1}(n)\left[\sum_{m=1}^{\infty} f(x-m\delta)\,J_{1}(m) + \sum_{m=0}^{N} f(x+m\delta)\,J_{2}(m)\right]$$
with:
$$J_{1}(m) = \sum_{j=0}^{N}\frac{(n-\nu)_{j+m}}{(j+m)!}\,(wQ_{n})(j; \alpha, \beta, N)$$
$$J_{2}(m) = \sum_{j=m}^{N}\frac{(n-\nu)_{j-m}}{(j-m)!}\,(wQ_{n})(j; \alpha, \beta, N) = \sum_{j=0}^{N-m}\frac{(n-\nu)_{j}}{j!}\,(wQ_{n})(j+m; \alpha, \beta, N)$$
$$H_{1}(n) = (-1)^{n}\,\frac{\Gamma(2n+\alpha+\beta+2)\,\Gamma(N+1)\,\Gamma(\beta+1)}{\Gamma(N+n+\alpha+\beta+2)\,\Gamma(n+\beta+1)}\,\frac{1}{\delta^{\nu}}$$
To compute the sums in Equations (19) and (20), we use partial summation. The general formula for partial summation is:
$$\sum_{j=0}^{N}(\nabla f)(j)\,g(j) = f(N)\,g(N) - f(-1)\,g(0) - \sum_{j=0}^{N-1} f(j)\,(\nabla g)(j+1)$$
In particular, if f(N) = f(−1) = 0, then:
$$\sum_{j=0}^{N}(\nabla f)(j)\,g(j) = -\sum_{j=0}^{N-1} f(j)\,(\nabla g)(j+1)$$
Applying this formula n times and remarking that f(N − n) = f(−1) = 0 gives:
$$\sum_{j=0}^{N}(\nabla^{n} f)(j)\,g(j) = (-1)^{n}\sum_{j=0}^{N-n} f(j)\,(\nabla^{n} g)(j+n)$$
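The summation-by-parts identity above is purely algebraic, so it can be sanity-checked on arbitrary sequences (the random test data below are our own, not the paper's):

```python
import random

random.seed(1)
N = 10
fv = {j: random.random() for j in range(-1, N + 1)}
gv = {j: random.random() for j in range(0, N + 1)}
f, g = fv.__getitem__, gv.__getitem__

def nabla(h, j):
    # backward difference on the integer grid: (nabla h)(j) = h(j) - h(j-1)
    return h(j) - h(j - 1)

lhs = sum(nabla(f, j) * g(j) for j in range(0, N + 1))
rhs = (f(N) * g(N) - f(-1) * g(0)
       - sum(f(j) * nabla(g, j + 1) for j in range(0, N)))
# lhs equals rhs up to rounding
```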
Applying Formula (17) for the summation in Equation (19), we get:
$$J_{1}(m) = \frac{(-1)^{n}\,(\beta+1)_{n}}{(-N)_{n}}\sum_{j=0}^{N}\frac{(n-\nu)_{j+m}}{(j+m)!}\,\nabla_{j}^{\,n}\!\left(\frac{(\alpha+n+1)_{j}}{j!}\,\frac{(\beta+n+1)_{N-n-j}}{(N-n-j)!}\right)$$
Suppose:
$$f(j) := \frac{(\alpha+n+1)_{j}}{j!}\,\frac{(\beta+n+1)_{N-n-j}}{(N-n-j)!} \qquad g(j) := \frac{(n-\nu)_{j+m}}{(j+m)!} = \frac{(j+m+1)_{n-\nu-1}}{\Gamma(n-\nu)}$$
and seeing that f(N − n) = f(−1) = 0, we use Equation (23) to get:
$$J_{1}(m) = \frac{(\beta+1)_{n}}{\Gamma(n-\nu)\,(-N)_{n}}\sum_{j=0}^{N-n}\frac{(\alpha+n+1)_{j}}{j!}\,\frac{(\beta+n+1)_{N-n-j}}{(N-n-j)!}\,\nabla_{j}^{\,n}\big((j+m+n+1)_{n-\nu-1}\big)$$
For the second fraction inside the summation, we use [12] (Appendix I.(30)) ):
$$\Gamma(a-j) = \Gamma(a)\,\frac{(-1)^{j}}{(1-a)_{j}}$$
Then, it follows:
$$\frac{(\beta+n+1)_{N-n-j}}{(N-n-j)!} = \frac{(\beta+n+1)_{N-n}}{(N-n)!}\,\frac{(n-N)_{j}}{(-\beta-N)_{j}}$$
For the third fraction inside the summation, we use the formula:
$$\nabla_{j}^{\,n}\big((j+a)_{b}\big) = (b-n+1)_{n}\,(a)_{b-n}\,\frac{(a+b-n)_{j}}{(a)_{j}}$$
Then, we obtain:
$$\nabla_{j}^{\,n}\big((j+m+n+1)_{n-\nu-1}\big) = (-\nu)_{n}\,\frac{\Gamma(m+n-\nu)}{\Gamma(m+n+1)}\,\frac{(m+n-\nu)_{j}}{(m+n+1)_{j}}$$
After substitution in J1 (m), we get:
$$J_{1}(m) = \frac{1}{\Gamma(-\nu)\,(-N)_{n}}\,\frac{(1+\beta)_{N}}{(N-n)!}\,\frac{\Gamma(m+n-\nu)}{\Gamma(m+n+1)}\sum_{j=0}^{N-n}\frac{(\alpha+n+1)_{j}\,(m+n-\nu)_{j}\,(n-N)_{j}}{(m+n+1)_{j}\,(-\beta-N)_{j}}\,\frac{1}{j!}$$
Then, J1 (m) can be written as a hypergeometric function:
$$J_{1}(m) = \frac{1}{(-N)_{n}}\,\frac{(1+\beta)_{N}}{(N-n)!}\,\frac{(-\nu)_{m+n}}{(m+n)!}\;{}_{3}F_{2}\!\left(\begin{matrix} n-N,\ n+\alpha+1,\ m+n-\nu \\ -N-\beta,\ m+n+1 \end{matrix}; 1\right)$$
Observing the hypergeometric function, we see that it represents a series with a finite number of terms (provided β is such that the denominator parameter −N − β produces no vanishing factors, e.g., β not an integer). With α = β = 0, it follows:
$$J_{1}(m) = \frac{(N-n+1)_{n}}{(-N)_{n}}\,\frac{(-\nu)_{m+n}}{(m+n)!}\;{}_{3}F_{2}\!\left(\begin{matrix} n-N,\ n+1,\ m+n-\nu \\ -N,\ m+n+1 \end{matrix}; 1\right)$$
Observing the hypergeometric function, we will see that this function represents an infinite series. To compute J2 (m), we write Formula (20) with Equation (17) as:
$$J_{2}(m) = \frac{(-1)^{n}\,(\beta+1)_{n}}{(-N)_{n}}\sum_{j=0}^{N-m}\frac{(n-\nu)_{j}}{j!}\,\nabla_{j}^{\,n}\!\left(\frac{(\alpha+n+1)_{j+m}}{(j+m)!}\,\frac{(\beta+n+1)_{N-n-m-j}}{(N-n-m-j)!}\right)$$
Suppose:
$$f(j) = \frac{(\alpha+n+1)_{j+m}}{(j+m)!}\,\frac{(\beta+n+1)_{N-n-m-j}}{(N-n-m-j)!} \qquad g(j) = \frac{(n-\nu)_{j}}{j!}$$
Then, we rewrite the formula for the partial summation Equation (22) as:
$$\sum_{j=-1}^{N-m}(\nabla f)(j)\,g(j) = f(N-m)\,g(N-m) - f(-2)\,g(-1) - \sum_{j=-1}^{N-m-1} f(j)\,(\nabla g)(j+1)$$
In particular, if f(N − m) = g(−1) = 0, then:
$$\sum_{j=0}^{N-m}(\nabla f)(j)\,g(j) = -\sum_{j=-1}^{N-m-1} f(j)\,(\nabla g)(j+1) = -\sum_{j=0}^{N-m} f(j-1)\,(\nabla g)(j)$$
Applying this formula n times gives:
$$\sum_{j=0}^{N-m}(\nabla^{n} f)(j)\,g(j) = (-1)^{n}\sum_{j=0}^{N-m} f(j-n)\,(\nabla^{n} g)(j)$$
Substitution in Equation (26) gives:
$$J_{2}(m) = \frac{(\beta+1)_{n}}{\Gamma(n-\nu)\,(-N)_{n}}\sum_{j=0}^{N-m}\frac{(\alpha+n+1)_{j+m-n}}{(j+m-n)!}\,\frac{(\beta+n+1)_{N-m-j}}{(N-m-j)!}\,\nabla^{n}\big((j+1)_{n-\nu-1}\big)$$
For the ∇ function, we use Formula (24). We obtain:
$$\nabla^{n}\big((j+1)_{n-\nu-1}\big) = \Gamma(n-\nu)\,\frac{(-\nu)_{j}}{j!}$$
After substitution in J2 (m), we get:
$$J_{2}(m) = \frac{(\beta+1)_{n}}{(-N)_{n}}\sum_{j=0}^{N-m}\frac{(\alpha+n+1)_{j+m-n}}{(j+m-n)!}\,\frac{(\beta+n+1)_{N-m-j}}{(N-m-j)!}\,\frac{(-\nu)_{j}}{j!}$$
For small j, the argument j + m − n + 1 of the factorial can be less than or equal to zero; the corresponding terms vanish and should not be taken. In this case, it is convenient to reverse the summation. We set j = N − m − i. This gives:
$$J_{2}(m) = \frac{(\beta+1)_{n}}{(-N)_{n}}\sum_{i=0}^{\min(N-m,\,N-n)}\frac{(\alpha+n+1)_{N-n-i}}{(N-n-i)!}\,\frac{(\beta+n+1)_{i}}{i!}\,\frac{(-\nu)_{N-m-i}}{(N-m-i)!}$$
After some manipulations with the Pochhammer symbols, J2 (m) can be written as a hypergeometric function.
$$J_{2}(m) = \frac{(\beta+1)_{n}}{(-N)_{n}}\,\frac{(\alpha+n+1)_{N-n}}{(N-n)!}\,\frac{(-\nu)_{N-m}}{(N-m)!}\;{}_{3}F_{2}\!\left(\begin{matrix} n-N,\ \beta+n+1,\ m-N \\ -\alpha-N,\ \nu-N+m+1 \end{matrix}; 1\right)$$
With α = β = 0, it follows again, after some manipulations of the Pochhammer symbols:
$$J_{2}(m) = (-1)^{n}\,\frac{1}{\Gamma(-\nu)}\,\frac{\Gamma(N-m-\nu)}{(N-m)!}\;{}_{3}F_{2}\!\left(\begin{matrix} n-N,\ n+1,\ m-N \\ -N,\ \nu-N+m+1 \end{matrix}; 1\right)$$
It can be shown that the hypergeometric function represents a finite series. Then, with [9] (16.4.11), we can rewrite the hypergeometric function. This results in:
$$J_{2}(m) = (-1)^{n-1}\,\frac{\nu}{n!\,\Gamma(2n+1-\nu)}\,\frac{\Gamma(N-m-\nu+n+1)}{(N-m)!}\;{}_{3}F_{2}\!\left(\begin{matrix} -n,\ n+1,\ -m \\ -N,\ \nu+1-n \end{matrix}; 1\right)$$
For the approximate fractional difference, it results from Equations (18)–(21), (25) and (27):
$$I_{\delta,n}^{\nu}[f] = \frac{\Gamma(2n+\alpha+\beta+2)\,\Gamma(N+\beta+1)}{\Gamma(N+n+\alpha+\beta+2)\,\Gamma(n+\beta+1)}\,\frac{1}{\delta^{\nu}}\sum_{m=1}^{\infty} f(x-m\delta)\,\frac{(-\nu)_{m+n}}{(m+n)!}\;{}_{3}F_{2}\!\left(\begin{matrix} n-N,\ n+\alpha+1,\ m+n-\nu \\ -N-\beta,\ m+n+1 \end{matrix}; 1\right) + \frac{\Gamma(2n+\alpha+\beta+2)\,\Gamma(N+\alpha+1)}{\Gamma(N+n+\alpha+\beta+2)\,\Gamma(n+\alpha+1)}\,\frac{1}{\delta^{\nu}}\sum_{m=0}^{N} f(x+m\delta)\,\frac{(-\nu)_{N-m}}{(N-m)!}\;{}_{3}F_{2}\!\left(\begin{matrix} n-N,\ \beta+n+1,\ m-N \\ -\alpha-N,\ \nu-N+m+1 \end{matrix}; 1\right)$$
Remark 3. To see what conditions are necessary on the given function f(x) for the first summation to converge, we split this summation into two parts:
$$\frac{\Gamma(2n+\alpha+\beta+2)\,\Gamma(N+\beta+1)}{\Gamma(N+n+\alpha+\beta+2)\,\Gamma(n+\beta+1)}\,\frac{1}{\delta^{\nu}}\sum_{m=1}^{\infty} f(x-m\delta)\,\frac{(-\nu)_{m+n}}{(m+n)!}\;{}_{3}F_{2}\!\left(\begin{matrix} n-N,\ n+\alpha+1,\ m+n-\nu \\ -N-\beta,\ m+n+1 \end{matrix}; 1\right) = \frac{\Gamma(2n+\alpha+\beta+2)\,\Gamma(N+\beta+1)}{\Gamma(N+n+\alpha+\beta+2)\,\Gamma(n+\beta+1)}\,\frac{1}{\delta^{\nu}}\sum_{m=1}^{M} f(x-m\delta)\,\frac{(-\nu)_{m+n}}{(m+n)!}\;{}_{3}F_{2}\!\left(\begin{matrix} n-N,\ n+\alpha+1,\ m+n-\nu \\ -N-\beta,\ m+n+1 \end{matrix}; 1\right) + R(x)$$
with:
$$R(x) = \frac{\Gamma(2n+\alpha+\beta+2)\,\Gamma(N+\beta+1)}{\Gamma(N+n+\alpha+\beta+2)\,\Gamma(n+\beta+1)}\,\frac{1}{\delta^{\nu}}\sum_{m=M+1}^{\infty} f(x-m\delta)\,\frac{(-\nu)_{m+n}}{(m+n)!}\;{}_{3}F_{2}\!\left(\begin{matrix} n-N,\ n+\alpha+1,\ m+n-\nu \\ -N-\beta,\ m+n+1 \end{matrix}; 1\right)$$
For M large, there remains for R(x) (with [9] (5.11.12)):
$$R(x) \approx \frac{\Gamma(2n+\alpha+\beta+2)\,\Gamma(N+\beta+1)}{\Gamma(N+n+\alpha+\beta+2)\,\Gamma(n+\beta+1)}\;{}_{2}F_{1}\!\left(\begin{matrix} n-N,\ n+\alpha+1 \\ -N-\beta \end{matrix}; 1\right)\frac{1}{\Gamma(-\nu)}\,\frac{1}{\delta^{\nu}}\sum_{m=M+1}^{\infty} f(x-m\delta)\,\frac{1}{m^{\nu+1}}$$
The hypergeometric function can be summed by the Chu–Vandermonde formula; its value exactly cancels the Γ-prefactor. There remains:
$$R(x) \approx \frac{1}{\Gamma(-\nu)}\,\frac{1}{\delta^{\nu}}\sum_{m=M+1}^{\infty} f(x-m\delta)\,\frac{1}{m^{\nu+1}}$$
so, for convergence of the approximate fractional difference, the summation in Equation (28) should converge.
For α = β = 0, there remains:
$$I_{\delta,n}^{\nu}[f] = \frac{\Gamma(2n+2)\,\Gamma(N+1)}{\Gamma(N+n+2)\,\Gamma(n+1)}\,\frac{1}{\Gamma(-\nu)}\,\frac{1}{\delta^{\nu}}\sum_{m=1}^{\infty} f(x-m\delta)\,\frac{\Gamma(m+n-\nu)}{(m+n)!}\;{}_{3}F_{2}\!\left(\begin{matrix} n-N,\ n+1,\ m+n-\nu \\ -N,\ m+n+1 \end{matrix}; 1\right) - \frac{\Gamma(2n+2)\,\Gamma(N+1)}{\Gamma(N+n+2)\,\Gamma(n+1)}\,\frac{\nu}{n!\,\Gamma(2n+1-\nu)}\,\frac{1}{\delta^{\nu}}\sum_{m=0}^{N} f(x+m\delta)\,\frac{\Gamma(N-m-\nu+n+1)}{(N-m)!}\;{}_{3}F_{2}\!\left(\begin{matrix} -n,\ n+1,\ -m \\ -N,\ \nu+1-n \end{matrix}; 1\right)$$
For n = 1, it follows:
$$I_{\delta,1}^{\nu}[f] = \frac{6\,\Gamma(N+1)}{\Gamma(N+3)}\,\frac{1}{\Gamma(-\nu)}\,\frac{1}{\delta^{\nu}}\sum_{m=1}^{\infty} f(x-m\delta)\,\frac{\Gamma(m+1-\nu)}{\Gamma(m+2)}\;{}_{3}F_{2}\!\left(\begin{matrix} 1-N,\ 2,\ m+1-\nu \\ -N,\ m+2 \end{matrix}; 1\right) - \frac{6\,\Gamma(N+1)}{\Gamma(N+3)}\,\frac{\nu}{\Gamma(3-\nu)}\,\frac{1}{\delta^{\nu}}\sum_{m=0}^{N} f(x+m\delta)\,\frac{\Gamma(N-m-\nu+2)}{\Gamma(N-m+1)}\;{}_{3}F_{2}\!\left(\begin{matrix} -1,\ 2,\ -m \\ -N,\ \nu \end{matrix}; 1\right)$$
The first hypergeometric function can be written as:
$${}_{3}F_{2}\!\left(\begin{matrix} 1-N,\ 2,\ m+1-\nu \\ -N,\ m+2 \end{matrix}; 1\right) = \frac{\Gamma(m+2)}{N\,\Gamma(m+1-\nu)}\sum_{k=0}^{N-1}(N-k)(k+1)\,\frac{\Gamma(k+m+1-\nu)}{\Gamma(k+m+2)}$$
Making use of:
$$(N-k)(k+1) = -(k+m+1)(k+m) + (N+2m)(k+m+1) - m(N+m+1)$$
and of [13] (2.5(16)):
$$\sum_{k=0}^{N-1}\frac{\Gamma(k+a)}{\Gamma(k+b)} = \frac{1}{a-b+1}\left[\frac{\Gamma(N+a)}{\Gamma(N+b-1)} - \frac{\Gamma(a)}{\Gamma(b-1)}\right] \qquad (a+1 \neq b,\ N > 0)$$
there remains:
$${}_{3}F_{2}\!\left(\begin{matrix} 1-N,\ 2,\ m+1-\nu \\ -N,\ m+2 \end{matrix}; 1\right) = \frac{\Gamma(m+2)\,\Gamma(-\nu)}{N\,\Gamma(m+1-\nu)\,\Gamma(3-\nu)}\left[(2N+2m+2-2\nu-N\nu)\,\frac{\Gamma(m+1-\nu)}{\Gamma(m)} - (2m+N\nu)\,\frac{\Gamma(N+m+2-\nu)}{\Gamma(N+m+1)}\right]$$
For the second hypergeometric function, it follows:
$${}_{3}F_{2}\!\left(\begin{matrix} -1,\ 2,\ -m \\ -N,\ \nu \end{matrix}; 1\right) = \frac{N\nu - 2m}{N\nu}$$
After substitution in Equation (29), there remains:
$$I_{\delta,1}^{\nu}[f](x)\left[\frac{6\,\Gamma(N+1)}{N\,\Gamma(N+3)\,\Gamma(3-\nu)}\,\frac{1}{\delta^{\nu}}\right]^{-1} = \sum_{m=1}^{\infty} f(x-m\delta)\left[(2N+2m+2-2\nu-N\nu)\,\frac{\Gamma(m-\nu+1)}{\Gamma(m)} - (2m+N\nu)\,\frac{\Gamma(N+m-\nu+2)}{\Gamma(N+m+1)}\right] + \sum_{m=0}^{N} f(x+m\delta)\,(2m-N\nu)\,\frac{\Gamma(N-m-\nu+2)}{\Gamma(N-m+1)}$$

5. The Frequency Response for the Approximate Fractional Orthogonal Difference

Just as for the fractional orthogonal derivative, it is possible to compute the frequency response for the fractional orthogonal difference. In that case, we can apply Theorem 1, where we take another function for the function g. In this case, we use Equation (5):
$$g(x) = I_{\delta}^{\mu}[f](x) = \delta^{\mu}\sum_{k=0}^{\infty}\frac{(\mu)_{k}}{k!}\,f(x-k\delta) \qquad \delta > 0$$
Fourier transform of this function gives:
$$G(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\omega x}\,g(x)\,dx = \delta^{\mu}\sum_{k=0}^{\infty}\frac{(\mu)_{k}}{k!}\,\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\omega x}\,f(x-k\delta)\,dx = \delta^{\mu}\sum_{k=0}^{\infty}\frac{(\mu)_{k}}{k!}\,e^{-i\omega k\delta}\,F(\omega)$$
The summation is known, and it follows:
$$G(\omega) = \left(\frac{1-e^{-i\omega\delta}}{\delta}\right)^{-\mu} F(\omega)$$
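The "known summation" used here is the binomial series Σ_k (µ)_k/k! z^k = (1−z)^(−µ) with z = e^(−iωδ). A quick numerical check just inside the unit circle, where the series converges absolutely (the test values µ and z are arbitrary, our own choices):

```python
import cmath

mu = 0.7
z = 0.95 * cmath.exp(-0.4j)      # stand-in for e^{-i*omega*delta}, scaled into |z| < 1
c, s = 1.0, 1.0 + 0j
for k in range(1, 4000):
    c *= (mu + k - 1) / k         # (mu)_k / k! by recurrence
    s += c * z ** k
closed_form = (1 - z) ** (-mu)    # principal branch, as fixed after Theorem 1
```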
From Equations (4) and (31), we obtain that if $g = I_{\delta,n,w}^{\nu}[f]$, then:
$$H(\omega) = \frac{G(\omega)}{F(\omega)} = \left(\frac{1-e^{-i\omega\delta}}{\delta}\right)^{\nu-n}\frac{k_{n}\,n!}{h_{n}\,\delta^{n}}\sum_{x=0}^{N} p_{n}(x)\,w(x)\,e^{ix\delta\omega}$$
For the Hahn polynomials, this becomes:
$$H(\omega) = \left(\frac{1-e^{-i\omega\delta}}{\delta}\right)^{\nu-n}\frac{k_{n}\,n!}{h_{n}\,\delta^{n}}\,S$$
For the summation S, [2] (5.28) gives:
$$S = \frac{(\beta+1)_{N}}{N!}\,(1-e^{i\delta\omega})^{n}\;{}_{2}F_{1}\!\left(\begin{matrix} n-N,\ \alpha+n+1 \\ -\beta-N \end{matrix}; e^{i\delta\omega}\right)$$
Substitution in Equation (32) gives:
$$H(\omega) = \left(\frac{1-e^{-i\omega\delta}}{\delta}\right)^{\nu-n}\left(\frac{1-e^{i\omega\delta}}{\delta}\right)^{n}\frac{k_{n}\,n!}{h_{n}}\,\frac{(\beta+1)_{N}}{N!}\;{}_{2}F_{1}\!\left(\begin{matrix} n-N,\ \alpha+n+1 \\ -\beta-N \end{matrix}; e^{i\delta\omega}\right)$$
Substitution of Equation (16) yields:
$$H(\omega) = (-1)^{n}\left(\frac{1-e^{-i\omega\delta}}{\delta}\right)^{\nu-n}\left(\frac{1-e^{i\omega\delta}}{\delta}\right)^{n}\frac{\Gamma(N+\beta+1)\,\Gamma(2n+\alpha+\beta+2)}{\Gamma(n+\beta+1)\,\Gamma(N+n+\alpha+\beta+2)}\;{}_{2}F_{1}\!\left(\begin{matrix} n-N,\ \alpha+n+1 \\ -\beta-N \end{matrix}; e^{i\delta\omega}\right)$$
Comparing this formula with [2] (5.28), we see that the only difference between these formulas is the factor $(i\omega)^{\nu-n}$, which is replaced here by its discrete counterpart. In the next section, we will discuss this fact.
With α = β = 0 and n = 1, there remains:
$$H(\omega) = -\left(\frac{1-e^{-i\omega\delta}}{\delta}\right)^{\nu-1}\frac{1-e^{i\omega\delta}}{\delta}\,\frac{6\,\Gamma(N+1)}{\Gamma(N+3)}\;{}_{2}F_{1}\!\left(\begin{matrix} 1-N,\ 2 \\ -N \end{matrix}; e^{i\delta\omega}\right) = -\left(\frac{1-e^{-i\omega\delta}}{\delta}\right)^{\nu-1}\frac{1-e^{i\omega\delta}}{\delta}\,\frac{6\,\Gamma(N)}{\Gamma(N+3)}\sum_{k=0}^{N-1}(N-k)(k+1)\,e^{i\delta k\omega} = \left(\frac{1-e^{-i\omega\delta}}{\delta}\right)^{\nu-1}\frac{1-e^{i\omega\delta}}{\delta}\,\frac{6\,\Gamma(N)}{\Gamma(N+3)}\,\frac{e^{i(N+1)\delta\omega}\,(N+2-Ne^{i\delta\omega}) - (N+2)\,e^{i\delta\omega} + N}{(e^{i\delta\omega}-1)^{3}}$$
For N = 1, there remains:
$$H(\omega) = \left(\frac{1-e^{-i\omega\delta}}{\delta}\right)^{\nu}$$
Remark 4. Note that the frequency response in Equation (33) is derived with a summation to infinity. For practical reasons, one should take the summation up to a finite number, say M. Then, we have to use the frequency response corresponding to Equation (30). This is important, because Equation (30) is the formula with which the approximate fractional Hahn derivative is computed. The frequency response will then differ from the ideal one. Taking the Fourier transform, the result for the frequency response is:
$$H(\omega)\left[\frac{6\,\Gamma(N+1)}{\Gamma(N+3)}\,\frac{1}{\delta^{\nu}}\right]^{-1} = \frac{1}{\Gamma(2-\nu)}\,e^{i(N+1)\omega\delta}\sum_{m=1}^{M+N+1} e^{-im\omega\delta}\,\frac{\Gamma(m-\nu+1)}{\Gamma(m)} + \frac{1}{\Gamma(2-\nu)}\sum_{m=1}^{M} e^{-im\omega\delta}\,\frac{\Gamma(m-\nu+1)}{\Gamma(m)} - \frac{2}{N\,\Gamma(3-\nu)}\,e^{iN\omega\delta}\sum_{m=1}^{M+N} e^{-im\omega\delta}\,\frac{\Gamma(m-\nu+2)}{\Gamma(m)} + \frac{2}{N\,\Gamma(3-\nu)}\sum_{m=1}^{M} e^{-im\omega\delta}\,\frac{\Gamma(m-\nu+2)}{\Gamma(m)}$$
For M →∞, this function becomes the same as Equation (34).
If f(x) is causal, then Equation (33) can be derived from a finite sum, and we do not cut off the summation. Then, we can use the frequency response Equation (33) instead of Equation (35).
Remark 5. For the Grünwald–Letnikov fractional differentiating filter, the approximate frequency response (with N finite) can be simply derived from Equation (11) by taking the Fourier transform. We first take the Fourier transform of Equation (11) without the limits. This gives, with N = [x/δ]:
$$\mathcal{F}\big[GL^{\nu}[f]\big](\omega) = \frac{1}{\Gamma(-\nu)}\int_{0}^{\infty} e^{-i\omega x}\,\frac{1}{\delta^{\nu}}\sum_{k=0}^{[x/\delta]}\frac{\Gamma(k-\nu)}{\Gamma(k+1)}\,f(x-k\delta)\,dx$$
Because f(x) is a causal function, only the terms with x ≥ kδ contribute. Substitution gives:
$$\frac{1}{\Gamma(-\nu)}\,\frac{1}{\delta^{\nu}}\sum_{k=0}^{\infty}\frac{\Gamma(k-\nu)}{\Gamma(k+1)}\int_{k\delta}^{\infty} e^{-i\omega x}\,f(x-k\delta)\,dx = \frac{1}{\Gamma(-\nu)}\,\frac{1}{\delta^{\nu}}\sum_{k=0}^{\infty}\frac{\Gamma(k-\nu)}{\Gamma(k+1)}\,e^{-i\omega k\delta}\,F(\omega) = \left(\frac{1-e^{-i\omega\delta}}{\delta}\right)^{\nu} F(\omega)$$
For δ → 0, the frequency response becomes:
$$H(\omega) = \lim_{\delta\downarrow 0}\left(\frac{1-e^{-i\omega\delta}}{\delta}\right)^{\nu} = (i\omega)^{\nu}$$
Because of Section 3.11 from [1], the Grünwald–Letnikov derivative follows by taking w(j) = 1 and N = n in Equation (12), or α = β = 0 and N = n in Equation (15). Therefore, Equation (36) becomes a special case of Equation (33).
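The limit in the last formula is easy to confirm numerically (ν and ω below are arbitrary test values of ours):

```python
import cmath

nu, w = 0.5, 2.0
H = lambda delta: ((1 - cmath.exp(-1j * w * delta)) / delta) ** nu
target = (1j * w) ** nu          # the ideal fractional differentiator (i*omega)^nu
errors = [abs(H(d) - target) for d in (0.1, 0.01, 0.001)]
# the deviation decreases roughly linearly with delta
```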

6. Application of the Theory to a Fractional Differentiating Filter

This section treats the application of the fractional derivative in linear filter theory. In that theory, a filter is described by three properties, e.g., the input signal, the so-called impulse response function of the filter and the output signal. These properties are described in the time domain. The output signal is the convolution integral of the input signal and the impulse response function of the filter. The linearity means that the output signal depends linearly on the input signal.
One distinguishes between continuous and discrete filters, corresponding to continuity and discreteness, respectively, of the signals involved. In the discrete case, the output signal can be computed with a discrete filter equation.
In the usage of filters, there are two important items for consideration. The first item is the computation of the output signal given the input signal. The second item is a qualification of the working of the filter.
In our opinion, this second task should be preferably done in the frequency domain, where one can see the spectra of the signal and eventually the noise. In this domain, one obtains the Fourier transform of the output signal by multiplication of the Fourier transform of the input signal with the frequency response.
In this section, we will give graphs of the absolute value of the frequency response associated with various approximate fractional derivatives introduced in this paper. From these graphs, we will get insight into how these approximations will work in practice. We think this kind of analysis, in particular when using log-log plots, is preferable to the analysis of filters in the time domain. As an example of the latter type of analysis, actually for the fractional Jacobi derivative, see a paper of Tenreiro Machado [14].
For the second item, the frequency response of the filter should be computed. We treat here the approximate fractional Jacobi derivative and the approximate fractional Hahn derivative. For practical computations, these become the approximate fractional Legendre derivative and the approximate fractional Gram derivative.
For the first item, we can use [2] (4.18) for the Jacobi derivative and [2] (4.39) for the Hahn derivative. For the second item, we had to go to the frequency domain.
In the frequency domain, we suppose that the Fourier transforms of the input signal x (t) and the output signal y (t) are X (ω), and Y (ω) and does exist. The Fourier transformation of the impulse response function h (t) of the filter is the frequency response H (ω) of the filter. For this frequency response, there is the definition:
H ( ω ) : = Y ( ω ) X ( ω )
The function H (ω) is a complex function. For a graphic display of this function, one uses the modulus and the phase of the frequency response. In our case, we want to see the property of the differentiation, and then, the graph of the modulus of the frequency response is preferred.
For an n-th order differentiator, one can show:
H ( ω ) = ( i ω ) n
In the frequency domain, the graph of the modulus of the frequency response of this filter is a straight line with a slope that depends on n. See Figure 1.
We see that for ω→∞, the modulus of the frequency response |H (ω) | goes to ∞. This means that (because every system has some high-frequency noise) this filter is unstable. That is the reason why a differentiating filter should always have an integrating factor. This factor will be added to the filter, so that for ω→∞, the modulus of the frequency response goes to zero. The filter for the orthogonal derivatives has this property. These derivatives appear from an averaging process (least squares), and such a process will always give an integrating factor.
The formula for the frequency response of the approximate Legendre derivative is [1] (Section 5):
H ( ω ) = Γ ( 2 n + 2 ) 2 n Γ ( n + 1 ) 1 δ n j n ( ω δ )
where the functions jn are the spherical Bessel functions [9] (10.49.3). The graph of the modulus of the first order approximate Legendre derivative with δ = 1 is given in Figure 2.
It is clear that for large ω, the modulus of the frequency response goes to zero. For small ω, the graph is a straight line with slope one. In the case that the order of the differentiating filter is not an integer, the formula of the frequency response should be analogous to Equation (37):
H ( ω ) = ( i ω ) ν
where υ can be an integer, a fractional or even a complex number. The graph of the modulus of the frequency response is given in Figure 3.
Furthermore, in this case, there is an instability. To prevent this instability, one can use an approximate fractional Jacobi differentiating filter. For the approximate Jacobi derivative, the frequency response is already computed in [2] (Section 5). We repeat the formula [2] (5.5).
H ( ω ) = ( i ω ) v e i ω δ M ( n + α + 1 , 2 n + α + β + 2 ; 2 i ω δ )
With δ = 1, the squared absolute value of the frequency response of the approximate fractional Jacobi derivative can be written as a series expansion as follows:
| H ( ω ) | 2 = m = 0 ( 4 ω 2 ) m K = 0 2 m ( n + α + β + 1 ) k ( n + α + β + 1 ) 2 m k ( n + α + β + 1 ) k ( n + α + β + 1 ) 2 m k ( 1 ) k k ! ( 2 m k ) !
For small ω, a first approximation of the modulus of the frequency response gives:
| H ( ω ) | ~ ω v 1 4 ( n + α + 1 ) ( n + β + 1 ) ( 2 n + α + β + 2 ) 2 ( 2 n + α + β + 3 ) ω 2
It is clear that the formula is symmetric in α and β. The choices of α and β detect the cut-off frequency For simplicity, we look at the frequency (and not at the exact cut-off frequency) where the frequency response has a first maximum. From Formula (38), the maximum frequency is:
\[ \omega_{\max} = \frac{2n+\alpha+\beta+2}{2} \sqrt{\frac{\nu\,(2n+\alpha+\beta+3)}{(n+\alpha+1)(n+\beta+1)(1+\nu)}} \]
To simplify this formula, set α = β. Then, there remains:
\[ \omega_{\max} = \sqrt{\frac{\nu\,(2n+2\alpha+3)}{1+\nu}} \]
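Under the small-ω approximation discussed above (for α = β the coefficient of ω² inside the square root reduces to 1/(2n + 2α + 3)), a crude grid search confirms that this ω_max indeed maximizes the approximate modulus. A sketch (function names and tolerances are ours):

```python
import math

def omega_max(n, alpha, nu):
    # omega_max = sqrt(nu (2n + 2 alpha + 3) / (1 + nu)), case alpha = beta
    return math.sqrt(nu * (2 * n + 2 * alpha + 3) / (1 + nu))

def approx_H2(omega, n, alpha, nu):
    # Small-omega approximation |H|^2 ~ w^(2 nu) (1 - w^2 / (2n + 2 alpha + 3))
    return omega ** (2 * nu) * (1 - omega**2 / (2 * n + 2 * alpha + 3))

n, alpha, nu = 1, 0.0, 0.5
grid = [i * 1e-4 for i in range(1, 30000)]
best = max(grid, key=lambda w: approx_H2(w, n, alpha, nu))
print(best, omega_max(n, alpha, nu))  # grid maximizer matches the formula
```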
The shape of the curves does not change in principle. Therefore, if one wants the cut-off frequency as high as possible, α and β should be chosen as high as possible. For the case α = β = 0, the frequency response of the approximate fractional Jacobi derivative simplifies to that of the approximate fractional Legendre derivative:
\[ H(\omega) = \omega^{\nu}\, \frac{\Gamma(2n+2)}{2^{n}\,\Gamma(n+1)}\, \frac{1}{(\omega\delta)^{n}}\, j_{n}(\omega\delta) \]
where the functions j_n are the spherical Bessel functions and n − 1 ≤ ν ≤ n. For n = 1 and δ = 1, there remains:
\[ H(\omega) = \frac{3\,\omega^{\nu}}{\omega^{3}} \left( \sin\omega - \omega\cos\omega \right) \]
The graph of the modulus of this frequency response is given in Figure 4 for different values of ν.
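As a sanity check of the closed form for n = 1 (the function name is ours; δ = 1), note that sin ω − ω cos ω ≈ ω³/3 for small ω, so the modulus behaves like ω^ν at low frequencies:

```python
import math

def frac_legendre_H(omega, nu):
    # H(omega) = 3 omega^nu / omega^3 * (sin omega - omega cos omega), n = 1, delta = 1
    return 3 * omega**nu / omega**3 * (math.sin(omega) - omega * math.cos(omega))

# Low-frequency behavior: |H(omega)| ~ omega^nu
nu = 0.5
print(frac_legendre_H(0.01, nu))  # close to 0.01**0.5 = 0.1
```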
Next, we treat the discrete filter.
When we apply a discrete filter, the input signal and the output signal are sampled with a sample frequency that is higher than twice the maximum frequency of the input signal. Therefore, there is always a maximum frequency for the frequency response of the filter. What the frequency response does above this maximum frequency is not important, provided that for ω → ∞, the frequency response goes to zero. The input signal is known at only N points and is therefore an approximation of the true input signal.
The output signal can be computed at the N points of the input signal. Hence the output signal is always an approximation of the filtered true input signal. If the filter is a differentiator, the output signal is an approximation of the derivative of the input signal. This approximation depends strongly on N.
For the fractional orthogonal derivative, we use the frequency response of the approximate derivative of the discrete Hahn difference as derived in Equation (33). For α = β = 0, this frequency response becomes that of the fractional Gram derivative. In the following figure, we take 0 < ν ≤ 1 and n = 1 in Formula (34). For N = 1, the modulus of the frequency response of the ideal filter has a maximum at ω = π. For different values of N, the modulus of the frequency response with δ = 1 is given in Figure 5.
In practice (taking a finite number of points), we use the frequency response in Equation (35). For δ = 1, ν = 0.5, N = 7 and different values of M, the modulus of this frequency response is shown in Figure 6.
We see that for ω → 0, the absolute value of the frequency response tends to a constant value. Taking ω = 0 in Equation (35), it follows:
\[ H(0) \left[\frac{6}{\Gamma(N+3)}\,\frac{1}{\delta^{\nu}}\right]^{-1} = \frac{1}{\Gamma(2-\nu)} \sum_{m=1}^{M+N+1} \frac{\Gamma(m-\nu+1)}{\Gamma(m)} - \frac{1}{\Gamma(2-\nu)} \sum_{m=1}^{M} \frac{\Gamma(m-\nu+1)}{\Gamma(m)} - \frac{2}{N}\,\frac{1}{\Gamma(3-\nu)} \sum_{m=1}^{M+N+1} \frac{\Gamma(m-\nu+2)}{\Gamma(m)} + \frac{2}{N}\,\frac{1}{\Gamma(3-\nu)} \sum_{m=1}^{M} \frac{\Gamma(m-\nu+2)}{\Gamma(m)} \]
For these summations, the following identity can be proven:
\[ \sum_{m=1}^{K} \frac{\Gamma(m-\alpha)}{\Gamma(m)} = \frac{\Gamma(K+1-\alpha)}{(1-\alpha)\,\Gamma(K)} \]
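This identity is easy to verify numerically with the standard gamma function (a minimal sketch; the helper names are ours):

```python
import math

def lhs(K, alpha):
    # Left-hand side: sum_{m=1}^{K} Gamma(m - alpha) / Gamma(m)
    return sum(math.gamma(m - alpha) / math.gamma(m) for m in range(1, K + 1))

def rhs(K, alpha):
    # Right-hand side: Gamma(K + 1 - alpha) / ((1 - alpha) Gamma(K))
    return math.gamma(K + 1 - alpha) / ((1 - alpha) * math.gamma(K))

print(lhs(20, 0.5), rhs(20, 0.5))    # the two sides agree
print(lhs(10, -0.3), rhs(10, -0.3))  # also for negative alpha
```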
Then, there remains:
\[ H(0) \left[\frac{6}{N\,\Gamma(N+3)\,\Gamma(4-\nu)}\,\frac{1}{\delta^{\nu}}\right]^{-1} = (N - 2M - N\nu)\, \frac{\Gamma(M+N-\nu+3)}{\Gamma(M+N+1)} + \left((3-\nu)N + 2(M-\nu+2)\right) \frac{\Gamma(M-\nu+2)}{\Gamma(M)} \]
The lower bound of the frequency for which the filter does a fractional differentiation can be defined with the help of the following equation:
\[ \omega_{l} = \left( \frac{6\,(N-2M-N\nu)}{N\,\Gamma(N+3)\,\Gamma(4-\nu)}\, \frac{\Gamma(M+N-\nu+3)}{\Gamma(M+N+1)} + \frac{6\left((3-\nu)N + 2(M-\nu+2)\right)}{N\,\Gamma(N+3)\,\Gamma(4-\nu)}\, \frac{\Gamma(M-\nu+2)}{\Gamma(M)} \right)^{1/\nu} \]
In practice, we should take this frequency ten times higher than computed with Equation (40).
For the maximum frequency, we should compute the value of the frequency for the first maximum of the absolute value of Equation (35). A tedious computation leads to:
\[ \omega_{\max} \approx \frac{2}{\delta} \sqrt{\frac{6\,(1-\nu)}{6N + \nu + 6N\nu + N^{2}\nu + N^{2} + 9}} \]
Therefore, we can define a bandwidth B of the filter, which is equal to B = ωmaxωl.
For M → ∞, we have to take limits of ratios of gamma functions. The work in [9] (5.11.13) gives:
\[ \frac{1}{z^{a-b}}\, \frac{\Gamma(z+a)}{\Gamma(z+b)} = 1 + \frac{(a-b)(a+b-1)}{2z} + \frac{(a-b)(a-b-1)\left(3(a+b-1)^{2} - (a-b+1)\right)}{24z^{2}} + O\!\left(z^{-3}\right) \]
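This asymptotic expansion can also be checked numerically (a sketch; the values of z, a and b are chosen arbitrarily):

```python
import math

def gamma_ratio_expansion(z, a, b):
    # First two correction terms of [9] (5.11.13) for Gamma(z+a)/Gamma(z+b) * z^(b-a)
    t1 = (a - b) * (a + b - 1) / (2 * z)
    t2 = (a - b) * (a - b - 1) * (3 * (a + b - 1) ** 2 - (a - b + 1)) / (24 * z**2)
    return 1 + t1 + t2

z, a, b = 100.0, 0.3, 1.7
exact = math.gamma(z + a) / math.gamma(z + b) / z ** (a - b)
print(exact, gamma_ratio_expansion(z, a, b))  # agree up to O(z^-3)
```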
Substitution in Equation (39) with δ = 1 gives:
\[ H(0) = \frac{1}{M^{\nu}}\, \frac{12N - 10\nu - 18N\nu + 11\nu^{2} - 3\nu^{3} + 6N\nu^{2} - 6N^{2}\nu + 6N^{2}}{2N(N+2)\,\Gamma(N+1)\,(3-\nu)\,\Gamma(1-\nu)} \left( 1 + O\!\left(\frac{1}{M}\right) \right) \]
Because ν > 0, this constant goes to zero as M → ∞, and the frequency response approximates the ideal case for low frequencies.
With this frequency response, it is seen that the filter only works well when the frequencies of the input signal lie inside the pass band of the filter. The maximum frequency of this pass band depends mainly on the number N, and the minimum frequency on the number M.
Many authors (e.g., [15–18]) tried to describe the properties of a discrete fractional filter in the time domain with causal functions. Then, there are always transient effects at the initial time t = 0. There, the filter has different properties than elsewhere, because of the discontinuity of the input signal (and the resulting high frequencies).
When using the frequency response, the realms of lower and higher frequencies need special attention. For the lower frequencies, the frequency response does not go to zero when using a finite number of points (in practice, this is always the case). We call this the minimum frequency effect. When the input signal is causal, the frequency response has no minimum frequency effect (this is important when, for example, treating differential equations). For the higher frequencies, there is always a maximum depending on the sample frequency (Shannon frequency). However, for the orthogonal derivative, there is an extra maximum that is lower than the Shannon frequency. For the Grünwald–Letnikov (GL) filter, there is no such maximum.

7. Conclusions

The first conclusion of this paper is that when using a discrete fractional filter, the properties of the filter should be examined using the frequency response. The second conclusion is that a discrete fractional differentiating orthogonal filter always has a certain bandwidth.
In general, a fractional differentiating filter can be built from two serial filters. The first one has a frequency response with modulus |ω|^ν. The second filter is a so-called low-pass filter; it determines the highest frequency for differentiation of the filter. In the case of the Jacobi filter (continuous case), one has to keep in mind that the frequency response has side lobes. In the discrete case, these side lobes are not important, because there is a maximum sample frequency. The Jacobi filter can be used if the side lobes lie far beyond the maximum signal frequency. These side lobes go to zero for high frequencies. Hence, this is a very good fractional differentiating filter.
As another example for the low-pass filter, we can choose the Butterworth filter. For this filter, the frequency response is:
\[ H_{n}(\omega) = (i\omega)^{\nu}\, \frac{1}{\sqrt{1 + (\omega/\omega_{0})^{2n}}} \]
which gives a very good frequency response for fractional differentiation, without side lobes. This is demonstrated in Figure 7. There is no simple discrete analogue of this filter.
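The magnitude of this combined response is easy to sketch numerically (assuming the standard Butterworth magnitude 1/√(1 + (ω/ω₀)^{2n}); the function name and parameter values below are illustrative, not from the paper):

```python
import math

def butterworth_frac_mag(omega, nu=0.5, omega0=1.0, n=7):
    # |H_n(omega)| = omega^nu / sqrt(1 + (omega/omega0)^(2n))
    return omega**nu / math.sqrt(1 + (omega / omega0) ** (2 * n))

# Differentiating behavior below omega0, sharp roll-off (no side lobes) above it:
print(butterworth_frac_mag(0.1))   # close to 0.1**0.5, inside the pass band
print(butterworth_frac_mag(10.0))  # strongly attenuated
```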
The work in [1] makes some remarks about the practical applications of the filters obtained from orthogonal polynomials and the Butterworth filter. In the analog integer case, the Butterworth filter can be constructed physically much more easily. In the analog fractional case, a filter should be built with the frequency response (iω)^ν. This poses a difficult problem; solutions are always approximations.
In the discrete case, the Butterworth frequency response can be transformed into a filter equation. See [19]. For more details about the Butterworth filter, see [20] (Section 5.2.1) (both analog and discrete), [21] (both analog and discrete), [22] (Section 3.2) (analog) and [19] (Section 12.6) (digital).

Acknowledgments

I thank T.H. Koornwinder for his help and the time he offered during the writing of this paper. Especially his hints about the Hahn polynomials were very helpful. Without his stimulating enthusiasm and everlasting patience, this work could not have been done.

References

  1. Diekema, E.; Koornwinder, T.H. Differentiation by integration using orthogonal polynomials, a survey. J. Approx. Theory 2012, 164, 637–667. [Google Scholar]
  2. Diekema, E. The fractional orthogonal derivative. Mathematics 2015, 3, 273–298. [Google Scholar]
  3. Grünwald, A.K. Ueber “begrenzte” Derivationen und deren Anwendung. Z. Math. Phys. 1867, 12, 441–480. [Google Scholar]
  4. Letnikov, A.V. Theory of Differentiation of Fractional Order. Mat. Sb. 1868, 3, 1–68. [Google Scholar]
  5. Kuttner, B. On differences of fractional order. Proc. Lond. Math. Soc. 1957, 34–7, 453–466. [Google Scholar]
  6. Isaacs, G.L. An iteration formula for fractional differences. Proc. Lond. Math. Soc. 1963, 3, 430–460. [Google Scholar]
  7. Bhrawy, A.H.; Zaky, M.A. A method based on the Jacobi tau approximation for solving multi-term time-space fractional partial differential equations. J. Comput. Phys. 2015, 281, 876–895. [Google Scholar]
  8. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives; Gordon and Breach: Newark, NJ, USA, 1993. [Google Scholar]
  9. Olver, F.W.J.; Lozier, D.W.; Boisvert, R.F.; Clark, C.W. NIST Handbook of Mathematical Functions; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  10. Oldham, K.B.; Spanier, J. The Fractional Calculus; Academic Press: New York, NY, USA, 1974. [Google Scholar]
  11. Koekoek, R.; Lesky, P.A.; Swarttouw, R.F. Hypergeometric Orthogonal Polynomials and Their q-Analogues; Springer-Verlag: Berlin, Germany, 2010. [Google Scholar]
  12. Slater, L.J. Generalized Hypergeometric Functions; Cambridge University Press: Cambridge, UK, 1966. [Google Scholar]
  13. Erdélyi, A. Higher Transcendental Functions; McGraw-Hill: New York, NY, USA, 1953. [Google Scholar]
  14. Tenreiro Machado, J.A. Calculation of fractional derivatives of noisy data with genetic algorithms. Nonlinear Dyn. 2009, 57, 253–260. [Google Scholar]
  15. Diethelm, K.; Ford, N.J.; Freed, A.D.; Luchko, Yu. Algorithms for the fractional calculus: A selection of numerical methods. Comput. Methods Appl. Mech. Eng. 2005, 194, 743–773. [Google Scholar]
  16. Galucio, A.C.; Deü, J.-F.; Mengué, S.; Dubois, F. An adaptation of the Gear scheme for fractional derivatives. Comp. Methods Appl. Mech. Eng. 2006, 195, 6073–6085. [Google Scholar]
  17. Pooseh, S.; Almeida, R.; Torres, D.F.M. Approximation of fractional integrals by means of derivatives. Comput. Math. Appl. 2012, 64, 3090–3100. [Google Scholar]
  18. Khosravian-Arab, H.; Torres, D.F.M. Uniform approximation of fractional derivatives and integrals with application to fractional differential equations. Nonlinear Stud. 2013, 20. [Google Scholar]
  19. Hamming, R.W. Digital Filters, 3rd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 1989. [Google Scholar]
  20. Oppenheim, A.V.; Schafer, R.W. Digital Signal Processing; Prentice Hall: Upper Saddle River, NJ, USA, 1975. [Google Scholar]
  21. Ziemer, R.E.; Tranter, W.H.; Fannin, D.R. Signals & Systems, 4th ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 1998. [Google Scholar]
  22. Johnson, D.E. Introduction to Filter Theory; Prentice-Hall: Upper Saddle River, NJ, USA, 1976. [Google Scholar]
Figure 1. Moduli of the frequency response of an n-th order differentiator for n = 1, n = 2 and n = 5.
Figure 2. Modulus of the frequency response of the first order Legendre derivative.
Figure 3. Moduli of the frequency responses of a fractional derivative with ν = 1, ν = 1.5 and ν = 2.
Figure 4. Moduli of the frequency responses of a fractional Legendre derivative.
Figure 5. Moduli of the frequency response of the fractional Gram derivative with ν = 0.5.
Figure 6. Moduli of the frequency responses of the fractional Hahn derivative with N = 7 and ν = 0.5.
Figure 7. Modulus of the frequency response for the fractional Butterworth derivative with n = 7.