Fractional Fokker-Planck Equation

We shall discuss the numerical solution of the Cauchy problem for the fully fractional Fokker-Planck (fFP) equation in connection with Sinc convolution methods. The numerical approximation is based on Caputo and Riesz-Feller fractional derivatives. The use of the transfer function in Laplace and Fourier space in connection with Sinc convolutions allows us to derive exponentially convergent computing schemes. Examples with different initial conditions demonstrate effective computation with a small number of grid points on an infinite spatial domain.


Introduction
The Fokker-Planck equation (FP) is used in models of standard diffusion problems involving external fields: where the linear FP operator is defined by L_FP: with the external potential V(x) [1]. V represents the negative external forces in the system. The parameters are the mass m, the friction coefficient η, and the diffusion constant. Risken, in his book, collected and developed a rich variety of solution methods for the FP equation [1]. The basis of the FP equation is a diffusion process redistributing u(x, t) in space x and time t; the quantity may be related, e.g., to mass, charge, or probability. From (2) it becomes apparent that the FP equation is a generalization of the diffusion equation. In recent years the integer-order diffusion equation was generalized to a fractional diffusion equation in which the differentiations with respect to t and x are replaced by differentiations of non-integer order [2-5]. The aim of the current paper is to apply such fractional generalizations to the FP equation as well. Let us first examine Risken's approach. To this end, we separate the diffusion part from the rest of the equation, as follows: We now assume that the function u(x, t) satisfies natural Dirichlet boundary conditions; i.e., u(±∞, t) = 0. The initial condition for the problem is u(x, t = 0) = u^(0)(x). Without loss of generality, we rescale the temporal coordinate by the diffusion constant so that the equation becomes: It is well known that for vanishing external forces the fundamental solution of (4) is given by a Gaussian: This fundamental solution enables us to transform Equation (4) into the integral equation: (6) in which w(x) = V(x)/(mη) is the scaled external force, and in which the first derivative G_x is given by: The function v(x, t) related to the initial condition is given by: The integral equations (IE) (6) and (8) can now be used to derive the solution of the FP equation. These straightforward steps enable us to transform the FP problem into an integral equation that is suitable for approximate solution by the methods given below in this paper (see also [6,7]). We remark that a number of other methods for solving the FP equation already exist. We refer to Risken [1] for an overview of such methods, and to [8-11] for a discussion of more recent developments. Our method of approximate solution has advantages over these other methods in ease of use and in its exponential convergence.
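The Gaussian fundamental solution (5) and its spatial derivative (7) are the kernels of the integral equation (6). A minimal numerical sketch of these two kernels (our Python code and function names, with the diffusion constant scaled to one as above, and assuming the standard heat-kernel normalization):

```python
import numpy as np

def gauss_kernel(x, t):
    """Gaussian fundamental solution of u_t = u_xx (Eq. (5)),
    with the diffusion constant scaled to one."""
    return np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def gauss_kernel_dx(x, t):
    """Spatial derivative G_x, the drift kernel in the IE (6), Eq. (7)."""
    return -x / (2.0 * t) * gauss_kernel(x, t)

# The kernel conserves the total quantity: it integrates to one for t > 0.
x = np.linspace(-20.0, 20.0, 4001)
mass = float(np.sum(gauss_kernel(x, 1.0)) * (x[1] - x[0]))
```

The unit mass of the kernel for every t > 0 reflects the conservation property of the FP dynamics.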
Our aim in this paper is to generalize the integer-order FP equation to its fractional form, the fractional Fokker-Planck (fFP) equation. There exists a large number of papers dealing with such generalizations; see, e.g., [12-14] and references therein. However, each of these papers adds constraints by replacing only some of the integer-order derivatives with fractional ones. In some of these publications only the temporal derivative is replaced [13]. Other papers consider replacing only the second-order spatial derivative, whereas the remaining derivatives are kept at integer order, and so on. Thus these approaches admit solutions that are not the most general possible. We will introduce a fractional representation which converts all integer derivatives to fractional-order derivatives in all independent coordinates. Since the FP equation contains derivatives of order one in time and of orders two and one in the spatial coordinate, we have to consider three different fractional differentiation orders β, α, and µ as follows: Here 0 < β ≤ 1, 0 < α ≤ 2, and 0 < µ ≤ 1 are the constraints for the fractional derivatives consistent with the FP equation. The two skewness parameters θ_α and θ_µ satisfy the conditions |θ_α| ≤ min(α, 2 − α) and |θ_µ| ≤ min(µ, 1 − µ), respectively. The fractional derivatives for the temporal and spatial coordinates are defined in terms of Caputo and Riesz-Feller derivatives. For details of these definitions see Appendix A.
We begin by considering the Cauchy problem for the (spatially one-dimensional) space-time fractional FP Equation (9). Note that our approximation method is also applicable to spatially higher-dimensional cases and can be generalized to systems of FP equations.
We shall first generate the fundamental solution of the IE via application of Laplace and Fourier transforms to the fractional operators. Assuming Re(s) > σ_0 and κ ∈ R, the Laplace and Fourier transforms are defined as: and: Thus the corresponding transforms of the fractional derivatives C D^β_t f(t) and D^α_θ;x g(x) are: and: The fundamental solution of the fractional diffusion equation in the Fourier-Laplace domain with the initial condition û_0(κ) = 1 thus becomes: which delivers the equations: If we use the initial condition û(0) = 1, then we finally get: representing the Laplace-Fourier solution for the Green's or transfer function of the fractional diffusion equation [15]. In fact, knowledge of the Laplace-Fourier representation of the fractional diffusion equation is sufficient to solve the fractional FP Equation (9) by Sinc convolution approximation. Next, the Fourier-Laplace transform of the fFP Equation (9) takes the form: Applying Leibniz's rule for fractional derivatives, we write: The Green's function G_α,β(x, t) has to satisfy: with −∞ < x < ∞ and t ≥ 0, which applied to (19) yields: Assuming that w^(1) is negligible relative to w, and since the Green's function satisfies: which can be further simplified if we use a Taylor expansion of w around x = 0: The major contribution is now assigned to the constant term w(0), so that: Equation (24), when Fourier-Laplace transforms are applied, delivers the determining equation for G_α,β: Under the assumption that the major contribution is generated by u itself in Fourier-Laplace space, we approximate w(0) = 1, which then yields:

s^β Ĝ_α,β(κ, s) − s^(β−1) Ĝ_α,β(0) ≈ −|κ|^α e^(i sign(κ) θ_α π/2) Ĝ_α,β(κ, s) + |κ|^µ e^(i sign(κ) θ_µ π/2) Ĝ_α,β(κ, s)

Solving with respect to the Fourier-Laplace variable we get:

Ĝ_α,β(κ, s) = s^(β−1) Ĝ_α,β(0) / (s^β + |κ|^α e^(i sign(κ) θ_α π/2) − |κ|^µ e^(i sign(κ) θ_µ π/2))    (27)

and setting Ĝ_α,β(0) = 1, we finally get:

Ĝ_α,β(κ, s) = s^(β−1) / (s^β + |κ|^α e^(i sign(κ) θ_α π/2) − |κ|^µ e^(i sign(κ) θ_µ π/2))

Here, α as a superscript of Ĝ denotes the collection of the
parameters α = (α, θ_α), together with µ and θ_µ. This transfer/Green function collects the basic information about the fractional FP equation. It will be used in the discrete convolution representation to approximate the fractional IE: incorporating the initial condition: with w(x) = V(x)/(mη).
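The transfer function (27) is straightforward to evaluate numerically. The following sketch (our code and naming, with Ĝ_α,β(0) = 1) also checks the limit κ → 0, where Ĝ reduces to 1/s, the Laplace transform of a conserved unit initial condition:

```python
import numpy as np

def transfer_hat(kappa, s, alpha, beta, mu, theta_a, theta_m):
    """Fourier-Laplace transfer function of the fFP equation, Eq. (27)
    with G(0) = 1:
    s^(beta-1) / (s^beta + |k|^alpha e^{i sgn(k) th_a pi/2}
                         - |k|^mu    e^{i sgn(k) th_m pi/2})."""
    phase_a = np.exp(1j * np.sign(kappa) * theta_a * np.pi / 2)
    phase_m = np.exp(1j * np.sign(kappa) * theta_m * np.pi / 2)
    denom = (s**beta
             + np.abs(kappa)**alpha * phase_a
             - np.abs(kappa)**mu * phase_m)
    return s**(beta - 1) / denom

# At kappa = 0 the denominator is s^beta, so G = 1/s (conserved mass):
g0 = transfer_hat(0.0, 2.0, 1.8, 0.9, 0.6, 0.1, -0.05)
```

The κ = 0 check is a useful sanity test for any implementation, since it is independent of all fractional parameters.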
Comparing the spectral representation of the fFP equation with the Montroll-Weiss equation used for continuous-time random walks (CTRW) [15], we identify the transfer function by means of the expression: where Ψ(s) = (1 − ψ(s))/s and Φ(κ, s) = ŵ(κ) ψ(s), with ψ(s) = 1 − s^β representing the asymptotic scaling behavior for s → 0. The separation of the Fourier and Laplace variables in B_α,β(κ, s) is the essential structural property. In (32), B_γ(κ) ≈ −|κ|^γ e^(i sign(κ) θ_γ π/2) represents the asymptotic scaling law in Fourier space as κ → 0, with γ ∈ {α, µ}. If in addition we assume that ŵ(κ) = ŵ(0) + ŵ'(0)κ + … as κ → 0, we set ŵ(κ) = ŵ(0) = 1 as a proper normalization. Thus the fundamental solution in the Fourier-Laplace representation becomes: corresponding to a CTRW equation of the type: with the survival function Ψ(t) = ∫_t^∞ ψ(τ) dτ denoting the probability that at instant t the particle is still sitting at its starting position x = 0. The transfer function û(κ, s) in (33) will be the key expression in the solution of the fFP equation: The IE equivalent to (35) is numerically approximated by a successive Neumann iteration. Examples of fractional integral equations are solved for single-valued functions in [16,17]. The work in [16] proves that for fractional integral equations the solution exists and converges exponentially under the condition that the functions are analytic. Using the property of separation of variables in convolution representations, the results for single-valued functions are also valid for multivalued functions [18]. Thus the existence of solutions and the convergence of the numerical approximation to the solution are guaranteed (for details see [16,18]). Based on these facts, we next introduce Sinc methods to approximate fractional partial differential equations, specifically the fFP equation.

Methods of Approximation
This section introduces the basic ideas of Sinc methods [19]. We discuss only the main ideas, as a collection of recipes for setting up a Sinc approximation. We omit most of the proofs of the various theorems and lemmas because these proofs are available in the literature [6,18,20-22].
The following subsections collect information on the basic mathematical functions used in Sinc approximation. We introduce Sinc methods to represent indefinite integrals and convolution integrals. These types of integrals are essential for representing the fractional operators of differentiation and integration [7,18].

Sinc Basis
To start with, we introduce some definitions and theorems that specify the space of functions, the domains, and the arcs for a Sinc approximation.

Definition 1. Domain and Conditions
Let D be a simply connected domain in the complex plane C with boundary ∂D. Let a and b denote two distinct points of ∂D, let φ denote a conformal map of D onto the strip D_d, let ψ = φ^(−1) denote the inverse conformal map, and let Γ be an arc defined by Γ = {z ∈ C : z = ψ(x), x ∈ R}. Given φ, ψ, and a positive number h, we set z_k = ψ(kh), k ∈ Z, to be the Sinc points, and we define ρ(z) = e^(φ(z)).
Note that the Sinc points are an optimal choice of approximation points, in the sense of the Lebesgue constant, for Sinc approximations [23].
Now let the positive numbers α and β belong to (0, 1], and let M_α,β(D) denote the family of all functions g ∈ Hol(D) such that g(a) and g(b) are finite numbers, where g(a) = lim_{z→a} g(z) and g(b) = lim_{z→b} g(z), and such that u ∈ L_α,β(D), where: The two definitions allow us to formulate the following theorem for Sinc approximations.
The proof of Theorem 1 is given in [20]. Note that the choice h = (πd/(βN))^(1/2) is close to optimal for an approximation in the space M_α,β(D), in the sense that the error bound in Theorem 1 cannot be appreciably improved regardless of the basis [20]. It is also optimal in the sense of the Lebesgue constant, achieving a value smaller than that of Chebyshev approximations [23]. We also note that this behavior is found in bivariate approximations based on Poly-Sinc methods [24-26].
These definitions and the theorem directly allow the formulation of an algorithm for a Sinc approximation. Let Z denote the set of all integers. Select positive integers N and M = [βN/α] and set m = M + N + 1. The step length is determined by h = (πd/(βN))^(1/2), where α, β, and d are real parameters. In addition, assume there is a conformal map φ and its inverse ψ such that we can define the Sinc points z_j = ψ(jh), j ∈ Z [20]. The following relations define the basis of a Sinc approximation: The shifted Sinc is derived from relation (39) by translating the argument in integer steps of length h and applying the conformal map to the independent variable: The discrete shifting allows us to cover the approximation interval (a, b) in a dense way, while the conformal map is used to map the interval of approximation from an infinite range of values to a finite one. Using the Sinc basis we are able to represent the basis functions as piecewise defined functions w_j(z) by: where ρ(z) = exp(φ(z)). This form of the Sinc basis is chosen so as to satisfy the interpolation at the boundaries. The basis functions defined in (41) suffice for purposes of uniform-norm approximation over (a, b). This notation allows us to define a row vector V_m(S) of basis functions: with w_j defined as in (41). For a given vector V_m(u) = (u_−M, …, u_N)^T we now introduce the product as an approximation of the function u(z) by: Based on this notation, we will introduce in the next few subsections the different integrals we need [18].
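The basis of (39) and (40) is easy to reproduce numerically. The following sketch (our code; for illustration we take the identity map φ(x) = x, i.e., approximation on the whole real line) verifies the interpolation property S(k, h)(jh) = δ_jk and applies the approximation (43) to a Gaussian:

```python
import numpy as np

def shifted_sinc(k, h, x):
    """Shifted Sinc basis, Eq. (40) with phi(x) = x:
    S(k, h)(x) = sin(pi (x - k h)/h) / (pi (x - k h)/h)."""
    return np.sinc((x - k * h) / h)   # np.sinc(t) = sin(pi t)/(pi t)

h = 0.5
# Interpolation: the k-th basis function is 1 at z_k = k h, 0 at the others.
delta = [shifted_sinc(2, h, j * h) for j in (0, 1, 2, 3)]

# Sinc approximation u(x) ~ sum_k u(z_k) S(k, h)(x), as in Eq. (43):
u = lambda x: np.exp(-x**2)
x0 = 0.3
approx = sum(u(k * h) * shifted_sinc(k, h, x0) for k in range(-32, 33))
```

Even with the modest step h = 0.5 the Gaussian is reproduced to several digits, illustrating the exponential convergence discussed above.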

Indefinite Integral
For the representation of fractional derivatives we need indefinite integrals. This subsection describes how indefinite integrals can be collocated [20] and how these definitions are related to the definition of definite integrals. For collocating an indefinite integral and for obtaining explicit approximations of the functions (J u)(x) and (J u)(x) defined by: we use the following basic relations [20]. Let Z denote the set of all integers, and let C denote the complex plane. Let Sinc(x) be given by (40) and let e_k be defined as: with Si(x) the sine integral. This puts us in a position to write: If now φ denotes a one-to-one transformation of the interval (a, b) onto the real line R, let h denote a fixed positive number, and let the Sinc points be defined on (a, b) by z_k = φ^(−1)(kh), k ∈ Z, where φ^(−1) denotes the inverse function of the conformal map φ. Let M and N be positive integers, set m = M + N + 1, and, for a given function u defined on (a, b), define a diagonal matrix D(u) by D(u) = diag[u(z_−M), …, u(z_N)]. Let I^(−1) be a square Toeplitz matrix of order m having e_(i−j) as its (i, j)-th entry, i, j = −M, …, N: Define the square matrices A_m and B_m by: where the superscript "T" denotes the transpose. The collocated representations of the indefinite integrals are thus given by: These are collocated representations of the indefinite integrals defined in (44) and (45), respectively [18]. If we compare these results with the target equations, for example (35), we observe that we need one additional step to extend the indefinite integrals to convolution integrals.
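To make the construction concrete, here is a small sketch (our code, not the authors' implementation) that builds I^(−1) from e_k = 1/2 + Si(kπ)/π and forms A_m = h I^(−1) D(1/φ′) for the identity map on the real line; applying A_m to a Gaussian density reproduces its indefinite integral (the cumulative distribution) at the Sinc points:

```python
import numpy as np

def Si(x, n=20001):
    # sine integral Si(x) = int_0^x sin(t)/t dt, plain trapezoidal rule
    # (scipy.special.sici gives the same values)
    t = np.linspace(0.0, x, n)
    f = np.ones_like(t)
    nz = t != 0
    f[nz] = np.sin(t[nz]) / t[nz]
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

def indefinite_integral_matrix(M, N, h):
    """A_m = h I^(-1) D(1/phi') for phi(x) = x on R, where I^(-1)
    is the Toeplitz matrix with entries e_{i-j}."""
    m = M + N + 1
    e = {k: 0.5 + Si(np.pi * k) / np.pi for k in range(-(m - 1), m)}
    I1 = np.array([[e[i - j] for j in range(m)] for i in range(m)])
    return h * I1          # D(1/phi') is the identity for phi(x) = x

M = N = 16
h = 0.5
A = indefinite_integral_matrix(M, N, h)
z = np.arange(-M, N + 1) * h
u = np.exp(-z**2) / np.sqrt(np.pi)       # normalized Gaussian density
Ju = A @ u                               # (J u)(z_i) = int_{-inf}^{z_i} u
```

At the central Sinc point z = 0 the computed indefinite integral is close to 1/2, as it must be for a symmetric density.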

Convolution Integrals
Indefinite convolution integrals can also be effectively collocated via Sinc methods [20]. This section discusses the core procedure of this paper for collocating the convolution integrals and for obtaining explicit approximations of the functions p and q defined by: where x ∈ Γ. In presenting these convolution results we shall assume that Γ = (a, b) ⊆ R, unless otherwise indicated. Note also that being able to collocate p and q enables us to collocate definite convolutions like ∫_a^b f(x − t) g(t) dt. Before presenting the collocation of Equations (53) and (54), we mention that there is a special approach to evaluating the convolution integrals by using a Laplace transform. In fact, Lubich [27,28] introduced this way of calculation by the following idea:

p(x) = (1/(2πi)) ∫ F(s) ∫_0^x e^(st) g(x − t) dt ds    (55)

for which the inner integral solves the initial value problem y' = sy + g with y(0) = 0. We assume that the Laplace transform (Stenger-Laplace transform): with E any subset of R such that E ⊇ (0, b − a), exists for all s ∈ Ω^+ = {s ∈ C | Re(s) > 0}.
The eigenvalues of A_m and B_m all have positive real part; this was a conjecture of F. Stenger that stood open for 20 years and was recently proved by Han and Xu [29].
In the notation introduced above we get: and: which are accurate approximations, provided that the inverse Laplace transform of F^+(1/s) g(1/s) again belongs to a space M_α,d for some positive numbers α and d [18]. The procedure to calculate the convolution integrals is now as follows. Collocate the integrals J_m = V_m(S) A_m V_m and J_m = V_m(S) B_m V_m, and diagonalize A_m and B_m in the form: with Σ = diag[s_−M, …, s_N] the eigenvalues arranged in a diagonal matrix, for each of the matrices A_m and B_m. Note that A_m and B_m have the same vector of eigenvalues. Then the Stenger-Laplace transform (56) delivers the square matrices F^+(A_m) and F^+(B_m) defined via the equations: We obtain the approximations of (57) and (58) by: These two formulas deliver a finite approximation of the convolution integrals p and q. The convergence of the method is exponential under the above Laplace transform assumptions; see [20] for a proof of a special case.
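The diagonalization procedure can be sketched end to end for a concrete kernel. In the example below (our illustration, on the finite interval (0, b) with the standard map φ(x) = ln(x/(b − x))), the kernel f(t) = e^(−t) has the Stenger-Laplace transform F^+(s) = ∫_0^∞ e^(−t) e^(−t/s) dt = s/(s + 1), and p = F^+(A_m) g reproduces the Volterra convolution ∫_0^x e^(−(x−t)) g(t) dt at the Sinc points:

```python
import numpy as np

def Si(x, n=20001):
    # sine integral via the trapezoidal rule (scipy.special.sici is equivalent)
    t = np.linspace(0.0, x, n)
    f = np.ones_like(t)
    nz = t != 0
    f[nz] = np.sin(t[nz]) / t[nz]
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

def conv_operator(N, h, b):
    """A_m = h I^(-1) D(1/phi') on (0, b) with phi(x) = log(x/(b - x)),
    together with the Sinc points z_k = b/(1 + exp(-k h))."""
    m = 2 * N + 1
    e = {k: 0.5 + Si(np.pi * k) / np.pi for k in range(-(m - 1), m)}
    I1 = np.array([[e[i - j] for j in range(m)] for i in range(m)])
    k = np.arange(-N, N + 1)
    z = b / (1.0 + np.exp(-k * h))
    return h * I1 * (z * (b - z) / b), z    # right-multiply by D(1/phi')

b, N = 4.0, 20
h = np.pi / np.sqrt(2.0 * N)
A, z = conv_operator(N, h, b)

# F^+(A_m) = X F^+(S) X^-1 with F^+(s) = s/(s + 1) for f(t) = exp(-t):
s, X = np.linalg.eig(A)
FA = (X * (s / (s + 1.0))) @ np.linalg.inv(X)

g = z * (b - z)                    # test function vanishing at both ends
p = np.real(FA @ g)                # p(z_i) ~ int_0^{z_i} exp(-(z_i - t)) g(t) dt

# closed form of the same convolution, for comparison:
exact = (b * z - b - z**2 + 2 * z - 2) + (b + 2) * np.exp(-z)
err = float(np.max(np.abs(p - exact)))
```

The kernel never has to be sampled on the integration triangle; only its Laplace transform evaluated at the eigenvalues of A_m enters, which is the key computational advantage of the method.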

Convolution Integrals in Two Variables
Examining the IE (35) in more detail, we realize that we have to extend the one-dimensional convolution to a two-dimensional one; in fact we have to deal with integrals of the type: As discussed in the previous section, we will use a Laplace representation of the convolution integrals, but now in two dimensions. The approximation is sought over the region (a_1, b_1) × (a_2, b_2), with (a_i, b_i) ⊆ R, i = 1, 2. Note the separation of variables made possible by the one-dimensional convolution approximation, and how easily the final algorithm can be adapted to parallel computation. In order to guarantee some accuracy in the final approximation, we shall simply assume, without going into detail, that the function p belongs to the class M_α_j,β_j with respect to each variable x_j, for all fixed values of the other variable, each in its respective interval of definition, j = 1, 2.
We shall assume that the mappings φ_j have been selected properly according to the limits of the intervals. We furthermore assume that positive integers N_j and M_j, as well as positive numbers h_j (j = 1, 2), have been selected such that h_j = (πd/(β_j N_j))^(1/2). These definitions ensure that we get the same order of accuracy of approximation ε = O(N_1^(1/2) exp(−c N_1^(1/2))) in each variable [16,18]. We set m_j = M_j + N_j + 1, and we define the Sinc points by z_l^(j) = φ_j^(−1)(l h_j) for l = −M_j, …, N_j; j = 1, 2. Next, we determine matrices A_j, X_j, S_j, and X_j^(−1) such that: In (66), A_j is defined as in (48), and the S_j are diagonal matrices: Arbitrarily taking c_j ∈ (2(b_j − a_j), ∞), we set:

G(s^(1), s^(2)) = ∫_0^(c_2) F(s^(1), x) e^(−x/s^(2)) dx    (69)

We now illustrate the method of separation of variables. To this end, we first rewrite (65) in the notationally more convenient form: representing the first integral in (65). Discretization with respect to t. We set g_i(ξ) = g(ξ, z_i^(1)), i = −M_1, …, N_1, and we then define a vector p(x) by: where A_1 and F are defined in (66) and (68), respectively. Using the diagonalization identity given in (66), it now follows from (72) that: The expression (73) motivates the transformations: Thus, denoting the components of h and q by h_i and q_i respectively, with i = −M_1, …, N_1, Equation (73) reduces to the decoupled set of scalar equations: Discretization with respect to x. We now set: and we then define a vector q_i by: where A_2 and G are defined in (66) and (69), respectively. Using the diagonalization identity A_2 = X_2 S_2 X_2^(−1) given in (66), it now follows from (77) that:

This last expression motivates the transformations: Denoting the components of r_i and k_i by r_i,j and k_i,j respectively, with i = −M_1, …, N_1 and j = −M_2, …, N_2, Equation (78) reduces to the decoupled set of scalar equations: By assumption, the k_i,j are known at this point, and (80) then determines the r_i,j. The second equation in (79) is then used to determine the vectors q_i at the Sinc points z^(2). The second transformation of (74) is next used to determine the vector p(x) at the set of Sinc points. We can thus recover the complete array of values p(x, t) at the set of Sinc points. The algorithmic procedure is described in more detail in [21].

Sinc Collocation of fFP
For fractional differential equations there exists a solution in terms of a Volterra integral equation of the second kind incorporating the initial conditions of the Cauchy problem, as given in (29) and (30), in the representation of (34) connected to (33). These relations are equivalent to the fFP Equation (35). The discrete version of Equation (34) follows as a collocation approximation: Here, N collects the given initial conditions of the problem together with factors, independent of t and x, related to the Caputo and Riesz-Feller operators (see Appendix A). The discrete version follows straightforwardly from (81) as: The matrices F^+ are set up according to the discussion in Section 2.4, using the transfer function (33) in the Laplace transform representation. Solving this linear system with respect to V_m(u) allows us to approximate the solution by: Note that the solution of (82) is generated using a Neumann iteration, applying a threshold for the difference of two consecutive approximations, with a view to generalizing the linear equation to a nonlinear one [16].
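The Neumann iteration used for the collocated system (82) is a plain fixed-point scheme u_(k+1) = v + K u_k with a threshold on consecutive iterates. A generic sketch (our code; here K is a small random contraction standing in for the Sinc convolution matrices of the actual system):

```python
import numpy as np

def neumann_solve(K, v, tol=1e-12, max_iter=200):
    """Successive (Neumann) iteration u_{k+1} = v + K u_k, stopped when
    two consecutive iterates differ by less than a threshold."""
    u = v.copy()
    for _ in range(max_iter):
        u_next = v + K @ u
        if np.max(np.abs(u_next - u)) < tol:
            return u_next
        u = u_next
    return u

rng = np.random.default_rng(0)
K = 0.1 * rng.standard_normal((8, 8))   # contraction: spectral radius < 1
v = rng.standard_normal(8)
u = neumann_solve(K, v)
residual = float(np.max(np.abs(u - (v + K @ u))))
```

When K is a contraction the iterates converge geometrically, which is why the examples below reach the specified error level within a handful of steps.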

Experimental Section
This section collects four examples for the fFP equation using different types of external forces and different values of the fractional derivatives. We also vary the skewness parameters to exemplify their effects on the approximations, such as counter-gradient transport [30]. The first three examples use a normalized Gaussian distribution with different amplitudes. Since the Sinc approximation allows an arbitrary initial distribution, we examine in the fourth example the influence of the initial distribution on the solution.
Example 1. The first example uses a normalized Gaussian as the initial distribution, u^(0)(x) = √b e^(−bx²)/√π with b = 4. The initial state is shown in Figure 1. The approximation on the time domain [0, T] = [0, 2.0] was generated iteratively using a Neumann iteration scheme. The external force w(x) was set to a constant value, w(x) = −1. Figure 1 collects the initial distribution, a sequence of temporal steps represented on the spatial domain, and an (x, t) graph of u(x, t). The level lines of this graph are also shown in a contour plot.
The graphs in Figure 1 demonstrate that the approximations represent the expected dispersive behavior. The initially localized distribution starts to spread with reduced amplitude for times larger than zero. The dispersion is visible in the broadening of the distribution; as we step along the time axis, the amplitude of u(x, t) is reduced. We also observe that the function stays positive; i.e., u(x, t) ≥ 0.
Example 2. The second example uses the same fractional parameters β, α, and µ, except θ_α = 0.21 and θ_µ = 0. Figure 2 also includes a graph showing the convergence of the Neumann iteration of the IE (top left). The specified error level is reached within four iterations. The graphs in Figure 2 again show dispersive behavior of the approximation (broadening of the distribution and reduction of the amplitude). It becomes obvious that the decay of the amplitude is slower than in Example 1 (see Figure 1). In addition, we observe that the decaying maximum is shifted to the left as time progresses and that the symmetry of the solution is distorted. The finite value of θ_α is responsible for this effect. Again, the distribution is positive on the whole domain. Figure 2 also shows the time evolution of the distribution u(x, t) as a 3D plot and a contour plot. We observe that, due to the finite value of θ_α, the symmetry of the solution is broken and a steep ramp exists on the right of the approximation. The left side of the solution decays in the spatial direction much more slowly than the steep ramp. Overall, the dispersion of the solution is present but occurs mainly on the left side of the maximum of the distribution. The location of the maximum of the distribution follows an asymptotic relation ∼ t^α. Example 3. The third example uses the same initial condition and fractional parameters as Example 2 (see Figure 2), except for θ_µ = −0.1. The second skewness parameter is set to a negative value, which affects the decay rate and the shifting of the maximum of the distribution. The dispersion rate is also affected by this second skewness parameter (see Figure 3).
In Figure 4 we collect some characteristics of the approximations generated in Examples 1, 2, and 3. The data shown in Figure 4 represent the location of the maximum x_0 of u(x, t), the decay of the maximum value in time, and the change of the width ∆x of the maximum at the level u_max(e − 1)/e, corresponding to 63% of the maximum. The three graphs summarize the scaling properties of the approximations. The first graph (top row) shows that the location of the maximum shifts in time if the skewness parameters are different from zero. The shift of the maximum follows a relation x_0 ∼ t^0.28 for θ_α = 0.21 and θ_µ = −0.1. As a reference, for zero skewness parameters there is no shift of the maximum (top line in Figure 4, top row). The second characteristic of the approximation is the decay of the maximum in time. The decay follows the relation u_max ∼ exp(−δ t^0.1). Note that the exponent of t is the same for all approximations. The width ∆x of the maximum also changes with time according to the dispersion. For vanishing skewness we have the relation ∆x ∼ t^0.67, while for the skewness θ_α = 0.21 and θ_µ = −0.1 we found the relation ∆x ∼ t^0.21. This indicates that the dispersion process is influenced by the skewness parameters. Example 4. This example uses an external force w(x) = −2 sin²(2πx) with an initial condition u^(0)(x) = 0.797885 e^(−x²/2) sin²(πx), where the prefactor is chosen such that the integral over R is normalized to one. Figure 5 collects the information about the fractional exponents used and displays the behavior of the approximation.
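Scaling exponents such as those quoted above (x_0 ∼ t^0.28, ∆x ∼ t^0.67) can be extracted from numerical data by a least-squares fit in log-log coordinates. A small sketch (our code; the data below are a synthetic stand-in, not the measured values from Figure 4):

```python
import numpy as np

def scaling_exponent(t, y):
    """Least-squares slope of log y versus log t,
    i.e., the exponent sigma in y ~ t^sigma."""
    sigma, _ = np.polyfit(np.log(t), np.log(y), 1)
    return sigma

# synthetic stand-in for a maximum-location curve x_0(t) ~ t^0.28:
t = np.linspace(0.1, 2.0, 40)
x0 = 0.7 * t**0.28
sigma = scaling_exponent(t, x0)
```

For exact power-law data the fitted slope recovers the exponent to machine precision; for the numerically measured curves it yields the effective exponents reported above.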

Conclusions
Starting from the integer-order Fokker-Planck equation and applying fractional operators to the temporal and spatial coordinates, we derived a fully fractional Fokker-Planck equation. The equivalent representation of the Montroll-Weiss equation, using the adapted transfer function with three fractional exponents and two skewness parameters, provides us with a Volterra integral equation of the second kind suitable for numerical approximation. The approximation of the integral equation uses exponentially convergent Sinc convolution integrals extended to higher dimensions. The Volterra equation of the second kind is converted by the Sinc convolution methods into a linear system of equations, which is solved by a Neumann iteration. This iteration converges reliably and quickly to a specified error margin. The examples presented demonstrate dispersing approximations for a choice of different initial conditions u^(0)(x). The dispersion is affected by the fractional exponents and the two skewness parameters. Numerical examination of the shift of the maximum and the width of the distribution for different times shows that these quantities follow scaling relations in time. We also observe in these examples so-called counter-gradient transport for the function u. Furthermore, a drift of the maximum concentration is observed.

Definition 2.
Function Space. Let d ∈ (0, π), and let the domains D and D_d be given as in Definition 1. If d' is a number such that d' > d, and if the function φ provides a conformal map of D' onto D_d', then D ⊂ D'. Let α and β denote positive numbers, and let L_α,β(D) denote the family of functions u ∈ Hol(D) for which there exists a positive constant c_1 such that, for all z ∈ D:

Figure 2.
Figure 2. (a) The top left panel shows the convergence of the Neumann iteration toward a fixed error level; (b) approximation of the fFP for different times (t ∈ {0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5, 1.7, 1.9}, top down) using a normalized Gaussian with amplitude b = 4; (c) 3D graph of the approximation; and (d) contour plot of the approximation. The fractional exponents are given on top of graphs (b), (c), and (d).

Figure 4.
Figure 4. (a) Location of the maximum x_0 as a function of time. The top curve shows the location of the maximum where the skewness parameters are zero. The lower two curves represent the skewness θ_α = 0.21 and θ_µ = −0.1. The bending of the curve follows a relation x_0 ∼ t^σ with σ ≈ 0.28; (b) the decay of the maximum amplitude satisfies the relation u_max ∼ exp(−δ t^γ) with γ ≈ 0.1; and (c) change of the width ∆x at the level u_max(e − 1)/e. The width changes according to the relation ∆x ∼ t^η, with η ≈ 0.67 for zero skewness (top curve) and η ≈ 0.9 for θ_α = 0.21 and θ_µ = −0.1 (lower curve).