Fractional diffusion in Gaussian noisy environment

We study fractional diffusion in a Gaussian noisy environment as described by a fractional order stochastic partial differential equation of the following form: $D_t^\alpha u(t, x)=Bu+u\cdot \dot W^H$, where $D_t^\alpha$ is the Caputo fractional derivative of order $\alpha$ with respect to the time variable $t$, $B$ is a second order elliptic operator with respect to the space variable $x\in\mathbb{R}^d$, and $\dot W^H$ is a fractional Gaussian noise of Hurst parameter $H=(H_1, \cdots, H_d)$. We obtain conditions on $\alpha$ and $H$ under which the square integrable solution $u$ exists and is unique.


Introduction
In recent years there has been a great amount of work on anomalous diffusions in the study of biophysics and related fields (see for example [1], [4], [10], [11], to mention just a few). In mathematics, some of these anomalous diffusions (such as sub-diffusions) can be described by so-called fractional order diffusion processes. As for the term "fractional order diffusion", one has to distinguish two completely different types. One is the equation of the form $\partial_t u(t, x) = -(-\Delta)^{\alpha/2} u(t, x)$, where $t \ge 0$, $x \in \mathbb{R}^d$, $\alpha \in (0, 2)$, $\partial_t = \frac{\partial}{\partial t}$ and $\Delta = \sum_{i=1}^d \partial^2_{x_i}$ is the Laplacian. This equation is not associated with anomalous diffusion. Instead, it is associated with the so-called stable process (or, more generally, a Lévy process), which has jumps. The other equation is of the form $D_t^{(\alpha)} u(t, x) = \Delta u(t, x)$, where $D_t^{(\alpha)}$ is the Caputo fractional derivative with respect to $t$ (see [9] for the study of various fractional derivatives). This equation is relevant to the anomalous diffusion we mentioned and has been studied by a number of researchers; see for example [2] and the references therein. If one considers anomalous diffusion in a random environment, then one is naturally led to the study of a fractional order stochastic partial differential equation of the form $D_t^{(\alpha)} u(t, x) = Bu(t, x) + u(t, x)\dot W(t, x)$, where $B$ is a second order differential operator, including the Laplacian as a special example, and $\dot W$ is a noise. In this paper, we shall study this fractional order stochastic partial differential equation when $\dot W(t, x) = \dot W^H(x)$ is a time homogeneous fractional Gaussian noise of Hurst parameter $H = (H_1, \cdots, H_d)$. Mainly, we shall find a relation between $\alpha$ and $H$ such that the above equation has a unique square integrable solution.
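For the reader's convenience, the Caputo derivative appearing above has the standard form (see [9]); for $0 < \alpha < 1$,

```latex
D_t^{(\alpha)} u(t,x) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, \frac{\partial u}{\partial s}(s,x)\, ds .
```

Formally letting $\alpha \to 1$ recovers the ordinary derivative $\partial_t u$, consistent with the reduction to the case $\alpha = 1$ discussed below.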
If $\alpha$ is formally set to 1, then the above stochastic partial differential equation is the one studied in [5]. So, our work can be considered as an extension of [5] to the case of fractional diffusion (in a Gaussian noisy environment). Let us also mention that when we formally set $\alpha = 1$, we recover one of the main results in [5]. Thus, our condition (2.10) given below is also optimal.
Here is the organization of the paper. Section 2 describes the operator $B$ and the noise $W^H$ and states the main result of the paper. In our proof we need to use the properties of the two fundamental solutions (Green's functions) $Z(t, x, \xi)$ and $Y(t, x, \xi)$ associated with the equation $D_t^{(\alpha)} u(t, x) = Bu(t, x)$, which are represented by Fox's H-function. We recall the most relevant results on the H-function and the Green's functions $Z(t, x, \xi)$ and $Y(t, x, \xi)$ in Section 3. A number of preparatory lemmas are needed to prove the main results; they are presented in Section 4. Finally, the last section is devoted to the proof of our main theorem.

Main result
Let
$$Bu(x) \;=\; \sum_{i,j=1}^d a_{ij}(x)\, \frac{\partial^2 u(x)}{\partial x_i \partial x_j} \;+\; \sum_{i=1}^d b_i(x)\, \frac{\partial u(x)}{\partial x_i}$$
be a uniformly elliptic second-order differential operator with bounded continuous real-valued coefficients. Let $u_0$ be a given bounded continuous function (locally Hölder continuous if $d > 1$). Let $\{W^H(x),\, x \in \mathbb{R}^d\}$ be a time homogeneous (time-independent) fractional Brownian field on some probability space $(\Omega, \mathcal{F}, P)$ (as elsewhere in probability theory, we omit the dependence of $W^H(x) = W^H(x, \omega)$ on $\omega \in \Omega$). Namely, the stochastic process $\{W^H(x),\, x \in \mathbb{R}^d\}$ is a (multi-parameter) Gaussian process with mean 0 and covariance given by
$$E\bigl[W^H(x)\, W^H(y)\bigr] \;=\; \prod_{i=1}^d R^{H_i}(x_i, y_i), \qquad x, y \in \mathbb{R}^d,$$
where $H_1, \cdots, H_d$ are some real numbers in the interval $(0, 1)$. Due to some technical difficulty, we assume that $H_i > 1/2$ for all $i = 1, 2, \cdots, d$. Here the symbol $E$ denotes the expectation on $(\Omega, \mathcal{F}, P)$ and
$$R^{H}(s, t) \;=\; \frac{1}{2}\bigl(|s|^{2H} + |t|^{2H} - |s - t|^{2H}\bigr)$$
is the covariance function of a fractional Brownian motion of Hurst parameter $H$. Throughout this paper we fix an arbitrary parameter $\alpha \in (0, 1)$ and a finite time horizon $T \in (0, \infty)$. We study the following stochastic partial differential equation of fractional order:
$$D_t^{(\alpha)} u(t, x) \;=\; B u(t, x) + u(t, x)\cdot \dot W^H(x), \qquad u(0, x) = u_0(x), \qquad (2.2)$$
where $D_t^{(\alpha)}$ is the Caputo fractional derivative (see e.g. [9]) and $\dot W^H(x)$ denotes the (formal) derivative $\frac{\partial^d}{\partial x_1 \cdots \partial x_d} W^H(x)$. Our objective is to obtain conditions on $\alpha$ and $H$ such that the above equation has a unique solution. But since $W^H$ is not differentiable, or since $\dot W^H(x)$ does not exist as an ordinary function, we have to describe in what sense a random field $\{u(t, x),\, t \ge 0,\, x \in \mathbb{R}^d\}$ is a solution to the above equation (2.2).
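To make the covariance structure concrete, the following sketch (our own illustrative code; the names `fbm_cov` and `field_cov` are not from the paper) assembles the covariance matrix of $W^H$ at finitely many points as a product of one-dimensional fractional Brownian motion covariances:

```python
import numpy as np

def fbm_cov(s, t, H):
    """Covariance R^H(s, t) = (|s|^{2H} + |t|^{2H} - |s - t|^{2H}) / 2
    of a fractional Brownian motion with Hurst parameter H."""
    return 0.5 * (abs(s) ** (2 * H) + abs(t) ** (2 * H) - abs(s - t) ** (2 * H))

def field_cov(points, H):
    """Covariance matrix of the field W^H at the given points of R^d,
    taken as the product of one-dimensional fBm covariances."""
    n = len(points)
    C = np.ones((n, n))
    for a in range(n):
        for b in range(n):
            for xa, xb, Hi in zip(points[a], points[b], H):
                C[a, b] *= fbm_cov(xa, xb, Hi)
    return C
```

Since each one-dimensional covariance is nonnegative definite and the entrywise (Schur) product of nonnegative definite matrices is nonnegative definite, the resulting matrix is a legitimate Gaussian covariance.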
To motivate our definition of the solution, let us consider the following (deterministic) partial differential equation of fractional order, with the term $u(t, x)\cdot \dot W^H(x)$ in (2.2) replaced by $f(t, x)$:
$$D_t^{(\alpha)} u(t, x) \;=\; B u(t, x) + f(t, x), \qquad u(0, x) = u_0(x), \qquad (2.3)$$
where the function $f$ is bounded and jointly continuous in $(t, x)$ and locally Hölder continuous in $x$. In [2], it is proved that there are two Green's functions $Z(t, x, \xi)$ and $Y(t, x, \xi)$ such that the solution to the Cauchy problem (2.3) is given by
$$u(t, x) \;=\; \int_{\mathbb{R}^d} Z(t, x, \xi)\, u_0(\xi)\, d\xi \;+\; \int_0^t \int_{\mathbb{R}^d} Y(t - s, x, \xi)\, f(s, \xi)\, d\xi\, ds. \qquad (2.4)$$
In general, there is no explicit form for the two Green's functions $\{Z(t, x, \xi),\, Y(t, x, \xi)\}$. However, their constructions and properties are known (see [2], [6], [7], and the references therein). We shall recall the needed results in the next section.
From the classical solution formula (2.4), we expect that the solution $u(t, x)$ to (2.2) formally satisfies
$$u(t, x) \;=\; \int_{\mathbb{R}^d} Z(t, x, \xi)\, u_0(\xi)\, d\xi \;+\; \int_0^t \int_{\mathbb{R}^d} Y(t - s, x, y)\, u(s, y)\, \dot W^H(y)\, dy\, ds. \qquad (2.5)$$
The formal integral $\int_0^t ds \int_{\mathbb{R}^d} Y(t - s, x, y)\, u(s, y)\, \dot W^H(y)\, dy$ can be defined as the Itô-Skorohod stochastic integral $\int_{\mathbb{R}^d} \bigl( \int_0^t Y(t - s, x, y)\, u(s, y)\, ds \bigr)\, W^H(dy)$, as in [5]. Now we can give the following definition.
(3) The equality (2.5) holds in $L^2(\Omega, \mathcal{F}, P)$.

Let us return to the discussion of the two Green's functions. When $\alpha$ is formally set to 1, both reduce to the classical heat kernel (2.6). In this case the stochastic partial differential equation of the form (2.7) was studied in [5]. The mild solution to the above equation (2.7) is proved to exist uniquely under certain conditions on $d$ and $H$. The main result of this paper is to extend the above result of [5] to our equation (2.2).
Theorem 2.2. Let the coefficients $a_{ij}(x), b_i(x)$, $i, j = 1, \cdots, d$, be bounded and continuous, and let them be Hölder continuous with exponent $\gamma$. Let $(a_{ij}(x))$ be uniformly elliptic; namely, there is a constant $a_0 \in (0, \infty)$ such that
$$\sum_{i,j=1}^d a_{ij}(x)\, \xi_i \xi_j \;\ge\; a_0 |\xi|^2 \qquad \text{for all } x, \xi \in \mathbb{R}^d.$$
Let $u_0$ be a bounded continuous function (locally Hölder continuous if $d > 1$). Assume condition (2.10). Then the mild solution to (2.2) exists and is unique in $L^2(\Omega, \mathcal{F}, P)$.

3. Green's functions Z and Y

3.1. Fox's H-function. We shall use the H-function to express the Green's functions $Z$ and $Y$ in Definition 2.1. In this subsection we recall some results about H-functions and the two Green's functions. We shall follow the presentation in [8] (see also [2] and references therein).
Definition 3.1. Let $m, n, p, q$ be integers such that $0 \le m \le q$, $0 \le n \le p$. Let $a_i, b_j \in \mathbb{C}$ be complex numbers and let $\alpha_i, \beta_j$ be positive numbers, $i = 1, 2, \cdots, p$; $j = 1, 2, \cdots, q$. Assume that the set of poles of the gamma functions $\Gamma(b_j + \beta_j s)$ does not intersect that of the gamma functions $\Gamma(1 - a_i - \alpha_i s)$ for all $i = 1, 2, \cdots, p$ and $j = 1, 2, \cdots, q$. The H-function is defined by the following integral:
$$H^{m,n}_{p,q}\Bigl[z \,\Big|\, {(a_i, \alpha_i)_{1,p} \atop (b_j, \beta_j)_{1,q}}\Bigr] \;=\; \frac{1}{2\pi i} \int_L \frac{\prod_{j=1}^m \Gamma(b_j + \beta_j s)\, \prod_{i=1}^n \Gamma(1 - a_i - \alpha_i s)}{\prod_{i=n+1}^p \Gamma(a_i + \alpha_i s)\, \prod_{j=m+1}^q \Gamma(1 - b_j - \beta_j s)}\, z^{-s}\, ds, \qquad (3.1)$$
where an empty product in (3.1) means 1 and $L$ in (3.1) is an infinite contour which separates all the points $b_{jl}$ to the left and all the points $a_{ik}$ to the right of $L$. Moreover, $L$ has one of the following forms: (1) $L = L_{-\infty}$ is a left loop situated in a horizontal strip, starting at the point $-\infty + i\varphi_1$ and terminating at the point $-\infty + i\varphi_2$ for some $-\infty < \varphi_1 < \varphi_2 < \infty$; (2) $L = L_{+\infty}$ is a right loop situated in a horizontal strip, starting at the point $+\infty + i\varphi_1$ and terminating at the point $+\infty + i\varphi_2$ for some $-\infty < \varphi_1 < \varphi_2 < \infty$. For the existence of the integral in (3.1), see [8], Theorem 1.1.
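As a sanity check on this Mellin-Barnes definition, the simplest case $H^{1,0}_{0,1}[z \mid (0, 1)] = e^{-z}$ (Mellin inversion of $\Gamma$) can be verified numerically. The sketch below is our own illustrative code, using a Lanczos approximation for the complex gamma function and integrating along the vertical contour $\mathrm{Re}\, s = 1$:

```python
import cmath
import math

# Lanczos approximation (g = 7) for the Gamma function at complex arguments.
_G = 7
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    if z.real < 0.5:  # reflection formula for the left half-plane
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _C[0] + sum(_C[i] / (z + i) for i in range(1, _G + 2))
    t = z + _G + 0.5
    return cmath.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def h_1_0_0_1(z, c=1.0, half_height=40.0, n=20000):
    """Trapezoid approximation of (1 / 2 pi i) int_L Gamma(s) z^{-s} ds
    along the vertical contour Re s = c, i.e. H^{1,0}_{0,1}[z | (0,1)]."""
    h = 2 * half_height / n
    total = 0.0 + 0.0j
    for k in range(n + 1):
        s = complex(c, -half_height + k * h)
        w = 0.5 if k in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * cgamma(s) * z ** (-s)
    return (total * h / (2 * math.pi)).real
```

Since $\Gamma(s)$ is the Mellin transform of $e^{-z}$, the computed value should agree with $e^{-z}$; the rapid decay of $|\Gamma(c + iy)|$ as $|y| \to \infty$ makes the truncated contour integral very accurate.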

Example 3.2. To compare with the classical case $\alpha = 1$, we consider the case $m = 2$, $n = 0$.

3.2. Green's functions Z and Y when B has constant coefficients. In this subsection let us consider $Z$ and $Y$ when the operator $B$ in (2.2) has the form
$$Bu(x) \;=\; \sum_{i,j=1}^d a_{ij}\, \frac{\partial^2 u(x)}{\partial x_i \partial x_j},$$
where the matrix $A = (a_{ij})$ is positive definite. In this case $Z$ and $Y$ (we call them $Z_0$ and $Y_0$ to distinguish them from the general coefficient case) are given in terms of H-functions as follows.
It is easy to see that in the constant coefficient case both Green's functions are homogeneous in time and space; namely, $Z_0(t, x, \xi) = Z_0(t, x - \xi)$ and $Y_0(t, x, \xi) = Y_0(t, x - \xi)$. In particular, when $\alpha = 1$, the above expression recovers the explicit Gaussian heat kernel associated with $A$, which reduces to (2.6) when $A = I$ is the identity matrix.
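Although $Z_0$ has no elementary closed form in general, for $B = \Delta$ its spatial Fourier transform is classically the Mittag-Leffler function, $\widehat{Z_0}(t, \xi) = E_\alpha(-t^\alpha |\xi|^2)$, which reduces to the Gaussian transform $e^{-t|\xi|^2}$ at $\alpha = 1$. A minimal truncated-series sketch (our own, not a production algorithm):

```python
import math

def mittag_leffler(alpha, z, terms=100):
    """E_alpha(z) = sum_{k >= 0} z^k / Gamma(alpha k + 1), truncated series
    (adequate for moderate |z|; not a production algorithm)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# alpha = 1 gives the exponential series, so E_alpha(-t^alpha |xi|^2)
# reduces to the heat kernel transform exp(-t |xi|^2) in that case.
```

For $\alpha = 1/2$ there is also the closed form $E_{1/2}(-x) = e^{x^2}\operatorname{erfc}(x)$, which provides a convenient numerical check.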
With the above expressions for $Z_0$ and $Y_0$ and the properties of the H-function, one can obtain the following estimates.
Proposition 3.3. Denote
$$p(t, x) \;=\; \exp\Bigl(-\sigma\, t^{-\frac{\alpha}{2-\alpha}}\, |x|^{\frac{2}{2-\alpha}}\Bigr), \qquad (3.3)$$
where $\sigma \in (0, \infty)$ is a constant whose exact value is irrelevant in this paper. Then we have the estimates (3.4), where, for instance, $|Z_0(t, x)| \le C t^{-\frac{\alpha d}{2}}\, p(t, x)$ means that there are a positive constant $C$ and a positive constant $\sigma$ such that the inequality holds. In what follows the positive constants $C$ and $\sigma$ are generic and may differ from one appearance to another.
Proof. Denote $R = |x|^2 / t^{\alpha}$. From [2], Proposition 1, it follows that the estimates hold when $R \le 1$, since in that regime $p(t, x)$ is bounded from below. This proves the inequality (3.4) when $R \le 1$. When $R > 1$, then by [2], Proposition 1, we have $|Z_0(t, x)| \le C t^{-\frac{\alpha d}{2}}\, p(t, x)$. It is clear that this implies the inequality (3.4) when $d = 1$ and $d = 2$. Now assume that $d \ge 3$; the desired bound follows by absorbing the extra polynomial factor of $R$ into the exponential factor $p(t, x)$ (at the cost of a smaller constant $\sigma$). Similarly, we can use [2], Proposition 2 (for the case $d = 1$) and [2], Section 4.2 (for the case $d \ge 2$) to obtain the following estimates for $Y_0(t, x)$.

Proposition 3.4. We use the same notation $p(t, x)$ as defined by (3.3). We have the following estimates: (1) when $d = 1$; (2) when $d \ge 2$; where, for instance, $|Y_0(t, x)| \le C t^{-1}\, p(t, x)$ means that there are a positive constant $C$ and a positive constant $\sigma$ such that the inequality holds.

3.3. Green's functions Z and Y in the general coefficient case. If the coefficients of $B$ are not constant, then the Green's functions $Z$ and $Y$ are more complicated; they may be obtained by a method similar to the Levi parametrix method for parabolic equations. Let $Q(s, y, \xi)$ and $\Phi(s, y, \xi)$ be defined as in [2], and recall that $\gamma$ is the Hölder exponent of the coefficients with respect to the spatial variable $x$. Then the Green's functions $\{Z(t, x, \xi), Y(t, x, \xi)\}$ have the form of a parametrix term plus a correction term, $Z = Z_0 + V_Z$ and $Y = Y_0 + V_Y$. Moreover, the functions $V_Z(t, x, \xi)$ and $V_Y(t, x, \xi)$ satisfy the following estimates.
Here $\gamma_0$ is any number such that $0 < \gamma_0 < \gamma$; in the case $d \ge 3$, the constant $C$ depends on $\gamma_0$.

4. Auxiliary lemmas
To prove our main theorem, we need to dominate certain multiple integrals involving $Y(t, x, \xi)$ and $Z(t, x, \xi)$. Since both $Y(t, x, \xi)$ and $Z(t, x, \xi)$ are complicated, we shall first bound them by $p(t, x - \xi)$, using the estimates for $|Y_0(t, x)|$ and $|V_Y(t, x, \xi)|$. More precisely, we have the following bounds for $Y(t, x, \xi)$.
Proof. We shall prove the lemma case by case. First, when $d = 1$, the claim follows directly from Proposition 3.4. Now we consider the case $d = 2$, where the claim follows from Proposition 3.4 combined with elementary inequalities. We are going to prove the lemma when $d = 3$. From Proposition 3.4 we have a bound on $|Y_0|$; combining this bound with Proposition 3.5 we obtain the claim. We turn to the case $d = 4$. Proposition 3.4 yields that for any $\theta > 0$ a corresponding bound holds. If $\frac{|x-\xi|^2}{t^\alpha} \le 1$, the desired inequality is obviously true. Otherwise we can choose $\theta > 0$ such that $-2\theta \ge -2 + \gamma - 2\gamma_0$; combining the resulting inequality with Proposition 3.5 yields the claim. The lemma is then proved.
The bound (4.1) greatly simplifies the estimation of the multiple integrals that we are going to encounter. However, when the dimension $d$ is greater than or equal to 2, the multiple integrals are still complicated to estimate, and our main technique is to reduce the computation to one-dimensional integrals. This means we shall further bound the right-hand side of the inequality (4.1) by a product of functions of one variable. Before doing so, we denote the exponents of $t$ and $|x - \xi|$ in (4.1) by $\zeta_d$ and $\kappa_d$, respectively. From now on we shall exclusively use
$$p(t, x) \;=\; \exp\Bigl(-\sigma\, t^{-\frac{\alpha}{2-\alpha}}\, |x|^{\frac{2}{2-\alpha}}\Bigr)$$
to denote a function of the single variable $x \in \mathbb{R}$. However, the constant $\sigma$ may be different in different appearances of $p(t, x)$ (for notational simplicity, we omit the explicit dependence of $p(t, x)$ on $\sigma$).
With this notation, Lemma 4.1 yields the following.

Lemma 4.2. The following bound holds for the Green's function $Y$.

Proof. It is easy to see that $p(t, x - \xi)$ can be bounded by the product of one-variable factors $\prod_{i=1}^d p(t, x_i - \xi_i)$ (with a different constant $\sigma$). Combining this with (4.1) yields (4.4), since the exponents of $|x - \xi|$ in (4.1) are negative.
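One way to justify such a product bound (a sketch under our own normalization; the constant $d^{\beta/2-1}$ comes from the power-mean inequality and is ours) is the inequality $|x|^\beta \ge d^{\beta/2-1} \sum_{i=1}^d |x_i|^\beta$ for $\beta = \frac{2}{2-\alpha} \in (1, 2)$, which gives $p(t, x) \le \prod_{i=1}^d \exp\bigl(-\sigma d^{\beta/2-1} t^{-\frac{\alpha}{2-\alpha}} |x_i|^\beta\bigr)$. The following sketch checks it numerically:

```python
import math
import random

def check_product_bound(d=3, alpha=0.5, sigma=1.0, t=0.7, trials=1000):
    """Check |x|^beta >= d^(beta/2 - 1) * sum_i |x_i|^beta for beta = 2/(2 - alpha),
    so that p(t, x) is dominated by a product of one-variable factors."""
    beta = 2.0 / (2.0 - alpha)
    c = d ** (beta / 2.0 - 1.0)
    a = alpha / (2.0 - alpha)
    random.seed(0)
    for _ in range(trials):
        x = [random.uniform(-5.0, 5.0) for _ in range(d)]
        lhs = sum(v * v for v in x) ** (beta / 2.0)  # |x|^beta
        rhs = c * sum(abs(v) ** beta for v in x)
        if lhs < rhs - 1e-12:
            return False
        # consequently p(t, x) <= prod_i exp(-sigma c t^{-a} |x_i|^beta)
        if math.exp(-sigma * t ** (-a) * lhs) > math.exp(-sigma * t ** (-a) * rhs) + 1e-15:
            return False
    return True
```

The inequality itself is an instance of Hölder's inequality with exponents $2/\beta$ and $2/(2-\beta)$, valid for any $\beta \in (0, 2]$.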
Lemma 4.3. There is a constant $C$, depending on $\sigma$, $\alpha$ and $\beta$ but independent of $\xi$ and $s$, such that the stated bound holds.

Proof. Make the substitution $x = ys$; the claim follows, since the two integrals inside the parentheses are finite (and independent of $s$ and $\xi$).
The following Lemma 4.4 is a slight extension of the above lemma.

Proof. We follow the same idea as in the proof of Lemma 4.3. Making the same type of substitution yields the claim. This proves the lemma.
A similar argument works for $I_4$. Combining the estimates for $I_k$, $k = 1, 2, 3, 4$, yields the lemma.
In this case we apply Hölder's inequality to obtain (4.7), where the last inequality follows from Lemma 4.3. Substituting the estimate (4.7) into (4.6) and using Lemma 4.3 again, we obtain the claim in this case.

Case ii): in this case, from Lemma 4.5, part (ii), the corresponding bound follows. Now, since by the condition of the lemma $\theta_1 + 2\theta_2 + 1 > -1$, we can use Hölder's inequality as in the inequality (4.7) in case (i) to conclude.

Case iii): $\theta_1 + \theta_2 = -1$.
In this case, we first use Lemma 4.5, part (i), and then, using Lemma 4.4, we obtain the claim. The lemma is then proved.
Lemma 4.9. Assume that $u_0$ is bounded. Then the relevant quantity is bounded, by the estimates in (3.4) and the substitution $\xi = x + t^{\frac{\alpha}{2}} y$; for example, this applies when $d \ge 3$. Similarly, using the estimate for $V_Z(t, x, \xi)$ given in Proposition 3.5, we can bound $\int_{\mathbb{R}^d} |V_Z(t, x, \xi)|\, d\xi$ by a constant; the case $d = 3$, for example, can be computed directly. The other dimensions can be dealt with in the same way.
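That the substitution $\xi = x + t^{\alpha/2} y$ removes all dependence on $t$ and $x$ can be checked numerically in dimension one. In the sketch below (our own code, with illustrative parameter values) the exponent algebra $t^{-\frac{\alpha}{2-\alpha}} \bigl(t^{\alpha/2}\bigr)^{\frac{2}{2-\alpha}} = 1$ does the work:

```python
import math

def int_p(t, x, alpha=0.6, sigma=1.0, half_width=20.0, n=100000):
    """Trapezoid approximation of int t^{-alpha/2} p(t, x - xi) dxi in d = 1,
    where p(t, z) = exp(-sigma t^{-alpha/(2-alpha)} |z|^{2/(2-alpha)})."""
    a = alpha / (2.0 - alpha)
    b = 2.0 / (2.0 - alpha)
    h = 2.0 * half_width / n
    total = 0.0
    for k in range(n + 1):
        xi = x - half_width + k * h      # window centered at x
        w = 0.5 if k in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * math.exp(-sigma * t ** (-a) * abs(x - xi) ** b)
    return t ** (-alpha / 2.0) * total * h
```

After the substitution, the integral equals $\int_{\mathbb{R}} e^{-\sigma |y|^{2/(2-\alpha)}}\, dy$, so the computed value is the same for every $t$ and $x$.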

5. Proof of the main Theorem 2.2
Changing $t$ to $s$ and $x$ to $y$, the equation (2.5) for the mild solution becomes
$$u(s, y) \;=\; \int_{\mathbb{R}^d} Z(s, y, \xi)\, u_0(\xi)\, d\xi \;+\; \int_0^s \int_{\mathbb{R}^d} Y(s - r, y, z)\, u(r, z)\, W^H(dz)\, dr.$$
Substituting the above into (2.5), we have
$$u(t, x) \;=\; \int_{\mathbb{R}^d} Z(t, x, \xi)\, u_0(\xi)\, d\xi \;+\; \int_0^t \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} Y(t - s, x, y)\, Z(s, y, \xi)\, u_0(\xi)\, d\xi\, W^H(dy)\, ds \;+\; \int_0^t \int_{\mathbb{R}^d} \int_0^s \int_{\mathbb{R}^d} Y(t - s, x, y)\, Y(s - r, y, z)\, u(r, z)\, W^H(dz)\, dr\, W^H(dy)\, ds.$$
We continue to iterate this procedure to obtain the expansion (5.1), where $\Psi_n$ satisfies a recursive relation. To write down the explicit expression for the expansion (5.1), we introduce the kernels $f_n(t, x; x_1, \cdots, x_n)$, where $T_n = \{0 < s_1 < s_2 < \cdots < s_n \le t\}$ and $ds = ds_1\, ds_2 \cdots ds_n$.
With these notations, we see from the above iteration procedure that the expansion holds, where $I_n$ denotes the multiple Itô type integral with respect to $W^H(x)$ (see [5]) and $\tilde f_n(t, x; x_1, \cdots, x_n)$ is the symmetrization of $f_n(t, x; x_1, \cdots, x_n)$ with respect to $x_1, \cdots, x_n$:
$$\tilde f_n(t, x; x_1, \cdots, x_n) \;=\; \frac{1}{n!} \sum_{\sigma \in \sigma(n)} f_n(t, x; x_{\sigma(1)}, \cdots, x_{\sigma(n)}),$$
where $\sigma(n)$ denotes the set of permutations of $(1, 2, \cdots, n)$. The expansion (5.1) with the explicit expression (5.3) for $\Psi_n$ is called the chaos expansion of the solution.
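For orientation, the expansion and its $L^2$ norm take the standard Wiener chaos form (a sketch; here $\mathcal H$ denotes the Hilbert space generated by the covariance of $W^H$, the usual framework of [5]):

```latex
u(t,x) \;=\; \sum_{n=0}^{\infty} I_n\bigl(\tilde f_n(t,x;\cdot)\bigr),
\qquad
\mathbb{E}\,|u(t,x)|^2 \;=\; \sum_{n=0}^{\infty} n!\,\bigl\|\tilde f_n(t,x;\cdot)\bigr\|_{\mathcal H^{\otimes n}}^2 .
```

Square integrability of the solution is thus equivalent to summability of the series on the right, which is what the estimates of Section 4 provide.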
If the equation (2.2) has a square integrable solution, then it has a chaos expansion according to a general theorem of Itô. From the above iteration procedure, it is easy to see that this chaos expansion of the solution is uniquely given by (5.1)-(5.3). This proves the uniqueness.
Applying Lemma 4.2 to the above integral, we have (5.6): $\Theta_n(t, x) \le \frac{C^n}{n!}$.