Asymptotic Behavior on a Linear Self-Attracting Diffusion Driven by Fractional Brownian Motion

Abstract: Let $B^H = \{B^H_t, t \ge 0\}$ be a fractional Brownian motion with Hurst index $\frac{1}{2} \le H < 1$. In this paper, we consider the linear self-attracting diffusion
$$dX^H_t = dB^H_t + \sigma X^H_t\,dt - \theta\left(\int_0^t \big(X^H_t - X^H_s\big)\,ds\right)dt + \nu\,dt$$
with $X^H_0 = 0$, where $\theta > 0$ and $\sigma, \nu \in \mathbb{R}$ are three parameters. The process is an analogue of the self-attracting diffusion of Cranston and Le Jan (Math. Ann. 303 (1995), 87–93). Our main aim is to study its large time behavior. We show that $\big(t - \frac{\sigma}{\theta}\big)^H \big(X^H_t - X^H_\infty\big)$ converges in distribution to a normal random variable as $t$ tends to infinity, and we obtain two strong laws of large numbers associated with the solution $X^H$.


Introduction
In 1995, Cranston and Le Jan [1] introduced the linear self-attracting diffusion
$$dX_t = dB_t - \theta\left(\int_0^t (X_t - X_s)\,ds\right)dt + \nu\,dt, \qquad X_0 = 0, \tag{1}$$
with $\theta > 0$ and $\nu \in \mathbb{R}$, where $B$ is a one-dimensional standard Brownian motion. They showed that the process $X_t$ converges in $L^2$ and almost surely as $t$ tends to infinity. This is a special case of a path-dependent stochastic differential equation. In 2008, inspired by research on fractional Brownian motion as a polymer model, Yan et al. [2] considered the analogue driven by fractional Brownian motion with $\frac12 \le H < 1$, and, moreover, Sun and Yan [3] studied the related parameter estimation. In fact, such path-dependent stochastic differential equations were first developed by Durrett and Rogers [4], who introduced in 1992 the model
$$dX_t = dB_t + \left(\int_0^t f(X_t - X_s)\,ds\right)dt \tag{2}$$
for the shape of a growing polymer (the Brownian polymer), where $B_t$ is a standard Brownian motion on $\mathbb{R}^d$ and $f$ is Lipschitz continuous (called the interaction function). Here $X_t$ corresponds to the location of the end of the polymer at time $t$. The model is a continuous analogue of edge self-interacting random walks (see Pemantle [5]). We may call the solution a Brownian motion interacting with its own past trajectory, i.e., a self-interacting motion. In general, Equation (2) defines a self-interacting diffusion without any assumption on $f$. We call it self-repelling (resp. self-attracting) if $x \cdot f(x) \ge 0$ (resp. $\le 0$) for all $x \in \mathbb{R}^d$; in other words, if the process is more likely to stay away from (resp. come back to) the places it has already visited. In 2002, Benaïm et al. [6] also introduced a self-interacting diffusion whose drift depends on the (convolved) empirical measure. A key difference between these diffusions and Brownian polymers is that the drift term is divided by $t$. It is important to note that, for many choices of $f$, the interaction potential is attractive enough to compare the (slightly modified) diffusion to an Ornstein–Uhlenbeck process, which gives access to its ergodic behavior.
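The dynamics (2) can be discretized with a straightforward Euler scheme in which the path-dependent drift is approximated by a Riemann sum. The following sketch (in dimension one, with an illustrative self-attracting choice $f(x) = -x/2$; the horizon, step count and seed are also illustrative assumptions, not part of the original analysis) shows the idea:

```python
import numpy as np

def simulate_polymer(f, T=10.0, n=1000, seed=0):
    """Euler scheme for the Brownian polymer
    dX_t = dB_t + (int_0^t f(X_t - X_s) ds) dt   (equation (2), d = 1).

    The horizon T, step count n and seed are illustrative choices."""
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.zeros(n + 1)
    for i in range(n):
        # Riemann-sum approximation of the path-dependent drift
        drift = np.sum(f(X[i] - X[:i + 1])) * dt
        X[i + 1] = X[i] + drift * dt + rng.normal(0.0, np.sqrt(dt))
    return X

# a self-attracting choice: x * f(x) <= 0 for all x
X = simulate_polymer(lambda x: -0.5 * x)
```

With a self-repelling choice such as $f(x) = x/2$, the sample paths spread out instead of stabilizing.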
On the other hand, in 2015, Benaïm et al. [7] studied a self-repelling diffusion of the form $X_t = B_t + \cdots$. The long-time behavior of the process (3) is much more complex than that of the Ornstein–Uhlenbeck process, and many phenomena arise that cannot be observed for the Ornstein–Uhlenbeck process. When $\theta > 0$, we can conclude that the asymptotic behavior of the system is very sensitive to the dependence structure of the driving noise; the driving noise is the main source of the complexity of the asymptotic behavior of such processes. Guo et al. [17] considered the model driven by a sub-fractional Brownian motion with $\sigma = 0$. When $\theta < 0$, the asymptotic behavior of the process (3) essentially does not depend on the choice of the noise (in fact, the results of Sun and Yan [15] support this judgment). We also note that such an equation can be written in an equivalent form with $X^H_0 = 0$, which is a special case of a more general equation with $X^H_0 = x$, where $g_1$ and $g_2$ are two Borel measurable functions. We will consider this general equation in a future paper. This paper is organized as follows. In Section 2, we present some preliminaries on fractional Brownian motion and Malliavin calculus. In Section 3, we prove statement (I). Statement (II) is proved in Section 4.

Preliminaries
In this section, we briefly recall some basic definitions and results for fractional Brownian motion. For more on this material we refer to Duncan et al. (2000) [18], Hu (2005) [19], Mishura (2008) [20] and the references therein. Throughout this paper, we assume that $\frac12 \le H < 1$ is arbitrary but fixed, and we let $B^H = \{B^H_t, t \ge 0\}$ be a one-dimensional fBm with Hurst index $H$ defined on a probability space $(\Omega, \mathcal{F}^H, P)$ such that $\mathcal{F}^H$ is the $\sigma$-field generated by $B^H$. A fractional Brownian motion (fBm) $B^H = \{B^H_t, t \ge 0\}$ with Hurst index $H$ is a mean zero Gaussian process such that $B^H_0 = 0$ and
$$E\big[B^H_t B^H_s\big] = \frac{1}{2}\left(t^{2H} + s^{2H} - |t-s|^{2H}\right)$$
for all $t, s \ge 0$. For $H = \frac12$, $B^H$ coincides with the standard Brownian motion $B$. $B^H$ is neither a semimartingale nor a Markov process unless $H = \frac12$, so many of the powerful techniques of stochastic analysis are not available when dealing with $B^H$.
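Since the covariance above fully determines the law of fBm on any finite grid, one can sample exact fBm values by a Cholesky factorization of the covariance matrix. The following sketch illustrates this (the grid, the Hurst index $H = 0.75$ and the jitter constant are illustrative choices):

```python
import numpy as np

def fbm_cov(t, s, H):
    """Covariance E[B^H_t B^H_s] = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2."""
    return 0.5 * (t**(2 * H) + s**(2 * H) - abs(t - s)**(2 * H))

def sample_fbm(T=1.0, n=256, H=0.75, seed=0):
    """Exact joint sampling of (B^H_{t_1}, ..., B^H_{t_n}) on a uniform grid
    via Cholesky factorization; grid size, H and jitter are illustrative."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    C = np.array([[fbm_cov(a, b, H) for b in t] for a in t])
    L = np.linalg.cholesky(C + 1e-12 * np.eye(n))  # tiny jitter for stability
    return t, L @ rng.standard_normal(n)
```

For $H = \frac12$ the covariance reduces to $\min(t, s)$, recovering standard Brownian motion.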
Let $\mathcal{H}$ be the completion of the linear space $\mathcal{E}$ generated by the indicator functions $1_{[0,t]}$, $t \in [0,T]$, with respect to the inner product
$$\langle 1_{[0,s]}, 1_{[0,t]} \rangle_{\mathcal{H}} = \frac{1}{2}\left(t^{2H} + s^{2H} - |t-s|^{2H}\right)$$
for all $s, t \in [0,T]$. When $\frac12 < H < 1$, we have
$$\langle \varphi, \psi \rangle_{\mathcal{H}} = H(2H-1)\int_0^T\!\!\int_0^T \varphi(t)\psi(s)\,|t-s|^{2H-2}\,ds\,dt$$
for $\varphi, \psi \in \mathcal{E}$. The elements of the Hilbert space $\mathcal{H}$ may be not functions but distributions of negative order (see, for instance, Pipiras and Taqqu (2001)).
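As a numerical sanity check, for $\frac12 < H < 1$ the inner product $\langle 1_{[0,a]}, 1_{[0,b]}\rangle_{\mathcal{H}} = H(2H-1)\int_0^a\!\int_0^b |u-v|^{2H-2}\,dv\,du$ should reproduce the covariance $E[B^H_a B^H_b]$. The following sketch verifies this (the endpoints $a, b$, the value $H = 0.75$ and the quadrature resolution are illustrative; the inner $v$-integral is evaluated in closed form to avoid the diagonal singularity):

```python
import numpy as np

H = 0.75          # illustrative Hurst index, 1/2 < H < 1
a, b = 1.0, 0.6   # illustrative interval endpoints

def R(t, s):
    """fBm covariance R_H(t, s)."""
    return 0.5 * (t**(2 * H) + s**(2 * H) - abs(t - s)**(2 * H))

def inner_v(u):
    """Closed form of int_0^b |u - v|^{2H-2} dv for u >= 0 (H > 1/2)."""
    p = 2 * H - 1
    if u <= b:
        return (u**p + (b - u)**p) / p
    return (u**p - (u - b)**p) / p

# midpoint rule in u for <1_[0,a], 1_[0,b]>_H
n = 4000
u = (np.arange(n) + 0.5) * (a / n)
lhs = H * (2 * H - 1) * sum(inner_v(x) for x in u) * (a / n)
rhs = R(a, b)  # the inner product must reproduce the covariance
```

The agreement of `lhs` and `rhs` reflects exactly the isometry property that extends the map $1_{[0,t]} \mapsto B^H_t$ to the Wiener integral below.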
In order to avoid unnecessary trouble, we introduce the subspace $|\mathcal{H}|$ of $\mathcal{H}$ of measurable functions $\varphi$ on $[0,T]$ such that
$$\|\varphi\|^2_{|\mathcal{H}|} := H(2H-1)\int_0^T\!\!\int_0^T |\varphi(t)|\,|\varphi(s)|\,|t-s|^{2H-2}\,ds\,dt < \infty.$$
It is not difficult to show that $|\mathcal{H}|$ is a Banach space with the norm $\|\cdot\|_{|\mathcal{H}|}$ and that $\mathcal{E}$ is dense in $|\mathcal{H}|$. Moreover, the above expression for $\langle \varphi, \psi \rangle_{\mathcal{H}}$ remains valid for all $\varphi, \psi \in |\mathcal{H}|$, and we also have the embeddings $L^{1/H}([0,T]) \subset |\mathcal{H}| \subset \mathcal{H}$.

As usual, we define the Wiener integral $\varphi \mapsto B^H(\varphi)$ as the limit in probability of Riemann sums; it is a linear isometry between $\mathcal{H}$ and the Gaussian space spanned by $B^H$, and it can be understood as an extension of the mapping $1_{[0,t]} \mapsto B^H_t$. For $\varphi, \psi \in \mathcal{H}$, $B^H(\varphi)$ is a well-defined mean zero Gaussian random variable such that
$$E\big[B^H(\varphi)\,B^H(\psi)\big] = \langle \varphi, \psi \rangle_{\mathcal{H}}.$$
Thus, we regard $\int_0^t \varphi(s)\,dB^H_s := B^H(\varphi 1_{[0,t]})$ in (7) as the indefinite Wiener integral.

Consider the set $\mathcal{S}$ of smooth functionals of the form
$$F = f\big(B^H(\varphi_1), \ldots, B^H(\varphi_n)\big), \tag{8}$$
where $f \in C^\infty_b(\mathbb{R}^n)$ ($f$ and all of its derivatives are bounded) and $\varphi_i \in \mathcal{H}$. The derivative operator $D^H$ (the Malliavin derivative) of a functional $F$ of the form (8) is defined as
$$D^H F = \sum_{i=1}^n \frac{\partial f}{\partial x_i}\big(B^H(\varphi_1), \ldots, B^H(\varphi_n)\big)\,\varphi_i.$$
The derivative operator $D^H$ is a closable operator from $L^2(\Omega)$ into $L^2(\Omega; \mathcal{H})$. We denote by $\mathbb{D}^{1,2}$ the closure of $\mathcal{S}$ with respect to the norm
$$\|F\|_{1,2} := \sqrt{E|F|^2 + E\|D^H F\|^2_{\mathcal{H}}}.$$
The divergence integral $\delta^H$ is the adjoint of the derivative operator $D^H$. That is, a random variable $u \in L^2(\Omega; \mathcal{H})$ belongs to the domain of the divergence operator $\delta^H$, denoted by $\mathrm{Dom}(\delta^H)$, if
$$E\big|\langle D^H F, u \rangle_{\mathcal{H}}\big| \le c\,\|F\|_{L^2(\Omega)}$$
for every $F \in \mathcal{S}$. In this case, $\delta^H(u)$ is defined by the duality relationship
$$E\big[F\,\delta^H(u)\big] = E\big[\langle D^H F, u \rangle_{\mathcal{H}}\big]$$
for every $F \in \mathbb{D}^{1,2}$. We have $\mathbb{D}^{1,2}(\mathcal{H}) \subset \mathrm{Dom}(\delta^H)$, together with the corresponding second-moment estimate for $\delta^H(u)$, for all $u \in \mathbb{D}^{1,2}(\mathcal{H})$. We will use the notation
$$\delta^H(u) = \int_0^T u_s\,\delta B^H_s$$
to express the Skorohod integral of a process $u$, and the indefinite Skorohod integral is defined as $\int_0^t u_s\,\delta B^H_s = \delta^H(u 1_{[0,t]})$.

Finally, we recall that the fBm $t \mapsto B^H_t$ admits almost surely a bounded $p$-variation on any finite interval for every $p > 1/H$. As an immediate consequence, one can define the Young integral $\int_0^t u_s\,dB^H_s$ as the limit in probability of Riemann sums, provided the process $u$ is of bounded $q$-variation on any finite interval with $q > 1$ and $\frac{1}{p} + \frac{1}{q} > 1$.

Large Time Behaviors
The object of this section is to state and prove the large time behavior of the linear self-attracting diffusion
$$dX^H_t = dB^H_t + \sigma X^H_t\,dt - \theta\left(\int_0^t \big(X^H_t - X^H_s\big)\,ds\right)dt + \nu\,dt, \qquad X^H_0 = 0, \tag{9}$$
with $\theta > 0$ and $\sigma, \nu \in \mathbb{R}$, where $B^H_t$ is a fractional Brownian motion with $\frac12 \le H < 1$. For simplicity, throughout this paper $C$ stands for a positive constant that may depend on $H$, $\theta$, $\sigma$, $\nu$ and whose value may differ from line to line; the same convention applies to $c$.

Proposition 1. Equation (9) admits a unique solution, and the solution can be expressed as a Wiener integral against a deterministic kernel $h_{\theta,\sigma}(t,s)$, $s, t \ge 0$.
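Although the proofs below rest on the explicit kernel representation, the solution of (9) can also be approximated directly by an Euler scheme; note that $\int_0^t (X^H_t - X^H_s)\,ds = t X^H_t - \int_0^t X^H_s\,ds$, so the drift can be updated in $O(1)$ per step. The following sketch uses Cholesky-based fBm sampling; the grid size, horizon, parameter values and seed are all illustrative assumptions:

```python
import numpy as np

def simulate_linear_sad(H=0.75, theta=1.0, sigma=0.2, nu=0.0,
                        T=5.0, n=400, seed=1):
    """Euler scheme for equation (9):
    dX_t = dB^H_t + sigma*X_t dt - theta*(int_0^t (X_t - X_s) ds) dt + nu dt.

    Uses int_0^t (X_t - X_s) ds = t*X_t - int_0^t X_s ds; all parameter
    values are illustrative."""
    rng = np.random.default_rng(seed)
    dt = T / n
    grid = np.linspace(dt, T, n)
    # exact fBm values on the grid via Cholesky, then their increments
    C = 0.5 * (grid[:, None]**(2 * H) + grid[None, :]**(2 * H)
               - np.abs(grid[:, None] - grid[None, :])**(2 * H))
    B = np.linalg.cholesky(C + 1e-12 * np.eye(n)) @ rng.standard_normal(n)
    dB = np.diff(np.concatenate(([0.0], B)))
    X = np.zeros(n + 1)
    S = 0.0  # running Riemann sum of int_0^{t_i} X_s ds
    for i in range(n):
        drift = sigma * X[i] - theta * (i * dt * X[i] - S) + nu
        X[i + 1] = X[i] + drift * dt + dB[i]
        S += X[i] * dt
    return X

X = simulate_linear_sad()
```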
Proof. We can show the result by integration by parts. Alternatively, we can regard (9) as a deterministic equation, since the diffusion coefficient is constant, and solve it by the variation of constants method. In fact, Equation (9) is equivalent to (12). Through the variation of constants method, we can assume that the stated process is the solution of Equation (12) for all $t \ge 0$, which, together with $X^H_0 = 0$ and (13), implies the representation.

Lemma 1. Let $\theta > 0$. Then the function $(t,s) \mapsto h_{\theta,\sigma}(t,s)$ admits the following properties:
(1) the limit $\lim_{t \to \infty} h_{\theta,\sigma}(t,s)$ exists for all $s \ge 0$;
(2) the stated estimate holds for all $t \ge s \ge 0$;
(3) the stated estimate holds for all $t > 0$.
Proof. Statement (1) is trivial. For statement (2), a direct computation gives the bound; one can show the other statements similarly.

Lemma 2.
Let $\theta > 0$ and denote, for all $t \ge 0$, the quantities displayed above. Then the limit as $t \to \infty$ exists.

Proof. This is a simple calculus exercise. In fact, we have the stated estimate for all $t \ge 0$ and $\theta > 0$. Thus, the Lemma follows from the convergences displayed above. This completes the proof.
Proof. Given $\theta > 0$, integration by parts implies the claim. This completes the proof.

Lemma 4. The limit displayed above is finite and non-zero.
Proof. By continuity, the Lemma is equivalent to the corresponding limit statement for $\frac12 < H < 1$, $\theta > 0$ and $\sigma \in \mathbb{R}$. Applying L'Hôpital's rule and making the substitution indicated above, it follows from the dominated convergence theorem that the limit holds for all $\frac12 < H < 1$. This completes the proof.

Lemma 5. The stated estimate holds for all $0 < s < t$.
Proof. Let $\frac12 < H < 1$; this is a simple calculus exercise. In fact, using the continuity of the functions $t \mapsto \int_t^\infty e^{-\frac12\theta x^2}\,dx$ and $t \mapsto \frac{1}{t \wedge 1}\,e^{-\frac12\theta t^2}$, we obtain the inequality for all $t > 0$. As an immediate consequence, the corresponding bound holds for all $t > 0$. We distinguish three cases.

Case I: $t > s \ge \frac{\sigma}{\theta}$. It follows from Lemma 4 that the estimate holds for all $t > s \ge \frac{\sigma}{\theta}$.

Case II: $\sigma > 0$ and $0 < s < t \le \frac{\sigma}{\theta}$. We have the estimate for all $0 < s < t \le \frac{\sigma}{\theta}$.

Case III: $\sigma > 0$ and $0 < s < \frac{\sigma}{\theta} \le t$. According to the inequality (22), the estimate holds for all $t \ge \frac{\sigma}{\theta} > s > 0$. Similarly to Case I, we also have the corresponding bound for all $t \ge \frac{\sigma}{\theta} > s > 0$.

Thus, we complete the proof for $\frac12 < H < 1$. The case $H = \frac12$ is obtained similarly.
Theorem 1. Let $\theta > 0$ and $\frac12 \le H < 1$. Then the solution $X^H_t$ of (9) converges to a random variable, denoted by $X^H_\infty$, in $L^2$ and almost surely as $t$ tends to infinity.
Proof. Let $\theta > 0$. We first consider the convergence in $L^2$, using the decomposition (25) for all $t \ge 0$ and $\frac12 \le H < 1$. When $H = \frac12$, by the asymptotics stated above, we have the two convergences as $t$ tends to infinity. Combining this with (25) and Lemma 2, we see that $X^{1/2}_t$ converges to $X^{1/2}_\infty$ in $L^2$ as $t$ tends to infinity. When $\frac12 < H < 1$, we also have the analogous convergence as $t$ tends to infinity; combining this with Lemma 2, we see that $X^H_t$ converges to $X^H_\infty$ in $L^2$ for all $\frac12 < H < 1$ as $t$ tends to infinity.

We now prove the convergence with probability one. According to the decomposition (25) and Lemma 2, we need to show that the convergence (29) holds almost surely as $t$ tends to infinity. First, on the grounds of (16), Lemma 3 and the fact that the stated convergence holds almost surely for all $\frac12 \le H < 1$ as $T$ tends to infinity, we obtain the almost sure convergence for all $\frac12 \le H < 1$ as $t$ tends to infinity. Now, we consider the convergence (29). When $\frac12 < H < 1$, for integers $n, k$ with $0 \le k < n$, we set $M^H_{n,k} = Q^H_{n + \frac{k}{n}}$. Then $M^H_{n,k}$ is Gaussian, and we have the moment bound for $n \ge (0 \vee \frac{\sigma}{\theta})$ and the tail bound for all $\varepsilon > 0$ and $n \ge (0 \vee \frac{\sigma}{\theta})$. Furthermore, for $s \in (0,1)$, we denote $R^{n,k}_s = Q^H_{n + \frac{k+s}{n}}$. Then $\{R^{n,k}_s, s \in (0,1)\}$ is a Gaussian process for any $n$ and $k$, and, on the basis of Lemma 5, we have the corresponding estimate for all $\varepsilon > 0$ and $n \ge (0 \vee \frac{\sigma}{\theta})$. It follows from Slepian's lemma and Markov's inequality that the tail probability bound holds for any $\varepsilon > 0$, $n \ge (0 \vee \frac{\sigma}{\theta})$ and $\gamma \ge 1$. Combining this with the Borel–Cantelli lemma and the inclusion relation for all $k, n \ge (0 \vee \frac{\sigma}{\theta})$, we conclude that the convergence (29) holds almost surely. The case $H = \frac12$ can be checked similarly. This completes the proof.

As a consequence, the stated convergences also hold in $L^2$ and almost surely as $t$ tends to infinity. Moreover, the solution $X^H_t$ admits the following estimate.

Lemma 6. Let $\frac12 \le H < 1$ and $\theta > 0$. Then the solution $X^H_t$ satisfies the stated bound.

Proof. Let $\frac12 < H < 1$. We have the decomposition for all $0 < s < t$. Clearly, on the basis of (15), we have the first estimate for all $0 < s < t$. The proof of Lemma 5 implies the second estimate for all $0 < s < t$.
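The stabilization asserted in Theorem 1 is easy to observe numerically in the classical case $H = \frac12$, $\sigma = \nu = 0$, again using the identity $\int_0^t (X_t - X_s)\,ds = t X_t - \int_0^t X_s\,ds$ to keep the Euler drift computable in $O(1)$ per step. This is an informal illustration with illustrative parameters and seed, not part of the proof:

```python
import numpy as np

def clj_path(theta=1.0, T=50.0, n=5000, seed=3):
    """Euler scheme for the Cranston-Le Jan diffusion (H = 1/2, sigma = nu = 0):
    dX_t = dB_t - theta*(int_0^t (X_t - X_s) ds) dt.

    Horizon, step count and seed are illustrative."""
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.zeros(n + 1)
    S = 0.0  # running Riemann sum of int_0^{t_i} X_s ds
    for i in range(n):
        drift = -theta * (i * dt * X[i] - S)
        X[i + 1] = X[i] + drift * dt + rng.normal(0.0, np.sqrt(dt))
        S += X[i] * dt
    return X

X = clj_path()
```

For large $t$ the restoring force has strength of order $\theta t$, so the path hovers near its running average, consistent with the almost sure convergence of Theorem 1.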
Finally, we have the last estimate for all $0 < s < t$. Thus, we complete the proof for $\frac12 < H < 1$. The case $H = \frac12$ is obtained similarly.
Suppose the process displayed above is the solution of (41) with $C^H_0 = Y^H_0 = 0$. Then, according to (41), we have the identity for all $t \ge 0$, which implies the decomposition for all $\frac{\sigma}{\theta} \le s < t$. It follows from the elementary fact, valid for $x \ge 0$ and $\beta > -1$, that the bound on $\Psi_1(H;t,s)$ holds. For the term $\Psi_2(H;t,s)$, according to Lemma 4, the behavior of $\int_t^\infty e^{-\theta s^2 + 2\sigma s}\,ds$, and the inequalities $e^{-x} \le \frac{1}{1+x} \le \frac{1}{x}$ for $x > 0$, we obtain the estimate for all $t > s \ge \frac{\sigma}{\theta}$, $\sigma \le 0$ and $0 \le \gamma \le 2-2H$.

Case II: $\sigma > 0$ and $0 < s < t \le \frac{\sigma}{\theta}$. From the forms of $\Psi_1(H;t,s)$ and $\Psi_2(H;t,s)$, it is easy to see that they are bounded uniformly in $t$ and $s$.
Case III: $\sigma > 0$ and $0 < s < \frac{\sigma}{\theta} \le t$. Clearly, $\Psi_2(H;t,s)$ is bounded uniformly in $t$ and $s$. For $\Psi_1(H;t,s)$, based on the estimates above, it is bounded uniformly in $t$ and $s$ as well. Thus, we have completed the proof.
Theorem 3. Let $\frac12 \le H < 1$ and $\theta > 0$. Then we have the convergence (45) in $L^2$ and almost surely as $T$ tends to infinity.
Let $\frac12 \le H < 1$ and define $\eta_t$ as above for all $t \ge 0$. Then, according to $Y^H_t = \eta_t + \Delta_t$ and $\lim_{T\to\infty} \frac{1}{T^{3-2H}}\int_0^T (\Delta_t)^2\,dt = 0$, the convergence (45) is equivalent to (46) in $L^2$ and almost surely as $T$ tends to infinity. We now verify that the convergence (46) holds in $L^2$ and almost surely, respectively.
Proof of the $L^2$-convergence. We first show that the convergence (46) holds in $L^2$. By the identity stated above for all $t, s > 0$, this is equivalent to (47) as $T$ tends to infinity. We now check the convergence (47) in four cases. Case 1: $H = \frac12$. On the basis of the asymptotics above, the claim follows as $T$ tends to infinity.