Article

Quasi-Likelihood Estimation in the Fractional Black–Scholes Model

1 Glorious Sun School of Business and Management, Donghua University, Shanghai 200051, China
2 School of Mathematics and Statistics, Donghua University, Shanghai 201620, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(18), 2984; https://doi.org/10.3390/math13182984
Submission received: 21 July 2025 / Revised: 10 September 2025 / Accepted: 12 September 2025 / Published: 15 September 2025
(This article belongs to the Section D1: Probability and Statistics)

Abstract

In this paper, we consider parameter estimation for the fractional Black–Scholes model of the form $S_t^H = S_0^H + \mu\int_0^t S_s^H\,ds + \sigma\int_0^t S_s^H\,dB_s^H$, where $\sigma>0$ and $\mu\in\mathbb{R}$ are the parameters to be estimated. Here, $B^H=\{B_t^H,\,t\ge0\}$ denotes a fractional Brownian motion with Hurst index $0<H<1$. Using the quasi-likelihood method, we estimate the parameters $\mu$ and $\sigma$ based on observations taken at the discrete time points $\{t_i=ih,\,i=0,1,2,\dots,n\}$. Under the conditions $h=h(n)\to0$, $nh\to\infty$, and $nh^{1+\gamma}\to1$ for some $\gamma>0$, as $n\to\infty$, the asymptotic properties of the quasi-likelihood estimators are established. The analysis further reveals how the rate at which $nh^{1+\gamma}-1$ approaches zero affects the accuracy of estimation. To validate the effectiveness of our method, we conduct numerical simulations using real-world stock market data, demonstrating the practical applicability of the proposed estimation framework.

1. Introduction

In classical financial theory, the market is assumed to be arbitrage-free and complete, and so standard Brownian motion is often used as the driving noise to characterize the prices of financial derivatives. However, in real-world financial markets, prices often exhibit long-range dependence and non-stationarity, which are inconsistent with the characteristics of standard Brownian motion. Consequently, many authors have proposed using fractional Brownian motion (fBm), with its simple structure and memory properties, to construct market models (see, for example, Mandelbrot and Van Ness [1]). Unfortunately, starting with Rogers [2], there has been an ongoing dispute about the proper use of fractional Brownian motion in option pricing theory. A troublesome problem arises because fBm is not a semimartingale, and therefore "no-arbitrage pricing" cannot be used. Although this is a consensus, the consequences are not clear. The orthodox explanation is simple: fBm is not a suitable candidate for the price process. But, as shown by Cheridito [3], assuming that market participants cannot react immediately, any theoretical arbitrage opportunities will disappear. On the other hand, in 2003, Hu and Øksendal [4] used the Wick–Itô-type integral to define a fractional market and showed that the market was arbitrage-free and complete. In that case, the prices of financial derivatives satisfy the following fractional Black–Scholes model:
$$dS_t^H = \mu S_t^H\,dt + \sigma S_t^H\,dB_t^H \tag{1}$$
with $S_0^H>0$, where $B^H=\{B_t^H,\,t\ge0\}$ is a fractional Brownian motion with Hurst index $\frac12\le H<1$, $\mu\in\mathbb{R}$ and $\sigma>0$ are two parameters, and the integral $\int_0^tS_s^H\,dB_s^H$ denotes the fractional Itô integral (Skorohod integral). For further studies on fractional Brownian motion in Black–Scholes models, refer to the works by Bender and Elliott [5], Biagini [6], Björk and Hult [7], Cheridito [3], Elliott and Chan [8], Greene and Fielitz [9], Necula [10], Lo [11], Mishura [12], Rogers [2], Izaddine [13], and additional references cited therein.
In this paper, we consider the application of the quasi-likelihood method to continuous stochastic systems. Our goal is to construct quasi-likelihood estimators for the parameters $\mu$ and $\sigma^2$ in Equation (1) and to establish their asymptotic behavior. As is well known, there are many papers on parameter estimation for stochastic differential equations, but the use of the quasi-likelihood method for parameter estimation in stochastic differential equations without independent increments has not, to our knowledge, been treated so far. Clearly, the solution of (1) does not have independent increments unless $H=\frac12$. We briefly describe the quasi-likelihood method as follows.
Let $X=\{X_t,\,t\ge0\}$ be a stochastic process whose distribution contains unknown parameters $\theta\in\Theta\subset\mathbb{R}^k$ with $k\ge1$. Assume that $X_{t_1},X_{t_2},\dots,X_{t_n}$ are samples extracted from $X$, and that $x\mapsto f_j(x)$ is the probability function (e.g., density function) of the increment $X_{t_j}-X_{t_{j-1}}$ for $j\in\{1,2,\dots,n\}$. Since the process $X$ generally does not have independent increments, the function
$$L(\theta)=\prod_{j=1}^nf_j\left(X_{t_j}-X_{t_{j-1}}\right)$$
is generally not a likelihood function. However, we can still use the usual method to obtain an estimator, which is called a quasi-likelihood estimator.
Let $B^H=\{B_t^H,\,t\ge0\}$ be a fractional Brownian motion with Hurst index $H\in(0,1)$ defined on the probability space $(\Omega,\mathcal F,P,(\mathcal F_t))$. Consider the fractional Black–Scholes model
$$S_t^H = S_0^H + \mu\int_0^tS_s^H\,ds + \sigma\int_0^tS_s^H\,dB_s^H \tag{2}$$
with $S_0^H>0$, where $0<H<1$ and the stochastic integral is the fractional Itô integral [14]. By using the Itô formula, we get
$$S_t^H = S_0^H\exp\left\{\mu t-\frac12\sigma^2t^{2H}+\sigma B_t^H\right\} \tag{3}$$
for $t\ge0$, which is called geometric fractional Brownian motion (gfBm).
Throughout this paper, for simplicity, we assume that $H$ is known. Denote $f(t)=\mu t-\frac12\sigma^2t^{2H}$ ($t\ge0$) and
$$Y_t^H := \log S_t^H-\log S_0^H = \sigma B_t^H+\mu t-\frac12\sigma^2t^{2H} = \sigma B_t^H+f(t).$$
Now, let $H\in(0,1)$ be known, and let the gfBm $S^H=\{S_t^H,\,t\ge0\}$ be observed at the discrete time instants $\{t_i=ih,\,i=0,1,2,\dots,n\}$ satisfying the following conditions:
(C1)
$h=h(n)\to0$ and $t_n=nh\to+\infty$ as $n\to\infty$.
(C2)
There exists $\gamma>0$ such that $nh^{1+\gamma}\to1$ as $n\to\infty$.
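As a concrete example of a sampling scheme satisfying (C1) and (C2) (our own illustration, not a scheme prescribed in what follows), take
$$h(n)=n^{-1/(1+\gamma)}\quad\Longrightarrow\quad nh^{1+\gamma}=1,\qquad nh=n^{\gamma/(1+\gamma)}\to+\infty,\qquad h\to0\quad(n\to\infty).$$
A small $\gamma$ corresponds to a slowly growing observation window $nh$ with a relatively fine mesh, while a large $\gamma$ enlarges the window faster at the cost of a coarser relative mesh.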
We obtain a quasi-likelihood function of the parameters $\mu$ and $\sigma^2$ as follows:
$$L_1(\mu,\sigma^2) := \prod_{i=1}^nf_i\left(Y_{t_i}-Y_{t_{i-1}}\right) = \prod_{i=1}^n\frac{1}{\sigma\sqrt{2\pi h^{2H}}}\exp\left\{-\frac{1}{2\sigma^2h^{2H}}\left(Y_{t_i}-Y_{t_{i-1}}-\mu h+\frac12\sigma^2\left(t_i^{2H}-t_{i-1}^{2H}\right)\right)^2\right\},$$
where $f_i(\cdot)$ is the density of the random variable $Y_{t_i}-Y_{t_{i-1}}$. Then, the logarithmic quasi-likelihood function is given by
$$\log L_1(\mu,\theta) = -\frac n2\log\theta-\frac n2\log\left(2\pi h^{2H}\right)-\frac{1}{2\theta h^{2H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\mu h+\frac12\theta\,\hat t_i\right)^2$$
with $\theta=\sigma^2$, where $\hat t_i=t_i^{2H}-t_{i-1}^{2H}$. By using the quasi-likelihood function, we get that the estimators $\hat\mu_n$ and $\hat\theta_n$ of $\mu$ and $\theta=\sigma^2$ satisfy the equations
$$\hat\mu_n=\frac{1}{nh}\sum_{i=1}^n\left(Y_{t_i}-Y_{t_{i-1}}+\frac12\hat\theta_n\hat t_i\right),\qquad \hat\theta_n=\frac{1}{\sum_{i=1}^n(\hat t_i)^2}\left(-2nh^{2H}+2\sqrt{n^2h^{4H}+\sum_{i=1}^n(\hat t_i)^2\sum_{i=1}^n\left(Y_{t_i}-Y_{t_{i-1}}-\hat\mu_n h\right)^2}\right). \tag{4}$$
When $H\in(0,\frac12)\cup(\frac12,1)$, by solving the above system of equations, we get the estimators of $\mu$ and $\theta=\sigma^2$ as follows:
$$\hat\mu_n=\frac{1}{nh}Y_{t_n}^H+\frac{\beta_n}{\rho_n}\left(-2nh^{2H}+2\sqrt{n^2h^{4H}+\rho_n\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}^H\right)^2}\right),\qquad \hat\theta_n=\frac{1}{\rho_n}\left(-2nh^{2H}+2\sqrt{n^2h^{4H}+\rho_n\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}^H\right)^2}\right), \tag{5}$$
where $\beta_n=\frac12(nh)^{2H-1}$ and $\rho_n=\sum_{i=1}^n(\hat t_i)^2-n^{4H-1}h^{4H}$ for every $n\ge1$. When $H=\frac12$, we have $\rho_n=0$ and the random variables
$$Y_{t_i}^H-Y_{t_{i-1}}^H\sim N\left(\left(\mu-\tfrac12\sigma^2\right)h,\ \sigma^2h\right),\qquad i=1,2,\dots,n,$$
are independent and identically distributed; the above logarithmic quasi-likelihood function is then a classical log-likelihood function, and we have
$$\hat\mu_n=\frac{1}{nh}Y_{t_n}^H+\frac{\beta_n}{nh}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H\right)^2-\frac{\beta_n}{n^2h}\left(Y_{t_n}^H\right)^2,\qquad \hat\theta_n=\frac{1}{nh}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H\right)^2-\frac{1}{n^2h}\left(Y_{t_n}^H\right)^2,$$
and the asymptotic behavior of the two estimators can be easily established. In the discussion later in this paper, unless otherwise stated, it is therefore assumed that $H\ne\frac12$.
Our study focuses on the asymptotic properties of the two estimators. Given the Gaussian properties of the sample, both estimators can be expressed through quadratic variations, which facilitates the derivation and simplification of their asymptotic behavior via fractional Brownian motion. To fully characterize this behavior, we rely on key properties of fractional Brownian motion, which not only underpin the theoretical understanding of complex stochastic processes but also provide a foundation for applying quasi-likelihood methods in parameter estimation.
The structure of this paper is as follows. In Section 2, we briefly describe the basic properties of fractional Brownian motion. In Sections 3 and 4, we discuss the strong consistency and asymptotic normality of the estimator $\hat\theta_n$ and analyze its asymptotic behavior in the cases where the parameter $\mu$ is known and unknown. To prove these two asymptotic results, we rely on two key properties of fractional Brownian motion. Although these results have been proven for a finite observation interval, they also hold when the observation length $nh$ tends to infinity. In Section 5, we consider the asymptotic behavior of the estimator $\hat\mu_n$. In Section 6, we provide numerical verification and an empirical analysis of the estimators $\hat\mu_n$ and $\hat\theta_n$. In Section 7, we conclude that the proposed quasi-likelihood method for fractional Brownian motion performs well both theoretically and empirically, offering a practical framework for financial parameter estimation.

2. Preliminaries

In this section, we briefly recall some basic results on fractional Brownian motion. For more on this material, we refer to Bender [15], Biagini et al. [6], Cheridito and Nualart [16], Gradinaru et al. [17], Hu [4], Mishura [12], Nourdin [18], Nualart [19], Tudor [20], and the references therein.
A zero-mean Gaussian process $B^H=\{B_t^H,\,0\le t\le T\}$ defined on a complete probability space $(\Omega,\mathcal F,P,(\mathcal F_t))$ is called fBm with Hurst index $H\in(0,1)$ provided that $B_0^H=0$ and
$$E\left[B_t^HB_s^H\right]=\frac12\left(t^{2H}+s^{2H}-|t-s|^{2H}\right)$$
for $t,s\ge0$. Let $\mathcal H$ be the completion of the linear space $\mathcal E$ generated by the indicator functions $1_{[0,t]}$, $t\in[0,T]$, with respect to the inner product
$$\left\langle1_{[0,s]},1_{[0,t]}\right\rangle_{\mathcal H}=\frac12\left(t^{2H}+s^{2H}-|t-s|^{2H}\right).$$
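The covariance function above determines the dependence structure of the increments of $B^H$. As a quick numerical check (our own illustration, not part of the development), the autocovariance $\rho_H(k)=\frac12\left((k+1)^{2H}+(k-1)^{2H}-2k^{2H}\right)$ of unit-step increments is positive for every lag when $H>\frac12$ (long-range persistence) and negative when $H<\frac12$ (anti-persistence):

```python
import numpy as np

def increment_autocov(H, k):
    """Autocovariance E[(B_{i+1}-B_i)(B_{i+k+1}-B_{i+k})] of unit-step fBm
    increments, derived from E[B_t B_s] = (t^2H + s^2H - |t-s|^2H)/2."""
    k = np.asarray(k, dtype=float)
    return 0.5 * ((k + 1)**(2 * H) + np.abs(k - 1)**(2 * H) - 2 * k**(2 * H))

lags = np.arange(1, 6)
rho_persistent = increment_autocov(0.7, lags)      # H > 1/2: all positive
rho_antipersistent = increment_autocov(0.3, lags)  # H < 1/2: all negative
```

For $H=\frac12$ the autocovariance vanishes at every positive lag, recovering the independent increments of standard Brownian motion.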
When $H=\frac12$, we know that $\mathcal H=L^2([0,T])$, and when $\frac12<H<1$, we have
$$\|\varphi\|_{\mathcal H}^2=H(2H-1)\int_0^T\int_0^T\varphi(t)\varphi(s)|t-s|^{2H-2}\,ds\,dt$$
for all $\varphi\in\mathcal H$. The mapping
$$\mathcal E\ni\varphi\longmapsto B^H(\varphi):=\int_0^T\varphi(s)\,dB_s^H$$
is an isometry from $\mathcal E$ to the Gaussian space generated by $B^H$, and it can be extended to $\mathcal H$. Denote by $\mathcal S$ the set of smooth functionals of the form
$$F=f\left(B^H(\varphi_1),B^H(\varphi_2),\dots,B^H(\varphi_n)\right),$$
where $f\in C_b^\infty(\mathbb R^n)$ ($f$ and all its derivatives are bounded) and $\varphi_i\in\mathcal H$. The derivative operator $D^H$ (the Malliavin derivative) of a functional $F$ of the above form is defined as
$$D^HF=\sum_{j=1}^n\frac{\partial f}{\partial x_j}\left(B^H(\varphi_1),B^H(\varphi_2),\dots,B^H(\varphi_n)\right)\varphi_j.$$
The derivative operator $D^H$ is then a closable operator from $L^2(\Omega)$ into $L^2(\Omega;\mathcal H)$. We denote by $\mathbb D^{1,2}$ the closure of $\mathcal S$ with respect to the norm
$$\|F\|_{1,2}:=\sqrt{E|F|^2+E\|D^HF\|_{\mathcal H}^2}.$$
The divergence integral $\delta^H$ is the adjoint of the derivative operator $D^H$. That is, we say that a random variable $u$ in $L^2(\Omega;\mathcal H)$ belongs to the domain of the divergence operator $\delta^H$, denoted by $\mathrm{Dom}(\delta^H)$, if
$$\left|E\left\langle D^HF,u\right\rangle_{\mathcal H}\right|\le c\|F\|_{L^2(\Omega)}$$
for every $F\in\mathbb D^{1,2}$. In this case, $\delta^H(u)$ is defined by the duality relationship
$$E\left[F\delta^H(u)\right]=E\left\langle D^HF,u\right\rangle_{\mathcal H}$$
for any $F\in\mathbb D^{1,2}$. Generally, the divergence $\delta^H(u)$ is also called the Skorohod integral of the process $u$ and denoted as
$$\delta^H(u)=\int_0^Tu_s\,dB_s^H,$$
and the indefinite Skorohod integral is defined as $\int_0^tu_s\,dB_s^H=\delta^H\left(u1_{[0,t]}\right)$. If the process $u=\{u_t,\,t\ge0\}$ is adapted, the Skorohod integral is called the fractional Itô integral, and the Itô formula
$$f\left(B_t^H,t\right)=f(0,0)+\int_0^t\frac{\partial}{\partial x}f\left(B_s^H,s\right)dB_s^H+\int_0^t\frac{\partial}{\partial t}f\left(B_s^H,s\right)ds+H\int_0^t\frac{\partial^2}{\partial x^2}f\left(B_s^H,s\right)s^{2H-1}\,ds$$
holds for all $t\in[0,T]$ and $f\in C^{2\times1}(\mathbb R\times\mathbb R_+)$.
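As a worked illustration of this Itô formula, one can verify the explicit solution of the fractional Black–Scholes equation stated in the Introduction. Taking $f(x,t)=S_0^H\exp\{\sigma x+\mu t-\frac12\sigma^2t^{2H}\}$, so that $S_t^H=f(B_t^H,t)$, we have $\partial_xf=\sigma f$, $\partial_x^2f=\sigma^2f$, and $\partial_tf=(\mu-H\sigma^2t^{2H-1})f$, and the formula above yields
$$S_t^H=S_0^H+\sigma\int_0^tS_s^H\,dB_s^H+\int_0^t\left(\mu-H\sigma^2s^{2H-1}\right)S_s^H\,ds+H\sigma^2\int_0^tS_s^Hs^{2H-1}\,ds=S_0^H+\mu\int_0^tS_s^H\,ds+\sigma\int_0^tS_s^H\,dB_s^H,$$
so the geometric fractional Brownian motion indeed solves Equation (2).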

3. The Strong Consistency of the Estimator $\hat\theta_n$

In this section, we obtain the consistency of the estimator $\hat\theta_n$ in two cases. To establish the consistency of $\hat\theta_n$, we need four lemmas, and proving these lemmas requires several further statements; we therefore place the proofs of these results at the end of this section. For simplicity, we denote by $\xrightarrow{a.s.}$ convergence with probability one as $n$ tends to infinity; moreover, the symbol $\overset{a.s.}{=}$ means that both sides have the same limit with probability one as $n$ tends to infinity.
 Lemma 1. 
Let $B^H=\{B_t^H,\,t\ge0\}$ be a fractional Brownian motion with Hurst index $0<H<1$, and let $t_i=ih$, $i=0,1,2,\dots,n$, such that condition (C1) holds. Then, with probability one, we have
$$M_n\left(B^H\right):=\frac{1}{nh^{2H}}\sum_{i=1}^n\left(B_{t_i}^H-B_{t_{i-1}}^H\right)^2\longrightarrow1\quad(n\to\infty).$$
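Lemma 1 can be checked numerically. The sketch below is our own illustration (the Cholesky-based fBm sampler is an assumption made for simplicity): it evaluates the normalized quadratic variation $M_n(B^H)$ for a sample path and finds it close to $1$ for both $H<\frac12$ and $H>\frac12$.

```python
import numpy as np

def normalized_qv(H, n, h, rng):
    """M_n(B^H) = (n h^{2H})^{-1} * sum of squared fBm increments on t_i = i*h."""
    t = h * np.arange(1, n + 1)
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                 - np.abs(t[:, None] - t[None, :])**(2 * H))
    B = np.linalg.cholesky(cov + 1e-12 * np.eye(n)) @ rng.standard_normal(n)
    dB = np.diff(np.concatenate(([0.0], B)))
    return np.sum(dB**2) / (n * h**(2 * H))

rng = np.random.default_rng(7)
n = 1500
h = n**(-0.8)            # h -> 0 while nh -> infinity, as in condition (C1)
M_low = normalized_qv(0.3, n, h, rng)
M_high = normalized_qv(0.7, n, h, rng)
```

Since each normalized increment has variance exactly $h^{2H}$, $E[M_n(B^H)]=1$ for every $n$; the lemma asserts that the random fluctuation around $1$ also vanishes almost surely.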
 Lemma 2. 
Let the conditions of Lemma 1 and condition (C1) hold. Denote
$$N_n\left(Y^H\right):=\frac{1}{nh^{2H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H\right)^2.$$
 (1) 
If $0<H<\frac12$, we have $N_n(Y^H)\to\sigma^2$ almost surely, as $n$ tends to infinity.
 (2) 
If $\frac12<H<1$ and condition (C2) holds with $0<\gamma<\frac{1-H}{2H-1}$, $N_n(Y^H)$ converges to $\sigma^2$ almost surely, as $n$ tends to infinity.
 (3) 
If $\frac12<H<1$ and condition (C2) holds with $\gamma=\frac{1-H}{2H-1}$, $N_n(Y^H)$ converges to $\sigma^2+\frac{H^2\sigma^4}{4H-1}$ almost surely, as $n$ tends to infinity.

3.1. The Strong Consistency of $\hat\theta_n$ When $\mu$ Is Known

First, we consider the case where $\mu$ is known. By transforming $\hat\theta_n$ based on Equation (4), we obtain the following expression:
$$\hat\theta_n=\frac{2nh^{2H}}{\sum_{i=1}^n(\hat t_i)^2}\left(\sqrt{1+\frac{\sum_{i=1}^n(\hat t_i)^2}{n^2h^{4H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\mu h\right)^2}-1\right)=\frac{2nh^{2H}}{\sum_{i=1}^n(\hat t_i)^2}\left(\sqrt{1+\Delta_n(H)}-1\right), \tag{8}$$
where
$$\Delta_n(H)=\frac{\sum_{i=1}^n(\hat t_i)^2}{n^2h^{4H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\mu h\right)^2.$$
 Lemma 3. 
Let $\mu\in\mathbb R$ and $H\in(0,\frac12)\cup(\frac12,1)$, and let condition (C1) hold.
 (1) 
For $0<H<\frac12$, we have $\Delta_n(H)\xrightarrow{a.s.}0$ as $n\to\infty$.
 (2) 
For $\frac12<H<1$, if condition (C2) holds with $0<\gamma<\frac{1-H}{2H-1}$, then $\Delta_n(H)\xrightarrow{a.s.}0$ as $n\to\infty$.
 Theorem 1. 
Let $\mu$ be known and let condition (C1) hold.
 (1) 
If $0<H<\frac12$, the estimator $\hat\theta_n$ is strongly consistent.
 (2) 
If $\frac12<H<1$ and condition (C2) holds with $0<\gamma<\frac{1-H}{2H-1}$, then the estimator $\hat\theta_n$ is strongly consistent.
 Proof. 
When $0<H<\frac12$, by the fact that $\sqrt{1+x}-1\sim\frac12x$ ($x\to0$), Lemma 2, and Lemma 3, we obtain
$$\hat\theta_n=\frac{2nh^{2H}}{\sum_{i=1}^n(\hat t_i)^2}\left(\sqrt{1+\frac{\sum_{i=1}^n(\hat t_i)^2}{n^2h^{4H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\mu h\right)^2}-1\right)\overset{a.s.}{=}\frac{1}{nh^{2H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\mu h\right)^2=\frac{1}{nh^{2H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H\right)^2-2\mu h^{1-2H}\cdot\frac1n\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H\right)+\mu^2h^{2-2H}\longrightarrow\sigma^2$$
almost surely, as $n\to\infty$, and statement (1) follows. Similarly, we may obtain statement (2). □

3.2. The Strong Consistency of $\hat\theta_n$ When $\mu$ Is Unknown

In this subsection, we assume that both parameters $\mu$ and $\sigma$ are unknown and establish the strong consistency of the estimator $\hat\theta_n$ defined in (5). In this case, for $H\in(0,\frac12)\cup(\frac12,1)$, we obtain
$$\hat\theta_n=\frac{1}{\rho_n}\left(-2nh^{2H}+2\sqrt{n^2h^{4H}+\rho_n\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}\right)^2}\right)=\frac{2nh^{2H}}{\rho_n}\left(\sqrt{1+\frac{\rho_n}{n^2h^{4H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}\right)^2}-1\right)=\frac{2nh^{2H}}{\rho_n}\left(\sqrt{1+\tilde\Delta_n(H)}-1\right) \tag{10}$$
for every $n\ge1$, where $\rho_n=\sum_{i=1}^n(\hat t_i)^2-n^{4H-1}h^{4H}$ and
$$\tilde\Delta_n(H)=\frac{\rho_n}{n^2h^{4H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}\right)^2.$$
 Lemma 4. 
Let $H\in(0,\frac12)\cup(\frac12,1)$, and let condition (C1) hold.
 (1) 
For $0<H<\frac12$, we have $\tilde\Delta_n(H)\xrightarrow{a.s.}0$ as $n\to\infty$.
 (2) 
For $\frac12<H<1$, if condition (C2) holds with $0<\gamma<\frac{1-H}{2H-1}$, we have $\tilde\Delta_n(H)\xrightarrow{a.s.}0$ as $n\to\infty$.
 Theorem 2. 
Let $\mu$ be unknown and let condition (C1) hold.
 (1) 
For $0<H<\frac12$, the estimator $\hat\theta_n$ is strongly consistent.
 (2) 
For $\frac12<H<1$, if condition (C2) holds with $0<\gamma<\frac{1-H}{2H-1}$, then $\hat\theta_n$ is strongly consistent.
 Proof. 
By (10), we first note that, by Lemma 4,
$$\tilde\Delta_n(H)=\frac{\rho_n}{n^2h^{4H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}\right)^2\longrightarrow0$$
as $n$ tends to infinity.
We first prove statement (2). When $\frac12<H<1$ and condition (C2) holds with $0<\gamma<\frac{1-H}{2H-1}$, we have
$$\frac{1}{nh^H}Y_{t_n}=\sigma\frac{B_{t_n}^H}{nh^H}+\mu h^{1-H}-\frac12\sigma^2h^Hn^{2H-1}\xrightarrow{a.s.}0\quad(n\to\infty),$$
and, combined with the fact that $\sqrt{1+x}-1\sim\frac12x$ ($x\to0$), Lemma 2, and Lemma 4, we obtain
$$\hat\theta_n=\frac{2nh^{2H}}{\rho_n}\left(\sqrt{1+\tilde\Delta_n(H)}-1\right)\overset{a.s.}{=}\frac{1}{nh^{2H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}\right)^2=\frac{1}{nh^{2H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H\right)^2-\frac{1}{n^2h^{2H}}Y_{t_n}^2\xrightarrow{a.s.}\sigma^2\quad(n\to+\infty).$$
When $0<H<\frac12$, conditions (C1) and (C2) again give
$$\frac{1}{nh^H}Y_{t_n}=\sigma\frac{B_{t_n}^H}{nh^H}+\mu h^{1-H}-\frac12\sigma^2h^Hn^{2H-1}\xrightarrow{a.s.}0\quad(n\to\infty),$$
and statement (1) follows in the same way. □

3.3. Proofs of Lemmas in Section 3

In this subsection, we complete the proofs of the lemmas stated above. To prove them, and for convenience of exposition, we first provide four auxiliary lemmas.
 Lemma 5 
(Etemadi [21]). Let $\{\xi_n\}$ be a sequence of nonnegative random variables with finite second moments and $S_n=\sum_{i=1}^n\xi_i$ such that:
 (1) 
The sequence $\{\omega_n=E\xi_n\}$ satisfies $\left(\sum_{i=1}^n\omega_i\right)^{-1}\omega_n\to0$ and $\sum_{i=1}^n\omega_i\to\infty$ as $n\to\infty$.
 (2) 
The following series converges:
$$\sum_{n=1}^\infty\frac{1}{(ES_n)^2}\sum_{i=1}^n\mathrm{Cov}^+(\xi_n,\xi_i)<\infty.$$
Then, as $n\to\infty$, $\frac{S_n}{ES_n}\to1$ almost surely.
 Lemma 6. 
For all $0<r<s<r'<s'$ and $0<H<1$, we have
$$\left|E\left[\left(B_{s'}^H-B_{r'}^H\right)\left(B_s^H-B_r^H\right)\right]\right|\le\left(s'-r'\right)\left(s-r\right)\left(r'-s\right)^{2H-2}.$$
 Proof. 
When $0<H<\frac12$, the lemma is obtained from the proof of Lemma 3.3 in Yan et al. [22]; the case $\frac12<H<1$ can be proved in a completely similar way. □
 Lemma 7. 
Let $H\in(0,\frac12)\cup(\frac12,1)$ and denote $K_n(H):=\sum_{i=1}^n\left(i^{2H}-(i-1)^{2H}\right)^2$ for $n\ge1$. Then, we have:
 (1) 
For $0<H<\frac14$, $\lim_{n\to\infty}K_n(H)$ is finite and nonzero.
 (2) 
For $H=\frac14$, $K_n(H)=O(\log n)$, as $n\to\infty$.
 (3) 
For $\frac14<H<1$, as $n\to\infty$, we have
$$K_n(H)=\frac{4H^2}{4H-1}n^{4H-1}+2H^2n^{4H-2}+O\left(n^{4H-3}\right).$$
 Proof. 
The lemma is a simple calculus exercise. □
 Lemma 8. 
Let $H\in(0,\frac12)\cup(\frac12,1)$ and $f(t)=\mu t-\frac12\sigma^2t^{2H}$. Denote
$$\Psi_H(n):=\frac{1}{nh^{2H}}\sum_{i=1}^n\left(f(t_i)-f(t_{i-1})\right)^2,$$
where $t_i=ih$ satisfies conditions (C1) and (C2). Then, $\lim_{n\to\infty}\Psi_H(n)=0$ for $0<H<\frac12$, and
$$\lim_{n\to\infty}\Psi_H(n)=\begin{cases}0,&0<\gamma<\frac{1-H}{2H-1},\\[2pt]\frac{H^2\sigma^4}{4H-1},&\gamma=\frac{1-H}{2H-1},\\[2pt]+\infty,&\gamma>\frac{1-H}{2H-1},\end{cases}$$
for all $\frac12<H<1$.
for all 1 2 < H < 1 .
 Proof. 
By Lemma 7, it follows that
$$\Psi_H(n)=\mu^2h^{2-2H}-\mu\sigma^2hn^{2H-1}+\frac{\sigma^4}{4nh^{2H}}\sum_{i=1}^n\left(t_i^{2H}-t_{i-1}^{2H}\right)^2=\mu^2h^{2-2H}-\mu\sigma^2hn^{2H-1}+\frac{\sigma^4}{4}h^{2H}\frac1nK_n(H)\longrightarrow0$$
for all $0<H<\frac12$, as $n$ tends to infinity. Moreover, when $\frac12<H<1$, by conditions (C1) and (C2), we have
$$\lim_{n\to\infty}hn^{2H-1}=\begin{cases}0,&0<\gamma<\frac{2-2H}{2H-1},\\1,&\gamma=\frac{2-2H}{2H-1},\\+\infty,&\gamma>\frac{2-2H}{2H-1},\end{cases}\qquad\text{and}\qquad\lim_{n\to\infty}h^{2H}n^{4H-2}=\begin{cases}0,&0<\gamma<\frac{1-H}{2H-1},\\1,&\gamma=\frac{1-H}{2H-1},\\+\infty,&\gamma>\frac{1-H}{2H-1},\end{cases}$$
which imply
$$\Psi_H(n)=\mu^2h^{2-2H}-\mu\sigma^2hn^{2H-1}+\frac{\sigma^4}{4}h^{2H}\frac1nK_n(H)\longrightarrow\begin{cases}0,&0<\gamma<\frac{1-H}{2H-1},\\[2pt]\frac{H^2\sigma^4}{4H-1},&\gamma=\frac{1-H}{2H-1},\\[2pt]+\infty,&\gamma>\frac{1-H}{2H-1},\end{cases}$$
for all $\frac12<H<1$, as $n$ tends to infinity. □
Now let us prove Lemma 1, Lemma 2, Lemma 3, and Lemma 4 one by one.
 Proof of Lemma 1. 
When $H=\frac12$, the lemma follows from the strong law of large numbers. Now, let $H\in(0,\frac12)\cup(\frac12,1)$. Then, the sequence $\{E(B_{t_n}^H-B_{t_{n-1}}^H)^2\}$ satisfies condition (1) in Lemma 5. On the other hand, by the fact that
$$E\left[\left(B_{t_n}^H-B_{t_{n-1}}^H\right)^2\left(B_{t_i}^H-B_{t_{i-1}}^H\right)^2\right]=h^{4H}+2\left(E\left[\left(B_{t_n}^H-B_{t_{n-1}}^H\right)\left(B_{t_i}^H-B_{t_{i-1}}^H\right)\right]\right)^2$$
with $n>i$, and Lemma 6, we see that
$$\sum_{i=1}^n\mathrm{Cov}\left(\left(B_{t_n}^H-B_{t_{n-1}}^H\right)^2,\left(B_{t_i}^H-B_{t_{i-1}}^H\right)^2\right)=\sum_{i=1}^n\left(E\left[\left(B_{t_n}^H-B_{t_{n-1}}^H\right)^2\left(B_{t_i}^H-B_{t_{i-1}}^H\right)^2\right]-h^{4H}\right)=2\sum_{i=1}^n\left(E\left[\left(B_{t_n}^H-B_{t_{n-1}}^H\right)\left(B_{t_i}^H-B_{t_{i-1}}^H\right)\right]\right)^2$$
$$=2h^{4H}+2\left(E\left[\left(B_{t_n}^H-B_{t_{n-1}}^H\right)\left(B_{t_{n-1}}^H-B_{t_{n-2}}^H\right)\right]\right)^2+2\sum_{i=1}^{n-2}\left(E\left[\left(B_{t_n}^H-B_{t_{n-1}}^H\right)\left(B_{t_i}^H-B_{t_{i-1}}^H\right)\right]\right)^2$$
$$\le2h^{4H}+\left(2-2^{2H}\right)^2h^{4H}+2h^{4H}\sum_{i=1}^{n-2}\frac{1}{(n-1-i)^{4-4H}}=h^{4H}\left(2+\left(2-2^{2H}\right)^2+2\sum_{j=1}^{n-2}\frac{1}{j^{4-4H}}\right)\le\begin{cases}Ch^{4H},&0<H<\frac34,\\Ch^3(1+\log n),&H=\frac34,\\Ch^{4H}n^{4H-3},&\frac34<H<1,\end{cases}$$
for all $n\ge1$. It follows that
$$\sum_{n=1}^\infty\frac{1}{\left(\sum_{i=1}^nE\left(B_{t_i}^H-B_{t_{i-1}}^H\right)^2\right)^2}\sum_{i=1}^n\mathrm{Cov}\left(\left(B_{t_n}^H-B_{t_{n-1}}^H\right)^2,\left(B_{t_i}^H-B_{t_{i-1}}^H\right)^2\right)<\infty$$
for all $H\in(0,\frac12)\cup(\frac12,1)$. Thus, condition (2) in Lemma 5 holds, and the lemma follows. □
 Proof of Lemma 2. 
Recall that
$$Y_t^H=\sigma B_t^H+f(t),\qquad t\ge0,$$
where $f(t)=\mu t-\frac12\sigma^2t^{2H}$. It follows that
$$N_n\left(Y^H\right)=\frac{\sigma^2}{nh^{2H}}\sum_{i=1}^n\left(B_{t_i}^H-B_{t_{i-1}}^H\right)^2+\frac{1}{nh^{2H}}\sum_{i=1}^n\left(f(t_i)-f(t_{i-1})\right)^2+\frac{2\sigma}{nh^{2H}}\sum_{i=1}^n\left(f(t_i)-f(t_{i-1})\right)\left(B_{t_i}^H-B_{t_{i-1}}^H\right) \tag{13}$$
for all $0<H<1$ and $n\ge1$.
When $H=\frac12$, statement (1) follows from the strong law of large numbers for independent random variables.
When $0<H<\frac12$, by Cauchy's inequality, Lemma 8, and Lemma 1, one may show that
$$\left|\frac{1}{nh^{2H}}\sum_{i=1}^n\left(f(t_i)-f(t_{i-1})\right)\left(B_{t_i}^H-B_{t_{i-1}}^H\right)\right|\le\left(\frac{1}{nh^{2H}}\sum_{i=1}^n\left(f(t_i)-f(t_{i-1})\right)^2\cdot\frac{1}{nh^{2H}}\sum_{i=1}^n\left(B_{t_i}^H-B_{t_{i-1}}^H\right)^2\right)^{1/2}\xrightarrow{a.s.}0,$$
as $n$ tends to infinity. It follows from (13), Lemma 8, and Lemma 1 that
$$N_n\left(Y^H\right)\xrightarrow{a.s.}\sigma^2$$
for all $0<H<\frac12$, as $n$ tends to infinity. This shows that statement (1) holds. Similarly, we can also obtain statements (2) and (3). □
 Proof of Lemma 3. 
When $0<H<\frac12$, by Lemma 7, we obtain
$$\frac1n\sum_{i=1}^n(\hat t_i)^2=\frac1n\sum_{i=1}^n\left(t_i^{2H}-t_{i-1}^{2H}\right)^2=\frac1nh^{4H}K_n(H)=h^{4H}O\left(n^{4H-2}\right)\longrightarrow0\quad(n\to\infty).$$
It follows from Lemma 2 and the fact that
$$\frac1n\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H\right)=\frac1nY_{t_n}^H=\frac\sigma nB_{t_n}^H+\mu h-\frac12\sigma^2n^{2H-1}h^{2H}\xrightarrow{a.s.}0\quad(n\to\infty)$$
that
$$\frac{1}{n^2h^{4H}}\sum_{i=1}^n(\hat t_i)^2\cdot\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\mu h\right)^2=Cn^{4H-2}h^{2H}\cdot\frac{1}{nh^{2H}}\sum_{i=1}^n\left(\left(Y_{t_i}^H-Y_{t_{i-1}}^H\right)^2-2\mu h\left(Y_{t_i}^H-Y_{t_{i-1}}^H\right)+\mu^2h^2\right)\xrightarrow{a.s.}0$$
for all $H\in(0,\frac12)$, as $n\to\infty$. This gives statement (1).
When $\frac12<H<1$ and $0<\gamma<\frac{1-H}{2H-1}$, by Lemma 7, we have
$$\frac1n\sum_{i=1}^n(\hat t_i)^2=\frac1n\sum_{i=1}^n\left(t_i^{2H}-t_{i-1}^{2H}\right)^2=\frac1nh^{4H}K_n(H)=h^{4H}O\left(n^{4H-2}\right)\longrightarrow0$$
and
$$\frac{h^{1-2H}}{n}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H\right)=\frac{h^{1-2H}}{n}Y_{t_n}^H=\frac{h^{1-2H}}{n}\sigma B_{t_n}^H+\mu h^{2-2H}-\frac12\sigma^2n^{2H-1}h\xrightarrow{a.s.}0,$$
as $n$ tends to infinity. It follows from Lemma 2 that
$$\frac{1}{n^2h^{4H}}\sum_{i=1}^n(\hat t_i)^2\cdot\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\mu h\right)^2=Cn^{4H-2}h^{2H}\left(N_n\left(Y^H\right)-2\mu h^{1-2H}\frac{Y_{t_n}^H}{n}+\mu^2h^{2-2H}\right)\xrightarrow{a.s.}0,$$
as $n$ tends to infinity, if $\frac12<H<1$ and $0<\gamma<\frac{1-H}{2H-1}$. This gives statement (2). □
 Proof of Lemma 4. 
By Lemma 7, we see that
$$\rho_n=\sum_{i=1}^n(\hat t_i)^2-n^{4H-1}h^{4H}=h^{4H}K_n(H)-n^{4H-1}h^{4H}=\begin{cases}O\left(h^{4H}\right),&\text{if }0<H<\frac14,\\hO(\log n),&\text{if }H=\frac14,\\h^{4H}O\left(n^{4H-1}\right),&\text{if }\frac14<H<\frac12,\end{cases}$$
as $n\to\infty$. Let $H\in(0,\frac14)$. This yields
$$n^{-\frac32}Y_{t_n}=n^{-\frac32}\sigma B_{t_n}^H+\mu hn^{-\frac12}-\frac12\sigma^2h^{2H}n^{2H-\frac32}\xrightarrow{a.s.}0\quad(n\to\infty)$$
and
$$n^{2H-2}Y_{t_n}=n^{2H-2}\sigma B_{t_n}^H+\mu hn^{2H-1}-\frac12\sigma^2h^{2H}n^{4H-2}\xrightarrow{a.s.}0\quad(n\to\infty),$$
and we can obtain
$$\frac{\rho_n}{n^2h^{4H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}\right)^2\overset{a.s.}{=}\frac{C}{n^2}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}\right)^2=Cn^{-1}h^{2H}N_n\left(Y^H\right)-Cn^{-3}\left(Y_{t_n}\right)^2\xrightarrow{a.s.}0\quad(n\to\infty).$$
Similarly, when $H=\frac14$,
$$\frac{\rho_n}{n^2h^{4H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}\right)^2\overset{a.s.}{=}\frac{C\log n}{n^2}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}\right)^2=C\log n\left(n^{-1}h^{2H}N_n\left(Y^H\right)-n^{-3}\left(Y_{t_n}\right)^2\right)\xrightarrow{a.s.}0\quad(n\to\infty),$$
and when $H\in(\frac14,\frac12)$,
$$\frac{\rho_n}{n^2h^{4H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}\right)^2\overset{a.s.}{=}Cn^{4H-3}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}\right)^2=Cn^{4H-2}h^{2H}N_n\left(Y^H\right)-Cn^{4H-4}\left(Y_{t_n}\right)^2\xrightarrow{a.s.}0\quad(n\to\infty).$$
This gives statement (1).
Now, let $\frac12<H<1$ and $0<\gamma<\frac{1-H}{2H-1}$. Lemma 7 implies that
$$\rho_n=\sum_{i=1}^n(\hat t_i)^2-n^{4H-1}h^{4H}=h^{4H}K_n(H)-n^{4H-1}h^{4H}=h^{4H}O\left(n^{4H-1}\right)\quad(n\to\infty).$$
Noting that
$$n^{2H-2}Y_{t_n}=n^{2H-2}\sigma B_{t_n}^H+\mu hn^{2H-1}-\frac12\sigma^2h^{2H}n^{4H-2}\xrightarrow{a.s.}0\quad(n\to\infty),$$
we obtain
$$\frac{\rho_n}{n^2h^{4H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}\right)^2\xrightarrow{a.s.}0\quad(n\to\infty),$$
and statement (2) follows. □

4. Asymptotic Normality of the Estimator $\hat\theta_n$

In this section, we examine the asymptotic distribution of $\hat\theta_n$. We keep the notation of Section 3, and denote by $\xrightarrow{d}$ and $\xrightarrow{P}$ convergence in distribution and in probability, respectively, as $n$ tends to infinity. From the structure of the estimator $\hat\theta_n$, one can see that its asymptotic distribution depends on the asymptotic distribution of $\{N_n(Y^H),\,n\ge1\}$. By the definition of $N_n(Y^H)$, we can check that
$$N_n\left(Y^H\right)-\sigma^2=\sigma^2\left(M_n\left(B^H\right)-1\right)+\Psi_H(n)+2\sigma\Phi_H(n)$$
for $H\in(0,\frac12)\cup(\frac12,1)$, where $\Psi_H(n)$ is given in Lemma 8 and
$$\Phi_H(n)=\frac{1}{nh^{2H}}\sum_{i=1}^n\left(f(t_i)-f(t_{i-1})\right)\left(B_{t_i}^H-B_{t_{i-1}}^H\right)$$
with $f(t)=\mu t-\frac12\sigma^2t^{2H}$. From the proofs given later, we find that the two terms $M_n(B^H)-1$ and $\Phi_H(n)$ admit the same asymptotic rate under suitable assumptions on $\gamma$. However, when $\frac34\le H<1$ and conditions (C1) and (C2) hold, we know that (see the proof of Lemma 9 below)
$$n^{4-4H}E\left[\Phi_H(n)^2\right]=O\left((nh)^{2H}\right)+O(nh)+O\left((nh)^{2-2H}\right)$$
as $n\to\infty$. But $n^{2-2H}\left(M_n(B^H)-1\right)$ converges in $L^2$ for $\frac34<H<1$, and $\sqrt{n/\log n}\left(M_n(B^H)-1\right)$ converges in distribution for $H=\frac34$. This indicates that $\Phi_H(n)$ and $M_n(B^H)-1$ do not have the same asymptotic rate for all $\gamma>0$, which means that such models exhibit a transition at $H=\frac34$. The reason for this situation is that $nh$ tends to infinity; if $nh$ tended to infinity only logarithmically, the scenario would be different. The following lemma provides the asymptotic normality of $N_n(Y^H)$; its proof is given at the end of this section.
 Lemma 9. 
Let $N_n(Y^H)$ be defined as in Lemma 2, and let conditions (C1) and (C2) hold.
 (1) 
When $0<H<\frac12$, we have
$$\sqrt n\left(N_n\left(Y^H\right)-\sigma^2\right)\xrightarrow{d}\begin{cases}N\left(0,(2+\lambda_H)\sigma^4\right),&0<\gamma<3-4H,\\N\left(\mu^2,(2+\lambda_H)\sigma^4\right),&\gamma=3-4H,\end{cases}$$
where $N(a,b^2)$ denotes the normal random variable with mean $a$ and variance $b^2$, and
$$\lambda_H=\sum_{n=1}^\infty\left((n+1)^{2H}+(n-1)^{2H}-2n^{2H}\right)^2.$$
 (2) 
When $\frac12<H<\frac34$, we obtain
$$\sqrt n\left(N_n\left(Y^H\right)-\sigma^2\right)\xrightarrow{d}\begin{cases}N\left(0,(2+\lambda_H)\sigma^4\right),&0<\gamma<\frac{3-4H}{8H-3},\\[2pt]N\left(\frac{H^2\sigma^4}{4H-1},(2+\lambda_H)\sigma^4\right),&\gamma=\frac{3-4H}{8H-3}.\end{cases}$$
 (3) 
When $\frac34\le H<1$, we have
$$N_n\left(Y^H;1\right):=\frac{n^{2-2H}}{(nh)^{2H}}\left(N_n\left(Y^H\right)-\sigma^2\right)\xrightarrow{a.s.}\frac{H^2}{4H-1}\sigma^4,$$
$$N_n\left(Y^H;2\right):=(nh)^{2H-1}\left(N_n\left(Y^H;1\right)-\frac{H^2\sigma^4}{4H-1}\right)\xrightarrow{a.s.}-\mu\sigma^2,$$
$$(nh)^{1-H}\left(N_n\left(Y^H;2\right)+\mu\sigma^2\right)\xrightarrow{d}N\left(0,4\beta(H)\sigma^6\right),$$
where $\beta(H)=\frac{2H-1}{3H-1}B(2H,2H-1)H^3$.
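The constant $\lambda_H$ can be evaluated by truncating its series, whose terms decay like $n^{4H-4}$ (so the series converges for $H<\frac34$, exactly the range in which it is used). The short sketch below is our own illustration: it computes a truncated $\lambda_H$ and the resulting asymptotic variance $(2+\lambda_H)\sigma^4$; for $H=\frac12$ every term vanishes and the classical variance $2\sigma^4$ is recovered.

```python
import numpy as np

def lambda_H(H, terms=200_000):
    """Truncated series lambda_H = sum_{n>=1} ((n+1)^{2H} + (n-1)^{2H} - 2 n^{2H})^2.
    Terms decay like n^{4H-4}, so the truncation error is small for H < 3/4."""
    n = np.arange(1, terms + 1, dtype=float)
    a = (n + 1)**(2 * H) + np.abs(n - 1)**(2 * H) - 2 * n**(2 * H)
    return float(np.sum(a**2))

def asymptotic_variance(H, sigma):
    # Variance appearing in the CLT for the quadratic-variation statistic
    # (Lemma 9 and the theorems of this section).
    return (2.0 + lambda_H(H)) * sigma**4
```

In practice, such a numerical value of $(2+\lambda_H)\sigma^4$ is what one would plug in to build approximate confidence intervals for $\hat\theta_n$.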

4.1. The Asymptotic Distribution of $\hat\theta_n$ When $\mu$ Is Known

In this subsection, we obtain the asymptotic distribution of $\hat\theta_n$, provided $\mu$ is known. By (8), Lemma 3, and the fact that $\sqrt{1+x}-1=\frac12x+O(x^2)$ ($x\to0$), for all $n\ge1$, we get
$$\hat\theta_n-\sigma^2\overset{a.s.}{=}\frac{1}{nh^{2H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\mu h\right)^2-\sigma^2+\frac{2nh^{2H}}{\sum_{i=1}^n(\hat t_i)^2}O\left(\Delta_n(H)^2\right)=N_n\left(Y^H\right)-\sigma^2-\frac{2\mu}{nh^{2H-1}}Y_{t_n}^H+\mu^2h^{2-2H}+\frac{2nh^{2H}}{\sum_{i=1}^n(\hat t_i)^2}O\left(\Delta_n(H)^2\right) \tag{21}$$
with $H\in(0,\frac12)\cup(\frac12,\frac34)$.
 Lemma 10. 
Let condition (C1) hold and $H\in(0,\frac12)\cup(\frac12,1)$, and denote
$$T_n(H):=\frac{nh^{2H}}{\sum_{i=1}^n(\hat t_i)^2}\Delta_n(H)^2.$$
 (1) 
For $0<H\le\frac38$, we have $T_n(H)\sqrt n\xrightarrow{a.s.}0$ as $n\to\infty$.
 (2) 
For $\frac38<H<\frac34$, we have $T_n(H)\sqrt n\xrightarrow{a.s.}0$, as $n\to\infty$, provided that condition (C2) holds with $0<\gamma<\frac{3-4H}{8H-3}$.
 (3) 
For $\frac34\le H<1$, we have
$$T_n(H)\cdot\frac{n^{2-2H}}{(nh)^{2H}}\xrightarrow{a.s.}\frac{4H^2}{4H-1}\sigma^4$$
as $n$ tends to infinity, provided that condition (C2) holds with $0<\gamma<\frac{1-H}{2H-1}$.
 Proof. 
Let $H\in(0,\frac12)\cup(\frac12,1)$. By Lemma 7 we have
$$T_n(H)=\frac{K_n(H)}{n^3h^{2H}}\left(\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H\right)^2-2\mu hY_{t_n}^H+\mu^2h^2n\right)^2\overset{a.s.}{=}\frac{4H^2n^{4H-4}}{(4H-1)h^{2H}}\left(\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H\right)^2-2\mu hY_{t_n}^H+\mu^2h^2n\right)^2=\frac{4H^2}{4H-1}h^{2H}n^{4H-2}\left(N_n\left(Y^H\right)-2\mu h^{1-2H}\frac1nY_{t_n}^H+\mu^2h^{2-2H}\right)^2 \tag{15}$$
for all $n\ge2$. Clearly, $h^{2H}n^{4H-\frac32}\to0$ for $0<H\le\frac38$, and $h^{2H}n^{4H-\frac32}=O\left(h^{\frac32-2H-\gamma(4H-\frac32)}\right)\to0$ for $\frac38<H<\frac34$ if $0<\gamma<\frac{3-4H}{8H-3}$. It follows from (15) that
$$\sqrt nT_n(H)\overset{a.s.}{=}\frac{4H^2}{4H-1}h^{2H}n^{4H-\frac32}\left(N_n\left(Y^H\right)-2\mu h^{1-2H}\frac1nY_{t_n}^H+\mu^2h^{2-2H}\right)^2\longrightarrow0,$$
as $n$ tends to infinity under the conditions of statements (1) and (2).
We now verify statement (3). Let $\frac34\le H<1$. It follows from Lemma 2 that
$$T_n(H)\cdot\frac{n^{2-2H}}{(nh)^{2H}}=\frac{4H^2}{4H-1}\left(N_n\left(Y^H\right)-2\mu h^{1-2H}\frac1nY_{t_n}^H+\mu^2h^{2-2H}\right)^2\xrightarrow{a.s.}\frac{4H^2}{4H-1}\sigma^4$$
as $n$ tends to infinity, provided that $0<\gamma<\frac{1-H}{2H-1}$, since
$$-2\mu h^{1-2H}\frac1nY_{t_n}^H=-2\mu\frac{h^{1-2H}}{n}\left(\sigma B_{t_n}^H+\mu nh-\frac12\sigma^2(nh)^{2H}\right)=-2\mu\sigma\frac{h^{1-H}}{n^{1-H}}\cdot\frac{B_{t_n}^H}{(nh)^H}-2\mu^2h^{2-2H}+\mu\sigma^2n^{2H-1}h\xrightarrow{a.s.}0$$
as $n$ tends to infinity. □
 Theorem 3. 
Let $\mu$ be known and let conditions (C1) and (C2) hold.
 (1) 
Let $0<H<\frac12$ and $0<\gamma\le3-4H$; then, as $n\to\infty$, we have
$$\sqrt n\left(\hat\theta_n-\sigma^2\right)\xrightarrow{d}N\left(0,(2+\lambda_H)\sigma^4\right).$$
 (2) 
Let $\frac12<H<\frac34$; then, as $n\to\infty$, we have
$$\sqrt n\left(\hat\theta_n-\sigma^2\right)\xrightarrow{d}\begin{cases}N\left(0,(2+\lambda_H)\sigma^4\right),&0<\gamma<\frac{3-4H}{8H-3},\\[2pt]N\left(\frac{H^2\sigma^4}{4H-1},(2+\lambda_H)\sigma^4\right),&\gamma=\frac{3-4H}{8H-3}.\end{cases}$$
 Proof. 
Let $0<H<\frac34$. Then, we have
$$\lim_{n\to\infty}\sqrt nh^{2-2H}=\begin{cases}0,&0<\gamma<3-4H,\\1,&\gamma=3-4H,\\+\infty,&\gamma>3-4H.\end{cases} \tag{23}$$
Moreover, we have $\lim_{n\to\infty}n^{2H-\frac12}h=0$ for $0<H\le\frac14$, and
$$\lim_{n\to\infty}n^{2H-\frac12}h=\begin{cases}0,&0<\gamma<\frac{3-4H}{4H-1},\\1,&\gamma=\frac{3-4H}{4H-1},\\+\infty,&\gamma>\frac{3-4H}{4H-1},\end{cases} \tag{24}$$
for all $\frac14<H<\frac34$ with $H\ne\frac12$.
For statement (1), we have
$$\frac{h^{1-2H}}{\sqrt n}B_{t_n}^H=n^{H-\frac12}h^{1-H}\frac{B_{t_n}^H}{(nh)^H}\xrightarrow{a.s.}0\quad(n\to\infty)$$
for all $0<H<\frac12$, and by (23) and (24), we also have
$$\frac{h^{1-2H}}{\sqrt n}Y_{t_n}^H=\sigma\frac{h^{1-2H}}{\sqrt n}B_{t_n}^H+\mu\sqrt nh^{2-2H}-\frac{\sigma^2}2hn^{2H-\frac12}\xrightarrow{a.s.}\begin{cases}0,&0<\gamma<3-4H,\\\mu,&\gamma=3-4H,\\+\infty,&\gamma>3-4H,\end{cases}\quad(n\to\infty)$$
since $3-4H<\frac{3-4H}{4H-1}$ for $\frac14<H<\frac12$. Combining these with (21), (23), Lemma 9, Lemma 10, and Slutsky's theorem, we obtain statement (1).
For statement (2), we have
$$\frac{h^{1-2H}}{\sqrt n}B_{t_n}^H=n^{H-\frac12}h^{1-H}\frac{B_{t_n}^H}{(nh)^H}\overset{a.s.}{=}O\left(h^{\frac32-2H-\gamma(H-\frac12)}\right)\xrightarrow{a.s.}0\quad(n\to\infty)$$
for all $\frac12<H<\frac34$ if $0<\gamma<\frac{3-4H}{2H-1}$. It follows from (23) and (24) that
$$\frac{h^{1-2H}}{\sqrt n}Y_{t_n}^H=\sigma\frac{h^{1-2H}}{\sqrt n}B_{t_n}^H+\mu\sqrt nh^{2-2H}-\frac{\sigma^2}2hn^{2H-\frac12}\xrightarrow{a.s.}\begin{cases}0,&0<\gamma<\frac{3-4H}{4H-1},\\[2pt]-\frac{\sigma^2}2,&\gamma=\frac{3-4H}{4H-1},\\[2pt]-\infty,&\gamma>\frac{3-4H}{4H-1},\end{cases}\quad(n\to\infty)$$
for all $\frac12<H<\frac34$, since $\frac{3-4H}{4H-1}<3-4H<\frac{3-4H}{2H-1}$. Combining these with (21), (23), Lemma 9, Lemma 10, and Slutsky's theorem, we obtain statement (2), because $\frac{3-4H}{8H-3}<\frac{3-4H}{4H-1}$. □
 Theorem 4. 
Let $\mu$ be known and $\frac34\le H<1$, and let conditions (C1) and (C2) hold with $0<\gamma<\frac{2-2H}{5H-2}$. Then we have
$$\frac{n^{2-2H}}{(nh)^H}\left(\hat\theta_n-\sigma^2\right)\xrightarrow{d}N\left(0,4\beta(H)\sigma^6\right)$$
as $n$ tends to infinity, where $\beta(H)$ is given in Lemma 9.
 Proof. 
By (8), Lemma 3, and the fact $\sqrt{1+x}-1=\frac12x-\frac18x^2+O(x^3)$ ($x\to0$), for all $n\ge1$, we get
$$\hat\theta_n-\sigma^2=\left(N_n\left(Y^H\right)-\sigma^2\right)-2\mu h^{1-2H}\frac1nY_{t_n}^H+\mu^2h^{2-2H}-\frac14T_n(H)+\frac{2nh^{2H}}{\sum_{i=1}^n(\hat t_i)^2}O\left(\Delta_n(H)^3\right)$$
$$=\left(M_n\left(B^H\right)-1\right)\sigma^2+2\sigma\Phi_H(n)+\frac{H^2}{4H-1}\sigma^4h^{2H}n^{4H-2}-2\mu\sigma n^{H-1}h^{1-H}\frac{B_{t_n}^H}{(nh)^H}-\frac14T_n(H)+\frac{2nh^{2H}}{\sum_{i=1}^n(\hat t_i)^2}O\left(\Delta_n(H)^3\right). \tag{26}$$
On the other hand, we have
$$\frac{n^{2-2H}}{(nh)^H}\left(\frac14T_n(H)-\frac{H^2}{4H-1}\sigma^4h^{2H}n^{4H-2}\right)=\frac{H^2}{4H-1}\left\{(nh)^H\left(N_n\left(Y^H\right)-\sigma^2\right)-2\mu n^{H-1}h^{1-H}Y_{t_n}^H+\mu^2n^Hh^{2-H}\right\}\left(N_n\left(Y^H\right)-2\mu h^{1-2H}\frac1nY_{t_n}^H+\mu^2h^{2-2H}+\sigma^2\right)\xrightarrow{a.s.}0\quad(n\to\infty) \tag{27}$$
when $0<\gamma<\frac{2-2H}{5H-2}$. Therefore, the asymptotic normality follows from (26), (27), Lemma 9, and Slutsky's theorem, and we get
$$\frac{n^{2-2H}}{(nh)^H}\left(\hat\theta_n-\sigma^2\right)=\frac{n^{2-2H}}{(nh)^H}\left(\left(M_n\left(B^H\right)-1\right)\sigma^2+2\sigma\Phi_H(n)\right)-\frac{n^{2-2H}}{(nh)^H}\cdot2\mu\sigma n^{H-1}h^{1-H}\frac{B_{t_n}^H}{(nh)^H}-\frac{n^{2-2H}}{(nh)^H}\left(\frac14T_n(H)-\frac{H^2}{4H-1}\sigma^4h^{2H}n^{4H-2}\right)+\frac{2nh^{2H}}{\sum_{i=1}^n(\hat t_i)^2}O\left(\Delta_n(H)^3\right)\frac{n^{2-2H}}{(nh)^H}\xrightarrow{d}N\left(0,4\beta(H)\sigma^6\right)$$
when $0<\gamma<\frac{2-2H}{5H-2}$, as $n$ tends to infinity. □

4.2. The Asymptotic Distribution of $\hat\theta_n$ When $\mu$ Is Unknown

In this subsection, we consider the asymptotic distribution of the estimator $\hat\theta_n$ when $\mu$ is unknown. Based on (10), Lemma 4, and the fact that $\sqrt{1+x}-1=\frac12x+O(x^2)$ ($x\to0$), we obtain
$$\sqrt n\left(\hat\theta_n-\sigma^2\right)\overset{a.s.}{=}\sqrt n\left(\frac{1}{nh^{2H}}\sum_{i=1}^n\left(Y_{t_i}^H-Y_{t_{i-1}}^H-\frac1nY_{t_n}^H\right)^2-\sigma^2\right)+\frac{2nh^{2H}}{\rho_n}O\left(\tilde\Delta_n(H)^2\right)\cdot\sqrt n=\sqrt n\left(N_n\left(Y^H\right)-\sigma^2\right)-\left(\frac{Y_{t_n}^H}{n^{\frac34}h^H}\right)^2+\frac{2nh^{2H}}{\rho_n}O\left(\tilde\Delta_n(H)^2\right)\cdot\sqrt n$$
with $0<H<\frac34$, where $\rho_n=\sum_{i=1}^n(\hat t_i)^2-n^{4H-1}h^{4H}$. As a corollary of Lemma 4, the following lemma provides an estimate for the remainder term
$$\tilde T_n(H):=\frac{nh^{2H}}{\rho_n}\tilde\Delta_n(H)^2.$$
 Lemma 11. 
Let conditions (C1) and (C2) hold.
 (1) 
For $0<H\le\frac38$, we have $\lim_{n\to\infty}\tilde T_n(H)\cdot\sqrt n\overset{a.s.}{=}0$.
 (2) 
For $\frac38<H<\frac34$, we have $\lim_{n\to\infty}\tilde T_n(H)\cdot\sqrt n\overset{a.s.}{=}0$, provided $0<\gamma<\frac{3-4H}{8H-3}$.
 Proof. 
By Lemma 7 and the proof of Lemma 4, we get
T ˜ n ( H ) n = n h 2 H ρ n Δ ˜ n ( H ) 2 · n = ρ n n n 3 h 6 H i = 1 n Y t i H Y t i 1 H 1 n Y t n 2 2 a . s C n n 4 4 H h 2 H n h 2 H N n Y H 1 n Y t n 2 2 = C h H n 2 H 3 4 N n Y H h H n 2 H 11 4 Y t n 2 2
for all n 1 . Clearly, lim n n 2 H 3 4 h H = 0 and lim n n H 3 8 h 1 1 2 H = 0 for all 0 < H 3 8 . Moreover, when 3 8 < H < 3 4 , we have
lim n n 2 H 3 4 h H = 0 , if 0 < γ < 3 4 H 8 H 3 , 1 , if γ = 3 4 H 8 H 3 , and lim n n H 3 8 h 1 1 2 H = 0 , if 0 < γ < 11 12 H 8 H 3 , 1 , if γ = 11 12 H 8 H 3 .
Similarly, lim n n 3 H 11 8 h 3 2 H = 0 for all 0 < H 11 24 and
lim n n 3 H 11 8 h 3 2 H = 0 , if 0 < γ < 11 12 H 24 H 11 , 1 , if γ = 11 12 H 24 H 11
for all 11 24 < H < 11 12 . Noting that 3 4 H 8 H 3 < 11 12 H 24 H 11 and 3 4 H 8 H 3 < 11 12 H 8 H 3 for all 3 8 < H < 3 4 , we obtain that
h 1 2 H n H 11 8 Y t n = μ n H 3 8 h 1 1 2 H 1 2 σ 2 n 3 H 11 8 h 3 2 H + n H 11 8 h 1 2 H σ B n H ,
converges almost surely to 0 for 0 < H 3 8 , and also for 3 8 < H < 3 4 provided 0 < γ < 3 4 H 8 H 3 . Thus, the lemma follows from Lemma 4 and (30). □
 Theorem 5. 
Let μ be unknown and let conditions (C1) and (C2) hold.
 (1) 
For 0 < H < 1 2 , if 0 < γ 3 4 H , we have
n θ ^ n σ 2 d N 0 , ( 2 + λ H ) σ 4 .
 (2) 
For 1 2 < H < 3 4 , we have
n θ ^ n σ 2 d N 0 , ( 2 + λ H ) σ 4 , 0 < γ < 3 4 H 8 H 3 , N σ 4 H 2 4 H 1 + 1 4 σ 4 , ( 2 + λ H ) σ 4 , γ = 3 4 H 8 H 3 .
 Proof. 
Clearly, we have lim n n 2 H 3 4 h H = 0 for all 0 < H 3 8 and
lim n n 2 H 3 4 h H = 0 , 0 < γ < 3 4 H 8 H 3 , 1 , γ = 3 4 H 8 H 3 , , γ > 3 4 H 8 H 3 .
for 3 8 < H < 3 4 , and moreover
lim n n 1 4 h 1 H = 0 , 0 < γ < 3 4 H , 1 , γ = 3 4 H , , γ > 3 4 H
for all 0 < H < 3 4 . It follows that
n 3 4 h H Y t n H = σ n H 3 4 B t n H ( n h ) H + μ n 1 4 h 1 H 1 2 σ 2 n 2 H 3 4 h H 0 , 0 < γ < 3 4 H , μ , γ = 3 4 H
for all 3 8 < H < 1 2 , and n 3 4 h H Y t n H a . s 0 for all 0 < H 3 8 , and
n 3 4 h H Y t n H = σ B t n H n 3 4 h H + μ n 1 4 h 1 H 1 2 σ 2 n 2 H 3 4 h H 0 , 0 < γ < 3 4 H 8 H 3 , 1 2 σ 2 , γ = 3 4 H 8 H 3
for all 1 2 < H < 3 4 , since 3 4 H 8 H 3 > 1 for 1 2 < H < 3 4 and 3 4 H 8 H 3 < 1 for all 3 8 < H < 1 2 . Combining this with (29), Lemma 9, Lemma 11, and Slutsky’s theorem, we obtain the theorem. □
 Lemma 12. 
Let conditions ( C 1 ) and ( C 2 ) hold with 0 < γ < 1 H 2 H 1 . For 3 4 H < 1 , we have
T ˜ n ( H ) · n 2 2 H ( n h ) 2 H a . s 4 H 2 4 H 1 1 σ 4 ( n ) .
Proof. 
Similar to the proof of Lemma 11, we get
1 n h H Y t n = σ n 1 H B t n H ( n h ) H + μ h 1 H 1 2 σ 2 n 2 H 1 h H a . s 0 , 0 < γ < 1 H 2 H 1 , 1 2 σ 2 , γ = 1 H 2 H 1 , , γ > 1 H 2 H 1 , ,
for all 1 2 < H < 1 . It follows from Lemma 7 and Lemma 2 that
T ˜ n ( H ) · n 2 2 H ( n h ) 2 H a . s n 2 h 4 H 4 H 2 4 H 1 1 n h 2 H N n Y H 1 n Y t n 2 2 = 4 H 2 4 H 1 1 N n Y H n 2 h 2 H Y t n 2 2 a . s 4 H 2 4 H 1 1 σ 4 ,
as n tends to infinity. □
 Theorem 6. 
Let 3 4 H < 1 and μ be unknown. If conditions ( C 1 ) and ( C 2 ) hold with 0 < γ < 2 2 H 5 H 2 , we then have
n 2 2 H ( n h ) H θ ^ n σ 2 d N 0 , 4 β ( H ) σ 6
as n tends to infinity, where β ( H ) is given in Lemma 9.
Proof. 
Let 3 4 H < 1 . By (10), Lemma 4, and the fact that √(1 + x) − 1 = (1/2)x − (1/8)x² + O(x³) as x → 0, we get
θ ^ n σ 2 = N n Y H σ 2 h 2 H n 2 Y t n H 2 1 4 T ˜ n ( H ) + 2 n h 2 H ρ n O ( Δ ˜ n ( H ) 3 ) = ( M n B H 1 ) σ 2 + 2 σ Φ H ( n ) + H 2 4 H 1 1 σ 4 h 2 H n 4 H 2 2 μ σ n H 1 h 1 H B t n H ( n h ) H σ 2 n 2 H 2 B t n H ( n h ) H 2 + σ 3 n 3 H 2 h H B t n H ( n h ) H 1 4 T ˜ n ( H ) + 2 n h 2 H ρ n O ( Δ ˜ n ( H ) 3 ) .
Similarly, we can obtain
n 2 2 H ( n h ) H 1 4 T ˜ n ( H ) 4 H 2 4 H 1 1 n 4 H 2 h H σ 4 = 4 H 2 4 H 1 1 N n Y H n 2 h 2 H Y t n 2 + σ 2 N n Y H σ 2 ( n h ) H n 2 h 2 H Y t n 2 ( n h ) H a . s 0 .
Therefore, using Equations (33) and (34) and Lemma 9, we have
n 2 2 H ( n h ) H θ ^ n σ 2 = n 2 2 H ( n h ) H M n B H 1 σ 2 + 2 σ Φ H ( n ) n 2 2 H ( n h ) H 2 μ σ ( n h ) 1 2 H B t n H ( n h ) H + n 2 2 H ( n h ) H 1 4 T ˜ n ( H ) 4 H 2 4 H 1 1 n 4 H 2 h H σ 4 σ 2 ( n h ) H B t n H ( n h ) H 2 + σ 3 h 2 H B t n H ( n h ) H + 2 n h 2 H ρ n O ( Δ ˜ n ( H ) 3 ) n 2 2 H ( n h ) H d N 0 , 4 β ( H ) σ 6
when 0 < γ < 2 2 H 5 H 2 and as n tends to infinity. □

4.3. Proofs of Lemmas in Section 4

In this subsection, we complete the proof of Lemma 9.
 Proposition 1. 
Let the conditions in Lemma 1 hold.
 (1) 
For 0 < H < 3 4 , we have
n M n ( B H ) 1 N 0 , 2 + λ H ( n ) ,
in distribution, where
λ H = n = 1 ( n + 1 ) 2 H + ( n 1 ) 2 H 2 n 2 H 2 .
 (2) 
For H = 3 4 , we have
n log n M n ( B H ) 1 N 0 , 9 4 ( n )
in distribution.
 (3) 
For 3 4 < H < 1 , we have
n 2 2 H M n ( B H ) 1 2 H 2 ( 2 H 1 ) 4 H 3 R H ( n )
in L 2 , where R H denotes a Rosenblatt random variable with E ( R H ) 2 = 1 .
This proposition is a straightforward extension of some known results, and its proof is omitted (see, for example, Theorem 5.4, Proposition 5.4, and Theorem 5.5 in Tudor [20]). In fact, for t n = n h = T < , such convergences have been studied and can be found in Breuer and Major [23], Dobrushin and Major [24], Giraitis and Surgailis [25], Nourdin [26], Nourdin and Réveillac [27], and Tudor [20]. On the other hand, for more material on the Rosenblatt distribution and related processes, we refer to Tudor [20].
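The series defining λ H converges for 0 < H < 3 4 and can be summed numerically. The following sketch (our illustration, not the authors' code) does so directly; with σ 2 = 9 and n = 1000, as used in Section 6, the resulting limiting variance (2 + λ H ) σ 4 / n matches the theoretical-variance column of Table 2, e.g. about 0.20041 at H = 0.2.

```python
import numpy as np

def lambda_H(H, N=200_000):
    """Sum lambda_H = sum_{m>=1} ((m+1)^{2H} + (m-1)^{2H} - 2 m^{2H})^2,
    which converges for 0 < H < 3/4 (terms decay like m^{4H-4})."""
    m = np.arange(1, N + 1, dtype=float)
    rho = (m + 1) ** (2 * H) + (m - 1) ** (2 * H) - 2 * m ** (2 * H)
    return np.sum(rho ** 2)

# Limiting variance (2 + lambda_H) * sigma^4 of sqrt(n)(theta_hat - sigma^2),
# rescaled by n as in the simulation tables (sigma^2 = 9, n = 1000)
sigma2, n = 9.0, 1000
for H in (0.2, 0.4, 0.6):
    print(H, (2 + lambda_H(H)) * sigma2 ** 2 / n)
```

Truncating the series at a large N suffices because the summands decay polynomially; for H close to 3/4 the decay is slow and a larger N (or a tail correction) is needed.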
Proof of Lemma 9. 
Let H ( 0 , 1 2 ) ( 1 2 , 3 4 ) be given. We have
lim n n h 2 2 H = 0 , if 0 < γ < 3 4 H , 1 , if γ = 3 4 H .
We also have lim n h n 2 H 1 2 = 0 for 0 < H < 1 4 and
lim n h n 2 H 1 2 = 0 , if 0 < γ < 3 4 H 4 H 1 , 1 , if γ = 3 4 H 4 H 1 ( n )
for 1 4 < H < 3 4 . On the other hand, by Taylor’s expansion, we may prove
lim n 1 n h 2 H K n ( H ) = lim n 1 n h 2 H i = 1 n i 4 H 1 ( 1 1 i ) 2 H 2 = 4 H 2 4 H 1
for 1 4 < H < 3 4 if γ = 3 4 H 8 H 3 . It follows from Lemma 7 that lim n 1 n h 2 H K n ( H ) = 0 for 0 < H 3 8 and
lim n 1 n h 2 H K n ( H ) = lim n n 3 + γ 2 ( 1 + γ ) K n ( H ) = 0 , if 0 < γ < 3 4 H 8 H 3 , 4 H 2 4 H 1 , if γ = 3 4 H 8 H 3
for 3 8 < H < 3 4 . Combining the above three convergences and the proof of Lemma 8, we obtain that
n Ψ H ( n ) = μ 2 n h 2 2 H σ 2 μ h n 2 H 1 2 + 1 4 σ 4 h 2 H 1 n K n ( H ) 0 , if 0 < γ < 3 4 H , μ 2 , if γ = 3 4 H , , if γ > 3 4 H ( n )
for 0 < H < 1 2 and
n Ψ H ( n ) = μ 2 n h 2 2 H σ 2 μ h n 2 H 1 2 + 1 4 σ 4 h 2 H 1 n K n ( H ) 0 , if 0 < γ < 3 4 H 8 H 3 , H 2 4 H 1 σ 4 , if γ = 3 4 H 8 H 3 , , if γ > 3 4 H 8 H 3 ( n )
for 1 2 < H < 3 4 . Thus, by (16) and Proposition 1, to end the proof, we check that
n Φ H ( n ) P 0 ( n )
for all H ( 0 , 1 2 ) ( 1 2 , 3 4 ) under some suitable conditions for γ . By the fact
f ( t i ) f ( t i 1 ) = μ h 1 2 σ 2 h 2 H Δ i
with Δ i = i 2 H ( i 1 ) 2 H and i { 1 , 2 , , n } , we get that
n E Φ H ( n ) 2 = 1 n h 4 H E i = 1 n f t i f t i 1 B t i H B t i 1 H 2 = Ψ H ( n ) + 1 n h 2 H 1 i < j n f ( t i ) f ( t i 1 ) f ( t j ) f ( t j 1 ) R ( j i ) = Ψ H ( n ) + 1 n μ 2 h 2 2 H 1 i < j n R ( j i ) 1 2 n μ σ 2 h 1 i < j n Δ i + Δ j R ( j i ) + 1 4 n σ 4 h 2 H 1 i < j n Δ i Δ j R ( j i )
for all n 2 , where R ( x ) = ( x + 1 ) 2 H + ( x 1 ) 2 H 2 x 2 H for x 1 .
Now, in order to complete the proof, we estimate the last three terms in (42) in the two cases 0 < H < 1 2 and 1 2 < H < 3 4 .
Case I: 0 < H < 1 2 . Clearly, the sequence
m = 1 n | R ( m ) | = m = 1 n m 2 H ( 1 + 1 m ) 2 H + ( 1 1 m ) 2 H 2
converges. It follows that
1 n h 2 2 H 1 i < j n R ( j i ) = 1 n h 2 2 H i = 1 n 1 j = i + 1 n R ( j i ) = n 1 n h 2 2 H m = 1 n R ( m ) = O h 2 2 H 0 ,
1 n h 1 i < j n Δ i · R ( j i ) 1 n h 1 i < j n | R ( j i ) | = O n 2 H 1 h 0 ,
1 n h 1 i < j n Δ j R ( j i ) 1 n h 1 i < j n | R ( j i ) | = O n 2 H 1 h 0
and
1 n h 2 H 1 i < j n Δ i Δ j R ( j i ) 1 n h 2 H 1 i < j n | R ( j i ) | = h 2 H O n 2 H 1 0 ,
as n tends to infinity. Combining these with Lemma 8 and (42), we obtain convergence (41) for all 0 < H < 1 2 . Thus, by Proposition 1, (16), (39), and Slutsky’s theorem, we obtain the desired asymptotic behavior
n N n ( Y H ) σ 2 = σ 2 n M n B H 1 + n Ψ H ( n ) + 2 σ n Φ H ( n ) N 0 , ( 2 + λ H ) σ 4 , if 0 < γ < 3 4 H , N μ 2 , ( 2 + λ H ) σ 4 , if γ = 3 4 H ( n )
for all 0 < H < 1 2 , and statement (1) follows.
Case II: 1 2 < H < 3 4 . From
j = 1 n j 2 H 2 = n 2 H 1 j = 1 n j n 2 H 2 · 1 n 1 2 H 1 n 2 H 1 ( n )
and Taylor’s expansion, we get that
m = 1 n R ( m ) = m = 1 n m 2 H ( 1 + 1 m ) 2 H + ( 1 1 m ) 2 H 2 2 H n 2 H 1
as n tends to infinity, which implies that
1 n h 2 2 H 1 i < j n R ( j i ) = n 1 n h 2 2 H m = 1 n R ( m ) 2 H n 2 H 1 h 2 2 H 0 , if 0 < γ < 3 4 H 2 H 1 , 2 H , if γ = 3 4 H 2 H 1
as n tends to infinity. Similarly, we also have
1 2 n μ σ 2 h 1 i < j n Δ i R ( j i ) = 1 2 n μ σ 2 h m = 1 n R ( m ) i = 1 n 1 Δ i = 1 2 n μ σ 2 h ( n 1 ) 2 H m = 1 n R ( m ) H μ σ 2 n 4 H 2 h 0 , if 0 < γ < 3 4 H 4 H 2 , H μ σ 2 , if γ = 3 4 H 4 H 2
and
1 2 n μ σ 2 h 1 i < j n Δ j R ( j i ) = 1 2 n μ σ 2 h m = 1 n R ( m ) j = m + 1 n Δ j = 1 2 n μ σ 2 h m = 1 n n 2 H m 2 H R ( m ) μ σ 2 2 H 2 4 H 1 n 4 H 2 h
as n tends to infinity. On the other hand, we have
1 i < j n i j n 2 2 H 1 j n i n 2 H 2 · 1 n 2 ζ H = 0 1 0 y ( x y ) 2 H 1 ( y x ) 2 H 2 d x d y = 1 6 H 2 B ( 2 H , 2 H 1 ) ,
as n tends to infinity, where B ( · , · ) denotes the classical Beta function. It follows from Taylor’s expansion that
1 4 n σ 4 h 2 H 1 i < j n Δ i Δ j R ( j i ) 2 H 3 ( 2 H 1 ) σ 4 h 2 H 1 n 1 i < j n ( i j ) 2 H 1 ( j i ) 2 H 2 2 H 3 ( 2 H 1 ) ζ H σ 4 h 2 H n 6 H 3 0 , if 0 < γ < 3 4 H 6 H 3 , 2 H 3 ( 2 H 1 ) ζ H σ 4 , if γ = 3 4 H 6 H 3
for all 1 2 < H < 3 4 , as n tends to infinity. Combining these with Lemma 8 and (42), we obtain convergence (41) for all 1 2 < H < 3 4 if 0 < γ < 3 4 H 6 H 3 . Thus, we obtain the desired asymptotic behavior
n N n ( Y H ) σ 2 = σ 2 n M n B H 1 + n Ψ H ( n ) + 2 σ n Φ H ( n ) N 0 , ( 2 + λ H ) σ 4 , if 0 < γ < 3 4 H 8 H 3 , N μ 2 , ( 2 + λ H ) σ 4 , if γ = 3 4 H 8 H 3 ( n )
for all 1 2 < H < 3 4 by Proposition 1, (16), and (40), and statement (2) follows.
Now, we verify statement (3). Let 3 4 H < 1 . By Lemma 7, we have
n 2 2 H ( n h ) 2 H Ψ H ( n ) = n 2 2 H ( n h ) 2 H μ 2 h 2 2 H μ σ 2 h n 2 H 1 + 1 4 n σ 4 h 2 H K n ( H ) = μ 2 ( n h ) 4 H 2 μ σ 2 ( n h ) 2 H 1 + H 2 4 H 1 σ 4 + 1 2 n H 2 σ 4 + O n 2 H 2 4 H 1 σ 4 ( n ) ,
and moreover, from the proof of statement (2) in Lemma 9, we also have
n 4 4 H ( n h ) 2 H E Φ H ( n ) 2 = n 3 4 H ( n h ) 2 H Ψ H ( n ) + n 3 4 H ( n h ) 2 H 1 n μ 2 h 2 2 H 1 i < j n R ( j i ) 1 2 n μ σ 2 h 1 i < j n Δ i + Δ j R ( j i ) + 1 4 n σ 4 h 2 H 1 i < j n Δ i Δ j R ( j i ) β ( H ) σ 4 ,
where β ( H ) = 2 H 1 3 H 1 B ( 2 H , 2 H 1 ) H 3 . Noting that
Φ H ( n ) = 1 n h 2 H i = 1 n f ( t i ) f ( t i 1 ) B t i H B t i 1 H
admits a normal distribution for all n 1 , we see that
2 σ n 2 2 H ( n h ) H Φ H ( n ) d N 0 , 4 β ( H ) σ 6
from (52). It follows from (16), statement (3) in Proposition 1, and (51) that
N n ( Y H ; 1 ) = n 2 2 H ( n h ) 2 H N n ( Y H ) σ 2 = σ 2 n 2 2 H ( n h ) 2 H M n ( B H ) 1 + n 2 2 H ( n h ) 2 H Ψ H ( n ) + 2 σ n 2 2 H ( n h ) 2 H Φ H ( n ) a . s H 2 4 H 1 σ 4 ,
as n tends to infinity. Moreover, by (51) we obtain
( n h ) 2 H 1 n 2 2 H ( n h ) 2 H Ψ H ( n ) H 2 4 H 1 σ 4 = μ 2 ( n h ) 2 H 1 μ σ 2 + H 2 σ 4 2 n 2 2 H h 2 H 1 + h 2 H 1 O n 2 H 3 μ σ 2 ( n ) .
Combining this with (54), (53), and Proposition 1, we get
N n ( Y H ; 2 ) = ( n h ) 2 H 1 N n ( Y H ; 1 ) H 2 σ 4 4 H 1 = σ 2 n 2 2 H n h M n ( B H ) 1 + 2 σ n 2 2 H n h Φ H ( n ) + n 2 2 H ( n h ) 2 H Ψ H ( n ) H 2 4 H 1 σ 4 ( n h ) 2 H 1 a . s μ σ 2 ,
as n tends to infinity. Finally, by Proposition 1, (53), (55), and Slutsky’s theorem, we obtain
( n h ) 1 H N n ( Y H ; 2 ) + μ σ 2 = σ 2 n 2 2 H ( n h ) H M n ( B H ) 1 + 2 σ n 2 2 H ( n h ) H Φ H ( n ) + n 2 2 H ( n h ) 2 H Ψ H ( n ) H 2 4 H 1 σ 4 ( n h ) 2 H 1 + μ σ 2 ( n h ) 1 H = σ 2 n 2 2 H ( n h ) H M n ( B H ) 1 + 2 σ n 2 2 H ( n h ) H Φ H ( n ) + μ 2 ( n h ) 3 H 2 + H 2 σ 4 2 n 1 H h H + h H O n H 2 d N 0 , 4 β ( H ) σ 6 .
Thus, the three convergences in statements (2) and (3) follow. □

5. Asymptotic Behavior of Estimator μ ^ n

In this section, we consider the strong consistency and asymptotic distribution of μ ^ n . We keep the notations from Section 3.
 Theorem 7. 
Let H ( 0 , 1 ) and assume that condition (C1) holds. If σ > 0 is known, the estimator given by (4)
μ ^ n = 1 n h Y t n + 1 2 ( n h ) 2 H 1 σ 2
is strongly consistent and
( n h ) 1 H ( μ ^ n μ ) d N ( 0 , σ 2 ) .
Proof. 
The theorem follows from the fact that
μ ^ n μ = 1 n h μ n h 1 2 σ 2 ( n h ) 2 H + σ B t n H + 1 2 ( n h ) 2 H 1 σ 2 μ = σ B t n H n h
for all n 1 . □
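The exact cancellation used in this proof can be checked numerically: substituting Y t n = μ n h − 1 2 σ 2 ( n h ) 2 H + σ B t n H into the estimator leaves only the fluctuation term σ B t n H / ( n h ) . A minimal sketch with arbitrary illustrative numbers (including a hypothetical realized value of B t n H — none of these values come from the paper's experiments):

```python
import math

# Arbitrary illustrative values (not from the paper's experiments)
mu, sigma, H = 4.0, 3.0, 0.6
n, h = 1000, 0.01
B = 1.7          # hypothetical realized value of B^H at t_n = n*h
T = n * h

# Log-price at t_n under the fractional Black-Scholes model
Y = mu * T - 0.5 * sigma ** 2 * T ** (2 * H) + sigma * B

# Estimator with sigma^2 known: mu_hat = Y_{t_n}/(nh) + (1/2)(nh)^{2H-1} sigma^2
mu_hat = Y / T + 0.5 * T ** (2 * H - 1) * sigma ** 2

# The drift term and the deterministic sigma^2-correction cancel exactly,
# leaving mu_hat - mu = sigma * B^H_{t_n} / (nh)
assert math.isclose(mu_hat - mu, sigma * B / T)
```

Because the cancellation is algebraic, the identity holds for any choice of the constants, which is exactly why the estimator's error distribution is that of σ B t n H / ( n h ) .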
 Lemma 13. 
When 1 2 < H < 1 , and assuming that conditions (C1) and (C2) hold, estimator θ ^ n given by (5) satisfies
( n h ) 2 H 1 θ ^ n σ 2 a . s 0 ,
provided one of the following conditions holds:
 (1) 
1 2 < H < 3 4 and 0 < γ < 3 4 H 8 H 3 ;
 (2) 
3 4 < H < 1 and 0 < γ < 2 2 H 5 H 3 .
Proof. 
When 1 2 < H < 3 4 , based on statement (2) of Theorem 5 and Theorem 6, we obtain that when 0 < γ < 3 4 H 8 H 3 ,
( n h ) 2 H 1 θ ^ n σ 2 = n 2 H 3 2 h 2 H 1 · n θ ^ n σ 2 = n h 1 + γ 2 H 1 1 + γ n 1 ( 4 H 3 ) γ 2 ( 1 + γ ) · n θ ^ n σ 2 a . s 0
as n tends to infinity. Thus, statement (1) is established. When 3 4 H < 1 and 0 < γ < 2 2 H 5 H 3 , we have
( n h ) 2 H 1 θ ^ n σ 2 = n 5 H 3 h 3 H 1 · n 2 2 H ( n h ) H θ ^ n σ 2 = n h 1 + γ 3 H 1 1 + γ n ( 5 H 3 ) γ + 2 H 2 1 + γ · n 2 2 H ( n h ) H θ ^ n σ 2 a . s 0
as n tends to infinity. Thus, statement (2) is established. □
 Theorem 8. 
Let conditions (C1) and (C2) hold and σ > 0 be unknown.
 (1) 
If 0 < H < 1 2 , estimator μ ^ n given by (5) is a strongly consistent estimator. Furthermore, we have
( n h ) 1 H ( μ ^ n μ ) d N ( 0 , σ 2 ) .
 (2) 
If 1 2 < H < 3 4 , estimator μ ^ n given by (5) is a strongly consistent estimator. Furthermore, if 0 < γ < 3 4 H 8 H 3 , we obtain
( n h ) 1 H ( μ ^ n μ ) d N ( 0 , σ 2 ) .
 (3) 
If 3 4 < H < 1 , estimator μ ^ n given by (5) is a strongly consistent estimator. Moreover, if 0 < γ < 2 2 H 6 H 3 , we get
( n h ) 1 H ( μ ^ n μ ) d N ( 0 , σ 2 ) .
Proof. 
For 0 < H < 1 2 , applying a transformation to (5), we obtain
μ ^ n = 1 n h Y t n H + 1 2 ( n h ) 2 H 1 θ ^ n = μ + σ B t n H n h 1 2 σ 2 ( n h ) 2 H 1 + 1 2 ( n h ) 2 H 1 θ ^ n a . s μ .
as n tends to infinity. Moreover,
( n h ) 1 H ( μ ^ n μ ) = σ B t n H ( n h ) H + 1 2 ( n h ) H θ ^ n σ 2 = σ B t n H ( n h ) H + 1 2 n h 1 + γ H 1 + γ · n ( 2 H 1 ) γ 1 2 ( 1 + γ ) · n θ ^ n σ 2 d N ( 0 , σ 2 )
as n tends to infinity. So statement (1) is proven. When 1 2 < H < 3 4 and 0 < γ < 3 4 H 8 H 3 , by Lemma 13, we have
μ ^ n = 1 n h Y t n H + 1 2 ( n h ) 2 H 1 θ ^ n = μ + σ B t n H n h + 1 2 ( n h ) 2 H 1 θ ^ n σ 2 a . s μ
as n tends to infinity. Similarly, using statement (2) of Theorem 5 and the condition 3 4 H 8 H 3 < 1 2 H 1 , we get
( n h ) 1 H ( μ ^ n μ ) = σ B t n H ( n h ) H + 1 2 ( n h ) H θ ^ n σ 2 = σ B t n H ( n h ) H + 1 2 n h 1 + γ H 1 + γ · n ( 2 H 1 ) γ 1 2 ( 1 + γ ) · n θ ^ n σ 2 d N ( 0 , σ 2 ) ,
when 0 < γ < 3 4 H 8 H 3 and as n tends to infinity. Thus, statement (2) is established. When 3 4 H < 1 and 0 < γ < 2 2 H 6 H 3 , it follows from Lemma 13 that
μ ^ n = 1 n h Y t n H + 1 2 ( n h ) 2 H 1 θ ^ n = μ + σ B t n H n h + 1 2 ( n h ) 2 H 1 θ ^ n σ 2 a . s μ . ( n )
From Theorem 6 and the fact that 2 2 H 6 H 3 < 1 H 2 H 1 , for 0 < γ < 2 2 H 6 H 3 , we obtain
( n h ) 1 H ( μ ^ n μ ) = σ B t n H ( n h ) H + 1 2 ( n h ) H θ ^ n σ 2 = σ B t n H ( n h ) H + 1 2 n h 1 + γ 3 H 1 + γ · n ( 4 H 2 ) γ + 2 H 2 1 + γ · n 2 2 H ( n h ) H θ ^ n σ 2 d N ( 0 , σ 2 ) .
Consequently, statement (3) is established. □

6. Numerical Simulation and Empirical Analysis

In this section, the effectiveness of the proposed estimators is validated through numerical simulations. The results demonstrate that the estimators exhibit strong applicability and reliable performance in practical scenarios. To further assess their precision, Monte Carlo simulations were conducted in MATLAB R2017b, where the simulated estimates were compared against the true values, and their means and standard deviations were calculated to provide a comprehensive evaluation of performance. In addition, real trading data from the Chinese financial market were retrieved via the Tushare Pro platform using Python 3.10. With the value of H known, the parameters μ and σ were estimated, and trajectory plots were generated in MATLAB and compared with the logarithmic closing prices of the stock, thereby further validating the effectiveness of the quasi-likelihood estimation.

6.1. Numerical Simulation

First, we emphasize that in all the figures presented below, the sample size was fixed at n = 1000 , and the time step was chosen as h = 0.01 n . The parameters were set to μ = 4 and σ 2 = 9 . In the analysis of the asymptotic distribution, the number of replications, i.e., the simulated sample paths, was specified as numtrials = 1000 . For the sake of notational consistency, we denote θ = σ 2 throughout the subsequent discussion. To assess the effectiveness and robustness of the proposed estimation method, we designed two primary experimental scenarios:
1.
Case with Partially Known Parameters
  • In the case where the parameter μ = 4 is known, we estimated the parameter θ ^ n and further examined its estimation path, quantile–quantile plot, and asymptotic distribution. The corresponding results for the estimator θ ^ n are presented for H = 0.4 (Figure 1) and H = 0.6 (Figure 2).
  • In the case where the parameter σ is known, we estimated the parameter μ and examined its estimation path and asymptotic distribution. Similarly, figures present the estimation paths and asymptotic distribution of μ ^ n when H = 0.4 (Figure 3) and H = 0.6 (Figure 4).
2.
Case with Completely Unknown Parameters
  • In this scenario, where both μ and σ are unknown, we estimated both parameters simultaneously and analyzed their estimation paths and asymptotic distributions. Figures present the estimation paths and asymptotic distribution of μ ^ n and θ ^ n when H = 0.4 (Figure 5 and Figure 6) and H = 0.6 (Figure 7 and Figure 8).
Case 1: The asymptotic behavior of the estimators of θ ^ n and μ ^ n when μ is known.
Case 2: The asymptotic behavior of the estimators of θ ^ n and μ ^ n when both parameters are unknown.
From the above figures, it can be observed that, for different values of H, the numerical simulation results for the convergence and asymptotic properties of the estimators of μ and θ are largely consistent with the theoretical predictions. The discrepancies are minor, indicating that the obtained estimates exhibit a high degree of accuracy.
In addition, to investigate the asymptotic behavior of the proposed estimators for different sample sizes, we considered three sample sizes: n = 1000 , 2000, and 3000. The comparison of theoretical variance with empirical variance, as well as the corresponding errors, was carried out. The specific experimental design is outlined as follows:
  • Table 1: Theoretical variance, empirical variance, and their errors for parameter μ when θ = 9 is known.
  • Table 2: Theoretical variance, empirical variance, and their errors for parameter θ when μ = 4 is known.
  • Table 3: Joint analysis of the variance estimates and errors for both parameters when μ and θ are unknown.

6.2. Empirical Analysis

To further evaluate the performance of the proposed model and estimation method in a real-world market setting, we conducted an empirical analysis using Heilan Home Co., Ltd. (Jiangyin, Jiangsu, China; stock code: 600398), a representative stock from the Chinese A-share market. Daily closing price data were retrieved via the Tushare Pro platform using Python, covering the period from 28 December 2000 to 26 August 2025. Data cleaning and preprocessing were carried out to ensure consistency. As supported by the theoretical results in Section 3, the estimators are consistent as nh → ∞; therefore, the full sample period was employed to guarantee robustness. The Hurst exponent of the stock return series was first estimated using the R/S method, yielding H = 0.629, which suggests the presence of long-memory effects.
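The R/S step can be carried out with a textbook rescaled-range implementation such as the following sketch (our code, not the authors'; the window sizes and the i.i.d. sanity check are illustrative choices):

```python
import numpy as np

def hurst_rs(x, min_win=8):
    """Estimate the Hurst exponent of a series x by the rescaled-range (R/S)
    method: regress log(mean R/S) on log(window size)."""
    x = np.asarray(x, dtype=float)
    sizes, rs_vals = [], []
    w = min_win
    while w <= len(x) // 2:
        rs = []
        for start in range(0, len(x) - w + 1, w):
            chunk = x[start:start + w]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation
            r = dev.max() - dev.min()               # range
            s = chunk.std(ddof=1)                   # sample std
            if s > 0:
                rs.append(r / s)
        sizes.append(w)
        rs_vals.append(np.mean(rs))
        w *= 2                                      # dyadic window sizes
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

# Sanity check on i.i.d. Gaussian returns: the estimate should be near 1/2
rng = np.random.default_rng(0)
print(hurst_rs(rng.standard_normal(4096)))
```

Note that plain R/S is upward-biased in small samples (the Anis–Lloyd correction addresses this), so estimates on i.i.d. noise typically land somewhat above 0.5; applied to long-memory return series it gives values such as the H = 0.629 reported above.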
Based on this, the key model parameters μ ^ n and θ ^ n were estimated within the quasi-likelihood framework proposed in this paper. To provide an intuitive evaluation of model fit, simulated price trajectories were generated in MATLAB using the estimated parameters and compared with the actual observed closing prices. The comparison demonstrated that the model captured the overall price dynamics effectively, thereby confirming both the applicability of the fractional Black–Scholes framework and the reliability of the proposed quasi-likelihood estimation method on real financial data. Furthermore, we simulated stock price trajectories using both the fractional Brownian motion model proposed in this study and the classical Black–Scholes model. The comparative results are presented in Figure 9 and Figure 10. As illustrated, our proposed model provides a notably better fit to the observed price dynamics, particularly in capturing volatility clustering and the long-memory behavior inherent in the price process. These results further highlight the advantages and practical applicability of our model in financial data modeling and empirical analysis.

7. Conclusions

In this paper, we studied quasi-likelihood estimation for the fractional Black–Scholes model driven by fractional Brownian motion. Based on discrete observations of the geometric fractional Brownian motion, we constructed the quasi-likelihood function and derived the estimators μ ^ n and θ ^ n . We further analyzed the asymptotic properties of these estimators, including strong consistency and asymptotic normality, considering both cases where μ was known or unknown. Numerical simulations and an empirical analysis indicated that the quasi-likelihood estimation method provided accurate parameter estimates under high-frequency observations, while effectively capturing volatility clustering and long-memory characteristics of the price process. These results demonstrate the effectiveness and applicability of the proposed model and estimation methodology in financial data modeling and empirical studies. In summary, this study extended the application of fractional Brownian motion models in financial parameter estimation and provided a feasible methodological framework for estimating parameters in complex stochastic systems, offering both theoretical insights and practical guidance for future research.

Author Contributions

Conceptualization, W.L. and L.Y.; methodology, W.L., Y.X. and L.Y.; software, W.L. and Y.X.; validation, W.L. and Y.X.; formal analysis, L.Y.; writing—original draft preparation, W.L. and Y.X.; writing—review and editing, W.L., Y.X. and L.Y.; visualization, W.L. and Y.X.; supervision, L.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant Nos. 11971101, 12171081) and Shanghai Natural Science Foundation (Grant No. 24ZR1402900).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank the editor and the referees for their valuable comments and suggestions, which greatly improved the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mandelbrot, B.B.; Van Ness, J.W. Fractional Brownian motions, fractional noises and applications. SIAM Rev. 1968, 10, 422–437. [Google Scholar] [CrossRef]
  2. Rogers, L.C.G. Arbitrage with fractional Brownian motion. Math. Financ. 1997, 7, 95–105. [Google Scholar]
  3. Cheridito, P. Arbitrage in fractional Brownian motion models. Financ. Stochastics 2003, 7, 533–553. [Google Scholar] [CrossRef]
  4. Hu, Y.; Øksendal, B. Fractional white noise calculus and applications to finance. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 2003, 6, 1–32. [Google Scholar] [CrossRef]
  5. Bender, C.; Elliott, R.J. Arbitrage in a discrete version of the Wick-fractional Black-Scholes market. Math. Oper. Res. 2004, 29, 935–945. [Google Scholar]
  6. Biagini, F.; Hu, Y.; Øksendal, B.; Zhang, T. Stochastic Calculus for Fractional Brownian Motion and Applications; Springer Science & Business Media: London, UK, 2008. [Google Scholar]
  7. Björk, T.; Hult, H. A note on Wick products and the fractional Black-Scholes model. Financ. Stochastics 2005, 9, 197–209. [Google Scholar]
  8. Elliott, R.J.; Chan, L. Perpetual American options with fractional Brownian motion. Quant. Financ. 2003, 4, 123. [Google Scholar] [CrossRef]
  9. Greene, M.T.; Fielitz, B.D. Long-term dependence in common stock returns. J. Financ. Econ. 1977, 4, 339–349. [Google Scholar]
  10. Necula, C. Option Pricing in a Fractional Brownian Motion Environment. 2002. Available online: https://ssrn.com/abstract=1286833 (accessed on 11 September 2025).
  11. Lo, A.W. Long-term memory in stock market prices. Econom. J. Econom. Soc. 1991, 59, 1279–1313. [Google Scholar]
  12. Mishura, Y.; Shevchenko, G. The rate of convergence for Euler approximations of solutions of stochastic differential equations driven by fractional Brownian motion. Stochastics Int. J. Probab. Stoch. Process. 2008, 80, 489–511. [Google Scholar]
  13. Izaddine, H.G.; Deme, S.; Dabye, A.S. Analysis of fractional Black-Scholes, Ornstein-Uhlenbeck, and Langevin models: A minimum distance estimation approach. Gulf J. Math. 2025, 19, 217–250. [Google Scholar] [CrossRef]
  14. Alos, E.; Mazet, O.; Nualart, D. Stochastic calculus with respect to Gaussian processes. Ann. Probab. 2001, 29, 766–801. [Google Scholar] [CrossRef]
  15. Bender, C. An Itô formula for generalized functionals of a fractional Brownian motion with arbitrary Hurst parameter. Stoch. Process. Their Appl. 2003, 104, 81–106. [Google Scholar] [CrossRef]
  16. Cheridito, P.; Nualart, D. Stochastic integral of divergence type with respect to fractional Brownian motion with Hurst parameter H ∈ (0, 1/2). Ann. De L’Institut Henri Poincare (B) Probab. Stat. 2005, 41, 1049–1081. [Google Scholar] [CrossRef]
  17. Gradinaru, M.; Nourdin, I.; Russo, F.; Vallois, P. m-order integrals and generalized Itô’s formula; the case of a fractional brownian motion with any Hurst index. Ann. De L’Institut Henri Poincare (B) Probab. Stat. 2005, 41, 781–806. [Google Scholar] [CrossRef]
  18. Nourdin, I. Selected Aspects of Fractional Brownian Motion; Bocconi and Springer Series; Springer: Milan, Italy, 2012. [Google Scholar]
  19. Nualart, D. Malliavin Calculus and Related Topics, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
  20. Tudor, C. Analysis of Variations for Self-Similar Processes: A Stochastic Calculus Approach; Springer: Cham, Switzerland, 2013. [Google Scholar]
  21. Etemadi, N. On the laws of large numbers for nonnegative random variables. J. Multivar. Anal. 1983, 13, 187–193. [Google Scholar] [CrossRef]
  22. Yan, L.; Liu, J.; Chen, C. The generalized quadratic covariation for fractional Brownian motion with Hurst index less than 1/2. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 2014, 17, 1450030. [Google Scholar] [CrossRef]
  23. Breuer, P.; Major, P. Central limit theorems for non-linear functionals of Gaussian fields. J. Multivar. Anal. 1983, 13, 425–441. [Google Scholar] [CrossRef]
  24. Dobrushin, R.L.; Major, P. Non-central limit theorems for non-linear functionals of Gaussian fields. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 1979, 50, 27–52. [Google Scholar] [CrossRef]
  25. Giraitis, L.; Surgailis, D. CLT and other limit theorems for functionals of Gaussian processes. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 1985, 70, 191–212. [Google Scholar] [CrossRef]
  26. Nourdin, I. Asymptotic Behavior of Weighted Quadratic and Cubic Variations of Fractional Brownian Motion. Ann. Probab. 2008, 36, 2159–2175. [Google Scholar] [CrossRef]
  27. Nourdin, I.; Réveillac, A. Asymptotic Behavior of Weighted Quadratic Variations of Fractional Brownian Motion: The Critical Case H=1/4. Ann. Probab. 2009, 37, 2200–2230. [Google Scholar] [CrossRef]
Figure 1. Asymptotic behavior of the estimators θ ^ n when μ is known under H = 0.4 . (a) Plot of θ ^ n ; (b) quantile–quantile plot of θ ^ n ; (c) asymptotic distribution of θ ^ n .
Figure 2. Asymptotic behavior of the estimators θ ^ n when μ is known under H = 0.6 . (a) Plot of θ ^ n ; (b) quantile–quantile plot of θ ^ n ; (c) asymptotic distribution of θ ^ n .
Figure 3. Asymptotic behavior of the estimators μ ^ n when θ is known under H = 0.4 . (a) Plot of μ ^ n ; (b) quantile–quantile plot of μ ^ n ; (c) the asymptotic distribution of μ ^ n .
Figure 4. Asymptotic behavior of the estimators μ ^ n when θ is known under H = 0.6 . (a) Plot of μ ^ n ; (b) quantile–quantile plot of μ ^ n ; (c) asymptotic distribution of μ ^ n .
Figure 5. Asymptotic behavior of the estimators μ ^ n when both parameters are unknown under H = 0.4 . (a) Plot of μ ^ n ; (b) quantile–quantile plot of μ ^ n ; (c) asymptotic distribution of μ ^ n .
Figure 6. Asymptotic behavior of the estimators θ ^ n when both parameters are unknown under H = 0.4 . (a) Plot of θ ^ n ; (b) quantile–quantile plot of θ ^ n ; (c) asymptotic distribution of θ ^ n .
Figure 7. Asymptotic behavior of the estimators μ ^ n when both parameters are unknown under H = 0.6 . (a) Plot of μ ^ n ; (b) quantile–quantile plot of μ ^ n ; (c) asymptotic distribution of μ ^ n .
Figure 8. Asymptotic behavior of the estimators θ ^ n when both parameters are unknown under H = 0.6 . (a) Plot of θ ^ n ; (b) quantile–quantile plot of θ ^ n ; (c) asymptotic distribution of θ ^ n .
Figure 9. Parameter estimation and price comparison for stock 600398. (a) μ ^ n for stock 600398; (b) θ ^ n for stock 600398; (c) stock 600398: comparison of real and simulated prices.
Figure 10. Comparison of classical and fractional Black–Scholes models. (A) Plot of 600398 from the fractional Black–Scholes model; (B) Plot of 600398 from the classical Black–Scholes model.
Table 1. Comparison of theoretical and empirical variances of μ ^ n under known θ = 9 and various Hurst indices.

| H | Theoretical (n = 1000) | Empirical (n = 1000) | Abs. Error (n = 1000) | Theoretical (n = 2000) | Empirical (n = 2000) | Abs. Error (n = 2000) | Theoretical (n = 3000) | Empirical (n = 3000) | Abs. Error (n = 3000) |
|---|---|---|---|---|---|---|---|---|---|
| 0.2 | 1.4264 | 1.573 | 0.14663 | 0.81925 | 0.90776 | 0.088501 | 0.10021 | 0.10194 | 0.00174 |
| 0.4 | 2.2607 | 2.4086 | 0.14788 | 1.4915 | 1.6008 | 0.10928 | 1.1694 | 1.1947 | 0.025295 |
| 0.6 | 3.583 | 3.6437 | 0.060714 | 2.7154 | 2.6693 | 0.0461 | 2.3088 | 2.3419 | 0.03308 |
| 0.75 | 16.0045 | 16.2563 | 0.25178 | 13.4581 | 13.221 | 0.2371 | 12.1608 | 12.2322 | 0.071373 |
| 0.8 | 14.264 | 14.3885 | 0.12446 | 12.4176 | 12.5023 | 0.0847 | 11.4503 | 11.5162 | 0.065843 |
Table 2. Comparison of theoretical and empirical variances of θ ^ n under known μ = 4 and various Hurst indices.

| H | Theoretical (n = 1000) | Empirical (n = 1000) | Abs. Error (n = 1000) | Theoretical (n = 2000) | Empirical (n = 2000) | Abs. Error (n = 2000) | Theoretical (n = 3000) | Empirical (n = 3000) | Abs. Error (n = 3000) |
|---|---|---|---|---|---|---|---|---|---|
| 0.2 | 0.20041 | 0.19597 | 0.0044404 | 0.10021 | 0.096571 | 0.0036343 | 0.066803 | 0.069968 | 0.0031643 |
| 0.4 | 0.16829 | 0.16116 | 0.0071283 | 0.084143 | 0.087156 | 0.0030125 | 0.056095 | 0.055172 | 0.00092321 |
| 0.6 | 0.17518 | 0.18504 | 0.0098601 | 0.087612 | 0.082614 | 0.0049982 | 0.058414 | 0.057796 | 0.00061751 |
| 0.75 | 0.51431 | 0.31398 | 0.20033 | 0.27294 | 0.17281 | 0.10013 | 0.18812 | 0.11809 | 0.070025 |
| 0.8 | 0.34117 | 0.46301 | 0.12184 | 0.34117 | 0.28088 | 0.060296 | 0.23582 | 0.25174 | 0.01592 |
Table 3. Comparison of theoretical and empirical variances of θ ^ n and μ ^ n under various Hurst indices and sample sizes.

| H | Estimator | Theoretical (n = 1000) | Empirical (n = 1000) | Abs. Error (n = 1000) | Theoretical (n = 2000) | Empirical (n = 2000) | Abs. Error (n = 2000) | Theoretical (n = 3000) | Empirical (n = 3000) | Abs. Error (n = 3000) |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.2 | θ ^ n | 0.20041 | 0.19528 | 0.0051278 | 0.10021 | 0.09621 | 0.0039946 | 0.066803 | 0.065901 | 0.00090264 |
| 0.2 | μ ^ n | 1.4264 | 1.4461 | 0.01974 | 0.81925 | 0.80864 | 0.01061 | 0.066803 | 0.070016 | 0.0032127 |
| 0.4 | θ ^ n | 0.16829 | 0.15938 | 0.0089078 | 0.084143 | 0.087156 | 0.0030125 | 0.056095 | 0.05876 | 0.0026642 |
| 0.4 | μ ^ n | 2.2607 | 2.4223 | 0.16156 | 1.4915 | 1.5998 | 0.10828 | 1.1694 | 1.1819 | 0.012476 |
| 0.6 | θ ^ n | 0.17518 | 0.24192 | 0.066742 | 0.087612 | 0.10851 | 0.020897 | 0.058414 | 0.070423 | 0.012009 |
| 0.6 | μ ^ n | 3.583 | 3.6574 | 0.074434 | 2.7154 | 2.6748 | 0.040622 | 2.3088 | 2.3434 | 0.034604 |
| 0.75 | θ ^ n | 0.51431 | 0.62807 | 0.11376 | 0.27294 | 0.17281 | 0.10013 | 0.18812 | 0.25422 | 0.066106 |
| 0.75 | μ ^ n | 16.0045 | 16.3175 | 0.31298 | 13.4581 | 13.2418 | 0.21634 | 12.1608 | 12.2428 | 0.081992 |
| 0.8 | θ ^ n | 0.48756 | 1.5388 | 1.0512 | 0.48756 | 1.1108 | 0.62322 | 0.48756 | 1.0498 | 0.56228 |
| 0.8 | μ ^ n | 14.264 | 14.4408 | 0.1768 | 12.4176 | 12.5235 | 0.10598 | 11.4503 | 11.5296 | 0.079271 |
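The Monte Carlo comparisons summarised in Tables 1–3 rest on repeated simulation of fractional Brownian motion sample paths at the discrete times t_i = ih. The sketch below shows one standard way to generate such paths, Cholesky factorisation of the exact fBm covariance E[B_s^H B_t^H] = (s^{2H} + t^{2H} − |t − s|^{2H})/2, and checks the empirical variance of B_T^H against its theoretical value T^{2H}. This is an illustrative sketch only: the function name `fbm_paths` and all parameter choices are ours, and it does not reproduce the authors' quasi-likelihood estimation code.

```python
import numpy as np

def fbm_paths(n_steps, H, T=1.0, n_paths=500, rng=None):
    """Simulate fBm paths on (0, T] by Cholesky factorisation of the
    exact covariance matrix (O(n^3); adequate for moderate n_steps)."""
    rng = np.random.default_rng(rng)
    t = np.linspace(T / n_steps, T, n_steps)
    # Covariance of fBm: E[B_s B_t] = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    # Tiny ridge guards against round-off in the factorisation
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
    z = rng.standard_normal((n_paths, n_steps))
    # Each row is one path (B_{t_1}, ..., B_{t_n}); covariance is L L^T = cov
    return t, z @ L.T

t, B = fbm_paths(100, H=0.6, n_paths=2000, rng=42)
emp_var = B[:, -1].var()        # empirical Var(B_T^H) across paths
theo_var = t[-1]**(2 * 0.6)     # theoretical Var(B_T^H) = T^{2H}
print(emp_var, theo_var)
```

With the paths in hand, a study like the one behind Tables 1–3 would evaluate the estimators on each simulated path and tabulate the empirical variance across replications against the theoretical asymptotic variance.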

Share and Cite

MDPI and ACS Style

Lu, W.; Yan, L.; Xia, Y. Quasi-Likelihood Estimation in the Fractional Black–Scholes Model. Mathematics 2025, 13, 2984. https://doi.org/10.3390/math13182984