Berry–Esseen Bounds of the Quasi Maximum Likelihood Estimators for the Discretely Observed Diffusions

by
Jaya P. N. Bishwal
Department of Mathematics and Statistics, University of North Carolina at Charlotte, 376 Fretwell Bldg., 9201 University City Blvd., Charlotte, NC 28223-0001, USA
AppliedMath 2022, 2(1), 39-53; https://doi.org/10.3390/appliedmath2010003
Submission received: 2 December 2021 / Revised: 22 December 2021 / Accepted: 7 January 2022 / Published: 8 January 2022

Abstract

For stationary ergodic diffusions satisfying nonlinear homogeneous Itô stochastic differential equations, this paper obtains Berry–Esseen bounds on the rates of convergence to normality of the distributions of the quasi maximum likelihood estimators based on stochastic Taylor approximation, under some regularity conditions, when the diffusion is observed at equally spaced dense time points over a long time interval (the high-frequency regime). It shows that the higher-order stochastic Taylor approximation-based estimators perform better than the basic Euler approximation in the sense of having smaller asymptotic variance.

1. Introduction and Preliminaries

Parameter estimation in diffusion processes based on discrete observations has become an active area of investigation in financial econometrics and mathematical biology, since the data available in finance and biology are high-frequency and discrete, though the models are continuous. For a treatise on this subject, see Bishwal (2008, 2021) [1,2].
Consider the Itô stochastic differential equation
$$dX_t = f(\theta, X_t)\,dt + dW_t, \quad t \ge 0, \qquad X_0 = x_0, \tag{1}$$
where $\{W_t, t \ge 0\}$ is a one-dimensional standard Wiener process, $\theta \in \Theta$, $\Theta$ is a compact subset of $\mathbb{R}$, and $f$ is a known real-valued function defined on $\Theta \times \mathbb{R}$; the unknown parameter $\theta$ is to be estimated on the basis of observations of the process $\{X_t, t \ge 0\}$. Let $\theta_0$ be the true value of the parameter, which lies in the interior of $\Theta$. We assume that the process $\{X_t, t \ge 0\}$ is observed at $0 = t_0 < t_1 < \cdots < t_n = T$ with $\Delta t_i := t_i - t_{i-1} = T/n = h$ and $T = d\,n^{1/2}$ for some fixed real number $d > 0$. We estimate $\theta$ from the observations $\{X_{t_0}, X_{t_1}, \ldots, X_{t_n}\}$.
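For readers who wish to experiment numerically, the sampling design above (equispaced observations with $T = d\,n^{1/2}$) can be reproduced by a simple Euler–Maruyama simulation. This is a minimal sketch assuming the illustrative linear drift $f(\theta, x) = -\theta x$; the paper's results cover general nonlinear drifts satisfying the assumptions stated below.

```python
import numpy as np

# Minimal sketch: Euler-Maruyama path of dX = f(theta, X) dt + dW on [0, T]
# with T = d * sqrt(n), sampled at the n + 1 equispaced design points.
# The linear drift f(theta, x) = -theta * x is an illustrative stand-in.

def simulate_path(theta, n, d=1.0, x0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    T = d * np.sqrt(n)
    h = T / n                        # Delta t_i = T / n = h
    x = np.empty(n + 1)
    x[0] = x0
    dW = rng.normal(0.0, np.sqrt(h), size=n)
    for i in range(n):
        x[i + 1] = x[i] - theta * x[i] * h + dW[i]
    return x, h

path, h = simulate_path(theta=1.0, n=1000)
```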
The conditional least squares estimator (CLSE) of θ is defined as
$$\theta_{n,T} := \arg\min_{\theta \in \Theta} Q_{n,T}(\theta), \quad \text{where} \quad Q_{n,T}(\theta) = \sum_{i=1}^{n} \frac{\big(X_{t_i} - X_{t_{i-1}} - f(\theta, X_{t_{i-1}})\,h\big)^2}{\Delta t_i}.$$
This estimator was first studied by Dorogovcev (1976) [3], who obtained its weak consistency under some regularity conditions as $T \to \infty$ and $T/n \to 0$. Kasonga (1988) [4] obtained the strong consistency of the CLSE under some regularity conditions as $n \to \infty$, assuming that $T = d\,n^{1/2}$ for some fixed real number $d > 0$. Prakasa Rao (1983) [5] obtained asymptotic normality of the CLSE as $T \to \infty$ and $T/n^{1/2} \to 0$.
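For a concrete feel of the CLSE, the following sketch simulates an Ornstein–Uhlenbeck path exactly and solves the normal equation of $Q_{n,T}$. The linear drift $f(\theta, x) = -\theta x$ is an assumption made here purely for illustration, because it makes the minimizer explicit; the general case requires numerical minimization.

```python
import numpy as np

# Synthetic data: exact Ornstein-Uhlenbeck transition sampling,
# drift f(theta, x) = -theta * x (illustrative stand-in for the general f).
rng = np.random.default_rng(42)
theta0, n = 1.0, 5000
h = n ** -0.5                      # h = T/n with T = sqrt(n) (d = 1)
a = np.exp(-theta0 * h)            # exact one-step AR coefficient
s = np.sqrt((1.0 - a**2) / (2.0 * theta0))
x = np.empty(n + 1)
x[0] = 0.0
for i in range(n):
    x[i + 1] = a * x[i] + s * rng.normal()

# Q_{n,T} is quadratic in theta for this drift, so the CLSE solves a
# normal equation: theta_hat = -sum(dX * X_prev) / (h * sum(X_prev^2)).
incr, xprev = np.diff(x), x[:-1]
theta_hat = -np.sum(incr * xprev) / (h * np.sum(xprev**2))
```

With long time span $T = \sqrt{n}$ and small step $h$, the estimate should fall close to the true value $\theta_0 = 1$.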
Florens-Zmirou (1989) [6] studied the minimum contrast estimator, based on a Euler–Maruyama-type first-order approximate discrete time scheme of the SDE (1), which is given by
$$Z_{t_i} - Z_{t_{i-1}} = f(\theta, Z_{t_{i-1}})(t_i - t_{i-1}) + W_{t_i} - W_{t_{i-1}}, \quad i \ge 1, \quad Z_0 = X_0.$$
The log-likelihood function of $\{Z_{t_i}, 0 \le i \le n\}$ is given by
$$L_{n,T} = C - \sum_{i=1}^{n} \frac{\big(Z_{t_i} - Z_{t_{i-1}} - f(\theta, Z_{t_{i-1}})\,h\big)^2}{2\,\Delta t_i},$$
where $C$ is a constant independent of $\theta$. A contrast for the estimation of $\theta$ is derived from the above log-likelihood by substituting $\{Z_{t_i}, 0 \le i \le n\}$ with $\{X_{t_i}, 0 \le i \le n\}$ and dropping the sign and the constant. The resulting contrast is
$$H_{n,T}(\theta) = \sum_{i=1}^{n} \frac{\big(X_{t_i} - X_{t_{i-1}} - f(\theta, X_{t_{i-1}})\,h\big)^2}{2\,\Delta t_i}$$
and the resulting minimum contrast estimator, called the Euler–Maruyama estimator, is given by
$$\check{\theta}_{n,T} := \arg\min_{\theta \in \Theta} H_{n,T}(\theta).$$
Florens-Zmirou (1989) [6] showed the $L^2$-consistency of this estimator as $T \to \infty$ and $T/n \to 0$, and its asymptotic normality as $T \to \infty$ and $T/n^{2/3} \to 0$.
Notice that the contrast $H_{n,T}$ would be the log-likelihood of $(X_{t_i}, 0 \le i \le n)$ if the transition probability, given $X_{t_{i-1}} = x$, were $N(x + f(\theta, x)\,h,\; h)$. This led Kessler (1997) [7] to consider a Gaussian approximation of the transition density. The most natural one is achieved by choosing its mean and variance to be the mean and variance of the transition density. Thus, the transition density is approximated by $N(E(X_{t_i}|X_{t_{i-1}}),\; h)$, which produces the contrast
$$K_{n,T}(\theta) = \sum_{i=1}^{n} \frac{\big(X_{t_i} - E(X_{t_i}|X_{t_{i-1}})\big)^2}{2\,\Delta t_i}.$$
Since the transition density is unknown, in general there is no closed-form expression for $E(X_{t_i}|X_{t_{i-1}})$. Using the stochastic Taylor formula obtained in Florens-Zmirou (1989) [6], Kessler obtained a closed-form approximation of $E(X_{t_i}|X_{t_{i-1}})$. The contrast $H_{n,T}$ is an example of such an approximation, with $E(X_{t_i}|X_{t_{i-1}}) \approx X_{t_{i-1}} + h\,f(\theta, X_{t_{i-1}})$.
The resulting minimum contrast estimator, which is also the quasi-maximum likelihood estimator (QMLE), is given by
$$\theta_{n,T} := \arg\min_{\theta \in \Theta} K_{n,T}(\theta).$$
Kessler (1997) [7] showed the $L^2$-consistency of the estimator as $T \to \infty$ and $T/n \to 0$, and its asymptotic normality as $T \to \infty$ and $T/n^{(p-1)/p} \to 0$ for an arbitrary integer $p$.
Denote
$$\mu(\theta, X_{t_{i-1}}) := E(X_{t_i}|X_{t_{i-1}}), \qquad \mu(\theta, x) := E(X_{t_i}|X_{t_{i-1}} = x),$$
which is the mean function of the transition probability distribution. Hence, the contrast is given by
$$K_{n,T}(\theta) = \sum_{i=1}^{n} \frac{\big(X_{t_i} - \mu(\theta, X_{t_{i-1}})\big)^2}{2\,\Delta t_i}.$$
If continuous observation of $\{X_t\}$ on the interval $[0, T]$ were available, then the likelihood function of $\theta$ would be
$$L_T(\theta) = \exp\left\{ \int_0^T f(\theta, X_t)\,dX_t - \frac{1}{2}\int_0^T f^2(\theta, X_t)\,dt \right\} \tag{9}$$
(see Liptser and Shiryayev (1977) [8]). Since we have discrete data, we have to approximate the likelihood to obtain the MLE. Taking Itô-type approximation of the stochastic integral and rectangle rule approximation of the ordinary integral in (9), we obtain the approximate likelihood function
$$\hat{L}_{n,T}(\theta) := \exp\left\{ \sum_{i=1}^{n} f(\theta, X_{t_{i-1}})(X_{t_i} - X_{t_{i-1}}) - \frac{h}{2}\sum_{i=1}^{n} f^2(\theta, X_{t_{i-1}}) \right\}.$$
The Itô approximate maximum likelihood estimator (IAMLE) based on $\hat{L}_{n,T}$ is defined as
$$\hat{\theta}_{n,T} := \arg\max_{\theta \in \Theta} \hat{L}_{n,T}(\theta).$$
Weak consistency and asymptotic normality of this estimator were obtained by Yoshida (1992) [9] as $T \to \infty$ and $T/n \to 0$.
Note that the CLSE, the Euler–Maruyama estimator and the IAMLE are the same estimator (see Shoji (1997) [10]). For the Ornstein–Uhlenbeck process, Bishwal and Bose (2001) [11] studied the rates of weak convergence of approximate maximum likelihood estimators, which are of conditional least squares type. For the Ornstein–Uhlenbeck process, Bishwal (2010) [12] studied the uniform rate of weak convergence for the minimum contrast estimator, which has a close connection to the Stratonovich–Milstein scheme. Bishwal (2009) [13] studied Berry–Esseen inequalities for conditional least squares estimator in discretely observed nonlinear diffusions. Bishwal (2009) [14] studied the Stratonovich-based approximate M-estimator of discretely sampled nonlinear diffusions. Bishwal (2011) [15] studied Milstein approximation of the posterior density of diffusions. Bishwal (2010) [16] studied conditional least squares estimation in nonlinear diffusion processes based on Poisson sampling. Bishwal (2011) [17] obtained some new estimators of integrated volatility using the stochastic Taylor-type schemes, which could be useful for option pricing in stochastic volatility models; see also Bishwal (2021) [2].
Throughout the paper, a prime denotes the derivative with respect to $\theta$, a dot denotes the derivative with respect to $x$, and $\vee$ denotes the maximum. In order to obtain a better estimator in terms of lower variance in Monte Carlo simulation, which may have a faster rate of convergence, we first use the algorithm proposed in Bishwal (2008) [1]. Note that the Itô integral and the Fisk–Stratonovich (FS, henceforth) integral are connected by the following relation (Fisk, while introducing the concept of the quasimartingale, used the trapezoidal approximation, and Stratonovich used the midpoint approximation; both converge to the same mean-square limit):
$$\int_0^T f(\theta, X_t)\,dX_t = \int_0^T f(\theta, X_t) \circ dX_t - \frac{1}{2}\int_0^T \dot{f}(\theta, X_t)\,dt,$$
where $\circ$ is the Itô circle denoting the FS integral. We transform the Itô integral (the limit of the rectangular approximation, which preserves the martingale property) in (9) to the FS integral, apply the FS-type trapezoidal approximation to the stochastic integral and the rectangular rule to the Lebesgue integrals, and obtain the approximate likelihood
$$\tilde{L}_{n,T}(\theta) := \exp\left\{ \frac{1}{2}\sum_{i=1}^{n} \big[f(\theta, X_{t_{i-1}}) + f(\theta, X_{t_i})\big](X_{t_i} - X_{t_{i-1}}) - \frac{h}{2}\sum_{i=1}^{n} \dot{f}(\theta, X_{t_{i-1}}) - \frac{h}{2}\sum_{i=1}^{n} f^2(\theta, X_{t_{i-1}}) \right\}$$
The Fisk–Stratonovich approximate maximum likelihood estimator (FSAMLE) based on $\tilde{L}_{n,T}$ is defined as
$$\tilde{\theta}_{n,T} := \arg\max_{\theta \in \Theta} \tilde{L}_{n,T}(\theta).$$
Weak consistency as $T \to \infty$ and $T/n \to 0$, and asymptotic normality as $T \to \infty$ and $T/n^{2/3} \to 0$, of the FSAMLE were shown in Bishwal (2008) [1]. Berry–Esseen bounds for the IAMLE and the FSAMLE for the Ornstein–Uhlenbeck process were obtained in Bishwal and Bose (2001) [11].
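The two discretized likelihoods can be compared numerically. The sketch below codes the Itô-rectangular objective $\hat l_{n,T}$ and the FS-trapezoidal objective $\tilde l_{n,T}$ for an assumed linear drift $f(\theta, x) = -\theta x$ (so $\dot f = -\theta$) and maximizes both on a grid; the drift choice and the grid bounds are illustrative assumptions, not from the paper.

```python
import numpy as np

def ito_loglik(theta, x, h):
    """Ito-rectangular approximate log-likelihood (IAMLE objective)."""
    f = -theta * x[:-1]
    return np.sum(f * np.diff(x)) - 0.5 * h * np.sum(f**2)

def fs_loglik(theta, x, h):
    """Fisk-Stratonovich trapezoidal approximate log-likelihood (FSAMLE objective)."""
    f = -theta * x                       # drift on the full grid, for the trapezoid
    fdot = -theta * np.ones(len(x) - 1)  # df/dx at the left endpoints
    return (0.5 * np.sum((f[:-1] + f[1:]) * np.diff(x))
            - 0.5 * h * np.sum(fdot)
            - 0.5 * h * np.sum(f[:-1]**2))

# Grid maximization of both objectives on a short simulated path.
rng = np.random.default_rng(1)
theta0, n = 1.0, 4000
h = n ** -0.5
x = np.empty(n + 1)
x[0] = 0.0
for i in range(n):
    x[i + 1] = x[i] - theta0 * x[i] * h + rng.normal(0.0, np.sqrt(h))

grid = np.linspace(0.1, 3.0, 291)
iamle = grid[np.argmax([ito_loglik(t, x, h) for t in grid])]
fsamle = grid[np.argmax([fs_loglik(t, x, h) for t in grid])]
```

Both maximizers should land near the true $\theta_0 = 1$; the FS correction term is the discrete counterpart of the $-\tfrac12\int \dot f\,dt$ conversion above.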
We shall use the following notation: $\Delta X_i = X_{t_i} - X_{t_{i-1}}$, $\Delta W_i = W_{t_i} - W_{t_{i-1}}$; $C$ is a generic constant independent of $h$, $n$ and other variables (it may depend on $\theta$). Throughout the paper, $\dot{f}$ denotes the derivative with respect to $x$ and $f'$ denotes the derivative with respect to $\theta$ of the function $f(\theta, x)$. Suppose that $\theta_0$ denotes the true value of the parameter and $\theta_0 \in \Theta$. We assume the following conditions:
Assumption 1.
(A1) $|f(\theta, x)| \le a(\theta)(1 + |x|)$,
          $|f(\theta, x) - f(\theta, y)| \le a(\theta)\,|x - y|$.
(A2) $|f(\theta, x) - f(\phi, x)| \le b(x)\,|\theta - \phi|$ for all $\theta, \phi \in \Theta$, $x \in \mathbb{R}$,
where $\sup_{\theta \in \Theta} |a(\theta)| = a < \infty$ and $E|b(X_0)|^r < \infty$ for any integer $r$.
(A3) The diffusion process $X$ is stationary and ergodic with invariant measure $\nu$; i.e., for any $g$ with $E_\nu|g(\cdot)| < \infty$,
$$\frac{1}{n}\sum_{i=1}^{n} g(X_{t_i}) \to E_\nu[g(X_0)] \quad \text{a.s. as } T \to \infty \text{ and } h \to 0.$$
(A4) $\sup_{t \ge 0} E|X_t|^q < \infty$ for all $q \ge 0$.
(A5) $E|f(\theta, X_0) - f(\theta_0, X_0)|^2 = 0$ if and only if $\theta = \theta_0$.
(A6) $f$ is continuously differentiable in $x$ up to order $p$ for all $\theta$.
(A7) $f(\cdot, x)$ and all its derivatives are three times continuously differentiable with respect to $\theta$ for all $x \in \mathbb{R}$. Moreover, these derivatives up to the third order with respect to $\theta$ are of polynomial growth in $x$, uniformly in $\theta$.
The Fisher information is given by
$$0 < I(\theta) := \int \big(f'(\theta, x)\big)^2\,d\nu(x) < \infty$$
and, for any $\delta > 0$ and any compact $\bar{\Theta} \subseteq \Theta$,
$$\inf_{\theta_0 \in \bar{\Theta}}\ \sup_{|\theta - \theta_0| > \delta} E_{\theta_0}\big|f(\theta, X_0) - f(\theta_0, X_0)\big|^2 > 0.$$
(A8) The Malliavin covariance of the process is nondegenerate.
The Malliavin covariance matrix of a smooth random variable $S$ is defined as $\gamma_T = \int_0^T D_t S\,[D_t S]^*\,dt$, where $D_t$ is the Malliavin derivative. The Malliavin covariance is nondegenerate if $\det(\gamma_T)$ is almost surely positive and, for any $m \ge 1$, $\|1/\det(\gamma_T)\|_{L^m} < \infty$. The Malliavin covariance associated with the functional $\omega \mapsto X(t, \omega)$ is given by $0 < \sigma^2(t) = Y_t^2 \int_0^t f^2(\theta, X_s)\,Z_s^2\,ds < \infty$, where $Y_t$ and $Z_t$, respectively, satisfy
$$dY_t = \dot{f}(\theta, X_t)\,Y_t\,dt + Y_t\,dW_t, \quad Y_0 = 1, \qquad dZ_t = -\dot{f}(\theta, X_t)\,Z_t\,dt - Z_t\,dW_t, \quad Z_0 = 1.$$
In the case of independent observations, in order to prove the validity of asymptotic expansion, one usually needs a certain regularity condition for the underlying distribution, such as the Cramér condition; see Bhattacharya and Ranga Rao (1976) [18]. This type of condition then ensures the regularity of the distribution and hence the smoothness assumption of the functional under the expectation whose martingale expansion is desired can be removed. This type of condition for dependent observations leads to the regularity of the distribution of a functional with nondegenerate Malliavin covariance, which is known in Malliavin calculus; see Ikeda and Watanabe (1989) [19] and Nualart (1995) [20]. Malliavin covariance is connected to the Hörmander condition, which is a sufficient condition for a second-order differential operator to be hypoelliptic; see Bally (1991) [21]. For operators with analytic coefficients, this condition turns out to be also necessary, but this is not true for general smooth coefficients.
More precisely, let $X$ be a differentiable $\mathbb{R}$-valued Wiener functional defined on a Wiener space. Assume that there exists a functional $\psi$ such that
$$\sup_u |u|^j\,\big|E[e^{iuX} X^k \psi]\big| < \infty, \quad j, k \in \mathbb{Z}_+.$$
Thus, it is a regularity condition on the characteristic function, which is a consequence of the nondegeneracy of the Malliavin covariance in the case of Wiener functionals. The functional $\psi$, a random variable satisfying $0 \le \psi \le 1$, is a truncation functional extracting from the Wiener space the portion on which the distribution is regular. If $X$ is almost regular, one may take $\psi$ nearly equal to one. Uniform nondegeneracy of the Malliavin covariance of the functional $T^{-1/2}\int_0^T f'(\theta_0, X_t)\,dW_t$ can be shown under (A8); see Yoshida (1997) [22].
Bishwal (2009) [13] obtained rates of convergence to normality of the Itô AMLE and the Fisk–Stratonovich AMLE of the orders $O\big(T^{-1/2} \vee T^2/n\big)$ and $O\big(T^{-1/2} \vee T^3/n^2\big)$, respectively, under the regularity conditions given above with $q > 16$ in (A4). We obtain the rate of convergence to normality, i.e., a Berry–Esseen bound, of the order $O\big(T^{-1/2} \vee T^{p+1}/n^p\big)$ for the QMLE $\theta_{n,T}$ for an arbitrary integer $p$.
We need the following lemma from Michel and Pfanzagl (1971) [23] to prove our main results.
Lemma 1.
Let $\xi$, $\zeta$ and $\eta$ be any three random variables on a probability space $(\Omega, \mathcal{F}, P)$ with $P(\eta > 0) = 1$. Then, for any $\epsilon > 0$, we have
$$(a)\quad \sup_x \big|P\{\xi + \zeta \le x\} - \Phi(x)\big| \le \sup_x \big|P\{\xi \le x\} - \Phi(x)\big| + P(|\zeta| > \epsilon) + \epsilon,$$
$$(b)\quad \sup_x \Big|P\Big\{\frac{\xi}{\eta} \le x\Big\} - \Phi(x)\Big| \le \sup_x \big|P\{\xi \le x\} - \Phi(x)\big| + P\{|\eta - 1| > \epsilon\} + \epsilon.$$

2. Main Results

We start with some preliminary lemmas. Let $L$ denote the generator of the diffusion process: for $g \in C^2(\mathbb{R})$,
$$L g(x) := f(\theta, x)\,\dot{g}(x) + \frac{1}{2}\,\ddot{g}(x).$$
The $k$-th iterate of $L$ is denoted $L^k$; its domain is $C^{2k}(\mathbb{R})$. We set $L^0 = I$.
Stochastic Taylor formula (Kloeden and Platen (1992) [24]): for a $(p+1)$-times continuously differentiable function $g : \mathbb{R} \to \mathbb{R}$, we have, for $t \in [0, T]$ and $p = 1, 2, 3, \ldots$,
$$g(X_t) = g(X_0) + \sum_{k=1}^{p} \frac{t^k}{k!}\,L^k g(X_0) + \int_0^t \cdots \int_0^{s_2} L^{p+1} g(X_{s_1})\,ds_1 \cdots ds_{p+1}.$$
Lemma 2.
With $f(x) = x$ (the identity function), the stochastic Taylor expansion of $\mu(\theta, x)$ is given by
$$\mu(\theta, X_{t_{i-1}}) := E(X_{t_i}|X_{t_{i-1}}) = \sum_{k=0}^{p} \frac{h^k}{k!}\,L^k f(X_{t_{i-1}}) + R(\theta, h^{p+1}, X_{t_{i-1}}),$$
where $R$ denotes a function for which there exists a constant $C$ such that
$$\big|R(\theta, h^{p+1}, X_{t_{i-1}})\big| \le h^{p+1}\,C\,(1 + |X_{t_{i-1}}|)^C.$$
Proof. 
Applying the stochastic Taylor formula of Florens-Zmirou (1989, Lemma 1) [6], one obtains the result. See also Kloeden and Platen (1992) [24].
Consider the following special cases:
Euler scheme: for $p = 1$, $\mu(\theta, x) = L^0 f(x) + h\,L^1 f(x) + R(\theta, h^2, x)$.
Milstein scheme: for $p = 2$, $\mu(\theta, x) = L^0 f(x) + h\,L^1 f(x) + \frac{h^2}{2!}\,L^2 f(x) + R(\theta, h^3, x)$.
Simpson scheme: for $p = 4$, $\mu(\theta, x) = L^0 f(x) + h\,L^1 f(x) + \frac{h^2}{2!}\,L^2 f(x) + \frac{h^3}{3!}\,L^3 f(x) + \frac{h^4}{4!}\,L^4 f(x) + R(\theta, h^5, x)$.
Boole scheme: for $p = 6$,
$$\mu(\theta, x) = L^0 f(x) + h\,L^1 f(x) + \frac{h^2}{2!}\,L^2 f(x) + \frac{h^3}{3!}\,L^3 f(x) + \frac{h^4}{4!}\,L^4 f(x) + \frac{h^5}{5!}\,L^5 f(x) + \frac{h^6}{6!}\,L^6 f(x) + R(\theta, h^7, x).$$
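The scheme hierarchy above can be checked symbolically in a case where the conditional mean is known in closed form. Assuming the Ornstein–Uhlenbeck drift $f(\theta, x) = -\theta x$ (an illustrative case, not the paper's general setting), the conditional mean is exactly $x e^{-\theta h}$, and the $p = 4$ (Simpson) truncation reproduces its Taylor series in $h$:

```python
import sympy as sp

# Generator of the diffusion with unit diffusion coefficient:
# (L g)(x) = f(theta, x) g'(x) + g''(x) / 2.
x, th, h = sp.symbols("x theta h")
f = -th * x                      # illustrative OU drift

def L(g):
    return f * sp.diff(g, x) + sp.diff(g, x, 2) / 2

# Iterates L^k applied to the identity, k = 0..4 (Simpson-scheme order p = 4).
terms, g = [], x
for _ in range(5):
    terms.append(g)
    g = L(g)

mu_simpson = sum(h**k / sp.factorial(k) * t for k, t in enumerate(terms))

# For OU the conditional mean is exact: E(X_{t_i} | X_{t_{i-1}} = x) = x e^{-theta h}.
exact_series = sp.series(x * sp.exp(-th * h), h, 0, 5).removeO()
residual = sp.simplify(mu_simpson - exact_series)   # vanishes identically
```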
Remark 1.
For $p = 1$, $\mu(\theta, X_{t_{i-1}}) \approx X_{t_{i-1}} + h\,f(\theta, X_{t_{i-1}})$. This produces the CLSE. This estimator has been very well studied in the literature (see Shoji (1997) [10]).
Remark 2.
Note that the Milstein scheme is equivalent to Stratonovich approximation of the stochastic integral after converting the Itô integral to the Stratonovich integral.
Lemma 3.
For all $p \ge 2$, we have
$$E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} \big[\mu(\theta, X_{t_{i-1}}) - X_{t_{i-1}}\big] - \int_0^T f(\theta, X_t)\,dt \Big|^2 \le C\,\frac{T^{p+1}}{n^{p-1}}.$$
Proof. 
First, we show that, for $p = 2$,
$$E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} f(\theta, X_{t_{i-1}})\,\Delta t_i - \int_0^T f(\theta, X_t)\,dt \Big|^2 \le C\,\frac{T^4}{n^2}.$$
We emphasize that the Itô formula is a stochastic Taylor formula of order 2. By the Itô formula, we have
$$f(\theta_0, X_t) - f(\theta_0, X_{t_{i-1}}) = \int_{t_{i-1}}^{t} \dot{f}(\theta_0, X_u)\,dX_u + \frac{1}{2}\int_{t_{i-1}}^{t} \ddot{f}(\theta_0, X_u)\,du = \int_{t_{i-1}}^{t} \dot{f}(\theta_0, X_u)\,dW_u + \int_{t_{i-1}}^{t} \Big[\dot{f}(\theta_0, X_u)\,f(\theta_0, X_u) + \frac{1}{2}\ddot{f}(\theta_0, X_u)\Big]\,du =: \int_{t_{i-1}}^{t} \dot{f}(\theta_0, X_u)\,dW_u + \int_{t_{i-1}}^{t} F(\theta_0, X_u)\,du,$$
where
$$F(\theta_0, X_u) := \dot{f}(\theta_0, X_u)\,f(\theta_0, X_u) + \frac{1}{2}\ddot{f}(\theta_0, X_u).$$
We employ a Taylor expansion in a local neighborhood of $\theta_0$. Let $\theta = \theta_0 + T^{-1/2} u$, $u \in \mathbb{R}$. Then, we have
$$\begin{aligned} E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} f(\theta, X_{t_{i-1}})\,\Delta t_i - \int_0^T f(\theta, X_t)\,dt \Big|^2 &= E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} \big[f(\theta, X_t) - f(\theta, X_{t_{i-1}})\big]\,dt \Big|^2 \\ &= E \sup_{u} \Big| \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} \big[f(\theta_0 + T^{-1/2}u, X_t) - f(\theta_0 + T^{-1/2}u, X_{t_{i-1}})\big]\,dt \Big|^2 \\ &= E \sup_{u} \Big| \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} \big[f(\theta_0, X_t) - f(\theta_0, X_{t_{i-1}})\big]\,dt + T^{-1/2}u \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} \big[f'(\bar{\theta}, X_t) - f'(\bar{\theta}, X_{t_{i-1}})\big]\,dt \Big|^2 \\ &\le 2\,E \Big| \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} \big[f(\theta_0, X_t) - f(\theta_0, X_{t_{i-1}})\big]\,dt \Big|^2 + 2\,E \sup_{u} \Big| T^{-1/2}u \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} \big[f'(\bar{\theta}, X_t) - f'(\bar{\theta}, X_{t_{i-1}})\big]\,dt \Big|^2 \\ &= 2\,E \Big| \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} \Big( \int_{t_{i-1}}^{t} \dot{f}(\theta_0, X_u)\,dW_u + \int_{t_{i-1}}^{t} F(\theta_0, X_u)\,du \Big)\,dt \Big|^2 + 2\,G_1 \\ &\le 4\,E \Big| \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} \int_{t_{i-1}}^{t} \dot{f}(\theta_0, X_u)\,dW_u\,dt \Big|^2 + 4\,E \Big| \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} \int_{t_{i-1}}^{t} F(\theta_0, X_u)\,du\,dt \Big|^2 + 2\,G_1 \\ &=: 4\,(J_1 + J_2) + 2\,G_1, \end{aligned}$$
where
$$J_1 := E \Big| \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} \int_{t_{i-1}}^{t} \dot{f}(\theta_0, X_u)\,dW_u\,dt \Big|^2, \qquad J_2 := E \Big| \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} \int_{t_{i-1}}^{t} F(\theta_0, X_u)\,du\,dt \Big|^2,$$
$$G_1 := E \sup_{u} \Big| T^{-1/2}u \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} \big[f'(\bar{\theta}, X_t) - f'(\bar{\theta}, X_{t_{i-1}})\big]\,dt \Big|^2,$$
and $|\bar{\theta} - \theta_0| \le |\theta - \theta_0|$. Further,
$$E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} \big[\mu(\theta, X_{t_{i-1}}) - X_{t_{i-1}}\big] - \int_0^T f(\theta, X_t)\,dt \Big|^2 \le 2\,E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} f(\theta, X_{t_{i-1}})\,\Delta t_i - \int_0^T f(\theta, X_t)\,dt \Big|^2 + 2\,E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} \big[\mu(\theta, X_{t_{i-1}}) - X_{t_{i-1}}\big] - \sum_{i=1}^{n} f(\theta, X_{t_{i-1}})\,\Delta t_i \Big|^2.$$
By Lemma 2, we have
$$\mu(\theta, X_{t_{i-1}}) - X_{t_{i-1}} - h\,f(\theta, X_{t_{i-1}}) = \sum_{k=2}^{p} \frac{h^k}{k!}\,L^k f(\theta, X_{t_{i-1}}) + R(\theta, h^{p+1}, X_{t_{i-1}}).$$
Further,
$$\begin{aligned} \sum_{i=1}^{n} \big[\mu(\theta, X_{t_{i-1}}) - X_{t_{i-1}}\big] - \int_0^T f(\theta, X_t)\,dt &= \sum_{i=1}^{n} \big[\mu(\theta, X_{t_{i-1}}) - X_{t_{i-1}}\big] - \sum_{i=1}^{n} f(\theta, X_{t_{i-1}})\,\Delta t_i + \sum_{i=1}^{n} f(\theta, X_{t_{i-1}})\,\Delta t_i - \int_0^T f(\theta, X_t)\,dt \\ &= \sum_{i=1}^{n} \Big[ \sum_{k=2}^{p} \frac{h^k}{k!}\,L^k f(\theta, X_{t_{i-1}}) + R(\theta, h^{p+1}, X_{t_{i-1}}) \Big] + \sum_{i=1}^{n} f(\theta, X_{t_{i-1}})\,\Delta t_i - \int_0^T f(\theta, X_t)\,dt. \end{aligned}$$
Hence,
$$E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} \big[\mu(\theta, X_{t_{i-1}}) - X_{t_{i-1}}\big] - \int_0^T f(\theta, X_t)\,dt \Big|^2 \le 4\,(J_1 + J_2) + 2\,G_1 + 2\,E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} \Big[ \sum_{k=2}^{p} \frac{h^k}{k!}\,L^k f(\theta, X_{t_{i-1}}) + R(\theta, h^{p+1}, X_{t_{i-1}}) \Big] \Big|^2.$$
Observe that, with $B_{i,t} := \int_{t_{i-1}}^{t} \dot{f}(\theta_0, X_u)\,dW_u$, $1 \le i \le n$, we have
$$\begin{aligned} J_1 &= \sum_{i=1}^{n} E\Big( \int_{t_{i-1}}^{t_i} B_{i,t}\,dt \Big)^2 + \sum_{i \ne j} E\Big( \int_{t_{i-1}}^{t_i} B_{i,t}\,dt \int_{t_{j-1}}^{t_j} B_{j,t}\,dt \Big) \\ &\le \sum_{i=1}^{n} (t_i - t_{i-1}) \int_{t_{i-1}}^{t_i} E(B_{i,t}^2)\,dt \quad (\text{the last term being zero due to the orthogonality of the integrals}) \\ &\le \sum_{i=1}^{n} (t_i - t_{i-1}) \int_{t_{i-1}}^{t_i} \int_{t_{i-1}}^{t} E\big(\dot{f}(\theta_0, X_u)\big)^2\,du\,dt \\ &\le C\,\frac{T}{n} \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} (t - t_{i-1})\,dt \quad (\text{by (A4) and (A3)}) \\ &\le C\,\frac{T}{n} \sum_{i=1}^{n} (t_i - t_{i-1})^2 = C\,\frac{T^3}{n^2}. \end{aligned}$$
On the other hand, with $A_{i,t} := \int_{t_{i-1}}^{t} F(\theta_0, X_u)\,du$, $1 \le i \le n$, we have
$$\begin{aligned} J_2 &= E\Big| \sum_{i=1}^{n} \int_{t_{i-1}}^{t_i} A_{i,t}\,dt \Big|^2 = \sum_{i=1}^{n} E\Big( \int_{t_{i-1}}^{t_i} A_{i,t}\,dt \Big)^2 + \sum_{i \ne j} E\Big( \int_{t_{i-1}}^{t_i} A_{i,t}\,dt \int_{t_{j-1}}^{t_j} A_{j,t}\,dt \Big) \\ &\le \sum_{i=1}^{n} (t_i - t_{i-1}) \int_{t_{i-1}}^{t_i} E(A_{i,t}^2)\,dt + \sum_{i \ne j} \Big[ E\Big( \int_{t_{i-1}}^{t_i} A_{i,t}\,dt \Big)^2 E\Big( \int_{t_{j-1}}^{t_j} A_{j,t}\,dt \Big)^2 \Big]^{1/2} \\ &\le \sum_{i=1}^{n} (t_i - t_{i-1}) \int_{t_{i-1}}^{t_i} E(A_{i,t}^2)\,dt + \sum_{i \ne j} \Big[ (t_i - t_{i-1}) \int_{t_{i-1}}^{t_i} E(A_{i,t}^2)\,dt\; (t_j - t_{j-1}) \int_{t_{j-1}}^{t_j} E(A_{j,t}^2)\,dt \Big]^{1/2}. \end{aligned}$$
However, $E(A_{i,t}^2) \le C\,(t - t_{i-1})^2$ using (A4) and (A3). On substitution, the last term is dominated by
$$C \sum_{i=1}^{n} (t_i - t_{i-1})^4 + C \sum_{i \ne j} (t_i - t_{i-1})^2 (t_j - t_{j-1})^2 = C\,\frac{T^4}{n^3} + C\,\frac{n(n-1)\,T^4}{2\,n^4} \le C\,\frac{T^4}{n^2}.$$
Thus,
$$J_1 + J_2 \le C\,\frac{T^4}{n^2}.$$
By the same method, we have
$$G_1 \le C\,\frac{T^3}{n^2}.$$
Hence,
$$E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} f(\theta, X_{t_{i-1}})\,\Delta t_i - \int_0^T f(\theta, X_t)\,dt \Big|^2 \le 4\,(J_1 + J_2) + 2\,G_1 \le C\,\frac{T^4}{n^2}.$$
Thus, the proof for $p = 2$ is complete. Next, we consider the general case $p \ge 3$. Denote
$$J_3 := E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} \Big[ \sum_{k=2}^{p} \frac{h^k}{k!}\,L^k f(\theta, X_{t_{i-1}}) + R(\theta, h^{p+1}, X_{t_{i-1}}) \Big] \Big|^2.$$
We have
$$E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} \big[\mu(\theta, X_{t_{i-1}}) - X_{t_{i-1}}\big] - \int_0^T f(\theta, X_t)\,dt \Big|^2 \le 2\,E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} \big[\mu(\theta, X_{t_{i-1}}) - X_{t_{i-1}}\big] - \sum_{i=1}^{n} f(\theta, X_{t_{i-1}})\,\Delta t_i \Big|^2 + 2\,E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} f(\theta, X_{t_{i-1}})\,\Delta t_i - \int_0^T f(\theta, X_t)\,dt \Big|^2 \le 2\,J_3 + 4\,(J_1 + J_2) + 2\,G_1.$$
Observe that, by Lemma 2, we have
$$J_3 \le C\,\frac{T^{p+1}}{n^{p-1}}.$$
Thus, by combining the bounds for $J_1$, $J_2$, $J_3$ and $G_1$, we have
$$E \sup_{\theta \in \Theta} \Big| \sum_{i=1}^{n} \big[\mu(\theta, X_{t_{i-1}}) - X_{t_{i-1}}\big] - \int_0^T f(\theta, X_t)\,dt \Big|^2 \le C\,\frac{T^{p+1}}{n^{p-1}}.$$
The following lemma is from Bishwal (2008) [1].
Lemma 4.
Let
$$I_T(\theta) := \frac{1}{T\,I(\theta)} \int_0^T \big(f'(\theta, X_t)\big)^2\,dt.$$
Then, under the conditions (A1)–(A8),
$$\sup_{\theta \in \Theta} E\big[I_T(\theta) - 1\big]^2 \le C\,T^{-1}.$$
The following lemma follows from Theorem 7 in Yoshida (1997) [22].
Lemma 5.
Let
$$M_T := \frac{1}{\sqrt{T\,I(\theta_0)}} \int_0^T f'(\theta_0, X_t)\,dW_t.$$
Then, under the conditions (A1)–(A8),
$$\sup_x \big| P_{\theta_0}\{M_T \le x\} - \Phi(x) \big| \le C\,T^{-1/2}.$$
Our main result is the following theorem.
Theorem 1.
Under the conditions (A1)–(A8), for any $p \ge 1$, we have
$$\sup_x \Big| P_{\theta}\Big\{ \sqrt{T\,I(\theta)}\,(\theta_{n,T} - \theta) \le x \Big\} - \Phi(x) \Big| = O\Big( T^{-1/2} \vee \frac{T^{p+1}}{n^p} \Big).$$
Proof. 
We start with the cases $p = 1$ and $p = 2$. Let
$$\hat{l}_{n,T}(\theta) := \log \hat{L}_{n,T}(\theta) \quad \text{and} \quad \tilde{l}_{n,T}(\theta) := \log \tilde{L}_{n,T}(\theta).$$
By Taylor expansion, we have
$$\hat{l}'_{n,T}(\hat{\theta}_{n,T}) = \hat{l}'_{n,T}(\theta_0) + (\hat{\theta}_{n,T} - \theta_0)\,\hat{l}''_{n,T}(\bar{\theta}_{n,T}),$$
where $|\bar{\theta}_{n,T} - \theta_0| \le |\hat{\theta}_{n,T} - \theta_0|$. Since $\hat{l}'_{n,T}(\hat{\theta}_{n,T}) = 0$, we have
$$\sqrt{T\,I(\theta_0)}\,(\hat{\theta}_{n,T} - \theta_0) = \frac{ \frac{1}{\sqrt{T\,I(\theta_0)}}\,\hat{l}'_{n,T}(\theta_0) }{ -\frac{1}{T\,I(\theta_0)}\,\hat{l}''_{n,T}(\bar{\theta}_{n,T}) } = \frac{ \frac{1}{\sqrt{T\,I(\theta_0)}} \sum_{i=1}^{n} f'(\theta_0, X_{t_{i-1}})\,\Delta W_i }{ \frac{1}{T\,I(\theta_0)} \sum_{i=1}^{n} \big(f'(\bar{\theta}_{n,T}, X_{t_{i-1}})\big)^2\,\Delta t_i } =: \frac{M_{n,T}}{V_{n,T}}.$$
Note that
$$V_{n,T} = \frac{1}{T\,I(\theta_0)} \sum_{i=1}^{n} \big(f'(\bar{\theta}_{n,T}, X_{t_{i-1}})\big)^2\,\Delta t_i.$$
However, $E(I_T - 1)^2 \le C\,T^{-1}$ from Lemma 4 (see also Pardoux and Veretennikov (2001) [25] and Yoshida (2011) [26]). It can be shown that $E(V_{n,T} - I_T)^2 \le C\,T/n$ (see Altmeyer and Chorowski (2018) [27]). Hence,
$$E(V_{n,T} - 1)^2 = E\big[(V_{n,T} - I_T) + (I_T - 1)\big]^2 \le C\Big( T^{-1} \vee \frac{T}{n} \Big).$$
Further, by Lemma 1 (b), we have
$$\sup_x \Big| P_{\theta}\Big\{ \sqrt{T\,I(\theta)}\,(\hat{\theta}_{n,T} - \theta) \le x \Big\} - \Phi(x) \Big| = \sup_x \Big| P_{\theta}\Big\{ \frac{M_{n,T}}{V_{n,T}} \le x \Big\} - \Phi(x) \Big| \le \sup_x \big| P_{\theta}\{M_{n,T} \le x\} - \Phi(x) \big| + P_{\theta}\big\{ |V_{n,T} - 1| \ge \epsilon \big\} + \epsilon \le C\Big( T^{-1/2} \vee \frac{T^2}{n} \Big) + \epsilon^{-2}\,C\Big( T^{-1} \vee \frac{T}{n} \Big) + \epsilon,$$
since, by Lemmas 1 (a) and 5, we have
$$\sup_x \big| P_{\theta}\{M_{n,T} \le x\} - \Phi(x) \big| \le \sup_x \big| P_{\theta}\{M_T \le x\} - \Phi(x) \big| + P_{\theta}\big\{ |M_{n,T} - M_T| \ge \epsilon \big\} + \epsilon \le C\,T^{-1/2} + \epsilon^{-2}\,E|M_{n,T} - M_T|^2 + \epsilon \le C\,T^{-1/2} + \epsilon^{-2}\,C\,\frac{T}{n} + \epsilon.$$
Choosing $\epsilon = T^{-1/2}$, we have the result.
On the other hand, by Taylor expansion, we have
$$\tilde{l}'_{n,T}(\tilde{\theta}_{n,T}) = \tilde{l}'_{n,T}(\theta_0) + (\tilde{\theta}_{n,T} - \theta_0)\,\tilde{l}''_{n,T}(\bar{\bar{\theta}}_{n,T}),$$
where $|\bar{\bar{\theta}}_{n,T} - \theta_0| \le |\tilde{\theta}_{n,T} - \theta_0|$. Since $\tilde{l}'_{n,T}(\tilde{\theta}_{n,T}) = 0$, we have
$$\sqrt{T\,I(\theta_0)}\,(\tilde{\theta}_{n,T} - \theta_0) = \frac{ \frac{1}{\sqrt{T\,I(\theta_0)}}\,\tilde{l}'_{n,T}(\theta_0) }{ -\frac{1}{T\,I(\theta_0)}\,\tilde{l}''_{n,T}(\bar{\bar{\theta}}_{n,T}) } =: R_{n,T}\,S_{n,T}^{-1},$$
where
$$R_{n,T} = \frac{1}{\sqrt{T\,I(\theta_0)}} \Big\{ \frac{1}{2} \sum_{i=1}^{n} \big[f'(\theta_0, X_{t_{i-1}}) + f'(\theta_0, X_{t_i})\big]\,\Delta W_i + \frac{1}{2} \sum_{i=1}^{n} \big[f'(\theta_0, X_{t_{i-1}}) + f'(\theta_0, X_{t_i})\big] \int_{t_{i-1}}^{t_i} f(\theta_0, X_t)\,dt - \frac{h}{2} \sum_{i=1}^{n} \dot{f}'(\theta_0, X_{t_{i-1}}) - h \sum_{i=1}^{n} f'(\theta_0, X_{t_{i-1}})\,f(\theta_0, X_{t_{i-1}}) \Big\}$$
and $S_{n,T} := -\frac{1}{T\,I(\theta_0)}\,\tilde{l}''_{n,T}(\bar{\bar{\theta}}_{n,T})$ is the analogously normalized second-derivative term, evaluated at $\bar{\bar{\theta}}_{n,T}$.
Let $S_T := \lim S_{n,T}$ in $L^2$ as $T \to \infty$ and $T/n \to 0$. Similar to Lemma 4, it can be shown that $E(S_T - 1)^2 \le C\,T^{-1}$ (see also Pardoux and Veretennikov (2001) [25] and Yoshida (2011) [26]). It can be shown that $E(S_{n,T} - S_T)^2 \le C\,T/n$ (see Altmeyer and Chorowski (2018) [27]). Hence,
$$E(S_{n,T} - 1)^2 = E\big[(S_{n,T} - S_T) + (S_T - 1)\big]^2 \le C\Big( T^{-1} \vee \frac{T}{n} \Big).$$
Thus, by Lemma 1 (b), we have
$$\sup_x \Big| P_{\theta}\Big\{ \sqrt{T\,I(\theta)}\,(\tilde{\theta}_{n,T} - \theta) \le x \Big\} - \Phi(x) \Big| = \sup_x \Big| P_{\theta}\Big\{ \frac{R_{n,T}}{S_{n,T}} \le x \Big\} - \Phi(x) \Big| \le \sup_x \big| P_{\theta}\{R_{n,T} \le x\} - \Phi(x) \big| + P_{\theta}\big\{ |S_{n,T} - 1| \ge \epsilon \big\} + \epsilon \le C\Big( T^{-1/2} \vee \frac{T^3}{n^2} \Big) + \epsilon^{-2}\,C\Big( T^{-1} \vee \frac{T}{n} \Big) + \epsilon,$$
since, by Lemmas 1 (a) and 5, we have
$$\sup_x \big| P_{\theta}\{R_{n,T} \le x\} - \Phi(x) \big| \le \sup_x \big| P_{\theta}\{M_T \le x\} - \Phi(x) \big| + P_{\theta}\big\{ |R_{n,T} - M_T| \ge \epsilon \big\} + \epsilon \le C\,T^{-1/2} + \epsilon^{-2}\,E|R_{n,T} - M_T|^2 + \epsilon \le C\,T^{-1/2} + \epsilon^{-2}\,C\,\frac{T^3}{n^2} + \epsilon.$$
Choosing $\epsilon = T^{-1/2}$, we have the result.
Now, we study the general case of arbitrary $p$. By Taylor expansion, we have
$$K'_{n,T}(\theta_{n,T}) = K'_{n,T}(\theta_0) + (\theta_{n,T} - \theta_0)\,K''_{n,T}(\bar{\bar{\bar{\theta}}}_{n,T}),$$
where $|\bar{\bar{\bar{\theta}}}_{n,T} - \theta_0| \le |\theta_{n,T} - \theta_0|$. Since $K'_{n,T}(\theta_{n,T}) = 0$, we have
$$\sqrt{T\,I(\theta_0)}\,(\theta_{n,T} - \theta_0) = \frac{ \frac{1}{\sqrt{T\,I(\theta_0)}}\,K'_{n,T}(\theta_0) }{ -\frac{1}{T\,I(\theta_0)}\,K''_{n,T}(\bar{\bar{\bar{\theta}}}_{n,T}) } = \frac{ \frac{1}{\sqrt{T\,I(\theta_0)}} \sum_{i=1}^{n} m(\theta_0, X_{t_{i-1}})\,\Delta W_i }{ \frac{1}{T\,I(\theta_0)} \sum_{i=1}^{n} \big(m(\bar{\bar{\bar{\theta}}}_{n,T}, X_{t_{i-1}})\big)^2\,\Delta t_i } =: \frac{N_{n,T}}{U_{n,T}}.$$
Note that
$$U_{n,T} = \frac{1}{T\,I(\theta_0)} \sum_{i=1}^{n} \big(m(\bar{\bar{\bar{\theta}}}_{n,T}, X_{t_{i-1}})\big)^2\,\Delta t_i.$$
Let $U_T := \lim U_{n,T}$ in $L^2$ as $T \to \infty$ and $T/n \to 0$. Similar to Lemma 4, it can be shown that $E(U_T - 1)^2 \le C\,T^{-1}$ (see also Pardoux and Veretennikov (2001) [25] and Yoshida (2011) [26]). It can be shown that $E(U_{n,T} - U_T)^2 \le C\,T/n$ (see Altmeyer and Chorowski (2018) [27]). Hence,
$$E(U_{n,T} - 1)^2 = E\big[(U_{n,T} - U_T) + (U_T - 1)\big]^2 \le C\Big( T^{-1} \vee \frac{T}{n} \Big).$$
Further, by Lemma 1 (b), we have
$$\sup_x \Big| P_{\theta}\Big\{ \sqrt{T\,I(\theta)}\,(\theta_{n,T} - \theta) \le x \Big\} - \Phi(x) \Big| = \sup_x \Big| P_{\theta}\Big\{ \frac{N_{n,T}}{U_{n,T}} \le x \Big\} - \Phi(x) \Big| \le \sup_x \big| P_{\theta}\{N_{n,T} \le x\} - \Phi(x) \big| + P_{\theta}\big\{ |U_{n,T} - 1| \ge \epsilon \big\} + \epsilon \le C\Big( T^{-1/2} \vee \frac{T^{p+1}}{n^p} \Big) + \epsilon^{-2}\,C\Big( T^{-1} \vee \frac{T}{n} \Big) + \epsilon,$$
since, by Lemmas 1 (a) and 5, we have
$$\sup_x \big| P_{\theta}\{N_{n,T} \le x\} - \Phi(x) \big| \le \sup_x \big| P_{\theta}\{M_T \le x\} - \Phi(x) \big| + P_{\theta}\big\{ |N_{n,T} - M_T| \ge \epsilon \big\} + \epsilon \le C\,T^{-1/2} + \epsilon^{-2}\,E|N_{n,T} - M_T|^2 + \epsilon \le C\,T^{-1/2} + \epsilon^{-2}\,C\,\frac{T^{p+1}}{n^p} + \epsilon.$$
Choosing $\epsilon = T^{-1/2}$, we have the result. □
Remark 3.
With $p = 1$, the Euler scheme, which produces the conditional least squares estimator, one obtains the rate $O\big(T^{-1/2} \vee T^2/n\big)$. With $p = 2$, the Milstein scheme, one obtains the rate $O\big(T^{-1/2} \vee T^3/n^2\big)$. With $p = 4$, the Simpson scheme, one obtains the rate $O\big(T^{-1/2} \vee T^5/n^4\big)$. With $p = 6$, the Boole scheme, one obtains the rate $O\big(T^{-1/2} \vee T^7/n^6\big)$. Thus, the higher the $p$, the sharper the bound: the Itô/Euler scheme gives the first-order QMLE, the Milstein/Stratonovich scheme produces the second-order QMLE, the Simpson scheme produces the fourth-order QMLE and the Boole scheme produces the sixth-order QMLE. See Bishwal (2011) [28] for a connection of this area to the stochastic moment problem and the hedging of generalized Black–Scholes options.
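The rates above can be read off numerically under the paper's sampling design $T = d\,n^{1/2}$. The sketch below evaluates $T^{-1/2} \vee T^{p+1}/n^p$ for the four schemes, with $d = 1$ and $n = 10{,}000$ as illustrative choices:

```python
import math

def be_bound(n, p, d=1.0):
    """Berry-Esseen bound O(T^{-1/2} v T^{p+1}/n^p) with T = d * sqrt(n)."""
    T = d * math.sqrt(n)
    return max(T ** -0.5, T ** (p + 1) / n ** p)

n = 10_000
bounds = {p: be_bound(n, p) for p in (1, 2, 4, 6)}  # Euler, Milstein, Simpson, Boole
```

Note that with $T = d\,n^{1/2}$ the $p = 1$ discretization term $T^2/n$ equals $d^2$ and does not shrink as $n$ grows, whereas for $p \ge 2$ the statistical term $T^{-1/2}$ eventually dominates; this is one numerical way to read the advantage of the higher-order schemes.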

3. Example

Consider the stochastic differential equation
$$dX_t = \frac{\theta\,X_t}{\sqrt{1 + X_t^2}}\,dt + dW_t, \quad t \ge 0, \qquad X_0 = x_0.$$
The solution to the above SDE is called the hyperbolic diffusion process because it has a hyperbolic stationary distribution when $\theta < 0$. The process has nonlinear drift and is stationary and ergodic, which distinguishes it from linear-drift models such as the Ornstein–Uhlenbeck process and the Cox–Ingersoll–Ross process. This model verifies assumption (A3); in fact, the stationary density is proportional to $\exp(\theta\sqrt{1 + x^2})$. It is not possible to calculate the conditional expectation in closed form for the hyperbolic diffusion process, and hence one needs the higher-order Taylor expansion approach.
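A quick numerical illustration of the example follows. This is only a sketch: it uses the Euler ($p = 1$) contrast because the drift is linear in $\theta$, which makes the minimizer explicit; the higher-order $\mu$-approximations of Lemma 2 refine exactly this step.

```python
import numpy as np

# Simulate the hyperbolic diffusion dX = theta * X / sqrt(1 + X^2) dt + dW
# with theta < 0 (ergodic case), on the design T = sqrt(n), h = T/n.
def g(x):
    return x / np.sqrt(1.0 + x**2)

rng = np.random.default_rng(7)
theta0, n = -1.0, 20_000
h = n ** -0.5                       # h = T/n with T = sqrt(n)
x = np.empty(n + 1)
x[0] = 0.0
for i in range(n):
    x[i + 1] = x[i] + theta0 * g(x[i]) * h + rng.normal(0.0, np.sqrt(h))

# The Euler (p = 1) contrast is quadratic in theta, so the minimizer is explicit:
# theta_hat = sum(g(X_{i-1}) dX_i) / (h * sum(g(X_{i-1})^2)).
gx = g(x[:-1])
theta_hat = np.sum(gx * np.diff(x)) / (h * np.sum(gx**2))
```

With $T = \sqrt{n} \approx 141$, the estimate should land near the true $\theta_0 = -1$.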
Remark 4
(Concluding Remark). It would be interesting to extend the results of this paper to diffusions with jumps, using the strong stochastic Taylor expansion with jumps given in Chapter 6 of Kloeden and Bruti-Liberati (2010) [29].

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bishwal, J.P.N. Parameter Estimation in Stochastic Differential Equations; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2008.
  2. Bishwal, J.P.N. Parameter Estimation in Stochastic Volatility Models; Springer Nature Switzerland AG: Cham, Switzerland, 2021; (forthcoming).
  3. Dorogovcev, A.J. The consistency of an estimate of a parameter of a stochastic differential equation. Theory Prob. Math. Stat. 1976, 10, 73–82.
  4. Kasonga, R.A. The consistency of a nonlinear least squares estimator from diffusion processes. Stoch. Proc. Appl. 1988, 30, 263–275.
  5. Prakasa Rao, B.L.S. Asymptotic theory for non-linear least squares estimator for diffusion processes. Math. Oper. Stat. Ser. Stat. 1983, 14, 195–209.
  6. Florens-Zmirou, D. Approximate discrete time schemes for statistics of diffusion processes. Statistics 1989, 20, 547–557.
  7. Kessler, M. Estimation of an ergodic diffusion from discrete observations. Scand. J. Stat. 1997, 24, 211–229.
  8. Liptser, R.S.; Shiryayev, A.N. Statistics of Random Processes I; Springer: New York, NY, USA, 1977.
  9. Yoshida, N. Estimation for diffusion processes from discrete observations. J. Multivar. Anal. 1992, 41, 220–242.
  10. Shoji, I. A note on asymptotic properties of estimator derived from the Euler method for diffusion processes at discrete times. Stat. Probab. Lett. 1997, 36, 153–159.
  11. Bishwal, J.P.N.; Bose, A. Rates of convergence of approximate maximum likelihood estimators in the Ornstein–Uhlenbeck process. Comput. Math. Appl. 2001, 42, 23–38.
  12. Bishwal, J.P.N. Uniform rate of weak convergence for the minimum contrast estimator in the Ornstein–Uhlenbeck process. Methodol. Comput. Appl. Probab. 2010, 12, 323–334.
  13. Bishwal, J.P.N. Berry–Esseen inequalities for discretely observed diffusions. Monte Carlo Methods Appl. 2009, 15, 229–239.
  14. Bishwal, J.P.N. M-estimation for discretely sampled diffusions. Theory Stoch. Process. 2009, 15, 62–83.
  15. Bishwal, J.P.N. Milstein approximation of posterior density of diffusions. Int. J. Pure Appl. Math. 2011, 68, 403–414.
  16. Bishwal, J.P.N. Conditional least squares estimation in diffusion processes based on Poisson sampling. J. Appl. Probab. Stat. 2010, 5, 169–180.
  17. Bishwal, J.P.N. Some new estimators of integrated volatility. Am. Open J. Stat. 2011, 1, 74–80.
  18. Bhattacharya, R.N.; Ranga Rao, R. Normal Approximation and Asymptotic Expansion; Wiley: New York, NY, USA, 1976.
  19. Ikeda, N.; Watanabe, S. Stochastic Differential Equations and Diffusion Processes, 2nd ed.; North-Holland: Amsterdam, The Netherlands; Kodansha Ltd.: Tokyo, Japan, 1989.
  20. Nualart, D. Malliavin Calculus and Related Topics; Springer: Berlin, Germany, 1995.
  21. Bally, V. On the connection between the Malliavin covariance matrix and Hörmander condition. J. Funct. Anal. 1991, 96, 219–255.
  22. Yoshida, N. Malliavin calculus and asymptotic expansion for martingales. Probab. Theory Relat. Fields 1997, 109, 301–342.
  23. Michel, R.; Pfanzagl, J. The accuracy of the normal approximation for minimum contrast estimates. Zeit. Wahr. Verw. Gebiete 1971, 18, 73–84.
  24. Kloeden, P.E.; Platen, E. Numerical Solution of Stochastic Differential Equations; Springer: Berlin, Germany, 1992.
  25. Pardoux, E.; Veretennikov, A.Y. On the Poisson equation and diffusion approximation I. Ann. Probab. 2001, 29, 1061–1085.
  26. Yoshida, N. Polynomial type large deviation inequalities and quasi-likelihood analysis for stochastic differential equations. Ann. Inst. Stat. Math. 2011, 63, 431–479.
  27. Altmeyer, R.; Chorowski, J. Estimation error for occupation time functionals of stationary Markov processes. Stoch. Proc. Appl. 2018, 128, 1830–1848.
  28. Bishwal, J.P.N. Stochastic moment problem and hedging of generalized Black–Scholes options. Appl. Numer. Math. 2011, 61, 1271–1280.
  29. Kloeden, P.E.; Bruti-Liberati, N. Numerical Solution of Stochastic Differential Equations with Jumps in Finance; Springer: Berlin, Germany, 2010.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
