Article

Least-Squares Estimators of Drift Parameter for Discretely Observed Fractional Ornstein–Uhlenbeck Processes

Department of Mathematics, Faculty of Chemical Engineering, University of Chemistry and Technology Prague, 16628 Prague, Czech Republic
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(5), 716; https://doi.org/10.3390/math8050716
Submission received: 3 April 2020 / Revised: 20 April 2020 / Accepted: 22 April 2020 / Published: 3 May 2020
(This article belongs to the Special Issue Stochastic Modeling in Biology)

Abstract

We introduce three new estimators of the drift parameter of a fractional Ornstein–Uhlenbeck process. These estimators are based on modifications of the least-squares procedure utilizing the explicit formula for the process and the covariance structure of a fractional Brownian motion. We demonstrate their advantageous properties in the setting of discrete-time observations with fixed mesh size, where they outperform the existing estimators. Numerical experiments by Monte Carlo simulations are conducted to confirm and illustrate the theoretical findings. The new estimation techniques can improve the calibration of models in the form of linear stochastic differential equations driven by a fractional Brownian motion, which are used in diverse fields such as biology, neuroscience, finance and many others.

1. Introduction

Stochastic models with fractional Brownian motion (fBm) as the noise source have gained increasing popularity in recent years. This is because fBm is a continuous Gaussian process whose increments are positively correlated if the Hurst parameter H > 1/2 and negatively correlated if H < 1/2. If H = 1/2, fBm coincides with classical Brownian motion and its increments are independent. The ability of fBm to incorporate memory into the noise process makes it possible to build more realistic models in such diverse fields as biology, neuroscience, hydrology, climatology, finance and many others. The interested reader may consult the monographs [1,2], or the more recent paper [3] and the references therein, for more information.
Let $\{B_t^{(H)}\}_{t \in [0,\infty)}$ be a fractional Brownian motion with Hurst parameter H defined on an appropriate probability space $(\Omega, \mathcal{A}, P)$. The fractional Ornstein–Uhlenbeck process (fOU) is the unique solution to the following linear stochastic differential equation
$dX_t = -\lambda X_t\,dt + \sigma\,dB_t^{(H)}, \qquad X_0 = x_0 \in \mathbb{R},\quad t \geq 0,$  (1)
where $\lambda > 0$ is the drift parameter (we consider the ergodic case only) and $\sigma > 0$ is the noise intensity (or volatility). Recall that the solution to Equation (1) can be expressed by the exact analytical formula:
$X_t = e^{-\lambda t} x_0 + \int_0^t e^{-\lambda(t-u)} \sigma\,dB_u^{(H)}.$  (2)
A single realization of the random process $\{X_t(\omega)\}_{t \in [0,\infty)}$ for a particular $\omega \in \Omega$ is the model for a single real-valued trajectory, part of which is observed. Two examples of such trajectories are given in Figure 1. We assume $H > 1/2$ throughout this paper, so that the fOU exhibits long-range dependence. For an example of application, see the neuronal model based on the fOU described in the recent work [4].
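To make the model concrete, the following R sketch simulates one fOU trajectory on a regular grid. It is an illustrative sketch only (the simulations reported in this paper were produced with the YUIMA package): the fBm is drawn from its covariance via a Cholesky factorization and Equation (1) is discretized by a simple Euler scheme, so the function name `simulate_fou` and all implementation choices here are our own assumptions, not part of the original study.

```r
# Illustrative sketch (not the authors' YUIMA-based code): simulate one fOU path
# X_0, X_h, ..., X_T by (i) drawing fBm values from their covariance matrix via a
# Cholesky factor and (ii) applying an Euler step to dX_t = -lambda*X_t dt + sigma dB_t^H.
simulate_fou <- function(lambda, sigma, H, x0, T, h) {
  t <- seq(h, T, by = h)
  G <- 0.5 * outer(t, t, function(s, u) s^(2 * H) + u^(2 * H) - abs(s - u)^(2 * H))
  B <- as.vector(t(chol(G)) %*% rnorm(length(t)))   # fBm at times h, 2h, ..., T
  dB <- diff(c(0, B))                               # fBm increments over each step
  X <- numeric(length(t) + 1)
  X[1] <- x0
  for (n in seq_along(t)) {
    X[n + 1] <- X[n] - lambda * X[n] * h + sigma * dB[n]
  }
  X                                                 # vector X_0, X_h, ..., X_T
}

set.seed(1)
X <- simulate_fou(lambda = 0.5, sigma = 2, H = 0.6, x0 = 20, T = 20, h = 0.1)
```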
The aim of this paper is to study the problem of estimating the drift parameter λ from observations of a single trajectory of an fOU at discrete time instants $t = 0, h, \ldots, Nh$ with fixed mesh size $h > 0$ and increasing time horizon $T = Nh$ (long-span asymptotics). Estimating the drift parameter of an fOU observed in continuous time has been considered in [5,6], where the least-squares estimator (LSE) and an ergodic-type estimator are studied. These estimators have advantageous asymptotic properties: they are strongly consistent and, if $H \leq 3/4$, also asymptotically normal. The ergodic-type estimator is easy to implement, but it has a greater asymptotic variance than the LSE, requires a priori knowledge of H and σ, and does not provide acceptable results for non-stationary processes with a limited time horizon.
A straightforward discretization of the least-squares estimator for an fOU was introduced and studied in [7] for $H > 1/2$ and in [8] for $H < 1/2$; for the precise formula, see (8). This estimator is consistent provided that both the time horizon $T = Nh \to \infty$ and the mesh size $h \to 0$ (mixed in-fill and long-span asymptotics). However, it is not consistent when h is fixed and $T \to \infty$. This has led us to construct and study LSE-type estimators that converge in this long-span setting.
An easy modification of the ergodic-type estimator to the discrete-time setting with fixed time step was given in [9] (see (10) for the precise formula), and its strong consistency (assuming $H \geq 1/2$) and asymptotic normality (for $1/2 \leq H < 3/4$) as $N \to \infty$ were proved, but with a possibly incorrect technique (as pointed out in [10]). Correct proofs of asymptotic normality for $0 < H \leq 3/4$ and strong consistency for $0 < H < 1$ of this estimator were provided (in a more general setup) in [10]. Note that the use of this discrete ergodic estimator requires knowledge of the parameter σ (in contrast to the estimators of least-squares type introduced below). Other works related to estimating the drift parameter of a discretely observed fOU include [11,12,13], but this list is by no means complete.
This work contributes to the problem of estimating the drift parameter of an fOU by introducing three new LSE-type estimators: the least-squares estimator from the exact solution, the asymptotic least-squares estimator and the conditional least-squares estimator. These estimators are tailored to discrete-time observations with a fixed time step. We provide proofs of their asymptotic properties and identify situations in which the new estimators perform better than the existing ones. In particular, we eliminate the discretization error (the LSE from the exact solution), construct strongly consistent estimators in the long-span regime without assuming an in-fill condition (the asymptotic LSE and the conditional LSE), and eliminate the bias in the least-squares procedure caused by the autocorrelation of the noise term (the conditional LSE). The conditional LSE in particular demonstrates outstanding performance in all studied scenarios. This suggests that the (to the best of our knowledge) newly introduced concept of conditioning in the least-squares procedure, applied to models with fractional noise, provides a powerful framework for parameter estimation in this type of model. The proof of its strong consistency, presented within this paper, is rather non-trivial and may serve as a starting point for the investigation of similar estimators in possibly different settings. A certain disadvantage of the conditional LSE is its more complicated implementation (involving an optimization procedure), in contrast to the other studied estimators.
Let us explain the strength of the conditional least-squares estimator in more detail. Comparison of the two trajectories in Figure 1 demonstrates the effect of different values of λ on trajectories of the fOU. In particular, λ affects the speed of exponential decay in the initial non-stationary phase and the variability in the stationary phase. As we illustrate below, the discretized least-squares estimator, cf. (8), utilizes information about λ from the exponential decay in the initial phase, but is not capable of making use of the information contained in the variability of the stationary phase. As a consequence, it is not consistent (in the long-span setting). By contrast, the ergodic-type estimator, cf. (10), is derived from the variance of the stationary distribution of the process. It works well for stationary processes (and is consistent), but it ignores (and, even worse, is corrupted by) the observations of the process in its initial non-stationary phase. As a result, neither of these estimators can efficiently estimate the drift from long trajectories with far-from-stationary initial values. This gap is best filled by the conditional least-squares estimator, cf. (25), which effectively utilizes the information stored both in the non-stationary phase and in the stationary phase of the observed process. This unique property is demonstrated in Results and Discussion, where the conditional LSE (denoted by $\hat{\lambda}_5$) dominates the other estimators.
For the three newly introduced estimators the value of the Hurst parameter H is considered to be known a priori, whereas knowledge of the volatility parameter σ is not required, which is an advantage of these methods. If H is not known, it can be estimated in advance by one of many methods, such as methods based on quadratic variations (cf. [14]), on sample quantiles or trimmed means (cf. [15]), or on a wavelet transform (cf. [16]), to name just a few. Other useful works in this direction include the simultaneous estimation of σ and H using the powers of the second-order variations (see [17], Chapter 3.3). The estimates of H (obtained independently of λ) can subsequently be used in the LSE-type estimators of λ introduced below, in a way similar to [18].
In Section 2, some elements of stochastic calculus with respect to fBm are recalled, the stationary fOU is introduced and precise formulas for the two existing drift estimators $\hat{\lambda}_1$ and $\hat{\lambda}_2$ are provided. Section 3 is devoted to the construction of a new LSE-type estimator ($\hat{\lambda}_3$) based on the exact formula for the fOU. A certain modification of $\hat{\lambda}_3$ (denoted $\hat{\lambda}_4$), which ensures long-span consistency, is introduced in Section 4. In Section 5, we rewrite the linear model using conditional expectations to overcome the bias in the LSE caused by the autocorrelation of the noise. The least-squares method, applied to the conditional model with explicit formulas for the conditional expectations, results in the conditional least-squares estimator ($\hat{\lambda}_5$). We prove the strong consistency of this estimator. The actual performance of the newly introduced estimators $\hat{\lambda}_3$, $\hat{\lambda}_4$ and $\hat{\lambda}_5$, as well as their comparison to the already known $\hat{\lambda}_1$ and $\hat{\lambda}_2$, is studied by Monte Carlo simulations in various scenarios and reported in Section 6. The simulated trajectories were obtained in the R software environment with the YUIMA package (see [19]). Section 7 summarizes the key points of the article and outlines possible future extensions.

2. Preliminaries

For the reader's convenience, we briefly review in this section the basic concepts from the theory of stochastic models with fractional noise, including the definition of fBm, the Wiener integral of deterministic functions w.r.t. fBm and the stationary fOU. This exposition follows [2,20]. For further reading, see also the monograph [1]. At the end of this section, we also recall the formulas for the discretized LSE and the discrete ergodic estimator.
Fractional Brownian motion with Hurst parameter $H \in (0,1)$ is a centered (zero-mean) continuous Gaussian process $\{B_t^{(H)}\}_{t \in [0,\infty)}$ starting from zero ($B_0^{(H)} = 0$) and having the following covariance structure
$\mathbb{E}\, B_s^{(H)} B_t^{(H)} = \tfrac{1}{2}\left( s^{2H} + t^{2H} - |s-t|^{2H} \right), \qquad s, t \geq 0.$
Note that for the purpose of constructing the stationary fOU, we need a two-sided fBm $\{B_t^{(H)}\}_{t \in \mathbb{R}}$ with t ranging over the whole real line. In this case we have
$\mathbb{E}\, B_s^{(H)} B_t^{(H)} = \tfrac{1}{2}\left( |s|^{2H} + |t|^{2H} - |s-t|^{2H} \right), \qquad s, t \in \mathbb{R}.$
As a consequence, the increments of fBm are negatively correlated for H < 1/2, independent for H = 1/2 and positively correlated for H > 1/2.
Consider a two-sided fBm with H > 1/2 and define the Wiener integral of a deterministic step function with respect to the fBm by the formula
$\int_{\mathbb{R}} \sum_{i=1}^{N} \alpha_i\, \mathbb{I}_{[t_i, t_{i+1}]}(s)\,dB_s^{(H)} = \sum_{i=1}^{N} \alpha_i \left( B_{t_{i+1}}^{(H)} - B_{t_i}^{(H)} \right)$
for any positive integer N, real-valued coefficients $\alpha_1, \ldots, \alpha_N \in \mathbb{R}$ and a partition $-\infty < t_1 \leq \cdots \leq t_{N+1} < \infty$. This definition gives rise to the following isometry for any pair of deterministic step functions f and g:
$\mathbb{E}\left( \int_{\mathbb{R}} f(t)\,dB_t^{(H)} \int_{\mathbb{R}} g(s)\,dB_s^{(H)} \right) = \int_{\mathbb{R}^2} f(t) g(s) \varphi(t,s)\,dt\,ds = \langle f, g \rangle_{H},$  (3)
where $\varphi(t,s) = H(2H-1)|t-s|^{2H-2}$. Using this isometry, we can extend the definition of the Wiener integral w.r.t. fBm to all elements of the space $L_H^2(\mathbb{R})$, defined as the completion of the space of deterministic step functions w.r.t. the scalar product $\langle \cdot, \cdot \rangle_H$ defined above. As a result, formula (3) holds true for any $f, g \in L_H^2(\mathbb{R})$; see also [21]. We will frequently use this formula in what follows, mainly to calculate the covariances of Wiener integrals.
Let $\{B_t^{(H)}\}_{t \in \mathbb{R}}$ again be a two-sided fBm with $H > 1/2$. Define
$Z_0 = \int_{-\infty}^{0} e^{-\lambda(0-u)} \sigma\,dB_u^{(H)},$
and denote by $\{Z_t\}_{t \in [0,\infty)}$ the solution to (1) with initial condition $Z_0$, in the sense that it satisfies
$dZ_t = -\lambda Z_t\,dt + \sigma\,dB_t^{(H)}.$
This process is referred to as the stationary fOU and it can be expressed as
$Z_t = \int_{-\infty}^{t} e^{-\lambda(t-u)} \sigma\,dB_u^{(H)}, \qquad t \geq 0.$
Note that the stationary fOU is an ergodic stationary Gaussian process (its autocorrelation function vanishes at infinity).
Consider now a stationary fOU $\{Z_t\}_{t \in [0,\infty)}$ observed at discrete time instants $t = 0, h, 2h, \ldots$ The ergodicity and the formula for the second moment of the stationary fOU (see, e.g., [20]) imply
$\frac{1}{N} \sum_{n=0}^{N-1} Z_{nh}^2 \xrightarrow[N \to \infty]{} \mathbb{E}\, Z_0^2 = \sigma^2 \lambda^{-2H} H \Gamma(2H) \quad a.s.$  (4)
Analogously,
$\frac{1}{N} \sum_{n=0}^{N-1} Z_{nh} Z_{(n+1)h} \xrightarrow[N \to \infty]{} \mathbb{E}\, Z_0 Z_h \quad a.s.,$  (5)
and the expectation can be calculated using (3) and the change-of-variables formula
$\mathbb{E}\, Z_0 Z_h = \mathbb{E}\, Z_0 \left( e^{-\lambda h} Z_0 + \int_0^h e^{-\lambda(h-u)} \sigma\,dB_u^{(H)} \right) = e^{-\lambda h} \sigma^2 \lambda^{-2H} H \Gamma(2H) + \int_0^h \int_{-\infty}^0 e^{-\lambda(h-u)} e^{-\lambda(0-v)} \sigma^2 (u-v)^{2H-2} H(2H-1)\,dv\,du = e^{-\lambda h} \sigma^2 \lambda^{-2H} H \Gamma(2H) \left( 1 + \frac{2H-1}{\Gamma(2H)} \int_0^{\lambda h} \int_{-\infty}^0 e^{r+s} (r-s)^{2H-2}\,ds\,dr \right).$  (6)
The rest of this section is devoted to the two popular estimators of the drift parameter of an fOU observed at discrete time instants described in the Introduction: the discretized LSE and the discrete ergodic estimator. We start with the former. Consider a straightforward discrete approximation of Equation (1):
$X_{n+h} - X_n \approx -\lambda X_n h + \sigma \left( B_{n+h}^{(H)} - B_n^{(H)} \right).$  (7)
Application of the standard least-squares procedure to the linear approximation above provides the discretized LSE studied in [7,8], which takes the form
$\hat{\lambda}_1 = -\frac{1}{h}\left( F_N - 1 \right),$  (8)
where h is the mesh size (time step) and
$F_N = \frac{\sum_{n=0}^{N-1} X_{nh} X_{(n+1)h}}{\sum_{n=0}^{N-1} X_{nh}^2},$  (9)
with $X_{nh}$ and $X_{(n+1)h}$ being the observations at the adjacent time instants $t = nh$ and $t = (n+1)h$, respectively, of the process $X_t$ defined by (1) or (2). Note that expressing $\hat{\lambda}_1$ in terms of $F_N$ simplifies its comparison with the estimators newly constructed in this paper. Recall that for the consistency of $\hat{\lambda}_1$, mixed in-fill and long-span asymptotics are required due to the approximation error in (7).
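For concreteness, a minimal R sketch of (8) and (9) is given below; `X` denotes the vector of observations $X_0, X_h, \ldots, X_{Nh}$ (for instance from the simulation sketch in the Introduction), and the helper names `F_N` and `lambda1` are our own.

```r
# Discretized LSE (8): F_N from (9) and lambda_hat_1 = -(F_N - 1)/h = (1 - F_N)/h,
# where X = (X_0, X_h, ..., X_{Nh}) and h is the mesh size.
F_N <- function(X) {
  sum(head(X, -1) * tail(X, -1)) / sum(head(X, -1)^2)
}
lambda1 <- function(X, h) (1 - F_N(X)) / h
```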
The discrete ergodic estimator is derived from the asymptotic behavior of the (stationary) fOU. Recall the convergence in (4). Rearranging the terms provides an asymptotic formula for the drift parameter λ expressed in terms of the limit of the second sample moment of the stationary fOU. Substituting the observed fOU $X_h, X_{2h}, \ldots, X_{Nh}$ for the stationary fOU in the asymptotic formula results in the discrete ergodic estimator:
$\hat{\lambda}_2 = \left( \frac{1}{N \sigma^2 H \Gamma(2H)} \sum_{i=1}^{N} X_{ih}^2 \right)^{-\frac{1}{2H}},$  (10)
which was studied in [9,10]. Recall that this estimator is strongly consistent in the long-span regime (no in-fill condition is needed); however, it relies heavily on the asymptotic (stationary) behavior of the process and fails for processes with a non-stationary initial phase (as illustrated by the numerical experiments below).
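A corresponding R sketch of (10) follows; note that, unlike the least-squares-type estimators, it needs σ and H as inputs (the helper name `lambda2` is our own choice).

```r
# Discrete ergodic estimator (10): requires sigma and H, uses X_h, ..., X_{Nh} only.
lambda2 <- function(X, sigma, H) {
  N <- length(X) - 1
  (sum(X[-1]^2) / (N * sigma^2 * H * gamma(2 * H)))^(-1 / (2 * H))
}
```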

3. Least-Squares Estimator from Exact Solution

Since the estimator $\hat{\lambda}_1$ obtained from the naive discretization of (1) provides reasonable approximations only for non-stationary solutions with a short time horizon and a small time step $h > 0$ (as seen from the numerical simulations below), we eliminate the discretization error by considering the exact analytical formula for $X_t$, see (2), and the corresponding exact discrete formula for $X_{t+h}$,
$X_{t+h} = \beta X_t + \xi_t,$  (11)
where
$\beta = e^{-\lambda h}, \qquad \text{and} \qquad \xi_t = \int_t^{t+h} e^{-\lambda(t+h-u)} \sigma\,dB_u^{(H)}.$
The least-squares estimator for β w.r.t. the linear model (11) is given by $\hat{\beta} = F_N$, cf. (9), and the estimator for λ can be defined as
$\hat{\lambda}_3 = -\frac{1}{h} \log F_N.$  (12)
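A one-line R sketch of (12), reusing the helper `F_N` from Section 2 (our own naming):

```r
# LSE from the exact solution (12): lambda_hat_3 = -(1/h) * log(F_N).
lambda3 <- function(X, h) -log(F_N(X)) / h
```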
Numerical simulations show that $\hat{\lambda}_3$ works well for non-stationary solutions and a short time horizon ($T = 10$ in the simulations). The results for $H = 0.6$ are presented in Figure 2. The simulation results for $H \in \{0.75, 0.9\}$ are similar.
On the other hand, the estimator $\hat{\lambda}_3$ does not provide good results for observations with a long time horizon ($T = 1000$ in the simulations), since $\hat{\lambda}_3$ is not consistent when $N \to \infty$ and $h > 0$ is fixed. The reason is that $\xi_t$ and $X_t$ in (11) are correlated. In fact, we can calculate the almost sure limit of $\hat{\lambda}_3$ exactly. The limit is provided in Theorem 1. Its proof uses the following simple lemma (see [22]) to show the diminishing effect of the initial condition on the limiting behaviour of sample averages. It is later used in the proof of Lemma 3 as well.
Lemma 1.
Consider real-valued sequences $(a_n)_{n=1}^{\infty}$ and $(b_n)_{n=1}^{\infty}$ such that
$\frac{1}{N} \sum_{n=1}^{N} |b_n| \xrightarrow[N \to \infty]{} K < \infty, \qquad \text{and} \qquad a_n \xrightarrow[n \to \infty]{} 0.$
Then $\frac{1}{N} \sum_{n=1}^{N} a_n b_n \xrightarrow[N \to \infty]{} 0$.
Theorem 1.
Let $h > 0$ be fixed and define $f : (0, \infty) \to \mathbb{R}$ by
$f(x) = e^{-x} \left( 1 + \frac{2H-1}{\Gamma(2H)} \int_0^x \int_{-\infty}^0 e^{s} e^{r} (r-s)^{2H-2}\,ds\,dr \right).$  (13)
Then
$\lim_{N \to \infty} \hat{\lambda}_3 = -\frac{1}{h} \log f(\lambda h) \quad \text{almost surely}.$  (14)
In particular, $\lim_{N \to \infty} \hat{\lambda}_3 < \lambda$.
Proof. 
Recall the asymptotic behavior of a stationary fOU described in formulas (4)–(6).
Since the effect of the initial condition vanishes at infinity, the limiting behaviour of the non-stationary solution $(X_{nh})_{n=0}^{\infty}$ is the same. Indeed,
$\frac{1}{N} \sum_{n=0}^{N-1} \left( X_{nh} X_{(n+1)h} - Z_{nh} Z_{(n+1)h} \right) = \frac{1}{N} \sum_{n=0}^{N-1} (X_{nh} - Z_{nh}) X_{(n+1)h} + \frac{1}{N} \sum_{n=0}^{N-1} (X_{(n+1)h} - Z_{(n+1)h}) Z_{nh}.$
The convergence of the first summand to zero follows from the facts that
$(X_{nh} - Z_{nh}) \xrightarrow[n \to \infty]{} 0 \quad a.s., \qquad \frac{1}{N} \sum_{n=0}^{N-1} |X_{(n+1)h}| \xrightarrow[N \to \infty]{} \mathbb{E}\,|Z_0| < \infty \quad a.s.,$
and Lemma 1. A similar argument guarantees the convergence of the second summand to zero as well. The convergence
$\frac{1}{N} \sum_{n=0}^{N-1} \left( X_{nh}^2 - Z_{nh}^2 \right) \xrightarrow[N \to \infty]{} 0$
can be shown correspondingly. As a result, we obtain the almost sure convergence
$\frac{\sum_{n=0}^{N-1} X_{nh} X_{(n+1)h}}{\sum_{n=0}^{N-1} X_{nh}^2} \xrightarrow[N \to \infty]{} e^{-\lambda h} \left( 1 + \frac{2H-1}{\Gamma(2H)} \int_0^{\lambda h} \int_{-\infty}^0 e^{r+s} (r-s)^{2H-2}\,ds\,dr \right) = f(\lambda h).$  (15)
The claim follows immediately from the definition of $\hat{\lambda}_3$. □
Remark 1.
Note that the convergence in (14) holds true also for $H = 1/2$ (the double integral in f disappears) and for $H < 1/2$ (utilizing the fact that a relation analogous to (3) holds even for $H < 1/2$ if the two domains of integration are disjoint). Consequently,
$\lim_{N \to \infty} \hat{\lambda}_3 = -\frac{1}{h} \log f(\lambda h) \;\; \begin{cases} < \lambda, & H > 1/2, \\ = \lambda, & H = 1/2, \\ > \lambda, & H < 1/2, \end{cases} \quad a.s.$

4. Asymptotic Least-Squares Estimator

Our goal in this section is to modify $\hat{\lambda}_3$ so that it converges to λ when $N \to \infty$ and $h > 0$ is fixed. Combining (12) and (14) yields $F_N \to f(\lambda h)$ a.s. Thus, we can define the asymptotic least-squares estimator $\hat{\lambda}_4$ by the relation $f(h \hat{\lambda}_4) = F_N$. Since f is one-to-one (see below), the explicit formula for $\hat{\lambda}_4$ reads
$\hat{\lambda}_4 = \frac{1}{h} f^{-1}(F_N).$  (16)
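Since $f^{-1}$ has no closed form, computing $\hat{\lambda}_4$ in practice amounts to evaluating the double integral in (13) numerically and solving $f(h\hat{\lambda}_4) = F_N$ with a root finder. The following R sketch is one possible (untuned) implementation under our own naming; the bracketing interval for the root search is an ad hoc assumption.

```r
# Numerical evaluation of the correction function f from (13) by nested 1-D quadrature;
# the inner integral over s in (-infinity, 0] is rewritten with u = -s.
f_corr <- function(x, H) {
  inner <- function(r) sapply(r, function(ri)
    integrate(function(u) exp(-u) * (ri + u)^(2 * H - 2), 0, Inf)$value)
  outer_int <- integrate(function(r) exp(r) * inner(r), 0, x)$value
  exp(-x) * (1 + (2 * H - 1) / gamma(2 * H) * outer_int)
}

# Asymptotic LSE (16): solve f(lambda * h) = F_N for lambda on a bracketing interval.
lambda4 <- function(X, h, H, lower = 1e-6, upper = 10) {
  uniroot(function(lam) f_corr(lam * h, H) - F_N(X), lower = lower, upper = upper)$root
}
```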
The following lemma justifies invertibility of f in the definition of λ ^ 4 .
Lemma 2.
In our setting ($H > 1/2$), the function f defined by (13) is strictly decreasing on $[0, \infty)$.
Proof. 
Calculate the derivative
$f'(x) = -e^{-x} \left( 1 + \frac{2H-1}{\Gamma(2H)} \int_0^x \int_{-\infty}^0 e^{s} e^{r} (r-s)^{2H-2}\,ds\,dr \right) + e^{-x}\, \frac{2H-1}{\Gamma(2H)} \int_{-\infty}^0 e^{s} e^{x} (x-s)^{2H-2}\,ds = e^{-x}\, \frac{2H-1}{\Gamma(2H)} \left( \int_{-\infty}^0 \left( e^{s} e^{x} (x-s)^{2H-2} - \int_0^x e^{s} e^{r} (r-s)^{2H-2}\,dr \right) ds - \frac{\Gamma(2H)}{2H-1} \right).$  (17)
Continue with
$e^{s} e^{x} (x-s)^{2H-2} - \int_0^x e^{s} e^{r} (r-s)^{2H-2}\,dr < (x-s)^{2H-2} e^{s} \left( e^{x} - \int_0^x e^{r}\,dr \right) = (x-s)^{2H-2} e^{s} < (-s)^{2H-2} e^{s}.$
Plug this estimate into the formula for $f'(x)$ to see
$f'(x) < e^{-x}\, \frac{2H-1}{\Gamma(2H)} \left( \int_{-\infty}^0 (-s)^{2H-2} e^{s}\,ds - \frac{\Gamma(2H)}{2H-1} \right) = e^{-x}\, \frac{2H-1}{\Gamma(2H)} \left( \Gamma(2H-1) - \Gamma(2H-1) \right) = 0.$
Remark 2.
Note that f is also monotone for $H = 1/2$, but it is not monotone if $H < 1/2$, which rules out the possibility of using the estimator $\hat{\lambda}_4$ in this singular case.
Theorem 2.
The asymptotic least-squares estimator λ ^ 4 is strongly consistent, i.e.,
$\lim_{N \to \infty} \hat{\lambda}_4 = \lambda \quad a.s.$
Proof. 
Recall the definition of $\hat{\lambda}_4$ in (16) and the limit in (15):
$F_N \xrightarrow[N \to \infty]{} f(\lambda h) \quad a.s.$
Further, recall that $f : (0,\infty) \to (0,\infty)$ is differentiable with a strictly negative derivative (see (17)). Thus, $f^{-1}$ is also differentiable with a strictly negative derivative, and this implies
$\lim_{N \to \infty} \hat{\lambda}_4 = \lim_{N \to \infty} \frac{1}{h} f^{-1}(F_N) = \frac{1}{h} f^{-1}(f(\lambda h)) = \lambda \quad a.s.$
The strongly consistent estimator $\hat{\lambda}_4$ works well for stationary solutions or for observations with a long time horizon (see Figure 3). Moreover, it does not require explicit knowledge of σ (in contrast to $\hat{\lambda}_2$, which is also strongly consistent). On the other hand, it does not provide adequate results for non-stationary solutions with a short time horizon, since the correction function f reflects the stationary behavior of the process (see Figure 4).

5. Conditional Least-Squares Estimator

Non-stationary trajectories with a long time horizon contain a lot of information about λ, which is encoded mainly in two aspects: the speed of decay in the initial non-stationary phase and the variance in the stationary phase (see Figure 1). However, none of the estimators $\hat{\lambda}_1$, $\hat{\lambda}_2$, $\hat{\lambda}_3$ and $\hat{\lambda}_4$ can utilize all this information effectively. This motivates us to introduce another estimator. Recall that $\hat{\lambda}_3$ fails to be consistent because of the bias in the LSE caused by the correlation between $X_t$ and $\xi_t$ in Equation (11). To eliminate the correlation between the explanatory variable and the noise term in the linear model, we switch to conditional expectations. Start from the following equation, which defines $\eta_t$:
$X_{t+h} = \mathbb{E}[X_{t+h} \mid X_t] + \eta_t = \mathbb{E}_{\Lambda = \lambda}[X_{t+h} \mid X_t] + \eta_t,$  (18)
where λ is the true value of the unknown drift parameter and $\mathbb{E}_{\Lambda}$ denotes the (conditional) expectation with respect to the measure generated by the fOU $\{X_t\}_{t \in [0,\infty)}$ with drift value Λ and initial condition $x_0$ (Λ stands for an unknown drift value throughout this section). In other words, $\mathbb{E}_{\Lambda}[X_{t+h} \mid X_t]$ means the conditional expectation of $X_{t+h}$ given $X_t$, where the process X is given by (2) with drift $\lambda = \Lambda$. Hence, $\mathbb{E}_{\lambda}$ has the same meaning as $\mathbb{E}$ in the previous sections.
Obviously $\mathbb{E}_{\lambda}[\eta_t \mid X_t] = 0$ and, consequently, $c_t(X_t, \Lambda) = \mathbb{E}_{\Lambda}[X_{t+h} \mid X_t]$ and $\eta_t$ are uncorrelated. Indeed,
$\mathbb{E}_{\lambda}\big( (c_t(X_t,\Lambda) - \mathbb{E}_{\lambda}[c_t(X_t,\Lambda)])\, \eta_t \big) = \mathbb{E}_{\lambda}\big( \mathbb{E}_{\lambda}[ (c_t(X_t,\Lambda) - \mathbb{E}_{\lambda}[c_t(X_t,\Lambda)])\, \eta_t \mid X_t ] \big) = \mathbb{E}_{\lambda}\big( (c_t(X_t,\Lambda) - \mathbb{E}_{\lambda}[c_t(X_t,\Lambda)])\, \mathbb{E}_{\lambda}[\eta_t \mid X_t] \big) = 0.$
As a result, we apply the least-squares technique to Equation (18), where λ is to be estimated, i.e., we would like to minimize
$\min_{\Lambda} \sum_{n=0}^{N-1} \left( X_{(n+1)h} - \mathbb{E}_{\Lambda}[X_{(n+1)h} \mid X_{nh}] \right)^2.$
To calculate $c_t(X_t, \Lambda) = \mathbb{E}_{\Lambda}[X_{t+h} \mid X_t]$ explicitly, use (11) and obtain
$\mathbb{E}_{\Lambda}[X_{t+h} \mid X_t] = e^{-\Lambda h} X_t + \mathbb{E}_{\Lambda}[\xi_t \mid X_t].$  (19)
Note that the random vector $(\xi_t, X_t)$ has a two-dimensional normal distribution (dependent on the parameter Λ),
$\begin{pmatrix} \xi_t \\ X_t \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix} 0 \\ e^{-\Lambda t} x_0 \end{pmatrix}, \begin{pmatrix} \sigma_{\xi}^2(\Lambda) & \sigma_{\xi,X}(\Lambda) \\ \sigma_{\xi,X}(\Lambda) & \sigma_X^2(\Lambda) \end{pmatrix} \right),$
and we can use the explicit expression for its conditional expectation to write
$\mathbb{E}_{\Lambda}[\xi_t \mid X_t] = \left( X_t - e^{-\Lambda t} x_0 \right) \frac{\sigma_{\xi,X}(\Lambda)}{\sigma_X^2(\Lambda)}.$  (20)
Using the exact formula for $X_t$ given by (2) and relation (3), we get
$\sigma_X^2(\Lambda) = \sigma^2 H(2H-1) \int_0^t \int_0^t e^{\Lambda(u-t)} e^{\Lambda(v-t)} |u-v|^{2H-2}\,dv\,du = \sigma^2 H(2H-1)\, \Lambda^{-2H} \int_{-\Lambda t}^0 \int_{-\Lambda t}^0 e^{r} e^{s} |r-s|^{2H-2}\,ds\,dr,$
where we used the change-of-variables formula in the last step. Analogously,
$\sigma_{\xi,X}(\Lambda) = \sigma^2 H(2H-1) \int_t^{t+h} \int_0^t e^{\Lambda(u-t-h)} e^{\Lambda(v-t)} (u-v)^{2H-2}\,dv\,du = \sigma^2 H(2H-1)\, \Lambda^{-2H} e^{-\Lambda h} \int_0^{\Lambda h} \int_{-\Lambda t}^0 e^{r} e^{s} (r-s)^{2H-2}\,ds\,dr.$
Using the expressions for $\sigma_{\xi,X}(\Lambda)$ and $\sigma_X^2(\Lambda)$ in (20), we obtain
$\mathbb{E}_{\Lambda}[\xi_t \mid X_t] = \left( X_t - e^{-\Lambda t} x_0 \right) e^{-\Lambda h}\, \frac{\int_0^{\Lambda h} \int_{-\Lambda t}^0 e^{r} e^{s} (r-s)^{2H-2}\,ds\,dr}{\int_{-\Lambda t}^0 \int_{-\Lambda t}^0 e^{r} e^{s} |r-s|^{2H-2}\,ds\,dr}.$  (21)
Combining formula (21) with (19) yields
$\mathbb{E}_{\Lambda}[X_{t+h} \mid X_t] = c_t(X_t, \Lambda) = X_t A_{\Lambda t, \Lambda h} - B_{\Lambda t, \Lambda h},$
with
$A_{\tau, x} = e^{-x} \left( 1 + \frac{\int_0^{x} \int_{-\tau}^0 e^{r} e^{s} (r-s)^{2H-2}\,ds\,dr}{\int_{-\tau}^0 \int_{-\tau}^0 e^{r} e^{s} |r-s|^{2H-2}\,ds\,dr} \right),$  (22)
and
$B_{\tau, x} = e^{-\tau} e^{-x} x_0\, \frac{\int_0^{x} \int_{-\tau}^0 e^{r} e^{s} (r-s)^{2H-2}\,ds\,dr}{\int_{-\tau}^0 \int_{-\tau}^0 e^{r} e^{s} |r-s|^{2H-2}\,ds\,dr}.$  (23)
We can thus reformulate Equation (18) for the observed process X as the following model (linear in $X_t$, but non-linear in Λ):
$X_{t+h} = \left( X_t A_{\Lambda t, \Lambda h} - B_{\Lambda t, \Lambda h} \right)\big|_{\Lambda = \lambda} + \eta_t.$  (24)
We now aim to apply the least-squares method to the reformulated model to get the conditional least-squares estimator $\hat{\lambda}_5$. To ensure the existence of a global minimum, we choose a closed interval $[\Lambda_L, \Lambda_U] \subset (0, \infty)$ and define $\hat{\lambda}_5$ as the minimizer of the sum-of-squares function on this interval:
$S_N(\hat{\lambda}_5) = \min_{\Lambda \in [\Lambda_L, \Lambda_U]} S_N(\Lambda),$  (25)
with the criterion function $S_N$ defined as
$S_N(\Lambda) = \sum_{n=0}^{N-1} \left( X_{(n+1)h} - X_{nh} A_{\Lambda n h, \Lambda h} + B_{\Lambda n h, \Lambda h} \right)^2,$  (26)
where we used (24) with $t = nh$.
Note that $S_N(\Lambda)$ is continuous in Λ, and therefore a minimum on the compact interval $[\Lambda_L, \Lambda_U]$ exists. Although model (24) is linear in $X_t$, the coefficients A and B depend on t, which complicates the numerical minimization of $S_N$.
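To make the implementation concrete, the following R sketch evaluates the double integrals in (22) and (23) by a crude midpoint rule and minimizes $S_N$ with a one-dimensional optimizer. All names, the grid sizes, the truncation of the integration domain (justified by the exponential decay of the integrands) and the handling of the deterministic initial observation (n = 0) are our own choices; accuracy and speed are not tuned.

```r
# Crude midpoint rule for a double integral of g(r, s) over (a1,b1) x (a2,b2);
# the two grids have different sizes so that the diagonal r = s is avoided.
dbl_mid <- function(g, a1, b1, a2, b2, m = 100) {
  r <- a1 + (b1 - a1) * (seq_len(m) - 0.5) / m
  s <- a2 + (b2 - a2) * (seq_len(m + 1) - 0.5) / (m + 1)
  sum(outer(r, s, g)) * (b1 - a1) / m * (b2 - a2) / (m + 1)
}

# Coefficients A_{tau,x} and B_{tau,x} from (22)-(23); the integration domain in tau
# is truncated at 50 because exp(r + s) is negligible far below zero.
A_tau_x <- function(tau, x, H) {
  if (tau <= 0) return(exp(-x))                      # X_0 = x_0 is deterministic
  tt <- min(tau, 50)
  num <- dbl_mid(function(r, s) exp(r + s) * (r - s)^(2 * H - 2), 0, x, -tt, 0)
  den <- dbl_mid(function(r, s) exp(r + s) * abs(r - s)^(2 * H - 2), -tt, 0, -tt, 0)
  exp(-x) * (1 + num / den)
}
B_tau_x <- function(tau, x, H, x0) {
  if (tau <= 0) return(0)
  tt <- min(tau, 50)
  num <- dbl_mid(function(r, s) exp(r + s) * (r - s)^(2 * H - 2), 0, x, -tt, 0)
  den <- dbl_mid(function(r, s) exp(r + s) * abs(r - s)^(2 * H - 2), -tt, 0, -tt, 0)
  exp(-tau - x) * x0 * num / den
}

# Conditional LSE (25): minimize the criterion S_N from (26) over [L, U].
lambda5 <- function(X, h, H, x0, L = 0.01, U = 5) {
  n <- 0:(length(X) - 2)                             # n = 0, ..., N-1
  SN <- function(Lam) {
    A <- sapply(n, function(k) A_tau_x(Lam * k * h, Lam * h, H))
    B <- sapply(n, function(k) B_tau_x(Lam * k * h, Lam * h, H, x0))
    sum((X[-1] - X[-length(X)] * A + B)^2)
  }
  optimize(SN, c(L, U))$minimum
}
```

In practice, the quadrature would be replaced by an adaptive rule and the integrals cached across n; the sketch only mirrors the structure of (25) and (26).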
Remark 3.
Let $\{Z_t\}_{t \in [0,\infty)}$ be the stationary solution to (1). Then
$\mathbb{E}_{\Lambda}[Z_{t+h} \mid Z_t] = Z_t A_{\infty, \Lambda h} + 0 = Z_t f(\Lambda h),$
where f is defined in (13) and $\Lambda > 0$ is arbitrary. Since the coefficient $f(\Lambda h)$ does not depend on t, it is possible to calculate the LSE for $f(\lambda h)$ explicitly and to construct the estimator of λ by applying $f^{-1}$. Such an estimator coincides with $\hat{\lambda}_4$ introduced in the previous section. Thus, $\hat{\lambda}_4$ can be understood as a special case of the conditional LSE for the stationary solution.
In order to prove the strong consistency of the estimator $\hat{\lambda}_5$, we need to verify the uniform convergence of $\frac{1}{N} S_N(\Lambda)$ to a function $S(\Lambda)$ specified below. Let us start with the following proposition on the uniform convergence of $A_{\tau,x}$ and $B_{\tau,x}$. This proposition will help us in the sequel to investigate the limiting behaviour of the two terms $A_{\Lambda n h, \Lambda h}$ and $B_{\Lambda n h, \Lambda h}$ in the sum-of-squares function $S_N$.
Proposition 1.
Consider $A_{\tau,x}$ and $B_{\tau,x}$ defined by (22) and (23), and f defined by (13). Fix arbitrary $0 < x_L < x_U < \infty$. Then
$\lim_{\tau \to \infty} \sup_{x \in [x_L, x_U]} \left| A_{\tau,x} - f(x) \right| = 0,$  (27)
and
$\lim_{\tau \to \infty} \sup_{x \in [x_L, x_U]} \left| B_{\tau,x} \right| = 0.$  (28)
Proof. 
In order to simplify the notation, denote
$I(\tau) = \int_{-\tau}^0 \int_{-\tau}^0 e^{r} e^{s} |r-s|^{2H-2}\,ds\,dr, \qquad J(\tau) = \int_{-\tau}^0 e^{s} (-s)^{2H-2}\,ds,$
and notice that
$I(\tau) \xrightarrow[\tau \to \infty]{} \frac{\Gamma(2H)}{2H-1} = \Gamma(2H-1), \qquad J(\tau) \xrightarrow[\tau \to \infty]{} \Gamma(2H-1).$
Begin with (27):
$\sup_{x \in [x_L, x_U]} |A_{\tau,x} - f(x)| = \sup_{x \in [x_L, x_U]} e^{-x} \left| \frac{\int_0^{x} \int_{-\tau}^0 e^{r} e^{s} (r-s)^{2H-2}\,ds\,dr}{\int_{-\tau}^0 \int_{-\tau}^0 e^{r} e^{s} |r-s|^{2H-2}\,ds\,dr} - \frac{\int_0^{x} \int_{-\infty}^0 e^{s} e^{r} (r-s)^{2H-2}\,ds\,dr}{\Gamma(2H-1)} \right|$
$\leq e^{-x_L} \sup_{x \in [x_L, x_U]} \frac{\int_0^{x} \int_{-\infty}^0 e^{s} e^{r} (r-s)^{2H-2}\,ds\,dr - \int_0^{x} \int_{-\tau}^0 e^{r} e^{s} (r-s)^{2H-2}\,ds\,dr}{I(\tau)} + e^{-x_L} \sup_{x \in [x_L, x_U]} \frac{\left( \Gamma(2H-1) - I(\tau) \right) \int_0^{x} \int_{-\infty}^0 e^{s} e^{r} (r-s)^{2H-2}\,ds\,dr}{I(\tau)\, \Gamma(2H-1)}$
$\leq \frac{e^{-x_L}}{I(\tau)} \int_0^{x_U} e^{r} \int_{-\infty}^{-\tau} e^{s} (r-s)^{2H-2}\,ds\,dr + e^{-x_L}\, \frac{\Gamma(2H-1) - I(\tau)}{I(\tau)\, \Gamma(2H-1)} \int_0^{x_U} e^{r} \int_{-\infty}^{0} e^{s} (r-s)^{2H-2}\,ds\,dr$
$\leq e^{-x_L}\, \frac{\Gamma(2H-1) - J(\tau)}{I(\tau)} \int_0^{x_U} e^{r}\,dr + e^{-x_L}\, \frac{\Gamma(2H-1) - I(\tau)}{I(\tau)} \int_0^{x_U} e^{r}\,dr \xrightarrow[\tau \to \infty]{} 0.$
Similarly
$\sup_{x \in [x_L, x_U]} |B_{\tau,x}| \leq e^{-\tau} e^{-x_L}\, |x_0|\, \frac{J(\tau)}{I(\tau)} \int_0^{x_U} e^{r}\,dr \xrightarrow[\tau \to \infty]{} 0.$
Choose any $0 < \Lambda_L < \Lambda_U < \infty$ and recall that $h > 0$ is fixed. The uniform convergences in (27) and (28) imply the following convergences, uniformly in $\Lambda \in [\Lambda_L, \Lambda_U]$:
$\lim_{n \to \infty} \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| A_{\Lambda n h, \Lambda h} - f(\Lambda h) \right| = 0,$  (29)
and
$\lim_{n \to \infty} \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| B_{\Lambda n h, \Lambda h} \right| = 0,$  (30)
respectively. Indeed, set $x_L = \Lambda_L h$ and $x_U = \Lambda_U h$ and fix any $\varepsilon > 0$. There is $\tau_0 > 0$ such that for any $\tau > \tau_0$,
$\sup_{x \in [x_L, x_U]} |A_{\tau,x} - f(x)| < \varepsilon.$
If $n > \frac{\tau_0}{\Lambda_L h}$, then $\Lambda n h > \tau_0$ for any $\Lambda \in [\Lambda_L, \Lambda_U]$. Consequently,
$\sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| A_{\Lambda n h, \Lambda h} - f(\Lambda h) \right| < \varepsilon,$
which proves (29). The convergence in (30) can be shown analogously. These uniform convergences will be helpful in the proof of the following lemma, which provides the uniform convergence of $\frac{1}{N} S_N(\Lambda)$ to a limiting function $S(\Lambda)$. This uniform convergence is the key ingredient for the convergence of the minimizers $\hat{\lambda}_5$.
Lemma 3.
Let f be defined by (13) and let $S_N(\Lambda)$ be defined by (26), where $\{X_t\}_{t \in [0,\infty)}$ is the observed process with drift value λ. Denote
$S(\Lambda) = \sigma^2 \lambda^{-2H} H \Gamma(2H) \left( 1 - 2 f(\Lambda h) f(\lambda h) + f^2(\Lambda h) \right).$  (31)
Then
$\lim_{N \to \infty} \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| \frac{1}{N} S_N(\Lambda) - S(\Lambda) \right| = 0 \quad a.s.$  (32)
Proof. 
First consider the stationary solution $\{Z_t\}_{t \in [0,\infty)}$ to (1) corresponding to the drift value λ. Comparison of (6) with (13) yields
$\mathbb{E}_{\lambda}\, Z_0 Z_h = f(\lambda h)\, \sigma^2 \lambda^{-2H} H \Gamma(2H).$
It enables us to write
$S(\Lambda) = \mathbb{E}_{\lambda} Z_0^2 - 2 f(\Lambda h)\, \mathbb{E}_{\lambda} Z_0 Z_h + f^2(\Lambda h)\, \mathbb{E}_{\lambda} Z_0^2$
for any Λ > 0 , and, consequently
$\frac{1}{N} S_N(\Lambda) - S(\Lambda) = \left( \frac{1}{N} \sum_{n=0}^{N-1} X_{(n+1)h}^2 - \mathbb{E}_{\lambda} Z_0^2 \right) - 2 \left( \frac{1}{N} \sum_{n=0}^{N-1} A_{\Lambda n h, \Lambda h} X_{(n+1)h} X_{nh} - f(\Lambda h)\, \mathbb{E}_{\lambda} Z_0 Z_h \right) + \left( \frac{1}{N} \sum_{n=0}^{N-1} X_{nh}^2 A_{\Lambda n h, \Lambda h}^2 - f^2(\Lambda h)\, \mathbb{E}_{\lambda} Z_0^2 \right) + \frac{1}{N} \sum_{n=0}^{N-1} B_{\Lambda n h, \Lambda h} \left( 2 X_{(n+1)h} - 2 A_{\Lambda n h, \Lambda h} X_{nh} + B_{\Lambda n h, \Lambda h} \right).$  (33)
Recall that $\{Z_t\}_{t \in [0,\infty)}$ is ergodic and that $|Z_t - X_t|$ vanishes at infinity. Using Lemma 1 in the same way as in the proof of Theorem 1 implies
$\sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| \frac{1}{N} \sum_{n=0}^{N-1} X_{(n+1)h}^2 - \mathbb{E}_{\lambda} Z_0^2 \right| \xrightarrow[N \to \infty]{} 0 \quad a.s.$
For the second term, write
$\sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| \frac{1}{N} \sum_{n=0}^{N-1} A_{\Lambda n h, \Lambda h} X_{(n+1)h} X_{nh} - f(\Lambda h)\, \mathbb{E}_{\lambda} Z_0 Z_h \right| \leq \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| \frac{1}{N} \sum_{n=0}^{N-1} \left( A_{\Lambda n h, \Lambda h} - f(\Lambda h) \right) X_{(n+1)h} X_{nh} + \frac{1}{N} \sum_{n=0}^{N-1} f(\Lambda h) \left( X_{(n+1)h} X_{nh} - \mathbb{E}_{\lambda} Z_0 Z_h \right) \right| \leq \frac{1}{N} \sum_{n=0}^{N-1} \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| A_{\Lambda n h, \Lambda h} - f(\Lambda h) \right| \left| X_{(n+1)h} X_{nh} \right| + \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} f(\Lambda h) \left| \frac{1}{N} \sum_{n=0}^{N-1} X_{(n+1)h} X_{nh} - \mathbb{E}_{\lambda} Z_0 Z_h \right|.$
Application of Lemma 1, the convergence in (29) and the continuity of f ensure the convergence with probability one of both summands to zero as $N \to \infty$.
The uniform convergence of the third term can be shown analogously:
$\sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| \frac{1}{N} \sum_{n=0}^{N-1} X_{nh}^2 A_{\Lambda n h, \Lambda h}^2 - f^2(\Lambda h)\, \mathbb{E}_{\lambda} Z_0^2 \right| \xrightarrow[N \to \infty]{} 0 \quad a.s.,$
where we use
$\lim_{n \to \infty} \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| A_{\Lambda n h, \Lambda h}^2 - f^2(\Lambda h) \right| = 0,$
which follows directly from (29) and the continuity of f.
The last term in (33) can be treated similarly:
$\sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| \frac{1}{N} \sum_{n=0}^{N-1} B_{\Lambda n h, \Lambda h} \left( 2 X_{(n+1)h} - 2 A_{\Lambda n h, \Lambda h} X_{nh} + B_{\Lambda n h, \Lambda h} \right) \right| \leq \frac{1}{N} \sum_{n=0}^{N-1} \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| B_{\Lambda n h, \Lambda h} \right| C_n,$
where
$C_n = 2\, |X_{(n+1)h}| + 2 \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| A_{\Lambda n h, \Lambda h} - f(\Lambda h) \right| |X_{nh}| + 2 \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} |f(\Lambda h)|\, |X_{nh}| + \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| B_{\Lambda n h, \Lambda h} \right|.$
By (30)
$\sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| B_{\Lambda n h, \Lambda h} \right| \xrightarrow[n \to \infty]{} 0,$
and
$\frac{1}{N} \sum_{n=0}^{N-1} C_n \xrightarrow[N \to \infty]{} 2\, \mathbb{E}_{\lambda} |Z_0| \left( 1 + \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} |f(\Lambda h)| \right) < \infty \quad a.s.$
Lemma 1 concludes the proof:
$\frac{1}{N} \sum_{n=0}^{N-1} \sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| B_{\Lambda n h, \Lambda h} \right| C_n \xrightarrow[N \to \infty]{} 0 \quad a.s.$
The previous considerations lead to the convergence of $\hat{\lambda}_5$, being the minimizer of $\frac{1}{N} S_N(\Lambda)$, to the minimizer of $S(\Lambda)$. The next lemma ensures that this minimizer coincides with the true drift value λ.
Lemma 4.
The function S defined by (31) is continuous on $(0, \infty)$ and λ is the unique minimizer of S, i.e.,
$S(\lambda) < S(\Lambda) \qquad \forall\, \Lambda > 0,\ \Lambda \neq \lambda.$
Proof. 
By definition
$S(\Lambda) = \sigma^2 \lambda^{-2H} H \Gamma(2H) \left( 1 - 2 f(\Lambda h) f(\lambda h) + f^2(\Lambda h) \right) = \sigma^2 \lambda^{-2H} H \Gamma(2H) \left( 1 - f^2(\lambda h) + f^2(\lambda h) - 2 f(\Lambda h) f(\lambda h) + f^2(\Lambda h) \right) = \sigma^2 \lambda^{-2H} H \Gamma(2H) \left( 1 - f^2(\lambda h) + \left[ f(\lambda h) - f(\Lambda h) \right]^2 \right).$
The claim follows immediately, because f is one-to-one (it is strictly decreasing).
Continuity of S is a direct consequence of the continuity of f. □
Now we are in a position to prove the strong consistency of λ ^ 5 .
Theorem 3.
Consider bounds $0 < \Lambda_L < \Lambda_U < \infty$ such that they cover the true drift λ of the observed solution $\{X_t\}_{t \in [0,\infty)}$ to Equation (1), i.e., $\lambda \in (\Lambda_L, \Lambda_U)$. Then $\hat{\lambda}_5$ defined in (25) is strongly consistent, i.e.,
$\hat{\lambda}_5 \xrightarrow[N \to \infty]{} \lambda \quad a.s.$
Proof. 
The proof follows a standard argument from nonlinear regression and utilizes Lemma 3 and Lemma 4. Choose $\varepsilon > 0$ sufficiently small so that $[\Lambda_L, \Lambda_U] \setminus (\lambda - \varepsilon, \lambda + \varepsilon) \neq \emptyset$ and set
$\delta = \min \left\{ S(\Lambda) - S(\lambda) : \Lambda \in [\Lambda_L, \Lambda_U] \setminus (\lambda - \varepsilon, \lambda + \varepsilon) \right\} > 0.$
Consider a set of full measure on which the uniform convergence (32) holds and take N 0 > 0 such that
$\sup_{\Lambda \in [\Lambda_L, \Lambda_U]} \left| \frac{1}{N} S_N(\Lambda) - S(\Lambda) \right| < \frac{\delta}{3} \qquad \forall\, N \geq N_0.$
Fix any $N \geq N_0$. Then for arbitrary $\Lambda \in [\Lambda_L, \Lambda_U] \setminus (\lambda - \varepsilon, \lambda + \varepsilon)$ we get
$\frac{1}{N} S_N(\lambda) < S(\lambda) + \frac{\delta}{3} < S(\Lambda) - \frac{\delta}{3} < \frac{1}{N} S_N(\Lambda).$
Since $\hat{\lambda}_5$ minimizes $S_N$ on $[\Lambda_L, \Lambda_U]$, for all $N \geq N_0$ we have
$|\hat{\lambda}_5 - \lambda| < \varepsilon.$
Since ε was arbitrary (if small enough), we obtain the convergence
$\hat{\lambda}_5 \xrightarrow[N \to \infty]{} \lambda$
on a set of full measure. □

6. Results and Discussion

In Table 1 we present a comparison of the root mean square errors (RMSE) of all considered estimators for λ = 0.5 and several combinations of x0, T and H. The estimators $\hat{\lambda}_1$ and $\hat{\lambda}_3$ demonstrate good performance in scenarios with a far-from-zero initial condition (x0 = 100) and a short time horizon (T = 10). This illustrates the fact that these estimators reflect mainly the speed of convergence of the observed process to zero in its initial phase. Increasing the time horizon to T = 1000 adds a stationary phase to the observed trajectories, which distorts the estimators $\hat{\lambda}_1$ and $\hat{\lambda}_3$.
The estimators $\hat{\lambda}_2$ and $\hat{\lambda}_4$ perform well in settings with a stationary-like initial condition (x0 = 0) and a long time horizon (T = 1000). This is because they are constructed from the stationary behavior of the process. Taking a far-from-zero initial condition ruins these estimators, unless the trajectory is very long.
The conditional LSE, $\hat{\lambda}_5$, shows reasonable performance in all studied scenarios and it significantly outperforms the other estimators in the scenario with a far-from-stationary initial condition (x0 = 100) and a long time horizon (T = 1000). This results from the unique ability of this estimator to reflect and utilize information about the drift from both the non-stationary (decreasing) phase and the stationary (oscillating) phase. This is also illustrated in Figure 5. On the other hand, the evaluation of $\hat{\lambda}_5$ is the most numerically demanding among the studied estimators.
If x0 = 0 and T = 10, $\hat{\lambda}_5$ shows a greater RMSE than $\hat{\lambda}_1$ and $\hat{\lambda}_3$ in Table 1, because λ = 1/2 is relatively close to zero. As a consequence, $\hat{\lambda}_1$ and $\hat{\lambda}_3$ have a smaller variance (although a greater bias) than $\hat{\lambda}_5$ (see Figure 6). In order to demonstrate this effect, we calculated the RMSE for simulations in the same scenario but with λ = 3/2 (see Table 2); $\hat{\lambda}_5$ provides a smaller RMSE than the other estimators in this setting.
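As an illustration of how the sketches from the previous sections fit together, the following usage example (our own construction, with a shorter horizon than in the tables so that the naive quadrature in `lambda5` remains tractable) computes all five estimates on a single simulated path; the numbers it produces are not the ones reported in Table 1.

```r
# Hedged usage example: one simulated path, all five estimator sketches.
set.seed(2)
h <- 1; H <- 0.6; sigma <- 2; x0 <- 100
X <- simulate_fou(lambda = 0.5, sigma = sigma, H = H, x0 = x0, T = 200, h = h)
c(lambda1 = lambda1(X, h),
  lambda2 = lambda2(X, sigma, H),
  lambda3 = lambda3(X, h),
  lambda4 = lambda4(X, h, H),
  lambda5 = lambda5(X, h, H, x0))
```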

7. Conclusions

Three new estimators were defined and studied:
  • The least-squares estimator from the exact solution ($\hat{\lambda}_3$), which improves the popular discretized LSE ($\hat{\lambda}_1$) by eliminating the discretization error. It is easy to implement, since it can be calculated by a closed formula. However, it fails to be strongly consistent in the long-span regime.
  • The asymptotic least-squares estimator ($\hat{\lambda}_4$), which is a modification of $\hat{\lambda}_3$ with respect to its asymptotic behavior. As a result, $\hat{\lambda}_4$ is strongly consistent in the long-span regime and behaves similarly to the well-established discrete ergodic estimator ($\hat{\lambda}_2$). The advantage of $\hat{\lambda}_4$ is that it does not require a priori knowledge of the volatility σ. On the other hand, its implementation involves a root-finding numerical procedure.
  • The conditional least-squares estimator ($\hat{\lambda}_5$), which eliminates the bias in the least-squares procedure by considering the conditional expectation of the response as the explanatory variable. The possibility to express this conditional expectation explicitly makes the approach feasible. This conditioning idea (which is new in the context of models with fractional noise, to the best of our knowledge) provides an exceptionally reliable estimator, which outperforms all the other studied estimators. We proved the strong consistency (in the long-span regime) of this estimator. Its implementation comprises solving an optimization problem.
These new estimation procedures can help practitioners and scientists from various fields to improve the calibration of their models based on available data with autocorrelated noise (such data are typically observed or measured at discrete time instants) and, consequently, to obtain more reliable conclusions from the calibrated models.
An interesting future extension would certainly be to explore the potential of the promising idea of conditioning within the least-squares procedure in more general models and settings (including the d-dimensional fOU, the fOU with H < 1/2, non-linear drift, multiplicative noise, etc.).

Author Contributions

Conceptualization, P.K.; methodology, P.K.; software, L.S.; writing—original draft, P.K. and L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the grant LTAIN19007 Development of Advanced Computational Algorithms for Evaluating Post-surgery Rehabilitation.

Acknowledgments

We are grateful to four anonymous reviewers for their valuable comments, which helped improve this paper significantly.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mishura, Y. Stochastic Calculus for Fractional Brownian Motion and Related Processes; Springer: Berlin/Heidelberg, Germany, 2008.
  2. Biagini, F.; Hu, Y.; Øksendal, B.; Zhang, T. Stochastic Calculus for Fractional Brownian Motion and Applications; Springer: London, UK, 2008.
  3. Abundo, M.; Pirozzi, E. On the Integral of the Fractional Brownian Motion and Some Pseudo-Fractional Gaussian Processes. Mathematics 2019, 7, 991.
  4. Ascione, G.; Mishura, Y.; Pirozzi, E. Fractional Ornstein–Uhlenbeck Process with Stochastic Forcing, and its Applications. Methodol. Comput. Appl. Probab. 2019.
  5. Hu, Y.; Nualart, D. Parameter estimation for fractional Ornstein–Uhlenbeck processes. Stat. Probab. Lett. 2010, 80, 1030–1038.
  6. Hu, Y.; Nualart, D.; Zhou, H. Parameter estimation for fractional Ornstein–Uhlenbeck processes of general Hurst parameter. Stat. Inference Stoch. Process. 2019, 22, 111–142.
  7. Es-Sebaiy, K. Berry-Esseen bounds for the least squares estimator for discretely observed fractional Ornstein–Uhlenbeck processes. Stat. Probab. Lett. 2013, 83, 2372–2385.
  8. Kubilius, K.; Mishura, Y.; Ralchenko, K.; Seleznjev, O. Consistency of the drift parameter estimator for the discretized fractional Ornstein–Uhlenbeck process with Hurst index H ∈ (0,1/2). Electron. J. Stat. 2015, 9, 1799–1825.
  9. Hu, Y.; Song, J. Parameter estimation for fractional Ornstein–Uhlenbeck processes with discrete observations. In Malliavin Calculus and Stochastic Analysis; Springer: Boston, MA, USA, 2013; Volume 34, pp. 427–442.
  10. Es-Sebaiy, K.; Viens, F. Optimal rates for parameter estimation of stationary Gaussian processes. Stoch. Process. Their Appl. 2019, 129, 3018–3054.
  11. Azmoodeh, E.; Viitasaari, L. Parameter estimation based on discrete observations of fractional Ornstein–Uhlenbeck process of the second kind. Stat. Inference Stoch. Process. 2015, 18, 205–227.
  12. Neuenkirch, A.; Tindel, S. A least square-type procedure for parameter estimation in stochastic differential equations with additive fractional noise. Stat. Inference Stoch. Process. 2014, 17, 99–120.
  13. Xiao, W.; Zhang, W.; Xu, W. Parameter estimation for fractional Ornstein–Uhlenbeck processes at discrete observation. Appl. Math. Model. 2011, 35, 4196–4207.
  14. Istas, J.; Lang, G. Quadratic variations and estimation of the local Hölder index of a Gaussian process. Annales de l'I.H.P. Probabilités et Statistiques 1997, 33, 407–436.
  15. Coeurjolly, J. Hurst exponent estimation of locally self-similar Gaussian processes using sample quantiles. Ann. Stat. 2008, 36, 1404–1434.
  16. Rosenbaum, M. Estimation of the volatility persistence in a discretely observed diffusion model. Stoch. Process. Their Appl. 2008, 118, 1434–1462.
  17. Berzin, C.; Latour, A.; León, J. Inference on the Hurst Parameter and Variance of Diffusions Driven by Fractional Brownian Motion; Springer International Publishing: Cham, Switzerland, 2014.
  18. Brouste, A.; Iacus, S. Parameter estimation for the discretely observed fractional Ornstein–Uhlenbeck process and the Yuima R package. Comput. Stat. 2013, 28, 1529–1547.
  19. Brouste, A.; Fukasawa, M.; Hino, H.; Iacus, S.; Kamatani, K.; Koike, Y.; Masuda, H.; Nomura, R.; Ogihara, T.; Shimuzu, Y.; et al. The YUIMA Project: A Computational Framework for Simulation and Inference of Stochastic Differential Equations. J. Stat. Softw. 2014, 4, 1–51.
  20. Kubilius, K.; Mishura, Y.; Ralchenko, K. Parameter Estimation in Fractional Diffusion Models; Springer International Publishing AG: Cham, Switzerland, 2017.
  21. Pipiras, V.; Taqqu, M. Integration questions related to fractional Brownian motion. Probab. Theory Relat. Fields 2000, 118, 251–291.
  22. Kříž, P.; Maslowski, B. Central limit theorems and minimum-contrast estimators for linear stochastic evolution equations. Stochastics 2019, 91, 1109–1140.
Figure 1. Two single trajectories of the fOU with different values of λ, where σ = 2, x0 = 20, T = 20, H = 0.6.
Figure 2. Comparison of $\hat{\lambda}_1$, $\hat{\lambda}_2$ and $\hat{\lambda}_3$ for 100 trajectories, where H = 0.6, x0 = 100, T = 10, h = 0.1, σ = 2; the horizontal line shows the true value of the estimated parameter λ = 0.5.
Figure 3. Comparison of $\hat{\lambda}_1$, $\hat{\lambda}_2$, $\hat{\lambda}_3$ and $\hat{\lambda}_4$ for 100 trajectories, where H = 0.6, x0 = 0, T = 1000, h = 1, σ = 2; the horizontal line shows the true value of the estimated parameter λ = 0.5.
Figure 4. Comparison of $\hat{\lambda}_1$, $\hat{\lambda}_2$, $\hat{\lambda}_3$ and $\hat{\lambda}_4$ for 100 trajectories, where H = 0.6, x0 = 100, T = 10, h = 0.1, σ = 2; the horizontal line shows the true value of the estimated parameter λ = 0.5.
Figure 5. Comparison of $\hat{\lambda}_1$, $\hat{\lambda}_2$, $\hat{\lambda}_3$, $\hat{\lambda}_4$ and $\hat{\lambda}_5$ for 100 trajectories, where H = 0.75, x0 = 100, T = 1000, h = 1, σ = 2; the horizontal line shows the true value of the estimated parameter λ = 0.5.
Figure 6. Comparison of $\hat{\lambda}_1$, $\hat{\lambda}_2$, $\hat{\lambda}_3$, $\hat{\lambda}_4$ and $\hat{\lambda}_5$ for 100 trajectories, where H = 0.6, x0 = 0, T = 10, h = 0.1, σ = 2; the horizontal line shows the true value of the estimated parameter λ = 0.5.
Table 1. Root mean square errors of the studied estimators calculated using 100 numerical simulations with λ = 1/2.

x0    T     H     λ̂_1        λ̂_2        λ̂_3        λ̂_4        λ̂_5
0     1000  0.6   0.216161   0.0364895  0.167108   0.0464931  0.0462385
0     1000  0.75  0.350571   0.0484904  0.33802    0.0637274  0.0643273
0     1000  0.9   0.444239   0.120954   0.442423   0.174213   0.170646
100   1000  0.6   0.158561   0.237689   0.0840923  0.125846   0.0297993
100   1000  0.75  0.244412   0.160116   0.205152   0.358587   0.0497393
100   1000  0.9   0.326662   0.105573   0.309256   1.17871    0.0454533
0     10    0.6   0.333834   0.513897   0.347278   0.539418   0.488992
0     10    0.75  0.439545   0.590848   0.438978   0.608744   0.552812
0     10    0.9   0.528586   0.841335   0.527932   0.840721   0.635887
100   10    0.6   0.0259539  0.49369    0.024092   0.441152   0.0277489
100   10    0.75  0.031949   0.480362   0.029853   1.53454    0.0411947
100   10    0.9   0.0398104  0.457155   0.0376366  4.54265    0.0291719
Table 2. Root mean square errors of the studied estimators calculated using 100 numerical simulations with λ = 3/2.

x0    T     H     λ̂_1       λ̂_2       λ̂_3       λ̂_4       λ̂_5
0     10    0.6   0.68946   0.641613  0.686183  0.761677  0.399386
0     10    0.75  1.11793   0.761793  1.10949   0.877707  0.457095
0     10    0.9   1.38463   1.21928   1.38299   1.52812   0.504919

