Article

Controlled Parameter Estimation for The AR(1) Model with Stationary Gaussian Noise

1 School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou 510520, China
2 School of Mathematics (Zhuhai), Sun Yat-sen University, Zhuhai 519082, China
3 School of Mathematics, Shanghai University of Finance and Economics, Shanghai 200433, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Fractal Fract. 2022, 6(11), 643; https://doi.org/10.3390/fractalfract6110643
Submission received: 26 September 2022 / Revised: 28 October 2022 / Accepted: 31 October 2022 / Published: 3 November 2022
(This article belongs to the Special Issue Fractional Processes and Multidisciplinary Applications)

Abstract

This paper deals with the maximum likelihood estimator for the parameter of first-order autoregressive models driven by stationary Gaussian noise (colored noise) together with an input. First, we find the optimal input that maximizes the Fisher information; then, by means of the Laplace transform, we investigate both the asymptotic properties and the asymptotic design problem of the maximum likelihood estimator. The results of the numerical simulation confirm the theoretical analysis and show that the proposed maximum likelihood estimator performs well in finite samples.

1. Introduction

Experiment design has received a great deal of interest over the last decades, from the early statistical literature [1,2,3,4] as well as from the engineering literature [5,6,7]. The classical approach to experiment design is a two-step procedure: maximize the Fisher information under an energy constraint on the input, and then find an adaptive estimator that is asymptotically normal with the optimal convergence rate and whose variance attains the inverse of this Fisher information, as presented in [8].
In [9], the authors showed that real inputs exhibit long-range dependence: the behavior of a real process after a given time t depends on the entire history of the process up to time t; classical examples appear in finance [10,11,12]. This is why researchers have considered controlled problems not only with white noise or standard Brownian motion, but also with fractional-type noise such as fractional Brownian motion [13,14]. Applications of the fractional case can be found in [15,16,17,18]. Let us take [13] as an example: the authors considered the controlled drift estimation problem for the fractional Ornstein–Uhlenbeck (FOU) process
dX_t = −ϑ X_t dt + u(t) dt + dB_t^H,  ϑ > 0,  t ∈ [0, T],   (1)
where B_t^H is a fractional Brownian motion with a known Hurst parameter H ∈ (0, 1), whose covariance function is
E(B_t^H B_s^H) = (1/2) ( t^{2H} + s^{2H} − |t − s|^{2H} ),  s, t ∈ [0, T].
Here u(t) is a deterministic function in a control space and ϑ is the unknown drift parameter. In that setting, the authors achieved the goal of the experiment design.
However, in the real world we usually deal with high-frequency or low-frequency data rather than with continuous observations, as in the previous example. So, can we find a discrete model that is close to (1)? To achieve this goal, we apply the Euler approximation with discrete times t = Δ, 2Δ, …, NΔ (= T) to X_t, which gives the time series
X_{iΔ} = β X_{(i−1)Δ} + Δ u((i−1)Δ) + η_{iΔ}^H,   (2)
where β = 1 − ϑΔ and η_{iΔ}^H = B_{iΔ}^H − B_{(i−1)Δ}^H is a fractional Gaussian noise with step Δ. When ϑ is a positive constant, we can choose Δ such that |β| < 1; then Equation (2) is a controlled AR(1) (autoregressive model of order 1) process with fractional Gaussian noise. For simplicity, we rewrite this model as
X_n = ϑ X_{n−1} + u(n) + ξ_n,  0 < |ϑ| < 1,  X_0 = 0,   (3)
where ξ = (ξ_n, n ∈ Z) is a fractional Gaussian noise (fGn) with the covariance function defined in (20) (since the variance does not affect the final results, we always normalize it to 1). In fact, according to [19], we can extend the noise ξ = (ξ_n, n ∈ Z) to a centered regular stationary Gaussian noise with
∫_{−π}^{π} | log f_ξ(λ) | dλ < ∞,   (4)
where f_ξ(λ) is the spectral density of ξ. A related change-point problem and a Kalman–Bucy type filtering problem can be found in [20,21].
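To make the discretization concrete, here is a minimal simulation sketch (our own illustration, not code from the paper): it generates one path of the controlled AR(1) model (3) driven by fGn, producing the noise by a Cholesky factorization of the covariance matrix built from (20); the Hurst index, the sample size and the constant input below are arbitrary choices.

```python
import numpy as np

def fgn_covariance(N, H):
    """Covariance matrix of a fractional Gaussian noise of length N, see (20)."""
    lags = np.arange(N)
    rho = 0.5 * ((lags + 1.0) ** (2 * H) - 2.0 * lags ** (2 * H) + np.abs(lags - 1.0) ** (2 * H))
    i, j = np.indices((N, N))
    return rho[np.abs(i - j)]

def simulate_model3(theta, u, H, rng):
    """One trajectory of X_n = theta * X_{n-1} + u(n) + xi_n (model (3)) with fGn noise."""
    N = len(u)
    xi = np.linalg.cholesky(fgn_covariance(N, H)) @ rng.standard_normal(N)
    X = np.zeros(N)
    for n in range(N):
        X[n] = (theta * X[n - 1] if n > 0 else 0.0) + u[n] + xi[n]
    return X

rng = np.random.default_rng(0)
theta, H, N = 0.7, 0.65, 500          # arbitrary illustrative values
u = np.ones(N)                        # a simple deterministic input
X = simulate_model3(theta, u, H, rng)
print(X[:5])
```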
Now, let us return to the model (3). As with the function u(t) in (1), u(n) denotes a deterministic function of n ∈ Z. When considering the problem of estimating the unknown parameter ϑ from the observations X^{(N)} = (X_n, n = 1, …, N), the case u(n) = 0 has already been solved in [19]. When u(n) ∈ U_N, the control space defined in (17), let L(ϑ, X^{(N)}) denote the likelihood function for ϑ; then the Fisher information can be written as
I_N(ϑ, u) = −E_ϑ [ ∂²/∂ϑ² ln L(ϑ, X^{(N)}) ].
Therefore, we denote
J_N(ϑ) = sup_{u ∈ U_N} I_N(ϑ, u).
Our main goal is to find a function, say û ∈ U_N, such that
I_N(ϑ, û) = J_N(ϑ),
and then, with this û, to find an adapted estimator ϑ̄_N of the parameter ϑ which is asymptotically efficient in the sense that, for any compact set K ⊂ {ϑ : −1 < ϑ < 1},
sup_{ϑ ∈ K} J_N(ϑ) E_ϑ(ϑ̄_N − ϑ)² = 1 + o(1),   (5)
as N → ∞.
In this paper, using the computation of the Laplace transform, we find û for the model (3) and we check that the maximum likelihood estimator (MLE) satisfies (5).
Remark 1.
Here, we assume that the covariance structure of the noise ξ is known. In fact, if this covariance depends on only one parameter, for example the Hurst parameter H of the fractional Gaussian noise presented in Section 4, we can estimate this parameter with the log-periodogram method [22] or with generalized quadratic variations [23] and study the plug-in estimator. All details can be found in [24].
Remark 2.
In this paper, we will not estimate the function u(n); we will simply find one function that maximizes the Fisher information.
The organization of this paper is as follows. In Section 2, we give some basic results on the regular stationary noise ξ_n, derive the likelihood function, and obtain the formula for the Fisher information. Section 3 provides the main results of this paper, and Section 4 presents simulation examples that show the performance of the proposed MLE. All proofs are collected in Appendix A.

2. Preliminaries and Notations

2.1. Stationary Gaussian Sequences

First of all, let us construct the connection between stationary Gaussian noise and the independent case. Suppose that the covariance of the stationary Gaussian process ξ = (ξ_n)_{n ≥ 1} is defined by
E(ξ_m ξ_n) = c(m, n) = ρ(|n − m|),  ρ(0) = 1.   (6)
When this covariance is positive definite, there exists an associated innovation sequence (σ_n ε_n)_{n ≥ 1}, where the ε_n ∼ N(0, 1), n ≥ 1, are independent, defined by the following relations:
σ_1 ε_1 = ξ_1,  σ_n ε_n = ξ_n − E(ξ_n | ξ_1, ξ_2, …, ξ_{n−1}),  n ≥ 2.
It follows from the Theorem on Normal Correlation (Theorem 13.1, [25]) that there exists a deterministic kernel k = (k(n, m), n ≥ 1, m ≤ n) such that k(n, n) = 1 and
σ_n ε_n = Σ_{m=1}^{n} k(n, m) ξ_m.   (7)
For n ≥ 1, we will denote by β_{n−1} the partial correlation coefficient
β_{n−1} = −k(n, 1).   (8)
As in the Levinson–Durbin algorithm (see [26]), we have the following relationships between k(·,·), the covariance function ρ(·) defined in (6), the sequence of partial correlation coefficients (β_n)_{n ≥ 1} and the variances of the innovations (σ_n²)_{n ≥ 1}:
σ_n² = ∏_{m=1}^{n−1} (1 − β_m²),  n ≥ 2,  σ_1 = 1,   (9)
Σ_{m=1}^{n} k(n, m) ρ(m) = β_n σ_n²,   (10)
k(n+1, n+1−m) = k(n, n−m) − β_n k(n, m).   (11)
Since the covariance matrix of (ξ_n) is positive definite, there also exists an inverse deterministic kernel K = (K(n, m), n ≥ 1, m ≤ n) such that
ξ_n = Σ_{m=1}^{n} K(n, m) σ_m ε_m.   (12)
The relationship between the kernels k and K can be found in [19].
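For numerical work, the quantities β_n, σ_n and the kernel k(n, m) can all be obtained from the correlation function ρ by the Levinson–Durbin recursion. The sketch below is our own illustration of relations (9)–(11), with the sign convention reconstructed above (β_n is the usual partial autocorrelation, so β_n = −k(n+1, 1)); it also checks that the kernel whitens the noise, in the sense that k Γ k^* is diagonal with entries σ_n², where Γ is the covariance matrix of ξ. The fGn example and the value H = 0.65 are arbitrary choices.

```python
import numpy as np

def durbin_levinson(rho):
    """Partial correlations beta_n, innovation standard deviations sigma_n and
    the kernel k(n, m) of (7) for a stationary sequence with correlations rho (rho[0] = 1)."""
    N = len(rho)
    k = np.zeros((N + 1, N + 1))        # k[n, m], 1 <= m <= n (index 0 unused)
    sigma2 = np.ones(N + 1)             # sigma2[n] = sigma_n^2
    beta = np.zeros(N)                  # beta[n], 1 <= n <= N - 1 (index 0 unused)
    k[1, 1] = 1.0
    for n in range(1, N):
        # (10): beta_n sigma_n^2 = sum_{m=1}^{n} k(n, m) rho(m)
        beta[n] = sum(k[n, m] * rho[m] for m in range(1, n + 1)) / sigma2[n]
        # (9): sigma_{n+1}^2 = sigma_n^2 (1 - beta_n^2)
        sigma2[n + 1] = sigma2[n] * (1.0 - beta[n] ** 2)
        # (11): next row of the kernel, with k(n+1, n+1) = 1 and k(n, 0) = 0
        k[n + 1, n + 1] = 1.0
        for m in range(1, n + 1):
            k[n + 1, n + 1 - m] = k[n, n - m] - beta[n] * k[n, m]
    return beta[1:], np.sqrt(sigma2[1:]), k[1:, 1:]

# example: fGn correlations with H = 0.65
H, N = 0.65, 8
lags = np.arange(N)
rho = 0.5 * ((lags + 1.0) ** (2 * H) - 2.0 * lags ** (2 * H) + np.abs(lags - 1.0) ** (2 * H))
beta, sigma, k = durbin_levinson(rho)

# sanity checks: (7) whitens the noise, and beta_n = -k(n+1, 1)
i, j = np.indices((N, N))
Gamma = rho[np.abs(i - j)]
print(np.allclose(k @ Gamma @ k.T, np.diag(sigma ** 2)))   # True
print(np.allclose(beta, -k[1:, 0]))                        # True
```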
Remark 3.
It is worth mentioning that the condition (4) implies that
Σ_{n ≥ 1} β_n² < ∞.
This condition is readily verified for classical autoregressive moving average (ARMA) noises. To our knowledge, no explicit form of the partial autocorrelation coefficients of fractional Gaussian noise (fGn) is known, but since the explicit formula for the spectral density of fGn sequences was exhibited in [27], condition (4) is fulfilled for any Hurst index H ∈ (0, 1). For the closely related fractional autoregressive integrated moving average (fractional ARIMA) processes, it has been proven in [28] that β_n = O(1/n).

2.2. Model Transformation

Let us define the process Z = (Z_n, n ≥ 1) by
Z_n = Σ_{m=1}^{n} k(n, m) X_m,  n ≥ 1,   (13)
where k(n, m) is the kernel defined in (7). Similarly to (12), we have
X_n = Σ_{m=1}^{n} K(n, m) Z_m.   (14)
From the equalities (13) and (14), the process Z = (Z_n, n ≥ 1) generates the same filtration as X = (X_n, n ≥ 1). In the following parts, let the observations be (Z_1, Z_2, …, Z_N). In fact, it was shown in [19] that the process Z can be considered as the first component of a 2-dimensional AR(1) process ζ = (ζ_n, n ≥ 1), which is defined by
ζ_n = ( Z_n
        Σ_{r=1}^{n−1} β_r Z_r ).
It is not hard to see that ζ_n is a 2-dimensional Markov process, which satisfies the following equation:
ζ_n = A_{n−1} ζ_{n−1} + b v(n) + b σ_n ε_n,  n ≥ 1,  ζ_0 = 0_{2×1},   (15)
with
A_n = ( ϑ      ϑ β_n
        β_n      1   ),   b = ( 1
                                0 ),
and the ε_n ∼ N(0, 1) are independent. Following the idea of [13], we define the control space V_N for the function v(n):
V_N = { v(n) : (1/N) Σ_{n=1}^{N} | v(n)/σ_{n+1} |² ≤ 1 }.   (16)
From the control space V_N we can define the corresponding control space for the function u(n):
U_N = { u(n) : (1/N) Σ_{n=1}^{N} | Σ_{m=1}^{n} k(n, m) u(m) / σ_{n+1} |² ≤ 1 },   (17)
where | · | is just the absolute value.
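Below is a small numerical sanity check (our own, under the reconstruction of (13)–(17) above): starting from a simulated controlled trajectory X and an arbitrary input u, it computes Z_n = Σ_{m≤n} k(n, m) X_m and v(n) = Σ_{m≤n} k(n, m) u(m), and verifies that Z_n − ϑ(Z_{n−1} + β_{n−1} Σ_{r=1}^{n−2} β_r Z_r) − v(n) coincides, up to rounding error, with the innovation σ_n ε_n, i.e., the first component of (15). The kernel, σ_n and β_n are obtained here from a Cholesky factorization Γ = L L^* (k = diag(L) L^{−1}, σ_n = L_{nn}), a shortcut equivalent to the Levinson–Durbin recursion; all numerical values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, H, N = 0.7, 0.65, 300           # arbitrary illustrative values

# fGn covariance and innovation quantities via a Cholesky factorization
lags = np.arange(N)
rho = 0.5 * ((lags + 1.0) ** (2 * H) - 2.0 * lags ** (2 * H) + np.abs(lags - 1.0) ** (2 * H))
i, j = np.indices((N, N))
Gamma = rho[np.abs(i - j)]
L = np.linalg.cholesky(Gamma)
sigma = np.diag(L)                     # sigma_1, ..., sigma_N
k = np.diag(sigma) @ np.linalg.inv(L)  # kernel of (7): unit diagonal, k Gamma k^* diagonal
beta = -k[1:, 0]                       # beta_1, ..., beta_{N-1} (= -k(n+1, 1))

# a controlled trajectory of (3) with an arbitrary deterministic input u
u = np.cos(np.arange(1, N + 1))
xi = L @ rng.standard_normal(N)
X = np.zeros(N)
for n in range(N):
    X[n] = (theta * X[n - 1] if n > 0 else 0.0) + u[n] + xi[n]

# transformation (13), transformed input, and innovations of the noise
Z, v, eps = k @ X, k @ u, (k @ xi) / sigma

# first component of (15): Z_n = theta (Z_{n-1} + beta_{n-1} S_{n-2}) + v(n) + sigma_n eps_n
resid, S = np.zeros(N), 0.0            # S accumulates sum_{r <= n-2} beta_r Z_r
for n in range(N):
    pred = theta * (Z[n - 1] + beta[n - 1] * S) if n > 0 else 0.0
    resid[n] = Z[n] - pred - v[n] - sigma[n] * eps[n]
    if n > 0:
        S += beta[n - 1] * Z[n - 1]
print(np.max(np.abs(resid)))           # numerically zero (up to rounding)
```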

2.3. Fisher Information

As explained above, the observations form the first component of the process ζ = (ζ_n, n ≥ 1). Now, from Equation (15), it is easy to write the likelihood function L(ϑ, X^{(N)}), which depends on the function v(n):
L(ϑ, X^{(N)}) = ∏_{n=1}^{N} (2π σ_n²)^{−1/2} exp( − [ b^*(ζ_n − A_{n−1} ζ_{n−1} − b v(n)) ]² / (2σ_n²) ).   (18)
Consequently, the Fisher information I_N(ϑ, v) can be written as
I_N(ϑ, v) = −E_ϑ [ ∂²/∂ϑ² ln L(ϑ, X^{(N)}) ] = E_ϑ Σ_{n=1}^{N−1} (a_n^* ζ_n)² / σ_{n+1}²,   (19)
where a_n = (1, β_n)^*.

3. Main Results

In this part, we will present the main results of this paper. First of all, from the presentation of Fisher information (19), we have the following.
Theorem 1.
The asymptotically optimal input in the control class U_N is u_opt(n) = Σ_{m=1}^{n} K(n, m) σ_{m+1} for 0 < ϑ < 1, and u_opt(n) = (−1)^n Σ_{m=1}^{n} K(n, m) σ_{m+1} or u_opt(n) = (−1)^{n+1} Σ_{m=1}^{n} K(n, m) σ_{m+1} for −1 < ϑ < 0. Furthermore,
lim_{N→∞} J_N(ϑ)/N = I(ϑ),
where I(ϑ) = 1/(1 − ϑ²) + 1/(1 − ϑ)² for 0 < ϑ < 1 and I(ϑ) = 1/(1 − ϑ²) + 1/(1 + ϑ)² for −1 < ϑ < 0.
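To make the optimal input of Theorem 1 concrete, the following sketch (our own illustration) computes u_opt(n) = Σ_{m=1}^{n} K(n, m) σ_{m+1} for the fGn case with H = 0.65 (an arbitrary choice), obtaining K as the inverse of the kernel k, and checks that the corresponding transformed input v(n) = Σ_{m=1}^{n} k(n, m) u_opt(m) equals σ_{n+1}; in other words, u_opt saturates the constraint defining U_N in (17), since (1/N) Σ_n (v(n)/σ_{n+1})² = 1.

```python
import numpy as np

H, N = 0.65, 200                           # arbitrary illustrative values

# innovation quantities for an fGn of length N + 1 (sigma_{N+1} is needed below)
lags = np.arange(N + 1)
rho = 0.5 * ((lags + 1.0) ** (2 * H) - 2.0 * lags ** (2 * H) + np.abs(lags - 1.0) ** (2 * H))
i, j = np.indices((N + 1, N + 1))
Gamma = rho[np.abs(i - j)]
L = np.linalg.cholesky(Gamma)
sigma = np.diag(L)                         # sigma_1, ..., sigma_{N+1}
k = np.diag(sigma) @ np.linalg.inv(L)      # kernel k(n, m) of (7)
K = np.linalg.inv(k)                       # inverse kernel K(n, m) of (12)

# optimal input for 0 < theta < 1 (it does not depend on theta)
u_opt = K[:N, :N] @ sigma[1:N + 1]         # u_opt(n) = sum_{m=1}^{n} K(n, m) sigma_{m+1}

# check: v(n) = sum_{m=1}^{n} k(n, m) u_opt(m) = sigma_{n+1}, saturating the constraint in (17)
v = k[:N, :N] @ u_opt
print(np.allclose(v, sigma[1:N + 1]))      # True
print(np.mean((v / sigma[1:N + 1]) ** 2))  # 1.0
```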
Remark 4.
Theorem 1 can be generalized to the AR(p) case by working with a norm of the Fisher information matrix, but the purpose is then not as transparent as in the AR(1) case, namely: the larger the Fisher information, the smaller the estimation error. For this reason, we only illustrate the result for the first order, and not for order p.
From Theorem 1, since the optimal input does not depend on the unknown parameter ϑ, we can take ϑ̄_N to be the MLE ϑ̂_N. The following theorem states that ϑ̂_N reaches the efficiency of (5).
Theorem 2.
With the optimal input u_opt(n) defined in Theorem 1, for 0 < |ϑ| < 1, the MLE ϑ̂_N has the following properties:
  • ϑ̂_N is strongly consistent, that is, ϑ̂_N → ϑ a.s. as N → ∞.
  • ϑ̂_N is uniformly consistent on compacts K ⊂ R, i.e., for any ν > 0,
    lim_{N→∞} sup_{ϑ ∈ K} P_ϑ( |ϑ̂_N − ϑ| > ν ) = 0.
  • ϑ̂_N is uniformly on compacts asymptotically normal, i.e., as N → ∞,
    lim_{N→∞} sup_{ϑ ∈ K} | E_ϑ f(√N (ϑ̂_N − ϑ)) − E f(ξ) | = 0,  f ∈ C_b,
    where ξ is a zero-mean Gaussian random variable with variance I^{−1}(ϑ) defined in Theorem 1. Moreover, we have the uniform on ϑ ∈ K convergence of the moments: for any q > 0,
    lim_{N→∞} sup_{ϑ ∈ K} | E_ϑ |√N (ϑ̂_N − ϑ)|^q − E |ξ|^q | = 0.
Remark 5.
From Theorem 2, we can see that the asymptotic properties of the MLE do not depend on the covariance structure of the noise, which is the same situation as described in [19].

4. Simulation Study

In this section, Monte Carlo simulations are performed to verify the asymptotic normality of the MLE ϑ̂_N under different Gaussian noises such as AR(1), MA(1) and fractional Gaussian noise (fGn). When β_n defined in (8) is explicit, as in the ARMA case, the MLE is easily computed, so here we only take the fGn as an example. The covariance function of fGn is
ρ(|n − m|) = (1/2) ( |m − n + 1|^{2H} − 2 |m − n|^{2H} + |m − n − 1|^{2H} ).   (20)
As presented in Remark 5, the asymptotic normality of the MLE does not depend on the structure of the stationary noise, which means that it does not depend on the Hurst parameter H. So, unlike [29] and other LSE methods, whose behavior changes at H = 2/3 or at another point in the fractional case, we only need to verify the asymptotic normality for a single fixed H, and we take H = 0.65. We compare ϑ = 0.7 and ϑ = −0.7 in Figure 1 and Figure 2.
Even though we obtained the optimal input u_opt(n) in Theorem 1, our simulation does not use the initial model (1), because this u_opt(n) is rather complicated in the fractional case. We compute the MLE through the transformation (15) with the corresponding v_opt(n) = σ_{n+1}, using the method of Wood and Chan (see [30]) to simulate the fractional Gaussian noise. All the simulations are performed as in [19].
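For completeness, here is a small self-contained Monte Carlo sketch of this experiment (our own reimplementation, whose assumptions differ from the paper's setup: a Cholesky factorization instead of the Wood–Chan method, a modest sample size N = 400, 500 replications, and the MLE written in the equivalent generalized-least-squares form ϑ̂_N = (X_lag^* Γ^{−1}(X − u)) / (X_lag^* Γ^{−1} X_lag), which maximizes the same Gaussian likelihood as (18)). It compares the empirical variance of √N(ϑ̂_N − ϑ) with the theoretical limit I(ϑ)^{−1} of Theorem 1; a histogram of the normalized errors should resemble Figure 1.

```python
import numpy as np

rng = np.random.default_rng(2022)
theta, H, N, n_mc = 0.7, 0.65, 400, 500       # assumptions of this sketch, not of the paper

# fGn covariance, innovation quantities and the optimal input of Theorem 1
lags = np.arange(N + 1)
rho = 0.5 * ((lags + 1.0) ** (2 * H) - 2.0 * lags ** (2 * H) + np.abs(lags - 1.0) ** (2 * H))
i, j = np.indices((N + 1, N + 1))
Gamma_ext = rho[np.abs(i - j)]
L_ext = np.linalg.cholesky(Gamma_ext)
sigma = np.diag(L_ext)
k = np.diag(sigma) @ np.linalg.inv(L_ext)
u_opt = np.linalg.inv(k)[:N, :N] @ sigma[1:N + 1]   # u_opt(n) = sum_m K(n, m) sigma_{m+1}

Gamma = Gamma_ext[:N, :N]
Gamma_inv = np.linalg.inv(Gamma)
L = L_ext[:N, :N]

errors = np.zeros(n_mc)
for r in range(n_mc):
    xi = L @ rng.standard_normal(N)                 # one fGn path
    X = np.zeros(N)
    for n in range(N):
        X[n] = (theta * X[n - 1] if n > 0 else 0.0) + u_opt[n] + xi[n]
    X_lag = np.concatenate(([0.0], X[:-1]))         # X_0 = 0
    theta_hat = (X_lag @ Gamma_inv @ (X - u_opt)) / (X_lag @ Gamma_inv @ X_lag)
    errors[r] = np.sqrt(N) * (theta_hat - theta)

I_theta = 1.0 / (1.0 - theta ** 2) + 1.0 / (1.0 - theta) ** 2
print("empirical variance of sqrt(N)(theta_hat - theta):", errors.var())
print("theoretical asymptotic variance 1/I(theta):", 1.0 / I_theta)
```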
Remark 6.
From the two figures, we can see that the statistical error of the MLE is asymptotically normal, and we have also verified that its variance is close to the inverse of the Fisher information.

5. Conclusions

Starting from the approximation of the controlled fractional Ornstein–Uhlenbeck model (1), we considered the optimal input problem for the AR(1) process driven by stationary Gaussian noise. We found the control function that maximizes the Fisher information, and, with the Laplace transform, we proved the asymptotic normality of the MLE, whose asymptotic variance does not depend on the structure of the noise. Our future study will focus on the non-Gaussian case, such as the general fractional ARIMA, and so on.

Author Contributions

Methodology, L.S., M.Z. and C.C.; formal analysis, M.Z.; Writing—original draft, C.C.; Writing—review and editing, L.S. and M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Lin Sun is supported by the Humanities and Social Sciences Research and Planning Fund of the Ministry of Education of China No. 20YJA630053.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editor and reviewers for their valuable comments, which improved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The appendix provides the proofs of Theorems 1 and 2. Unless otherwise noted, we only consider 0 < ϑ < 1; for −1 < ϑ < 0 the proofs are the same.

Appendix A.1. Proof of Theorem 1

To prove Theorem 1, we separate the Fisher information (19) into two parts:
I_N(ϑ, v) = E_ϑ Σ_{n=1}^{N−1} [ a_n^* ζ_n − a_n^* E_ϑ ζ_n + a_n^* E_ϑ ζ_n ]² / σ_{n+1}²
          = E_ϑ Σ_{n=1}^{N−1} [ a_n^* (ζ_n − E_ϑ ζ_n) ]² / σ_{n+1}² + Σ_{n=1}^{N−1} [ a_n^* E_ϑ ζ_n ]² / σ_{n+1}²
          = E_ϑ Σ_{n=1}^{N−1} (a_n^* P_n^ϑ)² / σ_{n+1}² + Σ_{n=1}^{N−1} (a_n^* E_ϑ ζ_n)² / σ_{n+1}²
          = I_{1,N}(ϑ) + I_{2,N}(ϑ, v),
where P_n^ϑ satisfies the following equation:
P_n^ϑ = A_{n−1} P_{n−1}^ϑ + b σ_n ε_n,  P_0^ϑ = 0_{2×1}.
Obviously, I_{1,N}(ϑ) does not depend on v(n). Thus, as presented in [19], we have
lim_{N→∞} E_ϑ exp( − (1/(2N)) Σ_{n=1}^{N−1} (a_n^* P_n^ϑ)² / σ_{n+1}² ) = exp( − (1/2) I_1(ϑ) ),
and I_1(ϑ) = 1/(1 − ϑ²), which can be deduced from (9) in [19].
A standard calculation yields
lim_{N→∞} I_{1,N}(ϑ)/N = I_1(ϑ) = 1/(1 − ϑ²).
To compute I_{2,N}(ϑ, v), let s(n) = E_ϑ ζ_n / σ_{n+1}. Then we can see that s(n) satisfies the following equation:
s(n) = A_{n−1} s(n−1) σ_n/σ_{n+1} + b f(n),
where f(n) = v(n)/σ_{n+1}, and it is bounded.
Note that β_n → 0 and σ_n/σ_{n+1} → 1; we assume that, for n = 1, 2, …, σ_n/σ_{n+1} ≤ 1 + ε and |β_n| ≤ ε for a sufficiently small positive constant ε such that (1 + ε)ϑ < 1. Consequently, we can state the following result.
Lemma A1.
Let Y = (Y_n, n ≥ 1) be the 2-dimensional vector process which satisfies the following equation:
Y_n = ( ϑ  0
        0  1 ) Y_{n−1} + b f(n),  Y_0 = y_0.
Then, we have
lim_{N→∞} (1/N) [ Σ_{n=1}^{N} (a_n^* s(n))² − Σ_{n=1}^{N} (b^* Y_n)² ] = 0.
Proof. 
For the sake of notational simplicity, we introduce a 2-dimensional vector process Ỹ = (Ỹ_n, n ≥ 1), which satisfies the following equation:
Ỹ_n = A_{n−1} Ỹ_{n−1} σ_n/σ_{n+1} + b f(n),  Ỹ_0 = y_0.
In this situation, we have three comparisons. First, we compare b^*(s(n) − Ỹ_n) with 0. A standard calculation implies that
s(n) − Ỹ_n = A_{n−1} s(n−1) σ_n/σ_{n+1} − A_{n−1} Ỹ_{n−1} σ_n/σ_{n+1}
           = ( ϑ          ϑ β_{n−1}
               β_{n−1}        1     ) (s(n−1) − Ỹ_{n−1}) σ_n/σ_{n+1};
since β_n → 0 and σ_n/σ_{n+1} → 1, we have
b^*(s(n) − Ỹ_n) = ϑ b^*(s(n−1) − Ỹ_{n−1}),  (n → ∞),
which implies b^*(s(n) − Ỹ_n) → 0.
Now, we compare b^* Y_n with b^* Ỹ_n. A simple calculation shows that
Y_n − Ỹ_n = ( ϑ  0
              0  1 ) Y_{n−1} − A_{n−1} Ỹ_{n−1} σ_n/σ_{n+1}.
Letting n → ∞ on both sides of this equation, we have
lim_{n→∞} (Y_n − Ỹ_n) = ( ϑ  0
                           0  1 ) lim_{n→∞} (Y_{n−1} − Ỹ_{n−1}),
which implies b^*(Y_n − Ỹ_n) → 0.
Finally, since β_n → 0 and the components of s(n) are bounded, we can easily obtain a_n^* s(n) − b^* Y_n → 0, which completes the proof. □
Now, we define α(n) = b^* Y_n. Then α(n) = ϑ α(n−1) + f(n), where f(n) lies in the space F_N = { f(n) : (1/N) Σ_{n=1}^{N} f²(n) ≤ 1 }. Since the initial value α(0) will not change our result, we assume α(0) = 0 without loss of generality.
Let J_{2,N}(ϑ) = sup_{v ∈ V_N} I_{2,N}(ϑ, v). Then, it is clear that
lim_{N→∞} J_{2,N}(ϑ) N^{−1} = lim_{N→∞} (1/N) sup_{f ∈ F_N} Σ_{n=1}^{N} α²(n).   (A3)
Now to prove Theorem 1, we only need the following lemma.
Lemma A2.
As J_{2,N}(ϑ) is presented in (A3), we have
lim_{N→∞} J_{2,N}(ϑ) N^{−1} = I_2(ϑ),
where I_2(ϑ) = 1/(1 − ϑ)².
Proof. 
First of all, taking f(n) = 1, so that α(n) = ϑ α(n−1) + 1, we conclude that
α(n) = (1 − ϑ^n)/(1 − ϑ),  α²(n) = (1 − ϑ^n)²/(1 − ϑ)²,
from which we can easily obtain
lim_{N→∞} (1/N) Σ_{n=1}^{N} α²(n) = 1/(1 − ϑ)².
This gives the lower bound
lim_{N→∞} (1/N) sup_{f ∈ F_N} Σ_{n=1}^{N} α²(n) ≥ 1/(1 − ϑ)².
Furthermore, a simple calculation shows that
α(n) = φ(n) Σ_{i=1}^{n} φ^{−1}(i) f(i),  n ≥ 1,  α(0) = 0,
where
φ(n) = ϑ φ(n−1),  φ(0) = 1.
Obviously, we can rewrite (1/N) Σ_{n=1}^{N} α²(n) as
(1/N) Σ_{n=1}^{N} α²(n) = Σ_{n=1}^{N} ( φ(n) Σ_{i=1}^{n} φ^{−1}(i) f(i)/√N )²,
or
(1/N) Σ_{n=1}^{N} α²(n) = Σ_{i=1}^{N} Σ_{j=1}^{N} F_N(i, j) (f(i)/√N)(f(j)/√N),
where
F_N(i, j) = Σ_{ℓ = i∨j}^{N} φ(ℓ) φ^{−1}(i) · φ(ℓ) φ^{−1}(j).
Let ϕ_n = φ^{−1}(n) Σ_{ℓ=n}^{N} φ(ℓ) ε_ℓ, where the ε_ℓ ∼ N(0, 1) are independent. Then, we have
F_N(i, j) = E(ϕ_i ϕ_j)
and
ϕ_{n−1} = ϑ ϕ_n + ε_{n−1},  ϕ_{N+1} = 0.
Let us mention that F_N(i, j) is a compact symmetric operator for fixed N. We need to estimate the spectral gap (the first eigenvalue ν_1(N)) of this operator. The estimation of the spectral gap is based on the Laplace transform of Σ_{i=1}^{N} ϕ_i², which is written as
L_N(a) = E_ϑ exp( − (a/2) Σ_{i=1}^{N} ϕ_i² ),
for a sufficiently small negative a < 0. On the one hand, when a > −1/ν_1(N), ϕ is a centered Gaussian process with covariance operator F_N. Using Mercer's theorem and Parseval's identity, L_N(a) can be represented as
L_N(a) = ∏_{i ≥ 1} (1 + a ν_i(N))^{−1/2},
where (ν_i(N)) is the sequence of positive eigenvalues of the covariance operator. A straightforward algebraic calculation shows the following.
L_N(a) = [ ϑ^{N−1} Ψ_{N−1} ]^{−1/2},
where
Ψ_N = ( 1  0 ) ( 1/ϑ      −1/ϑ
                 −a/ϑ    a/ϑ + ϑ )^{N−1} ( 1
                                           0 ).
For
Δ = ( (1 + a)/ϑ + ϑ )² − 4 ≥ 0,
there exist two real eigenvalues λ_2 ≤ 1 ≤ λ_1 of the matrix
( 1/ϑ      −1/ϑ
  −a/ϑ    a/ϑ + ϑ ).
Then, we can see that
Ψ_N = ( λ_2^{N−1} − λ_1^{N−1} − ϑ (λ_2^{N−2} − λ_1^{N−2}) ) / ( ϑ (λ_2 − λ_1) ) ≥ 0.
That is to say, for ϑ > 0 and for any 0 > a ≥ −(1 − ϑ)², L_N(a) > 0. Thus, lim_{N→∞} ν_1(N) ≤ 1/(1 − ϑ)², and we complete the proof. □
Remark A1.
For −1 < ϑ < 0, Δ ≥ 0 means (1 + a)/ϑ + ϑ ≤ −2, that is, 0 > a ≥ −(1 + ϑ)². As a consequence, we have ν_1(N) ≤ 1/(1 + ϑ)².

Appendix A.2. Proof of Theorem 2

Let v_opt(n) = σ_{n+1} and let ζ^o = (ζ_n^o, n ≥ 1) be the process ζ associated with the function v_opt(n). Then, we have
ζ_n^o = A_{n−1} ζ_{n−1}^o + b v_opt(n) + b σ_n ε_n,  ζ_0^o = 0_{2×1}.
To estimate the parameter ϑ from the observations ζ_1, ζ_2, …, ζ_N, we can write the MLE of ϑ with the help of (18):
ϑ̂_N = [ Σ_{n=1}^{N} (a_n^* ζ_n / σ_{n+1})² ]^{−1} Σ_{n=1}^{N} (a_n^* ζ_n)(b^* η_{n+1}) / σ_{n+1}²,
where η_n = ( Z_n − v(n), 0 )^*.
A standard calculation yields
ϑ̂_N − ϑ = M_N / ⟨M⟩_N,
where
M_N = Σ_{n=1}^{N} (a_n^* ζ_n / σ_{n+1}) ε_{n+1},  ⟨M⟩_N = Σ_{n=1}^{N} (a_n^* ζ_n / σ_{n+1})².
The second and third conclusions of Theorem 2, on the asymptotic normality, are crucially based on the asymptotic study of the Laplace transform
L_N^ϑ(μ/N) = E_ϑ exp( − (μ/(2N)) ⟨M⟩_N ),
as N → ∞.
First, we can rewrite L_N^ϑ(μ/N) in the following form:
L_N^ϑ(μ/N) = E_ϑ exp( − (1/2) Σ_{n=1}^{N} ζ_n^* M_n ζ_n ),
where M_n = (μ/(N σ_{n+1}²)) a_n a_n^*.
As presented in [19] and using the Cameron–Martin formula [31], we have the following result.
Lemma A3.
For any N, the following equality holds:
L_N^ϑ(μ/N) = ∏_{n=1}^{N} [ det(Id + γ(n, n) M_n) ]^{−1/2} exp( − (1/2) Σ_{n=1}^{N} z_n^* M_n (Id + γ(n, n) M_n)^{−1} z_n ),
where (γ(n, m), 1 ≤ m ≤ n) is the unique solution of the equation
γ(n, m) = ∏_{r=m+1}^{n} A_{r−1} (Id + γ(r, r) M_r)^{−1} γ(m, m),
and the function (γ(n, n), n ≥ 1) is the solution of the Riccati equation
γ(n, n) = A_{n−1} (Id + γ(n−1, n−1) M_{n−1})^{−1} γ(n−1, n−1) A_{n−1}^* + σ_n² b b^*.
It is worth mentioning that (z_n, 1 ≤ n ≤ N) is the unique solution of the equation
z_n = m_n − Σ_{r=1}^{n−1} γ(n, r) [ Id + γ(r, r) M_r ]^{−1} M_r z_r,  z_0 = m_0,
where m_n = E ζ_n^o.
With the explicit formula of the Laplace transform presented in Lemma A3, we can obtain its asymptotic property.
Lemma A4.
Under the condition (4), for any μ ∈ R, we have
lim_{N→∞} L_N^ϑ(μ/N) = exp( − (μ/2) I(ϑ) ),
where I(ϑ) = 1/(1 − ϑ²) + 1/(1 − ϑ)².
Proof. 
In [19], we have stated that
lim_{N→∞} ∏_{n=1}^{N} [ det(Id + γ(n, n) M_n) ]^{−1/2} = exp( − (μ/2) · 1/(1 − ϑ²) ).
Since the components of γ(n, n) are bounded, we have
lim_{N→∞} ( Id + γ(n, n) M_n ) = Id.
On the other hand,
Σ_{n=1}^{N} m_n^* M_n m_n = (μ/N) Σ_{n=1}^{N} ( a_n^* E ζ_n^o / σ_{n+1} )² → μ/(1 − ϑ)²,  as N → ∞,
which was presented in the last part. Finally, noticing that
Σ_{r=1}^{n−1} γ(n, r) [ Id + γ(r, r) M_r ]^{−1} M_r z_r = Σ_{r=1}^{n−1} ∏_{τ=r+1}^{n} A_{τ−1} (Id + γ(τ, τ) M_τ)^{−1} [ Id + γ(r, r) M_r ]^{−1} M_r z_r,
we have
lim_{N→∞} Σ_{r=1}^{n−1} γ(n, r) [ Id + γ(r, r) M_r ]^{−1} M_r z_r = 0.
Combining (A11), (A12) and (A13), Lemma A4 follows. □
From this conclusion, it follows immediately that
P_ϑ-lim_{N→∞} (1/N) ⟨M⟩_N = I(ϑ).
Furthermore, using the central limit theorem for martingales, we have
(1/√N) M_N ⇒ N(0, I(ϑ)).
Consequently, the asymptotic normality part of Theorem 2 is obtained.
Strong consistency follows immediately when we replace μ/N by a proper positive constant μ in Lemma A4, because the determinant part tends to 0, as presented in Section 5.2 of [19], and the extra part is bounded.

References

  1. Kiefer, J. On the Efficient Design of Statistical Investigation. Ann. Stat. 1974, 2, 849–879.
  2. Mehra, R. Optimal Input Signal for Linear System Identification. IEEE Trans. Autom. Control 1974, 19, 192–200.
  3. Mehra, R. Optimal Input Signals for Parameter Estimation in Dynamic Systems—Survey and New Results. IEEE Trans. Autom. Control 1974, 19, 753–768.
  4. Ng, T.S.; Qureshi, Z.H.; Cheah, Y.C. Optimal Input Design for an AR Model with Output Constraints. Automatica 1984, 20, 359–363.
  5. Gevers, M. From the Early Achievement to the Revival of Experiment Design. Eur. J. Control 2005, 11, 1–18.
  6. Goodwin, G.; Rojas, C.; Welsh, J.; Feuer, A. Robust Optimal Experiment Design for System Identification. Automatica 2007, 43, 993–1008.
  7. Ljung, L. System Identification—Theory for the User; Prentice Hall: Englewood Cliffs, NJ, USA, 1987.
  8. Ovseevich, A.; Khasminskii, R.; Chow, P. Adaptive Design for Estimation of Unknown Parameters in Linear System. Probl. Inf. Transm. 2000, 36, 125–153.
  9. Leland, W.E.; Taqqu, M.S.; Willinger, W.; Wilson, D.V. On the Self-Similar Nature of Ethernet Traffic. IEEE/ACM Trans. Netw. 1994, 2, 1–15.
  10. Comte, F.; Renault, E. Long Memory in Continuous-Time Stochastic Volatility Models. Math. Financ. 1998, 8, 291–323.
  11. Gatheral, J.; Jaisson, T.; Rosenbaum, M. Volatility Is Rough. Quant. Financ. 2018, 18, 933–949.
  12. Yajima, Y. On Estimation of Long-Memory Time Series Models. Aust. J. Stat. 1985, 27, 302–320.
  13. Brouste, A.; Cai, C. Controlled Drift Estimation in Fractional Diffusion Process. Stoch. Dyn. 2013, 13, 1250025.
  14. Brouste, A.; Kleptsyna, M.; Popier, A. Design for Estimation of Drift Parameter in Fractional Diffusion System. Stat. Inference Stoch. Process. 2012, 15, 133–149.
  15. Cao, K.; Gu, J.; Mao, J.; Liu, C. Sampled-Data Stabilization of Fractional Linear System under Arbitrary Sampling Periods. Fractal Fract. 2022, 6, 416.
  16. Chen, S.; Huang, W.; Liu, Q. A New Adaptive Robust Sliding Mode Control Approach for Nonlinear Singular Fractional Order System. Fractal Fract. 2022, 6, 253.
  17. Jia, T.; Chen, X.; He, L.; Zhao, F.; Qiu, J. Finite-Time Synchronization of Uncertain Fractional-Order Delayed Memristive Neural Networks via Adaptive Sliding Mode Control and Its Application. Fractal Fract. 2022, 6, 502.
  18. Liu, R.; Wang, Z.; Zhang, X.; Ren, J.; Gui, Q. Robust Control for Variable-Order Fractional Interval System Subject to Actuator Saturation. Fractal Fract. 2022, 6, 159.
  19. Brouste, A.; Cai, C.; Kleptsyna, M. Asymptotic Properties of the MLE for the Autoregressive Process Coefficients under Stationary Gaussian Noise. Math. Methods Stat. 2014, 23, 103–115.
  20. Brouste, A.; Cai, C.; Soltane, M.; Wang, L. Testing for the Change of the Mean-Reverting Parameter of an Autoregressive Model with Stationary Gaussian Noise. Stat. Inference Stoch. Process. 2020, 23, 301–318.
  21. Brouste, A.; Kleptsyna, M. Kalman Type Filter under Stationary Noises. Syst. Control Lett. 2012, 61, 1229–1234.
  22. Robinson, P. Log-Periodogram Regression of Time Series with Long-Range Dependence. Ann. Stat. 1995, 23, 1048–1072.
  23. Istas, J.; Lang, G. Quadratic Variation and Estimation of the Local Hölder Index of a Gaussian Process. Ann. l'I.H.P. Sect. B 1997, 33, 407–436.
  24. Ben Hariz, S.; Brouste, A.; Cai, C.; Soltane, M. Fast and Asymptotically-Efficient Estimation in a Fractional Autoregressive Process. 2021. Available online: https://hal.archives-ouvertes.fr/hal-03221391 (accessed on 8 May 2021).
  25. Liptser, R.S.; Shiryaev, A.N. Statistics of Random Processes II: Applications; Springer: Berlin/Heidelberg, Germany, 2001; Volume 2.
  26. Durbin, J. The Fitting of Time Series Models. Rev. Inst. Int. Stat. 1960, 28, 233–243.
  27. Sinai, Y.G. Self-Similar Probability Distributions. Theory Probab. Appl. 1976, 21, 64–80.
  28. Hosking, J. Fractional Differencing. Biometrika 1981, 68, 165–176.
  29. Chen, Y.; Li, T.; Li, Y. Second Estimator for an AR(1) Model Driven by a Long Memory Gaussian Noise. arXiv 2020, arXiv:2008.12443.
  30. Wood, A.; Chan, G. Simulation of Stationary Gaussian Processes in [0,1]^d. J. Comput. Graph. Stat. 1994, 3, 409–432.
  31. Kleptsyna, M.L.; Le Breton, A.; Viot, M. New Formulas Concerning Laplace Transforms of Quadratic Forms for General Gaussian Sequences. Int. J. Stoch. Anal. 2002, 15, 309–325.
Figure 1. Histogram of the statistic Φ(N, ϑ, X) with ϑ = 0.7 and N = 4000.
Figure 2. Histogram of the statistic Φ(N, ϑ, X) with ϑ = −0.7 and N = 4000.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Sun, L.; Cai, C.; Zhang, M. Controlled Parameter Estimation for The AR(1) Model with Stationary Gaussian Noise. Fractal Fract. 2022, 6, 643. https://doi.org/10.3390/fractalfract6110643