Article

Mildly Explosive Autoregression with Strong Mixing Errors

School of Big Data and Statistics, Anhui University, Hefei 230039, China
*
Author to whom correspondence should be addressed.
Entropy 2022, 24(12), 1730; https://doi.org/10.3390/e24121730
Submission received: 1 November 2022 / Revised: 22 November 2022 / Accepted: 24 November 2022 / Published: 26 November 2022
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)

Abstract

In this paper, we consider the mildly explosive autoregression $y_t = \rho_n y_{t-1} + u_t$, $1 \le t \le n$, where $\rho_n = 1 + c/n^{\nu}$, $c > 0$, $\nu \in (0,1)$, and $u_1, \ldots, u_n$ are arithmetically $\alpha$-mixing errors. Under some weak conditions, such as $Eu_1 = 0$, $E|u_1|^{4+\delta} < \infty$ for some $\delta > 0$, and mixing coefficients $\alpha(n) = O(n^{-(2+8/\delta)})$, the Cauchy limiting distribution is established for the least squares (LS) estimator $\hat{\rho}_n$ of $\rho_n$, which extends the cases of independent errors and geometrically $\alpha$-mixing errors. Some simulations for $\rho_n$, such as the empirical probability of the confidence interval and the empirical density, are presented to illustrate the Cauchy limiting distribution and show good finite sample performance. In addition, we use the Cauchy limiting distribution of the LS estimator $\hat{\rho}_n$ to analyze real data from the NASDAQ composite index from April 2011 to April 2021.

1. Introduction

We consider the first-order autoregressive process defined by
$$y_t = \rho y_{t-1} + u_t, \quad 1 \le t \le n,$$
where $u_1, u_2, \ldots, u_n$ are random errors with mean 0 and variance $\sigma^2 > 0$. It is well known that the regression coefficient $\rho$ characterizes the properties of the process $\{y_t\}$. When $|\rho| < 1$, $\{y_t\}$ is called a stationary process (see Brockwell and Davis [1]). For example, assume that $u_1, \ldots, u_n$ are independent and identically distributed errors with $Eu_1 = 0$, $\mathrm{Var}(u_1) = \sigma^2$, and $E|u_1|^{2+\delta} < \infty$ for some $\delta > 0$. Then, the least squares (LS) estimator $\hat{\rho}_n$ of $\rho$ defined by
$$\hat{\rho}_n = \left(\sum_{t=1}^{n} y_{t-1} y_t\right)\left(\sum_{t=1}^{n} y_{t-1}^{2}\right)^{-1}$$
has a normal limiting distribution:
$$\sqrt{n}\,(\hat{\rho}_n - \rho) \to_d N(0, 1-\rho^{2}),$$
where $|\rho| < 1$ (see Phillips and Magdalinos [2]). When $|\rho| = 1$, $\{y_t\}$ is called a random walk process (see Dickey and Fuller [3], Wang et al. [4]). When $|\rho| > 1$, $\{y_t\}$ is called an explosive process. Let $u_1, \ldots, u_n$ be independent and identically distributed Gaussian errors $N(0, \sigma^{2})$ with $\sigma^{2} > 0$, and let the initial condition be $y_0 = 0$. White [5] and Anderson [6] showed that the LS estimator $\hat{\rho}_n$ of $\rho$ has a Cauchy limiting distribution:
$$\frac{\rho^{n}}{\rho^{2}-1}(\hat{\rho}_n - \rho) \to_d \mathcal{C},$$
where $\mathcal{C}$ is a standard Cauchy random variable. Moreover, let $c$ be a constant and $\rho = \rho_n = 1 + c/n^{\nu}$, where $\nu \in (0,1)$. If $c < 0$, then $\{y_t\}$ is called a near-stationary process (see Chan and Wei [7]). Let $u_1, \ldots, u_n$ be independent and identically distributed errors with $Eu_1 = 0$, $\mathrm{Var}(u_1) = \sigma^{2}$, and $E|u_1|^{2+\delta} < \infty$ for some $\delta > 0$. Phillips and Magdalinos [2] showed that the LS estimator $\hat{\rho}_n$ of $\rho_n$ has a normal limiting distribution:
$$n^{(1+\nu)/2}(\hat{\rho}_n - \rho_n) \to_d N(0, -2c),$$
where $c < 0$. If $c > 0$, then $\{y_t\}$ is called a near-explosive or mildly explosive process. Phillips and Magdalinos [2] also showed that the LS estimator $\hat{\rho}_n$ of $\rho_n$ has a Cauchy limiting distribution:
$$\frac{n^{\nu}\rho_n^{n}}{2c}(\hat{\rho}_n - \rho_n) \to_d \mathcal{C},$$
where $c > 0$.
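As a concrete illustration of the LS estimator (2) and the self-normalized statistic in (6), here is a minimal Python sketch (not part of the paper; `ls_estimate` and all parameter values are our own illustrative choices) that simulates one mildly explosive path with iid standard Gaussian errors:

```python
import numpy as np

def ls_estimate(y):
    """Least squares estimator: rho_hat = (sum y_{t-1} y_t) / (sum y_{t-1}^2)."""
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

rng = np.random.default_rng(0)
n, c, nu = 1000, 1.0, 0.5
rho_n = 1 + c / n**nu  # mildly explosive coefficient, slightly above 1

# Simulate y_t = rho_n * y_{t-1} + u_t with iid N(0,1) errors and y_0 = 0.
u = rng.standard_normal(n)
y = np.empty(n + 1)
y[0] = 0.0
for t in range(1, n + 1):
    y[t] = rho_n * y[t - 1] + u[t - 1]

rho_hat = ls_estimate(y)
# Self-normalized statistic from (6): (n^nu * rho_n^n / (2c)) * (rho_hat - rho_n)
# is approximately standard Cauchy for large n.
stat = n**nu * rho_n**n / (2 * c) * (rho_hat - rho_n)
print(rho_n, rho_hat, stat)
```

Because the normalization $n^{\nu}\rho_n^{n}$ grows extremely fast, $\hat{\rho}_n$ is very close to $\rho_n$ even for moderate $n$.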
It is interesting to study the near-stationary and mildly explosive processes based on dependent errors. For example, Buchmann and Chan [8] considered the near-stationary process with strongly dependent errors; Phillips and Magdalinos [9] and Magdalinos [10] studied the mildly explosive process whose errors form a moving average of martingale differences; Aue and Horváth [11] considered the mildly explosive process with stable errors; Oh et al. [12] studied the mildly explosive process with strong mixing ($\alpha$-mixing) errors and obtained the Cauchy limiting distribution in (6) for the LS estimator $\hat{\rho}_n$. An $\alpha$-mixing sequence is weakly dependent. However, they assumed that $\{u_n, n \ge 1\}$ was geometrically $\alpha$-mixing, i.e., $\alpha(n) = O(|\xi|^{n})$ for some $|\xi| < 1$, which is a very strong condition. It is more general to assume that $\{u_n, n \ge 1\}$ is arithmetically $\alpha$-mixing, i.e., $\alpha(n) = O(n^{-\beta})$ for some $\beta > 0$. Thus, the aim of this paper is to weaken this mixing condition: we investigate the mildly explosive process based on arithmetically $\alpha$-mixing errors. Compared with Oh et al. [12], we use different inequalities for $\alpha$-mixing sequences to prove the key Lemmas 1 and 2 (see Section 2 and Section 6). As important applications, some simulations and real data from the NASDAQ composite index from April 2011 to April 2021 are also discussed in this paper. Next, we recall the definition of $\alpha$-mixing as follows:
Let $\mathbb{N} = \{1, 2, \ldots\}$ and denote by $\mathcal{F}_{k}^{n} = \sigma(u_i, k \le i \le n, i \in \mathbb{N})$ the $\sigma$-field generated by the random variables $u_k, u_{k+1}, \ldots, u_n$, $1 \le k \le n$. For $n \ge 1$, we define
$$\alpha(n) = \sup_{m \in \mathbb{N}}\ \sup_{A \in \mathcal{F}_{1}^{m},\, B \in \mathcal{F}_{m+n}^{\infty}} |P(AB) - P(A)P(B)|.$$
Definition 1.
If $\alpha(n) \to 0$ as $n \to \infty$, then $\{u_n, n \ge 1\}$ is called a strong mixing or α-mixing sequence. If $\alpha(n) = O(n^{-\beta})$ for some $\beta > 0$, then $\{u_n, n \ge 1\}$ is called an arithmetically α-mixing sequence. If $\alpha(n) = O(|\xi|^{n})$ for some $|\xi| < 1$, then $\{u_n, n \ge 1\}$ is called a geometrically α-mixing sequence.
The $\alpha$-mixing sequence is weakly dependent, and several linear and nonlinear time series models satisfy this mixing property. For more work on $\alpha$-mixing and its applications to regression, we refer the reader to Hall and Heyde [13], Györfi et al. [14], Lin and Lu [15], Fan and Yao [16], Jinan et al. [17], Escudero et al. [18], Li et al. [19], and the references therein. Many researchers have studied mildly explosive models. For example, Arvanitis and Magdalinos [20] studied the mildly explosive process under stationary conditional heteroskedasticity; Lui et al. [21] investigated the mildly explosive process with anti-persistent errors; Wang and Yu [22] studied the explosive process without Gaussian errors; Kim et al. [23] studied the explosive process with independent but not identically distributed errors. Furthermore, many researchers have used the mildly explosive model to study the behavior of economic growth and rational bubble problems; see Magdalinos and Phillips [24], Phillips et al. [25], Oh et al. [12], Lui et al. [21], and the references therein.
The rest of this paper is organized as follows. Some conditions in Assumption 1 and two important lemmas (Lemmas 1 and 2) are presented in Section 2. Consequently, the Cauchy limiting distribution for the LS estimator $\hat{\rho}_n$ and the confidence interval for $\rho_n$ are obtained in Section 2 (see Theorem 1). We also give some remarks on existing studies of the Cauchy limiting distribution in Section 2. As applications, some simulations on the empirical probability of the confidence interval for $\rho_n$ and the empirical densities of $\frac{\rho_n^{n}}{\rho_n^{2}-1}(\hat{\rho}_n - \rho_n)$ and $\frac{\hat{\rho}_n^{n}}{\hat{\rho}_n^{2}-1}(\hat{\rho}_n - \rho_n)$ are presented in Section 3, which agree with the Cauchy limiting distribution in (6). In Section 4, the mildly explosive process is used to analyze real data from the NASDAQ composite index from April 2011 to April 2021, a takeoff period for technology stocks and a period of rapidly increasing U.S. Treasury yields. Some conclusions and future research are discussed in Section 5. Finally, the proofs of the main results are presented in Section 6. Throughout the paper, as $n \to \infty$, let $\to_P$ and $\to_d$ denote convergence in probability and in distribution, respectively. Let $C, C_1, C_2, C_3, \ldots$ denote positive constants not depending on $n$, which may differ in various places. If $X$ and $Y$ have the same distribution, we write $X =_d Y$.

2. Results

We consider the mildly explosive process
$$y_t = \rho_n y_{t-1} + u_t, \quad t = 1, \ldots, n,$$
where $\rho_n = 1 + c/n^{\nu}$ for some $c > 0$, $\nu \in \left(\frac{\delta}{1+\delta}, 1\right)$, and $\delta > 0$. In addition, $u_1, \ldots, u_n$ are mean-zero $\alpha$-mixing errors. The conditions in Assumption 1 are listed as follows:
Assumption 1.
(A1) Let $Eu_1 = 0$ and $E|u_1|^{4+\delta} < \infty$ for some $\delta > 0$;
(A2) Let $\{u_n, n \ge 1\}$ be a strictly stationary sequence of arithmetically α-mixing random variables with $\alpha(n) = O(n^{-(2+8/\delta)})$, where δ is defined in (A1);
(A3) Let $\rho_n = 1 + c/n^{\nu}$ for some $c > 0$ and $\nu \in \left(\frac{\delta}{1+\delta}, 1\right)$, where δ is defined in (A1); in addition, let $y_0 = o_P(n^{\nu/2})$.
In order to derive the limiting distribution of the LS estimator $\hat{\rho}_n$ of $\rho_n$, the normalized sample covariance $\sum_{t=1}^{n} y_{t-1}u_t$ can be approximated by the product of the stochastic sequences
$$X_n = \frac{1}{n^{\nu/2}}\sum_{t=1}^{n} \rho_n^{-(n-t)-1}u_t \quad \text{and} \quad Y_n = \frac{1}{n^{\nu/2}}\sum_{j=1}^{n} \rho_n^{-j}u_j.$$
Then, we have the following lemmas:
Lemma 1.
Let conditions (A1)–(A3) hold. Then, as $n \to \infty$,
$$\frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n}\sum_{j=t}^{n}\rho_n^{t-j-1}u_j u_t \to_{L_2} 0,$$
$$\frac{\rho_n^{-2n+1}}{n^{\nu}}\sum_{t=1}^{n}\sum_{j=1}^{t-1}\rho_n^{t-j-1}u_j u_t \to_{L_2} 0,$$
where $\to_{L_2}$ means convergence in mean square.
Lemma 2.
Let conditions (A1)–(A3) hold. Then, as $n \to \infty$, the sequences $\{X_n, n \ge 1\}$ and $\{Y_n, n \ge 1\}$ defined by (8) satisfy
$$(X_n, Y_n) \to_d (X, Y),$$
where $X$ and $Y$ are two independent $N\!\left(0, \frac{\sigma^{2}}{2c}\right)$ random variables with $c > 0$ and
$$\sigma^{2} = \sum_{k=-\infty}^{\infty} \mathrm{Cov}(u_0, u_k) > 0.$$
Combining Lemmas 1 and 2, we obtain the following Cauchy limiting distribution for the LS estimator $\hat{\rho}_n$ of $\rho_n$:
Theorem 1.
Let the conditions of Lemmas 1 and 2 be satisfied. Then, as $n \to \infty$, we have
$$\left(\frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n} y_{t-1}u_t,\ \frac{\rho_n^{-2n}}{n^{2\nu}}\sum_{t=1}^{n} y_{t-1}^{2}\right) \to_d (XY,\, Y^{2}),$$
$$\frac{n^{\nu}\rho_n^{n}}{2c}(\hat{\rho}_n - \rho_n) \to_d \mathcal{C},$$
where $X$ and $Y$ are two independent $N\!\left(0, \frac{\sigma^{2}}{2c}\right)$ random variables defined by (11), and $\mathcal{C}$ is a standard Cauchy random variable.
Remark 1.
Let (A2)*: $\{u_n, n \ge 1\}$ is a strictly stationary sequence of geometrically α-mixing random variables; (A3)*: $\rho_n = 1 + c/n^{\nu}$ with some $c > 0$, $\nu \in (0,1)$, and $y_0 = o_P(n^{\nu/2})$. Under assumptions (A1), (A2)*, and (A3)*, Oh et al. [12] considered the mildly explosive process (7) and obtained Lemmas 1 and 2 and Theorem 1, which extended Theorem 4.3 of Phillips and Magdalinos [2] from independent errors to geometrically α-mixing errors. In order to weaken geometric α-mixing, we use the inequalities of Doukhan and Louhichi [26] and Yang [27] to re-prove the key Lemmas 1 and 2. Thus, the mixing coefficients need to satisfy $\alpha(n) = O(n^{-(2+8/\delta)})$ for some $\delta > 0$. For details, please see the proofs of Lemmas 1 and 2 in Section 6. If the positive parameter δ coming from the moment condition $E|u_1|^{4+\delta} < \infty$ is large, then the mixing condition $\alpha(n) = O(n^{-(2+8/\delta)})$ is weak; similarly, if δ is small, then the mixing condition becomes strong. If $\delta = \delta_n \to 0$, then $\alpha(n)$ decays geometrically, and the condition $\nu \in \left(\frac{\delta}{1+\delta}, 1\right)$ in assumption (A3) becomes $\nu \in (0,1)$. Thus, we extend the results of Phillips and Magdalinos [2] and Oh et al. [12] to the arithmetically α-mixing case. In Section 3, we give some simulations for the LS estimator $\hat{\rho}_n$ in a mildly explosive process, which agree with Theorem 1. Meanwhile, the mildly explosive model is used to analyze data from the NASDAQ composite index from April 2011 to April 2021 in Section 4.
Remark 2.
For some $c > 0$, $\nu \in \left(\frac{\delta}{1+\delta}, 1\right)$, and $\delta > 0$, we take $\rho_n = 1 + c/n^{\nu}$ in (14) and obtain
$$\frac{n^{\nu}\rho_n^{n}}{2c}(\hat{\rho}_n - \rho_n) = \frac{\rho_n^{n}}{2(\rho_n - 1)}(\hat{\rho}_n - \rho_n) \to_d \mathcal{C}$$
and
$$\frac{\rho_n^{n}}{\rho_n^{2}-1}(\hat{\rho}_n - \rho_n) \to_d \mathcal{C},$$
where we use the fact that $\rho_n^{2} - 1 = (c/n^{\nu} + 2)\,c/n^{\nu} \sim 2c/n^{\nu} = 2(\rho_n - 1)$. Here, $a_n \sim b_n$ means $a_n/b_n \to 1$ as $n \to \infty$. Moreover, by Proposition A.1 of Phillips and Magdalinos [2], we have $\rho_n^{-n}n^{1-\nu} = o(1)$. Combining this with $\frac{n^{\nu}\rho_n^{n}}{2c}(\hat{\rho}_n - \rho_n) = O_P(1)$, we have $\hat{\rho}_n/\rho_n \to_P 1$ and $(\hat{\rho}_n/\rho_n)^{n} \to_P 1$ (or see Oh et al. [12]). Let $0 < \alpha < 1$ be the significance level. Then, as in Phillips et al. [25], (14) in Theorem 1 suggests that a $100(1-\alpha)\%$ confidence interval for $\rho_n$ can be constructed as
$$\left[\hat{\rho}_n - \frac{\hat{\rho}_n^{2}-1}{\hat{\rho}_n^{n}}C_{\alpha},\ \hat{\rho}_n + \frac{\hat{\rho}_n^{2}-1}{\hat{\rho}_n^{n}}C_{\alpha}\right] := [\hat{\rho}_n^{L},\ \hat{\rho}_n^{U}],$$
where $\hat{\rho}_n^{L}$ and $\hat{\rho}_n^{U}$ are the lower and upper bounds for $\rho_n$, respectively, and $C_{\alpha}$ is the two-tailed α critical value of the standard Cauchy distribution. For example, $C_{0.1} \approx 6.314$, $C_{0.05} \approx 12.706$, and $C_{0.01} \approx 63.657$.
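The interval (17) is straightforward to compute; the following sketch (our own illustration, with `cauchy_ci` a hypothetical helper name) obtains $C_{\alpha}$ from the standard Cauchy quantile function $\tan(\pi(p - 1/2))$:

```python
import numpy as np

def cauchy_ci(rho_hat, n, alpha=0.05):
    """100(1 - alpha)% confidence interval for rho_n as in (17):
    rho_hat -/+ (rho_hat^2 - 1) / rho_hat^n * C_alpha."""
    # Two-tailed alpha critical value of the standard Cauchy:
    # the quantile at 1 - alpha/2 is tan(pi * (0.5 - alpha/2)).
    c_alpha = np.tan(np.pi * (0.5 - alpha / 2))
    half_width = (rho_hat**2 - 1) / rho_hat**n * c_alpha
    return rho_hat - half_width, rho_hat + half_width

# Example with illustrative values: the half-width shrinks at rate rho_hat^{-n}.
lo, up = cauchy_ci(rho_hat=1.05, n=500, alpha=0.05)
print(lo, up)
```

Note that the interval involves no variance estimate: the statistic in (16) is self-normalizing, so $\sigma^{2}$ cancels from the limit.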

3. Simulations

In this section, we conduct some simulations to evaluate the LS estimator $\hat{\rho}_n$ defined by (2). The experimental data $\{y_t, t \ge 1\}$ are a realization of the first-order autoregressive model
$$y_t = \rho_n y_{t-1} + u_t, \quad t = 1, \ldots, n,$$
where $y_0 = 0$, $\rho_n = 1 + c/n^{\nu}$ for some $c > 0$, $\nu \in \left(\frac{\delta}{1+\delta}, 1\right)$, and $\delta > 0$. In addition, $u_1, u_2, \ldots$ are mean-zero random errors. Let the error vector $(u_1, \ldots, u_n)$ follow the Gaussian model
$$(u_1, \ldots, u_n) =_d N_n(\mathbf{0}, \Sigma_n).$$
Here, $\mathbf{0} = 0_{n \times 1}$, and $\Sigma_n$ is the covariance matrix
$$\Sigma_n = \big(\xi^{|i-j|}\big)_{1 \le i, j \le n}$$
for some $|\xi| < 1$; $\Sigma_n$ is a symmetric positive definite matrix. Moreover, it is easy to check that the sequence $\{u_n, n \ge 1\}$ is geometrically $\alpha$-mixing. Since all moments of a Gaussian random variable are finite, Remark 1 implies that the range $\nu \in \left(\frac{\delta}{1+\delta}, 1\right)$ becomes $\nu \in (0,1)$.
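Because $\Sigma_n = (\xi^{|i-j|})$ is exactly the covariance matrix of a stationary Gaussian AR(1) process, the error vector can be drawn recursively in $O(n)$ rather than by factorizing the full $n \times n$ matrix. A sketch of this (our own, not from the paper):

```python
import numpy as np

def ar1_gaussian_errors(n, xi, rng):
    """Draw (u_1, ..., u_n) ~ N_n(0, Sigma_n) with Sigma_n = (xi^{|i-j|}):
    a stationary Gaussian AR(1), generated recursively in O(n)."""
    u = np.empty(n)
    u[0] = rng.standard_normal()          # stationary start: Var(u_1) = 1
    s = np.sqrt(1 - xi**2)                # innovation scale keeping unit variance
    e = rng.standard_normal(n)
    for t in range(1, n):
        u[t] = xi * u[t - 1] + s * e[t]   # gives Cov(u_i, u_j) = xi^{|i-j|}
    return u

rng = np.random.default_rng(1)
u = ar1_gaussian_errors(5000, 0.3, rng)
# Sample variance should be near 1 and lag-1 autocorrelation near xi = 0.3.
print(u.var(), np.corrcoef(u[:-1], u[1:])[0, 1])
```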
Firstly, we show the simulation of the empirical probability of the confidence interval (CI) for $\rho_n$ defined by (17). We consider the following parameter settings:
$$n \in \{100, 500, 1000\}, \quad c \in \{0.5, 1\}, \quad \nu \in \{0.5, 0.6, 0.7, 0.8\}, \quad \xi \in \{-0.3, 0.3\}.$$
The number of replications is always set at 10,000, and the significance level is 0.05. Let $I(\cdot)$ be the indicator function. Applying (17), we calculate the empirical coverage probability of the true value $\rho_n$, i.e.,
$$\frac{1}{10000}\sum_{l=1}^{10000} I\big(\hat{\rho}_n^{L}(l) \le \rho_n \le \hat{\rho}_n^{U}(l)\big),$$
where $\hat{\rho}_n^{L}(l)$ and $\hat{\rho}_n^{U}(l)$ are the two CI bounds for $\rho_n$ in the $l$-th replication. The results are shown in Table 1 and Table 2.
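The coverage experiment can be reproduced along the following lines (an illustrative sketch with far fewer replications than the paper's 10,000, and one fixed parameter setting):

```python
import numpy as np

rng = np.random.default_rng(2)
n, c, nu, xi, alpha = 500, 1.0, 0.5, 0.3, 0.05
rho_n = 1 + c / n**nu
c_alpha = np.tan(np.pi * (0.5 - alpha / 2))  # two-tailed Cauchy critical value

reps, hits = 500, 0  # the paper uses 10,000 replications; fewer here for speed
for _ in range(reps):
    # Gaussian AR(1) errors with Cov(u_i, u_j) = xi^{|i-j|}
    e = rng.standard_normal(n)
    u = np.empty(n)
    u[0] = e[0]
    for t in range(1, n):
        u[t] = xi * u[t - 1] + np.sqrt(1 - xi**2) * e[t]
    # mildly explosive path y_t = rho_n * y_{t-1} + u_t, y_0 = 0
    y = np.empty(n + 1)
    y[0] = 0.0
    for t in range(1, n + 1):
        y[t] = rho_n * y[t - 1] + u[t - 1]
    # LS estimate and the CI (17)
    rho_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
    half = (rho_hat**2 - 1) / rho_hat**n * c_alpha
    hits += (rho_hat - half <= rho_n <= rho_hat + half)

print(hits / reps)  # empirical coverage, expected near the nominal 0.95
```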
From Table 1 and Table 2, we see that the CIs under $\xi = -0.3$ performed relatively better than those under $\xi = 0.3$. A possible reason is that the volatility of $\Sigma_n$ with $\xi = 0.3$ is relatively larger than that with $\xi = -0.3$. The CIs had good finite sample performance when $c$ was relatively large and $\nu$ was between 0.5 and 0.7. When $c = 1$, the empirical probability was close to the nominal probability of 95%.
Next, by (15), (16), $\hat{\rho}_n/\rho_n \to_P 1$, and $(\hat{\rho}_n/\rho_n)^{n} \to_P 1$, we give some histograms to illustrate
$$\frac{\rho_n^{n}}{\rho_n^{2}-1}(\hat{\rho}_n - \rho_n) \to_d \mathcal{C} \quad \text{and} \quad \frac{\hat{\rho}_n^{n}}{\hat{\rho}_n^{2}-1}(\hat{\rho}_n - \rho_n) \to_d \mathcal{C},$$
where $\mathcal{C}$ is a standard Cauchy random variable. We consider the following parameter settings:
$$n \in \{500, 1000, 1500\}, \quad c = 0.5, \quad \nu = 0.5, \quad \xi \in \{-0.3, 0.3\}.$$
The red line in Figure 1, Figure 2, Figure 3 and Figure 4 is the density of the standard Cauchy random variable.
According to Figure 1, Figure 2, Figure 3 and Figure 4, the histograms of $\frac{\rho_n^{n}}{\rho_n^{2}-1}(\hat{\rho}_n - \rho_n)$ and $\frac{\hat{\rho}_n^{n}}{\hat{\rho}_n^{2}-1}(\hat{\rho}_n - \rho_n)$ under $\xi = -0.3$ matched the limit relatively better than those under $\xi = 0.3$ (the volatility of $\Sigma_n$ with $\xi = 0.3$ is relatively larger than that with $\xi = -0.3$). As the sample size $n$ increased, the histograms approached the red line of the standard Cauchy density. Thus, the results in Figure 1, Figure 2, Figure 3 and Figure 4 and Table 1 and Table 2 agree with (14) in Theorem 1. Since the histograms for other values of $c$ and $\nu$ were similar, we omit them here.

4. Real Data Analysis

In this section, we use the mildly explosive model (7) and the confidence interval (17) to study the NASDAQ composite index during an inflation period. Similar to Phillips et al. [25] and Lui et al. [21], we consider the log-NASDAQ composite index for the period from April 2011 to April 2021, which contains 2522 observations denoted by $y_t = \log(P_t)$, $1 \le t \le n = 2522$. In addition, we let $P_0 = 1$ and $y_0 = \log(P_0) = 0$. The scatter plots of $\{y_t\}$ are shown in Figure 5; the process $\{y_t\}$ is increasing. We then used the Augmented Dickey–Fuller test (ADF test, see [28]) to conduct a unit root test. The ADF statistic was −3.069 with lag order 1, and the p-value was 0.1257. This means that the process $\{y_t\}$ is nonstationary. Thus, the mildly explosive model $y_t = \rho_n y_{t-1} + u_t$ was used to fit $\{y_t\}$. We let $\hat{u}_t = y_t - \hat{\rho}_n y_{t-1}$ be the residuals for the errors $u_t$, $1 \le t \le n$, where $\hat{\rho}_n$ is the LS estimator of $\rho_n$ defined by (2). The residuals' autocorrelation function (ACF) of $\hat{u}_1, \ldots, \hat{u}_n$ is shown in Figure 6.
According to Figure 6, the autocorrelation coefficients of the residuals were around 0 as the lag increased, which is consistent with $\alpha$-mixing data. The curves of the LS estimator $\hat{\rho}_n$ defined by (2), the lower bound $\hat{\rho}_n^{L} = \hat{\rho}_n - \frac{\hat{\rho}_n^{2}-1}{\hat{\rho}_n^{n}}C_{\alpha}$, and the upper bound $\hat{\rho}_n^{U} = \hat{\rho}_n + \frac{\hat{\rho}_n^{2}-1}{\hat{\rho}_n^{n}}C_{\alpha}$ for $\rho_n$ are shown in Figure 7, where $C_{\alpha}$, defined by (17), is the two-tailed critical value of the standard Cauchy distribution at significance level α. With $\alpha = 0.05$, the curves of $\hat{\rho}_n$, $\hat{\rho}_n^{L}$, and $\hat{\rho}_n^{U}$ are presented in Figure 7.
According to Figure 7, the values of $\hat{\rho}_n$ approached 1 as the sample size $n$ increased, while the lower bound $\hat{\rho}_n^{L}$ and upper bound $\hat{\rho}_n^{U}$ stayed around 1. In addition, by (17), we let $\hat{y}_t^{L} = \hat{\rho}_n^{L} y_{t-1}$ and $\hat{y}_t^{U} = \hat{\rho}_n^{U} y_{t-1}$, $t = 2, \ldots, n$. The curves of $\hat{y}_t^{L}$ and $\hat{y}_t^{U}$ are also shown in Figure 5. According to Figure 5, the curve $y_t$ lies between the curves $\hat{y}_t^{L}$ and $\hat{y}_t^{U}$, and the band between them is very narrow. Furthermore, the period from April 2011 to April 2021 was a takeoff period for technology stocks and a period of rapidly increasing U.S. Treasury yields. Thus, the mildly explosive model and the Cauchy limiting distribution of the LS estimator in Theorem 1 fit these real data well.

5. Conclusions and Discussion

The study of the mildly explosive process has received much attention from researchers, as it can be used to test for explosive behavior in economic growth. Phillips and Magdalinos [2] considered the mildly explosive process (7) based on independent errors and obtained the Cauchy limiting distribution of the LS estimator $\hat{\rho}_n$ of $\rho_n$. Oh et al. [12] extended Phillips and Magdalinos [2] to geometrically $\alpha$-mixing errors. Obviously, the assumption of geometric $\alpha$-mixing is very strong. Thus, we considered the mildly explosive process based on arithmetically $\alpha$-mixing errors. Under the mixing coefficient condition $\alpha(n) = O(n^{-(2+8/\delta)})$ for some $\delta > 0$, we re-proved the key Lemmas 1 and 2. Consequently, the Cauchy limiting distribution for $\rho_n$ in Theorem 1 also holds true. In order to illustrate this main result, some simulations of the empirical coverage probability of the confidence interval and the empirical density for $\rho_n$ were presented in Section 3, showing good finite sample performance. As an application, we used the mildly explosive process to analyze real data from the NASDAQ composite index from April 2011 to April 2021, a takeoff period for technology stocks and a period of rapidly increasing U.S. Treasury yields. Moreover, it is of interest to study the random walk process, near-stationary process, mildly explosive process, and explosive process under heteroskedastic errors (see Arvanitis and Magdalinos [20]), anti-persistent errors (see Lui et al. [21]), and other mixing dependent data in the future.

6. Proofs of Main Results

Lemma 3.
([13], Corollary A.2). Suppose that ξ and η are random variables that are $\mathcal{G}$- and $\mathcal{H}$-measurable, respectively, and that $E|\xi|^{p} < \infty$, $E|\eta|^{q} < \infty$, where $p, q > 1$ and $\frac{1}{p} + \frac{1}{q} < 1$. Then,
$$|E\xi\eta - E\xi E\eta| \le 8\,(E|\xi|^{p})^{1/p}(E|\eta|^{q})^{1/q}\,\big(\alpha(\mathcal{G}, \mathcal{H})\big)^{1 - \frac{1}{p} - \frac{1}{q}}.$$
Lemma 4.
([27], Lemma 3.2). Let $\{X_n\}_{n \ge 1}$ be an α-mixing sequence. Suppose that p and q are two positive integers. Set $\eta_l = \sum_{j=(l-1)(p+q)+1}^{(l-1)(p+q)+p} X_j$ for $1 \le l \le k$. If $s > 0$, $r > 0$ with $\frac{1}{s} + \frac{1}{r} = 1$, then there exists a constant C such that
$$\Big|E\exp\Big\{it\sum_{l=1}^{k}\eta_l\Big\} - \prod_{l=1}^{k}E\exp\{it\eta_l\}\Big| \le C|t|\,\alpha^{1/s}(q)\sum_{l=1}^{k}\big(E|\eta_l|^{r}\big)^{1/r}.$$
Lemma 5.
([27], Lemma 3.3). Let $\{X_n\}_{n \ge 1}$ be a mean-zero α-mixing sequence and let $r > 2$. If there exist $\tau > 0$ and $\lambda > \frac{r(r+\tau)}{2\tau}$ such that $\alpha(n) = O(n^{-\lambda})$ and $E|X_i|^{r+\tau} < \infty$, then, for any given $\varepsilon > 0$,
$$E\Big|\sum_{i=1}^{n}X_i\Big|^{r} \le C\,n^{\varepsilon}\Bigg(\sum_{i=1}^{n}E|X_i|^{r} + \Big(\sum_{i=1}^{n}\big(E|X_i|^{r+\tau}\big)^{\frac{2}{r+\tau}}\Big)^{\frac{r}{2}}\Bigg), \quad n \ge 1,$$
where $C := C(r, \tau, \lambda, \varepsilon)$ is a positive constant not depending on n.
Proof of Lemma 1.
It is seen that
$$E\Big|\frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n}\sum_{j=t}^{n}\rho_n^{t-j-1}u_j u_t\Big|^{2} \le \frac{\rho_n^{-2n}}{n^{2\nu}}\sum_{t=1}^{n}\sum_{j=t}^{n}\sum_{s=1}^{n}\sum_{k=s}^{n}\rho_n^{-(j-t)-(k-s)-2}\,|Eu_t u_j u_s u_k| \le C_1\frac{\rho_n^{-2n}}{n^{2\nu}}\sum_{1 \le t_1 \le t_2 \le t_3 \le t_4 \le n}|Eu_{t_1}u_{t_2}u_{t_3}u_{t_4}|.$$
Similar to Doukhan and Louhichi [26], for any $q \ge 2$, we denote
$$A_q(n) = \sum_{1 \le t_1 \le \cdots \le t_q \le n}|Eu_{t_1}u_{t_2}\cdots u_{t_q}|$$
and
$$V_q(n) = \sum \big|\mathrm{Cov}(u_{t_1}\cdots u_{t_m},\, u_{t_{m+1}}\cdots u_{t_q})\big|,$$
where the sum is taken over $\{t_1, \ldots, t_q\}$ fulfilling $1 \le t_1 \le \cdots \le t_q \le n$ with $r = t_{m+1} - t_m = \max_{1 \le i < q}(t_{i+1} - t_i)$. Clearly,
$$A_q(n) \le \sum_{1 \le t_1 \le \cdots \le t_q \le n}|Eu_{t_1}\cdots u_{t_m}|\,|Eu_{t_{m+1}}\cdots u_{t_q}| + \sum_{1 \le t_1 \le \cdots \le t_q \le n}\big|\mathrm{Cov}(u_{t_1}\cdots u_{t_m},\, u_{t_{m+1}}\cdots u_{t_q})\big|.$$
The first term on the right-hand side of the last inequality in (24) is bounded by
$$\sum_{1 \le t_1 \le \cdots \le t_q \le n}|Eu_{t_1}\cdots u_{t_m}|\,|Eu_{t_{m+1}}\cdots u_{t_q}| \le \sum_{m=1}^{q-1}A_m(n)A_{q-m}(n)$$
(see [26]). Then, it follows from (24) and (25) that
$$A_q(n) \le \sum_{m=1}^{q-1}A_m(n)A_{q-m}(n) + V_q(n).$$
Assume that $E|u_1|^{\Delta} < \infty$ for some $\Delta > q$, $q \ge 2$, and $\alpha(n) = O\big(n^{-\frac{\Delta q}{2(\Delta - q)}}\big)$. Then, by Lemma 3, we have
$$\big|\mathrm{Cov}(u_{t_1}\cdots u_{t_m},\, u_{t_{m+1}}\cdots u_{t_q})\big| \le 8\,\alpha^{1-\frac{q}{\Delta}}(r)\,\big[E|u_{t_1}\cdots u_{t_m}|^{\frac{\Delta}{m}}\big]^{\frac{m}{\Delta}}\big[E|u_{t_{m+1}}\cdots u_{t_q}|^{\frac{\Delta}{q-m}}\big]^{\frac{q-m}{\Delta}}.$$
Using the Hölder inequality, we have
$$\big[E|u_{t_1}\cdots u_{t_m}|^{\frac{\Delta}{m}}\big]^{\frac{m}{\Delta}} \le \big[E|u_{t_1}|^{\Delta}\big]^{\frac{1}{\Delta}}\big[E|u_{t_2}\cdots u_{t_m}|^{\frac{\Delta}{m-1}}\big]^{\frac{m-1}{\Delta}} \le \cdots \le \big[E|u_1|^{\Delta}\big]^{\frac{m}{\Delta}}.$$
Consequently, by (27) and (28),
$$\big|\mathrm{Cov}(u_{t_1}\cdots u_{t_m},\, u_{t_{m+1}}\cdots u_{t_q})\big| \le 8\,\alpha^{1-\frac{q}{\Delta}}(r)\,\big[E|u_1|^{\Delta}\big]^{\frac{q}{\Delta}},$$
which implies
$$M_{r,q} \equiv \sup\big|\mathrm{Cov}(u_{t_1}\cdots u_{t_m},\, u_{t_{m+1}}\cdots u_{t_q})\big| \le 8\,\alpha^{1-\frac{q}{\Delta}}(r)\,\big[E|u_1|^{\Delta}\big]^{\frac{q}{\Delta}},$$
where the supremum is taken over all $1 \le t_1 \le \cdots \le t_q$ and $1 \le m < q$ with $t_{m+1} - t_m \ge r$. By the assumption $\alpha(n) = O\big(n^{-\frac{\Delta q}{2(\Delta - q)}}\big)$, we have
$$M_{r,q} = O\Big(r^{-\frac{\Delta q}{2(\Delta - q)}\left(1-\frac{q}{\Delta}\right)}\Big) = O\Big(r^{-\frac{\Delta q}{2(\Delta - q)}\cdot\frac{\Delta - q}{\Delta}}\Big) = O\big(r^{-q/2}\big).$$
In addition, by (31), we have
$$V_q(n) \le \sum_{t_1=1}^{n}\sum_{r=1}^{n-1}(r+1)^{q-2}M_{r,q} \le C_1\sum_{t_1=1}^{n}\sum_{r=1}^{n-1}r^{q-2-q/2} \le C_2\, n^{q/2}.$$
Thus, by (22)–(24), (26), and (32), we obtain
$$A_q(n) \le \sum_{m=1}^{q-1}A_m(n)A_{q-m}(n) + V_q(n) \le C_4\, n^{q/2}.$$
Since $E|u_1|^{4+\delta} < \infty$ for some $\delta > 0$, $\alpha(n) = O(n^{-(2+8/\delta)})$, and $\{u_n, n \ge 1\}$ is stationary, the conditions $E|u_1|^{\Delta} < \infty$, $\Delta > q$, $q \ge 2$, and $\alpha(n) = O\big(n^{-\frac{\Delta q}{2(\Delta - q)}}\big)$ for (33) are satisfied with $q = 4$, $\Delta = 4 + \delta$, and $\delta > 0$ (note that $\frac{\Delta q}{2(\Delta - q)} = \frac{4(4+\delta)}{2\delta} = 2 + 8/\delta$). Consequently, by (21) and (33), we obtain
$$E\Big|\frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n}\sum_{j=t}^{n}\rho_n^{t-j-1}u_j u_t\Big|^{2} \le C_1\frac{\rho_n^{-2n}}{n^{2\nu}}A_4(n) \le C_2\,\rho_n^{-2n}\frac{n^{2}}{n^{2\nu}} = o(1),$$
using the fact that $\rho_n^{-n}n^{1-\nu} = o(1)$, i.e., for each $c > 0$, $\rho_n^{-n} = o(n^{\nu-1})$ (see Proposition A.1 of [2]). The proof of (9) is complete. The proof of (10) is similar, so it is omitted. □
Proof of Lemma 2.
By the Cramér–Wold device, it is sufficient to show that
$$aX_n + bY_n \to_d N\Big(0,\ \frac{(a^{2}+b^{2})\sigma^{2}}{2c}\Big), \quad \forall\, a, b \in \mathbb{R},$$
where $X_n$ and $Y_n$ are defined by (8), and $\sigma^{2} > 0$ is defined by (12). Write
$$aX_n + bY_n = \frac{1}{n^{\nu/2}}\sum_{i=1}^{n}\xi_{ni},$$
where
$$\xi_{ni} = \big(a\rho_n^{-i} + b\rho_n^{-(n-i)-1}\big)u_i, \quad 1 \le i \le n.$$
Similar to Oh et al. [12], let $\{k_n\}$, $\{p_n\}$, and $\{q_n\}$ be sequences of positive integers. We split the sum $\sum_{i=1}^{n}\xi_{ni}$ into large blocks of length $p_n$ and small blocks of length $q_n$. Define $k_n \sim n^{1-\nu/2}$, $p_n \sim n^{\nu/2} - n^{\nu/4}$, and $q_n \sim n^{\nu/4}$. Then, $q_n/p_n \to 0$ and $k_n(p_n+q_n)/n \to 1$ as $n \to \infty$.
Denote
$$\frac{1}{n^{\nu/2}}\sum_{i=1}^{n}\xi_{ni} = \frac{1}{n^{\nu/2}}\sum_{m=1}^{k_n}y_{nm} + \frac{1}{n^{\nu/2}}\sum_{m=1}^{k_n}y_{nm}^{\prime} + \frac{1}{n^{\nu/2}}\,y_{nk_n}^{\prime\prime},$$
where
$$y_{nm} = \sum_{i=(m-1)(p_n+q_n)+1}^{(m-1)(p_n+q_n)+p_n}\xi_{ni}, \qquad y_{nm}^{\prime} = \sum_{j=(m-1)(p_n+q_n)+p_n+1}^{m(p_n+q_n)}\xi_{nj}, \quad 1 \le m \le k_n, \qquad y_{nk_n}^{\prime\prime} = \sum_{l=k_n(p_n+q_n)+1}^{n}\xi_{nl}.$$
Next, we will show that
$$\frac{1}{n^{\nu/2}}\sum_{m=1}^{k_n}y_{nm} \to_d N\Big(0,\ \frac{(a^{2}+b^{2})\sigma^{2}}{2c}\Big)$$
and
$$\frac{1}{n^{\nu/2}}\sum_{m=1}^{k_n}y_{nm}^{\prime} \to_P 0, \qquad \frac{1}{n^{\nu/2}}\,y_{nk_n}^{\prime\prime} \to_P 0.$$
Since $E|u_1|^{4+\delta} < \infty$ and $\alpha(n) = O(n^{-(2+8/\delta)})$, we use Lemma 5 with $\varepsilon = 1$ and obtain
$$\sum_{m=1}^{k_n}\big(E|y_{nm}|^{4}\big)^{\frac14} \le C_1\sum_{m=1}^{k_n}\Bigg[p_n\Bigg(\sum_{i=(m-1)(p_n+q_n)+1}^{(m-1)(p_n+q_n)+p_n}E|\xi_{ni}|^{4} + \Big(\sum_{i=(m-1)(p_n+q_n)+1}^{(m-1)(p_n+q_n)+p_n}\big(E|\xi_{ni}|^{4+\delta}\big)^{\frac{2}{4+\delta}}\Big)^{2}\Bigg)\Bigg]^{\frac14} \le C_2\sum_{m=1}^{k_n}p_n^{1/2} = O\big(k_n p_n^{1/2}\big).$$
By condition (A3), (38), and Lemma 4 with $r = s = 2$, we obtain
$$\Big|E\exp\Big\{it\sum_{m=1}^{k_n}\frac{y_{nm}}{n^{\nu/2}}\Big\} - \prod_{m=1}^{k_n}E\exp\Big\{it\,\frac{y_{nm}}{n^{\nu/2}}\Big\}\Big| \le C_1|t|\,\alpha^{1/2}(q_n)\,\frac{1}{n^{\nu/2}}\sum_{m=1}^{k_n}\big(E|y_{nm}|^{2}\big)^{\frac12} \le C_1|t|\,\alpha^{1/2}(q_n)\,\frac{1}{n^{\nu/2}}\sum_{m=1}^{k_n}\big(E|y_{nm}|^{4}\big)^{\frac14} \le C_2\,\frac{1}{n^{\nu/2}}\,q_n^{-(2+8/\delta)/2}\,k_n p_n^{1/2} \le C_3\,\frac{1}{n^{\nu/2}}\,n^{-(1+4/\delta)\nu/4}\,n^{1-\nu/2}\,n^{\nu/4} \le C_4\, n^{1-\nu\left(1+\frac{1}{\delta}\right)} \to 0,$$
since $\nu > \frac{\delta}{1+\delta}$ implies $\nu\left(1+\frac{1}{\delta}\right) > 1$.
For $m = 1, 2, \ldots, k_n$, it can be checked that
$$E(y_{nm}^{2}) = E\Bigg(\sum_{i=(m-1)(p_n+q_n)+1}^{(m-1)(p_n+q_n)+p_n}\big(a\rho_n^{-i} + b\rho_n^{-(n-i)-1}\big)u_i\Bigg)^{2} = E\Bigg(\sum_{i=1}^{p_n}\big(a\rho_n^{-(m-1)(p_n+q_n)-i} + b\rho_n^{-(n-(m-1)(p_n+q_n)-i)-1}\big)u_{(m-1)(p_n+q_n)+i}\Bigg)^{2} = \sum_{j=-(p_n-1)}^{p_n-1}\sum_{i=1}^{p_n-|j|}(I_1 + I_2)\,Eu_0u_j,$$
where
$$I_1 = a^{2}\rho_n^{-2(m-1)(p_n+q_n)-2i-j} + b^{2}\rho_n^{-2(n-(m-1)(p_n+q_n)-i)+j-2}, \qquad I_2 = ab\,\rho_n^{-n-1}\big(\rho_n^{-j} + \rho_n^{j}\big).$$
Similar to Oh et al. [12], $\sum_{j=-(p_n-1)}^{p_n-1}\rho_n^{-j}Eu_0u_j \to \sigma^{2} > 0$, where $\sigma^{2}$ is defined by (12). Combining this with $\rho_n^{-n}n^{1-\nu} = o(1)$ and $k_n p_n \sim n$, we establish that
$$\sum_{m=1}^{k_n}E\Big(\frac{y_{nm}}{n^{\nu/2}}\Big)^{2} = \frac{1}{n^{\nu}}\sum_{m=1}^{k_n}\sum_{j=-(p_n-1)}^{p_n-1}\sum_{i=1}^{p_n-|j|}(I_1+I_2)\,Eu_0u_j = \sum_{m=1}^{k_n}\frac{a^{2}\rho_n^{-2(m-1)(p_n+q_n)+2}}{n^{\nu}(\rho_n^{2}-1)}\sum_{j=-(p_n-1)}^{p_n-1}\rho_n^{-j}\big(1-\rho_n^{-2(p_n-|j|)}\big)Eu_0u_j + \sum_{m=1}^{k_n}\frac{b^{2}\rho_n^{-2(n-(m-1)(p_n+q_n))+2}}{n^{\nu}(\rho_n^{2}-1)}\sum_{j=-(p_n-1)}^{p_n-1}\rho_n^{j}\big(\rho_n^{2(p_n-|j|)}-1\big)Eu_0u_j \to \frac{(a^{2}+b^{2})\sigma^{2}}{2c}.$$
Meanwhile, for all $\eta > 0$, by (38), (40), and the Markov inequality, we have
$$E\Big[\Big(\frac{y_{nm}}{n^{\nu/2}}\Big)^{2}I\Big(\Big|\frac{y_{nm}}{n^{\nu/2}}\Big| > \eta\Big)\Big] = \frac{1}{n^{\nu}}E\big[y_{nm}^{2}\,I\big(|y_{nm}|^{2} > \eta^{2}n^{\nu}\big)\big] \le \frac{1}{n^{\nu}}\big(Ey_{nm}^{4}\big)^{\frac12}\big[P\big(|y_{nm}|^{2} > \eta^{2}n^{\nu}\big)\big]^{\frac12} \le \frac{c\,p_n}{\eta^{2}n^{2\nu}}\,Ey_{nm}^{2} = O\big(p_n^{2}/n^{2\nu}\big),$$
which implies
$$\sum_{m=1}^{k_n}E\Big[\Big(\frac{y_{nm}}{n^{\nu/2}}\Big)^{2}I\Big(\Big|\frac{y_{nm}}{n^{\nu/2}}\Big| > \eta\Big)\Big] = O\big(k_n p_n^{2}/n^{2\nu}\big) = o(1).$$
Consequently, (36) follows from (39), (40), and (41) immediately.
On the other hand, similar to the proof of (36), the sum over the small blocks is asymptotically negligible: since $q_n/p_n \to 0$, the small blocks contribute only a vanishing fraction of the total variance, and hence
$$\frac{1}{n^{\nu/2}}\sum_{m=1}^{k_n}y_{nm}^{\prime} \to_P 0.$$
Furthermore, by the proof of (38) and $k_n(p_n+q_n)/n \to 1$, we obtain
$$E\Big(\frac{y_{nk_n}^{\prime\prime}}{n^{\nu/2}}\Big)^{2} = \frac{1}{n^{\nu}}E\big(y_{nk_n}^{\prime\prime}\big)^{2} \le \frac{1}{n^{\nu}}\Big(E\big(y_{nk_n}^{\prime\prime}\big)^{4}\Big)^{1/2} \le \frac{C}{n^{\nu}}\sum_{l=k_n(p_n+q_n)+1}^{n}\big(a\rho_n^{-l} + b\rho_n^{-(n-l)-1}\big)^{2} = o(1).$$
Thus, by (43) and (44), the proof of (37) is complete. Combining (35), (36) and (37), we obtain the result of (34). □
Proof of Theorem 1.
Similar to the proofs of Theorem 4.3 of Phillips and Magdalinos [2] and Theorem 1 of Oh et al. [12], by Lemmas 1 and 2, it is easy to obtain the results (13) and (14). We omit the details here. □

Author Contributions

Supervision, W.Y.; software, X.L. (Xian Liu) and M.G.; writing—original draft preparation, W.Y. and X.L. (Xiaoqin Li). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the NSF of Anhui Province (2008085MA14, 2108085MA06).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are deeply grateful to the editors and anonymous referees for their careful reading and insightful comments. The comments led us to significantly improve the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brockwell, P.J.; Davis, R.A. Time Series: Theory and Methods; Springer Series in Statistics; Springer: New York, NY, USA, 1991.
  2. Phillips, P.C.B.; Magdalinos, T. Limit theory for moderate deviations from a unit root. J. Econom. 2007, 136, 115–130.
  3. Dickey, D.A.; Fuller, W.A. Distribution of the estimators for autoregressive time series with a unit root. J. Am. Stat. Assoc. 1979, 74, 427–431.
  4. Wang, Q.Y.; Lin, Y.X.; Gulati, C.M. The invariance principle for linear processes with applications. Econom. Theory 2002, 18, 119–139.
  5. White, J.S. The limiting distribution of the serial correlation coefficient in the explosive case. Ann. Math. Stat. 1958, 29, 1188–1197.
  6. Anderson, T.W. On asymptotic distributions of estimates of parameters of stochastic difference equations. Ann. Math. Stat. 1959, 30, 676–687.
  7. Chan, N.H.; Wei, C.Z. Asymptotic inference for nearly nonstationary AR(1) processes. Ann. Stat. 1987, 15, 1050–1063.
  8. Buchmann, B.; Chan, N.H. Asymptotic theory of least squares estimators for nearly unstable processes under strong dependence. Ann. Stat. 2007, 35, 2001–2017.
  9. Phillips, P.C.B.; Magdalinos, T. Limit theory for moderate deviations from a unit root under weak dependence. In The Refinement of Econometric Estimation and Test Procedures: Finite Sample and Asymptotic Analysis; Phillips, G.D.A., Tzavalis, E., Eds.; Cambridge University Press: Cambridge, UK, 2007; pp. 123–162.
  10. Magdalinos, T. Mildly explosive autoregression under weak and strong dependence. J. Econom. 2012, 169, 179–187.
  11. Aue, A.; Horváth, L. A limit theorem for mildly explosive autoregression with stable errors. Econom. Theory 2007, 23, 201–220.
  12. Oh, H.; Lee, S.; Chan, N. Mildly explosive autoregression with mixing innovations. J. Korean Stat. Soc. 2018, 47, 41–53.
  13. Hall, P.; Heyde, C.C. Martingale Limit Theory and Its Application; Academic Press: New York, NY, USA, 1980.
  14. Györfi, L.; Härdle, W.; Sarda, P.; Vieu, P. Nonparametric Curve Estimation from Time Series; Springer: Berlin/Heidelberg, Germany, 1989.
  15. Lin, Z.Y.; Lu, C.R. Limit Theory for Mixing Dependent Random Variables; Science Press: Beijing, China, 1997.
  16. Fan, J.Q.; Yao, Q.W. Nonlinear Time Series: Nonparametric and Parametric Methods; Springer: Berlin/Heidelberg, Germany, 2003.
  17. Jinan, R.; Parag, P.; Tyagi, H. Tracking an Auto-Regressive Process with Limited Communication per Unit Time. Entropy 2021, 23, 347.
  18. Escudero, I.; Angulo, J.M.; Mateu, J. A Spatially Correlated Model with Generalized Autoregressive Conditionally Heteroskedastic Structure for Counts of Crimes. Entropy 2022, 24, 892.
  19. Li, H.Q.; Liu, X.H.; Chen, Y.T.; Fan, Y.W. Testing for Serial Correlation in Autoregressive Exogenous Models with Possible GARCH Errors. Entropy 2022, 24, 1076.
  20. Arvanitis, S.; Magdalinos, T. Mildly Explosive Autoregression Under Stationary Conditional Heteroskedasticity. J. Time Ser. Anal. 2018, 39, 892–908.
  21. Lui, Y.L.; Xiao, W.; Yu, J. Mildly Explosive Autoregression with Anti-persistent Errors. Oxf. Bull. Econ. Stat. 2021, 83, 518–539.
  22. Wang, X.H.; Yu, J. Limit theory for an explosive autoregressive process. Econ. Lett. 2015, 126, 176–180.
  23. Kim, T.Y.; Hwang, S.Y.; Oh, H. Explosive AR(1) process with independent but not identically distributed errors. J. Korean Stat. Soc. 2020, 49, 702–721.
  24. Magdalinos, T.; Phillips, P.C.B. Limit theory for cointegrated systems with moderately integrated and moderately explosive regressors. Econom. Theory 2009, 25, 482–526.
  25. Phillips, P.C.B.; Wu, Y.; Yu, J. Explosive behavior in the 1990s Nasdaq: When did exuberance escalate asset values? Int. Econ. Rev. 2011, 52, 201–226.
  26. Doukhan, P.; Louhichi, S. A new weak dependence condition and applications to moment inequalities. Stoch. Process. Appl. 1999, 84, 313–342.
  27. Yang, S.C.; Li, Y.M. Uniformly asymptotic normality of the regression weighted estimator for strong mixing samples. Acta Math. Sin. (Chin. Ser.) 2006, 49, 1163–1170.
  28. Cryer, J.D.; Chan, K.S. Time Series Analysis: With Applications in R, 2nd ed.; Springer Science+Business Media: New York, NY, USA, 2008.
Figure 1. Histograms of $\frac{\rho_n^n}{\rho_n^2-1}(\hat{\rho}_n-\rho_n)$ with $\xi = 0.3$ and $n = 500, 1000, 1500$.
Figure 2. Histograms of $\frac{\hat{\rho}_n^n}{\hat{\rho}_n^2-1}(\hat{\rho}_n-\rho_n)$ with $\xi = 0.3$ and $n = 500, 1000, 1500$.
Figure 3. Histograms of $\frac{\rho_n^n}{\rho_n^2-1}(\hat{\rho}_n-\rho_n)$ with $\xi = 0.3$ and $n = 500, 1000, 1500$.
Figure 4. Histograms of $\frac{\hat{\rho}_n^n}{\hat{\rho}_n^2-1}(\hat{\rho}_n-\rho_n)$ with $\xi = 0.3$ and $n = 500, 1000, 1500$.
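Histograms of this kind can be reproduced in outline by simulating the model from the abstract. The sketch below is illustrative only: it uses i.i.d. standard normal innovations in place of the strong mixing errors studied in the paper, and the function name and default arguments are our own choices rather than anything specified by the authors.

```python
import numpy as np

def simulate_statistic(n, c=0.5, nu=0.6, reps=2000, seed=0):
    """Draw `reps` replications of the normalized LS statistic
    rho_n^n / (rho_n^2 - 1) * (rho_hat_n - rho_n) for the mildly
    explosive AR(1) model y_t = rho_n * y_{t-1} + u_t."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / n**nu              # rho_n = 1 + c / n^nu
    stats = np.empty(reps)
    for r in range(reps):
        u = rng.standard_normal(n)     # i.i.d. stand-in for mixing errors
        y = np.zeros(n + 1)            # start from y_0 = 0
        for t in range(1, n + 1):
            y[t] = rho * y[t - 1] + u[t - 1]
        # least squares estimator: sum y_t y_{t-1} / sum y_{t-1}^2
        rho_hat = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])
        stats[r] = rho**n / (rho**2 - 1.0) * (rho_hat - rho)
    return stats
```

A histogram of `simulate_statistic(500)` should resemble a standard Cauchy density, which is the content of the limit theorem the figures illustrate.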
Figure 5. Time series for the log-NASDAQ composite index from April 2011 to April 2021.
Figure 6. The autocorrelation coefficients based on the residuals.
Figure 7. The estimators $\hat{\rho}_n$, $\hat{\rho}_n^L$, and $\hat{\rho}_n^U$ for $\rho_n$.
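Figure 7 plots the point estimate together with lower and upper confidence limits. Under the Cauchy limit theory, a two-sided confidence interval for $\rho_n$ can be formed by inverting the self-normalized statistic. The helper below is a hedged sketch, not the authors' code: the function name is ours, and the input is assumed to be an observed sample $y_0, \dots, y_n$.

```python
import numpy as np

def ls_estimate_with_ci(y, level=0.95):
    """Return (lower, estimate, upper) for rho_n from a series y_0,...,y_n,
    using the standard Cauchy quantile tan(pi * level / 2) to invert
    rho_hat^n / (rho_hat^2 - 1) * (rho_hat - rho_n) ~ Cauchy."""
    y = np.asarray(y, dtype=float)
    n = len(y) - 1
    rho_hat = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])
    q = np.tan(np.pi * level / 2.0)            # ~12.706 for level = 0.95
    half = abs(rho_hat**2 - 1.0) / abs(rho_hat)**n * q
    return rho_hat - half, rho_hat, rho_hat + half
```

Applied over windows of a series such as the log-NASDAQ index, the three returned values trace out curves of the kind shown as $\hat{\rho}_n^L$, $\hat{\rho}_n$, and $\hat{\rho}_n^U$.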
Table 1. Empirical probability of the 95% CI of $\rho_n$ with $\xi = 0.3$.

| $\nu$ | n = 100, c = 0.5 | n = 100, c = 1 | n = 500, c = 0.5 | n = 500, c = 1 | n = 1000, c = 0.5 | n = 1000, c = 1 |
|-------|------------------|----------------|------------------|----------------|-------------------|-----------------|
| 0.5   | 0.9501           | 0.9777         | 0.9629           | 0.9721         | 0.9611            | 0.9733          |
| 0.6   | 0.9405           | 0.9681         | 0.9508           | 0.9637         | 0.9582            | 0.9748          |
| 0.7   | 0.9433           | 0.9687         | 0.9464           | 0.9613         | 0.9248            | 0.9493          |
| 0.8   | 0.9464           | 0.9481         | 0.9323           | 0.9394         | 0.9208            | 0.9293          |
Table 2. Empirical probability of the 95% CI of $\rho_n$ with $\xi = 0.3$.

| $\nu$ | n = 100, c = 0.5 | n = 100, c = 1 | n = 500, c = 0.5 | n = 500, c = 1 | n = 1000, c = 0.5 | n = 1000, c = 1 |
|-------|------------------|----------------|------------------|----------------|-------------------|-----------------|
| 0.5   | 0.9472           | 0.9541         | 0.9572           | 0.9624         | 0.9536            | 0.9571          |
| 0.6   | 0.9371           | 0.9592         | 0.9569           | 0.9576         | 0.9522            | 0.9587          |
| 0.7   | 0.9724           | 0.9498         | 0.9476           | 0.9554         | 0.9619            | 0.9515          |
| 0.8   | 0.9972           | 0.9721         | 0.9919           | 0.9475         | 0.9961            | 0.9536          |
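The coverage probabilities reported in Tables 1 and 2 can be approximated by Monte Carlo. The following sketch is our own illustration, with i.i.d. N(0,1) errors standing in for the mixing errors of the paper; it computes the empirical coverage of the Cauchy-based 95% CI for a single $(n, c, \nu)$ cell.

```python
import numpy as np

def empirical_coverage(n, c=0.5, nu=0.5, level=0.95, reps=1000, seed=1):
    """Fraction of replications in which the Cauchy-based interval
    rho_hat +/- (rho_hat^2 - 1) / rho_hat^n * q covers the true rho_n."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / n**nu
    q = np.tan(np.pi * level / 2.0)     # 97.5% standard Cauchy quantile when level = 0.95
    hits = 0
    for _ in range(reps):
        u = rng.standard_normal(n)
        y = np.zeros(n + 1)
        for t in range(1, n + 1):
            y[t] = rho * y[t - 1] + u[t - 1]
        rho_hat = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])
        half = abs(rho_hat**2 - 1.0) / abs(rho_hat)**n * q
        hits += abs(rho_hat - rho) <= half
    return hits / reps
```

For moderate sample sizes, `empirical_coverage(500, c=0.5, nu=0.5)` should come out close to the nominal 0.95, consistent with the corresponding cells of the tables.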
Liu, X.; Li, X.; Gao, M.; Yang, W. Mildly Explosive Autoregression with Strong Mixing Errors. Entropy 2022, 24, 1730. https://doi.org/10.3390/e24121730