Abstract
The invariance principle and Berry–Esseen bound for an error variance estimator based on the residuals are established by using a Taylor expansion and the classical invariance principle and Berry–Esseen bound for independent random variables. Some examples are given to illustrate their applications.
Keywords:
nonlinear autoregressive models; invariance principle; Berry–Esseen bound; error variance estimator; residuals MSC:
60F17; 60F05
1. Introduction and Main Results
Let $\{X_k, k \ge 1-p\}$ be a sequence of strictly stationary random variables that satisfies the following nonlinear autoregressive model of order $p$ (NAR($p$) for short):
$$X_k = f_\theta(X_{k-1}, X_{k-2}, \ldots, X_{k-p}) + \varepsilon_k, \quad k \ge 1, \tag{1}$$
where $\{f_\theta(\cdot), \theta \in \Theta\}$ is a collection of known measurable functions indexed by the unknown parameter $\theta$, and $\{\varepsilon_k, k \ge 1\}$ is a collection of mean-zero i.i.d. random variables with finite variance $\sigma^2 > 0$.
In recent years, an increasing number of scholars have studied the error sequence in Model (1). One research direction is error density estimation; some classic results can be found in Liebscher [1], Cheng and Sun [2], Fu and Yang [3], Cheng [4], Hilgert and Portier [5], Kim et al. [6], Li [7], Cheng [8], Liu and Zhang [9], and Wu et al. [10]. Another research direction is error variance estimation, for which there are fewer results. Cheng [11] and Liang and Zhang [12] discussed the asymptotic normality and the almost sure central limit theorem, respectively, for the error variance estimator in NAR(p) models with i.i.d. errors.
The invariance principle can be viewed as a functional central limit theorem, which is stronger than the classical central limit theorem. By combining the invariance principle with the continuous mapping theorem for convergence in distribution, one can obtain the asymptotic distribution of various statistics. The Berry–Esseen bound quantifies the rate at which the distribution function of the normalized partial sum of the first $n$ terms of a sequence of random variables converges to its limiting distribution. Both tools have a wide range of applications in disciplines such as mathematical statistics, econometrics, and financial mathematics.
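For reference, the classical Berry–Esseen theorem for i.i.d. summands can be recalled as follows (a standard statement, added here for orientation rather than quoted from this paper):

```latex
% Classical Berry–Esseen bound: X_1, ..., X_n i.i.d. with E X_1 = 0,
% Var(X_1) = sigma^2 > 0 and E|X_1|^3 < infinity.
\begin{equation*}
\sup_{x \in \mathbb{R}}
\left| P\!\left( \frac{1}{\sigma\sqrt{n}} \sum_{k=1}^{n} X_k \le x \right)
      - \Phi(x) \right|
\le \frac{C \, E|X_1|^3}{\sigma^3 \sqrt{n}},
\end{equation*}
```

where $C$ is an absolute constant and $\Phi$ is the standard normal distribution function; the results for the residual-based variance estimator in this paper are bounds of the same flavor.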
The purpose of this article is to establish the invariance principle and the Berry–Esseen bound for the estimator of the error variance $\sigma^2$ by using the observations from Model (1). The main inconvenience is that the error sequence $\{\varepsilon_k\}$ cannot be observed directly, so Taylor's expansion and some technical arguments are needed. Firstly, we provide an estimator $\hat{\theta}_n$ of $\theta$. Secondly, on the basis of $\hat{\theta}_n$ and Model (1), the residuals are defined as
$$\hat{\varepsilon}_k = X_k - f_{\hat{\theta}_n}(X_{k-1}, \ldots, X_{k-p}), \quad k = 1, \ldots, n. \tag{2}$$
Finally, by (2), the estimator of $\sigma^2$ is given by
$$\hat{\sigma}_n^2 = \frac{1}{n} \sum_{k=1}^{n} \hat{\varepsilon}_k^2. \tag{3}$$
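As a numerical illustration of this three-step construction, consider the following minimal sketch (not the authors' code: the autoregression function $f_\theta(x) = \theta \tanh(x)$, the parameter values, and the grid-search estimator of $\theta$ are all hypothetical choices made for the example):

```python
import numpy as np

# Hypothetical NAR(1) model X_k = theta * tanh(X_{k-1}) + eps_k,
# simulated to illustrate the residual-based variance estimator.
rng = np.random.default_rng(0)
theta_true, sigma_true, n = 0.6, 1.0, 50_000

eps = rng.normal(0.0, sigma_true, size=n)
x = np.empty(n + 1)
x[0] = 0.0
for k in range(n):
    x[k + 1] = theta_true * np.tanh(x[k]) + eps[k]

# Step 1: estimate theta (least squares over a coarse grid, for simplicity).
grid = np.linspace(-0.99, 0.99, 199)
sse = [np.sum((x[1:] - t * np.tanh(x[:-1])) ** 2) for t in grid]
theta_hat = grid[int(np.argmin(sse))]

# Step 2: form the residuals from the fitted model, as in (2).
resid = x[1:] - theta_hat * np.tanh(x[:-1])

# Step 3: the error variance estimator is the mean squared residual, as in (3).
sigma2_hat = np.mean(resid ** 2)
print(theta_hat, sigma2_hat)
```

With a large sample, both estimates should be close to the true values 0.6 and 1.0.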
The following assumptions on Model (1) are needed throughout this paper.
Assumption 1A.
For arbitrary , , suppose that
where and for every , .
Assumption 1B.
where $\hat{\theta}_n$ is a strongly consistent estimator of θ and is a constant.
Remark 1.
Assumption 1B can be satisfied for some models. Please see the following corollaries for details.
We now state the first result, concerning the invariance principle for the error variance estimator.
Theorem 1.
The second result is the Berry–Esseen bound for the error variance estimator.
Theorem 2.
For Model (1), under Assumptions 1A and 1B, if , and , for , we have
where $\Phi$ stands for the distribution function of the standard normal distribution.
Remark 2.
Theorem 1 generalizes Theorem 2.2 of Cheng [11] from the central limit theorem to the invariance principle for the error variance estimator.
Some typical examples are now given to illustrate the invariance principle and Berry–Esseen bound for the error variance estimator in some special NAR(p) models. The first one is a degenerate case, concerning AR(1) models.
Example 1.
Consider the following AR(1) model:
$$X_k = \theta X_{k-1} + \varepsilon_k, \quad k \ge 1,$$
where $|\theta| < 1$.
Assumption 1A obviously holds for the AR(1) model. Assumption 1B can be satisfied for the least squares estimator by Theorem 1 of Wang et al. [13] under the condition . Thus, Theorems 1 and 2 hold for the AR(1) model. We summarize this as the following corollary:
Corollary 1.
For the AR(1) model, if for some and arbitrary , one can obtain
and
where .
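Corollary 1 can be checked by simulation. The following is a minimal sketch under illustrative assumptions (standard normal errors, $\theta = 0.5$); the closed-form least squares estimator of $\theta$ is the classical one for AR(1):

```python
import numpy as np

# Simulation sketch for the AR(1) case X_k = theta * X_{k-1} + eps_k
# with |theta| < 1 and i.i.d. standard normal errors.
rng = np.random.default_rng(1)
theta_true, n = 0.5, 50_000
eps = rng.normal(0.0, 1.0, size=n)

x = np.empty(n + 1)
x[0] = 0.0
for k in range(n):
    x[k + 1] = theta_true * x[k] + eps[k]

# Closed-form least squares estimator of theta for AR(1).
theta_hat = np.sum(x[:-1] * x[1:]) / np.sum(x[:-1] ** 2)

# Residual-based error variance estimator.
resid = x[1:] - theta_hat * x[:-1]
sigma2_hat = np.mean(resid ** 2)

# By the central limit theorem underlying Corollary 1, the normalized
# quantity sqrt(n) * (sigma2_hat - sigma^2) is approximately
# N(0, Var(eps_1^2)) = N(0, 2) for standard normal errors.
z = np.sqrt(n) * (sigma2_hat - 1.0)
print(theta_hat, sigma2_hat, z)
```

Repeating the experiment over many seeds and comparing the empirical distribution of `z` with N(0, 2) gives a visual check of the asymptotic normality that Theorem 1 strengthens to an invariance principle.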
The second example encompasses the self-exciting threshold autoregressive (SETAR) models.
Example 2.
Let satisfy the continuous SETAR() models, as follows:
where are disjoint intervals, and are the thresholds. denote the true parameters of the models, denote the unknown parameters, , . is the threshold process (sometimes referred to as a switching variable), which controls the switching or jumps between the regimes. The integer is called a delay parameter. The SETAR() model is an AR(p) model in each of the several regimes. As such, it is piecewise linear in each regime, but the overall time series model is nonlinear. The piecewise nature of the SETAR model is able to capture some important nonlinear phenomena, such as sudden jumps, sub- and higher-order harmonics, asymmetric limit cycles, chaos, and amplitude-dependent frequency.
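In generic notation, the continuous SETAR structure described above can be sketched as follows (a reconstruction of the standard form, not verbatim from this paper; the symbols $r_j$, $d$, and $I(\cdot)$ are illustrative notation for the thresholds, delay parameter, and indicator function):

```latex
% Generic SETAR(l; p, ..., p) form: thresholds -infty = r_0 < r_1 < ... <
% r_l = +infty split the range of the threshold variable X_{k-d} into
% regimes R_j = (r_{j-1}, r_j]; each regime has its own AR(p) dynamics.
\begin{equation*}
X_k = \sum_{j=1}^{l}
      \left( \theta_{j0} + \theta_{j1} X_{k-1} + \cdots
           + \theta_{jp} X_{k-p} \right)
      I\!\left( X_{k-d} \in R_j \right) + \varepsilon_k .
\end{equation*}
```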
Assumption 1C.
Assume that has the density function h and that the continuous density function f of has a support including the interval , where , , . for every and and some .
Assumption 1B can be satisfied by using Corollary 3.1 of Liebscher [1] under Assumption 1C and suitable moment conditions.
Thus, Theorems 1 and 2 hold for the SETAR() model. We summarize it as the following corollary:
Corollary 2.
For SETAR() models, under Assumption 1A and Assumption 1C, if , , , , then
and
The third example encompasses the threshold-exponential AR models.
Example 3.
The threshold-exponential AR models are defined as follows:
with and being non-empty and non-overlapping intervals of with . denote the true parameters, denote the unknown parameters, and is the threshold variable, .
By Theorem 4 of Yao [14], Assumption 1B can be satisfied if , , , and for some . Thus, Theorems 1 and 2 hold for threshold-exponential AR processes. We summarize this as the following corollary:
Corollary 3.
For threshold-exponential AR models, if , , , , then under Assumption 1A, for for some , one can obtain
for for some and , one can obtain
The last example is the multilayer perceptron model.
Example 4.
Consider the following multilayer perceptron model:
where denote the true parameters, denote the unknown parameters, and is called an activation function, . The above model has one output unit, which provides the variable ; p input units, fed by the variables at time k; and a hidden layer with K units. Due to its universal approximation ability, the multilayer perceptron has become popular among nonlinear models. We take the activation function in this example as in Yao [14].
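The structure just described can be sketched in generic notation (a reconstruction of the standard one-hidden-layer form; the weight symbols $\beta_i$ and $a_{ij}$ are illustrative notation only, not the paper's):

```latex
% One-hidden-layer perceptron autoregression with K hidden units and
% activation phi (e.g., the logistic sigmoid, as in Yao [14]).
\begin{equation*}
f_\theta(x_1, \ldots, x_p)
  = \beta_0 + \sum_{i=1}^{K} \beta_i \,
    \phi\!\left( a_{i0} + \sum_{j=1}^{p} a_{ij} x_j \right),
\qquad
X_k = f_\theta(X_{k-1}, \ldots, X_{k-p}) + \varepsilon_k .
\end{equation*}
```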
By Theorem 5 of Yao [14], Assumption 1B can be satisfied if for some , for every different from and some , and the matrix is regular, where
Thus, Theorems 1 and 2 hold for multilayer perceptron models. We summarize this as the following corollary:
Corollary 4.
For multilayer perceptron models with for some , for every θ different from and some , , and with the matrix being regular, if Assumption 1A holds, then one can obtain
and
where .
The structure of this article is as follows. The background and main results are presented in Section 1. Auxiliary lemmas are stated in Section 2. The proofs are given in Section 3. Throughout this paper, C denotes a positive constant whose value may vary from one appearance to another. Let $I(A)$ denote the indicator function of the set A.
2. Preliminary Lemmas
We will use the following lemmas:
Lemma 1
(Hall and Heyde [15]). Let and , . If is a martingale and , then
where is a constant depending only on q.
Lemma 2
(Liang and Zhang [12]). Under the assumptions of Theorem 1 or 2, for , , ; then, for any , one can obtain
where
for some .
Lemma 3
(Chang and Rao [16]). Let be two random variables; then, for any and ,
Lemma 4
(Petrov [17]). Let be independent random variables with zero means and for , . Then, there exists a positive constant A such that
where .
3. Proofs
Proof of Theorem 1.
In order to prove Theorem 1, it suffices to prove
where .
Noting (1) and (2), via Taylor’s expansion, one can obtain
where and are defined as in Lemma 2. Using (8), one can conclude the following:
Noticing the fundamental inequality
For , let
By (1) and the independence of , one can conclude that and are independent; thus, is a martingale. Using Lemmas 1 and 2, we know
For , via the Markov inequality, (4) and (10), the $C_r$-inequality, the Cauchy–Schwarz inequality, Lemma 2, and , one can obtain
For , via the Markov inequality, (4) and (10), the $C_r$-inequality, the Cauchy–Schwarz inequality, Lemma 2, and , it follows that
By the invariance principle for i.i.d. random variables (see Lu [18]), we know that
Proof of Theorem 2.
By (8), we can obtain
where .
Then, by Lemma 3 and (18), one can obtain
Using a similar proof to that of (11) with and the Markov inequality, we obtain
Using a similar method to that of (12) with and the Markov inequality, we know that
Using a similar argument to that of (14) with and the Markov inequality, we have
By a similar proof to that of (15) with and the Markov inequality, it follows that
Using a similar method to that of (16) with and the Markov inequality, one can conclude that
Using Lemma 4 with , we can easily see that
4. Conclusions
In this article, via Taylor's expansion and the classical invariance principle and Berry–Esseen bound for independent random variables, the authors established the invariance principle and Berry–Esseen bound for an error variance estimator in NAR(p) models with i.i.d. errors. The central limit theorem for the error variance estimator was thereby extended to the invariance principle. Four corollaries were given to illustrate the results. In the future, the moderate deviation principle and the large deviation principle for the error variance estimator in NAR models with independent errors will be considered.
Author Contributions
Conceptualization, K.L.; formal analysis, K.L.; methodology, Y.Z. and X.D.; validation, K.L. and Y.Z.; visualization, K.L. and Y.Z.; writing—original draft preparation, K.L. and X.D.; writing—review and editing, Y.Z. and X.D. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Science and Technology Program of Jilin Educational Department during the 14th Five-Year Plan Period (Grant No. JJKH20241239KJ) and the National Natural Science Foundation of China (Grant No. 12171198).
Data Availability Statement
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current (theoretical) study.
Conflicts of Interest
The authors declare no conflicts of interest in this paper.
References
- Liebscher, E. Strong convergence of estimators in nonlinear autoregressive models. J. Multivar. Anal. 2003, 84, 247–261.
- Cheng, F.; Sun, S. A goodness-of-fit test of the errors in nonlinear autoregressive time series models. Stat. Probab. Lett. 2008, 78, 50–59.
- Fu, K.; Yang, X. Asymptotics of kernel density estimators in nonlinear autoregressive models. J. Math. Chem. 2008, 44, 831–838.
- Cheng, F. Global property of error density estimation in nonlinear autoregressive time series models. Stat. Inference Stoch. Process. 2010, 13, 43–53.
- Hilgert, N.; Portier, B. Strong uniform consistency and asymptotic normality of a kernel based error density estimator in functional autoregressive models. J. Stat. Plan. Inference 2012, 15, 105–125.
- Kim, K.; Sin, M.; Kim, O. A goodness-of-fit test of the errors in nonlinear autoregressive time series models with stationary α-mixing error terms. ROMAI J. 2014, 10, 63–70.
- Li, J. Asymptotics of the Lp-norms of density estimators in the nonlinear autoregressive models. Commun. Stat. Theory Methods 2014, 43, 4845–4855.
- Cheng, F. Strong consistency of the distribution estimator in the nonlinear autoregressive time series. J. Multivar. Anal. 2015, 142, 41–47.
- Liu, T.; Zhang, Y. Law of the iterated logarithm for error density estimators in nonlinear autoregressive models. Commun. Stat. Theory Methods 2020, 49, 1082–1098.
- Wu, S.; Yang, W.; Gao, M.; Fang, H. Asymptotic results of error density estimator in nonlinear autoregressive models. J. Korean Stat. Soc. 2024, 53, 563–582.
- Cheng, F. Variance estimation in nonlinear autoregressive time series models. J. Stat. Plan. Inference 2011, 141, 1588–1592.
- Liang, K.; Zhang, Y. Almost sure central limit theorem for error variance estimator in pth-order nonlinear autoregressive processes. Mathematics 2024, 12, 1482.
- Wang, Y.; Mao, M.; Hu, X.; He, T. The law of iterated logarithm for autoregressive processes. Math. Probl. Eng. 2014, 2014, 972712.
- Yao, J. On least squares estimation for stable nonlinear AR processes. Ann. Inst. Stat. Math. 2000, 52, 316–331.
- Hall, P.; Heyde, C. Martingale Limit Theory and Its Application; Academic Press: New York, NY, USA, 1980.
- Chang, M.; Rao, P. Berry–Esseen bound for the Kaplan–Meier estimator. Commun. Stat. Theory Methods 1989, 18, 4647–4664.
- Petrov, V. Limit Theorems of Probability Theory; Oxford University Press: New York, NY, USA, 1995.
- Lu, C. The invariance principle for linear processes generated by a negatively associated sequence and its applications. Acta Math. Appl. Sin. Engl. Ser. 2003, 19, 641–646.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).