Article

Multi-Step-Ahead Prediction Intervals for Nonparametric Autoregressions via Bootstrap: Consistency, Debiasing, and Pertinence

1 Department of Mathematics and Halicioğlu Data Science Institute, University of California, San Diego, CA 92093, USA
2 Department of Mathematics, University of California, San Diego, CA 92093, USA
* Author to whom correspondence should be addressed.
Stats 2023, 6(3), 839-867; https://doi.org/10.3390/stats6030053
Submission received: 19 July 2023 / Revised: 5 August 2023 / Accepted: 7 August 2023 / Published: 11 August 2023
(This article belongs to the Section Time Series Analysis)

Abstract

To address the difficult problem of the multi-step-ahead prediction of nonparametric autoregressions, we consider a forward bootstrap approach. Employing a local constant estimator, we can analyze a general type of nonparametric time-series model and show that the proposed point predictions are consistent with the true optimal predictor. We construct a quantile prediction interval that is asymptotically valid. Moreover, using a debiasing technique, we can asymptotically approximate the distribution of multi-step-ahead nonparametric estimation by the bootstrap. As a result, we can build bootstrap prediction intervals that are pertinent, i.e., can capture the model estimation variability, thus improving the standard quantile prediction intervals. Simulation studies are presented to illustrate the performance of our point predictions and pertinent prediction intervals for finite samples.

1. Introduction

Since the 1980s, non-linear time-series models have attracted attention for modeling asymmetry in financial returns, the volatility of stock markets, switching regimes, etc. Compared to linear time-series models, non-linear models are more capable of depicting the underlying data-generating mechanism; see the review in [1], for example. However, unlike linear models, where the one-step-ahead predictor can be iterated, the multi-step-ahead prediction of non-linear models is cumbersome, since the innovation severely influences the forecasting value.
In this paper, by combining the forward bootstrap in [2] with nonparametric estimation, we develop multi-step-ahead (conditional) predictive inference for the general model:
X_t = m(X_{t-1}, \ldots, X_{t-p}) + \sigma(X_{t-1}, \ldots, X_{t-q})\,\epsilon_t; \qquad (1)
where the ϵ t values are assumed to be independent and identically distributed (i.i.d.) with mean 0 and variance 1, and m ( · ) and σ ( · ) are some functions that satisfy some smoothness conditions. We will also assume that the time series satisfying Equation (1) is geometrically ergodic and causal, i.e., that for any t, ϵ t is independent of { X s , s < t } .
In Equation (1), we have the trend/regression function m ( · ) depending on the last p data points, while the standard deviation/volatility function σ ( · ) depends on the last q data points; in many situations, p and q are taken to be equal for simplicity. Some special cases deserve mention: e.g., if σ ( X t 1 , , X t q ) σ (constant), Equation (1) yields a non-linear/nonparametric autoregressive model with homoscedastic innovations. The well-known ARCH/GARCH models are a special case of Equation (1) with m ( X t 1 , , X t p ) 0 .
Although the L 2 -optimal one-step-ahead prediction of Equation (1) is trivial when we know the regression function m ( · ) or have a consistent estimator of it, the multi-step-ahead prediction is not easy to obtain. In addition, it is nontrivial to find the L 1 -optimal prediction, even for one-step-ahead forecasting. In several applied areas, e.g., econometrics, climate modeling, and water resources management, data might not possess a finite second moment, in which case, optimizing L 2 loss is vacuous. For all such cases—but also of independent interest—prediction that is optimal with respect to L 1 loss should receive more attention in practice; see detailed discussion in Ch. 10 of [2]. Later, we will show that our method is compatible with both L 2 - and L 1 -optimal multi-step-ahead predictions.
Efforts to overcome the difficulty of forecasting non-linear time series can be traced back to the work of [3], where a numerical approach was proposed to explore the exact conditional k-step-ahead L 2 -optimal prediction of X T + k for the homoscedastic Equation (1). However, this method is computationally intractable with long-horizon prediction and requires knowledge of the distribution of innovations and the regression function, which is not realistic in practice.
Consequently, practitioners started to investigate some suboptimal methods to perform multi-step-ahead prediction. Generally speaking, these methods take one of two avenues: (1) direct prediction or (2) iterative prediction. The first idea involves working with a different (“direct”) model, specific to k-step-ahead prediction, namely:
X_t = m_k(X_{t-k}, \ldots, X_{t-k-p+1}) + \sigma_k(X_{t-k}, \ldots, X_{t-k-q+1})\,\xi_t. \qquad (2)
Even though m k ( · ) and σ k ( · ) are unknown to us, we can construct nonparametric estimators, m ^ k and σ ^ k , and plug them into Equation (2) to perform k-step-ahead prediction. Ref. [4] gives a review of this approach. However, as pointed out by [5], a drawback of this approach is that information from intermediate observations { X t , , X t k + 1 } is disregarded. Furthermore, if ϵ t in Equation (1) is i.i.d., then ξ t in Equation (2) cannot be i.i.d. In other words, a practitioner must employ the (estimated) dependence structure of ξ t in Equation (2) in order to perform the prediction in an optimal fashion.
The second idea is “iterative prediction”, which employs one-step-ahead predictors sequentially to perform a multi-step-ahead forecast. For example, consider a two-step-ahead prediction using Model (1); first, note that the $L_2$-optimal predictor of $X_{T+1}$ is $\hat{X}_{T+1} = m(X_T, \ldots, X_{T+1-p})$. Similarly, if $X_{T+1}$ were known, the $L_2$-optimal predictor of $X_{T+2}$ would be $m(X_{T+1}, X_T, \ldots, X_{T+2-p})$; since $X_{T+1}$ is unknown, it is tempting to plug in $\hat{X}_{T+1}$ in its place. This plug-in idea can be extended to multi-step-ahead forecasts, but it does not lead to the $L_2$-optimal predictor, except in the special case where the function $m(\cdot)$ is linear, e.g., in the case of a linear autoregressive (LAR) model.
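To make the pitfall concrete, here is a minimal R sketch of the naive iterated plug-in forecast for the case p = 1; the regression function m, the value x_T, and the horizon k are illustrative stand-ins, not quantities taken from the paper.

```r
# Naive iterated plug-in k-step-ahead forecast for X_t = m(X_{t-1}) + eps_t (p = 1).
# For a non-linear m this is generally NOT the L2-optimal predictor.
iterated_plugin <- function(m, x_T, k) {
  pred <- x_T
  for (i in 1:k) pred <- m(pred)     # plug the previous forecast back into m
  pred
}

# Illustrative non-linear regression function; the bias relative to the optimal
# predictor E[X_{T+k} | X_T] grows with the horizon k.
m <- function(x) log(x^2 + 1)
iterated_plugin(m, x_T = 0.5, k = 3)
```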
Remark 1. 
Since neither of the above two approaches is satisfactory, we propose to approximate the distribution of the future value via a particular type of simulation when the model is known or, more generally, by the bootstrap. To describe this approach, we rewrite Equation (1) as
X_t = G(\mathbf{X}_{t-1}, \epsilon_t), \qquad (3)
where $\mathbf{X}_{t-1}$ is a vector that represents $(X_{t-1}, \ldots, X_{t-\max(p,q)})$, and $G(\cdot, \cdot)$ is some appropriate function. Then, when the model and the innovation information are known to us, we can create a pseudo-value $X_{T+k}^*$. Taking a three-step-ahead prediction as an example, the pseudo-value $X_{T+3}^*$ can be defined as follows:
X_{T+3}^* = G(G(G(\mathbf{X}_T, \epsilon_{T+1}^*), \epsilon_{T+2}^*), \epsilon_{T+3}^*); \qquad (4)
where $\{\epsilon_i^*\}_{i=T+1}^{T+3}$ are simulated as i.i.d. from $F_\epsilon$. Repeating this process $M$ times to obtain $M$ pseudo-values of $X_{T+3}^*$, the $L_2$-optimal prediction of $X_{T+3}$ can be estimated by the mean of $\{X_{T+3}^{*(m)}\}_{m=1}^{M}$. As already discussed, constructing the $L_1$-optimal predictor may also be required since sometimes the $L_2$ loss is not well defined; in our simulation framework, we can construct the $L_1$-optimal prediction by taking the median of $\{X_{T+k}^{*(m)}\}_{m=1}^{M}$. Moreover, we can even build a prediction interval (PI) to measure the forecasting accuracy based on the quantile values of the simulated pseudo-values. The extension of this algorithm to longer-step-ahead prediction is illustrated in Section 2.
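For illustration, the following is a minimal R sketch of this simulation-based approach in the homoscedastic case with p = q = 1; the model G(x, e) = m(x) + e, the innovation law (standard normal), and the constants x_T and M are assumptions made purely for the example.

```r
# Simulation-based three-step-ahead prediction when the model is known (cf. Equation (4)).
# Assumes X_t = m(X_{t-1}) + eps_t with known m and known innovation law F_eps.
set.seed(1)
m   <- function(x) log(x^2 + 1)          # known regression function (illustrative)
G   <- function(x, e) m(x) + e           # model in the form X_t = G(X_{t-1}, eps_t)
x_T <- 0.5                               # last observed value
M   <- 1000                              # number of simulated pseudo-values
eps <- matrix(rnorm(3 * M), nrow = M)    # i.i.d. draws from F_eps (here N(0,1))

x_star <- x_T                            # iterate the model forward three steps
for (i in 1:3) x_star <- G(x_star, eps[, i])

c(L2 = mean(x_star), L1 = median(x_star))   # L2- and L1-optimal point predictions
quantile(x_star, c(0.025, 0.975))           # 95% quantile prediction interval
```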
Realistically, practitioners will not know $F_\epsilon$, $m(\cdot)$, or $\sigma(\cdot)$. In this situation, the first step is to estimate these quantities and plug them into the above simulation, which then turns into a bootstrap method. The bootstrap idea was introduced by [6] to carry out statistical inference for independent data. After that, many variants of the bootstrap were developed to handle time-series data. Prominent examples include the sieve bootstrap and the block bootstrap in its many variations, e.g., the circular bootstrap of [7] and the stationary bootstrap of [8]; see [9] for a review. Once some model structure of the data is assumed, practitioners can rely on model-based bootstrap methods, e.g., the residual and/or wild bootstrap; see [10] for a book-length treatment. The bootstrap technique can also be applied to a recently popular model, namely, a neural network. In particular, ref. [11] applied the bootstrap for the estimation inference of neural networks’ parameters, while [12] utilized the bootstrap to estimate the performance of neural networks.
In the spirit of the idea of the bootstrap, ref. [13] proposed a backward bootstrap trick to predict an A R ( p ) model. The advantage of the backward method is that each bootstrap prediction is naturally conditional on the latest p observations, which coincide with the conditional prediction in the real world. However, this method cannot handle non-linear time series, whose backward representation may not exist. Later, ref. [14] proposed a strategy to generate forward bootstrap A R ( p ) series. To resolve the conditional prediction issues, they fixed the last p bootstrap values to be the true observations and computed predictions iteratively in the bootstrap world starting from there. They then extended this procedure to forecast the GARCH model in [15].
Sharing a similar idea, ref. [16] defined the forward bootstrap to perform prediction, but they proposed a different PI format that empirically has better performance, according to the coverage rate (CVR) and the length (LEN), compared to the PI of [14]. Although ref. [16] covered the forecasting of a non-linear and/or nonparametric time-series model, only one-step-ahead prediction was considered. The case of the multi-step-ahead prediction of non-linear (but parametric) time-series models was recently addressed in [17]. In the paper at hand, we address the case of the multi-step-ahead prediction of nonparametric time-series models, as in Equation (1). Beyond discussing optimal L 1 and L 2 point predictions, we consider two types of PI—quantile PI (QPI) and pertinent PI (PPI). As already mentioned, the former can be approximated by taking the quantile values of the future value’s distribution in the bootstrap world. The PPI requires a more complicated and computationally heavy procedure to be built, as it attempts to capture the variability in parameter estimation. This additional effort results in improved finite-sample coverage as compared to the QPI.
As in most nonparametric estimation problems, the issue of bias becomes important. We will show that debiasing on the inherent bias-type terms of local constant estimation is necessary to guarantee the pertinence of a PI when multi-step-ahead predictions are required. Although the QPI and PPI are asymptotically equivalent, the PPI renders a better CVR in finite-sample cases; see the formal definition of PPI in the work of [2,16]. Analogously to the successful construction of PIs in the work of [18], we can employ predictive—as opposed to fitted—residuals in the bootstrap process to further alleviate the finite-sample undercoverage of bootstrap PIs in practice. There are several other nonparametric approaches to carry out the prediction inference of future values; e.g., see the work of [5,19] for variants of kernel-based methods; see the work of [20,21,22] for prediction with a neural network using the sieve bootstrap or various ensemble strategies; finally, see the work of [23,24,25] for a novel transformation-based approach for model-free prediction. The comparison of these various nonparametric techniques could be an independent study.
This paper is organized as follows. In Section 2, forward bootstrap prediction algorithms with local constant estimators will be given. The asymptotic properties of point predictions and PIs will be discussed in Section 3. Simulations are given in Section 4 to substantiate the finite-sample performance of our methods. Conclusions are given in Section 5. All proofs can be found in Appendix A. Discussions on the debiasing and pertinence related to building PIs are presented in Appendix B, Appendix C and Appendix D.

2. Nonparametric Forward Bootstrap Prediction

As discussed in the remark in Section 1, we can apply the simulation or bootstrap technique to approximate the distribution of future values. In general, this idea works for any geometrically ergodic autoregressive model, regardless of whether it is in a linear or non-linear format. For example, if we have a known general model X t = G ( X t 1 , ϵ t ) at hand, we can perform k-step-ahead predictions according to the same logic of the three-step-ahead prediction example in Section 1.
To elaborate, we need to simulate { ϵ i * } i = T + 1 T + k as i.i.d. from F ϵ and then compute the pseudo-value X T + k * iteratively with simulated innovations as follows:
X_{T+k}^* = G(\cdots G(G(G(\mathbf{X}_T, \epsilon_{T+1}^*), \epsilon_{T+2}^*), \epsilon_{T+3}^*), \ldots, \epsilon_{T+k}^*).
Repeating this procedure M times, we can make a prediction inference with the empirical distribution of { X T + k * ( m ) } m = 1 M . Similarly, if the model and innovation distribution are unknown to us, we can perform the estimation first to obtain G ^ ( · , · ) and F ^ ϵ . Then, the above simulation-based algorithm turns out to be a bootstrap-based algorithm. More specifically, we bootstrap { ϵ ^ i * } i = T + 1 T + k from F ^ ϵ and calculate the pseudo-value X ^ T + k * iteratively with G ^ ( · , · ) . The prediction inference can also be conducted with the empirical distribution of { X ^ T + k * ( m ) } m = 1 M .
This simulation/bootstrap idea was recently implemented by [17] in the case where the model G is either known or parametrically specified. In what follows, we will focus on the case of the nonparametric model in Equation (1) and will analyze the asymptotic properties of the point predictor and prediction interval. For the sake of simplicity, we consider only the case in which p = q = 1 ; the general case can be handled similarly, but the notation is much more cumbersome. Assume that we observe T + 1 data points and that we denote them by { X 0 , , X T } ; our goal is the prediction inference of X T + k for some k 1 . If we know m ( · ) , σ ( · ) , and F ϵ , we can take a simulation approach to develop the prediction inference, as we explained in Section 1. When m ( · ) , σ ( · ) , and F ϵ are unknown, we start by estimating m ( · ) and σ ( · ) ; we then estimate F ϵ based on the empirical distribution of residuals. Subsequently, we can deploy a bootstrap-based method to approximate the distribution of future values. Several algorithms are given for this purpose later in the paper.

2.1. Bootstrap Algorithm for Point Prediction and QPI

For concreteness, we focus on local constant estimators, i.e., kernel-smoothed estimators of the Nadaraya–Watson type; other estimators can be applied similarly. The local constant estimators of m ( · ) and σ ( · ) are, respectively, defined as:
\tilde{m}_h(x) = \frac{\sum_{t=1}^{T} K\!\left(\frac{x - X_{t-1}}{h}\right) X_t}{\sum_{t=1}^{T} K\!\left(\frac{x - X_{t-1}}{h}\right)} \quad \text{and} \quad \tilde{\sigma}_h(x) = \sqrt{\frac{\sum_{t=1}^{T} K\!\left(\frac{x - X_{t-1}}{h}\right)\left(X_t - \tilde{m}_h(X_{t-1})\right)^2}{\sum_{t=1}^{T} K\!\left(\frac{x - X_{t-1}}{h}\right)}}; \qquad (5)
where K is a non-negative kernel function that satisfies some regularity assumptions; see Section 3 for details. We use h to represent the bandwidth of kernel functions, but h may take a different value for mean and variance estimators. Due to theoretical and practical issues, we need to truncate the above local constant estimators as follows:
\hat{m}_h(x) = \begin{cases} -C_m & \text{if } \tilde{m}_h(x) < -C_m \\ \tilde{m}_h(x) & \text{if } |\tilde{m}_h(x)| \le C_m \\ C_m & \text{if } \tilde{m}_h(x) > C_m \end{cases}; \qquad \hat{\sigma}_h(x) = \begin{cases} c_\sigma & \text{if } \tilde{\sigma}_h(x) < c_\sigma \\ \tilde{\sigma}_h(x) & \text{if } c_\sigma \le \tilde{\sigma}_h(x) \le C_\sigma \\ C_\sigma & \text{if } \tilde{\sigma}_h(x) > C_\sigma \end{cases}; \qquad (6)
where $C_m$ and $C_\sigma$ are large enough constants, and $c_\sigma$ is a small enough constant.
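A rough R sketch of the truncated local constant estimators is given below; the Epanechnikov kernel, the fallback values in sparse regions, and the hard-coded truncation constants are illustrative choices rather than the ones prescribed by the theory (cf. Remark 8).

```r
# Local constant (Nadaraya-Watson) estimators of m(x) and sigma(x) with truncation.
# Xs holds the observed series (X_0, ..., X_T); kernel, fallbacks, and the
# truncation constants are illustrative; not optimized for speed.
K <- function(u) 0.75 * (1 - u^2) * (abs(u) <= 1)     # Epanechnikov kernel

m_hat <- function(x, Xs, h, C_m) {
  w   <- K((x - Xs[-length(Xs)]) / h)                 # weights from lagged values X_0..X_{T-1}
  den <- sum(w)
  m_tilde <- if (den > 0) sum(w * Xs[-1]) / den else mean(Xs)  # fallback in sparse regions
  max(min(m_tilde, C_m), -C_m)                        # truncate to [-C_m, C_m]
}

sigma_hat <- function(x, Xs, h, c_s, C_s) {
  w   <- K((x - Xs[-length(Xs)]) / h)
  mh  <- sapply(Xs[-length(Xs)], m_hat, Xs = Xs, h = h, C_m = 1e3)
  den <- sum(w)
  s_tilde <- if (den > 0) sqrt(sum(w * (Xs[-1] - mh)^2) / den) else sd(Xs)
  min(max(s_tilde, c_s), C_s)                         # truncate to [c_s, C_s]
}
```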
Using m ^ h ( · ) and σ ^ h ( · ) on Equation (1), we can obtain the fitted residuals { ϵ ^ t } t = 1 T , which are defined as:
\hat{\epsilon}_t = \frac{X_t - \hat{m}_h(X_{t-1})}{\hat{\sigma}_h(X_{t-1})}, \quad \text{for } t = 1, \ldots, T. \qquad (7)
Later, in Section 3, we will show that the innovation distribution F ϵ can be consistently estimated from the centered empirical distribution of { ϵ ^ t } t = 1 T , i.e., F ^ ϵ , under some standard assumptions. We now have all the ingredients to perform the bootstrap-based Algorithm 1 to yield the point prediction and QPI of X T + k .
Algorithm 1 Bootstrap prediction of X T + k with fitted residuals
Step 1 With data { X 0 , , X T } , construct the estimators m ^ h ( x ) and σ ^ h ( x ) with Equation (6).
Step 2 Compute fitted residuals based on Equation (7), and let ϵ ¯ = 1 T i = 1 T ϵ ^ i . Let F ^ ϵ denote the empirical distribution of the centered residuals ϵ ^ t ϵ ¯ for t = 1 , , T .
Step 3 Generate { ϵ ^ i * } i = T + 1 T + k i.i.d. from F ^ ϵ . Then, construct bootstrap pseudo-values X T + 1 * , , X T + k * iteratively, i.e.,
X_{T+i}^* = \hat{m}_h(X_{T+i-1}^*) + \hat{\sigma}_h(X_{T+i-1}^*)\,\hat{\epsilon}_{T+i}^*, \quad \text{for } i = 1, \ldots, k. \qquad (8)

For example, starting from $X_T^* = X_T$, we have $X_{T+1}^* = \hat{m}_h(X_T) + \hat{\sigma}_h(X_T)\hat{\epsilon}_{T+1}^*$ and $X_{T+2}^* = \hat{m}_h\!\left(\hat{m}_h(X_T) + \hat{\sigma}_h(X_T)\hat{\epsilon}_{T+1}^*\right) + \hat{\sigma}_h\!\left(\hat{m}_h(X_T) + \hat{\sigma}_h(X_T)\hat{\epsilon}_{T+1}^*\right)\hat{\epsilon}_{T+2}^*$.
Step 4 Repeating Step 3 $M$ times, we obtain pseudo-value replicates of $X_{T+k}^*$ that we denote by $\{X_{T+k}^{(1)}, \ldots, X_{T+k}^{(M)}\}$. Then, the $L_2$- and $L_1$-optimal predictors can be approximated by $\frac{1}{M}\sum_{i=1}^{M} X_{T+k}^{(i)}$ and the median of $\{X_{T+k}^{(1)}, \ldots, X_{T+k}^{(M)}\}$, respectively. Furthermore, a $(1-\alpha)100\%$ QPI can be built as $(L, U)$, where $L$ and $U$ denote the $\alpha/2$ and $1-\alpha/2$ sample quantiles of the $M$ values $\{X_{T+k}^{(1)}, \ldots, X_{T+k}^{(M)}\}$.
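Putting the pieces together, a compact R sketch of Algorithm 1 might look as follows; it reuses the m_hat() and sigma_hat() helpers sketched above, and the bandwidth h, truncation constants, and bootstrap size M are illustrative.

```r
# Algorithm 1 (sketch): k-step-ahead bootstrap point prediction and QPI.
# Reuses m_hat() and sigma_hat(); truncation constants and bandwidth are illustrative.
boot_predict <- function(X, k, h, M = 1000, alpha = 0.05) {
  T_n <- length(X) - 1
  mh  <- sapply(X[1:T_n], m_hat,     Xs = X, h = h, C_m = 1e3)
  sh  <- sapply(X[1:T_n], sigma_hat, Xs = X, h = h, c_s = 0.01, C_s = 1e3)
  eps <- (X[2:(T_n + 1)] - mh) / sh          # fitted residuals, Equation (7)
  eps <- eps - mean(eps)                     # centered residuals define F_hat_eps

  x_future <- replicate(M, {                 # Step 3: iterate forward k steps from X_T
    x <- X[T_n + 1]
    for (i in 1:k) {
      e <- sample(eps, 1)
      x <- m_hat(x, X, h, 1e3) + sigma_hat(x, X, h, 0.01, 1e3) * e
    }
    x
  })

  list(L2  = mean(x_future),                 # Step 4: L2- and L1-optimal predictions
       L1  = median(x_future),
       QPI = quantile(x_future, c(alpha / 2, 1 - alpha / 2)))
}
```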
Remark 2. 
To construct the QPI of Algorithm 1, we can employ the optimal bandwidth rate, i.e., $h = O(T^{-1/5})$. However, in practice with a small sample size, the QPI achieves a better empirical CVR for multi-step-ahead predictions by adopting an under-smoothing bandwidth; see Appendix B for a related discussion, and see Section 4 for simulation comparisons between applying the optimal and under-smoothing bandwidths to the QPI.
In the next section, we will show the conditional asymptotic consistency of our optimal point predictions and the QPI. In particular, we will verify that our point predictions converge to the oracle optimal point predictors in probability, conditional on $X_T$. In addition, we will look for an asymptotically valid PI with a $(1-\alpha)100\%$ CVR to measure the prediction accuracy conditional on the latest observed data, which is defined as:
P(L \le X_{T+k} \le U) \to 1 - \alpha, \quad \text{as } T \to \infty, \qquad (9)
where $L$ and $U$ are the lower and upper PI bounds, respectively. Although not explicitly denoted, the probability $P$ should be understood as the conditional probability given $X_T$. Later, based on a sequence of sets that contains the observed sample with probability tending to 1, we will show how to build a prediction interval that is asymptotically valid by the bootstrap technique, even if the model information is unknown.
Although asymptotically correct, in finite samples, the QPI typically suffers from undercoverage; see the discussion in [2,16]. To improve the CVR in practice, we consider using predictive residuals in the bootstrap process. To derive such predictive residuals, we need to estimate the model based on the delete-$X_t$ dataset, i.e., the data for the scatter plot of $X_i$ vs. $X_{i-1}$ for $i = 1, \ldots, t-1, t+1, \ldots, T$, i.e., excluding the single point at $i = t$. More specifically, we define the delete-$X_t$ local constant estimators as:
\tilde{m}_h^{-t}(x) = \frac{\sum_{i=1, i \ne t}^{T} K\!\left(\frac{|x - X_{i-1}|}{h}\right) X_i}{\sum_{i=1, i \ne t}^{T} K\!\left(\frac{|x - X_{i-1}|}{h}\right)} \quad \text{and} \quad \tilde{\sigma}_h^{-t}(x) = \sqrt{\frac{\sum_{i=1, i \ne t}^{T} K\!\left(\frac{|x - X_{i-1}|}{h}\right)\left(X_i - \tilde{m}_h^{-t}(X_{i-1})\right)^2}{\sum_{i=1, i \ne t}^{T} K\!\left(\frac{|x - X_{i-1}|}{h}\right)}}. \qquad (10)
Similarly, the truncated delete-$X_t$ local estimators $\hat{m}_h^{-t}(x)$ and $\hat{\sigma}_h^{-t}(x)$ can be defined according to Equation (6). We now construct the so-called predictive residuals as:
\hat{\epsilon}_t^{\,p} = \frac{X_t - \hat{m}_h^{-t}(X_{t-1})}{\hat{\sigma}_h^{-t}(X_{t-1})}, \quad \text{for } t = 1, \ldots, T. \qquad (11)
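A short R sketch of the predictive residuals in Equation (11) follows, reusing the kernel K from the earlier sketch; the delete-X_t estimators are obtained by simply dropping the t-th scatter-plot pair, and the safeguards of Remark 8 (e.g., handling empty kernel windows) are omitted.

```r
# Predictive residuals (Equation (11)): for each t, drop the pair (X_{t-1}, X_t),
# re-estimate m and sigma without it, and standardize the left-out point.
predictive_residuals <- function(X, h) {
  T_n <- length(X) - 1
  sapply(1:T_n, function(t) {
    keep  <- setdiff(1:T_n, t)                         # indices of the retained pairs
    m_del <- function(z) {                             # delete-X_t local constant mean
      wz <- K((z - X[keep]) / h)
      sum(wz * X[keep + 1]) / sum(wz)
    }
    w_t  <- K((X[t] - X[keep]) / h)                    # weights at x = X_{t-1}
    s2_t <- sum(w_t * (X[keep + 1] - sapply(X[keep], m_del))^2) / sum(w_t)
    (X[t + 1] - m_del(X[t])) / max(sqrt(s2_t), 0.01)   # sigma truncated away from zero
  })
}
```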
The k-step-ahead prediction of X T + k with predictive residuals is depicted in Algorithm 2. Although Algorithms 1 and 2 are asymptotically equivalent, Algorithm 2 gives a QPI with a better CVR for finite samples; see the simulation comparisons of these two approaches in Section 4.
Algorithm 2 Bootstrap prediction of X T + k with predictive residuals
Step 1 The same as Step 1 of Algorithm 1.
Step 2 Compute predictive residuals based on Equation (11). Let $\hat{F}_\epsilon^{\,p}$ denote the empirical distribution of the centered predictive residuals $\hat{\epsilon}_t^{\,p} - \frac{1}{T}\sum_{i=1}^{T}\hat{\epsilon}_i^{\,p}$, $t = 1, \ldots, T$.
Steps 3–4 Replace F ^ ϵ by F ^ ϵ p in Algorithm 1. All the rest are the same.

2.2. Bootstrap Algorithm for PPI

To improve the CVR of a PI, we can try to take the variability in the model estimation into account when we build the PI; i.e., we need to mimic the estimation process in the bootstrap world. Employing this idea results in a pertinent PI (PPI), as discussed in Section 1; see also [26].
Algorithm 3 outlines the procedure to build a PPI. Although this algorithm is more computationally heavy, the advantage is that the PPI gives a better CVR compared to the QPI in practice, i.e., with finite samples; see the examples in Section 4.
Remark 3 
(Bandwidth choices). In Step 3 (b) of Algorithm 3, we can use an optimal bandwidth h and an over-smoothing bandwidth g to generate bootstrap time series so that we can capture the asymptotically non-random bias-type term of nonparametric estimation by the forward bootstrap; see the application in [27]. We can also apply an under-smoothing bandwidth h (and then use g = h ) to render the bias term negligible. It turns out that both approaches work well for one-step-ahead prediction, although applying the over-smoothing bandwidth may be slightly better. However, taking under-smoothing bandwidth(s) is notably better for multi-step-ahead prediction. The reason for this is that the bias term cannot be captured appropriately for multi-step-ahead estimation with an over-smoothing bandwidth. On the other hand, with an under-smoothing bandwidth, the bias term is negligible; see Section 3.2 for further discussion; also, see [28] for a related discussion. The simulation studies in Appendix C explore the differences between these two bandwidth strategies.
Algorithm 3 Bootstrap PPI of X T + k with fitted residuals
Step 1 With data $\{X_0, \ldots, X_T\}$, construct the estimators $\hat{m}_h(x)$ and $\hat{\sigma}_h(x)$ by using Equation (6). Furthermore, compute fitted residuals based on Equation (7). Denote the empirical distribution of the centered residuals $\hat{\epsilon}_t - \frac{1}{T}\sum_{i=1}^{T}\hat{\epsilon}_i$, $t = 1, \ldots, T$, by $\hat{F}_\epsilon$.
Step 2 Construct the L 1 or L 2 prediction X ^ T + k using Algorithm 1.
Step 3 (a) Resample (with replacement) the residuals from F ^ ϵ to create pseudo-errors { ϵ ^ i * } i = 1 T and { ϵ ^ i * } i = T + 1 T + k .
(b) Let X 0 * = X I , where I is generated as a discrete random variable uniformly distributed on the values 0 , , T . Then, create bootstrap pseudo-data { X t * } t = 1 T in a recursive manner from the formula
X_i^* = \hat{m}_g(X_{i-1}^*) + \hat{\sigma}_g(X_{i-1}^*)\,\hat{\epsilon}_i^*, \quad \text{for } i = 1, \ldots, T. \qquad (12)

(c) Based on the bootstrap data { X t * } t = 0 T , re-estimate the regression and variance functions according to Equation (6) and obtain m ^ h * ( x ) and σ ^ h * ( x ) ; we use the same bandwidth h as the original estimator m ^ h ( x ) .
(d) Guided by the idea of the forward bootstrap, re-define the latest value of X T * to match the original, i.e., re-define X T * = X T .
(e) With the estimators m ^ g ( x ) and σ ^ g ( x ) , the bootstrap data { X t * } t = 0 T , and the pseudo-errors { ϵ ^ t * } t = T + 1 T + k , use Equation (12) to recursively generate the future bootstrap data X T + 1 * , , X T + k * .
(f) With the bootstrap data $\{X_t^*\}_{t=0}^{T}$ and the estimators $\hat{m}_h^*(x)$ and $\hat{\sigma}_h^*(x)$, utilize Algorithm 1 to compute the optimal bootstrap prediction, which is denoted by $\hat{X}_{T+k}^*$; to generate bootstrap innovations, we still use $\hat{F}_\epsilon$.
(g) Determine the bootstrap predictive root: $X_{T+k}^* - \hat{X}_{T+k}^*$.
Step 4 Repeat Step 3 $B$ times; the $B$ bootstrap root replicates are collected in the form of an empirical distribution whose $\beta$-quantile is denoted by $q(\beta)$. The $(1-\alpha)100\%$ equal-tailed prediction interval for $X_{T+k}$ centered at $\hat{X}_{T+k}$ is then estimated by $[\hat{X}_{T+k} + q(\alpha/2),\ \hat{X}_{T+k} + q(1-\alpha/2)]$.
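For concreteness, a condensed R sketch of Steps 3–4 of Algorithm 3 is given below, reusing m_hat(), sigma_hat(), and boot_predict() from the earlier sketches; B, M, and the bandwidths h and g are illustrative, and one simplification is flagged in the comments (the bootstrap-world prediction re-derives residuals from the bootstrap series instead of reusing F_hat_eps, as Step 3(f) prescribes). The code is written for clarity, not efficiency.

```r
# Algorithm 3 (sketch): pertinent PI via bootstrap predictive roots.
ppi_predict <- function(X, k, h, g, B = 500, M = 100, alpha = 0.05) {
  T_n <- length(X) - 1
  mh  <- sapply(X[1:T_n], m_hat,     Xs = X, h = h, C_m = 1e3)
  sh  <- sapply(X[1:T_n], sigma_hat, Xs = X, h = h, c_s = 0.01, C_s = 1e3)
  eps <- (X[2:(T_n + 1)] - mh) / sh
  eps <- eps - mean(eps)                          # Step 1: centered fitted residuals
  x_hat <- boot_predict(X, k, h, M)$L2            # Step 2: point prediction, real world

  roots <- replicate(B, {
    # Step 3(b): regenerate a bootstrap series with the g-bandwidth estimators.
    x_star    <- numeric(T_n + 1)
    x_star[1] <- sample(X, 1)                     # X_0* drawn uniformly from the data
    for (i in 2:(T_n + 1))
      x_star[i] <- m_hat(x_star[i - 1], X, g, 1e3) +
                   sigma_hat(x_star[i - 1], X, g, 0.01, 1e3) * sample(eps, 1)
    x_star[T_n + 1] <- X[T_n + 1]                 # Step 3(d): condition on the observed X_T
    # Step 3(e): bootstrap future values from the g-estimators, starting at X_T.
    xf <- x_star[T_n + 1]
    for (i in 1:k)
      xf <- m_hat(xf, X, g, 1e3) + sigma_hat(xf, X, g, 0.01, 1e3) * sample(eps, 1)
    # Steps 3(c), 3(f): re-estimate on the bootstrap series and predict
    # (simplification: residuals are re-derived from x_star rather than reusing F_hat_eps).
    xf - boot_predict(x_star, k, h, M)$L2         # Step 3(g): bootstrap predictive root
  })

  x_hat + quantile(roots, c(alpha / 2, 1 - alpha / 2))   # Step 4: PPI bounds
}
```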
As Algorithm 2 is a version of Algorithm 1 using predictive (as opposed to fitted) residuals, we now propose Algorithm 4, which constructs a PPI with predictive residuals.
Algorithm 4 Bootstrap PPI of X T + k with predictive residuals
Step 1 With data $\{X_0, \ldots, X_T\}$, construct the estimators $\hat{m}_h(x)$ and $\hat{\sigma}_h(x)$ by using Equation (6). Furthermore, compute predictive residuals based on Equation (11). Denote the empirical distribution of the centered predictive residuals $\hat{\epsilon}_t^{\,p} - \frac{1}{T}\sum_{i=1}^{T}\hat{\epsilon}_i^{\,p}$, $t = 1, \ldots, T$, by $\hat{F}_\epsilon^{\,p}$.
Steps 2–4 The same as in Algorithm 3, but change the residual distribution from $\hat{F}_\epsilon$ to $\hat{F}_\epsilon^{\,p}$, and change the application of Algorithm 1 to Algorithm 2.

3. Asymptotic Properties

In this section, we provide the theoretical substantiation of our nonparametric bootstrap prediction methods—Algorithms 1–4. We start by analyzing optimal point predictions and the QPI based on Algorithms 1 and 2.
Remark 4. 
Since the effect of leaving out one data pair $X_t$ vs. $X_{t-1}$ is asymptotically negligible for large T, the delete-$X_t$ estimators $\hat{m}_h^{-t}(x)$ and $\hat{\sigma}_h^{-t}(x)$ are asymptotically equal to $\hat{m}_h(x)$ and $\hat{\sigma}_h(x)$, respectively. Then, the predictive residual $\hat{\epsilon}_t^{\,p}$ is asymptotically the same as the fitted residual $\hat{\epsilon}_t$; see Lemma 5.5 of [16] for a formal comparison of these two types of estimators and residuals. Thus, we only give theorems that guarantee the asymptotic properties of point predictions and PIs with fitted residuals; the asymptotic properties of the variants with predictive residuals also hold true.

3.1. On Point Prediction and QPI

First, to conduct statistical inference for time series, we need to quantify the degree of the asymptotic dependence of the time series. In this paper, we consider that the time series is geometrically ergodic, which is equivalent to the β -mixing condition with an exponentially fast mixing rate; see [29] for a detailed introduction to different mixing conditions and ergodicity. To simplify the proof, we make the following assumptions:
A1
$|m(x)| + \sigma(x)\,E|\epsilon_1| \le c_1 + c_2|x|$ for all $x \in \mathbb{R}$ and some $c_1 < \infty$, $c_2 < 1$;
A2
$\sigma(x) \ge c_3 > 0$ for all $x \in \mathbb{R}$ and some $c_3 > 0$;
A3
f ϵ ( x ) is positive everywhere.
A1–A3 can guarantee that the time-series process is geometrically ergodic; see Theorem 1 of [30] for proof and see the work of [31] for a discussion on the sufficient conditions of higher-order time series.
Since we need to build consistent properties of the nonparametric estimation, we further assume that:
A4
The regression function $m(x)$ is twice continuously differentiable with bounded derivatives, and we denote its Lipschitz constant by $L_m$;
A5
The volatility function $\sigma(x)$ is twice continuously differentiable with bounded derivatives, and we denote its Lipschitz constant by $L_\sigma$. Moreover, for all $M < \infty$, there is $c_M < \infty$ with $E|\sigma(X_0)\epsilon_1|^M \le c_M$, where $X_0$ is the initial point of the time series;
A6
For L m and L σ , L m + L σ E | ϵ 1 | < 1 ;
A7
For the innovation distribution, $f_\epsilon$ is twice continuously differentiable; $f_\epsilon$, $f_\epsilon'$, and $f_\epsilon''$ are bounded; and $\sup_{x \in \mathbb{R}} |x f_\epsilon(x)| < \infty$;
A8
The kernel function K ( x ) is a compactly supported and symmetric probability density on R and has a bounded derivative.
Remark 5. 
Assumption A6 is originally used to show that the expected value of X t * is O p ( 1 ) in the bootstrap world for all t. In practice, this assumption is not strict; see examples in Section 4. For assumption A8, we can apply a kernel with a support on the whole real line as long as the part outside a large enough compact set is asymptotically negligible.
Under A1–A8, [27] shows that the truncated local constant estimators in Equation (6) are uniformly consistent for the true functions over an expanding region. We summarize this result in the lemma below:
Lemma 1. 
Under A1–A8 and observed data { X 0 , , X T } , for local constant estimation as in Equation (6), we have:
\sup_{|x| \le c_T} |\hat{m}_h(x) - m(x)| \overset{p}{\to} 0 \quad \text{and} \quad \sup_{|x| \le c_T} |\hat{\sigma}_h(x) - \sigma(x)| \overset{p}{\to} 0, \qquad (13)
where $c_T$ is an appropriate sequence that converges to infinity as $T \to \infty$.
In addition, for the centered empirical distribution of ϵ ^ , we can derive Lemma 2 to describe its consistency property.
Lemma 2. 
Under A1–A8 and observed data { X 0 , , X T } , for the centered empirical distribution F ^ ϵ , we have:
\sup_{x \in \mathbb{R}} \left| \hat{F}_\epsilon(x) - F_\epsilon(x) \right| \overset{p}{\to} 0. \qquad (14)
See Theorem 5 of [27] for the proof of Lemmas 1 and 2. Combining all the pieces, we present Theorem 1 to show that the optimal point prediction and QPI returned by Algorithm 1 or Algorithm 2 are consistent and asymptotically valid, respectively, conditionally on the latest observations.
Theorem 1. 
Under assumptions A1–A8 and observed data { X 0 , , X T } , we have:
\sup_{|x| \le c_T} \left| F_{X_{T+k}^* \mid X_T, \ldots, X_0}(x) - F_{X_{T+k} \mid X_T}(x) \right| \overset{p}{\to} 0, \quad \text{for } k \ge 1, \qquad (15)
where $X_{T+k}^*$ is a future value in the bootstrap world that can be determined iteratively by applying the expression $X_{T+i}^* = \hat{m}_h(X_{T+i-1}^*) + \hat{\sigma}_h(X_{T+i-1}^*)\hat{\epsilon}_{T+i}^*$ for $i = 1, \ldots, k$; $\{\hat{\epsilon}_{T+i}^*\}_{i=1}^{k}$ is i.i.d., with its distribution given by the empirical distribution of the fitted (or predictive) residuals; $F_{X_{T+k}^* \mid X_T, \ldots, X_0}(x)$ represents the distribution $P^*(X_{T+k}^* \le x \mid X_T, \ldots, X_0)$, where we take $P^*$ to represent the probability measure conditional on the sample of data; and $F_{X_{T+k} \mid X_T}(x)$ represents the (conditional) distribution of $X_{T+k}$ in the real world, i.e., $P(X_{T+k} \le x \mid X_T)$.

3.2. On PPI with Homoscedastic Errors

With more complicated prediction procedures, such as Algorithms 3 and 4, we expect to find a more accurate PI, i.e., a PPI. The superiority of such PIs is that the estimation variability can be captured when we use the distribution of the predictive root in the bootstrap world to approximate its variant in the real world. We consider models with homoscedastic errors throughout this section; the model with heteroscedastic errors will be analyzed later.
Firstly, let us consider the one-step-ahead predictive root centered at the optimal L 2 point prediction in the real and bootstrap worlds, as given below:
X_{T+1} - \hat{X}_{T+1} = m(X_T) + \epsilon_{T+1} - \frac{1}{M}\sum_{i=1}^{M}\left(\hat{m}_h(X_T) + \hat{\epsilon}_{i,T+1}\right); \qquad X_{T+1}^* - \hat{X}_{T+1}^* = \hat{m}_g(X_T) + \hat{\epsilon}_{T+1}^{\,*} - \frac{1}{M}\sum_{i=1}^{M}\left(\hat{m}_h^*(X_T) + \hat{\epsilon}_{i,T+1}^{\,*}\right), \qquad (16)
where $M$ is the number of bootstrap replications that we employ to approximate the optimal $L_2$ point prediction. Since we have centered the residuals to have mean zero, Equation (16) degenerates to the following simple form asymptotically as $M \to \infty$:
X_{T+1} - \hat{X}_{T+1} = m(X_T) + \epsilon_{T+1} - \hat{m}_h(X_T); \qquad X_{T+1}^* - \hat{X}_{T+1}^* = \hat{m}_g(X_T) + \hat{\epsilon}_{T+1}^{\,*} - \hat{m}_h^*(X_T). \qquad (17)
To acquire a pertinent PI according to Definition 2.4 of [16], in addition to Equation (14), we also need asymptotically valid confidence intervals for local constant estimation in the bootstrap world; i.e., we should be able to estimate the distribution of the nonparametric estimator in the bootstrap world. For one-step-ahead prediction, this condition can be formulated as follows:
\sup_x \left| P(a_T A_m \le x) - P^*(a_T A_m^* \le x) \right| \overset{p}{\to} 0, \qquad (18)
where
A_m = m(X_T) - \hat{m}_h(X_T); \qquad A_m^* = \hat{m}_g(X_T) - \hat{m}_h^*(X_T), \qquad (19)
and $a_T$ is an appropriate sequence such that $P(a_T A_m \le x)$ has a nontrivial limit as $T \to \infty$. In [16], it was assumed that the nontrivial limit of $P(a_T A_m \le x)$ is continuous. In this case, the uniform convergence in Equation (18) follows from the pointwise convergence at every $x$.
Remark 6. 
As we have discussed in Remark 3, the bootstrap procedure cannot capture the bias term of nonparametric estimation exactly unless delicate manipulations are made. Ref. [16] adopts two strategies to solve this issue: (B1) let $g = h$, and take a bandwidth rate satisfying $h\,T^{1/5} \to 0$, i.e., under-smoothing in function estimation; (B2) use the optimal smoothing rate with $h$ proportional to $T^{-1/5}$, but generate time series in the bootstrap world with over-smoothing estimators, i.e., $g \gg h$ with $g/h \to \infty$. No matter which approach we take, Equation (18) can be shown; see details in Theorem 1 of [27] and Theorem 5.4 of [16].
The following corollary is immediate:
Corollary 1. 
Under assumptions A1–A3 and observed data { X 0 , , X T } , the one-step-ahead PI returned by Algorithms 3 and 4 with fitted or predictive residuals is asymptotically pertinent, respectively.
However, for multi-step-ahead predictions, the analysis becomes more complicated, and the under-smoothing strategy turns out to work better. For example, considering the two-step-ahead prediction, the real-world predictive root can be written as follows:
X_{T+2} - \hat{X}_{T+2} = m(X_{T+1}) + \epsilon_{T+2} - \frac{1}{M}\sum_{i=1}^{M}\left[\hat{m}_h\!\left(\hat{m}_h(X_T) + \hat{\epsilon}_{i,T+1}\right) + \hat{\epsilon}_{i,T+2}\right]
\approx m\!\left(m(X_T) + \epsilon_{T+1}\right) + \epsilon_{T+2} - \frac{1}{M}\sum_{i=1}^{M}\hat{m}_h\!\left(\hat{m}_h(X_T) + \hat{\epsilon}_{i,T+1}\right). \qquad (20)
Correspondingly, the predictive root in the bootstrap world is:
X_{T+2}^* - \hat{X}_{T+2}^* = \hat{m}_g(X_{T+1}^*) + \hat{\epsilon}_{T+2}^{\,*} - \frac{1}{M}\sum_{i=1}^{M}\left[\hat{m}_h^*\!\left(\hat{m}_h^*(X_T) + \hat{\epsilon}_{i,T+1}^{\,*}\right) + \hat{\epsilon}_{i,T+2}^{\,*}\right]
\approx \hat{m}_g\!\left(\hat{m}_g(X_T) + \hat{\epsilon}_{T+1}^{\,*}\right) + \hat{\epsilon}_{T+2}^{\,*} - \frac{1}{M}\sum_{i=1}^{M}\hat{m}_h^*\!\left(\hat{m}_h^*(X_T) + \hat{\epsilon}_{i,T+1}^{\,*}\right), \qquad (21)
where the approximate equality is due to the application of the LLN to the sample mean of the centered residuals.
Remark 7. 
We should note that the over-smoothing approach may work better for finite samples. The reason is that applying the optimal bandwidth rate is superior when the bias-type term of the nonparametric estimation can be captured by the bootstrap. However, we will soon show that applying an under-smoothing bandwidth strategy is more accurate for multi-step-ahead predictions since it can solve the bias issue and render a PPI. Thus, in practice, we recommend adopting strategy (B2) to perform one-step-ahead predictions and adopting strategy (B1) to perform multi-step-ahead predictions. For a time series with heteroscedastic errors, the optimal bandwidth strategy is slightly different; see Section 3.3 for reference.
Based on Equations (20) and (21), as we prove that the future distribution of X T + k * converges uniformly to the future distribution of X T + k in probability, we can show that the distribution of the predictive root X T + 2 * X ^ T + 2 * in the bootstrap world also converges uniformly in probability to the distribution of the predictive root X T + 2 X ^ T + 2 in the real world. This result guarantees the asymptotic validity of the PPI. We summarize this conclusion in Theorem 2.
Theorem 2. 
Under assumptions A1–A8 and observed data { X 0 , , X T } , we have:
\sup_{|x| \le c_T} \left| F_{X_{T+k}^* - \hat{X}_{T+k}^* \mid X_T, \ldots, X_0}(x) - F_{X_{T+k} - \hat{X}_{T+k} \mid X_T, \ldots, X_0}(x) \right| \overset{p}{\to} 0, \quad \text{for } k \ge 1, \qquad (22)
where X T + k * X ^ T + k * is the k-step-ahead predictive root in the bootstrap world, and F X T + k * X ^ T + k * | X T , , X 0 ( x ) represents its distribution at point x; X T + k X ^ T + k is the k-step-ahead predictive root in the real world, and F X T + k X ^ T + k | X T , , X 0 ( x ) represents its (conditional) distribution at point x. This theorem holds for both bandwidth selection strategies.
However, since we apply a more complicated procedure to capture estimation variability, we anticipate that this results in a PPI. To see this, we first apply the Taylor expansion on the r.h.s. of Equations (20) and (21); the two predictive roots can be decomposed into several parts:
X_{T+2} - \hat{X}_{T+2} = m(m(X_T)) - \hat{m}_h(\hat{m}_h(X_T)) + m^{(1)}(\hat{x})\,\epsilon_{T+1} + \epsilon_{T+2} - \frac{1}{M}\sum_{i=1}^{M} \hat{m}_h^{(1)}(\hat{\hat{x}}_i)\,\hat{\epsilon}_{i,T+1};
X_{T+2}^* - \hat{X}_{T+2}^* = \hat{m}_g(\hat{m}_g(X_T)) - \hat{m}_h^*(\hat{m}_h^*(X_T)) + \hat{m}_g^{(1)}(\hat{x}^*)\,\hat{\epsilon}_{T+1}^{\,*} + \hat{\epsilon}_{T+2}^{\,*} - \frac{1}{M}\sum_{i=1}^{M} \hat{m}_h^{*(1)}(\hat{\hat{x}}_i^*)\,\hat{\epsilon}_{i,T+1}^{\,*}, \qquad (23)
where $\hat{x}$ and $\hat{x}^*$ are some points between $m(X_T)$ and $m(X_T) + \epsilon_{T+1}$ and between $\hat{m}_g(X_T)$ and $\hat{m}_g(X_T) + \hat{\epsilon}_{T+1}^{\,*}$, respectively; $\hat{\hat{x}}_i$ and $\hat{\hat{x}}_i^*$ are some points between $\hat{m}_h(X_T)$ and $\hat{m}_h(X_T) + \hat{\epsilon}_{i,T+1}$ and between $\hat{m}_h^*(X_T)$ and $\hat{m}_h^*(X_T) + \hat{\epsilon}_{i,T+1}^{\,*}$, respectively. The $k$-step-ahead predictive root can be expressed similarly when $k > 2$. We can consider the r.h.s. of Equation (23) to be made up of two components in both the real and bootstrap worlds: (1) the two-step-ahead estimation variability component, $m(m(X_T)) - \hat{m}_h(\hat{m}_h(X_T))$ and $\hat{m}_g(\hat{m}_g(X_T)) - \hat{m}_h^*(\hat{m}_h^*(X_T))$; (2) the rest of the terms, which are related to future innovations. For the second component, the bootstrap can mimic the real-world situation well.
We expect that the first component, i.e., the variability in the local constant estimation of the mean function, $m(m(X_T)) - \hat{m}_h(\hat{m}_h(X_T))$, can be well approximated by its variant $\hat{m}_g(\hat{m}_g(X_T)) - \hat{m}_h^*(\hat{m}_h^*(X_T))$ in the bootstrap world. Although PPIs with either of the two bandwidth selection approaches are asymptotically valid, the PPI with the bandwidth strategy (B2) is only “almost” pertinent for multi-step-ahead predictions since the variability in the local constant estimation is not well estimated in finite samples; see also the simulation results in Section 4 and Appendix C. On the other hand, the PPI with the bandwidth strategy (B1) meets our goal. We summarize this finding in Theorem 3.
Theorem 3. 
Under assumptions A1–A8 and with observed data { X 0 , , X T } Ω T , where P ( ( X 0 , , X T ) Ω T ) = 1 o ( 1 ) as T , by taking the bandwidth strategy (B1), we can build a confidence bound for the local constant estimation at step k:
\sup_{|x| \le c_T} \left| P\!\left(a_T\left(M_k(X_T) - \hat{M}_{h,k}(X_T)\right) \le x\right) - P^*\!\left(a_T\left(M_{h,k}^*(X_T) - \hat{M}_{h,k}^*(X_T)\right) \le x\right) \right| \overset{p}{\to} 0, \quad \text{for } k \ge 1; \qquad (24)
M k ( X T ) can be expressed by iteratively computing X T + i = m ( X T + i 1 ) for i = 1 , , k ; i.e., it has the form below:
M_k(X_T) = m(m(\cdots m(m(X_T)) \cdots)); \qquad (25)
M ^ h , k ( X T ) can be expressed by iteratively computing X T + i = m ^ h ( X T + i 1 ) for i = 1 , , k ; i.e., it has the form below:
\hat{M}_{h,k}(X_T) = \hat{m}_h(\hat{m}_h(\cdots \hat{m}_h(\hat{m}_h(X_T)) \cdots)); \qquad (26)
M h , k * ( X T ) and M ^ h , k * ( X T ) can be expressed similarly.
The direct implication of Theorem 3 is that the PPI generated by Algorithms 3 and 4 should have a better CVR for small sample sizes than the QPI since the estimation variability is included in the PI with high probability; see the simulation examples in Section 4.

3.3. On PPI with Heteroscedastic Errors

For time-series models with heteroscedastic errors, i.e., where the variance function σ ( x ) represents the heteroscedasticity of innovations, we do not need to care about the bias term in the nonparametric estimation of the variance function. In other words, we use neither under-smoothing nor over-smoothing bandwidth tricks on the variance function to generate the bootstrap series for covering the bias term; we can just use the bandwidth with the optimal rate to estimate the variance function from real and bootstrap series.
To see this, let us consider the two-step-ahead predictive root with heteroscedastic errors. In the real world, we have:
X_{T+2} - \hat{X}_{T+2} = m(X_{T+1}) + \sigma(X_{T+1})\epsilon_{T+2} - \frac{1}{M}\sum_{i=1}^{M}\left[\hat{m}_h\!\left(\hat{m}_h(X_T) + \hat{\sigma}_h(X_T)\hat{\epsilon}_{i,T+1}\right) + \hat{\sigma}_h(X_{T+1})\hat{\epsilon}_{i,T+2}\right]
\approx m\!\left(m(X_T) + \sigma(X_T)\epsilon_{T+1}\right) + \sigma(X_{T+1})\epsilon_{T+2} - \frac{1}{M}\sum_{i=1}^{M}\hat{m}_h\!\left(\hat{m}_h(X_T) + \hat{\sigma}_h(X_T)\hat{\epsilon}_{i,T+1}\right). \qquad (27)
Correspondingly, the predictive root in the bootstrap world is:
X_{T+2}^* - \hat{X}_{T+2}^* = \hat{m}_g(X_{T+1}^*) + \hat{\sigma}_g(X_{T+1}^*)\hat{\epsilon}_{T+2}^{\,*} - \frac{1}{M}\sum_{i=1}^{M}\left[\hat{m}_h^*\!\left(\hat{m}_h^*(X_T) + \hat{\sigma}_h^*(X_T)\hat{\epsilon}_{i,T+1}^{\,*}\right) + \hat{\sigma}_h^*(X_{T+1}^*)\hat{\epsilon}_{i,T+2}^{\,*}\right]
\approx \hat{m}_g\!\left(\hat{m}_g(X_T) + \hat{\sigma}_g(X_T^*)\hat{\epsilon}_{T+1}^{\,*}\right) + \hat{\sigma}_g(X_{T+1}^*)\hat{\epsilon}_{T+2}^{\,*} - \frac{1}{M}\sum_{i=1}^{M}\hat{m}_h^*\!\left(\hat{m}_h^*(X_T) + \hat{\sigma}_h^*(X_T)\hat{\epsilon}_{i,T+1}^{\,*}\right). \qquad (28)
Through Taylor expansion, we can obtain:
X_{T+2} - \hat{X}_{T+2} \approx m(m(X_T)) - \hat{m}_h(\hat{m}_h(X_T)) + m^{(1)}(\hat{x})\,\sigma(X_T)\epsilon_{T+1} + \sigma(X_{T+1})\epsilon_{T+2} - \frac{1}{M}\sum_{i=1}^{M} \hat{m}_h^{(1)}(\hat{\hat{x}}_i)\,\hat{\sigma}_h(X_T)\hat{\epsilon}_{i,T+1};
X_{T+2}^* - \hat{X}_{T+2}^* \approx \hat{m}_g(\hat{m}_g(X_T)) - \hat{m}_h^*(\hat{m}_h^*(X_T)) + \hat{m}_g^{(1)}(\hat{x}^*)\,\hat{\sigma}_g(X_T^*)\hat{\epsilon}_{T+1}^{\,*} + \hat{\sigma}_g(X_{T+1}^*)\hat{\epsilon}_{T+2}^{\,*} - \frac{1}{M}\sum_{i=1}^{M} \hat{m}_h^{*(1)}(\hat{\hat{x}}_i^*)\,\hat{\sigma}_h^*(X_T)\hat{\epsilon}_{i,T+1}^{\,*}. \qquad (29)
We can still consider the r.h.s. of Equation (29) to contain two components. Once we use the under-smoothing technique to cover the estimation variability for the mean function, since the residual distribution is determined by the estimated mean and variance functions, the convergence rate of the residual distribution to the true innovation distribution is dominated by the convergence rate of m ^ h ( x ) to m ( x ) . In addition, all estimators of the variance function in Equation (29) are tied with future estimated innovations; so, we are free to use the bandwidth g = h with the optimal smoothing rate to estimate the variance function, and the overall convergence rate will not change. To show this benefit, we ran some simulations, which are presented in Appendix D, to compare the performance of PIs when applying under-smoothing or the optimal bandwidth in estimating the variance function. In Section 4, we will demonstrate the use of the optimal bandwidth to estimate the variance function if the time series is heteroscedastic.
To analyze the pertinence of the PPI for time series with heteroscedastic errors, from Equation (29), it is apparent that the distribution of m ( m ( X T ) ) m ^ h ( m ^ h ( X T ) ) can still be approximated by m ^ g ( m ^ g ( X T ) ) m ^ h * ( m ^ h * ( X T ) ) . For the rest of the terms, the bootstrap can still mimic the real-world situation.

4. Simulations

In this section, we describe the simulations that we deployed to check the performance of five-step-ahead point predictions and the corresponding PIs of our algorithms in the R platform with finite samples. To obtain the optimal bandwidth h o p for our local constant estimators, we relied on the function npregbw from the R package np. The under-smoothing and over-smoothing bandwidths were taken as 0.5 · h o p and 2 · h o p , respectively.
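A sketch of this bandwidth setup is shown below; it assumes the np package's npregbw() interface (local constant regression of X_t on X_{t-1}) and that X holds the observed series, so the exact call and the object component $bw are stated as assumptions rather than as the authors' code.

```r
# Bandwidth setup (sketch). Assumes the np package; npregbw() with regtype = "lc"
# is taken to return a cross-validated local constant bandwidth object with $bw.
library(np)
x_lag <- X[-length(X)]                # X_0, ..., X_{T-1}
x_now <- X[-1]                        # X_1, ..., X_T
h_op  <- npregbw(xdat = x_lag, ydat = x_now, regtype = "lc")$bw
h_us  <- 0.5 * h_op                   # under-smoothing bandwidth
g_os  <- 2.0 * h_op                   # over-smoothing bandwidth
```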

4.1. Optimal Point Prediction

We first consider a simple non-linear model:
X_t = \log(X_{t-1}^2 + 1) + \epsilon_t, \qquad (30)
where { ϵ t } is assumed to have a standard normal distribution. The geometric ergodicity of Equation (30) can be easily checked.
We apply the “oracle” prediction as the benchmark. The oracle prediction is returned by employing a simulation approach, assuming that we know the true model and the error distribution, i.e., the simulation-based prediction, as we discussed in Section 1; see Section 3.2 of [17] for more details and the theoretical validation of this approach. Since this oracle prediction should have the best performance, we would like to challenge our nonparametric bootstrap-based methods by comparing them with the oracle prediction. We also pretend that the true model and innovation distribution are unknown when we perform the nonparametric bootstrap-based prediction. For point predictions, we just utilize fitted residuals. The application of predictive residuals will play a role in building PIs later.
In a single experiment, we take $X_0 \sim \text{Uniform}(-1, 1)$ and then iteratively generate a series of size $C + T + 1$ according to Equation (30). Here, $C$ is taken as 200 to remove the effects of the initial distribution of $X_0$. To perform oracle predictions, we take $M = 1000$ to obtain a satisfactory approximation. For a fair comparison, we also apply 1000 bootstrap replications in Algorithms 1 and 2 to obtain bootstrap-based predictions.
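As an illustration, a minimal R sketch of this data-generating step (Equation (30) with a burn-in of C = 200) could be:

```r
# Generate one series of length T + 1 from Equation (30) with a burn-in of C = 200.
gen_series <- function(T_n, C = 200) {
  out    <- numeric(C + T_n + 1)
  out[1] <- runif(1, -1, 1)                       # X_0 ~ Uniform(-1, 1)
  for (t in 2:(C + T_n + 1))
    out[t] <- log(out[t - 1]^2 + 1) + rnorm(1)    # Equation (30) with N(0,1) innovations
  tail(out, T_n + 1)                              # discard the burn-in portion
}
X <- gen_series(T_n = 100)
```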
Referring to the simulation studies in [16], we take T = 100 , 200 , k = 1 , , 5 , and employ the Mean-Squared Prediction Error (MSPE) to compare oracle and bootstrap predictions. The metric MSPE can be approximated based on the formula below:
\text{MSPE of the } k\text{-step-ahead prediction} = \frac{1}{N}\sum_{n=1}^{N}\left(X_{n,k} - P_{n,k}\right)^2, \quad \text{for } k = 1, \ldots, 5, \qquad (31)
where P n , k represents the k-th step-ahead optimal L 1 or L 2 point predictions implied by the bootstrap or simulation approach, and X n , k stands for the true future value in the n-th replication. We take N = 5000 and record all MSPEs in Table 1.
From Table 1, we can see that the MSPEs of the oracle- and bootstrap-based $L_1$- or $L_2$-optimal predictions are very close to each other. The MSPEs of the oracle optimal predictions are always smaller than those of the corresponding bootstrap predictions. This phenomenon is in line with our expectation since the bootstrap prediction is obtained with an estimated model and innovation distribution.
Rather than applying the standard normal distribution, we next consider a skewed innovation, i.e., $\epsilon_t \sim \chi^2(3) - 3$. Repeating the above process, we present the MSPEs in Table 2.
The performance of bootstrap-based predictions is also competitive with oracle predictions. Another notable phenomenon indicated by Table 2 is that the MSPE of L 2 -optimal predictions is always less than its corresponding value in L 1 -optimal predictions. The reason for this is that the L 2 -optimal prediction coincides with the L 2 loss used in MSPE. However, this phenomenon is not remarkable for the results in Table 1 since the innovation distribution is symmetric in that case.
For the non-linear model with heteroscedastic errors, we consider the following model:
X_t = \sin(X_{t-1}) + \epsilon_t\sqrt{0.5 + 0.25\,X_{t-1}^2}. \qquad (32)
Model Equation (32) is in a GARCH form, except that the regression function is non-linear. This model was also considered by [16]. We present the MSPEs of different predictions in Table 3. It reveals that our bootstrap-based optimal point prediction methods can work for the non-linear time-series model with heteroscedastic errors, and its performance is still competitive with oracle predictions.
Remark 8. 
In practice, we should mention that both local constant estimators $\hat{m}_h(x)$ and $\hat{\sigma}_h(x)$ will only be accurate when $x$ falls in a region where data are dense. Estimates in sparse regions will return large fitted residuals. These large residuals can spoil the multi-step-ahead prediction process in the bootstrap procedure. Thus, depending on which optimal prediction we are pursuing, we replace all inappropriate or numerical NaN values with the sample mean or sample median of the observed data. In addition, during the simulation studies, we truncate $\tilde{m}_h(x)$; i.e., we take $C_m$ as $5 \cdot \max\{|x_0|, \ldots, |x_T|\}$. For the mean function estimator $\tilde{m}_h^*(x)$ in the bootstrap world, we take $C_m^*$ as $\min\{2 \cdot C_m,\ 5 \cdot \max\{|x_0^*|, \ldots, |x_T^*|\}\}$ since we want to allow more variability for the bootstrap series. For the local constant estimator of the variance function, we take $c_\sigma$ and $c_\sigma^*$ as 0.01. We take $C_\sigma$ and $C_\sigma^*$ as $2 \cdot \hat{\sigma}$ and $\min\{4 \cdot \hat{\sigma},\ 2 \cdot \hat{\sigma}^*\}$, respectively; $\hat{\sigma}$ and $\hat{\sigma}^*$ are the sample standard deviations of the observed series in the real world and the bootstrap world, respectively. These truncation constants work well for the above two models. In practice, a cross-validation approach could be taken to find optimal truncation constants.

4.2. QPI and PPI

In this subsection, we try to evaluate the CVR of the QPI and PPI based on the nonparametric forward bootstrap prediction method. Similarly, we take the oracle prediction interval as the benchmark, which is computed by the QPI with a known model and innovation distribution; see the discussion in Section 1 and Section 3.2 of [17] for references on this approach.
Due to the time complexity of the double bootstrap in the bootstrap world, we only take B = 500 and M = 100 in Algorithms 3 and 4 to derive the PPI. Correspondingly, we take M = 500 to compute the QPI. In practice, people can increase the values of B and M. To make the result as consistent as possible, we still repeat the simulation process 5000 times.
The empirical CVR of the bootstrap-based QPI and PPI for k = 1 , , 5 -step-ahead predictions is determined with the formula below:
\text{CVR of the } k\text{-step-ahead prediction} = \frac{1}{N}\sum_{n=1}^{N} \mathbf{1}\!\left(X_{n,k} \in [L_{n,k}, U_{n,k}]\right), \quad \text{for } k = 1, \ldots, 5, \qquad (33)
where [ L n , k , U n , k ] and X n , k represent the k-th step-ahead prediction interval and the true future value in the n-th replication, respectively. In addition to the CVR, we are also concerned about the empirical LEN of different PIs. The empirical LEN of a PI is defined as follows:
\text{LEN of the } k\text{-step-ahead PI} = \frac{1}{N}\sum_{n=1}^{N}\left(U_{n,k} - L_{n,k}\right), \quad \text{for } k = 1, \ldots, 5. \qquad (34)
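In R, with L_nk, U_nk, and X_nk denoting length-N vectors of lower bounds, upper bounds, and true future values for a fixed k (hypothetical names used only for this sketch), Equations (33) and (34) reduce to simple averages:

```r
# Empirical CVR and LEN of the k-step-ahead PI over N replications (Equations (33)-(34)).
CVR <- mean(X_nk >= L_nk & X_nk <= U_nk)
LEN <- mean(U_nk - L_nk)
```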
Recall that the PPI can be centered at the L 1 - or L 2 -optimal point predictor, and the QPI can be found with the optimal bandwidth and the under-smoothing bandwidth; thus, we have four types of PIs based on the bootstrap. In particular, each type of PI can be obtained with fitted or predictive residuals. In total, we have eight bootstrap-type PIs and one oracle PI. In addition, to observe the effects of introducing the predictive residuals and the superiority of the PPI, we consider three sample sizes, 50, 100, and 200. All CVRs and LENs for different PIs for predicting Equations (30) and (32) are presented in Table 4 and Table 5, respectively.
From these tables, we can observe that the SPI (i.e., the oracle PI) is the best one, with the most accurate CVR and a relatively small LEN. The QPI with fitted residuals severely under-covers the true future value, especially for a small sample size. With predictive residuals, although the LEN of the PI increases, the CVR of the QPI improves significantly. After applying the under-smoothing bandwidth with the QPI, the CVR is further improved for multi-step-ahead (i.e., $k \ge 2$) predictions, regardless of whether fitted or predictive residuals are used. The PPI with fitted residuals outperforms the QPI with fitted residuals. The PPI with predictive residuals achieves the most accurate CVR among the various bootstrap-based PIs, especially when the sample size is small, though the price is that its LEN is the largest among the PIs. We should note that the QPI with predictive residuals and the under-smoothing bandwidth can achieve an accurate CVR with 200 samples for these two models. However, in practice, we may not know how large a sample is needed to guarantee that the QPI works well. Thus, we recommend taking the PPI with predictive residuals as the first choice.
Remark 9. 
We should clarify that the CVR computed using Equation (33) is the unconditional coverage rate of X T + k since it is an average of the conditional coverage of X T + k for all replications.

4.3. Simulation Results for Appendices

We carried out a simulation study to show that the QPIs with the optimal bandwidth and under-smoothing bandwidth are asymptotically equivalent; see the results in Table 6, and see the formal analysis in Appendix B.
We also deployed simulations to check the effects of applying under-smoothing or over-smoothing tricks on the performance of the PPI. We took the sample size T + 1 to be 50 or 500 and performed simulations 5000 times on the first model; see Table 7 and Appendix C for the results and analysis, respectively.
For a model with heteroscedastic errors, as we mentioned in Section 3.3, we can rely on the optimal bandwidth to estimate the variance functions. To check this claim, we consider two strategies for the bandwidth of the estimator for the variance function: (1) take the under-smoothing bandwidth as we do for the mean function estimator; (2) take the bandwidth with the optimal rate. To estimate the mean function in the bootstrap world, we continue using the under-smoothing bandwidth strategy. The simulation results based on Equation (32) with a small sample size are shown in Table 8; see the corresponding discussion in Appendix D.

5. Conclusions

In this paper, we propose forward bootstrap prediction algorithms based on local constant estimation of the model. With theoretical and practical validations, we show that our bootstrap-based point predictions work well, and their MSPEs are very close to those of the oracle predictions. By debiasing the nonparametric estimation with an under-smoothing bandwidth, we show that the confidence bounds for the multi-step-ahead estimator can be approximated by the bootstrap. As a result, we can obtain a pertinent prediction interval by using a specifically designed algorithm. Empirically, we further use predictive residuals in making predictions, which alleviates the under-coverage of the PI for a small sample size. Among the different bootstrap-based PIs, as revealed by the simulation studies, the PPI with predictive residuals is the best one, and it is competitive with the oracle PI.

Author Contributions

Both authors, D.N.P. and K.W., contributed equally to this project. All authors have read and agreed to the published version of the manuscript.

Funding

The research of the first author was partially supported by NSF grant DMS 19-14556. The research of the second author was partially supported by the Richard Libby Graduate Research Award.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs

Proof of Theorem 1. 
To show that Equation (15) is satisfied for $k \ge 1$, it suffices to show the case $k = 2$; the cases $k = 1$ and $k > 2$ can be handled similarly. $F_{X_{T+2} \mid X_T}(x)$ is equivalent to:
F_{X_{T+2} \mid X_T}(x) = P(X_{T+2} \le x \mid X_T) = P\!\left(m(X_{T+1}) + \sigma(X_{T+1})\epsilon_{T+2} \le x \mid X_T\right)
= P\!\left(\epsilon_{T+2} \le \frac{x - m(m(X_T) + \sigma(X_T)\epsilon_{T+1})}{\sigma(m(X_T) + \sigma(X_T)\epsilon_{T+1})} \,\Big|\, X_T\right)
= E\!\left[\, P\!\left(\epsilon_{T+2} \le \frac{x - m(m(X_T) + \sigma(X_T)\epsilon_{T+1})}{\sigma(m(X_T) + \sigma(X_T)\epsilon_{T+1})} \,\Big|\, \epsilon_{T+1}, X_T\right) \,\Big|\, X_T\right]
= E\!\left[\, F_\epsilon\!\left(\frac{x - m(m(X_T) + \sigma(X_T)\epsilon_{T+1})}{\sigma(m(X_T) + \sigma(X_T)\epsilon_{T+1})}\right) \,\Big|\, X_T\right] = E\!\left[\, F_\epsilon\!\left(G(x, X_T, \epsilon_{T+1})\right) \mid X_T\right]; \qquad (A1)
we use $G(x, X_T, \epsilon_{T+1})$ to represent $\frac{x - m(m(X_T) + \sigma(X_T)\epsilon_{T+1})}{\sigma(m(X_T) + \sigma(X_T)\epsilon_{T+1})}$ in order to simplify notation. Similarly, we can analyze $F_{X_{T+2}^* \mid X_T, \ldots, X_0}(x)$, which has the following equivalent expressions:
F_{X_{T+2}^* \mid X_T, \ldots, X_0}(x) = P(X_{T+2}^* \le x \mid X_T, \ldots, X_0)
= E\!\left[\, P\!\left(\hat{\epsilon}_{T+2}^{\,*} \le \hat{G}(x, X_T, \hat{\epsilon}_{T+1}^{\,*}) \mid \hat{\epsilon}_{T+1}^{\,*}, X_T, \ldots, X_0\right) \,\Big|\, X_T, \ldots, X_0\right]
= E^*\!\left[\, \hat{F}_\epsilon\!\left(\hat{G}(x, X_T, \hat{\epsilon}_{T+1}^{\,*})\right)\right], \qquad (A2)
where $\hat{G}(x, X_T, \hat{\epsilon}_{T+1}^{\,*})$ represents $\frac{x - \hat{m}_h(\hat{m}_h(X_T) + \hat{\sigma}_h(X_T)\hat{\epsilon}_{T+1}^{\,*})}{\hat{\sigma}_h(\hat{m}_h(X_T) + \hat{\sigma}_h(X_T)\hat{\epsilon}_{T+1}^{\,*})}$, and $E^*(\cdot)$ represents the expectation in the bootstrap world, i.e., $E(\cdot \mid X_T, \ldots, X_0)$. Thus, we hope to show:
\sup_{|x| \le c_T} \left| E^*\!\left[\hat{F}_\epsilon(\hat{G}(x, X_T, \hat{\epsilon}_{T+1}^{\,*}))\right] - E\!\left[F_\epsilon(G(x, X_T, \epsilon_{T+1})) \mid X_T\right] \right| \overset{p}{\to} 0. \qquad (A3)
However, it is hard to analyze Equation (A3) since there is a random variable $X_T$ inside $E^*(\cdot)$ and $E(\cdot)$. Thus, we consider two regions of $X_T$, i.e., (1) $|X_T| > \gamma_T$ and (2) $|X_T| \le \gamma_T$, where $\gamma_T$ is an appropriate sequence that converges to infinity. Under A1, A2, and A5, by Lemma 1 of [30], we have:
P(|X_T| > \gamma_T) \to 0. \qquad (A4)
In addition, we have the relationship:
P\!\left(\sup_{|x| \le c_T} \left| E^*[\hat{F}_\epsilon(\hat{G}(x, X_T, \hat{\epsilon}_{T+1}^{\,*}))] - E[F_\epsilon(G(x, X_T, \epsilon_{T+1})) \mid X_T] \right| > \varepsilon\right)
\le P(|X_T| > \gamma_T) + P\!\left(|X_T| \le \gamma_T,\; \sup_{|x| \le c_T} \left| E^*[\hat{F}_\epsilon(\hat{G}(x, X_T, \hat{\epsilon}_{T+1}^{\,*}))] - E[F_\epsilon(G(x, X_T, \epsilon_{T+1})) \mid X_T] \right| > \varepsilon\right). \qquad (A5)
Thus, to verify Equation (A3), we just need to show that the second term on the r.h.s. of Equation (A5) converges to 0. We can take the sequences c T and γ T to be the same sequence, which converges to infinity slowly enough. Then, it is enough for us to analyze the asymptotic probability of the following expression:
\sup_{|x| \le c_T, |y| \le c_T} \left| E^*\!\left[\hat{F}_\epsilon(\hat{G}(x, y, \hat{\epsilon}_{T+1}^{\,*}))\right] - E\!\left[F_\epsilon(G(x, y, \epsilon_{T+1}))\right] \right| > \varepsilon. \qquad (A6)
We decompose the l.h.s. of Equation (A6) into:
\sup_{|x| \le c_T, |y| \le c_T} \left| E^*[\hat{F}_\epsilon(\hat{G}(x, y, \hat{\epsilon}_{T+1}^{\,*}))] - E[F_\epsilon(G(x, y, \epsilon_{T+1}))] \right|
= \sup_{|x| \le c_T, |y| \le c_T} \left| E^*[\hat{F}_\epsilon(\hat{G}(x, y, \hat{\epsilon}_{T+1}^{\,*}))] - E^*[F_\epsilon(\hat{G}(x, y, \hat{\epsilon}_{T+1}^{\,*}))] + E^*[F_\epsilon(\hat{G}(x, y, \hat{\epsilon}_{T+1}^{\,*}))] - E[F_\epsilon(G(x, y, \epsilon_{T+1}))] \right|
\le \sup_{|x| \le c_T, |y| \le c_T} \left| E^*[\hat{F}_\epsilon(\hat{G}(x, y, \hat{\epsilon}_{T+1}^{\,*}))] - E^*[F_\epsilon(\hat{G}(x, y, \hat{\epsilon}_{T+1}^{\,*}))] \right| + \sup_{|x| \le c_T, |y| \le c_T} \left| E^*[F_\epsilon(\hat{G}(x, y, \hat{\epsilon}_{T+1}^{\,*}))] - E[F_\epsilon(G(x, y, \epsilon_{T+1}))] \right|. \qquad (A7)
Then, we analyze the two terms on the r.h.s. of Equation (A7) separately. For the first term, we have:
\[
\begin{aligned}
\sup_{|x|\le c_T,\,|y|\le c_T}\big|E^*[\hat F_\epsilon(\hat G(x,y,\hat\epsilon^*_{T+1}))]-E^*[F_\epsilon(\hat G(x,y,\hat\epsilon^*_{T+1}))]\big|
&\le\sup_{|x|\le c_T,\,|y|\le c_T}E^*\big|\hat F_\epsilon(\hat G(x,y,\hat\epsilon^*_{T+1}))-F_\epsilon(\hat G(x,y,\hat\epsilon^*_{T+1}))\big|\\
&\le\sup_{|x|\le c_T,\,|y|\le c_T,\,z}\big|\hat F_\epsilon(\hat G(x,y,z))-F_\epsilon(\hat G(x,y,z))\big|\xrightarrow{p}0,\quad\text{under Equation (14)}.
\end{aligned}
\tag{A8}
\]
For the second term on the r.h.s. of Equation (A7), we have:
\[
\begin{aligned}
&\sup_{|x|\le c_T,\,|y|\le c_T}\big|E^*[F_\epsilon(\hat G(x,y,\hat\epsilon^*_{T+1}))]-E[F_\epsilon(G(x,y,\epsilon_{T+1}))]\big|\\
&\quad=\sup_{|x|\le c_T,\,|y|\le c_T}\Big|\frac{1}{T}\sum_{i=1}^{T}F_\epsilon(\hat G(x,y,\hat\epsilon_i))-\frac{1}{T}\sum_{i=1}^{T}F_\epsilon(G(x,y,\epsilon_i))+\frac{1}{T}\sum_{i=1}^{T}F_\epsilon(G(x,y,\epsilon_i))-E[F_\epsilon(G(x,y,\epsilon_{T+1}))]\Big|\\
&\quad\le\sup_{|x|\le c_T,\,|y|\le c_T}\Big|\frac{1}{T}\sum_{i=1}^{T}F_\epsilon(\hat G(x,y,\hat\epsilon_i))-\frac{1}{T}\sum_{i=1}^{T}F_\epsilon(G(x,y,\epsilon_i))\Big|
+\sup_{|x|\le c_T,\,|y|\le c_T}\Big|\frac{1}{T}\sum_{i=1}^{T}F_\epsilon(G(x,y,\epsilon_i))-E[F_\epsilon(G(x,y,\epsilon_{T+1}))]\Big|,
\end{aligned}
\tag{A9}
\]
where $\{\epsilon_i\}_{i=1}^{T}$ is taken as $(X_i-m(X_{i-1}))/\sigma(X_{i-1})$ for $i=1,\ldots,T$, and $\{\hat\epsilon_i\}_{i=1}^{T}$ is computed as $(X_i-\hat m(X_{i-1}))/\hat\sigma(X_{i-1})$ for $i=1,\ldots,T$. We can show:
\[
\begin{aligned}
P\Big(\max_{i=1,\ldots,T}|\epsilon_i-\hat\epsilon_i|>\varepsilon\Big)
&=P\left(\max_{i=1,\ldots,T}\Big|\frac{X_i-m(X_{i-1})}{\sigma(X_{i-1})}-\frac{X_i-\hat m(X_{i-1})}{\hat\sigma(X_{i-1})}\Big|>\varepsilon\right)\\
&\le P\Big(\Big\{\max_{i=1,\ldots,T}|X_i|>c_T\Big\}\cup\Big\{\max_{i=1,\ldots,T}|X_{i-1}|>c_T\Big\}\Big)\\
&\quad+P\left(\Big\{\max_{i=1,\ldots,T}|X_i|<c_T\Big\}\cap\Big\{\max_{i=1,\ldots,T}|X_{i-1}|<c_T\Big\}\cap\Big\{\max_{i=1,\ldots,T}\Big|\frac{X_i-m(X_{i-1})}{\sigma(X_{i-1})}-\frac{X_i-\hat m(X_{i-1})}{\hat\sigma(X_{i-1})}\Big|>\varepsilon\Big\}\right)\\
&\le o(1)+P\left(\sup_{|x|,|y|\le c_T}\Big|\frac{x-m(y)}{\sigma(y)}-\frac{x-\hat m(y)}{\hat\sigma(y)}\Big|>\varepsilon\right)\to 0.
\end{aligned}
\tag{A10}
\]
We further consider the two terms on the r.h.s. of Equation (A9) separately. For the first term, by applying a Taylor expansion, we have:
\[
\begin{aligned}
&\sup_{|x|\le c_T,\,|y|\le c_T}\Big|\frac{1}{T}\sum_{i=1}^{T}F_\epsilon(\hat G(x,y,\hat\epsilon_i))-\frac{1}{T}\sum_{i=1}^{T}F_\epsilon(G(x,y,\epsilon_i))\Big|\\
&\quad=\sup_{|x|\le c_T,\,|y|\le c_T}\Big|\frac{1}{T}\sum_{i=1}^{T}\Big[F_\epsilon(G(x,y,\epsilon_i))+f_\epsilon(o_i)\big(\hat G(x,y,\hat\epsilon_i)-G(x,y,\epsilon_i)\big)\Big]-\frac{1}{T}\sum_{i=1}^{T}F_\epsilon(G(x,y,\epsilon_i))\Big|\\
&\quad=\sup_{|x|\le c_T,\,|y|\le c_T}\Big|\frac{1}{T}\sum_{i=1}^{T}f_\epsilon(o_i)\big(\hat G(x,y,\hat\epsilon_i)-G(x,y,\epsilon_i)\big)\Big|
\le\sup_{|x|\le c_T,\,|y|\le c_T}\frac{1}{T}\sum_{i=1}^{T}\big|f_\epsilon(o_i)\big(\hat G(x,y,\hat\epsilon_i)-G(x,y,\epsilon_i)\big)\big|\\
&\quad\le\sup_{|x|\le c_T,\,|y|\le c_T}\sup_z|f_\epsilon(z)|\cdot\frac{1}{T}\sum_{i=1}^{T}\big|\hat G(x,y,\hat\epsilon_i)-G(x,y,\epsilon_i)\big|
\le\sup_{|x|\le c_T,\,|y|\le c_T}C\cdot\frac{1}{T}\sum_{i=1}^{T}\big|\hat G(x,y,\hat\epsilon_i)-G(x,y,\epsilon_i)\big|\quad(\text{under A7})\\
&\quad\le\sup_{|x|\le c_T,\,|y|\le c_T,\,j\in\{1,\ldots,T\}}C\cdot\big|\hat G(x,y,\hat\epsilon_j)-G(x,y,\epsilon_j)\big|.
\end{aligned}
\tag{A11}
\]
From Equation (A10) and Lemma 1, we verify that Equation (A11) converges to 0 in probability. For the second term on the r.h.s. of Equation (A9), by the uniform law of large numbers, we have:
\[
\sup_{|x|\le c_T,\,|y|\le c_T}\Big|\frac{1}{T}\sum_{i=1}^{T}F_\epsilon(G(x,y,\epsilon_i))-E[F_\epsilon(G(x,y,\epsilon_{T+1}))]\Big|\xrightarrow{p}0.
\tag{A12}
\]
Combining all the pieces, Equation (A6) converges to 0 in probability, which implies Theorem 1. □
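As a purely illustrative aside (not part of the formal argument), the bootstrap quantity $E^*\big[\hat F_\epsilon(\hat G(x,X_T,\hat\epsilon^*_{T+1}))\big]$ of Equation (A2) is exactly what one approximates by Monte Carlo in practice. The following Python sketch shows one way to do this; the callables m_hat and sigma_hat are hypothetical placeholders standing in for the local constant estimators $\hat m_h$ and $\hat\sigma_h$, and are assumed to accept NumPy arrays.

```python
import numpy as np

def boot_two_step_cdf(x, X_T, resid_hat, m_hat, sigma_hat, B=2000, seed=None):
    """Monte Carlo approximation of the bootstrap predictive CDF in Equation (A2):
    F*_{X_{T+2} | X_T,...,X_0}(x) = E*[ F_hat_eps( G_hat(x, X_T, eps*_{T+1}) ) ],
    averaging over residuals eps*_{T+1} resampled from their empirical distribution."""
    rng = np.random.default_rng(seed)
    eps_star = rng.choice(resid_hat, size=B, replace=True)   # eps*_{T+1} drawn from F_hat_eps
    x_next = m_hat(X_T) + sigma_hat(X_T) * eps_star          # bootstrap X*_{T+1}
    g_hat = (x - m_hat(x_next)) / sigma_hat(x_next)          # G_hat(x, X_T, eps*_{T+1})
    # empirical CDF F_hat_eps evaluated at each G_hat value, then averaged over the B draws
    return np.mean(resid_hat[None, :] <= g_hat[:, None])
```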
Proof of Theorem 2. 
We want to show:
\[
\sup_{|x|\le c_T}\Big|F^*_{X^*_{T+k}-\hat X^*_{T+k}\mid X_T,\ldots,X_0}(x)-F_{X_{T+k}-\hat X_{T+k}\mid X_T,\ldots,X_0}(x)\Big|\xrightarrow{p}0,\quad\text{for }k\ge 1,
\tag{A13}
\]
where $X^*_{T+k}-\hat X^*_{T+k}$ and $X_{T+k}-\hat X_{T+k}$ are the predictive roots. We again present the proof for the two-step-ahead prediction; the proof for higher prediction steps is similar. For two-step-ahead predictions, the predictive roots have the same expressions as in Equations (20) and (21). Thus, we want to measure the asymptotic distance between the following two quantities:
\[
\begin{aligned}
&P\Big(m\big(m(X_T)+\epsilon_{T+1}\big)+\epsilon_{T+2}-\frac{1}{M}\sum_{j=1}^{M}\hat m_h\big(\hat m_h(X_T)+\hat\epsilon_{j,T+1}\big)\le x\,\Big|\,X_T,\ldots,X_0\Big);\\
&P\Big(\hat m_h\big(\hat m_h(X_T)+\hat\epsilon^*_{T+1}\big)+\hat\epsilon^*_{T+2}-\frac{1}{M}\sum_{j=1}^{M}\hat m^*_h\big(\hat m^*_h(X_T)+\hat\epsilon^*_{j,T+1}\big)\le x\,\Big|\,X_T,\ldots,X_0\Big).
\end{aligned}
\tag{A14}
\]
Compared to Equations (A1) and (A2), Equations (20) and (21) contain just two additional terms, $\frac{1}{M}\sum_{j=1}^{M}\hat m_h(\hat m_h(X_T)+\hat\epsilon_{j,T+1})$ and $\frac{1}{M}\sum_{j=1}^{M}\hat m^*_h(\hat m^*_h(X_T)+\hat\epsilon^*_{j,T+1})$, in the predictive roots in the real and bootstrap worlds, respectively. By the LLN, these two terms converge to their corresponding means in the real and bootstrap worlds. Based on the consistency between $\hat m_h(\cdot)$ and $\hat m^*_h(\cdot)$, we can then prove Theorem 2 by the same procedure used to prove Theorem 1. □
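To make the role of the predictive root concrete, the following minimal Python sketch (an illustration under the assumption that the interval is formed from empirical quantiles of the simulated bootstrap roots; the variable names are hypothetical) shows how a pertinent interval is assembled once B bootstrap roots $X^*_{T+k}-\hat X^*_{T+k}$ have been generated.

```python
import numpy as np

def interval_from_predictive_roots(point_pred, boot_roots, alpha=0.05):
    """Build a (1 - alpha) prediction interval centered at the k-step-ahead point
    prediction, widened by empirical quantiles of the bootstrap predictive roots
    X*_{T+k} - X_hat*_{T+k}."""
    lo, hi = np.quantile(boot_roots, [alpha / 2.0, 1.0 - alpha / 2.0])
    return point_pred + lo, point_pred + hi
```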
Proof of Theorem 3. 
The proof is carried out on the set $\{X_0,\ldots,X_T\}\in\Omega_T$. We need to verify Equation (24); i.e., we can build confidence bounds for the k-step-ahead estimation by the bootstrap. Again, we focus on the two-step-ahead prediction; i.e., we want to show:
\[
\sup_{|x|\le c_T}\Big|P\Big(\sqrt{Th}\big(\hat m_h(\hat m_h(X_T))-m(m(X_T))\big)\le x\Big)-P\Big(\sqrt{Th}\big(\hat m^*_h(\hat m^*_h(X_T))-\hat m_h(\hat m_h(X_T))\big)\le x\Big)\Big|\xrightarrow{p}0.
\tag{A15}
\]
Applying the property $P(|X_T|>c_T)\to 0$ again, it is enough to show:
\[
\sup_{|x|,|y|\le c_T}\Big|P\Big(\sqrt{Th}\big(\hat m_h(\hat m_h(y))-m(m(y))\big)\le x\Big)-P\Big(\sqrt{Th}\big(\hat m^*_h(\hat m^*_h(y))-\hat m_h(\hat m_h(y))\big)\le x\Big)\Big|\xrightarrow{p}0.
\tag{A16}
\]
To handle the uniform convergence over y, we construct an $\varepsilon$-covering of the interval $[-c_T,c_T]$. Let the $\varepsilon$-covering number of $[-c_T,c_T]$ be $C_N=N(\varepsilon;[-c_T,c_T];|\cdot|)$, which means that for every $y\in[-c_T,c_T]$ there exists $i\in\{1,2,\ldots,C_N\}$ such that $|y-y_i|\le\varepsilon$ for $\varepsilon>0$. Denoting by $y_0\in\{y_1,\ldots,y_{C_N}\}$ the covering point associated with y, we can consider:
\[
\begin{aligned}
&\sup_{|x|,|y|\le c_T}\Big|P\Big(\sqrt{Th}\big(\hat m_h(\hat m_h(y))-m(m(y))\big)\le x\Big)-P\Big(\sqrt{Th}\big(\hat m^*_h(\hat m^*_h(y))-\hat m_h(\hat m_h(y))\big)\le x\Big)\Big|\\
&\quad\le\sup_{|x|,|y|\le c_T}\Big|P\Big(\sqrt{Th}\big(\hat m_h(\hat m_h(y))-m(m(y))\big)\le x\Big)-P\Big(\sqrt{Th}\big(\hat m_h(\hat m_h(y_0))-m(m(y_0))\big)\le x\Big)\Big|\\
&\qquad+\sup_{|x|\le c_T,\,y_0\in\{y_1,\ldots,y_{C_N}\}}\Big|P\Big(\sqrt{Th}\big(\hat m_h(\hat m_h(y_0))-m(m(y_0))\big)\le x\Big)-P\Big(\sqrt{Th}\big(\hat m^*_h(\hat m^*_h(y_0))-\hat m_h(\hat m_h(y_0))\big)\le x\Big)\Big|\\
&\qquad+\sup_{|x|,|y|\le c_T}\Big|P\Big(\sqrt{Th}\big(\hat m^*_h(\hat m^*_h(y_0))-\hat m_h(\hat m_h(y_0))\big)\le x\Big)-P\Big(\sqrt{Th}\big(\hat m^*_h(\hat m^*_h(y))-\hat m_h(\hat m_h(y))\big)\le x\Big)\Big|.
\end{aligned}
\tag{A17}
\]
For the first term on the r.h.s. of Equation (A17), we have:
\[
\begin{aligned}
&\sup_{|x|,|y|\le c_T}\Big|P\Big(\sqrt{Th}\big(\hat m_h(\hat m_h(y))-m(m(y))\big)\le x\Big)-P\Big(\sqrt{Th}\big(\hat m_h(\hat m_h(y_0))-m(m(y_0))\big)\le x\Big)\Big|\\
&\quad=\sup_{|x|,|y|\le c_T}\Big|P\Big(\sqrt{Th}\big(\hat m_h(\hat m_h(y_0))-m(m(y_0))\big)+\sqrt{Th}\big(C_1(\hat m_h(y)-\hat m_h(y_0))+C_2(m(y_0)-m(y))\big)\le x\Big)\\
&\qquad\qquad-P\Big(\sqrt{Th}\big(\hat m_h(\hat m_h(y_0))-m(m(y_0))\big)\le x\Big)\Big|,
\end{aligned}
\tag{A18}
\]
where $C_1$ and $C_2$ are some finite constants, since the derivatives of $\hat m_h(\cdot)$ and $m(\cdot)$ are bounded. Considering the first probability inside the absolute value on the r.h.s. of Equation (A18), we can think of its argument as a convolution of two random variables:
\[
P\Big(\sqrt{Th}\big(\hat m_h(\hat m_h(y_0))-m(m(y_0))\big)+\sqrt{Th}\big(C_1(\hat m_h(y)-\hat m_h(y_0))+C_2(m(y_0)-m(y))\big)\le x\Big)=P(X+Z\le x).
\tag{A19}
\]
Further, based on the smoothness of $\hat m_h(\cdot)$ and $m(\cdot)$, we can again take $\varepsilon$ small enough to make the random variable Z close to degenerate, i.e., $P(Z=0)=1-P(Z\in A)=1-o(1)$, where A is a small set around 0 that does not contain 0. Thus, Equation (A18) can be written as:
\[
\begin{aligned}
\sup_{|x|,|y|\le c_T}\big|P(X+Z\le x)-P(X\le x)\big|
&=\sup_{|x|,|y|\le c_T}\big|P(X+0\le x,\,Z=0)+P(X+Z\le x,\,Z\in A)-P(X\le x)\big|\\
&\le\sup_{|x|,|y|\le c_T}\big|P(X\le x)+o(1)+o(1)-P(X\le x)\big|=o(1).
\end{aligned}
\tag{A20}
\]
Similarly, the last term on the r.h.s. of Equation (A17) can also be shown to converge to 0. We can then focus on analyzing the middle term; in other words, it is enough to analyze the pointwise convergence between the distributions in the real and bootstrap worlds. Following the idea of [27] for approximating the distribution of a nonparametric estimator by the bootstrap, we decompose $\sqrt{Th}\big(\hat m_h(\hat m_h(y_0))-m(m(y_0))\big)$ into bias-type and variance terms:
\[
\begin{aligned}
\sqrt{Th}\big(\hat m_h(\hat m_h(y_0))-m(m(y_0))\big)
&=\sqrt{Th}\left(\frac{\sum_{t=0}^{T-1}K_h\big(\hat m_h(y_0)-X_t\big)X_{t+1}}{T\hat f_h(\hat m_h(y_0))}-\frac{\sum_{t=0}^{T-1}K_h\big(\hat m_h(y_0)-X_t\big)\cdot m(m(y_0))}{T\hat f_h(\hat m_h(y_0))}\right)\\
&=\sqrt{Th}\left(\frac{\hat r_{V,h}(\hat m_h(y_0))}{\hat f_h(\hat m_h(y_0))}+\frac{\hat r_{B,h}(\hat m_h(y_0))}{\hat f_h(\hat m_h(y_0))}\right),
\end{aligned}
\tag{A21}
\]
where
\[
\hat r_{V,h}(\hat m_h(y_0))=\frac{1}{T}\sum_{t=0}^{T-1}K_h\big(\hat m_h(y_0)-X_t\big)\,\epsilon_{t+1};\qquad
\hat r_{B,h}(\hat m_h(y_0))=\frac{1}{T}\sum_{t=0}^{T-1}K_h\big(\hat m_h(y_0)-X_t\big)\big(m(X_t)-m(m(y_0))\big),
\tag{A22}
\]
where $K_h(\cdot)$ denotes the rescaled kernel $\frac{1}{h}K(\cdot/h)$. Carrying out the same decomposition for $\sqrt{Th}\big(\hat m^*_h(\hat m^*_h(y_0))-\hat m_h(\hat m_h(y_0))\big)$, we can obtain:
\[
\sqrt{Th}\big(\hat m^*_h(\hat m^*_h(y_0))-\hat m_h(\hat m_h(y_0))\big)=\sqrt{Th}\left(\frac{\hat r^*_{V,h}(\hat m^*_h(y_0))}{\hat f^*_h(\hat m^*_h(y_0))}+\frac{\hat r^*_{B,h}(\hat m^*_h(y_0))}{\hat f^*_h(\hat m^*_h(y_0))}\right),
\tag{A23}
\]
where
\[
\hat r^*_{V,h}(\hat m^*_h(y_0))=\frac{1}{T}\sum_{t=0}^{T-1}K_h\big(\hat m^*_h(y_0)-X^*_t\big)\,\hat\epsilon^*_{t+1};\qquad
\hat r^*_{B,h}(\hat m^*_h(y_0))=\frac{1}{T}\sum_{t=0}^{T-1}K_h\big(\hat m^*_h(y_0)-X^*_t\big)\big(\hat m_h(X^*_t)-\hat m_h(\hat m_h(y_0))\big).
\tag{A24}
\]
For the variance term, by Lemma 4.4 of [27], we have:
\[
\sup_x\big|P\big(\sqrt{Th}\,\hat r_{V,h}(x_0)\le x\big)-P\big(Z(x_0)\le x\big)\big|=o(1);\qquad
\sup_x\big|P\big(\sqrt{Th}\,\hat r^*_{V,h}(x_0)\le x\big)-P\big(Z(x_0)\le x\big)\big|=o_p(1),
\tag{A25}
\]
where $Z(x_0)$ has the distribution $N(0,\tau^2(x_0))$, with $\tau^2(x_0)=f_X(x_0)\int K^2(v)\,dv$ and $x_0\in\mathbb{R}$. Since $\hat m_h(y_0)$ and $\hat m^*_h(y_0)$ both converge to $m(y_0)$ in probability and the target distribution is continuous, the continuous mapping theorem yields the uniform convergence between the distributions of $\sqrt{Th}\,\hat r_{V,h}(m(y_0))$ and $\sqrt{Th}\,\hat r_{V,h}(\hat m_h(y_0))$, i.e.:
\[
\sup_x\big|P\big(\sqrt{Th}\,\hat r_{V,h}(\hat m_h(y_0))\le x\big)-P\big(\sqrt{Th}\,\hat r_{V,h}(m(y_0))\le x\big)\big|=o(1).
\tag{A26}
\]
To show the analogous uniform convergence between $\sqrt{Th}\,\hat r^*_{V,h}(m(y_0))$ and $\sqrt{Th}\,\hat r^*_{V,h}(\hat m^*_h(y_0))$, we need the distribution of $\sqrt{Th}\,\hat r^*_{V,h}(m(y_0))$ to be continuous; it is a convolution of i.i.d. random variables $\{\hat\epsilon^*_i\}_{i=1}^{T}\sim\hat F_\epsilon$. Unfortunately, $\hat F_\epsilon$ is the empirical distribution of the residuals and is therefore discrete. To make the analysis more convenient, we take a convolution approach to smooth the distribution of the empirical residuals; i.e., we define another random variable as the sum of $\hat\epsilon$ and a normal random variable $\xi$:
\[
\tilde\epsilon=\hat\epsilon+\xi,
\tag{A27}
\]
where $\xi\sim N(0,L(T))$ and $L(T)\to 0$ at an appropriate rate. It is easy to show that the distribution of $\tilde\epsilon$, denoted $\tilde F_\epsilon$, is asymptotically equivalent to $\hat F_\epsilon$; i.e., Equation (14) is also satisfied for $\tilde F_\epsilon$. In practice, we can take $L(T)$ small enough and still bootstrap the time series based on $\hat F_\epsilon$; from a theoretical point of view, however, we work with $\tilde F_\epsilon$. To simplify the notation, we use $\hat F_\epsilon$ throughout this paper, with its meaning determined by the context.
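As a practical illustration of the smoothing device above, the following sketch draws from $\tilde F_\epsilon$ by resampling the estimated residuals and adding independent $N(0,L(T))$ noise; the particular vanishing rate chosen for $L(T)$ below is an assumption made only for illustration.

```python
import numpy as np

def draw_smoothed_residuals(resid_hat, size, L_T=None, seed=None):
    """Sample from the smoothed residual distribution F_tilde_eps, i.e., the law of
    eps_hat + xi with xi ~ N(0, L(T)) and L(T) -> 0; the default L(T) = 1/T is only
    an illustrative choice of a sufficiently fast vanishing rate."""
    rng = np.random.default_rng(seed)
    if L_T is None:
        L_T = 1.0 / len(resid_hat)
    eps_star = rng.choice(resid_hat, size=size, replace=True)
    return eps_star + rng.normal(0.0, np.sqrt(L_T), size=size)
```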
Combining all the pieces, we can obtain:
\[
\sup_x\big|P\big(\sqrt{Th}\,\hat r_{V,h}(\hat m_h(y_0))\le x\big)-P\big(Z(m(y_0))\le x\big)\big|=o_p(1);\qquad
\sup_x\big|P\big(\sqrt{Th}\,\hat r^*_{V,h}(\hat m^*_h(y_0))\le x\big)-P\big(Z(m(y_0))\le x\big)\big|=o_p(1).
\tag{A28}
\]
It remains to analyze the bias-type terms in the real and bootstrap worlds. We first consider the bias-type term $\hat r_{B,h}(\hat m_h(y_0))$:
\[
\begin{aligned}
\sqrt{Th}\,\hat r_{B,h}(\hat m_h(y_0))
&=\sqrt{\frac{h}{T}}\sum_{t=0}^{T-1}K_h\big(\hat m_h(y_0)-X_t\big)\cdot\big(m(X_t)-m(m(y_0))\big)\\
&=\sqrt{\frac{h}{T}}\sum_{t=0}^{T-1}\Big[K_h\big(m(y_0)-X_t\big)+K_h^{(1)}(\hat x)\cdot\big(\hat m_h(y_0)-m(y_0)\big)\Big]\cdot\big(m(X_t)-m(m(y_0))\big)\\
&=\sqrt{\frac{h}{T}}\sum_{t=0}^{T-1}K_h\big(m(y_0)-X_t\big)\cdot\big(m(X_t)-m(m(y_0))\big)
+\sqrt{\frac{h}{T}}\sum_{t=0}^{T-1}K_h^{(1)}(\hat x)\cdot\big(\hat m_h(y_0)-m(y_0)\big)\cdot\big(m(X_t)-m(m(y_0))\big).
\end{aligned}
\tag{A29}
\]
For the first term on the r.h.s. of Equation (A29), based on the ergodicity of the $\{X_t\}$ series, we can find that the mean of this term is:
\[
\begin{aligned}
&E\left[\sqrt{\frac{h}{T}}\sum_{t=0}^{T-1}K_h\big(m(y_0)-X_t\big)\cdot\big(m(X_t)-m(m(y_0))\big)\right]
=E\Big[\sqrt{Th}\,E\big[K_h\big(m(y_0)-X_1\big)\cdot\big(m(X_1)-m(m(y_0))\big)\mid X_0\big]\Big]\\
&\quad=E\left[\sqrt{Th}\int K(v)\cdot\big(m(vh+m(y_0))-m(m(y_0))\big)\cdot f_\epsilon\big(vh+m(y_0)-m(X_0)\big)\,dv\right]\\
&\quad=E\left[\sqrt{Th}\int K(v)\cdot\big(m^{(1)}(m(y_0))\,vh+m^{(2)}(\hat y)\cdot v^2h^2\big)\cdot\big(f_\epsilon(m(y_0)-m(X_0))+f_\epsilon^{(1)}(\hat x)\cdot vh\big)\,dv\right].
\end{aligned}
\tag{A30}
\]
If we take a bandwidth satisfying $Th^5\to 0$, Equation (A30) converges to 0. Then, we consider the mean of the second term on the r.h.s. of Equation (A29):
\[
\begin{aligned}
&E\left[\sqrt{\frac{h}{T}}\sum_{t=0}^{T-1}K_h^{(1)}(\hat x)\cdot\big(\hat m_h(y_0)-m(y_0)\big)\cdot\big(m(X_t)-m(m(y_0))\big)\right]
=\sqrt{\frac{h}{T}}\sum_{t=0}^{T-1}E\Big[K_h^{(1)}(\hat x)\cdot\big(\hat m_h(y_0)-m(y_0)\big)\cdot\big(m(X_t)-m(m(y_0))\big)\Big]\\
&\quad=\frac{1}{T}\sum_{t=0}^{T-1}E\Big[E\Big[\sqrt{Th}\cdot K_h^{(1)}(\hat x)\cdot\big(\hat m_h(y_0)-m(y_0)\big)\cdot\big(m(X_t)-m(m(y_0))\big)\,\Big|\,X_t\Big]\Big].
\end{aligned}
\tag{A31}
\]
Since $E\big(\sqrt{Th}\,(\hat m_h(y_0)-m(y_0))\big)$ is $O(\sqrt{Th^5})$ (see Lemma 4.6 of [27] for the proof), under the assumption that $K(\cdot)$ has a bounded derivative and $m(\cdot)$ is bounded on a compact set, we have that $E\big(E\big(\sqrt{Th}\cdot K_h^{(1)}(\hat x)\cdot(\hat m_h(y_0)-m(y_0))\cdot(m(X_t)-m(m(y_0)))\mid X_t\big)\big)$ is also $O(\sqrt{Th^5})$; once we select an under-smoothing bandwidth satisfying $Th^5\to 0$, Equation (A31) converges to 0. It then remains to analyze the variance of $\sqrt{Th}\,\hat r_{B,h}(\hat m_h(y_0))$; similarly, we can show that it is $o_p(1)$. All in all, $\sqrt{Th}\,\hat r_{B,h}(\hat m_h(y_0))$ converges to 0 in probability.
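To make the bandwidth requirement concrete, here is the underlying rate arithmetic (the exponent $-1/4$ is merely one admissible under-smoothing choice):
\[
h\propto T^{-1/5}\;\Rightarrow\;Th^5=O(1),\qquad h\propto T^{-1/4}\;\Rightarrow\;Th^5=O(T^{-1/4})\to 0,
\]
so the bias contribution $\sqrt{Th}\cdot O(h^2)=O(\sqrt{Th^5})$ vanishes under the second choice but not under the MSE-optimal rate.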
For the bias-type term r ^ B , h * ( m ^ h * ( y 0 ) ) in the bootstrap world, we can perform a similar decomposition to the one applied in Equation (A29), and then we can obtain:
\[
\begin{aligned}
\sqrt{Th}\,\hat r^*_{B,h}(\hat m^*_h(y_0))
&=\sqrt{\frac{h}{T}}\sum_{t=0}^{T-1}K_h\big(\hat m_h(y_0)-X^*_t\big)\cdot\big(\hat m_h(X^*_t)-\hat m_h(\hat m_h(y_0))\big)\\
&\quad+\sqrt{\frac{h}{T}}\sum_{t=0}^{T-1}K_h^{(1)}(\hat x)\cdot\big(\hat m^*_h(y_0)-\hat m_h(y_0)\big)\cdot\big(\hat m_h(X^*_t)-\hat m_h(\hat m_h(y_0))\big).
\end{aligned}
\tag{A32}
\]
We first rely on the fact that $E^*\big(E^*\big(\sqrt{Th}\cdot K_h^{(1)}(\hat x)\cdot(\hat m^*_h(y_0)-\hat m_h(y_0))\cdot(\hat m_h(X^*_t)-\hat m_h(\hat m_h(y_0)))\mid X^*_t\big)\big)$ is also $O(\sqrt{Th^5})$; see Lemma 4.6 of [27] for more details. Thus, with the under-smoothing bandwidth strategy, the second term on the r.h.s. of Equation (A32) also converges to 0. For the first term, we rely on the fact that the bootstrap series is also ergodic with high probability; see Theorem 2 of [30] and of [32] for time-series models with homoscedastic and heteroscedastic errors, respectively. Thus, with an analysis similar to that of its counterpart in the real world, the bias-type term in the bootstrap world also converges to 0 in probability. Given the consistency between $\hat f_h(\hat m_h(y_0))$ and $\hat f^*_h(\hat m_h(y_0))$, which is implied by Lemma 4.5 of [27], Equation (A15) follows from the analysis of the variance and bias-type terms in the real and bootstrap worlds. □

Appendix B. The Advantage of Applying Under-Smoothing Bandwidth for QPI with Finite Sample

The proof of Theorem 1 provides the big picture of the asymptotic validity of the QPI. Although the choice of the bandwidth does not influence the asymptotic validity of the QPI, the simulation results show that the QPI with the under-smoothing bandwidth has a better CVR for multi-step-ahead predictions. We attempt to analyze this phenomenon informally. Starting from the convergence result, we want to show:
\[
\sup_{|x|\le c_T}\Big|F^*_{X^*_{T+k}\mid X_T,\ldots,X_0}(x)-F_{X_{T+k}\mid X_T}(x)\Big|\xrightarrow{p}0,\quad\text{for }k\ge 1.
\tag{A33}
\]
We still take the case with k = 2 as an example. From the analyses in the proof of Theorem 1, we can obtain:
\[
\sup_{|x|\le c_T}\Big|F^*_{X^*_{T+2}\mid X_T,\ldots,X_0}(x)-F_{X_{T+2}\mid X_T}(x)\Big|\le o_p(1)+\sup_{|x|\le c_T,\,|y|\le c_T,\,j\in\{1,\ldots,T\}}C\cdot\big|\hat G(x,y,\hat\epsilon_j)-G(x,y,\epsilon_j)\big|.
\tag{A34}
\]
Recall that $G(x,X_T,\epsilon_{T+1})$ denotes $\dfrac{x-m(m(X_T)+\sigma(X_T)\epsilon_{T+1})}{\sigma(m(X_T)+\sigma(X_T)\epsilon_{T+1})}$, and $\hat G(x,X_T,\hat\epsilon^*_{T+1})$ denotes $\dfrac{x-\hat m_h(\hat m_h(X_T)+\hat\sigma_h(X_T)\hat\epsilon^*_{T+1})}{\hat\sigma_h(\hat m_h(X_T)+\hat\sigma_h(X_T)\hat\epsilon^*_{T+1})}$. To simplify the notation, we consider the model with $\sigma(x)\equiv 1$. Then, Equation (A34) becomes:
\[
\sup_{|x|\le c_T}\Big|F^*_{X^*_{T+k}\mid X_T,\ldots,X_0}(x)-F_{X_{T+k}\mid X_T}(x)\Big|\le o_p(1)+\sup_{|y|\le c_T,\,j\in\{1,\ldots,T\}}C\cdot\big|\hat m_h(\hat m_h(y)+\hat\epsilon^*_j)-m(m(y)+\epsilon_j)\big|.
\tag{A35}
\]
Then, we can focus on analyzing $\hat m_h(\hat m_h(X_T)+\hat\epsilon^*_j)-m(m(X_T)+\epsilon_j)$. By applying a Taylor expansion, we can obtain:
\[
\hat m_h(\hat m_h(y)+\hat\epsilon^*_j)-m(m(y)+\epsilon_j)=\hat m_h(m(y)+\epsilon_j)-m(m(y)+\epsilon_j)+\hat m_h^{(1)}(\hat{\hat x})\big(\hat m_h(y)+\hat\epsilon^*_j-m(y)-\epsilon_j\big).
\tag{A36}
\]
For the first term on the r.h.s. of Equation (A36), based on the ergodicity, asymptotically, we have:
\[
\begin{aligned}
\hat m_h(m(y)+\epsilon_j)
&=\frac{\frac{1}{Th}\sum_{i=0}^{T-1}K\big(\frac{m(y)+\epsilon_j-X_i}{h}\big)X_{i+1}}{\hat f_h(m(y)+\epsilon_j)}
=\frac{\frac{1}{Th}\sum_{i=0}^{T-1}K\big(\frac{m(y)+\epsilon_j-X_i}{h}\big)\big(m(X_i)+\epsilon_{i+1}\big)}{\hat f_h(m(y)+\epsilon_j)}\\
&=\frac{1}{\hat f_h(m(y)+\epsilon_j)}\left[\frac{1}{h}E\Big(K\Big(\frac{m(y)+\epsilon_j-X_1}{h}\Big)m(X_1)\Big)+\frac{1}{h}E\Big(K\Big(\frac{m(y)+\epsilon_j-X_1}{h}\Big)\epsilon_1\Big)\right]\\
&=\frac{1}{\hat f_h(m(y)+\epsilon_j)}\left[\frac{1}{h}\int K\Big(\frac{u-m(y)-\epsilon_j}{h}\Big)m(u)f_X(u)\,du+0\right]\\
&=\frac{1}{\hat f_h(m(y)+\epsilon_j)}\left[\int K(v)\,m\big(vh+m(y)+\epsilon_j\big)f_X\big(vh+m(y)+\epsilon_j\big)\,dv+0\right]\\
&=\frac{1}{\hat f_h(m(y)+\epsilon_j)}\int K(v)\big[m(m(y)+\epsilon_j)+vh\,m^{(1)}(m(y)+\epsilon_j)+v^2h^2\,m^{(2)}(\hat{\hat y})\big]\cdot\big[f_X(m(y)+\epsilon_j)+vh\,f_X^{(1)}(m(y)+\epsilon_j)+v^2h^2\,f_X^{(2)}(\hat{\hat z})\big]\,dv\\
&=\frac{1}{\hat f_h(m(y)+\epsilon_j)}\Big[m(m(y)+\epsilon_j)\,f_X(m(y)+\epsilon_j)+O(h^2)\Big].
\end{aligned}
\tag{A37}
\]
The convergence of $\hat f_h(m(y)+\epsilon_j)$ to $f_X(m(y)+\epsilon_j)$ guarantees the consistency between Equation (A37) and $m(m(y)+\epsilon_j)$. Similarly, for the third term on the r.h.s. of Equation (A36), an analogous analysis yields convergence to 0 in probability. Moreover, the convergence speed is governed by the $O(h^2)$ term. When multi-step-ahead predictions are required, more and more such $O(h^2)$ terms accumulate. If we have enough data, it is “safe” to use the bandwidth with the optimal rate to estimate the model. However, in finite-sample cases, it is better to take an under-smoothing h, though the corresponding LEN of the prediction interval will become larger due to the mean–variance trade-off. This conclusion coincides with the results shown in Table 4 and Table 5: the one-step-ahead QPI with the optimal bandwidth has a better CVR than the version with the under-smoothing bandwidth, and the LEN of the PI with the optimal bandwidth is also slightly smaller. When the prediction horizon is larger than 1, although the QPI with the under-smoothing bandwidth has a slightly larger LEN, its CVR is notably better than that of the QPI with the optimal bandwidth. We also conducted additional simulation studies to show that the QPIs with the optimal and under-smoothing bandwidths are asymptotically equivalent: we performed simulations with Equation (30) and took T + 1 to be 1000. The CVR and LEN of the different QPIs are tabulated in Table 6. Although the LEN of the QPI with the optimal bandwidth is always smaller than that of the variant with the under-smoothing bandwidth, the difference is marginal. In addition, the two types of QPIs have indistinguishable CVR performance, which implies the asymptotic equivalence of applying the optimal or under-smoothing bandwidth. It also implies that adopting fitted or predictive residuals is asymptotically equivalent.
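For completeness, the following Python sketch shows the local constant estimator together with the two bandwidth choices compared in this appendix; the Gaussian kernel, the rule-of-thumb constant 1.06, and the specific under-smoothing exponent are assumptions made only for illustration and are not the paper's exact implementation.

```python
import numpy as np

def nw_estimate(x, X, Y, h):
    """Local constant (Nadaraya-Watson) estimate of E[Y | X = x] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - X) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

# Simulate Model (30): X_t = log(X_{t-1}^2 + 1) + eps_t, eps_t ~ N(0, 1)
rng = np.random.default_rng(0)
T = 200
series = np.zeros(T + 1)
for t in range(1, T + 1):
    series[t] = np.log(series[t - 1] ** 2 + 1.0) + rng.normal()

X, Y = series[:-1], series[1:]
h_optimal = 1.06 * np.std(X) * T ** (-1 / 5)   # optimal-rate bandwidth (illustrative constant)
h_under   = 1.06 * np.std(X) * T ** (-1 / 4)   # under-smoothing bandwidth: T * h^5 -> 0

m_hat_opt   = lambda x: nw_estimate(x, X, Y, h_optimal)
m_hat_under = lambda x: nw_estimate(x, X, Y, h_under)
```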

Appendix C. The Effects of Applying Under-Smoothing or Over-Smoothing Bandwidth on PPI

To see the effects of applying the under-smoothing or over-smoothing trick on the performance of the PPI, we took the sample size T + 1 to be 50 or 500 and performed 5000 simulation replications on the first model. The simulation results are shown in Table 7. These results coincide with Corollary 1: with predictive residuals, both bandwidth strategies give one-step-ahead PPIs with satisfactory CVR, even for a small sample size. The implication of Theorem 3 is also verified: taking the under-smoothing bandwidth keeps the CVR at a high level for multi-step-ahead predictions when the sample size is small. In addition, as the sample size increases, the CVR of the PPI with the over-smoothing bandwidth also increases. This behavior is guaranteed by the asymptotic validity of the PPI regardless of whether the over-smoothing or under-smoothing bandwidth is used; see Theorem 2.

Appendix D. The Comparison of Applying Under-Smoothing and Optimal Bandwidths on Estimating the Variance Function for Building PPI

In Appendix C, we saw the advantage of applying the under-smoothing bandwidth to estimate the model in the real and bootstrap worlds when the model has homoscedastic errors. For a model with heteroscedastic errors, as mentioned in Section 3.3, we can rely on the optimal bandwidth to estimate the variance function.
To check this claim, we consider two strategies for the bandwidth of the variance-function estimator: (1) take the under-smoothing bandwidth, as we do for the mean-function estimator; (2) take the bandwidth with the optimal rate. To estimate the mean function in the bootstrap world, we keep using the under-smoothing bandwidth strategy. The simulation results based on Equation (32) with a small sample size are shown in Table 8. The LEN of the PPI with the optimal bandwidth for the variance function is always smaller than that of the corresponding PPI with the under-smoothing bandwidth, while the CVRs of the two types of PPI are indistinguishable for k > 1. For the one-step-ahead prediction, the former PPI has a notably better CVR than the latter. This phenomenon is implied by Remark 7: the best strategy for the one-step-ahead PPI is to choose bandwidths with the optimal rate for both the mean- and variance-function estimators.
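A minimal sketch of this two-bandwidth strategy is given below, assuming an NAR(1) model with one lag in both the mean and variance functions; the Gaussian kernel and the small variance floor are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def fit_mean_and_volatility(series, h_mean, h_var):
    """Estimate m(.) with an under-smoothed bandwidth h_mean and sigma(.) from the
    squared mean-residuals with an optimal-rate bandwidth h_var (local constant fits)."""
    X, Y = series[:-1], series[1:]

    def nw(x, resp, h):
        w = np.exp(-0.5 * ((x - X) / h) ** 2)
        return np.sum(w * resp) / np.sum(w)

    m_hat = lambda x: nw(x, Y, h_mean)
    sq_resid = (Y - np.array([m_hat(x) for x in X])) ** 2
    # floor the variance estimate to avoid taking the square root of a value that is
    # numerically zero
    sigma_hat = lambda x: np.sqrt(max(nw(x, sq_resid, h_var), 1e-12))
    return m_hat, sigma_hat
```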

References

  1. Politis, D.N. Financial time series. Wiley Interdiscip. Rev. Comput. Stat. 2009, 1, 157–166.
  2. Politis, D.N. Pertinent Prediction Intervals. In Model-Free Prediction and Regression; Springer: New York, NY, USA, 2015; pp. 43–45.
  3. Pemberton, J. Exact least squares multi-step prediction from nonlinear autoregressive models. J. Time Ser. Anal. 1987, 8, 443–448.
  4. Lee, K.; Billings, S. A new direct approach of computing multi-step ahead predictions for non-linear models. Int. J. Control 2003, 76, 810–822.
  5. Chen, R.; Yang, L.; Hafner, C. Nonparametric multistep-ahead prediction in time series analysis. J. R. Stat. Soc. Ser. Stat. Methodol. 2004, 66, 669–686.
  6. Efron, B. Bootstrap Methods: Another Look at the Jackknife. Ann. Stat. 1979, 7, 1–26.
  7. Politis, D.N.; Romano, J.P. A Circular Block-Resampling Procedure for Stationary Data. In Exploring the Limits of Bootstrap; LePage, R., Billard, L., Eds.; Wiley: New York, NY, USA, 1992; pp. 263–270.
  8. Politis, D.N.; Romano, J.P. The stationary bootstrap. J. Am. Stat. Assoc. 1994, 89, 1303–1313.
  9. Politis, D.N. The Impact of Bootstrap Methods on Time Series Analysis. Stat. Sci. 2003, 18, 219–230.
  10. Kreiss, J.P.; Paparoditis, E. Bootstrap for Time Series: Theory and Methods; Springer: Heidelberg, Germany, 2023.
  11. Franke, J.; Neumann, M.H. Bootstrapping neural networks. Neural Comput. 2000, 12, 1929–1949.
  12. Michelucci, U.; Venturini, F. Estimating neural network’s performance with bootstrap: A tutorial. Mach. Learn. Knowl. Extr. 2021, 3, 357–373.
  13. Thombs, L.A.; Schucany, W.R. Bootstrap prediction intervals for autoregression. J. Am. Stat. Assoc. 1990, 85, 486–492.
  14. Pascual, L.; Romo, J.; Ruiz, E. Bootstrap predictive inference for ARIMA processes. J. Time Ser. Anal. 2004, 25, 449–465.
  15. Pascual, L.; Romo, J.; Ruiz, E. Bootstrap prediction for returns and volatilities in GARCH models. Comput. Stat. Data Anal. 2006, 50, 2293–2312.
  16. Pan, L.; Politis, D.N. Bootstrap prediction intervals for linear, nonlinear and nonparametric autoregressions. J. Stat. Plan. Inference 2016, 177, 1–27.
  17. Wu, K.; Politis, D.N. Bootstrap Prediction Inference of Non-linear Autoregressive Models. arXiv 2023, arXiv:2306.04126.
  18. Politis, D.N. Model-free model-fitting and predictive distributions. Test 2013, 22, 183–221.
  19. Manzan, S.; Zerom, D. A bootstrap-based non-parametric forecast density. Int. J. Forecast. 2008, 24, 535–550.
  20. Giordano, F.; La Rocca, M.; Perna, C. Forecasting nonlinear time series with neural network sieve bootstrap. Comput. Stat. Data Anal. 2007, 51, 3871–3884.
  21. Khosravi, A.; Nahavandi, S.; Creighton, D.; Atiya, A.F. Comprehensive review of neural network-based prediction intervals and new advances. IEEE Trans. Neural Netw. 2011, 22, 1341–1356.
  22. Lakshminarayanan, B.; Pritzel, A.; Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. Adv. Neural Inf. Process. Syst. 2017, 6402–6413.
  23. Chen, J.; Politis, D.N. Optimal multi-step-ahead prediction of ARCH/GARCH models and NoVaS transformation. Econometrics 2019, 7, 34.
  24. Wu, K.; Karmakar, S. Model-free time-aggregated predictions for econometric datasets. Forecasting 2021, 3, 920–933.
  25. Wu, K.; Karmakar, S. A model-free approach to do long-term volatility forecasting and its variants. Financ. Innov. 2023, 9, 59.
  26. Wang, Y.; Politis, D.N. Model-free Bootstrap and Conformal Prediction in Regression: Conditionality, Conjecture Testing, and Pertinent Prediction Intervals. arXiv 2021, arXiv:2109.12156.
  27. Franke, J.; Kreiss, J.P.; Mammen, E. Bootstrap of kernel smoothing in nonlinear time series. Bernoulli 2002, 8, 1–37.
  28. Politis, D.N. Studentization vs. Variance Stabilization: A Simple Way Out of an Old Dilemma. 2022. Available online: https://mathweb.ucsd.edu/~politis/PAPER/DGP_Aug_11.pdf (accessed on 18 July 2023).
  29. Bradley, R.C. Basic properties of strong mixing conditions. A survey and some open questions. Probab. Surv. 2005, 2, 107–144.
  30. Franke, J.; Neumann, M.H.; Stockis, J.P. Bootstrapping nonparametric estimators of the volatility function. J. Econom. 2004, 118, 189–218.
  31. Min, C.; Hongzhi, A. The probabilistic properties of the nonlinear autoregressive model with conditional heteroskedasticity. Acta Math. Appl. Sin. 1999, 15, 9–17.
  32. Franke, J.; Kreiss, J.P.; Mammen, E.; Neumann, M.H. Properties of the nonparametric autoregressive bootstrap. J. Time Ser. Anal. 2002, 23, 555–585.
Table 1. The MSPEs of different predictions using Model Equation (30) with a standard normal innovation.
Model: $X_t = \log(X_{t-1}^2 + 1) + \epsilon_t$, $\epsilon_t \sim N(0, 1)$

Prediction step        1        2        3        4        5
T = 100
L2-Bootstrap         1.1088   1.5223   1.6088   1.5886   1.6282
L1-Bootstrap         1.1123   1.5290   1.6212   1.6011   1.6385
L2-Oracle            1.0181   1.4521   1.5529   1.5273   1.5731
L1-Oracle            1.0198   1.4540   1.5554   1.5305   1.5734
T = 200
L2-Bootstrap         1.0142   1.4006   1.5380   1.5956   1.6102
L1-Bootstrap         1.0134   1.4041   1.5426   1.6024   1.6171
L2-Oracle            0.9790   1.3671   1.4982   1.5556   1.5791
L1-Oracle            0.9793   1.3681   1.4999   1.5568   1.5791
Table 2. The MSPEs of different predictions using Model Equation (30) with $\chi^2_{(3)}-3$ innovation.
Model: $X_t = \log(X_{t-1}^2 + 1) + \epsilon_t$, $\epsilon_t \sim \chi^2_{(3)} - 3$

Prediction step        1        2        3        4        5
T = 100
L2-Bootstrap         6.7286   7.6087   7.8202   7.3395   7.6966
L1-Bootstrap         7.1093   7.9908   8.2598   7.6761   7.9988
L2-Oracle            6.2972   7.3608   7.6953   7.1766   7.5157
L1-Oracle            6.6937   7.6540   8.0064   7.3889   7.7174
T = 200
L2-Bootstrap         6.2457   7.1662   7.5042   7.6227   7.1980
L1-Bootstrap         6.6355   7.4942   7.7964   7.9285   7.5006
L2-Oracle            5.9531   7.0244   7.3823   7.4382   7.0738
L1-Oracle            6.3519   7.2785   7.5810   7.6443   7.2600
Table 3. The MSPEs of different predictions using Model Equation (32) with standard normal innovation.
Model: $X_t = \sin(X_{t-1}) + \epsilon_t\sqrt{0.5 + 0.25X_{t-1}^2}$, $\epsilon_t \sim N(0, 1)$

Prediction step        1        2        3        4        5
T = 100
L2-Bootstrap         0.9447   1.1306   1.2373   1.2091   1.2714
L1-Bootstrap         0.9461   1.1374   1.2396   1.2127   1.2731
L2-Oracle            0.8454   1.0726   1.1832   1.1722   1.2186
L1-Oracle            0.8457   1.0730   1.1841   1.1737   1.2183
T = 200
L2-Bootstrap         0.8798   1.1539   1.2600   1.2901   1.2717
L1-Bootstrap         0.8833   1.1600   1.2649   1.2949   1.2749
L2-Oracle            0.8103   1.0991   1.2227   1.2680   1.2509
L1-Oracle            0.8107   1.1000   1.2239   1.2684   1.2511
Table 4. The CVRs and LENs of PIs for Equation (30).
Model 1: $X_t = \log(X_{t-1}^2 + 1) + \epsilon_t$, $\epsilon_t \sim N(0, 1)$

                 CVR for each step (1–5)             LEN for each step (1–5)
                 1      2      3      4      5       1     2     3     4     5
T = 200
QPI-f          0.936  0.935  0.931  0.928  0.925    3.80  4.38  4.52  4.55  4.57
QPI-p          0.943  0.944  0.939  0.935  0.937    3.94  4.54  4.69  4.73  4.74
QPI-f-u        0.936  0.941  0.940  0.937  0.937    3.80  4.51  4.69  4.76  4.77
QPI-p-u        0.942  0.949  0.949  0.945  0.949    3.95  4.68  4.86  4.92  4.94
L2-PPI-f-u     0.940  0.944  0.944  0.940  0.939    3.94  4.59  4.76  4.81  4.83
L2-PPI-p-u     0.947  0.954  0.951  0.947  0.947    4.09  4.75  4.92  4.98  5.00
L1-PPI-f-u     0.942  0.945  0.944  0.940  0.941    3.95  4.61  4.77  4.83  4.84
L1-PPI-p-u     0.948  0.954  0.952  0.948  0.949    4.10  4.77  4.94  4.99  5.01
SPI            0.951  0.948  0.950  0.944  0.946    3.88  4.58  4.77  4.82  4.84
T = 100
QPI-f          0.921  0.918  0.912  0.913  0.909    3.74  4.28  4.40  4.44  4.45
QPI-p          0.940  0.935  0.931  0.931  0.928    3.99  4.54  4.67  4.71  4.72
QPI-f-u        0.916  0.928  0.931  0.930  0.927    3.74  4.46  4.63  4.69  4.71
QPI-p-u        0.937  0.943  0.943  0.944  0.943    3.99  4.72  4.89  4.95  4.97
L2-PPI-f-u     0.931  0.934  0.935  0.934  0.931    3.97  4.58  4.73  4.78  4.80
L2-PPI-p-u     0.949  0.948  0.947  0.944  0.947    4.22  4.84  4.99  5.04  5.07
L1-PPI-f-u     0.931  0.936  0.934  0.933  0.934    3.98  4.60  4.75  4.79  4.82
L1-PPI-p-u     0.949  0.948  0.949  0.944  0.948    4.23  4.86  5.01  5.06  5.09
SPI            0.951  0.941  0.946  0.942  0.944    3.89  4.58  4.76  4.82  4.84
T = 50
QPI-f          0.891  0.898  0.899  0.890  0.887    3.64  4.14  4.25  4.29  4.30
QPI-p          0.923  0.926  0.931  0.924  0.917    4.04  4.56  4.67  4.71  4.72
QPI-f-u        0.884  0.916  0.921  0.918  0.907    3.64  4.37  4.54  4.60  4.62
QPI-p-u        0.914  0.939  0.940  0.939  0.934    4.03  4.79  4.95  5.00  5.02
L2-PPI-f-u     0.906  0.924  0.924  0.927  0.919    3.99  4.56  4.69  4.74  4.76
L2-PPI-p-u     0.936  0.951  0.948  0.944  0.943    4.41  4.97  5.10  5.15  5.16
L1-PPI-f-u     0.907  0.925  0.924  0.927  0.920    4.00  4.58  4.72  4.76  4.79
L1-PPI-p-u     0.939  0.952  0.948  0.945  0.941    4.43  5.00  5.12  5.17  5.18
SPI            0.947  0.949  0.944  0.947  0.942    3.88  4.58  4.76  4.81  4.84

Note: Unless otherwise specified, throughout all simulations, QPI-f and QPI-p represent QPIs based on the optimal bandwidth with fitted and predictive residuals, respectively; QPI-f-u and QPI-p-u represent QPIs based on the under-smoothing bandwidth with fitted and predictive residuals, respectively; L2-PPI-f-u and L2-PPI-p-u represent PPIs centered at the L2-optimal point prediction with fitted and predictive residuals, respectively; L1-PPI-f-u and L1-PPI-p-u represent PPIs centered at the L1-optimal point prediction with fitted and predictive residuals, respectively; all PPIs with the “-u” symbol are based on applying the under-smoothing bandwidth to estimate the model; SPI represents the oracle PI.
Table 5. The CVRs and LENs of PIs for Equation (32).
Model 2: $X_t = \sin(X_{t-1}) + \epsilon_t\sqrt{0.5 + 0.25X_{t-1}^2}$, $\epsilon_t \sim N(0, 1)$

                 CVR for each step (1–5)             LEN for each step (1–5)
                 1      2      3      4      5       1     2     3     4     5
T = 200
QPI-f          0.913  0.918  0.916  0.924  0.924    3.30  3.93  4.07  4.11  4.12
QPI-p          0.935  0.936  0.933  0.941  0.940    3.62  4.29  4.46  4.49  4.51
QPI-f-u        0.904  0.934  0.935  0.943  0.944    3.34  4.25  4.50  4.55  4.57
QPI-p-u        0.926  0.949  0.951  0.958  0.955    3.65  4.62  4.89  4.95  4.97
L2-PPI-f-opv   0.909  0.938  0.937  0.948  0.946    3.51  4.38  4.60  4.65  4.67
L2-PPI-p-opv   0.932  0.952  0.951  0.961  0.959    3.87  4.80  5.03  5.08  5.10
L1-PPI-f-opv   0.912  0.939  0.937  0.949  0.946    3.53  4.38  4.59  4.64  4.66
L1-PPI-p-opv   0.933  0.951  0.950  0.960  0.960    3.88  4.79  5.02  5.07  5.08
SPI            0.948  0.948  0.940  0.950  0.946    3.37  4.11  4.32  4.38  4.40
T = 100
QPI-f          0.901  0.907  0.912  0.909  0.906    3.28  3.85  3.97  4.01  4.01
QPI-p          0.933  0.931  0.938  0.933  0.938    3.82  4.41  4.55  4.58  4.59
QPI-f-u        0.901  0.923  0.931  0.929  0.932    3.28  4.07  4.29  4.35  4.37
QPI-p-u        0.931  0.943  0.950  0.950  0.947    3.82  4.64  4.85  4.90  4.93
L2-PPI-f-opv   0.915  0.925  0.935  0.936  0.935    3.52  4.25  4.43  4.48  4.50
L2-PPI-p-opv   0.941  0.948  0.954  0.955  0.954    4.17  4.90  5.07  5.11  5.13
L1-PPI-f-opv   0.916  0.926  0.935  0.936  0.936    3.53  4.25  4.43  4.48  4.50
L1-PPI-p-opv   0.941  0.947  0.954  0.952  0.955    4.17  4.90  5.07  5.12  5.13
SPI            0.951  0.947  0.947  0.946  0.942    3.41  4.13  4.33  4.39  4.40
T = 50
QPI-f          0.844  0.874  0.884  0.883  0.888    3.09  3.68  3.83  3.87  3.89
QPI-p          0.903  0.921  0.929  0.929  0.934    4.01  4.74  4.85  4.93  4.95
QPI-f-u        0.845  0.892  0.907  0.910  0.910    3.09  3.93  4.15  4.23  4.26
QPI-p-u        0.905  0.929  0.934  0.940  0.946    4.03  4.91  5.17  5.23  5.24
L2-PPI-f-opv   0.871  0.905  0.917  0.918  0.922    3.45  4.19  4.38  4.46  4.47
L2-PPI-p-opv   0.934  0.941  0.948  0.950  0.954    4.71  5.48  5.60  5.67  5.68
L1-PPI-f-opv   0.873  0.907  0.920  0.919  0.923    3.46  4.20  4.40  4.47  4.48
L1-PPI-p-opv   0.934  0.942  0.948  0.950  0.954    4.69  5.44  5.57  5.64  5.64
SPI            0.942  0.946  0.948  0.939  0.950    3.39  4.11  4.33  4.38  4.40
Note: All PPIs with the “-opv” symbol are based on applying under-smoothing and optimal bandwidths to estimate mean and variance functions, respectively.
Table 6. The CVRs and LENs of QPIs with 1000 samples using Equation (30).
Model 1: $X_t = \log(X_{t-1}^2 + 1) + \epsilon_t$, $\epsilon_t \sim N(0, 1)$

                 CVR for each step (1–5)             LEN for each step (1–5)
                 1      2      3      4      5       1     2     3     4     5
T = 1000
QPI-f          0.950  0.940  0.948  0.947  0.939    3.86  4.50  4.66  4.70  4.71
QPI-f-u        0.947  0.943  0.952  0.954  0.946    3.86  4.56  4.74  4.79  4.81
QPI-p          0.949  0.938  0.951  0.951  0.943    3.91  4.54  4.71  4.75  4.76
QPI-p-u        0.951  0.947  0.954  0.956  0.950    3.90  4.62  4.80  4.84  4.86
Table 7. The CVRs and LENs of PPIs with under-smoothing or over-smoothing bandwidth strategies using Equation (30).
Model 1: $X_t = \log(X_{t-1}^2 + 1) + \epsilon_t$, $\epsilon_t \sim N(0, 1)$

                 CVR for each step (1–5)             LEN for each step (1–5)
                 1      2      3      4      5       1     2     3     4     5
T = 500
L2-PPI-f-u     0.943  0.940  0.945  0.943  0.948    3.88  4.54  4.71  4.77  4.78
L1-PPI-f-u     0.942  0.941  0.946  0.947  0.949    3.89  4.55  4.72  4.78  4.80
L2-PPI-p-u     0.946  0.949  0.947  0.952  0.954    3.96  4.63  4.79  4.85  4.8
L1-PPI-p-u     0.946  0.950  0.947  0.951  0.954    3.97  4.64  4.81  4.86  4.88
L2-PPI-f-o     0.942  0.926  0.916  0.915  0.923    3.86  4.26  4.33  4.34  4.35
L1-PPI-f-o     0.943  0.925  0.921  0.918  0.922    3.87  4.27  4.34  4.36  4.36
L2-PPI-p-o     0.948  0.929  0.927  0.927  0.925    3.94  4.34  4.42  4.43  4.43
L1-PPI-p-o     0.949  0.931  0.928  0.925  0.924    3.95  4.35  4.43  4.44  4.44
SPI            0.946  0.947  0.948  0.950  0.956    3.89  4.57  4.76  4.82  4.84
T = 50
L2-PPI-f-u     0.912  0.919  0.919  0.925  0.931    3.95  4.53  4.67  4.72  4.74
L1-PPI-f-u     0.913  0.921  0.919  0.928  0.931    3.96  4.55  4.69  4.74  4.76
L2-PPI-p-u     0.943  0.945  0.942  0.946  0.950    4.38  4.95  5.08  5.12  5.14
L1-PPI-p-u     0.944  0.946  0.943  0.948  0.950    4.39  4.98  5.10  5.15  5.16
L2-PPI-f-o     0.911  0.880  0.869  0.869  0.873    3.78  3.93  3.96  3.97  3.97
L1-PPI-f-o     0.912  0.882  0.868  0.868  0.871    3.79  3.95  3.98  3.98  3.98
L2-PPI-p-o     0.940  0.918  0.903  0.908  0.910    4.20  4.37  4.40  4.41  4.42
L1-PPI-p-o     0.941  0.919  0.902  0.909  0.909    4.22  4.39  4.42  4.43  4.43
SPI            0.950  0.947  0.946  0.947  0.950    3.89  4.58  4.76  4.82  4.84
Note: “-o” indicates that the corresponding PPI is built with an over-smoothing bandwidth in generating bootstrap series.
Table 8. The CVRs and LENs of PPIs with two strategies for estimating the variance function.
Model 2: $X_t = \sin(X_{t-1}) + \epsilon_t\sqrt{0.5 + 0.25X_{t-1}^2}$, $\epsilon_t \sim N(0, 1)$

                 CVR for each step (1–5)             LEN for each step (1–5)
                 1      2      3      4      5       1     2     3     4     5
T = 50, Rep = 5000
L2-PPI-f-u     0.871  0.896  0.921  0.915  0.923    3.50  4.24  4.41  4.48  4.52
L1-PPI-f-u     0.877  0.901  0.919  0.918  0.925    3.52  4.24  4.42  4.49  4.53
L2-PPI-p-u     0.925  0.939  0.946  0.946  0.946    4.82  5.47  5.63  5.71  5.81
L1-PPI-p-u     0.927  0.935  0.945  0.949  0.949    4.80  5.39  5.51  5.65  5.75
L2-PPI-f-opv   0.885  0.891  0.923  0.920  0.918    3.45  4.12  4.34  4.39  4.43
L1-PPI-f-opv   0.885  0.893  0.927  0.919  0.917    3.47  4.14  4.36  4.41  4.45
L2-PPI-p-opv   0.934  0.939  0.947  0.950  0.947    4.75  5.28  5.49  5.56  5.60
L1-PPI-p-opv   0.940  0.940  0.946  0.951  0.943    4.72  5.21  5.40  5.45  5.55
SPI            0.943  0.939  0.958  0.945  0.945    3.38  4.11  4.33  4.38  4.40
Note: “-opv” indicates that the corresponding PPI is built with the optimal bandwidth for the variance-function estimator.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
