1. Introduction
Functional time series analysis combines functional data analysis with time series analysis. As in univariate and multivariate time series, a temporal dependence structure exists in functional observations, which manifest themselves graphically as curves, images, or shapes. Functional time series can typically be classified into two main categories. The first arises from segmenting a univariate time series into (sliced) functional time series; for example, Rice et al. (2023) considered intraday volatility to form functional time series defined over a continuum. The other category is when the continuum is not a time variable, such as age (see, e.g., Shang et al. 2022) or wavelength in spectroscopy (see, e.g., Shang et al. 2022).
Over the past two decades or so, there have been rapid developments in functional time series analysis. An important branch of such developments is to extend the mainstream models and analytical tools in univariate time series to functional cases (see, e.g., Kokoszka and Reimherr 2017). To name a few, Kokoszka et al. (2017) and Mestre et al. (2021) proposed a functional autocorrelation function (fACF) to quantify linear serial correlation in a functional time series. Huang and Shang (2023) proposed a nonlinear fACF to measure nonlinear dependence in a functional time series.
Bosq (1991) extended the autoregressive (AR) model to the functional case, referred to as the fAR model. Since then, a number of functional time series models have been developed from the fAR model. Some extended models include, for example, the autoregressive Hilbertian with exogenous variables (ARHX) model (Damon and Guillas 2002), the Hilbert moving average model (Turbillon et al. 2007), the functional autoregressive moving average (fARMA) model (Klepsch et al. 2017), the seasonal functional autoregressive model (Zamani et al. 2022), and the seasonal autoregressive moving average Hilbertian with exogenous variables (SARMAHX) model (González et al. 2017). For modeling conditional variance, these extensions include the functional autoregressive conditional heteroskedasticity (fARCH) model (Hörmann et al. 2013), the functional generalized autoregressive conditional heteroskedasticity (fGARCH) model (Aue et al. 2017), and the fGARCH-X model (Rice et al. 2023).
Functional time series analysis has a wide range of applications, including those in financial risk management. For example, Tang and Shi (2021) applied a high-dimensional functional time series method to forecast constituent stocks in the Dow Jones index. Shang (2017) implemented a functional time series approach to forecast intraday S&P 500 index returns. Shang et al. (2019a) considered the problem of dynamic updating for intraday forecasts of the volatility index.
Despite increasing interest and research on functional time series, the existing literature focuses on measures or tools based on autocovariance and/or autocorrelation to investigate the underlying structure of observed functional time series (Horváth et al. 2013; Mestre et al. 2021; Zhang 2016). Work on independent and identically distributed (IID) tests for functional time series is rather limited, with the exceptions of Gabrys and Kokoszka (2007b), García-Portugués et al. (2019), and Kim et al. (2023). These exceptions only capture the linear temporal structure. Except for the nonlinear fACF of Huang and Shang (2023), relatively little attention has been given to studying nonlinear temporal structures within the functional time series literature. Additionally, since these tools are restricted to linear structures, they cannot test against all possible deviations from randomness. Therefore, a robust model specification test is required to evaluate the adequacy of functional time series models. In a function-on-function regression, Chiou and Müller (2007) developed a set of diagnostic tools based on residual processes, defined by subtracting the predicted functions from the observed response functions. The residual process is expanded into functional principal components and their scores, and a randomization test based on these scores examines whether the residual process is related to the covariate, as an indication of lack of fit of the model. However, these diagnostic tools cannot be extended to functional time series due to temporal dependence.
In this paper, we extend the BDS test of Brock et al. (1987) to functional time series. As in the univariate case, the proposed test can be used as an IID test on estimated residuals to evaluate the adequacy of a fitted model, and as a nonlinearity test on residuals of a functional time series after removing the linear temporal structures exhibited in the investigated data.
The BDS test proposed by Brock et al. (1996) is the most widely used nonlinearity test and model specification test in univariate time series analysis. In empirical studies, the BDS test is often applied to financial time series residuals after fitting an ARMA or ARCH-type model to test for the presence of chaos and nonlinearity (Lee et al. 1993; Mammadli 2017; Small and Tse 2003). The reason behind its popularity is mainly twofold:
- (1) The BDS test requires minimal assumptions and prior knowledge about the investigated data sets. When the BDS test is applied to model residuals, the asymptotic distribution of its test statistic is independent of estimation errors under certain sufficient conditions (see Chan and Tong 2001, Chapter 5). Specifically, de Lima (1996) showed that for linear additive models, or models that can be transformed into that form, the BDS test is nuisance-parameter-free and does not require any adjustment when applied to fitted model residuals.
- (2) The BDS test tests against various forms of deviation from randomness. While the null hypothesis of the BDS test is that the investigated time series is generated by an IID process, its alternative hypothesis is not specified; it may thus be thought of as a portmanteau test. This implies that the BDS test can detect any non-randomness exhibited in the investigated time series. Additionally, a fast algorithm exists for computing the BDS test statistic, which ensures its easy and speedy use in empirical applications (LeBaron 1997). Also, the asymptotic distribution theory of the BDS statistic does not require higher-order moments to exist. This property is especially useful in analyzing financial time series, since many financial time series exhibit heavy-tailed distributions whose higher-order moments may not exist.
In the recent literature, Kim et al. (2003) compared conventional nonparametric tests and the BDS test for residual analysis and found the BDS test to be more reasonable than the conventional nonparametric tests. Caporale et al. (2005) examined the use of the BDS test applied to the logarithm of the squared standardized residuals of an estimated GARCH(1,1) model as a test for the suitability of this specification. Extending Brock et al. (1996), Kočenda (2001) removed the need to arbitrarily select a proximity parameter by integrating across the correlation integral. Luo et al. (2020) proposed a modified BDS test that removes some terms from the correlation integral, which addresses the weakness of over-rejecting the null hypothesis in the original BDS test. Escot et al. (2023) recursively applied the BDS test to detect structural changes in financial time series.
The BDS test has its own weaknesses. Luo et al. (2020) reviewed some of the known problems of this type of test, among them sensitivity to the choice of tuning parameters, slow convergence to asymptotic normality, and over-rejection of the null hypothesis.
The rest of this paper is structured as follows. Section 2 provides the specification of the functional BDS test. In Appendix B, we provide a detailed proof of the asymptotic distribution of the test statistic of the functional BDS test. In Section 3, we present Monte-Carlo experiments on IID functional time series and simulated fGARCH(1,1) functional time series to provide the recommended ranges for the dimension and distance hyperparameters. In Section 4, the functional BDS test is used to evaluate the adequacy of the fAR(1) and fGARCH(1,1) models on the fitted residuals of daily curves of intraday VIX index returns. Conclusions are given in Section 5, along with some ideas on how the methodology presented here can be further extended.
2. BDS Test for Functional Time Series
The BDS test uses the “correlation integral”, a popular measure in chaotic time series analysis. According to Packard et al. (1980) and Takens (1981), the method of delays can embed a scalar time series $\{x_t\}_{t=1,\ldots,N}$ into an m-dimensional space as follows:
$$x_t^m = \left(x_t, x_{t+1}, \ldots, x_{t+m-1}\right), \qquad t = 1, \ldots, N - m + 1.$$
Accordingly, $x_t^m$ is called the m-history of $x_t$.
Grassberger and Procaccia (1983) proposed the correlation integral as a measure of the fractal dimension of deterministic data, since it records the frequency with which temporal patterns are repeated. The correlation integral at the embedding dimension m is given by
$$C_{m,N}(r) = \frac{2}{N_m (N_m - 1)} \sum_{1 \le s < t \le N_m} \mathbf{1}\left(\left\| x_s^m - x_t^m \right\|_{\infty} < r\right),$$
where N is the size of the data set, $N_m = N - m + 1$ is the number of embedded points in the m-dimensional space, r is the distance used for testing the proximity of the data points, $\mathbf{1}(\cdot)$ is the indicator function, and $\|\cdot\|_{\infty}$ denotes the sup-norm. In essence, $C_{m,N}(r)$ measures the fraction of pairs of points $(x_s^m, x_t^m)$, $1 \le s < t \le N_m$, whose sup-norm separation is less than r.
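For illustration, a minimal Python sketch of these two ingredients is given below; it embeds a scalar series into its m-histories and evaluates the correlation integral for given m and r. The function names `m_history` and `correlation_integral` are ours, not taken from the authors' implementation.

```python
import numpy as np

def m_history(x, m):
    """Embed a scalar series x into its m-histories (delay embedding with unit lag)."""
    x = np.asarray(x, dtype=float)
    n_m = len(x) - m + 1                       # number of embedded points N_m
    return np.column_stack([x[j:j + n_m] for j in range(m)])

def correlation_integral(x, m, r):
    """Fraction of pairs of m-histories whose sup-norm separation is below r."""
    emb = m_history(x, m)                      # shape (N_m, m)
    # pairwise sup-norm (Chebyshev) distances between embedded points
    dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
    iu = np.triu_indices(len(emb), k=1)        # count each pair once
    return np.mean(dist[iu] < r)

# toy usage on an IID Gaussian series
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
print(correlation_integral(x, m=2, r=x.std()))
```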
Brock (1987) showed that under the null hypothesis that $\{x_t\}$ are IID with a non-degenerate distribution function $F$,
$$C_{m,N}(r) \rightarrow C_1(r)^m \quad \text{in probability, as } N \rightarrow \infty.$$
According to Brock et al. (1996), the BDS statistic for $m > 1$ is defined as
$$W_{m,N}(r) = \sqrt{N}\, \frac{C_{m,N}(r) - C_{1,N}(r)^m}{\sigma_{m,N}(r)},$$
where
$$\sigma_{m,N}^2(r) = 4 \left[ K^m + 2 \sum_{j=1}^{m-1} K^{m-j} C^{2j} + (m-1)^2 C^{2m} - m^2 K C^{2m-2} \right],$$
$C = C_{1,N}(r)$, and $K = K_N(r)$. Note that $C_{1,N}(r)$ is a consistent estimate of $C$, and $K$ can be consistently estimated by
$$K_N(r) = \frac{6}{N_m (N_m - 1)(N_m - 2)} \sum_{1 \le t < s < l \le N_m} \mathbf{1}\left(\|x_t - x_s\| < r\right) \mathbf{1}\left(\|x_s - x_l\| < r\right).$$
Under the IID hypothesis, $W_{m,N}(r)$ has a limiting standard normal distribution as $N \rightarrow \infty$.
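Putting the pieces together, the following self-contained Python sketch computes one common form of the scalar BDS statistic from the quantities above; it is an illustrative implementation under our own conventions (e.g., the $\sqrt{N}$ scaling and the triple-based estimate of $K$), not the authors' code.

```python
import numpy as np

def _m_history(x, m):
    """m-histories of a scalar series (delay embedding with unit lag)."""
    n_m = len(x) - m + 1
    return np.column_stack([x[j:j + n_m] for j in range(m)])

def _corr_integral(emb, r):
    """Fraction of pairs of embedded points with sup-norm separation below r."""
    dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
    iu = np.triu_indices(len(emb), k=1)
    return np.mean(dist[iu] < r)

def bds_statistic(x, m, r):
    """One common form of the BDS statistic W_{m,N}(r) for a scalar series (m >= 2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    c_m = _corr_integral(_m_history(x, m), r)
    c_1 = _corr_integral(_m_history(x, 1), r)

    # K estimated from triples of one-dimensional points:
    # average of 1(|x_t - x_s| < r) 1(|x_s - x_l| < r) over t < s < l
    close = np.abs(x[:, None] - x[None, :]) < r
    left = np.tril(close, k=-1).sum(axis=1)    # close neighbours with smaller index
    right = np.triu(close, k=1).sum(axis=1)    # close neighbours with larger index
    k_hat = 6.0 * float((left * right).sum()) / (n * (n - 1) * (n - 2))

    c, k = c_1, k_hat
    var = 4.0 * (k ** m
                 + 2.0 * sum(k ** (m - j) * c ** (2 * j) for j in range(1, m))
                 + (m - 1) ** 2 * c ** (2 * m)
                 - m ** 2 * k * c ** (2 * m - 2))
    return np.sqrt(n) * (c_m - c_1 ** m) / np.sqrt(var)

# toy usage: the statistic should be roughly standard normal for an IID series
rng = np.random.default_rng(1)
print(bds_statistic(rng.standard_normal(500), m=3, r=1.0))
```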
The above specification of the BDS test is for a scalar time series. When the object is a functional time series, one needs to adjust the computation of the sup-norm separation of the m-histories in (2) and (9).
Given a functional time series $\{\mathcal{X}_t(u)\}_{t=1,\ldots,N}$, the m-history of $\mathcal{X}_t$ is constructed from its m neighbouring observations, namely
$$\mathcal{X}_t^m = \left(\mathcal{X}_t, \mathcal{X}_{t+1}, \ldots, \mathcal{X}_{t+m-1}\right).$$
The sup-norm of two sets of m functions can be measured by taking the maximum distance between the corresponding curves. Specifically, if we use the $L_2$ norm as the distance measure between two curves,
$$\left\| \mathcal{X}_s^m - \mathcal{X}_t^m \right\|_{\infty} = \max_{j = 0, 1, \ldots, m-1} \left( \int \left[ \mathcal{X}_{s+j}(u) - \mathcal{X}_{t+j}(u) \right]^2 \, \mathrm{d}u \right)^{1/2}.$$
Since we adjust the specification of the BDS test statistic to be adaptive to the functional case, to determine the critical value of the BDS test after the adjustment, one needs to derive its asymptotic distribution under the null hypothesis. In Appendix B, we prove that the asymptotic normality of the univariate BDS test statistic remains valid in the functional case. Indeed, the asymptotic normality result presented in Theorem 1 of Appendix B is versatile: it holds for any norm $\|\cdot\|$ on a separable Hilbert space $\mathbb{H}$, which is more general than the $L_2$-norm.
The $L_2$ norm is not the only distance measure between two functions. Other common choices include the $L_1$ norm and the $L_\infty$ norm. All of them, including other norms, can be used for computing the sup-norm of m-histories of functional time series. However, the choice of distance measure determines the recommended range for the distance hyperparameter r, as well as the speed of convergence of the test statistic and the power of the test. In Section 3, we present power and size experiments on random and structured functional time series when $L_1$, $L_2$, and $L_\infty$ are selected as the distance measures inside the sup-norms.
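To make the functional adjustment concrete, the sketch below computes the sup-norm separation of two functional m-histories for curves discretized on a common grid, with a selectable $L_1$, $L_2$, or $L_\infty$ curve distance; the helper names `curve_distance` and `history_supnorm` are hypothetical, and the grid is assumed to be (approximately) uniform.

```python
import numpy as np

def curve_distance(f, g, grid, norm="L2"):
    """Distance between two curves observed on a common (approximately uniform) grid."""
    diff = np.abs(np.asarray(f) - np.asarray(g))
    width = grid[-1] - grid[0]
    if norm == "L1":
        return np.mean(diff) * width                  # Riemann approximation of the integral
    if norm == "L2":
        return np.sqrt(np.mean(diff ** 2) * width)
    if norm == "Linf":
        return diff.max()
    raise ValueError("norm must be 'L1', 'L2', or 'Linf'")

def history_supnorm(curves, s, t, m, grid, norm="L2"):
    """Sup-norm separation of the m-histories starting at times s and t.

    `curves` is an (N, J) array: N functional observations discretized on J grid points.
    """
    return max(curve_distance(curves[s + j], curves[t + j], grid, norm)
               for j in range(m))

# toy usage on IID Brownian-motion-like curves
rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 101)
curves = np.cumsum(rng.standard_normal((500, 101)), axis=1) / np.sqrt(101)
print(history_supnorm(curves, s=0, t=10, m=3, grid=grid, norm="L2"))
```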
3. Monte-Carlo Simulation Study
We conduct Monte-Carlo experiments on simulated IID and structured functional time series to provide the recommended range of hyperparameters of the functional BDS test, namely m, r, and the preferred norms inside the sup-norms.
We use three metrics to evaluate the selection of the hyperparameters and the norms: (1) the resemblance to normality of the test statistics on an IID process; (2) the size of the test at the 1%, 5%, and 10% nominal levels; and (3) the power of rejecting the null hypothesis on a structured process. To compare the performance of the functional BDS test with an existing method, the same power experiment is also conducted on the GK independence test proposed by Gabrys and Kokoszka (2007b), a commonly used independence test in the functional time series domain.
For the resemblance to normality, we simulated 200 paths of 500 IID functional time series and computed the BDS test statistic on each path over a range of combinations of m and r, with r expressed as a multiple of s.d., where s.d. denotes the standard deviation of the residual process. Computationally, the standard deviation of the residual process can be computed by the sd.fts function in the ftsa package (Hyndman and Shang 2025).
Table 1 provides the p-values of the Kolmogorov–Smirnov (KS) test for each combination of m and r when $L_2$ is selected as the norm inside the sup-norms. Since different types of norm focus on different error loss functions, the respective tables with $L_1$ and $L_\infty$ selected as the norms inside the sup-norms are provided in Table A1 in Appendix A.
The KS test examines the null hypothesis that the computed functional BDS test statistics follow a standard normal distribution. A p-value less than 0.025 (highlighted in bold) indicates rejection of the null hypothesis, meaning the generated BDS test statistics cannot be assumed to follow a standard normal distribution. Conversely, the higher the p-value, the closer the BDS test statistics are to a standard normal distribution.
From the results in Table 1 and Table A1, we can see that the functional BDS test with a moderate m and a sufficiently large r ensures that the respective test statistics have distributions sufficiently close to a standard normal distribution under the null hypothesis. The $L_1$, $L_2$, and $L_\infty$ metrics are different ways of quantifying distance. For example, from the perspective of statistical estimation, the $L_1$-norm corresponds to the least absolute deviation and gives a least-absolute-deviation estimator if used as a criterion for estimation, while the $L_2$-norm corresponds to the least squared deviation and gives a least-squares estimator if used as a criterion for estimation. The $L_\infty$-norm corresponds to the maximum deviation and gives rise to a (robust) minimax estimator if used as a criterion for estimation. Based on the data sets we used, the different criteria do not materially influence the p-values shown in Table 1 in the main manuscript and Table A1 in
Appendix A. However, at an intuitive level, it seems that the $L_\infty$-norm may give rise to the most conservative result among the three criteria considered as far as the detection of nonlinear patterns in different types of functional time series is concerned. Indeed, the theoretical results, particularly the asymptotic results, in our paper apply to a general norm on a separable Hilbert space. This flexibility accommodates different criteria and facilitates the study of the impact of the choice of norm on the detection of nonlinearity in different classes of functional time series models.
To examine the size of the test, we compute the probability of falsely rejecting the null hypothesis using the same simulated IID functional time series as in the normality resemblance experiments. We conduct the size experiments at nominal levels of 1%, 5%, and 10%. Table 2 reports the size of the test with the $L_2$ norm at a nominal level of 1% for different combinations of the hyperparameters m and r. The results of the size experiments at the nominal levels of 5% and 10% are provided in Table A2 in Appendix A. We highlight the cells in bold when the actual size of the test exceeds the nominal level by 3% or more. The results from the size experiments are consistent with those of the normality resemblance experiments. From Table 2, a moderate m and a sufficiently large r ensure an appropriate size of the functional BDS test at the 1% nominal level. However, the results at the nominal levels of 5% and 10% presented in Table A2 appear less conclusive.
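For concreteness, the fragment below sketches how the KS check and the empirical sizes could be computed from an array of functional BDS statistics obtained over simulated IID paths; the placeholder array `bds_stats` stands in for the statistics produced by the simulation for a fixed (m, r).

```python
import numpy as np
from scipy import stats

# placeholder: in the actual experiment these are the functional BDS statistics
# computed on each of the 200 simulated IID paths for a fixed (m, r)
rng = np.random.default_rng(3)
bds_stats = rng.standard_normal(200)

# resemblance to normality: KS test against the standard normal distribution
ks_stat, ks_pvalue = stats.kstest(bds_stats, "norm")
print(f"KS p-value: {ks_pvalue:.3f}")

# empirical size: proportion of |W| exceeding the two-sided normal critical value
for level in (0.01, 0.05, 0.10):
    crit = stats.norm.ppf(1 - level / 2)
    size = np.mean(np.abs(bds_stats) > crit)
    print(f"nominal {level:.0%} -> empirical size {size:.3f}")
```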
For the power test, we simulated 200 paths of the fGARCH(1,1) process proposed by Aue et al. (2017). In each path, we generate 500 observations, and each functional observation is formed by 100 equally spaced points within (0,1). A sequence of random functions $\{y_t(u)\}$ is called a functional GARCH process of order (1,1), abbreviated as fGARCH(1,1), if it satisfies the equations
$$y_t(u) = \sigma_t(u)\, \varepsilon_t(u), \qquad \sigma_t^2(u) = \delta(u) + \alpha\big(y_{t-1}^2\big)(u) + \beta\big(\sigma_{t-1}^2\big)(u),$$
where $\delta$ is a non-negative function, the operators $\alpha$ and $\beta$ map non-negative functions to non-negative functions, and the innovations $\{\varepsilon_t\}$ are IID random functions. Our simulated fGARCH processes inherit the format of the simulated fGARCH process in Aue et al. (2017). We set $\delta$ and specify the integral operators $\alpha$ and $\beta$ as kernel integral operators whose kernels are scaled by a constant C. The innovations $\varepsilon_t$ are constructed from a sequence of IID standard Brownian motions. We specifically choose a small constant C in (15), so that the generated process has relatively weak temporal structures.
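As an illustration only (the kernel, the intercept function, and the constants below are our own placeholders rather than the authors' exact specification), the following sketch simulates a discretized fGARCH(1,1) path on 100 grid points with Brownian-motion innovations.

```python
import numpy as np

def simulate_fgarch11(n_obs=500, n_grid=100, C=0.1, burn_in=50, seed=0):
    """Simulate a discretized fGARCH(1,1) path with illustrative (placeholder) kernels."""
    rng = np.random.default_rng(seed)
    u = np.linspace(0, 1, n_grid)
    du = u[1] - u[0]

    delta = np.full(n_grid, 0.01)                              # non-negative intercept function
    kernel = C * 12.0 * np.outer(u * (1 - u), u * (1 - u))     # placeholder kernel for alpha and beta

    def integral_op(f):
        # (K f)(u) = \int kernel(u, s) f(s) ds, approximated on the grid
        return kernel @ f * du

    sigma2 = delta.copy()
    y = np.zeros((n_obs, n_grid))
    for t in range(n_obs + burn_in):
        # innovation: a standard Brownian motion on [0, 1]
        eps = np.cumsum(rng.standard_normal(n_grid)) * np.sqrt(du)
        y_t = np.sqrt(sigma2) * eps
        if t >= burn_in:
            y[t - burn_in] = y_t
        sigma2 = delta + integral_op(y_t ** 2) + integral_op(sigma2)

    return u, y

u, y = simulate_fgarch11()
print(y.shape)   # (500, 100)
```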
After simulating 200 paths of the fGARCH(1,1) process, we compute the functional BDS test statistic on each simulated functional time series. Table 3 presents the probability that the functional BDS test successfully rejects the IID hypothesis on a structured process when $L_2$ is selected as the norm. Table A3 in Appendix A provides the respective tables with $L_1$ and $L_\infty$. A value of 100% indicates that the BDS test made correct inferences on all simulated paths, whereas a value less than 100% suggests it failed to distinguish a structured process from a random one on certain paths. The same statistics for the power experiment on the GK test are provided in Table 4. The GK independence test requires two hyperparameters, p and H, since it is based on the lagged cross-covariances of the projected principal components of the functional time series. The hyperparameter p represents the number of retained principal components in the dimension reduction step, and H denotes the maximum lag of the cross-covariances considered in computing the test statistic.
In Table 3 and Table A3, the results of the power experiments indicate that the BDS test attains the highest successful rejection rate when m is between 2 and 7 and r lies in a moderate range of multiples of the standard deviation. Additionally, the functional BDS test demonstrates clear superiority over the GK test in identifying temporal structures, especially when the temporal dependence is nonlinear.
For the robustness experiments, we randomly replaced 1% of the simulated IID functional time series with observations having a distinctly higher mean than the rest and then repeated the normality resemblance experiments. Table A4 presents the p-values of the KS test for the $L_1$, $L_2$, and $L_\infty$ metrics. The results show that including random outliers does not impair the convergence to normality of the functional BDS test when $L_1$ and $L_2$ are used as the norm inside the sup-norms. However, when $L_\infty$ is used inside the sup-norms, the test is significantly affected by the outliers: their presence makes the generated test statistics fail the KS test for most combinations of m and r when $L_\infty$ is chosen as the norm inside the sup-norm.
Lastly, we performed an experiment to guide the preferred length of the functional time series so that the BDS test has satisfactory performance. We repeat the normality resemblance experiment and the power test with selected values of m and r within the recommended range indicated by our previous normality resemblance experiments and power test. The simulated functional time series lengths considered range up to 1000. The result of the normality resemblance experiment is presented in Table 5, and the power test result is given in Table 6. The experiment indicates that, with an appropriate selection of m and r, the functional BDS test has satisfactory performance for functional time series of length greater than 250.
To conclude, to ensure convergence to normality, appropriate size, and good power of the test, it is recommended that the dimension hyperparameter m lie between 2 and 7 and that the distance hyperparameter r be a moderate multiple of the s.d. of the process. For the norm inside the sup-norms, we recommend $L_1$ and $L_2$, as they are more robust to outliers. Lastly, a functional time series with more than 250 observations is recommended for the functional BDS test to perform satisfactorily.
4. Evaluation of the Adequacy of the fAR(1) and fGARCH(1,1) Models on VIX Tick Returns
We present an empirical application of the functional BDS test to evaluate the adequacy of the fAR(1) and fGARCH(1,1) models in fitting the daily curves of intraday VIX (volatility) index returns. The VIX is a forward-looking volatility measure of the future equity market based on a weighted portfolio of 30-day S&P 500 index option prices. The VIX index is a key measure of risk for the market and is considered a fear index in the finance literature. Therefore, accurately predicting the VIX index is essential in risk management, especially for hedge and pension funds. Specifically, predictions of the VIX index may be used as a (forward-looking) risk indicator of the equity market. The information from such predictions may be used by banks, financial institutions, and insurance companies to evaluate a portfolio's risk and diversification, as well as to construct investment strategies.
Most existing studies that attempt to model and predict the VIX index treat it as a discrete time series. To name a few, Konstantinidi et al. (2008) used an autoregressive fractionally integrated moving average (ARFIMA) model, and Fernandes et al. (2014) employed a heterogeneous AR model to predict future values of the VIX index. Recently, functional time series models have provided new alternatives for extracting additional information underlying the VIX dynamics and potentially provide more accurate forecasts of market expectations about future equity risk (see, e.g., Shang et al. 2019a).
The data set we consider comprises the 15-s interval observations of the VIX index from 19 March 2013 to 21 July 2017, where 15 s is the highest frequency available for the VIX index (Chicago Board Options Exchange). Some of its derivatives may trade at a higher frequency, but the index value is only recalculated and released on a 15-s basis. Consequently, to provide the most current information and the highest level of granularity for modeling its evolution, we use these 15-s VIX records. The VIX index over the investigation period is plotted in Figure 1. From the figure, it can be observed that there are several spikes or peaks in the VIX index; the highest peak corresponds to a sharp jump in the VIX index, indicating a highly volatile market as expected by market participants. It is worth noting that the timing of the first and last VIX records can vary slightly across trading days. To ensure that the start and end times of the daily curves of VIX records are constant, we use linear interpolation to fill in missing values (if any) so that the timings of the VIX records are the same for every trading day. After linear interpolation, we have a total of 1095 trading days (excluding weekends and holidays) in our investigated data set. On each trading day, VIX index values are recorded at 15-s intervals from 09:31:10 to 16:15:00, giving a total of 1616 points per day.
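A minimal sketch of this alignment step, assuming each day's records are given as (time, value) pairs in seconds since midnight, is shown below; the function name `align_to_grid` and the synthetic data are our own.

```python
import numpy as np

def align_to_grid(times_sec, values, start_sec, n_points=1616, step=15):
    """Linearly interpolate one day's VIX records onto a common 15-second grid.

    `times_sec` are observation times in seconds since midnight and `values` the
    corresponding VIX levels; the grid starts at `start_sec` and has `n_points`
    equally spaced points.
    """
    grid = start_sec + step * np.arange(n_points)
    return grid, np.interp(grid, times_sec, values)

# toy usage with synthetic records in which ~1% of grid points are missing
rng = np.random.default_rng(4)
start = 9 * 3600 + 31 * 60 + 10                      # 09:31:10 in seconds since midnight
raw_times = start + 15 * np.arange(1616)
raw_values = 15 + 0.01 * np.cumsum(rng.standard_normal(1616))
keep = rng.random(1616) > 0.01
grid, curve = align_to_grid(raw_times[keep], raw_values[keep], start_sec=start)
print(curve.shape)                                    # (1616,)
```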
Based on the interpolated index, we transformed the non-stationary intraday VIX index into daily curves of cumulative intraday returns (CIDRs). Let $P_i(t_j)$ denote the daily VIX value at time $t_j$ ($j = 1, \ldots, 1616$) on day $i$ ($i = 1, \ldots, 1095$); CIDRs are computed by
$$R_i(t_j) = 100\left[\ln P_i(t_j) - \ln P_i(t_1)\right],$$
where $\ln$ denotes the natural logarithm and consecutive grid points $t_{j-1}$ and $t_j$ are 15 s apart. The daily curves of the CIDR of the VIX index are the functional time series of interest.
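Assuming the interpolated VIX levels are stored in a (days × grid points) array, the CIDR transformation can be sketched as follows; the scaling by 100 follows the convention used in the displayed equation above.

```python
import numpy as np

def cidr(prices):
    """Cumulative intraday returns: 100 * (log price minus log price at the first grid point).

    `prices` is an (n_days, n_points) array of positive intraday VIX levels on a
    common grid; each row of the result is one daily CIDR curve.
    """
    log_p = np.log(prices)
    return 100.0 * (log_p - log_p[:, [0]])

# toy usage: 3 days of synthetic positive intraday levels on a 1616-point grid
rng = np.random.default_rng(5)
prices = 15.0 * np.exp(0.001 * np.cumsum(rng.standard_normal((3, 1616)), axis=1))
curves = cidr(prices)
print(curves.shape, curves[:, 0])   # (3, 1616); the first column is all zeros
```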
Figure 2 plots the functional time series of the CIDR VIX curves for different trading days. From the plot, we can see that there is some variation in the CIDR VIX curves across trading days. In fact, on some trading days, the qualitative behavior of the CIDR VIX curves differs from that on other trading days. For example, the CIDR VIX curve on a trading day between 5 August 2015 and 12 October 2016 exhibits a U-shaped behavior, whereas on most other trading days this U-shaped pattern is absent or less pronounced.
The candidate models we consider for fitting the daily curves of the CIDR VIX index are the fAR(1) and fGARCH(1,1) models. We use the R package 'far' to fit the observed data to the fAR(1) model. The estimation procedure for fitting the fGARCH model, via quasi-likelihood, is described in Rice et al. (2023). The VIX index returns estimated from the fAR(1) model and the fitted conditional standard deviations estimated from the fGARCH(1,1) model are plotted in Figure 3. Inspecting the figure, we see that the fAR(1) model captures some features of the original functional time series of CIDR VIX curves. Furthermore, the estimated conditional standard deviations from the fitted fGARCH(1,1) model are strictly increasing on each trading day, which makes intuitive sense for the intraday cumulative returns considered.
To evaluate the adequacy of the fAR(1) model, we apply the functional BDS test to the residual curves, defined as the differences between the observed return curves and the fitted return curves. Table 7 presents the functional BDS test statistics of the residuals of the fAR(1) model for a variety of combinations of the hyperparameters m and r. Since most test statistics exceed the 1% critical value of a standard normal distribution, the functional BDS test rejects the null hypothesis of IID residuals. In other words, the fAR(1) model cannot capture all the structures underlying the observed daily curves of VIX returns.
Since the fGARCH(1,1) is a multiplicative model, we use the standardized returns (the observed returns divided by the fitted conditional standard deviations) to evaluate its adequacy. In the univariate case, when evaluating the adequacy of a GARCH model, previous studies (see Brock et al. 1991) suggest that if the BDS test is applied directly to the standardized returns, the BDS statistic needs to be adjusted to have the right size. Fernandes and Preumont (2012) proposed applying the BDS test to the natural logarithm of the squared standardized residuals, so that the logarithmic transformation casts the GARCH model into a linear additive model. Table 7 records the functional BDS test statistics of the logarithm of the squared standardized returns. From the table, we see that the two models, namely fAR(1) and fGARCH(1,1), are rejected in most cases.
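A minimal sketch of this transformation, assuming `returns` and `sigma_hat` are (days × grid points) arrays of observed CIDR curves and fitted conditional standard deviation curves, is given below; in practice, the resulting curves would then be passed to the functional BDS test.

```python
import numpy as np

def log_squared_standardized(returns, sigma_hat, eps=1e-12):
    """ln[(R / sigma_hat)^2], computed pointwise on the discretization grid.

    A small constant guards against taking the logarithm of zero (e.g., the CIDR
    value at the first grid point is exactly zero).
    """
    z = returns / sigma_hat
    return np.log(z ** 2 + eps)

# toy usage with synthetic curves
rng = np.random.default_rng(6)
returns = rng.standard_normal((5, 1616))
sigma_hat = 0.5 + 1e-3 * np.cumsum(np.abs(rng.standard_normal((5, 1616))), axis=1)
transformed = log_squared_standardized(returns, sigma_hat)
print(transformed.shape)   # (5, 1616)
```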
To compare the performance of the functional BDS test with the GK test as a model specification test in empirical analysis, Table 8 documents the p-values of the GK independence test for various combinations of p and H on the fitted residuals after fAR(1) and the standardized returns after fGARCH(1,1). In Gabrys and Kokoszka (2007b), the authors investigated the finite-sample performance of the GK independence test for small values of p and H and concluded that the test power against the fAR(1) model is very good with their recommended settings. Since the optimal parameters of the GK test depend on the underlying dynamics of the residuals, which are unknown in empirical studies, we also apply the GK test with relatively larger H and p.
Comparing the inferences drawn from the functional BDS test and the GK test provides additional insights into the dynamics of the CIDR VIX functional time series. Based on the functional BDS test results, both the fAR and fGARCH models are insufficient to capture the temporal structure exhibited in the daily CIDR curves of the VIX index. However, the GK test showed evidence of a violation of independence only for the fAR(1) model when a larger p is selected. This indicates that the fGARCH model better fits the observed curves compared to the fAR(1) model. Additionally, the seemingly contradictory conclusions regarding the fGARCH residuals from the functional BDS test and the GK test indicate a nonlinear structure exhibited in the daily curves of the CIDR VIX index. Furthermore, the GK test results indicate that the test’s inference can vary based on the parameters selected, whereas our functional BDS test provides consistent inferences across different parameter selections. This reliable statistical inference forms the foundation for modeling and forecasting VIX returns, which play a crucial role in risk management.
5. Conclusions
In this paper, we extended the BDS test to functional time series. Like the BDS test in the univariate case, the functional BDS test enjoys some key desirable properties, making it a plausible candidate for testing model specification and nonlinearity. Those advantages include a minimal requirement of prior assumptions and knowledge and the capacity to detect linear and nonlinear structures. We proved that the asymptotic normality previously established for the test statistic under the null hypothesis in the univariate case remains valid after extending the test statistic to the functional case. Additionally, we conducted Monte-Carlo experiments on the functional BDS test to provide the recommended range of its hyperparameters and data length. Outside our recommended ranges, the functional BDS test results can be sensitive to the choice of hyperparameters, which aligns with findings for the conventional BDS test. We showed that, with appropriate selection of the hyperparameters, the functional BDS test only requires data of length 250 to ensure that the test statistics converge to normality and attains a 100% correct rate in detecting predictability in a simulated functional time series with a relatively weak temporal structure. Moreover, if either $L_1$ or $L_2$ is selected as the distance measure inside the sup-norms, the functional BDS test is also robust to outliers. The code for the functional BDS test is available at
https://github.com/Landy339/functional_BDS_test (accessed on 12 December 2024).
We illustrated the significance of our research in an empirical analysis, where we used the functional BDS test to evaluate the adequacy of the fAR model and the fGARCH model in fitting a functional time series of the CIDR VIX index. After fitting the candidate models, we applied the functional BDS test to detect the remaining structures in the residuals. The test rejects the independence null hypothesis and thus concludes that both the fAR and fGARCH models are insufficient to fully capture the temporal structures exhibited in the observed curves. In addition, our test showed added sensitivity in detecting predictability, particularly for nonlinear structure, compared to the existing independence test in functional time series. We compared the results from the functional BDS test with those from the GK test, an existing linear independence test in the functional time series domain. The results showed that our newly proposed functional BDS test provides a remedy for the weakness of the GK test by detecting the nonlinear structure in the fGARCH residuals that the GK test neglects. Without the new tool, one could be led by the existing independence test to conclude that the fGARCH is an adequate model for the observed data and thereby overlook its nonlinear temporal structures.
The functional BDS test is the first nonlinearity test and the first model specification test proposed for functional time series. However, the major limitation of the proposed test is that it can only detect the remaining structures in the residuals; it cannot indicate the form of the detected structures. Consequently, if a model is deemed insufficient, practitioners have no guidance on which models could fully capture the structures in the observed data.
We conclude by highlighting several potentially interesting issues that may be considered by extending the results obtained in this paper. (1) We provide a range of plausible tuning parameters, with the identification of optimal parameters for specific data sets left as future work. (2) Although our study demonstrated that with the proper selection of norms, the functional BDS test is robust to outliers, future research can examine its behavior on non-stationary functional time series, which frequently arise in real-world data. (3) The current study focused on univariate functional time series. Future work could investigate the extension of nonlinearity tests to multivariate functional time series while accounting for potential correlations among the variables. (4) Since our empirical analysis indicates the existence of nonlinearity in financial functional time series, it is hoped that the proposed test and the respective results will inspire further research into the dependence structure of functional time series, particularly in analyzing, modeling, and forecasting nonlinear functional time series. (5) We demonstrated the use of the BDS test via the 15-s VIX data. We could apply the functional BDS test to other climate or biomedical data sets.
The functional BDS test proposed in this paper provides market practitioners in banks, financial institutions, insurance companies, and regulatory bodies with a theoretically sound and practically feasible way to detect nonlinearity in financial data and model building relevant to risk management. Specifically, we illustrate, using empirical data on the VIX index, how the proposed functional BDS test may be used to detect nonlinearity in the VIX index data and model building for a (forward-looking) risk indicator. The proposed test provides market professionals with a rigorous way to assess the suitability and reliability of their models in managing portfolio risk and diversification, as well as in constructing investment strategies.