Abstract
Despite the growing interest in realized stochastic volatility (RSV) models, their estimation techniques, such as simulated maximum likelihood (SML), are computationally intensive. Based on the realized volatility equation, this study demonstrates that, in finite samples, the quasi-maximum likelihood estimator based on the Kalman filter is competitive with the two-step SML (2SML) estimator, which is less efficient than the SML estimator. In the empirical analysis of the S&P 500 index, the quasi-likelihood ratio tests favored the two-factor realized asymmetric stochastic volatility model with the standardized t distribution among the alternative specifications, and the analysis of out-of-sample forecasts favors the RSV models, rejecting the model without the realized volatility measure. Furthermore, the forecasts of the alternative RSV models are statistically equivalent for the data covering the global financial crisis.
Keywords:
realized volatility; stochastic volatility; asymmetry; heavy-tailed distribution; quasi-maximum likelihood estimation
JEL Classification:
C2; C22
1. Introduction
Over the last two decades, research on realized volatility has received significant attention in modeling and forecasting the volatility of financial returns. For the generalized autoregressive conditional heteroskedasticity (GARCH) class models, Engle and Gallo (2006) and Shephard and Sheppard (2010) incorporated realized volatility for modeling and forecasting volatility. Using the information of return and realized volatility measure simultaneously, Hansen et al. (2012) and Hansen and Huang (2016) developed the “realized GARCH” and “realized exponential GARCH” models, respectively.
The literature on stochastic volatility models considers a realized volatility measure to be an estimate of latent volatility. As highlighted by Barndorff-Nielsen and Shephard (2002), there is a gap between true volatility and its consistent estimate, referred to as the “realized volatility error”. Since this error is nonnegligible, Barndorff-Nielsen and Shephard (2002), Bollerslev and Zhou (2006), Takahashi et al. (2009), and Asai et al. (2012a, 2012b) accommodated a homoscedastic disturbance as an ad hoc approach. Analogous to the realized GARCH model, Takahashi et al. (2009) suggested the realized stochastic volatility (RSV) model, which is based on the information of the return and realized volatility measure.
As in the GARCH model, it is useful to accommodate asymmetric effects and heavy-tailed conditional distributions in stochastic volatility models. For the former, a typical approach is to assume a negative correlation between the return and the disturbance for the one-step-ahead log-volatility (see Harvey and Shephard 1996; Yu 2005), known as the leverage effect. Instead of the standard normal distribution for the conditional distribution of the return, we may consider the standardized t distribution and the generalized error distribution. Harvey et al. (1994), Sandmann and Koopman (1998), Liesenfeld and Jung (2000), and Asai (2008, 2009), among others, assumed the (standardized) t distribution. Furthermore, the empirical results in Liesenfeld and Jung (2000) and Asai (2009) indicate that the standardized t distribution yields better fits than the generalized error distribution. Owing to the statistical structure of stochastic volatility models, the fourth moment of the return series can be obtained analytically.
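As an illustration of this property, for the basic SV model $y_t = \varepsilon_t \exp(h_t/2)$ with a standardized t error and a stationary Gaussian AR(1) log-volatility, the kurtosis of the returns is available in closed form (a standard result, stated here for illustration under the conditions $|\phi| < 1$ and $\nu > 4$; the notation follows the model of Section 2):

$$\frac{E(y_t^4)}{\{E(y_t^2)\}^2} = \frac{3(\nu - 2)}{\nu - 4} \exp\!\left(\sigma_h^2\right), \qquad \sigma_h^2 = \frac{\sigma_\eta^2}{1 - \phi^2},$$

so both the heavy-tailed error (through $\nu$) and the volatility persistence (through $\phi$ and $\sigma_\eta^2$) inflate the kurtosis above the Gaussian value of three.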
For estimating various RSV models, Koopman and Scharth (2013) and Shirota et al. (2014) used the simulated maximum likelihood (SML) estimation and the Bayesian Markov chain Monte Carlo (MCMC) technique, respectively. Both approaches are computationally demanding. This study reconsiders the quasi-maximum likelihood (QML) method of Harvey et al. (1994) as it is straightforward to include the realized volatility equation, and it is expected to contribute toward improving the efficiency of the QML estimator.
Interest in modeling volatility using the information on the return and realized volatility measure simultaneously is growing. Extending the class of the GARCH, Hansen et al. (2012) and Hansen and Huang (2016) developed the “realized GARCH” and the “realized exponential GARCH” models, respectively. Using the stochastic volatility (SV) models (e.g., see Chib et al. 2009), Takahashi et al. (2009) and Koopman and Scharth (2013) considered the “realized SV” (RSV) and the “realized asymmetric SV” (RSV-A) models, respectively.
While the realized GARCH class models can be estimated by the maximum likelihood estimation (MLE) technique, RSV class models require computationally demanding techniques. While Takahashi et al. (2009) and Shirota et al. (2014) suggested the Bayesian MCMC method, Koopman and Scharth (2013) developed the SML technique. Note that Koopman and Scharth (2013) suggested a two-step SML (2SML) estimator that is less efficient but less computationally intensive than the SML estimator. The current study applies the QML estimation of Harvey et al. (1994) to the RSV models to show its practical usefulness.
The remainder of this paper is organized as follows. The RSV-A model with the standardized t distribution (RSVt-A) is outlined in Section 2. The asymptotic property of the QML estimator using the Kalman filter is discussed in Section 3, and its finite sample properties are examined and compared with those of the 2SML method of Koopman and Scharth (2013). The empirical results for the Standard and Poor’s (S&P) 500 index using the return and realized volatility measure are reported in Section 4. Finally, concluding remarks are presented in Section 5.
2. Realized Stochastic Volatility Models
2.1. Model
Let $y_t$ and $x_t$ denote the open-to-close return of a financial asset and the log of a realized measure of its volatility on day $t$, respectively. The pair of the close-to-close return and the realized volatility measure accommodating overnight volatility can be used instead, as in Koopman and Scharth (2013).
Consider the realized asymmetric SV model with the standardized t distribution as follows:

$$y_t = \varepsilon_t \exp(h_t/2), \qquad \varepsilon_t = z_t \sqrt{(\nu - 2)/w_t}, \qquad w_t \sim \chi_\nu^2, \quad (1)$$

$$h_{t+1} = c + \phi (h_t - c) + \eta_t, \quad (2)$$

$$x_t = \xi + h_t + u_t, \quad (3)$$

$$\begin{pmatrix} z_t \\ \eta_t \\ u_t \end{pmatrix} \sim N\!\left( \mathbf{0}, \begin{pmatrix} 1 & \rho \sigma_\eta & 0 \\ \rho \sigma_\eta & \sigma_\eta^2 & 0 \\ 0 & 0 & \sigma_u^2 \end{pmatrix} \right), \quad (4)$$

where $c$, $\phi$, $\sigma_\eta^2$, $\xi$, $\sigma_u^2$, $\rho$, and $\nu$ are parameters, and $w_t$ is i.i.d. and independent of $(z_t, \eta_t, u_t)'$. According to this structure, $\varepsilon_t$ follows a standardized t distribution with the degree-of-freedom parameter $\nu$. Note that $E(\varepsilon_t) = 0$, $V(\varepsilon_t) = 1$, $E(\varepsilon_t^3) = 0$, and $E(\varepsilon_t^4) = 3(\nu - 2)/(\nu - 4)$. To guarantee the stationarity of $h_t$ and the existence of the fourth moment, we assume $|\phi| < 1$ and $\nu > 4$, respectively. As $\rho$ is the correlation coefficient between $z_t$ and $\eta_t$, it satisfies $|\rho| \le 1$. If the realized volatility measure, $x_t$, is a consistent estimate of $h_t$, $\xi$ is expected to be zero. By this specification, a non-zero $\xi$ implies a (finite sample) bias in $x_t$. We denote the model (1)–(4) as the “RSVt-A”. The model reduces to the asymmetric SV model with the standardized t distribution when we omit Equation (3) (see Harvey and Shephard 1996; Asai 2008). Without the leverage effect, that is, $\rho = 0$, we obtain the RSVt model. By letting $\nu \to \infty$, the model reduces to the RSV-A model. As in Asai (2008) and Koopman and Scharth (2013), we can consider multi-factor models by allowing multiple factors in the log-volatility as $h_t = h_{1,t} + \cdots + h_{q,t}$.
Following Harvey et al. (1994) and Harvey and Shephard (1996), we obtain the state-space form for the RSVt-A model. The logarithmic transformation of the squared $y_t$ yields

$$\begin{pmatrix} \log y_t^2 \\ x_t \end{pmatrix} = \begin{pmatrix} \mu_\varepsilon \\ \xi \end{pmatrix} + \iota h_t + \begin{pmatrix} e_t \\ u_t \end{pmatrix}, \quad (5)$$

where $\iota$ is the vector of ones, $\mu_\varepsilon = E(\log \varepsilon_t^2)$, and $e_t = \log \varepsilon_t^2 - \mu_\varepsilon$. By Equation (26.3.46) in Abramowitz and Stegun (1970), we obtain $E(\log \varepsilon_t^2) = \psi(1/2) - \psi(\nu/2) + \log(\nu - 2)$ and $V(\log \varepsilon_t^2) = \psi'(1/2) + \psi'(\nu/2)$, where $\psi(z)$ is the digamma function defined by $\psi(z) = d \log \Gamma(z)/dz$.
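The moment constants behind this transformation can be checked numerically. The sketch below is an illustration, not part of the original estimation code; it uses the standard results that $\log z_t^2$ for Gaussian $z_t$ has mean $\psi(1/2) + \log 2 = -\gamma_E - \log 2 \approx -1.2704$ and variance $\psi'(1/2) = \pi^2/2 \approx 4.9348$:

```python
import numpy as np

# Exact moments of log(z^2) for z ~ N(0, 1), i.e., a log chi-squared
# variable with one degree of freedom:
#   E[log z^2]   = psi(1/2) + log 2 = -euler_gamma - log 2
#   Var[log z^2] = psi'(1/2) = pi^2 / 2
mean_exact = -np.euler_gamma - np.log(2.0)
var_exact = np.pi ** 2 / 2.0

# Monte Carlo check against simulated draws
rng = np.random.default_rng(12345)
w = np.log(rng.standard_normal(2_000_000) ** 2)
print(mean_exact, w.mean())   # both close to -1.2704
print(var_exact, w.var())     # both close to  4.9348
```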
By the transformation to $\log y_t^2$, we lose the information on the sign of $y_t$. To recover such information, we define the sign of $y_t$ as $s_t = I(y_t \ge 0) - I(y_t < 0)$, where $I(A)$ is the indicator function, which takes one if the condition A holds, and zero otherwise. As in Harvey and Shephard (1996), we can modify Equation (2) as follows:

$$h_{t+1} = c + \phi (h_t - c) + \gamma s_t + \eta_t^{*}, \quad (6)$$

with

$$\eta_t^{*} \mid s_t \sim (0, \sigma_*^2), \quad (7)$$

where $\gamma = \sqrt{2/\pi}\,\rho \sigma_\eta$, $\eta_t^{*} = \eta_t - E(\eta_t \mid s_t)$, and $\sigma_*^2 = \sigma_\eta^2 - \gamma^2$. Hence, the measurement in Equation (5) and the transition in Equation (6) form the state-space model. By applying the Kalman filter to the state-space form in (5) and (6), it is straightforward to construct the quasi-log-likelihood function (see Appendix A).
As in Koopman and Scharth (2013), the assumption on the covariance matrix in Equation (4) can be relaxed by considering dependence between the conditional return and the measurement noise. This modification requires a change in Equation (3) to construct the state-space model, as in Equations (6) and (7).
2.2. Realized Kernel Estimator
As a realized volatility measure, we adopt the realized kernel (RK) estimator developed by Barndorff-Nielsen et al. (2008), since it is a consistent estimator of the quadratic variation and is robust to microstructure noise and jumps. This subsection explains the RK estimator concisely.
Consider that the latent log-price $p_t^{*}$ follows a Brownian semimartingale plus jump process given by

$$p_t^{*} = \int_0^t \mu_u \, du + \int_0^t \sigma_u \, dW_u + J_t, \qquad J_t = \sum_{j=1}^{N_t} C_j,$$

where $\mu_t$ is a predictable locally bounded drift; $\sigma_t$ is a càdlàg process of volatility; $W_t$ is a process of Brownian motion; and $J_t$ is a finite activity jump process, which has a finite number of jumps in any bounded interval of time. More precisely, $N_t$ counts the number of jumps that have occurred in the interval $[0, t]$, with $N_t < \infty$ for any $t$, and $C_j$ denotes the size of the $j$th jump.
The quadratic variation of $p_t^{*}$ is given by

$$[p^{*}]_t = \int_0^t \sigma_u^2 \, du + \sum_{j=1}^{N_t} C_j^2,$$

where $\int_0^t \sigma_u^2 \, du$ is the integrated variance. The estimator of Barndorff-Nielsen et al. (2008) for the quadratic variation is based on the noisy observation of the log-price, $p_\tau = p_\tau^{*} + \epsilon_\tau$, where $\epsilon_\tau$ is a microstructure noise term.
Barndorff-Nielsen et al. (2008) suggested a non-negative estimator that takes the following form:

$$K(p) = \sum_{h=-H}^{H} k\!\left(\frac{h}{H+1}\right) \gamma_h, \qquad \gamma_h = \sum_{j=|h|+1}^{n} x_j x_{j-|h|},$$

where $x_j$ is the $j$th high-frequency return calculated over the interval $[\tau_{j-1}, \tau_j]$, and $k(\cdot)$ is a kernel weight function. For practical purposes, Barndorff-Nielsen et al. (2009) focused on the Parzen kernel function, defined by

$$k(x) = \begin{cases} 1 - 6x^2 + 6x^3, & 0 \le x \le 1/2, \\ 2(1 - x)^3, & 1/2 < x \le 1, \\ 0, & x > 1. \end{cases}$$

Barndorff-Nielsen et al. (2008, 2009) provided an estimator for the bandwidth $H$. Under mild regularity conditions, Barndorff-Nielsen et al. (2008) demonstrated that $K(p)$ converges to the quadratic variation in probability as the number of intraday observations increases.
For practical purposes, it is convenient to work with the log of the RK estimator for its stability. Even though the RK estimator is consistent, it is subject to finite sample bias and noise, which Equation (3) accommodates through the constant term and the disturbance.
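A minimal sketch of the realized kernel computation may clarify the construction. The bandwidth `H` is treated as given here, whereas Barndorff-Nielsen et al. (2009) provide a data-driven rule for it, and refinements such as end-point jittering are omitted:

```python
import numpy as np

def parzen(x):
    # Parzen kernel weight function (Barndorff-Nielsen et al. 2009)
    x = abs(x)
    if x <= 0.5:
        return 1.0 - 6.0 * x ** 2 + 6.0 * x ** 3
    if x <= 1.0:
        return 2.0 * (1.0 - x) ** 3
    return 0.0

def realized_kernel(returns, H):
    # K = sum_{h=-H}^{H} k(h / (H + 1)) * gamma_h, where gamma_h is the
    # h-th realized autocovariance of the high-frequency returns
    x = np.asarray(returns, dtype=float)
    n = len(x)
    total = 0.0
    for h in range(-H, H + 1):
        gamma_h = float(np.dot(x[abs(h):], x[: n - abs(h)]))
        total += parzen(h / (H + 1)) * gamma_h
    return total

# With H = 0, the estimator collapses to the plain realized variance
print(realized_kernel([0.01, -0.02, 0.015], 0))  # sum of squared returns
```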
3. QML Estimation via Kalman Filter
3.1. QML Estimation
Define the parameter vector $\theta = (c, \phi, \sigma_\eta^2, \xi, \sigma_u^2, \rho, \nu)'$. For the state-space form (5) and (6), applying the Kalman filtering algorithm produces the quasi-log-likelihood function:

$$\log L(\theta) = -\frac{1}{2} \sum_{t=1}^{T} \left( n \log(2\pi) + \log |F_t| + v_t' F_t^{-1} v_t \right),$$

where the prediction error $v_t$ and its covariance matrix $F_t$ can be obtained as described in Appendix A, and $n$ is the dimension of the measurement vector. Maximizing the quasi-log-likelihood derives the QML estimator, $\hat\theta$. Although the vector of errors has a non-Gaussian distribution, the state-space form has the martingale property and at least a finite fourth moment. Based on the results in Dunsmuir (1979), it is straightforward to demonstrate the consistency and the asymptotic normality of the QML estimator, as follows:

$$\sqrt{T}\,(\hat\theta - \theta_0) \stackrel{d}{\longrightarrow} N(0, \Omega),$$

where $\theta_0$ is the vector of true parameters. The covariance matrix $\Omega$ is equivalent to that of the MLE based on the Whittle likelihood. Using the equivalence, we can demonstrate that the quasi-likelihood ratio (QLR) statistic has an asymptotic $\chi^2$ distribution under the null hypothesis (e.g., see the proof of Theorem 3.1.3 in Taniguchi and Kakizawa 2000).
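In practice, the QLR test compares nested specifications via twice the difference of the maximized quasi-log-likelihoods, with degrees of freedom equal to the number of restrictions. A minimal sketch follows; the log-likelihood values in the usage line are hypothetical, and the critical values are the standard chi-squared ones:

```python
# 5% critical values of the chi-squared distribution
CHI2_CRIT_5PCT = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488}

def qlr_test(loglik_unrestricted, loglik_restricted, df):
    # QLR = 2 (l_U - l_R), asymptotically chi-squared(df) under the null
    stat = 2.0 * (loglik_unrestricted - loglik_restricted)
    return stat, stat > CHI2_CRIT_5PCT[df]

# e.g., testing rho = 0 (RSV vs. RSV-A) is a one-restriction test;
# the two log-likelihood values below are illustrative only
stat, reject = qlr_test(-4210.3, -4215.8, df=1)
print(stat, reject)
```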
Durbin and Koopman (2001) suggested a general approach for obtaining the simulated likelihood function for non-Gaussian state-space models, and Koopman and Scharth (2013) applied it to the RSVt-A model, developing the SML approach and its two-step version. For the SML method, the likelihood function can be approximated arbitrarily precisely by decomposing it into a Gaussian part, constructed with the Kalman filter, and a remainder function, for which the expectation is evaluated through simulation. The decomposition reduces the computational time, but the SML is still computer-intensive, since it requires a T-dimensional Monte Carlo integration over the latent variables whenever the log-likelihood function is evaluated. To avoid this problem, Koopman and Scharth (2013) developed the 2SML estimator. The first step provides estimates via the Kalman filter for the state-space model consisting of Equations (2) and (3), while the second maximizes the remaining part of the likelihood function to obtain the remaining parameters and to correct the bias caused by neglecting Equation (1) in the first step (see Appendix B for details).
Compared with the SML estimator, the 2SML and QML estimators are less efficient owing to, respectively, the fluctuations caused by the asymptotic variance of the first-step estimator and the non-Gaussianity of the measurement error in Equation (5). Hence, the inefficiency of these two estimators derives from different sources, and the finite sample properties of the 2SML and QML estimators are worth examining. Note that the QML estimation is faster than the 2SML method, since the former has no additional step using simulated quantities. Recently, Asai et al. (2017) developed a two-step estimation method based on the Whittle likelihood for the RSV-A model. While their approach requires two steps, the QML estimation using the Kalman filter needs no additional step, implying that the two-step Whittle likelihood estimator is less efficient than the QML estimator.
3.2. Finite Sample Property of QML Estimator
In this subsection, we conduct a Monte Carlo experiment to investigate the finite sample properties of the QML estimator, with a comparison with the 2SML estimator. Although the QML and 2SML estimators are asymptotically less efficient than the SML estimator, these estimation methods are computationally faster than the SML approach. Hence, it is worth comparing the QML and 2SML estimators. The experiments follow the framework in Koopman and Scharth (2013) with two data-generating processes (DGPs) based on Equations (1)–(4). The first DGP was the RSV-A model with the true parameters reported in Table 1, while the second was the RSVt model based on the true parameters in Table 2. The sample size was set as in Koopman and Scharth (2013). While Koopman and Scharth (2013) set the number of replications to 250 for the computationally intensive SML method, we considered 2000 replications for the fast estimation procedures, that is, the QML and 2SML methods. For details regarding the 2SML estimation, see Appendix B. All the experiments were run on MATLAB R2022b, using the interior-point algorithm and starting from the true parameter values.
Table 1.
Monte Carlo results for QML and 2SML estimators for RSV-A.
Table 2.
Monte Carlo results for QML and 2SML estimators for RSVt.
Regarding the QML and 2SML estimates for the RSV-A model, Table 1 reports the sample means, standard deviations, and root mean squared errors (RMSEs) divided by the absolute values of the corresponding true parameters. As presented in Table 1, the sample means of the QML estimates were close to the true values, implying that the finite sample biases are negligible. The relative RMSE takes values less than 0.51. The finite sample biases for the 2SML estimator are negligible, except for c. The bias is caused by the fluctuations in the first-step estimator, and it will disappear as the sample size increases. Compared with the results for the 2SML estimator, the values for the QML estimator are close to the corresponding ones, except for the constant term in the realized volatility equation, for which the QML estimator has a higher value of RMSE, implying an inefficiency in estimating this constant caused by including the return equation. Regarding c, the QML estimator has smaller standard deviations and RMSEs, implying an inefficiency caused by the first-step estimates in the 2SML method.
The simulation results for the RSVt model are reported in Table 2, implying that the finite sample biases are negligible. The results for the RMSE indicate that the QML and 2SML estimators are competitive. Owing to the inefficiency caused by the two-step estimation and bias correction, the 2SML estimator has greater values of RMSE for some parameters. On the other hand, the 2SML estimator has a smaller value of RMSE for $\nu$, indicating the inefficiency caused by approximating the distribution of the log of the squared t variable by a normal distribution in the QML estimation.
The 2SML method estimates the parameters using the information of $x_t$ in the first step, while the QML estimation uses that of $(\log y_t^2, x_t)$ to obtain the estimates of all the parameters. These estimations are based on the Kalman filtering algorithm, and the common parameters are those of Equations (2) and (3). Hence, it is possible to examine the contribution of $\log y_t^2$ to estimating these parameters via the Kalman filter. According to Table 1 and Table 2, the differences are negligible for most of the common parameters, indicating that the contribution of $\log y_t^2$ in the state-space form is negligible for estimating them. The difference between the variances of the measurement errors in the $\log y_t^2$ and $x_t$ equations, 4.93 and 0.05, respectively, supports these results.
The Monte Carlo results imply that the QML estimator based on the Kalman filter is competitive with the 2SML estimator in Koopman and Scharth (2013). Owing to computational simplicity, the QML estimator is a fast and useful alternative to the 2SML estimator.
4. Empirical Analysis
4.1. Estimation Results
To estimate the alternative RSV models using the QML method based on the Kalman filter, we use the daily return and realized volatility measure for the S&P 500 index. For the realized volatility measure, we selected the RK estimator of Barndorff-Nielsen et al. (2008), as it is robust to microstructure noise and jumps, as explained above. As the realized volatility is calculated using intraday data, the open-to-close return is used for the return series, as in Hansen et al. (2012). The data were obtained from the Oxford Man Institute of Quantitative Finance, and the sample period is from 22 December 2005 to 4 December 2017, giving 3000 observations. The first part of the observations is used for estimating the parameters, and the remainder is reserved for forecasting. The descriptive statistics for the whole sample are presented in Table 3. The standardized variable was calculated as $y_t/\exp(x_t/2)$, as $x_t$ is the log of the RK estimator. The return and RK have heavy tails, whereas the kurtoses of $x_t$ and the standardized variable are close to three. Compared with the return, the standardized variable is close to the Gaussian distribution.
Table 3.
Descriptive statistics for S&P 500.
This section compares six models: SV, RSV, RSV-A, RSVt, RSVt-A, and the two-factor RSVt-A (2fRSVt-A). Among these, the last model is defined by Equations (1) and (3), with

$$h_t = c + h_{1,t} + h_{2,t}, \qquad h_{i,t+1} = \phi_i h_{i,t} + \eta_{i,t}, \quad i = 1, 2.$$
As discussed in the previous section, the model comparison is based on the QLR test.
The QML estimates for the six models are reported in Table 4, indicating that all the parameters are significant at the five percent level. The estimates for the SV model are typical values in empirical analyses. For the RSV model, the estimates of the common parameters are similar to those of the SV model. As discussed in Section 3.2, the contribution of the return equation is negligible for estimating these parameters in the RSV model. In other words, the finite sample bias of the QML estimator for them in the SV model is corrected in the RSV model owing to the contribution of the realized volatility equation used to construct the RSV model. As the estimate of $\sigma_\eta^2$ decreases, the value of $\phi$ increases, keeping the variance of $h_t$ at a similar level. The estimate of $\xi$ is negative and significant, which may be caused by the finite sample bias in the RK estimates. Note that it is inappropriate to compare the quasi-log-likelihoods of the SV and RSV models, since the former excludes the information of $x_t$. The estimates for the RSV-A model are close to those of the RSV model. The estimate of $\rho$ is negative and significant, implying the existence of the leverage effect. The QLR test rejects the null hypothesis $\rho = 0$. For the RSVt model, the estimate of $\nu$ is 11.2. In contrast, the QLR test failed to reject the null hypothesis of the Gaussian distribution. As implied by the Monte Carlo results in Table 2, there is an inefficiency in estimating $\nu$, which may yield an ambiguous result for the inference on $\nu$. Note that the descriptive statistics for the standardized variable in Table 3 support the results of the QLR tests. The QLR tests in Table 4 indicate that the RSVt-A is preferred to the RSV and RSVt models. For the 2fRSVt-A model, the estimates in the first factor are larger than the corresponding values in the second factor. The QLR test rejects the null hypothesis of the one-factor model; thus, the tests selected the 2fRSVt-A model among the six models.
Table 4.
QML estimates via the Kalman filter for S&P 500.
4.2. Forecasting Performance
We compare the out-of-sample forecasts of the SV and the five RSV models. For these six models, the Kalman filter prediction for the state-space form in (5) and (6) provides the one-step-ahead forecast of the log-volatility, whose exponential is a forecast of the quadratic variation for the next day. By updating the parameter estimates, we calculate the forecasts using a rolling window of recent observations. An alternative forecast can be considered as follows. Under Gaussianity, the distribution of the one-step-ahead log-volatility conditional on the past observations is normal, with the mean and variance given by the Kalman filter prediction in Appendix A. Then, the conditional distribution of its exponential is log-normal, with conditional mean equal to the exponential of the predicted mean plus one-half of the predicted variance. The latter adjusted value can be used as a forecast as an alternative to the Kalman filtering prediction.
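The adjusted forecast exploits the log-normal mean formula: if the predictive distribution of the log-volatility is Gaussian with mean a and variance p, the conditional mean of its exponential is exp(a + p/2), not exp(a). A one-line sketch, where the names `a_pred` and `p_pred` stand for the Kalman prediction moments and are illustrative:

```python
import numpy as np

def forecast_quadratic_variation(a_pred, p_pred):
    # E[exp(h) | F_t] for h | F_t ~ N(a_pred, p_pred): log-normal mean.
    # Using exp(a_pred) alone would understate the conditional mean.
    return np.exp(a_pred + 0.5 * p_pred)

print(forecast_quadratic_variation(-9.2, 0.3))  # exceeds exp(-9.2)
```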
For comparison, we obtain forecasts based on the RSV, RSV-A, and RSVt models using the 2SML method, as explained in Koopman and Scharth (2013). For the RSV-A model, we first compute the Kalman filter prediction of the log-volatility and subsequently add the leverage effect. Since the RSV and RSVt models have no leverage effect, they are free from the latter part.
For comparing the out-of-sample forecasts, Patton (2011) suggested an approach using imperfect volatility proxies. Patton (2011) examined the functional form of the loss function for comparing volatility forecasts, such that the rankings are robust to the presence of noise in the proxies. According to the definition in Patton (2011), a loss function is “robust” if the ranking of any two volatility forecasts, $H_{1t}$ and $H_{2t}$, by expected loss is the same whether the ranking is performed using the true conditional variance or an unbiased volatility proxy, $\hat\sigma_t^2$. Patton (2011) demonstrated that the squared forecast error and quasi-likelihood type loss functions, defined by

$$\mathrm{MSFE:} \quad L(\hat\sigma_t^2, H_t) = (\hat\sigma_t^2 - H_t)^2, \quad (10)$$

$$\mathrm{QLIKE:} \quad L(\hat\sigma_t^2, H_t) = \frac{\hat\sigma_t^2}{H_t} - \log\frac{\hat\sigma_t^2}{H_t} - 1, \quad (11)$$

are robust, and are based on the forecast error $\hat\sigma_t^2 - H_t$ and the standardized forecast error $\hat\sigma_t^2/H_t$, respectively.
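The two robust losses can be sketched directly; the argument names `proxy` (the unbiased volatility proxy) and `fcst` (the model forecast) are illustrative, and the QLIKE version is the normalized one that equals zero for a perfect forecast:

```python
import numpy as np

def msfe(proxy, fcst):
    # Squared forecast error (robust loss, Patton 2011)
    proxy, fcst = np.asarray(proxy, float), np.asarray(fcst, float)
    return (proxy - fcst) ** 2

def qlike(proxy, fcst):
    # QLIKE loss, normalized so a perfect forecast yields zero;
    # it depends only on the standardized error proxy / fcst
    r = np.asarray(proxy, float) / np.asarray(fcst, float)
    return r - np.log(r) - 1.0

print(msfe(1.2, 1.0), qlike(1.2, 1.0))
```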
For the loss functions in (10) and (11), the MCS procedure in Hansen et al. (2011) enables us to determine the set of models that consists of the best model(s) from a collection of competing models. Define $d_{ij,t} = L_{i,t} - L_{j,t}$ and $\bar d_{ij}$ as the difference in the loss functions of two competing models and its sample mean, respectively. Under the null hypothesis $H_0 : E(d_{ij,t}) = 0$ for all pairs $(i, j)$, Hansen et al. (2011) considered two kinds of test statistics:

$$T_R = \max_{i,j} |t_{ij}|, \qquad T_{\max} = \max_i t_{i\cdot}, \qquad t_{ij} = \frac{\bar d_{ij}}{\sqrt{\widehat{\mathrm{var}}(\bar d_{ij})}},$$

where $\widehat{\mathrm{var}}(\bar d_{ij})$ is a bootstrap estimate of the variance of $\bar d_{ij}$, $t_{i\cdot}$ is the analogous statistic for the average loss difference of model $i$ against the remaining models, and the p-values of the test statistics are determined using a bootstrap approach. If the null hypothesis is rejected at a given confidence level, the worst performing model is excluded (rejection is determined on the basis of bootstrap p-values under the null hypothesis). Such a model is identified as

$$i^{*} = \arg\max_i \frac{\bar d_{i\cdot}}{\sqrt{\widehat{\mathrm{var}}(\bar d_{i\cdot})}},$$

where the variance is computed using a bootstrap method.
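The building block of the procedure is the bootstrap t-statistic for a pairwise loss difference. The sketch below uses a simple i.i.d. bootstrap for the variance of the mean difference; Hansen et al. (2011) use a block bootstrap to respect serial dependence in the losses, so this is an illustrative simplification:

```python
import numpy as np

def pairwise_tstat(loss_i, loss_j, n_boot=999, seed=0):
    # t-statistic for H0: E(d_t) = 0 with d_t = L_{i,t} - L_{j,t};
    # the variance of the sample mean is estimated by resampling
    d = np.asarray(loss_i, float) - np.asarray(loss_j, float)
    d_bar = d.mean()
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(d), size=(n_boot, len(d)))
    boot_means = d[idx].mean(axis=1)
    return d_bar / boot_means.std()

# Model j clearly dominates model i here, so the statistic is large
li = np.tile([2.0, 3.0], 100)
lj = np.tile([1.0, 1.2], 100)
print(pairwise_tstat(li, lj))
```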
The means of the MSFE and QLIKE, defined by (10) and (11), for the six models and the two volatility forecasts based on the QML estimation are presented in Table 5, which also reports three models for the 2SML method. Generally, the Kalman filtering prediction has a smaller MSFE, while the adjusted forecast has a smaller QLIKE. MSFE selects the 2fRSVt-A (QML) model based on the Kalman filtering prediction, while QLIKE chooses the simple RSV (QML) model using the adjusted value. The p-values of the MCS for the best model based on the range statistic are presented in brackets in Table 5. We omitted the results for the alternative statistic as they are similar. The differences in model performance are explained using the p-values. The forecast made by the SV model is significantly different from those of the alternative RSV models for both loss functions. Among the RSV models, the differences may be negligible for the datasets. In general, the QML method produces better forecasts than the 2SML estimation, but the differences are statistically insignificant.
Table 5.
Out-of-Sample Forecast Evaluation.
The out-of-sample forecast performance indicates that the data prefer the RSV models to the SV model. For these datasets, there are no statistical differences among the two kinds of forecasts of the five RSV models. The data contain the period of the global financial crisis triggered by the collapse of Lehman Brothers on 15 September 2008. The statistical indifference among the RSV models may be caused by the effects of this turbulence in the data.
5. Conclusions
This study examined the QML method using the Kalman filter for the RSVt-A models. The Monte Carlo experiments reveal that, in finite samples, the QML estimator is competitive with the 2SML estimator of Koopman and Scharth (2013). The QML estimation is useful for its computational speed and simplicity. The empirical results for the S&P 500 index indicate that the 2fRSVt-A model is preferred over the alternative RSV models, while the analysis of the out-of-sample forecasts favors the RSV models, rejecting the simple SV model. Furthermore, the forecasts of the alternative RSV models are statistically equivalent for the data covering the global financial crisis.
Compared with the SML, 2SML, and Bayesian MCMC methods, the computational load of the QML estimation is negligible and useful for practical purposes. There are several directions for extending the current research. First, we can consider estimating a multivariate model with dynamic correlations, extending the work of Asai and McAleer (2009). Second, we can develop the QML technique for the long-memory volatility model, as in Shirota et al. (2014). Third, it is straightforward to include multiple components for the volatility equation, as in Engle and Gallo (2006). We leave such tasks for future research.
Funding
This research was funded by the Japan Society for the Promotion of Science, grant number 22K01429.
Data Availability Statement
The data were obtained from the Oxford Man Institute of Quantitative Finance on 29 June 2022.
Acknowledgments
The author is most grateful to the editors, two anonymous reviewers, and Yoshihisa Baba for their helpful comments and suggestions.
Conflicts of Interest
The author declares no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| 2fRSVt-A | Two-factor realized asymmetric stochastic volatility with standardized t distribution |
| 2SML | Two-step simulated maximum likelihood |
| GARCH | Generalized autoregressive conditional heteroskedasticity |
| MCMC | Markov chain Monte Carlo |
| MSFE | Mean squared forecast error |
| QLIKE | Quasi-likelihood |
| QLR | Quasi-likelihood ratio |
| QML | Quasi-maximum likelihood |
| RK | Realized kernel |
| RSV | Realized stochastic volatility |
| RSV-A | Realized asymmetric stochastic volatility |
| RSVt | Realized stochastic volatility with standardized t distribution |
| RSVt-A | Realized asymmetric stochastic volatility with standardized t distribution |
| S&P | Standard and Poor’s |
| SML | Simulated maximum likelihood |
| SV | Stochastic volatility |
Appendix A. Kalman Filtering and Smoothing
Consider a linear state-space model for the measurement vector $w_t$:

$$w_t = d + Z \alpha_t + \epsilon_t, \qquad \alpha_{t+1} = c_t + T_\alpha \alpha_t + \eta_t,$$

where $\epsilon_t \sim (0, H)$ and $\eta_t \sim (0, Q)$ are mutually uncorrelated disturbances. The Kalman filter recursion is given by

$$v_t = w_t - d - Z a_t, \qquad F_t = Z P_t Z' + H, \qquad K_t = T_\alpha P_t Z' F_t^{-1},$$

$$a_{t+1} = c_t + T_\alpha a_t + K_t v_t, \qquad P_{t+1} = T_\alpha P_t (T_\alpha - K_t Z)' + Q,$$

where $a_t$ and $P_t$ are the mean and covariance matrix of $\alpha_t$ conditional on the observations up to time $t - 1$, initialized with the unconditional mean and variance of the state. Since $v_t \sim (0, F_t)$, we obtain the log-likelihood function as follows:

$$\log L = -\frac{1}{2} \sum_{t=1}^{T} \left( n \log(2\pi) + \log |F_t| + v_t' F_t^{-1} v_t \right),$$

where $n$ is the dimension of $w_t$. If the distribution of the disturbances is non-Gaussian, $\log L$ becomes the quasi-log-likelihood function.
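The recursion and the (quasi-)log-likelihood above can be sketched compactly. The matrix names below (`Z`, `H` for the measurement, `Tm`, `Q` for the transition) are generic stand-ins for the system matrices implied by (5) and (6):

```python
import numpy as np

def kalman_quasi_loglik(w, d, Z, c, Tm, H, Q, a1, P1):
    """Prediction-error decomposition for
       w_t = d + Z a_t + e_t,        Var(e_t) = H
       a_{t+1} = c + Tm a_t + n_t,   Var(n_t) = Q."""
    a, P = np.asarray(a1, float), np.asarray(P1, float)
    n = w.shape[1]
    loglik = 0.0
    for t in range(w.shape[0]):
        v = w[t] - d - Z @ a                  # one-step prediction error
        F = Z @ P @ Z.T + H                   # and its covariance
        Fi = np.linalg.inv(F)
        sign, logdet = np.linalg.slogdet(F)
        loglik -= 0.5 * (n * np.log(2.0 * np.pi) + logdet + v @ Fi @ v)
        K = Tm @ P @ Z.T @ Fi                 # Kalman gain
        a = c + Tm @ a + K @ v                # predicted state mean
        P = Tm @ P @ (Tm - K @ Z).T + Q       # and covariance
    return loglik

# Scalar sanity check: one observation of a N(0, P1 + H) variable
w = np.array([[0.0]])
ll = kalman_quasi_loglik(w, d=np.zeros(1), Z=np.eye(1), c=np.zeros(1),
                         Tm=np.eye(1), H=np.eye(1), Q=np.eye(1),
                         a1=np.zeros(1), P1=np.eye(1))
print(ll)  # equals -0.5 * (log(2*pi) + log(2))
```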
We can compute the smoothed estimate, $\hat\alpha_t = E(\alpha_t \mid w_1, \ldots, w_T)$, and the corresponding covariance matrix, $V_t$, using the backward state-smoothing equations:

$$r_{t-1} = Z' F_t^{-1} v_t + L_t' r_t, \qquad N_{t-1} = Z' F_t^{-1} Z + L_t' N_t L_t, \qquad L_t = T_\alpha - K_t Z,$$

$$\hat\alpha_t = a_t + P_t r_{t-1}, \qquad V_t = P_t - P_t N_{t-1} P_t,$$

with the starting values $r_T = 0$ and $N_T = 0$.
Appendix B. Two-Step SML (2SML) Estimation
Appendix B.1. Framework
Let $y = (y_1, \ldots, y_T)'$, $x = (x_1, \ldots, x_T)'$, and $h = (h_1, \ldots, h_T)'$. For the vector of unknown parameters, define $\theta = (\theta_1', \theta_2')'$, where $\theta_1$ collects the parameters of Equations (2) and (3) and $\theta_2$ collects the remaining parameters. The density of $(y, x)$ is expressed as follows:

$$p(y, x; \theta) = p(x; \theta_1)\, p(y \mid x; \theta). \quad (A4)$$
Koopman and Scharth (2013) documented that the likelihood function based on the density (A4) can be approximated via

$$p(y \mid x; \theta) \approx \prod_{t=1}^{T} p(y_t \mid x; \theta), \quad (A5)$$

where

$$p(y_t \mid x; \theta) = \int p(y_t \mid \alpha_t; \theta)\, p(\alpha_t \mid x; \theta_1)\, d\alpha_t, \quad (A6)$$

with $\alpha_t$ the state vector relevant for $y_t$ and the smoothing density $p(\alpha_t \mid x; \theta_1)$ evaluated over the interpolation set. Intuitively, the idea of Koopman and Scharth (2013) is to evaluate the marginal density $p(x; \theta_1)$ via the Kalman filter and to estimate the remaining part via numerical integration or quasi-Monte Carlo integration. Maximizing the first part gives the first-step estimator of $\theta_1$. Conditional on that estimate, maximizing the remaining part with respect to $\theta_2$ gives the second-step estimate. The details of the second-step estimation for the models with and without the asymmetric effect are explained in the rest of this appendix.
Appendix B.2. Estimation for Model without Asymmetric Effect
For the model without asymmetric effects, the conditional density for $y_t$ in (A6) reduces to

$$p(y_t \mid x; \theta) = \int p(y_t \mid h_t; \theta)\, p(h_t \mid x; \theta_1)\, dh_t. \quad (A7)$$

As discussed in Koopman and Scharth (2013), $p(h_t \mid x; \theta_1)$ is a normal density function whose mean and variance can be obtained via the deletion smoothing algorithm (see Appendix A). Koopman and Scharth (2013) recommended using Gaussian quadrature for approximating the integral, with the mean and variance replaced by the estimates obtained in the first step. The current study adopts the Gauss–Legendre quadrature based on six points, since there is no major improvement from increasing the number of points beyond six.
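The quadrature step can be sketched as follows: the six-point rule is applied to the integral of a function against a Gaussian smoothing density, after mapping the nodes from [−1, 1] to an interval of ±3 standard deviations around the smoothed mean (the truncation width is an illustrative choice, not taken from the paper):

```python
import numpy as np

def gl_gaussian_expectation(f, m, s, n_points=6, width=3.0):
    # Approximate E[f(h)] for h ~ N(m, s^2) by Gauss-Legendre quadrature
    # on the truncated support [m - width*s, m + width*s]
    nodes, weights = np.polynomial.legendre.leggauss(n_points)
    a, b = m - width * s, m + width * s
    h = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map [-1, 1] -> [a, b]
    dens = np.exp(-0.5 * ((h - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return 0.5 * (b - a) * np.sum(weights * f(h) * dens)

# The density itself integrates to (almost) one with only six points
print(gl_gaussian_expectation(np.ones_like, m=0.0, s=1.0))
```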
Using the approximated density, it is straightforward to obtain the second-step estimator by maximizing the log of the simulated likelihood function for the 2SML estimation. Note that the constant term in Equation (3) needs correcting in the second step.
Appendix B.3. Estimation for Model with Asymmetric Effect
For the model with asymmetric effects, Koopman and Scharth (2013) suggested approximating the conditional density in (A6) via quasi-Monte Carlo integration using the Halton sequence (see Train 2003, for instance):

$$p(y_t \mid x; \theta) \approx \frac{1}{S} \sum_{s=1}^{S} p(y_t \mid \alpha_t^{(s)}; \theta),$$

where $\alpha_t^{(s)}$ is obtained via the two-dimensional Halton sequence, which is controlled by the mean and the covariance matrix of $p(\alpha_t \mid x; \theta_1)$. The deletion-smoothing algorithm for the redefined state is explained in Appendix A. For the case of the normal distribution, $p(\alpha_t \mid x; \theta_1)$ is the normal distribution with the mean and covariance matrix given by the smoothing algorithm. For the case of a non-normal distribution, Koopman and Scharth (2013) used a copula function for the dependence. Maximizing the log of the simulated likelihood gives the second-step SML estimator, as above.
The approximation error of the quasi-Monte Carlo integration is of the order . As discussed in Asmussen and Glynn (2007), the convergence rate of the quasi-Monte Carlo method in practice is usually much faster than its theoretical upper bound. The Monte Carlo experiments in Koopman and Scharth (2013) set , and the current study follows this approach.
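The quasi-Monte Carlo draws can be generated without external libraries via the radical-inverse construction. The sketch below produces two-dimensional Halton points in the unit square; mapping them through a Gaussian inverse CDF and the smoothed state moments, as the two-step procedure requires, is omitted here:

```python
import numpy as np

def halton(n, base):
    # Radical-inverse (van der Corput) sequence in the given prime base
    out = np.empty(n)
    for i in range(n):
        f, k, x = 1.0, i + 1, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        out[i] = x
    return out

# Two-dimensional Halton points use coprime bases, conventionally 2 and 3
pts = np.column_stack([halton(8, 2), halton(8, 3)])
print(pts[:4])
```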
References
- Abramowitz, Milton, and Irene A. Stegun. 1970. Handbook of Mathematical Functions. Mineola: Dover Publications. [Google Scholar]
- Asai, Manabu. 2008. Autoregressive stochastic volatility models with heavy-tailed distributions: A comparison with multifactor volatility models. Journal of Empirical Finance 15: 332–41. [Google Scholar] [CrossRef]
- Asai, Manabu. 2009. Bayesian analysis of stochastic volatility models with mixture-of-normal distributions. Mathematics and Computers in Simulation 79: 2579–96. [Google Scholar] [CrossRef]
- Asai, Manabu, and Michael McAleer. 2009. The structure of dynamic correlations in multivariate stochastic volatility models. Journal of Econometrics 150: 182–92. [Google Scholar] [CrossRef]
- Asai, Manabu, Chia-Lin Chang, and Michael McAleer. 2017. Realized stochastic volatility with general asymmetry and long memory. Journal of Econometrics 199: 202–12. [Google Scholar] [CrossRef]
- Asai, Manabu, Michael McAleer, and Marcelo C. Medeiros. 2012a. Asymmetry and long memory in volatility modeling. Journal of Financial Econometrics 10: 495–512. [Google Scholar] [CrossRef]
- Asai, Manabu, Michael McAleer, and Marcelo C. Medeiros. 2012b. Estimation and forecasting with noisy realized volatility. Computational Statistics & Data Analysis 56: 217–30. [Google Scholar]
- Asmussen, Søren, and Peter W. Glynn. 2007. Stochastic Simulation: Algorithms and Analysis. New York: Springer. [Google Scholar]
- Barndorff-Nielsen, Ole E., and Neil Shephard. 2002. Econometric analysis of realized volatility and its use in estimating stochastic volatility models. Journal of the Royal Statistical Society, Series B 64: 253–80. [Google Scholar] [CrossRef]
- Barndorff-Nielsen, Ole E., Peter Reinhard Hansen, Asger Lunde, and Neil Shephard. 2008. Designing realised kernels to measure the ex-post variation of equity prices in the presence of noise. Econometrica 76: 1481–1536. [Google Scholar] [CrossRef]
- Barndorff-Nielsen, Ole E., P. Reinhard Hansen, Asger Lunde, and Neil Shephard. 2009. Realised kernels in practice: Trades and quotes. Econometrics Journal 12: C1–C32. [Google Scholar] [CrossRef]
- Bollerslev, Tim, and Hao Zhou. 2006. Volatility puzzles: A simple framework for gauging return-volatility regressions. Journal of Econometrics 131: 123–50. [Google Scholar] [CrossRef]
- Chib, S., Y. Omori, and M. Asai. 2009. Multivariate stochastic volatility. In Handbook of Financial Time Series. Edited by T. G. Andersen, R. A. Davis, J. P. Kreiss and T. Mikosch. New York: Springer, pp. 365–400. [Google Scholar]
- De Jong, Piet. 1989. Smoothing and interpolation with the state-space model. Journal of the American Statistical Association 84: 1085–88. [Google Scholar] [CrossRef]
- Dunsmuir, W. 1979. A central limit theorem for parameter estimation in stationary vector time series and its applications to models for a signal observed with noise. Annals of Statistics 7: 490–506. [Google Scholar] [CrossRef]
- Durbin, James, and Siem Jan Koopman. 2001. Time Series Analysis by State-Space Methods. Oxford: Oxford University Press. [Google Scholar]
- Engle, Robert F., and Giampiero M. Gallo. 2006. A multiple indicators model for volatility using intra-daily data. Journal of Econometrics 131: 3–27. [Google Scholar] [CrossRef]
- Hansen, Peter Reinhard, and Zhuo Huang. 2016. Exponential GARCH modeling with realized measures of volatility. Journal of Business & Economic Statistics 34: 269–87. [Google Scholar]
- Hansen, P. R., Z. Huang, and H. H. Shek. 2012. Realized GARCH: A complete model of returns and realized measures of volatility. Journal of Applied Econometrics 27: 877–906. [Google Scholar] [CrossRef]
- Hansen, Peter R., Asger Lunde, and James M. Nason. 2011. The model confidence set. Econometrica 79: 453–97. [Google Scholar] [CrossRef]
- Harvey, Andrew C., and Neil Shephard. 1996. Estimation of an asymmetric stochastic volatility model for asset returns. Journal of Business and Economic Statistics 14: 429–34. [Google Scholar]
- Harvey, Andrew, Esther Ruiz, and Neil Shephard. 1994. Multivariate stochastic variance models. Review of Economic Studies 61: 247–64. [Google Scholar] [CrossRef]
- Koopman, Siem Jan, and Marcel Scharth. 2013. The analysis of stochastic volatility in the presence of daily realized measures. Journal of Financial Econometrics 11: 76–115. [Google Scholar] [CrossRef]
- Liesenfeld, Roman, and Robert C. Jung. 2000. Stochastic volatility models: Conditional normality versus heavy-tailed distributions. Journal of Applied Econometrics 15: 137–60. [Google Scholar] [CrossRef]
- Patton, Andrew J. 2011. Volatility forecast comparison using imperfect volatility proxies. Journal of Econometrics 160: 246–56. [Google Scholar] [CrossRef]
- Sandmann, Gleb, and Siem Jan Koopman. 1998. Estimation of stochastic volatility models via Monte Carlo maximum likelihood. Journal of Econometrics 87: 271–301. [Google Scholar] [CrossRef]
- Shephard, Neil, and Kevin Sheppard. 2010. Realising the future: Forecasting with high frequency-based volatility (HEAVY) models. Journal of Applied Econometrics 25: 197–231. [Google Scholar] [CrossRef]
- Shirota, Shinichiro, Takayuki Hizu, and Yasuhiro Omori. 2014. Realized stochastic volatility with leverage and long memory. Computational Statistics & Data Analysis 76: 618–41. [Google Scholar]
- Takahashi, Makoto, Yasuhiro Omori, and Toshiaki Watanabe. 2009. Estimating stochastic volatility models using daily returns and realized volatility simultaneously. Computational Statistics & Data Analysis 53: 2404–26. [Google Scholar]
- Taniguchi, Masanobu, and Yoshihide Kakizawa. 2000. Asymptotic Theory of Statistical Inference for Time Series. New York: Springer. [Google Scholar]
- Train, Kenneth E. 2003. Discrete Choice Methods with Simulation. Cambridge: Cambridge University Press. [Google Scholar]
- Yu, Jun. 2005. On leverage in a stochastic volatility model. Journal of Econometrics 127: 165–78. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).