Article

Estimation of Realized Asymmetric Stochastic Volatility Models Using Kalman Filter

Faculty of Economics, Soka University, Tokyo 192-8577, Japan
Econometrics 2023, 11(3), 18; https://doi.org/10.3390/econometrics11030018
Submission received: 30 December 2022 / Revised: 19 July 2023 / Accepted: 23 July 2023 / Published: 31 July 2023

Abstract
Despite the growing interest in realized stochastic volatility models, their estimation techniques, such as simulated maximum likelihood (SML), are computationally intensive. Based on the realized volatility equation, this study demonstrates that, in finite samples, the quasi-maximum likelihood estimator based on the Kalman filter is competitive with the two-step SML (2SML) estimator, which is less efficient than the SML estimator. In the empirical analysis of the S&P 500 index, the quasi-likelihood ratio tests favor the two-factor realized asymmetric stochastic volatility model with the standardized t distribution among the alternative specifications, and the analysis of out-of-sample forecasts favors the realized stochastic volatility (RSV) models, rejecting the model without the realized volatility measure. Furthermore, the forecasts of the alternative RSV models are statistically equivalent for the data covering the global financial crisis.

1. Introduction

Over the last two decades, research on realized volatility has received significant attention in modeling and forecasting the volatility of financial returns. For the generalized autoregressive conditional heteroskedasticity (GARCH) class models, Engle and Gallo (2006) and Shephard and Sheppard (2010) incorporated realized volatility for modeling and forecasting volatility. Using the information of return and realized volatility measure simultaneously, Hansen et al. (2012) and Hansen and Huang (2016) developed the “realized GARCH” and “realized exponential GARCH” models, respectively.
The literature on stochastic volatility models considers a realized volatility measure to be an estimate of latent volatility. As highlighted by Barndorff-Nielsen and Shephard (2002), there is a gap between true volatility and its consistent estimate, referred to as the “realized volatility error”. Since this error is nonnegligible, Barndorff-Nielsen and Shephard (2002); Bollerslev and Zhou (2006); Takahashi et al. (2009), and Asai et al. (2012a, 2012b) accommodated a homoscedastic disturbance as an ad hoc approach. Analogous to the realized GARCH model, Takahashi et al. (2009) suggested the realized stochastic volatility (RSV) model, which uses the information of the return and the realized volatility measure.
As in the GARCH model, it is useful to accommodate asymmetric effects and heavy-tailed conditional distributions in stochastic volatility models. For the former, a typical approach is to assume a negative correlation between the return and the disturbance of the one-step-ahead log-volatility (see Harvey and Shephard 1996; Yu 2005), known as the leverage effect. Instead of the standard normal distribution for the conditional distribution of the return, we may consider the standardized t distribution or the generalized error distribution. Harvey et al. (1994); Sandmann and Koopman (1998); Liesenfeld and Jung (2000), and Asai (2008, 2009), among others, assumed the (standardized) t distribution. Furthermore, the empirical results in Liesenfeld and Jung (2000) and Asai (2009) indicate that the standardized t distribution yields better fits than the generalized error distribution. By the statistical properties of stochastic volatility models, we can obtain the fourth moment of the return series analytically.
For estimating various RSV models, Koopman and Scharth (2013) and Shirota et al. (2014) used simulated maximum likelihood (SML) estimation and the Bayesian Markov chain Monte Carlo (MCMC) technique, respectively. Both approaches are computationally demanding. This study reconsiders the quasi-maximum likelihood (QML) method of Harvey et al. (1994), as it is straightforward to include the realized volatility equation, which is expected to improve the efficiency of the QML estimator.
Interest in modeling volatility using the information of the return and the realized volatility measure simultaneously is growing. Extending the GARCH class, Hansen et al. (2012) and Hansen and Huang (2016) developed the “realized GARCH” and the “realized exponential GARCH” models, respectively. Within the class of stochastic volatility (SV) models (e.g., see Chib et al. 2009), Takahashi et al. (2009) and Koopman and Scharth (2013) considered the “realized SV” (RSV) and the “realized asymmetric SV” (RSV-A) models, respectively.
While the realized GARCH class models can be estimated by the maximum likelihood estimation (MLE) technique, RSV class models require computationally demanding techniques. Takahashi et al. (2009) and Shirota et al. (2014) suggested the Bayesian MCMC method, while Koopman and Scharth (2013) developed the SML technique. Note that Koopman and Scharth (2013) also suggested a two-step SML (2SML) estimator that is less efficient but less computationally intensive than the SML estimator. The current study applies the QML estimation of Harvey et al. (1994) to the RSV models to show its practical usefulness.
The remainder of this paper is organized as follows. The RSV-A model with the standardized t distribution (RSVt-A) is outlined in Section 2. The asymptotic properties of the QML estimator using the Kalman filter are discussed in Section 3, and its finite sample properties are examined and compared with those of the 2SML method of Koopman and Scharth (2013). The empirical results for the Standard and Poor’s (S&P) 500 index using the return and realized volatility measure are reported in Section 4. Finally, concluding remarks are presented in Section 5.

2. Realized Stochastic Volatility Models

2.1. Model

Let $y_t$ and $x_t$ denote the open-to-close return of a financial asset and the log of a realized measure of its volatility on day $t$, respectively. The pair of the close-to-close return and a realized volatility measure accommodating overnight volatility can be used, as in Koopman and Scharth (2013).
Consider the realized asymmetric SV model with the standardized t distribution as follows:
$$y_t = z_t \exp\!\left(\tfrac{1}{2} h_t\right), \qquad z_t = \frac{\varepsilon_t}{\sqrt{w_t/(\nu-2)}}, \qquad w_t \sim \chi^2(\nu), \tag{1}$$
$$h_t = c + \alpha_t, \qquad \alpha_{t+1} = \phi \alpha_t + \eta_t, \tag{2}$$
$$x_t = \xi + h_t + u_t, \tag{3}$$
$$\begin{pmatrix} \varepsilon_t \\ \eta_t \\ u_t \end{pmatrix} \sim N\!\left(\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho\sigma_\eta & 0 \\ \rho\sigma_\eta & \sigma_\eta^2 & 0 \\ 0 & 0 & \sigma_u^2 \end{pmatrix}\right), \tag{4}$$
where $c$, $\phi$, $\sigma_\eta$, $\rho$, $\nu$, $\xi$, and $\sigma_u$ are parameters. By this structure, $z_t$ follows a standardized t distribution with degrees-of-freedom parameter $\nu$. Note that $E(z_t) = 0$, $E(z_t^2) = 1$, $E(z_t^3) = 0$, and $E(z_t^4) = 3(\nu-2)/(\nu-4) > 3$. To guarantee the stationarity of the process and the existence of the fourth moment, we assume $|\phi| < 1$ and $\nu > 4$, respectively. As $\rho$ is the correlation coefficient between $\varepsilon_t$ and $\eta_t$, it satisfies $|\rho| < 1$. If the realized volatility measure $x_t$ is a consistent estimate of $h_t$, $\xi$ is expected to be zero; by this specification, a non-zero $\xi$ implies a (finite sample) bias in $x_t$. We denote the model (1)–(4) as the “RSVt-A” model. The model reduces to the asymmetric SV model with the standardized t distribution when we omit Equation (3) (see Harvey and Shephard 1996; Asai 2008). Without the leverage effect, that is, $\rho = 0$, we obtain the RSVt model. By setting $\nu \to \infty$, the model reduces to the RSV-A model. As in Asai (2008) and Koopman and Scharth (2013), we can consider multi-factor models by allowing multiple factors in the log-volatility as $h_t = c + \sum_{i=1}^{m} \alpha_{it}$.
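For concreteness, the following is a minimal simulation sketch of the DGP in Equations (1)–(4), written in Python (the paper's own computations use MATLAB); the function name and the parameter values in the example are illustrative and follow the magnitudes in Table 1.

```python
import numpy as np

def simulate_rsvt_a(T, c, phi, sig_eta, rho, nu, xi, sig_u, seed=0):
    """Simulate the RSVt-A model of Equations (1)-(4).

    Returns the daily return y_t and the log realized measure x_t.
    Set nu=np.inf for Gaussian returns (RSV-A) or rho=0 for no leverage (RSVt).
    """
    rng = np.random.default_rng(seed)
    # correlated (eps_t, eta_t); u_t is independent of both
    cov = np.array([[1.0, rho * sig_eta],
                    [rho * sig_eta, sig_eta**2]])
    eps_eta = rng.multivariate_normal(np.zeros(2), cov, size=T)
    u = rng.normal(0.0, sig_u, size=T)

    alpha = np.empty(T + 1)
    alpha[0] = rng.normal(0.0, sig_eta / np.sqrt(1 - phi**2))  # stationary start
    y = np.empty(T)
    x = np.empty(T)
    for t in range(T):
        h = c + alpha[t]
        if np.isinf(nu):
            z = eps_eta[t, 0]
        else:
            w = rng.chisquare(nu)
            z = eps_eta[t, 0] / np.sqrt(w / (nu - 2.0))   # standardized t
        y[t] = z * np.exp(0.5 * h)                        # Equation (1)
        x[t] = xi + h + u[t]                              # Equation (3)
        alpha[t + 1] = phi * alpha[t] + eps_eta[t, 1]     # Equation (2)
    return y, x

# Example with Table 1-style values (RSV-A case: nu = infinity)
y, x = simulate_rsvt_a(T=2500, c=0.4, phi=0.98, sig_eta=np.sqrt(0.05),
                       rho=-0.3, nu=np.inf, xi=0.1, sig_u=np.sqrt(0.05))
```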
Following Harvey et al. (1994) and Harvey and Shephard (1996), we obtain the state-space form for the RSVt-A model. The logarithmic transformation of the squared $y_t$ yields
$$\begin{pmatrix} \log y_t^2 \\ x_t \end{pmatrix} = \begin{pmatrix} c + \mu_{\log z^2} \\ c + \xi \end{pmatrix} + \iota\, \alpha_t + \begin{pmatrix} \zeta_t \\ u_t \end{pmatrix}, \tag{5}$$
where $\iota$ is the $2 \times 1$ vector of ones, $\mu_{\log z^2} = E(\log z_t^2)$, and $\zeta_t = \log z_t^2 - \mu_{\log z^2}$. By Equation (26.3.46) in Abramowitz and Stegun (1970), we obtain $\mu_{\log z^2} = \psi(1/2) - \psi(\nu/2) + \log(\nu-2)$ and $V(\log z_t^2) = V(\zeta_t) = \psi'(1/2) + \psi'(\nu/2)$, where $\psi(x)$ is the digamma function defined by $\psi(x) = \mathrm{d} \log \Gamma(x)/\mathrm{d}x$ and $\psi'(x)$ is its derivative, the trigamma function.
By the transformation $\log y_t^2$, we lose the information on the sign of $y_t$. To recover such information, we define the sign of $y_t$ as $s_t = I(y_t > 0) - I(y_t \le 0)$, where $I(A)$ is the indicator function, which takes one if the condition $A$ holds and zero otherwise. As in Harvey and Shephard (1996), we can modify Equation (2) as follows:
$$\alpha_{t+1} = \phi \alpha_t + \eta_t^{*}, \tag{6}$$
with
$$E\!\left[\begin{pmatrix} \zeta_t \\ \eta_t^{*} \\ u_t \end{pmatrix} \Big|\, s_t \right] = \begin{pmatrix} 0 \\ a s_t \\ 0 \end{pmatrix}, \qquad V\!\left[\begin{pmatrix} \zeta_t \\ \eta_t^{*} \\ u_t \end{pmatrix} \Big|\, s_t \right] = \begin{pmatrix} \sigma_\zeta^2 & b s_t & 0 \\ b s_t & \sigma_\eta^2 - a^2 & 0 \\ 0 & 0 & \sigma_u^2 \end{pmatrix}, \tag{7}$$
where $\sigma_\zeta^2 = \psi'(1/2) + \psi'(\nu/2)$, $a = E(\eta_t | s_t = 1) = \rho\sigma_\eta \sqrt{2/\pi} = 0.7979\,\rho\sigma_\eta$, and $b = \rho\sigma_\eta\!\left[E(|\varepsilon_t| \log \varepsilon_t^2) - \sqrt{2/\pi}\, E(\log \varepsilon_t^2)\right] = 1.1061\,\rho\sigma_\eta$. Hence, the measurement Equation (5) and the transition Equation (6) form a state-space model. By applying the Kalman filter to the state-space form in (5) and (6), it is straightforward to construct the quasi-log-likelihood function (see Appendix A).
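The moments above can be checked numerically; the following sketch (Python with SciPy is assumed, rather than the MATLAB used in the paper) reproduces $\mu_{\log z^2}$ and $\sigma_\zeta^2$ from the digamma and trigamma functions, and the constants 0.7979 and 1.1061 appearing in $a$ and $b$ by quadrature under Gaussian $\varepsilon_t$.

```python
import numpy as np
from scipy.special import digamma, polygamma
from scipy.integrate import quad
from scipy.stats import norm

nu = 10.0
mu_logz2 = digamma(0.5) - digamma(nu / 2) + np.log(nu - 2)   # E(log z_t^2)
sig_zeta2 = polygamma(1, 0.5) + polygamma(1, nu / 2)         # Var(log z_t^2)
print(mu_logz2, sig_zeta2)   # as nu -> infinity, Var -> psi'(1/2) = pi^2/2 = 4.93

# Constants in a and b (Harvey and Shephard 1996), with eps_t ~ N(0, 1):
E_abs_eps = np.sqrt(2 / np.pi)                               # = 0.7979
E_log_eps2 = digamma(0.5) + np.log(2)                        # E(log eps_t^2)
E_abseps_logeps2 = 2 * quad(lambda e: e * np.log(e**2) * norm.pdf(e),
                            0, np.inf)[0]                    # E(|eps_t| log eps_t^2)
b_const = E_abseps_logeps2 - E_abs_eps * E_log_eps2
print(E_abs_eps, b_const)    # approximately 0.7979 and 1.1061
```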
As in Koopman and Scharth (2013), the assumption on the covariance matrix in Equation (4) can be relaxed by considering dependence between the conditional return and the measurement noise. This modification requires a change in Equation (3) using $s_t$ to construct the state-space model, as in Equations (6) and (7).

2.2. Realized Kernel Estimator

As a realized volatility measure, we adopt the realized kernel (RK) estimator developed by Barndorff-Nielsen et al. (2008), since it is a consistent estimator of the quadratic variation and is robust to microstructure noise and jumps. This subsection explains the RK estimator concisely.
Consider that the latent log-price $p^{*}$ follows a Brownian semimartingale plus jump process given by
$$p_t^{*} = \int_0^t \mu_s\, \mathrm{d}s + \int_0^t \sigma_s\, \mathrm{d}W_s + J_t,$$
where $\mu_t$ is a predictable locally bounded drift; $\sigma_t$ is a càdlàg volatility process; $W_t$ is a Brownian motion; and $J_t = \sum_{i=1}^{N_t} C_i$ is a finite activity jump process, which has a finite number of jumps in any bounded interval of time. More precisely, $N_t$ counts the number of jumps that have occurred in the interval $[0, t]$ and $N_t < \infty$ for any $t$.
The quadratic variation of $p^{*}$ is given by
$$[p^{*}] = \int_0^{\tau} \sigma_s^2\, \mathrm{d}s + \sum_{i=1}^{N_\tau} C_i^2,$$
where $\int_0^{\tau} \sigma_s^2\, \mathrm{d}s$ is the integrated variance. The estimator of Barndorff-Nielsen et al. (2008) for the quadratic variation is based on the noisy observation of the log-price, $p_t = p_t^{*} + v_t$ ($t = \tau_0, \tau_1, \ldots, \tau_n$) with $\tau_0 = 0$ and $\tau_n = \tau$, where $E(v_t) = 0$ and $\mathrm{Var}(v_t) = \omega^2$.
Barndorff-Nielsen et al. (2008) suggested a non-negative estimator that takes the following form:
$$K(p) = \sum_{g=-G}^{G} k\!\left(\frac{g}{G+1}\right) \gamma_g, \qquad \gamma_g = \sum_{j=|g|+1}^{n} \acute{y}_j\, \acute{y}_{j-|g|},$$
where $\acute{y}_j$ is the $j$th high-frequency return calculated over the interval $[\tau_{j-1}, \tau_j]$, and $k(x)$ is a kernel weight function. For practical purposes, Barndorff-Nielsen et al. (2009) focused on the Parzen kernel function, defined by
$$k(x) = \begin{cases} 1 - 6x^2 + 6x^3, & 0 \le x \le 1/2, \\ 2(1-x)^3, & 1/2 \le x \le 1, \\ 0, & x > 1. \end{cases}$$
Barndorff-Nielsen et al. (2008, 2009) provided an estimator for the bandwidth $G$. Under mild regularity conditions, Barndorff-Nielsen et al. (2008) demonstrated that $K(p)$ converges to $[p^{*}]$ in probability as $n \to \infty$.
For practical purposes, it is convenient to work with the log of the RK estimator for its stability. Even though the RK estimator is consistent, it has finite sample bias and noise, which Equation (3) accommodates through the constant term and the disturbance.
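The following is a minimal sketch of the realized kernel with Parzen weights; the bandwidth G is treated here as an input rather than selected by the data-driven rule of Barndorff-Nielsen et al. (2008, 2009), and intraday_prices in the usage comment is a placeholder.

```python
import numpy as np

def parzen(x):
    """Parzen kernel weight used by Barndorff-Nielsen et al. (2009)."""
    x = abs(x)
    if x <= 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    if x <= 1.0:
        return 2 * (1 - x)**3
    return 0.0

def realized_kernel(prices, G):
    """Realized kernel K(p) from intraday prices, for a given bandwidth G."""
    r = np.diff(np.log(prices))            # high-frequency returns
    gamma0 = np.dot(r, r)                  # realized variance term, gamma_0
    rk = gamma0
    for g in range(1, G + 1):
        gamma_g = np.dot(r[g:], r[:-g])    # realized autocovariance of order g
        rk += 2 * parzen(g / (G + 1)) * gamma_g
    return rk

# x_t in Equation (3) is the log of the realized kernel:
# x_t = np.log(realized_kernel(intraday_prices, G))
```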

3. QML Estimation via Kalman Filter

3.1. QML Estimation

Define $\theta = (c, \phi, \sigma_\eta^2, \rho, \nu, \xi, \sigma_u^2)'$. For the state-space form (5) and (6), applying the Kalman filtering algorithm produces the quasi-log-likelihood function
$$L(\theta) = \sum_{t=1}^{T} l_t(\theta), \qquad l_t(\theta) = -\ln(2\pi) - \frac{1}{2} \ln |F_t| - \frac{1}{2} v_t' F_t^{-1} v_t,$$
where $v_t$ ($2 \times 1$) and $F_t$ ($2 \times 2$) can be obtained as described in Appendix A. Maximizing the quasi-log-likelihood yields the QML estimator, $\hat\theta$. Although the vector of errors has a non-Gaussian distribution, the state-space form has the martingale property and at least a finite fourth moment. Based on the results in Dunsmuir (1979), it is straightforward to demonstrate the consistency and asymptotic normality of the QML estimator:
$$\mathrm{plim}\, \hat\theta = \theta_0, \qquad \sqrt{T}\,(\hat\theta - \theta_0) \xrightarrow{d} N\!\left(0, C(\theta_0)\right),$$
where $\theta_0$ is the vector of true parameters. The covariance matrix $C(\theta_0)$ is equivalent to that of the MLE based on the Whittle likelihood. Using this equivalence, we can demonstrate that the quasi-likelihood ratio (QLR) statistic has an asymptotic $\chi^2$ distribution under the null hypothesis (e.g., see the proof of Theorem 3.1.3 in Taniguchi and Kakizawa 2000).
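To make the estimation concrete, the following is a minimal sketch of the quasi-log-likelihood for the symmetric RSVt model (ρ = 0, so a = b = 0 and the disturbances in (5) and (6) are uncorrelated), written in Python rather than the MATLAB used for the reported results; in practice one would also impose |φ| < 1, ν > 4, and positive variances, for example via parameter transformations.

```python
import numpy as np
from scipy.special import digamma, polygamma
from scipy.optimize import minimize

def neg_quasi_loglik(theta, log_y2, x):
    """Negative quasi-log-likelihood of the RSVt model (rho = 0) via the Kalman filter.

    State:        alpha_{t+1} = phi * alpha_t + eta_t,  Var(eta_t) = sig_eta2
    Measurement:  [log y_t^2, x_t]' = [c + mu_logz2, c + xi]' + [1, 1]' alpha_t + [zeta_t, u_t]'
    """
    c, phi, sig_eta2, nu, xi, sig_u2 = theta
    mu_logz2 = digamma(0.5) - digamma(nu / 2) + np.log(nu - 2)
    sig_zeta2 = polygamma(1, 0.5) + polygamma(1, nu / 2)
    d = np.array([c + mu_logz2, c + xi])          # measurement intercept
    Z = np.array([1.0, 1.0])                      # loading on the scalar state
    H = np.diag([sig_zeta2, sig_u2])              # measurement noise covariance

    a, P = 0.0, sig_eta2 / (1 - phi**2)           # stationary initial state
    ll = 0.0
    for yt in np.column_stack([log_y2, x]):
        v = yt - d - Z * a                        # innovation (2 x 1)
        F = np.outer(Z, Z) * P + H                # innovation covariance
        Finv = np.linalg.inv(F)
        ll += (-np.log(2 * np.pi) - 0.5 * np.log(np.linalg.det(F))
               - 0.5 * v @ Finv @ v)
        K = phi * P * (Z @ Finv)                  # Kalman gain (1 x 2)
        a = phi * a + K @ v
        P = phi * P * (phi - K @ Z) + sig_eta2
    return -ll

# Illustrative use (theta = (c, phi, sig_eta2, nu, xi, sig_u2)):
# theta0 = np.array([0.4, 0.98, 0.05, 10.0, 0.1, 0.05])
# res = minimize(neg_quasi_loglik, theta0, args=(np.log(y**2), x),
#                method="Nelder-Mead")
```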
Koopman and Scharth (2013) developed the SML approach and its two-step version. Durbin and Koopman (2001) suggested a general approach for obtaining the simulated likelihood function for non-Gaussian state-space models, and Koopman and Scharth (2013) applied it to the RSVt-A model. In the SML method, the likelihood function can be approximated arbitrarily precisely by decomposing it into a Gaussian part, constructed with the Kalman filter, and a remainder function, whose expectation is evaluated through simulation. The decomposition reduces the computational time, but the SML is still computer-intensive since it requires a $T$-dimensional Monte Carlo integration over the latent variables whenever the log-likelihood function is evaluated. To avoid this problem, Koopman and Scharth (2013) developed the 2SML estimator. The first step provides estimates via the Kalman filter for the state-space model consisting of Equations (2) and (3), while the second step maximizes the remaining part of the likelihood function to obtain the remaining parameters and to correct the bias caused by neglecting Equation (1) in the first step (see Appendix B for details).
Compared with the SML estimator, the 2SML and QML estimators are less efficient owing to the fluctuations caused by the asymptotic variance of the first-step estimator and the non-Gaussianity of $\zeta_t$ in Equation (5), respectively. Hence, the inefficiency of these two estimators derives from different sources, and the finite sample properties of the 2SML and QML estimators are worth examining. Note that the QML estimation is faster than the 2SML method, since the former has no additional step using simulated quantities. Recently, Asai et al. (2017) developed a two-step estimation method based on the Whittle likelihood for the RSV-A model. While their approach requires two steps, the QML estimation using the Kalman filter needs no additional step, implying that the two-step Whittle likelihood estimator is less efficient than the QML estimator.

3.2. Finite Sample Property of QML Estimator

In this subsection, we conduct a Monte Carlo experiment to investigate the finite sample properties of the QML estimator and compare them with those of the 2SML estimator. Although the QML and 2SML estimators are asymptotically less efficient than the SML estimator, these estimation methods are computationally faster than the SML approach. Hence, it is worth comparing the QML and 2SML estimators. The experiments follow the framework in Koopman and Scharth (2013) with two data-generating processes (DGPs) based on Equations (1)–(4). The first DGP was the RSV-A model with the true parameters reported in Table 1, while the second was the RSVt model based on the true parameters in Table 2. The sample size was $T = 2500$. While Koopman and Scharth (2013) set the number of replications to 250 for the computationally intensive SML method, we considered 2000 replications for the fast estimation procedures, that is, the QML and 2SML methods. For details regarding the 2SML estimation, see Appendix B. All the experiments were run in MATLAB R2022b, using the interior-point algorithm and starting from the true parameter values.
Regarding the QML and 2SML estimates for the RSV-A model, Table 1 reports the sample means, standard deviations, and root mean squared errors (RMSEs) divided by the absolute value of the corresponding true parameters. As presented in Table 1, the sample means of the QML estimates were close to the true values, implying that the finite sample biases are negligible. The RMSE$/|\theta_i|$ ($i = 1, \ldots, 6$) takes values less than 0.51. The finite sample biases for the 2SML estimator are negligible, except for $c$. This bias is caused by the fluctuations in the first-step estimator, and it will disappear as the sample size increases. The values for the QML estimator are close to the corresponding values for the 2SML estimator, except for $\xi$. The QML estimator for $\xi$ has a higher value of RMSE$/|\theta_i|$, implying an inefficiency in estimating the constant term in the volatility equation caused by including the $\log y_t^2$ equation. Regarding $c$ and $\rho$, the QML estimator has smaller standard deviations and RMSEs, implying an inefficiency caused by the first-step estimation in the 2SML method.
The simulation results for the RSVt model are reported in Table 2, implying that the finite sample biases are negligible. The results of RMSE$/|\theta_i|$ indicate that the QML and 2SML estimators are competitive. Owing to the inefficiency caused by the two-step estimation and bias correction, the 2SML estimator has greater values of RMSE$/|\theta_i|$ for $(c, \xi)$. On the other hand, the 2SML estimator has a smaller value of RMSE$/|\theta_i|$ for $\nu$, indicating the inefficiency caused by approximating the distribution of the log of the squared t variable by a normal distribution in the QML estimation.
The 2SML method estimates $(\phi, \sigma_\eta^2, \xi^{*}, \sigma_u^2)$ using the information of $x_t$ in the first step, while the QML estimation uses that of $(\log y_t^2, x_t, s_t)$ to obtain the estimates of all parameters. Both estimations are based on the Kalman filtering algorithm, and the common parameters are $(\phi, \sigma_\eta^2, \sigma_u^2)$. Hence, it is possible to examine the contribution of $\log y_t^2$ to estimating $(\phi, \sigma_\eta^2, \sigma_u^2)$ via the Kalman filter. According to Table 1 and Table 2, the differences are negligible for $(\phi, \sigma_\eta^2, \sigma_u^2)$, indicating that the contribution of $\log y_t^2$ in the state-space form is negligible for estimating these parameters. The difference between the variances of $\zeta_t$ and $u_t$, 4.93 and 0.05, respectively, supports this result.
The Monte Carlo results imply that the QML estimator based on the Kalman filter is competitive with the 2SML estimator in Koopman and Scharth (2013). Owing to computational simplicity, the QML estimator is a fast and useful alternative to the 2SML estimator.

4. Empirical Analysis

4.1. Estimation Results

To estimate the alternative RSV models using the QML method based on the Kalman filter, we use the daily return and a realized volatility measure for the S&P 500 index. For the realized volatility measure, we selected the RK estimator of Barndorff-Nielsen et al. (2008), as it is robust to microstructure noise and jumps, as explained above. As the realized volatility is calculated using intraday data, the open-to-close return is used for $y_t$, as in Hansen et al. (2012). The data are obtained from the Oxford-Man Institute of Quantitative Finance, and the sample period is from 22 December 2005 to 4 December 2017, giving 3000 observations. The first $T = 2500$ observations are used for estimating the parameters, and the remaining $F = 500$ are reserved for forecasting. The descriptive statistics for the whole sample are presented in Table 3. The standardized variable was calculated as $y_t \exp(-0.5 x_t)$, as $x_t$ is the log of the RK. The return and RK have heavy tails, whereas the kurtoses of $x_t$ and the standardized variable are close to three. Compared with the return, the standardized variable is close to the Gaussian distribution.
This section compares six models: SV, RSV, RSV-A, RSVt, RSVt-A, and two-factor RSVt-A (2fRSVt-A). Among these, the last model is defined by Equations (1) and (3), with
$$h_t = c + \alpha_{1t} + \alpha_{2t}, \qquad \alpha_{1,t+1} = \phi\, \alpha_{1t} + \eta_{1t}, \qquad \alpha_{2,t+1} = \phi_2\, \alpha_{2t} + \eta_{2t},$$
$$\begin{pmatrix} \varepsilon_t \\ \eta_{1t} \\ \eta_{2t} \\ u_t \end{pmatrix} \sim N\!\left(\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho\sigma_\eta & \rho_2\sigma_{\eta,2} & 0 \\ \rho\sigma_\eta & \sigma_\eta^2 & 0 & 0 \\ \rho_2\sigma_{\eta,2} & 0 & \sigma_{\eta,2}^2 & 0 \\ 0 & 0 & 0 & \sigma_u^2 \end{pmatrix}\right).$$
As discussed in the previous section, the model comparison is based on the QLR test.
The QML estimates for the six models are reported in Table 4, indicating that all parameters are significant at the five percent level. The estimates of $(c, \phi, \sigma_\eta^2)$ for the SV model are typical values in empirical analyses. For the RSV model, the estimates of $(c, \phi, \sigma_\eta^2)$ are similar to those of the SV model. As discussed in Section 3.2, the contribution of the $\log y_t^2$ equation is negligible for estimating $(\phi, \sigma_\eta^2)$ in the RSV model. In other words, the finite sample bias of the QML estimator for $(\phi, \sigma_\eta^2)$ in the SV model is corrected in the RSV model owing to the contribution of the realized volatility equation. As the estimate of $\phi$ decreases, the value of $\sigma_\eta^2$ increases, keeping the variance of $\alpha_t$, $\sigma_\eta^2/(1-\phi^2)$, at a similar level. The estimate of $\xi$ is negative and significant, which may be caused by finite sample bias in the RK estimates. Note that it is inappropriate to compare the quasi-log-likelihoods of the SV and RSV models, since the former excludes the information of $x_t$. The estimates for the RSV-A model are close to those of the RSV model. The estimate of $\rho$ is negative and significant, implying the existence of the leverage effect, and the QLR test rejects the null hypothesis $\rho = 0$. For the RSVt model, the estimate of $\nu$ is 11.2. In contrast, the QLR test failed to reject the null hypothesis of the Gaussian distribution, $\nu = \infty$. As implied by the Monte Carlo results in Table 2, there is an inefficiency in estimating $\nu$, which may lead to an ambiguous result for the inference on $\nu$. Note that the descriptive statistics for the standardized variable in Table 3 support the results of the QLR tests. The QLR tests in Table 4 indicate that the RSVt-A is preferred to the RSV and RSVt models. For the 2fRSVt-A model, the estimates of $\phi$ and $|\rho|$ for the first factor are larger than the corresponding values for the second factor. The QLR test rejects the null hypothesis of the one-factor model; thus, the tests select the 2fRSVt-A model among the six models.
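As an illustration of how the QLR statistics in Table 4 are computed, the following minimal snippet (Python is assumed) tests H0: ρ = 0 by comparing the RSV and RSV-A quasi-log-likelihoods from Table 4, with one degree of freedom for the single restriction.

```python
from scipy.stats import chi2

# H0: rho = 0 (RSV) against the RSV-A alternative; quasi-log-likelihoods from Table 4
qlr = 2 * (-7641.5 - (-7756.8))      # approximately 230.6
p_value = chi2.sf(qlr, df=1)         # essentially 0, so the leverage effect is significant
print(qlr, p_value)
```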

4.2. Forecasting Performance

We compare the out-of-sample forecasts of the SV model and the five RSV models. For these six models, the Kalman filter prediction for the state-space form in (5) and (6) provides the one-step-ahead forecast $\hat{x}_{T+1}$, and $\hat\sigma_{T+1}^2 = \exp(\hat{x}_{T+1})$ is a forecast of the quadratic variation for day $T+1$. Updating the parameter estimates, we calculate the forecasts $\hat\sigma_{T+j}^2$ ($j = 1, \ldots, F$) using a rolling window of the most recent $T = 2500$ observations. An alternative forecast can be considered as follows. Under the Gaussianity of $\zeta_t$, the distribution of $\alpha_{t+1}$ conditional on the past observations is $N(a_{t+1}, P_{t+1})$, where $a_t$ and $P_t$ are defined in Appendix A. Then, the conditional distribution of $\sigma_{t+1}^2 = \exp(\xi + c + \alpha_{t+1})$ is log-normal, which gives the conditional mean $\exp(\xi + c + a_{t+1} + 0.5 P_{t+1})$. Define $\hat\sigma_{T+1}^{2*} = \exp(\hat{x}_{T+1} + 0.5 P_{T+1})$, where $\hat{x}_{t+1} = \hat\xi + \hat{c} + a_{t+1}$. Then, $\hat\sigma_{T+1}^{2*}$ can be used as an alternative forecast to $\hat\sigma_{T+1}^2$.
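The two forecasts described above can be computed directly from the Kalman prediction output; the following is a minimal sketch in which xi_hat, c_hat, a_next, and P_next are illustrative names for the estimates of ξ and c and the one-step-ahead state mean and variance.

```python
import numpy as np

def volatility_forecasts(xi_hat, c_hat, a_next, P_next):
    """One-step-ahead forecasts of the quadratic variation.

    a_next and P_next are the Kalman prediction mean and variance of alpha_{T+1}.
    """
    x_hat = xi_hat + c_hat + a_next
    sigma2 = np.exp(x_hat)                       # plain Kalman filtering prediction
    sigma2_star = np.exp(x_hat + 0.5 * P_next)   # log-normal mean correction
    return sigma2, sigma2_star
```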
For comparison, we also obtain forecasts based on the RSV, RSV-A, and RSVt models using the 2SML method, as explained in Koopman and Scharth (2013). For the RSV-A model, we first compute the Kalman filter prediction of $x_{T+1}$ and subsequently add the leverage effect $E(\eta_t | \varepsilon_t) = \rho\sigma_\eta \varepsilon_t$ by calculating $\hat\varepsilon_t = y_t \times E(\exp(-0.5 h_t) | x_1, \ldots, x_t)$. Since the RSV and RSVt models have no leverage effect, they do not require the latter step.
For comparing the out-of-sample forecasts, Patton (2011) suggested an approach using imperfect volatility proxies. Patton (2011) examined the functional form of the loss function for comparing volatility forecasts, such that the comparison is robust to the presence of noise in the proxies. According to the definition in Patton (2011), a loss function is “robust” if the ranking of any two volatility forecasts, $\hat\sigma_{T+j}^{2}(1)$ and $\hat\sigma_{T+j}^{2}(2)$, by expected loss is the same whether the ranking is performed using the true conditional variance or an unbiased volatility proxy, $\sigma_t^2$. Patton (2011) demonstrated that the squared forecast error and quasi-likelihood type loss functions, defined by
$$\mathrm{MSFE:}\quad LF_{i,T+j}^{m} = LF^{m}\!\left(\sigma_{T+j}^2, \hat\sigma_{T+j}^{2}(i)\right) = \left(\sigma_{T+j}^2 - \hat\sigma_{T+j}^{2}(i)\right)^2, \tag{10}$$
$$\mathrm{QLIKE:}\quad LF_{i,T+j}^{q} = LF^{q}\!\left(\sigma_{T+j}^2, \hat\sigma_{T+j}^{2}(i)\right) = \frac{\sigma_{T+j}^2}{\hat\sigma_{T+j}^{2}(i)} + \log \hat\sigma_{T+j}^{2}(i), \tag{11}$$
are robust, and they depend only on the forecast error $\sigma_{T+j}^2 - \hat\sigma_{T+j}^{2}(i)$ and the standardized forecast error $\sigma_{T+j}^2/\hat\sigma_{T+j}^{2}(i)$, respectively.
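A minimal sketch of the two loss functions in (10) and (11); the function and argument names are illustrative, and proxy stands for an unbiased volatility proxy such as the realized kernel.

```python
import numpy as np

def mean_losses(proxy, forecast):
    """Mean MSFE and QLIKE in Equations (10)-(11) over the forecast period."""
    msfe = np.mean((proxy - forecast) ** 2)
    qlike = np.mean(proxy / forecast + np.log(forecast))
    return msfe, qlike
```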
For the loss functions in (10) and (11), the MCS procedure of Hansen et al. (2011) enables us to determine the set of models, $\mathcal{M}^{*}$, that consists of the best model(s) from a collection of models, $\mathcal{M}_0$. Define $d_{ik,t} = LF_{i,t} - LF_{k,t}$ and $\bar{d}_{ik} = F^{-1} \sum_{j=1}^{F} d_{ik,T+j}$ as the difference in the loss functions of two competing models and its sample mean, respectively. Under the null hypothesis $H_0: E(d_{ik,t}) = 0$ for all $i > k$ with $i, k \in \mathcal{M}$, Hansen et al. (2011) considered two kinds of test statistics:
$$t_R = \max_{i,k \in \mathcal{M}} \frac{\bar{d}_{ik}}{\sqrt{\widehat{\mathrm{Var}}(\bar{d}_{ik})}}, \qquad t_{SQ} = \sum_{i,k \in \mathcal{M},\, i>k} \left(\frac{\bar{d}_{ik}}{\sqrt{\widehat{\mathrm{Var}}(\bar{d}_{ik})}}\right)^2, \tag{12}$$
where $\widehat{\mathrm{Var}}(\bar{d}_{ik})$ is a bootstrap estimate of the variance of $\bar{d}_{ik}$, and the p-values of the test statistics are determined using a bootstrap approach. If the null hypothesis is rejected at a given confidence level, the worst performing model is excluded (rejection is determined on the basis of bootstrap p-values under the null hypothesis). Such a model is identified as follows:
$$i^{*} = \arg\max_{i \in \mathcal{M}} \frac{\sum_{k \in \mathcal{M}} \bar{d}_{ik}}{\left[\widehat{\mathrm{Var}}\!\left(\sum_{k \in \mathcal{M}} \bar{d}_{ik}\right)\right]^{1/2}}, \tag{13}$$
where the variance is computed using a bootstrap method.
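For intuition, the following is a simplified sketch of one elimination step based on the t_R statistic in (12) and the rule in (13). It uses an i.i.d. bootstrap for the variance estimates, whereas the results in Table 5 rely on the full procedure of Hansen et al. (2011), which typically employs a block bootstrap to account for serial dependence; all names are illustrative.

```python
import numpy as np

def mcs_step(losses, n_boot=1000, seed=0):
    """One elimination step of a (simplified) model confidence set procedure.

    losses: (F, M) array of per-period losses for M models.
    Returns the t_R statistic and the index of the model to eliminate.
    """
    rng = np.random.default_rng(seed)
    F, M = losses.shape
    mbar = losses.mean(axis=0)
    dbar = mbar[:, None] - mbar[None, :]                 # dbar_{ik}

    # i.i.d. bootstrap variances of the mean loss differences (simplification)
    boot_means = np.empty((n_boot, M))
    for b in range(n_boot):
        idx = rng.integers(0, F, size=F)
        boot_means[b] = losses[idx].mean(axis=0)
    boot_d = boot_means[:, :, None] - boot_means[:, None, :]
    var_d = boot_d.var(axis=0) + np.eye(M)               # eye avoids 0/0 on the diagonal

    t_stats = dbar / np.sqrt(var_d)
    t_R = t_stats.max()                                  # range statistic in (12)

    # elimination rule in (13): largest standardized average loss difference
    num = dbar.sum(axis=1)
    var_num = boot_d.sum(axis=2).var(axis=0)
    worst = int(np.argmax(num / np.sqrt(var_num + 1e-12)))
    return t_R, worst
```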
Table 5 presents the means of the MSFE and QLIKE, defined by (10) and (11), for the six models and the two volatility forecasts, that is, $\hat\sigma_{T+j}^2$ and $\hat\sigma_{T+j}^{2*}$ ($j = 1, \ldots, F$), based on the QML estimation, together with three models estimated by the 2SML method. Generally, the Kalman filtering prediction, $\hat\sigma_{T+j}^2$, has a smaller MSFE, while the adjusted forecast, $\hat\sigma_{T+j}^{2*}$, has a smaller QLIKE. The MSFE selects the 2fRSVt-A (QML) model based on the Kalman filtering prediction, while the QLIKE chooses the simple RSV (QML) model using the adjusted value. The p-values of the MCS based on the $t_R$ statistic are presented in brackets in Table 5; we omit the results for $t_{SQ}$ as they are similar. The p-values show which differences in model performance are statistically significant. The forecasts of the SV model are significantly different from those of the alternative RSV models for both loss functions. Among the RSV models, the differences are negligible for this dataset. In general, the QML method produces better forecasts than the 2SML method, but the differences are statistically insignificant.
The out-of-sample forecast performance indicates that the data prefer the RSV models to the SV model. For this dataset, there are no statistical differences among the two kinds of forecasts of the RSV models. The data contain the period of the global financial crisis triggered by the collapse of Lehman Brothers on 15 September 2008. The lack of statistical differences among the RSV models may be caused by the effects of this turbulence in the data.

5. Conclusions

This study examined the QML method using the Kalman filter for RSVt-A models. The Monte Carlo experiments reveal that the finite sample properties of the QML estimator are competitive with those of the 2SML estimator of Koopman and Scharth (2013). The QML estimation is useful for its computational speed and simplicity. The empirical results for the S&P 500 index indicate that the 2fRSVt-A model is preferred over the alternative RSV models, while the analysis of the out-of-sample forecasts favors the RSV models, rejecting the simple SV model. Furthermore, the forecasts of the alternative RSV models are statistically equivalent to those of the simple RSV model for the data covering the global financial crisis.
Compared with the SML, 2SML, and Bayesian MCMC methods, the computational load of the QML estimation is negligible, which makes it useful for practical purposes. There are several directions for extending the current research. First, we can consider estimating a multivariate model with dynamic correlations, extending the work of Asai and McAleer (2009). Second, we can develop the QML technique for the long-memory volatility model, as in Shirota et al. (2014). Third, it is straightforward to include multiple components in the volatility equation, as in Engle and Gallo (2006). We leave such tasks for future research.

Funding

This research was funded by the Japan Society for the Promotion of Science, grant number 22K01429.

Data Availability Statement

The data were obtained from the Oxford-Man Institute of Quantitative Finance on 29 June 2022.

Acknowledgments

The author is most grateful to the editors, two anonymous reviewers, and Yoshihisa Baba for their helpful comments and suggestions.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
2fRSVt-A: Two-factor realized asymmetric stochastic volatility with standardized t distribution
2SML: Two-step simulated maximum likelihood
GARCH: Generalized autoregressive conditional heteroskedasticity
MCMC: Markov chain Monte Carlo
MSFE: Mean squared forecast error
QLIKE: Quasi-likelihood
QLR: Quasi-likelihood ratio
QML: Quasi-maximum likelihood
RK: Realized kernel
RSV: Realized stochastic volatility
RSV-A: Realized asymmetric stochastic volatility
RSVt: Realized stochastic volatility with standardized t distribution
RSVt-A: Realized asymmetric stochastic volatility with standardized t distribution
S&P: Standard and Poor’s
SML: Simulated maximum likelihood
SV: Stochastic volatility

Appendix A. Kalman Filtering and Smoothing

Consider a linear state-space model for the $m \times 1$ vector $y_t$:
$$y_t = c + Z \alpha_t + \varepsilon_t, \quad \varepsilon_t \sim N(0, H_t), \qquad \alpha_{t+1} = d_t + T \alpha_t + R \eta_t, \quad \eta_t \sim N(0, Q_t), \qquad t = 1, \ldots, T,$$
where $\alpha_1 \sim N(a_1, P_1)$. The Kalman filter recursion is given by
$$v_t = y_t - c - Z a_t, \quad F_t = Z P_t Z' + H_t, \quad K_t = T P_t Z' F_t^{-1}, \quad L_t = T - K_t Z,$$
$$a_{t+1} = d_t + T a_t + K_t v_t, \quad P_{t+1} = T P_t L_t' + R Q_t R'.$$
Note that $a_{t+1} = E(\alpha_{t+1} | y_1, \ldots, y_t; \theta)$ and $P_{t+1} = \mathrm{Var}(\alpha_{t+1} | y_1, \ldots, y_t; \theta)$. Since $v_t \sim N(0, F_t)$, we obtain the log-likelihood function as follows:
$$L(\theta) = \sum_{t=1}^{T} l_t(\theta), \qquad l_t(\theta) = -\frac{m}{2} \ln(2\pi) - \frac{1}{2} \ln |F_t| - \frac{1}{2} v_t' F_t^{-1} v_t.$$
If the distribution of $\varepsilon_t$ is non-Gaussian, $L(\theta)$ becomes the quasi-log-likelihood function.
We can compute the smoothed estimate, $\hat\alpha_t = E(\alpha_t | y_1, \ldots, y_T; \theta)$, and the corresponding covariance matrix, $V_t = \mathrm{Var}(\alpha_t | y_1, \ldots, y_T; \theta)$, using the backward state-smoothing equations:
$$r_{t-1} = Z' F_t^{-1} v_t + L_t' r_t, \quad N_{t-1} = Z' F_t^{-1} Z + L_t' N_t L_t, \quad \hat\alpha_t = a_t + P_t r_{t-1}, \quad V_t = P_t - P_t N_{t-1} P_t,$$
with the starting values $r_T = 0$ and $N_T = O$.
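A compact sketch of the filtering and smoothing recursions above, written in Python with time-invariant system matrices assumed for brevity (the paper's computations were carried out in MATLAB; T_mat denotes the transition matrix T to avoid a clash with the sample size).

```python
import numpy as np

def kalman_filter_smoother(y, c, Z, d, T_mat, R, H, Q, a1, P1):
    """Kalman filter, quasi-log-likelihood, and backward state smoother for
    y_t = c + Z alpha_t + eps_t,  alpha_{t+1} = d + T alpha_t + R eta_t."""
    n, m = y.shape
    p = len(a1)
    a, P = a1.copy(), P1.copy()
    v_all, Fi_all, L_all, a_all, P_all = [], [], [], [], []
    loglik = 0.0
    for t in range(n):
        v = y[t] - c - Z @ a                    # innovation
        F = Z @ P @ Z.T + H                     # innovation covariance
        Fi = np.linalg.inv(F)
        K = T_mat @ P @ Z.T @ Fi                # Kalman gain
        L = T_mat - K @ Z
        loglik += (-0.5 * m * np.log(2 * np.pi)
                   - 0.5 * np.log(np.linalg.det(F)) - 0.5 * v @ Fi @ v)
        v_all.append(v); Fi_all.append(Fi); L_all.append(L)
        a_all.append(a); P_all.append(P)
        a = d + T_mat @ a + K @ v               # prediction step
        P = T_mat @ P @ L.T + R @ Q @ R.T
    # backward state smoothing, starting from r_T = 0 and N_T = O
    r, N = np.zeros(p), np.zeros((p, p))
    alpha_hat, V = np.empty((n, p)), np.empty((n, p, p))
    for t in range(n - 1, -1, -1):
        r = Z.T @ Fi_all[t] @ v_all[t] + L_all[t].T @ r        # r_{t-1}
        N = Z.T @ Fi_all[t] @ Z + L_all[t].T @ N @ L_all[t]    # N_{t-1}
        alpha_hat[t] = a_all[t] + P_all[t] @ r
        V[t] = P_all[t] - P_all[t] @ N @ P_all[t]
    return loglik, alpha_hat, V
```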
de Jong (1989) developed a deletion-smoothing algorithm to obtain the estimate $\check\alpha_t = E(\alpha_t | y_{\setminus t}; \theta)$ and the associated covariance matrix $\check{V}_t = \mathrm{Var}(\alpha_t | y_{\setminus t}; \theta)$, where $y_{\setminus t}$ is the interpolation set $\{y_1, \ldots, y_{t-1}, y_{t+1}, \ldots, y_T\}$. Define the forward equations as follows:
$$w_t = F_t^{-1} v_t - K_t' r_t, \qquad W_t = F_t^{-1} + K_t' N_t K_t, \qquad M_t = L_t' N_t K_t - Z' F_t^{-1}. \tag{A1}$$
As demonstrated by Theorem 5 in de Jong (1989), we obtain the following:
$$\check\alpha_t = \hat\alpha_t + P_t M_t W_t^{-1} w_t, \qquad \check{V}_t = V_t + P_t M_t W_t^{-1} M_t' P_t. \tag{A2}$$
We can obtain the deletion-smoothing estimate of $\alpha_t^{*} = (\alpha_t', \eta_t')'$ by redefining the state-space model as
$$y_t = c + Z^{*} \alpha_t^{*} + \varepsilon_t, \quad \varepsilon_t \sim N(0, H_t), \qquad \alpha_{t+1}^{*} = d_t^{*} + T^{*} \alpha_t^{*} + R^{*} \eta_t^{*}, \quad \eta_t^{*} \sim N(0, Q_t), \tag{A3}$$
where
$$d_t^{*} = \begin{pmatrix} d_t \\ O_{q \times 1} \end{pmatrix}, \quad Z^{*} = \begin{pmatrix} Z & O_{m \times q} \end{pmatrix}, \quad T^{*} = \begin{pmatrix} T & R \\ O_{q \times p} & O_{q \times q} \end{pmatrix}, \quad R^{*} = \begin{pmatrix} O_{p \times q} \\ I_q \end{pmatrix},$$
with $\eta_t$ of dimension $q \times 1$, and applying the deletion-smoothing algorithm in (A1) and (A2) to the model in (A3).

Appendix B. Two-Step SML (2SML) Estimation

Appendix B.1. Framework

Let $y = (y_1, \ldots, y_T)'$, $x = (x_1, \ldots, x_T)'$, and $\alpha = (\alpha_1, \ldots, \alpha_T)'$. For the vector of unknown parameters, define $\psi = (\psi_y', \psi_x^{*\prime}, \psi_\alpha')'$, where $\psi_y = (c, \rho, \nu)'$, $\psi_x^{*} = (\xi^{*}, \sigma_u^2)'$, and $\psi_\alpha = (\phi, \sigma_\eta^2)'$ with $\xi^{*} = c + \xi$. The density of $(y, x)$ is expressed as follows:
$$p(y, x; \psi) = \int p(y, x, \alpha; \psi)\, \mathrm{d}\alpha = \int p(y | x, \alpha; \psi_y)\, p(x | \alpha; \psi_x^{*})\, p(\alpha; \psi_\alpha)\, \mathrm{d}\alpha = \int \prod_{t=1}^{T} p(y_t | x_t, \alpha_t, \eta_t; \psi_y)\, p(x_t | \alpha_t; \psi_x^{*})\, p(\alpha_t | \alpha_{t-1}; \psi_\alpha)\, \mathrm{d}\alpha. \tag{A4}$$
Koopman and Scharth (2013) documented that the likelihood function based on the density (A4) can be approximated via
$$L(\psi; y, x) = p(x; \psi_x^{*}, \psi_\alpha) \times p(y | x; \psi_y, \psi_\alpha), \tag{A5}$$
where
$$p(x; \psi_x^{*}, \psi_\alpha) = \int \prod_{t=1}^{T} p(x_t | \alpha_t; \psi_x^{*})\, p(\alpha_t | \alpha_{t-1}; \psi_\alpha)\, \mathrm{d}\alpha, \qquad p(y | x; \psi) = \prod_{t=1}^{T} p(y_t | x_{\setminus t}; \psi), \qquad p(y_t | x_{\setminus t}; \psi) = \iint p(y_t | \alpha_t, \eta_t; \psi)\, p(\alpha_t, \eta_t | x_{\setminus t}; \psi)\, \mathrm{d}\alpha_t\, \mathrm{d}\eta_t, \tag{A6}$$
with the interpolation set $x_{\setminus t} = \{x_1, \ldots, x_{t-1}, x_{t+1}, \ldots, x_T\}$. Intuitively, the idea of Koopman and Scharth (2013) is to evaluate the marginal density $p(x; \psi_x^{*}, \psi_\alpha)$ via the Kalman filter and to estimate the remaining part $p(y | x; \psi)$ via numerical integration or quasi-Monte Carlo integration. Maximizing $\log p(x; \psi_x^{*}, \psi_\alpha)$ gives the first-step estimator of $\psi_x^{*}$ and $\psi_\alpha$. Conditional on these estimates, maximizing the remaining part with respect to $\psi_y$ gives the second-step estimate. The details of the second-step estimation for the models with and without the asymmetric effect are explained in the remainder of this appendix.

Appendix B.2. Estimation for Model without Asymmetric Effect

For the model without asymmetric effects, the conditional density for $y_t$ in (A6) reduces to
$$p(y_t | x_{\setminus t}; \psi) = \int p(y_t | \alpha_t; \psi_y)\, p(\alpha_t | x_{\setminus t}; \psi)\, \mathrm{d}\alpha_t.$$
As discussed in Koopman and Scharth (2013), $p(\alpha_t | x_{\setminus t}; \psi)$ is the normal density function with mean $E(\alpha_t | x_{\setminus t}; \psi)$ and variance $\mathrm{Var}(\alpha_t | x_{\setminus t}; \psi)$, which can be obtained via the deletion-smoothing algorithm (see Appendix A). Koopman and Scharth (2013) recommended using Gaussian quadrature for approximating $p(y_t | x_{\setminus t}; \psi_y, \hat\psi_x^{*}, \hat\psi_\alpha)$, where $\hat\psi_x^{*}$ and $\hat\psi_\alpha$ are the estimates obtained in the first step. The current study adopts the Gauss–Legendre quadrature based on six points, since there is no major improvement from increasing the number of points beyond six.
Using the approximated density
$$\hat{p}(y | x; \psi_y, \hat\psi_x^{*}, \hat\psi_\alpha) = \prod_{t=1}^{T} \hat{p}(y_t | x_{\setminus t}; \psi_y, \hat\psi_x^{*}, \hat\psi_\alpha),$$
it is straightforward to obtain the second-step estimator by maximizing the log of the simulated likelihood function for the 2SML estimation. Note that the constant term in (3) needs to be corrected as $\hat\xi = \hat\xi^{*} - \hat{c}$, using $\hat{c}$ from the second step.
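As an illustration of the six-point quadrature, the following sketch evaluates the integral above for Gaussian returns, with the deletion-smoothing mean and variance of α_t supplied as inputs; the integration range of six standard deviations and the function names are assumptions made for this example.

```python
import numpy as np
from scipy.stats import norm

def p_y_given_xminus(y_t, c, m_t, v_t, n_points=6, width=6.0):
    """Gauss-Legendre approximation of the integral of
    p(y_t | alpha_t) * N(alpha_t; m_t, v_t) over alpha_t,
    with Gaussian returns: y_t | alpha_t ~ N(0, exp(c + alpha_t)).
    m_t and v_t are the deletion-smoothing mean and variance of alpha_t."""
    nodes, weights = np.polynomial.legendre.leggauss(n_points)
    lo, hi = m_t - width * np.sqrt(v_t), m_t + width * np.sqrt(v_t)
    alpha = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)    # map [-1, 1] to [lo, hi]
    integrand = (norm.pdf(y_t, scale=np.exp(0.5 * (c + alpha)))
                 * norm.pdf(alpha, loc=m_t, scale=np.sqrt(v_t)))
    return 0.5 * (hi - lo) * np.dot(weights, integrand)
```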

Appendix B.3. Estimation for Model with Asymmetric Effect

For the model with asymmetric effects, Koopman and Scharth (2013) suggested approximating $p(y_t | x_{\setminus t}; \psi)$ via quasi-Monte Carlo integration using the Halton sequence (see, for instance, Train 2003):
$$\hat{p}(y_t | x_{\setminus t}; \psi) = \frac{1}{S} \sum_{s=1}^{S} p\!\left(y_t \,\big|\, \alpha_t^{(s)}, \eta_t^{(s)}; \psi_y, \hat\psi_\alpha\right),$$
where $(\alpha_t^{(s)}, \eta_t^{(s)})$ is obtained via the two-dimensional Halton sequence, which is controlled by the mean and the covariance matrix of $p(\alpha_t, \eta_t | x_{\setminus t}; \hat\psi_x^{*}, \hat\psi_\alpha)$. The deletion-smoothing algorithm for the redefined state $(\alpha_t, \eta_t)'$ is explained in Appendix A. For the case of a normal distribution for $\varepsilon_t$, $p(y_t | \alpha_t, \eta_t; \psi_y, \hat\psi_\alpha)$ is the normal density with mean $(\rho \eta_t / \hat\sigma_\eta) \exp(0.5 h_t)$ and variance $(1 - \rho^2) \exp(h_t)$, where $h_t = c + \alpha_t$. For the case of a non-normal distribution, Koopman and Scharth (2013) used a copula function for the dependence. Maximizing the log of the simulated likelihood gives the second-step SML estimator, as above.
The approximation error of the quasi-Monte Carlo integration is of the order O ( S 1 ( log S ) 2 ) . As discussed in Asmussen and Glynn (2007), the convergence rate of the quasi-Monte Carlo method in practice is usually much faster than its theoretical upper bound. The Monte Carlo experiments in Koopman and Scharth (2013) set S = 100 , and the current study follows this approach.
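A sketch of the quasi-Monte Carlo average above using SciPy's Halton sequence; Gaussian returns are assumed (Koopman and Scharth (2013) use a copula for the non-Gaussian case), and mean_t and cov_t stand for the deletion-smoothing mean and covariance of (α_t, η_t).

```python
import numpy as np
from scipy.stats import norm, qmc

def p_y_given_xminus_asym(y_t, c, rho, sig_eta, mean_t, cov_t, S=100):
    """Quasi-Monte Carlo approximation of p(y_t | x without observation t):
    average of p(y_t | alpha_t, eta_t) over Halton draws of (alpha_t, eta_t)
    from N(mean_t, cov_t)."""
    u = qmc.Halton(d=2, scramble=False).random(S)
    u = np.clip(u, 1e-10, 1 - 1e-10)                 # guard against ppf(0) = -inf
    z = norm.ppf(u) @ np.linalg.cholesky(cov_t).T    # correlated normal draws
    alpha, eta = mean_t[0] + z[:, 0], mean_t[1] + z[:, 1]
    h = c + alpha
    mean_y = (rho * eta / sig_eta) * np.exp(0.5 * h)   # E(y_t | alpha_t, eta_t)
    sd_y = np.sqrt(1 - rho**2) * np.exp(0.5 * h)       # conditional std. dev.
    return np.mean(norm.pdf(y_t, loc=mean_y, scale=sd_y))
```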

References

1. Abramowitz, Milton, and Irene A. Stegun. 1970. Handbook of Mathematical Functions. Mineola: Dover Publications.
2. Asai, Manabu. 2008. Autoregressive stochastic volatility models with heavy-tailed distributions: A comparison with multifactor volatility models. Journal of Empirical Finance 15: 332–41.
3. Asai, Manabu. 2009. Bayesian analysis of stochastic volatility models with mixture-of-normal distributions. Mathematics and Computers in Simulation 79: 2579–96.
4. Asai, Manabu, and Michael McAleer. 2009. The structure of dynamic correlations in multivariate stochastic volatility models. Journal of Econometrics 150: 182–92.
5. Asai, Manabu, Chia-Lin Chang, and Michael McAleer. 2017. Realized stochastic volatility with general asymmetry and long memory. Journal of Econometrics 199: 202–12.
6. Asai, Manabu, Michael McAleer, and Marcelo C. Medeiros. 2012a. Asymmetry and long memory in volatility modeling. Journal of Financial Econometrics 10: 495–512.
7. Asai, Manabu, Michael McAleer, and Marcelo C. Medeiros. 2012b. Estimation and forecasting with noisy realized volatility. Computational Statistics & Data Analysis 56: 217–30.
8. Asmussen, Søren, and Peter W. Glynn. 2007. Stochastic Simulation: Algorithms and Analysis. New York: Springer.
9. Barndorff-Nielsen, Ole E., and Neil Shephard. 2002. Econometric analysis of realized volatility and its use in estimating stochastic volatility models. Journal of the Royal Statistical Society, Series B 64: 253–80.
10. Barndorff-Nielsen, Ole E., Peter Reinhard Hansen, Asger Lunde, and Neil Shephard. 2008. Designing realised kernels to measure the ex-post variation of equity prices in the presence of noise. Econometrica 76: 1481–1536.
11. Barndorff-Nielsen, Ole E., Peter Reinhard Hansen, Asger Lunde, and Neil Shephard. 2009. Realised kernels in practice: Trades and quotes. Econometrics Journal 12: C1–C32.
12. Bollerslev, Tim, and Hao Zhou. 2006. Volatility puzzles: A simple framework for gauging return-volatility regressions. Journal of Econometrics 131: 123–50.
13. Chib, Siddhartha, Yasuhiro Omori, and Manabu Asai. 2009. Multivariate stochastic volatility. In Handbook of Financial Time Series. Edited by Torben G. Andersen, Richard A. Davis, Jens-Peter Kreiss and Thomas Mikosch. New York: Springer, pp. 365–400.
14. de Jong, Piet. 1989. Smoothing and interpolation with the state-space model. Journal of the American Statistical Association 84: 1085–88.
15. Dunsmuir, William. 1979. A central limit theorem for parameter estimation in stationary vector time series and its applications to models for a signal observed with noise. Annals of Statistics 7: 490–506.
16. Durbin, James, and Siem Jan Koopman. 2001. Time Series Analysis by State Space Methods. Oxford: Oxford University Press.
17. Engle, Robert F., and Giampiero M. Gallo. 2006. A multiple indicators model for volatility using intra-daily data. Journal of Econometrics 131: 3–27.
18. Hansen, Peter Reinhard, and Zhuo Huang. 2016. Exponential GARCH modeling with realized measures of volatility. Journal of Business & Economic Statistics 34: 269–87.
19. Hansen, Peter Reinhard, Zhuo Huang, and Howard Howan Shek. 2012. Realized GARCH: A complete model of returns and realized measures of volatility. Journal of Applied Econometrics 27: 877–906.
20. Hansen, Peter Reinhard, Asger Lunde, and James M. Nason. 2011. The model confidence set. Econometrica 79: 453–97.
21. Harvey, Andrew C., and Neil Shephard. 1996. Estimation of an asymmetric stochastic volatility model for asset returns. Journal of Business and Economic Statistics 14: 429–34.
22. Harvey, Andrew, Esther Ruiz, and Neil Shephard. 1994. Multivariate stochastic variance models. Review of Economic Studies 61: 247–64.
23. Koopman, Siem Jan, and Marcel Scharth. 2013. The analysis of stochastic volatility in the presence of daily realized measures. Journal of Financial Econometrics 11: 76–115.
24. Liesenfeld, Roman, and Robert C. Jung. 2000. Stochastic volatility models: Conditional normality versus heavy-tailed distributions. Journal of Applied Econometrics 15: 137–60.
25. Patton, Andrew J. 2011. Volatility forecast comparison using imperfect volatility proxies. Journal of Econometrics 160: 246–56.
26. Sandmann, Gleb, and Siem Jan Koopman. 1998. Estimation of stochastic volatility models via Monte Carlo maximum likelihood. Journal of Econometrics 87: 271–301.
27. Shephard, Neil, and Kevin Sheppard. 2010. Realising the future: Forecasting with high frequency-based volatility (HEAVY) models. Journal of Applied Econometrics 25: 197–231.
28. Shirota, Shinichiro, Takayuki Hizu, and Yasuhiro Omori. 2014. Realized stochastic volatility with leverage and long memory. Computational Statistics & Data Analysis 76: 618–41.
29. Takahashi, Makoto, Yasuhiro Omori, and Toshiaki Watanabe. 2009. Estimating stochastic volatility models using daily returns and realized volatility simultaneously. Computational Statistics & Data Analysis 53: 2404–26.
30. Taniguchi, Masanobu, and Yoshihide Kakizawa. 2000. Asymptotic Theory of Statistical Inference for Time Series. New York: Springer.
31. Train, Kenneth E. 2003. Discrete Choice Methods with Simulation. Cambridge: Cambridge University Press.
32. Yu, Jun. 2005. On leverage in a stochastic volatility model. Journal of Econometrics 127: 165–78.
Table 1. Monte Carlo results for QML and 2SML estimators for RSV-A.

Parameter | True | QML Mean | QML Std. Dev. | QML RMSE/|θ_i| | 2SML Mean | 2SML Std. Dev. | 2SML RMSE/|θ_i|
ϕ | 0.98 | 0.9786 | (0.0042) | [0.0045] | 0.9784 | (0.0045) | [0.0048]
σ_η² | 0.05 | 0.0501 | (0.0034) | [0.0675] | 0.0501 | (0.0035) | [0.0703]
ξ | 0.10 | 0.1002 | (0.0444) | [0.4442] | 0.0928 | (0.0290) | [0.2988]
σ_u² | 0.05 | 0.0500 | (0.0027) | [0.0545] | 0.0500 | (0.0029) | [0.0572]
c | 0.40 | 0.3998 | (0.2021) | [0.5055] | 0.4092 | (0.2216) | [0.5545]
ρ | −0.30 | −0.3020 | (0.0298) | [0.0994] | −0.2999 | (0.0280) | [0.0932]
Table 2. Monte Carlo results for QML and 2SML estimators for RSVt.

Parameter | True | QML Mean | QML Std. Dev. | QML RMSE/|θ_i| | 2SML Mean | 2SML Std. Dev. | 2SML RMSE/|θ_i|
ϕ | 0.98 | 0.9786 | (0.0044) | [0.0048] | 0.9787 | (0.0045) | [0.0047]
σ_η² | 0.05 | 0.0500 | (0.0033) | [0.0653] | 0.0500 | (0.0033) | [0.0658]
ξ | 0.10 | 0.0899 | (0.0645) | [0.6523] | 0.0584 | (0.4538) | [4.5541]
σ_u² | 0.05 | 0.0500 | (0.0028) | [0.0564] | 0.0500 | (0.0028) | [0.0569]
c | 0.40 | 0.4022 | (0.2268) | [0.5671] | 0.4289 | (0.5024) | [1.2579]
ν | 10.00 | 10.365 | (4.0983) | [0.4114] | 10.547 | (0.8452) | [0.0998]
Table 3. Descriptive statistics for S&P 500.

Data | Mean | Std. Dev. | Skewness | Kurtosis
Return | 0.0222 | 1.3138 | −0.2989 | 14.419
RK | 1.0535 | 8.5195 | 14.260 | 359.30
log(RK) | −0.8240 | 1.3799 | 0.5697 | 3.5821
Std. Var. | 0.1316 | 1.1649 | −0.0053 | 2.6944

Note: The standardized variable is calculated by dividing the return by the square root of the RK.
Table 4. QML estimates via the Kalman filter for S&P 500.

Parameter | SV | RSV | RSV-A | RSVt | RSVt-A | 2fRSVt-A
c | −0.4605 | −0.4588 | −0.3243 | −0.3843 | −0.2946 | −0.2113
 | (0.0045) | (0.0029) | (0.0023) | (0.0033) | (0.0028) | (0.0029)
ϕ | 0.9820 | 0.9539 | 0.9583 | 0.9542 | 0.9583 | 0.9714
 | (0.0001) | (0.0001) | (0.0001) | (0.0001) | (0.0001) | (0.0001)
σ_η² | 0.0411 | 0.0989 | 0.0761 | 0.0982 | 0.0760 | 0.0482
 | (0.0002) | (0.0002) | (0.0001) | (0.0002) | (0.0001) | (0.0001)
ρ | | | −0.6034 | | −0.6048 | −0.5737
 | | | (0.0007) | | (0.0007) | (0.0009)
ϕ_2 | | | | | | 0.2188
 | | | | | | (0.0015)
σ_η,2² | | | | | | 0.2128
 | | | | | | (0.0005)
ρ_2 | | | | | | −0.1216
 | | | | | | (0.0006)
ξ | | −0.1807 | −0.1927 | −0.2553 | −0.2207 | −0.1950
 | | (0.0009) | (0.0009) | (0.0018) | (0.0017) | (0.0011)
σ_u² | | 0.1567 | 0.1839 | 0.1572 | 0.1840 | 0.0026
 | | (0.0002) | (0.0002) | (0.0002) | (0.0002) | (0.0005)
ν | | | | 15.0751 | 37.8286 | 102.1949
 | | | | (0.2884) | (1.9073) | (6.6828)
QLogLike | −5734.4 | −7756.8 | −7641.5 | −7756.3 | −7641.4 | −7590.8
H_0 | | | ρ = 0 | ν = ∞ | ρ = 0 | 1 factor
QLR test | | | 230.61 | 0.9698 | 229.80 | 101.27
 | | | [0.0000] | [0.3247] | [0.0000] | [0.0000]

Note: Standard errors are in parentheses. p-values are in brackets.
Table 5. Out-of-Sample Forecast Evaluation.

Model | MSFE: σ̂²_{T+1} | MSFE: σ̂²*_{T+1} | QLIKE: σ̂²_{T+1} | QLIKE: σ̂²*_{T+1}
SV (QML) | 5.7010 [0.000] | 9.0803 [0.000] | 1.0754 [0.000] | 1.2688 [0.000]
RSV (QML) | 0.0816 [0.750] | 0.0817 [0.750] | −0.5990 [0.337] | −0.6043 [1.000]
RSV-A (QML) | 0.0800 [0.967] | 0.0813 [0.750] | −0.5937 [0.337] | −0.5978 [0.337]
RSVt (QML) | 0.0817 [0.750] | 0.0817 [0.750] | −0.5989 [0.337] | −0.6042 [0.426]
RSVt-A (QML) | 0.0800 [0.967] | 0.0812 [0.750] | −0.5938 [0.337] | −0.5979 [0.426]
2fRSVt-A (QML) | 0.0799 [1.000] | 0.0841 [0.750] | −0.5905 [0.337] | −0.5956 [0.337]
RSV (2SML) | 0.0829 [0.750] | 0.0824 [0.750] | −0.5964 [0.337] | −0.6017 [0.426]
RSV-A (2SML) | 0.0815 [0.750] | 0.0821 [0.750] | −0.5904 [0.242] | −0.5972 [0.337]
RSVt (2SML) | 0.0829 [0.750] | 0.0824 [0.750] | −0.5964 [0.337] | −0.6017 [0.426]

Note: The table reports the means of the MSFE and QLIKE, defined by (10) and (11), respectively. σ̂²_{T+1} and σ̂²*_{T+1} are the one-step-ahead volatility forecast and its adjusted value based on the log-normal assumption. The minimum values of the MSFE (0.0799) and QLIKE (−0.6043) correspond to the MCS p-value of 1.000. The values in brackets are the p-values obtained via the model confidence set (MCS) procedure (Equations (12) and (13)) applied to the 18 sets of forecasts for the two loss functions.