Article

Portmanteau Test for ARCH-Type Models by Using High-Frequency Data

Yanshan Chen, Xingfa Zhang, Chunliang Deng and Yujiao Liu
1 School of Economics and Statistics, Guangzhou University, Guangzhou 510006, China
2 School of Mathematics, Jiaying University, Meizhou 514015, China
* Author to whom correspondence should be addressed.
Current address: University City Campus, Guangzhou University, No. 230 Waihuan Xi Road, Guangdong Higher Education Mega Center, Panyu District, Guangzhou 510006, China.
These authors contributed equally to this work.
Axioms 2024, 13(3), 141; https://doi.org/10.3390/axioms13030141
Submission received: 18 December 2023 / Revised: 11 February 2024 / Accepted: 17 February 2024 / Published: 22 February 2024
(This article belongs to the Special Issue Time Series Analysis: Research on Data Modeling Methods)

Abstract: The portmanteau test is an effective tool for testing the goodness of fit of models. Motivated by the fact that high-frequency data can improve the estimation accuracy of models, a modified portmanteau test using high-frequency data is proposed for ARCH-type models in this paper. Simulation results show that the empirical size and power of the modified test statistics of the model using high-frequency data are better than those of the daily model. Three stock indices (CSI 300, SSE 50, CSI 500) are taken as an example to illustrate the practical application of the test.
MSC:
62H15; 62G20

1. Introduction

Securities trading has always been a prominent topic in the financial sector, and volatility serves as a crucial indicator for analyzing fluctuations in trading price data. Volatility reflects the expected level of price instability in a financial asset or market, which greatly influences investment decisions [1]. In the field of volatility modeling, the autoregressive conditional heteroscedasticity (ARCH) model and the generalized autoregressive conditional heteroscedasticity (GARCH) model are widely recognized as two fundamental models [2]. Let $y_t$ be the log-return of day $t$. The ARCH model proposed by Engle (1982) [3] is structured as follows:
$$y_t = \sigma_t \varepsilon_t,$$
$$\sigma_t^2 = \omega + \alpha_1 y_{t-1}^2 + \alpha_2 y_{t-2}^2 + \cdots + \alpha_q y_{t-q}^2,$$
where $\varepsilon_t$ is an independent and identically distributed (i.i.d.) sequence, and $\sigma_t$ represents the volatility of $y_t$. Additionally, it is assumed that the expectation of $\varepsilon_t$ is zero, i.e., $E[\varepsilon_t] = 0$, and the expectation of $\varepsilon_t^2$ is equal to one, i.e., $E[\varepsilon_t^2] = 1$. The parameters $(\omega, \alpha_1, \alpha_2, \ldots, \alpha_q)$ are the coefficients associated with the lagged squared observations $(1, y_{t-1}^2, y_{t-2}^2, \ldots, y_{t-q}^2)$, which need to be estimated. ARCH models are commonly employed in time-series modeling and analysis. However, when the order $q$ of the ARCH(q) model is large, the number of parameters that require estimation increases. This can lead to challenges in estimation, particularly in finite samples, where estimation efficiency may decrease. Furthermore, the estimated parameter values may turn out to be negative. To address these limitations, Bollerslev (1986) [4] proposed the generalized autoregressive conditional heteroscedasticity (GARCH) model. For the pure GARCH(1,1) model, the conditional variance equation is expressed as follows:
$$\sigma_t^2 = \omega + \alpha y_{t-1}^2 + \beta \sigma_{t-1}^2. \qquad (1)$$
Obviously, Formula (1) is a recursive equation. Replacing $t$ with $t-1$ gives
$$\sigma_{t-1}^2 = \omega + \alpha y_{t-2}^2 + \beta \sigma_{t-2}^2. \qquad (2)$$
From (1) and (2), we obtain
$$\sigma_t^2 = \omega + \alpha y_{t-1}^2 + \beta\left(\omega + \alpha y_{t-2}^2 + \beta \sigma_{t-2}^2\right) = (1 + \beta)\omega + \alpha y_{t-1}^2 + \alpha\beta y_{t-2}^2 + \beta^2 \sigma_{t-2}^2.$$
Repeating the process above, we expand $\sigma_{t-2}^2, \sigma_{t-3}^2, \ldots$, and thus obtain an infinite-order ARCH model. Since the GARCH model is a generalization of the ARCH model, they are collectively referred to as ARCH-type models. In financial data analysis, apart from heteroscedasticity, there are other prominent characteristics, such as the leverage effect (Black, 1976) [5]. To address this, Geweke (1986) [6] introduced the asymmetric log-GARCH model, while Engle et al. (1993) [7] introduced the asymmetric power GARCH model to account for the leverage effect. Furthermore, Drost and Klaassen (1997) [8] modified the pure GARCH(1,1) model as follows:
$$y_t = v_t \tau \varepsilon_t, \qquad (3)$$
$$v_t^2 = 1 + \gamma y_{t-1}^2 + \beta v_{t-1}^2. \qquad (4)$$
For the ARCH(q) case, the conditional variance equation is given by
$$v_t^2 = 1 + \gamma_1 y_{t-1}^2 + \gamma_2 y_{t-2}^2 + \cdots + \gamma_q y_{t-q}^2. \qquad (5)$$
When $v_t\tau = \sigma_t$, $\tau^2 = \omega$, and $\gamma\tau^2 = \alpha$, models (3) and (4) reduce to the pure GARCH(1,1) model. Similarly, models (3) and (5) reduce to the pure ARCH(q) model. The advantage of this formulation is that the standardization of $\varepsilon_t$ only affects the parameter $\tau$: by standardizing the residuals to unit variance, the estimation process is simplified, and the focus can be placed on the parameters of interest without being influenced by the scale of the residuals.
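As a quick sanity check (not part of the original derivation, but immediate from the definitions above), multiplying (4) by $\tau^2$ recovers the pure GARCH(1,1) recursion (1):
$$\sigma_t^2 = \tau^2 v_t^2 = \tau^2 + \gamma\tau^2 y_{t-1}^2 + \beta\,\tau^2 v_{t-1}^2 = \omega + \alpha y_{t-1}^2 + \beta\sigma_{t-1}^2.$$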
With the advancement of information technology, obtaining intraday high-frequency data has become effortless, and such data often contain valuable information. Recognizing this, Visser (2011) [9] introduced high-frequency data into models (3) and (4), leading to improved efficiency in model parameter estimation. Subsequently, numerous researchers have extensively explored the enhancement of classical models by utilizing high-frequency data. Huang et al. (2015) [10] incorporated high-frequency data into the GJR model (named after its proponents Glosten, Jagannathan and Runkle) [11] and examined a range of robust M-estimators [12]. Wang et al. (2017) [13] employed composite quantile regression to examine the GARCH model using high-frequency data. Other notable studies include Fan et al. (2017) [14], Deng et al. (2020) [15], and Liang et al. (2021) [16].
In addition to model parameter estimation, model testing plays a crucial role in time-series modeling and analysis as researchers seek to assess the adequacy of the established models. Portmanteau tests have been widely used for this purpose. The earliest work in this area can be traced back to Box and Pierce (1970) [17], who demonstrated the utility of squared-residual autocorrelations for model testing. Since then, several researchers have applied this test to time-series models, including Granger et al. (1978) [18] and McLeod and Li (1983) [19]. For instance, Engle and Bollerslev (1986) [20] and Pantula (1988) [21] proposed the test to examine the presence of ARCH effects in the error term. However, Li and Mak (1994) [22] showed that the variance of the residual autocorrelation function is not idempotent when using ARCH-type models. To overcome this issue, they developed a modified statistic that incorporated the variance of the residual autocorrelation function. They also proved that when the parameter estimates are asymptotically normal, the test statistic follows a chi-square distribution. After the emergence of the quasi-maximum exponential likelihood estimation (QMELE) method [23], Li and Li (2005) [24] generalized the test statistic proposed by Li and Mak [22] using the new estimation method. They also proposed a similar test statistic for the autocorrelation function of the absolute residuals $|\varepsilon_t|$. Carbon and Francq (2011) [25] extended the test of Li and Mak [22] to the asymmetric power GARCH model. Furthermore, Chen and Zhu (2015) [26] constructed a sign-based portmanteau test based on Li and Mak's [22] statistic. They replaced the autocorrelation function of the residuals $\varepsilon_t$ with the autocorrelation function of the sign-based residuals $\mathrm{sgn}(\varepsilon_t^2 - 1)$, making the new test applicable to heavy-tailed data. Even today, scholars such as Li and Zhang (2022) [27] continue to show great interest in extending these tests.
Motivated by the works of Li and Mak (1994) [22] and Visser (2011) [9], this paper introduces a modified portmanteau test for diagnosing ARCH-type models using high-frequency data. The paper also discusses the general procedure for constructing portmanteau test statistics for ARCH-type models based on high-frequency data. It is demonstrated that under weaker conditions such as having a finite residual fourth-order moment and other regularity conditions, the proposed portmanteau test follows a chi-square distribution.
This paper is structured as follows. Section 2 discusses the estimation of ARCH-type models. Section 3 covers the construction of the portmanteau test statistics and provides the corresponding asymptotic distribution. Section 4 presents the simulation process and the related results. Section 5 includes three real data examples along with the analysis of the corresponding results. Finally, the assumptions, proofs, and additional results are deferred to Appendix A and Appendix B.

2. Estimation Using High-Frequency Data

To introduce high-frequency data, the structure of the log-return equation is enhanced. The observed intraday log-return process $Y_t(u)$ for day $t$ is indexed by the standardized intraday trading time variable $u$, which ranges from 0 to 1. According to Visser (2011) [9], the modified model is a scaling GARCH(1,1) model, which is expressed as:
$$Y_t(u) = v_t \tau \varepsilon_t(u), \quad 0 \le u \le 1, \qquad (6)$$
$$v_t^2 = 1 + \gamma y_{t-1}^2 + \beta v_{t-1}^2, \qquad (7)$$
where $\{\varepsilon_t(u)\}$ represents a standardized process. It is assumed that, for any $k \neq l$, $\varepsilon_k(\cdot)$ and $\varepsilon_l(\cdot)$ are independent of each other and share the same distribution. When $u$ is set to 1, the following relationships hold:
$$Y_t(1) = y_t, \qquad \varepsilon_t(1) = \varepsilon_t, \qquad E\varepsilon_t^2(1) = 1.$$
Hence, when u is set to 1, models (6) and (7) are transformed into models (3) and (4), which combine the scaling model with the pure GARCH model.
In order to estimate the parameters, the scaling model utilizes a volatility proxy. This proxy reduces high-dimensional information to a single dimension. Furthermore, when the conditional mean is zero, the volatility proxy serves as an unbiased estimate of the conditional variance. Specifically, the volatility proxy H ( · ) is a statistic derived from intraday data and satisfies the following property of positive homogeneity:
$$H(\rho Y_t(u)) = \rho H(Y_t(u)) > 0, \qquad \rho > 0.$$
When $t$ is fixed, $v_t$ is a constant. By applying the homogeneity property of $H(\cdot)$, it can be observed that
$$H(Y_t(u)) = H(v_t \tau \varepsilon_t(u)) = v_t \tau H(\varepsilon_t(u)).$$
For convenience, let $H_t \equiv H(Y_t(u))$, $\mu_H \equiv \{E(H^2(\varepsilon_t(u)))\}^{1/2}$, $\tau_H \equiv \mu_H\tau$, and $\varepsilon_t^* \equiv H(\varepsilon_t(u))/\mu_H$. Then, the volatility proxy GARCH model has the following structure:
$$H_t = v_t \tau_H \varepsilon_t^*, \qquad (8)$$
$$v_t^2 = 1 + \gamma y_{t-1}^2 + \beta v_{t-1}^2, \qquad (9)$$
where $\{\varepsilon_t^*\}$ is also an i.i.d. sequence satisfying $E\varepsilon_t^{*2} = 1$, and $(\tau_H, \gamma, \beta)$ is the parameter vector of models (8) and (9) that needs to be estimated. For simplicity, models (8) and (9) are referred to as the VP-GARCH(1,1) model.
In the case of ARCH(q), the return equation aligns with Formula (8), while the conditional variance equation is expressed as:
$$v_t^2 = 1 + \gamma_1 y_{t-1}^2 + \gamma_2 y_{t-2}^2 + \cdots + \gamma_q y_{t-q}^2. \qquad (10)$$
Similarly, the parameter vector $(\tau_H, \gamma_1, \gamma_2, \ldots, \gamma_q)$ contains the parameters of models (8) and (10), which require estimation. Models (8) and (10) are referred to as the VP-ARCH(q) model.
The Gaussian quasi-maximum likelihood estimation (QMLE) [9] is employed to estimate the parameters of models (8) and (9) and models (8) and (10). The QMLE of θ is defined as:
$$\tilde{\theta} = \arg\min_{\theta \in \Theta} L_n(\theta), \qquad L_n(\theta) = \frac{1}{2}\sum_{t=1}^{n}\left[\log\left(v_t^2 \tau_H^2\right) + \frac{H_t^2}{v_t^2 \tau_H^2}\right]. \qquad (11)$$
To differentiate between them, the parameter vector estimate using low-frequency data is denoted by $\hat{\theta}$, while the parameter vector estimate using high-frequency data is denoted by $\tilde{\theta}$. The asymptotic normality of the parameter estimates for models (8) and (9) has been proven by Visser [9]. This conclusion can also be applied to models (8) and (10). Therefore, the following asymptotic normality can be obtained:
$$\sqrt{n}\left(\tilde{\theta} - \theta_0\right) \stackrel{d}{\longrightarrow} N\left(0,\ \mathrm{var}(\varepsilon_t^{*2})\,G^{-1}\right),$$
$$G = E\left(\frac{1}{\sigma_{H,t}^4}\frac{\partial\sigma_{H,t}^2}{\partial\theta}\frac{\partial\sigma_{H,t}^2}{\partial\theta'}\right), \qquad \sigma_{H,t} = v_t\tau_H.$$
In particular, when $\tau_H$ is known, let $\eta = (\gamma, \beta)$ be the parameter vector for models (8) and (9), and let $\eta = (\gamma_1, \ldots, \gamma_q)$ denote the parameter vector for models (8) and (10). Then
$$G = \mathrm{cov}\left(\frac{1}{v_t^2}\frac{\partial v_t^2}{\partial\eta},\ \frac{1}{v_t^2}\frac{\partial v_t^2}{\partial\eta}\right).$$
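For illustration, the following R sketch implements the Gaussian quasi-likelihood in (11) for the VP-GARCH(1,1) model and minimizes it with optim(). It is a minimal sketch, not the authors' code; the function names, the simple initialization of $v_1^2$, and the starting values are assumptions.

```r
# Gaussian quasi-log-likelihood L_n(theta) of (11) for the VP-GARCH(1,1)
# model (8)-(9); y = daily log-returns, H = daily volatility proxy.
vp_garch11_obj <- function(par, y, H) {
  tauH <- par[1]; gam <- par[2]; beta <- par[3]
  if (tauH <= 0 || gam <= 0 || beta < 0 || beta >= 1) return(1e10)  # crude constraint handling
  n  <- length(y)
  v2 <- numeric(n)
  v2[1] <- 1                                  # simple initialization (assumption)
  for (t in 2:n) v2[t] <- 1 + gam * y[t - 1]^2 + beta * v2[t - 1]
  s2 <- v2 * tauH^2                           # sigma_{H,t}^2 = v_t^2 * tau_H^2
  0.5 * sum(log(s2) + H^2 / s2)               # L_n(theta), to be minimized
}

# QMLE theta_tilde = argmin L_n(theta); starting values are illustrative.
vp_garch11_fit <- function(y, H, start = c(tauH = 1, gamma = 0.1, beta = 0.6)) {
  optim(start, vp_garch11_obj, y = y, H = H, method = "Nelder-Mead")
}
```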

3. Portmanteau Test

3.1. Traditional Portmanteau Test

The portmanteau test is employed to evaluate the adequacy of the model’s fit. This statistical test is constructed based on the squared residual autocorrelation function. In cases where the volatility model is inadequate, a certain level of correlation between the squared residual terms exists.
In this paper, the null hypothesis is that the squared residual autocorrelations are zero, indicating that the hypothesized model is adequate. The sample squared residual autocorrelation function $\hat{r}_k$ is calculated as follows:
$$\hat{r}_k = \frac{\sum_{t=k+1}^{n}\left(y_t^2/\hat{\sigma}_t^2 - 1\right)\left(y_{t-k}^2/\hat{\sigma}_{t-k}^2 - 1\right)}{\sum_{t=1}^{n}\left(y_t^2/\hat{\sigma}_t^2 - 1\right)^2}, \qquad k = 1, 2, 3, \ldots$$
According to the central limit theorem, it can be proven that $\hat{r}_k$ is asymptotically normal under the null hypothesis. To obtain the test statistic, a finite vector of autocorrelations $\hat{r}_M = (\hat{r}_1, \hat{r}_2, \ldots, \hat{r}_m)'$ is considered, where $m$ is the maximum lag order of $\hat{r}_k$. Let $D$ denote the asymptotic variance of $\sqrt{n}\,\hat{r}_M$. The portmanteau test statistic $Q^2$ can be formulated as:
$$Q^2 = n\,\hat{r}_M'\hat{D}^{-1}\hat{r}_M. \qquad (12)$$
Under Assumptions 3 and 4, $\frac{1}{n}\sum_{t=1}^{n}\left(y_t^2/\hat{\sigma}_t^2 - 1\right)^2$ converges in probability to the constant $E(\varepsilon_t^2 - 1)^2$. Here, $E(\varepsilon_t^2 - 1)^2$ can be estimated by $\hat{C}_0$, where
$$\hat{C}_0 = \frac{1}{n}\sum_{t=1}^{n}\frac{y_t^4}{\hat{\sigma}_t^4} - 1.$$
Therefore, only the asymptotic distribution of $\hat{C}_k$ needs to be considered, where
$$\hat{C}_k = \frac{1}{n}\sum_{t=k+1}^{n}\left(\frac{y_t^2}{\hat{\sigma}_t^2} - 1\right)\left(\frac{y_{t-k}^2}{\hat{\sigma}_{t-k}^2} - 1\right), \qquad k = 1, 2, \ldots, m. \qquad (13)$$
Thus, Formula (12) can be rewritten as
$$Q^2 = n\,\hat{C}_M'\hat{V}^{-1}\hat{C}_M \stackrel{d}{\longrightarrow} \chi^2(m),$$
where $\hat{C}_M = (\hat{C}_1, \hat{C}_2, \ldots, \hat{C}_m)'$, and $V$ is the asymptotic variance of $\sqrt{n}\,\hat{C}_M$.
The statistic $Q^2$ asymptotically follows a chi-square distribution with $m$ degrees of freedom. At the 0.05 significance level, if the calculated value exceeds the 95% quantile $\chi^2_{0.95}(m)$, the null hypothesis is rejected. Conversely, if the calculated value is below the quantile, the null hypothesis is not rejected, indicating that the model can be considered adequate.
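As a minimal sketch (not the authors' code), the quantities $\hat{r}_k$, $\hat{C}_k$ and $\hat{C}_0$ above can be computed in R from the returns and the fitted conditional variances as follows; estimating the full covariance matrix $\hat{V}$ in the statistic additionally requires the derivative terms of Li and Mak (1994) [22] and is not shown here.

```r
# Building blocks of the portmanteau test: r_hat_k, C_hat_k, C_hat_0.
# y = daily log-returns, sigma2 = fitted conditional variances, m = max lag.
pm_blocks <- function(y, sigma2, m = 6) {
  n  <- length(y)
  e2 <- y^2 / sigma2 - 1                          # y_t^2 / sigma_t^2 - 1
  C0 <- mean(y^4 / sigma2^2) - 1                  # C_hat_0
  Ck <- sapply(1:m, function(k)
    sum(e2[(k + 1):n] * e2[1:(n - k)]) / n)       # C_hat_k = (1/n) * sum(...)
  rk <- n * Ck / sum(e2^2)                        # r_hat_k
  list(C0 = C0, Ck = Ck, rk = rk)
}
```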

3.2. Portmanteau Test Using High-Frequency Data

From Equation (13), we can observe that the estimate $\hat{C}_k$ depends on the estimate $\hat{\sigma}_t^2$, and $\hat{\sigma}_t^2$ is a function of $\theta$. Given that the estimator $\tilde{\theta}$ is obtained using high-frequency data, the volatility estimator $\tilde{\sigma}_t^2(\tilde{\theta})$ can be easily obtained. Additionally, the statistic $\tilde{C}_k(\tilde{\theta})$ can be calculated as follows:
$$\tilde{C}_k(\tilde{\theta}) = \frac{1}{n}\sum_{t=k+1}^{n}\left(\frac{y_t^2}{\tilde{\sigma}_t^2(\tilde{\theta})} - 1\right)\left(\frac{y_{t-k}^2}{\tilde{\sigma}_{t-k}^2(\tilde{\theta})} - 1\right), \qquad k = 1, 2, \ldots$$
Similarly,
$$\tilde{C}_0 = \frac{1}{n}\sum_{t=1}^{n}\frac{y_t^4}{\tilde{\sigma}_t^4} - 1.$$
Furthermore, the asymptotic distribution of the estimator $\tilde{\theta}$ differs from that of the estimator $\hat{\theta}$, a difference that further affects the asymptotic variance of $\tilde{C}_k$. Let $\tilde{V}_1$ denote the modified variance estimator. The following theorem can then be derived.
Theorem 1.
If Assumptions 1–5 are satisfied, then under the null,
$$\tilde{Q}^2 = n\,\tilde{C}_M'\tilde{V}_1^{-1}\tilde{C}_M \stackrel{d}{\longrightarrow} \chi^2(m),$$
where $\tilde{C}_M = (\tilde{C}_1, \tilde{C}_2, \ldots, \tilde{C}_m)'$, $\tilde{V}_1 = \tilde{C}_0^2 I_M + \tilde{C}_H\tilde{X}\tilde{G}^{-1}\tilde{X}' - 2\tilde{C}_{H,0}\tilde{X}\tilde{G}^{-1}\tilde{X}_H'$,
$$\tilde{C}_H = \frac{1}{n}\sum_{t=1}^{n}\frac{H_t^4}{\tilde{\sigma}_{H,t}^4} - 1, \qquad \tilde{C}_{H,0} = \frac{1}{n}\sum_{t=1}^{n}\left(\frac{H_t^2}{\tilde{\sigma}_{H,t}^2} - 1\right)\left(\frac{y_t^2}{\tilde{\sigma}_t^2} - 1\right),$$
$$\tilde{X} = (\tilde{X}_1, \tilde{X}_2, \ldots, \tilde{X}_m)', \qquad \tilde{X}_H = (\tilde{X}_{H,1}, \tilde{X}_{H,2}, \ldots, \tilde{X}_{H,m})',$$
$$\tilde{X}_k = \frac{1}{n}\sum_{t=k+1}^{n}\frac{1}{\tilde{\sigma}_t^2}\frac{\partial\sigma_t^2}{\partial\theta}\left(\frac{y_{t-k}^2}{\tilde{\sigma}_{t-k}^2} - 1\right), \qquad \tilde{X}_{H,k} = \frac{1}{n}\sum_{t=1}^{n}\frac{1}{\tilde{\sigma}_{H,t}^2}\frac{\partial\sigma_{H,t}^2}{\partial\theta}\left(\frac{y_{t-k}^2}{\tilde{\sigma}_{t-k}^2} - 1\right), \qquad k = 1, 2, \ldots, m.$$
The proof of Theorem 1 is presented in Appendix A.2.
Indeed, the presence of the parameter $\tau_H$ poses challenges in obtaining the QMLE $\tilde{\theta}$ in practical applications. However, these challenges can be overcome if an appropriate volatility proxy $H(\cdot)$ is identified. When the volatility proxy $H(\cdot)$ satisfies $EH^2(\varepsilon_t(u)) = 1$, indicating $\mu_H = 1$, then $\tau = \tau_H$. Assuming $\mu_H = 1$, we can establish the following lemma.
Lemma 1.
If Assumptions 1–6 are satisfied, then under the null,
$$\tilde{Q}^2 = n\,\tilde{C}_M'\tilde{V}_2^{-1}\tilde{C}_M \stackrel{d}{\longrightarrow} \chi^2(m),$$
where $\tilde{V}_2 = \tilde{C}_0^2 I_M + (\tilde{C}_H - 2\tilde{C}_{H,0})\tilde{X}\tilde{G}^{-1}\tilde{X}'$.
The proof of Lemma 1 is also provided in Appendix A.2.
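The statistic of Lemma 1 is straightforward to assemble once its ingredients are available. The R sketch below is a minimal illustration under the assumption that $\tilde{C}_M$, $\tilde{C}_0$, $\tilde{C}_H$, $\tilde{C}_{H,0}$, $\tilde{X}$ and $\tilde{G}$ have already been computed from the fitted volatility proxy model; all names are illustrative.

```r
# Modified portmanteau statistic Q_tilde^2 of Lemma 1 and its p-value.
# Cm: m-vector of C_tilde_k; C0, CH, CH0: scalars; X: m x p matrix whose
# k-th row is X_tilde_k; G: p x p matrix G_tilde; n: sample size.
portmanteau_hf <- function(n, Cm, C0, CH, CH0, X, G) {
  m  <- length(Cm)
  V2 <- C0^2 * diag(m) + (CH - 2 * CH0) * X %*% solve(G) %*% t(X)
  Q  <- n * drop(t(Cm) %*% solve(V2) %*% Cm)
  c(statistic = Q, p.value = pchisq(Q, df = m, lower.tail = FALSE))
}
```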

4. Simulation

In this section, the finite-sample performance of the proposed method is examined through Monte Carlo simulations [28]. All data generation, computation of results, and plotting in this section were carried out in R.
In practical applications, the log-return series $Y_t(u)$ can be calculated from the stock price. In the simulation, however, prior to generating $Y_t(u)$, it is necessary to generate the high-frequency residual sequences. Following Visser [9], the high-frequency residual sequences $\varepsilon_t(u)$ can be generated using the stationary Ornstein–Uhlenbeck process [29]:
$$d\Gamma_t(u) = -\delta\left(\Gamma_t(u) - \mu_\Gamma\right)du + \sigma_\Gamma\,dB_t^{(2)}(u),$$
$$d\varepsilon_t(u) = \exp\left(\Gamma_t(u)\right)dB_t^{(1)}(u), \qquad u \in [0, 1],$$
where $dB_t^{(1)}$ and $dB_t^{(2)}$ are independent Brownian motions [30]. The initial value $\varepsilon_t(0)$ is set to 0, and $\Gamma_t(0)$ is generated from the stationary distribution $N(\mu_\Gamma, \sigma_\Gamma^2/(2\delta))$. To mimic the Chinese stock exchange market, the interval [0, 1] is divided into 240 equal subintervals, representing each minute of intraday trading. The values of $\mu_\Gamma$, $\sigma_\Gamma$, and $\delta$ are set to
$$\delta = \frac{1}{2}, \qquad \sigma_\Gamma = \frac{1}{4}, \qquad \mu_\Gamma = -\frac{1}{16}.$$
This ensures that the expected value of $\varepsilon_t^2(1)$ is equal to 1 [9]. The calculation of $Y_t(u)$ is based on the given parameter vector $\theta$ using Equations (6), (7) and (10). For ARCH(2), set $\eta = (0.6, 0.3)$ and $\eta = (0.4, 0.25)$. For VP-GARCH(1,1), set $\eta = (0.1, 0.6)$ and $\eta = (0.25, 0.5)$. The corresponding equations are as follows:
$$v_t^2 = 1 + 0.6\,y_{t-1}^2 + 0.3\,y_{t-2}^2, \qquad (15)$$
$$v_t^2 = 1 + 0.4\,y_{t-1}^2 + 0.25\,y_{t-2}^2, \qquad (16)$$
$$v_t^2 = 1 + 0.1\,y_{t-1}^2 + 0.6\,v_{t-1}^2, \qquad (17)$$
$$v_t^2 = 1 + 0.25\,y_{t-1}^2 + 0.5\,v_{t-1}^2. \qquad (18)$$
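A minimal R sketch of this data-generating process is given below, using an Euler discretization of the Ornstein–Uhlenbeck log-volatility on the 240 one-minute steps and the ARCH(2) specification (15). It is an illustration only; in particular, the scale $\tau$ is not reported above and is set to 1 here, and the zero initialization of the pre-sample returns is an assumption.

```r
# Simulate n days of intraday returns Y_t(u) from the scaling ARCH(2) model:
# dGamma = -delta (Gamma - muG) du + sigmaG dB2,  d eps = exp(Gamma) dB1,
# v_t^2 = 1 + g1 y_{t-1}^2 + g2 y_{t-2}^2,  Y_t(u) = v_t * tau * eps_t(u).
simulate_days <- function(n, g = c(0.6, 0.3), tau = 1,
                          delta = 1/2, sigmaG = 1/4, muG = -1/16, m_intra = 240) {
  du <- 1 / m_intra
  Y  <- matrix(NA_real_, n, m_intra + 1)      # Y_t(u) on the grid u_0, ..., u_240
  y  <- numeric(n)                            # daily returns y_t = Y_t(1)
  for (t in 1:n) {
    G <- numeric(m_intra + 1)
    G[1] <- rnorm(1, muG, sigmaG / sqrt(2 * delta))   # stationary start
    for (i in 1:m_intra)
      G[i + 1] <- G[i] - delta * (G[i] - muG) * du + sigmaG * sqrt(du) * rnorm(1)
    eps <- c(0, cumsum(exp(G[1:m_intra]) * sqrt(du) * rnorm(m_intra)))  # eps_t(u), eps_t(0) = 0
    y1 <- if (t > 1) y[t - 1] else 0          # pre-sample returns set to 0 (assumption)
    y2 <- if (t > 2) y[t - 2] else 0
    v2 <- 1 + g[1] * y1^2 + g[2] * y2^2       # model (15)
    Y[t, ] <- sqrt(v2) * tau * eps            # Y_t(u) = v_t * tau * eps_t(u)
    y[t] <- Y[t, m_intra + 1]                 # y_t = Y_t(1)
  }
  list(Y = Y, y = y)
}
```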
Finally, the realized volatility (RV) is selected as the volatility proxy. Three sampling frequencies are considered: 5 min, 15 min, and 30 min, denoted as RV5, RV15, and RV30, respectively. Taking RV15 as an example, the formula is
$$H_t = RV15_t = \left(\sum_{i=1}^{16}\left[Y_t(u_{15i}) - Y_t(u_{15(i-1)})\right]^2\right)^{1/2}.$$
Similarly, for the high-frequency residual $\varepsilon_t(u)$, the formula is
$$H(\varepsilon_t(u)) = \left(\sum_{i=1}^{16}\left[\varepsilon_t(u_{15i}) - \varepsilon_t(u_{15(i-1)})\right]^2\right)^{1/2}.$$
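A corresponding R sketch of the realized-volatility proxy (here for one day's path on the 241-point grid $u_0, u_1, \ldots, u_{240}$) is shown below; the row-per-day layout of the matrix returned by the simulate_days() sketch above is an assumption.

```r
# RV proxy for one intraday path: sqrt of the sum of squared step-minute returns.
rv_proxy <- function(path, step = 15) {
  idx <- seq(1, length(path), by = step)      # u_0, u_step, u_{2*step}, ...
  sqrt(sum(diff(path[idx])^2))
}

# Example: RV15 and RV5 for every simulated day.
# sim <- simulate_days(300)
# H15 <- apply(sim$Y, 1, rv_proxy, step = 15)
# H5  <- apply(sim$Y, 1, rv_proxy, step = 5)
```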
To evaluate the performance of the models, let $n$ denote the sample size; four sample sizes are considered: 200, 300, 400, and 500. For each model, 1000 independent replications are generated. Then, the root mean squared error (RMSE) of the parameter estimates can be calculated. The formula is as follows:
$$RMSE(\hat{\eta}) = \sqrt{\frac{1}{1000}\sum_{i=1}^{1000}\left(\hat{\eta}_i - \eta\right)^2},$$
where $\hat{\eta}_i$ is the parameter estimate from the $i$-th replication and $\eta$ is the true value of the parameter. The RMSE results are presented in Table 1.
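For reference, a one-line R version of this RMSE computation (assuming est is a 1000-row matrix of estimates and true is the vector of true parameter values) is:

```r
rmse <- function(est, true) sqrt(colMeans(sweep(est, 2, true)^2))
```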
The $|y_t|$ in Table 1 represents the daily model, where $H_t = |y_t|$ is calculated using daily closing prices. Table 1 clearly shows that the estimation results obtained from the intraday models, which use high-frequency data, outperform those of the daily model.
Additionally, it is necessary to examine the distribution of the statistic and compare its performance with the daily model. Therefore, Table 2 presents the empirical size values for this purpose.
We set $m = 6$ and calculate the empirical size as the proportion of rejections based on the 95th percentile of $\chi^2(6)$. The results are presented in Table 2. It is evident that, as the sample size increases, the results of the intraday models are closer to 0.05 than those of the daily model, suggesting that introducing high-frequency data can enhance the accuracy of the test.
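In R, this empirical size is simply the rejection frequency over the replications; a minimal sketch, assuming Q_rep holds the 1000 simulated test statistics, is:

```r
emp_size <- mean(Q_rep > qchisq(0.95, df = 6))   # proportion exceeding the 95th percentile of chi^2(6)
```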
Regarding the power, we define the alternative hypotheses for ARCH(2) and GARCH(1,1) as follows:
$$H_1:\ v_t^2 = 1 + \gamma_1 y_{t-1}^2 + \gamma_2 y_{t-2}^2 + \gamma_3 y_{t-3}^2,$$
$$H_1:\ v_t^2 = 1 + \gamma_1 y_{t-1}^2 + \gamma_2 y_{t-2}^2 + \beta_1 v_{t-1}^2,$$
where $\gamma_1$ and $\gamma_2$ are fixed at the values in models (15) and (16), respectively, while $\gamma_1$ and $\beta_1$ are fixed at the values in models (17) and (18), respectively. The parameters $\gamma_3$ (in the ARCH case) and $\gamma_2$ (in the GARCH case) are then varied to examine their impact on the power.
Figure 1 displays the power results of the ARCH(2) model, while Figure 2 presents the power results of the GARCH(1,1) model. The results for models (16) and (18) can be found in Appendix B (Figure A1 and Figure A2). From these figures, it is evident that the power curves of the intraday models exhibit clear distinctions from that of the daily model, although this effect diminishes as the sample size increases. Additionally, upon comparing the power of the four models, we observe that the ARCH model demonstrates a more pronounced power.

5. Application

In our analysis, we focus on three stock indices: the CSI 300, SSE 50, and CSI 500. The data cover the period from 2 January 2004 to 6 June 2019. After deleting missing data, we obtained a dataset consisting of 2610 consecutive days (8 April 2005 to 31 December 2015) for the CSI 300 index, 2856 consecutive days (2 January 2004 to 13 October 2015) for the SSE 50 index, and 2124 consecutive days (15 January 2007 to 13 October 2015) for the CSI 500 index. The calculations and figures presented in this section are generated using the R programming language.
To compute the high-frequency log-return Y t ( u ) [9], the following formula is employed:
$$Y_t(u) = \left[\log P_t(u) - \log P_{t-1}(u)\right] \times 100, \qquad u \in [0, 1],$$
where $P_t(u)$ denotes the trading price at intraday time $u$ (within the 240 trading minutes) of day $t$, and $Y_t(1) = y_t$ denotes the daily log-return based on the closing price of day $t$. The log-return $Y_t(1)$ values for the three indices are depicted in Figure 3.
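A minimal R sketch of this computation is given below; the layout of the price matrix P (one row per day, columns for the intraday grid $u_0, \ldots, u_{240}$) is an assumption, not something specified above.

```r
# Intraday log-returns Y_t(u) = 100 * [log P_t(u) - log P_{t-1}(u)]
Y <- 100 * (log(P[-1, , drop = FALSE]) - log(P[-nrow(P), , drop = FALSE]))
y <- Y[, ncol(Y)]                                # daily log-returns y_t = Y_t(1)
```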
From Figure 3, it is evident that all three samples exhibit significant heteroscedasticity and fluctuate around 0. Therefore, it is worth considering the use of pure ARCH or GARCH models.
However, an estimation challenge arises in the process, specifically in estimating the parameter μ H . To overcome this, we assume μ H = 1 , which leads to τ H = τ . Noting that ω = τ 2 , we can obtain an estimator for τ 2 in the daily model. It implies that estimating intraday models will depend on daily models.
We intend to fit the data with the ARCH(2) model first; the portmanteau test statistics using low-frequency data are shown in Table 3.
Choose $m = 6$. Obviously, the results for all three samples are significantly larger than $\chi^2_{0.95}(6) = 12.5916$, leading to the rejection of the null hypothesis. Therefore, the ARCH(2) model is deemed inadequate. To identify a more suitable model, we examine the residuals of these models and observe the log-return figures. Notably, the log-return of the CSI 500 exhibits relatively less fluctuation over certain periods, indicating the potential need for a higher-order ARCH model. Hence, we consider the GARCH(1,1) model. The parameter estimates are presented in Table 4.
Before calculating the test statistic, it is necessary to consider the hypothesis $\mu_H = 1$, which implies that $E(H^2(\varepsilon_t)) = 1$. To validate the hypothesis, an estimate of $E(H^2(\varepsilon_t))$ can be calculated using the following expression:
$$\widehat{E\left(H^2(\varepsilon_t)\right)} = \frac{1}{n}\sum_{t=1}^{n}\frac{H_t^2}{\hat{v}_t^2\hat{\tau}^2}.$$
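A one-line R version of this check (assuming H, v2_hat and tau2_hat are taken from the fitted VP-GARCH(1,1) model) is:

```r
EH2_hat <- mean(H^2 / (v2_hat * tau2_hat))       # should be close to 1 if the proxy is suitable
```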
The calculation results are reported in Table 5.
The results in Table 5 show that the estimators of $E(H^2(\varepsilon_t))$ are all close to 1, suggesting that an appropriate volatility proxy has been identified. With this in mind, it is straightforward to proceed with the calculation of the portmanteau test statistics. The specific results are shown in Table 6.
At a 5% significance level, the critical value for the rejection region is χ 0.95 2 ( 6 ) = 12.5916 . It is important to note that the null hypothesis of our test is that the model fitting is adequate, while the alternative hypothesis suggests inadequate model fitting. In the portmanteau test, a higher value of the test statistic indicates a greater likelihood of rejecting the null hypothesis, implying inadequate model fitting.
From Table 6, it can be observed that, for the daily model, all of the portmanteau test statistics for the three stock indices fall within the acceptance region. However, for the intraday models, except for the CSI 500 index, the test statistics for the other two indices fall within the rejection region.
Furthermore, an interesting phenomenon emerges. When the intraday models reject the null hypothesis, the values of the portmanteau test statistic differ markedly from those of the daily model; when the intraday models accept the null hypothesis, the difference between the two is not significant. In other words, a model that the daily test deems adequate may still be rejected by the intraday test, whereas a model rejected by the daily test will also be rejected by the intraday test. This suggests that the daily model could serve as a boundary model: in practical terms, when the daily model is inadequate, there is no need to consider the intraday model further.
The fact that the intraday models reject the null hypothesis while the daily model accepts it is a noteworthy issue that warrants further study. To facilitate this analysis, the estimated volatility curves and residual scatter plots are shown in Figure 4 and Figure 5.
Since the results of the different high-frequency volatility proxies (RV30, RV15, RV5) are similar and their curves overlap, the model with $\widehat{E(H^2(\varepsilon_t))}$ closest to 1 is selected. Figure 4 illustrates that the estimated volatility curve derived from high-frequency data exhibits greater fluctuations, indicating its ability to capture more information. A similar pattern can be observed for the SSE 50 index, as shown in Appendix B (Figure A3).
As can be seen from Figure 5, the residuals of the low-frequency model are mainly concentrated within the range of [−3, 3], whereas the residuals of the high-frequency model are primarily concentrated within the range of [−2.5, 2.5]. However, the results also indicate a certain degree of heteroscedasticity. A similar result can be observed for the SSE 50 index, as shown in Appendix B (Figure A4).

6. Discussion

In this study, we aimed to propose a portmanteau test suitable for ARCH models based on high-frequency data. Based on the asymptotic properties of the QMLE for ARCH-type models with high-frequency data, we developed a new portmanteau test.
Firstly, we constructed the modified portmanteau test statistic in this paper using the vector of residual autocorrelation functions and its variance obtained from the QMLE based on high-frequency information. Through the application of the law of large numbers, central limit theorem and Taylor expansion, we proved that this statistic follows a chi-square distribution. The specific form of this statistic was provided for cases where the high-frequency redundant parameters are both known and unknown, as outlined in Theorem 1 and Lemma 1.
Secondly, the simulation results regarding the size of the test provide evidence that the modified test statistic asymptotically follows a chi-square distribution when the chosen model is adequate. It is evident from the fact that the size of the modified test statistic, based on high-frequency information, approaches 0.05. In other words, the proportion of this test statistic exceeding the 0.95 quantile of the derived chi-square distribution is closer to 0.05. Furthermore, the power results from the simulation demonstrate that the modified test statistic is more effective in rejecting the model when it is inadequate and the sample size is small. In conclusion, the modified test statistic improved identification of the adequacy of ARCH-type models.
Furthermore, empirical studies have provided evidence supporting the applicability of the modified portmanteau test. The test results for the three indices indicate that when the test statistic based on low-frequency data accepts the null hypothesis, the test statistic based on high-frequency information does not always accept the null hypothesis. The discrepancy suggests a difference between the tests based on high-frequency information and those based on low-frequency data. Additionally, by examining the residual plots, it becomes evident that the model test results based on high-frequency data are more reasonable.
However, despite the numerous advantages of the modified portmanteau test, there are several challenges and barriers that need to be addressed. Firstly, ARCH-type models based on high-frequency data often include the redundant parameter μ H . In existing studies, estimating this redundant parameter μ H relies on the estimation results obtained from low-frequency data. Secondly, the modified test statistic based on high-frequency data is more intricate compared to the one based on low-frequency data, requiring additional computational steps. Specifically, the derivation of the modified test statistic becomes feasible when the asymptotic properties of parameter estimation for more complex ARCH-type models are established. It implies a wider applicability of the modified portmanteau test. However, the paper primarily focuses on simpler ARCH-type models, and the study of more complex models or other types of models involving high-frequency data remains unexplored. These areas will be explored in future studies.

7. Conclusions

In conclusion, the modified portmanteau test statistic provides a new approach to testing the goodness of fit of ARCH-type models. This statistic builds upon the principles of the traditional test statistic and the asymptotic properties of the QMLE based on high-frequency data. The test statistic takes into account a redundant parameter in ARCH-type models, which arises from the normalization of the high-frequency residuals and is not present in the traditional portmanteau test. In spite of this redundant parameter, the modified test statistic has been proven to follow a chi-square distribution.
Furthermore, the simulation study confirms that the modified portmanteau test follows a chi-square distribution. The size and power results indicate that the test based on high-frequency data outperforms the test based on low-frequency data in assessing model adequacy. In practical applications, the modified test based on high-frequency data consistently performs well. A comparison of the results from the three indices reveals that the results of tests based on high-frequency data sometimes differ from those based on low-frequency data. Overall, the test based on high-frequency data is more effective in identifying cases of incorrect model selection.
Lastly, it is worth noting that the applicability of the modified portmanteau test extends beyond the simple ARCH-type models examined in this paper. Other ARCH-type models, such as TGARCH and EGARCH models, can also benefit from this portmanteau test. However, the current study focuses solely on the use of simple ARCH-type models. It is important to recognize that leverage effects are prevalent in financial assets, and ARCH-type models capable of capturing such effects should be taken into consideration. We will leave this extension as a task of future study.

Author Contributions

Conceptualization, X.Z.; Formal analysis, X.Z. and C.D.; Funding acquisition, X.Z. and C.D.; Investigation, Y.C.; Methodology, Y.C., X.Z. and C.D.; Project administration, X.Z.; Resources, Y.C.; Validation, Y.L.; Visualization, Y.L.; Writing—original draft, Y.C.; Writing—review and editing, C.D. All authors have read and agreed to the published version of the manuscript.

Funding

The work is partially supported by Guangdong Basic and Applied Basic Research Foundation 2022A1515010046 (Xingfa Zhang), Funding by Science and Technology Projects in Guangzhou SL2022A03J00654 (Xingfa Zhang), 202201010276 (Zefang Song), and Youth Joint Project of Department of Science and Technology in Guangdong Province of China, 2022A1515110009 (Chunliang Deng).

Data Availability Statement

The data used in this study are not publicly available due to confidentiality requirements imposed by the data collection agency.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ARCH: autoregressive conditional heteroscedasticity model
GARCH: generalized autoregressive conditional heteroscedasticity model
QMLE: Gaussian quasi-maximum likelihood estimation
CSI 300: China Securities Index 300, also called HuShen 300
SSE 50: Shanghai Stock Exchange 50 Index
CSI 500: China Securities Index 500
GJR: model named after its proponents Glosten, Jagannathan and Runkle
QMELE: quasi-maximum exponential likelihood estimation
VP-GARCH(1,1): volatility proxy GARCH(1,1) model
VP-ARCH(q): volatility proxy ARCH(q) model
RV: realized volatility
RMSE: root mean square error

Appendix A

Appendix A.1. Assumption

Assumption A1.
Given the initial observations $\{y_0, y_{-1}, y_{-2}, \ldots\}$, $\Theta$ denotes the parameter space, and the parameter $\theta$ belongs to the interior of the compact set $\Theta$. Let $\theta = (\tau_H, \gamma, \beta)$ be the parameter for models (8) and (9), and let $\theta = (\tau_H, \gamma_1, \ldots, \gamma_q)$ be the parameter for models (8) and (10). $\theta_0 = (\tau_{H0}, \gamma_0, \beta_0)$ and $\theta_0 = (\tau_{H0}, \gamma_{10}, \ldots, \gamma_{q0})$ denote their true values, respectively.
Assumption A2.
$\tau_H > 0$, $\gamma > 0$, $\beta \in [0, 1)$, $\gamma_i > 0$, $i = 1, 2, \ldots, q$.
Assumption A3.
The sequence { ε t } is i.i.d. with zero mean and unit variance. The sequence { ε t * } is also i.i.d.
Assumption A4.
$E\varepsilon_t^4 < \infty$, $E\varepsilon_t^{*4} < \infty$.
Assumption A5.
For models (8) and (9), $\gamma_0\tau_0^2 E(\varepsilon_t^{*2}) + \beta_0 < 1$; for models (8) and (10), $\gamma_{10}\tau_0^2 E(\varepsilon_t^{*2}) + \gamma_{20}\tau_0^2 E(\varepsilon_t^{*2}) + \cdots + \gamma_{q0}\tau_0^2 E(\varepsilon_t^{*2}) < 1$.
Note that under Assumption 5, Pan et al. (2008) [31] showed that the model we used admits a strictly stationary solution.
Assumption A6.
$EH^2(\varepsilon_t(u)) = 1$.

Appendix A.2. Proof

Proof of Theorem 1.
Since $\sqrt{n}\,r_M \stackrel{d}{\longrightarrow} N(0, I_M)$, then
$$\sqrt{n}\,C_M \stackrel{d}{\longrightarrow} N(0, C_0^2 I_M).$$
By the Taylor expansion, it follows that
$$\tilde{C}(\tilde{\theta}) \approx C(\theta_0) + \frac{\partial C}{\partial\theta'}(\tilde{\theta} - \theta_0),$$
$$\frac{\partial C_k}{\partial\theta} = -\frac{1}{n}\sum_{t=k+1}^{n}\frac{y_t^2}{\sigma_t^4}\frac{\partial\sigma_t^2}{\partial\theta}\left(\frac{y_{t-k}^2}{\sigma_{t-k}^2} - 1\right) - \frac{1}{n}\sum_{t=k+1}^{n}\left(\frac{y_t^2}{\sigma_t^2} - 1\right)\frac{y_{t-k}^2}{\sigma_{t-k}^4}\frac{\partial\sigma_{t-k}^2}{\partial\theta}.$$
Since $\frac{1}{n}\sum_{t=k+1}^{n}\left(\frac{y_t^2}{\sigma_t^2} - 1\right)\frac{y_{t-k}^2}{\sigma_{t-k}^4}\frac{\partial\sigma_{t-k}^2}{\partial\theta} \to 0$ as $n \to \infty$, we have
$$\tilde{C}(\tilde{\theta}) \approx C(\theta_0) - X(\tilde{\theta} - \theta_0),$$
$$X = (X_1, X_2, \ldots, X_m)',$$
$$X_k = \frac{1}{n}\sum_{t=k+1}^{n}\frac{1}{\sigma_t^2}\frac{\partial\sigma_t^2}{\partial\theta}\left(\frac{y_{t-k}^2}{\sigma_{t-k}^2} - 1\right), \qquad k = 1, 2, \ldots, m.$$
To obtain the asymptotic distribution of $\sqrt{n}\,\tilde{C}$, the key is to calculate the covariance between $\sqrt{n}\,X(\tilde{\theta} - \theta_0)$ and $\sqrt{n}\,C$. Before that, we first need to calculate $E((\tilde{\theta} - \theta_0)C')$.
Applying a Taylor expansion to the function $\partial L(\theta)/\partial\theta$ at $\tilde{\theta}$, it follows that
$$0 = \frac{\partial L(\tilde{\theta})}{\partial\theta} = \frac{\partial L(\theta_0)}{\partial\theta} + \frac{\partial^2 L(\tilde{\theta})}{\partial\theta\,\partial\theta'}(\tilde{\theta} - \theta_0),$$
then
$$\tilde{\theta} - \theta_0 = -\left(\frac{\partial^2 L(\tilde{\theta})}{\partial\theta\,\partial\theta'}\right)^{-1}\frac{\partial L(\theta_0)}{\partial\theta} \approx -(nG)^{-1}\frac{\partial L}{\partial\theta},$$
where $G = E\left(\frac{1}{\sigma_{H,t}^4}\frac{\partial\sigma_{H,t}^2}{\partial\theta}\frac{\partial\sigma_{H,t}^2}{\partial\theta'}\right)$ and $\sigma_{H,t} = v_t\tau_H$. Through simple calculations, we have
$$E\left((\tilde{\theta} - \theta_0)C'\right) = -E\left((nG)^{-1}\frac{\partial L}{\partial\theta}C'\right) = -\frac{1}{n}G^{-1}E\left(\frac{\partial L}{\partial\theta}C'\right).$$
According to Formula (11),
$$\frac{\partial L}{\partial\theta} = \sum_{t=1}^{n}\left(1 - \frac{H_t^2}{\sigma_{H,t}^2}\right)\frac{1}{\sigma_{H,t}^2}\frac{\partial\sigma_{H,t}^2}{\partial\theta},$$
then
$$E\left(\frac{\partial L}{\partial\theta}C_k\right) = \frac{1}{n}E\left\{\sum_{t=1}^{n}\left(1 - \frac{H_t^2}{\sigma_{H,t}^2}\right)\frac{1}{\sigma_{H,t}^2}\frac{\partial\sigma_{H,t}^2}{\partial\theta}\sum_{t=k+1}^{n}\left(\frac{y_t^2}{\sigma_t^2} - 1\right)\left(\frac{y_{t-k}^2}{\sigma_{t-k}^2} - 1\right)\right\} = -\frac{1}{n}E\left\{\sum_{t=1}^{n}\left(\frac{H_t^2}{\sigma_{H,t}^2} - 1\right)\frac{1}{\sigma_{H,t}^2}\frac{\partial\sigma_{H,t}^2}{\partial\theta}\left(\frac{y_t^2}{\sigma_t^2} - 1\right)\left(\frac{y_{t-k}^2}{\sigma_{t-k}^2} - 1\right)\right\} = -E\left\{\left(\frac{H_t^2}{\sigma_{H,t}^2} - 1\right)\left(\frac{y_t^2}{\sigma_t^2} - 1\right)\right\}E\left\{\frac{1}{n}\sum_{t=1}^{n}\frac{1}{\sigma_{H,t}^2}\frac{\partial\sigma_{H,t}^2}{\partial\theta}\left(\frac{y_{t-k}^2}{\sigma_{t-k}^2} - 1\right)\right\}.$$
Denote
$$C_{H,0} \equiv E\left\{\left(\frac{H_t^2}{\sigma_{H,t}^2} - 1\right)\left(\frac{y_t^2}{\sigma_t^2} - 1\right)\right\}, \qquad X_{H,k} \equiv \frac{1}{n}\sum_{t=1}^{n}\frac{1}{\sigma_{H,t}^2}\frac{\partial\sigma_{H,t}^2}{\partial\theta}\left(\frac{y_{t-k}^2}{\sigma_{t-k}^2} - 1\right).$$
Then
$$E\left(\frac{\partial L}{\partial\theta}C_k\right) = -C_{H,0}X_{H,k},$$
$$\mathrm{cov}\left(\sqrt{n}\,X(\tilde{\theta} - \theta_0),\ \sqrt{n}\,C\right) = C_{H,0}\,XG^{-1}X_H'.$$
Owing to
$$\mathrm{var}(\varepsilon_t^{*2}) = E\left(\frac{H_t^2}{\sigma_{H,t}^2} - 1\right)^2 = E\left(\frac{H_t^4}{\sigma_{H,t}^4}\right) - 1 \equiv C_H,$$
thus
$$\mathrm{var}\left(\sqrt{n}\,\tilde{C}_M\right) = \mathrm{var}\left(\sqrt{n}\,C_M\right) + \mathrm{var}\left(\sqrt{n}\,X(\tilde{\theta} - \theta_0)\right) - 2\,\mathrm{cov}\left(\sqrt{n}\,X(\tilde{\theta} - \theta_0),\ \sqrt{n}\,C_M\right) = C_0^2 I_M + C_H XG^{-1}X' - 2C_{H,0}XG^{-1}X_H' \equiv V_1.$$
Since $\sqrt{n}\,\tilde{C}_M \stackrel{d}{\longrightarrow} N(0, V_1)$, then
$$n\,\tilde{C}_M'V_1^{-1}\tilde{C}_M \stackrel{d}{\longrightarrow} \chi^2(m).$$
Similarly, we have
$$n\,\tilde{C}_M'\tilde{V}_1^{-1}\tilde{C}_M \stackrel{d}{\longrightarrow} \chi^2(m),$$
where
$$\tilde{V}_1 = \tilde{C}_0^2 I_M + \tilde{C}_H\tilde{X}\tilde{G}^{-1}\tilde{X}' - 2\tilde{C}_{H,0}\tilde{X}\tilde{G}^{-1}\tilde{X}_H', \qquad \tilde{C}_0 = \frac{1}{n}\sum_{t=1}^{n}\frac{y_t^4}{\tilde{\sigma}_t^4} - 1, \qquad \tilde{C}_H = \frac{1}{n}\sum_{t=1}^{n}\frac{H_t^4}{\tilde{\sigma}_{H,t}^4} - 1, \qquad \tilde{C}_{H,0} = \frac{1}{n}\sum_{t=1}^{n}\left(\frac{H_t^2}{\tilde{\sigma}_{H,t}^2} - 1\right)\left(\frac{y_t^2}{\tilde{\sigma}_t^2} - 1\right).$$
This completes the proof of Theorem 1. □
Proof of Lemma 1.
The proof of Lemma 1 is similar to that of Theorem 1, except that there is no need to define $X_H$. Following Visser (2011) [9], suppose $\tau$ and $\tau_H$ are known; then
$$\frac{1}{\sigma_{H,t}^2}\frac{\partial\sigma_{H,t}^2}{\partial\theta} = \frac{1}{\tau_H^2 v_t^2}\,\tau_H^2\frac{\partial v_t^2}{\partial\theta} = \frac{1}{v_t^2}\frac{\partial v_t^2}{\partial\theta} = \frac{1}{\tau^2 v_t^2}\,\tau^2\frac{\partial v_t^2}{\partial\theta} = \frac{1}{\sigma_t^2}\frac{\partial\sigma_t^2}{\partial\theta}.$$
If we assume $E(H^2(\varepsilon_t(u))) = 1$, which means $\mu_H = 1$, this condition can be weakened. Even if $\tau$ and $\tau_H$ are unknown, since $\tau_H = \tau$ in this case, we still have
$$\frac{1}{\sigma_{H,t}^2}\frac{\partial\sigma_{H,t}^2}{\partial\theta} = \frac{1}{\sigma_t^2}\frac{\partial\sigma_t^2}{\partial\theta}.$$
Thus
$$E\left(\frac{\partial L}{\partial\theta}C_k\right) = -C_{H,0}X_k,$$
$$\mathrm{cov}\left(\sqrt{n}\,X(\tilde{\theta} - \theta_0),\ \sqrt{n}\,C\right) = C_{H,0}\,XG^{-1}X'.$$
Owing to
$$\mathrm{var}(\varepsilon_t^{*2}) = E\left(\frac{H_t^2}{\sigma_{H,t}^2} - 1\right)^2 = E\left(\frac{H_t^4}{\sigma_{H,t}^4}\right) - 1 \equiv C_H,$$
then
$$\mathrm{var}\left(\sqrt{n}\,\tilde{C}_M\right) = \mathrm{var}\left(\sqrt{n}\,C_M\right) + \mathrm{var}\left(\sqrt{n}\,X(\tilde{\theta} - \theta_0)\right) - 2\,\mathrm{cov}\left(\sqrt{n}\,X(\tilde{\theta} - \theta_0),\ \sqrt{n}\,C_M\right) = C_0^2 I_M + C_H XG^{-1}X' - 2C_{H,0}XG^{-1}X' = C_0^2 I_M + (C_H - 2C_{H,0})XG^{-1}X' \equiv V_2.$$
In view of $\sqrt{n}\,\tilde{C}_M \stackrel{d}{\longrightarrow} N(0, V_2)$, we have
$$n\,\tilde{C}_M'\tilde{V}_2^{-1}\tilde{C}_M \stackrel{d}{\longrightarrow} \chi^2(m).$$
Then, we complete the proof of Lemma 1. □

Appendix B. Remaining Results

Figure A1. Power for models (16), where γ_3 takes 0.1, 0.2, 0.3, and 0.4. (a) The variation of power of different volatility proxies as the parameter γ_3 changes when the sample size is 200. (b) The variation of power of different volatility proxies as the parameter γ_3 changes when the sample size is 300. (c) The variation of power of different volatility proxies as the parameter γ_3 changes when the sample size is 400. (d) The variation of power of different volatility proxies as the parameter γ_3 changes when the sample size is 500.
Figure A2. Power for models (18), where γ 2 takes 0.1, 0.2, …, 0.5: (a) The variation of power of different volatility proxies as the parameter γ 2 changes when the sample size is 200. (b) The variation of power of different volatility proxies as the parameter γ 2 changes when the sample size is 300. (c) The variation of power of different volatility proxies as the parameter γ 2 changes when the sample size is 400. (d) The variation of power of different volatility proxies as the parameter γ 2 changes when the sample size is 500.
Figure A3. The estimated volatility curves of SSE 50, where the black curve is the real data curve, the blue curve is the estimated volatility curve of RV5, and the red curve is the estimation curve of the model using low-frequency data.
Figure A4. The residual plots of SSE 50, where (a) is of the model using low-frequency data and (b) is of RV5.

References

  1. Samuelson, P. The Variation of Certain Speculative Prices. J. Bus. 1966, 39, 34–105. [Google Scholar]
  2. Hamilton, J.D. Time Series Analysis, 1st ed.; Princeton University Press: Princeton, NJ, USA, 1994; pp. 154–196. [Google Scholar]
  3. Engle, R. Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation. Econometrica 1982, 50, 987–1007. [Google Scholar] [CrossRef]
  4. Bollerslev, T. Generalized autoregressive conditional heteroskedasticity. J. Econom. 1986, 31, 307–327. [Google Scholar] [CrossRef]
  5. Black, F. Studies of stock market volatility changes. In Proceedings of the 1976 Meeting of the Business and Economic Statistics Section; American Statistical Association: Washington, DC, USA, 1986; pp. 177–181. [Google Scholar]
  6. Geweke, J. Modeling The persistence of Conditional Variances: A Comment. Econom. Rev. 1986, 5, 57–61. [Google Scholar] [CrossRef]
  7. Engle, R.F.; Granger, C.W.J.; Ding, Z. A long memory property of stock returns. J. Empri. Financ. 1993, 1, 83–106. [Google Scholar]
  8. Drost, F.; Klaassen, C. Efficient estimation in semiparametric GARCH models. J. Econom. 1997, 81, 193–221. [Google Scholar] [CrossRef]
  9. Visser, M. GARCH parameter estimation using high-frequency data. J. Financ. 2011, 9, 162–197. [Google Scholar] [CrossRef]
  10. Huang, J.S.; Wu, W.Q.; Chen, Z.; Zhou, J.J. Robust M-estimate of gjr model with high frequency data. Acta Math. Appl. Sin. 2015, 31, 591–606. [Google Scholar] [CrossRef]
  11. Glosten, L.R.; Jagannathan, R.; Runkle, D.E. On the relation between the expected value and the volatility of the nominal excess return on stocks. J. Financ. 1993, 48, 1779–1801. [Google Scholar] [CrossRef]
  12. Huber, P.J. Robust Statistics: A Review. Ann. Stat. 1981, 9, 436–466. [Google Scholar]
  13. Wang, M.; Chen, Z.; Wang, C. Composite quantile regression for garch models using high-frequency data. Econ. Stat. 2018, 7, 115–133. [Google Scholar] [CrossRef]
  14. Fan, P.; Lan, Y.; Chen, M. The estimating method of var based on pgarch model with high-frequency data. Syst. Eng. 2017, 37, 2052–2059. [Google Scholar]
  15. Deng, C.; Zhang, X.; Li, Y.; Song, Z. On the test of the volatility proxy model. Commun. Stat. Simul. Comput. 2022, 51, 7390–7403. [Google Scholar] [CrossRef]
  16. Liang, X.; Zhang, X.; Li, Y.; Deng, C. Daily nonparametric ARCH(1) model estimation using intraday high frequency data. AIMS Math. 2021, 6, 3455–3464. [Google Scholar] [CrossRef]
  17. Box, G.; Pierce, D. Distribution of Residual Autocorrelations in Autoregressive Integrated Moving Average Time Series Models. J. Am. Stat. Assoc. 1970, 65, 1509–1526. [Google Scholar] [CrossRef]
  18. Granger, C.W.J.; Andersen, A.P. An introduction to bilinear time series models. Int. Stat. Rev. 1978, 8, 7–94. [Google Scholar]
  19. McLeod, A.; Li, W. Diagnostic Checking ARMA Time Series Models Using Squared-Residual Autocorrelations. J. Time Ser. Anal. 1983, 4, 269–273. [Google Scholar] [CrossRef]
  20. Engle, R.; Bollerslev, T. Modelling the persistence of conditional variances. Econom. Rev. 1986, 5, 1–50. [Google Scholar] [CrossRef]
  21. Pantula, S.G. Estimation of Autoregressive Models with Arch Errors. Sankhyā Indian J. Stat. 1988, 50, 119–138. [Google Scholar]
  22. Li, W.K.; Mak, T.K. On the squared residual autocorrelations in non-linear time series with conditional heteroskedasticity. J. Time Ser. Anal. 1994, 15, 627–636. [Google Scholar] [CrossRef]
  23. Peng, L.; Yao, Q. Least absolute deviations estimation for ARCH and GARCH models. Biometrika 2003, 90, 967–975. [Google Scholar] [CrossRef]
  24. Li, G.; Li, W.K. Diagnostic checking for time series models with conditional heteroscedasticity estimated by the least absolute deviation approach. Biometrika 2005, 92, 691–701. [Google Scholar] [CrossRef]
  25. Carbon, M.; Francq, C. Portmanteau Goodness-of-Fit Test for Asymmetric Power GARCH Models. Aust. N. Z. J. Stat. 2011, 40, 55–64. [Google Scholar]
  26. Chen, M.; Zhu, K. Sign-based portmanteau test for ARCH-type models with heavy-tailed innovations. J. Econom. 2015, 189, 313–320. [Google Scholar] [CrossRef]
  27. Li, M.; Zhang, Y. Bootstrapping multivariate portmanteau tests for vector autoregressive models with weak assumptions on errors. Comput. Stat. Data Anal. 2022, 165, 107321. [Google Scholar] [CrossRef]
  28. Ulam, S.; von Neumann, J. Monte Carlo Calculations in Problems of Mathematical Physics. Bull. New Ser. Am. Math. Soc. 1947, 53, 1120–1126. [Google Scholar]
  29. Uhlenbeck, G.E.; Ornstein, L.S. On the Theory of the Brownian Motion. Phys. Rev. 1930, 36, 823–841. [Google Scholar] [CrossRef]
  30. Brown, R. A Brief Account of Microscopical Observations Made in the Months of June, July and August, 1827, on the Particles Contained in the Pollen of Plants; and on the General Existence of Active Molecules in Organic and Inorganic Bodies. Philos. Mag. 1828, 4, 161–173. [Google Scholar] [CrossRef]
  31. Pan, J.; Wang, H.; Tong, H. Estimation and tests for power-transformed and threshold GARCH models. J. Econom. 2008, 142, 352–378. [Google Scholar] [CrossRef]
Figure 1. Power for models (15), where γ 3 takes 0.1, 0.2, 0.3, and 0.4. (a) The variation of power of different volatility proxies as the parameter γ 3 changes when the sample size is 200. (b) The variation of power of different volatility proxies as the parameter γ 3 changes when the sample size is 300. (c) The variation of power of different volatility proxies as the parameter γ 3 changes when the sample size is 400. (d) The variation of power of different volatility proxies as the parameter γ 3 changes when the sample size is 500.
Figure 2. Power for models (17), where γ 2 takes 0.1, 0.2, …, 0.5. (a) The variation of power of different volatility proxies as the parameter γ 2 changes when the sample size is 200. (b) The variation of power of different volatility proxies as the parameter γ 2 changes when the sample size is 300. (c) The variation of power of different volatility proxies as the parameter γ 2 changes when the sample size is 400. (d) The variation of power of different volatility proxies as the parameter γ 2 changes when the sample size is 500.
Figure 3. Log-return of three indices: (a) The figure of CSI 300, (b) the figure of SSE 50 and (c) the figure of CSI 500. The vertical ordinate of all figures is log-return, and the horizontal ordinate is time t.
Figure 4. The estimated volatility curves of CSI 300, where the black curve is the real data curve, the blue curve is the estimated volatility curve of RV15, and the red curve is the estimated volatility curve of the model using low-frequency data.
Figure 5. The residual plots of CSI 300, where (a) is of the model using low-frequency data and (b) is of RV15.
Table 1. RMSE of parameter estimates under four volatility proxy models.
             n = 200            n = 300            n = 400            n = 500
             γ_1      γ_2       γ_1      γ_2       γ_1      γ_2       γ_1      γ_2
model (15)
  |y_t|      0.1755   0.1282    0.1418   0.1088    0.1280   0.0941    0.1097   0.0819
  RV30       0.0788   0.0585    0.0656   0.0489    0.0563   0.0408    0.0490   0.0370
  RV15       0.0663   0.0497    0.0551   0.0410    0.0471   0.0346    0.0414   0.0319
  RV5        0.0573   0.0410    0.0480   0.0355    0.0412   0.0346    0.0353   0.0268
model (16)
  |y_t|      0.1439   0.1197    0.1184   0.0994    0.1064   0.0847    0.0919   0.0801
  RV30       0.0676   0.0568    0.555    0.0462    0.0465   0.0378    0.0424   0.0344
  RV15       0.0568   0.0469    0.0480   0.0386    0.0385   0.0325    0.0367   0.0293
  RV5        0.0486   0.0392    0.0416   0.0327    0.0330   0.0278    0.0316   0.0253
             γ_1      β_1       γ_1      β_1       γ_1      β_1       γ_1      β_1
model (17)
  |y_t|      0.0773   0.0777    0.0651   0.0656    0.0552   0.0559    0.0485   0.0499
  RV30       0.0372   0.0367    0.0304   0.0306    0.0267   0.0257    0.0228   0.0221
  RV15       0.0306   0.0302    0.0259   0.0257    0.0226   0.0217    0.0193   0.0188
  RV5        0.0268   0.0263    0.0226   0.0219    0.0197   0.0191    0.0165   0.0157
model (18)
  |y_t|      0.1240   0.0941    0.0935   0.0750    0.0856   0.0670    0.0730   0.0573
  RV30       0.0529   0.0411    0.0426   0.0342    0.0373   0.0300    0.0324   0.0254
  RV15       0.0447   0.0342    0.0354   0.0282    0.0313   0.0252    0.0268   0.0219
  RV5        0.0370   0.0287    0.0305   0.0243    0.0263   0.0210    0.0232   0.0187
Table 2. Empirical size of four volatility proxies under four models.
             n = 200   n = 300   n = 400   n = 500
model (15)
  |y_t|      0.039     0.033     0.037     0.031
  RV30       0.070     0.064     0.059     0.058
  RV15       0.071     0.064     0.052     0.057
  RV5        0.075     0.059     0.056     0.059
model (16)
  |y_t|      0.030     0.029     0.033     0.019
  RV30       0.075     0.047     0.052     0.044
  RV15       0.075     0.055     0.054     0.049
  RV5        0.075     0.054     0.057     0.050
model (17)
  |y_t|      0.024     0.035     0.028     0.030
  RV30       0.068     0.062     0.046     0.047
  RV15       0.061     0.071     0.054     0.054
  RV5        0.064     0.070     0.048     0.058
model (18)
  |y_t|      0.037     0.024     0.035     0.027
  RV30       0.066     0.059     0.060     0.047
  RV15       0.064     0.064     0.062     0.050
  RV5        0.072     0.062     0.060     0.052
Table 3. The results of statistic Q 2 of ARCH(2).
           CSI 300    SSE 50      CSI 500
Q²(y_t)    92.5215    104.0227    76.4603
Table 4. The estimators of parameters of GARCH(1,1).
                      τ²/τ_H²   α         β
CSI 300   |y_t|       0.0069    0.0553    0.9447
          RV30                  0.1775    0.8678
          RV15                  0.1718    0.8659
          RV5                   0.1860    0.8646
SSE 50    |y_t|       0.0254    0.0563    0.9372
          RV30                  0.1723    0.8592
          RV15                  0.1733    0.8653
          RV5                   0.1889    0.8570
CSI 500   |y_t|       0.0223    0.0532    0.9416
          RV30                  0.1704    0.8477
          RV15                  0.1773    0.8479
          RV5                   0.1748    0.8556
Table 5. The estimators of E ( H 2 ( ε t ) ) of GARCH(1,1).
           RV30      RV15      RV5
CSI 300    1.0007    1.0004    0.9996
SSE 50     1.0024    1.0026    1.0000
CSI 500    1.0042    1.0033    1.0018
Table 6. The results of statistic Q 2 of GARCH(1,1).
           Q²(y_t)    Q²(RV30)   Q²(RV15)   Q²(RV5)
CSI 300    3.9670     22.6534    20.1195    19.7573
SSE 50     6.7440     25.4250    20.3720    22.1240
CSI 500    11.3254    9.5863     10.0561    8.4238
