Article

The Financial Risk Measurement EVaR Based on DTARCH Models

1 Department of Mathematics and Statistics, York University, Toronto, ON M3J 1P3, Canada
2 Key Laboratory of Advanced Theory and Application in Statistics and Data Science, MOE, and Academy of Statistics and Interdisciplinary Sciences and School of Statistics, East China Normal University, Shanghai 200062, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(8), 1204; https://doi.org/10.3390/e25081204
Submission received: 15 June 2023 / Revised: 1 August 2023 / Accepted: 9 August 2023 / Published: 13 August 2023
(This article belongs to the Special Issue Advanced Statistical Applications in Financial Econometrics)

Abstract:
The value at risk based on expectile (EVaR) is a very useful method for measuring financial risk, especially extreme financial risk. The double-threshold autoregressive conditional heteroscedastic (DTARCH) model is a valuable tool for assessing the volatility of a financial asset's return. A significant characteristic of DTARCH models is that their conditional mean and conditional variance functions are both piecewise linear, involving double thresholds. This paper proposes the weighted composite expectile regression (WCER) estimation of the DTARCH model based on expectile regression theory. Consequently, EVaR can be used to predict extreme financial risk, especially when the conditional mean and conditional variance of asset returns are nonlinear. Unlike existing papers on DTARCH models, we do not assume that the threshold and delay parameters are known. Simulation studies demonstrate that the proposed WCER estimation performs well in finite samples. Finally, the proposed approach is used to analyze the daily Hang Seng Index (HSI) and the Standard & Poor's 500 Index (SPI).

1. Introduction

Scientifically and accurately measuring financial risk is the core of the financial risk management process, and developing efficient statistical methods of risk measurement is essential for effectively controlling financial risk. We aim to develop financial risk models that account for extreme events, thereby enhancing the accuracy and efficacy of risk assessments in finance. Due to the increasing complexity, time variation and randomness of financial markets, nonlinear time series models are used to provide a more reasonable description of market behavior; the double-threshold autoregressive conditional heteroscedastic (DTARCH) model is one such nonlinear time series model (see [1] for details). A significant characteristic of DTARCH models is that their conditional mean and conditional variance functions are both piecewise linear, involving double thresholds. Our investigation focuses on developing an expectile-based value at risk (EVaR) model with a DTARCH structure.
DTARCH models are very useful and flexible in analyzing asymmetric financial time series, making them a subject of considerable attention in recent statistical and econometric papers. Ref. [1] investigated model identification, estimation and diagnostic checking techniques based on the maximum likelihood principle under the normality assumption on the conditional distribution of the observed data. Ref. [2] investigated robust modeling techniques without a specific form of the conditional distribution, focusing on the $L_1$ estimation of DTARCH models and deriving limiting distributions for the proposed estimators. Ref. [3] further studied the parameter estimation of DTARCH models using the weighted composite quantile regression procedure, which includes quantile regression as a special case while significantly improving efficiency and inheriting robustness. Ref. [4] investigated DTARCH models with restrictions on the parameters and proposed both unrestricted and restricted weighted composite quantile regression estimation for the model parameters, which can be utilized to construct a likelihood ratio-type test statistic. However, these papers all assume that the threshold and delay parameters of DTARCH models are known.
The risk measure EVaR proposed by [5] is based on expectile regression theory. Ref. [6] introduced the concept of the expectile. The expectile is in one-to-one correspondence with the quantile and has similar properties, so it can be regarded as an estimate of the quantile; see [7,8,9,10] for details. Expectiles have gained popularity in recent years. Ref. [11] discovered that, similar to quantiles, time-varying expectiles can be estimated using a state space signal extraction algorithm. Ref. [12] proposed a new model based on expectile regression, the geoadditive expectile regression model. Ref. [13] proposed regularized expectile regression with the smoothly clipped absolute deviation (SCAD) penalty for analyzing heteroscedasticity in high dimensions when the error has finite moments. Ref. [14] considered penalized linear expectile regression with the SCAD penalty function. Ref. [15] proposed aggregated expectile regression by exponential weighting. Ref. [16] derived joint weighted Gaussian approximations of the tail empirical expectile and quantile processes. Ref. [17] focused on the semi-parametric estimation of multivariate expectiles for extreme levels of risk. Ref. [18] proposed expectHill estimators, which are used as the basis for estimating tail expectiles and expected shortfall. Ref. [19] built a general theory for the estimation of extreme conditional expectiles in heteroscedastic regression models with heavy-tailed noise. Ref. [20] developed a weighted expectile regression approach for estimating the conditional expectile when covariates are missing at random. Ref. [21] studied the nonparametric estimation of the expectile regression model for strongly mixing functional time series. Ref. [22] considered model averaging for expectile regressions. Ref. [23] exploited the fact that the expectiles of a distribution F are in fact the quantiles of another distribution E explicitly linked to F, in order to construct nonparametric kernel estimators of extreme conditional expectiles. Ref. [24] dealt with the nonparametric estimation of the functional expectile regression, and so on.
Since EVaR is derived from expectile theory and uses an asymmetric squared loss function, it is more sensitive to extreme values and mathematically easier to handle. In addition, EVaR is a weighted average of the conditional lower risk (expected shortfall, ES) and upper risk. Several research papers on EVaR have explored various aspects of its application and properties. For example, Ref. [11] proved that EVaR is a consistent risk measure when the confidence level p is less than 0.5. Ref. [25] studied the risk measure EVaR under a varying coefficient model. Ref. [26] proposed a weighted composite expectile regression estimation for autoregressive models. Ref. [27] discussed the financial meaning of EVaR, compared it with VaR and ES, and studied its asymptotic behavior. Ref. [28] considered a new class of conditional dynamic expectile models with partially varying coefficients for assessing the tail risk of asset returns for the S&P 500 Index. Ref. [29] proposed a class of semiparametric composite expectile models with varying coefficients. Ref. [30] proposed a semi-parametric varying-coefficient model to analyze EVaR under the assumption of $\alpha$-mixing. Ref. [31] forecasted expectile-based risk measures using expectile-based procedures. Ref. [32] provided a basis for inference on extreme expectiles and expectile-based marginal expected shortfall in a general $\beta$-mixing context that encompasses ARMA and GARCH models with heavy-tailed innovations. Ref. [33] developed a single-index approach for modeling the expectile-based value at risk. Ref. [34] studied the estimation of extremal conditional expectiles based on quantile regression and expectile regression models. Considering the advantages of EVaR, we propose the estimation of the DTARCH model based on expectile regression theory.
Unlike the existing papers on the DTARCH model, we do not assume that the threshold and delay parameters of DTARCH models are known.
The rest of the paper is organized as follows. Section 2 investigates the estimation of DTARCH models based on expectile regression theory. We propose the WCER estimation of DTARCH models in Section 2.3; the expectile regression estimation in Section 2.2 is a special case of the WCER estimation, and the least squares estimation in Section 2.1 is in turn a special case of the expectile regression estimation. In Section 2.4, we show that WCER estimators computed with data-driven weights have the same asymptotic efficiency as WCER estimators computed with known weights. In Section 3, we compare the least squares, quantile regression, expectile regression and weighted composite expectile regression estimations of DTARCH models, using the maximum likelihood estimation as the benchmark. The proposed methodology is applied to the daily Hang Seng Index (HSI) and the Standard & Poor's 500 Index (SPI) in Section 4. We summarize our work in Section 5. The proofs of our theoretical results are provided in Appendix A, and some additional simulation results are given in Appendix B.

2. Estimation of the DTARCH Model

Ref. [1] proposed the DTARCH model based on the autoregressive conditional heteroskedasticity (ARCH) model (see [35]) and the threshold model (see [36]). The DTARCH model can handle situations where both the conditional mean and the conditional variance are piecewise linear in past information. Given a time series $y_t$, $t = 1, 2, \ldots, n$, let $\mathcal{F}_t$ be the $\sigma$-field generated by the realized values $\{y_t, y_{t-1}, \ldots\}$ at time t. Assume that $y_t$ is generated by

$$ y_t = x_{t,j}^\top \alpha^{(j)} + \epsilon_t, \quad \text{if } r_{j-1} < y_{t-d} \le r_j, \tag{1} $$

where $j = 1, 2, \ldots, m$; the delay parameter d is a positive integer; the threshold parameters $r_j$ satisfy $-\infty = r_0 < r_1 < \cdots < r_m = \infty$; $x_{t,j} = (1, y_{t-1}, \ldots, y_{t-p_j})^\top$ is a $(p_j + 1) \times 1$ vector of lagged variables; and $\alpha^{(j)} = (\alpha_0^{(j)}, \alpha_1^{(j)}, \ldots, \alpha_{p_j}^{(j)})^\top$ is a $(p_j + 1) \times 1$ parameter vector. The stochastic error satisfies $\epsilon_t = \sqrt{h_t(\gamma)}\, u_t$ with

$$ h_t(\gamma) = \sum_{j=1}^m I_{t,j} \left( \gamma_0^{(j)} + \gamma_1^{(j)} \epsilon_{t-1}^2 + \cdots + \gamma_{q_j}^{(j)} \epsilon_{t-q_j}^2 \right), \tag{2} $$

where $I_{t,j} = I(r_{j-1} < y_{t-d} \le r_j)$, and $\gamma = \mathrm{vec}(\gamma^{(1)}, \ldots, \gamma^{(m)})$ with $\gamma^{(j)} = (\gamma_0^{(j)}, \gamma_1^{(j)}, \ldots, \gamma_{q_j}^{(j)})^\top$ a $(q_j + 1) \times 1$ parameter vector, $j = 1, \ldots, m$. Because (2) is an ARCH process, the innovations $u_t$ are independently and identically distributed random variables with $E(u_t) = 0$ and $\mathrm{Var}(u_t) = 1$, and the parameters satisfy $\gamma_0^{(j)} > 0$, $\gamma_i^{(j)} \ge 0$ ($i = 1, \ldots, q_j$) and $\sum_{i=1}^{q_j} \gamma_i^{(j)} < 1$. This is the DTARCH model proposed by [1]. A significant characteristic of DTARCH models is that their conditional mean and conditional variance functions are both piecewise linear, involving double thresholds.
We have made a slight modification to the DTARCH model under consideration. As in [3,4], the stochastic error satisfies $\epsilon_t = h_t(\beta) u_t$ with

$$ h_t(\beta) = \sum_{j=1}^m I_{t,j} \left( \beta_0^{(j)} + \beta_1^{(j)} |\epsilon_{t-1}| + \cdots + \beta_{q_j}^{(j)} |\epsilon_{t-q_j}| \right) \equiv \sum_{j=1}^m I_{t,j}\, z_{t,j}^\top \beta^{(j)}, \tag{3} $$

where $z_{t,j} = (1, |\epsilon_{t-1}|, \ldots, |\epsilon_{t-q_j}|)^\top$, $\beta = \mathrm{vec}(\beta^{(1)}, \ldots, \beta^{(m)})$ with $\beta^{(j)} = (\beta_0^{(j)}, \beta_1^{(j)}, \ldots, \beta_{q_j}^{(j)})^\top$ ($j = 1, \ldots, m$), and $\beta_0^{(j)} > 0$, $\beta_i^{(j)} \ge 0$ ($i = 1, \ldots, q_j$). The innovations $u_t$ are independently and identically distributed random variables with an unknown distribution $F(u)$ and density function $f(u)$.
Let $\alpha = \mathrm{vec}(\alpha^{(1)}, \ldots, \alpha^{(m)})$, $x_t = \mathrm{vec}(I_{t,1} x_{t,1}, \ldots, I_{t,m} x_{t,m})$, $z_t = \mathrm{vec}(I_{t,1} z_{t,1}, \ldots, I_{t,m} z_{t,m})$, and denote $\sum_{j=1}^m (p_j + 1) = p$ and $\sum_{j=1}^m (q_j + 1) = q$. Then Equations (1) and (3) can be written as

$$ y_t = x_t^\top \alpha + \epsilon_t, \tag{4} $$

and $\epsilon_t = h_t(\beta) u_t$ with

$$ h_t(\beta) = z_t^\top \beta, \tag{5} $$

respectively. As in [1,3,4], we denote the model defined by (4) and (5) by $\mathrm{DTARCH}(p_1, \ldots, p_m; q_1, \ldots, q_m)$, where $p_1, \ldots, p_m$ represent the autoregressive (AR) orders in the m regimes and $q_1, \ldots, q_m$ denote the ARCH orders in the m regimes. We use a DTARCH model with a conditional scale, rather than a conditional variance, because previous studies emphasized that the scale provides a more natural dispersion concept than the variance and offers substantial advantages in terms of robustness; the advantages of such an approach can be found in [37,38,39,40,41], and so on.
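To make the data-generating mechanism concrete, the following Python sketch simulates a two-regime DTARCH(1,1; 1,1) process with delay d = 1 and threshold 0, using the conditional-scale specification above; the function name and the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_dtarch(n, alpha, beta, burn=200, seed=0):
    """Simulate a two-regime DTARCH(1,1; 1,1) process with delay d = 1.

    alpha[j] = (intercept, AR slope) in regime j; beta[j] = (beta0, beta1)
    parameterizes the conditional *scale* h_t = beta0 + beta1 * |eps_{t-1}|,
    as in the modified model (3). Regime j = 0 when y_{t-1} <= 0, j = 1 when
    y_{t-1} > 0 (threshold r_1 = 0).
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n + burn)
    eps = np.zeros(n + burn)
    for t in range(1, n + burn):
        j = 0 if y[t - 1] <= 0 else 1                   # regime indicator I_{t,j}
        h = beta[j][0] + beta[j][1] * abs(eps[t - 1])   # conditional scale h_t > 0
        eps[t] = h * rng.standard_normal()              # eps_t = h_t * u_t
        y[t] = alpha[j][0] + alpha[j][1] * y[t - 1] + eps[t]
    return y[burn:], eps[burn:]

# Illustrative parameter values (hypothetical, chosen for stationarity).
y, eps = simulate_dtarch(
    n=1000,
    alpha=[(0.0, -0.25), (0.0, -0.45)],
    beta=[(0.03, 0.30), (0.06, 0.45)],
)
```

A burn-in period is discarded so the returned sample is approximately drawn from the stationary distribution.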

2.1. Least Squares Estimators of the DTARCH Model

Most of the research papers on DTARCH models assume that the threshold parameters $\{r_0, r_1, \ldots, r_m\}$ and the delay parameter d are known. But in real data analysis, this condition is hard to meet. In the literature on threshold models, there are a few studies based on scenarios where the threshold or delay parameters are unknown. For example, [42] proposed least squares (LS) estimators for a threshold AR(1) model with an unknown threshold and proved that the LS estimators of the threshold parameters are strongly consistent. Ref. [43] proposed conditional least squares (CLS) estimators for the threshold autoregressive model with unknown threshold and delay parameters and proved that the CLS estimators of the threshold parameters converge in distribution. In this paper, we propose parameter estimation methods for the DTARCH model based on expectile regression theory, comprising the expectile regression estimation and the weighted composite expectile regression estimation of the DTARCH model. Note that the expectile regression estimation can be seen as a special case of the weighted composite expectile regression estimation when a single expectile is used (see Section 2.3 for details), while the least squares estimation can be seen as a special case of the expectile regression estimation (see Section 2.2 for details). Under some conditions, we can show that the proposed estimators of the threshold and delay parameters are consistent.
Using the least squares estimation method, we can obtain the least squares estimators of the model $\mathrm{DTARCH}(p_1, \ldots, p_m; q_1, \ldots, q_m)$, $(\hat\alpha_{0LS}^\top, \hat\beta_{0LS}^\top)^\top$. Denote the threshold parameters by $r = (r_0, r_1, \ldots, r_m)^\top$, the least squares estimator of $r$ by $\hat r_{0LS}$, and the least squares estimator of the delay parameter d by $\hat d_{0LS}$. However, these estimators are biased. Obviously, the distribution of $|\epsilon_t|$ is skewed, and the log-transformation is an intuitive mechanism that can make the distribution less skewed; see [44] for details. Thus, in light of [3,4], we introduce a modified form of the model $\mathrm{DTARCH}(p_1, \ldots, p_m; q_1, \ldots, q_m)$. Let
$$ \log |\epsilon_t(\alpha)| = \log h_t(\alpha, \beta) + e_t, \tag{6} $$

where $e_t = \log |u_t|$, and

$$ h_t(\alpha, \beta) = \sum_{j=1}^m I_{t,j} \left( \beta_0^{(j)} + \beta_1^{(j)} |\epsilon_{t-1}(\alpha)| + \cdots + \beta_{q_j}^{(j)} |\epsilon_{t-q_j}(\alpha)| \right). $$

Here $h_t(\alpha, \beta)$ is equivalent to $h_t(\beta)$: since $h_t(\beta)$ also depends on $\alpha$, we rename it $h_t(\alpha, \beta)$. Applying the least squares method again, we obtain the LS estimators of $\mathrm{DTARCH}(p_1, \ldots, p_m; q_1, \ldots, q_m)$, denoted by $\hat\alpha_{LS}$, $\hat\beta_{LS}$, $\hat r_{LS}$ and $\hat d_{LS}$, respectively. We study the properties of the least squares estimators under the following conditions.
For $j = 1, 2, \ldots, m$, suppose that the $x_{t,j}$ are all Markov chains, with l-step transition probability denoted by $P^l(x_j, A_j)$, where $x_j \in \mathbb{R}^p$ and the $A_j$ are Borel sets. Later on, we will need the following set of regularity conditions.

(C1) $\{x_{t,j}\}$ admits a unique invariant measure $\pi_j(\cdot)$ such that there exist $K_j$ and $\rho_j < 1$ with $\| P^{n_j}(x_j, \cdot) - \pi_j(\cdot) \| \le K_j (1 + |x_j|)\, \rho_j^{n_j}$ for all $x_j \in \mathbb{R}^p$ and $n_j \in \mathbb{N}$, where $\|\cdot\|$ and $|\cdot|$ denote the total variation norm and the Euclidean norm, respectively.

(C2) $E |y_t|^{2+\delta} < +\infty$ for some $\delta > 0$, and $\{y_t\}$ is strictly stationary and ergodic.

(C3) $\{y_t\}$ has a first derivative, and the stationary distribution of the derivative admits a density that is positive everywhere.

(C4) The error $e_t$ has cumulative distribution function $G(\cdot)$ with density $g(\cdot)$ that is positive and has a continuous second derivative. Furthermore, $E(e_t^4) < +\infty$.
Let $\alpha^*$, $\beta^*$, $d^*$ and $r^*$ be the true values of $\alpha$, $\beta$, d and $r$, respectively. Then, we can obtain the following theorems and corollary.
Theorem 1.
Suppose that conditions (C2) and (C3) hold. Then, the estimators $(\hat\alpha_{LS}^\top, \hat\beta_{LS}^\top)^\top$ are strongly consistent, that is,

$$ (\hat\alpha_{LS}^\top, \hat\beta_{LS}^\top)^\top \xrightarrow{a.s.} ((\alpha^*)^\top, (\beta^*)^\top)^\top. $$
Under some additional conditions, we have the following corollary.
Corollary 1.
Suppose that conditions (C1), (C2) and (C4) hold. Then it follows from Theorem 1 that the estimator $\hat d_{LS}$ is strongly consistent, that is,

$$ \hat d_{LS} \xrightarrow{a.s.} d^*. $$
Theorem 2.
Suppose that conditions (C1), (C2) and (C4) hold. Then, the estimator $\hat r_{LS}$ converges to $r^*$ in distribution, that is,

$$ \hat r_{LS} \xrightarrow{D} r^*. $$
According to Corollary 1 and Theorem 2, both the threshold and delay parameters converge to their true values. Therefore, after obtaining estimated values for threshold and delay parameters, estimating the remaining parameters of the DTARCH model will yield convergence properties that are equivalent to those obtained by estimating the parameters using the known threshold and delay parameters. In order to simplify the theoretical analysis, without loss of generality, we assume that the threshold and delay parameters are known throughout the remainder of this paper.

2.2. Expectile Regression Estimators of the DTARCH Model

The definition of the expectile proposed by [6] states that the τ-th expectile of a random variable u can be obtained by minimizing the following check function,

$$ Q_\tau(u) = \begin{cases} (1 - \tau)\, u^2, & u \le 0, \\ \tau\, u^2, & u > 0, \end{cases} $$

whose derivative $\frac{1}{2} \dot Q_\tau(u) = \begin{cases} (1 - \tau)\, u, & u \le 0, \\ \tau\, u, & u > 0, \end{cases}$ satisfies

$$ E\left[ \dot Q_\tau(u) \right] = 0, $$

i.e.,

$$ (1 - \tau) \int_{-\infty}^0 u f(u)\, du + \tau \int_0^{+\infty} u f(u)\, du = 0, $$

where $f(\cdot)$ is the density function of u. Therefore, the τ-th expectile of u is 0.
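The first-order condition above suggests a simple way to compute a sample expectile numerically: iteratively reweighted averaging, with weight 1 − τ on observations below the current value and τ on those above it. A minimal Python sketch (the function names are ours, not from the paper):

```python
import numpy as np

def expectile_loss(b, u, tau):
    """Mean asymmetric squared (check) loss Q_tau of the residuals u - b."""
    r = u - b
    w = np.where(r <= 0, 1.0 - tau, tau)
    return np.mean(w * r ** 2)

def expectile(u, tau, tol=1e-10, max_iter=1000):
    """tau-th sample expectile: solves the first-order condition
    (1 - tau) * sum_{u_i <= b} (u_i - b) + tau * sum_{u_i > b} (u_i - b) = 0
    by iteratively reweighted means; the fixed point b = sum(w*u)/sum(w)
    is exactly that root."""
    b = np.mean(u)
    for _ in range(max_iter):
        w = np.where(u <= b, 1.0 - tau, tau)
        b_new = np.sum(w * u) / np.sum(w)
        if abs(b_new - b) < tol:
            break
        b = b_new
    return b
```

For τ = 0.5 the weights are constant, so the expectile reduces to the sample mean, mirroring the fact that least squares is the τ = 0.5 special case of expectile regression.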
Let $\epsilon_t(\alpha) = y_t - x_t^\top \alpha = h_t(\alpha, \beta)\, u_t$ and let the τ-th expectile of $u_t$ be $\mu(\tau)$. Based on Theorem 1 in [6], the τ-th conditional expectile of $\epsilon_t$ given $\mathcal{F}_{t-1}$ is

$$ \mu_\tau(\epsilon_t \mid \mathcal{F}_{t-1}) = h_t(\alpha, \beta)\, \mu(\tau). $$

The τ-th expectile regression (ER) estimators of $\alpha$ and $\beta$ can be obtained by minimizing

$$ \sum_{t=s+1}^n Q_\tau\left( \epsilon_t(\alpha) - h_t(\alpha, \beta)\, b_\tau \right), \tag{7} $$

over $b_\tau$, $\alpha$ and $\beta$, where $0 < \tau < 1$, $s = \max(p_1, \ldots, p_m, q_1, \ldots, q_m)$ and $b_\tau$ is the τ-th expectile of $u_t$. Let the resulting estimators from (7) be $(\hat b_\tau^{ER}, (\hat\alpha_0^{ER})^\top, (\hat\beta_0^{ER})^\top)^\top$. Not surprisingly, these estimators are also biased. To correct the bias, we again perform expectile regression estimation on the log-transformed model (6).
From model (6), the τ-th expectile of $\log |\epsilon_t(\alpha)|$ given $\mathcal{F}_{t-1}$ is

$$ \mu_\tau\left( \log |\epsilon_t(\alpha)| \mid \mathcal{F}_{t-1} \right) = \log h_t(\alpha, \beta) + c_\tau, $$

where $c_\tau$ is the τ-th expectile of $e_t$. Applying the expectile regression scheme, we can obtain the expectile regression estimators of $c_\tau$, $\alpha$ and $\beta$ by minimizing

$$ \sum_{t=s+1}^n Q_\tau\left( \log |\epsilon_t(\alpha)| - \log h_t(\alpha, \beta) - c_\tau \right) $$

over $c_\tau$, $\alpha$ and $\beta$. Obviously, when $\tau = 0.5$, the expectile regression estimators reduce to the least squares estimators of Section 2.1.
Let the resulting estimators of $c_\tau$, $\alpha$, $\beta$ be $\hat c_\tau^{ER}$, $\hat\alpha^{ER}$, $\hat\beta^{ER}$. To derive the asymptotic properties of the proposed estimators, we introduce some notation and conditions. Let $c_\tau^*$ be the τ-th expectile of $e_t$, $\epsilon_t = \epsilon_t(\alpha^*)$, $h_t = h_t(\alpha^*, \beta^*)$, and $r_t = x_t / \epsilon_t + f_t / h_t$ with

$$ f_t = \sum_{j=1}^m I_{t,j} \left( 0, \mathrm{sgn}(\epsilon_{t-1})\, x_{t-1}^\top, \ldots, \mathrm{sgn}(\epsilon_{t-q_j})\, x_{t-q_j}^\top \right) \beta^{(j)}, $$

$\mu = (\mu_r^\top, \mu_z^\top)^\top$ with $\mu_r = E(r_t)$ and $\mu_z = E(z_t / h_t)$; $\Omega = \begin{pmatrix} \Omega_r & \Omega_{rz} \\ \Omega_{rz}^\top & \Omega_z \end{pmatrix}$ with $\Omega_r = E(r_t r_t^\top)$, $\Omega_{rz} = E(r_t z_t^\top / h_t)$ and $\Omega_z = E(z_t z_t^\top / h_t^2)$; and $\Pi = \begin{pmatrix} \Pi_r & \Pi_{rz} \\ \Pi_{rz}^\top & \Pi_z \end{pmatrix}$ with $\Pi_r = \mathrm{Var}(r_t)$, $\Pi_{rz} = \mathrm{Cov}(r_t, z_t / h_t)$ and $\Pi_z = \mathrm{Var}(z_t / h_t)$. We assume that
(C5) The covariance matrix $\Pi$ is positive definite.
Then we have the following asymptotic results for $(\hat c_\tau^{ER}, \hat\alpha^{ER}, \hat\beta^{ER})$.
Theorem 3.
Suppose that conditions (C2), (C4) and (C5) hold. Then $(\hat c_\tau^{ER}, (\hat\alpha^{ER})^\top, (\hat\beta^{ER})^\top)^\top$ converges to $(c_\tau^*, (\alpha^*)^\top, (\beta^*)^\top)^\top$ in distribution, that is,

$$ \sqrt{n} \left( \hat c_\tau^{ER} - c_\tau^*, (\hat\alpha^{ER} - \alpha^*)^\top, (\hat\beta^{ER} - \beta^*)^\top \right)^\top \xrightarrow{D} N(0, \Psi), $$

where $\Psi$ is a block matrix with blocks

$$ \Psi_{11} = \zeta \left( 1 + \mu^\top \Pi^{-1} \mu \right), \quad \Psi_{12} = \zeta\, \mu^\top \Pi^{-1}, \quad \Psi_{21} = \zeta\, \Pi^{-1} \mu, \quad \Psi_{22} = \zeta\, \Pi^{-1}, $$

with $\zeta = \frac{1}{4} \left[ (1 - 2\tau) G(c_\tau^*) + \tau \right]^{-2} E\left[ \dot Q_\tau^2(e_t - c_\tau^*) \right]$.
By Theorem 3, it follows that

$$ \sqrt{n} \left( \hat c_\tau^{ER} - c_\tau^* \right) \xrightarrow{D} N(0, \Psi_{11}), $$

$$ \sqrt{n} \begin{pmatrix} \hat\alpha^{ER} - \alpha^* \\ \hat\beta^{ER} - \beta^* \end{pmatrix} \xrightarrow{D} N(0, \Psi_{22}). $$

2.3. Weighted Composite Expectile Regression Estimators of the DTARCH Model

Refs. [45,46,47] considered the composite quantile regression (CQR) estimation, which incorporates the information of multiple quantiles into the objective function and thereby uses more comprehensive model information. Subsequently, ref. [29] introduced an estimation called composite expectile regression (CER) and established the large sample properties of the resulting CER estimator. However, CQR and CER assign equal weights to the different quantiles and expectiles, respectively. Intuitively, using different weights for the different quantile regression (QR) and expectile regression (ER) models might lead to improved efficiency. Hence, refs. [48,49] proposed the weighted composite quantile regression (WCQR) estimation method; as discussed in [4], the standard deviation (SD) of the WCQR estimator is smaller than the SDs of the CQR and QR estimators. Furthermore, ref. [26] proposed a weighted composite expectile regression (WCER) estimation for AR models and established its large sample properties.
From model (6), the $\tau_k$-th expectile of $\log |\epsilon_t(\alpha)|$ given $\mathcal{F}_{t-1}$ is

$$ \mu_{\tau_k}\left( \log |\epsilon_t(\alpha)| \mid \mathcal{F}_{t-1} \right) = \log h_t(\alpha, \beta) + c_{\tau_k}, $$

where $c_{\tau_k}$ is the $\tau_k$-th expectile of $e_t$. Applying the WCER scheme, we can jointly estimate the AR and ARCH parameters by minimizing

$$ \sum_{k=1}^K \omega_k \sum_{t=s+1}^n Q_{\tau_k}\left( \log |\epsilon_t(\alpha)| - \log h_t(\alpha, \beta) - c_{\tau_k} \right), \tag{9} $$

over $c_{\tau_k}$, $\alpha$ and $\beta$, where $\omega = (\omega_1, \ldots, \omega_K)^\top$ is a vector of weights such that $\|\omega\| = 1$, with $\|\cdot\|$ denoting the Euclidean norm. Without loss of generality, we assume that $0 < \tau_1 < \cdots < \tau_K < 1$. If $\omega_k = 1/K$, the estimation obtained from Equation (9) is the CER estimation. Obviously, the weight $\omega_k$ is the contribution rate of the $\tau_k$-th expectile. Since the terms $Q_{\tau_k}( \log |\epsilon_t(\alpha)| - \log h_t(\alpha, \beta) - c_{\tau_k} )$ may not be positively correlated, components of the weight vector $\omega$ can be negative. Therefore, the WCER estimation is not a simple extension of the CER estimation. Due to limited space, we will not discuss the CER estimation in detail.
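To make criterion (9) concrete, the sketch below evaluates the weighted composite expectile objective for given residuals and fitted conditional scales; in the actual procedure the residuals and scales depend on (α, β), and the criterion is minimized jointly over the expectile intercepts, α and β. The helper names are ours.

```python
import numpy as np

def check_loss(r, tau):
    """Expectile check function Q_tau applied elementwise."""
    return np.where(r <= 0, (1.0 - tau) * r ** 2, tau * r ** 2)

def wcer_objective(c, eps, h, taus, weights):
    """Weighted composite expectile criterion (9): for each expectile level
    tau_k, sum Q_{tau_k} over t of log|eps_t| - log h_t - c_k, then combine
    the K sums with the weights omega_k."""
    z = np.log(np.abs(eps)) - np.log(h)   # log|eps_t(alpha)| - log h_t(alpha, beta)
    return sum(w * np.sum(check_loss(z - ck, tau))
               for ck, tau, w in zip(c, taus, weights))
```

Each inner sum is convex in its intercept $c_k$, so for fixed residuals the criterion is easy to minimize over the intercepts; the hard part of the full WCER procedure is the joint minimization over (α, β) as well.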
Let the resulting estimators of $c_{\tau_1}, \ldots, c_{\tau_K}$, $\alpha$, $\beta$ be $\hat c_{\tau_1}, \ldots, \hat c_{\tau_K}$, $\hat\alpha$, $\hat\beta$, and let $c_{\tau_k}^*$ be the true value of the $\tau_k$-th expectile of $e_t$. Under certain conditions, we have the following asymptotic results.
Theorem 4.
Suppose that conditions (C2), (C4) and (C5) hold. Then $(\hat c_{\tau_1}, \ldots, \hat c_{\tau_K}, \hat\alpha^\top, \hat\beta^\top)^\top$ converges to $(c_{\tau_1}^*, \ldots, c_{\tau_K}^*, (\alpha^*)^\top, (\beta^*)^\top)^\top$ in distribution, that is,

$$ \sqrt{n} \left( \hat c_{\tau_1} - c_{\tau_1}^*, \ldots, \hat c_{\tau_K} - c_{\tau_K}^*, (\hat\alpha - \alpha^*)^\top, (\hat\beta - \beta^*)^\top \right)^\top \xrightarrow{D} N(0, \Sigma), $$

where $\Sigma$ is a block matrix with blocks

$$ \Sigma_{11} = \xi + \sigma^2(\omega) \left( \mu^\top \Pi^{-1} \mu \right) \mathbf{1} \mathbf{1}^\top, \quad \Sigma_{12} = \sigma^2(\omega)\, \mathbf{1} \mu^\top \Pi^{-1}, \quad \Sigma_{21} = \sigma^2(\omega)\, \Pi^{-1} \mu\, \mathbf{1}^\top, \quad \Sigma_{22} = \sigma^2(\omega)\, \Pi^{-1}, $$

with $\mathbf{1} = (1, \ldots, 1)^\top$; $\xi$ is a $K \times K$ matrix whose $(j,k)$th element is

$$ \xi_{jk} = \frac{ E\left[ \dot Q_{\tau_j}(e_t - c_{\tau_j}^*)\, \dot Q_{\tau_k}(e_t - c_{\tau_k}^*) \right] }{ 4 \left[ (1 - 2\tau_j) G(c_{\tau_j}^*) + \tau_j \right] \left[ (1 - 2\tau_k) G(c_{\tau_k}^*) + \tau_k \right] }; $$

$\sigma^2(\omega) = \omega^\top \Lambda\, \omega \Big/ \left[ \sum_{k=1}^K \omega_k \left( (1 - 2\tau_k) G(c_{\tau_k}^*) + \tau_k \right) \right]^2$; and $\Lambda$ is also a $K \times K$ matrix, with $(j,k)$th element

$$ \Lambda(j,k) = \frac{1}{4} E\left[ \dot Q_{\tau_j}(e_t - c_{\tau_j}^*)\, \dot Q_{\tau_k}(e_t - c_{\tau_k}^*) \right] = \int \left[ \tau_j \tau_k - \tau_j I(e_t \le c_{\tau_k}^*) - \tau_k I(e_t \le c_{\tau_j}^*) + I(e_t \le c_{\tau_j}^* \wedge c_{\tau_k}^*) \right] \left[ e_t^2 - (c_{\tau_j}^* + c_{\tau_k}^*) e_t + c_{\tau_j}^* c_{\tau_k}^* \right] dG(e_t). $$

2.4. Selection of Optimal Weight

By Theorem 4, we have

$$ \sqrt{n} \begin{pmatrix} \hat\alpha - \alpha^* \\ \hat\beta - \beta^* \end{pmatrix} \xrightarrow{D} N(0, \Sigma_{22}), $$

where $\Sigma_{22} = \sigma^2(\omega)\, \Pi^{-1}$. Because $\Pi$ does not involve the weight $\omega$, to obtain the optimal weight we only need to minimize $\sigma^2(\omega)$ under the constraint $\|\omega\| = 1$, which yields

$$ \omega_{\mathrm{opt}} = \left( g^\top \Lambda^{-2} g \right)^{-1/2} \Lambda^{-1} g, $$

where

$$ g = \left( (1 - 2\tau_1) G(c_{\tau_1}^*) + \tau_1, \ldots, (1 - 2\tau_K) G(c_{\tau_K}^*) + \tau_K \right)^\top. $$

The cumulative distribution function $G(\cdot)$, with density $g(\cdot)$, of the error $\{e_t\}$ can be obtained by kernel smoothing estimation. Then, the nonparametric estimator of $\omega_{\mathrm{opt}}$ is given by

$$ \hat\omega = (\hat\omega_1, \ldots, \hat\omega_K)^\top = \left( \hat g^\top \hat\Lambda^{-2} \hat g \right)^{-1/2} \hat\Lambda^{-1} \hat g, $$

so that the estimator of $(\alpha^\top, \beta^\top)^\top$, denoted by $(\hat\alpha_0^\top, \hat\beta_0^\top)^\top$, can be obtained from

$$ \min_{c_{\tau_k}, \alpha, \beta}\; \sum_{k=1}^K \hat\omega_k \sum_{t=s+1}^n Q_{\tau_k}\left( \log |\epsilon_t(\alpha)| - \log h_t(\alpha, \beta) - c_{\tau_k} \right). $$
Then, under certain conditions, we have the following asymptotic results for $(\hat\alpha_0^\top, \hat\beta_0^\top)^\top$.
Corollary 2.
Suppose that conditions (C2), (C4) and (C5) hold. Then $(\hat\alpha_0^\top, \hat\beta_0^\top)^\top$ converges to $((\alpha^*)^\top, (\beta^*)^\top)^\top$ in distribution, that is,

$$ \sqrt{n} \begin{pmatrix} \hat\alpha_0 - \alpha^* \\ \hat\beta_0 - \beta^* \end{pmatrix} \xrightarrow{D} N\left( 0, \left( g^\top \Lambda^{-1} g \right)^{-1} \Pi^{-1} \right). $$

Note that $\sigma^2(\omega_{\mathrm{opt}}) = (g^\top \Lambda^{-1} g)^{-1}$. Thus, the asymptotic covariance of $(\hat\alpha_0^\top, \hat\beta_0^\top)^\top$ is the same as that of $(\hat\alpha^\top, \hat\beta^\top)^\top$ when $\omega_{\mathrm{opt}}$ is known. In other words, the asymptotic efficiency of WCER estimators computed with data-driven optimal weights is the same as that of WCER estimators computed with known weights.
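The closed-form optimal weight can be computed directly once g and Λ are estimated. The sketch below implements $\omega_{\mathrm{opt}} = (g^\top \Lambda^{-2} g)^{-1/2} \Lambda^{-1} g$ together with the objective $\sigma^2(\omega)$ it minimizes; it uses the fact that for symmetric Λ, $(g^\top \Lambda^{-2} g)^{1/2} = \|\Lambda^{-1} g\|$. Function names are ours.

```python
import numpy as np

def sigma2(w, g, Lam):
    """sigma^2(omega) = omega' Lam omega / (omega' g)^2 from Theorem 4."""
    return (w @ Lam @ w) / (w @ g) ** 2

def optimal_weights(g, Lam):
    """omega_opt = (g' Lam^{-2} g)^{-1/2} Lam^{-1} g, the minimizer of
    sigma^2(omega) subject to ||omega|| = 1. Since Lam is symmetric,
    the normalizing constant equals the Euclidean norm of Lam^{-1} g."""
    Lam_inv_g = np.linalg.solve(Lam, g)       # Lam^{-1} g without explicit inverse
    return Lam_inv_g / np.sqrt(Lam_inv_g @ Lam_inv_g)
```

Because $\sigma^2(\omega)$ is scale-invariant in ω, the unit-norm constraint only pins down the normalization; any positive multiple of $\Lambda^{-1} g$ attains the same minimum value $(g^\top \Lambda^{-1} g)^{-1}$.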

3. Comparison of Estimation Methods

In this section, we compare the least squares, quantile regression, expectile regression and weighted composite expectile regression estimations of DTARCH models, using the maximum likelihood estimation (MLE) of DTARCH models as the benchmark. We consider the following $\mathrm{DTARCH}(2,2;2,2)$ model:

$$ y_t = \begin{cases} \alpha_1^{(1)} y_{t-1} + \alpha_2^{(1)} y_{t-2} + \epsilon_t, & \text{if } y_{t-1} \le 0, \\ \alpha_1^{(2)} y_{t-1} + \alpha_2^{(2)} y_{t-2} + \epsilon_t, & \text{if } y_{t-1} > 0, \end{cases} $$

where $(\alpha_1^{(1)}, \alpha_2^{(1)}) = (0.25, 0.30)$, $(\alpha_1^{(2)}, \alpha_2^{(2)}) = (0.45, 0.20)$ and $\epsilon_t = h_t u_t$, with

$$ h_t = \begin{cases} \beta_0^{(1)} + \beta_1^{(1)} |\epsilon_{t-1}| + \beta_2^{(1)} |\epsilon_{t-2}|, & \text{if } y_{t-1} \le 0, \\ \beta_0^{(2)} + \beta_1^{(2)} |\epsilon_{t-1}| + \beta_2^{(2)} |\epsilon_{t-2}|, & \text{if } y_{t-1} > 0, \end{cases} $$

where $(\beta_0^{(1)}, \beta_1^{(1)}, \beta_2^{(1)}) = (0.03, 0.30, 0.50)$ and $(\beta_0^{(2)}, \beta_1^{(2)}, \beta_2^{(2)}) = (0.06, 0.45, 0.35)$.
We consider three types of innovation distributions: $N(0,1)$, $t(6)$ and $\chi^2(4)$. They are centralized and normalized so that the median of the absolute innovations is 1, i.e., $u_t$ is normalized to satisfy $\mathrm{Median}(|u_t|) = 1$. The sample size n is chosen as 100, 300, 800, 1500 and 2500. All the simulation results are based on 500 Monte Carlo replications. Seven equally spaced expectiles in $(0,1)$ are chosen for each simulation setting when we apply the WCER estimation procedure. For the QR and ER estimations, we take $\tau = 0.25$ and $0.75$, respectively. In each simulation, the root mean squared error (RMSE) of each estimator is calculated; the results are reported in Table 1, Table 2 and Table 3. In addition, the parameter estimates obtained from the different estimation methods are listed in Table A1, Table A2 and Table A3 in Appendix B.
As expected, the oracle MLE performs best, while the WCER estimators outperform both the QR and ER estimators. As can be seen from Table 1 and Table A1, the WCER estimators slightly underperform the LS estimators only when the residual error follows a normal distribution. From Table 2, Table 3, Table A2 and Table A3, we can see that the WCER estimators greatly outperform the LS estimators in terms of RMSE when the error follows a heavy-tailed or asymmetric distribution. In studies that apply time series models to financial data, it is more realistic to assume that the error follows a non-normal, heavy-tailed distribution. For example, [50] considered the heavy-tailed nature and extreme volatility of asset returns and demonstrated these statistical characteristics using financial data. Ref. [51] introduced a new heavy-tailed distribution to characterize errors in the ARCH/GARCH model and applied it to financial data. Furthermore, [52] assumed two different types of heavy-tailed distributions for GARCH model errors, the Student's t-distribution and the normal reciprocal inverse Gaussian distribution, and compared their application to South Korea's daily stock market returns.
Therefore, it is possible to obtain a WCER estimator with favorable statistical properties similar to those of the MLE, even when the distribution of the error is unknown. Moreover, the RMSEs of all estimation methods decrease with the increase of sample size n, indicating that all estimators are consistent.
We also conduct a simulation study of the DTARCH model with delay parameter d > 1. Similarly, with the maximum likelihood estimation as the benchmark, we compare the QR, ER and WCER estimations of this DTARCH model. The simulation results are presented in the supplementary materials.

4. Real Data Analysis

In this section, we use the proposed method to analyze daily data on the Hang Seng Index (HSI) and the Standard & Poor's 500 Index (SPI) from 7 February 2013 to 6 February 2023. The return series $y_t$ is the daily log return of the index, i.e., the first-order difference of the logarithm of the closing prices of the index on adjacent days,

$$ y_t = \log(x_t) - \log(x_{t-1}), $$

where $x_t$ represents the closing price of the HSI or SPI on day t. The sample size for the SPI is $n = 2516$, and the sample size for the HSI is $n = 2453$. We are interested in the asymmetry of the conditional mean and conditional variance of the stock market.
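The return construction can be sketched in Python (a generic helper, not code from the paper):

```python
import numpy as np

def log_returns(prices):
    """Daily log returns y_t = log(x_t) - log(x_{t-1}) of a closing-price series."""
    x = np.asarray(prices, dtype=float)
    return np.diff(np.log(x))   # length n-1 for n closing prices
```

For n closing prices the helper produces n − 1 returns, consistent with the sample sizes reported for the HSI and SPI return series.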
First, we need to identify the values of m, the delay parameter d and the threshold parameters $r_i$. Applying the same method as in [3], we obtain $d = 1$, $m = 2$ and $(r_0, r_1, r_2) = (-\infty, 0, +\infty)$. This aligns with stock market observations and supports our goal of examining the asymmetry in the conditional mean and variance. Similar to [3,4], we employ the generalized Akaike information criterion (GAIC) and the generalized Bayesian information criterion (GBIC) to determine the orders of the DTARCH models before fitting them to the HSI and SPI. We find that both GAIC and GBIC reach their minimum values for the HSI and SPI at p = 2 and q = 4: the minimum GAIC and GBIC values for the HSI are 0.3144 and 0.3876, respectively, and those for the SPI are 0.2975 and 0.3463, respectively. This designates a $\mathrm{DTARCH}(2,2;4,4)$ model for the return series. Thus, the following DTARCH model is considered for the return series $y_t$:
$$ y_t = \begin{cases} \alpha_1^{(1)} y_{t-1} + \alpha_2^{(1)} y_{t-2} + \epsilon_t, & \text{if } y_{t-1} \le 0, \\ \alpha_1^{(2)} y_{t-1} + \alpha_2^{(2)} y_{t-2} + \epsilon_t, & \text{if } y_{t-1} > 0, \end{cases} $$

and $\epsilon_t = h_t u_t$, with

$$ h_t = \begin{cases} \beta_0^{(1)} + \beta_1^{(1)} |\epsilon_{t-1}| + \beta_2^{(1)} |\epsilon_{t-2}| + \beta_3^{(1)} |\epsilon_{t-3}| + \beta_4^{(1)} |\epsilon_{t-4}|, & \text{if } y_{t-1} \le 0, \\ \beta_0^{(2)} + \beta_1^{(2)} |\epsilon_{t-1}| + \beta_2^{(2)} |\epsilon_{t-2}| + \beta_3^{(2)} |\epsilon_{t-3}| + \beta_4^{(2)} |\epsilon_{t-4}|, & \text{if } y_{t-1} > 0. \end{cases} $$
For comparison, we calculate the proposed WCER estimate, the ER estimate, the QR estimate and the LS estimate. Seven equally spaced expectiles in $(0,1)$ are chosen when we apply the WCER estimation procedure. For the QR and ER estimations, we take $\tau = 0.25$ and $0.75$, respectively.
These estimates and their standard errors are listed in Table 4 and Table 5, which show some interesting results. First, the estimates $\hat\alpha_1^{(1)}$, $\hat\alpha_2^{(1)}$, $\hat\alpha_1^{(2)}$, $\hat\alpha_2^{(2)}$ are all negative, which suggests that if current and past returns are negative, the forecasted mean return will be positive, and if they are positive, the forecasted mean return will be negative. Second, all the estimated coefficients $\hat\beta_0^{(1)}$, $\hat\beta_1^{(1)}$, $\hat\beta_2^{(1)}$, $\hat\beta_3^{(1)}$, $\hat\beta_4^{(1)}$ and $\hat\beta_0^{(2)}$, $\hat\beta_1^{(2)}$, $\hat\beta_2^{(2)}$, $\hat\beta_3^{(2)}$, $\hat\beta_4^{(2)}$ are positive, in line with the non-negativity of the volatility coefficients assumed by DTARCH models. Third, the values of $\hat\beta_1^{(1)}$, $\hat\beta_2^{(1)}$, $\hat\beta_3^{(1)}$, $\hat\beta_4^{(1)}$ differ significantly from those of $\hat\beta_1^{(2)}$, $\hat\beta_2^{(2)}$, $\hat\beta_3^{(2)}$, $\hat\beta_4^{(2)}$, which shows that the volatility of HSI and SPI exhibits clear asymmetry. Fourth, the absolute values of the parameter estimates for HSI are almost all greater than those for SPI, which indicates that the market volatility of HSI is higher than that of SPI. Indeed, over the past 20 years the volatility of the S&P 500 Index has been lower than that of the Hang Seng Index, leading some to believe that the former's returns would also be lower. However, since the S&P 500 Index has experienced smaller market drops than the Hang Seng Index, its absolute returns have been higher over the past 20 years, and both its short-term and long-term risk-adjusted returns are higher as well.
To evaluate the predictive performance of the fitted models, we split the dataset into two parts: a larger part for model building and a smaller part for model validation. For example, we split the SPI dataset of sample size n = 2516 into two subsets: sample 1 (from 7 February 2013 to 21 December 2022) of size $n_1 = 2486$ and sample 2 (from 22 December 2022 to 6 February 2023) of size $n_2 = 30$. We build a DTARCH model using sample 1 and then apply it to predict the period from 22 December 2022 to 6 February 2023. We compare the predicted values with sample 2 to evaluate the performance of the different estimation methods. The chosen evaluation metric is the median absolute percentage error (MAPE), i.e., the median of the absolute percentage differences between the predicted values $\hat y_i$ and the observed values $y_i$ $(i = 1, 2, \ldots, n_2)$. This approach is similar to that used in [29]. We perform the same procedure on the SPI and HSI datasets with sample 2 of different sizes, specifically $n_2 = 10, 20$ and 30. The results obtained are shown in Table 6 and Table 7, respectively.
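The split-and-score procedure can be sketched as follows; the series, split point and one-step forecasts below are toy values (not SPI or HSI data), and the metric is the median of absolute percentage errors as described above.

```python
import statistics

def median_ape(actual, predicted):
    """Median absolute percentage error between observed and predicted values."""
    return statistics.median(abs((a - p) / a) for a, p in zip(actual, predicted))

# Hold out the last n2 observations for validation, as in the text (toy n2 = 3).
series = [0.010, -0.020, 0.015, 0.012, -0.008]
n2 = 3
train, test = series[:-n2], series[-n2:]

preds = [0.014, 0.010, -0.010]   # hypothetical forecasts from a model fit on `train`
score = median_ape(test, preds)
```

Lower scores indicate better out-of-sample accuracy, and the median (rather than the mean) keeps one badly missed day from dominating the comparison.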
Based on the MAPE values in Table 6 and Table 7, it can be seen that the WCER estimation consistently produces lower MAPE values than the other methods. Therefore, we conclude that the WCER estimation method outperforms its competitors.

5. Concluding Remarks

In this paper, we develop an estimation method for DTARCH models based on expectile theory. We propose the WCER estimators for DTARCH models and derive the large-sample properties of the proposed estimators. Unlike existing papers on DTARCH models, we do not require the threshold and delay parameters to be known. We conduct a simulation study to assess the proposed theory and find that our WCER estimator outperforms the LS estimator in terms of RMSE, particularly when the errors follow a heavy-tailed or asymmetric distribution. The simulation results are consistent with our theoretical results. Even if the common distribution of the errors is unknown, we can still obtain a WCER estimator with good statistical properties, comparable to those of the MLE. Furthermore, we apply the proposed WCER estimation method to estimate the parameters of DTARCH models using daily returns data for HSI and SPI.
It is noted that the proposed WCER estimation method is more effective for DTARCH models when the errors follow a non-normal, heavy-tailed distribution. This finding is consistent with the real data examples, which adds to the practical significance of our study. Therefore, our future work will focus on further practical data analysis using the proposed methods. In addition, considering the high dimensionality of many real datasets, one of our next key steps is to develop estimators based on expectile regression theory in such a scenario; this is one of the important research tasks that we will undertake.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/e25081204/s1, Table S1: RMSE comparison of various estimation methods, ϵ t N ( 0 , 1 ) ; Table S2: RMSE comparison of various estimation methods, ϵ t t ( 6 ) ; Table S3: RMSE comparison of various estimation methods, ϵ t χ 2 ( 4 ) ; Table S4: Parameter estimate of various estimation methods, ϵ t N ( 0 , 1 ) ; Table S5: Parameter estimate of various estimation methods, ϵ t t ( 6 ) ; Table S6: Parameter estimate of various estimation methods, ϵ t χ 2 ( 4 ) .

Author Contributions

Conceptualization and methodology, X.L., Y.W. and Y.Z.; software and writing—original draft preparation, X.L. and Z.T.; writing—review and editing, X.L., Y.W. and Y.Z.; supervision, Y.W. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the Natural Sciences and Engineering Research Council of Canada (RGPIN-2017-05720, RGPIN-2023-05655), the State Key Program of the National Natural Science Foundation of China (71931004) and the National Natural Science Foundation of China (92046005).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MLE: Maximum Likelihood Estimation
LS: Least Squares
ER: Expectile Regression
QR: Quantile Regression
CER: Composite Expectile Regression
WCER: Weighted Composite Expectile Regression
EVaR: Expectile-based Value at Risk
DTARCH: Double-Threshold Autoregressive Conditional Heteroscedastic
RMSE: Root Mean Squared Error

Appendix A

This appendix contains proofs of the various theorems and corollaries used herein. The proofs of Theorem 1, Corollary 1 and Theorem 2 are similar to those in [42,43] and are therefore not repeated in detail here. Theorem 3 is the special case of Theorem 4 with k = 1 and ω = 1, and hence its proof is omitted. Corollary 2 can be obtained in the same way as Theorem 4. Thus, we give only the detailed proof of Theorem 4, for which the following lemmas are needed.
Lemma A1.
Suppose that the conditions (C2), (C4) and (C5) hold. Then,
$$ \ell_n(v,\delta,u)=\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}\left[Q_{\tau_k}\left(S_{tk}^{*}-\Delta_{tk}\right)-Q_{\tau_k}\left(S_{tk}^{*}\right)\right], $$
which is defined in (A11), can be written as
$$ \ell_n(v,\delta,u)=-L_n^{\top}\theta+\theta^{\top}G\theta+o_p(1), $$
where $\theta=\left(v^{\top},\delta^{\top},u^{\top}\right)^{\top}$; $L_n=\left(L_{n1}^{\top},L_{n2}^{\top},L_{n3}^{\top}\right)^{\top}$, with
$$ L_{n1}=\left(\omega_1 q_{n,1},\ldots,\omega_K q_{n,K}\right)^{\top},\qquad q_{n,k}=\frac{1}{\sqrt{n}}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\quad(k=1,\ldots,K), $$
$$ L_{n2}=\frac{1}{\sqrt{n}}\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)r_t,\qquad L_{n3}=\frac{1}{\sqrt{n}}\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\frac{z_t}{h_t}; $$
$G=\begin{pmatrix}G_{11}&G_{12}\\G_{21}&G_{22}\end{pmatrix}$ is a block matrix with blocks
$$ G_{11}=\mathrm{diag}\left\{\omega_1\left[(1-2\tau_1)G\left(c_{\tau_1}^{*}\right)+\tau_1\right],\ldots,\omega_K\left[(1-2\tau_K)G\left(c_{\tau_K}^{*}\right)+\tau_K\right]\right\}, $$
$$ G_{12}=G_{11}\mathbf{1}\mu^{\top},\qquad G_{21}=\mu\mathbf{1}^{\top}G_{11},\qquad G_{22}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)\Omega. $$
Proof of Lemma A1.
To facilitate the proof, we denote by
$$ H_t=\begin{pmatrix}H_t^{11}&H_t^{12}\\H_t^{21}&H_t^{22}\end{pmatrix}, $$
where $H_t=-\left.\dfrac{\partial^{2}\log h_t(\alpha,\beta)}{\partial\gamma\,\partial\gamma^{\top}}\right|_{\gamma=\gamma^{*}}$ is a $2\times 2$ block matrix with $\gamma=\left(\alpha^{\top},\beta^{\top}\right)^{\top}$, $\gamma^{*}=\left(\alpha^{*\top},\beta^{*\top}\right)^{\top}$, and
$$ H=E[H_t]=\begin{pmatrix}H^{11}&H^{12}\\H^{21}&H^{22}\end{pmatrix}. $$
Note that for an arbitrary positive number a, we have
$$ P\left(\left|\frac{x_t^{\top}\delta}{\sqrt{n}\,\epsilon_t}\right|<a\right)\to 1. $$
Then it follows from Taylor's expansion of $\log|\epsilon_t(\alpha)|$ that
$$ \log|\epsilon_t(\alpha)|-\log|\epsilon_t(\alpha^{*})|=-\frac{x_t^{\top}\delta}{\sqrt{n}\,\epsilon_t}-\frac{1}{2n}\,\frac{\delta^{\top}x_tx_t^{\top}\delta}{\epsilon_t^{2}}+o_p\left(n^{-1}\right).\tag{A1} $$
And by Taylor's expansion of $\log h_t(\alpha,\beta)$, we have
$$ \log h_t(\alpha,\beta)-\log h_t(\alpha^{*},\beta^{*})=\frac{f_t^{\top}\delta}{\sqrt{n}\,h_t}+\frac{z_t^{\top}u}{\sqrt{n}\,h_t}-\frac{1}{2n}\delta^{\top}H_t^{11}\delta-\frac{1}{2n}u^{\top}H_t^{22}u-\frac{1}{n}\delta^{\top}H_t^{12}u+o_p\left(n^{-1}\right),\tag{A2} $$
where
$$ f_t=-\sum_{j=1}^{m}I_{t,j}\left(0,\ \operatorname{sgn}(\epsilon_{t-1})x_{t-1},\ \ldots,\ \operatorname{sgn}(\epsilon_{t-q_j})x_{t-q_j}\right)\beta^{(j)}. $$
By substituting Equations (A1) and (A2) into $\Delta_{tk}$, whose definition is given in the proof of Theorem 4, we obtain
$$ \Delta_{tk}=\frac{v_k}{\sqrt{n}}+\frac{z_t^{\top}u}{\sqrt{n}\,h_t}+\frac{r_t^{\top}\delta}{\sqrt{n}}+\frac{1}{2n}\,\frac{\delta^{\top}x_tx_t^{\top}\delta}{\epsilon_t^{2}}-\frac{1}{2n}\delta^{\top}H_t^{11}\delta-\frac{1}{2n}u^{\top}H_t^{22}u-\frac{1}{n}\delta^{\top}H_t^{12}u+o_p\left(n^{-1}\right), $$
where $r_t=x_t/\epsilon_t+f_t/h_t$.
Note that $\ell_n(v,\delta,u)$ is convex in $(v,\delta,u)$. By the convexity lemma in [53], it therefore suffices to show that $\ell_n(v,\delta,u)$ converges pointwise to its conditional expectation, since pointwise convergence of convex functions is automatically uniform on any compact set of $(v,\delta,u)$. Write
$$ \ell_n(v,\delta,u)=\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}E\left[Q_{\tau_k}\left(S_{tk}^{*}-\Delta_{tk}\right)-Q_{\tau_k}\left(S_{tk}^{*}\right)+\dot{Q}_{\tau_k}\left(S_{tk}^{*}\right)\Delta_{tk}\,\middle|\,x_t,z_t\right]-\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(S_{tk}^{*}\right)\Delta_{tk}+R_{n,k}(v,\delta,u),\tag{A4} $$
where
$$ Q_{\tau_k}\left(S_{tk}^{*}-\Delta_{tk}\right)-Q_{\tau_k}\left(S_{tk}^{*}\right)+\dot{Q}_{\tau_k}\left(S_{tk}^{*}\right)\Delta_{tk}=(1-2\tau_k)\left(S_{tk}^{*}-\Delta_{tk}\right)^{2}\left[I\left(S_{tk}^{*}\le\Delta_{tk}\right)-I\left(S_{tk}^{*}\le 0\right)\right]+(1-\tau_k)\Delta_{tk}^{2}\,I\left(S_{tk}^{*}\le 0\right)+\tau_k\Delta_{tk}^{2}\,I\left(S_{tk}^{*}>0\right), $$
and
$$ E\left[I\left(S_{tk}^{*}<\Delta_{tk}\right)-I\left(S_{tk}^{*}<0\right)\,\middle|\,x_t,z_t\right]=g\left(c_{\tau_k}^{*}\right)\Delta_{tk}+\frac{1}{2}\dot{g}\left(c_{\tau_k}^{*}\right)\Delta_{tk}^{2}+o\left(\Delta_{tk}^{2}\right), $$
$$ E\left[e_t\left\{I\left(S_{tk}^{*}<\Delta_{tk}\right)-I\left(S_{tk}^{*}<0\right)\right\}\,\middle|\,x_t,z_t\right]=c_{\tau_k}^{*}g\left(c_{\tau_k}^{*}\right)\Delta_{tk}+\frac{1}{2}\left[g\left(c_{\tau_k}^{*}\right)+c_{\tau_k}^{*}\dot{g}\left(c_{\tau_k}^{*}\right)\right]\Delta_{tk}^{2}+o\left(\Delta_{tk}^{2}\right), $$
$$ E\left[e_t^{2}\left\{I\left(S_{tk}^{*}<\Delta_{tk}\right)-I\left(S_{tk}^{*}<0\right)\right\}\,\middle|\,x_t,z_t\right]=c_{\tau_k}^{*2}g\left(c_{\tau_k}^{*}\right)\Delta_{tk}+\frac{1}{2}\left[2c_{\tau_k}^{*}g\left(c_{\tau_k}^{*}\right)+c_{\tau_k}^{*2}\dot{g}\left(c_{\tau_k}^{*}\right)\right]\Delta_{tk}^{2}+o\left(\Delta_{tk}^{2}\right). $$
From the above equations, we obtain
$$ \sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}E\left[Q_{\tau_k}\left(S_{tk}^{*}-\Delta_{tk}\right)-Q_{\tau_k}\left(S_{tk}^{*}\right)+\dot{Q}_{\tau_k}\left(S_{tk}^{*}\right)\Delta_{tk}\,\middle|\,x_t,z_t\right]=\sum_{k=1}^{K}\omega_k\left[(1-2\tau_k)G\left(c_{\tau_k}^{*}\right)+\tau_k\right]\left(v_k,\delta^{\top},u^{\top}\right)\begin{pmatrix}\frac{1}{n}\sum_{t=s+1}^{n}1&\frac{1}{n}\sum_{t=s+1}^{n}r_t^{\top}&\frac{1}{n}\sum_{t=s+1}^{n}\frac{z_t^{\top}}{h_t}\\[2pt]\frac{1}{n}\sum_{t=s+1}^{n}r_t&\frac{1}{n}\sum_{t=s+1}^{n}r_tr_t^{\top}&\frac{1}{n}\sum_{t=s+1}^{n}\frac{r_tz_t^{\top}}{h_t}\\[2pt]\frac{1}{n}\sum_{t=s+1}^{n}\frac{z_t}{h_t}&\frac{1}{n}\sum_{t=s+1}^{n}\frac{z_tr_t^{\top}}{h_t}&\frac{1}{n}\sum_{t=s+1}^{n}\frac{z_tz_t^{\top}}{h_t^{2}}\end{pmatrix}\begin{pmatrix}v_k\\\delta\\u\end{pmatrix}+O_p\left(\frac{1}{\sqrt{n}}\right). $$
By Chebyshev's weak law of large numbers, we obtain
$$ \begin{pmatrix}\frac{1}{n}\sum_{t=s+1}^{n}1&\frac{1}{n}\sum_{t=s+1}^{n}r_t^{\top}&\frac{1}{n}\sum_{t=s+1}^{n}\frac{z_t^{\top}}{h_t}\\[2pt]\frac{1}{n}\sum_{t=s+1}^{n}r_t&\frac{1}{n}\sum_{t=s+1}^{n}r_tr_t^{\top}&\frac{1}{n}\sum_{t=s+1}^{n}\frac{r_tz_t^{\top}}{h_t}\\[2pt]\frac{1}{n}\sum_{t=s+1}^{n}\frac{z_t}{h_t}&\frac{1}{n}\sum_{t=s+1}^{n}\frac{z_tr_t^{\top}}{h_t}&\frac{1}{n}\sum_{t=s+1}^{n}\frac{z_tz_t^{\top}}{h_t^{2}}\end{pmatrix}\xrightarrow{P}\begin{pmatrix}1&\mu_r^{\top}&\mu_z^{\top}\\\mu_r&\Omega_r&\Omega_{rz}\\\mu_z&\Omega_{rz}^{\top}&\Omega_z\end{pmatrix}\equiv\Gamma. $$
By substituting the above limits into Equation (A4), we obtain
$$ \ell_n(v,\delta,u)=-\sum_{k=1}^{K}\omega_kv_k\frac{1}{\sqrt{n}}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)-\sum_{k=1}^{K}\omega_k\frac{1}{\sqrt{n}}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)r_t^{\top}\delta-\sum_{k=1}^{K}\omega_k\frac{1}{\sqrt{n}}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\frac{z_t^{\top}u}{h_t}+\sum_{k=1}^{K}\omega_k\,\delta^{\top}\left[\frac{1}{n}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)H_t^{12}\right]u+\sum_{k=1}^{K}\omega_k\,u^{\top}\left[\frac{1}{2n}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)H_t^{22}\right]u+\sum_{k=1}^{K}\omega_k\,\delta^{\top}\left[\frac{1}{2n}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\left(H_t^{11}-\frac{x_tx_t^{\top}}{\epsilon_t^{2}}\right)\right]\delta+\sum_{k=1}^{K}\omega_k\left[(1-2\tau_k)G\left(c_{\tau_k}^{*}\right)+\tau_k\right]\theta_k^{\top}\Gamma\theta_k+o_p(1)+R_{n,k}(v,\delta,u), $$
where $\theta_k=\left(v_k,\delta^{\top},u^{\top}\right)^{\top}$. Since $E\left[\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\right]=0$, Chebyshev's weak law of large numbers gives
$$ \frac{1}{n}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)H_t^{12}\xrightarrow{P}0,\qquad\frac{1}{n}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)H_t^{22}\xrightarrow{P}0,\qquad\frac{1}{n}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\left(H_t^{11}-\frac{x_tx_t^{\top}}{\epsilon_t^{2}}\right)\xrightarrow{P}0. $$
Subsequently, we obtain
$$ \ell_n(v,\delta,u)=-\sum_{k=1}^{K}\omega_k\frac{1}{\sqrt{n}}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)v_k-\sum_{k=1}^{K}\omega_k\frac{1}{\sqrt{n}}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)r_t^{\top}\delta-\sum_{k=1}^{K}\omega_k\frac{1}{\sqrt{n}}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\frac{z_t^{\top}u}{h_t}+\sum_{k=1}^{K}\omega_k\left[(1-2\tau_k)G\left(c_{\tau_k}^{*}\right)+\tau_k\right]\theta_k^{\top}\Gamma\theta_k+o_p(1)+R_{n,k}(v,\delta,u).\tag{A7} $$
According to Lemma A2,
$$ R_{n,k}(v,\delta,u)=o_p(1), $$
which, combined with (A7) and some straightforward calculations, yields the conclusion of Lemma A1. □
Lemma A2.
Suppose that the conditions (C2), (C4) and (C5) hold. Then,
$$ R_{n,k}(v,\delta,u)=o_p(1), $$
where $R_{n,k}(v,\delta,u)$ is defined in (A4).
Proof of Lemma A2.
Rearranging Equation (A4) and taking expectations, we obtain
$$ E\left[\ell_n(v,\delta,u)\right]=E\left\{E\left[\ell_n(v,\delta,u)\mid x,z\right]\right\}-E\left[\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(S_{tk}^{*}\right)\Delta_{tk}\right]+E\left[\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}E\left\{\dot{Q}_{\tau_k}\left(S_{tk}^{*}\right)\Delta_{tk}\mid x_t,z_t\right\}\right]+E\left[R_{n,k}(v,\delta,u)\right], $$
which yields
$$ E\left[R_{n,k}(v,\delta,u)\right]=0.\tag{A8} $$
According to Equations (A6) and (A11), we obtain
$$ R_{n,k}(v,\delta,u)=\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}\left[Q_{\tau_k}\left(S_{tk}^{*}-\Delta_{tk}\right)-Q_{\tau_k}\left(S_{tk}^{*}\right)+\dot{Q}_{\tau_k}\left(S_{tk}^{*}\right)\Delta_{tk}\right]-\sum_{k=1}^{K}\omega_k\left[(1-2\tau_k)G\left(c_{\tau_k}^{*}\right)+\tau_k\right]\theta_k^{\top}\Gamma\theta_k+o_p(1). $$
By Lemma 2 in [9], we obtain
$$ \mathrm{Var}\left[R_{n,k}(v,\delta,u)\right]=n\sum_{k=1}^{K}\omega_k\,\mathrm{Var}\left[Q_{\tau_k}\left(S_{tk}^{*}-\Delta_{tk}\right)-Q_{\tau_k}\left(S_{tk}^{*}\right)+\dot{Q}_{\tau_k}\left(S_{tk}^{*}\right)\Delta_{tk}\right]\le n\sum_{k=1}^{K}\omega_k\,E\left[Q_{\tau_k}\left(S_{tk}^{*}-\Delta_{tk}\right)-Q_{\tau_k}\left(S_{tk}^{*}\right)+\dot{Q}_{\tau_k}\left(S_{tk}^{*}\right)\Delta_{tk}\right]^{2}\le n\sum_{k=1}^{K}\omega_k\,E\left[4\Delta_{tk}^{4}\right]=O\left(n^{-1}\right), $$
i.e.,
$$ \mathrm{Var}\left[R_{n,k}(v,\delta,u)\right]=o(1).\tag{A10} $$
From (A8) and (A10), it follows by Chebyshev's inequality that
$$ R_{n,k}(v,\delta,u)=o_p(1). $$
This completes the proof of Lemma A2. □
Lemma A3.
Suppose that the conditions (C2), (C4) and (C5) hold. Then,
$$ L_n\xrightarrow{D}N\left(0,\Sigma^{0}\right), $$
where $L_n$ is defined in Lemma A1, and $\Sigma^{0}=\begin{pmatrix}\Sigma_{11}^{0}&\Sigma_{12}^{0}\\\Sigma_{21}^{0}&\Sigma_{22}^{0}\end{pmatrix}$ is a block matrix whose block $\Sigma_{11}^{0}$ is a $K\times K$ matrix with $(j,k)$th element
$$ \Sigma_{11}^{0}(j,k)=\omega_j\omega_kE\left[\dot{Q}_{\tau_j}\left(e_t-c_{\tau_j}^{*}\right)\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\right]; $$
$\Sigma_{12}^{0}=\Sigma_{11}^{0}\mathbf{1}\mu^{\top}$; $\Sigma_{21}^{0}=\mu\mathbf{1}^{\top}\Sigma_{11}^{0}$; $\Sigma_{22}^{0}=\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\Omega$.
Proof of Lemma A3.
Note that
$$ L_n=\begin{pmatrix}L_{n1}\\L_{n2}\\L_{n3}\end{pmatrix}=\begin{pmatrix}\frac{\omega_1}{\sqrt{n}}\sum_{t=s+1}^{n}\dot{Q}_{\tau_1}\left(e_t-c_{\tau_1}^{*}\right)\\\vdots\\\frac{\omega_K}{\sqrt{n}}\sum_{t=s+1}^{n}\dot{Q}_{\tau_K}\left(e_t-c_{\tau_K}^{*}\right)\\\frac{1}{\sqrt{n}}\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)r_t\\\frac{1}{\sqrt{n}}\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\frac{z_t}{h_t}\end{pmatrix}. $$
By the Cramér–Wold device and the central limit theorem, we obtain
$$ L_n-\sqrt{n}\,\mu_L\xrightarrow{D}N\left(0_{(K+p+q)\times 1},\Sigma^{0}\right). $$
We now calculate $\mu_L$ and $\Sigma^{0}$, respectively. It is easy to show that
$$ \mu_L=0_{(K+p+q)\times 1}. $$
Let
$$ \Sigma^{0}=\begin{pmatrix}\Sigma_{11}^{0}&\Sigma_{12}^{0}\\\Sigma_{21}^{0}&\Sigma_{22}^{0}\end{pmatrix}, $$
where
$$ \Sigma_{11}^{0}=E\left[\left(\omega_1\dot{Q}_{\tau_1}\left(e_t-c_{\tau_1}^{*}\right),\ldots,\omega_K\dot{Q}_{\tau_K}\left(e_t-c_{\tau_K}^{*}\right)\right)^{\top}\left(\omega_1\dot{Q}_{\tau_1}\left(e_t-c_{\tau_1}^{*}\right),\ldots,\omega_K\dot{Q}_{\tau_K}\left(e_t-c_{\tau_K}^{*}\right)\right)\right], $$
which is a $K\times K$ matrix with $(j,k)$th element
$$ \Sigma_{11}^{0}(j,k)=\omega_j\omega_kE\left[\dot{Q}_{\tau_j}\left(e_t-c_{\tau_j}^{*}\right)\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\right]=4\omega_j\omega_k\int\left[\tau_j\tau_k-\tau_jI\left(e_t\le c_{\tau_k}^{*}\right)-\tau_kI\left(e_t\le c_{\tau_j}^{*}\right)+I\left(e_t\le c_{\tau_j}^{*}\wedge c_{\tau_k}^{*}\right)\right]\left[e_t^{2}-\left(c_{\tau_j}^{*}+c_{\tau_k}^{*}\right)e_t+c_{\tau_j}^{*}c_{\tau_k}^{*}\right]dG(e_t). $$
We can obtain the expressions for $\Sigma_{12}^{0}$, $\Sigma_{21}^{0}$ and $\Sigma_{22}^{0}$ similarly. Thus,
$$ L_n\xrightarrow{D}N\left(0,\Sigma^{0}\right), $$
which completes the proof of Lemma A3. □
Proof of Theorem 4.
Let $\sqrt{n}\left(\alpha-\alpha^{*}\right)=\delta$, $\sqrt{n}\left(\beta-\beta^{*}\right)=u$, $\sqrt{n}\left(c_{\tau_k}-c_{\tau_k}^{*}\right)=v_k$, $v=\left(v_1,\ldots,v_K\right)^{\top}$, and
$$ S_{tk}=\log|\epsilon_t(\alpha)|-\log h_t(\alpha,\beta)-c_{\tau_k}=\log\left|\epsilon_t\left(\alpha^{*}+n^{-1/2}\delta\right)\right|-\log h_t\left(\alpha^{*}+n^{-1/2}\delta,\ \beta^{*}+n^{-1/2}u\right)-c_{\tau_k}^{*}-n^{-1/2}v_k. $$
Define $S_{tk}^{*}=\log|\epsilon_t(\alpha^{*})|-\log h_t(\alpha^{*},\beta^{*})-c_{\tau_k}^{*}=e_t-c_{\tau_k}^{*}$, $\Delta_{tk}=S_{tk}^{*}-S_{tk}$, and
$$ \ell_n(v,\delta,u)=\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}\left[Q_{\tau_k}\left(S_{tk}\right)-Q_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\right]=\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}\left[Q_{\tau_k}\left(S_{tk}^{*}-\Delta_{tk}\right)-Q_{\tau_k}\left(S_{tk}^{*}\right)\right].\tag{A11} $$
Then minimizing the objective in (9) is equivalent to minimizing $\ell_n(v,\delta,u)$. By Lemma A1, we obtain
$$ \ell_n(v,\delta,u)=-L_n^{\top}\theta+\frac{1}{2}\theta^{\top}(2G)\theta+o_p(1), $$
where $\theta=\left(v^{\top},\delta^{\top},u^{\top}\right)^{\top}$; $L_n=\left(L_{n1}^{\top},L_{n2}^{\top},L_{n3}^{\top}\right)^{\top}$, with
$$ L_{n1}=\left(\omega_1 q_{n,1},\ldots,\omega_K q_{n,K}\right)^{\top},\qquad q_{n,k}=\frac{1}{\sqrt{n}}\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\quad(k=1,\ldots,K), $$
$$ L_{n2}=\frac{1}{\sqrt{n}}\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)r_t,\qquad L_{n3}=\frac{1}{\sqrt{n}}\sum_{k=1}^{K}\omega_k\sum_{t=s+1}^{n}\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\frac{z_t}{h_t}; $$
$G=\begin{pmatrix}G_{11}&G_{12}\\G_{21}&G_{22}\end{pmatrix}$ is a block matrix with blocks
$$ G_{11}=\mathrm{diag}\left\{\omega_1\left[(1-2\tau_1)G\left(c_{\tau_1}^{*}\right)+\tau_1\right],\ldots,\omega_K\left[(1-2\tau_K)G\left(c_{\tau_K}^{*}\right)+\tau_K\right]\right\}, $$
$$ G_{12}=G_{11}\mathbf{1}\mu^{\top},\qquad G_{21}=\mu\mathbf{1}^{\top}G_{11},\qquad G_{22}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)\Omega. $$
We further factorize the matrix G:
$$ G=\begin{pmatrix}G_{11}&G_{11}\mathbf{1}\mu^{\top}\\\mu\mathbf{1}^{\top}G_{11}&\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)\Omega\end{pmatrix}=\begin{pmatrix}G_{11}^{1/2}&0\\\mu\mathbf{1}^{\top}G_{11}^{1/2}&I\end{pmatrix}\begin{pmatrix}I&0\\0&\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)\Pi\end{pmatrix}\begin{pmatrix}G_{11}^{1/2}&G_{11}^{1/2}\mathbf{1}\mu^{\top}\\0&I\end{pmatrix}=\begin{pmatrix}G_{11}^{1/2}&0\\\mu\mathbf{1}^{\top}G_{11}^{1/2}&\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{1/2}\Pi^{1/2}\end{pmatrix}\begin{pmatrix}G_{11}^{1/2}&0\\\mu\mathbf{1}^{\top}G_{11}^{1/2}&\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{1/2}\Pi^{1/2}\end{pmatrix}^{\top}, $$
that is to say, G is a positive definite matrix. Furthermore, according to Lemma A3, we obtain that
$$ L_n\xrightarrow{D}N\left(0,\Sigma^{0}\right), $$
where $\Sigma^{0}=\begin{pmatrix}\Sigma_{11}^{0}&\Sigma_{12}^{0}\\\Sigma_{21}^{0}&\Sigma_{22}^{0}\end{pmatrix}$ is a block matrix whose blocks are as follows: $\Sigma_{11}^{0}$ is a $K\times K$ matrix with $(j,k)$th element
$$ \Sigma_{11}^{0}(j,k)=\omega_j\omega_kE\left[\dot{Q}_{\tau_j}\left(e_t-c_{\tau_j}^{*}\right)\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\right]; $$
$\Sigma_{12}^{0}=\Sigma_{11}^{0}\mathbf{1}\mu^{\top}$; $\Sigma_{21}^{0}=\mu\mathbf{1}^{\top}\Sigma_{11}^{0}$; $\Sigma_{22}^{0}=\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\Omega$.
Therefore, by Theorem 2 in [54], we obtain
$$ \hat{\theta}\xrightarrow{D}N\left(0,\ \tfrac{1}{4}G^{-1}\Sigma^{0}G^{-1}\right). $$
Note that $\hat{\theta}=\sqrt{n}\left(\hat{c}_{\tau_1}-c_{\tau_1}^{*},\ldots,\hat{c}_{\tau_K}-c_{\tau_K}^{*},\left(\hat{\alpha}-\alpha^{*}\right)^{\top},\left(\hat{\beta}-\beta^{*}\right)^{\top}\right)^{\top}$. Hence
$$ \sqrt{n}\left(\hat{c}_{\tau_1}-c_{\tau_1}^{*},\ldots,\hat{c}_{\tau_K}-c_{\tau_K}^{*},\left(\hat{\alpha}-\alpha^{*}\right)^{\top},\left(\hat{\beta}-\beta^{*}\right)^{\top}\right)^{\top}\xrightarrow{D}N\left(0,\ \tfrac{1}{4}G^{-1}\Sigma^{0}G^{-1}\right), $$
and it remains to compute $\tfrac{1}{4}G^{-1}\Sigma^{0}G^{-1}$.
Denote $G^{-1}=\begin{pmatrix}G^{11}&G^{12}\\G^{21}&G^{22}\end{pmatrix}$. Applying the inversion rules for block matrices, the Schur complement is
$$ G_{22.1}=G_{22}-G_{21}G_{11}^{-1}G_{12}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)\Omega-\mu\mathbf{1}^{\top}G_{11}G_{11}^{-1}G_{11}\mathbf{1}\mu^{\top}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)\left(\Omega-\mu\mu^{\top}\right). $$
Because $\Omega_r=E\left[r_tr_t^{\top}\right]$, $\Pi_r=\mathrm{Var}(r_t)$ and $\mu_r=E[r_t]$, we have $\Pi_r=\Omega_r-\mu_r\mu_r^{\top}$; because $\Omega_z=E\left[z_tz_t^{\top}/h_t^{2}\right]$, $\Pi_z=\mathrm{Var}(z_t/h_t)$ and $\mu_z=E[z_t/h_t]$, we have $\Pi_z=\Omega_z-\mu_z\mu_z^{\top}$; and because $\Omega_{rz}=E\left[r_tz_t^{\top}/h_t\right]$ and $\Pi_{rz}=\mathrm{Cov}(r_t,z_t/h_t)$, we have $\Pi_{rz}=\Omega_{rz}-\mu_r\mu_z^{\top}$. Hence
$$ \Omega-\mu\mu^{\top}=\Pi. $$
Substituting this into $G_{22.1}$ yields
$$ G_{22.1}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)\Pi=\sum_{k=1}^{K}\omega_k\left[(1-2\tau_k)G\left(c_{\tau_k}^{*}\right)+\tau_k\right]\Pi. $$
So, we can obtain the expressions for $G^{11}$, $G^{12}$, $G^{21}$ and $G^{22}$ as follows:
$$ G^{11}=G_{11}^{-1}+G_{11}^{-1}G_{12}G_{22.1}^{-1}G_{21}G_{11}^{-1}=G_{11}^{-1}+\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-1}\left(\mu^{\top}\Pi^{-1}\mu\right)\mathbf{1}\mathbf{1}^{\top}=\mathrm{diag}\left\{\frac{1}{\omega_1\left[(1-2\tau_1)G\left(c_{\tau_1}^{*}\right)+\tau_1\right]},\ldots,\frac{1}{\omega_K\left[(1-2\tau_K)G\left(c_{\tau_K}^{*}\right)+\tau_K\right]}\right\}+\frac{\mu^{\top}\Pi^{-1}\mu}{\sum_{k=1}^{K}\omega_k\left[(1-2\tau_k)G\left(c_{\tau_k}^{*}\right)+\tau_k\right]}\mathbf{1}\mathbf{1}^{\top}; $$
$$ G^{12}=-G_{11}^{-1}G_{12}G_{22.1}^{-1}=-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-1}\mathbf{1}\mu^{\top}\Pi^{-1}=-\frac{\mathbf{1}\mu^{\top}\Pi^{-1}}{\sum_{k=1}^{K}\omega_k\left[(1-2\tau_k)G\left(c_{\tau_k}^{*}\right)+\tau_k\right]}; $$
$$ G^{21}=-G_{22.1}^{-1}G_{21}G_{11}^{-1}=-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-1}\Pi^{-1}\mu\mathbf{1}^{\top}=-\frac{\Pi^{-1}\mu\mathbf{1}^{\top}}{\sum_{k=1}^{K}\omega_k\left[(1-2\tau_k)G\left(c_{\tau_k}^{*}\right)+\tau_k\right]}; $$
and
$$ G^{22}=G_{22.1}^{-1}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-1}\Pi^{-1}=\frac{\Pi^{-1}}{\sum_{k=1}^{K}\omega_k\left[(1-2\tau_k)G\left(c_{\tau_k}^{*}\right)+\tau_k\right]}. $$
Denote
$$ \Sigma=\begin{pmatrix}\Sigma_{11}&\Sigma_{12}\\\Sigma_{21}&\Sigma_{22}\end{pmatrix}=\frac{1}{4}G^{-1}\Sigma^{0}G^{-1}=\frac{1}{4}\begin{pmatrix}G^{11}&G^{12}\\G^{21}&G^{22}\end{pmatrix}\begin{pmatrix}\Sigma_{11}^{0}&\Sigma_{12}^{0}\\\Sigma_{21}^{0}&\Sigma_{22}^{0}\end{pmatrix}\begin{pmatrix}G^{11}&G^{12}\\G^{21}&G^{22}\end{pmatrix}; $$
we calculate $\Sigma_{11}$, $\Sigma_{12}$, $\Sigma_{21}$ and $\Sigma_{22}$ in turn. First of all,
$$ \Sigma_{11}=\frac{1}{4}\left[G^{11}\Sigma_{11}^{0}G^{11}+G^{12}\Sigma_{21}^{0}G^{11}+G^{11}\Sigma_{12}^{0}G^{21}+G^{12}\Sigma_{22}^{0}G^{21}\right],\tag{A17} $$
where
$$ G^{11}\Sigma_{11}^{0}G^{11}=G_{11}^{-1}\Sigma_{11}^{0}G_{11}^{-1}+\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-1}\left(\mu^{\top}\Pi^{-1}\mu\right)\left[G_{11}^{-1}\Sigma_{11}^{0}\mathbf{1}\mathbf{1}^{\top}+\mathbf{1}\mathbf{1}^{\top}\Sigma_{11}^{0}G_{11}^{-1}\right]+\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mu^{\top}\Pi^{-1}\mu\right)^{2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\mathbf{1}\mathbf{1}^{\top};\tag{A18} $$
$$ G^{12}\Sigma_{21}^{0}G^{11}=-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-1}\left(\mu^{\top}\Pi^{-1}\mu\right)\mathbf{1}\mathbf{1}^{\top}\Sigma_{11}^{0}G_{11}^{-1}-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mu^{\top}\Pi^{-1}\mu\right)^{2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\mathbf{1}\mathbf{1}^{\top};\tag{A19} $$
$$ G^{11}\Sigma_{12}^{0}G^{21}=\left(G^{12}\Sigma_{21}^{0}G^{11}\right)^{\top}=-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-1}\left(\mu^{\top}\Pi^{-1}\mu\right)G_{11}^{-1}\Sigma_{11}^{0}\mathbf{1}\mathbf{1}^{\top}-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mu^{\top}\Pi^{-1}\mu\right)^{2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\mathbf{1}\mathbf{1}^{\top};\tag{A20} $$
$$ G^{12}\Sigma_{22}^{0}G^{21}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\mu^{\top}\Pi^{-1}\Omega\Pi^{-1}\mu\,\mathbf{1}\mathbf{1}^{\top}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\left[\mu^{\top}\Pi^{-1}\mu+\left(\mu^{\top}\Pi^{-1}\mu\right)^{2}\right]\mathbf{1}\mathbf{1}^{\top},\tag{A21} $$
using $\Omega=\Pi+\mu\mu^{\top}$.
So, by substituting the results of (A18)–(A21) into the expression for $\Sigma_{11}$ in (A17), the cross terms cancel and we obtain
$$ \Sigma_{11}=\frac{1}{4}G_{11}^{-1}\Sigma_{11}^{0}G_{11}^{-1}+\frac{1}{4}\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\left(\mu^{\top}\Pi^{-1}\mu\right)\mathbf{1}\mathbf{1}^{\top}=\xi+\sigma^{2}(\omega)\left(\mu^{\top}\Pi^{-1}\mu\right)\mathbf{1}\mathbf{1}^{\top}, $$
where $\xi$ is a $K\times K$ matrix with $(j,k)$th element
$$ \xi_{jk}=\frac{E\left[\dot{Q}_{\tau_j}\left(e_t-c_{\tau_j}^{*}\right)\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\right]}{4\left[(1-2\tau_j)G\left(c_{\tau_j}^{*}\right)+\tau_j\right]\left[(1-2\tau_k)G\left(c_{\tau_k}^{*}\right)+\tau_k\right]}, $$
$$ \sigma^{2}(\omega)=\frac{\omega^{\top}\Lambda\omega}{\left\{\sum_{k=1}^{K}\omega_k\left[(1-2\tau_k)G\left(c_{\tau_k}^{*}\right)+\tau_k\right]\right\}^{2}}, $$
and $\Lambda$ is a $K\times K$ matrix with $(j,k)$th element
$$ \Lambda(j,k)=\frac{1}{4}E\left[\dot{Q}_{\tau_j}\left(e_t-c_{\tau_j}^{*}\right)\dot{Q}_{\tau_k}\left(e_t-c_{\tau_k}^{*}\right)\right]=\int\left[\tau_j\tau_k-\tau_jI\left(e_t\le c_{\tau_k}^{*}\right)-\tau_kI\left(e_t\le c_{\tau_j}^{*}\right)+I\left(e_t\le c_{\tau_j}^{*}\wedge c_{\tau_k}^{*}\right)\right]\left[e_t^{2}-\left(c_{\tau_j}^{*}+c_{\tau_k}^{*}\right)e_t+c_{\tau_j}^{*}c_{\tau_k}^{*}\right]dG(e_t). $$
Secondly, we have
$$ \Sigma_{12}=\frac{1}{4}\left[G^{11}\Sigma_{11}^{0}G^{12}+G^{12}\Sigma_{21}^{0}G^{12}+G^{11}\Sigma_{12}^{0}G^{22}+G^{12}\Sigma_{22}^{0}G^{22}\right],\tag{A23} $$
where
$$ G^{11}\Sigma_{11}^{0}G^{12}=-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-1}G_{11}^{-1}\Sigma_{11}^{0}\mathbf{1}\mu^{\top}\Pi^{-1}-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mu^{\top}\Pi^{-1}\mu\right)\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\mathbf{1}\mu^{\top}\Pi^{-1};\tag{A24} $$
$$ G^{12}\Sigma_{21}^{0}G^{12}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mu^{\top}\Pi^{-1}\mu\right)\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\mathbf{1}\mu^{\top}\Pi^{-1};\tag{A25} $$
$$ G^{11}\Sigma_{12}^{0}G^{22}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-1}G_{11}^{-1}\Sigma_{11}^{0}\mathbf{1}\mu^{\top}\Pi^{-1}+\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mu^{\top}\Pi^{-1}\mu\right)\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\mathbf{1}\mu^{\top}\Pi^{-1};\tag{A26} $$
$$ G^{12}\Sigma_{22}^{0}G^{22}=-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\mathbf{1}\mu^{\top}\Pi^{-1}\Omega\Pi^{-1}=-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\mathbf{1}\mu^{\top}\Pi^{-1}-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mu^{\top}\Pi^{-1}\mu\right)\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\mathbf{1}\mu^{\top}\Pi^{-1}.\tag{A27} $$
So, by substituting the results of (A24)–(A27) into the expression for $\Sigma_{12}$ in (A23), we obtain
$$ \Sigma_{12}=-\frac{1}{4}\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\mathbf{1}\mu^{\top}\Pi^{-1}=-\sigma^{2}(\omega)\,\mathbf{1}\mu^{\top}\Pi^{-1}. $$
Thirdly, we have
$$ \Sigma_{21}=\Sigma_{12}^{\top}=-\sigma^{2}(\omega)\,\Pi^{-1}\mu\mathbf{1}^{\top}. $$
Fourthly, we have
$$ \Sigma_{22}=\frac{1}{4}\left[G^{21}\Sigma_{11}^{0}G^{12}+G^{22}\Sigma_{21}^{0}G^{12}+G^{21}\Sigma_{12}^{0}G^{22}+G^{22}\Sigma_{22}^{0}G^{22}\right],\tag{A30} $$
where
$$ G^{21}\Sigma_{11}^{0}G^{12}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\Pi^{-1}\mu\mu^{\top}\Pi^{-1};\tag{A31} $$
$$ G^{22}\Sigma_{21}^{0}G^{12}=-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\Pi^{-1}\mu\mu^{\top}\Pi^{-1};\tag{A32} $$
$$ G^{21}\Sigma_{12}^{0}G^{22}=\left(G^{22}\Sigma_{21}^{0}G^{12}\right)^{\top}=-\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\Pi^{-1}\mu\mu^{\top}\Pi^{-1};\tag{A33} $$
$$ G^{22}\Sigma_{22}^{0}G^{22}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\Pi^{-1}\Omega\Pi^{-1}=\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\left[\Pi^{-1}+\Pi^{-1}\mu\mu^{\top}\Pi^{-1}\right].\tag{A34} $$
So, by substituting the results of (A31)–(A34) into the expression for $\Sigma_{22}$ in (A30), we obtain
$$ \Sigma_{22}=\frac{1}{4}\left(\mathbf{1}^{\top}G_{11}\mathbf{1}\right)^{-2}\left(\mathbf{1}^{\top}\Sigma_{11}^{0}\mathbf{1}\right)\Pi^{-1}=\sigma^{2}(\omega)\,\Pi^{-1}. $$
Therefore, we obtain
$$ \sqrt{n}\left(\hat{c}_{\tau_1}-c_{\tau_1}^{*},\ldots,\hat{c}_{\tau_K}-c_{\tau_K}^{*},\left(\hat{\alpha}-\alpha^{*}\right)^{\top},\left(\hat{\beta}-\beta^{*}\right)^{\top}\right)^{\top}\xrightarrow{D}N\left(0,\Sigma\right), $$
where $\Sigma=\begin{pmatrix}\Sigma_{11}&\Sigma_{12}\\\Sigma_{21}&\Sigma_{22}\end{pmatrix}$. This concludes the proof of Theorem 4. □

Appendix B. Partial Results of Simulation Studies

Table A1. Parameter estimate of various estimation methods, ϵ_t ∼ N(0, 1).

n = 100
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2357  0.2902  0.4264  0.1868  0.0349  0.2864  0.4907  0.0638  0.4346  0.3369
LS         0.2342  0.2878  0.4228  0.1854  0.0354  0.2852  0.4881  0.0641  0.4333  0.3357
QR 0.25    0.2141  0.1569  0.4113  0.1019  0.0441  0.2249  0.4344  0.0789  0.3899  0.2959
QR 0.75    0.2148  0.1752  0.4134  0.1089  0.0439  0.2256  0.4356  0.0792  0.3901  0.2981
ER 0.25    0.2099  0.2123  0.4134  0.1099  0.0430  0.2257  0.4312  0.0794  0.3914  0.2953
ER 0.75    0.2084  0.2101  0.4129  0.1098  0.0441  0.2241  0.4346  0.0791  0.3913  0.2953
WCER       0.2159  0.2257  0.4265  0.1439  0.0354  0.2459  0.4753  0.0671  0.4056  0.3149

n = 300
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2417  0.2913  0.4396  0.1907  0.0327  0.2903  0.4936  0.0623  0.4403  0.3421
LS         0.2404  0.2902  0.4390  0.1902  0.0331  0.2902  0.4925  0.0631  0.4391  0.3404
QR 0.25    0.2209  0.2387  0.4242  0.1499  0.0401  0.2512  0.4548  0.0717  0.4024  0.3080
QR 0.75    0.2213  0.2389  0.4244  0.1508  0.0398  0.2508  0.4539  0.0729  0.4011  0.3079
ER 0.25    0.2203  0.2399  0.4233  0.1508  0.0399  0.2489  0.4553  0.0727  0.4013  0.3086
ER 0.75    0.2202  0.2380  0.4239  0.1511  0.0392  0.2513  0.4546  0.0741  0.4006  0.3089
WCER       0.2295  0.2604  0.4376  0.1701  0.0339  0.2824  0.4869  0.0642  0.4248  0.3302

n = 800
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2449  0.2928  0.4446  0.1925  0.0316  0.2944  0.4958  0.0617  0.4434  0.3449
LS         0.2441  0.2919  0.4443  0.1919  0.0318  0.2941  0.4951  0.0620  0.4430  0.3441
QR 0.25    0.2311  0.2578  0.4313  0.1641  0.0372  0.2588  0.4661  0.0687  0.4099  0.3191
QR 0.75    0.2306  0.2584  0.4321  0.1644  0.0371  0.2580  0.4656  0.0689  0.4101  0.3195
ER 0.25    0.2301  0.2579  0.4311  0.1658  0.0376  0.2594  0.4659  0.0692  0.4108  0.3188
ER 0.75    0.2294  0.2581  0.4314  0.1654  0.0374  0.2587  0.4648  0.0696  0.4106  0.3191
WCER       0.2409  0.2802  0.4418  0.1814  0.0322  0.2901  0.4918  0.0624  0.4337  0.3379

n = 1500
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2467  0.2955  0.4462  0.1954  0.0306  0.2977  0.4971  0.0605  0.4465  0.3473
LS         0.2462  0.2944  0.4460  0.1951  0.0308  0.2972  0.4968  0.0606  0.4460  0.3469
QR 0.25    0.2361  0.2732  0.4435  0.1862  0.0336  0.2704  0.4761  0.0651  0.4241  0.3269
QR 0.75    0.2368  0.2738  0.4437  0.1858  0.0339  0.2711  0.4758  0.0651  0.4239  0.3263
ER 0.25    0.2355  0.2721  0.4435  0.1864  0.0338  0.2701  0.4759  0.0649  0.4231  0.3277
ER 0.75    0.2364  0.2718  0.4432  0.1871  0.0341  0.2705  0.4755  0.0652  0.4229  0.3266
WCER       0.2446  0.2903  0.4459  0.1945  0.0307  0.2924  0.4964  0.0607  0.4451  0.3467

n = 2500
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2485  0.2978  0.4483  0.1976  0.0302  0.2988  0.4985  0.0602  0.4483  0.3487
LS         0.2483  0.2972  0.4479  0.1970  0.0304  0.2983  0.4980  0.06004 0.4478  0.3484
QR 0.25    0.2424  0.2858  0.4463  0.1924  0.0316  0.2858  0.4877  0.0625  0.4362  0.3373
QR 0.75    0.2439  0.2856  0.4465  0.1926  0.0319  0.2851  0.4875  0.0626  0.4364  0.3371
ER 0.25    0.2428  0.2853  0.4465  0.1925  0.0318  0.2853  0.4872  0.0624  0.4361  0.3378
ER 0.75    0.2437  0.2850  0.4462  0.1922  0.0320  0.2850  0.4874  0.0625  0.4363  0.3376
WCER       0.2474  0.2960  0.4477  0.1965  0.0305  0.2980  0.4979  0.0605  0.4475  0.3482
Table A2. Parameter estimate of various estimation methods, ϵ_t ∼ t(6).

n = 100
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2362  0.2762  0.4302  0.1822  0.0346  0.2869  0.4903  0.0638  0.4337  0.3394
LS         0.2135  0.2057  0.4181  0.2707  0.0409  0.2341  0.4424  0.0765  0.4004  0.3038
QR 0.25    0.2119  0.2044  0.4189  0.2756  0.0412  0.3689  0.4418  0.0759  0.3988  0.3030
QR 0.75    0.2121  0.2055  0.4180  0.1248  0.0410  0.3692  0.4411  0.0761  0.3980  0.3025
ER 0.25    0.2122  0.2039  0.4187  0.1250  0.0416  0.2299  0.4420  0.0767  0.3996  0.3036
ER 0.75    0.2118  0.2049  0.4176  0.1255  0.0422  0.2296  0.4416  0.0762  0.3988  0.3039
WCER       0.2179  0.2095  0.4272  0.1436  0.0365  0.2501  0.4851  0.0678  0.4106  0.3177

n = 300
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2424  0.2894  0.4436  0.1913  0.0329  0.2918  0.4948  0.0621  0.4406  0.3426
LS         0.2228  0.2446  0.4287  0.2421  0.0376  0.2563  0.4645  0.0705  0.4084  0.3156
QR 0.25    0.2221  0.2433  0.4259  0.2478  0.0379  0.3451  0.4633  0.0709  0.4045  0.3151
QR 0.75    0.2218  0.2430  0.4256  0.1524  0.0381  0.3453  0.4639  0.0707  0.4043  0.3159
ER 0.25    0.2225  0.2424  0.4264  0.1539  0.0381  0.2543  0.4624  0.0712  0.4044  0.3142
ER 0.75    0.2223  0.2428  0.4255  0.1534  0.0387  0.2551  0.4626  0.0716  0.4039  0.3151
WCER       0.2321  0.2646  0.4384  0.1708  0.0337  0.2821  0.4899  0.0641  0.4259  0.3313

n = 800
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2454  0.2922  0.4458  0.1936  0.0309  0.2947  0.4956  0.0611  0.4444  0.3458
LS         0.2319  0.2639  0.4325  0.2268  0.0343  0.2699  0.4720  0.0656  0.4203  0.3262
QR 0.25    0.2307  0.2625  0.4323  0.2288  0.0370  0.3311  0.4702  0.0669  0.4189  0.3260
QR 0.75    0.2305  0.2614  0.4324  0.1721  0.0369  0.3320  0.4705  0.0671  0.4184  0.3256
ER 0.25    0.2309  0.2628  0.4331  0.1709  0.0366  0.2683  0.4699  0.0678  0.4194  0.3261
ER 0.75    0.2300  0.2619  0.4329  0.1695  0.0368  0.2669  0.4687  0.0681  0.4183  0.3249
WCER       0.2412  0.2801  0.4423  0.1824  0.0314  0.2908  0.4913  0.0618  0.4346  0.3395

n = 1500
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2477  0.2949  0.4473  0.1955  0.0304  0.2969  0.4972  0.0607  0.4459  0.3466
LS         0.2397  0.2804  0.4398  0.2168  0.0323  0.2808  0.4813  0.0647  0.4309  0.3353
QR 0.25    0.2389  0.2799  0.4384  0.2176  0.0335  0.3203  0.4811  0.0653  0.4301  0.3349
QR 0.75    0.2383  0.2787  0.4379  0.1821  0.0337  0.3209  0.4808  0.0655  0.4297  0.3341
ER 0.25    0.2385  0.2792  0.4388  0.1823  0.0333  0.2792  0.4803  0.0651  0.4295  0.3348
ER 0.75    0.2379  0.2783  0.4382  0.1832  0.0338  0.2790  0.4809  0.0649  0.4284  0.3340
WCER       0.2449  0.2898  0.4469  0.1933  0.0308  0.2932  0.4954  0.0610  0.4407  0.3419

n = 2500
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2488  0.2977  0.4485  0.1976  0.0302  0.2983  0.4986  0.0604  0.4479  0.3482
LS         0.2438  0.2891  0.4441  0.2069  0.0315  0.2887  0.4898  0.0625  0.4401  0.3413
QR 0.25    0.2440  0.2888  0.4439  0.2073  0.0317  0.3117  0.4891  0.0627  0.4398  0.3408
QR 0.75    0.2437  0.2887  0.4436  0.1921  0.0319  0.3115  0.4889  0.0629  0.4396  0.3410
ER 0.25    0.2436  0.2889  0.4437  0.1922  0.0316  0.2881  0.4893  0.0633  0.4395  0.3411
ER 0.75    0.2437  0.2883  0.4434  0.1928  0.0321  0.2877  0.4887  0.0631  0.4397  0.3409
WCER       0.2475  0.2941  0.4481  0.1966  0.0304  0.2964  0.4976  0.0607  0.4452  0.3468
Table A3. Parameter estimate of various estimation methods, ϵ_t ∼ χ²(4).

n = 100
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2377  0.2772  0.4352  0.1869  0.0337  0.2883  0.4892  0.0651  0.4377  0.3397
LS         0.2126  0.2009  0.4018  0.2833  0.0411  0.2244  0.5609  0.0767  0.3953  0.3002
QR 0.25    0.2151  0.2048  0.4927  0.2839  0.0417  0.2239  0.5601  0.0769  0.3962  0.4001
QR 0.75    0.2148  0.2037  0.4944  0.1149  0.0421  0.2222  0.4396  0.0762  0.3956  0.4007
ER 0.25    0.2161  0.2057  0.4084  0.1179  0.0409  0.2255  0.4406  0.0757  0.3974  0.3006
ER 0.75    0.2153  0.2061  0.4069  0.1128  0.0417  0.2218  0.4401  0.0772  0.3947  0.2995
WCER       0.2239  0.2123  0.4289  0.1518  0.0365  0.2519  0.4757  0.0683  0.4181  0.3188

n = 300
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2414  0.2891  0.4439  0.1922  0.0318  0.2922  0.4951  0.0621  0.4419  0.3437
LS         0.2235  0.2456  0.4268  0.2344  0.0381  0.2655  0.5304  0.0701  0.4160  0.3227
QR 0.25    0.2242  0.2469  0.4722  0.2342  0.0379  0.2667  0.5307  0.0699  0.4169  0.3771
QR 0.75    0.2240  0.2459  0.4731  0.1651  0.0381  0.2661  0.4694  0.0698  0.4163  0.3773
ER 0.25    0.2248  0.2477  0.4281  0.1664  0.0373  0.2676  0.4699  0.0698  0.4174  0.3240
ER 0.75    0.2222  0.2445  0.4263  0.1649  0.0385  0.2642  0.4691  0.0712  0.4147  0.3224
WCER       0.2328  0.2667  0.4380  0.1728  0.0335  0.2839  0.4892  0.0638  0.4294  0.3329

n = 800
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2456  0.2944  0.4463  0.1949  0.0310  0.2956  0.4965  0.0612  0.4446  0.3459
LS         0.2314  0.2613  0.4336  0.2289  0.0363  0.2705  0.5227  0.0684  0.4225  0.3286
QR 0.25    0.2321  0.2629  0.4661  0.2281  0.0361  0.2709  0.5220  0.0681  0.4227  0.3717
QR 0.75    0.2317  0.2623  0.4667  0.1713  0.0365  0.2703  0.4775  0.0686  0.4221  0.3718
ER 0.25    0.2326  0.2633  0.4342  0.1723  0.0354  0.2714  0.4783  0.0675  0.4232  0.3295
ER 0.75    0.2309  0.2608  0.4331  0.1705  0.0367  0.2699  0.4768  0.0689  0.4218  0.3282
WCER       0.2416  0.2801  0.4426  0.1874  0.0320  0.2909  0.4917  0.0618  0.4355  0.3389

n = 1500
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2479  0.2958  0.4476  0.1965  0.0306  0.2972  0.4971  0.0605  0.4468  0.3467
LS         0.2357  0.2743  0.4379  0.2131  0.0344  0.2782  0.5192  0.0653  0.4305  0.3317
QR 0.25    0.2365  0.2747  0.4620  0.2129  0.0340  0.2784  0.5191  0.0650  0.4308  0.3678
QR 0.75    0.2361  0.2741  0.4627  0.1867  0.0345  0.2781  0.4805  0.0654  0.4306  0.3680
ER 0.25    0.2366  0.2752  0.4384  0.1873  0.0337  0.2786  0.4811  0.0648  0.4313  0.3331
ER 0.75    0.2351  0.2739  0.4373  0.1862  0.0347  0.2777  0.4803  0.0656  0.4299  0.3311
WCER       0.2442  0.2899  0.4469  0.1932  0.0308  0.2930  0.4955  0.0608  0.4401  0.3433

n = 2500
Estimate   α1(1)   α2(1)   α1(2)   α2(2)   β0(1)   β1(1)   β2(1)   β0(2)   β1(2)   β2(2)
MLE        0.2489  0.2978  0.4486  0.1982  0.0304  0.2985  0.4985  0.0603  0.4483  0.3482
LS         0.2418  0.2870  0.4433  0.2073  0.0322  0.2886  0.5101  0.0625  0.4401  0.3412
QR 0.25    0.2424  0.2872  0.4569  0.2076  0.0323  0.2884  0.5103  0.0627  0.4404  0.3590
QR 0.75    0.2416  0.2875  0.4571  0.1925  0.0324  0.2881  0.4895  0.0624  0.4402  0.3594
ER 0.25    0.2423  0.2878  0.4432  0.1928  0.0323  0.2885  0.4896  0.0624  0.4406  0.3408
ER 0.75    0.2415  0.2876  0.4430  0.1930  0.0322  0.2887  0.4893  0.0622  0.4404  0.3412
WCER       0.2472  0.2951  0.4480  0.1965  0.0306  0.2967  0.4979  0.0605  0.4461  0.3473

References

  1. Li, C.W.; Li, W.K. On a double-threshold autoregressive heteroscedastic time series model. J. Appl. Econom. 1996, 11, 253–274.
  2. Van Hui, Y.; Jiang, J. Robust modelling of DTARCH models. Econom. J. 2005, 8, 143–158.
  3. Jiang, J.; Jiang, X.; Song, X. Weighted composite quantile regression estimation of DTARCH models. Econom. J. 2014, 17, 1–23.
  4. Liu, X.; Song, X.; Zhou, Y. Likelihood ratio-type tests in weighted composite quantile regression of DTARCH models. Sci. China Math. 2019, 62, 2571–2590.
  5. Kuan, C.M.; Yeh, J.H.; Hsu, Y.C. Assessing value at risk with CARE, the conditional autoregressive expectile models. J. Econom. 2009, 150, 261–270.
  6. Newey, W.; Powell, J. Asymmetric least squares estimation and testing. Econometrica 1987, 55, 819–847.
  7. Efron, B. Regression percentiles using asymmetric squared error loss. Stat. Sin. 1991, 1, 93–125.
  8. Jones, M. Expectiles and m-quantiles are quantiles. Stat. Probab. Lett. 1994, 20, 149–153.
  9. Yao, Q.; Tong, H. Asymmetric least squares regression estimation: A nonparametric approach. J. Nonparametric Stat. 1996, 6, 273–292.
  10. Eberl, A.; Klar, B. Expectile-based measures of skewness. Scand. J. Stat. 2022, 49, 373–399.