Abstract
Many financial decisions, such as portfolio allocation, risk management, option pricing and hedging strategies, are based on forecasts of the conditional variances, covariances and correlations of financial returns. The paper presents an empirical comparison of several methods for predicting one-step-ahead conditional covariance matrices. These matrices are used as inputs to obtain out-of-sample minimum variance portfolios based on stocks belonging to the S&P500 index, from 2000 to 2017 and in several sub-periods. The analysis is done through several metrics, including standard deviation, turnover, net average return, information ratio and Sortino's ratio. We find that no method is the best in all scenarios and that the performance depends on the criterion, the period of analysis and the rebalancing strategy.
JEL Classification:
C13; C53; C58; G11
1. Introduction
Forecasting returns, volatilities and conditional correlations has attracted the interest of researchers and practitioners in finance since these factors are crucial, for example, in portfolio allocation, risk management, option pricing and hedging strategies; see, for instance, Engle (2009), Hlouskova et al. (2009) and Boudt et al. (2013) for some references.
A well-known stylised fact in multivariate time series of financial returns is that not only conditional variances but also conditional covariances and correlations evolve over time. To describe this evolution, several methods have been proposed in the literature. In general, these methods involve different ways to circumvent the issue of dimensionality. The treatment of this problem is vital for the estimation of large portfolios (composed of hundreds or thousands of assets). As noted by Engle et al. (2017), when dealing with portfolios composed of a thousand time series, many multivariate GARCH models present unsatisfactory performance or computational problems in their estimation. For some multivariate GARCH models, estimation problems arise even for smaller dimensions; see, for instance, Laurent et al. (2012), Caporin and McAleer (2014), Caporin and Paruolo (2015) and de Almeida et al. (2018).
Our empirical application is based on an investor who adopts the minimum variance criterion in order to decide on portfolio allocations. A very large body of literature in portfolio optimization considers this particular policy; see, for instance, Clarke et al. (2006, 2011) for extensive practitioner-oriented studies on the performance and composition of minimum variance portfolios. This policy can be seen as a particular case of the traditional mean-variance optimisation. The mean-variance problem, however, is known to be very sensitive to the estimation of the mean returns (Frahm 2010; Jagannathan and Ma 2003).1 Very often, the estimation error in the mean returns degrades the overall portfolio performance and introduces an undesirable level of portfolio turnover. In fact, existing evidence suggests that the performance of optimal portfolios that do not rely on estimated mean returns is usually better; see DeMiguel et al. (2009).
To obtain the minimum variance portfolio, the key input is the estimate of the conditional covariance matrix. As far as we know, there are few works in the literature comparing estimates of this matrix for large portfolios, with Creal et al. (2011), Hafner and Reznikova (2012), Engle et al. (2017), Nakagawa et al. (2018) and Moura and Santos (2018) being especially relevant. Given the myriad of models and methods proposed in the literature to estimate the covariance matrix, empirical comparisons of these estimates in large portfolios are most welcome.
The paper is intended to assess the performance of several methods to predict one-step-ahead conditional covariance matrices in large portfolios. This is done empirically, by comparing the out-of-sample performance of minimum variance portfolios based on S&P500 stocks traded from 2 January 2000 to 30 November 2017, using measures such as the average return (AV), standard deviation (SD), information ratio (IR), Sortino's ratio (SR) (Sortino and van der Meer 1991), turnover (TO) and the average portfolio return net of transaction costs. Since not all stocks of the index were traded during the whole period, we consider portfolios of 174 stocks. To assess the robustness of the results, we also analyse three sub-periods: the pre-crisis period (January 2004 to December 2007), the subprime crisis period (January 2008 to June 2009), and the post-crisis period (July 2009 to November 2017).
We consider several attractive methods and models, including recent proposals used by practitioners and academics to predict one-step-ahead conditional covariance matrices. They are selected mainly because they use different approaches to overcome the dimensionality problem. Specifically, the paper compares the DCC model as used in Engle et al. (2017), the DECO model of Engle and Kelly (2012), the OGARCH model of Alexander and Chibumba (1996), the RiskMetrics 1994 and RiskMetrics 2006 (Zumbach 2007) methods, the generalised principal volatility components analysis (GPVC) proposed by Li et al. (2016) as a generalisation of the procedure of Hu and Tsay (2014), and the robust version of the GPVC method proposed by Trucíos et al. (2019). DCC models are estimated using composite likelihood, as advocated in Pakel et al. (2014). In addition, the linear shrinkage (LS) and non-linear shrinkage (NLS) of Ledoit and Wolf (2004a) and Ledoit and Wolf (2012), respectively, are applied to all the previous methods. Therefore, compared to Engle et al. (2017), Hafner and Reznikova (2012) and Nakagawa et al. (2018), the set of competing methods is much larger and the shrinkage device is assessed for all the compared methods. We consider a total of 47 methods, including the equal-weighted portfolio strategy. This constitutes the main contribution of the paper.
The rest of the paper is organised as follows: Section 2 presents the methods and models used to predict the one-step-ahead conditional covariance matrix. It also presents the composite likelihood of Pakel et al. (2014), used to estimate the DCC model, and the shrinkage methods. The empirical application is given in Section 3. Section 4 concludes, and the estimation methods are listed in Appendix A.
2. The Forecast Methods
Denote by $r_{i,t}$ the return of the i-th asset at time t, $i = 1, \dots, N$, $t = 1, \dots, T$, where N is the number of assets under consideration to construct the portfolio and T denotes the sample size. For simplicity, consider that $E(r_t \mid \mathcal{F}_{t-1}) = 0$, where $\mathcal{F}_{t-1}$ denotes the information available at time $t-1$. Let $r_t = (r_{1,t}, \dots, r_{N,t})'$; the conditional covariance matrix is defined as $H_t = E(r_t r_t' \mid \mathcal{F}_{t-1})$, with elements $h_{ij,t}$. At time T we are interested in estimating $H_{T+1}$ in order to select a portfolio for the period $T+1$. In the following we present some methods to estimate it.
2.1. The RiskMetrics Methods
One of the most popular methods used in risk analysis is the RiskMetrics method developed by the RiskMetrics Group at JP Morgan. We call this the RiskMetrics 1994 (RM1994) method. The main feature of the RiskMetrics method is that the predicted volatility is a linear function of the present and past squared returns. Although it has been widely used, it has some problems. In order to overcome some of these problems, the same group developed the RM2006 method. Like the RM1994 method, the RM2006 method is also data-oriented, in the sense that it was calibrated and tested to have good performance with the majority of the target empirical data, and it was developed to take into account some of the stylised facts and weaknesses detected in the RM1994 method. The main modifications are the following. First, considering that volatility has a long-memory feature, the weights decay logarithmically instead of exponentially, as happens in the RM1994 method. Second, the weights depend on the forecast horizon. Third, the conditional distribution of the returns is not multivariate Gaussian; the distribution is based on the estimated devolatilised residuals and can be roughly described as a Student-t distribution with a scale correction. Finally, the return levels are modelled considering the lagged correlation between returns.
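To fix ideas, the RM1994 forecast is an exponentially weighted moving average of outer products of past returns. A minimal sketch in Python, assuming daily returns and the conventional smoothing constant $\lambda = 0.94$; the initialisation with a short sample covariance is our own choice, not a detail taken from the paper:

```python
import numpy as np

def rm1994_forecast(returns, lam=0.94):
    """One-step-ahead EWMA covariance forecast (RM1994-style recursion).

    returns : (T, N) array of demeaned daily returns.
    lam     : smoothing constant; 0.94 is the conventional daily value.
    """
    T, N = returns.shape
    H = np.cov(returns[:20].T)            # initialise with a short sample covariance
    for t in range(20, T):
        r = returns[t][:, None]           # column vector r_t
        H = lam * H + (1.0 - lam) * (r @ r.T)
    return H                              # forecast of the next-period covariance

# usage sketch with simulated data
rng = np.random.default_rng(0)
H_next = rm1994_forecast(rng.standard_normal((500, 5)) * 0.01)
```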
2.2. The CCC Model
The constant conditional correlation (CCC) model (Bollerslev 1990) is one of the simplest MGARCH models to estimate, since the variances are modelled independently and the covariances are obtained using the conditional standard deviations and a constant conditional correlation matrix. The conditional covariance matrix evolves according to $H_t = D_t R D_t$, with $D_t = \mathrm{diag}(h_{11,t}^{1/2}, \dots, h_{NN,t}^{1/2})$, where the $h_{ii,t}$ are the marginal univariate conditional variances and $R$ is the constant conditional correlation matrix. The advantage of the CCC model is its easy estimation; the main disadvantage is the strong assumption that conditional correlations are time-invariant. Engle (2002) extended this idea by allowing the conditional correlations to be dynamic, as detailed in the next section.
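Returning to the CCC forecast itself, a minimal sketch (Python) of how it could be assembled once the univariate conditional variances are available; the variance inputs are placeholders, not the GJR-GARCH estimates used in the paper:

```python
import numpy as np

def ccc_covariance(returns, cond_var):
    """CCC one-step-ahead covariance: H = D R D.

    returns  : (T, N) array of returns.
    cond_var : (T+1, N) array of univariate conditional variances
               (last row is the one-step-ahead forecast).
    """
    std_resid = returns / np.sqrt(cond_var[:-1])   # devolatilised residuals
    R = np.corrcoef(std_resid.T)                   # constant correlation estimate
    D = np.diag(np.sqrt(cond_var[-1]))             # forecast standard deviations
    return D @ R @ D

# usage sketch: cond_var would come from univariate GARCH-type fits
```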
2.3. The DCC Model
In this section, we describe the scalar DCC model of Engle (2002) as used in Pakel et al. (2014) and Engle et al. (2017), and the composite likelihood. The non-linear shrinkage method, which is also used to estimate the DCC model, is presented in Section 2.8. In the DCC model, the marginal univariate conditional variances are modelled first. Define the devolatilised residuals as $s_{i,t} = r_{i,t}/h_{ii,t}^{1/2}$ and let $s_t = (s_{1,t}, \dots, s_{N,t})'$. We use the DCC model with correlation targeting as in Engle et al. (2017). The conditional covariance matrix evolves according to:

$H_t = D_t R_t D_t$, $\quad R_t = \mathrm{diag}(Q_t)^{-1/2}\, Q_t\, \mathrm{diag}(Q_t)^{-1/2}$, $\quad Q_t = (1-\alpha-\beta)\, C + \alpha\, s_{t-1} s_{t-1}' + \beta\, Q_{t-1}$,

where $D_t$ is a diagonal matrix with the i-th element of the diagonal equal to $h_{ii,t}^{1/2}$, $C$ is the unconditional correlation matrix of the devolatilised residuals, and $R_t$ is the conditional correlation matrix at time t. The parameters $\alpha$ and $\beta$ are non-negative with $\alpha + \beta < 1$. We have $r_t \mid \mathcal{F}_{t-1} \sim D(0, H_t)$, where $D(0, H_t)$ means a multivariate distribution with mean zero and covariance matrix $H_t$.
The model is usually estimated in three stages. In each stage, the estimation is conditional on the estimates found in the previous stages. The stages are: (1) estimate the marginal conditional variances $h_{ii,t}$, usually assuming a GARCH(1,1) model for each series, and evaluate the devolatilised residuals; (2) select an estimator of the correlation target matrix C using the devolatilised residuals; and (3) estimate the parameters $\alpha$ and $\beta$. We will comment on stage one in the application section and on stage two in Section 2.8. In the third stage, even with only two parameters, one may face estimation problems with a large number of assets because it is necessary to invert the conditional covariance matrix (for each t). One way to overcome this problem is to use the composite (log-)likelihood2. This method was proposed in the 2008 version of Pakel et al. (2014). In the 2014 version, they showed that the estimators of $\alpha$ and $\beta$ given by maximizing the composite likelihood are consistent, although not efficient. They evaluate the composite likelihood by summing the likelihoods of all contiguous pairs. Thus, there are only bivariate terms and, for any contiguous pair, it is only necessary to invert a matrix of order two. For instance, let $r_{(i)} = (r_{i,1}, \dots, r_{i,T})'$, i.e., the series of returns of the i-th asset, and denote by $l_i(\alpha, \beta)$ the likelihood of the pair $(r_{(i)}, r_{(i+1)})$, assuming that each pair comes from a bivariate DCC model defined similarly to the model given by Equations (5–7). Then, the composite likelihood is given by:

$CL(\alpha, \beta) = \sum_{i=1}^{N-1} l_i(\alpha, \beta)$.
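To illustrate the construction, a minimal sketch of the pairwise composite log-likelihood in Python, assuming Gaussian innovations and taking the devolatilised residuals S and the target correlation matrix C as given; this is only an illustration of the idea, not the MFE Toolbox implementation used in the paper:

```python
import numpy as np

def dcc_pair_loglik(s_pair, C_pair, alpha, beta):
    """Gaussian correlation log-likelihood of one bivariate scalar-DCC pair."""
    Q = C_pair.copy()
    ll = 0.0
    for t in range(s_pair.shape[0]):
        d = np.sqrt(np.diag(Q))
        R = Q / np.outer(d, d)                      # R_t from the Q_t recursion
        s = s_pair[t]
        ll += -0.5 * (np.log(np.linalg.det(R)) + s @ np.linalg.solve(R, s))
        Q = (1 - alpha - beta) * C_pair + alpha * np.outer(s, s) + beta * Q
    return ll

def composite_loglik(S, C, alpha, beta):
    """Sum of bivariate log-likelihoods over contiguous pairs (i, i+1)."""
    N = S.shape[1]
    return sum(dcc_pair_loglik(S[:, [i, i + 1]],
                               C[np.ix_([i, i + 1], [i, i + 1])],
                               alpha, beta)
               for i in range(N - 1))

# usage sketch: maximise composite_loglik over (alpha, beta) with a numerical optimiser
```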
Engle et al. (2017) argue that the estimator of the conditional covariance matrix given by the DCC model, using composite likelihood in stage three and non-linear shrinkage for the estimation of the unconditional correlation matrix in stage two, is robust against model misspecification in large dimensions (large N).
2.4. The DECO Model
Engle and Kelly (2012) propose the dynamic equicorrelation (DECO) model as a trade-off between a model which imposes many restrictions on the covariance matrix and a less structured model. They contend that imposing too much structure can lead to efficient estimation when the restrictions are correct, but can break down in the presence of misspecification. On the other hand, the lack of restrictions may lead to the issue of dimensionality. Considering this trade-off, they propose a model where the cross-correlation between any pair of returns is the same on a given day, but can vary over time. In addition, as in the CCC and DCC models, the DECO model also assumes that the marginals are modelled by a univariate volatility model. Using the same notation, we have the devolatilised residuals $s_t$, and the covariance matrix is written as $H_t = D_t R_t D_t$ as in Equation (5). The equicorrelation matrix is given by:

$R_t = (1-\rho_t)\, I_N + \rho_t\, J_N$,

where $\rho_t$ is the equicorrelation, $I_N$ denotes the N-dimensional identity matrix and $J_N$ is the $N \times N$ matrix of ones. According to Engle and Kelly (2012), $R_t^{-1}$ exists if and only if $\rho_t \neq 1$ and $\rho_t \neq -1/(N-1)$, and $R_t$ is positive definite if and only if $-1/(N-1) < \rho_t < 1$. The evaluation of the likelihood is easy because we have closed forms for $R_t^{-1}$ and $\det(R_t)$, given by:

$R_t^{-1} = \frac{1}{1-\rho_t}\left[ I_N - \frac{\rho_t}{1+(N-1)\rho_t}\, J_N \right]$

and

$\det(R_t) = (1-\rho_t)^{N-1}\,\left[\,1+(N-1)\rho_t\,\right]$,

respectively. This description of the DECO model corresponds to a single block. The DECO model can also be used considering many blocks, as described in Engle and Kelly (2012).
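To see why the likelihood evaluation is cheap, the following minimal sketch (Python) computes one Gaussian log-likelihood term using the closed-form determinant and inverse; the equicorrelation value is taken as given here, whereas in the DECO model it follows its own dynamic recursion:

```python
import numpy as np

def deco_loglik_term(s_t, rho):
    """Gaussian correlation log-likelihood term under equicorrelation rho.

    Uses det(R) = (1-rho)^(N-1) * (1+(N-1)rho) and the closed-form inverse,
    so the cost is O(N) rather than O(N^3).
    """
    N = s_t.shape[0]
    log_det = (N - 1) * np.log1p(-rho) + np.log1p((N - 1) * rho)
    sum_s = s_t.sum()
    quad = (s_t @ s_t - rho * sum_s**2 / (1 + (N - 1) * rho)) / (1 - rho)
    return -0.5 * (log_det + quad)
```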
2.5. The OGARCH Model
Alexander and Chibumba (1996) propose the Orthogonal GARCH (OGARCH) model, a dimension reduction technique to model the conditional covariance matrix. The model intends to simplify the problem of modelling an N-dimensional system into modelling a system of k orthogonal components, where those components are obtained through principal component analysis (PCA). Since the components are orthogonal, the conditional covariance matrix of the whole system can be obtained as:

$H_t = P_k \Lambda_t P_k' + V_\varepsilon$,

where $P_k$ is an $N \times k$ matrix whose columns are the normalised eigenvectors associated with the unconditional covariance matrix, $\Lambda_t$ is a diagonal matrix whose elements are the conditional variances of the k principal orthogonal components associated with the k largest eigenvalues, and $V_\varepsilon$ is the covariance matrix of the errors, which can be ignored. The conditional variances of each component can be modelled by a GARCH-type model.
Alexander and Chibumba (1996) and Alexander (2002) emphasise the importance of using a number of components k much smaller than N. However, Bauwens et al. (2006) and Becker et al. (2015) suggest using $k = N$ to avoid problems related to the inverse of $H_t$. The OGARCH model with $k = N$ is a particular case of the GO-GARCH model (Van der Weide 2002).
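A minimal sketch of the OGARCH construction (Python); for brevity the component variances are forecast with a simple EWMA instead of the GJR-GARCH models used in the paper, and the error-covariance term is ignored, as discussed above:

```python
import numpy as np

def ogarch_forecast(returns, k, lam=0.94):
    """OGARCH-style one-step-ahead covariance using k principal components.

    returns : (T, N) array of demeaned returns.
    k       : number of principal components retained.
    """
    Sigma = np.cov(returns.T)
    eigval, eigvec = np.linalg.eigh(Sigma)
    order = np.argsort(eigval)[::-1][:k]
    P = eigvec[:, order]                      # N x k loadings (normalised eigenvectors)
    f = returns @ P                           # principal components
    # EWMA variance forecast for each component (GARCH-type models in the paper)
    var_f = np.var(f[:20], axis=0)
    for t in range(20, f.shape[0]):
        var_f = lam * var_f + (1 - lam) * f[t] ** 2
    return P @ np.diag(var_f) @ P.T           # ignores the error-covariance term
```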
2.6. The Generalised Principal Volatility Components Model
The generalised principal volatility components (GPVC) procedure is a dimension reduction technique recently proposed by Li et al. (2016), which decomposes a series into two groups of volatility components. The first group corresponds to a small number of components whose volatility evolves over time, while the second corresponds to components whose volatility is constant over time. The GPVC procedure considers an orthogonal matrix $M = (M_1, M_2)$ and decomposes the N-dimensional return vector into:

$r_t = M_1 z_{1,t} + M_2 z_{2,t}$,

with $z_{1,t} = M_1' r_t$ (the components with time-varying volatility) and $z_{2,t} = M_2' r_t$ (the components with constant volatility). The matrix $M$ is obtained through the decomposition $\Gamma = M \Lambda M'$, where $\Lambda$ is a diagonal matrix with elements given by the eigenvalues in decreasing order and $M$ is the associated matrix of normalised eigenvectors. The columns of the matrices $M_1$ and $M_2$ are the eigenvectors associated with the non-zero and zero eigenvalues, respectively, obtained from the eigenvalue decomposition of the matrix $\Gamma$. In practice, $\Gamma$ is constructed from weighted generalised cross-moments of the returns up to a maximum lag, where g is a positive integer that gives the maximum lag order considered, the weights are given by a weight function, and the construction involves the unconditional covariance matrix and a matrix norm; see Li et al. (2016) for the exact expression. Then, after some calculations, the conditional covariance matrix can be obtained by:

$H_t = M_1 H_{z_1,t} M_1' + M_2 \Sigma_{z_2} M_2'$,

where $H_{z_1,t}$ is the conditional covariance matrix of the volatility components with volatility evolving over time and the remaining terms are as defined previously3. The matrix $\Gamma$ is estimated by its sample counterpart (Equation (17)), obtained by replacing the unconditional covariance matrix and the cross-moments with their sample estimates. The estimated version of Equation (16) is obtained by replacing the true values with the estimated ones.
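A minimal sketch of the GPVC-style reconstruction (Python), taking the matrix $\Gamma$ as given and using a simple placeholder for the component covariance forecast; the paper models the time-varying components with GJR-cDCC specifications:

```python
import numpy as np

def gpvc_covariance(Gamma, returns, m):
    """Split returns into m components with time-varying volatility and N-m constant ones.

    Gamma   : (N, N) symmetric matrix whose eigen-decomposition defines M = (M1, M2).
    returns : (T, N) array of returns.
    m       : number of volatility components with time-varying volatility.
    """
    eigval, eigvec = np.linalg.eigh(Gamma)
    order = np.argsort(eigval)[::-1]
    M1, M2 = eigvec[:, order[:m]], eigvec[:, order[m:]]
    z1, z2 = returns @ M1, returns @ M2
    # placeholder: one EWMA-style update of the sample covariance of z1
    # (the paper uses GJR-cDCC models for these components)
    H_z1 = 0.06 * np.outer(z1[-1], z1[-1]) + 0.94 * np.cov(z1.T)
    Sigma_z2 = np.cov(z2.T)                        # constant covariance of z2
    return M1 @ H_z1 @ M1.T + M2 @ Sigma_z2 @ M2.T
```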
2.7. The Robust GPVC Model
Trucíos et al. (2019) show the non-robustness of the GPVC procedure of Li et al. (2016) and propose an alternative procedure to obtain volatility components that is robust to outliers. This procedure is based on a robust estimator of the unconditional covariance matrix, a weighted estimator of $\Gamma$, and robustified filters. The matrix in Equation (17) is replaced by a less outlier-sensitive, weighted counterpart (Equation (18)), in which observations with large robust squared Mahalanobis distances are downweighted.
The robust squared Mahalanobis distance is computed using robust estimates of the unconditional mean and covariance matrix. Trucíos et al. (2019) use the minimum covariance determinant (MCD) estimator of Rousseeuw (1984), implemented by the algorithm of Hubert et al. (2012). The robustified filters bound the influence of observations with large distances when filtering the volatility of the components; for their exact form, see Trucíos et al. (2019).
To avoid returns corresponding to periods of high volatility being considered as possible outliers, the robust procedure incorporates into the squared Mahalanobis distance a covariance matrix that evolves over time, which can be seen as a robust version of the RM1994 filter.
Finally, the conditional covariance matrix is obtained as in Equation (16).
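As an illustration of the robust screening described above, the following sketch (Python) computes robust squared Mahalanobis distances with the MCD estimator as implemented in scikit-learn; the chi-square cutoff and the 0.99 quantile are our own illustrative choices, and this estimator is only a stand-in for the deterministic algorithm of Hubert et al. (2012) used in the paper:

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def robust_distances(returns, quantile=0.99):
    """Robust squared Mahalanobis distances and an outlier flag.

    returns : (T, N) array of returns.
    """
    mcd = MinCovDet(random_state=0).fit(returns)
    d2 = mcd.mahalanobis(returns)               # squared distances to the MCD centre
    cutoff = chi2.ppf(quantile, df=returns.shape[1])
    return d2, d2 > cutoff                      # flag candidate outliers
```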
2.8. Linear and Non-Linear Shrinkage
Besides the prediction of the conditional covariance matrix $H_t$, in some of the aforementioned models we have to estimate an unconditional covariance or correlation matrix; for instance, the matrix $C$ in Equation (7) of the DCC model. Generally, the estimation of the unconditional correlation (covariance) matrix is done using the sample correlation (covariance) matrix. However, this is inefficient in the large-dimensional case because we could end up with a number of parameters of the same order of magnitude as the dataset, or even larger (see, for instance, the simulation study in the Appendix of Engle et al. (2017)). In general, comparing the eigenvalues of the true correlation matrix with the eigenvalues of the sample correlation matrix, there is a tendency to underestimate the smaller eigenvalues and overestimate the larger ones. A natural way to reduce this bias is to increase the smaller sample eigenvalues, decrease the larger ones and then reconstruct the estimate of the correlation matrix. This is the main idea behind the shrinkage method. Engle et al. (2017) analyse the use of three types of shrinkage for the estimation of the unconditional correlation matrix $C$ in Equation (7): the linear shrinkage of Ledoit and Wolf (2004b), with shrinkage target given by (a multiple of) the identity matrix; the linear shrinkage of Ledoit and Wolf (2004a), with shrinkage target given by the equicorrelation matrix; and the non-linear shrinkage of Ledoit and Wolf (2012). Using simulation, they conclude that the three types of shrinkage perform better than the sample correlation matrix in the estimation of $C$, with the best performance obtained by the non-linear shrinkage. They conclude that the application of non-linear shrinkage improves the estimation, and the improvement generally increases with the number of assets. In the application, they also apply the non-linear shrinkage to the estimated one-step-ahead conditional covariance matrix, which is not done in the simulation study. In the empirical application, they construct global minimum variance portfolios with sizes of up to 1000 assets, updated monthly. As in the simulation study, they construct portfolios with $H_t$ modelled by the DCC and CCC models and the RiskMetrics 2006 method. However, besides applying the linear and non-linear shrinkage to the target correlation matrix, they also apply the shrinkages to the one-step-ahead prediction of the volatility matrix. The best performance is achieved by the DCC model with the non-linear shrinkage applied only to the estimation of the intercept matrix, followed by the non-linear shrinkage applied both to the intercept matrix and to the one-step-ahead prediction matrix. We use the linear shrinkage towards the equicorrelation matrix because, in Engle et al. (2017), it presented slightly better performance than the shrinkage towards the identity matrix, although the estimator does not belong to the class of rotation-equivariant estimators.
For a light introduction to the main idea behind shrinkage, suppose we want to estimate the covariance matrix $\Sigma$ and we have an estimate $S$ based on a sample of size T. For instance, $S$ could be the sample covariance matrix and $\Sigma$ the population matrix (unconditional covariance matrix). This is the case in the estimation of the DCC model, where $\Sigma$ is the intercept matrix. When the ratio $N/T$, called the concentration ratio, becomes large, we have in-sample overfitting due to the excessive number of parameters, introducing a bias in the estimation of the eigenvalues. One way to correct this problem is through the shrinkage method.
For the linear shrinkage towards the equicorrelation matrix, denote by $s_{ij}$ the $(i,j)$ element of the estimate $S$. The mean of the estimated correlations is given by:

$\bar{\rho} = \frac{2}{N(N-1)} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \frac{s_{ij}}{\sqrt{s_{ii}\, s_{jj}}}$,

such that for the target matrix F we have $f_{ii} = s_{ii}$ and $f_{ij} = \bar{\rho}\,\sqrt{s_{ii}\, s_{jj}}$ for $i \neq j$. The shrinkage estimate is given by:

$S^{*} = \delta F + (1-\delta) S$,

where the shrinkage intensity $\delta \in [0,1]$ is such that it minimizes the expected quadratic loss, as in Ledoit and Wolf (2004a). For the shrinkage intensity $\delta$, define the quadratic loss function

$L(\delta) = \left\lVert \delta F + (1-\delta) S - \Sigma \right\rVert^{2}$.

Ledoit and Wolf (2004a) propose to use the shrinkage intensity that minimizes the risk function $R(\delta) = E[L(\delta)]$. The formula and the derivation of the estimated shrinkage intensity can be found in Appendix B of Ledoit and Wolf (2004a).
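A minimal sketch of the linear shrinkage towards the equicorrelation target (Python); the shrinkage intensity is left as an input, whereas in the paper it is estimated as in Ledoit and Wolf (2004a):

```python
import numpy as np

def shrink_to_equicorrelation(S, delta):
    """Linear shrinkage of a covariance matrix S towards the equicorrelation target F."""
    sd = np.sqrt(np.diag(S))
    R = S / np.outer(sd, sd)                      # correlation matrix implied by S
    N = S.shape[0]
    rho_bar = (R.sum() - N) / (N * (N - 1))       # mean of off-diagonal correlations
    F = rho_bar * np.outer(sd, sd)                # target: common correlation rho_bar
    np.fill_diagonal(F, np.diag(S))               # same variances as S
    return delta * F + (1.0 - delta) * S
```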
Regarding the non-linear shrinkage, let $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_N$, sorted in descending order, be the eigenvalues of the $N \times N$ estimate $S$, and $u_1, \dots, u_N$ the corresponding eigenvectors, such that:

$S = \sum_{i=1}^{N} \lambda_i\, u_i u_i'$.

For an investor holding a portfolio with weights $w$, the estimated variance is given by $w'Sw = \sum_{i=1}^{N} \lambda_i (u_i'w)^2$. The non-linear shrinkage of Ledoit and Wolf (2012) is a transformation from $(\lambda_1, \dots, \lambda_N)$ to $(\tilde{\lambda}_1, \dots, \tilde{\lambda}_N)$, such that substituting $\tilde{\lambda}_i$ for $\lambda_i$ in Equation (21) gives a consistent estimator of the out-of-sample variance $w'\Sigma w$. Denote by $\tau_1 \geq \dots \geq \tau_N$ the set of eigenvalues of $\Sigma$ in descending order. Ledoit and Wolf (2012) define QuEST functions $q_i(\cdot)$ such that the estimate of the population eigenvalues minimizes the Euclidean distance between the QuEST functions and the sample eigenvalues, i.e., it is given by:

$\hat{\tau} = \arg\min_{\tau} \sum_{i=1}^{N} \left[ q_i(\tau) - \lambda_i \right]^{2}$.
A definition of the QuEST functions and a rigorous exposition of non-linear shrinkage can be found in Ledoit and Wolf (2012), while a lighter presentation can be found in the Supplementary Material of Engle et al. (2017).
3. Empirical Application
3.1. Data and Methods
In this section, we implement the procedures described in Section 2 and use the predicted one-step-ahead conditional covariance matrix to construct the minimum variance portfolio (MVP) of the stocks used in the composition of the S&P 500 index, traded from 2 January 2000 to 30 November 2017. Because not all stocks of the index were traded during the whole period, we ended up with 174 stocks.
To evaluate the out-of-sample portfolio performance, we consider a rolling window scheme. The out-of-sample portfolio performance is evaluated in four different periods, namely: the pre-crisis period (January 2004 to December 2007, 1008 days), the subprime crisis period (January 2008 to June 2009, 378 days), the post-crisis period (July 2009 to November 2017, 2218 days), and the full period (January 2004 to November 2017, 3503 days). In each window, the one-step-ahead covariance matrix is estimated and the MVP weights, with and without short-sale constraints, are obtained. The MVP weights are rebalanced at both daily and monthly frequencies. In the latter case, we follow Engle et al. (2017); that is, we obtain the portfolio returns daily but update the weights monthly (following the common convention, we use 21 consecutive trading days as a month). Monthly updating is common in practice to reduce transaction costs.
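For reference, without short-sale constraints the minimum variance weights have the closed form $w = H^{-1}\mathbf{1}/(\mathbf{1}'H^{-1}\mathbf{1})$; with the constraints, the weights must be obtained numerically. A minimal sketch in Python (the solver choice is ours, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(H, no_short=True):
    """Minimum variance portfolio weights for a covariance forecast H."""
    N = H.shape[0]
    ones = np.ones(N)
    if not no_short:
        w = np.linalg.solve(H, ones)
        return w / w.sum()                       # closed-form unconstrained solution
    # long-only case: quadratic programme solved numerically
    res = minimize(lambda w: w @ H @ w, ones / N,
                   bounds=[(0.0, 1.0)] * N,
                   constraints={'type': 'eq', 'fun': lambda w: w.sum() - 1.0})
    return res.x
```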
The procedures described in Section 2 are combined with the linear and non-linear shrinkage estimators described in Section 2.8. The linear and non-linear shrinkage are applied at the beginning and/or at the end of the estimation procedure. A detailed description of each combination of the estimation procedures is given in Appendix A. In addition, for the sake of comparison, we also implement the naive equal-weighted portfolio. In line with Engle et al. (2017), Gambacciani and Paolella (2017) and Trucíos et al. (2018), among others, we consider the following annualised out-of-sample performance measures. Denote by $r^p_1, \dots, r^p_k$ the observed out-of-sample portfolio returns from a given method, where k is the length of the out-of-sample period. The measures considered in this paper, namely the annualised average portfolio return (AV), the standard deviation of the portfolio returns (SD), the information ratio (IR), Sortino's ratio (SR) and the average turnover (TO), are computed as follows:
- AV: the annualised value of $\bar{r}^p$, the average of the out-of-sample portfolio returns $r^p_1, \dots, r^p_k$.
- SD: the annualised standard deviation of the out-of-sample portfolio returns $r^p_1, \dots, r^p_k$.
- IR: AV/SD.
- SR: AV divided by the annualised downside deviation, which is computed from the mean of the squared returns $r^p_t$ that fall below the minimal acceptable return, taken to be zero; returns above it contribute zero.
- TO: the average, over the out-of-sample period, of $\sum_{j=1}^{N} |w_{j,t+1} - w_{j,t}|$, the sum of the absolute changes in the weights across assets, where $w_{j,t}$ is the portfolio weight at time t for the j-th asset and k is the number of out-of-sample portfolio returns.
As pointed out by Kirby and Ostdiek (2012), Santos and Ferreira (2017) and Olivares-Nadal and DeMiguel (2018), among others, transaction costs (c) can have an impact on the portfolio's performance. In order to take those costs into account, we also compute the portfolio returns net of transaction costs. For a given c, the portfolio return net of transaction costs at time t is obtained by charging the cost c on the volume traded at t (the sum of the absolute changes in the weights), and the annualised average portfolio return net of transaction costs is the annualised average of these net returns. We consider c = 20 bp (intermediate) and c = 50 bp (high level) transaction costs, where a basis point (bp) is a unit of measure commonly used in finance and is equivalent to 0.01%. The annualised average portfolio returns net of transaction costs considering c = 20 bp and c = 50 bp are denoted by AV20 and AV50, respectively.
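The following sketch (Python) shows how the performance measures could be computed from a series of daily out-of-sample portfolio returns and the corresponding weights; the annualisation with 252 trading days and the simple deduction of costs from returns are conventional assumptions rather than formulas taken from the paper:

```python
import numpy as np

def performance(returns, weights, cost_bp=20):
    """Annualised AV, SD, IR, SR, TO and AV net of transaction costs.

    returns : (k,) daily out-of-sample portfolio returns.
    weights : (k, N) portfolio weights used at each date.
    cost_bp : transaction cost in basis points (1 bp = 0.01%).
    """
    av = 252 * returns.mean()
    sd = np.sqrt(252) * returns.std(ddof=1)
    ir = av / sd
    downside = np.sqrt(252 * np.mean(np.minimum(returns, 0.0) ** 2))
    sr = av / downside                                  # Sortino ratio, MAR = 0
    trades = np.abs(np.diff(weights, axis=0)).sum(axis=1)
    to = trades.mean()                                  # average turnover
    net = returns[1:] - (cost_bp / 1e4) * trades        # net-of-cost returns
    av_net = 252 * net.mean()
    return {"AV": av, "SD": sd, "IR": ir, "SR": sr, "TO": to, "AVnet": av_net}
```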
3.2. Results
Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 report annualised out-of-sample performance measures for the MVP in the pre-crisis, crisis, post-crisis and full periods. Table 1, Table 2, Table 3 and Table 4 report the results for daily rebalanced portfolios, whereas Table 5, Table 6, Table 7 and Table 8 report the results for monthly rebalanced portfolios. We also have results for the MVP with no short-sale constraints. However, in this paper we focus on the results for the MVP with short-sale constraints and give a short summary of the main findings for the case without short-sale constraints. A detailed analysis of the case without short-sale constraints is given in the Supplementary Material.
Table 1.
Annualised performance measures: AV, SD, IR, SR and TO stand for the average, standard deviation, information ratio, Sortino’s ratio and turnover of the out-of-sample MVP returns. AV20 and AV50 stand for the average out-of-sample MVP return net of transaction costs considering 20 and 50 basis points, respectively. Period January 2004 to November 2017. The shaded cells denote the top five for each criterion. Weights are rebalanced on a daily basis considering short-selling constraints.
Table 2.
Annualised performance measures: AV, SD, IR, SR and TO stand for the average, standard deviation, information ratio, Sortino’s ratio and turnover of the out-of-sample MVP returns. AV20 and AV50 stand for the average out-of-sample MVP return net of transaction costs considering 20 and 50 basis points, respectively. Period January 2004 to December 2007. The shaded cells denote the top five for each criterion. Weights are rebalanced on a daily basis considering short-selling constraints.
Table 3.
Annualised performance measures: AV, SD, IR, SR and TO stand for the average, standard deviation, information ratio, Sortino’s ratio and turnover of the out-of-sample MVP returns. AV20 and AV50 stand for the average out-of-sample MVP return net of transaction costs considering 20 and 50 basis points, respectively. Period January 2008 to June 2009. The shaded cells denote the top five for each criterion. Weights are rebalanced on a daily basis considering short-selling constraints.
Table 4.
Annualised performance measures: AV, SD, IR, SR and TO stand for the average, standard deviation, information ratio, Sortino’s ratio and turnover of the out-of-sample MVP returns. AV20 and AV50 stand for the average out-of-sample MVP return net of transaction costs considering 20 and 50 basis points, respectively. Period July 2009 to November 2017. The shaded cells denote the top five for each criterion. Weights are rebalanced on a daily basis considering short-selling constraints.
Table 5.
Annualised performance measures: AV, SD, IR, SR and TO stand for the average, standard deviation, information ratio, Sortino’s ratio and turnover of the out-of-sample MVP returns. AV20 and AV50 stand for the average out-of-sample MVP return net of transaction costs considering 20 and 50 basis points, respectively. Period January 2004 to November 2017. The shaded cells denote the top five for each criterion. Weights are rebalanced on a monthly basis considering short-selling constraints.
Table 6.
Annualised performance measures: AV, SD, IR, SR and TO stand for the average, standard deviation, information ratio, Sortino’s ratio and turnover of the out-of-sample MVP returns. AV20 and AV50 stand for the average out-of-sample MVP return net of transaction costs considering 20 and 50 basis points, respectively. Period January 2004 to December 2007. The shaded cells denote the top five for each criterion. Weights are rebalanced on a monthly basis considering short-selling constraints.
Table 7.
Annualised performance measures: AV, SD, IR, SR and TO stand for the average, standard deviation, information ratio, Sortino’s ratio and turnover of the out-of-sample MVP returns. AV20 and AV50 stand for the average out-of-sample MVP return net of transaction costs considering 20 and 50 basis points, respectively. Period January 2008 to June 2009. The shaded cells denote the top five for each criterion. Weights are rebalanced on a monthly basis considering short-selling constraints.
Table 8.
Annualised performance measures: AV, SD, IR, SR and TO stand for the average, standard deviation, information ratio, Sortino’s ratio and turnover of the out-of-sample MVP returns. AV20 and AV50 stand for the average out-of-sample MVP return net of transaction costs considering 20 and 50 basis points, respectively. Period July 2009 to November 2017. The shaded cells denote the top five for each criterion. Weights are rebalanced on a monthly basis considering short-selling constraints.
In Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 we report (in parentheses) the rank of the methods according to the SD criterion in the second column. Moreover, for each criterion, the best five methods are highlighted in shaded cells. The equal-weighted portfolio strategy is also included in the comparison.
Taking into account the fact that portfolios are chosen in order to have the minimum variance, the analysis is first done according to the SD criterion. For portfolios rebalanced daily or monthly, the largest SD is reported by the equal-weight portfolio strategy. For portfolios rebalanced daily (Table 1, Table 2, Table 3 and Table 4), the five smallest SDs are obtained by the DCC-based methods, except in the crisis period, in which case the five smallest SDs are spread among the DCC, OGARCH and GPVC-based methods. In the crisis period, the smallest SD is obtained by the GPVC procedure with the non-linear shrinkage applied to the one-step-ahead conditional covariance matrix. For portfolios rebalanced monthly (Table 5, Table 6, Table 7 and Table 8), the smallest SDs are obtained by the RM2006-LS4, NLS-DCC, NLS-GPVC and RM2006-LS procedures for the full, pre-crisis, crisis and post-crisis periods, respectively.
The best performance in terms of the AV criterion differs depending on the period and the rebalancing strategy. For instance, for daily rebalancing the best performance in the full period is achieved by the RPVC, followed by the RPVC with non-linear shrinkage applied to the one-step-ahead conditional covariance matrix. However, for the pre-crisis, crisis and post-crisis periods, the best performance is achieved by the OGARCH with non-linear shrinkage applied to the unconditional covariance matrix (NLS-OGARCH), the RPVC with linear shrinkage applied to the one-step-ahead conditional covariance matrix (RPVC-LS) and the RiskMetrics method with linear shrinkage applied to the one-step-ahead conditional covariance matrix (RM1994-LS), respectively. For monthly rebalancing, the best performances in the full, pre-crisis, crisis and post-crisis periods are achieved by the RPVC, OGARCH-NLS, GPVC-LS and the equal-weight portfolio strategy, respectively.
In terms of average turnover, the five smallest average turnovers are in the OGARCH and GPVC groups, with the best performance being achieved by the OGARCH with non-linear shrinkage applied to the one-step-ahead conditional covariance matrix in almost all cases. The only two exceptions are observed in the crisis period, in which case the best performance is achieved by the GPVC procedure with non-linear shrinkage applied to the one-step-ahead conditional covariance matrix. Additionally, note that regardless of whether the portfolio is rebalanced daily or monthly, the average turnover of all dimension-reduction techniques is smaller than that of the procedures that do not reduce dimension.
As for the annualised average portfolio returns taking into account transaction costs, the procedures with the five largest values of AV20 and AV50 are the same procedures with the largest AV, except in some cases in the pre-crisis period, where one of the five largest values is obtained by the NLS-OGARCH-NLS procedure.
For each period, the five best methods in terms of the IR and SR criteria are the same (except in Table 8, where four methods are the same). We omit the analysis in the crisis period because these criteria take negative values. Overall, for daily rebalancing, RiskMetrics-based methods are among the best in the full and post-crisis periods, RPVC and RPVC-NLS are among the best in the full and pre-crisis periods, and NLS-OGARCH and LS-OGARCH are among the best in the pre-crisis period. For monthly rebalancing, some OGARCH-based methods are among the best in the pre-crisis and full periods, some CCC-based methods are among the best in the post-crisis and full periods, RM1994-LS is among the best for the post-crisis period, and RPVC is among the best for the full period.
The analysis of Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 reveals that none of the methods is the best in all scenarios and the performance depends on the criterion, the period and the rebalancing strategy. In this sense, the analysis focuses on the full period (Table 1 and Table 5) in order to account for periods with different volatility levels. When portfolios are rebalanced on a daily basis, we find that DCC-based methods are the best in terms of SD; RM2006-LS, RM2006-NLS, RPVC and RPVC-NLS are the best in terms of {AV, AV20, AV50} and {IR, SR}; and some OGARCH-based methods are the best regarding TO. For monthly rebalanced portfolios, the best methods in terms of SD are DCC, LS-DCC, NLS-DCC, RM2006 and RM2006-LS, whereas the best performances in terms of {AV, AV20, AV50} and {IR, SR} are given by (RPVC, RPVC-NLS), (OGARCH-NLS, NLS-OGARCH-NLS) and CCC. In addition, the equal-weighted strategy is the second best in terms of AV, but the worst regarding the SD, IR and SR criteria.
To show when the shrinkage method improves performance in terms of SD, the analysis is again focused on the full period (Table 1 and Table 5). For daily and monthly portfolio rebalancing: shrinkage always improves the performance of the RM1994 and GPVC methods (except LS-GPVC for monthly rebalancing), whereas it always worsens the DCC method; linear shrinkage at the end improves RM2006; linear/non-linear shrinkage applied only at the beginning improves DECO; OGARCH-NLS and NLS-OGARCH-NLS improve OGARCH; and LS-CCC improves CCC (as well as NLS-DCC for daily rebalancing). Additionally, for daily rebalancing, shrinkage always improves the performance of RPVC (except LS-RPVC), whereas for monthly rebalancing, linear shrinkage applied at the beginning and/or end improves RPVC. Nakagawa et al. (2018) also report that in some cases the use of non-linear shrinkage on the unconditional covariance matrix of the devolatilised returns in the DCC model increases the standard deviation of the out-of-sample portfolio returns.
We now discuss the effect of shrinkage in terms of AV. For daily rebalancing, shrinkage improves the performance of the RM2006 and DECO methods, and worsens the performance of the DCC and RPVC methods. In addition, CCC-NLS is better than CCC, RM1994-NLS is better than RM1994, and LS-GPVC is better than GPVC. For monthly rebalancing, shrinkage does not improve the performance of the CCC, DCC, GPVC and RPVC methods. In addition, RM2006-LS is better than RM2006, RM1994-NLS is better than RM1994, DECO-NLS and NLS-DECO-NLS are better than DECO, and OGARCH-NLS and NLS-OGARCH-NLS are better than OGARCH.
Finally, we list the main findings when short-selling is allowed in the optimisation of the portfolio variance. A detailed analysis of these cases is given in the Supplementary Material. First, none of the methods is the best in all scenarios and the performance depends on the criterion, the sample period and the portfolio rebalancing scheme. Second, the analysis of the full period reveals that, for daily rebalancing, DCC methods are the best regarding SD and are among the best in terms of IR and SR, while RM1994-LS and RM2006-LS are the best according to AV, AV20, AV50, IR and SR. For monthly rebalancing, DCC-LS and LS-DCC-LS are among the best in terms of SD, RM2006-NLS is the best in terms of SD and is among the best regarding IR and SR, and RM1994 and RM1994-LS are the first and second best in terms of AV, AV20 and AV50 but are among the worst in terms of SD. Third, the analysis of the turnover and average net returns in the case without short-sale constraints must be done carefully: since no limits are imposed on the portfolio weights, very large turnover values can be obtained and, consequently, the average return and the average return net of transaction costs can differ dramatically. Fourth, in many cases shrinkage improves the performance of the methods in terms of SD, and this improvement can be substantial. Fifth, the top-five models in terms of SD are the same in both restricted and unrestricted minimum variance portfolios for daily rebalancing, except in the crisis period.
4. Conclusions
The main conclusion of the paper is that none of the methods is the best in all scenarios and the performance depends on the criterion, the sample period, the portfolio rebalancing scheme and whether or not short-selling constraints are included in the portfolio optimisation process.
When short-selling constraints are included in the portfolio optimisation process, the main results can be summarised as follows. First, none of the methods is the best in all scenarios and the performance depends on the criterion, the sample period and the portfolio rebalancing scheme. Second, when considering the SD criterion, the five smallest SDs are obtained by the DCC-based methods, except in the crisis period, in which case the five smallest SDs are spread among the DCC, OGARCH and GPVC-based methods. In the crisis period, the smallest SD is obtained by the GPVC procedure with the non-linear shrinkage applied to the one-step-ahead conditional covariance matrix. For portfolios rebalanced monthly, the smallest SDs are obtained by the RM2006-LS, NLS-DCC, NLS-GPVC and RM2006-LS procedures for the full, pre-crisis, crisis and post-crisis periods, respectively. Third, unlike Engle et al. (2017) and Nakagawa et al. (2018), we do not find that applying non-linear shrinkage to the unconditional correlation matrix of the devolatilised returns improves the performance of the portfolio in terms of SD when the DCC model is used, and the same holds when it is applied to the other methods. It is important to point out that Engle et al. (2017) use portfolios of 1000 assets, Nakagawa et al. (2018) use portfolios of 100, 500 and 1000 assets, and we use a portfolio with 174 assets.
When short-selling is allowed in the optimisation of the portfolio variance, the main conclusions are the following: none of the methods is the best in all scenarios and the performance depends on the criterion, the sample period and the portfolio rebalancing scheme; in many cases shrinkage improves the performance of the methods in terms of SD, and this improvement can be substantial; and, for daily rebalancing, the top-five models in terms of SD are the same as those obtained when short-selling constraints are imposed, except in the crisis period. Finally, focusing on the full period, we can say that overall the DCC and RiskMetrics-based methods are the best, and that the analysis of the turnover and average net returns in the case without short-selling constraints should be done carefully.
Supplementary Materials
The following are available online at https://www.mdpi.com/2225-1146/7/2/19/s1, File: Covariance Prediction in Large Portfolio Allocation: Supplementary Material.
Author Contributions
This paper has been a collaborative effort, with all authors contributing equally to this work. This includes conceptualization and investigation of the main ideas in the manuscript, methodology proposals, and formal analysis, as well as all aspects of the writing process.
Funding
The first three authors acknowledge financial support from the São Paulo Research Foundation (FAPESP), grants 2016/18599-4, 2018/03012-3, 2013/00506-1 and 2018/04654-9. The fourth author is grateful to the National Council for Scientific and Technological Development (CNPq) for grant 303688/2016-5. The third author is also grateful to CNPq for grant 313035/2017-2.
Acknowledgments
The first three authors acknowledge support of the Centre for Applied Research on Econometrics, Finance and Statistics (CAREFS) and the Centre of Quantitative Studies in Economics and Finance (CEQEF). The authors are also grateful to two anonymous referees and the academic editor for providing useful comments and suggestions on an earlier version of the paper.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Estimation Methods
Here we present the detailed list of the estimation methods implemented in the paper. The marginal variances in the CCC, DCC and DECO models were modelled by the GJR(1,1) model (Glosten et al. 1993) and the parameters were estimated by quasi-maximum likelihood assuming a Student-t distribution. The volatility components in the GPVC and RPVC procedures were modelled by the GJR(1,1)-cDCC(1,1) model and its robust version proposed by Boudt et al. (2013) and Laurent et al. (2016), respectively. The univariate variances in the OGARCH model were also modelled by the GJR(1,1) model.
In the GPVC and RPVC procedures, the number of selected volatility components was estimated using the criteria of Ahn and Horenstein (2013), Bai and Ng (2002) and Kaiser-Guttman (Guttman 1954), and the ratio estimator proposed by Lam and Yao (2012). Following these criteria and the suggestions in Trucíos et al. (2019), we use one volatility component in the GPVC procedure and four volatility components in the RPVC procedure.
The CCC, DCC, DECO, RM1994 and RM2006 procedures were implemented using the MFE Matlab Toolbox of Kevin Sheppard. The OGARCH, GPVC and RPVC procedures were implemented in R (R Core Team 2017) using the R packages rugarch of Ghalanos (2017), Rcpp of Eddelbuettel and François (2011) and covRobust of Wang et al. (2017). For the shrinkage procedures, we used the R packages RiskPortfolios (Ardia et al. 2018) and nlshrink (Ramprasad 2016) for the linear and non-linear shrinkage, respectively, coupled with the MATLAB toolbox QuEST (Ledoit and Wolf 2017) for the non-linear shrinkage and the MATLAB function covCor5. Whenever a program presented other options, we used the default options.
CCC based-methods
- CCC: Estimated by quasi-maximum likelihood.
- LS-CCC: Estimated as in CCC, but with the unconditional covariance matrix (Equation (4)) estimated using linear shrinkage.
- NLS-CCC: Estimated as in LS-CCC, but replacing linear by the non-linear shrinkage.
- CCC-LS: Estimated as in CCC, with the application of the linear shrinkage to the one-step-ahead conditional covariance matrix .
- CCC-NLS: Estimated as in CCC-LS, but replacing linear by non-linear shrinkage.
- LS-CCC-LS: Estimated as in LS-CCC, with the application of linear shrinkage to the one-step-ahead conditional covariance matrix.
- NLS-CCC-NLS: Estimated as in NLS-CCC, with the application of non-linear shrinkage to the one-step-ahead conditional covariance matrix .
DCC based-methods
- DCC: Estimated by composite likelihood (Pakel et al. 2014) using consecutive pairs.
- LS-DCC: Estimated as in DCC, but with the unconditional covariance matrix of the devolatilised returns ( in Equation (7)) estimated using linear shrinkage.
- NLS-DCC: Estimated as in LS-DCC, but replacing linear by non-linear shrinkage.
- DCC-LS: Estimated as in DCC, with the application of linear shrinkage to the one-step-ahead conditional covariance matrix .
- DCC-NLS: Estimated as in DCC-LS, but replacing linear by non-linear shrinkage.
- LS-DCC-LS: Estimated as in LS-DCC, with the application of linear shrinkage to the one-step-ahead conditional covariance matrix .
- NLS-DCC-NLS: Estimated as in NLS-DCC, with the application of non-linear shrinkage to the one-step-ahead conditional covariance matrix .
DECO based-methods
- DECO: Estimated using a single block.
- LS-DECO: Estimated as in DECO, but the unconditional covariance matrix of the devolatilised returns is estimated using linear shrinkage.
- NLS-DECO: Estimated as in LS-DECO, but replacing linear by non-linear shrinkage.
- DECO-NLS: Estimated as in DECO, but with non-linear shrinkage applied to the one-step-ahead conditional covariance matrix.
- NLS-DECO-NLS: Estimated as in NLS-DECO, but with non-linear shrinkage also applied to the one-step-ahead conditional covariance matrix.
Because in the DECO model the estimated unconditional covariance matrix and the conditional correlation matrix are already equicorrelated, there is no sense in using linear shrinkage towards the equicorrelation matrix, since it has no effect.
RiskMetrics based-methods
- RM1994: RM1994 method.
- RM1994-LS: Estimated as in RM1994 with linear shrinkage applied to the one-step-ahead conditional covariance matrix .
- RM1994-NLS: Estimated as in RM1994-LS but replacing linear by non-linear shrinkage.
- RM20066: RM2006 method (Zumbach 2007).
- RM2006-LS: Estimated as in RM2006 with linear shrinkage applied to the one-step-ahead conditional covariance matrix .
- RM2006-NLS: Estimated as in RM2006-LS but replacing linear by non-linear shrinkage.
OGARCH based-methods
- OGARCH: The OGARCH model considers $k = N$ components.
- LS-OGARCH: Estimated as in OGARCH, but the unconditional covariance matrix used in the spectral decomposition is estimated using linear shrinkage.
- NLS-OGARCH: Estimated as in LS-OGARCH, but replacing linear by non-linear shrinkage.
- OGARCH-LS: Estimated as in OGARCH, with linear shrinkage applied to the one-step-ahead conditional covariance matrix.
- OGARCH-NLS: Estimated as in OGARCH-LS, but replacing linear by non-linear shrinkage.
- LS-OGARCH-LS: Estimated as in LS-OGARCH, but with linear shrinkage applied to the predicted one-step-ahead conditional covariance matrix.
- NLS-OGARCH-NLS: Estimated as in NLS-OGARCH, but with non-linear shrinkage applied to the predicted one-step-ahead conditional covariance matrix.
GPVC based-methods
- GPVC: The GPVC procedure considers one volatility component, as explained above. We use the same specifications as in Li et al. (2016).
- LS-GPVC: Estimated as in the GPVC model with the unconditional covariance matrix in Equation (17) estimated using linear shrinkage.
- NLS-GPVC: Estimated as in LS-GPVC, but replacing linear by non-linear shrinkage.
- GPVC-LS: Estimated as in GPVC with linear shrinkage applied to the one-step-ahead conditional covariance matrix .
- GPVC-NLS: Estimated as in GPVC-LS, but replacing linear by non-linear shrinkage.
- LS-GPVC-LS: Estimated as in LS-GPVC, with linear shrinkage applied to the predicted one-step-ahead conditional covariance matrix.
- NLS-GPVC-NLS: Estimated as in NLS-GPVC, with non-linear shrinkage applied to the predicted one-step-ahead conditional covariance matrix.
RPVC based-methods
- RPVC: The RPVC procedure considers four volatility components, as explained above. We use the same specifications as in Li et al. (2016) and the constant c as in Trucíos et al. (2019).
- LS-RPVC: Estimated as in RPVC, but linear shrinkage is applied to the robust unconditional covariance matrix used in Equation (18).
- NLS-RPVC: Estimated as in LS-RPVC, but replacing linear by non-linear shrinkage.
- RPVC-LS: Estimated as in RPVC with linear shrinkage applied to the one-step-ahead conditional covariance matrix .
- RPVC-NLS: Estimated as in RPVC-LS, but replacing linear by non-linear shrinkage.
- LS-RPVC-LS: Estimated as in LS-RPVC, with linear shrinkage applied to the predicted one-step-ahead conditional covariance matrix.
- NLS-RPVC-NLS: Estimated as in NLS-RPVC, with non-linear shrinkage applied to the predicted one-step-ahead conditional covariance matrix.
References
- Ahn, Seung C, and Alex R. Horenstein. 2013. Eigenvalue ratio test for the number of factors. Econometrica 81: 1203–27. [Google Scholar]
- Alexander, Carol. 2002. Principal component models for generating large GARCH covariance matrices. Economic Notes: Review of Banking, Finance and Monetary Economics 31: 337–59. [Google Scholar] [CrossRef]
- Alexander, Carol O., and Aubrey Muyeke Chibumba. 1996. Multivariate Orthogonal Factor GARCH. Sussex: University of Sussex Discussion Papers in Mathematics. [Google Scholar]
- Ardia, David, Kris Boudt, and Jean-Philippe Gagnon-Fleury. 2018. RiskPortfolios: Computation of Risk-Based Portfolios, R package version 2.1.2; Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2911021 (accessed on 28 June 2018).
- Bai, Jushan, and Serena Ng. 2002. Determining the number of factors in approximate factor models. Econometrica 70: 191–221. [Google Scholar] [CrossRef]
- Bauwens, Luc, Sébastien Laurent, and Jeroen V.K. Rombouts. 2006. Multivariate GARCH models: A survey. Journal of Applied Econometrics 21: 79–109. [Google Scholar] [CrossRef]
- Becker, Ralf, Adam E. Clements, Mark B. Doolan, and A. Stan Hurn. 2015. Selecting volatility forecasting models for portfolio allocation purposes. International Journal of Forecasting 31: 849–61. [Google Scholar] [CrossRef]
- Bollerslev, Tim. 1990. Modelling the coherence in short-run nominal exchange rates: A multivariate generalized ARCH model. Review of Economics and Statistics 72: 498–505. [Google Scholar] [CrossRef]
- Boudt, Kris, Jon Danielsson, and Sébastien Laurent. 2013. Robust forecasting of dynamic conditional correlation GARCH models. International Journal of Forecasting 29: 244–57. [Google Scholar] [CrossRef]
- Caporin, Massimiliano, and Michael McAleer. 2014. Robust ranking of multivariate GARCH models by problem dimension. Computational Statistics & Data Analysis 76: 172–85. [Google Scholar]
- Caporin, Massimiliano, and Paolo Paruolo. 2015. Proximity-structured multivariate volatility models. Econometric Reviews 34: 559–93. [Google Scholar] [CrossRef]
- Clarke, Roger, Harindra De Silva, and Steven Thorley. 2011. Minimum-variance portfolio composition. Journal of Portfolio Management 37: 31. [Google Scholar] [CrossRef]
- Clarke, Roger G., Harindra De Silva, and Steven Thorley. 2006. Minimum-variance portfolios in the US equity market. The Journal of Portfolio Management 33: 10–24. [Google Scholar] [CrossRef]
- Creal, Drew, Siem Jan Koopman, and André Lucas. 2011. A dynamic multivariate heavy-tailed model for time-varying volatilities and correlations. Journal of Business & Economic Statistics 29: 552–563. [Google Scholar]
- de Almeida, Daniel, Luiz K. Hotta, and Esther Ruiz. 2018. MGARCH models: Trade-off between feasibility and flexibility. International Journal of Forecasting 34, 1: 45–63. [Google Scholar] [CrossRef]
- DeMiguel, Victor, Lorenzo Garlappi, and Raman Uppal. 2009. Optimal versus naive diversification: How inefficient is the 1/n portfolio strategy? Review of Financial Studies 22: 1915–53. [Google Scholar] [CrossRef]
- Eddelbuettel, Dirk, and Romain François. 2011. Rcpp: Seamless R and C++ integration. Journal of Statistical Software 40: 1–18. [Google Scholar] [CrossRef]
- Engle, Robert. 2002. Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional heteroskedasticity models. Journal of Business & Economic Statistics 20: 339–50. [Google Scholar]
- Engle, Robert. 2009. Anticipating Correlations: A New Paradigm for Risk Management. Princeton: Princeton University Press. [Google Scholar]
- Engle, Robert, and Bryan Kelly. 2012. Dynamic equicorrelation. Journal of Business & Economic Statistics 30: 212–28. [Google Scholar]
- Engle, Robert F., Olivier Ledoit, and Michael Wolf. 2017. Large dynamic covariance matrices. Journal of Business & Economic Statistics. [Google Scholar] [CrossRef]
- Frahm, Gabriel. 2010. Linear statistical inference for global and local minimum variance portfolios. Statistical Papers 51: 789–812. [Google Scholar] [CrossRef]
- Gambacciani, Marco, and Marc S. Paolella. 2017. Robust normal mixtures for financial portfolio allocation. Econometrics and Statistics 3: 91–111. [Google Scholar] [CrossRef]
- Ghalanos, Alexios. 2017. Rugarch: Univariate GARCH Models, R package version 1.3-8; Available online: https://cran.r-project.org/web/packages/rugarch/ (accessed on 15 October 2017).
- Glosten, Lawrence R., Ravi Jagannathan, and David E. Runkle. 1993. On the relation between the expected value and the volatility of the nominal excess return on stocks. The Journal of Finance 48: 1779–801. [Google Scholar] [CrossRef]
- Guttman, Louis. 1954. Some necessary conditions for common factor analysis. Psychometrika 19: 149–61. [Google Scholar] [CrossRef]
- Hafner, Christian M, and Olga Reznikova. 2012. On the estimation of dynamic conditional correlation models. Computational Statistics & Data Analysis 56: 3533–3545. [Google Scholar]
- Hlouskova, Jaroslava, Kurt Schmidheiny, and Martin Wagner. 2009. Multistage predictions for multivariate GARCH models: Closed form solution and the value for portfolio management. Journal of Empirical Finance 16: 330–6. [Google Scholar] [CrossRef]
- Hu, Yu-Pin, and Ruey S. Tsay. 2014. Principal volatility component analysis. Journal of Business & Economic Statistics 32: 153–164. [Google Scholar]
- Hubert, Mia, Peter J. Rousseeuw, and Tim Verdonck. 2012. A deterministic algorithm for robust location and scatter. Journal of Computational and Graphical Statistics 21: 618–37. [Google Scholar] [CrossRef]
- Jagannathan, Ravi, and Tongshu Ma. 2003. Risk reduction in large portfolios: Why imposing the wrong constraints helps. The Journal of Finance 58: 1651–84. [Google Scholar] [CrossRef]
- Kirby, Chris, and Barbara Ostdiek. 2012. It’s all in the timing: simple active portfolio strategies that outperform naive diversification. Journal of Financial and Quantitative Analysis 47: 437–67. [Google Scholar] [CrossRef]
- Lam, Clifford, and Qiwei Yao. 2012. Factor modeling for high-dimensional time series: Inference for the number of factors. The Annals of Statistics 40: 694–726. [Google Scholar] [CrossRef]
- Laurent, Sébastien, Christelle Lecourt, and Franz C. Palm. 2016. Testing for jumps in conditionally Gaussian ARMA–GARCH models, a robust approach. Computational Statistics & Data Analysis 100: 383–400. [Google Scholar]
- Laurent, Sébastien, Jeroen V. K. Rombouts, and Francesco Violante. 2012. On the forecasting accuracy of multivariate garch models. Journal of Applied Econometrics 27: 934–55. [Google Scholar] [CrossRef]
- Ledoit, Olivier, and Michael Wolf. 2004a. Honey, I shrunk the sample covariance matrix. The Journal of Portfolio Management 30: 110–9. [Google Scholar] [CrossRef]
- Ledoit, Olivier, and Michael Wolf. 2004b. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis 88: 365–411. [Google Scholar] [CrossRef]
- Ledoit, Olivier, and Michael Wolf. 2012. Nonlinear shrinkage estimation of large-dimensional covariance matrices. The Annals of Statistics 40: 1024–60. [Google Scholar] [CrossRef]
- Ledoit, Olivier, and Michael Wolf. 2017. Numerical implementation of the quest function. Computational Statistics & Data Analysis 115: 199–223. [Google Scholar]
- Li, Weiming, Jing Gao, Kunpeng Li, and Qiwei Yao. 2016. Modeling multivariate volatilities via latent common factors. Journal of Business & Economic Statistics 34: 564–73. [Google Scholar]
- Moura, Guilherme Valle, and Andre A. P Santos. 2018. Forecasting Large Stochastic Covariance Matrices. Working Paper. Available online: https://ssrn.com/abstract=3222808 (accessed on 15 September 2018).
- Nakagawa, Kei, Mitsuyoshi Imamura, and Kenichi Yoshida. 2018. Risk-based portfolios with large dynamic covariance matrices. International Journal of Financial Studies 6: 52. [Google Scholar] [CrossRef]
- Olivares-Nadal, Alba V., and Victor DeMiguel. 2018. A robust perspective on transaction costs in portfolio optimization. Operations Research 66: 733–9. [Google Scholar] [CrossRef]
- Pakel, Cavit, Neil Shephard, Kevin Sheppard, and Robert Engle. 2017. Fitting Vast Dimensional Time-Varying Covariance Models. NYU Working Paper No. FIN-08-009. Available online: https://ssrn.com/abstract=1354497 (accessed on 3 March 2018).
- R Core Team. 2017. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing. [Google Scholar]
- Ramprasad, Pratik. 2016. nlshrink: Non-Linear Shrinkage Estimation of Population Eigenvalues and Covariance Matrices, R package version 1.0.1; Available online: https://cran.r-project.org/web/packages/nlshrink/ (accessed on 27 March 2018).
- Rousseeuw, Peter J. 1984. Least median of squares regression. Journal of the American Statistical Association 79: 871–880. [Google Scholar] [CrossRef]
- Santos, André Alves Portela, and Alexandre R. Ferreira. 2017. On the choice of covariance specifications for portfolio selection problems. Brazilian Review of Econometrics 37: 89–122. [Google Scholar] [CrossRef]
- Sortino, Frank A., and Robert van der Meer. 1991. Downside risk. Journal of Portfolio Management 17: 27–31. [Google Scholar] [CrossRef]
- Trucios, Carlos. 2017. RM2006: RiskMetrics 2006 Methodology, R package version 0.1.0; Available online: https://cran.r-project.org/web/packages/RM2006/ (accessed on 9 May 2018).
- Trucíos, Carlos, Luiz K. Hotta, and Pedro L. Valls Pereira. 2019. On the robustness of the principal volatility components. Journal of Empirical Finance 52: 201–219. [Google Scholar] [CrossRef]
- Trucíos, Carlos, Luiz K. Hotta, and Esther Ruiz. 2018. Robust bootstrap densities for dynamic conditional correlations: Implications for portfolio selection and value-at-risk. Journal of Statistical Computation and Simulation 88: 1976–2000. [Google Scholar] [CrossRef]
- Van der Weide, Roy. 2002. GO-GARCH: A multivariate generalized orthogonal GARCH model. Journal of Applied Econometrics 17: 549–64. [Google Scholar] [CrossRef]
- Wang, Naisyin, Adrian Raftery, and Chris Fraley. 2017. covRobust: Robust Covariance Estimation via Nearest Neighbor Cleaning, R package version 1.1-3; Available online: https://cran.r-project.org/web/packages/covRobust/ (accessed on 27 March 2018).
- Wied, Dominik, Daniel Ziggel, and Tobias Berens. 2013. On the application of new tests for structural changes on global minimum-variance portfolios. Statistical Papers 54: 955–75. [Google Scholar] [CrossRef]
- Zumbach, Gilles O. 2007. A Gentle Introduction to the RM2006 Methodology. Available online: https://ssrn.com/abstract=1420183 (accessed on 12 March 2018).
| 1 | See Wied et al. (2013) for a test for the presence of structural breaks in minimum variance portfolios |
| 2 | From now on we just call the log-likelihood likelihood. |
| 3 | Note that when the number of volatility components equals N, the expression coincides with the one presented in Li et al. (2016). |
| 4 | The acronyms are described in the Appendix A. |
| 5 | Available at www.econ.uzh.ch/en/people/faculty/wolf/publications. |
| 6 | This method was implemented using the MFE Matlab Toolbox of Kevin Sheppard with the default options. An R implementation of the same procedure can be found in Trucios (2017). |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).