Sustainable Financial Obligations and Crisis Cycles. *Econometrics* **2017**, *5*(2), 27; doi:10.3390/econometrics5020027 - 22 June 2017. **Abstract**

The ability to distinguish between sustainable and excessive debt developments is crucial for securing economic stability. By studying US private sector credit loss dynamics, we show that this distinction can be made based on a measure of the incipient aggregate liquidity constraint, the financial obligations ratio. Specifically, as this variable rises, the interaction between credit losses and the business cycle increases, albeit with different intensity depending on whether the problems originate in the household or the business sector. This occurs 1–2 years before each recession in the sample. Our results have implications for macroprudential policy and countercyclical capital buffers.
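The paper's central mechanism — the sensitivity of credit losses to the cycle scaling with the financial obligations ratio — is the kind of state-dependent relationship an interaction term captures. A minimal sketch on simulated data (all series and coefficients here are hypothetical, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400

# Hypothetical series: an output-gap proxy, a slowly varying financial
# obligations ratio (FOR), and credit losses whose sensitivity to the
# cycle grows with the level of the FOR.
gap = rng.normal(size=T)
for_ratio = 15 + 3 * np.sin(np.linspace(0, 6, T))
losses = 1.0 - (0.2 + 0.1 * for_ratio) * gap + rng.normal(scale=0.5, size=T)

# OLS with an interaction term: losses ~ gap + FOR + gap * FOR.
# A significantly negative interaction coefficient says a higher FOR
# amplifies the countercyclical response of credit losses to the cycle.
X = np.column_stack([np.ones(T), gap, for_ratio, gap * for_ratio])
beta, *_ = np.linalg.lstsq(X, losses, rcond=None)
print(beta)
```

With the simulated coefficients above, the estimated interaction term should recover the negative sign that is the paper's warning signal.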

A Spatial Econometric Analysis of the Calls to the Portuguese National Health Line. *Econometrics* **2017**, *5*(2), 24; doi:10.3390/econometrics5020024 - 16 June 2017. **Abstract**

The Portuguese National Health Line, LS24, is an initiative of the Portuguese Health Ministry which seeks to improve accessibility to health care and to rationalize the use of existing resources by directing users to the most appropriate institutions of the national public health services. This study aims to describe and evaluate the use of LS24. Since location is an important attribute for describing the use of LS24, this study analyses the number of calls received, at the municipal level, under two different spatial econometric approaches. This analysis is important for the future development of decision support indicators in a hospital context, based on the economic impact of the use of this health line. Given the discrete nature of the data, the number of calls to LS24 in each municipality is better modelled by a Poisson model with possible covariates: demographic and socio-economic information, characteristics of the Portuguese health system, and development indicators. The spatial autocorrelation in the data can be accommodated in a Bayesian setting through different hierarchical log-Poisson regression models. A different approach uses an autoregressive methodology, also for count data: a log-Poisson model with a spatial lag autocorrelation component, again framed under a Bayesian paradigm. In this empirical study we find strong evidence of a spatial structure in the data and obtain similar conclusions under both perspectives of the analysis. This supports the view that adding a spatial structure to the model improves estimation, even when some relevant covariates have been included.
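A first diagnostic behind such models is whether call counts are spatially autocorrelated at all. A minimal sketch using Moran's I on a simulated lattice of "municipalities" (the grid, weight matrix, and intensities are invented for illustration; the paper works with real municipal adjacency and covariates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10  # 10 x 10 grid of hypothetical "municipalities"

# Rook-contiguity spatial weight matrix on the grid, row-standardised.
W = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        k = i * n + j
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            if 0 <= i + di < n and 0 <= j + dj < n:
                W[k, (i + di) * n + (j + dj)] = 1.0
W /= W.sum(axis=1, keepdims=True)

# Simulated log-intensity with a smooth spatial trend, giving spatially
# correlated Poisson counts that mimic calls per municipality.
xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
eta = 2.0 + 0.1 * xs.ravel() + 0.1 * ys.ravel()
calls = rng.poisson(np.exp(eta))

# Moran's I of the (log) counts; with a row-standardised W it reduces to
# z'Wz / z'z. Values well above E[I] = -1/(N-1) indicate positive
# spatial autocorrelation.
lc = np.log1p(calls)
z = lc - lc.mean()
I = (z @ W @ z) / (z @ z)
print(I)
```

A clearly positive I would motivate adding the spatial (CAR or spatial lag) component that the abstract describes.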

The Realized Hierarchical Archimedean Copula in Risk Modelling. *Econometrics* **2017**, *5*(2), 26; doi:10.3390/econometrics5020026 - 15 June 2017. **Abstract**

This paper introduces the concept of the realized hierarchical Archimedean copula (rHAC). The proposed approach inherits the ability of the copula to capture the dependencies among financial time series, and combines it with additional information contained in high-frequency data. The considered model does not suffer from the curse of dimensionality, and is able to accurately predict high-dimensional distributions. This flexibility is obtained by using a hierarchical structure in the copula. The time variability of the model is provided by daily forecasts of the realized correlation matrix, which is used to estimate the structure and the parameters of the rHAC. Extensive simulation studies show the validity of the estimator based on this realized correlation matrix, and its performance, in comparison to the benchmark models. The application of the estimator to one-day-ahead Value at Risk (VaR) prediction using high-frequency data exhibits good forecasting properties for a multivariate portfolio.

Improved Inference on Cointegrating Vectors in the Presence of a near Unit Root Using Adjusted Quantiles. *Econometrics* **2017**, *5*(2), 25; doi:10.3390/econometrics5020025 - 14 June 2017. **Abstract**

It is well known that inference on the cointegrating relations in a vector autoregression (CVAR) is difficult in the presence of a near unit root. The test for a given cointegration vector can have rejection probabilities under the null, which vary from the nominal size to more than 90%. This paper formulates a CVAR model allowing for multiple near unit roots and analyses the asymptotic properties of the Gaussian maximum likelihood estimator. Then two critical value adjustments suggested by McCloskey (2017) for the test on the cointegrating relations are implemented for the model with a single near unit root, and it is found by simulation that they eliminate the serious size distortions, with a reasonable power for moderate values of the near unit root parameter. The findings are illustrated with an analysis of a number of different bivariate DGPs.

Dependence between Stock Returns of Italian Banks and the Sovereign Risk. *Econometrics* **2017**, *5*(2), 23; doi:10.3390/econometrics5020023 - 8 June 2017. **Abstract**

We analyze the interdependence between the government yield spread and stock returns of the banking sector in Italy during the years 2003–2015. In a first step, we find that the Spearman’s rank correlation between the yield spread and the Italian banking system changed significantly after September 2008. According to this finding, we split the time window into two sub-periods. While we show that the dependence between the banking industry and changes in the yield spread increased significantly in the second time interval, we find no contagion effects from changes in the yield spread to returns of the banking system.
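The split-sample comparison of Spearman's rank correlation can be sketched as follows on simulated data (the break point, coefficients, and series are hypothetical stand-ins for the September 2008 split):

```python
import numpy as np

def spearman(x, y):
    """Spearman's rank correlation (continuous data, no ties assumed)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
T = 1000
split = 500  # stand-in for the September 2008 break

# Hypothetical data: weak dependence between yield-spread changes and
# bank returns before the split, stronger (negative) dependence after.
spread = rng.normal(size=T)
noise = rng.normal(size=T)
returns = np.where(np.arange(T) < split,
                   -0.1 * spread + noise,
                   -0.8 * spread + noise)

rho_pre = spearman(spread[:split], returns[:split])
rho_post = spearman(spread[split:], returns[split:])
print(rho_pre, rho_post)
```

A markedly larger |rho| in the second sub-period is the kind of change that justifies splitting the window before the dependence analysis.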

Unit Roots and Structural Breaks. *Econometrics* **2017**, *5*(2), 22; doi:10.3390/econometrics5020022 - 30 May 2017. **Abstract**
n/a

Bayesian Inference for Latent Factor Copulas and Application to Financial Risk Forecasting. *Econometrics* **2017**, *5*(2), 21; doi:10.3390/econometrics5020021 - 23 May 2017. **Abstract**

Factor modeling is a popular strategy to induce sparsity in multivariate models as they scale to higher dimensions. We develop Bayesian inference for a recently proposed latent factor copula model, which utilizes a pair copula construction to couple the variables with the latent factor. We use adaptive rejection Metropolis sampling (ARMS) within Gibbs sampling for posterior simulation: Gibbs sampling enables application to Bayesian problems, while ARMS is an adaptive strategy that replaces traditional Metropolis-Hastings updates, which typically require careful tuning. Our simulation study shows favorable performance of our proposed approach both in terms of sampling efficiency and accuracy. We provide an extensive application example using historical data on European financial stocks that forecasts portfolio Value at Risk (VaR) and Expected Shortfall (ES).
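For intuition, here is a generic Metropolis-within-Gibbs sampler on a toy two-parameter Gaussian target. The paper replaces the hand-tuned random-walk step shown here with ARMS updates precisely to avoid choosing the proposal scale, and its actual posterior (a latent factor copula) is far more involved:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(theta):
    # Toy correlated-Gaussian "posterior" (correlation 0.6) standing in
    # for the copula parameter posterior in the paper.
    x, y = theta
    return -0.5 * (x**2 + y**2 - 1.2 * x * y) / (1 - 0.36)

# Metropolis-within-Gibbs: update one coordinate at a time with a
# random-walk proposal; ARMS would replace exactly this update step.
theta = np.zeros(2)
draws = []
for _ in range(20000):
    for k in range(2):
        prop = theta.copy()
        prop[k] += rng.normal(scale=1.0)
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
    draws.append(theta.copy())
draws = np.asarray(draws)[5000:]  # drop burn-in
print(np.corrcoef(draws.T)[0, 1])
```

The sample correlation of the retained draws should approximate the target's correlation of 0.6, which is the basic correctness check for any such sampler.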

Copula-Based Factor Models for Multivariate Asset Returns. *Econometrics* **2017**, *5*(2), 20; doi:10.3390/econometrics5020020 - 17 May 2017. **Abstract**

Recently, several copula-based approaches have been proposed for modeling stationary multivariate time series. All of them are based on vine copulas, and they differ in the choice of the regular vine structure. In this article, we consider a copula autoregressive (COPAR) approach to model the dependence of unobserved multivariate factors resulting from two dynamic factor models. However, the proposed methodology is general and applicable to several factor models as well as to other copula models for stationary multivariate time series. An empirical study illustrates the forecasting superiority of our approach for constructing an optimal portfolio of U.S. industrial stocks in the mean-variance framework.

Maximum Likelihood Estimation of the I(2) Model under Linear Restrictions. *Econometrics* **2017**, *5*(2), 19; doi:10.3390/econometrics5020019 - 15 May 2017. **Abstract**

Estimation of the I(2) cointegrated vector autoregressive (CVAR) model is considered. Without further restrictions, estimation of the I(1) model is by reduced-rank regression (Anderson (1951)). Maximum likelihood estimation of I(2) models, on the other hand, always requires iteration. This paper presents a new triangular representation of the I(2) model. This is the basis for a new estimation procedure of the unrestricted I(2) model, as well as the I(2) model with linear restrictions imposed.

The Univariate Collapsing Method for Portfolio Optimization. *Econometrics* **2017**, *5*(2), 18; doi:10.3390/econometrics5020018 - 5 May 2017. **Abstract**

The univariate collapsing method (UCM) for portfolio optimization is based on obtaining the predictive mean and a risk measure such as variance or expected shortfall of the univariate pseudo-return series generated from a given set of portfolio weights and multivariate set of assets of interest and, via simulation or optimization, repeating this process until the desired portfolio weight vector is obtained. The UCM is well-known conceptually, straightforward to implement, and possesses several advantages over use of multivariate models, but, among other things, has been criticized for being too slow. As such, it does not feature prominently in asset allocation and receives little attention in the academic literature. This paper proposes use of fast model estimation methods combined with new heuristics for sampling, based on easily-determined characteristics of the data, to accelerate and optimize the simulation search. An extensive empirical analysis confirms the viability of the method.
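The UCM loop described above — collapse to a univariate pseudo-return series, score it by predictive mean and a tail-risk measure, repeat over candidate weights — can be sketched as follows. Historical expected shortfall and a plain random search stand in for the paper's model-based forecasts and accelerated heuristics; all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(4)
T, k = 1000, 4

# Hypothetical multivariate return history for k assets.
mu = np.array([0.02, 0.05, 0.03, 0.08])
R = rng.normal(loc=mu, scale=[0.05, 0.12, 0.08, 0.25], size=(T, k))

def ucm_objective(w, R, alpha=0.05, lam=2.0):
    """Collapse to a univariate pseudo-return series, then score it by
    predictive mean minus a penalty on historical expected shortfall."""
    p = R @ w
    q = np.quantile(p, alpha)
    es = -p[p <= q].mean()
    return p.mean() - lam * es

# Simulation search over random long-only weight vectors on the simplex.
best_w, best_val = None, -np.inf
for _ in range(2000):
    w = rng.dirichlet(np.ones(k))
    val = ucm_objective(w, R)
    if val > best_val:
        best_w, best_val = w, val
print(best_w, best_val)
```

The point of the paper is precisely that this naive search is slow, and that smarter sampling heuristics and fast estimation make it practical.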

Selecting the Lag Length for the *M*^{GLS} Unit Root Tests with Structural Change: A Warning Note for Practitioners Based on Simulations. *Econometrics* **2017**, *5*(2), 17; doi:10.3390/econometrics5020017 - 16 April 2017. **Abstract**

This is a simulation-based warning note for practitioners who use the ${M}^{GLS}$ unit root tests in the context of structural change with different lag length selection criteria. With $T=100$ , we find severe oversize problems when using some criteria, while other criteria produce an undersizing behavior. In view of this dilemma, we do not recommend using these tests. While such behavior tends to disappear when $T=250$ , it is important to note that most empirical applications use smaller sample sizes such as $T=100$ or $T=150$ . The $AD{F}^{GLS}$ test does not present an oversizing or undersizing problem. The only disadvantage of the $AD{F}^{GLS}$ test arises in the presence of $MA(1)$ negative correlation, in which case the ${M}^{GLS}$ tests are preferable, but in all other cases they are very undersized. When there is a break in the series, selecting the breakpoint using the Supremum method greatly improves the results relative to the Infimum method.

Copula-Based vMEM Specifications versus Alternatives: The Case of Trading Activity. *Econometrics* **2017**, *5*(2), 16; doi:10.3390/econometrics5020016 - 12 April 2017. **Abstract**

We discuss several multivariate extensions of the Multiplicative Error Model to take into account dynamic interdependence and contemporaneously correlated innovations (vector MEM or vMEM). We suggest copula functions to link Gamma marginals of the innovations, in a specification where past values and conditional expectations of the variables can be simultaneously estimated. Results with realized volatility, volumes and number of trades of the JNJ stock show that significantly superior realized volatility forecasts are delivered with a fully interdependent vMEM relative to a single equation. Alternatives involving log-Normal or semiparametric formulations produce substantially equivalent results.

Accuracy and Efficiency of Various GMM Inference Techniques in Dynamic Micro Panel Data Models. *Econometrics* **2017**, *5*(1), 14; doi:10.3390/econometrics5010014 - 20 March 2017. **Abstract**

Studies employing Arellano-Bond and Blundell-Bond generalized method of moments (GMM) estimation for linear dynamic panel data models are growing exponentially in number. However, it is hard for researchers to make a reasoned choice between the many different possible implementations of these estimators and associated tests. By simulation, we examine the effects of many options regarding: (i) reducing, extending or modifying the set of instruments; (ii) specifying the weighting matrix in relation to the type of heteroskedasticity; (iii) using (robustified) 1-step or (corrected) 2-step variance estimators; (iv) employing 1-step or 2-step residuals in Sargan-Hansen overall or incremental overidentification restrictions tests. This is all done for models in which some regressors may be either strictly exogenous, predetermined or endogenous. Surprisingly, particular asymptotically optimal and relatively robust weighting matrices are found to be superior in finite samples to ostensibly more appropriate versions. Most of the variants of tests for overidentification and coefficient restrictions show serious deficiencies. The variance of the individual effects is shown to be a major determinant of the poor quality of most asymptotic approximations; therefore, the accurate estimation of this nuisance parameter is investigated. A modification of GMM is found to have some potential when the cross-sectional heteroskedasticity is pronounced and the time-series dimension of the sample is not too small. Finally, all techniques are employed to actual data and lead to insights which differ considerably from those published earlier.

A Simple Test for Causality in Volatility. *Econometrics* **2017**, *5*(1), 15; doi:10.3390/econometrics5010015 - 20 March 2017. **Abstract**

An early development in testing for causality (technically, Granger non-causality) in the conditional variance (or volatility) associated with financial returns was the portmanteau statistic for non-causality in the variance of Cheng and Ng (1996). A subsequent development was the Lagrange Multiplier (LM) test of non-causality in the conditional variance by Hafner and Herwartz (2006), who provided simulation results to show that their LM test was more powerful than the portmanteau statistic for sample sizes of 1000 and 4000 observations. While the LM test for causality proposed by Hafner and Herwartz (2006) is an interesting and useful development, it is nonetheless arbitrary. In particular, the specification on which the LM test is based does not rely on an underlying stochastic process, so the alternative hypothesis is also arbitrary, which can affect the power of the test. The purpose of the paper is to derive a simple test for causality in volatility that provides regularity conditions arising from the underlying stochastic process, namely a random coefficient autoregressive process, and a test for which the (quasi-) maximum likelihood estimates have valid asymptotic properties under the null hypothesis of non-causality. The simple test is intuitively appealing as it is based on an underlying stochastic process, is sympathetic to Granger’s (1969, 1988) notion of time series predictability, is easy to implement, and has a regularity condition that is not available in the LM test.

Goodness-of-Fit Tests for Copulas of Multivariate Time Series

*Econometrics* **2017**, *5*(1), 13; doi:10.3390/econometrics5010013 - 17 March 2017. **Abstract**

In this paper, we study the asymptotic behavior of the sequential empirical process and the sequential empirical copula process, both constructed from residuals of multivariate stochastic volatility models. Applications for the detection of structural changes and specification tests of the distribution of innovations are discussed. It is also shown that if the stochastic volatility matrices are diagonal, which is the case if the univariate time series are estimated separately instead of being jointly estimated, then the empirical copula process behaves as if the innovations were observed; a remarkable property. As a by-product, one also obtains the asymptotic behavior of rank-based measures of dependence applied to residuals of these time series models.
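The building blocks of the empirical copula process are pseudo-observations, i.e. normalised ranks of the residuals. A minimal sketch computing a rank-based dependence measure (Spearman's rho) from simulated Gaussian-copula innovations, compared against its known population counterpart:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000

# Simulated "innovations" with Gaussian-copula dependence (rho = 0.5);
# in the paper these would be residuals of stochastic volatility models.
L = np.linalg.cholesky(np.array([[1.0, 0.5], [0.5, 1.0]]))
eps = rng.normal(size=(n, 2)) @ L.T

# Pseudo-observations: normalised ranks, the ingredients of the
# empirical copula process.
u = (np.argsort(np.argsort(eps, axis=0), axis=0) + 1) / (n + 1)

# Spearman's rho computed from the pseudo-observations; for a Gaussian
# copula its population value is (6/pi) * arcsin(rho/2).
rho_s = 12 * np.mean((u[:, 0] - 0.5) * (u[:, 1] - 0.5))
print(rho_s, 6 / np.pi * np.arcsin(0.25))
```

The paper's remarkable property is that, for diagonal volatility matrices, such rank-based measures behave asymptotically as if computed from the unobserved innovations themselves.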


Testing for a Structural Break in a Spatial Panel Model. *Econometrics* **2017**, *5*(1), 12; doi:10.3390/econometrics5010012 - 6 March 2017. **Abstract**

We consider the problem of testing for a structural break in the spatial lag parameter in a spatial autoregressive panel model. We propose a likelihood ratio test of the null hypothesis of no break against the alternative hypothesis of a single break. The limiting distribution of the test is derived under the null when both the number of individual units N and the number of time periods T are large, or N is fixed and T is large. The asymptotic critical values of the test statistic can be obtained analytically. We also propose a break-date estimator that can be employed to determine the location of the break point following evidence against the null hypothesis. We present Monte Carlo evidence to show that the proposed procedure performs well in finite samples. Finally, we consider an empirical application of the test on budget spillovers and interdependence in fiscal policy within the U.S. states.
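The break-date estimator idea — pick the split point that best fits separate pre- and post-break regimes — can be sketched in a deliberately simplified setting with a break in the mean (the paper's break is in the spatial lag parameter and uses the likelihood rather than the SSR of a mean shift):

```python
import numpy as np

rng = np.random.default_rng(8)
T = 200
true_break = 120

# Toy univariate series with a mean shift at the true break date.
y = np.where(np.arange(T) < true_break, 0.0, 1.5) + rng.normal(size=T)

def ssr(v):
    """Sum of squared residuals around the segment mean."""
    return ((v - v.mean()) ** 2).sum()

# Break-date estimator: the candidate split minimising the total SSR of
# separate pre- and post-break fits, with the sample ends trimmed.
cands = range(20, T - 20)
tau_hat = min(cands, key=lambda t: ssr(y[:t]) + ssr(y[t:]))
print(tau_hat)
```

With a break of this size the estimated date typically lands within a few observations of the true one, which is the behaviour the paper's Monte Carlo evidence documents in its richer setting.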

Structural Breaks, Inflation and Interest Rates: Evidence from the G7 Countries. *Econometrics* **2017**, *5*(1), 11; doi:10.3390/econometrics5010011 - 17 February 2017. **Abstract**

This study reconsiders the common unit root/co-integration approach to test for the Fisher effect for the economies of the G7 countries. We first show that nominal interest and inflation rates are better represented as I(0) variables. We then use the Bai–Perron procedure to show the existence of structural changes in the Fisher equation. After accounting for these breaks, we find very limited evidence of a full Fisher effect, as the transmission coefficient of expected inflation rates to nominal interest rates differs significantly from one.
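Testing for a full Fisher effect amounts to a t-test of the transmission coefficient against one in the regression i_t = a + b·pi^e_t + u_t. A minimal sketch on simulated I(0) series with a deliberately partial effect (true b = 0.6); all numbers are hypothetical, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 300

# Hypothetical stationary expected-inflation series and a nominal rate
# with a partial Fisher effect (true transmission coefficient 0.6 < 1).
pi_e = 2.0 + rng.normal(scale=0.8, size=T)
i_nom = 1.0 + 0.6 * pi_e + rng.normal(scale=0.3, size=T)

# OLS of the Fisher equation, then a t-test of H0: b = 1 (full effect).
X = np.column_stack([np.ones(T), pi_e])
b, *_ = np.linalg.lstsq(X, i_nom, rcond=None)
u = i_nom - X @ b
s2 = u @ u / (T - 2)
se_b = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = (b[1] - 1.0) / se_b
print(b[1], t_stat)
```

A strongly negative t-statistic rejects the full Fisher effect, mirroring the paper's finding of a transmission coefficient significantly below one (after breaks are accounted for).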

A Note on Identification of Bivariate Copulas for Discrete Count Data. *Econometrics* **2017**, *5*(1), 10; doi:10.3390/econometrics5010010 - 15 February 2017. **Abstract**

Copulas have enjoyed increased usage in many areas of econometrics, including applications with discrete outcomes. However, Genest and Nešlehová (2007) present evidence that copulas for discrete outcomes are not identified, particularly when those discrete outcomes follow count distributions. This paper confirms the Genest and Nešlehová result using a series of simulation exercises. The paper then proceeds to show that those identification concerns diminish if the model has a regression structure such that the exogenous variable(s) generates additional variation in the outcomes and thus more completely covers the outcome domain.

Endogeneity, Time-Varying Coefficients, and Incorrect vs. Correct Ways of Specifying the Error Terms of Econometric Models. *Econometrics* **2017**, *5*(1), 8; doi:10.3390/econometrics5010008 - 3 February 2017. **Abstract**

Using the net effect of all relevant regressors omitted from a model to form its error term is incorrect because the coefficients and error term of such a model are non-unique. Non-unique coefficients cannot possess consistent estimators. Uniqueness can be achieved if, instead, one uses certain “sufficient sets” of (relevant) regressors omitted from each model to represent the error term. In this case, the unique coefficient on any non-constant regressor takes the form of the sum of a bias-free component and omitted-regressor biases. Measurement-error bias can also be incorporated into this sum. We show that if our procedures are followed, accurate estimation of bias-free components is possible.

A Fast Algorithm for the Computation of HAC Covariance Matrix Estimators. *Econometrics* **2017**, *5*(1), 9; doi:10.3390/econometrics5010009 - 25 January 2017. **Abstract**

This paper considers the algorithmic implementation of the heteroskedasticity and autocorrelation consistent (HAC) estimation problem for covariance matrices of parameter estimators. We introduce a new algorithm, mainly based on the fast Fourier transform, and show via computer simulation that our algorithm is up to 20 times faster than well-established alternative algorithms. The cumulative effect is substantial if the HAC estimation problem has to be solved repeatedly. Moreover, the bandwidth parameter has no impact on this performance. We provide a general description of the new algorithm as well as code for a reference implementation in `R`.
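The key device is that all sample autocovariances can be obtained from a single zero-padded FFT, after which any kernel weighting is a cheap dot product. A minimal Bartlett (Newey-West) sketch in Python — the paper's reference implementation is in `R` and covers the general multivariate HAC problem:

```python
import numpy as np

def autocov_fft(x):
    """All sample autocovariances of x via one zero-padded FFT pass."""
    n = len(x)
    xc = x - x.mean()
    f = np.fft.rfft(xc, 2 * n)           # zero-pad to avoid circular wrap-around
    acov = np.fft.irfft(f * np.conj(f))[:n] / n
    return acov

def newey_west_fft(x, bandwidth):
    """Bartlett-kernel (Newey-West) long-run variance from FFT autocovariances."""
    acov = autocov_fft(x)
    w = 1 - np.arange(1, bandwidth + 1) / (bandwidth + 1)
    return acov[0] + 2 * np.sum(w * acov[1 : bandwidth + 1])

rng = np.random.default_rng(7)
# AR(1) series with phi = 0.5: long-run variance is 1 / (1 - 0.5)^2 = 4.
x = np.zeros(10000)
for t in range(1, len(x)):
    x[t] = 0.5 * x[t - 1] + rng.normal()

lrv = newey_west_fft(x, bandwidth=50)
print(lrv)
```

Because the FFT computes every lag at once, the cost is independent of the bandwidth, which is the property the abstract highlights.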