
Table of Contents

Econometrics, Volume 7, Issue 1 (March 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article: Monte Carlo Inference on Two-Sided Matching Models
Econometrics 2019, 7(1), 16; https://doi.org/10.3390/econometrics7010016
Received: 1 October 2018 / Revised: 29 November 2018 / Accepted: 7 March 2019 / Published: 26 March 2019
Abstract
This paper considers two-sided matching models with nontransferable utilities, with one side having homogeneous preferences over the other side. When one observes only one or several large matchings, asymptotic inference is difficult despite the large number of agents involved, because the observed matching involves the preferences of all the agents on both sides in a complex way and creates a complicated form of cross-sectional dependence across observed matches. When we assume that the observed matching is a consequence of a stable matching mechanism with homogeneous preferences on one side, and the preferences are drawn from a parametric distribution conditional on observables, the large observed matching follows a parametric distribution. This paper shows how, in such a situation, the method of Monte Carlo inference can be a viable option. Being a finite sample inference method, it does not require the independence or local-dependence conditions that are often used to obtain asymptotic validity. Results from a Monte Carlo simulation study are presented and discussed.
(This article belongs to the Special Issue Resampling Methods in Econometrics)
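The finite-sample Monte Carlo inference idea invoked in this abstract can be illustrated with a generic Monte Carlo test in the Barnard/Dufour style: simulate the test statistic under the null and rank the observed statistic among the draws. This is a sketch of the general technique only, not the paper's matching-model procedure, and the function names are hypothetical:

```python
import random

def monte_carlo_pvalue(observed_stat, simulate_stat, n_sim=999, rng=None):
    """Finite-sample Monte Carlo p-value.

    Draws n_sim statistics under the null by calling simulate_stat(rng);
    the p-value is the rank of the observed statistic among the combined
    n_sim + 1 values. Exact in finite samples when the null distribution
    is free of nuisance parameters -- no asymptotics are needed.
    """
    rng = rng or random.Random(0)
    sims = [simulate_stat(rng) for _ in range(n_sim)]
    # count simulated statistics at least as extreme as the observed one
    ge = sum(1 for s in sims if s >= observed_stat)
    return (ge + 1) / (n_sim + 1)

# Example: a statistic that is Uniform(0,1) under the null; an observed
# value of 2.0 exceeds every draw, so the p-value is 1/1000.
p = monte_carlo_pvalue(2.0, lambda r: r.random())  # -> 0.001
```

Because validity comes from the exchangeability of the observed and simulated statistics rather than from a limiting distribution, no independence or weak-dependence conditions on the underlying observations are required, which is the point the abstract makes.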
Open Access Article: On the Convergence Rate of the SCAD-Penalized Empirical Likelihood Estimator
Econometrics 2019, 7(1), 15; https://doi.org/10.3390/econometrics7010015
Received: 18 October 2018 / Revised: 18 March 2019 / Accepted: 18 March 2019 / Published: 20 March 2019
Abstract
This paper investigates the asymptotic properties of a penalized empirical likelihood estimator for moment restriction models when the number of parameters (p_n) and/or the number of moment restrictions increases with the sample size. Our main result is that the SCAD-penalized empirical likelihood estimator is √(n/p_n)-consistent under a reasonable condition on the regularization parameter. Our consistency rate is better than the existing ones. This paper also provides sufficient conditions under which √(n/p_n)-consistency and an oracle property are satisfied simultaneously. As far as we know, this paper is the first to specify sufficient conditions for both √(n/p_n)-consistency and the oracle property of the penalized empirical likelihood estimator.
Open Access Article: Indirect Inference: Which Moments to Match?
Econometrics 2019, 7(1), 14; https://doi.org/10.3390/econometrics7010014
Received: 19 December 2018 / Revised: 17 February 2019 / Accepted: 7 March 2019 / Published: 19 March 2019
Abstract
The standard approach to indirect inference estimation considers that the auxiliary parameters, which carry the identifying information about the structural parameters of interest, are obtained from some exactly identified vector of estimating equations. In contrast to this standard interpretation, we demonstrate that the case of overidentified auxiliary parameters is both possible and, indeed, more commonly encountered than one may initially realize. We then revisit the “moment matching” and “parameter matching” versions of indirect inference in this context and devise efficient estimation strategies in this more general framework. Perhaps surprisingly, we demonstrate that if one were to consider the naive choice of an efficient Generalized Method of Moments (GMM)-based estimator for the auxiliary parameters, the resulting indirect inference estimators would be inefficient. In this general context, we demonstrate that efficient indirect inference estimation actually requires a two-step estimation procedure, whereby the goal of the first step is to obtain an efficient version of the auxiliary model. These two-step estimators are presented both within the context of moment matching and parameter matching.
(This article belongs to the Special Issue Resampling Methods in Econometrics)
Open Access Article: Fixed and Long Time Span Jump Tests: New Monte Carlo and Empirical Evidence
Econometrics 2019, 7(1), 13; https://doi.org/10.3390/econometrics7010013
Received: 5 September 2018 / Revised: 19 February 2019 / Accepted: 7 March 2019 / Published: 13 March 2019
Abstract
Numerous tests designed to detect realized jumps over a fixed time span have been proposed and extensively studied in the financial econometrics literature. These tests differ from “long time span tests” that detect jumps by examining the magnitude of the jump intensity parameter in the data generating process, and which are consistent. In this paper, long span jump tests are compared and contrasted with a variety of fixed span jump tests in a series of Monte Carlo experiments. It is found that both the long time span tests of Corradi et al. (2018) and the fixed span tests of Aït-Sahalia and Jacod (2009) exhibit reasonably good finite sample properties, for time spans both short and long. Various other tests suffer from finite sample distortions, both under sequential testing and under long time spans. The latter finding is new, and confirms the “pitfall” discussed in Huang and Tauchen (2005), of using asymptotic approximations associated with finite time span tests in order to study long time spans of data. An empirical analysis is carried out to investigate the implications of these findings, and “time-span robust” tests indicate that the prevalence of jumps is not as universal as might be expected.
(This article belongs to the Special Issue Resampling Methods in Econometrics)
Open Access Article: On the Validity of Tests for Asymmetry in Residual-Based Threshold Cointegration Models
Econometrics 2019, 7(1), 12; https://doi.org/10.3390/econometrics7010012
Received: 1 August 2018 / Revised: 16 January 2019 / Accepted: 7 March 2019 / Published: 13 March 2019
Abstract
This paper investigates the properties of tests for asymmetric long-run adjustment which are often applied in empirical studies on asymmetric price transmissions. We show that substantial size distortions are caused by preconditioning the test on finding sufficient evidence for cointegration in a first step. The extent of oversizing the test for long-run asymmetry depends inversely on the power of the primary cointegration test. Hence, tests for long-run asymmetry become invalid in cases of small sample sizes or slow speed of adjustment. Further, we provide simulation evidence that tests for long-run asymmetry are generally oversized if the threshold parameter is estimated by conditional least squares and show that bootstrap techniques can be used to obtain the correct size.
(This article belongs to the Special Issue Resampling Methods in Econometrics)
Open Access Article: Not p-Values, Said a Little Bit Differently
Econometrics 2019, 7(1), 11; https://doi.org/10.3390/econometrics7010011
Received: 14 December 2018 / Revised: 25 February 2019 / Accepted: 7 March 2019 / Published: 13 March 2019
Abstract
As a contribution toward the ongoing discussion about the use and misuse of p-values, numerical examples are presented demonstrating that a p-value can, as a practical matter, give you a really different answer than the one that you want.
(This article belongs to the Special Issue Towards a New Paradigm for Statistical Evidence)
Open Access Concept Paper: Permutation Entropy and Information Recovery in Nonlinear Dynamic Economic Time Series
Econometrics 2019, 7(1), 10; https://doi.org/10.3390/econometrics7010010
Received: 5 February 2019 / Revised: 28 February 2019 / Accepted: 5 March 2019 / Published: 12 March 2019
Abstract
The focus of this paper is an information-theoretic, symbolic-logic approach to extract information from complex economic systems and unlock their dynamic content. Permutation Entropy (PE) is used to capture the permutation patterns, i.e., the ordinal relations among the individual values of a given time series; to obtain a probability distribution of the accessible patterns; and to quantify the degree of complexity of an economic behavior system. Ordinal patterns are used to describe the intrinsic patterns hidden in the dynamics of the economic system. Empirical applications involving the Dow Jones Industrial Average are presented to indicate the information recovery value and the applicability of the PE method. The results demonstrate the ability of the PE method to detect the extent of complexity (irregularity) and to discriminate and classify admissible and forbidden states.
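The PE calculation summarized in this abstract (map each window of consecutive values to its ordinal pattern, estimate the pattern distribution, take the normalized Shannon entropy) can be sketched in a few lines. This is a generic Bandt-Pompe implementation offered for illustration, not the authors' code:

```python
import math

def permutation_entropy(series, order=3):
    """Normalized permutation entropy of a 1-D series (Bandt-Pompe).

    Each window of `order` consecutive values is mapped to the ordinal
    pattern given by the ranks of its values; PE is the Shannon entropy
    of the empirical pattern distribution, normalized by log(order!)
    so that the result lies in [0, 1].
    """
    n = len(series) - order + 1
    counts = {}
    for i in range(n):
        window = series[i:i + order]
        # ordinal pattern: indices of the window sorted by value
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = [c / n for c in counts.values()]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))
```

A monotone series uses a single ordinal pattern and has PE equal to 0, while a highly irregular series spreads mass over all order! patterns and has PE near 1; patterns that never occur are the "forbidden states" the abstract refers to.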
Open Access Article: A Parametric Factor Model of the Term Structure of Mortality
Received: 11 June 2018 / Revised: 17 February 2019 / Accepted: 5 March 2019 / Published: 11 March 2019
Abstract
The prototypical Lee–Carter mortality model is characterized by a single common time factor that loads differently across age groups. In this paper, we propose a parametric factor model for the term structure of mortality where multiple factors are designed to influence the age groups differently via parametric loading functions. We identify four different factors: a factor common to all age groups, factors for infant and adult mortality, and a factor for the “accident hump” that primarily affects the mortality of relatively young adults and late teenagers. Since the factors are identified via restrictions on the loading functions, they are not designed to be orthogonal but can be dependent and can possibly cointegrate when they have unit roots. We suggest two estimation procedures similar to the estimation of the dynamic Nelson–Siegel term structure model. The first is a two-step nonlinear least squares procedure based on cross-section regressions, together with a separate model to estimate the dynamics of the factors. The second is a fully specified model estimated by maximum likelihood via the Kalman filter recursions after the model is put in state-space form. We demonstrate the methodology for US and French mortality data. We find that the model provides a good fit of the relevant factors and, in a forecast comparison with a range of benchmark models, variants of the parametric factor model show excellent forecast performance, especially at longer horizons.
(This article belongs to the Special Issue Celebrated Econometricians: Katarina Juselius and Søren Johansen)
Open Access Article: Structural Panel Bayesian VAR Model to Deal with Model Misspecification and Unobserved Heterogeneity Problems
Received: 4 September 2018 / Revised: 23 February 2019 / Accepted: 5 March 2019 / Published: 11 March 2019
Abstract
This paper provides an overview of a time-varying Structural Panel Bayesian Vector Autoregression model that deals with model misspecification and unobserved heterogeneity problems in applied macroeconomic analyses when studying time-varying relationships and dynamic interdependencies among countries and variables. I discuss its distinctive features, what it is used for, and how it can be analytically derived. I also describe how it is estimated and how structural spillovers and shock identification are performed. The model is empirically applied to a set of developed European economies to illustrate its functioning and capabilities. The paper also discusses more recent studies that have used multivariate dynamic macro-panels to evaluate idiosyncratic business cycles, policy-making, and spillover effects among different sectors and countries.
(This article belongs to the Special Issue Big Data in Economics and Finance)
Open Access Article: Panel Data Estimation for Correlated Random Coefficients Models
Received: 30 January 2018 / Revised: 13 January 2019 / Accepted: 23 January 2019 / Published: 1 February 2019
Abstract
This paper considers methods of estimating a static correlated random coefficient model with panel data. We mainly focus on comparing two approaches to estimating the unconditional mean of the coefficients in correlated random coefficients models: the group mean estimator and the generalized least squares estimator. For the group mean estimator, we show that it asymptotically achieves the Chamberlain (1992) semiparametric efficiency bound. For the generalized least squares estimator, we show that when T is large, a generalized least squares estimator that ignores the correlation between the individual coefficients and the regressors is asymptotically equivalent to the group mean estimator. In addition, we give conditions under which the standard within estimator of the mean of the coefficients is consistent. Moreover, with additional assumptions on the known correlation pattern, we derive the asymptotic properties of panel least squares estimators. Simulations are used to examine the finite sample performances of the different estimators.
(This article belongs to the Special Issue Celebrated Econometricians: Peter Phillips)
Open Access Article: Asymptotic Theory for Cointegration Analysis When the Cointegration Rank Is Deficient
Received: 3 May 2018 / Revised: 26 September 2018 / Accepted: 8 January 2019 / Published: 18 January 2019
Abstract
We consider cointegration tests in the situation where the cointegration rank is deficient. This situation is of interest in finite sample analysis and in relation to recent work on identification-robust cointegration inference. We derive asymptotic theory for tests for the cointegration rank and for hypotheses on the cointegrating vectors. The limiting distributions are tabulated. An application to US Treasury yield series is given.
(This article belongs to the Special Issue Celebrated Econometricians: Katarina Juselius and Søren Johansen)
Open Access Article: Information Flow in Times of Crisis: The Case of the European Banking and Sovereign Sectors
Received: 20 December 2017 / Revised: 10 December 2018 / Accepted: 11 January 2019 / Published: 17 January 2019
Abstract
Crises in the banking and sovereign debt sectors give rise to heightened financial fragility. Of particular concern is the development of self-fulfilling feedback loops where crisis conditions in one sector are transmitted to the other sector and back again. We use time-varying tests of Granger causality to demonstrate how empirical evidence of connectivity between the banking and sovereign sectors can be detected, and provide an application to the Greek, Irish, Italian, Portuguese and Spanish (GIIPS) countries and Germany over the period 2007 to 2016. While the results provide evidence of domestic feedback loops, the most important finding is that financial fragility is an international problem and cannot be dealt with purely on a country-by-country basis.
(This article belongs to the Special Issue Celebrated Econometricians: Peter Phillips)
Open Access Article: Gini Regressions and Heteroskedasticity
Received: 20 July 2018 / Revised: 19 November 2018 / Accepted: 4 January 2019 / Published: 14 January 2019
Abstract
We propose an Aitken estimator for Gini regression. The suggested A-Gini estimator is proven to be a U-statistic. Monte Carlo simulations are provided to deal with heteroskedasticity and to make some comparisons between generalized least squares and the Gini regression. A Gini-White test is proposed and is shown to have better power than the usual White test when outlying observations contaminate the data.
Open Access Editorial: Acknowledgement to Reviewers of Econometrics in 2018
Published: 10 January 2019
Abstract
Rigorous peer review is the cornerstone of high-quality academic publishing [...]
Open Access Article: Cointegration and Adjustment in the CVAR(∞) Representation of Some Partially Observed CVAR(1) Models
Received: 18 September 2018 / Revised: 29 October 2018 / Accepted: 8 January 2019 / Published: 10 January 2019
Abstract
A multivariate CVAR(1) model for some observed variables and some unobserved variables is analysed using its infinite order CVAR representation of the observations. Cointegration and adjustment coefficients in the infinite order CVAR are found as functions of the parameters in the CVAR(1) model. Conditions for weak exogeneity for the cointegrating vectors in the approximating finite order CVAR are derived. The results are illustrated by two simple examples of relevance for modelling causal graphs.
(This article belongs to the Special Issue Celebrated Econometricians: Katarina Juselius and Søren Johansen)
Open Access Article: The Specification of Dynamic Discrete-Time Two-State Panel Data Models
Received: 1 November 2018 / Revised: 16 December 2018 / Accepted: 18 December 2018 / Published: 24 December 2018
Abstract
This paper compares two approaches to analyzing longitudinal discrete-time binary outcomes. Dynamic binary response models focus on state occupancy and typically specify low-order Markovian state dependence. Multi-spell duration models focus on transitions between states and typically allow for state-specific duration dependence. We show that the former implicitly impose strong and testable restrictions on the transition probabilities. In a case study of poverty transitions, we show that these restrictions are severely rejected against the more flexible multi-spell duration models.
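The restriction this abstract refers to can be made concrete in a toy example: under first-order Markovian state dependence, the probability of leaving a state cannot depend on how long the state has lasted, which is exactly what state-specific duration dependence relaxes. The sketch below verifies this by exact enumeration of path probabilities; the two-state transition matrix is purely illustrative and has nothing to do with the paper's poverty data:

```python
from itertools import product

# Illustrative two-state first-order Markov chain: P[a][b] = P(next = b | current = a)
P = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
init = {0: 0.5, 1: 0.5}

def path_prob(path):
    """Probability of a full state path y_0, ..., y_T."""
    pr = init[path[0]]
    for a, b in zip(path, path[1:]):
        pr *= P[a][b]
    return pr

def exit_prob(T, d):
    """P(y_T = 0 | the current spell in state 1 has lasted exactly d periods),
    computed by exact enumeration over all 2^(T+1) paths."""
    num = den = 0.0
    for path in product((0, 1), repeat=T + 1):
        if any(path[t] != 1 for t in range(T - d, T)):
            continue  # the d states preceding T must all be 1
        if T - d - 1 >= 0 and path[T - d - 1] == 1:
            continue  # ...and the spell must not be longer than d
        pr = path_prob(path)
        den += pr
        if path[T] == 0:
            num += pr
    return num / den

# Under Markovian state dependence the exit probability equals P[1][0]
# at every spell duration d -- the testable restriction.
print([round(exit_prob(5, d), 6) for d in (1, 2, 3, 4)])
```

Every entry of the printed list equals P[1][0] = 0.3: the Markov specification forces a flat hazard, so duration dependence in observed transitions is evidence against it, which is the comparison the paper formalizes.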
Econometrics EISSN 2225-1146, published by MDPI AG, Basel, Switzerland.