**Subset-Continuous-Updating GMM Estimators for Dynamic Panel Data Models**
*Econometrics* **2016**, *4*(4), 47; doi:10.3390/econometrics4040047 - 30 November 2016
**Abstract**

The two-step GMM estimators of Arellano and Bond (1991) and Blundell and Bond (1998) for dynamic panel data models have been widely used in empirical work; however, neither of them performs well in small samples with weak instruments. The continuous-updating GMM estimator proposed by Hansen, Heaton, and Yaron (1996) is in principle able to reduce the small-sample bias, but it involves high-dimensional optimizations when the number of regressors is large. This paper proposes a computationally feasible variation on these standard two-step GMM estimators by applying the idea of continuous-updating to the autoregressive parameter only, given the fact that the absolute value of the autoregressive parameter is less than unity as a necessary requirement for the data-generating process to be stationary. We show that our subset-continuous-updating method does not alter the asymptotic distribution of the two-step GMM estimators, and it therefore retains consistency. Our simulation results indicate that the subset-continuous-updating GMM estimators outperform their standard two-step counterparts in finite samples in terms of the estimation accuracy on the autoregressive parameter and the size of the Sargan-Hansen test.
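For intuition, the continuous-updating idea can be sketched in one dimension: the weighting matrix is re-evaluated at every trial value of the parameter, and because the parameter is known to lie in (−1, 1), a simple grid search suffices. The linear IV design below is invented for illustration and is not the paper's panel setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta0 = 2000, 0.5
z = rng.normal(size=(n, 2))                        # two instruments
x = z @ np.array([1.0, 0.5]) + rng.normal(size=n)  # regressor correlated with instruments
y = theta0 * x + rng.normal(size=n)

def cue_objective(theta):
    """Continuous-updating GMM objective: weight matrix re-evaluated at theta."""
    g = z * (y - theta * x)[:, None]     # moment contributions, n x 2
    gbar = g.mean(axis=0)
    S = np.cov(g, rowvar=False)          # covariance of moments at this theta
    return n * gbar @ np.linalg.solve(S, gbar)

# grid search over the stationarity-motivated interval (-1, 1)
grid = np.linspace(-0.99, 0.99, 397)
theta_hat = grid[np.argmin([cue_objective(t) for t in grid])]
```

Restricting the search to a bounded scalar interval is what makes the subset-continuous-updating idea computationally cheap relative to a full multi-parameter CUE optimization.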

**Generalized Information Matrix Tests for Detecting Model Misspecification**
*Econometrics* **2016**, *4*(4), 46; doi:10.3390/econometrics4040046 - 15 November 2016
**Abstract**

Generalized Information Matrix Tests (GIMTs) have recently been used for detecting the presence of misspecification in regression models in both randomized controlled trials and observational studies. In this paper, a unified GIMT framework is developed for the purpose of identifying, classifying, and deriving novel model misspecification tests for finite-dimensional smooth probability models. These GIMTs include previously published as well as newly developed information matrix tests. To illustrate the application of the GIMT framework, we derived and assessed the performance of new GIMTs for binary logistic regression. Although all GIMTs exhibited good level and power performance for the larger sample sizes, GIMT statistics with fewer degrees of freedom and derived using log-likelihood third derivatives exhibited improved level and power performance.

**Testing Cross-Sectional Correlation in Large Panel Data Models with Serial Correlation**
*Econometrics* **2016**, *4*(4), 44; doi:10.3390/econometrics4040044 - 4 November 2016
**Abstract**

This paper considers the problem of testing cross-sectional correlation in large panel data models with serially-correlated errors. It finds that existing tests for cross-sectional correlation encounter size distortions with serial correlation in the errors. To control the size, this paper proposes a modification of Pesaran’s Cross-sectional Dependence (CD) test to account for serial correlation of an unknown form in the error term. We derive the limiting distribution of this test as $(N, T) \to \infty$. The test is distribution-free and allows for unknown forms of serial correlation in the errors. Monte Carlo simulations show that the test has good size and power for large panels when serial correlation in the errors is present.
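Pesaran's original CD statistic is a scaled sum of all pairwise residual correlations. The sketch below implements that baseline statistic on synthetic residuals; the paper's serial-correlation-robust modification is not reproduced here.

```python
import numpy as np

def cd_statistic(u):
    """Pesaran CD statistic from a T x N residual matrix."""
    T, N = u.shape
    R = np.corrcoef(u, rowvar=False)          # N x N pairwise correlations
    iu = np.triu_indices(N, k=1)              # upper triangle: i < j pairs
    return np.sqrt(2.0 * T / (N * (N - 1))) * R[iu].sum()

rng = np.random.default_rng(1)
N, T = 20, 100
cd_null = cd_statistic(rng.normal(size=(T, N)))   # no cross-sectional dependence
```

Under cross-sectional independence the statistic is asymptotically N(0, 1), so values far outside (−2, 2) signal dependence.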

**Panel Cointegration Testing in the Presence of Linear Time Trends**
*Econometrics* **2016**, *4*(4), 45; doi:10.3390/econometrics4040045 - 1 November 2016
**Abstract**

We consider a class of panel tests of the null hypothesis of no cointegration and cointegration. All tests under investigation rely on single equations estimated by least squares, and they may be residual-based or not. We focus on test statistics computed from regressions with intercept only (i.e., without detrending) and with at least one of the regressors (integrated of order 1) being dominated by a linear time trend. In such a setting, often encountered in practice, the limiting distributions and critical values derived for the “intercept only” case are not correct. It is demonstrated that their usage results in size distortions growing with the panel size *N*. Moreover, we show which distributions are appropriate and how correct critical values can be obtained from the literature.

**Pair-Copula Constructions for Financial Applications: A Review**
*Econometrics* **2016**, *4*(4), 43; doi:10.3390/econometrics4040043 - 29 October 2016
**Abstract**

This survey reviews the large and growing literature on the use of pair-copula constructions (PCCs) in financial applications. Using a PCC, multivariate data that exhibit complex patterns of dependence can be modeled using bivariate copulae as simple building blocks. Hence, this model represents a very flexible way of constructing higher-dimensional copulae. In this paper, we survey inference methods and goodness-of-fit tests for such models, as well as empirical applications of the PCCs in finance and economics.
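As a minimal sketch of the PCC idea, assuming Gaussian pair-copulas, standard normal margins, and invented parameter values, the three-dimensional D-vine density below is assembled from bivariate copula densities and h-functions. With Gaussian building blocks the construction reproduces a trivariate normal density exactly, which gives a convenient correctness check.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gauss_copula_density(u, v, rho):
    """Bivariate Gaussian copula density c(u, v; rho)."""
    x, y = norm.ppf(u), norm.ppf(v)
    expo = (2 * rho * x * y - rho**2 * (x**2 + y**2)) / (2 * (1 - rho**2))
    return np.exp(expo) / np.sqrt(1 - rho**2)

def h_func(u, v, rho):
    """Conditional cdf h(u | v) of the Gaussian pair-copula."""
    return norm.cdf((norm.ppf(u) - rho * norm.ppf(v)) / np.sqrt(1 - rho**2))

def dvine3_density(x, r12, r23, r13_2):
    """3-dim D-vine density with Gaussian pair-copulas and N(0,1) margins."""
    u = norm.cdf(x)
    c12 = gauss_copula_density(u[0], u[1], r12)
    c23 = gauss_copula_density(u[1], u[2], r23)
    c13_2 = gauss_copula_density(h_func(u[0], u[1], r12),
                                 h_func(u[2], u[1], r23), r13_2)
    return c12 * c23 * c13_2 * np.prod(norm.pdf(x))

# check: the Gaussian vine equals a trivariate normal with the implied rho_13
r12, r23, r13_2 = 0.5, 0.3, 0.2
r13 = r13_2 * np.sqrt((1 - r12**2) * (1 - r23**2)) + r12 * r23
Sigma = np.array([[1, r12, r13], [r12, 1, r23], [r13, r23, 1]])
pt = np.array([0.3, -0.5, 1.1])
pcc = dvine3_density(pt, r12, r23, r13_2)
mvn = multivariate_normal(cov=Sigma).pdf(pt)
```

The flexibility of PCCs comes from swapping in different bivariate families (Clayton, Gumbel, t, ...) for each edge of the vine, which this Gaussian check does not exercise.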

**Oil Price and Economic Growth: A Long Story?**
*Econometrics* **2016**, *4*(4), 41; doi:10.3390/econometrics4040041 - 28 October 2016
**Abstract**

This study investigates changes in the relationship between oil prices and the US economy from a long-term perspective. Although neither of the two series (oil price and GDP growth rates) presents structural breaks in mean, we identify different volatility periods in both of them, separately. From a multivariate perspective, we do not observe a significant effect between changes in oil prices and GDP growth when considering the full period. However, we find a significant relationship in some subperiods by carrying out a rolling analysis and by investigating the presence of structural breaks in the multivariate framework. Finally, we obtain evidence, by means of a time-varying VAR, that the impact of the oil price shock on GDP growth has declined over time. We also observe that the negative effect is greater at the time of large oil price increases, supporting previous evidence of nonlinearity in the relationship.
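The rolling analysis can be illustrated with a synthetic example in which the oil-growth coefficient weakens midway through the sample; all numbers below are invented for the demonstration and have nothing to do with the paper's data.

```python
import numpy as np

rng = np.random.default_rng(9)
T = 300
oil = rng.normal(size=T)                                   # oil price growth (synthetic)
beta_path = np.where(np.arange(T) < 150, -0.3, -0.05)      # effect declines over time
gdp = beta_path * oil + 0.5 * rng.normal(size=T)           # GDP growth (synthetic)

# rolling OLS of gdp on oil over a fixed-length window
window = 60
betas = []
for s in range(T - window + 1):
    X = np.column_stack([np.ones(window), oil[s:s + window]])
    betas.append(np.linalg.lstsq(X, gdp[s:s + window], rcond=None)[0][1])
betas = np.array(betas)
```

A declining |slope| across windows is the kind of pattern the paper's rolling and time-varying VAR evidence points to.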

**Social Networks and Choice Set Formation in Discrete Choice Models**
*Econometrics* **2016**, *4*(4), 42; doi:10.3390/econometrics4040042 - 27 October 2016
**Abstract**

The discrete choice literature has evolved from the analysis of a choice of a single item from a fixed choice set to the incorporation of a vast array of more complex representations of preferences and choice set formation processes into choice models. Modern discrete choice models include rich specifications of heterogeneity, multi-stage processing for choice set determination, dynamics, and other elements. However, discrete choice models still largely represent socially isolated choice processes: individuals are not affected by the preferences or choices of other individuals. There is a developing literature on the impact of social networks on preferences or the utility function in a random utility model, but little examination of such processes for choice set formation. There is also emerging evidence in the marketplace of the influence of friends on choice sets and choices. In this paper, we develop discrete choice models that incorporate formal social network structures into the choice set formation process in a two-stage random utility framework. We assess models where peers may affect not only the alternatives that individuals consider or include in their choice sets, but also their consumption choices. We explore the properties of our models and evaluate the extent of “errors” in the assessment of preferences, economic welfare measures, and market shares when network effects are present but are not accounted for in the econometric model. Our results shed light on the importance of evaluating peer or network effects on the inclusion or exclusion of alternatives in a random utility choice framework.
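A toy version of the two-stage structure can be simulated directly: a first stage in which an alternative enters an individual's consideration set either independently or because a network neighbor considers it, and a second stage of multinomial logit choice within the realized set. The network density, utilities, and consideration probabilities below are all invented.

```python
import numpy as np

rng = np.random.default_rng(8)
N, J = 200, 5                                   # individuals, alternatives
V = np.array([0.0, 0.2, 0.4, 0.6, 0.8])        # systematic utilities (assumed)

# random friendship network (symmetric, no self-ties)
A = rng.random((N, N)) < 0.03
A = np.triu(A, 1)
A = A | A.T

# stage 1: own consideration plus spillover from any friend's consideration
base = rng.random((N, J)) < 0.4                 # independently considered
base[:, 0] = True                               # a default alternative everyone has
consider = base | (A.astype(int) @ base.astype(int) > 0)

# stage 2: multinomial logit restricted to each individual's choice set
expV = np.exp(V)[None, :] * consider            # zero utility weight outside the set
prob = expV / expV.sum(axis=1, keepdims=True)
choice = np.array([rng.choice(J, p=pr) for pr in prob])
```

Ignoring stage 1 (i.e., assuming everyone faces all J alternatives) would distort both estimated preferences and predicted market shares, which is the "error" the paper quantifies.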

**Editorial Announcement**
*Econometrics* **2016**, *4*(4), 40; doi:10.3390/econometrics4040040 - 10 October 2016
**Abstract**
I am pleased to announce that, following my retirement on 30 September 2016, Marc Paolella will become Editor-in-Chief (EiC) of Econometrics.

**Estimation of Dynamic Panel Data Models with Stochastic Volatility Using Particle Filters**
*Econometrics* **2016**, *4*(4), 39; doi:10.3390/econometrics4040039 - 9 October 2016
**Abstract**

Time-varying volatility is common in macroeconomic data and has been incorporated into macroeconomic models in recent work. Dynamic panel data models have become increasingly popular in macroeconomics to study common relationships across countries or regions. This paper estimates dynamic panel data models with stochastic volatility by maximizing an approximate likelihood obtained via Rao-Blackwellized particle filters. Monte Carlo studies reveal the good and stable performance of our particle filter-based estimator. When the volatility of volatility is high, or when regressors are absent but stochastic volatility exists, our approach can outperform both the maximum likelihood estimator that neglects stochastic volatility and generalized method of moments (GMM) estimators.
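The paper uses Rao-Blackwellized particle filters; as a rough illustration of the underlying idea only, the sketch below approximates the log-likelihood of a plain univariate stochastic-volatility model with a bootstrap particle filter. All parameter values are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

# SV model: y_t = exp(h_t / 2) * eps_t,  h_t = mu + phi * (h_{t-1} - mu) + sigma * eta_t
mu, phi, sigma, T = -1.0, 0.95, 0.2, 200
h_true = np.empty(T)
h_true[0] = mu + sigma / np.sqrt(1 - phi**2) * rng.normal()
for t in range(1, T):
    h_true[t] = mu + phi * (h_true[t - 1] - mu) + sigma * rng.normal()
y = np.exp(h_true / 2) * rng.normal(size=T)

def pf_loglik(y, mu, phi, sigma, M=1000, rng=rng):
    """Bootstrap particle filter approximation of the SV log-likelihood."""
    h = mu + sigma / np.sqrt(1 - phi**2) * rng.normal(size=M)   # stationary start
    ll = 0.0
    for obs in y:
        h = mu + phi * (h - mu) + sigma * rng.normal(size=M)    # propagate particles
        var = np.exp(h)
        w = np.exp(-0.5 * obs**2 / var) / np.sqrt(2 * np.pi * var)  # N(0, e^h) density
        ll += np.log(w.mean())                                  # likelihood increment
        h = rng.choice(h, size=M, p=w / w.sum())                # multinomial resampling
    return ll

ll = pf_loglik(y, mu, phi, sigma)
```

Maximizing this simulated likelihood over the model parameters is the estimation strategy; Rao-Blackwellization, as in the paper, integrates out the linear states analytically so that only the volatility states need particles.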

**Econometric Information Recovery in Behavioral Networks**
*Econometrics* **2016**, *4*(3), 38; doi:10.3390/econometrics4030038 - 14 September 2016
**Abstract**

In this paper, we suggest an approach to recovering behavior-related, preference-choice network information from observational data. We model the process as a self-organized, behavior-based random exponential network-graph system. To address the unknown nature of the sampling model in recovering behavior-related network information, we use the Cressie-Read (CR) family of divergence measures and the corresponding information-theoretic entropy basis for estimation, inference, model evaluation, and prediction. Examples are included to clarify how entropy-based information-theoretic methods are directly applicable to recovering the behavioral network probabilities in this fundamentally underdetermined, ill-posed inverse recovery problem.
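The Cressie-Read family is a one-parameter family of power divergences, $I(p, q; \lambda) = \frac{1}{\lambda(\lambda+1)} \sum_i p_i \left[(p_i/q_i)^\lambda - 1\right]$, whose limits at λ = 0 and λ = −1 recover the two Kullback-Leibler directions. A minimal sketch:

```python
import numpy as np

def cressie_read(p, q, lam):
    """Cressie-Read power divergence between discrete distributions p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if abs(lam) < 1e-10:            # lam -> 0 limit: KL(p || q)
        return np.sum(p * np.log(p / q))
    if abs(lam + 1) < 1e-10:        # lam -> -1 limit: KL(q || p)
        return np.sum(q * np.log(q / p))
    return np.sum(p * ((p / q) ** lam - 1)) / (lam * (lam + 1))

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.25, 0.25, 0.5])
kl = cressie_read(p, q, 0.0)
```

Different λ values weight discrepancies between the recovered network probabilities and the reference measure differently, which is what gives the CR family its flexibility in ill-posed inverse problems.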

**Generalized Fractional Processes with Long Memory and Time Dependent Volatility Revisited**
*Econometrics* **2016**, *4*(3), 37; doi:10.3390/econometrics4030037 - 5 September 2016
**Abstract**

In recent years, fractionally-differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper revisits the class of generalized fractionally-differenced processes generated by Gegenbauer polynomials and the ARMA structure (GARMA) with both long memory and time-dependent innovation variance. We establish the existence and uniqueness of second-order solutions. We also extend this family by allowing the innovations to follow GARCH and stochastic volatility (SV) processes. Under certain regularity conditions, we give asymptotic results for the approximate maximum likelihood estimator of the GARMA-GARCH model. We discuss a Monte Carlo likelihood method for the GARMA-SV model and investigate finite-sample properties via Monte Carlo experiments. Finally, we illustrate the usefulness of this approach using monthly inflation rates for France, Japan and the United States.
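The Gegenbauer long-memory filter $(1 - 2uB + B^2)^{-d}$ expands as $\sum_j C_j^{(d)}(u) B^j$, where the $C_j^{(d)}(u)$ obey the classical three-term Gegenbauer recurrence. The sketch below computes the coefficients and simulates a pure Gegenbauer process by a truncated MA(∞); truncation length and parameter values are illustrative assumptions.

```python
import numpy as np

def gegenbauer_coeffs(d, u, J):
    """Coefficients of (1 - 2uB + B^2)^(-d) = sum_j C_j^{(d)}(u) B^j."""
    c = np.empty(J + 1)
    c[0] = 1.0
    if J >= 1:
        c[1] = 2.0 * d * u
    for j in range(2, J + 1):
        # standard Gegenbauer three-term recurrence
        c[j] = (2.0 * u * (j + d - 1) * c[j - 1] - (j + 2 * d - 2) * c[j - 2]) / j
    return c

d, u, J = 0.3, 0.8, 500
psi = gegenbauer_coeffs(d, u, J)

# truncated MA(inf) simulation of a Gegenbauer (GARMA(0,0)) process
rng = np.random.default_rng(3)
eps = rng.normal(size=2000 + J)
x = np.convolve(eps, psi, mode="valid")   # length 2000 sample path
```

Allowing the innovations `eps` to follow a GARCH or SV recursion instead of i.i.d. normals gives the time-dependent innovation variance the paper studies.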

**Nonparametric Regression with Common Shocks**
*Econometrics* **2016**, *4*(3), 36; doi:10.3390/econometrics4030036 - 1 September 2016
**Abstract**

This paper considers a nonparametric regression model for cross-sectional data in the presence of common shocks. Common shocks are allowed to be very general in nature; they do not need to be finite dimensional with a known (small) number of factors. I investigate the properties of the Nadaraya-Watson kernel estimator and determine how general the common shocks can be while still obtaining meaningful kernel estimates. Restrictions on the common shocks are necessary because kernel estimators typically manipulate conditional densities, and conditional densities do not necessarily exist in the present case. By appealing to disintegration theory, I provide sufficient conditions for the existence of such conditional densities and show that the estimator converges in probability to the Kolmogorov conditional expectation given the sigma-field generated by the common shocks. I also establish the rate of convergence and the asymptotic distribution of the kernel estimator.
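For readers unfamiliar with the estimator under study, the Nadaraya-Watson estimator is a kernel-weighted local average of the response. A minimal sketch with a Gaussian kernel and invented data (the paper's common-shock structure is not simulated here):

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson kernel regression estimate of E[y | x] on x_grid."""
    u = (x_grid[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * u**2)                     # Gaussian kernel weights
    return (K * y).sum(axis=1) / K.sum(axis=1)  # weighted local average

rng = np.random.default_rng(4)
x = rng.uniform(-2, 2, 1000)
y = np.sin(x) + 0.3 * rng.normal(size=1000)
grid = np.array([-1.0, 0.0, 1.0])
m_hat = nadaraya_watson(grid, x, y, h=0.2)
```

The paper's point is about what this estimator converges to when observations share common shocks: the limit is the conditional expectation given the sigma-field generated by those shocks, not the unconditional regression function.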

**Special Issues of Econometrics: Celebrated Econometricians**
*Econometrics* **2016**, *4*(3), 35; doi:10.3390/econometrics4030035 - 17 August 2016
**Abstract**
Econometrics is pleased to announce the commissioning of a new series of Special Issues dedicated to celebrated econometricians of our time.[...]

**Jump Variation Estimation with Noisy High Frequency Financial Data via Wavelets**
*Econometrics* **2016**, *4*(3), 34; doi:10.3390/econometrics4030034 - 16 August 2016
**Abstract**

This paper develops a method to improve the estimation of jump variation using high frequency data with the existence of market microstructure noises. Accurate estimation of jump variation is in high demand, as it is an important component of volatility in finance for portfolio allocation, derivative pricing and risk management. The method has a two-step procedure with detection and estimation. In Step 1, we detect the jump locations by performing wavelet transformation on the observed noisy price processes. Since wavelet coefficients are significantly larger at the jump locations than elsewhere, we compare the wavelet coefficients against a threshold and declare a jump point wherever the absolute wavelet coefficient exceeds the threshold. In Step 2, we estimate the jump variation by averaging the noisy price processes on each side of a declared jump point and then taking the difference between the two averages at the jump point. Specifically, for each jump location detected in Step 1, we compute two averages from the observed noisy price processes, one before the detected jump location and one after it, and take their difference to estimate the jump variation. Theoretically, we show that the two-step procedure based on average realized volatility processes can achieve a convergence rate close to $O_P(n^{-4/9})$, which is better than the convergence rate $O_P(n^{-1/4})$ for the procedure based on the original noisy process, where *n* is the sample size. Numerically, the method based on average realized volatility processes indeed performs better than that based on the price processes. Empirically, we study the distribution of jump variation using Dow Jones Industrial Average stocks and compare the results using the original price process and the average realized volatility processes.
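The two-step procedure can be caricatured with the simplest (Haar-type) wavelet coefficient, which is just the difference of adjacent window means. The threshold rule, window length, and simulation parameters below are assumptions for the demo, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 1000, 25
p = np.cumsum(rng.normal(scale=0.01, size=n))     # latent efficient log-price
p[600:] += 0.5                                    # a single jump at t = 600
obs = p + rng.normal(scale=0.02, size=n)          # add microstructure noise

# Step 1: Haar-type coefficients = difference of adjacent window means
d = np.array([obs[t:t + k].mean() - obs[t - k:t].mean() for t in range(k, n - k)])
thr = 4 * np.median(np.abs(d)) / 0.6745           # robust MAD threshold (a common choice)
detected = np.flatnonzero(np.abs(d) > thr) + k    # declared jump locations

# Step 2: jump size = difference of the two one-sided averages at the jump
t_hat = k + np.argmax(np.abs(d))
size_hat = obs[t_hat:t_hat + k].mean() - obs[t_hat - k:t_hat].mean()
```

Averaging k noisy observations on each side is what suppresses the microstructure noise and drives the improved convergence rate the paper establishes.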

**Measuring the Distance between Sets of ARMA Models**
*Econometrics* **2016**, *4*(3), 32; doi:10.3390/econometrics4030032 - 15 July 2016
**Abstract**

A distance between pairs of sets of autoregressive moving average (ARMA) processes is proposed. Its main properties are discussed. The paper also shows how the proposed distance finds application in time series analysis. In particular it can be used to evaluate the distance between portfolios of ARMA models or the distance between vector autoregressive (VAR) models.
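The paper's metric is its own construction; as a generic stand-in that conveys the flavor, one common way to compare ARMA processes is the Euclidean distance between their truncated ψ-weight (impulse-response) sequences:

```python
import numpy as np

def impulse_response(ar, ma, J=200):
    """psi-weights of an ARMA model x_t = sum_i ar_i x_{t-i} + eps_t + sum_j ma_j eps_{t-j}."""
    psi = np.zeros(J + 1)
    psi[0] = 1.0
    for j in range(1, J + 1):
        acc = ma[j - 1] if j - 1 < len(ma) else 0.0
        for i, a in enumerate(ar, start=1):
            if j - i >= 0:
                acc += a * psi[j - i]
        psi[j] = acc
    return psi

def arma_distance(m1, m2, J=200):
    """Euclidean distance between truncated psi-weight sequences (a simple stand-in metric)."""
    return np.linalg.norm(impulse_response(*m1, J) - impulse_response(*m2, J))

d_same = arma_distance(([0.5], []), ([0.5], []))
d_diff = arma_distance(([0.5], []), ([0.9], []))
```

A distance between *sets* of models, as in the paper, then requires aggregating pairwise distances (e.g., Hausdorff-style), which this sketch leaves out.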

**Market Microstructure Effects on Firm Default Risk Evaluation**
*Econometrics* **2016**, *4*(3), 31; doi:10.3390/econometrics4030031 - 8 July 2016
**Abstract**

Default probability is a fundamental variable determining the creditworthiness of a firm, and equity volatility estimation plays a key role in its evaluation. Assuming a structural credit risk modeling approach, we study the impact of choosing different non-parametric equity volatility estimators on default probability evaluation when market microstructure noise is considered. A general stochastic volatility framework with jumps for the underlying asset dynamics is defined inside a Merton-like structural model. To estimate the volatility risk component of a firm, we use high-frequency equity data: market microstructure noise is introduced as a direct effect of observing noisy high-frequency equity prices. A Monte Carlo simulation analysis is conducted to (i) test the performance of alternative non-parametric equity volatility estimators in their capability of filtering out the microstructure noise and backing out the true unobservable asset volatility; and (ii) study the effects of different non-parametric estimation techniques on default probability evaluation. The impact of the non-parametric volatility estimators on risk evaluation is not negligible: a sensitivity analysis defined for alternative values of the leverage parameter and average jump size reveals that the characteristics of the dataset are crucial in determining the proper estimator to consider from a credit risk perspective.
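One widely used noise-robust volatility estimator in this literature is the two-scale realized variance, which combines subsampled realized variances with a bias correction. The sketch below (simulation parameters invented) illustrates that generic idea and is not one of the specific estimators the paper compares.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 23400
x = np.cumsum(rng.normal(scale=0.0005, size=n))   # efficient log-price
y = x + rng.normal(scale=0.001, size=n)           # observed price with microstructure noise

def realized_variance(p, stride=1):
    """Sum of squared returns at a given sampling stride."""
    r = np.diff(p[::stride])
    return np.sum(r**2)

def tsrv(p, K=300):
    """Two-scale realized variance: averaged subsampled RV minus a noise correction."""
    n = len(p) - 1
    rv_sub = np.mean([realized_variance(p[k:], stride=K) for k in range(K)])
    nbar = (n - K + 1) / K
    return rv_sub - (nbar / n) * realized_variance(p)

rv_all = realized_variance(y)   # badly biased upward by noise
est = tsrv(y)                   # close to the true quadratic variation
```

In a structural credit model, the filtered equity volatility feeds the asset-volatility backing-out step, so the choice among such estimators propagates directly into the default probability.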

**Estimation of Gini Index within Pre-Specified Error Bound**
*Econometrics* **2016**, *4*(3), 30; doi:10.3390/econometrics4030030 - 24 June 2016
**Abstract**

The Gini index is a widely used measure of economic inequality. This article develops a theory and methodology for constructing a confidence interval for the Gini index with a specified confidence coefficient and a specified width, without assuming any specific distribution of the data. Fixed-sample-size methods cannot simultaneously achieve both a specified confidence coefficient and a fixed width. We develop a purely sequential procedure for interval estimation of the Gini index with a specified confidence coefficient and a specified margin of error. Optimality properties of the proposed method, namely first-order asymptotic efficiency and asymptotic consistency, are proved under mild moment assumptions on the distribution of the data.
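For reference, the point estimate at the heart of the procedure can be computed from order statistics via the standard mean-difference formula; the paper's sequential stopping rule is not reproduced here.

```python
import numpy as np

def gini(x):
    """Sample Gini index via the order-statistics form of the mean-difference formula."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    i = np.arange(1, n + 1)
    # G = sum_i (2i - n - 1) x_(i) / (n^2 * mean)
    return np.sum((2 * i - n - 1) * x) / (n * n * x.mean())
```

A sequential procedure would recompute such an estimate (and its estimated variance) as observations accrue, stopping once the implied confidence interval is narrower than the pre-specified margin of error.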

**Evaluating Eigenvector Spatial Filter Corrections for Omitted Georeferenced Variables**
*Econometrics* **2016**, *4*(2), 29; doi:10.3390/econometrics4020029 - 21 June 2016
**Abstract**

The Ramsey regression equation specification error test (RESET) furnishes a diagnostic for omitted variables in a linear regression model specification (*i.e.*, the null hypothesis is no omitted variables). Integer powers of fitted values from a regression analysis are introduced as additional covariates in a second regression analysis. The former regression model can be considered restricted, whereas the latter can be considered unrestricted; the restricted model is nested within the unrestricted one. A RESET significance test is conducted with an *F*-test using the error sums of squares and the degrees of freedom of the two models. For georeferenced data, eigenvectors can be extracted from a modified spatial weights matrix and included in a linear regression model specification to account for the presence of nonzero spatial autocorrelation. The intuition underlying this methodology is that these synthetic variates function as surrogates for omitted variables. Accordingly, a restricted regression model without eigenvectors should indicate an omitted variables problem, whereas an unrestricted regression model with eigenvectors should result in a failure to reject the RESET null hypothesis. This paper furnishes eleven empirical examples, covering a wide range of spatial attribute data types, that illustrate the effectiveness of eigenvector spatial filtering in addressing the omitted variables problem for georeferenced data as measured by the RESET.
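The RESET construction described above (restricted model, fitted-value powers, nested *F*-test) can be sketched directly; the synthetic data are invented and the spatial-eigenvector step is not included.

```python
import numpy as np

def reset_test(y, X, powers=(2, 3)):
    """Ramsey RESET F-statistic: add powers of fitted values and test them jointly."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])                 # restricted model
    fitted = X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
    rss_r = np.sum((y - fitted)**2)
    X2 = np.column_stack([X1] + [fitted**p for p in powers])  # unrestricted model
    resid_u = y - X2 @ np.linalg.lstsq(X2, y, rcond=None)[0]
    rss_u = np.sum(resid_u**2)
    q = len(powers)
    return ((rss_r - rss_u) / q) / (rss_u / (n - X2.shape[1]))

rng = np.random.default_rng(6)
x = rng.normal(size=500)
F_lin = reset_test(x + rng.normal(size=500), x)           # correctly specified
F_quad = reset_test(x + x**2 + rng.normal(size=500), x)   # omitted x^2 term
```

In the paper's setting, appending spatial-filter eigenvectors to `X` plays the role of restoring the omitted variables, so the RESET statistic of the augmented model should fall back toward its null distribution.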

**Testing Symmetry of Unknown Densities via Smoothing with the Generalized Gamma Kernels**
*Econometrics* **2016**, *4*(2), 28; doi:10.3390/econometrics4020028 - 17 June 2016
**Abstract**

This paper improves a kernel-smoothed test of symmetry by combining it with a new class of asymmetric kernels called the generalized gamma kernels. It is demonstrated that the improved test statistic has a normal limit under the null of symmetry and is consistent under the alternative. A test-oriented smoothing parameter selection method is also proposed to implement the test. Monte Carlo simulations indicate superior finite-sample performance of the test statistic. It is worth emphasizing that the performance is grounded on the first-order normal limit and a small number of observations, despite a nonparametric convergence rate and a sample-splitting procedure of the test.
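The generalized gamma kernels are the paper's own construction; as a simpler relative that conveys the asymmetric-kernel idea, the sketch below implements a basic gamma-kernel density estimator on [0, ∞) in the spirit of Chen (2000), with an invented bandwidth and synthetic exponential data.

```python
import numpy as np
from scipy.stats import gamma

def gamma_kde(x_grid, data, b):
    """Gamma-kernel density estimate on [0, inf): kernel shape varies with x."""
    # at each evaluation point x, average a Gamma(x/b + 1, scale=b) density over the data
    return np.array([gamma.pdf(data, a=x / b + 1, scale=b).mean() for x in x_grid])

rng = np.random.default_rng(10)
data = rng.exponential(size=2000)             # Exp(1) sample, true density exp(-x)
f_hat = gamma_kde([0.5, 1.0, 2.0], data, b=0.05)
```

Because the kernel's shape adapts to the evaluation point, there is no boundary bias at zero, which is the property that makes asymmetric kernels attractive for densities on the half-line and, in the paper, for smoothed symmetry testing.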