Open Access Article
Johansen’s Reduced Rank Estimator Is GMM
Econometrics 2018, 6(2), 26; https://doi.org/10.3390/econometrics6020026
Abstract
The generalized method of moments (GMM) estimator of the reduced-rank regression model is derived under the assumption of conditional homoscedasticity. It is shown that this GMM estimator is algebraically identical to the maximum likelihood estimator under normality developed by Johansen (1988). This includes the vector error correction model (VECM) of Engle and Granger. It is also shown that GMM tests for reduced rank (cointegration) are algebraically similar to the Gaussian likelihood ratio tests. This shows that normality is not necessary to motivate these estimators and tests.
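As a point of reference for the estimator discussed in this abstract, the following Python sketch carries out the standard reduced-rank (Johansen-type) eigenvalue computation. The residual matrices, variable names, and rank choice are illustrative assumptions; the paper's GMM derivation itself is not reproduced here.

```python
# Minimal sketch of the reduced-rank regression (Johansen 1988) eigenvalue computation.
# R0 and R1 stand for the residual matrices (T x n) from regressing the differences and
# the lagged levels on the short-run dynamics; these inputs are assumed, not the paper's.
import numpy as np
from scipy.linalg import eigh, inv

def johansen_rrr(R0, R1, rank):
    T = R0.shape[0]
    S00, S11, S01 = R0.T @ R0 / T, R1.T @ R1 / T, R0.T @ R1 / T
    # Solve |lambda * S11 - S10 * S00^{-1} * S01| = 0 as a generalized eigenproblem.
    eigvals, eigvecs = eigh(S01.T @ inv(S00) @ S01, S11)
    order = np.argsort(eigvals)[::-1]                      # largest eigenvalues first
    beta = eigvecs[:, order[:rank]]                        # cointegrating vectors
    alpha = S01 @ beta @ inv(beta.T @ S11 @ beta)          # adjustment coefficients
    return alpha, beta, eigvals[order]

# Toy example with artificial residual matrices (T = 200, n = 3 variables).
rng = np.random.default_rng(0)
R0, R1 = rng.normal(size=(200, 3)), rng.normal(size=(200, 3))
alpha, beta, lambdas = johansen_rrr(R0, R1, rank=1)
print(lambdas)
```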
Open Access Editorial
Recent Developments in Macro-Econometric Modeling: Theory and Applications
Econometrics 2018, 6(2), 25; https://doi.org/10.3390/econometrics6020025
Abstract
Developments in macro-econometrics have been evolving since the aftermath of the Second World War. [...]
Open Access Article
A Hybrid MCMC Sampler for Unconditional Quantile Based on Influence Function
Econometrics 2018, 6(2), 24; https://doi.org/10.3390/econometrics6020024
Abstract
In this study, we provide a Bayesian estimation method for the unconditional quantile regression model based on the Re-centered Influence Function (RIF). The method makes use of the dichotomous structure of the RIF and estimates a non-linear probability model by a logistic regression using a Gibbs within a Metropolis-Hastings sampler. This approach performs better in the presence of heavy-tailed distributions. Applied to a nationally-representative household survey, the Senegal Poverty Monitoring Report (2005), the results show that the change in the rate of returns to education across quantiles is substantially lower at the primary level.
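To illustrate the dichotomous structure of the RIF mentioned here, the sketch below computes the RIF of a quantile and fits a logistic regression to the underlying indicator; it is a frequentist stand-in for the paper's Gibbs/Metropolis-Hastings sampler, and the simulated data and variable names are assumptions.

```python
# RIF of the tau-quantile: RIF(y) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau).
# The only random component is the indicator, hence a binary-response model applies.
import numpy as np
import statsmodels.api as sm
from scipy.stats import gaussian_kde

def rif_quantile(y, tau):
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]              # kernel density estimate at the quantile
    return q + (tau - (y <= q)) / f_q

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 1.0 + X @ np.array([0.5, -0.3]) + rng.standard_t(df=3, size=500)   # heavy-tailed noise

tau = 0.5
q = np.quantile(y, tau)
# Dichotomous structure: model P(y > q_tau | X) with a logit ...
logit = sm.Logit((y > q).astype(int), sm.add_constant(X)).fit(disp=0)
# ... or regress the RIF directly on X (unconditional quantile regression a la
# Firpo, Fortin and Lemieux, 2009).
rif_ols = sm.OLS(rif_quantile(y, tau), sm.add_constant(X)).fit()
print(logit.params, rif_ols.params)
```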
Open Access Article
Forecasting Inflation Uncertainty in the G7 Countries
Econometrics 2018, 6(2), 23; https://doi.org/10.3390/econometrics6020023
Abstract
There is substantial evidence that inflation rates are characterized by long memory and nonlinearities. In this paper, we introduce a long-memory Smooth Transition AutoRegressive Fractionally Integrated Moving Average-Markov Switching Multifractal specification [STARFIMA(p,d,q)-MSM(k)] for modeling and forecasting inflation uncertainty. We first provide the statistical properties of the process and investigate the finite sample properties of the maximum likelihood estimators through simulation. Second, we evaluate the out-of-sample forecast performance of the model in forecasting inflation uncertainty in the G7 countries. Our empirical analysis demonstrates the superiority of the new model over the alternative STARFIMA(p,d,q)-GARCH-type models in forecasting inflation uncertainty.
Open Access Article
Parametric Inference for Index Functionals
Econometrics 2018, 6(2), 22; https://doi.org/10.3390/econometrics6020022
Abstract
In this paper, we study the finite sample accuracy of confidence intervals for index functionals built via parametric bootstrap, in the case of inequality indices. To estimate the parameters of the assumed parametric data generating distribution, we propose a Generalized Method of Moments estimator that targets the quantity of interest, namely the considered inequality index. Its primary advantage is that the scale parameter does not need to be estimated to perform the parametric bootstrap, since inequality measures are scale invariant. The very good finite sample coverages found in a simulation study suggest that this feature provides an advantage over the parametric bootstrap using the maximum likelihood estimator. We also find that, overall, the parametric bootstrap provides more accurate inference than its non- or semi-parametric counterparts, especially for heavy-tailed income distributions.
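To illustrate the scale-invariance point made here, the sketch below builds a parametric bootstrap confidence interval for the Gini index under an assumed lognormal income distribution, a deliberate simplification of the paper's setting: the shape parameter is chosen to match the sample Gini (a GMM-style step targeting the quantity of interest), and the scale parameter never needs to be estimated.

```python
# Parametric bootstrap CI for the Gini index under an illustrative lognormal assumption.
# For the lognormal, Gini = 2*Phi(sigma/sqrt(2)) - 1 depends only on the shape sigma.
import numpy as np
from scipy.stats import norm

def gini(y):
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    return 2 * np.sum(np.arange(1, n + 1) * y) / (n * y.sum()) - (n + 1) / n

def sigma_from_gini(g):
    # Invert the lognormal Gini formula so the fitted model reproduces the sample Gini.
    return np.sqrt(2) * norm.ppf((g + 1) / 2)

rng = np.random.default_rng(1)
income = rng.lognormal(mean=10.0, sigma=0.8, size=2000)     # synthetic income data

g_hat = gini(income)
sigma_hat = sigma_from_gini(g_hat)
# Scale invariance: bootstrap samples can be drawn with any scale (here exp(0)).
boot = [gini(rng.lognormal(0.0, sigma_hat, size=len(income))) for _ in range(999)]
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"Gini = {g_hat:.3f}, 95% parametric bootstrap CI = ({lo:.3f}, {hi:.3f})")
```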
Open Access Review
Using the GB2 Income Distribution
Econometrics 2018, 6(2), 21; https://doi.org/10.3390/econometrics6020021
Abstract
To use the generalized beta distribution of the second kind (GB2) for the analysis of income and other positively skewed distributions, knowledge of estimation methods and the ability to compute quantities of interest from the estimated parameters are required. We review estimation methodology that has appeared in the literature, and summarize expressions for inequality, poverty, and pro-poor growth that can be used to compute these measures from GB2 parameter estimates. An application to data from China and Indonesia is provided.
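As a rough illustration of the estimation step described above, the sketch below fits the GB2 by maximum likelihood and computes one quantity of interest (the mean) from the estimated parameters; the data, starting values, and optimizer settings are illustrative assumptions.

```python
# GB2 density in the standard parameterisation:
# f(y) = a * y^(ap-1) / ( b^(ap) * B(p,q) * (1 + (y/b)^a)^(p+q) ),  y > 0.
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, beta as beta_fn

def gb2_logpdf(y, a, b, p, q):
    return (np.log(a) + (a * p - 1) * np.log(y) - a * p * np.log(b)
            - betaln(p, q) - (p + q) * np.log1p((y / b) ** a))

def gb2_mean(a, b, p, q):
    # E[Y] = b * B(p + 1/a, q - 1/a) / B(p, q), provided q > 1/a.
    return b * beta_fn(p + 1 / a, q - 1 / a) / beta_fn(p, q)

def fit_gb2(y):
    # Optimise on the log scale to keep all four parameters positive.
    def negll(theta):
        a, b, p, q = np.exp(theta)
        return -np.sum(gb2_logpdf(y, a, b, p, q))
    start = np.log([2.0, np.median(y), 1.0, 1.0])
    res = minimize(negll, start, method="Nelder-Mead", options={"maxiter": 5000})
    return np.exp(res.x)

rng = np.random.default_rng(2)
incomes = rng.lognormal(mean=9.5, sigma=0.7, size=3000)     # stand-in income data
a, b, p, q = fit_gb2(incomes)
print("GB2 estimates:", a, b, p, q, " implied mean:", gb2_mean(a, b, p, q))
```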
Open Access Article
Polarization and Rising Wage Inequality: Comparing the U.S. and Germany
Econometrics 2018, 6(2), 20; https://doi.org/10.3390/econometrics6020020
Abstract
Since the late 1970s, wage inequality has increased strongly both in the U.S. and Germany, but the trends have been different. Wage inequality increased along the entire wage distribution during the 1980s in the U.S. and since the mid-1990s in Germany. There is evidence for wage polarization in the U.S. in the 1990s, and the increase in wage inequality in Germany was restricted to the top of the distribution before the 1990s. Using an approach developed by MaCurdy and Mroz (1995) to separate age, time, and cohort effects, we find a large role played by cohort effects in Germany, while we find only small cohort effects in the U.S. Employment trends in both countries are consistent with polarization since the 1990s. The evidence is consistent with a technology-driven polarization of the labor market, but this cannot explain the country-specific differences.
Open Access Article
TSLS and LIML Estimators in Panels with Unobserved Shocks
Econometrics 2018, 6(2), 19; https://doi.org/10.3390/econometrics6020019
Abstract
The properties of the two stage least squares (TSLS) and limited information maximum likelihood (LIML) estimators in panel data models where the observables are affected by common shocks, modelled through unobservable factors, are studied for the case where the time series dimension is fixed. We show that the key assumption in determining the consistency of the panel TSLS and LIML estimators, as the cross section dimension tends to infinity, is the lack of correlation between the factor loadings in the errors and in the exogenous variables—including the instruments—conditional on the common shocks. If this condition fails, both estimators have degenerate distributions. When the panel TSLS and LIML estimators are consistent, they have covariance-matrix mixed-normal distributions asymptotically. Tests on the coefficients can be constructed in the usual way and have standard distributions under the null hypothesis. Full article
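For readers who want the baseline estimator in code, the following sketch computes textbook TSLS, β̂ = (X′P_Z X)⁻¹ X′P_Z y; it deliberately ignores the panel/common-shock structure that is the paper's focus, and the simulated design is an assumption.

```python
# Plain two-stage least squares with a projection onto the instrument space.
import numpy as np

def tsls(y, X, Z):
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)          # projection onto the instruments
    XPZ = X.T @ PZ
    return np.linalg.solve(XPZ @ X, XPZ @ y)

rng = np.random.default_rng(3)
n = 1000
z = rng.normal(size=(n, 2))                          # instruments
u = rng.normal(size=n)                               # structural error
x = z @ np.array([1.0, -0.5]) + 0.8 * u + rng.normal(size=n)   # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
print(tsls(y, X, Z))                                 # approximately [0, 2]
```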
Open Access Article
Decomposing the Bonferroni Inequality Index by Subgroups: Shapley Value and Balance of Inequality
Econometrics 2018, 6(2), 18; https://doi.org/10.3390/econometrics6020018
Abstract
Additive decomposability is an interesting feature of inequality indices which, however, is not always fulfilled; solutions to overcome this issue have been given by Deutsch and Silber (2007) and by Di Maio and Landoni (2017). In this paper, we apply these methods, based on the “Shapley value” and the “balance of inequality”, respectively, to the Bonferroni inequality index. We also discuss a comparison with the Gini concentration index and highlight interesting properties of the Bonferroni index.
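For orientation, the sketch below computes the Bonferroni index (the object being decomposed in the paper) alongside the Gini index used for comparison. The discrete Bonferroni formula shown, B = 1 − (1/(n−1)) Σᵢ mᵢ/μ with mᵢ the mean of the i poorest incomes, is one common finite-sample version; conventions differ slightly across papers, and the toy data are an assumption.

```python
import numpy as np

def bonferroni(y):
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    partial_means = np.cumsum(y) / np.arange(1, n + 1)     # m_i, i = 1..n
    return 1.0 - partial_means[:-1].mean() / y.mean()

def gini(y):
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    return 2 * np.sum(np.arange(1, n + 1) * y) / (n * y.sum()) - (n + 1) / n

incomes = np.array([10, 20, 30, 50, 90], dtype=float)
print(bonferroni(incomes), gini(incomes))   # Bonferroni weights the poorest more heavily
```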
Open Access Article
On the Decomposition of the Esteban and Ray Index by Income Sources
Econometrics 2018, 6(2), 17; https://doi.org/10.3390/econometrics6020017
Abstract
This paper proposes a simple algorithm based on a matrix formulation to compute the Esteban and Ray (ER) polarization index. It then shows how the algorithm introduced leads to quite a simple decomposition of polarization by income sources. Such a breakdown was not available hitherto. The decomposition we propose will thus allow one to determine the sign, as well as the magnitude, of the impact of the various income sources on the ER polarization index. A simple empirical illustration based on EU data is provided.
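In the spirit of the matrix formulation mentioned above, the sketch below evaluates the ER index, P_α = Σᵢ Σⱼ πᵢ^(1+α) πⱼ |yᵢ − yⱼ|, as a quadratic form; the paper's exact normalisation and its income-source decomposition are not reproduced, and the toy groups are assumptions.

```python
import numpy as np

def esteban_ray(pi, y, alpha=1.0):
    pi = np.asarray(pi, dtype=float)
    y = np.asarray(y, dtype=float)
    D = np.abs(y[:, None] - y[None, :])          # matrix of absolute income distances
    return (pi ** (1.0 + alpha)) @ D @ pi        # matrix form of the double sum

# Three income groups with population shares pi and mean incomes y.
pi = np.array([0.4, 0.4, 0.2])
y = np.array([1.0, 2.0, 5.0])
print(esteban_ray(pi, y, alpha=1.3))
```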
Open Access Article
Data-Driven Jump Detection Thresholds for Application in Jump Regressions
Econometrics 2018, 6(2), 16; https://doi.org/10.3390/econometrics6020016
Abstract
This paper develops a method to select the threshold in threshold-based jump detection methods. The method is motivated by an analysis of threshold-based jump detection methods in the context of jump-diffusion models. We show that, over the range of sampling frequencies a researcher is most likely to encounter, the usual in-fill asymptotics provide a poor guide for selecting the jump threshold. Because of this, we develop a sample-based method. Our method estimates the number of jumps over a grid of thresholds and selects the optimal threshold at what we term the ‘take-off’ point in the estimated number of jumps. We show that this method consistently estimates the jumps and their indices as the sampling interval goes to zero. In several Monte Carlo studies, we evaluate the performance of our method based on its ability to accurately locate jumps and its ability to distinguish between true jumps and large diffusive moves. In one of these Monte Carlo studies, we evaluate the performance of our method in a jump regression context. Finally, we apply our method in two empirical studies. In one, we estimate the number of jumps and report the jump threshold our method selects for three commonly used market indices. In the other empirical application, we perform a series of jump regressions using our method to select the jump threshold.
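The sketch below illustrates the main ingredient of the procedure described above: the curve of detected jump counts, #{|r_t| > c}, over a grid of thresholds c. Locating the 'take-off' point of this curve is the paper's contribution and is not reproduced; the simulated returns, volatility level, and injected jumps are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 78 * 252                                        # e.g. 5-minute returns over a year
sigma = 0.01 / np.sqrt(78)                          # assumed diffusive spot volatility
returns = rng.normal(0.0, sigma, size=n)
jump_times = rng.choice(n, size=20, replace=False)
returns[jump_times] += rng.choice([-1.0, 1.0], size=20) * 10 * sigma   # injected jumps

grid = np.linspace(2.0, 12.0, 21) * sigma           # thresholds in multiples of sigma
counts = np.array([(np.abs(returns) > c).sum() for c in grid])
for c, k in zip(grid / sigma, counts):
    print(f"threshold = {c:4.1f} sigma  ->  detected jumps = {k}")
# The count falls steeply while diffusive moves are still being flagged and then
# flattens near the true number of jumps (20 here); that flattening is the take-off point.
```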
Open Access Article
Income Inequality, Cohesiveness and Commonality in the Euro Area: A Semi-Parametric Boundary-Free Analysis
Econometrics 2018, 6(2), 15; https://doi.org/10.3390/econometrics6020015
Abstract
The cohesiveness of constituent nations in a confederation such as the Eurozone depends on their equally shared experiences. In terms of household incomes, commonality of distribution across those constituent nations with that of the Eurozone as an entity in itself is of the essence. Generally, income classification has proceeded by employing “hard”, somewhat arbitrary and contentious boundaries. Here, in an analysis of Eurozone household income distributions over the period 2006–2015, mixture distribution techniques are used to determine the number and size of groups or classes endogenously, without resort to such hard boundaries. In so doing, some new indices of polarization, segmentation and commonality of distribution are developed in the context of a decomposition of the Gini coefficient, and the roles of, and relationships between, these groups in societal income inequality, poverty, polarization and segmentation are examined. What emerges for the Eurozone as an entity is a four-class, increasingly unequal polarizing structure with income growth in all four classes. With regard to individual constituent nations’ class membership, some advanced and some fell back, with most exhibiting significant polarizing behaviour. However, in the face of increasing overall Eurozone inequality, constituent nations were becoming increasingly similar in distribution, which can be construed as characteristic of a more cohesive society.
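As a minimal sketch of the boundary-free, mixture-based classification described above, the code below fits normal mixtures to log household income and lets an information criterion choose the number of income classes instead of imposing hard boundaries; the mixture family, the synthetic data, and the BIC rule are assumptions, and the paper's polarization and segmentation indices are not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
log_income = np.concatenate([                       # synthetic two-class population
    rng.normal(9.5, 0.35, size=3000),
    rng.normal(10.6, 0.30, size=1000),
]).reshape(-1, 1)

fits = {k: GaussianMixture(n_components=k, random_state=0).fit(log_income)
        for k in range(1, 7)}
best_k = min(fits, key=lambda k: fits[k].bic(log_income))
classes = fits[best_k].predict(log_income)          # endogenous class membership
shares = np.bincount(classes) / len(classes)
print(f"classes chosen by BIC: {best_k}, class shares: {np.round(shares, 3)}")
```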
Open Access Article
Statistical Inference on the Canadian Middle Class
Econometrics 2018, 6(1), 14; https://doi.org/10.3390/econometrics6010014
Abstract
Conventional wisdom says that the middle classes in many developed countries have recently suffered losses, in terms of both the share of the total population belonging to the middle class and their share in total income. Here, distribution-free methods are developed for inference on these shares, by deriving expressions for the asymptotic variances of the sample estimates of the shares and for the covariance of the estimates. Asymptotic inference can be undertaken based on asymptotic normality. Bootstrap inference can be expected to be more reliable, and appropriate bootstrap procedures are proposed. As an illustration, samples of individual earnings drawn from Canadian census data are used to test various hypotheses about the middle-class shares, and confidence intervals for them are computed. It is found that, for the earlier censuses, sample sizes are large enough for asymptotic and bootstrap inference to be almost identical, but that, in the twenty-first century, the bootstrap fails on account of a strange phenomenon whereby many presumably different incomes in the data are rounded to one and the same value. Another difference between the centuries is the appearance of heavy right-hand tails in the income distributions of both men and women.
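The sketch below illustrates bootstrap inference on the two shares discussed here, with the middle class defined, purely for illustration, as persons earning between 75% and 150% of the median; the definition, the synthetic earnings data, and the percentile-interval construction are assumptions rather than the paper's procedures.

```python
import numpy as np

def middle_class_shares(y, lower=0.75, upper=1.5):
    med = np.median(y)
    in_band = (y >= lower * med) & (y <= upper * med)
    return in_band.mean(), y[in_band].sum() / y.sum()   # population share, income share

rng = np.random.default_rng(6)
income = rng.lognormal(mean=10.4, sigma=0.6, size=5000)  # stand-in earnings sample

pop_share, inc_share = middle_class_shares(income)
boot = np.array([middle_class_shares(rng.choice(income, size=len(income), replace=True))
                 for _ in range(999)])
ci_pop = np.quantile(boot[:, 0], [0.025, 0.975])
ci_inc = np.quantile(boot[:, 1], [0.025, 0.975])
print(f"population share {pop_share:.3f} CI {ci_pop.round(3)}, "
      f"income share {inc_share:.3f} CI {ci_inc.round(3)}")
```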
Open Access Article
An Overview of Modified Semiparametric Memory Estimation Methods
Econometrics 2018, 6(1), 13; https://doi.org/10.3390/econometrics6010013
Abstract
Several modified estimation methods for the memory parameter have been introduced in recent years. They aim to decrease the upward bias of the memory parameter estimate in the presence of low-frequency contaminations or an additive noise component, especially when a short-memory process is contaminated. In this paper, we provide an overview and compare the performance of nine semiparametric estimation methods. Among them are two standard methods, four modified approaches to account for low-frequency contaminations, and three procedures developed for perturbed fractional processes. We conduct an extensive Monte Carlo study for a variety of parameter constellations and several data generating processes (DGPs). Furthermore, an empirical application to the log-absolute return series of the S&P 500 shows that the estimation results, combined with a long-memory test, indicate a spurious long-memory process.
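For reference, the sketch below implements one of the standard (unmodified) semiparametric estimators against which the modified methods are compared: the log-periodogram (GPH) regression, which regresses log I(λⱼ) on −log(4 sin²(λⱼ/2)) over the first m Fourier frequencies; the bandwidth choice m = √n and the white-noise example are assumptions.

```python
import numpy as np

def gph(x, m=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = m or int(n ** 0.5)                          # common bandwidth choice m = sqrt(n)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    periodogram = np.abs(dft) ** 2 / (2 * np.pi * n)
    regressor = -np.log(4 * np.sin(freqs / 2) ** 2)
    slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
    return slope                                     # estimate of the memory parameter d

rng = np.random.default_rng(7)
white_noise = rng.normal(size=4096)                  # true d = 0
print(f"GPH estimate of d: {gph(white_noise):.3f}")
```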
Open Access Article
Response-Based Sampling for Binary Choice Models With Sample Selection
Econometrics 2018, 6(1), 12; https://doi.org/10.3390/econometrics6010012
Abstract
Sample selection models attempt to correct for non-randomly selected data in a two-model hierarchy where, on the first level, a binary selection equation determines whether a particular observation will be available for the second level (outcome equation). If the non-random selection mechanism induced by the selection equation is ignored, the coefficient estimates in the outcome equation may be severely biased. When the selection mechanism leads to many censored observations, few data are available for the estimation of the outcome equation parameters, giving rise to computational difficulties. In this context, the main reference is Greene (2008), who extends the results obtained by Manski and Lerman (1977) and develops an estimator that requires knowledge of the true proportion of occurrences in the outcome equation. We develop a method that exploits the advantages of response-based sampling schemes in the context of binary response models with sample selection, relaxing this assumption. Estimation is based on a weighted version of Heckman’s likelihood, where the weights take into account the sampling design. In a simulation study, we found that, for the outcome equation, the results obtained with our estimator are comparable to Greene’s in terms of mean square error. Moreover, in a real data application, it is preferable in terms of the percentage of correct predictions.
Open Access Article
Jackknife Bias Reduction in the Presence of a Near-Unit Root
Econometrics 2018, 6(1), 11; https://doi.org/10.3390/econometrics6010011
Abstract
This paper considers the specification and performance of jackknife estimators of the autoregressive coefficient in a model with a near-unit root. The limit distributions of the sub-sample estimators that are used in the construction of the jackknife estimator are derived, and the joint moment generating function (MGF) of two components of these distributions is obtained and its properties explored. The MGF can be used to derive the weights for an optimal jackknife estimator that fully removes the first-order finite sample bias from the estimator. The resulting jackknife estimator is shown to perform well in finite samples and, with a suitable choice of the number of sub-samples, is shown to reduce the overall finite sample root mean squared error as well as the bias. However, the optimal jackknife weights rely on knowledge of the near-unit root parameter and of a quantity related to the long-run variance of the disturbance process, both of which are typically unknown in practice; this dependence is therefore characterised fully, and the issues that arise in practice in the most general settings are discussed.
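To fix ideas, the sketch below applies the non-overlapping sub-sample jackknife to the AR(1) coefficient using the standard weights m/(m−1) and −1/(m(m−1)); the paper derives different, optimal weights for the near-unit-root case, which are not reproduced here, and the simulated process is an assumption.

```python
import numpy as np

def ar1_ols(y):
    return np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

def jackknife_ar1(y, m=2):
    # theta_J = (m/(m-1)) * theta_full - (1/(m(m-1))) * sum of sub-sample estimates.
    full = ar1_ols(y)
    subs = [ar1_ols(block) for block in np.array_split(y, m)]
    return (m / (m - 1)) * full - np.mean(subs) / (m - 1)

rng = np.random.default_rng(8)
n, rho = 200, 0.95                                   # persistent, near-unit-root process
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + rng.normal()

print(f"OLS: {ar1_ols(y):.4f}, jackknife: {jackknife_ar1(y):.4f} (true rho = {rho})")
```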
Open Access Article
Top Incomes, Heavy Tails, and Rank-Size Regressions
Econometrics 2018, 6(1), 10; https://doi.org/10.3390/econometrics6010010
Abstract
In economics, rank-size regressions provide popular estimators of tail exponents of heavy-tailed distributions. We discuss the properties of this approach when the tail of the distribution is regularly varying rather than strictly Pareto. The estimator then over-estimates the true value in the leading parametric income models (so the upper income tail is less heavy than estimated), which leads to test size distortions and undermines inference. For practical work, we propose a sensitivity analysis based on regression diagnostics in order to assess the likely impact of the distortion. The methods are illustrated using data on top incomes in the UK.
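For concreteness, the sketch below runs a rank-size regression of log(rank − 1/2) on log(size) over the k largest observations, the Gabaix and Ibragimov (2011) variant with standard error α̂·√(2/k); the exact-Pareto simulated data and the choice of k are assumptions, and the paper's point is precisely that such estimates can be distorted when the tail is only regularly varying.

```python
import numpy as np

def rank_size_tail(y, k):
    top = np.sort(np.asarray(y, dtype=float))[::-1][:k]       # k largest observations
    ranks = np.arange(1, k + 1)
    slope, _ = np.polyfit(np.log(top), np.log(ranks - 0.5), 1)
    alpha_hat = -slope                                          # tail exponent estimate
    return alpha_hat, alpha_hat * np.sqrt(2.0 / k)              # (estimate, std. error)

rng = np.random.default_rng(9)
incomes = rng.pareto(a=2.0, size=50000) + 1.0                   # exact Pareto, alpha = 2
print(rank_size_tail(incomes, k=500))
```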
Open Access Article
A Spatial-Filtering Zero-Inflated Approach to the Estimation of the Gravity Model of Trade
Econometrics 2018, 6(1), 9; https://doi.org/10.3390/econometrics6010009
Abstract
Nonlinear estimation of the gravity model with Poisson-type regression methods has become popular for modelling international trade flows, because it permits a better accounting for zero flows and extreme values in the distribution tail. Nevertheless, as trade flows are not independent from each other due to spatial and network autocorrelation, these methods may lead to biased parameter estimates. To overcome this problem, eigenvector spatial filtering (ESF) variants of the Poisson/negative binomial specifications have been proposed in the literature on gravity modelling of trade. However, no specific treatment has been developed for cases in which many zero flows are present. This paper contributes to the literature in two ways. First, we employ a stepwise selection criterion for spatial filters that is based on robust (sandwich) p-values and does not require likelihood-based indicators; in this respect, we develop an ad hoc backward stepwise function in R. Second, using this function, we select a reduced set of spatial filters that properly accounts for importer-side and exporter-side specific spatial effects, as well as network effects, both in the count and in the logit processes of zero-inflated methods. Applying this estimation strategy to a cross-section of bilateral trade flows between a set of 64 countries for the year 2000, we find that our specification outperforms the benchmark models in terms of model fit, both according to the AIC and in predicting zero (and small) flows.
Open Access Article
Lasso Maximum Likelihood Estimation of Parametric Models with Singular Information Matrices
Econometrics 2018, 6(1), 8; https://doi.org/10.3390/econometrics6010008
Abstract
A parametric model whose information matrix is singular at a certain true value of the parameter vector is irregular. The maximum likelihood estimator in the irregular case usually has a rate of convergence slower than the √n-rate of the regular case. We propose to estimate such models by the adaptive lasso maximum likelihood and propose an information criterion to select the involved tuning parameter. We show that the penalized maximum likelihood estimator has the oracle properties. The method can implement model selection and estimation simultaneously, and the estimator always has the usual √n-rate of convergence.
Open Access Article
A Multivariate Kernel Approach to Forecasting the Variance Covariance of Stock Market Returns
Econometrics 2018, 6(1), 7; https://doi.org/10.3390/econometrics6010007
Abstract
This paper introduces a multivariate kernel-based forecasting tool for the prediction of variance-covariance matrices of stock returns. The method introduced allows for the incorporation of macroeconomic variables into the forecasting process of the matrix without resorting to a decomposition of the matrix. The model makes use of similarity forecasting techniques, and it is demonstrated that several popular techniques can be thought of as a subset of this approach. A forecasting experiment demonstrates the potential of the technique to improve the statistical accuracy of forecasts of variance-covariance matrices.
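As a minimal sketch of the similarity-forecasting idea described above, the code below forms a covariance forecast as a kernel-weighted average of past outer products of returns, with weights determined by how similar past macro conditions are to today's; because the result is a weighted sum of outer products, it is positive semi-definite without any decomposition. The conditioning variables, kernel, and bandwidth are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def kernel_cov_forecast(returns, conditioners, x_now, bandwidth=1.0):
    # returns: (T, N) past return vectors; conditioners: (T, K) past macro variables.
    dist2 = np.sum((conditioners - x_now) ** 2, axis=1)
    w = np.exp(-0.5 * dist2 / bandwidth ** 2)        # Gaussian kernel similarity weights
    w /= w.sum()
    demeaned = returns - w @ returns                 # weighted-mean adjustment
    return (demeaned * w[:, None]).T @ demeaned      # weighted covariance forecast

rng = np.random.default_rng(10)
T, N, K = 500, 3, 2
macro = rng.normal(size=(T, K))
rets = rng.normal(size=(T, N)) * (1.0 + 0.5 * np.abs(macro[:, [0]]))  # vol tied to macro
print(kernel_cov_forecast(rets, macro, x_now=macro[-1]))
```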