Econometrics doi: 10.3390/econometrics6040048

Authors: Yukai Yang Luc Bauwens

We develop novel multivariate state-space models wherein the latent states evolve on the Stiefel manifold and follow a conditional matrix Langevin distribution. The latent states correspond to time-varying reduced rank parameter matrices, like the loadings in dynamic factor models and the parameters of cointegrating relations in vector error-correction models. The corresponding nonlinear filtering algorithms are developed and evaluated by means of simulation experiments.
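
For reference, the conditional matrix Langevin (von Mises-Fisher) distribution on the Stiefel manifold has the standard density below; the notation (parameter matrix F, hypergeometric normalizing constant) is generic and not necessarily the paper's parameterization.

```latex
% Matrix Langevin density on the Stiefel manifold V_{r,p} = { X in R^{p x r} : X'X = I_r },
% with p x r parameter matrix F; the normalizing constant is a hypergeometric function
% of matrix argument (standard form, generic notation).
p(X \mid F) \;=\; \frac{\operatorname{etr}(F'X)}{{}_{0}F_{1}\!\left(\tfrac{p}{2};\, \tfrac{1}{4}F'F\right)},
\qquad X \in V_{r,p}.
```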

Econometrics doi: 10.3390/econometrics6040047

Authors: Hussein Khraibani Bilal Nehme Olivier Strauss

Value-at-Risk (VaR) has become the most important benchmark for measuring risk in portfolios of different types of financial instruments. However, as reported by many authors, estimating VaR is subject to a high level of uncertainty. One of the sources of uncertainty stems from the dependence of the VaR estimation on the choice of the computation method. As we show in our experiment, the lower the number of samples, the higher this dependence. In this paper, we propose a new nonparametric approach called maxitive kernel estimation of the VaR. This estimation is based on a coherent extension of the kernel-based estimation of the cumulative distribution function to convex sets of kernels. We thus obtain a convex set of VaR estimates gathering all the conventional estimates based on a kernel belonging to the considered convex set. We illustrate this method in an empirical application to daily stock returns and compare our approach to other parametric and nonparametric approaches. Our experiment shows that the interval-valued estimate of the VaR we obtain is likely to lead to more careful decisions, i.e., decisions that cannot be biased by an arbitrary choice of the computation method. In fact, the imprecision of the obtained interval-valued estimate is likely to be representative of the uncertainty in the VaR estimate.
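
As a much simplified illustration of how a set of kernel-based VaR estimates arises, the sketch below inverts a kernel-smoothed CDF for a few bandwidths and reports the spread; it uses a Gaussian kernel and simulated returns, and it does not implement the maxitive (convex-set-of-kernels) construction proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

def kernel_var(returns, alpha=0.05, bandwidth=0.01):
    """Kernel-smoothed CDF of returns inverted at level alpha (a conventional
    kernel VaR estimate; Gaussian kernel, fixed bandwidth)."""
    grid = np.linspace(returns.min() - 5 * bandwidth, returns.max() + 5 * bandwidth, 2000)
    cdf = norm.cdf((grid[:, None] - returns[None, :]) / bandwidth).mean(axis=1)
    return -np.interp(alpha, cdf, grid)              # VaR reported as a positive loss

rng = np.random.default_rng(0)
r = rng.standard_t(df=4, size=250) * 0.01            # small sample of daily returns
estimates = [kernel_var(r, 0.05, h) for h in (0.005, 0.01, 0.02)]
print(min(estimates), max(estimates))                # spread across kernel choices
```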

Econometrics doi: 10.3390/econometrics6040046

Authors: George Judge

In this paper, we borrow some of the key concepts of nonequilibrium statistical systems to develop a framework for analyzing a self-organizing-optimizing system of independent interacting agents, with nonlinear dynamics at the macro level that is based on stochastic individual behavior at the micro level. We demonstrate the use of entropy-divergence methods and micro income data to evaluate and understand the hidden aspects of the stochastic dynamics that drive macroeconomic behavior systems, and we discuss how to empirically represent and evaluate their nonequilibrium nature. Empirical applications of the information-theoretic family of power divergence measures (entropic functions), interpreted in a probability context with Markov dynamics, are presented.
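
For reference, the Cressie-Read family of power divergence measures mentioned above has the standard form (generic notation):

```latex
% Cressie-Read power divergence between probability vectors p and q, indexed by lambda;
% lambda -> 0 recovers the Kullback-Leibler divergence and lambda -> -1 its reverse
% (standard definitions, generic notation).
I_{\lambda}(p \,\|\, q) \;=\; \frac{1}{\lambda(\lambda+1)} \sum_{i} p_i
\left[ \left( \frac{p_i}{q_i} \right)^{\lambda} - 1 \right].
```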

Econometrics doi: 10.3390/econometrics6040045

Authors: Loann David Denis Desboulets

In this paper, we investigate several variable selection procedures to give an overview of the existing literature for practitioners. “Let the data speak for themselves” has become the motto of many applied researchers as the amount of available data has grown significantly. Automatic model selection has long been promoted as a way to search for data-driven theories. However, while great extensions have been made on the theoretical side, basic procedures such as stepwise regression are still used in most empirical work. Here, we provide a review of the main methods and state-of-the-art extensions, as well as a topology of them over a wide range of model structures (linear, grouped, additive, partially linear and non-parametric), and we list available software resources for the implemented methods so that practitioners can easily access them. We explain which methods to use for different modelling purposes and what their key differences are. We also review two methods for improving variable selection in the general sense.
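
As one concrete example of the automatic, data-driven procedures surveyed, the lasso with a cross-validated penalty selects variables in a few lines; the simulated data and scikit-learn's LassoCV are illustrative choices, not material from the paper.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                       # only three relevant predictors
y = X @ beta + rng.standard_normal(n)

fit = LassoCV(cv=5).fit(X, y)                     # penalty chosen by cross-validation
selected = np.flatnonzero(fit.coef_)              # indices of selected variables
print(selected)
```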

Econometrics doi: 10.3390/econometrics6040044

Authors: Christopher L. Skeels Frank Windmeijer

A standard test for weak instruments compares the first-stage F-statistic to a table of critical values obtained by Stock and Yogo (2005) using simulations. We derive a closed-form solution for the expectation from which these critical values are derived, as well as present some second-order asymptotic approximations that may be of value in the presence of multiple endogenous regressors. Inspection of this new result provides insights not available from simulation, and will allow software implementations to be generalised and improved. Finally, we explore the calculation of p-values for the first-stage F-statistic weak instruments test.
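
For a single endogenous regressor, the statistic in question is simply the F-test on the excluded instruments in the first-stage regression; a minimal sketch on simulated data (the closed-form expectation and second-order approximations derived in the paper are not reproduced):

```python
import numpy as np

def first_stage_F(x, Z, W):
    """F-statistic for the excluded instruments Z in the first-stage regression
    of the endogenous regressor x on [W, Z], where W holds the included
    exogenous variables (here just a constant)."""
    n = len(x)
    full = np.column_stack([W, Z])
    resid = lambda M: x - M @ np.linalg.lstsq(M, x, rcond=None)[0]
    ssr_u, ssr_r = resid(full) @ resid(full), resid(W) @ resid(W)
    q, k = Z.shape[1], full.shape[1]
    return ((ssr_r - ssr_u) / q) / (ssr_u / (n - k))

rng = np.random.default_rng(2)
n = 500
Z = rng.standard_normal((n, 3))                              # three instruments
x = Z @ np.array([0.2, 0.1, 0.0]) + rng.standard_normal(n)   # weak-ish first stage
W = np.ones((n, 1))                                          # constant only
print(first_stage_F(x, Z, W))                                # compare with Stock-Yogo values
```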

Econometrics doi: 10.3390/econometrics6040043

Authors: Jianning Kong Donggyu Sul

This paper provides a new statistical model for repeated voluntary contribution mechanism games. In a repeated public goods experiment, contributions in the first round are cross-sectionally independent simply because subjects are randomly selected. Meanwhile, contributions to a public account over rounds are serially and cross-sectionally correlated. Furthermore, the cross-sectional average of the contributions across subjects usually decreases over rounds. By considering this non-stationary initial condition (the initial contribution has a different distribution from the rest of the contributions), we model statistically the time-varying patterns of the average contribution in repeated public goods experiments and then propose a simple but efficient method to test for treatment effects. The suggested method has good finite sample performance and works well in practice.

Econometrics doi: 10.3390/econometrics6040042

Authors: Martin Biewen Emmanuel Flachaire

It is well known that, after decades of limited interest in the topic, economics has experienced a genuine surge in inequality research in recent years. [...]

Econometrics doi: 10.3390/econometrics6030041

Authors: Chung Choe Philippe Van Kerm

This paper draws upon influence function regression methods to determine where foreign workers stand in the distribution of private sector wages in Luxembourg, and to assess whether and how much their wages contribute to wage inequality. This is quantified by measuring the effect that a marginal increase in the proportion of foreign workers (foreign residents or cross-border workers) would have on selected quantiles and measures of inequality. Analysis of the 2006 Structure of Earnings Survey reveals that foreign workers generally have lower wages than natives and therefore tend to pull the overall wage distribution downwards. Yet, their influence on wage inequality turns out to be small and negative. All impacts are further muted when accounting for human capital and, especially, job characteristics. Not observing any large positive contribution to inequality on the Luxembourg labour market is a striking result given the sheer size of the foreign workforce and its polarization at both ends of the skill distribution.

Econometrics doi: 10.3390/econometrics6030040

Authors: Eric Hillebrand Huiyu Huang Tae-Hwy Lee Canlin Li

In forecasting a variable (the forecast target) using many predictors, a factor model with principal components (PC) is often used. When the predictors are the yield curve (a set of many yields), the Nelson-Siegel (NS) factor model is used in place of the PC factors. These PC or NS factors combine information (CI) in the predictors (yields). However, these CI factors are not “supervised” for a specific forecast target, in that they are constructed using only the predictors and not the forecast target. In order to “supervise” factors for a forecast target, we follow Chan et al. (1999) and Stock and Watson (2004) and compute PC or NS factors of many forecasts (not of the predictors), with each of the many forecasts being computed using one predictor at a time. These PC or NS factors of forecasts are combining forecasts (CF). The CF factors are supervised for a specific forecast target. We demonstrate the advantage of the supervised CF factor models over the unsupervised CI factor models via simple numerical examples and Monte Carlo simulation. In out-of-sample forecasting of monthly US output growth and inflation, the CF factor models are found to outperform the CI factor models, especially at longer forecast horizons.
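
A minimal sketch of the combining-forecasts (CF) idea: fit one univariate predictive regression per predictor, then extract principal components from the panel of fitted forecasts. The lag structure, standardization and Nelson-Siegel variants used in the paper are simplified away here.

```python
import numpy as np

def cf_factors(X, y, h=1, n_factors=2):
    """Illustrative CF factors: one-predictor-at-a-time forecasts of y (h steps
    ahead), followed by principal components of the resulting forecast panel."""
    T, N = X.shape
    fits = np.empty((T - h, N))
    for j in range(N):
        Zj = np.column_stack([np.ones(T - h), X[:-h, j]])
        beta = np.linalg.lstsq(Zj, y[h:], rcond=None)[0]
        fits[:, j] = Zj @ beta                     # in-sample fitted forecasts
    F = fits - fits.mean(axis=0)
    U, s, _ = np.linalg.svd(F, full_matrices=False)
    return U[:, :n_factors] * s[:n_factors]        # PC factors of the forecasts

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 10))
y = X[:, 0] * 0.5 + rng.standard_normal(200)
print(cf_factors(X, y).shape)                      # (199, 2)
```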

Econometrics doi: 10.3390/econometrics6030039

Authors: Andreas Hetland

We propose and study the stochastic stationary root (SSR) model. The model resembles the cointegrated VAR model but is novel in that: (i) the stationary relations follow a random coefficient autoregressive process, i.e., exhibit heavy-tailed dynamics, and (ii) the system is observed with measurement error. Unlike the cointegrated VAR model, estimation and inference for the SSR model are complicated by a lack of closed-form expressions for the likelihood function and its derivatives. To overcome this, we introduce particle filter-based approximations of the log-likelihood function, sample score, and observed information matrix. These enable us to approximate the ML estimator via stochastic approximation and to conduct inference via the approximated observed information matrix. We conjecture the asymptotic properties of the ML estimator and conduct a simulation study to investigate the validity of the conjecture. Model diagnostics to assess model fit are considered. Finally, we present an empirical application to the 10-year government bond rates in Germany and Greece during the period from January 1999 to February 2018.
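
The particle-filter approximation of the log-likelihood can be illustrated with a generic bootstrap filter; the SSR model's actual transition and measurement densities are more involved, so the samplers below are user-supplied placeholders and the toy example is a simple local-level model.

```python
import numpy as np

def pf_loglik(y, init, transition, obs_logpdf, n_particles=1000, seed=0):
    """Generic bootstrap particle filter estimate of the log-likelihood.
    init(n, rng) draws initial particles, transition(x, rng) propagates them,
    obs_logpdf(y_t, x) returns log p(y_t | x) for each particle."""
    rng = np.random.default_rng(seed)
    x = init(n_particles, rng)
    loglik = 0.0
    for y_t in y:
        x = transition(x, rng)
        logw = obs_logpdf(y_t, x)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())             # incremental likelihood term
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                                  # multinomial resampling
    return loglik

# toy local-level example: x_t = x_{t-1} + e_t, y_t = x_t + u_t
obs = np.cumsum(np.random.default_rng(1).standard_normal(100))
ll = pf_loglik(
    obs,
    init=lambda n, r: r.standard_normal(n),
    transition=lambda x, r: x + r.standard_normal(x.shape),
    obs_logpdf=lambda y_t, x: -0.5 * (y_t - x) ** 2 - 0.5 * np.log(2 * np.pi),
)
print(ll)
```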

Econometrics doi: 10.3390/econometrics6030038

Authors: In Choi Steve Cook Marc S. Paolella Jeffrey S. Racine

n/a

Econometrics doi: 10.3390/econometrics6030037

Authors: Rachidi Kotchoni

This paper proposes an approach to measure the extent of nonlinearity of the exposure of a financial asset to a given risk factor. The proposed measure exploits the decomposition of a conditional expectation into its linear and nonlinear components. We illustrate the method by measuring the degree of nonlinearity of a European-style option with respect to the underlying asset. Next, we use the method to identify the empirical patterns of the return-risk trade-off on the S&P 500. The results are strongly supportive of a nonlinear relationship between expected return and expected volatility. The data seem to be driven by two regimes: one regime with a positive return-risk trade-off and one with a negative trade-off.

Econometrics doi: 10.3390/econometrics6030036

Authors: Helmut Lütkepohl Aleksei Netšunajev

We use a cointegrated structural vector autoregressive model to investigate the relation between monetary policy in the euro area and the stock market. Since there may be an instantaneous causal relation, we consider long-run identifying restrictions for the structural shocks and also use (conditional) heteroscedasticity in the residuals for identification purposes. Heteroscedasticity is modelled by a Markov-switching mechanism. We find a plausible identification scheme for stock market and monetary policy shocks which is consistent with the second-order moment structure of the variables. The model indicates that contractionary monetary policy shocks lead to a long-lasting downturn of real stock prices.

Econometrics doi: 10.3390/econometrics6030035

Authors: D. Stephen G. Pollock

Econometric analysis requires filtering techniques that are adapted to cater to data sequences that are short and that have strong trends. Whereas economists have tended to conduct their analyses in the time domain, engineers have emphasised the frequency domain. This paper places its emphasis in the frequency domain, and it shows how frequency-domain methods can be adapted to cater to short trended sequences. Working in the frequency domain allows an unrestricted choice to be made of the frequency response of a filter. It also requires that the data should be free of trends. Methods for extracting the trends prior to filtering and for restoring them thereafter are described.
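
A minimal sketch of the detrend-filter-restore cycle described above: fit and remove a linear trend, apply an ideal lowpass filter through the FFT, and add the trend back. The cutoff frequency and the linear-trend assumption are illustrative; the paper's treatment of short trended sequences is more refined.

```python
import numpy as np

def lowpass_detrended(y, cutoff):
    """Remove a least-squares linear trend, zero out FFT coefficients above
    `cutoff` (in cycles per sample), then restore the trend."""
    n = len(y)
    t = np.arange(n)
    trend = np.polyval(np.polyfit(t, y, 1), t)      # fitted linear trend
    resid = y - trend
    f = np.fft.rfftfreq(n)                          # frequencies in cycles/sample
    coeffs = np.fft.rfft(resid)
    coeffs[f > cutoff] = 0.0                        # ideal lowpass in the frequency domain
    smooth = np.fft.irfft(coeffs, n)
    return smooth + trend                           # restore the extracted trend

rng = np.random.default_rng(4)
t = np.arange(120)
y = 0.05 * t + np.sin(2 * np.pi * t / 40) + 0.3 * rng.standard_normal(120)
print(lowpass_detrended(y, cutoff=1 / 20)[:5])
```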

Econometrics doi: 10.3390/econometrics6030034

Authors: Dorota Toczydlowska Gareth W. Peters

A novel class of dimension reduction methods is combined with a stochastic multi-factor panel regression-based state-space model in order to model the dynamics of yield curves whilst incorporating regression factors. This is achieved via Probabilistic Principal Component Analysis (PPCA), for which new statistically robust variants that also handle missing data are derived. We embed the rank-reduced feature extractions into a stochastic state-space representation of yield curve dynamics and compare the results to classical multi-factor dynamic Nelson-Siegel state-space models. This leads to important new representations of yield curve models that can be of practical importance for addressing questions of financial stress testing and monetary policy interventions, and that can efficiently incorporate financial big data. We illustrate our results on various financial and macroeconomic datasets from the Euro Zone and international markets.

Econometrics doi: 10.3390/econometrics6030033

Authors: Hiroshi Yamada Ruixue Du

ℓ1 polynomial trend filtering, a filtering method formulated as an ℓ1-norm penalized least-squares problem, is promising because it enables the estimation of a piecewise polynomial trend in a univariate economic time series without prespecifying the number and location of knots. This paper presents some theoretical results on the filter, one of which is that a small modification of the filter provides not only the same trend estimates as the original filter but also extrapolations of the trend beyond both sample limits.
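
The ℓ1 trend filtering problem is a convex program and can be sketched in a few lines; the second-order difference below yields a piecewise-linear trend, whereas the paper treats general polynomial orders. The use of cvxpy and the simulated series are illustrative choices.

```python
import numpy as np
import cvxpy as cp

def l1_trend_filter(y, lam, order=2):
    """Solve min_x 0.5*||y - x||^2 + lam*||D x||_1, where D is the `order`-th
    difference matrix; order=2 yields a piecewise-linear trend estimate."""
    n = len(y)
    D = np.diff(np.eye(n), n=order, axis=0)        # (n-order) x n difference operator
    x = cp.Variable(n)
    objective = 0.5 * cp.sum_squares(y - x) + lam * cp.norm1(D @ x)
    cp.Problem(cp.Minimize(objective)).solve()
    return x.value

rng = np.random.default_rng(5)
t = np.arange(100.0)
y = np.where(t < 50, 0.1 * t, 5 - 0.05 * (t - 50))  # kinked (piecewise-linear) trend
trend = l1_trend_filter(y + 0.2 * rng.standard_normal(100), lam=10.0)
print(trend[:5])
```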

Econometrics doi: 10.3390/econometrics6030032

Authors: John W. Galbraith Douglas J. Hodgson

Statistical methods are widely used for valuation (prediction of the value at sale or auction) of a unique object such as a work of art. The usual approach is estimation of a hedonic model for objects of a given class, such as paintings from a particular school or period, or in the context of real estate, houses in a neighborhood. Where the object itself has previously sold, an alternative is to base an estimate on the previous sale price. The combination of these approaches has been employed in real estate price index construction (e.g., Jiang et al. 2015); in the present context, we treat the use of these different sources of information as a forecast combination problem. We first optimize the hedonic model, considering the level of aggregation that is appropriate for pooling observations into a sample, and applying model-averaging methods to estimate predictive models at the individual-artist level. Next, we consider an additional stage in which we incorporate repeat-sale information, in a subset of cases for which this information is available. The methods are applied to a data set of auction prices for Canadian paintings. We compare the out-of-sample predictive accuracy of different methods and find that those that allow us to use single-artist samples produce superior results, that data-driven averaging across predictive models tends to produce clear gains, and that, where available, repeat-sale information appears to yield further improvements in predictive accuracy.

Econometrics doi: 10.3390/econometrics6020031

Authors: Gulasekaran Rajaguru Michael O’Neill Tilak Abeysinghe

In the applied econometrics literature, causal inferences are often made based on temporally aggregated or systematically sampled data. A number of studies document that temporal aggregation has distorting effects on causal inference and that systematic sampling of stationary variables preserves the direction of causality. Contrary to the stationary case, this paper shows for the bivariate VAR(1) system that systematic sampling induces spurious bi-directional Granger causality among the variables if the uni-directional causality runs from a non-stationary series to either a stationary or a non-stationary series. An empirical exercise further illustrates the usefulness of the results.

Econometrics doi: 10.3390/econometrics6020030

Authors: Vladimir Hlasny Paolo Verme

It is sometimes observed and frequently assumed that top incomes in household surveys worldwide are poorly measured and that this problem biases the measurement of income inequality. This paper tests this assumption and compares the performance of reweighting and replacing methods designed to correct inequality measures for top-income biases generated by data issues such as unit or item non-response. Results for the European Union’s Statistics on Income and Living Conditions survey indicate that survey response probabilities are negatively associated with income and bias the measurement of inequality downward. Correcting for this bias by reweighting, the Gini coefficient for Europe is revised upwards by 3.7 percentage points. Similar results are reached with replacement of top incomes using values from the Pareto distribution when the cut point for the analysis is below the 95th percentile. For higher cut points, results with replacement are inconsistent, suggesting that popular parametric distributions do not mimic real data well at the very top of the income distribution.

Econometrics doi: 10.3390/econometrics6020029

Authors: Tareq Sadeq Michel Lubrano

In 2002, the Israeli government decided to build a wall inside the occupied West Bank. The wall had a marked effect on the access to land and water resources as well as to the Israeli labour market. It is difficult to include the effect of the wall in an econometric model explaining poverty dynamics as the wall was built in the richer region of the West Bank. So a diff-in-diff strategy is needed. Using a Bayesian approach, we treat our two-period repeated cross-section data set as an incomplete data problem, explaining the income-to-needs ratio as a function of time invariant exogenous variables. This allows us to provide inference results on poverty dynamics. We then build a conditional regression model including a wall variable and state dependence to see how the wall modified the initial results on poverty dynamics. We find that the wall has increased the probability of poverty persistence by 58 percentage points and the probability of poverty entry by 18 percentage points.

Econometrics doi: 10.3390/econometrics6020028

Authors: Sergio P. Firpo Nicole M. Fortin Thomas Lemieux

This paper provides a detailed exposition of an extension of the Oaxaca-Blinder decomposition method that can be applied to various distributional measures. The two-stage procedure first divides distributional changes into a wage structure effect and a composition effect using a reweighting method. Second, the two components are further divided into the contribution of each explanatory variable using recentered influence function (RIF) regressions. We illustrate the practical aspects of the procedure by analyzing how the polarization of U.S. male wages between the late 1980s and the mid 2010s was affected by factors such as de-unionization, education, occupations, and industry changes.
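
For completeness, the recentered influence function of a quantile, which underlies the second-stage regressions, takes the standard form:

```latex
% RIF of the tau-th quantile q_tau of the distribution F_Y, with density f_Y
% (standard expression; notation is generic).
\mathrm{RIF}(y;\, q_{\tau}, F_Y) \;=\; q_{\tau} \;+\; \frac{\tau - \mathbf{1}\{y \le q_{\tau}\}}{f_Y(q_{\tau})}.
```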

Econometrics doi: 10.3390/econometrics6020027

Authors: Alaa Abi Morshed Elena Andreou Otilia Boldea

Structural break tests for regression models are sensitive to model misspecification. We show, analytically and through simulations, that the sup Wald test for breaks in the conditional mean and variance of a time series process exhibits severe size distortions when the conditional mean dynamics are misspecified. We also show that the sup Wald test for breaks in the unconditional mean and variance does not suffer the same size distortions, yet has power similar to its conditional counterpart in correctly specified models. Hence, we propose using it as an alternative and complementary test for breaks. We apply the unconditional and conditional mean and variance tests to three US series: unemployment, industrial production growth and interest rates. Both the unconditional and the conditional mean tests detect a break in the mean of interest rates. However, for the other two series, the unconditional mean test does not detect a break, while the conditional mean tests based on dynamic regression models occasionally detect a break, with the implied break-point estimator varying across different dynamic specifications. For all series, the unconditional variance test does not detect a break, while most tests for the conditional variance do detect a break, which also varies across specifications.
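
A minimal sketch of the unconditional-mean version of the test: compute a Wald statistic for a one-time mean shift at every admissible break date and take the supremum. The trimming fraction and the pooled variance estimate are simplifying assumptions; the paper's implementation and the relevant critical values differ.

```python
import numpy as np

def sup_wald_mean_break(y, trim=0.15):
    """Sup Wald statistic for a single break in the unconditional mean,
    searching over break dates in the trimmed interval."""
    n = len(y)
    stats = []
    for k in range(int(trim * n), int((1 - trim) * n)):
        m1, m2 = y[:k].mean(), y[k:].mean()
        s2 = (np.sum((y[:k] - m1) ** 2) + np.sum((y[k:] - m2) ** 2)) / (n - 2)
        stats.append((m1 - m2) ** 2 / (s2 * (1.0 / k + 1.0 / (n - k))))
    return max(stats)

rng = np.random.default_rng(6)
y = np.concatenate([rng.standard_normal(150), 0.8 + rng.standard_normal(150)])
print(sup_wald_mean_break(y))                      # compare with sup Wald critical values
```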

Econometrics doi: 10.3390/econometrics6020026

Authors: Bruce E. Hansen

The generalized method of moments (GMM) estimator of the reduced-rank regression model is derived under the assumption of conditional homoscedasticity. It is shown that this GMM estimator is algebraically identical to the maximum likelihood estimator under normality developed by Johansen (1988). This includes the vector error correction model (VECM) of Engle and Granger. It is also shown that GMM tests for reduced rank (cointegration) are algebraically similar to the Gaussian likelihood ratio tests. This shows that normality is not necessary to motivate these estimators and tests.

Econometrics doi: 10.3390/econometrics6020025

Authors: Gilles Dufrénot Fredj Jawadi Alexander Mihailov

Developments in macro-econometrics have been evolving since the aftermath of the Second World War.[...]

Econometrics doi: 10.3390/econometrics6020024

Authors: El Moctar Laghlal Abdoul Aziz Junior Ndoye

In this study, we provide a Bayesian estimation method for the unconditional quantile regression model based on the Re-centered Influence Function (RIF). The method makes use of the dichotomous structure of the RIF and estimates a non-linear probability model by a logistic regression using a Gibbs-within-Metropolis-Hastings sampler. This approach performs better in the presence of heavy-tailed distributions. Applied to a nationally representative household survey, the Senegal Poverty Monitoring Report (2005), the results show that the change in the rates of return to education across quantiles is substantially lower at the primary level.

Econometrics doi: 10.3390/econometrics6020023

Authors: Mawuli Segnon Stelios Bekiros Bernd Wilfling

There is substantial evidence that inflation rates are characterized by long memory and nonlinearities. In this paper, we introduce a long-memory Smooth Transition AutoRegressive Fractionally Integrated Moving Average-Markov Switching Multifractal specification [STARFIMA(p,d,q)-MSM(k)] for modeling and forecasting inflation uncertainty. We first provide the statistical properties of the process and investigate the finite sample properties of the maximum likelihood estimators through simulation. Second, we evaluate the out-of-sample forecast performance of the model in forecasting inflation uncertainty in the G7 countries. Our empirical analysis demonstrates the superiority of the new model over the alternative STARFIMA(p,d,q)-GARCH-type models in forecasting inflation uncertainty.

Econometrics doi: 10.3390/econometrics6020022

Authors: Stéphane Guerrier Samuel Orso Maria-Pia Victoria-Feser

In this paper, we study the finite sample accuracy of confidence intervals for an index functional built via the parametric bootstrap, in the case of inequality indices. To estimate the parameters of the assumed parametric data generating distribution, we propose a Generalized Method of Moments estimator that targets the quantity of interest, namely the considered inequality index. Its primary advantage is that the scale parameter does not need to be estimated to perform the parametric bootstrap, since inequality measures are scale invariant. The very good finite sample coverage found in a simulation study suggests that this feature provides an advantage over the parametric bootstrap using the maximum likelihood estimator. We also find that, overall, the parametric bootstrap provides more accurate inference than its non- or semi-parametric counterparts, especially for heavy-tailed income distributions.

Econometrics doi: 10.3390/econometrics6020021

Authors: Duangkamon Chotikapanich William E. Griffiths Gholamreza Hajargasht Wasana Karunarathne D. S. Prasada Rao

To use the generalized beta distribution of the second kind (GB2) for the analysis of income and other positively skewed distributions, knowledge of estimation methods and the ability to compute quantities of interest from the estimated parameters are required. We review estimation methodology that has appeared in the literature, and summarize expressions for inequality, poverty, and pro-poor growth that can be used to compute these measures from GB2 parameter estimates. An application to data from China and Indonesia is provided.
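
For reference, the GB2 density in the usual (a, b, p, q) parameterization is:

```latex
% Generalized beta distribution of the second kind (GB2), y > 0, with scale b
% and shape parameters a, p, q; B(p,q) is the beta function (standard form).
f(y;\, a, b, p, q) \;=\; \frac{a\, y^{ap-1}}{b^{ap}\, B(p,q)\, \left[ 1 + (y/b)^{a} \right]^{p+q}}.
```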

Econometrics doi: 10.3390/econometrics6020020

Authors: Dirk Antonczyk Thomas DeLeire Bernd Fitzenberger

Since the late 1970s, wage inequality has increased strongly both in the U.S. and Germany but the trends have been different. Wage inequality increased along the entire wage distribution during the 1980s in the U.S. and since the mid 1990s in Germany. There is evidence for wage polarization in the U.S. in the 1990s, and the increase in wage inequality in Germany was restricted to the top of the distribution before the 1990s. Using an approach developed by MaCurdy and Mroz (1995) to separate age, time, and cohort effects, we find a large role played by cohort effects in Germany, while we find only small cohort effects in the U.S. Employment trends in both countries are consistent with polarization since the 1990s. The evidence is consistent with a technology-driven polarization of the labor market, but this cannot explain the country specific differences.

Econometrics doi: 10.3390/econometrics6020019

Authors: Giovanni Forchini Bin Jiang Bin Peng

The properties of the two stage least squares (TSLS) and limited information maximum likelihood (LIML) estimators in panel data models where the observables are affected by common shocks, modelled through unobservable factors, are studied for the case where the time series dimension is fixed. We show that the key assumption in determining the consistency of the panel TSLS and LIML estimators, as the cross section dimension tends to infinity, is the lack of correlation between the factor loadings in the errors and in the exogenous variables (including the instruments), conditional on the common shocks. If this condition fails, both estimators have degenerate distributions. When the panel TSLS and LIML estimators are consistent, they have covariance-matrix mixed-normal distributions asymptotically. Tests on the coefficients can be constructed in the usual way and have standard distributions under the null hypothesis.

Econometrics doi: 10.3390/econometrics6020018

Authors: Giovanni M. Giorgi Alessio Guandalini

Additive decomposability is an interesting feature of inequality indices which, however, is not always fulfilled; solutions to overcome such an issue have been given by Deutsch and Silber (2007) and by Di Maio and Landoni (2017). In this paper, we apply these methods, based on the “Shapley value” and the “balance of inequality” respectively, to the Bonferroni inequality index. We also discuss a comparison with the Gini concentration index and highlight interesting properties of the Bonferroni index.

Econometrics doi: 10.3390/econometrics6020017

Authors: Elena Bárcena-Martín Jacques Silber

This paper proposes a simple algorithm based on a matrix formulation to compute the Esteban and Ray (ER) polarization index. It then shows how the algorithm introduced leads to quite a simple decomposition of polarization by income sources. Such a breakdown was not available hitherto. The decomposition we propose will thus allow one to determine the sign, as well as the magnitude, of the impact of the various income sources on the ER polarization index. A simple empirical illustration based on EU data is provided.
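
A minimal sketch of the Esteban-Ray index for grouped data, with population shares, group mean incomes and sensitivity parameter alpha; the matrix formulation and the decomposition by income sources developed in the paper are not reproduced, and the numbers are illustrative.

```python
import numpy as np

def esteban_ray(shares, means, alpha=1.0, K=1.0):
    """Esteban-Ray polarization index:
    P = K * sum_i sum_j pi_i^(1+alpha) * pi_j * |mu_i - mu_j|."""
    shares, means = np.asarray(shares, float), np.asarray(means, float)
    weights = shares[:, None] ** (1 + alpha) * shares[None, :]
    return K * np.sum(weights * np.abs(means[:, None] - means[None, :]))

# three income groups: population shares and mean incomes (illustrative numbers)
print(esteban_ray([0.3, 0.5, 0.2], [10_000, 25_000, 60_000], alpha=1.3))
```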

Econometrics doi: 10.3390/econometrics6020016

Authors: Robert Davies George Tauchen

This paper develops a method to select the threshold in threshold-based jump detection methods. The method is motivated by an analysis of threshold-based jump detection methods in the context of jump-diffusion models. We show that, over the range of sampling frequencies a researcher is most likely to encounter, the usual in-fill asymptotics provide a poor guide for selecting the jump threshold. Because of this, we develop a sample-based method. Our method estimates the number of jumps over a grid of thresholds and selects the optimal threshold at what we term the ‘take-off’ point in the estimated number of jumps. We show that this method consistently estimates the jumps and their indices as the sampling interval goes to zero. In several Monte Carlo studies we evaluate the performance of our method based on its ability to accurately locate jumps and its ability to distinguish between true jumps and large diffusive moves. In one of these Monte Carlo studies we evaluate the performance of our method in a jump regression context. Finally, we apply our method in two empirical studies. In one, we estimate the number of jumps and report the jump threshold our method selects for three commonly used market indices. In the other, we perform a series of jump regressions using our method to select the jump threshold.
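
The first step of the procedure, counting candidate jumps over a grid of thresholds, is easy to sketch; locating the ‘take-off’ point is the substantive contribution of the paper and is only caricatured here by inspecting the counts on simulated returns.

```python
import numpy as np

def jump_counts(returns, thresholds):
    """Number of returns exceeding each threshold in absolute value (candidate
    jumps); the paper selects the threshold at the 'take-off' point of this
    curve, a step not reproduced here."""
    r = np.abs(np.asarray(returns))
    return np.array([(r > u).sum() for u in thresholds])

rng = np.random.default_rng(7)
diffusive = 0.001 * rng.standard_normal(78 * 22)             # 22 days of 5-minute returns
jumps = np.zeros_like(diffusive)
jumps[rng.choice(diffusive.size, 5, replace=False)] = 0.01   # five genuine jumps
grid = np.linspace(0.002, 0.02, 19)
print(dict(zip(np.round(grid, 3), jump_counts(diffusive + jumps, grid))))
```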

Econometrics doi: 10.3390/econometrics6020015

Authors: Gordon Anderson Maria Pittau Roberto Zelli Jasmin Thomas

The cohesiveness of constituent nations in a confederation such as the Eurozone depends on their equally shared experiences. In terms of household incomes, commonality of distribution across those constituent nations with that of the Eurozone as an entity in itself is of the essence. Generally, income classification has proceeded by employing “hard”, somewhat arbitrary and contentious boundaries. Here, in an analysis of Eurozone household income distributions over the period 2006–2015, mixture distribution techniques are used to determine the number and size of groups or classes endogenously without resort to such hard boundaries. In so doing, some new indices of polarization, segmentation and commonality of distribution are developed in the context of a decomposition of the Gini coefficient and the roles of, and relationships between, these groups in societal income inequality, poverty, polarization and societal segmentation are examined. What emerges for the Eurozone as an entity is a four-class, increasingly unequal polarizing structure with income growth in all four classes. With regard to individual constituent nation class membership, some advanced, some fell back, with most exhibiting significant polarizing behaviour. However, in the face of increasing overall Eurozone inequality, constituent nations were becoming increasingly similar in distribution, which can be construed as characteristic of a more cohesive society.
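
A minimal sketch of the endogenous-class idea: fit Gaussian mixtures with different numbers of components to log incomes and let an information criterion choose the number of classes. The simulated incomes, scikit-learn and the BIC are illustrative assumptions; the paper's polarization and segmentation indices are not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
log_income = np.concatenate([
    rng.normal(9.5, 0.30, 4000),                   # illustrative lower class
    rng.normal(10.4, 0.35, 5000),                  # middle class
    rng.normal(11.5, 0.40, 1000),                  # upper class
]).reshape(-1, 1)

fits = {k: GaussianMixture(n_components=k, random_state=0).fit(log_income)
        for k in range(1, 7)}
best_k = min(fits, key=lambda k: fits[k].bic(log_income))    # classes chosen by BIC
print(best_k, fits[best_k].weights_.round(3))                # class count and sizes
```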

Econometrics doi: 10.3390/econometrics6010014

Authors: Russell Davidson

Conventional wisdom says that the middle classes in many developed countries have recently suffered losses, in terms of both the share of the total population belonging to the middle class, and also their share in total income. Here, distribution-free methods are developed for inference on these shares, by deriving expressions for the asymptotic variances of the sample estimates of the shares and for the covariance of the estimates. Asymptotic inference can be undertaken based on asymptotic normality. Bootstrap inference can be expected to be more reliable, and appropriate bootstrap procedures are proposed. As an illustration, samples of individual earnings drawn from Canadian census data are used to test various hypotheses about the middle-class shares, and confidence intervals for them are computed. It is found that, for the earlier censuses, sample sizes are large enough for asymptotic and bootstrap inference to be almost identical, but that, in the twenty-first century, the bootstrap fails on account of a strange phenomenon whereby many presumably different incomes in the data are rounded to one and the same value. Another difference between the centuries is the appearance of heavy right-hand tails in the income distributions of both men and women.

Econometrics doi: 10.3390/econometrics6010013

Authors: Marie Busch Philipp Sibbertsen

Several modified estimation methods for the memory parameter have been introduced in the past years. They aim to decrease the upward bias of the memory parameter estimate in cases of low frequency contaminations or an additive noise component, especially when a short-memory process is contaminated. In this paper, we provide an overview and compare the performance of nine semiparametric estimation methods. Among them are two standard methods, four modified approaches that account for low frequency contaminations and three procedures developed for perturbed fractional processes. We conduct an extensive Monte Carlo study for a variety of parameter constellations and several DGPs. Furthermore, an empirical application to the log-absolute return series of the S&P 500 shows that the estimation results, combined with a long-memory test, indicate a spurious long-memory process.
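
One standard semiparametric estimator of the memory parameter, the GPH log-periodogram regression, fits in a few lines and gives a sense of what the modified procedures start from; the bandwidth choice m = sqrt(n) and the simulated white-noise series are illustrative assumptions, and the robust variants studied in the paper are not implemented here.

```python
import numpy as np

def gph_estimate(x, bandwidth=None):
    """Geweke-Porter-Hudak estimator of the memory parameter d: regress the
    log periodogram on log(4 sin^2(lambda_j / 2)) over the first m Fourier
    frequencies; d_hat is minus the slope."""
    x = np.asarray(x, float)
    n = len(x)
    m = bandwidth or int(np.sqrt(n))
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1 : m + 1]
    periodogram = np.abs(dft) ** 2 / (2 * np.pi * n)
    regressor = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope

rng = np.random.default_rng(9)
print(gph_estimate(rng.standard_normal(2048)))     # should be near 0 for white noise
```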

Econometrics doi: 10.3390/econometrics6010012

Authors: Maria Felice Arezzo Giuseppina Guagnano

Sample selection models attempt to correct for non-randomly selected data in a two-model hierarchy where, on the first level, a binary selection equation determines whether a particular observation will be available for the second level (outcome equation). If the non-random selection mechanism induced by the selection equation is ignored, the coefficient estimates in the outcome equation may be severely biased. When the selection mechanism leads to many censored observations, few data are available for the estimation of the outcome equation parameters, giving rise to computational difficulties. In this context, the main reference is Greene (2008) who extends the results obtained by Manski and Lerman (1977), and develops an estimator which requires the knowledge of the true proportion of occurrences in the outcome equation. We develop a method that exploits the advantages of response-based sampling schemes in the context of binary response models with a sample selection, relaxing this assumption. Estimation is based on a weighted version of Heckman’s likelihood, where the weights take into account the sampling design. In a simulation study, we found that, for the outcome equation, the results obtained with our estimator are comparable to Greene’s in terms of mean square error. Moreover, in a real data application, it is preferable in terms of the percentage of correct predictions.

Econometrics doi: 10.3390/econometrics6010011

Authors: Marcus J. Chambers Maria Kyriacou

This paper considers the specification and performance of jackknife estimators of the autoregressive coefficient in a model with a near-unit root. The limit distributions of sub-sample estimators that are used in the construction of the jackknife estimator are derived, and the joint moment generating function (MGF) of two components of these distributions is obtained and its properties explored. The MGF can be used to derive the weights for an optimal jackknife estimator that removes fully the first-order finite sample bias from the estimator. The resulting jackknife estimator is shown to perform well in finite samples and, with a suitable choice of the number of sub-samples, is shown to reduce the overall finite sample root mean squared error, as well as bias. However, the optimal jackknife weights rely on knowledge of the near-unit root parameter and a quantity that is related to the long-run variance of the disturbance process, which are typically unknown in practice, and so, this dependence is characterised fully and a discussion provided of the issues that arise in practice in the most general settings.

Econometrics doi: 10.3390/econometrics6010010

Authors: Christian Schluter

In economics, rank-size regressions provide popular estimators of tail exponents of heavy-tailed distributions. We discuss the properties of this approach when the tail of the distribution is regularly varying rather than strictly Pareto. The estimator then over-estimates the true value in the leading parametric income models (so the upper income tail is less heavy than estimated), which leads to test size distortions and undermines inference. For practical work, we propose a sensitivity analysis based on regression diagnostics in order to assess the likely impact of the distortion. The methods are illustrated using data on top incomes in the UK.
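
The rank-size regression discussed above amounts to regressing log rank on log size over the upper tail; the sketch below includes the Gabaix-Ibragimov shift of 1/2 and uses simulated strictly-Pareto data (so, unlike the regularly-varying case analyzed in the paper, the estimate is close to the true exponent).

```python
import numpy as np

def rank_size_tail_exponent(x, k):
    """Estimate the tail exponent from the k largest observations by regressing
    log(rank - 1/2) on log(size); the exponent is minus the slope."""
    top = np.sort(np.asarray(x, float))[::-1][:k]
    ranks = np.arange(1, k + 1)
    slope = np.polyfit(np.log(top), np.log(ranks - 0.5), 1)[0]
    return -slope

rng = np.random.default_rng(10)
pareto_sample = (1 - rng.random(100_000)) ** (-1 / 2.0)    # exact Pareto, exponent 2
print(rank_size_tail_exponent(pareto_sample, k=1000))       # close to 2 here
```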

Econometrics doi: 10.3390/econometrics6010009

Authors: Rodolfo Metulini Roberto Patuelli Daniel Griffith

Nonlinear estimation of the gravity model with Poisson-type regression methods has become popular for modelling international trade flows, because it permits better accounting for zero flows and extreme values in the distribution tail. Nevertheless, as trade flows are not independent of each other due to spatial and network autocorrelation, these methods may lead to biased parameter estimates. To overcome this problem, eigenvector spatial filtering (ESF) variants of the Poisson/negative binomial specifications have been proposed in the literature on gravity modelling of trade. However, no specific treatment has been developed for cases in which many zero flows are present. This paper contributes to the literature in two ways. First, it employs a stepwise selection criterion for spatial filters that is based on robust (sandwich) p-values and does not require likelihood-based indicators; in this respect, we develop an ad hoc backward stepwise function in R. Second, using this function, we select a reduced set of spatial filters that properly accounts for importer-side and exporter-side specific spatial effects, as well as network effects, in both the count and the logit components of zero-inflated methods. Applying this estimation strategy to a cross-section of bilateral trade flows between 64 countries for the year 2000, we find that our specification outperforms the benchmark models in terms of model fit, both in terms of the AIC and in predicting zero (and small) flows.

Econometrics doi: 10.3390/econometrics6010008

Authors: Fei Jin Lung-fei Lee

A parametric model whose information matrix is singular at a certain true value of the parameter vector is irregular. The maximum likelihood estimator in the irregular case usually has a rate of convergence slower than the root-n rate of the regular case. We propose to estimate such models by the adaptive lasso maximum likelihood and propose an information criterion to select the involved tuning parameter. We show that the penalized maximum likelihood estimator has the oracle properties. The method can implement model selection and estimation simultaneously, and the estimator always has the usual root-n rate of convergence.

Econometrics doi: 10.3390/econometrics6010007

Authors: Ralf Becker Adam Clements Robert O'Neill

This paper introduces a multivariate kernel-based forecasting tool for the prediction of variance-covariance matrices of stock returns. The method introduced allows for the incorporation of macroeconomic variables into the forecasting process of the matrix without resorting to a decomposition of the matrix. The model makes use of similarity forecasting techniques, and it is demonstrated that several popular techniques can be thought of as special cases of this approach. A forecasting experiment demonstrates the potential for the technique to improve the statistical accuracy of forecasts of variance-covariance matrices.

Econometrics doi: 10.3390/econometrics6010006

Authors: Francesca Rondina

This paper uses an econometric model and Bayesian estimation to reverse engineer the path of inflation expectations implied by the New Keynesian Phillips Curve and the data. The estimated expectations roughly track the patterns of a number of common measures of expected inflation available from surveys or computed from financial data. In particular, they exhibit the strongest correlation with the inflation forecasts of the respondents in the University of Michigan Survey of Consumers. The estimated model also shows evidence of the anchoring of long run inflation expectations to a value that is in the range of the target inflation rate.

Econometrics doi: 10.3390/econometrics6010005

Authors: Paola Cerchiello Giancarlo Nicola

The analysis of news in the financial context has attracted prominent interest in recent years. This is because of the possible predictive power of such content, especially in terms of the associated sentiment/mood. In this paper, we focus on a specific aspect of financial news analysis: how the covered topics change along the space and time dimensions. To this purpose, we employ a modified version of the LDA topic model, the so-called Structural Topic Model (STM), that takes covariates into account as well. Our aim is to study the possible evolution of topics extracted from two well-known news archives, Reuters and Bloomberg, and to investigate a causal effect in the diffusion of the news by means of a Granger causality test. Our results show that both the temporal dynamics and the spatial differentiation matter in news contagion.

Econometrics doi: 10.3390/econometrics6010004

Authors: Francesca Greselin Ričardas Zitikis

The underlying idea behind the construction of indices of economic inequality is based on measuring deviations of various portions of low incomes from certain references or benchmarks, which could be point measures like the population mean or median, or curves like the hypotenuse of the right triangle into which every Lorenz curve falls. In this paper, we argue that, by appropriately choosing population-based references (called societal references) and distributions of personal positions (called gambles, which are random), we can meaningfully unify classical and contemporary indices of economic inequality, and various measures of risk. To illustrate the herein proposed approach, we put forward and explore a risk measure that takes into account the relativity of large risks with respect to small ones.

Econometrics doi: 10.3390/econometrics6010003

Authors: Aurelio Bariviera Angelo Plastino George Judge

This paper offers a general and comprehensive definition of the day-of-the-week effect. Using symbolic dynamics, we develop a unique test based on ordinal patterns in order to detect it. This test uncovers the fact that the so-called “day-of-the-week” effect is partly an artifact of the hidden correlation structure of the data. We also present simulations based on artificial time series. While time series generated with long memory are prone to exhibit daily seasonality, pure white noise signals exhibit no pattern preference. Since ours is a non-parametric test, it requires no assumptions about the distribution of returns, so it could be a practical alternative to conventional econometric tests. We also apply the proposed technique exhaustively to 83 stock indices around the world. Finally, the paper highlights the relevance of symbolic analysis in economic time series studies.

Econometrics doi: 10.3390/econometrics6010002

Authors: Econometrics Editorial Office

Peer review is an essential part in the publication process, ensuring that Econometrics maintains high quality standards for its published papers. In 2017, a total of 47 papers were published in the journal.[...]

Econometrics doi: 10.3390/econometrics6010001

Authors: Katarina Juselius

n/a

Econometrics doi: 10.3390/econometrics5040054

Authors: Yoontae Jeon Thomas McCurdy

Forecasting correlations between stocks and commodities is important for diversification across asset classes and other risk management decisions. Correlation forecasts are affected by model uncertainty, the sources of which can include uncertainty about changing fundamentals and associated parameters (model instability), structural breaks and nonlinearities due, for example, to regime switching. We use approaches that weight historical data according to their predictive content. Specifically, we estimate two alternative models, ‘time-varying weights’ and ‘time-varying window’, in order to maximize the value of past data for forecasting. Our empirical analyses reveal that these approaches provide superior forecasts to several benchmark models for forecasting correlations.

Econometrics doi: 10.3390/econometrics5040053

Authors: Tristan Skolrud

The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms.

Econometrics doi: 10.3390/econometrics5040052

Authors: Jinyong Hahn Ruoyao Shi

We examine properties of permutation tests in the context of synthetic control. Permutation tests are frequently used methods of inference for synthetic control when the number of potential control units is small. We analyze the permutation tests from a repeated sampling perspective and show that the size of permutation tests may be distorted. Several alternative methods are discussed.
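
The permutation (placebo) test analyzed in the paper re-estimates the effect with each control unit treated as a placebo and ranks the actual estimate among the placebo estimates; a minimal sketch with a generic effect statistic (the repeated-sampling size distortions discussed in the paper are not visible in a single draw):

```python
import numpy as np

def permutation_pvalue(effects, treated_index):
    """One-sided placebo p-value: the rank of the treated unit's estimated effect
    among the effects obtained by treating each unit, in turn, as the treated one."""
    effects = np.asarray(effects, float)
    return np.mean(effects >= effects[treated_index])

# illustrative post-treatment gap statistics for 1 treated unit and 19 controls
rng = np.random.default_rng(11)
placebo_effects = rng.normal(0.0, 1.0, 20)
placebo_effects[0] += 2.5                          # unit 0 is the treated unit
print(permutation_pvalue(placebo_effects, treated_index=0))
```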

Econometrics doi: 10.3390/econometrics5040049

Authors: Jurgen Doornik Rocco Mosconi Paolo Paruolo

This paper provides some test cases, called circuits, for the evaluation of Gaussian likelihood maximization algorithms of the cointegrated vector autoregressive model. Both I(1) and I(2) models are considered. The performance of algorithms is compared first in terms of effectiveness, defined as the ability to find the overall maximum. The next step is to compare their efficiency and reliability across experiments. The aim of the paper is to commence a collective learning project by the profession on the actual properties of algorithms for cointegrated vector autoregressive model estimation, in order to improve their quality and, as a consequence, also the reliability of empirical research.

Econometrics doi: 10.3390/econometrics5040051

Authors: Yingjie Dong Yiu-Kuen Tse

We propose a new method to implement the Business Time Sampling (BTS) scheme for high-frequency financial data. We compute a time-transformation (TT) function using the intraday integrated volatility estimated by a jump-robust method. The BTS transactions are obtained using the inverse of the TT function. Using our sampled BTS transactions, we test the semi-martingale hypothesis of the stock log-price process and estimate the daily realized volatility. Our method improves the normality approximation of the standardized business-time return distribution. Our Monte Carlo results show that the integrated volatility estimates using our proposed sampling strategy provide smaller root mean-squared error.

Econometrics doi: 10.3390/econometrics5040050

Authors: Martin Ravallion

On the presumption that poorer people tend to work less, it is often claimed that standard measures of inequality and poverty are overestimates. The paper points to a number of reasons to question this claim. It is shown that, while the labor supplies of American adults have a positive income gradient, the heterogeneity in labor supplies generates considerable horizontal inequality. Using equivalent incomes to adjust for effort can reveal either higher or lower inequality depending on the measurement assumptions. With only a modest allowance for leisure as a basic need, the effort-adjusted poverty rate in terms of equivalent incomes rises.

Econometrics doi: 10.3390/econometrics5040048

Authors: Alain Hecq Sean Telg Lenard Lieb

This paper investigates the effect of seasonal adjustment filters on the identification of mixed causal-noncausal autoregressive models. By means of Monte Carlo simulations, we find that standard seasonal filters induce spurious autoregressive dynamics on white noise series, a phenomenon already documented in the literature. Using a symmetric argument, we show that those filters also generate a spurious noncausal component in the seasonally adjusted series, but preserve (although amplify) the existence of causal and noncausal relationships. This result has important implications for modelling economic time series driven by expectation relationships. We consider inflation data on the G7 countries to illustrate these results.

Econometrics doi: 10.3390/econometrics5040047

Authors: Andras Fulop Jun Yu

We develop a new model where the dynamic structure of the asset price, after the fundamental value is removed, is subject to two different regimes. One regime reflects the normal period, where the asset price divided by the dividend is assumed to follow a mean-reverting process around a stochastic long-run mean. The second regime reflects the bubble period, with explosive behavior. Stochastic switches between the two regimes and non-constant probabilities of exit from the bubble regime are both allowed. A Bayesian learning approach is employed to jointly estimate the latent states and the model parameters in real time. An important feature of our Bayesian method is that we are able to deal with parameter uncertainty and, at the same time, to learn about the states and the parameters sequentially, allowing for real-time model analysis. This feature is particularly useful for market surveillance. Analysis using simulated data reveals that our method has good power properties for detecting bubbles. Empirical analysis using price-dividend ratios of the S&P 500 highlights the advantages of our method.

Econometrics doi: 10.3390/econometrics5040045

Authors: Apostolos Serletis

William (Bill) Barnett is an eminent econometrician and macroeconomist. [...]

Econometrics doi: 10.3390/econometrics5040046

Authors: Umberto Triacca

The contribution of this paper is to investigate a particular form of lack of invariance of causality statements to changes in the conditioning information sets. Consider a discrete-time three-dimensional stochastic process z = (x, y1, y2)′. We want to study causality relationships between the variables in y = (y1, y2)′ and x. Suppose that, in a bivariate framework, we find that y1 Granger causes x and y2 Granger causes x, but these relationships vanish when the analysis is conducted in a trivariate framework. Thus, the causal links established in a bivariate setting seem to be spurious. Is this conclusion always correct? In this note, we show that the causal links in the bivariate framework might well not be ‘genuinely’ spurious: they could be reflecting causality from the vector y to x. Paradoxically, in this case, it is the non-causality in the trivariate system that is misleading.

Econometrics doi: 10.3390/econometrics5040044

Authors: Antoni Espasa Eva Senra

The Bulletin of EU & US Inflation and Macroeconomic Analysis (BIAM) is a monthly publication that has been reporting real-time analysis and forecasts for inflation and other macroeconomic aggregates for the Euro Area, the US and Spain since 1994. The BIAM inflation forecasting methodology rests on working with useful disaggregation schemes, using leading indicators when possible and applying outlier correction. The paper relates this methodology to corresponding topics in the literature and discusses the design of disaggregation schemes. It concludes that such schemes are useful if they are formulated according to economic, institutional and statistical criteria, aiming to end up with a set of components with very different statistical properties for which valid single-equation models can be built. The BIAM assessment, which derives from a new observation, is based on (a) an evaluation of the forecasting errors (innovations) at the components’ level, which provides information on which sectors they come from and allows, when required, for the appropriate correction in the specific models, and (b) an update of the path forecast with its corresponding fan chart. Finally, we show that BIAM real-time Euro Area inflation forecasts compare successfully with the consensus from the ECB Survey of Professional Forecasters, one and two years ahead.

Econometrics doi: 10.3390/econometrics5030043

Authors: Ronald Butler Marc Paolella

A new method for determining the lag order of the autoregressive polynomial in regression models with autocorrelated normal disturbances is proposed. It is based on a sequential testing procedure using conditional saddlepoint approximations and permits the desire for parsimony to be explicitly incorporated, unlike penalty-based model selection methods. Extensive simulation results indicate that the new method is usually competitive with, and often better than, common model selection methods.

Econometrics doi: 10.3390/econometrics5030042

Authors: Econometrics Editorial Office

With the goal of encouraging and motivating young researchers in the field of econometrics, last year the journal Econometrics accepted applications and nominations for the 2017 Young Researcher Award.[...]

Econometrics doi: 10.3390/econometrics5030041

Authors: Jae Kim In Choi

This paper re-evaluates key past results of unit root tests, emphasizing that the use of a conventional level of significance is not in general optimal due to the test having low power. The decision-based significance levels for popular unit root tests, chosen using the line of enlightened judgement under a symmetric loss function, are found to be much higher than conventional ones. We also propose simple calibration rules for the decision-based significance levels for a range of unit root tests. At the decision-based significance levels, many time series in Nelson and Plosser’s (1982) (extended) data set are judged to be trend-stationary, including real income variables, employment variables and money stock. We also find that nearly all real exchange rates covered in Elliott and Pesavento’s (2006) study are stationary; and that most of the real interest rates covered in Rapach and Weber’s (2004) study are stationary. In addition, using a specific loss function, the U.S. nominal interest rate is found to be stationary under economically sensible values of relative loss and prior belief for the null hypothesis.

Econometrics doi: 10.3390/econometrics5030038

Authors: P. Owen

Empirical studies of the determinants of cross-country differences in long-run development are characterized by the ingenious nature of the instruments used. However, scepticism remains about their ability to provide a valid basis for causal inference. This paper examines whether explicit consideration of the statistical adequacy of the underlying reduced form, which provides an embedding framework for the structural equations, can usefully complement economic theory as a basis for assessing instrument choice in the fundamental determinants literature. Diagnostic testing of the reduced forms in influential studies reveals evidence of model misspecification, with parameter non-constancy and spatial dependence of the residuals being almost ubiquitous. This feature, surprisingly not previously identified, potentially undermines the inferences drawn about the structural parameters, such as the quantitative and statistical significance of different fundamental determinants.

Econometrics doi: 10.3390/econometrics5030040

Authors: Andreas Hetland Simon Hetland

The primary contribution of this paper is to establish that the long-swings behavior observed in the market price of Danish housing since the 1970s can be understood by studying the interplay between short-term expectation formation and long-run equilibrium conditions. We introduce an asset market model for housing based on uncertainty rather than risk, which under mild assumptions allows for other forms of forecasting behavior than rational expectations. We test the theory via an I(2) cointegrated VAR model and find that the long-run equilibrium for the housing price corresponds closely to the predictions from the theoretical framework. Additionally, we corroborate previous findings that housing markets are well characterized by short-term momentum forecasting behavior. Our conclusions have wider relevance, since housing prices play a role in the wider Danish economy, and other developed economies, through wealth effects.

]]>Econometrics doi: 10.3390/econometrics5030039

Authors: Jennifer Castle David Hendry Andrew Martinez

Economic policy agencies produce forecasts with accompanying narratives, and base policy changes on the resulting anticipated developments in the target variables. Systematic forecast failure, defined as large, persistent deviations of the outturns from the numerical forecasts, can make the associated narrative false, which would in turn question the validity of the entailed policy implementation. We establish when systematic forecast failure entails failure of the accompanying narrative, which we call forediction failure, and when that in turn implies policy invalidity. Most policy regime changes involve location shifts, which can induce forediction failure unless the policy variable is super exogenous in the policy model. We propose a step-indicator saturation test to check in advance for invariance to policy changes. Systematic forecast failure, or a lack of invariance, previously justified by narratives reveals such stories to be economic fiction.

]]>Econometrics doi: 10.3390/econometrics5030037

Authors: Haci Karatas

We revisit the wage curve for Turkey, taking into account spatial spillovers of regional unemployment rates, using individual-level data for the period 2004–2013 at the 26-region NUTS-2 level and employing FE-2SLS models. The unemployment elasticity of real wages is −0.07 without excluding any group of workers, unlike previous studies. There is strong evidence of spatial effects of the unemployment rate of contiguous regions on the wage level, and this effect is larger in absolute value than the effect of the own-region unemployment rate (−0.087 and −0.056, respectively). Male workers are slightly more responsive to the own-region unemployment rate than female workers, whereas female workers are more responsive to the neighboring regions’ unemployment rate. Furthermore, when group-specific unemployment rates are used in the estimation of the wage curve for various groups, the unemployment elasticity of pay for female workers becomes smaller and loses its significance, whereas the elasticity for male workers changes only slightly. The findings in this paper suggest that individual wages are more responsive to the unemployment rates of proximate regions than to that of an individual’s own region, and that the wage curve estimates are sensitive to the use of group-specific unemployment rates.

]]>Econometrics doi: 10.3390/econometrics5030036

Authors: Søren Johansen Morten Tabor

A state space model with an unobserved multivariate random walk and a linear observation equation is studied. The purpose is to find out when the extracted trend cointegrates with its estimator, in the sense that a linear combination is asymptotically stationary. It is found that this result holds for the linear combination of the trend that appears in the observation equation. If identifying restrictions are imposed on either the trend or its coefficients in the linear observation equation, it is shown that there is cointegration between the identified trend and its estimator, if and only if the estimators of the coefficients in the observation equations are consistent at a faster rate than the square root of sample size. The same results are found if the observations from the state space model are analysed using a cointegrated vector autoregressive model. The findings are illustrated by a small simulation study.

]]>Econometrics doi: 10.3390/econometrics5030035

Authors: Massimiliano Caporin Francesco Poli

We retrieve news stories and earnings announcements of the S&P 100 constituents from two professional news providers, along with ten macroeconomic indicators. We also gather data from Google Trends about these firms’ assets as an index of retail investors’ attention. Thus, we create an extensive and innovative database that contains precise information with which to analyze the link between news and asset price dynamics. We detect the sentiment of news stories using a dictionary of sentiment-related words and negations and propose a set of more than five thousand information-based variables that provide natural proxies for the information used by heterogeneous market players. We first shed light on the impact of information measures on daily realized volatility and select them by penalized regression. Then, we perform a forecasting exercise and show that the model augmented with news-related variables provides superior forecasts.
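As a rough illustration of the selection step mentioned above, the Python sketch below regresses a simulated realized-volatility series on a large block of simulated information-based variables and lets an L1-penalised regression pick out the relevant ones. The use of scikit-learn's LassoCV is an assumption about tooling, not the authors' implementation, and all variables and data are placeholders.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(8)
T, p = 500, 200
news_vars = rng.normal(size=(T, p))            # stand-ins for information-based variables
true_coef = np.zeros(p)
true_coef[:5] = 0.3                            # only a handful of variables matter
realized_vol = news_vars @ true_coef + 0.5 * rng.normal(size=T)

# Cross-validated lasso keeps only the variables with non-zero coefficients.
model = LassoCV(cv=5).fit(news_vars, realized_vol)
selected = np.flatnonzero(model.coef_ != 0)
print("number of selected information variables:", selected.size)
```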

]]>Econometrics doi: 10.3390/econometrics5030033

Authors: Junrong Liu Robin Sickles E. Tsionas

This paper considers a linear panel data model with time varying heterogeneity. Bayesian inference techniques organized around Markov chain Monte Carlo (MCMC) are applied to implement new estimators that combine smoothness priors on unobserved heterogeneity and priors on the factor structure of unobserved effects. The latter have been addressed in a non-Bayesian framework by Bai (2009) and Kneip et al. (2012), among others. Monte Carlo experiments are used to examine the finite-sample performance of our estimators. An empirical study of efficiency trends in the largest banks operating in the U.S. from 1990 to 2009 illustrates our new estimators. The study concludes that scale economies in intermediation services have been largely exploited by these large U.S. banks.

]]>Econometrics doi: 10.3390/econometrics5030034

Authors: Jean-David Fermanian

Copula models have become very popular and well studied among the scientific community.[...]

]]>Econometrics doi: 10.3390/econometrics5030032

Authors: P.A.V.B. Swamy Stephen Hall George Tavlas Peter von zur Muehlen

We appreciate the effort and thoughtfulness of Raunig’s (2017) attempted critique of Swamy et al. (2015).[...]

]]>Econometrics doi: 10.3390/econometrics5030031

Authors: Burkhard Raunig

Swamy et al. (2015) argue that valid instruments cannot exist when a structural model is misspecified. This note shows that this is not true in general. In simple examples valid instruments can exist and can help to estimate parameters of interest.

]]>Econometrics doi: 10.3390/econometrics5030030

Authors: Katarina Juselius

A theory-consistent CVAR scenario describes a set of testable regularities one should expect to see in the data if the basic assumptions of the theoretical model are empirically valid. Using this method, the paper demonstrates that all basic assumptions about the shock structure and steady-state behavior of an imperfect-knowledge-based model for exchange rate determination can be formulated as testable hypotheses on common stochastic trends and cointegration. This model obtains remarkable support for almost every testable hypothesis and is able to adequately account for the long persistent swings in the real exchange rate.

]]>Econometrics doi: 10.3390/econometrics5030029

Authors: Leonardo Salazar

The long and persistent swings in the real exchange rate have for a long time puzzled economists. Recent models built on imperfect knowledge economics seem to provide a theoretical explanation for this persistence. Empirical results, based on a cointegrated vector autoregressive (CVAR) model, provide evidence of error-increasing behavior in prices and interest rates, which is consistent with the persistence observed in the data. The movements in the real exchange rate are compensated by movements in the interest rate spread, which restores the equilibrium in the product market when the real exchange rate moves away from its long-run benchmark value. Fluctuations in the copper price also explain the deviations of the real exchange rate from its long-run equilibrium value.

]]>Econometrics doi: 10.3390/econometrics5030028

Authors: H. Boswijk Paolo Paruolo

Likelihood ratio tests of over-identifying restrictions on the common trends loading matrices in I(2) VAR systems are discussed. It is shown how hypotheses on the common trends loading matrices can be translated into hypotheses on the cointegration parameters. Algorithms for (constrained) maximum likelihood estimation are presented, and asymptotic properties sketched. The techniques are illustrated using the analysis of the PPP and UIP between Switzerland and the US.

]]>Econometrics doi: 10.3390/econometrics5020027

Authors: Mikael Juselius Moshe Kim

The ability to distinguish between sustainable and excessive debt developments is crucial for securing economic stability. By studying US private sector credit loss dynamics, we show that this distinction can be made based on a measure of the incipient aggregate liquidity constraint, the financial obligations ratio. Specifically, as this variable rises, the interaction between credit losses and the business cycle increases, albeit with different intensity depending on whether the problems originate in the household or the business sector. This occurs 1–2 years before each recession in the sample. Our results have implications for macroprudential policy and countercyclical capital-buffers.

]]>Econometrics doi: 10.3390/econometrics5020024

Authors: Paula Simões M. Carvalho Sandra Aleixo Sérgio Gomes Isabel Natário

The Portuguese National Health Line, LS24, is an initiative of the Portuguese Health Ministry which seeks to improve accessibility to health care and to rationalize the use of existing resources by directing users to the most appropriate institutions of the national public health services. This study aims to describe and evaluate the use of LS24. Since, for the LS24 data, the location attribute is an important source of information to describe its use, this study analyses the number of calls received, at a municipal level, under two different spatial econometric approaches. This analysis is important for future development of decision support indicators in a hospital context, based on the economic impact of the use of this health line. Considering the discrete nature of the data, the number of calls to LS24 in each municipality is better modelled by a Poisson model, with some possible covariates: demographic and socio-economic information, characteristics of the Portuguese health system and development indicators. To capture spatial variability, the autocorrelation in the data is modelled in a Bayesian setting through different hierarchical log-Poisson regression models. A different approach uses an autoregressive methodology, also for count data: a log-Poisson model with a spatial lag autocorrelation component is further considered, better framed under a Bayesian paradigm. With this empirical study we find strong evidence for a spatial structure in the data and obtain similar conclusions with both perspectives of the analysis. This supports the view that the addition of a spatial structure to the model improves estimation, even in the case where some relevant covariates have been included.
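As a highly simplified stand-in for the count-data modelling described above (not the paper's Bayesian hierarchical or spatial-lag specifications), the Python sketch below fits a log-link Poisson GLM in which one regressor is the row-standardised spatial lag of a covariate. The municipality counts, the covariate and the contiguity matrix W are all simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50                                              # number of municipalities (hypothetical)
x = rng.normal(size=n)                              # e.g. a socio-economic indicator
W = (rng.uniform(size=(n, n)) < 0.1).astype(float)  # random stand-in for a contiguity matrix
np.fill_diagonal(W, 0.0)
W = W / np.clip(W.sum(axis=1, keepdims=True), 1.0, None)   # row-standardise

eta = 2.0 + 0.5 * x + 0.3 * (W @ x)                 # linear predictor on the log scale
calls = rng.poisson(np.exp(eta))                    # simulated call counts

X = sm.add_constant(np.column_stack([x, W @ x]))
fit = sm.GLM(calls, X, family=sm.families.Poisson()).fit()
print(fit.params)                                   # intercept, covariate, spatial-lag term
```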

]]>Econometrics doi: 10.3390/econometrics5020026

Authors: Ostap Okhrin Anastasija Tetereva

This paper introduces the concept of the realized hierarchical Archimedean copula (rHAC). The proposed approach inherits the ability of the copula to capture the dependencies among financial time series, and combines it with additional information contained in high-frequency data. The considered model does not suffer from the curse of dimensionality, and is able to accurately predict high-dimensional distributions. This flexibility is obtained by using a hierarchical structure in the copula. The time variability of the model is provided by daily forecasts of the realized correlation matrix, which is used to estimate the structure and the parameters of the rHAC. Extensive simulation studies show the validity of the estimator based on this realized correlation matrix, and its performance, in comparison to the benchmark models. The application of the estimator to one-day-ahead Value at Risk (VaR) prediction using high-frequency data exhibits good forecasting properties for a multivariate portfolio.

]]>Econometrics doi: 10.3390/econometrics5020025

Authors: Massimo Franchi Søren Johansen

It is well known that inference on the cointegrating relations in a vector autoregression (CVAR) is difficult in the presence of a near unit root. The test for a given cointegration vector can have rejection probabilities under the null, which vary from the nominal size to more than 90%. This paper formulates a CVAR model allowing for multiple near unit roots and analyses the asymptotic properties of the Gaussian maximum likelihood estimator. Then two critical value adjustments suggested by McCloskey (2017) for the test on the cointegrating relations are implemented for the model with a single near unit root, and it is found by simulation that they eliminate the serious size distortions, with a reasonable power for moderate values of the near unit root parameter. The findings are illustrated with an analysis of a number of different bivariate DGPs.

]]>Econometrics doi: 10.3390/econometrics5020023

Authors: Fabrizio Durante Enrico Foscolo Alex Weissensteiner

We analyze the interdependence between the government yield spread and stock returns of the banking sector in Italy during the years 2003–2015. In a first step, we find that the Spearman’s rank correlation between the yield spread and the Italian banking system changed significantly after September 2008. According to this finding, we split the time window into two sub-periods. While we show that the dependence between the banking industry and changes in the yield spread increased significantly in the second time interval, we find no contagion effects from changes in the yield spread to returns of the banking system.
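A minimal sketch of the split-sample dependence check described above: compute Spearman's rank correlation between yield-spread changes and banking-sector returns before and after a candidate break date. The break date (September 2008) follows the abstract, but the two series below are simulated placeholders, not the Italian data.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
dates = pd.bdate_range("2003-01-01", "2015-12-31")
spread_changes = pd.Series(rng.normal(size=len(dates)), index=dates)
bank_returns = pd.Series(0.2 * spread_changes.to_numpy() + rng.normal(size=len(dates)),
                         index=dates)

break_date = pd.Timestamp("2008-09-15")
for label, mask in [("pre-break", dates < break_date), ("post-break", dates >= break_date)]:
    rho, pval = spearmanr(spread_changes[mask], bank_returns[mask])
    print(f"{label}: Spearman rho = {rho:.3f}, p-value = {pval:.3g}")
```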

]]>Econometrics doi: 10.3390/econometrics5020022

Authors: Pierre Perron

This special issue deals with problems related to unit roots and structural change, and the interplay between the two.[...]

]]>Econometrics doi: 10.3390/econometrics5020021

Authors: Benedikt Schamberger Lutz Gruber Claudia Czado

Factor modeling is a popular strategy to induce sparsity in multivariate models as they scale to higher dimensions. We develop Bayesian inference for a recently proposed latent factor copula model, which utilizes a pair copula construction to couple the variables with the latent factor. We use adaptive rejection Metropolis sampling (ARMS) within Gibbs sampling for posterior simulation: Gibbs sampling enables application to Bayesian problems, while ARMS is an adaptive strategy that replaces traditional Metropolis-Hastings updates, which typically require careful tuning. Our simulation study shows favorable performance of our proposed approach both in terms of sampling efficiency and accuracy. We provide an extensive application example using historical data on European financial stocks that forecasts portfolio Value at Risk (VaR) and Expected Shortfall (ES).

]]>Econometrics doi: 10.3390/econometrics5020020

Authors: Eugen Ivanov Aleksey Min Franz Ramsauer

Recently, several copula-based approaches have been proposed for modeling stationary multivariate time series. All of them are based on vine copulas, and they differ in the choice of the regular vine structure. In this article, we consider a copula autoregressive (COPAR) approach to model the dependence of unobserved multivariate factors resulting from two dynamic factor models. However, the proposed methodology is general and applicable to several factor models as well as to other copula models for stationary multivariate time series. An empirical study illustrates the forecasting superiority of our approach for constructing an optimal portfolio of U.S. industrial stocks in the mean-variance framework.

]]>Econometrics doi: 10.3390/econometrics5020019

Authors: Jurgen Doornik

Estimation of the I(2) cointegrated vector autoregressive (CVAR) model is considered. Without further restrictions, estimation of the I(1) model is by reduced-rank regression (Anderson (1951)). Maximum likelihood estimation of I(2) models, on the other hand, always requires iteration. This paper presents a new triangular representation of the I(2) model. This is the basis for a new estimation procedure of the unrestricted I(2) model, as well as the I(2) model with linear restrictions imposed.

]]>Econometrics doi: 10.3390/econometrics5020018

Authors: Marc Paolella

The univariate collapsing method (UCM) for portfolio optimization works as follows: for a given set of portfolio weights applied to the multivariate set of assets of interest, one obtains the predictive mean and a risk measure, such as the variance or expected shortfall, of the resulting univariate pseudo-return series, and, via simulation or optimization, repeats this process until the desired portfolio weight vector is obtained. The UCM is well-known conceptually, straightforward to implement, and possesses several advantages over the use of multivariate models but, among other things, has been criticized for being too slow. As such, it does not feature prominently in asset allocation and receives little attention in the academic literature. This paper proposes the use of fast model estimation methods, combined with new heuristics for sampling based on easily determined characteristics of the data, to accelerate and optimize the simulation search. An extensive empirical analysis confirms the viability of the method.
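The simulation-search idea can be sketched as follows, as a toy version under stated assumptions rather than the fast estimation methods or sampling heuristics proposed in the paper: draw candidate weight vectors, collapse the multivariate returns into a univariate pseudo-return series, score each candidate by its mean and expected shortfall, and keep the best candidate. The simulated returns and the scoring rule below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
T, k = 1000, 5
returns = rng.multivariate_normal(np.full(k, 5e-4), 1e-4 * np.eye(k), size=T)

def expected_shortfall(x, alpha=0.05):
    """Average loss beyond the alpha-quantile, reported as a positive number."""
    q = np.quantile(x, alpha)
    return -x[x <= q].mean()

best_w, best_score = None, -np.inf
for _ in range(2000):
    w = rng.dirichlet(np.ones(k))                 # long-only candidate weights
    pseudo = returns @ w                          # univariate pseudo-return series
    score = pseudo.mean() - 0.1 * expected_shortfall(pseudo)   # illustrative trade-off
    if score > best_score:
        best_w, best_score = w, score

print("selected weights:", np.round(best_w, 3))
```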

]]>Econometrics doi: 10.3390/econometrics5020017

Authors: Ricardo Quineche Gabriel Rodríguez

This is a simulation-based warning note for practitioners who use the MGLS unit root tests in the context of structural change using different lag length selection criteria. With T = 100, we find severe oversize problems when using some criteria, while other criteria produce undersizing behavior. In view of this dilemma, we do not recommend using these tests. While such behavior tends to disappear when T = 250, it is important to note that most empirical applications use smaller sample sizes such as T = 100 or T = 150. The ADF-GLS test does not present an oversizing or undersizing problem. The only disadvantage of the ADF-GLS test arises in the presence of MA(1) negative correlation, in which case the MGLS tests are preferable, but in all other cases they are very undersized. When there is a break in the series, selecting the breakpoint using the Supremum method greatly improves the results relative to the Infimum method.
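To illustrate the kind of sensitivity at issue, the Python sketch below applies the ordinary ADF test from statsmodels, as a readily available stand-in rather than the GLS-detrended MGLS or ADF-GLS tests studied in the note, to a simulated random walk of length T = 100 under different lag-selection criteria. It simply shows how the chosen lag length and the test outcome can move with the criterion; it is not the authors' Monte Carlo design.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
T = 100
y = np.cumsum(rng.normal(size=T))     # pure random walk: the unit-root null is true

for crit in ("AIC", "BIC", "t-stat"):
    stat, pvalue, usedlag, *_ = adfuller(y, regression="c", autolag=crit)
    print(f"{crit:>6}: stat = {stat:.3f}, p-value = {pvalue:.3f}, lags = {usedlag}")
```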

]]>Econometrics doi: 10.3390/econometrics5020016

Authors: Fabrizio Cipollini Robert Engle Giampiero Gallo

We discuss several multivariate extensions of the Multiplicative Error Model to take into account dynamic interdependence and contemporaneously correlated innovations (vector MEM or vMEM). We suggest copula functions to link Gamma marginals of the innovations, in a specification where past values and conditional expectations of the variables can be simultaneously estimated. Results with realized volatility, volumes and number of trades of the JNJ stock show that significantly superior realized volatility forecasts are delivered with a fully interdependent vMEM relative to a single equation. Alternatives involving log–Normal or semiparametric formulations produce substantially equivalent results.

]]>Econometrics doi: 10.3390/econometrics5010015

Authors: Chia-Lin Chang Michael McAleer

An early development in testing for causality (technically, Granger non-causality) in the conditional variance (or volatility) associated with financial returns was the portmanteau statistic for non-causality in the variance of Cheng and Ng (1996). A subsequent development was the Lagrange Multiplier (LM) test of non-causality in the conditional variance by Hafner and Herwartz (2006), who provided simulation results to show that their LM test was more powerful than the portmanteau statistic for sample sizes of 1000 and 4000 observations. While the LM test for causality proposed by Hafner and Herwartz (2006) is an interesting and useful development, it is nonetheless arbitrary. In particular, the specification on which the LM test is based does not rely on an underlying stochastic process, so the alternative hypothesis is also arbitrary, which can affect the power of the test. The purpose of the paper is to derive a simple test for causality in volatility that provides regularity conditions arising from the underlying stochastic process, namely a random coefficient autoregressive process, and a test for which the (quasi-) maximum likelihood estimates have valid asymptotic properties under the null hypothesis of non-causality. The simple test is intuitively appealing as it is based on an underlying stochastic process, is sympathetic to Granger’s (1969, 1988) notion of time series predictability, is easy to implement, and has a regularity condition that is not available in the LM test.

]]>Econometrics doi: 10.3390/econometrics5010014

Authors: Jan Kiviet Milan Pleus Rutger Poldermans

Studies employing Arellano-Bond and Blundell-Bond generalized method of moments (GMM) estimation for linear dynamic panel data models are growing exponentially in number. However, for researchers it is hard to make a reasoned choice between the many different possible implementations of these estimators and associated tests. By simulation, the effects of many options are examined, regarding: (i) reducing, extending or modifying the set of instruments; (ii) specifying the weighting matrix in relation to the type of heteroskedasticity; (iii) using (robustified) 1-step or (corrected) 2-step variance estimators; (iv) employing 1-step or 2-step residuals in Sargan-Hansen overall or incremental overidentification restrictions tests. This is all done for models in which some regressors may be either strictly exogenous, predetermined or endogenous. Surprisingly, particular asymptotically optimal and relatively robust weighting matrices are found to be superior in finite samples to ostensibly more appropriate versions. Most of the variants of tests for overidentification and coefficient restrictions show serious deficiencies. The variance of the individual effects is shown to be a major determinant of the poor quality of most asymptotic approximations; therefore, the accurate estimation of this nuisance parameter is investigated. A modification of GMM is found to have some potential when the cross-sectional heteroskedasticity is pronounced and the time-series dimension of the sample is not too small. Finally, all techniques are applied to actual data and lead to insights which differ considerably from those published earlier.
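The moment-condition idea behind these estimators can be conveyed with a much simpler cousin: an Anderson-Hsiao type IV estimator for a first-differenced AR(1) panel, which instruments the lagged difference with a twice-lagged level. The Python sketch below is illustrative only; it is not Arellano-Bond or Blundell-Bond GMM and is not part of the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(9)
N, T, rho = 200, 8, 0.5
alpha = rng.normal(size=N)                     # individual effects
y = np.zeros((N, T))
y[:, 0] = alpha + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(size=N)

dy = np.diff(y, axis=1)                        # first differences remove the fixed effects
dep = dy[:, 2:].ravel()                        # dependent variable: dy_t
lag = dy[:, 1:-1].ravel()                      # regressor: dy_{t-1}
instr = y[:, 1:-2].ravel()                     # instrument: y_{t-2}, valid without serial correlation

rho_iv = (instr @ dep) / (instr @ lag)         # just-identified IV estimate of rho
print("Anderson-Hsiao style IV estimate of rho:", round(rho_iv, 3))
```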

]]>Econometrics doi: 10.3390/econometrics5010013

Authors: Bruno Rémillard

In this paper, we study the asymptotic behavior of the sequential empirical process and the sequential empirical copula process, both constructed from residuals of multivariate stochastic volatility models. Applications for the detection of structural changes and specification tests of the distribution of innovations are discussed. It is also shown that if the stochastic volatility matrices are diagonal, which is the case if the univariate time series are estimated separately instead of being jointly estimated, then the empirical copula process behaves as if the innovations were observed, a remarkable property. As a by-product, one also obtains the asymptotic behavior of rank-based measures of dependence applied to residuals of these time series models.

]]>Econometrics doi: 10.3390/econometrics5010012

Authors: Aparna Sengupta

We consider the problem of testing for a structural break in the spatial lag parameter in a panel model (spatial autoregressive). We propose a likelihood ratio test of the null hypothesis of no break against the alternative hypothesis of a single break. The limiting distribution of the test is derived under the null when both the number of individual units N and the number of time periods T are large, or when N is fixed and T is large. The asymptotic critical values of the test statistic can be obtained analytically. We also propose a break-date estimator that can be employed to determine the location of the break point following evidence against the null hypothesis. We present Monte Carlo evidence to show that the proposed procedure performs well in finite samples. Finally, we consider an empirical application of the test on budget spillovers and interdependence in fiscal policy within the U.S. states.

]]>Econometrics doi: 10.3390/econometrics5010011

Authors: Jesús Clemente María Gadea Antonio Montañés Marcelo Reyes

This study reconsiders the common unit root/co-integration approach to test for the Fisher effect for the economies of the G7 countries. We first show that nominal interest and inflation rates are better represented as I(0) variables. We then use the Bai–Perron procedure to show the existence of structural changes in the Fisher equation. After considering these breaks, we find very limited evidence of a total Fisher effect, as the transmission coefficient of expected inflation rates to nominal interest rates is very different from one.
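The break-dating step can be illustrated with a rough stand-in. The sketch below uses the ruptures package, an assumption about tooling rather than the authors' Bai–Perron procedure, to locate mean shifts in a simulated inflation-style series; it conveys the idea of dating multiple structural changes before re-estimating the Fisher equation regime by regime.

```python
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(4)
segment_means = [2.0, 6.0, 3.0]                     # hypothetical regime means
signal = np.concatenate([m + rng.normal(scale=0.8, size=80) for m in segment_means])

# Exact dynamic-programming segmentation with a least-squares cost.
algo = rpt.Dynp(model="l2", min_size=20).fit(signal)
breaks = algo.predict(n_bkps=2)                     # segment end indices; last entry is len(signal)
print("estimated break dates (observation indices):", breaks[:-1])
```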

]]>Econometrics doi: 10.3390/econometrics5010010

Authors: Pravin Trivedi David Zimmer

Copulas have enjoyed increased usage in many areas of econometrics, including applications with discrete outcomes. However, Genest and Nešlehová (2007) present evidence that copulas for discrete outcomes are not identified, particularly when those discrete outcomes follow count distributions. This paper confirms the Genest and Nešlehová result using a series of simulation exercises. The paper then proceeds to show that those identification concerns diminish if the model has a regression structure such that the exogenous variable(s) generates additional variation in the outcomes and thus more completely covers the outcome domain.

]]>Econometrics doi: 10.3390/econometrics5010008

Authors: P.A.V.B. Swamy Jatinder Mehta I-Lok Chang

Using the net effect of all relevant regressors omitted from a model to form its error term is incorrect because the coefficients and error term of such a model are non-unique. Non-unique coefficients cannot possess consistent estimators. Uniqueness can be achieved if, instead, one uses certain “sufficient sets” of (relevant) regressors omitted from each model to represent the error term. In this case, the unique coefficient on any non-constant regressor takes the form of the sum of a bias-free component and omitted-regressor biases. Measurement-error bias can also be incorporated into this sum. We show that if our procedures are followed, accurate estimation of bias-free components is possible.

]]>Econometrics doi: 10.3390/econometrics5010009

Authors: Jochen Heberle Cristina Sattarhoff

This paper considers the algorithmic implementation of the heteroskedasticity and autocorrelation consistent (HAC) estimation problem for covariance matrices of parameter estimators. We introduce a new algorithm, mainly based on the fast Fourier transform, and show via computer simulation that our algorithm is up to 20 times faster than well-established alternative algorithms. The cumulative effect is substantial if the HAC estimation problem has to be solved repeatedly. Moreover, the bandwidth parameter has no impact on this performance. We provide a general description of the new algorithm as well as code for a reference implementation in R.
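For reference, a plain textbook Newey-West (Bartlett-kernel) HAC covariance for OLS coefficients can be written directly with nested autocovariance sums, as in the Python sketch below. This is the kind of baseline computation whose cost an FFT-based algorithm is designed to reduce; it is not the authors' implementation or their R reference code.

```python
import numpy as np

def newey_west_cov(X, u, bandwidth):
    """HAC covariance of OLS estimates; X is the (T, k) regressor matrix, u the residuals."""
    T, k = X.shape
    Xu = X * u[:, None]                       # score contributions x_t * u_t
    S = Xu.T @ Xu / T                         # lag-0 term
    for lag in range(1, bandwidth + 1):
        w = 1.0 - lag / (bandwidth + 1.0)     # Bartlett weights
        Gamma = Xu[lag:].T @ Xu[:-lag] / T    # lag-`lag` autocovariance of the scores
        S += w * (Gamma + Gamma.T)
    Q_inv = np.linalg.inv(X.T @ X / T)
    return Q_inv @ S @ Q_inv / T              # sandwich estimator

# Small usage example on simulated data.
rng = np.random.default_rng(5)
T = 500
X = np.column_stack([np.ones(T), rng.normal(size=T)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=T)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
print(newey_west_cov(X, resid, bandwidth=5))
```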

]]>Econometrics doi: 10.3390/econometrics5010006

Authors: Ragnar Nymoen

This paper reviews the development of labour market institutions in Norway, shows how labour market regulation has been related to the macroeconomic development, and presents dynamic econometric models of nominal and real wages. Single equation and multi-equation models are reported. The econometric modelling uses a new data set with historical time series of wages and prices, unemployment and labour productivity. Impulse indicator saturation is used to achieve robust estimation of focus parameters, and the breaks are interpreted in the light of the historical overview. A relatively high degree of constancy of the key parameters of the wage setting equation is documented, over a considerably longer historical time period than earlier studies have done. The evidence is consistent with the view that the evolving system of collective labour market regulation over long periods has delivered a certain necessary level of coordination of wage and price setting. Nevertheless, there is also evidence that global forces have been at work for a long time, in a way that links real wages to productivity trends in the same way as in countries with very different institutions and macroeconomic development.

]]>Econometrics doi: 10.3390/econometrics5010007

Authors: Econometrics Editorial Office

The editors of Econometrics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016.[...]

]]>Econometrics doi: 10.3390/econometrics5010005

Authors: Seong Chang Pierre Perron

This paper considers testing procedures for the null hypothesis of a unit root process against the alternative of a fractional process, called a fractional unit root test. We extend the Lagrange Multiplier (LM) tests of Robinson (1994) and Tanaka (1999), which are locally best invariant and uniformly most powerful, to allow for a slope change in trend with or without a concurrent level shift under both the null and alternative hypotheses. We show that the limit distribution of the proposed LM tests is standard normal. Finite sample simulation experiments show that the tests have good size and power. As an empirical analysis, we apply the tests to the Consumer Price Indices of the G7 countries.

]]>Econometrics doi: 10.3390/econometrics5010001

Authors: Luis Álvarez

Filters constructed on the basis of standard local polynomial regression (LPR) methods have been used in the literature to estimate the business cycle. We provide a frequency domain interpretation of the contrast filter obtained as the difference between a series and its long-run LPR component and show that it operates as a kind of high-pass filter, so that it provides a noisy estimate of the cycle. We alternatively propose band-pass local polynomial regression methods aimed at isolating the cyclical component. Results are compared to standard high-pass and band-pass filters. Procedures are illustrated using the US GDP series.
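A rough illustration of the high-pass versus band-pass distinction, not the authors' filters: the Python sketch below uses the lowess smoother from statsmodels, a local linear regression, as a stand-in for LPR. Subtracting a wide-bandwidth trend from the series gives a noisy high-pass-type cycle estimate, while the difference between a medium- and a wide-bandwidth smooth gives a crude band-pass-type estimate. The series is simulated, not US GDP.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(6)
T = 240
t = np.arange(T)
series = 0.02 * t + np.sin(2 * np.pi * t / 32) + 0.3 * rng.normal(size=T)

trend = lowess(series, t, frac=0.6, return_sorted=False)    # long-run component
smooth = lowess(series, t, frac=0.15, return_sorted=False)  # medium-run component

high_pass_cycle = series - trend          # noisy contrast-filter analogue
band_pass_cycle = smooth - trend          # band-pass-type analogue
print(np.std(high_pass_cycle), np.std(band_pass_cycle))
```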

]]>Econometrics doi: 10.3390/econometrics5010004

Authors: Jingjing Yang

This paper discusses the consistency of trend break point estimators when the number of breaks is underspecified. The consistency of break point estimators in a simple location model with level shifts has been well documented by researchers under various settings, including extensions such as allowing a time trend in the model. Despite the consistency of break point estimators of level shifts, there are few papers on the consistency of trend shift break point estimators in the presence of an underspecified break number. The simulation study and asymptotic analysis in this paper show that the trend shift break point estimator does not converge to the true break points when the break number is underspecified. In the case of two trend shifts, the inconsistency problem worsens if the magnitudes of the breaks are similar and the breaks are either both positive or both negative. The limiting distribution for the trend break point estimator is developed and closely approximates the finite sample performance.
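A small simulation in the spirit of the abstract: generate a series with two positive trend-slope shifts of similar magnitude, then estimate a single break date by least squares, deliberately underspecifying the number of breaks, and see where the estimator lands. The sample size, break dates and slope shifts below are illustrative choices, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 300
t = np.arange(T)
true_breaks = (100, 200)
slope = 0.05 + 0.05 * (t >= true_breaks[0]) + 0.05 * (t >= true_breaks[1])
y = np.cumsum(slope) + rng.normal(size=T)           # broken trend plus noise

def ssr_one_break(tb):
    """SSR from fitting an intercept, a linear trend and a single slope shift at date tb."""
    X = np.column_stack([np.ones(T), t, np.maximum(t - tb, 0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

candidates = range(20, T - 20)                       # trim the sample ends
estimate = min(candidates, key=ssr_one_break)
print("true breaks:", true_breaks, "single-break estimate:", estimate)
```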

]]>