Econometrics doi: 10.3390/econometrics7030041

Authors: Matei Rovira Agell

We propose a methodology for including night volatility estimates in the day volatility modeling problem with high-frequency data in a realized generalized autoregressive conditional heteroskedasticity (GARCH) framework, which takes advantage of the natural relationship between the realized measure and the conditional variance. This improves volatility modeling by adding, in a two-factor structure, information on latent processes that occur while markets are closed, while still capturing the leverage effect and maintaining a mathematical structure that facilitates volatility estimation. A class of bivariate models that includes intraday, day, and night volatility estimates is proposed and empirically tested to confirm whether using night volatility information improves day volatility estimation. The results indicate a forecasting improvement of the bivariate models over those that do not include night volatility estimates.
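
To fix ideas, here is a deliberately simplified sketch of the kind of recursion involved: a GARCH-style conditional variance updated by both a daytime and an overnight realized measure. The parameter values, the linear form, and the additive way the night measure enters are illustrative assumptions, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500

# Placeholder "observed" realized measures (day and night sessions).
rv_day = np.exp(rng.normal(-9.0, 0.5, T))    # daytime realized variance
rv_night = np.exp(rng.normal(-10.0, 0.5, T)) # overnight realized variance

# Illustrative parameters of a realized-GARCH-type recursion:
#   h_t = omega + beta * h_{t-1} + gamma_d * rv_day_{t-1} + gamma_n * rv_night_{t-1}
omega, beta, gamma_d, gamma_n = 1e-6, 0.55, 0.30, 0.10

h = np.empty(T)
h[0] = rv_day.mean()
for t in range(1, T):
    h[t] = omega + beta * h[t - 1] + gamma_d * rv_day[t - 1] + gamma_n * rv_night[t - 1]

print("average fitted conditional variance:", h.mean())
```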

Econometrics doi: 10.3390/econometrics7030040

Authors: Tian Xie

In this paper, we study the problem of forecasting Bitcoin realized volatility computed on data from the largest crypto exchange, Binance. Given the unique features of the crypto asset market, we find that conventional regression models exhibit strong model specification uncertainty. To circumvent this issue, we suggest using least squares model-averaging methods to model and forecast Bitcoin volatility. The empirical results demonstrate that least squares model-averaging methods in general outperform many other conventional regression models that ignore specification uncertainty.
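
As a rough illustration of the least squares model-averaging step, the sketch below combines nested candidate regressions with simplex-constrained weights chosen to minimize in-sample squared error. A full Mallows-type criterion would add a penalty for model dimension, and all data here are simulated placeholders.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, k = 300, 4
X = rng.normal(size=(n, k))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)

# Candidate models: nested OLS fits using the first j regressors.
fits = []
for j in range(1, k + 1):
    beta = np.linalg.lstsq(X[:, :j], y, rcond=None)[0]
    fits.append(X[:, :j] @ beta)
F = np.column_stack(fits)   # n x (number of candidate models)

# Least squares model averaging: weights on the unit simplex that
# minimize the squared error of the combined fit.
def sse(w):
    return np.sum((y - F @ w) ** 2)

m = F.shape[1]
res = minimize(sse, np.full(m, 1 / m), method="SLSQP",
               bounds=[(0, 1)] * m,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print("model-averaging weights:", np.round(res.x, 3))
```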

Econometrics doi: 10.3390/econometrics7030039

Authors: Wei Qian Craig A. Rolling Gang Cheng Yuhong Yang

It is often reported in the forecast combination literature that a simple average of candidate forecasts is more robust than sophisticated combining methods. This phenomenon is usually referred to as the “forecast combination puzzle”. Motivated by this puzzle, we explore its possible explanations, including high variance in estimating the target optimal weights (estimation error), invalid weighting formulas, and model/candidate screening before combination. We show that the existing understanding of the puzzle should be complemented by the distinction between different forecast combination scenarios known as combining for adaptation and combining for improvement. Applying combining methods without considering the underlying scenario can itself cause the puzzle. Based on these new understandings, both simulations and real data evaluations are conducted to illustrate the causes of the puzzle. We further propose a multi-level AFTER strategy that can integrate the strengths of different combining methods and adapt intelligently to the underlying scenario. In particular, by treating the simple average as a candidate forecast, the proposed strategy is shown to reduce the heavy cost of estimation error and, to a large extent, mitigate the puzzle.
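
The estimation error mechanism behind the puzzle is easy to reproduce. The toy simulation below compares the simple average against OLS-estimated (Granger-Ramanathan-style) combination weights fitted on a short training window; with equally accurate candidate forecasts, the estimated weights often lose. This is only an illustration of the puzzle, not the multi-level AFTER strategy proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
T, T_train = 400, 40
y = rng.normal(size=T)

# Two unbiased candidate forecasts with equally accurate, correlated errors,
# so the optimal combination weights are (0.5, 0.5).
e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=T)
f = y[:, None] + e

# Estimate "optimal" weights on a short training window by OLS of y on forecasts.
w = np.linalg.lstsq(f[:T_train], y[:T_train], rcond=None)[0]

test = slice(T_train, T)
mse_avg = np.mean((y[test] - f[test].mean(axis=1)) ** 2)
mse_opt = np.mean((y[test] - f[test] @ w) ** 2)
print(f"simple average MSE: {mse_avg:.3f}, estimated-weights MSE: {mse_opt:.3f}")
```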

Econometrics doi: 10.3390/econometrics7030038

Authors: Qingfeng Liu Andrey L. Vasnev

To avoid the risk of misspecification between homoscedastic and heteroscedastic models, we propose a combination method based on ordinary least-squares (OLS) and generalized least-squares (GLS) model-averaging estimators. To select optimal weights for the combination, we suggest two information criteria and propose feasible versions that work even when the variance-covariance matrix is unknown. The optimality of the method is proven under some regularity conditions. The results of a Monte Carlo simulation demonstrate that the method is adaptive in the sense that it achieves almost the same estimation accuracy as if the homoscedasticity or heteroscedasticity of the error term were known.

Econometrics doi: 10.3390/econometrics7030037

Authors: Richard M. Golden Steven S. Henley Halbert White T. Michael Kashner

Researchers are often faced with the challenge of developing statistical models with incomplete data. Exacerbating this situation is the possibility that either the researcher’s complete-data model or the model of the missing-data mechanism is misspecified. In this article, we create a formal theoretical framework for developing statistical models and detecting model misspecification in the presence of incomplete data where maximum likelihood estimates are obtained by maximizing the observable-data likelihood function when the missing-data mechanism is assumed ignorable. First, we provide sufficient regularity conditions on the researcher’s complete-data model to characterize the asymptotic behavior of maximum likelihood estimates in the simultaneous presence of both missing data and model misspecification. These results are then used to derive robust hypothesis testing methods for possibly misspecified models in the presence of Missing at Random (MAR) or Missing Not at Random (MNAR) missing data. Second, we introduce a method for the detection of model misspecification in missing data problems using recently developed Generalized Information Matrix Tests (GIMT). Third, we identify regularity conditions for the Missing Information Principle (MIP) to hold in the presence of model misspecification so as to provide useful computational covariance matrix estimation formulas. Fourth, we provide regularity conditions that ensure the observable-data expected negative log-likelihood function is convex in the presence of partially observable data when the amount of missingness is sufficiently small and the complete-data likelihood is convex. Fifth, we show that when the researcher has correctly specified a complete-data model with a convex negative likelihood function and an ignorable missing-data mechanism, then its strict local minimizer is the true parameter value for the complete-data model when the amount of missingness is sufficiently small. Our results thus provide new robust estimation, inference, and specification analysis methods for developing statistical models with incomplete data.

Econometrics doi: 10.3390/econometrics7030036

Authors: Sophie van Huellen Duo Qin

This paper re-examines the instrumental variable (IV) approach to estimating returns to education by use of compulsory school law (CSL) in the US. We show that the IV approach amounts to a change in model specification by changing the causal status of the variable of interest. From this perspective, the IV-OLS (ordinary least squares) choice becomes a model selection issue between non-nested models and is hence testable using cross-validation methods. It also enables us to unravel several logical flaws in the conceptualisation of IV-based models. Using the causal chain model specification approach, we overcome these flaws by carefully distinguishing returns to education from the treatment effect of CSL. We find relatively robust estimates for the first effect, while estimates for the second effect are hindered by measurement errors in the CSL indicators. We find reassurance for our approach in fundamental theories of statistical learning.

Econometrics doi: 10.3390/econometrics7030035

Authors: Richard Kouamé Moussa

This paper introduces an estimation procedure for a random effects probit model in the presence of heteroskedasticity, together with a likelihood ratio test for homoskedasticity. The cases where the heteroskedasticity is due to individual effects, idiosyncratic errors, or both are analyzed. Monte Carlo simulations show that the test performs well in the case of a high degree of heteroskedasticity. Furthermore, the power of the test increases with larger individual and time dimensions. The robustness analysis shows that applying the wrong approach may generate misleading results, except for the case where both individual effects and idiosyncratic errors are modelled as heteroskedastic.

Econometrics doi: 10.3390/econometrics7030034

Authors: Jie Chen Dimitris N. Politis

This paper gives a computer-intensive approach to multi-step-ahead prediction of volatility in financial returns series under an ARCH/GARCH model and also under a model-free setting, namely employing the NoVaS transformation. Our model-based approach only assumes i.i.d. innovations without requiring knowledge of (or assumptions on) the error distribution and is computationally straightforward. The model-free approach is formally quite similar, although a GARCH model is not assumed. We conducted a number of simulations to show that the proposed approach works well for both point prediction (under L1 and/or L2 measures) and prediction intervals that were constructed using bootstrapping. The performance of GARCH models and the model-free approach for multi-step-ahead prediction was also compared under different data generating processes.
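
A minimal sketch of the bootstrap idea, assuming a GARCH(1,1) with already-estimated parameters and resampling standardized residuals instead of imposing an error distribution; all inputs below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative GARCH(1,1) parameters (assumed already estimated).
omega, alpha, beta = 0.05, 0.08, 0.90

# Placeholder standardized residuals from the fitted model and current state.
std_resid = rng.standard_t(df=7, size=1000)
std_resid /= std_resid.std()
h_now, r_now = 1.2, -0.5

# Bootstrap multi-step volatility paths: resample standardized residuals
# rather than assuming a parametric error distribution.
H, B = 10, 5000
h_paths = np.empty((B, H))
for b in range(B):
    h, r = h_now, r_now
    for s in range(H):
        h = omega + alpha * r ** 2 + beta * h
        r = np.sqrt(h) * rng.choice(std_resid)
        h_paths[b, s] = h

point = h_paths.mean(axis=0)                       # point predictions (L2 sense)
lo, hi = np.percentile(h_paths, [5, 95], axis=0)   # bootstrap prediction interval
print("3-step-ahead variance forecast:", point[2], "90% interval:", (lo[2], hi[2]))
```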

Econometrics doi: 10.3390/econometrics7030033

Authors: Chuanming Gao Kajal Lahiri

We compare the finite sample performance of a number of Bayesian and classical procedures for limited information simultaneous equations models with weak instruments in a Monte Carlo study. We consider Bayesian approaches developed by Chao and Phillips, Geweke, Kleibergen and van Dijk, and Zellner. Amongst the sampling theory methods, OLS, 2SLS, LIML, Fuller’s modified LIML, and the jackknife instrumental variable estimator (JIVE) due to Angrist et al. and Blomquist and Dahlberg are also considered. Since the posterior densities and their conditionals in Chao and Phillips and Kleibergen and van Dijk are nonstandard, we use a novel “Gibbs within Metropolis–Hastings” algorithm, which only requires the availability of the conditional densities from the candidate generating density. Our results show that with very weak instruments, there is no single estimator that is superior to others in all cases. When endogeneity is weak, Zellner’s MELO does the best. When the endogeneity is not weak and ρω₁₂ > 0, where ρ is the correlation coefficient between the structural and reduced form errors and ω₁₂ is the covariance between the unrestricted reduced form errors, the Bayesian method of moments (BMOM) outperforms all other estimators by a wide margin. When the endogeneity is not weak and βρ < 0 (β being the structural parameter), the Kleibergen and van Dijk approach seems to work very well. Surprisingly, the performance of JIVE was disappointing in all our experiments.
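
The "Gibbs within Metropolis-Hastings" idea can be sketched generically: sample the standard conditionals directly and handle any nonstandard conditional with an embedded MH step. The toy target below is an invented two-parameter density, not the posterior of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy target: theta1 | theta2 ~ Normal(theta2, 1) (standard, sampled directly),
# while theta2 has a nonstandard conditional handled by a Metropolis-Hastings
# step inside the Gibbs sweep.
def log_cond_theta2(t2, t1):
    return -0.5 * (t1 - t2) ** 2 - np.abs(t2)   # nonstandard kernel

n_draws = 5000
t1, t2 = 0.0, 0.0
draws = np.empty((n_draws, 2))
for i in range(n_draws):
    # Gibbs step: draw from the standard conditional.
    t1 = rng.normal(t2, 1.0)
    # MH step: random-walk proposal for the nonstandard conditional.
    prop = t2 + rng.normal(0, 0.8)
    if np.log(rng.uniform()) < log_cond_theta2(prop, t1) - log_cond_theta2(t2, t1):
        t2 = prop
    draws[i] = t1, t2

print("posterior means after burn-in:", draws[1000:].mean(axis=0))
```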

Econometrics doi: 10.3390/econometrics7030032

Authors: Maria Felice Arezzo Giuseppina Guagnano

Most empirical work in the social sciences is based on observational data that are often both incomplete, and therefore unrepresentative of the population of interest, and affected by measurement errors. These problems are very well known in the literature, and ad hoc procedures for parametric modeling have been proposed and developed for some time in order to correct estimation bias and obtain consistent estimators. However, to the best of our knowledge, the aforementioned problems have not yet been jointly considered. We try to overcome this by proposing a parametric approach for the estimation of the probabilities of misclassification of a binary response variable by incorporating them in the likelihood of a binary choice model with sample selection.

Econometrics doi: 10.3390/econometrics7030031

Authors: Franz Ramsauer Aleksey Min Michael Lingauer

This article extends the Factor-Augmented Vector Autoregression Model (FAVAR) to mixed-frequency and incomplete panel data. Within the scope of a fully parametric two-step approach, the alternating application of two expectation-maximization algorithms jointly estimates model parameters and missing data. In contrast to the existing literature, we do not require observable factor components to be part of the panel data. For this purpose, we modify the Kalman filter for factors consisting of latent and observed components, which significantly improves the reconstruction of latent factors according to our simulation study. To identify the model parameters uniquely, the loadings matrix is constrained. In our empirical application, the presented framework analyzes US data for measuring the effects of monetary policy on the real economy and financial markets. Here, the consequences for the quarterly Gross Domestic Product (GDP) growth rates are of particular importance.

Econometrics doi: 10.3390/econometrics7030030

Authors: Annika Homburg Christian H. Weiß Layth C. Alwan Gabriel Frahm Rainer Göb

In forecasting count processes, practitioners often ignore the discreteness of counts and compute forecasts based on Gaussian approximations instead. For both central and non-central point forecasts, and for various types of count processes, the performance of such approximate point forecasts is analyzed. The considered data-generating processes include different autoregressive schemes with varying model orders, count models with overdispersion or zero inflation, counts with a bounded range, and counts exhibiting trend or seasonality. We conclude that Gaussian forecast approximations should be avoided.

Econometrics doi: 10.3390/econometrics7030029

Authors: Emanuela Ciapanna Marco Taboga

This paper deals with instability in regression coefficients. We propose a Bayesian regression model with time-varying coefficients (TVC) that allows us to jointly estimate the degree of instability and the time-path of the coefficients. Thanks to the computational tractability of the model and to the fact that it is fully automatic, we are able to run Monte Carlo experiments and analyze its finite-sample properties. We find that the estimation precision and the forecasting accuracy of the TVC model compare favorably to those of other methods commonly employed to deal with parameter instability. A distinguishing feature of the TVC model is its robustness to mis-specification: its performance is also satisfactory when regression coefficients are stable or when they experience discrete structural breaks. As a demonstrative application, we used our TVC model to estimate the exposures of S&P 500 stocks to market-wide risk factors: we found that a vast majority of stocks had time-varying exposures and that the TVC model helped to better forecast these exposures.

Econometrics doi: 10.3390/econometrics7020028

Authors: Fernando Rios-Avila

This paper presents an extension of the Oaxaca–Blinder decomposition to continuous groups using a semiparametric approach known as the varying coefficients model. To account for potential self-selection into the continuum of groups, the use of inverse Mills ratios is expanded upon, following the literature on endogenous selection. The flexibility of this methodology may allow detecting heterogeneity when analyzing endogenous dose treatment effects, as well as correcting for endogeneity when analyzing the heterogeneous partial effects across the continuous group variable. For illustration, the methodology is used to revisit the impact of body weight on wages, using body mass index (BMI) as the continuum of groups, finding evidence that body weight has a negative but decreasing impact on wages for both white men and women.

Econometrics doi: 10.3390/econometrics7020027

Authors: Zhengyuan Gao Christian M. Hafner

Filtering has had a profound impact as a device for perceiving information and deriving agent expectations in dynamic economic models. For an abstract economic system, this paper shows that the foundation for applying the filtering method corresponds to the existence of a conditional expectation as an equilibrium process. Agent-based rational behavior of looking backward and looking forward is generalized to a conditional expectation process where the economic system is approximated by a class of models that can be represented and estimated without information loss. The proposed framework elucidates the range of applications of a general filtering device and is not limited to a particular model class such as rational expectations.

Econometrics doi: 10.3390/econometrics7020026

Authors: David Trafimow

There has been much debate about null hypothesis significance testing, p-values without null hypothesis significance testing, and confidence intervals. The first major section of the present article addresses some of the main reasons these procedures are problematic. The conclusion is that none of them are satisfactory. However, there is a new procedure, termed the a priori procedure (APP), that validly aids researchers in obtaining sample statistics that have acceptable probabilities of being close to their corresponding population parameters. The second major section provides a description and review of APP advances. Not only does the APP avoid the problems that plague other inferential statistical procedures, but it is also easy to perform. Although the APP can be performed in conjunction with other procedures, the present recommendation is that it be used alone.

Econometrics doi: 10.3390/econometrics7020025

Authors: Kyoo il Kim

It is well known that efficient estimation of average treatment effects can be obtained by the method of inverse propensity score weighting using the estimated propensity score, even when the true one is known. When the true propensity score is unknown but parametric, it is conjectured from the literature that we still need nonparametric propensity score estimation to achieve efficiency. We formalize this argument and further identify the source of the efficiency loss arising from parametric estimation of the propensity score. We also provide intuition for why this overfitting, i.e., estimating nonparametrically a quantity that is known to be parametric, is necessary. Our finding suggests that, even when we know that the true propensity score belongs to a parametric class, we still need to estimate the propensity score by a nonparametric method in applications.
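
For reference, here is a minimal sketch of the inverse-propensity-weighted ATE estimator with a parametrically (logistic) estimated score, on simulated data; the paper's point is about replacing this parametric first step with a nonparametric one, which is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=n)

# Simulated design: logistic propensity score, treatment d, outcome y (true ATE = 1).
p_true = 1 / (1 + np.exp(-0.5 * x))
d = rng.uniform(size=n) < p_true
y = 1.0 * d + x + rng.normal(size=n)

# Parametric first step: logistic regression fitted by Newton iterations.
b = np.zeros(2)
Z = np.column_stack([np.ones(n), x])
for _ in range(25):
    p = 1 / (1 + np.exp(-Z @ b))
    W = p * (1 - p)
    b += np.linalg.solve(Z.T @ (W[:, None] * Z), Z.T @ (d - p))
p_hat = 1 / (1 + np.exp(-Z @ b))

# Inverse propensity score weighting estimator of the ATE.
ate_hat = np.mean(d * y / p_hat - (1 - d) * y / (1 - p_hat))
print("IPW ATE estimate:", round(ate_hat, 3))
```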

Econometrics doi: 10.3390/econometrics7020024

Authors: Jan R. Magnus

The t-ratio has not one but two uses in econometrics, which should be carefully distinguished. It is used as a test and also as a diagnostic. I emphasize that the commonly-used estimators are in fact pretest estimators, and argue in favor of an improved (continuous) version of pretesting, called model averaging.

Econometrics doi: 10.3390/econometrics7020023

Authors: Tue Gørgens Allan H. Würtz

This paper considers the estimation of dynamic threshold regression models with fixed effects using short panel data. We examine a two-step method, where the threshold parameter is estimated nonparametrically at the N-rate and the remaining parameters are estimated by GMM at the √N-rate. We provide simulation results that illustrate advantages of the new method in comparison with pure GMM estimation. The simulations also highlight the importance of the choice of instruments in GMM estimation.

Econometrics doi: 10.3390/econometrics7020022

Authors: Pierre Perron Yohei Yamamoto

In empirical applications based on linear regression models, structural changes often occur in both the error variance and regression coefficients, possibly at different dates. A commonly applied method is to first test for changes in the coefficients (or in the error variance) and, conditional on the break dates found, test for changes in the variance (or in the coefficients). In this note, we provide evidence that such procedures have poor finite sample properties when the changes in the first step are not correctly accounted for. In doing so, we show that testing for changes in the coefficients (or in the variance) ignoring changes in the variance (or in the coefficients) induces size distortions and loss of power. Our results illustrate a need for a joint approach to test for structural changes in both the coefficients and the variance of the errors. We provide some evidence that the procedures suggested by Perron et al. (2019) provide tests with good size and power.

Econometrics doi: 10.3390/econometrics7020021

Authors: Jae H. Kim Andrew P. Robinson

This paper presents a brief review of interval-based hypothesis testing, widely used in bio-statistics, medical science, and psychology, namely, tests for minimum-effect, equivalence, and non-inferiority. We present the methods in the contexts of a one-sample t-test and a test for linear restrictions in a regression. We present applications in testing for market efficiency, validity of asset-pricing models, and persistence of economic time series. We argue that, from the point of view of economics and finance, interval-based hypothesis testing provides more sensible inferential outcomes than those based on point-null hypotheses. We propose that interval-based tests be routinely employed in empirical research in business, as an alternative to point-null hypothesis testing, especially in the new era of big data.
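
As a concrete instance of interval-based testing, the sketch below implements the two one-sided tests (TOST) procedure for equivalence of a mean to zero within a tolerance delta; the data and tolerance are illustrative placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.normal(0.02, 1.0, 250)   # e.g., daily returns; is the mean zero in practice?

# Interval null (non-equivalence): the mean lies outside [-delta, +delta].
# TOST: run two one-sided t-tests and reject non-equivalence only if both reject.
delta = 0.15
se = x.std(ddof=1) / np.sqrt(len(x))
t_lower = (x.mean() + delta) / se   # H0: mean <= -delta
t_upper = (x.mean() - delta) / se   # H0: mean >= +delta
p_lower = 1 - stats.t.cdf(t_lower, df=len(x) - 1)
p_upper = stats.t.cdf(t_upper, df=len(x) - 1)
p_tost = max(p_lower, p_upper)
print(f"TOST p-value: {p_tost:.4f} (small value = evidence of equivalence)")
```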

Econometrics doi: 10.3390/econometrics7020020

Authors: Burkhard Raunig

It is customary to assume that an indicator of a latent variable is driven by the latent variable and some random noise. In contrast, a background indicator is also systematically influenced by variables outside the structural model of interest. Background indicators deserve attention because in empirical work they are difficult to distinguish from ordinary effect indicators. This paper assesses instrumental variable (IV) estimation of the effect of a latent variable in a linear model when a background indicator replaces the latent variable. It turns out that IV estimates are inconsistent in many important cases. In some cases, the estimates capture causal effects of the indicator rather than causal effects of the latent variable. A simulation experiment that considers the impact of economic uncertainty on aggregate consumption illustrates some of the results.

Econometrics doi: 10.3390/econometrics7020019

Authors: Carlos Trucíos Mauricio Zevallos Luiz K. Hotta André A. P. Santos

Many financial decisions, such as portfolio allocation, risk management, option pricing and hedge strategies, are based on forecasts of the conditional variances, covariances and correlations of financial returns. This paper presents an empirical comparison of several methods to predict one-step-ahead conditional covariance matrices. These matrices are used as inputs to obtain out-of-sample minimum variance portfolios based on stocks belonging to the S&P 500 index from 2000 to 2017 and sub-periods. The analysis is carried out using several metrics, including standard deviation, turnover, net average return, information ratio and Sortino’s ratio. We find that no method is best in all scenarios and that performance depends on the criterion, the period of analysis and the rebalancing strategy.
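
Once a one-step-ahead covariance forecast is in hand, the minimum variance portfolio follows in closed form, w = Σ⁻¹1 / (1'Σ⁻¹1). A sketch with a placeholder covariance forecast standing in for the output of a multivariate GARCH, DCC, or shrinkage estimator:

```python
import numpy as np

rng = np.random.default_rng(7)
k = 5

# Placeholder one-step-ahead covariance forecast (positive definite by construction).
A = rng.normal(size=(k, k))
sigma = A @ A.T / k + 0.1 * np.eye(k)

# Global minimum variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
ones = np.ones(k)
w = np.linalg.solve(sigma, ones)
w /= w.sum()
print("minimum variance weights:", np.round(w, 3))
print("portfolio variance:", w @ sigma @ w)
```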

Econometrics doi: 10.3390/econometrics7020018

Authors: Thomas R. Dyckman Stephen A. Zeff

A great deal of the accounting research published in recent years has involved statistical tests. Our paper proposes improvements to both the quality and execution of such research. We address the following limitations in current research that appear to us to be ignored or used inappropriately: (1) unaddressed situational effects resulting from model limitations and what has been referred to as “data carpentry”; (2) limitations of, and alternatives to, winsorizing; (3) necessary improvements to relying on a study’s calculated p-values instead of on the economic or behavioral importance of the results; and (4) the information loss incurred by under-valuing what can and cannot be learned from replications.

Econometrics doi: 10.3390/econometrics7020017

Authors: Christian H. Weiß

The analysis and modeling of categorical time series requires quantifying the extent of dispersion and serial dependence. The dispersion of categorical data is commonly measured by the Gini index or entropy, but the recently proposed extropy measure can also be used for this purpose. Regarding signed serial dependence in categorical time series, we consider three types of κ-measures. By analyzing bias properties, it is shown that one of the κ-measures is always related to one of the above-mentioned dispersion measures. For doing statistical inference based on the sample versions of these dispersion and dependence measures, knowledge of their distribution is required. Therefore, we study the asymptotic distributions and bias corrections of the considered dispersion and dependence measures, and we investigate the finite-sample performance of the resulting asymptotic approximations with simulations. The application of the measures is illustrated with real-data examples from politics, economics and biology.

Econometrics doi: 10.3390/econometrics7010016

Authors: Taehoon Kim Jacob Schwartz Kyungchul Song Yoon-Jae Whang

This paper considers two-sided matching models with nontransferable utilities, with one side having homogeneous preferences over the other side. When one observes only one or several large matchings, despite the large number of agents involved, asymptotic inference is difficult because the observed matching involves the preferences of all the agents on both sides in a complex way, and creates a complicated form of cross-sectional dependence across observed matches. When we assume that the observed matching is a consequence of a stable matching mechanism with homogeneous preferences on one side, and the preferences are drawn from a parametric distribution conditional on observables, the large observed matching follows a parametric distribution. This paper shows in such a situation how the method of Monte Carlo inference can be a viable option. Being a finite sample inference method, it does not require independence or local dependence among the observations which are often used to obtain asymptotic validity. Results from a Monte Carlo simulation study are presented and discussed.

Econometrics doi: 10.3390/econometrics7010015

Authors: Tomohiro Ando Naoya Sueishi

This paper investigates the asymptotic properties of a penalized empirical likelihood estimator for moment restriction models when the number of parameters (pₙ) and/or the number of moment restrictions increases with the sample size. Our main result is that the SCAD-penalized empirical likelihood estimator is √(n/pₙ)-consistent under a reasonable condition on the regularization parameter. Our consistency rate is better than the existing ones. This paper also provides sufficient conditions under which √(n/pₙ)-consistency and an oracle property are satisfied simultaneously. As far as we know, this paper is the first to specify sufficient conditions for both √(n/pₙ)-consistency and the oracle property of the penalized empirical likelihood estimator.

Econometrics doi: 10.3390/econometrics7010014

Authors: David T. Frazier Eric Renault

The standard approach to indirect inference estimation considers that the auxiliary parameters, which carry the identifying information about the structural parameters of interest, are obtained from some just-identified vector of estimating equations. In contrast to this standard interpretation, we demonstrate that the case of overidentified auxiliary parameters is both possible and, indeed, more commonly encountered than one may initially realize. We then revisit the “moment matching” and “parameter matching” versions of indirect inference in this context and devise efficient estimation strategies in this more general framework. Perhaps surprisingly, we demonstrate that if one were to consider the naive choice of an efficient Generalized Method of Moments (GMM)-based estimator for the auxiliary parameters, the resulting indirect inference estimators would be inefficient. In this general context, we demonstrate that efficient indirect inference estimation actually requires a two-step estimation procedure, whereby the goal of the first step is to obtain an efficient version of the auxiliary model. These two-step estimators are presented both within the context of moment matching and parameter matching.

Econometrics doi: 10.3390/econometrics7010012

Authors: Karl-Heinz Schild Karsten Schweikert

This paper investigates the properties of tests for asymmetric long-run adjustment which are often applied in empirical studies on asymmetric price transmissions. We show that substantial size distortions are caused by preconditioning the test on finding sufficient evidence for cointegration in a first step. The extent of oversizing the test for long-run asymmetry depends inversely on the power of the primary cointegration test. Hence, tests for long-run asymmetry become invalid in cases of small sample sizes or slow speed of adjustment. Further, we provide simulation evidence that tests for long-run asymmetry are generally oversized if the threshold parameter is estimated by conditional least squares and show that bootstrap techniques can be used to obtain the correct size.

Econometrics doi: 10.3390/econometrics7010013

Authors: Mingmian Cheng Norman R. Swanson

Numerous tests designed to detect realized jumps over a fixed time span have been proposed and extensively studied in the financial econometrics literature. These tests differ from “long time span tests” that detect jumps by examining the magnitude of the jump intensity parameter in the data generating process, and which are consistent. In this paper, long span jump tests are compared and contrasted with a variety of fixed span jump tests in a series of Monte Carlo experiments. It is found that both the long time span tests of Corradi et al. (2018) and the fixed span tests of Aït-Sahalia and Jacod (2009) exhibit reasonably good finite sample properties, for time spans both short and long. Various other tests suffer from finite sample distortions, both under sequential testing and under long time spans. The latter finding is new, and confirms the “pitfall” discussed in Huang and Tauchen (2005) of using asymptotic approximations associated with finite time span tests in order to study long time spans of data. An empirical analysis is carried out to investigate the implications of these findings, and “time-span robust” tests indicate that the prevalence of jumps is not as universal as might be expected.

Econometrics doi: 10.3390/econometrics7010011

Authors: Richard Startz

As a contribution toward the ongoing discussion about the use and mis-use of p-values, numerical examples are presented demonstrating that a p-value can, as a practical matter, give you a really different answer than the one that you want.

Econometrics doi: 10.3390/econometrics7010010

Authors: Miguel Henry George Judge

The focus of this paper is an information theoretic-symbolic logic approach to extracting information from complex economic systems and unlocking its dynamic content. Permutation Entropy (PE) is used to capture the permutation patterns, i.e., the ordinal relations among the individual values of a given time series; to obtain a probability distribution of the accessible patterns; and to quantify the degree of complexity of an economic behavior system. Ordinal patterns are used to describe the intrinsic patterns that are hidden in the dynamics of the economic system. Empirical applications involving the Dow Jones Industrial Average are presented to indicate the information recovery value and the applicability of the PE method. The results demonstrate the ability of the PE method to detect the extent of complexity (irregularity) and to discriminate and classify admissible and forbidden states.
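
A compact implementation of normalized permutation entropy, the quantity the paper builds on; forbidden states correspond to ordinal patterns with zero observed frequency. The order m = 3 and the test series are illustrative choices.

```python
import numpy as np
from itertools import permutations
from math import factorial

def permutation_entropy(x, m=3):
    """Normalized permutation entropy of order m for a 1-D series."""
    patterns = {p: 0 for p in permutations(range(m))}
    for i in range(len(x) - m + 1):
        patterns[tuple(np.argsort(x[i:i + m]))] += 1
    # Patterns with zero count are the "forbidden" states.
    counts = np.array([c for c in patterns.values() if c > 0], dtype=float)
    probs = counts / counts.sum()
    return -np.sum(probs * np.log(probs)) / np.log(factorial(m))

rng = np.random.default_rng(8)
print("white noise :", round(permutation_entropy(rng.normal(size=2000)), 3))  # ~1
print("monotone    :", round(permutation_entropy(np.arange(2000.0)), 3))      # 0
```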

Econometrics doi: 10.3390/econometrics7010009

Authors: Niels Haldrup Carsten P. T. Rosenskjold

The prototypical Lee–Carter mortality model is characterized by a single common time factor that loads differently across age groups. In this paper, we propose a parametric factor model for the term structure of mortality where multiple factors are designed to influence the age groups differently via parametric loading functions. We identify four different factors: a factor common for all age groups, factors for infant and adult mortality, and a factor for the “accident hump” that primarily affects mortality of relatively young adults and late teenagers. Since the factors are identified via restrictions on the loading functions, the factors are not designed to be orthogonal but can be dependent and can possibly cointegrate when the factors have unit roots. We suggest two estimation procedures similar to the estimation of the dynamic Nelson–Siegel term structure model: first, a two-step nonlinear least squares procedure based on cross-section regressions together with a separate model to estimate the dynamics of the factors; second, a fully specified model estimated by maximum likelihood via the Kalman filter recursions after the model is put in state space form. We demonstrate the methodology for US and French mortality data. We find that the model provides a good fit of the relevant factors and, in a forecast comparison with a range of benchmark models, it is found that, especially for longer horizons, variants of the parametric factor model have excellent forecast performance.

Econometrics doi: 10.3390/econometrics7010008

Authors: Antonio Pacifico

This paper provides an overview of a time-varying Structural Panel Bayesian Vector Autoregression model that deals with model misspecification and unobserved heterogeneity problems in applied macroeconomic analyses when studying time-varying relationships and dynamic interdependencies among countries and variables. I discuss the model's distinctive features, what it is used for, and how it can be analytically derived. I also describe how it is estimated and how structural spillovers and shock identification are performed. The model is applied empirically to a set of developed European economies to illustrate its functioning and performance. The paper also discusses more recent studies that have used multivariate dynamic macro-panels to evaluate idiosyncratic business cycles, policy-making, and spillover effects among different sectors and countries.

Econometrics doi: 10.3390/econometrics7010007

Authors: Cheng Hsiao Qi Li Zhongwen Liang Wei Xie

This paper considers methods of estimating a static correlated random coefficient model with panel data. We mainly focus on comparing two approaches to estimating the unconditional mean of the coefficients in correlated random coefficients models: the group mean estimator and the generalized least squares estimator. For the group mean estimator, we show that it asymptotically achieves the Chamberlain (1992) semiparametric efficiency bound. For the generalized least squares estimator, we show that when T is large, a generalized least squares estimator that ignores the correlation between the individual coefficients and regressors is asymptotically equivalent to the group mean estimator. In addition, we give conditions under which the standard within estimator of the mean of the coefficients is consistent. Moreover, with additional assumptions on the known correlation pattern, we derive the asymptotic properties of panel least squares estimators. Simulations are used to examine the finite sample performances of the different estimators.

Econometrics doi: 10.3390/econometrics7010006

Authors: David H. Bernstein Bent Nielsen

We consider cointegration tests in the situation where the cointegration rank is deficient. This situation is of interest in finite sample analysis and in relation to recent work on identification-robust cointegration inference. We derive asymptotic theory for tests for cointegration rank and for hypotheses on the cointegrating vectors. The limiting distributions are tabulated. An application to US Treasury yield series is given.

Econometrics doi: 10.3390/econometrics7010005

Authors: Mardi Dungey Stan Hurn Shuping Shi Vladimir Volkov

Crises in the banking and sovereign debt sectors give rise to heightened financial fragility. Of particular concern is the development of self-fulfilling feedback loops where crisis conditions in one sector are transmitted to the other sector and back again. We use time-varying tests of Granger causality to demonstrate how empirical evidence of connectivity between the banking and sovereign sectors can be detected, and provide an application to the Greek, Irish, Italian, Portuguese and Spanish (GIIPS) countries and Germany over the period 2007 to 2016. While the results provide evidence of domestic feedback loops, the most important finding is that financial fragility is an international problem and cannot be dealt with purely on a country-by-country basis.

Econometrics doi: 10.3390/econometrics7010004

Authors: Arthur Charpentier Ndéné Ka Stéphane Mussard Oumar Hamady Ndiaye

We propose an Aitken estimator for Gini regression. The suggested A-Gini estimator is proven to be a U-statistic. Monte Carlo simulations are provided to deal with heteroskedasticity and to make some comparisons between generalized least squares and the Gini regression. A Gini-White test is proposed and is shown to have better power than the usual White test when outlying observations contaminate the data.

Econometrics doi: 10.3390/econometrics7010003

Authors: Econometrics Editorial Office

Rigorous peer review is the cornerstone of high-quality academic publishing [...]

Econometrics doi: 10.3390/econometrics7010002

Authors: Søren Johansen

A multivariate CVAR(1) model for some observed variables and some unobserved variables is analysed using its infinite order CVAR representation of the observations. Cointegration and adjustment coefficients in the infinite order CVAR are found as functions of the parameters in the CVAR(1) model. Conditions for weak exogeneity for the cointegrating vectors in the approximating finite order CVAR are derived. The results are illustrated by two simple examples of relevance for modelling causal graphs.

Econometrics doi: 10.3390/econometrics7010001

Authors: Tue Gørgens Dean Robert Hyslop

This paper compares two approaches to analyzing longitudinal discrete-time binary outcomes. Dynamic binary response models focus on state occupancy and typically specify low-order Markovian state dependence. Multi-spell duration models focus on transitions between states and typically allow for state-specific duration dependence. We show that the former implicitly impose strong and testable restrictions on the transition probabilities. In a case study of poverty transitions, we show that these restrictions are severely rejected against the more flexible multi-spell duration models.

Econometrics doi: 10.3390/econometrics6040048

Authors: Yukai Yang Luc Bauwens

We develop novel multivariate state-space models wherein the latent states evolve on the Stiefel manifold and follow a conditional matrix Langevin distribution. The latent states correspond to time-varying reduced rank parameter matrices, like the loadings in dynamic factor models and the parameters of cointegrating relations in vector error-correction models. The corresponding nonlinear filtering algorithms are developed and evaluated by means of simulation experiments.

Econometrics doi: 10.3390/econometrics6040047

Authors: Hussein Khraibani Bilal Nehme Olivier Strauss

Value-at-Risk (VaR) has become the most important benchmark for measuring risk in portfolios of different types of financial instruments. However, as reported by many authors, estimating VaR is subject to a high level of uncertainty. One of the sources of uncertainty stems from the dependence of the VaR estimation on the choice of the computation method; as we show in our experiment, the lower the number of samples, the higher this dependence. In this paper, we propose a new nonparametric approach called maxitive kernel estimation of the VaR. This estimation is based on a coherent extension of the kernel-based estimation of the cumulative distribution function to convex sets of kernels. We thus obtain a convex set of VaR estimates gathering all the conventional estimates based on a kernel belonging to the considered convex set. We illustrate this method in an empirical application to daily stock returns. We compare the proposed approach to other parametric and nonparametric approaches. In our experiment, we show that the interval-valued estimate of the VaR we obtain is likely to lead to more careful decisions, i.e., decisions that cannot be biased by an arbitrary choice of the computation method. In fact, the imprecision of the obtained interval-valued estimate is likely to be representative of the uncertainty in the VaR estimate.
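
The conventional kernel-based building block that the maxitive approach extends can be sketched as follows: smooth the empirical CDF with a single Gaussian kernel and invert it at the target level. (The maxitive extension would replace the single kernel with a convex set of kernels, which is not attempted here; data and bandwidth rule are illustrative.)

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
returns = rng.standard_t(df=5, size=500) * 0.01   # placeholder daily returns

def kernel_var(r, alpha=0.05, bw=None):
    """Kernel-smoothed CDF of returns, inverted numerically at level alpha."""
    bw = bw or 1.06 * r.std() * len(r) ** (-1 / 5)  # rule-of-thumb bandwidth
    grid = np.linspace(r.min() - 3 * bw, r.max() + 3 * bw, 2000)
    cdf = norm.cdf((grid[:, None] - r[None, :]) / bw).mean(axis=1)
    return -grid[np.searchsorted(cdf, alpha)]       # VaR reported as a positive loss

print("5% one-day VaR:", round(kernel_var(returns), 4))
```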

Econometrics doi: 10.3390/econometrics6040046

Authors: George Judge

In this paper, we borrow some of the key concepts of nonequilibrium statistical systems to develop a framework for analyzing a self-organizing-optimizing system of independent interacting agents, with nonlinear dynamics at the macro level that is based on stochastic individual behavior at the micro level. We demonstrate the use of entropy-divergence methods and micro income data to evaluate and understand the hidden aspects of stochastic dynamics that drive macroeconomic behavior systems, and discuss how to empirically represent and evaluate their nonequilibrium nature. Empirical applications of the information theoretic family of power divergence measures (entropic functions), interpreted in a probability context with Markov dynamics, are presented.

Econometrics doi: 10.3390/econometrics6040045

Authors: Loann David Denis Desboulets

In this paper, we investigate several variable selection procedures to give an overview of the existing literature for practitioners. “Let the data speak for themselves” has become the motto of many applied researchers since the amount of available data has grown significantly. Automatic model selection has been promoted in the search for data-driven theories for quite a long time now. However, while great extensions have been made on the theoretical side, basic procedures are still used in most empirical work, e.g., stepwise regression. Here, we provide a review of the main methods and state-of-the-art extensions, as well as a typology of them over a wide range of model structures (linear, grouped, additive, partially linear and non-parametric) and available software resources for the implemented methods, so that practitioners can easily access them. We explain which methods to use for different model purposes and describe their key differences. We also review two methods for improving variable selection in the general sense.

Econometrics doi: 10.3390/econometrics6040044

Authors: Christopher L. Skeels Frank Windmeijer

A standard test for weak instruments compares the first-stage F-statistic to a table of critical values obtained by Stock and Yogo (2005) using simulations. We derive a closed-form solution for the expectation from which these critical values are derived, as well as present some second-order asymptotic approximations that may be of value in the presence of multiple endogenous regressors. Inspection of this new result provides insights not available from simulation, and will allow software implementations to be generalised and improved. Finally, we explore the calculation of p-values for the first-stage F-statistic weak instruments test.

Econometrics doi: 10.3390/econometrics6040043

Authors: Jianning Kong Donggyu Sul

This paper provides a new statistical model for repeated voluntary contribution mechanism games. In a repeated public goods experiment, contributions in the first round are cross-sectionally independent simply because subjects are randomly selected. Meanwhile, contributions to a public account over rounds are serially and cross-sectionally correlated. Furthermore, the cross-sectional average of the contributions across subjects usually decreases over rounds. By considering this non-stationary initial condition (the initial contribution has a different distribution from the rest of the contributions), we statistically model the time-varying patterns of the average contribution in repeated public goods experiments and then propose a simple but efficient method to test for treatment effects. The suggested method has good finite sample performance and works well in practice.

Econometrics doi: 10.3390/econometrics6040042

Authors: Martin Biewen Emmanuel Flachaire

It is well known that, after decades of little interest in the theme, economics has experienced a proper surge in inequality research in recent years. [...]

Econometrics doi: 10.3390/econometrics6030041

Authors: Chung Choe Philippe Van Kerm

This paper draws upon influence function regression methods to determine where foreign workers stand in the distribution of private sector wages in Luxembourg, and to assess whether and how much their wages contribute to wage inequality. This is quantified by measuring the effect that a marginal increase in the proportion of foreign workers (foreign residents or cross-border workers) would have on selected quantiles and measures of inequality. Analysis of the 2006 Structure of Earnings Survey reveals that foreign workers generally have lower wages than natives and therefore tend to pull the overall wage distribution downwards. Yet, their influence on wage inequality turns out to be small and negative. All impacts are further muted when accounting for human capital and, especially, job characteristics. Not observing any large positive inequality contribution on the Luxembourg labour market is a striking result given the sheer size of the foreign workforce and its polarization at both ends of the skill distribution.

Econometrics doi: 10.3390/econometrics6030040

Authors: Eric Hillebrand Huiyu Huang Tae-Hwy Lee Canlin Li

In forecasting a variable (forecast target) using many predictors, a factor model with principal components (PC) is often used. When the predictors are the yield curve (a set of many yields), the Nelson–Siegel (NS) factor model is used in place of the PC factors. These PC or NS factors are combining information (CI) in the predictors (yields). However, these CI factors are not “supervised” for a specific forecast target, in that they are constructed using only the predictors and not a particular forecast target. In order to “supervise” factors for a forecast target, we follow Chan et al. (1999) and Stock and Watson (2004) and compute PC or NS factors of many forecasts (not of the predictors), with each of the many forecasts being computed using one predictor at a time. These PC or NS factors of forecasts are combining forecasts (CF). The CF factors are supervised for a specific forecast target. We demonstrate the advantage of the supervised CF factor models over the unsupervised CI factor models via simple numerical examples and Monte Carlo simulation. In out-of-sample forecasting of monthly US output growth and inflation, it is found that the CF factor models outperform the CI factor models, especially at longer forecast horizons.
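
The CI-versus-CF distinction is easy to state in code: extract principal components either from the predictors themselves (CI) or from the panel of one-predictor-at-a-time forecasts of the target (CF). The toy below is an in-sample illustration on simulated data, whereas the paper's evidence is out-of-sample; all dimensions and the data generating process are invented.

```python
import numpy as np

rng = np.random.default_rng(13)
T, N = 200, 12
X = rng.normal(size=(T, N))                      # placeholder predictors (e.g., yields)
y = X[:, :3].sum(axis=1) + rng.normal(size=T)    # forecast target

# CI factors: principal components of the predictors themselves.
Xc = X - X.mean(axis=0)
ci_factors = np.linalg.svd(Xc, full_matrices=False)[0][:, :2]

# CF factors: principal components of one-predictor-at-a-time forecasts,
# so the extracted factors are "supervised" by the target.
fcasts = np.empty((T, N))
for j in range(N):
    Zj = np.column_stack([np.ones(T), X[:, j]])
    fcasts[:, j] = Zj @ np.linalg.lstsq(Zj, y, rcond=None)[0]
Fc = fcasts - fcasts.mean(axis=0)
cf_factors = np.linalg.svd(Fc, full_matrices=False)[0][:, :2]

for name, F in [("CI", ci_factors), ("CF", cf_factors)]:
    Z = np.column_stack([np.ones(T), F])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    print(name, "in-sample MSE:", round(np.mean(resid ** 2), 3))
```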

Econometrics doi: 10.3390/econometrics6030039

Authors: Andreas Hetland

We propose and study the stochastic stationary root (SSR) model. The model resembles the cointegrated VAR model but is novel in that: (i) the stationary relations follow a random coefficient autoregressive process, i.e., exhibit heavy-tailed dynamics, and (ii) the system is observed with measurement error. Unlike the cointegrated VAR model, estimation and inference for the SSR model are complicated by a lack of closed-form expressions for the likelihood function and its derivatives. To overcome this, we introduce particle filter-based approximations of the log-likelihood function, sample score, and observed information matrix. These enable us to approximate the ML estimator via stochastic approximation and to conduct inference via the approximated observed information matrix. We conjecture the asymptotic properties of the ML estimator and conduct a simulation study to investigate the validity of the conjecture. Model diagnostics to assess model fit are considered. Finally, we present an empirical application to the 10-year government bond rates in Germany and Greece during the period from January 1999 to February 2018.

Econometrics doi: 10.3390/econometrics6030038

Authors: In Choi Steve Cook Marc S. Paolella Jeffrey S. Racine

n/a

Econometrics doi: 10.3390/econometrics6030037

Authors: Rachidi Kotchoni

This paper proposes an approach to measuring the extent of nonlinearity of the exposure of a financial asset to a given risk factor. The proposed measure exploits the decomposition of a conditional expectation into its linear and nonlinear components. We illustrate the method with the measurement of the degree of nonlinearity of a European-style option with respect to the underlying asset. Next, we use the method to identify the empirical patterns of the return-risk trade-off on the S&P 500. The results are strongly supportive of a nonlinear relationship between expected return and expected volatility. The data seem to be driven by two regimes: one regime with a positive return-risk trade-off and one with a negative trade-off.

Econometrics doi: 10.3390/econometrics6030036

Authors: Helmut Lütkepohl Aleksei Netšunajev

We use a cointegrated structural vector autoregressive model to investigate the relation between monetary policy in the euro area and the stock market. Since there may be an instantaneous causal relation, we consider long-run identifying restrictions for the structural shocks and also use (conditional) heteroscedasticity in the residuals for identification purposes. Heteroscedasticity is modelled by a Markov-switching mechanism. We find a plausible identification scheme for stock market and monetary policy shocks which is consistent with the second-order moment structure of the variables. The model indicates that contractionary monetary policy shocks lead to a long-lasting downturn of real stock prices.

Econometrics doi: 10.3390/econometrics6030035

Authors: D. Stephen G. Pollock

Econometric analysis requires filtering techniques that are adapted to cater to data sequences that are short and that have strong trends. Whereas economists have tended to conduct their analyses in the time domain, engineers have emphasised the frequency domain. This paper places its emphasis on the frequency domain, and it shows how frequency-domain methods can be adapted to cater to short trended sequences. Working in the frequency domain allows an unrestricted choice to be made of the frequency response of a filter. It also requires that the data should be free of trends. Methods for extracting the trends prior to filtering and for restoring them thereafter are described.

Econometrics doi: 10.3390/econometrics6030034

Authors: Dorota Toczydlowska Gareth W. Peters

A novel class of dimension reduction methods is combined with a stochastic multi-factor panel regression-based state-space model in order to model the dynamics of yield curves whilst incorporating regression factors. This is achieved via Probabilistic Principal Component Analysis (PPCA), for which new statistically robust variants that also treat missing data are derived. We embed the rank-reduced feature extractions into a stochastic representation of state-space models for yield curve dynamics and compare the results to classical multi-factor dynamic Nelson–Siegel state-space models. This leads to important new representations of yield curve models that can be practically important for addressing questions of financial stress testing and monetary policy interventions, and that can efficiently incorporate financial big data. We illustrate our results on various financial and macroeconomic datasets from the Euro Zone and international markets.

Econometrics doi: 10.3390/econometrics6030033

Authors: Hiroshi Yamada Ruixue Du

ℓ1 polynomial trend filtering, a filtering method formulated as an ℓ1-norm penalized least-squares problem, is promising because it enables the estimation of a piecewise polynomial trend in a univariate economic time series without prespecifying the number and location of knots. This paper shows some theoretical results on the filtering, one of which is that a small modification of the filtering provides not only trend estimates identical to those of the original filtering but also extrapolations of the trend beyond both sample limits.
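
For concreteness, ℓ1 (piecewise linear) trend filtering is the convex program min_x ½‖y − x‖² + λ‖Dx‖₁, with D the second-difference matrix. Below is a sketch using the cvxpy modeling library (assumed installed); the data, λ, and knot-detection threshold are illustrative, and knots appear where the penalized second differences are nonzero.

```python
import numpy as np
import cvxpy as cp   # assumes cvxpy is installed

rng = np.random.default_rng(10)
n = 200
trend = np.concatenate([np.linspace(0, 5, 100), np.linspace(5, 2, 100)])
y = trend + rng.normal(0, 0.5, n)

# Second-difference matrix D: penalizing ||D x||_1 yields a piecewise
# linear trend whose knots are chosen automatically by the optimization.
D = np.diff(np.eye(n), n=2, axis=0)

x = cp.Variable(n)
lam = 20.0
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - x) + lam * cp.norm1(D @ x))).solve()

# Approximate knot locations (nonzero second differences up to solver tolerance).
print("estimated knots near indices:", np.where(np.abs(D @ x.value) > 1e-4)[0])
```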

Econometrics doi: 10.3390/econometrics6030032

Authors: John W. Galbraith Douglas J. Hodgson

Statistical methods are widely used for valuation (prediction of the value at sale or auction) of a unique object such as a work of art. The usual approach is estimation of a hedonic model for objects of a given class, such as paintings from a particular school or period, or, in the context of real estate, houses in a neighborhood. Where the object itself has previously been sold, an alternative is to base an estimate on the previous sale price. The combination of these approaches has been employed in real estate price index construction (e.g., Jiang et al. 2015); in the present context, we treat the use of these different sources of information as a forecast combination problem. We first optimize the hedonic model, considering the level of aggregation that is appropriate for pooling observations into a sample, and applying model-averaging methods to estimate predictive models at the individual-artist level. Next, we consider an additional stage in which we incorporate repeat-sale information, in the subset of cases for which this information is available. The methods are applied to a data set of auction prices for Canadian paintings. We compare the out-of-sample predictive accuracy of different methods and find that those that allow us to use single-artist samples produce superior results, that data-driven averaging across predictive models tends to produce clear gains, and that, where available, repeat-sale information appears to yield further improvements in predictive accuracy.

Econometrics doi: 10.3390/econometrics6020031

Authors: Gulasekaran Rajaguru Michael O’Neill Tilak Abeysinghe

In the applied econometric literature, causal inferences are often made based on temporally aggregated or systematically sampled data. A number of studies document that temporal aggregation has distorting effects on causal inference, while systematic sampling of stationary variables preserves the direction of causality. Contrary to the stationary case, this paper shows for the bivariate VAR(1) system that systematic sampling induces spurious bi-directional Granger causality among the variables if the uni-directional causality runs from a non-stationary series to either a stationary or a non-stationary series. An empirical exercise further illustrates the relative usefulness of the results.

Econometrics doi: 10.3390/econometrics6020030

Authors: Vladimir Hlasny Paolo Verme

It is sometimes observed and frequently assumed that top incomes in household surveys worldwide are poorly measured and that this problem biases the measurement of income inequality. This paper tests this assumption and compares the performance of reweighting and replacing methods designed to correct inequality measures for top-income biases generated by data issues such as unit or item non-response. Results for the European Union’s Statistics on Income and Living Conditions survey indicate that survey response probabilities are negatively associated with income and bias the measurement of inequality downward. Correcting for this bias with reweighting, the Gini coefficient for Europe is revised upwards by 3.7 percentage points. Similar results are reached with replacing of top incomes using values from the Pareto distribution when the cut point for the analysis is below the 95th percentile. For higher cut points, results with replacing are inconsistent, suggesting that popular parametric distributions do not mimic real data well at the very top of the income distribution.
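
A minimal sketch of the replacing idea on simulated data: fit a Pareto tail above a cut point with the Hill estimator and replace observed top incomes with draws from the fitted tail before recomputing the Gini coefficient. The cut point, data, and estimator details are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(11)
income = np.exp(rng.normal(10, 0.8, 10000))   # placeholder survey incomes

def gini(x):
    """Gini coefficient from sorted incomes."""
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

# Replace incomes above a cut point with draws from a Pareto tail whose
# shape is estimated by the Hill estimator on the observed top incomes.
cut = np.quantile(income, 0.95)
top = income[income > cut]
alpha_hill = len(top) / np.sum(np.log(top / cut))
replaced = income.copy()
replaced[income > cut] = cut * rng.uniform(size=len(top)) ** (-1 / alpha_hill)

print(f"Gini before: {gini(income):.4f}, after Pareto replacing: {gini(replaced):.4f}")
```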

Econometrics doi: 10.3390/econometrics6020029

Authors: Tareq Sadeq Michel Lubrano

In 2002, the Israeli government decided to build a wall inside the occupied West Bank. The wall had a marked effect on the access to land and water resources as well as to the Israeli labour market. It is difficult to include the effect of the wall in an econometric model explaining poverty dynamics as the wall was built in the richer region of the West Bank. So a diff-in-diff strategy is needed. Using a Bayesian approach, we treat our two-period repeated cross-section data set as an incomplete data problem, explaining the income-to-needs ratio as a function of time invariant exogenous variables. This allows us to provide inference results on poverty dynamics. We then build a conditional regression model including a wall variable and state dependence to see how the wall modified the initial results on poverty dynamics. We find that the wall has increased the probability of poverty persistence by 58 percentage points and the probability of poverty entry by 18 percentage points.

]]>Econometrics doi: 10.3390/econometrics6020028

Authors: Sergio P. Firpo Nicole M. Fortin Thomas Lemieux

This paper provides a detailed exposition of an extension of the Oaxaca-Blinder decomposition method that can be applied to various distributional measures. The two-stage procedure first divides distributional changes into a wage structure effect and a composition effect using a reweighting method. Second, the two components are further divided into the contribution of each explanatory variable using recentered influence function (RIF) regressions. We illustrate the practical aspects of the procedure by analyzing how the polarization of U.S. male wages between the late 1980s and the mid 2010s was affected by factors such as de-unionization, education, occupations, and industry changes.
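
The second stage lends itself to a compact illustration. The sketch below (simulated data; the covariates and the quantile are assumptions) computes the RIF of the tau-th quantile, RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau})/f(q_tau), and regresses it on covariates by OLS:

```python
# Minimal sketch of a RIF regression for one quantile; data are simulated.
import numpy as np
import statsmodels.api as sm
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
n, tau = 5000, 0.9
union = rng.binomial(1, 0.3, n)                 # illustrative covariates
educ = rng.normal(13, 2, n)
logw = 1.0 + 0.15 * educ + 0.2 * union + rng.normal(0, 0.5, n)

q = np.quantile(logw, tau)
f_q = gaussian_kde(logw)(q)[0]                  # density estimate at the quantile
rif = q + (tau - (logw <= q)) / f_q             # recentered influence function

X = sm.add_constant(np.column_stack([educ, union]))
print(sm.OLS(rif, X).fit().params)              # effects of covariates on q_tau
```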

]]>Econometrics doi: 10.3390/econometrics6020027

Authors: Alaa Abi Morshed Elena Andreou Otilia Boldea

Structural break tests for regression models are sensitive to model misspecification. We show, analytically and through simulations, that the sup Wald test for breaks in the conditional mean and variance of a time series process exhibits severe size distortions when the conditional mean dynamics are misspecified. We also show that the sup Wald test for breaks in the unconditional mean and variance does not have the same size distortions, yet has power similar to its conditional counterpart in correctly specified models. Hence, we propose using it as an alternative and complementary test for breaks. We apply the unconditional and conditional mean and variance tests to three US series: unemployment, industrial production growth and interest rates. Both the unconditional and the conditional mean tests detect a break in the mean of interest rates. However, for the other two series, the unconditional mean test does not detect a break, while the conditional mean tests based on dynamic regression models occasionally detect a break, with the implied break-point estimator varying across different dynamic specifications. For all series, the unconditional variance test does not detect a break, while most tests for the conditional variance do, with the detected break varying across specifications.
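
A bare-bones version of the unconditional mean test is easy to write down. The sketch below is ours: it uses a simple i.i.d. variance estimate and 15% trimming, whereas a serious implementation would require HAC-type corrections. It scans candidate break dates and reports the sup Wald statistic and the implied break point:

```python
# Minimal sup Wald scan for one break in the unconditional mean (i.i.d. errors assumed).
import numpy as np

def sup_wald_mean(y, trim=0.15):
    n = y.size
    stats = []
    for b in range(int(trim * n), int((1 - trim) * n)):
        y1, y2 = y[:b], y[b:]
        s2 = y1.var(ddof=1) / y1.size + y2.var(ddof=1) / y2.size
        stats.append(((y1.mean() - y2.mean()) ** 2 / s2, b))
    return max(stats)                    # (sup Wald statistic, break point)

rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0, 1, 300), rng.normal(0.8, 1, 300)])  # mean break
print(sup_wald_mean(y))
```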

]]>Econometrics doi: 10.3390/econometrics6020026

Authors: Bruce E. Hansen

The generalized method of moments (GMM) estimator of the reduced-rank regression model is derived under the assumption of conditional homoscedasticity. It is shown that this GMM estimator is algebraically identical to the maximum likelihood estimator under normality developed by Johansen (1988). This includes the vector error correction model (VECM) of Engle and Granger. It is also shown that GMM tests for reduced rank (cointegration) are algebraically similar to the Gaussian likelihood ratio tests. This shows that normality is not necessary to motivate these estimators and tests.
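
The estimator itself reduces to a generalized eigenvalue problem. The following sketch (our illustration on simulated data with one cointegrating relation, using a VAR(1) with no deterministic terms, which is a simplification) sets up the moment matrices and computes the eigenvalues and trace statistics:

```python
# Minimal sketch of the reduced-rank (Johansen-type) eigenvalue problem.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
T = 400
y1 = np.cumsum(rng.normal(size=T))       # random walk
y2 = y1 + rng.normal(size=T)             # cointegrated with y1
Y = np.column_stack([y1, y2])

dY = np.diff(Y, axis=0)                  # Delta y_t
L = Y[:-1]                               # y_{t-1}
S00 = dY.T @ dY / (T - 1)
S01 = dY.T @ L / (T - 1)
S11 = L.T @ L / (T - 1)

# Solve |lambda * S11 - S10 S00^{-1} S01| = 0 (S10 = S01')
M = S01.T @ np.linalg.inv(S00) @ S01
eigvals = eigh(M, S11, eigvals_only=True)[::-1]      # descending
logs = np.log(1 - eigvals)
trace = [-(T - 1) * logs[r:].sum() for r in range(len(eigvals))]
print("eigenvalues:", eigvals.round(4))
print("trace statistics for rank <= 0, 1:", np.round(trace, 2))
```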

]]>Econometrics doi: 10.3390/econometrics6020025

Authors: Gilles Dufrénot Fredj Jawadi Alexander Mihailov

Macro-econometrics has been evolving since the aftermath of the Second World War. [...]

]]>Econometrics doi: 10.3390/econometrics6020024

Authors: El Moctar Laghlal Abdoul Aziz Junior Ndoye

In this study, we provide a Bayesian estimation method for the unconditional quantile regression model based on the Re-centered Influence Function (RIF). The method makes use of the dichotomous structure of the RIF and estimates a non-linear probability model by a logistic regression using a Gibbs-within-Metropolis-Hastings sampler. This approach performs better in the presence of heavy-tailed distributions. Applied to a nationally representative household survey, the Senegal Poverty Monitoring Report (2005), the results show that the change in the rate of returns to education across quantiles is substantially lower at the primary level.

]]>Econometrics doi: 10.3390/econometrics6020023

Authors: Mawuli Segnon Stelios Bekiros Bernd Wilfling

There is substantial evidence that inflation rates are characterized by long memory and nonlinearities. In this paper, we introduce a long-memory Smooth Transition AutoRegressive Fractionally Integrated Moving Average-Markov Switching Multifractal specification [STARFIMA(p,d,q)-MSM(k)] for modeling and forecasting inflation uncertainty. We first provide the statistical properties of the process and investigate the finite sample properties of the maximum likelihood estimators through simulation. Second, we evaluate the out-of-sample forecast performance of the model in forecasting inflation uncertainty in the G7 countries. Our empirical analysis demonstrates the superiority of the new model over the alternative STARFIMA(p,d,q)-GARCH-type models in forecasting inflation uncertainty.

]]>Econometrics doi: 10.3390/econometrics6020022

Authors: Stéphane Guerrier Samuel Orso Maria-Pia Victoria-Feser

In this paper, we study the finite sample accuracy of confidence intervals for index functionals built via the parametric bootstrap, in the case of inequality indices. To estimate the parameters of the assumed parametric data generating distribution, we propose a Generalized Method of Moments estimator that targets the quantity of interest, namely the considered inequality index. Its primary advantage is that the scale parameter does not need to be estimated to perform the parametric bootstrap, since inequality measures are scale invariant. The very good finite sample coverages found in a simulation study suggest that this feature provides an advantage over the parametric bootstrap using the maximum likelihood estimator. We also find that, overall, the parametric bootstrap provides more accurate inference than its non- or semi-parametric counterparts, especially for heavy-tailed income distributions.
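
The scale-invariance point can be made concrete. In the sketch below (assumed gamma incomes; a simple method-of-moments fit stands in for the authors' index-targeted GMM), the Gini of a gamma law depends only on the shape parameter, so the parametric bootstrap never needs the scale:

```python
# Minimal sketch of a parametric bootstrap CI for the Gini under gamma incomes.
import numpy as np
from scipy.special import gamma as G

def gamma_gini(k):                      # Gini of a gamma(k) law: scale-free
    return G(k + 0.5) / (G(k + 1) * np.sqrt(np.pi))

rng = np.random.default_rng(5)
y = rng.gamma(shape=2.0, scale=1500.0, size=800)     # stand-in income sample

k_hat = y.mean() ** 2 / y.var(ddof=1)   # method-of-moments shape estimate
boot = []
for _ in range(2000):                   # parametric bootstrap: scale irrelevant
    yb = rng.gamma(shape=k_hat, size=y.size)
    boot.append(gamma_gini(yb.mean() ** 2 / yb.var(ddof=1)))
print(f"Gini estimate {gamma_gini(k_hat):.3f}, "
      f"95% CI {np.percentile(boot, [2.5, 97.5]).round(3)}")
```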

]]>Econometrics doi: 10.3390/econometrics6020021

Authors: Duangkamon Chotikapanich William E. Griffiths Gholamreza Hajargasht Wasana Karunarathne D. S. Prasada Rao

To use the generalized beta distribution of the second kind (GB2) for the analysis of income and other positively skewed distributions, knowledge of estimation methods and the ability to compute quantities of interest from the estimated parameters are required. We review estimation methodology that has appeared in the literature, and summarize expressions for inequality, poverty, and pro-poor growth that can be used to compute these measures from GB2 parameter estimates. An application to data from China and Indonesia is provided.
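
As a starting point for such computations, the sketch below (illustrative parameter values and simulated data) writes down the GB2 log-density, f(y) = a y^(ap-1) / (b^(ap) B(p,q) (1 + (y/b)^a)^(p+q)), and fits the four parameters by direct numerical maximum likelihood:

```python
# Minimal sketch of GB2(a, b, p, q) maximum likelihood on simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

def gb2_negloglik(theta, y):
    a, b, p, q = np.exp(theta)          # log-parameterisation keeps all > 0
    z = (y / b) ** a
    ll = (np.log(a) + (a * p - 1) * np.log(y) - a * p * np.log(b)
          - betaln(p, q) - (p + q) * np.log1p(z))
    return -ll.sum()

rng = np.random.default_rng(6)
# GB2 variates: b * (X / (1 - X))^(1/a) with X ~ Beta(p, q)
a0, b0, p0, q0 = 2.0, 10000.0, 1.5, 1.2
x = rng.beta(p0, q0, size=3000)
y = b0 * (x / (1 - x)) ** (1 / a0)

res = minimize(gb2_negloglik, x0=np.log([1.0, np.median(y), 1.0, 1.0]),
               args=(y,), method="Nelder-Mead", options={"maxiter": 5000})
print("MLE (a, b, p, q):", np.exp(res.x).round(3))
```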

]]>Econometrics doi: 10.3390/econometrics6020020

Authors: Dirk Antonczyk Thomas DeLeire Bernd Fitzenberger

Since the late 1970s, wage inequality has increased strongly in both the U.S. and Germany, but the trends have been different. Wage inequality increased along the entire wage distribution during the 1980s in the U.S. and since the mid-1990s in Germany. There is evidence for wage polarization in the U.S. in the 1990s, and the increase in wage inequality in Germany was restricted to the top of the distribution before the 1990s. Using an approach developed by MaCurdy and Mroz (1995) to separate age, time, and cohort effects, we find a large role played by cohort effects in Germany, while we find only small cohort effects in the U.S. Employment trends in both countries are consistent with polarization since the 1990s. The evidence is consistent with a technology-driven polarization of the labor market, but this cannot explain the country-specific differences.

]]>Econometrics doi: 10.3390/econometrics6020019

Authors: Giovanni Forchini Bin Jiang Bin Peng

The properties of the two stage least squares (TSLS) and limited information maximum likelihood (LIML) estimators in panel data models where the observables are affected by common shocks, modelled through unobservable factors, are studied for the case where the time series dimension is fixed. We show that the key assumption in determining the consistency of the panel TSLS and LIML estimators, as the cross section dimension tends to infinity, is the lack of correlation between the factor loadings in the errors and in the exogenous variables (including the instruments), conditional on the common shocks. If this condition fails, both estimators have degenerate distributions. When the panel TSLS and LIML estimators are consistent, they have covariance-matrix mixed-normal distributions asymptotically. Tests on the coefficients can be constructed in the usual way and have standard distributions under the null hypothesis.

]]>Econometrics doi: 10.3390/econometrics6020018

Authors: Giovanni M. Giorgi Alessio Guandalini

Additive decomposability is an interesting feature of inequality indices which, however, is not always fulfilled; solutions to overcome such an issue have been given by Deutsch and Silber (2007) and by Di Maio and Landoni (2017). In this paper, we apply these methods, based on the “Shapley value” and the “balance of inequality” respectively, to the Bonferroni inequality index. We also discuss a comparison with the Gini concentration index and highlight interesting properties of the Bonferroni index.
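
For reference, the Bonferroni index is simple to compute directly. The sketch below (simulated lognormal incomes) implements B = 1 - (1/(n-1)) * sum over i of (mean of the poorest i observations) / (overall mean), alongside the Gini for comparison:

```python
# Minimal sketch: Bonferroni vs. Gini index on simulated incomes.
import numpy as np

def bonferroni(y):
    y = np.sort(y)
    n = y.size
    partial_means = np.cumsum(y)[: n - 1] / np.arange(1, n)  # means of poorest i
    return 1 - partial_means.mean() / y.mean()

def gini(y):
    y = np.sort(y)
    n = y.size
    return (2 * np.arange(1, n + 1) - n - 1) @ y / (n * y.sum())

y = np.random.default_rng(7).lognormal(10, 0.6, 10000)
print(f"Bonferroni: {bonferroni(y):.4f}, Gini: {gini(y):.4f}")  # B >= G
```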

]]>Econometrics doi: 10.3390/econometrics6020017

Authors: Elena Bárcena-Martín Jacques Silber

This paper proposes a simple algorithm based on a matrix formulation to compute the Esteban and Ray (ER) polarization index. It then shows how the algorithm introduced leads to quite a simple decomposition of polarization by income sources. Such a breakdown was not available hitherto. The decomposition we propose will thus allow one to determine the sign, as well as the magnitude, of the impact of the various income sources on the ER polarization index. A simple empirical illustration based on EU data is provided.
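
The matrix formulation makes the index a one-line computation. The sketch below (illustrative group shares and means; alpha = 1 is an assumption) evaluates P = sum over i, j of pi_i^(1+alpha) pi_j |mu_i - mu_j| as a quadratic form:

```python
# Minimal sketch of the Esteban-Ray polarization index in matrix form.
import numpy as np

def esteban_ray(pi, mu, alpha=1.0):
    pi, mu = np.asarray(pi, float), np.asarray(mu, float)
    dist = np.abs(mu[:, None] - mu[None, :])     # |mu_i - mu_j| matrix
    return (pi ** (1 + alpha)) @ dist @ pi

pi = [0.3, 0.5, 0.2]                             # group population shares
mu = [900.0, 2000.0, 5200.0]                     # group mean incomes
print(f"ER polarization (alpha = 1): {esteban_ray(pi, mu):.2f}")
```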

]]>Econometrics doi: 10.3390/econometrics6020016

Authors: Robert Davies George Tauchen

This paper develops a method to select the threshold in threshold-based jump detection methods. The method is motivated by an analysis of threshold-based jump detection methods in the context of jump-diffusion models. We show that, over the range of sampling frequencies a researcher is most likely to encounter, the usual in-fill asymptotics provide a poor guide for selecting the jump threshold. Because of this, we develop a sample-based method. Our method estimates the number of jumps over a grid of thresholds and selects the optimal threshold at what we term the ‘take-off’ point in the estimated number of jumps. We show that this method consistently estimates the jumps and their indices as the sampling interval goes to zero. In several Monte Carlo studies, we evaluate the performance of our method based on its ability to accurately locate jumps and its ability to distinguish between true jumps and large diffusive moves. In one of these Monte Carlo studies, we evaluate the performance of our method in a jump regression context. Finally, we apply our method in two empirical studies. In one, we estimate the number of jumps and report the jump threshold our method selects for three commonly used market indices. In the other empirical application, we perform a series of jump regressions using our method to select the jump threshold.
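
The flavour of the procedure can be conveyed with a toy version. The sketch below is ours: the simulated price process, the threshold grid and the crude "first sharp increase" rule are stand-ins for the authors' take-off criterion, not their method:

```python
# Toy sketch: count threshold exceedances over a grid and locate a 'take-off'.
import numpy as np

rng = np.random.default_rng(8)
n = 23400                                       # one day of 1-second returns
ret = 0.0001 * rng.normal(size=n)               # diffusive component
jump_idx = rng.choice(n, size=5, replace=False)
ret[jump_idx] += 0.004 * rng.choice([-1, 1], size=5)   # a few genuine jumps

grid = np.linspace(6, 1, 60) * ret.std()        # thresholds, strict to loose
counts = np.array([(np.abs(ret) > u).sum() for u in grid])
takeoff = np.argmax(np.diff(counts) > 5)        # first sharp increase (crude rule)
print(f"selected threshold: {grid[takeoff]:.5f}, jumps flagged: {counts[takeoff]}")
```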

]]>Econometrics doi: 10.3390/econometrics6020015

Authors: Gordon Anderson Maria Pittau Roberto Zelli Jasmin Thomas

The cohesiveness of constituent nations in a confederation such as the Eurozone depends on their sharing experiences equally. In terms of household incomes, what matters is the commonality of each constituent nation's distribution with that of the Eurozone as an entity in itself. Generally, income classification has proceeded by employing “hard”, somewhat arbitrary and contentious boundaries. Here, in an analysis of Eurozone household income distributions over the period 2006–2015, mixture distribution techniques are used to determine the number and size of groups or classes endogenously, without resort to such hard boundaries. In so doing, some new indices of polarization, segmentation and commonality of distribution are developed in the context of a decomposition of the Gini coefficient, and the roles of, and relationships between, these groups in societal income inequality, poverty, polarization and societal segmentation are examined. What emerges for the Eurozone as an entity is a four-class, increasingly unequal, polarizing structure with income growth in all four classes. With regard to the class membership of individual constituent nations, some advanced and some fell back, with most exhibiting significant polarizing behaviour. However, in the face of increasing overall Eurozone inequality, constituent nations were becoming increasingly similar in distribution, which can be construed as characteristic of a more cohesive society.

]]>Econometrics doi: 10.3390/econometrics6010014

Authors: Russell Davidson

Conventional wisdom says that the middle classes in many developed countries have recently suffered losses, in terms of both the share of the total population belonging to the middle class and their share in total income. Here, distribution-free methods are developed for inference on these shares, by deriving expressions for the asymptotic variances of their sample estimates and for the covariance between the estimates. Asymptotic inference can be undertaken based on asymptotic normality. Bootstrap inference can be expected to be more reliable, and appropriate bootstrap procedures are proposed. As an illustration, samples of individual earnings drawn from Canadian census data are used to test various hypotheses about the middle-class shares, and confidence intervals for them are computed. It is found that, for the earlier censuses, sample sizes are large enough for asymptotic and bootstrap inference to be almost identical, but that, in the twenty-first century, the bootstrap fails on account of a strange phenomenon whereby many presumably different incomes in the data are rounded to one and the same value. Another difference between the centuries is the appearance of heavy right-hand tails in the income distributions of both men and women.
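
A nonparametric bootstrap version of the exercise is straightforward to sketch. Below, the middle class is defined, purely for illustration, as incomes between 75% and 125% of the median, and percentile intervals are formed for both shares on simulated data:

```python
# Minimal bootstrap sketch for middle-class population and income shares.
import numpy as np

def shares(y):
    med = np.median(y)
    middle = (y >= 0.75 * med) & (y <= 1.25 * med)   # illustrative definition
    return middle.mean(), y[middle].sum() / y.sum()  # population, income shares

rng = np.random.default_rng(9)
y = rng.lognormal(10.5, 0.7, 5000)                   # stand-in earnings sample
pop, inc = shares(y)

boot = np.array([shares(rng.choice(y, y.size)) for _ in range(2000)])
ci = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"population share {pop:.3f}, 95% CI [{ci[0, 0]:.3f}, {ci[1, 0]:.3f}]")
print(f"income share     {inc:.3f}, 95% CI [{ci[0, 1]:.3f}, {ci[1, 1]:.3f}]")
```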

]]>Econometrics doi: 10.3390/econometrics6010013

Authors: Marie Busch Philipp Sibbertsen

Several modified estimation methods of the memory parameter have been introduced in recent years. They aim to decrease the upward bias of memory parameter estimates in cases of low-frequency contaminations or an additive noise component, especially in situations where a short-memory process is contaminated. In this paper, we provide an overview and compare the performance of nine semiparametric estimation methods. Among them are two standard methods, four modified approaches to account for low-frequency contaminations and three procedures developed for perturbed fractional processes. We conduct an extensive Monte Carlo study for a variety of parameter constellations and several DGPs. Furthermore, an empirical application to the log-absolute return series of the S&P 500 shows that the estimation results, combined with a long-memory test, indicate a spurious long-memory process.
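
One of the standard (unmodified) methods, the GPH log-periodogram estimator, fits in a few lines. The sketch below (our simulation, not the paper's design) also shows how low-frequency level shifts push the estimate of d upward for a true short-memory series:

```python
# Minimal GPH sketch: regress log I(lambda_j) on log(4 sin^2(lambda_j / 2)).
import numpy as np

def gph(x, m=None):
    n = x.size
    m = m or int(np.sqrt(n))                      # common bandwidth choice
    freq = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1 : m + 1]) ** 2 / (2 * np.pi * n)
    reg = np.log(4 * np.sin(freq / 2) ** 2)
    return -np.polyfit(reg, np.log(I), 1)[0]      # slope = -d

rng = np.random.default_rng(10)
n = 2048
ar = np.zeros(n)
for t in range(1, n):                             # short memory: true d = 0
    ar[t] = 0.4 * ar[t - 1] + rng.normal()
shift = np.repeat(rng.normal(0, 2, 4), n // 4)    # low-frequency level shifts
print(f"d, clean AR(1):  {gph(ar):.3f}")          # near 0
print(f"d, contaminated: {gph(ar + shift):.3f}")  # spuriously large
```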

]]>Econometrics doi: 10.3390/econometrics6010012

Authors: Maria Felice Arezzo Giuseppina Guagnano

Sample selection models attempt to correct for non-randomly selected data in a two-model hierarchy where, on the first level, a binary selection equation determines whether a particular observation will be available for the second level (outcome equation). If the non-random selection mechanism induced by the selection equation is ignored, the coefficient estimates in the outcome equation may be severely biased. When the selection mechanism leads to many censored observations, few data are available for the estimation of the outcome equation parameters, giving rise to computational difficulties. In this context, the main reference is Greene (2008), who extends the results obtained by Manski and Lerman (1977) and develops an estimator which requires knowledge of the true proportion of occurrences in the outcome equation. We develop a method that relaxes this assumption by exploiting the advantages of response-based sampling schemes in the context of binary response models with sample selection. Estimation is based on a weighted version of Heckman’s likelihood, where the weights take into account the sampling design. In a simulation study, we find that, for the outcome equation, the results obtained with our estimator are comparable to Greene’s in terms of mean square error. Moreover, in a real data application, it is preferable in terms of the percentage of correct predictions.

]]>Econometrics doi: 10.3390/econometrics6010011

Authors: Marcus J. Chambers Maria Kyriacou

This paper considers the specification and performance of jackknife estimators of the autoregressive coefficient in a model with a near-unit root. The limit distributions of the sub-sample estimators that are used in the construction of the jackknife estimator are derived, and the joint moment generating function (MGF) of two components of these distributions is obtained and its properties explored. The MGF can be used to derive the weights for an optimal jackknife estimator that fully removes the first-order finite sample bias from the estimator. The resulting jackknife estimator is shown to perform well in finite samples and, with a suitable choice of the number of sub-samples, is shown to reduce the overall finite sample root mean squared error as well as the bias. However, the optimal jackknife weights rely on knowledge of the near-unit root parameter and of a quantity related to the long-run variance of the disturbance process, both of which are typically unknown in practice; this dependence is therefore characterised fully, and the issues that arise in practice in the most general settings are discussed.
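
The basic construction is easy to sketch. The code below implements the standard equal-weight sub-sample jackknife with m non-overlapping blocks; the paper instead derives optimal, parameter-dependent weights for the near-unit-root case:

```python
# Minimal sketch of a sub-sample jackknife for the AR(1) coefficient.
import numpy as np

def ar1_ols(y):
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def jackknife_ar1(y, m=2):
    blocks = np.array_split(y, m)                 # m non-overlapping sub-samples
    sub = np.mean([ar1_ols(b) for b in blocks])
    return (m * ar1_ols(y) - sub) / (m - 1)       # equal-weight jackknife

rng = np.random.default_rng(11)
n, rho = 200, 0.95                                # near-unit root
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + rng.normal()
print(f"OLS: {ar1_ols(y):.4f}, jackknife: {jackknife_ar1(y):.4f}, true: {rho}")
```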

]]>Econometrics doi: 10.3390/econometrics6010010

Authors: Christian Schluter

In economics, rank-size regressions provide popular estimators of tail exponents of heavy-tailed distributions. We discuss the properties of this approach when the tail of the distribution is regularly varying rather than strictly Pareto. The estimator then over-estimates the true value in the leading parametric income models (so the upper income tail is less heavy than estimated), which leads to test size distortions and undermines inference. For practical work, we propose a sensitivity analysis based on regression diagnostics in order to assess the likely impact of the distortion. The methods are illustrated using data on top incomes in the UK.
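
The estimator in question is the log-log rank-size regression, sketched below with the familiar (rank - 1/2) shift of Gabaix and Ibragimov. The second example, a lognormal sample, illustrates (in our simulation, not the paper's analysis) how a non-Pareto tail yields a misleading exponent:

```python
# Minimal rank-size sketch: regress log(rank - 1/2) on log(size); slope = -alpha.
import numpy as np

def rank_size_alpha(y, k=500):
    top = np.sort(y)[-k:][::-1]                  # k largest, descending
    rank = np.arange(1, k + 1)
    return -np.polyfit(np.log(top), np.log(rank - 0.5), 1)[0]

rng = np.random.default_rng(12)
pareto = (1 - rng.uniform(size=100000)) ** (-1 / 2.0)   # exact Pareto, alpha = 2
lognorm = rng.lognormal(0, 1, 100000)                   # not Pareto in the tail
print(f"alpha-hat, exact Pareto: {rank_size_alpha(pareto):.3f}")
print(f"'alpha-hat', lognormal:  {rank_size_alpha(lognorm):.3f}")
```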

]]>Econometrics doi: 10.3390/econometrics6010009

Authors: Rodolfo Metulini Roberto Patuelli Daniel Griffith

Nonlinear estimation of the gravity model with Poisson-type regression methods has become popular for modelling international trade flows, because it permits a better accounting of zero flows and extreme values in the distribution tail. Nevertheless, as trade flows are not independent of each other due to spatial and network autocorrelation, these methods may lead to biased parameter estimates. To overcome this problem, eigenvector spatial filtering (ESF) variants of the Poisson/negative binomial specifications have been proposed in the literature on gravity modelling of trade. However, no specific treatment has been developed for cases in which many zero flows are present. This paper contributes to the literature in two ways. First, it employs a stepwise selection criterion for spatial filters that is based on robust (sandwich) p-values and does not require likelihood-based indicators; to this end, we develop an ad hoc backward stepwise function in R. Second, using this function, we select a reduced set of spatial filters that properly accounts for importer-side and exporter-side specific spatial effects, as well as network effects, in both the count and the logit processes of zero-inflated methods. Applying this estimation strategy to a cross-section of bilateral trade flows between 64 countries for the year 2000, we find that our specification outperforms the benchmark models in terms of model fit, both on the AIC and in predicting zero (and small) flows.

]]>Econometrics doi: 10.3390/econometrics6010008

Authors: Fei Jin Lung-fei Lee

A parametric model whose information matrix is singular at a certain true value of the parameter vector is irregular. The maximum likelihood estimator in the irregular case usually has a rate of convergence slower than the usual √n-rate of the regular case. We propose to estimate such models by adaptive lasso maximum likelihood and propose an information criterion to select the involved tuning parameter. We show that the penalized maximum likelihood estimator has the oracle properties. The method can implement model selection and estimation simultaneously, and the estimator always has the usual √n-rate of convergence.

]]>Econometrics doi: 10.3390/econometrics6010007

Authors: Ralf Becker Adam Clements Robert O'Neill

This paper introduces a multivariate kernel-based forecasting tool for the prediction of variance-covariance matrices of stock returns. The method allows macroeconomic variables to be incorporated into the forecasting process without resorting to a decomposition of the matrix. The model makes use of similarity forecasting techniques, and it is demonstrated that several popular techniques can be viewed as special cases of this approach. A forecasting experiment demonstrates the potential for the technique to improve the statistical accuracy of forecasts of variance-covariance matrices.
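
The core idea, forecasting as a similarity-weighted average of past realized covariance matrices, can be sketched directly; all data and the Gaussian kernel bandwidth below are illustrative assumptions:

```python
# Minimal similarity-forecasting sketch for a covariance matrix.
import numpy as np

rng = np.random.default_rng(13)
T, N = 500, 3
rcov = np.array([np.cov(rng.normal(size=(N, 78))) for _ in range(T)])  # past realized covs
macro = rng.normal(size=(T, 2))              # past macro states (e.g., growth, rates)
today = rng.normal(size=2)                   # current macro state

h = 0.5                                      # kernel bandwidth (assumed)
dist2 = ((macro - today) ** 2).sum(axis=1)
w = np.exp(-dist2 / (2 * h ** 2))
w /= w.sum()                                 # kernel similarity weights
forecast = (w[:, None, None] * rcov).sum(axis=0)   # convex combination stays PSD
print(np.round(forecast, 4))
```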

]]>Econometrics doi: 10.3390/econometrics6010006

Authors: Francesca Rondina

This paper uses an econometric model and Bayesian estimation to reverse engineer the path of inflation expectations implied by the New Keynesian Phillips Curve and the data. The estimated expectations roughly track the patterns of a number of common measures of expected inflation available from surveys or computed from financial data. In particular, they exhibit the strongest correlation with the inflation forecasts of the respondents in the University of Michigan Survey of Consumers. The estimated model also shows evidence of the anchoring of long run inflation expectations to a value that is in the range of the target inflation rate.

]]>Econometrics doi: 10.3390/econometrics6010005

Authors: Paola Cerchiello Giancarlo Nicola

The analysis of news in the financial context has gained prominence in recent years, owing to the possible predictive power of such content, especially in terms of the associated sentiment or mood. In this paper, we focus on a specific aspect of financial news analysis: how the covered topics change across space and time. To this purpose, we employ a modified version of the LDA topic model, the so-called Structural Topic Model (STM), which also takes covariates into account. Our aim is to study the evolution of topics extracted from two well-known news archives, Reuters and Bloomberg, and to investigate a causal effect in the diffusion of the news by means of a Granger causality test. Our results show that both the temporal dynamics and the spatial differentiation matter in news contagion.

]]>Econometrics doi: 10.3390/econometrics6010004

Authors: Francesca Greselin Ričardas Zitikis

The underlying idea behind the construction of indices of economic inequality is based on measuring deviations of various portions of low incomes from certain references or benchmarks, which could be point measures like the population mean or median, or curves like the hypotenuse of the right triangle into which every Lorenz curve falls. In this paper, we argue that, by appropriately choosing population-based references (called societal references) and distributions of personal positions (called gambles, which are random), we can meaningfully unify classical and contemporary indices of economic inequality, and various measures of risk. To illustrate the herein proposed approach, we put forward and explore a risk measure that takes into account the relativity of large risks with respect to small ones.

]]>Econometrics doi: 10.3390/econometrics6010003

Authors: Aurelio Bariviera Angelo Plastino George Judge

This paper offers a general and comprehensive definition of the day-of-the-week effect. Using symbolic dynamics, we develop a unique test based on ordinal patterns in order to detect it. This test uncovers the fact that the so-called “day-of-the-week” effect is partly an artifact of the hidden correlation structure of the data. We also present simulations based on artificial time series: while time series generated with long memory are prone to exhibit daily seasonality, pure white noise signals exhibit no pattern preference. Since ours is a non-parametric test, it requires no assumptions about the distribution of returns, so it could be a practical alternative to conventional econometric tests. We also apply the proposed technique exhaustively to 83 stock indices around the world. Finally, the paper highlights the relevance of symbolic analysis in economic time series studies.
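
The ingredients of an ordinal-pattern analysis are easy to demonstrate. The sketch below is a toy version on white noise, with non-overlapping windows of five observations standing in for trading weeks; under i.i.d. noise, all 5! patterns should be roughly equally likely:

```python
# Toy ordinal-pattern frequency check on simulated white-noise returns.
import numpy as np
from collections import Counter

rng = np.random.default_rng(14)
ret = rng.normal(size=5000)                     # pure white noise returns

patterns = Counter(
    tuple(np.argsort(ret[i : i + 5]))           # permutation ordering the window
    for i in range(0, ret.size - 5, 5)          # non-overlapping 5-day windows
)
expected = (ret.size // 5) / 120                # uniform benchmark over 5! patterns
print(f"expected count per pattern ~ {expected:.1f}; "
      f"most frequent: {patterns.most_common(3)}")
```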

]]>Econometrics doi: 10.3390/econometrics6010002

Authors: Econometrics Editorial Office

Peer review is an essential part of the publication process, ensuring that Econometrics maintains high quality standards for its published papers. In 2017, a total of 47 papers were published in the journal. [...]

]]>Econometrics doi: 10.3390/econometrics6010001

Authors: Katarina Juselius

n/a

]]>Econometrics doi: 10.3390/econometrics5040054

Authors: Yoontae Jeon Thomas McCurdy

Forecasting correlations between stocks and commodities is important for diversification across asset classes and other risk management decisions. Correlation forecasts are affected by model uncertainty, the sources of which can include uncertainty about changing fundamentals and associated parameters (model instability), structural breaks and nonlinearities due, for example, to regime switching. We use approaches that weight historical data according to their predictive content. Specifically, we estimate two alternative models, ‘time-varying weights’ and ‘time-varying window’, in order to maximize the value of past data for forecasting. Our empirical analyses reveal that these approaches provide superior forecasts to several benchmark models for forecasting correlations.

]]>Econometrics doi: 10.3390/econometrics5040053

Authors: Tristan Skolrud

The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms.
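
To make the construction concrete, the sketch below (our illustration; the truncation order, Box-Cox parameter and rescaling are assumptions) builds the design matrix for a single regressor: a second-order expansion in the Box-Cox-transformed variable with appended sine and cosine terms:

```python
# Minimal sketch of Fourier Flexible form regressors with a Box-Cox transform.
import numpy as np

def boxcox(x, lam):
    return np.log(x) if lam == 0 else (x ** lam - 1) / lam   # lam = 0: log case

def fourier_flexible_design(x, lam=0.5, order=2):
    z = boxcox(x, lam)
    s = 2 * np.pi * (x - x.min()) / (x.max() - x.min())      # rescale to [0, 2*pi]
    cols = [np.ones_like(x), z, z ** 2]                      # second-order expansion
    for j in range(1, order + 1):                            # appended Fourier terms
        cols += [np.sin(j * s), np.cos(j * s)]
    return np.column_stack(cols)

x = np.linspace(1.0, 10.0, 200)
X = fourier_flexible_design(x)
print(X.shape)   # (200, 7): constant, z, z^2, two sin/cos pairs
```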

]]>Econometrics doi: 10.3390/econometrics5040052

Authors: Jinyong Hahn Ruoyao Shi

We examine properties of permutation tests in the context of synthetic control. Permutation tests are frequently used methods of inference for synthetic control when the number of potential control units is small. We analyze the permutation tests from a repeated sampling perspective and show that the size of permutation tests may be distorted. Several alternative methods are discussed.

]]>Econometrics doi: 10.3390/econometrics5040049

Authors: Jurgen Doornik Rocco Mosconi Paolo Paruolo

This paper provides some test cases, called circuits, for the evaluation of Gaussian likelihood maximization algorithms of the cointegrated vector autoregressive model. Both I(1) and I(2) models are considered. The performance of algorithms is compared first in terms of effectiveness, defined as the ability to find the overall maximum. The next step is to compare their efficiency and reliability across experiments. The aim of the paper is to initiate a collective learning project within the profession on the actual properties of algorithms for cointegrated vector autoregressive model estimation, in order to improve their quality and, as a consequence, also the reliability of empirical research.

]]>Econometrics doi: 10.3390/econometrics5040051

Authors: Yingjie Dong Yiu-Kuen Tse

We propose a new method to implement the Business Time Sampling (BTS) scheme for high-frequency financial data. We compute a time-transformation (TT) function using the intraday integrated volatility estimated by a jump-robust method. The BTS transactions are obtained using the inverse of the TT function. Using our sampled BTS transactions, we test the semi-martingale hypothesis of the stock log-price process and estimate the daily realized volatility. Our method improves the normality approximation of the standardized business-time return distribution. Our Monte Carlo results show that the integrated volatility estimates based on our proposed sampling strategy have smaller root mean squared errors.

]]>Econometrics doi: 10.3390/econometrics5040050

Authors: Martin Ravallion

On the presumption that poorer people tend to work less, it is often claimed that standard measures of inequality and poverty are overestimates. The paper points to a number of reasons to question this claim. It is shown that, while the labor supplies of American adults have a positive income gradient, the heterogeneity in labor supplies generates considerable horizontal inequality. Using equivalent incomes to adjust for effort can reveal either higher or lower inequality depending on the measurement assumptions. With only a modest allowance for leisure as a basic need, the effort-adjusted poverty rate in terms of equivalent incomes rises.

]]>Econometrics doi: 10.3390/econometrics5040048

Authors: Alain Hecq Sean Telg Lenard Lieb

This paper investigates the effect of seasonal adjustment filters on the identification of mixed causal-noncausal autoregressive models. By means of Monte Carlo simulations, we find that standard seasonal filters induce spurious autoregressive dynamics on white noise series, a phenomenon already documented in the literature. Using a symmetric argument, we show that those filters also generate a spurious noncausal component in the seasonally adjusted series, but preserve (although amplify) the existence of causal and noncausal relationships. This result has important implications for modelling economic time series driven by expectation relationships. We consider inflation data on the G7 countries to illustrate these results.

]]>Econometrics doi: 10.3390/econometrics5040047

Authors: Andras Fulop Jun Yu

We develop a new model where the dynamic structure of the asset price, after the fundamental value is removed, is subject to two different regimes. One regime reflects the normal period, where the asset price divided by the dividend is assumed to follow a mean-reverting process around a stochastic long run mean. The second regime reflects the bubble period, with explosive behavior. Stochastic switches between the two regimes and non-constant probabilities of exit from the bubble regime are both allowed. A Bayesian learning approach is employed to jointly estimate the latent states and the model parameters in real time. An important feature of our Bayesian method is that we are able to deal with parameter uncertainty and, at the same time, to learn about the states and the parameters sequentially, allowing for real time model analysis. This feature is particularly useful for market surveillance. Analysis using simulated data reveals that our method has good power properties for detecting bubbles. Empirical analysis using price-dividend ratios of the S&P 500 highlights the advantages of our method.

]]>Econometrics doi: 10.3390/econometrics5040045

Authors: Apostolos Serletis

William (Bill) Barnett is an eminent econometrician and macroeconomist. [...]

]]>Econometrics doi: 10.3390/econometrics5040046

Authors: Umberto Triacca

The contribution of this paper is to investigate a particular form of lack of invariance of causality statements to changes in the conditioning information sets. Consider a discrete-time three-dimensional stochastic process z = (x, y1, y2)′. We want to study causality relationships between the variables in y = (y1, y2)′ and x. Suppose that, in a bivariate framework, we find that y1 Granger causes x and y2 Granger causes x, but that these relationships vanish when the analysis is conducted in a trivariate framework. Thus, the causal links established in a bivariate setting seem to be spurious. Is this conclusion always correct? In this note, we show that the causal links in the bivariate framework might well not be ‘genuinely’ spurious: they could be reflecting causality from the vector y to x. Paradoxically, in this case, it is the non-causality in the trivariate system that is misleading.

]]>Econometrics doi: 10.3390/econometrics5040044

Authors: Antoni Espasa Eva Senra

The Bulletin of EU & US Inflation and Macroeconomic Analysis (BIAM) is a monthly publication that has been reporting real time analysis and forecasts for inflation and other macroeconomic aggregates for the Euro Area, the US and Spain since 1994. The BIAM inflation forecasting methodology rests on working with useful disaggregation schemes, using leading indicators when possible and applying outlier correction. The paper relates this methodology to corresponding topics in the literature and discusses the design of disaggregation schemes. It concludes that such schemes are useful if they are formulated according to economic, institutional and statistical criteria, aiming to end up with a set of components with very different statistical properties for which valid single-equation models can be built. The BIAM assessment, which derives from a new observation, is based on (a) an evaluation of the forecasting errors (innovations) at the component level, which provides information on which sectors they come from and allows, when required, for appropriate corrections in the specific models, and (b) an update of the path forecast with its corresponding fan chart. Finally, we show that BIAM real time Euro Area inflation forecasts compare successfully with the consensus from the ECB Survey of Professional Forecasters, one and two years ahead.

]]>