Open Access Article
Time-Varying Window Length for Correlation Forecasts
Econometrics 2017, 5(4), 54; doi:10.3390/econometrics5040054
Abstract
Forecasting correlations between stocks and commodities is important for diversification across asset classes and other risk management decisions. Correlation forecasts are affected by model uncertainty, the sources of which can include uncertainty about changing fundamentals and associated parameters (model instability), structural breaks and nonlinearities due, for example, to regime switching. We use approaches that weight historical data according to their predictive content. Specifically, we estimate two alternative models, ‘time-varying weights’ and ‘time-varying window’, in order to maximize the value of past data for forecasting. Our empirical analyses reveal that these approaches provide superior forecasts to several benchmark models for forecasting correlations. Full article
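The "time-varying window" idea of weighting history by its predictive content can be illustrated with a minimal sketch (an illustration, not the authors' estimator): each candidate window length is scored by the out-of-sample error of the correlation forecasts it would have produced, and the best-scoring length is selected.

```python
import numpy as np

def rolling_corr(x, y, w, t):
    """Correlation of the w observations ending at time t (exclusive)."""
    return np.corrcoef(x[t - w:t], y[t - w:t])[0, 1]

def select_window(x, y, candidates, eval_block=20):
    """Pick the window length whose past forecasts of the next
    eval_block-period correlation had the smallest mean squared error."""
    T = len(x)
    scores = {}
    for w in candidates:
        errs = []
        # walk forward: forecast corr over [t, t+eval_block) with corr over [t-w, t)
        for t in range(max(candidates), T - eval_block, eval_block):
            fc = rolling_corr(x, y, w, t)
            realized = np.corrcoef(x[t:t + eval_block], y[t:t + eval_block])[0, 1]
            errs.append((fc - realized) ** 2)
        scores[w] = np.mean(errs)
    return min(scores, key=scores.get), scores
```

The evaluation block length and the candidate set are free choices here; the paper instead estimates the weights/window within the model.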
Open Access Article
Reducing Approximation Error in the Fourier Flexible Functional Form
Econometrics 2017, 5(4), 53; doi:10.3390/econometrics5040053
Abstract
The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms. Full article
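The nesting argument can be seen from the Box-Cox transformation itself: the logarithmic expansion used in the standard Fourier Flexible form is recovered as the λ → 0 limit, so forms based on either transformation can be tested against one another. A minimal sketch:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transformation of y; the lam -> 0 limit is the natural log,
    so the logarithmic expansion is nested as a special case."""
    y = np.asarray(y, dtype=float)
    if abs(lam) < 1e-8:
        return np.log(y)
    return (y ** lam - 1.0) / lam
```

With λ = 1 the transformation is linear (up to a constant), with λ → 0 it is the log, which is what allows the nested testing of commonly implemented functional forms mentioned in the abstract.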
Open Access Article
Synthetic Control and Inference
Econometrics 2017, 5(4), 52; doi:10.3390/econometrics5040052
Abstract
We examine properties of permutation tests in the context of synthetic control. Permutation tests are frequently used methods of inference for synthetic control when the number of potential control units is small. We analyze the permutation tests from a repeated sampling perspective and show that the size of permutation tests may be distorted. Several alternative methods are discussed. Full article
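The permutation ("placebo") test in this literature ranks the treated unit's statistic within the distribution of the same statistic computed for every control unit; with J controls the smallest attainable p-value is 1/(J+1), which is one way the small-sample size issues discussed in the paper arise. A generic sketch, not the paper's procedure:

```python
import numpy as np

def placebo_p_value(effects):
    """Permutation (placebo) p-value: effects[0] is the treated unit's test
    statistic (e.g., a post/pre RMSPE ratio); effects[1:] are the same
    statistic for the placebo runs on each control unit. The p-value is the
    share of units whose statistic is at least as extreme as the treated one."""
    effects = np.asarray(effects, dtype=float)
    return np.mean(effects >= effects[0])
```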
Open Access Article
Formula I(1) and I(2): Race Tracks for Likelihood Maximization Algorithms of I(1) and I(2) Cointegrated VAR Models
Econometrics 2017, 5(4), 49; doi:10.3390/econometrics5040049
Abstract
This paper provides some test cases, called circuits, for the evaluation of Gaussian likelihood maximization algorithms of the cointegrated vector autoregressive model. Both I(1) and I(2) models are considered. The performance of algorithms is compared first in terms of effectiveness, defined as the ability to find the overall maximum. The next step is to compare their efficiency and reliability across experiments. The aim of the paper is to commence a collective learning project by the profession on the actual properties of algorithms for cointegrated vector autoregressive model estimation, in order to improve their quality and, as a consequence, also the reliability of empirical research. Full article
Open Access Article
Business Time Sampling Scheme with Applications to Testing Semi-Martingale Hypothesis and Estimating Integrated Volatility
Econometrics 2017, 5(4), 51; doi:10.3390/econometrics5040051
Abstract
We propose a new method to implement the Business Time Sampling (BTS) scheme for high-frequency financial data. We compute a time-transformation (TT) function using the intraday integrated volatility estimated by a jump-robust method. The BTS transactions are obtained using the inverse of the TT function. Using our sampled BTS transactions, we test the semi-martingale hypothesis of the stock log-price process and estimate the daily realized volatility. Our method improves the normality approximation of the standardized business-time return distribution. Our Monte Carlo results show that the integrated volatility estimates using our proposed sampling strategy have smaller root mean-squared errors. Full article
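The sampling step can be sketched with a simple stand-in for the TT function: take the normalized cumulative (integrated) volatility as business time and invert it by interpolation, so that each sampled interval carries roughly equal volatility. An illustration of the scheme, not the authors' jump-robust implementation:

```python
import numpy as np

def business_time_grid(spot_vol, n_samples):
    """Sampling times on a business-time grid: the time-transformation (TT)
    function is the normalized cumulative volatility, and the sampling times
    are its inverse evaluated on an equally spaced grid, so each sampled
    interval carries (approximately) equal volatility."""
    tt = np.cumsum(spot_vol) / np.sum(spot_vol)      # TT: calendar -> business time
    targets = np.linspace(0, 1, n_samples + 1)[1:]   # equal business-time increments
    calendar = np.arange(1, len(spot_vol) + 1)
    return np.interp(targets, tt, calendar)          # invert TT by interpolation
```

With constant volatility the grid reduces to ordinary calendar-time sampling; volatility bursts pull sampling points closer together.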
Open Access Feature Paper Article
Inequality and Poverty When Effort Matters
Econometrics 2017, 5(4), 50; doi:10.3390/econometrics5040050
Abstract
On the presumption that poorer people tend to work less, it is often claimed that standard measures of inequality and poverty are overestimates. The paper points to a number of reasons to question this claim. It is shown that, while the labor supplies of American adults have a positive income gradient, the heterogeneity in labor supplies generates considerable horizontal inequality. Using equivalent incomes to adjust for effort can reveal either higher or lower inequality depending on the measurement assumptions. With only a modest allowance for leisure as a basic need, the effort-adjusted poverty rate in terms of equivalent incomes rises. Full article
Open Access Article
Do Seasonal Adjustments Induce Noncausal Dynamics in Inflation Rates?
Econometrics 2017, 5(4), 48; doi:10.3390/econometrics5040048
Abstract
This paper investigates the effect of seasonal adjustment filters on the identification of mixed causal-noncausal autoregressive models. By means of Monte Carlo simulations, we find that standard seasonal filters induce spurious autoregressive dynamics on white noise series, a phenomenon already documented in the literature. Using a symmetric argument, we show that those filters also generate a spurious noncausal component in the seasonally adjusted series, but preserve (although amplify) the existence of causal and noncausal relationships. This result has important implications for modelling economic time series driven by expectation relationships. We consider inflation data on the G7 countries to illustrate these results. Full article
Open Access Article
Bayesian Analysis of Bubbles in Asset Prices
Econometrics 2017, 5(4), 47; doi:10.3390/econometrics5040047
Abstract
We develop a new model in which the dynamic structure of the asset price, after the fundamental value is removed, is subject to two different regimes. One regime reflects the normal period, where the asset price divided by the dividend is assumed to follow a mean-reverting process around a stochastic long-run mean. The second regime reflects the bubble period, with explosive behavior. Stochastic switches between the two regimes and non-constant probabilities of exit from the bubble regime are both allowed. A Bayesian learning approach is employed to jointly estimate the latent states and the model parameters in real time. An important feature of our Bayesian method is that we are able to deal with parameter uncertainty and, at the same time, to learn about the states and the parameters sequentially, allowing for real-time model analysis. This feature is particularly useful for market surveillance. Analysis using simulated data reveals that our method has good power properties for detecting bubbles. Empirical analysis using price-dividend ratios of the S&P 500 highlights the advantages of our method. Full article
Open Access Editorial
An Interview with William A. Barnett
Econometrics 2017, 5(4), 45; doi:10.3390/econometrics5040045
Abstract
William (Bill) Barnett is an eminent econometrician and macroeconomist. [...] Full article
Open Access Article
Non-Causality Due to Included Variables
Econometrics 2017, 5(4), 46; doi:10.3390/econometrics5040046
Abstract
The contribution of this paper is to investigate a particular form of lack of invariance of causality statements to changes in the conditioning information sets. Consider a discrete-time three-dimensional stochastic process z=(x,y1,y2). We want to study causality relationships between the variables in y=(y1,y2) and x. Suppose that, in a bivariate framework, we find that y1 Granger causes x and y2 Granger causes x, but these relationships vanish when the analysis is conducted in a trivariate framework. Thus, the causal links established in the bivariate setting seem to be spurious. Is this conclusion always correct? In this note, we show that the causal links in the bivariate framework might well not be ‘genuinely’ spurious: they could be reflecting causality from the vector y to x. Paradoxically, in this case, it is the non-causality in the trivariate system that is misleading. Full article
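The bivariate versus trivariate comparison can be sketched with ordinary least squares: an F-type statistic compares the regression of x on its own lag (restricted) with the regression that adds the first lags of the candidate causes (unrestricted). The helper below is a generic one-lag illustration, not the paper's framework:

```python
import numpy as np

def rss(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

def granger_F(x, predictors):
    """One-lag Granger-type F statistic: does adding the first lags of the
    predictor series improve an AR(1) regression for x?"""
    y = x[1:]
    Xr = np.column_stack([np.ones(len(y)), x[:-1]])   # restricted: own lag only
    Z = np.column_stack([p[:-1] for p in predictors]) # lagged candidate causes
    Xu = np.column_stack([Xr, Z])
    q = Z.shape[1]
    dof = len(y) - Xu.shape[1]
    return ((rss(y, Xr) - rss(y, Xu)) / q) / (rss(y, Xu) / dof)
```

`granger_F(x, [y1])` gives a bivariate test of y1 on x, while `granger_F(x, [y1, y2])` tests the vector y=(y1,y2) jointly, which is the comparison at the heart of the paper's paradox.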
Open Access Article
Twenty-Two Years of Inflation Assessment and Forecasting Experience at the Bulletin of EU & US Inflation and Macroeconomic Analysis
Econometrics 2017, 5(4), 44; doi:10.3390/econometrics5040044
Abstract
The Bulletin of EU & US Inflation and Macroeconomic Analysis (BIAM) is a monthly publication that has been reporting real-time analysis and forecasts for inflation and other macroeconomic aggregates for the Euro Area, the US and Spain since 1994. The BIAM inflation forecasting methodology rests on working with useful disaggregation schemes, using leading indicators when possible and applying outlier correction. The paper relates this methodology to corresponding topics in the literature and discusses the design of disaggregation schemes. It concludes that such schemes are useful when formulated according to economic, institutional and statistical criteria, so as to end up with a set of components with very different statistical properties for which valid single-equation models can be built. The BIAM assessment, which derives from a new observation, is based on (a) an evaluation of the forecasting errors (innovations) at the component level, which provides information on which sectors they come from and allows, when required, for the appropriate correction in the specific models, and (b) an update of the path forecast with its corresponding fan chart. Finally, we show that BIAM real-time Euro Area inflation forecasts compare successfully with the consensus from the ECB Survey of Professional Forecasters, one and two years ahead. Full article
Open Access Article
Autoregressive Lag-Order Selection Using Conditional Saddlepoint Approximations
Econometrics 2017, 5(3), 43; doi:10.3390/econometrics5030043
Abstract
A new method for determining the lag order of the autoregressive polynomial in regression models with autocorrelated normal disturbances is proposed. It is based on a sequential testing procedure using conditional saddlepoint approximations and permits the desire for parsimony to be explicitly incorporated, unlike penalty-based model selection methods. Extensive simulation results indicate that the new method is usually competitive with, and often better than, common model selection methods. Full article
Open Access Editorial
Announcement of the 2017 Econometrics Young Researcher Award
Econometrics 2017, 5(3), 42; doi:10.3390/econometrics5030042
Abstract
With the goal of encouraging and motivating young researchers in the field of econometrics, last year the journal Econometrics accepted applications and nominations for the 2017 Young Researcher Award. [...] Full article
Open Access Article
Unit Roots in Economic and Financial Time Series: A Re-Evaluation at the Decision-Based Significance Levels
Econometrics 2017, 5(3), 41; doi:10.3390/econometrics5030041
Abstract
This paper re-evaluates key past results of unit root tests, emphasizing that the use of a conventional level of significance is not in general optimal due to the test having low power. The decision-based significance levels for popular unit root tests, chosen using the line of enlightened judgement under a symmetric loss function, are found to be much higher than conventional ones. We also propose simple calibration rules for the decision-based significance levels for a range of unit root tests. At the decision-based significance levels, many time series in Nelson and Plosser’s (1982) (extended) data set are judged to be trend-stationary, including real income variables, employment variables and money stock. We also find that nearly all real exchange rates covered in Elliott and Pesavento’s (2006) study are stationary; and that most of the real interest rates covered in Rapach and Weber’s (2004) study are stationary. In addition, using a specific loss function, the U.S. nominal interest rate is found to be stationary under economically sensible values of relative loss and prior belief for the null hypothesis. Full article
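Mechanically, changing the significance level just moves the critical value of the test. A minimal no-augmentation Dickey-Fuller sketch (the paper's decision-based levels come from a loss function and prior belief; here the threshold is simply a parameter):

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic (no augmentation): regress dy_t on a
    constant and y_{t-1} and return the t-ratio on y_{t-1}."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ beta
    s2 = e @ e / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

def reject_unit_root(y, critical_value):
    """Reject the unit-root null when the t-ratio falls below the chosen
    critical value; a decision-based level simply moves this threshold
    (toward a less negative value than the conventional one)."""
    return df_tstat(y) < critical_value
```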
Open Access Article
Evaluating Ingenious Instruments for Fundamental Determinants of Long-Run Economic Growth and Development
Econometrics 2017, 5(3), 38; doi:10.3390/econometrics5030038
Abstract
Empirical studies of the determinants of cross-country differences in long-run development are characterized by the ingenious nature of the instruments used. However, scepticism remains about their ability to provide a valid basis for causal inference. This paper examines whether explicit consideration of the statistical adequacy of the underlying reduced form, which provides an embedding framework for the structural equations, can usefully complement economic theory as a basis for assessing instrument choice in the fundamental determinants literature. Diagnostic testing of the reduced forms in influential studies reveals evidence of model misspecification, with parameter non-constancy and spatial dependence of the residuals being almost ubiquitous. This feature, surprisingly not previously identified, potentially undermines the inferences drawn about the structural parameters, such as the quantitative and statistical significance of different fundamental determinants. Full article
Open Access Article
Short-Term Expectation Formation Versus Long-Term Equilibrium Conditions: The Danish Housing Market
Econometrics 2017, 5(3), 40; doi:10.3390/econometrics5030040
Abstract
The primary contribution of this paper is to establish that the long-swings behavior observed in the market price of Danish housing since the 1970s can be understood by studying the interplay between short-term expectation formation and long-run equilibrium conditions. We introduce an asset market model for housing based on uncertainty rather than risk, which under mild assumptions allows for other forms of forecasting behavior than rational expectations. We test the theory via an I(2) cointegrated VAR model and find that the long-run equilibrium for the housing price corresponds closely to the predictions from the theoretical framework. Additionally, we corroborate previous findings that housing markets are well characterized by short-term momentum forecasting behavior. Our conclusions have wider relevance, since housing prices play a role in the wider Danish economy, and other developed economies, through wealth effects. Full article
Open Access Feature Paper Article
Evaluating Forecasts, Narratives and Policy Using a Test of Invariance
Econometrics 2017, 5(3), 39; doi:10.3390/econometrics5030039
Abstract
Economic policy agencies produce forecasts with accompanying narratives, and base policy changes on the resulting anticipated developments in the target variables. Systematic forecast failure, defined as large, persistent deviations of the outturns from the numerical forecasts, can make the associated narrative false, which would in turn question the validity of the entailed policy implementation. We establish when systematic forecast failure entails failure of the accompanying narrative, which we call forediction failure, and when that in turn implies policy invalidity. Most policy regime changes involve location shifts, which can induce forediction failure unless the policy variable is super exogenous in the policy model. We propose a step-indicator saturation test to check in advance for invariance to policy changes. Systematic forecast failure, or a lack of invariance, previously justified by narratives reveals such stories to be economic fiction. Full article
Open Access Article
The Turkish Spatial Wage Curve
Econometrics 2017, 5(3), 37; doi:10.3390/econometrics5030037
Abstract
The wage curve for Turkey is revisited, taking into account spatial spillovers of regional unemployment rates, using individual-level data for the period 2004–2013 at the 26 NUTS-2 regions and employing FE-2SLS models. The unemployment elasticity of real wages is −0.07, without excluding any group of workers, unlike previous studies. There is strong evidence of spatial effects: the unemployment rate of contiguous regions has a larger effect on the wage level, in absolute value, than the own-region unemployment rate (−0.087 versus −0.056). Male workers are slightly more responsive to the own-region unemployment rate than female workers, whereas female workers are more responsive to the neighboring regions’ unemployment rate. Furthermore, when group-specific unemployment rates are used in the estimation of the wage curve for various groups, the unemployment elasticity of pay for female workers becomes smaller and loses its significance, whereas the elasticity for male workers changes only slightly. The findings in this paper suggest that individual wages are more responsive to the unemployment rates of proximate regions than to that of an individual’s own region, and that the wage curve estimates are sensitive to the use of group-specific unemployment rates. Full article
Open Access Article
Cointegration between Trends and Their Estimators in State Space Models and Cointegrated Vector Autoregressive Models
Econometrics 2017, 5(3), 36; doi:10.3390/econometrics5030036
Abstract
A state space model with an unobserved multivariate random walk and a linear observation equation is studied. The purpose is to find out when the extracted trend cointegrates with its estimator, in the sense that a linear combination is asymptotically stationary. It is found that this result holds for the linear combination of the trend that appears in the observation equation. If identifying restrictions are imposed on either the trend or its coefficients in the linear observation equation, it is shown that there is cointegration between the identified trend and its estimator, if and only if the estimators of the coefficients in the observation equations are consistent at a faster rate than the square root of sample size. The same results are found if the observations from the state space model are analysed using a cointegrated vector autoregressive model. The findings are illustrated by a small simulation study. Full article
Open Access Article
Building News Measures from Textual Data and an Application to Volatility Forecasting
Econometrics 2017, 5(3), 35; doi:10.3390/econometrics5030035
Abstract
We retrieve news stories and earnings announcements of the S&P 100 constituents from two professional news providers, along with ten macroeconomic indicators. We also gather data from Google Trends about these firms’ assets as an index of retail investors’ attention. Thus, we create an extensive and innovative database that contains precise information with which to analyze the link between news and asset price dynamics. We detect the sentiment of news stories using a dictionary of sentiment-related words and negations and propose a set of more than five thousand information-based variables that provide natural proxies for the information used by heterogeneous market players. We first shed light on the impact of information measures on daily realized volatility and select them by penalized regression. Then, we perform a forecasting exercise and show that the model augmented with news-related variables provides superior forecasts. Full article
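The dictionary step can be sketched in a few lines: each token is scored against positive/negative word lists, and a match preceded by a negation within a small window has its sign flipped. The word lists below are illustrative placeholders, not the dictionary used in the paper:

```python
POSITIVE = {"gain", "growth", "strong", "beat", "improve"}
NEGATIVE = {"loss", "weak", "miss", "decline", "fall"}
NEGATIONS = {"not", "no", "never", "without"}

def sentiment(text, window=2):
    """Dictionary-based sentiment: +1 per positive word, -1 per negative word,
    with the sign flipped when a negation occurs within `window` preceding
    tokens."""
    tokens = text.lower().split()
    score = 0
    for i, tok in enumerate(tokens):
        s = 1 if tok in POSITIVE else -1 if tok in NEGATIVE else 0
        if s and any(t in NEGATIONS for t in tokens[max(0, i - window):i]):
            s = -s  # "no growth" counts as negative, "did not decline" as positive
        score += s
    return score
```

Aggregating such scores over stories per firm and per day is one natural way to build the kind of news-based explanatory variables the abstract describes.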