
Table of Contents

Econometrics, Volume 7, Issue 3 (September 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article
Bivariate Volatility Modeling with High-Frequency Data
Econometrics 2019, 7(3), 41; https://doi.org/10.3390/econometrics7030041 - 15 Sep 2019
Viewed by 308
Abstract
We propose a methodology to include night volatility estimates in the day volatility modeling problem with high-frequency data in a realized generalized autoregressive conditional heteroskedasticity (GARCH) framework, which takes advantage of the natural relationship between the realized measure and the conditional variance. This improves volatility modeling by adding, in a two-factor structure, information on latent processes that occur while markets are closed, while still capturing the leverage effect and maintaining a mathematical structure that facilitates volatility estimation. A class of bivariate models that includes intraday, day, and night volatility estimates is proposed and empirically tested to confirm whether using night volatility information improves day volatility estimation. The results indicate a forecasting improvement of the bivariate models over those that do not include night volatility estimates.
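The realized-measure idea behind the abstract above can be sketched in a few lines of Python: compute a realized variance from intraday returns and feed it into a log-linear realized-GARCH-style variance recursion. The recursion's form and all parameter values are illustrative assumptions, not the authors' bivariate specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 5-minute intraday log-returns for one trading day (78 bars).
intraday_returns = rng.normal(0.0, 0.001, size=78)

# Realized variance: sum of squared intraday returns (the standard realized measure).
realized_variance = np.sum(intraday_returns ** 2)

def realized_garch_step(h_prev, x_prev, omega=0.05, beta=0.7, gamma=0.25):
    """One step of a log-linear realized-GARCH-style recursion:
    log h_t = omega + beta * log h_{t-1} + gamma * log x_{t-1},
    where h is the conditional variance and x the realized measure.
    Parameter values are illustrative, not estimates."""
    return np.exp(omega + beta * np.log(h_prev) + gamma * np.log(x_prev))

h = realized_variance  # initialize the conditional variance at the realized measure
for _ in range(5):
    h = realized_garch_step(h, realized_variance)
print(h)
```

The key feature the abstract exploits is visible even in this toy version: the realized measure enters the conditional-variance recursion directly, rather than through squared daily returns alone.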
Open Access Article
Forecast Bitcoin Volatility with Least Squares Model Averaging
Econometrics 2019, 7(3), 40; https://doi.org/10.3390/econometrics7030040 - 14 Sep 2019
Viewed by 397
Abstract
In this paper, we study forecasting problems of Bitcoin realized volatility computed on data from the largest crypto exchange, Binance. Given the unique features of the crypto asset market, we find that conventional regression models exhibit strong model specification uncertainty. To circumvent this issue, we suggest using least squares model-averaging methods to model and forecast Bitcoin volatility. The empirical results demonstrate that least squares model-averaging methods in general outperform many other conventional regression models that ignore specification uncertainty.
(This article belongs to the Special Issue Bayesian and Frequentist Model Averaging)
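The least squares model-averaging idea can be illustrated with a toy sketch (the simulated data and candidate models are assumptions, not the paper's Binance setup): fit candidate regressions of different sizes, then choose the combination weight that minimizes the in-sample squared error of the averaged fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target and two candidate predictor sets (stand-ins for volatility predictors).
n = 300
x1 = rng.normal(size=n)          # e.g. a lagged-volatility-type regressor
x2 = rng.normal(size=n)          # e.g. a volume-type regressor
y = 0.5 * x1 + 0.2 * x2 + rng.normal(scale=0.5, size=n)

def ols_fit(X, y):
    """Return OLS fitted values."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

# Two candidate models of different sizes (specification uncertainty).
fit_small = ols_fit(np.column_stack([np.ones(n), x1]), y)
fit_large = ols_fit(np.column_stack([np.ones(n), x1, x2]), y)

# Least squares averaging: choose the weight w in [0, 1] on the small model
# that minimizes the in-sample squared error of the combined fit.
grid = np.linspace(0.0, 1.0, 101)
sse = [np.sum((y - (w * fit_small + (1 - w) * fit_large)) ** 2) for w in grid]
w_star = grid[int(np.argmin(sse))]
combined = w_star * fit_small + (1 - w_star) * fit_large
print(w_star, np.sum((y - combined) ** 2))
```

Because the simplex of weights contains both candidate models as corner points, the averaged fit can never do worse in sample than the better single model; the methods the paper studies refine the weight choice (e.g. via penalized criteria) to carry that advantage out of sample.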
Open Access Article
On the Forecast Combination Puzzle
Econometrics 2019, 7(3), 39; https://doi.org/10.3390/econometrics7030039 - 10 Sep 2019
Viewed by 441
Abstract
It is often reported in the forecast combination literature that a simple average of candidate forecasts is more robust than sophisticated combining methods. This phenomenon is usually referred to as the “forecast combination puzzle”. Motivated by this puzzle, we explore its possible explanations, including high variance in estimating the target optimal weights (estimation error), invalid weighting formulas, and model/candidate screening before combination. We show that the existing understanding of the puzzle should be complemented by the distinction of different forecast combination scenarios known as combining for adaptation and combining for improvement. Applying combining methods without considering the underlying scenario can itself cause the puzzle. Based on our new understandings, both simulations and real data evaluations are conducted to illustrate the causes of the puzzle. We further propose a multi-level AFTER strategy that can integrate the strengths of different combining methods and adapt intelligently to the underlying scenario. In particular, by treating the simple average as a candidate forecast, the proposed strategy is shown to reduce the heavy cost of estimation error and, to a large extent, mitigate the puzzle.
(This article belongs to the Special Issue Bayesian and Frequentist Model Averaging)
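A hedged simulation sketch of the setting behind the puzzle: two unbiased forecasts of the same target are combined either by a simple average or by Bates–Granger-style variance weights estimated on a short training window. The short window and the ignored error covariance are illustrative choices, not the paper's design; the point is only to show where estimation error in the weights enters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two unbiased forecasts of the same target.
T = 2000
target = rng.normal(size=T)
f1 = target + rng.normal(scale=1.0, size=T)
f2 = target + rng.normal(scale=1.2, size=T)

train, test = slice(0, 30), slice(30, T)   # short training window: noisy weights

# Estimated combination weight on the training window
# (Bates-Granger inverse-variance form, ignoring error covariance for simplicity).
e1, e2 = target[train] - f1[train], target[train] - f2[train]
w_hat = np.var(e2) / (np.var(e1) + np.var(e2))

def mse(f):
    """Out-of-sample mean squared error of a combined forecast."""
    return np.mean((target[test] - f[test]) ** 2)

mse_avg = mse(0.5 * f1 + 0.5 * f2)            # simple average
mse_opt = mse(w_hat * f1 + (1 - w_hat) * f2)  # estimated-weight combination
print(mse_avg, mse_opt)
```

With only 30 training observations, `w_hat` is a noisy estimate of the population-optimal weight, which is exactly the estimation-error channel the abstract lists among the puzzle's possible explanations.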
Open Access Article
A Combination Method for Averaging OLS and GLS Estimators
Econometrics 2019, 7(3), 38; https://doi.org/10.3390/econometrics7030038 - 09 Sep 2019
Viewed by 282
Abstract
To avoid the risk of misspecification between homoscedastic and heteroscedastic models, we propose a combination method based on ordinary least-squares (OLS) and generalized least-squares (GLS) model-averaging estimators. To select optimal weights for the combination, we suggest two information criteria and propose feasible versions that work even when the variance-covariance matrix is unknown. The optimality of the method is proven under some regularity conditions. The results of a Monte Carlo simulation demonstrate that the method is adaptive in the sense that it achieves almost the same estimation accuracy as if the homoscedasticity or heteroscedasticity of the error term were known.
(This article belongs to the Special Issue Bayesian and Frequentist Model Averaging)
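A minimal sketch of the two estimators being combined, on simulated heteroskedastic data. The fixed combination weight below is a hypothetical placeholder; the paper selects the weight via information criteria, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Heteroskedastic linear model: the error standard deviation grows with x.
n = 500
x = rng.uniform(1.0, 3.0, size=n)
X = np.column_stack([np.ones(n), x])
sigma = 0.5 * x                      # true skedastic function (used only to simulate)
y = 1.0 + 2.0 * x + rng.normal(scale=sigma)

# OLS estimator.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Feasible GLS: estimate the skedastic function from log squared OLS residuals,
# then reweight the regression by the estimated inverse standard deviations.
resid = y - X @ beta_ols
gamma = np.linalg.lstsq(X, np.log(resid ** 2 + 1e-12), rcond=None)[0]
w = 1.0 / np.sqrt(np.exp(X @ gamma))
beta_fgls = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]

# Model-averaging estimator: a convex combination of OLS and feasible GLS.
lam = 0.5   # hypothetical weight; the paper chooses it by information criteria
beta_avg = lam * beta_ols + (1 - lam) * beta_fgls
print(beta_avg)
```

Both component estimators are consistent here; what the weight controls is the variance trade-off between them, which is where the misspecification risk between homoscedastic and heteroscedastic models comes in.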
Open Access Article
Consequences of Model Misspecification for Maximum Likelihood Estimation with Missing Data
Econometrics 2019, 7(3), 37; https://doi.org/10.3390/econometrics7030037 - 05 Sep 2019
Viewed by 423
Abstract
Researchers are often faced with the challenge of developing statistical models with incomplete data. Exacerbating this situation is the possibility that either the researcher’s complete-data model or the model of the missing-data mechanism is misspecified. In this article, we create a formal theoretical framework for developing statistical models and detecting model misspecification in the presence of incomplete data, where maximum likelihood estimates are obtained by maximizing the observable-data likelihood function when the missing-data mechanism is assumed ignorable. First, we provide sufficient regularity conditions on the researcher’s complete-data model to characterize the asymptotic behavior of maximum likelihood estimates in the simultaneous presence of both missing data and model misspecification. These results are then used to derive robust hypothesis testing methods for possibly misspecified models in the presence of Missing at Random (MAR) or Missing Not at Random (MNAR) data. Second, we introduce a method for the detection of model misspecification in missing data problems using recently developed Generalized Information Matrix Tests (GIMT). Third, we identify regularity conditions for the Missing Information Principle (MIP) to hold in the presence of model misspecification so as to provide useful computational covariance matrix estimation formulas. Fourth, we provide regularity conditions that ensure the observable-data expected negative log-likelihood function is convex in the presence of partially observable data when the amount of missingness is sufficiently small and the complete-data likelihood is convex. Fifth, we show that when the researcher has correctly specified a complete-data model with a convex negative log-likelihood function and an ignorable missing-data mechanism, then its strict local minimizer is the true parameter value for the complete-data model when the amount of missingness is sufficiently small. Our results thus provide new robust estimation, inference, and specification analysis methods for developing statistical models with incomplete data.
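A toy illustration of why estimation under an ignorable (MAR) mechanism matters, far simpler than the paper's framework: with the covariance and the mean of the always-observed coordinate treated as known (simplifying assumptions made only to keep the sketch short), a small EM-style loop recovers the mean of the partially observed coordinate, while the complete-case average is biased.

```python
import numpy as np

rng = np.random.default_rng(4)

# Bivariate normal data; y2 is Missing At Random: missingness depends only on y1.
n = 2000
mu_true = np.array([0.0, 1.0])
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
data = rng.multivariate_normal(mu_true, cov, size=n)
y1, y2 = data[:, 0], data[:, 1].copy()
miss = rng.random(n) < 1 / (1 + np.exp(-y1))   # P(missing) rises with observed y1
y2[miss] = np.nan

# Complete-case mean is biased: units with high y1 (hence high y2, since the
# two are positively correlated) are more likely to have y2 missing.
naive = np.nanmean(y2)

# EM-style loop with known covariance and known mean of y1:
# E-step imputes E[y2 | y1], M-step updates the mean of y2.
mu2 = 0.0
for _ in range(50):
    imputed = np.where(miss, mu2 + cov[0, 1] / cov[0, 0] * (y1 - mu_true[0]), y2)
    mu2 = imputed.mean()
print(naive, mu2)
```

The imputation step is exactly where the complete-data model enters; if that model were misspecified, the bias the paper analyzes would reappear through this step.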
Open Access Article
Compulsory Schooling and Returns to Education: A Re-Examination
Econometrics 2019, 7(3), 36; https://doi.org/10.3390/econometrics7030036 - 02 Sep 2019
Viewed by 333
Abstract
This paper re-examines the instrumental variable (IV) approach to estimating returns to education by use of compulsory school law (CSL) in the US. We show that the IV approach amounts to a change in model specification by changing the causal status of the variable of interest. From this perspective, the IV-OLS (ordinary least squares) choice becomes a model selection issue between non-nested models and is hence testable using cross-validation methods. It also enables us to unravel several logical flaws in the conceptualisation of IV-based models. Using the causal chain model specification approach, we overcome these flaws by carefully distinguishing returns to education from the treatment effect of CSL. We find relatively robust estimates for the first effect, while estimates for the second effect are hindered by measurement errors in the CSL indicators. We find reassurance for our approach in fundamental theories of statistical learning.
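For readers less familiar with the IV machinery the paper re-examines, a self-contained two-stage least squares sketch on simulated data. The coefficients and the CSL-style binary instrument are invented for illustration; they are not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy returns-to-education setup: schooling S is endogenous (correlated with
# unobserved ability), and a compulsory-schooling-law dummy Z shifts S.
n = 5000
ability = rng.normal(size=n)
Z = (rng.random(n) < 0.5).astype(float)            # instrument: CSL exposure
S = 10 + 1.0 * Z + 0.8 * ability + rng.normal(size=n)
logwage = 0.08 * S + 0.5 * ability + rng.normal(scale=0.3, size=n)

def ols(X, y):
    """OLS coefficient vector."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_ols = ols(np.column_stack([np.ones(n), S]), logwage)[1]   # ability-biased

# 2SLS: first stage projects S on the instrument, second stage uses fitted S.
Z1 = np.column_stack([np.ones(n), Z])
S_hat = Z1 @ ols(Z1, S)
beta_iv = ols(np.column_stack([np.ones(n), S_hat]), logwage)[1]
print(beta_ols, beta_iv)
```

In this simulation the OLS slope absorbs the ability channel and overstates the true return of 0.08, while 2SLS removes it; the paper's argument is precisely that this IV step should be read as a change of model specification, not a free lunch.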
Open Access Article
Heteroskedasticity in One-Way Error Component Probit Models
Econometrics 2019, 7(3), 35; https://doi.org/10.3390/econometrics7030035 - 11 Aug 2019
Viewed by 695
Abstract
This paper introduces an estimation procedure for a random effects probit model in the presence of heteroskedasticity, together with a likelihood ratio test for homoskedasticity. The cases where the heteroskedasticity is due to individual effects, to idiosyncratic errors, or to both are analyzed. Monte Carlo simulations show that the test performs well in the case of a high degree of heteroskedasticity. Furthermore, the power of the test increases with larger individual and time dimensions. The robustness analysis shows that applying the wrong approach may generate misleading results, except for the case where both individual effects and idiosyncratic errors are modelled as heteroskedastic.
Open Access Article
Optimal Multi-Step-Ahead Prediction of ARCH/GARCH Models and NoVaS Transformation
Econometrics 2019, 7(3), 34; https://doi.org/10.3390/econometrics7030034 - 08 Aug 2019
Viewed by 591
Abstract
This paper gives a computer-intensive approach to multi-step-ahead prediction of volatility in financial returns series under an ARCH/GARCH model and also under a model-free setting, namely employing the NoVaS transformation. Our model-based approach only assumes i.i.d. innovations, without requiring knowledge or an assumption of the error distribution, and is computationally straightforward. The model-free approach is formally quite similar, albeit a GARCH model is not assumed. We conducted a number of simulations to show that the proposed approach works well for both point prediction (under L1 and/or L2 measures) and prediction intervals constructed using bootstrapping. The performance of GARCH models and the model-free approach for multi-step-ahead prediction was also compared under different data generating processes.
(This article belongs to the Special Issue Resampling Methods in Econometrics)
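The model-based side of the abstract can be sketched as follows, under assumed GARCH(1,1) parameter values (illustrative, not estimates): resample standardized residuals i.i.d. and propagate the variance recursion forward by Monte Carlo, then read off L2 (mean) and L1 (median) point predictions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed fitted GARCH(1,1) parameters.
omega, alpha, beta = 0.05, 0.08, 0.9

# Stand-in for the in-sample standardized residuals; resampling them i.i.d.
# avoids assuming a parametric error distribution.
resid = rng.standard_t(df=6, size=1000)
resid = resid / resid.std()

def simulate_paths(h_last, r_last, steps, n_paths=5000):
    """Monte Carlo multi-step-ahead volatility: propagate the GARCH recursion
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}
    forward with innovations drawn from the residual pool."""
    h = np.full(n_paths, omega + alpha * r_last**2 + beta * h_last)
    for _ in range(steps - 1):
        r = np.sqrt(h) * rng.choice(resid, size=n_paths)
        h = omega + alpha * r**2 + beta * h
    return h

h5 = simulate_paths(h_last=1.0, r_last=0.5, steps=5)
# The L2-optimal point prediction is the mean; L1 suggests the median.
print(h5.mean(), np.median(h5))
```

The same simulated distribution of `h5` also yields bootstrap-style prediction intervals via its empirical quantiles, which is the interval construction the abstract mentions.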
Open Access Article
A Comparison of Some Bayesian and Classical Procedures for Simultaneous Equation Models with Weak Instruments
Econometrics 2019, 7(3), 33; https://doi.org/10.3390/econometrics7030033 - 29 Jul 2019
Viewed by 720
Abstract
We compare the finite sample performance of a number of Bayesian and classical procedures for limited information simultaneous equations models with weak instruments by a Monte Carlo study. We consider Bayesian approaches developed by Chao and Phillips, Geweke, Kleibergen and van Dijk, and Zellner. Amongst the sampling theory methods, OLS, 2SLS, LIML, Fuller’s modified LIML, and the jackknife instrumental variable estimator (JIVE) due to Angrist et al. and Blomquist and Dahlberg are also considered. Since the posterior densities and their conditionals in Chao and Phillips and in Kleibergen and van Dijk are nonstandard, we use a novel “Gibbs within Metropolis–Hastings” algorithm, which only requires the availability of the conditional densities from the candidate generating density. Our results show that with very weak instruments, there is no single estimator that is superior to others in all cases. When endogeneity is weak, Zellner’s MELO does the best. When the endogeneity is not weak and ρω12 > 0, where ρ is the correlation coefficient between the structural and reduced form errors and ω12 is the covariance between the unrestricted reduced form errors, the Bayesian method of moments (BMOM) outperforms all other estimators by a wide margin. When the endogeneity is not weak and βρ < 0 (β being the structural parameter), the Kleibergen and van Dijk approach seems to work very well. Surprisingly, the performance of JIVE was disappointing in all our experiments.
Open Access Article
Misclassification in Binary Choice Models with Sample Selection
Econometrics 2019, 7(3), 32; https://doi.org/10.3390/econometrics7030032 - 24 Jul 2019
Viewed by 737
Abstract
Most empirical work in the social sciences is based on observational data that are often both incomplete, and therefore unrepresentative of the population of interest, and affected by measurement errors. These problems are very well known in the literature, and ad hoc parametric-modeling procedures have been proposed and developed for some time in order to correct the estimates’ bias and obtain consistent estimators. However, to the best of our knowledge, the aforementioned problems have not yet been jointly considered. We try to overcome this by proposing a parametric approach for the estimation of the probabilities of misclassification of a binary response variable, incorporating them in the likelihood of a binary choice model with sample selection.
Open Access Article
Estimation of FAVAR Models for Incomplete Data with a Kalman Filter for Factors with Observable Components
Econometrics 2019, 7(3), 31; https://doi.org/10.3390/econometrics7030031 - 15 Jul 2019
Cited by 1 | Viewed by 925
Abstract
This article extends the Factor-Augmented Vector Autoregression Model (FAVAR) to mixed-frequency and incomplete panel data. Within the scope of a fully parametric two-step approach, the alternating application of two expectation-maximization algorithms jointly estimates model parameters and missing data. In contrast to the existing literature, we do not require observable factor components to be part of the panel data. For this purpose, we modify the Kalman Filter for factors consisting of latent and observed components, which significantly improves the reconstruction of latent factors according to the performed simulation study. To identify model parameters uniquely, the loadings matrix is constrained. In our empirical application, the presented framework analyzes US data for measuring the effects of monetary policy on the real economy and financial markets. Here, the consequences for the quarterly Gross Domestic Product (GDP) growth rates are of particular importance.
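The paper's estimation builds on Kalman filtering; as background, here is a standard single-factor Kalman filter on simulated data. This is the unmodified building block, not the authors' extension for factors with observed components, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# One latent AR(1) factor observed through a noisy 3-variable panel:
# f_t = phi * f_{t-1} + e_t,   y_t = L f_t + u_t.
T, phi = 200, 0.8
L = np.array([1.0, 0.5, -0.7])       # loadings (first fixed at 1 to identify scale)
f = np.zeros(T)
for t in range(1, T):
    f[t] = phi * f[t - 1] + rng.normal()
Y = np.outer(f, L) + rng.normal(scale=0.5, size=(T, 3))

# Kalman filter for the scalar state (state innovation variance Q = 1 assumed known).
R = 0.25 * np.eye(3)                 # measurement noise covariance (assumed known)
m, P = 0.0, 1.0                      # state mean and variance
filtered = np.empty(T)
for t in range(T):
    m_pred, P_pred = phi * m, phi**2 * P + 1.0         # predict
    S = P_pred * np.outer(L, L) + R                    # innovation covariance
    K = P_pred * np.linalg.solve(S, L)                 # Kalman gain
    m = m_pred + K @ (Y[t] - L * m_pred)               # update
    P = P_pred * (1.0 - K @ L)
    filtered[t] = m

print(np.corrcoef(filtered, f)[0, 1])
```

Note the role of the loadings normalization (`L[0] = 1.0`): without some such constraint the factor's scale is not identified, which is the same identification issue the abstract addresses by constraining the loadings matrix.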
Open Access Article
Evaluating Approximate Point Forecasting of Count Processes
Econometrics 2019, 7(3), 30; https://doi.org/10.3390/econometrics7030030 - 06 Jul 2019
Viewed by 999
Abstract
In forecasting count processes, practitioners often ignore the discreteness of counts and compute forecasts based on Gaussian approximations instead. For both central and non-central point forecasts, and for various types of count processes, the performance of such approximate point forecasts is analyzed. The considered data-generating processes include different autoregressive schemes with varying model orders, count models with overdispersion or zero inflation, counts with a bounded range, and counts exhibiting trend or seasonality. We conclude that Gaussian forecast approximations should be avoided.
(This article belongs to the Special Issue Discrete-Valued Time Series: Modelling, Estimation and Forecasting)
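To make the contrast concrete, a small sketch comparing a proper count point forecast, the median of a Poisson predictive distribution, with the rounded Gaussian approximation that matches its mean; the λ values are arbitrary, and the Poisson law is chosen here only as the simplest count predictive distribution, not as the paper's data-generating process.

```python
import numpy as np

def poisson_median(lam, kmax=200):
    """Median of a Poisson(lam) distribution: the smallest k with CDF >= 1/2.
    The pmf is built in log space for numerical stability."""
    k = np.arange(kmax + 1)
    log_fact = np.cumsum(np.log(np.maximum(k, 1)))     # log k!
    pmf = np.exp(-lam + k * np.log(lam) - log_fact)
    return int(np.searchsorted(np.cumsum(pmf), 0.5))

# Gaussian approximation: predictive law N(lam, lam), so the median equals the
# mean lam, which is then rounded to the nearest integer.
for lam in [0.3, 1.4, 2.7, 8.6]:
    print(lam, poisson_median(lam), int(round(lam)))
```

For central forecasts the two often coincide, but they can diverge for skewed low-count distributions and especially for non-central quantiles, which is the regime where the abstract's warning against Gaussian approximations bites.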
Open Access Article
Bayesian Analysis of Coefficient Instability in Dynamic Regressions
Econometrics 2019, 7(3), 29; https://doi.org/10.3390/econometrics7030029 - 28 Jun 2019
Viewed by 1023
Abstract
This paper deals with instability in regression coefficients. We propose a Bayesian regression model with time-varying coefficients (TVC) that allows us to jointly estimate the degree of instability and the time path of the coefficients. Thanks to the computational tractability of the model and to the fact that it is fully automatic, we are able to run Monte Carlo experiments and analyze its finite-sample properties. We find that the estimation precision and the forecasting accuracy of the TVC model compare favorably to those of other methods commonly employed to deal with parameter instability. A distinguishing feature of the TVC model is its robustness to mis-specification: its performance is also satisfactory when regression coefficients are stable or when they experience discrete structural breaks. As a demonstrative application, we used our TVC model to estimate the exposures of S&P 500 stocks to market-wide risk factors: we found that a vast majority of stocks had time-varying exposures and that the TVC model helped to better forecast these exposures.