Econometrics
http://www.mdpi.com/journal/econometrics
Latest open access articles published in Econometrics at http://www.mdpi.com/journal/econometrics

Econometrics, Vol. 2, Pages 123-144: Two-Part Models for Fractional Responses Defined as Ratios of Integers
http://www.mdpi.com/2225-1146/2/3/123
This paper discusses two alternative two-part models for fractional response variables that are defined as ratios of integers. The first two-part model assumes a Binomial distribution and known group size. It nests the one-part fractional response model proposed by Papke and Wooldridge (1996) and, thus, allows one to apply Wald, LM and/or LR tests in order to discriminate between the two models. The second model extends the first one by allowing for overdispersion in the data. We demonstrate the usefulness of the proposed two-part models for data on the 401(k) pension plan participation rates used in Papke and Wooldridge (1996).
Econometrics 2014, 2(3), 123-144; doi: 10.3390/econometrics2030123; published 2014-09-19. Authors: Harald Oberhofer and Michael Pfaffermayr.

Econometrics, Vol. 2, Pages 98-122: A Fast, Accurate Method for Value-at-Risk and Expected Shortfall
http://www.mdpi.com/2225-1146/2/2/98
A fast method is developed for value-at-risk and expected shortfall prediction for univariate asset return time series exhibiting leptokurtosis, asymmetry and conditional heteroskedasticity. It is based on a GARCH-type process driven by noncentral t innovations. While the method involves the use of several shortcuts for speed, it performs admirably in terms of accuracy and actually outperforms highly competitive models. Most remarkably, this is the case also for sample sizes as small as 250.
Econometrics 2014, 2(2), 98-122; doi: 10.3390/econometrics2020098; published 2014-06-25. Authors: Jochen Krause and Marc Paolella.

Econometrics, Vol. 2, Pages 92-97: A One Line Derivation of EGARCH
http://www.mdpi.com/2225-1146/2/2/92
One of the most popular univariate asymmetric conditional volatility models is the exponential GARCH (or EGARCH) specification. In addition to asymmetry, which captures the different effects on conditional volatility of positive and negative shocks of equal magnitude, EGARCH can also accommodate leverage, which is the negative correlation between returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-) maximum likelihood estimator of the EGARCH parameters are not available under general conditions, but rather only for special cases under highly restrictive and unverifiable conditions. It is often argued heuristically that the reason for the lack of general statistical properties arises from the presence in the model of an absolute value of a function of the parameters, which does not permit analytical derivatives, and hence does not permit (quasi-) maximum likelihood estimation. It is shown in this paper for the non-leverage case that: (1) the EGARCH model can be derived from a random coefficient complex nonlinear moving average (RCCNMA) process; and (2) the reason for the lack of statistical properties of the estimators of EGARCH under general conditions is that the stationarity and invertibility conditions for the RCCNMA process are not known.
Econometrics 2014, 2(2), 92-97; doi: 10.3390/econometrics2020092; published 2014-06-23. Authors: Michael McAleer and Christian Hafner.

Econometrics, Vol. 2, Pages 72-91: Credible Granger-Causality Inference with Modest Sample Lengths: A Cross-Sample Validation Approach
http://www.mdpi.com/2225-1146/2/1/72
Credible Granger-causality analysis appears to require post-sample inference, as it is well-known that in-sample fit can be a poor guide to actual forecasting effectiveness. However, post-sample model testing requires an often-consequential a priori partitioning of the data into an “in-sample” period – purportedly utilized only for model specification/estimation – and a “post-sample” period, purportedly utilized (only at the end of the analysis) for model validation/testing purposes. This partitioning is usually infeasible, however, with samples of modest length – e.g., T ≤ 150 – as is common in both quarterly data sets and in monthly data sets where institutional arrangements vary over time, simply because there is in such cases insufficient data available to credibly accomplish both purposes separately. A cross-sample validation (CSV) testing procedure is proposed below which eliminates the aforementioned a priori partitioning and substantially ameliorates this power versus credibility predicament – preserving most of the power of in-sample testing (by utilizing all of the sample data in the test), while also retaining most of the credibility of post-sample testing (by always basing model forecasts on data not utilized in estimating that particular model’s coefficients). Simulations show that the price paid, in terms of power relative to the in-sample Granger-causality F test, is manageable. An illustrative application is given, to a re-analysis of the Engel and West [1] study of the causal relationship between macroeconomic fundamentals and the exchange rate; several of their conclusions are changed by our analysis.
Econometrics 2014, 2(1), 72-91; doi: 10.3390/econometrics2010072; published 2014-03-25. Authors: Richard Ashley and Kwok Tsang.

Econometrics, Vol. 2, Pages 45-71: Bias-Correction in Vector Autoregressive Models: A Simulation Study
http://www.mdpi.com/2225-1146/2/1/45
We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it compares very favorably in non-stationary models.
Econometrics 2014, 2(1), 45-71; doi: 10.3390/econometrics2010045; published 2014-03-13. Authors: Tom Engsted and Thomas Pedersen.

Econometrics, Vol. 2, Pages 20-44: Incorporating Responsiveness to Marketing Efforts in Brand Choice Modeling
http://www.mdpi.com/2225-1146/2/1/20
We put forward a brand choice model with unobserved heterogeneity that concerns responsiveness to marketing efforts. We introduce two latent segments of households. The first segment is assumed to respond to marketing efforts, while households in the second segment do not do so. Whether a specific household is a member of the first or the second segment at a specific purchase occasion is described by household-specific characteristics and characteristics concerning buying behavior. Households may switch between the two responsiveness states over time. When comparing the performance of our model with alternative choice models that account for various forms of heterogeneity for three different datasets, we find better face validity for our parameters. Our model also forecasts better.
Econometrics 2014, 2(1), 20-44; doi: 10.3390/econometrics2010020; published 2014-02-21. Authors: Dennis Fok, Richard Paap and Philip Franses.

Econometrics, Vol. 2, Pages 1-19: Referee Bias and Stoppage Time in Major League Soccer: A Partially Adaptive Approach
http://www.mdpi.com/2225-1146/2/1/1
This study extends prior research on referee bias and close bias in professional soccer by examining whether Major League Soccer (MLS) referees’ discretion over stoppage time (i.e., extra play beyond regulation) is influenced by end-of-regulation match scores and/or home field advantage. To do so, we employ a grouped-data regression model and a partially adaptive model. Both account for the imprecise measurement in reported stoppage time. For the 2011 season we find no home field advantage. In fact, stoppage time is the same with a one or two goal deficit at the end of regulation, regardless of which team is ahead. However, the 2011 results do point to an increase in stoppage time of 12 to 20 seconds for nationally televised matches. For the 2012 season, the nationally televised effect disappears due to an increase in stoppage time for those matches not nationally televised. However, a home field advantage is present. Facing a one-goal deficit at the end of regulation, the home team receives about 33 seconds more stoppage time than a visiting team facing the same deficit.
Econometrics 2014, 2(1), 1-19; doi: 10.3390/econometrics2010001; published 2014-02-17. Authors: Katherine Yewell, Steven Caudill and Franklin Mixon, Jr.

Econometrics, Vol. 1, Pages 249-280: Academic Rankings with RePEc
http://www.mdpi.com/2225-1146/1/3/249
This article describes the collection of data and its use for the computation of rankings within RePEc (Research Papers in Economics). This encompasses the determination of impact factors for journals and working paper series, as well as the ranking of authors, institutions, and geographic regions. The various ranking methods are also compared, using a snapshot of the data.
Econometrics 2013, 1(3), 249-280; doi: 10.3390/econometrics1030249; published 2013-12-17. Author: Christian Zimmermann.

Econometrics, Vol. 1, Pages 236-248: Polynomial Regressions and Nonsense Inference
http://www.mdpi.com/2225-1146/1/3/236
Polynomial specifications are widely used, not only in applied economics, but also in epidemiology, physics, political analysis and psychology, just to mention a few examples. In many cases, the data employed to estimate such specifications are time series that may exhibit stochastic nonstationary behavior. We extend Phillips’ results (Phillips, P. Understanding spurious regressions in econometrics. J. Econom. 1986, 33, 311–340.) by proving that an inference drawn from polynomial specifications, under stochastic nonstationarity, is misleading unless the variables cointegrate. We use a generalized polynomial specification as a vehicle to study its asymptotic and finite-sample properties. Our results, therefore, lead to a call to be cautious whenever practitioners estimate polynomial regressions.
Econometrics 2013, 1(3), 236-248; doi: 10.3390/econometrics1030236; published 2013-11-18. Authors: Daniel Ventosa-Santaulària and Carlos Rodríguez-Caballero.

Econometrics, Vol. 1, Pages 217-235: Ranking Leading Econometrics Journals Using Citations Data from ISI and RePEc
http://www.mdpi.com/2225-1146/1/3/217
The paper focuses on the robustness of rankings of academic journal quality and research impact of 10 leading econometrics journals taken from the Thomson Reuters ISI Web of Science (ISI) Category of Economics, using citations data from ISI and the highly accessible Research Papers in Economics (RePEc) database that is widely used in economics, finance and related disciplines. The journals are ranked using quantifiable static and dynamic Research Assessment Measures (RAMs), with 15 RAMs from ISI and five RAMs from RePEc. The similarities and differences in various RAMs, which are based on alternative weighted and unweighted transformations of citations, are highlighted to show which RAMs are able to provide informational value relative to others. The RAMs include the impact factor, mean citations and non-citations, journal policy, number of high quality papers, and journal influence and article influence. The paper highlights robust rankings based on the harmonic mean of the ranks of 20 RAMs, which in some cases are closely related. It is shown that emphasizing the most widely-used RAM, the two-year impact factor of a journal, can lead to a distorted evaluation of journal quality, impact and influence relative to the harmonic mean of the ranks. Some suggestions regarding the use of the most informative RAMs are also given.
Econometrics 2013, 1(3), 217-235; doi: 10.3390/econometrics1030217; published 2013-11-18. Authors: Chia-Lin Chang and Michael McAleer.

Econometrics, Vol. 1, Pages 207-216: The Geometric Meaning of the Notion of Joint Unpredictability of a Bivariate VAR(1) Stochastic Process
http://www.mdpi.com/2225-1146/1/3/207
This paper investigates, in a particular parametric framework, the geometric meaning of joint unpredictability for a bivariate discrete process. In particular, the paper provides a characterization of joint unpredictability in terms of the distance between information sets in a Hilbert space.
Econometrics 2013, 1(3), 207-216; doi: 10.3390/econometrics1030207; published 2013-11-14. Author: Umberto Triacca.

Econometrics, Vol. 1, Pages 180-206: Structural Panel VARs
http://www.mdpi.com/2225-1146/1/2/180
The paper proposes a structural approach to VAR analysis in panels, which takes into account responses to both idiosyncratic and common structural shocks, while permitting full cross member heterogeneity of the response dynamics. In the context of this structural approach, estimation of the loading matrices for the decomposition into idiosyncratic versus common shocks is straightforward and transparent. The method appears to do remarkably well at uncovering the properties of the sample distribution of the underlying structural dynamics, even when the panels are relatively short, as illustrated in Monte Carlo simulations. Finally, these simulations also illustrate that the SVAR panel method can be used to improve inference, not only for properties of the sample distribution, but also for dynamics of individual members of the panel that lack adequate data for a conventional time series SVAR analysis. This is accomplished by using fitted cross sectional regressions of the sample of estimated panel responses to correlated static measures, and using these to interpolate the member-specific dynamics.
Econometrics 2013, 1(2), 180-206; doi: 10.3390/econometrics1020180; published 2013-09-24. Author: Peter Pedroni.

Econometrics, Vol. 1, Pages 157-179: Parametric and Nonparametric Frequentist Model Selection and Model Averaging
http://www.mdpi.com/2225-1146/1/2/157
This paper presents recent developments in model selection and model averaging for parametric and nonparametric models. While there is an extensive literature on model selection under parametric settings, we present recently developed results in the context of nonparametric models. In applications, estimation and inference are often conducted under the selected model without considering the uncertainty from the selection process. This often leads to inefficiency in results and misleading confidence intervals. Thus, an alternative to model selection is model averaging, where the estimated model is the weighted sum of all the submodels. This reduces model uncertainty. In recent years, there has been significant interest in model averaging and some important developments have taken place in this area. We present results for both the parametric and nonparametric cases. Some possible topics for future research are also indicated.
Econometrics 2013, 1(2), 157-179; doi: 10.3390/econometrics1020157; published 2013-09-20. Authors: Aman Ullah and Huansha Wang.

Econometrics, Vol. 1, Pages 141-156: Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging
http://www.mdpi.com/2225-1146/1/2/141
This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.
Econometrics 2013, 1(2), 141-156; doi: 10.3390/econometrics1020141; published 2013-07-03. Author: Naoya Sueishi.

Econometrics, Vol. 1, Pages 127-140: Forecasting Value-at-Risk Using High-Frequency Information
http://www.mdpi.com/2225-1146/1/1/127
We consider how to use high-frequency 5-minute data in the prediction of quantiles of daily Standard & Poor's 500 (S&P 500) returns. We examine methods that incorporate the high frequency information either indirectly, through combining forecasts (using forecasts generated from returns sampled at different intraday intervals), or directly, through combining high frequency information into one model. We consider subsample averaging, bootstrap averaging and forecast averaging methods for the indirect case, and factor models with a principal component approach for both direct and indirect cases. We show that using high-frequency information is beneficial in forecasting the daily S&P 500 index return quantile (the negative of which is the Value-at-Risk, or VaR), often substantially so, and particularly in forecasting downside risk. Our empirical results show that the averaging methods (subsample averaging, bootstrap averaging, forecast averaging), which serve as different ways of forming the ensemble average from using high-frequency intraday information, provide an excellent forecasting performance compared to using just low-frequency daily information.
Econometrics 2013, 1(1), 127-140; doi: 10.3390/econometrics1010127; published 2013-06-21. Authors: Huiyu Huang and Tae-Hwy Lee.

Econometrics, Vol. 1, Pages 115-126: Ten Things You Should Know about the Dynamic Conditional Correlation Representation
http://www.mdpi.com/2225-1146/1/1/115
The purpose of the paper is to discuss ten things potential users should know about the limits of the Dynamic Conditional Correlation (DCC) representation for estimating and forecasting time-varying conditional correlations. The reasons given for caution about the use of DCC include the following: DCC represents the dynamic conditional covariances of the standardized residuals, and hence does not yield dynamic conditional correlations; DCC is stated rather than derived; DCC has no moments; DCC does not have testable regularity conditions; DCC yields inconsistent two step estimators; DCC has no asymptotic properties; DCC is not a special case of Generalized Autoregressive Conditional Correlation (GARCC), which has testable regularity conditions and standard asymptotic properties; DCC is not dynamic empirically as the effect of news is typically extremely small; DCC cannot be distinguished empirically from diagonal Baba, Engle, Kraft and Kroner (BEKK) in small systems; and DCC may be a useful filter or a diagnostic check, but it is not a model.
Econometrics 2013, 1(1), 115-126; doi: 10.3390/econometrics1010115; published 2013-06-21. Authors: Massimiliano Caporin and Michael McAleer.

Econometrics, Vol. 1, Pages 71-114: Generalized Spatial Two Stage Least Squares Estimation of Spatial Autoregressive Models with Autoregressive Disturbances in the Presence of Endogenous Regressors and Many Instruments
http://www.mdpi.com/2225-1146/1/1/71
This paper studies the generalized spatial two stage least squares (GS2SLS) estimation of spatial autoregressive models with autoregressive disturbances when there are endogenous regressors with many valid instruments. Using many instruments may improve the efficiency of estimators asymptotically, but the bias might be large in finite samples, making the inference inaccurate. We consider the case that the number of instruments K increases with, but at a rate slower than, the sample size, and derive the approximate mean square errors (MSE) that account for the trade-offs between the bias and variance, for both the GS2SLS estimator and a bias-corrected GS2SLS estimator. A criterion function for the optimal K selection can be based on the approximate MSEs. Monte Carlo experiments are provided to show the performance of our procedure of choosing K.
Econometrics 2013, 1(1), 71-114; doi: 10.3390/econometrics1010071; published 2013-05-27. Authors: Fei Jin and Lung-fei Lee.

Econometrics, Vol. 1, Pages 53-70: Outlier Detection in Regression Using an Iterated One-Step Approximation to the Huber-Skip Estimator
http://www.mdpi.com/2225-1146/1/1/53
In regression, we can delete outliers based upon a preliminary estimator and re-estimate the parameters by least squares using the retained observations. We study the properties of an iteratively defined sequence of estimators based on this idea. We relate the sequence to the Huber-skip estimator. We provide a stochastic recursion equation for the estimation error in terms of a kernel, the previous estimation error and a uniformly small error term. The main contribution is the analysis of the solution of the stochastic recursion equation as a fixed point, and the results that the normalized estimation errors are tight and are close to a linear function of the kernel, thus providing a stochastic expansion of the estimators, which is the same as for the Huber-skip. This implies that the iterated estimator is a close approximation of the Huber-skip.
Econometrics 2013, 1(1), 53-70; doi: 10.3390/econometrics1010053; published 2013-05-13. Authors: Søren Johansen and Bent Nielsen.

Econometrics, Vol. 1, Pages 32-52: Constructing U.K. Core Inflation
http://www.mdpi.com/2225-1146/1/1/32
The recent volatile behaviour of U.K. inflation has been officially attributed to a sequence of “unusual” price changes, prompting renewed interest in the construction of measures of “core inflation”, from which such unusual price changes may be down-weighted or even excluded. This paper proposes a new approach to constructing core inflation based on detailed analysis of the temporal stochastic structure of the individual prices underlying a particular index. This approach is illustrated using the section structure of the U.K. retail price index (RPI), providing a number of measures of core inflation that can be automatically calculated and updated to provide both a current assessment and forecasts of the underlying inflation rate in the U.K.
Econometrics 2013, 1(1), 32-52; doi: 10.3390/econometrics1010032; published 2013-04-25. Author: Terence Mills.

Econometrics, Vol. 1, Pages 1-31: On Diagnostic Checking of Vector ARMA-GARCH Models with Gaussian and Student-t Innovations
http://www.mdpi.com/2225-1146/1/1/1
This paper focuses on the diagnostic checking of vector ARMA (VARMA) models with multivariate GARCH errors. For a fitted VARMA-GARCH model with Gaussian or Student-t innovations, we derive the asymptotic distributions of autocorrelation matrices of the cross-product vector of standardized residuals. This is different from the traditional approach that employs only the squared series of standardized residuals. We then study two portmanteau statistics, called Q1(M) and Q2(M), for model checking. A residual-based bootstrap method is provided and demonstrated as an effective way to approximate the diagnostic checking statistics. Simulations are used to compare the performance of the proposed statistics with other methods available in the literature. In addition, we investigate the effect of GARCH shocks on checking a fitted VARMA model. Empirical sizes and powers of the proposed statistics are investigated and the results suggest a procedure that uses Q1(M) and Q2(M) jointly in diagnostic checking. The bivariate time series of FTSE 100 and DAX index returns is used to illustrate the performance of the proposed portmanteau statistics. The results show that it is important to consider the cross-product series of standardized residuals and GARCH effects in model checking.
Econometrics 2013, 1(1), 1-31; doi: 10.3390/econometrics1010001; published 2013-04-04. Authors: Yongning Wang and Ruey Tsay.
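Portmanteau statistics such as Q1(M) and Q2(M) generalize the familiar Ljung-Box idea of summing weighted squared residual autocorrelations over M lags. The sketch below illustrates only that basic idea on a univariate series of squared residuals; it is a generic, stdlib-only illustration, not the authors' multivariate cross-product statistic:

```python
import math
import random

def ljung_box(x, max_lag):
    """Ljung-Box portmanteau statistic Q(M) for a series x.

    Q(M) = n(n+2) * sum_{k=1..M} r_k^2 / (n-k), where r_k is the
    lag-k sample autocorrelation of x.
    """
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n  # lag-0 autocovariance
    q = 0.0
    for k in range(1, max_lag + 1):
        ck = sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n)) / n
        rk = ck / c0  # lag-k autocorrelation
        q += rk * rk / (n - k)
    return n * (n + 2) * q

# Apply the statistic to squared "standardized residuals"; under the
# null of no remaining serial dependence, Q(10) is approximately
# chi-squared with 10 degrees of freedom.
random.seed(0)
residuals = [random.gauss(0.0, 1.0) for _ in range(500)]
q10 = ljung_box([v * v for v in residuals], 10)
```

A fitted GARCH model would be checked by passing its standardized residuals (and their squares, or cross-products in the multivariate case) through such a statistic and comparing against the appropriate chi-squared critical value.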