Econometrics
http://www.mdpi.com/journal/econometrics
Latest open access articles published in Econometrics at http://www.mdpi.com/journal/econometrics

Econometrics, Vol. 3, Pages 199-214: Plug-in Bandwidth Selection for Kernel Density Estimation with Discrete Data
http://www.mdpi.com/2225-1146/3/2/199
This paper proposes plug-in bandwidth selection for kernel density estimation with discrete data via minimization of mean summed square error. Simulation results show that the plug-in bandwidths perform well relative to cross-validated bandwidths in non-uniform designs. We further find that plug-in bandwidths are relatively small. Several empirical examples show that the plug-in bandwidths are typically similar in magnitude to their cross-validated counterparts.

Econometrics (ISSN 2225-1146) 2015, 3(2), 199–214; doi: 10.3390/econometrics3020199; published 31 March 2015. Authors: Chi-Yang Chu, Daniel Henderson and Christopher Parmeter.

Econometrics, Vol. 3, Pages 187-198: Information Recovery in a Dynamic Statistical Markov Model
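As a rough illustration of the smoothing problem in the Chu, Henderson and Parmeter entry above: a kernel density estimate for an unordered discrete variable replaces sample frequencies with smoothed kernel weights, with the bandwidth chosen by some criterion. The sketch below uses least-squares cross-validation with an Aitchison–Aitken kernel; it is not the authors' plug-in estimator, and all function names are illustrative.

```python
import numpy as np

def aitchison_aitken(x, data, lam, c):
    # Aitchison-Aitken kernel for an unordered discrete variable with c categories:
    # weight 1 - lam on a category match, lam / (c - 1) on a mismatch.
    return np.where(data == x, 1.0 - lam, lam / (c - 1))

def kde_discrete(x, data, lam, c):
    # Kernel probability estimate at support point x: the average kernel weight.
    return aitchison_aitken(x, data, lam, c).mean()

def cv_bandwidth(data, c, grid):
    # Least-squares cross-validation: minimise an estimate of the summed
    # squared error over a grid of candidate bandwidths (leave-one-out).
    n = len(data)
    best_score, best_lam = np.inf, grid[0]
    for lam in grid:
        sse = sum(kde_discrete(v, data, lam, c) ** 2 for v in range(c))
        loo = sum(kde_discrete(data[i], np.delete(data, i), lam, c)
                  for i in range(n))
        score = sse - 2.0 * loo / n
        if score < best_score:
            best_score, best_lam = score, lam
    return best_lam

rng = np.random.default_rng(0)
data = rng.choice(3, size=150, p=[0.6, 0.3, 0.1])   # a non-uniform design
lam = cv_bandwidth(data, c=3, grid=np.linspace(1e-6, 0.5, 40))
probs = [kde_discrete(v, data, lam, 3) for v in range(3)]  # sums to one
```

A plug-in rule would instead replace the grid search with a closed-form bandwidth derived from the mean summed square error criterion; the cross-validated value above is the benchmark such rules are compared against.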
http://www.mdpi.com/2225-1146/3/2/187
Although economic processes and systems are in general simple in nature, the underlying dynamics are complicated and seldom understood. Recognizing this, in this paper we use a nonstationary conditional Markov process model of observed aggregate data to learn about and recover causal influence information associated with the underlying dynamic micro-behavior. Estimating equations are used as a link to the data and to model the dynamic conditional Markov process. To recover the unknown transition probabilities, we use an information theoretic approach to model the data and derive a new class of conditional Markov models. A quadratic loss function is used as a basis for selecting the optimal member from the family of possible likelihood-entropy functionals. The asymptotic properties of the resulting estimators are demonstrated, and a range of potential applications is discussed.

Econometrics 2015, 3(2), 187–198; doi: 10.3390/econometrics3020187; published 25 March 2015. Authors: Douglas Miller and George Judge.

Econometrics, Vol. 3, Pages 156-186: A Joint Chow Test for Structural Instability
http://www.mdpi.com/2225-1146/3/1/156
The classical Chow test for structural instability requires strictly exogenous regressors and a break-point specified in advance. In this paper, we consider two generalisations, the one-step recursive Chow test (based on the sequence of studentised recursive residuals) and its supremum counterpart, which relax these requirements. We use results on the strong consistency of regression estimators to show that the one-step test is appropriate for stationary, unit root or explosive processes modelled in the autoregressive distributed lags (ADL) framework. We then use results from extreme value theory to develop a new supremum version of the test, suitable for formal testing of structural instability with an unknown break-point. The test avoids the assumption of normality of errors and is intended to be used in situations where normality can be neither assumed nor established empirically. Simulations show that the supremum test has desirable power properties, in particular against level shifts late in the sample and against outliers. An application to U.K. GDP data is given.

Econometrics 2015, 3(1), 156–186; doi: 10.3390/econometrics3010156; published 12 March 2015. Authors: Bent Nielsen and Andrew Whitby.

Econometrics, Vol. 3, Pages 128-155: Two-Step Lasso Estimation of the Spatial Weights Matrix
http://www.mdpi.com/2225-1146/3/1/128
The vast majority of spatial econometric research relies on the assumption that the spatial network structure is known a priori. This study considers a two-step estimation strategy for estimating the n(n-1) interaction effects in a spatial autoregressive panel model where the spatial dimension is potentially large. The identifying assumption is approximate sparsity of the spatial weights matrix. The proposed estimation methodology exploits the Lasso estimator and mimics two-stage least squares (2SLS) to account for endogeneity of the spatial lag. The developed two-step estimator is of more general interest. It may be used in applications where the number of endogenous regressors and the number of instrumental variables is larger than the number of observations. We derive convergence rates for the two-step Lasso estimator. Our Monte Carlo simulation results show that the two-step estimator is consistent and successfully recovers the spatial network structure for a reasonable sample size, T.

Econometrics 2015, 3(1), 128–155; doi: 10.3390/econometrics3010128; published 9 March 2015. Authors: Achim Ahrens and Arnab Bhattacharjee.

Econometrics, Vol. 3, Pages 101-127: Heteroskedasticity of Unknown Form in Spatial Autoregressive Models with a Moving Average Disturbance Term
http://www.mdpi.com/2225-1146/3/1/101
In this study, I investigate the necessary condition for the consistency of the maximum likelihood estimator (MLE) of spatial models with a spatial moving average process in the disturbance term. I show that the MLE of spatial autoregressive and spatial moving average parameters is generally inconsistent when heteroskedasticity is not considered in the estimation. I also show that the MLE of parameters of exogenous variables is inconsistent and determine its asymptotic bias. I provide simulation results to evaluate the performance of the MLE. The simulation results indicate that the MLE imposes a substantial amount of bias on both autoregressive and moving average parameters.

Econometrics 2015, 3(1), 101–127; doi: 10.3390/econometrics3010101; published 26 February 2015. Author: Osman Doğan.

Econometrics, Vol. 3, Pages 91-100: Entropy Maximization as a Basis for Information Recovery in Dynamic Economic Behavioral Systems
http://www.mdpi.com/2225-1146/3/1/91
As a basis for information recovery in open dynamic microeconomic systems, we emphasize the connection between adaptive intelligent behavior, causal entropy maximization and self-organized equilibrium seeking behavior. This entropy-based causal adaptive behavior framework permits the use of information-theoretic methods as a solution basis for the resulting pure and stochastic inverse economic-econometric problems. We cast the information recovery problem in the form of a binary network and suggest information-theoretic methods to recover estimates of the unknown binary behavioral parameters without explicitly sampling the configuration-arrangement of the sample space.

Econometrics 2015, 3(1), 91–100; doi: 10.3390/econometrics3010091; published 16 February 2015. Author: George Judge.

Econometrics, Vol. 3, Pages 65-90: Finding Starting-Values for the Estimation of Vector STAR Models
http://www.mdpi.com/2225-1146/3/1/65
This paper focuses on finding starting-values for the estimation of Vector STAR models. Based on a Monte Carlo study, different procedures are evaluated. Their performance is assessed with respect to model fit and computational effort. I employ (i) grid search algorithms and (ii) heuristic optimization procedures, namely differential evolution, threshold accepting, and simulated annealing. In the equation-by-equation starting-value search approach the procedures achieve equally good results. Unless the errors are cross-correlated, equation-by-equation search followed by a derivative-based algorithm can handle such an optimization problem sufficiently well. This result holds also for higher-dimensional Vector STAR models, with a slight edge for heuristic methods. For more complex Vector STAR models which require a multivariate search approach, simulated annealing and differential evolution outperform threshold accepting and the grid search.

Econometrics 2015, 3(1), 65–90; doi: 10.3390/econometrics3010065; published 29 January 2015. Author: Frauke Schleer.

Econometrics, Vol. 3, Pages 55-64: On the Interpretation of Instrumental Variables in the Presence of Specification Errors
http://www.mdpi.com/2225-1146/3/1/55
The method of instrumental variables (IV) and the generalized method of moments (GMM), and their applications to the estimation of errors-in-variables and simultaneous equations models in econometrics, require data on a sufficient number of instrumental variables that are both exogenous and relevant. We argue that, in general, such instruments (weak or strong) cannot exist.

Econometrics 2015, 3(1), 55–64; doi: 10.3390/econometrics3010055; published 29 January 2015. Authors: P.A.V.B. Swamy, George Tavlas and Stephen Hall.

Econometrics, Vol. 3, Pages 2-54: Modeling Autoregressive Processes with Moving-Quantiles-Implied Nonlinearity
http://www.mdpi.com/2225-1146/3/1/2
We introduce and investigate some properties of a class of nonlinear time series models based on the moving sample quantiles in the autoregressive data generating process. We derive a test to detect this type of nonlinearity. Using the daily realized volatility data of Standard & Poor's 500 (S&P 500) and several other indices, we obtain good performance from these models in an out-of-sample forecasting exercise, compared with forecasts based on the usual linear heterogeneous autoregressive and other models of realized volatility.

Econometrics 2015, 3(1), 2–54; doi: 10.3390/econometrics3010002; published 16 January 2015. Authors: Isao Ishida and Virmantas Kvedaras.

Econometrics, Vol. 3, Pages 1: Acknowledgement to Reviewers of Econometrics in 2014
http://www.mdpi.com/2225-1146/3/1/1
The editors of Econometrics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2014: [...]

Econometrics 2015, 3(1), 1; doi: 10.3390/econometrics3010001; published 9 January 2015. Editorial by the Econometrics Editorial Office.

Econometrics, Vol. 2, Pages 217-249: The Biggest Myth in Spatial Econometrics
http://www.mdpi.com/2225-1146/2/4/217
There is near universal agreement that estimates and inferences from spatial regression models are sensitive to particular specifications used for the spatial weight structure in these models. We find little theoretical basis for this commonly held belief, if estimates and inferences are based on the true partial derivatives for a well-specified spatial regression model. We conclude that this myth may have arisen from past applied work that incorrectly interpreted the model coefficients as if they were partial derivatives, or from use of misspecified models.

Econometrics 2014, 2(4), 217–249; doi: 10.3390/econometrics2040217; published 23 December 2014. Authors: James LeSage and R. Pace.

Econometrics, Vol. 2, Pages 203-216: Testing for A Set of Linear Restrictions in VARMA Models Using Autoregressive Metric: An Application to Granger Causality Test
http://www.mdpi.com/2225-1146/2/4/203
In this paper we propose a test for a set of linear restrictions in a Vector Autoregressive Moving Average (VARMA) model. This test is based on the autoregressive metric, a notion of distance between two univariate ARMA models, M0 and M1, introduced by Piccolo in 1990. In particular, we show that this set of linear restrictions is equivalent to a null distance d(M0, M1) between two given ARMA models. This result provides the logical basis for using d(M0, M1) = 0 as a null hypothesis in our test. Some Monte Carlo evidence about the finite sample behavior of our testing procedure is provided and two empirical examples are presented.

Econometrics 2014, 2(4), 203–216; doi: 10.3390/econometrics2040203; published 22 December 2014. Authors: Francesca Di Iorio and Umberto Triacca.

Econometrics, Vol. 2, Pages 169-202: Success at the Summer Olympics: How Much Do Economic Factors Explain?
http://www.mdpi.com/2225-1146/2/4/169
Many econometric analyses have attempted to model medal winnings as dependent on per capita GDP and population size. This approach ignores the size and composition of the team of athletes, especially the role of female participation and the role of sports culture, and also provides an inadequate explanation of the variability between the outcomes of countries with similar features. This paper proposes a model that offers two substantive advancements, both of which shed light on previously hidden aspects of Olympic success. First, we propose a selection model that treats the process of fielding any winner and the subsequent level of total winnings as two separate, but related, processes. Second, our model takes a more structural angle, in that we view GDP and population size as inputs into the "production" of athletes. After that production process, those athletes then compete to win medals. We use country-level panel data for the seven Summer Olympiads from 1988 to 2012. The size and composition of the country's Olympic team are shown to be highly significant factors, as is also the past performance, which generates a persistence effect.

Econometrics 2014, 2(4), 169–202; doi: 10.3390/econometrics2040169; published 5 December 2014. Authors: Pravin Trivedi and David Zimmer.

Econometrics, Vol. 2, Pages 151-168: A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model
http://www.mdpi.com/2225-1146/2/4/151
The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test on whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.

Econometrics 2014, 2(4), 151–168; doi: 10.3390/econometrics2040151; published 23 October 2014. Author: Michael Pfaffermayr.

Econometrics, Vol. 2, Pages 145-150: Asymmetry and Leverage in Conditional Volatility Models
http://www.mdpi.com/2225-1146/2/3/145
The three most popular univariate conditional volatility models are the generalized autoregressive conditional heteroskedasticity (GARCH) model of Engle (1982) and Bollerslev (1986), the GJR (or threshold GARCH) model of Glosten, Jagannathan and Runkle (1992), and the exponential GARCH (or EGARCH) model of Nelson (1990, 1991). The underlying stochastic specification to obtain GARCH was demonstrated by Tsay (1987), and that of EGARCH was shown recently in McAleer and Hafner (2014). These models are important in estimating and forecasting volatility, as well as in capturing asymmetry, which is the different effects on conditional volatility of positive and negative effects of equal magnitude, and purportedly in capturing leverage, which is the negative correlation between returns shocks and subsequent shocks to volatility. As there seems to be some confusion in the literature between asymmetry and leverage, as well as which asymmetric models are purported to be able to capture leverage, the purpose of the paper is three-fold, namely: (1) to derive the GJR model from a random coefficient autoregressive process, with appropriate regularity conditions; (2) to show that leverage is not possible in the GJR and EGARCH models; and (3) to present the interpretation of the parameters of the three popular univariate conditional volatility models in a unified manner.

Econometrics 2014, 2(3), 145–150; doi: 10.3390/econometrics2030145; published 24 September 2014. Author: Michael McAleer.

Econometrics, Vol. 2, Pages 123-144: Two-Part Models for Fractional Responses Defined as Ratios of Integers
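To make the asymmetry notion in the McAleer entry above concrete: in a GJR(1,1) recursion, a negative lagged shock receives an extra coefficient on its squared value, so negative and positive shocks of equal magnitude move conditional variance by different amounts. The following is an illustrative sketch, not code from the paper; parameter values are arbitrary.

```python
import numpy as np

def gjr_garch_variance(returns, omega, alpha, gamma, beta):
    # GJR(1,1): h_t = omega + (alpha + gamma * 1[eps_{t-1} < 0]) * eps_{t-1}^2 + beta * h_{t-1}.
    # gamma > 0 is asymmetry: a negative shock raises next-period conditional
    # variance by more than a positive shock of equal magnitude.
    h = np.empty(len(returns))
    h[0] = returns.var()  # simple initialisation
    for t in range(1, len(returns)):
        eps = returns[t - 1]
        h[t] = omega + (alpha + gamma * (eps < 0.0)) * eps ** 2 + beta * h[t - 1]
    return h

# Equal-magnitude shocks of opposite sign: the negative one implies a higher
# next-period variance, and the gap equals gamma * eps^2.
h_pos = gjr_garch_variance(np.array([1.0, 0.0]), 0.05, 0.05, 0.10, 0.90)
h_neg = gjr_garch_variance(np.array([-1.0, 0.0]), 0.05, 0.05, 0.10, 0.90)
```

Note that this asymmetry is a statement about the variance response, not about correlation between returns and subsequent volatility, which is the leverage property the paper argues GJR and EGARCH cannot capture.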
http://www.mdpi.com/2225-1146/2/3/123
This paper discusses two alternative two-part models for fractional response variables that are defined as ratios of integers. The first two-part model assumes a Binomial distribution and known group size. It nests the one-part fractional response model proposed by Papke and Wooldridge (1996) and, thus, allows one to apply Wald, LM and/or LR tests in order to discriminate between the two models. The second model extends the first one by allowing for overdispersion in the data. We demonstrate the usefulness of the proposed two-part models for data on the 401(k) pension plan participation rates used in Papke and Wooldridge (1996).

Econometrics 2014, 2(3), 123–144; doi: 10.3390/econometrics2030123; published 19 September 2014. Authors: Harald Oberhofer and Michael Pfaffermayr.

Econometrics, Vol. 2, Pages 98-122: A Fast, Accurate Method for Value-at-Risk and Expected Shortfall
http://www.mdpi.com/2225-1146/2/2/98
A fast method is developed for value-at-risk and expected shortfall prediction for univariate asset return time series exhibiting leptokurtosis, asymmetry and conditional heteroskedasticity. It is based on a GARCH-type process driven by noncentral t innovations. While the method involves the use of several shortcuts for speed, it performs admirably in terms of accuracy and actually outperforms highly competitive models. Most remarkably, this is the case also for sample sizes as small as 250.

Econometrics 2014, 2(2), 98–122; doi: 10.3390/econometrics2020098; published 25 June 2014. Authors: Jochen Krause and Marc Paolella.

Econometrics, Vol. 2, Pages 92-97: A One Line Derivation of EGARCH
http://www.mdpi.com/2225-1146/2/2/92
One of the most popular univariate asymmetric conditional volatility models is the exponential GARCH (or EGARCH) specification. In addition to asymmetry, which captures the different effects on conditional volatility of positive and negative effects of equal magnitude, EGARCH can also accommodate leverage, which is the negative correlation between returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-) maximum likelihood estimator of the EGARCH parameters are not available under general conditions, but rather only for special cases under highly restrictive and unverifiable conditions. It is often argued heuristically that the reason for the lack of general statistical properties arises from the presence in the model of an absolute value of a function of the parameters, which does not permit analytical derivatives, and hence does not permit (quasi-) maximum likelihood estimation. It is shown in this paper for the non-leverage case that: (1) the EGARCH model can be derived from a random coefficient complex nonlinear moving average (RCCNMA) process; and (2) the reason for the lack of statistical properties of the estimators of EGARCH under general conditions is that the stationarity and invertibility conditions for the RCCNMA process are not known.

Econometrics 2014, 2(2), 92–97; doi: 10.3390/econometrics2020092; published 23 June 2014. Authors: Michael McAleer and Christian Hafner.

Econometrics, Vol. 2, Pages 72-91: Credible Granger-Causality Inference with Modest Sample Lengths: A Cross-Sample Validation Approach
http://www.mdpi.com/2225-1146/2/1/72
Credible Granger-causality analysis appears to require post-sample inference, as it is well-known that in-sample fit can be a poor guide to actual forecasting effectiveness. However, post-sample model testing requires an often-consequential a priori partitioning of the data into an "in-sample" period, purportedly utilized only for model specification/estimation, and a "post-sample" period, purportedly utilized (only at the end of the analysis) for model validation/testing purposes. This partitioning is usually infeasible, however, with samples of modest length, e.g., T ≤ 150, as is common in both quarterly data sets and/or in monthly data sets where institutional arrangements vary over time, simply because there is in such cases insufficient data available to credibly accomplish both purposes separately. A cross-sample validation (CSV) testing procedure is proposed below which both eliminates the aforementioned a priori partitioning and substantially ameliorates this power versus credibility predicament, preserving most of the power of in-sample testing (by utilizing all of the sample data in the test), while also retaining most of the credibility of post-sample testing (by always basing model forecasts on data not utilized in estimating that particular model's coefficients). Simulations show that the price paid, in terms of power relative to the in-sample Granger-causality F test, is manageable. An illustrative application is given, to a re-analysis of the Engel and West [1] study of the causal relationship between macroeconomic fundamentals and the exchange rate; several of their conclusions are changed by our analysis.

Econometrics 2014, 2(1), 72–91; doi: 10.3390/econometrics2010072; published 25 March 2014. Authors: Richard Ashley and Kwok Tsang.

Econometrics, Vol. 2, Pages 45-71: Bias-Correction in Vector Autoregressive Models: A Simulation Study
http://www.mdpi.com/2225-1146/2/1/45
We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it compares very favorably in non-stationary models.

Econometrics 2014, 2(1), 45–71; doi: 10.3390/econometrics2010045; published 13 March 2014. Authors: Tom Engsted and Thomas Pedersen.

Econometrics, Vol. 2, Pages 20-44: Incorporating Responsiveness to Marketing Efforts in Brand Choice Modeling
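The bias-correction idea in the Engsted and Pedersen entry above can be illustrated in the simplest scalar case: OLS estimates of an AR(1) coefficient are biased downward in small samples, a classic first-order approximation of that bias can be added back analytically, and the corrected estimate should be truncated so it is not pushed past the unit root. This is an illustrative sketch of the scalar Kendall-type correction under standard assumptions, not the paper's multivariate formulas.

```python
import numpy as np

def ols_ar1(y):
    # OLS slope in y_t = c + rho * y_{t-1} + e_t (demeaned regression).
    x, z = y[:-1] - y[:-1].mean(), y[1:] - y[1:].mean()
    return float(x @ z) / float(x @ x)

def bias_corrected_ar1(y, bound=0.999):
    # First-order analytical bias: E[rho_hat - rho] is roughly -(1 + 3*rho) / T,
    # so add (1 + 3*rho_hat) / T back in.
    T = len(y) - 1
    rho_hat = ols_ar1(y)
    rho_bc = rho_hat + (1.0 + 3.0 * rho_hat) / T
    # Truncate so the correction cannot push a stationary estimate past a unit root.
    return min(rho_bc, bound)

# Small Monte Carlo: the correction moves the mean estimate toward rho = 0.9.
rng = np.random.default_rng(1)
ols_mean, bc_mean, reps, T = 0.0, 0.0, 400, 60
for _ in range(reps):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.9 * y[t - 1] + rng.standard_normal()
    ols_mean += ols_ar1(y) / reps
    bc_mean += bias_corrected_ar1(y) / reps
```

Bootstrap bias-correction would instead estimate the bias by resampling from the fitted model; the analytical route above is the cheap alternative that the paper finds competitive in the stationary case.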
http://www.mdpi.com/2225-1146/2/1/20
We put forward a brand choice model with unobserved heterogeneity that concerns responsiveness to marketing efforts. We introduce two latent segments of households. The first segment is assumed to respond to marketing efforts, while households in the second segment do not do so. Whether a specific household is a member of the first or the second segment at a specific purchase occasion is described by household-specific characteristics and characteristics concerning buying behavior. Households may switch between the two responsiveness states over time. When comparing the performance of our model with alternative choice models that account for various forms of heterogeneity for three different datasets, we find better face validity for our parameters. Our model also forecasts better.

Econometrics 2014, 2(1), 20–44; doi: 10.3390/econometrics2010020; published 21 February 2014. Authors: Dennis Fok, Richard Paap and Philip Franses.

Econometrics, Vol. 2, Pages 1-19: Referee Bias and Stoppage Time in Major League Soccer: A Partially Adaptive Approach
http://www.mdpi.com/2225-1146/2/1/1
This study extends prior research on referee bias and close bias in professional soccer by examining whether Major League Soccer (MLS) referees' discretion over stoppage time (i.e., extra play beyond regulation) is influenced by end-of-regulation match scores and/or home field advantage. To do so, we employ a grouped-data regression model and a partially adaptive model. Both account for the imprecise measurement in reported stoppage time. For the 2011 season we find no home field advantage. In fact, stoppage time is the same with a one or two goal deficit at the end of regulation, regardless of which team is ahead. However, the 2011 results do point to an increase in stoppage time of 12 to 20 seconds for nationally televised matches. For the 2012 season, the nationally televised effect disappears due to an increase in stoppage time for those matches not nationally televised. However, a home field advantage is present. Facing a one-goal deficit at the end of regulation, the home team receives about 33 seconds more stoppage time than a visiting team facing the same deficit.

Econometrics 2014, 2(1), 1–19; doi: 10.3390/econometrics2010001; published 17 February 2014. Authors: Katherine Yewell, Steven Caudill and Franklin Mixon, Jr.

Econometrics, Vol. 1, Pages 249-280: Academic Rankings with RePEc
http://www.mdpi.com/2225-1146/1/3/249
This article describes the data collection and use of data for the computation of rankings within RePEc (Research Papers in Economics). This encompasses the determination of impact factors for journals and working paper series, as well as the ranking of authors, institutions, and geographic regions. The various ranking methods are also compared, using a snapshot of the data.

Econometrics 2013, 1(3), 249–280; doi: 10.3390/econometrics1030249; published 17 December 2013. Author: Christian Zimmermann.

Econometrics, Vol. 1, Pages 236-248: Polynomial Regressions and Nonsense Inference
http://www.mdpi.com/2225-1146/1/3/236
Polynomial specifications are widely used, not only in applied economics, but also in epidemiology, physics, political analysis and psychology, just to mention a few examples. In many cases, the data employed to estimate such specifications are time series that may exhibit stochastic nonstationary behavior. We extend Phillips' results (Phillips, P. Understanding spurious regressions in econometrics. J. Econom. 1986, 33, 311–340) by proving that an inference drawn from polynomial specifications, under stochastic nonstationarity, is misleading unless the variables cointegrate. We use a generalized polynomial specification as a vehicle to study its asymptotic and finite-sample properties. Our results, therefore, lead to a call to be cautious whenever practitioners estimate polynomial regressions.

Econometrics 2013, 1(3), 236–248; doi: 10.3390/econometrics1030236; published 18 November 2013. Authors: Daniel Ventosa-Santaulària and Carlos Rodríguez-Caballero.

Econometrics, Vol. 1, Pages 217-235: Ranking Leading Econometrics Journals Using Citations Data from ISI and RePEc
http://www.mdpi.com/2225-1146/1/3/217
The paper focuses on the robustness of rankings of academic journal quality and research impact of 10 leading econometrics journals taken from the Thomson Reuters ISI Web of Science (ISI) Category of Economics, using citations data from ISI and the highly accessible Research Papers in Economics (RePEc) database that is widely used in economics, finance and related disciplines. The journals are ranked using quantifiable static and dynamic Research Assessment Measures (RAMs), with 15 RAMs from ISI and five RAMs from RePEc. The similarities and differences in various RAMs, which are based on alternative weighted and unweighted transformations of citations, are highlighted to show which RAMs are able to provide informational value relative to others. The RAMs include the impact factor, mean citations and non-citations, journal policy, number of high quality papers, and journal influence and article influence. The paper highlights robust rankings based on the harmonic mean of the ranks of 20 RAMs, which in some cases are closely related. It is shown that emphasizing the most widely-used RAM, the two-year impact factor of a journal, can lead to a distorted evaluation of journal quality, impact and influence relative to the harmonic mean of the ranks. Some suggestions regarding the use of the most informative RAMs are also given.

Econometrics 2013, 1(3), 217–235; doi: 10.3390/econometrics1030217; published 18 November 2013. Authors: Chia-Lin Chang and Michael McAleer.

Econometrics, Vol. 1, Pages 207-216: The Geometric Meaning of the Notion of Joint Unpredictability of a Bivariate VAR(1) Stochastic Process
http://www.mdpi.com/2225-1146/1/3/207
This paper investigates, in a particular parametric framework, the geometric meaning of joint unpredictability for a bivariate discrete process. In particular, the paper provides a characterization of the joint unpredictability in terms of distance between information sets in a Hilbert space.

Econometrics 2013, 1(3), 207–216; doi: 10.3390/econometrics1030207; published 14 November 2013. Author: Umberto Triacca.

Econometrics, Vol. 1, Pages 180-206: Structural Panel VARs
http://www.mdpi.com/2225-1146/1/2/180
The paper proposes a structural approach to VAR analysis in panels, which takes into account responses to both idiosyncratic and common structural shocks, while permitting full cross member heterogeneity of the response dynamics. In the context of this structural approach, estimation of the loading matrices for the decomposition into idiosyncratic versus common shocks is straightforward and transparent. The method appears to do remarkably well at uncovering the properties of the sample distribution of the underlying structural dynamics, even when the panels are relatively short, as illustrated in Monte Carlo simulations. Finally, these simulations also illustrate that the SVAR panel method can be used to improve inference, not only for properties of the sample distribution, but also for dynamics of individual members of the panel that lack adequate data for a conventional time series SVAR analysis. This is accomplished by using fitted cross sectional regressions of the sample of estimated panel responses to correlated static measures, and using these to interpolate the member-specific dynamics.

Econometrics 2013, 1(2), 180–206; doi: 10.3390/econometrics1020180; published 24 September 2013. Author: Peter Pedroni.

Econometrics, Vol. 1, Pages 157-179: Parametric and Nonparametric Frequentist Model Selection and Model Averaging
http://www.mdpi.com/2225-1146/1/2/157
This paper presents recent developments in model selection and model averaging for parametric and nonparametric models. While there is extensive literature on model selection under parametric settings, we present recently developed results in the context of nonparametric models. In applications, estimation and inference are often conducted under the selected model without considering the uncertainty from the selection process. This often leads to inefficiency in results and misleading confidence intervals. Thus, an alternative to model selection is model averaging, where the estimated model is the weighted sum of all the submodels. This reduces model uncertainty. In recent years, there has been significant interest in model averaging and some important developments have taken place in this area. We present results for both the parametric and nonparametric cases. Some possible topics for future research are also indicated.

Econometrics 2013, 1(2), 157–179; doi: 10.3390/econometrics1020157; published 20 September 2013. Authors: Aman Ullah and Huansha Wang.

Econometrics, Vol. 1, Pages 141-156: Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging
http://www.mdpi.com/2225-1146/1/2/141
This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

Econometrics 2013, 1(2), 141–156; doi: 10.3390/econometrics1020141; published 3 July 2013. Author: Naoya Sueishi.

Econometrics, Vol. 1, Pages 127-140: Forecasting Value-at-Risk Using High-Frequency Information
http://www.mdpi.com/2225-1146/1/1/127
We consider how to use high-frequency 5-minute data in the prediction of quantiles of daily Standard &amp; Poor’s 500 (S&amp;P 500) returns. We examine methods that incorporate the high-frequency information either indirectly, by combining forecasts (generated from returns sampled at different intraday intervals), or directly, by incorporating the high-frequency information into one model. We consider subsample averaging, bootstrap averaging and forecast averaging methods for the indirect case, and factor models with a principal components approach for both the direct and indirect cases. We show that using high-frequency information is beneficial in forecasting the daily S&amp;P 500 index return quantile (Value-at-Risk, or VaR, is simply its negative), often substantially so, particularly in forecasting downside risk. Our empirical results show that the averaging methods (subsample averaging, bootstrap averaging, forecast averaging), which are different ways of forming an ensemble average from high-frequency intraday information, deliver excellent forecasting performance compared with using only low-frequency daily information.Econometrics2013-06-2111Article10.3390/econometrics10101271271402225-11462013-06-21doi: 10.3390/econometrics1010127Huiyu HuangTae-Hwy Lee<![CDATA[Econometrics, Vol. 1, Pages 115-126: Ten Things You Should Know about the Dynamic Conditional Correlation Representation]]>
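The "forecast averaging" idea can be illustrated with a minimal sketch, assuming simulated returns in place of the S&amp;P 500 data: each intraday sampling interval yields its own daily-return series, each series gives a historical-quantile forecast, and the forecasts are averaged. The paper's actual estimators (bootstrap averaging, principal-component factor models) are more elaborate; the sampling grids and simulation below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05                        # 5% left-tail quantile
intervals = [5, 10, 30]             # assumed sampling grids, in minutes

forecasts = []
for m in intervals:
    # Daily returns aggregated from m-minute returns (simulated here;
    # in practice these come from the intraday price record).
    n_intraday = 390 // m           # ticks in a 6.5-hour session
    intraday = rng.normal(0.0, 0.01 / np.sqrt(n_intraday),
                          size=(500, n_intraday))
    daily = intraday.sum(axis=1)
    forecasts.append(np.quantile(daily, alpha))  # per-interval forecast

q_avg = np.mean(forecasts)          # forecast-averaged quantile
var_avg = -q_avg                    # VaR is the negative of the quantile
print(round(var_avg, 4))
```

Averaging the per-interval forecasts forms the ensemble; each member sees the same trading day through a different sampling resolution.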
http://www.mdpi.com/2225-1146/1/1/115
The purpose of the paper is to discuss ten things potential users should know about the limits of the Dynamic Conditional Correlation (DCC) representation for estimating and forecasting time-varying conditional correlations. The reasons given for caution about the use of DCC include the following: DCC represents the dynamic conditional covariances of the standardized residuals, and hence does not yield dynamic conditional correlations; DCC is stated rather than derived; DCC has no moments; DCC does not have testable regularity conditions; DCC yields inconsistent two step estimators; DCC has no asymptotic properties; DCC is not a special case of Generalized Autoregressive Conditional Correlation (GARCC), which has testable regularity conditions and standard asymptotic properties; DCC is not dynamic empirically as the effect of news is typically extremely small; DCC cannot be distinguished empirically from diagonal Baba, Engle, Kraft and Kroner (BEKK) in small systems; and DCC may be a useful filter or a diagnostic check, but it is not a model.Econometrics2013-06-2111Article10.3390/econometrics10101151151262225-11462013-06-21doi: 10.3390/econometrics1010115Massimiliano CaporinMichael McAleer<![CDATA[Econometrics, Vol. 1, Pages 71-114: Generalized Spatial Two Stage Least Squares Estimation of Spatial Autoregressive Models with Autoregressive Disturbances in the Presence of Endogenous Regressors and Many Instruments]]>
http://www.mdpi.com/2225-1146/1/1/71
This paper studies the generalized spatial two stage least squares (GS2SLS) estimation of spatial autoregressive models with autoregressive disturbances when there are endogenous regressors with many valid instruments. Using many instruments may improve the asymptotic efficiency of estimators, but the bias might be large in finite samples, making inference inaccurate. We consider the case in which the number of instruments K increases with, but at a rate slower than, the sample size, and derive approximate mean square errors (MSE) that account for the trade-off between bias and variance, for both the GS2SLS estimator and a bias-corrected GS2SLS estimator. A criterion function for selecting the optimal K can be based on the approximate MSEs. Monte Carlo experiments show the performance of our procedure for choosing K.Econometrics2013-05-2711Article10.3390/econometrics1010071711142225-11462013-05-27doi: 10.3390/econometrics1010071Fei JinLung-fei Lee<![CDATA[Econometrics, Vol. 1, Pages 53-70: Outlier Detection in Regression Using an Iterated One-Step Approximation to the Huber-Skip Estimator]]>
http://www.mdpi.com/2225-1146/1/1/53
In regression, outliers can be deleted on the basis of a preliminary estimator, and the parameters then re-estimated by least squares from the retained observations. We study the properties of an iteratively defined sequence of estimators based on this idea and relate the sequence to the Huber-skip estimator. We provide a stochastic recursion equation for the estimation error in terms of a kernel, the previous estimation error and a uniformly small error term. The main contribution is the analysis of the solution of the stochastic recursion equation as a fixed point, and the results that the normalized estimation errors are tight and are close to a linear function of the kernel, thus providing a stochastic expansion of the estimators that is the same as for the Huber-skip. This implies that the iterated estimator is a close approximation of the Huber-skip.Econometrics2013-05-1311Article10.3390/econometrics101005353702225-11462013-05-13doi: 10.3390/econometrics1010053Søren JohansenBent Nielsen<![CDATA[Econometrics, Vol. 1, Pages 32-52: Constructing U.K. Core Inflation]]>
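The delete-and-refit iteration described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's formal construction: the cut-off constant, the full-sample OLS start, and the simulated contamination are all assumed choices.

```python
import numpy as np

# Simulated regression with 10 contaminated observations.
rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)
y[:10] += 15.0                      # inject outliers

def ols(x, y):
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

beta = ols(x, y)                    # preliminary full-sample estimator
c = 2.5                             # cut-off in residual s.d. units (assumed)
for _ in range(5):                  # iterate the one-step approximation
    resid = y - (beta[0] + beta[1] * x)
    keep = np.abs(resid) <= c * np.std(resid)
    beta = ols(x[keep], y[keep])    # least squares on retained observations

print(beta.round(2))
```

After a few iterations the retained sample stabilizes and the estimates sit close to the clean-data values, mirroring the fixed-point behaviour the paper analyzes.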
http://www.mdpi.com/2225-1146/1/1/32
The recent volatile behaviour of U.K. inflation has been officially attributed to a sequence of “unusual” price changes, prompting renewed interest in the construction of measures of “core inflation”, in which such unusual price changes may be down-weighted or even excluded. This paper proposes a new approach to constructing core inflation based on a detailed analysis of the temporal stochastic structure of the individual prices underlying a particular index. The approach is illustrated using the section structure of the U.K. retail price index (RPI), yielding a number of measures of core inflation that can be automatically calculated and updated to provide both a current assessment and forecasts of the underlying U.K. inflation rate.Econometrics2013-04-2511Article10.3390/econometrics101003232522225-11462013-04-25doi: 10.3390/econometrics1010032Terence Mills<![CDATA[Econometrics, Vol. 1, Pages 1-31: On Diagnostic Checking of Vector ARMA-GARCH Models with Gaussian and Student-t Innovations]]>
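The general notion of down-weighting or excluding unusual price changes can be illustrated with a weighted trimmed mean, a common way of operationalising core inflation. Note this is not the paper's method, which instead models the temporal stochastic structure of the individual RPI sections; the section-level changes below are hypothetical numbers.

```python
def trimmed_core_inflation(price_changes, trim=0.15):
    """Drop the top and bottom `trim` share of changes, average the rest."""
    ordered = sorted(price_changes)
    k = int(len(ordered) * trim)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

# Hypothetical month of section-level annual price changes (percent):
sections = [2.1, 1.8, 2.4, 9.5, 2.0, -3.0, 2.2, 1.9, 2.3, 2.5]
print(round(trimmed_core_inflation(sections), 2))  # excludes 9.5 and -3.0
```

The two extreme sections are excluded, so the core measure (2.15%) sits below the headline mean (2.37%) that the unusual +9.5% change would otherwise pull up.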
http://www.mdpi.com/2225-1146/1/1/1
This paper focuses on the diagnostic checking of vector ARMA (VARMA) models with multivariate GARCH errors. For a fitted VARMA-GARCH model with Gaussian or Student-t innovations, we derive the asymptotic distributions of autocorrelation matrices of the cross-product vector of standardized residuals. This differs from the traditional approach, which employs only the squared series of standardized residuals. We then study two portmanteau statistics, Q1(M) and Q2(M), for model checking. A residual-based bootstrap method is provided and demonstrated as an effective way to approximate the diagnostic checking statistics. Simulations are used to compare the performance of the proposed statistics with other methods available in the literature. In addition, we investigate the effect of GARCH shocks on checking a fitted VARMA model. Empirical sizes and powers of the proposed statistics are investigated, and the results suggest using Q1(M) and Q2(M) jointly in diagnostic checking. The bivariate time series of FTSE 100 and DAX index returns is used to illustrate the performance of the proposed portmanteau statistics. The results show that it is important to consider the cross-product series of standardized residuals and GARCH effects in model checking.Econometrics2013-04-0411Article10.3390/econometrics10100011312225-11462013-04-04doi: 10.3390/econometrics1010001Yongning WangRuey Tsay
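A portmanteau check of this kind can be sketched in the simplest univariate case: a Ljung-Box-type statistic on the squared standardized residuals. The paper's Q1(M) and Q2(M) generalize this to the cross-product vector of multivariate standardized residuals; the i.i.d. series below is a stand-in for actual model residuals.

```python
import numpy as np

def ljung_box(z, M):
    """Ljung-Box statistic over lags 1..M for a series z."""
    n = len(z)
    zc = z - z.mean()
    denom = zc @ zc
    stat = 0.0
    for k in range(1, M + 1):
        rho_k = (zc[k:] @ zc[:-k]) / denom   # lag-k autocorrelation
        stat += rho_k ** 2 / (n - k)
    return n * (n + 2) * stat

# Under a correctly specified model the standardized residuals are
# serially uncorrelated, so the statistic is roughly chi-squared(M).
rng = np.random.default_rng(3)
z = rng.normal(size=1000)
stat = ljung_box(z ** 2, M=10)               # check the squared series
print(round(stat, 2))
```

Large values of the statistic relative to the chi-squared reference would signal remaining dependence that the fitted conditional-variance model has not captured.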