Econometrics
http://www.mdpi.com/journal/econometrics
Latest open access articles published in Econometrics at http://www.mdpi.com/journal/econometrics

Econometrics, Vol. 3, Pages 667-697: A Joint Specification Test for Response Probabilities in Unordered Multinomial Choice Models
http://www.mdpi.com/2225-1146/3/3/667
Estimation results obtained by parametric models may be seriously misleading when the model is misspecified or poorly approximates the true model. This study proposes a test that jointly tests the specifications of multiple response probabilities in unordered multinomial choice models. The test statistic is asymptotically chi-square distributed, consistent against a fixed alternative and able to detect a local alternative approaching the null at a rate slower than the parametric rate. We show that, when the sample size is small, rejection regions can be calculated by a simple parametric bootstrap procedure. The size and power of the tests are investigated by Monte Carlo experiments.
Econometrics 2015, 3(3), 667-697; doi: 10.3390/econometrics3030667; ISSN 2225-1146. Article, published 2015-09-16. Author: Masamune Iwasawa.

Econometrics, Vol. 3, Pages 654-666: On Bootstrap Inference for Quantile Regression Panel Data: A Monte Carlo Study
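The parametric bootstrap for small-sample rejection regions described in the Iwasawa abstract (pp. 667-697) follows a generic recipe: fit the null model, simulate repeatedly from the fit, and recompute the test statistic on each simulated sample. A minimal sketch, with all function names hypothetical and a toy skewness statistic standing in for the paper's joint specification statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

def parametric_bootstrap_critical_value(stat_fn, fit_fn, simulate_fn, data,
                                        n_boot=999, alpha=0.05):
    """Generic parametric bootstrap for a test's critical value.

    stat_fn(data)         -> observed test statistic
    fit_fn(data)          -> fitted parameters under the null model
    simulate_fn(theta, n) -> a new sample of size n from the fitted null
    """
    theta_hat = fit_fn(data)
    n = len(data)
    boot_stats = np.array([stat_fn(simulate_fn(theta_hat, n))
                           for _ in range(n_boot)])
    # the (1 - alpha) quantile of the bootstrap distribution is the critical value
    return np.quantile(boot_stats, 1 - alpha)

# Toy illustration: test H0 "data are N(mu, 1)" with an |skewness| statistic
def stat_fn(x):
    z = (x - x.mean()) / x.std()
    return abs(np.mean(z ** 3))

fit_fn = lambda x: np.mean(x)                       # estimate mu under H0
simulate_fn = lambda mu, n: rng.normal(mu, 1.0, n)  # draw from fitted null

data = rng.normal(2.0, 1.0, 200)
cv = parametric_bootstrap_critical_value(stat_fn, fit_fn, simulate_fn, data)
reject = stat_fn(data) > cv
```

The same skeleton applies to the paper's chi-square-type statistic once `stat_fn`, `fit_fn` and `simulate_fn` are replaced by the multinomial choice model's counterparts.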
http://www.mdpi.com/2225-1146/3/3/654
This paper evaluates bootstrap inference methods for quantile regression panel data models. We propose constructing confidence intervals for the parameters of interest using a percentile bootstrap with pairwise resampling. We study three different bootstrap procedures. First, bootstrap samples are constructed by resampling only from the cross-sectional units with replacement. Second, temporal resampling is performed from the time series. Finally, a more general resampling scheme, which samples from both the cross-sectional and temporal dimensions, is introduced. The bootstrap algorithms are computationally attractive and easy to use in practice. We evaluate the performance of the bootstrap confidence intervals by means of Monte Carlo simulations. The results show that the bootstrap methods have good finite sample performance for both location and location-scale models.
Econometrics 2015, 3(3), 654-666; doi: 10.3390/econometrics3030654. Article, published 2015-09-10. Authors: Antonio Galvao, Gabriel Montes-Rojas.

Econometrics, Vol. 3, Pages 633-653: A New Family of Consistent and Asymptotically-Normal Estimators for the Extremal Index
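The first resampling scheme in the Galvao and Montes-Rojas abstract (pp. 654-666), drawing whole cross-sectional units with replacement, can be sketched with a percentile confidence interval. A pooled median stands in (purely for illustration) for the panel quantile regression estimator the paper studies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated balanced panel: N cross-sectional units, T time periods
N, T = 50, 10
y = rng.normal(size=(N, T)) + rng.normal(size=(N, 1))  # noise plus unit effects

def estimator(panel):
    # stand-in for the panel quantile regression estimator: pooled median
    return np.quantile(panel, 0.5)

def cross_sectional_bootstrap_ci(panel, estimator, n_boot=999, level=0.95):
    """Percentile CI resampling whole units with replacement, keeping each
    unit's time series intact (the abstract's first bootstrap scheme)."""
    N = panel.shape[0]
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, N, size=N)  # resample unit indices
        boot[b] = estimator(panel[idx])   # re-estimate on the bootstrap panel
    a = (1 - level) / 2
    return np.quantile(boot, a), np.quantile(boot, 1 - a)

lo, hi = cross_sectional_bootstrap_ci(y, estimator)
```

Temporal resampling replaces `panel[idx]` with `panel[:, idx_t]`, and the general scheme draws both index sets.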
http://www.mdpi.com/2225-1146/3/3/633
The extremal index (θ) is the key parameter for extending extreme value theory results from i.i.d. to stationary sequences. One important property of this parameter is that its inverse determines the degree of clustering in the extremes. This article introduces a novel interpretation of the extremal index as a limiting probability characterized by two Poisson processes, together with a simple family of estimators derived from this new characterization. Unlike most estimators for θ in the literature, this estimator is consistent, asymptotically normal and very stable across partitions of the sample. Further, we show in an extensive simulation study that this estimator outperforms the logs, blocks and runs estimation methods in finite samples. Finally, we apply the new estimator to test for clustering of extremes in monthly time series of unemployment growth and inflation rates and conclude that runs of large unemployment rates are more prolonged than periods of high inflation.
Econometrics 2015, 3(3), 633-653; doi: 10.3390/econometrics3030633. Article, published 2015-08-28. Author: Jose Olmo.

Econometrics, Vol. 3, Pages 610-632: Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting
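One of the benchmark methods the Olmo abstract (pp. 633-653) compares against, the blocks estimator of θ, is easy to sketch: the ratio of the number of blocks containing at least one exceedance of a threshold to the total number of exceedances. This is a hedged illustration of the benchmark, not the paper's new estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

def blocks_estimator(x, u, block_len):
    """Blocks estimator of the extremal index theta: (# blocks with at least
    one exceedance of u) / (# exceedances of u), capped at 1."""
    x = np.asarray(x)
    n_exc = np.sum(x > u)
    if n_exc == 0:
        return np.nan
    k = len(x) // block_len
    blocks = x[:k * block_len].reshape(k, block_len)
    n_blocks_with_exc = np.sum((blocks > u).any(axis=1))
    return min(1.0, n_blocks_with_exc / n_exc)

# For i.i.d. data theta = 1; clustering of extremes pushes the estimate lower
iid = rng.normal(size=100_000)
u = np.quantile(iid, 0.99)
theta_iid = blocks_estimator(iid, u, block_len=100)
```

The finite-threshold bias of such estimators is one reason the abstract emphasizes stability across partitions of the sample.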
http://www.mdpi.com/2225-1146/3/3/610
Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and the corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while distributional asymmetry has little or moderate impact; these phenomena tend to be more pronounced under variance targeting. Some effects intensify further if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.
Econometrics 2015, 3(3), 610-632; doi: 10.3390/econometrics3030610. Article, published 2015-08-10. Authors: Stanislav Anatolyev, Stanislav Khrapov.

Econometrics, Vol. 3, Pages 590-609: A Kolmogorov-Smirnov Based Test for Comparing the Predictive Accuracy of Two Sets of Forecasts
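Variance targeting, as compared in the Anatolyev and Khrapov abstract (pp. 610-632), pins the GARCH(1,1) intercept to the sample variance so only the dynamic parameters remain to be estimated. A minimal sketch of the idea (toy parameter values, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(3)

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model."""
    h = np.empty_like(returns)
    h[0] = np.var(returns)  # initialize at the sample variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h

def variance_targeted_omega(returns, alpha, beta):
    """Variance targeting: fix omega so the unconditional model variance,
    omega / (1 - alpha - beta), equals the sample variance. This removes
    omega from the set of parameters to be estimated."""
    return np.var(returns) * (1.0 - alpha - beta)

r = rng.normal(scale=0.01, size=1000)
alpha, beta = 0.05, 0.90
omega = variance_targeted_omega(r, alpha, beta)
h = garch11_variance(r, omega, alpha, beta)
# implied unconditional variance matches the sample variance by construction
implied = omega / (1.0 - alpha - beta)
```

The paper's point is that this convenience can cost estimation precision, especially under heavy-tailed standardized returns.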
http://www.mdpi.com/2225-1146/3/3/590
This paper introduces a complementary statistical test for distinguishing between the predictive accuracy of two sets of forecasts. We propose a non-parametric test founded upon the principles of the Kolmogorov-Smirnov (KS) test, referred to as the KS Predictive Accuracy (KSPA) test. The KSPA test serves two distinct purposes: first, it determines whether there exists a statistically significant difference between the distributions of forecast errors; second, it exploits the principles of stochastic dominance to determine whether the forecasts with the lower error also report a stochastically smaller error than forecasts from a competing model, thereby enabling a distinction between the predictive accuracy of the forecasts. We perform a simulation study of the size and power of the proposed test and report the results for different noise distributions, sample sizes and forecasting horizons. The simulation results indicate that the KSPA test is correctly sized and robust across forecasting horizons and sample sizes, with significant accuracy gains reported especially for small sample sizes. Real world applications are also considered to illustrate the applicability of the proposed KSPA test in practice.
Econometrics 2015, 3(3), 590-609; doi: 10.3390/econometrics3030590. Article, published 2015-08-04. Authors: Hossein Hassani, Emmanuel Silva.

Econometrics, Vol. 3, Pages 577-589: A Spectral Model of Turnover Reduction
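The ingredient underlying the KSPA test of the Hassani and Silva abstract (pp. 590-609) is the two-sample KS statistic applied to forecast errors: the largest gap between the two empirical CDFs. A self-contained sketch of that statistic (the full KSPA procedure, with its p-values and stochastic dominance step, is in the paper):

```python
import numpy as np

def ks_2sample_stat(e1, e2):
    """Two-sample Kolmogorov-Smirnov statistic on two sets of absolute
    forecast errors: max gap between their empirical CDFs."""
    e1, e2 = np.sort(np.abs(e1)), np.sort(np.abs(e2))
    grid = np.concatenate([e1, e2])
    F1 = np.searchsorted(e1, grid, side="right") / len(e1)
    F2 = np.searchsorted(e2, grid, side="right") / len(e2)
    return np.max(np.abs(F1 - F2))

rng = np.random.default_rng(4)
err_a = rng.normal(scale=1.0, size=500)  # model A forecast errors
err_b = rng.normal(scale=2.0, size=500)  # model B: errors twice as dispersed
d = ks_2sample_stat(err_a, err_b)
```

A large `d` flags a significant difference between the error distributions; which model's errors are stochastically smaller is then read off the one-sided comparison of the CDFs.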
http://www.mdpi.com/2225-1146/3/3/577
We give a simple explicit formula for turnover reduction when a large number of alphas are traded on the same execution platform and trades are crossed internally. We model turnover reduction via alpha correlations. Then, for a large number of alphas, turnover reduction is related to the largest eigenvalue and the corresponding eigenvector of the alpha correlation matrix.
Econometrics 2015, 3(3), 577-589; doi: 10.3390/econometrics3030577. Article, published 2015-07-29. Author: Zura Kakushadze.

Econometrics, Vol. 3, Pages 561-576: A Note on the Asymptotic Normality of the Kernel Deconvolution Density Estimator with Logarithmic Chi-Square Noise
http://www.mdpi.com/2225-1146/3/3/561
This paper studies the asymptotic normality of the kernel deconvolution estimator when the noise distribution is logarithmic chi-square; both independent and identically distributed observations and strong mixing observations are considered. The dependent case of the result is applied to obtain the pointwise asymptotic distribution of the deconvolution volatility density estimator in discrete-time stochastic volatility models.
Econometrics 2015, 3(3), 561-576; doi: 10.3390/econometrics3030561. Short Note, published 2015-07-21. Author: Yang Zu.

Econometrics, Vol. 3, Pages 532-560: New Graphical Methods and Test Statistics for Testing Composite Normality
http://www.mdpi.com/2225-1146/3/3/532
Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.
Econometrics 2015, 3(3), 532-560; doi: 10.3390/econometrics3030532. Article, published 2015-07-15. Author: Marc Paolella.

Econometrics, Vol. 3, Pages 525-531: Efficient Estimation in Heteroscedastic Varying Coefficient Models
http://www.mdpi.com/2225-1146/3/3/525
This paper considers statistical inference for the heteroscedastic varying coefficient model. We propose an estimator for the coefficient functions that is more efficient than the conventional local-linear estimator. We establish asymptotic normality for the proposed estimator and conduct simulations to illustrate the performance of the proposed method.
Econometrics 2015, 3(3), 525-531; doi: 10.3390/econometrics3030525. Article, published 2015-07-15. Authors: Chuanhua Wei, Lijie Wan.

Econometrics, Vol. 3, Pages 494-524: Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects
http://www.mdpi.com/2225-1146/3/3/494
We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
Econometrics 2015, 3(3), 494-524; doi: 10.3390/econometrics3030494. Article, published 2015-07-10. Author: Guangjie Li.

Econometrics, Vol. 3, Pages 466-493: A New Approach to Model Verification, Falsification and Selection
http://www.mdpi.com/2225-1146/3/3/466
This paper shows that a qualitative analysis, i.e., an assessment of the consistency of a hypothesized sign pattern for structural arrays with the sign pattern of the estimated reduced form, can always provide decisive insight into a model's validity, both in general and compared to other models. Qualitative analysis can show that it is impossible for some models to have generated the data used to estimate the reduced form, even though standard specification tests might show the model to be adequate. A partially specified structural hypothesis can be falsified by estimating as few as one reduced form equation. Zero restrictions in the structure can themselves be falsified. It is further shown how the information content of the hypothesized structural sign patterns can be measured using a commonly applied concept of statistical entropy. The lower the hypothesized structural sign pattern's entropy, the more a priori information it proposes about the sign pattern of the estimated reduced form. As a hypothesized structural sign pattern's entropy falls, it becomes more subject to type 1 error and less subject to type 2 error. Three cases illustrate the approach taken here.
Econometrics 2015, 3(3), 466-493; doi: 10.3390/econometrics3030466. Article, published 2015-06-29. Authors: Andrew Buck, George Lady.

Econometrics, Vol. 3, Pages 443-465: Bayesian Approach to Disentangling Technical and Environmental Productivity
http://www.mdpi.com/2225-1146/3/2/443
This paper models the firm's production process as a system of simultaneous technologies for desirable and undesirable outputs. Desirable outputs are produced by transforming inputs via the conventional transformation function, whereas (consistent with the material balance condition) undesirable outputs are by-produced via the so-called "residual generation technology". By separating the production of undesirable outputs from that of desirable outputs, not only do we ensure that undesirable outputs are not modeled as inputs and thus satisfy costly disposability, but we are also able to differentiate between the traditional (desirable-output-oriented) technical productivity and the undesirable-output-oriented environmental, or so-called "green", productivity. To measure the latter, we derive a Solow-type Divisia environmental productivity index which, unlike conventional productivity indices, allows crediting the ceteris paribus reduction in undesirable outputs. Our index also provides a meaningful way to decompose environmental productivity into environmental technological and efficiency changes.
Econometrics 2015, 3(2), 443-465; doi: 10.3390/econometrics3020443. Article, published 2015-06-16. Authors: Emir Malikov, Subal Kumbhakar, Efthymios Tsionas.

Econometrics, Vol. 3, Pages 412-442: Strategic Interaction Model with Censored Strategies
http://www.mdpi.com/2225-1146/3/2/412
In this paper, we develop a new model of a static game of incomplete information with a large number of players. The model has two key distinguishing features. First, the strategies are subject to threshold effects, and can be interpreted as dependent censored random variables. Second, in contrast to most of the existing literature, our inferential theory relies on a large number of players, rather than a large number of independent repetitions of the same game. We establish existence and uniqueness of the pure strategy equilibrium, and prove that the censored equilibrium strategies satisfy a near-epoch dependence property. We then show that the normal maximum likelihood and least squares estimators of this censored model are consistent and asymptotically normal. Our model can be useful in a wide variety of settings, including investment, R&D, labor supply, and social interaction applications.
Econometrics 2015, 3(2), 412-442; doi: 10.3390/econometrics3020412. Article, published 2015-06-01. Author: Nazgul Jenish.

Econometrics, Vol. 3, Pages 376-411: Asymptotic Distribution and Finite Sample Bias Correction of QML Estimators for Spatial Error Dependence Model
http://www.mdpi.com/2225-1146/3/2/376
In studying the asymptotic and finite sample properties of quasi-maximum likelihood (QML) estimators for spatial linear regression models, much attention has been paid to the spatial lag dependence (SLD) model; little has been given to its companion, the spatial error dependence (SED) model. In particular, the effect of spatial dependence on the convergence rate of the QML estimators has not been formally studied, and methods for correcting finite sample bias of the QML estimators have not been given. This paper fills these gaps. Of the two, bias correction is particularly important to applications of this model, as it potentially leads to much improved inferences for the regression coefficients. Contrary to common perceptions, both the large and small sample behaviors of the QML estimators for the SED model can differ from those for the SLD model in terms of the rate of convergence and the magnitude of bias. Monte Carlo results show that the bias can be severe, and the proposed bias correction procedure is very effective.
Econometrics 2015, 3(2), 376-411; doi: 10.3390/econometrics3020376. Article, published 2015-05-21. Authors: Shew Liu, Zhenlin Yang.

Econometrics, Vol. 3, Pages 355-375: A Jackknife Correction to a Test for Cointegration Rank
http://www.mdpi.com/2225-1146/3/2/355
This paper investigates the performance of a jackknife correction to a test for cointegration rank in a vector autoregressive system. The limiting distributions of the jackknife-corrected statistics are derived and the critical values of these distributions are tabulated. Based on these critical values, the finite sample size and power properties of the jackknife-corrected tests are compared with the usual rank test statistic as well as statistics involving a small sample correction and a Bartlett correction, in addition to a bootstrap method. The simulations reveal that all of the corrected tests can provide finite sample size improvements while maintaining power, although the bootstrap procedure is the most robust across the simulation designs considered.
Econometrics 2015, 3(2), 355-375; doi: 10.3390/econometrics3020355. Article, published 2015-05-20. Author: Marcus Chambers.

Econometrics, Vol. 3, Pages 339-354: The Seasonal KPSS Test: Examining Possible Applications with Monthly Data and Additional Deterministic Terms
http://www.mdpi.com/2225-1146/3/2/339
The literature has been notably less definitive on finite sample studies of seasonal stationarity tests than of seasonal unit root tests. Although using seasonal stationarity and unit root tests together is advised to determine correctly the most appropriate form of the trend in a seasonal time series, such a use is rarely noted in the relevant studies on this topic. Recently, the seasonal KPSS test, with a null hypothesis of no seasonal unit roots, and based on quarterly data, was introduced in the literature. The asymptotic theory of the seasonal KPSS test depends on whether the data have been filtered by a preliminary regression; more specifically, one may extract deterministic components, such as the mean and trend, from the data before testing. In this paper, we examine the effects of de-trending on the properties of the seasonal KPSS test in finite samples. A sketch of the test's limit theory is subsequently provided. Moreover, a Monte Carlo study is conducted to analyze the behavior of the test for a monthly time series. The focus on this frequency is significant because the test was introduced for quarterly data. Overall, the results indicate that the seasonal KPSS test preserves its good size and power properties. Furthermore, our results corroborate those reported elsewhere in the literature for conventional stationarity tests, suggesting that nonparametric corrections of residual variances may lead to better in-sample properties of the seasonal KPSS test. Finally, the seasonal KPSS test is applied to a monthly series of the United States (US) consumer price index, in which we identify a number of seasonal unit roots. [1]

[1] Table 1 in this paper is copyrighted and initially published by JMASM in 2012, Volume 11, Issue 1, pp. 69-77, ISSN: 1538-9472, JMASM Inc., PO Box 48023, Oak Park, MI 48237, USA, ea@jmasm.com.
Econometrics 2015, 3(2), 339-354; doi: 10.3390/econometrics3020339. Article, published 2015-05-13. Author: Ghassen Montasser.

Econometrics, Vol. 3, Pages 317-338: The SAR Model for Very Large Datasets: A Reduced Rank Approach
http://www.mdpi.com/2225-1146/3/2/317
The SAR model is widely used in spatial econometrics to model Gaussian processes on a discrete spatial lattice, but for large datasets, fitting it becomes computationally prohibitive, and hence, its usefulness can be limited. A computationally-efficient spatial model is the spatial random effects (SRE) model, and in this article, we calibrate it to the SAR model of interest using a generalisation of the Moran operator that allows for heteroskedasticity and an asymmetric SAR spatial dependence matrix. In general, spatial data have a measurement-error component, which we model, and we use restricted maximum likelihood to estimate the SRE model covariance parameters; its required computational time is only the order of the size of the dataset. Our implementation is demonstrated using mean usual weekly income data from the 2011 Australian Census.
Econometrics 2015, 3(2), 317-338; doi: 10.3390/econometrics3020317. Article, published 2015-05-11. Authors: Sandy Burden, Noel Cressie, David Steel.

Econometrics, Vol. 3, Pages 289-316: Selection Criteria in Regime Switching Conditional Volatility Models
http://www.mdpi.com/2225-1146/3/2/289
A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.
Econometrics 2015, 3(2), 289-316; doi: 10.3390/econometrics3020289. Article, published 2015-05-11. Author: Thomas Chuffart.

Econometrics, Vol. 3, Pages 265-288: Nonparametric Regression Estimation for Multivariate Null Recurrent Processes
http://www.mdpi.com/2225-1146/3/2/265
This paper discusses nonparametric kernel regression with the regressor being a \(d\)-dimensional \(\beta\)-null recurrent process in the presence of conditional heteroscedasticity. We show that the mean function estimator is consistent with convergence rate \(\sqrt{n(T)h^{d}}\), where \(n(T)\) is the number of regenerations for a \(\beta\)-null recurrent process, and that the limiting distribution (with proper normalization) is normal. Furthermore, we show that the two-step estimator for the volatility function is consistent. The finite sample performance of the estimator is quite reasonable when the leave-one-out cross-validation method is used for bandwidth selection. We apply the proposed method to study the relationship of the Federal funds rate with the 3-month and 5-year T-bill rates and find that the relationship is nonlinear. Furthermore, the in-sample and out-of-sample performance of the nonparametric model is far better than that of the linear model.
Econometrics 2015, 3(2), 265-288; doi: 10.3390/econometrics3020265. Article, published 2015-04-14. Authors: Biqing Cai, Dag Tjøstheim.

Econometrics, Vol. 3, Pages 240-264: Detecting Location Shifts during Model Selection by Step-Indicator Saturation
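The mean function estimator in the Cai and Tjøstheim abstract (pp. 265-288) is of the local constant (Nadaraya-Watson) kernel regression type. A minimal one-dimensional sketch of that estimator on simulated data (the paper's theory concerns β-null recurrent regressors, which this toy stationary example does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(6)

def nadaraya_watson(x0, X, y, h):
    """Local constant (Nadaraya-Watson) kernel estimate of E[y | X = x0]
    with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)  # kernel weights
    return np.sum(w * y) / np.sum(w)        # weighted average of responses

# Simulated nonlinear regression: y = sin(X) + noise
X = rng.uniform(-2, 2, 1000)
y = np.sin(X) + rng.normal(scale=0.1, size=1000)
m_hat = nadaraya_watson(0.5, X, y, h=0.2)   # estimate the mean function at 0.5
```

In practice the bandwidth `h` would be chosen by leave-one-out cross-validation, as the abstract recommends.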
http://www.mdpi.com/2225-1146/3/2/240
To capture location shifts in the context of model selection, we propose selecting significant step indicators from a saturating set added to the union of all of the candidate variables. The null retention frequency and approximate non-centrality of a selection test are derived using a 'split-half' analysis, the simplest specialization of a multiple-path block-search algorithm. Monte Carlo simulations, extended to sequential reduction, confirm the accuracy of nominal significance levels under the null and show retentions when location shifts occur, improving the non-null retention frequency compared to the corresponding impulse-indicator saturation (IIS)-based method and the lasso.
Econometrics 2015, 3(2), 240-264; doi: 10.3390/econometrics3020240. Article, published 2015-04-14. Authors: Jennifer Castle, Jurgen Doornik, David Hendry, Felix Pretis.

Econometrics, Vol. 3, Pages 233-239: A Pitfall in Using the Characterization of Granger Non-Causality in Vector Autoregressive Models
http://www.mdpi.com/2225-1146/3/2/233
It is well known that in a vector autoregressive (VAR) model, Granger non-causality is characterized by a set of restrictions on the VAR coefficients. This characterization has been derived under the assumption of non-singularity of the covariance matrix of the innovations. This note shows that if this assumption is violated, then the characterization of Granger non-causality in a VAR model fails to hold. In these situations, Granger non-causality test results must be interpreted with caution.
Econometrics 2015, 3(2), 233-239; doi: 10.3390/econometrics3020233. Article, published 2015-04-09. Author: Umberto Triacca.

Econometrics, Vol. 3, Pages 215-232: Return and Volatility Spillovers across Equity Markets in Mainland China, Hong Kong and the United States
http://www.mdpi.com/2225-1146/3/2/215
Examinations of the dynamics of daily returns and volatility in the stock markets of the U.S., Hong Kong and mainland China (Shanghai and Shenzhen) from 2 January 2001 to 8 February 2013 suggest: (1) evidence of unidirectional return spillovers from the U.S. to the other three markets, but no spillover between Hong Kong and either of the two mainland China markets; (2) evidence of unidirectional ARCH and GARCH effects from the U.S. to the other three markets; (3) correlations of returns vary across markets, with the highest correlation of 93.5% between the two Chinese markets, a medium correlation of 30% between the mainland China and Hong Kong markets, and low correlations of 6.4% and 7.2% between the U.S. and China's two markets; thus, international investors may benefit by allocating their assets in China's markets; (4) the patterns of dynamic conditional correlations from the DCC model suggest an increase in correlation between China and other stock markets since the most recent financial crisis of 2007.
Econometrics 2015, 3(2), 215-232; doi: 10.3390/econometrics3020215. Article, published 2015-04-02. Authors: Hassan Mohammadi, Yuting Tan.

Econometrics, Vol. 3, Pages 199-214: Plug-in Bandwidth Selection for Kernel Density Estimation with Discrete Data
http://www.mdpi.com/2225-1146/3/2/199
This paper proposes plug-in bandwidth selection for kernel density estimation with discrete data via minimization of mean summed square error. Simulation results show that the plug-in bandwidths perform well, relative to cross-validated bandwidths, in non-uniform designs. We further find that plug-in bandwidths are relatively small. Several empirical examples show that the plug-in bandwidths are typically similar in magnitude to their cross-validated counterparts.
Econometrics 2015, 3(2), 199-214; doi: 10.3390/econometrics3020199. Article, published 2015-03-31. Authors: Chi-Yang Chu, Daniel Henderson, Christopher Parmeter.

Econometrics, Vol. 3, Pages 187-198: Information Recovery in a Dynamic Statistical Markov Model
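Discrete kernel density estimation of the kind discussed in the Chu, Henderson and Parmeter abstract (pp. 199-214) typically uses an Aitchison-Aitken-style kernel for unordered categories, where the bandwidth λ interpolates between the frequency estimator (λ = 0) and the uniform distribution (λ = (c-1)/c). A sketch of the estimator itself; the paper's plug-in rule for choosing λ is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

def discrete_kernel_density(x, sample, lam, c):
    """Aitchison-Aitken kernel estimate of P(X = x) for an unordered discrete
    variable with c categories and bandwidth lam in [0, (c - 1) / c]:
    weight 1 - lam on a match, lam / (c - 1) spread over non-matches."""
    sample = np.asarray(sample)
    w = np.where(sample == x, 1.0 - lam, lam / (c - 1))
    return w.mean()

c = 4
sample = rng.integers(0, c, size=400)
# lam = 0 recovers the raw frequency; lam = (c - 1) / c gives uniform 1 / c
p0_freq = discrete_kernel_density(0, sample, lam=0.0, c=c)
p0_unif = discrete_kernel_density(0, sample, lam=(c - 1) / c, c=c)
```

The estimated probabilities sum to one over the support for any admissible λ, since each observation's kernel weights sum to one.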
http://www.mdpi.com/2225-1146/3/2/187
Although economic processes and systems are in general simple in nature, the underlying dynamics are complicated and seldom understood. Recognizing this, in this paper we use a nonstationary-conditional Markov process model of observed aggregate data to learn about and recover causal influence information associated with the underlying dynamic micro-behavior. Estimating equations are used as a link to the data and to model the dynamic conditional Markov process. To recover the unknown transition probabilities, we use an information theoretic approach to model the data and derive a new class of conditional Markov models. A quadratic loss function is used as a basis for selecting the optimal member from the family of possible likelihood-entropy functional(s). The asymptotic properties of the resulting estimators are demonstrated, and a range of potential applications is discussed.
Econometrics 2015, 3(2), 187-198; doi: 10.3390/econometrics3020187. Article, published 2015-03-25. Authors: Douglas Miller, George Judge.

Econometrics, Vol. 3, Pages 156-186: A Joint Chow Test for Structural Instability
http://www.mdpi.com/2225-1146/3/1/156
The classical Chow test for structural instability requires strictly exogenous regressors and a break-point specified in advance. In this paper, we consider two generalisations, the one-step recursive Chow test (based on the sequence of studentised recursive residuals) and its supremum counterpart, which relax these requirements. We use results on the strong consistency of regression estimators to show that the one-step test is appropriate for stationary, unit root or explosive processes modelled in the autoregressive distributed lags (ADL) framework. We then use results in extreme value theory to develop a new supremum version of the test, suitable for formal testing of structural instability with an unknown break-point. The test assumes the normality of errors and is intended to be used in situations where this can be either assumed or established empirically. Simulations show that the supremum test has desirable power properties, in particular against level shifts late in the sample and against outliers. An application to U.K. GDP data is given.
Econometrics 2015, 3(1), 156-186; doi: 10.3390/econometrics3010156. Article, published 2015-03-12. Authors: Bent Nielsen, Andrew Whitby.

Econometrics, Vol. 3, Pages 128-155: Two-Step Lasso Estimation of the Spatial Weights Matrix
http://www.mdpi.com/2225-1146/3/1/128
The vast majority of spatial econometric research relies on the assumption that the spatial network structure is known a priori. This study considers a two-step estimation strategy for estimating the n(n-1) interaction effects in a spatial autoregressive panel model where the spatial dimension is potentially large. The identifying assumption is approximate sparsity of the spatial weights matrix. The proposed estimation methodology exploits the Lasso estimator and mimics two-stage least squares (2SLS) to account for endogeneity of the spatial lag. The developed two-step estimator is of more general interest: it may be used in applications where the number of endogenous regressors and the number of instrumental variables are larger than the number of observations. We derive convergence rates for the two-step Lasso estimator. Our Monte Carlo simulation results show that the two-step estimator is consistent and successfully recovers the spatial network structure for reasonable sample size, T.
Econometrics 2015, 3(1), 128-155; doi: 10.3390/econometrics3010128. Article, published 2015-03-09. Authors: Achim Ahrens, Arnab Bhattacharjee.

Econometrics, Vol. 3, Pages 101-127: Heteroskedasticity of Unknown Form in Spatial Autoregressive Models with a Moving Average Disturbance Term
http://www.mdpi.com/2225-1146/3/1/101
In this study, I investigate the necessary condition for the consistency of the maximum likelihood estimator (MLE) of spatial models with a spatial moving average process in the disturbance term. I show that the MLE of the spatial autoregressive and spatial moving average parameters is generally inconsistent when heteroskedasticity is not considered in the estimation. I also show that the MLE of the parameters of the exogenous variables is inconsistent, and I determine its asymptotic bias. I provide simulation results to evaluate the performance of the MLE. The simulation results indicate that the MLE imposes a substantial amount of bias on both the autoregressive and moving average parameters.
Econometrics 2015, 3(1), 101-127; doi: 10.3390/econometrics3010101. Article, published 2015-02-26. Author: Osman Doğan.

Econometrics, Vol. 3, Pages 91-100: Entropy Maximization as a Basis for Information Recovery in Dynamic Economic Behavioral Systems
http://www.mdpi.com/2225-1146/3/1/91
As a basis for information recovery in open dynamic microeconomic systems, we emphasize the connection between adaptive intelligent behavior, causal entropy maximization and self-organized equilibrium seeking behavior. This entropy-based causal adaptive behavior framework permits the use of information-theoretic methods as a solution basis for the resulting pure and stochastic inverse economic-econometric problems. We cast the information recovery problem in the form of a binary network and suggest information-theoretic methods to recover estimates of the unknown binary behavioral parameters without explicitly sampling the configuration-arrangement of the sample space.
Econometrics 2015, 3(1), 91-100; Article; doi:10.3390/econometrics3010091; published 2015-02-16. Author: George Judge.
<![CDATA[Econometrics, Vol. 3, Pages 65-90: Finding Starting-Values for the Estimation of Vector STAR Models]]>
http://www.mdpi.com/2225-1146/3/1/65
This paper focuses on finding starting-values for the estimation of Vector STAR models. Based on a Monte Carlo study, different procedures are evaluated. Their performance is assessed with respect to model fit and computational effort. I employ (i) grid search algorithms and (ii) heuristic optimization procedures, namely differential evolution, threshold accepting, and simulated annealing. In the equation-by-equation starting-value search approach, the procedures achieve equally good results. Unless the errors are cross-correlated, equation-by-equation search followed by a derivative-based algorithm can handle such an optimization problem sufficiently well. This result also holds for higher-dimensional Vector STAR models, with a slight edge for heuristic methods. For more complex Vector STAR models which require a multivariate search approach, simulated annealing and differential evolution outperform threshold accepting and the grid search.
Econometrics 2015, 3(1), 65-90; Article; doi:10.3390/econometrics3010065; published 2015-01-29. Author: Frauke Schleer.
<![CDATA[Econometrics, Vol. 3, Pages 55-64: On the Interpretation of Instrumental Variables in the Presence of Specification Errors]]>
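The equation-by-equation grid search evaluated in the starting-values abstract above can be sketched for a single logistic STAR equation: for a fixed transition slope and location, the model is linear in the remaining coefficients, so they can be concentrated out by OLS and the grid point with the smallest residual sum of squares kept as the starting value. The sketch below is a hypothetical minimal illustration (the function name, grid, and single-regressor model are assumptions, not the paper's implementation):

```python
import numpy as np

def lstar_gridsearch(y, x, gammas, cs):
    """Grid search for LSTAR starting values: for each (gamma, c) the
    linear coefficients are concentrated out by OLS, and the pair with
    the smallest residual sum of squares wins."""
    best_rss, best = np.inf, None
    for g in gammas:
        for c in cs:
            G = 1.0 / (1.0 + np.exp(-g * (x - c)))            # logistic transition
            X = np.column_stack([np.ones_like(x), x, x * G])  # linear-in-coefficients
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            if rss < best_rss:
                best_rss, best = rss, (g, c, beta)
    return best

# toy usage on a simulated LSTAR(1)-type relation (parameters illustrative)
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.5 * x + 1.0 * x / (1 + np.exp(-4 * x)) + 0.1 * rng.normal(size=500)
g, c, beta = lstar_gridsearch(y, x,
                              gammas=np.linspace(1, 10, 10),
                              cs=np.linspace(-1, 1, 9))
```

A derivative-based optimizer would then be started from (g, c, beta), as the abstract describes for the equation-by-equation approach.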
http://www.mdpi.com/2225-1146/3/1/55
The method of instrumental variables (IV) and the generalized method of moments (GMM), and their applications to the estimation of errors-in-variables and simultaneous equations models in econometrics, require data on a sufficient number of instrumental variables that are both exogenous and relevant. We argue that, in general, such instruments (weak or strong) cannot exist.
Econometrics 2015, 3(1), 55-64; Article; doi:10.3390/econometrics3010055; published 2015-01-29. Authors: P.A.V.B. Swamy, George Tavlas, Stephen Hall.
<![CDATA[Econometrics, Vol. 3, Pages 2-54: Modeling Autoregressive Processes with Moving-Quantiles-Implied Nonlinearity]]>
http://www.mdpi.com/2225-1146/3/1/2
We introduce and investigate some properties of a class of nonlinear time series models based on the moving sample quantiles in the autoregressive data generating process. We derive a test to detect this type of nonlinearity. In an out-of-sample forecasting exercise using the daily realized volatility data of Standard &amp; Poor’s 500 (S&amp;P 500) and several other indices, these models perform well compared with forecasts based on the usual linear heterogeneous autoregressive and other models of realized volatility.
Econometrics 2015, 3(1), 2-54; Article; doi:10.3390/econometrics3010002; published 2015-01-16. Authors: Isao Ishida, Virmantas Kvedaras.
<![CDATA[Econometrics, Vol. 3, Pages 1: Acknowledgement to Reviewers of Econometrics in 2014]]>
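The moving-sample-quantile construction in the abstract above can be illustrated with a small sketch: compute a rolling quantile of past observations and include it as an extra regressor in an autoregression. This is a hypothetical minimal version (the window, quantile level, and AR(1) specification are assumptions; the paper's exact model may differ):

```python
import numpy as np

def moving_quantile(y, window, q):
    """Moving sample quantile of the previous `window` observations,
    so the regressor at time t uses only past values."""
    out = np.full(len(y), np.nan)
    for t in range(window, len(y)):
        out[t] = np.quantile(y[t - window:t], q)
    return out

rng = np.random.default_rng(1)
y = rng.normal(size=200)
mq = moving_quantile(y, window=20, q=0.5)  # rolling median

# augment an AR(1): regress y_t on [1, y_{t-1}, mq_t] over the valid rows
rows = np.arange(20, len(y))
X = np.column_stack([np.ones(len(rows)), y[rows - 1], mq[rows]])
beta, *_ = np.linalg.lstsq(X, y[rows], rcond=None)
```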
http://www.mdpi.com/2225-1146/3/1/1
The editors of Econometrics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2014: [...]
Econometrics 2015, 3(1), 1; Editorial; doi:10.3390/econometrics3010001; published 2015-01-09. Author: Econometrics Editorial Office.
<![CDATA[Econometrics, Vol. 2, Pages 217-249: The Biggest Myth in Spatial Econometrics]]>
http://www.mdpi.com/2225-1146/2/4/217
There is near universal agreement that estimates and inferences from spatial regression models are sensitive to particular specifications used for the spatial weight structure in these models. We find little theoretical basis for this commonly held belief, if estimates and inferences are based on the true partial derivatives for a well-specified spatial regression model. We conclude that this myth may have arisen from past applied work that incorrectly interpreted the model coefficients as if they were partial derivatives, or from use of misspecified models.
Econometrics 2014, 2(4), 217-249; Article; doi:10.3390/econometrics2040217; published 2014-12-23. Authors: James LeSage, R. Pace.
<![CDATA[Econometrics, Vol. 2, Pages 203-216: Testing for A Set of Linear Restrictions in VARMA Models Using Autoregressive Metric: An Application to Granger Causality Test]]>
http://www.mdpi.com/2225-1146/2/4/203
In this paper we propose a test for a set of linear restrictions in a Vector Autoregressive Moving Average (VARMA) model. This test is based on the autoregressive metric, a notion of distance between two univariate ARMA models, M0 and M1, introduced by Piccolo in 1990. In particular, we show that this set of linear restrictions is equivalent to a null distance d(M0,M1) between two given ARMA models. This result provides the logical basis for using d(M0,M1) = 0 as a null hypothesis in our test. Some Monte Carlo evidence about the finite sample behavior of our testing procedure is provided and two empirical examples are presented.
Econometrics 2014, 2(4), 203-216; Article; doi:10.3390/econometrics2040203; published 2014-12-22. Authors: Francesca Di Iorio, Umberto Triacca.
<![CDATA[Econometrics, Vol. 2, Pages 169-202: Success at the Summer Olympics: How Much Do Economic Factors Explain?]]>
http://www.mdpi.com/2225-1146/2/4/169
Many econometric analyses have attempted to model medal winnings as dependent on per capita GDP and population size. This approach ignores the size and composition of the team of athletes, especially the role of female participation and the role of sports culture, and also provides an inadequate explanation of the variability between the outcomes of countries with similar features. This paper proposes a model that offers two substantive advancements, both of which shed light on previously hidden aspects of Olympic success. First, we propose a selection model that treats the process of fielding any winner and the subsequent level of total winnings as two separate, but related, processes. Second, our model takes a more structural angle, in that we view GDP and population size as inputs into the “production” of athletes. After that production process, those athletes then compete to win medals. We use country-level panel data for the seven Summer Olympiads from 1988 to 2012. The size and composition of the country’s Olympic team are shown to be highly significant factors, as is also the past performance, which generates a persistence effect.
Econometrics 2014, 2(4), 169-202; Article; doi:10.3390/econometrics2040169; published 2014-12-05. Authors: Pravin Trivedi, David Zimmer.
<![CDATA[Econometrics, Vol. 2, Pages 151-168: A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model]]>
http://www.mdpi.com/2225-1146/2/4/151
The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test on whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.
Econometrics 2014, 2(4), 151-168; Article; doi:10.3390/econometrics2040151; published 2014-10-23. Author: Michael Pfaffermayr.
<![CDATA[Econometrics, Vol. 2, Pages 145-150: Asymmetry and Leverage in Conditional Volatility Models]]>
http://www.mdpi.com/2225-1146/2/3/145
The three most popular univariate conditional volatility models are the generalized autoregressive conditional heteroskedasticity (GARCH) model of Engle (1982) and Bollerslev (1986), the GJR (or threshold GARCH) model of Glosten, Jagannathan and Runkle (1992), and the exponential GARCH (or EGARCH) model of Nelson (1990, 1991). The underlying stochastic specification to obtain GARCH was demonstrated by Tsay (1987), and that of EGARCH was shown recently in McAleer and Hafner (2014). These models are important in estimating and forecasting volatility, as well as in capturing asymmetry, which is the different effects on conditional volatility of positive and negative effects of equal magnitude, and purportedly in capturing leverage, which is the negative correlation between returns shocks and subsequent shocks to volatility. As there seems to be some confusion in the literature between asymmetry and leverage, as well as which asymmetric models are purported to be able to capture leverage, the purpose of the paper is three-fold, namely, (1) to derive the GJR model from a random coefficient autoregressive process, with appropriate regularity conditions; (2) to show that leverage is not possible in the GJR and EGARCH models; and (3) to present the interpretation of the parameters of the three popular univariate conditional volatility models in a unified manner.
Econometrics 2014, 2(3), 145-150; Article; doi:10.3390/econometrics2030145; published 2014-09-24. Author: Michael McAleer.
<![CDATA[Econometrics, Vol. 2, Pages 123-144: Two-Part Models for Fractional Responses Defined as Ratios of Integers]]>
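The asymmetry the abstract above distinguishes from leverage can be made concrete with the standard GJR(1,1) variance recursion, where a negative shock adds an extra gamma to the ARCH coefficient. The sketch below filters a shock series through that recursion (the parameter values are illustrative assumptions, and the recursion is the textbook GJR form, not the paper's random-coefficient derivation):

```python
import numpy as np

def gjr_garch_filter(eps, omega, alpha, gamma, beta):
    """GJR(1,1) conditional variance recursion:
        h_t = omega + (alpha + gamma * 1[eps_{t-1} < 0]) * eps_{t-1}**2 + beta * h_{t-1}
    Negative shocks raise next-period variance by an extra gamma * eps**2,
    which is the asymmetry discussed in the abstract."""
    h = np.empty(len(eps))
    # start at the unconditional variance under symmetric shocks
    h[0] = omega / (1 - alpha - gamma / 2 - beta)
    for t in range(1, len(eps)):
        h[t] = omega + (alpha + gamma * (eps[t - 1] < 0)) * eps[t - 1] ** 2 + beta * h[t - 1]
    return h

# a negative shock of size 1 raises next-period variance by gamma more
# than a positive shock of the same magnitude
h_neg = gjr_garch_filter(np.array([-1.0, 0.0]), 0.05, 0.05, 0.10, 0.85)
h_pos = gjr_garch_filter(np.array([+1.0, 0.0]), 0.05, 0.05, 0.10, 0.85)
```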
http://www.mdpi.com/2225-1146/2/3/123
This paper discusses two alternative two-part models for fractional response variables that are defined as ratios of integers. The first two-part model assumes a Binomial distribution and known group size. It nests the one-part fractional response model proposed by Papke and Wooldridge (1996) and, thus, allows one to apply Wald, LM and/or LR tests in order to discriminate between the two models. The second model extends the first one by allowing for overdispersion in the data. We demonstrate the usefulness of the proposed two-part models for data on the 401(k) pension plan participation rates used in Papke and Wooldridge (1996).
Econometrics 2014, 2(3), 123-144; Article; doi:10.3390/econometrics2030123; published 2014-09-19. Authors: Harald Oberhofer, Michael Pfaffermayr.
<![CDATA[Econometrics, Vol. 2, Pages 98-122: A Fast, Accurate Method for Value-at-Risk and Expected Shortfall]]>
http://www.mdpi.com/2225-1146/2/2/98
A fast method is developed for value-at-risk and expected shortfall prediction for univariate asset return time series exhibiting leptokurtosis, asymmetry and conditional heteroskedasticity. It is based on a GARCH-type process driven by noncentral t innovations. While the method involves the use of several shortcuts for speed, it performs admirably in terms of accuracy and actually outperforms highly competitive models. Most remarkably, this is the case also for sample sizes as small as 250.
Econometrics 2014, 2(2), 98-122; Article; doi:10.3390/econometrics2020098; published 2014-06-25. Authors: Jochen Krause, Marc Paolella.
<![CDATA[Econometrics, Vol. 2, Pages 92-97: A One Line Derivation of EGARCH]]>
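For orientation on the two risk measures in the VaR/ES abstract above: value-at-risk at level alpha is the negative of the alpha-quantile of the return distribution, and expected shortfall is the negative of the mean return beyond that quantile. The sketch below computes both empirically from simulated heavy-tailed returns; it is a generic illustration (with a Student-t stand-in and illustrative parameters), not the paper's fast noncentral-t method:

```python
import numpy as np

def var_es(returns, alpha=0.01):
    """Empirical VaR and ES at level alpha from a sample of returns.
    VaR is the negative alpha-quantile; ES is the negative mean of the
    returns at or below that quantile, so ES >= VaR by construction."""
    var = -np.quantile(returns, alpha)
    tail = returns[returns <= -var]
    es = -tail.mean()
    return var, es

# heavy-tailed stand-in for GARCH-type innovations (illustrative only)
rng = np.random.default_rng(7)
r = 0.01 * rng.standard_t(df=5, size=100_000)
v, e = var_es(r, alpha=0.01)
```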
http://www.mdpi.com/2225-1146/2/2/92
One of the most popular univariate asymmetric conditional volatility models is the exponential GARCH (or EGARCH) specification. In addition to asymmetry, which captures the different effects on conditional volatility of positive and negative effects of equal magnitude, EGARCH can also accommodate leverage, which is the negative correlation between returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-) maximum likelihood estimator of the EGARCH parameters are not available under general conditions, but rather only for special cases under highly restrictive and unverifiable conditions. It is often argued heuristically that the reason for the lack of general statistical properties arises from the presence in the model of an absolute value of a function of the parameters, which does not permit analytical derivatives, and hence does not permit (quasi-) maximum likelihood estimation. It is shown in this paper for the non-leverage case that: (1) the EGARCH model can be derived from a random coefficient complex nonlinear moving average (RCCNMA) process; and (2) the reason for the lack of statistical properties of the estimators of EGARCH under general conditions is that the stationarity and invertibility conditions for the RCCNMA process are not known.
Econometrics 2014, 2(2), 92-97; Article; doi:10.3390/econometrics2020092; published 2014-06-23. Authors: Michael McAleer, Christian Hafner.
<![CDATA[Econometrics, Vol. 2, Pages 72-91: Credible Granger-Causality Inference with Modest Sample Lengths: A Cross-Sample Validation Approach]]>
http://www.mdpi.com/2225-1146/2/1/72
Credible Granger-causality analysis appears to require post-sample inference, as it is well-known that in-sample fit can be a poor guide to actual forecasting effectiveness. However, post-sample model testing requires an often-consequential a priori partitioning of the data into an “in-sample” period – purportedly utilized only for model specification/estimation – and a “post-sample” period, purportedly utilized (only at the end of the analysis) for model validation/testing purposes. This partitioning is usually infeasible, however, with samples of modest length – e.g., T ≤ 150 – as is common in quarterly data sets and in monthly data sets where institutional arrangements vary over time, simply because there is in such cases insufficient data available to credibly accomplish both purposes separately. A cross-sample validation (CSV) testing procedure is proposed below which eliminates the aforementioned a priori partitioning and substantially ameliorates this power versus credibility predicament – preserving most of the power of in-sample testing (by utilizing all of the sample data in the test), while also retaining most of the credibility of post-sample testing (by always basing model forecasts on data not utilized in estimating that particular model’s coefficients). Simulations show that the price paid, in terms of power relative to the in-sample Granger-causality F test, is manageable. An illustrative application is given, to a re-analysis of the Engel and West [1] study of the causal relationship between macroeconomic fundamentals and the exchange rate; several of their conclusions are changed by our analysis.
Econometrics 2014, 2(1), 72-91; Article; doi:10.3390/econometrics2010072; published 2014-03-25. Authors: Richard Ashley, Kwok Tsang.
<![CDATA[Econometrics, Vol. 2, Pages 45-71: Bias-Correction in Vector Autoregressive Models: A Simulation Study]]>
http://www.mdpi.com/2225-1146/2/1/45
We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it compares very favorably in non-stationary models.
Econometrics 2014, 2(1), 45-71; Article; doi:10.3390/econometrics2010045; published 2014-03-13. Authors: Tom Engsted, Thomas Pedersen.
<![CDATA[Econometrics, Vol. 2, Pages 20-44: Incorporating Responsiveness to Marketing Efforts in Brand Choice Modeling]]>
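The bootstrap bias-correction that the bias-correction abstract above compares against the analytical formula can be sketched in the simplest univariate AR(1) case (a stand-in for the VAR setting; the function names, sample size, and parameter values are illustrative assumptions, and the analytical formula itself is not reproduced): estimate by OLS, simulate paths from the fitted model, measure the average bias of OLS on those paths, and subtract it from the original estimate.

```python
import numpy as np

def ols_ar1(y):
    """OLS slope of y_t on y_{t-1}, with intercept."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    return np.linalg.lstsq(X, y[1:], rcond=None)[0][1]

def bootstrap_bias_correct(y, n_boot=200, seed=0):
    """Parametric-bootstrap bias correction of the AR(1) OLS estimate."""
    rng = np.random.default_rng(seed)
    phi_hat = ols_ar1(y)
    T = len(y)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.normal(size=T)
        ysim = np.empty(T)
        ysim[0] = e[0]
        for t in range(1, T):
            ysim[t] = phi_hat * ysim[t - 1] + e[t]
        boot[b] = ols_ar1(ysim)
    bias = boot.mean() - phi_hat   # average bias of OLS at phi_hat
    # subtracting the bias can push a near-unit-root estimate into the
    # non-stationary region -- the risk the abstract highlights
    return phi_hat, phi_hat - bias

# toy usage: OLS is biased toward zero for a persistent AR(1)
rng = np.random.default_rng(42)
e = rng.normal(size=100)
y = np.empty(100)
y[0] = e[0]
for t in range(1, 100):
    y[t] = 0.9 * y[t - 1] + e[t]
phi_raw, phi_corr = bootstrap_bias_correct(y)
```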
http://www.mdpi.com/2225-1146/2/1/20
We put forward a brand choice model with unobserved heterogeneity that concerns responsiveness to marketing efforts. We introduce two latent segments of households. The first segment is assumed to respond to marketing efforts, while households in the second segment do not do so. Whether a specific household is a member of the first or the second segment at a specific purchase occasion is described by household-specific characteristics and characteristics concerning buying behavior. Households may switch between the two responsiveness states over time. When comparing the performance of our model with alternative choice models that account for various forms of heterogeneity for three different datasets, we find better face validity for our parameters. Our model also forecasts better.
Econometrics 2014, 2(1), 20-44; Article; doi:10.3390/econometrics2010020; published 2014-02-21. Authors: Dennis Fok, Richard Paap, Philip Franses.
<![CDATA[Econometrics, Vol. 2, Pages 1-19: Referee Bias and Stoppage Time in Major League Soccer: A Partially Adaptive Approach]]>
http://www.mdpi.com/2225-1146/2/1/1
This study extends prior research on referee bias and close bias in professional soccer by examining whether Major League Soccer (MLS) referees’ discretion over stoppage time (i.e., extra play beyond regulation) is influenced by end-of-regulation match scores and/or home field advantage. To do so, we employ a grouped-data regression model and a partially adaptive model. Both account for the imprecise measurement in reported stoppage time. For the 2011 season we find no home field advantage. In fact, stoppage time is the same with a one or two goal deficit at the end of regulation, regardless of which team is ahead. However, the 2011 results do point to an increase in stoppage time of 12 to 20 seconds for nationally televised matches. For the 2012 season, the nationally televised effect disappears due to an increase in stoppage time for those matches not nationally televised. However, a home field advantage is present. Facing a one-goal deficit at the end of regulation, the home team receives about 33 seconds more stoppage time than a visiting team facing the same deficit.
Econometrics 2014, 2(1), 1-19; Article; doi:10.3390/econometrics2010001; published 2014-02-17. Authors: Katherine Yewell, Steven Caudill, Franklin Mixon, Jr.
<![CDATA[Econometrics, Vol. 1, Pages 249-280: Academic Rankings with RePEc]]>
http://www.mdpi.com/2225-1146/1/3/249
This article describes the data collection and use of data for the computation of rankings within RePEc (Research Papers in Economics). This encompasses the determination of impact factors for journals and working paper series, as well as the ranking of authors, institutions, and geographic regions. The various ranking methods are also compared, using a snapshot of the data.
Econometrics 2013, 1(3), 249-280; Article; doi:10.3390/econometrics1030249; published 2013-12-17. Author: Christian Zimmermann.
<![CDATA[Econometrics, Vol. 1, Pages 236-248: Polynomial Regressions and Nonsense Inference]]>
http://www.mdpi.com/2225-1146/1/3/236
Polynomial specifications are widely used, not only in applied economics, but also in epidemiology, physics, political analysis and psychology, just to mention a few examples. In many cases, the data employed to estimate such specifications are time series that may exhibit stochastic nonstationary behavior. We extend Phillips’ results (Phillips, P. Understanding spurious regressions in econometrics. J. Econom. 1986, 33, 311–340.) by proving that an inference drawn from polynomial specifications, under stochastic nonstationarity, is misleading unless the variables cointegrate. We use a generalized polynomial specification as a vehicle to study its asymptotic and finite-sample properties. Our results, therefore, lead to a call to be cautious whenever practitioners estimate polynomial regressions.
Econometrics 2013, 1(3), 236-248; Article; doi:10.3390/econometrics1030236; published 2013-11-18. Authors: Daniel Ventosa-Santaulària, Carlos Rodríguez-Caballero.
<![CDATA[Econometrics, Vol. 1, Pages 217-235: Ranking Leading Econometrics Journals Using Citations Data from ISI and RePEc]]>
http://www.mdpi.com/2225-1146/1/3/217
The paper focuses on the robustness of rankings of academic journal quality and research impact of 10 leading econometrics journals taken from the Thomson Reuters ISI Web of Science (ISI) Category of Economics, using citations data from ISI and the highly accessible Research Papers in Economics (RePEc) database that is widely used in economics, finance and related disciplines. The journals are ranked using quantifiable static and dynamic Research Assessment Measures (RAMs), with 15 RAMs from ISI and five RAMs from RePEc. The similarities and differences in various RAMs, which are based on alternative weighted and unweighted transformations of citations, are highlighted to show which RAMs are able to provide informational value relative to others. The RAMs include the impact factor, mean citations and non-citations, journal policy, number of high quality papers, and journal influence and article influence. The paper highlights robust rankings based on the harmonic mean of the ranks of 20 RAMs, which in some cases are closely related. It is shown that emphasizing the most widely-used RAM, the two-year impact factor of a journal, can lead to a distorted evaluation of journal quality, impact and influence relative to the harmonic mean of the ranks. Some suggestions regarding the use of the most informative RAMs are also given.
Econometrics 2013, 1(3), 217-235; Article; doi:10.3390/econometrics1030217; published 2013-11-18. Authors: Chia-Lin Chang, Michael McAleer.
<![CDATA[Econometrics, Vol. 1, Pages 207-216: The Geometric Meaning of the Notion of Joint Unpredictability of a Bivariate VAR(1) Stochastic Process]]>
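The harmonic-mean aggregation of ranks used in the journal-ranking abstract above is a one-liner; the sketch below shows why it is robust to a single bad rank (the toy rank vector is an illustrative assumption, not data from the paper):

```python
import numpy as np

def harmonic_mean_rank(ranks):
    """Harmonic mean of one journal's ranks across several RAMs."""
    r = np.asarray(ranks, dtype=float)
    return len(r) / np.sum(1.0 / r)

# a journal ranked 1st on four measures but 10th on one is pulled far
# less toward 10 by the harmonic mean than by the arithmetic mean
hm = harmonic_mean_rank([1, 1, 1, 1, 10])  # 5 / 4.1, about 1.22
am = np.mean([1, 1, 1, 1, 10])             # 2.8
```

This down-weighting of outlying ranks is why aggregating 20 RAMs this way is less distorted than leaning on a single measure such as the two-year impact factor.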
http://www.mdpi.com/2225-1146/1/3/207
This paper investigates, in a particular parametric framework, the geometric meaning of joint unpredictability for a bivariate discrete process. In particular, the paper provides a characterization of the joint unpredictability in terms of distance between information sets in a Hilbert space.
Econometrics 2013, 1(3), 207-216; Article; doi:10.3390/econometrics1030207; published 2013-11-14. Author: Umberto Triacca.
<![CDATA[Econometrics, Vol. 1, Pages 180-206: Structural Panel VARs]]>
http://www.mdpi.com/2225-1146/1/2/180
The paper proposes a structural approach to VAR analysis in panels, which takes into account responses to both idiosyncratic and common structural shocks, while permitting full cross member heterogeneity of the response dynamics. In the context of this structural approach, estimation of the loading matrices for the decomposition into idiosyncratic versus common shocks is straightforward and transparent. The method appears to do remarkably well at uncovering the properties of the sample distribution of the underlying structural dynamics, even when the panels are relatively short, as illustrated in Monte Carlo simulations. Finally, these simulations also illustrate that the SVAR panel method can be used to improve inference, not only for properties of the sample distribution, but also for dynamics of individual members of the panel that lack adequate data for a conventional time series SVAR analysis. This is accomplished by using fitted cross sectional regressions of the sample of estimated panel responses to correlated static measures, and using these to interpolate the member-specific dynamics.
Econometrics 2013, 1(2), 180-206; Article; doi:10.3390/econometrics1020180; published 2013-09-24. Author: Peter Pedroni.
<![CDATA[Econometrics, Vol. 1, Pages 157-179: Parametric and Nonparametric Frequentist Model Selection and Model Averaging]]>
http://www.mdpi.com/2225-1146/1/2/157
This paper presents recent developments in model selection and model averaging for parametric and nonparametric models. While there is extensive literature on model selection under parametric settings, we present recently developed results in the context of nonparametric models. In applications, estimation and inference are often conducted under the selected model without considering the uncertainty from the selection process. This often leads to inefficiency in results and misleading confidence intervals. Thus, an alternative to model selection is model averaging, where the estimated model is the weighted sum of all the submodels. This reduces model uncertainty. In recent years, there has been significant interest in model averaging and some important developments have taken place in this area. We present results for both the parametric and nonparametric cases. Some possible topics for future research are also indicated.
Econometrics 2013, 1(2), 157-179; Article; doi:10.3390/econometrics1020157; published 2013-09-20. Authors: Aman Ullah, Huansha Wang.
<![CDATA[Econometrics, Vol. 1, Pages 141-156: Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging]]>
http://www.mdpi.com/2225-1146/1/2/141
This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.
Econometrics 2013, 1(2), 141-156; Article; doi:10.3390/econometrics1020141; published 2013-07-03. Author: Naoya Sueishi.
<![CDATA[Econometrics, Vol. 1, Pages 127-140: Forecasting Value-at-Risk Using High-Frequency Information]]>
http://www.mdpi.com/2225-1146/1/1/127
We consider how to use high-frequency (5-minute) data in the prediction of quantiles of daily Standard &amp; Poor’s 500 (S&amp;P 500) returns. We examine methods that incorporate the high frequency information either indirectly, through combining forecasts (using forecasts generated from returns sampled at different intraday intervals), or directly, through combining high frequency information into one model. We consider subsample averaging, bootstrap averaging and forecast averaging methods for the indirect case, and factor models with a principal component approach for both the direct and indirect cases. We show that in forecasting the daily S&amp;P 500 index return quantile (Value-at-Risk or VaR is simply the negative of it), using high-frequency information is beneficial, often substantially so, particularly in forecasting downside risk. Our empirical results show that the averaging methods (subsample averaging, bootstrap averaging, forecast averaging), which serve as different ways of forming the ensemble average from high-frequency intraday information, provide excellent forecasting performance compared to using just low-frequency daily information.
Econometrics 2013, 1(1), 127-140; Article; doi:10.3390/econometrics1010127; published 2013-06-21. Authors: Huiyu Huang, Tae-Hwy Lee.
<![CDATA[Econometrics, Vol. 1, Pages 115-126: Ten Things You Should Know about the Dynamic Conditional Correlation Representation]]>
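The forecast-averaging idea in the high-frequency VaR abstract above reduces to a simple ensemble: average several quantile forecasts of the same daily return, then take the negative of the averaged quantile as the VaR. The sketch below uses equal weights and made-up forecast values (the paper also studies subsample averaging, bootstrap averaging, and factor-based combinations, none of which are reproduced here):

```python
import numpy as np

def average_quantile_forecasts(forecasts):
    """Equal-weight ensemble of quantile forecasts, e.g. produced from
    returns sampled at different intraday intervals."""
    return np.mean(forecasts, axis=0)

def var_from_quantile(q_forecast):
    """VaR is simply the negative of the return quantile forecast."""
    return -q_forecast

# three hypothetical 1%-quantile forecasts of tomorrow's daily return,
# say from 5-, 30- and 60-minute information sets (values illustrative)
q_hat = average_quantile_forecasts(np.array([-0.031, -0.027, -0.029]))
var_1pct = var_from_quantile(q_hat)
```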
http://www.mdpi.com/2225-1146/1/1/115
The purpose of the paper is to discuss ten things potential users should know about the limits of the Dynamic Conditional Correlation (DCC) representation for estimating and forecasting time-varying conditional correlations. The reasons given for caution about the use of DCC include the following: DCC represents the dynamic conditional covariances of the standardized residuals, and hence does not yield dynamic conditional correlations; DCC is stated rather than derived; DCC has no moments; DCC does not have testable regularity conditions; DCC yields inconsistent two step estimators; DCC has no asymptotic properties; DCC is not a special case of Generalized Autoregressive Conditional Correlation (GARCC), which has testable regularity conditions and standard asymptotic properties; DCC is not dynamic empirically as the effect of news is typically extremely small; DCC cannot be distinguished empirically from diagonal Baba, Engle, Kraft and Kroner (BEKK) in small systems; and DCC may be a useful filter or a diagnostic check, but it is not a model.
Econometrics 2013, 1(1), 115-126; Article; doi:10.3390/econometrics1010115; published 2013-06-21. Authors: Massimiliano Caporin, Michael McAleer.
<![CDATA[Econometrics, Vol. 1, Pages 71-114: Generalized Spatial Two Stage Least Squares Estimation of Spatial Autoregressive Models with Autoregressive Disturbances in the Presence of Endogenous Regressors and Many Instruments]]>
http://www.mdpi.com/2225-1146/1/1/71
This paper studies the generalized spatial two stage least squares (GS2SLS) estimation of spatial autoregressive models with autoregressive disturbances when there are endogenous regressors with many valid instruments. Using many instruments may improve the efficiency of estimators asymptotically, but the bias might be large in finite samples, making the inference inaccurate. We consider the case that the number of instruments K increases with, but at a rate slower than, the sample size, and derive the approximate mean square errors (MSE) that account for the trade-offs between the bias and variance, for both the GS2SLS estimator and a bias-corrected GS2SLS estimator. A criterion function for the optimal K selection can be based on the approximate MSEs. Monte Carlo experiments are provided to show the performance of our procedure of choosing K.
Econometrics 2013, 1(1), 71-114; Article; doi:10.3390/econometrics1010071; published 2013-05-27. Authors: Fei Jin, Lung-fei Lee.
<![CDATA[Econometrics, Vol. 1, Pages 53-70: Outlier Detection in Regression Using an Iterated One-Step Approximation to the Huber-Skip Estimator]]>
http://www.mdpi.com/2225-1146/1/1/53
In regression we can delete outliers based upon a preliminary estimator and re-estimate the parameters by least squares based upon the retained observations. We study the properties of an iteratively defined sequence of estimators based on this idea. We relate the sequence to the Huber-skip estimator. We provide a stochastic recursion equation for the estimation error in terms of a kernel, the previous estimation error and a uniformly small error term. The main contribution is the analysis of the solution of the stochastic recursion equation as a fixed point, and the results that the normalized estimation errors are tight and are close to a linear function of the kernel, thus providing a stochastic expansion of the estimators, which is the same as for the Huber-skip. This implies that the iterated estimator is a close approximation of the Huber-skip.
Econometrics 2013, 1(1), 53-70; Article; doi:10.3390/econometrics1010053; published 2013-05-13. Authors: Søren Johansen, Bent Nielsen.
<![CDATA[Econometrics, Vol. 1, Pages 32-52: Constructing U.K. Core Inflation]]>
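The delete-and-re-estimate iteration described in the outlier-detection abstract above is easy to sketch: starting from a preliminary estimate, repeatedly drop observations whose absolute residuals exceed a cut-off proportional to an estimated scale, then re-fit by least squares on the retained rows, until the estimate stops changing (the fixed point). The cut-off, the crude scale estimate, and the toy data below are illustrative assumptions; the paper analyses the general recursion, not this particular implementation:

```python
import numpy as np

def iterated_huber_skip(y, X, beta0, c=2.576, max_iter=20):
    """Iterated one-step approximation to the Huber-skip estimator:
    delete observations with |residual| > c * sigma_hat, re-estimate by
    OLS on the retained rows, and repeat until a fixed point."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        resid = y - X @ beta
        sigma = np.sqrt(np.mean(resid ** 2))      # crude scale estimate
        keep = np.abs(resid) <= c * sigma         # outlier deletion step
        beta_new = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        if np.allclose(beta_new, beta):           # fixed point reached
            return beta_new
        beta = beta_new
    return beta

# toy usage: 10 gross outliers bias plain OLS; the iterated estimator
# deletes them and recovers the clean fit
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + 0.5 * rng.normal(size=200)
y[:10] += 20.0                                    # contamination
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]   # preliminary estimator
beta_hs = iterated_huber_skip(y, X, beta_ols)
```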
http://www.mdpi.com/2225-1146/1/1/32
The recent volatile behaviour of U.K. inflation has been officially attributed to a sequence of “unusual” price changes, prompting renewed interest in the construction of measures of “core inflation”, from which such unusual price changes may be down-weighted or even excluded. This paper proposes a new approach to constructing core inflation based on detailed analysis of the temporal stochastic structure of the individual prices underlying a particular index. This approach is illustrated using the section structure of the U.K. retail price index (RPI), providing a number of measures of core inflation that can be automatically calculated and updated to provide both a current assessment and forecasts of the underlying inflation rate in the U.K.
Econometrics 2013, 1(1), 32-52; Article; doi:10.3390/econometrics1010032; published 2013-04-25. Author: Terence Mills.
<![CDATA[Econometrics, Vol. 1, Pages 1-31: On Diagnostic Checking of Vector ARMA-GARCH Models with Gaussian and Student-t Innovations]]>
http://www.mdpi.com/2225-1146/1/1/1
This paper focuses on the diagnostic checking of vector ARMA (VARMA) models with multivariate GARCH errors. For a fitted VARMA-GARCH model with Gaussian or Student-t innovations, we derive the asymptotic distributions of autocorrelation matrices of the cross-product vector of standardized residuals. This is different from the traditional approach that employs only the squared series of standardized residuals. We then study two portmanteau statistics, called Q1(M) and Q2(M), for model checking. A residual-based bootstrap method is provided and demonstrated as an effective way to approximate the diagnostic checking statistics. Simulations are used to compare the performance of the proposed statistics with other methods available in the literature. In addition, we also investigate the effect of GARCH shocks on checking a fitted VARMA model. Empirical sizes and powers of the proposed statistics are investigated and the results suggest a procedure of using jointly Q1(M) and Q2(M) in diagnostic checking. The bivariate time series of FTSE 100 and DAX index returns is used to illustrate the performance of the proposed portmanteau statistics. The results show that it is important to consider the cross-product series of standardized residuals and GARCH effects in model checking.
Econometrics 2013, 1(1), 1-31; Article; doi:10.3390/econometrics1010001; published 2013-04-04. Authors: Yongning Wang, Ruey Tsay.