Econometrics
http://www.mdpi.com/journal/econometrics
Latest open access articles published in Econometrics at http://www.mdpi.com/journal/econometrics

Econometrics, Vol. 4, Pages 40: Editorial Announcement
http://www.mdpi.com/2225-1146/4/4/40
I am pleased to announce that, following my retirement on 30 September 2016, Marc Paolella will become Editor-in-Chief (EiC) of Econometrics.
Kerry Patterson. Econometrics 2016, 4(4), 40; Editorial; doi: 10.3390/econometrics4040040; published 2016-10-10.

Econometrics, Vol. 4, Pages 39: Estimation of Dynamic Panel Data Models with Stochastic Volatility Using Particle Filters
http://www.mdpi.com/2225-1146/4/4/39
Time-varying volatility is common in macroeconomic data and has been incorporated into macroeconomic models in recent work. Dynamic panel data models have become increasingly popular in macroeconomics for studying common relationships across countries or regions. This paper estimates dynamic panel data models with stochastic volatility by maximizing an approximate likelihood obtained via Rao-Blackwellized particle filters. Monte Carlo studies reveal the good and stable performance of our particle filter-based estimator. When the volatility of volatility is high, or when regressors are absent but stochastic volatility is present, our approach can outperform both the maximum likelihood estimator that neglects stochastic volatility and generalized method of moments (GMM) estimators.
Wen Xu. Econometrics 2016, 4(4), 39; Article; doi: 10.3390/econometrics4040039; published 2016-10-09.
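The Rao-Blackwellized filter used in the paper is model-specific, but the likelihood-approximation idea can be seen in a plain bootstrap particle filter for a univariate stochastic volatility model. The dynamics, parameter names and particle count below are illustrative assumptions of this sketch, not the paper's specification:

```python
# Minimal bootstrap particle filter for y_t = exp(h_t/2)*eps_t,
# h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t; returns an approximate
# log-likelihood that can be handed to a numerical optimizer.
import numpy as np

def pf_loglik(y, mu, phi, sigma_eta, n_part=1000, seed=0):
    rng = np.random.default_rng(seed)
    # initialize particles from the stationary distribution of h_t
    h = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.standard_normal(n_part)
    loglik = 0.0
    for yt in y:
        h = mu + phi * (h - mu) + sigma_eta * rng.standard_normal(n_part)
        # measurement density N(0, exp(h)), computed in logs for stability
        logw = -0.5 * (np.log(2 * np.pi) + h + yt**2 * np.exp(-h))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())
        h = rng.choice(h, size=n_part, p=w / w.sum())  # multinomial resampling
    return loglik
```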

Econometrics, Vol. 4, Pages 38: Econometric Information Recovery in Behavioral Networks
http://www.mdpi.com/2225-1146/4/3/38
In this paper, we suggest an approach to recovering behavior-related, preference-choice network information from observational data. We model the process as a self-organized, behavior-based, random exponential network-graph system. To address the unknown nature of the sampling model in recovering behavior-related network information, we use the Cressie-Read (CR) family of divergence measures and the corresponding information-theoretic entropy basis for estimation, inference, model evaluation, and prediction. Examples are included to clarify how entropy-based information-theoretic methods are directly applicable to recovering the behavioral network probabilities in this fundamentally underdetermined, ill-posed inverse recovery problem.
George Judge. Econometrics 2016, 4(3), 38; Article; doi: 10.3390/econometrics4030038; published 2016-09-14.

Econometrics, Vol. 4, Pages 37: Generalized Fractional Processes with Long Memory and Time Dependent Volatility Revisited
http://www.mdpi.com/2225-1146/4/3/37
In recent years, fractionally-differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper revisits the class of generalized fractionally-differenced processes generated by Gegenbauer polynomials and the ARMA structure (GARMA) with both long memory and time-dependent innovation variance. We establish the existence and uniqueness of second-order solutions. We also extend this family by allowing the innovations to follow GARCH and stochastic volatility (SV) processes. Under certain regularity conditions, we give asymptotic results for the approximate maximum likelihood estimator for the GARMA-GARCH model. We discuss a Monte Carlo likelihood method for the GARMA-SV model and investigate finite-sample properties via Monte Carlo experiments. Finally, we illustrate the usefulness of this approach using monthly inflation rates for France, Japan and the United States.
M. Peiris, Manabu Asai. Econometrics 2016, 4(3), 37; Article; doi: 10.3390/econometrics4030037; published 2016-09-05.

Econometrics, Vol. 4, Pages 36: Nonparametric Regression with Common Shocks
http://www.mdpi.com/2225-1146/4/3/36
This paper considers a nonparametric regression model for cross-sectional data in the presence of common shocks. Common shocks are allowed to be very general in nature; they do not need to be finite dimensional with a known (small) number of factors. I investigate the properties of the Nadaraya-Watson kernel estimator and determine how general the common shocks can be while still obtaining meaningful kernel estimates. Restrictions on the common shocks are necessary because kernel estimators typically manipulate conditional densities, and conditional densities do not necessarily exist in the present case. By appealing to disintegration theory, I provide sufficient conditions for the existence of such conditional densities and show that the estimator converges in probability to the Kolmogorov conditional expectation given the sigma-field generated by the common shocks. I also establish the rate of convergence and the asymptotic distribution of the kernel estimator.
Eduardo Souza-Rodrigues. Econometrics 2016, 4(3), 36; Article; doi: 10.3390/econometrics4030036; published 2016-09-01.
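For reference, a minimal Nadaraya-Watson estimator with a Gaussian kernel and a fixed bandwidth; the paper's contribution concerns this estimator's behavior under common shocks, which the sketch below does not model:

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Estimate E[y|x] on x_grid by kernel-weighted averaging with bandwidth h."""
    u = (x_grid[:, None] - x[None, :]) / h      # pairwise scaled distances
    k = np.exp(-0.5 * u**2)                     # Gaussian kernel weights
    return (k * y).sum(axis=1) / k.sum(axis=1)  # weighted average per grid point

# usage: m_hat = nadaraya_watson(np.linspace(0, 1, 50), x, y, h=0.1)
```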

Econometrics, Vol. 4, Pages 35: Special Issues of Econometrics: Celebrated Econometricians
http://www.mdpi.com/2225-1146/4/3/35
Econometrics is pleased to announce the commissioning of a new series of Special Issues dedicated to celebrated econometricians of our time. [...]
Econometrics Editorial Office. Econometrics 2016, 4(3), 35; Editorial; doi: 10.3390/econometrics4030035; published 2016-08-17.

Econometrics, Vol. 4, Pages 34: Jump Variation Estimation with Noisy High Frequency Financial Data via Wavelets
http://www.mdpi.com/2225-1146/4/3/34
This paper develops a method to improve the estimation of jump variation using high-frequency data in the presence of market microstructure noise. Accurate estimation of jump variation is in high demand, as it is an important component of volatility in finance for portfolio allocation, derivative pricing and risk management. The method is a two-step procedure of detection and estimation. In Step 1, we detect the jump locations by performing a wavelet transformation on the observed noisy price processes. Since wavelet coefficients are significantly larger at the jump locations than elsewhere, we calibrate the wavelet coefficients through a threshold and declare jump points where the absolute wavelet coefficients exceed the threshold. In Step 2, we estimate the jump variation by averaging the noisy price processes on each side of a declared jump point and taking the difference between the two averages. Specifically, for each jump location detected in Step 1, we compute two averages from the observed noisy price processes, one before the detected jump location and one after it, and take their difference to estimate the jump variation. Theoretically, we show that the two-step procedure based on average realized volatility processes can achieve a convergence rate close to \(O_P(n^{-4/9})\), which is better than the convergence rate \(O_P(n^{-1/4})\) for the procedure based on the original noisy process, where \(n\) is the sample size. Numerically, the method based on average realized volatility processes indeed performs better than that based on the price processes. Empirically, we study the distribution of jump variation using Dow Jones Industrial Average stocks and compare the results using the original price process and the average realized volatility processes.
Xin Zhang, Donggyu Kim, Yazhen Wang. Econometrics 2016, 4(3), 34; Article; doi: 10.3390/econometrics4030034; published 2016-08-16.
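A stylized version of the two-step idea, using Haar-type coefficients (differences of local averages) in place of a full wavelet transform; the window length and threshold constant are illustrative choices, not the paper's calibration:

```python
import numpy as np

def detect_and_size_jumps(p, m=20, c=4.0):
    """p: noisy log-price series; returns (location, jump size) pairs."""
    n = len(p)
    # Step 1: Haar-type coefficient = right local average minus left local average
    d = np.array([p[i:i + m].mean() - p[i - m:i].mean() for i in range(m, n - m)])
    thr = c * np.median(np.abs(d)) / 0.6745   # robust noise scale (MAD)
    locs = m + np.where(np.abs(d) > thr)[0]   # neighbors of one jump should be merged
    # Step 2: jump size = difference of the averages on either side
    return [(i, p[i:i + m].mean() - p[i - m:i].mean()) for i in locs]
```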

Econometrics, Vol. 4, Pages 33: Econometrics Best Paper Award 2016
http://www.mdpi.com/2225-1146/4/3/33
n/a
Kerry Patterson. Econometrics 2016, 4(3), 33; Editorial; doi: 10.3390/econometrics4030033; published 2016-08-01.

Econometrics, Vol. 4, Pages 32: Measuring the Distance between Sets of ARMA Models
http://www.mdpi.com/2225-1146/4/3/32
A distance between pairs of sets of autoregressive moving average (ARMA) processes is proposed, and its main properties are discussed. The paper also shows how the proposed distance finds application in time series analysis. In particular, it can be used to evaluate the distance between portfolios of ARMA models or the distance between vector autoregressive (VAR) models.
Umberto Triacca. Econometrics 2016, 4(3), 32; Article; doi: 10.3390/econometrics4030032; published 2016-07-15.

Econometrics, Vol. 4, Pages 31: Market Microstructure Effects on Firm Default Risk Evaluation
http://www.mdpi.com/2225-1146/4/3/31
Default probability is a fundamental variable determining the creditworthiness of a firm, and equity volatility estimation plays a key role in its evaluation. Assuming a structural credit risk modeling approach, we study the impact of choosing different non-parametric equity volatility estimators on default probability evaluation when market microstructure noise is considered. A general stochastic volatility framework with jumps for the underlying asset dynamics is defined inside a Merton-like structural model. To estimate the volatility risk component of a firm we use high-frequency equity data: market microstructure noise is introduced as a direct effect of observing noisy high-frequency equity prices. A Monte Carlo simulation analysis is conducted to (i) test the performance of alternative non-parametric equity volatility estimators in their capability of filtering out the microstructure noise and backing out the true unobservable asset volatility; and (ii) study the effects of different non-parametric estimation techniques on default probability evaluation. The impact of the non-parametric volatility estimators on risk evaluation is not negligible: a sensitivity analysis defined for alternative values of the leverage parameter and average jump size reveals that the characteristics of the dataset are crucial in determining the proper estimator to consider from a credit risk perspective.
Flavia Barsotti, Simona Sanfelici. Econometrics 2016, 4(3), 31; Article; doi: 10.3390/econometrics4030031; published 2016-07-08.

Econometrics, Vol. 4, Pages 30: Estimation of Gini Index within Pre-Specified Error Bound
http://www.mdpi.com/2225-1146/4/3/30
The Gini index is a widely used measure of economic inequality. This article develops a theory and methodology for constructing a confidence interval for the Gini index with a specified confidence coefficient and a specified width, without assuming any specific distribution of the data. Fixed-sample-size methods cannot simultaneously achieve both a specified confidence coefficient and a fixed width. We develop a purely sequential procedure for interval estimation of the Gini index with a specified confidence coefficient and a specified margin of error. Optimality properties of the proposed method, namely first-order asymptotic efficiency and asymptotic consistency, are proved under mild moment assumptions on the distribution of the data.
Bhargab Chattopadhyay, Shyamal De. Econometrics 2016, 4(3), 30; Article; doi: 10.3390/econometrics4030030; published 2016-06-24.
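The flavor of a purely sequential rule can be seen in this sketch: keep sampling until a plug-in standard error makes the interval half-width no larger than d. The jackknife variance estimate and stopping constants here are assumptions of the sketch, not the authors' procedure:

```python
import numpy as np

def gini(x):
    """Sample Gini index via the sorted-data formula."""
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

def sequential_gini_ci(draw, d=0.01, z=1.96, n0=50):
    """draw(k) yields k fresh observations; stop when z * se <= d."""
    x = draw(n0)
    while True:
        n = len(x)
        loo = np.array([gini(np.delete(x, i)) for i in range(n)])  # jackknife
        se = np.sqrt((n - 1) / n * ((loo - loo.mean())**2).sum())
        if z * se <= d:
            g = gini(x)
            return g, (g - d, g + d), n   # estimate, fixed-width CI, stopped n
        x = np.append(x, draw(1))
```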

Econometrics, Vol. 4, Pages 29: Evaluating Eigenvector Spatial Filter Corrections for Omitted Georeferenced Variables
http://www.mdpi.com/2225-1146/4/2/29
The Ramsey regression equation specification error test (RESET) furnishes a diagnostic for omitted variables in a linear regression model specification (i.e., the null hypothesis is no omitted variables). Integer powers of fitted values from a regression analysis are introduced as additional covariates in a second regression analysis. The former regression model can be considered restricted, whereas the latter can be considered unrestricted; the first model is nested within the second. A RESET significance test is conducted with an F-test using the error sums of squares and the degrees of freedom of the two models. For georeferenced data, eigenvectors can be extracted from a modified spatial weights matrix and included in a linear regression model specification to account for the presence of nonzero spatial autocorrelation. The intuition underlying this methodology is that these synthetic variates function as surrogates for omitted variables. Accordingly, a restricted regression model without eigenvectors should indicate an omitted variables problem, whereas an unrestricted regression model with eigenvectors should result in a failure to reject the RESET null hypothesis. This paper furnishes eleven empirical examples, covering a wide range of spatial attribute data types, that illustrate the effectiveness of eigenvector spatial filtering in addressing the omitted variables problem for georeferenced data as measured by the RESET.
Daniel Griffith, Yongwan Chun. Econometrics 2016, 4(2), 29; Article; doi: 10.3390/econometrics4020029; published 2016-06-21.
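A compact RESET implementation for reference; in the paper's setting, the candidate eigenvectors would simply enter as additional columns of X in the unrestricted comparison. This is a generic sketch, not the authors' code:

```python
import numpy as np
from scipy import stats

def reset_test(y, X, powers=(2, 3)):
    """RESET: add powers of fitted values and F-test the restriction."""
    def ess(Z):
        beta = np.linalg.lstsq(Z, y, rcond=None)[0]
        e = y - Z @ beta
        return e @ e
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    Xu = np.column_stack([X] + [yhat**p for p in powers])
    ess_r, ess_u = ess(X), ess(Xu)
    q, df_u = len(powers), len(y) - Xu.shape[1]
    F = ((ess_r - ess_u) / q) / (ess_u / df_u)
    return F, stats.f.sf(F, q, df_u)   # statistic and p-value
```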

Econometrics, Vol. 4, Pages 28: Testing Symmetry of Unknown Densities via Smoothing with the Generalized Gamma Kernels
http://www.mdpi.com/2225-1146/4/2/28
This paper improves a kernel-smoothed test of symmetry by combining it with a new class of asymmetric kernels called the generalized gamma kernels. It is demonstrated that the improved test statistic has a normal limit under the null of symmetry and is consistent under the alternative. A test-oriented smoothing parameter selection method is also proposed to implement the test. Monte Carlo simulations indicate superior finite-sample performance of the test statistic. It is worth emphasizing that this performance is grounded on the first-order normal limit and a small number of observations, despite a nonparametric convergence rate and a sample-splitting procedure in the test.
Masayuki Hirukawa, Mari Sakudo. Econometrics 2016, 4(2), 28; Article; doi: 10.3390/econometrics4020028; published 2016-06-17.

Econometrics, Vol. 4, Pages 26: Removing Specification Errors from the Usual Formulation of Binary Choice Models
http://www.mdpi.com/2225-1146/4/2/26
We develop a procedure for removing four major specification errors from the usual formulation of binary choice models. The model that results from this procedure is different from the conventional probit and logit models. This difference arises as a direct consequence of our relaxation of the usual assumption that omitted regressors constituting the error term of a latent linear regression model do not introduce omitted regressor biases into the coefficients of the included regressors.
P.A.V.B. Swamy, I-Lok Chang, Jatinder Mehta, William Greene, Stephen Hall, George Tavlas. Econometrics 2016, 4(2), 26; Article; doi: 10.3390/econometrics4020026; published 2016-06-03.

Econometrics, Vol. 4, Pages 27: Continuous and Jump Betas: Implications for Portfolio Diversification
http://www.mdpi.com/2225-1146/4/2/27
Using high-frequency data, we decompose the time-varying beta for stocks into beta for continuous systematic risk and beta for discontinuous systematic risk. Estimated discontinuous betas for S&P 500 constituents between 2003 and 2011 generally exceed the corresponding continuous betas. We demonstrate how continuous and discontinuous betas decrease with portfolio diversification. Using an equiweighted broad market index, we assess the speed of convergence of continuous and discontinuous betas in portfolios of stocks as the number of holdings increases. We show that discontinuous risk dissipates faster with fewer stocks in a portfolio compared to its continuous counterpart.
Vitali Alexeev, Mardi Dungey, Wenying Yao. Econometrics 2016, 4(2), 27; Article; doi: 10.3390/econometrics4020027; published 2016-06-01.

Econometrics, Vol. 4, Pages 25: Stable-GARCH Models for Financial Returns: Fast Estimation and Tests for Stability
http://www.mdpi.com/2225-1146/4/2/25
A fast method for estimating the parameters of a stable-APARCH model, not requiring likelihood evaluation or iteration, is proposed. Several powerful tests for the (asymmetric) stable Paretian distribution with tail index \(1 < \alpha < 2\) are used for assessing the appropriateness of the stable assumption as the innovations process in stable-GARCH-type models for daily stock returns. Overall, there is strong evidence against the stable as the correct innovations assumption for all stocks and time periods, though for many stocks and windows of data, the stable hypothesis is not rejected.
Marc Paolella. Econometrics 2016, 4(2), 25; Article; doi: 10.3390/econometrics4020025; published 2016-05-05.

Econometrics, Vol. 4, Pages 24: Bayesian Bandwidth Selection for a Nonparametric Regression Model with Mixed Types of Regressors
http://www.mdpi.com/2225-1146/4/2/24
This paper develops a sampling algorithm for bandwidth estimation in a nonparametric regression model with continuous and discrete regressors under an unknown error density. The error density is approximated by the kernel density estimator of the unobserved errors, while the regression function is estimated using the Nadaraya-Watson estimator admitting continuous and discrete regressors. We derive an approximate likelihood and posterior for the bandwidth parameters, followed by a sampling algorithm. Simulation results show that the proposed approach typically leads to better accuracy of the resulting estimates than cross-validation, particularly for smaller sample sizes. This bandwidth estimation approach is applied to a nonparametric regression model of Australian All Ordinaries returns and to kernel density estimation of gross domestic product (GDP) growth rates among Organisation for Economic Co-operation and Development (OECD) and non-OECD countries.
Xibin Zhang, Maxwell King, Han Shang. Econometrics 2016, 4(2), 24; Article; doi: 10.3390/econometrics4020024; published 2016-04-22.

Econometrics, Vol. 4, Pages 23: Building a Structural Model: Parameterization and Structurality
http://www.mdpi.com/2225-1146/4/2/23
A specific concept of structural model is used as a background for discussing the structurality of its parameterization. Conditions for a structural model to also be causal are examined. Difficulties and pitfalls arising from the parameterization are analyzed. In particular, pitfalls in considering alternative parameterizations of the same model are shown to have led to ungrounded conclusions in the literature. Discussions of observationally equivalent models related to different economic mechanisms are used to make clear the connection between an economically meaningful parameterization and an economically meaningful decomposition of a complex model. The design of economic policy is used to draw some practical implications of the proposed analysis.
Michel Mouchart, Renzo Orsi. Econometrics 2016, 4(2), 23; Article; doi: 10.3390/econometrics4020023; published 2016-04-12.

Econometrics, Vol. 4, Pages 22: Distribution of Budget Shares for Food: An Application of Quantile Regression to Food Security
http://www.mdpi.com/2225-1146/4/2/22
This study examines, using quantile regression, the linkage between food security and efforts to enhance smallholder coffee producer incomes in Rwanda. Even though smallholder coffee producer incomes in Rwanda have increased, inhabitants of these areas still experience stunting and wasting. This study examines whether the distribution of the income elasticity for food is the same for coffee-growing and noncoffee-growing provinces. We find that the share of expenditures on food is statistically different in coffee-growing and noncoffee-growing provinces. Thus, the increase in expenditure on food is smaller for coffee-growing provinces than for noncoffee-growing provinces.
Charles Moss, James Oehmke, Alexandre Lyambabaje, Andrew Schmitz. Econometrics 2016, 4(2), 22; Article; doi: 10.3390/econometrics4020022; published 2016-04-08.
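A minimal Engel-curve style illustration of the tool: quantile regression of a food budget share on log expenditure at several quantile levels. The variable names and simulated data are assumptions of this sketch, not the paper's dataset:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
log_exp = rng.normal(10.0, 1.0, 500)                           # log expenditure
food_share = 0.8 - 0.04 * log_exp + rng.normal(0, 0.05, 500)   # simulated shares

X = sm.add_constant(log_exp)
for tau in (0.25, 0.50, 0.75):
    fit = sm.QuantReg(food_share, X).fit(q=tau)
    print(tau, fit.params)   # slope traced across the conditional distribution
```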

Econometrics, Vol. 4, Pages 21: Unit Root Tests: The Role of the Univariate Models Implied by Multivariate Time Series
http://www.mdpi.com/2225-1146/4/2/21
In cointegration analysis, it is customary to test the hypothesis of unit roots separately for each single time series. In this note, we point out that this procedure may imply large size distortion of the unit root tests if the DGP is a VAR. It is well known that univariate models implied by a VAR data-generating process necessarily have a finite-order MA component. This feature may explain why an MA component has often been found in univariate ARIMA models for economic time series. Thereby, it has important implications for unit root tests in univariate settings, given the well-known size distortion of popular unit root tests in the presence of a large negative coefficient in the MA component. In a small simulation experiment, considering several popular unit root tests and the ADF sieve bootstrap unit root test, we find that, besides the well-known size distortion effect, there can be substantial differences in size distortion according to which univariate time series is tested for the presence of a unit root.
Nunzio Cappuccio, Diego Lubian. Econometrics 2016, 4(2), 21; Article; doi: 10.3390/econometrics4020021; published 2016-04-07.
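The size-distortion mechanism is easy to reproduce: simulate a unit-root process whose first differences have a large negative MA coefficient and record how often a standard ADF test rejects at the nominal 5% level. The DGP parameters and replication count here are illustrative:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
rejections = 0
for _ in range(500):
    e = rng.standard_normal(201)
    u = e[1:] - 0.8 * e[:-1]   # MA(1) innovations with theta = -0.8
    y = np.cumsum(u)           # unit-root process
    rejections += adfuller(y, regression="c", autolag="AIC")[1] < 0.05
print("empirical size at nominal 5%:", rejections / 500)  # typically well above 0.05
```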

Econometrics, Vol. 4, Pages 20: Recovering the Most Entropic Copulas from Preliminary Knowledge of Dependence
http://www.mdpi.com/2225-1146/4/2/20
This paper provides a new approach to recovering relative entropy measures of contemporaneous dependence from limited information by constructing the most entropic copula (MEC) and its canonical form, namely the most entropic canonical copula (MECC). The MECC can effectively be obtained by maximizing Shannon entropy to yield a proper copula such that known dependence structures of the data (e.g., measures of association) are matched to their empirical counterparts. In fact, the problem of maximizing the entropy of copulas is dual to the problem of minimizing the Kullback-Leibler cross entropy (KLCE) of joint probability densities when the marginal probability densities are fixed. Our simulation study shows that the proposed MEC estimator can potentially outperform many other copula estimators in finite samples.
Ba Chu, Stephen Satchell. Econometrics 2016, 4(2), 20; Article; doi: 10.3390/econometrics4020020; published 2016-03-29.

Econometrics, Vol. 4, Pages 19: A Method for Measuring Treatment Effects on the Treated without Randomization
http://www.mdpi.com/2225-1146/4/2/19
This paper contributes to the literature on the estimation of causal effects by providing an analytical formula for individual specific treatment effects and an empirical methodology that allows us to estimate these effects. We derive the formula from a general model with minimal restrictions, unknown functional form and true unobserved variables such that it is a credible model of the underlying real world relationship. Subsequently, we manipulate the model in order to put it in an estimable form. In contrast to other empirical methodologies, which derive average treatment effects, we derive an analytical formula that provides estimates of the treatment effects on each treated individual. We also provide an empirical example that illustrates our methodology.
P.A.V.B. Swamy, Stephen Hall, George Tavlas, I-Lok Chang, Heather Gibson, William Greene, Jatinder Mehta. Econometrics 2016, 4(2), 19; Article; doi: 10.3390/econometrics4020019; published 2016-03-25.

Econometrics, Vol. 4, Pages 17: Bayesian Calibration of Generalized Pools of Predictive Distributions
http://www.mdpi.com/2225-1146/4/1/17
Decision-makers often consult different experts to build reliable forecasts on variables of interest. Combining multiple opinions and calibrating them to maximize forecast accuracy is consequently a crucial issue in several economic problems. This paper applies a Bayesian beta mixture model to derive a combined and calibrated density function using random calibration functionals and random combination weights. In particular, it compares the application of linear, harmonic and logarithmic pooling in the Bayesian combination approach. The three combination schemes are studied in simulation examples with multimodal densities and in an empirical application with a large database of stock data. All of the experiments show that in a beta mixture calibration framework the three combination schemes are substantially equivalent, achieving calibration, and no clear preference for one of them appears. The financial application shows that linear pooling together with beta mixture calibration achieves the best results in terms of calibrated forecasts.
Roberto Casarin, Giulia Mantoan, Francesco Ravazzolo. Econometrics 2016, 4(1), 17; Article; doi: 10.3390/econometrics4010017; published 2016-03-16.
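The three pooling schemes, written for two predictive densities evaluated on a common grid x; the Bayesian calibration step of the paper is omitted, and the weight w is a free input of this sketch:

```python
import numpy as np

def linear_pool(p1, p2, w):
    return w * p1 + (1 - w) * p2          # already integrates to one

def harmonic_pool(p1, p2, w, x):
    q = 1.0 / (w / p1 + (1 - w) / p2)
    return q / np.trapz(q, x)             # renormalize on the grid

def log_pool(p1, p2, w, x):
    q = p1**w * p2**(1 - w)
    return q / np.trapz(q, x)             # renormalize on the grid
```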

Econometrics, Vol. 4, Pages 16: The Evolving Transmission of Uncertainty Shocks in the United Kingdom
http://www.mdpi.com/2225-1146/4/1/16
This paper investigates whether the impact of uncertainty shocks on the U.K. economy has changed over time. To this end, we propose an extended time-varying VAR model that simultaneously allows the estimation of a measure of uncertainty and its time-varying impact on key macroeconomic and financial variables. We find that the impact of uncertainty shocks on these variables has declined over time. The timing of the change coincides with the introduction of inflation targeting in the U.K.
Haroon Mumtaz. Econometrics 2016, 4(1), 16; Article; doi: 10.3390/econometrics4010016; published 2016-03-14.

Econometrics, Vol. 4, Pages 15: Timing Foreign Exchange Markets
http://www.mdpi.com/2225-1146/4/1/15
To improve short-horizon exchange rate forecasts, we employ foreign exchange market risk factors as fundamentals, and Bayesian treed Gaussian process (BTGP) models to handle non-linear, time-varying relationships between these fundamentals and exchange rates. Forecasts from the BTGP model conditional on the carry and dollar factors dominate random walk forecasts on accuracy and economic criteria in the Meese-Rogoff setting. Superior market timing ability for large moves, more than directional accuracy, drives the BTGP’s success. We explain how, through a model averaging Monte Carlo scheme, the BTGP is able to simultaneously exploit smoothness and rough breaks in between-variable dynamics. Either feature in isolation is unable to consistently outperform benchmarks throughout the full span of time in our forecasting exercises. Trading strategies based on ex ante BTGP forecasts deliver the highest out-of-sample risk-adjusted returns for the median currency, as well as for both predictable, traded risk factors.
Samuel Malone, Robert Gramacy, Enrique ter Horst. Econometrics 2016, 4(1), 15; Article; doi: 10.3390/econometrics4010015; published 2016-03-11.

Econometrics, Vol. 4, Pages 14: Return and Risk of Pairs Trading Using a Simulation-Based Bayesian Procedure for Predicting Stable Ratios of Stock Prices
http://www.mdpi.com/2225-1146/4/1/14
We investigate the direct connection between the uncertainty related to estimated stable ratios of stock prices and the risk and return of two pairs trading strategies: a conditional statistical arbitrage method and an implicit arbitrage one. A simulation-based Bayesian procedure is introduced for predicting stable stock price ratios, defined in a cointegration model. Using this class of models and the proposed inferential technique, we are able to connect estimation and model uncertainty with the risk and return of stock trading. In terms of methodology, we show the effect that an encompassing prior, which is shown to be equivalent to a Jeffreys’ prior, has under an orthogonal normalization for the selection of pairs of cointegrated stock prices, and further, its effect on the estimation and prediction of the spread between cointegrated stock prices. We distinguish between models with a normal and a Student t distribution, since the latter typically provides a better description of daily changes of prices on financial markets. As an empirical application, stocks are used that are ingredients of the Dow Jones Composite Average index. The results show that normalization has little effect on the selection of pairs of cointegrated stocks on the basis of Bayes factors. However, the results stress the importance of the orthogonal normalization for the estimation and prediction of the spread—the deviation from the equilibrium relationship—which leads to better results in terms of profit per capital engagement and risk than using a standard linear normalization.
David Ardia, Lukasz Gatarek, Lennart Hoogerheide, Herman van Dijk. Econometrics 2016, 4(1), 14; Article; doi: 10.3390/econometrics4010014; published 2016-03-10.

Econometrics, Vol. 4, Pages 13: Bayesian Nonparametric Measurement of Factor Betas and Clustering with Application to Hedge Fund Returns
http://www.mdpi.com/2225-1146/4/1/13
We define a dynamic and self-adjusting mixture of Gaussian graphical models to cluster financial returns, and provide a new method for extracting nonparametric estimates of dynamic alphas (excess return) and betas (to a choice set of explanatory factors) in a multivariate setting. This approach, as well as its outputs, has a dynamic, nonstationary and nonparametric form, which circumvents the problem of model risk and the parametric assumptions that the Kalman filter and other widely used approaches rely on. The by-product of clusters, used for shrinkage and information borrowing, can be of use in determining relationships around specific events. This approach exhibits a smaller root mean squared error than traditionally used benchmarks in financial settings, which we illustrate through simulation. As an illustration, we use hedge fund index data and find that our estimated alphas are, on average, 0.13% per month higher (1.6% per year) than alphas estimated through ordinary least squares. The approach exhibits fast adaptation to abrupt changes in the parameters, as seen in our estimated alphas and betas, which exhibit high volatility, especially in periods that can be identified as times of stressful market events, a reflection of the dynamic positioning of hedge fund portfolio managers.
Urbi Garay, Enrique ter Horst, German Molina, Abel Rodriguez. Econometrics 2016, 4(1), 13; Article; doi: 10.3390/econometrics4010013; published 2016-03-08.

Econometrics, Vol. 4, Pages 12: Evolutionary Sequential Monte Carlo Samplers for Change-Point Models
http://www.mdpi.com/2225-1146/4/1/12
Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the SMC scope encompasses wider applications, such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov chain Monte Carlo (MCMC) methods. Not only do SMC algorithms draw posterior distributions of static or dynamic parameters, but they additionally provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines (off-line) tempered SMC inference with on-line SMC inference for drawing realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well suited for multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.
Arnaud Dufays. Econometrics 2016, 4(1), 12; Article; doi: 10.3390/econometrics4010012; published 2016-03-08.

Econometrics, Vol. 4, Pages 11: Parallelization Experience with Four Canonical Econometric Models Using ParMitISEM
http://www.mdpi.com/2225-1146/4/1/11
This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in importance sampling or Metropolis-Hastings methods for Bayesian inference on model parameters and probabilities. We present and discuss four canonical econometric models using a Graphics Processing Unit and a multi-core Central Processing Unit version of the MitISEM algorithm. The results show that the parallelization of the MitISEM algorithm on Graphics Processing Units and multi-core Central Processing Units is straightforward and fast to program using MATLAB. Moreover, the speed performance of the Graphics Processing Unit version is much higher than that of the Central Processing Unit one.
Nalan Baştürk, Stefano Grassi, Lennart Hoogerheide, Herman van Dijk. Econometrics 2016, 4(1), 11; Article; doi: 10.3390/econometrics4010011; published 2016-03-07.

Econometrics, Vol. 4, Pages 18: Spatial Econometrics: A Rapidly Evolving Discipline
http://www.mdpi.com/2225-1146/4/1/18
Spatial econometrics has a relatively short history in scientific thought. Indeed, the term “spatial econometrics” was introduced only forty years ago during the general address delivered by Jean Paelinck to the annual meeting of the Dutch Statistical Association in May 1974 (see [1]). [...]
Giuseppe Arbia. Econometrics 2016, 4(1), 18; Editorial; doi: 10.3390/econometrics4010018; published 2016-03-07.

Econometrics, Vol. 4, Pages 10: Sequentially Adaptive Bayesian Learning for a Nonlinear Model of the Secular and Cyclical Behavior of US Real GDP
http://www.mdpi.com/2225-1146/4/1/10
There is a one-to-one mapping between the conventional time series parameters of a third-order autoregression and the more interpretable parameters of secular half-life, cyclical half-life and cycle period. The latter parameterization is better suited to interpretation of results using both Bayesian and maximum likelihood methods, and to expression of a substantive prior distribution using Bayesian methods. The paper demonstrates how to approach both problems using the sequentially adaptive Bayesian learning (SABL) algorithm and software, which eliminates virtually all of the substantial technical overhead required in conventional approaches and produces results quickly and reliably. The work utilizes methodological innovations in SABL, including optimization of irregular and multimodal functions and production of the conventional maximum likelihood asymptotic variance matrix as a by-product.
John Geweke. Econometrics 2016, 4(1), 10; Article; doi: 10.3390/econometrics4010010; published 2016-03-02.
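One direction of the mapping, from AR(3) coefficients to the interpretable parameters, is short enough to state in code; this sketch assumes one real root and one complex-conjugate pair of the characteristic polynomial, as in the paper's setup:

```python
import numpy as np

def ar3_to_interpretable(a1, a2, a3):
    """Map y_t = a1*y_{t-1} + a2*y_{t-2} + a3*y_{t-3} + e_t to
    (secular half-life, cyclical half-life, cycle period)."""
    roots = np.roots([1.0, -a1, -a2, -a3])   # roots of z^3 - a1 z^2 - a2 z - a3
    real = [r for r in roots if abs(r.imag) < 1e-10]
    cplx = [r for r in roots if r.imag > 1e-10]
    half_life = lambda r: np.log(0.5) / np.log(abs(r))   # decay half-life of a root
    return half_life(real[0]), half_life(cplx[0]), 2 * np.pi / np.angle(cplx[0])
```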

Econometrics, Vol. 4, Pages 8: Volatility Forecasting: Downside Risk, Jumps and Leverage Effect
http://www.mdpi.com/2225-1146/4/1/8
We provide empirical evidence on volatility forecasting in relation to asymmetries present in the dynamics of both return and volatility processes. Using recently developed methodologies to detect jumps from high-frequency price data, we estimate the size of positive and negative jumps and propose a methodology to estimate the size of jumps in the quadratic variation. The leverage effect is separated into continuous and discontinuous effects, and past volatility is separated into “good” and “bad”, as well as into continuous and discontinuous risks. Using a long history of the S&P 500 price index, we find that the continuous leverage effect lasts about one week, while the discontinuous leverage effect disappears after one day. “Good” and “bad” continuous risks both characterize the volatility persistence, while “bad” jump risk is much more informative than “good” jump risk in forecasting future volatility. The volatility forecasting model proposed is able to capture many empirical stylized facts while still remaining parsimonious in terms of the number of parameters to be estimated.
Francesco Audrino, Yujia Hu. Econometrics 2016, 4(1), 8; Article; doi: 10.3390/econometrics4010008; published 2016-02-23.
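The "good"/"bad" separation rests on realized semivariances computed from signed intraday returns; a minimal version of that building block (the paper's full forecasting model adds jump and leverage terms on top of it):

```python
import numpy as np

def realized_semivariances(r):
    """r: vector of intraday returns for one day."""
    rs_pos = np.sum(r[r > 0]**2)             # "good" volatility
    rs_neg = np.sum(r[r < 0]**2)             # "bad" volatility
    return rs_pos, rs_neg, rs_pos - rs_neg   # last term: signed jump proxy
```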

Econometrics, Vol. 4, Pages 9: Computational Complexity and Parallelization in Bayesian Econometric Analysis
http://www.mdpi.com/2225-1146/4/1/9
Challenging statements have appeared in recent years in the literature on advances in computational procedures. [...]
Nalan Baştürk, Roberto Casarin, Francesco Ravazzolo, Herman van Dijk. Econometrics 2016, 4(1), 9; Editorial; doi: 10.3390/econometrics4010009; published 2016-02-22.

Econometrics, Vol. 4, Pages 7: Multiple Discrete Endogenous Variables in Weakly-Separable Triangular Models
http://www.mdpi.com/2225-1146/4/1/7
We consider a model in which an outcome depends on two discrete treatment variables, where one treatment is given before the other. We formulate a three-equation triangular system with weak separability conditions. Without assuming assignment is random, we establish the identification of an average structural function using two-step matching. We also consider decomposing the effect of the first treatment into direct and indirect effects, which are shown to be identified by the proposed methodology. We allow for both of the treatment variables to be non-binary and do not appeal to an identification-at-infinity argument.
Sung Jun, Joris Pinkse, Haiqing Xu, Neşe Yıldız. Econometrics 2016, 4(1), 7; Article; doi: 10.3390/econometrics4010007; published 2016-02-04.

Econometrics, Vol. 4, Pages 6: Functional-Coefficient Spatial Durbin Models with Nonparametric Spatial Weights: An Application to Economic Growth
http://www.mdpi.com/2225-1146/4/1/6
This paper considers a functional-coefficient spatial Durbin model with nonparametric spatial weights. Applying the series approximation method, we estimate the unknown functional coefficients and spatial weighting functions via a nonparametric two-stage least squares (or 2SLS) estimation method. To further improve estimation accuracy, we also construct a second-step estimator of the unknown functional coefficients by a local linear regression approach. Some Monte Carlo simulation results are reported to assess the finite-sample performance of our proposed estimators. We then apply the proposed model to re-examine national economic growth by augmenting the conventional Solow economic growth convergence model with unknown spatial interactive structures of the national economy, as well as country-specific Solow parameters, where the spatial weighting functions and Solow parameters are allowed to be functions of geographical distance and the countries’ openness to trade, respectively.
Mustafa Koroglu, Yiguo Sun. Econometrics 2016, 4(1), 6; Article; doi: 10.3390/econometrics4010006; published 2016-02-03.

Econometrics, Vol. 4, Pages 5: Acknowledgement to Reviewers of Econometrics in 2015
http://www.mdpi.com/2225-1146/4/1/5
The editors of Econometrics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2015. [...]
Econometrics Editorial Office. Econometrics 2016, 4(1), 5; Editorial; doi: 10.3390/econometrics4010005; published 2016-01-25.

Econometrics, Vol. 4, Pages 4: A Conditional Approach to Panel Data Models with Common Shocks
http://www.mdpi.com/2225-1146/4/1/4
This paper studies the effects of common shocks on the OLS estimators of the slope parameters in linear panel data models. The shocks are assumed to affect both the errors and some of the explanatory variables. In contrast to existing approaches, which rely on results on martingale difference sequences, our method relies on conditional strong laws of large numbers and conditional central limit theorems for conditionally heterogeneous random variables.
Giovanni Forchini, Bin Peng. Econometrics 2016, 4(1), 4; Article; doi: 10.3390/econometrics4010004; published 2016-01-12.

Econometrics, Vol. 4, Pages 3: Forecasting Value-at-Risk under Different Distributional Assumptions
http://www.mdpi.com/2225-1146/4/1/3
Financial asset returns are known to be conditionally heteroskedastic and generally non-normally distributed, fat-tailed and often skewed. These features must be taken into account to produce accurate forecasts of Value-at-Risk (VaR). We provide a comprehensive look at the problem by considering the impact that different distributional assumptions have on the accuracy of both univariate and multivariate GARCH models in out-of-sample VaR prediction. The set of analyzed distributions comprises the normal, Student t and Multivariate Exponential Power distributions and their corresponding skewed counterparts. The accuracy of the VaR forecasts is assessed by implementing standard statistical backtesting procedures used to rank the different specifications. The results show the importance of allowing for heavy tails and skewness in the distributional assumption, with the skew-Student outperforming the others across all tests and confidence levels.
Manuela Braione, Nicolas Scholtes. Econometrics 2016, 4(1), 3; Article; doi: 10.3390/econometrics4010003; published 2016-01-11.
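A stripped-down univariate version of the comparison, using the arch package: one-day-ahead VaR from a GARCH(1,1) under normal versus Student-t innovations. This is a sketch rather than the paper's full multivariate setup; returns in percent are assumed:

```python
import numpy as np
from arch import arch_model
from scipy import stats

def one_day_var(returns, alpha=0.01, dist="t"):
    """Fit GARCH(1,1) and return next-day VaR at level alpha (positive number)."""
    res = arch_model(returns, vol="GARCH", p=1, q=1, dist=dist).fit(disp="off")
    sigma = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
    mu = res.params["mu"]
    if dist == "t":
        nu = res.params["nu"]
        q = stats.t.ppf(alpha, nu) * np.sqrt((nu - 2) / nu)  # unit-variance t quantile
    else:
        q = stats.norm.ppf(alpha)
    return -(mu + q * sigma)
```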

Econometrics, Vol. 4, Pages 1: How Credible Are Shrinking Wage Elasticities of Married Women Labour Supply?
http://www.mdpi.com/2225-1146/4/1/1
This paper delves into the well-known phenomenon of shrinking wage elasticities for married women in the US over recent decades. The results of a novel experimental modelling approach via sample data ordering unveil considerable heterogeneity across different wage groups. Yet, surprisingly constant wage elasticity estimates are maintained within certain wage groups over time. In addition to those constant wage elasticity estimates, we find that the composition of working women across different wage groups has changed considerably, resulting in shrinking wage elasticity estimates at the aggregate level. These findings would be impossible to obtain had we not dismantled and discarded the instrumental variable estimation route.
Duo Qin, Sophie van Huellen, Qing-Chao Wang. Econometrics 2016, 4(1), 1; Article; doi: 10.3390/econometrics4010001; published 2015-12-25.

Econometrics, Vol. 4, Pages 2: Interpretation and Semiparametric Efficiency in Quantile Regression under Misspecification
http://www.mdpi.com/2225-1146/4/1/2
Allowing for misspecification in the linear conditional quantile function, this paper provides a new interpretation of, and the semiparametric efficiency bound for, the quantile regression parameter \(\beta(\tau)\) in Koenker and Bassett (1978). The first result on interpretation shows that under a mean-squared loss function, the probability limit of the Koenker–Bassett estimator minimizes a weighted distribution approximation error, defined as \(F_{Y}(X'\beta(\tau)|X) - \tau\), i.e., the deviation of the conditional distribution function, evaluated at the linear quantile approximation, from the quantile level. The second result implies that the Koenker–Bassett estimator semiparametrically efficiently estimates the quantile regression parameter that produces parsimonious descriptive statistics for the conditional distribution. Therefore, quantile regression shares the attractive features of ordinary least squares: interpretability and semiparametric efficiency under misspecification.
Ying-Ying Lee. Econometrics 2016, 4(1), 2; Article; doi: 10.3390/econometrics4010002; published 2015-12-24.

Econometrics, Vol. 3, Pages 864-887: Non-Parametric Estimation of Intraday Spot Volatility: Disentangling Instantaneous Trend and Seasonality
http://www.mdpi.com/2225-1146/3/4/864
We provide a new framework for modeling trends and periodic patterns in high-frequency financial data. Seeking adaptivity to ever-changing market conditions, we enlarge the Fourier flexible form into a richer functional class: both our smooth trend and the seasonality are non-parametrically time-varying and evolve in real time. We provide the associated estimators and use simulations to show that they behave adequately in the presence of jumps and heteroskedastic and heavy-tailed noise. A study of exchange rate returns sampled from 2010 to 2013 suggests that failing to factor in the seasonality’s dynamic properties may lead to misestimation of the intraday spot volatility.
Thibault Vatter, Hau-Tieng Wu, Valérie Chavez-Demoulin, Bin Yu. Econometrics 2015, 3(4), 864-887; Article; doi: 10.3390/econometrics3040864; published 2015-12-18.

Econometrics, Vol. 3, Pages 825-863: Bootstrap Tests for Overidentification in Linear Regression Models
http://www.mdpi.com/2225-1146/3/4/825
We study the finite-sample properties of tests for overidentifying restrictions in linear regression models with a single endogenous regressor and weak instruments. Under the assumption of Gaussian disturbances, we derive expressions for a variety of test statistics as functions of eight mutually independent random variables and two nuisance parameters. The distributions of the statistics are shown to have an ill-defined limit as the parameter that determines the strength of the instruments tends to zero and as the correlation between the disturbances of the structural and reduced-form equations tends to plus or minus one. This makes it impossible to perform reliable inference near the point at which the limit is ill-defined. Several bootstrap procedures are proposed. They alleviate the problem and allow reliable inference when the instruments are not too weak. We also study their power properties.
Russell Davidson, James MacKinnon. Econometrics 2015, 3(4), 825-863; Article; doi: 10.3390/econometrics3040825; published 2015-12-09.

Econometrics, Vol. 3, Pages 797-824: Forecast Combination under Heavy-Tailed Errors
http://www.mdpi.com/2225-1146/3/4/797
Forecast combination has been proven to be a very important technique for obtaining accurate predictions in various applications in economics, finance, marketing and many other areas. In many applications, forecast errors exhibit heavy-tailed behaviors for various reasons. Unfortunately, to our knowledge, little has been done to obtain reliable forecast combinations for such situations. The familiar forecast combination methods, such as the simple average, least squares regression, or those based on the variance-covariance matrix of the forecasts, may perform very poorly because outliers tend to occur and destabilize the combination weights, leading to non-robust forecasts. To address this problem, we propose two nonparametric forecast combination methods. One is designed for situations in which the forecast errors are strongly believed to have heavy tails that can be modeled by a scaled Student’s t-distribution; the other is designed for relatively more general situations, when there is a lack of strong or consistent evidence on the tail behavior of the forecast errors due to a shortage of data and/or an evolving data-generating process. Adaptive risk bounds for both methods are developed. They show that the resulting combined forecasts yield near-optimal mean forecast errors relative to the candidate forecasts. Simulations and a real example demonstrate their superior performance in that they indeed tend to have significantly smaller prediction errors than previous combination methods in the presence of forecast outliers.
Gang Cheng, Sicong Wang, Yuhong Yang. Econometrics 2015, 3(4), 797-824; Article; doi: 10.3390/econometrics3040797; published 2015-11-23.

Econometrics, Vol. 3, Pages 761-796: Testing in a Random Effects Panel Data Model with Spatially Correlated Error Components and Spatially Lagged Dependent Variables
http://www.mdpi.com/2225-1146/3/4/761
We propose a random effects panel data model with both spatially correlated error components and spatially lagged dependent variables. We focus on diagnostic testing procedures and derive Lagrange multiplier (LM) test statistics for a variety of hypotheses within this model. We first construct the joint LM test for both the individual random effects and the two spatial effects (spatial error correlation and spatial lag dependence). We then provide LM tests for the individual random effects and for the two spatial effects separately. In addition, in order to guard against local model misspecification, we derive locally adjusted (robust) LM tests based on the Bera and Yoon principle (Bera and Yoon, 1993). We conduct a small Monte Carlo simulation to show the good finite sample performances of these LM test statistics and revisit the cigarette demand example in Baltagi and Levin (1992) to illustrate our testing procedures.
Ming He, Kuan-Pin Lin. Econometrics 2015, 3(4), 761-796; Article; doi: 10.3390/econometrics3040761; published 2015-11-09.

Econometrics, Vol. 3, Pages 733-760: Forecasting Interest Rates Using Geostatistical Techniques
http://www.mdpi.com/2225-1146/3/4/733
Geostatistical spatial models are widely used in many applied fields to forecast data observed on continuous three-dimensional surfaces. We propose to extend their use to finance and, in particular, to forecasting yield curves. We present the results of an empirical application in which we apply the proposed method to forecast Euro Zero Rates (2003-2014) using the Ordinary Kriging method based on the anisotropic variogram. Furthermore, a comparison with other recent methods for forecasting yield curves is proposed. The results show that the model is characterized by good levels of prediction accuracy and is competitive with the other forecasting models considered.
Giuseppe Arbia, Michele Di Marcantonio. Econometrics 2015, 3(4), 733-760; Article; doi: 10.3390/econometrics3040733; published 2015-11-09.

Econometrics, Vol. 3, Pages 719-732: Counterfactual Distributions in Bivariate Models—A Conditional Quantile Approach
http://www.mdpi.com/2225-1146/3/4/719
This paper proposes a methodology to incorporate bivariate models in numerical computations of counterfactual distributions. The proposal is to extend the works of Machado and Mata (2005) and Melly (2005) using the grid method to generate pairs of random variables. This contribution allows incorporating the effect of intra-household decision making in counterfactual decompositions of changes in income distribution. An application using data from five Latin American countries shows that this approach substantially improves the goodness of fit to the empirical distribution. However, the decomposition exercise is less conclusive about the performance of the method, which essentially depends on the sample size and the accuracy of the regression model.
Javier Alejo, Nicolás Badaracco. Econometrics 2015, 3(4), 719-732; Article; doi: 10.3390/econometrics3040719; published 2015-11-09.

Econometrics, Vol. 3, Pages 709-718: Measurement Errors Arising When Using Distances in Microeconometric Modelling and the Individuals’ Position Is Geo-Masked for Confidentiality
http://www.mdpi.com/2225-1146/3/4/709
In many microeconometric models we use distances. For instance, in modelling individual behavior in labor economics or in health studies, the distance from a relevant point of interest (such as a hospital or a workplace) is often used as a predictor in a regression framework. However, in order to preserve confidentiality, spatial micro-data are often geo-masked, thus reducing their quality and dramatically distorting the inferential conclusions. In particular, in this case a measurement error is introduced in the independent variable, which negatively affects the properties of the estimators. This paper studies these negative effects, discusses their consequences, and suggests possible interpretations and directions for data producers, end users, and practitioners.
Giuseppe Arbia, Giuseppe Espa, Diego Giuliani. Econometrics 2015, 3(4), 709-718; Article; doi: 10.3390/econometrics3040709; published 2015-10-29.

Econometrics, Vol. 3, Pages 698-708: Is Benford’s Law a Universal Behavioral Theory?
http://www.mdpi.com/2225-1146/3/4/698
In this paper, we consider the question, and present evidence as to, whether or not Benford’s exponential first significant digit (FSD) law reflects a fundamental principle behind the complex and nondeterministic nature of large-scale physical and behavioral systems. As a behavioral example, we focus on the FSD distribution of Australian micro income data and use information-theoretic entropy methods to investigate the degree to which the corresponding empirical income distributions are consistent with Benford’s law.
Sofia Villas-Boas, Qiuzi Fu, George Judge. Econometrics 2015, 3(4), 698-708; Article; doi: 10.3390/econometrics3040698; published 2015-10-22.
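Checking a positive-valued sample against the FSD law takes a few lines; this generic sketch compares empirical first-digit frequencies with the Benford probabilities log10(1 + 1/d):

```python
import numpy as np

def benford_compare(x):
    """x: positive magnitudes (e.g., incomes). Returns (empirical, Benford)."""
    fsd = (x / 10.0 ** np.floor(np.log10(x))).astype(int)   # first significant digit
    empirical = np.array([(fsd == d).mean() for d in range(1, 10)])
    benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
    return empirical, benford
```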

Econometrics, Vol. 3, Pages 667-697: A Joint Specification Test for Response Probabilities in Unordered Multinomial Choice Models
http://www.mdpi.com/2225-1146/3/3/667
Estimation results obtained by parametric models may be seriously misleading when the model is misspecified or poorly approximates the true model. This study proposes a test that jointly tests the specifications of multiple response probabilities in unordered multinomial choice models. The test statistic is asymptotically chi-square distributed, consistent against a fixed alternative and able to detect a local alternative approaching the null at a rate slower than the parametric rate. We show that rejection regions can be calculated by a simple parametric bootstrap procedure when the sample size is small. The size and power of the test are investigated by Monte Carlo experiments.
Masamune Iwasawa. Econometrics 2015, 3(3), 667-697; Article; doi: 10.3390/econometrics3030667; published 2015-09-16.

Econometrics, Vol. 3, Pages 654-666: On Bootstrap Inference for Quantile Regression Panel Data: A Monte Carlo Study
http://www.mdpi.com/2225-1146/3/3/654
This paper evaluates bootstrap inference methods for quantile regression panel data models. We propose to construct confidence intervals for the parameters of interest using percentile bootstrap with pairwise resampling. We study three different bootstrapping procedures. First, the bootstrap samples are constructed by resampling only from cross-sectional units with replacement. Second, the temporal resampling is performed from the time series. Finally, a more general resampling scheme, which considers sampling from both the cross-sectional and temporal dimensions, is introduced. The bootstrap algorithms are computationally attractive and easy to use in practice. We evaluate the performance of the bootstrap confidence interval by means of Monte Carlo simulations. The results show that the bootstrap methods have good finite sample performance for both location and location-scale models.
Antonio Galvao, Gabriel Montes-Rojas. Econometrics 2015, 3(3), 654-666; Article; doi: 10.3390/econometrics3030654; published 2015-09-10.

Econometrics, Vol. 3, Pages 633-653: A New Family of Consistent and Asymptotically-Normal Estimators for the Extremal Index
http://www.mdpi.com/2225-1146/3/3/633
The extremal index (θ) is the key parameter for extending extreme value theory results from i.i.d. to stationary sequences. One important property of this parameter is that its inverse determines the degree of clustering in the extremes. This article introduces a novel interpretation of the extremal index as a limiting probability characterized by two Poisson processes, and a simple family of estimators derived from this new characterization. Unlike most estimators for θ in the literature, this estimator is consistent, asymptotically normal and very stable across partitions of the sample. Further, we show in an extensive simulation study that this estimator outperforms the logs, blocks and runs estimation methods in finite samples. Finally, we apply this new estimator to test for clustering of extremes in monthly time series of unemployment growth and inflation rates, and conclude that runs of large unemployment rates are more prolonged than periods of high inflation.
Jose Olmo. Econometrics 2015, 3(3), 633-653; Article; doi: 10.3390/econometrics3030633; published 2015-08-28.
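For comparison, the classical runs estimator that the paper benchmarks against; the threshold u and run length r are user choices in this sketch:

```python
import numpy as np

def runs_estimator(x, u, r=5):
    """Estimate the extremal index as the fraction of exceedances of u that
    start a new cluster, clusters being separated by r non-exceedances."""
    exc = x > u
    starts = sum(exc[i] and not exc[max(0, i - r):i].any() for i in range(len(x)))
    return starts / exc.sum()   # theta_hat in (0, 1]
```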
http://www.mdpi.com/2225-1146/3/3/610
Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while distributional asymmetry has little or moderate impact; both phenomena tend to be more pronounced under variance targeting. Some effects further intensify if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.Econometrics2015-08-1033Article10.3390/econometrics30306106106322225-11462015-08-10doi: 10.3390/econometrics3030610Stanislav AnatolyevStanislav Khrapov
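To make the comparison concrete, here is a standard statement in LaTeX of what variance targeting does in the GARCH(1,1) case; the notation is generic textbook notation, assumed rather than copied from the paper.

    % GARCH(1,1) conditional variance and its implied unconditional variance
    \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,
    \qquad
    \bar\sigma^2 = \operatorname{E}\!\left[\varepsilon_t^2\right]
                 = \frac{\omega}{1-\alpha-\beta}.

    % Variance targeting: pin down omega with the sample variance, so that
    % QML searches over (alpha, beta) only
    \omega = \hat{\bar\sigma}^2\,(1-\alpha-\beta),
    \qquad
    \hat{\bar\sigma}^2 = \frac{1}{T}\sum_{t=1}^{T}\varepsilon_t^2 .

The reduced parameterization is the convenience the abstract refers to; the paper's finding is that this convenience can cost estimation precision, especially under heavy-tailed standardized returns.

<![CDATA[Econometrics, Vol. 3, Pages 590-609: A Kolmogorov-Smirnov Based Test for Comparing the Predictive Accuracy of Two Sets of Forecasts]]>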
http://www.mdpi.com/2225-1146/3/3/590
This paper introduces a complementary statistical test for distinguishing between the predictive accuracy of two sets of forecasts. We propose a non-parametric test founded on the principles of the Kolmogorov-Smirnov (KS) test, referred to as the KS Predictive Accuracy (KSPA) test. The KSPA test serves two distinct purposes. First, it determines whether there exists a statistically significant difference between the distributions of forecast errors; second, it exploits the principles of stochastic dominance to determine whether the forecasts with the lower error also report a stochastically smaller error than forecasts from a competing model, thereby enabling a distinction between the predictive accuracy of the two sets of forecasts. We perform a simulation study of the size and power of the proposed test and report the results for different noise distributions, sample sizes and forecasting horizons. The simulation results indicate that the KSPA test is correctly sized and robust to varying forecasting horizons and sample sizes, with especially clear gains in the case of small sample sizes. Real world applications are also considered to illustrate the applicability of the proposed KSPA test in practice.Econometrics2015-08-0433Article10.3390/econometrics30305905906092225-11462015-08-04doi: 10.3390/econometrics3030590Hossein HassaniEmmanuel Silva
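Read this way, the first stage amounts to a two-sample KS comparison of forecast-error distributions, which is easy to sketch in Python with scipy; the simulated error series, the use of absolute errors as the loss, and the direction of the one-sided alternative are illustrative assumptions, not the paper's exact construction.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(2)
    # Illustrative out-of-sample forecast errors from two competing models
    err_a = rng.normal(0.0, 1.0, size=200)   # model A
    err_b = rng.normal(0.0, 1.3, size=200)   # model B: noisier forecasts

    # Stage 1: do the two absolute-error distributions differ at all?
    stat, pval = ks_2samp(np.abs(err_a), np.abs(err_b))
    print(f"two-sided KS: D = {stat:.3f}, p = {pval:.3f}")

    # Stage 2 (stochastic-dominance flavor): one-sided KS comparison; scipy
    # states the hypotheses in terms of empirical CDFs, and a CDF lying above
    # the other means stochastically smaller absolute errors -- check scipy's
    # documentation for the sign convention of `alternative`
    stat1, pval1 = ks_2samp(np.abs(err_a), np.abs(err_b), alternative="greater")
    print(f"one-sided KS: D = {stat1:.3f}, p = {pval1:.3f}")

<![CDATA[Econometrics, Vol. 3, Pages 577-589: A Spectral Model of Turnover Reduction]]>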
http://www.mdpi.com/2225-1146/3/3/577
We give a simple explicit formula for turnover reduction when a large number of alphas are traded on the same execution platform and trades are crossed internally. We model turnover reduction via alpha correlations. Then, for a large number of alphas, turnover reduction is related to the largest eigenvalue and the corresponding eigenvector of the alpha correlation matrix.Econometrics2015-07-2933Article10.3390/econometrics30305775775892225-11462015-07-29doi: 10.3390/econometrics3030577Zura Kakushadze
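The spectral object the abstract refers to is the leading eigenpair of the alpha correlation matrix; a minimal numpy sketch of extracting it is below. The one-factor correlation matrix is simulated purely for illustration, and the paper's explicit turnover-reduction formula is deliberately not reproduced.

    import numpy as np

    rng = np.random.default_rng(3)
    K = 100                                    # number of alphas

    # Illustrative alpha correlation matrix: one common factor plus noise
    b = 0.6 + 0.3 * rng.random(K)
    C = np.outer(b, b) + np.diag(1.0 - b**2)   # unit diagonal by construction

    # Leading eigenvalue/eigenvector of the symmetric correlation matrix
    eigvals, eigvecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    lam_max, v_max = eigvals[-1], eigvecs[:, -1]
    print(f"largest eigenvalue: {lam_max:.2f} out of K = {K} alphas")
    # In the paper's large-K limit, turnover reduction is expressed as a
    # function of (lam_max, v_max); see the article for the closed form.

<![CDATA[Econometrics, Vol. 3, Pages 561-576: A Note on the Asymptotic Normality of the Kernel Deconvolution Density Estimator with Logarithmic Chi-Square Noise]]>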
http://www.mdpi.com/2225-1146/3/3/561
This paper studies the asymptotic normality of the kernel deconvolution estimator when the noise distribution is logarithmic chi-square; both independent and identically distributed observations and strong mixing observations are considered. The result for the dependent case is applied to obtain the pointwise asymptotic distribution of the deconvolution volatility density estimator in discrete-time stochastic volatility models.Econometrics2015-07-2133Short Note10.3390/econometrics30305615615762225-11462015-07-21doi: 10.3390/econometrics3030561Yang Zu<![CDATA[Econometrics, Vol. 3, Pages 532-560: New Graphical Methods and Test Statistics for Testing Composite Normality]]>
http://www.mdpi.com/2225-1146/3/3/532
Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.Econometrics2015-07-1533Article10.3390/econometrics30305325325602225-11462015-07-15doi: 10.3390/econometrics3030532Marc Paolella<![CDATA[Econometrics, Vol. 3, Pages 525-531: Efficient Estimation in Heteroscedastic Varying Coefficient Models]]>
http://www.mdpi.com/2225-1146/3/3/525
This paper considers statistical inference for the heteroscedastic varying coefficient model. We propose an estimator for the coefficient functions that is more efficient than the conventional local-linear estimator. We establish asymptotic normality for the proposed estimator and conduct simulations to illustrate the performance of the proposed method.Econometrics2015-07-1533Article10.3390/econometrics30305255255312225-11462015-07-15doi: 10.3390/econometrics3030525Chuanhua WeiLijie Wan<![CDATA[Econometrics, Vol. 3, Pages 494-524: Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects]]>
http://www.mdpi.com/2225-1146/3/3/494
We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.Econometrics2015-07-1033Article10.3390/econometrics30304944945242225-11462015-07-10doi: 10.3390/econometrics3030494Guangjie Li<![CDATA[Econometrics, Vol. 3, Pages 466-493: A New Approach to Model Verification, Falsification and Selection]]>
http://www.mdpi.com/2225-1146/3/3/466
This paper shows that a qualitative analysis, i.e., an assessment of the consistency of a hypothesized sign pattern for structural arrays with the sign pattern of the estimated reduced form, can always provide decisive insight into a model’s validity, both in general and compared to other models. Qualitative analysis can show that it is impossible for some models to have generated the data used to estimate the reduced form, even though standard specification tests might show the model to be adequate. A partially specified structural hypothesis can be falsified by estimating as few as one reduced form equation. Zero restrictions in the structure can themselves be falsified. It is further shown how the information content of the hypothesized structural sign patterns can be measured using a commonly applied concept of statistical entropy. The lower the hypothesized structural sign pattern’s entropy, the more a priori information it proposes about the sign pattern of the estimated reduced form, and the more subject it is to Type I error and the less subject to Type II error. Three cases illustrate the approach taken here.Econometrics2015-06-2933Article10.3390/econometrics30304664664932225-11462015-06-29doi: 10.3390/econometrics3030466Andrew BuckGeorge Lady<![CDATA[Econometrics, Vol. 3, Pages 443-465: Bayesian Approach to Disentangling Technical and Environmental Productivity]]>
http://www.mdpi.com/2225-1146/3/2/443
This paper models the firm’s production process as a system of simultaneous technologies for desirable and undesirable outputs. Desirable outputs are produced by transforming inputs via the conventional transformation function, whereas (consistent with the material balance condition) undesirable outputs are by-produced via the so-called “residual generation technology”. By separating the production of undesirable outputs from that of desirable outputs, not only do we ensure that undesirable outputs are not modeled as inputs and thus satisfy costly disposability, but we are also able to differentiate between the traditional (desirable-output-oriented) technical productivity and the undesirable-output-oriented environmental, or so-called “green”, productivity. To measure the latter, we derive a Solow-type Divisia environmental productivity index which, unlike conventional productivity indices, allows crediting the ceteris paribus reduction in undesirable outputs. Our index also provides a meaningful way to decompose environmental productivity into environmental technological and efficiency changes.Econometrics2015-06-1632Article10.3390/econometrics30204434434652225-11462015-06-16doi: 10.3390/econometrics3020443Emir MalikovSubal KumbhakarEfthymios Tsionas<![CDATA[Econometrics, Vol. 3, Pages 412-442: Strategic Interaction Model with Censored Strategies]]>
http://www.mdpi.com/2225-1146/3/2/412
In this paper, we develop a new model of a static game of incomplete information with a large number of players. The model has two key distinguishing features. First, the strategies are subject to threshold effects, and can be interpreted as dependent censored random variables. Second, in contrast to most of the existing literature, our inferential theory relies on a large number of players, rather than a large number of independent repetitions of the same game. We establish existence and uniqueness of the pure strategy equilibrium, and prove that the censored equilibrium strategies satisfy a near-epoch dependence property. We then show that the normal maximum likelihood and least squares estimators of this censored model are consistent and asymptotically normal. Our model can be useful in a wide variety of settings, including investment, R&D, labor supply, and social interaction applications.Econometrics2015-06-0132Article10.3390/econometrics30204124124422225-11462015-06-01doi: 10.3390/econometrics3020412Nazgul Jenish<![CDATA[Econometrics, Vol. 3, Pages 376-411: Asymptotic Distribution and Finite Sample Bias Correction of QML Estimators for Spatial Error Dependence Model]]>
http://www.mdpi.com/2225-1146/3/2/376
In studying the asymptotic and finite sample properties of quasi-maximum likelihood (QML) estimators for spatial linear regression models, much attention has been paid to the spatial lag dependence (SLD) model; little has been given to its companion, the spatial error dependence (SED) model. In particular, the effect of spatial dependence on the convergence rate of the QML estimators has not been formally studied, and methods for correcting the finite sample bias of the QML estimators have not been given. This paper fills in these gaps. Of the two, bias correction is particularly important to applications of this model, as it can lead to much improved inferences for the regression coefficients. Contrary to common perceptions, both the large and small sample behaviors of the QML estimators for the SED model can differ from those for the SLD model in terms of the rate of convergence and the magnitude of bias. Monte Carlo results show that the bias can be severe, and the proposed bias correction procedure is very effective.Econometrics2015-05-2132Article10.3390/econometrics30203763764112225-11462015-05-21doi: 10.3390/econometrics3020376Shew LiuZhenlin Yang<![CDATA[Econometrics, Vol. 3, Pages 355-375: A Jackknife Correction to a Test for Cointegration Rank]]>
http://www.mdpi.com/2225-1146/3/2/355
This paper investigates the performance of a jackknife correction to a test for cointegration rank in a vector autoregressive system. The limiting distributions of the jackknife-corrected statistics are derived and the critical values of these distributions are tabulated. Based on these critical values the finite sample size and power properties of the jackknife-corrected tests are compared with the usual rank test statistic as well as statistics involving a small sample correction and a Bartlett correction, in addition to a bootstrap method. The simulations reveal that all of the corrected tests can provide finite sample size improvements, while maintaining power, although the bootstrap procedure is the most robust across the simulation designs considered.Econometrics2015-05-2032Article10.3390/econometrics30203553553752225-11462015-05-20doi: 10.3390/econometrics3020355Marcus Chambers<![CDATA[Econometrics, Vol. 3, Pages 339-354: The Seasonal KPSS Test: Examining Possible Applications with Monthly Data and Additional Deterministic Terms]]>
http://www.mdpi.com/2225-1146/3/2/339
Finite sample studies of seasonal stationarity tests are notably scarcer in the literature than those of seasonal unit root tests. Although the joint use of seasonal stationarity and unit root tests is advisable for correctly determining the most appropriate form of the trend in a seasonal time series, such use is rarely noted in the relevant studies on this topic. Recently, the seasonal KPSS test, which has a null hypothesis of no seasonal unit roots and is based on quarterly data, has been introduced in the literature. The asymptotic theory of the seasonal KPSS test depends on whether the data have been filtered by a preliminary regression; more specifically, one may extract deterministic components, such as the mean and trend, from the data before testing. In this paper, we examine the effects of de-trending on the properties of the seasonal KPSS test in finite samples. A sketch of the test’s limit theory is subsequently provided. Moreover, a Monte Carlo study is conducted to analyze the behavior of the test for a monthly time series. The focus on this frequency is significant because, as mentioned above, the test was introduced for quarterly data. Overall, the results indicate that the seasonal KPSS test preserves its good size and power properties. Furthermore, our results corroborate those reported elsewhere in the literature for conventional stationarity tests, suggesting that nonparametric corrections of residual variances may lead to better in-sample properties of the seasonal KPSS test. Finally, the seasonal KPSS test is applied to a monthly series of the United States (US) consumer price index, in which we identify a number of seasonal unit roots. [1] Table 1 in this paper is copyrighted and initially published by JMASM in 2012, Volume 11, Issue 1, pp. 69–77, ISSN: 1538–9472, JMASM Inc., PO Box 48023, Oak Park, MI 48237, USA, ea@jmasm.com.Econometrics2015-05-1332Article10.3390/econometrics30203393393542225-11462015-05-13doi: 10.3390/econometrics3020339Ghassen Montasser<![CDATA[Econometrics, Vol. 3, Pages 317-338: The SAR Model for Very Large Datasets: A Reduced Rank Approach]]>
http://www.mdpi.com/2225-1146/3/2/317
The SAR model is widely used in spatial econometrics to model Gaussian processes on a discrete spatial lattice, but for large datasets, fitting it becomes computationally prohibitive, and hence its usefulness can be limited. A computationally efficient spatial model is the spatial random effects (SRE) model, and in this article, we calibrate it to the SAR model of interest using a generalisation of the Moran operator that allows for heteroskedasticity and an asymmetric SAR spatial dependence matrix. In general, spatial data have a measurement-error component, which we model, and we use restricted maximum likelihood to estimate the SRE model covariance parameters; the required computational time is only of the order of the size of the dataset. Our implementation is demonstrated using mean usual weekly income data from the 2011 Australian Census.Econometrics2015-05-1132Article10.3390/econometrics30203173173382225-11462015-05-11doi: 10.3390/econometrics3020317Sandy BurdenNoel CressieDavid Steel<![CDATA[Econometrics, Vol. 3, Pages 289-316: Selection Criteria in Regime Switching Conditional Volatility Models]]>
http://www.mdpi.com/2225-1146/3/2/289
A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the data generating process used in the experiments, great care is needed when choosing a criterion.Econometrics2015-05-1132Article10.3390/econometrics30202892893162225-11462015-05-11doi: 10.3390/econometrics3020289Thomas Chuffart<![CDATA[Econometrics, Vol. 3, Pages 265-288: Nonparametric Regression Estimation for Multivariate Null Recurrent Processes]]>
http://www.mdpi.com/2225-1146/3/2/265
This paper discusses nonparametric kernel regression with the regressor being a \(d\)-dimensional \(\beta\)-null recurrent process in the presence of conditional heteroscedasticity. We show that the mean function estimator is consistent with convergence rate \(\sqrt{n(T)h^{d}}\), where \(n(T)\) is the number of regenerations for a \(\beta\)-null recurrent process, and that the limiting distribution (with proper normalization) is normal. Furthermore, we show that the two-step estimator for the volatility function is consistent. The finite sample performance of the estimator is quite reasonable when the leave-one-out cross-validation method is used for bandwidth selection. We apply the proposed method to study the relationship of the Federal funds rate with the 3-month and 5-year T-bill rates and find evidence of nonlinearity in the relationship. Furthermore, the in-sample and out-of-sample performance of the nonparametric model is far better than that of the linear model.Econometrics2015-04-1432Article10.3390/econometrics30202652652882225-11462015-04-14doi: 10.3390/econometrics3020265Biqing CaiDag Tjøstheim
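As a point of reference for the bandwidth-selection step, here is a minimal Python sketch of leave-one-out cross-validation for a one-dimensional Gaussian-kernel Nadaraya-Watson estimator; the data-generating process is illustrative, and nothing here reflects the paper's null-recurrent asymptotics.

    import numpy as np

    def nw_fit(x0, x, y, h):
        # Gaussian-kernel Nadaraya-Watson estimate of E[y | x = x0]
        w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
        return np.sum(w * y) / np.sum(w)

    def loo_cv(x, y, grid):
        # Leave-one-out CV: squared prediction error with observation i held out
        scores = []
        for h in grid:
            err = [y[i] - nw_fit(x[i], np.delete(x, i), np.delete(y, i), h)
                   for i in range(len(x))]
            scores.append(np.mean(np.square(err)))
        return grid[int(np.argmin(scores))]

    rng = np.random.default_rng(4)
    x = rng.uniform(-2, 2, 300)
    y = np.sin(2 * x) + 0.3 * rng.standard_normal(300)
    h = loo_cv(x, y, np.linspace(0.05, 1.0, 20))
    print(f"LOO-CV bandwidth: {h:.3f}, fit at 0: {nw_fit(0.0, x, y, h):.3f}")

<![CDATA[Econometrics, Vol. 3, Pages 240-264: Detecting Location Shifts during Model Selection by Step-Indicator Saturation]]>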
http://www.mdpi.com/2225-1146/3/2/240
To capture location shifts in the context of model selection, we propose selecting significant step indicators from a saturating set added to the union of all of the candidate variables. The null retention frequency and approximate non-centrality of a selection test are derived using a ‘split-half’ analysis, the simplest specialization of a multiple-path block-search algorithm. Monte Carlo simulations, extended to sequential reduction, confirm the accuracy of nominal significance levels under the null and show retentions when location shifts occur, improving the non-null retention frequency compared to the corresponding impulse-indicator saturation (IIS)-based method and the lasso.Econometrics2015-04-1432Article10.3390/econometrics30202402402642225-11462015-04-14doi: 10.3390/econometrics3020240Jennifer CastleJurgen DoornikDavid HendryFelix Pretis<![CDATA[Econometrics, Vol. 3, Pages 233-239: A Pitfall in Using the Characterization of Granger Non-Causality in Vector Autoregressive Models]]>
http://www.mdpi.com/2225-1146/3/2/233
It is well known that in a vector autoregressive (VAR) model Granger non-causality is characterized by a set of restrictions on the VAR coefficients. This characterization has been derived under the assumption of non-singularity of the covariance matrix of the innovations. This note shows that if this assumption is violated, then the characterization of Granger non-causality in a VAR model fails to hold. In these situations Granger non-causality test results must be interpreted with caution.Econometrics2015-04-0932Article10.3390/econometrics30202332332392225-11462015-04-09doi: 10.3390/econometrics3020233Umberto Triacca<![CDATA[Econometrics, Vol. 3, Pages 215-232: Return and Volatility Spillovers across Equity Markets in Mainland China, Hong Kong and the United States]]>
http://www.mdpi.com/2225-1146/3/2/215
Examinations of the dynamics of daily returns and volatility in the stock markets of the U.S., Hong Kong and mainland China (Shanghai and Shenzhen) over 2 January 2001 to 8 February 2013 suggest: (1) evidence of unidirectional return spillovers from the U.S. to the other three markets, but no spillover between Hong Kong and either of the two mainland China markets; (2) evidence of unidirectional ARCH and GARCH effects from the U.S. to the other three markets; (3) correlations of returns that vary across markets, with the highest correlation of 93.5% between the two Chinese markets, a medium correlation of 30% between the mainland China and Hong Kong markets, and low correlations of 6.4% and 7.2% between the U.S. and China’s two markets, so that international investors may benefit by allocating assets to China’s markets; (4) patterns of dynamic conditional correlations from the DCC model that suggest an increase in correlation between China and other stock markets since the most recent financial crisis of 2007.Econometrics2015-04-0232Article10.3390/econometrics30202152152322225-11462015-04-02doi: 10.3390/econometrics3020215Hassan MohammadiYuting Tan<![CDATA[Econometrics, Vol. 3, Pages 199-214: Plug-in Bandwidth Selection for Kernel Density Estimation with Discrete Data]]>
http://www.mdpi.com/2225-1146/3/2/199
This paper proposes plug-in bandwidth selection for kernel density estimation with discrete data via minimization of mean summed square error. Simulation results show that the plug-in bandwidths perform well, relative to cross-validated bandwidths, in non-uniform designs. We further find that plug-in bandwidths are relatively small. Several empirical examples show that the plug-in bandwidths are typically similar in magnitude to their cross-validated counterparts.Econometrics2015-03-3132Article10.3390/econometrics30201991992142225-11462015-03-31doi: 10.3390/econometrics3020199Chi-Yang ChuDaniel HendersonChristopher Parmeter
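The estimator being tuned is the standard discrete (Aitchison-Aitken-type) kernel density estimator, sketched below in Python; the bandwidth value lam is a hypothetical placeholder standing in for the paper's plug-in formula, which is not reproduced here.

    import numpy as np

    def aitchison_aitken(x, X, lam, c):
        # Discrete kernel: weight 1 - lam on a match, lam/(c - 1) otherwise,
        # where c is the number of support points and 0 <= lam <= (c - 1)/c
        return np.where(X == x, 1.0 - lam, lam / (c - 1))

    def discrete_kde(x, X, lam, c):
        # Kernel density estimate at cell x from sample X
        return aitchison_aitken(x, X, lam, c).mean()

    rng = np.random.default_rng(5)
    c = 5                                      # unordered support {0, ..., 4}
    X = rng.choice(c, size=400, p=[0.4, 0.25, 0.2, 0.1, 0.05])
    lam = 0.1                                  # plug-in or CV choice in practice
    est = [discrete_kde(x, X, lam, c) for x in range(c)]
    print(np.round(est, 3), "sums to", round(sum(est), 3))

As lam goes to 0, the estimator collapses to the raw cell frequencies; larger lam smooths probability mass across cells, which is why small plug-in bandwidths, as the abstract reports, keep the estimate close to the empirical frequencies.

<![CDATA[Econometrics, Vol. 3, Pages 187-198: Information Recovery in a Dynamic Statistical Markov Model]]>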
http://www.mdpi.com/2225-1146/3/2/187
Although economic processes and systems are in general simple in nature, the underlying dynamics are complicated and seldom understood. Recognizing this, in this paper we use a nonstationary-conditional Markov process model of observed aggregate data to learn about and recover causal influence information associated with the underlying dynamic micro-behavior. Estimating equations are used as a link to the data and to model the dynamic conditional Markov process. To recover the unknown transition probabilities, we use an information theoretic approach to model the data and derive a new class of conditional Markov models. A quadratic loss function is used as a basis for selecting the optimal member from the family of possible likelihood-entropy functional(s). The asymptotic properties of the resulting estimators are demonstrated, and a range of potential applications is discussed.Econometrics2015-03-2532Article10.3390/econometrics30201871871982225-11462015-03-25doi: 10.3390/econometrics3020187Douglas MillerGeorge Judge<![CDATA[Econometrics, Vol. 3, Pages 156-186: A Joint Chow Test for Structural Instability]]>
http://www.mdpi.com/2225-1146/3/1/156
The classical Chow test for structural instability requires strictly exogenous regressors and a break-point specified in advance. In this paper, we consider two generalisations, the one-step recursive Chow test (based on the sequence of studentised recursive residuals) and its supremum counterpart, which relaxes these requirements. We use results on the strong consistency of regression estimators to show that the one-step test is appropriate for stationary, unit root or explosive processes modelled in the autoregressive distributed lags (ADL) framework. We then use results in extreme value theory to develop a new supremum version of the test, suitable for formal testing of structural instability with an unknown break-point. The test assumes the normality of errors and is intended to be used in situations where this can be either assumed or established empirically. Simulations show that the supremum test has desirable power properties, in particular against level shifts late in the sample and against outliers. An application to U.K. GDP data is given.Econometrics2015-03-1231Article10.3390/econometrics30101561561862225-11462015-03-12doi: 10.3390/econometrics3010156Bent NielsenAndrew Whitby<![CDATA[Econometrics, Vol. 3, Pages 128-155: Two-Step Lasso Estimation of the Spatial Weights Matrix]]>
http://www.mdpi.com/2225-1146/3/1/128
The vast majority of spatial econometric research relies on the assumption that the spatial network structure is known a priori. This study considers a two-step estimation strategy for estimating the n(n-1) interaction effects in a spatial autoregressive panel model where the spatial dimension is potentially large. The identifying assumption is approximate sparsity of the spatial weights matrix. The proposed estimation methodology exploits the Lasso estimator and mimics two-stage least squares (2SLS) to account for endogeneity of the spatial lag. The developed two-step estimator is of more general interest: it may be used in applications where the number of endogenous regressors and the number of instrumental variables are larger than the number of observations. We derive convergence rates for the two-step Lasso estimator. Our Monte Carlo simulation results show that the two-step estimator is consistent and successfully recovers the spatial network structure for reasonable sample sizes T.Econometrics2015-03-0931Article10.3390/econometrics30101281281552225-11462015-03-09doi: 10.3390/econometrics3010128Achim AhrensArnab Bhattacharjee
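A minimal sklearn sketch of the generic two-step idea follows: a Lasso first stage predicts an endogenous regressor from many instruments, and a second stage regresses the outcome on the fitted value, 2SLS-style. The simulated design and all names are illustrative; the paper's estimator targets the n(n-1) spatial weights and has its own tuning and theory, which this sketch does not reproduce.

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(6)
    n, p = 100, 150                        # more instruments than observations

    # Simulated design: sparse first stage, endogeneity through shared error u
    Z = rng.standard_normal((n, p))
    pi = np.zeros(p)
    pi[:5] = 1.0                           # only 5 instruments are relevant
    u = rng.standard_normal(n)
    d = Z @ pi + u                         # endogenous regressor
    y = 0.5 * d + u + rng.standard_normal(n)

    # Step 1: Lasso first stage (handles p > n via approximate sparsity)
    first = Lasso(alpha=0.1).fit(Z, d)
    d_hat = first.predict(Z)

    # Step 2: regress the outcome on the first-stage fitted value
    second = LinearRegression().fit(d_hat.reshape(-1, 1), y)
    naive = LinearRegression().fit(d.reshape(-1, 1), y)
    print(f"naive OLS slope:   {naive.coef_[0]:.3f}")
    print(f"two-step estimate: {second.coef_[0]:.3f} (true value 0.5)")

<![CDATA[Econometrics, Vol. 3, Pages 101-127: Heteroskedasticity of Unknown Form in Spatial Autoregressive Models with a Moving Average Disturbance Term]]>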
http://www.mdpi.com/2225-1146/3/1/101
In this study, I investigate the necessary condition for the consistency of the maximum likelihood estimator (MLE) of spatial models with a spatial moving average process in the disturbance term. I show that the MLE of spatial autoregressive and spatial moving average parameters is generally inconsistent when heteroskedasticity is not considered in the estimation. I also show that the MLE of parameters of exogenous variables is inconsistent and determine its asymptotic bias. I provide simulation results to evaluate the performance of the MLE. The simulation results indicate that the MLE imposes a substantial amount of bias on both autoregressive and moving average parameters.Econometrics2015-02-2631Article10.3390/econometrics30101011011272225-11462015-02-26doi: 10.3390/econometrics3010101Osman Doğan<![CDATA[Econometrics, Vol. 3, Pages 91-100: Entropy Maximization as a Basis for Information Recovery in Dynamic Economic Behavioral Systems]]>
http://www.mdpi.com/2225-1146/3/1/91
As a basis for information recovery in open dynamic microeconomic systems, we emphasize the connection between adaptive intelligent behavior, causal entropy maximization and self-organized equilibrium seeking behavior. This entropy-based causal adaptive behavior framework permits the use of information-theoretic methods as a solution basis for the resulting pure and stochastic inverse economic-econometric problems. We cast the information recovery problem in the form of a binary network and suggest information-theoretic methods to recover estimates of the unknown binary behavioral parameters without explicitly sampling the configuration-arrangement of the sample space.Econometrics2015-02-1631Article10.3390/econometrics3010091911002225-11462015-02-16doi: 10.3390/econometrics3010091George Judge<![CDATA[Econometrics, Vol. 3, Pages 65-90: Finding Starting-Values for the Estimation of Vector STAR Models]]>
http://www.mdpi.com/2225-1146/3/1/65
This paper focuses on finding starting-values for the estimation of Vector STAR models. Based on a Monte Carlo study, different procedures are evaluated. Their performance is assessed with respect to model fit and computational effort. I employ (i) grid search algorithms and (ii) heuristic optimization procedures, namely differential evolution, threshold accepting, and simulated annealing. In the equation-by-equation starting-value search approach the procedures achieve equally good results. Unless the errors are cross-correlated, equation-by-equation search followed by a derivative-based algorithm can handle such an optimization problem sufficiently well. This result holds also for higher-dimensional Vector STAR models with a slight edge for heuristic methods. For more complex Vector STAR models which require a multivariate search approach, simulated annealing and differential evolution outperform threshold accepting and the grid search.Econometrics2015-01-2931Article10.3390/econometrics301006565902225-11462015-01-29doi: 10.3390/econometrics3010065Frauke Schleer<![CDATA[Econometrics, Vol. 3, Pages 55-64: On the Interpretation of Instrumental Variables in the Presence of Specification Errors]]>
http://www.mdpi.com/2225-1146/3/1/55
The method of instrumental variables (IV) and the generalized method of moments (GMM), and their applications to the estimation of errors-in-variables and simultaneous equations models in econometrics, require data on a sufficient number of instrumental variables that are both exogenous and relevant. We argue that, in general, such instruments (weak or strong) cannot exist.Econometrics2015-01-2931Article10.3390/econometrics301005555642225-11462015-01-29doi: 10.3390/econometrics3010055P.A.V.B. SwamyGeorge TavlasStephen Hall<![CDATA[Econometrics, Vol. 3, Pages 2-54: Modeling Autoregressive Processes with Moving-Quantiles-Implied Nonlinearity]]>
http://www.mdpi.com/2225-1146/3/1/2
We introduce and investigate some properties of a class of nonlinear time series models based on moving sample quantiles in the autoregressive data generating process. We derive a test to detect this type of nonlinearity. Using daily realized volatility data for the Standard & Poor’s 500 (S&P 500) and several other indices, we obtained good performance with these models in an out-of-sample forecasting exercise, compared with forecasts based on the usual linear heterogeneous autoregressive and other models of realized volatility.Econometrics2015-01-1631Article10.3390/econometrics30100022542225-11462015-01-16doi: 10.3390/econometrics3010002Isao IshidaVirmantas Kvedaras<![CDATA[Econometrics, Vol. 3, Pages 1: Acknowledgement to Reviewers of Econometrics in 2014]]>
http://www.mdpi.com/2225-1146/3/1/1
The editors of Econometrics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2014:[...]Econometrics2015-01-0931Editorial10.3390/econometrics3010001112225-11462015-01-09doi: 10.3390/econometrics3010001 Econometrics Editorial Office<![CDATA[Econometrics, Vol. 2, Pages 217-249: The Biggest Myth in Spatial Econometrics]]>
http://www.mdpi.com/2225-1146/2/4/217
There is near universal agreement that estimates and inferences from spatial regression models are sensitive to particular specifications used for the spatial weight structure in these models. We find little theoretical basis for this commonly held belief, if estimates and inferences are based on the true partial derivatives for a well-specified spatial regression model. We conclude that this myth may have arisen from past applied work that incorrectly interpreted the model coefficients as if they were partial derivatives, or from use of misspecified models.Econometrics2014-12-2324Article10.3390/econometrics20402172172492225-11462014-12-23doi: 10.3390/econometrics2040217James LeSageR. Pace<![CDATA[Econometrics, Vol. 2, Pages 203-216: Testing for A Set of Linear Restrictions in VARMA Models Using Autoregressive Metric: An Application to Granger Causality Test]]>
http://www.mdpi.com/2225-1146/2/4/203
In this paper we propose a test for a set of linear restrictions in a Vector Autoregressive Moving Average (VARMA) model. This test is based on the autoregressive metric, a notion of distance between two univariate ARMA models, M0 and M1, introduced by Piccolo in 1990. In particular, we show that this set of linear restrictions is equivalent to a null distance d(M0, M1) between two given ARMA models. This result provides the logical basis for using d(M0, M1) = 0 as a null hypothesis in our test. Some Monte Carlo evidence about the finite sample behavior of our testing procedure is provided and two empirical examples are presented.Econometrics2014-12-2224Article10.3390/econometrics20402032032162225-11462014-12-22doi: 10.3390/econometrics2040203Francesca Di IorioUmberto Triacca<![CDATA[Econometrics, Vol. 2, Pages 169-202: Success at the Summer Olympics: How Much Do Economic Factors Explain?]]>
http://www.mdpi.com/2225-1146/2/4/169
Many econometric analyses have attempted to model medal winnings as dependent on per capita GDP and population size. This approach ignores the size and composition of the team of athletes, especially the role of female participation and the role of sports culture, and also provides an inadequate explanation of the variability between the outcomes of countries with similar features. This paper proposes a model that offers two substantive advancements, both of which shed light on previously hidden aspects of Olympic success. First, we propose a selection model that treats the process of fielding any winner and the subsequent level of total winnings as two separate, but related, processes. Second, our model takes a more structural angle, in that we view GDP and population size as inputs into the “production” of athletes. After that production process, those athletes then compete to win medals. We use country-level panel data for the seven Summer Olympiads from 1988 to 2012. The size and composition of the country’s Olympic team are shown to be highly significant factors, as is also the past performance, which generates a persistence effect.Econometrics2014-12-0524Article10.3390/econometrics20401691692022225-11462014-12-05doi: 10.3390/econometrics2040169Pravin TrivediDavid Zimmer<![CDATA[Econometrics, Vol. 2, Pages 151-168: A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model]]>
http://www.mdpi.com/2225-1146/2/4/151
The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test on whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.Econometrics2014-10-2324Article10.3390/econometrics20401511511682225-11462014-10-23doi: 10.3390/econometrics2040151Michael Pfaffermayr<![CDATA[Econometrics, Vol. 2, Pages 145-150: Asymmetry and Leverage in Conditional Volatility Models]]>
http://www.mdpi.com/2225-1146/2/3/145
The three most popular univariate conditional volatility models are the generalized autoregressive conditional heteroskedasticity (GARCH) model of Engle (1982) and Bollerslev (1986), the GJR (or threshold GARCH) model of Glosten, Jagannathan and Runkle (1992), and the exponential GARCH (or EGARCH) model of Nelson (1990, 1991). The underlying stochastic specification to obtain GARCH was demonstrated by Tsay (1987), and that of EGARCH was shown recently in McAleer and Hafner (2014). These models are important in estimating and forecasting volatility, as well as in capturing asymmetry, which refers to the different effects on conditional volatility of positive and negative shocks of equal magnitude, and purportedly in capturing leverage, which is the negative correlation between returns shocks and subsequent shocks to volatility. As there seems to be some confusion in the literature between asymmetry and leverage, as well as about which asymmetric models are purported to be able to capture leverage, the purpose of the paper is three-fold, namely: (1) to derive the GJR model from a random coefficient autoregressive process, with appropriate regularity conditions; (2) to show that leverage is not possible in the GJR and EGARCH models; and (3) to present the interpretation of the parameters of the three popular univariate conditional volatility models in a unified manner.Econometrics2014-09-2423Article10.3390/econometrics20301451451502225-11462014-09-24doi: 10.3390/econometrics2030145Michael McAleer
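The asymmetry-versus-leverage distinction is easiest to see in the GJR conditional variance equation, stated below in LaTeX using generic textbook notation (assumed here, not copied from the paper):

    % GJR (threshold GARCH) conditional variance; I(.) is an indicator function
    h_t = \omega + \alpha\,\varepsilon_{t-1}^2
          + \gamma\, I\!\left(\varepsilon_{t-1} < 0\right) \varepsilon_{t-1}^2
          + \beta\, h_{t-1}.

With \gamma > 0, negative shocks raise conditional volatility more than positive shocks of the same size, which is asymmetry; the paper's point (2) is that this is not the same thing as leverage in the sense of a negative correlation between returns shocks and subsequent volatility shocks.

<![CDATA[Econometrics, Vol. 2, Pages 123-144: Two-Part Models for Fractional Responses Defined as Ratios of Integers]]>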
http://www.mdpi.com/2225-1146/2/3/123
This paper discusses two alternative two-part models for fractional response variables that are defined as ratios of integers. The first two-part model assumes a Binomial distribution and known group size. It nests the one-part fractional response model proposed by Papke and Wooldridge (1996) and, thus, allows one to apply Wald, LM and/or LR tests in order to discriminate between the two models. The second model extends the first one by allowing for overdispersion in the data. We demonstrate the usefulness of the proposed two-part models for data on the 401(k) pension plan participation rates used in Papke and Wooldridge (1996).Econometrics2014-09-1923Article10.3390/econometrics20301231231442225-11462014-09-19doi: 10.3390/econometrics2030123Harald OberhoferMichael Pfaffermayr<![CDATA[Econometrics, Vol. 2, Pages 98-122: A Fast, Accurate Method for Value-at-Risk and Expected Shortfall]]>
http://www.mdpi.com/2225-1146/2/2/98
A fast method is developed for value-at-risk and expected shortfall prediction for univariate asset return time series exhibiting leptokurtosis, asymmetry and conditional heteroskedasticity. It is based on a GARCH-type process driven by noncentral t innovations. While the method involves the use of several shortcuts for speed, it performs admirably in terms of accuracy and actually outperforms highly competitive models. Most remarkably, this is the case also for sample sizes as small as 250.Econometrics2014-06-2522Article10.3390/econometrics2020098981222225-11462014-06-25doi: 10.3390/econometrics2020098Jochen KrauseMarc Paolella<![CDATA[Econometrics, Vol. 2, Pages 92-97: A One Line Derivation of EGARCH]]>
http://www.mdpi.com/2225-1146/2/2/92
One of the most popular univariate asymmetric conditional volatility models is the exponential GARCH (or EGARCH) specification. In addition to asymmetry, which captures the different effects on conditional volatility of positive and negative effects of equal magnitude, EGARCH can also accommodate leverage, which is the negative correlation between returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-) maximum likelihood estimator of the EGARCH parameters are not available under general conditions, but rather only for special cases under highly restrictive and unverifiable conditions. It is often argued heuristically that the reason for the lack of general statistical properties arises from the presence in the model of an absolute value of a function of the parameters, which does not permit analytical derivatives, and hence does not permit (quasi-) maximum likelihood estimation. It is shown in this paper for the non-leverage case that: (1) the EGARCH model can be derived from a random coefficient complex nonlinear moving average (RCCNMA) process; and (2) the reason for the lack of statistical properties of the estimators of EGARCH under general conditions is that the stationarity and invertibility conditions for the RCCNMA process are not known.Econometrics2014-06-2322Article10.3390/econometrics202009292972225-11462014-06-23doi: 10.3390/econometrics2020092Michael McAleerChristian Hafner<![CDATA[Econometrics, Vol. 2, Pages 72-91: Credible Granger-Causality Inference with Modest Sample Lengths: A Cross-Sample Validation Approach]]>
http://www.mdpi.com/2225-1146/2/1/72
Credible Granger-causality analysis appears to require post-sample inference, as it is well-known that in-sample fit can be a poor guide to actual forecasting effectiveness. However, post-sample model testing requires an often-consequential a priori partitioning of the data into an “in-sample” period, purportedly utilized only for model specification/estimation, and a “post-sample” period, purportedly utilized (only at the end of the analysis) for model validation/testing purposes. This partitioning is usually infeasible, however, with samples of modest length (e.g., T ≤ 150), as is common in quarterly data sets and in monthly data sets where institutional arrangements vary over time, simply because there is in such cases insufficient data available to credibly accomplish both purposes separately. A cross-sample validation (CSV) testing procedure is proposed below which eliminates the aforementioned a priori partitioning and substantially ameliorates this power-versus-credibility predicament, preserving most of the power of in-sample testing (by utilizing all of the sample data in the test), while retaining most of the credibility of post-sample testing (by always basing model forecasts on data not utilized in estimating that particular model’s coefficients). Simulations show that the price paid, in terms of power relative to the in-sample Granger-causality F test, is manageable. An illustrative application is given, to a re-analysis of the Engel and West [1] study of the causal relationship between macroeconomic fundamentals and the exchange rate; several of their conclusions are changed by our analysis.Econometrics2014-03-2521Article10.3390/econometrics201007272912225-11462014-03-25doi: 10.3390/econometrics2010072Richard AshleyKwok Tsang<![CDATA[Econometrics, Vol. 2, Pages 45-71: Bias-Correction in Vector Autoregressive Models: A Simulation Study]]>
http://www.mdpi.com/2225-1146/2/1/45
We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it compares very favorably in non-stationary models.Econometrics2014-03-1321Article10.3390/econometrics201004545712225-11462014-03-13doi: 10.3390/econometrics2010045Tom EngstedThomas Pedersen
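A minimal Python sketch of the bootstrap bias-correction the study compares against is below, shown for a univariate AR(1) for brevity; the paper works with VARs, but the resampling and correction logic carries over. All names and the simulated design are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)

    def ols_ar1(y):
        # OLS slope of y_t on y_{t-1} (no intercept, for brevity)
        return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

    def simulate_ar1(phi, e):
        y = np.empty(len(e) + 1)
        y[0] = 0.0
        for t, eps in enumerate(e):
            y[t + 1] = phi * y[t] + eps
        return y

    T, B, phi = 100, 999, 0.9
    y = simulate_ar1(phi, rng.standard_normal(T))
    phi_hat = ols_ar1(y)

    # Residual bootstrap: re-simulate under phi_hat, re-estimate B times
    res = y[1:] - phi_hat * y[:-1]
    boots = np.array([ols_ar1(simulate_ar1(phi_hat, rng.choice(res, T)))
                      for _ in range(B)])

    # Bias-corrected estimate: subtract the estimated bias E[phi_boot] - phi_hat;
    # note that such corrections can push an estimate toward or past the unit
    # root, the risk the abstract highlights
    phi_bc = 2 * phi_hat - boots.mean()
    print(f"OLS: {phi_hat:.3f}, bias-corrected: {phi_bc:.3f} (true {phi})")

<![CDATA[Econometrics, Vol. 2, Pages 20-44: Incorporating Responsiveness to Marketing Efforts in Brand Choice Modeling]]>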
http://www.mdpi.com/2225-1146/2/1/20
We put forward a brand choice model with unobserved heterogeneity that concerns responsiveness to marketing efforts. We introduce two latent segments of households. The first segment is assumed to respond to marketing efforts, while households in the second segment do not do so. Whether a specific household is a member of the first or the second segment at a specific purchase occasion is described by household-specific characteristics and characteristics concerning buying behavior. Households may switch between the two responsiveness states over time. When comparing the performance of our model with alternative choice models that account for various forms of heterogeneity for three different datasets, we find better face validity for our parameters. Our model also forecasts better.Econometrics2014-02-2121Article10.3390/econometrics201002020442225-11462014-02-21doi: 10.3390/econometrics2010020Dennis FokRichard PaapPhilip Franses<![CDATA[Econometrics, Vol. 2, Pages 1-19: Referee Bias and Stoppage Time in Major League Soccer: A Partially Adaptive Approach]]>
http://www.mdpi.com/2225-1146/2/1/1
This study extends prior research on referee bias and close bias in professional soccer by examining whether Major League Soccer (MLS) referees’ discretion over stoppage time (i.e., extra play beyond regulation) is influenced by end-of-regulation match scores and/or home field advantage. To do so, we employ a grouped-data regression model and a partially adaptive model. Both account for the imprecise measurement in reported stoppage time. For the 2011 season we find no home field advantage. In fact, stoppage time is the same with a one- or two-goal deficit at the end of regulation, regardless of which team is ahead. However, the 2011 results do point to an increase in stoppage time of 12 to 20 seconds for nationally televised matches. For the 2012 season, the nationally televised effect disappears due to an increase in stoppage time for those matches not nationally televised. However, a home field advantage is present. Facing a one-goal deficit at the end of regulation, the home team receives about 33 seconds more stoppage time than a visiting team facing the same deficit.Econometrics2014-02-1721Article10.3390/econometrics20100011192225-11462014-02-17doi: 10.3390/econometrics2010001Katherine YewellSteven CaudillFranklin Mixon, Jr.<![CDATA[Econometrics, Vol. 1, Pages 249-280: Academic Rankings with RePEc]]>
http://www.mdpi.com/2225-1146/1/3/249
This article describes the data collection and use of data for the computation of rankings within RePEc (Research Papers in Economics). This encompasses the determination of impact factors for journals and working paper series, as well as the ranking of authors, institutions, and geographic regions. The various ranking methods are also compared, using a snapshot of the data.Econometrics2013-12-1713Article10.3390/econometrics10302492492802225-11462013-12-17doi: 10.3390/econometrics1030249Christian Zimmermann<![CDATA[Econometrics, Vol. 1, Pages 236-248: Polynomial Regressions and Nonsense Inference]]>
http://www.mdpi.com/2225-1146/1/3/236
Polynomial specifications are widely used, not only in applied economics, but also in epidemiology, physics, political analysis and psychology, to mention just a few examples. In many cases, the data employed to estimate such specifications are time series that may exhibit stochastic nonstationary behavior. We extend Phillips’ results (Phillips, P. Understanding spurious regressions in econometrics. J. Econom. 1986, 33, 311–340) by proving that inference drawn from polynomial specifications under stochastic nonstationarity is misleading unless the variables cointegrate. We use a generalized polynomial specification as a vehicle to study its asymptotic and finite-sample properties. Our results therefore counsel caution whenever practitioners estimate polynomial regressions.Econometrics2013-11-1813Article10.3390/econometrics10302362362482225-11462013-11-18doi: 10.3390/econometrics1030236Daniel Ventosa-SantaulàriaCarlos Rodríguez-Caballero<![CDATA[Econometrics, Vol. 1, Pages 217-235: Ranking Leading Econometrics Journals Using Citations Data from ISI and RePEc]]>
http://www.mdpi.com/2225-1146/1/3/217
The paper focuses on the robustness of rankings of academic journal quality and research impact of 10 leading econometrics journals taken from the Thomson Reuters ISI Web of Science (ISI) Category of Economics, using citations data from ISI and the highly accessible Research Papers in Economics (RePEc) database that is widely used in economics, finance and related disciplines. The journals are ranked using quantifiable static and dynamic Research Assessment Measures (RAMs), with 15 RAMs from ISI and five RAMs from RePEc. The similarities and differences in various RAMs, which are based on alternative weighted and unweighted transformations of citations, are highlighted to show which RAMs are able to provide informational value relative to others. The RAMs include the impact factor, mean citations and non-citations, journal policy, number of high quality papers, and journal influence and article influence. The paper highlights robust rankings based on the harmonic mean of the ranks of 20 RAMs, which in some cases are closely related. It is shown that emphasizing the most widely-used RAM, the two-year impact factor of a journal, can lead to a distorted evaluation of journal quality, impact and influence relative to the harmonic mean of the ranks. Some suggestions regarding the use of the most informative RAMs are also given.Econometrics2013-11-1813Article10.3390/econometrics10302172172352225-11462013-11-18doi: 10.3390/econometrics1030217Chia-Lin ChangMichael McAleer<![CDATA[Econometrics, Vol. 1, Pages 207-216: The Geometric Meaning of the Notion of Joint Unpredictability of a Bivariate VAR(1) Stochastic Process]]>
http://www.mdpi.com/2225-1146/1/3/207
This paper investigates, in a particular parametric framework, the geometric meaning of joint unpredictability for a bivariate discrete process. In particular, the paper provides a characterization of joint unpredictability in terms of the distance between information sets in a Hilbert space.Econometrics2013-11-1413Article10.3390/econometrics10302072072162225-11462013-11-14doi: 10.3390/econometrics1030207Umberto Triacca<![CDATA[Econometrics, Vol. 1, Pages 180-206: Structural Panel VARs]]>
http://www.mdpi.com/2225-1146/1/2/180
The paper proposes a structural approach to VAR analysis in panels, which takes into account responses to both idiosyncratic and common structural shocks, while permitting full cross member heterogeneity of the response dynamics. In the context of this structural approach, estimation of the loading matrices for the decomposition into idiosyncratic versus common shocks is straightforward and transparent. The method appears to do remarkably well at uncovering the properties of the sample distribution of the underlying structural dynamics, even when the panels are relatively short, as illustrated in Monte Carlo simulations. Finally, these simulations also illustrate that the SVAR panel method can be used to improve inference, not only for properties of the sample distribution, but also for dynamics of individual members of the panel that lack adequate data for a conventional time series SVAR analysis. This is accomplished by using fitted cross sectional regressions of the sample of estimated panel responses to correlated static measures, and using these to interpolate the member-specific dynamics.Econometrics2013-09-2412Article10.3390/econometrics10201801802062225-11462013-09-24doi: 10.3390/econometrics1020180Peter Pedroni<![CDATA[Econometrics, Vol. 1, Pages 157-179: Parametric and Nonparametric Frequentist Model Selection and Model Averaging]]>
http://www.mdpi.com/2225-1146/1/2/157
This paper presents recent developments in model selection and model averaging for parametric and nonparametric models. While there is extensive literature on model selection under parametric settings, we present recently developed results in the context of nonparametric models. In applications, estimation and inference are often conducted under the selected model without considering the uncertainty from the selection process. This often leads to inefficiency in results and misleading confidence intervals. An alternative to model selection is thus model averaging, in which the estimator is a weighted sum of the estimators from all the submodels; this reduces model uncertainty. In recent years, there has been significant interest in model averaging and some important developments have taken place in this area. We present results for both the parametric and nonparametric cases. Some possible topics for future research are also indicated.Econometrics2013-09-2012Article10.3390/econometrics10201571571792225-11462013-09-20doi: 10.3390/econometrics1020157Aman UllahHuansha Wang<![CDATA[Econometrics, Vol. 1, Pages 141-156: Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging]]>
http://www.mdpi.com/2225-1146/1/2/141
This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.Econometrics2013-07-0312Article10.3390/econometrics10201411411562225-11462013-07-03doi: 10.3390/econometrics1020141Naoya Sueishi<![CDATA[Econometrics, Vol. 1, Pages 127-140: Forecasting Value-at-Risk Using High-Frequency Information]]>
http://www.mdpi.com/2225-1146/1/1/127
In the prediction of quantiles of daily Standard & Poor’s 500 (S&P 500) returns, we consider how to use high-frequency 5-minute data. We examine methods that incorporate the high-frequency information either indirectly, through combining forecasts (using forecasts generated from returns sampled at different intraday intervals), or directly, through combining high-frequency information into one model. We consider subsample averaging, bootstrap averaging and forecast averaging methods for the indirect case, and factor models with a principal component approach for both the direct and indirect cases. We show that, in forecasting the daily S&P 500 index return quantile (Value-at-Risk, or VaR, is simply its negative), using high-frequency information is beneficial, often substantially so, particularly in forecasting downside risk. Our empirical results show that the averaging methods (subsample averaging, bootstrap averaging, forecast averaging), which serve as different ways of forming the ensemble average from high-frequency intraday information, provide excellent forecasting performance compared to using just low-frequency daily information.Econometrics2013-06-2111Article10.3390/econometrics10101271271402225-11462013-06-21doi: 10.3390/econometrics1010127Huiyu HuangTae-Hwy Lee