Econometrics doi: 10.3390/econometrics8030038

Authors: Yuanyuan Li, Dietmar Bauer

In this paper the theory on the estimation of vector autoregressive (VAR) models for I(2) processes is extended to the case of long VAR approximation of more general processes. Here, the order of the autoregression is allowed to tend to infinity at a rate that depends on the sample size. We deal with unrestricted OLS estimators (in the model formulated in levels as well as in vector error correction form) and with two-stage estimation (2SI2) in the vector error correction model (VECM) formulation. Our main results are analogous to the I(1) case: We show that the long VAR approximation leads to consistent estimates of the long- and short-run dynamics. Furthermore, tests on the autoregressive coefficients follow standard asymptotics. The pseudo likelihood ratio tests on the cointegrating ranks (using the Gaussian likelihood) used in the 2SI2 algorithm have, under the null hypothesis, the same distributions as in the case of data generating processes following finite order VARs. The same holds true for the asymptotic distribution of the long-run dynamics, both in the unrestricted VECM estimation and in the reduced rank regression in the 2SI2 algorithm. Building on these results, we show that if the data are generated by an invertible VARMA process, the VAR approximation can be used to derive a consistent initial estimator for subsequent pseudo likelihood optimization in the VARMA model.

Econometrics doi: 10.3390/econometrics8030037

Authors: C. Vladimir Rodríguez-Caballero, J. Eduardo Vera-Valdés

This paper studies long economic series to assess the long-lasting effects of pandemics. We analyze whether periods that cover pandemics exhibit a change in trend and persistence in growth, and in level and persistence in unemployment. We find that there is an upward trend in the persistence level of growth across centuries. In particular, shocks originating from pandemics in recent times seem to have a permanent effect on growth. Moreover, our results show that the unemployment rate increases and becomes more persistent after a pandemic. In this regard, our findings support the design and implementation of timely counter-cyclical policies to soften the shock of the pandemic.

Econometrics doi: 10.3390/econometrics8030036

Authors: Jeremy Arkes

Building on Joshua Angrist and Jörn-Steffen Pischke's arguments for how the teaching of undergraduate econometrics could become more effective, I propose a redesign of graduate econometrics that would better serve most students and help make the field of economics more relevant. The primary basis for the redesign is that the conventional methods do not adequately prepare students to recognize biases and to properly interpret significance, insignificance, and p-values; and there is an ethical problem in searching for significance, among other matters. Based on these premises, I recommend that some of Angrist and Pischke's recommendations be adopted for graduate econometrics. In addition, I recommend further shifts in emphasis, new pedagogy, and the addition of important components (e.g., on interpretations and simple ethical lessons) that are largely ignored in current textbooks. An obvious implication of these recommended changes is a confirmation of most of Angrist and Pischke's recommendations for undergraduate econometrics, as well as further reductions in complexity.

Econometrics doi: 10.3390/econometrics8030035

Authors: D. Stephen G. Pollock

The econometric data to which autoregressive moving-average models are commonly applied are liable to contain elements from a limited range of frequencies. If the data do not cover the full Nyquist frequency range of [0, π] radians, then severe biases can occur in estimating their parameters. The recourse should be to reconstitute the underlying continuous data trajectory and to resample it at an appropriate lesser rate. The trajectory can be derived by associating sinc function kernels with the data points. This suggests a model for the underlying processes. The paper describes frequency-limited linear stochastic differential equations that conform to such a model, and it compares them with equations of a model that is assumed to be driven by a white-noise process of unbounded frequencies. The means of estimating models of both varieties are described.
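The sinc-kernel reconstruction described in this abstract can be sketched generically as Whittaker–Shannon interpolation. This is a minimal illustration, not the paper's own procedure; the unit sampling interval `dt` and the test signal are assumptions made here for the example:

```python
import numpy as np

def sinc_interpolate(samples, t, dt=1.0):
    """Reconstruct a band-limited trajectory by associating a sinc
    kernel with each data point, then evaluate it at arbitrary times t."""
    n = np.arange(len(samples)) * dt
    # np.sinc(x) = sin(pi*x)/(pi*x), the Shannon interpolation kernel
    return np.array([np.sum(samples * np.sinc((ti - n) / dt))
                     for ti in np.atleast_1d(t)])

x = np.sin(0.3 * np.arange(20))       # samples of a low-frequency signal
y = sinc_interpolate(x, [4.0, 4.5])   # resample between the data points
```

Resampling the reconstructed trajectory at a lesser rate then amounts to evaluating it on a coarser grid of time points.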

Econometrics doi: 10.3390/econometrics8030034

Authors: Yong Bao, Xiaotian Liu, Lihong Yang

The ordinary least squares (OLS) estimator for spatial autoregressions may be consistent, as pointed out by Lee (2002), provided that each spatial unit is influenced aggregately by a significant portion of the total units. This paper presents a unified asymptotic distribution result for the properly recentered OLS estimator and proposes a new estimator that is based on the indirect inference (II) procedure. The resulting estimator can always be used regardless of the degree of aggregate influence on each spatial unit from other units and is consistent and asymptotically normal. The new estimator does not rely on distributional assumptions and is robust to unknown heteroscedasticity. Its good finite-sample performance, in comparison with existing estimators that are also robust to heteroscedasticity, is demonstrated by a Monte Carlo study.

Econometrics doi: 10.3390/econometrics8030033

Authors: Stefan Mittnik, Willi Semmler, Alexander Haider

Recent research in financial economics has shown that rare large disasters have the potential to disrupt financial sectors via the destruction of capital stocks and jumps in risk premia. These disruptions often entail negative feedback effects on the macroeconomy. Research on disaster risks has also actively been pursued in the macroeconomic models of climate change. Our paper uses insights from the former work to study disaster risks in the macroeconomics of climate change and to spell out policy needs. Empirically, the link between carbon dioxide emissions and the frequency of climate-related disasters is investigated using a panel data approach. The modeling part then uses a multi-phase dynamic macro model to explore the effects of rare large disasters resulting in capital losses and rising risk premia. Our proposed multi-phase dynamic model, incorporating climate-related disaster shocks and their aftermath as a distressed phase, is suitable for studying mitigation and adaptation policies as well as recovery policies.

Econometrics doi: 10.3390/econometrics8030032

Authors: Katsuto Tanaka, Weilin Xiao, Jun Yu

This paper estimates the drift parameters in the fractional Vasicek model from a continuous record of observations via maximum likelihood (ML). The asymptotic theory for the ML estimates (MLE) is established in the stationary case, the explosive case, and the boundary case for the entire range of the Hurst parameter, providing a complete treatment of asymptotic analysis. It is shown that changing the sign of the persistence parameter changes the asymptotic theory for the MLE, including the rate of convergence and the limiting distribution. It is also found that the asymptotic theory depends on the value of the Hurst parameter.

Econometrics doi: 10.3390/econometrics8030031

Authors: Kevin D. Hoover

The relations among causal structure, cointegration, and long-run weak exogeneity are explored using some ideas drawn from the literature on graphical causal modeling. It is assumed that the fundamental source of trending behavior is transmitted from exogenous (and typically latent) trending variables to a set of causally ordered variables that would not themselves display nonstationary behavior if the nonstationary exogenous causes were absent. The possibility of inferring the long-run causal structure among a set of time-series variables from an exhaustive examination of weak exogeneity in irreducibly cointegrated subsets of variables is explored and illustrated.

Econometrics doi: 10.3390/econometrics8030030

Authors: Peter C. B. Phillips

We discuss some conceptual and practical issues that arise from the presence of global energy balance effects on station level adjustment mechanisms in dynamic panel regressions with climate data. The paper provides asymptotic analyses, observational data computations, and Monte Carlo simulations to assess the use of various estimation methodologies, including standard dynamic panel regression and cointegration techniques that have been used in earlier research. The findings reveal massive bias in system GMM estimation of the dynamic panel regression parameters, which arises from fixed effect heterogeneity across individual station level observations. Difference GMM and Within Group (WG) estimation have little bias, and WG estimation is recommended for practical implementation of dynamic panel regression with highly disaggregated climate data. Intriguingly, from an econometric perspective and importantly for global policy analysis, it is shown that in this model, despite the substantial differences between the estimates of the regression model parameters, estimates of global transient climate sensitivity (of temperature to a doubling of atmospheric CO2) are robust to the estimation method employed and to the specific nature of the trending mechanism in global temperature, radiation, and CO2.

Econometrics doi: 10.3390/econometrics8030029

Authors: Marit Gjelsvik, Ragnar Nymoen, Victoria Sparrman

Wage coordination plays an important role in macroeconomic stabilization. Pattern wage bargaining systems have been common in Europe, but in different forms and with different degrees of success in terms of the actual coordination reached. We focus on wage formation in Norway, a small open economy, where it is customary to regard the manufacturing industry as the wage leader. We estimate a model of wage formation in manufacturing and in two other sectors. Deciding the cointegration rank is an important step in the analysis, economically as well as statistically. In combination with simultaneous equation modelling, the cointegration analysis provides evidence that collective wage negotiations in manufacturing have defined wage norms for the rest of the economy over the period 1980(1)–2014(4).

Econometrics doi: 10.3390/econometrics8030028

Authors: Manveer Kaur Mangat, Erhard Reschenhofer

The goal of this paper is to search for conclusive evidence against the stationarity of the global air surface temperature, which is one of the most important indicators of climate change. For this purpose, possible long-range dependencies are investigated in the frequency domain. Since conventional tests of hypotheses about the memory parameter, which measures the degree of long-range dependence, are typically based on asymptotic arguments and are therefore of limited practical value in the case of small or medium sample sizes, we employ a new small-sample test as well as a related estimator for the memory parameter. To safeguard against false positive findings, simulation studies are carried out to examine the suitability of the employed methods, and hemispheric datasets are used to check the robustness of the empirical findings against low-frequency natural variability caused by oceanic cycles. Overall, our frequency-domain analysis provides strong evidence of non-stationarity, which is consistent with previous results obtained in the time domain with models allowing for stochastic or deterministic trends.
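The conventional frequency-domain approach to the memory parameter that the authors contrast with their small-sample test can be illustrated by a standard Geweke–Porter-Hudak (GPH) log-periodogram regression. This is a generic sketch, not the authors' own estimator, and the bandwidth choice m = sqrt(n) is an assumption made here for the example:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak estimator of the memory parameter d:
    regress the log-periodogram on a log-frequency term near the origin."""
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                  # a common bandwidth choice
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    # periodogram ordinates at the first m Fourier frequencies
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]
    pgram = np.abs(dft) ** 2 / (2 * np.pi * n)
    regressor = -2 * np.log(2 * np.sin(freqs / 2))
    slope = np.polyfit(regressor, np.log(pgram), 1)[0]
    return slope                           # estimate of d

rng = np.random.default_rng(0)
white = rng.standard_normal(2000)
d_white = gph_estimate(white)              # near 0 for short-memory noise
d_rw = gph_estimate(np.cumsum(white))      # near 1 for an integrated series
```

A stationary short-memory series has d near 0, while a unit-root series has d near 1; values in between indicate fractional integration.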

Econometrics doi: 10.3390/econometrics8030027

Authors: Céline Cunen, Nils Lid Hjort

When using the Focused Information Criterion (FIC) for assessing and ranking candidate models with respect to how well they do for a given estimation task, it is customary to produce a so-called FIC plot. This plot has the different point estimates along the y-axis and the root-FIC scores on the x-axis, these being the estimated root-mean-square scores. In this paper we address the estimation uncertainty involved in each of the points of such a FIC plot. This needs careful assessment of each of the estimators from the candidate models, taking also modelling bias into account, along with the relative precision of the associated estimated mean squared error quantities. We use confidence distributions for these tasks. This leads to fruitful CD–FIC plots, helping the statistician to judge to what extent the seemingly best models really are better than the other models. These efforts also lead to two further developments. The first is a new tool for model selection, which we call the quantile-FIC, which helps overcome certain difficulties associated with the usual FIC procedures, related to somewhat arbitrary schemes for handling estimated squared biases. A particular case is the median-FIC. The second development is to form model averaged estimators with weights determined by the relative sizes of the median- and quantile-FIC scores.

Econometrics doi: 10.3390/econometrics8020026

Authors: Francis Bilson Darku, Frank Konietschke, Bhargab Chattopadhyay

The Gini index, a widely used economic inequality measure, is computed using data whose designs involve clustering and stratification, generally known as complex household surveys. Under the complex household survey design, we develop two novel procedures for estimating the Gini index with a pre-specified error bound and confidence level. The two proposed approaches are based on the concept of sequential analysis, which is known to be economical in the sense of obtaining an optimal cluster size that reduces project cost (that is, total sampling cost), thereby achieving the pre-specified error bound and confidence level under reasonable assumptions. Some large sample properties of the proposed procedures are examined without assuming any specific distribution. Empirical illustrations of both procedures are provided using the consumption expenditure data obtained by the National Sample Survey (NSS) Organization in India.
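For reference, the Gini index itself (leaving aside the survey design and the sequential stopping rule, which are the paper's contribution) can be computed from a sorted sample via the standard mean-difference formula:

```python
import numpy as np

def gini(y):
    """Gini index of a sample: G = sum_i sum_j |y_i - y_j| / (2 n^2 mean(y)),
    computed in O(n log n) using the sorted-sample identity."""
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    return 2 * np.sum(np.arange(1, n + 1) * y) / (n * np.sum(y)) - (n + 1) / n

print(gini([1, 1, 1, 1]))   # perfect equality gives 0
print(gini([0, 0, 0, 1]))   # one holder of all income gives (n-1)/n
```

The sorted-sample identity weights each order statistic by its rank, which avoids the O(n^2) double sum over all pairs.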

Econometrics doi: 10.3390/econometrics8020025

Authors: Fernanda Valente, Márcio Laurini

In this paper, we analyze tornado occurrences in the United States. To perform inference for the spatio-temporal point process, we adopt a dynamic representation of the Log-Gaussian Cox Process. This representation is based on the decomposition of the intensity function into components of trend, cycles, and spatial effects. In this model, spatial effects are also represented by a dynamic functional structure, which allows analyzing possible changes in the spatio-temporal distribution of tornado occurrences due to possible changes in climate patterns. The model was estimated using Bayesian inference through Integrated Nested Laplace Approximations. We use data from the Storm Prediction Center's Severe Weather Database between 1954 and 2018, and the results provide evidence, from new perspectives, that trends in annual tornado occurrences in the United States have remained relatively constant, supporting previously reported findings.

Econometrics doi: 10.3390/econometrics8020024

Authors: Robert C. Jung, Andrew R. Tremayne

The paper is concerned with estimation and application of a special stationary integer autoregressive model where multiple binomial thinnings are not independent of one another. Parameter estimation in such models has hitherto been accomplished using the method of moments or nonlinear least squares, but not maximum likelihood. We obtain the conditional distribution needed to implement maximum likelihood. The sampling performance of the new estimator is compared to extant ones by reporting the results of some simulation experiments. An application to a stock-type data set of financial counts is provided, and the conditional distribution is used to compare two competing models and in forecasting.
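For intuition, the binomial thinning operator and a standard first-order integer autoregression (with independent thinnings; the paper's model relaxes exactly this independence) can be sketched as follows. The parameter values are assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

def thin(x, alpha):
    """Binomial thinning: alpha o x is the sum of x Bernoulli(alpha) draws."""
    return rng.binomial(x, alpha)

def simulate_inar1(n, alpha=0.5, lam=2.0, x0=0):
    """Standard INAR(1): X_t = alpha o X_{t-1} + eps_t, eps_t ~ Poisson(lam).
    The stationary mean is lam / (1 - alpha)."""
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        x[t] = thin(x[t - 1], alpha) + rng.poisson(lam)
    return x

counts = simulate_inar1(5000)   # integer-valued, serially dependent counts
```

Because thinning keeps the state integer-valued and non-negative, the recursion generates count data with autoregressive-style dependence.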

Econometrics doi: 10.3390/econometrics8020023

Authors: Virgilio Gómez-Rubio, Roger S. Bivand, Håvard Rue

The integrated nested Laplace approximation (INLA) for Bayesian inference is an efficient approach to estimate the posterior marginal distributions of the parameters and latent effects of Bayesian hierarchical models that can be expressed as latent Gaussian Markov random fields (GMRF). The representation as a GMRF allows the associated software R-INLA to estimate the posterior marginals in a fraction of the time required by typical Markov chain Monte Carlo algorithms. INLA can be extended by means of Bayesian model averaging (BMA) to increase the number of models that it can fit to conditional latent GMRFs. In this paper, we review the use of BMA with INLA and propose a new example on spatial econometrics models.

Econometrics doi: 10.3390/econometrics8020022

Authors: Alex Lenkoski, Fredrik L. Aanes

In economic applications, model averaging has found principal use in examining the validity of various theories related to observed heterogeneity in outcomes such as growth, development, and trade. Though often easy to articulate, these theories are imperfectly captured quantitatively. A number of different proxies are often collected for a given theory and the uneven nature of this collection requires care when employing model averaging. Furthermore, if valid, these theories ought to be relevant outside of any single narrowly focused outcome equation. We propose a methodology which treats theories as represented by latent indices, these latent processes controlled by model averaging on the proxy level. To achieve generalizability of the theory index our framework assumes a collection of outcome equations. We accommodate a flexible set of generalized additive models, enabling non-Gaussian outcomes to be included. Furthermore, selection of relevant theories also occurs on the outcome level, allowing for theories to be differentially valid. Our focus is on creating a set of theory-based indices directed at understanding a country's potential risk of macroeconomic collapse. These Sovereign Risk Indices are calibrated across a set of different "collapse" criteria, including default on sovereign debt, heightened potential for high unemployment or inflation, and dramatic swings in foreign exchange values. The goal of this exercise is to render a portable set of country/year theory indices which can find more general use in the research community.

Econometrics doi: 10.3390/econometrics8020021

Authors: Marcin Błażejowski, Jacek Kwiatkowski, Paweł Kufel

In this paper, we apply Bayesian averaging of classical estimates (BACE) and Bayesian model averaging (BMA) as automatic modeling procedures for two well-known macroeconometric models: UK demand for narrow money and long-term inflation. Empirical results verify the correctness of the BACE and BMA selection and exhibit similar or better forecasting performance compared with a non-pooling approach. As a benchmark, we use Autometrics, an algorithm for automatic model selection. Our study is implemented in easy-to-use gretl packages, which support parallel processing, automate numerical calculations, and allow for efficient computation.

Econometrics doi: 10.3390/econometrics8020020

Authors: Annalisa Cadonna, Sylvia Frühwirth-Schnatter, Peter Knaus

Time-varying parameter (TVP) models are very flexible in capturing gradual changes in the effect of explanatory variables on the outcome variable. However, in particular when the number of explanatory variables is large, there is a known risk of overfitting and poor predictive performance, since the effect of some explanatory variables is constant over time. We propose a new prior for variance shrinkage in TVP models, called the triple gamma. The triple gamma prior encompasses a number of priors that have been suggested previously, such as the Bayesian Lasso, the double gamma prior, and the Horseshoe prior. We present the desirable properties of such a prior and its relationship to Bayesian model averaging for variance selection. The features of the triple gamma prior are then illustrated in the context of time-varying parameter vector autoregressive models, both for a simulated dataset and for a series of macroeconomic variables in the Euro Area.

Econometrics doi: 10.3390/econometrics8020019

Authors: Bo Yu, Bruce Mizrach, Norman R. Swanson

We investigate the marginal predictive content of small versus large jump variation, when forecasting one-week-ahead cross-sectional equity returns, building on Bollerslev et al. (2020). We find that sorting on signed small jump variation leads to greater value-weighted return differentials between stocks in our highest- and lowest-quintile portfolios (i.e., high–low spreads) than when either signed total jump or signed large jump variation is sorted on. It is shown that the benefit of signed small jump variation investing is driven by stock selection within an industry, rather than industry bets. Investors prefer stocks with a high probability of having positive jumps, but they also tend to overweight safer industries. Also, consistent with the findings in Scaillet et al. (2018), upside (downside) jump variation negatively (positively) predicts future returns. However, signed (large/small/total) jump variation has stronger predictive power than both upside and downside jump variation. One reason large and small (signed) jump variation have differing marginal predictive contents is that the predictive content of signed large jump variation is negligible when controlling for either signed total jump variation or realized skewness. By contrast, signed small jump variation has unique information for predicting future returns, even when controlling for these variables. By analyzing earnings announcement surprises, we find that large jumps are closely associated with "big" news. However, while such news-related information is embedded in large jump variation, the information is generally short-lived, and dissipates too quickly to provide marginal predictive content for subsequent weekly returns. Finally, we find that small jumps are more likely to be diversified away than large jumps and tend to be more closely associated with idiosyncratic risks. This indicates that small jumps are more likely to be driven by liquidity conditions and trading activity.

Econometrics doi: 10.3390/econometrics8020018

Authors: Andrew B. Martinez

I analyze damage from hurricane strikes on the United States since 1955. Using machine learning methods to select the most important drivers for damage, I show that large errors in a hurricane's predicted landfall location result in higher damage. This relationship holds across a wide range of model specifications and when controlling for ex-ante uncertainty and potential endogeneity. Using a counterfactual exercise I find that the cumulative reduction in damage from forecast improvements since 1970 is about $82 billion, which exceeds the U.S. government's spending on the forecasts and private willingness to pay for them.

Econometrics doi: 10.3390/econometrics8020017

Authors: Dimitris Fouskakis, Ioannis Ntzoufras

This paper focuses on Bayesian model averaging (BMA) using the power-expected-posterior prior in objective Bayesian variable selection under normal linear models. We derive a BMA point estimate of a predicted value, and present computation and evaluation strategies for the prediction accuracy. We compare the performance of our method with that of similar approaches in a simulated and a real data example from economics.

Econometrics doi: 10.3390/econometrics8020016

Authors: Michael P. Clements

We apply a bootstrap test to determine whether some forecasters are able to make superior probability assessments to others. In contrast to some findings in the literature for point predictions, there is evidence that some individuals really are better than others. The testing procedure controls for the different economic conditions the forecasters may face, given that each individual responds to only a subset of the surveys. One possible explanation for the different findings for point predictions and histograms is explored: that newcomers may make less accurate histogram forecasts than experienced respondents given the greater complexity of the task.

Econometrics doi: 10.3390/econometrics8020015

Authors: Ali Mehrabani, Aman Ullah

In this paper, we propose an efficient weighted average estimator in Seemingly Unrelated Regressions. This average estimator shrinks a generalized least squares (GLS) estimator towards a restricted GLS estimator, where the restrictions represent possible parameter homogeneity specifications. The shrinkage weight is inversely proportional to a weighted quadratic loss function. The approximate bias and second moment matrix of the average estimator using the large-sample approximations are provided. We give the conditions under which the average estimator dominates the GLS estimator on the basis of their mean squared errors. We illustrate our estimator by applying it to a cost system for United States (U.S.) Commercial banks, over the period from 2000 to 2018. Our results indicate that on average most of the banks have been operating under increasing returns to scale. We find that over the recent years, scale economies are a plausible reason for the growth in average size of banks and that the tendency toward increasing scale is likely to continue.

Econometrics doi: 10.3390/econometrics8020014

Authors: Marta Boczoń, Jean-François Richard

In this paper, we propose a hybrid version of Dynamic Stochastic General Equilibrium models with an emphasis on parameter invariance and tracking performance at times of rapid changes (recessions). We interpret hypothetical balanced growth ratios as moving targets for economic agents that rely upon an Error Correction Mechanism to adjust to changes in target ratios driven by an underlying state Vector AutoRegressive process. Our proposal is illustrated by an application to a pilot Real Business Cycle model for the US economy from 1948 to 2019. An extensive recursive validation exercise over the last 35 years, covering three recessions, is used to highlight its parameter invariance and its tracking and 1- to 3-step-ahead forecasting performance, which outperform those of an unconstrained benchmark Vector AutoRegressive model.

Econometrics doi: 10.3390/econometrics8020013

Authors: Kamil Makieła, Błażej Mazur

This paper discusses Bayesian model averaging (BMA) in Stochastic Frontier Analysis and investigates the sensitivity of inference to prior assumptions made about the scale parameter of (in)efficiency. We turn our attention to the "standard" prior specifications for the popular normal-half-normal and normal-exponential models. To facilitate formal model comparison, we propose a model that nests both sampling models and generalizes the symmetric term of the compound error. Within this setup it is possible to develop coherent priors for model parameters in an explicit way. We analyze the sensitivity of different prior specifications on the aforementioned scale parameter with respect to posterior characteristics of technology, stochastic parameters, latent variables and, especially, the models' posterior probabilities, which are crucial for adequate inference pooling. We find that using incoherent priors on the scale parameter of inefficiency has (i) virtually no impact on the technology parameters; (ii) some impact on inference about the stochastic parameters and latent variables; and (iii) substantial impact on marginal data densities, which are crucial in BMA.

Econometrics doi: 10.3390/econometrics8020012

Authors: Lynda Khalaf, Beatriz Peraza López

A two-stage simulation-based framework is proposed to derive Identification Robust confidence sets by applying Indirect Inference, in the context of Autoregressive Moving Average (ARMA) processes for finite samples. Resulting objective functions are treated as test statistics, which are inverted rather than optimized, via the Monte Carlo test method. Simulation studies illustrate accurate size and good power. Projected impulse-response confidence bands are simultaneous by construction and exhibit robustness to parameter identification problems. The persistence of shocks on oil prices and returns is analyzed via impulse-response confidence bands. Our findings support the usefulness of impulse-responses as an empirically relevant transformation of the confidence set.

Econometrics doi: 10.3390/econometrics8010011

Authors: Richard A. Ashley, Christopher F. Parmeter

This work describes a versatile and readily-deployable sensitivity analysis of ordinary least squares (OLS) inference with respect to possible endogeneity in the explanatory variables of the usual k-variate linear multiple regression model. This sensitivity analysis is based on a derivation of the sampling distribution of the OLS parameter estimator, extended to the setting where some, or all, of the explanatory variables are endogenous. In exchange for restricting attention to possible endogeneity that is solely linear in nature (the most typical case), no additional model assumptions must be made beyond the usual ones for a model with stochastic regressors. The sensitivity analysis quantifies the sensitivity of hypothesis test rejection p-values and/or estimated confidence intervals to such endogeneity, enabling an informed judgment as to whether any selected inference is "robust" or "fragile." The usefulness of this sensitivity analysis, as a "screen" for potential endogeneity issues, is illustrated with an example from the empirical growth literature. This example is extended to an extremely large sample, so as to illustrate how the sensitivity analysis can be applied to parameter confidence intervals in the context of massive datasets, as in "big data".

Econometrics doi: 10.3390/econometrics8010010

Authors: Deliang Dai

A factor-model-based covariance matrix is used to build a new form of Mahalanobis distance. The distribution and related properties of the new Mahalanobis distances are derived. A new type of Mahalanobis distance based on the separated part of the factor model is defined. The contamination effects of outliers detected by the newly defined Mahalanobis distances are also investigated. An empirical example indicates that the newly proposed separated Mahalanobis distances outperform the original sample Mahalanobis distance.
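The basic construction, a covariance matrix with factor structure Σ = ΛΛ' + Ψ plugged into the Mahalanobis quadratic form, can be illustrated as follows. This is a generic sketch, not the paper's separated-distance estimator; the one-factor design and parameter values are assumptions:

```python
import numpy as np

def mahalanobis_distances(X, cov):
    """Squared Mahalanobis distance of each row of X from the sample mean,
    using the supplied covariance matrix."""
    centered = X - X.mean(axis=0)
    inv = np.linalg.inv(cov)
    # quadratic form c_i' inv c_i for every row i
    return np.einsum('ij,jk,ik->i', centered, inv, centered)

rng = np.random.default_rng(1)
# one-factor covariance: Sigma = loadings loadings' + diagonal noise
loadings = rng.normal(size=(5, 1))
psi = np.diag(np.full(5, 0.5))
sigma = loadings @ loadings.T + psi
X = rng.multivariate_normal(np.zeros(5), sigma, size=1000)
d2 = mahalanobis_distances(X, sigma)
# under normality d2 is approximately chi-squared with 5 degrees of freedom
```

Outlier screening then flags observations whose d2 exceeds an upper chi-squared quantile for the given dimension.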

Econometrics doi: 10.3390/econometrics8010009

Authors: Brendan P. M. McCabe, Christopher L. Skeels

The Poisson regression model remains an important tool in the econometric analysis of count data. In a pioneering contribution to the econometric analysis of such models, Lung-Fei Lee presented a specification test for a Poisson model against a broad class of discrete distributions sometimes called the Katz family. Two members of this alternative class are the binomial and negative binomial distributions, which are commonly used with count data to allow for under- and over-dispersion, respectively. In this paper we explore the structure of other distributions within the class and their suitability as alternatives to the Poisson model. Potential difficulties with the Katz likelihood lead us to investigate a class of point optimal tests of the Poisson assumption against the alternative of over-dispersion in both the regression and intercept-only cases. In a simulation study, we compare score tests of 'Poisson-ness' with various point optimal tests based on the Katz family, and conclude that it is possible to choose a point optimal test which is better in the intercept-only case, although the nuisance parameters arising in the regression case are problematic. One possible cause is a poor choice of the point at which to optimize. Consequently, we explore the use of the Hellinger distance to aid this choice. Ultimately we conclude that score tests remain the most practical approach to testing for over-dispersion in this context.

]]>Econometrics doi: 10.3390/econometrics8010008

Authors: Ramses Abul Naga Christopher Stapenhurst Gaston Yalonetzky

We examine the performance of asymptotic inference as well as bootstrap tests for the Alphabeta and Kobus–Miłoś family of inequality indices for ordered response data. We use Monte Carlo experiments to compare the empirical size and statistical power of asymptotic inference and the Studentized bootstrap test. In a broad variety of settings, both tests are found to have similar rejection probabilities of true null hypotheses, and similar power. Nonetheless, the asymptotic test remains correctly sized in the presence of certain types of severe class imbalances exhibiting very low or very high levels of inequality, whereas the bootstrap test becomes somewhat oversized in these extreme settings.

]]>Econometrics doi: 10.3390/econometrics8010007

Authors: Haili Zhang Guohua Zou

Functional data are a common and important data type in econometrics and have become increasingly easy to collect in the big data era. To improve estimation accuracy and reduce forecast risks with functional data, in this paper, we propose a novel cross-validation model averaging method for the generalized functional linear model, where the scalar response variable is related to a random function predictor by a link function. We establish an asymptotic optimality result for the weights selected by our method when the true model is not in the candidate model set. Our simulations show that the proposed method often performs better than the commonly used model selection and averaging methods. We also apply the proposed method to Beijing second-hand house price data.

]]>Econometrics doi: 10.3390/econometrics8010006

Authors: Shahram Amini Christopher F. Parmeter

We provide a general overview of Bayesian model averaging (BMA) along with the concept of jointness. We then describe the relative merits and attractiveness of the newest BMA software package, BMS, available in the statistical language R to implement a BMA exercise. BMS provides the user with a wide range of customizable priors for conducting a BMA exercise, provides ample graphs to visualize results, and offers several alternative model search mechanisms. We also provide an application of the BMS package to equity premia and describe a simple function that easily computes jointness measures of covariates and integrates with the BMS package.

]]>Econometrics doi: 10.3390/econometrics8010005

Authors: Tahsin Mehdi

Although a wide array of stochastic dominance tests exist for poverty measurement and identification, they assume the income distributions have independent poverty lines or a common absolute (fixed) poverty line. We propose a stochastic dominance test for comparing income distributions up to a common relative poverty line (i.e., some fraction of the pooled median). A Monte Carlo study demonstrates its superior performance over existing methods in terms of power. The test is then applied to some Canadian household survey data for illustration.

]]>Econometrics doi: 10.3390/econometrics8010004

Authors: David Ardia Lukasz T. Gatarek Lennart Hoogerheide Herman K. Van Dijk

The authors wish to make the following corrections to this paper (Ardia et al [...]

]]>Econometrics doi: 10.3390/econometrics8010003

Authors: Matteo Barigozzi Marco Lippi Matteo Luciani

Large-dimensional dynamic factor models and dynamic stochastic general equilibrium models, both widely used in empirical macroeconomics, deal with singular stochastic vectors, i.e., vectors of dimension r which are driven by a q-dimensional white noise, with q < r. The present paper studies cointegration and error correction representations for an I(1) singular stochastic vector y_t. It is easily seen that y_t is necessarily cointegrated with cointegrating rank c ≥ r − q. Our contributions are: (i) we generalize Johansen's proof of the Granger representation theorem to I(1) singular vectors under the assumption that y_t has rational spectral density; (ii) using recent results on singular vectors by Anderson and Deistler, we prove that for generic values of the parameters the autoregressive representation of y_t has a finite-degree polynomial. The relationship between the cointegration of the factors and the cointegration of the observable variables in a large-dimensional factor model is also discussed.

]]>Econometrics doi: 10.3390/econometrics8010002

Authors: Econometrics Editorial Office

The editorial team greatly appreciates the reviewers who have dedicated their considerable time and expertise to the journal's rigorous editorial process over the past 12 months, regardless of whether the papers are finally published or not [...]

]]>Econometrics doi: 10.3390/econometrics8010001

Authors: Krzysztof Piasecki Anna Łyczkowska-Hanćkowiak

The Japanese candlesticks technique is one of the well-known graphical methods for the dynamic analysis of securities. If we apply Japanese candlesticks to the analysis of high-frequency financial data, then we need a numerical representation of any Japanese candlestick. Kacprzak et al. have proposed representing Japanese candlesticks by ordered fuzzy numbers introduced by Kosiński and his collaborators. For some formal reasons, Kosiński's theory of ordered fuzzy numbers has been revised. The main goal of our paper is to propose a universal method of representing Japanese candlesticks by revised ordered fuzzy numbers. The discussion also justifies the need for such a revision of the numerical model of Japanese candlesticks. The following main kinds of Japanese candlestick are considered: White Candle (White Spinning), Black Candle (Black Spinning), Doji Star, Dragonfly Doji, Gravestone Doji, and Four Price Doji. As an example, we apply the numerical model of Japanese candlesticks to financial portfolio analysis.

]]>Econometrics doi: 10.3390/econometrics7040050

Authors: Peter C. B. Phillips Xiaohu Wang Yonghui Zhang

The usual t test, the t test based on heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimators, and the heteroskedasticity and autocorrelation robust (HAR) test are three statistics that are widely used in applied econometric work. The use of these significance tests in trend regression is of particular interest given the potential for spurious relationships in trend formulations. Following a longstanding tradition in the spurious regression literature, this paper investigates the asymptotic and finite sample properties of these test statistics in several spurious regression contexts, including regression of stochastic trends on time polynomials and regressions among independent random walks. Concordant with existing theory (Phillips 1986, 1998; Sun 2004, 2014b) the usual t test and HAC standardized test fail to control size as the sample size n → ∞ in these spurious formulations, whereas HAR tests converge to well-defined limit distributions in each case and therefore have the capacity to be consistent and control size. However, it is shown that when the number of trend regressors K → ∞, all three statistics, including the HAR test, diverge and fail to control size as n → ∞. These findings are relevant to high-dimensional nonstationary time series regressions where machine learning methods may be employed.

]]>Econometrics doi: 10.3390/econometrics7040049

Authors: Jau-er Chen Chen-Wei Hsiang

We propose an econometric procedure based mainly on the generalized random forests method. Not only does this process estimate the quantile treatment effect nonparametrically, but our procedure yields a measure of variable importance in terms of heterogeneity among control variables. We also apply the proposed procedure to reinvestigate the distributional effect of 401(k) participation on net financial assets, and the quantile earnings effect of participating in a job training program.

]]>Econometrics doi: 10.3390/econometrics7040048

Authors: Hiroyuki Kawakatsu

This paper considers observation driven models with conditional mean and variance dynamics for non-negative valued time series. The motivation is to relax the restriction imposed on the higher order moment dynamics in standard multiplicative error models driven only by the conditional mean dynamics. The empirical fit of a zero inflated mixture distribution is assessed with trade duration data with a large fraction of zero observations.

]]>Econometrics doi: 10.3390/econometrics7040047

Authors: Carsten Jentsch Lena Reichmann

The serial dependence of categorical data is commonly described using Markovian models. Such models are very flexible, but they can suffer from a huge number of parameters if the state space or the model order becomes large. To address the problem of a large number of model parameters, the class of (new) discrete autoregressive moving-average (NDARMA) models has been proposed as a parsimonious alternative to Markov models. However, NDARMA models do not allow any negative model parameters, which might be a severe drawback in practical applications. In particular, this model class cannot capture any negative serial correlation. For the special case of binary data, we propose an extension of the NDARMA model class that allows for negative model parameters and, hence, negative autocorrelations, leading to the considerably larger and more flexible model class of generalized binary ARMA (gbARMA) processes. We provide stationarity conditions, give the stationary solution, and derive stochastic properties of gbARMA processes. For the purely autoregressive case, classical Yule–Walker equations hold that facilitate parameter estimation of gbAR models. Yule–Walker type equations are also derived for gbARMA processes.
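The Yule–Walker idea for the first-order autoregressive case can be sketched as follows. This is our own toy illustration under assumptions we choose (a binary DAR(1)-type chain with marginal Bernoulli(1/2) and positive persistence parameter alpha), not the gbAR model from the paper: for order one, the Yule–Walker relation reduces to reading the parameter off the lag-1 autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a binary DAR(1)-type chain: with probability alpha copy the
# previous value, otherwise draw a fresh Bernoulli(1/2) value.
n, alpha = 5000, 0.4
x = np.empty(n, dtype=int)
x[0] = rng.integers(0, 2)
for t in range(1, n):
    x[t] = x[t - 1] if rng.random() < alpha else rng.integers(0, 2)

# Yule-Walker for order 1: the AR parameter equals the lag-1
# autocorrelation, so the sample autocorrelation is the estimator.
xc = x - x.mean()
rho1 = (xc[1:] @ xc[:-1]) / (xc @ xc)
alpha_hat = rho1
print(round(float(alpha_hat), 2))
```

For this chain the population lag-k autocorrelation is alpha^k, so the estimate should land near the true value 0.4 for a long series.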

]]>Econometrics doi: 10.3390/econometrics7040046

Authors: Marek Chudý Erhard Reschenhofer

Previous findings indicate that the inclusion of dynamic factors obtained from a large set of predictors can improve macroeconomic forecasts. In this paper, we explore three possible further developments: (i) using automatic criteria for choosing those factors which have the greatest predictive power; (ii) using only a small subset of preselected predictors for the calculation of the factors; and (iii) utilizing frequency-domain information for the estimation of the factor models. Reanalyzing a standard macroeconomic dataset of 143 U.S. time series and using the major measures of economic activity as dependent variables, we find that (i) is not helpful, whereas focusing on the low-frequency components of the factors and disregarding the high-frequency components can actually improve the forecasting performance for some variables. In the case of the gross domestic product, a combination of (ii) and (iii) yields the best results.

]]>Econometrics doi: 10.3390/econometrics7040045

Authors: John C. Chao Peter C. B. Phillips

This paper considers estimation and inference concerning the autoregressive coefficient (ρ) in a panel autoregression for which the degree of persistence in the time dimension is unknown. Our main objective is to construct confidence intervals for ρ that are asymptotically valid, having asymptotic coverage probability at least that of the nominal level uniformly over the parameter space. The starting point for our confidence procedure is the estimating equation of the Anderson–Hsiao (AH) IV procedure. It is well known that the AH IV estimation suffers from weak instrumentation when ρ is near unity. But it is not so well known that AH IV estimation is still consistent when ρ = 1. In fact, the AH estimating equation is very well-centered and is an unbiased estimating equation in the sense of Durbin (1960), a feature that is especially useful in confidence interval construction. We show that a properly normalized statistic based on the AH estimating equation, which we call the M statistic, is uniformly convergent and can be inverted to obtain asymptotically valid interval estimates. To further improve the informativeness of our confidence procedure in the unit root and near unit root regions and to alleviate the problem that the AH procedure has greater variation in these regions, we use information from unit root pretesting to select among alternative confidence intervals. Two sequential tests are used to assess how close ρ is to unity, and different intervals are applied depending on whether the test results indicate ρ to be near or far away from unity. When ρ is relatively close to unity, our procedure activates intervals whose width shrinks to zero at a faster rate than that of the confidence interval based on the M statistic. Only when both of our unit root tests reject the null hypothesis does our procedure turn to the M statistic interval, whose width has the optimal N^{-1/2}T^{-1/2} rate of shrinkage when the underlying process is stable. Our asymptotic analysis shows this pretest-based confidence procedure to have coverage probability that is at least the nominal level in large samples uniformly over the parameter space. Simulations confirm that the proposed interval estimation methods perform well in finite samples and are easy to implement in practice. A supplement to the paper provides an extensive set of new results on the asymptotic behavior of panel IV estimators in weak instrument settings.

]]>Econometrics doi: 10.3390/econometrics7040044

Authors: John Quiggin

This paper begins with the observation that the constrained maximisation central to model estimation and hypothesis testing may be interpreted as a kind of profit maximisation. The output of estimation is a model that maximises some measure of model fit, subject to costs that may be interpreted as the shadow price of constraints imposed on the model. The replication crisis may be regarded as a market failure in which the price of "significant" results is lower than would be socially optimal.

]]>Econometrics doi: 10.3390/econometrics7040043

Authors: Harry Joe

For modeling count time series data, one class of models is generalized integer autoregressive of order p based on thinning operators. It is shown how numerical maximum likelihood estimation is possible by inverting the probability generating function of the conditional distribution of an observation given the past p observations. Two data examples are included and show that thinning operators based on compounding can substantially improve the model fit compared with the commonly used binomial thinning operator.

]]>Econometrics doi: 10.3390/econometrics7040042

Authors: Takamitsu Kurita Bent Nielsen

This paper proposes a class of partial cointegrated models allowing for structural breaks in the deterministic terms. Moving-average representations of the models are given. It is then shown that, under the assumption of martingale difference innovations, the limit distributions of partial quasi-likelihood ratio tests for cointegrating rank have a close connection to those for standard full models. This connection facilitates a response surface analysis that is required to extract critical information about moments from large-scale simulation studies. An empirical illustration of the proposed methodology is also provided.

]]>Econometrics doi: 10.3390/econometrics7030041

Authors: Marius Matei Xari Rovira Núria Agell

We propose a methodology to include night volatility estimates in the day volatility modeling problem with high-frequency data in a realized generalized autoregressive conditional heteroskedasticity (GARCH) framework, which takes advantage of the natural relationship between the realized measure and the conditional variance. This improves volatility modeling by adding, in a two-factor structure, information on latent processes that occur while markets are closed but captures the leverage effect and maintains a mathematical structure that facilitates volatility estimation. A class of bivariate models that includes intraday, day, and night volatility estimates is proposed and was empirically tested to confirm whether using night volatility information improves the day volatility estimation. The results indicate a forecasting improvement using bivariate models over those that do not include night volatility estimates.

]]>Econometrics doi: 10.3390/econometrics7030040

Authors: Tian Xie

In this paper, we study forecasting problems of Bitcoin-realized volatility computed on data from the largest crypto exchange, Binance. Given the unique features of the crypto asset market, we find that conventional regression models exhibit strong model specification uncertainty. To circumvent this issue, we suggest using least squares model-averaging methods to model and forecast Bitcoin volatility. The empirical results demonstrate that least squares model-averaging methods in general outperform many other conventional regression models that ignore specification uncertainty.

]]>Econometrics doi: 10.3390/econometrics7030039

Authors: Wei Qian Craig A. Rolling Gang Cheng Yuhong Yang

It is often reported in the forecast combination literature that a simple average of candidate forecasts is more robust than sophisticated combining methods. This phenomenon is usually referred to as the "forecast combination puzzle". Motivated by this puzzle, we explore its possible explanations, including high variance in estimating the target optimal weights (estimation error), invalid weighting formulas, and model/candidate screening before combination. We show that the existing understanding of the puzzle should be complemented by the distinction of different forecast combination scenarios known as combining for adaptation and combining for improvement. Applying combining methods without considering the underlying scenario can itself cause the puzzle. Based on our new understandings, both simulations and real data evaluations are conducted to illustrate the causes of the puzzle. We further propose a multi-level AFTER strategy that can integrate the strengths of different combining methods and adapt intelligently to the underlying scenario. In particular, by treating the simple average as a candidate forecast, the proposed strategy is shown to reduce the heavy cost of estimation error and, to a large extent, mitigate the puzzle.
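The estimation-error mechanism behind the puzzle can be illustrated with a toy simulation of our own devising (not the paper's AFTER strategy): when candidate forecasts are similar and the training window is short, weights estimated by least squares carry substantial noise, so the simple average is frequently competitive. All settings below (sample sizes, number of forecasts, noise scale) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_test, m = 20, 500, 4

# Target series and m similar candidate forecasts (target plus noise)
y = rng.normal(size=n_train + n_test)
F = y[:, None] + rng.normal(scale=1.0, size=(n_train + n_test, m))

# "Optimal" weights estimated by least squares on the short window
w, *_ = np.linalg.lstsq(F[:n_train], y[:n_train], rcond=None)

# Out-of-sample mean squared errors of the two combinations
mse_est = np.mean((y[n_train:] - F[n_train:] @ w) ** 2)
mse_avg = np.mean((y[n_train:] - F[n_train:].mean(axis=1)) ** 2)
print(round(float(mse_avg), 3), round(float(mse_est), 3))
```

Rerunning with a longer training window shrinks the estimation error in `w`, which is exactly the trade-off the combining-for-adaptation versus combining-for-improvement distinction formalizes.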

]]>Econometrics doi: 10.3390/econometrics7030038

Authors: Qingfeng Liu Andrey L. Vasnev

To avoid the risk of misspecification between homoscedastic and heteroscedastic models, we propose a combination method based on ordinary least-squares (OLS) and generalized least-squares (GLS) model-averaging estimators. To select optimal weights for the combination, we suggest two information criteria and propose feasible versions that work even when the variance-covariance matrix is unknown. The optimality of the method is proven under some regularity conditions. The results of a Monte Carlo simulation demonstrate that the method is adaptive in the sense that it achieves almost the same estimation accuracy as if the homoscedasticity or heteroscedasticity of the error term were known.

]]>Econometrics doi: 10.3390/econometrics7030037

Authors: Richard M. Golden Steven S. Henley Halbert White T. Michael Kashner

Researchers are often faced with the challenge of developing statistical models with incomplete data. Exacerbating this situation is the possibility that either the researcher's complete-data model or the model of the missing-data mechanism is misspecified. In this article, we create a formal theoretical framework for developing statistical models and detecting model misspecification in the presence of incomplete data where maximum likelihood estimates are obtained by maximizing the observable-data likelihood function when the missing-data mechanism is assumed ignorable. First, we provide sufficient regularity conditions on the researcher's complete-data model to characterize the asymptotic behavior of maximum likelihood estimates in the simultaneous presence of both missing data and model misspecification. These results are then used to derive robust hypothesis testing methods for possibly misspecified models in the presence of Missing at Random (MAR) or Missing Not at Random (MNAR) missing data. Second, we introduce a method for the detection of model misspecification in missing data problems using recently developed Generalized Information Matrix Tests (GIMT). Third, we identify regularity conditions for the Missing Information Principle (MIP) to hold in the presence of model misspecification so as to provide useful computational covariance matrix estimation formulas. Fourth, we provide regularity conditions that ensure the observable-data expected negative log-likelihood function is convex in the presence of partially observable data when the amount of missingness is sufficiently small and the complete-data likelihood is convex. Fifth, we show that when the researcher has correctly specified a complete-data model with a convex negative likelihood function and an ignorable missing-data mechanism, then its strict local minimizer is the true parameter value for the complete-data model when the amount of missingness is sufficiently small. Our results thus provide new robust estimation, inference, and specification analysis methods for developing statistical models with incomplete data.

]]>Econometrics doi: 10.3390/econometrics7030036

Authors: Sophie van Huellen Duo Qin

This paper re-examines the instrumental variable (IV) approach to estimating returns to education by use of compulsory school law (CSL) in the US. We show that the IV approach amounts to a change in model specification by changing the causal status of the variable of interest. From this perspective, the IV-OLS (ordinary least squares) choice becomes a model selection issue between non-nested models and is hence testable using cross validation methods. It also enables us to unravel several logic flaws in the conceptualisation of IV-based models. Using the causal chain model specification approach, we overcome these flaws by carefully distinguishing returns to education from the treatment effect of CSL. We find relatively robust estimates for the first effect, while estimates for the second effect are hindered by measurement errors in the CSL indicators. We find reassurance for our approach in fundamental theories of statistical learning.

]]>Econometrics doi: 10.3390/econometrics7030035

Authors: Richard Kouamé Moussa

This paper introduces an estimation procedure for a random effects probit model in the presence of heteroskedasticity and a likelihood ratio test for homoskedasticity. The cases where the heteroskedasticity is due to individual effects, idiosyncratic errors, or both are analyzed. Monte Carlo simulations show that the test performs well in the case of a high degree of heteroskedasticity. Furthermore, the power of the test increases with larger individual and time dimensions. The robustness analysis shows that applying the wrong approach may generate misleading results, except for the case where both individual effects and idiosyncratic errors are modelled as heteroskedastic.

]]>Econometrics doi: 10.3390/econometrics7030034

Authors: Jie Chen Dimitris N. Politis

This paper gives a computer-intensive approach to multi-step-ahead prediction of volatility in financial returns series under an ARCH/GARCH model and also under a model-free setting, namely employing the NoVaS transformation. Our model-based approach only assumes i.i.d. innovations without requiring knowledge/assumption of the error distribution and is computationally straightforward. The model-free approach is formally quite similar, albeit a GARCH model is not assumed. We conducted a number of simulations to show that the proposed approach works well for both point prediction (under L1 and/or L2 measures) and prediction intervals that were constructed using bootstrapping. The performance of GARCH models and the model-free approach for multi-step-ahead prediction was also compared under different data generating processes.

]]>Econometrics doi: 10.3390/econometrics7030033

Authors: Chuanming Gao Kajal Lahiri

We compare the finite sample performance of a number of Bayesian and classical procedures for limited information simultaneous equations models with weak instruments by a Monte Carlo study. We consider Bayesian approaches developed by Chao and Phillips, Geweke, Kleibergen and van Dijk, and Zellner. Amongst the sampling theory methods, OLS, 2SLS, LIML, Fuller's modified LIML, and the jackknife instrumental variable estimator (JIVE) due to Angrist et al. and Blomquist and Dahlberg are also considered. Since the posterior densities and their conditionals in Chao and Phillips and Kleibergen and van Dijk are nonstandard, we use a novel "Gibbs within Metropolis–Hastings" algorithm, which only requires the availability of the conditional densities from the candidate generating density. Our results show that with very weak instruments, there is no single estimator that is superior to others in all cases. When endogeneity is weak, Zellner's MELO does the best. When the endogeneity is not weak and ρω_12 > 0, where ρ is the correlation coefficient between the structural and reduced form errors, and ω_12 is the covariance between the unrestricted reduced form errors, the Bayesian method of moments (BMOM) outperforms all other estimators by a wide margin. When the endogeneity is not weak and βρ < 0 (β being the structural parameter), the Kleibergen and van Dijk approach seems to work very well. Surprisingly, the performance of JIVE was disappointing in all our experiments.

]]>Econometrics doi: 10.3390/econometrics7030032

Authors: Maria Felice Arezzo Giuseppina Guagnano

Most empirical work in the social sciences is based on observational data that are often both incomplete, and therefore unrepresentative of the population of interest, and affected by measurement errors. These problems are very well known in the literature, and ad hoc procedures for parametric modeling have been proposed and developed for some time, in order to correct the estimates' bias and obtain consistent estimators. However, to the best of our knowledge, the aforementioned problems have not yet been jointly considered. We try to overcome this by proposing a parametric approach for the estimation of the probabilities of misclassification of a binary response variable by incorporating them in the likelihood of a binary choice model with sample selection.

]]>Econometrics doi: 10.3390/econometrics7030031

Authors: Franz Ramsauer Aleksey Min Michael Lingauer

This article extends the Factor-Augmented Vector Autoregression Model (FAVAR) to mixed-frequency and incomplete panel data. Within the scope of a fully parametric two-step approach, the alternating application of two expectation-maximization algorithms jointly estimates model parameters and missing data. In contrast to the existing literature, we do not require observable factor components to be part of the panel data. For this purpose, we modify the Kalman Filter for factors consisting of latent and observed components, which significantly improves the reconstruction of latent factors according to the performed simulation study. To identify model parameters uniquely, the loadings matrix is constrained. In our empirical application, the presented framework analyzes US data for measuring the effects of monetary policy on the real economy and financial markets. Here, the consequences for the quarterly Gross Domestic Product (GDP) growth rates are of particular importance.

]]>Econometrics doi: 10.3390/econometrics7030030

Authors: Annika Homburg Christian H. Weiß Layth C. Alwan Gabriel Frahm Rainer Göb

In forecasting count processes, practitioners often ignore the discreteness of counts and compute forecasts based on Gaussian approximations instead. For both central and non-central point forecasts, and for various types of count processes, the performance of such approximate point forecasts is analyzed. The considered data-generating processes include different autoregressive schemes with varying model orders, count models with overdispersion or zero inflation, counts with a bounded range, and counts exhibiting trend or seasonality. We conclude that Gaussian forecast approximations should be avoided.

]]>Econometrics doi: 10.3390/econometrics7030029

Authors: Emanuela Ciapanna Marco Taboga

This paper deals with instability in regression coefficients. We propose a Bayesian regression model with time-varying coefficients (TVC) that allows us to jointly estimate the degree of instability and the time-path of the coefficients. Thanks to the computational tractability of the model and to the fact that it is fully automatic, we are able to run Monte Carlo experiments and analyze its finite-sample properties. We find that the estimation precision and the forecasting accuracy of the TVC model compare favorably to those of other methods commonly employed to deal with parameter instability. A distinguishing feature of the TVC model is its robustness to mis-specification: its performance is also satisfactory when regression coefficients are stable or when they experience discrete structural breaks. As a demonstrative application, we used our TVC model to estimate the exposures of S&P 500 stocks to market-wide risk factors: we found that a vast majority of stocks had time-varying exposures and the TVC model helped to better forecast these exposures.

]]>Econometrics doi: 10.3390/econometrics7020028

Authors: Fernando Rios-Avila

This paper presents an extension of the Oaxaca–Blinder decomposition to continuous groups using a semiparametric approach known as the varying coefficients model. To account for potential self-selection into the continuum of groups, the use of inverse Mills ratios is expanded upon following the literature on endogenous selection. The flexibility of this methodology may allow detecting heterogeneity when analyzing endogenous dose treatment effects, as well as correcting for endogeneity when analyzing the heterogeneous partial effects across the continuous group variable. For illustration, the methodology is used to revisit the impact of body weight on wages, using body mass index (BMI) as the continuum of groups, finding evidence that body weight has a negative, but decreasing, impact on wages for both white men and women.

]]>Econometrics doi: 10.3390/econometrics7020027

Authors: Zhengyuan Gao Christian M. Hafner

Filtering has had a profound impact as a device of perceiving information and deriving agent expectations in dynamic economic models. For an abstract economic system, this paper shows that the foundation of applying the filtering method corresponds to the existence of a conditional expectation as an equilibrium process. Agent-based rational behavior of looking backward and looking forward is generalized to a conditional expectation process where the economic system is approximated by a class of models, which can be represented and estimated without information loss. The proposed framework elucidates the range of applications of a general filtering device and is not limited to a particular model class such as rational expectations.

]]>Econometrics doi: 10.3390/econometrics7020026

Authors: David Trafimow

There has been much debate about null hypothesis significance testing, p-values without null hypothesis significance testing, and confidence intervals. The first major section of the present article addresses some of the main reasons these procedures are problematic. The conclusion is that none of them are satisfactory. However, there is a new procedure, termed the a priori procedure (APP), that validly aids researchers in obtaining sample statistics that have acceptable probabilities of being close to their corresponding population parameters. The second major section provides a description and review of APP advances. Not only does the APP avoid the problems that plague other inferential statistical procedures, but it is easy to perform too. Although the APP can be performed in conjunction with other procedures, the present recommendation is that it be used alone.

]]>Econometrics doi: 10.3390/econometrics7020025

Authors: Kyoo il Kim

It is well known that efficient estimation of average treatment effects can be obtained by the method of inverse propensity score weighting, using the estimated propensity score, even when the true one is known. When the true propensity score is unknown but parametric, it is conjectured from the literature that we still need nonparametric propensity score estimation to achieve the efficiency. We formalize this argument and further identify the source of the efficiency loss arising from parametric estimation of the propensity score. We also provide an intuition for why this overfitting is necessary. Our finding suggests that, even when we know that the true propensity score belongs to a parametric class, we still need to estimate the propensity score by a nonparametric method in applications.
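A minimal sketch of the inverse propensity score weighting estimator with an estimated propensity score (here a simple parametric logit fitted by Newton's method; the simulation design is an assumption for illustration, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated treatment-effect setting (illustrative): true ATE = 2.
x = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-0.5 * x))            # true propensity score
d = (rng.uniform(size=n) < p_true).astype(float)   # treatment indicator
y = 1.0 + 2.0 * d + x + rng.normal(size=n)

# Estimate the propensity score by logistic regression (Newton iterations).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve((X.T * W) @ X, X.T @ (d - p))
p_hat = 1.0 / (1.0 + np.exp(-X @ beta))

# Inverse propensity score weighting with the *estimated* score.
ate_hat = np.mean(d * y / p_hat - (1.0 - d) * y / (1.0 - p_hat))
```

The paper's point concerns how the score is estimated (parametric vs. nonparametric); the weighting step itself is the same in either case.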

]]>Econometrics doi: 10.3390/econometrics7020024

Authors: Jan R. Magnus

The t-ratio has not one but two uses in econometrics, which should be carefully distinguished. It is used as a test and also as a diagnostic. I emphasize that the commonly-used estimators are in fact pretest estimators, and argue in favor of an improved (continuous) version of pretesting, called model averaging.

]]>Econometrics doi: 10.3390/econometrics7020023

Authors: Tue Gørgens Allan H. Würtz

This paper considers the estimation of dynamic threshold regression models with fixed effects using short panel data. We examine a two-step method, where the threshold parameter is estimated nonparametrically at the N-rate and the remaining parameters are estimated by GMM at the √N-rate. We provide simulation results that illustrate advantages of the new method in comparison with pure GMM estimation. The simulations also highlight the importance of the choice of instruments in GMM estimation.

]]>Econometrics doi: 10.3390/econometrics7020022

Authors: Pierre Perron Yohei Yamamoto

In empirical applications based on linear regression models, structural changes often occur in both the error variance and regression coefficients, possibly at different dates. A commonly applied method is to first test for changes in the coefficients (or in the error variance) and, conditional on the break dates found, test for changes in the variance (or in the coefficients). In this note, we provide evidence that such procedures have poor finite sample properties when the changes in the first step are not correctly accounted for. In doing so, we show that testing for changes in the coefficients (or in the variance) ignoring changes in the variance (or in the coefficients) induces size distortions and loss of power. Our results illustrate a need for a joint approach to test for structural changes in both the coefficients and the variance of the errors. We provide some evidence that the procedures suggested by Perron et al. (2019) provide tests with good size and power.

]]>Econometrics doi: 10.3390/econometrics7020021

Authors: Jae H. Kim Andrew P. Robinson

This paper presents a brief review of interval-based hypothesis testing, widely used in biostatistics, medical science, and psychology, namely, tests for minimum-effect, equivalence, and non-inferiority. We present the methods in the contexts of a one-sample t-test and a test for linear restrictions in a regression. We present applications in testing for market efficiency, validity of asset-pricing models, and persistence of economic time series. We argue that, from the point of view of economics and finance, interval-based hypothesis testing provides more sensible inferential outcomes than those based on a point-null hypothesis. We propose that interval-based tests be routinely employed in empirical research in business, as an alternative to point-null hypothesis testing, especially in the new era of big data.
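A common interval-based test is the equivalence test implemented as two one-sided tests (TOST). The sketch below is a one-sample version using a normal approximation for large samples; the equivalence margin and the simulated data are illustrative assumptions:

```python
import numpy as np
from math import sqrt
from statistics import NormalDist

# Equivalence test (TOST): H0: |mu| >= delta  vs  H1: |mu| < delta.
rng = np.random.default_rng(1)
delta = 0.5                                   # equivalence margin (assumed)
x = rng.normal(loc=0.05, scale=1.0, size=200)

n = x.size
m, s = x.mean(), x.std(ddof=1)
t_lower = (m + delta) / (s / sqrt(n))         # one-sided test of H0: mu <= -delta
t_upper = (m - delta) / (s / sqrt(n))         # one-sided test of H0: mu >= +delta

# Normal approximation to the t distribution (adequate for n = 200).
p_lower = 1.0 - NormalDist().cdf(t_lower)
p_upper = NormalDist().cdf(t_upper)
p_tost = max(p_lower, p_upper)                # reject H0 (conclude equivalence) if small
equivalent = p_tost < 0.05
```

Unlike a point-null t-test, rejecting here supports the substantive claim that the mean is *practically* zero, i.e., within the margin.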

]]>Econometrics doi: 10.3390/econometrics7020020

Authors: Burkhard Raunig

It is customary to assume that an indicator of a latent variable is driven by the latent variable and some random noise. In contrast, a background indicator is also systematically influenced by variables outside the structural model of interest. Background indicators deserve attention because in empirical work they are difficult to distinguish from ordinary effect indicators. This paper assesses instrumental variable (IV) estimation of the effect of a latent variable in a linear model when a background indicator replaces the latent variable. It turns out that IV estimates are inconsistent in many important cases. In some cases, the estimates capture causal effects of the indicator rather than causal effects of the latent variable. A simulation experiment that considers the impact of economic uncertainty on aggregate consumption illustrates some of the results.

]]>Econometrics doi: 10.3390/econometrics7020019

Authors: Carlos Trucíos Mauricio Zevallos Luiz K. Hotta André A. P. Santos

Many financial decisions, such as portfolio allocation, risk management, option pricing and hedge strategies, are based on forecasts of the conditional variances, covariances and correlations of financial returns. The paper presents an empirical comparison of several methods for predicting one-step-ahead conditional covariance matrices. These matrices are used as inputs to obtain out-of-sample minimum variance portfolios based on stocks belonging to the S&P 500 index from 2000 to 2017 and sub-periods. The analysis is conducted using several metrics, including standard deviation, turnover, net average return, information ratio and Sortino's ratio. We find that no method is best in all scenarios and that performance depends on the criterion, the period of analysis and the rebalancing strategy.
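Given a one-step-ahead covariance forecast Σ, the minimum variance portfolio has the closed form w = Σ⁻¹1 / (1'Σ⁻¹1). A sketch with simulated returns, where the sample covariance stands in for the competing forecasting methods compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 500, 5
returns = rng.normal(0.0005, 0.01, size=(T, N))   # simulated daily returns

sigma = np.cov(returns, rowvar=False)             # stand-in for a covariance forecast
ones = np.ones(N)
w = np.linalg.solve(sigma, ones)                  # Sigma^{-1} * 1
w /= w @ ones                                     # normalize: weights sum to one

port_var = w @ sigma @ w                          # in-sample portfolio variance
```

By construction the minimum variance weights cannot do worse (under the assumed Σ) than holding any single asset, which is a quick sanity check on the algebra.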

]]>Econometrics doi: 10.3390/econometrics7020018

Authors: Thomas R. Dyckman Stephen A. Zeff

A great deal of the accounting research published in recent years has involved statistical tests. Our paper proposes improvements to both the quality and execution of such research. We address the following limitations in current research that appear to us to be ignored or used inappropriately: (1) unaddressed situational effects resulting from model limitations and what has been referred to as "data carpentry," (2) limitations and alternatives to winsorizing, (3) necessary improvements to relying on a study's calculated "p-values" instead of on the economic or behavioral importance of the results, and (4) the information loss incurred by under-valuing what can and cannot be learned from replications.

]]>Econometrics doi: 10.3390/econometrics7020017

Authors: Christian H. Weiß

The analysis and modeling of categorical time series requires quantifying the extent of dispersion and serial dependence. The dispersion of categorical data is commonly measured by the Gini index or entropy, but the recently proposed extropy measure can also be used for this purpose. Regarding signed serial dependence in categorical time series, we consider three types of κ-measures. By analyzing bias properties, it is shown that one of the κ-measures is always related to one of the above-mentioned dispersion measures. To do statistical inference based on the sample versions of these dispersion and dependence measures, knowledge of their distribution is required. Therefore, we study the asymptotic distributions and bias corrections of the considered dispersion and dependence measures, and we investigate the finite-sample performance of the resulting asymptotic approximations with simulations. The application of the measures is illustrated with real-data examples from politics, economics and biology.
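The dispersion measures mentioned above are easy to compute from category frequencies. A sketch of the Gini index, entropy, and extropy for a categorical sample; the normalizations to [0, 1] shown here are common conventions and may differ in detail from the paper's:

```python
import numpy as np

# Dispersion measures for a categorical sample with m categories.
x = np.array(list("aabbbccccddd"))
cats, counts = np.unique(x, return_counts=True)
p = counts / counts.sum()                 # estimated category probabilities
m = p.size

# Gini index, normalized so that the uniform distribution attains 1.
gini = (1.0 - np.sum(p**2)) * m / (m - 1)

# Entropy, normalized by its maximum log(m).
entropy = -np.sum(p * np.log(p)) / np.log(m)

# Extropy, normalized by its maximum (m-1)*log(m/(m-1)) at the uniform.
extropy = -np.sum((1 - p) * np.log(1 - p)) / ((m - 1) * np.log(m / (m - 1)))
```

All three equal 0 for a one-point distribution and 1 for the uniform distribution, so they are directly comparable as dispersion summaries.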

]]>Econometrics doi: 10.3390/econometrics7010016

Authors: Taehoon Kim Jacob Schwartz Kyungchul Song Yoon-Jae Whang

This paper considers two-sided matching models with nontransferable utilities, with one side having homogeneous preferences over the other side. When one observes only one or several large matchings, despite the large number of agents involved, asymptotic inference is difficult because the observed matching involves the preferences of all the agents on both sides in a complex way, and creates a complicated form of cross-sectional dependence across observed matches. When we assume that the observed matching is a consequence of a stable matching mechanism with homogeneous preferences on one side, and the preferences are drawn from a parametric distribution conditional on observables, the large observed matching follows a parametric distribution. This paper shows in such a situation how the method of Monte Carlo inference can be a viable option. Being a finite sample inference method, it does not require the independence or local-dependence assumptions that are often used to obtain asymptotic validity. Results from a Monte Carlo simulation study are presented and discussed.

]]>Econometrics doi: 10.3390/econometrics7010015

Authors: Tomohiro Ando Naoya Sueishi

This paper investigates the asymptotic properties of a penalized empirical likelihood estimator for moment restriction models when the number of parameters (p_n) and/or the number of moment restrictions increases with the sample size. Our main result is that the SCAD-penalized empirical likelihood estimator is √(n/p_n)-consistent under a reasonable condition on the regularization parameter. Our consistency rate is better than the existing ones. This paper also provides sufficient conditions under which √(n/p_n)-consistency and an oracle property are satisfied simultaneously. As far as we know, this paper is the first to specify sufficient conditions for both √(n/p_n)-consistency and the oracle property of the penalized empirical likelihood estimator.

]]>Econometrics doi: 10.3390/econometrics7010014

Authors: David T. Frazier Eric Renault

The standard approach to indirect inference estimation considers that the auxiliary parameters, which carry the identifying information about the structural parameters of interest, are obtained from some exactly identified vector of estimating equations. In contrast to this standard interpretation, we demonstrate that the case of overidentified auxiliary parameters is both possible and, indeed, more commonly encountered than one may initially realize. We then revisit the "moment matching" and "parameter matching" versions of indirect inference in this context and devise efficient estimation strategies in this more general framework. Perhaps surprisingly, we demonstrate that if one were to consider the naive choice of an efficient Generalized Method of Moments (GMM)-based estimator for the auxiliary parameters, the resulting indirect inference estimators would be inefficient. In this general context, we demonstrate that efficient indirect inference estimation actually requires a two-step estimation procedure, whereby the goal of the first step is to obtain an efficient version of the auxiliary model. These two-step estimators are presented both within the context of moment matching and parameter matching.

]]>Econometrics doi: 10.3390/econometrics7010012

Authors: Karl-Heinz Schild Karsten Schweikert

This paper investigates the properties of tests for asymmetric long-run adjustment which are often applied in empirical studies on asymmetric price transmissions. We show that substantial size distortions are caused by preconditioning the test on finding sufficient evidence for cointegration in a first step. The extent of oversizing the test for long-run asymmetry depends inversely on the power of the primary cointegration test. Hence, tests for long-run asymmetry become invalid in cases of small sample sizes or slow speed of adjustment. Further, we provide simulation evidence that tests for long-run asymmetry are generally oversized if the threshold parameter is estimated by conditional least squares and show that bootstrap techniques can be used to obtain the correct size.

]]>Econometrics doi: 10.3390/econometrics7010013

Authors: Mingmian Cheng Norman R. Swanson

Numerous tests designed to detect realized jumps over a fixed time span have been proposed and extensively studied in the financial econometrics literature. These tests differ from "long time span tests" that detect jumps by examining the magnitude of the jump intensity parameter in the data generating process, and which are consistent. In this paper, long span jump tests are compared and contrasted with a variety of fixed span jump tests in a series of Monte Carlo experiments. It is found that both the long time span tests of Corradi et al. (2018) and the fixed span tests of Aït-Sahalia and Jacod (2009) exhibit reasonably good finite sample properties, for time spans both short and long. Various other tests suffer from finite sample distortions, both under sequential testing and under long time spans. The latter finding is new, and confirms the "pitfall" discussed in Huang and Tauchen (2005), of using asymptotic approximations associated with finite time span tests in order to study long time spans of data. An empirical analysis is carried out to investigate the implications of these findings, and "time-span robust" tests indicate that the prevalence of jumps is not as universal as might be expected.

]]>Econometrics doi: 10.3390/econometrics7010011

Authors: Richard Startz

As a contribution toward the ongoing discussion about the use and mis-use of p-values, numerical examples are presented demonstrating that a p-value can, as a practical matter, give you a really different answer than the one that you want.

]]>Econometrics doi: 10.3390/econometrics7010010

Authors: Miguel Henry George Judge

The focus of this paper is an information theoretic-symbolic logic approach to extract information from complex economic systems and unlock its dynamic content. Permutation Entropy (PE) is used to capture permutation patterns, i.e., ordinal relations among the individual values of a given time series; to obtain a probability distribution of the accessible patterns; and to quantify the degree of complexity of an economic behavior system. Ordinal patterns are used to describe the intrinsic patterns that are hidden in the dynamics of the economic system. Empirical applications involving the Dow Jones Industrial Average are presented to indicate the information recovery value and the applicability of the PE method. The results demonstrate the ability of the PE method to detect the extent of complexity (irregularity) and to discriminate and classify admissible and forbidden states.
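A minimal implementation of normalized permutation entropy in the spirit of the Bandt-Pompe ordinal-pattern approach (details such as tie handling and embedding delay are simplified assumptions here):

```python
import numpy as np
from math import factorial, log

def permutation_entropy(series, order=3):
    """Normalized permutation entropy of a 1-D series via ordinal patterns."""
    series = np.asarray(series, dtype=float)
    n = len(series) - order + 1
    patterns = {}
    for i in range(n):
        # The ordinal pattern is the ranking of values within the window.
        pat = tuple(np.argsort(series[i:i + order]))
        patterns[pat] = patterns.get(pat, 0) + 1
    probs = np.array(list(patterns.values())) / n
    h = -np.sum(probs * np.log(probs))
    return h / log(factorial(order))       # normalized to [0, 1]

rng = np.random.default_rng(3)
pe_noise = permutation_entropy(rng.normal(size=2000))   # near 1 for white noise
pe_trend = permutation_entropy(np.arange(2000.0))       # 0 for a monotone series
```

White noise visits all order! patterns uniformly (maximal complexity), while a monotone series produces a single "admissible" pattern, illustrating how PE separates irregular from regular dynamics and flags forbidden patterns.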

]]>Econometrics doi: 10.3390/econometrics7010009

Authors: Niels Haldrup Carsten P. T. Rosenskjold

The prototypical Lee–Carter mortality model is characterized by a single common time factor that loads differently across age groups. In this paper, we propose a parametric factor model for the term structure of mortality where multiple factors are designed to influence the age groups differently via parametric loading functions. We identify four different factors: a factor common to all age groups, factors for infant and adult mortality, and a factor for the "accident hump" that primarily affects the mortality of relatively young adults and late teenagers. Since the factors are identified via restrictions on the loading functions, they are not designed to be orthogonal but can be dependent and can possibly cointegrate when the factors have unit roots. We suggest two estimation procedures similar to the estimation of the dynamic Nelson–Siegel term structure model. The first is a two-step nonlinear least squares procedure based on cross-section regressions together with a separate model to estimate the dynamics of the factors. The second is a fully specified model estimated by maximum likelihood via the Kalman filter recursions after the model is put in state space form. We demonstrate the methodology for US and French mortality data. We find that the model provides a good fit of the relevant factors and, in a forecast comparison with a range of benchmark models, variants of the parametric factor model have excellent forecast performance, especially at longer horizons.

]]>Econometrics doi: 10.3390/econometrics7010008

Authors: Antonio Pacifico

This paper provides an overview of a time-varying Structural Panel Bayesian Vector Autoregression model that deals with model misspecification and unobserved heterogeneity problems in applied macroeconomic analyses when studying time-varying relationships and dynamic interdependencies among countries and variables. I discuss what its distinctive features are, what it is used for, and how it can be analytically derived. I also describe how it is estimated and how structural spillovers and shock identification are performed. The model is empirically applied to a set of developed European economies to illustrate the functioning and the ability of the model. The paper also discusses more recent studies that have used multivariate dynamic macro-panels to evaluate idiosyncratic business cycles, policy-making, and spillover effects among different sectors and countries.

]]>Econometrics doi: 10.3390/econometrics7010007

Authors: Cheng Hsiao Qi Li Zhongwen Liang Wei Xie

This paper considers methods of estimating a static correlated random coefficient model with panel data. We mainly focus on comparing two approaches to estimating the unconditional mean of the coefficients for correlated random coefficients models: the group mean estimator and the generalized least squares estimator. For the group mean estimator, we show that it achieves the Chamberlain (1992) semiparametric efficiency bound asymptotically. For the generalized least squares estimator, we show that when T is large, a generalized least squares estimator that ignores the correlation between the individual coefficients and regressors is asymptotically equivalent to the group mean estimator. In addition, we give conditions under which the standard within estimator of the mean of the coefficients is consistent. Moreover, with additional assumptions on the known correlation pattern, we derive the asymptotic properties of panel least squares estimators. Simulations are used to examine the finite sample performances of different estimators.

]]>Econometrics doi: 10.3390/econometrics7010006

Authors: David H. Bernstein Bent Nielsen

We consider cointegration tests in the situation where the cointegration rank is deficient. This situation is of interest in finite sample analysis and in relation to recent work on identification robust cointegration inference. We derive asymptotic theory for tests for cointegration rank and for hypotheses on the cointegrating vectors. The limiting distributions are tabulated. An application to US treasury yields series is given.

]]>Econometrics doi: 10.3390/econometrics7010005

Authors: Mardi Dungey Stan Hurn Shuping Shi Vladimir Volkov

Crises in the banking and sovereign debt sectors give rise to heightened financial fragility. Of particular concern is the development of self-fulfilling feedback loops where crisis conditions in one sector are transmitted to the other sector and back again. We use time-varying tests of Granger causality to demonstrate how empirical evidence of connectivity between the banking and sovereign sectors can be detected, and provide an application to the Greek, Irish, Italian, Portuguese and Spanish (GIIPS) countries and Germany over the period 2007 to 2016. While the results provide evidence of domestic feedback loops, the most important finding is that financial fragility is an international problem and cannot be dealt with purely on a country-by-country basis.
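The building block behind the time-varying procedure is the bivariate Granger causality F-test; a time-invariant version can be sketched as follows (the rolling/recursive machinery and the banking-sovereign application are omitted; the simulated design is an assumption):

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic for H0: lags of x do not help predict y beyond lags of y."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    T = len(y)
    Y = y[lags:]
    # Restricted model: constant plus own lags of y.
    Z_r = np.column_stack([np.ones(T - lags)] +
                          [y[lags - j:T - j] for j in range(1, lags + 1)])
    # Unrestricted model: adds lags of x.
    Z_u = np.column_stack([Z_r] +
                          [x[lags - j:T - j] for j in range(1, lags + 1)])
    rss_r = np.sum((Y - Z_r @ np.linalg.lstsq(Z_r, Y, rcond=None)[0]) ** 2)
    rss_u = np.sum((Y - Z_u @ np.linalg.lstsq(Z_u, Y, rcond=None)[0]) ** 2)
    df = T - lags - Z_u.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df)

rng = np.random.default_rng(4)
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()  # x Granger-causes y

f_xy = granger_f(y, x)   # large: lags of x predict y
f_yx = granger_f(x, y)   # small: lags of y do not predict x
```

The time-varying tests in the paper effectively recompute such statistics over moving subsamples to date when causal links between sectors switch on and off.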

]]>Econometrics doi: 10.3390/econometrics7010004

Authors: Arthur Charpentier Ndéné Ka Stéphane Mussard Oumar Hamady Ndiaye

We propose an Aitken estimator for Gini regression. The suggested A-Gini estimator is proven to be a U-statistic. Monte Carlo simulations are provided to deal with heteroskedasticity and to make some comparisons between generalized least squares and Gini regression. A Gini-White test is proposed and shows better power than the usual White test when outlying observations contaminate the data.

]]>Econometrics doi: 10.3390/econometrics7010003

Authors: Econometrics Editorial Office

Rigorous peer review is the cornerstone of high-quality academic publishing [...]

]]>Econometrics doi: 10.3390/econometrics7010002

Authors: Søren Johansen

A multivariate CVAR(1) model for some observed variables and some unobserved variables is analysed using its infinite order CVAR representation of the observations. Cointegration and adjustment coefficients in the infinite order CVAR are found as functions of the parameters in the CVAR(1) model. Conditions for weak exogeneity for the cointegrating vectors in the approximating finite order CVAR are derived. The results are illustrated by two simple examples of relevance for modelling causal graphs.

]]>Econometrics doi: 10.3390/econometrics7010001

Authors: Tue Gørgens Dean Robert Hyslop

This paper compares two approaches to analyzing longitudinal discrete-time binary outcomes. Dynamic binary response models focus on state occupancy and typically specify low-order Markovian state dependence. Multi-spell duration models focus on transitions between states and typically allow for state-specific duration dependence. We show that the former implicitly impose strong and testable restrictions on the transition probabilities. In a case study of poverty transitions, we show that these restrictions are severely rejected against the more flexible multi-spell duration models.

]]>Econometrics doi: 10.3390/econometrics6040048

Authors: Yukai Yang Luc Bauwens

We develop novel multivariate state-space models wherein the latent states evolve on the Stiefel manifold and follow a conditional matrix Langevin distribution. The latent states correspond to time-varying reduced rank parameter matrices, like the loadings in dynamic factor models and the parameters of cointegrating relations in vector error-correction models. The corresponding nonlinear filtering algorithms are developed and evaluated by means of simulation experiments.

]]>Econometrics doi: 10.3390/econometrics6040047

Authors: Hussein Khraibani Bilal Nehme Olivier Strauss

Value-at-Risk (VaR) has become the most important benchmark for measuring risk in portfolios of different types of financial instruments. However, as reported by many authors, estimating VaR is subject to a high level of uncertainty. One source of uncertainty stems from the dependence of the VaR estimation on the choice of computation method. As we show in our experiment, the lower the number of samples, the higher this dependence. In this paper, we propose a new nonparametric approach called maxitive kernel estimation of the VaR. This estimation is based on a coherent extension of kernel-based estimation of the cumulative distribution function to convex sets of kernels. We thus obtain a convex set of VaR estimates gathering all the conventional estimates based on a kernel belonging to the considered convex set. We illustrate this method in an empirical application to daily stock returns. We compare the proposed approach to other parametric and nonparametric approaches. In our experiment, we show that the interval-valued estimate of the VaR we obtain is likely to lead to more careful decisions, i.e., decisions that cannot be biased by an arbitrary choice of the computation method. In fact, the imprecision of the obtained interval-valued estimate is likely to be representative of the uncertainty in the VaR estimate.
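The starting point of the maxitive approach is the conventional kernel-based estimate of the CDF, inverted at level alpha to obtain a VaR estimate. The sketch below implements only that conventional single-kernel estimator (the extension to convex sets of kernels is not attempted; the Gaussian kernel and bandwidth rule are assumptions):

```python
import numpy as np
from statistics import NormalDist

def kernel_var(returns, alpha=0.05, h=None):
    """VaR as the alpha-quantile of a Gaussian-kernel-smoothed CDF."""
    r = np.asarray(returns, dtype=float)
    if h is None:
        h = 1.06 * r.std(ddof=1) * len(r) ** (-0.2)   # Silverman-type bandwidth
    phi = NormalDist()
    cdf = lambda q: np.mean([phi.cdf((q - ri) / h) for ri in r])
    lo, hi = r.min() - 5 * h, r.max() + 5 * h
    for _ in range(80):                               # bisection on the smooth CDF
        mid = 0.5 * (lo + hi)
        if cdf(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(5)
ret = rng.normal(0.0, 0.01, size=1000)    # simulated daily returns
var_5 = kernel_var(ret, alpha=0.05)       # near the normal 5% quantile, -1.645 * 0.01
```

Repeating this for every kernel in a convex set, as the paper proposes, turns the point estimate into an interval of VaR values that reflects method uncertainty.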

]]>Econometrics doi: 10.3390/econometrics6040046

Authors: George Judge

In this paper, we borrow some of the key concepts of nonequilibrium statistical systems to develop a framework for analyzing a self-organizing-optimizing system of independent interacting agents, with nonlinear dynamics at the macro level that is based on stochastic individual behavior at the micro level. We demonstrate the use of entropy-divergence methods and micro income data to evaluate and understand the hidden aspects of stochastic dynamics that drive macroeconomic behavior systems and discuss how to empirically represent and evaluate their nonequilibrium nature. Empirical applications of the information theoretic family of power divergence measures (entropic functions), interpreted in a probability context with Markov dynamics, are presented.
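The power divergence family referred to above is, in the Cressie-Read parameterization, indexed by a scalar lambda, with the two Kullback-Leibler directions recovered as limiting cases. A sketch (the discrete distributions are illustrative):

```python
import numpy as np

def power_divergence(p, q, lam):
    """Cressie-Read power divergence between discrete distributions p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if abs(lam) < 1e-12:            # lambda -> 0: Kullback-Leibler divergence
        return np.sum(p * np.log(p / q))
    if abs(lam + 1) < 1e-12:        # lambda -> -1: reverse Kullback-Leibler
        return np.sum(q * np.log(q / p))
    return np.sum(p * ((p / q) ** lam - 1.0)) * 2.0 / (lam * (lam + 1.0))

p = np.array([0.2, 0.5, 0.3])
q = np.array([1/3, 1/3, 1/3])       # reference (uniform) distribution
kl = power_divergence(p, q, 0.0)
chi2 = power_divergence(p, q, 1.0)  # lambda = 1: Pearson chi-square divergence
```

Varying lambda trades off sensitivity to different parts of the distributions, which is what makes the family useful for probing micro-level income dynamics.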

]]>Econometrics doi: 10.3390/econometrics6040045

Authors: Loann David Denis Desboulets

In this paper, we investigate several variable selection procedures to give an overview of the existing literature for practitioners. "Let the data speak for themselves" has become the motto of many applied researchers as the amount of available data has grown significantly. Automatic model selection has long been promoted as a way to search for data-driven theories. However, while great extensions have been made on the theoretical side, basic procedures such as stepwise regression are still used in most empirical work. Here, we provide a review of the main methods and state-of-the-art extensions, as well as a typology of them over a wide range of model structures (linear, grouped, additive, partially linear and non-parametric), and we list available software resources for the implemented methods so that practitioners can easily access them. We explain which methods to use for different model purposes and describe their key differences. We also review two methods for improving variable selection in the general sense.

]]>Econometrics doi: 10.3390/econometrics6040044

Authors: Christopher L. Skeels Frank Windmeijer

A standard test for weak instruments compares the first-stage F-statistic to a table of critical values obtained by Stock and Yogo (2005) using simulations. We derive a closed-form solution for the expectation from which these critical values are derived, as well as present some second-order asymptotic approximations that may be of value in the presence of multiple endogenous regressors. Inspection of this new result provides insights not available from simulation, and will allow software implementations to be generalised and improved. Finally, we explore the calculation of p-values for the first-stage F-statistic weak instruments test.
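For reference, the first-stage F-statistic being compared to the Stock-Yogo critical values is the standard F-test for the excluded instruments in the first-stage regression. A sketch for a single endogenous regressor with two instruments (the simulation design is an assumption):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
z = rng.normal(size=(n, 2))                  # two excluded instruments
v = rng.normal(size=n)
x = z @ np.array([0.3, 0.2]) + v             # first-stage equation for endogenous x

Z = np.column_stack([np.ones(n), z])         # include a constant
b = np.linalg.lstsq(Z, x, rcond=None)[0]
resid = x - Z @ b
rss_u = resid @ resid                        # unrestricted residual sum of squares
rss_r = np.sum((x - x.mean()) ** 2)          # restricted: instruments excluded
k = 2                                        # number of excluded instruments
F = ((rss_r - rss_u) / k) / (rss_u / (n - Z.shape[1]))
strong = F > 10.0                            # common rule-of-thumb threshold
```

The paper's contribution is a closed-form expression for the expectation from which the proper critical values (which depend on the number of instruments and endogenous regressors) are derived, rather than relying on the rule of thumb above.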

]]>Econometrics doi: 10.3390/econometrics6040043

Authors: Jianning Kong Donggyu Sul

This paper provides a new statistical model for repeated voluntary contribution mechanism games. In a repeated public goods experiment, contributions in the first round are cross-sectionally independent simply because subjects are randomly selected. Meanwhile, contributions to a public account over rounds are serially and cross-sectionally correlated. Furthermore, the cross-sectional average of the contributions across subjects usually decreases over rounds. By considering this non-stationary initial condition (the initial contribution has a different distribution from the rest of the contributions), we model statistically the time-varying patterns of the average contribution in repeated public goods experiments and then propose a simple but efficient method to test for treatment effects. The suggested method has good finite sample performance and works well in practice.

]]>Econometrics doi: 10.3390/econometrics6040042

Authors: Martin Biewen Emmanuel Flachaire

It is well known that, after decades of limited interest in the topic, economics has experienced a proper surge in inequality research in recent years. [...]

]]>Econometrics doi: 10.3390/econometrics6030041

Authors: Chung Choe Philippe Van Kerm

This paper draws upon influence function regression methods to determine where foreign workers stand in the distribution of private sector wages in Luxembourg, and to assess whether and how much their wages contribute to wage inequality. This is quantified by measuring the effect that a marginal increase in the proportion of foreign workers (foreign residents or cross-border workers) would have on selected quantiles and measures of inequality. Analysis of the 2006 Structure of Earnings Survey reveals that foreign workers generally have lower wages than natives and therefore tend to pull the overall wage distribution downwards. Yet, their influence on wage inequality turns out to be small and negative. All impacts are further muted when accounting for human capital and, especially, job characteristics. The absence of any large positive contribution to inequality in the Luxembourg labour market is a striking result given the sheer size of the foreign workforce and its polarization at both ends of the skill distribution.

]]>Econometrics doi: 10.3390/econometrics6030040

Authors: Eric Hillebrand Huiyu Huang Tae-Hwy Lee Canlin Li

In forecasting a variable (forecast target) using many predictors, a factor model with principal components (PC) is often used. When the predictors are the yield curve (a set of many yields), the Nelson–Siegel (NS) factor model is used in place of the PC factors. These PC or NS factors are combining information (CI) in the predictors (yields). However, these CI factors are not "supervised" for a specific forecast target in that they are constructed by using only the predictors but not using a particular forecast target. In order to "supervise" factors for a forecast target, we follow Chan et al. (1999) and Stock and Watson (2004) to compute PC or NS factors of many forecasts (not of the predictors), with each of the many forecasts being computed using one predictor at a time. These PC or NS factors of forecasts are combining forecasts (CF). The CF factors are supervised for a specific forecast target. We demonstrate the advantage of the supervised CF factor models over the unsupervised CI factor models via simple numerical examples and Monte Carlo simulation. In out-of-sample forecasting of monthly US output growth and inflation, it is found that the CF factor models outperform the CI factor models especially at longer forecast horizons.
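The unsupervised CI step, extracting PC factors from the predictors, can be sketched via an SVD of the demeaned predictor panel. In the supervised CF construction, the matrix X below would instead hold one-predictor-at-a-time forecasts of the target; the simulated factor design here is an assumption:

```python
import numpy as np

rng = np.random.default_rng(7)
T, N, r = 200, 30, 2
F_true = rng.normal(size=(T, r))                   # latent factors
L = rng.normal(size=(N, r))                        # factor loadings
X = F_true @ L.T + 0.5 * rng.normal(size=(T, N))   # panel of predictors (T x N)

Xc = X - X.mean(axis=0)                            # demean each predictor
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
factors = U[:, :r] * s[:r]                         # first r PC factors (T x r)

# Share of total variance captured by the first r components.
share = np.sum(s[:r] ** 2) / np.sum(s ** 2)
```

A forecasting regression of the target on `factors` (and its own lags) then completes either the CI or the CF pipeline, depending on what X contains.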

]]>Econometrics doi: 10.3390/econometrics6030039

Authors: Andreas Hetland

We propose and study the stochastic stationary root (SSR) model. The model resembles the cointegrated VAR model but is novel in that: (i) the stationary relations follow a random coefficient autoregressive process, i.e., exhibit heavy-tailed dynamics, and (ii) the system is observed with measurement error. Unlike the cointegrated VAR model, estimation and inference for the SSR model are complicated by a lack of closed-form expressions for the likelihood function and its derivatives. To overcome this, we introduce particle filter-based approximations of the log-likelihood function, sample score, and observed information matrix. These enable us to approximate the ML estimator via stochastic approximation and to conduct inference via the approximated observed information matrix. We conjecture the asymptotic properties of the ML estimator and conduct a simulation study to investigate the validity of the conjecture. Model diagnostics to assess model fit are considered. Finally, we present an empirical application to the 10-year government bond rates in Germany and Greece during the period from January 1999 to February 2018.
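A bootstrap particle filter approximation of the log-likelihood, shown here for a simple linear-Gaussian state space model where the idea is easy to verify (the SSR model's random-coefficient dynamics and measurement structure are not implemented; all parameter values are illustrative):

```python
import numpy as np

# State space model: s_t = 0.8 * s_{t-1} + w_t,  y_t = s_t + e_t, both noises N(0,1).
rng = np.random.default_rng(8)
T = 100

s = np.zeros(T)
for t in range(1, T):
    s[t] = 0.8 * s[t - 1] + rng.normal()
y = s + rng.normal(size=T)

def pf_loglik(y, phi=0.8, M=2000, rng=rng):
    """Bootstrap particle filter estimate of the log-likelihood."""
    part = rng.normal(size=M)                  # initial particle cloud
    ll = 0.0
    for t in range(len(y)):
        part = phi * part + rng.normal(size=M)                        # propagate
        w = np.exp(-0.5 * (y[t] - part) ** 2) / np.sqrt(2 * np.pi)    # obs. density
        ll += np.log(w.mean())                 # likelihood increment
        idx = rng.choice(M, size=M, p=w / w.sum())                    # resample
        part = part[idx]
    return ll

ll_hat = pf_loglik(y)
```

In a model like the SSR, where no closed form exists, such noisy log-likelihood evaluations are the input to stochastic-approximation maximization, as described in the abstract.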

]]>Econometrics doi: 10.3390/econometrics6030038

Authors: In Choi Steve Cook Marc S. Paolella Jeffrey S. Racine

n/a

]]>Econometrics doi: 10.3390/econometrics6030037

Authors: Rachidi Kotchoni

This paper proposes an approach to measure the extent of nonlinearity of the exposure of a financial asset to a given risk factor. The proposed measure exploits the decomposition of a conditional expectation into its linear and nonlinear components. We illustrate the method with the measurement of the degree of nonlinearity of a European style option with respect to the underlying asset. Next, we use the method to identify the empirical patterns of the return-risk trade-off on the S&P 500. The results are strongly supportive of a nonlinear relationship between expected return and expected volatility. The data seem to be driven by two regimes: one regime with a positive return-risk trade-off and one with a negative trade-off.

]]>