Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

14 pages, 1100 KiB  
Article
Dynamic Factor Models and Fractional Integration—With an Application to US Real Economic Activity
by Guglielmo Maria Caporale, Luis Alberiko Gil-Alana and Pedro Jose Piqueras Martinez
Econometrics 2024, 12(4), 39; https://doi.org/10.3390/econometrics12040039 - 19 Dec 2024
Viewed by 1011
Abstract
This paper makes a twofold contribution. First, it develops the dynamic factor model by allowing for fractional integration instead of imposing the classical dichotomy between I(0) stationary and I(1) non-stationary series. This more general setup provides valuable information on the degree of persistence and mean-reverting properties of the series. Second, the proposed framework is used to analyse five annual US Real Economic Activity series (Employees, Energy, Industrial Production, Manufacturing, Personal Income) over the period from 1967 to 2019 in order to shed light on their degree of persistence and cyclical behaviour. The results indicate that economic activity in the US is highly persistent and is also characterised by cycles with a periodicity of 6 years and 8 months.
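For readers less familiar with the terminology, a fractionally integrated series x_t with memory parameter d is commonly defined by the textbook relation below (a standard definition added for context, not an equation taken from the article):

\[
(1 - L)^{d} x_t = u_t, \qquad u_t \sim I(0),
\]

where L is the lag operator. The integer cases d = 0 and d = 1 correspond to the usual stationary and unit-root series, while fractional values of d allow intermediate degrees of persistence and mean reversion, which is what the authors exploit in place of the I(0)/I(1) dichotomy.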
11 pages, 227 KiB  
Article
Likert Scale Variables in Personal Finance Research: The Neutral Category Problem
by Blain Pearson, Donald Lacombe and Nasima Khatun
Econometrics 2024, 12(4), 33; https://doi.org/10.3390/econometrics12040033 - 6 Nov 2024
Cited by 1 | Viewed by 2014
Abstract
Personal finance research often utilizes Likert-type items and Likert scales as dependent variables, frequently employing standard probit and ordered probit models. If inappropriately modeled, the “neutral” category of discrete dependent variables can bias estimates of the remaining categories. Through the utilization of hierarchical models, this paper demonstrates a methodology that accounts for the econometric issues of the neutral category. We then analyze the technique through an empirical exercise relevant to personal finance research using data from the National Financial Capability Study. We demonstrate that ignoring the “neutral” category bias can lead to incorrect inferences, hindering the progression of personal finance research. Our findings underscore the importance of refining statistical modeling techniques when dealing with Likert-type data. By accounting for the neutral category, we can enhance the reliability of personal finance research outcomes, fostering improved decision-relevant insights.
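As background, the standard ordered probit mentioned above maps a latent index into the observed Likert categories through cut-points (a textbook formulation included for orientation; it is not quoted from the paper):

\[
\Pr(y_i = j \mid x_i) = \Phi(\kappa_j - x_i'\beta) - \Phi(\kappa_{j-1} - x_i'\beta), \qquad j = 1, \ldots, J, \quad \kappa_0 = -\infty, \; \kappa_J = +\infty,
\]

so the “neutral” response occupies a single middle interval between two cut-points; the hierarchical modelling described above is one way of accounting for the econometric issues this raises.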
20 pages, 478 KiB  
Article
Long-Term Care in Germany in the Context of the Demographic Transition—An Outlook for the Expenses of Long-Term Care Insurance through 2050
by Patrizio Vanella, Christina Benita Wilke and Moritz Heß
Econometrics 2024, 12(4), 28; https://doi.org/10.3390/econometrics12040028 - 9 Oct 2024
Viewed by 2278
Abstract
Demographic aging results in a growing number of older people in need of care in many regions all over the world. Germany has witnessed steady population aging for decades, prompting policymakers and other stakeholders to discuss how to fulfill the rapidly growing demand for care workers and finance the rising costs of long-term care. Informed decisions on this matter to ensure the sustainability of the statutory long-term care insurance system require reliable knowledge of the associated future costs. These need to be simulated based on well-designed forecast models that holistically include the complexity of the forecast problem, namely the demographic transition, epidemiological trends, concrete demand for and supply of specific care services, and the respective costs. Care risks heavily depend on demographics, both in absolute terms and according to severity. The number of persons in need of care, disaggregated by severity of disability, in turn, is the main driver of the remuneration that is paid by long-term care insurance. Therefore, detailed forecasts of the population and care rates are important ingredients for forecasts of long-term care insurance expenditures. We present a novel approach based on a stochastic demographic cohort-component model that includes trends in age- and sex-specific care rates and the demand for specific care services, given changing preferences over the life course. The model is executed for Germany until the year 2050 as a case study.
(This article belongs to the Special Issue Advancements in Macroeconometric Modeling and Time Series Analysis)
11 pages, 246 KiB  
Article
Estimating Treatment Effects Using Observational Data and Experimental Data with Non-Overlapping Support
by Kevin Han, Han Wu, Linjia Wu, Yu Shi and Canyao Liu
Econometrics 2024, 12(3), 26; https://doi.org/10.3390/econometrics12030026 - 20 Sep 2024
Cited by 1 | Viewed by 1589
Abstract
When estimating treatment effects, the gold standard is to conduct a randomized experiment and then contrast outcomes associated with the treatment group and the control group. However, in many cases, randomized experiments are either conducted at a much smaller scale compared to the size of the target population or accompanied by certain ethical issues and thus hard to implement. Therefore, researchers usually rely on observational data to study causal connections. The downside is that the unconfoundedness assumption, which is the key to validating the use of observational data, is untestable and almost always violated. Hence, any conclusion drawn from observational data should be further analyzed with great care. Given the richness of observational data and usefulness of experimental data, researchers hope to develop credible methods to combine the strength of the two. In this paper, we consider a setting where the observational data contain the outcome of interest as well as a surrogate outcome, while the experimental data contain only the surrogate outcome. We propose an easy-to-implement estimator to estimate the average treatment effect of interest using both the observational data and the experimental data.
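One simple way to picture how the two samples can be combined is a surrogate-index style construction: learn the outcome–surrogate relationship from the observational data and use it to impute the missing outcome in the experiment. The sketch below is illustrative only and need not coincide with the authors' exact estimator:

\[
\hat m(s) = \widehat{E}_{\mathrm{obs}}\left[\, Y \mid S = s \,\right], \qquad
\hat\tau = \frac{1}{n_1}\sum_{i \in \mathcal{T}} \hat m(S_i) \;-\; \frac{1}{n_0}\sum_{i \in \mathcal{C}} \hat m(S_i),
\]

where the two averages run over the treated and control units of the experimental sample.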
18 pages, 1198 KiB  
Article
Transient and Persistent Technical Efficiencies in Rice Farming: A Generalized True Random-Effects Model Approach
by Phuc Trong Ho, Michael Burton, Atakelty Hailu and Chunbo Ma
Econometrics 2024, 12(3), 23; https://doi.org/10.3390/econometrics12030023 - 12 Aug 2024
Viewed by 1862
Abstract
This study estimates transient and persistent technical efficiencies (TEs) using a generalized true random-effects (GTRE) model. We estimate the GTRE model using maximum likelihood and Bayesian estimation methods, then compare it to three simpler models nested within it to evaluate the robustness of our estimates. We use a panel data set of 945 observations collected from 344 rice farming households in Vietnam’s Mekong River Delta. The results indicate that the GTRE model is more appropriate than the restricted models for understanding heterogeneity and inefficiency in rice production. The mean estimate of overall technical efficiency is 0.71, with transient rather than persistent inefficiency being the dominant component. This suggests that rice farmers could increase output substantially and would benefit from policies that pay more attention to addressing short-term inefficiency issues.
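For orientation, the GTRE model referred to above splits the composed error into four parts so that persistent and transient inefficiency can be separated (standard notation for this model class; the symbols are illustrative rather than copied from the paper):

\[
y_{it} = \alpha + x_{it}'\beta + \mu_i - \eta_i + v_{it} - u_{it},
\]

where μ_i captures random household heterogeneity, η_i ≥ 0 persistent (long-run) inefficiency, v_it is noise, and u_it ≥ 0 transient (short-run) inefficiency; persistent and transient TE are computed from η_i and u_it, respectively, and the simpler nested models mentioned above drop one or more of these components.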
14 pages, 2200 KiB  
Article
Exponential Time Trends in a Fractional Integration Model
by Guglielmo Maria Caporale and Luis Alberiko Gil-Alana
Econometrics 2024, 12(2), 15; https://doi.org/10.3390/econometrics12020015 - 31 May 2024
Viewed by 1438
Abstract
This paper introduces a new modelling approach that incorporates nonlinear, exponential deterministic terms into a fractional integration framework. The proposed model is based on a specific test on fractional integration that is more general than the standard methods, which allow for only linear trends. Its limiting distribution is standard normal, and Monte Carlo simulations show that it performs well in finite samples. Three empirical examples confirm that the suggested specification captures the properties of the data adequately.
15 pages, 312 KiB  
Article
A Pretest Estimator for the Two-Way Error Component Model
by Badi H. Baltagi, Georges Bresson and Jean-Michel Etienne
Econometrics 2024, 12(2), 9; https://doi.org/10.3390/econometrics12020009 - 16 Apr 2024
Cited by 1 | Viewed by 2262
Abstract
For a panel data linear regression model with both individual and time effects, empirical studies select the two-way random-effects (TWRE) estimator if the Hausman test based on the contrast between the two-way fixed-effects (TWFE) estimator and the TWRE estimator is not rejected. Alternatively, they select the TWFE estimator in cases where this Hausman test rejects the null hypothesis. Not all the regressors may be correlated with these individual and time effects. The one-way Hausman-Taylor model has been generalized to the two-way error component model, allowing some but not all regressors to be correlated with these individual and time effects. This paper proposes a pretest estimator for this two-way error component panel data regression model based on two Hausman tests. The first Hausman test is based upon the contrast between the TWFE and the TWRE estimators. The second Hausman test is based on the contrast between the two-way Hausman and Taylor (TWHT) estimator and the TWFE estimator. The Monte Carlo results show that this pretest estimator is always second best in MSE performance compared to the efficient estimator, whether the model is random-effects, fixed-effects or Hausman and Taylor. This paper generalizes the one-way pretest estimator to the two-way error component model.
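For reference, each of the two Hausman tests contrasts a pair of estimators through the familiar quadratic form (a textbook expression added for context):

\[
H = \left(\hat\beta_{1} - \hat\beta_{0}\right)'\left[\widehat{\mathrm{Var}}\left(\hat\beta_{1}\right) - \widehat{\mathrm{Var}}\left(\hat\beta_{0}\right)\right]^{-1}\left(\hat\beta_{1} - \hat\beta_{0}\right),
\]

which is asymptotically chi-squared under the null. Here the first estimator is the one that remains consistent under the alternative (TWFE) and the second is the one that is efficient under the null (TWRE in the first test, TWHT in the second), and the outcome of the two tests determines which estimator the pretest procedure selects.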
15 pages, 2945 KiB  
Article
Biases in the Maximum Simulated Likelihood Estimation of the Mixed Logit Model
by Maksat Jumamyradov, Murat Munkin, William H. Greene and Benjamin M. Craig
Econometrics 2024, 12(2), 8; https://doi.org/10.3390/econometrics12020008 - 27 Mar 2024
Viewed by 2165
Abstract
In a recent study, it was demonstrated that the maximum simulated likelihood (MSL) estimator produces significant biases when applied to the bivariate normal and bivariate Poisson-lognormal models. The study’s conclusion suggests that similar biases could be present in other models generated by correlated bivariate normal structures, which include several commonly used specifications of the mixed logit (MIXL) models. This paper conducts a simulation study analyzing the MSL estimation of the error components (EC) MIXL. We find that the MSL estimator produces significant biases in the estimated parameters. The problem becomes worse when the true value of the variance parameter is small and the correlation parameter is large in magnitude. In some cases, the biases in the estimated marginal effects are as large as 12% of the true values. These biases are largely invariant to increases in the number of Halton draws.
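As context, the MSL estimator approximates the mixed logit choice probability, which has no closed form, by averaging the logit kernel over R simulation draws (generic notation; the paper's EC-MIXL specification adds correlated error components):

\[
P_{ij} = \int \frac{\exp\left(x_{ij}'\beta\right)}{\sum_{k}\exp\left(x_{ik}'\beta\right)}\, f(\beta \mid \theta)\, d\beta
\;\approx\; \frac{1}{R}\sum_{r=1}^{R} \frac{\exp\left(x_{ij}'\beta^{(r)}\right)}{\sum_{k}\exp\left(x_{ik}'\beta^{(r)}\right)},
\]

where the β^(r) are draws (Halton-based in this study) from the assumed mixing distribution; the simulation study examines how far the resulting parameter estimates can sit from their true values even when R is increased.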
30 pages, 583 KiB  
Article
When It Counts—Econometric Identification of the Basic Factor Model Based on GLT Structures
by Sylvia Frühwirth-Schnatter, Darjus Hosszejni and Hedibert Freitas Lopes
Econometrics 2023, 11(4), 26; https://doi.org/10.3390/econometrics11040026 - 20 Nov 2023
Cited by 4 | Viewed by 2823
Abstract
Despite the popularity of factor models with simple loading matrices, little attention has been given to formally addressing the identifiability of these models beyond standard rotation-based identification such as the positive lower triangular (PLT) constraint. To fill this gap, we review the advantages of variance identification in simple factor analysis and introduce the generalized lower triangular (GLT) structures. We show that the GLT assumption is an improvement over PLT without compromise: GLT is also unique but, unlike PLT, a non-restrictive assumption. Furthermore, we provide a simple counting rule for variance identification under GLT structures, and we demonstrate that within this model class, the unknown number of common factors can be recovered in an exploratory factor analysis. Our methodology is illustrated for simulated data in the context of post-processing posterior draws in sparse Bayesian factor analysis.
(This article belongs to the Special Issue High-Dimensional Time Series in Macroeconomics and Finance)
28 pages, 580 KiB  
Article
On the Proper Computation of the Hausman Test Statistic in Standard Linear Panel Data Models: Some Clarifications and New Results
by Julie Le Gallo and Marc-Alexandre Sénégas
Econometrics 2023, 11(4), 25; https://doi.org/10.3390/econometrics11040025 - 8 Nov 2023
Cited by 1 | Viewed by 4601
Abstract
We provide new analytical results for the implementation of the Hausman specification test statistic in a standard panel data model, comparing the version based on the estimators computed from the untransformed random effects model specification under Feasible Generalized Least Squares and the one computed from the quasi-demeaned model estimated by Ordinary Least Squares. We show that the quasi-demeaned model cannot provide a reliable magnitude when implementing the Hausman test in a finite sample setting, although it is the most common approach used to produce the test statistic in econometric software. The difference between the Hausman statistics computed under the two methods can be substantial and even lead to opposite conclusions for the test of orthogonality between the regressors and the individual-specific effects. Furthermore, this difference remains important even with large cross-sectional dimensions as it mainly depends on the within-between structure of the regressors and on the presence of a significant correlation between the individual effects and the covariates in the data. We propose to supplement the test outcomes that are provided in the main econometric software packages with some metrics to address the issue at hand.
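For concreteness, the quasi-demeaned regression referred to above transforms each variable by a fraction of its individual mean before applying OLS (standard one-way random-effects notation, shown only to fix ideas):

\[
y_{it} - \hat\theta\,\bar y_{i\cdot} = \left(x_{it} - \hat\theta\,\bar x_{i\cdot}\right)'\beta + \text{error}, \qquad
\hat\theta = 1 - \sqrt{\frac{\hat\sigma_{\varepsilon}^{2}}{\hat\sigma_{\varepsilon}^{2} + T\,\hat\sigma_{\mu}^{2}}},
\]

and the paper's point is that a Hausman statistic built from OLS on this transformed model can differ markedly, in finite samples, from the one built from FGLS on the untransformed random-effects specification.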
32 pages, 10108 KiB  
Article
Dirichlet Process Log Skew-Normal Mixture with a Missing-at-Random-Covariate in Insurance Claim Analysis
by Minkun Kim, David Lindberg, Martin Crane and Marija Bezbradica
Econometrics 2023, 11(4), 24; https://doi.org/10.3390/econometrics11040024 - 12 Oct 2023
Viewed by 2173
Abstract
In actuarial practice, the modeling of total losses tied to a certain policy is a nontrivial task due to complex distributional features. In the recent literature, the application of the Dirichlet process mixture for insurance loss has been proposed to eliminate the risk of model misspecification biases. However, the effect of covariates as well as missing covariates in the modeling framework is rarely studied. In this article, we propose novel connections among a covariate-dependent Dirichlet process mixture, log-normal convolution, and missing covariate imputation. As a generative approach, our framework models the joint distribution of the outcome and covariates, which allows us to impute missing covariates under the assumption of missingness at random. The performance is assessed by applying our model to several insurance datasets of varying size and data missingness from the literature, and the empirical results demonstrate the benefit of our model compared with the existing actuarial models, such as the Tweedie-based generalized linear model, generalized additive model, or multivariate adaptive regression spline.
27 pages, 1943 KiB  
Article
Local Gaussian Cross-Spectrum Analysis
by Lars Arne Jordanger and Dag Tjøstheim
Econometrics 2023, 11(2), 12; https://doi.org/10.3390/econometrics11020012 - 21 Apr 2023
Cited by 1 | Viewed by 2620
Abstract
The ordinary spectrum is restricted in its applications, since it is based on the second-order moments (auto- and cross-covariances). Alternative approaches to spectrum analysis have been investigated based on other measures of dependence. One such approach was developed for univariate time series by the authors of this paper using the local Gaussian auto-spectrum based on the local Gaussian auto-correlations. This makes it possible to detect local structures in univariate time series that look similar to white noise when investigated by the ordinary auto-spectrum. In this paper, the local Gaussian approach is extended to a local Gaussian cross-spectrum for multivariate time series. The local Gaussian cross-spectrum has the desirable property that it coincides with the ordinary cross-spectrum for Gaussian time series, which implies that it can be used to detect non-Gaussian traits in the time series under investigation. In particular, if the ordinary spectrum is flat, then peaks and troughs of the local Gaussian spectrum can indicate nonlinear traits, which potentially might reveal local periodic phenomena that are undetected in an ordinary spectral analysis.
16 pages, 353 KiB  
Article
Detecting Common Bubbles in Multivariate Mixed Causal–Noncausal Models
by Gianluca Cubadda, Alain Hecq and Elisa Voisin
Econometrics 2023, 11(1), 9; https://doi.org/10.3390/econometrics11010009 - 9 Mar 2023
Cited by 5 | Viewed by 2622
Abstract
This paper proposes concepts and methods to investigate whether the bubble patterns observed in individual time series are common among them. Having established the conditions under which common bubbles are present within the class of mixed causal–noncausal vector autoregressive models, we suggest statistical tools to detect the common locally explosive dynamics in a Student t-distribution maximum likelihood framework. The performances of both likelihood ratio tests and information criteria were investigated in a Monte Carlo study. Finally, we evaluated the practical value of our approach via an empirical application on three commodity prices.
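A univariate mixed causal–noncausal process, the building block extended to the vector case here, is usually written with both a lag polynomial and a lead polynomial (standard notation in this literature, added for context):

\[
\phi(L)\,\varphi\left(L^{-1}\right) y_t = \varepsilon_t,
\]

where the polynomial in the lag operator L governs the causal part, the polynomial in the lead operator L^{-1} governs the noncausal part, and ε_t is an i.i.d. non-Gaussian (here Student-t) error; locally explosive bubble episodes arise from the noncausal component, which is why common bubbles are studied within this model class.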
33 pages, 992 KiB  
Article
Semi-Metric Portfolio Optimization: A New Algorithm Reducing Simultaneous Asset Shocks
by Nick James, Max Menzies and Jennifer Chan
Econometrics 2023, 11(1), 8; https://doi.org/10.3390/econometrics11010008 - 7 Mar 2023
Cited by 9 | Viewed by 4249
Abstract
This paper proposes a new method for financial portfolio optimization based on reducing simultaneous asset shocks across a collection of assets. This may be understood as an alternative approach to risk reduction in a portfolio based on a new mathematical quantity. First, we apply recently introduced semi-metrics between finite sets to determine the distance between time series’ structural breaks. Then, we build on the classical portfolio optimization theory of Markowitz and use this distance between asset structural breaks for our penalty function, rather than portfolio variance. Our experiments are promising: on synthetic data, we show that our proposed method does indeed diversify among time series with highly similar structural breaks and enjoys advantages over existing metrics between sets. On real data, experiments illustrate that our proposed optimization method performs well relative to nine other commonly used options, producing the second-highest returns, the lowest volatility, and second-lowest drawdown. The main implication for this method in portfolio management is reducing simultaneous asset shocks and potentially sharp associated drawdowns during periods of highly similar structural breaks, such as a market crisis. Our method adds to a considerable literature of portfolio optimization techniques in econometrics and could complement these via portfolio averaging.
37 pages, 1354 KiB  
Article
Building Multivariate Time-Varying Smooth Transition Correlation GARCH Models, with an Application to the Four Largest Australian Banks
by Anthony D. Hall, Annastiina Silvennoinen and Timo Teräsvirta
Econometrics 2023, 11(1), 5; https://doi.org/10.3390/econometrics11010005 - 6 Feb 2023
Cited by 3 | Viewed by 3231
Abstract
This paper proposes a methodology for building Multivariate Time-Varying STCC–GARCH models. The novel contributions in this area are the specification tests related to the correlation component, the extension of the general model to allow for additional correlation regimes, and a detailed exposition of the systematic, improved modelling cycle required for such nonlinear models. There is an R-package that includes the steps in the modelling cycle. Simulations demonstrate the robustness of the recommended model building approach. The modelling cycle is illustrated using daily return series for Australia’s four largest banks.
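The correlation component at the heart of an STCC–GARCH model moves smoothly between constant correlation matrices as a transition variable changes; in the basic two-regime case (generic notation, with the paper's extension allowing further regimes):

\[
R_t = \bigl(1 - G(s_t;\gamma,c)\bigr) R_{(1)} + G(s_t;\gamma,c)\, R_{(2)}, \qquad
G(s_t;\gamma,c) = \bigl(1 + e^{-\gamma (s_t - c)}\bigr)^{-1},
\]

where s_t is the transition variable, γ controls the smoothness of the transition and c its location; the specification tests mentioned above concern whether such time variation in the correlations is needed and how many regimes to allow.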
13 pages, 428 KiB  
Article
Comparing the Conditional Logit Estimates and True Parameters under Preference Heterogeneity: A Simulated Discrete Choice Experiment
by Maksat Jumamyradov, Benjamin M. Craig, Murat Munkin and William Greene
Econometrics 2023, 11(1), 4; https://doi.org/10.3390/econometrics11010004 - 25 Jan 2023
Cited by 5 | Viewed by 3955
Abstract
Health preference research (HPR) is the subfield of health economics dedicated to understanding the value of health and health-related objects using observational or experimental methods. In a discrete choice experiment (DCE), the utility of objects in a choice set may differ systematically between persons due to interpersonal heterogeneity (e.g., brand-name medication, generic medication, no medication). To allow for interpersonal heterogeneity, choice probabilities may be described using logit functions with fixed individual-specific parameters. However, in practice, a study team may ignore heterogeneity in health preferences and estimate a conditional logit (CL) model. In this simulation study, we examine the effects of omitted variance and correlations (i.e., omitted heterogeneity) in logit parameters on the estimation of the coefficients, willingness to pay (WTP), and choice predictions. The simulated DCE results show that CL estimates may have been biased depending on the structure of the heterogeneity that we used in the data generation process. We also found that these biases in the coefficients led to a substantial difference in the true and estimated WTP (i.e., up to 20%). We further found that CL and true choice probabilities were similar to each other (i.e., difference was less than 0.08) regardless of the underlying structure. The results imply that, under preference heterogeneity, CL estimates may differ from their true means, and these differences can have substantive effects on the WTP estimates. More specifically, CL WTP estimates may be underestimated due to interpersonal heterogeneity, and a failure to recognize this bias in HPR indirectly underestimates the value of treatment, substantially reducing quality of care. These findings have important implications in health economics because CL remains widely used in practice.
(This article belongs to the Special Issue Health Econometrics)
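For background, the CL choice probability and the WTP measure discussed above take their usual forms (textbook expressions; the cost-attribute labelling is illustrative):

\[
\Pr(y_i = j) = \frac{\exp\left(x_{ij}'\beta\right)}{\sum_{k}\exp\left(x_{ik}'\beta\right)}, \qquad
\mathrm{WTP}_m = -\,\frac{\beta_m}{\beta_{\mathrm{cost}}},
\]

so WTP is a ratio of coefficients, and biases in either the attribute or the cost coefficient propagate directly into it, which is how the coefficient biases reported above translate into WTP differences of up to 20%.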
18 pages, 569 KiB  
Article
Is Climate Change Time-Reversible?
by Francesco Giancaterini, Alain Hecq and Claudio Morana
Econometrics 2022, 10(4), 36; https://doi.org/10.3390/econometrics10040036 - 7 Dec 2022
Cited by 2 | Viewed by 4741
Abstract
This paper proposes strategies to detect time reversibility in stationary stochastic processes by using the properties of mixed causal and noncausal models. It shows that they can also be used for non-stationary processes when the trend component is computed with the Hodrick–Prescott filter, rendering a time-reversible closed-form solution. This paper also links the concept of an environmental tipping point to the statistical property of time irreversibility and assesses fourteen climate indicators. We find evidence of time irreversibility in greenhouse gas emissions, global temperature, global sea levels, sea ice area, and some natural oscillation indices. While not conclusive, our findings urge the implementation of correction policies to avoid the worst consequences of climate change and not miss the opportunity window, which might still be available, despite closing quickly.
(This article belongs to the Collection Econometric Analysis of Climate Change)
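For reference, the Hodrick–Prescott trend used for the non-stationary series solves the familiar penalized least-squares problem (a standard definition, not specific to this paper):

\[
\min_{\{\tau_t\}} \sum_{t=1}^{T}\left(y_t - \tau_t\right)^2 + \lambda \sum_{t=2}^{T-1}\bigl[(\tau_{t+1}-\tau_t) - (\tau_t-\tau_{t-1})\bigr]^2,
\]

with smoothing parameter λ; as noted in the abstract, this filter yields a time-reversible closed-form solution, which is what allows the proposed strategies to be applied to the detrended non-stationary series.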
15 pages, 500 KiB  
Article
A Theory-Consistent CVAR Scenario for a Monetary Model with Forward-Looking Expectations
by Katarina Juselius
Econometrics 2022, 10(2), 16; https://doi.org/10.3390/econometrics10020016 - 6 Apr 2022
Cited by 4 | Viewed by 3086
Abstract
A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination with forward-looking expectations and shows that all assumptions about the model’s shock structure and steady-state behavior can be formulated as testable hypotheses on common stochastic trends and cointegration. The basic stationarity assumptions of the monetary model failed to obtain empirical support. They were too restrictive to explain the observed long persistent swings in the real exchange rate, the real interest rates, and the inflation and interest rate differentials.
(This article belongs to the Special Issue Celebrated Econometricians: David Hendry)
31 pages, 780 KiB  
Article
Green Bonds for the Transition to a Low-Carbon Economy
by Andreas Lichtenberger, Joao Paulo Braga and Willi Semmler
Econometrics 2022, 10(1), 11; https://doi.org/10.3390/econometrics10010011 - 2 Mar 2022
Cited by 28 | Viewed by 10258
Abstract
The green bond market is emerging as an impactful financing mechanism in climate change mitigation efforts. The effectiveness of the financial market for this transition to a low-carbon economy depends on attracting investors and removing financial market roadblocks. This paper investigates the differential bond performance of green vs non-green bonds via (1) a dynamic portfolio model that integrates negative as well as positive externality effects and (2) econometric analyses of aggregate green bond and corporate energy time-series indices, as well as a cross-sectional set of individual bonds issued between 1 January 2017 and 1 October 2020. The asset pricing model demonstrates that, in the long-run, the positive externalities of green bonds benefit the economy through positive social returns. We use a deterministic and a stochastic version of the dynamic portfolio approach to obtain model-driven results and evaluate those through our empirical evidence using harmonic estimations. The econometric analysis of this study focuses on volatility and the risk–return performance (Sharpe ratio) of green and non-green bonds, and extends recent econometric studies that focused on yield differentials of green and non-green bonds. A modified Sharpe ratio analysis, cross-sectional methods, harmonic estimations, bond pairing estimations, as well as regression tree methodology, indicate that green bonds tend to show lower volatility and deliver superior Sharpe ratios (while the evidence for green premia is mixed). As a result, green bond investment can protect investors and portfolios from oil price and business cycle fluctuations, and stabilize portfolio returns and volatility. Policymakers are encouraged to make use of the financial benefits of green instruments and increase the financial flows towards sustainable economic activities to accelerate a low-carbon transition.
(This article belongs to the Collection Econometric Analysis of Climate Change)
7 pages, 241 KiB  
Article
A New Estimator for Standard Errors with Few Unbalanced Clusters
by Gianmaria Niccodemi and Tom Wansbeek
Econometrics 2022, 10(1), 6; https://doi.org/10.3390/econometrics10010006 - 21 Jan 2022
Cited by 3 | Viewed by 3490
Abstract
In linear regression analysis, the estimator of the variance of the estimator of the regression coefficients should take into account the clustered nature of the data, if present, since using the standard textbook formula will in that case lead to a severe downward bias in the standard errors. This idea of a cluster-robust variance estimator (CRVE) generalizes to clusters the classical heteroskedasticity-robust estimator. Its justification is asymptotic in the number of clusters. Although an improvement, a considerable bias could remain when the number of clusters is low, the more so when regressors are correlated within cluster. In order to address these issues, two improved methods were proposed; one method, which we call CR2VE, was based on bias-reduced linearization, while the other, CR3VE, can be seen as a jackknife estimator. The latter is unbiased under very strict conditions, in particular equal cluster size. To relax this condition, we introduce in this paper CR3VE-λ, a generalization of CR3VE where the cluster size is allowed to vary freely between clusters. We illustrate the performance of CR3VE-λ through simulations and we show that, especially when cluster sizes vary widely, it can outperform the other commonly used estimators.
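As background, the basic CRVE that these refinements build on is the usual cluster sandwich estimator (a standard formula, added for context):

\[
\widehat{V}_{\mathrm{CR}} = \left(X'X\right)^{-1}\Bigl(\sum_{g=1}^{G} X_g'\,\hat u_g \hat u_g'\, X_g\Bigr)\left(X'X\right)^{-1},
\]

where X_g and û_g collect the regressors and OLS residuals of cluster g. CR2VE and CR3VE replace û_g with rescaled residuals (bias-reduced linearization and a jackknife-type adjustment, respectively), and CR3VE-λ generalizes the latter rescaling so that the cluster size is allowed to vary freely between clusters.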
16 pages, 1805 KiB  
Article
Forecasting Real GDP Growth for Africa
by Philip Hans Franses and Max Welz
Econometrics 2022, 10(1), 3; https://doi.org/10.3390/econometrics10010003 - 5 Jan 2022
Cited by 2 | Viewed by 5260
Abstract
We propose a simple and reproducible methodology to create a single equation forecasting model (SEFM) for low-frequency macroeconomic variables. Our methodology is illustrated by forecasting annual real GDP growth rates for 52 African countries, where the data are obtained from the World Bank and start in 1960. The models include lagged growth rates of other countries, as well as a cointegration relationship to capture potential common stochastic trends. With a few selection steps, our methodology quickly arrives at a reasonably small forecasting model per country. Compared with benchmark models, the single equation forecasting models seem to perform quite well.
(This article belongs to the Special Issue Special Issue on Economic Forecasting)
17 pages, 600 KiB  
Article
Second-Order Least Squares Estimation in Nonlinear Time Series Models with ARCH Errors
by Mustafa Salamh and Liqun Wang
Econometrics 2021, 9(4), 41; https://doi.org/10.3390/econometrics9040041 - 27 Nov 2021
Cited by 4 | Viewed by 3588
Abstract
Many financial and economic time series exhibit nonlinear patterns or relationships. However, most statistical methods for time series analysis are developed for mean-stationary processes that require transformation, such as differencing of the data. In this paper, we study a dynamic regression model with a nonlinear, time-varying mean function and autoregressive conditionally heteroscedastic errors. We propose an estimation approach based on the first two conditional moments of the response variable, which does not require specification of the error distribution. Strong consistency and asymptotic normality of the proposed estimator are established under a strong-mixing condition, so that the results apply to both stationary and mean-nonstationary processes. Moreover, the proposed approach is shown to be superior to the commonly used quasi-likelihood approach, and the efficiency gain is significant when the (conditional) error distribution is asymmetric. We demonstrate through a real data example that the proposed method can identify a more accurate model than the quasi-likelihood method.
27 pages, 469 KiB  
Article
Cointegration, Root Functions and Minimal Bases
by Massimo Franchi and Paolo Paruolo
Econometrics 2021, 9(3), 31; https://doi.org/10.3390/econometrics9030031 - 17 Aug 2021
Cited by 2 | Viewed by 3390
Abstract
This paper discusses the notion of cointegrating space for linear processes integrated of any order. It first shows that the notions of (polynomial) cointegrating vectors and of root functions coincide. Second, it discusses how the cointegrating space can be defined (i) as a vector space of polynomial vectors over complex scalars, (ii) as a free module of polynomial vectors over scalar polynomials, or finally (iii) as a vector space of rational vectors over rational scalars. Third, it shows that a canonical set of root functions can be used as a basis of the various notions of cointegrating space. Fourth, it reviews results on how to reduce polynomial bases to minimal order—i.e., minimal bases. The application of these results to Vector AutoRegressive processes integrated of order 2 is found to imply the separation of polynomial cointegrating vectors from non-polynomial ones.
(This article belongs to the Special Issue Celebrated Econometricians: Katarina Juselius and Søren Johansen)
20 pages, 589 KiB  
Article
Semiparametric Estimation of a Corporate Bond Rating Model
by Yixiao Jiang
Econometrics 2021, 9(2), 23; https://doi.org/10.3390/econometrics9020023 - 28 May 2021
Cited by 3 | Viewed by 4363
Abstract
This paper investigates the incentive of credit rating agencies (CRAs) to bias ratings using a semiparametric, ordered-response model. The proposed model explicitly takes conflicts of interest into account and allows the ratings to depend flexibly on risk attributes through a semiparametric index structure. Asymptotic normality for the estimator is derived after using several bias correction techniques. Using Moody’s rating data from 2001 to 2016, I found that firms related to Moody’s shareholders were more likely to receive better ratings. Such favorable treatments were more pronounced in investment grade bonds compared with high yield bonds, with the 2007–2009 financial crisis being an exception. Parametric models, such as the ordered-probit, failed to identify this heterogeneity of the rating bias across different bond categories.
21 pages, 731 KiB  
Article
Asymptotic and Finite Sample Properties for Multivariate Rotated GARCH Models
by Manabu Asai, Chia-Lin Chang, Michael McAleer and Laurent Pauwels
Econometrics 2021, 9(2), 21; https://doi.org/10.3390/econometrics9020021 - 4 May 2021
Cited by 1 | Viewed by 3497
Abstract
This paper derives the statistical properties of a two-step approach to estimating multivariate rotated GARCH-BEKK (RBEKK) models. From the definition of RBEKK, the unconditional covariance matrix is estimated in the first step to rotate the observed variables in order to have the identity matrix for its sample covariance matrix. In the second step, the remaining parameters are estimated by maximizing the quasi-log-likelihood function. For this two-step quasi-maximum likelihood (2sQML) estimator, this paper shows consistency and asymptotic normality under weak conditions. While second-order moments are needed for the consistency of the estimated unconditional covariance matrix, the existence of the finite sixth-order moments is required for the convergence of the second-order derivatives of the quasi-log-likelihood function. This paper also shows the relationship between the asymptotic distributions of the 2sQML estimator for the RBEKK model and variance targeting quasi-maximum likelihood estimator for the VT-BEKK model. Monte Carlo experiments show that the bias of the 2sQML estimator is negligible and that the appropriateness of the diagonal specification depends on the closeness to either the diagonal BEKK or the diagonal RBEKK models. An empirical analysis of the returns of stocks listed on the Dow Jones Industrial Average indicates that the choice of the diagonal BEKK or diagonal RBEKK models changes over time, but most of the differences between the two forecasts are negligible.
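The first-step rotation described above amounts to standardizing the (demeaned) observed vector by the sample covariance matrix (a sketch in generic notation):

\[
e_t = \hat\Sigma^{-1/2} y_t, \qquad \hat\Sigma = \frac{1}{T}\sum_{t=1}^{T} y_t y_t',
\]

so that the rotated series e_t has an identity sample covariance matrix; the second step then fits a BEKK-type recursion to the conditional covariance of e_t by quasi-maximum likelihood, which is why moment conditions of different orders on y_t govern the asymptotics of the two steps.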
35 pages, 3691 KiB  
Article
Quantile Regression with Generated Regressors
by Liqiong Chen, Antonio F. Galvao and Suyong Song
Econometrics 2021, 9(2), 16; https://doi.org/10.3390/econometrics9020016 - 12 Apr 2021
Cited by 7 | Viewed by 4258
Abstract
This paper studies estimation and inference for linear quantile regression models with generated regressors. We suggest a practical two-step estimation procedure, where the generated regressors are computed in the first step. The asymptotic properties of the two-step estimator, namely, consistency and asymptotic normality, are established. We show that the asymptotic variance-covariance matrix needs to be adjusted to account for the first-step estimation error. We propose a general estimator for the asymptotic variance-covariance, establish its consistency, and develop testing procedures for linear hypotheses in these models. Monte Carlo simulations to evaluate the finite-sample performance of the estimation and inference procedures are provided. Finally, we apply the proposed methods to study Engel curves for various commodities using data from the UK Family Expenditure Survey. We document strong heterogeneity in the estimated Engel curves along the conditional distribution of the budget share of each commodity. The empirical application also emphasizes that correctly estimating confidence intervals for the estimated Engel curves by the proposed estimator is of importance for inference.
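For reference, the second-step quantile regression solves the usual check-function problem, with the generated regressors ẑ_i from the first step standing in for their unobserved counterparts (generic notation, not the paper's exact setup):

\[
\hat\beta(\tau) = \arg\min_{\beta} \sum_{i=1}^{n} \rho_\tau\left(y_i - \hat z_i'\beta\right), \qquad
\rho_\tau(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right),
\]

and the variance-covariance adjustment developed in the paper accounts for the extra sampling error that ẑ_i carries from the first step.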
24 pages, 2606 KiB  
Article
Forecast Accuracy Matters for Hurricane Damage
by Andrew B. Martinez
Econometrics 2020, 8(2), 18; https://doi.org/10.3390/econometrics8020018 - 14 May 2020
Cited by 22 | Viewed by 7974
Abstract
I analyze damage from hurricane strikes on the United States since 1955. Using machine learning methods to select the most important drivers for damage, I show that large errors in a hurricane’s predicted landfall location result in higher damage. This relationship holds across a wide range of model specifications and when controlling for ex-ante uncertainty and potential endogeneity. Using a counterfactual exercise I find that the cumulative reduction in damage from forecast improvements since 1970 is about $82 billion, which exceeds the U.S. government’s spending on the forecasts and private willingness to pay for them.
(This article belongs to the Collection Econometric Analysis of Climate Change)
23 pages, 361 KiB  
Article
Cointegration and Error Correction Mechanisms for Singular Stochastic Vectors
by Matteo Barigozzi, Marco Lippi and Matteo Luciani
Econometrics 2020, 8(1), 3; https://doi.org/10.3390/econometrics8010003 - 4 Feb 2020
Cited by 15 | Viewed by 6514
Abstract
Large-dimensional dynamic factor models and dynamic stochastic general equilibrium models, both widely used in empirical macroeconomics, deal with singular stochastic vectors, i.e., vectors of dimension r which are driven by a q-dimensional white noise, with q < r. The present paper studies cointegration and error correction representations for an I(1) singular stochastic vector y_t. It is easily seen that y_t is necessarily cointegrated with cointegrating rank c ≥ r − q. Our contributions are: (i) we generalize Johansen’s proof of the Granger representation theorem to I(1) singular vectors under the assumption that y_t has rational spectral density; (ii) using recent results on singular vectors by Anderson and Deistler, we prove that for generic values of the parameters the autoregressive representation of y_t has a finite-degree polynomial. The relationship between the cointegration of the factors and the cointegration of the observable variables in a large-dimensional factor model is also discussed.
(This article belongs to the Special Issue Celebrated Econometricians: Katarina Juselius and Søren Johansen)
14 pages, 304 KiB  
Article
A Frequentist Alternative to Significance Testing, p-Values, and Confidence Intervals
by David Trafimow
Econometrics 2019, 7(2), 26; https://doi.org/10.3390/econometrics7020026 - 4 Jun 2019
Cited by 38 | Viewed by 11021
Abstract
There has been much debate about null hypothesis significance testing, p-values without null hypothesis significance testing, and confidence intervals. The first major section of the present article addresses some of the main reasons these procedures are problematic. The conclusion is that none of them are satisfactory. However, there is a new procedure, termed the a priori procedure (APP), that validly aids researchers in obtaining sample statistics that have acceptable probabilities of being close to their corresponding population parameters. The second major section provides a description and review of APP advances. Not only does the APP avoid the problems that plague other inferential statistical procedures, but it is easy to perform too. Although the APP can be performed in conjunction with other procedures, the present recommendation is that it be used alone.
(This article belongs to the Special Issue Towards a New Paradigm for Statistical Evidence)
11 pages, 3486 KiB  
Article
Pitfalls of Two-Step Testing for Changes in the Error Variance and Coefficients of a Linear Regression Model
by Pierre Perron and Yohei Yamamoto
Econometrics 2019, 7(2), 22; https://doi.org/10.3390/econometrics7020022 - 21 May 2019
Cited by 9 | Viewed by 6902
Abstract
In empirical applications based on linear regression models, structural changes often occur in both the error variance and regression coefficients, possibly at different dates. A commonly applied method is to first test for changes in the coefficients (or in the error variance) and, conditional on the break dates found, test for changes in the variance (or in the coefficients). In this note, we provide evidence that such procedures have poor finite sample properties when the changes in the first step are not correctly accounted for. In doing so, we show that testing for changes in the coefficients (or in the variance) ignoring changes in the variance (or in the coefficients) induces size distortions and loss of power. Our results illustrate a need for a joint approach to test for structural changes in both the coefficients and the variance of the errors. We provide some evidence that the procedures suggested by Perron et al. (2019) provide tests with good size and power.
24 pages, 365 KiB  
Article
Covariance Prediction in Large Portfolio Allocation
by Carlos Trucíos, Mauricio Zevallos, Luiz K. Hotta and André A. P. Santos
Econometrics 2019, 7(2), 19; https://doi.org/10.3390/econometrics7020019 - 9 May 2019
Cited by 10 | Viewed by 8323
Abstract
Many financial decisions, such as portfolio allocation, risk management, option pricing and hedge strategies, are based on forecasts of the conditional variances, covariances and correlations of financial returns. The paper shows an empirical comparison of several methods to predict one-step-ahead conditional covariance matrices. These matrices are used as inputs to obtain out-of-sample minimum variance portfolios based on stocks belonging to the S&P500 index from 2000 to 2017 and sub-periods. The analysis is done through several metrics, including standard deviation, turnover, net average return, information ratio and Sortino’s ratio. We find that no method is the best in all scenarios and the performance depends on the criterion, the period of analysis and the rebalancing strategy.
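As a reminder of how such forecasts are put to work, each predicted covariance matrix is typically mapped into a global minimum variance portfolio via the standard closed form under a full-investment constraint (a textbook expression, included for context):

\[
w_{t+1} = \frac{\hat\Sigma_{t+1}^{-1}\,\iota}{\iota'\,\hat\Sigma_{t+1}^{-1}\,\iota},
\]

where ι is a vector of ones and Σ̂_{t+1} the one-step-ahead covariance forecast; the competing prediction methods are then compared through the realized behaviour of these portfolios (standard deviation, turnover, net average return, and related metrics).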