
Econometrics

Econometrics is an international, peer-reviewed, open access journal on econometric modeling and forecasting, as well as new advances in econometrics theory, and is published quarterly online by MDPI.


All Articles (536)

Estimation of Two-State Proportional Hazard Rate Models with Unobserved Heterogeneity

  • Emilio Congregado,
  • David Troncoso-Ponce and
  • Alejandro Morales-Kirioukhina
  • + 1 author

This article examines two-state proportional hazard rate models with unobserved heterogeneity specific to each state, a framework that is especially relevant for labor market transitions. To make estimation feasible in large longitudinal datasets, we implement hshaz2s, a Stata routine that uses analytical expressions for the gradient vector and Hessian matrix of the log-likelihood function through the dual second-order moment (d2 ml) method. The empirical application estimates a discrete-time duration model for transitions between employment and unemployment using Spanish labor market microdata for young low-skilled workers over 2000–2019. The results show that apprenticeship contracts are associated with lower exit rates from employment than other temporary contracts, but not with faster transitions from unemployment back into employment. The estimates also reveal substantial state-specific unobserved heterogeneity, with a large latent group characterized by persistent spells in both states. Analytical second-order information also markedly reduces convergence time under richer heterogeneity structures. Overall, the article makes this class of two-state hazard models operational for applied research and provides new evidence on apprenticeship and temporary contracts in Spain.

28 April 2026

Figure: Employment and unemployment mean predicted hazard rates (with UH).
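
To make the model class in the abstract above concrete, here is a minimal Python sketch, not the authors' hshaz2s Stata routine: a discrete-time complementary log-log hazard with two mass points of state-specific unobserved heterogeneity. The data layout, function names, and the spell-level (rather than individual-level) mixing of latent types are simplifying assumptions made only for illustration.

```python
import numpy as np

def spell_loglik(beta, v, X, exited):
    """Log-likelihood of one spell, conditional on the heterogeneity mass point v.
    X is a (T, k) array of covariates (one row per period at risk); exited is True
    if the spell ends in a transition and False if it is censored. The complementary
    log-log link makes exp(x'beta + v) a proportional-hazard index."""
    h = 1.0 - np.exp(-np.exp(X @ beta + v))                  # per-period discrete-time hazard
    h = np.clip(h, 1e-12, 1.0 - 1e-12)
    if exited:
        return np.log(1.0 - h[:-1]).sum() + np.log(h[-1])    # survive T-1 periods, exit at T
    return np.log(1.0 - h).sum()                             # censored: survive all periods

def two_state_loglik(params, spells, k):
    """Mixed log-likelihood over spells in two states (e.g. employment, unemployment).
    params stacks (beta for state 0, beta for state 1, second mass point for each state,
    logit of the type probability); the first mass point in each state is normalised to 0.
    Each element of spells is a dict {"state": 0 or 1, "X": (T, k) array, "exited": bool}."""
    beta = (params[:k], params[k:2 * k])
    v2 = (params[2 * k], params[2 * k + 1])                  # second mass point, per state
    pi = 1.0 / (1.0 + np.exp(-params[2 * k + 2]))            # probability of latent type 1
    total = 0.0
    for s in spells:
        st = s["state"]
        l1 = spell_loglik(beta[st], 0.0, s["X"], s["exited"])
        l2 = spell_loglik(beta[st], v2[st], s["X"], s["exited"])
        total += np.log(pi * np.exp(l1) + (1.0 - pi) * np.exp(l2))
    return total
```

Maximising this with a generic optimiser (for example, scipy.optimize.minimize on the negative log-likelihood) relies on numerical derivatives; the article's point is that supplying analytical gradients and Hessians, as hshaz2s does, markedly reduces convergence time once the heterogeneity structure becomes rich.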

Suppose that we have a statistical model with q unknown parameters w, and an estimate ŵ based on a sample of size n. A basic question is: what is the covariance of the estimate? The covariance is needed for the Central Limit Theorem (CLT), which gives a first approximation for the distribution of ŵ. But what if q = q_n increases with n? How fast can it increase and the CLT still hold? An answer has so far only been given for the sample mean. The same is true for Edgeworth expansions. These are expansions in powers of n^{-1/2} for the density and distribution of ŵ. For fixed q, these expansions are important, as they show how small n can be for the CLT to apply; when it does apply, they can greatly improve its accuracy. I give conditions that allow the Edgeworth expansions to remain valid when q = q_n increases with n. Earlier Edgeworth expansions with increasing q_n have only been given for a sample mean, and only to second order. In contrast, I consider a very large class of estimates, the class of non-lattice standard estimates. An estimate is said to be a standard estimate if its mean converges to its true value as n increases and, for r ≥ 1, its rth-order cumulants have magnitude n^{1-r} and can be expanded in powers of n^{-1}. For this class of estimates, I show that the Edgeworth expansions hold if q_n grows as a power of n less than 1/6; that is, I give these expansions in powers of q_n^3 n^{-1/2}. This large class of estimates has a huge range of potential applications, as estimates of high dimension are common in nearly all areas of applied statistics. The most important type of standard estimate is when ŵ is a smooth function of a sample mean of dimension p, say. When either or both of q = q_n and p = p_n increase with n, I give conditions on their growth for the Edgeworth expansions for ŵ to remain valid: the eighth power of p times the sixth power of q cannot grow as fast as n. For fixed q, this holds if p_n grows as a power of n less than 1/8. This appears to be the first time that Edgeworth expansions have been given when not one but two dimensions are allowed to increase with n, giving two different pathways for allowing an increase in dimensionality. When q = 1, I give 5th-order Edgeworth–Cornish–Fisher expansions for the standardized distribution, and its quantiles, of any smooth function of a sample mean of dimension p_n, when p_n grows as a power of n less than 1/2. However, for the special case when this function is linear, there is no restriction whatever on how fast p_n can increase. If, in addition, the components of the sample mean are independent, then these expansions are in powers of (np)^{-1/2}. I also give a method that greatly reduces the number of terms needed for the 2nd- and 3rd-order terms in the Edgeworth expansions, that is, for the 1st- and 2nd-order corrections to the CLT. Finally, I extend these results to the case where ŵ ∈ R^q is a function of several independent sample means, each of dimension increasing with n, with total dimension p.

27 April 2026
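
For readers who want a reference point for the abstract above, the short Python sketch below implements the textbook one-term Edgeworth correction to the CLT for a standardized sample mean, F_n(x) ≈ Φ(x) − φ(x)·κ₃(x² − 1)/(6√n) with κ₃ the standardized skewness, and compares it against a Monte Carlo estimate for exponential data. It is a fixed-dimension illustration only; it is not taken from the article and says nothing about the growing-dimension results described there.

```python
import numpy as np
from scipy.stats import norm

def edgeworth_cdf(x, skew, n):
    """One-term Edgeworth expansion (first correction to the CLT) for the CDF of the
    standardized sample mean of n i.i.d. draws with the given standardized skewness."""
    return norm.cdf(x) - norm.pdf(x) * skew * (x**2 - 1.0) / (6.0 * np.sqrt(n))

# Exponential(1) data: mean 1, standard deviation 1, standardized skewness 2.
rng = np.random.default_rng(0)
n, reps = 20, 200_000
means = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
z = (means - 1.0) * np.sqrt(n)                       # standardized sample means

for x in (-1.5, 0.0, 1.5):
    print(f"x={x:+.1f}  Monte Carlo={np.mean(z <= x):.4f}  "
          f"CLT={norm.cdf(x):.4f}  Edgeworth={edgeworth_cdf(x, 2.0, n):.4f}")
```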

Given significant market uncertainty, “static” methods fail to account for it and may therefore be less precise and, accordingly, less helpful when selecting among investment alternatives. Methods that take the current economic situation into account and allow the selection of alternatives to adapt to external uncertainty are becoming more relevant. One such method is fuzzy set theory. This article addresses the mathematical framework of such an approach to the economic analysis of investment project selection. A step-by-step scheme for implementing the fuzzy set method for investment projects is presented. A study of three example investment alternatives demonstrates the compatibility and feasibility of combining the two methods (the fuzzy set method may draw in part on expert pairwise comparisons following the Saaty method) and shows how investors' previous intuitive decisions can be confirmed or refuted through a comprehensive analysis of the set of criteria and a mathematically grounded technique.

13 April 2026
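
As a concrete illustration of the combination described above, the sketch below derives criterion weights from a Saaty pairwise-comparison matrix (using the row geometric-mean approximation to the principal eigenvector), scores each alternative with triangular fuzzy numbers, and defuzzifies the weighted scores by their centroids to rank the projects. The matrix, the fuzzy scores, and the project names are invented for the example and are not the article's data.

```python
import numpy as np

def saaty_weights(A):
    """Criterion weights from a Saaty pairwise-comparison matrix via the
    row geometric-mean approximation to the principal eigenvector."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return g / g.sum()

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number given as (low, mode, high)."""
    return sum(tfn) / 3.0

# Hypothetical 3x3 Saaty matrix comparing three criteria (e.g. return, risk, payback).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = saaty_weights(A)

# Hypothetical triangular fuzzy scores (low, mode, high) of each alternative per criterion.
scores = {
    "Project A": [(0.6, 0.7, 0.9), (0.4, 0.5, 0.6), (0.5, 0.6, 0.7)],
    "Project B": [(0.5, 0.6, 0.7), (0.6, 0.7, 0.8), (0.4, 0.5, 0.6)],
    "Project C": [(0.7, 0.8, 0.9), (0.3, 0.4, 0.5), (0.6, 0.7, 0.8)],
}

# Weight the fuzzy scores, aggregate them component-wise, then defuzzify and rank.
for name, tfns in scores.items():
    agg = tuple(sum(wi * t[j] for wi, t in zip(w, tfns)) for j in range(3))
    print(f"{name}: aggregated TFN = {tuple(round(c, 3) for c in agg)}, "
          f"crisp score = {defuzzify(agg):.3f}")
```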

Evidence from observational studies plays a central role in shaping public policy in health, education, and financial regulation, where randomized experiments are rarely feasible. Propensity score matching (PSM) is a widely used method to approximate fair comparisons between treatment and control groups. Incorporating machine learning into the estimation of propensity scores can strengthen prediction and enhance the credibility of findings. However, stronger predictive models create a “predictability paradox”: as predictive accuracy improves and treatment assignment becomes strongly predictable from observed covariates, estimated propensity scores for treated and control units grow more distinct, revealing limited overlap between groups. In the limit, near-perfect prediction produces near-complete separation between groups, rendering traditional matching infeasible and confining inference to a narrow subset of units near the boundary of the propensity score distribution, a setting analogous to a regression discontinuity design (RDD). Researchers thus face perverse incentives to use weaker models in order to obtain statistically significant but spurious results. These dynamics jeopardize the reliability of evidence for policy. To safeguard decision-making, we propose a simple reform: require that studies using PSM disclose model error rates, including false positive and false negative rates, along with information on overlap and effective sample size.

1 April 2026
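
The disclosures proposed in the abstract above are easy to compute in practice. The sketch below simulates data in which treatment is strongly predictable from covariates (an assumption chosen to make the “predictability paradox” visible), fits a weaker and a stronger propensity model with scikit-learn, and reports false positive and false negative rates, a crude common-support share, and the effective control-group sample size under ATT-style odds weights. The simulated design, thresholds, and function names are illustrative and are not the article's procedure.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

def overlap_report(model, X, t, name):
    """Fit a propensity model and print the diagnostics the abstract proposes to
    disclose: classification error rates, common support, and effective sample size."""
    ps = model.fit(X, t).predict_proba(X)[:, 1]
    ps = np.clip(ps, 1e-6, 1.0 - 1e-6)                 # guard against degenerate scores
    pred = (ps >= 0.5).astype(int)
    fpr = np.mean(pred[t == 0] == 1)                   # controls classified as treated
    fnr = np.mean(pred[t == 1] == 0)                   # treated classified as controls
    lo, hi = ps[t == 1].min(), ps[t == 0].max()        # crude common-support interval
    support = np.mean((ps >= lo) & (ps <= hi))
    w = np.where(t == 1, 1.0, ps / (1.0 - ps))         # ATT odds weights for controls
    ess = w[t == 0].sum() ** 2 / (w[t == 0] ** 2).sum()
    print(f"{name}: FPR={fpr:.2f}  FNR={fnr:.2f}  common support={support:.2f}  "
          f"control ESS={ess:.0f} of {np.sum(t == 0)}")

# Simulated data where treatment assignment is highly predictable from covariates.
rng = np.random.default_rng(1)
X = rng.normal(size=(5_000, 5))
t = (X @ np.array([2.0, 1.5, 1.0, 0.5, 0.0]) + rng.normal(size=5_000) > 0).astype(int)

overlap_report(LogisticRegression(max_iter=1000), X, t, "Logistic regression")
overlap_report(GradientBoostingClassifier(random_state=0), X, t, "Gradient boosting")
```

The effective sample size uses the Kish formula, sum(w)² / sum(w²), so weights concentrated on a handful of comparable controls show up directly as a small ESS relative to the nominal control count.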

Econometrics - ISSN 2225-1146