Journal Description
Econometrics is an international, peer-reviewed, open access journal on econometric modeling and forecasting, as well as new advances in econometrics theory, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), EconLit, EconBiz, RePEc, and other databases.
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 34.6 days after submission; acceptance to publication takes 8.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 1.4 (2024); 5-Year Impact Factor: 1.2 (2024)
Latest Articles
Fractional Probit with Cross-Sectional Volatility: Bridging Heteroskedastic Probit and Fractional Response Models
Econometrics 2025, 13(4), 43; https://doi.org/10.3390/econometrics13040043 - 3 Nov 2025
Abstract
This paper introduces a new econometric framework for modeling fractional outcomes bounded between zero and one. We propose the Fractional Probit with Cross-Sectional Volatility (FPCV), which specifies the conditional mean through a probit link and allows the conditional variance to depend on observable heterogeneity. The model extends heteroskedastic probit methods to fractional responses and unifies them with existing approaches for proportions. Monte Carlo simulations demonstrate that the FPCV estimator achieves lower bias, more reliable inference, and superior predictive accuracy compared with standard alternatives. The framework is particularly suited to empirical settings where fractional outcomes display systematic variability across units, such as participation rates, market shares, health indices, financial ratios, and vote shares. By modeling both mean and variance, FPCV provides interpretable measures of volatility and offers a robust tool for empirical analysis and policy evaluation.
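The abstract does not spell out the functional form, but a plausible FPCV-style specification, in the spirit of Papke–Wooldridge fractional probit combined with Harvey-type multiplicative heteroskedasticity, can be sketched as follows (an assumed illustration, not the paper's exact parameterization):

```latex
% Assumed FPCV-style specification: probit conditional mean whose index is
% scaled by a covariate-dependent dispersion term, for fractional y in [0,1].
\[
  \mathbb{E}[\,y_i \mid x_i, z_i\,] \;=\;
  \Phi\!\left(\frac{x_i'\beta}{\exp(z_i'\gamma)}\right) \;\equiv\; \Phi_i ,
\]
% estimated by maximizing the Bernoulli quasi-log-likelihood, which remains
% valid for fractional (not just binary) responses:
\[
  \ell(\beta,\gamma) \;=\; \sum_{i=1}^{n}
  \big[\, y_i \log \Phi_i + (1 - y_i)\log(1 - \Phi_i) \,\big].
\]
```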
Open Access Article
Counterfactual Duration Analysis
by Miguel A. Delgado and Andrés García-Suaza
Econometrics 2025, 13(4), 42; https://doi.org/10.3390/econometrics13040042 - 30 Oct 2025
Abstract
This article introduces new counterfactual standardization techniques for comparing duration distributions subject to random censoring through counterfactual decompositions. The counterfactual distribution of one population relative to another is computed after estimating the conditional distribution, using either a semiparametric or a nonparametric specification. We consider both the semiparametric proportional hazard model and a fully nonparametric partition-based estimator. The finite-sample performance of the proposed methods is evaluated through Monte Carlo experiments. We also illustrate the methodology with an application to unemployment duration in Spain during the period between 2004 and 2007, focusing on gender differences. The results indicate that observable characteristics account for only a small portion of the observed gap.
Open Access Article
Consistency of the OLS Bootstrap for Independently but Not-Identically Distributed Data: A Permutation Perspective
by Alwyn Young
Econometrics 2025, 13(4), 41; https://doi.org/10.3390/econometrics13040041 - 23 Oct 2025
Abstract
This paper introduces a new approach to proving bootstrap consistency based upon the distribution of permutation statistics. It uses this approach to derive results covering fundamentally not-identically distributed groups of data, in which average moments do not converge to anything, under moment conditions that are less demanding than those of earlier results for either identically distributed or not-identically distributed data.
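The object such consistency results describe is the bootstrap distribution of the OLS estimator itself. A minimal numerical sketch of the pairs (nonparametric) bootstrap, on hypothetical data whose scales deliberately differ across groups of observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical i.n.i.d. sample: regressor and error scales vary by group.
n = 500
group = np.arange(n) % 5
x = rng.normal(0, 1 + group, size=n)
y = 1.0 + 2.0 * x + rng.normal(0, 1 + (np.arange(n) % 3), size=n)

def ols_slope(x, y):
    """Slope coefficient from an OLS fit of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

beta_hat = ols_slope(x, y)

# Pairs bootstrap: resample (x_i, y_i) rows with replacement.
B = 2000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = ols_slope(x[idx], y[idx])

# Percentile 95% confidence interval for the slope.
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"slope={beta_hat:.3f}, 95% pairs-bootstrap CI=({lo:.3f}, {hi:.3f})")
```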
Open Access Review
VAR Models with an Index Structure: A Survey with New Results
by Gianluca Cubadda
Econometrics 2025, 13(4), 40; https://doi.org/10.3390/econometrics13040040 - 22 Oct 2025
Abstract
The main aim of this paper is to review recent advances in the multivariate autoregressive index model (MAI) and their applications to economic and financial time series. The MAI has recently gained momentum because it can be seen as a link between two popular but distinct multivariate time series approaches: vector autoregressive (VAR) modeling and the dynamic factor model (DFM). Indeed, on the one hand, the MAI is a VAR model with a peculiar reduced-rank structure that can lead to a significant dimension reduction; on the other hand, it allows for the identification of common components and common shocks in a similar way to the DFM. Our focus is on recent developments of the MAI, which include extending the original model with individual autoregressive structures, stochastic volatility, time-varying parameters, high dimensionality, and co-integration. In addition, some gaps in the literature are filled by providing new results on the representation theory underlying previous contributions, and a novel model is provided.
(This article belongs to the Special Issue Advancements in Macroeconometric Modeling and Time Series Analysis)
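For readers new to the model, the MAI's defining reduced-rank restriction can be stated compactly (standard representation; the notation here is assumed rather than taken from the paper):

```latex
% MAI of order p with q indexes: every lag enters only through the q linear
% combinations \omega' y, so each VAR slope matrix factors as A_i \omega'.
\[
  y_t \;=\; \sum_{i=1}^{p} A_i \big(\omega' y_{t-i}\big) \;+\; \varepsilon_t,
  \qquad y_t \in \mathbb{R}^{n},\quad
  \omega \in \mathbb{R}^{n \times q},\quad
  A_i \in \mathbb{R}^{n \times q},\quad q \ll n .
\]
% An unrestricted VAR(p) has p n^2 slope parameters; the MAI has only
% p n q + n q, which is the source of the dimension reduction noted above.
```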
Open Access Article
Demonstrating That the Autoregressive Distributed Lag Bounds Test Can Detect a Long-Run Levels Relationship When the Dependent Variable Is I(0)
by Chris Stewart
Econometrics 2025, 13(4), 39; https://doi.org/10.3390/econometrics13040039 - 22 Oct 2025
Abstract
The autoregressive distributed lag bounds t-test and F-test for a long-run relationship, which allow level variables to be either I(0) or I(1), are widely used in the literature. However, a long-run levels relationship cannot be detected when the dependent variable is I(0), because both tests will always reject their null hypotheses. It has subsequently been argued that a third test determines whether the dependent variable is I(1), such that when all three tests reject their null hypotheses, a cointegrating equation with an I(1) dependent variable is identified. It is argued that all three tests rejecting their null hypotheses rules out the possibility that the dependent variable is I(0), implying that the three tests cannot detect an equilibrium when the dependent variable is I(0). Our first contribution is to demonstrate and explain that rejection of all three tests' null hypotheses can also indicate an equilibrium when the dependent variable is I(0), and not only when it is I(1). Our second contribution is to produce previously unavailable critical values for the third test in the cases where an intercept or trend is restricted into the equilibrium.
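For context, both tests are computed on the conditional error-correction form of the ARDL model; the display below uses the standard Pesaran–Shin–Smith (2001) notation rather than the paper's own:

```latex
% Conditional error-correction form underlying the bounds tests:
\[
  \Delta y_t \;=\; c_0 + c_1 t
  \;+\; \pi_{yy}\, y_{t-1} \;+\; \pi_{yx}'\, x_{t-1}
  \;+\; \sum_{i=1}^{p-1} \psi_i\, \Delta y_{t-i}
  \;+\; \sum_{i=0}^{q-1} \phi_i'\, \Delta x_{t-i} \;+\; u_t .
\]
% F-test: H_0: \pi_{yy} = 0, \pi_{yx} = 0 (no long-run levels relationship);
% t-test: H_0: \pi_{yy} = 0. Critical values are tabulated as bounds: a lower
% bound with all regressors I(0) and an upper bound with all regressors I(1).
```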
Open Access Review
Vis Inertiae and Statistical Inference: A Review of Difference-in-Differences Methods Employed in Economics and Other Subjects
by Bruno Paolo Bosco and Paolo Maranzano
Econometrics 2025, 13(4), 38; https://doi.org/10.3390/econometrics13040038 - 30 Sep 2025
Abstract
Difference in Differences (DiD) is a useful statistical technique employed by researchers to estimate the effects of exogenous events on the outcome of some response variables in random samples of treated units (i.e., units exposed to the event) ideally drawn from an infinite population. The term “effect” should be understood as the discrepancy between the post-event realisation of the response and the hypothetical realisation of that same outcome for the same treated units in the absence of the event. This theoretical discrepancy is clearly unobservable. To circumvent the implicit missing variable problem, DiD methods utilise the realisations of the response variable observed in comparable random samples of untreated units. The latter are samples of units drawn from the same population, but they are not exposed to the event under investigation. They function as the control or comparison group and serve as proxies for the non-existent untreated realisations of the responses in treated units during post-treatment periods. In summary, the DiD model posits that, in the absence of intervention and under specific conditions, treated units would exhibit behaviours that are indistinguishable from those of control or untreated units during the post-treatment periods. For the purpose of estimation, the method employs a combination of before–after and treatment–control group comparisons. The event that affects the response variables is referred to as “treatment.” However, it could also be referred to as “causal factor” to emphasise that, in the DiD approach, the objective is not to estimate a mere statistical association among variables. This review introduces the DiD techniques for researchers in economics, public policy, health research, management, environmental analysis, and other fields. It commences with the rudimentary methods employed to estimate the so-called Average Treatment Effect upon Treated (ATET) in a two-period and two-group case and subsequently addresses numerous issues that arise in a multi-unit and multi-period context. A particular focus is placed on the statistical assumptions necessary for a precise delineation of the identification process of the cause–effect relationship in the multi-period case. These assumptions include the parallel trend hypothesis, the no-anticipation assumption, and the SUTVA assumption. In the multi-period case, both the homogeneous and heterogeneous scenarios are taken into consideration. The homogeneous scenario refers to the situation in which the treated units are initially treated in the same periods. In contrast, the heterogeneous scenario involves the treatment of treated units in different periods. A portion of the presentation will be allocated to the developments associated with the DiD techniques that can be employed in the context of data clustering or spatio-temporal dependence. The present review includes a concise exposition of some policy-oriented papers that incorporate applications of DiD. The areas of focus encompass income taxation, migration, regulation, and environmental management.
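As a concrete anchor for the two-period, two-group case the review starts from, the sketch below computes the ATET on hypothetical simulated data, both as a difference of before-after differences in group means and as the interaction coefficient of an OLS regression (all names and numbers are illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical two-period, two-group data with a true ATET of 2.0.
n = 2000
treated = rng.integers(0, 2, n)          # group indicator
post = rng.integers(0, 2, n)             # period indicator
y = (1.0 + 0.5 * treated + 0.8 * post    # group and time effects
     + 2.0 * treated * post              # effect on the treated, post-treatment
     + rng.normal(0, 1, n))
df = pd.DataFrame({"y": y, "treated": treated, "post": post})

# Canonical 2x2 DiD: difference of the before-after differences.
m = df.groupby(["treated", "post"])["y"].mean()
atet = (m.loc[(1, 1)] - m.loc[(1, 0)]) - (m.loc[(0, 1)] - m.loc[(0, 0)])
print(f"2x2 DiD estimate of the ATET: {atet:.3f}")   # close to 2.0

# The same point estimate is the interaction coefficient in
# y ~ 1 + treated + post + treated*post.
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"interaction coefficient: {beta[3]:.3f}")
```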
Open Access Article
Re-Examining Confidence Intervals for Ratios of Parameters
by Zaka Ratsimalahelo
Econometrics 2025, 13(3), 37; https://doi.org/10.3390/econometrics13030037 - 20 Sep 2025
Abstract
This paper considers the problem of constructing confidence intervals (CIs) for nonlinear functions of parameters, particularly ratios of parameters, a common issue in econometrics and statistics. Classical CIs (such as the Delta method and the Fieller method) often fail in small samples due to biased parameter estimators and skewed distributions. We extend the Delta method using the Edgeworth expansion to correct for skewness arising because estimated parameters have non-normal and asymmetric distributions. The resulting bias-corrected confidence intervals are easy to compute and have a good coverage probability that converges to the nominal level as the sample size n increases. We also propose bias-corrected estimators based on second-order Taylor expansions, aligning with the "almost unbiased ratio estimator". We then correct the CIs according to the Delta method and the Edgeworth expansion. Thus, our new methods for constructing confidence intervals account for both the bias and the skewness of the distribution of the nonlinear functions of parameters. We conduct a simulation study to compare the confidence intervals of our new methods with the two classical methods. The methods evaluated include Fieller's interval, the Delta interval with and without the bias correction, and the Edgeworth expansion interval with and without the bias correction. The results show that our new methods with bias correction generally perform well in terms of controlling coverage probabilities and average interval lengths. They should be recommended for constructing confidence intervals for nonlinear functions of estimated parameters.
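For orientation, the two classical baselines the paper improves upon fit in a few lines; the estimates and covariance below are hypothetical, and the paper's Edgeworth and bias corrections are refinements layered on the first-order Delta interval:

```python
import numpy as np
from scipy import stats

# Hypothetical parameter estimates and covariance for theta = b1 / b2.
b = np.array([1.2, 0.8])
V = np.array([[0.04, 0.01],
              [0.01, 0.09]])

theta = b[0] / b[1]
# Delta method: gradient of g(b1, b2) = b1/b2 is (1/b2, -b1/b2^2).
g = np.array([1.0 / b[1], -b[0] / b[1] ** 2])
se = np.sqrt(g @ V @ g)
z = stats.norm.ppf(0.975)
print(f"theta={theta:.3f}, 95% Delta CI=({theta - z*se:.3f}, {theta + z*se:.3f})")

# Fieller: the CI is the set of t with (b1 - t*b2)^2 <= z^2 * Var(b1 - t*b2),
# i.e., the roots of a quadratic in t (bounded-interval case assumed here).
a2 = b[1] ** 2 - z ** 2 * V[1, 1]
a1 = -2.0 * (b[0] * b[1] - z ** 2 * V[0, 1])
a0 = b[0] ** 2 - z ** 2 * V[0, 0]
print("Fieller 95% CI:", np.sort(np.roots([a2, a1, a0]).real))
```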
Open Access Article
Integration and Risk Transmission Dynamics Between Bitcoin, Currency Pairs, and Traditional Financial Assets in South Africa
by Benjamin Mudiangombe Mudiangombe and John Weirstrass Muteba Mwamba
Econometrics 2025, 13(3), 36; https://doi.org/10.3390/econometrics13030036 - 19 Sep 2025
Cited by 1
Abstract
This study explores new insights into the integration and dynamic asymmetric volatility risk spillovers between Bitcoin, currency pairs (USD/ZAR, GBP/ZAR, and EUR/ZAR), and traditional financial assets (ALSI, Bond, and Gold) in South Africa, using daily data spanning the period from 2010 to 2024 and employing Time-Varying Parameter Vector Autoregression (TVP-VAR) and wavelet coherence. The findings revealed strengthened integration between traditional financial assets and currency pairs, as well as weak integration with BTC/ZAR. Furthermore, BTC/ZAR and traditional financial assets were receivers of shocks, while the currency pairs were transmitters of spillovers. Gold emerged as an attractive investment during periods of inflation or currency devaluation. The assets have a total connectedness index of 28.37%, indicating reduced systemic risk. Distinct patterns were observed at short, medium, and long horizons across time scales and frequencies. There is a diversification benefit and potential for hedging strategies due to gold's negative influence on BTC/ZAR. Bitcoin's high volatility and lack of regulatory oversight continue to be deterrents for institutional investors. This study lays a solid foundation for understanding financial dynamics in South Africa, offering valuable insights for investors and policymakers interested in the intricate linkages between BTC/ZAR, currency pairs, and traditional financial assets, allowing for more targeted policy measures.
Open Access Article
Forecasting of GDP Growth in the South Caucasian Countries Using Hybrid Ensemble Models
by Gaetano Perone and Manuel A. Zambrano-Monserrate
Econometrics 2025, 13(3), 35; https://doi.org/10.3390/econometrics13030035 - 10 Sep 2025
Abstract
This study aimed to forecast the gross domestic product (GDP) of the South Caucasian nations (Armenia, Azerbaijan, and Georgia) by scrutinizing the accuracy of various econometric methodologies. This topic is noteworthy considering the significant economic development exhibited by these countries in the context of post-COVID-19 recovery. The seasonal autoregressive integrated moving average (SARIMA), exponential smoothing state space (ETS) model, neural network autoregressive (NNAR) model, and trigonometric exponential smoothing state space model with Box–Cox transformation, ARMA errors, and trend and seasonal components (TBATS), together with their feasible hybrid combinations, were employed. The empirical investigation utilized quarterly GDP data at market prices from 1Q-2010 to 2Q-2024. According to the results, the hybrid models significantly outperformed the corresponding single models, handling the linear and nonlinear components of the GDP time series more effectively. Rolling-window cross-validation showed that the hybrid ETS-NNAR-TBATS model for Armenia, the hybrid ETS-NNAR-SARIMA model for Azerbaijan, and the hybrid ETS-SARIMA model for Georgia were the best-performing models. The forecasts also suggest that Georgia is likely to record the strongest GDP growth over the projection horizon, followed by Armenia and Azerbaijan. These findings confirm that hybrid models constitute a reliable technique for forecasting GDP in the South Caucasian countries. This region is not only economically dynamic but also strategically important, with direct implications for policy and regional planning.
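A minimal sketch of the hybrid idea, combining an ARIMA and an ETS forecast by simple averaging on a synthetic quarterly series; the paper's component models (SARIMA, ETS, NNAR, TBATS) and its combination weights may differ:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

# Synthetic quarterly "GDP" series with trend and seasonality.
rng = np.random.default_rng(2)
t = np.arange(80)
y = pd.Series(100 + 0.8 * t + 5 * np.sin(2 * np.pi * t / 4)
              + rng.normal(0, 1, 80),
              index=pd.period_range("2005Q1", periods=80, freq="Q"))

h = 8  # forecast horizon in quarters
fc_arima = ARIMA(y, order=(1, 1, 1),
                 seasonal_order=(1, 0, 0, 4)).fit().forecast(steps=h)
fc_ets = ETSModel(y, error="add", trend="add", seasonal="add",
                  seasonal_periods=4).fit(disp=False).forecast(steps=h)

# Hybrid forecast: equal-weight average of the component forecasts.
fc_hybrid = (np.asarray(fc_arima) + np.asarray(fc_ets)) / 2
print(np.round(fc_hybrid, 2))
```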
Open Access Article
Volatility Analysis of Returns of Financial Assets Using a Bayesian Time-Varying Realized GARCH-Itô Model
by Pathairat Pastpipatkul and Htwe Ko
Econometrics 2025, 13(3), 34; https://doi.org/10.3390/econometrics13030034 - 9 Sep 2025
Abstract
As financial markets become increasingly complex and high-frequency, volatility analysis is a cornerstone of modern financial econometrics, with practical applications in portfolio optimization, derivative pricing, and systematic risk assessment. This paper introduces a novel Bayesian Time-varying Generalized Autoregressive Conditional Heteroskedasticity (BtvGARCH-Itô) model designed to improve the precision and flexibility of volatility modeling in financial markets. Original GARCH-Itô models, while effective in capturing realized volatility and intraday patterns, rely on fixed or constant parameters and are thus limited in capturing structural changes. Our proposed model addresses this restraint by integrating the continuous-time Itô process with time-varying Bayesian inference, allowing parameters to vary over time based on prior beliefs so as to quantify uncertainty and minimize overfitting, especially in small-sample or high-dimensional settings. Through simulation studies using sample sizes of N = 100 and N = 200, we find that the BtvGARCH-Itô model outperformed the original GARCH-Itô model in in-sample fit and out-of-sample forecast accuracy, based on comparing posterior estimates with true parameter values and on forecasting error metrics. For empirical validation, the model is applied to analyze the volatility of the S&P 500 and Bitcoin (BTC) using one-minute data for the S&P 500 (from 3 January 2023 to 31 December 2024) and BTC (from 1 January 2023 to 1 January 2025). This model has potential as a robust tool and a new direction in volatility modeling for financial risk management.
(This article belongs to the Special Issue Innovations in Bayesian Econometrics: Theory, Techniques, and Economic Analysis)
Open Access Article
Modelling and Forecasting Financial Volatility with Realized GARCH Model: A Comparative Study of Skew-t Distributions Using GRG and MCMC Methods
by Didit Budi Nugroho, Adi Setiawan and Takayuki Morimoto
Econometrics 2025, 13(3), 33; https://doi.org/10.3390/econometrics13030033 - 4 Sep 2025
Cited by 1
Abstract
Financial time-series data often exhibit statistically significant skewness and heavy tails, and numerous flexible distributions have been proposed to model them. In the context of the Log-linear Realized GARCH model with Skew-t (ST) distributions, our objective is to explore how the choice of prior distributions in the Adaptive Random Walk Metropolis method and of initial parameter values in the Generalized Reduced Gradient (GRG) Solver method affects ST parameter and log-likelihood estimates. An empirical study was conducted using the FTSE 100 index to evaluate model performance. We provide a comprehensive step-by-step tutorial demonstrating how to perform estimation and sensitivity analysis using data tables in Microsoft Excel. Among seven ST distributions, namely the asymmetric, epsilon, exponentiated half-logistic, Hansen, Jones–Faddy, Mittnik–Paolella, and Rosco–Jones–Pewsey distributions, Hansen's ST distribution is found to be superior. This study also applied the GRG method to estimate newer models, including the Realized Real-Time GARCH, Realized ASHARV, and GARCH@CARR models. An empirical study showed that the GARCH@CARR model with the feedback effect provides the best goodness of fit. Out-of-sample forecasting evaluations further confirm the predictive dominance of models incorporating real-time information, particularly Realized Real-Time GARCH for volatility forecasting and Realized ASHARV for 1% VaR estimation. The findings offer actionable insights for portfolio managers and risk analysts, particularly in improving volatility forecasts and tail-risk assessments during market crises, thereby enhancing risk-adjusted returns and regulatory compliance. Although the GRG method is sensitive to initial values, its implementation in a spreadsheet can be a powerful and promising tool for working with probability density functions that have explicit forms and are unimodal, high-dimensional, and complex, without the need for programming experience.
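For reference, the base model to which the seven skew-t return distributions are attached is the log-linear Realized GARCH(1,1) of Hansen, Huang and Shek (2012):

```latex
% Log-linear Realized GARCH(1,1): return, volatility, and measurement
% equations, with x_t a realized measure of volatility.
\[
\begin{aligned}
  r_t      &= \sqrt{h_t}\, z_t, \qquad z_t \sim \mathrm{ST}(0, 1;\, \eta, \lambda),\\
  \log h_t &= \omega + \beta \log h_{t-1} + \gamma \log x_{t-1},\\
  \log x_t &= \xi + \varphi \log h_t + \tau(z_t) + u_t,
\end{aligned}
\]
% where \tau(z) = \tau_1 z + \tau_2 (z^2 - 1) captures the leverage effect
% and u_t \sim N(0, \sigma_u^2).
```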
Open Access Article
Comparisons Between Frequency Distributions Based on Gini’s Approach: Principal Component Analysis Addressed to Time Series
by Pierpaolo Angelini
Econometrics 2025, 13(3), 32; https://doi.org/10.3390/econometrics13030032 - 13 Aug 2025
Abstract
In this paper, time series of length T are seen as frequency distributions. Each distribution is defined with respect to a statistical variable having T observed values. A methodological system based on Gini's approach is put forward, so the statistical model through which time series are handled is a frequency distribution studied inside a linear system. In addition to the starting frequency distributions that are observed, other frequency distributions are treated. Thus, marginal distributions based on the notion of proportionality are introduced together with joint distributions. Both distributions are statistical models. A fundamental invariance property related to marginal distributions is made explicit in this research work, so one can focus on collections of marginal frequency distributions, identifying multiple frequency distributions. For this reason, the latter are studied via a tensor. As frequency distributions are practical realizations of nonparametric probability distributions over ℝ, one passes from frequency distributions to discrete random variables. In this paper, a mathematical model that generates time series is put forward. It is a stochastic process based on subjective previsions of random variables. A subdivision of the exchangeability of variables of a statistical nature is shown, so a reinterpretation of principal component analysis based on the notion of proportionality also characterizes this research work.
Open Access Article
A Statistical Characterization of Median-Based Inequality Measures
by Charles M. Beach and Russell Davidson
Econometrics 2025, 13(3), 31; https://doi.org/10.3390/econometrics13030031 - 9 Aug 2025
Abstract
For income distributions divided into middle, lower, and higher regions based on scalar median cut-offs, this paper establishes the asymptotic distribution properties, including explicit, empirically applicable variance formulas and hence standard errors, of sample estimates of the proportion of the population within each group, each group's share of total income, and the groups' mean incomes. It then applies these results to relative mean income ratios, various polarization measures, and decile-mean income ratios. Since the derived formulas are not distribution-free, the study advises using a density estimation technique proposed by Comte and Genon-Catalot. A shrinking middle-income group with declining relative incomes and marked upper-tail polarization among men's incomes are all found to be highly statistically significant.
Open Access Article
Simple Approximations and Interpretation of Pareto Index and Gini Coefficient Using Mean Absolute Deviations and Quantile Functions
by Eugene Pinsky and Qifu Wen
Econometrics 2025, 13(3), 30; https://doi.org/10.3390/econometrics13030030 - 8 Aug 2025
Abstract
The Pareto distribution has been widely used to model income distribution and inequality. The tail index and the Gini index are typically computed by iteration using Maximum Likelihood and are usually interpreted in terms of the Lorenz curve. We derive an alternative method by considering a truncated Pareto distribution and deriving a simple closed-form approximation for the tail index and the Gini coefficient in terms of the mean absolute deviation and weighted quartile differences. The obtained expressions can be used for any Pareto distribution, even without a finite mean or variance. These expressions are resistant to outliers and have a simple geometric and “economic” interpretation in terms of the quantile function and quartiles. Extensive simulations demonstrate that the proposed approximate values for the tail index and the Gini coefficient are within a few percent relative error of the exact values, even for a moderate number of data points. Our paper offers practical and computationally simple methods to analyze a class of models with Pareto distributions. The proposed methodology can be extended to many other distributions used in econometrics and related fields.
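A short sketch contrasting the maximum-likelihood benchmark with the closed-form Gini it implies (for a Pareto tail index α > 1, the exact Gini coefficient is 1/(2α − 1)); the quartiles and mean absolute deviation computed at the end are the ingredients of the paper's approximations, whose exact weighting formula is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a classical Pareto sample with known scale xm and tail index alpha.
alpha_true, xm, n = 2.5, 1.0, 5000
x = xm * (1 + rng.pareto(alpha_true, size=n))

# Maximum-likelihood tail index (scale xm treated as known).
alpha_ml = n / np.sum(np.log(x / xm))

# Exact Gini for a Pareto distribution with tail index alpha > 1.
gini = 1.0 / (2.0 * alpha_ml - 1.0)

# Quartiles and mean absolute deviation: the outlier-resistant ingredients
# of the paper's closed-form approximations.
q1, q2, q3 = np.quantile(x, [0.25, 0.50, 0.75])
mad = np.mean(np.abs(x - np.median(x)))
print(f"alpha_ML={alpha_ml:.3f}, Gini={gini:.3f}, "
      f"quartiles=({q1:.3f}, {q2:.3f}, {q3:.3f}), MAD={mad:.3f}")
```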
Open Access Article
Beyond GDP: COVID-19’s Effects on Macroeconomic Efficiency and Productivity Dynamics in OECD Countries
by Ümit Sağlam
Econometrics 2025, 13(3), 29; https://doi.org/10.3390/econometrics13030029 - 4 Aug 2025
Cited by 1
Abstract
The COVID-19 pandemic triggered unprecedented economic disruptions, raising critical questions about the resilience and adaptability of macroeconomic productivity across countries. This study examines the impact of COVID-19 on macroeconomic efficiency and productivity dynamics in 37 OECD countries using quarterly data from 2018Q1 to 2024Q4. By employing a Slack-Based Measure Data Envelopment Analysis (SBM-DEA) and the Malmquist Productivity Index (MPI), we decompose total factor productivity (TFP) into efficiency change (EC) and technological change (TC) across three periods: pre-pandemic, during-pandemic, and post-pandemic. Our framework incorporates both desirable (GDP) and undesirable outputs (inflation, unemployment, housing price inflation, and interest rate distortions), offering a multidimensional view of macroeconomic efficiency. Results show broad but uneven productivity gains, with technological progress proving more resilient than efficiency during the pandemic. Post-COVID recovery trajectories diverged, reflecting differences in structural adaptability and innovation capacity. Regression analysis reveals that stringent lockdowns in 2020 were associated with lower productivity in 2023–2024, while more adaptive policies in 2021 supported long-term technological gains. These findings highlight the importance of aligning crisis response with forward-looking economic strategies and demonstrate the value of DEA-based methods for evaluating macroeconomic performance beyond GDP.
(This article belongs to the Special Issue Advancements in Macroeconometric Modeling and Time Series Analysis)
Open Access Article
Analyzing the Impact of Carbon Mitigation on the Eurozone’s Trade Dynamics with the US and China
by Pathairat Pastpipatkul and Terdthiti Chitkasame
Econometrics 2025, 13(3), 28; https://doi.org/10.3390/econometrics13030028 - 29 Jul 2025
Abstract
This study focuses on the transmission of carbon pricing mechanisms in shaping trade dynamics between the Eurozone and key partners: the USA and China. Using Bayesian variable selection methods and a Time-Varying Structural Vector Autoregression (TV-SVAR) model, the research identifies the key variables impacting EU carbon emissions over time. The results reveal that manufactured products from the US have a diminishing positive impact on EU carbon emissions, suggesting potential exemption from future regulations. In contrast, manufactured goods from the US and petroleum products from China are expected to increase emissions, indicating a need for stricter trade policies. These findings provide strategic insights for policymakers aiming to balance trade and environmental objectives.
Open Access Article
Pseudo-Panel Decomposition of the Blinder–Oaxaca Gender Wage Gap
by Jhon James Mora and Diana Yaneth Herrera
Econometrics 2025, 13(3), 27; https://doi.org/10.3390/econometrics13030027 - 19 Jul 2025
Cited by 1
Abstract
This article introduces a novel approach to decomposing the Blinder–Oaxaca gender wage gap using pseudo-panel data. In many developing countries, panel data are not available; however, understanding the evolution of the gender wage gap over time requires tracking individuals longitudinally. When individuals change across time periods, estimators tend to be inconsistent and inefficient. To address this issue, and building upon the traditional Blinder–Oaxaca methodology, we propose an alternative procedure that follows cohorts over time rather than individuals. This approach enables the estimation of both the explained and unexplained components of the wage gap, the "endowment effect" and the "remuneration effect", along with their respective standard errors, even in the absence of true panel data. We apply this methodology to the case of Colombia, finding a gender wage gap of approximately 15% in favor of male cohorts. Without controls, this gap comprises a −5.6% explained component and a 20% unexplained component. When we control for informality, firm size, and sector, the gap comprises a −3.5% explained component and an 18.7% unexplained component.
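The two-fold decomposition at the heart of the method is short enough to state in code. The sketch below uses hypothetical individual-level data and male coefficients as the reference; in the paper, the rows would instead be cohort-level pseudo-panel means:

```python
import numpy as np

rng = np.random.default_rng(4)

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Hypothetical male/female samples: log wage on a constant and schooling.
n = 1000
Xm = np.column_stack([np.ones(n), rng.normal(12.0, 2.0, n)])
Xf = np.column_stack([np.ones(n), rng.normal(11.5, 2.0, n)])
ym = Xm @ np.array([0.5, 0.10]) + rng.normal(0, 0.3, n)
yf = Xf @ np.array([0.4, 0.09]) + rng.normal(0, 0.3, n)

bm, bf = ols(Xm, ym), ols(Xf, yf)
xbar_m, xbar_f = Xm.mean(axis=0), Xf.mean(axis=0)

# Blinder-Oaxaca: gap = endowment (explained) + remuneration (unexplained),
# an exact identity when both regressions include an intercept.
gap = ym.mean() - yf.mean()
endowment = (xbar_m - xbar_f) @ bm
remuneration = xbar_f @ (bm - bf)
print(f"gap={gap:.4f} = endowment {endowment:.4f}"
      f" + remuneration {remuneration:.4f}")
```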
Open Access Article
Daily Emissions of CO2 in the World: A Fractional Integration Approach
by Luis Alberiko Gil-Alana and Carlos Poza
Econometrics 2025, 13(3), 26; https://doi.org/10.3390/econometrics13030026 - 17 Jul 2025
Abstract
In this article, daily CO2 emissions for the years 2019–2022 are examined using fractional integration for Brazil, China, EU-27 (and the UK), India, and the USA. According to the findings, all series exhibit long memory mean-reversion tendencies, with orders of integration ranging between 0.22 in the case of India (with white noise errors) and 0.70 for Brazil (under autocorrelated disturbances). Nevertheless, the differencing parameter estimates are all considerably below 1, which supports the theory of mean reversion and transient shocks. These results suggest the need for a greater intensification of green policies complemented with economic structural reforms to achieve the zero-emissions target by 2050.
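Orders of integration like these are estimated from the long-memory behaviour of the series. The sketch below uses the semiparametric Geweke–Porter-Hudak (GPH) log-periodogram regression, a simpler relative of the parametric Whittle-type methods typically used in this literature, applied to a simulated ARFIMA(0, d, 0) series:

```python
import numpy as np

def gph_d(x, power=0.5):
    """Geweke-Porter-Hudak estimate of the fractional integration order d,
    from a log-periodogram regression over the first m = n**power
    Fourier frequencies."""
    n = len(x)
    m = int(n ** power)
    lam = 2 * np.pi * np.arange(1, m + 1) / n          # Fourier frequencies
    dft = np.fft.fft(x - np.mean(x))[1 : m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)             # periodogram
    # log I(lam_j) = const + d * [-log(4 sin^2(lam_j / 2))] + error
    reg = -np.log(4 * np.sin(lam / 2) ** 2)
    X = np.column_stack([np.ones(m), reg])
    return np.linalg.lstsq(X, np.log(I), rcond=None)[0][1]

# Simulate ARFIMA(0, d, 0) with d = 0.3 via the truncated MA(inf) expansion:
# psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k.
rng = np.random.default_rng(5)
n, d = 2000, 0.3
psi = np.ones(n)
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + d) / k
x = np.convolve(rng.normal(size=n), psi)[:n]
print(f"GPH estimate of d: {gph_d(x):.3f}")  # should be roughly 0.3
```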
Open Access Article
The Long-Run Impact of Changes in Prescription Drug Sales on Mortality and Hospital Utilization in Belgium, 1998–2019
by Frank R. Lichtenberg
Econometrics 2025, 13(3), 25; https://doi.org/10.3390/econometrics13030025 - 23 Jun 2025
Abstract
Objectives: We investigate the long-run impact of changes in prescription drug sales on mortality and hospital utilization in Belgium during the first two decades of the 21st century. Methods: We analyze the correlation across diseases between changes in the drugs used to treat the disease and changes in mortality or hospital utilization from that disease. The measure of the change in prescription drug sales we use is the long-run (1998–2018 or 2000–2019) change in the fraction of post-1999 drugs sold. A post-1999 drug is a drug that was not sold during 1989–1999. Results: The 1998–2018 increase in the fraction of post-1999 drugs sold is estimated to have reduced the number of years of life lost before ages 85, 75, and 65 in 2018 by about 438 thousand (31%), 225 thousand (31%), and 114 thousand (32%), respectively. The 1995–2014 increase in the fraction of post-1999 drugs sold is estimated to have reduced the number of hospital days in 2019 by 2.66 million (20%). Conclusions: Even if we ignore the reduction in hospital utilization attributable to changes in pharmaceutical consumption, a conservative estimate of the 2018 cost per life-year before age 85 gained is EUR 6824. We estimate that previous changes in pharmaceutical consumption reduced 2019 expenditure on inpatient curative and rehabilitative care by EUR 3.55 billion, which is higher than the 2018 expenditure on drugs that were authorized during the period 1998–2018: EUR 2.99 billion.
Open Access Article
The Effect of Macroeconomic Announcements on U.S. Treasury Markets: An Autometric General-to-Specific Analysis of the Greenspan Era
by James J. Forest
Econometrics 2025, 13(3), 24; https://doi.org/10.3390/econometrics13030024 - 21 Jun 2025
Abstract
This research studies the impact of macroeconomic announcement surprises on daily U.S. Treasury excess returns during the heart of Alan Greenspan's tenure as Federal Reserve Chair, addressing the possible limitations of standard static regression (SSR) models, which may suffer from omitted variable bias, parameter instability, and poor mis-specification diagnostics. To complement the SSR framework, an automated general-to-specific (Gets) modeling approach, enhanced with modern indicator saturation methods for robustness, is applied to improve empirical model discovery and mitigate potential biases. By progressively reducing an initially broad set of candidate variables, the Gets methodology steers the model toward congruence, discards unstable parameters, and seeks to limit information loss while preserving precision. The findings herein suggest that U.S. Treasury market responses to macroeconomic news shocks exhibited stability for a core set of announcements that reliably influenced excess returns. In contrast to computationally costless standard static models, the automated Gets-based approach enhances parameter precision and provides a more adaptive structure for identifying relevant predictors. These results demonstrate the potential value of incorporating interpretable automated model selection techniques alongside traditional SSR and Markov switching approaches to improve empirical insights into macroeconomic announcement effects on financial markets.
(This article belongs to the Special Issue Advancements in Macroeconometric Modeling and Time Series Analysis)
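A toy rendering of the reduction step, to make the mechanics concrete: start from the general model and repeatedly drop the least significant candidate until everything remaining clears the significance threshold. This is only the skeleton; Autometrics adds multi-path tree search, congruence diagnostics, and impulse/step indicator saturation. Data and variable names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

def gets_select(y, X, names, alpha=0.05):
    """Backward single-path general-to-specific reduction: drop the least
    significant regressor until all remaining p-values are below alpha."""
    keep = list(range(X.shape[1]))
    while True:
        res = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
        pvals = np.asarray(res.pvalues)[1:]      # skip the constant
        worst = int(np.argmax(pvals))
        if pvals[worst] <= alpha or len(keep) == 1:
            return [names[i] for i in keep], res
        del keep[worst]

# Hypothetical announcement-surprise regressors: only the first two
# actually move excess returns.
rng = np.random.default_rng(6)
n, k = 750, 6
X = rng.normal(size=(n, k))
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n)
names = [f"surprise_{j}" for j in range(k)]

selected, res = gets_select(y, X, names)
print("retained regressors:", selected)
```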
Special Issues
Special Issue in Econometrics: Advancements in Macroeconometric Modeling and Time Series Analysis
Guest Editor: Julien Chevallier; Deadline: 31 December 2025
Special Issue in Econometrics: Innovations in Bayesian Econometrics: Theory, Techniques, and Economic Analysis
Guest Editor: Deborah Gefang; Deadline: 31 May 2026
Special Issue in Econometrics: Labor Market Dynamics and Wage Inequality: Econometric Models of Income Distribution
Guest Editor: Marc K. Chan; Deadline: 25 June 2026