
Stats

Stats is an international, peer-reviewed, open access journal on statistical science published quarterly online by MDPI.
The journal focuses on methodological and theoretical papers in statistics, probability, stochastic processes and innovative applications of statistics in all scientific disciplines including biological and biomedical sciences, medicine, business, economics and social sciences, physics, data science and engineering.
Quartile Ranking JCR - Q3 (Statistics and Probability | Mathematics, Interdisciplinary Applications)

All Articles (511)

The optimal allocation of funds within a portfolio is a central research focus in finance. Conventional mean-variance models often concentrate a significant portion of funds in a limited number of high-risk assets. To promote diversification, Shannon entropy is widely applied. This paper develops a portfolio optimization model that incorporates Shannon entropy alongside a risk diversification principle aimed at minimizing the maximum individual asset risk. The study combines empirical analysis with numerical simulations. First, empirical data are used to assess the theoretical model’s effectiveness and practicality. Second, numerical simulations are conducted to analyze portfolio performance under extreme market scenarios. Specifically, the numerical results indicate that for fixed values of the risk balance coefficient and minimum expected return, the optimal portfolios and their return distributions are similar when the risk is measured by standard deviation, absolute deviation, or standard lower semi-deviation. This suggests that the model exhibits robustness to variations in the risk function, providing a relatively stable investment strategy.
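The general idea of entropy-based diversification can be sketched in a few lines. The following is a minimal illustration, not the paper's model: the variance objective, the entropy penalty weight `lam`, and the return floor `min_return` are all assumptions chosen for the example.

```python
import numpy as np
from scipy.optimize import minimize

def entropy_portfolio(mu, cov, lam=1.0, min_return=0.02):
    """Hypothetical entropy-regularized mean-variance portfolio (illustrative only)."""
    n = len(mu)

    def objective(w):
        variance = w @ cov @ w
        # Shannon entropy of the weight vector; higher entropy = more diversified
        entropy = -np.sum(w * np.log(np.clip(w, 1e-12, None)))
        return variance - lam * entropy  # penalize concentrated portfolios

    cons = [
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},        # fully invested
        {"type": "ineq", "fun": lambda w: w @ mu - min_return},  # expected-return floor
    ]
    bounds = [(0.0, 1.0)] * n  # long-only
    w0 = np.full(n, 1.0 / n)
    res = minimize(objective, w0, bounds=bounds, constraints=cons)
    return res.x

# Toy data: three assets with independent risks (values are assumptions)
mu = np.array([0.03, 0.05, 0.01])
cov = np.diag([0.04, 0.09, 0.01])
w = entropy_portfolio(mu, cov)
```

With `lam > 0` the entropy term pulls the solution away from corner allocations, which is the diversification effect the abstract describes.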

11 December 2025

Box plots of return rates for optimal portfolios with μ = 4.5%.

This article provides a critical assessment of the Birnbaum–Saunders (BS) distribution, a pivotal statistical model for lifetime data analysis and reliability estimation, particularly in fatigue contexts. The model has been successfully applied across diverse fields, including biological mortality, environmental sciences, medicine, and risk models. Moving beyond a basic scientometric review, this study synthesizes findings from 353 peer-reviewed articles, selected using PRISMA 2020 protocols, to trace the evolution of estimation techniques, regression methods, and model extensions. Key findings reveal robust theoretical advances, such as Bayesian methods and bivariate/spatial adaptations, alongside practical progress in influence diagnostics and software development. The analysis highlights key research gaps, including the critical need for scalable, auditable software and structured reviews, and notes a peak in scholarly activity around 2019, driven in large part by the Brazil–Chile research alliance. This work offers a consolidated view of current BS model implementations and outlines clear future directions for enhancing their theoretical robustness and practical utility.
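For readers unfamiliar with the model under review: the BS distribution has a well-known stochastic representation in terms of a standard normal variable, with median exactly equal to the scale parameter β. A quick simulation check of that property (the parameter values below are illustrative, not from the article):

```python
import numpy as np

# If Z ~ N(0, 1), then T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2
# follows a Birnbaum-Saunders BS(alpha, beta) distribution.
rng = np.random.default_rng(42)
alpha, beta = 0.5, 2.0           # illustrative shape and scale
z = rng.standard_normal(200_000)
a = alpha * z / 2.0
t = beta * (a + np.sqrt(a**2 + 1.0)) ** 2  # BS(alpha, beta) samples

# The median of BS(alpha, beta) is beta (z = 0 maps to t = beta)
sample_median = np.median(t)
```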

13 December 2025

Background: Health disparities research increasingly relies on complex survey data to understand survival differences between population subgroups. While Peters–Belson decomposition provides a principled framework for distinguishing disparities explained by measured covariates from unexplained residual differences, traditional approaches face challenges with complex data patterns and model validation for counterfactual estimation. Objective: To develop validated Peters–Belson decomposition methods for survival analysis that integrate ensemble machine learning with transfer learning while ensuring logical validity of counterfactual estimates through comprehensive model validation. Methods: We extend the traditional Peters–Belson framework through ensemble machine learning that combines Cox proportional hazards models, cross-validated random survival forests, and regularized gradient boosting approaches. Our framework incorporates a transfer learning component via principal component analysis (PCA) to discover shared latent factors between majority and minority groups. We note that this “transfer learning” differs from the standard machine learning definition (pre-trained models or domain adaptation); here, we use the term in its statistical sense to describe the transfer of covariate structure information from the pooled population to identify group-level latent factors. We develop a comprehensive validation framework that ensures Peters–Belson logical bounds compliance, preventing mathematical violations in counterfactual estimates. The approach is evaluated through simulation studies across five realistic health disparity scenarios using stratified complex survey designs. Results: Simulation studies demonstrate that validated ensemble methods achieve superior performance compared to individual models (proportion explained: 0.352 vs. 0.310 for individual Cox, 0.325 for individual random forests), with the validation framework reducing logical violations from 34.7% to 2.1% of cases. Transfer learning provides an additional 16.1% average improvement in the explanation of unexplained disparity when significant unmeasured confounding exists, with a 90.1% overall validation success rate. The validation framework ensures that explanation proportions remain within realistic bounds while maintaining computational efficiency, with a 31% overhead for validation procedures. Conclusions: Validated ensemble machine learning provides substantial advantages for Peters–Belson decomposition when combined with proper model validation. Transfer learning offers conditional benefits for capturing unmeasured group-level factors while preventing mathematical violations common in standard approaches. The framework demonstrates that realistic health disparity patterns show 25–35% of differences explained by measured factors, providing actionable targets for reducing health inequities.
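The core Peters–Belson logic (fit an outcome model in the majority group, predict counterfactual outcomes for the minority group, and split the gap into explained and unexplained parts) can be sketched on a simple linear outcome. This is far simpler than the paper's survival setting with ensemble learners; the simulated data and effect sizes below are assumptions made purely for illustration.

```python
import numpy as np

# Simulate two groups: a covariate shift (explained) plus a residual
# intercept deficit of 0.5 (unexplained). All values are illustrative.
rng = np.random.default_rng(0)
n = 2000
x_maj = rng.normal(1.0, 1.0, n)                  # covariate, majority group
x_min = rng.normal(0.4, 1.0, n)                  # shifted covariate, minority group
y_maj = 2.0 + 1.5 * x_maj + rng.normal(0, 1, n)
y_min = 1.5 + 1.5 * x_min + rng.normal(0, 1, n)  # 0.5 lower intercept: unexplained

# Step 1: fit the outcome model in the majority group only
beta = np.polyfit(x_maj, y_maj, 1)   # [slope, intercept]

# Step 2: counterfactual prediction, minority covariates under the majority model
yhat_min = np.polyval(beta, x_min)

# Step 3: decompose the observed gap
total_gap = y_maj.mean() - y_min.mean()
explained = y_maj.mean() - yhat_min.mean()    # due to covariate differences
unexplained = yhat_min.mean() - y_min.mean()  # residual disparity
prop_explained = explained / total_gap
```

The two components sum to the total gap by construction; the "logical bounds" validation the abstract describes addresses cases where flexible learners push `prop_explained` outside plausible ranges.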

10 December 2025


This paper presents an alternative perspective on the reduced chi-square as a measure of goodness of fit. The reduced chi-square is given by the ratio of the fitting error to the propagation error, a universal relationship that holds for any linearly parameterized fitting model but not, in general, for a nonlinearly parameterized one. We begin by proving this for the traditional examples of the one-parameter fitting of a constant and the two-parameter fitting of a linear model, and then for the general case of any linearly multi-parameterized model. We also show that this characterization is not generally true for nonlinear parameterizations. Finally, we illustrate these theoretical developments with an application to real data on plasma protons in the heliosphere.
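The one-parameter case is easy to check numerically. In the sketch below (the weighted-mean definitions of the fitting and propagation errors are standard conventions, assumed here rather than taken from the paper), the squared ratio of fitting error to propagation error reproduces the reduced chi-square:

```python
import numpy as np

# Fit a constant to heteroscedastic data and compare the squared
# fitting-to-propagation error ratio with the reduced chi-square.
rng = np.random.default_rng(1)
sigma = rng.uniform(0.5, 2.0, 50)        # per-point measurement errors
y = 10.0 + rng.normal(0.0, sigma)        # data scattered about a constant

w = 1.0 / sigma**2
c_hat = np.sum(w * y) / np.sum(w)        # weighted-mean estimate of the constant

# Reduced chi-square with N - 1 degrees of freedom
chi2_red = np.sum(w * (y - c_hat) ** 2) / (len(y) - 1)

# Propagation error: from the sigma_i alone, ignoring the residual scatter
err_prop = 1.0 / np.sqrt(np.sum(w))
# Fitting error: from the weighted residual scatter about the fit
err_fit = np.sqrt(np.sum(w * (y - c_hat) ** 2) / ((len(y) - 1) * np.sum(w)))

ratio_sq = (err_fit / err_prop) ** 2     # equals chi2_red for this linear model
```

The equality is algebraic here: both expressions reduce to the same weighted residual sum over N − 1, which is the identity the paper generalizes to arbitrary linearly parameterized models.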

1 December 2025

Stats - ISSN 2571-905X