Search Results (3,629)

Search Parameters:
Keywords = likelihood estimation

32 pages, 3677 KB  
Article
Efficient Modeling of the Energy Sector Using a New Bivariate Copula
by Jumanah Ahmed Darwish and Muhammad Qaiser Shahbaz
Mathematics 2026, 14(3), 540; https://doi.org/10.3390/math14030540 - 2 Feb 2026
Abstract
Copulas are a useful tool for generating bivariate distributions from univariate marginals, and for generating whole bivariate families of distributions. In this paper, a new copula is proposed and some of its useful properties are studied, including the conditional copula. Various dependence measures for the proposed copula are obtained, and a multivariate extension is suggested. The proposed copula is then used to obtain a new bivariate family of distributions, and several properties of this family are studied, including conditional distributions, joint and conditional moments, joint reliability and hazard rate functions, and parameter estimation. A specific member of the proposed family is also discussed, and the proposed bivariate distribution is used to model energy sector data from the Kingdom of Saudi Arabia.
(This article belongs to the Special Issue Advances in Statistical Methods with Applications)
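The mechanism this abstract relies on, joining two univariate marginals into a bivariate distribution through a copula (Sklar's theorem), can be sketched with a well-known copula. The Clayton copula below is purely a stand-in for illustration; the article's proposed copula is not specified in this listing, and the marginals chosen here are arbitrary.

```python
import numpy as np
from scipy import stats

def clayton_sample(n, theta, rng):
    """Sample (u, v) from a Clayton copula by conditional inversion.

    Stand-in example only: this is the classical Clayton copula,
    not the new copula proposed in the article.
    """
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)  # auxiliary uniform fed through the conditional CDF
    v = (u ** -theta * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u, v

rng = np.random.default_rng(0)
u, v = clayton_sample(10_000, theta=2.0, rng=rng)

# Push the uniforms through any marginal quantile functions to obtain a
# bivariate distribution with those marginals (Sklar's theorem).
x = stats.gamma.ppf(u, a=2.0)       # e.g. a gamma marginal
y = stats.lognorm.ppf(v, s=0.5)     # e.g. a lognormal marginal

# Kendall's tau is invariant under the monotone marginal transforms;
# for Clayton, tau = theta / (theta + 2) = 0.5 when theta = 2.
tau, _ = stats.kendalltau(x, y)
```

Any other copula slots into the same two-step recipe: sample dependent uniforms, then apply the marginal quantile functions.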
23 pages, 8113 KB  
Article
Estimating H I Mass Fraction in Galaxies with Bayesian Neural Networks
by Joelson Sartori, Cristian G. Bernal and Carlos Frajuca
Galaxies 2026, 14(1), 10; https://doi.org/10.3390/galaxies14010010 - 2 Feb 2026
Abstract
Neutral atomic hydrogen (H I) regulates galaxy growth and quenching, but direct 21 cm measurements remain observationally expensive and affected by selection biases. We develop Bayesian neural networks (BNNs)—a type of neural model that returns both a prediction and an associated uncertainty—to infer the H I mass, log10(MHI), from widely available optical properties (e.g., stellar mass, apparent magnitudes, and diagnostic colors) and simple structural parameters. For continuity with the photometric gas fraction (PGF) literature, we also report the gas-to-stellar-mass ratio, log10(G/S), where explicitly noted. Our dataset is a reproducible cross-match of SDSS DR12, the MPA–JHU value-added catalogs, and the 100% ALFALFA release, resulting in 31,501 galaxies after quality controls. To ensure fair evaluation, we adopt fixed train/validation/test partitions and an additional sky-holdout region to probe domain shift, i.e., how well the model extrapolates to sky regions that were not used for training. We also audit features to avoid information leakage and benchmark the BNNs against deterministic models, including a feed-forward neural network baseline and gradient-boosted trees (GBTs, a standard tree-based ensemble method in machine learning). Performance is assessed using mean absolute error (MAE), root-mean-square error (RMSE), and probabilistic diagnostics such as the negative log-likelihood (NLL, a loss that rewards models that assign high probability to the observed H I masses), reliability diagrams (plots comparing predicted probabilities to observed frequencies), and empirical 68%/95% coverage. The Bayesian models achieve point accuracy comparable to the deterministic baselines while additionally providing calibrated prediction intervals that adapt to stellar mass, surface density, and color. This enables galaxy-by-galaxy uncertainty estimation and prioritization for 21 cm follow-up that explicitly accounts for predicted uncertainties (“risk-aware” target selection). Overall, the results demonstrate that uncertainty-aware machine-learning methods offer a scalable and reproducible route to inferring galactic H I content from widely available optical data.
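The probabilistic diagnostics named in this abstract (NLL and empirical 68%/95% coverage) apply to any model that emits a predictive mean and standard deviation. The NumPy sketch below computes them on synthetic, deliberately well-calibrated values; it is not the article's data or model, and the numbers are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-ins for a probabilistic regressor's outputs:
# predicted mean, predicted std, and the observed targets.
n = 5_000
mu = rng.normal(9.5, 0.5, n)               # predicted log10(M_HI), arbitrary scale
sigma = np.full(n, 0.3)                    # predicted 1-sigma uncertainty
y = mu + sigma * rng.standard_normal(n)    # targets drawn consistently (calibrated case)

# Mean negative log-likelihood under the Gaussian predictive density:
# lower is better; rewards assigning high probability to the observed values.
nll = -stats.norm.logpdf(y, loc=mu, scale=sigma).mean()

# Empirical coverage of the central 68% and 95% prediction intervals.
z68, z95 = stats.norm.ppf(0.84), stats.norm.ppf(0.975)
cov68 = np.mean(np.abs(y - mu) <= z68 * sigma)
cov95 = np.mean(np.abs(y - mu) <= z95 * sigma)
```

For a calibrated model, cov68 and cov95 land near 0.68 and 0.95; systematic departures signal over- or under-confident predictive intervals.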
18 pages, 1906 KB  
Article
Assessment of Community Risk from Seismic-Induced Damage to Hazardous Materials Storage Tanks in Marine Ports
by Mohamad Nassar, Fatiha Mouri and Ahmad Abo El Ezz
Infrastructures 2026, 11(2), 49; https://doi.org/10.3390/infrastructures11020049 - 2 Feb 2026
Abstract
Marine ports located in regions of moderate seismicity can face high Natech (natural hazard-triggered technological) risk because large inventories of hazardous materials are stored near dense urban populations. This study proposes and applies a Natech risk framework to a representative port on the Saint Lawrence River in Quebec, Canada. Site-specific peak ground accelerations (PGA) are first estimated for 12 earthquake scenarios using regional ground motion prediction equations adjusted for local site conditions. These hazard levels are combined with a damage probability matrix to estimate Hazardous Release Likelihood Index (HRLi) scores for atmospheric steel storage tanks. Offsite consequences are then evaluated to obtain Maximum Distances of Effect (MDEs) for different types of hazardous materials. MDE footprints are intersected with block-level demographic data and complemented by a domino-effect analysis based on inter-tank spacing, yielding a tank-level Natech Risk Index NRIi,s for each storage tank (i) and seismic scenario (s). These values are then averaged over all tanks to obtain a scenario-level mean Natech Risk Index (NRI¯) for each tank substance. Regression equations relating NRI¯ to PGA are provided as a practical tool for defining critical intensity thresholds for seismic Natech risk management in marine ports.
37 pages, 2139 KB  
Article
Determining the Most Suitable Distribution and Estimation Method for Extremes in Financial Data with Different Volatility Levels
by Thusang J. Buthelezi and Sandile C. Shongwe
J. Risk Financial Manag. 2026, 19(2), 96; https://doi.org/10.3390/jrfm19020096 - 2 Feb 2026
Abstract
In finance, accurately modelling the tail behaviour of extreme log returns is critical for understanding and mitigating risks across diverse asset classes. This research employs extreme value theory to identify the most suitable probability distributions (i.e., generalized extreme value (GEV), generalized logistic (GLO), Gumbel (GUM), generalized Pareto (GP), and reverse Gumbel (REV)) and estimation methods (least squares (LS), weighted least squares (WLS), maximum likelihood (ML), L-moments (LM), and relative least squares (RLS)) for modelling the tail behaviour of log returns from two financial datasets, each representing a distinct asset class with high (Ethereum, a digital asset class) and low (South African government bonds, a fixed-income asset class) volatility levels. The performance of each model and estimation method (25 different possibilities) is evaluated through goodness-of-fit and risk measures as the study aims to determine the optimal approach for each volatility level. Results from ranking different models and estimation methods show that across both asset classes, ML consistently emerges as the top-performing estimation method across all distributions. LM serves as a solid secondary option, while LS occasionally excels under GLO’s weekly minima for low volatility, whereas RLS occasionally surpasses ML in GLO’s monthly minima for high volatility. Finally, WLS uniquely outperforms under GEV and GLO’s monthly minima under low volatility.
(This article belongs to the Section Risk)
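One of the 25 distribution/estimator pairs this abstract ranks, GEV fitted by maximum likelihood, can be sketched with SciPy. The data below are synthetic block extremes, a stand-in for the article's Ethereum and bond series; note that SciPy's `genextreme` shape parameter `c` equals minus the shape parameter in the common GEV convention.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic block maxima standing in for, e.g., weekly log-return extremes
# (illustrative only; not the article's data).
true_shape, true_loc, true_scale = -0.1, 0.02, 0.01
data = stats.genextreme.rvs(true_shape, loc=true_loc, scale=true_scale,
                            size=2_000, random_state=rng)

# Maximum likelihood fit of the three GEV parameters.
c_hat, loc_hat, scale_hat = stats.genextreme.fit(data)

# A simple risk measure from the fitted model: the 99% quantile
# (a one-in-a-hundred-blocks return level).
var_99 = stats.genextreme.ppf(0.99, c_hat, loc=loc_hat, scale=scale_hat)
```

The same `fit`/`ppf` pattern works for the other SciPy-supported candidates (e.g. `gumbel_r`, `genpareto`), which is how a 25-cell comparison like the article's can be assembled.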
26 pages, 872 KB  
Article
New Modified Generalized Inverted Exponential Distribution and Its Applications
by Zakeia A. Al-Saiary and Hana H. Al-Jammaz
Entropy 2026, 28(2), 161; https://doi.org/10.3390/e28020161 - 31 Jan 2026
Abstract
In this paper, a three-parameter statistical model called the New Modified Generalized Inverted Exponential (MGIE) distribution is proposed. Several statistical characteristics of the MGIE distribution are obtained, including the quantile function, median, moments, mode, mean deviation, harmonic mean, reliability, hazard and odds functions, and Rényi entropy. Moreover, the parameters are estimated using the maximum likelihood method. A Monte Carlo simulation study is performed to assess the behavior of the estimators. Finally, three real data sets are analyzed to demonstrate the usefulness of the proposed distribution.
(This article belongs to the Special Issue Statistical Inference: Theory and Methods)
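The maximum likelihood step used throughout entries like this one can be sketched on the baseline the MGIE family generalizes. The code fits the plain inverted (inverse) exponential distribution, density f(x) = (λ/x²)e^(−λ/x), by numerically minimizing the negative log-likelihood; the MGIE density itself is not reproduced in this listing, so this is an illustrative sketch only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

# If Y ~ Exp(rate = lam), then X = 1/Y is inverted exponential with
# density f(x) = (lam / x**2) * exp(-lam / x).
lam_true = 2.0
x = 1.0 / rng.exponential(scale=1.0 / lam_true, size=5_000)

def neg_loglik(lam):
    # log f(x) = log(lam) - 2*log(x) - lam/x, summed over the sample
    return -np.sum(np.log(lam) - 2.0 * np.log(x) - lam / x)

res = minimize_scalar(neg_loglik, bounds=(1e-6, 50.0), method="bounded")
lam_hat = res.x

# Sanity check: setting the score to zero gives the closed form
# lam = n / sum(1/x) for this simple baseline.
lam_closed = len(x) / np.sum(1.0 / x)
```

For richer families like the MGIE, the closed form disappears but the numerical-minimization workflow stays the same, just with more parameters.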
47 pages, 726 KB  
Article
Disentangling Signal from Noise: A Bayesian Hybrid Framework for Variance Decomposition in Complex Surveys with Post-Hoc Domains
by JoonHo Lee and Alison Hooper
Mathematics 2026, 14(3), 512; https://doi.org/10.3390/math14030512 - 31 Jan 2026
Abstract
Quantifying geographic variation is crucial for policy evaluation, yet researchers often rely on complex national surveys not designed for sub-national inference. This design-analysis mismatch creates two challenges when decomposing variance across domains like states: informative sampling confounds substantive heterogeneity with design artifacts, and finite-sample variance inflation conflates sampling noise with signal. We introduce the Bayesian Hybrid Framework that reconciles design-based and model-based inference through Bayesian Pseudo-Likelihood for design consistency and a hybrid generalized linear mixed model that simultaneously estimates substantive domain effects and nuisance design effects (strata, PSUs). We propose a Dual Estimand Framework distinguishing between Descriptive (total observed variance) and Policy (substantive variance net of design) estimands, with explicit de-attenuation to correct finite-sample inflation. Simulations based on the 2019 National Survey of Early Care and Education demonstrate negligible bias and superior efficiency compared to standard alternatives. Applied to subsidy receipt among home-based child care providers, we find the observed between-state variation (16.7%) reduces to only 5.4% after accounting for design artifacts and sampling noise. This three-fold reduction reveals that local factors, not state policies, drive most heterogeneity, highlighting the necessity of our framework for rigorous geographic variance decomposition in complex surveys. An accompanying R package (version 0.3.0), bhfvar, implements the complete framework.
(This article belongs to the Section D1: Probability and Statistics)
17 pages, 507 KB  
Article
A New Trigonometric-Inspired Probability Distribution: The Weighted Sine Generalized Kumaraswamy Model with Simulation and Applications in Epidemiology and Reliability Engineering
by Murat Genç and Ömer Özbilen
Mathematics 2026, 14(3), 510; https://doi.org/10.3390/math14030510 - 31 Jan 2026
Abstract
The importance of statistical distributions in representing real-world scenarios and aiding in decision-making is widely acknowledged. However, traditional models often face limitations in achieving optimal fits for certain datasets. Motivated by this challenge, this paper introduces a new probability distribution termed the weighted sine generalized Kumaraswamy (WSG-Kumaraswamy) distribution. This model is constructed by integrating the Kumaraswamy baseline distribution with the weighted sine-G family, which incorporates a trigonometric transformation to enhance flexibility without adding extra parameters. Various statistical properties of the WSG-Kumaraswamy distribution, including the quantile function, moments, moment-generating function, and probability-weighted moments, are derived. Maximum likelihood estimation is employed to obtain parameter estimates, and a comprehensive simulation study is performed to assess the finite-sample performance of the estimators, confirming their consistency and reliability. To illustrate the practical advantages of the proposed model, two real-world datasets from epidemiology and reliability engineering are analyzed. Comparative evaluations using goodness-of-fit criteria demonstrate that the WSG-Kumaraswamy distribution provides superior fits compared to established competitors. The results highlight the enhanced adaptability of the model for unit-interval data, positioning it as a valuable tool for statistical modeling in diverse applied fields.
(This article belongs to the Section D1: Probability and Statistics)
21 pages, 615 KB  
Article
A New Hybrid Weibull–Exponentiated Rayleigh Distribution: Theory, Asymmetry Properties, and Applications
by Tolulope Olubunmi Adeniji and Akinwumi Sunday Odeyemi
Symmetry 2026, 18(2), 264; https://doi.org/10.3390/sym18020264 - 31 Jan 2026
Abstract
The choice of probability distribution is strongly data-dependent, as observed in several studies. Given the central role of statistical distributions in predictive analytics, researchers have continued to develop new models that accurately capture underlying data behaviours. This study proposes the Hybrid Weibull–Exponentiated Rayleigh distribution, developed by compounding the Weibull and Exponentiated Rayleigh distributions via the T-X transformation framework. The new three-parameter distribution is formulated to provide a flexible modelling framework capable of handling data exhibiting non-monotone failure rates. The properties of the proposed distribution, such as the cumulative distribution function, probability density function, survival function, hazard function, linear representation, moments, and entropy, are studied. We estimate the parameters of the distribution using the maximum likelihood estimation technique. Furthermore, the impact of the distribution's parameters on its shape, particularly its symmetry properties, is studied. The shape of the distribution varies with its parameter values, enabling it to model diverse data patterns. This flexibility makes it especially useful for describing the presence or absence of symmetry in real-world failure processes. Simulation studies are conducted to assess the behaviour of the estimators under different parameter settings. The proposed distribution is applied to real-world data to demonstrate its performance, with comparative analysis against other well-established models. The results indicate that the proposed distribution outperforms the competing models in goodness-of-fit, as measured by the Akaike Information Criterion and Bayesian Information Criterion, demonstrating its potential as a superior alternative for modelling lifetime data and reliability analysis.
16 pages, 1158 KB  
Article
Optimal α/β Ratio for Biologically Effective Dose-Based Prediction of Radiation-Induced Peritumoral Brain Edema in Meningioma
by Shin-Woong Ko, Yu Deok Won, Byeong Jin Ha, Jin Hwan Cheong, Je Il Ryu, Seung Woo Hong, Kyueng-Whan Min and Myung-Hoon Han
Cancers 2026, 18(3), 448; https://doi.org/10.3390/cancers18030448 - 30 Jan 2026
Abstract
Background: Peritumoral brain edema (PTBE) is the most frequent complication for intracranial meningiomas following radiotherapy, yet no clinically validated biologically effective dose (BED) threshold capable of reliably predicting PTBE has currently been established. Although conventional radiobiological models typically assume an α/β ratio of 2–4 for benign meningiomas, whether these values accurately reflect the dose–response characteristics underlying radiation-induced PTBE remains unclear. Methods: We analyzed 67 intact meningiomas in the convexity, parasagittal, or falcine regions treated with primary linear accelerator (LINAC)-based radiotherapy. The BED values were recalculated using α/β ratios ranging from 2 to 20, and receiver operating characteristic (ROC) analyses were performed to identify the optimal BED thresholds for predicting PTBE. The most informative α/β ratio was defined as the value yielding the highest Youden’s J statistic. Results: The ROC analyses showed that an assumed α/β ratio of 14 provided the highest discriminative accuracy for predicting PTBE in the overall cohort and markedly superior performance in patients younger than 70 years (area under the curve (AUC) 0.945; Youden’s J = 0.871). The optimal BED threshold for predicting PTBE was approximately 41 Gy (α/β = 14), corresponding to ~18 Gy in a single fraction and ~5.8 Gy per fraction in a five-fraction regimen. Conclusions: The BED values calculated using α/β ratios near 14 provide the most reliable estimate of PTBE risk following primary LINAC-based radiotherapy for convexity, parasagittal, and falcine meningiomas. Maintaining prescription doses below this threshold may help reduce the likelihood of PTBE in this patient population.
(This article belongs to the Section Clinical Research of Cancer)
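For reference, the fractionation schemes quoted above are consistent with the standard linear-quadratic biologically effective dose formula for n fractions of dose d (assuming, as is conventional, that this is the definition used):

```latex
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right)
```

Checking both quoted schemes against the ~41 Gy threshold at α/β = 14:

```latex
1 \times 18\ \mathrm{Gy}: \quad 18\left(1 + \tfrac{18}{14}\right) \approx 41.1\ \mathrm{Gy},
\qquad
5 \times 5.8\ \mathrm{Gy}: \quad 5 \times 5.8\left(1 + \tfrac{5.8}{14}\right) \approx 41.0\ \mathrm{Gy}.
```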
40 pages, 5991 KB  
Article
The Four-Parameter Odd Generalized Rayleigh Lomax Distribution: Theory, Simulation, and Applications
by Alaa A. Khalaf, Ahmed R. El-Saeed, Mundher A. Khaleel and Ahlam H. Tolba
Symmetry 2026, 18(2), 244; https://doi.org/10.3390/sym18020244 - 29 Jan 2026
Abstract
The fundamental problem with current Rayleigh–Lomax-based distributions lies in their limited flexibility to model both symmetry and tail weight simultaneously. This study therefore introduces the odd generalized Rayleigh Lomax (OGRLx) distribution, an innovative mathematical framework that addresses these shortcomings by providing precise control over the distribution's shape and risk ratios. We derive the basic statistical properties of the model and compare six estimation methods in an intensive simulation study, with the maximum likelihood estimator showing the best performance in terms of bias and root mean square error. The practical value of the model is evident in its superior ability to fit data with high skewness and variable risks; experimental results on economic and medical (bladder cancer) data show the OGRLx distribution to be significantly superior to nine competing models. It achieved the lowest values of the Akaike, Consistent Akaike, Bayesian, and Hannan–Quinn information criteria and of the Anderson–Darling, Cramér–von Mises, and Kolmogorov–Smirnov statistics, together with the highest p-values, making it a more accurate statistical tool for reliability analysis and medical studies compared to traditional extensions. All analyses, programming, and statistical operations in this study were performed using the R statistical software.
(This article belongs to the Section Mathematics)
12 pages, 2766 KB  
Article
The Approximate Subcutaneous LD50 and Associated Lesions Induced by Ivalin, Extracted and Purified from Geigeria aspera Harv., in Sprague–Dawley Rats
by Sara Locke, Christo Botha, Sarah Clift and Antoinette Lensink
Molecules 2026, 31(3), 478; https://doi.org/10.3390/molecules31030478 - 29 Jan 2026
Abstract
“Vomiting disease” in ruminants is one of the most economically significant phytotoxicities in South Africa and is caused by chronic ingestion of sesquiterpene lactone compounds present in plants of the Geigeria genus. Affected livestock demonstrate mortality due to actin and myosin damage in the striated musculature; however, a validated parenteral-exposure laboratory animal model would be useful for further study of the toxicodynamics. We exposed Sprague–Dawley rats to ivalin in a sequential dosing procedure and evaluated clinical signs, mortality, histopathology and muscle ultrastructure. Three of the five exposed rats died acutely, and a maximum likelihood estimation method was used to calculate a median lethal dose (LD50) of 135.4 mg/kg body weight (BW). Striated muscle in exposed rats showed only minimal and inconsistent histopathological and ultrastructural changes. Subcutaneous ivalin exposure causes acute mortality with minimal muscle pathology, contrasting with the more protracted muscular disease seen in ruminants after plant ingestion. This suggests toxicity by parenteral exposure is due to another mechanism, most likely mitochondrial energy pathway disturbances. Whilst subcutaneously exposed rats do not appear to provide a suitable model for oral sesquiterpene lactone exposure in ruminants, this study provides a starting dose for further investigation of plant extracts in both species.
36 pages, 3751 KB  
Article
Butterworth-Induced Autoregressive Model
by Carlos Brás-Geraldes, J. Leonel Rocha and Filipe Martins
Mathematics 2026, 14(3), 479; https://doi.org/10.3390/math14030479 - 29 Jan 2026
Abstract
This work proposes a novel autoregressive (AR) modeling framework in which the model structure and coefficients are induced from the analytical properties of Butterworth filters. By exploiting the equivalence between AR models and all-pole discrete-time filters, the proposed approach derives the AR coefficients directly from the pole locations of a continuous-time Butterworth prototype mapped to the discrete-time domain. In this formulation, the filter order and stopband attenuation act as hyperparameters controlling the complexity and frequency-selective behavior of the resulting predictor, while a scalar gain parameter is estimated from data using a maximum likelihood criterion. Model selection is carried out through a nested cross-validation strategy tailored to time series data, employing a rolling-origin scheme to prevent look-ahead bias. The predictive performance of the resulting Butterworth-induced AR models is evaluated using one-step-ahead forecasts and compared against classical ARIMA models on simulated data. Experimental results show that the proposed approach achieves competitive predictive accuracy, while offering a structured and interpretable link between frequency-domain filter design and time-domain autoregressive modeling.
(This article belongs to the Special Issue Applied Mathematical Modelling and Dynamical Systems, 2nd Edition)
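The pole-to-coefficient step described in this abstract can be sketched with SciPy: design a digital Butterworth filter, keep its denominator polynomial (the all-pole part) as AR coefficients, and predict one step ahead. The article's maximum likelihood gain estimation and nested cross-validation are omitted; this is an illustrative sketch of the filter-to-AR mapping only.

```python
import numpy as np
from scipy import signal

# Digital Butterworth prototype; order and cutoff act as hyperparameters.
order, cutoff = 4, 0.2                      # cutoff as a fraction of Nyquist
b, a = signal.butter(order, cutoff, btype="low", output="ba")

def ar_one_step(x, a):
    """One-step AR predictions using only the all-pole denominator a
    (normalized so a[0] = 1): x_hat[n] = -sum_{k=1..p} a[k] * x[n-k]."""
    p = len(a) - 1
    return np.array([-np.dot(a[1:], x[n - 1::-1][:p]) for n in range(p, len(x))])

# Demo: lowpass-filtered white noise is highly predictable by this AR model.
rng = np.random.default_rng(4)
x = signal.lfilter(b, a, rng.standard_normal(4_000))
preds = ar_one_step(x, a)
err = x[len(a) - 1:] - preds                # residual of the AR predictor
```

Because the AR residual equals the numerator-filtered driving noise, its variance is far below the signal variance for a narrow passband, which is what makes the induced predictor useful.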
33 pages, 733 KB  
Article
Improving Confidence Interval Estimation in Logistic Regression with Multicollinear Predictors: A Comparative Study of Shrinkage Estimators and Application to Prostate Cancer Data
by Sultana Mubarika Rahman Chowdhury, Zoran Bursac and B. M. Golam Kibria
Stats 2026, 9(1), 11; https://doi.org/10.3390/stats9010011 - 29 Jan 2026
Abstract
In logistic regression with finite binary samples and multicollinear predictors, the maximum likelihood estimator often results in overfitting and high mean squared error (MSE). Shrinkage methods like ridge, Liu, and Kibria–Lukman offer improved MSE performance but are typically evaluated only on this criterion, which overlooks their inferential capability. This study shifts the focus toward confidence interval coverage, using simulations to assess the coverage probability, interval width, and MSE of several shrinkage estimators under varying conditions. The results show that, while shrinkage methods generally reduce interval width and MSE, many fail to maintain adequate coverage. However, certain ridge and Kibria–Lukman estimators achieve a favorable balance between narrow interval width and consistent coverage, making them preferable. The findings are further validated using a prostate cancer dataset, contributing to more reliable inference in logistic regression under multicollinearity. Overall, the results demonstrate that well-chosen shrinkage estimators can serve as effective alternatives to the MLE in biostatistical modeling, improving the stability and interpretability of regression analyses in studies pertaining to public health and medicine.
(This article belongs to the Section Applied Statistics and Machine Learning Methods)
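The ridge idea this abstract evaluates can be sketched directly: add an L2 penalty to the logistic negative log-likelihood and compare the shrunken fit against plain maximum likelihood on multicollinear data. This is an illustrative sketch only; the article's ridge, Liu, and Kibria–Lukman estimators are specific shrinkage constructions (with their own penalty-parameter rules) that are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Synthetic multicollinear design: x2 is nearly a copy of x1.
n = 400
x1 = rng.standard_normal(n)
x2 = x1 + 0.05 * rng.standard_normal(n)
x3 = rng.standard_normal(n)
X = np.column_stack([x1, x2, x3])
p_true = 1.0 / (1.0 + np.exp(-(x1 + x2 - 0.5 * x3)))
y = (rng.uniform(size=n) < p_true).astype(float)

def penalized_nll(beta, lam):
    """Logistic negative log-likelihood plus a ridge penalty (lam = 0 gives MLE)."""
    eta = X @ beta
    nll = np.sum(np.logaddexp(0.0, eta) - y * eta)
    return nll + 0.5 * lam * beta @ beta

beta_ml = minimize(penalized_nll, np.zeros(3), args=(0.0,), method="BFGS").x
beta_ridge = minimize(penalized_nll, np.zeros(3), args=(5.0,), method="BFGS").x
```

Under collinearity the unpenalized coefficients on x1 and x2 are unstable and inflated; the ridge fit trades a little bias for a smaller, more stable coefficient vector, which is the MSE/coverage trade-off the study quantifies.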
26 pages, 726 KB  
Article
A New Cosine Topp–Leone Exponentiated Half Logistic-G Family of Distributions with Applications
by Fastel Chipepa, Mahmoud M. Abdelwahab, Wellington Fredrick Charumbira, Broderick Oluyede, Neo Dingalo, Anis Ben Ghorbal and Mustafa M. Hasaballah
Mathematics 2026, 14(3), 472; https://doi.org/10.3390/math14030472 - 29 Jan 2026
Abstract
A new generalized family of distributions, termed the Cosine–Topp–Leone–Exponentiated Half Logistic–G (Cos–TL–EHL–G) family, is proposed. The primary motivation for introducing this family is to enhance the modelling flexibility of the existing Cosine–Topp–Leone–G class by incorporating an exponentiated half logistic (EHL-G)-based transformation. Two important special cases, namely the Cos–TL–EHL–Weibull (Cos–TL–EHL–W) and Cos–TL–EHL–Log–Logistic (Cos–TL–EHL–LLoG) distributions, are presented. Several mathematical and statistical properties of the proposed family are derived, including series expansions, moments, order statistics, and uncertainty measures. Parameter estimation is carried out using maximum likelihood, least squares, Anderson–Darling, and Cramér–von Mises methods. A Monte Carlo simulation study indicates that the maximum likelihood estimator outperforms the competing estimation techniques. The practical usefulness and robustness of the proposed family are illustrated through applications to two real datasets, where the Cos–TL–EHL–W distribution demonstrates superior performance compared to both nested and non-nested competing models.
(This article belongs to the Section D1: Probability and Statistics)
30 pages, 2844 KB  
Article
Bridging Climate and Socio-Environmental Vulnerability for Wildfire Risk Assessment Using Explainable Machine Learning: Evidence from the 2025 Wildfire in Korea
by Sujung Heo, Sujung Ahn, Ye-Eun Lee, Sung-Cheol Jung and Mina Jang
Forests 2026, 17(2), 182; https://doi.org/10.3390/f17020182 - 29 Jan 2026
Abstract
Wildfire activity is intensifying under climate change, particularly in temperate East Asia where human-driven ignitions interact with extreme fire-weather conditions. This study examines wildfire risk during the March 2025 large wildfire event in Korea by applying explainable machine-learning models to assess ignition-prone environments and their spatial relationship with socio-environmental features relevant to exposure and management. CatBoost and LightGBM models were used to estimate wildfire susceptibility based on climatic, topographic, vegetation, and anthropogenic predictors, with SHAP analysis employed to interpret variable contributions. Both models showed strong predictive performance (CatBoost AUC = 0.910; LightGBM AUC = 0.907). Temperature, relative humidity, and wind speed emerged as the dominant climatic drivers, with ignition probability increasing under hot (>25 °C), dry (<25%), and windy (>6 m s−1) conditions. Anthropogenic factors—including proximity to graves, mountain trails, forest roads, and contiguous coniferous stands (≥30 ha)—were consistently associated with elevated ignition likelihood, reflecting the role of human accessibility within pine-dominated landscapes. The socio-environmental overlay analysis further indicated that high-susceptibility zones were spatially aligned with arboreta, private commercial forests, and campsites, highlighting areas where ignition-prone environments coincide with active human use and forest management. These results suggest that wildfire risk in Korea is shaped by the spatial concurrence of climatic extremes, fuel continuity, and socio-environmental exposure. By situating explainable susceptibility modeling within an event-conditioned risk perspective, this study provides practical insights for identifying Wildfire Priority Management Areas (WPMAs) and supporting risk-informed prevention, preparedness, and spatial decision-making under ongoing climate change.