Search Results (65)

Search Parameters:
Keywords = log-gamma-normal distribution

39 pages, 23728 KB  
Article
Parametric Inference of the Power Weibull Survival Model Using a Generalized Censoring Plan: Three Applications to Symmetry and Asymmetry Scenarios
by Refah Alotaibi and Ahmed Elshahhat
Symmetry 2025, 17(12), 2142; https://doi.org/10.3390/sym17122142 - 12 Dec 2025
Viewed by 174
Abstract
Generalized censoring, combined with a power-based distribution, improves inferential efficiency by capturing more detailed failure-time information in complex testing scenarios. Conventional censoring schemes may discard substantial failure-time information, leading to inefficiencies in parameter estimation and reliability prediction. To address this limitation, we develop a comprehensive inferential framework for the alpha-power Weibull (APW) distribution under a generalized progressive hybrid Type-II censoring scheme, a flexible design that unifies classical, hybrid, and progressive censoring while guaranteeing test completion within preassigned limits. Both maximum likelihood and Bayesian estimation procedures are derived for the model parameters, reliability function, and hazard rate. Associated uncertainty quantification is provided through asymptotic confidence intervals (normal and log-normal approximations) and Bayesian credible intervals obtained via Markov chain Monte Carlo (MCMC) methods with independent gamma priors. In addition, we propose optimal censoring designs based on trace, determinant, and quantile-variance criteria to maximize inferential efficiency at the design stage. Extensive Monte Carlo simulations, assessed using four precision measures, demonstrate that the Bayesian MCMC estimators consistently outperform their frequentist counterparts in terms of bias, mean squared error, robustness, and interval coverage across a wide range of censoring levels and prior settings. Finally, the proposed methodology is validated using real-life datasets from engineering (electronic devices), clinical (organ transplant), and physical (rare metals) studies, demonstrating the APW model’s superior goodness-of-fit, reliability prediction, and inferential stability. Overall, this study demonstrates that combining generalized censoring with the APW distribution substantially enhances inferential efficiency and predictive performance, offering a robust and versatile tool for complex life-testing experiments across multiple scientific domains. Full article
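As a rough illustration of the estimation step (not the authors' code), the sketch below fits the alpha-power Weibull model by maximum likelihood on a complete toy sample, assuming the usual alpha-power transform F_APW(x) = (α^F_W(x) − 1)/(α − 1); the generalized progressive hybrid Type-II censored likelihood, the Bayesian MCMC step, and the optimal-design criteria are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def apw_neg_loglik(params, x):
    a, shape, scale = params                          # a is the alpha-power parameter
    if a <= 0 or a == 1.0 or shape <= 0 or scale <= 0:
        return np.inf
    F = weibull_min.cdf(x, shape, scale=scale)        # baseline Weibull CDF
    logf = weibull_min.logpdf(x, shape, scale=scale)  # baseline Weibull log-density
    # log f_APW(x) = log(log(a)/(a-1)) + F(x)*log(a) + log f_W(x)
    return -np.sum(np.log(np.log(a) / (a - 1.0)) + F * np.log(a) + logf)

rng = np.random.default_rng(1)
x = weibull_min.rvs(1.8, scale=2.0, size=200, random_state=rng)   # toy complete sample
fit = minimize(apw_neg_loglik, x0=[2.0, 1.5, 1.5], args=(x,), method="Nelder-Mead")
print("MLEs (alpha, shape, scale):", np.round(fit.x, 3))
```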

21 pages, 26649 KB  
Article
A Hybrid Deep Learning-Based Modeling Methods for Atmosphere Turbulence in Free Space Optical Communications
by Yuan Gao, Bingke Yang, Shasha Fan, Leheng Xu, Tianye Wang, Boxian Yang and Shichen Jiang
Photonics 2025, 12(12), 1210; https://doi.org/10.3390/photonics12121210 - 8 Dec 2025
Viewed by 368
Abstract
Free-space optical (FSO) communication provides high-capacity and secure links but is strongly impaired by atmospheric turbulence, which induces multi-scale irradiance fluctuations. Traditional approaches such as adaptive optics, multi-aperture and multiple-input multiple-output FSO schemes offer limited robustness under rapidly varying turbulence, while statistical fading models such as log-normal and Gamma–Gamma cannot represent multi-scale temporal correlations. This work proposes a hybrid deep learning framework that explicitly separates high-frequency scintillation and low-frequency power drift through a conditional variational autoencoder and a bidirectional long short-term memory dual-branch architecture with an adaptive gating mechanism. Trained on OptiSystem-generated datasets, the model accurately reconstructs irradiance distributions and temporal dynamics. For model-assisted signal compensation, it achieves an average 79% bit-error-rate (BER) reduction across all simulated scenarios compared with conventional thresholding and Gamma–Gamma maximum a posteriori detection. Transfer learning further enables efficient adaptation to new turbulence conditions with minimal retraining. Experimental validation shows that the compensated BER approaches near-zero, yielding significant improvement over traditional detection. These results demonstrate an effective and adaptive solution for turbulence-impaired FSO links. Full article
(This article belongs to the Special Issue Advances in Free-Space Optical Communications)
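For readers unfamiliar with the Gamma–Gamma fading model referenced above, the short snippet below (a generic illustration, not part of the paper's CVAE/BiLSTM framework) draws unit-mean Gamma–Gamma irradiance samples as the product of two independent gamma variates and checks the scintillation index against its closed-form value; the α and β values are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 4.0, 2.0                       # large- and small-scale parameters (assumed values)
x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=200_000)
y = rng.gamma(shape=beta, scale=1.0 / beta, size=200_000)
irradiance = x * y                           # unit-mean Gamma-Gamma fading samples

si_empirical = irradiance.var() / irradiance.mean() ** 2
si_theory = 1.0 / alpha + 1.0 / beta + 1.0 / (alpha * beta)
print(f"scintillation index: simulated {si_empirical:.3f}, theory {si_theory:.3f}")
```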

49 pages, 2669 KB  
Article
On a Three-Parameter Bounded Gamma–Gompertz Distribution, with Properties, Estimation, and Applications
by Tassaddaq Hussain, Mohammad Shakil and Mohammad Ahsanullah
AppliedMath 2025, 5(4), 177; https://doi.org/10.3390/appliedmath5040177 - 8 Dec 2025
Viewed by 145
Abstract
A novel statistical model, the Bounded Gamma–Gompertz Distribution (BGGD), is presented alongside a full characterization of its properties. Our investigation identifies maximum-likelihood estimation (MLE) as the most effective fitting procedure, proving it to be more consistent and efficient than alternative approaches like L-moments and Bayesian estimation. Empirical validation on Tesla (TSLA) financial records—spanning open, high, low, close prices, and trading volume—showcased the BGGD’s superior performance. It delivered a better fit than several competing heavy-tailed distributions, including Student-t, Log-Normal, Lévy, and Pareto, as indicated by minimized AIC and BIC statistics. The results substantiate the distribution’s robustness in capturing extreme-value behavior, positioning it as a potent tool for financial modeling applications. Full article
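A minimal sketch of the kind of AIC/BIC comparison reported above, restricted to classical competitors available in scipy (the BGGD density itself is not reproduced here) and run on simulated data rather than the TSLA series.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=0.6, size=500)        # toy positive, heavy-tailed data

candidates = {"lognorm": stats.lognorm, "student-t": stats.t, "pareto": stats.pareto}
for name, dist in candidates.items():
    params = dist.fit(x)                                # maximum-likelihood fit via scipy
    loglik = np.sum(dist.logpdf(x, *params))
    k = len(params)
    aic = 2 * k - 2 * loglik
    bic = k * np.log(len(x)) - 2 * loglik
    print(f"{name:10s} AIC={aic:8.1f}  BIC={bic:8.1f}")
```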

20 pages, 1589 KB  
Article
A Computational Framework for Reproducible Generation of Synthetic Grain-Size Distributions for Granular and Geoscientific Applications
by Seweryn Lipiński
Geosciences 2025, 15(12), 464; https://doi.org/10.3390/geosciences15120464 - 4 Dec 2025
Viewed by 302
Abstract
Particle size distribution (PSD), also referred to as grain-size distribution (GSD), is a fundamental characteristic of granular materials, influencing packing density, porosity, permeability, and mechanical behavior across soils, sediments, and industrial powders. Accurate and reproducible representation of PSD is essential for computational modeling, digital twin development (i.e., virtual replicas of physical systems), and machine learning applications in geosciences and engineering. Despite the widespread use of classical distributions (log-normal, Weibull, Gamma), there remains a lack of systematic frameworks for generating synthetic datasets with controlled statistical properties and reproducibility. This paper introduces a unified computational framework for generating virtual PSDs/GSDs with predefined statistical characteristics and a specified number of grain-size fractions. The approach integrates parametric modeling with two histogram-based allocation strategies: the equal-width method, maintaining uniform bin spacing, and the equal-probability method, distributing grains according to quantiles of the target distribution. Both methods ensure statistical representativeness, reproducibility, and scalability across material classes. The framework is demonstrated on representative cases of soils (Weibull), sedimentary and industrial materials (Gamma), and food powders (log-normal), showing its generality and adaptability. The generated datasets can support sensitivity analyses, experimental validation, and integration with discrete element modeling, computational fluid dynamics, or geostatistical simulations. Full article
(This article belongs to the Section Geomechanics)
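The two allocation strategies described above can be summarized in a few lines; in the sketch below, assumed Weibull parameters stand in for a soil-like material, and equal-width bins (uniform spacing, unequal mass) are contrasted with equal-probability bins (quantile-based edges, equal mass per fraction).

```python
import numpy as np
from scipy.stats import weibull_min

shape, scale, n_fractions = 1.5, 0.8, 10        # assumed Weibull parameters (grain size in mm)
dist = weibull_min(shape, scale=scale)

# Equal-width bins over a central 99% size range
lo, hi = dist.ppf(0.005), dist.ppf(0.995)
width_edges = np.linspace(lo, hi, n_fractions + 1)
width_mass = np.diff(dist.cdf(width_edges))     # mass per equal-width fraction

# Equal-probability bins: edges placed at quantiles, identical mass per fraction
prob_edges = dist.ppf(np.linspace(0.005, 0.995, n_fractions + 1))
prob_mass = np.diff(dist.cdf(prob_edges))

print("equal-width masses      :", np.round(width_mass, 3))
print("equal-probability masses:", np.round(prob_mass, 3))
```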

24 pages, 3213 KB  
Article
The UG-EM Lifetime Model: Analysis and Application to Symmetric and Asymmetric Survival Data
by Omalsad H. Odhah, Saba M. Alwan and Sarah Aljohani
Symmetry 2025, 17(12), 2027; https://doi.org/10.3390/sym17122027 - 26 Nov 2025
Viewed by 267
Abstract
This paper introduces the UG-EM (Unconditional Gamma-Exponential Model) as a new compound lifetime model designed to enhance flexibility in tail behavior compared to traditional distributions. The UG-EM model provides a unified framework for analyzing deviations from symmetry in survival data, effectively capturing right-skewed patterns, which are commonly observed in real-world lifetime phenomena. The main analytical properties are derived, including the probability density, cumulative distribution, hazard and reversed-hazard functions, mean residual life, and several measures of dispersion and uncertainty. The effects of the UG-EM parameters (α and λ) are examined, showing that increasing either parameter can cause a temporary reduction in entropy H(T) at early times followed by a long-term increase; in some cases, the influence of α is stronger than that of λ. Parameter estimation is carried out using the maximum likelihood method and assessed through Monte Carlo simulations to evaluate estimator bias and variability, highlighting the significant role of sample size in estimation accuracy. The proposed model is applied to three survival datasets (Lung, Veteran, and Kidney) and compared with classical alternatives such as Exponential, Weibull, and Log-normal distributions using standard goodness-of-fit criteria. Results indicate that the UG-EM model offers superior flexibility and can capture patterns that simpler models fail to represent, although the empirical results do not demonstrate a clear, consistent superiority over standard competitors across all tested datasets. The paper also discusses identifiability issues, estimation challenges, and practical implications for reliability and medical survival analysis. Recommendations for further theoretical development and broader model comparison are provided. Full article
(This article belongs to the Section Mathematics)
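The Monte Carlo assessment described above can be mimicked as follows; because the UG-EM density is not given in the abstract, a Weibull model stands in, and only the bias and spread of the shape-parameter MLE versus sample size are reported.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(4)
true_shape, true_scale, replications = 1.7, 2.0, 300   # assumed true values and replication count

for n in (30, 100, 300):
    shapes = []
    for _ in range(replications):
        sample = weibull_min.rvs(true_shape, scale=true_scale, size=n, random_state=rng)
        c_hat, _, _ = weibull_min.fit(sample, floc=0)  # MLE of the shape parameter, location fixed
        shapes.append(c_hat)
    shapes = np.asarray(shapes)
    print(f"n={n:4d}  bias={shapes.mean() - true_shape:+.3f}  sd={shapes.std():.3f}")
```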

17 pages, 591 KB  
Article
Extending Approximate Bayesian Computation to Non-Linear Regression Models: The Case of Composite Distributions
by Mostafa S. Aminzadeh and Min Deng
Risks 2025, 13(11), 220; https://doi.org/10.3390/risks13110220 - 5 Nov 2025
Viewed by 428
Abstract
Modeling loss data is a crucial aspect of actuarial science. In the insurance industry, small claims occur frequently, while large claims are rare. Traditional heavy-tail distributions, such as Weibull, Log-Normal, and Inverse Gaussian distributions, are not suitable for describing insurance data, which often exhibit skewness and fat tails. The literature has explored classical and Bayesian inference methods for the parameters of composite distributions, such as the Exponential–Pareto, Weibull–Pareto, and Inverse Gamma–Pareto distributions. These models effectively separate small to moderate losses from significant losses using a threshold parameter. This research aims to introduce a new composite distribution, the Gamma–Pareto distribution with two parameters, and employ a numerical computational approach to find the maximum likelihood estimates (MLEs) of its parameters. A novel computational approach for a nonlinear regression model where the loss variable is distributed as the Gamma–Pareto and depends on multiple covariates is proposed. The maximum likelihood (ML) and Approximate Bayesian Computation (ABC) methods are used to estimate the regression parameters. The Fisher information matrix, along with a multivariate normal distribution as the prior distribution, is utilized through the ABC method. Simulation studies indicate that the ABC method outperforms the ML method in terms of accuracy. Full article
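The ABC idea is easy to sketch in isolation; the toy rejection sampler below (not the paper's Gamma–Pareto regression, and without its Fisher-information-based multivariate normal prior) accepts prior draws whose simulated summary statistics fall within a tolerance of the observed ones.

```python
import numpy as np

rng = np.random.default_rng(5)
observed = rng.gamma(shape=2.5, scale=1.0, size=200)         # stand-in "loss" data
obs_summary = np.array([observed.mean(), observed.std()])    # summary statistics

accepted, tol = [], 0.2                                      # illustrative tolerance
while len(accepted) < 300:
    theta = rng.uniform(0.5, 5.0)                            # prior draw for the gamma shape
    sim = rng.gamma(shape=theta, scale=1.0, size=observed.size)
    sim_summary = np.array([sim.mean(), sim.std()])
    if np.linalg.norm(sim_summary - obs_summary) < tol:      # accept if summaries are close
        accepted.append(theta)

print("ABC posterior mean of the shape parameter:", round(float(np.mean(accepted)), 3))
```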

16 pages, 495 KB  
Article
Slomads Rising: Structural Shifts in U.S. Airbnb Stay Lengths During and After the Pandemic (2019–2024)
by Harrison Katz and Erica Savage
Tour. Hosp. 2025, 6(4), 182; https://doi.org/10.3390/tourhosp6040182 - 17 Sep 2025
Viewed by 1680
Abstract
Background. Length of stay, operationalized here as nights per booking (NPB), is a first-order driver of yield, labor planning, and environmental pressure. The COVID-19 pandemic and the rise of long-stay remote workers (often labeled “slomads”, a slow-travel subset of digital nomads) plausibly altered stay-length distributions, yet national, booking-weighted evidence for the United States remains scarce. Purpose. This study quantifies COVID-19 pandemic-era and post-pandemic shifts in U.S. Airbnb stay lengths, and identifies whether higher averages reflect (i) more long stays or (ii) longer long stays. Methods. Using every U.S. Airbnb reservation created between 1 January 2019 and 31 December 2024 (collapsed to booking-count weights), the analysis combines: weighted descriptive statistics; parametric density fitting (Gamma, log-normal, Poisson–lognormal); weighted negative-binomial regression with month effects; a two-part (logit + NB) model for ≥28-night stays; and a monthly SARIMA(0,1,1)(0,1,1)[12] with COVID-19 pandemic-phase indicators. Results. Mean NPB rose from 3.68 pre-COVID-19 to 4.36 during restrictions and then stabilized near 4.07 post-2021 (≈10% above 2019); the booking-weighted median shifted permanently from 2 to 3 nights. A two-parameter log-normal fits best by wide AIC/BIC margins, consistent with a heavy-tailed distribution. Negative-binomial estimates imply post-vaccine bookings are 6.5% shorter than restriction-era bookings, while pre-pandemic bookings are 16% shorter. In a two-part (threshold) model at 28 nights, the booking share of month-plus stays rose from 1.43% (pre) to 2.72% (restriction) and settled at 2.04% (post), whereas the conditional mean among long stays was in the mid-to-high 50s (≈55–60 nights) and varied modestly across phases. Hence, a higher average NPB is driven primarily by a greater prevalence of month-plus bookings. A seasonal ARIMA model with pandemic-phase dummies improves fit over a dummy-free specification (likelihood-ratio = 8.39, df = 2, p = 0.015), indicating a structural level shift rather than higher-order dynamics. Contributions. The paper provides national-scale, booking-weighted evidence that U.S. short-term-rental stays became durably longer and more heavy-tailed after 2020, filling a gap in the tourism and revenue-management literature. Implications. Heavy-tailed pricing and inventory policies, and explicit regime indicators in forecasting, are recommended for practitioners; destination policy should reflect the larger month-plus segment. Full article
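The seasonal specification with pandemic-phase indicators can be expressed directly in statsmodels; the sketch below fits SARIMA(0,1,1)(0,1,1)[12] with and without illustrative phase dummies on synthetic monthly data (the Airbnb bookings are not public) and forms the likelihood-ratio statistic.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(6)
idx = pd.date_range("2019-01-01", "2024-12-01", freq="MS")
# Synthetic mean nights-per-booking series with a mild seasonal cycle
y = pd.Series(4.0 + 0.3 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 0.1, len(idx)),
              index=idx)
phases = pd.DataFrame({"restriction": ((idx >= "2020-03-01") & (idx <= "2021-06-01")).astype(int),
                       "post": (idx > "2021-06-01").astype(int)}, index=idx)

with_dummies = SARIMAX(y, exog=phases, order=(0, 1, 1),
                       seasonal_order=(0, 1, 1, 12)).fit(disp=False)
without = SARIMAX(y, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
lr = 2 * (with_dummies.llf - without.llf)      # likelihood-ratio statistic, df = 2
print(f"LR statistic = {lr:.2f}")
```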

24 pages, 7349 KB  
Article
Return Level Prediction with a New Mixture Extreme Value Model
by Emrah Altun, Hana N. Alqifari and Kadir Söyler
Mathematics 2025, 13(17), 2705; https://doi.org/10.3390/math13172705 - 22 Aug 2025
Viewed by 849
Abstract
The generalized Pareto distribution is frequently used for modeling extreme values above an appropriate threshold level. Since the process of determining the appropriate threshold value is difficult, a mixture of extreme value models rises to prominence. In this study, mixture extreme value models based on exponentiated Pareto distribution are proposed. The Weibull, gamma, and log-normal models are used as bulk densities. The parameter estimates of the proposed models are obtained using the maximum likelihood approach. Two different approaches based on maximization of the log-likelihood and Kolmogorov–Smirnov p-value are used to determine the appropriate threshold value. The effectiveness of these methods is compared using simulation studies. The proposed models are compared with other mixture models through an application study on earthquake data. The GammaEP web application is developed to ensure the reproducibility of the results and the usability of the proposed model. Full article
(This article belongs to the Special Issue Mathematical Modelling and Applied Statistics)
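The Kolmogorov–Smirnov threshold-selection rule mentioned above reduces to a short loop: fit a generalized Pareto tail to the exceedances over each candidate threshold and keep the threshold with the largest KS p-value. The sketch below uses simulated heavy-tailed data and omits the exponentiated-Pareto bulk component of the proposed mixture.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.lognormal(mean=0.0, sigma=0.9, size=2000)           # toy heavy-tailed magnitudes

best = None
for q in np.arange(0.80, 0.96, 0.02):                       # candidate threshold quantiles
    u = np.quantile(x, q)
    excess = x[x > u] - u
    c, loc, scale = stats.genpareto.fit(excess, floc=0)     # GPD fit to the exceedances
    pval = stats.kstest(excess, "genpareto", args=(c, loc, scale)).pvalue
    if best is None or pval > best[2]:
        best = (q, u, pval)

print(f"selected threshold quantile {best[0]:.2f} (u = {best[1]:.2f}), KS p-value {best[2]:.3f}")
```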

14 pages, 3176 KB  
Article
Impact of Data Distribution and Bootstrap Setting on Anomaly Detection Using Isolation Forest in Process Quality Control
by Hyunyul Choi and Kihyo Jung
Entropy 2025, 27(7), 761; https://doi.org/10.3390/e27070761 - 18 Jul 2025
Cited by 3 | Viewed by 1128
Abstract
This study investigates the impact of data distribution and bootstrap resampling on the anomaly detection performance of the Isolation Forest (iForest) algorithm in statistical process control. Although iForest has received attention for its multivariate and ensemble-based nature, its performance under non-normal data distributions and varying bootstrap settings remains underexplored. To address this gap, a comprehensive simulation was performed across 18 scenarios involving log-normal, gamma, and t-distributions with different mean shift levels and bootstrap configurations. The results show that iForest substantially outperforms the conventional Hotelling’s T² control chart, especially in non-Gaussian settings and under small-to-medium process shifts. Enabling bootstrap resampling led to marginal improvements across classification metrics, including accuracy, precision, recall, F1-score, and average run length (ARL1). However, a key limitation of iForest was its reduced sensitivity to subtle process changes, such as a 1σ mean shift, highlighting an area for future enhancement. Full article
(This article belongs to the Section Multidisciplinary Applications)
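A minimal version of the simulation idea above: an Isolation Forest with bootstrap resampling enabled is trained on in-control, log-normally distributed data and scored on a mean-shifted batch. The scenario parameters are illustrative, not the paper's 18 configurations, and the Hotelling's T² benchmark is omitted.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(8)
in_control = rng.lognormal(mean=0.0, sigma=0.5, size=(1000, 3))   # in-control process data
shifted = rng.lognormal(mean=0.5, sigma=0.5, size=(200, 3))       # out-of-control batch (mean shift)

iforest = IsolationForest(n_estimators=200, bootstrap=True, random_state=0)
iforest.fit(in_control)

detection_rate = np.mean(iforest.predict(shifted) == -1)          # -1 flags anomalies
false_alarm = np.mean(iforest.predict(in_control) == -1)
print(f"detection rate {detection_rate:.2%}, false-alarm rate {false_alarm:.2%}")
```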

20 pages, 1565 KB  
Article
Stratified Median Estimation Using Auxiliary Transformations: A Robust and Efficient Approach in Asymmetric Populations
by Abdulaziz S. Alghamdi and Fatimah A. Almulhim
Symmetry 2025, 17(7), 1127; https://doi.org/10.3390/sym17071127 - 14 Jul 2025
Cited by 1 | Viewed by 380
Abstract
This study estimates the population median through stratified random sampling, which enhances accuracy by ensuring the proper representation of key population groups. The proposed class of estimators based on transformations effectively handles data variability and enhances estimation efficiency. We examine bias and mean square error expressions up to the first-order approximation for both existing and newly introduced estimators, establishing theoretical conditions for their applicability. Moreover, to assess the effectiveness of the suggested estimators, five simulated datasets derived from distinct asymmetric distributions (gamma, log-normal, Cauchy, uniform, and exponential), along with actual datasets, are used for numerical analysis. These estimators are designed to significantly enhance the precision and effectiveness of median estimation, resulting in more reliable and consistent outcomes. Comparative analysis using percent relative efficiency (PRE) reveals that the proposed estimators perform better than conventional approaches. Full article
(This article belongs to the Section Mathematics)
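In the spirit of the PRE comparison above, the simulation below contrasts the simple stratified sample median with a classical ratio-type median estimator that exploits the known population median of an auxiliary variable; the transformation-based estimators actually proposed in the paper are not coded here.

```python
import numpy as np

rng = np.random.default_rng(9)
# Two asymmetric strata: study variable y correlated with a gamma-distributed auxiliary x
strata = []
for shape, n_pop in ((2.0, 4000), (5.0, 6000)):
    x = rng.gamma(shape, 1.0, n_pop)
    y = 2.0 * x + rng.normal(0, 0.5, n_pop)
    strata.append((x, y))

weights = np.array([len(x) for x, _ in strata], dtype=float)
weights /= weights.sum()
pop_median_y = np.median(np.concatenate([y for _, y in strata]))

def stratified_estimates(n_h=60):
    simple, ratio = 0.0, 0.0
    for w, (x, y) in zip(weights, strata):
        idx = rng.choice(len(x), size=n_h, replace=False)
        my, mx = np.median(y[idx]), np.median(x[idx])
        simple += w * my
        ratio += w * my * (np.median(x) / mx)   # ratio-type adjustment with the known median of x
    return simple, ratio

runs = np.array([stratified_estimates() for _ in range(2000)])
mse = ((runs - pop_median_y) ** 2).mean(axis=0)
print(f"PRE of ratio-type estimator = {100 * mse[0] / mse[1]:.1f}%")
```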

18 pages, 4683 KB  
Article
Transmission of LG Modes in High-Capacity 16 × 10 Gbps FSO System Using FBG Sensors Under Different Channel Scenarios
by Meet Kumari and Satyendra K. Mishra
Micromachines 2025, 16(7), 738; https://doi.org/10.3390/mi16070738 - 24 Jun 2025
Cited by 1 | Viewed by 1083
Abstract
Free space optics (FSO) aims to perform as one of the best optical wireless channels to design a reliable, flexible, and cost-effective communication system. In FSO systems, mode-division multiplexing (MDM) transmission is a proven technique to expand transmission capacity per communication link. Thus, a 16 × 10 Gbps MDM-FSO system using fiber Bragg grating (FBG) sensors for the coexistence of communication and sensing, exploiting FSO links to transmit distinct Laguerre-Gaussian (LG) beams at a 1000–1900 m range, is proposed. The results illustrate that the system can transmit higher-order LG beams with sensor temperatures of 20–120 °C over a 1500 m range under clear air, drizzle, and moderate haze weather. Also, an improved performance is achieved in gamma–gamma compared to the log-normal distribution model for 10^−6–10^−2.5 index modulation under weak-to-strong turbulence. The proposed system is capable of offering a high optical signal-to-noise ratio (OSNR) and gain of 113.39 and 15.43 dB, respectively, at an aggregate data rate of 160 Gbps under different atmospheric scenarios. Moreover, the proposed system achieves better system performance compared to existing works. Full article
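As background for the gamma–gamma versus log-normal comparison, the helper below maps a refractive-index structure parameter and link length to the effective α/β parameters of the Gamma–Gamma model using the widely cited plane-wave parameterization; this is our own illustration, not code or data from the paper.

```python
import numpy as np

def gamma_gamma_params(cn2, link_m, wavelength_m=1550e-9):
    """Plane-wave Rytov variance and Gamma-Gamma alpha/beta (assumed parameterization)."""
    k = 2.0 * np.pi / wavelength_m
    rytov2 = 1.23 * cn2 * k ** (7.0 / 6.0) * link_m ** (11.0 / 6.0)
    alpha = 1.0 / (np.exp(0.49 * rytov2 / (1.0 + 1.11 * rytov2 ** (6.0 / 5.0)) ** (7.0 / 6.0)) - 1.0)
    beta = 1.0 / (np.exp(0.51 * rytov2 / (1.0 + 0.69 * rytov2 ** (6.0 / 5.0)) ** (5.0 / 6.0)) - 1.0)
    return rytov2, alpha, beta

for cn2 in (1e-15, 1e-14, 1e-13):            # weak to strong turbulence (assumed Cn^2 values)
    r2, a, b = gamma_gamma_params(cn2, link_m=1500.0)
    print(f"Cn2={cn2:.0e}  Rytov^2={r2:.3f}  alpha={a:.2f}  beta={b:.2f}")
```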

14 pages, 1757 KB  
Article
Probability Distribution of Elastic Response Spectrum with Actual Earthquake Data
by Qianqian Liang, Jie Wu, Guijuan Lu and Jun Hu
Buildings 2025, 15(12), 2062; https://doi.org/10.3390/buildings15122062 - 15 Jun 2025
Viewed by 809
Abstract
This study aimed to propose a probability-guaranteed spectrum method to enhance the reliability of seismic building designs, thereby addressing the inadequacy of the current code-specified response spectrum based on mean fortification levels. This study systematically evaluated the fitting performance of dynamic coefficient spectra under normal, log-normal, and gamma distribution assumptions based on 288 ground motion records from type II sites. MATLAB(2010) parameter fitting and the Kolmogorov–Smirnov test were used, revealing that the gamma distribution optimally characterized spectral characteristics across all period ranges (p < 0.05). This study innovatively established dynamic coefficient spectra curves for various probability guarantee levels (50–80%), quantitatively revealing the insufficient probability assurance of code spectra in the long-period range. Furthermore, this study proposed an evaluation framework for load safety levels of spectral values over the design service period, demonstrating that increasing probability guarantee levels significantly improved safety margins over a 50-year reference period. This method provides probabilistic foundations for the differentiated seismic design of important structures and offers valuable insights for revising current code provisions based on mean spectra. Full article
(This article belongs to the Special Issue Study on Concrete Structures—2nd Edition)
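A condensed version of the workflow above: fit a gamma distribution to dynamic-coefficient values at one period, check it with the Kolmogorov–Smirnov test, and read off ordinates at several probability-guarantee levels. The sample below is simulated, not the 288 type II site records.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
beta_values = rng.gamma(shape=6.0, scale=0.35, size=288)      # toy dynamic coefficients at one period

a, loc, scale = stats.gamma.fit(beta_values, floc=0)          # gamma fit with location fixed at zero
ks = stats.kstest(beta_values, "gamma", args=(a, loc, scale))
print(f"K-S statistic {ks.statistic:.3f}, p-value {ks.pvalue:.3f}")

for p in (0.50, 0.60, 0.70, 0.80):                            # probability-guarantee levels
    print(f"{int(p * 100)}% guarantee spectrum ordinate: {stats.gamma.ppf(p, a, loc, scale):.2f}")
```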

28 pages, 11942 KB  
Article
Reliability Analysis of Improved Type-II Adaptive Progressively Inverse XLindley Censored Data
by Refah Alotaibi, Mazen Nassar and Ahmed Elshahhat
Axioms 2025, 14(6), 437; https://doi.org/10.3390/axioms14060437 - 2 Jun 2025
Cited by 1 | Viewed by 777
Abstract
This study offers a newly improved Type-II adaptive progressive censoring with data sampled from an inverse XLindley (IXL) distribution for more efficient and adaptive reliability assessments. Through this sampling mechanism, we evaluate the parameters of the IXL distribution, as well as its reliability and hazard rate features. In the context of reliability, to handle flexible and time-constrained testing frameworks in high-reliability environments, we formulate maximum likelihood estimators versus Bayesian estimates derived via Markov chain Monte Carlo techniques under gamma priors, which effectively capture prior knowledge. Two patterns of asymptotic interval estimates are constructed through the normal approximation of the classical estimates and of the log-transformed classical estimates. On the other hand, from the Markovian chains, two patterns of credible interval estimates are also constructed. A robust simulation study is carried out to compare the classical and Bayesian point estimation methods, along with the four interval estimation methods. This study’s practical usefulness is demonstrated by its analysis of a real-world dataset. The results reveal that both conventional and Bayesian inferential methods function accurately, with the Bayesian outcomes surpassing those of the conventional method. Full article
(This article belongs to the Special Issue Computational Statistics and Its Applications, 2nd Edition)
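The two asymptotic interval patterns mentioned above have simple closed forms; the helper below computes the usual normal-approximation interval for a positive parameter and its log-transformed counterpart (which keeps the lower bound positive), with illustrative numbers rather than estimates from the IXL analysis.

```python
import numpy as np
from scipy.stats import norm

def asymptotic_intervals(theta_hat, se, level=0.95):
    z = norm.ppf(0.5 + level / 2)
    normal_ci = (theta_hat - z * se, theta_hat + z * se)
    factor = np.exp(z * se / theta_hat)            # delta-method interval on the log scale
    log_ci = (theta_hat / factor, theta_hat * factor)
    return normal_ci, log_ci

nci, lci = asymptotic_intervals(theta_hat=0.8, se=0.3)   # illustrative MLE and standard error
print("normal-approximation CI:", tuple(round(v, 3) for v in nci))
print("log-transformed CI     :", tuple(round(v, 3) for v in lci))
```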

22 pages, 1823 KB  
Article
Heavy Rainfall Probabilistic Model for Zielona Góra in Poland
by Marcin Wdowikowski, Monika Nowakowska, Maciej Bełcik and Grzegorz Galiniak
Water 2025, 17(11), 1673; https://doi.org/10.3390/w17111673 - 31 May 2025
Viewed by 1207
Abstract
The research focuses on probabilistic modeling of maximum rainfall in Zielona Góra, Poland, to improve urban drainage system design. The study utilizes archived pluviographic data from 1951 to 2020, collected at the IMWM-NRI meteorological station. These data include 10 min rainfall records and aggregated hourly and daily totals. The study employs various statistical distributions, including Fréchet, gamma, generalized exponential (GED), Gumbel, log-normal, and Weibull, to model rainfall intensity–duration–frequency (IDF) relationships. After testing the goodness of fit using the Anderson–Darling test, Bayesian Information Criterion (BIC), and relative residual mean square error (rRMSE), the GED distribution was found to best describe rainfall patterns. A key outcome is the development of a new rainfall model based on the GED distribution, allowing for the estimation of precipitation amounts for different durations and exceedance probabilities. However, the study highlights limitations, such as the need for more accurate local models and a standardized rainfall atlas for Poland. Full article
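Once a distribution is selected, depths for chosen exceedance probabilities follow from its quantile function; in the sketch below a Gumbel fit on simulated annual maxima stands in for the paper's GED-based model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
annual_max = rng.gumbel(loc=25.0, scale=8.0, size=70)        # toy annual-maximum daily depths (mm)

loc, scale = stats.gumbel_r.fit(annual_max)                  # fit the Gumbel (EV-I) distribution
for p in (0.5, 0.2, 0.1, 0.02, 0.01):                        # exceedance probabilities
    depth = stats.gumbel_r.ppf(1 - p, loc, scale)
    print(f"exceedance p={p:<5}  return period {1 / p:>5.0f} yr  depth {depth:6.1f} mm")
```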

32 pages, 1098 KB  
Article
Estimation and Bayesian Prediction for New Version of Xgamma Distribution Under Progressive Type-II Censoring
by Ahmed R. El-Saeed, Molay Kumar Ruidas and Ahlam H. Tolba
Symmetry 2025, 17(3), 457; https://doi.org/10.3390/sym17030457 - 18 Mar 2025
Cited by 1 | Viewed by 518
Abstract
This article introduces a new continuous lifetime distribution within the Gamma family, called the induced Xgamma distribution, and explores its various statistical properties. The proposed distribution’s estimation and prediction are investigated using Bayesian and non-Bayesian approaches under progressively Type-II censored data. The maximum likelihood and maximum product spacing methods are applied for the non-Bayesian approach, and some of their performances are evaluated. In the Bayesian framework, the numerical approximation technique utilizing the Metropolis–Hastings algorithm within the Markov chain Monte Carlo is employed under different loss functions, including the squared error loss, general entropy, and LINEX loss. Interval estimation methods, such as asymptotic confidence intervals, log-normal asymptotic confidence intervals, and highest posterior density intervals, are also developed. A comprehensive numerical study using Monte Carlo simulations is conducted to evaluate the performance of the proposed point and interval estimation methods through progressive Type-II censored data. Furthermore, the applicability and effectiveness of the proposed distribution are demonstrated through three real-world datasets from the fields of medicine and engineering. Full article
(This article belongs to the Special Issue Bayesian Statistical Methods for Forecasting)
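The point estimates under the three loss functions named above are read off from MCMC output with closed-form rules; in the sketch below an independent gamma sample stands in for an actual Metropolis–Hastings chain, and the LINEX and general-entropy constants are assumed.

```python
import numpy as np

rng = np.random.default_rng(12)
draws = rng.gamma(shape=3.0, scale=0.5, size=10_000)     # stand-in posterior sample for a positive parameter

sel = draws.mean()                                       # squared-error loss -> posterior mean
c = 1.5                                                  # LINEX asymmetry constant (assumed)
linex = -np.log(np.mean(np.exp(-c * draws))) / c         # LINEX loss -> -(1/c) log E[exp(-c*theta)]
q = 0.5                                                  # general-entropy constant (assumed)
gel = np.mean(draws ** (-q)) ** (-1.0 / q)               # general-entropy loss -> (E[theta^-q])^(-1/q)

print(f"SEL {sel:.3f}  LINEX {linex:.3f}  GEL {gel:.3f}")
```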
