Search Results (2,342)

Search Parameters:
Keywords = maximum likelihood estimator

29 pages, 485 KB  
Article
A Sequential Design for Extreme Quantile Estimation Under Binary Sampling
by Michel Broniatowski and Emilie Miranda
Entropy 2026, 28(4), 479; https://doi.org/10.3390/e28040479 (registering DOI) - 21 Apr 2026
Abstract
We propose a sequential design method aiming at the estimation of an extreme quantile based on a sample of binary data corresponding to peaks over a given threshold. This study is motivated by an industrial challenge in material reliability and consists of estimating a failure quantile from trials whose outcomes are reduced to indicators of whether the specimen has failed at the tested stress levels. The proposed approach relies on a splitting strategy that decomposes the target extreme probability into a product of higher-order conditional probabilities, enabling a progressive exploration of the tail of the distribution through sampling under truncated laws. We consider GEV and Weibull models for the underlying distribution, and the sequential estimation of their parameters is carried out using an enhanced maximum likelihood procedure specifically adapted to binary data, addressing the substantial uncertainty inherent to such limited information. Full article
(This article belongs to the Special Issue Statistical Inference: Theory and Methods)
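The splitting strategy described in the abstract above, which decomposes a small tail probability into a product of larger conditional probabilities over nested thresholds, can be illustrated with a short sketch. The Weibull model and the thresholds below are chosen purely for illustration (they are not the authors' industrial data), and the identity is checked against the exact survival function:

```python
import numpy as np
from scipy import stats

# Illustrative Weibull model; in the actual application the tail is unknown
# and the thresholds are explored sequentially from binary failure data.
dist = stats.weibull_min(c=1.5, scale=1.0)
thresholds = dist.ppf([0.9, 0.99, 0.999])  # nested thresholds t_1 < t_2 < t_3

# Splitting: P(X > t_m) = P(X > t_1) * prod_j P(X > t_{j+1} | X > t_j)
p = dist.sf(thresholds[0])
for lo, hi in zip(thresholds[:-1], thresholds[1:]):
    p *= dist.sf(hi) / dist.sf(lo)  # conditional probability P(X > hi | X > lo)

# The product recovers the extreme probability P(X > t_3) = 0.001
print(np.isclose(p, dist.sf(thresholds[-1])))
```

Each factor in the product is a moderate conditional probability, which is what makes progressive sampling under truncated laws feasible in practice.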
23 pages, 1052 KB  
Article
Technology Analysis of Extended Reality Using Machine Learning and Statistical Models
by Sunghae Jun
Virtual Worlds 2026, 5(2), 19; https://doi.org/10.3390/virtualworlds5020019 - 20 Apr 2026
Abstract
Extended reality (XR), encompassing augmented reality (AR), virtual reality (VR), and mixed reality (MR), is a key enabling technology for virtual worlds, and XR-related patents continue to grow rapidly. However, patent-based XR technology analysis faces a fundamental challenge: document–keyword matrices (DKMs) built from patent titles and abstracts are typically high-dimensional, sparse, and often exhibit excess zeros, which can distort inference when conventional text mining pipelines are applied without a generative count perspective. In this study, we propose a statistically grounded XR technology analysis framework that combines likelihood-based count modeling with interpretable structure mining to map XR sub-technologies from a patent DKM. Using an XR patent–keyword matrix, we fit Poisson regression (PR), negative binomial regression (NBR), and zero-inflated negative binomial regression (ZINBR) models via maximum likelihood estimation (MLE), controlling for document-length effects. Model selection by the Akaike information criterion (AIC) consistently favored NBR for both target keywords, indicating substantial overdispersion in XR patent counts. We interpret exponentiated coefficients as incidence rate ratios (IRRs) and construct a technology relatedness network from significant IRR edges, revealing a dual-axis XR structure: reality is anchored in an AR/VR experience-and-content axis (such as virtual and augment), whereas extend is embedded in a structure-and-integration axis (for example, surface, edge, layer, and connectivity-related terms). To show how the proposed method can be applied in real domains, we retrieved XR patent documents and analyzed them with the proposed framework. Full article
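The AIC-based choice between Poisson and negative binomial count models described above can be sketched in a few lines. The data here are synthetic overdispersed counts standing in for a patent DKM column (not the paper's data), and both models are fit by maximum likelihood:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
# Synthetic overdispersed keyword counts (stand-in for one DKM column)
y = rng.negative_binomial(n=2.0, p=0.4, size=500)

# Poisson MLE: lambda-hat is simply the sample mean (1 free parameter)
lam = y.mean()
aic_pois = 2 * 1 - 2 * stats.poisson.logpmf(y, lam).sum()

# Negative binomial MLE: maximize over (n, p) numerically (2 free parameters)
def nb_nll(theta):
    n, p = theta
    return -stats.nbinom.logpmf(y, n, p).sum()

res = optimize.minimize(nb_nll, x0=[1.0, 0.5],
                        bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
aic_nb = 2 * 2 + 2 * res.fun  # AIC = 2k + 2 * negative log-likelihood

# Overdispersion should make AIC favor the negative binomial model
print(aic_nb < aic_pois)
```

With genuinely overdispersed counts the two extra degrees of freedom of the negative binomial are more than paid for, mirroring the AIC-based selection reported in the abstract.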
17 pages, 1149 KB  
Article
Clinical Characteristics and Outcomes of Malaria Patients in the Aseer Region, Saudi Arabia: A Retrospective Study (2022–2025)
by Fouad Ibrahim Alshehri, Dhaifullah Ahmed Alkhosafi, Essam Abdullah Al Asmari, Abdulrahman Bin Saeed, Anas Mohammed Zarbah, Saeed Ali Algarni, Mohammed Gasim Ahmed, Marim Abdallah Mohamed, Fatma Anter Mady, Saleh Mohammed Zafer Albakri and Ramy Mohamed Ghazy
Trop. Med. Infect. Dis. 2026, 11(4), 108; https://doi.org/10.3390/tropicalmed11040108 - 20 Apr 2026
Abstract
Background: Saudi Arabia has made significant progress toward malaria elimination; however, imported cases continue to occur, particularly in the southwestern regions. This study aimed to describe the clinical characteristics and outcomes of patients with malaria in the Aseer Region, Saudi Arabia. Methods: A retrospective observational study was conducted at Khamis Mushait General Hospital, Aseer Region, Saudi Arabia, including all patients with malaria from January 2022 to December 2025. Demographic, clinical, laboratory, and outcome data were extracted from the electronic medical records. Severe malaria was defined according to the World Health Organization criteria. Multivariate logistic regression using Firth’s penalized maximum likelihood estimation was performed to identify independent predictors of severe malaria (≥1 WHO criterion). Statistical analysis was performed using R software (version 4.2.1). Results: A total of 311 patients were included, predominantly male (90.0%), with a mean age of 28.8 ± 11.3 years. Ethiopian nationals comprised nearly half the cases (48.2%), followed by Saudi (16.4%) and Yemeni (15.1%) nationals. Plasmodium vivax was the most common species (51.1%), followed by Plasmodium falciparum (40.2%). Fever was the most frequent symptom (89.4%), followed by fatigue (50.8%), chills (46.9%), and vomiting (39.5%). Low parasitemia (<1%) was the most frequent finding (33.8%), followed by moderate (27.3%) and mild (18.3%) levels, while high (4.2%) and very high parasitemia (1.9%) were uncommon. Severe malaria (≥1 criterion) was diagnosed in 43.7%, with severe anemia (26.0%) and jaundice (23.2%) being the most frequent WHO severity criteria. Notably, 84% of the cases occurred during 2024–2025, indicating a recent outbreak, with a sharp peak of 43 cases in October 2024.
Multivariate logistic regression identified two independent predictors of having at least one WHO severity criterion: higher parasitemia level (adjusted OR = 1.70 per 1% increase, 95% CI: 1.40–2.11, p < 0.001) and non-Saudi nationality (adjusted OR = 2.40, 95% CI: 1.10–5.62, p = 0.027). Conclusions: Malaria in the Aseer Region predominantly affects young adult male expatriates, suggesting its imported nature. The predominance of P. vivax represents a shift from historical patterns. Parasitemia level and being of non-Saudi nationality independently predict severe malaria and may therefore support risk stratification and clinical decision-making. The dramatic case surge in 2024–2025 highlights regional vulnerability to outbreaks despite control progress. These findings support enhanced screening for at-risk populations, maintenance of clinical capacity for severe malaria management, and robust surveillance systems for early outbreak detection. Full article
(This article belongs to the Special Issue The Global Burden of Malaria and Control Strategies, 2nd Edition)
31 pages, 543 KB  
Article
Frequentist and Bayesian Predictive Inference for the Log-Logistic Distribution Under Progressive Type-II Censoring
by Ziteng Zhang and Wenhao Gui
Entropy 2026, 28(4), 466; https://doi.org/10.3390/e28040466 - 18 Apr 2026
Viewed by 82
Abstract
This paper investigates the prediction of unobserved future failure times for the heavy-tailed Log-Logistic distribution under Progressive Type-II censoring. We first develop point and interval estimates for the unknown parameters using both frequentist maximum likelihood and Bayesian approaches. For predicting future failures, we derive three distinct point predictors: the Best Unbiased Predictor (BUP), the Conditional Median Predictor (CMP), and the Bayesian Predictor (BP). Corresponding prediction intervals are constructed using frequentist pivotal quantities, Bayesian Equal-Tailed Intervals (ETIs), and Highest Posterior Density (HPD) methods. The Bayesian procedures are implemented via Markov chain Monte Carlo (MCMC) sampling. We evaluate the finite-sample performance of the proposed methodologies through a Monte Carlo simulation study and further validate them using two real-world datasets, namely bladder cancer remission times and guinea pig survival times. The numerical results indicate that the proposed BP, particularly under the empirical prior, provides the most accurate and stable overall performance for point prediction, while the frequentist predictors become less reliable in extreme heavy-tailed settings. For interval prediction, the Bayesian HPD method consistently outperforms the alternatives, substantially reducing interval lengths for right-skewed data while maintaining the nominal coverage probability. Full article
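The frequentist maximum likelihood step for the Log-Logistic distribution can be sketched with scipy, where the Log-Logistic is available as the Fisk distribution. The data below are synthetic heavy-tailed lifetimes, not the bladder cancer or guinea pig datasets, and no censoring is applied in this minimal sketch (the paper works under Progressive Type-II censoring):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic heavy-tailed failure times; scipy's "fisk" is the Log-Logistic
t = stats.fisk.rvs(c=2.0, scale=5.0, size=300, random_state=rng)

# Complete-sample maximum likelihood fit; floc=0 pins the location so only
# the shape c and the scale are estimated
c_hat, loc, scale_hat = stats.fisk.fit(t, floc=0)

# The scale parameter of the Log-Logistic is its median
print(round(c_hat, 2), round(scale_hat, 2))
```

Under censoring, the likelihood gains survival-function factors for the removed units, so a custom likelihood rather than `fit` is needed; the fit above only illustrates the complete-data building block.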
29 pages, 437 KB  
Article
Regulatory Fragmentation in Digital Services Trade and Carbon Intensity: Hard and Soft Barriers and the Role of Environmental Policy
by Xuan Liu, Min-Jae Lee and Tae-Hoo Kim
Sustainability 2026, 18(8), 4031; https://doi.org/10.3390/su18084031 - 18 Apr 2026
Viewed by 110
Abstract
This study examines how regulatory heterogeneity in digital services trade relates to the carbon intensity of bilateral trade flows. Using a structural gravity framework estimated with Poisson pseudo maximum likelihood (PPML), we analyzed 10,719 bilateral observations from the Eora Multi-Region Input–Output (MRIO) database over 2014–2020. Bilateral gaps in the OECD Digital Services Trade Restrictiveness Index (DSTRI) were used as the main measure of regulatory heterogeneity, and the overall gap was decomposed into infrastructure-related hard barriers and institutional soft barriers. The results suggest that digital regulatory gaps are associated with a higher carbon intensity in trade while also being associated with lower total embodied emissions through reduced trade volumes. This indicates that lower aggregate emissions under regulatory divergence may reflect contraction in trade activity rather than genuine environmental improvement. The decomposition analysis further suggests that infrastructure-related misalignment is more closely associated with carbon inefficiency, whereas institutional divergence operates mainly through its association with trade volume. In addition, environmental policy stringency in the importing country appears to strengthen the positive association between institutional regulatory gaps and carbon intensity, consistent with the possibility of regulatory overload. The study contributes to the sustainability literature by showing that carbon intensity provides a more informative indicator of sustainable trade performance than aggregate emissions alone in fragmented regulatory environments. It also suggests that digital governance, trade policy, and environmental policy should be considered together in promoting more sustainable forms of international trade, particularly in the context of emerging policy frameworks such as WTO digital trade negotiations, OECD digital governance initiatives, and carbon border adjustment mechanisms (CBAMs). 
Full article
(This article belongs to the Special Issue Knowledge Management and Digital Transformation in Sustainability)
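The Poisson pseudo-maximum likelihood (PPML) estimator used in the gravity analysis above can be sketched as an iteratively reweighted least squares loop. The design matrix below is a synthetic stand-in for gravity covariates (log GDPs, log distance, a DSTRI-gap proxy), not the Eora MRIO data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic gravity-style data: intercept plus three covariates
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
beta_true = np.array([1.0, 0.8, -0.5, -0.2])
y = rng.poisson(np.exp(X @ beta_true)).astype(float)

# PPML via IRLS for a Poisson model with log link; the same estimator is
# consistent for nonnegative continuous trade flows, which is why it is
# standard in structural gravity work
beta = np.zeros(X.shape[1])
for _ in range(50):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu          # working response
    W = mu                                 # Poisson working weights
    beta_new = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new

print(np.round(beta, 2))
```

In the paper's setting the regression additionally carries exporter, importer, and year fixed effects; those enter here simply as extra dummy columns of X.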
45 pages, 4863 KB  
Article
A Novel Version of the Arcsine–Rayleigh Distribution with Entropy Measures, Statistical Inference, and Applications
by Asmaa S. Al-Moisheer, Khalaf S. Sultan, Moustafa N. Mousa and Mahmoud M. M. Mansour
Entropy 2026, 28(4), 464; https://doi.org/10.3390/e28040464 - 17 Apr 2026
Viewed by 117
Abstract
This paper presents a new distribution on the unit interval, named the Unit Arcsine–Rayleigh distribution (UASRD), which results from an exponential transformation of the Arcsine–Rayleigh distribution. The suggested model is versatile and can be used in modeling bounded reliability and proportion data. Entropy-based measures are also studied to quantify the uncertainty and information content of the proposed model, further explaining its probabilistic nature and its potential applicability in information-theoretic and reliability tasks. These findings demonstrate the utility of the suggested model for studying bounded data in the context of information theory. Basic statistical characteristics are derived, such as the cumulative and density functions, the quantile function, reliability and hazard functions, and ordinary moments. Parameter estimation is carried out via maximum likelihood, maximum product spacing, and Bayesian approaches. The performance of the estimators is assessed by a Monte Carlo simulation study, and a real-data application shows the utility of the proposed model for the analysis of bounded data. Full article
15 pages, 528 KB  
Article
Exploring the New Exponentiated Harris-G Family of Distributions and Its Applications
by Wellington F. Charumbira, Hisham M. Almongy, Fastel Chipepa and Mavis Pararai
Symmetry 2026, 18(4), 673; https://doi.org/10.3390/sym18040673 - 17 Apr 2026
Viewed by 82
Abstract
This paper introduces a new family of distributions called exponentiated Harris-G. The new family is a weighted version of the well-established exponentiated-G distributions, which allows for easy derivation of statistical properties from the exponentiated-G distribution. Several statistical properties of the new model were derived. The paper considered different parameter estimation techniques, and maximum likelihood estimation emerged as the best technique, as evaluated via Monte Carlo simulation studies of the proposed family. Estimation techniques were ranked based on the lowest values of the root mean square error and average bias. The proposed model showed enhanced flexibility in data modeling when compared to selected competing models, as demonstrated through the application of a special case to two real-world datasets. Full article
(This article belongs to the Section Mathematics)
33 pages, 2945 KB  
Article
Modeling Headway Distribution by Lane and Vehicle Type for Expressways Using UAV Data
by Changxing Li, Yihui Shang, Tian Li, Shuqi Liu, Lingxiang Wei and Junfeng An
Sustainability 2026, 18(8), 4003; https://doi.org/10.3390/su18084003 - 17 Apr 2026
Viewed by 102
Abstract
Time headway is a key parameter for describing car-following behavior and microscopic traffic flow characteristics, and it is important for traffic safety analysis, road design, and optimizing intelligent-driving strategies. Existing research offers limited insight into the heterogeneity of time headway under different vehicle types and lane conditions. It is particularly important to investigate how time headway distributions differ across lane–vehicle-type combinations on highways, as these differences can affect safety evaluation and operational performance. This study is based on drone-captured vehicle trajectories from the publicly available HighD dataset. We select 378,751 vehicle–frame trajectory records; these records are used to construct valid follower–leader pairs and derive time headway (THW) samples for distribution fitting. Eight subsets are formed by combining two lane positions (inner vs. outer) and four follower–leader vehicle-type pairs (car–car, car–truck, truck–car, truck–truck). Six candidate distributions (Lognormal, Log-logistic, Burr, Weibull, Gamma, and Logistic) are fitted using maximum likelihood estimation, and their fit is evaluated using Kolmogorov–Smirnov, Anderson–Darling, and Chi-square tests, which are fused via an entropy-weighted composite score for model ranking. Results show pronounced heterogeneity across lane–vehicle-type subsets: Inner-lane samples exhibit smaller and more concentrated time gaps, whereas outer-lane samples show larger mean gaps, stronger dispersion, and heavier upper tails. Overall, Lognormal(3P) is selected as the top-ranked model in 5 of 8 subsets (62.5%), while Burr(4P) (car–truck, outer lane), Gamma(3P) (truck–car, outer lane), and Weibull(3P) (truck–truck, inner lane) are optimal in the remaining subsets. 
These findings indicate that lane position and vehicle-type pairing materially affect THW distributional characteristics, providing quantitative guidance for lane- and vehicle-aware traffic modeling, safety-oriented assessment, and intelligent-driving strategy design. Full article
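The fit-and-rank procedure above (MLE fits of several candidate families, scored by goodness-of-fit statistics) can be sketched compactly. The sample below is a synthetic lognormal headway sample, not the HighD trajectories, and only three of the paper's six candidate families and one of its three test statistics are used:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic time-headway sample in seconds; lognormal is a common THW model
thw = stats.lognorm.rvs(s=0.6, scale=2.0, size=1000, random_state=rng)

candidates = {name: getattr(stats, name)
              for name in ("lognorm", "gamma", "weibull_min")}

# Fit each candidate by maximum likelihood (location pinned at 0) and score
# it with the Kolmogorov-Smirnov distance to the fitted CDF
scores = {}
for name, dist in candidates.items():
    params = dist.fit(thw, floc=0)
    scores[name] = stats.kstest(thw, dist.cdf, args=params).statistic

best = min(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The paper additionally fuses the Kolmogorov–Smirnov, Anderson–Darling, and Chi-square statistics via entropy weights before ranking; the single-statistic ranking here is the simplest version of that idea.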
17 pages, 592 KB  
Article
Modelling Extreme Losses in JSE Life Insurance Price Index Growth Rates Using the Generalised Extreme Value Distribution (GEVD) and the Generalised Pareto Distribution (GPD)
by Delson Chikobvu, Tendai Makoni and Frans Frederik Koning
Data 2026, 11(4), 86; https://doi.org/10.3390/data11040086 - 16 Apr 2026
Viewed by 194
Abstract
The life insurance sector plays a critical role in financial system stability but is inherently exposed to extreme market fluctuations due to long-term liabilities and asset–liability mismatches. This study investigates extreme losses in the growth rates of the JSE Life Insurance Price Index (LIPI) using the Generalised Extreme Value Distribution (GEVD) and the Generalised Pareto Distribution (GPD) under the Extreme Value Theory (EVT) framework. Monthly data from January 2000 to October 2023 were transformed into a loss series, and extreme events were captured using quarterly block maxima and a POT threshold at the 95th percentile. Model parameters were estimated through Maximum Likelihood Estimation, and downside risk was assessed using return levels, Value-at-Risk (VaR), and Tail Value-at-Risk (tVaR). The GEVD model produced a negative shape parameter, consistent with a bounded Weibull-type tail, while the GPD indicated a heavy-tailed distribution. Return level estimates show escalating loss magnitudes and widening uncertainty over longer horizons, reflecting the challenges of projecting rare events. Kupiec backtesting confirms the adequacy and reliability of the GEVD-based VaR across all confidence levels, whereas the GPD underestimates risk at lower thresholds. These findings indicate significant tail risk within the South African life insurance equity segment and underscore the importance of EVT-based risk measures for capital planning and regulatory oversight. The study contributes to financial risk modelling in the life insurance sector and offers practical insights for strengthening solvency assessment and enterprise risk management frameworks. Full article
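The GEVD fit, return levels, and VaR described above can be sketched with scipy. The loss series below is synthetic (not the JSE LIPI data), and for brevity every observation is treated as a block maximum, whereas the study uses quarterly block maxima; note also that scipy's shape parameter `c` is the negative of the usual GEV shape ξ, so `c > 0` corresponds to the bounded Weibull-type tail the paper reports:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Synthetic monthly loss series (percent growth-rate losses)
losses = stats.genextreme.rvs(c=0.1, loc=2.0, scale=1.0, size=286,
                              random_state=rng)

# GEV fit by maximum likelihood
c_hat, loc_hat, scale_hat = stats.genextreme.fit(losses)

# m-block return level = (1 - 1/m) quantile of the fitted GEV
m = 40
return_level = stats.genextreme.ppf(1 - 1 / m, c_hat, loc_hat, scale_hat)

# 99% Value-at-Risk from the same fitted distribution
var_99 = stats.genextreme.ppf(0.99, c_hat, loc_hat, scale_hat)
print(return_level < var_99)
```

Because the fitted GEV quantile function is increasing, the 40-block return level (the 97.5% quantile) always sits below the 99% VaR; backtesting such as the Kupiec test then checks whether exceedances of the VaR occur at the nominal rate.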
30 pages, 1611 KB  
Article
Reliability Assessment of Harmonic Reducers Based on the Two-Phase Hybrid Stochastic Degradation Process
by Lai Wei, Peng Liu, Hailong Tian, Haoyuan Li and Yunshenghao Qiu
Sensors 2026, 26(8), 2437; https://doi.org/10.3390/s26082437 - 15 Apr 2026
Viewed by 274
Abstract
Harmonic reducers exhibit non-stationary and phase-dependent degradation behavior during long-term service, challenging the ability of classical stochastic degradation models to accurately assess reliability. To address phase-dependent differences in degradation behavior, this paper proposes a reliability assessment model based on a two-phase hybrid stochastic degradation process. In the proposed framework, the Wiener process is employed to characterize early-phase gradual degradation dominated by stochastic fluctuations, while the Inverse Gaussian process is used to describe later-phase monotonically accelerated degradation driven by cumulative damage. The framework allows for sample-level variability in transition times to more realistically capture individual degradation behavior. The Schwarz Information Criterion is also adopted to detect change points. Maximum likelihood estimation is performed for model parameter inference, and analytical expressions for the reliability function, cumulative distribution function, and probability density function are derived. Numerical results indicate that a change point exists for each tested product and that the proposed model achieves the best goodness of fit among the considered candidates, demonstrating its superiority in capturing phase-dependent characteristics of harmonic reducer degradation. In terms of reliability assessment bias, the proposed model (0.06%) significantly outperforms the Wiener degradation model (32%) and the IG degradation model (9.9%). These results further confirm that, under an identical failure threshold, the proposed approach yields more accurate and realistic reliability assessment outcomes. Full article
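The early-phase Wiener component of the two-phase model above admits closed-form maximum likelihood estimates from equally spaced degradation increments. This sketch covers only that first phase on synthetic data; the Inverse Gaussian second phase and the SIC change-point detection are not shown:

```python
import numpy as np

rng = np.random.default_rng(8)
# Wiener degradation model X(t) = mu*t + sigma*B(t): increments over a step
# dt are i.i.d. Normal(mu*dt, sigma^2*dt)
mu_true, sigma_true, dt, n = 0.5, 0.2, 1.0, 2000
inc = rng.normal(mu_true * dt, sigma_true * np.sqrt(dt), size=n)

# Closed-form MLEs from the increment sample
mu_hat = inc.mean() / dt
sigma_hat = np.sqrt(((inc - mu_hat * dt) ** 2).mean() / dt)

print(round(mu_hat, 2), round(sigma_hat, 2))
```

In the two-phase setting the same estimates are computed separately on the increments before and after each unit's detected change point, which is what lets transition times vary across samples.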
39 pages, 533 KB  
Article
A Novel Extension of the Weibull Distribution with Application in Quantitative and Reliability Sciences
by Shoaib Iqbal, Bassant Elkalzah, Zawar Hussain and Farrukh Jamal
Symmetry 2026, 18(4), 659; https://doi.org/10.3390/sym18040659 - 15 Apr 2026
Viewed by 144
Abstract
The main focus of this paper is to introduce a new probability model. Specifically, this paper presents a modified form of the Weibull distribution and investigates its various statistical properties, such as moments, moment-generating functions, reliability functions, quantile functions, and inequality measures such as Bonferroni and Lorenz curves. It also investigates the mean absolute deviation and entropy. Distributions of order statistics, reversed order statistics, and upper record values are also obtained. Additionally, univariate and bivariate moment structures are considered. The model parameters are estimated via the maximum likelihood method under simple random sampling and ranked set sampling, allowing an empirical evaluation of efficiency and reliability. Graphical representations exhibit the flexibility of the model, capturing various shapes in the probability density and hazard rate functions. To measure the practical quality of the model, actuarial metrics are used. A comparative analysis based on insurance, biomedical, and reliability datasets demonstrates the empirically improved performance and stability of the proposed new model for these specific datasets. Full article
(This article belongs to the Section Mathematics)
87 pages, 1849 KB  
Article
Statistical Inference for Drift Parameters in Gaussian White Noise Models Driven by Caputo Fractional Dynamics Under Discrete Observation Schemes
by Abdelmalik Keddi and Salim Bouzebda
Symmetry 2026, 18(4), 655; https://doi.org/10.3390/sym18040655 - 14 Apr 2026
Viewed by 156
Abstract
This paper develops a rigorous inferential framework for a class of Gaussian stochastic processes driven by white noise with constant drift, whose temporal evolution is governed by a Caputo fractional derivative of order α ∈ (1/2, 1). The model belongs to the family of fractional Volterra processes, where memory is generated by the dynamics themselves rather than by correlated noise. We derive explicit analytical expressions for the mean, variance, and covariance structure of the solution, thereby characterizing in a precise manner how the fractional order α governs both variance growth and the strength of temporal dependence. In particular, the process exhibits correlated increments and a power-law variance scaling of order t^(2α−1), highlighting the dual role of α as a regularity and memory parameter. Building on this structural analysis, we address the statistical problem of estimating the parameter vector (μ, σ, α) from discrete-time observations. Two complementary procedures are proposed for the estimation of the fractional order: a variance-growth method based on log–log regression of empirical variances, and a wavelet-based estimator exploiting multi-scale scaling properties of the process. For the drift and diffusion parameters (μ, σ), we construct explicit Gaussian pseudo-maximum likelihood estimators derived from the Volterra covariance structure of the increment process. We establish unbiasedness, L²-convergence, strong consistency, and asymptotic normality for all estimators. Furthermore, we derive Berry–Esseen type bounds that quantify the rate of convergence toward the Gaussian law, providing sharp distributional approximations in a genuinely fractional and non-Markovian setting. A Monte Carlo study is carried out, using high-resolution Volterra discretizations, large-scale simulation budgets, covariance-structured linear algebra, and multi-scale diagnostic tools.
The numerical experiments confirm the theoretical convergence rates, demonstrate the finite-sample reliability of the estimators, and illustrate the sensitivity of the process dynamics to the fractional order α: smaller values of α produce stronger memory effects and higher variability, while values closer to one lead to smoother and more stable trajectories. The proposed methodology unifies statistical inference for long-memory Gaussian processes with fractional differential stochastic dynamics, offering a coherent analytical and computational framework applicable in areas such as quantitative finance, anomalous diffusion in physics, hydrology, and engineering systems with hereditary effects. Full article
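The variance-growth estimator of the fractional order described above reduces to a log–log regression: since Var[X_t] ∝ t^(2α−1), the slope of log-variance against log-time recovers α as (slope + 1)/2. This sketch applies the regression step to a synthetic variance curve with multiplicative noise (standing in for empirical variances from replicated trajectories), rather than simulating the Volterra process itself:

```python
import numpy as np

rng = np.random.default_rng(5)
alpha_true = 0.75
t = np.linspace(0.1, 10.0, 200)

# Hypothetical empirical variances obeying Var[X_t] ~ C * t^(2*alpha - 1),
# perturbed by multiplicative estimation noise
var_emp = t ** (2 * alpha_true - 1) * np.exp(rng.normal(0, 0.05, size=t.size))

# Log-log regression: slope = 2*alpha - 1, so alpha = (slope + 1) / 2
slope, intercept = np.polyfit(np.log(t), np.log(var_emp), 1)
alpha_hat = (slope + 1) / 2

print(round(alpha_hat, 2))
```

In the paper the empirical variances come from Monte Carlo replications of high-resolution Volterra discretizations; the regression step itself is exactly as above.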
31 pages, 2008 KB  
Article
Different Classical and Bayesian Methods of Estimation of the Power Log-Logistic Distribution with Applications
by Indranil Ghosh, Devendra Kumar and Subhankar Dutta
Axioms 2026, 15(4), 285; https://doi.org/10.3390/axioms15040285 - 14 Apr 2026
Viewed by 213
Abstract
A new approach to constructing a univariate absolutely continuous probability model via a power transformation is adopted. The proposed distribution appears to subsume several popular univariate continuous probability models already existing in the literature. In addition, the hazard rate function of the proposed distribution appears to possess all possible shapes (increasing, decreasing, and upside-down), which is not the case for all existing extensions and generalizations of the log-logistic (LL) distribution. Various established estimation strategies under the classical approach, such as maximum product spacing and weighted least squares estimation, are adopted to exhibit the flexibility of the proposed probability distribution. Emphasis is placed on the estimation procedure, and the new probability model augments the existing literature in many aspects, which are highlighted in the simulation section and in the real-data application section. Full article
26 pages, 1868 KB  
Article
Estimation of the Half-Logistic Inverse Rayleigh Distribution Parameters via Ranked Set Sampling: Methods and Applications
by Amer Ibrahim Al-Omari, Sid Ahmed Benchiha and Ghadah Alomani
Mathematics 2026, 14(8), 1281; https://doi.org/10.3390/math14081281 - 12 Apr 2026
Viewed by 186
Abstract
This study investigates a range of parameter estimation methods for the Half-Logistic Inverse Rayleigh Distribution (HLIRD) under two distinct sampling frameworks: ranked set sampling (RSS) and simple random sampling (SRS). The estimation techniques considered include maximum likelihood estimation, ordinary and weighted least squares, and the maximum and minimum product of spacings methods. Model adequacy is evaluated using five goodness-of-fit criteria: the Anderson–Darling (AD) statistic, its right- and left-tail variants, the second-order left-tail AD statistic, and the Cramér–von Mises statistic. An extensive simulation study is conducted to thoroughly evaluate and compare the performance of the proposed estimators while maintaining a fixed total number of observations across both sampling schemes. The practical relevance of the proposed methods is further illustrated through an application to a real dataset consisting of 69 carbon fiber specimens, with tensile strength measurements (in GPa) recorded at a gauge length of 20 mm. The numerical results demonstrate that estimators based on RSS consistently outperform their SRS counterparts across all considered performance measures, including mean squared error, bias, and mean absolute relative error. Overall, the findings highlight the advantages of employing RSS for parameter estimation of the HLIRD, particularly due to its superior efficiency in small-sample scenarios. Full article
(This article belongs to the Section D1: Probability and Statistics)
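The ranked set sampling (RSS) scheme underlying the comparison above is easy to sketch: to obtain one RSS of size k, draw k independent sets of k units, rank each set, and keep the i-th order statistic of the i-th set. On a skewed population (a synthetic exponential here, not the carbon fiber data), the RSS sample mean is typically less variable than the simple random sampling (SRS) mean at the same number of measured units:

```python
import numpy as np

rng = np.random.default_rng(6)

def rss_sample(draw, k):
    """One ranked set sample of size k: from each of k sets of k units,
    keep the i-th order statistic of the i-th set (perfect ranking)."""
    sets = np.sort(draw(size=(k, k)), axis=1)
    return sets[np.arange(k), np.arange(k)]

# Hypothetical skewed population
draw = lambda size: rng.exponential(2.0, size=size)
k, reps = 5, 4000

rss_means = np.array([rss_sample(draw, k).mean() for _ in range(reps)])
srs_means = np.array([draw(size=k).mean() for _ in range(reps)])

# Under perfect ranking, RSS means have smaller variance than SRS means
print(rss_means.var() < srs_means.var())
```

This variance advantage is the mechanism behind the paper's finding that RSS-based estimators outperform their SRS counterparts, especially in small samples; imperfect ranking in practice shrinks but does not reverse the gain.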
17 pages, 329 KB  
Article
The New Polynomial Single Parameter Distribution: Properties, Bayesian and Non-Bayesian Inference with Real-Data Applications
by Meriem Keddali, Hamida Talhi, Mohammed Amine Meraou and Ali Slimani
AppliedMath 2026, 6(4), 60; https://doi.org/10.3390/appliedmath6040060 - 10 Apr 2026
Viewed by 199
Abstract
A novel flexible single-parameter polynomial distribution is presented in this study. The forms of the hazard rate and density functions are examined. Additionally, exact formulas for a number of numerical characteristics of the distribution are obtained, and the extreme order statistics are derived. Stochastic ordering, the moment technique, maximum likelihood, and a Bayesian analysis of this novel distribution based on type-II censored data are considered for inference. We construct Bayes estimators and the associated posterior risks using a variety of loss functions, such as the generalized quadratic, entropy, and Linex functions. Since tractable analytical formulations of these estimators are unattainable, we suggest using a simulation technique based on Markov chain Monte Carlo (MCMC) to examine their performance. Furthermore, we construct maximum likelihood estimators given initial values for the model’s parameters. Additionally, we use integrated mean square error and Pitman’s proximity criteria to compare their performance with that of the Bayesian estimators. Lastly, we apply the new family to many real-world datasets to show its versatility, and we model cancer survival data using this new distribution to explain our methodology. Full article
(This article belongs to the Special Issue Large Language Models and Applications)