Different Classical and Bayesian Methods of Estimation of the Power Log-Logistic Distribution with Applications

1 Department of Mathematics and Statistics, University of North Carolina, Wilmington, NC 28403, USA
2 Department of Statistics, Faculty of Mathematical Sciences, University of Delhi, Delhi 110021, India
3 Department of Mathematics, Bioinformatics, and Computer Applications, Maulana Azad National Institute of Technology, Bhopal 462003, India
* Author to whom correspondence should be addressed.
Axioms 2026, 15(4), 285; https://doi.org/10.3390/axioms15040285
Submission received: 15 October 2025 / Revised: 19 February 2026 / Accepted: 2 March 2026 / Published: 14 April 2026

Abstract

A new approach to constructing a univariate absolutely continuous probability model via a power transformation is adopted. The proposed distribution subsumes several popular univariate continuous probability models that already exist in the literature. In addition, the hazard rate function of the proposed distribution can assume all possible shapes (increasing, decreasing, and upside-down), a property not shared by the existing extensions and generalizations of the log-logistic (LL) distribution. Various established classical estimation strategies, such as maximum product spacing and weighted least squares estimation, are adopted to exhibit the flexibility of the proposed probability distribution. The emphasis is on the estimation procedure, and the new probability model augments the existing literature in several respects, which are highlighted in the simulation study and in the real data application section.

1. Introduction

The literature on statistical distribution theory, especially in the area of univariate distributions (both discrete and continuous), contains a vast collection of absolutely continuous univariate distributions; for example, see [1]. New univariate distributions appear almost daily, a prominent reason being that the existing distributions cannot adequately fit a given dataset. In many real-world scenarios, including but not limited to survival data analysis, environmetrics, economics, and medicine, there is a clear need for extended forms of classical probability models. Several techniques have been advocated in the literature for developing new probability distributions starting from classical, well-known ones. The need for skewed distributions arises in almost every branch of the physical, biological, engineering, and biomedical sciences, because data are likely to arise from asymmetrical populations. One of the most popular skewed distributions is the two-parameter log-logistic (LL) distribution, which is heavily used for analyzing skewed data. The LL density may be either reversed-J shaped or unimodal, and its hazard function is either decreasing or inverted bathtub shaped; for pertinent details, see [1]. In the past two decades, numerous extensions (alias generalizations) of the LL distribution have been advocated from the practitioner's point of view to model observed phenomena where the usual LL distribution fails to provide an adequate fit. In general, the LL distribution and its generalizations are utilized for survival and reliability data analysis, with a particular focus on data arising from the medical domain.
Consequently, given the growing interest in applications and methodology, we believe it is appropriate to offer a brief survey of the log-logistic distribution and its generalizations, which also provides some justification for the probability model newly developed in this paper. The literature on extensions of the LL distribution is well established: modeling AIDS and melanoma data [2]; utilization as a minification process [3]; modeling breast cancer data [4]; modeling censored survival data [5]; modeling time up to first calving of cows [6]; modeling, inference, and application to time-up-to-first-calving data for the polled Tabapua race [7]; modeling censored survival data [8]; modeling breaking stress data [9]; and evaluating right-censored data [10] are worth mentioning. In recent times, there has been continued interest in extending the LL distribution by various well-established techniques and studying the resulting models empirically and/or mathematically; for example, see [11,12,13,14]. On the Bayesian estimation of the LL distribution and its extensions, with applications to meta-analysis, see the works of [15,16,17] and the references cited therein.
Note that the choice of a judicious stochastic model is a crucial factor in data analysis, as the efficacy of data modeling is directly related to the quality of the fitted distribution for a particular dataset, albeit at some computational cost. We came across several real-life scenarios where the classical LL distribution and several of its generalizations in the literature did not provide an adequate fit. This motivates us to introduce a new extension of the LL, called the power log-logistic (PLL) distribution, which offers more flexibility. In addition, the PLL provides a reasonably good fit in several examples where the LL and some of its extensions do not; such examples are given later in this article, in the real data application section. The motivations for introducing and studying this model are as follows: (i) to provide a flexible extension of the LL distribution; (ii) to accommodate different forms of risk functions, namely decreasing and upside-down failure rate functions, the latter occurring in modeling mortality from cancer; (iii) to provide great flexibility for modeling heavy-tailed data; and (iv) to extend the considered distribution to accommodate randomly right-censored data, since the c.d.f. and the h.r.f. of the PLL distribution have closed forms. In addition, the hazard rate function of the proposed distribution can assume all possible shapes (increasing, decreasing, and upside-down), which is not shared by all existing extensions and generalizations of the LL distribution.
The remainder of the paper is organized as follows. In Section 2, we provide the useful preliminaries leading to the PLL distribution development along with discussion on the p.d.f., h.r.f. shape(s), and several other useful structural properties. In Section 3, we discuss the estimation of the parameters of the PLL distribution under the frequentist approach, such as the method of maximum likelihood, method of maximum product spacing distance estimators, and ordinary and weighted least square estimators. Section 4 discusses the estimation of the model parameters under the Bayesian paradigm. In Section 5, a small simulation study is carried out to exhibit the efficiency and relative performance of the various estimation strategies discussed in Section 3 and Section 4. One real-world data set is re-analyzed to exhibit the efficacy of the proposed distribution in Section 6. Finally, some concluding remarks are presented in Section 7.

2. Model Description and Statistical Properties

The two-parameter log-logistic distribution is defined by the c.d.f.
F(y) = \frac{(y/\alpha)^{1/\beta}}{1 + (y/\alpha)^{1/\beta}}; \quad y > 0, \; \alpha, \beta > 0.
In this paper, we present a three-parameter extension of the log-logistic distribution. We use the power transformation X = Y^{1/\delta} to add one more parameter. The c.d.f. of the three-parameter power log-logistic (PLL) distribution is given by
F(x) = \frac{(x^{\delta}/\alpha)^{1/\beta}}{1 + (x^{\delta}/\alpha)^{1/\beta}}; \quad x > 0, \; \alpha, \beta, \delta > 0,
and the associated p.d.f., hazard rate function (h.r.f.), and quantile function are, respectively,
f(x) = \frac{\delta\, x^{\delta-1} (x^{\delta}/\alpha)^{1/\beta - 1}}{\alpha\beta \left[1 + (x^{\delta}/\alpha)^{1/\beta}\right]^{2}}; \quad x > 0, \; \alpha, \beta, \delta > 0,
and
h(x) = \frac{\delta\, x^{\delta-1} (x^{\delta}/\alpha)^{1/\beta - 1}}{\alpha\beta \left[1 + (x^{\delta}/\alpha)^{1/\beta}\right]}; \quad x > 0, \; \alpha, \beta, \delta > 0.
Q_p = \left[\alpha \left(\frac{p}{1-p}\right)^{\beta}\right]^{1/\delta},
where p \in (0, 1).
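Because both the c.d.f. and the quantile function have closed forms, the PLL distribution is easy to evaluate and to simulate by inverse transform. A minimal sketch (Python; the function names are ours, not from the paper):

```python
import numpy as np

def pll_cdf(x, alpha, beta, delta):
    """PLL c.d.f.: u/(1+u) with u = (x^delta/alpha)^(1/beta)."""
    u = (x ** delta / alpha) ** (1.0 / beta)
    return u / (1.0 + u)

def pll_quantile(p, alpha, beta, delta):
    """Closed-form quantile: Q_p = [alpha * (p/(1-p))^beta]^(1/delta)."""
    return (alpha * (p / (1.0 - p)) ** beta) ** (1.0 / delta)

# Inverse-transform sampling: if U ~ Uniform(0,1), then Q_U follows the PLL law.
rng = np.random.default_rng(42)
x = pll_quantile(rng.uniform(size=10_000), alpha=2.0, beta=0.5, delta=1.5)
```

The round trip F(Q_p) = p confirms that the two closed forms are mutual inverses.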
Graphical representation of p.d.f. and the hazard rate function of Power Log-Logistic Distribution.
From Figure 1, we can see that the PLL distribution can assume various shapes such as unimodal, reversed J shaped and right-skewed.
Figure 2 displays a variety of possible shapes of hazard rate function of PLL distribution for some selected values of parameters δ , β and α . It can be deduced from Figure 2 that the shape of the hazard function of the PLL distribution could be decreasing and upside-down depending on the value of the parameters.
Next, we provide several characterization results for the PLL distribution, collected in the following lemma.
Lemma 1. 
The following results hold for the three-parameter PLL distribution with parameters δ, α, β given in the density Equation (2).
  • If δ = 1 and β* = 1/β, then the PLL distribution reduces to a two-parameter Fisk distribution with parameters α, β*.
  • PLL(δ = 1, α, β⁻¹) ≡ Singh–Maddala(1, α, β).
  • If Y ∼ PLL(δ = 1, α, β⁻¹), then kY ∼ PLL(δ = 1, kα, β⁻¹), where k > 0 is a constant.
  • PLL(δ = 1, α, β⁻¹) ≡ Dagum(1, α, β⁻¹).
  • Again, PLL(δ = 1, α, β⁻¹) ≡ Beta-prime(1, 1, α, β⁻¹).
The next result gives the limiting behavior of the PLL p.d.f. and h.r.f. The associated proofs are simple and thus omitted.
Lemma 2. 
The limits of the PLL(δ, α, β) density and hazard functions as x \to \infty are 0, and the limits as x \to 0^{+} are given by
\lim_{x \to 0^{+}} f(x) = \begin{cases} 0, & \text{if } \delta > \beta, \\ \dfrac{\delta}{\beta\, \alpha^{1/\beta}}, & \text{if } \delta = \beta, \\ \infty, & \text{if } \delta < \beta. \end{cases}
\lim_{x \to 0^{+}} h(x) \propto x^{\delta/\beta - 1}.
Next,
\text{if } \delta/\beta > 1, \; h(x) \to 0; \qquad \text{if } \delta/\beta = 1, \; h(x) \to \text{constant}; \qquad \text{if } \delta/\beta < 1, \; h(x) \to \infty \quad (x \to 0^{+}).
Lemma 3. 
The mode of the p.d.f. of the PLL(δ, α, β) family is a root of
\frac{1}{x}\left(\frac{\delta}{\beta} - 1\right) - \frac{2\delta}{\beta x} \cdot \frac{(x^{\delta}/\alpha)^{1/\beta}}{1 + (x^{\delta}/\alpha)^{1/\beta}} = 0.
In general, the mode has to be computed numerically, for example, using uniroot in the R software, version 4.5.3.
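The paper computes the mode with uniroot in R; the equivalent numerical step can be sketched as follows (Python with SciPy as a stand-in, our choice), maximizing the log-density over a bracketing interval:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pll_logpdf(x, alpha, beta, delta):
    """Log of the PLL density in Equation (2)."""
    u = (x ** delta / alpha) ** (1.0 / beta)
    return (np.log(delta) - np.log(alpha * beta)
            + (1.0 / beta - 1.0) * np.log(x ** delta / alpha)
            + (delta - 1.0) * np.log(x)
            - 2.0 * np.log1p(u))

# With delta > beta the density has an interior mode; bracket it and maximize numerically.
alpha, beta, delta = 2.0, 0.5, 2.0
res = minimize_scalar(lambda t: -pll_logpdf(t, alpha, beta, delta),
                      bounds=(1e-6, 50.0), method="bounded")
mode = res.x
```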

2.1. Log-Convexity

Theorem 1. 
The PLL density in Equation (1) is log-convex for all α > 0 whenever δ < β; it fails to be log-convex when δ > β.
Proof. 
From the density in Equation (1), we have
\log f(x) = \text{Constant} + \left(\frac{1}{\beta} - 1\right) \log\!\left(\frac{x^{\delta}}{\alpha}\right) + (\delta - 1) \log x - 2 \log\!\left[1 + \left(\frac{x^{\delta}}{\alpha}\right)^{1/\beta}\right].
Consequently, on taking partial derivative of order 2 w.r.t. x , we get
\frac{\partial^{2} \log f(x)}{\partial x^{2}} = \frac{\beta(\beta - \delta) + 2(\beta^{2} - \delta^{2})\, u + \beta(\beta + \delta)\, u^{2}}{\beta^{2} x^{2} (1 + u)^{2}}, \qquad u = \left(\frac{x^{\delta}}{\alpha}\right)^{1/\beta}.
Observe that the right-hand side of Equation (7) is positive for all α > 0 and δ < β, for all 0 < x < ∞. Consequently, the following logical implications hold for this extension of the log-logistic distribution.
  • f ( x ) is log-convex → S ( x ) is log-convex,
  • f ( x ) is closed under convolution.
  • f ( x ) is closed under scaling transformations.
  • the density is unimodal and monotone.
Remark 1. 
The increasing hazard rate is useful in studies related to reliability theory, survival analysis, queuing theory among others.
Since the quantile function of the PLL is in closed form, we can alternatively define measures of skewness and kurtosis based on the quantile function. Galton's skewness S [18] and Moors' kurtosis K [19] are given by
S = \frac{Q(6/8) - 2\,Q(4/8) + Q(2/8)}{Q(6/8) - Q(2/8)}.
K = \frac{Q(7/8) - Q(5/8) + Q(3/8) - Q(1/8)}{Q(6/8) - Q(2/8)}.
Observe that when a distribution is symmetric, S = 0, and when the distribution is right (left) skewed, S > 0 (S < 0). As K increases, the tail of the distribution becomes heavier. Moreover, by substituting values such as 1/8, 2/8, …, 7/8 into Equation (3), one can obtain exact expressions for the quantile-based skewness and kurtosis of this distribution.
By substituting p = 6/8, 4/8, 2/8 into Equation (4), the quantile-based skewness of the three-parameter PLL distribution reduces to
S = \frac{3^{\beta/\delta} - 1}{3^{\beta/\delta} + 1}.
Observe that 0 < S < 1 for any choices of β and δ. However, if β ≪ δ, i.e., β is negligibly small compared with δ, then S → 0, i.e., the distribution is nearly symmetric in such a scenario.
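The quantile-based measures above are straightforward to evaluate; the sketch below (Python, ours) computes Galton's S and Moors' K from the octiles and checks the closed form of S numerically:

```python
def pll_quantile(p, alpha, beta, delta):
    """Q_p = [alpha * (p/(1-p))^beta]^(1/delta)."""
    return (alpha * (p / (1.0 - p)) ** beta) ** (1.0 / delta)

def galton_s(alpha, beta, delta):
    """Galton skewness from the octiles of the PLL quantile function."""
    Q = lambda p: pll_quantile(p, alpha, beta, delta)
    return (Q(6/8) - 2.0 * Q(4/8) + Q(2/8)) / (Q(6/8) - Q(2/8))

def moors_k(alpha, beta, delta):
    """Moors kurtosis from the octiles of the PLL quantile function."""
    Q = lambda p: pll_quantile(p, alpha, beta, delta)
    return (Q(7/8) - Q(5/8) + Q(3/8) - Q(1/8)) / (Q(6/8) - Q(2/8))

# Closed form: S = (3^(beta/delta) - 1)/(3^(beta/delta) + 1); note it is free of alpha.
t = 0.5 / 1.5
s_closed = (3.0 ** t - 1.0) / (3.0 ** t + 1.0)
```

Because S is a ratio of quantile differences, the scale parameter α cancels, which the second check below confirms.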
  • Hazard rate function properties
Theorem 2. 
The hazard rate function exhibits increasing, decreasing and upside-down behavior for varying choices of the parameters α , β , δ .
Proof. 
From Equation (3), with u = (x^{\delta}/\alpha)^{1/\beta}, the first-order derivative of the hazard function is
\frac{d}{dx} h(x) = \frac{\delta\, u\, (\delta - \beta - \beta u)}{\beta^{2} x^{2} (1 + u)^{2}}.
Next, consider the following cases:
  • Case 1. If δ ≤ β, the numerator is negative for all x > 0; therefore, the hazard function is strictly decreasing.
  • Case 2. If δ > β, the numerator is positive for small x, so the hazard increases; after x = \left[\alpha \left(\frac{\delta - \beta}{\beta}\right)^{\beta}\right]^{1/\delta}, the numerator is negative and the hazard decreases. Therefore, the hazard is upside-down.
  • Case 3. The hazard function exhibits increasing behavior over a wide range when δ/β is sufficiently large.

2.2. Moments

The relevance and significance of moments in statistical analysis, particularly in applied work, hardly need to be emphasized. Let X be a random variable that follows a PLL distribution with parameters α, β, and δ. Then, the r-th raw moment of the PLL distribution is
\mu_{r}' = E(X^{r}) = \frac{\delta}{\alpha\beta} \int_{0}^{\infty} x^{r+\delta-1} \left(\frac{x^{\delta}}{\alpha}\right)^{1/\beta - 1} \left[1 + \left(\frac{x^{\delta}}{\alpha}\right)^{1/\beta}\right]^{-2} dx = \alpha^{r/\delta}\, B\!\left(\frac{r\beta}{\delta} + 1,\; 1 - \frac{r\beta}{\delta}\right); \quad \frac{r\beta}{\delta} < 1.
From Equation (12), it is obvious that the r-th moment of the PLL distribution exists if and only if δ > rβ. This puts some limitation on the use of this probability model, since in a practical scenario it may be difficult to verify this restriction on the parameters.
The central moments μ_r and the cumulants κ_r of X can be determined from Equation (12) as, respectively,
\mu_{r} = E(X - \mu)^{r} = \sum_{k=0}^{r} (-1)^{k} \binom{r}{k} \mu^{k}\, \mu_{r-k}',
\kappa_{r} = \mu_{r}' - \sum_{k=1}^{r-1} \binom{r-1}{k-1} \kappa_{k}\, \mu_{r-k}'.
Consequently, the skewness γ₁ = κ₃/κ₂^{3/2} and the kurtosis γ₂ = κ₄/κ₂² can be computed from the standardized cumulants.
The moment generating function of the PLL distribution is (formally)
M(t) = \sum_{i=0}^{\infty} \frac{t^{i}}{i!}\, \alpha^{i/\delta}\, B\!\left(\frac{i\beta}{\delta} + 1,\; 1 - \frac{i\beta}{\delta}\right).
The characteristic function of the PLL distribution, φ(t) = E(e^{itX}), and the cumulant generating function of X, K(t) = log φ(t), are given by
\varphi(t) = \sum_{j=0}^{\infty} \frac{(it)^{j}}{j!}\, \alpha^{j/\delta}\, B\!\left(\frac{j\beta}{\delta} + 1,\; 1 - \frac{j\beta}{\delta}\right),
and
K(t) = \log \sum_{j=0}^{\infty} \frac{(it)^{j}}{j!}\, \alpha^{j/\delta}\, B\!\left(\frac{j\beta}{\delta} + 1,\; 1 - \frac{j\beta}{\delta}\right),
each term being finite provided jβ/δ < 1. In Table 1, we provide numerical values of the mean, variance, skewness, and kurtosis of the PLL distribution for a wide collection of parameter values.
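The Beta-function form of the moments can be checked against simulation (a Python sketch of ours, using scipy.special.beta for B(·,·) and inverse-transform sampling):

```python
import numpy as np
from scipy.special import beta as beta_fn

def pll_moment(r, alpha, beta, delta):
    """r-th raw moment: alpha^(r/delta) * B(r*beta/delta + 1, 1 - r*beta/delta);
    it exists only when r*beta < delta."""
    k = r * beta / delta
    if k >= 1.0:
        raise ValueError("moment does not exist: need r*beta < delta")
    return alpha ** (r / delta) * beta_fn(k + 1.0, 1.0 - k)

# Monte Carlo check via inverse-transform sampling from the closed-form quantile.
alpha, beta, delta = 2.0, 0.5, 2.0
rng = np.random.default_rng(7)
u = rng.uniform(size=200_000)
x = (alpha * (u / (1.0 - u)) ** beta) ** (1.0 / delta)
mc_mean = x.mean()
```

For these parameter values rβ/δ = 0.25, so the mean exists; by Euler's reflection formula the first moment equals √2 · (π/4)/sin(π/4) = π/2.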

2.3. Incomplete Moments

Let X be a random variable with the p.d.f. given in Equation (2). The n-th incomplete moment of the PLL is \zeta_{n}(t) = \int_{0}^{t} x^{n} f(x)\, dx. Using Equation (2) and expanding [1 + u]^{-2} = \sum_{j \ge 0} (-1)^{j} (j+1) u^{j} with u = (x^{\delta}/\alpha)^{1/\beta}, we can derive
\zeta_{n}(t) = \frac{\delta}{\alpha\beta} \int_{0}^{t} x^{n+\delta-1} \left(\frac{x^{\delta}}{\alpha}\right)^{1/\beta - 1} \left[1 + \left(\frac{x^{\delta}}{\alpha}\right)^{1/\beta}\right]^{-2} dx = \delta \sum_{j=0}^{\infty} (-1)^{j} (j+1)\, \alpha^{-(j+1)/\beta}\, \frac{t^{\,n + (j+1)\delta/\beta}}{\beta n + (j+1)\delta}, \qquad \left(\frac{t^{\delta}}{\alpha}\right)^{1/\beta} < 1.
Among several applications, one important application of the first incomplete moment is related to Bonferroni and Lorenz curves, Bonferroni and Gini indices. At first, we provide the definitions for them.
Let Y be a continuous random variable with p.d.f. g(·) and c.d.f. G, and let G^{-1}(·) denote the quantile function. Then the Bonferroni curve (BC), Bonferroni index (BI), Lorenz curve (LC), and Gini index (GI) of Y are
B(p) = \frac{1}{p\mu} \int_{0}^{p} G^{-1}(u)\, du, \qquad BI = 1 - \int_{0}^{1} B(p)\, dp, \qquad L(p) = \frac{1}{\mu} \int_{0}^{p} G^{-1}(u)\, du, \qquad GI = 1 - 2 \int_{0}^{1} L(p)\, dp,
respectively, where G^{-1}(u) can be evaluated from Equation (4) for a given probability. These measures are quite useful in income modeling, economics, demography, insurance, engineering, and medicine. For example, a useful application of the first incomplete moment refers to the mean residual life (MRL) and the mean waiting time, given by m(t) = \frac{\mu - \zeta_{1}(t)}{S(t)} - t and W(t) = t - \frac{\zeta_{1}(t)}{F(t)}, respectively.

2.4. Order Statistics

Let X₁, X₂, …, Xₙ be a random sample from the density in Equation (2), and let X_{(1:n)} ≤ X_{(2:n)} ≤ … ≤ X_{(n:n)} be the corresponding order statistics. Then the p.d.f. f_{r:n}(x) of the r-th order statistic, for r = 1, 2, …, n, is
f_{r:n}(x) = \frac{n!}{(r-1)!\,(n-r)!}\, [F(x)]^{r-1} [1 - F(x)]^{n-r} f(x) = \frac{n!}{(r-1)!\,(n-r)!}\, f(x) \sum_{i=0}^{n-r} (-1)^{i} \binom{n-r}{i} [F(x)]^{r+i-1},
and corresponding cdf
F_{r:n}(x) = \sum_{i=r}^{n} \binom{n}{i} [F(x)]^{i} [1 - F(x)]^{n-i} = \sum_{i=r}^{n} \sum_{j=0}^{n-i} (-1)^{j} \binom{n}{i} \binom{n-i}{j} [F(x)]^{i+j},
respectively, for r = 1, 2, …, n. It follows that, with u = (x^{\delta}/\alpha)^{1/\beta},
f_{r:n}(x) = \frac{n!}{(r-1)!\,(n-r)!}\, \frac{\delta}{\beta x} \sum_{i=0}^{n-r} (-1)^{i} \binom{n-r}{i}\, \frac{u^{\,r+i}}{(1 + u)^{\,r+i+1}},
and the k-th moment of the order statistic is
E(X_{r:n}^{k}) = \frac{n!}{(r-1)!\,(n-r)!}\, \frac{\delta}{\beta} \sum_{i=0}^{n-r} (-1)^{i} \binom{n-r}{i} \int_{0}^{\infty} x^{k-1}\, \frac{u^{\,r+i}}{(1+u)^{\,r+i+1}}\, dx = \frac{n!}{(r-1)!\,(n-r)!}\, \alpha^{k/\delta} \sum_{i=0}^{n-r} (-1)^{i} \binom{n-r}{i} B\!\left(\frac{k\beta}{\delta} + r + i,\; 1 - \frac{k\beta}{\delta}\right).
The following characterization results for the PLL distribution, based on order statistics, follow immediately.
  • If δ = 1 and β* = 1/β in Equation (14), we get the k-th moment of the order statistics for the Fisk distribution with parameters α and β*:
    E(X_{r:n}^{k}) = \frac{\alpha^{k}\, n!}{(r-1)!\,(n-r)!} \sum_{i=0}^{n-r} (-1)^{i} \binom{n-r}{i} B\!\left(\frac{k}{\beta^{*}} + r + i,\; 1 - \frac{k}{\beta^{*}}\right).
  • If δ = 1 in Equation (14), we get the k-th moment of the order statistics for the log-logistic distribution:
    E(X_{r:n}^{k}) = \frac{\alpha^{k}\, n!}{(r-1)!\,(n-r)!} \sum_{i=0}^{n-r} (-1)^{i} \binom{n-r}{i} B\!\left(k\beta + r + i,\; 1 - k\beta\right).
Next, consider the large-sample distribution of the smallest order statistic X_{1:n}. To obtain it, we use Theorem 8.3.6 of [20]. Noticeably, since F^{-1}(0) = 0, it follows from the theorem that the asymptotic distribution of the sample minimum X_{1:n} is not of the Fréchet type. The large-sample distribution of X_{1:n} will be of the Weibull type with parameter η > 0 if
\lim_{\varepsilon \to 0^{+}} \frac{F(\varepsilon x)}{F(\varepsilon)} = x^{\eta}, \quad \text{for all } x > 0.
By direct computation (or L'Hôpital's rule), since F(x) \sim (x^{\delta}/\alpha)^{1/\beta} as x \to 0^{+}, it follows that
\lim_{\varepsilon \to 0^{+}} \frac{F(\varepsilon x)}{F(\varepsilon)} = x^{\delta/\beta} \lim_{\varepsilon \to 0^{+}} \frac{1 + (\varepsilon^{\delta}/\alpha)^{1/\beta}}{1 + ((\varepsilon x)^{\delta}/\alpha)^{1/\beta}} = x^{\delta/\beta}.
Consequently, the large-sample distribution of the smallest order statistic X_{1:n} is of the Weibull type with shape parameter δ/β. Order statistics from the PLL distribution in Equation (2) can also be used to characterize the associated density via record values, the conditional expectation of the first order statistic, etc.

3. Method of Estimation

We outline a few techniques for estimating the parameters α, β, and δ of the PLL distribution in this section. A random sample of size n from the PLL distribution is denoted by X = (x₁, x₂, …, xₙ).

3.1. Maximum Likelihood Estimation

This section deals with the issue of using the likelihood function to estimate the PLL distribution’s parameters. This method of estimation is preferred in the following scenarios:
(a)
When the model is correctly specified.
(b)
Moderate to large sample sizes.
(c)
Data are complete (no censoring/truncation complications).
(d)
Interest lies in asymptotic efficiency and standard inferential tools (confidence intervals, likelihood ratio tests). Moreover, it provides asymptotically efficient and consistent estimates under regularity conditions and serves as a reference for comparing alternative estimators. For a random sample x₁, x₂, …, xₙ from the PLL density in Equation (2), the likelihood of the observed data is
L(\alpha, \beta, \delta \mid x) = \prod_{i=1}^{n} \frac{\delta}{\alpha\beta}\, x_{i}^{\delta-1} \left(\frac{x_{i}^{\delta}}{\alpha}\right)^{1/\beta - 1} \left[1 + \left(\frac{x_{i}^{\delta}}{\alpha}\right)^{1/\beta}\right]^{-2}.
The log-likelihood of the PLL distribution is given by
\log L(\alpha, \beta, \delta \mid x) = n \log \delta - n \log(\alpha\beta) + \left(\frac{1}{\beta} - 1\right) \sum_{i=1}^{n} \log\!\left(\frac{x_{i}^{\delta}}{\alpha}\right) + (\delta - 1) \sum_{i=1}^{n} \log x_{i} - 2 \sum_{i=1}^{n} \log\!\left[1 + \left(\frac{x_{i}^{\delta}}{\alpha}\right)^{1/\beta}\right].
Next, differentiating the log-likelihood with respect to δ, α, and β and equating to zero, with u_{i} = (x_{i}^{\delta}/\alpha)^{1/\beta}, we get the following equations:
\frac{\partial \log L}{\partial \delta} = \frac{n}{\delta} + \frac{1}{\beta} \sum_{i=1}^{n} \log x_{i} - \frac{2}{\beta} \sum_{i=1}^{n} \frac{u_{i}}{1 + u_{i}} \log x_{i} = 0,
\frac{\partial \log L}{\partial \alpha} = -\frac{n}{\alpha\beta} + \frac{2}{\alpha\beta} \sum_{i=1}^{n} \frac{u_{i}}{1 + u_{i}} = 0,
\frac{\partial \log L}{\partial \beta} = -\frac{n}{\beta} - \frac{1}{\beta^{2}} \sum_{i=1}^{n} \log\!\left(\frac{x_{i}^{\delta}}{\alpha}\right) + \frac{2}{\beta^{2}} \sum_{i=1}^{n} \frac{u_{i}}{1 + u_{i}} \log\!\left(\frac{x_{i}^{\delta}}{\alpha}\right) = 0.
It is evident from Equations (17)–(19) that these equations are non-linear and do not possess a closed, analytically tractable form. Therefore, an iterative scheme such as Newton–Raphson or simulated annealing can be used to obtain the estimates. Regarding the existence and uniqueness of the MLEs for the PLL distribution, see Appendix A.
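The numerical maximization of the log-likelihood can be sketched as follows (Python with SciPy, our choice; the paper works in R). Optimizing over log-parameters keeps α, β, and δ positive; in practice convergence diagnostics should be checked:

```python
import numpy as np
from scipy.optimize import minimize

def pll_negloglik(theta_log, x):
    """Negative log-likelihood of a PLL sample x; theta_log = log(alpha, beta, delta)."""
    alpha, beta, delta = np.exp(theta_log)
    lx = np.log(x)
    logw = delta * lx - np.log(alpha)                    # log(x_i^delta / alpha)
    ll = (x.size * (np.log(delta) - np.log(alpha * beta))
          + (1.0 / beta - 1.0) * logw.sum()
          + (delta - 1.0) * lx.sum()
          - 2.0 * np.logaddexp(0.0, logw / beta).sum())  # -2 * sum log(1 + u_i), stably
    return -ll

# Simulate a sample by inverse transform, then maximize the likelihood numerically.
rng = np.random.default_rng(3)
a0, b0, d0 = 2.0, 0.5, 1.5
p = rng.uniform(size=2000)
x = (a0 * (p / (1.0 - p)) ** b0) ** (1.0 / d0)
fit = minimize(pll_negloglik, x0=np.log([1.0, 1.0, 1.0]), args=(x,),
               method="Nelder-Mead", options={"maxiter": 5000, "fatol": 1e-9, "xatol": 1e-9})
a_hat, b_hat, d_hat = np.exp(fit.x)
```

The fitted median α̂^{1/δ̂} and the ratio δ̂/β̂ are functionals of the fitted distribution that can be compared with their true values.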

3.2. Maximum Product Spacing Distance Estimation

For the estimation of the unknown parameters, the maximum product spacing estimation (MPSE) approach of [21] is also considered. It seeks to maximize the geometric mean of the spacings of the data (differences between values of the cumulative distribution function at adjacent ordered data points) and can be used as an alternative to the MLE approach for continuous univariate distributions. The MPSE is a strong alternative to MLE, especially when likelihood surfaces are irregular. It works well when any of the following scenarios is present:
(a)
Small sample sizes.
(b)
Models with heavy tails or extreme-value behavior.
(c)
Situations where MLE fails to converge or produces boundary estimates.
(d)
Distributions with flat or multimodal likelihoods. Most importantly, MPSE often exhibits better numerical stability and can outperform MLE in small samples or non-regular cases. The estimators produced by the MPSE approach are asymptotically normal, consistent, and as efficient as MLEs. The invariance property of the MPSEs was examined by [22], who demonstrated that it matches that of the MLEs. Let us define the spacings for the PLL distribution as
D_{j}(\delta, \beta, \alpha) = F(x_{j:n} \mid \delta, \beta, \alpha) - F(x_{j-1:n} \mid \delta, \beta, \alpha), \quad j = 1, \ldots, n+1,
where F(x_{0:n} \mid \delta, \beta, \alpha) = 0 and F(x_{n+1:n} \mid \delta, \beta, \alpha) = 1.
The MPSEs \hat{\delta}_{MPSE}, \hat{\beta}_{MPSE}, \hat{\alpha}_{MPSE} of the unknown parameters δ, β, and α are obtained by maximizing, with respect to δ, β, and α, the function
G(\delta, \beta, \alpha) = \left[\prod_{j=1}^{n+1} D_{j}(\delta, \beta, \alpha)\right]^{\frac{1}{n+1}},
or, equivalently, its logarithm
H(\delta, \beta, \alpha) = \frac{1}{n+1} \sum_{j=1}^{n+1} \log D_{j}(\delta, \beta, \alpha).
Equivalently, the estimators are obtained by solving the following non-linear equations:
\frac{\partial H}{\partial \delta} = \frac{1}{n+1} \sum_{j=1}^{n+1} \frac{1}{D_{j}(\delta, \beta, \alpha)} \left[\Delta_{1}(x_{j:n} \mid \delta, \beta, \alpha) - \Delta_{1}(x_{j-1:n} \mid \delta, \beta, \alpha)\right] = 0,
\frac{\partial H}{\partial \beta} = \frac{1}{n+1} \sum_{j=1}^{n+1} \frac{1}{D_{j}(\delta, \beta, \alpha)} \left[\Delta_{2}(x_{j:n} \mid \delta, \beta, \alpha) - \Delta_{2}(x_{j-1:n} \mid \delta, \beta, \alpha)\right] = 0,
\frac{\partial H}{\partial \alpha} = \frac{1}{n+1} \sum_{j=1}^{n+1} \frac{1}{D_{j}(\delta, \beta, \alpha)} \left[\Delta_{3}(x_{j:n} \mid \delta, \beta, \alpha) - \Delta_{3}(x_{j-1:n} \mid \delta, \beta, \alpha)\right] = 0,
where, with u = (x^{\delta}/\alpha)^{1/\beta} evaluated at the corresponding order statistic,
\Delta_{1}(x \mid \delta, \beta, \alpha) = \frac{\partial F}{\partial \delta} = \frac{u \log x}{\beta (1 + u)^{2}},
\Delta_{2}(x \mid \delta, \beta, \alpha) = \frac{\partial F}{\partial \beta} = -\frac{u \log(x^{\delta}/\alpha)}{\beta^{2} (1 + u)^{2}},
\Delta_{3}(x \mid \delta, \beta, \alpha) = \frac{\partial F}{\partial \alpha} = -\frac{u}{\alpha\beta (1 + u)^{2}}.
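A numerical MPSE sketch (Python, ours) that maximizes the mean log-spacing H directly rather than solving the score equations:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def pll_cdf(x, alpha, beta, delta):
    """PLL c.d.f. written as a logistic in log x, for numerical stability."""
    return expit((delta * np.log(x) - np.log(alpha)) / beta)

def neg_mean_log_spacing(theta_log, xs):
    """-H(delta, beta, alpha): xs must be sorted; F = 0 and F = 1 pad the spacings."""
    alpha, beta, delta = np.exp(theta_log)
    F = np.concatenate(([0.0], pll_cdf(xs, alpha, beta, delta), [1.0]))
    D = np.clip(np.diff(F), 1e-300, None)   # guard against ties/underflow
    return -np.mean(np.log(D))

rng = np.random.default_rng(11)
a0, b0, d0 = 2.0, 0.5, 1.5
p = rng.uniform(size=400)
xs = np.sort((a0 * (p / (1.0 - p)) ** b0) ** (1.0 / d0))
fit = minimize(neg_mean_log_spacing, x0=np.log([1.0, 1.0, 1.0]), args=(xs,),
               method="Nelder-Mead", options={"maxiter": 5000})
a_mps, b_mps, d_mps = np.exp(fit.x)
```

Writing the c.d.f. as a logistic function of log x avoids overflow when the optimizer explores extreme parameter values.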

3.3. Weighted Least Square Estimation

WLSE improves upon least squares estimation (LSE) by accounting for heteroscedasticity in the empirical distribution function. It is preferred in the following scenarios:
(a)
When variance of errors differs across quantiles.
(b)
When tail fitting is particularly important.
(c)
Moderate sample sizes where efficiency improvements over LSE are desired.
(d)
It provides improved efficiency over LSE and better performance in the tails by incorporating variance-based weights. The WLSEs of the parameters δ, β, and α are obtained by minimizing, with respect to δ, β, and α, the function
\eta(\alpha, \beta, \delta) = \sum_{j=1}^{n} \frac{(n+1)^{2}(n+2)}{j\,(n-j+1)} \left[F(x_{j:n} \mid \delta, \beta, \alpha) - \frac{j}{n+1}\right]^{2},
which leads to the estimating equations
\sum_{j=1}^{n} \frac{(n+1)^{2}(n+2)}{j\,(n-j+1)} \left[F(x_{j:n} \mid \delta, \beta, \alpha) - \frac{j}{n+1}\right] \Delta_{1}(x_{j:n} \mid \delta, \beta, \alpha) = 0,
\sum_{j=1}^{n} \frac{(n+1)^{2}(n+2)}{j\,(n-j+1)} \left[F(x_{j:n} \mid \delta, \beta, \alpha) - \frac{j}{n+1}\right] \Delta_{2}(x_{j:n} \mid \delta, \beta, \alpha) = 0,
\sum_{j=1}^{n} \frac{(n+1)^{2}(n+2)}{j\,(n-j+1)} \left[F(x_{j:n} \mid \delta, \beta, \alpha) - \frac{j}{n+1}\right] \Delta_{3}(x_{j:n} \mid \delta, \beta, \alpha) = 0,
where Δ 1 ( x j : n | δ , β , α ) , Δ 2 ( x j : n | δ , β , α ) and Δ 3 ( x j : n | δ , β , α ) are given in Equations (20)–(22).
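The WLSE objective can likewise be minimized numerically. In the sketch below (Python, ours), the weights (n+1)²(n+2)/(j(n−j+1)) are the standard inverse-variance choice for F(X_{j:n}), assumed here:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def pll_cdf(x, alpha, beta, delta):
    """PLL c.d.f. as a logistic in log x, for numerical stability."""
    return expit((delta * np.log(x) - np.log(alpha)) / beta)

def wlse_objective(theta_log, xs):
    """Weighted squared distance between F(x_(j)) and its expectation j/(n+1)."""
    alpha, beta, delta = np.exp(theta_log)
    n = xs.size
    j = np.arange(1, n + 1)
    w = (n + 1) ** 2 * (n + 2) / (j * (n - j + 1))
    return np.sum(w * (pll_cdf(xs, alpha, beta, delta) - j / (n + 1)) ** 2)

rng = np.random.default_rng(5)
a0, b0, d0 = 2.0, 0.5, 1.5
p = rng.uniform(size=400)
xs = np.sort((a0 * (p / (1.0 - p)) ** b0) ** (1.0 / d0))
fit = minimize(wlse_objective, x0=np.log([1.0, 1.0, 1.0]), args=(xs,),
               method="Nelder-Mead", options={"maxiter": 5000})
a_w, b_w, d_w = np.exp(fit.x)
```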

3.4. Approximate Confidence Intervals

In this subsection, asymptotic confidence intervals for the model parameters have been determined. The MLEs of these parameters cannot be obtained explicitly. Thus, using the asymptotic properties of the MLEs, the approximate confidence intervals (ACIs) have been obtained from the inverse of the variance-covariance matrix I 1 , where
I^{-1}(\alpha, \beta, \delta) = \left[- \begin{pmatrix} \frac{\partial^{2} \log L}{\partial \alpha^{2}} & \frac{\partial^{2} \log L}{\partial \alpha \partial \beta} & \frac{\partial^{2} \log L}{\partial \alpha \partial \delta} \\ \frac{\partial^{2} \log L}{\partial \beta \partial \alpha} & \frac{\partial^{2} \log L}{\partial \beta^{2}} & \frac{\partial^{2} \log L}{\partial \beta \partial \delta} \\ \frac{\partial^{2} \log L}{\partial \delta \partial \alpha} & \frac{\partial^{2} \log L}{\partial \delta \partial \beta} & \frac{\partial^{2} \log L}{\partial \delta^{2}} \end{pmatrix}\right]^{-1} = \begin{pmatrix} \mathrm{Var}(\hat{\alpha}) & \mathrm{Cov}(\hat{\alpha}, \hat{\beta}) & \mathrm{Cov}(\hat{\alpha}, \hat{\delta}) \\ \mathrm{Cov}(\hat{\beta}, \hat{\alpha}) & \mathrm{Var}(\hat{\beta}) & \mathrm{Cov}(\hat{\beta}, \hat{\delta}) \\ \mathrm{Cov}(\hat{\delta}, \hat{\alpha}) & \mathrm{Cov}(\hat{\delta}, \hat{\beta}) & \mathrm{Var}(\hat{\delta}) \end{pmatrix}.
Thus, the 100(1 − λ)% ACIs for the parameters α, β, and δ are, respectively, obtained as \hat{\alpha} \pm z_{\lambda/2} \sqrt{\mathrm{Var}(\hat{\alpha})}, \hat{\beta} \pm z_{\lambda/2} \sqrt{\mathrm{Var}(\hat{\beta})}, and \hat{\delta} \pm z_{\lambda/2} \sqrt{\mathrm{Var}(\hat{\delta})}, where z_{\lambda/2} is the upper λ/2 quantile of the standard normal distribution.

3.5. Bootstrap Confidence Intervals

Bootstrap methods avoid the need to derive analytical variance formulas. Bootstrap estimation is preferred when analytical sampling distributions are difficult or unreliable, and when we want data-driven uncertainty quantification without relying heavily on asymptotic theory. It is not an alternative estimator but a resampling-based inference tool. It provides: (a) empirical bias estimates; (b) empirical standard errors; (c) confidence intervals; and (d) coverage probability assessment. Remarkably, the bootstrap resampling method yields more accurate results when obtaining confidence intervals for small sample sizes. Two distinct parametric bootstrap confidence intervals for the unknown parameters are constructed in this section, using the following procedures.
Boot-p confidence intervals: This subsection presents the construction of the percentile parametric bootstrap (Boot-p) confidence intervals. The following algorithm was used to create these intervals.
Step 1: Generate a bootstrap sample using the MLEs and compute the bootstrap MLEs α ^ * , β ^ * , and δ ^ * .
Step 2: Repeat Step 1 up to B times to get { α ^ * ( 1 ) , · · · , α ^ * ( B ) } , { β ^ * ( 1 ) , · · · , β ^ * ( B ) } , and { δ ^ * ( 1 ) , · · · , δ ^ * ( B ) } .
Step 3: Rearrange all the MLEs obtained in Step 2 in ascending order to obtain { \hat{\alpha}^{*[1]}, …, \hat{\alpha}^{*[B]} }, { \hat{\beta}^{*[1]}, …, \hat{\beta}^{*[B]} }, and { \hat{\delta}^{*[1]}, …, \hat{\delta}^{*[B]} }.
Thus, the 100(1 − λ)% Boot-p confidence intervals for α, β, and δ are, respectively, \left(\hat{\alpha}^{*[B\lambda/2]}, \hat{\alpha}^{*[B(1-\lambda/2)]}\right), \left(\hat{\beta}^{*[B\lambda/2]}, \hat{\beta}^{*[B(1-\lambda/2)]}\right), and \left(\hat{\delta}^{*[B\lambda/2]}, \hat{\delta}^{*[B(1-\lambda/2)]}\right).
Boot-t confidence intervals: When the effective sample size (m) is too small, bootstrap-t (Boot-t) confidence intervals have been created. Using the following procedure, Boot-t confidence intervals have been formed.
Step 1: Generate bootstrap samples and the corresponding MLEs of the parameters as similar as given in Step 1 in Boot-p confidence intervals.
Step 2: Compute the t-statistics for the unknown parameters α, β, and δ as U_{1} = \frac{\hat{\alpha}^{*} - \hat{\alpha}}{\sqrt{\mathrm{Var}(\hat{\alpha}^{*})}}, U_{2} = \frac{\hat{\beta}^{*} - \hat{\beta}}{\sqrt{\mathrm{Var}(\hat{\beta}^{*})}}, and U_{3} = \frac{\hat{\delta}^{*} - \hat{\delta}}{\sqrt{\mathrm{Var}(\hat{\delta}^{*})}}.
Step 3: Repeat Steps 1–2 B times to obtain { U_{1}^{(1)}, …, U_{1}^{(B)} }, { U_{2}^{(1)}, …, U_{2}^{(B)} }, and { U_{3}^{(1)}, …, U_{3}^{(B)} }.
Step 4: Rearrange all the values obtained in Step 3 in ascending order to obtain { U_{1}^{[1]}, …, U_{1}^{[B]} }, { U_{2}^{[1]}, …, U_{2}^{[B]} }, and { U_{3}^{[1]}, …, U_{3}^{[B]} }.
Thus, the 100(1 − λ)% Boot-t confidence intervals for α, β, and δ are, respectively, \left(\hat{\alpha} + U_{1}^{[B\lambda/2]} \sqrt{\mathrm{Var}(\hat{\alpha})},\; \hat{\alpha} + U_{1}^{[B(1-\lambda/2)]} \sqrt{\mathrm{Var}(\hat{\alpha})}\right), \left(\hat{\beta} + U_{2}^{[B\lambda/2]} \sqrt{\mathrm{Var}(\hat{\beta})},\; \hat{\beta} + U_{2}^{[B(1-\lambda/2)]} \sqrt{\mathrm{Var}(\hat{\beta})}\right), and \left(\hat{\delta} + U_{3}^{[B\lambda/2]} \sqrt{\mathrm{Var}(\hat{\delta})},\; \hat{\delta} + U_{3}^{[B(1-\lambda/2)]} \sqrt{\mathrm{Var}(\hat{\delta})}\right).
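A compact Boot-p illustration (Python, ours). For brevity it uses the δ = 1 (log-logistic) submodel and a closed-form quantile estimator standing in for the MLE of Step 1; the step structure is otherwise identical, and the percentile interval here is for the scale parameter α:

```python
import numpy as np

def ll_quantile(p, alpha, beta):
    """Log-logistic (delta = 1) quantile: alpha * (p/(1-p))^beta."""
    return alpha * (p / (1.0 - p)) ** beta

def estimate(x):
    """Closed-form quantile estimator standing in for the MLE of Step 1:
    alpha = sample median; beta from the interquartile range on the log scale."""
    q25, q50, q75 = np.quantile(x, [0.25, 0.50, 0.75])
    return q50, (np.log(q75) - np.log(q25)) / (2.0 * np.log(3.0))

rng = np.random.default_rng(9)
a0, b0, n, B, lam = 2.0, 0.3, 300, 400, 0.05
x = ll_quantile(rng.uniform(size=n), a0, b0)
a_hat, b_hat = estimate(x)

# Boot-p: resample parametrically from the fitted model, re-estimate, take percentiles.
boots = np.array([estimate(ll_quantile(rng.uniform(size=n), a_hat, b_hat))[0]
                  for _ in range(B)])
lo, hi = np.quantile(boots, [lam / 2.0, 1.0 - lam / 2.0])
```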

4. Bayesian Estimation

In several scenarios, a model's parameters cannot be regarded as fixed; they fluctuate, which induces some variability in the model parameters. Bayesian analysis takes this unpredictability into account: using the presently accessible data, the Bayesian technique enables us to incorporate prior assumptions about the parameters. In this section, we derive the Bayes estimates of α, β, and δ with respect to the squared error loss function (SELF). If σ is an estimator of a parameter λ, the SELF is defined as
L(\lambda, \sigma) = (\sigma - \lambda)^{2}.
Recall that the SELF is symmetric and that, under SELF, the posterior mean is the Bayes estimator. Observe that conjugate priors are absent in this case. Following the strategy developed by [23], we assume independent gamma priors for the unknown parameters; specifically, α ∼ Gamma(a, b), β ∼ Gamma(c, d), and δ ∼ Gamma(e, f). The joint prior density function can then be expressed as
\pi^{*}(\alpha, \beta, \delta) \propto \alpha^{a-1} \beta^{c-1} \delta^{e-1} e^{-(b\alpha + d\beta + f\delta)},
where a , b , c , d , e , f > 0 are the hyper-parameters. Now combining (29) and (15), the joint posterior density can be written as
\pi(\alpha, \beta, \delta \mid \text{data}) \propto \alpha^{a-1} \beta^{c-1} \delta^{e-1} e^{-(b\alpha + d\beta + f\delta)} \prod_{i=1}^{n} \frac{\delta}{\alpha\beta}\, x_{i}^{\delta-1} \left(\frac{x_{i}^{\delta}}{\alpha}\right)^{1/\beta - 1} \left[1 + \left(\frac{x_{i}^{\delta}}{\alpha}\right)^{1/\beta}\right]^{-2}.
Thus, the Bayes estimate of any function of parameters ψ ( α , β , δ ) with respect to SELF can be denoted by
\hat{\psi}(\alpha, \beta, \delta) = \frac{\int_{0}^{\infty} \int_{0}^{\infty} \int_{0}^{\infty} \psi(\alpha, \beta, \delta)\, \pi(\alpha, \beta, \delta \mid \text{data})\, d\alpha\, d\beta\, d\delta}{\int_{0}^{\infty} \int_{0}^{\infty} \int_{0}^{\infty} \pi(\alpha, \beta, \delta \mid \text{data})\, d\alpha\, d\beta\, d\delta}.
We observe that it is not possible to obtain the Bayes estimates of α, β, and δ in closed form. Thus, we employ the MCMC technique to compute approximate Bayes estimates in the following subsection. Here, the gamma priors are pseudo-conjugate and weakly informative.
On the choice of hyperparameters for the priors: the hyperparameter values for the gamma priors are chosen judiciously. Another reasonable strategy for obtaining the hyperparameter values would be to match the first three theoretical moments with the sample moments. It must be mentioned that there are other available methods of selecting hyperparameters, such as those developed for hyperparameter optimization; however, the methodology adopted in such references is more suitable for spatio-temporal models than for a simple model like ours. Moreover, whether such strategies are beneficial (in the sense of computational efficiency) is debatable and will be explored in a future article. We are not claiming that this set of priors with these specific hyperparameter choices is optimal for the associated Bayesian analysis, but we tried several other prior choices and found minimal changes in the final estimates. A comprehensive study of a wide range of judicious prior choices will be taken up later and is beyond the scope of the present paper.

MCMC Method

Here, the MCMC approach is employed to compute approximate Bayes estimates of the parameters under SELF. From the posterior density in (30), we obtain the following full conditional densities:
\pi_{1}(\alpha \mid \beta, \delta, \text{data}) \propto \alpha^{a-1} e^{-b\alpha} \prod_{i=1}^{n} \frac{1}{\alpha} \left(\frac{x_{i}^{\delta}}{\alpha}\right)^{1/\beta - 1} \left[1 + \left(\frac{x_{i}^{\delta}}{\alpha}\right)^{1/\beta}\right]^{-2},
\pi_{2}(\beta \mid \alpha, \delta, \text{data}) \propto \beta^{c-n-1} e^{-d\beta} \prod_{i=1}^{n} \left(\frac{x_{i}^{\delta}}{\alpha}\right)^{1/\beta} \left[1 + \left(\frac{x_{i}^{\delta}}{\alpha}\right)^{1/\beta}\right]^{-2},
\pi_{3}(\delta \mid \alpha, \beta, \text{data}) \propto \delta^{n+e-1} e^{-f\delta} \prod_{i=1}^{n} x_{i}^{\delta-1} \left(\frac{x_{i}^{\delta}}{\alpha}\right)^{1/\beta - 1} \left[1 + \left(\frac{x_{i}^{\delta}}{\alpha}\right)^{1/\beta}\right]^{-2}.
Since these conditional densities do not correspond to any well-known distribution, MCMC samples cannot be generated from them directly. The Metropolis–Hastings algorithm is therefore utilized to draw MCMC samples from the conditional density functions. The Bayes estimates can be obtained using the following steps:
  • Details of the MCMC procedure
  • Step 1: Choose initial values as α ( 1 ) = α ^ , β ( 1 ) = β ^ , and δ ( 1 ) = δ ^ .
  • Step 2: Generate α ( i ) , β ( i ) , and δ ( i ) from Normal distribution as α ( i ) N ( α ( i 1 ) , V a r ( α ^ ) ) , β ( i ) N ( β ( i 1 ) , V a r ( β ^ ) ) , and δ ( i ) N ( δ ( i 1 ) , V a r ( δ ^ ) ) .
  • Step 3: Compute Ω α = min { 1 , π 1 ( α ( i ) | β ( i − 1 ) , δ ( i − 1 ) , d a t a ) / π 1 ( α ( i − 1 ) | β ( i − 1 ) , δ ( i − 1 ) , d a t a ) } , Ω β = min { 1 , π 2 ( β ( i ) | α ( i − 1 ) , δ ( i − 1 ) , d a t a ) / π 2 ( β ( i − 1 ) | α ( i − 1 ) , δ ( i − 1 ) , d a t a ) } , and Ω δ = min { 1 , π 3 ( δ ( i ) | α ( i − 1 ) , β ( i − 1 ) , d a t a ) / π 3 ( δ ( i − 1 ) | α ( i − 1 ) , β ( i − 1 ) , d a t a ) } .
  • Step 4: Generate U 1 ∼ Uniform(0,1), U 2 ∼ Uniform(0,1), and U 3 ∼ Uniform(0,1).
  • Step 5: Set α = α ( i ) if U 1 ≤ Ω α and α = α ( i − 1 ) otherwise; β = β ( i ) if U 2 ≤ Ω β and β = β ( i − 1 ) otherwise; and δ = δ ( i ) if U 3 ≤ Ω δ and δ = δ ( i − 1 ) otherwise.
  • Step 6: Set i = i + 1 .
  • Step 7: Repeat Steps 2–6 N times to generate { α ( 1 ) , … , α ( N ) } , { β ( 1 ) , … , β ( N ) } , and { δ ( 1 ) , … , δ ( N ) } .
Thus, under SELF, the Bayes estimates of α , β , and δ are, respectively, given as
\hat{\alpha}_{BE} = \frac{1}{N}\sum_{i=1}^{N}\alpha^{(i)}, \quad \hat{\beta}_{BE} = \frac{1}{N}\sum_{i=1}^{N}\beta^{(i)}, \quad \text{and} \quad \hat{\delta}_{BE} = \frac{1}{N}\sum_{i=1}^{N}\delta^{(i)}.
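As an illustration of Steps 1–7, the following is a minimal sketch of a one-parameter-at-a-time random-walk Metropolis sampler. The target used here is a hypothetical toy posterior with independent standard-normal full conditionals, not the PLL posterior; the names `log_cond`, `prop_sd`, and the burn-in length are illustrative assumptions.

```python
import numpy as np

def metropolis_within_gibbs(log_cond, init, prop_sd, n_iter, rng):
    """One-parameter-at-a-time random-walk Metropolis (Steps 1-7 above).

    log_cond[j](theta) must return the log of the j-th full-conditional
    density evaluated at the full parameter vector theta.
    """
    p = len(init)
    theta = np.array(init, dtype=float)                      # Step 1: initial values
    chain = np.empty((n_iter, p))
    for i in range(n_iter):
        for j in range(p):
            prop = theta.copy()
            prop[j] = rng.normal(theta[j], prop_sd[j])       # Step 2: normal proposal
            log_ratio = log_cond[j](prop) - log_cond[j](theta)  # Step 3: log acceptance ratio
            if np.log(rng.uniform()) < log_ratio:            # Steps 4-5: accept/reject
                theta = prop
        chain[i] = theta                                     # Steps 6-7: store draw
    return chain

# Toy target: three independent standard-normal full conditionals.
rng = np.random.default_rng(1)
conds = [lambda th, j=j: -0.5 * th[j] ** 2 for j in range(3)]
chain = metropolis_within_gibbs(conds, [0.0, 0.0, 0.0], [1.0, 1.0, 1.0], 2000, rng)
bayes_est = chain[1000:].mean(axis=0)  # posterior means after burn-in (SELF)
```

In the PLL setting, `log_cond` would be replaced by the logarithms of π 1 , π 2 , and π 3 , with the proposal standard deviations taken from the estimated variances of the MLEs, as in Step 2.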

5. Simulation Study

It is difficult to investigate the obtained expressions for the estimators analytically. As a result, a numerical analysis is carried out to compare the various estimators of the PLL distribution defined earlier. In this section, we evaluate the performance of the different methods of estimation discussed in the previous section through Monte Carlo simulation. The absolute bias (AB) and mean square error (MSE) of the point estimates have been evaluated to assess the performance of the estimation methods. In the case of interval estimates, the average width (AW) and coverage probability (CP) have been obtained to check their performance.
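Assuming the replicated point estimates and interval endpoints are stored as arrays, these four criteria can be sketched as below (AB taken as the absolute difference between the average estimate and the true value, one common convention):

```python
import numpy as np

def ab_mse(estimates, true_value):
    """Absolute bias and mean square error of replicated point estimates."""
    estimates = np.asarray(estimates, dtype=float)
    ab = abs(estimates.mean() - true_value)          # |mean(estimate) - true|
    mse = ((estimates - true_value) ** 2).mean()     # average squared error
    return ab, mse

def aw_cp(lowers, uppers, true_value):
    """Average width and coverage probability of replicated intervals."""
    lowers = np.asarray(lowers, dtype=float)
    uppers = np.asarray(uppers, dtype=float)
    aw = (uppers - lowers).mean()
    cp = ((lowers <= true_value) & (true_value <= uppers)).mean()
    return aw, cp

# Tiny illustration with made-up replicates (not simulation output):
ab, mse = ab_mse([1.4, 1.6, 1.5], 1.5)
aw, cp = aw_cp([1.0, 1.2], [2.0, 1.4], 1.5)
```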
In this simulation study, 10,000 random samples of sizes 20 , 30 , 50 , 100 , and 200 are generated from the power log-logistic distribution. We consider different choices of the true parameter values: ( α , β , δ ) = ( 1.5 , 0.5 , 0.5 ) , ( 1.2 , 0.75 , 0.5 ) , and ( 1.5 , 0.5 , 1.25 ) . We obtain both classical and Bayes estimates of the parameters. For classical estimation, we consider the MLE, MPSE, and WLSE methods. For interval estimation, we obtain 95 % ACI, Boot-p, Boot-t, and HPD credible intervals for the model parameters. In the case of Bayesian estimation, the hyperparameters have been chosen by using the formula of [24]. For the Bayesian inference, we select the following priors for the three parameters:
  • Π ( α ) ∼ Gamma ( 0.32 , 1.13 ) .
  • Π ( β ) ∼ Gamma ( 0.67 , 1.15 ) .
  • Π ( δ ) ∼ Gamma ( 1.03 , 2.38 ) .
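The HPD credible intervals compared below can be computed from MCMC output as the shortest interval containing the desired posterior mass; a minimal sketch follows, in which Gamma random draws stand in for posterior samples:

```python
import numpy as np

def hpd_interval(samples, prob=0.95):
    """Shortest interval containing a fraction `prob` of the sorted draws."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    k = int(np.ceil(prob * n))            # number of draws inside the interval
    widths = x[k - 1:] - x[: n - k + 1]   # widths of all candidate intervals
    j = int(np.argmin(widths))            # index of the shortest one
    return x[j], x[j + k - 1]

rng = np.random.default_rng(7)
draws = rng.gamma(2.0, 1.0, size=20000)   # stand-in for posterior draws
lo, hi = hpd_interval(draws, 0.95)
inside = float(np.mean((draws >= lo) & (draws <= hi)))
```

Unlike an equal-tailed interval, the HPD interval is the narrowest region of the indicated posterior probability, which is why its average widths in Table 3 are the smallest.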
All simulations were carried out in R (version 4.5.3). Table 2 contains the ABs and MSEs of the point estimates under various choices of n and the parameter values. Similarly, the AWs and CPs of the interval estimates are tabulated in Table 3. From these tables, the following conclusions can be drawn regarding the performance of the different methods of estimation:
  • From Table 2, we clearly observe that the ABs and MSEs decrease when the sample size increases.
  • Based on ABs and MSEs, in general, the Bayes estimates give better results than the classical point estimates. Among the classical estimates, MPSE performs better than the other classical estimates.
  • From Table 3, it can be noticed that the AWs decrease as n increases.
  • Based on AWs and CPs, the HPD credible intervals perform better than the other interval estimates. Among the confidence intervals, Boot-t outperforms Boot-p and the ACIs.
  • In particular, for the Bayesian inference, the following observations are made (based on the simulation study):
    1. The choice of priors has a significant impact on both bias and computational time.
    2. As the sample size increases, bias and MSE decrease while the number of required proposals increases, as expected.
From the above observations, we can summarize that the Bayes estimators perform better than the classical estimators. Among the classical estimators, one may prefer MPSE over MLE and WLSE. Bayesian estimation incorporates more information than maximum likelihood estimation. In terms of the AW and CP of the interval estimates, an experimenter may choose HPD credible intervals over ACIs.
Sensitivity analysis with respect to the values of hyperparameters for the Bayesian inference.
A referee expressed concern that the results of this analysis might be influenced by the choice of hyperparameters; see ref. [25]. As an initial effort to investigate hyperparameter sensitivity, we re-analyze the data set with the following three additional choices of the hyperparameters for the true parameter choice ( α , β , δ ) = ( 1.5 , 0.5 , 0.5 ) . We utilize the posterior mean (P.M.) as the Bayes estimate of each parameter. A similar analysis can be conducted for the other true parameter choices, which we do not report here for brevity.
(a) Choice 2 (C2): Π ( α ) ∼ Gamma ( 1.28 , 3.27 ) , Π ( β ) ∼ Gamma ( 1.08 , 2.19 ) , Π ( δ ) ∼ Gamma ( 1.13 , 3.46 ) .
(b) Choice 3 (C3): Π ( α ) ∼ Gamma ( 1.17 , 2.59 ) , Π ( β ) ∼ Gamma ( 1.37 , 2.41 ) , Π ( δ ) ∼ Gamma ( 1.12 , 3.19 ) .
(c) Choice 4 (C4): Π ( α ) ∼ Gamma ( 1.21 , 2.39 ) , Π ( β ) ∼ Gamma ( 1.05 , 3.23 ) , Π ( δ ) ∼ Gamma ( 0.98 , 1.98 ) .
The posterior summaries based on the four different sets of prior choices (with varying hyperparameter values) are given in Table 4, Table 5 and Table 6. With the new choices of hyperparameters for the respective priors, the final conclusion remains the same in terms of the closeness of the posterior means of the three parameters along with their respective 95% HPD intervals.

Comment on the Convergence of the MCMC Procedure

For the convergence diagnostics of the MCMC procedure in this paper, the method of [26] is utilized. It involves two stages. The first stage, before sampling begins, involves obtaining an over-dispersed estimate of the target distribution(s) and using it to generate the starting points for the desired number of independent chains (in our case, we run the MCMC procedure for m = 10 parallel independent chains of length 2 k = 20,000 each). The second stage involves using the last k iterations of each chain to re-estimate the target distribution of each scalar quantity of interest (in our case α , β , and δ ). Convergence of the MCMC procedure is monitored by the quantity given below. Suppose m = 10 parallel chains are run, each of length k. Let W denote the average within-chain variance and B the between-chain variance computed from the chain means.
The estimated marginal posterior variance is given by
\hat{V} = \frac{k-1}{k}\, W + \frac{m+1}{mk}\, B .
The potential scale reduction factor is defined as
\hat{R} = \frac{\hat{V}}{W} = \frac{k-1}{k} + \frac{m+1}{mk}\,\frac{B}{W} .
With the degrees of freedom adjustment based on the approximating t-distribution, the estimator becomes
\hat{R} = \left( \frac{k-1}{k} + \frac{m+1}{mk}\,\frac{B}{W} \right) \frac{df}{df-2} ,
where d f denotes the estimated degrees of freedom.
Convergence is considered to be achieved when R ^ is sufficiently close to 1 for all scalar parameters of interest as k → ∞ .
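A minimal sketch of the diagnostic above (without the degrees-of-freedom correction), using the convention B = k × Var(chain means); the simulated chains are purely illustrative:

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor, V_hat / W.

    `chains` has shape (m, k): m parallel chains of length k.
    """
    chains = np.asarray(chains, dtype=float)
    m, k = chains.shape
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()             # average within-chain variance
    B = k * means.var(ddof=1)                         # between-chain variance
    V_hat = (k - 1) / k * W + (m + 1) / (m * k) * B   # pooled variance estimate
    return V_hat / W

rng = np.random.default_rng(3)
good = rng.normal(size=(10, 1000))        # well-mixed chains: R-hat near 1
bad = good + np.arange(10)[:, None]       # chains centred at different values
r_good = gelman_rubin(good)
r_bad = gelman_rubin(bad)
```

Chains sampling the same distribution give an R-hat close to 1, while chains stuck around different values inflate B and hence R-hat, as in the second example.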
To assess the convergence of the MCMC procedure under the full conditional setup, we again use the Gelman–Rubin diagnostic. The values of R ^ were computed for each of the m = 10 parallel chains and for all scalar parameters of interest when the true values of the model parameters are α = 1.5 , β = 0.5 , δ = 0.5 . The results for n = 30 , 50 , 100 , and 200 are presented in Table 7, Table 8, Table 9 and Table 10.
From Table 7, Table 8, Table 9 and Table 10, we observe that the values of R ^ are reasonably close to 1 for all parameters, indicating satisfactory convergence of the MCMC chains. Similar computations can be carried out for other true parameter choices.

6. Real-Life Data Application

In this section, we compare the performance of some generalizations of the log-logistic probability model with that of the PLL distribution using a real-life data set. This data set concerns the time to failure of an electronic device studied by [27]. The data set is given in Table 11 below.
The descriptive summary of the data is: Mean = 166.56, Median = 133.50, first quartile = 50.75, third quartile = 281.00. We conjecture that the PLL distribution will provide a reasonably good fit to this data set. To examine this, we proceed as follows.
To fit the proposed PLL distribution to this data set, a goodness-of-fit test has been conducted using the Kolmogorov–Smirnov (K-S) distance and the corresponding p-value; these values are 0.1377 and 0.8397, respectively. Figure 3 contains the empirical c.d.f. (ECDF) plot, the probability–probability (P-P) plot, and the quantile–quantile (Q-Q) plot for the real data set. From the value of the K-S statistic and the graphical representations, we can consider this data set for further analysis. Table 12 presents the point estimates and Table 13 the interval estimates of the parameters based on the given data set. From Table 12, it is observed that the Bayes estimates perform better than the other estimates based on the standard error (SE). From Table 13, it is observed that the HPD credible intervals perform better than the other confidence intervals.
We fitted the following distributions to the data set: the power log-logistic (PLL) distribution, the log-logistic (LL) distribution, and the logistic distribution. Each distribution is fitted by the method of maximum likelihood. The values of the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Kolmogorov–Smirnov distance, and the p-value are used as goodness-of-fit measures to compare the selected models. For illustrative purposes, we also provide a graphical comparison. In Figure 3, we provide the p.d.f. and the c.d.f. of the power log-logistic distribution fitted to the time-to-failure data. Similarly, Figure 4 and Figure 5 represent the same for the log-logistic and the logistic distributions, respectively. Furthermore, Figure 6a,b represent the Q-Q plots of the log-logistic and the logistic distributions, and Figure 7 provides the same for the power log-logistic distribution. From these figures, it appears that the PLL distribution fits the data quite well.
Table 14 lists the values of the AIC, the BIC, and the Kolmogorov–Smirnov distance. The smaller the values of these criteria, the better the fit; the larger the p-value, the better the fit. Figure 8 shows the trace plots of the MCMC samples, which depict the convergence of the Bayes estimates.
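These comparison criteria can be computed from the maximized log-likelihood and the fitted c.d.f.; a small sketch follows, where the numeric inputs are hypothetical and not the values behind Table 14:

```python
import math

def aic_bic(loglik, n_params, n_obs):
    """AIC = 2p - 2*loglik and BIC = p*log(n) - 2*loglik."""
    return 2 * n_params - 2 * loglik, n_params * math.log(n_obs) - 2 * loglik

def ks_distance(data, cdf):
    """Kolmogorov-Smirnov distance between the ECDF of `data` and a fitted c.d.f."""
    x = sorted(data)
    n = len(x)
    d = 0.0
    for i, xi in enumerate(x, start=1):
        f = cdf(xi)
        # ECDF jumps at each order statistic, so check both sides of the step.
        d = max(d, abs(i / n - f), abs(f - (i - 1) / n))
    return d

# Hypothetical 3-parameter fit to 18 observations:
aic, bic = aic_bic(loglik=-110.0, n_params=3, n_obs=18)
d = ks_distance([0.25, 0.75], lambda u: u)  # toy data against a Uniform(0,1) c.d.f.
```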

7. Conclusions

In this paper, we present a new absolutely continuous distribution, named the PLL distribution, which is obtained by generalizing the LL distribution via a power transformation. An apparent reason to generalize a classical distribution is that it gives analysts more analytical freedom to examine real-world data with a variety of shapes. We have observed that the PLL distribution is more flexible than the existing rival probability models. Several useful structural properties, such as the order statistics and incomplete moments, have been discussed. In addition, estimation of the unknown parameters of the PLL distribution is carried out using four different methods of estimation, including the method of maximum likelihood. We observed that, in general, estimation under the Bayesian paradigm using natural conjugate priors provides better estimates than those obtained under the frequentist approach. To examine the efficacy of the PLL distribution, we re-analyzed a real data set involving the time to failure of an electronic device and found that the PLL distribution provides a better fit than the competing models. A bivariate and/or multivariate extension of the proposed PLL distribution can readily be envisioned despite its computational complexity; however, one has to find a real-world motivation before digging deeper. We are currently working on this, and the findings will be published elsewhere.

Author Contributions

Conceptualization, Methodology, and Writing—Original Draft: I.G. and D.K.; Investigation, Methodology, and Writing—review and editing: I.G. and S.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The complete reference of the data is provided in the real data analysis section.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

  • On the uniqueness and existence of MLEs
Since the density given in Equation (2) cannot be written in the form c ( θ ) h ( x ) exp { g ( θ ) T ( x ) } , the PLL distribution does not belong to the regular exponential family. Therefore, the sufficient conditions for the existence and uniqueness of the maximum likelihood estimates (MLEs) of the parameters of a regular exponential family cannot be applied here. To examine the existence and uniqueness of the MLEs of α , β , and δ , we consider the following.
At first, one can observe the following:
  • From Equation (17), \lim_{\delta \to 0} \partial \log L / \partial \delta = \infty and \lim_{\delta \to \infty} \partial \log L / \partial \delta = 0 . This implies that \partial \log L / \partial \delta is a decreasing function which has a unique root \hat{\delta} \in ( 0 , \infty ) .
  • From Equation (18), \lim_{\alpha \to 0} \partial \log L / \partial \alpha = \infty and \lim_{\alpha \to \infty} \partial \log L / \partial \alpha = 0 . This implies that \partial \log L / \partial \alpha is a decreasing function which has a unique root \hat{\alpha} \in ( 0 , \infty ) .
  • From Equation (19), \lim_{\beta \to 0} \partial \log L / \partial \beta = \infty and \lim_{\beta \to \infty} \partial \log L / \partial \beta = 0 . This implies that \partial \log L / \partial \beta is a decreasing function which has a unique root \hat{\beta} \in ( 0 , \infty ) .
Next, consider the following:
2 log L α 2 = 2 α β i = 1 n x i δ x i δ α 1 β 1 α 2 β x i δ α 1 β + 1 β + n + 1 α 2 β α β 1 × [ 2 β i = 1 n x i δ x i δ α 1 β 1 α 2 β x i δ α 1 β + 1 + 2 α β i = 1 n x i 2 δ x i δ α 2 β 2 α 4 β 2 x i δ α 1 β + 1 2 + 2 x i δ x i δ α 1 β 1 α 3 β x i δ α 1 β + 1 x i δ x i δ x i δ α 1 β 2 α 2 x i δ x i δ α 1 β 2 α 2 β α 2 β x i δ α 1 β + 1 ] .
2 log L β 2 = β 3 × [ 2 β 3 i = 1 n x i δ α 2 / β log 2 x i δ α β 4 x i δ α 1 β + 1 2 + x i δ α 1 β log 2 x i δ α β 4 x i δ α 1 β + 1 + 2 x i δ α 1 β log x i δ α β 3 x i δ α 1 β + 1 + 2 i = 1 n log x i δ α 2 log ( α ) + β n ] .
2 log L δ 2 = 2 i = 1 n x i 2 δ log 2 ( x i ) x i δ α 2 β 2 α 2 β 2 x i δ α 1 β + 1 2 + x i δ log 2 ( x i ) x i δ α 1 β 1 α β x i δ α 1 β + 1 + x i δ log ( x i ) x i δ log ( x i ) x i δ α 1 β 2 α β x i δ log ( x i ) x i δ α 1 β 2 α α β x i δ α 1 β + 1 n δ 2 .
2 log L α β = 2 i = 1 n x i δ x i δ α 1 β 1 log x i δ α α 2 β 3 x i δ α 1 β + 1 x i δ x i δ α 2 β 1 log x i δ α α 2 β 3 x i δ α 1 β + 1 2 + x i δ x i δ α 1 β 1 α 2 β 2 x i δ α 1 β + 1 + 1 α β 2 + n α β 2 .
2 log L δ β = 2 i = 1 n x i δ log ( x i ) x i δ α 1 β 1 log x i δ α α β 3 x i δ α 1 β + 1 + x i δ log ( x i ) x i δ α 2 β 1 log x i δ α α β 3 x i δ α 1 β + 1 2 x i δ log ( x i ) x i δ α 1 β 1 α β 2 x i δ α 1 β + 1 i = 1 n log ( x i ) β 2 .
2 log L δ α = 2 i = 1 n x i 2 δ log ( x i ) x i δ α 2 β 2 α 3 β 2 x i δ α 1 β + 1 2 x i δ log ( x i ) x i δ α 1 β 1 α 2 β x i δ α 1 β + 1 + x i δ log ( x i ) x i δ x i δ α 1 β 2 α 2 x i δ x i δ α 1 β 2 α 2 β α β x i δ α 1 β + 1 .
Here, the associated observed Fisher information matrix J ( η ^ ) is given by
J(\hat{\eta}) = \begin{pmatrix} J_{\alpha\alpha} & J_{\alpha\beta} & J_{\alpha\delta} \\ J_{\beta\alpha} & J_{\beta\beta} & J_{\beta\delta} \\ J_{\delta\alpha} & J_{\delta\beta} & J_{\delta\delta} \end{pmatrix},
where η = (α,β,δ).
J_{\alpha\alpha} = \left.\frac{\partial^2 \log L}{\partial \alpha^2}\right|_{\eta=\hat{\eta}}, \quad J_{\beta\beta} = \left.\frac{\partial^2 \log L}{\partial \beta^2}\right|_{\eta=\hat{\eta}}, \quad J_{\delta\delta} = \left.\frac{\partial^2 \log L}{\partial \delta^2}\right|_{\eta=\hat{\eta}},
J_{\alpha\beta} = \left.\frac{\partial^2 \log L}{\partial \alpha\,\partial \beta}\right|_{\eta=\hat{\eta}}, \quad J_{\beta\delta} = \left.\frac{\partial^2 \log L}{\partial \beta\,\partial \delta}\right|_{\eta=\hat{\eta}}, \quad J_{\delta\alpha} = \left.\frac{\partial^2 \log L}{\partial \delta\,\partial \alpha}\right|_{\eta=\hat{\eta}}, \quad J_{\delta\beta} = \left.\frac{\partial^2 \log L}{\partial \delta\,\partial \beta}\right|_{\eta=\hat{\eta}}.
Next, we consider the determinant of the associated Hessian matrix (alias the observed Fisher Information matrix), which is given by
\Lambda = J_{\alpha\alpha}\left(J_{\beta\beta}J_{\delta\delta} - J_{\beta\delta}J_{\delta\beta}\right) - J_{\alpha\beta}\left(J_{\beta\alpha}J_{\delta\delta} - J_{\delta\alpha}J_{\beta\delta}\right) + J_{\alpha\delta}\left(J_{\beta\alpha}J_{\delta\beta} - J_{\delta\alpha}J_{\beta\beta}\right).
The elements of the Hessian matrix of second partial derivatives suggest that, for each ( α , β , δ ) ∈ Δ at which the gradient vector of L ( α , β , δ | X ) vanishes, the Hessian is negative definite. Thus, it follows directly from Theorem 2.1 of [28] that the MLE in Δ is unique, and the likelihood function attains: (a) no other maxima in Δ ; (b) no minima or other stationary points in Δ ; and (c) its infimum value 0 on the boundary of Δ and nowhere else. Hence the proof.
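Numerically, negative definiteness of the Hessian at the MLE can be sanity-checked through its eigenvalues; a minimal sketch with a hypothetical symmetric matrix (not the PLL Hessian) follows:

```python
import numpy as np

def is_negative_definite(H):
    """True iff all eigenvalues of the symmetric matrix H are negative."""
    return bool(np.all(np.linalg.eigvalsh(np.asarray(H, dtype=float)) < 0))

# Hypothetical Hessian evaluated at a candidate stationary point:
H = [[-2.0, 0.5, 0.0],
     [0.5, -1.5, 0.2],
     [0.0, 0.2, -1.0]]
nd = is_negative_definite(H)
```

`eigvalsh` exploits symmetry of the Hessian; all eigenvalues being negative is equivalent to negative definiteness, which certifies a local maximum of the log-likelihood.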

References

  1. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions; John Wiley and Sons: Albany, NY, USA, 1995; Volume 2. [Google Scholar]
  2. De Santana, T.V.F.; Ortega, E.M.; Cordeiro, G.M.; Silva, G.O. The Kumaraswamy-log-logistic distribution. J. Stat. Theory Appl. 2012, 11, 265–291. [Google Scholar]
  3. Gui, W. Marshall-Olkin extended log-logistic distribution and its application in minification processes. Appl. Math. Sci. 2013, 7, 3947–3961. [Google Scholar] [CrossRef]
  4. Tahir, M.H.; Mansoor, M.; Zubair, M.; Hamedani, G. McDonald log-logistic distribution with an application to breast cancer data. J. Stat. Theory Appl. 2014, 13, 65–82. [Google Scholar] [CrossRef]
  5. Bennett, S. Log-logistic regression models for survival data. J. R. Stat. Soc. Ser. C Appl. Stat. 1983, 32, 165–171. [Google Scholar] [CrossRef]
  6. Zheng, X.; Chiang, J.Y.; Tsai, T.R.; Wang, S. Estimating the failure rate of the log-logistic distribution by smooth adaptive and bias-correction methods. Comput. Ind. Eng. 2021, 156, 107188. [Google Scholar] [CrossRef]
  7. Granzotto, D.C.T.; Louzada, F. The transmuted log-logistic distribution: Modeling, inference, and an application to a polled tabapua race time up to first calving data. Commun. Stat.-Theory Methods 2015, 44, 3387–3402. [Google Scholar] [CrossRef]
  8. Lemonte, A.J. The beta log-logistic distribution. Braz. J. Probab. Stat. 2014, 28, 313–332. [Google Scholar] [CrossRef]
  9. Aldahlan, M.A. Alpha power transformed log-logistic distribution with application to breaking stress data. Adv. Math. Phys. 2020, 2020, 1–9. [Google Scholar] [CrossRef]
  10. Guure, C.B. Inference on the loglogistic model with right censored data. Austin. Biom. Biostat. 2015, 2, 1015. [Google Scholar]
  11. Adepoju, A.A.; Elbarkawy, M.A.; Ishaq, A.I.; Singh, N.S.S.; Daud, H.; Suleiman, A.A.; Almetwally, E.M.; Elgarhy, M. A Flexible Extension of the Log-Logistic Distribution with Application to Cancer Data. Int. J. 2025, 14, 627. [Google Scholar] [CrossRef]
  12. Alfaer, N.M.; Gemeay, A.M.; Aljohani, H.M.; Afify, A.Z. The extended log-logistic distribution: Inference and actuarial applications. Mathematics 2021, 9, 1386. [Google Scholar] [CrossRef]
  13. Al-Shomrani, A.A.; Shawky, A.I.; Arif, O.H.; Aslam, M. Log-logistic distribution for survival data analysis using MCMC. Springer Plus 2016, 5, 1–16. [Google Scholar] [CrossRef]
  14. Afify, A.Z.; Hussein, E.A.; Alnssyan, B.; Mahran, H.A. The extended log-logistic distribution: Properties, inference, and applications in medicine and geology. J. Stat. Appl. Probab. 2023, 12, 1155–1580. [Google Scholar] [CrossRef]
  15. Yang, G.; Liu, D.; Wang, J.; Xie, M.G. Meta-analysis framework for exact inferences with application to the analysis of rare events. Biometrics 2016, 72, 1378–1386. [Google Scholar] [CrossRef]
  16. Fan, Z.; Liu, D.; Chen, Y.; Zhang, N. Something out of nothing? The influence of double-zero studies in meta-analysis of adverse events in clinical trials. Stat. Biosci. 2024, 1–19. [Google Scholar] [CrossRef]
  17. Ashkar, F.; Mahdi, S. Fitting the log-logistic distribution by generalized moments. J. Hydrol. 2006, 328, 694–703. [Google Scholar] [CrossRef]
  18. Galton, F. Enquiries into Human Faculty and Its Development; Macmillan: London, UK, 1883. [Google Scholar]
  19. Moors, J.J.A. A quantile alternative for Kurtosis. J. R. Stat. Soc. Ser. D 1988, 37, 25–32. [Google Scholar] [CrossRef]
  20. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. A First Course in Order Statistics; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2008. [Google Scholar]
  21. Cheng, R.; Amin, N. Maximum Product of Spacings Estimation with Application to the Lognormal Distribution; Mathematical Report 79-1; Department of Mathematics, UWIST: Cardiff, Wales, 1979. [Google Scholar]
  22. Anatolyev, S.; Kosenok, G. An alternative to maximum likelihood based on spacings. Econom. Theory 2005, 21, 472–476. [Google Scholar] [CrossRef]
  23. Kundu, D.; Pradhan, B. Bayesian inference and life testing plans for generalized exponential distribution. Sci. China Ser. A Math. 2009, 52, 1373–1388. [Google Scholar] [CrossRef]
  24. Dutta, S.; Kayal, S. Estimation and prediction for Burr type III distribution based on unified progressive hybrid censoring scheme. J. Appl. Stat. 2024, 51, 1–33. [Google Scholar] [CrossRef] [PubMed]
  25. Giannone, D.; Lenza, M.; Primiceri, G.E. Prior selection for vector autoregressions. Rev. Econ. Stat. 2015, 97, 436–451. [Google Scholar] [CrossRef]
  26. Gelman, A.; Rubin, D.B. [Practical Markov chain Monte Carlo]: Rejoinder: Replication without contrition. Stat. Sci. 1992, 7, 503–511. [Google Scholar] [CrossRef]
  27. Dutta, S.; Dey, S.; Kayal, S. Bayesian survival analysis of logistic exponential distribution for adaptive progressive Type-II censored data. Comput. Stat. 2024, 39, 2109–2155. [Google Scholar] [CrossRef]
  28. Mäkeläinen, T.; Schmidt, K.; Styan, G.P. On the existence and uniqueness of the maximum likelihood estimate of a vector-valued parameter in fixed-size samples. Ann. Stat. 1981, 9, 758–767. [Google Scholar] [CrossRef]
Figure 1. Plot of the PLL p.d.f. for various values of parameters α , β and δ .
Figure 2. Hazard rate function of PLL distribution for various values of parameters α , β and δ .
Figure 3. The p.d.f. and the c.d.f. of the power log-logistic distribution fitted to the time to failure of an electronic device.
Figure 4. The p.d.f. and c.d.f. of the log-logistic distribution fitted to the time to failure of an electronic device.
Figure 5. The p.d.f. and the c.d.f. of the Logistic distribution fitted to the time to failure of an electronic device.
Figure 6. (a) Q-Q Plot of LL distribution; (b) Q-Q Plot of L distribution.
Figure 7. Q-Q Plots of the PLL distribution.
Figure 8. Trace plots (from (top) to (bottom)) of MCMC samples of α , β , and δ for electronic device data.
Table 1. Mean, variance, skewness, and kurtosis for the different values of parameters.
β | α | δ | Mean | Variance | Skewness | Kurtosis
0.5 | 1.5 | 3.5 | 1.161420 | 0.098500 | 2.125250 | 10.361600
0.5 | 2 | 3.5 | 1.260910 | 0.116110 | 2.125080 | 10.361570
0.5 | 2.5 | 3.5 | 1.343920 | 0.131900 | 2.125020 | 10.361720
0.5 | 3 | 3.5 | 1.415790 | 0.146380 | 2.125050 | 10.361610
0.5 | 3.5 | 3.5 | 1.479539 | 0.159865 | 2.124856 | 10.361731
0.5 | 4.5 | 3.5 | 1.589683 | 0.184554 | 2.124805 | 10.361928
0.5 | 5.5 | 3.5 | 1.683490 | 0.206978 | 2.124704 | 10.361964
0.5 | 6 | 3.5 | 1.725867 | 0.217528 | 2.124943 | 10.361529
0.5 | 10 | 3.5 | 1.997068 | 0.291261 | 2.125026 | 10.361725
1 | 1.5 | 4.5 | 1.188500 | 0.285220 | 6.176897 | 29.556462
1 | 2 | 4.5 | 1.266960 | 0.324120 | 6.176608 | 29.556169
1 | 2.5 | 4.5 | 1.331370 | 0.357910 | 6.176768 | 29.556373
1 | 3 | 4.5 | 1.386420 | 0.388130 | 6.176549 | 29.556093
1 | 3.5 | 4.5 | 1.434742 | 0.415653 | 6.176398 | 29.556052
1 | 4.5 | 4.5 | 1.517149 | 0.464771 | 6.176614 | 29.556172
1 | 5.5 | 4.5 | 1.586335 | 0.508127 | 6.176443 | 29.555964
1 | 6 | 4.5 | 1.617307 | 0.528161 | 6.176690 | 29.556364
1 | 10 | 4.5 | 1.811724 | 0.662777 | 6.176467 | 29.556176
1.2 | 1.5 | 5.5 | 1.165660 | 0.262170 | 8.869790 | 52.534350
1.2 | 2 | 5.5 | 1.228250 | 0.291090 | 8.869680 | 52.534350
1.2 | 2.5 | 5.5 | 1.279110 | 0.315690 | 8.869720 | 52.534360
1.2 | 3 | 5.5 | 1.322220 | 0.337330 | 8.869710 | 52.534230
1.2 | 3.5 | 5.5 | 1.359808 | 0.356786 | 8.869724 | 52.534315
1.2 | 4.5 | 5.5 | 1.423384 | 0.390927 | 8.869724 | 52.534350
1.2 | 5.5 | 5.5 | 1.476276 | 0.420520 | 8.869874 | 52.534521
1.2 | 6 | 5.5 | 1.499817 | 0.434030 | 8.869891 | 52.534626
1.2 | 10 | 5.5 | 1.645790 | 0.522637 | 8.869831 | 52.534575
1.4 | 1.5 | 6 | 1.172090 | 0.313660 | 9.650770 | 62.019420
1.4 | 2 | 6 | 1.229660 | 0.345230 | 9.650490 | 62.018780
1.4 | 2.5 | 6 | 1.276260 | 0.371890 | 9.650700 | 62.019290
1.4 | 3 | 6 | 1.315630 | 0.395190 | 9.650550 | 62.018820
1.4 | 3.5 | 6 | 1.349876 | 0.416032 | 9.650528 | 62.018825
1.4 | 4.5 | 6 | 1.407617 | 0.452386 | 9.650626 | 62.019082
1.4 | 5.5 | 6 | 1.455491 | 0.483681 | 9.650621 | 62.019040
1.4 | 6 | 6 | 1.476752 | 0.497916 | 9.650737 | 62.019315
1.4 | 10 | 6 | 1.607987 | 0.590342 | 9.650496 | 62.018961
2 | 1.5 | 10 | 1.113193 | 0.193720 | 12.294730 | 113.01472
2 | 2 | 10 | 1.145682 | 0.205195 | 12.294950 | 113.01528
2 | 2.5 | 10 | 1.171535 | 0.214559 | 12.294890 | 113.01514
2 | 3 | 10 | 1.193090 | 0.222528 | 12.294960 | 113.01530
2 | 3.5 | 10 | 1.211624 | 0.229496 | 12.294897 | 113.01525
2 | 4.5 | 10 | 1.242460 | 0.241325 | 12.294695 | 113.01448
2 | 5.5 | 10 | 1.267644 | 0.251209 | 12.294799 | 113.01479
2 | 6 | 10 | 1.278723 | 0.255616 | 12.294738 | 113.01455
2 | 10 | 10 | 1.345740 | 0.283114 | 12.294952 | 113.01538
Table 2. Simulated ABs and MSEs (in parentheses) of the different classical and Bayes estimates under SELF for different values of ( α , β , δ ) .
( α , β , δ ) | n | MLE ( α , β , δ ) | MPSE ( α , β , δ ) | WLSE ( α , β , δ ) | Bayes ( α , β , δ )
(1.5, 0.5, 0.5) | 20 | 0.1861, 0.0887, 0.0802 | 0.1725, 0.0844, 0.0783 | 0.1920, 0.0905, 0.0823 | 0.1051, 0.0535, 0.0472
 | | (0.0544, 0.0143, 0.0113) | (0.0515, 0.0128, 0.0102) | (0.0565, 0.0158, 0.0122) | (0.0283, 0.0082, 0.0061)
 | 30 | 0.1493, 0.0676, 0.0638 | 0.1386, 0.0634, 0.0607 | 0.1541, 0.0691, 0.0657 | 0.0982, 0.0427, 0.0409
 | | (0.0349, 0.0088, 0.0072) | (0.0302, 0.0075, 0.0066) | (0.0370, 0.0095, 0.0079) | (0.0201, 0.0059, 0.0048)
 | 50 | 0.1185, 0.0552, 0.0501 | 0.1092, 0.0523, 0.0485 | 0.1243, 0.0591, 0.0536 | 0.0753, 0.0352, 0.0329
 | | (0.0228, 0.0057, 0.0046) | (0.0203, 0.0051, 0.0042) | (0.0241, 0.0060, 0.0048) | (0.0156, 0.0032, 0.0028)
 | 100 | 0.0856, 0.0392, 0.0364 | 0.0822, 0.0369, 0.0341 | 0.0894, 0.0411, 0.0395 | 0.0510, 0.0225, 0.0218
 | | (0.0179, 0.0037, 0.0033) | (0.0165, 0.0035, 0.0031) | (0.0195, 0.0042, 0.0040) | (0.0103, 0.0025, 0.0024)
 | 200 | 0.0592, 0.0319, 0.0308 | 0.0575, 0.0310, 0.0301 | 0.0627, 0.0330, 0.0319 | 0.0352, 0.0175, 0.0168
 | | (0.0093, 0.0020, 0.0019) | (0.0089, 0.0019, 0.0018) | (0.0101, 0.0023, 0.0021) | (0.0068, 0.0012, 0.0011)
(1.2, 0.75, 0.5) | 20 | 0.1581, 0.0581, 0.0393 | 0.1532, 0.0565, 0.0378 | 0.1627, 0.0615, 0.0408 | 0.0963, 0.0395, 0.0227
 | | (0.0358, 0.0054, 0.0032) | (0.0334, 0.0050, 0.0030) | (0.0362, 0.0058, 0.0036) | (0.0261, 0.0036, 0.0020)
 | 30 | 0.1534, 0.0565, 0.0378 | 0.1503, 0.0543, 0.0365 | 0.1593, 0.0597, 0.0396 | 0.0942, 0.0378, 0.0220
 | | (0.0342, 0.0051, 0.0030) | (0.0322, 0.0048, 0.0029) | (0.0350, 0.0056, 0.0035) | (0.0245, 0.0034, 0.0019)
 | 50 | 0.1309, 0.0510, 0.0329 | 0.1459, 0.0503, 0.0320 | 0.1367, 0.0568, 0.0380 | 0.0905, 0.0349, 0.0207
 | | (0.0310, 0.0046, 0.0027) | (0.0303, 0.0045, 0.0026) | (0.0322, 0.0054, 0.0033) | (0.0219, 0.0030, 0.0018)
 | 100 | 0.1267, 0.0491, 0.0296 | 0.1314, 0.0495, 0.0290 | 0.1324, 0.0525, 0.0331 | 0.0864, 0.0318, 0.0198
 | | (0.0305, 0.0045, 0.0026) | (0.0298, 0.0044, 0.0025) | (0.0314, 0.0054, 0.0031) | (0.0204, 0.0027, 0.0017)
 | 200 | 0.1198, 0.0455, 0.0272 | 0.1163, 0.0443, 0.0266 | 0.1224, 0.0480, 0.0294 | 0.0796, 0.0284, 0.0171
 | | (0.0296, 0.0041, 0.0024) | (0.0275, 0.0042, 0.0024) | (0.0302, 0.0050, 0.0029) | (0.0183, 0.0025, 0.0016)
(1.5, 0.5, 1.25) | 20 | 0.5132, 0.2927, 0.1257 | 0.4639, 0.2836, 0.1198 | 0.5427, 0.2981, 0.1295 | 0.2681, 0.1575, 0.0793
 | | (0.3565, 0.1838, 0.0312) | (0.3150, 0.1796, 0.0301) | (0.3781, 0.1892, 0.0351) | (0.1759, 0.0965, 0.0210)
 | 30 | 0.4934, 0.2850, 0.1209 | 0.4425, 0.2791, 0.1133 | 0.5384, 0.2937, 0.1240 | 0.2473, 0.1498, 0.0757
 | | (0.2961, 0.1562, 0.0285) | (0.2773, 0.1510, 0.0268) | (0.3259, 0.1772, 0.0324) | (0.1494, 0.0826, 0.0165)
 | 50 | 0.4382, 0.2696, 0.1134 | 0.4151, 0.2682, 0.1074 | 0.4909, 0.2783, 0.1178 | 0.2130, 0.1285, 0.0696
 | | (0.2608, 0.1431, 0.0246) | (0.2581, 0.1403, 0.0239) | (0.2934, 0.1581, 0.0295) | (0.1358, 0.0785, 0.0151)
 | 100 | 0.3969, 0.2387, 0.1028 | 0.3843, 0.2329, 0.1012 | 0.4458, 0.2620, 0.1064 | 0.1965, 0.1143, 0.0627
 | | (0.2010, 0.1268, 0.0210) | (0.1935, 0.1215, 0.0202) | (0.2459, 0.1318, 0.0233) | (0.1072, 0.0645, 0.0133)
 | 200 | 0.3162, 0.1864, 0.0897 | 0.2968, 0.1767, 0.0835 | 0.3512, 0.2034, 0.0939 | 0.1214, 0.0859, 0.0416
 | | (0.1254, 0.0961, 0.0162) | (0.1189, 0.0924, 0.0150) | (0.1352, 0.1023, 0.0184) | (0.0759, 0.0461, 0.0095)
Table 3. Simulated AWs and CPs (in parentheses) of the ACI, Boot-p, Boot-t and HPD credible intervals for different values of ( α , β , δ ) .
( α , β , δ ) | n | ACI ( α , β , δ ) | Boot-p ( α , β , δ ) | Boot-t ( α , β , δ ) | HPD ( α , β , δ )
(1.5, 0.5, 0.5) | 20 | 0.4159, 0.2141, 0.2035 | 0.3682, 0.1995, 0.1967 | 0.3451, 0.1927, 0.1893 | 0.1750, 0.1054, 0.1015
 | | (0.9026, 0.8957, 0.9063) | (0.9078, 0.9052, 0.9148) | (0.9102, 0.9076, 0.9175) | (0.9385, 0.9330, 0.9372)
 | 30 | 0.4086, 0.2115, 0.2010 | 0.3599, 0.1941, 0.1923 | 0.3389, 0.1894, 0.1872 | 0.1706, 0.1031, 0.0989
 | | (0.9069, 0.9028, 0.9105) | (0.9110, 0.9089, 0.9173) | (0.9135, 0.9097, 0.9199) | (0.9404, 0.9385, 0.9396)
 | 50 | 0.3753, 0.1956, 0.1934 | 0.3267, 0.1835, 0.1810 | 0.3236, 0.1821, 0.1803 | 0.1583, 0.0969, 0.0925
 | | (0.9113, 0.9057, 0.9149) | (0.9178, 0.9144, 0.9204) | (0.9196, 0.9131, 0.9217) | (0.9423, 0.9417, 0.9429)
 | 100 | 0.2981, 0.1358, 0.1315 | 0.2869, 0.1327, 0.1302 | 0.2831, 0.1305, 0.1296 | 0.0984, 0.0783, 0.0754
 | | (0.9235, 0.9108, 0.9212) | (0.9256, 0.9180, 0.9237) | (0.9272, 0.9204, 0.9255) | (0.9441, 0.9435, 0.9447)
 | 200 | 0.2769, 0.1358, 0.1315 | 0.2869, 0.1327, 0.1302 | 0.2831, 0.1305, 0.1296 | 0.0984, 0.0783, 0.0754
 | | (0.9274, 0.9159, 0.9288) | (0.9297, 0.9213, 0.9261) | (0.9308, 0.9245, 0.9281) | (0.9469, 0.9460, 0.9461)
(1.2, 0.75, 0.5) | 20 | 0.3357, 0.1852, 0.1804 | 0.3309, 0.1831, 0.1786 | 0.3282, 0.1812, 0.1770 | 0.1689, 0.1010, 0.0972
 | | (0.9105, 0.9056, 0.9124) | (0.9172, 0.9105, 0.9163) | (0.9204, 0.9138, 0.9195) | (0.9402, 0.9367, 0.9395)
 | 30 | 0.3154, 0.1796, 0.1763 | 0.3123, 0.1785, 0.1750 | 0.3081, 0.1763, 0.1738 | 0.1554, 0.0986, 0.0953
 | | (0.9165, 0.9094, 0.9156) | (0.9202, 0.9135, 0.9198) | (0.9230, 0.9172, 0.9236) | (0.9419, 0.9398, 0.9416)
 | 50 | 0.2969, 0.1695, 0.1637 | 0.2892, 0.1624, 0.1595 | 0.2853, 0.1605, 0.1582 | 0.1461, 0.0946, 0.0921
 | | (0.9280, 0.9161, 0.9243) | (0.9324, 0.9208, 0.9275) | (0.9350, 0.9265, 0.9304) | (0.9443, 0.9439, 0.9458)
 | 100 | 0.2697, 0.1585, 0.1550 | 0.2636, 0.1556, 0.1534 | 0.2583, 0.1529, 0.1510 | 0.1249, 0.0894, 0.0875
 | | (0.9324, 0.9208, 0.9292) | (0.9355, 0.9261, 0.9310) | (0.9375, 0.9314, 0.9339) | (0.9468, 0.9459, 0.9476)
 | 200 | 0.2387, 0.1431, 0.1412 | 0.2343, 0.1382, 0.1367 | 0.2312, 0.1358, 0.1326 | 0.1093, 0.0827, 0.0803
 | | (0.9387, 0.9281, 0.9353) | (0.9403, 0.9336, 0.9389) | (0.9423, 0.9357, 0.9402) | (0.9489, 0.9477, 0.9490)
(1.5, 0.5, 1.25) | 20 | 0.5635, 0.2419, 0.2275 | 0.5240, 0.2370, 0.2204 | 0.5034, 0.2328, 0.2168 | 0.2752, 0.1284, 0.1237
 | | (0.9082, 0.9143, 0.9168) | (0.9115, 0.9196, 0.9205) | (0.9157, 0.9231, 0.9249) | (0.9362, 0.9385, 0.9403)
 | 30 | 0.5025, 0.2293, 0.2124 | 0.4864, 0.2231, 0.2096 | 0.4750, 0.2189, 0.2027 | 0.2346, 0.1161, 0.1133
 | | (0.9165, 0.9220, 0.9252) | (0.9208, 0.9264, 0.9283) | (0.9251, 0.9289, 0.9306) | (0.9421, 0.9439, 0.9444)
 | 50 | 0.4126, 0.2008, 0.1985 | 0.4035, 0.1996, 0.1939 | 0.3948, 0.1925, 0.1899 | 0.2010, 0.0989, 0.0964
 | | (0.9275, 0.9296, 0.9323) | (0.9313, 0.9327, 0.9344) | (0.9340, 0.9352, 0.9368) | (0.9443, 0.9452, 0.9468)
 | 100 | 0.3650, 0.1795, 0.1742 | 0.3461, 0.1735, 0.1692 | 0.3386, 0.1704, 0.1658 | 0.1752, 0.0891, 0.0865
 | | (0.9302, 0.9321, 0.9344) | (0.9352, 0.9364, 0.9368) | (0.9381, 0.9392, 0.9396) | (0.9467, 0.9475, 0.9472)
 | 200 | 0.3209, 0.1632, 0.1596 | 0.3055, 0.1586, 0.1545 | 0.3020, 0.1527, 0.1495 | 0.1461, 0.0820, 0.0782
 | | (0.9351, 0.9382, 0.9396) | (0.9388, 0.9405, 0.9421) | (0.9418, 0.9422, 0.9430) | (0.9475, 0.9482, 0.9493)
Table 4. Posterior summary for the PLL distribution [Choice 2].
Sample Size | α ^ : P.M. (95% HPD) | β ^ : P.M. (95% HPD) | δ ^ : P.M. (95% HPD)
20 | 1.234 (1.054, 3.436) | 0.423 (0.273, 1.781) | 0.398 (0.277, 1.638)
30 | 1.412 (1.078, 3.409) | 0.431 (0.277, 1.546) | 0.403 (0.283, 1.537)
50 | 1.424 (1.112, 3.232) | 0.435 (0.279, 1.541) | 0.412 (0.289, 1.531)
100 | 1.428 (1.117, 2.789) | 0.438 (0.281, 1.538) | 0.422 (0.291, 1.473)
200 | 1.453 (1.242, 2.421) | 0.447 (0.282, 1.532) | 0.439 (0.292, 1.470)
Table 5. Posterior summary for the PLL distribution [Choice 3].
Sample Size | α ^ : P.M. (95% HPD) | β ^ : P.M. (95% HPD) | δ ^ : P.M. (95% HPD)
20 | 1.226 (1.064, 3.463) | 0.418 (0.274, 1.692) | 0.386 (0.273, 1.664)
30 | 1.417 (1.073, 3.412) | 0.427 (0.278, 1.564) | 0.398 (0.268, 1.637)
50 | 1.422 (1.114, 3.238) | 0.430 (0.281, 1.516) | 0.413 (0.292, 1.584)
100 | 1.425 (1.116, 2.987) | 0.432 (0.283, 1.502) | 0.426 (0.293, 1.488)
200 | 1.448 (1.224, 2.412) | 0.442 (0.278, 1.496) | 0.435 (0.295, 1.468)
Table 6. Posterior summary for the PLL distribution [Choice 4].
Sample Size | α̂: P.M., 95% HPD | β̂: P.M., 95% HPD | δ̂: P.M., 95% HPD
20 | 1.238, (1.112, 3.263) | 0.413, (0.267, 1.689) | 0.381, (0.271, 1.643)
30 | 1.421, (1.071, 3.210) | 0.429, (0.268, 1.564) | 0.401, (0.286, 1.573)
50 | 1.428, (1.121, 3.132) | 0.434, (0.272, 1.546) | 0.407, (0.283, 1.513)
100 | 1.436, (1.217, 2.789) | 0.432, (0.274, 1.528) | 0.418, (0.290, 1.479)
200 | 1.452, (1.246, 2.321) | 0.436, (0.283, 1.517) | 0.429, (0.293, 1.468)
Table 7. Values of R ^ for n = 30 .
Chain Number | R̂_α | R̂_β | R̂_δ
1 | 1.07 | 0.83 | 1.08
2 | 1.05 | 0.91 | 1.04
3 | 1.01 | 1.03 | 0.93
4 | 0.81 | 1.01 | 0.89
5 | 0.83 | 0.95 | 0.92
6 | 0.87 | 0.94 | 0.86
7 | 0.76 | 0.83 | 0.92
8 | 0.82 | 0.85 | 0.85
9 | 0.97 | 0.88 | 0.87
10 | 0.96 | 0.95 | 0.92
Table 8. Values of R ^ for n = 50 .
Chain Number | R̂_α | R̂_β | R̂_δ
1 | 1.13 | 1.12 | 0.81
2 | 1.05 | 1.07 | 0.84
3 | 1.01 | 1.05 | 0.83
4 | 0.96 | 1.02 | 0.77
5 | 1.01 | 0.96 | 0.87
6 | 0.97 | 1.01 | 1.06
7 | 0.89 | 0.93 | 0.98
8 | 0.91 | 0.95 | 0.84
9 | 0.92 | 0.82 | 0.86
10 | 0.96 | 0.83 | 0.89
Table 9. Values of R ^ for n = 100 .
Chain Number | R̂_α | R̂_β | R̂_δ
1 | 1.07 | 0.83 | 1.08
2 | 1.05 | 0.91 | 1.04
3 | 1.01 | 1.03 | 0.93
4 | 0.81 | 1.01 | 0.89
5 | 0.83 | 0.95 | 0.92
6 | 0.87 | 0.94 | 0.86
7 | 0.76 | 0.83 | 0.92
8 | 0.82 | 0.85 | 0.85
9 | 0.97 | 0.88 | 0.87
10 | 0.96 | 0.95 | 0.90
Table 10. Values of R ^ for n = 200 .
Chain Number | R̂_α | R̂_β | R̂_δ
1 | 1.11 | 1.07 | 1.08
2 | 1.12 | 1.09 | 1.10
3 | 1.01 | 0.95 | 1.02
4 | 0.78 | 0.89 | 0.84
5 | 1.02 | 1.02 | 0.84
6 | 0.84 | 0.91 | 0.86
7 | 0.77 | 0.81 | 0.92
8 | 0.85 | 0.82 | 0.79
9 | 0.90 | 0.89 | 0.97
10 | 0.92 | 0.93 | 0.94
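Tables 7–10 report Gelman–Rubin R̂ values per chain; a common textbook form instead pools all chains into a single R̂ per parameter. A minimal sketch of that pooled diagnostic, using simulated well-mixed chains rather than the paper's sampler output:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for an (m, n) array of m chains."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    B = n * chain_means.var(ddof=1)            # between-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(0)
mixed = rng.normal(0.44, 0.3, size=(10, 5000))  # 10 well-mixed chains
print(round(gelman_rubin(mixed), 3))            # close to 1 for converged chains
```

Values close to 1 indicate that the chains have mixed; in practice a cutoff such as R̂ < 1.1 is used to declare convergence.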
Table 11. The time to failure of an electronic device.
5, 11, 21, 31, 46, 75, 98, 122, 145, 165, 196, 224, 245, 293, 321, 330, 350, 420
Table 12. Point estimates and the associated standard errors (SEs) of the different classical and Bayes estimates for electrical device data.
Estimates | α̂ | SE | β̂ | SE | δ̂ | SE
MLE | 3.1750 | 0.2846 | 0.1746 | 0.0537 | 0.2443 | 0.0981
MPSE | 3.2019 | 0.1951 | 0.1653 | 0.0466 | 0.2205 | 0.0827
WLSE | 3.1425 | 0.3301 | 0.1883 | 0.0624 | 0.2162 | 0.1028
Bayes | 3.2455 | 0.1304 | 0.1563 | 0.0369 | 0.1751 | 0.0526
Table 13. Estimate values and corresponding widths of the interval estimates for electrical device data.
Estimates | α: interval, Length | β: interval, Length | δ: interval, Length
ACI | (2.6172, 3.7328), 1.1156 | (0.0693, 0.2799), 0.2106 | (0.0520, 0.4365), 0.3845
Boot-p | (2.7051, 3.5963), 0.8912 | (0.0762, 0.2548), 0.1786 | (0.0721, 0.4019), 0.3298
Boot-t | (2.7529, 3.5082), 0.7553 | (0.0844, 0.2427), 0.1583 | (0.0952, 0.3865), 0.2913
Bayes | (3.0249, 3.5217), 0.4968 | (0.1208, 0.1961), 0.0753 | (0.1127, 0.2659), 0.1532
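The Boot-p intervals in Table 13 are percentile bootstrap intervals. A minimal sketch of the percentile method, illustrated on the sample mean of the Table 11 failure times for brevity (the paper applies it to the model's parameter estimators, which would require refitting inside the loop):

```python
import numpy as np

# Failure times of the electronic devices (Table 11, as transcribed)
data = np.array([5, 11, 21, 31, 46, 75, 98, 122, 145,
                 165, 196, 224, 245, 293, 321, 330, 350, 420], dtype=float)

def boot_p_ci(x, stat, level=0.95, n_boot=5000, seed=0):
    """Percentile bootstrap (Boot-p) confidence interval for stat(x)."""
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(x, size=len(x), replace=True))
                     for _ in range(n_boot)])
    alpha = 1.0 - level
    qs = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return float(qs[0]), float(qs[1])

lo, hi = boot_p_ci(data, np.mean)
print(round(lo, 1), round(hi, 1))
```

The Boot-t variant additionally studentizes each replicate by its bootstrap standard error, which usually shortens the interval, consistent with the Boot-t lengths in the table.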
Table 14. Summary of the goodness-of-fit statistics for the electrical dataset.
Distribution | AIC | BIC | p-Value | K-S Distance
PLL | 313.5353 | 306.1403 | 0.8531 | 0.0873
LL | 512.9789 | 495.5547 | 0.4678 | 0.5829
L | 3015.691 | 3015.472 | 0.0000 | 0.99186
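The goodness-of-fit quantities in Table 14 (AIC, BIC, K-S distance, and p-value) can be computed for any fitted candidate model. A minimal sketch fitting the plain two-parameter LL distribution (SciPy's `fisk` with location fixed at zero) to the Table 11 data; these numbers will not reproduce the table, which is based on the paper's own fitted models:

```python
import numpy as np
from scipy import stats

# Failure times of the electronic devices (Table 11, as transcribed)
data = np.array([5, 11, 21, 31, 46, 75, 98, 122, 145,
                 165, 196, 224, 245, 293, 321, 330, 350, 420], dtype=float)

# Two-parameter log-logistic MLE: scipy's fisk(c, scale) with loc fixed at 0
c, loc, scale = stats.fisk.fit(data, floc=0)
loglik = stats.fisk.logpdf(data, c, loc, scale).sum()

k = 2                                       # free parameters: shape c, scale
aic = 2 * k - 2 * loglik
bic = k * np.log(len(data)) - 2 * loglik

# One-sample Kolmogorov-Smirnov test against the fitted CDF
ks = stats.kstest(data, stats.fisk.cdf, args=(c, loc, scale))
print(round(aic, 2), round(bic, 2),
      round(ks.statistic, 4), round(ks.pvalue, 4))
```

Note that a K-S test against a CDF whose parameters were estimated from the same data gives an optimistic p-value; a parametric-bootstrap calibration is the usual remedy.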

Ghosh, I.; Kumar, D.; Dutta, S. Different Classical and Bayesian Methods of Estimation of the Power Log-Logistic Distribution with Applications. Axioms 2026, 15, 285. https://doi.org/10.3390/axioms15040285
