Article

The Exponentiated Lindley Geometric Distribution with Applications

Bo Peng, Zhengqiu Xu and Min Wang
1 School of Computer Science, Southwest Petroleum University, Chengdu 610500, China
2 Department of Mathematics and Statistics, Texas Tech University, Lubbock, TX 79409-1042, USA
* Author to whom correspondence should be addressed.
Entropy 2019, 21(5), 510; https://doi.org/10.3390/e21050510
Submission received: 4 April 2019 / Revised: 8 May 2019 / Accepted: 9 May 2019 / Published: 20 May 2019
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract:
We introduce a new three-parameter lifetime distribution, the exponentiated Lindley geometric distribution, which exhibits increasing, decreasing, unimodal, and bathtub shaped hazard rates. We provide statistical properties of the new distribution, including shape of the probability density function, hazard rate function, quantile function, order statistics, moments, residual life function, mean deviations, Bonferroni and Lorenz curves, and entropies. We use maximum likelihood estimation of the unknown parameters, and an Expectation-Maximization algorithm is also developed to find the maximum likelihood estimates. The Fisher information matrix is provided to construct the asymptotic confidence intervals. Finally, two real-data examples are analyzed for illustrative purposes.

1. Introduction

Suppose that a company has N systems functioning independently and producing a certain product at a given time, where N is a random variable determined by the economy, customer demand, etc. The reason for considering N as a random variable comes from a practical viewpoint, because failure (of a device, for example) often occurs due to the presence of an unknown number of initial defects in the system. In this paper, we consider the case in which N is taken to be a geometric random variable with the probability mass function given by
$$P(N=n)=(1-p)p^{n-1},$$
for $0<p<1$, where n is a positive integer. We may take N to follow other discrete distributions, such as the binomial, Poisson, etc., but they would need to be truncated at zero because one must have $N\ge 1$. Another rationale for taking N to be a geometric random variable is that the “optimum” number can be interpreted as a “number to event”, matching the definition of a geometric random variable, as commented by [1]. The geometric distribution has been widely used for the number of “systems” in the literature; see, for example, [2,3]. It has also been adopted to obtain some new classes of distributions; see [4] for the exponential geometric (EG) distribution, [5] for the exponentiated exponential geometric (EEG) distribution, [6] for the Weibull geometric distribution, and [1] for the geometric exponential Poisson (GEP) distribution, to name just a few.
On the other hand, we assume that each of the N systems is made of $\alpha$ parallel components, and therefore, the system will completely shut down if all of the components fail. Meanwhile, we assume that the failure times of the components for the ith system, denoted by $Z_{i1},\dots,Z_{i\alpha}$, are independent and identically distributed (iid) with the cumulative distribution function (cdf) G(z) and the probability density function (pdf) g(z). For simplicity of notation, let $Y_{i}$ stand for the failure time of the ith system and X denote the time to failure of the first of the N functioning systems, i.e., $X=\min(Y_{1},\dots,Y_{N})$. Then it can be seen from [5] that the conditional cdf of X given N is given by
$$G(x\mid N)=1-P(X>x\mid N)=1-\left[1-G(x)^{\alpha}\right]^{N},$$
and the unconditional cdf of X can thus be written as
$$F(x)=\sum_{n=1}^{\infty}G(x\mid n)\,P(N=n)=\frac{G(x)^{\alpha}}{1-p+p\,G(x)^{\alpha}}.$$
The new class of distributions in (1) depends on the cdf of the failure times of the components in the system, which may follow some continuous probability distribution, such as the exponential, Lindley, or Weibull distribution. As an illustration, if the failure times of the components for the ith system are iid exponential random variables with rate parameter $\lambda$, i.e., $G(z)=1-e^{-\lambda z}$, then we obtain the EEG distribution due to [5]. Its cdf is given by
$$F(x)=\frac{\left(1-e^{-\lambda x}\right)^{\alpha}}{1-p+p\left(1-e^{-\lambda x}\right)^{\alpha}}.$$
Please note that in reliability engineering and lifetime analysis, we often assume that the failure times of the components within each system follow exponential lifetimes; see, for example, [4,5,7], among others. This assumption may be unreasonable because the hazard rate of the exponential distribution is constant, whereas some real-life systems do not have constant hazard rates, and the components of a system are often more rigid than the system itself. Accordingly, it is reasonable to let the components of a system follow a distribution with a non-constant hazard rate function that allows flexible hazard shapes.
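As a quick sanity check of the compounding construction in (1) and the EEG cdf above, the following sketch (not from the paper) simulates $X=\min(Y_{1},\dots,Y_{N})$ with exponential components and compares the empirical cdf at one point with the closed-form expression. Here $\alpha$ is taken as a positive integer so that the parallel components can be simulated directly; the parameter values, sample size, and evaluation point are arbitrary illustrative choices.

```python
# Illustrative Monte Carlo check of the compounding construction (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
alpha, lam, p = 3, 0.5, 0.6          # illustrative parameter choices (alpha integer here)
n_rep, x0 = 100_000, 2.0             # Monte Carlo size and evaluation point

def simulate_X():
    N = rng.geometric(1 - p)                       # P(N = n) = (1 - p) p^(n - 1), n = 1, 2, ...
    # failure time of each system = max of alpha iid exponential component lifetimes
    Y = rng.exponential(1 / lam, size=(N, alpha)).max(axis=1)
    return Y.min()                                 # time to failure of the first system

emp = np.mean([simulate_X() <= x0 for _ in range(n_rep)])
G = 1 - np.exp(-lam * x0)
theo = G**alpha / (1 - p + p * G**alpha)           # EEG cdf evaluated at x0
print(f"empirical F({x0}) = {emp:.4f},  formula = {theo:.4f}")
```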
In this paper, we propose a new three-parameter lifetime distribution by compounding the Lindley and geometric distributions based on the new class of distribution in (1). The Lindley distribution was first proposed by [8] in the context of Bayesian statistics, as a counterexample of fiducial statistics. It has recently received considerable attention as an appropriate model to analyze lifetime data especially in applications modeling stress-strength reliability; see, for example, [9,10,11]. Ghitany et al. [12] argue that the Lindley distribution could be a better lifetime model than the exponential distribution through a numerical example and show that the hazard function of the Lindley distribution does not exhibit a constant hazard rate, indicating the flexibility of the Lindley distribution over the exponential distribution. These observations motivate us to study the structure properties of the distribution in (1) when the failure times of the units for the ith system are iid Lindley random variables with the parameter θ , i.e.,
$$G(z)=1-\frac{\theta+1+\theta z}{\theta+1}e^{-\theta z},\qquad z>0,$$
where the parameter θ > 0 . Its corresponding cdf is given by
$$F(x)=\frac{\left[1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right]^{\alpha}}{1-p+p\left[1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right]^{\alpha}},\qquad x>0,$$
where the parameters $\alpha>0$, $\theta>0$, and $0<p<1$. We call this distribution the exponentiated Lindley geometric (ELG) distribution. Indeed, it is necessary to compute the entropy measure for the ELG distribution under the assumption that errors are non-Gaussian distributed (e.g., [13]). Other motivations of the ELG distribution are briefly summarized as follows. (i) It contains several lifetime distributions as special cases, such as the Lindley-geometric (LG) distribution due to [14] when $\alpha=1$. (ii) It can be viewed as a mixture of exponentiated Lindley distributions introduced by [15]. (iii) The ELG distribution is a flexible model which can be widely used for modeling lifetime data in reliability and survival analysis. (iv) It exhibits monotonically increasing, decreasing, unimodal (upside-down bathtub), and bathtub shaped hazard rates but does not exhibit a constant hazard rate, which makes the ELG distribution superior to other lifetime distributions that exhibit only monotonically increasing, decreasing, or constant hazard rates.
The remainder of the paper is organized as follows. In Section 2, we discuss various statistical properties of the new distribution. The maximum-likelihood estimation is considered in Section 3, and an EM algorithm is proposed to find the maximum likelihood estimates because they cannot be obtained in closed form. The maximum-likelihood estimation for censored data is also discussed briefly. In Section 4, two real-data applications are provided for illustrative purposes. Some concluding remarks are given in Section 5.

2. Properties of the ELG Distribution

We provide statistical properties of the ELG distribution. These include the pdf and its shape (Section 2.1), hazard rate function and its shape (Section 2.2), quantile function (Section 2.3), order statistics (Section 2.4), expressions for the nth moments (Section 2.5), residual life function (Section 2.6), mean deviations (Section 2.7), Bonferroni and Lorenz curves (Section 2.8), and entropies (Section 2.9).

2.1. Probability Density Function

The pdf of the ELG distribution corresponding to the cdf in (4) is given by
$$f(x)=\frac{\alpha\theta^{2}(1-p)(1+x)e^{-\theta x}\left[1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right]^{\alpha-1}}{(\theta+1)\left[1-p+p\left(1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right)^{\alpha}\right]^{2}},$$
for x > 0 , α > 0 , θ > 0 , and 0 < p < 1 .
It should be noted that the pdf in (5) is still a well-defined density function when $p\le 0$. Thus, we can extend the definition of the ELG distribution in (5) to any $p<1$. As mentioned in Section 1, the ELG distribution includes several special submodels. When $\alpha=1$, it becomes the LG distribution due to [14]. When $p=0$ and $\alpha=1$, it turns out to be the Lindley distribution due to [8]. It converges to a distribution degenerate at the point 0 as $p\to 1$.
Figure 1 displays the pdf of the ELG distribution in (5) with selected values of $\alpha$, $\theta$, and p. We observe from Figure 1 that the shape of the pdf is monotonically decreasing with its mode at $x=0$ when $\alpha<1$, and the shape of the pdf appears upside-down bathtub for $\alpha>1$. When $\alpha=1$, we observe that the shape can be either monotonically decreasing or unimodal. This observation coincides with Theorem 1 of [14], which states that the density function of the LG distribution is (i) decreasing for all values of p and $\theta$ for which $p>\frac{1-\theta^{2}}{1+\theta^{2}}$, and (ii) unimodal for all values of p and $\theta$ for which $p\le\frac{1-\theta^{2}}{1+\theta^{2}}$.
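For readers who wish to reproduce such plots, the following sketch (not the authors' code) implements the Lindley cdf in (3) and the ELG cdf and pdf in (4) and (5) as plain NumPy functions; the parameter values are arbitrary illustrative choices.

```python
# Illustrative sketch of the ELG cdf (4) and pdf (5); parameter values are arbitrary.
import numpy as np
from scipy.integrate import quad

def lindley_cdf(x, theta):
    return 1.0 - (theta + 1.0 + theta * x) / (theta + 1.0) * np.exp(-theta * x)

def elg_cdf(x, alpha, theta, p):
    G = lindley_cdf(x, theta)
    return G**alpha / (1.0 - p + p * G**alpha)

def elg_pdf(x, alpha, theta, p):
    G = lindley_cdf(x, theta)
    return (alpha * theta**2 * (1.0 - p) * (1.0 + x) * np.exp(-theta * x) * G**(alpha - 1.0)
            / ((theta + 1.0) * (1.0 - p + p * G**alpha)**2))

alpha, theta, p = 2.0, 0.5, 0.3                                     # illustrative values
print(quad(lambda t: elg_pdf(t, alpha, theta, p), 0, np.inf)[0])    # should be close to 1
```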

2.2. Hazard Rate Function

The failure rate function, also known as the hazard rate (hf) function, is an important characteristic for lifetime modeling. For a continuous distribution with the cdf F ( x ) and the pdf f ( x ) , its failure rate function is defined as
$$h(x)=\lim_{\Delta x\to 0}\frac{P(X<x+\Delta x\mid X>x)}{\Delta x}=\frac{f(x)}{S(x)},$$
where S ( x ) = 1 F ( x ) is the survival function of X. The hf of the ELG distribution is given by
$$h(x)=\frac{\alpha\theta^{2}(1+x)e^{-\theta x}\left[1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right]^{\alpha-1}}{(\theta+1)\left[1-\left(1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right)^{\alpha}\right]\left[1-p+p\left(1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right)^{\alpha}\right]}$$
for x > 0 , α > 0 , θ > 0 , and p < 1 .
Figure 2 depicts shapes of the hf with selected values of $\alpha$, $\theta$, and p. We observe that the hf of the ELG distribution is quite flexible. For example, the shape appears monotonically decreasing if $\alpha$ is sufficiently small and p is not sufficiently large. The shape appears monotonically increasing for small p and large $\alpha$. For $\alpha=1$, the shape appears bathtub-shaped, or first increasing and then bathtub-shaped. We may conclude that the ELG distribution exhibits increasing, decreasing, upside-down bathtub, and bathtub shaped hazard rates, but does not exhibit a constant hazard rate.
Note also that as $x\to 0$, the initial hazard rate behaves as $h(x)\approx\left\{\alpha\theta^{2\alpha}/\left[(\theta+1)^{\alpha}(1-p)\right]\right\}x^{\alpha-1}$, which implies that $h(0)=\infty$ for $\alpha<1$, $h(0)=\theta^{2}/\left[(\theta+1)(1-p)\right]$ for $\alpha=1$, and $h(0)=0$ for $\alpha>1$.
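A minimal numerical check of this limiting behaviour, reusing the elg_pdf and elg_cdf helpers sketched in Section 2.1 (again only an illustration, with arbitrary parameter values):

```python
# Illustrative sketch: hazard rate h(x) = f(x) / [1 - F(x)] and its small-x approximation.
import numpy as np

def elg_hazard(x, alpha, theta, p):
    return elg_pdf(x, alpha, theta, p) / (1.0 - elg_cdf(x, alpha, theta, p))

alpha, theta, p = 2.0, 0.5, 0.3                      # illustrative values
x = 1e-4
approx = alpha * theta**(2 * alpha) / ((theta + 1.0)**alpha * (1.0 - p)) * x**(alpha - 1.0)
print(elg_hazard(x, alpha, theta, p), approx)        # the two values should nearly agree
```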

2.3. Quantile Function

Let Z denote a Lindley random variable with the cdf in (3). We observe from [16] that the quantile function of the Lindley distribution is
$$G^{-1}(u)=-1-\frac{1}{\theta}-\frac{1}{\theta}W_{-1}\!\left(-\frac{\theta+1}{e^{\theta+1}}(1-u)\right),$$
where $0<u<1$ and $W_{-1}(\cdot)$ denotes the negative branch of the Lambert W function (i.e., the solution of the equation $W(z)e^{W(z)}=z$), which can be calculated by using the Lambert-W function in the R package lamW; see [17] for details.
Let X be a ELG random variable with the cdf F ( x ) in (4). By inverting F ( x ) = u for 0 < u < 1 , we obtain
$$\left(\frac{u-up}{1-up}\right)^{1/\alpha}=1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}=G(x).$$
It follows from Equation (7) that the quantile function of the ELG distribution is given by
$$F^{-1}(u)=-1-\frac{1}{\theta}-\frac{1}{\theta}W_{-1}\!\left(-\frac{\theta+1}{e^{\theta+1}}\left[1-\left(\frac{u-up}{1-up}\right)^{1/\alpha}\right]\right).$$
Please note that $-\frac{1}{e}<-\frac{\theta+1}{e^{\theta+1}}\left[1-\left(\frac{u-up}{1-up}\right)^{1/\alpha}\right]<0$, so the value of $W_{-1}(\cdot)$ is unique, which implies that $F^{-1}(u)$ is also unique. Thus, one can use Equation (8) for generating random data from the ELG distribution. In particular, the quartiles of the ELG distribution are, respectively, given by
$$Q_{1}=F^{-1}\!\left(\tfrac{1}{4}\right)=-1-\frac{1}{\theta}-\frac{1}{\theta}W_{-1}\!\left(-\frac{\theta+1}{e^{\theta+1}}\left[1-\left(\frac{1-p}{4-p}\right)^{1/\alpha}\right]\right),$$
$$Q_{2}=F^{-1}\!\left(\tfrac{1}{2}\right)=-1-\frac{1}{\theta}-\frac{1}{\theta}W_{-1}\!\left(-\frac{\theta+1}{e^{\theta+1}}\left[1-\left(\frac{1-p}{2-p}\right)^{1/\alpha}\right]\right),$$
$$Q_{3}=F^{-1}\!\left(\tfrac{3}{4}\right)=-1-\frac{1}{\theta}-\frac{1}{\theta}W_{-1}\!\left(-\frac{\theta+1}{e^{\theta+1}}\left[1-\left(\frac{3-3p}{4-3p}\right)^{1/\alpha}\right]\right).$$
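Since the paper points to the R package lamW, a Python analogue is scipy.special.lambertw with k = -1 for the negative branch. The sketch below (not the authors' code) implements the quantile function in (8) and uses it for inverse-cdf sampling; the parameter values are arbitrary.

```python
# Illustrative sketch: ELG quantile function via the W_{-1} branch, and random generation.
import numpy as np
from scipy.special import lambertw

def elg_quantile(u, alpha, theta, p):
    g = 1.0 - ((u - u * p) / (1.0 - u * p))**(1.0 / alpha)        # 1 - G(x_u)
    arg = -(theta + 1.0) * np.exp(-(theta + 1.0)) * g             # lies in (-1/e, 0)
    return -1.0 - 1.0 / theta - lambertw(arg, k=-1).real / theta

rng = np.random.default_rng(1)
alpha, theta, p = 2.0, 0.5, 0.3                                   # illustrative values
x = elg_quantile(rng.uniform(size=100_000), alpha, theta, p)      # ELG random sample
print("quartiles:", [elg_quantile(q, alpha, theta, p) for q in (0.25, 0.5, 0.75)])
```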

2.4. Order Statistics

Suppose X 1 , , X n is a random sample from the ELG distribution. Let X ( 1 ) < X ( 2 ) < < X ( n ) be the corresponding order statistics. The pdf for the rth order statistic of the ELG distribution, say Y = X ( r ) , is given by
$$f_{Y}(y)=\frac{n!}{(r-1)!\,(n-r)!}F^{r-1}(y)\left[1-F(y)\right]^{n-r}f(y)=\frac{n!}{(r-1)!\,(n-r)!}\sum_{\tau=0}^{n-r}\binom{n-r}{\tau}(-1)^{\tau}F^{r-1+\tau}(y)f(y)=\frac{\alpha\theta^{2}(1-p)(1+y)e^{-\theta y}\,n!}{(\theta+1)(r-1)!\,(n-r)!}\sum_{\tau=0}^{n-r}\binom{n-r}{\tau}(-1)^{\tau}\frac{\left[1-\frac{\theta+1+\theta y}{\theta+1}e^{-\theta y}\right]^{\alpha(r+\tau)-1}}{\left[1-p+p\left(1-\frac{\theta+1+\theta y}{\theta+1}e^{-\theta y}\right)^{\alpha}\right]^{r+\tau+1}}.$$
The corresponding cdf of Y is given by
$$F_{Y}(y)=\sum_{j=r}^{n}\binom{n}{j}F^{j}(y)\left[1-F(y)\right]^{n-j}=\sum_{j=r}^{n}\sum_{\tau=0}^{n-j}\binom{n}{j}\binom{n-j}{\tau}(-1)^{\tau}F^{j+\tau}(y)=\sum_{j=r}^{n}\sum_{\tau=0}^{n-j}\binom{n}{j}\binom{n-j}{\tau}(-1)^{\tau}\frac{\left[1-\frac{\theta+1+\theta y}{\theta+1}e^{-\theta y}\right]^{\alpha(j+\tau)}}{\left[1-p+p\left(1-\frac{\theta+1+\theta y}{\theta+1}e^{-\theta y}\right)^{\alpha}\right]^{j+\tau}}.$$
In practice, we may be interested in studying the asymptotic distribution of the extreme values X ( 1 ) and X ( n ) . By using L’Hospital’s rule, we have
$$\lim_{t\to\infty}\frac{1-F(t+x/\theta)}{1-F(t)}=\lim_{t\to\infty}\frac{f(t+x/\theta)}{f(t)}=\lim_{t\to\infty}\frac{1-\left[1-\frac{\theta t}{\theta+1}e^{-(\theta t+x)}\right]^{\alpha}}{1-\left[1-\frac{\theta t}{\theta+1}e^{-\theta t}\right]^{\alpha}}=e^{-x}.$$
In addition, by using L’Hospital’s rule, it can be easily shown that
$$\lim_{t\to 0}\frac{F(tx)}{F(t)}=\lim_{t\to 0}\frac{xf(tx)}{f(t)}=\lim_{t\to 0}\left[\frac{1-\frac{\theta+1+\theta tx}{\theta+1}e^{-\theta tx}}{1-\frac{\theta+1+\theta t}{\theta+1}e^{-\theta t}}\right]^{\alpha}=x^{\alpha}.$$
By following Theorem 1.6.2 in [18] we observe that there must be some normalizing constants a n > 0 , b n , c n > 0 , and d n , such that
$$\Pr\left\{a_{n}\left(X_{(n)}-b_{n}\right)\le x\right\}\longrightarrow\exp\left(-e^{-x}\right)$$
and
$$\Pr\left\{c_{n}\left(X_{(1)}-d_{n}\right)\le x\right\}\longrightarrow 1-\exp\left(-x^{\alpha}\right)$$
as $n\to\infty$. The form of the normalizing constants can be determined by using Corollary 1.6.3 in [18]. As an illustration, one can see that $a_{n}=\theta$ and $b_{n}=F^{-1}(1-1/n)$, where $F^{-1}(\cdot)$ denotes the inverse function of $F(\cdot)$.

2.5. Moment Properties

Many important features of a distribution can be characterized through its moments, such as dispersion, skewness, and kurtosis. To derive the nth moment of the ELG distribution, we consider the Taylor series expansion of the form
$$(1+x)^{a}=\sum_{k=0}^{\infty}\binom{a}{k}x^{k},$$
which converges for | x | < 1 . This provides that
$$\left[1-p+p\,G^{\alpha}(x)\right]^{-1}=\sum_{k=0}^{\infty}\binom{-1}{k}(-p)^{k}\left[1-G^{\alpha}(x)\right]^{k}=\sum_{k=0}^{\infty}\sum_{j=0}^{k}\binom{-1}{k}\binom{k}{j}(-1)^{j+k}p^{k}\,G(x)^{\alpha j},$$
where $\binom{-1}{k}$ is the generalized binomial coefficient. Therefore, we can rewrite Equation (4) as
$$F(x)=\sum_{k=0}^{\infty}\sum_{j=0}^{k}\binom{-1}{k}\binom{k}{j}(-1)^{j+k}p^{k}\,G(x)^{\alpha j+\alpha}=\sum_{k=0}^{\infty}\sum_{j=0}^{k}\binom{-1}{k}\binom{k}{j}(-1)^{j+k}p^{k}\left[1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right]^{\alpha j+\alpha}.$$
We observe that the ELG distribution is a mixture of exponentiated Lindley distributions introduced by [15], i.e.,
$$\left[1-p+p\,G^{\alpha}(x)\right]^{-1}=\frac{1}{1-p}\left[1+\frac{p}{1-p}G^{\alpha}(x)\right]^{-1}=\frac{1}{1-p}\sum_{k=0}^{\infty}\binom{-1}{k}\left[\frac{p}{1-p}G^{\alpha}(x)\right]^{k},$$
which is convergent for $\left|\frac{p}{1-p}G^{\alpha}(x)\right|<1$. They show that if Y is an exponentiated Lindley random variable with parameters $\theta$ and $\beta$, the nth moment and the moment generating function of Y are, respectively, given by
$$\mathbb{E}\left(Y_{\theta,\beta}^{\,n}\right)=\frac{\beta\theta^{2}}{1+\theta}K(\beta,\theta,n,\theta)$$
and
$$M_{Y_{\theta,\beta}}(t)=\frac{\beta\theta^{2}}{1+\theta}K(\beta,\theta,0,\theta-t)$$
for t < θ , where
$$K(a,b,c,\delta)=\int_{0}^{\infty}x^{c}(1+x)\left[1-\frac{1+b+bx}{1+b}e^{-bx}\right]^{a-1}e^{-\delta x}\,dx=\sum_{i=0}^{\infty}\sum_{j=0}^{i}\sum_{k=0}^{j+1}\binom{a-1}{i}\binom{i}{j}\binom{j+1}{k}\frac{(-1)^{i}b^{j}\,\Gamma(c+k+1)}{(1+b)^{i}(bi+\delta)^{c+k+1}}.$$
By using Equation (11), the nth moment of X can be written as
$$\mu_{n}(x)=\mathbb{E}(X^{n})=\sum_{k=0}^{\infty}\binom{-1}{k}(-p)^{k}\sum_{j=0}^{k}\binom{k}{j}(-1)^{j}\,\mathbb{E}\left(Y_{\theta,\alpha j+\alpha}^{\,n}\right)=\sum_{k=0}^{\infty}\binom{-1}{k}(-p)^{k}\sum_{j=0}^{k}\binom{k}{j}(-1)^{j}\frac{(\alpha j+\alpha)\theta^{2}}{1+\theta}K(\alpha j+\alpha,\theta,n,\theta)=\frac{\alpha\theta^{2}}{1+\theta}\sum_{k=0}^{\infty}\sum_{j=0}^{k}\binom{-1}{k}\binom{k}{j}(-1)^{j+k}p^{k}(j+1)\,K(\alpha j+\alpha,\theta,n,\theta)$$
for n = 1 , 2 , . Equation (12) can be adopted to compute the third and fourth central moments of the ELG distribution, which are then used to define skewness and kurtosis, respectively. For instance, based on the first four moments of the ELG distribution, the measures of skewness γ and kurtosis κ of the ELG distribution are, respectively, given by
$$\gamma=\frac{\mu_{3}(x)-3\mu_{1}(x)\mu_{2}(x)+2\mu_{1}^{3}(x)}{\left[\mu_{2}(x)-\mu_{1}^{2}(x)\right]^{3/2}},$$
and
$$\kappa=\frac{\mu_{4}(x)-4\mu_{1}(x)\mu_{3}(x)+6\mu_{1}^{2}(x)\mu_{2}(x)-3\mu_{1}^{4}(x)}{\left[\mu_{2}(x)-\mu_{1}^{2}(x)\right]^{2}}.$$
The moment generating function of the ELG distribution, denoted by M X ( t ) , is given by
$$M_{X}(t)=\sum_{k=0}^{\infty}\binom{-1}{k}(-p)^{k}\sum_{j=0}^{k}\binom{k}{j}(-1)^{j}M_{Y_{\theta,\alpha j+\alpha}}(t)=\frac{\alpha\theta^{2}}{1+\theta}\sum_{k=0}^{\infty}\sum_{j=0}^{k}\binom{-1}{k}\binom{k}{j}(-1)^{j+k}p^{k}(j+1)\,K(\alpha j+\alpha,\theta,0,\theta-t).$$
Thereafter, we can use $M_{X}(t)$ to obtain the nth moment about zero of the ELG distribution. In particular, if $\left|\frac{p}{1-p}G^{\alpha}(x)\right|<1$, then Equation (11) can be simplified to
$$F(x)=\frac{1}{1-p}\sum_{k=0}^{\infty}\frac{(-p)^{k}}{(1-p)^{k}}\left[1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right]^{\alpha k+\alpha}.$$
The corresponding nth moment of X can be simplified as
$$\mu_{n}(x)=\mathbb{E}(X^{n})=\frac{1}{1-p}\sum_{k=0}^{\infty}\frac{(-p)^{k}}{(1-p)^{k}}\,\mathbb{E}\left(Y_{\theta,\alpha k+\alpha}^{\,n}\right)=\frac{\alpha\theta^{2}}{(1+\theta)(1-p)}\sum_{k=0}^{\infty}\frac{(-p)^{k}(k+1)}{(1-p)^{k}}\,K(\alpha k+\alpha,\theta,n,\theta)$$
for n = 1 , 2 , , and the moment generating function of the ELG distribution is given by
$$M_{X}(t)=\frac{1}{1-p}\sum_{k=0}^{\infty}\frac{(-p)^{k}}{(1-p)^{k}}\,M_{Y_{\theta,\alpha k+\alpha}}(t)=\frac{\alpha\theta^{2}}{(1+\theta)(1-p)}\sum_{k=0}^{\infty}\frac{(-p)^{k}(k+1)}{(1-p)^{k}}\,K(\alpha k+\alpha,\theta,0,\theta-t).$$
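In practice, rather than evaluating the triple series for $K(\cdot)$, the raw moments can be obtained by one-dimensional numerical integration of $x^{n}f(x)$ and then combined into the skewness and kurtosis measures above. A minimal sketch (not the authors' code), reusing the elg_pdf helper from Section 2.1 with arbitrary parameter values:

```python
# Illustrative sketch: raw moments, skewness and kurtosis of the ELG distribution by quadrature.
import numpy as np
from scipy.integrate import quad

def elg_moment(n, alpha, theta, p):
    return quad(lambda x: x**n * elg_pdf(x, alpha, theta, p), 0, np.inf)[0]

alpha, theta, p = 2.0, 0.5, 0.3                     # illustrative values
m1, m2, m3, m4 = (elg_moment(n, alpha, theta, p) for n in (1, 2, 3, 4))
var = m2 - m1**2
skew = (m3 - 3 * m1 * m2 + 2 * m1**3) / var**1.5
kurt = (m4 - 4 * m1 * m3 + 6 * m1**2 * m2 - 3 * m1**4) / var**2
print(f"mean={m1:.4f}, var={var:.4f}, skewness={skew:.4f}, kurtosis={kurt:.4f}")
```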

2.6. Residual Life Function

Given that a component of a system survives up to time $t\ge 0$, the residual life is the period beyond t until the time of failure of the system, and it is thus described by the conditional random variable $X-t\mid X>t$. The mean residual life plays an important role in survival analysis and reliability for characterizing lifetimes, because it determines a unique corresponding lifetime distribution. The rth moment of the residual life of the ELG distribution can be obtained from the general formula
$$m_{r}(t)=\mathbb{E}\left[(X-t)^{r}\mid X>t\right]=\frac{1}{S(t)}\int_{t}^{\infty}(x-t)^{r}f(x)\,dx,$$
where S ( t ) = 1 F ( t ) is the survival function defined before. Noting that the ELG distribution is a mixture of exponentiated Lindley distributions, we may calculate m r ( t ) by using the expression in Lemma 2 of [15], which is given by
$$L(a,b,c,t)=\int_{t}^{\infty}x^{c}(1+x)\left[1-\frac{b+1+bx}{b+1}e^{-bx}\right]^{a-1}e^{-bx}\,dx=\sum_{i=0}^{\infty}\sum_{j=0}^{i}\sum_{k=0}^{j+1}\binom{a-1}{i}\binom{i}{j}\binom{j+1}{k}\frac{(-1)^{i}b^{j}\,\Gamma\bigl(c+k+1,(bi+b)t\bigr)}{(1+b)^{i}(bi+b)^{c+k+1}},$$
where $\Gamma(a,x)=\int_{x}^{\infty}t^{a-1}e^{-t}\,dt$ represents the complementary incomplete gamma function. Let X be an ELG random variable. By using the Taylor series expansion in (9), it can be easily shown that
$$\int_{t}^{\infty}x^{r}f(x)\,dx=\frac{\alpha\theta^{2}(1-p)}{\theta+1}\int_{t}^{\infty}x^{r}(1+x)e^{-\theta x}\left[1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right]^{\alpha-1}\left[1-p+p\left(1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right)^{\alpha}\right]^{-2}dx=\frac{\alpha\theta^{2}(1-p)}{\theta+1}\sum_{l=0}^{\infty}\binom{-2}{l}(-p)^{l}\sum_{j=0}^{l}\binom{l}{j}(-1)^{j}\int_{t}^{\infty}x^{r}(1+x)e^{-\theta x}\left[1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right]^{\alpha+\alpha j-1}dx=\frac{\alpha\theta^{2}(1-p)}{\theta+1}\sum_{l=0}^{\infty}\sum_{j=0}^{l}\binom{-2}{l}\binom{l}{j}(-1)^{j+l}p^{l}\,L(\alpha+\alpha j,\theta,r,t).$$
From the binomial expansion of $(x-t)^{r}$, we get that the rth order moment of the residual life of the ELG distribution is given by
$$m_{r}(t)=\frac{1}{S(t)}\int_{t}^{\infty}(x-t)^{r}f(x)\,dx=\frac{1}{S(t)}\sum_{k=0}^{r}\binom{r}{k}(-t)^{k}\int_{t}^{\infty}x^{r-k}f(x)\,dx=\frac{\alpha\theta^{2}(1-p)}{S(t)(\theta+1)}\sum_{l=0}^{\infty}\sum_{j=0}^{l}\sum_{k=0}^{r}\binom{r}{k}\binom{-2}{l}\binom{l}{j}(-1)^{j+l+k}t^{k}p^{l}\,L(\alpha+\alpha j,\theta,r-k,t).$$
The mean and variance of the residual life function of the ELG distribution can be easily obtained using $m_{1}(t)$ and $m_{2}(t)$, and are not shown here for simplicity. In a similar way as for Equation (13), it can be shown that if $\left|\frac{p}{1-p}G^{\alpha}(x)\right|<1$, then
$$\int_{t}^{\infty}x^{r}f(x)\,dx=\frac{\alpha\theta^{2}}{(\theta+1)(1-p)}\sum_{l=0}^{\infty}\binom{-2}{l}\frac{(-p)^{l}}{(1-p)^{l}}\,L(\alpha+\alpha l,\theta,r,t),$$
and the rth order moment of the residual life of the ELG distribution can be written as
$$m_{r}(t)=\frac{\alpha\theta^{2}}{S(t)(\theta+1)(1-p)}\sum_{l=0}^{\infty}\sum_{k=0}^{r}\binom{r}{k}\binom{-2}{l}\frac{(-1)^{k+l}p^{l}t^{k}}{(1-p)^{l}}\,L(\alpha+\alpha l,\theta,r-k,t).$$
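For a concrete value, the mean residual life $m_{1}(t)$ can also be evaluated by direct numerical integration of the defining formula, avoiding the series $L(\cdot)$ altogether. A minimal sketch (not the authors' code), reusing the elg_pdf and elg_cdf helpers from Section 2.1:

```python
# Illustrative sketch: mean residual life m_1(t) = E[X - t | X > t] by quadrature.
import numpy as np
from scipy.integrate import quad

def mean_residual_life(t, alpha, theta, p):
    surv = 1.0 - elg_cdf(t, alpha, theta, p)
    integral = quad(lambda x: (x - t) * elg_pdf(x, alpha, theta, p), t, np.inf)[0]
    return integral / surv

alpha, theta, p = 2.0, 0.5, 0.3                 # illustrative values
for t in (0.0, 1.0, 5.0):
    print(t, mean_residual_life(t, alpha, theta, p))
```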

2.7. Mean Deviations

The amount of scatter in a population is often measured by the totality of deviations from the mean and the median. The mean deviation about the mean and the mean deviation about the median are measures of statistical dispersion that are more robust to outliers than the sample variance or the standard deviation.
Let X denote a random variable with the pdf f ( x ) , the cdf F ( x ) , mean μ , and median M. The mean deviation about the mean and the mean deviation about the median are defined by
$$\delta_{1}(X)=\int_{0}^{\infty}|x-\mu|f(x)\,dx=\int_{0}^{\mu}(\mu-x)f(x)\,dx+\int_{\mu}^{\infty}(x-\mu)f(x)\,dx=\mu F(\mu)-\int_{0}^{\mu}xf(x)\,dx+\int_{\mu}^{\infty}xf(x)\,dx-\mu\left[1-F(\mu)\right]=2\mu F(\mu)-2\mu+2\int_{\mu}^{\infty}xf(x)\,dx=2\mu F(\mu)-2\mu+\frac{2\alpha\theta^{2}(1-p)}{\theta+1}\sum_{l=0}^{\infty}\sum_{j=0}^{l}\binom{-2}{l}\binom{l}{j}(-1)^{j+l}p^{l}\,L(\alpha+\alpha j,\theta,1,\mu)$$
and
$$\delta_{2}(X)=\int_{0}^{\infty}|x-M|f(x)\,dx=\int_{0}^{M}(M-x)f(x)\,dx+\int_{M}^{\infty}(x-M)f(x)\,dx=MF(M)-\int_{0}^{M}xf(x)\,dx+\int_{M}^{\infty}xf(x)\,dx-M\left[1-F(M)\right]=-\mu+2\int_{M}^{\infty}xf(x)\,dx=-\mu+\frac{2\alpha\theta^{2}(1-p)}{\theta+1}\sum_{l=0}^{\infty}\sum_{j=0}^{l}\binom{-2}{l}\binom{l}{j}(-1)^{j+l}p^{l}\,L(\alpha+\alpha j,\theta,1,M),$$
respectively. Of particular note is that when $\left|\frac{p}{1-p}G^{\alpha}(x)\right|<1$, the mean deviations above can be further simplified as
$$\delta_{1}(X)=2\mu F(\mu)-2\mu+\frac{2\alpha\theta^{2}}{(1-p)(\theta+1)}\sum_{j=0}^{\infty}\binom{-2}{j}\frac{(-p)^{j}}{(1-p)^{j}}\,L(\alpha+\alpha j,\theta,1,\mu)$$
and
$$\delta_{2}(X)=-\mu+\frac{2\alpha\theta^{2}}{(1-p)(\theta+1)}\sum_{j=0}^{\infty}\binom{-2}{j}\frac{(-p)^{j}}{(1-p)^{j}}\,L(\alpha+\alpha j,\theta,1,M).$$

2.8. Bonferroni and Lorenz Curves

The Bonferroni and Lorenz curves (Bonferroni 1930) have many practical applications, not only in economics for the study of income and poverty, but also in other fields such as reliability, lifetime testing, insurance, and medicine. For a random variable X with cdf $F(\cdot)$, the Bonferroni and Lorenz curves are defined by
$$B[F(x)]=\frac{1}{\mu F(x)}\int_{0}^{q}xf(x)\,dx,$$
where $\mu=\mathbb{E}(X)$, and
$$L[F(x)]=\frac{1}{F(x)}\int_{0}^{q}xf(x)\,dx,$$
respectively. If X is an ELG random variable with the pdf in (5), we observe that Equation (17) can be written as
$$B[F(x)]=\frac{1}{\mu F(x)}\int_{0}^{q}xf(x)\,dx=\frac{1}{\mu F(x)}\left[\int_{0}^{\infty}xf(x)\,dx-\int_{q}^{\infty}xf(x)\,dx\right]=\frac{1}{\mu F(x)}\left[\mu-\frac{\alpha\theta^{2}(1-p)}{\theta+1}\sum_{l=0}^{\infty}\sum_{j=0}^{l}\binom{-2}{l}\binom{l}{j}(-1)^{j+l}p^{l}\,L(\alpha+\alpha j,\theta,1,q)\right],$$
which is obtained by using Equation (16) with t = q and r = 1 . By using Equation (18), it follows easily that the Lorenz curve of the ELG distribution is given by L [ F ( x ) ] = μ B [ F ( x ) ] .

2.9. Entropies

It is well known that the entropy of a random variable X is a measure of the variation of its uncertainty. The Rényi entropy is defined as
$$I_{R}(\gamma)=\frac{1}{1-\gamma}\log\int_{0}^{\infty}f^{\gamma}(x)\,dx,$$
where $\gamma>0$ and $\gamma\neq 1$. The Shannon entropy is defined as $\mathbb{E}\left[-\log f(X)\right]$, which is a particular case of the Rényi entropy as $\gamma\to 1$. We first observe that
$$\int_{0}^{\infty}f^{\gamma}(x)\,dx=\left[\frac{\alpha\theta^{2}(1-p)}{1+\theta}\right]^{\gamma}\int_{0}^{\infty}(1+x)^{\gamma}e^{-\theta\gamma x}\left[1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right]^{\alpha\gamma-\gamma}\left[1-p+p\left(1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right)^{\alpha}\right]^{-2\gamma}dx=\left[\frac{\alpha\theta^{2}(1-p)}{1+\theta}\right]^{\gamma}\sum_{k=0}^{\infty}\sum_{j=0}^{k}\binom{-2\gamma}{k}\binom{k}{j}(-1)^{k+j}p^{k}\int_{0}^{\infty}(1+x)^{\gamma}e^{-\theta\gamma x}\left[1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right]^{\alpha\gamma-\gamma+\alpha j}dx,$$
which shows that the Rényi entropy of the ELG distribution is given by
$$I_{R}(\gamma)=\frac{1}{1-\gamma}\log\int_{0}^{\infty}f^{\gamma}(x)\,dx=\frac{\gamma}{1-\gamma}\log\frac{\alpha\theta^{2}(1-p)}{1+\theta}+\frac{1}{1-\gamma}\log\left[\sum_{k=0}^{\infty}\sum_{j=0}^{k}\binom{-2\gamma}{k}\binom{k}{j}(-1)^{k+j}p^{k}\int_{0}^{\infty}(1+x)^{\gamma}e^{-\theta\gamma x}\left(1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right)^{\alpha\gamma-\gamma+\alpha j}dx\right].$$
It can be shown that the Shannon entropy of the ELG distribution is given by
$$H(X)=\mathbb{E}\left[-\log f(X)\right]=-\log\frac{\alpha\theta^{2}(1-p)}{1+\theta}-\mathbb{E}\left[\log(1+X)\right]+\theta\,\mathbb{E}[X]-(\alpha-1)\,\mathbb{E}\left[\log G(X)\right]+2\,\mathbb{E}\left[\log\left(1-p+p\,G^{\alpha}(X)\right)\right],$$
which can be easily evaluated using a unidimensional integral. Figure 3 depicts shapes of the Shannon entropy of the ELG distribution with several selected values of α , θ , and p. It deserves mentioning that the entropy measure of the ELG distribution can be estimated by using numerical integration methods with the (plug-in) estimators found in the following section.
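As an illustration of the unidimensional-integral evaluation mentioned above, the Shannon entropy can be computed as $-\int_{0}^{\infty}f(x)\log f(x)\,dx$. The sketch below (not the authors' code) reuses the elg_pdf helper from Section 2.1 with arbitrary parameter values.

```python
# Illustrative sketch: Shannon entropy H(X) = E[-log f(X)] by numerical integration.
import numpy as np
from scipy.integrate import quad

def elg_shannon_entropy(alpha, theta, p):
    integrand = lambda x: -elg_pdf(x, alpha, theta, p) * np.log(elg_pdf(x, alpha, theta, p))
    return quad(integrand, 1e-12, np.inf)[0]

print(elg_shannon_entropy(2.0, 0.5, 0.3))       # illustrative parameter values
```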

3. Estimation of Parameters

We adopt the maximum likelihood estimation to estimate the unknown parameters (Section 3.1) and develop an Expectation-Maximization (EM) algorithm to find the maximum likelihood estimate (MLE) (Section 3.2). We also discuss the MLEs of the unknown parameters when the data is censored (Section 3.3).

3.1. Maximum Likelihood Estimation

It is well known that the MLE is often used to estimate the unknown parameters of a distribution because of its attractive properties, such as consistency and asymptotic normality. Let $X_{1},\dots,X_{n}$ be a random sample from the ELG distribution with unknown parameter vector $\phi=(\theta,\alpha,p)$. Then the log-likelihood function $l=l(\phi;x)$ is given by
$$l=n\log\alpha+2n\log\theta-n\log(\theta+1)+n\log(1-p)+\sum_{i=1}^{n}\log(1+x_{i})-\theta\sum_{i=1}^{n}x_{i}+(\alpha-1)\sum_{i=1}^{n}\log\left[1-\frac{\theta+1+\theta x_{i}}{\theta+1}e^{-\theta x_{i}}\right]-2\sum_{i=1}^{n}\log\left[1-p+p\left(1-\frac{\theta+1+\theta x_{i}}{\theta+1}e^{-\theta x_{i}}\right)^{\alpha}\right].$$
For notational convenience, let
$$\tau_{i}(\theta)=1-\frac{\theta+1+\theta x_{i}}{\theta+1}e^{-\theta x_{i}},$$
for $i=1,\dots,n$. The MLEs of the unknown parameters can be obtained by taking the first partial derivatives of Equation (19) with respect to $\alpha$, $\theta$, and p and setting them equal to zero. We have the following likelihood equations
$$\frac{\partial l}{\partial\alpha}=\frac{n}{\alpha}+\sum_{i=1}^{n}\log\tau_{i}(\theta)-2p\sum_{i=1}^{n}\frac{\tau_{i}^{\alpha}(\theta)\log\tau_{i}(\theta)}{1-p+p\,\tau_{i}^{\alpha}(\theta)},$$
$$\frac{\partial l}{\partial\theta}=\frac{2n}{\theta}-\frac{n}{\theta+1}-\sum_{i=1}^{n}x_{i}+\frac{(\alpha-1)\theta}{(\theta+1)^{2}}\sum_{i=1}^{n}\frac{x_{i}(2+\theta+\theta x_{i}+x_{i})e^{-\theta x_{i}}}{\tau_{i}(\theta)}-\frac{2\alpha p\theta}{(\theta+1)^{2}}\sum_{i=1}^{n}\frac{\tau_{i}^{\alpha-1}(\theta)\,x_{i}(2+\theta+\theta x_{i}+x_{i})e^{-\theta x_{i}}}{1-p+p\,\tau_{i}^{\alpha}(\theta)},$$
$$\frac{\partial l}{\partial p}=-\frac{n}{1-p}+2\sum_{i=1}^{n}\frac{1-\tau_{i}^{\alpha}(\theta)}{1-p+p\,\tau_{i}^{\alpha}(\theta)}.$$
Please note that the MLEs $\hat{\alpha}$, $\hat{\theta}$, and $\hat{p}$ of $\alpha$, $\theta$, and p cannot be obtained analytically. Numerical iteration techniques, such as the Newton-Raphson algorithm, are required to solve these equations, but they involve the second derivatives of the log-likelihood at every iteration. We thus develop an EM algorithm to find the MLEs of the unknown parameters.
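As a simple illustration of the direct numerical approach (not the authors' code and not the EM algorithm developed below), the log-likelihood in (19) can be maximized with a derivative-free optimizer. The sketch uses the bank waiting-time data of Table 4; the starting values are arbitrary.

```python
# Illustrative sketch: direct maximization of the ELG log-likelihood (19) with Nelder-Mead.
import numpy as np
from scipy.optimize import minimize

data = np.array([
    0.8, 0.8, 1.3, 1.5, 1.8, 1.9, 1.9, 2.1, 2.6, 2.7,
    2.9, 3.1, 3.2, 3.3, 3.5, 3.6, 4.0, 4.1, 4.2, 4.2,
    4.3, 4.3, 4.4, 4.4, 4.6, 4.7, 4.7, 4.8, 4.9, 4.9,
    5.0, 5.3, 5.5, 5.7, 5.7, 6.1, 6.2, 6.2, 6.2, 6.3,
    6.7, 6.9, 7.1, 7.1, 7.1, 7.1, 7.4, 7.6, 7.7, 8.0,
    8.2, 8.6, 8.6, 8.6, 8.8, 8.8, 8.9, 8.9, 9.5, 9.6,
    9.7, 9.8, 10.7, 10.9, 11.0, 11.0, 11.1, 11.2, 11.2, 11.5,
    11.9, 12.4, 12.5, 12.9, 13.0, 13.1, 13.3, 13.6, 13.7, 13.9,
    14.1, 15.4, 15.4, 17.3, 17.3, 18.1, 18.2, 18.4, 18.9, 19.0,
    19.9, 20.6, 21.3, 21.4, 21.9, 23.0, 27.0, 31.6, 33.1, 38.5])

def elg_negloglik(par, x):
    alpha, theta, p = par
    if alpha <= 0 or theta <= 0 or not (0 < p < 1):
        return np.inf                           # keep the search inside the parameter space
    tau = 1.0 - (theta + 1.0 + theta * x) / (theta + 1.0) * np.exp(-theta * x)
    ll = (len(x) * (np.log(alpha) + 2 * np.log(theta) - np.log(theta + 1) + np.log(1 - p))
          + np.sum(np.log(1 + x)) - theta * np.sum(x)
          + (alpha - 1) * np.sum(np.log(tau))
          - 2 * np.sum(np.log(1 - p + p * tau**alpha)))
    return -ll

fit = minimize(elg_negloglik, x0=[1.0, 0.5, 0.5], args=(data,),
               method="Nelder-Mead", options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-8})
print("MLE (alpha, theta, p):", fit.x, "  max log-likelihood:", -fit.fun)
```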
For interval estimation of the parameters, we consider suitable pivotal quantities based on the asymptotic properties of the MLEs and approximate the distributions of these quantities by the normal distribution. We observe that
$$\frac{\partial^{2}l}{\partial\alpha^{2}}=-\frac{n}{\alpha^{2}}-2p(1-p)\sum_{i=1}^{n}\frac{\tau_{i}^{\alpha}(\theta)\left[\log\tau_{i}(\theta)\right]^{2}}{\left[1-p+p\,\tau_{i}^{\alpha}(\theta)\right]^{2}},$$
$$\frac{\partial^{2}l}{\partial\theta^{2}}=-\frac{2n}{\theta^{2}}+\frac{n}{(\theta+1)^{2}}-\frac{\alpha-1}{(\theta+1)^{4}}\sum_{i=1}^{n}\frac{x_{i}e^{-\theta x_{i}}\left[e^{-\theta x_{i}}(x_{i}+2x_{i}\theta+2+2\theta)+(\theta+1)\kappa_{i}\right]}{\tau_{i}^{2}(\theta)}+\frac{2\alpha p\theta^{2}}{(\theta+1)^{4}}\sum_{i=1}^{n}\frac{x_{i}^{2}(2+\theta+\theta x_{i}+x_{i})^{2}e^{-2\theta x_{i}}\tau_{i}^{\alpha-2}(\theta)\left\{(1-\alpha)\left[1-p+p\,\tau_{i}^{\alpha}(\theta)\right]+\alpha p\,\tau_{i}^{\alpha}(\theta)\right\}}{\left[1-p+p\,\tau_{i}^{\alpha}(\theta)\right]^{2}}+\frac{2\alpha p}{(\theta+1)^{3}}\sum_{i=1}^{n}\frac{x_{i}\kappa_{i}\tau_{i}^{\alpha-1}(\theta)e^{-\theta x_{i}}}{1-p+p\,\tau_{i}^{\alpha}(\theta)},$$
$$\frac{\partial^{2}l}{\partial p^{2}}=-\frac{n}{(1-p)^{2}}+2\sum_{i=1}^{n}\left[\frac{1-\tau_{i}^{\alpha}(\theta)}{1-p+p\,\tau_{i}^{\alpha}(\theta)}\right]^{2},$$
$$\frac{\partial^{2}l}{\partial\alpha\,\partial\theta}=\frac{\partial^{2}l}{\partial\theta\,\partial\alpha}=\frac{\theta}{(\theta+1)^{2}}\sum_{i=1}^{n}\frac{x_{i}(2+\theta+\theta x_{i}+x_{i})e^{-\theta x_{i}}}{\tau_{i}(\theta)}-\frac{2p\theta}{(\theta+1)^{2}}\sum_{i=1}^{n}\frac{\left\{\alpha(1-p)\log\tau_{i}(\theta)+1-p+p\,\tau_{i}^{\alpha}(\theta)\right\}x_{i}(2+\theta+\theta x_{i}+x_{i})e^{-\theta x_{i}}\tau_{i}^{\alpha-1}(\theta)}{\left[1-p+p\,\tau_{i}^{\alpha}(\theta)\right]^{2}},$$
$$\frac{\partial^{2}l}{\partial\alpha\,\partial p}=\frac{\partial^{2}l}{\partial p\,\partial\alpha}=-2\sum_{i=1}^{n}\frac{\tau_{i}^{\alpha}(\theta)\log\tau_{i}(\theta)}{\left[1-p+p\,\tau_{i}^{\alpha}(\theta)\right]^{2}},$$
$$\frac{\partial^{2}l}{\partial\theta\,\partial p}=\frac{\partial^{2}l}{\partial p\,\partial\theta}=-\frac{2\alpha\theta}{(\theta+1)^{2}}\sum_{i=1}^{n}\frac{x_{i}(2+\theta+\theta x_{i}+x_{i})e^{-\theta x_{i}}\tau_{i}^{\alpha-1}(\theta)}{\left[1-p+p\,\tau_{i}^{\alpha}(\theta)\right]^{2}},$$
where $\kappa_{i}=(\theta^{3}+\theta)(x_{i}+x_{i}^{2})+\theta^{2}(3x_{i}+2x_{i}^{2})-x_{i}-2$ for $i=1,\dots,n$. The observed Fisher information matrix of $\alpha$, $\theta$, and p can be written as
$$I=-\begin{pmatrix}\dfrac{\partial^{2}l}{\partial\alpha^{2}} & \dfrac{\partial^{2}l}{\partial\alpha\,\partial\theta} & \dfrac{\partial^{2}l}{\partial\alpha\,\partial p}\\ \dfrac{\partial^{2}l}{\partial\theta\,\partial\alpha} & \dfrac{\partial^{2}l}{\partial\theta^{2}} & \dfrac{\partial^{2}l}{\partial\theta\,\partial p}\\ \dfrac{\partial^{2}l}{\partial p\,\partial\alpha} & \dfrac{\partial^{2}l}{\partial p\,\partial\theta} & \dfrac{\partial^{2}l}{\partial p^{2}}\end{pmatrix},$$
so the variance-covariance matrix of the MLEs α ^ , θ ^ and p ^ may be approximated by inverting the matrix I and is thus given by
$$V=I^{-1}=\begin{pmatrix}\mathrm{var}(\alpha) & \mathrm{cov}(\alpha,\theta) & \mathrm{cov}(\alpha,p)\\ \mathrm{cov}(\theta,\alpha) & \mathrm{var}(\theta) & \mathrm{cov}(\theta,p)\\ \mathrm{cov}(p,\alpha) & \mathrm{cov}(p,\theta) & \mathrm{var}(p)\end{pmatrix}.$$
The asymptotic joint distribution of the MLEs α ^ , θ ^ , and p ^ can be treated as being approximately multivariate normal and is given by
$$\begin{pmatrix}\hat{\alpha}\\ \hat{\theta}\\ \hat{p}\end{pmatrix}\sim N\left(\begin{pmatrix}\alpha\\ \theta\\ p\end{pmatrix},\ \begin{pmatrix}\mathrm{var}(\alpha) & \mathrm{cov}(\alpha,\theta) & \mathrm{cov}(\alpha,p)\\ \mathrm{cov}(\theta,\alpha) & \mathrm{var}(\theta) & \mathrm{cov}(\theta,p)\\ \mathrm{cov}(p,\alpha) & \mathrm{cov}(p,\theta) & \mathrm{var}(p)\end{pmatrix}\right).$$
Since V involves the unknown parameters α , θ , and p, we replace these parameters by their corresponding MLEs to obtain an estimate of V denoted by
$$\hat{V}=\begin{pmatrix}\widehat{\mathrm{var}(\alpha)} & \widehat{\mathrm{cov}(\alpha,\theta)} & \widehat{\mathrm{cov}(\alpha,p)}\\ \widehat{\mathrm{cov}(\theta,\alpha)} & \widehat{\mathrm{var}(\theta)} & \widehat{\mathrm{cov}(\theta,p)}\\ \widehat{\mathrm{cov}(p,\alpha)} & \widehat{\mathrm{cov}(p,\theta)} & \widehat{\mathrm{var}(p)}\end{pmatrix}.$$
The asymptotic 100 ( 1 γ ) % confidence intervals of α , θ , and p are determined by
$$\left(\hat{\alpha}-z_{\gamma/2}\sqrt{\widehat{\mathrm{var}(\alpha)}},\ \hat{\alpha}+z_{\gamma/2}\sqrt{\widehat{\mathrm{var}(\alpha)}}\right),\quad\left(\hat{\theta}-z_{\gamma/2}\sqrt{\widehat{\mathrm{var}(\theta)}},\ \hat{\theta}+z_{\gamma/2}\sqrt{\widehat{\mathrm{var}(\theta)}}\right),\quad\left(\hat{p}-z_{\gamma/2}\sqrt{\widehat{\mathrm{var}(p)}},\ \hat{p}+z_{\gamma/2}\sqrt{\widehat{\mathrm{var}(p)}}\right),$$
respectively, where $z_{\gamma/2}$ is the upper $\gamma/2$ percentile of the standard normal distribution.
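The following sketch (not the authors' code) illustrates these approximate confidence intervals: the observed information matrix is obtained here by a simple central finite-difference Hessian of the negative log-likelihood at the MLE, reusing elg_negloglik, data, and fit from the sketch in Section 3.1.

```python
# Illustrative sketch: approximate 95% confidence intervals from the observed information.
import numpy as np
from scipy.stats import norm

def num_hessian(fun, par, x, h=1e-4):
    k = len(par)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = np.eye(k)[i] * h, np.eye(k)[j] * h
            H[i, j] = (fun(par + ei + ej, x) - fun(par + ei - ej, x)
                       - fun(par - ei + ej, x) + fun(par - ei - ej, x)) / (4.0 * h * h)
    return H

V = np.linalg.inv(num_hessian(elg_negloglik, fit.x, data))   # inverse observed information
se = np.sqrt(np.diag(V))
z = norm.ppf(0.975)                                          # gamma = 0.05
for name, est, s in zip(("alpha", "theta", "p"), fit.x, se):
    print(f"{name}: {est:.4f}   95% CI: ({est - z * s:.4f}, {est + z * s:.4f})")
```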
The likelihood ratio (LR) can be used to evaluate the difference between the ELG distribution and its special submodels. We partition the parameters of the ELG distribution into ( ϕ 1 , ϕ 2 ) , where ϕ 1 is the parameter of interest and ϕ 2 is the remaining parameters. Consider the hypotheses
$$H_{0}:\ \phi_{1}=\phi_{1}^{(0)}\quad\text{versus}\quad H_{1}:\ \phi_{1}\neq\phi_{1}^{(0)}.$$
The LR statistic for the test of the null hypothesis in (24) is given by
$$\omega=2\left[l(\hat{\phi};x)-l(\hat{\phi}^{*};x)\right],$$
where $\hat{\phi}$ and $\hat{\phi}^{*}$ are the unrestricted and restricted maximum likelihood estimators obtained under $H_{1}$ and $H_{0}$, respectively. Under $H_{0}$, it follows that
$$\omega\xrightarrow{\ D\ }\chi^{2}_{\kappa},$$
where $\xrightarrow{D}$ denotes convergence in distribution as $n\to\infty$ and $\kappa$ is the dimension of the subset $\phi_{1}$ of interest. For instance, we can compare the ELG and LG distributions by testing $H_{0}:\alpha=1$ versus $H_{1}:\alpha\neq 1$. The ELG and Lindley distributions are compared by testing $H_{0}:(\alpha,p)=(1,0)$ versus $H_{1}:(\alpha,p)\neq(1,0)$.
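A minimal sketch of the first of these tests (not the authors' code), comparing the LG submodel ($\alpha=1$) with the full ELG model and reusing elg_negloglik, data, and fit from the sketch in Section 3.1:

```python
# Illustrative sketch: likelihood ratio test of H0: alpha = 1 (LG) against the ELG model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

fit_lg = minimize(lambda par, x: elg_negloglik([1.0, par[0], par[1]], x),
                  x0=[0.5, 0.5], args=(data,), method="Nelder-Mead")
omega = 2.0 * (fit_lg.fun - fit.fun)          # 2 [l(ELG) - l(LG)]; .fun are neg. log-lik minima
print("LR statistic:", omega, "  p-value:", chi2.sf(omega, df=1))
```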

3.2. Expectation-Maximization Algorithm

Dempster et al. [19] introduced the EM algorithm to estimate the parameters when some observations are treated as incomplete data. Suppose that $X=(X_{1},X_{2},\dots,X_{n})$ and $Z=(Z_{1},Z_{2},\dots,Z_{n})$ represent the observed and hypothetical data, respectively. Here, the hypothetical data can be thought of as missing data because $Z_{1},Z_{2},\dots,Z_{n}$ are not observable. We formulate the problem of finding the MLEs as an incomplete-data problem, and thus, the EM algorithm is applicable to determine the MLEs of the ELG distribution. Let $W=(X,Z)$ denote the complete data. To start this algorithm, define the pdf of each $(X_{i},Z_{i})$ for $i=1,\dots,n$ as
$$g(x,z;\alpha,\theta,p)=\frac{\alpha(1-p)\theta^{2}z(1+x)}{\theta+1}e^{-\theta x}\left[1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right]^{\alpha-1}\left[p-p\left(1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right)^{\alpha}\right]^{z-1}.$$
The E-step of an EM cycle requires the conditional expectation of $(Z\mid X,\alpha^{(r)},\theta^{(r)},p^{(r)})$, where $(\alpha^{(r)},\theta^{(r)},p^{(r)})$ is the current estimate of $(\alpha,\theta,p)$ in the rth iteration. Please note that the pdf of Z given X, say $g(z\mid x)$, is given by
$$g(z\mid x)=z\left[p-p\left(1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right)^{\alpha}\right]^{z-1}\left[1-p+p\left(1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right)^{\alpha}\right]^{2}.$$
Thus, the conditional expectation is given by
$$\mathbb{E}\left[Z\mid X,\alpha,\theta,p\right]=\frac{1+p\left[1-\left(1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right)^{\alpha}\right]}{1-p\left[1-\left(1-\frac{\theta+1+\theta x}{\theta+1}e^{-\theta x}\right)^{\alpha}\right]}.$$
The log-likelihood function l c ( W ; α , θ , p ) of the complete data after ignoring the constants can be written as
$$l_{c}(W;\alpha,\theta,p)\propto\sum_{i=1}^{n}z_{i}+n\log\alpha+\sum_{i=1}^{n}\log(1+x_{i})+2n\log\theta-n\log(\theta+1)-\theta\sum_{i=1}^{n}x_{i}+n\log(1-p)+(\alpha-1)\sum_{i=1}^{n}\log\left[1-\frac{\theta+1+\theta x_{i}}{\theta+1}e^{-\theta x_{i}}\right]+\sum_{i=1}^{n}(z_{i}-1)\log\left[p-p\left(1-\frac{\theta+1+\theta x_{i}}{\theta+1}e^{-\theta x_{i}}\right)^{\alpha}\right].$$
Next the M-step involves the maximization of the pseudo log-likelihood function in (27). The components of the score function are given by
$$\frac{\partial l_{c}}{\partial\alpha}=\frac{n}{\alpha}+\sum_{i=1}^{n}\log\left[1-\frac{\theta+1+\theta x_{i}}{\theta+1}e^{-\theta x_{i}}\right]-\sum_{i=1}^{n}(z_{i}-1)\frac{\left(1-\frac{\theta+1+\theta x_{i}}{\theta+1}e^{-\theta x_{i}}\right)^{\alpha}\log\left(1-\frac{\theta+1+\theta x_{i}}{\theta+1}e^{-\theta x_{i}}\right)}{1-\left(1-\frac{\theta+1+\theta x_{i}}{\theta+1}e^{-\theta x_{i}}\right)^{\alpha}},$$
$$\frac{\partial l_{c}}{\partial\theta}=\frac{2n}{\theta}-\frac{n}{\theta+1}-\sum_{i=1}^{n}x_{i}+(\alpha-1)\sum_{i=1}^{n}\frac{\theta x_{i}e^{-\theta x_{i}}\left(1+x_{i}+\frac{1}{\theta+1}\right)}{(\theta+1)\left[1-\frac{\theta+1+\theta x_{i}}{\theta+1}e^{-\theta x_{i}}\right]}-\frac{\alpha\theta}{(\theta+1)^{2}}\sum_{i=1}^{n}\frac{(z_{i}-1)x_{i}(2+\theta+\theta x_{i}+x_{i})e^{-\theta x_{i}}\left(1-\frac{\theta+1+\theta x_{i}}{\theta+1}e^{-\theta x_{i}}\right)^{\alpha-1}}{1-\left(1-\frac{\theta+1+\theta x_{i}}{\theta+1}e^{-\theta x_{i}}\right)^{\alpha}},$$
$$\frac{\partial l_{c}}{\partial p}=-\frac{n}{1-p}+\frac{1}{p}\sum_{i=1}^{n}(z_{i}-1).$$
For notational convenience, let
$$\tau_{i}^{(r)}(\theta)=1-\frac{\theta^{(r)}+1+\theta^{(r)}x_{i}}{\theta^{(r)}+1}e^{-\theta^{(r)}x_{i}},$$
for i = 1 , , n . By replacing the missing Z’s with their conditional expectations I E [ Z X , α ( r ) , θ ( r ) , p ( r ) ] , we obtain an iterative procedure of the EM algorithm given by the following equations.
$$0=\frac{n}{\alpha^{(r+1)}}+\sum_{i=1}^{n}\log\tau_{i}^{(r+1)}(\theta)-\sum_{i=1}^{n}\frac{(z_{i}-1)\left[\tau_{i}^{(r+1)}(\theta)\right]^{\alpha^{(r+1)}}\log\tau_{i}^{(r+1)}(\theta)}{1-\left[\tau_{i}^{(r+1)}(\theta)\right]^{\alpha^{(r+1)}}},$$
$$0=\frac{2n}{\theta^{(r+1)}}-\frac{n}{\theta^{(r+1)}+1}-\sum_{i=1}^{n}x_{i}+\left(\alpha^{(r+1)}-1\right)\sum_{i=1}^{n}\frac{\theta^{(r+1)}x_{i}e^{-\theta^{(r+1)}x_{i}}\left(1+x_{i}+\frac{1}{\theta^{(r+1)}+1}\right)}{\left(\theta^{(r+1)}+1\right)\tau_{i}^{(r+1)}(\theta)}-\frac{\alpha^{(r+1)}\theta^{(r+1)}}{\left(\theta^{(r+1)}+1\right)^{2}}\sum_{i=1}^{n}\frac{(z_{i}-1)x_{i}\left(2+\theta^{(r+1)}+\theta^{(r+1)}x_{i}+x_{i}\right)e^{-\theta^{(r+1)}x_{i}}\left[\tau_{i}^{(r+1)}(\theta)\right]^{\alpha^{(r+1)}-1}}{1-\left[\tau_{i}^{(r+1)}(\theta)\right]^{\alpha^{(r+1)}}},$$
$$p^{(r+1)}=1-\frac{n}{\sum_{i=1}^{n}z_{i}},$$
where
$$z_{i}=\frac{1+p^{(r)}\left\{1-\left[\tau_{i}^{(r)}(\theta)\right]^{\alpha^{(r)}}\right\}}{1-p^{(r)}\left\{1-\left[\tau_{i}^{(r)}(\theta)\right]^{\alpha^{(r)}}\right\}},$$
for $i=1,\dots,n$. Please note that efficient numerical methods, such as the Newton-Raphson algorithm, are needed only for solving Equations (28) and (29).
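The sketch below (not the authors' code) illustrates one way to carry out this EM iteration: the E-step replaces each latent $z_{i}$ by its conditional expectation, and the M-step updates p in closed form; instead of solving (28) and (29) by Newton-Raphson, the remaining part of the pseudo log-likelihood (27) is maximized over $(\alpha,\theta)$ numerically. The data (the first ten values of Table 4, for brevity) and the starting values are arbitrary.

```python
# Illustrative sketch of an EM iteration for the ELG distribution.
import numpy as np
from scipy.optimize import minimize

def tau(x, theta):
    return 1.0 - (theta + 1.0 + theta * x) / (theta + 1.0) * np.exp(-theta * x)

def em_fit(x, alpha, theta, p, n_iter=100):
    n = len(x)
    for _ in range(n_iter):
        # E-step: conditional expectation of the latent geometric counts z_i
        q = p * (1.0 - tau(x, theta)**alpha)
        z = (1.0 + q) / (1.0 - q)
        # M-step: closed-form update for p ...
        p = 1.0 - n / np.sum(z)

        # ... and numerical maximization over (alpha, theta) of the terms of the
        # pseudo log-likelihood (27) that involve these two parameters
        def neg_q(par):
            a, th = par
            t = tau(x, th)
            return -(n * np.log(a) + 2 * n * np.log(th) - n * np.log(th + 1)
                     - th * np.sum(x) + (a - 1) * np.sum(np.log(t))
                     + np.sum((z - 1) * np.log(1 - t**a)))
        alpha, theta = minimize(neg_q, [alpha, theta], method="L-BFGS-B",
                                bounds=[(1e-6, None), (1e-6, None)]).x
    return alpha, theta, p

data = np.array([0.8, 0.8, 1.3, 1.5, 1.8, 1.9, 1.9, 2.1, 2.6, 2.7])
print(em_fit(data, alpha=1.0, theta=0.5, p=0.5))
```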

3.3. Censored Maximum Likelihood Estimation

Censored data often occur in lifetime data analysis. Several popular mechanisms of censoring, such as type-I censoring and type-II censoring, have received much attention in the literature. The survival function of the ELG distribution has a simple closed-form expression, and therefore, it can be used for analyzing lifetime data in the presence of censoring. We briefly discuss the general case of multicensored data. Suppose that there are $n=n_{0}+n_{1}+n_{2}$ subjects, of which
  • $n_{0}$ are known to have failed at the times $t_{1},\dots,t_{n_{0}}$,
  • $n_{1}$ are known to have failed in the interval $[s_{i-1},s_{i}]$ for $i=1,\dots,n_{1}$,
  • $n_{2}$ are known to have survived to a time $r_{i}$ for $i=1,\dots,n_{2}$ but are not observed any longer.
Please note that Type-I censoring and Type-II censoring are particular cases of the multicensoring scheme above. The log-likelihood function of $\phi=(\theta,\alpha,p)$ of the ELG distribution for this multicensoring takes the form
$$l(\phi;x)=n_{0}\log\frac{\alpha\theta^{2}(1-p)}{\theta+1}+\sum_{i=1}^{n_{0}}\log(1+t_{i})-\theta\sum_{i=1}^{n_{0}}t_{i}+(\alpha-1)\sum_{i=1}^{n_{0}}\log\left[1-\frac{\theta+1+\theta t_{i}}{\theta+1}e^{-\theta t_{i}}\right]-2\sum_{i=1}^{n_{0}}\log\left[1-p+p\left(1-\frac{\theta+1+\theta t_{i}}{\theta+1}e^{-\theta t_{i}}\right)^{\alpha}\right]+\sum_{i=1}^{n_{1}}\log\left[\frac{\left(1-\frac{\theta+1+\theta s_{i}}{\theta+1}e^{-\theta s_{i}}\right)^{\alpha}}{1-p+p\left(1-\frac{\theta+1+\theta s_{i}}{\theta+1}e^{-\theta s_{i}}\right)^{\alpha}}-\frac{\left(1-\frac{\theta+1+\theta s_{i-1}}{\theta+1}e^{-\theta s_{i-1}}\right)^{\alpha}}{1-p+p\left(1-\frac{\theta+1+\theta s_{i-1}}{\theta+1}e^{-\theta s_{i-1}}\right)^{\alpha}}\right]+\sum_{i=1}^{n_{2}}\log\left[1-\frac{\left(1-\frac{\theta+1+\theta r_{i}}{\theta+1}e^{-\theta r_{i}}\right)^{\alpha}}{1-p+p\left(1-\frac{\theta+1+\theta r_{i}}{\theta+1}e^{-\theta r_{i}}\right)^{\alpha}}\right].$$
It is straightforward to derive the first derivatives of the log-likelihood function with respect to the three unknown parameters α , θ , and p. Thereafter, the MLEs of the unknown parameters can be obtained by setting the first derivatives equal to zero, i.e.,
$$\frac{\partial l(\phi;x)}{\partial\theta}=\frac{\partial l(\phi;x)}{\partial\alpha}=\frac{\partial l(\phi;x)}{\partial p}=0.$$
Please note that the Newton-Raphson algorithm or other optimization algorithms may be employed to solve the above system of equations, because the MLEs of the unknown parameters cannot be obtained in closed form. Finally, the corresponding information matrix for $\phi$ is too complicated to be presented here.

4. Two Real-Data Applications

In this section, we illustrate the applicability of the ELG distribution using two real-data examples. We use the same data sets to compare the ELG distribution with the Gamma, Weibull, Lindley geometric (LG), and Weibull geometric (WG) distributions, whose densities are given by
(i) 
Gamma ( β , α )
$$f_{1}(x)=\frac{\alpha^{\beta}}{\Gamma(\beta)}x^{\beta-1}e^{-\alpha x},\qquad\beta>0,\ \alpha>0;$$
(ii) 
Weibull ( α , β )
$$f_{2}(x)=\frac{\alpha}{\beta}\left(\frac{x}{\beta}\right)^{\alpha-1}e^{-(x/\beta)^{\alpha}},\qquad\beta>0,\ \alpha>0;$$
(iii) 
LG ( θ , p )
$$f_{3}(x)=\frac{\theta^{2}}{\theta+1}(1-p)(1+x)e^{-\theta x}\left[1-\frac{p(\theta+1+\theta x)}{\theta+1}e^{-\theta x}\right]^{-2},\qquad\theta>0,\ p<1;$$
(iv) 
WG ( α , β , p )
$$f_{4}(x)=\alpha\beta^{\alpha}(1-p)x^{\alpha-1}e^{-(\beta x)^{\alpha}}\left[1-pe^{-(\beta x)^{\alpha}}\right]^{-2},\qquad\alpha>0,\ \beta>0,\ p<1,$$
for $x>0$, respectively. To compare the ELG distribution with the four distributions listed above, we adopt the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the AIC with a correction (AICc) for the two real data sets. In addition, we apply two formal goodness-of-fit tests, the Cramér-von Mises ($W^{*}$) and Anderson-Darling ($A^{*}$) statistics, to further verify which distribution fits the data better; see, for example, [5,20], among others. The smaller the value of the considered criterion, the better the fit to the data.
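For reference, the three information criteria can be computed from a maximized log-likelihood as in the short sketch below (not the authors' code); negll denotes the minimized negative log-likelihood of a fitted model, k the number of parameters, and n the sample size (e.g., fit.fun, k = 3, n = 100 for the ELG fit sketched in Section 3.1).

```python
# Illustrative sketch: AIC, BIC, and AICc from a maximized log-likelihood.
import numpy as np

def information_criteria(negll, k, n):
    aic = 2.0 * k + 2.0 * negll
    bic = k * np.log(n) + 2.0 * negll
    aicc = aic + 2.0 * k * (k + 1) / (n - k - 1)
    return aic, bic, aicc
```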
The first data set concerns the remission times (in months) of a random sample of 128 bladder cancer patients. This data set, presented in Table 1, was studied by [21] in fitting the extended Lomax distribution and by [22] for the modified Weibull geometric distribution. Table 2 shows the MLEs of the parameters, AIC, BIC, and AICc for the ELG, Gamma, Weibull, LG, and WG distributions for the first data set. We observe from Table 2 that the ELG distribution and its special case LG provide an improved fit over other distributions that are commonly used for fitting lifetime data. The plots of the fitted probability density and survival functions are also shown in Figure 4. Please note that the fitted density and survival functions of the ELG distribution appear to match the data better than those of the Gamma, Weibull, and WG distributions. In addition, we observe from the values of the goodness-of-fit tests in Table 3 that the ELG distribution fits the current data better than the other distributions under consideration.
As mentioned in Section 3.1, we can adopt the LR statistic to compare the ELG distribution with its special submodels. For example, the LR statistic for testing between the LG and ELG distributions (i.e., $H_{0}:\alpha=1$ versus $H_{1}:\alpha\neq 1$) is $\omega=0.5645$ and the corresponding p-value is 0.4525. Thus, we fail to reject $H_{0}$ and conclude that there is no statistical difference between the fits to this data using the ELG and its submodel LG. This is quite reasonable because the estimate of $\alpha$ in the ELG model is $\hat{\alpha}=1.0792$, which is close to 1 in the LG model.
In the second data set, we consider the waiting times (in minutes) before service of 100 bank customers. The data are presented in Table 4. This data set was used by [12] in fitting the Lindley distribution. Table 5 shows the MLEs of the parameters, AIC, BIC, and AICc for the ELG, Gamma, Weibull, LG, and WG distributions for the second data set. Table 5 indicates that the ELG distribution is still a strong competitor to other lifetime distributions. In addition, the plots of the fitted probability density and survival functions are shown in Figure 5. Please note that the ELG and WG distributions perform almost identically and that the empirical and the five fitted survival curves nearly overlap for this data set, supporting that the ELG distribution fits this data at least as well as the four alternative distributions. In addition, we observe from the values of the goodness-of-fit tests in Table 6 that the ELG distribution fits the current data better than the Gamma, Weibull, and LG distributions and is comparable with the WG distribution.

5. Concluding Remarks

In this paper, we introduced the exponentiated Lindley geometric distribution, which generalizes the LG distribution due to [14] and the Lindley distribution proposed by [8]. We have studied various statistical properties of the new distribution. Estimation of the unknown parameters of the distribution is discussed based on maximum likelihood, and an EM algorithm is provided for estimating the parameters. In an ongoing project, we study Bayesian inference for these parameters, and the results will be reported elsewhere.

Author Contributions

M.W. and B.P. initiated and carried out the study. M.W. drafted the manuscript. Z.X. and B.P. participated in the data analysis and discussion. All authors read and approved the final manuscript.

Funding

The first author was partially supported by Scientific Innovation Program of Sichuan Province (Major Engineering Project: 2018RZ0093), Nanchong Scientific Council (Strategic Cooperation Program between University and City: NC17SY4020).

Acknowledgments

We are very grateful to the three anonymous reviewers for their constructive comments and suggestions that led to significant improvements of this paper. An early version of this paper is available in [24].

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Nadarajah, S.; Cancho, V.; Ortega, E.M. The geometric exponential Poisson distribution. Stat. Methods Appl. 2013, 22, 355–380. [Google Scholar] [CrossRef]
  2. Conti, M.; Gregori, E.; Panzieri, F. Load distribution among replicated web servers: A QoS-based approach. SIGMETRICS Perform. Eval. Rev. 2000, 27, 12–19. [Google Scholar] [CrossRef]
  3. Fricker, C.; Gast, N.; Mohamed, H. Mean field analysis for inhomogeneous bike sharing systems. In Proceedings of the DMTCS, Montreal, QC, Canada, 18–22 June 2012; pp. 365–376. [Google Scholar]
  4. Adamidis, K.; Loukas, S. A lifetime distribution with decreasing failure rate. Stat. Probab. Lett. 1998, 39, 35–42. [Google Scholar] [CrossRef]
  5. Rezaei, S.; Nadarajah, S.; Tahghighnia, N. A new three-parameter lifetime distribution. Statistics 2013, 47, 835–860. [Google Scholar] [CrossRef]
  6. Barreto-Souza, W.; de Morais, A.L.; Cordeiro, G.M. The Weibull-geometric distribution. J. Stat. Comput. Simul. 2011, 81, 645–657. [Google Scholar] [CrossRef]
  7. Pararai, M.; Warahena-Liyanage, G.; Oluyede, B.O. Exponentiated power Lindley-Poisson distribution: Properties and applications. Commun. Stat. Theory Methods 2017, 46, 4726–4755. [Google Scholar] [CrossRef]
  8. Lindley, D.V. Fiducial distributions and Bayes’ theorem. J. R. Stat. Soc. Ser. B. Methodol. 1958, 20, 102–107. [Google Scholar] [CrossRef]
  9. Gupta, P.; Singh, B. Parameter estimation of Lindley distribution with hybrid censored data. Int. J. Syst. Assur. Eng. Manag. 2012, 4, 378–385. [Google Scholar] [CrossRef]
  10. Mazucheli, J.; Achcar, J.A. The Lindley distribution applied to competing risks lifetime data. Comput. Methods Progr. Biomed. 2011, 104, 188–192. [Google Scholar] [CrossRef]
  11. Zakerzadeh, Y.; Dolati, A. Generalized Lindley distribution. J. Math. Ext. 2009, 3, 13–25. [Google Scholar]
  12. Ghitany, M.E.; Atieh, B.; Nadarajah, S. Lindley distribution and its application. Math. Comput. Simul. 2008, 78, 493–506. [Google Scholar] [CrossRef]
  13. Arellano-Valle, R.B.; Contreras-Reyes, J.E.; Stehlík, M. Generalized skew-normal negentropy and its application to fish condition factor time series. Entropy 2017, 19, 528. [Google Scholar] [CrossRef]
  14. Zakerzadeh, H.; Mahmoudi, E. A new two parameter lifetime distribution: Model and properties. arXiv 2012, arXiv:1204.4248. [Google Scholar]
  15. Nadarajah, S.; Bakouch, H.S.; Tahmasbi, R. A generalized Lindley distribution. Sankhya B 2011, 73, 331–359. [Google Scholar] [CrossRef]
  16. Jodrá, P. Computer generation of random variables with lindley or Poisson-Lindley distribution via the lambert W function. Math. Comput. Simul. 2010, 81, 851–859. [Google Scholar] [CrossRef]
  17. Adler, A. lamW: Lambert-W Function; R package version 1.3.0. 2017. Available online: https://cran.r-project.org/web/packages/lamW/lamW.pdf (accessed on 18 May 2019).
  18. Leadbetter, M.R.; Lindgren, G.; Rootzén, H. Extremes and Related Properties of Random Sequences and Processes; Springer Series in Statistics; Springer: New York, NY, USA, 1983. [Google Scholar]
  19. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B (Methodol.) 1977, 39, 1–38. [Google Scholar] [CrossRef]
  20. Chen, G.; Balakrishnan, N. A general purpose approximate goodness-of-fit test. J. Qual. Technol. 1995, 27, 154–161. [Google Scholar] [CrossRef]
  21. Lemonte, A.J.; Cordeiro, G.M. An extended Lomax distribution. Statistics 2013, 47, 800–816. [Google Scholar] [CrossRef]
  22. Wang, M.; Elbatal, I. The modified Weibull geometric distribution. METRON 2015, 73, 303–315. [Google Scholar] [CrossRef]
  23. Biçer, C. Statistical inference for geometric process with the power Lindley distribution. Entropy 2018, 20, 723. [Google Scholar] [CrossRef]
  24. Wang, M. A new three-parameter lifetime distribution and associated inference. arXiv 2013, arXiv:1308.4128. [Google Scholar]
Figure 1. Plots of the pdf of the ELG distribution for different values of α, θ, and p.
Figure 2. Plots of the hf of the ELG distribution for different values of α, θ, and p.
Figure 3. Plots of the Shannon entropy of the ELG distribution for different values of α, θ, and p.
Figure 4. Plots of the estimated density and survival function of the fitted models for the first data set.
Figure 5. Plots of the estimated density and survival function of the fitted models for the second data set.
Table 1. The first data set: the remission time (in months) of a random sample of 128 bladder cancer patients.
0.08  2.09  3.48  4.87  6.94  8.66  13.11  23.63  0.20  2.23
3.52  4.98  6.97  9.02  13.29  0.40  2.26  3.57  5.06  7.09
9.22  13.80  25.74  0.50  2.46  3.64  5.09  7.26  9.47  14.24
25.82  0.51  2.54  3.70  5.17  7.28  9.74  14.76  26.31  0.81
2.62  3.82  5.32  7.32  10.06  14.77  32.15  2.64  3.88  5.32
7.39  10.34  14.83  34.26  0.90  2.69  4.18  5.34  7.59  10.66
15.96  36.66  1.05  2.69  4.23  5.41  7.62  10.75  16.62  43.01
1.19  2.75  4.26  5.41  7.63  17.12  46.12  1.26  2.83  4.33
5.49  7.66  11.25  17.14  79.05  1.35  2.87  5.62  7.87  11.64
17.36  1.40  3.02  4.34  5.71  7.93  11.79  18.10  1.46  4.40
5.85  8.26  11.98  19.13  1.76  3.25  4.50  6.25  8.37  12.02
2.02  3.31  4.51  6.54  8.53  12.03  20.28  2.02  3.36  6.76
12.07  21.73  2.07  3.36  6.93  8.65  12.63  22.69
Table 2. MLEs of the fitted models, AIC, BIC, and AICc for the first data set.
Model     Parameters                                                        AIC        BIC        AICc
Gamma     $\hat{\alpha}=0.1252$, $\hat{\beta}=1.1726$                       830.7356   836.4396   830.8316
Weibull   $\hat{\alpha}=1.0478$, $\hat{\beta}=9.5607$                       832.1738   837.8778   832.2698
LG        $\hat{\theta}=0.0742$, $\hat{p}=0.8898$                           823.1859   833.742    823.2819
WG        $\hat{\alpha}=1.6042$, $\hat{\beta}=0.0286$, $\hat{p}=0.9362$     826.1842   834.7403   826.3777
ELG       $\hat{\alpha}=1.0792$, $\hat{\theta}=0.0699$, $\hat{p}=0.9204$    824.6214   833.1775   824.8149
Table 3. Goodness-of-fit tests for the first data set.
Model     W*        A*
Gamma     0.11988   0.71928
Weibull   0.13136   0.78643
LG        0.05374   0.33827
WG        0.01493   0.09939
ELG       0.01389   0.09498
Table 4. The second data set: the waiting time (in minutes) before service of 100 bank customers.
0.8  0.8  1.3  1.5  1.8  1.9  1.9  2.1  2.6  2.7
2.9  3.1  3.2  3.3  3.5  3.6  4.0  4.1  4.2  4.2
4.3  4.3  4.4  4.4  4.6  4.7  4.7  4.8  4.9  4.9
5.0  5.3  5.5  5.7  5.7  6.1  6.2  6.2  6.2  6.3
6.7  6.9  7.1  7.1  7.1  7.1  7.4  7.6  7.7  8.0
8.2  8.6  8.6  8.6  8.8  8.8  8.9  8.9  9.5  9.6
9.7  9.8  10.7  10.9  11.0  11.0  11.1  11.2  11.2  11.5
11.9  12.4  12.5  12.9  13.0  13.1  13.3  13.6  13.7  13.9
14.1  15.4  15.4  17.3  17.3  18.1  18.2  18.4  18.9  19.0
19.9  20.6  21.3  21.4  21.9  23.0  27.0  31.6  33.1  38.5
Table 5. MLEs of the fitted models, AIC, BIC, and AICc for the second data set.
Model     Parameters                                                        AIC        BIC        AICc
Gamma     $\hat{\alpha}=0.2033$, $\hat{\beta}=2.0089$                       638.6002   643.8106   638.724
Weibull   $\hat{\alpha}=1.4585$, $\hat{\beta}=10.9553$                      641.4614   646.6717   641.5851
LG        $\hat{\theta}=0.2027$, $\hat{p}=0.2427$                           641.8269   647.0372   641.9506
WG        $\hat{\alpha}=1.9789$, $\hat{\beta}=0.0501$, $\hat{p}=0.82132$    639.9084   647.7239   640.1584
ELG       $\hat{\alpha}=1.4602$, $\hat{\theta}=0.1725$, $\hat{p}=0.5385$    640.3108   648.1263   640.5608
Table 6. Goodness-of-fit tests for the second data set.
Model     W*        A*
Gamma     0.02761   0.18225
Weibull   0.06294   0.39624
LG        0.05374   0.33827
WG        0.01706   0.12365
ELG       0.01801   0.12665
