Article

Statistical Inference on the Shannon Entropy of Inverse Weibull Distribution under the Progressive First-Failure Censoring

School of Science, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Entropy 2019, 21(12), 1209; https://doi.org/10.3390/e21121209
Submission received: 2 November 2019 / Revised: 30 November 2019 / Accepted: 4 December 2019 / Published: 10 December 2019

Abstract

Entropy is an uncertainty measure of random variables that mathematically quantifies the expected amount of information. In this paper, we focus on estimating the parameters and the entropy of an Inverse Weibull distribution under progressive first-failure censoring, using both classical (maximum likelihood) and Bayesian methods. The Bayesian estimates are obtained under asymmetric (General Entropy, Linex) as well as symmetric (Squared Error) loss functions. Because the Bayes estimates take a complicated form, no explicit solution is available; therefore, the Lindley method as well as an Importance Sampling procedure is applied. Furthermore, the Importance Sampling method is used to construct the Highest Posterior Density credible intervals of the entropy. For comparison, the asymptotic confidence intervals of the entropy are also obtained. Finally, a simulation study is carried out and a real data set is analyzed to illustrate the proposed methods.

1. Introduction

Usually, in lifetime experiments, due to limited time and cost, the exact lifetimes of all products cannot be observed, so we obtain censored data. The most common censoring schemes are the so-called Type-I and Type-II censoring. In the first, N units are placed in a life test and the experiment is terminated at a predetermined time; in the second, the experiment is terminated once a predetermined number m of units has failed. Progressive censoring is a generalization of Type-II censoring which permits units to be randomly removed at various time points rather than only at the end of the test.
Compared to conventional Type-I and Type-II censoring, the withdrawal of non-failed items under progressive censoring sacrifices some estimation accuracy. However, in certain practical circumstances, experimenters are forced to withdraw items from tests, and the progressive censoring methodology then allows them to profit from the information carried by the withdrawn items.
When the above methods still fail to meet time and cost constraints, other censoring schemes have been proposed by researchers to further improve efficiency. One successful attempt is first-failure censoring. In this scheme, N = k × n units are randomly assigned to n groups, with k identical units in each group, and the life test runs all groups simultaneously until the first failure is observed in each group.
Since progressive censoring and first-failure censoring can both greatly enhance the efficiency of the lifetime experiment, Ref. [1] combined these two schemes and developed a novel censoring scheme called progressive first-failure censoring. In this scheme, $N = k \times n$ units are randomly divided into n disjoint groups with k identical units each at the beginning of the life test, and the experiment terminates when the mth failure occurs. When the ith failure occurs, the group containing that unit is removed together with $R_i$ randomly selected surviving groups, and when the mth failure occurs, all remaining groups are removed. Here, $R = (R_1, \ldots, R_m)$ and m are set in advance. Note that:
(1) When $k = 1$, progressive first-failure censoring reduces to the well-known progressive Type-II censoring.
(2) When $R_1 = R_2 = \cdots = R_m = 0$, it becomes the first-failure censoring mentioned above.
(3) When $k = 1$, $R_1 = R_2 = \cdots = R_{m-1} = 0$, and $R_m = n - m$, it corresponds to Type-II censoring.
Since it is more efficient than other censoring schemes, many researchers have studied progressive first-failure censoring. Ref. [2] considered both point and interval estimation of the two parameters of a Burr-XII distribution when both parameters are unknown; Ref. [3] dealt with the reliability function of the GIED (generalized inverted exponential distribution) under progressive first-failure censoring; Ref. [4] established different reliability sampling plans using two criteria for a Lognormal distribution based on progressive first-failure censoring; Ref. [5] chose a competing-risks data model under progressive first-failure censoring for a Gompertz distribution and estimated the model using Bayesian and non-Bayesian methods; Ref. [6] considered the lifetime performance index ($C_L$) under progressive first-failure censoring for a Pareto model, solved the problem of hypothesis testing of $C_L$, and gave a lower specification limit.
The Weibull distribution is widely used in analyzing lifetime data. Nevertheless, the Weibull distribution possesses a constant, decreasing, or increasing failure rate function; its failure rate function cannot be non-monotone, for example unimodal. In practice, if research shows that the empirical failure rate function is non-monotone, then the Inverse Weibull model is a more suitable choice than the Weibull model. The Inverse Weibull model has a wide variety of applications in pharmacy, economics, and chemistry.
The cumulative distribution function and the probability density function of the Inverse Weibull distribution (IWD) are, respectively,
$$F(x; \alpha, \lambda) = e^{-\lambda x^{-\alpha}}$$
and
$$f(x; \alpha, \lambda) = \alpha \lambda e^{-\lambda x^{-\alpha}} x^{-\alpha-1},$$
where $x > 0$, $\lambda > 0$ is the scale parameter and $\alpha > 0$ is the shape parameter.
The failure rate function is
$$h(x; \alpha, \lambda) = \frac{\alpha \lambda e^{-\lambda x^{-\alpha}} x^{-\alpha-1}}{1 - e^{-\lambda x^{-\alpha}}}.$$
One of the most important properties of the IWD is that its failure rate function can be unimodal. Figure 1 clearly illustrates this shape, and a distribution with a unimodal failure rate function is more flexible in applications.
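As a quick illustration of this shape, the failure rate above can be plotted in R (a minimal sketch; the parameter values are arbitrary display choices, not values from the paper):

    # Failure rate function of the Inverse Weibull distribution
    iw_hazard <- function(x, alpha, lambda) {
      num <- alpha * lambda * exp(-lambda * x^(-alpha)) * x^(-alpha - 1)
      num / (1 - exp(-lambda * x^(-alpha)))
    }
    x <- seq(0.01, 5, by = 0.01)
    plot(x, iw_hazard(x, alpha = 2, lambda = 1), type = "l",
         xlab = "x", ylab = "h(x)")  # rises to a single peak, then decays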
Many researchers have studied the Inverse Weibull distribution. Ref. [7] investigated Bayesian inference and prediction of the IWD under the Type-II censoring scheme; Ref. [8] considered not only the Bayesian estimation but also the generalized Bayesian estimation of the IWD parameters; Ref. [9] used three classical methods to estimate the parameters of the IWD; Ref. [10] estimated the unknown parameters of the IWD under progressive Type-I interval censoring and chose optimal censoring schemes; Ref. [11] adopted two methods to obtain bias corrections of the maximum likelihood estimators of the IWD parameters.
Entropy is a quantitative measure of the uncertainty of a probability distribution. For a random variable X with probability density function f(x), the Shannon entropy, denoted H(X), is defined as:
$$H(X) = -\int f(x) \log\left(f(x)\right) dx.$$
Many studies about entropy can be found in the literature. Ref. [12] proposed an indirect method using a decomposition to simplify the computation of entropy under progressive Type-II censoring; Ref. [13] estimated the entropy of several shifted exponential distributions and extended the results to other settings; Ref. [14] estimated the Shannon entropy of a Rayleigh model under doubly generalized Type-II hybrid censoring and compared the performance by two criteria.
The Shannon entropy of the IWD is given by:
$$H(X) = \frac{\alpha+1}{\alpha}\left[\gamma + \log(\lambda)\right] + 1 - \log(\alpha \lambda),$$
where $\gamma$ is the Euler–Mascheroni constant.
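As a numerical check of this formula (using R's built-in digamma function, with digamma(1) = −γ), the parameter choice α = 2, λ = 1 used later in the simulation study gives H = 1.172676:

    gamma_e <- -digamma(1)  # Euler-Mascheroni constant, about 0.5772157
    iw_entropy <- function(alpha, lambda)
      (alpha + 1) / alpha * (gamma_e + log(lambda)) + 1 - log(alpha * lambda)
    iw_entropy(2, 1)  # 1.172676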
In this paper, we discuss the maximum likelihood and Bayesian estimation of the parameters ($\alpha$, $\lambda$) and the entropy of the IWD under progressive first-failure censoring. To the best of our knowledge, this topic is new and has received little attention, yet it deserves in-depth study. The rest of this paper is organized as follows:
In Section 2, we derive the maximum likelihood estimation of entropy and parameters. In Section 3, we present the asymptotic intervals for the entropy and parameters. In Section 4, we work out the Bayesian estimation of entropy and parameters using Lindley and Importance Sampling methods. In Section 5, a simulation study is organized to compare different estimators. In Section 6, we analyze a real data set to explain the previous conclusions. Finally, in Section 7, a conclusion is presented.

2. Maximum Likelihood Estimation

We consider the maximum likelihood estimates (MLEs) of the entropy and the parameters of an Inverse Weibull distribution under progressive first-failure censoring. Let $X_{1:m:n:k}^{R} \le X_{2:m:n:k}^{R} \le \cdots \le X_{m:m:n:k}^{R}$ be a progressively first-failure censored sample from the IWD with censoring scheme $(k, n, m, R_1, \ldots, R_m)$. For simplicity, we write $x_i$ for $x_{i:m:n:k}^{R}$, $i = 1, \ldots, m$. The joint probability density function is
$$f_{X_{1:m:n:k}^{R}, \ldots, X_{m:m:n:k}^{R}}(x_1, \ldots, x_m) = P k^m \prod_{i=1}^{m} f(x_i)\left[1 - F(x_i)\right]^{k(R_i+1)-1},$$
where $0 < x_1 < \cdots < x_m < \infty$ and $P = n(n-1-R_1)\cdots(n-m+1-R_1-\cdots-R_{m-1})$ is a normalizing constant.
Combining (1), (2), and (5), the likelihood function (LF) is
$$L(x \mid \alpha, \lambda) = P k^m \alpha^m \lambda^m e^{-\lambda \sum_{i=1}^{m} x_i^{-\alpha}} \prod_{i=1}^{m} x_i^{-\alpha-1} \prod_{i=1}^{m} \left(1 - e^{-\lambda x_i^{-\alpha}}\right)^{k(R_i+1)-1}.$$
Then, the log-likelihood function is written as
$$l(x \mid \alpha, \lambda) = \log P + m \log k + m \log \alpha + m \log \lambda - \lambda \sum_{i=1}^{m} x_i^{-\alpha} - (\alpha+1) \sum_{i=1}^{m} \log x_i + \sum_{i=1}^{m} \left(k(R_i+1)-1\right) \log\left(1 - e^{-\lambda x_i^{-\alpha}}\right).$$
Taking partial derivatives with respect to $\alpha$ and $\lambda$, the corresponding score equations are
$$\frac{\partial l}{\partial \alpha} = \frac{m}{\alpha} + \lambda \sum_{i=1}^{m} x_i^{-\alpha} \log x_i - \sum_{i=1}^{m} \log x_i - \sum_{i=1}^{m} \left(k(R_i+1)-1\right) \frac{\lambda x_i^{-\alpha} \log x_i \, e^{-\lambda x_i^{-\alpha}}}{1 - e^{-\lambda x_i^{-\alpha}}} = 0,$$
$$\frac{\partial l}{\partial \lambda} = \frac{m}{\lambda} - \sum_{i=1}^{m} x_i^{-\alpha} + \sum_{i=1}^{m} \left(k(R_i+1)-1\right) \frac{x_i^{-\alpha} e^{-\lambda x_i^{-\alpha}}}{1 - e^{-\lambda x_i^{-\alpha}}} = 0.$$
The MLEs $\hat{\alpha}$ and $\hat{\lambda}$ are the roots of Equations (8) and (9), respectively. These equations have no explicit solution, so numerical techniques are needed to approximate the parameter values. Furthermore, by the invariance property of MLEs, the ML estimator of the entropy is
$$\hat{H}(X) = \frac{\hat{\alpha}+1}{\hat{\alpha}}\left[\gamma + \log(\hat{\lambda})\right] + 1 - \log(\hat{\alpha}\hat{\lambda}).$$
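A minimal R sketch of this section (the function name neg_loglik is ours; a censored sample x, the removal scheme R, and the group size k are assumed to be in the workspace): maximize the log-likelihood (7) numerically with optim, then plug the MLEs into the entropy expression above.

    # Negative log-likelihood (7), dropping the constant log P + m log k
    neg_loglik <- function(theta, x, R, k) {
      alpha <- theta[1]; lambda <- theta[2]
      if (alpha <= 0 || lambda <= 0) return(Inf)
      m <- length(x)
      ll <- m * log(alpha) + m * log(lambda) - lambda * sum(x^(-alpha)) -
        (alpha + 1) * sum(log(x)) +
        sum((k * (R + 1) - 1) * log(1 - exp(-lambda * x^(-alpha))))
      -ll
    }
    fit <- optim(c(1, 1), neg_loglik, x = x, R = R, k = k)
    alpha_hat <- fit$par[1]; lambda_hat <- fit$par[2]
    # Entropy MLE by the invariance property
    H_hat <- (alpha_hat + 1) / alpha_hat * (-digamma(1) + log(lambda_hat)) +
      1 - log(alpha_hat * lambda_hat)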

3. Confidence Intervals

3.1. Asymptotic Intervals for MLEs

The $100(1-\xi)\%$ confidence intervals (CIs) for the two parameters $\alpha$ and $\lambda$ can be constructed from the asymptotic normality of the MLEs, with $Var(\hat{\alpha})$ and $Var(\hat{\lambda})$ obtained from the inverse of the observed Fisher information matrix.
From Equation (7), the second-order partial derivatives with respect to $\alpha$ and $\lambda$ are as follows:
$$l_{20} = \frac{\partial^2 l}{\partial \alpha^2} = \sum_{i=1}^{m}\left(k(R_i+1)-1\right)\frac{\lambda x_i^{-\alpha}\log^2 x_i \, e^{-\lambda x_i^{-\alpha}}\left(1 - \lambda x_i^{-\alpha} - e^{-\lambda x_i^{-\alpha}}\right)}{\left(1 - e^{-\lambda x_i^{-\alpha}}\right)^2} - \frac{m}{\alpha^2} - \lambda \sum_{i=1}^{m} x_i^{-\alpha}(\log x_i)^2,$$
$$l_{11} = \frac{\partial^2 l}{\partial \alpha \, \partial \lambda} = \sum_{i=1}^{m} x_i^{-\alpha}\log x_i + \sum_{i=1}^{m}\left(k(R_i+1)-1\right)\frac{x_i^{-\alpha}\log x_i \, e^{-\lambda x_i^{-\alpha}}\left(-1 + \lambda x_i^{-\alpha} + e^{-\lambda x_i^{-\alpha}}\right)}{\left(1 - e^{-\lambda x_i^{-\alpha}}\right)^2},$$
$$l_{02} = \frac{\partial^2 l}{\partial \lambda^2} = -\frac{m}{\lambda^2} - \sum_{i=1}^{m}\left(k(R_i+1)-1\right)\frac{x_i^{-2\alpha} e^{-\lambda x_i^{-\alpha}}}{\left(1 - e^{-\lambda x_i^{-\alpha}}\right)^2}.$$
The Fisher information matrix of the two parameters $\alpha$ and $\lambda$ is $I(\alpha, \lambda)$. Here, we approximate $(\hat{\alpha}, \hat{\lambda})^T$ as a bivariate normal vector with mean $(\alpha, \lambda)^T$ and covariance matrix $I^{-1} = I^{-1}(\alpha, \lambda)$. In practice, we use $I^{-1}(\hat{\alpha}, \hat{\lambda})$ to estimate $I^{-1}(\alpha, \lambda)$. In other words,
$$(\hat{\alpha}, \hat{\lambda})^T \sim N\left[(\alpha, \lambda)^T, \, I^{-1}(\hat{\alpha}, \hat{\lambda})\right],$$
where
$$I^{-1}(\hat{\alpha}, \hat{\lambda}) = \begin{pmatrix} -l_{20} & -l_{11} \\ -l_{11} & -l_{02} \end{pmatrix}^{-1}_{(\alpha,\lambda)=(\hat{\alpha},\hat{\lambda})} = \begin{pmatrix} \tau_{11} & \tau_{12} \\ \tau_{21} & \tau_{22} \end{pmatrix}.$$
Thus, based on the normal approximation, the $100(1-\xi)\%$ CIs for the two parameters $\alpha$ and $\lambda$ are
$$\hat{\alpha} \pm Z_{\xi/2}\sqrt{\tau_{11}}, \quad \hat{\lambda} \pm Z_{\xi/2}\sqrt{\tau_{22}}.$$
Here, $Z_{\xi/2}$ is the upper $\xi/2$ percentile of the standard normal distribution. To obtain an approximate estimate of the variance of the entropy estimator, we use the delta method. Let
$$\hat{\Psi} = \left(H_\alpha, H_\lambda\right)^T,$$
where
$$H_\alpha = -\frac{(\alpha+1)(\gamma+\log(\lambda))}{\alpha^2} + \frac{\gamma+\log(\lambda)}{\alpha} - \frac{1}{\alpha}, \quad H_\lambda = \frac{\alpha+1}{\alpha\lambda} - \frac{1}{\lambda}.$$
Then, the approximate estimate of $\widehat{Var}(\hat{H})$ is obtained by
$$\widehat{Var}(\hat{H}) = \left[\hat{\Psi}^T I^{-1}(\hat{\alpha}, \hat{\lambda}) \hat{\Psi}\right].$$
Therefore, approximately,
$$\frac{\hat{H} - H}{\sqrt{\widehat{Var}(\hat{H})}} \sim N(0, 1).$$
The asymptotic $100(1-\xi)\%$ CI for the entropy is derived as
$$\hat{H} \pm Z_{\xi/2}\sqrt{\widehat{Var}(\hat{H})}.$$

3.2. Asymptotic Intervals for Log-Transformed MLE

Ref. [15] noted that asymptotic CIs based on the log-transformed MLE have more precise coverage probability. It is clear that $\alpha$, $\lambda$, and the entropy are all positive. The $100(1-\xi)\%$ asymptotic CIs for the log-transformed MLEs are
$$\log(\hat{\alpha}) \pm Z_{\xi/2}\sqrt{\widehat{Var}\left(\log(\hat{\alpha})\right)}, \quad \log(\hat{\lambda}) \pm Z_{\xi/2}\sqrt{\widehat{Var}\left(\log(\hat{\lambda})\right)}.$$
Thus, based on the normal approximation of the log-transformed MLE, the $100(1-\xi)\%$ CIs for the two parameters $\alpha$ and $\lambda$ are
$$\hat{\alpha}\exp\left[\pm\frac{Z_{\xi/2}\sqrt{\tau_{11}}}{\hat{\alpha}}\right], \quad \hat{\lambda}\exp\left[\pm\frac{Z_{\xi/2}\sqrt{\tau_{22}}}{\hat{\lambda}}\right].$$
Furthermore, a $100(1-\xi)\%$ CI for the entropy is
$$\hat{H}\exp\left[\pm\frac{Z_{\xi/2}\sqrt{\widehat{Var}(\hat{H})}}{\hat{H}}\right].$$
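Both interval types are straightforward to compute once the observed information is available. A sketch continuing the one in Section 2 (we ask optim for the Hessian of the negative log-likelihood, whose inverse estimates the covariance matrix of the MLEs):

    fit <- optim(c(1, 1), neg_loglik, x = x, R = R, k = k, hessian = TRUE)
    tau <- solve(fit$hessian)  # inverse observed Fisher information
    alpha_hat <- fit$par[1]; lambda_hat <- fit$par[2]
    # Delta method for the entropy variance
    g <- -digamma(1) + log(lambda_hat)
    Psi <- c(-(alpha_hat + 1) * g / alpha_hat^2 + g / alpha_hat - 1 / alpha_hat,
             (alpha_hat + 1) / (alpha_hat * lambda_hat) - 1 / lambda_hat)
    H_hat <- (alpha_hat + 1) / alpha_hat * g + 1 - log(alpha_hat * lambda_hat)
    var_H <- drop(t(Psi) %*% tau %*% Psi)
    z <- qnorm(0.975)  # xi = 0.05
    ci_plain <- H_hat + c(-1, 1) * z * sqrt(var_H)              # MLE-based CI
    ci_log <- H_hat * exp(c(-1, 1) * z * sqrt(var_H) / H_hat)   # log-transformed CI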

4. Bayes Estimation

4.1. Prior and Posterior Distribution

When both $\alpha$ and $\lambda$ are unknown, no joint conjugate prior exists for $(\alpha, \lambda)$. A common choice is independent Gamma priors for both $\alpha$ and $\lambda$. However, for the Inverse Weibull distribution, it is not appropriate to choose Gamma priors for both parameters; the specific reason is explained in detail in the Importance Sampling subsection (Section 4.4). Thus, in this case, we consider the following prior distributions:
$\lambda$ possesses a Gamma prior $G(a, b)$ with probability density function
$$\pi_1(\lambda) \propto \lambda^{a-1} e^{-b\lambda},$$
and $\alpha$ has a non-informative prior with probability density function
$$\pi_2(\alpha) \propto \frac{1}{\alpha},$$
where a and b are pre-fixed, known, and positive.
Now, the joint prior distribution of the two parameters $\alpha$ and $\lambda$ is
$$\pi(\alpha, \lambda) \propto \frac{\lambda^{a-1} e^{-b\lambda}}{\alpha}.$$
Then, the joint posterior PDF of the two parameters $\alpha$ and $\lambda$ is
$$\pi(\alpha, \lambda \mid x) = \frac{L(x \mid \alpha, \lambda)\pi(\alpha, \lambda)}{\int_0^\infty \int_0^\infty L(x \mid \alpha, \lambda)\pi(\alpha, \lambda)\, d\alpha\, d\lambda}.$$

4.2. Symmetric and Asymmetric Loss Functions

Choosing the loss function is an important part of Bayesian inference. In this subsection, we consider the Bayes estimation of the two parameters $\alpha$, $\lambda$, and the entropy of the IWD under both asymmetric and symmetric loss functions. A widely used symmetric loss function is the squared error loss function (SELF). As asymmetric loss functions, we choose the general entropy loss function (GELF) and the Linex loss function (LLF). The SELF, LLF, and GELF are defined as
$$L_S(\aleph, \hat{\aleph}) = (\hat{\aleph} - \aleph)^2, \quad L_L(\aleph, \hat{\aleph}) = \exp\left(p(\hat{\aleph} - \aleph)\right) - p(\hat{\aleph} - \aleph) - 1, \quad L_E(\aleph, \hat{\aleph}) = \left(\frac{\hat{\aleph}}{\aleph}\right)^q - q\log\left(\frac{\hat{\aleph}}{\aleph}\right) - 1,$$
where $\hat{\aleph}$ denotes an estimate of $\aleph$. In the LLF and GELF, the signs of p and q indicate the direction of the asymmetry and their magnitudes its degree; neither p nor q may be zero.
The Bayes estimates of $\aleph$ under the above loss functions are
$$\hat{\aleph}_S = E(\aleph \mid x), \quad \hat{\aleph}_L = -\frac{1}{p}\log\left[E\left(e^{-p\aleph} \mid x\right)\right], \quad \hat{\aleph}_E = \left[E\left(\aleph^{-q} \mid x\right)\right]^{-1/q},$$
where $E(\cdot \mid x)$ denotes the posterior expectation. Now, we can derive the Bayes estimates of $\alpha$, $\lambda$, and the entropy under SELF, LLF, and GELF.
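Given draws from the posterior distribution (however they are produced), the three estimators are one-liners in R (a schematic sketch; samp is a hypothetical vector of posterior draws of the quantity of interest):

    est_self  <- function(samp) mean(samp)                          # SELF: posterior mean
    est_linex <- function(samp, p) -log(mean(exp(-p * samp))) / p   # LLF
    est_gelf  <- function(samp, q) mean(samp^(-q))^(-1 / q)         # GELF

In our setting the posterior is only known up to the weight function Q(α, λ), so these plain means are replaced by weighted means in the Importance Sampling procedure of Section 4.4.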
To begin with, the Bayes estimate of $g(\alpha, \lambda)$ under SELF is
$$\hat{g}(\alpha, \lambda)_S = \frac{\int_0^\infty \int_0^\infty g(\alpha, \lambda) L(x \mid \alpha, \lambda)\pi(\alpha, \lambda)\, d\alpha\, d\lambda}{\int_0^\infty \int_0^\infty L(x \mid \alpha, \lambda)\pi(\alpha, \lambda)\, d\alpha\, d\lambda}.$$
Letting $g(\alpha, \lambda)$ take the value $\alpha$, $\lambda$, or the entropy gives the corresponding estimate under SELF.
Moreover, the Bayes estimate of $g(\alpha, \lambda)$ under LLF is
$$\hat{g}(\alpha, \lambda)_L = -\frac{1}{p}\log\left[\frac{\int_0^\infty \int_0^\infty e^{-p g(\alpha, \lambda)} L(x \mid \alpha, \lambda)\pi(\alpha, \lambda)\, d\alpha\, d\lambda}{\int_0^\infty \int_0^\infty L(x \mid \alpha, \lambda)\pi(\alpha, \lambda)\, d\alpha\, d\lambda}\right].$$
Again, letting $g(\alpha, \lambda)$ take the value $\alpha$, $\lambda$, or the entropy gives the corresponding estimate under LLF.
Finally, the Bayes estimate of $g(\alpha, \lambda)$ under GELF is
$$\hat{g}(\alpha, \lambda)_E = \left[\frac{\int_0^\infty \int_0^\infty g(\alpha, \lambda)^{-q} L(x \mid \alpha, \lambda)\pi(\alpha, \lambda)\, d\alpha\, d\lambda}{\int_0^\infty \int_0^\infty L(x \mid \alpha, \lambda)\pi(\alpha, \lambda)\, d\alpha\, d\lambda}\right]^{-1/q}.$$
Letting $g(\alpha, \lambda)$ take the value $\alpha$, $\lambda$, or the entropy gives the corresponding estimate under GELF.
Obviously, the Bayesian estimates cannot be expressed in closed form. Hence, we recommend using the Lindley method as well as the Importance Sampling procedure to derive them.

4.3. Lindley Approximation

The Bayes estimates take the form of a ratio of two integrals which cannot be reduced to a closed form. Therefore, we utilize the Lindley approximation method to derive them. For parameters $(\aleph_1, \aleph_2)$, the Bayesian estimate of $g(\aleph_1, \aleph_2)$ is
$$\hat{g} = g(\hat{\aleph}_1, \hat{\aleph}_2) + 0.5\left(S + l_{30} I_{12} + l_{03} I_{21} + l_{21} O_{12} + l_{12} O_{21}\right) + \rho_1 S_{12} + \rho_2 S_{21},$$
where
$$S = \sum_{i=1}^{2} \sum_{j=1}^{2} \omega_{ij} \tau_{ij}, \quad l_{ij} = \frac{\partial^{i+j} l}{\partial \aleph_1^i \, \partial \aleph_2^j}, \; i+j = 3; \; i, j = 0, 1, 2, 3,$$
$$\rho_i = \frac{\partial \rho}{\partial \aleph_i}, \quad \omega_i = \frac{\partial g}{\partial \aleph_i}, \quad \omega_{ij} = \frac{\partial^2 g}{\partial \aleph_i \, \partial \aleph_j}, \quad \rho = \log \pi(\aleph_1, \aleph_2),$$
$$S_{ij} = \omega_i \tau_{ii} + \omega_j \tau_{ji}, \quad I_{ij} = \tau_{ii}\left(\omega_i \tau_{ii} + \omega_j \tau_{ij}\right), \quad O_{ij} = 3\omega_i \tau_{ii} \tau_{ij} + \omega_j\left(\tau_{ii}\tau_{jj} + 2\tau_{ij}^2\right).$$
All terms are evaluated at the MLEs of $\aleph_1$ and $\aleph_2$. For our problem, we take $(\aleph_1, \aleph_2) = (\alpha, \lambda)$. Then, we obtain
$$l_{30} = \frac{2m}{\alpha^3} + \lambda \sum_{i=1}^{m} x_i^{-\alpha}\log^3 x_i + \sum_{i=1}^{m}\left(k(R_i+1)-1\right)\left[\frac{\left(3\lambda^2 x_i^{-2\alpha} - \lambda x_i^{-\alpha} - \lambda^3 x_i^{-3\alpha}\right)\log^3 x_i \, e^{-\lambda x_i^{-\alpha}}}{1 - e^{-\lambda x_i^{-\alpha}}} + \frac{3\left(\lambda^2 x_i^{-2\alpha} - \lambda^3 x_i^{-3\alpha}\right)\log^3 x_i \, e^{-2\lambda x_i^{-\alpha}}}{\left(1 - e^{-\lambda x_i^{-\alpha}}\right)^2} - \frac{2\lambda^3 x_i^{-3\alpha}\log^3 x_i \, e^{-3\lambda x_i^{-\alpha}}}{\left(1 - e^{-\lambda x_i^{-\alpha}}\right)^3}\right],$$
$$l_{03} = \frac{2m}{\lambda^3} + \sum_{i=1}^{m}\left(k(R_i+1)-1\right)\frac{x_i^{-3\alpha} e^{-\lambda x_i^{-\alpha}}\left(1 + e^{-\lambda x_i^{-\alpha}}\right)}{\left(1 - e^{-\lambda x_i^{-\alpha}}\right)^3},$$
$$l_{21} = -\sum_{i=1}^{m} x_i^{-\alpha}\log^2 x_i + \sum_{i=1}^{m}\left(k(R_i+1)-1\right)\left[\frac{\lambda^2 x_i^{-3\alpha}\log^2 x_i \, e^{-\lambda x_i^{-\alpha}}}{1 - e^{-\lambda x_i^{-\alpha}}} + \frac{3\lambda^2 x_i^{-3\alpha}\log^2 x_i \, e^{-2\lambda x_i^{-\alpha}}}{\left(1 - e^{-\lambda x_i^{-\alpha}}\right)^2} + \frac{2\lambda^2 x_i^{-3\alpha}\log^2 x_i \, e^{-3\lambda x_i^{-\alpha}}}{\left(1 - e^{-\lambda x_i^{-\alpha}}\right)^3} - \frac{3\lambda x_i^{-2\alpha}\log^2 x_i \, e^{-\lambda x_i^{-\alpha}}}{1 - e^{-\lambda x_i^{-\alpha}}} - \frac{3\lambda x_i^{-2\alpha}\log^2 x_i \, e^{-2\lambda x_i^{-\alpha}}}{\left(1 - e^{-\lambda x_i^{-\alpha}}\right)^2} + \frac{x_i^{-\alpha}\log^2 x_i \, e^{-\lambda x_i^{-\alpha}}}{1 - e^{-\lambda x_i^{-\alpha}}}\right],$$
$$l_{12} = \sum_{i=1}^{m}\left(k(R_i+1)-1\right)\frac{x_i^{-3\alpha}\log x_i \, e^{\lambda x_i^{-\alpha}}\left[2 x_i^{\alpha}\left(e^{\lambda x_i^{-\alpha}} - 1\right) - \lambda\left(e^{\lambda x_i^{-\alpha}} + 1\right)\right]}{\left(e^{\lambda x_i^{-\alpha}} - 1\right)^3}, \quad \rho_1 = -\frac{1}{\hat{\alpha}}, \quad \rho_2 = \frac{a-1}{\hat{\lambda}} - b.$$
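Numerically, once the MLEs, the $\tau_{ij}$, the third derivatives above, and $\rho_1$, $\rho_2$ are evaluated, Equation (28) is a mechanical assembly. The following R helper (our own sketch, not code from the paper) computes the Lindley estimate for any g whose first derivatives w = (ω₁, ω₂) and second-derivative matrix W = (ω_ij) are supplied:

    lindley <- function(g_hat, w, W, tau, l30, l03, l21, l12, rho) {
      # w: first derivatives of g; W: 2x2 matrix of second derivatives of g
      # tau: 2x2 inverse observed information; rho = c(rho1, rho2)
      S   <- sum(W * tau)  # elementwise sum of omega_ij * tau_ij
      I12 <- tau[1, 1] * (w[1] * tau[1, 1] + w[2] * tau[1, 2])
      I21 <- tau[2, 2] * (w[2] * tau[2, 2] + w[1] * tau[2, 1])
      O12 <- 3 * w[1] * tau[1, 1] * tau[1, 2] + w[2] * (tau[1, 1] * tau[2, 2] + 2 * tau[1, 2]^2)
      O21 <- 3 * w[2] * tau[2, 2] * tau[2, 1] + w[1] * (tau[2, 2] * tau[1, 1] + 2 * tau[2, 1]^2)
      S12 <- w[1] * tau[1, 1] + w[2] * tau[2, 1]
      S21 <- w[2] * tau[2, 2] + w[1] * tau[1, 2]
      g_hat + 0.5 * (S + l30 * I12 + l03 * I21 + l21 * O12 + l12 * O21) +
        rho[1] * S12 + rho[2] * S21
    }

For example, g(α, λ) = α corresponds to w = c(1, 0) and W = matrix(0, 2, 2), which reproduces the expression for the SELF estimate of α given below.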
(1)
Squared error loss function
Taking $g(\alpha, \lambda) = \alpha$ or $\lambda$, the Bayesian estimates of the two parameters $\alpha$ and $\lambda$ under SELF are, respectively,
$$\hat{\alpha}_S = \hat{\alpha} + 0.5\left[\tau_{11}^2 l_{30} + \tau_{21}\tau_{22} l_{03} + 3\tau_{11}\tau_{12} l_{21} + \left(\tau_{22}\tau_{11} + 2\tau_{21}^2\right) l_{12}\right] + \tau_{11}\rho_1 + \tau_{12}\rho_2,$$
$$\hat{\lambda}_S = \hat{\lambda} + 0.5\left[\tau_{11}\tau_{12} l_{30} + \tau_{22}^2 l_{03} + \left(\tau_{11}\tau_{22} + 2\tau_{12}^2\right) l_{21} + 3\tau_{22}\tau_{21} l_{12}\right] + \tau_{21}\rho_1 + \tau_{22}\rho_2.$$
Next, we derive the Bayes estimate of the entropy under SELF. We consider
$$g(\alpha, \lambda) = \frac{\alpha+1}{\alpha}\left[\gamma + \log(\lambda)\right] + 1 - \log(\alpha\lambda),$$
$$\omega_1 = -\frac{(\alpha+1)(\gamma+\log(\lambda))}{\alpha^2} + \frac{\gamma+\log(\lambda)}{\alpha} - \frac{1}{\alpha}, \quad \omega_2 = \frac{\alpha+1}{\alpha\lambda} - \frac{1}{\lambda},$$
$$\omega_{11} = \frac{1}{\alpha^2} + \left[\frac{2(\alpha+1)}{\alpha^3} - \frac{2}{\alpha^2}\right](\gamma+\log(\lambda)), \quad \omega_{22} = \frac{1}{\lambda^2} - \frac{\alpha+1}{\alpha\lambda^2}, \quad \omega_{12} = \omega_{21} = \frac{1}{\alpha\lambda} - \frac{\alpha+1}{\alpha^2\lambda}.$$
The Bayes estimator of the entropy can be obtained as before, and it is given by
$$\hat{H}_S(X) = \hat{H}(X) + 0.5\big[\omega_{11}\tau_{11} + 2\omega_{12}\tau_{12} + \omega_{22}\tau_{22} + l_{30}\left(\omega_1\tau_{11}^2 + \omega_2\tau_{11}\tau_{12}\right) + l_{21}\left(3\omega_1\tau_{11}\tau_{12} + \omega_2(\tau_{11}\tau_{22} + 2\tau_{12}^2)\right) + l_{12}\left(3\omega_2\tau_{22}\tau_{21} + \omega_1(\tau_{22}\tau_{11} + 2\tau_{21}^2)\right) + l_{03}\left(\omega_2\tau_{22}^2 + \omega_1\tau_{21}\tau_{22}\right)\big] + \rho_1(\omega_1\tau_{11} + \omega_2\tau_{21}) + \rho_2(\omega_2\tau_{22} + \omega_1\tau_{12}).$$
(2)
Linex loss function
For the parameter $\alpha$, taking $g(\alpha, \lambda) = e^{-p\alpha}$, we obtain
$$\omega_1 = -p e^{-p\alpha}, \quad \omega_{11} = p^2 e^{-p\alpha}, \quad \omega_{12} = \omega_{21} = \omega_{22} = \omega_2 = 0.$$
Utilizing the above expressions in Equation (28), the Bayesian estimate of $\alpha$ is
$$\hat{\alpha}_L = -\frac{1}{p}\log\Big\{e^{-p\hat{\alpha}} + 0.5\left[\omega_{11}\tau_{11} + \omega_1\tau_{11}^2 l_{30} + \omega_1\tau_{21}\tau_{22} l_{03} + 3\omega_1\tau_{11}\tau_{12} l_{21} + \left(\tau_{22}\tau_{11} + 2\tau_{21}^2\right)\omega_1 l_{12}\right] + \omega_1\tau_{11}\rho_1 + \omega_1\tau_{12}\rho_2\Big\}.$$
The Bayesian estimate of $\lambda$ is obtained likewise, with $g(\alpha, \lambda) = e^{-p\lambda}$, $\omega_2 = -p e^{-p\lambda}$, $\omega_{22} = p^2 e^{-p\lambda}$, and the remaining $\omega$ terms zero:
$$\hat{\lambda}_L = -\frac{1}{p}\log\Big\{e^{-p\hat{\lambda}} + 0.5\left[\omega_{22}\tau_{22} + \omega_2\tau_{11}\tau_{12} l_{30} + \omega_2\tau_{22}^2 l_{03} + \left(\tau_{11}\tau_{22} + 2\tau_{12}^2\right)\omega_2 l_{21} + 3\tau_{22}\tau_{21}\omega_2 l_{12}\right] + \omega_2\tau_{21}\rho_1 + \omega_2\tau_{22}\rho_2\Big\}.$$
Next, we derive the Bayesian estimate of the entropy. It is clear that, with $H_\alpha$ and $H_\lambda$ the partial derivatives of $H$ given in Section 3.1,
$$g(\alpha, \lambda) = e^{-p H(X)}, \quad \omega_1 = -p e^{-pH} H_\alpha, \quad \omega_2 = -p e^{-pH} H_\lambda,$$
$$\omega_{11} = p^2 e^{-pH} H_\alpha^2 - p e^{-pH}\left[\frac{2(\alpha+1)(\gamma+\log(\lambda))}{\alpha^3} - \frac{2(\gamma+\log(\lambda))}{\alpha^2} + \frac{1}{\alpha^2}\right],$$
$$\omega_{22} = p^2 e^{-pH} H_\lambda^2 - p e^{-pH}\left[\frac{1}{\lambda^2} - \frac{\alpha+1}{\alpha\lambda^2}\right], \quad \omega_{12} = \omega_{21} = p^2 e^{-pH} H_\alpha H_\lambda - p e^{-pH}\left[\frac{1}{\alpha\lambda} - \frac{\alpha+1}{\alpha^2\lambda}\right].$$
The requested estimate of the entropy is derived in a similar manner:
$$\hat{H}_L(X) = -\frac{1}{p}\log\Big\{e^{-p\hat{H}} + 0.5\big[\omega_{11}\tau_{11} + 2\omega_{12}\tau_{12} + \omega_{22}\tau_{22} + l_{30}\left(\omega_1\tau_{11}^2 + \omega_2\tau_{11}\tau_{12}\right) + l_{12}\left(3\omega_2\tau_{22}\tau_{21} + \omega_1(\tau_{22}\tau_{11} + 2\tau_{21}^2)\right) + l_{21}\left(3\omega_1\tau_{11}\tau_{12} + \omega_2(\tau_{11}\tau_{22} + 2\tau_{12}^2)\right) + l_{03}\left(\omega_2\tau_{22}^2 + \omega_1\tau_{21}\tau_{22}\right)\big] + \rho_1(\omega_1\tau_{11} + \omega_2\tau_{21}) + \rho_2(\omega_2\tau_{22} + \omega_1\tau_{12})\Big\}.$$
(3)
General entropy loss function
For the parameter $\alpha$,
$$g(\alpha, \lambda) = \alpha^{-q}, \quad \omega_1 = -q\alpha^{-q-1}, \quad \omega_{11} = q(q+1)\alpha^{-q-2}, \quad \omega_{12} = \omega_{21} = \omega_{22} = \omega_2 = 0.$$
Applying the above expressions in Equation (28), the Bayesian estimate of $\alpha$ is
$$\hat{\alpha}_E = \Big\{\hat{\alpha}^{-q} + 0.5\left[\omega_{11}\tau_{11} + \omega_1\tau_{11}^2 l_{30} + \omega_1\tau_{21}\tau_{22} l_{03} + 3\omega_1\tau_{11}\tau_{12} l_{21} + \left(\tau_{22}\tau_{11} + 2\tau_{21}^2\right)\omega_1 l_{12}\right] + \omega_1\tau_{11}\rho_1 + \omega_1\tau_{12}\rho_2\Big\}^{-1/q}.$$
The approximate Bayes estimator of $\lambda$ is computed likewise, with $g(\alpha, \lambda) = \lambda^{-q}$, $\omega_2 = -q\lambda^{-q-1}$, and $\omega_{22} = q(q+1)\lambda^{-q-2}$:
$$\hat{\lambda}_E = \Big\{\hat{\lambda}^{-q} + 0.5\left[\omega_{22}\tau_{22} + \omega_2\tau_{11}\tau_{12} l_{30} + \omega_2\tau_{22}^2 l_{03} + \left(\tau_{11}\tau_{22} + 2\tau_{12}^2\right)\omega_2 l_{21} + 3\tau_{22}\tau_{21}\omega_2 l_{12}\right] + \omega_2\tau_{21}\rho_1 + \omega_2\tau_{22}\rho_2\Big\}^{-1/q}.$$
Finally, we derive the Bayesian estimate of the entropy under GELF. With $H_\alpha$ and $H_\lambda$ as in Section 3.1,
$$g(\alpha, \lambda) = H(X)^{-q}, \quad \omega_1 = -q H^{-q-1} H_\alpha, \quad \omega_2 = -q H^{-q-1} H_\lambda,$$
$$\omega_{11} = q(q+1) H^{-q-2} H_\alpha^2 - q H^{-q-1}\left[\frac{2(\alpha+1)(\gamma+\log(\lambda))}{\alpha^3} - \frac{2(\gamma+\log(\lambda))}{\alpha^2} + \frac{1}{\alpha^2}\right],$$
$$\omega_{22} = q(q+1) H^{-q-2} H_\lambda^2 - q H^{-q-1}\left[\frac{1}{\lambda^2} - \frac{\alpha+1}{\alpha\lambda^2}\right], \quad \omega_{12} = \omega_{21} = q(q+1) H^{-q-2} H_\alpha H_\lambda - q H^{-q-1}\left[\frac{1}{\alpha\lambda} - \frac{\alpha+1}{\alpha^2\lambda}\right].$$
The Bayes estimator of the entropy under GELF can be calculated as before, and it is given by
$$\hat{H}_E(X) = \Big\{\hat{H}(X)^{-q} + 0.5\big[\omega_{11}\tau_{11} + 2\omega_{12}\tau_{12} + \omega_{22}\tau_{22} + l_{30}\left(\omega_1\tau_{11}^2 + \omega_2\tau_{11}\tau_{12}\right) + l_{21}\left(3\omega_1\tau_{11}\tau_{12} + \omega_2(\tau_{11}\tau_{22} + 2\tau_{12}^2)\right) + l_{12}\left(3\omega_2\tau_{22}\tau_{21} + \omega_1(\tau_{22}\tau_{11} + 2\tau_{21}^2)\right) + l_{03}\left(\omega_2\tau_{22}^2 + \omega_1\tau_{21}\tau_{22}\right)\big] + \rho_1(\omega_1\tau_{11} + \omega_2\tau_{21}) + \rho_2(\omega_2\tau_{22} + \omega_1\tau_{12})\Big\}^{-1/q}.$$

4.4. Importance Sampling Procedure

Using the Lindley approximation method, we can get the Bayesian estimates of the unknown parameters and the entropy. However, although the Lindley method provides point estimates, it cannot be used to construct the Highest Posterior Density (HPD) credible intervals. Thus, we recommend using Importance Sampling both to obtain Bayesian estimates and to derive HPD credible intervals.
To begin with, we address the concern raised earlier. Suppose we choose Gamma priors for both parameters, say $\alpha \sim G(a, b)$ and $\lambda \sim G(c, d)$. Then, the joint prior distribution is
$$\pi(\alpha, \lambda) \propto \alpha^{a-1} e^{-b\alpha} \lambda^{c-1} e^{-d\lambda}.$$
Correspondingly, the joint posterior distribution is
$$\pi(\alpha, \lambda \mid x) \propto \alpha^{m+a-1}\lambda^{m+c-1} e^{-b\alpha - d\lambda} e^{-\lambda\sum_{i=1}^m x_i^{-\alpha}} \prod_{i=1}^m x_i^{-\alpha} \prod_{i=1}^m \left(1 - e^{-\lambda x_i^{-\alpha}}\right)^{k(R_i+1)-1}$$
$$= \alpha^{(m+a)-1} e^{-\alpha\left(b + \sum_{i=1}^m \log(x_i)\right)} \, \lambda^{(m+c)-1} e^{-\lambda\left(d + \sum_{i=1}^m x_i^{-\alpha}\right)} \prod_{i=1}^m \left(1 - e^{-\lambda x_i^{-\alpha}}\right)^{k(R_i+1)-1}$$
$$\propto G_\alpha\left(m+a, \, b + \sum_{i=1}^m \log(x_i)\right) G_{\lambda \mid \alpha}\left(m+c, \, d + \sum_{i=1}^m x_i^{-\alpha}\right) Q(\alpha, \lambda),$$
where $Q(\alpha, \lambda) = \prod_{i=1}^m \left(1 - e^{-\lambda x_i^{-\alpha}}\right)^{k(R_i+1)-1} \big/ \left(d + \sum_{i=1}^m x_i^{-\alpha}\right)^{m+c}$.
We observe that $G_\alpha$ has the form of a Gamma kernel, but its second parameter $b + \sum_{i=1}^m \log(x_i)$ cannot be guaranteed to be strictly positive, so it is not in general a Gamma distribution. Consequently, its random samples cannot be generated as from a Gamma distribution, and they are also difficult to generate by other methods. Therefore, it is not appropriate to choose Gamma priors for both parameters.
Then, we return to the prior distributions selected before. To implement the Importance Sampling, the joint posterior distribution can be decomposed as
$$\pi(\alpha, \lambda \mid x) \propto \alpha^{m-1}\lambda^{m+a-1} e^{-b\lambda} e^{-\lambda\sum_{i=1}^m x_i^{-\alpha}} \prod_{i=1}^m x_i^{-\alpha} \prod_{i=1}^m \left(1 - e^{-\lambda x_i^{-\alpha}}\right)^{k(R_i+1)-1}$$
$$= \frac{\left(b + \sum_{i=1}^m x_i^{-\alpha}\right)^{m+a}}{\Gamma(m+a)} \lambda^{m+a-1} e^{-\lambda\left(b + \sum_{i=1}^m x_i^{-\alpha}\right)} \times \frac{\alpha^{m-1}\prod_{i=1}^m x_i^{-\alpha}}{\left(b + \sum_{i=1}^m x_i^{-\alpha}\right)^{m+a}} \times \prod_{i=1}^m \left(1 - e^{-\lambda x_i^{-\alpha}}\right)^{k(R_i+1)-1}$$
$$\propto f_1(\lambda \mid \alpha)\, f_2(\alpha)\, Q(\alpha, \lambda),$$
where $f_1(\lambda \mid \alpha)$ is the Gamma density $G\left(m+a, \, b + \sum_{i=1}^m x_i^{-\alpha}\right)$ and
$$f_2(\alpha) = K \frac{\alpha^{m-1}\prod_{i=1}^m x_i^{-\alpha}}{\left(b + \sum_{i=1}^m x_i^{-\alpha}\right)^{m+a}}.$$
Here, K is a normalizing constant and
$$Q(\alpha, \lambda) = \prod_{i=1}^m \left(1 - e^{-\lambda x_i^{-\alpha}}\right)^{k(R_i+1)-1}.$$
Note that, in order to obtain the Bayesian estimates of the parameters via Importance Sampling, we need to generate samples from $f_1(\lambda \mid \alpha)$ and $f_2(\alpha)$. Generating samples from $f_1(\lambda \mid \alpha)$ is straightforward because it is a standard Gamma distribution. As for generating samples from $f_2(\alpha)$, we have the following lemma.
Lemma 1.
$f_2(\alpha)$ is log-concave.
Proof. 
$$\log(f_2(\alpha)) \propto (m-1)\log(\alpha) - \alpha\sum_{i=1}^m \log(x_i) - (m+a)\log\left(b + \sum_{i=1}^m x_i^{-\alpha}\right),$$
$$\frac{\partial^2 \log(f_2(\alpha))}{\partial \alpha^2} = -\frac{m-1}{\alpha^2} - (m+a)\frac{\left(\sum_{i=1}^m x_i^{-\alpha}\log^2(x_i)\right)\left(b + \sum_{i=1}^m x_i^{-\alpha}\right) - \left(\sum_{i=1}^m x_i^{-\alpha}\log(x_i)\right)^2}{\left(b + \sum_{i=1}^m x_i^{-\alpha}\right)^2}.$$
Since $m \ge 1$ and, by the Cauchy–Schwarz inequality, the numerator of the second term is non-negative, the second-order derivative of $\log(f_2(\alpha))$ is always negative. Thereby, $f_2(\alpha)$ is log-concave. □
Then, using the rejection algorithm for log-concave densities originally proposed by [16], we can easily generate samples from $f_2(\alpha)$.
The required posterior samples are produced by the following steps:
  1. Generate $\alpha$ from $f_2(\alpha)$.
  2. Generate $\lambda$ from $f_1(\lambda \mid \alpha) = G\left(m+a, \, b + \sum_{i=1}^m x_i^{-\alpha}\right)$.
  3. Repeat Steps 1 and 2 M times to obtain $(\alpha_1, \lambda_1), (\alpha_2, \lambda_2), \ldots, (\alpha_M, \lambda_M)$.
Then, the required Bayesian estimate of any function $\mho(\alpha, \lambda)$ can be approximated by
$$\hat{\mho} = \frac{\sum_{i=1}^M \mho(\alpha_i, \lambda_i)\, Q(\alpha_i, \lambda_i)}{\sum_{i=1}^M Q(\alpha_i, \lambda_i)}.$$
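A compact R sketch of Steps 1–3 and of this weighted estimate (for Step 1 we substitute a crude grid-based draw from f₂(α) for the rejection sampler of [16], purely for illustration; the sample x, scheme R, group size k, and hyperparameters a, b are assumed given):

    M <- 5000
    log_f2 <- function(alpha) {                # log f2(alpha) up to a constant
      m <- length(x)
      (m - 1) * log(alpha) - alpha * sum(log(x)) -
        (m + a) * log(b + sum(x^(-alpha)))
    }
    grid <- seq(0.01, 10, by = 0.01)           # crude stand-in for Devroye's sampler
    lf <- sapply(grid, log_f2)
    alpha_s <- sample(grid, M, replace = TRUE, prob = exp(lf - max(lf)))
    lambda_s <- rgamma(M, shape = length(x) + a,
                       rate = b + sapply(alpha_s, function(al) sum(x^(-al))))
    Q <- sapply(seq_len(M), function(i)
      prod((1 - exp(-lambda_s[i] * x^(-alpha_s[i])))^(k * (R + 1) - 1)))
    w <- Q / sum(Q)                            # importance weights
    # Weighted Bayes estimate (under SELF) of the entropy
    H_s <- (alpha_s + 1) / alpha_s * (-digamma(1) + log(lambda_s)) +
      1 - log(alpha_s * lambda_s)
    H_bayes <- sum(w * H_s)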
Furthermore, the samples produced above can also be used to establish the HPD intervals for the parameters and the entropy. Suppose that $0 < p < 1$ and that $\mho_p$ satisfies $P(\mho(\alpha, \lambda) \le \mho_p) = p$. For a given p, we propose an approach to estimate $\mho_p$ and then to establish the HPD intervals for $\mho(\alpha, \lambda)$.
Firstly, let
$$\vartheta_i = \frac{Q(\alpha_i, \lambda_i)}{\sum_{j=1}^M Q(\alpha_j, \lambda_j)}, \quad i = 1, \ldots, M.$$
For simplicity, we write $\mho_i$ for $\mho(\alpha_i, \lambda_i)$. Then, sort $\{(\mho_1, \vartheta_1), \ldots, (\mho_M, \vartheta_M)\}$ into $\{(\mho_{(1)}, \vartheta_{(1)}), \ldots, (\mho_{(M)}, \vartheta_{(M)})\}$, where $\mho_{(1)} < \cdots < \mho_{(M)}$ and $\vartheta_{(i)}$ is the weight associated with $\mho_{(i)}$ for $i = 1, \ldots, M$. Then, the Bayesian estimate of $\mho_p$ is $\hat{\mho}_p = \mho_{(M_p)}$, where $M_p$ is the integer satisfying
$$\sum_{i=1}^{M_p} \vartheta_{(i)} \le p < \sum_{i=1}^{M_p+1} \vartheta_{(i)}.$$
Therefore, a $100(1-\xi)\%$ credible interval of $\mho(\alpha, \lambda)$ can be obtained as $(\hat{\mho}_\delta, \hat{\mho}_{\delta+1-\xi})$ for $\delta = \vartheta_{(1)}, \vartheta_{(1)}+\vartheta_{(2)}, \ldots, \sum_{i=1}^{M_{1-\xi}}\vartheta_{(i)}$. Finally, the $100(1-\xi)\%$ HPD interval of $\mho(\alpha, \lambda)$ is $(\hat{\mho}_{\delta^*}, \hat{\mho}_{\delta^*+1-\xi})$, where $\delta^*$ satisfies
$$\hat{\mho}_{\delta^*+1-\xi} - \hat{\mho}_{\delta^*} \le \hat{\mho}_{\delta+1-\xi} - \hat{\mho}_{\delta}$$
for all $\delta$.
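The search over δ translates directly into R (continuing the previous sketch with the weighted draws H_s and weights w; the function name hpd_is is ours):

    hpd_is <- function(vals, w, xi = 0.05) {
      o <- order(vals)
      v <- vals[o]; cw <- cumsum(w[o])         # sorted values, cumulative weights
      cand <- unique(c(1, which(cw <= xi)))    # candidate left endpoints
      best <- c(-Inf, Inf)
      for (i in cand) {
        j <- which(cw >= cw[i] + 1 - xi)[1]    # matching right endpoint
        if (!is.na(j) && v[j] - v[i] < best[2] - best[1]) best <- c(v[i], v[j])
      }
      best                                     # shortest interval holding mass 1 - xi
    }
    hpd_is(H_s, w)  # 95% HPD interval for the entropy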
The next section will use Monte Carlo simulation to numerically and systematically compare previously proposed estimators.

5. Simulation Results

We use the Monte Carlo simulation method to analyze the behavior of the different estimators obtained in the sections above, based on the expected value (EV) and mean squared error (MSE). The progressive first-failure censored samples are generated under various censoring schemes $(k, n, m, R_1, \ldots, R_m)$ and parameter values of the IWD, using the algorithm originally proposed by [17].
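A sketch of that generation step for the IWD (our own implementation of the idea, assuming a scheme with n = m + ΣR_i): the first failure in a group of k items has distribution function 1 − (1 − F(x))^k, so a progressive first-failure censored sample is obtained by transforming a progressively Type-II censored uniform sample.

    rpffc_iw <- function(n, m, R, k, alpha, lambda) {
      stopifnot(n == m + sum(R))                # scheme consistency
      # Balakrishnan-Sandhu (1995): progressively Type-II censored U(0,1) sample
      W <- runif(m)
      V <- W^(1 / (seq_len(m) + cumsum(rev(R))))
      U <- 1 - cumprod(rev(V))
      # first failure in a group of k has cdf G(x) = 1 - (1 - F(x))^k
      Fu <- 1 - (1 - U)^(1 / k)
      (lambda / (-log(Fu)))^(1 / alpha)         # invert F(x) = exp(-lambda x^(-alpha))
    }
    # Hypothetical example: 25 groups of size 2, m = 15, scheme (10, 0*14)
    x <- rpffc_iw(n = 25, m = 15, R = c(10, rep(0, 14)), k = 2, alpha = 2, lambda = 1)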
Throughout, we set $\alpha = 2$ and $\lambda = 1$, so the corresponding entropy is 1.172676. We use the 'optim' command in R (version 3.6.1) to obtain the approximate MLEs of $\alpha$, $\lambda$, and the entropy presented in Table 1. The Bayesian estimates under both asymmetric and symmetric loss functions are computed by the Lindley method and by Importance Sampling. For the Bayes estimation, we set the hyperparameters to a = 1, b = 1 for Tables 2–7 and to a = 0, b = 0 for Tables 8 and 9. Under the LLF, we let p = 0.5 and p = 1. Under the GELF, we choose q = 0.1 and q = 1. We derive 95% asymptotic intervals of the parameters using the MLEs and the log-transformed MLEs, as well as 95% HPD intervals. Note that, for brevity, the censoring schemes are abbreviated: for example, (0*5) represents (0, 0, 0, 0, 0) and ((1, 0)*2) represents (1, 0, 1, 0). Tables 2–6 and Table 8 present the Bayes estimates of $\alpha$, $\lambda$, and the entropy obtained by the Lindley method. The Bayes estimates based on Importance Sampling are shown in Tables 7 and 9. In Table 10, the interval estimation of the entropy is presented.
Overall, the MSEs of the parameter and entropy estimates decrease significantly, and the EVs approach the true values, as the sample size n increases. In Tables 1–9, holding m and n fixed, the estimates of the parameters and entropy improve (in both EV and MSE) as the group size k increases; likewise, holding k and n fixed, they improve as m increases. Bayesian estimates with a = 1, b = 1 are more precise than those with the non-informative choice a = 0, b = 0. The MLE and the Bayes estimation based on the Lindley method perform better than the Importance Sampling procedure, and the Bayes estimation using the Lindley method is slightly more precise than the MLE. For the LLF, choosing p = 1 seems to be better than p = 0.5. For the GELF, q = 0.1 competes as well as q = 1. In Tables 7 and 9, we observe that a few censoring schemes, such as (0*24, 25) and (0*34, 35), do not compete well.
In Table 10, the average length (AL) narrows as the sample size n increases. Moreover, the HPD intervals are more precise than the confidence intervals in terms of AL. Among the confidence intervals, those based on log-transformed MLEs perform much better than those based on MLEs. In almost all circumstances, the coverage probabilities (CP) of the entropy intervals attain their nominal confidence levels.

6. Real Data Analysis

We now analyze a real data set to illustrate the approaches put forward in the sections above. The data set was analyzed by [7,18]. The data show the survival times (in days) of 72 guinea pigs injected with different doses of tubercle bacilli; the regimen number is the common logarithm of the number of bacillary units in 0.5 mL of the challenge solution. The 72 observations are listed below: 12, 15, 22, 24, 24, 32, 32, 33, 34, 38, 38, 43, 44, 48, 52, 53, 54, 54, 55, 56, 57, 58, 58, 59, 60, 60, 60, 60, 61, 62, 63, 65, 65, 67, 68, 70, 70, 72, 73, 75, 76, 76, 81, 83, 84, 85, 87, 91, 95, 96, 98, 99, 109, 110, 121, 127, 129, 131, 143, 146, 146, 175, 175, 211, 233, 258, 258, 263, 297, 341, 341, 376 (unit: days).
Before analyzing the data, we test whether the IWD fits the complete data well. To begin with, from [7], the failure rate function of these data is unimodal, so it is reasonable to analyze the data using the IWD. We then assess the goodness of fit of the IWD using the MLEs: we compute $-\ln L$ and the Kolmogorov–Smirnov (K–S) statistic with its associated p-value, reported in Table 11. Judging by the p-value, the IWD fits the complete data well.
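For instance, the K–S result can be reproduced in R by passing the fitted IWD distribution function to ks.test (a sketch; pig is a hypothetical vector holding the 72 observations, and the ties in the data trigger a harmless warning):

    alpha_hat <- 1.415; lambda_hat <- 283.837             # MLEs from Table 11
    piw <- function(q) exp(-lambda_hat * q^(-alpha_hat))  # fitted IWD cdf
    ks.test(pig, piw)  # statistic about 0.152, p-value about 0.073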
Now, we consider the censored data to illustrate the previous approaches. To generate the first-failure censored sample, we randomly sort the given data into n = 36 groups with k = 2 identical units in each group, obtaining the first-failure censored sample: 12, 15, 22, 24, 32, 32, 33, 34, 38, 38, 43, 44, 48, 52, 54, 55, 56, 58, 58, 60, 60, 61, 63, 65, 65, 68, 70, 70, 73, 76, 84, 91, 109, 110, 129, 143. Then, we generate samples under three different progressive first-failure censoring schemes, namely (18, 0*17), (1*18), and (0*17, 18), from the above sample with m = 18. The results are shown in Table 12.
In Table 13, for MLE, we calculate the EVs, MSEs, and confidence intervals of the parameters and entropy; for Bayes estimation, we obtain the EVs, MSEs, and HPD intervals of entropy and two parameters. The estimates of α , λ , and entropy using the MLE and the Importance Sampling method are relatively close.

7. Conclusions

In this article, the problem of statistical inference on the parameters and entropy of the IWD under progressive first-failure censoring has been considered. Both maximum likelihood estimation and Bayesian estimation are investigated. For the Bayesian estimation, we apply the Lindley and Importance Sampling methods to approximate the Bayes estimates under both asymmetric and symmetric loss functions. We construct approximate confidence intervals based on the MLEs and the log-transformed MLEs. In addition, we use the Importance Sampling method to derive the HPD intervals. We then compare the performance of the estimates through their EVs and MSEs. Although we have treated the estimation of entropy under the progressive first-failure censoring scheme as thoroughly as possible, the same methodology can be extended to other, more efficient and more complex censoring schemes. This direction remains promising and deserves further work.

Author Contributions

Methodology and Writing, J.Y.; Supervision, W.G.; Writing, Y.S.

Funding

This research was supported by Project 201910004093 of the National Training Program of Innovation and Entrepreneurship for Undergraduates.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wu, S.J.; Kuş, C. On estimation based on progressive first-failure-censored sampling. Comput. Stat. Data Anal. 2009, 53, 3659–3670.
  2. Soliman, A.A.; Abou-Elheggag, N.A.; Ellah, A.H.A.; Modhesh, A.A. Bayesian and non-Bayesian inferences of the Burr-XII distribution for progressive first-failure censored data. Metron 2014, 70, 1–25.
  3. Dube, M.; Krishna, H.; Garg, R. Generalized inverted exponential distribution under progressive first-failure censoring. J. Stat. Comput. Simul. 2015, 86, 1–20.
  4. Singh, S.; Tripathi, Y.M. Reliability sampling plans for a lognormal distribution under progressive first-failure censoring with cost constraint. Stat. Pap. 2015, 56, 773–817.
  5. Bakoban, R.A.; Abd-Elmougod, G.A. MCMC in analysis of progressively first failure censored competing risks data for Gompertz model. J. Comput. Theor. Nanosci. 2016, 13, 6662–6670.
  6. Ahmadi, M.V.; Doostparast, M. Pareto analysis for the lifetime performance index of products on the basis of progressively first-failure-censored batches under balanced symmetric and asymmetric loss functions. J. Appl. Stat. 2018, 46, 1–32.
  7. Kundu, D.; Howlader, H. Bayesian inference and prediction of the inverse Weibull distribution for Type-II censored data. Comput. Stat. Data Anal. 2010, 54, 1547–1558.
  8. Sultan, K.S.; Alsadat, N.H.; Kundu, D. Bayesian and maximum likelihood estimations of the inverse Weibull parameters under progressive type-II censoring. J. Stat. Comput. Simul. 2014, 84, 2248–2265.
  9. Musleh, R.M.; Helu, A. Estimation of the inverse Weibull distribution based on progressively censored data: Comparative study. Reliab. Eng. Syst. Saf. 2014, 131, 216–227.
  10. Singh, S.; Tripathi, Y.M. Estimating the parameters of an inverse Weibull distribution under progressive type-I interval censoring. Stat. Pap. 2018, 59, 21–56.
  11. Mazucheli, J.; Menezes, A.F.B.; Dey, S. Bias-corrected maximum likelihood estimators of the parameters of the inverse Weibull distribution. Commun. Stat. Simul. Comput. 2018, 48, 2046–2055.
  12. AboEleneen, Z.A. The Entropy of Progressively Censored Samples. Entropy 2011, 13, 437–449.
  13. Kayal, S.; Kumar, S. Estimation of the Shannon's entropy of several shifted exponential populations. Stat. Probab. Lett. 2013, 83, 1127–1135.
  14. Cho, Y.; Sun, H.; Lee, K. An Estimation of the Entropy for a Rayleigh Distribution Based on Doubly-Generalized Type-II Hybrid Censored Samples. Entropy 2014, 16, 3655–3669.
  15. Meeker, W.Q.; Escobar, L.A. 8—Maximum Likelihood Methods for Fitting Parametric Statistical Models. Methods Exp. Phys. 1994, 28, 211–244.
  16. Devroye, L. A simple algorithm for generating random variates with a log-concave density. Computing 1984, 33, 247–257.
  17. Balakrishnan, N.; Sandhu, R.A. A Simple Simulational Algorithm for Generating Progressive Type-II Censored Samples. Am. Stat. 1995, 49, 229–230.
  18. Singh, S.K.; Singh, U.; Kumar, D. Bayesian estimation of parameters of inverse Weibull distribution. J. Appl. Stat. 2013, 40, 1597–1607.
Figure 1. The failure rate function.
Table 1. Maximum likelihood estimates of α, λ, and entropy when α = 2, λ = 1, H = 1.172676.

k | n | m | Scheme | $\hat{\alpha}$ EV | $\hat{\alpha}$ MSE | $\hat{\lambda}$ EV | $\hat{\lambda}$ MSE | $\hat{H}$ EV | $\hat{H}$ MSE
1 | 50 | 25 | (25, 0*24) | 2.0608 | 0.1032 | 1.0144 | 0.0452 | 1.1558 | 0.0670
1 | 50 | 25 | ((5, 0*4)*5) | 2.1496 | 0.1788 | 0.9940 | 0.0309 | 1.1029 | 0.0840
1 | 50 | 25 | (0*24, 25) | 2.0880 | 0.0932 | 1.0017 | 0.0342 | 1.1302 | 0.0535
1 | 50 | 40 | (10, 0*39) | 2.0845 | 0.0734 | 0.9989 | 0.0271 | 1.1271 | 0.0406
1 | 50 | 40 | ((2, 0*7)*5) | 2.0743 | 0.0813 | 0.9888 | 0.0227 | 1.1327 | 0.0467
1 | 50 | 40 | (0*39, 10) | 2.0470 | 0.0534 | 1.0097 | 0.0189 | 1.1550 | 0.0309
1 | 70 | 35 | (35, 0*34) | 2.1255 | 0.0880 | 1.0047 | 0.0383 | 1.1063 | 0.0537
1 | 70 | 35 | ((5, 0*4)*7) | 2.0550 | 0.0552 | 1.0227 | 0.0210 | 1.1565 | 0.0326
1 | 70 | 35 | (0*34, 35) | 2.0575 | 0.0758 | 0.9953 | 0.0193 | 1.1458 | 0.0423
1 | 70 | 60 | (10, 0*59) | 2.0653 | 0.0517 | 0.9864 | 0.0232 | 1.1307 | 0.0330
1 | 70 | 60 | ((2, 0*11)*5) | 2.0330 | 0.0428 | 1.0140 | 0.0201 | 1.1644 | 0.0281
1 | 70 | 60 | (0*59, 10) | 2.0573 | 0.0430 | 1.0016 | 0.0179 | 1.1418 | 0.0245
2 | 50 | 25 | (25, 0*24) | 2.1001 | 0.0966 | 0.9731 | 0.0251 | 1.1107 | 0.0535
2 | 50 | 25 | ((5, 0*4)*5) | 2.1413 | 0.1461 | 0.9711 | 0.0244 | 1.0936 | 0.0740
2 | 50 | 25 | (0*24, 25) | 2.1629 | 0.2070 | 0.9762 | 0.0291 | 1.0930 | 0.0972
2 | 50 | 40 | (10, 0*39) | 2.0889 | 0.0698 | 0.9832 | 0.0206 | 1.1182 | 0.0393
2 | 50 | 40 | ((2, 0*7)*5) | 2.0724 | 0.0711 | 0.9806 | 0.0201 | 1.1295 | 0.0453
2 | 50 | 40 | (0*39, 10) | 2.1180 | 0.0805 | 0.9901 | 0.0150 | 1.1059 | 0.0405
2 | 70 | 35 | (35, 0*34) | 2.0745 | 0.0601 | 0.9968 | 0.0180 | 1.1331 | 0.0359
2 | 70 | 35 | ((5, 0*4)*7) | 2.0583 | 0.0724 | 0.9984 | 0.0214 | 1.1493 | 0.0508
2 | 70 | 35 | (0*34, 35) | 2.0766 | 0.0677 | 0.9763 | 0.0211 | 1.1251 | 0.0474
2 | 70 | 60 | (10, 0*59) | 2.0632 | 0.0433 | 0.9939 | 0.0157 | 1.1364 | 0.0290
2 | 70 | 60 | ((2, 0*11)*5) | 2.0334 | 0.0437 | 0.9965 | 0.0109 | 1.1578 | 0.0274
2 | 70 | 60 | (0*59, 10) | 2.0729 | 0.0486 | 1.0039 | 0.0108 | 1.1358 | 0.0253
Table 2. Bayes estimates under squared error loss function of α, λ, and entropy based on the Lindley method when α = 2, λ = 1, H = 1.172676.

k | n | m | Scheme | $\hat{\alpha}_S$ EV | $\hat{\alpha}_S$ MSE | $\hat{\lambda}_S$ EV | $\hat{\lambda}_S$ MSE | $\hat{H}_S$ EV | $\hat{H}_S$ MSE
1 | 50 | 25 | (25, 0*24) | 2.2344 | 0.2017 | 0.9661 | 0.0588 | 1.0599 | 0.0921
1 | 50 | 25 | ((5, 0*4)*5) | 2.1080 | 0.1418 | 0.9837 | 0.0426 | 1.1472 | 0.0803
1 | 50 | 25 | (0*24, 25) | 2.0959 | 0.1121 | 1.0408 | 0.0314 | 1.1760 | 0.0580
1 | 50 | 40 | (10, 0*39) | 2.0758 | 0.0733 | 1.0173 | 0.0306 | 1.1606 | 0.0408
1 | 50 | 40 | ((2, 0*7)*5) | 2.1015 | 0.1029 | 0.9813 | 0.0245 | 1.1317 | 0.0463
1 | 50 | 40 | (0*39, 10) | 2.0258 | 0.0545 | 1.0285 | 0.0202 | 1.1950 | 0.0309
1 | 70 | 35 | (35, 0*34) | 2.1470 | 0.1065 | 0.9908 | 0.0343 | 1.1081 | 0.0520
1 | 70 | 35 | ((5, 0*4)*7) | 2.0725 | 0.0526 | 0.9906 | 0.0245 | 1.1497 | 0.0341
1 | 70 | 35 | (0*34, 35) | 2.0965 | 0.0957 | 1.0092 | 0.0197 | 1.1497 | 0.0434
1 | 70 | 60 | (10, 0*59) | 2.0641 | 0.0629 | 1.0127 | 0.0145 | 1.1597 | 0.0296
1 | 70 | 60 | ((2, 0*11)*5) | 2.0570 | 0.0494 | 1.0030 | 0.0213 | 1.1562 | 0.0282
1 | 70 | 60 | (0*59, 10) | 2.0804 | 0.0583 | 1.0096 | 0.0193 | 1.1446 | 0.0271
2 | 50 | 25 | (25, 0*24) | 2.2127 | 0.1435 | 0.9194 | 0.0337 | 1.0417 | 0.0718
2 | 50 | 25 | ((5, 0*4)*5) | 2.2911 | 0.2474 | 0.8959 | 0.0544 | 0.9955 | 0.1181
2 | 50 | 25 | (0*24, 25) | 2.1768 | 0.2299 | 0.9492 | 0.0426 | 1.0994 | 0.1140
2 | 50 | 40 | (10, 0*39) | 2.0954 | 0.0742 | 0.9941 | 0.0238 | 1.1370 | 0.0416
2 | 50 | 40 | ((2, 0*7)*5) | 2.0976 | 0.0861 | 0.9950 | 0.0208 | 1.1396 | 0.0455
2 | 50 | 40 | (0*39, 10) | 2.0270 | 0.0710 | 0.9985 | 0.0158 | 1.1846 | 0.0403
2 | 70 | 35 | (35, 0*34) | 2.1225 | 0.0827 | 0.9672 | 0.0296 | 1.1084 | 0.0520
2 | 70 | 35 | ((5, 0*4)*7) | 2.2037 | 0.1643 | 0.9314 | 0.0271 | 1.0547 | 0.0783
2 | 70 | 35 | (0*34, 35) | 2.2001 | 0.1821 | 0.9282 | 0.0288 | 1.0602 | 0.0865
2 | 70 | 60 | (10, 0*59) | 2.0760 | 0.0456 | 0.9823 | 0.0111 | 1.1343 | 0.0247
2 | 70 | 60 | ((2, 0*11)*5) | 2.0626 | 0.0355 | 0.9911 | 0.0115 | 1.1456 | 0.0216
2 | 70 | 60 | (0*59, 10) | 2.0435 | 0.0336 | 0.9932 | 0.0124 | 1.1584 | 0.0219
Table 3. Bayes estimates under Linex loss function of α, λ, and entropy based on the Lindley method when α = 2, λ = 1, H = 1.172676, p = 0.5.

k | n | m | Scheme | $\hat{\alpha}_L$ EV | $\hat{\alpha}_L$ MSE | $\hat{\lambda}_L$ EV | $\hat{\lambda}_L$ MSE | $\hat{H}_L$ EV | $\hat{H}_L$ MSE
1 | 50 | 25 | (25, 0*24) | 2.2130 | 0.1582 | 0.9521 | 0.0410 | 1.01956 | 0.0779
1 | 50 | 25 | ((5, 0*4)*5) | 2.0874 | 0.1221 | 1.0328 | 0.0403 | 1.1179 | 0.0812
1 | 50 | 25 | (0*24, 25) | 2.0444 | 0.0899 | 1.0462 | 0.0178 | 1.1481 | 0.0492
1 | 50 | 40 | (10, 0*39) | 2.0680 | 0.0751 | 1.0147 | 0.0218 | 1.1304 | 0.0401
1 | 50 | 40 | ((2, 0*7)*5) | 2.0646 | 0.0749 | 0.9963 | 0.0287 | 1.1206 | 0.0508
1 | 50 | 40 | (0*39, 10) | 2.0381 | 0.0577 | 1.0329 | 0.0239 | 1.1528 | 0.0326
1 | 70 | 35 | (35, 0*34) | 2.1109 | 0.0786 | 0.9403 | 0.0384 | 1.0715 | 0.0534
1 | 70 | 35 | ((5, 0*4)*7) | 2.0306 | 0.0765 | 1.0153 | 0.0280 | 1.1483 | 0.0530
1 | 70 | 35 | (0*34, 35) | 1.9981 | 0.0595 | 1.0039 | 0.0179 | 1.1604 | 0.0438
1 | 70 | 60 | (10, 0*59) | 2.0524 | 0.0427 | 1.0046 | 0.0216 | 1.1334 | 0.0251
1 | 70 | 60 | ((2, 0*11)*5) | 2.0499 | 0.0486 | 1.0067 | 0.0167 | 1.1384 | 0.0307
1 | 70 | 60 | (0*59, 10) | 2.0310 | 0.0338 | 1.0409 | 0.0161 | 1.1648 | 0.0194
2 | 50 | 25 | (25, 0*24) | 2.2816 | 0.2478 | 0.9043 | 0.0398 | 0.9843 | 0.1064
2 | 50 | 25 | ((5, 0*4)*5) | 2.1778 | 0.1469 | 0.9258 | 0.0383 | 1.0027 | 0.1004
2 | 50 | 25 | (0*24, 25) | 2.2197 | 0.2339 | 0.9383 | 0.0437 | 0.9691 | 0.1609
2 | 50 | 40 | (10, 0*39) | 2.0708 | 0.0677 | 0.9765 | 0.0183 | 1.1097 | 0.0423
2 | 50 | 40 | ((2, 0*7)*5) | 2.0240 | 0.0883 | 0.9993 | 0.0232 | 1.1492 | 0.0593
2 | 50 | 40 | (0*39, 10) | 2.0107 | 0.0686 | 1.0028 | 0.0151 | 1.1563 | 0.0460
2 | 70 | 35 | (35, 0*34) | 2.1340 | 0.0954 | 0.9516 | 0.0280 | 1.0795 | 0.0574
2 | 70 | 35 | ((5, 0*4)*7) | 2.1215 | 0.0939 | 0.9524 | 0.0264 | 1.0562 | 0.0713
2 | 70 | 35 | (0*34, 35) | 2.1197 | 0.1146 | 0.9626 | 0.0204 | 1.0481 | 0.0754
2 | 70 | 60 | (10, 0*59) | 2.0395 | 0.0466 | 1.0138 | 0.0136 | 1.1510 | 0.0308
2 | 70 | 60 | ((2, 0*11)*5) | 2.0500 | 0.0510 | 0.9821 | 0.0150 | 1.1255 | 0.0376
2 | 70 | 60 | (0*59, 10) | 2.0368 | 0.0402 | 1.0013 | 0.0126 | 1.1407 | 0.0261
Table 4. Bayes estimates under Linex loss function of α, λ, and entropy based on the Lindley method when α = 2, λ = 1, H = 1.172676, p = 1.

k | n | m | Scheme | $\hat{\alpha}_L$ EV | $\hat{\alpha}_L$ MSE | $\hat{\lambda}_L$ EV | $\hat{\lambda}_L$ MSE | $\hat{H}_L$ EV | $\hat{H}_L$ MSE
1 | 50 | 25 | (25, 0*24) | 2.1496 | 0.1350 | 0.9629 | 0.0401 | 1.0428 | 0.0820
1 | 50 | 25 | ((5, 0*4)*5) | 2.0804 | 0.1463 | 0.9815 | 0.0299 | 1.0748 | 0.0918
1 | 50 | 25 | (0*24, 25) | 2.0503 | 0.1180 | 0.9795 | 0.0242 | 1.0841 | 0.0776
1 | 50 | 40 | (10, 0*39) | 2.0666 | 0.0616 | 0.9813 | 0.0219 | 1.0951 | 0.0386
1 | 50 | 40 | ((2, 0*7)*5) | 1.9808 | 0.0456 | 1.0134 | 0.0264 | 1.1622 | 0.0361
1 | 50 | 40 | (0*39, 10) | 1.9998 | 0.0621 | 0.9873 | 0.0212 | 1.1399 | 0.0387
1 | 70 | 35 | (35, 0*34) | 2.0950 | 0.0773 | 0.9541 | 0.0268 | 1.0763 | 0.0522
1 | 70 | 35 | ((5, 0*4)*7) | 2.0371 | 0.0789 | 1.0013 | 0.0279 | 1.1188 | 0.0582
1 | 70 | 35 | (0*34, 35) | 2.0219 | 0.0803 | 0.9998 | 0.0227 | 1.1258 | 0.0527
1 | 70 | 60 | (10, 0*59) | 2.0313 | 0.0506 | 1.0027 | 0.0198 | 1.1376 | 0.0315
1 | 70 | 60 | ((2, 0*11)*5) | 2.0001 | 0.0318 | 1.0277 | 0.0148 | 1.1677 | 0.0204
1 | 70 | 60 | (0*59, 10) | 2.0261 | 0.0351 | 0.9998 | 0.0165 | 1.1373 | 0.0246
2 | 50 | 25 | (25, 0*24) | 2.2680 | 0.2586 | 0.9074 | 0.0519 | 0.9806 | 0.1179
2 | 50 | 25 | ((5, 0*4)*5) | 2.2524 | 0.2212 | 0.8923 | 0.0428 | 0.9330 | 0.1395
2 | 50 | 25 | (0*24, 25) | 2.0967 | 0.1157 | 0.9810 | 0.0261 | 1.0361 | 0.0977
2 | 50 | 40 | (10, 0*39) | 2.0814 | 0.0862 | 0.9833 | 0.0188 | 1.0953 | 0.0521
2 | 50 | 40 | ((2, 0*7)*5) | 2.0593 | 0.0611 | 0.9871 | 0.0182 | 1.0984 | 0.0439
2 | 50 | 40 | (0*39, 10) | 2.0500 | 0.0721 | 0.9917 | 0.0180 | 1.1071 | 0.0463
2 | 70 | 35 | (35, 0*34) | 2.1056 | 0.0793 | 0.9481 | 0.0263 | 1.0777 | 0.0511
2 | 70 | 35 | ((5, 0*4)*7) | 2.1197 | 0.0956 | 0.9439 | 0.0253 | 1.0379 | 0.0719
2 | 70 | 35 | (0*34, 35) | 2.1511 | 0.1295 | 0.9289 | 0.0283 | 0.9942 | 0.1018
2 | 70 | 60 | (10, 0*59) | 2.0718 | 0.0585 | 0.9880 | 0.0112 | 1.1082 | 0.0341
2 | 70 | 60 | ((2, 0*11)*5) | 2.0307 | 0.0325 | 0.9969 | 0.0157 | 1.1305 | 0.0275
2 | 70 | 60 | (0*59, 10) | 2.0047 | 0.0377 | 0.9980 | 0.0086 | 1.1502 | 0.0244
Table 5. Bayes estimates under general entropy loss function of α, λ, and entropy based on the Lindley method when α = 2, λ = 1, H = 1.172676, q = 0.1.

k | n | m | Scheme | $\hat{\alpha}_E$ EV | $\hat{\alpha}_E$ MSE | $\hat{\lambda}_E$ EV | $\hat{\lambda}_E$ MSE | $\hat{H}_E$ EV | $\hat{H}_E$ MSE
1 | 50 | 25 | (25, 0*24) | 2.1851 | 0.1656 | 0.9729 | 0.0465 | 1.0717 | 0.0836
1 | 50 | 25 | ((5, 0*4)*5) | 2.1052 | 0.1308 | 1.0073 | 0.0338 | 1.1198 | 0.0752
1 | 50 | 25 | (0*24, 25) | 2.0534 | 0.1109 | 1.0234 | 0.0261 | 1.1560 | 0.0658
1 | 50 | 40 | (10, 0*39) | 2.0594 | 0.0689 | 1.0143 | 0.0261 | 1.1506 | 0.0413
1 | 50 | 40 | ((2, 0*7)*5) | 2.0614 | 0.0712 | 1.0119 | 0.0245 | 1.1462 | 0.0415
1 | 50 | 40 | (0*39, 10) | 2.0633 | 0.0681 | 1.0261 | 0.0240 | 1.1509 | 0.0370
1 | 70 | 35 | (35, 0*34) | 2.1183 | 0.0919 | 0.9937 | 0.0335 | 1.1141 | 0.0537
1 | 70 | 35 | ((5, 0*4)*7) | 2.1049 | 0.0935 | 0.9901 | 0.0237 | 1.1083 | 0.0551
1 | 70 | 35 | (0*34, 35) | 2.0446 | 0.0722 | 1.0231 | 0.0193 | 1.1599 | 0.0425
1 | 70 | 60 | (10, 0*59) | 2.0523 | 0.0465 | 1.0110 | 0.0181 | 1.1503 | 0.0277
1 | 70 | 60 | ((2, 0*11)*5) | 2.0430 | 0.0464 | 1.0095 | 0.0178 | 1.1542 | 0.0271
1 | 70 | 60 | (0*59, 10) | 2.0618 | 0.0485 | 1.0082 | 0.0148 | 1.1421 | 0.0259
2 | 50 | 25 | (25, 0*24) | 2.2669 | 0.2137 | 0.9171 | 0.0473 | 1.0112 | 0.1030
2 | 50 | 25 | ((5, 0*4)*5) | 2.2491 | 0.2490 | 0.9215 | 0.0479 | 0.9851 | 0.1427
2 | 50 | 25 | (0*24, 25) | 2.1992 | 0.2334 | 0.9415 | 0.0398 | 1.0039 | 0.1497
2 | 50 | 40 | (10, 0*39) | 2.0818 | 0.0682 | 0.9962 | 0.0212 | 1.1280 | 0.0425
2 | 50 | 40 | ((2, 0*7)*5) | 2.0767 | 0.0692 | 0.9840 | 0.0183 | 1.1189 | 0.0447
2 | 50 | 40 | (0*39, 10) | 2.0587 | 0.0744 | 1.0054 | 0.0185 | 1.1421 | 0.0470
2 | 70 | 35 | (35, 0*34) | 2.1613 | 0.1117 | 0.9523 | 0.0303 | 1.0780 | 0.0605
2 | 70 | 35 | ((5, 0*4)*7) | 2.1648 | 0.1226 | 0.9517 | 0.0281 | 1.0462 | 0.0780
2 | 70 | 35 | (0*34, 35) | 2.1292 | 0.1189 | 0.9641 | 0.0220 | 1.0611 | 0.0780
2 | 70 | 60 | (10, 0*59) | 2.0542 | 0.0417 | 0.9938 | 0.0124 | 1.1399 | 0.0258
2 | 70 | 60 | ((2, 0*11)*5) | 2.0565 | 0.0438 | 0.9956 | 0.0132 | 1.1369 | 0.0291
2 | 70 | 60 | (0*59, 10) | 2.0348 | 0.0379 | 1.0018 | 0.0105 | 1.1532 | 0.0247
Table 6. Bayes estimates under general entropy loss function of α, λ, and entropy based on the Lindley method when α = 2, λ = 1, H = 1.172676, q = 1.

k | n | m | Scheme | $\hat{\alpha}_E$ EV | $\hat{\alpha}_E$ MSE | $\hat{\lambda}_E$ EV | $\hat{\lambda}_E$ MSE | $\hat{H}_E$ EV | $\hat{H}_E$ MSE
1 | 50 | 25 | (25, 0*24) | 2.1650 | 0.1665 | 0.9330 | 0.0504 | 1.0151 | 0.0994
1 | 50 | 25 | ((5, 0*4)*5) | 2.0679 | 0.1155 | 0.9738 | 0.0313 | 1.0612 | 0.0821
1 | 50 | 25 | (0*24, 25) | 2.0317 | 0.1087 | 0.9940 | 0.0260 | 1.0870 | 0.0766
1 | 50 | 40 | (10, 0*39) | 2.0382 | 0.0669 | 0.9939 | 0.0272 | 1.1139 | 0.0441
1 | 50 | 40 | ((2, 0*7)*5) | 2.0330 | 0.0671 | 0.9970 | 0.0249 | 1.1153 | 0.0448
1 | 50 | 40 | (0*39, 10) | 2.0324 | 0.0637 | 1.0074 | 0.0252 | 1.1214 | 0.0418
1 | 70 | 35 | (35, 0*34) | 2.0978 | 0.0928 | 0.9587 | 0.0344 | 1.0717 | 0.0625
1 | 70 | 35 | ((5, 0*4)*7) | 2.0388 | 0.0741 | 0.9898 | 0.0233 | 1.1017 | 0.0532
1 | 70 | 35 | (0*34, 35) | 2.0043 | 0.0717 | 1.0062 | 0.0193 | 1.1306 | 0.0500
1 | 70 | 60 | (10, 0*59) | 2.0321 | 0.0459 | 0.9930 | 0.0182 | 1.1275 | 0.0297
1 | 70 | 60 | ((2, 0*11)*5) | 2.0211 | 0.0442 | 0.9973 | 0.0166 | 1.1354 | 0.0297
1 | 70 | 60 | (0*59, 10) | 2.0218 | 0.0409 | 0.9885 | 0.0150 | 1.1301 | 0.0261
2 | 50 | 25 | (25, 0*24) | 2.2187 | 0.2059 | 0.9119 | 0.0460 | 1.0028 | 0.1030
2 | 50 | 25 | ((5, 0*4)*5) | 2.2251 | 0.2610 | 0.8957 | 0.0460 | 0.9578 | 0.1374
2 | 50 | 25 | (0*24, 25) | 2.1907 | 0.3101 | 0.9161 | 0.0445 | 0.9604 | 0.1458
2 | 50 | 40 | (10, 0*39) | 2.0566 | 0.0584 | 0.9736 | 0.0199 | 1.0934 | 0.0436
2 | 50 | 40 | ((2, 0*7)*5) | 2.0458 | 0.0657 | 0.9806 | 0.0185 | 1.0978 | 0.0485
2 | 50 | 40 | (0*39, 10) | 2.0331 | 0.0610 | 0.9853 | 0.0172 | 1.1046 | 0.0460
2 | 70 | 35 | (35, 0*34) | 2.1365 | 0.0959 | 0.9330 | 0.0282 | 1.0494 | 0.0610
2 | 70 | 35 | ((5, 0*4)*7) | 2.1308 | 0.1119 | 0.9401 | 0.0275 | 1.0275 | 0.0790
2 | 70 | 35 | (0*34, 35) | 2.0924 | 0.1064 | 0.9493 | 0.0247 | 1.0351 | 0.0814
2 | 70 | 60 | (10, 0*59) | 2.0312 | 0.0370 | 0.9843 | 0.0126 | 1.1230 | 0.0276
2 | 70 | 60 | ((2, 0*11)*5) | 2.0426 | 0.0439 | 0.9803 | 0.0132 | 1.1110 | 0.0326
2 | 70 | 60 | (0*59, 10) | 2.0048 | 0.0361 | 0.9973 | 0.0109 | 1.1427 | 0.0258
Table 7. Bayes estimates of α, λ, and entropy using Importance Sampling when α = 2, λ = 1, H = 1.172676.

k | n | m | Scheme | $\hat{\alpha}$ EV | $\hat{\alpha}$ MSE | $\hat{\lambda}$ EV | $\hat{\lambda}$ MSE | $\hat{H}$ EV | $\hat{H}$ MSE
1 | 50 | 25 | (25, 0*24) | 1.9669 | 0.0354 | 1.0756 | 0.0442 | 1.2513 | 0.0515
1 | 50 | 25 | ((5, 0*4)*5) | 2.1832 | 0.1383 | 0.9757 | 0.0158 | 1.0812 | 0.0541
1 | 50 | 25 | (0*24, 25) | 2.3923 | 0.2330 | 0.7339 | 0.0888 | 0.8277 | 0.1595
1 | 50 | 40 | (10, 0*39) | 2.0855 | 0.0981 | 0.9509 | 0.0273 | 1.1185 | 0.0581
1 | 50 | 40 | ((2, 0*7)*5) | 1.9456 | 0.0653 | 1.0425 | 0.0276 | 1.2487 | 0.0469
1 | 50 | 40 | (0*39, 10) | 2.0585 | 0.0842 | 0.9507 | 0.0112 | 1.1295 | 0.0404
1 | 70 | 35 | (35, 0*34) | 1.9673 | 0.0651 | 1.0978 | 0.0322 | 1.2603 | 0.0394
1 | 70 | 35 | ((5, 0*4)*7) | 2.0998 | 0.0486 | 0.9782 | 0.0276 | 1.1113 | 0.0344
1 | 70 | 35 | (0*34, 35) | 2.5157 | 0.4768 | 0.5967 | 0.1736 | 0.7008 | 0.2817
1 | 70 | 60 | (10, 0*59) | 2.0269 | 0.0396 | 0.9868 | 0.0126 | 1.1607 | 0.0241
1 | 70 | 60 | ((2, 0*11)*5) | 2.0078 | 0.0532 | 1.0068 | 0.0168 | 1.1859 | 0.0332
1 | 70 | 60 | (0*59, 10) | 2.0724 | 0.0658 | 1.0103 | 0.0211 | 1.1454 | 0.0411
2 | 50 | 25 | (25, 0*24) | 2.1260 | 0.2079 | 0.9434 | 0.0668 | 1.1016 | 0.1009
2 | 50 | 25 | ((5, 0*4)*5) | 2.6489 | 0.5789 | 0.6052 | 0.1803 | 0.6469 | 0.3286
2 | 50 | 25 | (0*24, 25) | 2.9631 | 1.1193 | 0.4039 | 0.3610 | 0.3852 | 0.6440
2 | 50 | 40 | (10, 0*39) | 2.1428 | 0.0810 | 0.9432 | 0.0319 | 1.0652 | 0.0461
2 | 50 | 40 | ((2, 0*7)*5) | 2.3493 | 0.1862 | 0.7244 | 0.0821 | 0.8441 | 0.1309
2 | 50 | 40 | (0*39, 10) | 2.6610 | 0.5367 | 0.5761 | 0.1890 | 0.6142 | 0.3376
2 | 70 | 35 | (35, 0*34) | 2.1796 | 0.1064 | 0.8989 | 0.0276 | 1.0292 | 0.0588
2 | 70 | 35 | ((5, 0*4)*7) | 2.6541 | 0.6279 | 0.5527 | 0.2119 | 0.6106 | 0.3630
2 | 70 | 35 | (0*34, 35) | 3.4169 | 2.2878 | 0.3225 | 0.4658 | 0.1931 | 0.9953
2 | 70 | 60 | (10, 0*59) | 2.3722 | 0.2157 | 0.8339 | 0.0343 | 0.8916 | 0.1065
2 | 70 | 60 | ((2, 0*11)*5) | 2.3814 | 0.2175 | 0.7520 | 0.0709 | 0.8410 | 0.1383
2 | 70 | 60 | (0*59, 10) | 2.5075 | 0.3113 | 0.5588 | 0.1969 | 0.6621 | 0.2744
Table 8. Bayes estimates under squared error loss function of α, λ, and entropy based on the Lindley method when α = 2, λ = 1, H = 1.172676, a = 0, b = 0.

k | n | m | Scheme | $\hat{\alpha}_S$ EV | $\hat{\alpha}_S$ MSE | $\hat{\lambda}_S$ EV | $\hat{\lambda}_S$ MSE | $\hat{H}_S$ EV | $\hat{H}_S$ MSE
1 | 50 | 25 | (25, 0*24) | 2.2082 | 0.1731 | 0.9569 | 0.0536 | 1.0672 | 0.0787
1 | 50 | 25 | ((5, 0*4)*5) | 2.1024 | 0.1236 | 0.9973 | 0.0416 | 1.1521 | 0.0662
1 | 50 | 25 | (0*24, 25) | 2.0783 | 0.1237 | 1.0246 | 0.0238 | 1.1815 | 0.0582
1 | 50 | 40 | (10, 0*39) | 2.0949 | 0.0851 | 1.023 | 0.0292 | 1.1532 | 0.0451
1 | 50 | 40 | ((2, 0*7)*5) | 2.0902 | 0.0972 | 0.9888 | 0.0285 | 1.1425 | 0.0508
1 | 50 | 40 | (0*39, 10) | 2.0758 | 0.0824 | 1.0027 | 0.0221 | 1.1562 | 0.0428
1 | 70 | 35 | (35, 0*34) | 2.1123 | 0.1186 | 0.9734 | 0.0373 | 1.1249 | 0.0657
1 | 70 | 35 | ((5, 0*4)*7) | 2.0675 | 0.0990 | 1.0265 | 0.0318 | 1.1784 | 0.0568
1 | 70 | 35 | (0*34, 35) | 2.1062 | 0.0865 | 0.9807 | 0.0240 | 1.1291 | 0.0463
1 | 70 | 60 | (10, 0*59) | 2.0682 | 0.0551 | 1.0043 | 0.0167 | 1.1510 | 0.0276
1 | 70 | 60 | ((2, 0*11)*5) | 2.0541 | 0.0431 | 0.9952 | 0.0156 | 1.1539 | 0.0247
1 | 70 | 60 | (0*59, 10) | 2.0748 | 0.0538 | 1.0180 | 0.0165 | 1.1509 | 0.0222
2 | 50 | 25 | (25, 0*24) | 2.2764 | 0.2065 | 0.9113 | 0.0472 | 1.0064 | 0.0986
2 | 50 | 25 | ((5, 0*4)*5) | 2.2915 | 0.2881 | 0.9119 | 0.0457 | 1.0105 | 0.1247
2 | 50 | 25 | (0*24, 25) | 2.3071 | 0.3355 | 0.9011 | 0.0517 | 1.0025 | 0.1374
2 | 50 | 40 | (10, 0*39) | 2.1238 | 0.0700 | 0.9925 | 0.0168 | 1.1177 | 0.0346
2 | 50 | 40 | ((2, 0*7)*5) | 2.1110 | 0.0938 | 0.9750 | 0.0207 | 1.1219 | 0.0469
2 | 50 | 40 | (0*39, 10) | 2.0765 | 0.0510 | 0.9861 | 0.0173 | 1.1425 | 0.0307
2 | 70 | 35 | (35, 0*34) | 2.1629 | 0.1071 | 0.9576 | 0.0288 | 1.0813 | 0.0565
2 | 70 | 35 | ((5, 0*4)*7) | 2.1607 | 0.1463 | 0.9504 | 0.0327 | 1.0879 | 0.0772
2 | 70 | 35 | (0*34, 35) | 2.2537 | 0.2629 | 0.9120 | 0.0419 | 1.0277 | 0.1145
2 | 70 | 60 | (10, 0*59) | 2.0990 | 0.0515 | 0.9743 | 0.0140 | 1.1164 | 0.0283
2 | 70 | 60 | ((2, 0*11)*5) | 2.0752 | 0.0477 | 0.9701 | 0.0127 | 1.1294 | 0.0269
2 | 70 | 60 | (0*59, 10) | 2.0546 | 0.0430 | 0.9995 | 0.0117 | 1.1562 | 0.0242
Table 9. Bayes estimates of α, λ, and entropy using the Importance Sampling procedure when α = 2, λ = 1, H = 1.172676, a = 0, b = 0.

k | n | m | Scheme | $\hat{\alpha}$ EV | $\hat{\alpha}$ MSE | $\hat{\lambda}$ EV | $\hat{\lambda}$ MSE | $\hat{H}$ EV | $\hat{H}$ MSE
1 | 50 | 25 | (25, 0*24) | 2.1546 | 0.0591 | 0.9475 | 0.0543 | 1.0537 | 0.0301
1 | 50 | 25 | ((5, 0*4)*5) | 2.1046 | 0.1199 | 0.9566 | 0.0590 | 1.1089 | 0.0660
1 | 50 | 25 | (0*24, 25) | 2.7585 | 0.8718 | 0.6468 | 0.1665 | 0.6350 | 0.3742
1 | 50 | 40 | (10, 0*39) | 2.0418 | 0.0899 | 1.0567 | 0.0224 | 1.1962 | 0.0442
1 | 50 | 40 | ((2, 0*7)*5) | 1.8975 | 0.0288 | 0.9704 | 0.0148 | 1.2355 | 0.0202
1 | 50 | 40 | (0*39, 10) | 2.1062 | 0.0727 | 0.9991 | 0.0073 | 1.1174 | 0.0283
1 | 70 | 35 | (35, 0*34) | 1.9718 | 0.1112 | 0.9444 | 0.0184 | 1.1922 | 0.0584
1 | 70 | 35 | ((5, 0*4)*7) | 2.3056 | 0.1741 | 0.7991 | 0.0659 | 0.9122 | 0.1166
1 | 70 | 35 | (0*34, 35) | 2.7018 | 0.7250 | 0.6630 | 0.1638 | 0.6451 | 0.3327
1 | 70 | 60 | (10, 0*59) | 2.0047 | 0.0227 | 0.9513 | 0.0100 | 1.1534 | 0.0145
1 | 70 | 60 | ((2, 0*11)*5) | 2.1170 | 0.0385 | 1.0260 | 0.0253 | 1.1187 | 0.0257
1 | 70 | 60 | (0*59, 10) | 2.2110 | 0.0863 | 0.9931 | 0.0333 | 1.0456 | 0.0425
2 | 50 | 25 | (25, 0*24) | 1.9963 | 0.0733 | 0.9162 | 0.0192 | 1.1582 | 0.0606
2 | 50 | 25 | ((5, 0*4)*5) | 2.4657 | 0.5400 | 0.6098 | 0.1706 | 0.7529 | 0.2983
2 | 50 | 25 | (0*24, 25) | 3.3960 | 2.3295 | 0.3504 | 0.4319 | 0.2260 | 0.9416
2 | 50 | 40 | (10, 0*39) | 2.1716 | 0.1174 | 0.8572 | 0.0299 | 1.0177 | 0.0751
2 | 50 | 40 | ((2, 0*7)*5) | 2.3431 | 0.1821 | 0.7332 | 0.0891 | 0.8470 | 0.1384
2 | 50 | 40 | (0*39, 10) | 2.8361 | 0.8668 | 0.5793 | 0.1938 | 0.5556 | 0.4241
2 | 70 | 35 | (35, 0*34) | 2.1448 | 0.1080 | 0.8393 | 0.0426 | 1.0286 | 0.0901
2 | 70 | 35 | ((5, 0*4)*7) | 2.7725 | 0.9165 | 0.5504 | 0.2145 | 0.5769 | 0.4258
2 | 70 | 35 | (0*34, 35) | 2.2137 | 0.0890 | 0.7790 | 0.0574 | 0.9444 | 0.0715
2 | 70 | 60 | (10, 0*59) | 2.2492 | 0.1525 | 0.8417 | 0.0456 | 0.9597 | 0.0910
2 | 70 | 60 | ((2, 0*11)*5) | 2.2767 | 0.1454 | 0.7159 | 0.0887 | 0.8728 | 0.1198
2 | 70 | 60 | (0*59, 10) | 2.7964 | 0.6670 | 0.5519 | 0.2037 | 0.5436 | 0.4008
Table 10. Average length and coverage probability of 95% asymptotic intervals/highest posterior density credible intervals of entropy when α = 2, λ = 1, H = 1.172676, k = 1.

n | m | Scheme | $\hat{H}_{ML}$ AL | $\hat{H}_{ML}$ CP | $\hat{H}_{ML(Log)}$ AL | $\hat{H}_{ML(Log)}$ CP | $\hat{H}_{IS}$ AL | $\hat{H}_{IS}$ CP
50 | 25 | (25, 0*24) | 1.0168 | 0.954 | 1.0665 | 0.940 | 0.9695 | 0.936
50 | 25 | ((5, 0*4)*5) | 1.0244 | 0.972 | 1.0820 | 0.958 | 0.7538 | 0.824
50 | 25 | (5*5, 0*20) | 1.0409 | 0.950 | 1.1055 | 0.934 | 0.9725 | 0.948
50 | 40 | (10, 0*39) | 0.8054 | 0.950 | 0.8264 | 0.952 | 0.7751 | 0.936
50 | 40 | ((2, 0*7)*5) | 0.7991 | 0.962 | 0.8198 | 0.942 | 0.7532 | 0.936
50 | 40 | (2*5, 0*35) | 0.8086 | 0.956 | 0.8306 | 0.948 | 0.7769 | 0.952
70 | 35 | (35, 0*34) | 0.8523 | 0.946 | 0.8778 | 0.942 | 0.8176 | 0.938
70 | 35 | ((7, 0*6)*5) | 0.8666 | 0.964 | 0.8933 | 0.962 | 0.8216 | 0.952
70 | 35 | (5*7, 0*28) | 1.0202 | 0.952 | 0.8694 | 0.954 | 0.8202 | 0.936
70 | 60 | (10, 0*59) | 0.6554 | 0.962 | 0.6657 | 0.960 | 0.6344 | 0.944
70 | 60 | ((2, 0*11)*5) | 0.6515 | 0.946 | 0.6617 | 0.950 | 0.6230 | 0.936
70 | 60 | (2*5, 0*55) | 0.6541 | 0.960 | 0.6642 | 0.958 | 0.6342 | 0.944
Table 11. Summary of model fit using −ln L, the K–S statistic, and the associated p-value.

Distribution | MLEs | −ln L | K–S | p-value
Inverse Weibull | $(\hat{\alpha}, \hat{\lambda}) = (1.415, 283.837)$ | 395.649 | 0.152 | 0.0728836
Table 12. Progressive first-failure censored samples under the given censoring schemes when k = 2, n = 36, m = 18.

Scheme | Sample
R1 = (18, 0*17) | 12, 24, 32, 32, 34, 38, 54, 55, 58, 60, 61, 65, 68, 70, 91, 109, 110, 143
R2 = (1*18) | 12, 15, 22, 24, 32, 32, 33, 34, 38, 43, 44, 54, 55, 58, 60, 65, 68, 70
R3 = (0*17, 18) | 12, 15, 22, 24, 32, 32, 33, 34, 38, 38, 43, 44, 48, 52, 54, 55, 56, 58
Table 13. Point and interval estimation of the parameters and entropy using the MLE and Bayes methods.

Estimates | R1 | R2 | R3
$\hat{\alpha}_{ML}$ | 1.17 (0.87, 1.59) | 1.073 (0.78, 1.48) | 0.95 (0.68, 1.33)
$\hat{\lambda}_{ML}$ | 123.79 (33.88, 452.35) | 88.46 (26.28, 297.76) | 61.08 (19.07, 195.67)
$\hat{H}_{ML}$ | 6.01 (5.36, 6.75) | 6.22 (5.50, 7.04) | 6.57 (5.77, 7.48)
$\hat{\alpha}_{IS}$ | 1.10 (0.94, 1.42) | 1.13 (0.98, 1.38) | 1.20 (1.14, 1.21)
$\hat{\lambda}_{IS}$ | 123.47 (42.40, 304.71) | 100.64 (45.80, 205.23) | 99.12 (72.85, 106.27)
$\hat{H}_{IS}$ | 6.30 (5.56, 7.18) | 5.68 (5.68, 5.68) | 5.52 (5.36, 5.55)

