Parameter and Reliability Inferences of Inverted Exponentiated Half-Logistic Distribution under the Progressive First-Failure Censoring

Using progressive first-failure censored samples, we study inference for the unknown parameters and the reliability and failure functions of the Inverted Exponentiated Half-Logistic distribution. Progressive first-failure censoring is an extension and improvement of progressive censoring, and it is of great significance in lifetime research. Besides maximum likelihood estimation, we consider Bayesian estimation under unbalanced and balanced losses: the General Entropy loss function, the Squared Error loss function and the Linex loss function. Approximate explicit expressions for the Bayes estimates are obtained by the Lindley approximation method for point estimation and by the Metropolis-Hastings method for point and interval estimation. Bayesian credible intervals and asymptotic confidence intervals are compared in terms of average length and coverage probability. A simulation study and a practical data analysis illustrate the methods, and finally we discuss the optimal censoring mode under four different criteria.


Introduction
Usually, in lifetime experiments and reliability research, limitations of time, money and resources, or personnel transfers and accidents, mean that data cannot be recorded completely. In such cases we often use censored samples. At present, many censoring schemes have been applied to lifetime tests. The two most common are Type-I censoring and Type-II censoring, which have been studied by many scholars, such as Fujii [1] and Kateri [2]. In Type-I censoring the test ends at a pre-fixed time, while in Type-II censoring the test ends when the m-th (m < n) failure occurs. Their common disadvantage is that no unit can be removed before the test is completely finished. Progressive censoring was therefore proposed, which is more efficient in lifetime experiments: under this censoring scheme, test units can be removed at various stages of the experiment. For more details, refer to Balakrishnan [3].
Although progressive censoring can significantly improve experimental efficiency, in many cases the duration of the experiment is still too long, so first-failure censoring was proposed. Its most remarkable feature is that the data are collected in groups, which greatly saves time and cost.
To further improve experimental efficiency, Wu and Kuş [4] proposed progressive first-failure censoring. This mode is of great significance in reliability research. It combines first-failure censoring and progressive censoring, and its two main features are grouped samples and the m-dimensional removal vector R = (R_1, R_2, ..., R_m), whose entries give the number of additional groups removed when each failure occurs. In this scheme, N = n × k units are randomly divided into n groups of k independent units each, and the n groups are observed simultaneously and independently. When the i-th failure appears, R_i randomly selected groups are removed together with the group containing the i-th failure. The experiment ends when the m-th failure is observed, at which point all remaining surviving units are removed. Both m and R = (R_1, ..., R_m) are pre-assigned constants satisfying ∑_{i=1}^m R_i + m = n. Thus considerable time and resources are saved, and many high-risk units may be removed at various stages of the test. In addition, this mode can easily be reduced to other modes (for example, k = 1 gives progressive Type-II censoring), so it is widely used in experimental design because of its flexibility.
Assume x_{1;m:n:k} < x_{2;m:n:k} < ... < x_{m;m:n:k} are the progressive first-failure censored samples from a continuous distribution. Then the joint density function is

f_{X_{1;m:n:k}, ..., X_{m;m:n:k}}(x_1, ..., x_m) = A k^m ∏_{i=1}^m f(x_i) [1 − F(x_i)]^{k(R_i+1)−1},

where F(·) is the cumulative distribution function (c.d.f.) and f(·) is the probability density function (p.d.f.). Here we define R_0 = 0, and A = ∏_{i=0}^{m−1} (n − i − ∑_{j=0}^{i} R_j) is a constant determined by n, m and R.
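To make the constant A and the joint density concrete, here is a small Python sketch (the paper itself works in R; the helper names `pff_constant` and `pff_logjoint` are ours) that evaluates A and the log joint density for arbitrary log-density and log-survival callables:

```python
import numpy as np

def pff_constant(n, R):
    """A = prod_{i=0}^{m-1} (n - i - sum_{j<=i} R_j), with R = (R_1,...,R_m)."""
    return np.prod([n - i - sum(R[:i]) for i in range(len(R))])

def pff_logjoint(x, R, n, k, logf, logS):
    """Log joint density of a progressive first-failure censored sample:
    log A + m log k + sum_i [log f(x_i) + (k(R_i+1) - 1) log S(x_i)],
    where S = 1 - F. `logf`/`logS` are user-supplied callables."""
    x = np.asarray(x, dtype=float)
    R = np.asarray(R)
    return (np.log(pff_constant(n, R)) + len(x) * np.log(k)
            + np.sum(logf(x) + (k * (R + 1) - 1) * logS(x)))
```

For a complete sample (k = 1, all R_i = 0, m = n) the constant reduces to n!, recovering the joint density of the order statistics.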
The Half-Logistic distribution (HLD) is a very important distribution in lifetime testing. It is derived from the Logistic distribution and has an increasing hazard rate function. Many scholars have studied this distribution, such as Balakrishnan [5,6]; it can be used as a failure model in life-testing research. Adatia [7] studied the parameters of HLD using a generalized ranked set sampling technique. Asgharzadeh [8] compared several methods of estimating the parameters of HLD. Seo and Kang [9] derived Bayes estimators of the entropy of the generalized HLD under a Type-II censoring scheme and compared them through mean squared error and bias.
The Inverted Exponentiated Half-Logistic distribution (IEHLD) is the inverse of exponentiated Half-Logistic distribution (EHLD). At present, there are many studies and applications on EHLD. Rastogi [10] estimated the parameters and reliability characteristics of EHLD using MLE and Bayes estimation under progressive Type-II censoring scheme. Gui [11] considered the joint confidence regions and the MLE, inverse moment and modified inverse moment estimators of EHLD.
However, few people have studied IEHLD so far. It is an extension of the generalized HLD with a nonmonotone hazard rate. Shirke [12] studied the maximum likelihood estimates and the asymptotic confidence intervals for the parameters of the generalized HLD, and treated its inverse as a member of the generalized inverted scale family to give estimates. Scale families of distributions play a significant role in lifetime research, and their inverses have been studied by many scholars. Dey [13] analyzed lifetime data using the inverted exponential distribution. Panahi [14] studied the maximum likelihood and Bayesian estimates of the parameters of the inverted exponentiated Rayleigh distribution under an adaptive Type-II progressive hybrid censoring scheme. Moreover, Lee and Cho [15] applied maximum likelihood estimation, the importance sampling method and the Lindley method to the inference of IEHLD using progressive Type-II censored samples.
In addition, when the shape parameter γ is set to 1, the IEHLD reduces to the inverted HLD; thus it has greater inclusiveness and flexibility in reliability research and lifetime testing.
Furthermore, the graph of the hazard rate function of IEHLD in Figure 1 shows that it has a single maximum. In this respect, IEHLD can serve as an alternative lifetime model to several well-known distributions with the same hazard-rate shape, such as the Generalized Inverted Rayleigh distribution (GIRD) and the Inverse Weibull distribution (IWD).
The p.d.f., c.d.f., reliability function and hazard function of IEHLD are given below:

f(x; γ, β) = 2γ e^{−1/(βx)} (1 − e^{−1/(βx)})^{γ−1} / [β x² (1 + e^{−1/(βx)})^{γ+1}], x > 0,

F(x; γ, β) = 1 − [(1 − e^{−1/(βx)}) / (1 + e^{−1/(βx)})]^γ, x > 0,

R(t; γ, β) = [(1 − e^{−1/(βt)}) / (1 + e^{−1/(βt)})]^γ,

H(t; γ, β) = 2γ e^{−1/(βt)} / [β t² (1 − e^{−2/(βt)})],

where γ > 0 and β > 0 are the shape and scale parameters, respectively. Please note that when γ = 1 the distribution reduces to the inverted HLD. The graph of the hazard function for β = 1 and γ = 0.5, 0.8, 1, 2, 3 is presented in Figure 1.

This paper mainly studies the parameters and the reliability and failure functions of IEHLD using progressive first-failure censored samples. The two main approaches are maximum likelihood estimation (MLE) and Bayesian estimation. Asymptotic intervals for the parameters are obtained from the MLEs, and Bayesian estimation is presented under the General Entropy loss function, the Linex loss function and the Squared Error loss function. We use the Lindley approximation method and the Metropolis-Hastings (M-H) method to approximate the Bayes estimates and obtain explicit solutions.
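Assuming the parameterization with e^{−1/(βx)} used in this paper's quantile formula, the four functions can be coded directly; a Python sketch with our own function names:

```python
import numpy as np

def iehld_cdf(x, gamma, beta):
    """F(x) = 1 - [(1 - t)/(1 + t)]^gamma with t = exp(-1/(beta*x))."""
    t = np.exp(-1.0 / (beta * np.asarray(x, dtype=float)))
    return 1.0 - ((1.0 - t) / (1.0 + t)) ** gamma

def iehld_pdf(x, gamma, beta):
    """Density obtained by differentiating the c.d.f. above."""
    x = np.asarray(x, dtype=float)
    t = np.exp(-1.0 / (beta * x))
    return (2.0 * gamma * t * (1.0 - t) ** (gamma - 1.0)
            / (beta * x ** 2 * (1.0 + t) ** (gamma + 1.0)))

def iehld_reliability(t0, gamma, beta):
    return 1.0 - iehld_cdf(t0, gamma, beta)

def iehld_hazard(t0, gamma, beta):
    return iehld_pdf(t0, gamma, beta) / iehld_reliability(t0, gamma, beta)
```

A quick consistency check is that the p.d.f. matches a numerical derivative of the c.d.f., which also validates the algebra of the reconstruction.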
The rest of this paper is organized as follows. Section 2 derives the MLEs of the parameters and of the reliability and failure functions, and constructs the asymptotic intervals of the parameters. In Section 3, the Bayesian estimates using the Lindley approximation method and the M-H method are derived, and the Bayesian credible intervals of the parameters are given. Simulation studies are conducted in Section 4, and a practical data analysis is presented in Section 5. In Section 6, the optimal censoring plan under four criteria is derived, and Section 7 gives a brief conclusion.

Maximum Likelihood Estimator
Suppose x_{1;m:n:k}, x_{2;m:n:k}, ..., x_{m;m:n:k} are the progressive first-failure censored samples from IEHLD with censoring scheme R = (R_1, R_2, ..., R_m), n groups and group size k. For convenience, we write X = (x_{1;m:n:k}, ..., x_{m;m:n:k}) and t_i = e^{−1/(βx_i)}. The likelihood function can be obtained as

L(γ, β | X) = A k^m (2γ/β)^m ∏_{i=1}^m [ t_i (1 − t_i)^{γ−1} / (x_i² (1 + t_i)^{γ+1}) ] · [(1 − t_i)/(1 + t_i)]^{γ(k(R_i+1)−1)},   (6)

and the log-likelihood function is

ℓ(γ, β) = ln A + m ln k + m ln(2γ/β) − (1/β) ∑_{i=1}^m 1/x_i − 2 ∑_{i=1}^m ln x_i + (γ−1) ∑_{i=1}^m ln(1 − t_i) − (γ+1) ∑_{i=1}^m ln(1 + t_i) + γ ∑_{i=1}^m (k(R_i+1) − 1) ln[(1 − t_i)/(1 + t_i)].   (7)

The corresponding likelihood equations for γ and β are

∂ℓ/∂γ = m/γ + Z(β) = 0,   (8)

∂ℓ/∂β = −m/β + (1/β²) ∑_{i=1}^m 1/x_i − (2/β²) ∑_{i=1}^m t_i (γ k(R_i+1) − t_i) / (x_i (1 − t_i²)) = 0,   (9)

where Z(β) = ∑_{i=1}^m k(R_i+1) ln[(1 − t_i)/(1 + t_i)]. Solving Equation (8), the MLE of γ is

γ̂ = −m / Z(β).   (10)

Putting Equation (10) into Equation (9) yields an equation g(β) = 0 in β alone. Since g(β) depends on β nonlinearly through both γ̂ and the t_i, the MLE of β cannot be obtained in closed form; using the "optim" command in R, the MLEs γ̂ and β̂ can be computed easily.
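The profile-likelihood route just described, with γ̂ available in closed form given β and β̂ found numerically, can be sketched in Python (the paper uses R's `optim`; the crude two-stage grid search and its bounds below are our own choices):

```python
import numpy as np

def iehld_pff_mle(x, R, k):
    """Profile-likelihood MLE sketch for (gamma, beta):
    gamma = -m / Z(beta) in closed form, beta by maximizing the profile."""
    x = np.asarray(x, dtype=float)
    w = k * (np.asarray(R) + 1.0)            # weights k(R_i + 1)
    m = len(x)

    def Z(beta):                             # sum_i k(R_i+1) log u_i(beta) < 0
        t = np.exp(-1.0 / (beta * x))
        return np.sum(w * np.log((1 - t) / (1 + t)))

    def neg_profile(beta):                   # minus the profile log-likelihood
        t = np.exp(-1.0 / (beta * x))
        g = -m / Z(beta)                     # Equation (10)
        return -(m * np.log(2 * g / beta) - np.sum(1 / x) / beta
                 - 2 * np.sum(np.log(x))
                 + (g - 1) * np.sum(np.log(1 - t))
                 - (g + 1) * np.sum(np.log(1 + t))
                 + g * np.sum((w - 1) * np.log((1 - t) / (1 + t))))

    # two-stage grid search for the minimizer (assumed search range)
    grid = np.linspace(0.2, 5.0, 400)
    b0 = grid[np.argmin([neg_profile(b) for b in grid])]
    fine = np.linspace(b0 - 0.05, b0 + 0.05, 200)
    beta_hat = fine[np.argmin([neg_profile(b) for b in fine])]
    return -m / Z(beta_hat), beta_hat
```

With a large complete sample (k = 1, all R_i = 0) the estimates should recover the true parameter values, which gives a simple self-check of the reconstruction.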

Asymptotic Interval Estimation
We mainly present the asymptotic intervals for the parameters in this subsection. Let θ = (γ, β); the Fisher information matrix is

I(θ) = −E [ ∂²ℓ/∂γ²  ∂²ℓ/∂γ∂β ; ∂²ℓ/∂β∂γ  ∂²ℓ/∂β² ],

whose elements are the negative expected second-order partial derivatives of the log-likelihood (7). Using θ̂ = (γ̂, β̂) to represent the MLE of θ = (γ, β), the observed Fisher information matrix is obtained by evaluating the second derivatives at θ̂:

Î(θ̂) = −[ ∂²ℓ/∂γ²  ∂²ℓ/∂γ∂β ; ∂²ℓ/∂β∂γ  ∂²ℓ/∂β² ] |_{θ=θ̂}.

Thus, the observed variance-covariance matrix of θ̂ can be obtained as V̂ar(θ̂) = Î^{−1}(θ̂). From the asymptotic properties of the MLE, θ̂ approximately follows a bivariate normal distribution with mean θ and variance-covariance matrix I^{−1}(θ). The two-sided equal-tail 100(1 − α)% asymptotic confidence intervals of γ and β can then be constructed as

γ̂ ± u_{α/2} √V̂ar(γ̂)  and  β̂ ± u_{α/2} √V̂ar(β̂),

where u_{α/2} is the upper (α/2)-th percentile of the standard normal distribution.
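A numerical version of this construction can be sketched as follows. The paper evaluates the second derivatives analytically; approximating the observed information matrix by central differences is our simplification:

```python
import numpy as np
from statistics import NormalDist

def asymptotic_ci(loglik, theta_hat, alpha=0.05, eps=1e-5):
    """Wald-type 100(1-alpha)% intervals from a numerically differentiated
    observed information matrix (sketch)."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    p = len(theta_hat)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            ei = np.eye(p)[i] * eps
            ej = np.eye(p)[j] * eps
            # central-difference second partial derivative of the log-likelihood
            H[i, j] = (loglik(theta_hat + ei + ej) - loglik(theta_hat + ei - ej)
                       - loglik(theta_hat - ei + ej)
                       + loglik(theta_hat - ei - ej)) / (4 * eps ** 2)
    cov = np.linalg.inv(-H)                  # observed variance-covariance matrix
    z = NormalDist().inv_cdf(1 - alpha / 2)  # u_{alpha/2}
    se = np.sqrt(np.diag(cov))
    return np.column_stack([theta_hat - z * se, theta_hat + z * se])
```

On an exactly quadratic log-likelihood the numerical Hessian is essentially exact, which makes for an easy correctness check.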

Bayesian Estimation
Under the progressive first-failure censoring scheme, we discuss in this section the Bayesian estimation of the unknown parameters and reliability characteristics of IEHLD.
In addition, three loss functions are used here. The first is the Squared Error loss function (SELF), L(α̂, α) = (α̂ − α)², a symmetric loss widely used in statistical research and practical cases. The second is the Linex loss function (LLF), L(α̂, α) = e^{h(α̂−α)} − h(α̂−α) − 1, h ≠ 0, an asymmetric loss first proposed by Klebanov [16]; its mathematical properties were studied by Klebanov and Rachev [17] and Zellner [18]. The last is the General Entropy loss function (GELF), L(α̂, α) = (α̂/α)^q − q ln(α̂/α) − 1, a generalization of the entropy loss; it is also asymmetric and was introduced by Calabria and Pulcini [19].
For our estimation, the prior distributions of the unknown parameters must be determined first. In this case, a joint conjugate prior for γ and β is not easy to obtain. Since the gamma distribution is flexible and is a suitable prior for both γ and β, we assume that γ and β follow the gamma distributions G₁(a₁, b₁) and G₂(a₂, b₂), with densities

π₁(γ) ∝ γ^{a₁−1} e^{−b₁γ}, γ > 0,   π₂(β) ∝ β^{a₂−1} e^{−b₂β}, β > 0.

Then the joint prior distribution of γ and β reads

π(γ, β) ∝ γ^{a₁−1} β^{a₂−1} e^{−b₁γ − b₂β}.   (14)

Using Equations (6) and (14), the joint posterior distribution of γ and β can be obtained as

π(γ, β | X) = π(γ, β) L(γ, β | X) / C,   (15)

where the normalizing constant C = ∫₀^∞ ∫₀^∞ π(γ, β) L(γ, β | X) dγ dβ does not depend on γ and β.
Thus, using Equation (15), the Bayes estimate of any function of γ and β under the different loss functions can be obtained. Let φ(γ, β) denote a function of γ and β; the Bayes estimate under SELF is the posterior expectation of φ(γ, β),

φ̂_SELF = E[φ(γ, β) | X] = ∫₀^∞ ∫₀^∞ φ(γ, β) π(γ, β | X) dγ dβ.

Taking φ(γ, β) = γ and H(t) gives the Bayes estimates of γ and H(t) under SELF; those of β and R(t) are obtained in the same way. Under the loss functions LLF and GELF, the Bayes estimates of a quantity α are

α̂_LLF = −(1/h) ln E[e^{−hα} | X]  and  α̂_GELF = (E[α^{−q} | X])^{−1/q}.

Replacing α with γ, β, R(t) or H(t) yields the corresponding Bayes estimates under LLF and GELF. From the above, each Bayes estimate is a ratio of two double integrals, so an explicit solution is not feasible. In the next two subsections, we use the Lindley approximation method and the M-H method to approximate the Bayes estimates.
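Given posterior draws (for example, from the M-H sampler described later), the three estimators have simple sample versions; a minimal Python sketch with our own function name:

```python
import numpy as np

def bayes_estimates(draws, h=0.5, q=0.5):
    """Posterior-sample versions of the three Bayes estimators (sketch):
    SELF : posterior mean,
    LLF  : -(1/h) * log E[exp(-h * a)],
    GELF : (E[a^(-q)])^(-1/q)."""
    a = np.asarray(draws, dtype=float)
    est_self = a.mean()
    est_llf = -np.log(np.mean(np.exp(-h * a))) / h
    est_gelf = np.mean(a ** (-q)) ** (-1.0 / q)
    return est_self, est_llf, est_gelf
```

For a degenerate posterior all three estimators coincide, and by Jensen's inequality the LLF estimate with h > 0 never exceeds the posterior mean.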

Lindley Approximation Method
In many studies, the Bayes estimate takes the form of a ratio of multiple integrals, for which an analytical solution is difficult to obtain. The Lindley approximation method applied in this subsection was given by Lindley [20]; it is a point-estimation technique that can solve this problem.
Under the loss function GELF, the Lindley approximations of γ, R(t) and H(t) are obtained by taking φ(γ, β) = γ^{−q}, R(t)^{−q} and H(t)^{−q}, respectively, in the approximation and then applying the transformation α̂_GELF = (E[α^{−q} | X])^{−1/q}; β̂_GELF and Ĥ(t)_GELF are obtained in the same way. According to these results, we only need to substitute the corresponding partial derivatives into the corresponding equation, and the required Bayes estimates under each loss function can be obtained.
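For reference, the general two-parameter form of Lindley's approximation on which the expressions above are based can be stated as follows (a standard result; here ρ denotes the log-prior, ℓ the log-likelihood, σ_ij the (i, j) element of the inverse of the negative Hessian of ℓ, and subscripts denote partial derivatives with respect to (γ, β)):

```latex
E\,[\varphi(\gamma,\beta)\mid X]\;\approx\;\Bigl[\varphi
  +\tfrac12\sum_{i=1}^{2}\sum_{j=1}^{2}\bigl(\varphi_{ij}+2\,\varphi_i\,\rho_j\bigr)\sigma_{ij}
  +\tfrac12\sum_{i=1}^{2}\sum_{j=1}^{2}\sum_{k=1}^{2}\sum_{l=1}^{2}
     \ell_{ijk}\,\sigma_{ij}\,\sigma_{kl}\,\varphi_l\Bigr]_{(\hat\gamma,\hat\beta)}.
```

All quantities are evaluated at the MLE (γ̂, β̂); choosing φ appropriately for each loss function recovers the point estimates discussed above.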
To complete the Bayesian analysis, we use the M-H method for interval estimation in the next subsection, where the Bayesian credible intervals of the parameters are also obtained.

Metropolis-Hastings Method
The M-H algorithm is a simulation algorithm based on MCMC techniques, originally proposed by Metropolis and later generalized by Hastings. It is widely used in stochastic simulation. Two important applications are sampling from given probability distributions and estimating complex integrals by stochastic simulation. Nassar [21], Kohansal [22] and Panahi [14] generated samples from the Weibull, Kumaraswamy and inverted exponentiated Rayleigh distributions, respectively, under adaptive Type-II progressive hybrid censoring schemes using the M-H algorithm with a Gibbs sampler.
The M-H algorithm uses the joint posterior density function and a proposal distribution to simulate samples. The current sample serves as the parameter of the proposal distribution for the candidate sample, and an acceptance probability decides whether the candidate is accepted. Rejections produce some wasted draws, but they drive the sampled distribution toward the target distribution.
The joint posterior function of γ and β in (15) can be re-expressed as

π(γ, β | X) ∝ π₁(γ | β, X) π₂(β | γ, X).

It is obvious that π₁(γ | β, X) is a gamma density function, so samples of γ can be generated easily. However, the conditional posterior density π₂(β | γ, X) cannot be transformed into a well-known distribution through simplification and analysis, so it is hard to get samples by general methods. We therefore apply the M-H algorithm within Gibbs sampling to generate samples from π₁(γ | β, X) and π₂(β | γ, X). The steps are arranged as follows:
Step-1: Initialize the values γ^(0), β^(0).
Step-2: Set j = 1.
Step-3: Generate γ^(j) from the gamma density π₁(γ | β^(j−1), X) under stage j and the progressive first-failure censored data with the given censoring scheme.
Step-4: Generate β^(j) from π₂(β | γ^(j), X) by one M-H step: draw a candidate β* from a proposal distribution centred at β^(j−1) and accept it with probability min{1, π₂(β* | γ^(j), X) q(β^(j−1) | β*) / [π₂(β^(j−1) | γ^(j), X) q(β* | β^(j−1))]}; otherwise set β^(j) = β^(j−1).
Step-5: Set j = j + 1.
Step-6: Repeat Step-3 to Step-5 Q times and record (γ^(j), β^(j)), j = 1, ..., Q. Then the Bayes estimates of γ and β by the M-H method under SELF, LLF and GELF are obtained by replacing the posterior expectations with averages over the retained draws, for example E[φ | X] ≈ (1/(Q − B)) ∑_{j=B+1}^{Q} φ(γ^(j), β^(j)). The constant B here refers to the burn-in period of the Markov chain, which eliminates the impact of the initial value (γ^(0), β^(0)) while ensuring convergence of the algorithm.
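The steps above can be sketched as follows (in Python rather than the paper's R; the normal random-walk proposal, its standard deviation `sd`, and the starting values are our assumptions, and the conditionals follow the posterior as reconstructed earlier):

```python
import numpy as np

def gibbs_mh(x, R, k, a1, b1, a2, b2, Q=3000, B=1000, sd=0.2, seed=1):
    """M-H within Gibbs sketch for (gamma, beta) under gamma priors
    G1(a1, b1), G2(a2, b2); returns the chain after burn-in B."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    w = k * (np.asarray(R) + 1.0)
    m = len(x)
    inv_sum = np.sum(1.0 / x)

    def Z(beta):                          # sum_i k(R_i+1) log u_i(beta) < 0
        t = np.exp(-1.0 / (beta * x))
        return np.sum(w * np.log((1 - t) / (1 + t)))

    def log_pi2(beta, gam):               # log pi_2(beta | gamma, X) + const
        if beta <= 0:
            return -np.inf
        t = np.exp(-1.0 / (beta * x))
        return ((a2 - 1 - m) * np.log(beta) - b2 * beta - inv_sum / beta
                + gam * Z(beta) - np.sum(np.log(1 - t)) - np.sum(np.log(1 + t)))

    gam, beta = 1.0, 1.0
    chain = np.empty((Q, 2))
    for j in range(Q):
        # gamma | beta, X is exactly Gamma(m + a1, rate b1 - Z(beta))
        gam = rng.gamma(m + a1, 1.0 / (b1 - Z(beta)))
        # beta | gamma, X: one normal random-walk M-H step
        cand = rng.normal(beta, sd)
        if np.log(rng.uniform()) < log_pi2(cand, gam) - log_pi2(beta, gam):
            beta = cand
        chain[j] = gam, beta
    return chain[B:]                       # discard burn-in B
```

Because the proposal is symmetric, the acceptance ratio reduces to the ratio of the conditional posteriors, which is why `q(·|·)` does not appear explicitly.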
One can conveniently carry out the estimation with the R package LearnBayes and the algorithms described by Albert [23].

Simulation Studies
We analyze the performance of the estimators through a Monte Carlo simulation study in this section; some examples are presented for illustration. First, we fix the true parameter values and the combinations of censoring parameters (k, n, m, R). Then, applying the algorithm presented in [24] and the method of generating progressive first-failure samples mentioned in [25], we generate samples of IEHLD under progressive first-failure censoring. All analysis is done in R.
First, take the parameter values γ = 0.3, β = 0.8 and t = 1.5 for the reliability and hazard functions, and take the hyper-parameters (a₁, b₁, a₂, b₂) of the prior distributions G₁ and G₂ as (1.06, 3.6, 16, 20). To compare the effect of the parameter values, they are changed to γ = 0.3 and β = 1.5 for the MLE, while for Bayesian estimation the hyper-parameters (a₁, b₁, a₂, b₂) are changed to (0, 0, 0, 0) for noninformative estimates. It should be noted that the prior expectations are matched to the parameter values, that is, a₁/b₁ = γ and a₂/b₂ = β. For the censoring parameters, we choose k = 3 and 5, and two combinations of n and m, namely n = 30 with m = 15, 20, 30 and n = 50 with m = 25, 30, 50, under different removal schemes R_i.
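The generation step can be sketched in Python (the paper works in R). It relies on the standard fact that a progressive first-failure censored sample with group size k from c.d.f. F is a progressive Type-II censored sample from F*(x) = 1 − (1 − F(x))^k, together with the uniform-transformation algorithm of [24]; the quantile function uses the parameterization reconstructed earlier, and the function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def iehld_quantile(p, gamma, beta):
    """alpha_p = -1 / (beta * log[(1-s)/(1+s)]), s = (1-p)^(1/gamma)."""
    s = (1.0 - np.asarray(p, dtype=float)) ** (1.0 / gamma)
    return -1.0 / (beta * np.log((1.0 - s) / (1.0 + s)))

def gen_pff(gamma, beta, R, k, rng=rng):
    """Generate one progressive first-failure censored sample from IEHLD."""
    R = np.asarray(R)
    m = len(R)
    U = rng.uniform(size=m)
    # exponents i + R_m + ... + R_{m-i+1}, i = 1..m
    E = np.arange(1, m + 1) + np.cumsum(R[::-1])
    V = U ** (1.0 / E)
    Uo = 1.0 - np.cumprod(V[::-1])       # ordered uniform progressive sample
    p = 1.0 - (1.0 - Uo) ** (1.0 / k)    # invert F* = 1 - (1 - F)^k
    return iehld_quantile(p, gamma, beta)
```

The output is an ordered sample of length m; checking that the quantile function inverts the reconstructed c.d.f. validates the last step.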
(1) The MLEs are tabulated in Tables 1 and 2. These tables show:
• When k and n are fixed but m increases, the MSEs of γ, β, R(t) and H(t) decrease.
• When k is fixed but n increases, the MSEs decrease.
• When n and m are fixed but k increases, the MSEs decrease.
• When γ is fixed but β increases, the MSEs increase.
(2) The Bayesian estimates obtained by the Lindley approximation method under SELF, LLF and GELF are tabulated in Tables 3-10, where we choose h = −0.5, 0.5 and 1 for LLF and q = −0.5, 0.5 and 1 for GELF. The tables show that:
• When k and n are fixed but m increases, the MSEs of γ, β, R(t) and H(t) decrease.
• When k is fixed but n increases, the MSEs decrease slightly.
• When n and m are fixed but k increases, the MSEs show no obvious trend on the whole.
• The MSEs with noninformative priors are bigger than those with informative priors under SELF.
• Under LLF, h = 1 is the best choice with the smallest MSEs, and h = −0.5 is slightly worse.
• Under GELF, there is no significant difference in MSEs among the three choices; the estimation seems slightly better when q = 0.5.
• Among the three loss functions, there is little difference between the MSEs under LLF and GELF, with GELF slightly better, while SELF is the worst of the three.
(3) For the Bayesian estimates obtained by the M-H method, the tables show that:
• When k and n are fixed but m increases, the MSEs of γ, β, R(t) and H(t) decrease.
• When k is fixed but n increases, the MSEs decrease.
• When n and m are fixed but k increases, the MSEs increase.
• The MSEs with noninformative priors are bigger than those with informative priors.
• Under LLF, h = 1 is the best choice with the smallest MSEs, and h = −0.5 has the biggest MSEs among the three.
• Under GELF, the estimation with q = 1 is slightly better than with q = 0.5, and q = −0.5 is slightly worse.
• Among the three loss functions, the MSEs under SELF are the smallest on the whole, while those under GELF are slightly bigger than those under LLF.
(4) Between the two Bayesian estimation methods in this paper, the Lindley approximation method performs slightly better than the M-H method overall. Between the interval estimates, the asymptotic confidence intervals from the MLE perform slightly better than the M-H credible intervals as a whole. In Tables 19 and 20, the Bayesian credible intervals and asymptotic confidence intervals of the parameters are reported in terms of average length and coverage probability.

Practical Data Analysis
We apply the practical data set presented in Nichols and Padgett [26] for analysis and illustration in this section. The data report the tensile strength of 100 carbon fibers and are tabulated in Table 21.
First, we fit IEHLD and five other reliability models to the data to compare goodness of fit. Among them, the Weibull distribution is the model originally fitted to these data, and the IWD and GIRD are two models whose hazard shape is similar to that of IEHLD. The Q-Q plots of the fitted models are shown in Figure 2 for illustration. We then use the Bayesian information criterion (BIC), the Akaike information criterion (AIC) and the Kolmogorov-Smirnov (KS) test statistic, all based on the MLEs, to present the numerical results, which are tabulated in Table 22. It can be seen that IEHLD fits the data relatively well.

Next, we randomly divide the data into 50 groups with 2 units in each group, that is, n = 50 and k = 2, and take the minimum value in each group. Using three censoring schemes, denoted R_1, R_2 and R_3, with m = 35, we generate the progressive first-failure censored samples. The grouped data are tabulated in Table 23. Then, using the generated progressive first-failure censored data listed in Table 24, the MLEs and Bayes estimates of γ, β, H(t) and R(t) are computed, and the Bayesian credible intervals and asymptotic confidence intervals of the parameters are also presented. Taking t = 10, the corresponding results are listed in Table 25.

Optimal Censoring Mode
In this part, the optimal censoring mode among the progressive first-failure censoring schemes considered earlier is discussed. We use the four criteria mentioned in [27] to evaluate the optimal censoring scheme.
For IEHLD, the p-th quantile is

α_p = −1 / { β ln [ (1 − (1 − p)^{1/γ}) / (1 + (1 − p)^{1/γ}) ] },

and Var[ln α̂_p] = (∇ ln α̂_p)ᵀ V̂ar(θ̂) (∇ ln α̂_p), where ∇ ln α̂_p denotes the gradient of ln α_p with respect to (γ, β) evaluated at the MLEs γ̂ and β̂. We take p = 0.05 and p = 0.95 for this criterion. Criterion [4] minimizes the integral ∫₀¹ Var[ln α̂_p] w(p) dp with weight function w(p) = 1. The corresponding results are tabulated in Table 26, where the minimum value under each criterion is set in bold for clarity. From the table we can see that under criterion [1], Scheme R_2 is the best, while under the other three criteria, Scheme R_3 is the best.
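The quantile-variance computation and criterion [4] can be sketched numerically (the numerical gradient and trapezoidal integration are our simplifications of the analytic expressions; `log_quantile_grad` and `criterion4` are hypothetical names):

```python
import numpy as np

def log_quantile_grad(p, gamma, beta, eps=1e-6):
    """Central-difference gradient of ln(alpha_p) w.r.t. (gamma, beta)."""
    def la(g, b):
        s = (1.0 - p) ** (1.0 / g)
        return np.log(-1.0 / (b * np.log((1.0 - s) / (1.0 + s))))
    return np.array([(la(gamma + eps, beta) - la(gamma - eps, beta)) / (2 * eps),
                     (la(gamma, beta + eps) - la(gamma, beta - eps)) / (2 * eps)])

def criterion4(gamma, beta, cov):
    """Trapezoidal approximation of criterion [4]:
    integral over p of Var[ln alpha_p] = grad' * cov * grad, weight w(p) = 1."""
    grid = np.linspace(0.01, 0.99, 99)
    vals = np.array([g @ cov @ g for g in (log_quantile_grad(p, gamma, beta)
                                           for p in grid)])
    dx = grid[1] - grid[0]
    return float(dx * (vals.sum() - 0.5 * (vals[0] + vals[-1])))
```

Since ln α_p = −ln β − ln(−ln r(γ, p)), the β-component of the gradient is exactly −1/β, which gives a convenient sanity check on the numerics.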

Concluding Remarks
We propose MLEs and Bayes estimates of the parameters and the reliability and failure functions of IEHLD using progressive first-failure censored samples. For the MLE, we use two sets of parameter values to compare the results, and the asymptotic intervals of the parameters are presented at the same time. For Bayesian estimation, the Lindley approximation method and the Metropolis-Hastings method are used for point and interval estimation under three loss functions, namely GELF, LLF and SELF, and the Bayesian credible intervals of the parameters are also obtained. We then use a practical data set to illustrate the fit of different models and to compare the estimation methods. Finally, using four criteria, the optimal censoring plan among the progressive first-failure censoring schemes is discussed.
There is still much to be done in the study of censoring modes. To improve experimental efficiency, it is important to optimize the censoring mode. Many new censoring modes have been put forward, such as generalized Type-II progressive hybrid censoring and adaptive progressive Type-II censoring. One can also combine existing censoring modes to achieve better results.