Statistical Analysis of Inverse Lindley Data Using Adaptive Type-II Progressively Hybrid Censoring with Applications

Abstract: This paper deals with statistical inference of the unknown parameter and some life parameters of the inverse Lindley distribution under the assumption that the data are adaptive Type-II progressively hybrid censored. The maximum likelihood method is used to acquire point and interval estimates of the distribution parameter, reliability function, and hazard rate function. The approximate confidence intervals are also addressed, with the delta method used to approximate the variances of the estimators of the reliability and hazard rate functions so that the required intervals can be obtained. Under the assumption of a gamma prior, we further consider Bayesian estimation of the different parameters. The Bayes estimates are obtained under the squared error and general entropy loss functions, and both the Bayes estimates and the highest posterior density credible intervals are computed by employing the Markov chain Monte Carlo procedure. An exhaustive numerical study is conducted to compare the offered estimates with regard to their root mean squared errors, relative absolute biases, confidence lengths, and coverage probabilities. To illustrate the suggested methods, two applications are investigated. The numerical findings show that the Bayes estimates perform better than those obtained by the maximum likelihood method, and that Bayesian estimation under the asymmetric loss function gives more efficient estimates than under the symmetric loss. Finally, the inverse Lindley distribution is recommended as a suitable model to fit the airborne communication transceiver and wooden toys data sets when compared with some competitive models, including the inverse Weibull, inverse gamma, and alpha power inverted exponential distributions.


Introduction
As a combination of the exponential and gamma distributions, Lindley [1] established the so-called Lindley distribution (LD). Because the LD is only useful for modelling data with a monotonically increasing failure rate, its relevance may be limited for data with non-monotone shapes, such as upside-down bathtub shapes. Sharma et al. [2] offered the inverse Lindley (IL) distribution, which has an upside-down bathtub-shaped hazard rate function (HRF), as an inverted counterpart of the LD. Assume that Y is a random variable following the IL distribution, denoted IL(µ), where µ is a scale parameter. According to Sharma et al. [2], for y > 0, the associated probability density function (PDF), reliability function (RF), and HRF are given, respectively, by

f(y; µ) = [µ²/(1 + µ)] [(1 + y)/y³] e^{−µ/y},    (1)

R(y; µ) = 1 − [1 + µ/((1 + µ)y)] e^{−µ/y},    (2)

and

h(y; µ) = µ²(1 + y) e^{−µ/y} / {(1 + µ) y³ [1 − (1 + µ/((1 + µ)y)) e^{−µ/y}]}.    (3)

Basu et al. [3] considered the estimation of the IL distribution using Type-I censored data. Basu et al. [4] studied some estimation problems of the IL distribution using a progressive hybrid Type-I censoring scheme with binomial removals. Basu et al. [5] investigated the maximum likelihood, product of spacings, and Bayesian estimation for the IL distribution based on hybrid censored data. Hassan et al. [6] studied inference about the reliability parameter of the IL distribution based on ranked set sampling.
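Since these three functions drive everything that follows, a short numerical check is useful. The Python sketch below (our own helper names; the paper's computations are actually carried out in R) codes the PDF, RF, and HRF directly and reproduces the true values of (R(0.25), h(0.25)) quoted later in the simulation section for µ = 0.5 and µ = 1.5.

```python
import math

def il_pdf(y, mu):
    # PDF of the inverse Lindley distribution IL(mu), y > 0
    return mu**2 * (1 + y) / ((1 + mu) * y**3) * math.exp(-mu / y)

def il_rf(y, mu):
    # RF: R(y) = 1 - (1 + mu/((1+mu)y)) * exp(-mu/y)
    return 1 - (1 + mu / ((1 + mu) * y)) * math.exp(-mu / y)

def il_hrf(y, mu):
    # HRF: h(y) = f(y) / R(y)
    return il_pdf(y, mu) / il_rf(y, mu)

# true values used in the simulation study at mission time t = 0.25
print(round(il_rf(0.25, 0.5), 4), round(il_hrf(0.25, 0.5), 4))  # 0.6842 2.6373
print(round(il_rf(0.25, 1.5), 4), round(il_hrf(0.25, 1.5), 4))  # 0.9916 0.18
```

Matching the tabulated values (0.6842, 2.6373) and (0.9916, 0.1800) is a convenient sanity check on any implementation of these formulas.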
Life-testing experiments frequently end before all of the items fail; this occurs because of time restrictions or a lack of funding. The observations that arise from these scenarios are referred to as a censored sample. Numerous censoring techniques have been presented in the literature. Of these, the progressive censoring plan is highly helpful since it enables the removal of a predetermined number of surviving items at various stages. Adaptive Type-II progressive hybrid censoring (T2-APHC), proposed by Ng et al. [7], has received a lot of attention from several authors because it allows for highly efficient statistical analysis. In this plan, n test units are placed on a test at time zero, the number of failures m (< n) is predetermined, and the testing time is permitted to run over the prefixed time T. The progressive censoring scheme S = (S_1, S_2, ..., S_m) is also prefixed, but some of its values may be adjusted accordingly during the test. When the first failure Y_{1:m:n} occurs, S_1 units are randomly removed from the test. Similarly, when the second failure Y_{2:m:n} occurs, S_2 units are randomly removed, and so on. If the mth failure occurs before time T (i.e., Y_{m:m:n} < T), the test stops at the mth failure and we have S*_m = n − m − ∑_{i=1}^{m−1} S_i. On the other hand, if the rth failure occurs before time T (i.e., Y_{r:m:n} < T < Y_{r+1:m:n}, where r + 1 < m), we change the removal pattern by setting S_i = 0 for i = r + 1, r + 2, ..., m − 1, so that S*_m = n − m − ∑_{i=1}^{r} S_i. This mechanism ensures that the test stops when the experimenter achieves the desired number of failures m, and that the overall test duration is close to the optimal time T. Now, suppose y = (y_{1:m:n}, S_1), ..., (y_{r:m:n}, S_r), (y_{r+1:m:n}, 0), ..., (y_{m:m:n}, S*_m) is a T2-APHC sample obtained from a continuous population; then, according to Ng et al.
[7], the likelihood function of the observed data can be defined as

L(µ|y) = C ∏_{i=1}^{m} f(y_i) ∏_{i=1}^{r} [1 − F(y_i)]^{S_i} [1 − F(y_m)]^{S*_m},    (4)

where C is a constant and y_i is used instead of y_{i:m:n} for simplicity. In the past decade, using the T2-APHC plan, several authors have derived different point and/or interval estimators of various life parameters; for example, see Al Sobhi et al. [8], Hemmati and Khorram [9], Nassar et al. [10], Panahi and Moradi [11], Elshahhat and Nassar [12], Panahi and Asadi [13], Du and Gui [14], Alotaibi et al. [15], Alotaibi et al. [16], Ateya et al. [17], Elshahhat and Nassar [18], and the references cited therein. We are motivated to perform this work because (1) the IL distribution exhibits two very admirable characteristics: in addition to possessing a hazard function with an upside-down bathtub shape, which is a common occurrence in many domains, it is a single-parameter distribution, which greatly simplifies the mathematical treatment; and (2) the T2-APHC plan is effective in reducing the overall test duration while preserving the desired characteristics of progressive censoring in practical reliability studies. From the aforementioned development, it is evident that the problem of estimating the unknown parameter, RF, and HRF of the IL distribution based on a T2-APHC sample has not been explored. Therefore, our main objective in this study is to investigate some estimation issues for the IL distribution when the data are T2-APHC. We first obtain the maximum likelihood estimates (MLEs) of the various parameters as classical estimates, as well as the approximate confidence intervals (ACIs). We further use the Bayesian estimation method to obtain the Bayes estimates and the highest posterior density (HPD) credible intervals based on a gamma prior. The Bayes estimates are obtained through the Markov chain Monte Carlo (MCMC) technique. Two loss functions are taken into consideration for this purpose. The first is the squared error (SE) loss function as a symmetric one. The second is the general entropy (GE) loss function, which is an asymmetric one. The effectiveness of the various strategies is examined through extensive simulations, and two real data sets are analyzed for illustration.
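The adaptive removal rule described above is easy to mis-implement, so a small simulation sketch may help. The Python snippet below (hypothetical helper names of our own; the paper itself works in R) draws a T2-APHC sample from IL(µ) by exploiting the fact that the reciprocal of a Lindley variate is inverse-Lindley distributed, and that a Lindley(µ) variate is a mixture of exponential and gamma variates.

```python
import random

def rlindley(mu, rng):
    # Lindley(mu) is a mixture: Exp(mu) w.p. mu/(1+mu), Gamma(2, mu) w.p. 1/(1+mu)
    if rng.random() < mu / (1 + mu):
        return rng.expovariate(mu)
    return rng.gammavariate(2.0, 1.0 / mu)

def t2aphc_sample(mu, n, m, T, scheme, seed=1):
    # Draw a T2-APHC sample from IL(mu): if X ~ Lindley(mu), then 1/X ~ IL(mu)
    rng = random.Random(seed)
    alive = [1.0 / rlindley(mu, rng) for _ in range(n)]
    ys, removals = [], []
    past_T = False
    for i in range(m):
        y = min(alive)                    # next observed failure
        alive.remove(y)
        ys.append(y)
        if y > T:
            past_T = True                 # adaptive rule: no withdrawals after T
        if i < m - 1:
            s = 0 if past_T else scheme[i]
            for _ in range(s):            # withdraw s survivors at random
                alive.pop(rng.randrange(len(alive)))
            removals.append(s)
        else:
            removals.append(len(alive))   # S*_m: everything left is removed
    return ys, removals
```

With µ = 2, (T, n, m) = (1.5, 40, 12), and S = (28, 0*11), as in Figure 1 of the paper, the realized removals always sum to n − m = 28, whichever branch of the adaptive rule is triggered.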
The remainder of the paper is structured as follows. In Section 2, the maximum likelihood method is applied to acquire the MLEs of µ, RF, and HRF, together with the associated ACIs. Using a gamma prior, two loss functions, and the MCMC procedure, we provide the Bayesian estimation, including point and HPD credible interval estimates, of the unknown parameters in Section 3. A simulation study is performed and its outcomes are presented in Section 4. Two real data sets are analyzed in Section 5. Finally, Section 6 concludes the paper.

Classical Inference
In this section, the maximum likelihood method is used to acquire the estimate of the model parameter as well as those of its RF and HRF. Additionally, the ACIs of the various parameters are created based on the asymptotic properties of the MLEs. It is crucial to note that the variances of the RF and HRF estimators are calculated using the delta approach, since these variances cannot be obtained explicitly. Suppose that y_{1:m:n} < ... < y_{r:m:n} < T < y_{r+1:m:n} < ... < y_{m:m:n} is a T2-APHC sample of size m with S = (S_1, ..., S_r, 0, ..., 0, S*_m) from the IL distribution. The likelihood function in this instance can be determined from (1), (2) and (4), after omitting the constant term, as

L(µ|y) ∝ [µ^{2m}/(1 + µ)^m] ∏_{i=1}^{m} [(1 + y_i)/y_i³] e^{−µ/y_i} ∏_{i=1}^{r} [R(y_i; µ)]^{S_i} [R(y_m; µ)]^{S*_m},    (5)

where y_i = y_{i:m:n}, i = 1, ..., m, for the sake of simplicity and R(·; µ) is the RF in (2). Taking the natural logarithm of (5), we can determine the log-likelihood function for the case under consideration as

ℓ(µ|y) = 2m log µ − m log(1 + µ) + ∑_{i=1}^{m} log[(1 + y_i)/y_i³] − µ ∑_{i=1}^{m} 1/y_i + ∑_{i=1}^{r} S_i log R(y_i; µ) + S*_m log R(y_m; µ).    (6)

In this context, the MLE of the model parameter µ, symbolized by μ̂, can be obtained by maximizing (6) with respect to µ or, equivalently, by solving the normal equation

dℓ(µ|y)/dµ = 2m/µ − m/(1 + µ) − ∑_{i=1}^{m} 1/y_i + ∑_{i=1}^{r} S_i R′(y_i; µ)/R(y_i; µ) + S*_m R′(y_m; µ)/R(y_m; µ) = 0,    (7)

where R′(y; µ) = ∂R(y; µ)/∂µ = (e^{−µ/y}/y)[1 + µ/((1 + µ)y) − 1/(1 + µ)²]. There is no closed form for the MLE μ̂, as can be seen from (7). Therefore, a numerical method may be used to solve (7) in order to obtain the MLE of µ.
One of the main issues in maximum likelihood estimation is how to prove the existence and uniqueness of the acquired MLE μ̂. Due to the complex form of (7), by simulating a T2-APHC sample with µ = 2, (T, n, m) = (1.5, 40, 12), and S = (28, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), both the existence and uniqueness of μ̂ are shown graphically in Figure 1, which presents the log-likelihood function in (6) and the normal equation in (7). As a result, the MLE μ̂ of µ is 2.099918. It is clear from Figure 1 that the offered maximum likelihood estimate of µ exists and is unique. Once μ̂ has been determined, the invariance property of the MLE can be utilized to estimate the RF and HRF: we obtain the MLEs R̂(t) and ĥ(t) at mission time t from (2) and (3), respectively, by substituting the parameter µ with the corresponding MLE μ̂. The common asymptotic normality of the MLE μ̂ can be applied to create the 100(1 − α)% ACI for the parameter µ with estimated variance, denoted σ̂²_µ, obtained from the inverse of the observed Fisher information, which is itself obtained from the negative of the second derivative of (6) evaluated locally at the MLE of µ. From the log-likelihood function in (6), we obtain the second derivative of ℓ(µ|y) with respect to µ as

d²ℓ(µ|y)/dµ² = −2m/µ² + m/(1 + µ)² + ∑_{i=1}^{r} S_i [R″(y_i; µ)R(y_i; µ) − (R′(y_i; µ))²]/R²(y_i; µ) + S*_m [R″(y_m; µ)R(y_m; µ) − (R′(y_m; µ))²]/R²(y_m; µ),

where R″(y; µ) = ∂²R(y; µ)/∂µ². As a result, the ACI of the parameter µ can be constructed with 100(1 − α)% level of confidence as μ̂ ± z_{α/2} σ̂_µ, where z_{α/2} is the upper (α/2)th percentile point of the standard normal distribution and σ̂_µ is obtained by evaluating [−d²ℓ(µ|y)/dµ²]^{−1/2} at µ = μ̂. We also need to determine the variances of the MLEs of the RF and HRF in order to build the ACIs for these functions. We apply the delta approach to obtain approximations of the variances of R̂(t) and ĥ(t). The delta method is a general strategy for calculating confidence intervals for functions of MLEs. It takes a function that is too complex for its variance to be calculated analytically, approximates it linearly, and then calculates the variance of
the linear function, which is simpler and may be utilized for large-sample inference; see Greene [19] for more details. To get the required approximate estimated variances, we first need to obtain the derivatives dR(t)/dµ and dh(t)/dµ. Let Θ_R = (dR(t)/dµ)|_{µ=μ̂} and Θ_h = (dh(t)/dµ)|_{µ=μ̂}; then we can approximate the required variances of R̂(t) and ĥ(t), respectively, as σ̂²_R = Θ²_R σ̂²_µ and σ̂²_h = Θ²_h σ̂²_µ. Thus, with 100(1 − α)% level of confidence, the ACIs of the RF and HRF can be computed, respectively, as R̂(t) ± z_{α/2} σ̂_R and ĥ(t) ± z_{α/2} σ̂_h.
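To make the estimation recipe of this section concrete, here is a self-contained Python sketch of the whole pipeline: numerical maximization of the log-likelihood, observed Fisher information by numerical differentiation, and delta-method ACIs for R(t) and h(t). The data vector and realized removal pattern are hypothetical toy values, not taken from the paper, and numerical derivatives stand in for the closed-form expressions.

```python
import math

# Hypothetical T2-APHC sample: observed failures y_i with realized removals S_i
ys = [0.4, 0.7, 0.9, 1.3, 2.1, 3.5]
S = [2, 0, 0, 1, 0, 3]

def il_rf(y, mu):
    # reliability function of IL(mu)
    return 1 - (1 + mu / ((1 + mu) * y)) * math.exp(-mu / y)

def il_hrf(y, mu):
    # hazard rate function of IL(mu)
    f = mu**2 * (1 + y) / ((1 + mu) * y**3) * math.exp(-mu / y)
    return f / il_rf(y, mu)

def loglik(mu):
    # log-likelihood of the progressively censored sample (constants dropped)
    ll = 0.0
    for y, s in zip(ys, S):
        ll += 2 * math.log(mu) - math.log(1 + mu) - mu / y
        if s > 0:
            ll += s * math.log(il_rf(y, mu))
    return ll

def golden_max(f, lo, hi, tol=1e-9):
    # golden-section search for the maximizer of a unimodal f on (lo, hi)
    g = (math.sqrt(5) - 1) / 2
    c, d = hi - g * (hi - lo), lo + g * (hi - lo)
    while hi - lo > tol:
        if f(c) > f(d):
            hi, d = d, c
            c = hi - g * (hi - lo)
        else:
            lo, c = c, d
            d = lo + g * (hi - lo)
    return (lo + hi) / 2

mu_hat = golden_max(loglik, 1e-4, 50.0)

# observed Fisher information via a numerical second derivative
eps = 1e-4
d2 = (loglik(mu_hat + eps) - 2 * loglik(mu_hat) + loglik(mu_hat - eps)) / eps**2
var_mu = -1.0 / d2
z = 1.959964                           # upper 2.5% point of N(0, 1)
aci_mu = (mu_hat - z * math.sqrt(var_mu), mu_hat + z * math.sqrt(var_mu))

# delta method for R(t) and h(t) at mission time t (numerical derivatives)
t = 1.0
dR = (il_rf(t, mu_hat + eps) - il_rf(t, mu_hat - eps)) / (2 * eps)
dh = (il_hrf(t, mu_hat + eps) - il_hrf(t, mu_hat - eps)) / (2 * eps)
var_R, var_h = dR**2 * var_mu, dh**2 * var_mu
aci_R = (il_rf(t, mu_hat) - z * math.sqrt(var_R),
         il_rf(t, mu_hat) + z * math.sqrt(var_R))
```

The same pattern gives the ACI of h(t) from var_h; in the paper's setting the derivatives Θ_R and Θ_h are available in closed form, so the numerical differentiation here is purely a convenience.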

Bayesian Inference
This section deals with developing the Bayes estimators, along with their HPD credible intervals, of µ, R(t), and h(t). Following the main features of the gamma density reported by Sharma et al. [2], the IL parameter µ is assumed to have a gamma prior density, G(µ) ∝ µ^{a−1} e^{−bµ}, where the hyperparameters a and b are assumed to be known. Combining the likelihood and the gamma prior density of µ via the continuous Bayes' theorem, the posterior PDF of µ can be written as

Φ(µ|y) = K^{−1} G(µ) L(µ|y),    (11)

where K = ∫₀^∞ G(µ) L(µ|y) dµ. Loss functions in the Bayesian paradigm perform a critical role because they can be used to address overestimation and underestimation of a quantity of interest. We thus take into account one symmetric loss, namely the SE loss function, and one asymmetric loss, namely the GE loss function. Regarding the SE loss, it is well known that the Bayes estimator (say θ̃_S) of θ is the posterior mean, θ̃_S = ∫₀^∞ θ Φ(θ|y) dθ. The GE loss function, on the other hand, takes the form L(θ̃, θ) ∝ (θ̃/θ)^δ − δ log(θ̃/θ) − 1,
where δ is the shape parameter that determines the degree of asymmetry. Under the GE loss, the Bayes estimator (say θ̃_G) of θ is given by θ̃_G = [E_θ(θ^{−δ}|y)]^{−1/δ}, provided that E_θ(θ^{−δ}|y) exists and is finite.
It is clear from (11) that the posterior PDF of µ cannot be expressed explicitly or reduced to any familiar distribution. Therefore, to compute the Bayes estimates of µ, R(t), and h(t), or to create their HPD intervals, we suggest generating MCMC samples from (11) using the Metropolis-Hastings (M-H) sampler; for details, see Gelman et al. [21] and Lynch [22]. It is of interest to mention here that approximations such as Lindley's approximation and the Tierney-Kadane approximation can be used to obtain the Bayes estimates in such situations. The main disadvantages of these methods are that (1) they give only point estimates of the unknown parameters and cannot provide HPD credible intervals, and (2) acquiring the needed estimates requires dealing with very complicated expressions, especially third-order derivatives. To overcome these difficulties, the M-H technique can be performed using the following steps: Step-1: Set the starting point µ^{(0)} = μ̂.
Step-9: Ignore the first D₀ simulated samples as burn-in, to discard the impact of the initial guess, so that the number of retained draws is D − D₀.
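The full M-H cycle sketched in the steps above can be written out compactly. The following Python implementation is illustrative only: the data, hyperparameters, proposal scale, and variable names are our own toy assumptions (the paper's computations use R), with a random-walk normal proposal, which is one standard choice. It also shows how the SE and GE Bayes estimates and the HPD interval are read off the retained draws.

```python
import math
import random

# Hypothetical censored sample (y_i, S_i) and gamma hyperparameters (a, b),
# used purely for illustration
ys = [0.4, 0.7, 0.9, 1.3, 2.1, 3.5]
S = [2, 0, 0, 1, 0, 3]
a, b = 2.5, 5.0

def log_post(mu):
    # log posterior = log gamma prior + log-likelihood (constants dropped)
    if mu <= 0:
        return -math.inf
    lp = (a - 1) * math.log(mu) - b * mu
    for y, s in zip(ys, S):
        lp += 2 * math.log(mu) - math.log(1 + mu) - mu / y
        if s > 0:
            R = 1 - (1 + mu / ((1 + mu) * y)) * math.exp(-mu / y)
            lp += s * math.log(R)
    return lp

rng = random.Random(2024)
mu, draws = 1.0, []                    # Step-1: start from a rough estimate
for _ in range(12000):
    prop = rng.gauss(mu, 0.5)          # random-walk normal proposal
    ratio = log_post(prop) - log_post(mu)
    if ratio >= 0 or rng.random() < math.exp(ratio):
        mu = prop                      # accept the candidate
    draws.append(mu)
draws = draws[2000:]                   # Step-9: discard the burn-in draws

mu_se = sum(draws) / len(draws)        # Bayes estimate under SE loss
delta = 2.0                            # GE loss: [E(mu^-delta)]^(-1/delta)
mu_ge = (sum(d ** (-delta) for d in draws) / len(draws)) ** (-1 / delta)

# 95% HPD credible interval: shortest window covering 95% of the draws
srt = sorted(draws)
w = int(0.95 * len(srt))
lo, hi = min(((srt[j + w] - srt[j], srt[j], srt[j + w])
              for j in range(len(srt) - w)))[1:]
```

Applying each draw to R(t) and h(t) before averaging gives the corresponding MCMC estimates and HPD intervals for the reliability and hazard rate functions.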

Monte Carlo Simulation
In this section, via different Monte Carlo simulations, the performance of the proposed point and interval estimators of the IL parameter µ, reliability function R(t), and hazard rate function h(t) is compared when the adaptive Type-II progressive hybrid strategy is implemented. Without loss of generality, we simulate 1000 T2-APHC samples from IL(0.5) and IL(1.5) based on various choices of T, n, m, and S. It is also best emphasized here that no restriction has been imposed on the maximum number of iterations, and convergence is assumed when the absolute difference between successive estimates is less than 10⁻⁵. Fixing t = 0.25, the corresponding true values of (R(t), h(t)) at µ = 0.5 and 1.5 are (0.6842, 2.6373) and (0.9916, 0.1800), respectively. Using T(= 0.5, 2.5), two different levels of n and m are used, namely n(= 40, 80), with m(= 12, 32) for n = 40 and m(= 24, 64) for n = 80. Here, the proposed values of m are taken as failure percentages (FPs) of each n, namely m/n(= 30, 80)%. Moreover, for each set of (n, m), three progressive patterns S = (S_1, S_2, ..., S_m) are considered, where, for instance, S = (2, 0, 0, 0, 4) is abbreviated as S = (2, 0*3, 4). Once the desired T2-APHC samples are generated, the maximum likelihood and 95% ACI estimates of µ, R(t), and h(t) are obtained via the R 4.1.2 software by installing the 'maxLik' package (by Henningsen and Toomet [24]). Also, to develop the Bayes MCMC and HPD interval estimates of µ, R(t), and h(t), 12,000 MCMC samples are simulated from the M-H algorithm via the 'coda' package (by Plummer et al.
[25]) in the R 4.1.2 software. Taking the first 2000 variates as burn-in and following the prior mean and prior variance criteria, the Bayes inferences are developed based on two informative sets of the hyperparameters a and b, namely prior-1: (a, b) = (2.5, 5) and prior-2: (a, b) = (5, 10) (for µ = 0.5), as well as prior-1: (a, b) = (7.5, 5) and prior-2: (a, b) = (15, 10) (for µ = 1.5). Then, from the remaining 10,000 MCMC samples, the MCMC estimates of µ, R(t), and h(t) are calculated based on the SE and GE (for δ(= −2, +2)) loss functions, and the 95% HPD intervals are also computed.
From the likelihood (or Bayes) approach, the average estimate (AE) of µ, R(t), or h(t) (say ξ) is given by

AE(ξ̂) = (1/B) ∑_{i=1}^{B} ξ̂^{(i)},

where B is the number of generated samples and ξ̂^{(i)} is the calculated estimate of ξ at the ith simulated sample. The comparison between point estimates of ξ is made based on their root mean squared errors (RMSEs) and mean relative absolute biases (MRABs), computed as

RMSE(ξ̂) = [(1/B) ∑_{i=1}^{B} (ξ̂^{(i)} − ξ)²]^{1/2}  and  MRAB(ξ̂) = (1/B) ∑_{i=1}^{B} |ξ̂^{(i)} − ξ|/ξ,

respectively. Further, the comparison between interval estimates of ξ is made using their average confidence lengths (ACLs) and coverage percentages (CPs), computed respectively as

ACL = (1/B) ∑_{i=1}^{B} [U(ξ̂^{(i)}) − L(ξ̂^{(i)})]  and  CP = (1/B) ∑_{i=1}^{B} 1(L(ξ̂^{(i)}) ≤ ξ ≤ U(ξ̂^{(i)})),

where 1(·) is the indicator function and L(·) and U(·) denote the lower and upper bounds, respectively, of the 100(1 − α)% ACI (or HPD credible) interval of ξ. It is worth mentioning that other comparison criteria, such as speed of computation, memory requirements, and precision, can be easily incorporated. Furthermore, a sensitivity analysis is recommended here to gauge the validity of the proposed control tests. Via the R data-graphics facilities, the simulation results for µ, R(t), and h(t) are represented with heatmaps in Figures 2-4, respectively, while the corresponding numerical tables are available as Supplementary Material. Furthermore, for brevity, several notations for the estimation methods are used in Figures 2-4; for example (for prior-1, say P1), the Bayes estimates based on the SE loss are denoted "SE-P1", the Bayes estimates based on the GE loss for δ = −2 and +2 are denoted "GE1-P1" and "GE2-P1", respectively, and the HPD interval is denoted "HPD-P1".
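The four comparison criteria can be coded in a few lines. The Python sketch below uses obviously artificial toy inputs just to fix the definitions; in the actual study, `est` would hold the B = 1000 simulated estimates and `ints` the corresponding interval bounds.

```python
import math

def rmse(est, true):
    # root mean squared error over B simulated estimates
    return math.sqrt(sum((e - true) ** 2 for e in est) / len(est))

def mrab(est, true):
    # mean relative absolute bias
    return sum(abs(e - true) for e in est) / (len(est) * abs(true))

def acl(intervals):
    # average confidence length of (lower, upper) interval estimates
    return sum(u - l for l, u in intervals) / len(intervals)

def cp(intervals, true):
    # coverage percentage: fraction of intervals containing the true value
    return sum(1 for l, u in intervals if l <= true <= u) / len(intervals)

# toy inputs: four estimates of a parameter whose true value is 1.0
est = [1.1, 0.9, 1.2, 0.8]
ints = [(0.5, 1.5), (1.2, 1.4), (0.7, 1.3), (0.6, 1.0)]
print(round(rmse(est, 1.0), 4), cp(ints, 1.0))   # prints: 0.1581 0.75
```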
From Figures 2-4, the following observations can be easily drawn:

• As n (or m) increases, all estimates of µ, R(t), and h(t) perform satisfactorily. A similar result is also observed when the sum of S_i, i = 1, 2, ..., m, decreases.

• As µ increases, the RMSEs, MRABs, and ACLs of µ increase while their CPs decrease, whereas the RMSEs, MRABs, and ACLs of R(t) and h(t) decrease while their CPs increase.

• As T increases, it can be seen that:
(i) For IL(0.5):
- The simulated RMSE/MRAB values of the frequentist estimates of µ, R(t), and h(t) increase, while those associated with the Bayes estimates of µ, R(t), and h(t) decrease.
- The ACLs of the ACI/HPD interval estimates of µ, R(t), and h(t) narrow down while their CPs increase.
(ii) For IL(1.5):
- The simulated RMSE/MRAB values of both the frequentist and Bayes estimates of µ, R(t), and h(t) decrease.
- The ACLs of the ACI/HPD interval estimates of µ decrease (with CPs increasing), while those of R(t) and h(t) increase (with CPs decreasing).

• Comparing the proposed Schemes 1, 2, and 3, it is noted that:
(i) For IL(0.5):
- Under Scheme-3 (also known as right, or conventional Type-II, censoring), all proposed point and interval estimates of µ behave better than the others.
- The same finding is also observed in the estimation results for R(t) and h(t).
(ii) For IL(1.5):
- The proposed point estimates of µ, R(t), and h(t) perform better under Scheme-1 (left censoring) when T = 0.5 and under Scheme-3 (right censoring) when T = 2.5 than the others.
- The proposed interval estimates of µ, R(t), and h(t) behave better under Scheme-3 (right censoring) in most cases compared to the others.

• As a recommendation, we propose utilizing the Bayes M-H algorithm to estimate the IL parameters using T2-APHC data.

Airborne Communication Transceiver

To explain the flexibility of the proposed model, based on the complete ACT data set, the IL distribution is compared to five other common inverted distributions (for y > 0 and α, µ > 0), namely: the inverse exponential (IE(µ)) proposed by Keller et al. [30], inverse Weibull (IW(α, µ)) proposed by Keller et al. [31], inverse gamma (IG(α, µ)) discussed by Glen [32], inverted Nadarajah-Haghighi (INH(α, µ)) proposed by Tahir et al. [33], and alpha power inverted exponential (APIE(α, µ)) proposed by Ceren et al. [34] distributions. To determine which distribution has the best fit, different goodness-of-fit measures are considered, namely: the negative log-likelihood (NL), Akaike (A), Bayesian (B), consistent Akaike (CA), Hannan-Quinn (HQ), and Kolmogorov-Smirnov (KS) statistics, the latter with its p-value. To calculate the proposed criteria, the MLE with its standard error (St.E) of α or µ is calculated and presented in Table 2. It is evident, in terms of the smallest NL, A, B, CA, HQ, and KS values as well as the highest p-value, that the IL lifetime model provides a better fit than the IE, IW, IG, INH, and APIE distributions. For further investigation, the IL distribution is also compared to the Lindley (L) model. It is quite evident from Table 2 that the IL distribution provides the best overall fit compared to the L and other inverse models. Further, quantile-quantile plots of the IL, IE, IW, IG, INH, and APIE distributions are displayed in Figure 5.
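As a sketch of how such goodness-of-fit criteria are computed for the single-parameter IL model, the Python fragment below fits µ to a hypothetical complete sample (not the ACT data) and evaluates the NL, A, B, CA, HQ, and KS statistics; the KS p-value is omitted. For a complete sample, the IL score equation reduces to a quadratic in µ, which gives the MLE directly.

```python
import math

# Hypothetical complete sample used purely for illustration
data = [0.2, 0.3, 0.5, 0.5, 0.7, 0.8, 1.0, 1.1, 1.5, 2.2]
n, k = len(data), 1              # sample size, number of fitted parameters

def il_cdf(y, mu):
    return (1 + mu / ((1 + mu) * y)) * math.exp(-mu / y)

def neg_loglik(mu):
    # negative log-likelihood of a complete IL(mu) sample
    return -sum(2 * math.log(mu) - math.log(1 + mu)
                + math.log((1 + y) / y**3) - mu / y for y in data)

# Complete-sample score equation 2n/mu - n/(1+mu) - S = 0, with S = sum(1/y_i),
# rearranges to the quadratic S*mu^2 + (S - n)*mu - 2n = 0
S = sum(1 / y for y in data)
mu_hat = ((n - S) + math.sqrt((n - S) ** 2 + 8 * n * S)) / (2 * S)

nl = neg_loglik(mu_hat)
aic = 2 * k + 2 * nl                          # Akaike
bic = k * math.log(n) + 2 * nl                # Bayesian
caic = aic + 2 * k * (k + 1) / (n - k - 1)    # consistent Akaike
hqic = 2 * nl + 2 * k * math.log(math.log(n)) # Hannan-Quinn

# Kolmogorov-Smirnov statistic against the fitted IL CDF
xs = sorted(data)
ks = max(max((i + 1) / n - il_cdf(x, mu_hat),
             il_cdf(x, mu_hat) - i / n) for i, x in enumerate(xs))
```

For the two-parameter competitors (IW, IG, INH, APIE), the same criteria apply with k = 2 and a bivariate numerical maximization in place of the closed-form root.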
Furthermore, Figure 6 shows the histograms of the ACT data with the fitted density lines, as well as the fitted and empirical reliability functions of the IL, IE, IW, IG, INH, and APIE distributions. It can be seen from Figures 5 and 6 that the IL distribution can be chosen as an appropriate model for the ACT data when compared to the other distributions. From the complete ACT data, taking m = 20, three T2-APHC samples based on different schemes are generated and reported in Table 3. From Table 3, the MLEs with their St.Es of µ, R(t), and h(t) (at time t = 1) are computed. By running the M-H algorithm 50,000 times and discarding the first 10,000 variates as burn-in, the Bayes estimates with their St.Es under the SE and GE (for δ(= −3, −0.03, +3)) loss functions of µ, R(t), and h(t) are calculated using an improper gamma prior. Since there is no a priori information about µ from the ACT data, the hyperparameters a and b are not available and are set to 0.001 to run the computations. Also, the two bounds of the 95% ACI/HPD interval estimates, with their lengths, for the same parameters are calculated. To apply the proposed MCMC sampler, the maximum likelihood estimate of µ is selected as the initial guess. The point and interval estimates of µ, R(t), and h(t) are provided in Tables 4 and 5, respectively. It is clear from Tables 4 and 5 that the estimates of µ, R(t), and h(t) obtained by the MCMC procedure perform better than the others. Similar performance is also observed in the case of the HPD interval estimates. Some common characteristics of the MCMC iterations of µ, R(t), and h(t) after burn-in, namely the mean, mode, quartiles (Q₁, Q₂, Q₃), standard deviation (St.D), and skewness, are computed and provided in Table 6. To highlight the convergence of the MCMC draws, from sample 1 (as an example), MCMC trace plots of µ, R(t), and h(t) are displayed in Figure 7.
Additionally, using a fitted Gaussian kernel for sample 1, the histograms of the MCMC variates of µ, R(t), and h(t) are also shown in Figure 7. In each trace plot, the sample mean is represented by a solid (-) line and the HPD interval bounds by dashed (---) lines. In each histogram, the sample mean of the corresponding unknown parameter is represented by a vertical dotted (:) line. Figure 7 demonstrates that the MCMC sampler converges quite well and that the burn-in sample is sufficiently large to remove the effect of the starting values. It is also noted from Figure 7 that the generated variates of µ, R(t), and h(t) are positively skewed, negatively skewed, and fairly symmetrical, respectively. The corresponding trace and histogram plots of µ, R(t), and h(t) based on samples 2 and 3 are provided in the Supplementary File.

Wooden Toys
In this application, from the marketing field, both the proposed frequentist and Bayesian estimators of the IL parameters are computed based on the prices of thirty different children's wooden toys for sale at a Suffolk craft shop in April 1991; see Table 7. These data were originally published by The Open University and recently analyzed by Chesneau et al. [35]. In Table 8, the calculated values of the NL, A, B, CA, HQ, and KS (p-value) statistics of the IL and its competitive models are presented. It shows that the IL distribution fits the wooden toys data better than the Lindley model with respect to the KS (p-value) statistic alone. It is also evident that the IL distribution is the best choice for the wooden toys data compared to the other inverse models based on the A, B, CA, and HQ criteria, whereas the IW, IG, INH, and APIE distributions are the next best-fitting models based on the NL and KS (p-value) criteria. Also, using the complete wooden toys data, Figure 8 displays the quantile-quantile plots of the IL, L, IE, IW, IG, INH, and APIE distributions; it supports the same findings reported in Table 8. Further, for each model based on the wooden toys data, the histograms of the data with the fitted densities, as well as the fitted and empirical reliability functions, are shown in Figure 9. It is evident that the IL distribution is the best model compared to its competitors. Now, we obtain the calculated values of the derived point and interval estimators of µ, R(t), and h(t) based on three different T2-APHC samples of size m = 15 drawn from the complete wooden toys data set, which are listed in Table 9. From Table 9, the classical and Bayes MCMC estimates with their St.Es of µ, R(t), and h(t) (at time t = 0.5) are computed and presented in Table 10. Moreover, the two-sided 95% ACI/HPD interval estimates with their lengths for the same unknown quantities are also calculated; see Table 11. Utilizing the improper gamma prior under the SE and GE (for δ(= −5, −0.05, +5)) loss functions,
from 50,000 MCMC draws with 10,000 burn-in, the MCMC estimates with their St.Es of µ, R(t), and h(t) are developed. To run the desired computations, the hyperparameters a and b are again set to 0.001. Moreover, the same characteristics mentioned in Table 6 are also computed for the wooden toys data and reported in Table 12. It is observed from Tables 10 and 11 that the point and interval estimates of µ, R(t), and h(t) derived from the Bayes paradigm perform better than those derived from the likelihood approach in terms of the lowest St.Es, and the HPD interval estimates also perform better than the others in terms of the shortest interval lengths. Using the data set of sample 1 as an example, both the trace and histogram plots of the MCMC variates of µ, R(t), and h(t) are provided in Figure 10. They show that the MCMC mechanism converges well and demonstrate that the MCMC variates of R(t) and h(t) are fairly symmetrical while those associated with µ are positively skewed. The corresponding plots of µ, R(t), and h(t) based on samples 2 and 3 are presented in the Supplementary File. Finally, from both the engineering and marketing examples, we can conclude that the proposed methodologies provide a satisfactory interpretation of the IL lifetime model in the presence of a sample obtained from an adaptive Type-II progressive hybrid censoring mechanism.

Concluding Remarks
This study considers the statistical inference of the unknown parameter and some reliability measures of the inverse Lindley distribution using adaptive Type-II progressively hybrid censored samples. The various parameters are inferred using both conventional and Bayesian methods. We employ numerical techniques to acquire the estimate of the unknown parameter because its estimator cannot be derived in closed form. The asymptotic properties of the maximum likelihood estimators are used to produce the approximate confidence intervals in addition to the point estimates of the unknown parameter, reliability, and hazard rate functions. We study the Bayesian estimation of the various unknown parameters using symmetric and asymmetric loss functions; these estimates cannot be obtained in closed form because of the complexity of the posterior distribution, so the Markov chain Monte Carlo method is applied to get the Bayes point estimates and the highest posterior density credible intervals. Various statistical criteria, including the root mean squared error and interval length, were assessed using Monte Carlo simulations to determine the performance of the proposed methods. The suggested approaches are demonstrated through two examples involving real data sets. According to the numerical outcomes, the Bayes estimates are more precise than the maximum likelihood estimates in terms of minimum root mean squared error, relative absolute bias, and interval length. As the number of observed failures increases, the different estimation methods perform well for the different progressive censoring schemes. As the variance of the prior distribution decreases, the Bayes estimates perform better than those obtained with a high-variance prior. Furthermore, the Bayes estimates using the general entropy loss function, as an asymmetric loss function, are more efficient than the estimates obtained based on the symmetric squared error loss function. Finally, the analysis of the two data sets shows that the inverse Lindley distribution can be used as a suitable model when compared with some other competitive models, including the Lindley, inverse Weibull, and inverted Nadarajah-Haghighi distributions. As future work, it would be of interest to investigate other estimation methods, such as the maximum product of spacings and the expectation-maximization algorithm, for the inverse Lindley distribution under the same censoring scheme.

Figure 5. The quantile-quantile plots of the IL and its competitive models from ACT data.
Figure 6. Histograms with fitted densities and fitted/empirical reliability functions of the IL and its competitive models from ACT data.

Figure 8. The quantile-quantile plots of the IL and its competitive models from wooden toys data.

Figure 10. Trace (top) and histogram (bottom) plots of µ, R(t) and h(t) from wooden toys data.

In the past decade, this data set has received a lot of attention from several authors; for example, see Saroj et al. [27], Sharma et al. [28], Ferreira et al. [29], among others.

Table 1. Times of active repair for ACT.
Table 2. Fitting results of the IL and its competitive models from ACT data.
Table 3. Three generated samples from ACT data.
Table 4. Point estimates (first column) with their St.Es (second column) of µ, R(t) and h(t) from ACT data.
Table 5. Interval estimates of µ, R(t) and h(t) from ACT data.
Table 6. Characteristics for MCMC outputs of µ, R(t) and h(t) from ACT data.
Table 7. Prices of wooden toys for sale at a Suffolk craft shop.
Table 8. Fitting results of the IL and its competitive models from wooden toys data.
Table 9. Three different samples from wooden toys data.
Table 10. Point estimates (first column) with their St.Es (second column) of µ, R(t) and h(t) from wooden toys data.
Table 11. Interval estimates of µ, R(t) and h(t) from wooden toys data.
Table 12. Characteristics for MCMC outputs of µ, R(t) and h(t) from wooden toys data.