Article

Statistical Analysis of a Lifetime Distribution with a Bathtub-Shaped Failure Rate Function under Adaptive Progressive Type-II Censoring

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(5), 670; https://doi.org/10.3390/math8050670
Submission received: 21 February 2020 / Revised: 21 April 2020 / Accepted: 23 April 2020 / Published: 28 April 2020

Abstract

In this paper, the estimation problem of two parameters of a lifetime distribution with a bathtub-shaped failure rate function based on adaptive progressive type-II censored data is discussed. This censoring scheme allows the experiment to be more efficient in the use of time and cost while ensuring the statistical inference efficiency based on the experimental results. Maximum likelihood estimators are proposed and the approximate confidence intervals for two parameters are computed using the asymptotic normality. Lindley approximation and Gibbs sampling are used to obtain Bayes point estimates and the latter is also used to generate Bayes two-sided credible intervals. Finally, the performance of various estimation methods is evaluated through simulation experiments, and the proposed estimation method is illustrated through the analysis of a real data set.

1. Introduction

1.1. Chen ( β , θ ) Distribution

In practical analysis, the failure rate function is key for researchers studying the lifetime behavior of a product. In many industries, products are more likely to exhibit a bathtub-shaped failure rate curve than a constant failure rate. For a bathtub-shaped failure rate function, the failure rate first decreases, then stays roughly constant, and finally increases. This model has wide application prospects. For example, some treatments or drugs may work better in middle-aged men than in other age groups; that is, the failure rate of the treatment is high for children, falls to a roughly constant level that is maintained until a certain age, and then rises again gradually with age. The two-parameter lifetime distribution proposed by Chen [1] (written as the Chen ( β , θ ) distribution and abbreviated as the Chen distribution throughout the paper) is a lifetime distribution with a bathtub-shaped failure rate function. The confidence regions of the two unknown parameters of the Chen ( β , θ ) distribution have closed forms, which is important for practical applications. Chen [1] proposed exact confidence intervals for the two parameters based on type-II censored data. Many researchers have also been interested in this lifetime distribution and have conducted a series of studies. Based on the same censored data, Wang [2] explored some analytical properties of this distribution, added moment estimation, and evaluated the performance of the proposed estimation methods through intensive Monte Carlo simulations. Dey et al. [3] focused on frequentist analyses, and Raqab et al. [4] further studied Bayesian analyses with record data. Under progressive type-II censored data, Wu [5] proposed maximum likelihood estimation and a method to construct exact confidence intervals as well as exact joint confidence regions for the parameters. Rastogi et al. [6] further discussed the estimation of its reliability and hazard functions under the same censoring scheme. Tarvirdizade and Ahmadpour [7] considered the estimation of the stress-strength reliability based on upper record values. Ahmed [8] further used the Markov chain Monte Carlo (MCMC) method for Bayes estimation with progressive type-II censored data. Zhang and Shi [9] discussed acceleration factors based on the tampered failure rate model.
The probability density function (pdf) of Chen ( β , θ ) is given by
f(x; \beta, \theta) = \frac{\beta}{\theta} \, x^{\beta - 1} e^{x^{\beta}} \exp\left( \frac{1 - e^{x^{\beta}}}{\theta} \right), \quad x > 0,
and cumulative distribution function (cdf) is
F(x; \beta, \theta) = 1 - \exp\left( \frac{1 - e^{x^{\beta}}}{\theta} \right), \quad x > 0,
where β > 0 , θ > 0 are the parameters.
Some of its characteristics are illustrated below. f ( x ) is decreasing for 0 < θ ≤ 1, β ≤ 1; f ( x ) is unimodal for β > 1. When θ > 1, f ( x ) is either decreasing, or first decreasing, then increasing and finally decreasing (see [2] for details). Figure 1, Figure 2, Figure 3 and Figure 4 illustrate the curves of f ( x ) under several settings of the two parameter values.
The failure rate function of the Chen ( β , θ ) is
\lambda(x) = \frac{\beta}{\theta} \, x^{\beta - 1} e^{x^{\beta}}.
Taking the derivative, \lambda'(x) = \frac{\beta}{\theta} \, x^{\beta - 2} e^{x^{\beta}} \left[ (\beta - 1) + \beta x^{\beta} \right], from which we can conclude that λ ( x ) is bathtub-shaped when 0 < β < 1 and increasing when β ≥ 1. Figure 5 and Figure 6 show the curves of λ ( x ) under various settings of β and θ. Since the failure rate at the bottom of the bathtub is stable, which corresponds closely to the useful working stage of an actual product, analyzing this kind of lifetime distribution is of great significance for practical applications in industry.
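For illustration, the density (1) and the failure rate (3) under this parameterization can be evaluated with the following minimal R sketch (our own illustration; the function names dchen and hchen are not part of any package):

```r
# Minimal sketch of the Chen(beta, theta) density and failure rate under the
# parameterization used in this paper; dchen() and hchen() are our own names.
dchen <- function(x, beta, theta) {
  (beta / theta) * x^(beta - 1) * exp(x^beta) * exp((1 - exp(x^beta)) / theta)
}
hchen <- function(x, beta, theta) {
  (beta / theta) * x^(beta - 1) * exp(x^beta)
}

x <- seq(0.01, 3, by = 0.01)
plot(x, hchen(x, beta = 0.5, theta = 7), type = "l",
     xlab = "x", ylab = "failure rate")   # bathtub-shaped curve for 0 < beta < 1
```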

1.2. Adaptive Progressive Type-II Censored Scheme

In life tests and reliability tests, experimenters use various censoring schemes to reduce the total test time and the related costs, among which the type-II censoring scheme is one of the most widely used. Under the type-II censoring scheme, only the first m failed units in a random sample of size n ( m < n ) are observed. This method is easy to carry out, but it may waste a lot of testing time. Therefore, progressive type-II censoring, a generalized form of type-II censoring, was proposed to improve the efficiency of the experiment. Progressive type-II censoring can be implemented as follows. Let n units be placed on a life test and denote X_1, X_2, …, X_n as their lifetimes, respectively. It is supposed that X_i, i = 1, 2, …, n, are independent of each other and all follow the pdf f ( x ) and cdf F ( x ). Before the experiment begins, the number of units to be observed, m ( m < n ), as well as the progressive type-II censoring scheme R = ( R_1, R_2, …, R_m ) are determined, where R_i ≥ 0, i = 1, 2, …, m, and Σ_{i=1}^{m} R_i + m = n. When the i-th failure is observed, R_i surviving units are randomly removed from the experiment. The experiment continues according to this rule until m failures have been observed, at which point it ends.
Denote the m completely observed lifetimes by X_{i:m:n}^{R}, i = 1, 2, …, m (simplified to X_{i:m:n} in a specific experiment); these are the observed progressive type-II right censored order statistics. A basic assumption of the design of this experiment is that the censoring scheme R = ( R_1, R_2, …, R_m ) is set before the experiment begins. This model can describe real-life situations in which some units are lost or removed during the experiment, which makes it more reasonable than type-II censoring.
However, this scheme may still be unsatisfactory in practice, since experimenters may want to change the censoring numbers according to the actual situation during the experiment. Thus, adaptive progressive type-II censoring was proposed [10] to improve the flexibility and efficiency of the experiment. In the adaptive progressive type-II censoring scheme, the number m of units to be observed is decided before the experiment and the censoring scheme R = ( R_1, R_2, …, R_m ) is roughly decided, but R_i may be modified according to the needs of the experimenters. The working process is as follows: suppose there is a pre-specified expected finishing time T of the test, though the total testing time is allowed to exceed T. If X_{m:m:n} < T, the experiment is carried out in the same way as progressive type-II censoring. Otherwise, once the experimental time passes T before m failures have been observed, we leave as many surviving units as possible on test, hoping to see more failures in a short time and finish the experiment in the most efficient way. Therefore, suppose X_{J:m:n} is the last failure observed before time T (i.e., X_{J:m:n} < T < X_{J+1:m:n}), where X_{0:m:n} ≡ 0 and X_{m+1:m:n} ≡ ∞. After time T, we do not drop any units until the m-th failure is observed, and then we withdraw all surviving units. That is to say, R_{J+1} = R_{J+2} = ⋯ = R_{m−1} = 0 and R_m = n − m − Σ_{i=1}^{J} R_i. Figure 7 and Figure 8 illustrate progressive type-II censoring and adaptive progressive type-II censoring, respectively.
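To make the sampling mechanism concrete, the following minimal R sketch (our own illustration, not the authors' code and not the transformation algorithm of [10,13]) generates one adaptive progressive type-II censored sample from the Chen ( β , θ ) distribution by directly mimicking the test mechanics described above; rchen(), adaptive_pc2_sample() and all variable names are ours.

```r
# Inverse-cdf sampler for Chen(beta, theta): F(x) = 1 - exp((1 - e^{x^beta}) / theta).
rchen <- function(n, beta, theta) {
  u <- runif(n)
  (log(1 - theta * log(1 - u)))^(1 / beta)
}

# Simulate one adaptive progressive type-II censored sample of size m.
adaptive_pc2_sample <- function(n, m, R, T, beta, theta) {
  stopifnot(length(R) == m, sum(R) + m == n)
  pool <- rchen(n, beta, theta)            # latent lifetimes of all n units on test
  x <- numeric(m)
  for (i in 1:m) {
    j <- which.min(pool)                   # next failure among surviving units
    x[i] <- pool[j]
    pool <- pool[-j]
    if (i < m) {
      # removals follow R_i only for failures observed before T; none afterwards
      r <- if (x[i] <= T) min(R[i], length(pool) - (m - i)) else 0
      if (r > 0) pool <- pool[-sample(length(pool), r)]
    }
  }
  x                                        # the m observed (ordered) failure times
}

set.seed(1)
adaptive_pc2_sample(n = 30, m = 15, R = c(rep(0, 14), 15), T = 1, beta = 0.5, theta = 7)
```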
Based on adaptive progressive type-II censored data, the maximum likelihood estimation and Bayes estimation of the two parameters of the Chen ( β , θ ) distribution are studied. The rest of this paper is organized as follows. Maximum likelihood point estimation is considered and asymptotic confidence intervals based on asymptotic normality are constructed in Section 2. In Section 3, we discuss Bayes estimation and obtain numerical Bayes point estimates using the Lindley approximation and Gibbs sampling; the latter is also used to construct Bayes two-sided credible intervals for the unknown parameters. In Section 4, the R program is used for a simulation study to compare the different estimation methods. In Section 5, a real data set is analyzed to demonstrate the feasibility of the proposed estimation methods. Finally, the paper is summarized in Section 6.

2. Maximum Likelihood Estimation

Suppose that X = ( X_{1:m:n}^{R}, X_{2:m:n}^{R}, …, X_{m:m:n}^{R} ) is an adaptive progressive type-II censored sample of size m from a sample of size n with censoring scheme R = ( R_1, R_2, …, R_m ), taken from a distribution with pdf f ( x ) and cdf F ( x ), and X_{J:m:n} is the last observed failure before T, the prefixed expected testing time. The vector x = ( x_{1:m:n}^{R}, x_{2:m:n}^{R}, …, x_{m:m:n}^{R} ) (simplified to x = ( x_1, x_2, …, x_m ) in later equations) is used to represent the observed values of such an adaptive progressive type-II censored sample. On this basis, the corresponding likelihood function is given by
f(x_{1:m:n}^{R}, x_{2:m:n}^{R}, \ldots, x_{m:m:n}^{R}) = B_J \cdot \prod_{i=1}^{m} f(x_{i:m:n}) \cdot \prod_{i=1}^{J} \left[ 1 - F(x_{i:m:n}) \right]^{R_i} \cdot \left[ 1 - F(x_{m:m:n}) \right]^{C_J},
where B_J = \prod_{i=1}^{m} \left[ n - i + 1 - \sum_{k=1}^{\max(i-1, J)} R_k \right] and C_J = n - m - \sum_{i=1}^{J} R_i.
Therefore, the likelihood function for X 1 : m : n R , X 2 : m : n R , , X m : m : n R taken from the Chen ( β , θ ) is correspondingly written as
L(\beta, \theta \mid x) = B_J \cdot \frac{\beta^{m}}{\theta^{m}} \cdot \prod_{i=1}^{m} x_i^{\beta - 1} e^{x_i^{\beta}} \exp\left( \frac{1 - e^{x_i^{\beta}}}{\theta} \right) \cdot \prod_{i=1}^{J} \left[ \exp\left( \frac{1 - e^{x_i^{\beta}}}{\theta} \right) \right]^{R_i} \cdot \left[ \exp\left( \frac{1 - e^{x_m^{\beta}}}{\theta} \right) \right]^{C_J}.
Further, the log-likelihood function can then be written as
l(\beta, \theta \mid x) = m \ln \beta - m \ln \theta + \sum_{i=1}^{m} \left[ (\beta - 1) \ln x_i + x_i^{\beta} + \frac{1 - e^{x_i^{\beta}}}{\theta} \right] + \sum_{i=1}^{J} R_i \cdot \frac{1 - e^{x_i^{\beta}}}{\theta} + C_J \cdot \frac{1 - e^{x_m^{\beta}}}{\theta}.
Then take the partial derivatives of the log-likelihood function, and obtain the likelihood equations as follows:
\frac{\partial l(\beta, \theta \mid x)}{\partial \beta} = \frac{m}{\beta} + \sum_{i=1}^{m} \ln x_i \left( 1 + x_i^{\beta} - \frac{x_i^{\beta} e^{x_i^{\beta}}}{\theta} \right) - \sum_{i=1}^{J} R_i \ln x_i \cdot \frac{x_i^{\beta} e^{x_i^{\beta}}}{\theta} - C_J \ln x_m \cdot \frac{x_m^{\beta} e^{x_m^{\beta}}}{\theta} = 0,
\frac{\partial l(\beta, \theta \mid x)}{\partial \theta} = -\frac{m}{\theta} - \sum_{i=1}^{m} \frac{1 - e^{x_i^{\beta}}}{\theta^{2}} - \sum_{i=1}^{J} R_i \cdot \frac{1 - e^{x_i^{\beta}}}{\theta^{2}} - C_J \cdot \frac{1 - e^{x_m^{\beta}}}{\theta^{2}} = 0.
From (8), the maximum likelihood estimate of θ can be expressed as a function of β̂:
\hat{\theta} = -\frac{1}{m} \left[ \sum_{i=1}^{m} \left( 1 - e^{x_i^{\hat{\beta}}} \right) + \sum_{i=1}^{J} R_i \left( 1 - e^{x_i^{\hat{\beta}}} \right) + C_J \left( 1 - e^{x_m^{\hat{\beta}}} \right) \right].
Thus, given β̂, the corresponding θ̂ can be obtained from (9). However, closed-form solutions of (7) and (9) are not available, so numerical methods, such as Newton's method, are used to determine the estimates.
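As one concrete possibility (a sketch under our own choices, not the authors' code), the log-likelihood (6) can also be maximized directly with R's general-purpose optimizer optim() instead of hand-coded Newton iterations; here x, R, J and n are assumed to come from an adaptive progressive type-II censored sample such as the one simulated in Section 1.2.

```r
# Log-likelihood (6) for the Chen(beta, theta) model under adaptive censoring.
loglik_chen <- function(par, x, R, J, n) {
  beta <- par[1]; theta <- par[2]
  if (beta <= 0 || theta <= 0) return(-Inf)      # keep the search in the valid region
  m  <- length(x)
  CJ <- n - m - sum(R[seq_len(J)])
  g  <- (1 - exp(x^beta)) / theta                # (1 - e^{x_i^beta}) / theta
  m * log(beta) - m * log(theta) +
    sum((beta - 1) * log(x) + x^beta + g) +
    sum(R[seq_len(J)] * g[seq_len(J)]) + CJ * g[m]
}

# Minimize the negative log-likelihood; the starting values (1, 1) are arbitrary.
fit <- optim(c(1, 1), function(p) -loglik_chen(p, x, R, J, n), hessian = TRUE)
fit$par                                          # (beta_hat, theta_hat)
```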
Further, asymptotic confidence intervals are discussed. When the sample size n is large enough, the Fisher information matrix can be used to obtain approximate confidence intervals for the two unknown parameters. The Fisher information matrix is derived from the likelihood function. Pivotal quantities involving the unknown parameters that are asymptotically normally distributed are then constructed, from which the asymptotic confidence intervals are obtained; their performance is evaluated through Monte Carlo simulations.
The Fisher information matrix I ( β , θ ) is written as
I(\beta, \theta) = -E \begin{bmatrix} \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \beta^{2}} & \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \beta \, \partial \theta} \\ \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \theta \, \partial \beta} & \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \theta^{2}} \end{bmatrix},
and from the log-likelihood function (6) we obtain the second partial derivatives of l(β, θ | x) with respect to β and θ, which are given in Appendix A.
It is difficult to obtain the exact expression of the above expectation. Therefore, the observed Fisher information matrix, in which the expectation is not taken, is used instead. Let (β̂, θ̂) denote the maximum likelihood estimate of (β, θ); the observed Fisher information matrix I(β̂, θ̂) is given as follows:
I(\hat{\beta}, \hat{\theta}) = -\begin{bmatrix} \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \beta^{2}} & \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \beta \, \partial \theta} \\ \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \theta \, \partial \beta} & \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \theta^{2}} \end{bmatrix}_{(\hat{\beta}, \hat{\theta})}.
Correspondingly, an approximated covariance matrix of ( β ^ , θ ^ ) is given as follows:
\begin{bmatrix} \operatorname{var}(\hat{\beta}) & \operatorname{cov}(\hat{\beta}, \hat{\theta}) \\ \operatorname{cov}(\hat{\theta}, \hat{\beta}) & \operatorname{var}(\hat{\theta}) \end{bmatrix} = I^{-1}(\hat{\beta}, \hat{\theta}) = \left\{ -\begin{bmatrix} \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \beta^{2}} & \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \beta \, \partial \theta} \\ \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \theta \, \partial \beta} & \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \theta^{2}} \end{bmatrix} \right\}^{-1}_{(\hat{\beta}, \hat{\theta})}.
By the asymptotic normality of maximum likelihood estimation (MLE), (β̂, θ̂) approximately follows a bivariate normal distribution with mean vector (β, θ) and covariance matrix I^{-1}(β̂, θ̂) given in (12). Thus, (1 − α) × 100% approximate confidence intervals for the two parameters β and θ can be written, respectively, as
\left( \hat{\beta} - z_{\alpha/2} \sqrt{\operatorname{var}(\hat{\beta})}, \; \hat{\beta} + z_{\alpha/2} \sqrt{\operatorname{var}(\hat{\beta})} \right)
\left( \hat{\theta} - z_{\alpha/2} \sqrt{\operatorname{var}(\hat{\theta})}, \; \hat{\theta} + z_{\alpha/2} \sqrt{\operatorname{var}(\hat{\theta})} \right),
where z_{α/2} is the percentile of the standard normal distribution N(0, 1) with right-tail probability α/2.
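Continuing the sketch above, the covariance matrix (12) and the Wald-type intervals (13) and (14) can be approximated from the numerically differentiated Hessian returned by optim() (our own shortcut; the analytical second derivatives are given in Appendix A):

```r
# Inverse observed information from the Hessian of the negative log-likelihood.
vcov <- solve(fit$hessian)
se   <- sqrt(diag(vcov))
est  <- fit$par
cbind(lower = est - qnorm(0.975) * se,
      upper = est + qnorm(0.975) * se)   # rows correspond to beta and theta
```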

3. Bayes Estimation

Different from maximum likelihood estimation, Bayes estimation combines prior information on the parameters with the information contained in the observed sample. Therefore, Bayes estimation considers both the data and the prior probability and can infer the parameters of interest more reasonably. When no prior information is available, non-informative priors are considered.
First, it is supposed here that the unknown parameters β and θ are independent and follow the gamma prior distributions
\pi(\beta) \propto \beta^{a_1 - 1} e^{-a_2 \beta}, \quad a_1, a_2 > 0,
\pi(\theta) \propto \theta^{b_1 - 1} e^{-b_2 \theta}, \quad b_1, b_2 > 0.
Then, the joint prior distribution for β and θ becomes
\pi(\beta, \theta) \propto \beta^{a_1 - 1} \theta^{b_1 - 1} e^{-(a_2 \beta + b_2 \theta)}.
According to Bayes' theorem, we subsequently obtain the joint posterior probability distribution π(β, θ | x) as:
\pi(\beta, \theta \mid x) = \frac{L(x \mid \beta, \theta) \, \pi(\beta, \theta)}{m(x)} = \frac{1}{m(x)} \, \beta^{m + a_1 - 1} \theta^{b_1 - m - 1} e^{-(a_2 \beta + b_2 \theta)} \cdot \prod_{i=1}^{m} V_i \cdot \prod_{i=1}^{J} M_i^{R_i} \cdot M_m^{C_J},
and
m(x) = \int_{0}^{\infty} \int_{0}^{\infty} \beta^{m + a_1 - 1} \theta^{b_1 - m - 1} e^{-(a_2 \beta + b_2 \theta)} \cdot \prod_{i=1}^{m} V_i \cdot \prod_{i=1}^{J} M_i^{R_i} \cdot M_m^{C_J} \, d\beta \, d\theta,
where V_i = x_i^{\beta - 1} e^{x_i^{\beta}} \exp\left( \frac{1 - e^{x_i^{\beta}}}{\theta} \right) and M_i = \exp\left( \frac{1 - e^{x_i^{\beta}}}{\theta} \right).
Thus, the marginal posterior probability distributions of β and θ can be expressed, respectively, as
\pi_{\beta}(\beta \mid x) = \int_{0}^{\infty} \pi(\beta, \theta \mid x) \, d\theta
\pi_{\theta}(\theta \mid x) = \int_{0}^{\infty} \pi(\beta, \theta \mid x) \, d\beta.
Generally, the squared error loss function λ(ξ̃, ξ) of an unknown parameter vector ξ = (ξ_1, ξ_2, …, ξ_n) is as follows:
\lambda(\tilde{\xi}, \xi) = (\tilde{\xi} - \xi) \cdot (\tilde{\xi} - \xi)^{T},
where ξ ˜ = ( ξ ˜ 1 , ξ ˜ 2 , , ξ ˜ n ) represents the estimator of ξ = ( ξ 1 , ξ 2 , , ξ n ) .
Then we consider the Bayes estimators of β and θ under the squared error loss function and their corresponding minimum Bayes risks. It can be shown that these are the posterior expectation and the posterior variance computed from the marginal posteriors (17) and (18), respectively. We can easily obtain the Bayes estimates of the unknown parameters β and θ and the corresponding minimum Bayes risks by using the marginal posterior distribution functions to compute the first and second moments of β and θ. Using μ_β^{(r)} and μ_θ^{(r)} (r = 1, 2, …) to represent the r-th posterior moments of β and θ, we have the equations below:
\mu_{\beta}^{(r)} = \int_{0}^{\infty} \beta^{r} \, \pi_{\beta}(\beta \mid x) \, d\beta
\mu_{\theta}^{(r)} = \int_{0}^{\infty} \theta^{r} \, \pi_{\theta}(\theta \mid x) \, d\theta.
Thus, the Bayes point estimators and the corresponding minimum Bayes risks (MBR) of β and θ  are
\tilde{\beta} = \mu_{\beta}^{(1)}, \qquad MBR(\tilde{\beta}) = \min_{\tilde{\beta}} E\left[ \tilde{\beta} - \beta \right]^{2} = \mu_{\beta}^{(2)} - \left[ \mu_{\beta}^{(1)} \right]^{2}
and
\tilde{\theta} = \mu_{\theta}^{(1)}, \qquad MBR(\tilde{\theta}) = \min_{\tilde{\theta}} E\left[ \tilde{\theta} - \theta \right]^{2} = \mu_{\theta}^{(2)} - \left[ \mu_{\theta}^{(1)} \right]^{2}.
Furthermore, since the marginal posterior probability distributions of β and θ are obtained, the Bayesian two-sided credible intervals can also be derived.
By solving the following integral equations for β_L and β_U, the (1 − α) × 100% Bayesian two-sided credible interval of β, say (β_L, β_U), can be obtained:
\int_{0}^{\beta_L} \pi_{\beta}(\beta \mid x) \, d\beta = \frac{\alpha}{2}
\int_{0}^{\beta_U} \pi_{\beta}(\beta \mid x) \, d\beta = 1 - \frac{\alpha}{2}.
Similarly for θ, we can construct a credible interval, say (θ_L, θ_U), using the integral equations below:
\int_{0}^{\theta_L} \pi_{\theta}(\theta \mid x) \, d\theta = \frac{\alpha}{2}
\int_{0}^{\theta_U} \pi_{\theta}(\theta \mid x) \, d\theta = 1 - \frac{\alpha}{2}.
Obviously, the Bayes estimation formulas above do not have closed forms in general, so we will use approximation techniques and numerical methods to generate the Bayes estimates. The Lindley approximation and Gibbs sampling are illustrated in the following sections.

3.1. Lindley Approximation

Lindley [11] introduced a method to approximate the ratio of integrals in the following forms:
\frac{\int_{0}^{\infty} \omega(\xi) \, e^{l(\xi, x)} \, d\xi}{\int_{0}^{\infty} \upsilon(\xi) \, e^{l(\xi, x)} \, d\xi},
where ξ = ( ξ 1 , ξ 2 , , ξ n ) represents a vector of unknown parameters, l ( ξ , x ) is the log-likelihood function and ω ( ξ ) , υ ( ξ ) are two arbitrary functions of ξ . Here, set ω ( ξ ) = u ( ξ ) π ( ξ ) . u ( ξ )  is a function of ξ = ( ξ 1 , ξ 2 , , ξ n ) whose expectation is of interest and π ( ξ ) is the prior probability distribution of  ξ .
Therefore, the posterior expectation of u ( ξ ) is
E\left[ u(\xi) \mid x \right] = \frac{\int_{0}^{\infty} u(\xi) \, e^{l(\xi) + \rho(\xi)} \, d\xi}{\int_{0}^{\infty} e^{l(\xi) + \rho(\xi)} \, d\xi},
where ρ ( ξ ) = ln ( π ( ξ ) ) .
Then the ratio of integral (27) can have a linear approximation as follows:
E\left[ u(\xi) \mid x \right] \approx \left[ u + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \left( u_{ij} + 2 u_i \rho_j \right) \sigma_{ij} + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{k=1}^{n} \sum_{h=1}^{n} l_{ijk} \, \sigma_{ij} \, \sigma_{kh} \, u_h \right]_{\hat{\xi}},
where u_i = ∂u/∂ξ_i, ρ_i = ∂ρ/∂ξ_i, u_{ij} = ∂²u/(∂ξ_i ∂ξ_j), l_{ijk} = ∂³l/(∂ξ_i ∂ξ_j ∂ξ_k), and σ_{ij} is the (i, j)-th element of the inverse of the Fisher information matrix.
In this case, ξ = (β, θ) and σ_{ij} is the (i, j)-th element of the covariance matrix (12). The Lindley approximations to the Bayes estimators of β and θ (i.e., (20) and (21)) under the squared error loss function are
\tilde{\beta} = \hat{\beta}_{MLE} + \rho_1 \sigma_{11} + \rho_2 \sigma_{12} + \frac{1}{2} l_{111} \sigma_{11}^{2} + \frac{1}{2} l_{221} \left( \sigma_{11} \sigma_{22} + 2 \sigma_{12}^{2} \right) + \frac{1}{2} l_{222} \sigma_{22} \sigma_{21}
\tilde{\theta} = \hat{\theta}_{MLE} + \rho_1 \sigma_{21} + \rho_2 \sigma_{22} + \frac{1}{2} l_{111} \sigma_{11} \sigma_{12} + \frac{3}{2} l_{221} \sigma_{22} \sigma_{12} + \frac{1}{2} l_{222} \sigma_{22}^{2},
where β̂_{MLE} and θ̂_{MLE} are the maximum likelihood estimates of β and θ. From (6), the other expressions in (29) and (30) can be calculated; they are shown in detail in Appendix B.
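As an illustration (our own helper, not the authors' code), once σ_ij, ρ_1, ρ_2 and the third derivatives l_111, l_221, l_222 have been evaluated at the MLE following Appendix A and Appendix B, the approximations (29) and (30) can be assembled as follows:

```r
# Assemble the Lindley-approximation estimates (29)-(30).
# mle = c(beta_hat, theta_hat); rho = c(rho1, rho2);
# l3 = list(l111 = ..., l221 = ..., l222 = ...); Sigma = 2x2 matrix of the sigma_ij.
lindley_est <- function(mle, rho, l3, Sigma) {
  s <- Sigma
  beta_tilde  <- mle[1] + rho[1] * s[1, 1] + rho[2] * s[1, 2] +
    0.5 * l3$l111 * s[1, 1]^2 +
    0.5 * l3$l221 * (s[1, 1] * s[2, 2] + 2 * s[1, 2]^2) +
    0.5 * l3$l222 * s[2, 2] * s[2, 1]
  theta_tilde <- mle[2] + rho[1] * s[2, 1] + rho[2] * s[2, 2] +
    0.5 * l3$l111 * s[1, 1] * s[1, 2] +
    1.5 * l3$l221 * s[2, 2] * s[1, 2] +
    0.5 * l3$l222 * s[2, 2]^2
  c(beta = beta_tilde, theta = theta_tilde)
}
```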

3.2. Gibbs Sampling

In statistics, Gibbs sampling is one of the Markov chain Monte Carlo (MCMC) algorithms for generating a sequence of observations from a multivariate probability distribution of interest when direct sampling is difficult. In this section, Gibbs sampling is used to calculate the expectations (20) and (21) and the two-sided credible intervals (22)–(25). Although it is difficult to generate samples directly from the posterior distribution, combining the Metropolis-Hastings algorithm [12] with Gibbs sampling makes it easy to generate samples, from which two-sided credible intervals can also be obtained. Several Bayesian estimation methods for the two parameters of the Chen distribution have already been discussed in the literature, for example in [6], but the Gibbs (M-H) sampling method has not been applied yet. Therefore, in this section, we conduct Bayes estimation through Gibbs sampling.
We can use the unnormalized posterior density π*(β, θ | x), proportional to π(β, θ | x), as the Gibbs sampling density function, because the estimate of (β, θ) through the Gibbs (M-H) sampling method can be obtained without calculating the normalizing constant:
\pi(\beta, \theta \mid x) \propto \pi^{*}(\beta, \theta \mid x) = \beta^{m + a_1 - 1} \theta^{b_1 - m - 1} e^{-(a_2 \beta + b_2 \theta)} \cdot \prod_{i=1}^{m} x_i^{\beta - 1} e^{x_i^{\beta}} \exp\left( \frac{1 - e^{x_i^{\beta}}}{\theta} \right) \cdot \prod_{i=1}^{J} \left[ \exp\left( \frac{1 - e^{x_i^{\beta}}}{\theta} \right) \right]^{R_i} \cdot \left[ \exp\left( \frac{1 - e^{x_m^{\beta}}}{\theta} \right) \right]^{n - m - \sum_{i=1}^{J} R_i}.
Let g_1(β | θ, x) and g_2(θ | β, x) denote the conditional density functions of the parameters, respectively:
g_1(\beta \mid \theta, x) \propto \beta^{m + a_1 - 1} e^{-a_2 \beta} \prod_{i=1}^{m} x_i^{\beta - 1} e^{x_i^{\beta}} \exp\left( \frac{1 - e^{x_i^{\beta}}}{\theta} \right) \cdot \prod_{i=1}^{J} \left[ \exp\left( \frac{1 - e^{x_i^{\beta}}}{\theta} \right) \right]^{R_i} \cdot \left[ \exp\left( \frac{1 - e^{x_m^{\beta}}}{\theta} \right) \right]^{n - m - \sum_{i=1}^{J} R_i}
g_2(\theta \mid \beta, x) \propto \theta^{b_1 - m - 1} e^{-b_2 \theta} \prod_{i=1}^{m} \exp\left( \frac{1 - e^{x_i^{\beta}}}{\theta} \right) \cdot \prod_{i=1}^{J} \left[ \exp\left( \frac{1 - e^{x_i^{\beta}}}{\theta} \right) \right]^{R_i} \cdot \left[ \exp\left( \frac{1 - e^{x_m^{\beta}}}{\theta} \right) \right]^{n - m - \sum_{i=1}^{J} R_i}.
Then we sample to obtain a sequence of values (β_1, θ_1), (β_2, θ_2), …, (β_τ, θ_τ), and finally obtain the Bayesian point estimates and the 100(1 − α)% two-sided credible intervals from these new samples. The main idea of Gibbs sampling is that, instead of estimating (β, θ) simultaneously, we estimate the parameters one at a time in a sequence, which significantly simplifies the process. Obviously, the conditional density functions of β and θ given in (32) and (33) cannot be transformed analytically into distributions that can be sampled directly using existing algorithms. Therefore, the Metropolis-Hastings algorithm with a normal proposal distribution on the logarithmic scale is employed to generate β and θ. The Gibbs (M-H) sampling procedure is summarized briefly in Algorithm 1.
Thus, a series of Gibbs (M-H) samples of λ = (β, θ), namely (β_1, θ_1), (β_2, θ_2), …, (β_τ, θ_τ), is obtained through the above process. The approximate posterior expectation of λ = (β, θ) can then be calculated as
\tilde{\beta} = E(\beta \mid x) = \frac{1}{\tau - b_t} \sum_{i = b_t + 1}^{\tau} \beta_i
\tilde{\theta} = E(\theta \mid x) = \frac{1}{\tau - b_t} \sum_{i = b_t + 1}^{\tau} \theta_i,
where b_t, a constant determined in advance, represents the burn-in time of the Gibbs sampling process, after which the samples can be assumed to come from the posterior distribution (16).
Under the squared error loss function, the Bayesian point estimate of parameter λ = ( β , θ ) is generated with the procedure mentioned above.
Two-sided credible intervals of the parameters λ = (β, θ) are generated in the following steps. First, sort the sampled sequence (β_{b_t+1}, θ_{b_t+1}), (β_{b_t+2}, θ_{b_t+2}), …, (β_τ, θ_τ). Denote δ = τ − b_t. Then we have two groups of ordered samples:
\beta_{(1)} < \beta_{(2)} < \cdots < \beta_{(\delta)}, \qquad \theta_{(1)} < \theta_{(2)} < \cdots < \theta_{(\delta)}.
Therefore, the 100(1 − α)% Bayesian two-sided credible interval of β can be computed as [β_{(δα/2)}, β_{((1−α/2)δ)}]. Similarly, we can construct the 100(1 − α)% Bayesian two-sided credible interval of θ as [θ_{(δα/2)}, θ_{((1−α/2)δ)}].
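For instance, given the retained draws, the order-statistic bounds above can be computed with a small helper like the following (our own sketch):

```r
# Equal-tail credible bounds from the delta = tau - bt retained draws.
cred_int <- function(draws, alpha = 0.05) {
  s <- sort(draws)
  d <- length(s)
  c(lower = s[ceiling(d * alpha / 2)], upper = s[ceiling(d * (1 - alpha / 2))])
}
```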

4. Simulation and Comparisons

In this section, a simulation study is carried out to verify the feasibility of the proposed estimation methods and to compare their performance.
Using the algorithm offered by [10,13], adaptive progressive type-II censored data from the Chen ( β , θ ) distribution are generated with the R program. Maximum likelihood estimates are computed by solving (7) and (8) in R. The informative hyper-parameters (a_1, a_2, b_1, b_2) in (15) are set to (0, 1, 0, 1), and Bayes estimates under Gibbs sampling are obtained using (34) and (35). Bayes point estimates under the Lindley approximation method are obtained by calculating (29) and (30). The simulation experiment is repeated N = 1000 times, and the mean values are taken as the expected values (EV) of the unknown parameters, as illustrated below:
\mathrm{EV} = \frac{1}{N} \sum_{i=1}^{N} \phi(\hat{\beta}, \hat{\theta})_i.
In order to compare the performance of the estimators for different settings of the parameters, the mean square error (MSE) of the estimated values φ(β̂, θ̂) is also calculated as
\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left[ \phi(\hat{\beta}, \hat{\theta})_i - \phi(\beta, \theta) \right]^{2}.
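In R, these two summaries can be computed directly from an N × 2 matrix est of replicated estimates and the vector truth = c(β, θ) of true values (a sketch with our own variable names):

```r
EV  <- colMeans(est)                      # expected values of beta_hat and theta_hat
MSE <- colMeans(sweep(est, 2, truth)^2)   # mean square errors of beta_hat and theta_hat
```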
The asymptotic 95% confidence intervals based on the observed Fisher information matrix are obtained from (13) and (14). The Bayes two-sided 95% credible intervals using Gibbs sampling are generated with Algorithm 1. The simulation experiment is repeated N = 1000 times, and the corresponding interval coverage probabilities and average interval lengths are calculated using R. We further present the average 95% intervals for the parameter setting (0.5, 7) to discuss the reason for the difference in interval coverage probability and average interval length between the two methods.
Algorithm 1 Gibbs sampling with the Metropolis-Hastings algorithm.
  • Based on the sample x_1, x_2, …, x_m, first initialize the parameters λ_0 = (β_0, θ_0) of λ = (β, θ) and set ν = 1
  • repeat
  •  Set β = β_{ν−1} and θ = θ_{ν−1}
  •  Generate a candidate parameter ε_1 from N(log(β), V(β)), where V(β) is the variance of β.
  •  Set β′ = exp(ε_1)
  •  Calculate p_1 = min(1, [g_1(β′ | θ, x) · β′] / [g_1(β | θ, x) · β])
  •  Accept β_ν = β′ with probability p_1 and set β_ν = β with probability 1 − p_1.
  •  Generate another candidate parameter ε_2 from N(log(θ), V(θ)).
  •  Set θ′ = exp(ε_2)
  •  Calculate p_2 = min(1, [g_2(θ′ | β_ν, x) · θ′] / [g_2(θ | β_ν, x) · θ])
  •  Accept θ_ν = θ′ with probability p_2 and set θ_ν = θ with probability 1 − p_2. Therefore, we obtain λ_ν = (β_ν, θ_ν)
  •  Let ν = ν + 1
  • until ν = τ + 1
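A minimal R sketch of Algorithm 1 (ours, not the authors' code) is given below; log_g1() and log_g2() are the logarithms of (32) and (33) up to constants, Vb and Vt are the proposal variances (e.g., taken from the observed information matrix), and tau and bt are the chain length and burn-in.

```r
gibbs_mh <- function(x, R, J, n, a1, a2, b1, b2, init, Vb, Vt, tau = 11000, bt = 1000) {
  m  <- length(x)
  CJ <- n - m - sum(R[seq_len(J)])
  S  <- function(beta) {                         # sum appearing in (32) and (33)
    g <- 1 - exp(x^beta)
    sum(g) + sum(R[seq_len(J)] * g[seq_len(J)]) + CJ * g[m]
  }
  log_g1 <- function(beta, theta)                # log of (32), up to a constant
    (m + a1 - 1) * log(beta) - a2 * beta +
      sum((beta - 1) * log(x) + x^beta) + S(beta) / theta
  log_g2 <- function(theta, beta)                # log of (33), up to a constant
    (b1 - m - 1) * log(theta) - b2 * theta + S(beta) / theta
  out <- matrix(NA_real_, tau, 2, dimnames = list(NULL, c("beta", "theta")))
  beta <- init[1]; theta <- init[2]
  for (v in 1:tau) {
    bp <- exp(rnorm(1, log(beta), sqrt(Vb)))     # candidate beta' via log-normal proposal
    if (log(runif(1)) < log_g1(bp, theta) + log(bp) - log_g1(beta, theta) - log(beta))
      beta <- bp                                 # accept with probability p1
    tp <- exp(rnorm(1, log(theta), sqrt(Vt)))    # candidate theta'
    if (log(runif(1)) < log_g2(tp, beta) + log(tp) - log_g2(theta, beta) - log(theta))
      theta <- tp                                # accept with probability p2
    out[v, ] <- c(beta, theta)
  }
  out[-(1:bt), ]                                 # drop the burn-in draws
}
```

The Bayes point estimates (34) and (35) then follow from the column means of the returned draws, and the two-sided credible intervals from the sorted draws as described at the end of Section 3.2.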
In our study, two values of the expected total time on test T are considered: 1 and 2. We consider three different values of (n, m), and for each of them three progressive censoring schemes R (shown in Table 1 in detail). For brevity, we simplify the representation of R. For example, the censoring scheme (0,0,0,0,0,0,0,0,0,0,0,0,0,0,15) is denoted by (0*14,15). Under each censoring scheme, four settings of (β, θ) are studied, (1, 1), (3, 1), (0.5, 1) and (0.5, 7), which represent the Chen distribution in different situations.
From the simulation study, the following conclusions are drawn.
(1) Table 2 and Table 3 present the comparison of the maximum likelihood method and the Bayes (Gibbs-MH) method in terms of point estimation. For both maximum likelihood estimation and Bayes estimation, with the parameters to be estimated held fixed, the expected values get closer to the true values and the mean square errors become smaller as the sample size n increases. The simulation results show better performance when the expected total time on test is set to T = 2 under most censoring schemes. Although the results for the other parameter settings are satisfactory, the estimation for (0.5, 7) is poor. When the sample size n and the expected finishing time T are fixed, the first censoring scheme (n − m, 0, …, 0) is more efficient than the others. Bayes estimates are, in most cases, slightly more effective than maximum likelihood estimates, with expected values closer to the truth and smaller mean square errors.
(2) Table 4 and Table 5 present the coverage probabilities and average lengths of the two-sided 95% confidence intervals of the parameters constructed by the maximum likelihood method (using the observed Fisher information matrix) and by the Bayes method (using Gibbs (M-H) sampling). For the same set of parameters, both the asymptotic confidence interval based on the observed Fisher information matrix and the two-sided credible interval constructed with Gibbs sampling become more accurate as the sample size n increases. That is, the coverage probabilities of the 95% intervals of β and θ get closer to 95%, and the average interval lengths get shorter. Meanwhile, the other variables in the simulation experiment, namely the expected finishing time T and the different censoring schemes R, have no significant effect on the interval estimation results. For the asymptotic confidence intervals based on the observed Fisher information matrix, under almost all parameter settings, the intervals of β behave better than those of θ; that is, the coverage of the 95% confidence intervals of β is closer to 95%, while the coverage probability for θ is always low. In this respect, the two-sided credible intervals of the Gibbs sampling method perform comparatively better.
(3) The coverage probabilities and average lengths of the intervals obtained by the two methods are significantly different under the parameter setting (0.5, 7). For the estimation of β, the coverage probability of the asymptotic confidence interval obtained from the Fisher information matrix is slightly higher than that obtained by Gibbs sampling, while the average interval length of the latter is slightly lower. For the estimation of θ, the coverage probability of the asymptotic confidence interval obtained from the Fisher information matrix is noticeably higher than that obtained by Gibbs sampling, but the average interval length of the latter is only half of that of the former. Therefore, the average interval results for (0.5, 7) are shown in Table 6 for further illustration. The upper bounds of the intervals of θ obtained by the two methods are obviously different. For the asymptotic confidence interval obtained from the Fisher information matrix (whose average length is longer), the true value θ = 7 lies in the middle of the interval and is well covered. For the interval obtained by Gibbs sampling, the true value θ = 7 is very close to the upper bound of the credible interval, so the randomness of the simulation experiment may lead to the true value not being covered by the credible interval.
(4) The Lindley approximation, a linear approximation method in Bayesian estimation, does not give ideal results in terms of expected value and mean square error when the lifetime distribution has special properties and the sample structure is complex. Therefore, we only show the Lindley approximation results for (0.5, 7) with T = 1 in Table 7 as an illustration. In addition, the Lindley approximation can only provide point estimates, not the confidence intervals that are more widely used in practice; hence, compared with it, maximum likelihood estimation and the Bayes estimation obtained by Gibbs-MH sampling are better.

5. Real Data Analysis

In this section, we apply the proposed estimation methods to a real data set to verify their feasibility. The data set from Nichols and Padgett [14] is used for illustration. The complete sample consists of the observed fracture stresses of 100 carbon fibers (shown in Table 8).
The data were previously assumed to come from the exponentiated Weibull distribution with probability density function αθ x^{α−1} · exp(−x^{α}) · (1 − exp(−x^{α}))^{θ−1} (EW distribution) in [15]. We used the Kolmogorov-Smirnov (K-S) test and compared the K-S distances (the maximum vertical deviation between the empirical distribution function of the data set and the fitted cumulative distribution function) and the p-values of four multi-parameter distributions, including the Chen distribution and the EW distribution. The results in Table 9 show that the K-S distance of the Chen distribution is smaller than that of the EW distribution, indicating a better fit to the data set, so it is reasonable to assume that the data set comes from the Chen distribution.
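For reference, the K-S comparison for the Chen model in Table 9 can be reproduced along the following lines (a sketch with our own names pchen and fibres; beta_hat and theta_hat stand for estimates fitted to the complete data and are assumptions, not values reported by the authors):

```r
# Fitted Chen cdf, using the parameterization of Section 1.1.
pchen <- function(q, beta, theta) 1 - exp((1 - exp(q^beta)) / theta)

# 'fibres' is assumed to hold the 100 observations of Table 8;
# beta_hat and theta_hat are assumed to be estimates fitted to these data.
ks.test(fibres, pchen, beta = beta_hat, theta = theta_hat)
```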
Then consider the estimation problem under adaptive progressive type-II censored data. We observe m = 60 failures from the complete sample of size n = 100. The censoring schemes are set to R = (20, 0*58, 20) and R = (1*20, 0*20, 1*20), while the two values of the expected finishing time T are 1.6 and 3.2. Table 10, Table 11, Table 12 and Table 13 list our adaptive progressive type-II censored data in detail.
We consider gamma priors for Bayes estimation on both β and θ, where the hyper-parameters are 0 (i.e., a_1 = a_2 = b_1 = b_2 = 0). Using the formulas mentioned in the simulation experiment, we obtain the maximum likelihood point estimates, the confidence intervals based on the observed Fisher information matrix, the Bayes point estimates, and the Bayes credible intervals of the unknown parameters β and θ for the real data set. The estimation results are shown in Table 14. We draw the following conclusions:
(1) With the same estimation method (i.e., MLE, Gibbs-MH, Lindley), the results show little difference under the different censoring schemes R and expected finishing times T.
(2) It can be seen from the estimated values of the parameters (β, θ) that the Chen distribution the data follow has a bathtub-shaped failure rate function. There is no significant difference between the point and interval estimation results obtained by MLE and by Gibbs sampling. The point estimates obtained by the Lindley approximation method differ from those of the former two methods. The reason may be that the setting of (β, θ) makes the failure rate function of the Chen distribution take the shape of a bathtub, which makes estimating its parameters more complicated. This difficulty is also seen in our simulation study: the MSEs of the point estimates for the setting (0.5, 7) are apparently larger, and the coverage probabilities of the corresponding confidence intervals deviate more from the nominal level, than for the other parameter settings.

6. Conclusions

In this paper, the estimation problem of the two unknown parameters β and θ of the Chen distribution is discussed. This distribution has a bathtub-shaped failure rate function, making it suitable for fitting many real-world data sets. The statistical properties of this highly applicable distribution are studied based on adaptive progressive type-II censored data, which makes our results easier to apply in practical industrial fields. This censoring scheme gives experimenters better control over the time and funding of the experiment because it allows the process to be shortened or lengthened.
We discuss the maximum likelihood estimators as well as asymptotic confidence intervals based on the observed Fisher information matrix for the unknown parameters. On the premise that the prior distributions are gamma distributions, the theoretical results for the Bayes point estimates and the Bayes two-sided credible intervals under the squared error loss function are also given. Since closed-form estimates are not available, we use approximation techniques, namely the Lindley approximation and Gibbs sampling, to compute the results.
In the simulation and real data studies, the maximum likelihood estimates and the Bayes estimates under Gibbs sampling show little difference, though the latter perform slightly better. In terms of expected value and mean square error, the point estimates obtained by the Lindley approximation are not as good as those of the former two methods. When the parameters are set so that the Chen distribution has a bathtub-shaped failure rate function, the accuracy of the estimation results decreases. In future work, we may explore optimization algorithms for this remaining problem and propose a more accurate estimation method.
Moreover, we will also study change-point analysis for lifetime distributions with a bathtub-shaped failure rate function, which has a wide range of applications. For example, it can help analyze the performance of products and predict their total life in actual production.

Author Contributions

Investigation, S.C.; Supervision, W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Project 202010004008 of the National Training Program of Innovation and Entrepreneurship for Undergraduates.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Second Partial Derivatives of l(β, θ | x) with Respect to β and θ

\frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \beta^{2}} = -\frac{m}{\beta^{2}} + \sum_{i=1}^{m} \ln^{2} x_i \cdot x_i^{\beta} \left[ 1 - \frac{e^{x_i^{\beta}} (1 + x_i^{\beta})}{\theta} \right] - \sum_{i=1}^{J} R_i \ln^{2} x_i \cdot \frac{x_i^{\beta} e^{x_i^{\beta}} (1 + x_i^{\beta})}{\theta} - C_J \ln^{2} x_m \cdot \frac{x_m^{\beta} e^{x_m^{\beta}} (1 + x_m^{\beta})}{\theta}
\frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \beta \, \partial \theta} = \frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \theta \, \partial \beta} = \sum_{i=1}^{m} \ln x_i \cdot \frac{x_i^{\beta} e^{x_i^{\beta}}}{\theta^{2}} + \sum_{i=1}^{J} R_i \ln x_i \cdot \frac{x_i^{\beta} e^{x_i^{\beta}}}{\theta^{2}} + C_J \ln x_m \cdot \frac{x_m^{\beta} e^{x_m^{\beta}}}{\theta^{2}}
\frac{\partial^{2} l(\beta, \theta \mid x)}{\partial \theta^{2}} = \frac{m}{\theta^{2}} + \sum_{i=1}^{m} \frac{2 \left( 1 - e^{x_i^{\beta}} \right)}{\theta^{3}} + \sum_{i=1}^{J} \frac{2 R_i \left( 1 - e^{x_i^{\beta}} \right)}{\theta^{3}} + \frac{2 C_J \left( 1 - e^{x_m^{\beta}} \right)}{\theta^{3}}

Appendix B. Other Expressions in (29) and (30)

l_{111} = \frac{\partial^{3} l}{\partial \beta^{3}} = \frac{2m}{\beta^{3}} + \sum_{i=1}^{m} \ln^{3} x_i \cdot x_i^{\beta} \left[ 1 - \frac{e^{x_i^{\beta}} \left( (1 + x_i^{\beta})^{2} + x_i^{\beta} \right)}{\theta} \right] - \sum_{i=1}^{J} R_i A_i - C_J A_m
l_{222} = \frac{\partial^{3} l}{\partial \theta^{3}} = -\frac{2m}{\theta^{3}} - \sum_{i=1}^{m} \frac{6 \left( 1 - e^{x_i^{\beta}} \right)}{\theta^{4}} - \sum_{i=1}^{J} \frac{6 R_i \left( 1 - e^{x_i^{\beta}} \right)}{\theta^{4}} - \frac{6 C_J \left( 1 - e^{x_m^{\beta}} \right)}{\theta^{4}}
\rho_1 = \frac{\partial \rho}{\partial \beta} = \frac{a_1 - 1}{\beta} - a_2, \qquad \rho_2 = \frac{\partial \rho}{\partial \theta} = \frac{b_1 - 1}{\theta} - b_2,
where A_i = \frac{1}{\theta} e^{x_i^{\beta}} x_i^{\beta} \ln^{3} x_i \left( x_i^{2\beta} + 3 x_i^{\beta} + 1 \right).

References

  1. Chen, Z. A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Stat. Probab. Lett. 2000, 49, 155–161.
  2. Wang, R.; Sha, N.; Gu, B.; Xu, X. Statistical Analysis of a Weibull Extension with Bathtub-Shaped Failure Rate Function. Adv. Stat. 2014, 2014.
  3. Dey, S.; Kumar, D.; Ramos, P.L.; Louzada, F. Exponentiated Chen distribution: Properties and estimation. Commun. Stat. Simul. Comput. 2017, 46, 8118–8139.
  4. Raqab, M.Z.; Bdair, O.M.; Alaboud, F.M. Inference for the two-parameter bathtub-shaped distribution based on record data. Metrika 2018, 81, 229–253.
  5. Wu, S. Estimation of the two-parameter bathtub-shaped lifetime distribution with progressive censoring. J. Appl. Stat. 2008, 35, 1139–1150.
  6. Rastogi, M.K.; Tripathi, Y.M.; Wu, S. Estimating the parameters of a bathtub-shaped distribution under progressive type-II censoring. J. Appl. Stat. 2012, 39, 2389–2411.
  7. Tarvirdizade, B.; Ahmadpour, M. Estimation of the stress-strength reliability for the two-parameter bathtub-shaped lifetime distribution based on upper record values. Stat. Methodol. 2016, 31, 58–72.
  8. Ahmed, E.K. Bayesian estimation based on progressive Type-II censoring from two-parameter bathtub-shaped lifetime model: An Markov chain Monte Carlo approach. J. Appl. Stat. 2014, 41, 752–768.
  9. Zhang, C.; Shi, Y. Estimation of the extended Weibull parameters and acceleration factors in the step-stress accelerated life tests under an adaptive progressively hybrid censoring data. J. Stat. Comput. Simul. 2016, 86, 3303–3314.
  10. Ng, H.K.T.; Kundu, D.; Chan, P.S. Statistical analysis of exponential lifetimes under an adaptive Type-II progressive censoring scheme. Naval Res. Logist. 2009, 56, 687–698.
  11. Lindley, D.V. Approximate Bayesian methods. Trabajos de Estadistica y de Investigacion Operativa 1980, 31, 223–245.
  12. Hastings, W.K. Monte Carlo Sampling Methods Using Markov Chains and Their Applications. Biometrika 1970, 57, 97–109.
  13. Balakrishnan, N.; Sandhu, R.A. A Simple Simulational Algorithm for Generating Progressive Type-II Censored Samples. Am. Stat. 1995, 49, 229–230.
  14. Nichols, M.D.; Padgett, W.J. A Bootstrap Control Chart for Weibull Percentiles. Qual. Reliab. Eng. Int. 2006, 22, 141–151.
  15. Sobhi, M.M.A.; Soliman, A.A. Estimation for the exponentiated Weibull model with adaptive Type-II progressive censored schemes. Appl. Math. Model. 2016, 40, 1180–1192.
  16. Elgohary, A.; Alshamrani, A.M.; Alotaibi, A.N. The generalized Gompertz distribution. Appl. Math. Model. 2013, 37, 13–24.
  17. Sharma, V.K.; Singh, S.K.; Singh, U.; Merovci, F. The Generalized Inverse Lindley Distribution: A New Inverse Statistical Model for the Study of Upside-down Bathtub Data. Commun. Stat. Theory Methods 2016, 45, 5709–5729.
Figure 1. The probability density function (pdf) when β = 1, 0 < θ ≤ 1.
Figure 2. The pdf when β < 1, 0 < θ ≤ 1.
Figure 3. The pdf when β > 1, 0 < θ ≤ 1.
Figure 4. The pdf when β ≤ 1, θ > 1.
Figure 5. λ(x) when θ = 7, 0 < β < 1.
Figure 6. λ(x) when θ = 1, β ≥ 1.
Figure 7. Progressive type-II censoring scheme.
Figure 8. Adaptive progressive type-II censoring scheme.
Table 1. Censoring schemes illustration.

Censoring Scheme | n | m | R
1 | 30 | 15 | (0*14, 15)
2 | 30 | 15 | ((2,0,1)*5)
3 | 30 | 15 | (1*15)
4 | 50 | 25 | (25, 0*24)
5 | 50 | 25 | (1*25)
6 | 50 | 25 | ((1,1,2)*5, 0*9, 5)
7 | 60 | 30 | (30, 0*29)
8 | 60 | 30 | ((2,1,0)*10)
9 | 60 | 30 | (1*30)
Table 2. Expected values (EVs) and mean square errors (MSEs) of point estimators when T = 1.

(n, m) | R | TV (β, θ) | β: MLE EV, MLE MSE, Bayes-Gibbs EV, Bayes-Gibbs MSE | θ: MLE EV, MLE MSE, Bayes-Gibbs EV, Bayes-Gibbs MSE
(30,15)(0*14,15)(1,1)1.15880.16461.14580.12420.90880.13510.94220.1171
(3,1)3.44940.34682.91720.50450.91850.14081.06450.0980
(0.5,1)0.54860.02670.59800.03880.91260.13670.90420.1059
(0.5,7)0.56660.02390.42780.01317.94229.90194.80275.4897
(1*15)(1,1)1.14200.08541.08300.06960.88720.09760.95630.0775
(3,1)3.14460.08882.91530.41170.89090.10771.06950.0836
(0.5,1)0.58220.03630.56690.02900.94470.11090.96850.1036
(0.5,7)0.60730.02900.42660.01006.97849.71214.62276.3123
((2,0,1)*5)(1,1)1.17450.13261.07460.07020.93610.11690.97530.0920
(3,1)3.34130.82842.93910.38490.92850.09031.02550.0767
(0.5,1)0.51300.02020.56040.02381.08920.08770.95420.0951
(0.5,7)0.58900.04910.43050.00997.06839.69004.63936.1982
(50,25)(25,0*24)(1,1)1.16630.06631.03100.03200.89980.04800.98730.0387
(3,1)3.51930.63972.93270.23180.86440.04691.00240.0376
(0.5,1)0.58270.01580.52600.00750.87070.04800.99500.0475
(0.5,7)0.62390.01890.44820.00418.06849.76374.89145.2365
(1*25)(1,1)1.07820.05301.04320.04270.94900.06101.00050.0619
(3,1)3.21470.43972.93200.25210.97410.06241.02750.0566
(0.5,1)0.53440.01250.53540.01340.95990.06230.98130.0685
(0.5,7)0.55360.03400.44890.00627.06816.88845.28233.7246
((1,1,2)*5,0*9,5)(1,1)1.04390.04241.02850.03890.97960.06771.01200.0626
(3,1)3.16720.43232.90060.25550.97540.06731.03530.0685
(0.5,1)0.52660.01060.52300.01100.97780.06830.98190.0660
(0.5,7)0.60650.01860.44800.00607.19775.83025.18434.0368
(60,30)(30,0*29)(1,1)1.17800.06671.01830.02420.85270.04741.00110.0339
(3,1)3.54510.62532.94550.17400.85980.04731.00620.0313
(0.5,1)0.58800.01600.51980.00700.84320.04840.99520.0325
(0.5,7)0.60770.01490.45110.00387.59028.30665.03144.6874
((2,1,0)*10)(1,1)1.06560.03161.04980.03670.96090.04810.96920.0464
(3,1)3.18310.29972.94550.23440.95080.04351.02970.0440
(0.5,1)0.53250.00860.52570.00930.96050.03910.99930.0476
(0.5,7)0.59250.01620.45380.00527.27884.61825.40113.2807
(1*30)(1,1)1.06480.03651.03150.03400.94920.04200.99990.0481
(3,1)3.07550.27802.90270.25221.00790.04211.05060.0536
(0.5,1)0.51270.00810.52440.01031.00650.04260.99730.0541
(0.5,7)0.55930.01040.45480.00507.38084.63035.47903.0532
Table 3. EVs and MSEs of point estimators when T = 2.

(n, m) | R | TV (β, θ) | β: MLE EV, MLE MSE, Bayes-Gibbs EV, Bayes-Gibbs MSE | θ: MLE EV, MLE MSE, Bayes-Gibbs EV, Bayes-Gibbs MSE
(30,15)(0*14,15)(1,1)1.17040.15781.11090.09250.89550.11260.95440.0926
(3,1)3.37850.03472.91650.49180.95160.11211.07050.0893
(0.5,1)0.55710.02750.58210.03120.96060.12720.93510.0950
(0.5,7)0.54510.02050.41530.01717.72148.64134.79025.4884
(1*15)(1,1)1.14130.11161.10440.08370.92460.08880.95230.0818
(3,1)3.38960.84262.98970.36350.93310.09811.02940.0814
(0.5,1)0.55040.02540.57630.02660.99240.09620.93340.0858
(0.5,7)0.53530.01600.42900.00977.08039.36964.65856.2147
((2,0,1)*5)(1,1)1.09440.10501.08190.08090.95730.08810.97350.0777
(3,1)3.39450.89792.93810.36530.95090.09951.04100.0731
(0.5,1)0.58350.03770.56400.02470.86980.10970.95570.0884
(0.5,7)0.58410.02440.42920.00927.89879.16944.57566.5851
(50,25)(25,0*24)(1,1)1.05770.03291.02280.02600.97010.04240.99260.0370
(3,1)3.15560.29182.96880.18830.98670.04721.02270.0432
(0.5,1)0.53260.00900.52490.00810.99060.04450.99920.0369
(0.5,7)0.62330.01920.44630.00448.438810.97874.82345.5533
(1*25)(1,1)1.07920.04621.04920.04290.95440.05910.98130.0558
(3,1)3.22810.44582.95470.23620.95670.06151.01550.0413
(0.5,1)0.53410.01150.54380.01150.96110.05900.97480.0536
(0.5,7)0.61010.02060.45030.00597.47488.19935.18854.0047
((1,1,2)*5,0*9,5)(1,1)1.05990.04131.04440.03990.96680.05380.98730.0546
(3,1)3.19430.38262.96140.27180.95000.05501.00060.0421
(0.5,1)0.53330.01130.53650.01260.96000.05850.97200.0507
(0.5,7)0.64490.02930.45370.00507.77338.58555.24463.8647
(60,30)(30,0*29)(1,1)1.04310.02591.03060.02280.99080.03791.00730.0357
(3,1)3.11220.21082.90040.16180.98010.03531.00160.0290
(0.5,1)0.52530.00690.51540.00560.97570.03620.99810.0319
(0.5,7)0.63780.02350.45090.00378.33009.96105.07934.5023
((2,1,0)*10)(1,1)1.06810.03931.05210.03500.95610.04950.96470.0395
(3,1)3.19120.34302.94380.21560.96070.04791.03350.0424
(0.5,1)0.52920.00860.52660.00930.96160.04620.97770.0459
(0.5,7)0.57930.01280.45400.00447.64316.20745.43583.2938
(1*30)(1,1)1.05630.03551.03270.03060.96140.05090.98110.0386
(3,1)3.20190.32972.97340.23780.94970.04621.00630.0414
(0.5,1)0.53120.00960.53060.00880.96230.04820.96850.0446
(0.5,7)0.55870.00970.45580.00457.83375.77425.46023.0883
Table 4. Coverage probability (CP) and average length (AL) of 95% intervals when T = 1.

(n, m) | R | TV (β, θ) | β: MLE CP, MLE AL, Bayes-Gibbs CP, Bayes-Gibbs AL | θ: MLE CP, MLE AL, Bayes-Gibbs CP, Bayes-Gibbs AL
(30,15)(0*14,15)(1,1)0.9671.25060.9401.14550.8691.34310.9081.2688
(3,1)0.9553.72310.9502.95990.8541.35280.9661.3821
(0.5,1)0.9580.62500.9140.59280.8651.34320.8921.2500
(0.5,7)0.9300.46860.8460.34420.94312.40950.6394.5076
(1*15)(1,1)0.9411.02310.9560.94030.8661.15130.9281.1075
(3,1)0.9403.08520.9342.56870.8651.14440.9541.2180
(0.5,1)0.9510.51010.9120.48770.8741.14770.8921.1113
(0.5,7)0.9360.41670.8060.30880.93113.13190.5214.3932
((2,0,1)*5)(1,1)0.9441.00680.9600.92290.8741.14070.9201.1190
(3,1)0.9563.00400.9462.58260.8641.12820.9441.1780
(0.5,1)0.9450.50790.9300.47550.8701.13690.8981.0928
(0.5,7)0.9330.41070.8360.30740.93113.35900.5514.4228
(50,25)(25,0*24)(1,1)0.9500.65030.9400.61180.9140.78110.9380.7587
(3,1)0.9481.96150.9321.73760.9020.78120.9500.7735
(0.5,1)0.9590.32460.9560.30850.9130.78330.9400.7626
(0.5,7)0.9220.20520.7900.17550.93210.36640.5914.4110
(1*25)(1,1)0.9390.74340.9320.70350.8990.91790.9180.8963
(3,1)0.9402.24940.9402.00160.8910.90920.9320.9346
(0.5,1)0.9430.37330.9200.35950.9020.90410.9080.8834
(0.5,7)0.9510.31150.8440.25080.9528.80540.6974.2801
((1,1,2)*5,0*9,5)(1,1)0.9320.73800.9200.68680.8900.90470.9020.8877
(3,1)0.9362.22450.9321.96550.8980.89970.9140.9172
(0.5,1)0.9500.37070.9080.34970.9010.89550.9060.8732
(0.5,7)0.9570.29580.8360.23970.9468.63630.6514.2269
(60,30)(30,0*29)(1,1)0.9600.58810.9420.55320.9340.71970.9400.7029
(3,1)0.9481.76750.9321.60040.9010.70500.9400.7142
(0.5,1)0.9510.29780.9320.28210.9260.71660.9500.7051
(0.5,7)0.9230.18720.7540.16180.9239.06450.6074.2988
((2,1,0)*10)(1,1)0.9360.65950.9320.63470.9030.82490.9280.7989
(3,1)0.9391.96900.9141.79910.9140.82250.9500.8315
(0.5,1)0.9480.32680.9120.31610.9170.82180.9400.8108
(0.5,7)0.9370.28040.8500.23290.9497.88490.7154.1187
(1*30)(1,1)0.9310.67250.9340.63440.8990.83190.9280.8183
(3,1)0.9282.02020.9161.80210.9040.83760.9320.8618
(0.5,1)0.9440.33790.9040.32070.9050.83580.9180.8230
(0.5,7)0.9430.28310.8820.23340.9327.87230.7504.2056
Table 5. CP and AL of 95% intervals when T = 2.

(n, m) | R | TV (β, θ) | β: MLE CP, MLE AL, Bayes-Gibbs CP, Bayes-Gibbs AL | θ: MLE CP, MLE AL, Bayes-Gibbs CP, Bayes-Gibbs AL
(30,15)(0*14,15)(1,1)0.9481.24520.9501.11110.8611.34730.9541.2867
(3,1)0.9653.75340.9602.94300.8701.34840.9681.3766
(0.5,1)0.9510.62500.9400.58390.8671.35380.9261.2599
(0.5,7)0.8710.45150.7820.33460.94012.48800.5914.4395
(1*15)(1,1)0.9551.03030.9400.96540.8811.15190.9401.1146
(3,1)0.9583.07310.9662.64660.8681.14800.9581.1982
(0.5,1)0.9430.50940.9320.49960.8641.14670.9241.0985
(0.5,7)0.9430.38080.8080.29360.93112.51460.5354.4465
((2,0,1)*5)(1,1)0.9481.02000.9220.92570.8851.13210.9381.1114
(3,1)0.9503.06540.9602.56260.8651.12810.9661.1917
(0.5,1)0.9590.49890.9360.47750.8841.14340.9101.0925
(0.5,7)0.9340.37880.8200.29140.92512.70130.5214.3879
(50,25)(25,0*24)(1,1)0.9470.65680.9500.60730.9010.77550.9380.7684
(3,1)0.9551.96020.9501.76830.9090.77820.9380.7950
(0.5,1)0.9640.32540.9460.31080.9130.78490.9460.7696
(0.5,7)0.9260.20660.7370.17450.94010.39830.5534.3295
(1*25)(1,1)0.9500.75760.9320.71380.8890.90200.9160.8902
(3,1)0.9612.27100.9602.03410.9080.90350.9720.9196
(0.5,1)0.9460.37510.9480.36600.9050.91030.9220.8871
(0.5,7)0.9380.28850.8480.23920.9428.79400.6634.1848
((1,1,2)*5,0*9,5)(1,1)0.9440.75160.9460.70380.9000.89110.9200.8824
(3,1)0.9482.25020.9462.01410.9030.88780.9680.9105
(0.5,1)0.9570.37310.9220.36070.9130.89450.9220.8643
(0.5,7)0.9320.26730.8520.22460.9438.87810.6754.2684
(60,30)(30,0*29)(1,1)0.9620.58710.9420.55800.9260.71290.9320.7155
(3,1)0.9441.78230.9321.59620.9310.71750.9520.7067
(0.5,1)0.9500.29350.9560.27870.9350.71660.9340.7026
(0.5,7)0.9230.18760.7820.16250.9349.09770.6514.3302
((2,1,0)*10)(1,1)0.9540.66340.9520.63590.9120.80950.9440.7957
(3,1)0.9691.98070.9381.80860.9170.81960.9520.8417
(0.5,1)0.9590.33060.9200.31370.9230.81880.9160.7979
(0.5,7)0.9380.25810.8660.21760.9417.65350.6834.1591
(1*30)(1,1)0.9520.67640.9400.64110.9160.83440.9420.8207
(3,1)0.9512.04720.9421.86270.9010.81980.9520.8386
(0.5,1)0.9460.34030.9480.32610.9070.82550.930.8068
(0.5,7)0.9470.26040.8460.21900.9437.62040.7294.1702
Table 6. The comparison of the average 95% intervals of maximum likelihood estimation (MLE) and the Bayes-Gibbs method for (0.5, 7).

T = 1
(n, m) | R | β: MLE | β: Bayes-Gibbs | θ: MLE | θ: Bayes-Gibbs
(30,15) | (0*14,15) | (0.3279, 0.8076) | (0.2636, 0.6077) | (2.0084, 14.8563) | (2.9413, 7.4489)
 | (1*15) | (0.3452, 0.7612) | (0.2749, 0.5837) | (1.7927, 14.7752) | (2.8076, 7.2008)
 | ((2,0,1)*5) | (0.3436, 0.7573) | (0.2798, 0.5873) | (1.7914, 14.1889) | (2.8236, 7.2463)
(50,25) | (25,0*24) | (0.4130, 0.6200) | (0.3574, 0.5328) | (2.6028, 13.0792) | (3.0463, 7.4573)
 | (1*25) | (0.3749, 0.6852) | (0.3249, 0.5757) | (3.2555, 11.9749) | (3.4622, 7.7422)
 | ((1,1,2)*5,0*9,5) | (0.3795, 0.6767) | (0.3288, 0.5686) | (3.2409, 11.9268) | (3.3934, 7.6203)
(60,30) | (30,0*29) | (0.4175, 0.6046) | (0.3675, 0.5293) | (2.9992, 11.9844) | (3.2093, 7.5081)
 | ((2,1,0)*10) | (0.3838, 0.6633) | (0.3378, 0.5707) | (3.5905, 11.3432) | (3.6366, 7.7554)
 | (1*30) | (0.3833, 0.6647) | (0.3395, 0.5729) | (3.6265, 11.3474) | (3.6842, 7.8899)

T = 2
(n, m) | R | β: MLE | β: Bayes-Gibbs | θ: MLE | θ: Bayes-Gibbs
(30,15) | (0*14,15) | (0.3137, 0.7713) | (0.2555, 0.5901) | (2.0009, 14.7519) | (2.9450, 7.3845)
 | (1*15) | (0.3580, 0.7445) | (0.2829, 0.5765) | (1.8373, 14.6433) | (2.8220, 7.2686)
 | ((2,0,1)*5) | (0.3562, 0.7375) | (0.2844, 0.5758) | (1.8093, 14.6619) | (2.7722, 7.1601)
(50,25) | (25,0*24) | (0.4094, 0.6152) | (0.3557, 0.5302) | (2.5739, 12.6324) | (3.0160, 7.345)
 | (1*25) | (0.3834, 0.6713) | (0.3305, 0.5697) | (3.2967, 12.2808) | (3.4263, 7.6112)
 | ((1,1,2)*5,0*9,5) | (0.3832, 0.6716) | (0.3400, 0.5647) | (3.2584, 11.8990) | (3.4365, 7.7048)
(60,30) | (30,0*29) | (0.4134, 0.5986) | (0.3667, 0.5293) | (3.0008, 12.0372) | (3.2468, 7.5770)
 | ((2,1,0)*10) | (0.3902, 0.6492) | (0.3453, 0.562) | (3.6022, 11.3464) | (3.6506, 7.8097)
 | (1*30) | (0.3935, 0.6552) | (0.3462, 0.5652) | (3.6669, 11.5756) | (3.6763, 7.8465)
Table 7. EVs and MSEs of Lindley approximation for (0.5, 7) when T = 1.

(n, m) | R | β EV | β MSE | θ EV | θ MSE
(30,15) | (0*14,15) | 0.4308 | 0.1572 | 8.3722 | 12.4001
 | (1*15) | 0.9971 | 0.7606 | 8.1035 | 13.9985
 | ((2,0,1)*5) | 2.4743 | 0.3274 | 9.0966 | 29.7030
(50,25) | (25,0*24) | 0.4959 | 0.0709 | 7.3777 | 6.4728
 | (1*25) | 0.4274 | 0.0741 | 8.3830 | 9.8129
 | ((1,1,2)*5,0*9,5) | 0.4136 | 0.0576 | 7.6626 | 8.2964
(60,30) | (30,0*29) | 1.1676 | 0.3022 | 7.4650 | 6.4905
 | ((2,1,0)*10) | 0.3981 | 0.0633 | 7.5594 | 5.5368
 | (1*30) | 0.4307 | 0.0276 | 8.1465 | 8.8301
Table 8. Real data set of complete 100 observations on fracture stress of carbon fibres.

0.39 0.81 0.85 0.98 1.08 1.12 1.17 1.18 1.22 1.25
1.36 1.41 1.47 1.57 1.57 1.59 1.59 1.61 1.61 1.69
1.69 1.71 1.73 1.8 1.84 1.84 1.87 1.89 1.92 2.00
2.03 2.03 2.05 2.12 2.17 2.17 2.17 2.35 2.38 2.41
2.43 2.48 2.48 2.5 2.53 2.55 2.55 2.56 2.59 2.67
2.73 2.74 2.76 2.77 2.79 2.81 2.81 2.82 2.83 2.85
2.87 2.88 2.93 2.95 2.96 2.97 2.97 3.09 3.11 3.11
3.15 3.15 3.19 3.19 3.22 3.22 3.27 3.28 3.31 3.31
3.33 3.39 3.39 3.51 3.56 3.6 3.65 3.68 3.68 3.68
3.7 3.75 4.2 4.38 4.42 4.7 4.9 4.91 5.08 5.56
Table 9. The results of the Kolmogorov-Smirnov (K-S) test for alternative statistical models.

Statistical Model | Probability Density Function | K-S Distance | p-Value
Generalized Gompertz [16] | θβ e^{λx} e^{-(β/λ)(e^{λx} - 1)} [1 - e^{-(β/λ)(e^{λx} - 1)}]^{θ-1} | 0.059699 | 0.8511
Chen | (β/θ) x^{β-1} e^{x^β} exp((1 - e^{x^β})/θ) | 0.092669 | 0.3419
EW [15] | αθ x^{α-1} exp(-x^α) (1 - exp(-x^α))^{θ-1} | 0.11321 | 0.1463
Inverse Power Lindley [17] | (αθ²/(1+θ)) ((1 + x^α)/x^{2α+1}) exp(-θ/x^α) | 0.18223 | 0.002385
Table 10. Censored data with R = (20, 0*58, 20), T = 1.6, J = 11.

0.39 0.85 0.98 1.12 1.17 1.22 1.25 1.41 1.57 1.59
1.59 1.61 1.61 1.69 1.71 1.73 1.8 1.84 1.84 1.87
1.89 1.92 2.00 2.03 2.03 2.05 2.12 2.17 2.17 2.35
2.38 2.41 2.43 2.48 2.5 2.53 2.55 2.55 2.59 2.67
2.73 2.76 2.77 2.79 2.81 2.81 2.83 2.85 2.87 2.88
2.93 2.96 2.97 3.09 3.11 3.11 3.15 3.15 3.19 3.22
Table 11. Censored data with R = (20, 0*58, 20), T = 3.2, J = 60.

0.39 0.81 0.85 0.98 1.12 1.18 1.22 1.25 1.36 1.41
1.57 1.57 1.59 1.61 1.61 1.69 1.71 1.73 1.8 1.84
1.84 1.87 1.92 2.00 2.03 2.03 2.12 2.17 2.17 2.17
2.35 2.41 2.43 2.48 2.5 2.53 2.55 2.55 2.56 2.67
2.76 2.77 2.79 2.81 2.81 2.82 2.83 2.85 2.87 2.93
2.95 2.96 2.97 3.09 3.11 3.11 3.15 3.15 3.19 3.19
Table 12. Censored data with R = (1*20, 0*20, 1*20), T = 1.6, J = 14.

0.39 0.81 0.85 1.08 1.12 1.17 1.18 1.22 1.36 1.41
1.47 1.57 1.59 1.59 1.61 1.61 1.69 1.69 1.71 1.73
1.84 1.84 1.87 2.00 2.03 2.03 2.05 2.12 2.17 2.17
2.17 2.35 2.38 2.43 2.48 2.5 2.53 2.55 2.55 2.56
2.59 2.67 2.73 2.74 2.76 2.77 2.79 2.82 2.83 2.85
2.87 2.88 2.93 2.95 2.97 2.97 3.09 3.11 3.11 3.15
Table 13. Censored data with R = (1*20, 0*20, 1*20), T = 3.2, J = 55.

0.39 0.81 0.85 1.08 1.12 1.18 1.22 1.25 1.36 1.41
1.47 1.57 1.59 1.61 1.61 1.69 1.69 1.73 1.8 1.84
1.84 1.89 1.92 2.00 2.03 2.03 2.05 2.12 2.17 2.17
2.17 2.41 2.43 2.48 2.48 2.5 2.53 2.55 2.56 2.59
2.67 2.74 2.79 2.81 2.81 2.82 2.83 2.85 2.87 2.88
2.93 2.95 2.96 2.97 3.11 3.27 3.51 3.56 3.6 3.65
Table 14. Point estimates and 95% intervals of the real data set under different methods.

R | T | Method | β | θ
(20,0*58,20) | 1.6 | MLE | 1.1171 | 27.8600
 | | | (0.9821, 1.2521) | (12.0721, 43.6479)
 | | Bayes-Gibbs | 1.1076 | 28.4749
 | | | (0.9687, 1.2378) | (15.9841, 48.5732)
 | | Bayes-Lindley | 0.7479 | 32.1414
 | 3.2 | MLE | 1.1071 | 26.1906
 | | | (0.9703, 1.2440) | (11.4456, 40.9356)
 | | Bayes-Gibbs | 1.1015 | 27.3697
 | | | (0.9661, 1.2399) | (15.2519, 48.4656)
 | | Bayes-Lindley | 0.7878 | 30.8984
(1*20,0*20,1*20) | 1.6 | MLE | 1.0986 | 27.5346
 | | | (0.9613, 1.2359) | (12.4232, 42.6460)
 | | Bayes-Gibbs | 1.0912 | 28.2034
 | | | (0.9507, 1.2183) | (16.1957, 45.6042)
 | | Bayes-Lindley | 0.8379 | 31.1335
 | 3.2 | MLE | 1.0854 | 25.8694
 | | | (0.9660, 1.2049) | (12.6896, 39.0493)
 | | Bayes-Gibbs | 1.080881 | 26.79879
 | | | (0.9622, 1.1992) | (15.8904, 45.6340)
 | | Bayes-Lindley | 0.6491 | 31.31
