Bayesian and Classical Estimation of Stress-Strength Reliability for Inverse Weibull Lifetime Models

In this paper, we consider the problem of estimating the stress-strength reliability for inverse Weibull lifetime models having the same shape parameter but different scale parameters. We obtain the maximum likelihood estimator and its asymptotic distribution. Since the classical estimators do not have explicit forms, we propose an approximate maximum likelihood estimator. The asymptotic confidence interval and two bootstrap intervals are obtained. Using the Gibbs sampling technique, the Bayesian estimator and the corresponding credible interval are obtained, with the Metropolis-Hastings algorithm used to generate the required random variates. Monte Carlo simulations are conducted to compare the proposed methods, and the analysis of a real dataset is performed.


Introduction
The inference of stress-strength reliability is an important topic in statistics, with many applications in practical areas. In stress-strength modeling, R = P(Y < X) is a measure of component reliability, where X denotes the strength of the component and Y the stress to which it is subjected. If X ≤ Y, the component fails or malfunctions. The quantity R also arises in electrical and electronic systems. Many authors have studied its properties for many statistical models, including the double exponential, Weibull, generalized Pareto and Lindley distributions (see [1][2][3][4]). Classical and Bayesian estimation of reliability in a multicomponent stress-strength model under a general class of inverse exponentiated distributions was researched by Ref. [5]. Ref. [6] studied classical and Bayesian estimation of reliability in a multicomponent stress-strength model under the Weibull distribution. Furthermore, Refs. [7][8][9] also considered the problem of stress-strength reliability.
The inverse Weibull (IW) distribution has attracted much attention recently. If T denotes a random variable following the Weibull model and we define X = 1/T, then X is said to have the inverse Weibull distribution. It is a lifetime probability distribution that can be used in the reliability engineering discipline, and it has the ability to model non-monotone failure rates, which are quite common in reliability and biological studies. The inverse Weibull model has been referred to by many different names, such as "Frechet-type" ([10]) and "Complementary Weibull" ([11]). Ref. [11] discussed a graphical plotting technique to assess the suitability of the model. Ref. [12] presented the IW distribution for modeling reliability data; this model was further discussed in the context of failures of mechanical components subject to degradation. Ref. [13] proposed a discrete inverse Weibull distribution and estimated its parameters. The mixture model of two IW distributions and its identifiability properties were studied by [14]. For the theoretical analysis of the IW distribution, we refer to [15]. Ref. [16] proposed the generalized IW distribution and derived several properties of this model. For more details on the inverse Weibull distribution, see [17].
In this paper, we focus on the estimation of the stress-strength reliability R = P(Y < X), where X and Y follow inverse Weibull distributions. As far as we know, this model has not been previously studied, although we believe it plays an important role in reliability analysis.
We obtain the maximum likelihood estimator (MLE), the approximate maximum likelihood estimator (AMLE) and the asymptotic distribution of the MLE. The asymptotic distribution is used to construct an asymptotic confidence interval. We also present two bootstrap confidence intervals of R. By using the Gibbs sampling technique, we obtain the Bayes estimator of R and its corresponding credible interval. Finally, we present a real data example to illustrate the performance of the different methods.
The rest of this paper is organized as follows: in Section 2, we introduce the inverse Weibull distribution. In Section 3, we obtain the MLE of R. In Section 4, we derive an estimator of R by approximating the maximum likelihood equations. Different confidence intervals are presented in Section 5. In Section 6, Bayesian solutions are introduced. In Section 7, we compare the different proposed methods using Monte Carlo simulation; a numerical example is also provided. Finally, in Section 8, we conclude the paper.

Inverse Weibull Distribution
The probability density function of the well-known Weibull distribution is given by

f(t) = αθ t^(α−1) exp(−θ t^α), t > 0,

where α > 0 is the shape parameter and θ > 0 is the scale parameter. Let T denote a random variable following this Weibull model, namely W(α, θ), and define X = 1/T. The random variable X is said to have the inverse Weibull distribution, and its probability density function (pdf) is given by

f(x) = αθ x^(−(α+1)) exp(−θ x^(−α)), x > 0.

The cumulative distribution function (cdf) is given by

F(x) = exp(−θ x^(−α)), x > 0,

where α > 0 and θ > 0. The inverse Weibull distribution will be denoted by IW(α, θ).
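As a quick sanity check, the pdf, cdf and an inverse-transform sampler for IW(α, θ) can be sketched as follows. This is a minimal stdlib-only illustration under the parameterization F(x) = exp(−θ x^(−α)); the function names are ours:

```python
import math
import random

def iw_pdf(x, alpha, theta):
    # f(x) = alpha * theta * x^{-(alpha+1)} * exp(-theta * x^{-alpha})
    return alpha * theta * x ** (-(alpha + 1)) * math.exp(-theta * x ** (-alpha))

def iw_cdf(x, alpha, theta):
    # F(x) = exp(-theta * x^{-alpha})
    return math.exp(-theta * x ** (-alpha))

def iw_sample(alpha, theta, rng=random):
    # Inverse transform: solving F(x) = u gives x = (-ln(u) / theta)^{-1/alpha},
    # which is also 1/T for T drawn from the Weibull W(alpha, theta).
    u = rng.random()
    return (-math.log(u) / theta) ** (-1.0 / alpha)
```

A numerical derivative of `iw_cdf` recovers `iw_pdf`, confirming that the two displayed forms are consistent with each other.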

Maximum Likelihood Estimator of R
In this section, we consider the problem of estimating R = P(Y < X) under the assumption that X ∼ IW(α, θ1) and Y ∼ IW(α, θ2). Then, it can be easily calculated that

R = P(Y < X) = θ1 / (θ1 + θ2).

To compute the MLE of R, we first obtain the MLEs of α, θ1 and θ2. Suppose X1, X2, ..., Xn is a random sample from IW(α, θ1) and Y1, Y2, ..., Ym is a random sample from IW(α, θ2). The joint likelihood function is

L(α, θ1, θ2) = α^(n+m) θ1^n θ2^m [∏ xi^(−(α+1))] [∏ yj^(−(α+1))] exp(−θ1 Σ xi^(−α) − θ2 Σ yj^(−α)).
Then, the log-likelihood function is

ℓ(α, θ1, θ2) = (n + m) ln α + n ln θ1 + m ln θ2 − (α + 1)(Σ ln xi + Σ ln yj) − θ1 Σ xi^(−α) − θ2 Σ yj^(−α).

The MLEs α̂, θ̂1 and θ̂2 of the parameters α, θ1 and θ2 can be numerically obtained by solving the corresponding score equations (7)-(9). From (8) and (9), we obtain

θ̂1(α) = n / Σ xi^(−α),  θ̂2(α) = m / Σ yj^(−α).

Putting the expressions of θ̂1(α) and θ̂2(α) into (7), we find that α̂ can be obtained as a fixed-point solution of the non-linear equation α = h(α), where

h(α) = (n + m) / [Σ ln xi + Σ ln yj − θ̂1(α) Σ xi^(−α) ln xi − θ̂2(α) Σ yj^(−α) ln yj].

Using the simple iterative procedure α^(j+1) = h(α^(j)), where α^(j) is the j-th iterate of α, we stop when |α^(j) − α^(j+1)| is smaller than a prescribed tolerance. Once α̂ is obtained, θ̂1 and θ̂2 can be calculated from (10). Therefore, we obtain the MLE of R as R̂ = θ̂1 / (θ̂1 + θ̂2).
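The profile-likelihood iteration just described can be sketched in code. This is an illustrative stdlib-only implementation under the parameterization F(x) = exp(−θ x^(−α)); the function names, starting value and stopping rule are our choices:

```python
import math

def iw_mle(x, y, alpha0=1.0, tol=1e-8, max_iter=500):
    """MLEs of (alpha, theta1, theta2) for X ~ IW(alpha, theta1), Y ~ IW(alpha, theta2).

    theta1 and theta2 are profiled out of the likelihood, and alpha is updated
    by the fixed-point iteration alpha <- h(alpha) until successive iterates
    differ by less than `tol`.
    """
    n, m = len(x), len(y)
    slx = sum(math.log(v) for v in x)
    sly = sum(math.log(v) for v in y)
    alpha = alpha0
    for _ in range(max_iter):
        th1 = n / sum(v ** -alpha for v in x)     # profile MLE theta1(alpha)
        th2 = m / sum(v ** -alpha for v in y)     # profile MLE theta2(alpha)
        ax = sum(v ** -alpha * math.log(v) for v in x)
        ay = sum(v ** -alpha * math.log(v) for v in y)
        new = (n + m) / (slx + sly - th1 * ax - th2 * ay)
        converged = abs(new - alpha) < tol
        alpha = new
        if converged:
            break
    th1 = n / sum(v ** -alpha for v in x)
    th2 = m / sum(v ** -alpha for v in y)
    return alpha, th1, th2

def r_hat(x, y):
    # MLE of R = theta1 / (theta1 + theta2)
    _, th1, th2 = iw_mle(x, y)
    return th1 / (th1 + th2)
```

The denominator of the update is n and m times the gap between the plain and the xi^(−α)-weighted means of the log-data, which stays positive, so the iteration is well defined for any α > 0.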

Approximate Maximum Likelihood Estimator of R
The MLEs do not have explicit forms; therefore, we approximate the likelihood equations and derive explicit estimators of the parameters.
Since the random variable X follows IW(α, θ), V = ln X has an extreme value distribution with location parameter µ = −(1/α) ln θ and scale parameter σ = −1/α; the pdf and cdf of the standard extreme value distribution are denoted by g and G, respectively. Let T(1), ..., T(n) and S(1), ..., S(m) denote the logarithms of the ordered Xi s and Yj s, and write z(i) = (T(i) − µ1)/σ and w(j) = (S(j) − µ2)/σ. The log-likelihood function of the data T(1), ..., T(n) and S(1), ..., S(m) is given in (17). Differentiating (17) with respect to µ1, µ2 and σ, the score equations (18)-(20) are obtained. We note that the function h(z(i)) = g′(z(i))/g(z(i)) makes the score equation (18) nonlinear and intricate.
Thus, we approximate the function h(z(i)) = g′(z(i))/g(z(i)) by expanding it in a Taylor series around c_i = E(Z(i)); furthermore, we also approximate the function h(w(j)) = g′(w(j))/g(w(j)) by expanding it in a Taylor series around d_j = E(W(j)). From [18], it is known that E(U(i)) = i/(n + 1), where U(i) is the i-th order statistic from the uniform U(0, 1) distribution; therefore, c_i and d_j can be approximated by G^(−1)(i/(n + 1)) and G^(−1)(j/(m + 1)), respectively. Expanding the functions h(z(i)) and h(w(j)) and keeping the first two terms, (18)-(20) can be represented as the linearized equations (21)-(23). The estimators of µ1 and µ2 can be obtained from Equations (21) and (22), and the estimator of σ < 0 can be determined as the unique admissible root of a quadratic equation. Once σ̂ is obtained, µ̂1 and µ̂2 can be derived immediately. Hence, the AMLE of R is given by (29), obtained by transforming (µ̂1, µ̂2, σ̂) back to estimates of (α, θ1, θ2).

Confidence Intervals of R
In this section, we present an asymptotic confidence interval (C.I.) of R and two C.I.s based on the non-parametric bootstrap methods.

Asymptotic Confidence Interval of R
In this subsection, we derive the asymptotic distributions of the MLE θ̂ = (α̂, θ̂1, θ̂2) and of R̂; based on the asymptotic distribution of R̂, the corresponding asymptotic confidence interval of R is obtained. We denote the exact Fisher information matrix of θ = (α, θ1, θ2) as J(θ) = −E(I; θ), where I = [Iij], i, j = 1, 2, 3, Iij = ∂²L/∂θi∂θj and L is given in (6). It is easy to see that

I11 = −(n + m)/α² − θ1 Σ xi^(−α) (ln xi)² − θ2 Σ yj^(−α) (ln yj)²,
I22 = −n/θ1², I33 = −m/θ2²,
I12 = I21 = Σ xi^(−α) ln xi, I13 = I31 = Σ yj^(−α) ln yj, I23 = I32 = 0.

Theorem 1 states the asymptotic normality of (α̂, θ̂1, θ̂2), which follows from the standard asymptotic properties of MLEs. By using Theorem 1 and the delta method, we immediately derive the asymptotic distribution of R̂ (Theorem 2): R̂ is asymptotically normal with mean R and variance B, where B is obtained from the inverse information matrix and the gradient of R with respect to (α, θ1, θ2). Therefore, using Theorem 2, the 100(1 − γ)% confidence interval for R can be derived as

R̂ ± z_(γ/2) √B̂,

where z_(γ/2) is the upper γ/2 percentile of N(0, 1) and B̂ is the estimate of B in (30), obtained by plugging the MLEs of α, θ1 and θ2 into the expression for B.
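To make the construction concrete, the sketch below computes the delta-method interval in the simplified case where α is treated as known, so that Var(θ̂1) ≈ θ1²/n, Var(θ̂2) ≈ θ2²/m and, by the delta method, Var(R̂) ≈ R²(1 − R)²(1/n + 1/m). This ignores the extra variability from estimating α that Theorem 2 accounts for through the full information matrix, so it is an illustration rather than the paper's exact interval; the helper name is ours:

```python
import math
from statistics import NormalDist

def r_asymptotic_ci(r_hat, n, m, gamma=0.05):
    # Delta-method interval for R = theta1 / (theta1 + theta2), treating the
    # common shape alpha as known: Var(R_hat) ~ R^2 * (1 - R)^2 * (1/n + 1/m).
    sd = r_hat * (1.0 - r_hat) * math.sqrt(1.0 / n + 1.0 / m)
    z = NormalDist().inv_cdf(1.0 - gamma / 2.0)   # upper gamma/2 normal percentile
    return r_hat - z * sd, r_hat + z * sd
```

For example, with r_hat = 0.5 and n = m = 50 the 95% interval is roughly (0.402, 0.598); the full Theorem 2 interval would be somewhat wider because of the estimated α.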

Bootstrap Confidence Intervals
In this subsection, two confidence intervals based on the non-parametric bootstrap methods are proposed: (i) the percentile bootstrap method (Boot-p) (see [19]) and (ii) the bootstrap-t method (Boot-t) (see [20]).
The algorithms for constructing the confidence intervals of R are presented as follows:
(i) Boot-p method:
Step 1: From the samples x1, x2, ..., xn and y1, y2, ..., ym, compute α̂, θ̂1 and θ̂2.
Step 2: Generate bootstrap samples x1*, ..., xn* and y1*, ..., ym* by resampling with replacement, and compute the bootstrap estimate R̂*; in this paper, R̂* is computed using (14).
Step 3: Repeat Step 2 N times.
Step 4: The approximate 100(1 − γ)% Boot-p C.I. of R is given by the γ/2 and 1 − γ/2 empirical percentiles of the N values of R̂*.
(ii) Boot-t method:
Steps 1 and 2 are as above; in Step 2, we additionally compute Var(R̂*), which in this paper is obtained using Theorem 2. The studentized statistic T* = (R̂* − R̂)/√Var(R̂*) is computed for each bootstrap sample, and the empirical percentiles of T* are used to construct the approximate 100(1 − γ)% Boot-t C.I. of R.
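Since the Boot-p construction is generic in the point estimator being resampled, it can be sketched with a caller-supplied estimator standing in for the MLE of (14); the function names and defaults below are our choices:

```python
import math
import random

def boot_p_interval(x, y, estimate_r, n_boot=1000, gamma=0.05, rng=random):
    """Percentile-bootstrap C.I. for R = P(Y < X).

    estimate_r(x, y) is any point estimator of R; in the paper it is the
    MLE evaluated on each resample.
    """
    stats = []
    for _ in range(n_boot):
        xb = [rng.choice(x) for _ in range(len(x))]   # resample with replacement
        yb = [rng.choice(y) for _ in range(len(y))]
        stats.append(estimate_r(xb, yb))
    stats.sort()
    lo = stats[int(math.floor(gamma / 2.0 * n_boot))]
    hi = stats[min(n_boot - 1, int(math.ceil((1.0 - gamma / 2.0) * n_boot)) - 1)]
    return lo, hi
```

Replacing the percentile step with percentiles of the studentized statistic (R̂* − R̂)/√Var(R̂*) gives the Boot-t variant.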

Bayesian Inference on R
In this section, the Bayes estimate of R is obtained under the assumption that the shape parameter α and the scale parameters θ1 and θ2 are random variables. According to the likelihood function in Section 3, we note that α has a positive exponent while θ1 and θ2 have negative exponents, so we assume that θ1 and θ2 have independent Inverse Gamma (IG) priors and that α follows a Gamma distribution. We choose this family so that prior-to-posterior updating for θ1 and θ2 yields a posterior that is also in the family, where all the hyper-parameters ai and bi (i = 1, 2) are assumed to be known and non-negative. The prior density function of α is denoted by π(α), and we assume that it has a Gamma (0, 1) distribution.
Based on the above assumptions, we form the likelihood function and the joint density of the data, α, θ1 and θ2; the joint posterior density of α, θ1 and θ2 given the data then follows from Bayes' theorem. Since the expression (35) cannot be written in closed form, the Bayes estimate of R and the corresponding credible interval of R are derived by adopting the Gibbs sampling technique. From the expression P(α, θ1, θ2 | data), the full conditional posterior distributions of θ1 and θ2 given α and the data remain within the chosen prior family, whereas the full conditional posterior pdf of α is not of a known form. We therefore use the Metropolis-Hastings method with a normal proposal distribution to generate random numbers from this distribution.
The algorithm of Gibbs sampling is described as follows:
Step 1: Start with an initial guess (α^(0), θ1^(0), θ2^(0)).
Step 2: At iteration t, generate θ1^(t) and θ2^(t) from their full conditional posterior distributions given α^(t−1) and the data.
Step 3: Generate α^(t) from its full conditional posterior distribution using the Metropolis-Hastings step.
Step 4: Compute R^(t) from θ1^(t) and θ2^(t).
Step 5: Repeat Steps 2-4 M times.
The approximate posterior mean of R is R̂_B = (1/M) Σ_{t=1}^{M} R^(t), and the approximate posterior variance of R is (1/M) Σ_{t=1}^{M} (R^(t) − R̂_B)². Using the method proposed by [21], we can then construct the 100(1 − γ)% highest posterior density (HPD) credible interval of R from the ordered sample R^(1), ..., R^(M).
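The sampler can be sketched as follows. Note the assumptions: for concreteness we use the parameterization f(x) = αθ x^(−(α+1)) exp(−θ x^(−α)) with Gamma(a_i, b_i) priors on θ1 and θ2, under which the θ conditionals are Gamma; the paper itself specifies inverse gamma priors, so this is an illustrative variant rather than the paper's exact scheme, and all names and tuning constants (proposal standard deviation, chain length) are our choices:

```python
import math
import random

def log_cond_alpha(alpha, x, y, th1, th2, a0, b0):
    # Log full conditional of alpha (up to an additive constant): the IW
    # likelihood in alpha times a Gamma(a0, b0) prior (shape a0, rate b0).
    if alpha <= 0:
        return -math.inf
    n, m = len(x), len(y)
    return ((n + m + a0 - 1) * math.log(alpha) - b0 * alpha
            - (alpha + 1) * (sum(math.log(v) for v in x) + sum(math.log(v) for v in y))
            - th1 * sum(v ** -alpha for v in x)
            - th2 * sum(v ** -alpha for v in y))

def gibbs_r(x, y, a=(1e-4, 1e-4), b=(1e-4, 1e-4), a0=1e-4, b0=1e-4,
            n_iter=2000, burn=500, prop_sd=0.2, rng=random):
    """Gibbs sampler with a Metropolis-Hastings step for alpha; returns the
    approximate posterior mean of R and the retained draws R^(t)."""
    n, m = len(x), len(y)
    alpha, draws = 1.0, []
    for t in range(n_iter):
        # theta_k | alpha, data ~ Gamma(a_k + sample size, rate b_k + sum v^{-alpha});
        # random.gammavariate takes (shape, scale), so scale = 1 / rate.
        th1 = rng.gammavariate(a[0] + n, 1.0 / (b[0] + sum(v ** -alpha for v in x)))
        th2 = rng.gammavariate(a[1] + m, 1.0 / (b[1] + sum(v ** -alpha for v in y)))
        # Random-walk Metropolis-Hastings update for alpha with a normal proposal.
        prop = rng.gauss(alpha, prop_sd)
        if math.log(rng.random()) < (log_cond_alpha(prop, x, y, th1, th2, a0, b0)
                                     - log_cond_alpha(alpha, x, y, th1, th2, a0, b0)):
            alpha = prop
        if t >= burn:
            draws.append(th1 / (th1 + th2))   # R^(t) = theta1 / (theta1 + theta2)
    return sum(draws) / len(draws), draws
```

Sorting the retained draws and scanning for the shortest interval containing 100(1 − γ)% of them yields the HPD credible interval described above.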

Numerical Simulations and Data Analysis
In this section, we present a Monte Carlo simulation study and a real data set to illustrate different estimation methods proposed in the preceding sections.

Numerical Simulation Study
Since the performances of the different methods cannot be compared theoretically, we present some simulation results. We mainly compute the biases and mean square errors (MSEs) of the MLEs, AMLEs and Bayes estimates. The asymptotic C.I. of R and the two C.I.s based on the non-parametric bootstrap methods are obtained, and we also compute the Bayes estimates and HPD credible intervals of R. Here, we take a1 = a2 = b1 = b2 = 0.0001. We consider the sample sizes (n, m) = (10, 10), (20, 15), (25, 25), (30, 40), (50, 50). For the parameter values, we take θ2 = 1, θ1 = 0.5, 1, 1.5, 2, 3 and α = 2. All the results are based on 1000 replications. For the bootstrap methods, we compute the confidence intervals based on 300 resamples. The Bayes estimates are based on M = 1000 samples. In each case, the nominal level for the C.I.s or the credible intervals is 0.95.
The average biases and MSEs of the MLEs, AMLEs and Bayes estimates over 1000 replications are reported in Table 1. From Table 1, we find that the Bayes estimates are almost as efficient as the MLEs and the AMLEs for all sample sizes; interestingly, in most cases, the MSEs of the Bayes estimates are smaller than those of the MLEs or AMLEs. The biases and MSEs of the MLEs and AMLEs are very close. As the sample size (n, m) increases, the MSEs of the estimates decrease, as expected.
Table 2 reports the 95% asymptotic C.I. of R together with the C.I.s based on the bootstrap methods and the HPD credible interval; we report the average confidence/credible lengths and the coverage probabilities. From Table 2, the coverage probabilities approach the nominal level of 95% as the sample sizes increase. We observe that the MLE method is the most reliable procedure for obtaining confidence intervals, with the AMLE and Bayes intervals second best. Interestingly, we find that the HPD credible intervals provide the highest coverage probabilities. The Boot-p confidence intervals perform better than the Boot-t confidence intervals in terms of coverage probabilities; one point to keep in mind is that the bootstrap methods depend on the number of resamples. For small sample sizes (n, m), the coverage probabilities for the MLEs and AMLEs are below the nominal level, but they perform well as (n, m) increases.

Table 1. Average biases and MSEs.
(n, m)
−0.0182 (0.0005)  −0.0087 (0.0007)  −0.0033 (0.0038)  0.0007 (0.0017)  −0.0035 (0.0027)
−0.0192 (0.0005)  −0.0080 (0.0006)  −0.0033 (0.0039)  0.0012 (0.0017)  −0.0022 (0.0027)
−0.0217 (0.0006)  −0.0079 (0.0006)  −0.0008 (0.0036)  0.0051 (0.0015)  0.0023 (0.0025)
In each cell, the average bias is provided and the corresponding MSE is presented within brackets. The first to the third row corresponds to the results for the MLEs, AMLEs and Bayes estimates, respectively.
Table 2. Average confidence lengths and coverage probabilities.
(n, m) In each cell, the average confidence lengths are provided and the corresponding coverage probabilities are given within brackets. The first to the fifth row corresponds to the results for the MLEs, AMLEs, Boot-p method, Boot-t method and Bayes estimates, respectively.

Data Analysis
We consider a real data set to illustrate the methods of inference discussed in this article. The strength data sets in Tables 3 and 4 were analyzed previously by [3,22]. We know that if the random variable X follows W(α, θ), then the random variable T = 1/X has the IW(α, θ) distribution. Hence, we obtain the corresponding data sets from the inverse Weibull distribution, presented in Tables 5 and 6. We analyze the data after adding 0.5 to both data sets. We fit inverse Weibull models to the two data sets separately. The estimated shape and scale parameters, log-likelihood values, Kolmogorov-Smirnov (K-S) distances and corresponding p-values are presented in Table 7. The expected and observed frequencies based on the fitted models are presented in Tables 8 and 9, and the chi-square values are 5.9914 and 5.9915, respectively. Evidently, the inverse Weibull model fits Data Set 1 and Data Set 2 very well.
The K-S values and the corresponding p-values in Table 10 show that the inverse Weibull models with equal shape parameters also fit the modified data sets reasonably well, so we cannot reject the null hypothesis that the two shape parameters are equal. Based on Equations (14) and (29), the MLE and AMLE of R are 0.7576 and 0.7571, respectively. The 95% confidence intervals based on the MLE, AMLE, Boot-p and Boot-t methods are (0.6917, 0.8235), (0.6911, 0.8231), (0.6993, 0.8197) and (0.7015, 0.8421), respectively.
The Bayesian estimate of R is also obtained based on Equation (36). As in the previous sections, we assume that θ1 and θ2 have independent IG priors, α has a Gamma (0, 1) prior and a1 = a2 = b1 = b2 = 0.0001. Under these assumptions, the Bayesian estimate of R is 0.7437, and the 95% HPD credible interval of R is (0.6690, 0.8102).

Inference on R with All Different Parameters
In the sections above, the shape parameters are taken to be equal. To extend the scope of the paper, in this section we study the inference on R when all the parameters are different. We consider the problem of estimating R = P(Y < X) under the assumption that X ∼ IW(α1, θ1) and Y ∼ IW(α2, θ2). Then, it can be calculated that

R = P(Y < X) = ∫_0^∞ α1 θ1 x^(−(α1+1)) exp(−θ1 x^(−α1) − θ2 x^(−α2)) dx,

which does not reduce to a closed form in general. To compute the MLE of R, we first obtain the MLEs of α1, α2, θ1 and θ2. Suppose X1, X2, ..., Xn is a random sample from IW(α1, θ1) and Y1, Y2, ..., Ym is a random sample from IW(α2, θ2), and form the joint likelihood function. Then, similar to the previous approaches, we can obtain the point and interval estimates of R using the MLE, AMLE and Bayesian methods, and we can also obtain bootstrap confidence intervals of R.
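With different shape parameters, R no longer reduces to θ1/(θ1 + θ2); writing R = E[F_Y(X)] and substituting u = F_X(x) turns it into a one-dimensional integral over (0, 1) that is easy to evaluate numerically. A sketch under the parameterization F(x) = exp(−θ x^(−α)), using a simple midpoint rule (the function name and grid size are ours):

```python
import math

def r_general(alpha1, theta1, alpha2, theta2, n_grid=4000):
    # R = P(Y < X) = E[F_Y(X)] = int_0^1 exp(-theta2 * ((-ln u) / theta1)^(alpha2/alpha1)) du,
    # obtained by substituting u = F_X(x); evaluated here with a midpoint rule.
    ratio = alpha2 / alpha1
    total = 0.0
    for k in range(n_grid):
        u = (k + 0.5) / n_grid
        total += math.exp(-theta2 * ((-math.log(u)) / theta1) ** ratio)
    return total / n_grid
```

When α1 = α2 the integrand reduces to u^(θ2/θ1), so `r_general` recovers the closed form θ1/(θ1 + θ2) of the equal-shape case, which serves as a convenient check.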

Table 3 .
Strength measured in GPa for carbon fibers tested under tension at gauge lengths of 20 mm.

Table 4 .
Strength measured in GPa for carbon fibers tested under tension at gauge lengths of 10 mm.

Table 7 .
Scale parameter, shape parameter, log-likelihood, K-S distances and p-values of the fitted inverse Weibull models to Data Sets 1 and 2.

Table 8 .
Expected frequencies and observed frequencies for modified Data Set 1 when fitting the inverse Weibull model.

Table 9 .
Expected frequencies and observed frequencies for modified Data Set 2 when fitting the inverse Weibull model.

Table 10 .
Scale parameter, shape parameter, log-likelihood, K-S distances and p-values of the fitted inverse Weibull models to Data Sets 1 and 2. Here, we assume that the two shape parameters are identical.