Article

Bayesian and Classical Estimation of Stress-Strength Reliability for Inverse Weibull Lifetime Models

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Algorithms 2017, 10(2), 71; https://doi.org/10.3390/a10020071
Submission received: 25 May 2017 / Revised: 8 June 2017 / Accepted: 16 June 2017 / Published: 21 June 2017

Abstract
In this paper, we consider the problem of estimating the stress-strength reliability R = P(Y < X) for inverse Weibull lifetime models having the same shape parameter but different scale parameters. We obtain the maximum likelihood estimator and its asymptotic distribution. Since the classical estimator does not have an explicit form, we also propose an approximate maximum likelihood estimator. The asymptotic confidence interval and two bootstrap intervals are obtained. Using the Gibbs sampling technique, the Bayes estimator and the corresponding credible interval are obtained, with the Metropolis-Hastings algorithm used to generate the required random variates. Monte Carlo simulations are conducted to compare the proposed methods, and a real dataset is analyzed.

1. Introduction

The inference of stress-strength reliability is an important topic in statistics, with many practical applications. In stress-strength modeling, R = P(Y < X) is a measure of component reliability: the component fails or malfunctions whenever X ≤ Y, where X denotes the strength of the component and Y the stress it is subjected to. R also arises naturally in electrical and electronic systems. Many authors have studied its properties for various statistical models, including the double exponential, Weibull, generalized Pareto and Lindley distributions (see [1,2,3,4]). Classical and Bayesian estimation of reliability in a multicomponent stress-strength model under a general class of inverse exponentiated distributions was studied by Ref. [5]. Ref. [6] studied classical and Bayesian estimation of reliability in a multicomponent stress-strength model under the Weibull distribution. Furthermore, Refs. [7,8,9] also considered the stress-strength reliability problem.
The inverse Weibull (IW) distribution has attracted much attention recently. If T denotes a random variable from the Weibull model and we define X = 1/T, then X is said to have the inverse Weibull distribution. It is a lifetime distribution used in reliability engineering, and it can model the non-monotone failure rates that are quite common in reliability and biological studies. The inverse Weibull model has been referred to by several different names, such as the "Fréchet-type" distribution ([10]) and the "complementary Weibull" distribution ([11]). Ref. [11] discussed a graphical plotting technique to assess the suitability of the model. Ref. [12] presented the IW distribution for modeling reliability data; the model was further discussed in the context of failures of mechanical components subject to degradation. Ref. [13] proposed a discrete inverse Weibull distribution and estimated its parameters. The mixture of two IW distributions and its identifiability properties were studied by [14]. For a theoretical analysis of the IW distribution, we refer to [15]. Ref. [16] proposed the generalized IW distribution and derived several of its properties. For more details on the inverse Weibull distribution, see [17].
In this paper, we focus on the estimation of the stress-strength reliability R = P(Y < X), where X and Y follow inverse Weibull distributions. As far as we know, this model has not been studied before, although we believe it plays an important role in reliability analysis.
We obtain the maximum likelihood estimator (MLE), the approximate maximum likelihood estimator (AMLE) and the asymptotic distribution of the MLE. The asymptotic distribution is used to construct an asymptotic confidence interval. We also present two bootstrap confidence intervals of R. By using the Gibbs sampling technique, we obtain the Bayes estimator of R and its corresponding credible interval. Finally, we present a real data example to illustrate the performance of the different methods.
The paper is organized as follows: in Section 2, we introduce the inverse Weibull distribution. In Section 3, we obtain the MLE of R. In Section 4, we derive an estimator of R by approximating the maximum likelihood equations. Different confidence intervals are presented in Section 5. In Section 6, Bayesian solutions are introduced. In Section 7, we compare the different proposed methods using Monte Carlo simulation, and a numerical example is also provided. Finally, in Section 8, we conclude the paper.

2. Inverse Weibull Distribution

The probability density function of the well-known Weibull distribution is given by
$$f(t; \alpha, \theta) = \frac{\alpha}{\theta}\, t^{\alpha-1} e^{-t^{\alpha}/\theta}, \quad t > 0,$$
where α > 0 is the shape parameter and θ > 0 is the scale parameter.
Let T denote a random variable from the Weibull model W(α, θ), and define
$$X = \frac{1}{T}.$$
The random variable X is then said to have the inverse Weibull distribution, and its probability density function (pdf) is given by
$$f(x; \alpha, \theta) = \frac{\alpha}{\theta}\, x^{-\alpha-1} e^{-x^{-\alpha}/\theta}, \quad x > 0.$$
The cumulative distribution function (cdf) is given by
$$F(x) = e^{-x^{-\alpha}/\theta}, \quad x > 0,$$
where α > 0 and θ > 0. The inverse Weibull distribution will be denoted by IW(α, θ).

3. Maximum Likelihood Estimator of R

In this section, we consider the problem of estimating R = P(Y < X) under the assumption that X ∼ IW(α, θ1) and Y ∼ IW(α, θ2). Then, it can be easily calculated that
$$R = P(Y < X) = \frac{\theta_2}{\theta_1 + \theta_2}.$$
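The closed form follows from a single substitution: conditioning on X = x and substituting u = x^(−α),
$$R = \int_0^\infty F_Y(x)\, f_X(x)\, dx = \int_0^\infty e^{-x^{-\alpha}/\theta_2}\, \frac{\alpha}{\theta_1}\, x^{-\alpha-1} e^{-x^{-\alpha}/\theta_1}\, dx = \frac{1}{\theta_1}\int_0^\infty e^{-u\left(\frac{1}{\theta_1} + \frac{1}{\theta_2}\right)} du = \frac{\theta_2}{\theta_1 + \theta_2}.$$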
To compute the MLE of R, we first obtain the MLEs of θ1 and θ2. Suppose X1, X2, ..., Xn is a random sample from IW(α, θ1) and Y1, Y2, ..., Ym is a random sample from IW(α, θ2). The joint likelihood function is:
$$\ell(\alpha, \theta_1, \theta_2) = \frac{\alpha^{n+m}}{\theta_1^{\,n}\theta_2^{\,m}} \Big(\prod_{i=1}^{n} x_i^{-\alpha-1}\Big)\Big(\prod_{j=1}^{m} y_j^{-\alpha-1}\Big) \exp\Big[-\Big(\sum_{i=1}^{n}\frac{x_i^{-\alpha}}{\theta_1} + \sum_{j=1}^{m}\frac{y_j^{-\alpha}}{\theta_2}\Big)\Big].$$
Then, the log-likelihood function is
$$L(\alpha, \theta_1, \theta_2) = (m+n)\ln\alpha - n\ln\theta_1 - m\ln\theta_2 - (\alpha+1)\Big(\sum_{i=1}^{n}\ln x_i + \sum_{j=1}^{m}\ln y_j\Big) - \frac{1}{\theta_1}\sum_{i=1}^{n} x_i^{-\alpha} - \frac{1}{\theta_2}\sum_{j=1}^{m} y_j^{-\alpha}.$$
The MLEs α̂, θ̂1 and θ̂2 of the parameters α, θ1 and θ2 can be obtained numerically by solving the following equations:
$$\frac{\partial L}{\partial \alpha} = \frac{m+n}{\alpha} - \sum_{i=1}^{n}\ln x_i - \sum_{j=1}^{m}\ln y_j + \frac{1}{\theta_1}\sum_{i=1}^{n} x_i^{-\alpha}\ln x_i + \frac{1}{\theta_2}\sum_{j=1}^{m} y_j^{-\alpha}\ln y_j = 0,$$
$$\frac{\partial L}{\partial \theta_1} = -\frac{n}{\theta_1} + \frac{1}{\theta_1^{2}}\sum_{i=1}^{n} x_i^{-\alpha} = 0,$$
$$\frac{\partial L}{\partial \theta_2} = -\frac{m}{\theta_2} + \frac{1}{\theta_2^{2}}\sum_{j=1}^{m} y_j^{-\alpha} = 0.$$
From (8) and (9), we obtain
$$\hat\theta_1(\alpha) = \frac{1}{n}\sum_{i=1}^{n} x_i^{-\alpha} \quad \text{and} \quad \hat\theta_2(\alpha) = \frac{1}{m}\sum_{j=1}^{m} y_j^{-\alpha}.$$
Substituting the expressions of θ̂1(α) and θ̂2(α) into (7), we obtain
$$\frac{m+n}{\alpha} - \sum_{i=1}^{n}\ln x_i - \sum_{j=1}^{m}\ln y_j + \frac{\sum_{i=1}^{n} x_i^{-\alpha}\ln x_i}{\frac{1}{n}\sum_{i=1}^{n} x_i^{-\alpha}} + \frac{\sum_{j=1}^{m} y_j^{-\alpha}\ln y_j}{\frac{1}{m}\sum_{j=1}^{m} y_j^{-\alpha}} = 0.$$
Therefore, α̂ can be obtained as a fixed-point solution of the non-linear equation of the form
$$h(\alpha) = \alpha,$$
where
$$h(\alpha) = \frac{m+n}{\displaystyle\sum_{i=1}^{n}\ln x_i + \sum_{j=1}^{m}\ln y_j - \frac{\sum_{i=1}^{n} x_i^{-\alpha}\ln x_i}{\frac{1}{n}\sum_{i=1}^{n} x_i^{-\alpha}} - \frac{\sum_{j=1}^{m} y_j^{-\alpha}\ln y_j}{\frac{1}{m}\sum_{j=1}^{m} y_j^{-\alpha}}}.$$
Using the simple iterative procedure α^(j+1) = h(α^(j)), where α^(j) is the j-th iterate, we stop the iteration when |α^(j+1) − α^(j)| is smaller than a specified tolerance. Once we obtain α̂, θ̂1 and θ̂2 can be calculated from (10). Therefore, we obtain the MLE of R = P(Y < X) as
$$\hat R = \frac{\frac{1}{m}\sum_{j=1}^{m} y_j^{-\hat\alpha}}{\frac{1}{n}\sum_{i=1}^{n} x_i^{-\hat\alpha} + \frac{1}{m}\sum_{j=1}^{m} y_j^{-\hat\alpha}}.$$
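The whole procedure is straightforward to program. Below is a minimal Python sketch of the fixed-point iteration and the resulting estimates; the function name, starting value and stopping tolerance are our own choices, not part of the paper.

```python
import numpy as np

def mle_R(x, y, alpha0=1.0, tol=1e-8, max_iter=1000):
    """MLE of R = P(Y < X) for inverse Weibull samples x, y with a common shape.

    Returns (R_hat, alpha_hat, theta1_hat, theta2_hat)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    lx, ly = np.log(x), np.log(y)
    alpha = alpha0
    for _ in range(max_iter):
        xa, ya = x ** -alpha, y ** -alpha
        # h(alpha): fixed-point map obtained from the profile score equation
        denom = (lx.sum() + ly.sum()
                 - (xa * lx).sum() / xa.mean()
                 - (ya * ly).sum() / ya.mean())
        alpha_new = (n + m) / denom
        if abs(alpha_new - alpha) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    theta1 = (x ** -alpha).mean()   # theta1_hat(alpha) from (10)
    theta2 = (y ** -alpha).mean()   # theta2_hat(alpha) from (10)
    return theta2 / (theta1 + theta2), alpha, theta1, theta2
```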

4. Approximate Maximum Likelihood Estimator of R

The MLEs do not have explicit forms; therefore, we approximate the likelihood equations and derive explicit estimators of the parameters.
Since the random variable X follows IW(α, θ), V = ln X has the extreme value distribution with pdf
$$f(v; \mu, \sigma) = \frac{1}{\sigma}\, e^{-\frac{v-\mu}{\sigma}} \exp\Big(-e^{-\frac{v-\mu}{\sigma}}\Big), \quad -\infty < v < +\infty,$$
where μ = −(1/α) ln θ and σ = 1/α. Here μ and σ are the location and scale parameters, respectively. The pdf and cdf of the standard extreme value distribution are
$$g(v) = e^{-v} e^{-e^{-v}}, \qquad G(v) = e^{-e^{-v}}.$$
Suppose X(1) < X(2) < ... < X(n) and Y(1) < Y(2) < ... < Y(m) are the ordered Xi's and Yj's. We use the following notation: T(i) = ln X(i) and Z(i) = (T(i) − μ1)/σ for i = 1, ..., n, and S(j) = ln Y(j) and W(j) = (S(j) − μ2)/σ for j = 1, ..., m, where μ1 = −(1/α) ln θ1, μ2 = −(1/α) ln θ2 and σ = 1/α.
The log-likelihood function of the data T(1), ..., T(n) and S(1), ..., S(m) is
$$L(\mu_1, \mu_2, \sigma) \propto -(m+n)\ln\sigma + \sum_{i=1}^{n}\ln g(z_{(i)}) + \sum_{j=1}^{m}\ln g(w_{(j)}).$$
Differentiating (17) with respect to μ1, μ2 and σ, the score equations are obtained as
$$\frac{\partial L}{\partial \mu_1} = -\frac{1}{\sigma}\sum_{i=1}^{n}\frac{g'(z_{(i)})}{g(z_{(i)})} = 0,$$
$$\frac{\partial L}{\partial \mu_2} = -\frac{1}{\sigma}\sum_{j=1}^{m}\frac{g'(w_{(j)})}{g(w_{(j)})} = 0,$$
$$\frac{\partial L}{\partial \sigma} = -\frac{m+n}{\sigma} - \frac{1}{\sigma}\sum_{i=1}^{n}\frac{g'(z_{(i)})}{g(z_{(i)})}\, z_{(i)} - \frac{1}{\sigma}\sum_{j=1}^{m}\frac{g'(w_{(j)})}{g(w_{(j)})}\, w_{(j)} = 0.$$
We note that the function h(z(i)) = g′(z(i))/g(z(i)) makes the score Equation (18) nonlinear and intractable. Thus, we approximate h(z(i)) by expanding it in a Taylor series around ci = E(Z(i)), and likewise approximate h(w(j)) = g′(w(j))/g(w(j)) by expanding it in a Taylor series around dj = E(W(j)). From [18], it is known that
$$G(Z_{(i)}) \stackrel{d}{=} U_{(i)},$$
where U(i) is the i-th order statistic from the uniform U(0, 1) distribution. Therefore,
$$Z_{(i)} \stackrel{d}{=} G^{-1}(U_{(i)}),$$
and
$$c_i = E\,Z_{(i)} \approx G^{-1}\big(E\,U_{(i)}\big) = G^{-1}\big(i/(n+1)\big).$$
We use the notations $p_i = \frac{i}{n+1}$ and $\bar p_j = \frac{j}{m+1}$; therefore, $c_i = G^{-1}(p_i) = -\ln(-\ln p_i)$ and $d_j = G^{-1}(\bar p_j) = -\ln(-\ln \bar p_j)$.
Expanding the functions h(z(i)) and h(w(j)) and keeping the first two terms, we have
$$h(z_{(i)}) = \frac{g'(z_{(i)})}{g(z_{(i)})} \approx h(c_i) + h'(c_i)\big(z_{(i)} - c_i\big) \equiv a_i - b_i z_{(i)}, \quad i = 1, ..., n,$$
$$h(w_{(j)}) = \frac{g'(w_{(j)})}{g(w_{(j)})} \approx h(d_j) + h'(d_j)\big(w_{(j)} - d_j\big) \equiv \bar a_j - \bar b_j w_{(j)}, \quad j = 1, ..., m,$$
where
$$a_i = h(c_i) - c_i h'(c_i) = \ln p_i\big(\ln(-\ln p_i) - 1\big) - 1, \qquad b_i = -h'(c_i) = -\ln p_i,$$
$$\bar a_j = h(d_j) - d_j h'(d_j) = \ln \bar p_j\big(\ln(-\ln \bar p_j) - 1\big) - 1, \qquad \bar b_j = -h'(d_j) = -\ln \bar p_j.$$
Therefore, (18)–(20) can be represented as
$$\frac{\partial L}{\partial \mu_1} \approx -\frac{1}{\sigma}\sum_{i=1}^{n}\big(a_i - b_i z_{(i)}\big) = 0,$$
$$\frac{\partial L}{\partial \mu_2} \approx -\frac{1}{\sigma}\sum_{j=1}^{m}\big(\bar a_j - \bar b_j w_{(j)}\big) = 0,$$
$$\frac{\partial L}{\partial \sigma} \approx -\frac{1}{\sigma}\Big[m + n + \sum_{i=1}^{n}\big(a_i - b_i z_{(i)}\big) z_{(i)} + \sum_{j=1}^{m}\big(\bar a_j - \bar b_j w_{(j)}\big) w_{(j)}\Big] = 0.$$
The estimators of μ1 and μ2 can be obtained from Equations (21) and (22) as
$$\tilde\mu_1 = A_1 + B_1\tilde\sigma, \qquad \tilde\mu_2 = A_2 + B_2\tilde\sigma,$$
where
$$A_1 = \frac{\sum_{i=1}^{n} b_i T_{(i)}}{\sum_{i=1}^{n} b_i}, \quad A_2 = \frac{\sum_{j=1}^{m} \bar b_j S_{(j)}}{\sum_{j=1}^{m} \bar b_j}, \quad B_1 = -\frac{\sum_{i=1}^{n} a_i}{\sum_{i=1}^{n} b_i}, \quad B_2 = -\frac{\sum_{j=1}^{m} \bar a_j}{\sum_{j=1}^{m} \bar b_j}.$$
The estimator of σ (> 0) can be determined as the unique positive root of the quadratic equation
$$C\sigma^{2} + D\sigma - E = 0,$$
where
$$C = (m+n) - B_1\sum_{i=1}^{n} a_i - B_2\sum_{j=1}^{m}\bar a_j - B_1^{2}\sum_{i=1}^{n} b_i - B_2^{2}\sum_{j=1}^{m}\bar b_j = m + n,$$
$$D = \sum_{i=1}^{n} a_i\big(T_{(i)} - A_1\big) + \sum_{j=1}^{m} \bar a_j\big(S_{(j)} - A_2\big),$$
$$E = \sum_{i=1}^{n} b_i\big(T_{(i)} - A_1\big)^2 + \sum_{j=1}^{m} \bar b_j\big(S_{(j)} - A_2\big)^2 > 0.$$
(The cross terms involving B1 and B2 drop out of D because Σ bi (T(i) − A1) = 0 and Σ b̄j (S(j) − A2) = 0 by the definitions of A1 and A2.)
Therefore,
$$\tilde\sigma = \frac{-D + \sqrt{D^{2} + 4(m+n)E}}{2(m+n)},$$
which is positive since E > 0.
Once σ̃ is obtained, μ̃1 and μ̃2 follow immediately. Hence, the AMLE of R is given by
$$\tilde R = \frac{\tilde\theta_2}{\tilde\theta_1 + \tilde\theta_2},$$
where
$$\tilde\alpha = \frac{1}{\tilde\sigma}, \qquad \tilde\theta_1 = e^{-(A_1 + B_1\tilde\sigma)/\tilde\sigma}, \qquad \tilde\theta_2 = e^{-(A_2 + B_2\tilde\sigma)/\tilde\sigma}.$$
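Because the AMLE is fully explicit, it requires no iteration. Below is a minimal Python sketch under the notation above; the function name is ours.

```python
import numpy as np

def amle_R(x, y):
    """AMLE of R via the extreme value linearization (Section 4)."""
    t = np.sort(np.log(np.asarray(x, float)))    # T_(i) = ln X_(i)
    s = np.sort(np.log(np.asarray(y, float)))    # S_(j) = ln Y_(j)
    n, m = len(t), len(s)
    p = np.arange(1, n + 1) / (n + 1)            # p_i = i/(n+1)
    q = np.arange(1, m + 1) / (m + 1)            # p-bar_j = j/(m+1)
    a = np.log(p) * (np.log(-np.log(p)) - 1) - 1
    b = -np.log(p)
    ab = np.log(q) * (np.log(-np.log(q)) - 1) - 1
    bb = -np.log(q)
    A1, A2 = (b * t).sum() / b.sum(), (bb * s).sum() / bb.sum()
    B1, B2 = -a.sum() / b.sum(), -ab.sum() / bb.sum()
    D = (a * (t - A1)).sum() + (ab * (s - A2)).sum()
    E = (b * (t - A1) ** 2).sum() + (bb * (s - A2) ** 2).sum()
    # positive root of C*sigma^2 + D*sigma - E = 0 with C = n + m
    sigma = (-D + np.sqrt(D ** 2 + 4 * (n + m) * E)) / (2 * (n + m))
    theta1 = np.exp(-(A1 + B1 * sigma) / sigma)
    theta2 = np.exp(-(A2 + B2 * sigma) / sigma)
    return theta2 / (theta1 + theta2), 1 / sigma, theta1, theta2
```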

5. Confidence Intervals of R

In this section, we present an asymptotic confidence interval (C.I.) of R and two C.I.s based on bootstrap methods.

5.1. Asymptotic Confidence Interval of R

In this subsection, we derive the asymptotic distribution of the MLE θ̂ = (α̂, θ̂1, θ̂2) and of R̂. Based on the asymptotic distribution of R̂, the corresponding asymptotic confidence interval of R is obtained. We denote the exact Fisher information matrix of θ = (α, θ1, θ2) by J(θ) = E(I(θ)), where I(θ) = [I_ij], I_ij = −∂²L/∂θi∂θj for i, j = 1, 2, 3, and L is given in (6):
$$I(\theta) = -\begin{pmatrix} \frac{\partial^2 L}{\partial \alpha^2} & \frac{\partial^2 L}{\partial \alpha\,\partial \theta_1} & \frac{\partial^2 L}{\partial \alpha\,\partial \theta_2} \\ \frac{\partial^2 L}{\partial \theta_1\,\partial \alpha} & \frac{\partial^2 L}{\partial \theta_1^2} & \frac{\partial^2 L}{\partial \theta_1\,\partial \theta_2} \\ \frac{\partial^2 L}{\partial \theta_2\,\partial \alpha} & \frac{\partial^2 L}{\partial \theta_2\,\partial \theta_1} & \frac{\partial^2 L}{\partial \theta_2^2} \end{pmatrix} = \begin{pmatrix} I_{11} & I_{12} & I_{13} \\ I_{21} & I_{22} & I_{23} \\ I_{31} & I_{32} & I_{33} \end{pmatrix}.$$
It is easy to see that
$$I_{11} = \frac{m+n}{\alpha^2} + \frac{1}{\theta_1}\sum_{i=1}^{n}(\ln x_i)^2 x_i^{-\alpha} + \frac{1}{\theta_2}\sum_{j=1}^{m}(\ln y_j)^2 y_j^{-\alpha},$$
$$I_{12} = I_{21} = \frac{1}{\theta_1^{2}}\sum_{i=1}^{n} x_i^{-\alpha}\ln x_i, \qquad I_{13} = I_{31} = \frac{1}{\theta_2^{2}}\sum_{j=1}^{m} y_j^{-\alpha}\ln y_j,$$
$$I_{22} = -\frac{n}{\theta_1^{2}} + \frac{2}{\theta_1^{3}}\sum_{i=1}^{n} x_i^{-\alpha}, \qquad I_{33} = -\frac{m}{\theta_2^{2}} + \frac{2}{\theta_2^{3}}\sum_{j=1}^{m} y_j^{-\alpha}, \qquad I_{23} = I_{32} = 0.$$
Moreover,
$$J_{11} = E\Big(-\frac{\partial^2 L}{\partial \alpha^2}\Big) = \frac{1}{\alpha^2}\Big[(m+n)\big(1 + \Gamma''(2)\big) + n(\ln\theta_1)^2 + m(\ln\theta_2)^2 + 2\Gamma'(2)\big(n\ln\theta_1 + m\ln\theta_2\big)\Big],$$
$$J_{12} = J_{21} = \frac{1}{\theta_1^{2}}\sum_{i=1}^{n} E\big(x_i^{-\alpha}\ln x_i\big) = -\frac{n}{\theta_1\alpha}\big(\ln\theta_1 + \Gamma'(2)\big),$$
$$J_{13} = J_{31} = \frac{1}{\theta_2^{2}}\sum_{j=1}^{m} E\big(y_j^{-\alpha}\ln y_j\big) = -\frac{m}{\theta_2\alpha}\big(\ln\theta_2 + \Gamma'(2)\big),$$
$$J_{22} = \frac{n}{\theta_1^{2}}, \qquad J_{33} = \frac{m}{\theta_2^{2}}, \qquad J_{23} = J_{32} = 0,$$
where $\Gamma(s) = \int_0^\infty x^{s-1} e^{-x}\, dx$ is the gamma function and Γ′ and Γ″ denote its first and second derivatives.
Theorem 1.
As n → ∞ and m → ∞ with n/m → p,
$$\Big(\sqrt{m}\,(\hat\alpha - \alpha),\ \sqrt{n}\,(\hat\theta_1 - \theta_1),\ \sqrt{m}\,(\hat\theta_2 - \theta_2)\Big) \stackrel{d}{\longrightarrow} N_3\Big(0,\ A^{-1}(\alpha, \theta_1, \theta_2)\Big),$$
where
$$A(\alpha, \theta_1, \theta_2) = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & 0 \\ a_{31} & 0 & a_{33} \end{pmatrix},$$
and
$$a_{11} = \frac{1}{\alpha^2}\Big[(1+p)\big(1+\Gamma''(2)\big) + p(\ln\theta_1)^2 + (\ln\theta_2)^2 + 2\Gamma'(2)\big(p\ln\theta_1 + \ln\theta_2\big)\Big] = \lim_{n,m\to\infty}\frac{J_{11}}{m},$$
$$a_{12} = a_{21} = -\frac{p}{\theta_1\alpha}\big(\ln\theta_1 + \Gamma'(2)\big) = \lim_{n,m\to\infty}\frac{p}{n} J_{12}, \qquad a_{13} = a_{31} = -\frac{1}{\theta_2\alpha}\big(\ln\theta_2 + \Gamma'(2)\big) = \lim_{n,m\to\infty}\frac{1}{m} J_{13},$$
$$a_{22} = \frac{1}{\theta_1^{2}} = \lim_{n,m\to\infty}\frac{1}{n} J_{22}, \qquad a_{33} = \frac{1}{\theta_2^{2}} = \lim_{n,m\to\infty}\frac{1}{m} J_{33}, \qquad a_{23} = a_{32} = 0.$$
Proof. 
The result follows from the standard asymptotic properties of maximum likelihood estimators. ☐
Theorem 2.
As n → ∞ and m → ∞ with n/m → p,
$$\sqrt{n}\,(\hat R - R) \stackrel{d}{\longrightarrow} N(0, B),$$
where
$$B = \frac{1}{u_A(\theta_1+\theta_2)^{4}}\Big[\theta_1^{2}\big(a_{11}a_{22} - a_{12}a_{21}\big) + \theta_2^{2}\big(a_{11}a_{33} - a_{13}a_{31}\big) - 2\theta_1\theta_2\, a_{12}a_{13}\Big],$$
and $u_A = a_{11}a_{22}a_{33} - a_{13}a_{22}a_{31} - a_{12}a_{21}a_{33}$.
Proof. 
By using Theorem 1 and the delta method, we immediately derive the asymptotic distribution of R ^ as follows:
$$\sqrt{n}\,(\hat R - R) \stackrel{d}{\longrightarrow} N(0, B),$$
where
$$B = c_A^{t}\, A^{-1}\, c_A,$$
with
$$c_A = \begin{pmatrix} \frac{\partial R}{\partial \alpha} \\ \frac{\partial R}{\partial \theta_1} \\ \frac{\partial R}{\partial \theta_2} \end{pmatrix} = \frac{1}{(\theta_1+\theta_2)^{2}}\begin{pmatrix} 0 \\ -\theta_2 \\ \theta_1 \end{pmatrix},$$
$$A^{-1} = \frac{1}{u_A}\begin{pmatrix} a_{22}a_{33} & -a_{12}a_{33} & -a_{22}a_{13} \\ -a_{21}a_{33} & a_{11}a_{33} - a_{13}a_{31} & a_{21}a_{13} \\ -a_{22}a_{31} & a_{12}a_{31} & a_{11}a_{22} - a_{12}a_{21} \end{pmatrix},$$
and
$$u_A = a_{11}a_{22}a_{33} - a_{13}a_{22}a_{31} - a_{12}a_{21}a_{33}.$$
Therefore,
$$B = c_A^{t}\, A^{-1}\, c_A = \frac{1}{u_A(\theta_1+\theta_2)^{4}}\Big[\big(a_{11}a_{33} - a_{13}a_{31}\big)\theta_2^{2} + \big(a_{11}a_{22} - a_{12}a_{21}\big)\theta_1^{2} - 2\theta_1\theta_2\, a_{12}a_{13}\Big].$$
We can derive the 100(1 − γ)% confidence interval for R using Theorem 2 as
$$\Big(\hat R - z_{1-\gamma/2}\sqrt{\hat B/n},\ \ \hat R + z_{1-\gamma/2}\sqrt{\hat B/n}\Big),$$
where z_r is the 100r-th percentile of N(0, 1). The confidence interval of R is computed by using the estimate of B in (30). To estimate B, we use the MLEs of α, θ1 and θ2 together with the following:
$$\hat a_{11} = \frac{1}{\hat\alpha^{2}}\Big[(1+p)\big(1+\Gamma''(2)\big) + p(\ln\hat\theta_1)^2 + (\ln\hat\theta_2)^2 + 2\Gamma'(2)\big(p\ln\hat\theta_1 + \ln\hat\theta_2\big)\Big],$$
$$\hat a_{12} = \hat a_{21} = -\frac{p}{\hat\theta_1\hat\alpha}\big(\ln\hat\theta_1 + \Gamma'(2)\big), \qquad \hat a_{13} = \hat a_{31} = -\frac{1}{\hat\theta_2\hat\alpha}\big(\ln\hat\theta_2 + \Gamma'(2)\big),$$
$$\hat a_{22} = \frac{1}{\hat\theta_1^{2}}, \qquad \hat a_{33} = \frac{1}{\hat\theta_2^{2}}.$$
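A sketch of this computation in Python. Here Γ′(2) and Γ″(2) are evaluated through the digamma and trigamma functions (Γ′(2) = ψ(2) and Γ″(2) = ψ(2)² + ψ′(2), since Γ(2) = 1), and p is estimated by n/m, which is our choice; the function name is also ours.

```python
import numpy as np
from scipy.special import digamma, polygamma
from scipy.stats import norm

def asymptotic_ci(alpha_hat, th1, th2, n, m, level=0.95):
    """Asymptotic confidence interval for R based on Theorem 2."""
    g1 = digamma(2.0)                            # Gamma'(2)
    g2 = digamma(2.0) ** 2 + polygamma(1, 2.0)   # Gamma''(2)
    p = n / m
    a11 = ((1 + p) * (1 + g2) + p * np.log(th1) ** 2 + np.log(th2) ** 2
           + 2 * g1 * (p * np.log(th1) + np.log(th2))) / alpha_hat ** 2
    a12 = -p * (np.log(th1) + g1) / (th1 * alpha_hat)
    a13 = -(np.log(th2) + g1) / (th2 * alpha_hat)
    a22, a33 = 1 / th1 ** 2, 1 / th2 ** 2
    uA = a11 * a22 * a33 - a13 ** 2 * a22 - a12 ** 2 * a33
    B = (th1 ** 2 * (a11 * a22 - a12 ** 2) + th2 ** 2 * (a11 * a33 - a13 ** 2)
         - 2 * th1 * th2 * a12 * a13) / (uA * (th1 + th2) ** 4)
    R = th2 / (th1 + th2)
    half = norm.ppf(0.5 + level / 2) * np.sqrt(B / n)
    return R - half, R + half
```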

5.2. Bootstrap Confidence Intervals

In this subsection, two confidence intervals based on bootstrap methods are proposed: (i) the percentile bootstrap method (Boot-p) (see [19]) and (ii) the bootstrap-t method (Boot-t) (see [20]).
The algorithms for constructing the confidence intervals of R are as follows:
(i) Boot-p method:
Step 1: From the samples x1, x2, ..., xn and y1, y2, ..., ym, compute α̂, θ̂1 and θ̂2.
Step 2: Generate a bootstrap sample x1*, ..., xn* using α̂ and θ̂1, and a bootstrap sample y1*, ..., ym* using α̂ and θ̂2. Based on x1*, ..., xn* and y1*, ..., ym*, compute the estimate of R, say R̂*.
Step 3: Repeat Step 2 NBOOT times.
Step 4: Let H1(x) = P(R̂* ≤ x) be the cumulative distribution function (cdf) of R̂*. Define R̂Boot-p(x) = H1⁻¹(x) for a given x. Thus, the approximate 100(1 − γ)% C.I. of R is given by:
$$\Big(\hat R_{\mathrm{Boot}\text{-}p}\big(\tfrac{\gamma}{2}\big),\ \hat R_{\mathrm{Boot}\text{-}p}\big(1 - \tfrac{\gamma}{2}\big)\Big).$$
Note: in this paper, R̂* can be computed using (14) in Step 2.
(ii) Boot-t method:
Step 1: From the samples x1, ..., xn and y1, ..., ym, compute α̂, θ̂1 and θ̂2.
Step 2: Generate a bootstrap sample x1*, ..., xn* using α̂ and θ̂1, and a bootstrap sample y1*, ..., ym* using α̂ and θ̂2. Based on x1*, ..., xn* and y1*, ..., ym*, compute the estimate of R as R̂* and the statistic
$$T^{*} = \frac{\sqrt{n}\,(\hat R^{*} - \hat R)}{\sqrt{Var(\hat R^{*})}}.$$
Step 3: Repeat Step 2 NBOOT times.
Step 4: Let H2(x) = P(T* ≤ x) be the cumulative distribution function (cdf) of T*. Define R̂Boot-t(x) = R̂ + n⁻¹ᐟ²√(Var(R̂)) H2⁻¹(x) for a given x. Thus, the approximate 100(1 − γ)% C.I. of R is given by:
$$\Big(\hat R_{\mathrm{Boot}\text{-}t}\big(\tfrac{\gamma}{2}\big),\ \hat R_{\mathrm{Boot}\text{-}t}\big(1 - \tfrac{\gamma}{2}\big)\Big).$$
Note: in this paper, Var(R̂*) can be obtained using Theorem 2 in Step 2.
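A minimal Python sketch of the Boot-p interval. The bootstrap samples in Step 2 are drawn from the fitted model by inverting the IW cdf, and mle_R denotes the estimator sketched in Section 3; the function names and the seed are ours.

```python
import numpy as np

def rinvweibull(size, alpha, theta, rng):
    """Draw IW(alpha, theta) variates by inverting F(x) = exp(-x^(-alpha)/theta)."""
    u = rng.uniform(size=size)
    return (-theta * np.log(u)) ** (-1.0 / alpha)

def boot_p_ci(x, y, nboot=300, level=0.95, seed=0):
    """Percentile bootstrap (Boot-p) confidence interval for R."""
    rng = np.random.default_rng(seed)
    _, a, t1, t2 = mle_R(x, y)           # Step 1: fitted alpha, theta1, theta2
    n, m = len(x), len(y)
    Rs = np.empty(nboot)
    for k in range(nboot):               # Steps 2-3: NBOOT resamples
        xb = rinvweibull(n, a, t1, rng)
        yb = rinvweibull(m, a, t2, rng)
        Rs[k] = mle_R(xb, yb)[0]         # R-hat-star via (14)
    # Step 4: percentiles of the bootstrap distribution H_1
    return tuple(np.quantile(Rs, [(1 - level) / 2, (1 + level) / 2]))
```

The Boot-t interval is obtained analogously by studentizing each R̂* with the variance estimate from Theorem 2 and taking percentiles of the resulting statistics.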

6. Bayesian Inference on R

In this section, the Bayes estimate of R is obtained under the assumption that the shape parameter α and the scale parameters θ1 and θ2 are random variables. From the likelihood function in Section 3, we note that α has a positive exponent while θ1 and θ2 have negative exponents, so we assume that θ1 and θ2 have independent inverse gamma priors and that α follows a gamma distribution. This family is chosen so that prior-to-posterior updating yields a posterior in the same family:
$$\pi(\theta_1) = \frac{b_1^{\,a_1}}{\Gamma(a_1)}\,\theta_1^{-(1+a_1)}\, e^{-b_1/\theta_1}, \quad \theta_1 > 0,$$
$$\pi(\theta_2) = \frac{b_2^{\,a_2}}{\Gamma(a_2)}\,\theta_2^{-(1+a_2)}\, e^{-b_2/\theta_2}, \quad \theta_2 > 0,$$
where all the hyper-parameters ai and bi (i = 1, 2) are assumed to be known and non-negative.
The prior density function of α is denoted by π(α), and we assume that it follows a Gamma(0, 1) distribution.
Based on the above assumptions, the likelihood function is
$$L(data \mid \alpha, \theta_1, \theta_2) = \frac{\alpha^{m+n}}{\theta_1^{\,n}\theta_2^{\,m}}\Big(\prod_{i=1}^{n} x_i^{-\alpha-1}\Big)\Big(\prod_{j=1}^{m} y_j^{-\alpha-1}\Big)\, e^{-\frac{1}{\theta_1}\sum_{i=1}^{n} x_i^{-\alpha}}\, e^{-\frac{1}{\theta_2}\sum_{j=1}^{m} y_j^{-\alpha}}.$$
The joint density of the data, α, θ1 and θ2 becomes
$$P(data, \alpha, \theta_1, \theta_2) = L(data \mid \alpha, \theta_1, \theta_2)\,\pi(\alpha)\,\pi(\theta_1)\,\pi(\theta_2).$$
Therefore, the joint posterior density of α, θ1 and θ2 given the data is
$$P(\alpha, \theta_1, \theta_2 \mid data) = \frac{P(data, \alpha, \theta_1, \theta_2)}{\int_0^\infty\!\int_0^\infty\!\int_0^\infty P(data, \alpha, \theta_1, \theta_2)\, d\alpha\, d\theta_1\, d\theta_2}.$$
Since the expression (35) cannot be written in closed form, the Bayes estimate of R and the corresponding credible interval of R are derived using the Gibbs sampling technique. Note that
$$P(\alpha, \theta_1, \theta_2 \mid data) \propto \alpha^{m+n-1}\,\theta_1^{-(n+1+a_1)}\,\theta_2^{-(m+1+a_2)}\Big(\prod_{i=1}^{n} x_i^{-\alpha-1}\Big)\Big(\prod_{j=1}^{m} y_j^{-\alpha-1}\Big)\exp\Big[-\frac{1}{\theta_1}\Big(\sum_{i=1}^{n} x_i^{-\alpha} + b_1\Big) - \frac{1}{\theta_2}\Big(\sum_{j=1}^{m} y_j^{-\alpha} + b_2\Big) - \alpha\Big].$$
The full conditional posterior pdfs of α, θ1 and θ2 are obtained from P(α, θ1, θ2 | data) as follows:
$$\theta_1 \mid \alpha, \theta_2, data \sim IG\Big(n + a_1,\ b_1 + \sum_{i=1}^{n} x_i^{-\alpha}\Big), \qquad \theta_2 \mid \alpha, \theta_1, data \sim IG\Big(m + a_2,\ b_2 + \sum_{j=1}^{m} y_j^{-\alpha}\Big),$$
$$f_\alpha(\alpha \mid \theta_1, \theta_2, data) \propto \alpha^{m+n-1}\Big(\prod_{i=1}^{n} x_i^{-\alpha-1}\Big)\Big(\prod_{j=1}^{m} y_j^{-\alpha-1}\Big)\exp\Big[-\frac{1}{\theta_1}\Big(\sum_{i=1}^{n} x_i^{-\alpha} + b_1\Big) - \frac{1}{\theta_2}\Big(\sum_{j=1}^{m} y_j^{-\alpha} + b_2\Big) - \alpha\Big].$$
The full conditional of α is not of a standard form, so we use the Metropolis–Hastings method with a normal proposal distribution (on the log scale) to generate random numbers from it.
The Gibbs sampling algorithm is described as follows:
Step 1: Start with an initial guess (α(0), θ1(0), θ2(0)).
Step 2: Set t = 1.
Step 3: Generate θ1(t) from IG(n + a1, b1 + Σi xi^(−α(t−1))).
Step 4: Generate θ2(t) from IG(m + a2, b2 + Σj yj^(−α(t−1))).
Step 5: Using the Metropolis–Hastings method, generate α(t) from fα(· | θ1(t), θ2(t), data):
  • Generate a candidate value δ from N(ln α(t−1), 1) and set α* = exp(δ).
  • Calculate p = min(1, fα(α* | θ1(t), θ2(t), data) / fα(α(t−1) | θ1(t), θ2(t), data)).
  • Set α(t) = α* with probability p; otherwise, set α(t) = α(t−1).
Step 6: Compute R(t) = θ2(t)/(θ1(t) + θ2(t)).
Step 7: Set t = t + 1.
Step 8: Repeat Steps 3–7 M times.
The approximate posterior mean of R is
$$\hat E(R \mid data) = \frac{1}{M}\sum_{t=1}^{M} R^{(t)},$$
and the approximate posterior variance of R is
$$\widehat{Var}(R \mid data) = \frac{1}{M}\sum_{t=1}^{M}\Big(R^{(t)} - \hat E(R \mid data)\Big)^{2}.$$
Using the method proposed by [21], we construct the 100(1 − γ)% highest posterior density (HPD) credible interval as
$$\Big(R_{[\frac{\gamma}{2} M]},\ R_{[(1-\frac{\gamma}{2}) M]}\Big),$$
where R_[γM/2] and R_[(1−γ/2)M] are the [γM/2]-th and the [(1−γ/2)M]-th smallest values of {R(t), t = 1, 2, ..., M}.
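A compact Python sketch of Steps 1–8. Inverse gamma variates are drawn as reciprocals of gamma variates, and the acceptance probability follows Step 5 exactly as stated; note that a strict Metropolis–Hastings treatment of the log-scale proposal would additionally multiply the ratio by the Jacobian α*/α(t−1). Function and variable names are ours.

```python
import numpy as np

def gibbs_R(x, y, a1=1e-4, b1=1e-4, a2=1e-4, b2=1e-4, M=1000, alpha0=1.0, seed=0):
    """Gibbs sampler returning M draws of R = theta2/(theta1 + theta2)."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    slx, sly = np.log(x).sum(), np.log(y).sum()

    def log_fa(a, th1, th2):
        # log of the (unnormalized) full conditional of alpha
        return ((m + n - 1) * np.log(a) - (a + 1) * (slx + sly)
                - ((x ** -a).sum() + b1) / th1
                - ((y ** -a).sum() + b2) / th2 - a)

    alpha, Rs = alpha0, np.empty(M)
    for t in range(M):
        # Steps 3-4: IG draws via reciprocal gamma (numpy gamma uses scale = 1/rate)
        th1 = 1.0 / rng.gamma(n + a1, 1.0 / (b1 + (x ** -alpha).sum()))
        th2 = 1.0 / rng.gamma(m + a2, 1.0 / (b2 + (y ** -alpha).sum()))
        # Step 5: Metropolis-Hastings move for alpha with candidate exp(N(ln alpha, 1))
        cand = np.exp(rng.normal(np.log(alpha), 1.0))
        if np.log(rng.uniform()) < log_fa(cand, th1, th2) - log_fa(alpha, th1, th2):
            alpha = cand
        Rs[t] = th2 / (th1 + th2)        # Step 6
    return Rs
```

The posterior mean, variance and credible interval of R are then read directly off the returned draws, e.g., Rs.mean() and the appropriate order statistics of np.sort(Rs).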

7. Numerical Simulations and Data Analysis

In this section, we present a Monte Carlo simulation study and a real data example to illustrate the estimation methods proposed in the preceding sections.

7.1. Numerical Simulation Study

Since the performances of the different methods cannot be compared theoretically, we present some simulation results. We mainly compute the biases and mean square errors (MSEs) of the MLEs, AMLEs and Bayes estimates. The asymptotic C.I. of R and the two bootstrap C.I.s are obtained, and we also compute the Bayes estimates and HPD credible intervals of R. Here, we take a1 = a2 = b1 = b2 = 0.0001. We consider sample sizes (n, m) = (10, 10), (20, 15), (25, 25), (30, 40), (50, 50). For the parameter values, we take θ2 = 1, θ1 = 0.5, 1, 1.5, 2, 3 and α = 2. All the results are based on 1000 replications. For the bootstrap methods, we compute the confidence intervals based on 300 resamples. The Bayes estimates are based on M = 1000 samples. In each case, the nominal level for the C.I.s and credible intervals is 0.95.
The average biases and MSEs of the MLEs, AMLEs and Bayes estimates over the 1000 replications are reported in Table 1. From Table 1, we find that the Bayes estimates are almost as efficient as the MLEs and the AMLEs for all sample sizes. Interestingly, in most cases the MSEs of the Bayes estimates are smaller than those of the MLEs and AMLEs. The biases and MSEs of the MLEs and AMLEs are very close. As the sample sizes (n, m) increase, the MSEs of all estimates decrease, as expected.
Table 2 reports the 95% asymptotic C.I. of R together with the bootstrap C.I.s and the HPD credible interval; we report the average confidence/credible lengths and the coverage probabilities. From Table 2, the coverage probabilities approach the nominal level of 95% as the sample sizes increase. The intervals based on the MLEs perform best, followed by those based on the AMLEs and the Bayes estimates. Interestingly, the HPD credible intervals provide the highest coverage probabilities, and the Boot-p confidence intervals perform better than the Boot-t intervals in terms of coverage probability. One caveat is that the bootstrap methods depend on the number of resamples. For small sample sizes (n, m), the coverage probabilities for the MLEs and AMLEs fall below the nominal level, but they perform well as the sample sizes increase.

7.2. Data Analysis

We consider a real data set to illustrate the inference methods discussed in this article. The strength data sets in Table 3 and Table 4 were analyzed previously by [3,22]. We know that if the random variable X follows W(α, θ), then the random variable T = 1/X follows IW(α, θ). Hence, we obtain the corresponding data sets from the inverse Weibull distribution, presented in Table 5 and Table 6. We analyze the data after adding 0.5 to both data sets. We fit the inverse Weibull model to the two modified data sets separately. The estimated shape and scale parameters, log-likelihood values, Kolmogorov-Smirnov (K-S) distances and corresponding p-values are presented in Table 7. The expected and observed frequencies based on the fitted models are presented in Table 8 and Table 9, and the corresponding chi-square values are 5.9914 and 5.9915. Evidently, the inverse Weibull model fits both Data Set 1 and Data Set 2 very well.
The K-S values and the corresponding p-values in Table 10 show that the inverse Weibull models with equal shape parameters also fit the modified data sets reasonably well. Hence, we cannot reject the null hypothesis that the two shape parameters are equal.
Based on Equations (14) and (29), the MLE and AMLE of R are 0.7576 and 0.7571, respectively. The 95% confidence intervals based on the MLE, the AMLE, the Boot-p method and the Boot-t method are (0.6917, 0.8235), (0.6911, 0.8231), (0.6993, 0.8197) and (0.7015, 0.8421), respectively.
The Bayes estimate of R is obtained from Equation (36). As in the previous sections, we assume that θ1 and θ2 have independent IG priors, α has a Gamma(0, 1) prior, and a1 = a2 = b1 = b2 = 0.0001. Under these assumptions, the Bayes estimate of R is 0.7437 and the 95% HPD credible interval of R is (0.6690, 0.8102).

8. Inference on R with All Different Parameters

In the sections above, the shape parameters were taken to be equal. To broaden the scope of the paper, in this section we study the inference of R when all the parameters differ. We consider the problem of estimating R = P(Y < X) under the assumption that X ∼ IW(α1, θ1) and Y ∼ IW(α2, θ2). Then, it can be calculated that
$$R = P(Y < X) = 1 - \int_0^\infty \frac{\alpha_2}{\theta_2}\, y^{-\alpha_2-1}\, e^{-y^{-\alpha_2}/\theta_2}\, e^{-y^{-\alpha_1}/\theta_1}\, dy.$$
To compute the MLE of R, we first obtain the MLEs of the parameters. Suppose X1, X2, ..., Xn is a random sample from IW(α1, θ1) and Y1, Y2, ..., Ym is a random sample from IW(α2, θ2). The joint likelihood function is:
$$\ell(\alpha_1, \theta_1, \alpha_2, \theta_2) = \frac{\alpha_1^{\,n}}{\theta_1^{\,n}}\,\frac{\alpha_2^{\,m}}{\theta_2^{\,m}}\Big(\prod_{i=1}^{n} x_i^{-\alpha_1-1}\Big)\Big(\prod_{j=1}^{m} y_j^{-\alpha_2-1}\Big)\, e^{-\frac{1}{\theta_1}\sum_{i=1}^{n} x_i^{-\alpha_1}}\, e^{-\frac{1}{\theta_2}\sum_{j=1}^{m} y_j^{-\alpha_2}}.$$
Then, the log-likelihood function is
$$L(\alpha_1, \theta_1, \alpha_2, \theta_2) = n\ln\alpha_1 + m\ln\alpha_2 - n\ln\theta_1 - m\ln\theta_2 - (\alpha_1+1)\sum_{i=1}^{n}\ln x_i - (\alpha_2+1)\sum_{j=1}^{m}\ln y_j - \frac{1}{\theta_1}\sum_{i=1}^{n} x_i^{-\alpha_1} - \frac{1}{\theta_2}\sum_{j=1}^{m} y_j^{-\alpha_2}.$$
Then, proceeding as before, we can obtain point and interval estimates of R using the MLE, AMLE and Bayesian methods, as well as bootstrap confidence intervals of R; the reliability itself must now be evaluated numerically, as sketched below.
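Unlike the equal-shape case, the integral above has no closed form in general, so R must be evaluated numerically. A sketch using quadrature (the function name is ours):

```python
import numpy as np
from scipy.integrate import quad

def reliability_iw(alpha1, theta1, alpha2, theta2):
    """R = P(Y < X) for X ~ IW(alpha1, theta1), Y ~ IW(alpha2, theta2)."""
    def integrand(v):
        f_y = alpha2 / theta2 * v ** (-alpha2 - 1) * np.exp(-v ** -alpha2 / theta2)
        F_x = np.exp(-v ** -alpha1 / theta1)     # P(X <= v)
        return f_y * F_x
    val, _ = quad(integrand, 0.0, np.inf)        # integral of f_Y * F_X = P(X < Y)
    return 1.0 - val
```

With α1 = α2, the function reproduces the closed form θ2/(θ1 + θ2) of Section 3, which provides a simple sanity check.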

9. Conclusions

In this paper, we have addressed the estimation of R = P(Y < X) for the inverse Weibull distribution, assuming independent inverse Weibull random variables with a common shape parameter but different scale parameters.
We obtain the maximum likelihood estimator of R and its asymptotic distribution. Since the MLEs do not have explicit forms, we also propose the approximate maximum likelihood estimator of R. A confidence interval for R is obtained from the asymptotic distribution, and two bootstrap confidence intervals are constructed as well. Using the Gibbs sampling technique, we present the Bayes estimator of R and the corresponding credible interval, with the Metropolis–Hastings algorithm (using a normal proposal distribution) employed to generate random numbers from the required density. Monte Carlo simulations are conducted to compare the proposed methods, and a real dataset is analyzed. In future work, we will consider the MLE, AMLE, asymptotic C.I., bootstrap C.I.s and Bayesian inference of R = P(Y < X) for the inverse Weibull distribution based on incomplete data, such as progressively type-II censored samples.

Acknowledgments

The authors’ work was partially supported by the program for the Fundamental Research Funds for the Central Universities (2014RC042, 2015JBM109). The authors would like to thank the referees and Editor for their helpful suggestions.

Author Contributions

The authors contributed equally to this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Awad, A. Estimation of P(Y < X) in case of the double exponential distribution. 1985. [Google Scholar] [CrossRef]
  2. Kumar, K.; Krishna, H.; Garg, R. Estimation of P(Y < X) in Lindley distribution using progressively first failure censoring. Int. J. Syst. Assur. Eng. Manag. 2016, 6, 330–341. [Google Scholar]
  3. Kundu, D.; Gupta, R.D. Estimation of P[Y < X] for Weibull distributions. IEEE Trans. Reliab. 2006, 55, 270–280. [Google Scholar]
  4. Rezaei, S.; Tahmasbi, R.; Mahmoodi, M. Estimation of P[Y < X] for generalized Pareto distribution. J. Stat. Plan. Inference 2010, 140, 480–494. [Google Scholar]
  5. Kizilaslan, F. Classical and Bayesian estimation of reliability in a multicomponent stress-strength model based on a general class of inverse exponentiated distributions. Stat. Pap. 2016, 1–32. [Google Scholar] [CrossRef]
  6. Kizilaslan, F.; Nadar, M. Classical and Bayesian Estimation of Reliability in Multicomponent Stress-Strength Model Based on Weibull Distribution. Rev. Colomb. Estad. 2015, 38, 467–484. [Google Scholar] [CrossRef]
  7. Ali, S. On the Mean Residual Life Function and Stress and Strength Analysis under Different Loss Function for Lindley Distribution. J. Qual. Reliab. Eng. 2013, 2013, 190437. [Google Scholar] [CrossRef]
  8. Kizilaslan, F. Classical and Bayesian estimation of reliability in a multicomponent stress-strength model based on the proportional reversed hazard rate mode. Math. Comput. Simul. 2017, 136, 36–62. [Google Scholar] [CrossRef]
  9. Shawky, A.I.; Al-Gashgari, F.H. Bayesian and non-bayesian estimation of stress–strength model for Pareto type I distribution. Iran. J. Sci. Technol. Trans. A Sci. 2013, 37, 335–342. [Google Scholar]
  10. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Distributions. In Continuous Univariate Distributions, 2nd ed.; Wiley: New York, NY, USA, 1995; Volume 2. [Google Scholar]
  11. Drapella, A. The complementary Weibull distribution: Unknown or just forgotten? Qual. Reliab. Eng. Int. 1993, 9, 383–385. [Google Scholar] [CrossRef]
  12. Keller, A.Z.; Kamath, A.R.R.; Perera, U.D. Reliability analysis of CNC machine tools. Reliab. Eng. 1982, 3, 449–473. [Google Scholar] [CrossRef]
  13. Jazi, M.A.; Lai, C.D.; Alamatsaz, M.H. A discrete inverse Weibull distribution and estimation of its parameters. Stat. Methodol. 2010, 7, 121–132. [Google Scholar] [CrossRef]
  14. Sultan, K.S.; Ismail, M.A.; Al-Moisheer, A.S. Mixture of two inverse Weibull distributions: Properties and estimation. Comput. Stat. Data Anal. 2007, 51, 5377–5387. [Google Scholar] [CrossRef]
  15. Khan, M.S.; Pasha, G.R.; Pasha, A.H. Theoretical analysis of inverse Weibull distribution. WSEAS Trans. Math. 2008, 7, 30–38. [Google Scholar]
  16. de Gusmão, F.R.S.; Ortega, E.M.M.; Cordeiro, G.M. The generalized inverse Weibull distribution. Stat. Pap. 2011, 52, 591–619. [Google Scholar] [CrossRef]
  17. Mohie El-Din, M.M.; Riad, F.H. Estimation and prediction for the inverse Weibull distribution based on records. J. Adv. Res. Stat. Probab. 2011, 2, 20–27. [Google Scholar]
  18. Arnold, B.; Balakrishnan, N. Relations, Bounds and Approximations for Order Statistics. Available online: http://www.springer.com/gp/book/9780387969756 (accessed on 21 June 2017).
  19. Efron, B. The Jackknife, the Bootstrap and Other Resampling Plans. Available online: http://epubs.siam.org/doi/book/10.1137/1.9781611970319 (accessed on 21 June 2017).
  20. Hall, P. Rejoinder: Theoretical comparison of bootstrap confidence intervals. Ann. Stat. 1988, 16, 927–953. [Google Scholar] [CrossRef]
  21. Chen, M.H.; Shao, Q.M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92. [Google Scholar] [CrossRef]
  22. Surles, J.G.; Padgett, W.J. Inference for reliability and stress-strength for a scaled Burr type X distribution. Lifetime Data Anal. 2001, 7, 187–200. [Google Scholar] [CrossRef] [PubMed]
Table 1. Biases and mean square errors (MSEs) of the maximum likelihood estimate (MLE), approximate maximum likelihood estimate (AMLE) and Bayes estimate of R, when α = 2, θ2 = 1 and for different values of θ1.
(n, m)     Estimator   θ1 = 0.5          θ1 = 1            θ1 = 1.5          θ1 = 2            θ1 = 3
(10, 10)   MLE         0.0919 (0.0165)   −0.0094 (0.0079)  −0.0082 (0.0108)  −0.0223 (0.0137)  −0.0098 (0.0090)
           AMLE        0.0881 (0.0158)   −0.0088 (0.0078)  −0.0207 (0.0137)  −0.0145 (0.0143)  −0.0054 (0.0091)
           Bayes       0.0721 (0.0120)   −0.0084 (0.0065)  0.0018 (0.0083)   −0.0036 (0.0111)  0.0138 (0.0078)
(20, 15)   MLE         0.0281 (0.0078)   0.0095 (0.0074)   −0.0098 (0.0079)  −0.0188 (0.0091)  −0.0127 (0.0065)
           AMLE        0.0317 (0.0078)   0.0061 (0.0075)   −0.0068 (0.0078)  −0.0161 (0.0095)  −0.0079 (0.0065)
           Bayes       0.0207 (0.0086)   0.0090 (0.0064)   0.0034 (0.0079)   −0.0056 (0.0078)  0.0043 (0.0060)
(25, 25)   MLE         0.0132 (0.0037)   0.0060 (0.0025)   0.0025 (0.0067)   0.0057 (0.0045)   −0.0270 (0.0049)
           AMLE        0.0132 (0.0038)   0.0062 (0.0024)   0.0040 (0.0068)   0.0047 (0.0045)   −0.0242 (0.0049)
           Bayes       0.0055 (0.0036)   0.0061 (0.0022)   0.0076 (0.0063)   0.0128 (0.0041)   −0.0165 (0.0045)
(30, 40)   MLE         −0.0094 (0.0014)  −0.0039 (0.0010)  −0.0714 (0.0051)  −0.0048 (0.0024)  −0.0510 (0.0038)
           AMLE        −0.0092 (0.0014)  −0.0064 (0.0011)  −0.0760 (0.0051)  −0.0060 (0.0023)  −0.0504 (0.0038)
           Bayes       −0.0135 (0.0015)  −0.0040 (0.0009)  −0.065 (0.0042)   0.0058 (0.0023)   −0.0302 (0.0035)
(50, 50)   MLE         −0.0182 (0.0005)  −0.0087 (0.0007)  −0.0033 (0.0038)  0.0007 (0.0017)   −0.0035 (0.0027)
           AMLE        −0.0192 (0.0005)  −0.0080 (0.0006)  −0.0033 (0.0039)  0.0012 (0.0017)   −0.0022 (0.0027)
           Bayes       −0.0217 (0.0006)  −0.0079 (0.0006)  −0.0008 (0.0036)  0.0051 (0.0015)   0.0023 (0.0025)
In each cell, the average bias is given with the corresponding MSE in brackets.
Table 2. Average confidence lengths and coverage probabilities.
(n, m)     Method   θ1 = 0.5        θ1 = 1          θ1 = 1.5        θ1 = 2          θ1 = 3
(10, 10)   MLE      0.3879 (0.90)   0.4205 (0.91)   0.4074 (0.90)   0.3880 (0.93)   0.3377 (0.89)
           AMLE     0.3894 (0.90)   0.4206 (0.92)   0.4079 (0.90)   0.3907 (0.93)   0.3403 (0.90)
           Boot-p   0.4044 (0.89)   0.4526 (0.92)   0.4303 (0.91)   0.4042 (0.92)   0.3469 (0.91)
           Boot-t   0.5179 (0.90)   0.4526 (0.90)   0.5425 (0.90)   0.5240 (0.90)   0.3939 (0.90)
           Bayes    0.3997 (0.95)   0.4113 (0.95)   0.4028 (0.93)   0.3931 (0.95)   0.3546 (0.94)
(20, 15)   MLE      0.3201 (0.92)   0.3480 (0.92)   0.3427 (0.92)   0.3176 (0.92)   0.2862 (0.91)
           AMLE     0.3205 (0.93)   0.3480 (0.93)   0.3430 (0.94)   0.3186 (0.93)   0.2873 (0.91)
           Boot-p   0.3305 (0.92)   0.3657 (0.93)   0.3577 (0.94)   0.3275 (0.93)   0.2915 (0.92)
           Boot-t   0.3810 (0.90)   0.4089 (0.91)   0.4020 (0.94)   0.3814 (0.91)   0.3531 (0.90)
           Bayes    0.3281 (0.95)   0.3427 (0.93)   0.3390 (0.95)   0.3207 (0.96)   0.2953 (0.96)
(25, 25)   MLE      0.2501 (0.93)   0.2729 (0.94)   0.2634 (0.93)   0.2509 (0.96)   0.2244 (0.95)
           AMLE     0.2502 (0.93)   0.2729 (0.95)   0.2637 (0.94)   0.2512 (0.95)   0.2245 (0.95)
           Boot-p   0.2510 (0.93)   0.2774 (0.94)   0.2696 (0.92)   0.2555 (0.96)   0.2250 (0.96)
           Boot-t   0.2756 (0.93)   0.2961 (0.93)   0.2765 (0.92)   0.2755 (0.93)   0.2354 (0.95)
           Bayes    0.2537 (0.96)   0.2700 (0.96)   0.1884 (0.95)   0.2512 (0.95)   0.2286 (0.96)
(30, 40)   MLE      0.2278 (0.96)   0.2496 (0.94)   0.2437 (0.94)   0.2261 (0.96)   0.2059 (0.95)
           AMLE     0.2279 (0.96)   0.2496 (0.95)   0.2436 (0.95)   0.2263 (0.96)   0.2062 (0.94)
           Boot-p   0.2296 (0.95)   0.2522 (0.95)   0.2482 (0.95)   0.2275 (0.97)   0.2050 (0.94)
           Boot-t   0.2482 (0.94)   0.2682 (0.94)   0.2588 (0.95)   0.2458 (0.92)   0.2267 (0.93)
           Bayes    0.2284 (0.97)   0.2470 (0.95)   0.2423 (0.97)   0.2272 (0.97)   0.2090 (0.95)
(50, 50)   MLE      0.1788 (0.96)   0.1948 (0.96)   0.1887 (0.95)   0.1780 (0.97)   0.1553 (0.96)
           AMLE     0.1788 (0.96)   0.1948 (0.96)   0.1887 (0.96)   0.1781 (0.96)   0.1554 (0.95)
           Boot-p   0.1789 (0.96)   0.1952 (0.95)   0.1890 (0.94)   0.0812 (0.96)   0.1598 (0.96)
           Boot-t   0.1848 (0.94)   0.2013 (0.94)   0.1942 (0.92)   0.1845 (0.95)   0.1588 (0.95)
           Bayes    0.1784 (0.97)   0.1935 (0.96)   0.1880 (0.96)   0.1778 (0.96)   0.1620 (0.97)
In each cell, the average confidence length is given with the corresponding coverage probability in brackets.
Table 3. Strength measured in GPa for carbon fibers tested under tension at gauge lengths of 20 mm.
1.312  1.314  1.479  1.552  1.700  1.803  1.861  1.865  1.944  1.958
1.966  1.997  2.006  2.021  2.027  2.055  2.063  2.098  2.140  2.179
2.224  2.240  2.253  2.270  2.272  2.274  2.301  2.301  2.359  2.382
2.382  2.426  2.434  2.435  2.478  2.490  2.511  2.514  2.535  2.554
2.566  2.570  2.586  2.629  2.633  2.642  2.648  2.684  2.697  2.726
2.770  2.773  2.800  2.809  2.818  2.821  2.848  2.880  2.954  3.012
3.067  3.084  3.090  3.096  3.128  3.233  3.433  3.585  3.585
Table 4. Strength measured in GPa for carbon fibers tested under tension at gauge lengths of 10 mm.
1.901  2.132  2.203  2.228  2.257  2.350  2.361  2.396  2.397  2.445
2.454  2.474  2.518  2.522  2.525  2.532  2.575  2.614  2.616  2.618
2.624  2.659  2.675  2.738  2.740  2.856  2.917  2.928  2.937  2.937
2.977  2.996  3.030  3.125  3.139  3.145  3.220  3.223  3.235  3.243
3.264  3.272  3.294  3.332  3.346  3.377  3.408  3.435  3.493  3.501
3.537  3.554  3.562  3.628  3.852  3.871  3.886  3.971  4.024  4.027
4.225  4.395  5.020
Table 5. Transformed Data Set 1.
0.762  0.761  0.676  0.644  0.588  0.555  0.537  0.536  0.514  0.511
0.509  0.501  0.499  0.495  0.493  0.487  0.485  0.477  0.467  0.459
0.450  0.446  0.444  0.441  0.440  0.440  0.435  0.435  0.424  0.420
0.420  0.412  0.411  0.411  0.404  0.402  0.398  0.398  0.394  0.392
0.390  0.389  0.387  0.380  0.380  0.379  0.378  0.373  0.371  0.367
0.361  0.361  0.357  0.356  0.355  0.354  0.351  0.347  0.339  0.332
0.326  0.324  0.324  0.323  0.320  0.309  0.291  0.279  0.279
Table 6. Transformed Data Set 2.
0.526  0.469  0.454  0.449  0.443  0.426  0.424  0.417  0.417  0.409
0.407  0.404  0.397  0.397  0.396  0.395  0.388  0.383  0.382  0.382
0.381  0.376  0.374  0.365  0.365  0.350  0.343  0.342  0.340  0.340
0.336  0.334  0.330  0.320  0.319  0.318  0.311  0.310  0.309  0.308
0.306  0.306  0.304  0.300  0.299  0.296  0.293  0.291  0.286  0.286
0.283  0.281  0.281  0.276  0.260  0.258  0.257  0.252  0.249  0.248
0.237  0.228  0.199
Table 7. Scale parameter, shape parameter, log-likelihood, K-S distances and p-values of the fitted inverse Weibull models to Data Sets 1 and 2.
Data Set   Scale Parameter   Shape Parameter   Log-Likelihood   K-S      p-Value
1          4.9497            12.6152           71.8967          0.0417   0.9997
2          19.0814           13.6228           79.4095          0.0846   0.7572
Table 8. Expected frequencies and observed frequencies for modified Data Set 1 when fitting the inverse Weibull model.
Interval    Expected Frequency   Observed Frequency
0.00–0.25   0.03                 0
0.25–0.40   32.15                33
0.40–0.50   24.21                24
0.50–0.70   11.23                10
0.70–∞      1.38                 2
Table 9. Expected frequencies and observed frequencies for modified Data Set 2 when fitting the inverse Weibull model.
Interval    Expected Frequency   Observed Frequency
0.00–0.19   0.01                 0
0.19–0.30   21.06                19
0.30–0.40   29.48                32
0.40–0.50   9.23                 11
0.50–∞      3.22                 1
Table 10. Scale parameter, shape parameter, log-likelihood, K-S distances and p-values of the fitted inverse Weibull models to Data Sets 1 and 2. Here, we assume that the two shape parameters are identical.
Data Set   Scale Parameter   Shape Parameter   Log-Likelihood   K-S      p-Value
1          5.3471            13.0933           71.8159          0.0424   0.9996
2          16.7168           13.0933           79.3215          0.0732   0.8878
