Open Access

*Algorithms* **2017**, *10*(2), 71; https://doi.org/10.3390/a10020071

Article

Bayesian and Classical Estimation of Stress-Strength Reliability for Inverse Weibull Lifetime Models

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China

^{*} Author to whom correspondence should be addressed.

Received: 25 May 2017 / Accepted: 16 June 2017 / Published: 21 June 2017

## Abstract


In this paper, we consider the problem of estimating the stress-strength reliability for inverse Weibull lifetime models having the same shape parameter but different scale parameters. We obtain the maximum likelihood estimator and its asymptotic distribution. Since the classical estimator does not have an explicit form, we also propose an approximate maximum likelihood estimator. The asymptotic confidence interval and two bootstrap intervals are obtained. Using the Gibbs sampling technique, the Bayes estimator and the corresponding credible interval are obtained; the Metropolis–Hastings algorithm is used to generate the required random variates. Monte Carlo simulations are conducted to compare the proposed methods, and the analysis of a real dataset is performed.

Keywords: maximum likelihood estimator; approximate maximum likelihood estimator; bootstrap confidence intervals; Bayes estimator; Metropolis–Hastings algorithm; inverse Weibull distribution

## 1. Introduction

The inference of stress-strength reliability is an important topic in statistics, with many practical applications. In stress-strength modeling, $R=P(Y<X)$ is a measure of component reliability, where X denotes the strength of the component and Y the stress it is subjected to: if $X\le Y$, the component fails or malfunctions. Such models arise, for example, in electrical and electronic systems. Many authors have studied the properties of R for various statistical models, including the double exponential, Weibull, generalized Pareto and Lindley distributions (see [1,2,3,4]). Classical and Bayesian estimation of reliability in a multicomponent stress-strength model under a general class of inverse exponentiated distributions was studied in Ref. [5], and Ref. [6] considered classical and Bayesian estimation of reliability in a multicomponent stress-strength model under the Weibull distribution. Refs. [7,8,9] also considered the stress-strength reliability problem.

The inverse Weibull (IW) distribution has attracted much attention recently. If T denotes a random variable following the Weibull model and we define $X=1/T$, then X is said to have the inverse Weibull distribution. It is a lifetime distribution used in reliability engineering, and it can model the kinds of failure rates that are quite common in reliability and biological studies. The inverse Weibull model has been referred to by many different names, such as the "Fréchet-type" distribution ([10]) and the "complementary Weibull" distribution ([11]). Ref. [11] discussed a graphical plotting technique to assess the suitability of the model. Ref. [12] presented the IW distribution for modeling reliability data; this model was further examined through the study of failures of mechanical components subject to degradation. Ref. [13] proposed a discrete inverse Weibull distribution and estimated its parameters. The mixture model of two IW distributions and its identifiability properties were studied by [14]. For a theoretical analysis of the IW distribution, we refer to [15]. Ref. [16] proposed the generalized IW distribution and derived several properties of this model. For more details on the inverse Weibull distribution, see [17].

In this paper, we focus on the estimation of the stress-strength reliability $R=P(Y<X)$, where X and Y follow inverse Weibull distributions. As far as we know, this model has not been previously studied, although we believe it plays an important role in reliability analysis.

We obtain the maximum likelihood estimator (MLE), approximate maximum likelihood estimator (AMLE) and the asymptotic distribution of the estimator. The asymptotic distribution is used to construct an asymptotic confidence interval. We also present two bootstrap confidence intervals of R. By using Gibbs sampling technique, we obtain Bayes estimator of R and its corresponding credible interval. Finally, we present a real data example to illustrate the performance of different methods.

The layout of this paper is organized as follows: in Section 2, we introduce the inverse Weibull distribution. In Section 3, we obtain the MLE of R. In Section 4, we derive an estimator of R by approximating the maximum likelihood equations. Different confidence intervals are presented in Section 5. In Section 6, Bayesian solutions are introduced. In Section 7, we compare the proposed methods using Monte Carlo simulations and provide a numerical example. Finally, in Section 8, we conclude the paper.

## 2. Inverse Weibull Distribution

The probability density function of the well-known Weibull distribution is given by

$$\begin{array}{c}\hfill f(t;\alpha ,\theta )=\frac{\alpha}{\theta}{t}^{\alpha -1}{e}^{-\frac{{t}^{\alpha}}{\theta}},\phantom{\rule{1.em}{0ex}}t>0,\end{array}$$

where $\alpha >0$ is the shape parameter and $\theta >0$ is the scale parameter.

Let T denote a random variable from the Weibull model $W(\alpha ,\theta )$, and define X as follows:

$$\begin{array}{c}\hfill X=\frac{1}{T}.\end{array}$$

The random variable X is said to have inverse Weibull distribution, and its probability density function (pdf) is given by

$$\begin{array}{c}\hfill f(x;\alpha ,\theta )=\frac{\alpha}{\theta}{x}^{-\alpha -1}{e}^{-\frac{{x}^{-\alpha}}{\theta}},\phantom{\rule{1.em}{0ex}}x>0.\end{array}$$

The cumulative distribution function (cdf) is given by

$$\begin{array}{c}\hfill F(x)={e}^{-\frac{{x}^{-\alpha}}{\theta}},\phantom{\rule{1.em}{0ex}}x>0,\end{array}$$

where $\alpha >0$ and $\theta >0$. The inverse Weibull distribution will be denoted by $IW(\alpha ,\theta )$.

## 3. Maximum Likelihood Estimator of $\mathit{R}$

In this section, we consider the problem of estimating $R=P(Y<X)$ under the assumption that $X\sim IW(\alpha ,{\theta}_{1})$ and $Y\sim IW(\alpha ,{\theta}_{2})$. Then, it can be easily calculated that

$$\begin{array}{c}\hfill R=P(Y<X)=\frac{{\theta}_{2}}{{\theta}_{1}+{\theta}_{2}}.\end{array}$$
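As a quick sanity check, the closed form above can be verified by simulation. The sketch below (Python, standard library only; function names and parameter values are ours and purely illustrative) draws from $IW(\alpha ,\theta )$ by inverting the cdf $F(x)={e}^{-{x}^{-\alpha}/\theta}$ and compares the empirical frequency of $\{Y<X\}$ with ${\theta}_{2}/({\theta}_{1}+{\theta}_{2})$; note that R does not depend on the common shape $\alpha $.

```python
import math
import random

def rinvweibull(n, alpha, theta, rng):
    """Draw n variates from IW(alpha, theta) by inverting F(x) = exp(-x^(-alpha)/theta):
    x = (-theta * ln U)^(-1/alpha) with U ~ Uniform(0, 1)."""
    return [(-theta * math.log(rng.random())) ** (-1.0 / alpha) for _ in range(n)]

def reliability_exact(theta1, theta2):
    """R = P(Y < X) = theta2 / (theta1 + theta2) under a common shape alpha."""
    return theta2 / (theta1 + theta2)

rng = random.Random(0)
alpha, theta1, theta2 = 1.5, 1.0, 2.0
N = 200_000
x = rinvweibull(N, alpha, theta1, rng)
y = rinvweibull(N, alpha, theta2, rng)
r_mc = sum(yi < xi for xi, yi in zip(x, y)) / N   # empirical P(Y < X), near 2/3 here
```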

To compute the MLE of R, we first obtain the MLEs of ${\theta}_{1}$ and ${\theta}_{2}$. Suppose ${X}_{1},{X}_{2},...,{X}_{n}$ is a random sample from $IW(\alpha ,{\theta}_{1})$ and ${Y}_{1},{Y}_{2},...,{Y}_{m}$ is a random sample from $IW(\alpha ,{\theta}_{2})$. The joint likelihood function is:

$$\begin{array}{c}\hfill l(\alpha ,{\theta}_{1},{\theta}_{2})=\frac{{\alpha}^{n+m}}{{\theta}_{1}^{n}{\theta}_{2}^{m}}\Big(\prod _{i=1}^{n}{x}_{i}^{-\alpha -1}\Big)\Big(\prod _{j=1}^{m}{y}_{j}^{-\alpha -1}\Big){e}^{-\big({\sum}_{i=1}^{n}\frac{{x}_{i}^{-\alpha}}{{\theta}_{1}}+{\sum}_{j=1}^{m}\frac{{y}_{j}^{-\alpha}}{{\theta}_{2}}\big)}.\end{array}$$

Then, the log-likelihood function is

$$\begin{array}{c}\hfill \begin{array}{cc}\hfill L(\alpha ,{\theta}_{1},{\theta}_{2})=& \phantom{\rule{3.33333pt}{0ex}}(m+n)\mathrm{ln}\alpha -n\mathrm{ln}{\theta}_{1}-m\mathrm{ln}{\theta}_{2}-(\alpha +1)\left[\sum _{i=1}^{n}\mathrm{ln}{x}_{i}+\sum _{j=1}^{m}\mathrm{ln}{y}_{j}\right]\hfill \\ & -\frac{1}{{\theta}_{1}}\sum _{i=1}^{n}{x}_{i}^{-\alpha}-\frac{1}{{\theta}_{2}}\sum _{j=1}^{m}{y}_{j}^{-\alpha}.\hfill \end{array}\end{array}$$

$\widehat{\alpha}$, $\widehat{{\theta}_{1}}$ and $\widehat{{\theta}_{2}}$, the MLEs of the parameters $\alpha $, ${\theta}_{1}$ and ${\theta}_{2}$, can be numerically obtained by solving the following equations:

$$\begin{array}{c}\frac{\partial L}{\partial \alpha}=\frac{m+n}{\alpha}-\sum _{i=1}^{n}\mathrm{ln}{x}_{i}-\sum _{j=1}^{m}\mathrm{ln}{y}_{j}+\frac{1}{{\theta}_{1}}\sum _{i=1}^{n}{x}_{i}^{-\alpha}\mathrm{ln}{x}_{i}+\frac{1}{{\theta}_{2}}\sum _{j=1}^{m}{y}_{j}^{-\alpha}\mathrm{ln}{y}_{j}=0,\hfill \end{array}$$

$$\begin{array}{c}\frac{\partial L}{\partial {\theta}_{1}}=-\frac{n}{{\theta}_{1}}+\frac{1}{{\theta}_{1}^{2}}\sum _{i=1}^{n}{x}_{i}^{-\alpha}=0,\hfill \end{array}$$

$$\begin{array}{c}\frac{\partial L}{\partial {\theta}_{2}}=-\frac{m}{{\theta}_{2}}+\frac{1}{{\theta}_{2}^{2}}\sum _{j=1}^{m}{y}_{j}^{-\alpha}=0.\hfill \end{array}$$

From (8) and (9), we obtain

$$\begin{array}{c}\hfill \widehat{{\theta}_{1}}(\alpha )=\frac{1}{n}\sum _{i=1}^{n}{x}_{i}^{-\alpha}\phantom{\rule{1.em}{0ex}}\mathrm{and}\phantom{\rule{1.em}{0ex}}\widehat{{\theta}_{2}}(\alpha )=\frac{1}{m}\sum _{j=1}^{m}{y}_{j}^{-\alpha}.\end{array}$$

Putting the expressions of $\widehat{{\theta}_{1}}(\alpha )$ and $\widehat{{\theta}_{2}}(\alpha )$ into (7), we obtain

$$\begin{array}{ccc}\hfill \frac{m+n}{\alpha}-\frac{1}{\alpha}\left[\sum _{i=1}^{n}\mathrm{ln}{x}_{i}^{\alpha}+\sum _{j=1}^{m}\mathrm{ln}{y}_{j}^{\alpha}\right]+\frac{{\sum}_{i=1}^{n}{x}_{i}^{-\alpha}\mathrm{ln}{x}_{i}}{\frac{1}{n}{\sum}_{i=1}^{n}{x}_{i}^{-\alpha}}+\frac{{\sum}_{j=1}^{m}{y}_{j}^{-\alpha}\mathrm{ln}{y}_{j}}{\frac{1}{m}{\sum}_{j=1}^{m}{y}_{j}^{-\alpha}}& =& 0.\hfill \end{array}$$

Therefore, $\widehat{\alpha}$ can be obtained as a fixed point solution of the non-linear equation of the form

$$\begin{array}{ccc}\hfill h(\alpha )& =& \alpha ,\hfill \end{array}$$

where

$$\begin{array}{ccc}\hfill h(\alpha )& =& \frac{-m-n+{\sum}_{i=1}^{n}\mathrm{ln}{x}_{i}^{\alpha}+{\sum}_{j=1}^{m}\mathrm{ln}{y}_{j}^{\alpha}}{\frac{{\sum}_{i=1}^{n}{x}_{i}^{-\alpha}\mathrm{ln}{x}_{i}}{\frac{1}{n}{\sum}_{i=1}^{n}{x}_{i}^{-\alpha}}+\frac{{\sum}_{j=1}^{m}{y}_{j}^{-\alpha}\mathrm{ln}{y}_{j}}{\frac{1}{m}{\sum}_{j=1}^{m}{y}_{j}^{-\alpha}}}.\hfill \end{array}$$

Using the simple iterative procedure ${\alpha}_{(j+1)}=h({\alpha}_{(j)})$, where ${\alpha}_{(j)}$ is the j-th iterate of $\widehat{\alpha}$, we stop the iteration when $|{\alpha}_{(j)}-{\alpha}_{(j+1)}|$ is smaller than a specified tolerance. Once we obtain $\widehat{\alpha}$, then $\widehat{{\theta}_{1}}$ and $\widehat{{\theta}_{2}}$ can be calculated from (10). Therefore, we obtain the MLE of $R=P(Y<X)$ as

$$\begin{array}{ccc}\hfill \widehat{R}& =& \frac{\frac{1}{m}{\sum}_{j=1}^{m}{y}_{j}^{-\widehat{\alpha}}}{\frac{1}{n}{\sum}_{i=1}^{n}{x}_{i}^{-\widehat{\alpha}}+\frac{1}{m}{\sum}_{j=1}^{m}{y}_{j}^{-\widehat{\alpha}}}.\hfill \end{array}$$
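The computation above is easy to implement. As a robustness tweak (an implementation choice of this sketch, not the paper's procedure), the Python code below maximizes the profile log-likelihood $L(\alpha ,\widehat{{\theta}_{1}}(\alpha ),\widehat{{\theta}_{2}}(\alpha ))$ by golden-section search; its maximizer solves the same fixed-point equation $h(\alpha )=\alpha $. Function names, the search bracket, and the simulated demo data (true $\alpha =2$, ${\theta}_{1}=1$, ${\theta}_{2}=1.5$, so $R=0.6$) are illustrative.

```python
import math
import random

def profile_loglik(a, x, y):
    """Profile log-likelihood L(a, theta1_hat(a), theta2_hat(a)), up to a constant."""
    n, m = len(x), len(y)
    t1 = sum(xi ** -a for xi in x) / n          # theta1_hat(a)
    t2 = sum(yj ** -a for yj in y) / m          # theta2_hat(a)
    slog = sum(math.log(xi) for xi in x) + sum(math.log(yj) for yj in y)
    return (n + m) * math.log(a) - n * math.log(t1) - m * math.log(t2) - (a + 1) * slog

def mle_R(x, y, lo=0.05, hi=20.0):
    """MLE of R = P(Y < X): maximize the profile log-likelihood over alpha,
    then plug theta_hat(alpha_hat) into R = theta2 / (theta1 + theta2)."""
    gr = (math.sqrt(5) - 1) / 2                 # golden-section search over [lo, hi]
    a, b = lo, hi
    while b - a > 1e-9:
        c, d = b - gr * (b - a), a + gr * (b - a)
        if profile_loglik(c, x, y) > profile_loglik(d, x, y):
            b = d
        else:
            a = c
    alpha = (a + b) / 2
    t1 = sum(xi ** -alpha for xi in x) / len(x)
    t2 = sum(yj ** -alpha for yj in y) / len(y)
    return alpha, t1, t2, t2 / (t1 + t2)

# demo on simulated data: X ~ IW(2, 1.0), Y ~ IW(2, 1.5), true R = 0.6
rng = random.Random(1)
x = [(-1.0 * math.log(rng.random())) ** -0.5 for _ in range(150)]
y = [(-1.5 * math.log(rng.random())) ** -0.5 for _ in range(150)]
alpha_hat, t1_hat, t2_hat, r_hat = mle_R(x, y)
```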

## 4. Approximate Maximum Likelihood Estimator of $\mathit{R}$

The MLEs do not have explicit forms; therefore, we approximate the likelihood equations and derive explicit estimators of the parameters.

Since the random variable X follows $IW(\alpha ,\theta )$, $V=\mathrm{ln}X$ has the extreme value distribution with pdf

$$\begin{array}{c}\hfill f(v;\mu ,\sigma )=-\frac{1}{\sigma}{e}^{\frac{v-\mu}{\sigma}-{e}^{\frac{v-\mu}{\sigma}}},\phantom{\rule{4pt}{0ex}}-\infty <v<+\infty ,\end{array}$$

where $\mu =-\frac{1}{\alpha}\mathrm{ln}\theta $ and $\sigma =-\frac{1}{\alpha}$ are the location and scale parameters, respectively. The pdf and cdf of the standard extreme value distribution are

$$\begin{array}{c}\hfill g(v)={e}^{-v-{e}^{-v}},\phantom{\rule{4pt}{0ex}}G(v)={e}^{-{e}^{-v}}.\end{array}$$

Suppose ${X}_{(1)}<{X}_{(2)}<...<{X}_{(n)}$ and ${Y}_{(1)}<{Y}_{(2)}<...<{Y}_{(m)}$ are the ordered ${X}_{i}$'s and ${Y}_{j}$'s. We adopt the following notation: ${T}_{(i)}=\mathrm{ln}{X}_{(i)}$, ${Z}_{(i)}=\frac{{T}_{(i)}-{\mu}_{1}}{-\sigma},\phantom{\rule{4pt}{0ex}}i=1,...,n$, and ${S}_{(j)}=\mathrm{ln}{Y}_{(j)}$, ${W}_{(j)}=\frac{{S}_{(j)}-{\mu}_{2}}{-\sigma},\phantom{\rule{4pt}{0ex}}j=1,...,m$, where ${\mu}_{1}=-\frac{1}{\alpha}\mathrm{ln}{\theta}_{1}$, ${\mu}_{2}=-\frac{1}{\alpha}\mathrm{ln}{\theta}_{2}$ and $\sigma =-\frac{1}{\alpha}$.

The log-likelihood function of the data ${T}_{(1)},...,{T}_{(n)}$ and ${S}_{(1)},...,{S}_{(m)}$ is

$$\begin{array}{c}\hfill L({\mu}_{1},{\mu}_{2},\sigma )\propto -(m+n)\mathrm{ln}(-\sigma )+\sum _{i=1}^{n}\mathrm{ln}g({z}_{(i)})+\sum _{j=1}^{m}\mathrm{ln}g({w}_{(j)}).\end{array}$$

Differentiating (17) with respect to ${\mu}_{1}$, ${\mu}_{2}$ and $\sigma $, the score equations are obtained as

$$\begin{array}{c}\frac{\partial L}{\partial {\mu}_{1}}=\frac{1}{\sigma}\sum _{i=1}^{n}\frac{{g}^{\prime}({z}_{(i)})}{g({z}_{(i)})}=0,\hfill \end{array}$$

$$\begin{array}{c}\frac{\partial L}{\partial {\mu}_{2}}=\frac{1}{\sigma}\sum _{j=1}^{m}\frac{{g}^{\prime}({w}_{(j)})}{g({w}_{(j)})}=0,\hfill \end{array}$$

$$\begin{array}{c}\frac{\partial L}{\partial \sigma}=-(m+n)\frac{1}{\sigma}-\frac{1}{\sigma}\sum _{i=1}^{n}\frac{{g}^{\prime}({z}_{(i)})}{g({z}_{(i)})}{z}_{(i)}-\frac{1}{\sigma}\sum _{j=1}^{m}\frac{{g}^{\prime}({w}_{(j)})}{g({w}_{(j)})}{w}_{(j)}=0.\hfill \end{array}$$

We note that the function $h({z}_{(i)})=\frac{{g}^{\prime}({z}_{(i)})}{g({z}_{(i)})}$ makes the score Equation (18) nonlinear and intricate. Thus, we approximate $h({z}_{(i)})$ by expanding it in a Taylor series around ${c}_{i}=E({Z}_{(i)})$, and likewise approximate the function $h({w}_{(j)})=\frac{{g}^{\prime}({w}_{(j)})}{g({w}_{(j)})}$ by expanding it in a Taylor series around ${d}_{j}=E({W}_{(j)})$. From [18], it is known that

$$\begin{array}{c}\hfill G({Z}_{(i)})\stackrel{d}{=}{U}_{(i)},\end{array}$$

where ${U}_{(i)}$ is the i-th order statistic from the uniform $U(0,1)$ distribution. Therefore,

$$\begin{array}{c}\hfill {Z}_{(i)}\stackrel{d}{=}{G}^{-1}({U}_{(i)}),\end{array}$$

and

$$\begin{array}{c}\hfill {c}_{i}=E{Z}_{(i)}\approx {G}^{-1}(E{U}_{(i)})={G}^{-1}(i/(n+1)).\end{array}$$

We use the following notations, ${p}_{i}=\frac{i}{n+1}$, ${\overline{p}}_{j}=\frac{j}{m+1}$; therefore, ${c}_{i}={G}^{-1}({p}_{i})=-\mathrm{ln}(-\mathrm{ln}{p}_{i})$, ${d}_{j}={G}^{-1}({\overline{p}}_{j})=-\mathrm{ln}(-\mathrm{ln}{\overline{p}}_{j}).$

Expanding the functions $h({z}_{(i)})$ and $h({w}_{(j)})$ and keeping the first two terms, we have

$$\begin{array}{c}h({z}_{(i)})=\frac{{g}^{\prime}({z}_{(i)})}{g({z}_{(i)})}\approx h({c}_{i})+{h}^{\prime}({c}_{i})({z}_{(i)}-{c}_{i})\equiv {a}_{i}-{b}_{i}{z}_{(i)},\phantom{\rule{1.em}{0ex}}i=1,...,n,\hfill \\ h({w}_{(j)})=\frac{{g}^{\prime}({w}_{(j)})}{g({w}_{(j)})}\approx h({d}_{j})+{h}^{\prime}({d}_{j})({w}_{(j)}-{d}_{j})\equiv {\overline{a}}_{j}-{\overline{b}}_{j}{w}_{(j)},\phantom{\rule{1.em}{0ex}}j=1,...,m,\hfill \end{array}$$

where

$$\begin{array}{c}{a}_{i}=h({c}_{i})-{c}_{i}{h}^{\prime}({c}_{i})=\mathrm{ln}{p}_{i}(\mathrm{ln}(-\mathrm{ln}{p}_{i})-1)-1,\hfill \\ {b}_{i}=-{h}^{\prime}({c}_{i})=-\mathrm{ln}{p}_{i},\hfill \\ {\overline{a}}_{j}=h({d}_{j})-{d}_{j}{h}^{\prime}({d}_{j})=\mathrm{ln}{\overline{p}}_{j}(\mathrm{ln}(-\mathrm{ln}{\overline{p}}_{j})-1)-1,\hfill \\ {\overline{b}}_{j}=-{h}^{\prime}({d}_{j})=-\mathrm{ln}{\overline{p}}_{j}.\hfill \end{array}$$

Therefore, (18)–(20) can be represented as

$$\begin{array}{c}\frac{\partial L}{\partial {\mu}_{1}}\approx \frac{1}{\sigma}\sum _{i=1}^{n}({a}_{i}-{b}_{i}{z}_{(i)})=0,\hfill \end{array}$$

$$\begin{array}{c}\frac{\partial L}{\partial {\mu}_{2}}\approx \frac{1}{\sigma}\sum _{j=1}^{m}({\overline{a}}_{j}-{\overline{b}}_{j}{w}_{(j)})=0,\hfill \end{array}$$

$$\begin{array}{c}\frac{\partial L}{\partial \sigma}\approx -\frac{1}{\sigma}\left(m+n+\sum _{i=1}^{n}({a}_{i}-{b}_{i}{z}_{(i)}){z}_{(i)}+\sum _{j=1}^{m}({\overline{a}}_{j}-{\overline{b}}_{j}{w}_{(j)}){w}_{(j)}\right)=0.\hfill \end{array}$$

The estimators of ${\mu}_{1}$ and ${\mu}_{2}$ can be obtained from Equations (21) and (22) as

$$\begin{array}{c}{\tilde{\mu}}_{1}={A}_{1}+{B}_{1}\tilde{\sigma},\hfill \end{array}$$

$$\begin{array}{c}{\tilde{\mu}}_{2}={A}_{2}+{B}_{2}\tilde{\sigma},\hfill \end{array}$$

where

$$\begin{array}{c}\hfill {A}_{1}=\frac{{\sum}_{i=1}^{n}{b}_{i}{T}_{(i)}}{{\sum}_{i=1}^{n}{b}_{i}},\phantom{\rule{4pt}{0ex}}{A}_{2}=\frac{{\sum}_{j=1}^{m}{\overline{b}}_{j}{S}_{(j)}}{{\sum}_{j=1}^{m}{\overline{b}}_{j}},\phantom{\rule{4pt}{0ex}}{B}_{1}=\frac{{\sum}_{i=1}^{n}{a}_{i}}{{\sum}_{i=1}^{n}{b}_{i}},\phantom{\rule{4pt}{0ex}}{B}_{2}=\frac{{\sum}_{j=1}^{m}{\overline{a}}_{j}}{{\sum}_{j=1}^{m}{\overline{b}}_{j}}.\end{array}$$

The estimator of $\sigma <0$ can be determined as the unique negative solution of the quadratic equation

$$\begin{array}{c}\hfill C{\sigma}^{2}+D\sigma -E=0,\end{array}$$

where

$$\begin{array}{c}C=(m+n)+{B}_{1}\sum _{i=1}^{n}{a}_{i}+{B}_{2}\sum _{j=1}^{m}{\overline{a}}_{j}-{B}_{1}^{2}\sum _{i=1}^{n}{b}_{i}-{B}_{2}^{2}\sum _{j=1}^{m}{\overline{b}}_{j}=m+n,\hfill \\ D=\sum _{i=1}^{n}{a}_{i}({A}_{1}-{T}_{(i)})-2{B}_{1}\sum _{i=1}^{n}{b}_{i}({A}_{1}-{T}_{(i)})+\sum _{j=1}^{m}{\overline{a}}_{j}({A}_{2}-{S}_{(j)})-2{B}_{2}\sum _{j=1}^{m}{\overline{b}}_{j}({A}_{2}-{S}_{(j)}),\hfill \\ E=\sum _{i=1}^{n}{b}_{i}{({T}_{(i)}-{A}_{1})}^{2}+\sum _{j=1}^{m}{\overline{b}}_{j}{({S}_{(j)}-{A}_{2})}^{2}>0.\hfill \end{array}$$

Therefore,

$$\begin{array}{c}\hfill \tilde{\sigma}=\frac{-D-\sqrt{{D}^{2}+4E(m+n)}}{2(m+n)}.\end{array}$$

Once $\tilde{\sigma}$ is obtained, ${\tilde{\mu}}_{1}$ and ${\tilde{\mu}}_{2}$ can be derived immediately. Hence, the AMLE of R is given by

$$\begin{array}{c}\hfill \tilde{R}=\frac{{\tilde{\theta}}_{2}}{{\tilde{\theta}}_{1}+{\tilde{\theta}}_{2}},\end{array}$$

where

$$\begin{array}{c}\hfill \tilde{\alpha}=-\frac{1}{\tilde{\sigma}},\phantom{\rule{4pt}{0ex}}{\tilde{\theta}}_{1}={e}^{\frac{1}{\tilde{\sigma}}({A}_{1}+{B}_{1}\tilde{\sigma})},\phantom{\rule{4pt}{0ex}}{\tilde{\theta}}_{2}={e}^{\frac{1}{\tilde{\sigma}}({A}_{2}+{B}_{2}\tilde{\sigma})}.\end{array}$$
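The closed-form AMLE above is straightforward to code. The Python sketch below (standard library only; the function name and simulated demo data, with true $\alpha =2$, ${\theta}_{1}=1$, ${\theta}_{2}=1.5$, are illustrative) computes the coefficients ${a}_{i},{b}_{i},{\overline{a}}_{j},{\overline{b}}_{j}$, the quantities ${A}_{k},{B}_{k},D,E$, and then $\tilde{\sigma}$ and $\tilde{R}$.

```python
import math
import random

def amle_R(x, y):
    """AMLE of R: closed-form estimator from the linearized score equations,
    with sigma_tilde = (-D - sqrt(D^2 + 4E(m+n))) / (2(m+n)) and C = m + n."""
    n, m = len(x), len(y)
    T = sorted(math.log(v) for v in x)                  # T_(i) = ln X_(i)
    S = sorted(math.log(v) for v in y)                  # S_(j) = ln Y_(j)
    p = [(i + 1) / (n + 1) for i in range(n)]
    q = [(j + 1) / (m + 1) for j in range(m)]
    a = [math.log(pi) * (math.log(-math.log(pi)) - 1) - 1 for pi in p]
    b = [-math.log(pi) for pi in p]
    abar = [math.log(qj) * (math.log(-math.log(qj)) - 1) - 1 for qj in q]
    bbar = [-math.log(qj) for qj in q]
    A1 = sum(bi * Ti for bi, Ti in zip(b, T)) / sum(b)
    A2 = sum(bj * Sj for bj, Sj in zip(bbar, S)) / sum(bbar)
    B1, B2 = sum(a) / sum(b), sum(abar) / sum(bbar)
    D = (sum(ai * (A1 - Ti) for ai, Ti in zip(a, T))
         - 2 * B1 * sum(bi * (A1 - Ti) for bi, Ti in zip(b, T))
         + sum(aj * (A2 - Sj) for aj, Sj in zip(abar, S))
         - 2 * B2 * sum(bj * (A2 - Sj) for bj, Sj in zip(bbar, S)))
    E = (sum(bi * (Ti - A1) ** 2 for bi, Ti in zip(b, T))
         + sum(bj * (Sj - A2) ** 2 for bj, Sj in zip(bbar, S)))
    sigma = (-D - math.sqrt(D * D + 4 * E * (m + n))) / (2 * (m + n))  # sigma < 0
    t1 = math.exp((A1 + B1 * sigma) / sigma)            # theta1_tilde
    t2 = math.exp((A2 + B2 * sigma) / sigma)            # theta2_tilde
    return -1.0 / sigma, t1, t2, t2 / (t1 + t2)

# demo on simulated data: X ~ IW(2, 1.0), Y ~ IW(2, 1.5), true R = 0.6
rng = random.Random(2)
x = [(-1.0 * math.log(rng.random())) ** -0.5 for _ in range(150)]
y = [(-1.5 * math.log(rng.random())) ** -0.5 for _ in range(150)]
alpha_t, t1_t, t2_t, r_t = amle_R(x, y)
```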

## 5. Confidence Intervals of R

In this section, we present an asymptotic confidence interval (C.I.) of R and two C.I.s based on the non-parametric bootstrap methods.

#### 5.1. Asymptotic Confidence Interval of R

In this subsection, we derive the asymptotic distributions of the MLE $\widehat{\theta}=(\widehat{\alpha},\widehat{{\theta}_{1}},\widehat{{\theta}_{2}})$ and of $\widehat{R}$. Based on the asymptotic distribution of $\widehat{R}$, the corresponding asymptotic confidence interval of R is obtained. We denote the exact Fisher information matrix of $\theta =(\alpha ,{\theta}_{1},{\theta}_{2})$ by $J(\theta )=-\mathbb{E}\left(I(\theta )\right)$, where $I(\theta )={\left[{I}_{ij}\right]}_{i,j=1,2,3}$, ${I}_{ij}={\partial}^{2}L/\partial {\theta}_{i}\partial {\theta}_{j}$, and L is given in (6):

$$\begin{array}{c}\hfill I(\theta )=\left(\begin{array}{ccc}\frac{{\partial}^{2}L}{\partial {\alpha}^{2}}& \frac{{\partial}^{2}L}{\partial \alpha \partial {\theta}_{1}}& \frac{{\partial}^{2}L}{\partial \alpha \partial {\theta}_{2}}\\ \frac{{\partial}^{2}L}{\partial {\theta}_{1}\partial \alpha}& \frac{{\partial}^{2}L}{\partial {\theta}_{1}^{2}}& \frac{{\partial}^{2}L}{\partial {\theta}_{1}\partial {\theta}_{2}}\\ \frac{{\partial}^{2}L}{\partial {\theta}_{2}\partial \alpha}& \frac{{\partial}^{2}L}{\partial {\theta}_{2}\partial {\theta}_{1}}& \frac{{\partial}^{2}L}{\partial {\theta}_{2}^{2}}\end{array}\right)=\left(\begin{array}{ccc}{I}_{11}& {I}_{12}& {I}_{13}\\ {I}_{21}& {I}_{22}& {I}_{23}\\ {I}_{31}& {I}_{32}& {I}_{33}\end{array}\right).\end{array}$$

It is easy to see that

$$\begin{array}{c}{I}_{11}=-\frac{m+n}{{\alpha}^{2}}-\frac{1}{{\theta}_{1}}\sum _{i=1}^{n}{(\mathrm{ln}{x}_{i})}^{2}{x}_{i}^{-\alpha}-\frac{1}{{\theta}_{2}}\sum _{j=1}^{m}{(\mathrm{ln}{y}_{j})}^{2}{y}_{j}^{-\alpha},\hfill \\ {I}_{12}={I}_{21}=-\frac{1}{{\theta}_{1}^{2}}\sum _{i=1}^{n}{x}_{i}^{-\alpha}\mathrm{ln}{x}_{i},\hfill \\ {I}_{13}={I}_{31}=-\frac{1}{{\theta}_{2}^{2}}\sum _{j=1}^{m}{y}_{j}^{-\alpha}\mathrm{ln}{y}_{j},\hfill \\ {I}_{22}=\frac{n}{{\theta}_{1}^{2}}-\frac{2}{{\theta}_{1}^{3}}\sum _{i=1}^{n}{x}_{i}^{-\alpha},\hfill \\ {I}_{33}=\frac{m}{{\theta}_{2}^{2}}-\frac{2}{{\theta}_{2}^{3}}\sum _{j=1}^{m}{y}_{j}^{-\alpha},\hfill \\ {I}_{23}={I}_{32}=0.\hfill \end{array}$$

Moreover,

$$\begin{array}{c}{J}_{11}=-\mathbb{E}(\frac{{\partial}^{2}L}{\partial {\alpha}^{2}})=\frac{1}{{\alpha}^{2}}\left[(m+n)(1+{\gamma}^{\prime \prime}(2))+n{(\mathrm{ln}{\theta}_{1})}^{2}+m{(\mathrm{ln}{\theta}_{2})}^{2}+2{\gamma}^{\prime}(2)(n\mathrm{ln}{\theta}_{1}+m\mathrm{ln}{\theta}_{2})\right],\hfill \\ {J}_{12}={J}_{21}=\frac{1}{{\theta}_{1}^{2}}\sum _{i=1}^{n}\mathbb{E}({x}_{i}^{-\alpha}\mathrm{ln}{x}_{i})=-\frac{n}{{\theta}_{1}\alpha}\left[\mathrm{ln}{\theta}_{1}+{\gamma}^{\prime}(2)\right],\hfill \\ {J}_{13}={J}_{31}=\frac{1}{{\theta}_{2}^{2}}\sum _{j=1}^{m}\mathbb{E}({y}_{j}^{-\alpha}\mathrm{ln}{y}_{j})=-\frac{m}{{\theta}_{2}\alpha}\left[\mathrm{ln}{\theta}_{2}+{\gamma}^{\prime}(2)\right],\hfill \\ {J}_{22}=\frac{n}{{\theta}_{1}^{2}},\hfill \\ {J}_{33}=\frac{m}{{\theta}_{2}^{2}},\hfill \\ {J}_{23}={J}_{32}=0,\hfill \end{array}$$

where $\gamma (\alpha )={\int}_{0}^{\infty}{x}^{\alpha -1}{e}^{-x}\mathrm{d}x$ is the Gamma function.

**Theorem 1.**

As $n\to \infty $, $m\to \infty $ and $\frac{n}{m}\to p$, then

$$\begin{array}{c}\left(\sqrt{m}(\widehat{\alpha}-\alpha ),\sqrt{n}(\widehat{{\theta}_{1}}-{\theta}_{1}),\sqrt{m}(\widehat{{\theta}_{2}}-{\theta}_{2})\right)\stackrel{d}{\to}{N}_{3}\left(0,{A}^{-1}(\alpha ,{\theta}_{1},{\theta}_{2})\right),\hfill \end{array}$$

where

$$\begin{array}{c}\hfill A(\alpha ,{\theta}_{1},{\theta}_{2})=\left(\begin{array}{ccc}{a}_{11}& {a}_{12}& {a}_{13}\\ {a}_{21}& {a}_{22}& 0\\ {a}_{31}& 0& {a}_{33}\end{array}\right),\end{array}$$

and

$$\begin{array}{c}{a}_{11}=\frac{1}{{\alpha}^{2}}\left[(1+p)(1+{\gamma}^{\prime \prime}(2))+p{(\mathrm{ln}{\theta}_{1})}^{2}+{(\mathrm{ln}{\theta}_{2})}^{2}+2{\gamma}^{\prime}(2)(p\mathrm{ln}{\theta}_{1}+\mathrm{ln}{\theta}_{2})\right]=\underset{n,m\to \infty}{\mathrm{lim}}\frac{{J}_{11}}{m},\hfill \\ {a}_{12}={a}_{21}=-\frac{\sqrt{p}}{{\theta}_{1}\alpha}\left[\mathrm{ln}{\theta}_{1}+{\gamma}^{\prime}(2)\right]=\underset{n,m\to \infty}{\mathrm{lim}}\frac{\sqrt{p}}{n}{J}_{12},\hfill \\ {a}_{13}={a}_{31}=-\frac{1}{{\theta}_{2}\alpha}\left[\mathrm{ln}{\theta}_{2}+{\gamma}^{\prime}(2)\right]=\underset{n,m\to \infty}{\mathrm{lim}}\frac{1}{m}{J}_{13},\hfill \\ {a}_{22}=\frac{1}{{\theta}_{1}^{2}}=\underset{n,m\to \infty}{\mathrm{lim}}\frac{1}{n}{J}_{22},\hfill \\ {a}_{33}=\frac{1}{{\theta}_{2}^{2}}=\underset{n,m\to \infty}{\mathrm{lim}}\frac{1}{m}{J}_{33},\hfill \\ {a}_{23}={a}_{32}=0.\hfill \end{array}$$

**Proof.** The result follows from the standard asymptotic properties of MLEs. ☐

**Theorem 2.**

As $n\to \infty $, $m\to \infty $ and $\frac{n}{m}\to p$, then

$$\begin{array}{c}\sqrt{n}(\widehat{R}-R)\to N(0,B),\hfill \end{array}$$

where

$$\begin{array}{c}\hfill B=\frac{1}{{u}_{A}{({\theta}_{1}+{\theta}_{2})}^{4}}\left[{\theta}_{1}^{2}({a}_{11}{a}_{22}-{a}_{12}{a}_{21})+{\theta}_{2}^{2}({a}_{11}{a}_{33}-{a}_{13}{a}_{31})-2{\theta}_{1}{\theta}_{2}{a}_{12}{a}_{13}\right],\end{array}$$

and ${u}_{A}={a}_{11}{a}_{22}{a}_{33}-{a}_{13}{a}_{22}{a}_{31}-{a}_{12}{a}_{21}{a}_{33}$.

**Proof.**

By using Theorem 1 and the delta method, we immediately derive the asymptotic distribution of $\widehat{R}$ as follows:

$$\begin{array}{c}\sqrt{n}(\widehat{R}-R)\to N(0,B),\hfill \end{array}$$

where

$$\begin{array}{c}\hfill B={c}_{A}^{t}{A}^{-1}{c}_{A},\end{array}$$

with

$$\begin{array}{c}\hfill {c}_{A}=\left(\begin{array}{c}\frac{\partial R}{\partial \alpha}\\ \frac{\partial R}{\partial {\theta}_{1}}\\ \frac{\partial R}{\partial {\theta}_{2}}\end{array}\right)=\frac{1}{{({\theta}_{1}+{\theta}_{2})}^{2}}\left(\begin{array}{c}0\\ -{\theta}_{2}\\ \phantom{\rule{4pt}{0ex}}{\theta}_{1}\end{array}\right),\end{array}$$

and

$$\begin{array}{c}\hfill {A}^{-1}=\frac{1}{{u}_{A}}\left(\begin{array}{ccc}{a}_{22}{a}_{33}& -{a}_{12}{a}_{33}& -{a}_{22}{a}_{13}\\ -{a}_{21}{a}_{33}& {a}_{11}{a}_{33}-{a}_{13}{a}_{31}& {a}_{21}{a}_{13}\\ -{a}_{22}{a}_{31}& {a}_{12}{a}_{31}& {a}_{11}{a}_{22}-{a}_{12}{a}_{21}\end{array}\right),\end{array}$$

$$\begin{array}{c}\hfill {u}_{A}={a}_{11}{a}_{22}{a}_{33}-{a}_{13}{a}_{22}{a}_{31}-{a}_{12}{a}_{21}{a}_{33}.\end{array}$$

Therefore,

$$\begin{array}{c}\hfill B={c}_{A}^{t}{A}^{-1}{c}_{A}=\frac{1}{{u}_{A}{({\theta}_{1}+{\theta}_{2})}^{4}}\left[({a}_{11}{a}_{33}-{a}_{13}{a}_{31}){\theta}_{2}^{2}+({a}_{11}{a}_{22}-{a}_{12}{a}_{21}){\theta}_{1}^{2}-2{\theta}_{1}{\theta}_{2}{a}_{12}{a}_{13}\right].\end{array}$$

☐

We can derive the $100(1-\gamma )\%$ confidence interval for R using Theorem 2 as

$$\begin{array}{c}\hfill \left(\widehat{R}-{z}_{1-\frac{\gamma}{2}}\sqrt{\frac{\widehat{B}}{n}},\phantom{\rule{4pt}{0ex}}\widehat{R}+{z}_{1-\frac{\gamma}{2}}\sqrt{\frac{\widehat{B}}{n}}\right),\end{array}$$

where ${z}_{\gamma}$ is the $100\gamma $-th percentile of $N(0,1)$ and $\widehat{B}$ is the estimate of B in (30). To estimate B, we plug in the MLEs of $\alpha $, ${\theta}_{1}$ and ${\theta}_{2}$:

$$\begin{array}{c}{\widehat{a}}_{11}=\frac{1}{{\widehat{\alpha}}^{2}}\left[(1+p)(1+{\gamma}^{\prime \prime}(2))+p{(\mathrm{ln}{\widehat{\theta}}_{1})}^{2}+{(\mathrm{ln}{\widehat{\theta}}_{2})}^{2}+2{\gamma}^{\prime}(2)(p\mathrm{ln}{\widehat{\theta}}_{1}+\mathrm{ln}{\widehat{\theta}}_{2})\right],\hfill \\ {\widehat{a}}_{12}={\widehat{a}}_{21}=-\frac{\sqrt{p}}{{\widehat{\theta}}_{1}\widehat{\alpha}}\left[\mathrm{ln}{\widehat{\theta}}_{1}+{\gamma}^{\prime}(2)\right],\hfill \\ {\widehat{a}}_{13}={\widehat{a}}_{31}=-\frac{1}{{\widehat{\theta}}_{2}\widehat{\alpha}}\left[\mathrm{ln}{\widehat{\theta}}_{2}+{\gamma}^{\prime}(2)\right],\hfill \\ {\widehat{a}}_{22}=\frac{1}{{\widehat{\theta}}_{1}^{2}},\hfill \\ {\widehat{a}}_{33}=\frac{1}{{\widehat{\theta}}_{2}^{2}}.\hfill \end{array}$$
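Evaluating $\widehat{B}$ requires the constants ${\gamma}^{\prime}(2)=1-{\gamma}_{E}$ (with ${\gamma}_{E}$ the Euler–Mascheroni constant) and ${\gamma}^{\prime \prime}(2)={(1-{\gamma}_{E})}^{2}+{\pi}^{2}/6-1$, which follow from the standard values $\psi (2)=1-{\gamma}_{E}$ and ${\psi}^{\prime}(2)={\pi}^{2}/6-1$. A Python sketch of the interval, assuming the MLEs have already been computed (the function name and demo inputs are illustrative):

```python
import math
from statistics import NormalDist

def asymp_ci_R(alpha, t1, t2, n, m, level=0.95):
    """Asymptotic 100(1-gamma)% CI for R from Theorem 2, evaluated at the MLEs.
    Half-width is z * sqrt(B_hat / n), since sqrt(n)(R_hat - R) -> N(0, B)."""
    g1 = 1 - 0.5772156649015329                    # gamma'(2) = 1 - Euler's gamma
    g2 = g1 * g1 + math.pi ** 2 / 6 - 1            # gamma''(2)
    p = n / m
    l1, l2 = math.log(t1), math.log(t2)
    a11 = ((1 + p) * (1 + g2) + p * l1 ** 2 + l2 ** 2
           + 2 * g1 * (p * l1 + l2)) / alpha ** 2
    a12 = -math.sqrt(p) * (l1 + g1) / (t1 * alpha)
    a13 = -(l2 + g1) / (t2 * alpha)
    a22, a33 = 1 / t1 ** 2, 1 / t2 ** 2
    uA = a11 * a22 * a33 - a13 ** 2 * a22 - a12 ** 2 * a33
    B = ((t1 ** 2 * (a11 * a22 - a12 ** 2)
          + t2 ** 2 * (a11 * a33 - a13 ** 2)
          - 2 * t1 * t2 * a12 * a13) / (uA * (t1 + t2) ** 4))
    R = t2 / (t1 + t2)
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    half = z * math.sqrt(B / n)
    return max(0.0, R - half), min(1.0, R + half)

# demo: suppose alpha_hat = 2, theta1_hat = 1, theta2_hat = 1.5, n = m = 100
lo, hi = asymp_ci_R(alpha=2.0, t1=1.0, t2=1.5, n=100, m=100)
```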

#### 5.2. Bootstrap Confidence Intervals

In this subsection, two confidence intervals based on the non-parametric bootstrap methods are proposed: (i) the percentile bootstrap method (Boot-p) (see [19]) and (ii) the bootstrap-t method (Boot-t) (see [20]).

The algorithms for conducting the confidence intervals of R are presented as follows:

- (i) Boot-p method:

Step 1: From the samples ${x}_{1},{x}_{2},...,{x}_{n}$ and ${y}_{1},{y}_{2},...,{y}_{m}$, compute $\widehat{\alpha}$, $\widehat{{\theta}_{1}}$ and $\widehat{{\theta}_{2}}$.

Step 2: Generate a bootstrap sample ${x}_{1}^{\ast},...,{x}_{n}^{\ast}$ using $\widehat{\alpha}$ and $\widehat{{\theta}_{1}}$, and a bootstrap sample ${y}_{1}^{\ast},...,{y}_{m}^{\ast}$ using $\widehat{\alpha}$ and $\widehat{{\theta}_{2}}$. Based on ${x}_{1}^{\ast},...,{x}_{n}^{\ast}$ and ${y}_{1}^{\ast},...,{y}_{m}^{\ast}$, compute the estimate of R, say ${\widehat{R}}^{\ast}$, using (14).

Step 3: Repeat Step 2 NBOOT times.

Step 4: Let ${H}_{1}(x)=P({\widehat{R}}^{\ast}\le x)$ be the cumulative distribution function (cdf) of ${\widehat{R}}^{\ast}$. Define ${\widehat{R}}_{Boot\text{-}p}(x)={H}_{1}^{-1}(x)$ for a given x. Then, the approximate $100(1-z)\%$ C.I. of R is given by

$$\begin{array}{c}\hfill \left({\widehat{R}}_{Boot\text{-}p}(\frac{z}{2}),{\widehat{R}}_{Boot\text{-}p}(1-\frac{z}{2})\right).\end{array}$$

- (ii) Boot-t method:

Step 1: From the samples ${x}_{1},...,{x}_{n}$ and ${y}_{1},...,{y}_{m}$, compute $\widehat{\alpha}$, $\widehat{{\theta}_{1}}$ and $\widehat{{\theta}_{2}}$.

Step 2: Generate bootstrap samples ${x}_{1}^{\ast},...,{x}_{n}^{\ast}$ and ${y}_{1}^{\ast},...,{y}_{m}^{\ast}$ as in the Boot-p method and compute the estimate ${\widehat{R}}^{\ast}$ of R, together with the statistic

$$\begin{array}{c}\hfill {T}^{\ast}=\frac{\sqrt{n}({\widehat{R}}^{\ast}-\widehat{R})}{\sqrt{Var({\widehat{R}}^{\ast})}},\end{array}$$

where $Var({\widehat{R}}^{\ast})$ can be obtained using Theorem 2.

Step 3: Repeat Step 2 NBOOT times.

Step 4: Let ${H}_{2}(x)=P({T}^{\ast}\le x)$ be the cdf of ${T}^{\ast}$. Define ${\widehat{R}}_{Boot\text{-}t}(x)=\widehat{R}+{n}^{-\frac{1}{2}}\sqrt{Var(\widehat{R})}{H}_{2}^{-1}(x)$ for a given x. Then, the approximate $100(1-z)\%$ C.I. of R is given by

$$\begin{array}{c}\hfill \left({\widehat{R}}_{Boot\text{-}t}(\frac{z}{2}),{\widehat{R}}_{Boot\text{-}t}(1-\frac{z}{2})\right).\end{array}$$
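The Boot-p steps can be sketched as follows (Python, standard library only). For a self-contained example, the point estimator here is a crude grid search over the profile likelihood; in practice, any of the estimators of Sections 3 and 4 would be plugged in. The function names, sample sizes, grid, and NBOOT value are illustrative and kept small for speed.

```python
import math
import random

def sample_iw(n, alpha, theta, rng):
    """Inversion sampling from IW(alpha, theta): x = (-theta ln U)^(-1/alpha)."""
    return [(-theta * math.log(rng.random())) ** (-1.0 / alpha) for _ in range(n)]

def fit_iw(x, y):
    """Crude profile-likelihood grid search over alpha (demo stand-in for the MLE)."""
    n, m = len(x), len(y)
    slog = sum(math.log(v) for v in x) + sum(math.log(v) for v in y)
    best = None
    for k in range(1, 101):                      # alpha grid: 0.1, 0.2, ..., 10.0
        a = 0.1 * k
        t1 = sum(v ** -a for v in x) / n
        t2 = sum(v ** -a for v in y) / m
        ll = (n + m) * math.log(a) - n * math.log(t1) - m * math.log(t2) - (a + 1) * slog
        if best is None or ll > best[0]:
            best = (ll, a, t1, t2)
    _, a, t1, t2 = best
    return a, t1, t2, t2 / (t1 + t2)

def boot_p_ci(x, y, level=0.95, nboot=200, seed=0):
    """Boot-p: refit on resamples drawn from the fitted IW laws, then take
    empirical percentiles of the bootstrap estimates R_hat*."""
    rng = random.Random(seed)
    a, t1, t2, _ = fit_iw(x, y)                               # Step 1
    rs = sorted(fit_iw(sample_iw(len(x), a, t1, rng),         # Steps 2-3
                       sample_iw(len(y), a, t2, rng))[3]
                for _ in range(nboot))
    z = 1 - level                                             # Step 4
    return rs[int(nboot * z / 2)], rs[min(nboot - 1, int(nboot * (1 - z / 2)))]

rng = random.Random(3)
x = sample_iw(50, 2.0, 1.0, rng)   # true R = 1.5 / 2.5 = 0.6
y = sample_iw(50, 2.0, 1.5, rng)
lo, hi = boot_p_ci(x, y)
```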

## 6. Bayesian Inference on $\mathit{R}$

In this section, the Bayes estimate of R is obtained under the assumption that the shape parameter $\alpha $ and the scale parameters ${\theta}_{1}$ and ${\theta}_{2}$ are random variables. In the likelihood function of Section 3, $\alpha $ appears with a positive exponent while ${\theta}_{1}$ and ${\theta}_{2}$ appear with negative exponents, so we assume that ${\theta}_{1}$ and ${\theta}_{2}$ have independent inverse gamma priors and that $\alpha $ follows a gamma distribution. This family is chosen so that prior-to-posterior updating yields a posterior in the same family:

$$\begin{array}{c}\hfill \pi ({\theta}_{1})=\frac{{b}_{1}^{{a}_{1}}}{\gamma ({a}_{1})}{\theta}_{1}^{-(1+{a}_{1})}{e}^{-\frac{{b}_{1}}{{\theta}_{1}}},\phantom{\rule{4pt}{0ex}}{\theta}_{1}>0,\end{array}$$

$$\begin{array}{c}\hfill \pi ({\theta}_{2})=\frac{{b}_{2}^{{a}_{2}}}{\gamma ({a}_{2})}{\theta}_{2}^{-(1+{a}_{2})}{e}^{-\frac{{b}_{2}}{{\theta}_{2}}},\phantom{\rule{4pt}{0ex}}{\theta}_{2}>0,\end{array}$$

where all the hyper-parameters ${a}_{i}$ and ${b}_{i}$ $(i=1,2)$ are assumed to be known and non-negative.

The prior density function of $\alpha $ is denoted by $\pi (\alpha )$, and we assume that $\alpha $ follows a Gamma (0, 1) distribution, i.e., the improper prior $\pi (\alpha )\propto {\alpha}^{-1}{e}^{-\alpha}$.

We have the likelihood function based on the above assumptions as

$$\begin{array}{c}\hfill L(data|\alpha ,{\theta}_{1},{\theta}_{2})={\alpha}^{m+n}{\theta}_{1}^{-n}{\theta}_{2}^{-m}(\prod _{i=1}^{n}{x}_{i}^{-\alpha -1})(\prod _{j=1}^{m}{y}_{j}^{-\alpha -1}){e}^{-\frac{1}{{\theta}_{1}}{\sum}_{i=1}^{n}{x}_{i}^{-\alpha}}{e}^{-\frac{1}{{\theta}_{2}}{\sum}_{j=1}^{m}{y}_{j}^{-\alpha}}.\end{array}$$

The joint density of the data, $\alpha $, ${\theta}_{1}$ and ${\theta}_{2}$ becomes

$$\begin{array}{c}\hfill P(data,\alpha ,{\theta}_{1},{\theta}_{2})=L(data|\alpha ,{\theta}_{1},{\theta}_{2})\pi (\alpha )\pi ({\theta}_{1})\pi ({\theta}_{2}).\end{array}$$

Therefore, the joint posterior density of $\alpha $, ${\theta}_{1}$ and ${\theta}_{2}$ given the data is

$$\begin{array}{c}\hfill P(\alpha ,{\theta}_{1},{\theta}_{2}|data)=\frac{P(data,\alpha ,{\theta}_{1},{\theta}_{2})}{{\int}_{0}^{\infty}{\int}_{0}^{\infty}{\int}_{0}^{\infty}P(data,\alpha ,{\theta}_{1},{\theta}_{2})\mathrm{d}\alpha \mathrm{d}{\theta}_{1}\mathrm{d}{\theta}_{2}}.\end{array}$$

Since expression (35) cannot be written in closed form, the Bayes estimate of R and the corresponding credible interval are derived using the Gibbs sampling technique:

$$\begin{array}{cc}\hfill P(\alpha ,{\theta}_{1},{\theta}_{2}|data)\propto & {\alpha}^{m+n-1}{\theta}_{1}^{-(n+1+{a}_{1})}{\theta}_{2}^{-(m+1+{a}_{2})}({\prod}_{i=1}^{n}{x}_{i}^{-\alpha -1})({\prod}_{j=1}^{m}{y}_{j}^{-\alpha -1})\\ & \mathrm{exp}\left\{-\frac{1}{{\theta}_{1}}({\sum}_{i=1}^{n}{x}_{i}^{-\alpha}+{b}_{1})-\frac{1}{{\theta}_{2}}({\sum}_{j=1}^{m}{y}_{j}^{-\alpha}+{b}_{2})-\alpha \right\}.\end{array}$$

The posterior pdfs of $\alpha $, ${\theta}_{1}$ and ${\theta}_{2}$ can be obtained based on the expression $P(\alpha ,{\theta}_{1},{\theta}_{2}|data)$ as the following:

$$\begin{array}{c}{\theta}_{1}|\alpha ,{\theta}_{2},data\sim IG(n+{a}_{1},{b}_{1}+\sum _{i=1}^{n}{x}_{i}^{-\alpha}),\hfill \\ {\theta}_{2}|\alpha ,{\theta}_{1},data\sim IG(m+{a}_{2},{b}_{2}+\sum _{j=1}^{m}{y}_{j}^{-\alpha}),\hfill \\ {f}_{\alpha}(\alpha |{\theta}_{1},{\theta}_{2},data)\propto {\alpha}^{m+n-1}\prod _{i=1}^{n}{x}_{i}^{-\alpha -1}\prod _{j=1}^{m}{y}_{j}^{-\alpha -1}\mathrm{exp}\left\{-\frac{1}{{\theta}_{1}}(\sum _{i=1}^{n}{x}_{i}^{-\alpha}+{b}_{1})-\frac{1}{{\theta}_{2}}(\sum _{j=1}^{m}{y}_{j}^{-\alpha}+{b}_{2})-\alpha \right\}.\hfill \end{array}$$

The posterior pdf of $\alpha $ is not of a standard form. We use the Metropolis–Hastings method with a normal proposal distribution to generate random draws from it.

The algorithm of Gibbs sampling is described as follows:

Step 1: Start with an initial guess $({\alpha}^{(0)},{\theta}_{1}^{(0)},{\theta}_{2}^{(0)})$.

Step 2: Set $t=1$.

Step 3: Generate ${\theta}_{1}^{(t)}$ from $IG(n+{a}_{1},{b}_{1}+{\sum}_{i=1}^{n}{x}_{i}^{-{\alpha}^{(t-1)}})$.

Step 4: Generate ${\theta}_{2}^{(t)}$ from $IG(m+{a}_{2},{b}_{2}+{\sum}_{j=1}^{m}{y}_{j}^{-{\alpha}^{(t-1)}})$.

Step 5: Using the Metropolis–Hastings method, generate ${\alpha}^{(t)}$ from ${f}_{\alpha}({\alpha}^{(t-1)}|{\theta}_{1}^{(t)},{\theta}_{2}^{(t)},data)$.

- Generate a new candidate value $\delta $ from $N(\mathrm{ln}{\alpha}^{(t-1)},1)$.
- Set ${\alpha}^{\prime}=\mathrm{exp}(\delta )$.
- Calculate $p=\mathrm{min}(1,\frac{{f}_{\alpha}({\alpha}^{\prime}|{\theta}_{1}^{(t)},{\theta}_{2}^{(t)},data)}{{f}_{\alpha}({\alpha}^{(t-1)}|{\theta}_{1}^{(t)},{\theta}_{2}^{(t)},data)})$.
- Set ${\alpha}^{(t)}={\alpha}^{\prime}$ with probability p; otherwise set ${\alpha}^{(t)}={\alpha}^{(t-1)}$.

Step 6: Compute ${R}^{(t)}$ from (14).

Step 7: Set $t=t+1$.

Step 8: Repeat Steps 3–7, M times.
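The steps above can be sketched in a few lines of Python. This is an illustrative implementation, not the authors' code: the names `log_f_alpha` and `gibbs_R` are ours, each $IG(a,b)$ variate is drawn as the reciprocal of a Gamma variate, and we use the fact that, for equal shape parameters, $R=P(Y<X)$ reduces to ${\theta}_{2}/({\theta}_{1}+{\theta}_{2})$ (which follows from the IW cdf $F(x)=\mathrm{exp}(-{x}^{-\alpha}/\theta )$).

```python
import numpy as np

rng = np.random.default_rng(42)

def log_f_alpha(alpha, x, y, th1, th2, b1=1e-4, b2=1e-4):
    # Log of the (unnormalised) full conditional of alpha given theta1, theta2.
    if alpha <= 0:
        return -np.inf
    n, m = len(x), len(y)
    return ((n + m - 1) * np.log(alpha)
            - (alpha + 1) * (np.log(x).sum() + np.log(y).sum())
            - (np.sum(x ** -alpha) + b1) / th1
            - (np.sum(y ** -alpha) + b2) / th2
            - alpha)

def gibbs_R(x, y, M=1000, a1=1e-4, a2=1e-4, b1=1e-4, b2=1e-4):
    """One chain of the Gibbs sampler in Steps 1-8, returning M draws of R."""
    alpha = 1.0                                   # Step 1: initial guess
    R = np.empty(M)
    for t in range(M):
        # Steps 3-4: theta_k | rest ~ IG(shape, scale), drawn as 1/Gamma
        th1 = 1.0 / rng.gamma(len(x) + a1, 1.0 / (b1 + np.sum(x ** -alpha)))
        th2 = 1.0 / rng.gamma(len(y) + a2, 1.0 / (b2 + np.sum(y ** -alpha)))
        # Step 5: Metropolis-Hastings with a N(ln alpha, 1) proposal
        cand = np.exp(rng.normal(np.log(alpha), 1.0))
        logp = (log_f_alpha(cand, x, y, th1, th2, b1, b2)
                - log_f_alpha(alpha, x, y, th1, th2, b1, b2))
        if np.log(rng.uniform()) < logp:
            alpha = cand
        # Step 6: for equal shapes, R = theta2 / (theta1 + theta2)
        R[t] = th2 / (th1 + th2)
    return R
```

In practice an initial portion of the chain would be discarded as burn-in before the posterior summaries below are computed.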

The approximate posterior mean of R is

$$\begin{array}{c}\hfill \widehat{E}(R|data)=\frac{1}{M}\sum _{t=1}^{M}{R}^{(t)},\end{array}$$

and the approximate posterior variance of R is

$$\begin{array}{c}\hfill \widehat{Var}(R|data)=\frac{1}{M}\sum _{t=1}^{M}{({R}^{(t)}-\widehat{E}(R|data))}^{2}.\end{array}$$

Using the method proposed by [21], we immediately construct the $100(1-\gamma )\%$ highest posterior density (HPD) credible interval as

$$\begin{array}{c}\hfill ({R}_{\left[\frac{\gamma}{2}M\right]},{R}_{\left[(1-\frac{\gamma}{2})M\right]}),\end{array}$$

where ${R}_{\left[\frac{\gamma}{2}M\right]}$ and ${R}_{\left[(1-\frac{\gamma}{2})M\right]}$ are the $\left[\frac{\gamma}{2}M\right]$-th and $\left[(1-\frac{\gamma}{2})M\right]$-th smallest values of $\{{R}^{(t)},t=1,2,\ldots ,M\}$.
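Given the M retained draws, the posterior mean, variance and credible limits above can be computed directly. A minimal sketch (the helper name `posterior_summary` is ours; indexing is zero-based, so the k-th smallest draw sits at index k-1):

```python
import numpy as np

def posterior_summary(R, gamma=0.05):
    """Posterior mean, variance and 100(1-gamma)% interval from M draws R."""
    M = len(R)
    mean = R.mean()                                    # (1/M) * sum R^(t)
    var = np.mean((R - mean) ** 2)                     # (1/M) * sum (R^(t) - mean)^2
    Rs = np.sort(R)
    lo = Rs[max(int(np.ceil(gamma / 2 * M)) - 1, 0)]   # [gamma/2 * M]-th smallest
    hi = Rs[int(np.ceil((1 - gamma / 2) * M)) - 1]     # [(1-gamma/2) * M]-th smallest
    return mean, var, (lo, hi)
```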

## 7. Numerical Simulations and Data Analysis

In this section, we present a Monte Carlo simulation study and a real data set to illustrate different estimation methods proposed in the preceding sections.

#### 7.1. Numerical Simulation Study

Since we cannot compare the performances of the different methods theoretically, we present some simulation results. We mainly compute the biases and mean square errors (MSEs) of the MLEs, AMLEs and Bayes estimates. The asymptotic C.I. of R and two C.I.s based on the non-parametric bootstrap methods are obtained, as are the Bayes estimates and HPD credible intervals of R. Here, we assume that ${a}_{1}={a}_{2}={b}_{1}={b}_{2}=0.0001$. We consider sample sizes $(n,m)=(10,10),(20,15),(25,25),(30,40),(50,50)$. For the parameter values, we take ${\theta}_{2}=1$, ${\theta}_{1}=0.5,1,1.5,2,3$ and $\alpha =2$. All results are based on 1000 replications. For the bootstrap methods, we compute the confidence intervals based on 300 resamplings. The Bayes estimates are based on 1000 retained samples, namely, $M=1000$. In each case, the nominal level for the C.I.s or credible intervals is 0.95.
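To generate the simulated samples, inverse Weibull variates can be drawn by inverse transform, since the cdf $F(x)=\mathrm{exp}(-{x}^{-\alpha}/\theta )$ is invertible. A small sketch (the function name `riw` is ours, not from the paper):

```python
import numpy as np

def riw(n, alpha, theta, rng):
    """Inverse-transform draws from IW(alpha, theta):
    F(x) = exp(-x**(-alpha) / theta), so x = (-theta * ln U)**(-1/alpha)."""
    u = rng.uniform(size=n)
    return (-theta * np.log(u)) ** (-1.0 / alpha)
```

One simulation replication at, say, $(n,m)=(10,10)$ would then draw `x = riw(10, 2.0, 0.5, rng)` and `y = riw(10, 2.0, 1.0, rng)` and compute the estimates of R from them.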

We report the average biases and MSEs of the MLEs, AMLEs and Bayes estimates over 1000 replications in Table 1. From Table 1, we find that the Bayes estimates are almost as efficient as the MLEs and AMLEs for all sample sizes. Interestingly, in most cases, the MSEs of the Bayes estimates are smaller than those of the MLEs or AMLEs. The biases and MSEs of the MLEs and AMLEs are very close. As the sample size $(n,m)$ increases, the MSEs of the estimates decrease, as expected.

Table 2 reports the 95% asymptotic C.I. of R, together with the C.I.s based on the bootstrap methods and the HPD credible interval. We report the average confidence/credible lengths and the coverage probabilities. From Table 2, the coverage probabilities approach the nominal level of 95% as the sample sizes increase. We observe that the MLE method is the most reliable procedure for obtaining confidence intervals, with the AMLE and Bayes intervals close behind. Interestingly, the HPD credible intervals provide the highest coverage probabilities. The Boot-p confidence intervals perform better than the Boot-t confidence intervals in terms of coverage probabilities. Note that the bootstrap methods depend on the number of resamplings. For small sample sizes $(n,m)$, the coverage probabilities for the MLEs and AMLEs are below the nominal level; as the sample sizes increase, they perform well.

#### 7.2. Data Analysis

We consider a real data set to illustrate the methods of inference discussed in this article. The strength data sets in Table 3 and Table 4 were analyzed previously by [3,22]. We know that if the random variable X follows $W(\alpha ,\theta )$, then $T=\frac{1}{X}$ follows $IW(\alpha ,\theta )$. Hence, we obtain data sets from the inverse Weibull distribution, presented in Table 5 and Table 6. We analyze the data by adding 0.5 to both data sets. We fit the inverse Weibull model to the two data sets separately. The estimated shape and scale parameters, log-likelihood values, Kolmogorov–Smirnov (K-S) distances and corresponding p-values are presented in Table 7. The expected and observed frequencies based on the fitted models are presented in Table 8 and Table 9. We also obtain chi-square values of 5.9914 and 5.9915. Clearly, the inverse Weibull model fits Data Set 1 and Data Set 2 very well.
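A K-S check of this kind can be reproduced with `scipy.stats.kstest` and the IW cdf $F(x)=\mathrm{exp}(-{x}^{-\alpha}/\theta )$. This is a sketch under our own naming (`iw_ks`), not the authors' code:

```python
import numpy as np
from scipy import stats

def iw_ks(data, alpha, theta):
    """K-S distance and p-value of an IW(alpha, theta) fit,
    using the cdf F(x) = exp(-x**(-alpha) / theta)."""
    cdf = lambda x: np.exp(-np.asarray(x, dtype=float) ** -alpha / theta)
    return stats.kstest(data, cdf)
```

Calling `iw_ks` on each data set with the fitted parameters yields the K-S distances and p-values of the kind reported in Table 7.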

The K-S values and the corresponding p-values in Table 10 show that the inverse Weibull models with equal shape parameters fit reasonably well to the modified data sets. It is clear that we cannot reject the null hypothesis that the two shape parameters are equal.

Based on Equations (14) and (29), the MLE and AMLE of R are 0.7576 and 0.7571, respectively. The 95% confidence intervals from the MLE, AMLE, Boot-p and Boot-t methods are (0.6917, 0.8235), (0.6911, 0.8231), (0.6993, 0.8197) and (0.7015, 0.8421), respectively.

The Bayesian estimate of R is also obtained, based on Equation (36). As in the previous sections, we assume that ${\theta}_{1}$ and ${\theta}_{2}$ have independent IG priors, $\alpha $ has a Gamma(0, 1) prior and ${a}_{1}={a}_{2}={b}_{1}={b}_{2}=0.0001$. Under these assumptions, the Bayesian estimate of R is 0.7437 and the 95% HPD credible interval of R is (0.6690, 0.8102).

## 8. Inference on R with All Different Parameters

In the sections above, the shape parameters are taken to be equal. To extend the analysis, in this section we study the inference on R when all parameters differ. We consider the problem of estimating $R=P(Y<X)$ under the assumption that $X\sim IW({\alpha}_{1},{\theta}_{1})$ and $Y\sim IW({\alpha}_{2},{\theta}_{2})$. It can then be calculated that

$$\begin{array}{c}\hfill R=P(Y<X)=1-{\int}_{0}^{\infty}\frac{{\alpha}_{2}}{{\theta}_{2}}{y}^{-{\alpha}_{2}-1}{e}^{-\frac{{y}^{-{\alpha}_{2}}}{{\theta}_{2}}}{e}^{-\frac{{y}^{-{\alpha}_{1}}}{{\theta}_{1}}}dy.\end{array}$$
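This expression has no closed form in general, but it can be evaluated numerically. A hedged sketch using `scipy.integrate.quad` (the wrapper name `reliability` is ours):

```python
import numpy as np
from scipy.integrate import quad

def reliability(alpha1, theta1, alpha2, theta2):
    """R = P(Y < X) for X ~ IW(alpha1, theta1), Y ~ IW(alpha2, theta2),
    by numerical integration of the expression above."""
    def integrand(y):
        return (alpha2 / theta2) * y ** (-alpha2 - 1.0) * np.exp(
            -y ** -alpha2 / theta2 - y ** -alpha1 / theta1)
    val, _ = quad(integrand, 0.0, np.inf)
    return 1.0 - val
```

For ${\alpha}_{1}={\alpha}_{2}$ the integral can be solved explicitly and R reduces to ${\theta}_{2}/({\theta}_{1}+{\theta}_{2})$, which gives a quick sanity check of the numerical result.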

To compute the MLE of R, we first obtain the MLEs of ${\alpha}_{1}$, ${\theta}_{1}$, ${\alpha}_{2}$ and ${\theta}_{2}$. Suppose ${X}_{1},{X}_{2},\cdots ,{X}_{n}$ is a random sample from $IW({\alpha}_{1},{\theta}_{1})$ and ${Y}_{1},{Y}_{2},\cdots ,{Y}_{m}$ is a random sample from $IW({\alpha}_{2},{\theta}_{2})$. The joint likelihood function is:

$$\begin{array}{c}\hfill \begin{array}{c}\hfill l({\alpha}_{1},{\theta}_{1},{\alpha}_{2},{\theta}_{2})=\frac{{\alpha}_{1}^{n}}{{\theta}_{1}^{n}}\frac{{\alpha}_{2}^{m}}{{\theta}_{2}^{m}}(\prod _{i=1}^{n}{x}_{i}^{-{\alpha}_{1}-1})(\prod _{j=1}^{m}{y}_{j}^{-{\alpha}_{2}-1}){e}^{-{\sum}_{i=1}^{n}\frac{{x}_{i}^{-{\alpha}_{1}}}{{\theta}_{1}}}{e}^{-{\sum}_{j=1}^{m}\frac{{y}_{j}^{-{\alpha}_{2}}}{{\theta}_{2}}}.\end{array}\end{array}$$

Then, the log-likelihood function is

$$\begin{array}{c}\hfill \begin{array}{cc}\hfill L({\alpha}_{1},{\theta}_{1},{\alpha}_{2},{\theta}_{2})=& n\mathrm{ln}{\alpha}_{1}+m\mathrm{ln}{\alpha}_{2}-n\mathrm{ln}{\theta}_{1}-m\mathrm{ln}{\theta}_{2}-({\alpha}_{1}+1)\sum _{i=1}^{n}\mathrm{ln}{x}_{i}-({\alpha}_{2}+1)\sum _{j=1}^{m}\mathrm{ln}{y}_{j}\hfill \\ & -\frac{1}{{\theta}_{1}}\sum _{i=1}^{n}{x}_{i}^{-{\alpha}_{1}}-\frac{1}{{\theta}_{2}}\sum _{j=1}^{m}{y}_{j}^{-{\alpha}_{2}}.\hfill \end{array}\end{array}$$
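Because this log-likelihood separates into an X-part and a Y-part, each pair $({\alpha}_{k},{\theta}_{k})$ can be estimated from its own sample. Setting $\partial L/\partial {\theta}_{1}=0$ gives ${\widehat{\theta}}_{1}=\frac{1}{n}{\sum}_{i=1}^{n}{x}_{i}^{-{\alpha}_{1}}$, and the resulting profile log-likelihood in ${\alpha}_{1}$ can be maximised numerically. A sketch (the name `iw_mle` is ours, not the paper's notation):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def iw_mle(x):
    """MLE of (alpha, theta) for one IW sample: given alpha, the score
    equation gives theta_hat(alpha) = mean(x**-alpha); we maximise the
    profile log-likelihood n*ln(alpha) - n*ln(theta_hat) - (alpha+1)*sum(ln x) - n."""
    n, slx = len(x), np.log(x).sum()

    def neg_profile(alpha):
        theta = np.mean(x ** -alpha)
        return -(n * np.log(alpha) - n * np.log(theta) - (alpha + 1.0) * slx - n)

    res = minimize_scalar(neg_profile, bounds=(1e-3, 100.0), method="bounded")
    alpha_hat = res.x
    return alpha_hat, float(np.mean(x ** -alpha_hat))
```

Applying `iw_mle` to the x-sample and y-sample separately gives $({\widehat{\alpha}}_{1},{\widehat{\theta}}_{1})$ and $({\widehat{\alpha}}_{2},{\widehat{\theta}}_{2})$, which can then be plugged into the expression for R.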

Then, similarly to the previous approaches, we can obtain point and interval estimates of R using the MLE, AMLE and Bayesian methods, and we can also obtain bootstrap confidence intervals of R.

## 9. Conclusions

In this paper, we have addressed the estimation of $R=P(Y<X)$ for the inverse Weibull distribution. We assume independent inverse Weibull random variables with equal shape parameters but different scale parameters.

We obtain the maximum likelihood estimator of R and its asymptotic distribution. Since the MLE does not have an explicit form, we also propose the approximate maximum likelihood estimator of R. The confidence interval of R is obtained using the asymptotic distribution, and two bootstrap confidence intervals are obtained as well. Using the Gibbs sampling technique, we present the Bayesian estimator of R and the corresponding credible interval; the Metropolis–Hastings algorithm with a normal proposal distribution is used to generate random numbers from the required density. Monte Carlo simulations are conducted to compare the proposed methods, and analysis of a real dataset is performed. In future work, we will consider the MLE, AMLE, asymptotic C.I., bootstrap C.I.s and Bayesian inference of $R=P(Y<X)$ for the inverse Weibull distribution based on incomplete data, such as progressively type-II censored samples.

## Acknowledgments

The authors’ work was partially supported by the program for the Fundamental Research Funds for the Central Universities (2014RC042, 2015JBM109). The authors would like to thank the referees and Editor for their helpful suggestions.

## Author Contributions

The authors contributed equally to this work.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Awad, A. Estimation of P(Y < X) in case of the double exponential distribution. **1985**.
2. Kumar, K.; Krishna, H.; Garg, R. Estimation of P(Y < X) in Lindley distribution using progressively first failure censoring. Int. J. Syst. Assur. Eng. Manag. **2016**, 6, 330–341.
3. Kundu, D.; Gupta, R.D. Estimation of P[Y < X] for Weibull distributions. IEEE Trans. Reliab. **2006**, 55, 270–280.
4. Rezaei, S.; Tahmasbi, R.; Mahmoodi, M. Estimation of P[Y < X] for generalized Pareto distribution. J. Stat. Plan. Inference **2010**, 140, 480–494.
5. Kizilaslan, F. Classical and Bayesian estimation of reliability in a multicomponent stress-strength model based on a general class of inverse exponentiated distributions. Stat. Pap. **2016**, 1–32.
6. Kizilaslan, F.; Nadar, M. Classical and Bayesian estimation of reliability in multicomponent stress-strength model based on Weibull distribution. Rev. Colomb. Estad. **2015**, 38, 467–484.
7. Ali, S. On the mean residual life function and stress and strength analysis under different loss function for Lindley distribution. J. Qual. Reliab. Eng. **2013**, 2013, 190437.
8. Kizilaslan, F. Classical and Bayesian estimation of reliability in a multicomponent stress-strength model based on the proportional reversed hazard rate model. Math. Comput. Simul. **2017**, 136, 36–62.
9. Shawky, A.I.; Al-Gashgari, F.H. Bayesian and non-Bayesian estimation of stress-strength model for Pareto type I distribution. Iran. J. Sci. Technol. Trans. A Sci. **2013**, 37, 335–342.
10. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions, 2nd ed.; Wiley: New York, NY, USA, 1995; Volume 2.
11. Drapella, A. The complementary Weibull distribution: Unknown or just forgotten? Qual. Reliab. Eng. Int. **1993**, 9, 383–385.
12. Keller, A.Z.; Kamath, A.R.R.; Perera, U.D. Reliability analysis of CNC machine tools. Reliab. Eng. **1982**, 3, 449–473.
13. Jazi, M.A.; Lai, C.D.; Alamatsaz, M.H. A discrete inverse Weibull distribution and estimation of its parameters. Stat. Methodol. **2010**, 7, 121–132.
14. Sultan, K.S.; Ismail, M.A.; Al-Moisheer, A.S. Mixture of two inverse Weibull distributions: Properties and estimation. Comput. Stat. Data Anal. **2007**, 51, 5377–5387.
15. Khan, M.S.; Pasha, G.R.; Pasha, A.H. Theoretical analysis of inverse Weibull distribution. WSEAS Trans. Math. **2008**, 7, 30–38.
16. Gusmão, F.R.S.d.; Ortega, E.M.M.; Cordeiro, G.M. The generalized inverse Weibull distribution. Stat. Pap. **2011**, 52, 591–619.
17. Mohie El-Din, M.M.; Riad, F.H. Estimation and prediction for the inverse Weibull distribution based on records. J. Adv. Res. Stat. Probab. **2011**, 2, 20–27.
18. Arnold, B.; Balakrishnan, N. Relations, Bounds and Approximations for Order Statistics. Available online: http://www.springer.com/gp/book/9780387969756 (accessed on 21 June 2017).
19. Efron, B. The Jackknife, the Bootstrap and Other Resampling Plans. Available online: http://epubs.siam.org/doi/book/10.1137/1.9781611970319 (accessed on 21 June 2017).
20. Hall, P. Rejoinder: Theoretical comparison of bootstrap confidence intervals. Ann. Stat. **1988**, 16, 927–953.
21. Chen, M.H.; Shao, Q.M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. **1999**, 8, 69–92.
22. Surles, J.G.; Padgett, W.J. Inference for reliability and stress-strength for a scaled Burr type X distribution. Lifetime Data Anal. **2001**, 7, 187–200.

**Table 1.** Biases and mean square errors (MSEs) of the maximum likelihood estimate (MLE), approximate maximum likelihood estimate (AMLE) and Bayes estimate of R, when $\alpha =2$, ${\theta}_{2}=1$ and for different values of ${\theta}_{1}$.

$(\mathit{n},\mathit{m})$ | Estimate | ${\mathit{\theta}}_{1}=0.5$ | ${\mathit{\theta}}_{1}=1$ | ${\mathit{\theta}}_{1}=1.5$ | ${\mathit{\theta}}_{1}=2$ | ${\mathit{\theta}}_{1}=3$ |
---|---|---|---|---|---|---|
$(10,10)$ | MLE | 0.0919(0.0165) | −0.0094(0.0079) | −0.0082(0.0108) | −0.0223(0.0137) | −0.0098(0.0090) |
 | AMLE | 0.0881(0.0158) | −0.0088(0.0078) | −0.0207(0.0137) | −0.0145(0.0143) | −0.0054(0.0091) |
 | Bayes | 0.0721(0.0120) | −0.0084(0.0065) | 0.0018(0.0083) | −0.0036(0.0111) | 0.0138(0.0078) |
$(20,15)$ | MLE | 0.0281(0.0078) | 0.0095(0.0074) | −0.0098(0.0079) | −0.0188(0.0091) | −0.0127(0.0065) |
 | AMLE | 0.0317(0.0078) | 0.0061(0.0075) | −0.0068(0.0078) | −0.0161(0.0095) | −0.0079(0.0065) |
 | Bayes | 0.0207(0.0086) | 0.0090(0.0064) | 0.0034(0.0079) | −0.0056(0.0078) | 0.0043(0.0060) |
$(25,25)$ | MLE | 0.0132(0.0037) | 0.0060(0.0025) | 0.0025(0.0067) | 0.0057(0.0045) | −0.0270(0.0049) |
 | AMLE | 0.0132(0.0038) | 0.0062(0.0024) | 0.0040(0.0068) | 0.0047(0.0045) | −0.0242(0.0049) |
 | Bayes | 0.0055(0.0036) | 0.0061(0.0022) | 0.0076(0.0063) | 0.0128(0.0041) | −0.0165(0.0045) |
$(30,40)$ | MLE | −0.0094(0.0014) | −0.0039(0.0010) | −0.0714(0.0051) | −0.0048(0.0024) | −0.0510(0.0038) |
 | AMLE | −0.0092(0.0014) | −0.0064(0.0011) | −0.0760(0.0051) | −0.0060(0.0023) | −0.0504(0.0038) |
 | Bayes | −0.0135(0.0015) | −0.0040(0.0009) | −0.0650(0.0042) | 0.0058(0.0023) | −0.0302(0.0035) |
$(50,50)$ | MLE | −0.0182(0.0005) | −0.0087(0.0007) | −0.0033(0.0038) | 0.0007(0.0017) | −0.0035(0.0027) |
 | AMLE | −0.0192(0.0005) | −0.0080(0.0006) | −0.0033(0.0039) | 0.0012(0.0017) | −0.0022(0.0027) |
 | Bayes | −0.0217(0.0006) | −0.0079(0.0006) | −0.0008(0.0036) | 0.0051(0.0015) | 0.0023(0.0025) |

In each cell, the average bias is provided and the corresponding MSE is presented within brackets. The first to third rows of each group correspond to the MLEs, AMLEs and Bayes estimates, respectively.

**Table 2.** Average confidence/credible lengths and coverage probabilities of the intervals for R, when $\alpha =2$, ${\theta}_{2}=1$ and for different values of ${\theta}_{1}$.

$(\mathit{n},\mathit{m})$ | Method | ${\mathit{\theta}}_{1}=0.5$ | ${\mathit{\theta}}_{1}=1$ | ${\mathit{\theta}}_{1}=1.5$ | ${\mathit{\theta}}_{1}=2$ | ${\mathit{\theta}}_{1}=3$ |
---|---|---|---|---|---|---|
$(10,10)$ | MLE | 0.3879(0.90) | 0.4205(0.91) | 0.4074(0.90) | 0.3880(0.93) | 0.3377(0.89) |
 | AMLE | 0.3894(0.90) | 0.4206(0.92) | 0.4079(0.90) | 0.3907(0.93) | 0.3403(0.90) |
 | Boot-p | 0.4044(0.89) | 0.4526(0.92) | 0.4303(0.91) | 0.4042(0.92) | 0.3469(0.91) |
 | Boot-t | 0.5179(0.90) | 0.4526(0.90) | 0.5425(0.90) | 0.5240(0.90) | 0.3939(0.90) |
 | Bayes | 0.3997(0.95) | 0.4113(0.95) | 0.4028(0.93) | 0.3931(0.95) | 0.3546(0.94) |
$(20,15)$ | MLE | 0.3201(0.92) | 0.3480(0.92) | 0.3427(0.92) | 0.3176(0.92) | 0.2862(0.91) |
 | AMLE | 0.3205(0.93) | 0.3480(0.93) | 0.3430(0.94) | 0.3186(0.93) | 0.2873(0.91) |
 | Boot-p | 0.3305(0.92) | 0.3657(0.93) | 0.3577(0.94) | 0.3275(0.93) | 0.2915(0.92) |
 | Boot-t | 0.3810(0.90) | 0.4089(0.91) | 0.4020(0.94) | 0.3814(0.91) | 0.3531(0.90) |
 | Bayes | 0.3281(0.95) | 0.3427(0.93) | 0.3390(0.95) | 0.3207(0.96) | 0.2953(0.96) |
$(25,25)$ | MLE | 0.2501(0.93) | 0.2729(0.94) | 0.2634(0.93) | 0.2509(0.96) | 0.2244(0.95) |
 | AMLE | 0.2502(0.93) | 0.2729(0.95) | 0.2637(0.94) | 0.2512(0.95) | 0.2245(0.95) |
 | Boot-p | 0.2510(0.93) | 0.2774(0.94) | 0.2696(0.92) | 0.2555(0.96) | 0.2250(0.96) |
 | Boot-t | 0.2756(0.93) | 0.2961(0.93) | 0.2765(0.92) | 0.2755(0.93) | 0.2354(0.95) |
 | Bayes | 0.2537(0.96) | 0.2700(0.96) | 0.1884(0.95) | 0.2512(0.95) | 0.2286(0.96) |
$(30,40)$ | MLE | 0.2278(0.96) | 0.2496(0.94) | 0.2437(0.94) | 0.2261(0.96) | 0.2059(0.95) |
 | AMLE | 0.2279(0.96) | 0.2496(0.95) | 0.2436(0.95) | 0.2263(0.96) | 0.2062(0.94) |
 | Boot-p | 0.2296(0.95) | 0.2522(0.95) | 0.2482(0.95) | 0.2275(0.97) | 0.2050(0.94) |
 | Boot-t | 0.2482(0.94) | 0.2682(0.94) | 0.2588(0.95) | 0.2458(0.92) | 0.2267(0.93) |
 | Bayes | 0.2284(0.97) | 0.2470(0.95) | 0.2423(0.97) | 0.2272(0.97) | 0.2090(0.95) |
$(50,50)$ | MLE | 0.1788(0.96) | 0.1948(0.96) | 0.1887(0.95) | 0.1780(0.97) | 0.1553(0.96) |
 | AMLE | 0.1788(0.96) | 0.1948(0.96) | 0.1887(0.96) | 0.1781(0.96) | 0.1554(0.95) |
 | Boot-p | 0.1789(0.96) | 0.1952(0.95) | 0.1890(0.94) | 0.0812(0.96) | 0.1598(0.96) |
 | Boot-t | 0.1848(0.94) | 0.2013(0.94) | 0.1942(0.92) | 0.1845(0.95) | 0.1588(0.95) |
 | Bayes | 0.1784(0.97) | 0.1935(0.96) | 0.1880(0.96) | 0.1778(0.96) | 0.1620(0.97) |

In each cell, the average confidence length is provided and the corresponding coverage probability is given within brackets. The first to fifth rows of each group correspond to the MLEs, AMLEs, Boot-p method, Boot-t method and Bayes estimates, respectively.

**Table 3.** Strength Data Set 1.

1.312 | 1.314 | 1.479 | 1.552 | 1.700 | 1.803 | 1.861 | 1.865 | 1.944 | 1.958 |
1.966 | 1.997 | 2.006 | 2.021 | 2.027 | 2.055 | 2.063 | 2.098 | 2.140 | 2.179 |
2.224 | 2.240 | 2.253 | 2.270 | 2.272 | 2.274 | 2.301 | 2.301 | 2.359 | 2.382 |
2.382 | 2.426 | 2.434 | 2.435 | 2.478 | 2.490 | 2.511 | 2.514 | 2.535 | 2.554 |
2.566 | 2.570 | 2.586 | 2.629 | 2.633 | 2.642 | 2.648 | 2.684 | 2.697 | 2.726 |
2.770 | 2.773 | 2.800 | 2.809 | 2.818 | 2.821 | 2.848 | 2.880 | 2.954 | 3.012 |
3.067 | 3.084 | 3.090 | 3.096 | 3.128 | 3.233 | 3.433 | 3.585 | 3.585 |

**Table 4.** Strength Data Set 2.

1.901 | 2.132 | 2.203 | 2.228 | 2.257 | 2.350 | 2.361 | 2.396 | 2.397 | 2.445 |
2.454 | 2.474 | 2.518 | 2.522 | 2.525 | 2.532 | 2.575 | 2.614 | 2.616 | 2.618 |
2.624 | 2.659 | 2.675 | 2.738 | 2.740 | 2.856 | 2.917 | 2.928 | 2.937 | 2.937 |
2.977 | 2.996 | 3.030 | 3.125 | 3.139 | 3.145 | 3.220 | 3.223 | 3.235 | 3.243 |
3.264 | 3.272 | 3.294 | 3.332 | 3.346 | 3.377 | 3.408 | 3.435 | 3.493 | 3.501 |
3.537 | 3.554 | 3.562 | 3.628 | 3.852 | 3.871 | 3.886 | 3.971 | 4.024 | 4.027 |
4.225 | 4.395 | 5.020 |

**Table 5.** Transformed Data Set 1 (reciprocals of Data Set 1).

0.762 | 0.761 | 0.676 | 0.644 | 0.588 | 0.555 | 0.537 | 0.536 | 0.514 | 0.511 |
0.509 | 0.501 | 0.499 | 0.495 | 0.493 | 0.487 | 0.485 | 0.477 | 0.467 | 0.459 |
0.450 | 0.446 | 0.444 | 0.441 | 0.440 | 0.440 | 0.435 | 0.435 | 0.424 | 0.420 |
0.420 | 0.412 | 0.411 | 0.411 | 0.404 | 0.402 | 0.398 | 0.398 | 0.394 | 0.392 |
0.390 | 0.389 | 0.387 | 0.380 | 0.380 | 0.379 | 0.378 | 0.373 | 0.371 | 0.367 |
0.361 | 0.361 | 0.357 | 0.356 | 0.355 | 0.354 | 0.351 | 0.347 | 0.339 | 0.332 |
0.326 | 0.324 | 0.324 | 0.323 | 0.320 | 0.309 | 0.291 | 0.279 | 0.279 |

**Table 6.** Transformed Data Set 2 (reciprocals of Data Set 2).

0.526 | 0.469 | 0.454 | 0.449 | 0.443 | 0.426 | 0.424 | 0.417 | 0.417 | 0.409 |
0.407 | 0.404 | 0.397 | 0.397 | 0.396 | 0.395 | 0.388 | 0.383 | 0.382 | 0.382 |
0.381 | 0.376 | 0.374 | 0.365 | 0.365 | 0.350 | 0.343 | 0.342 | 0.340 | 0.340 |
0.336 | 0.334 | 0.330 | 0.320 | 0.319 | 0.318 | 0.311 | 0.310 | 0.309 | 0.308 |
0.306 | 0.306 | 0.304 | 0.300 | 0.299 | 0.296 | 0.293 | 0.291 | 0.286 | 0.286 |
0.283 | 0.281 | 0.281 | 0.276 | 0.260 | 0.258 | 0.257 | 0.252 | 0.249 | 0.248 |
0.237 | 0.228 | 0.199 |

**Table 7.**Scale parameter, shape parameter, log-likelihood, K-S distances and p-values of the fitted inverse Weibull models to Data Sets 1 and 2.

Data Set | Scale Parameter | Shape Parameter | Log-Likelihood | K-S | p-Value |
---|---|---|---|---|---|
1 | 4.9497 | 12.6152 | 71.8967 | 0.0417 | 0.9997 |
2 | 19.0814 | 13.6228 | 79.4095 | 0.0846 | 0.7572 |

**Table 8.**Expected frequencies and observed frequencies for modified Data Set 1 when fitting the inverse Weibull model.

Intervals | Expected Frequencies | Observed Frequencies |
---|---|---|
0.00–0.25 | 0.03 | 0 |
0.25–0.40 | 32.15 | 33 |
0.40–0.50 | 24.21 | 24 |
0.50–0.70 | 11.23 | 10 |
0.70–∞ | 1.38 | 2 |

**Table 9.**Expected frequencies and observed frequencies for modified Data Set 2 when fitting the inverse Weibull model.

Intervals | Expected Frequencies | Observed Frequencies |
---|---|---|
0.00–0.19 | 0.01 | 0 |
0.19–0.30 | 21.06 | 19 |
0.30–0.40 | 29.48 | 32 |
0.40–0.50 | 9.23 | 11 |
0.50–∞ | 3.22 | 1 |

**Table 10.**Scale parameter, shape parameter, log-likelihood, K-S distances and p-values of the fitted inverse Weibull models to Data Sets 1 and 2. Here, we assume that the two shape parameters are identical.

Data Set | Scale Parameter | Shape Parameter | Log-Likelihood | K-S | p-Value |
---|---|---|---|---|---|
1 | 5.3471 | 13.0933 | 71.8159 | 0.0424 | 0.9996 |
2 | 16.7168 | 13.0933 | 79.3215 | 0.0732 | 0.8878 |

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).