Article

Statistical Inference of the Rayleigh Distribution Based on Progressively Type II Censored Competing Risks Data

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(7), 898; https://doi.org/10.3390/sym11070898
Submission received: 12 June 2019 / Revised: 5 July 2019 / Accepted: 6 July 2019 / Published: 10 July 2019

Abstract

A competing risks model under progressively type II censored data following the Rayleigh distribution is considered. We establish the maximum likelihood estimation for unknown parameters and compute the observed information matrix and the expected Fisher information matrix to construct the asymptotic confidence intervals. Moreover, we obtain the Bayes estimation based on symmetric and non-symmetric loss functions, that is, the squared error loss function and the general entropy loss function, and the highest posterior density intervals are also derived. In addition, a simulation study is presented to assess the performances of different methods discussed in this paper. A real-life data set analysis is provided for illustration purposes.

1. Introduction

The analysis of censored data in life testing experiments is often inevitable, because the limited observation time or the early withdrawal of some samples prevents experimenters from observing the occurrence of the failure or the exact failure time. There are various censoring schemes according to different classification methods. Classified by time, censoring schemes include right censoring, left censoring, and interval censoring. Moreover, there are three types of right censoring, namely type I, type II, and type III censoring; type I and type II are the most common. In recent years, a generalized censoring scheme called progressive censoring has been widely used by statisticians because it saves time and costs compared to the traditional censoring schemes. Progressively type I censoring and progressively type II censoring are the two basic schemes of progressive censoring. Both make it easier for the experimenter to design the experiment, and the censored products can still be used in other experiments. Relevant studies have been carried out by many scholars. Ref. [1] obtained results on likelihood inference for distributions including one- and two-parameter exponential, extreme value, Weibull, normal, and Burr distributions under progressive type I censoring. Ref. [2] provided Bayesian estimation of the parameters and Bayesian prediction of the unobserved failure times based on a progressively type II censored sample from the generalized exponential distribution. Ref. [3] considered maximum likelihood estimation and Bayes estimation for both the parameters and the reliability function of the Burr type XII distribution under progressively type II censoring. However, in medical studies and reliability analysis, the failure of individuals or products can be caused by multiple risk factors.
For example, failures of pneumatic tires in a laboratory test are classified into several modes or categories such as cracking on the sidewall, cracking of the tread rubber, and other causes. In this case, the causes of failure are often termed competing risks. Thus, we adopt a competing risks model to process survival data with multiple potential outcomes. A lot of work has been done on the competing risks model. Ref. [4] dealt with the competing risks model as a special case of a multistate model and discussed the relation between the competing risks model and right censoring. Ref. [5] studied the competing risks model under progressively type II censoring based on Lomax distributions and established maximum likelihood estimates for the distribution parameters. Ref. [6] provided several classical point predictors, such as the maximum likelihood predictor and the best unbiased predictor, from progressively type II censored competing risks data for exponential distributions. Ref. [7] studied a Bayesian analysis of a modified Weibull distribution under a progressively censored competing risks model. Ref. [8] obtained the maximum likelihood estimators as well as approximate maximum likelihood estimators in the Weibull model for competing risks data under progressively type II censoring with binomial removals.
The Rayleigh distribution is a distribution of the magnitude of a two-dimensional random vector whose coordinates are independent random variables with a standard normal distribution. It is a special case of the Weibull distribution with a shape parameter of 2. In real life, the Rayleigh distribution has a wide range of applications, especially in communication engineering and other fields. Many scholars have studied the properties of the Rayleigh distribution. For example, Ref. [9] derived Bayesian estimators and credible intervals for the parameter and reliability function from a progressively type II censored sample. Ref. [10] obtained the maximum likelihood and Bayes estimates for one and two parameters and the reliability function of the Rayleigh distribution under progressively type II censored samples.
In this paper, we mainly focus on the estimation of parameters of the Rayleigh distribution on the basis of progressively type II censored competing risks data. We adopt the so-called latent failure time approach, assuming these risks are independent. This assumption is reasonable for the sake of model simplification, since covariate information would be required if the latent risks were assumed dependent. Point estimates and interval estimates of the unknown parameters are obtained using the methods of both classical and Bayesian statistics.
The rest of the paper is organized as follows. In Section 2, we briefly describe the model considered. Then we present the maximum likelihood estimators and the Bayes estimators under two different loss functions. In Section 3, we compute the observed information matrix and the expected Fisher information matrix to derive asymptotic confidence intervals. Moreover, we obtain the highest posterior density (HPD) credible intervals and compare their performance with the asymptotic confidence intervals. In Section 4, some results of a simulation study and some illustrations are presented. In Section 5, we analyze one real-life data set using all the methods discussed. In Section 6, we extend the discussion to the estimation of unknown parameters based on progressively type II censored competing risks data with binomial removals. In Section 7, conclusions are drawn.

2. Model Description and Parameter Estimation

In this section, we consider a life testing experiment with $n$ identical units whose lifetimes $X_1, X_2, \ldots, X_n$ are independent, identically distributed random variables. Let $\delta_i$, $i = 1, 2, \ldots, n$, denote the cause of failure of the $i$th unit. Assume there are two independent causes, namely cause 1 and cause 2. Let $X_{ij}$ be the lifetime of the $i$th item under cause $j$, $j = 1, 2$. Suppose that $X_{ij}$, $j = 1, 2$, follows a Rayleigh distribution with probability density function given by
$$f_j(x) = \frac{x}{\sigma_j^2}\, e^{-\frac{x^2}{2\sigma_j^2}}, \quad x \geq 0, \; \sigma_j > 0.$$
Accordingly, the survival function of $X_{ij}$ is given by
$$\bar{F}_j(x) = e^{-\frac{x^2}{2\sigma_j^2}}, \quad x \geq 0, \; \sigma_j > 0.$$
Furthermore, let $X_i = \min\{X_{i1}, X_{i2}\}$. The cumulative distribution function of $X_i$ is given by
$$F(x_i) = 1 - e^{-x_i^2\left(\frac{1}{2\sigma_1^2} + \frac{1}{2\sigma_2^2}\right)}.$$
The experiment is terminated when a prefixed number $m$ ($m < n$) of products are observed to fail. When the $i$th failure occurs at time $X_{i:m:n}$, $i = 1, \ldots, m$, $R_i$ units are removed from the remaining surviving products. Therefore, for a given censoring scheme $R = (R_1, R_2, \ldots, R_m)$, we can derive the likelihood function of the $m$ observed data $(X_{i:m:n}; \delta_i)$, which can be written as follows:
$$L(\sigma_1, \sigma_2) = C \prod_{i=1}^{m} \left[f_1(x_{i:m:n})\, \bar{F}_2(x_{i:m:n})\right]^{I(\delta_i = 1)} \left[f_2(x_{i:m:n})\, \bar{F}_1(x_{i:m:n})\right]^{I(\delta_i = 2)} \left[1 - F(x_{i:m:n})\right]^{R_i} = C \left(\frac{1}{\sigma_1^2}\right)^{n_1} \left(\frac{1}{\sigma_2^2}\right)^{n_2} \prod_{i=1}^{m} x_{i:m:n}\, \exp\left\{-\sum_{i=1}^{m} x_{i:m:n}^2 \left(\frac{1}{2\sigma_1^2} + \frac{1}{2\sigma_2^2}\right)(R_i + 1)\right\},$$
where
$$C = n (n - R_1 - 1) \cdots \left(n - R_1 - R_2 - \cdots - R_{m-1} - m + 1\right),$$
$$n_i = \sum_{j=1}^{m} I(\delta_j = i), \quad i = 1, 2.$$
So the log-likelihood function without the constant can be written as
$$\log L(\sigma_1, \sigma_2) = -2 n_1 \log \sigma_1 - 2 n_2 \log \sigma_2 - \sum_{i=1}^{m} x_{i:m:n}^2 \left(\frac{1}{2\sigma_1^2} + \frac{1}{2\sigma_2^2}\right)(R_i + 1).$$

2.1. Maximum Likelihood Estimation

In this section, the maximum likelihood estimators (MLEs) of $\sigma_1$ and $\sigma_2$ are presented.
Theorem 1.
Suppose the causes of failure follow Rayleigh distributions with parameters $\sigma_1$ and $\sigma_2$. Then the MLEs of $\sigma_1$ and $\sigma_2$ exist and are given by
$$\hat{\sigma}_1 = \sqrt{\frac{\sum_{i=1}^{m} x_{i:m:n}^2 (R_i + 1)}{2 n_1}},$$
and
$$\hat{\sigma}_2 = \sqrt{\frac{\sum_{i=1}^{m} x_{i:m:n}^2 (R_i + 1)}{2 n_2}}.$$
Proof of Theorem 1.
The log-likelihood function without the constant is
$$\log L(\sigma_1, \sigma_2) = -2 n_1 \log \sigma_1 - 2 n_2 \log \sigma_2 - \sum_{i=1}^{m} x_{i:m:n}^2 \left(\frac{1}{2\sigma_1^2} + \frac{1}{2\sigma_2^2}\right)(R_i + 1).$$
Differentiating with respect to $\sigma_1$ and $\sigma_2$ gives
$$\frac{\partial \log L(\sigma_1, \sigma_2)}{\partial \sigma_1} = -\frac{2 n_1}{\sigma_1} + \frac{\sum_{i=1}^{m} x_{i:m:n}^2 (R_i + 1)}{\sigma_1^3},$$
$$\frac{\partial \log L(\sigma_1, \sigma_2)}{\partial \sigma_2} = -\frac{2 n_2}{\sigma_2} + \frac{\sum_{i=1}^{m} x_{i:m:n}^2 (R_i + 1)}{\sigma_2^3}.$$
Equating these two derivatives to zero, we obtain the MLEs of $\sigma_1$ and $\sigma_2$. Furthermore, the Hessian matrix evaluated at $(\hat{\sigma}_1, \hat{\sigma}_2)$ is
$$H = \begin{pmatrix} -\dfrac{8 n_1^2}{\sum_{i=1}^{m} x_{i:m:n}^2 (R_i + 1)} & 0 \\[2ex] 0 & -\dfrac{8 n_2^2}{\sum_{i=1}^{m} x_{i:m:n}^2 (R_i + 1)} \end{pmatrix}.$$
Since the Hessian matrix is negative definite, $\hat{\sigma}_1$ and $\hat{\sigma}_2$ are indeed maximum points. □
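For illustration, the closed forms of Theorem 1 are straightforward to evaluate numerically. The sketch below (a helper of our own, not code from the paper) takes the observed failure times, the cause indicators $\delta_i$, and the censoring scheme $R$:

```python
from math import sqrt

def rayleigh_competing_mle(x, delta, R):
    """MLEs of (sigma_1, sigma_2) from progressively type II censored
    competing risks data, using the closed forms of Theorem 1."""
    # common weighted sum of squares: sum_i x_i^2 (R_i + 1)
    S = sum(xi**2 * (Ri + 1) for xi, Ri in zip(x, R))
    n1 = sum(1 for d in delta if d == 1)   # failures from cause 1
    n2 = sum(1 for d in delta if d == 2)   # failures from cause 2
    return sqrt(S / (2 * n1)), sqrt(S / (2 * n2))
```

Since both estimators share the same weighted sum of squares, their ratio satisfies $\hat{\sigma}_1 / \hat{\sigma}_2 = \sqrt{n_2 / n_1}$, which gives a quick sanity check.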

2.2. Bayes Estimation

In Bayes estimation, the unknown parameters $\sigma_1$ and $\sigma_2$ are considered as random variables that have some specific distributions. In this paper, we assume that the conjugate prior distributions for $\sigma_1$ and $\sigma_2$ have the following form:
$$\pi_1(\sigma_1) = \frac{a_1^{b_1}}{\Gamma(b_1)\, 2^{b_1 - 1}}\, \sigma_1^{-2 b_1 - 1}\, e^{-\frac{a_1}{2 \sigma_1^2}}, \quad \sigma_1 > 0,$$
and
$$\pi_2(\sigma_2) = \frac{a_2^{b_2}}{\Gamma(b_2)\, 2^{b_2 - 1}}\, \sigma_2^{-2 b_2 - 1}\, e^{-\frac{a_2}{2 \sigma_2^2}}, \quad \sigma_2 > 0,$$
where $a_i > 0$, $b_i > 0$, $i = 1, 2$, are the hyper-parameters. A random variable with a density of this form follows the square-root inverted-gamma distribution. The expectations of (7) and (8) are
$$E(\sigma_i) = \sqrt{\frac{a_i}{2}}\, \frac{\Gamma(b_i - \frac{1}{2})}{\Gamma(b_i)}, \quad b_i > \frac{1}{2}, \; i = 1, 2.$$
We denote $\mathbf{x} = (x_{1:m:n}, x_{2:m:n}, \ldots, x_{m:m:n})$; then the joint posterior distribution of $\sigma_1$ and $\sigma_2$ is given by
$$\pi^*(\sigma_1, \sigma_2 \mid \mathbf{x}) = \frac{L(\sigma_1, \sigma_2)\, \pi_1(\sigma_1)\, \pi_2(\sigma_2)}{\int\!\int L(\sigma_1, \sigma_2)\, \pi_1(\sigma_1)\, \pi_2(\sigma_2)\, d\sigma_1\, d\sigma_2}.$$
Thus, we have
$$\pi^*(\sigma_1, \sigma_2 \mid \mathbf{x}) \propto \sigma_1^{-2 n_1 - 2 b_1 - 1}\, \sigma_2^{-2 n_2 - 2 b_2 - 1}\, e^{-\frac{a_1}{2 \sigma_1^2}}\, e^{-\frac{a_2}{2 \sigma_2^2}}\, e^{-\sum_{i=1}^{m} x_{i:m:n}^2 \left(\frac{1}{2\sigma_1^2} + \frac{1}{2\sigma_2^2}\right)(R_i + 1)}.$$
The conditional density of $\sigma_1$ given $\sigma_2$ and the data is given by
$$\pi_1^*(\sigma_1 \mid \sigma_2, \mathbf{x}) \propto \sigma_1^{-2 n_1 - 2 b_1 - 1}\, e^{-\frac{a_1}{2 \sigma_1^2}}\, e^{-\sum_{i=1}^{m} x_{i:m:n}^2 \frac{1}{2 \sigma_1^2}(R_i + 1)}.$$
Similarly, the conditional density of $\sigma_2$ given $\sigma_1$ and the data is given by
$$\pi_2^*(\sigma_2 \mid \sigma_1, \mathbf{x}) \propto \sigma_2^{-2 n_2 - 2 b_2 - 1}\, e^{-\frac{a_2}{2 \sigma_2^2}}\, e^{-\sum_{i=1}^{m} x_{i:m:n}^2 \frac{1}{2 \sigma_2^2}(R_i + 1)}.$$
For Bayes estimation, we choose both symmetric and asymmetric loss functions. The most popular symmetric loss function is the squared error loss (SEL) function, defined as
$$L_1(\theta, \tilde{\theta}) = (\tilde{\theta} - \theta)^2.$$
Theorem 2.
Suppose the causes of failure follow Rayleigh distributions with parameters $\sigma_1$ and $\sigma_2$. The Bayes estimators under the squared error loss function are given by
$$\tilde{\sigma}_{1,SEL} = E(\sigma_1 \mid \mathbf{x}) = \sqrt{\frac{a_1 + \sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2}{2}}\; \frac{\Gamma(n_1 + b_1 - \frac{1}{2})}{\Gamma(n_1 + b_1)},$$
and
$$\tilde{\sigma}_{2,SEL} = E(\sigma_2 \mid \mathbf{x}) = \sqrt{\frac{a_2 + \sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2}{2}}\; \frac{\Gamma(n_2 + b_2 - \frac{1}{2})}{\Gamma(n_2 + b_2)}.$$
Proof of Theorem 2.
The results can be calculated directly. □
Theorem 3.
Suppose the causes of failure follow Rayleigh distributions with parameters $\sigma_1$ and $\sigma_2$. The posterior risks of $\sigma_1$ and $\sigma_2$ under the squared error loss function are given by
$$R(\tilde{\sigma}_{1,SEL}) = E(\sigma_1^2 \mid \mathbf{x}) - \left[E(\sigma_1 \mid \mathbf{x})\right]^2 = \frac{a_1 + \sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2}{2} \left(\frac{\Gamma(n_1 + b_1 - 1)}{\Gamma(n_1 + b_1)} - \left(\frac{\Gamma(n_1 + b_1 - \frac{1}{2})}{\Gamma(n_1 + b_1)}\right)^2\right),$$
and
$$R(\tilde{\sigma}_{2,SEL}) = E(\sigma_2^2 \mid \mathbf{x}) - \left[E(\sigma_2 \mid \mathbf{x})\right]^2 = \frac{a_2 + \sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2}{2} \left(\frac{\Gamma(n_2 + b_2 - 1)}{\Gamma(n_2 + b_2)} - \left(\frac{\Gamma(n_2 + b_2 - \frac{1}{2})}{\Gamma(n_2 + b_2)}\right)^2\right).$$
Proof of Theorem 3.
The results can be calculated directly. □
However, its symmetry is the biggest disadvantage of the squared error loss function. There are many asymmetric loss functions in statistics, among which the LINEX loss function is the most widely used; however, the LINEX loss function performs better for estimating location parameters. A suitable alternative is the general entropy loss (EL) function, which is defined as
$$L_2(\theta, \tilde{\theta}) = \left(\frac{\tilde{\theta}}{\theta}\right)^q - q \log\left(\frac{\tilde{\theta}}{\theta}\right) - 1, \quad q \neq 0.$$
Theorem 4.
Suppose the causes of failure follow Rayleigh distributions with parameters $\sigma_1$ and $\sigma_2$. The Bayes estimators under the general entropy loss function are given by
$$\tilde{\sigma}_{1,EL} = \left(E(\sigma_1^{-q} \mid \mathbf{x})\right)^{-\frac{1}{q}} = \left[\frac{2^{\frac{q}{2}}\, \Gamma(n_1 + b_1 + \frac{q}{2})}{\left(a_1 + \sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2\right)^{\frac{q}{2}}\, \Gamma(n_1 + b_1)}\right]^{-\frac{1}{q}},$$
and
$$\tilde{\sigma}_{2,EL} = \left(E(\sigma_2^{-q} \mid \mathbf{x})\right)^{-\frac{1}{q}} = \left[\frac{2^{\frac{q}{2}}\, \Gamma(n_2 + b_2 + \frac{q}{2})}{\left(a_2 + \sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2\right)^{\frac{q}{2}}\, \Gamma(n_2 + b_2)}\right]^{-\frac{1}{q}}.$$
Proof of Theorem 4.
The results can be calculated directly. □
Theorem 5.
Suppose the causes of failure follow Rayleigh distributions with parameters $\sigma_1$ and $\sigma_2$. The posterior risks of $\sigma_1$ and $\sigma_2$ under the general entropy loss function are given by
$$R(\tilde{\sigma}_{1,EL}) = q E(\ln \sigma_1 \mid \mathbf{x}) + \ln\left[E(\sigma_1^{-q} \mid \mathbf{x})\right] = -\frac{q}{2}\left(\ln\frac{2}{a_1 + \sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2} + \psi(n_1 + b_1)\right) + \ln\left(\frac{2^{\frac{q}{2}}\, \Gamma(n_1 + b_1 + \frac{q}{2})}{\left(a_1 + \sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2\right)^{\frac{q}{2}}\, \Gamma(n_1 + b_1)}\right),$$
and
$$R(\tilde{\sigma}_{2,EL}) = q E(\ln \sigma_2 \mid \mathbf{x}) + \ln\left[E(\sigma_2^{-q} \mid \mathbf{x})\right] = -\frac{q}{2}\left(\ln\frac{2}{a_2 + \sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2} + \psi(n_2 + b_2)\right) + \ln\left(\frac{2^{\frac{q}{2}}\, \Gamma(n_2 + b_2 + \frac{q}{2})}{\left(a_2 + \sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2\right)^{\frac{q}{2}}\, \Gamma(n_2 + b_2)}\right),$$
where $\psi(\cdot)$ denotes the digamma function.
Proof of Theorem 5.
The results can be calculated directly. □
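The Bayes point estimates of Theorems 2 and 4 are ratios of Gamma functions, which are numerically safest on the log scale. A minimal sketch (function names are ours; `S` denotes $\sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2$):

```python
from math import exp, lgamma, log, sqrt

def bayes_sel(S, a, b, n):
    """Posterior mean E(sigma | x) under squared error loss;
    (a, b) are the hyper-parameters and n is the failure count."""
    return sqrt((a + S) / 2) * exp(lgamma(n + b - 0.5) - lgamma(n + b))

def bayes_el(S, a, b, n, q):
    """(E(sigma^{-q} | x))^{-1/q} under the general entropy loss."""
    # log of the posterior inverse moment E(sigma^{-q} | x)
    log_moment = (q / 2) * log(2 / (a + S)) + lgamma(n + b + q / 2) - lgamma(n + b)
    return exp(-log_moment / q)
```

Taking $q = -1$ in the entropy-loss estimator reproduces the posterior mean, which gives a convenient consistency check between the two functions.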

3. Interval Estimation

Apart from the point estimation, interval estimation is also a popular method to estimate unknown parameters. In this section, we will propose both asymptotic confidence intervals in classical statistics and the highest posterior density (HPD) credible intervals in Bayesian statistics.

3.1. Asymptotic Confidence Intervals

In this section, we compute the observed and expected Fisher information matrix, which will be used later to construct the asymptotic confidence intervals.

3.1.1. Observed Fisher Information Matrix

The observed Fisher information matrix is the negative of the Hessian matrix of the log-likelihood function. It is a sample-based version of the Fisher information matrix. We denote the matrix J ( σ 1 , σ 2 ) as follows:
$$J(\sigma_1, \sigma_2) = \begin{pmatrix} -\dfrac{\partial^2 \log L(\sigma_1, \sigma_2)}{\partial \sigma_1^2} & -\dfrac{\partial^2 \log L(\sigma_1, \sigma_2)}{\partial \sigma_1 \partial \sigma_2} \\[2ex] -\dfrac{\partial^2 \log L(\sigma_1, \sigma_2)}{\partial \sigma_2 \partial \sigma_1} & -\dfrac{\partial^2 \log L(\sigma_1, \sigma_2)}{\partial \sigma_2^2} \end{pmatrix}.$$
After substituting $\hat{\sigma}_1$ and $\hat{\sigma}_2$ for $\sigma_1$ and $\sigma_2$ in (13), we denote
$$J(\hat{\sigma}_1, \hat{\sigma}_2) = \begin{pmatrix} \hat{J}_{11} & \hat{J}_{12} \\ \hat{J}_{21} & \hat{J}_{22} \end{pmatrix},$$
where
$$\hat{J}_{11} = \frac{8 n_1^2}{\sum_{i=1}^{m} x_{i:m:n}^2 (R_i + 1)}, \quad \hat{J}_{12} = \hat{J}_{21} = 0, \quad \hat{J}_{22} = \frac{8 n_2^2}{\sum_{i=1}^{m} x_{i:m:n}^2 (R_i + 1)}.$$
Since $(\hat{\sigma}_1 - \sigma_1, \hat{\sigma}_2 - \sigma_2)$ is asymptotically distributed as $N_2(0, J^{-1}(\hat{\sigma}_1, \hat{\sigma}_2))$, the $100(1 - \alpha)\%$ asymptotic confidence intervals for $\sigma_1$ and $\sigma_2$ are given by
$$\sigma_1 \in \left[\hat{\sigma}_1 - z_{\frac{\alpha}{2}} \sqrt{\hat{J}_{11}^{-1}},\; \hat{\sigma}_1 + z_{\frac{\alpha}{2}} \sqrt{\hat{J}_{11}^{-1}}\right],$$
$$\sigma_2 \in \left[\hat{\sigma}_2 - z_{\frac{\alpha}{2}} \sqrt{\hat{J}_{22}^{-1}},\; \hat{\sigma}_2 + z_{\frac{\alpha}{2}} \sqrt{\hat{J}_{22}^{-1}}\right],$$
where $z_{\frac{\alpha}{2}}$ is the upper $\frac{\alpha}{2}$ quantile of the standard normal distribution.
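Given the closed forms of $\hat{J}_{11}$ and $\hat{J}_{22}$, these intervals can be computed in a few lines. The following sketch (helper names are illustrative) uses only the standard library:

```python
from math import sqrt
from statistics import NormalDist

def observed_info_ci(x, delta, R, alpha=0.05):
    """Asymptotic 100(1 - alpha)% CIs for (sigma_1, sigma_2) from the
    observed information matrix, with J_jj = 8 n_j^2 / S."""
    S = sum(xi**2 * (Ri + 1) for xi, Ri in zip(x, R))
    z = NormalDist().inv_cdf(1 - alpha / 2)   # upper alpha/2 normal quantile
    intervals = []
    for cause in (1, 2):
        nj = sum(1 for d in delta if d == cause)
        sigma_hat = sqrt(S / (2 * nj))
        half = z * sqrt(S) / (sqrt(8) * nj)   # z * sqrt(1 / J_jj)
        intervals.append((sigma_hat - half, sigma_hat + half))
    return intervals
```

The interval is symmetric about the MLE by construction; for small failure counts $n_j$ it can extend below zero, in which case truncating at zero is a common practical adjustment.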

3.1.2. Expected Fisher Information Matrix

The probability density function of $X_{r:m:n}$ for $r = 1, 2, \ldots, m$ is
$$f_{X_{r:m:n}}(x) = C_{r-1} \left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right) \sum_{i=1}^{r} a_{i,r}\, x\, e^{-x^2 \left(\frac{1}{2\sigma_1^2} + \frac{1}{2\sigma_2^2}\right) \gamma_i},$$
where
$$\gamma_r = m - r + 1 + \sum_{i=r}^{m} R_i, \quad C_{r-1} = \prod_{i=1}^{r} \gamma_i, \quad a_{i,r} = \prod_{j=1,\, j \neq i}^{r} \frac{1}{\gamma_j - \gamma_i}, \quad 1 \leq i \leq r \leq m.$$
Based on (17), the expected Fisher information matrix can be written as
$$I = \begin{pmatrix} I_{11} & I_{12} \\ I_{21} & I_{22} \end{pmatrix} = E \begin{pmatrix} -\dfrac{\partial^2 \log L(\sigma_1, \sigma_2)}{\partial \sigma_1^2} & -\dfrac{\partial^2 \log L(\sigma_1, \sigma_2)}{\partial \sigma_1 \partial \sigma_2} \\[2ex] -\dfrac{\partial^2 \log L(\sigma_1, \sigma_2)}{\partial \sigma_2 \partial \sigma_1} & -\dfrac{\partial^2 \log L(\sigma_1, \sigma_2)}{\partial \sigma_2^2} \end{pmatrix},$$
where
$$I_{11} = -\frac{2 n_1}{\sigma_1^2} + \sum_{i=1}^{m} (R_i + 1)\, C_{i-1} \sum_{j=1}^{i} a_{j,i}\, \frac{6 \sigma_2^2}{(\sigma_1^2 + \sigma_2^2)\, \gamma_j^2\, \sigma_1^2},$$
$$I_{12} = I_{21} = 0,$$
$$I_{22} = -\frac{2 n_2}{\sigma_2^2} + \sum_{i=1}^{m} (R_i + 1)\, C_{i-1} \sum_{j=1}^{i} a_{j,i}\, \frac{6 \sigma_1^2}{(\sigma_1^2 + \sigma_2^2)\, \gamma_j^2\, \sigma_2^2}.$$
After substituting $\hat{\sigma}_1$ and $\hat{\sigma}_2$ into the expected Fisher information matrix, we obtain
$$\hat{I}_{11} = -\frac{4 n_1^2}{\sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2} + \sum_{i=1}^{m} (R_i + 1)\, C_{i-1} \sum_{j=1}^{i} a_{j,i}\, \frac{12 n_1^2}{\gamma_j^2 (n_1 + n_2) \sum_{k=1}^{m} (R_k + 1) x_{k:m:n}^2},$$
$$\hat{I}_{12} = \hat{I}_{21} = 0,$$
$$\hat{I}_{22} = -\frac{4 n_2^2}{\sum_{i=1}^{m} (R_i + 1) x_{i:m:n}^2} + \sum_{i=1}^{m} (R_i + 1)\, C_{i-1} \sum_{j=1}^{i} a_{j,i}\, \frac{12 n_2^2}{\gamma_j^2 (n_1 + n_2) \sum_{k=1}^{m} (R_k + 1) x_{k:m:n}^2}.$$
Thus, the confidence intervals obtained from the expected Fisher information matrix are given by
$$\sigma_1 \in \left[\hat{\sigma}_1 - z_{\frac{\alpha}{2}} \sqrt{\hat{I}_{11}^{-1}},\; \hat{\sigma}_1 + z_{\frac{\alpha}{2}} \sqrt{\hat{I}_{11}^{-1}}\right],$$
$$\sigma_2 \in \left[\hat{\sigma}_2 - z_{\frac{\alpha}{2}} \sqrt{\hat{I}_{22}^{-1}},\; \hat{\sigma}_2 + z_{\frac{\alpha}{2}} \sqrt{\hat{I}_{22}^{-1}}\right],$$
where $z_{\frac{\alpha}{2}}$ is defined as before.

3.2. HPD Credible Intervals

Credible intervals on a posterior distribution are not unique. For a unimodal distribution, one can choose the narrowest interval, which contains the values of highest probability density; this is referred to as the highest posterior density credible interval. Since the posterior distributions here are unimodal, the highest posterior density credible intervals for $\sigma_1$ and $\sigma_2$ can be constructed accordingly. Hence, the $100(1 - \alpha)\%$ Bayes credible intervals for $\sigma_1$ and $\sigma_2$ must satisfy the following equations:
$$\int_{s_1}^{t_1} \pi_1^*(\sigma_1 \mid \sigma_2, \mathbf{x})\, d\sigma_1 = 1 - \alpha,$$
$$\pi_1^*(t_1 \mid \sigma_2, \mathbf{x}) = \pi_1^*(s_1 \mid \sigma_2, \mathbf{x}),$$
and
$$\int_{s_2}^{t_2} \pi_2^*(\sigma_2 \mid \sigma_1, \mathbf{x})\, d\sigma_2 = 1 - \alpha,$$
$$\pi_2^*(t_2 \mid \sigma_1, \mathbf{x}) = \pi_2^*(s_2 \mid \sigma_1, \mathbf{x}).$$
Then, the credible intervals for $\sigma_1$ and $\sigma_2$ can be derived by solving the following equations simultaneously:
$$\frac{\gamma(b_1 + n_1, u_1)}{\Gamma(b_1 + n_1)} - \frac{\gamma(b_1 + n_1, u_2)}{\Gamma(b_1 + n_1)} = 1 - \alpha, \qquad \left(\frac{t_1}{s_1}\right)^{2(b_1 + n_1) + 1} = e^{u_1 - u_2},$$
$$\frac{\gamma(b_2 + n_2, v_1)}{\Gamma(b_2 + n_2)} - \frac{\gamma(b_2 + n_2, v_2)}{\Gamma(b_2 + n_2)} = 1 - \alpha, \qquad \left(\frac{t_2}{s_2}\right)^{2(b_2 + n_2) + 1} = e^{v_1 - v_2},$$
where
$$u_1 = \frac{a_1 + \sum_{i=1}^{m} x_{i:m:n}^2 (R_i + 1)}{2 s_1^2}, \quad u_2 = \frac{a_1 + \sum_{i=1}^{m} x_{i:m:n}^2 (R_i + 1)}{2 t_1^2},$$
$$v_1 = \frac{a_2 + \sum_{i=1}^{m} x_{i:m:n}^2 (R_i + 1)}{2 s_2^2}, \quad v_2 = \frac{a_2 + \sum_{i=1}^{m} x_{i:m:n}^2 (R_i + 1)}{2 t_2^2},$$
$$\gamma(b_1 + n_1, u_i) = \int_0^{u_i} y^{b_1 + n_1 - 1} e^{-y}\, dy, \qquad \gamma(b_2 + n_2, v_i) = \int_0^{v_i} y^{b_2 + n_2 - 1} e^{-y}\, dy.$$
However, it is difficult to obtain exact solutions to these non-linear equations. Using the idea proposed by Ref. [11], the approximate HPD credible interval $\left[\sigma_{k,(l^*)},\, \sigma_{k,(l^* + [(1-\alpha)M])}\right]$ can be easily obtained by choosing $l^*$ so that
$$\sigma_{k,(l^* + [(1-\alpha)M])} - \sigma_{k,(l^*)} = \min_{1 \leq l \leq M - [(1-\alpha)M]} \left(\sigma_{k,(l + [(1-\alpha)M])} - \sigma_{k,(l)}\right),$$
where $M$ is the number of generated random variables $\sigma_k$, and $\sigma_{k,(l)}$ is the $l$th smallest of $\{\sigma_{k,(l)}\}_{l=1}^{M}$, $k = 1, 2$.
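In practice, this construction amounts to taking the shortest window of $[(1-\alpha)M]$ consecutive order statistics of the posterior draws. A small sketch (assuming the draws have already been generated, e.g. from the conditional densities in Section 2.2):

```python
def hpd_interval(samples, alpha=0.05):
    """Approximate HPD interval from posterior draws: the shortest
    interval whose endpoints are order statistics [(1 - alpha) * M] apart."""
    s = sorted(samples)
    M = len(s)
    k = int((1 - alpha) * M)                   # draws inside the interval
    # width of each candidate window of k draws
    widths = [s[l + k] - s[l] for l in range(M - k)]
    l_star = widths.index(min(widths))         # shortest window wins
    return s[l_star], s[l_star + k]
```

For a unimodal posterior this shortest window converges to the exact HPD interval as $M$ grows.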

4. Simulation Study

In this section, we conduct some simulation studies, and the results are presented in the following tables. In terms of bias and mean squared error (MSE), we compare the performance of the maximum likelihood estimator and the Bayes estimators under the two loss functions. In addition, we compare the interval lengths and the coverage rates of the asymptotic confidence intervals and the highest posterior density credible intervals. First, we generate progressively type II censored samples from the Rayleigh distribution using Algorithm 1, proposed by Ref. [12].
Algorithm 1 Generating the progressively type II censored samples.
1: Generate $m$ independent observations $U_1, U_2, \ldots, U_m$ from the uniform distribution $U(0, 1)$.
2: Set $V_i = U_i^{1 / (i + R_m + R_{m-1} + \cdots + R_{m-i+1})}$ for $i = 1, 2, \ldots, m$.
3: Set $W_i = 1 - V_m V_{m-1} \cdots V_{m-i+1}$ for $i = 1, 2, \ldots, m$. Then $W_1, W_2, \ldots, W_m$ are the progressively type II censored samples from the uniform distribution $U(0, 1)$.
4: Set $X_i = F^{-1}(W_i)$ for $i = 1, 2, \ldots, m$, where $F^{-1}(\cdot)$ is the inverse cumulative distribution function of $F(\cdot)$. Then $X_1, X_2, \ldots, X_m$ are the needed progressively type II censored samples from $F(\cdot)$.
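Algorithm 1 translates almost line by line into code. The sketch below (helper names are ours) also includes the inverse CDF of $X_i = \min\{X_{i1}, X_{i2}\}$ from Section 2, so the same routine can generate competing-risks lifetimes:

```python
import random
from math import log, sqrt

def progressive_type2_sample(R, inv_cdf, rng=random.random):
    """Progressively type II censored sample for scheme R, via Algorithm 1."""
    m = len(R)
    U = [rng() for _ in range(m)]
    # Step 2: V_i = U_i^(1 / (i + R_m + ... + R_{m-i+1}))
    V = [U[i - 1] ** (1 / (i + sum(R[m - i:]))) for i in range(1, m + 1)]
    # Step 3: W_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1}
    W, prod = [], 1.0
    for i in range(1, m + 1):
        prod *= V[m - i]
        W.append(1 - prod)
    # Step 4: transform by the inverse CDF of the target distribution
    return [inv_cdf(w) for w in W]

def rayleigh_min_inv_cdf(sigma1, sigma2):
    """Inverse CDF of min(X_1, X_2) for Rayleigh causes (sigma_1, sigma_2)."""
    theta = 1 / (2 * sigma1**2) + 1 / (2 * sigma2**2)
    return lambda w: sqrt(-log(1 - w) / theta)
```

For example, `progressive_type2_sample([0, 0, 2], rayleigh_min_inv_cdf(0.8, 1.0))` returns the $m = 3$ ordered observed failure times for a test of $n = 5$ units; the returned sample is increasing by construction, since the $W_i$ are.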
We choose different sizes of $n$ and $m$ and consider the five censoring schemes listed in Table 1.
Notice that Schemes 1, 2, and 3 are right censoring and Scheme 4 is left censoring. As for Scheme 5, the expected lifetime of the samples should lie between those of Scheme 3 and Scheme 4. Here we fix three sets of parameters, namely $\sigma_1 = 0.8, \sigma_2 = 1$; $\sigma_1 = 0.8, \sigma_2 = 0.6$; and $\sigma_1 = 0.5, \sigma_2 = 0.6$. In the maximum likelihood estimation, we calculate the estimates of $\sigma_1$ and $\sigma_2$ and the corresponding bias and mean squared errors. The results are shown in Table 2. For Bayes estimation, we use both non-informative and informative priors. According to the different sets of parameters, we take three informative priors: the hyper-parameters of Informative-I are $a_1 = 2.9$, $b_1 = 3$, $a_2 = 6$, $b_2 = 3$; those of Informative-II are $a_1 = 2.9$, $b_1 = 3$, $a_2 = 1$, $b_2 = 1.5$; and those of Informative-III are $a_1 = 1$, $b_1 = 3$, $a_2 = 1$, $b_2 = 1.5$. Thus, we obtain the Bayes estimates under the two loss functions and present the results in Table 3 and Table 4. Then we construct the 95% confidence intervals using the observed information matrix as well as the expected Fisher information matrix, and use the different priors to derive the HPD credible intervals. We compare the calculated interval lengths and coverage rates of the different methods and present the results in Table 5.

4.1. Point Estimates

From Table 2 and Table 3, one can see that the MLEs of $\sigma_1$ and $\sigma_2$ perform better than the Bayes estimates with non-informative priors under both loss functions in terms of bias and MSE. From Table 2 and Table 4, one can see that the Bayes estimates of $\sigma_1$ with informative priors under the squared error loss function possess the minimum bias and MSE compared with the MLEs of $\sigma_1$, while the Bayes estimates of $\sigma_2$ with informative priors under the entropy loss function possess the minimum bias and MSE compared with the MLEs of $\sigma_2$. From Table 3, one can see that the Bayes estimates of $\sigma_1$ and $\sigma_2$ under the entropy loss function possess the minimum bias and MSE for non-informative priors. From Table 4, one can see that the Bayes estimates of $\sigma_1$ under the squared error loss function possess smaller bias than those under the entropy loss function for informative priors, but the Bayes estimates of $\sigma_2$ under the entropy loss function possess the minimum bias and MSE for informative priors. From Table 3 and Table 4, one can see that the Bayes estimates of $\sigma_1$ and $\sigma_2$ with informative priors perform better than those with non-informative priors, possessing smaller bias and MSE.
From Table 2, Table 3 and Table 4, one can see that as the sample sizes $n$ and $m$ increase, the bias and MSE of the maximum likelihood estimates and Bayes estimates decrease accordingly. For fixed $n$ and $m$, the maximum likelihood estimates and Bayes estimates under left censoring perform better in terms of bias and MSE. One can also see that for a fixed value of $\sigma_1$, the estimates of $\sigma_1$ possess larger bias and MSE when the value of $\sigma_2$ decreases. In contrast, for a fixed value of $\sigma_2$, the estimates of $\sigma_2$ possess smaller bias and MSE when the value of $\sigma_1$ increases.

4.2. Interval Estimates

From Table 5, one can observe that the second method performs slightly better than the first in terms of interval lengths and coverage rates, although the difference between them is not remarkable. Moreover, notice that the third method does not work as well as the first two. However, as the sample size increases, the interval lengths of the Bayes credible intervals with non-informative priors become increasingly similar to those obtained by the first two methods. As for the fourth method, the performance is quite satisfactory. In general, the Bayes credible intervals with informative priors maintain the smallest average interval lengths.
As can be seen from Table 5, the intervals become narrower and the coverage rates approach 95% as $n$ and $m$ increase. Similarly, when we fix the value of $\sigma_1$ and decrease the value of $\sigma_2$, the intervals of $\sigma_1$ become wider and the coverage rates of $\sigma_1$ become smaller. In contrast, when we fix the value of $\sigma_2$ and increase the value of $\sigma_1$, the intervals of $\sigma_1$ become narrower and the coverage rates of $\sigma_1$ become larger.

5. Data Analysis

In this section, we analyze a real-life data set from Ref. [13] for illustration purposes. The data come from an experiment in which new models of a small electrical appliance were being tested. The appliances were operated repeatedly by an automatic testing machine. There are 18 different possible causes of failure for the appliance. We will focus on failure mode 9. Therefore, we set $\delta_i = 1$ if the failure occurs in mode 9 and $\delta_i = 2$ if the failure occurs in any other mode.
Data Set
(11, 2), (35, 2), (49, 2), (170, 2), (329, 2), (381, 2), (708, 2), (958, 2), (1062, 2), (1167, 1), (1594, 2), (1925, 1),
(1990, 1), (2223, 1), (2327, 2), (2400, 1), (2451, 2), (2471, 1), (2551, 1), (2568, 1), (2694, 1), (2702, 2), (2761, 2),
(2831, 2), (3034, 1), (3059, 2), (3112, 1), (3214, 1), (3478, 1), (3504, 1), (4329, 1), (6976, 1), (7846, 1).
To determine whether the data come from a Rayleigh distribution, we conduct a Kolmogorov-Smirnov (K-S) test on the data in Matlab. The p-value is 0.2983. Since the p-value is higher than the significance level of 0.05, we cannot reject the null hypothesis that the data come from a Rayleigh distribution. We use the original data to generate progressively type II censored samples. The detailed results are presented in Table 6.
The maximum likelihood estimates of $\sigma_1$ and $\sigma_2$ are obtained, and the Bayes estimates of $\sigma_1$ and $\sigma_2$ under the squared error and entropy loss functions are obtained using a non-informative prior. The estimated values are presented in Table 7. We also provide the average interval lengths of $\sigma_1$ and $\sigma_2$ obtained by the three methods discussed in the previous sections and present the results in Table 8.

6. Further Discussion

On the basis of the progressively type II censored competing risks data, we now assume that the removal $R_i$ is a random variable following a binomial distribution with parameter $p$, which means each item is removed with equal probability $p$. Thus, the probability that $r_i$ items are removed after the $i$th failure occurs is given by
$$P(R_i = r_i \mid R_{i-1} = r_{i-1}, \ldots, R_1 = r_1) = \binom{n - m - \sum_{j=1}^{i-1} r_j}{r_i}\, p^{r_i} (1 - p)^{n - m - \sum_{j=1}^{i} r_j},$$
where
$$0 \leq r_i \leq n - m - \sum_{j=1}^{i-1} r_j, \quad i = 2, 3, \ldots, m - 1.$$
Let the random censoring scheme be $R = (R_1, R_2, \ldots, R_m)$ and $r = (r_1, r_2, \ldots, r_m)$. Then the joint probability mass function of $R$ is given by
$$P(R_1 = r_1, \ldots, R_m = r_m) = P(R_m = r_m \mid R_{m-1} = r_{m-1}, \ldots, R_1 = r_1) \cdots P(R_2 = r_2 \mid R_1 = r_1)\, P(R_1 = r_1) = \frac{(n - m)!}{\prod_{i=1}^{m-1} r_i!\, \left(n - m - \sum_{i=1}^{m-1} r_i\right)!}\; p^{\sum_{i=1}^{m-1} r_i} (1 - p)^{(m-1)(n-m) - \sum_{i=1}^{m-1} (m - i) r_i}.$$
Then, the likelihood function can be written as
$$L(\sigma_1, \sigma_2, p) = L_1(\mathbf{x}; \sigma_1, \sigma_2 \mid R_1 = r_1, \ldots, R_m = r_m)\, P(R_1 = r_1, \ldots, R_m = r_m),$$
where
$$L_1(\mathbf{x}; \sigma_1, \sigma_2 \mid R_1 = r_1, \ldots, R_m = r_m) = C \prod_{i=1}^{m} \left[f_1(x_{i:m:n})\, \bar{F}_2(x_{i:m:n})\right]^{I(\delta_i = 1)} \left[f_2(x_{i:m:n})\, \bar{F}_1(x_{i:m:n})\right]^{I(\delta_i = 2)} \left[1 - F(x_{i:m:n})\right]^{r_i},$$
$C$ is defined as before, and $P(R_1 = r_1, \ldots, R_m = r_m)$ is given in (25).
As can be seen, $L_1$ does not involve the parameter $p$. Hence, the maximum likelihood estimators of $\sigma_1$ and $\sigma_2$ can be derived by maximizing (27). Similarly, since $P(R_1 = r_1, \ldots, R_m = r_m)$ does not involve the parameters $\sigma_1$ and $\sigma_2$, the maximum likelihood estimator of $p$ can be derived by maximizing (25). Thus the MLEs of $\sigma_1$, $\sigma_2$, and $p$ take the following forms:
$$\hat{\sigma}_1 = \sqrt{\frac{\sum_{i=1}^{m} x_{i:m:n}^2 (r_i + 1)}{2 n_1}},$$
$$\hat{\sigma}_2 = \sqrt{\frac{\sum_{i=1}^{m} x_{i:m:n}^2 (r_i + 1)}{2 n_2}},$$
$$\hat{p} = \frac{\sum_{i=1}^{m-1} r_i}{(m-1)(n-m) - \sum_{i=1}^{m-1} (m - i - 1) r_i}.$$
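The closed form of $\hat{p}$ depends only on the observed removals. A minimal sketch (function name is ours):

```python
def removal_prob_mle(r, n, m):
    """Closed-form MLE of the binomial removal probability p;
    only the removals r_1, ..., r_{m-1} enter the formula."""
    total = sum(r[:m - 1])                      # sum_{i=1}^{m-1} r_i
    # denominator: (m-1)(n-m) - sum_{i=1}^{m-1} (m - i - 1) r_i
    denom = (m - 1) * (n - m) - sum((m - (i + 1) - 1) * ri
                                    for i, ri in enumerate(r[:m - 1]))
    return total / denom
```

Equivalently, $\hat{p}$ is the ratio of the total number of removals to the total number of removal opportunities, which is how the binomial MLE is usually interpreted.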
For Bayes estimation, we assume that the prior distributions of $\sigma_1$, $\sigma_2$, and $p$ have probability density functions given by
$$\pi_1(\sigma_1) = \frac{a_1^{b_1}}{\Gamma(b_1)\, 2^{b_1 - 1}}\, \sigma_1^{-2 b_1 - 1}\, e^{-\frac{a_1}{2 \sigma_1^2}}, \quad \sigma_1 > 0,$$
$$\pi_2(\sigma_2) = \frac{a_2^{b_2}}{\Gamma(b_2)\, 2^{b_2 - 1}}\, \sigma_2^{-2 b_2 - 1}\, e^{-\frac{a_2}{2 \sigma_2^2}}, \quad \sigma_2 > 0,$$
$$\pi_3(p) = \frac{\Gamma(a_3 + b_3)}{\Gamma(a_3)\, \Gamma(b_3)}\, p^{a_3 - 1} (1 - p)^{b_3 - 1}, \quad a_3, b_3 > 0.$$
Then the joint posterior distribution of $\sigma_1$, $\sigma_2$, and $p$ is given by
$$\pi^*(\sigma_1, \sigma_2, p \mid \mathbf{x}) = \frac{L(\sigma_1, \sigma_2, p)\, \pi_1(\sigma_1)\, \pi_2(\sigma_2)\, \pi_3(p)}{\int\!\int\!\int L(\sigma_1, \sigma_2, p)\, \pi_1(\sigma_1)\, \pi_2(\sigma_2)\, \pi_3(p)\, d\sigma_1\, d\sigma_2\, dp}.$$
The conditional density of $\sigma_1$ given $\sigma_2$, $p$, $\mathbf{x}$, and $r$ is given by
$$\pi_1^*(\sigma_1 \mid \sigma_2, p, \mathbf{x}, r) \propto \sigma_1^{-2 n_1 - 2 b_1 - 1}\, e^{-\frac{a_1}{2 \sigma_1^2}}\, e^{-\sum_{i=1}^{m} x_{i:m:n}^2 \frac{1}{2 \sigma_1^2}(r_i + 1)}.$$
Similarly, the conditional density of $\sigma_2$ given $\sigma_1$, $p$, $\mathbf{x}$, and $r$ is given by
$$\pi_2^*(\sigma_2 \mid \sigma_1, p, \mathbf{x}, r) \propto \sigma_2^{-2 n_2 - 2 b_2 - 1}\, e^{-\frac{a_2}{2 \sigma_2^2}}\, e^{-\sum_{i=1}^{m} x_{i:m:n}^2 \frac{1}{2 \sigma_2^2}(r_i + 1)}.$$
Furthermore, the conditional density of $p$ given $\sigma_1$, $\sigma_2$, $\mathbf{x}$, and $r$ is given by
$$\pi_3^*(p \mid \sigma_1, \sigma_2, \mathbf{x}, r) \propto p^{\sum_{i=1}^{m-1} r_i + a_3 - 1} (1 - p)^{(m-1)(n-m) - \sum_{i=1}^{m-1} (m - i) r_i + b_3 - 1}.$$
Hence, the Bayes estimators under the squared error loss function are given by
$$\tilde{\sigma}_{1,SEL} = E(\sigma_1 \mid \mathbf{x}, r) = \sqrt{\frac{a_1 + \sum_{i=1}^{m} (r_i + 1) x_{i:m:n}^2}{2}}\; \frac{\Gamma(n_1 + b_1 - \frac{1}{2})}{\Gamma(n_1 + b_1)},$$
$$\tilde{\sigma}_{2,SEL} = E(\sigma_2 \mid \mathbf{x}, r) = \sqrt{\frac{a_2 + \sum_{i=1}^{m} (r_i + 1) x_{i:m:n}^2}{2}}\; \frac{\Gamma(n_2 + b_2 - \frac{1}{2})}{\Gamma(n_2 + b_2)},$$
and
$$\tilde{p}_{SEL} = E(p \mid \mathbf{x}, r) = \frac{\sum_{i=1}^{m-1} r_i + a_3}{(m-1)(n-m) - \sum_{i=1}^{m-1} (m - i - 1) r_i + a_3 + b_3}.$$
The Bayes estimators under the general entropy loss function are given by
$$\tilde{\sigma}_{1,EL} = \left(E(\sigma_1^{-q} \mid \mathbf{x}, r)\right)^{-\frac{1}{q}} = \left[\frac{2^{\frac{q}{2}}\, \Gamma(n_1 + b_1 + \frac{q}{2})}{\left(a_1 + \sum_{i=1}^{m} (r_i + 1) x_{i:m:n}^2\right)^{\frac{q}{2}}\, \Gamma(n_1 + b_1)}\right]^{-\frac{1}{q}},$$
$$\tilde{\sigma}_{2,EL} = \left(E(\sigma_2^{-q} \mid \mathbf{x}, r)\right)^{-\frac{1}{q}} = \left[\frac{2^{\frac{q}{2}}\, \Gamma(n_2 + b_2 + \frac{q}{2})}{\left(a_2 + \sum_{i=1}^{m} (r_i + 1) x_{i:m:n}^2\right)^{\frac{q}{2}}\, \Gamma(n_2 + b_2)}\right]^{-\frac{1}{q}},$$
and
$$\tilde{p}_{EL} = \left(E(p^{-q} \mid \mathbf{x}, r)\right)^{-\frac{1}{q}} = \left[\frac{\Gamma\!\left(\sum_{i=1}^{m-1} r_i + a_3 - q\right)\, \Gamma\!\left((m-1)(n-m) - \sum_{i=1}^{m-1} (m - i - 1) r_i + a_3 + b_3\right)}{\Gamma\!\left(\sum_{i=1}^{m-1} r_i + a_3\right)\, \Gamma\!\left((m-1)(n-m) - \sum_{i=1}^{m-1} (m - i - 1) r_i + a_3 + b_3 - q\right)}\right]^{-\frac{1}{q}}.$$
The observed Fisher information matrix is given by
$$J(\sigma_1, \sigma_2, p) = \begin{pmatrix} -\dfrac{\partial^2 \log L}{\partial \sigma_1^2} & -\dfrac{\partial^2 \log L}{\partial \sigma_1 \partial \sigma_2} & -\dfrac{\partial^2 \log L}{\partial \sigma_1 \partial p} \\[2ex] -\dfrac{\partial^2 \log L}{\partial \sigma_2 \partial \sigma_1} & -\dfrac{\partial^2 \log L}{\partial \sigma_2^2} & -\dfrac{\partial^2 \log L}{\partial \sigma_2 \partial p} \\[2ex] -\dfrac{\partial^2 \log L}{\partial p\, \partial \sigma_1} & -\dfrac{\partial^2 \log L}{\partial p\, \partial \sigma_2} & -\dfrac{\partial^2 \log L}{\partial p^2} \end{pmatrix},$$
where $L = L(\sigma_1, \sigma_2, p)$. After substituting $\hat{\sigma}_1$, $\hat{\sigma}_2$, and $\hat{p}$ for $\sigma_1$, $\sigma_2$, and $p$ in (38), we denote
$$J(\hat{\sigma}_1, \hat{\sigma}_2, \hat{p}) = \begin{pmatrix} \hat{J}_{11} & \hat{J}_{12} & \hat{J}_{13} \\ \hat{J}_{21} & \hat{J}_{22} & \hat{J}_{23} \\ \hat{J}_{31} & \hat{J}_{32} & \hat{J}_{33} \end{pmatrix},$$
where
$$\hat{J}_{11} = \frac{8 n_1^2}{\sum_{i=1}^{m} x_{i:m:n}^2 (r_i + 1)}, \quad \hat{J}_{22} = \frac{8 n_2^2}{\sum_{i=1}^{m} x_{i:m:n}^2 (r_i + 1)},$$
$$\hat{J}_{33} = \frac{\sum_{i=1}^{m-1} r_i}{\hat{p}^2} + \frac{(m-1)(n-m) - \sum_{i=1}^{m-1} (m - i) r_i}{(1 - \hat{p})^2},$$
$$\hat{J}_{12} = \hat{J}_{13} = \hat{J}_{21} = \hat{J}_{23} = \hat{J}_{31} = \hat{J}_{32} = 0.$$
This matrix can be applied to calculate the asymptotic confidence intervals of the unknown parameters. In addition, Bayes credible intervals can be derived by methods similar to those discussed in Section 3.
In the future, we will study statistical inference for the Rayleigh distribution based on progressively type II censored competing risks data with binomial removals more thoroughly. Assuming that $R_i$, the number of units removed from the remaining surviving products, follows a binomial distribution with probability $p$, the point and interval estimates can be derived from the theoretical results above.

7. Conclusions

In this paper, we investigated progressively type II censored competing risks data when the lifetime of samples is Rayleigh distributed. We assumed that there are two independent competing risks that follow Rayleigh distributions, with the parameters σ 1 and σ 2 . The maximum likelihood estimation and the Bayes estimation under two different loss functions of σ 1 and σ 2 were derived. In addition, we computed the observed information and the expected Fisher information to construct the asymptotic confidence intervals. The highest posterior density credible intervals were also obtained and compared with the asymptotic confidence intervals.
Furthermore, we presented a simulation study and a real-life data set analysis to demonstrate these methods. In the estimation of σ 1 and σ 2 , the Bayes estimates with informative priors performed best, and maximum likelihood estimation outperformed the Bayes estimates with non-informative priors. Among the Bayes estimates of σ 1 , those under the squared error loss function performed better, while among the Bayes estimates of σ 2 , those under the general entropy loss function performed better. Moreover, among all the calculated intervals, the Bayes credible intervals with informative priors possessed the smallest average interval lengths.

Author Contributions

Methodology and Writing, H.L.; Supervision, W.G.

Funding

This research was supported by Project 202010004001 of the National Training Program of Innovation and Entrepreneurship for Undergraduates.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring, 2nd ed.; Birkhäuser: New York, NY, USA, 2014; pp. 313–325.
  2. Madi, M.T.; Raqab, M.Z. Bayesian inference for the generalized exponential distribution based on progressively censored data. Commun. Stat. 2009, 38, 2016–2029.
  3. Rastogi, M.K.; Tripathi, Y.M. Estimating the parameters of a Burr distribution under progressive type-II censoring. Stat. Methodol. 2012, 9, 381–391.
  4. Andersen, P.K.; Abildstrom, S.Z.; Rosthoj, S. Competing risks as a multi-state model. Stat. Methods Med. Res. 2002, 11, 203–215.
  5. Cramer, E.; Schmiedt, A.B. Progressively type-II censored competing risks data from Lomax distributions. Comput. Stat. Data Anal. 2011, 55, 1285–1303.
  6. Ahmadi, K.; Rezaei, M.; Yousefzadeh, F. Point predictors of the latent failure times of censored units in progressively type-II censored competing risks data from the exponential distributions. J. Stat. Comput. Simul. 2015, 86, 1620–1634.
  7. Dey, A.K.; Jha, A.; Dey, S. Bayesian analysis of modified Weibull distribution under progressively censored competing risk model. arXiv 2016, arXiv:1605.06585.
  8. Hashemi, R.; Amiri, L. Analysis of progressive type-II censoring in the Weibull model for competing risks data with binomial removals. Appl. Math. Sci. 2000, 5, 1073–1087.
  9. Wu, S.J.; Chen, D.H.; Chen, S.T. Bayesian inference for Rayleigh distribution under progressive censored sample. Appl. Stoch. Models Bus. Ind. 2010, 22, 269–279.
  10. Ali Mousa, M.A.M.; Al-Sagheer, S.A. Statistical inference for the Rayleigh model based on progressively type-II censored data. Statistics 2006, 40, 149–157.
  11. Chen, M.; Shao, Q. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92.
  12. Balakrishnan, N.; Sandhu, R.A. A simple simulational algorithm for generating progressive type-II censored samples. Am. Stat. 1995, 49, 229–230.
  13. Lawless, J. Statistical Models and Methods for Lifetime Data; John Wiley & Sons: Hoboken, NJ, USA, 2003.
Table 1. Different choices of censoring schemes.
Scheme 1 (n = 25, m = 20): R = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5]
Scheme 2 (n = 36, m = 24): R = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12]
Scheme 3 (n = 40, m = 30): R = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10]
Scheme 4 (n = 40, m = 30): R = [10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Scheme 5 (n = 40, m = 30): R = [5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5]
Table 2. MLEs, bias, and MSEs of σ 1 and σ 2 under different censoring schemes.
| σ₁  | σ₂  | n  | m  | σ̂₁    | Bias   | MSE    | σ̂₂    | Bias   | MSE    |
|-----|-----|----|----|--------|--------|--------|--------|--------|--------|
| 0.8 | 1   | 25 | 20 | 0.8099 | 0.0099 | 0.0149 | 1.0251 | 0.0251 | 0.0438 |
| 0.8 | 1   | 36 | 24 | 0.8064 | 0.0064 | 0.0117 | 1.0234 | 0.0234 | 0.0330 |
| 0.8 | 1   | 40 | 30 | 0.8034 | 0.0034 | 0.0099 | 1.0239 | 0.0239 | 0.0257 |
| 0.8 | 1   | 40 | 30 | 0.8021 | 0.0021 | 0.0096 | 1.0158 | 0.0158 | 0.0253 |
| 0.8 | 1   | 40 | 30 | 0.8022 | 0.0022 | 0.0097 | 1.0176 | 0.0176 | 0.0241 |
| 0.8 | 0.6 | 25 | 20 | 0.8238 | 0.0238 | 0.0286 | 0.6042 | 0.0042 | 0.0071 |
| 0.8 | 0.6 | 36 | 24 | 0.8175 | 0.0175 | 0.0240 | 0.6037 | 0.0037 | 0.0046 |
| 0.8 | 0.6 | 40 | 30 | 0.8169 | 0.0169 | 0.0176 | 0.6034 | 0.0034 | 0.0050 |
| 0.8 | 0.6 | 40 | 30 | 0.8161 | 0.0161 | 0.0177 | 0.6016 | 0.0016 | 0.0078 |
| 0.8 | 0.6 | 40 | 30 | 0.8174 | 0.0174 | 0.0166 | 0.6015 | 0.0015 | 0.0050 |
| 0.5 | 0.6 | 25 | 20 | 0.5049 | 0.0049 | 0.0056 | 0.6137 | 0.0137 | 0.0153 |
| 0.5 | 0.6 | 36 | 24 | 0.5031 | 0.0031 | 0.0051 | 0.6112 | 0.0112 | 0.0127 |
| 0.5 | 0.6 | 40 | 30 | 0.5023 | 0.0023 | 0.0036 | 0.6099 | 0.0099 | 0.0093 |
| 0.5 | 0.6 | 40 | 30 | 0.5020 | 0.0020 | 0.0037 | 0.6054 | 0.0054 | 0.0080 |
| 0.5 | 0.6 | 40 | 30 | 0.5022 | 0.0022 | 0.0041 | 0.6088 | 0.0088 | 0.0095 |
Table 3. Bayes estimates with non-informative prior under the squared error loss function and entropy loss function, and the corresponding bias and MSEs.
All rows use non-informative priors; the left block reports estimates under the squared error loss (SEL), the right block under the general entropy loss (EL).

| σ₁  | σ₂  | n  | m  | σ̃₁,SEL | Bias   | MSE    | σ̃₂,SEL | Bias   | MSE    | σ̃₁,EL  | Bias   | MSE    | σ̃₂,EL  | Bias   | MSE    |
|-----|-----|----|----|---------|--------|--------|---------|--------|--------|---------|--------|--------|---------|--------|--------|
| 0.8 | 1   | 25 | 20 | 0.8329  | 0.0329 | 0.0168 | 1.0885  | 0.0885 | 0.0829 | 0.8193  | 0.0193 | 0.0156 | 1.0465  | 0.0465 | 0.0513 |
| 0.8 | 1   | 36 | 24 | 0.8281  | 0.0281 | 0.0137 | 1.0933  | 0.0933 | 0.0893 | 0.8107  | 0.0107 | 0.0129 | 1.0330  | 0.0330 | 0.0333 |
| 0.8 | 1   | 40 | 30 | 0.8260  | 0.0260 | 0.0112 | 1.0590  | 0.0590 | 0.0369 | 0.8089  | 0.0089 | 0.0095 | 1.0329  | 0.0329 | 0.0298 |
| 0.8 | 1   | 40 | 30 | 0.8208  | 0.0208 | 0.0106 | 1.0429  | 0.0429 | 0.0308 | 0.8056  | 0.0056 | 0.0098 | 1.0229  | 0.0229 | 0.0242 |
| 0.8 | 1   | 40 | 30 | 0.8221  | 0.0221 | 0.0110 | 1.0524  | 0.0524 | 0.0341 | 0.8064  | 0.0064 | 0.0093 | 1.0342  | 0.0342 | 0.0313 |
| 0.8 | 0.6 | 25 | 20 | 0.8696  | 0.0696 | 0.0407 | 0.6237  | 0.0237 | 0.0087 | 0.8428  | 0.0428 | 0.0353 | 0.6086  | 0.0086 | 0.0084 |
| 0.8 | 0.6 | 36 | 24 | 0.8569  | 0.0569 | 0.0358 | 0.6194  | 0.0194 | 0.0073 | 0.8387  | 0.0387 | 0.0293 | 0.6065  | 0.0065 | 0.0064 |
| 0.8 | 0.6 | 40 | 30 | 0.8491  | 0.0491 | 0.0238 | 0.6133  | 0.0133 | 0.0057 | 0.8319  | 0.0319 | 0.0204 | 0.6075  | 0.0075 | 0.0051 |
| 0.8 | 0.6 | 40 | 30 | 0.8486  | 0.0486 | 0.0259 | 0.6115  | 0.0115 | 0.0054 | 0.8281  | 0.0281 | 0.0202 | 0.6028  | 0.0028 | 0.0049 |
| 0.8 | 0.6 | 40 | 30 | 0.8493  | 0.0493 | 0.0261 | 0.6128  | 0.0128 | 0.0055 | 0.8293  | 0.0293 | 0.0213 | 0.6036  | 0.0036 | 0.0050 |
| 0.5 | 0.6 | 25 | 20 | 0.5201  | 0.0201 | 0.0069 | 0.6515  | 0.0515 | 0.0224 | 0.5109  | 0.0109 | 0.0061 | 0.6249  | 0.0249 | 0.0137 |
| 0.5 | 0.6 | 36 | 24 | 0.5192  | 0.0192 | 0.0062 | 0.6440  | 0.0440 | 0.0167 | 0.5091  | 0.0091 | 0.0049 | 0.6228  | 0.0228 | 0.0178 |
| 0.5 | 0.6 | 40 | 30 | 0.5118  | 0.0118 | 0.0042 | 0.6260  | 0.0260 | 0.0098 | 0.5066  | 0.0066 | 0.0038 | 0.6209  | 0.0209 | 0.0104 |
| 0.5 | 0.6 | 40 | 30 | 0.5115  | 0.0115 | 0.0040 | 0.6290  | 0.0290 | 0.0108 | 0.5039  | 0.0039 | 0.0035 | 0.6144  | 0.0144 | 0.0090 |
| 0.5 | 0.6 | 40 | 30 | 0.5117  | 0.0117 | 0.0047 | 0.6354  | 0.0354 | 0.0119 | 0.5047  | 0.0047 | 0.0039 | 0.6202  | 0.0202 | 0.0098 |
Table 4. Bayes estimates with informative prior under the squared error loss function and entropy loss function, and the corresponding bias and MSEs.
The left block reports estimates under the squared error loss (SEL), the right block under the general entropy loss (EL).

| Prior | σ₁  | σ₂  | n  | m  | σ̃₁,SEL | Bias    | MSE    | σ̃₂,SEL | Bias   | MSE    | σ̃₁,EL | Bias    | MSE    | σ̃₂,EL | Bias   | MSE    |
|-------|-----|-----|----|----|---------|---------|--------|---------|--------|--------|--------|---------|--------|--------|--------|--------|
| I     | 0.8 | 1   | 25 | 20 | 0.8077  | 0.0077  | 0.0102 | 1.0492  | 0.0492 | 0.0245 | 0.7887 | −0.0113 | 0.0088 | 1.0240 | 0.0240 | 0.0211 |
| I     | 0.8 | 1   | 36 | 24 | 0.8052  | 0.0052  | 0.0086 | 1.0469  | 0.0469 | 0.0214 | 0.7892 | −0.0108 | 0.0085 | 1.0221 | 0.0221 | 0.0179 |
| I     | 0.8 | 1   | 40 | 30 | 0.8029  | 0.0029  | 0.0078 | 1.0391  | 0.0391 | 0.0176 | 0.7914 | −0.0086 | 0.0080 | 1.0166 | 0.0166 | 0.0149 |
| I     | 0.8 | 1   | 40 | 30 | 0.8018  | 0.0018  | 0.0075 | 1.0375  | 0.0375 | 0.0181 | 0.7920 | −0.0080 | 0.0065 | 1.0093 | 0.0093 | 0.0147 |
| I     | 0.8 | 1   | 40 | 30 | 0.8019  | 0.0019  | 0.0076 | 1.0453  | 0.0453 | 0.0187 | 0.7918 | −0.0082 | 0.0067 | 1.0150 | 0.0150 | 0.0141 |
| II    | 0.8 | 0.6 | 25 | 20 | 0.8129  | 0.0129  | 0.0141 | 0.6156  | 0.0156 | 0.0071 | 0.7849 | −0.0151 | 0.0130 | 0.6041 | 0.0041 | 0.0064 |
| II    | 0.8 | 0.6 | 36 | 24 | 0.8099  | 0.0099  | 0.0133 | 0.6159  | 0.0159 | 0.0065 | 0.7867 | −0.0133 | 0.0116 | 0.6028 | 0.0028 | 0.0048 |
| II    | 0.8 | 0.6 | 40 | 30 | 0.8124  | 0.0124  | 0.0106 | 0.6131  | 0.0131 | 0.0044 | 0.7900 | −0.0100 | 0.0093 | 0.6021 | 0.0021 | 0.0045 |
| II    | 0.8 | 0.6 | 40 | 30 | 0.8108  | 0.0108  | 0.0108 | 0.6111  | 0.0111 | 0.0047 | 0.7903 | −0.0097 | 0.0100 | 0.6008 | 0.0008 | 0.0040 |
| II    | 0.8 | 0.6 | 40 | 30 | 0.8118  | 0.0118  | 0.0113 | 0.6128  | 0.0128 | 0.0048 | 0.7884 | −0.0116 | 0.0102 | 0.6011 | 0.0011 | 0.0042 |
| III   | 0.5 | 0.6 | 25 | 20 | 0.4976  | −0.0024 | 0.0041 | 0.6324  | 0.0324 | 0.0131 | 0.4906 | −0.0094 | 0.0036 | 0.6121 | 0.0121 | 0.0088 |
| III   | 0.5 | 0.6 | 36 | 24 | 0.4983  | −0.0017 | 0.0037 | 0.6273  | 0.0273 | 0.0108 | 0.4915 | −0.0085 | 0.0033 | 0.6109 | 0.0109 | 0.0088 |
| III   | 0.5 | 0.6 | 40 | 30 | 0.4983  | −0.0017 | 0.0032 | 0.6196  | 0.0196 | 0.0085 | 0.4917 | −0.0083 | 0.0031 | 0.6095 | 0.0095 | 0.0073 |
| III   | 0.5 | 0.6 | 40 | 30 | 0.4991  | −0.0009 | 0.0027 | 0.6193  | 0.0193 | 0.0074 | 0.4923 | −0.0077 | 0.0028 | 0.6053 | 0.0053 | 0.0066 |
| III   | 0.5 | 0.6 | 40 | 30 | 0.4985  | −0.0015 | 0.0028 | 0.6211  | 0.0211 | 0.0071 | 0.4919 | −0.0081 | 0.0027 | 0.6075 | 0.0075 | 0.0063 |
Table 5. Interval lengths and coverage rates of asymptotic confidence intervals and highest posterior density (HPD) credible intervals.
Entries are average interval lengths with coverage rates in parentheses; column headers give the parameter setting (σ₁, σ₂).

| Scheme | Interval          | σ₁ (0.8, 1)   | σ₂ (0.8, 1)   | σ₁ (0.8, 0.6) | σ₂ (0.8, 0.6) | σ₁ (0.5, 0.6) | σ₂ (0.5, 0.6) |
|--------|-------------------|---------------|---------------|---------------|---------------|---------------|---------------|
| 1      | ACI (observed)    | 0.4645 (0.93) | 0.7655 (0.94) | 0.6301 (0.93) | 0.3384 (0.95) | 0.2946 (0.93) | 0.4428 (0.94) |
| 1      | ACI (expected)    | 0.4603 (0.93) | 0.7795 (0.94) | 0.6371 (0.93) | 0.3392 (0.94) | 0.2920 (0.94) | 0.4530 (0.94) |
| 1      | HPD (non-inf.)    | 0.4770 (0.93) | 0.8100 (0.94) | 0.7025 (0.92) | 0.3392 (0.94) | 0.2962 (0.93) | 0.4872 (0.93) |
| 1      | HPD (informative) | 0.3971 (0.95) | 0.6367 (0.96) | 0.5152 (0.94) | 0.3193 (0.95) | 0.2536 (0.94) | 0.4133 (0.94) |
| 2      | ACI (observed)    | 0.4232 (0.95) | 0.6893 (0.95) | 0.5948 (0.93) | 0.3065 (0.95) | 0.2679 (0.94) | 0.4032 (0.95) |
| 2      | ACI (expected)    | 0.4208 (0.94) | 0.6851 (0.95) | 0.5743 (0.94) | 0.3074 (0.95) | 0.2687 (0.95) | 0.4004 (0.94) |
| 2      | HPD (non-inf.)    | 0.4142 (0.93) | 0.7149 (0.94) | 0.5954 (0.93) | 0.3077 (0.94) | 0.2676 (0.93) | 0.4169 (0.94) |
| 2      | HPD (informative) | 0.3689 (0.95) | 0.5920 (0.96) | 0.4747 (0.94) | 0.2877 (0.95) | 0.2336 (0.94) | 0.3751 (0.95) |
| 3      | ACI (observed)    | 0.3759 (0.95) | 0.6157 (0.95) | 0.5606 (0.95) | 0.2974 (0.95) | 0.2377 (0.95) | 0.3564 (0.95) |
| 3      | ACI (expected)    | 0.4066 (0.95) | 0.6480 (0.95) | 0.5056 (0.94) | 0.2746 (0.96) | 0.2583 (0.96) | 0.3842 (0.95) |
| 3      | HPD (non-inf.)    | 0.3715 (0.94) | 0.6139 (0.96) | 0.5119 (0.93) | 0.2728 (0.94) | 0.2365 (0.94) | 0.3581 (0.94) |
| 3      | HPD (informative) | 0.3356 (0.95) | 0.5303 (0.96) | 0.4270 (0.95) | 0.2605 (0.95) | 0.2104 (0.95) | 0.3318 (0.95) |
| 4      | ACI (observed)    | 0.3732 (0.95) | 0.6069 (0.94) | 0.5060 (0.94) | 0.2762 (0.94) | 0.2377 (0.95) | 0.3528 (0.94) |
| 4      | ACI (expected)    | 0.3731 (0.95) | 0.6118 (0.95) | 0.5042 (0.95) | 0.2742 (0.94) | 0.2399 (0.95) | 0.3512 (0.94) |
| 4      | HPD (non-inf.)    | 0.3712 (0.94) | 0.6175 (0.94) | 0.5339 (0.94) | 0.2677 (0.95) | 0.2401 (0.94) | 0.3547 (0.94) |
| 4      | HPD (informative) | 0.3329 (0.96) | 0.5289 (0.96) | 0.4229 (0.95) | 0.2549 (0.95) | 0.2123 (0.94) | 0.3260 (0.95) |
| 5      | ACI (observed)    | 0.3751 (0.95) | 0.6021 (0.95) | 0.5027 (0.94) | 0.2732 (0.94) | 0.2376 (0.95) | 0.3593 (0.95) |
| 5      | ACI (expected)    | 0.3752 (0.95) | 0.6081 (0.95) | 0.5101 (0.95) | 0.2712 (0.95) | 0.2341 (0.95) | 0.3539 (0.94) |
| 5      | HPD (non-inf.)    | 0.3740 (0.94) | 0.6162 (0.94) | 0.5080 (0.94) | 0.2706 (0.94) | 0.2355 (0.94) | 0.3628 (0.94) |
| 5      | HPD (informative) | 0.3343 (0.96) | 0.5297 (0.96) | 0.4195 (0.96) | 0.2578 (0.94) | 0.2092 (0.93) | 0.3316 (0.95) |
In each cell of Table 5, the first and second rows give the average interval lengths and coverage rates of σ 1 and σ 2 for the asymptotic confidence intervals constructed from the observed information matrix and the expected Fisher information matrix, respectively. The third and fourth rows give the average credible interval lengths and coverage rates of σ 1 and σ 2 based on non-informative and informative priors, respectively.
Table 6. Progressively type II censored samples.
Scheme 1 (n = 33, m = 20)
R = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 13]
Data (failure time, cause): (11,2) (35,2) (49,2) (170,2) (329,2) (381,2) (708,2) (958,2) (1062,2) (1167,1)
(1594,2) (1925,1) (1990,1) (2223,1) (2327,2) (2400,1) (2451,2) (2471,1) (2551,1)
(2568,1)
Scheme 2 (n = 33, m = 24)
R = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9]
Data (failure time, cause): (11,2) (35,2) (49,2) (170,2) (329,2) (381,2) (708,2) (958,2) (1062,2) (1167,1)
(1594,2) (1925,1) (1990,1) (2223,1) (2327,2) (2400,1) (2451,2) (2471,1) (2551,1)
(2568,1) (2694,1) (2702,2) (2761,2) (2831,2)
Scheme 3 (n = 33, m = 27)
R = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6]
Data (failure time, cause): (11,2) (35,2) (49,2) (170,2) (329,2) (381,2) (708,2) (958,2) (1062,2) (1167,1)
(1594,2) (1925,1) (1990,1) (2223,1) (2327,2) (2400,1) (2451,2) (2471,1) (2551,1)
(2568,1) (2694,1) (2702,2) (2761,2) (2831,2) (3034,1) (3059,2) (3112,1)
Scheme 4 (n = 33, m = 27)
R = [6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Data (failure time, cause): (708,2) (958,2) (1062,2) (1167,1) (1594,2) (1925,1) (1990,1) (2223,1) (2327,2)
(2400,1) (2451,2) (2471,1) (2551,1) (2568,1) (2694,1) (2702,2) (2761,2) (2831,2)
(3034,1) (3059,2) (3112,1) (3214,1) (3478,1) (3504,1) (4329,1) (6976,1) (7846,1)
Scheme 5 (n = 33, m = 27)
R = [3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3]
Data (failure time, cause): (170,2) (329,2) (381,2) (708,2) (958,2) (1062,2) (1167,1) (1594,2) (1925,1) (1990,1)
(2223,1) (2327,2) (2400,1) (2451,2) (2471,1) (2551,1) (2568,1) (2694,1) (2702,2)
(2761,2) (2831,2) (3034,1) (3059,2) (3112,1) (3214,1) (3478,1) (3504,1)
Table 7. Maximum likelihood estimates and Bayes estimates of σ 1 and σ 2 under the squared error loss and entropy loss functions.
| Scheme | σ̂₁      | σ̃₁,SEL  | σ̃₁,EL   | σ̂₂      | σ̃₂,SEL  | σ̃₂,EL   |
|--------|----------|----------|----------|----------|----------|----------|
| 1      | 3947.022 | 4217.053 | 3020.257 | 3222.73  | 3385.922 | 2453.236 |
| 2      | 3571.484 | 3847.889 | 3004.723 | 2766.46  | 2928.25  | 2314.567 |
| 3      | 2798.409 | 2898.552 | 2830.38  | 2320.318 | 2376.531 | 2338.514 |
| 4      | 2872.625 | 2937.995 | 2893.822 | 3745.445 | 3893.618 | 3792.537 |
| 5      | 2575.975 | 2647.641 | 2599.073 | 2673.215 | 2753.547 | 2699.037 |
Table 8. Interval lengths of asymptotic confidence intervals and HPD credible intervals with non-informative priors.
| Scheme | Method 1 (σ₁) | Method 1 (σ₂) | Method 2 (σ₁) | Method 2 (σ₂) | Method 3 (σ₁) | Method 3 (σ₂) |
|--------|---------------|---------------|---------------|---------------|---------------|---------------|
| 1      | 2735.096      | 1823.398      | 2735.422      | 1823.615      | 2879.202      | 1760.525      |
| 2      | 2333.327      | 1399.996      | 2333.233      | 1399.940      | 2659.170      | 1376.380      |
| 3      | 1653.724      | 1136.935      | 1653.661      | 1136.892      | 1641.680      | 1006.269      |
| 4      | 1365.534      | 2321.408      | 1365.534      | 2321.408      | 1568.784      | 2169.245      |
| 5      | 1349.353      | 1453.150      | 1349.352      | 1453.148      | 1258.528      | 1435.572      |
In Table 8, the first two columns represent the average interval lengths of σ 1 and σ 2 based on the asymptotic confidence interval constructed by the observed information matrix. The third and fourth columns represent the average interval lengths of σ 1 and σ 2 based on the asymptotic confidence interval constructed by the expected Fisher information matrix. The fifth and sixth columns represent the average credible interval lengths of σ 1 and σ 2 based on non-informative priors.

MDPI and ACS Style

Liao, H.; Gui, W. Statistical Inference of the Rayleigh Distribution Based on Progressively Type II Censored Competing Risks Data. Symmetry 2019, 11, 898. https://doi.org/10.3390/sym11070898
