Abstract
In this paper, we first consider the maximum likelihood estimators of the two unknown parameters, the reliability function and the hazard function of the generalized Pareto distribution based on progressively Type II censored samples. Next, we discuss asymptotic confidence intervals for the two unknown parameters, the reliability and hazard functions using the delta method. Then, based on the bootstrap algorithm, we obtain two further pairs of approximate confidence intervals. Furthermore, by applying Markov Chain Monte Carlo techniques, we derive the Bayesian estimates of the two unknown parameters, the reliability and hazard functions under various balanced loss functions, together with the corresponding confidence intervals. A simulation study was conducted to compare the performances of the proposed estimators, and a real dataset was analyzed to illustrate the proposed methods.
1. Introduction
In recent years, with the continuous development of statistics, research on the generalized Pareto distribution (GPD) has gradually deepened. The generalized Pareto distribution is an important distribution in statistics, widely used in fields such as finance and engineering. Zhang [1] proposed the likelihood moment estimation method for the parameters of the generalized Pareto distribution. By combining maximum likelihood estimation and goodness of fit, Hüsler et al. [2] proposed a new estimator for the generalized Pareto distribution. Rezaei et al. [3] derived the maximum likelihood estimators, Bayes estimators and some confidence intervals for stress-strength reliability based on progressive Type II censoring schemes.
The cumulative distribution function (cdf) and the probability density function (pdf) of the GPD are, respectively, given by

F(x) = 1 − (1 + λx)^(−α) and f(x) = αλ(1 + λx)^(−(α+1)),

where x > 0, α > 0 and λ > 0. Here, α and λ are the shape and scale parameters, respectively. Then, the reliability function and hazard function are, respectively, derived by

R(t) = (1 + λt)^(−α) and h(t) = f(t)/R(t) = αλ/(1 + λt), t > 0.
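The four GPD quantities above can be sketched in a few lines of code. The paper's own code (Appendix A) is in R; the following is an illustrative Python version, with α and λ written as `alpha` and `lam`:

```python
# GPD quantities used throughout the paper:
# F(x) = 1 - (1 + lam*x)^(-alpha), x > 0, with shape alpha and scale lam.
def gpd_pdf(x, alpha, lam):
    return alpha * lam * (1.0 + lam * x) ** (-(alpha + 1.0))

def gpd_cdf(x, alpha, lam):
    return 1.0 - (1.0 + lam * x) ** (-alpha)

def reliability(t, alpha, lam):
    # R(t) = 1 - F(t)
    return (1.0 + lam * t) ** (-alpha)

def hazard(t, alpha, lam):
    # h(t) = f(t)/R(t), which simplifies to alpha*lam/(1 + lam*t)
    return alpha * lam / (1.0 + lam * t)
```

Note that the hazard function is decreasing in t, a characteristic feature of this heavy-tailed family.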
In many cases, censored samples are more popular than complete samples because they require less test time and save associated costs. The progressive Type II censoring scheme is a common sampling method. It has the flexibility to remove units at points other than the end point of the experiment. The following is a brief introduction to the progressive Type II censoring scheme. Assume that there are n independent and identically distributed units in the experiment, and that m failures will be observed before the end of the experiment. At the time of the first failure x1, R1 units are randomly removed from the n − 1 surviving units. When the second failure x2 occurs, R2 units are randomly removed from the n − R1 − 2 surviving units. Finally, when the mth failure occurs, the remaining Rm = n − m − R1 − ⋯ − R(m−1) units are all removed and the experiment ends. Then, we get the censoring scheme (R1, R2, …, Rm). There are two special cases worth noting. When R1 = ⋯ = R(m−1) = 0, that is, Rm = n − m, the censoring scheme reduces to the conventional Type II censoring scheme. In addition, when R1 = ⋯ = Rm = 0, that is, m = n, the censoring scheme corresponds to the complete sample. Some authors have studied different inference problems based on the progressive Type II censoring scheme. Wang et al. [4] studied inverse estimation for both parameters in a certain family of two-parameter lifetime distributions and some confidence intervals based on the progressive Type II censoring scheme. Kim et al. [5] proposed Bayes and maximum likelihood estimators for the parameters and reliability function of the exponentiated Weibull lifetime distribution on the basis of progressively Type II censored samples. Ahmed [6] obtained the maximum likelihood and Bayes estimators of both parameters, the reliability and hazard functions of a two-parameter bathtub-shaped lifetime model under the progressive Type II censoring scheme.
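A progressively Type II censored sample can be simulated with the uniform-spacings algorithm of Balakrishnan and Sandhu, which the `censoring` function in Appendix A implements in R. A Python sketch of the same algorithm (the function name is ours):

```python
import math
import random

def progressive_gpd_sample(alpha, lam, R, rng=None):
    # Progressively Type II censored sample from the GPD via the
    # Balakrishnan-Sandhu algorithm; mirrors the `censoring` function
    # in Appendix A. R is the censoring scheme (R_1, ..., R_m).
    rng = rng or random.Random()
    m = len(R)
    W = [rng.random() for _ in range(m)]
    # V_i = W_i^(1/(i + R_m + ... + R_{m-i+1}))
    V = [W[i] ** (1.0 / ((i + 1) + sum(R[m - (i + 1):]))) for i in range(m)]
    # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1} are progressively
    # censored uniform order statistics
    U = [1.0 - math.prod(V[m - (i + 1):]) for i in range(m)]
    # invert F(x) = 1 - (1 + lam*x)^(-alpha)
    return [((1.0 - u) ** (-1.0 / alpha) - 1.0) / lam for u in U]
```

The returned values are automatically ordered, since the U_i are increasing and the quantile function is monotone.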
For Bayesian inference, we need to calculate posterior quantities of interest, which is essentially the calculation of high-dimensional integrals of a function. In practice, it is difficult to compute those integrals because they are complex and rarely available in closed form. The Markov Chain Monte Carlo (MCMC) technique solves this problem by approximating the high-dimensional integrals through simulation. It provides a convenient and efficient way to draw samples from the target posterior distribution. In recent years, it has become common to estimate unknown parameters using MCMC methods. Farahani and Khorram [7] derived Bayes estimators of the parameters of the weighted exponential distribution by using MCMC methods. Jaheen and Al Harbi [8] obtained Bayes estimators of the parameters and reliability function of the exponentiated Weibull distribution by using MCMC methods. Pandey and Bandyopadhyay [9] obtained Bayes estimators of the parameters of the inverse Gaussian distribution using MCMC methods. Azimi et al. [10] discussed the maximum likelihood and Bayesian estimation of the parameters and reliability functions of the GPD based on a progressively Type II censored sample with random (binomial) removals. Mahmoud et al. [11] studied the maximum likelihood, Bayesian and bootstrap estimation of the three unknown parameters of the new Weibull–Pareto distribution based on progressively Type II censored samples. Based on a certain class of exponential-type distributions including the Pareto distribution, Abdel-Aty et al. [12] proposed Bayesian prediction intervals of generalized order statistics using a multiply Type II censoring scheme. El-Sagheer [13] dealt with Bayesian point prediction for the GPD based on general progressively Type II censored samples.
In this article, we derive the maximum likelihood and Bayesian estimators of the two unknown parameters, the reliability and hazard functions of the GPD under progressively Type II censored samples. We use several methods, including the maximum likelihood method, the delta method, the logit transformation, the arc sine transformation and the bootstrap algorithm, to obtain different confidence intervals. More importantly, we discuss Bayesian estimates based on different balanced loss functions using MCMC methods. In addition, we analyze the influence of the relevant parameters on the estimation results in a simulation study, and a real data example is analyzed to illustrate the proposed methods.
The rest of this article is organized as follows. Section 2 discusses the maximum likelihood estimators of the two unknown parameters, the reliability and hazard functions. Some asymptotic confidence intervals are obtained in Section 3, and Section 4 derives confidence intervals using the bootstrap algorithm. We discuss Bayesian estimation under MCMC methods and four different balanced loss functions in Section 5, and Section 6 shows some simulation results. Section 7 presents a numerical example to illustrate the methods proposed above, and Section 8 concludes the paper.
2. Maximum Likelihood Estimation
In this section, we discuss the maximum likelihood estimators (MLEs) of the two unknown parameters α and λ, and of the reliability and hazard functions, for the generalized Pareto distribution based on a progressively Type II censored sample. Let x1 < x2 < ⋯ < xm be such a sample from the GPD with censoring scheme (R1, R2, …, Rm). Then, the likelihood function can be written as

L(α, λ) = c ∏_{i=1}^{m} f(xi)[1 − F(xi)]^{Ri} = c α^m λ^m ∏_{i=1}^{m} (1 + λxi)^{−(α(Ri+1)+1)},

where c = n(n − R1 − 1)⋯(n − R1 − ⋯ − R(m−1) − m + 1). Then, up to an additive constant, the log-likelihood function is

ℓ(α, λ) = m ln α + m ln λ − ∑_{i=1}^{m} (α(Ri + 1) + 1) ln(1 + λxi).
Thus, the corresponding likelihood equations are obtained by

∂ℓ/∂α = m/α − ∑_{i=1}^{m} (Ri + 1) ln(1 + λxi) = 0

and

∂ℓ/∂λ = m/λ − ∑_{i=1}^{m} (α(Ri + 1) + 1) xi/(1 + λxi) = 0,

respectively. According to Equation (7), we can get the expression of the MLE of α in terms of λ by

α̂(λ) = m / ∑_{i=1}^{m} (Ri + 1) ln(1 + λxi).
Substituting this expression into the second likelihood equation gives a one-dimensional equation in λ. Since it is difficult to find an explicit solution to Equation (10), we can use R to find the MLE λ̂ numerically. Then, we put λ̂ into Equation (9) to get α̂.
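Numerically, one can equivalently minimize the negative log-likelihood over (α, λ) jointly, as the appendix R code does with `optim` and L-BFGS-B. A Python sketch using SciPy (the simulated data and starting values are illustrative only):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, x, R):
    # Negative of ell(alpha, lam) = m*log(alpha) + m*log(lam)
    #   - sum((alpha*(R_i+1)+1) * log(1 + lam*x_i))
    alpha, lam = theta
    m = len(x)
    return -(m * np.log(alpha) + m * np.log(lam)
             - np.sum((alpha * (R + 1) + 1) * np.log(1 + lam * x)))

# simulated complete sample (all R_i = 0) from the GPD, for illustration:
# inverse-cdf draws with alpha = 2, lam = 1
rng = np.random.default_rng(0)
u = rng.uniform(size=50)
x = np.sort(((1 - u) ** (-1 / 2.0) - 1) / 1.0)
R = np.zeros(50)
res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(x, R),
               method="L-BFGS-B", bounds=[(1e-4, None)] * 2)
alpha_hat, lam_hat = res.x
```

The lower bounds keep the optimizer inside the admissible region α, λ > 0, matching the `lower=c(0.1,0.1)` constraint used in Appendix A.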
Utilizing the invariance property of MLEs, we can get the MLEs of the reliability and hazard functions from Equations (3) and (4) by

R̂(t) = (1 + λ̂t)^(−α̂) and ĥ(t) = α̂λ̂/(1 + λ̂t).

Next, we consider approximate confidence intervals for the two parameters, the reliability and hazard functions.
3. Asymptotic Confidence Interval
The confidence intervals of α and λ can be obtained from the asymptotic normal distribution of the maximum likelihood estimators (α̂, λ̂), and their variances can be obtained from the inverse of the Fisher information matrix.
The Fisher information matrix is given by the expectation of the negative second derivatives of the log-likelihood function. Then, we have

I(α, λ) = −E( ∂²ℓ/∂α²  ∂²ℓ/∂α∂λ ; ∂²ℓ/∂λ∂α  ∂²ℓ/∂λ² ),

where

∂²ℓ/∂α² = −m/α², ∂²ℓ/∂α∂λ = ∂²ℓ/∂λ∂α = −∑_{i=1}^{m} (Ri + 1)xi/(1 + λxi),

∂²ℓ/∂λ² = −m/λ² + ∑_{i=1}^{m} (α(Ri + 1) + 1)xi²/(1 + λxi)².

The expectations of the above expressions are difficult to obtain. Therefore, we consider the observed Fisher information matrix, which is evaluated at the point (α̂, λ̂). The variance-covariance matrix of (α̂, λ̂), which is the inverse of the observed Fisher information matrix, is given by

Σ̂ = I(α̂, λ̂)⁻¹ = ( Var̂(α̂)  Côv(α̂, λ̂) ; Côv(λ̂, α̂)  Var̂(λ̂) ).

In general, (α̂, λ̂) approximately follows a bivariate normal distribution with mean (α, λ) and variance-covariance matrix Σ̂, that is, (α̂, λ̂) ∼ N((α, λ), Σ̂) approximately.
Therefore, the 100(1 − γ)% confidence intervals of α and λ are

α̂ ± z_{γ/2}√Var̂(α̂) and λ̂ ± z_{γ/2}√Var̂(λ̂),

respectively. The coverage probabilities can be expressed as

P(|α̂ − α|/√Var̂(α̂) ≤ z_{γ/2}) = 1 − γ and P(|λ̂ − λ|/√Var̂(λ̂) ≤ z_{γ/2}) = 1 − γ,

respectively, where z_{γ/2} is the upper γ/2 percentile of the standard normal distribution.
Moreover, to construct the asymptotic confidence intervals for the reliability and hazard functions, we need to apply the delta method to estimate their variances. Let

G1 = (∂R(t)/∂α, ∂R(t)/∂λ)ᵀ and G2 = (∂h(t)/∂α, ∂h(t)/∂λ)ᵀ,

where

∂R(t)/∂α = −(1 + λt)^(−α) ln(1 + λt), ∂R(t)/∂λ = −αt(1 + λt)^(−α−1),

∂h(t)/∂α = λ/(1 + λt), ∂h(t)/∂λ = α/(1 + λt)².

Then, the asymptotic variance estimators of R̂(t) and ĥ(t) are defined as

Var̂(R̂(t)) = G1ᵀ Σ̂ G1 and Var̂(ĥ(t)) = G2ᵀ Σ̂ G2,

respectively, with all quantities evaluated at (α̂, λ̂). Therefore, we have the following approximate relationships:

(R̂(t) − R(t))/√Var̂(R̂(t)) ∼ N(0, 1) and (ĥ(t) − h(t))/√Var̂(ĥ(t)) ∼ N(0, 1).

Furthermore, we can derive the asymptotic 100(1 − γ)% confidence intervals of R(t) and h(t) by

R̂(t) ± z_{γ/2}√Var̂(R̂(t)) and ĥ(t) ± z_{γ/2}√Var̂(ĥ(t)).

The coverage probabilities can be expressed analogously to those of α and λ.
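The delta-method construction can be sketched generically: propagate the variance-covariance matrix through the gradient of the function of interest. The covariance matrix and parameter values below are hypothetical, chosen only to make the example run:

```python
import numpy as np
from scipy.stats import norm

def delta_method_ci(theta_hat, cov, g, grad_g, level=0.95):
    # Delta-method CI for g(theta): Var(g) is approximated by
    # grad' * Sigma * grad, then a normal-theory interval is formed.
    est = g(theta_hat)
    grad = grad_g(theta_hat)
    se = float(np.sqrt(grad @ cov @ grad))
    z = norm.ppf((1 + level) / 2)
    return est - z * se, est + z * se

# reliability R(t) = (1 + lam*t)^(-alpha) at t = 1, illustrative values
t = 1.0
g = lambda th: (1 + th[1] * t) ** (-th[0])
grad_g = lambda th: np.array([
    -(1 + th[1] * t) ** (-th[0]) * np.log(1 + th[1] * t),  # dR/dalpha
    -th[0] * t * (1 + th[1] * t) ** (-th[0] - 1),          # dR/dlambda
])
cov = np.array([[0.04, 0.01], [0.01, 0.09]])  # hypothetical inverse observed information
lo, hi = delta_method_ci(np.array([0.5, 1.2]), cov, g, grad_g)
```

The same helper applies to the hazard function by swapping in its gradient.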
In addition, Krishnamoorthy and Lin [14] discussed two transformations as follows:
(Logit transformation) Let g(R(t)) = ln[R(t)/(1 − R(t))]. Then, by the delta theorem, Var̂(g(R̂(t))) ≈ Var̂(R̂(t))/[R̂(t)(1 − R̂(t))]², and we can obtain the confidence interval of g(R(t)) by

g(R̂(t)) ± z_{γ/2} √Var̂(R̂(t)) / [R̂(t)(1 − R̂(t))].

Similarly, we can get the confidence interval of g(h(t)) (applicable when ĥ(t) lies in (0, 1)) by

g(ĥ(t)) ± z_{γ/2} √Var̂(ĥ(t)) / [ĥ(t)(1 − ĥ(t))].

If the lower and upper bounds of the above confidence intervals are represented by L and U, respectively, the confidence intervals of R(t) and h(t) can be expressed as

(e^L/(1 + e^L), e^U/(1 + e^U)).
(Arc sine transformation) Let g(R(t)) = arcsin(√R(t)). Then, by the delta theorem, Var̂(g(R̂(t))) ≈ Var̂(R̂(t))/[4R̂(t)(1 − R̂(t))], and we can obtain the confidence interval of g(R(t)) by

arcsin(√R̂(t)) ± z_{γ/2} √{Var̂(R̂(t))/[4R̂(t)(1 − R̂(t))]}.

Similarly, we can get the confidence interval of g(h(t)) by

arcsin(√ĥ(t)) ± z_{γ/2} √{Var̂(ĥ(t))/[4ĥ(t)(1 − ĥ(t))]}.

Then, the confidence intervals of R(t) and h(t) can be obtained by

(sin²(L), sin²(U)),

where L and U are given in (22).
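Both transformed intervals can be written as small helpers: build the interval on the transformed scale, then back-transform so the endpoints stay inside (0, 1). A Python sketch (the point estimate and standard error below are hypothetical inputs):

```python
import numpy as np
from scipy.stats import norm

def logit_ci(est, se, level=0.95):
    # Interval on the logit scale; the delta method gives
    # se(logit(est)) ~ se / (est * (1 - est)). Back-transform with expit.
    z = norm.ppf((1 + level) / 2)
    center = np.log(est / (1 - est))
    half = z * se / (est * (1 - est))
    expit = lambda u: np.exp(u) / (1 + np.exp(u))
    return expit(center - half), expit(center + half)

def arcsine_ci(est, se, level=0.95):
    # Interval on the arcsin(sqrt(.)) scale; the delta method gives
    # variance se^2 / (4 * est * (1 - est)). Back-transform with sin^2.
    z = norm.ppf((1 + level) / 2)
    center = np.arcsin(np.sqrt(est))
    half = z * se / (2 * np.sqrt(est * (1 - est)))
    return np.sin(center - half) ** 2, np.sin(center + half) ** 2
```

Both constructions guarantee endpoints in (0, 1), unlike the plain delta-method interval, which can spill outside the unit interval for estimates near 0 or 1.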
4. Bootstrap Confidence Interval
When the sample size is small, it is well known that confidence intervals based on the asymptotic results do not perform well. Thus, we discuss two confidence intervals based on the bootstrap algorithm: the percentile bootstrap algorithm, which we call the bootstrap-p algorithm, and the bootstrap-t algorithm.
4.1. Bootstrap-p Algorithm
The bootstrap-p algorithm is the simpler of the two algorithms. The process of obtaining the approximate confidence intervals is as follows:
- (1) Based on the original progressively Type II censored sample, compute the MLEs α̂ and λ̂ of α and λ.
- (2) Under the pre-specified censoring scheme (R1, R2, …, Rm), generate a bootstrap sample from the generalized Pareto distribution with parameters α̂ and λ̂.
- (3) Similar to Step (1), under the bootstrap sample, calculate the MLEs of α and λ, expressed as α̂* and λ̂*.
- (4) Compute the bootstrap estimates of the reliability and hazard functions, R̂*(t) and ĥ*(t), by substituting α̂* and λ̂* into Equations (3) and (4).
- (5) Repeat Steps (2)–(4) M times to get (α̂*_j, λ̂*_j, R̂*_j(t), ĥ*_j(t)) based on M different bootstrap samples, where j = 1, 2, …, M.
- (6) Arrange each set of bootstrap estimates in ascending order to get (α̂*_(1), …, α̂*_(M)), (λ̂*_(1), …, λ̂*_(M)), (R̂*_(1)(t), …, R̂*_(M)(t)) and (ĥ*_(1)(t), …, ĥ*_(M)(t)).

After the above steps, we can get a 100(1 − γ)% confidence interval for each of α, λ, R(t) and h(t) from the ordered bootstrap estimates by taking the Mγ/2-th and M(1 − γ/2)-th order statistics.
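Given the M bootstrap replicates produced by Steps (2)–(6), the percentile interval is simply a pair of empirical quantiles. A minimal sketch (the replicates in the test are synthetic; in practice they come from refitting the model to each bootstrap sample):

```python
import numpy as np

def bootstrap_p_ci(boot_estimates, level=0.95):
    # Percentile (bootstrap-p) interval: order the M bootstrap
    # replicates and read off the gamma/2 and 1 - gamma/2 quantiles.
    s = np.sort(np.asarray(boot_estimates, float))
    return (np.quantile(s, (1 - level) / 2),
            np.quantile(s, (1 + level) / 2))
```

The same function is applied separately to the replicates of α, λ, R(t) and h(t).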
4.2. Bootstrap-t Algorithm
Compared to the bootstrap-p algorithm, the bootstrap-t algorithm is slightly more complicated, but more accurate when the sample size is small. The following algorithm can be used to obtain the bootstrap-t confidence intervals.
- (1) Based on the original progressively Type II censored sample, compute the MLEs α̂, λ̂, R̂(t) and ĥ(t).
- (2) Generate a bootstrap sample from the generalized Pareto distribution with parameters α̂ and λ̂. Under the bootstrap sample, calculate the MLEs of α and λ, expressed as α̂* and λ̂*. Then, get the MLEs of the reliability and hazard functions by R̂*(t) and ĥ*(t), respectively.
- (3) Define the following studentized statistics: T*_α = (α̂* − α̂)/√Var(α̂*), T*_λ = (λ̂* − λ̂)/√Var(λ̂*), T*_R = (R̂*(t) − R̂(t))/√Var(R̂*(t)) and T*_h = (ĥ*(t) − ĥ(t))/√Var(ĥ*(t)), where Var(α̂*), Var(λ̂*), Var(R̂*(t)) and Var(ĥ*(t)) are obtained using the Fisher information matrix and the delta method.
- (4) Repeat Steps (2)–(3) M times to get T*_{α,j}, T*_{λ,j}, T*_{R,j} and T*_{h,j}, j = 1, 2, …, M.
- (5) Arrange T*_{α,j}, T*_{λ,j}, T*_{R,j} and T*_{h,j} in ascending order to obtain the ordered statistics T*_{α,(1)}, …, T*_{α,(M)}, and similarly for λ, R(t) and h(t).

After the above steps, we can construct a 100(1 − γ)% confidence interval for each of α, λ, R(t) and h(t) from the corresponding point estimate, its standard error and the empirical quantiles of the ordered studentized statistics.
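One standard way to assemble the studentized interval from the ordered statistics of Step (5) is the following (a sketch, not necessarily the paper's exact formula; with T_j = (θ*_j − θ̂)/se*_j, the upper T-quantile enters the lower bound and vice versa):

```python
import numpy as np

def bootstrap_t_ci(theta_hat, se_hat, T, level=0.95):
    # Studentized (bootstrap-t) interval from the replicated statistics
    # T_j = (theta*_j - theta_hat) / se*_j of Step (3).
    a = 1 - level
    t_lo, t_hi = np.quantile(np.asarray(T, float), [a / 2, 1 - a / 2])
    return theta_hat - se_hat * t_hi, theta_hat - se_hat * t_lo
```

Because the interval uses the bootstrap distribution of a pivot rather than of the estimator itself, it adapts to skewness in the sampling distribution, which is the source of its small-sample advantage over bootstrap-p.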
5. Bayesian Estimation
In this section, we consider the Bayesian estimators of α, λ, the reliability and hazard functions, and the corresponding confidence intervals, using the Markov Chain Monte Carlo (MCMC) technique. Suppose α and λ independently follow gamma prior distributions. Then, the prior density functions of α and λ can be written as

π1(α) ∝ α^(a1−1) e^(−b1α), α > 0, and π2(λ) ∝ λ^(a2−1) e^(−b2λ), λ > 0,

where a1, b1, a2, b2 > 0 are hyperparameters. Therefore, the joint prior density function of α and λ is obtained by

π(α, λ) ∝ α^(a1−1) λ^(a2−1) e^(−b1α − b2λ).

By using the likelihood function given by Equation (5) and the joint prior density function given by Equation (33), the joint posterior density function of α and λ is given by

π(α, λ | data) ∝ α^(m+a1−1) λ^(m+a2−1) e^(−b1α − b2λ) ∏_{i=1}^{m} (1 + λxi)^(−(α(Ri+1)+1)).

Obviously, Equation (34) is complicated, and it is difficult to obtain the marginal posterior distribution of each parameter in closed form. Therefore, we consider the Gibbs sampling method, which is the simplest, most intuitive, and most widely used MCMC method. The Gibbs sampling method simulates from the posterior conditional distributions. With Equation (34), we can get the posterior conditional density of α by

π(α | λ, data) ∝ α^(m+a1−1) exp{−α[b1 + ∑_{i=1}^{m} (Ri + 1) ln(1 + λxi)]},

which is a gamma density with shape m + a1 and rate b1 + ∑(Ri + 1) ln(1 + λxi). Similarly, the posterior conditional density of λ is given by

π(λ | α, data) ∝ λ^(m+a2−1) e^(−b2λ) ∏_{i=1}^{m} (1 + λxi)^(−(α(Ri+1)+1)).
From Equation (35), it is easy to see that samples of α can be generated using any gamma generating routine. By observing Equation (36), we know that the conditional posterior distribution of λ is not a common distribution, so we cannot sample directly from it in the usual way. Thus, we apply the Metropolis–Hastings algorithm (M-H algorithm) to obtain random draws from this distribution. The M-H algorithm is a useful method for generating random samples from the posterior distribution using a proposal density. Here, we generate candidate values from the normal proposal distribution N(λ^(j−1), Var(λ̂)), where Var(λ̂) represents the estimated variance of λ̂. See [15] for more details.
We set up the M-H algorithm in Gibbs sampling as follows:
| Algorithm 1 M-H algorithm in Gibbs sampling |
| 1: Set the initial value λ^(0) and set j = 1. 2: repeat 3: Let λ = λ^(j−1). 4: Generate α^(j) from the gamma distribution Gamma(m + a1, b1 + ∑(Ri + 1) ln(1 + λxi)). 5: Generate a new candidate parameter value λ* from N(λ, Var(λ̂)). 6: Let r = π(λ* | α^(j), data)/π(λ | α^(j), data). 7: Compute p = min(1, r). 8: Accept λ^(j) = λ* with probability p and accept λ^(j) = λ with probability 1 − p. 9: Calculate R^(j)(t) and h^(j)(t). 10: Let j = j + 1. 11: until j = N + 1 |
To ensure convergence and eliminate the impact of the choice of initial values, we discard the first M pairs of simulated values as burn-in. Therefore, for a large enough N, we obtain approximate posterior samples α^(j) and λ^(j), j = M + 1, …, N, for Bayesian estimation.
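The Gibbs/M-H scheme above can be sketched as follows. The hyperparameters, proposal standard deviation, chain length and test data are illustrative choices of ours, not the paper's settings:

```python
import numpy as np

def mh_within_gibbs(x, R, a1=1.0, b1=1.0, a2=1.0, b2=1.0,
                    n_iter=2000, burn=500, prop_sd=0.2, seed=0):
    # M-H within Gibbs for the GPD posterior:
    #   alpha | lam, data ~ Gamma(m + a1, b1 + sum((R_i+1) log(1 + lam x_i)))
    #   lam   | alpha, data sampled by a random-walk M-H step.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    R = np.asarray(R, float)
    m = len(x)

    def log_cond_lam(lam, alpha):
        # log conditional posterior density of lam, up to a constant
        if lam <= 0:
            return -np.inf
        return ((m + a2 - 1) * np.log(lam) - b2 * lam
                - np.sum((alpha * (R + 1) + 1) * np.log(1 + lam * x)))

    alpha, lam = 1.0, 1.0
    draws = []
    for _ in range(n_iter):
        # Gibbs step for alpha (np.random uses shape/scale, so invert the rate)
        rate = b1 + np.sum((R + 1) * np.log(1 + lam * x))
        alpha = rng.gamma(m + a1, 1.0 / rate)
        # random-walk M-H step for lam
        cand = rng.normal(lam, prop_sd)
        if np.log(rng.uniform()) < log_cond_lam(cand, alpha) - log_cond_lam(lam, alpha):
            lam = cand
        draws.append((alpha, lam))
    return np.array(draws[burn:])
```

Posterior draws of R(t) and h(t) follow by plugging each retained pair (α^(j), λ^(j)) into Equations (3) and (4).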
Bayesian Estimation under Balanced Loss Functions
In Bayesian estimation, a suitable loss function has to be chosen to achieve the best estimate. However, there is no specific procedure for determining which loss function should be used. In many cases, authors use a symmetric loss function for convenience; when the actual losses are asymmetric, however, it is not appropriate to choose a symmetric loss function indiscriminately, so asymmetric loss functions must also be considered. Furthermore, in some cases, even when one loss function is the real loss, the Bayesian estimate under another loss function performs better. Therefore, we consider different loss functions to get a better understanding in Bayesian analysis. In this paper, we discuss several symmetric and asymmetric loss functions, such as the K-Loss function, the modified squared error loss function and the precautionary loss function. In this section, we consider the balanced loss function (see [16]), which is a more general loss function and can be written as

L_{ρ,ω,δ0}(θ, δ) = ω ρ(δ0, δ) + (1 − ω) ρ(θ, δ),

where ρ(θ, δ) represents an arbitrary loss function, and δ0 represents a prior target estimator of θ, which can be obtained by maximum likelihood, least squares or unbiasedness. The weight ω of the target estimator ranges from 0 to 1.
The component ρ can be chosen from a variety of loss functions. When ρ is the symmetric K-Loss function ρ(θ, δ) = (√(δ/θ) − √(θ/δ))², we obtain the balanced K-Loss function (BKLF), which can be written as

L_ω(θ, δ) = ω(√(δ/δ0) − √(δ0/δ))² + (1 − ω)(√(δ/θ) − √(θ/δ))².

We obtain the Bayesian estimator of θ under the BKLF by

δ_BKLF = √{[ωδ0 + (1 − ω)E(θ | data)] / [ω/δ0 + (1 − ω)E(θ⁻¹ | data)]}.

When ρ(θ, δ) = (δ − θ)²/θ, we obtain the balanced weighted squared error loss function (BWSELF) by

L_ω(θ, δ) = ω(δ − δ0)²/δ0 + (1 − ω)(δ − θ)²/θ.

The Bayesian estimator of θ under the BWSELF is derived by

δ_BWSELF = [ω/δ0 + (1 − ω)E(θ⁻¹ | data)]⁻¹.

When ρ(θ, δ) = (1 − δ/θ)², the balanced modified squared error loss function (BMSELF) has the following form:

L_ω(θ, δ) = ω(1 − δ/δ0)² + (1 − ω)(1 − δ/θ)²,

and the Bayesian estimator of θ under the BMSELF is given by

δ_BMSELF = [ω/δ0 + (1 − ω)E(θ⁻¹ | data)] / [ω/δ0² + (1 − ω)E(θ⁻² | data)].

When ρ(θ, δ) = (δ − θ)²/δ, the balanced precautionary loss function (BPLF) is represented by

L_ω(θ, δ) = ω(δ − δ0)²/δ + (1 − ω)(δ − θ)²/δ,

and the Bayesian estimator of θ under the BPLF is obtained by

δ_BPLF = √{ωδ0² + (1 − ω)E(θ² | data)}.
By observing Equation (39), it is easy to see that, when ω = 1, the Bayesian estimate under the BKLF is equivalent to the maximum likelihood estimate, and, when ω = 0, it is equivalent to the Bayesian estimate under the KLF (symmetric). According to Equations (41), (43) and (45), it is clear that the Bayesian estimates under the other balanced loss functions are likewise equivalent to the maximum likelihood estimates in the case of ω = 1, and equivalent to the Bayesian estimates under the corresponding asymmetric loss functions (i.e., WSELF, MSELF and PLF) in the case of ω = 0.
According to Equation (39), the approximate Bayesian estimates of , reliability and hazard functions based on the BKLF are given by
respectively. In addition, by using Equation (41), we can obtain the approximate Bayesian estimates of , reliability and hazard functions based on the BWSELF by
respectively. Similarly, by observing Equation (43), we can derive the approximate Bayesian estimates of , reliability and hazard functions under the BMSELF by
respectively. Finally, from Equations (45), the approximate Bayesian estimates of , reliability and hazard functions under the BPLF are obtained by
respectively.
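Given the posterior draws from the sampler and the MLE as target estimator δ0, the four balanced-loss estimates reduce to simple Monte Carlo averages. The closed forms below are the standard ones consistent with the ω = 1 and ω = 0 limits noted above (a hedged sketch of our own, since the paper's displayed equations are not reproduced here):

```python
import numpy as np

def balanced_loss_estimates(post, mle, omega=0.3):
    # Monte Carlo Bayes estimates from posterior draws `post` of a
    # positive parameter, under the four balanced loss functions;
    # `mle` plays the role of the target estimator delta_0.
    post = np.asarray(post, float)
    e1 = post.mean()                 # E(theta | data)
    em1 = np.mean(1.0 / post)        # E(theta^-1 | data)
    em2 = np.mean(post ** -2.0)      # E(theta^-2 | data)
    e2 = np.mean(post ** 2.0)        # E(theta^2 | data)
    w = omega
    return {
        "BKLF": np.sqrt((w * mle + (1 - w) * e1) / (w / mle + (1 - w) * em1)),
        "BWSELF": 1.0 / (w / mle + (1 - w) * em1),
        "BMSELF": (w / mle + (1 - w) * em1) / (w / mle ** 2 + (1 - w) * em2),
        "BPLF": np.sqrt(w * mle ** 2 + (1 - w) * e2),
    }
```

As a sanity check, a degenerate posterior concentrated at the MLE returns the MLE under all four losses, and ω = 1 recovers the MLE regardless of the posterior, matching the limits discussed above.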
Further, the retained samples α^(j) and λ^(j), j = M + 1, …, N, are arranged in ascending order, and the approximate confidence intervals of α and λ (and, analogously, of R(t) and h(t)) are given by the empirical γ/2 and 1 − γ/2 quantiles of the ordered samples, respectively.
6. Simulation Study
In this section, we discuss the performance of the estimates and confidence intervals considered in the previous sections using the different methods. We report the simulation results for the case α = 0.5 and λ = 1.2. For the point estimation methods, we compared the expected values (EVs) and mean square errors (MSEs) of the estimators of α, λ, the reliability and hazard functions. In addition, we generated progressively Type II censored samples from the generalized Pareto distribution (GPD) under different progressive Type II censoring schemes, which are listed in the tables. For each censoring scheme, we calculated the MLEs of α, λ, R(t) and h(t) by using Equations (9)–(12). According to Equations (46)–(61), we computed the Bayesian estimates under the various balanced loss functions. In the Bayesian estimation, for each given censoring scheme, we compared the EVs and MSEs of the parameters when ω takes 0, 0.3, 0.6 and 0.9, respectively, to obtain an optimal ω. We computed the EVs and MSEs of these estimators over 1000 replications and then took the means (Table 1, Table 2, Table 3, Table 4 and Table 5). For the interval estimation methods, Table 6 and Table 7 report the coverage probabilities (CPs) and average lengths (ALs) of the confidence intervals (CIs) using the delta method and Bayesian estimation, respectively, which were also based on 1000 simulations. See Appendix A for a selected R code.
Table 1.
MLEs of the two parameters, reliability and hazard functions for α = 0.5, λ = 1.2 and t = 1.
Table 2.
Bayesian estimates of the two parameters, reliability and hazard functions under BKLF (symmetric) for α = 0.5, λ = 1.2 and t = 1.
Table 3.
Bayesian estimates of the two parameters, reliability and hazard functions under BWSELF (asymmetric) for α = 0.5, λ = 1.2 and t = 1.
Table 4.
Bayesian estimates of the two parameters, reliability and hazard functions under BMSELF (asymmetric) for α = 0.5, λ = 1.2 and t = 1.
Table 5.
Bayesian estimates of the two parameters, reliability and hazard functions under BPLF (asymmetric) for α = 0.5, λ = 1.2 and t = 1.
Table 6.
Average length and coverage probability of the asymptotic confidence intervals using the delta method for α = 0.5, λ = 1.2 and t = 1.
Table 7.
Average length and coverage probability of the approximate confidence intervals in Bayesian estimation for α = 0.5, λ = 1.2 and t = 1.
By observing and comparing the simulation results, we draw the following conclusions:
- (1)
- For the maximum likelihood estimation, as the sample size n increases, the expected values get closer to the true values and the MSEs become smaller. For the censored samples, when n and m are fixed, the first censoring scheme in each group, i.e., (n − m, 0, …, 0), is more efficient than the other cases. In addition, when n is fixed, the MSEs of α̂ and λ̂ decrease with increasing m for the same type of censoring scheme. Furthermore, the simulation results based on the complete sample, that is, when m = n, are more efficient than those based on the censored samples. The MLEs of the reliability and hazard functions are efficient for each censoring scheme in terms of expected values and MSEs.
- (2)
- Compared to the maximum likelihood estimation, the Bayesian estimates are more efficient in terms of expected values and MSEs in most cases. Similar to the maximum likelihood estimation, as the sample size n increases, the expected values get closer to the true values and the MSEs become smaller. More importantly, the Bayesian estimates based on the BMSELF are better than those based on the other three balanced loss functions. From Table 2, Table 3, Table 4 and Table 5, we see that, as ω increases, that is, as the weight of the MLE increases, the MSEs of the estimates of α and λ also increase, and the Bayesian estimation performs well when ω takes 0 or 0.3. Therefore, among the balanced loss functions considered in this paper, it is more appropriate to choose the BMSELF and let ω take a value between 0 and 0.3. The Bayesian estimates of the reliability and hazard functions are always efficient for each censoring scheme in terms of expected values and MSEs.
- (3)
- The average lengths of the asymptotic confidence intervals using the delta method decrease as n increases. The approximate confidence intervals using the MCMC methods are more efficient in terms of average length. The coverage probabilities in all cases are close to the nominal level 0.95.
7. Real Data Analysis
We analyzed a dataset of annual rainfall (in inches) recorded by the Los Angeles Civic Center from 1982 to 2004.
Raqab et al. [17] showed that the Pareto model, which corresponds to the generalized Pareto model considered here, fits the above dataset well. We considered a progressively Type II censored sample with n = 23, m = 14 and censoring scheme R = (0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 4, 1, 1, 0), giving the censored observations 0, 0.08, 0.29, 0.56, 0.70, 1.22, 1.30, 1.72, 1.90, 4.13, 5.54, 6.61, 8.87 and 13.68.
According to the description in Section 2, we obtained the maximum likelihood estimates of α, λ, the reliability and hazard functions by using the R software. We then obtained the asymptotic CIs presented in Table 8 by applying the delta method of Section 3.
Table 8.
Asymptotic 95% confidence intervals of α, λ, R(t) and h(t) for the real dataset.
By applying the bootstrap algorithms described in Section 4, we obtained the corresponding estimates under the bootstrap samples and calculated the bootstrap-p and bootstrap-t confidence intervals (see Table 8).
For Bayesian estimation, we considered the case where the hyperparameters are 0, i.e., a1 = b1 = a2 = b2 = 0, which corresponds to noninformative priors. Based on the MCMC methods discussed in Section 5 and the conclusions drawn in Section 6, we obtained the estimates of α, λ, R(t) and h(t) under the various balanced loss functions, as shown in Table 9. In addition, the confidence intervals obtained by using the MCMC techniques are shown in Table 8.
Table 9.
Bayesian estimates of α, λ, R(t) and h(t) for the real dataset.
8. Conclusions
In this article, we discussed the MLEs of the two parameters α and λ, the reliability and hazard functions of the GPD under progressively Type II censored samples. We also considered asymptotic confidence intervals using the maximum likelihood method, the delta method, the logit transformation and the arc sine transformation. For comparison, we established the bootstrap-p and bootstrap-t CIs. More importantly, we used MCMC methods to derive the Bayesian estimates under various balanced loss functions and the corresponding confidence intervals. A simulation study was conducted to compare the performances of the proposed estimators; it shows that the Bayesian estimation method is more efficient in most cases. We also analyzed the influence of the relevant parameters on the estimation results and found that it is more appropriate to choose the BMSELF and let ω take a value between 0 and 0.3. Finally, a real dataset analysis was carried out to illustrate the proposed methods.
Author Contributions
Investigation, X.H.; Supervision, W.G.
Funding
This research was partially supported by the program for the Fundamental Research Funds for the Central Universities (No. 2014RC042).
Acknowledgments
The authors would like to thank the referees for their very helpful and constructive comments, which have significantly improved the quality of this paper.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
A selected computational code is attached in the Appendix for reasons of space. More complete codes can be requested from the authors.
##### Table 1
censoring<-function(alpha,lambda,R){
m<-length(R)
W<-runif(m)
V<-rep(0,m)
U<-rep(0,m)
for (i in 1:m)
{
V[i]<-W[i]^(1/(i+sum(R[(m-i+1):m])))
}
for (i in 1:m){
U[i]<-1-prod(V[m:(m-i+1)])
}
return((-1+(1-U)^(-1/alpha))/lambda)
}
log.lik1<-function(theta,x,R){
m<-length(x)
ln<-m*log(theta[1])+m*log(theta[2])-sum((theta[1]*(R+1)+1)*log(1+theta[2]*x))
return(-ln)
}
t<-1
RT<-function(alpha,lambda){(1+lambda*t)^(-alpha)}
HT<-function(alpha,lambda){alpha*lambda/(1+lambda*t)}
R1<-c(15,rep(0,14))
R2<-rep(1,15)
R3<-c(rep(0,14),15)
scheme<-rbind(R1,R2,R3)
alpha<-0.5
lambda<-1.2
sim<-1000
EVandMSE<-matrix(NA,nrow(scheme),8)
colnames(EVandMSE)<-c("alpha.EV","alpha.mse","lambda.EV","lambda.mse","RT.EV",
"RT.mse","HT.EV","HT.mse")
rownames(EVandMSE)<-c("R1","R2","R3")
for ( i in 1: nrow(scheme) ) {
R<-scheme[i,]
output<-matrix(NA,sim,4)
for (j in 1:sim){
data<-censoring(alpha,lambda,R)
res1<-optim(c(1,1),log.lik1,method="L-BFGS-B",lower=c(0.1,0.1),x=data,R=R,hessian=TRUE,
control=list(trace=F,maxit=1000))
output[j,]<-c(res1$par,RT(res1$par[1],res1$par[2]),HT(res1$par[1],res1$par[2]))
}
EV<-apply(output,2,mean)
Real<-c(alpha,lambda,RT(alpha,lambda),HT(alpha,lambda))
bias1 <- EV-Real
var1 <- apply(output,2,var) * ((sim-1)/sim)
mse1 <- bias1^2 + var1
EVandMSE[i,]<-as.vector(rbind(EV,mse1))
}
Real
EVandMSE
#####################Real Data Analysis
R1<-c(rep(0,9),2,4,1,1,0)
data<-c(0,0.08,0.29,0.56,0.70,1.22,1.30,1.72,1.90,4.13,5.54,6.61,8.87,13.68)
n<-23
t<-1
log.lik1<-function(theta,x,R){
m<-length(x)
ln<-m*log(theta[1])+m*log(theta[2])-sum((theta[1]*(R+1)+1)*log(1+theta[2]*x))
return(-ln)
}
dralpha<-function(theta){
alpha<-theta[1]
lambda<-theta[2]
dd1<--(1+lambda*t)^(-alpha)*log(1+lambda*t)
return(dd1)
}
drlambda<-function(theta){
alpha<-theta[1]
lambda<-theta[2]
dd2<--alpha*t*(1+lambda*t)^(-alpha-1)
return(dd2)
}
dhalpha<-function(theta){
alpha<-theta[1]
lambda<-theta[2]
dd3<-lambda/(1+lambda*t)
return(dd3)
}
dhlambda<-function(theta){
alpha<-theta[1]
lambda<-theta[2]
dd4<-alpha/(1+lambda*t)^2
return(dd4)
}
RT<-function(theta){
alpha<-theta[1]
lambda<-theta[2]
return((1+lambda*t)^(-alpha))}
HT<-function(theta){
alpha<-theta[1]
lambda<-theta[2]
return(alpha*lambda/(1+lambda*t))}
conf.level<-0.95
output<-matrix(NA,1,8)
colnames(output)<-c("alpha.L","alpha.U","lambda.L","lambda.U","r.L","r.U","h.L","h.U")
out<-matrix(NA,1,8)
colnames(out)<-c("r1.L","r1.U","h1.L","h1.U","r2.L","r2.U","h2.L","h2.U")
m<-length(R1)
R<-R1
res1<-optim(c(0.9,0.3),log.lik1,method="L-BFGS-B",lower=c(0.1,0.1),x=data,R=R,hessian=TRUE,
control=list(trace=FALSE,maxit=100))
Q1<-c(dralpha(res1$par),drlambda(res1$par))
Q2<-c(dhalpha(res1$par),dhlambda(res1$par))
I<-solve(res1$hessian) #### variance-covariance matrix (inverse of observed Fisher information)
Varr<-t(Q1)%*%I%*%Q1
Varh<-t(Q2)%*%I%*%Q2
rt<-RT(res1$par)
ht<-HT(res1$par)
LB1<-log(rt/(1-rt))-qnorm((1+conf.level)/2)*sqrt(Varr)/(rt*(1-rt))
UB1<-log(rt/(1-rt))+qnorm((1+conf.level)/2)*sqrt(Varr)/(rt*(1-rt))
LB2<-log(ht/(1-ht))-qnorm((1+conf.level)/2)*sqrt(Varh)/(ht*(1-ht))
UB2<-log(ht/(1-ht))+qnorm((1+conf.level)/2)*sqrt(Varh)/(ht*(1-ht))
LB3<-asin(sqrt(rt))-qnorm((1+conf.level)/2)*sqrt(Varr/(4*rt*(1-rt)))
UB3<-asin(sqrt(rt))+qnorm((1+conf.level)/2)*sqrt(Varr/(4*rt*(1-rt)))
LB4<-asin(sqrt(ht))-qnorm((1+conf.level)/2)*sqrt(Varh/(4*ht*(1-ht)))
UB4<-asin(sqrt(ht))+qnorm((1+conf.level)/2)*sqrt(Varh/(4*ht*(1-ht)))
output[1,1:2]<-res1$par[1]+c(-1,1)*qnorm((1+conf.level)/2)*sqrt(I[1,1])
output[1,3:4]<-res1$par[2]+c(-1,1)*qnorm((1+conf.level)/2)*sqrt(I[2,2])
output[1,5:6]<-rt+c(-1,1)*qnorm((1+conf.level)/2)*sqrt(Varr)
output[1,7:8]<-ht+c(-1,1)*qnorm((1+conf.level)/2)*sqrt(Varh)
out[1,1:2]<-c(exp(LB1)*(1+exp(LB1))^{-1},exp(UB1)*(1+exp(UB1))^{-1})
out[1,3:4]<-c(exp(LB2)*(1+exp(LB2))^{-1},exp(UB2)*(1+exp(UB2))^{-1})
out[1,5:6]<-c(sin(LB3)^2,sin(UB3)^2)
out[1,7:8]<-c(sin(LB4)^2,sin(UB4)^2)
output
out
References
- Zhang, J. Likelihood moment estimation for the generalized Pareto distribution. Aust. N. Z. J. Stat. 2010, 49, 69–77. [Google Scholar] [CrossRef]
- Hüsler, J.; Li, D.; Raschke, M. Estimation for the generalized Pareto distribution using maximum likelihood and goodness-of-fit. Commun. Stat. 2011, 40, 2500–2510. [Google Scholar]
- Rezaei, S.; Noughabi, R.A.; Nadarajah, S. Estimation of Stress-Strength Reliability for the Generalized Pareto Distribution Based on Progressively Censored Samples. Ann. Data Sci. 2015, 2, 83–101. [Google Scholar] [CrossRef]
- Wang, B.X.; Yu, K.; Jones, M.C. Inference Under Progressively Type II Right-Censored Sampling for Certain Lifetime Distributions. Technometrics 2010, 52, 453–460. [Google Scholar] [CrossRef]
- Kim, C.; Jung, J.; Chung, Y. Bayesian estimation for the exponentiated Weibull model under Type II progressive censoring. Stat. Pap. 2011, 52, 53–70. [Google Scholar] [CrossRef]
- Ahmed, E.A. Bayesian estimation based on progressive Type II censoring from two-parameter bathtub-shaped lifetime model: A Markov Chain Monte Carlo approach. J. Appl. Stat. 2014, 41, 752–768. [Google Scholar] [CrossRef]
- Farahani, Z.S.M.; Khorram, E. Bayesian Statistical Inference for Weighted Exponential Distribution. Commun. Stat.-Simul. Comput. 2014, 43, 1362–1384. [Google Scholar]
- Jaheen, Z.; Harbi, M.A. Bayesian Estimation for the Exponentiated Weibull Model via Markov Chain Monte Carlo Simulation. Commun. Stat.-Simul. Comput. 2011, 40, 532–543. [Google Scholar] [CrossRef]
- Pandey, B.N.; Bandyopadhyay, P. Bayesian Estimation of Inverse Gaussian Distribution. Statistics 2012, 33, 115–121. [Google Scholar]
- Azimi, R.; Fasihi, B.; Sarikhanbaglu, F.A. Statistical inference for generalized Pareto distribution based on progressive Type II censored data with random removals. Int. J. Sci. World 2014, 2, 1–9. [Google Scholar] [CrossRef]
- Mahmoud, M.A.; EL-Sagheer, R.M.; Abdallah, S.H. Inferences for New Weibull-Pareto Distribution Based on Progressively Type II Censored Data. J. Stat. Appl. Probab. 2016, 5, 501–514. [Google Scholar] [CrossRef]
- Abdel-Aty, Y.; Franz, J.; Mahmoud, M.A.W. Bayesian prediction based on generalized order statistics using multiply Type II censoring. Statistics 2007, 41, 495–504. [Google Scholar] [CrossRef]
- El-Sagheer, R.M. Bayesian Prediction Based on General Progressive Censored Data from Generalized Pareto Distribution. J. Stat. Appl. Probabil. 2016, 5, 43–51. [Google Scholar] [CrossRef]
- Krishnamoorthy, K.; Lin, Y. Confidence limits for stress–strength reliability involving Weibull models. J. Stat. Plan. Inference 2010, 140, 1754–1764. [Google Scholar] [CrossRef]
- Dey, S.; Pradhan, B. Generalized inverted exponential distribution under hybrid censoring. Stat. Methodol. 2014, 18, 101–114. [Google Scholar] [CrossRef]
- Jozani, M.J. Bayesian and Robust Bayesian analysis under a general class of balanced loss functions. Stat. Pap. 2012, 53, 51–60. [Google Scholar] [CrossRef]
- Raqab, M.Z.; Asgharzadeh, A.; Valiollahi, R. Prediction for Pareto distribution based on progressively Type II censored samples. Comput. Stat. Data Anal. 2010, 54, 1732–1743. [Google Scholar] [CrossRef]
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).