Abstract
In this article, we discuss parameter estimation and prediction for the Gompertz distribution under general progressive Type-II censoring. The maximum likelihood estimates are computed via the Expectation-Maximization algorithm. Bayesian estimates are considered under symmetric, asymmetric and balanced loss functions. The Tierney and Kadane approximation is used to derive these estimates, and the Metropolis-Hastings (MH) algorithm is applied to obtain them as well. Asymptotic confidence intervals are constructed from the Fisher information matrix, and bootstrap intervals are also established. Furthermore, we build highest posterior density intervals from the sample generated by the MH algorithm. Bayesian predictive intervals and estimates for future samples are then provided. Finally, a numerical simulation study is implemented to evaluate the quality of the approaches, and two real datasets are analyzed.
1. Introduction
The Gompertz distribution has wide applications in describing human mortality, establishing actuarial tables and other fields. It was originally introduced by Gompertz (see Reference [1]). The probability density function (PDF) and cumulative distribution function (CDF) of the Gompertz distribution are defined as
and
where the unknown parameters and are positive.
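The displayed PDF and CDF referred to above were lost in extraction. For reference, a commonly used parameterization of the Gompertz distribution (the symbols below are illustrative and may not match the paper's notation) is

```latex
f(x) = \lambda e^{\alpha x} \exp\left\{-\tfrac{\lambda}{\alpha}\left(e^{\alpha x}-1\right)\right\}, \qquad
F(x) = 1 - \exp\left\{-\tfrac{\lambda}{\alpha}\left(e^{\alpha x}-1\right)\right\}, \qquad x > 0,
```

with shape parameter \(\alpha>0\) and scale parameter \(\lambda>0\); the corresponding hazard rate is \(\lambda e^{\alpha x}\), which is increasing.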
The Gompertz distribution possesses a unimodal PDF and an increasing hazard function. Many researchers have contributed to the study of its properties. In recent years, Reference [2] studied the relations between the Gompertz distribution and other distributions, for instance, the Type I extreme value and Weibull distributions. Reference [3] obtained weighted and unweighted least squares estimates under censored and complete samples. Reference [4] calculated the maximum likelihood estimates (MLEs) and constructed exact confidence intervals and joint confidence regions based on progressive Type-II censoring. Reference [5] studied statistical inference for the Gompertz distribution under generalized progressive hybrid censoring. They derived the MLEs by Newton's iteration method and used the Markov chain Monte Carlo method to obtain Bayes estimates under the generalized entropy and other loss functions. Bayesian predictions based on this censoring scheme were provided by one- and two-sample predictive approaches, and the proposed methods were compared by simulation. Reference [6] obtained the MLEs and Bayesian estimates of the parameters, as well as of the hazard rate and reliability functions, of the Gompertz distribution under progressive first-failure censoring. Approximate and exact confidence intervals were constructed, conjugate and discrete prior distributions for the parameters were proposed, and a numerical example was reported. One may also refer to References [7,8] for extensions of the Gompertz distribution.
In life tests and reliability analyses, censoring has attracted more and more attention because it saves time and cost. Several censoring schemes have been proposed in the literature, among which Type-I and Type-II censoring are the most widely used. The former terminates the experiment at a fixed time point, so that the number of observed failures is random, while the latter stops the life test once a prescribed number of units fail, so that the duration of the experiment is random. Both traditional Type-I and Type-II censoring share the limitation that surviving experimental units may be withdrawn only at the terminal point. In this regard, progressive Type-II censoring offers better practicability and flexibility because it allows removal of surviving units after any failure occurs. However, other situations may also arise in a test, such as unobserved failures at the starting point of the test, which leads to a more general form of censoring. In this article, following Reference [9], we concentrate on general progressive Type-II censoring. Assume that a life test contains n experimental units. The first r failures occur at the time points , …, , respectively, and are unobserved. When the -th failure is observed at the time point , surviving experimental units of size are withdrawn, and so forth. When the -th failure takes place at the time point , surviving experimental units of size are removed at random. Eventually, when the m-th failure is observed at the time point , the remaining experimental units of size are removed. Here is prefixed and is called the censoring scheme; moreover, denotes the general progressive censored data, that is, the observed failure times of size .
Several scholars have discussed various lifetime distributions using general progressive censored data. Among others, Reference [10] derived both classical and Bayesian estimates using general progressive censored data obtained from the exponential distribution. Reference [11] applied general progressive censored samples to discuss Bayesian estimation and prediction problems for the two parameters of the inverse Weibull distribution, with a gamma prior on the scale parameter and a log-concave prior density on the shape parameter. Reference [12] obtained Bayesian prediction estimates for future samples from the Weibull distribution under asymmetric and symmetric loss functions and general progressive censoring. Other studies can be found in References [13,14,15], and so forth.
In this article, we discuss estimation and prediction problems for the Gompertz distribution under general progressive censoring.
This paper proceeds as follows: In Section 2, we calculate the MLEs by the Expectation-Maximization (EM) algorithm, acquire the Fisher information matrix and derive bootstrap intervals. In Section 3, we discuss the Bayesian estimates under different loss functions. The Tierney and Kadane (TK) approximation is proposed to calculate these estimates. Furthermore, we apply the MH algorithm to obtain Bayesian estimates and establish highest posterior density (HPD) intervals based on the sample generated by the MH algorithm. In Section 4, Bayesian point and interval prediction estimates for future samples are provided. A numerical simulation is executed in Section 5 to evaluate the quality of these approaches, and two real datasets are also analyzed. Finally, Section 6 concludes the paper.
2. Maximum Likelihood Estimation
Let be the censoring scheme, under which denotes the corresponding general progressive censored sample drawn from Gompertz distribution. Then, the likelihood function is derived as the following expression:
where , and denotes an observed value of . In addition, is the binomial coefficient, that is .
2.1. Point Estimation with EM Algorithm
A classical method for obtaining MLEs is the Newton-Raphson method, which requires the second-order partial derivatives of the log-likelihood function; these derivatives are usually complicated in the case of censoring. It is therefore necessary to seek other methods. Following Reference [16], we use the EM algorithm to derive the MLEs. This algorithm is powerful for handling incomplete-data problems because only the pseudo-log-likelihood function of the complete data needs to be maximized. It is an iterative method: the E-step uses the current parameter estimates to compute the conditional expectation of the complete-data log-likelihood given the censored data, and the M-step maximizes this expectation to obtain the next estimate.
We use to represent the censored sample, where is a vector of the first r unobserved failures, and denotes a vector of the data censored after the observed failures. The observed and complete samples are denoted by and K, respectively; then . Let , and represent the corresponding observations. Under the complete data, the log-likelihood function can be expressed as
- E-step
To conduct the E-step smoothly, first we compute the expectation of (5), the pseudo-log likelihood function is then expressed as
where
and
- M-step
Suppose that the s-th estimate of is represented by ; then the M-step aims to maximize (6) by substituting and into , , and , and to derive the -th estimate. Therefore, the next task is to maximize the function
where , , , are , , , , respectively. The corresponding likelihood equations are
and
Since it is infeasible to solve (12) and (13) analytically, we use a numerical technique to obtain and . From (12), the estimate of can be expressed as the following function of :
By replacing with , Equation (13) can be transformed into the equivalent form , where
Then, can be acquired using the fixed-point iterative procedure:
When is smaller than a given tolerance limit, the iteration stops. Once we get , can be computed easily from (14). The E-step and M-step are repeated until the procedure converges, which yields the MLEs of and .
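Equation (16) defines a fixed-point iteration. The following is a minimal Python sketch of the generic scheme it describes; the function `g` is a hypothetical placeholder for the right-hand side of (15) built from the current E-step quantities, and the call at the bottom uses a toy map only to show the mechanics.

```python
def fixed_point(g, x0, tol=1e-8, max_iter=1000):
    """Iterate x_{k+1} = g(x_k) until successive values differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy illustration: this map converges to sqrt(2); in the M-step, g would be
# the right-hand side of (15) evaluated at the current E-step expectations.
lam_hat = fixed_point(lambda lam: 0.5 * (lam + 2.0 / lam), x0=1.0)
```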
2.2. Asymptotic Confidence Interval
Now, we acquire the Fisher information matrix and establish asymptotic confidence intervals (ACIs). When the EM algorithm is used to derive MLEs from incomplete samples, the observed information can be extracted by the procedure proposed by Reference [17]. Let be the unknown parameter . and denote the complete information and the observed information, respectively. Furthermore, the missing information is represented as . The main concept of this procedure is the missing-information principle
where can be derived by
is the expected information of the distribution of given . According to Reference [18], given the observed sample of general progressive Type-II censoring, we obtain the distribution of Z as
then the is
Let , . Following Reference [18], we know that given the observed sample , the components of are independent of each other and have the PDF . Similarly, the components of are independent of each other and have the PDF . Therefore, the can be restated as
where
and
Now we can compute the elements of the above matrices as follows:
where
and
Further, using the asymptotic normality of the MLEs , , the ACIs for the two unknown parameters are obtained as
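The display that should follow here was lost in extraction; assuming the usual construction (with illustrative symbols that may differ from the paper's notation), the intervals have the form

```latex
\hat{\alpha} \pm z_{\gamma/2}\sqrt{\widehat{\operatorname{Var}}(\hat{\alpha})},
\qquad
\hat{\lambda} \pm z_{\gamma/2}\sqrt{\widehat{\operatorname{Var}}(\hat{\lambda})},
```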
where represents the upper -th quantile of the standard normal distribution, and denote the principal diagonal elements of , respectively.
2.3. Bootstrap Confidence Interval
As is widely known, an asymptotic confidence interval based on the MLE requires a large sample for its accuracy, but in many practical cases the sample size is not large enough. Reference [19] proposed the bootstrap method to construct confidence intervals (CIs), which is more suitable for small samples. In this part, the parametric bootstrap method is employed to establish percentile bootstrap (bootstrap-p) and bootstrap-t CIs for a parameter (here or ); a compact sketch combining the two constructions is given after the two procedures below. Interested readers may refer to References [20,21] for more information about the bootstrap and Reference [22] for the algorithm for generating general progressive Type-II censored samples.
Parametric bootstrap-p
- (1)
- Calculate the MLEs and based on the existing general progressive censored data and censoring scheme .
- (2)
- Generate from Beta().
- (3)
- Generate independent from Uniform, .
- (4)
- Set , where , .
- (5)
- Set , .
- (6)
- Set , , where represents the CDF of the Gompertz distribution with parameters and . Then are the general progressive censored sample (also the bootstrap sample).
- (7)
- Compute the MLEs and using the updated bootstrap sample.
- (8)
- Repeat steps (2)–(7) D times. Acquire the estimates: (), ().
- (9)
- Set as the CDF for . For a given value of x, define . The bootstrap-p CI of the parameter is obtained as
Parametric bootstrap-t
- (1)–(7)
- The same as the bootstrap-p above.
- (8)
- Obtain the statistics that
- (9)
- Repeat steps (2)–(8) D times.
- (10)
- Set as the CDF for . For a given value of x, define . The bootstrap-t CI for the parameter is given by
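As a compact illustration of the two procedures above, the following Python sketch shows how the bootstrap-p and bootstrap-t intervals could be assembled from D bootstrap replicates. The callables `generate_censored_sample` and `mle` are hypothetical placeholders for the resampling steps (2)-(6) and the estimation step (7), respectively.

```python
import numpy as np

def bootstrap_cis(theta_hat, se_hat, generate_censored_sample, mle, D=1000, alpha=0.05):
    """Percentile (bootstrap-p) and studentized (bootstrap-t) CIs for one parameter.

    theta_hat, se_hat : MLE and its standard error from the original sample.
    generate_censored_sample() : draws a bootstrap sample under the fitted model
                                 and the same censoring scheme (steps (2)-(6)).
    mle(sample) : returns (estimate, standard_error) for a bootstrap sample (step (7)).
    """
    boot_est, boot_t = [], []
    for _ in range(D):
        sample = generate_censored_sample()
        est, se = mle(sample)
        boot_est.append(est)
        boot_t.append((est - theta_hat) / se)   # studentized statistic
    # Bootstrap-p: percentiles of the bootstrap estimates.
    lo_p, hi_p = np.percentile(boot_est, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    # Bootstrap-t: quantiles of the studentized statistics, rescaled around theta_hat.
    t_lo, t_hi = np.percentile(boot_t, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return (lo_p, hi_p), (theta_hat - t_hi * se_hat, theta_hat - t_lo * se_hat)
```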
3. Bayesian Estimation
Bayesian statistics differ from traditional statistics in that they allow subjective prior knowledge about life parameters to be incorporated into the inferential procedure in reliability analysis. Therefore, for the same quality of inference, Bayesian methods tend to require fewer sample data than traditional statistical methods. This makes them particularly valuable in expensive life tests.
We investigate the Bayesian estimates in this section. Suppose that and independently have gamma prior distributions with the parameters and . Afterwards, we can obtain their joint prior distribution, that is
where the positive constants and d are hyperparameters. Let be an observed value of . Based on the joint prior distribution and likelihood function, the joint posterior function is
It is clear that (26) is analytically intractable. Furthermore, the Bayesian estimate of a function of and is also intractable because it involves a ratio of two integrals. Several approximate approaches to such ratios have been presented in the literature. Among them, the TK method was proposed by Reference [23] to obtain approximate posterior expectations. Besides, the MH algorithm is a simulation method widely used for sampling from posterior density functions. In this article, we use the TK method and the MH algorithm to derive approximate explicit forms for the Bayesian estimates.
3.1. Loss Functions
In Bayesian statistics, the selection of the loss function is a fundamental step. There are many symmetric loss functions, among which the squared error loss (SEL) function is well known for its good mathematical properties. Let be a Bayesian estimate of . The SEL function has the form
then under SEL function the Bayesian estimate for can be computed by .
However, in many practical situations, overestimation and underestimation result in different losses, and the consequences may be quite serious if a symmetric loss function is used indiscriminately. In such cases, asymmetric loss functions are considered more suitable. Many different asymmetric loss functions have been used in the literature; among them, the LINEX loss function is the most prominent, and it can be expressed as
Without loss of generality, here, we take . Thus Bayesian estimate for is given by the expression .
Later, Reference [24] proposed a balanced loss function with the more general form
where is a known estimate of , such as the MLE, and is an arbitrarily chosen loss function. Choosing as the SEL function given by (27) transforms (29) into the balanced squared error loss (BSEL) function, under which the Bayesian estimate is given by .
Clearly, the balanced loss function is more general, since it includes the MLE and the symmetric and asymmetric loss functions as special cases. For instance, if is set to be the MLE of the parameter, then under the BSEL function the Bayesian estimate is exactly the MLE when , and it reduces to the Bayesian estimate under the SEL function when . Similarly, if is chosen as the LINEX loss function given by (28), is called the BLINEX function. When and , the Bayesian estimates under the BLINEX loss function reduce to the MLE and to the LINEX case, respectively.
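For reference, since the displays above were lost in extraction, the standard forms of these loss functions and the corresponding Bayes estimators (written with generic symbols \(\theta\), \(h\) and \(\omega\), which may differ from the paper's notation) are

```latex
\text{SEL:}\quad L(\hat\theta,\theta)=(\hat\theta-\theta)^2,
\qquad \hat\theta_{\mathrm{SEL}}=E(\theta\mid\text{data}),
\\
\text{LINEX:}\quad L(\hat\theta,\theta)=e^{h(\hat\theta-\theta)}-h(\hat\theta-\theta)-1,
\qquad \hat\theta_{\mathrm{LINEX}}=-\frac{1}{h}\ln E\!\left(e^{-h\theta}\mid\text{data}\right),
\\
\text{Balanced:}\quad L_{\omega}(\hat\theta,\theta)=\omega\,\rho(\hat\theta,\delta_0)+(1-\omega)\,\rho(\hat\theta,\theta),
\qquad \hat\theta_{\mathrm{BSEL}}=\omega\,\delta_0+(1-\omega)\,E(\theta\mid\text{data}).
```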
In this article, we derive Bayesian estimates under the SEL, BSEL and LINEX loss functions, respectively. Next, the TK method is suggested to deal with the ratio-of-integrals problem in the posterior expectation estimation.
3.2. TK Method
We assume that denotes an arbitrary function of . Following Reference [23], the posterior expectation for is written as
where denotes the prior density, and represents the logarithm of (4). We set:
Maximizing and individually, we derive and . Then, the approximate posterior expectation of by applying the TK method is obtained as
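The display that should follow here was lost in extraction; the standard form of the TK approximation, written with generic symbols (here \(n\delta\) denotes the logarithm of the prior times the likelihood and \(n\delta^{*}=n\delta+\ln g\), the quantities set up above), is

```latex
E\!\left[g(\alpha,\lambda)\mid\text{data}\right]\;\approx\;
\sqrt{\frac{|\Sigma^{*}|}{|\Sigma|}}\;
\exp\!\left\{n\left[\delta^{*}(\hat\alpha^{*},\hat\lambda^{*})-\delta(\hat\alpha,\hat\lambda)\right]\right\},
```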
where and represent the determinants of the negative inverse Hessian matrices of and , respectively. Next, ignoring the constant term, we note that
Now, we compute the partial derivatives of :
and
Similarly, the second derivatives can be derived as
and
For , we first compute that
and
As a result, is
Finally in the above calculation processes, setting as and respectively, the estimates on the basis of SEL function are given by
Further, we are able to calculate the Bayesian estimates based on BSEL function using the equation with different .
Similarly, by treating as and as , the estimates for unknown parameters under LINEX loss function are given by
3.3. MH Algorithm
We derive Bayesian estimates of and by the MH algorithm (see Reference [20]). First, we take the bivariate normal distribution as the proposal distribution for the parameter vector ; the MH algorithm then generates candidate samples from this proposal and eventually yields a convergent sample from the posterior distribution. Using these samples, we first compute the Bayesian estimates under different loss functions and thereafter establish HPD intervals. The MH algorithm can be summarized as follows (a short code sketch is given after the list):
- (1)
- Begin with an initial value , set .
- (2)
- Generate a proposal from the bivariate normal distribution , where and denotes the variance-covariance matrix, which is commonly taken to be the inverse of the Fisher information matrix.
- (3)
- Calculate the acceptance probability , where is the corresponding joint posterior distribution.
- (4)
- Generate from Uniform.
- (5)
- If , let ; else, let .
- (6)
- Set .
- (7)
- Repeat steps (2–6) D times to get required size of sample.
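A minimal Python sketch of this random-walk MH sampler follows. The callable `log_posterior` is a hypothetical placeholder for the logarithm of the joint posterior (26), and the proposal covariance is taken as the inverse observed Fisher information, as described above.

```python
import numpy as np

def mh_sampler(log_posterior, theta0, prop_cov, D=10000, rng=None):
    """Random-walk Metropolis-Hastings with a bivariate normal proposal."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    chain = np.empty((D, len(theta)))
    for i in range(D):
        proposal = rng.multivariate_normal(theta, prop_cov)
        # Work on the log scale; an infeasible proposal (e.g. negative parameters)
        # should return -inf from log_posterior and is then rejected.
        log_ratio = log_posterior(proposal) - log_posterior(theta)
        if np.log(rng.uniform()) < min(0.0, log_ratio):
            theta = proposal
        chain[i] = theta
    return chain
```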
After discarding the first iterative values as burn-in, the Bayesian estimates under the SEL function are derived as
Proceeding similarly, the desired estimates under the BSEL function are obtained easily. Further, the Bayesian estimates under the LINEX loss function can be computed as
Now, we can establish the HPD interval (see Reference [25]) for the unknown parameter . Sort the remaining values in ascending order to be . The HPD interval of is given as
where is selected when the following equation is satisfied:
and denotes the integer part of . Likewise, the HPD interval of can be obtained.
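In practice, the search over j described above amounts to sliding a window of fixed coverage across the sorted draws and keeping the shortest one. A minimal Python sketch (assuming a one-dimensional array of post-burn-in MCMC draws) is:

```python
import numpy as np

def hpd_interval(draws, cred=0.95):
    """Shortest interval containing a fraction `cred` of the sorted MCMC draws."""
    draws = np.sort(np.asarray(draws))
    n = len(draws)
    k = int(np.floor(cred * n))            # number of draws inside the interval
    widths = draws[k:] - draws[:n - k]     # width of every candidate window
    j = int(np.argmin(widths))             # index minimizing the interval length
    return draws[j], draws[j + k]
```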
4. Bayesian Prediction
Now we obtain prediction estimates and predictive intervals for a future sample on the basis of the available sample. Bayesian prediction of future samples is a fundamental subject in many fields, such as medical, agricultural and engineering experiments. Interested readers may refer to Reference [11].
Suppose that the existing is a group of general progressive censored data observed from a Gompertz population. Let denote the ordered failure times of a future sample of size W, also drawn from the Gompertz distribution. We aim to obtain their predictive estimates (two-sample prediction). Suppose that represents the v-th failure time of the future sample. Then, for given and , we can obtain the density function of as
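This display was lost in extraction; for a future sample of size W, the v-th ordered failure time has the standard order-statistic density, written here with generic symbols (F and f denote the Gompertz CDF and PDF):

```latex
f_{Y_{(v)}}(y\mid\alpha,\lambda)
= v\binom{W}{v}\bigl[F(y\mid\alpha,\lambda)\bigr]^{v-1}
\bigl[1-F(y\mid\alpha,\lambda)\bigr]^{W-v}\, f(y\mid\alpha,\lambda),
\qquad y>0.
```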
Consequently, the posterior predictive density function for is derived as
It is infeasible to compute (57) analytically. By using the MH algorithm, we can obtain its approximate solution as follows:
Further, the survival function is computed as
The posterior predictive survival function for can be derived by
Then, we construct the Bayesian predictive interval of by finding the solution of the equations
Further, it is convenient to derive the predictive estimate of the future v-th ordered lifetime, which is given by
where is obtained as
By using the MH algorithm described in the previous section, the prediction estimate of is derived as
5. Simulation and Data Analysis
To evaluate the quality of the approaches, a numerical simulation study is carried out. In addition, we also analyze two real datasets for further illustration.
5.1. Simulation Study
For the simulation, we first generate general progressive censored samples with the algorithm discussed by Reference [22]. The procedure is as follows:
- (1)
- Generate from Beta().
- (2)
- Generate independent from Uniform, .
- (3)
- Set , where , .
- (4)
- Set , .
- (5)
- Set , , where represents the CDF of the Gompertz distribution (a short sketch of this inverse-CDF transformation follows the list).
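Step (5) transforms uniform variates through the Gompertz inverse CDF. A small Python sketch of that transform, under the parameterization assumed in the Introduction (the symbols `alpha` and `lam` are illustrative, not necessarily the paper's), is:

```python
import numpy as np

def gompertz_quantile(u, alpha, lam):
    """Inverse CDF of the Gompertz distribution with
    F(x) = 1 - exp(-(lam/alpha) * (exp(alpha * x) - 1)),  x > 0."""
    u = np.asarray(u)
    return np.log1p(-(alpha / lam) * np.log1p(-u)) / alpha

# Example: transform uniforms into a Gompertz sample (alpha=0.5, lam=1.0 assumed).
rng = np.random.default_rng(0)
sample = gompertz_quantile(rng.uniform(size=5), alpha=0.5, lam=1.0)
```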
Then we get the desired general progressive censored data drawn from the Gompertz distribution. In our experiment, the true values of are selected to be . The MLEs of the two parameters are calculated by means of the EM algorithm. For Bayesian estimation and prediction, the hyperparameter values are chosen as , respectively. Moreover, Bayesian estimates are obtained by the TK and MH methods under different loss functions. The results are compared in terms of mean-squared error (MSE).
For convenience, we use simplified notations to represent different censoring schemes (CS) with r, such as for the case where , , and the censoring scheme is . Therefore, our schemes in simulation study can be expressed by the following notations: , , , , , , , , , .
Table 1 reports all the average estimates and corresponding MSEs for the parameters. In this table, for a given censoring scheme, the average estimates are placed on the first and third rows, and the second and fourth rows report the corresponding MSEs. In general, the MH estimates are observed to have smaller MSEs than the estimates obtained by the TK method. Furthermore, the Bayes estimates of the parameters under the LINEX loss function outperform those based on the SEL and BSEL functions in terms of MSE, although the Bayesian estimates under the SEL function are closer to the true values. For the MLEs, larger and n yield better estimates, where and n are the sizes of the observed and complete samples, respectively. On the whole, the Bayesian estimates have an advantage over the corresponding MLEs.
Table 1.
Estimates and MSEs under general progressive censoring schemes.
Furthermore, different interval estimates have also been constructed, including the ACIs based on the Fisher information matrix, the parametric bootstrap intervals, and the HPD intervals based on the sample generated from the MH algorithm. Table 2 presents their average lengths (ALs) and coverage probabilities (CPs). The tabulated values indicate that the HPD intervals have the shortest ALs among all the interval estimates, while the ACIs perform better in terms of CPs. In general, the bootstrap-t and bootstrap-p intervals behave similarly, and their CPs tend to fall below the nominal confidence level. Table 3 lists the results of the point and interval predictions of , and in a future sample of size 10. We observe that the prediction interval becomes wider as v increases.
Table 2.
Interval estimates with confidence level of 95% for and .
Table 3.
Point prediction and prediction interval with .
5.2. Data Analysis
Dataset 1: First we analyze a real dataset on the breaking stress of carbon fibers (in GPa) () (see Reference [26]). It is listed as follows:
In order to analyze these data, we calculate the MLEs of the two parameters and then conduct a goodness-of-fit test for the Gompertz distribution using the Kolmogorov-Smirnov (K-S) statistic, together with other criteria such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). For comparison, some other lifetime distributions have also been tested for goodness of fit, including the Generalized Exponential (GE), Inverse Weibull and Exponential distributions. Their PDFs have the following forms, respectively:
- (1)
- The PDF of GE distribution:
- (2)
- The PDF of Inverse Weibull distribution:
- (3)
- The PDF of Exponential distribution:
The results of the tests are presented in Table 4 together with the MLEs. Note that smaller K-S, AIC and BIC values indicate a better fit to the data. Comparing the values, we conclude that the Gompertz distribution is the most appropriate.
Table 4.
The MLE and goodness-of-fit tests results in Dataset 1.
To illustrate the proposed methods, three groups of general progressive censored data have been randomly drawn from the parent sample as follows:
Scheme 1: (–,–,–,1.25, 1.47, 1.57, 1.61, 1.61, 1.69, 1.80, 1.87, 2.03, 2.05, 2.35, 2.41, 2.43, 2.48, 2.50, 2.53, 2.55, 2.56, 2.67, 2.73, 2.97, 3.11, 3.15, 3.22, 4.42), , , , , , , , others;
Scheme 2: (–,–,–,–,–,1.57, 1.61, 1.61, 1.69, 1.80, 1.84, 1.87, 2.03, 2.03, 2.05, 2.12, 2.43, 2.48, 2.55, 2.59, 2.67, 2.73, 2.82, 2.87, 2.88, 2.96, 3.09, 3.11, 3.11, 3.15, 3.19, 3.60, 3.75, 4.42, 4.70), , , , , , , , others;
Scheme 3: (–,–,1.08, 1.25, 1.47, 1.57, 1.61, 1.61, 1.69, 1.80, 1.84, 1.89, 2.03, 2.03, 2.05, 2.12, 2.35, 2.41, 2.48, 2.50, 2.53, 2.55, 2.79, 2.82, 2.93, 2.95, 3.19, 3.22, 3.27, 3.31, 3.33, 3.39, 3.60, 3.68, 3.75, 4.20, 4.90), , , , , , , others.
With the EM algorithm, we calculate the MLEs, and the corresponding Bayesian estimates are also derived by the TK and MH methods. Since no prior information is available, all the hyperparameters are set to values close to zero. We list the MLEs and Bayesian estimates in Table 5 and Table 6. In Table 7, the 90% interval estimates are tabulated, namely the ACIs, parametric bootstrap and HPD intervals. Finally, in Table 8, point predictions and 95% interval predictions of and in a future sample of size 6 are presented.
Table 5.
Estimates for and by EM and TK method in Dataset 1.
Table 6.
Estimates for and by MH algorithm in Dataset 1.
Table 7.
Interval estimates with confidence level of 90% for and in Dataset 1.
Table 8.
Point prediction and interval prediction with in Dataset 1.
Dataset 2: Reference [27] presented a dataset on the tumor-free days of 30 rats fed an unsaturated diet, which is listed below:
In order to analyze these data, Reference [28] assumed that the number of tumor-free days follows the Gompertz distribution. To illustrate the methods discussed, here we also suppose that these data follow a Gompertz distribution with . Letting , we set up two censoring schemes, and , respectively. Then we obtain the sample under :
and the sample under :
In Table 9 and Table 10, we report the MLEs and the Bayesian estimates derived by the TK method and the MH algorithm, respectively. The interval estimates are presented in Table 11, including the ACIs, parametric bootstrap and HPD intervals. Finally, Table 12 presents the results of the point prediction and the interval prediction of and with .
Table 9.
Estimates for and by EM and TK methods in Dataset 2.
Table 10.
Estimates for and by MH algorithm in Dataset 2.
Table 11.
Interval estimates with confidence level of 90% for and in Dataset 2.
Table 12.
Point prediction and 95% interval prediction with in Dataset 2.
6. Conclusions
In summary, we have discussed classical and Bayesian inference for the Gompertz distribution under general progressive censoring. First, the MLEs are obtained by the Expectation-Maximization algorithm. Then, using the asymptotic normality of the MLEs and the missing-information principle, we provide asymptotic confidence intervals, and we also derive parametric percentile bootstrap and bootstrap-t intervals. For the Bayesian analysis, three loss functions are considered, which are symmetric, asymmetric and balanced, respectively. Since the posterior expectation is intractable in explicit form, the TK method is employed to calculate approximate Bayesian estimates. Besides, the Metropolis-Hastings algorithm is applied to obtain the Bayesian estimates and establish HPD intervals. Furthermore, we derive prediction estimates for future samples. Finally, a numerical simulation is executed to appraise the quality of the approaches, and two real datasets are also analyzed. The results indicate that these approaches perform well. In addition, the methods in this article can be extended to other distributions.
Author Contributions
Investigation, Y.W.; Supervision, W.G. Both authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by Project 202110004106 of the Beijing Training Program of Innovation and Entrepreneurship for Undergraduates.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Gompertz, B. On the nature of the function expressive of the law of human mortality and on a new mode of determining life contingencies. Philos. Trans. R. Soc. Lond. 1825, 115, 513–585. [Google Scholar]
- Willekens, F. Gompertz in context: The Gompertz and related distributions. In Forecasting Mortality in Developed Countries: Insights from a Statistical, Demographic and Epidemiological Perspective; Springer: Berlin/Heidelberg, Germany, 2001; Volume 9, pp. 105–126. [Google Scholar]
- Wu, J.-W.; Hung, W.-L.; Tsai, C.-H. Estimation of parameters of the Gompertz distribution using the least squares method. Appl. Math. Comput. 2004, 158, 133–147. [Google Scholar] [CrossRef]
- Chang, S.; Tsai, T. Point and interval estimations for the Gompertz distribution under progressive Type-II censoring. Metron 2003, 61, 403–418. [Google Scholar]
- Mohie El-Din, M.M.; Nagy, M.; Abu-Moussa, M.H. Estimation and prediction for Gompertz distribution under the generalized progressive hybrid censored data. Ann. Data Sci. 2019, 6, 673–705. [Google Scholar] [CrossRef]
- Soliman, A.A.; Abd-Ellah, A.H.; Abou-Elheggag, N.A.; Abd-Elmougod, G.A. Estimation of the parameters of life for Gompertz distribution using progressive first-failure censored data. Comput. Stat. Data Anal. 2012, 56, 2471–2485. [Google Scholar] [CrossRef]
- Bakouch, H.S.; El-Bar, A. A new weighted Gompertz distribution with applications to reliability data. Appl. Math. 2017, 62, 269–296. [Google Scholar] [CrossRef]
- Ghitany, M.; Alqallaf, F.; Balakrishnan, N. On the likelihood estimation of the parameters of Gompertz distribution based on complete and progressively Type-II censored samples. J. Stat. Comput. Simul. 2014, 84, 1803–1812. [Google Scholar] [CrossRef]
- Balakrishnan, N.; Sandhu, R. Best linear unbiased and maximum likelihood estimation for exponential distributions under general progressive Type-II censored samples. Sankhyā Indian J. Stat. Ser. 1996, 58, 1–9. [Google Scholar]
- Fernandez, A.J. On estimating exponential parameters with general Type-II progressive censoring. J. Stat. Plan. Inference 2004, 121, 135–147. [Google Scholar] [CrossRef]
- Peng, X.Y.; Yan, Z.Z. Bayesian estimation and prediction for the Inverse Weibull distribution under general progressive censoring. Commun. Stat. Theory Methods 2016, 45, 621–635. [Google Scholar]
- Soliman, A.A.; Al-Hossain, A.Y.; Al-Harbi, M.M. Predicting observables from Weibull model based on general progressive censored data with asymmetric loss. Stat. Methodol. 2011, 8, 451–461. [Google Scholar] [CrossRef]
- Kim, C.; Han, K. Estimation of the scale parameter of the Rayleigh distribution under general progressive censoring. J. Korean Stat. Soc. 2009, 38, 239–246. [Google Scholar] [CrossRef]
- Soliman, A.A. Estimations for Pareto model using general progressive censored data and asymmetric loss. Commun. Stat. Theory Methods 2008, 37, 1353–1370. [Google Scholar] [CrossRef]
- Wang, B.X. Exact interval estimation for the scale family under general progressive Type-II censoring. Commun. Stat. Theory Methods 2012, 41, 4444–4452. [Google Scholar] [CrossRef]
- Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. (Methodol.) 1977, 39, 1–38. [Google Scholar]
- Louis, T.A. Finding the observed information matrix when using the EM algorithm. J. R. Stat. Soc. Ser. (Methodol.) 1982, 44, 226–233. [Google Scholar]
- Wang, J.; Wang, X.R. The EM algorithm for the estimation of parameters under the general Type-II progressive censoring data. J. Anhui Norm. Univ. (Nat. Sci.) 2014, 37, 524–529. [Google Scholar]
- Efron, B.; Tibshirani, R. Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Stat. Sci. 1986, 1, 54–75. [Google Scholar] [CrossRef]
- Kayal, T.; Tripathi, Y.M.; Singh, D.P.; Rastogi, M.K. Estimation and prediction for Chen distribution with bathtub shape under progressive censoring. J. Stat. Comput. Simul. 2017, 87, 348–366. [Google Scholar] [CrossRef]
- Kundu, D.; Kannan, N.; Balakrishnan, N. Analysis of progressively censored competing risks data. Advances in Survival Analysis. In Handbook of Statistics; Elsevier: New York, NY, USA, 2004; Volume 23, pp. 331–348. [Google Scholar]
- Aggarwala, R.; Balakrishnan, N. Some properties of progressively censored order statistics from arbitrary and uniform distributions with applications to inference and simulation. J. Stat. Plan. Inference 1998, 70, 35–49. [Google Scholar] [CrossRef]
- Tierney, L.; Kadane, J.B. Accurate approximations for posterior moments and marginal densities. J. Am. Stat. Assoc. 1986, 81, 82–86. [Google Scholar] [CrossRef]
- Jozani, M.J.; Marchand, É.; Parsian, A. Bayesian and robust Bayesian analysis under a general class of balanced loss functions. Stat. Pap. 2012, 53, 51–60. [Google Scholar] [CrossRef]
- Bai, X.; Shi, Y.; Liu, Y.; Liu, B. Reliability estimation of stress–strength model using finite mixture distributions under progressively interval censoring. J. Comput. Appl. Math. 2019, 348, 509–524. [Google Scholar] [CrossRef]
- Nichols, M.D.; Padgett, W. A bootstrap control chart for Weibull percentiles. Qual. Reliab. Eng. Int. 2006, 22, 141–151. [Google Scholar] [CrossRef]
- Hand, D.J.; Daly, F.; McConway, K.; Lunn, D.; Ostrowski, E. A Handbook of Small Data Sets; Chapman & Hall: London, UK, 1994. [Google Scholar]
- Chen, Z. Parameter estimation of the Gompertz population. Biom. J. 1997, 39, 117–124. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).