Abstract
The joint censoring technique becomes crucial when the study’s aim is to assess the comparative advantages of products concerning their service times. In recent years, there has been a growing interest in progressive censoring as a means to reduce both cost and experiment duration. This article delves into the realm of statistical inference for the three-parameter Burr-XII distribution using a joint progressive Type II censoring approach applied to two separate samples. We explore both maximum likelihood and Bayesian methods for estimating model parameters. Furthermore, we derive approximate confidence intervals based on the observed information matrix and employ four bootstrap methods to obtain confidence intervals. Bayesian estimators are presented for both symmetric and asymmetric loss functions. Since closed-form solutions for Bayesian estimators are unattainable, we resort to the Markov chain Monte Carlo method to compute these estimators and the corresponding credible intervals. To assess the performance of our estimators, we conduct extensive simulation experiments. Finally, to provide a practical illustration, we analyze a real dataset.
Keywords:
joint progressive censoring scheme; three-parameter Burr-XII distribution; maximum likelihood estimators; parametric bootstrap; Markov chain Monte Carlo method
MSC:
62N01; 62N02; 62F10
1. Introduction
The Burr-XII distribution, originally introduced by Burr [1], has found extensive applications in various domains, including lifetime modeling for reliability analysis, addressing life-testing challenges, and devising acceptance sampling plans, as exemplified in the works of Abbasi et al. [2] and other researchers. It has also been effectively utilized in the analysis of observational data across diverse fields such as meteorology, finance, and hydrology, as showcased in studies conducted by Chen et al. [3], Ali and Jaheen [4], Burr [1] and Lio et al. [5]. Moreover, Shao et al. [6] delved into the modeling of extreme events using the three-parameter Burr-XII distribution (TPBXIID), notably in the context of flood-frequency analysis.
Our decision to employ the TPBXIID stems from its remarkable adaptability, spanning a wide spectrum of shapes, from highly skewed to nearly symmetric. This versatility renders it a valuable model for datasets that do not adhere to standard shapes. Notably, the distribution’s three parameters (two shape parameters and one scale parameter) offer straightforward interpretations, simplifying the analysis of statistical outcomes and enabling comparisons across different datasets. As a distribution limited to non-negative values, the Burr-XII distribution is frequently harnessed for modeling data related to lifetimes, sizes, or quantities. It consistently demonstrates an excellent fit to empirical datasets and, for some data types, is known to outperform other commonly employed distributions, including the Weibull distribution.
Furthermore, the three-parameter Burr-XII distribution serves as a generalized form encompassing several other distributions, including the Lomax (Pareto II), the two-parameter Burr-XII, and the log-logistic distributions. In summary, the three-parameter Burr-XII distribution emerges as a versatile tool for modeling non-negative data characterized by a broad spectrum of shapes. Its flexibility and interpretability render it a popular choice in statistical modeling.
Cook and Johnson [7] applied the Burr model to attain superior fits for a uranium survey dataset, while Zimmer et al. [8] delved into the statistical and probabilistic properties of the Burr-XII distribution and its relationships with other distributions commonly employed in reliability analyses. Additionally, Tadikamalla [9] extended the two-parameter Burr-XII distribution by introducing an additional scale parameter, resulting in the TPBXIID. This extension has sparked increased interest in the applications of the Burr-XII distribution.
Tadikamalla also established mathematical connections among Burr-related distributions, revealing that the Lomax distribution constitutes a special instance of the Burr-XII distribution, and the compound Weibull distribution represents a generalization of the Burr distribution. Furthermore, he demonstrated that several widely used distributions, including Weibull, logistic, log-logistic, normal, and lognormal distributions, can be viewed as specific cases of the Burr-XII distribution by appropriately configuring the distribution parameters.
In essence, the TPBXIID proves to be highly adaptable, encompassing two shape parameters and one scale parameter in the distribution function, thereby allowing it to represent a diverse range of distribution shapes. The TPBXIID can be characterized through its cumulative distribution function (CDF) and probability density function (PDF), as expressed, respectively, in Equations (1) and (2).
where and represent the shape parameters, and serves as the scale parameter. Notably, when , the density function exhibits an upside-down bathtub shape (unimodal) with the mode located at , while it assumes an L-shaped form when .
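Equations (1) and (2) did not survive the formatting of this copy. Under the parameterization most commonly used for the three-parameter Burr-XII (as in Shao et al. [6]), writing the shape parameters as c and k and the scale parameter as λ (these symbol names are an assumption, not necessarily the paper's own notation), the CDF and PDF read:

```latex
F(x) \;=\; 1 - \left[\,1 + \left(\frac{x}{\lambda}\right)^{c}\right]^{-k}, \qquad x > 0,\; c, k, \lambda > 0,
\qquad
f(x) \;=\; \frac{c\,k}{\lambda}\left(\frac{x}{\lambda}\right)^{c-1}\left[\,1 + \left(\frac{x}{\lambda}\right)^{c}\right]^{-k-1}.
```

Setting λ = 1 recovers the two-parameter Burr-XII, and k = 1 with c free gives the log-logistic form, consistent with the special cases discussed above.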
Recently, Mead and Afify [10] ventured into defining and examining the properties and applications of the five-parameter Burr-XII distribution, which is referred to as the Kumaraswamy exponentiated Burr-XII. Moreover, Shafqat et al. [11] explored the utilization of moving average control charts in the context of Burr X and inverse Gaussian, and Aslam et al. [12] studied a new generalized Burr-XII distribution with real-life applications.
The joint censoring approach proves to be a valuable and practical method for comparing life tests of products originating from various units within the same facility. Consider a scenario where two production lines operate within the same facility, generating products. In this setup, two independent samples of sizes m and n can be selected from each production line and subjected to simultaneous life-testing experiments. To optimize resource utilization, reduce costs, and save time, researchers often employ a joint progressive Type-II censoring scheme (JP-II-CS). This approach is instrumental in terminating life testing when a predetermined number of failures are observed.
Numerous studies in the literature have explored JP-II-CS and inference methods associated with it. For example, Rasouli and Balakrishnan [13] introduced likelihood inference techniques for two exponential distributions based on JP-II-CS. Doostparast et al. [14] delved into Bayes estimation under the linear exponential loss function using JP-II-CS data. Balakrishnan et al. [15] provided likelihood inference procedures for k exponential distributions under JP-II-CS, while Mondal and Kundu [16] focused on the point and interval estimation of Weibull parameters within the context of JP-II-CS.
Goel and Krishna [17] explored likelihood and Bayesian inference for k Lindley populations under a joint Type-II censoring scheme. Krishna and Goel [18] conducted a study on Lindley populations utilizing JP-II-CS. Additionally, Goel and Krishna [19] discussed statistical inference for two Lindley populations under a balanced JP-II-CS. Bayoud and Raqab [20] investigated classical and Bayesian inferences for two Topp–Leone models under JP-II-CS, while Chen and Gui [21] addressed the statistical inference of the generalized inverted exponential distribution in the context of JP-II-CS.
Pandey and Srivastava [22] focused on Bayesian inference for two log-logistic populations under JP-II-CS, and Qiao and Gui [23] tackled the statistical inference of the Weighted Exponential Distribution under similar censoring conditions.
Recently, Hassan et al. [24] delved into the statistical inference of the Burr Type III distribution under joint progressively Type II censoring. Kumar and Kumari [25] explored Bayesian and likelihood estimation techniques for two inverse Pareto populations under joint progressive censoring conditions.
According to Rasouli and Balakrishnan [13], the JP-II-CS is described as follows. Let , be the lifetimes of the m units of product A, which are assumed to be independent and identically distributed (iid) random variables from the TPBXIID with a CDF given by
and the PDF is
In a similar manner, consider a set of n lifetimes denoted as for the product B. These lifetimes correspond to n units and are treated as iid random variables following the TPBXIID. The CDF for this distribution is given by:
and PDF is
Here, , , and are shape parameters, and and are scale parameters. In this scenario, let denote the total sample size and the order statistics of the K random variables . The JP-II-CS method is applied as follows: when the first failure occurs, units are randomly removed from the remaining surviving units. The same process is repeated at the second failure, where units are randomly withdrawn from the remaining surviving units, and so on. At the failure, all remaining surviving units are withdrawn from the experiment. The JP-II-CS is represented by , and the total number of failures r is fixed before the experiment is conducted. Suppose that , where and denote the numbers of units withdrawn at the time of the failure from the X and Y samples, respectively; these are unknown random variables. The observed data then consist of , where equals 1 or 0 according to whether the corresponding failure comes from the X or the Y sample, with and .
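The joint progressive censoring mechanism described above is easy to simulate directly. The sketch below is a minimal illustration under assumed names (shape parameters `a`, `b` and scale `lam` are placeholders, not the paper's notation): the two samples are drawn by inverting the Burr-XII CDF, pooled, and then progressively censored under a scheme R, with all remaining survivors withdrawn at the last failure.

```python
import random

def rburr(n, a, b, lam, rng):
    """Draw n Burr-XII lifetimes by inverting F(x) = 1 - [1 + (x/lam)^a]^(-b)."""
    return [lam * ((1.0 - rng.random()) ** (-1.0 / b) - 1.0) ** (1.0 / a)
            for _ in range(n)]

def jp_ii_cs(x, y, R, rng):
    """Jointly progressively Type-II censor the pooled samples x (product A) and y (product B).

    Returns (failure_time, z) pairs with z = 1 if the failure came from the X
    sample and z = 0 otherwise; R[i] surviving units are withdrawn at the i-th
    failure, and the scheme must exhaust all units at the last failure.
    """
    surviving = sorted([(t, 1) for t in x] + [(t, 0) for t in y])
    observed = []
    for Ri in R:
        observed.append(surviving.pop(0))        # next (smallest) failure
        for unit in rng.sample(surviving, Ri):   # random withdrawals
            surviving.remove(unit)
    return observed

rng = random.Random(42)
x = rburr(10, 2.0, 1.5, 1.0, rng)   # product A, m = 10
y = rburr(12, 1.8, 1.2, 1.0, rng)   # product B, n = 12
R = [2, 1, 0, 1, 13]                # r = 5 failures; 5 + 17 withdrawals = K = 22
data = jp_ii_cs(x, y, R, rng)
```

Note the design choice: at each failure the withdrawn units are chosen uniformly at random from the survivors, which is exactly what makes the per-sample withdrawal counts random quantities, as stated above.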
In this study, we employ a JP-II-CS strategy to formulate statistical inferences and assess two independent samples from the TPBXIID. We derive point and interval estimators using Bayesian and maximum likelihood estimation (MLE) techniques. Subsequently, we calculate asymptotic confidence intervals (ACI) based on the observed information matrix. These confidence intervals (CIs) are computed through various bootstrap techniques, including Bootstrap-P (Boot-P), Bootstrap-T (Boot-T), Bias-Corrected Bootstrap (Boot-BC), and Bias-Corrected Accelerated Bootstrap (Boot-BCa) methods. We assume a gamma prior distribution for both the shape and scale parameters. Employing the Metropolis–Hastings (M-H) method, we obtain Bayes estimates and credible intervals (CRIs) for the informative prior under both the squared error (SE) and linear exponential (LINEX) loss functions. To evaluate the effectiveness of these diverse approaches, we conduct Monte Carlo simulations and analyze real-world data.
The paper is structured as follows: Section 2 outlines the derivation of the MLEs for the unknown parameters of the TPBXIID. Section 3 presents the ACIs based on the MLEs. Section 4 discusses different bootstrap CIs. The Bayesian analysis is performed in Section 5. To illustrate the estimation methods developed in this paper, we analyze a real dataset in Section 6. The simulation results are presented in Section 7. Finally, a brief conclusion is given in Section 8.
2. Classical Likelihood Estimation
Based on the work of Rasouli and Balakrishnan [13], the likelihood function for can be expressed as follows:
where , , , , , , and with
The likelihood function in (7) becomes
The log-likelihood function for the TPBXIID, as related to Equation (8), is as follows:
Taking the first partial derivatives of Equation (9) with respect to each parameter and setting them equal to zero gives:
From (14) and (15), we obtain the MLE as
To acquire the MLEs for , the resolution of Equations (10)–(13) is necessary. Nevertheless, attaining analytical solutions for these equations proves highly demanding, making it arduous to obtain closed-form expressions for each parameter. Consequently, resorting to a numerical approach, such as the Newton–Raphson iteration method, becomes imperative to estimate the parameter values.
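To illustrate the kind of iteration involved, the sketch below applies Newton's method (with a numerical derivative) to a deliberately simplified case, not the paper's full five-parameter system: a single complete Burr-XII sample with scale fixed at 1, where one shape parameter admits a closed-form MLE given the other and can be profiled out, leaving one score equation to solve numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c_true, k_true = 500, 2.0, 1.5
u = rng.uniform(size=n)
x = ((1.0 - u) ** (-1.0 / k_true) - 1.0) ** (1.0 / c_true)  # inverse-CDF draws

def profile_score(c):
    """d logL / dc after substituting the closed-form MLE k(c) = n / sum log(1 + x^c)."""
    t = x ** c
    k = n / np.log1p(t).sum()
    return n / c + np.log(x).sum() - (k + 1.0) * np.sum(t * np.log(x) / (1.0 + t))

c, h = 1.0, 1e-6
for _ in range(100):                       # Newton-Raphson with a numerical derivative
    g = profile_score(c)
    dg = (profile_score(c + h) - profile_score(c - h)) / (2.0 * h)
    c_new = max(c - g / dg, 1e-8)          # keep the iterate positive
    if abs(c_new - c) < 1e-10:
        c = c_new
        break
    c = c_new
k = n / np.log1p(x ** c).sum()             # plug back for the second shape parameter
```

In the paper's setting the same idea is applied jointly to all parameters; here the profiling step merely keeps the illustration one-dimensional.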
3. Approximate Confidence Interval
The second derivatives of the log-likelihood function with respect to the parameters give
and
where
The asymptotic variances–covariances of the MLEs for parameters are given by elements of the inverse of the Fisher information matrix (FIM) where the FIM is obtained by taking the expectation of minus Equations (18)–(29) and defined as
where .
The expectations referred to in Equation (31) do not admit simple exact expressions; approximations can be found in Cohen's work [26], where he computed the asymptotic variance–covariance matrices for various sample types.
Therefore, we rely on the estimated asymptotic variance–covariance matrix for the MLEs.
hence,
From the likelihood function in (9), we have if . Consequently, we have
The asymptotic normality of the MLEs can be used to compute the ACIs for parameter , so that we can express the approximate ACIs for as
where is the percentile of the standard normal distribution with right-tail probability .
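For intuition, the normal-approximation interval can be computed in a few lines. The sketch below uses a deliberately simplified one-parameter case (Burr-XII with the other shape and the scale known, complete sample), an assumption made so that the MLE and the observed information have closed forms; the interval is then the estimate plus or minus a standard normal quantile times the square root of the inverse observed information.

```python
import math
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(1)
n, c, k_true = 400, 2.0, 1.5
u = rng.uniform(size=n)
x = ((1.0 - u) ** (-1.0 / k_true) - 1.0) ** (1.0 / c)

S = np.log1p(x ** c).sum()
k_hat = n / S                      # MLE from the score equation dl/dk = n/k - S = 0
obs_info = n / k_hat ** 2          # minus the second derivative, evaluated at the MLE
se = math.sqrt(1.0 / obs_info)     # asymptotic standard error
z = NormalDist().inv_cdf(0.975)    # upper 2.5% normal quantile, ~1.96
aci = (k_hat - z * se, k_hat + z * se)
```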
4. Parametric Bootstrap
Bootstrap confidence intervals are a powerful statistical tool used to estimate the uncertainty associated with a parameter or statistic of interest in data analysis. They are particularly valuable when the underlying probability distribution of the data is complex or unknown. The bootstrap technique involves repeatedly resampling the observed data, with replacement, to create a large number of “bootstrap samples.” From these samples, new estimates of the parameter of interest are calculated. By analyzing the distribution of these estimates, bootstrap confidence intervals provide a range within which the true parameter value is likely to fall. This approach is widely used in various fields, such as finance, biology, and machine learning, to gain insight into the robustness and stability of statistical estimates, making it an indispensable tool for researchers and analysts seeking to make informed decisions in the presence of uncertainty.
In this context, we have the Boot-P, introduced by Efron [27], and the Boot-T proposed by Hall [28], along with two variations, Boot-BC and Boot-BCa, which are rooted in the concept presented by DiCiccio and Efron [29]. In 2017, Ghazal and Hasaballah [30] explored bootstrap and MCMC methods based on unified hybrid censored data. Below, we will introduce these four methods.
4.1. Boot-P Method
Use Boot-P when you need a quick and simple estimation of confidence intervals and do not have concerns about bias correction.
- Strengths: simplicity (Boot-P is straightforward to implement) and intuitiveness (it provides easily interpretable confidence intervals based on percentiles).
- Weaknesses: it may be imprecise for small sample sizes or skewed data, and it ignores bias in the original estimator. The steps are given in Algorithm 1.
| Algorithm 1: Boot-P method |
|
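The percentile method can be sketched compactly. The one-parameter Burr-XII setting below (other shape and scale known, complete sample) is an illustrative simplification, not the paper's setup: B parametric bootstrap samples are drawn from the fitted model, and the empirical 2.5% and 97.5% quantiles of the re-estimates form the interval.

```python
import numpy as np

def draw(n, c, k, rng):
    """Parametric draws from the fitted Burr-XII via the inverse CDF."""
    u = rng.uniform(size=n)
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def k_mle(x, c):
    """Closed-form MLE of the remaining shape parameter."""
    return len(x) / np.log1p(x ** c).sum()

rng = np.random.default_rng(7)
n, c = 200, 2.0
x = draw(n, c, 1.5, rng)
k_hat = k_mle(x, c)

B = 2000
boot = np.array([k_mle(draw(n, c, k_hat, rng), c) for _ in range(B)])
boot_p_ci = (np.quantile(boot, 0.025), np.quantile(boot, 0.975))  # percentile interval
```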
4.2. Boot-T Method
Use Boot-T when you have a small sample size or suspect that the distribution of the statistic is not symmetric.
- Strengths: more robust than Boot-P for small sample sizes, and provides improved accuracy when the distribution of the statistic is not symmetric.
- Weaknesses: somewhat more complex than Boot-P, and may still carry bias, although less than Boot-P. The steps are given in Algorithm 2.
| Algorithm 2: Boot-T method |
|
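The studentized variant replaces raw replicates with t-type statistics, each scaled by its own standard error. The sketch below continues the same simplified one-parameter Burr-XII illustration (an assumption, not the paper's full model), with the standard error taken from the closed-form observed information of that toy case.

```python
import numpy as np

def draw(n, c, k, rng):
    u = rng.uniform(size=n)
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def k_mle(x, c):
    return len(x) / np.log1p(x ** c).sum()

rng = np.random.default_rng(7)
n, c = 200, 2.0
x = draw(n, c, 1.5, rng)
k_hat = k_mle(x, c)
se = k_hat / np.sqrt(n)                    # from the observed information

t_stats = []
for _ in range(2000):                      # studentized bootstrap statistics
    xb = draw(n, c, k_hat, rng)
    kb = k_mle(xb, c)
    t_stats.append((kb - k_hat) / (kb / np.sqrt(n)))
t_lo, t_hi = np.quantile(t_stats, [0.025, 0.975])
boot_t_ci = (k_hat - t_hi * se, k_hat - t_lo * se)   # note the reversed quantiles
```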
4.3. Boot-BC Method
Use Boot-BC when you want to correct for bias in the estimator and have a reasonably sized dataset.
- Strengths: corrects for bias in the original estimator and provides accurate results in many cases.
- Weaknesses: can be computationally intensive, and may not work well with extreme outliers or extremely skewed data. The steps are given in Algorithm 3.
| Algorithm 3: Boot-BC method |
|
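Bias correction shifts the percentile levels using the proportion of bootstrap replicates below the original estimate. The sketch below again uses the simplified one-parameter Burr-XII illustration (an assumption), with the normal CDF and quantile function taken from the standard library.

```python
import numpy as np
from statistics import NormalDist

def draw(n, c, k, rng):
    u = rng.uniform(size=n)
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def k_mle(x, c):
    return len(x) / np.log1p(x ** c).sum()

rng = np.random.default_rng(7)
n, c = 200, 2.0
x = draw(n, c, 1.5, rng)
k_hat = k_mle(x, c)

boot = np.array([k_mle(draw(n, c, k_hat, rng), c) for _ in range(2000)])
nd = NormalDist()
z0 = nd.inv_cdf(float(np.mean(boot < k_hat)))  # bias-correction constant
z = nd.inv_cdf(0.975)
a1 = nd.cdf(2.0 * z0 - z)                      # adjusted lower percentile level
a2 = nd.cdf(2.0 * z0 + z)                      # adjusted upper percentile level
boot_bc_ci = (np.quantile(boot, a1), np.quantile(boot, a2))
```

When the bootstrap distribution is median-unbiased, z0 is near zero and the interval collapses back to the plain percentile interval.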
4.4. Boot-BCa Method
Use Boot-BCa when you have a relatively large dataset and need to correct for both bias and skewness.
- Strengths: addresses both bias and skewness in the original estimator, offering improved accuracy over Boot-BC.
- Weaknesses: even more computationally demanding than Boot-BC, and requires larger sample sizes than Boot-BC to perform well. The steps are given in Algorithm 4.
| Algorithm 4: Boot-BCa method |
|
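The BCa variant adds an acceleration constant, commonly estimated from jackknife replicates, on top of the bias correction. Continuing the same simplified one-parameter Burr-XII illustration (an assumption, not the paper's model):

```python
import numpy as np
from statistics import NormalDist

def draw(n, c, k, rng):
    u = rng.uniform(size=n)
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def k_mle(x, c):
    return len(x) / np.log1p(x ** c).sum()

rng = np.random.default_rng(7)
n, c = 200, 2.0
x = draw(n, c, 1.5, rng)
k_hat = k_mle(x, c)

boot = np.array([k_mle(draw(n, c, k_hat, rng), c) for _ in range(2000)])
jack = np.array([k_mle(np.delete(x, i), c) for i in range(n)])   # leave-one-out replicates
d = jack.mean() - jack
accel = (d ** 3).sum() / (6.0 * (d ** 2).sum() ** 1.5)           # acceleration constant

nd = NormalDist()
z0 = nd.inv_cdf(float(np.mean(boot < k_hat)))                    # bias-correction constant
z = nd.inv_cdf(0.975)
a1 = nd.cdf(z0 + (z0 - z) / (1.0 - accel * (z0 - z)))            # adjusted levels
a2 = nd.cdf(z0 + (z0 + z) / (1.0 - accel * (z0 + z)))
boot_bca_ci = (np.quantile(boot, a1), np.quantile(boot, a2))
```

Setting accel to zero recovers the Boot-BC interval, which is why BCa is usually presented as its refinement.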
In summary, the choice of bootstrap method depends on your specific data and research objectives. Boot-P is the simplest but may be less accurate for small samples and skewed data. Boot-T is a better choice for small samples and non-symmetric distributions. Boot-BC and Boot-BCa are appropriate when you want to correct for bias, with Boot-BCa providing more comprehensive corrections but at a higher computational cost. Consider the trade-offs between computational complexity and the need for bias correction when selecting the appropriate bootstrap method for your analysis.
5. Bayesian Estimation
In this section, we employ the MCMC approach within a Gibbs sampler framework that incorporates a nested M-H algorithm. We utilize this approach to generate parametric samples representing the unknown parameters , , , , and from their respective marginal posteriors, allowing us to derive Bayes estimates for these parameters. The Gibbs sampler is a recursive sampling technique employed to simulate samples from the full conditional posterior distributions, while the M-H algorithm is used to generate samples from arbitrary distributions (Hastings [32]; Metropolis et al. [33]). In this context, we generate N samples using the MCMC technique, with the initial M values discarded during the burn-in period. The remaining N–M sample values are subsequently utilized for further Bayesian analysis.
We assume that the parameters , , , , and have independent gamma prior distributions as
where
where the hyperparameters and , are assumed to be known and non-negative. Using the prior distributions for , , , , and , we obtain the joint prior distribution as follows:
Based on (8) and (42), the joint posterior density function of , , , , and given is written as
We observed that (43) is not amenable to analytical solutions due to the formidable challenge of deriving closed-form expressions for the marginal posterior distributions of individual parameters. Consequently, we recommend the utilization of the MCMC method to approximate (43). Several studies have extensively explored the MCMC technique, including works by Chen and Shao [34] and Ghazal and Hasaballah [30,35,36]. From (43), the conditional posterior density function of , , , , and can be obtained as the following proportionality: to simplify, we used , , , , and instead of , , , , and respectively:
It is evident that the full conditional posterior density function of , as provided in Equation (48), takes the form of a gamma density with a shape parameter of and a scale parameter of . Similarly, the full conditional posterior density function of , as shown in Equation (49), follows a gamma distribution with a shape parameter of and a scale parameter of . Consequently, samples of and can be readily generated using any gamma distribution generation method.
However, it is important to note that the conditional posterior distribution functions of , , , and , as described in Equations (44)–(47), cannot be analytically simplified into well-known distributions. Consequently, direct sampling using standard methods can be challenging. Nevertheless, as depicted in Figure 1, these distributions exhibit similarities to the normal distribution.
Figure 1.
Posterior density function for and .
5.1. Estimation Based on SE Loss Function
The SE loss function is represented by the equation:
In this equation, the positive constant a is typically set to 1, is a function of the parameters to be estimated, and denotes the SE estimate of . The Bayes estimator under this quadratic loss function is the mean of the posterior distribution:
The SE loss function is commonly employed in the literature and is considered one of the most prevalent loss functions. It exhibits symmetry, implying that it treats the overestimation and underestimation of parameters equally. However, in life-testing scenarios, one type of estimation error may have more significant consequences than the other.
5.2. Estimation Based on LINEX Loss Function
The LINEX loss function is defined as follows:
Here, is as previously defined, and represents the LINEX estimate of .
The shape parameter a determines the direction and degree of asymmetry of this loss function. Varian [37] first introduced this loss function, while Zellner [38] highlighted its intriguing properties. When a > 0, overestimation is penalized more severely than underestimation, and the reverse is true when a is negative. For values of a close to zero, however, the LINEX loss function closely resembles the symmetric SE loss function. This function exhibits significant asymmetry when , with overestimation incurring higher costs than underestimation. Conversely, when , the loss function increases nearly exponentially for and decreases nearly linearly for .
This is applicable provided that exists and is finite. Now, we will outline the steps of the process for the M-H within Gibbs sampling method in Algorithm 5.
| Algorithm 5: Metropolis–Hastings within Gibbs sampling |
|
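To make the scheme concrete, the sketch below runs M-H within Gibbs on a simplified two-parameter Burr-XII posterior (scale fixed, complete sample), mirroring the structure described above: one shape parameter has a gamma full conditional and is drawn directly, while the other is updated by a normal random-walk M-H step. The prior hyperparameter values and symbol names are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
n, c_true, k_true = 300, 2.0, 1.5
u = rng.uniform(size=n)
x = ((1.0 - u) ** (-1.0 / k_true) - 1.0) ** (1.0 / c_true)
a_c, b_c, a_k, b_k = 1.0, 0.1, 1.0, 0.1        # gamma prior hyperparameters (assumed)

def log_cond_c(c, k):
    """Log full conditional of c given k, up to an additive constant."""
    if c <= 0:
        return -np.inf
    return (n * np.log(c) + (c - 1.0) * np.log(x).sum()
            - (k + 1.0) * np.log1p(x ** c).sum()
            + (a_c - 1.0) * np.log(c) - b_c * c)

c, k, chain = 1.0, 1.0, []
for it in range(3000):
    # Gibbs step: k | c is Gamma(n + a_k, rate b_k + sum log(1 + x^c))
    k = rng.gamma(n + a_k, 1.0 / (b_k + np.log1p(x ** c).sum()))
    # M-H step: symmetric normal random-walk proposal for c
    c_prop = c + rng.normal(0.0, 0.1)
    if np.log(rng.uniform()) < log_cond_c(c_prop, k) - log_cond_c(c, k):
        c = c_prop
    if it >= 500:                               # discard the burn-in period
        chain.append((c, k))
post = np.array(chain)
c_bayes, k_bayes = post.mean(axis=0)            # Bayes estimates under SE loss
```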
The posterior expectation of the LINEX loss function (52) can be expressed as follows:
Using the LINEX loss function, the Bayes estimate of is given by the following:
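Given MCMC draws from a marginal posterior, both point estimates are one-liners: the SE estimate is the posterior mean, and the LINEX estimate is minus (1/a) times the log of the posterior mean of exp(-a times the parameter). A small sketch, where the draws are a synthetic stand-in for an actual posterior sample:

```python
import numpy as np

rng = np.random.default_rng(0)
draws = rng.gamma(9.0, 0.25, size=5000)   # stand-in posterior sample, mean about 2.25

a = 1.5                                   # LINEX shape parameter (a > 0 here)
theta_se = draws.mean()                   # Bayes estimate under SE loss
theta_linex = -np.log(np.mean(np.exp(-a * draws))) / a   # Bayes estimate under LINEX loss
```

By Jensen's inequality the LINEX estimate with a > 0 sits below the posterior mean, reflecting the heavier penalty on overestimation.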
6. Applications
In this part, we examine a real dataset to illustrate how the suggested techniques operate in practical situations. The dataset was originally obtained from the National Climatic Data Center (NCDC) located in Asheville, North Carolina, USA. These data give the wind speed measured in knots for two samples: the first covering 23 days and the second covering 25 days. We calculated the average daily wind speeds for the city of Alexandria from 1 February 2017 to 23 February 2017 and from 1 February 2018 to 25 February 2018; the values are listed in Table 1 and Table 2, respectively:
Table 1.
For the first sample.
Table 2.
For the second sample.
We used the Kolmogorov–Smirnov (K-S) test to check whether the data fit the TPBXIID model. For the first sample, the K-S statistic for the TPBXIID was , which is smaller than the critical value of at a significance level of with , with a p-value of . Similarly, for the second sample, the K-S statistic for the TPBXIID was , which is smaller than the critical value of at a significance level of with , with a p-value of . Therefore, we can conclude that the TPBXIID fits both samples very well. We have also included Figure 2 and Figure 3 to show how well the empirical and fitted values match. Overall, the TPBXIID appears to be an excellent model for these data.
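The K-S distance itself is simple to compute by hand: sort the sample and take the largest gap between the empirical CDF and the fitted CDF. The sketch below uses placeholder parameter values (not the ones estimated from the wind-speed data) and a synthetic sample, purely to illustrate the computation against a fitted three-parameter Burr-XII CDF.

```python
import numpy as np

def burr_cdf(x, a, b, lam):
    """Three-parameter Burr-XII CDF, F(x) = 1 - [1 + (x/lam)^a]^(-b)."""
    return 1.0 - (1.0 + (x / lam) ** a) ** (-b)

def ks_statistic(sample, cdf):
    """D_n = sup_x |F_n(x) - F(x)| via the sorted-sample formula."""
    s = np.sort(sample)
    n = len(s)
    F = cdf(s)
    upper = np.arange(1, n + 1) / n - F    # empirical CDF steps above F
    lower = F - np.arange(0, n) / n        # F above the empirical CDF
    return max(upper.max(), lower.max())

rng = np.random.default_rng(5)
a, b, lam = 2.0, 1.5, 3.0                  # placeholder fitted parameters (assumed)
u = rng.uniform(size=200)
sample = lam * ((1.0 - u) ** (-1.0 / b) - 1.0) ** (1.0 / a)
D = ks_statistic(sample, lambda t: burr_cdf(t, a, b, lam))
```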
Figure 2.
Empirical and fitted survival functions for first sample.
Figure 3.
Empirical and fitted survival functions for second sample.
From the above datasets, we have generated the JP-II-C sample with the censoring scheme. Assume that for the first sample and for the second sample; by implementing JP-II-CS where denotes the total sample size and when , and , then .
The generated datasets are provided below.
Based on the above data, we calculate estimates using the MLEs and the bootstrap methods for and ; the results are shown in Table 3, and the ACIs, Boot-P CIs, Boot-T CIs, Boot-BC CIs and Boot-BCa CIs for and are given in Table 4, Table 5 and Table 6. For Bayesian estimation, we used the MCMC method with 11,000 samples, discarding the first 1000 values as 'burn-in'. We also used non-informative priors with hyperparameters and . We then obtained the Bayesian estimates for and under the SE and LINEX loss functions; the results are displayed in Table 3. Moreover, the CRIs for and are tabulated in Table 4, Table 5 and Table 6.
Table 3.
Different point estimates of .
Table 4.
The 95% CIs/CRIs for .
Table 5.
The 95% CIs/CRIs for .
Table 6.
The 95% CIs/CRIs for .
7. Simulation Study
In this section, we have conducted a comprehensive simulation study to assess and compare the performance of the various methods. Our investigation encompassed a range of sample sizes for the two populations, with m and n taking values of 10, 20, 30, 40, 50, and 60, and multiple choices of the JP-II-CS, specifically with r values of 5, 10, 15, 20, 30, 40, 50, 60, 70, and 80. For the simulations, we fixed the parameter values at , and calculated MLEs along with 95% CIs for these parameters across all the specified scenarios. This process was repeated 1000 times, and we calculated the mean values of the MLEs, the interval lengths, and the coverage probabilities (CP); the results are displayed in Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12. Additionally, we assumed informative gamma priors for , , , , , and when computing Bayes estimates under the SE and LINEX loss functions with hyperparameters and , . Based on 1000 simulations, we computed Bayes estimates of , , , , , and along with CRIs and the corresponding CP using the MCMC method with 10,000 samples while discarding the first 1000 values as 'burn-in'. We evaluated the performance of the resulting estimators in terms of the MSE, computed over the 1000 replications as the average squared deviation of each estimate from the true parameter value. The results are presented in Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12.
Table 7.
MSEs, lengths and CPs for parameter .
Table 8.
MSEs, lengths and CPs for parameter .
Table 9.
MSEs, lengths and CPs for parameter .
Table 10.
MSEs, lengths and CPs for parameter .
Table 11.
MSEs, lengths and CPs for parameter .
Table 12.
MSEs, lengths and CPs for parameter .
8. Conclusions
This study utilized the JP-II-CS to compare life tests of items from different units within a single facility. Point and interval estimates for the TPBXIID were generated using diverse methodologies, including maximum likelihood, Bayesian, and parametric bootstrap techniques. However, it should be noted that obtaining explicit MLEs for unknown parameters is not possible, so we used numerical techniques to compute them. Similarly, Bayes estimators are not available in closed form, so we used the MCMC method to compute them for the SE and LINEX loss functions. We tested these techniques on a real dataset and also conducted a simulation study to compare their performance for different sample sizes.
Based on our findings in Table 4, Table 5 and Table 6, we can conclude that Boot-T outperforms Boot-P, Boot-BC and Boot-BCa in the sense of having the smallest interval lengths. It is observed from Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12 that the Bayes estimates under LINEX with provide better estimates in the sense of having smaller MSEs, and that when and r increase, the MSEs and the lengths decrease. Additionally, in Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12, the MSEs and CPs of the MLEs are smaller than those of MCMC. Finally, we found that the performance of the Bayes estimates for the parameters and is better than that of the MLEs.
This study demonstrated that Bayesian estimators outperformed MLEs in the context of the JP-II-CS for parameter estimation. The Bayesian approach, employing SE and LINEX loss functions, yielded more accurate and precise estimates for the TPBXIID in life testing and reliability analysis. This improvement was attributed to Bayesian estimation’s ability to incorporate prior knowledge or beliefs about the parameters, enhancing accuracy and precision. Additionally, Bayesian estimation provided a complete posterior distribution of the parameters, offering a comprehensive view of estimation uncertainty and variability. The use of the MCMC method within Bayesian estimation efficiently explored the parameter space, capturing intricate parameter relationships.
In summary, Bayesian estimation stands out by incorporating prior information, enabling a deeper understanding of uncertainty, and efficiently handling small sample sizes and sparse data. When applied to the JP-II-CS with SE and LINEX loss functions, Bayesian estimation outperformed maximum likelihood estimation in terms of accuracy and precision, showcasing its practical advantages in life testing and reliability analysis.
Author Contributions
Conceptualization, R.M.E.-S.; Methodology, M.M.H., M.G.M.G. and R.M.E.-S.; Software, M.M.H. and R.M.E.-S.; Investigation, M.G.M.G., O.S.B. and M.E.B.; Data curation, O.S.B.; Writing—original draft, M.M.H.; Writing—review and editing, M.E.B.; Supervision, M.G.M.G. All authors have read and agreed to the published version of the manuscript.
Funding
This study was funded by Researchers Supporting Project number (RSPD2023R1004), King Saud University, Riyadh, Saudi Arabia.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
All datasets are reported within the article.
Acknowledgments
This study was funded by Researchers Supporting Project number (RSPD2023R1004), King Saud University, Riyadh, Saudi Arabia.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Burr, I.W. Cumulative frequency functions. Ann. Math. Stat. 1942, 13, 215–232. [Google Scholar] [CrossRef]
- Abbasi, B.; Hosseinifard, S.Z.; Coit, D.W. A neural network applied to estimate Burr-XII distribution parameters. Reliab. Eng. Syst. Saf. 2010, 95, 647–654. [Google Scholar] [CrossRef]
- Chen, Y.; Li, Y.; Zhao, T. Cause Analysis on Eastward Movement of Southwest China Vortex and Its Induced Heavy Rainfall in South China. Adv. Meteorol. 2015, 2, 1–22. [Google Scholar] [CrossRef]
- Ali Mousa, M.A.M.; Jaheen, Z.F. Statistical inference for the Burr model based on progressively censored data. Comput. Math. Appl. 2002, 43, 1441–1449. [Google Scholar] [CrossRef]
- Lio, Y.L.; Tsai, T.-R.; Aslam, M.; Jiang, N. Control charts for monitoring Burr type-X percentiles. Commun. Stat. Simul. Comput. 2014, 43, 761–776. [Google Scholar] [CrossRef]
- Shao, Q.; Wong, H.; Xia, J. Models for extremes using the extended three parameter Burr-XII system with application to flood frequency analysis. Hydrol. Sci. J. 2004, 49, 685–702. [Google Scholar]
- Cook, R.D.; Johnson, M.E. Generalized Burr-Pareto-Logistic distribution with application to a uranium exploration data set. Technometrics 1986, 28, 123–131. [Google Scholar] [CrossRef]
- Zimmer, W.J.J.; Keats, B.; Wang, F.K. The Burr-XII distribution in reliability analysis. J. Qual. Technol. 1998, 30, 386–394. [Google Scholar] [CrossRef]
- Tadikamalla, P.R. A look at the Burr and related distributions. Int. Stat. Rev. 1980, 48, 337–344. [Google Scholar] [CrossRef]
- Mead, M.E.; Afify, A.Z. On five parameter Burr-XII distribution: Properties and applications. S. Afr. Stat. J. 2017, 51, 67–80. [Google Scholar]
- Shafqat, A.; Aslam, M.; Albassam, M. Moving average control charts for Burr X and inverse Gaussian distributions. Oper. Res. Decis. 2020, 30, 81–94. [Google Scholar] [CrossRef]
- Aslam, M.; Usman, R.M.; Raqab, M.Z. A new generalized Burr XII distribution with real life applications. J. Stat. Manag. Syst. 2021, 24, 521–543. [Google Scholar] [CrossRef]
- Rasouli, A.; Balakrishnan, N. Exact likelihood inference for two exponential populations under joint progressive type-II censoring. Commun. Stat. Theory Methods 2010, 39, 2172–2191. [Google Scholar] [CrossRef]
- Doostparast, M.; Ahmadi, M.V.; Ahmadi, J. Bayes estimation based on joint progressive type-II censored data under LINEX loss function. Commun. Stat. Simul. Comput. 2013, 42, 1865–1886. [Google Scholar] [CrossRef]
- Balakrishnan, N.; Feng, S.; Kin-yat, L. Exact likelihood inference for k exponential populations under joint progressive type-II censoring. Commun. Stat. Simul. Comput. 2015, 44, 902–923. [Google Scholar] [CrossRef]
- Mondal, S.; Kundu, D. Point and interval estimation of Weibull parameters based on joint progressively censored data. Sankhya B 2019, 81, 1–25.
- Goel, R.; Krishna, H. Likelihood and Bayesian inference for k Lindley populations under joint type-II censoring scheme. Commun. Stat. Simul. Comput. 2021, 52, 3475–3490.
- Krishna, H.; Goel, R. Inferences for two Lindley populations based on joint progressive type-II censored data. Commun. Stat. Simul. Comput. 2020, 51, 4919–4936.
- Goel, R.; Krishna, H. Statistical inference for two Lindley populations under balanced joint progressive type-II censoring scheme. Comput. Stat. 2021, 37, 263–286.
- Bayoud, H.A.; Raqab, M.Z. Classical and Bayesian inferences for two Topp-Leone models under joint progressive Type-II censoring. Commun. Stat. Simul. Comput. 2022.
- Chen, Q.; Gui, W. Statistical inference of the generalized inverted exponential distribution under joint progressively type-II censoring. Entropy 2022, 24, 576.
- Pandey, R.; Srivastava, P. Bayesian inference for two log-logistic populations under joint progressive type II censoring schemes. Int. J. Syst. Assur. Eng. Manag. 2022, 13, 2981–2991.
- Qiao, Y.; Gui, W. Statistical inference of weighted exponential distribution under joint progressive type-II censoring. Symmetry 2022, 14, 2031.
- Hassan, A.S.; Elsherpieny, E.A.; Aghel, W.E. Statistical inference of the Burr Type III distribution under joint progressively Type-II censoring. Sci. Afr. 2023, 21, e01770.
- Kumar, K.; Kumari, A. Bayesian and likelihood estimation in two inverse Pareto populations under joint progressive censoring. J. Indian Soc. Probab. Stat. 2023.
- Cohen, A.C. Maximum likelihood estimation in the Weibull distribution based on complete and on censored samples. Technometrics 1965, 7, 579–588.
- Efron, B. The Jackknife, the Bootstrap and Other Resampling Plans; CBMS/NSF Regional Conference Series in Applied Mathematics; SIAM: Philadelphia, PA, USA, 1982; Volume 38.
- Hall, P. Theoretical comparison of bootstrap confidence intervals. Ann. Stat. 1988, 16, 927–953.
- DiCiccio, T.J.; Efron, B. Bootstrap confidence intervals. Stat. Sci. 1996, 11, 189–228.
- Ghazal, M.G.M.; Hasaballah, M.M. Exponentiated Rayleigh distribution: A Bayes study using MCMC approach based on unified hybrid censored data. J. Adv. Math. 2017, 12, 6863–6880.
- Efron, B.; Tibshirani, R.J. An Introduction to the Bootstrap; Chapman and Hall: London, UK, 1993.
- Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109.
- Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1092.
- Chen, M.-H.; Shao, Q.-M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92.
- Ghazal, M.G.M.; Hasaballah, M.M. Bayesian prediction based on unified hybrid censored data from the exponentiated Rayleigh distribution. J. Stat. Appl. Probab. Lett. 2018, 5, 103–118.
- Ghazal, M.G.M.; Hasaballah, M.M. Bayesian estimations using MCMC approach under exponentiated Rayleigh distribution based on unified hybrid censored scheme. J. Stat. Appl. Probab. 2017, 6, 329–344.
- Varian, H.R. A Bayesian approach to real estate assessment. In Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage; North-Holland: Amsterdam, The Netherlands, 1975; pp. 195–208.
- Zellner, A. Bayesian estimation and prediction using asymmetric loss functions. J. Am. Stat. Assoc. 1986, 81, 446–451.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).