Article

Frequentist and Bayesian Estimation Under Progressive Type-II Random Censoring for a Two-Parameter Exponential Distribution

Rajni Goel, Mahmoud M. Abdelwahab and Tejaswar Kamble
1 Department of Mathematics, School of Physical Sciences, DIT University, Dehradun 248009, India
2 Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
3 Department of Mathematics, Chandigarh University, Mohali 140301, Punjab, India
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(8), 1205; https://doi.org/10.3390/sym17081205
Submission received: 30 June 2025 / Revised: 19 July 2025 / Accepted: 21 July 2025 / Published: 29 July 2025
(This article belongs to the Section Mathematics)

Abstract

In medical research, random censoring often occurs due to unforeseen subject withdrawals, whereas progressive censoring is intentionally applied to minimize time and resource requirements during experimentation. This work focuses on estimating the parameters of a two-parameter exponential distribution under a progressive Type-II random censoring scheme, which integrates both censoring strategies. The use of symmetric properties in failure and censoring time models, arising from a shared location parameter, facilitates a balanced and robust inferential framework. This symmetry ensures interpretational clarity and enhances the tractability of both frequentist and Bayesian methods. Maximum likelihood estimators (MLEs) are obtained, along with asymptotic confidence intervals. A Bayesian approach is also introduced, utilizing inverse gamma priors, and Gibbs sampling is implemented to derive Bayesian estimates. The effectiveness of the proposed methodologies was assessed through extensive Monte Carlo simulations and demonstrated using an actual dataset.

1. Introduction

Life-testing experiments are fundamental to reliability engineering, medicine, and the biological sciences for assessing the lifetime characteristics of products or individuals. However, observing the complete failure time of all units in a study is often impractical due to constraints on time, cost, or other resources. To overcome these limitations, various censoring schemes have been developed, allowing for robust statistical inference from incomplete data.
Censoring arises in two broad categories: intentional and unintentional. Intentional schemes, such as Type-I (time-based) and Type-II (failure-based) censoring, are designed by the experimenter to optimize efficiency. Among these schemes, progressive censoring has gained significant popularity for its flexibility. In a progressive censoring scheme, surviving units are strategically removed from the test at various stages, typically at the time of observed failures. This approach enhances efficiency by reducing the number of items on test over time, while providing richer data than conventional censoring. Foundational work on this topic can be found in [1,2], with numerous subsequent contributions from authors such as  [3,4,5,6,7,8].
In contrast, random censoring is an unintentional phenomenon common in clinical trials, where subjects may withdraw from a study for reasons unrelated to the primary outcome. The concept was first introduced by [9] and has been extensively studied since, with notable works by [10,11,12].
Real-world experiments often feature a combination of both planned and unplanned censoring. To model such scenarios, this study considers a progressive Type-II randomly censored (PT-IIRC) scheme. This hybrid framework, which builds on related ideas from [13,14], provides a more realistic model for many practical applications.
The lifetime model under consideration is a two-parameter exponential distribution (TExpD), characterized by a location (threshold) parameter μ and a scale parameter α . We focus on the TExpD, due to its foundational role in reliability analysis and its mathematical properties, which allow for the clear development and illustration of the proposed inferential methods under a complex censoring scheme. The threshold parameter μ represents a guaranteed minimum lifetime. The TExpD has been widely applied in reliability analysis, with key contributions from [15,16,17,18]. A comprehensive overview is provided by [19]. While frequentist methods are well established, Bayesian approaches offer a powerful alternative for incorporating prior knowledge and handling uncertainty, especially with limited data. Seminal work in the Bayesian analysis of censored data can be found in sources like [12], who specifically addressed Bayesian analysis for exponential survival times. Our work builds on these foundations by developing a robust Bayesian framework within the more complex PT-IIRC scheme.
The primary objective of this paper was to develop and compare frequentist and Bayesian inferential methods for the parameters of the TExpD under the PT-IIRC scheme. The remainder of the article is organized as follows: Section 2 details the model formulation. Section 3 presents the MLEs and asymptotic confidence intervals. Section 4 outlines the Bayesian procedure. Section 5 provides the framework for the simulation study, and Section 6 presents a data analysis. Finally, Section 7 concludes the paper.

2. Model Formulation and Assumptions

Consider a life-testing experiment where n identical items are placed on test. The experiment is designed to terminate once a predetermined number of events, m (where m ≤ n), have been observed. Let X_1, X_2, …, X_n be the potential failure times of these items, assumed to be independent and identically distributed (i.i.d.) random variables. In addition to failure, each item is subject to an independent censoring time; the censoring times Y_1, Y_2, …, Y_n are also i.i.d. We assume that the sets {X_i} and {Y_i} are mutually independent.
The actual observed lifetime for item i is Z_i = min(X_i, Y_i). To distinguish between failure and censoring, we define an indicator variable d_i for the cause of the event:
d_i = 1 if X_i ≤ Y_i (item failed), and d_i = 0 if X_i > Y_i (item was censored).
The assumption that both failure times and censoring times follow two-parameter exponential distributions with a common location parameter μ , but different scale parameters α 1 and α 2 , introduces a form of symmetry into the modeling framework. The probability density function (PDF) and survival function (SF) for the failure times (X) and censoring times (Y) are given by
f_X(x; μ, α_1) = (1/α_1) exp{−(x − μ)/α_1},   S_X(x; μ, α_1) = exp{−(x − μ)/α_1},   x ≥ μ, α_1 > 0,   (1)
f_Y(y; μ, α_2) = (1/α_2) exp{−(y − μ)/α_2},   S_Y(y; μ, α_2) = exp{−(y − μ)/α_2},   y ≥ μ, α_2 > 0.   (2)
The progressive censoring scheme is defined by a vector of integers R = (R_1, R_2, …, R_m) such that n = m + Σ_{i=1}^m R_i. The experiment proceeds as follows: At the time of the first observed event, z_(1), R_1 surviving items are randomly selected and removed. At the time of the second event, z_(2), R_2 survivors are removed. This process continues until the m-th event at time z_(m), at which point all remaining R_m surviving items are removed and the experiment terminates.
The observed data consist of the m ordered event times, z = (z_(1), z_(2), …, z_(m)), and their corresponding causes, d = (d_1, d_2, …, d_m), where z_(1) < z_(2) < … < z_(m). Here, d_i is the realization of the indicator variable for the specific unit that failed or was censored at time z_(i).
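To make the data-generating mechanism concrete, the following minimal sketch simulates one PT-IIRC sample under the exponential model above. It is illustrative only: the function name `simulate_ptiirc`, the random selection of removed survivors, and the toy parameter values are our own choices, not part of the paper.

```python
import numpy as np

def simulate_ptiirc(n, m, R, mu, alpha1, alpha2, rng):
    """Simulate one progressive Type-II randomly censored (PT-IIRC) sample.

    Failure times X ~ Exp(mu, alpha1) and censoring times Y ~ Exp(mu, alpha2)
    share the location parameter mu, as in Section 2 (illustrative sketch).
    """
    assert n == m + sum(R), "scheme must satisfy n = m + sum(R)"
    x = mu + rng.exponential(alpha1, size=n)   # potential failure times
    y = mu + rng.exponential(alpha2, size=n)   # potential censoring times
    z = np.minimum(x, y)                       # observed lifetime per item
    d = (x <= y).astype(int)                   # 1 = failure, 0 = random censoring
    alive = list(range(n))                     # items still on test
    z_obs, d_obs = [], []
    for i in range(m):
        j = min(alive, key=lambda k: z[k])     # next observed event
        z_obs.append(z[j])
        d_obs.append(d[j])
        alive.remove(j)
        if R[i] > 0:                           # progressive removal of R_i survivors
            removed = set(rng.choice(alive, size=R[i], replace=False).tolist())
            alive = [k for k in alive if k not in removed]
    return np.array(z_obs), np.array(d_obs)

# toy run: n = 10, m = 8, evenly spread removals (1, 0, 0, 0, 1, 0, 0, 0)
rng = np.random.default_rng(2025)
z, d = simulate_ptiirc(10, 8, [1, 0, 0, 0] * 2, mu=2.0, alpha1=1.5, alpha2=2.0, rng=rng)
print(z, d)
```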

3. Maximum Likelihood Estimation

The likelihood function for progressively censored data is constructed based on the joint probability density function of the observed failure times. The contribution of each item to the likelihood depends on whether it is an observed failure, a random censoring event, or a progressively removed unit. Specifically, for an item i that fails at time z ( i ) , its contribution is the joint density of failure at z ( i ) and the other process surviving past z ( i ) , i.e., f X ( z ( i ) ) S Y ( z ( i ) ) . For an item that is randomly censored, its contribution is f Y ( z ( i ) ) S X ( z ( i ) ) . For each of the R i units removed at time z ( i ) , their contribution is the probability of surviving past z ( i ) , which is S X ( z ( i ) ) S Y ( z ( i ) ) . The complete likelihood is the product of these terms. For a general framework, see [2].
The likelihood function for the PT-IIRC data is given by
L(μ, α_1, α_2 | z, d, R) = C ∏_{i=1}^m [f_X(z_(i)) S_Y(z_(i))]^{d_i} [f_Y(z_(i)) S_X(z_(i))]^{1−d_i} [S_X(z_(i)) S_Y(z_(i))]^{R_i},   (3)
where C is a constant. Let m_1 = Σ_{i=1}^m d_i be the number of failures and m_0 = m − m_1. Using Equations (1) and (2), the likelihood function in Equation (3) simplifies to
L(μ, α_1, α_2) ∝ α_1^{−m_1} α_2^{−m_0} exp{ −(1/α_1 + 1/α_2) Σ_{i=1}^m (z_(i) − μ)(1 + R_i) } · I(z_(1) ≥ μ),
where I(·) is the indicator function. The likelihood is an increasing function of μ on the constrained parameter space μ ≤ z_(1), so it is maximized at the boundary, giving the MLE μ̂ = z_(1). Substituting μ̂ gives the profile log-likelihood
ℓ(α_1, α_2) = const − m_1 log α_1 − m_0 log α_2 − (1/α_1 + 1/α_2) T,
where T = Σ_{i=1}^m (z_(i) − z_(1))(1 + R_i). The MLEs for the scale parameters are
α̂_1 = T/m_1   and   α̂_2 = T/m_0.
These estimators are valid for 0 < m 1 < m .
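The closed-form estimators above are straightforward to code. The snippet below is a minimal sketch; the helper name `mle_texpd_ptiirc` and the toy data are ours, purely for illustration.

```python
import numpy as np

def mle_texpd_ptiirc(z, d, R):
    """MLEs of (mu, alpha1, alpha2) from a PT-IIRC sample (Section 3 formulas).

    z : ordered event times, d : failure indicators, R : removal counts.
    Requires 0 < m1 < m, i.e., both failures and censoring events observed.
    """
    z, d, R = map(np.asarray, (z, d, R))
    m1 = int(d.sum())                    # number of observed failures
    m0 = len(z) - m1                     # number of randomly censored events
    mu_hat = z[0]                        # MLE of the location parameter: z_(1)
    T = np.sum((z - mu_hat) * (1 + R))   # T = sum (z_(i) - z_(1)) (1 + R_i)
    return mu_hat, T / m1, T / m0

# toy data (made up for illustration only)
z = [2.1, 2.4, 2.9, 3.5, 4.2]
d = [1, 0, 1, 1, 0]
R = [0, 1, 0, 0, 1]
print(mle_texpd_ptiirc(z, d, R))
```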

Asymptotic Confidence Intervals

To quantify the uncertainty associated with the maximum likelihood estimates, we construct confidence intervals for the unknown parameters. For the scale parameters α_1 and α_2, we can rely on the large-sample properties of MLEs. However, constructing a confidence interval for the location parameter μ requires a different approach, as the MLE μ̂ = z_(1) is an order statistic and does not satisfy the standard regularity conditions for asymptotic normality. Under the standard regularity conditions detailed in texts such as [20], the distribution of the MLE vector θ̂ = (α̂_1, α̂_2)ᵀ can be approximated by a multivariate normal distribution. A (1 − γ)100% asymptotic confidence interval (ACI) for α_1 and α_2 is given by
ACI for α_1:   α̂_1 ± φ_{1−γ/2} · α̂_1/√m_1,
ACI for α_2:   α̂_2 ± φ_{1−γ/2} · α̂_2/√m_0,
where φ_{1−γ/2} is the (1 − γ/2)-th quantile of the standard normal distribution.
For the location parameter μ, standard large-sample normality does not apply to μ̂ = z_(1); instead, we use an exact pivotal quantity.
Let Z = min(X, Y). It follows that Z ~ Exp(μ, α*), where α* = (α_1^{−1} + α_2^{−1})^{−1}. For a progressive Type-II censored sample, it is a well-known result that the following two statistics are independent [1]:
(i) 2n(μ̂ − μ)/α* ~ χ²_(2);
(ii) 2T/α* ~ χ²_(2(m−1)), where T = Σ_{i=1}^m (z_(i) − μ̂)(1 + R_i).
Here, n = m + Σ_{i=1}^m R_i is the total initial sample size, and χ²_(k) denotes a chi-squared distribution with k degrees of freedom. Taking the ratio of these two independent chi-squared variables, each divided by its degrees of freedom, forms a pivotal quantity that follows an F-distribution:
F = [2n(μ̂ − μ)/α* ÷ 2] / [2T/α* ÷ 2(m − 1)] = n(m − 1)(μ̂ − μ)/T ~ F_{2, 2(m−1)},
where F_{2, 2(m−1)} denotes an F-distribution with 2 and 2(m − 1) degrees of freedom. A (1 − γ)100% confidence interval for μ can be derived from the probability statement
P( F_{γ/2, 2, 2(m−1)} ≤ n(m − 1)(μ̂ − μ)/T ≤ F_{1−γ/2, 2, 2(m−1)} ) = 1 − γ,
where F_{q, ν_1, ν_2} is the q-th quantile of the F-distribution. Rearranging the inequality yields the exact (1 − γ)100% confidence interval
( μ̂ − T F_{1−γ/2, 2, 2(m−1)} / (n(m − 1)),   μ̂ − T F_{γ/2, 2, 2(m−1)} / (n(m − 1)) ).
Note that this interval is valid for m > 1 .
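The following sketch computes both interval types, using scipy for the normal and F quantiles. The routine name `interval_estimates`, its argument layout, and the toy data are illustrative assumptions on our part.

```python
import numpy as np
from scipy.stats import norm, f

def interval_estimates(z, d, R, gamma=0.05):
    """Normal-approximation ACIs for alpha1, alpha2 and the exact F-based
    interval for mu, following the expressions in this subsection (sketch)."""
    z, d, R = map(np.asarray, (z, d, R))
    m = len(z)
    m1 = int(d.sum())
    m0 = m - m1
    n = m + int(R.sum())
    mu_hat = z[0]
    T = np.sum((z - mu_hat) * (1 + R))
    a1_hat, a2_hat = T / m1, T / m0

    q = norm.ppf(1 - gamma / 2)                       # standard normal quantile
    aci_a1 = (a1_hat - q * a1_hat / np.sqrt(m1), a1_hat + q * a1_hat / np.sqrt(m1))
    aci_a2 = (a2_hat - q * a2_hat / np.sqrt(m0), a2_hat + q * a2_hat / np.sqrt(m0))

    # exact interval for mu from the F(2, 2(m-1)) pivot n(m-1)(mu_hat - mu)/T
    f_lo = f.ppf(gamma / 2, 2, 2 * (m - 1))
    f_hi = f.ppf(1 - gamma / 2, 2, 2 * (m - 1))
    ci_mu = (mu_hat - T * f_hi / (n * (m - 1)),
             mu_hat - T * f_lo / (n * (m - 1)))
    return ci_mu, aci_a1, aci_a2

# toy data (for illustration only)
print(interval_estimates([2.1, 2.4, 2.9, 3.5, 4.2], [1, 0, 1, 1, 0], [0, 1, 0, 0, 1]))
```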

4. Bayesian Estimation

In this section, we develop the Bayesian inference for the unknown parameters μ , α 1 , and α 2 . We derive the Bayes estimates under squared error loss (SELF) and construct credible intervals using a direct Markov Chain Monte Carlo (MCMC) sampling procedure.
We assume that μ, α_1, and α_2 are a priori independent, with conjugate Inverse-Gamma priors on the scale parameters and a non-informative uniform prior on the location parameter:
p(α_1) ∝ (1/α_1)^{a_1+1} exp(−b_1/α_1),   α_1 > 0   (i.e., Inv-Gamma(a_1, b_1)),
p(α_2) ∝ (1/α_2)^{a_2+1} exp(−b_2/α_2),   α_2 > 0   (i.e., Inv-Gamma(a_2, b_2)),
p(μ) ∝ 1,   −∞ < μ < ∞,
where ( a 1 , b 1 , a 2 , b 2 ) are known hyperparameters. We choose non-informative priors to minimize the influence of prior beliefs on the posterior distribution, ensuring the inference is driven primarily by the data. This approach is standard when an objective Bayesian analysis is desired and provides a fair comparison with frequentist results.
The joint posterior distribution is obtained by multiplying the likelihood function by the priors
π(μ, α_1, α_2 | data) ∝ (1/α_1)^{m_1+a_1+1} exp{−(b_1 + T(μ))/α_1} · (1/α_2)^{m_0+a_2+1} exp{−(b_2 + T(μ))/α_2} · I(μ ≤ z_(1)),   (13)
where m_1 = Σ_{i=1}^m d_i, m_0 = m − m_1, and T(μ) = Σ_{i=1}^m (z_(i) − μ)(1 + R_i).
To implement a direct sampling MCMC scheme, we need the marginal posterior distribution of μ and the conditional posterior distributions of α 1 and α 2 given μ .
From the joint posterior (13), if we condition on a given value of μ , the distributions for α 1 and α 2 are independent Inverse-Gamma distributions:
π(α_1 | μ, z, d) ~ Inv-Gamma(m_1 + a_1, b_1 + T(μ)),   (14)
π(α_2 | μ, z, d) ~ Inv-Gamma(m_0 + a_2, b_2 + T(μ)).   (15)
To obtain the marginal posterior distribution of μ , we integrate the joint posterior with respect to α 1 and α 2
π(μ | data) = ∫_0^∞ ∫_0^∞ π(μ, α_1, α_2 | data) dα_1 dα_2.
This integration yields
π(μ | data) ∝ [Γ(m_1 + a_1) / (b_1 + T(μ))^{m_1 + a_1}] × [Γ(m_0 + a_2) / (b_2 + T(μ))^{m_0 + a_2}] × I(μ ≤ z_(1)).
Since the Gamma function terms do not depend on μ , we can write the kernel as
π(μ | data) ∝ 1 / [(b_1 + T(μ))^{m_1 + a_1} (b_2 + T(μ))^{m_0 + a_2}] × I(μ ≤ z_(1)).
Let N_prog = Σ_{i=1}^m (1 + R_i) and S_z = Σ_{i=1}^m z_(i)(1 + R_i). Then, T(μ) = S_z − μ N_prog. The cumulative distribution function (CDF) of μ, needed for sampling, is
F(μ) = P(μ̃ ≤ μ | data) = [ ∫_{−∞}^{μ} π(u | data) du ] / [ ∫_{−∞}^{z_(1)} π(u | data) du ],   for μ ≤ z_(1).   (17)
This integral does not have a simple closed form and must be computed numerically. We can then sample from this distribution using the inverse CDF method.
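Because this CDF has no closed form, one practical route is to evaluate the kernel on a grid, integrate it numerically, and invert the resulting CDF by interpolation. The sketch below follows that idea; the grid construction, the truncation of the left tail, and the function name `mu_posterior_sampler` are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np

def mu_posterior_sampler(z, d, R, a1, b1, a2, b2, n_grid=2000):
    """Grid-based evaluation of the marginal posterior of mu and an
    inverse-CDF sampler for it (Equation (17)); a numerical sketch."""
    z, d, R = map(np.asarray, (z, d, R))
    m = len(z)
    m1 = int(d.sum())
    m0 = m - m1
    N_prog = np.sum(1 + R)                 # N_prog = sum (1 + R_i)
    S_z = np.sum(z * (1 + R))              # S_z = sum z_(i) (1 + R_i)
    z1 = z[0]

    def log_kernel(mu):
        T = S_z - mu * N_prog              # T(mu) = S_z - mu * N_prog
        return -(m1 + a1) * np.log(b1 + T) - (m0 + a2) * np.log(b2 + T)

    # truncate the left tail at a pragmatic distance below z_(1)
    scale = S_z / N_prog - z1 + 1.0
    grid = np.linspace(z1 - 10.0 * scale, z1, n_grid)
    lk = log_kernel(grid)
    w = np.exp(lk - lk.max())              # unnormalised posterior on the grid
    cdf = np.concatenate(([0.0], np.cumsum((w[1:] + w[:-1]) / 2 * np.diff(grid))))
    cdf /= cdf[-1]                         # normalise so that F(z_(1)) = 1

    def sample(u):
        """Return mu* solving F(mu*) = u by linear interpolation (Step 1)."""
        return float(np.interp(u, cdf, grid))

    return sample
```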

4.1. Gibbs Sampling Algorithm

We use the following MCMC algorithm, a form of Gibbs sampling [21], to generate samples from the posterior distributions.
Step 1:
Generate from the Marginal of μ
(a)
Generate a random number u from a Uniform(0,1) distribution.
(b)
Find the value μ * that solves the equation F ( μ * ) = u , where F ( · ) is the CDF from Equation (17). This requires a numerical root-finding method (e.g., bisection or Newton–Raphson) as the integral is computed numerically. This μ * is one sample from the marginal posterior of μ .
Step 2:
Generate from the Conditionals of α 1 and α 2 . Given the sampled value μ * from Step 1,
(a)
Generate a random sample α 1 * from the Inverse-Gamma distribution in (14)
α_1* ~ Inv-Gamma(m_1 + a_1, b_1 + T(μ*)).
(b)
Generate a random sample α 2 * from the Inverse-Gamma distribution in (15)
α_2* ~ Inv-Gamma(m_0 + a_2, b_2 + T(μ*)).
Step 3:
Repeat for MCMC samples. Repeat Steps 1 and 2 for j = 1, 2, …, N_iter to obtain a set of samples {(μ^(j), α_1^(j), α_2^(j))}.
Step 4:
Bayes Estimation. After discarding an initial burn-in period of B samples, the Bayes estimates of the parameters under SELF are obtained by averaging the remaining N i t e r B samples
α̂_1,Bayes = (1/(N_iter − B)) Σ_{j=B+1}^{N_iter} α_1^(j),
α̂_2,Bayes = (1/(N_iter − B)) Σ_{j=B+1}^{N_iter} α_2^(j),
μ̂_Bayes = (1/(N_iter − B)) Σ_{j=B+1}^{N_iter} μ^(j).
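Putting Steps 1–4 together, a compact sampler might look like the following sketch. It reuses the `mu_posterior_sampler` routine sketched after Equation (17), draws the Inverse-Gamma variates via the reciprocal-of-Gamma identity, and uses default settings (20,000 iterations, 2000 burn-in) matching the simulation section; the function name and these choices are illustrative, not the authors' code.

```python
import numpy as np

def gibbs_ptiirc(z, d, R, a1=0.001, b1=0.001, a2=0.001, b2=0.001,
                 n_iter=20000, burn=2000, seed=1):
    """Steps 1-4 of Section 4.1 as a direct sampling loop (sketch).

    Uses `mu_posterior_sampler` from the sketch after Equation (17); the
    Inverse-Gamma draws use 1 / Gamma(a, scale=1/b) ~ Inv-Gamma(a, b).
    """
    rng = np.random.default_rng(seed)
    z, d, R = map(np.asarray, (z, d, R))
    m1 = int(d.sum())
    m0 = len(z) - m1
    N_prog = np.sum(1 + R)
    S_z = np.sum(z * (1 + R))
    sample_mu = mu_posterior_sampler(z, d, R, a1, b1, a2, b2)

    draws = np.empty((n_iter, 3))
    for j in range(n_iter):
        mu = sample_mu(rng.uniform())                      # Step 1
        T = S_z - mu * N_prog
        al1 = 1.0 / rng.gamma(m1 + a1, 1.0 / (b1 + T))     # Step 2(a)
        al2 = 1.0 / rng.gamma(m0 + a2, 1.0 / (b2 + T))     # Step 2(b)
        draws[j] = (mu, al1, al2)                          # Step 3

    post = draws[burn:]                                    # Step 4: discard burn-in
    mu_hat, a1_hat, a2_hat = post.mean(axis=0)             # Bayes estimates under SELF
    return (mu_hat, a1_hat, a2_hat), post
```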

4.2. Highest Posterior Density (HPD) Credible Intervals

A 100(1 − γ)% Highest Posterior Density (HPD) credible interval is the shortest possible interval containing (1 − γ) of the posterior probability.
We estimate the HPD intervals from the MCMC samples using the algorithm proposed by [22]. For a set of N post-burn-in MCMC samples for a generic parameter θ , the procedure is as follows:
  • Sort the MCMC samples to obtain the ordered values: θ ( 1 ) θ ( 2 ) θ ( N ) .
  • Identify all possible candidate intervals of the form I_j = [θ_(j), θ_(j+k−1)] for j = 1, …, N − k + 1, where k = ⌈(1 − γ)N⌉.
  • The HPD interval is the specific interval I j * which has the minimum length. This is found by minimizing the difference ( θ ( j + k 1 ) θ ( j ) ) over all possible values of j.
This process is applied independently to the MCMC sample sets for μ , α 1 , and α 2 to find their respective HPD credible intervals.
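A direct implementation of this search takes only a few lines. The sketch below assumes the post-burn-in draws for a single parameter are held in a one-dimensional array; the function name is ours.

```python
import numpy as np

def hpd_interval(samples, gamma=0.05):
    """Shortest interval containing a fraction (1 - gamma) of the sorted
    MCMC draws, in the spirit of [22] (sketch)."""
    s = np.sort(np.asarray(samples))
    N = len(s)
    k = int(np.ceil((1 - gamma) * N))     # number of draws inside each candidate
    widths = s[k - 1:] - s[:N - k + 1]    # length of [theta_(j), theta_(j+k-1)]
    j = int(np.argmin(widths))            # index of the shortest candidate
    return s[j], s[j + k - 1]
```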

5. Simulation Study

To evaluate the finite-sample performance of the proposed estimation procedures, a comprehensive Monte Carlo simulation study was conducted. The true parameter values were fixed at μ = 2.0, α_1 = 1.5, and α_2 = 2.0. Various combinations of sample sizes (n) and numbers of observed failures (m) were considered under different progressive Type-II censoring schemes (R). These schemes were designed to represent diverse scenarios, including complete data (no censoring), heavy early-life censoring (e.g., R = (10, 0*3) = (10, 0, 0, 0), where 0*k denotes k zeros), and more evenly distributed censoring patterns (e.g., R = ((1, 0*3)*2) = (1, 0, 0, 0, 1, 0, 0, 0)). For computing the Bayes estimates, the hyperparameters a_1, a_2, b_1, and b_2 were all set equal to 0.001. This choice of small, positive values is a common technique to approximate a vague or diffuse prior distribution, reflecting a lack of specific prior knowledge about the parameters. Each simulation scenario was replicated 10,000 times using the statistical software R version 4.4.2. The performance of the estimators was assessed based on four key metrics: Average Estimate (AE), Mean Squared Error (MSE), Average Interval Length (AL), and Coverage Probability (CP). For the Bayesian approach, inference was conducted via Gibbs sampling with 20,000 iterations, where the initial 2000 iterations were discarded as burn-in to ensure convergence. The detailed results for point and interval estimation are presented in Table 1 and Table 2, respectively.
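For reference, the four summary metrics can be computed from the replication output as in the short sketch below; the function name and array layout are assumptions on our part, not the authors' simulation code.

```python
import numpy as np

def summarize_replications(estimates, intervals, true_value):
    """Monte Carlo summaries reported in Tables 1 and 2: average estimate (AE),
    mean squared error (MSE), average interval length (AL), and coverage
    probability (CP). `estimates` holds one point estimate per replication;
    `intervals` is an (n_rep, 2) array of lower/upper endpoints."""
    estimates = np.asarray(estimates, dtype=float)
    lo, hi = np.asarray(intervals, dtype=float).T
    ae = estimates.mean()
    mse = np.mean((estimates - true_value) ** 2)
    al = np.mean(hi - lo)
    cp = np.mean((lo <= true_value) & (true_value <= hi))
    return ae, mse, al, cp
```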
From Table 1 and Table 2, several key observations emerge:
  • As expected, the estimation accuracy for all parameters improved as the number of observed failures (m) increased, leading to lower MSEs and shorter interval lengths.
  • For a fixed n and m, censoring schemes with more evenly distributed removals consistently outperformed schemes with heavy early-life censoring, resulting in lower estimation errors.
  • The Bayesian approach generally provided more accurate point estimates, yielding consistently lower MSEs than the MLE method, particularly for the scale parameters ( α 1 , α 2 ) under censored conditions.
  • The Bayesian method was substantially more reliable for interval estimation. Its Credible Intervals (CRIs) maintain coverage probabilities close to the nominal 0.95 level, whereas the MLE-based Asymptotic Confidence Intervals (ACIs) often exhibited significant undercoverage, especially in scenarios with heavy censoring.
  • The location parameter μ was estimated with significantly higher precision (lower MSE) than the scale parameters α 1 and α 2 across all scenarios.
In summary, the simulation results suggest that the Bayesian framework is the more robust and preferable choice for inference, providing more accurate point estimates and more reliable interval estimates under the conditions studied.

6. Real Data Analysis

To illustrate the practical application of the proposed estimation methods, we analyze a well-known dataset from the medical literature. The data, originally presented by [23], pertain to the survival times (in months) of patients with Dukes’ C colorectal cancer. Because the original study did not follow a progressive censoring protocol, we adapted the data for illustrative purposes by imposing a censoring scheme, a common practice for demonstrating new methodologies. The dataset is treated as progressive Type-II random censored and consists of the following observations:
z = ( 3 , 6 , 6.01 , 6.02 , 6.03 , 8.01 , 12 , 12.01 , 16 , 18 , 18.01 , 20 , 22 , 28 , 28.01 , 28.02 , 30.01 , 30.1 , 33 , 42 ) , d = ( 0 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 0 , 0 , 0 , 1 , 0 , 1 , 0 , 0 , 1 , 0 , 0 , 1 ) , R = ( 0 , 0 , 0 , 0 , 1 , 0 , 0 , 2 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) .
Here, d i = 1 indicates a true failure (death), while d i = 0 indicates the observation was censored.
Before proceeding with parameter estimation, we assessed the goodness-of-fit of the proposed model to this dataset. The Kolmogorov–Smirnov (K-S) test, performed using R software, yielded a test statistic of D = 0.1321 and a p-value of 0.8218. Since the p-value was significantly greater than the conventional significance level of 0.05, we did not reject the null hypothesis, suggesting that the model provided an adequate fit for the data.
This conclusion is further supported by graphical diagnostics. Figure 1 shows a close alignment between the empirical and theoretical cumulative distribution functions (CDFs). Furthermore, the P-P plot (Figure 2) and Q-Q plot (Figure 3) both exhibit points lying close to the 45-degree reference line, reinforcing the conclusion that the model fit the data well.
Having established the model’s suitability, we computed both frequentist and Bayesian estimates for the parameters. For the Bayesian analysis, non-informative priors were used by setting the hyperparameters a_1 = a_2 = b_1 = b_2 = 0.0001. The Maximum Likelihood Estimates (MLEs) with their 95% ACIs, alongside the Bayes estimates with their 95% credible intervals (CRIs), are summarized in Table 3.
To validate the Bayesian results, the convergence of the MCMC sampler was rigorously assessed, with diagnostic plots provided in Figure 4. The trace plots for all three parameters ( μ , α 1 , and α 2 ) exhibit a stationary behavior, fluctuating rapidly around a stable central tendency with no discernible trends, which is strong evidence of successful convergence to their stationary distributions. Furthermore, we calculated the Effective Sample Size (ESS) for each parameter, and all values were well over 1000, indicating low autocorrelation in the chains and efficient estimation. The posterior density plots further reinforce this conclusion. The density for μ is appropriately bounded by the first failure time, while the densities for α 1 and α 2 are unimodal and well defined. This excellent MCMC performance ensures that the resulting Bayes estimates and credible intervals are reliable. While the MLE point estimates for μ and α 2 appear closer to the Bayesian estimates in this specific dataset, our comprehensive simulation study (Table 1 and Table 2) provided more rigorous evidence of the Bayesian method’s overall superiority. The simulations showed that Bayesian estimators consistently yielded a lower Mean Squared Error and more reliable interval coverage across a wide range of censoring scenarios.

7. Conclusions

In this study, we successfully developed and compared frequentist and Bayesian inferential methods for the two-parameter exponential distribution under a complex progressive random censoring scheme. Our findings, from both extensive simulations and a real-data application, consistently highlight the advantages of the Bayesian approach.
The Bayesian estimators demonstrated lower mean squared errors and produced credible intervals whose coverage probabilities were far more reliable than those of their frequentist counterparts, which often failed to achieve the nominal level, especially under heavy censoring. The analysis of a real-world medical dataset further validated our methodology and confirmed the stability and convergence of the proposed MCMC algorithm.
Therefore, for practitioners working with similarly complex censored data, the Bayesian framework presented here offers a more robust and accurate tool for analysis. The inherent symmetry in the model, stemming from shared structural assumptions for failure and censoring times, not only enhances inferential clarity but also contributes to computational efficiency. This symmetry-driven modeling approach provides a promising direction for further extensions.

Author Contributions

Methodology, R.G.; Software, M.M.A.; Formal analysis, T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2502).

Data Availability Statement

All datasets are reported within the article.

Acknowledgments

The authors gratefully acknowledge the support provided by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) through the Research Group project IMSIU-DDRSP2502.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Springer Science & Business Media: Berlin, Germany, 2000.
  2. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring; Springer: Berlin/Heidelberg, Germany, 2014; Volume 138.
  3. Abu-Moussa, M.H.; Alsadat, N.; Sharawy, A. On estimation of reliability functions for the extended Rayleigh distribution under progressive first-failure censoring model. Axioms 2023, 12, 680.
  4. Fayomi, A.; Almetwally, E.M.; Qura, M.E. Bivariate length-biased exponential distribution under progressive Type-II censoring: Incorporating random removal and applications to industrial and computer science data. Axioms 2024, 13, 664.
  5. Gao, X.; Gui, W. Statistical inference of Burr–Hatke exponential distribution with partially accelerated life test under progressively Type-II censoring. Mathematics 2023, 11, 2939.
  6. Laumen, B.; Cramer, E. Progressive censoring with fixed censoring times. Statistics 2019, 53, 569–600.
  7. Lee, K. Inference for parameters of exponential distribution under combined Type-II progressive hybrid censoring scheme. Mathematics 2024, 12, 820.
  8. Tian, Y.; Liang, Y.; Gui, W. Inference and optimal censoring scheme for a competing-risks model with Type-II progressive censoring. Math. Popul. Stud. 2024, 31, 1–39.
  9. Gilbert, J.P. Random Censorship. Ph.D. Thesis, University of Chicago, Department of Statistics, Chicago, IL, USA, 1962.
  10. Hasaballah, M.M.; Balogun, O.S.; Bakr, M.E. On a randomly censoring scheme for generalized logistic distribution with applications. Symmetry 2024, 16, 1240.
  11. Krishna, H.; Goel, R. Inferences based on correlated randomly censored Gumbel's Type-I bivariate exponential distribution. Ann. Data Sci. 2024, 11, 1185–1207.
  12. Saleem, M.; Raza, A. On Bayesian analysis of the exponential survival time assuming the exponential censor time. Pak. J. Sci. 2011, 63, 44–48.
  13. Goel, R.; Krishna, H. Progressive Type-II random censoring scheme with Lindley failure and censoring time distributions. Int. J. Agric. Stat. Sci. 2020, 16, 23–34.
  14. Goel, R.; Kumar, K.; Ng, H.K.T.; Kumar, I. Statistical inference in Burr Type-XII lifetime model based on progressive randomly censored data. Qual. Eng. 2024, 36, 150–165.
  15. Beg, M. On the estimation of Pr(Y < X) for the two-parameter exponential distribution. Metrika 1980, 27, 29–34.
  16. Cheng, C.; Chen, J.; Bai, J. Exact inferences of the two-parameter exponential distribution and Pareto distribution with censored data. J. Appl. Stat. 2013, 40, 1464–1479.
  17. Ganguly, A.; Mitra, S.; Samanta, D.; Kundu, D. Exact inference for the two-parameter exponential distribution under Type-II hybrid censoring. J. Stat. Plan. Inference 2012, 142, 613–625.
  18. Krishna, H.; Goel, N. Classical and Bayesian inference in two parameter exponential distribution with randomly censored data. Comput. Stat. 2018, 33, 249–275.
  19. Tomy, L.; Jose, M.; Veena, G. A review on recent generalizations of exponential distribution. Biom. Biostat. Int. J. 2020, 9, 152–156.
  20. Lehmann, E.L.; Casella, G. Theory of Point Estimation; Springer Science & Business Media: Berlin, Germany, 1998.
  21. Gelfand, A.E.; Smith, A.F.M. Sampling-based approaches to calculating marginal densities. J. Am. Stat. Assoc. 1990, 85, 398–409.
  22. Chen, M.-H.; Shao, Q.-M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92.
  23. McIllmurray, M.; Turkie, W. Controlled trial of gamma linolenic acid in Dukes' C colorectal cancer. Br. Med. J. (Clin. Res. Ed.) 1987, 294, 1260.
Figure 1. Empirical CDF vs. theoretical CDF for the colorectal cancer dataset.
Figure 2. P-P plot comparing empirical and theoretical probabilities.
Figure 3. Q-Q plot comparing empirical and theoretical quantiles.
Figure 4. MCMC convergence diagnostics: trace plots (left) and posterior density curves (right) for parameters μ, α_1, and α_2.
Table 1. Performance of point estimators for different combinations of m, n, and R.

n    m    R (Scheme)        Parameter   MLE AE   MLE MSE   Bayes AE   Bayes MSE
50   50   (0*50)            μ           2.0175   0.0006    2.0000     0.0003
                            α_1         1.4906   0.0802    1.5187     0.0793
                            α_2         2.0083   0.2056    2.0568     0.1788
50   40   ((1, 0*3)*10)     μ           2.0178   0.0006    2.0001     0.0003
                            α_1         1.5044   0.1050    1.5180     0.0417
                            α_2         2.0298   0.2745    2.0032     0.1117
50   40   (10, 0*39)        μ           2.0219   0.0009    2.0041     0.0005
                            α_1         1.5059   0.1146    1.5196     0.1036
                            α_2         2.0414   0.2849    2.0368     0.1302
50   30   (20, 0*29)        μ           2.0287   0.0016    2.0110     0.0009
                            α_1         1.4890   0.1458    1.5442     0.2126
                            α_2         2.0172   0.3301    2.1859     0.2657
30   30   (0*30)            μ           2.0292   0.0018    1.9998     0.0010
                            α_1         1.4877   0.1481    1.5433     0.1150
                            α_2         2.0242   0.4925    2.0024     0.2263
50   25   (25, 0*24)        μ           2.0323   0.0021    2.0146     0.0013
                            α_1         1.4717   0.1755    1.5610     0.1714
                            α_2         2.0060   0.4541    2.0442     0.3850
50   25   (1*25)            μ           2.0185   0.0007    2.0006     0.0003
                            α_1         1.4804   0.1814    1.5698     0.1823
                            α_2         2.0457   0.5503    2.0936     0.6920
30   25   (5, 0*24)         μ           2.0351   0.0023    2.0052     0.0011
                            α_1         1.4922   0.2034    1.4845     0.1216
                            α_2         2.0673   0.5523    2.0234     0.1584
50   20   (30, 0*19)        μ           2.0428   0.0037    2.0248     0.0025
                            α_1         1.4918   0.2485    1.5453     0.1552
                            α_2         2.0270   0.7585    2.0994     0.0707
30   20   (10, 0*19)        μ           2.0433   0.0038    2.0133     0.0021
                            α_1         1.4940   0.2458    1.7476     0.1474
                            α_2         2.0255   0.7988    2.0030     0.1822
30   20   ((1, 0)*10)       μ           2.0289   0.0017    1.9985     0.0010
                            α_1         1.4974   0.2256    1.5490     0.1131
                            α_2         2.0796   0.8337    2.1071     0.1511
30   15   (15, 0*14)        μ           2.0556   0.0066    2.0246     0.0042
                            α_1         1.5006   0.3285    1.6686     0.2739
                            α_2         2.1015   1.3137    2.0644     0.6006
Table 2. Performance of confidence and credible intervals for different combinations of m, n, and R.

n    m    R (Scheme)        Parameter   ACI AL   ACI CP   CRI AL   CRI CP
50   50   (0*50)            μ           0.0651   0.947    0.0651   0.946
                            α_1         1.1086   0.926    1.1180   0.955
                            α_2         1.7430   0.923    1.6637   0.954
50   40   ((1, 0*3)*10)     μ           0.0663   0.959    0.0662   0.958
                            α_1         1.2569   0.930    1.1165   0.951
                            α_2         1.9826   0.934    1.3082   0.944
50   40   (10, 0*39)        μ           0.0664   0.933    0.0665   0.933
                            α_1         1.2583   0.922    1.2186   0.934
                            α_2         1.9998   0.925    1.3298   0.946
50   30   (20, 0*29)        μ           0.0669   0.897    0.0669   0.897
                            α_1         1.4453   0.921    1.7013   0.946
                            α_2         2.2962   0.916    2.1316   0.959
30   30   (0*30)            μ           0.1111   0.938    0.1111   0.939
                            α_1         1.4470   0.907    1.1060   0.943
                            α_2         2.3289   0.896    2.9074   0.940
50   25   (25, 0*24)        μ           0.0670   0.932    0.0660   0.940
                            α_1         1.5747   0.908    1.6214   0.946
                            α_2         2.5345   0.899    2.2969   0.953
50   25   (1*25)            μ           0.0678   0.945    0.0677   0.944
                            α_1         1.5793   0.909    1.6272   0.943
                            α_2         2.5970   0.904    2.3886   0.944
30   25   (5, 0*24)         μ           0.1136   0.940    0.1136   0.939
                            α_1         1.5966   0.894    1.4506   0.943
                            α_2         2.6385   0.905    2.4594   0.949
50   20   (30, 0*19)        μ           0.0689   0.789    0.0689   0.785
                            α_1         1.8089   0.912    1.3475   0.935
                            α_2         2.9205   0.883    1.2015   0.943
30   20   (10, 0*19)        μ           0.1149   0.902    0.1149   0.905
                            α_1         1.8121   0.905    1.3483   0.945
                            α_2         2.9342   0.876    2.7181   0.941
30   20   ((1, 0)*10)       μ           0.1164   0.947    0.1164   0.946
                            α_1         1.8052   0.898    1.3341   0.949
                            α_2         3.0327   0.893    1.8529   0.944
30   15   (15, 0*14)        μ           0.1205   0.852    0.1205   0.854
                            α_1         2.1281   0.869    2.0567   0.941
                            α_2         3.6647   0.891    3.2281   0.939
Table 3. Comparison of frequentist and Bayesian estimates for the real dataset.

Parameter   MLE       95% ACI              Bayes Estimate   95% CRI
μ           3.0000    [0.2927, 4.9814]     2.1889           [0.0000, 2.9822]
α_1         32.0255   [13.1000, 50.9509]   37.0208          [19.8635, 47.7169]
α_2         39.1422   [13.5698, 64.7147]   46.5209          [23.1941, 71.0820]