1. Introduction
Advancements in technology and manufacturing have resulted in the development of highly reliable modern products that often exhibit much longer lifespans before failure. As a result, traditional life testing methods, which rely on observing failures within a reasonable time period, have become less effective. This challenge emphasizes the need for more advanced statistical tools that can efficiently analyze lifetime data under time constraints. In such cases, researchers often use censoring schemes as an effective way to collect sufficient data without waiting for all test units to fail. The literature outlines several censoring schemes, including Type-I and Type-II censoring. In Type-I censoring, the test ends after a predetermined time, while in Type-II censoring, the test concludes once a specified number of failures have occurred. A more flexible alternative is the progressive Type-II censoring (PTIIC) scheme, which allows for the removal of surviving units at multiple stages during the experiment. For more information on the application of PTIIC, one may refer to Balakrishnan et al. [1], Balakrishnan and Hossain [2], Wu and Gui [3], and Dey et al. [4], among others.
Ng et al. [5] proposed a more flexible censoring scheme known as the adaptive progressive Type-II censoring (APTIIC) plan. This approach not only encompasses the traditional PTIIC plan as a specific case but also enhances the efficiency of statistical inference compared to the censoring plan introduced by Kundu and Joarder [6]. The APTIIC scheme has been widely applied in many studies, such as Sobhi and Soliman [7], Kohansal and Shoaee [8], Panahi and Asadi [9], Haj Ahmad et al. [10], Vardani et al. [11], and Yu et al. [12]. However, Ng et al. [5] noted that the APTIIC scheme is effective for statistical inference only when testing time is not a concern. For highly reliable units, it can lead to excessively long testing durations, making it unsuitable for ensuring reasonable test lengths. To address this, Yan et al. [13] introduced the improved APTIIC (IAPTIIC) plan. The IAPTIIC plan generalizes several existing censoring methods, including the PTIIC and APTIIC schemes, among others. Additionally, it guarantees that experiments are completed within a predefined time frame, effectively addressing the challenges posed by prolonged testing durations. This advancement makes the IAPTIIC scheme a valuable tool for conducting reliability studies under time-constrained conditions. For more details on recent studies concerning the IAPTIIC strategy, see Dutta and Kayal [14], Nassar and Elshahhat [15], Dutta et al. [16], Zhang and Yan [17], and Irfan et al. [18]. The next section offers a detailed description of the IAPTIIC scheme.
In various medical and engineering studies, the failure of test units is frequently influenced by multiple risk factors or causes. These factors, which can be conceptualized as competing influences, contribute to the eventual breakdown of the unit. For instance, in medical research, a patient’s health outcome may be impacted by competing risks such as disease progression, treatment side effects, or unrelated health complications. Because these risk factors operate simultaneously and each possesses the potential to induce failure, they are considered to be in competition with one another. This phenomenon is formally recognized in the statistical literature as the competing risks model. See Crowder [19] for more examples of competing risks. When analyzing competing risks data, researchers frequently focus on assessing the impact of a specific risk while accounting for the influence of other contributing factors. Ideally, the data set for such an analysis includes the lifetime of the failed unit along with an indicator variable that specifies the cause of failure. The causes of failure can be modeled as either independent or dependent events. In this study, we adopt the latent failure time model proposed by Cox [20]. Within this framework, the competing causes of failure are assumed to follow independent distributions. The competing risks model has been widely studied due to its significant importance in fields such as survival analysis, reliability engineering, and medical research; see, for example, the works of Kundu et al. [21], Sarhan [22], Zhang [23], Fan et al. [24], Ren and Gui [25], and Tian et al. [26].
Elshahhat and Nassar [27] investigated the Weibull competing risks model using the IAPTIIC scheme. Their work focused on both classical and Bayesian estimation methods for the model parameters and the reliability function (RF). They adopted the Weibull distribution because of its widespread applications: it is used in reliability engineering to model the time to failure of mechanical systems such as turbines, in survival analysis for patient survival times in medical studies, in meteorology for wind speed distributions, in material science for characterizing material strength, in hydrology for extreme events such as floods, and in quality control for assessing product lifetimes. While the competing risks model and the IAPTIIC scheme are highly significant, no studies have yet explored this framework for other lifetime distributions, despite the limitations of the Weibull distribution in modeling data with non-monotonic and unimodal hazard rate functions. To address this gap, this study employs the flexibility of the inverted-Weibull (IW) distribution to analyze reliability within the competing risks framework using IAPTIIC data. The IW distribution is chosen for its ability to accommodate non-monotonic hazard rates and provide a more accurate fit to empirical data; see Keller and Kanath [28]. Its versatility makes it particularly suitable for modeling diverse data types across various fields. Notable practical applications of the IW distribution include modeling the time to breakdown of insulating fluids, the degradation of mechanical components, survival and reliability data, and extreme value analysis for modeling maximum or minimum values such as stock price fluctuations. For more details about the IW distribution, see AL-Essa et al. [29], Jana and Bera [30], and Mou et al. [31]. Under the competing risks model, several researchers have explored estimation problems for the IW distribution under different sampling schemes. For instance, El Azm et al. [32] investigated parameter estimation under APTIIC sampling. Samia et al. [33] focused on making statistical inferences using a generalized progressive hybrid Type-I censoring scheme. Additionally, Alotaibi et al. [34] examined parameter estimation by applying the expectation–maximization algorithm in the case of complete samples.
It is evident that the available studies have primarily focused on estimating the parameters of the IW competing risks model using traditional sampling plans or complete data. None of these works have explored more advanced censoring schemes such as the IAPTIIC scheme. Furthermore, while parameter estimation has been widely addressed, the estimation of the RF has generally been overlooked. Considering the advantages of the IAPTIIC scheme over conventional censoring methods, and the flexibility of the IW competing risks model in handling data with multiple failure causes, this study aims to investigate both classical and Bayesian estimation methodologies for the unknown parameters and the RF under IAPTIIC competing risks data. The estimation methods involve obtaining both point estimates and interval estimates. By integrating classical and Bayesian methodologies, this study provides a useful framework for reliability analysis in the presence of competing risks, which may be of interest to practitioners in fields such as engineering, medicine, and reliability testing. The main contributions of this study can be presented as follows:
Derivation of maximum likelihood estimates (MLEs) and construction of approximate confidence intervals (ACIs) for the unknown parameters and RF of the IW competing risks model under the IAPTIIC scheme.
Development of Bayesian estimation procedures using Markov Chain Monte Carlo (MCMC) methods to obtain Bayes point estimates and highest posterior density (HPD) credible intervals.
Comprehensive simulation study conducted to evaluate the performance and accuracy of both classical and Bayesian estimation methods under various experimental scenarios.
Application of the proposed methods to a real-world competing risks data set to demonstrate their practical utility and effectiveness in estimating the model parameters and RF.
The remainder of this study is structured as follows:
Section 2 discusses the IW distribution and the IAPTIIC scheme under the competing risks model.
Section 3 derives the MLEs and constructs the ACIs for the IW parameters and the RF.
Section 4 focuses on Bayesian estimation, employing the squared error (SE) loss function to estimate the parameters and the related RF.
Section 5 presents the simulation study design and discusses the corresponding results, evaluating the performance of the proposed methods.
Section 6 demonstrates the practical application of the methodologies by analyzing real-world data sets within the competing risks framework. Finally,
Section 7 concludes this study, summarizing the key findings and their implications for reliability analysis and related fields.
2. Model Description
Suppose n identical items are put on a life test, and their lifetimes are represented by independent and identically distributed (i.i.d.) random variables, denoted as $X_1, X_2, \ldots, X_n$. For simplicity and without loss of generality, we assume that there are only two competing risks influencing the failure of these units. Under this assumption, the failure of each unit can be attributed to one of these two competing causes. Then, we have
$$X_i = \min\{X_{i1}, X_{i2}\}, \quad i = 1, 2, \ldots, n,$$
where $X_{ij}$, for $j = 1, 2$, represents the latent failure time of the i-th testing unit under the j-th cause of failure. Additionally, it is assumed that the latent failure times $X_{i1}$ and $X_{i2}$, for $i = 1, 2, \ldots, n$, are independent. When a failure occurs, the researcher records the failure time and identifies the specific cause of failure, denoted by the indicator variable $\delta_i$, with $\delta_i \in \{1, 2\}$, where $\delta_i = 1$ if the i-th failure is attributed to the first cause, and $\delta_i = 2$ if it is due to the second cause. This indicator variable $\delta_i$ helps identify the specific risk factor responsible for each observed failure. Assume that $X_{ij}$, $j = 1, 2$, follows the IW distribution with scale parameter $\alpha_j$ and shape parameter $\beta_j$, and it will be referred to as $\mathrm{IW}(\alpha_j, \beta_j)$.
Then, the associated probability density function (PDF) and distribution function can be written, respectively, as
$$f_j(x; \alpha_j, \beta_j) = \alpha_j \beta_j x^{-(\beta_j + 1)} \exp\left(-\alpha_j x^{-\beta_j}\right), \quad x > 0, \; \alpha_j, \beta_j > 0,$$
and
$$F_j(x; \alpha_j, \beta_j) = \exp\left(-\alpha_j x^{-\beta_j}\right), \quad x > 0.$$
Let $\theta_1 = (\alpha_1, \beta_1)$ and $\theta_2 = (\alpha_2, \beta_2)$. Then, the RF at a distinct time t, defined as $R(t) = P(X_i > t)$, $t > 0$, can be expressed as follows
$$R(t) = \left[1 - F_1(t; \theta_1)\right]\left[1 - F_2(t; \theta_2)\right] = \left[1 - e^{-\alpha_1 t^{-\beta_1}}\right]\left[1 - e^{-\alpha_2 t^{-\beta_2}}\right],$$
where $F_j(t; \theta_j)$, $j = 1, 2$, is the IW distribution function given above.
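To make the quantities above concrete, the short Python sketch below evaluates the IW density, the distribution function, and the competing-risks RF under the parameterization adopted here; the function names and the example parameter values are illustrative only.

```python
import numpy as np

def iw_pdf(x, alpha, beta):
    """IW(alpha, beta) density: alpha * beta * x**-(beta + 1) * exp(-alpha * x**-beta)."""
    x = np.asarray(x, dtype=float)
    return alpha * beta * x ** (-(beta + 1.0)) * np.exp(-alpha * x ** (-beta))

def iw_cdf(x, alpha, beta):
    """IW(alpha, beta) distribution function."""
    x = np.asarray(x, dtype=float)
    return np.exp(-alpha * x ** (-beta))

def system_rf(t, alpha1, beta1, alpha2, beta2):
    """Reliability of the competing-risks (series) system X = min(X1, X2):
    R(t) = [1 - F1(t)] * [1 - F2(t)]."""
    return (1.0 - iw_cdf(t, alpha1, beta1)) * (1.0 - iw_cdf(t, alpha2, beta2))

# example: reliability at t = 1.5 for illustrative parameter values
print(system_rf(1.5, alpha1=0.8, beta1=1.2, alpha2=0.5, beta2=1.5))
```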
The process of generating an IAPTIIC sample in the presence of competing risks can be described as follows: Let $m$ represent the predefined number of failures to be observed, and let $R = (R_1, R_2, \ldots, R_m)$ denote the predetermined removal pattern established before the experiment begins. Additionally, the researcher sets two threshold times, $T_1$ and $T_2$, where $T_1 < T_2$ and $T_1, T_2 \in (0, \infty)$. These thresholds define the experimental time constraints: the experiment is allowed to continue beyond $T_1$ but must not exceed $T_2$. This setup ensures that the experiment remains within a controlled time frame while accommodating the adaptive nature of the censoring scheme. When the i-th unit fails, with failure time represented by $X_{i:m:n}$ for $i = 1, 2, \ldots, m$, the researcher randomly removes $R_i$ units from the remaining surviving items. The cause of failure is recorded at each failure event. This procedure continues until the test concludes based on one of the following three scenarios (a simulation sketch is provided after Case III):
Case I: If $X_{m:m:n} < T_1$, the researcher terminates the test at $X_{m:m:n}$ and discards all the remaining surviving items. This results in the conventional PTIIC sample.
Case II: If $T_1 < X_{m:m:n} < T_2$, where $D_1$ represents the number of failures occurring before time $T_1$ and $X_{D_1:m:n} < T_1 < X_{D_1+1:m:n}$, we set $R_{D_1+1} = \cdots = R_{m-1} = 0$. The experiment is terminated at the time of the m-th failure, and all remaining units are removed at that time. Note that this scenario corresponds to the well-known APTIIC sample.
Case III: If $X_{m:m:n} > T_2$, the experiment is terminated at $T_2$. In this scenario, no items are removed once the test time exceeds the first threshold $T_1$. Here, $D_2$ represents the number of failures observed before time $T_2$. At $T_2$, all remaining units are removed, i.e., $R^{*} = n - D_2 - \sum_{i=1}^{D_1} R_i$.
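The three cases above translate directly into a simulation routine. The following Python sketch generates one IAPTIIC competing-risks sample by the physical procedure just described (draw latent IW lifetimes, fail the smallest surviving unit, apply the planned removals only before $T_1$, and stop at the m-th failure or at $T_2$); the function and variable names are ours, and the IW parameterization follows the form given earlier.

```python
import numpy as np

def iw_sample(size, alpha, beta, rng):
    """Draw IW(alpha, beta) variates by inverting F(x) = exp(-alpha * x**(-beta))."""
    u = rng.uniform(size=size)
    return (-np.log(u) / alpha) ** (-1.0 / beta)

def simulate_iaptiic(n, m, R, T1, T2, par1, par2, rng):
    """Simulate one IAPTIIC competing-risks sample under the latent failure time model.

    n, m       : number of test units and target number of failures;
    R          : planned removal vector (R_1, ..., R_m);
    T1, T2     : the two threshold times, 0 < T1 < T2;
    par1, par2 : (alpha, beta) of the two latent IW causes.
    Returns (times, causes, removals, tau, r_star), where tau is the termination time and
    r_star is the number of survivors withdrawn at T2 (Case III); in Cases I and II the
    final withdrawal appears as the last entry of removals and r_star is 0."""
    lat1 = iw_sample(n, *par1, rng)
    lat2 = iw_sample(n, *par2, rng)
    life = np.minimum(lat1, lat2)
    cause = np.where(lat1 <= lat2, 1, 2)
    alive = list(np.argsort(life))                 # surviving units, ordered by lifetime
    times, causes, removals = [], [], []
    while alive and len(times) < m:
        nxt = alive[0]
        if life[nxt] > T2:                          # Case III: stop at T2 before the m-th failure
            return times, causes, removals, T2, len(alive)
        times.append(float(life[nxt]))
        causes.append(int(cause[nxt]))
        alive.pop(0)
        if len(times) == m:                         # Cases I and II: stop at the m-th failure
            removals.append(len(alive))             # all remaining survivors are removed
            return times, causes, removals, times[-1], 0
        r = min(R[len(times) - 1] if times[-1] < T1 else 0, len(alive))
        if r > 0:                                   # withdraw r randomly chosen survivors
            drop = set(rng.choice(len(alive), size=r, replace=False).tolist())
            alive = [unit for k, unit in enumerate(alive) if k not in drop]
        removals.append(r)
    return times, causes, removals, times[-1], 0

# example: n = 30 units, m = 12 failures, removals concentrated in the first stages
rng = np.random.default_rng(1)
data = simulate_iaptiic(30, 12, [3] * 6 + [0] * 6, T1=1.0, T2=2.5,
                        par1=(0.8, 1.2), par2=(0.5, 1.5), rng=rng)
```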
In each of these cases, the observed IAPTIIC competing risks data will be
$$\{(X_1, \delta_1, R_1), (X_2, \delta_2, R_2), \ldots\},$$
where $X_i \equiv X_{i:m:n}$ for the sake of simplicity. Let Q refer to the total number of observed failures and $I(\delta_i = k)$ denote an indicator function for the event $\{\delta_i = k\}$. Using this, we define
$$Q_k = \sum_{i=1}^{Q} I(\delta_i = k), \quad k = 1, 2.$$
From this, it follows that $Q_k$, for $k = 1, 2$, represents the number of observed failures associated with cause k, where $Q_1 + Q_2 = Q$. Let $\{(x_i, \delta_i, R_i)\}$, $i = 1, 2, \ldots, Q$, be an IAPTIIC competing risks sample drawn from a continuous population. Then, the joint likelihood function of the observed data can be expressed as in (5), where the available options for Q, D, the corresponding termination time, and the number of units removed at termination are provided in Table 1.
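For illustration, the likelihood under the latent failure time formulation has a standard structure: a failure from one cause contributes its own density times the survival function of the other cause, while every removed survivor contributes the survival function of the minimum at its removal time. The following Python sketch implements a negative log-likelihood of this form (multiplicative constants dropped), using the quantities returned by the simulation sketch above; it is meant as an illustration of the construction described by (5), not a verbatim transcription of it.

```python
import numpy as np

def iw_logpdf(x, alpha, beta):
    """log of the IW(alpha, beta) density."""
    x = np.asarray(x, dtype=float)
    return np.log(alpha) + np.log(beta) - (beta + 1.0) * np.log(x) - alpha * x ** (-beta)

def iw_logsf(x, alpha, beta):
    """log of the IW(alpha, beta) survival function, log(1 - exp(-alpha * x**(-beta)))."""
    x = np.asarray(x, dtype=float)
    return np.log1p(-np.exp(-alpha * x ** (-beta)))

def neg_log_likelihood(par, times, causes, removals, tau, r_star):
    """Negative log-likelihood (constants dropped) of one IAPTIIC competing-risks sample.

    par = (alpha1, beta1, alpha2, beta2); times, causes, removals, tau, and r_star are
    as returned by the simulation sketch given after the three censoring cases."""
    a1, b1, a2, b2 = par
    if min(par) <= 0.0:
        return np.inf
    t = np.asarray(times, dtype=float)
    c = np.asarray(causes)
    r = np.asarray(removals, dtype=float)
    ll = 0.0
    # a failure from cause 1 contributes f1(x) * [1 - F2(x)], and symmetrically for cause 2
    ll += np.sum(iw_logpdf(t[c == 1], a1, b1) + iw_logsf(t[c == 1], a2, b2))
    ll += np.sum(iw_logpdf(t[c == 2], a2, b2) + iw_logsf(t[c == 2], a1, b1))
    # survivors withdrawn at the i-th failure contribute the system survival at that time
    ll += np.sum(r * (iw_logsf(t[: r.size], a1, b1) + iw_logsf(t[: r.size], a2, b2)))
    # survivors withdrawn when the test stops at tau (Case III) contribute likewise
    if r_star > 0:
        ll += r_star * (iw_logsf(tau, a1, b1) + iw_logsf(tau, a2, b2))
    return -ll
```

The MLEs discussed in Section 3 could then be approximated numerically, for example by passing this function to scipy.optimize.minimize with a feasible starting point.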
4. Bayesian Estimation
In this section, we explore the Bayesian estimation approach to derive the Bayes estimators and HPD credible intervals for the model parameters and the related RF. The Bayes estimates and the interval ranges are obtained using the MCMC technique through sampling from the posterior distribution. The Bayes point estimates are computed under the SE loss function, a commonly used symmetric loss function in Bayesian inference. In Bayesian analysis, the process begins by specifying prior distributions for the parameters. From the likelihood function given in (5), it is clear that conjugate priors are not available for $\alpha_1, \beta_1, \alpha_2$, and $\beta_2$. To overcome this limitation, we adopt gamma priors for these parameters. Gamma priors are particularly advantageous as they allow flexibility in adjusting the range of the unknown parameters while maintaining computational simplicity. These priors are versatile and can effectively incorporate various forms of prior knowledge. Moreover, the use of gamma priors does not complicate posterior evaluation or computational procedures, especially when employing MCMC methods.
Let each of the parameters $\alpha_1, \beta_1, \alpha_2$, and $\beta_2$ follow an independent gamma prior distribution, where the associated hyper-parameters are assumed to be known. The joint prior distribution of $\alpha_1, \beta_1, \alpha_2$, and $\beta_2$ is then expressed as the product of these gamma densities, as given in (13).
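As a concrete illustration, with independent gamma priors and illustrative hyper-parameters $(a_j, b_j)$ for the scale parameters and $(c_j, d_j)$ for the shape parameters, $j = 1, 2$ (these labels are ours, not necessarily those used in (13)), the joint prior would take a form such as:

```latex
\pi(\alpha_1, \beta_1, \alpha_2, \beta_2)
  \propto \prod_{j=1}^{2} \alpha_j^{a_j - 1} e^{-b_j \alpha_j}\,
          \beta_j^{c_j - 1} e^{-d_j \beta_j},
  \qquad \alpha_j, \beta_j > 0 .
```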
By combining the likelihood function in (5) with the joint prior distribution in (13), the joint posterior distribution of the model parameters $\alpha_1, \beta_1, \alpha_2$, and $\beta_2$ can be derived as in (14), where the corresponding normalizing constant is obtained by integrating the product of (5) and (13) over the parameter space.
Let $\varphi \equiv \varphi(\alpha_1, \beta_1, \alpha_2, \beta_2)$ represent a parametric function of the unknown parameters. The Bayes estimator of $\varphi$ is typically obtained under a chosen loss function, and one of the most commonly used loss functions is the SE loss function. The SE loss function is commonly used due to its mathematical tractability and desirable properties, such as yielding the estimator that minimizes the posterior expected squared error. However, it is important to note that the Bayes estimator can be easily obtained under other loss functions as well. Under the SE loss function, the Bayes estimator of $\varphi$, denoted as $\tilde{\varphi}$, can be expressed as the expected value of $\varphi$ with respect to the posterior distribution of the parameters given the observed data, as shown in (15). Based on the posterior distribution in (14), the Bayes estimator in (15) can be obtained as given in (16).
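For reference, under the SE loss the Bayes estimator of $\varphi$ is its posterior mean, so (15) and (16) correspond to an expression of the following form (written here in the notation introduced above):

```latex
\tilde{\varphi}
  = E\left[\varphi \mid \text{data}\right]
  = \int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}
      \varphi(\alpha_1, \beta_1, \alpha_2, \beta_2)\,
      \pi(\alpha_1, \beta_1, \alpha_2, \beta_2 \mid \text{data})\,
      d\alpha_1\, d\beta_1\, d\alpha_2\, d\beta_2 .
```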
The Bayes estimator in (16) requires computing complex multiple integrals, which are analytically intractable due to the complicated posterior distribution. While numerical methods like Monte Carlo integration can approximate these integrals, they are often computationally expensive, especially with many parameters or complex posteriors, as in our case. To address this, we use the MCMC method, which allows direct sampling from the posterior distribution in (14). These samples are then used to approximate the Bayes estimates and construct the HPD credible intervals. A key step in implementing the MCMC method is deriving the full conditional distributions of the unknown parameters. Using the joint posterior distribution provided in (14), we can express the full conditional distributions of the parameters $\alpha_1, \beta_1, \alpha_2$, and $\beta_2$ as given in (17)–(20). Upon initial inspection, the full conditional distributions presented in (17)–(20) do not align with any standard or widely recognized probability distributions. Consequently, traditional sampling techniques, such as inverse transform sampling, are not applicable to generate random samples from these distributions. To address this limitation, we employ the Metropolis–Hastings (M–H) algorithm, a well-known MCMC method. The M–H algorithm is particularly effective for sampling from complex or non-standard distributions, as it utilizes a candidate-generating distribution, referred to as the proposal distribution, to yield potential samples. In this study, we utilize the normal distribution as the proposal distribution for all parameters. The mean and variance of this normal distribution are determined based on the MLEs of the parameters. The M–H algorithm operates by iteratively proposing new samples from the proposal distribution and accepting or rejecting these samples based on an acceptance probability that guarantees the resulting Markov chain converges to the target posterior distribution. The procedure for generating random samples using the M–H algorithm involves the following steps:
- 1.
Initialize the algorithm by setting the starting values as $(\alpha_1^{(0)}, \beta_1^{(0)}, \alpha_2^{(0)}, \beta_2^{(0)}) = (\hat{\alpha}_1, \hat{\beta}_1, \hat{\alpha}_2, \hat{\beta}_2)$, where $\hat{\alpha}_k$ and $\hat{\beta}_k$ are the MLEs of the parameters $\alpha_k$ and $\beta_k$, $k = 1, 2$.
- 2.
Set the iteration counter $j = 1$.
- 3.
Use the conditional distribution of $\alpha_1$ given in (17) to generate $\alpha_1^{(j)}$ as follows:
- •
Generate a candidate value $\alpha_1^{*}$ from the normal proposal distribution specified for $\alpha_1$.
- •
Compute the acceptance probability (AP) as the minimum of one and the ratio of the conditional distribution in (17) evaluated at $\alpha_1^{*}$ to its value at $\alpha_1^{(j-1)}$.
- •
Draw a random number u from a uniform distribution on $(0, 1)$.
- •
If $u \leq \mathrm{AP}$, accept the candidate and set $\alpha_1^{(j)} = \alpha_1^{*}$. Otherwise, retain the previous value by setting $\alpha_1^{(j)} = \alpha_1^{(j-1)}$.
- 4.
Repeat step 3 to generate the following:
- •
$\beta_1^{(j)}$ from the conditional distribution of $\beta_1$ in (18).
- •
$\alpha_2^{(j)}$ from the conditional distribution of $\alpha_2$ in (19).
- •
$\beta_2^{(j)}$ from the conditional distribution of $\beta_2$ in (20).
- 5.
Using the updated values $(\alpha_1^{(j)}, \beta_1^{(j)}, \alpha_2^{(j)}, \beta_2^{(j)})$, compute the RF at iteration j, denoted by $R^{(j)}(t)$, as follows:
$$R^{(j)}(t) = \left[1 - \exp\left(-\alpha_1^{(j)} t^{-\beta_1^{(j)}}\right)\right]\left[1 - \exp\left(-\alpha_2^{(j)} t^{-\beta_2^{(j)}}\right)\right].$$
- 6.
Increment the iteration counter by setting $j = j + 1$.
- 7.
After repeating steps 3 through 6 multiple times, discard an initial set of iterations as the burn-in period to ensure the Markov chain has converged to its stationary distribution. The remaining iterations yield the sequences $\left(\alpha_1^{(j)}, \beta_1^{(j)}, \alpha_2^{(j)}, \beta_2^{(j)}, R^{(j)}(t)\right)$, $j = 1, 2, \ldots, B$, where B represents the number of iterations retained after discarding the burn-in period.
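A compact Python sketch of the sampler described in steps 1–7 is given below. It uses a Metropolis–Hastings-within-Gibbs loop with normal proposals; here the proposals are centered at the current values, which is one common reading of the scheme above, and the proposal standard deviations could be taken from the estimated standard errors of the MLEs. The callable log_post would combine the log-likelihood sketch from Section 2 with the gamma log-prior terms; all names are ours.

```python
import numpy as np

def mh_within_gibbs(log_post, start, prop_sd, n_iter, rng):
    """Metropolis-Hastings within Gibbs with normal proposals, one parameter at a time.

    log_post : callable returning log(likelihood x prior) at a parameter vector, e.g.
               -neg_log_likelihood(theta, ...) plus the gamma log-prior terms;
    start    : starting values, e.g. the MLEs (alpha1, beta1, alpha2, beta2);
    prop_sd  : proposal standard deviations, e.g. the estimated standard errors of the MLEs."""
    theta = np.array(start, dtype=float)
    chain = np.empty((n_iter, theta.size))
    cur_lp = log_post(theta)
    for j in range(n_iter):
        for k in range(theta.size):
            cand = theta.copy()
            cand[k] = rng.normal(theta[k], prop_sd[k])      # candidate value
            cand_lp = log_post(cand) if cand[k] > 0.0 else -np.inf
            # acceptance probability min(1, ratio), compared on the log scale
            if np.log(rng.uniform()) <= cand_lp - cur_lp:
                theta, cur_lp = cand, cand_lp
        chain[j] = theta
    return chain

# reliability draws at a fixed time t0 from the retained part of the chain, e.g.
# a1, b1, a2, b2 = chain[burn_in:].T   (t0 and burn_in are placeholders)
# rf_draws = (1 - np.exp(-a1 * t0 ** -b1)) * (1 - np.exp(-a2 * t0 ** -b2))
```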
Once the MCMC samples have been generated, they can be readily utilized to compute the Bayes estimates and the HPD credible intervals for the model parameters and the RF. Under the SE loss function, the Bayes estimates are obtained as the posterior means of the respective parameters. Specifically, the Bayes estimates for the parameters $\alpha_k$ and $\beta_k$, $k = 1, 2$, and the RF are calculated as the averages of the corresponding retained draws, namely
$$\tilde{\alpha}_k = \frac{1}{B}\sum_{j=1}^{B} \alpha_k^{(j)}, \qquad \tilde{\beta}_k = \frac{1}{B}\sum_{j=1}^{B} \beta_k^{(j)}, \qquad \tilde{R}(t) = \frac{1}{B}\sum_{j=1}^{B} R^{(j)}(t), \qquad k = 1, 2.$$
Additionally, the HPD credible intervals provide the shortest possible interval for a given probability level under the posterior distribution. To illustrate the computation process, we outline the steps to calculate the HPD credible interval for the RF below. The same procedure can be applied analogously to other parameters using the MCMC-generated samples.
- 1.
Sort the retained MCMC samples of the RF in ascending order as $R_{(1)}(t) \leq R_{(2)}(t) \leq \cdots \leq R_{(B)}(t)$.
- 2.
Identify the index $j^{*}$ within the range $1 \leq j^{*} \leq B - B^{*}$, with $B^{*} = \lfloor (1 - \gamma) B \rfloor$ for a $100(1 - \gamma)\%$ credibility level, that satisfies the following condition:
$$R_{(j^{*} + B^{*})}(t) - R_{(j^{*})}(t) = \min_{1 \leq j \leq B - B^{*}} \left[ R_{(j + B^{*})}(t) - R_{(j)}(t) \right].$$
- 3.
The $100(1 - \gamma)\%$ HPD credible interval for the RF is then given by $\left( R_{(j^{*})}(t), \; R_{(j^{*} + B^{*})}(t) \right)$.
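The posterior-mean estimates and the HPD steps listed above can be computed directly from the retained draws; the following Python sketch (our code, using the same shortest-interval search) illustrates both for a single chain, such as the reliability draws rf_draws mentioned in the sampler sketch.

```python
import numpy as np

def bayes_point_and_hpd(draws, level=0.95):
    """SE-loss Bayes estimate (posterior mean) and HPD interval from retained MCMC draws."""
    x = np.sort(np.asarray(draws, dtype=float))
    n = x.size
    k = int(np.floor(level * n))        # number of draws spanned by each candidate interval
    widths = x[k:] - x[: n - k]         # widths of the (n - k) candidate intervals
    j_star = int(np.argmin(widths))     # index of the shortest interval
    return x.mean(), (x[j_star], x[j_star + k])

# example with hypothetical reliability draws `rf_draws` from the sampler above:
# est, (lower, upper) = bayes_point_and_hpd(rf_draws)
```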
6. Radiobiology Application
The study of radiation exposure in biological organisms is crucial for understanding its effects on cellular function, genetic stability, and overall health. In radiobiology, toxicology, and medical science, the data set considered here has been used frequently to explore radiation exposure risks and therapeutic interventions. It captures the effects of 300 roentgens of radiation on male mice aged 35 to 42 days (5–6 weeks) in a controlled laboratory setting. Its precise measurements and experimental design make it a valuable resource for understanding radiation-induced biological changes; for additional details, see Kundu et al. [21], Alotaibi et al. [37], and Alotaibi et al. [38], among others.
In Table 3, the radiation data set (consisting of 77 lifetimes) is classified as 0 (censored lifetime), 1 (death caused by reticulum cell sarcoma), or 2 (death due to other causes). Ignoring the lifetimes of male mice still alive at the end of the study, in this application we examine only the observations corresponding to causes 1 and 2. For computational convenience, each original observation has been divided by one hundred.
To evaluate the suitability of the IW distribution in modeling the radiation data set, Table 4 presents the results of the Kolmogorov–Smirnov (KS) test, including the respective test statistic and associated p-value at the 5% significance level. In the same table, the fitted values of the IW parameters (along with their standard errors (St.Ers)), in addition to their 95% ACI bounds (along with their interval lengths (ILs)), are reported. The hypothesis framework for this goodness-of-fit assessment is formulated with the null hypothesis that the data follow the IW distribution against the alternative that they do not. Table 4 indicates that all estimated p-values exceed the pre-specified significance threshold, so there is insufficient evidence to reject the null hypothesis, supporting the conclusion that the IW lifetime model adequately describes the radiation data. Furthermore, the visual assessments (including contour and probability–probability (PP) plots) provided in Figure 2 reinforce these findings, aligning with the numerical results in Table 4.
Figure 2 also suggests that the fitted values of the IW parameters (listed in Table 4) be utilized as starting points in the subsequent inferential calculations.
Briefly, using the complete radiation data sets (reported in Table 3), we highlight the superiority of the IW lifetime model by comparing its fitting results to five other lifetime models, namely
Inverted-Kumaraswamy (IK) by Abd AL-Fattah et al. [39];
Inverted-Lomax (IL) by Kleiber and Kotz [40];
Inverted-Pham (IP) by Alqasem et al. [41];
Inverted Nadarajah-Haghighi (INH) by Tahir et al. [42];
Exponentiated inverted Weibull (EIW) by Flaih et al. [43].
In particular, this comparison is made based on clear criteria, namely (i) negative log-likelihood (NLL); (ii) Akaike information (AI); (iii) consistent AI (CAI); (iv) Bayesian information (BI); (v) Hannan–Quinn information (HQI); and (vi) the KS statistic (with its p-value). Thus, in Table 5, the MLEs (along with their St.Ers) of the parameters of each competing model, as well as the estimated criteria (i)–(vi), are listed.
Consequently, from the analyzed radiation data using both causes 1 and 2, the results summarized in Table 5 indicate that the IW distribution outperforms all other candidate models. This conclusion is substantiated by the fact that the IW model consistently yields the most favorable outcomes across nearly all criteria, providing the lowest values of all suggested metrics together with the highest KS p-value.
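For reproducibility, criteria (ii)–(v) follow from the minimized negative log-likelihood through the usual formulas, as in the short Python sketch below (the function name and interface are ours), while the KS statistic and p-value in (vi) can be obtained, for example, with scipy.stats.kstest applied to the data and the fitted IW distribution function.

```python
import numpy as np

def fit_criteria(neg_loglik, k, n):
    """Compute the comparison criteria of Table 5 from a model's minimized negative
    log-likelihood, its number of parameters k, and the sample size n."""
    nll = float(neg_loglik)
    ai = 2.0 * k + 2.0 * nll                        # Akaike information
    cai = ai + 2.0 * k * (k + 1.0) / (n - k - 1.0)  # consistent AI
    bi = k * np.log(n) + 2.0 * nll                  # Bayesian information
    hqi = 2.0 * k * np.log(np.log(n)) + 2.0 * nll   # Hannan-Quinn information
    return {"NLL": nll, "AI": ai, "CAI": cai, "BI": bi, "HQI": hqi}
```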
Utilizing Table 3, three IAPTIIC competing risks samples are generated under various combinations of the time thresholds and removal patterns (see Table 6). The point estimates (along with their corresponding St.Ers) of the model parameters and the RF, obtained from both the likelihood and the Bayesian approaches, are reported in Table 7. In the same table, the 95% ACI/HPD interval limits (along with their corresponding ILs) are computed. The Bayesian computations are performed by discarding the first 10,000 MCMC iterations as burn-in from a total of 40,000 iterations. A comparative analysis in Table 7 reveals that the Bayesian MCMC estimates consistently outperform the likelihood-based results in terms of lower standard errors and narrower interval widths, indicating enhanced precision. In addition, the estimated relative risks (ERRs) due to causes 1 and 2, computed from both the maximum likelihood and Bayesian estimates using the invariance property, are also reported in Table 7.
We notice from Table 7 that the ERR outcomes related to cause 1 increase when the removals are performed in the last stages rather than in the first (or middle) stages, while those related to cause 2 increase when the removals are performed in the first stages rather than in the last (or middle) stages.
Furthermore, based on all samples listed in Table 3, Figure 3 presents the profile log-likelihood curves of the IW parameters. The results support the fitted values provided in Table 7, demonstrating that the MLEs of the IW parameters exist and are unique, confirming the stability and reliability of the estimation process.
To assess the convergence behavior and mixing efficiency of the remaining 30,000 MCMC iterations of the model parameters and the RF obtained from the radiation data, both density and trace plots of the estimated parameters are generated for each sampled data set listed in Table 3. These visual diagnostics are presented in Figure 4. The figure indicates that, for all unknown parameters and the RF, the posterior density functions are nearly symmetric, in agreement with their normal proposal distributions. Additionally, the trace plots demonstrate effective mixing, confirming that the Markov chains have achieved convergence and ensuring the reliability of the Bayesian inference.
7. Concluding Remarks
This study addressed the challenge of estimating the unknown parameters and the reliability function of the inverted-Weibull distribution under improved adaptive progressively Type-II censored competing risks data. By employing both classical and Bayesian estimation approaches, the research provided a robust framework for reliability analysis in the presence of competing risks. The classical framework employed maximum likelihood estimation to derive point estimates and approximate confidence intervals, while the Bayesian approach utilized Markov Chain Monte Carlo techniques to compute Bayes estimates and the highest posterior density credible intervals. A comprehensive simulation study demonstrated the accuracy and efficiency of the proposed methods across various experimental scenarios, highlighting their practical applicability. The study evaluated the estimators using metrics such as RMSE, MRAB, AIL, and CP across varied scenarios involving sample sizes, censoring rates, and time thresholds. The simulations, which were repeated 1000 times for each scenario, confirmed the high accuracy and reliability of the Bayesian methods based on the Metropolis–Hastings algorithm compared to the traditional methods. The proposed point and interval estimation methods perform well under left- and right-censoring patterns for the model parameters, while middle-censoring is ideal for the reliability index. A significant radiation application was analyzed from a practical perspective to demonstrate the feasibility of the suggested approaches. In summary, the research findings obtained from data sets in the radiobiology sector provide valuable insights into the inverted-Weibull lifetime model, particularly when an improved adaptive progressively censored competing risks data set is constructed. One limitation of the current study is the assumption of independence among latent failure times, which may not be appropriate when the causes of failure are correlated. In such cases, a dependent competing risks model would be more suitable for analyzing the data. Another limitation concerns the use of the improved adaptive progressive Type-II censoring scheme, which is most effective when the units under study have long lifespans. If the duration of the experiment is not a major constraint, traditional censoring schemes, such as the standard progressive Type-II censoring, may be more appropriate. Finally, the Bayesian estimation approach, while flexible and informative, is computationally intensive. In scenarios where prior information about the unknown parameters is unavailable, it is advisable to use classical estimation methods to reduce the computational burden and save time. As a direction for future research, it would be valuable to compare the inverted Weibull model with alternative lifetime distributions or machine learning-based approaches to address situations where the true underlying distribution is unknown; see Meyer et al. [44] for additional details.