Article

Statistical Computation of Hjorth Competing Risks Using Binomial Removals in Adaptive Progressive Type II Censoring

1 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2 Department of Statistics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3 Department of Statistics, Faculty of Commerce, Zagazig University, Zagazig 44519, Egypt
4 Faculty of Technology and Development, Zagazig University, Zagazig 44519, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(12), 2010; https://doi.org/10.3390/math13122010
Submission received: 10 May 2025 / Revised: 11 June 2025 / Accepted: 17 June 2025 / Published: 18 June 2025
(This article belongs to the Special Issue Statistical Simulation and Computation: 3rd Edition)

Abstract

In complex reliability applications, it is common for the failure of an individual or an item to be attributed to multiple causes known as competing risks. This paper explores the estimation of the Hjorth competing risks model based on an adaptive progressive Type II censoring scheme via a binomial removal mechanism. For parameter and reliability metric estimation, both frequentist and Bayesian methodologies are developed. Maximum likelihood estimates for the Hjorth parameters are computed numerically due to their intricate form, while the binomial removal parameter is derived explicitly. Confidence intervals are constructed using asymptotic approximations. Within the Bayesian paradigm, gamma priors are assigned to the Hjorth parameters and a beta prior for the binomial parameter, facilitating posterior analysis. Markov Chain Monte Carlo techniques yield Bayesian estimates and credible intervals for parameters and reliability measures. The performance of the proposed methods is compared using Monte Carlo simulations. Finally, to illustrate the practical applicability of the proposed methodology, two real-world competing risk data sets are analyzed: one representing the breaking strength of jute fibers and the other representing the failure modes of electrical appliances.
MSC:
62F10; 62F15; 62N01; 62N02; 62N05

1. Introduction

In various research fields, especially in medicine and engineering, unit failures often result from multiple competing causes. The data collected in such scenarios are known as competing risk data. Analyzing this type of data is essential for understanding the influence of one failure cause while accounting for the presence of others. Traditional methods typically focus only on failures attributed to a single cause, treating other failure types as censored data. However, this approach overlooks critical information about the underlying failure mechanisms, potentially leading to inaccurate reliability and survival estimates which are the key goals of these analyses. In studies involving competing risks, data typically comprise two key elements: the time until failure for the units under observation and a categorical indicator that identifies the specific cause of failure. Our methodology in this work adopts the latent failure time framework introduced by Cox [1], which assumes that the different causes of failure are statistically independent. Within this field, researchers have predominantly applied various probability models to describe observed lifetime data. The primary focus of research has been parametric estimation methods, particularly employing maximum likelihood and Bayesian approaches. In contrast, reliability assessment and survival analysis have received relatively less attention in comparison. Several studies in this field include contributions by Kundu and Basu [2], Park [3], Wang [4], Park and Wang [5], and Alok et al. [6], among others.
A principal challenge in the analysis of competing risk data lies in selecting an appropriate statistical probability distribution for modeling such data. The statistical literature offers a wide array of models designed for various data types across multiple disciplines. This study focuses on the Hjorth (Hj) distribution, a relatively underexplored model in the context of competing risks, introduced by Hjorth [7]. The Hj distribution stands out for its remarkable flexibility, as it can accommodate four distinct hazard rate patterns: constant, increasing, decreasing, and bathtub-shaped. Despite being defined by only a single scale parameter and one shape parameter, it provides greater modeling versatility compared to other two-parameter distributions, such as the Weibull and gamma models. As noted by Hjorth [7], the reliability function (RF) of the Hj distribution can be expressed as the product of the RFs of the Rayleigh and Lomax distributions. This unique property allows the Hj distribution to serve as a natural candidate for modeling competing risks scenarios. Despite its demonstrated adaptability, there is limited research exploring the application of the Hj distribution in reliability and survival analysis, particularly in estimating its unknown parameters or related reliability metrics, like RF and hazard rate function (HRF). Notable exceptions include the works of Yadav et al. [8] and Elshahhat and Nassar [9]. The detailed description of the Hj distribution within the context of the competing risks model is discussed in the following section, which includes the definition of several structural functions, such as the probability density function (PDF), cumulative distribution function (CDF), RF, and HRF.
Another challenge researchers encounter when analyzing competing risk data is deciding whether to collect data on the failure of all units in the sample or to terminate the test before all units fail. In practice, gathering complete data on all tested units can be time-consuming and costly, particularly for highly reliable products which are now more common due to technological advancements. In such cases, researchers often opt for censoring sampling methods, which help reduce both costs and experimental duration. In the literature, various censoring schemes are available, each with its own unique features and advantages. One widely used censoring plan is the progressive Type II censoring (PTIIC) scheme, which allows researchers to remove some surviving units from the test according to a pre-specified censoring plan. More detailed information regarding the PTIIC can be found in Balakrishnan and Aggarwala [10], Balakrishnan and Hossain [11], Wu and Gui [12], and Baklizi [13]. A more flexible and efficient censoring plan was introduced by Ng et al. [14], designed to enhance the performance of statistical inference methods while ensuring that the test concludes with a fixed number of failures. This approach is known as the adaptive PTIIC (APTIIC) plan. Under the APTIIC framework, researchers can adjust the experimental design by changing the removal of test units once a specific time point is reached. This plan provides greater adaptability in handling real-world constraints. The APTIIC scheme has garnered significant attention in the literature, with numerous researchers exploring its applications; for instance, Haj Ahmad et al. [15], Panahi and Asadi [16], Lv et al. [17], and Sharma and Kumar [18]. Later, Anakha and Chacko [19] suggested a joint APTIIC plan.
The APTIIC plan typically assumes a fixed progressive censoring scheme throughout the testing process. Nonetheless, in certain real-world contexts, the removal of units may occur randomly rather than adhering to a predetermined pattern. Yuen and Tse [20] emphasized that in some studies, researchers might determine it is inappropriate or unsafe to continue monitoring specific units, even if those units have not yet failed. In such instances, units are randomly removed following each failure. To model random removal patterns, three approaches have been employed in the literature: discrete uniform removals, binomial removals, and beta-binomial removals. In this study, we chose to model random removal using the binomial distribution for several important reasons. First, it naturally represents the process of a random number of removals occurring over a fixed number of independent trials, which aligns with the assumption that each item has an equal and independent chance of being removed. Second, the binomial distribution is simple and interpretable, requiring only one parameter, the probability of removal, which is easy to estimate and explain. In contrast, alternatives like the beta-binomial distribution introduce additional parameters, increasing model complexity. Similarly, the discrete uniform distribution assumes equal probability across all possible removal counts, which does not reflect realistic variability. For further details, see Singh et al. [21] and Elbatal et al. [22]. The random removal strategy is frequently considered more practical than fixed-pattern censoring, particularly within the framework of competing risks models. This approach has been employed in studies conducted by Chacko and Mohan [23] and Qin and Gui [24]. Recently, Elshahhat and Nassar [25] expanded the APTIIC framework by integrating binomial removals and investigated the estimation challenges associated with the Weibull lifetime model. Additionally, Alqasem et al. [26] analyzed the Burr-X competing risks model under the APTIIC scheme with binomial removals, further illustrating the applicability of this methodology in reliability analysis.
In this study, we utilize the APTIIC plan with a binomial removal pattern to analyze the Hj distribution within the context of a competing risks model. To the best of our knowledge, no prior research has explored the Hj distribution under the framework of competing risks models, despite its potential applicability in modeling real-world scenarios. Our work can be justified based on the following points:
  • The Hj distribution is highly significant in modeling diverse types of data due to its flexible HRF and its broad applicability across various fields.
  • The APTIIC strategy with binomial removals is more suitable for data collection compared to the traditional fixed removal approach, as it better accommodates random patterns in real-world data.
  • The use of the competing risks model is crucial for analyzing data involving multiple failure causes, which cannot be effectively addressed using conventional methods.
These considerations highlight the novelty and practical relevance of our proposed approach. The primary objective of this paper is to explore the estimation of the Hj competing risks model under the APTIIC scheme with binomial removals. To achieve this, we employ two estimation approaches: maximum likelihood estimation and Bayesian estimation. The analysis encompasses both the model parameters and two key reliability metrics, namely the RF and HRF. First, we derive the maximum likelihood estimates (MLEs) and construct approximate confidence intervals (ACIs). Additionally, we compute Bayes estimates (BEs) and Bayesian credible intervals (BCIs). The Bayesian estimation is performed using the squared error (SE) loss function, along with the Markov Chain Monte Carlo (MCMC) technique for computational implementation. To evaluate the performance of the proposed methods, we conduct a simulation study to compare the MLEs, BEs, ACIs, and BCIs. Finally, we validate the proposed estimation techniques by applying them to real-world data sets.
The remainder of this study is structured as follows: Section 2 presents the derivation of the Hj competing risks model and discusses the APTIIC methodology in conjunction with binomial removals. Section 3 elaborates on the MLEs and ACIs for the parameters of the Hj competing risks model, as well as the associated reliability metrics. In Section 4, the BEs and their corresponding BCIs are examined. Section 5 outlines the simulation study conducted as part of this research. Section 6 analyzes two real data sets, while the paper concludes with Section 7.

2. Model Description and Notation

This section presents the model and the structure of the available data. Suppose n identical units are tested starting at time zero, with their lifetimes denoted by X 1 , , X n . Under the assumption that failures can occur due to two distinct causes, it is assumed for i = 1 , , n that X i = min ( X i 1 , X i 2 ) , where X i represents the observed failure time of the ith unit. Here, X i k corresponds to the latent failure time of the ith unit due to the kth cause of failure ( k = 1 , 2 ). Additionally, it is assumed that X i 1 and X i 2 are independently distributed, with each following the Hj distribution. The PDF and CDF of the Hj distribution with scale parameter μ k and shape parameter λ k , k = 1 , 2 , can be expressed, respectively, as
$$g_k(x;\mu_k,\lambda_k)=(1+x)^{-(\lambda_k+1)}\left[\lambda_k+\mu_k x(1+x)\right]e^{-0.5\mu_k x^2},\quad x>0,\ \mu_k,\lambda_k>0, \tag{1}$$
and
$$G_k(x;\mu_k,\lambda_k)=1-(1+x)^{-\lambda_k}e^{-0.5\mu_k x^2}. \tag{2}$$
The RF and HRF of X i = m i n ( X i 1 , X i 2 ) can be simply derived, respectively, as
$$S(t;\psi)=\prod_{k=1}^{2}(1+t)^{-\lambda_k}e^{-0.5\mu_k t^2}, \tag{3}$$
and
$$h(t;\psi)=\sum_{k=1}^{2}\left[\mu_k t+\lambda_k(1+t)^{-1}\right], \tag{4}$$
where ψ = ( μ 1 , μ 2 , λ 1 , λ 2 ) is the full parameter vector.
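For readers who wish to verify these quantities numerically, the following minimal Python sketch (helper names are ours, not from the paper) evaluates the Hjorth PDF, CDF, and the competing-risks RF and HRF of Equations (1)–(4). The parameter values in the usage lines are those adopted later in the simulation study of Section 5, for which the paper reports $S(0.1)\approx 0.8182$.

```python
import numpy as np

def hj_pdf(x, mu, lam):
    """Hjorth PDF, Equation (1)."""
    return (1.0 + x) ** (-(lam + 1.0)) * (lam + mu * x * (1.0 + x)) * np.exp(-0.5 * mu * x ** 2)

def hj_cdf(x, mu, lam):
    """Hjorth CDF, Equation (2)."""
    return 1.0 - (1.0 + x) ** (-lam) * np.exp(-0.5 * mu * x ** 2)

def cr_reliability(t, mu, lam):
    """Competing-risks RF, Equation (3); mu and lam hold (mu_1, mu_2) and (lam_1, lam_2)."""
    mu, lam = np.asarray(mu), np.asarray(lam)
    return np.prod((1.0 + t) ** (-lam) * np.exp(-0.5 * mu * t ** 2))

def cr_hazard(t, mu, lam):
    """Competing-risks HRF, Equation (4)."""
    mu, lam = np.asarray(mu), np.asarray(lam)
    return np.sum(mu * t + lam / (1.0 + t))

print(cr_reliability(0.1, mu=[0.5, 1.5], lam=[1.5, 0.5]))   # ~0.8182, matching Section 5
print(cr_hazard(0.1, mu=[0.5, 1.5], lam=[1.5, 0.5]))
```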
The APTIIC methodology under competing risks, incorporating binomial removals, operates as follows: We let m ( < n ) denote the predetermined target number of failures and let R = R 1 , R 2 , , R m represent the pre-specified removal scheme established prior to experimentation. A time threshold T ( 0 , ) is defined as the intended maximum test duration, though the experiment may continue beyond T if necessary. At the occurrence of the first failure time X 1 : m : n , R 1 surviving units are randomly removed from the remaining non-failed items. This procedure repeats iteratively: For the ith failure time X i : m : n , R i surviving units are eliminated randomly from the remaining units. In this case, we have two termination scenarios:
  • Early Termination: If the mth failure occurs before T (i.e., X m : m : n < T ), the experiment concludes at X m : m : n , adhering to conventional PTIIC.
  • Time-Exceeded Adaptation: If $X_{m:m:n} > T$, the removal scheme is modified: all scheduled removals after the dth failure (where d denotes the number of failures observed before T) are suspended by setting $R_i = 0$ for $i = d+1, \ldots, m-1$. Post-adjustment, no further removals occur until the mth failure time. At this stage, all remaining surviving units ($R^* = n - m - \sum_{i=1}^{d} R_i$) are removed.
The resulting APTIIC sample under competing risks consists of the following: (1) Failure times X 1 : m : n , X 2 : m : n , , X m : m : n , (2) Corresponding removal counts R = ( R 1 , R 2 , , R d , 0 , , 0 , R * ) , and (3) Competing risk indicator δ i , i = 1 , , m for each observed failure.
We let x i = x i : m : n for simplicity, x ̲ = ( x 1 , , x d , x d + 1 , , x m ) is an observed APTIIC with competing risks sample obtained from the Hj population, and I ( δ i = k ) refers to an indicator function such that
$$I(\delta_i=k)=\begin{cases}1, & \delta_i=k,\\ 0, & \text{otherwise}.\end{cases}$$
Then, after ignoring the constant term, we can write the joint likelihood function of the observed data as follows:
$$L_1(\psi;\underline{x}\mid R=r)=\prod_{i=1}^{m}\Big\{g_1(x_i)\left[1-G_2(x_i)\right]\Big\}^{I(\delta_i=1)}\Big\{g_2(x_i)\left[1-G_1(x_i)\right]\Big\}^{I(\delta_i=2)}\times\prod_{i=1}^{d}\Big\{\left[1-G_1(x_i)\right]\left[1-G_2(x_i)\right]\Big\}^{R_i}\Big\{\left[1-G_1(x_m)\right]\left[1-G_2(x_m)\right]\Big\}^{R^*}, \tag{5}$$
where $m_k=\sum_{i=1}^{m}I(\delta_i=k)$, for $k=1,2$, represents the count of observed failures attributed to cause k, and the total number of failures is $m=m_1+m_2$. We choose the APTIIC scheme over conventional censoring methods for several compelling reasons. First, the APTIIC dynamically adjusts the censoring process based on observed data, allowing the experiment to conclude near a predetermined threshold time. This is especially valuable for high-reliability products where failures are rare, as it prevents unnecessarily long test durations. Second, if failures occur more slowly than expected, the APTIIC halts the removal of remaining units after the threshold time and continues until the required number of failures is observed. Unlike the progressive Type I hybrid censoring scheme proposed by Kundu and Joarder [27], which may result in insufficient sample sizes when failures are infrequent, APTIIC guarantees the collection of the pre-specified number of failures. This ensures more reliable statistical inference by minimizing the risk of invalid or inefficient estimates due to inadequate data.
According to the methodology of Elshahhat and Nassar [25], the removal process $R_i$ at the ith failure stage is modeled using a binomial distribution with parameter $\theta$. In this framework, each surviving unit at the time of the ith failure has an identical probability $\theta$ of being removed from the experiment. The probability mass function (PMF) for the random removal $R_i$ is defined by two parameters: (1) $n_i^*=n-m-\sum_{j=1}^{i-1}r_j$: the number of surviving units available just before the ith failure, adjusted for prior removals; and (2) $\theta$: the constant removal probability for each surviving unit. In this case, we can write the PMF of $R_i$ as
$$P\!\left(R_i=r_i\mid R_{i-1}=r_{i-1},\ldots,R_1=r_1\right)=\binom{n_i^*}{r_i}\,\theta^{r_i}(1-\theta)^{\,n-m-\sum_{k=1}^{i}r_k}, \tag{6}$$
where $0\le r_i\le n_i^*$ for $i=1,2,\ldots,d$. Accordingly, one can easily write the joint probability of $R_i=r_i$, $i=1,2,\ldots,d$, as
$$L_2(R=r;\theta)=P(R_1=r_1)\,P(R_2=r_2\mid R_1=r_1)\times\cdots\times P(R_d=r_d\mid R_{d-1}=r_{d-1},\ldots,R_1=r_1). \tag{7}$$
It follows from (6) and (7) that
$$L_2(R=r;\theta)=\prod_{i=1}^{d}\binom{n_i^*}{r_i}\,\theta^{\sum_{i=1}^{d}r_i}\,(1-\theta)^{\,d(n-m)-\sum_{i=1}^{d}(d-i+1)r_i}. \tag{8}$$
This binomial model ensures that the removal process accounts for the decreasing number of surviving units as the experiment progresses. From (5) and (8), it is evident that the binomial parameter is independent of the Hj competing risks model. Therefore, as demonstrated in the following section, we separately derive the classical and Bayesian estimations for these two types of parameters.
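As a quick numerical check of this factorization, the hedged Python sketch below evaluates $\log L_2$ both sequentially from the conditional binomial PMFs in (6) and (7) and directly from the closed form in (8); the two computations agree. The removal counts, n, m, and θ used here are hypothetical placeholders.

```python
import numpy as np
from scipy.special import comb
from scipy.stats import binom

def removal_loglik_sequential(r, n, m, theta):
    """log L2 built from the conditional binomial PMFs in (6) and (7)."""
    logp, removed = 0.0, 0
    for ri in r:                          # r = (r_1, ..., r_d)
        n_star = n - m - removed          # surviving units available before this removal
        logp += binom.logpmf(ri, n_star, theta)
        removed += ri
    return logp

def removal_loglik_closed(r, n, m, theta):
    """log L2 from the closed form in (8)."""
    r = np.asarray(r)
    d = len(r)
    n_star = n - m - np.concatenate(([0], np.cumsum(r)[:-1]))
    i = np.arange(1, d + 1)
    return (np.sum(np.log(comb(n_star, r)))
            + r.sum() * np.log(theta)
            + (d * (n - m) - np.sum((d - i + 1) * r)) * np.log(1.0 - theta))

# Hypothetical removal pattern: n = 15 units, m = 8 failures, d = 3 removal stages
r_obs = [1, 0, 2]
print(removal_loglik_sequential(r_obs, n=15, m=8, theta=0.4))
print(removal_loglik_closed(r_obs, n=15, m=8, theta=0.4))
```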

3. Maximum Likelihood Estimation

We have an observed APTIIC sample x ̲ from a population subject to competing risks under the Hj model with removal counts ( R 1 , , R d ) generated via a binomial process. Then, the likelihood function for the unknown parameters ψ can be constructed as follows using the PDF defined in (1), the CDF in (2), and the likelihood formulation in (5), excluding constant terms:
$$L_1(\psi;\underline{x}\mid R=r)=\exp\!\left\{\sum_{k=1}^{2}\sum_{i=1}^{m_k}\log\!\left[\lambda_k+\mu_k x_i(1+x_i)\right]-(\lambda_1+\lambda_2)W_1-(\mu_1+\mu_2)W_2\right\}, \tag{9}$$
where
$$W_1=\sum_{i=1}^{m}\log(1+x_i)+\sum_{i=1}^{d}r_i\log(1+x_i)+r^*\log(1+x_m)$$
and
$$W_2=0.5\left[\sum_{i=1}^{m}x_i^2+\sum_{i=1}^{d}r_i x_i^2+r^* x_m^2\right].$$
The log likelihood function of (9) takes the following form:
$$\ell_1(\psi;\underline{x}\mid R=r)=\sum_{k=1}^{2}\sum_{i=1}^{m_k}\log\!\left[\lambda_k+\mu_k x_i(1+x_i)\right]-(\lambda_1+\lambda_2)W_1-(\mu_1+\mu_2)W_2. \tag{10}$$
Taking the first-order partial derivatives of the objective function defined in (10) with respect to the parameters results in a system of normal equations that must be solved simultaneously. They are given by
$$\frac{\partial \ell_1(\psi;\underline{x}\mid R=r)}{\partial \mu_1}=\sum_{i=1}^{m_1}\frac{x_i(1+x_i)}{\lambda_1+\mu_1 x_i(1+x_i)}-W_2=0, \tag{11}$$
$$\frac{\partial \ell_1(\psi;\underline{x}\mid R=r)}{\partial \mu_2}=\sum_{i=1}^{m_2}\frac{x_i(1+x_i)}{\lambda_2+\mu_2 x_i(1+x_i)}-W_2=0, \tag{12}$$
$$\frac{\partial \ell_1(\psi;\underline{x}\mid R=r)}{\partial \lambda_1}=\sum_{i=1}^{m_1}\frac{1}{\lambda_1+\mu_1 x_i(1+x_i)}-W_1=0, \tag{13}$$
and
$$\frac{\partial \ell_1(\psi;\underline{x}\mid R=r)}{\partial \lambda_2}=\sum_{i=1}^{m_2}\frac{1}{\lambda_2+\mu_2 x_i(1+x_i)}-W_1=0. \tag{14}$$
From the preceding expressions, it is evident that the normal equations for the Hj competing risks model parameters do not yield a closed-form solution. Consequently, the MLEs of the parameters cannot be derived analytically. To address this issue, numerical approximation techniques, such as the Newton–Raphson method, may be employed. In this study, the Newton–Raphson method was utilized to compute the required MLEs. On the other hand, the MLE of the binomial parameter $\theta$ can be readily derived by maximizing the likelihood function in (8). The log-likelihood function corresponding to (8), excluding the constant term, can be written as
$$\ell_2(R=r;\theta)=\log(\theta)\sum_{i=1}^{d}r_i+\log(1-\theta)\left[d(n-m)-\sum_{i=1}^{d}(d-i+1)r_i\right], \tag{15}$$
where $\upsilon_i=d-i+1$. By maximizing (15) with respect to $\theta$, one can easily obtain the MLE of $\theta$ as
$$\hat{\theta}=\frac{\sum_{i=1}^{d}r_i}{d(n-m)-\sum_{i=1}^{d}(\upsilon_i-1)r_i}.$$
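Since the normal equations (11)–(14) have no closed form, the authors compute the MLEs numerically (Newton–Raphson, implemented in Section 5 via the R package maxLik). The sketch below is an alternative illustration only, not the authors' code: it maximizes the log-likelihood in (10) with a derivative-free optimizer from SciPy and then evaluates the explicit $\hat{\theta}$. All data arrays are hypothetical placeholders shaped like an APTIIC competing-risks sample, not data from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x, delta, r, r_star):
    """Negative of the log-likelihood in (10); params = (mu1, mu2, lam1, lam2)."""
    if np.any(np.asarray(params) <= 0):
        return np.inf
    mu1, mu2, lam1, lam2 = params
    d = len(r)
    W1 = np.sum(np.log1p(x)) + np.sum(r * np.log1p(x[:d])) + r_star * np.log1p(x[-1])
    W2 = 0.5 * (np.sum(x**2) + np.sum(r * x[:d]**2) + r_star * x[-1]**2)
    term1 = np.sum(np.log(lam1 + mu1 * x[delta == 1] * (1 + x[delta == 1])))
    term2 = np.sum(np.log(lam2 + mu2 * x[delta == 2] * (1 + x[delta == 2])))
    return -(term1 + term2 - (lam1 + lam2) * W1 - (mu1 + mu2) * W2)

# Hypothetical APTIIC sample (placeholders, not data from the paper)
x = np.array([0.12, 0.25, 0.33, 0.41, 0.58, 0.74, 0.90, 1.12])   # ordered failure times
delta = np.array([1, 2, 1, 1, 2, 2, 1, 2])                        # observed causes of failure
r = np.array([1, 0, 2])                                           # removals at the first d = 3 failures
n, m = 15, 8
r_star = n - m - r.sum()                                          # survivors removed at the m-th failure

res = minimize(neg_loglik, x0=[0.5, 0.5, 1.0, 1.0],
               args=(x, delta, r, r_star), method="Nelder-Mead")
mu1_hat, mu2_hat, lam1_hat, lam2_hat = res.x

# Explicit MLE of the binomial removal probability, as derived above
d = len(r)
i = np.arange(1, d + 1)
theta_hat = r.sum() / (d * (n - m) - np.sum((d - i) * r))
```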
Once the MLEs of the parameters are acquired by simultaneously solving the aforementioned normal equations, the MLEs of the RF and the HRF can be computed utilizing the invariance property of the MLEs. We let $\hat{\mu}_k$ and $\hat{\lambda}_k$, $k=1,2$, represent the MLEs of the parameters $\mu_k$ and $\lambda_k$, respectively. It is evident that the RF and HRF of the Hj competing risks model are independent of the binomial parameter. Consequently, the MLEs of these functions are exclusively determined by the MLEs of the parameters $\mu_k$ and $\lambda_k$ for $k=1,2$. Accordingly, the MLEs of the RF and the HRF for a mission time $t>0$ are given, respectively, by
$$\hat{S}(t)=\prod_{k=1}^{2}(1+t)^{-\hat{\lambda}_k}e^{-0.5\hat{\mu}_k t^2}$$
and
$$\hat{h}(t)=\sum_{k=1}^{2}\left[\hat{\mu}_k t+\hat{\lambda}_k(1+t)^{-1}\right].$$
An essential aspect of statistical inference is the construction of interval estimates for the model parameters, the binomial parameter, and the reliability indices. To this end, the Fisher information matrix is derived, enabling the construction of $100(1-\alpha)\%$ confidence intervals. However, due to the analytical complexity often associated with obtaining the Fisher information matrix, the observed Fisher information matrix is commonly used in practice as an estimate of the variance–covariance structure. For the Hj competing risks model, this is achieved by inverting the observed Fisher information matrix as
$$I^{-1}(\hat{\psi})=\left.\left[-\begin{pmatrix}
\frac{\partial^{2}\ell_{1}}{\partial\mu_{1}^{2}} & 0 & \frac{\partial^{2}\ell_{1}}{\partial\mu_{1}\partial\lambda_{1}} & 0\\
0 & \frac{\partial^{2}\ell_{1}}{\partial\mu_{2}^{2}} & 0 & \frac{\partial^{2}\ell_{1}}{\partial\mu_{2}\partial\lambda_{2}}\\
\frac{\partial^{2}\ell_{1}}{\partial\mu_{1}\partial\lambda_{1}} & 0 & \frac{\partial^{2}\ell_{1}}{\partial\lambda_{1}^{2}} & 0\\
0 & \frac{\partial^{2}\ell_{1}}{\partial\mu_{2}\partial\lambda_{2}} & 0 & \frac{\partial^{2}\ell_{1}}{\partial\lambda_{2}^{2}}
\end{pmatrix}\right]^{-1}\right|_{\psi=\hat{\psi}}
=\begin{pmatrix}
\hat{\sigma}_{11} & 0 & \hat{\sigma}_{13} & 0\\
0 & \hat{\sigma}_{22} & 0 & \hat{\sigma}_{24}\\
\hat{\sigma}_{13} & 0 & \hat{\sigma}_{33} & 0\\
0 & \hat{\sigma}_{24} & 0 & \hat{\sigma}_{44}
\end{pmatrix}, \tag{16}$$
where ψ ^ = ( μ ^ 1 , μ ^ 2 , λ ^ 1 , λ ^ 2 ) is the MLE of ψ and
$$\frac{\partial^{2}\ell_{1}}{\partial\mu_{1}^{2}}=-\sum_{i=1}^{m_1}\frac{[x_i(1+x_i)]^{2}}{[\lambda_1+\mu_1 x_i(1+x_i)]^{2}},\qquad \frac{\partial^{2}\ell_{1}}{\partial\mu_{2}^{2}}=-\sum_{i=1}^{m_2}\frac{[x_i(1+x_i)]^{2}}{[\lambda_2+\mu_2 x_i(1+x_i)]^{2}},$$
$$\frac{\partial^{2}\ell_{1}}{\partial\lambda_{1}^{2}}=-\sum_{i=1}^{m_1}\frac{1}{[\lambda_1+\mu_1 x_i(1+x_i)]^{2}},\qquad \frac{\partial^{2}\ell_{1}}{\partial\lambda_{2}^{2}}=-\sum_{i=1}^{m_2}\frac{1}{[\lambda_2+\mu_2 x_i(1+x_i)]^{2}},$$
$$\frac{\partial^{2}\ell_{1}}{\partial\mu_{1}\partial\lambda_{1}}=-\sum_{i=1}^{m_1}\frac{x_i(1+x_i)}{[\lambda_1+\mu_1 x_i(1+x_i)]^{2}},\quad\text{and}\quad \frac{\partial^{2}\ell_{1}}{\partial\mu_{2}\partial\lambda_{2}}=-\sum_{i=1}^{m_2}\frac{x_i(1+x_i)}{[\lambda_2+\mu_2 x_i(1+x_i)]^{2}}.$$
Under the asymptotic theory of maximum likelihood estimation, the estimator $\hat{\psi}$ follows a multivariate normal distribution with mean $\psi$ and estimated variance–covariance matrix $I^{-1}(\hat{\psi})$ as presented in (16). Formally, $\hat{\psi}\xrightarrow{d} N\!\left(\psi, I^{-1}(\hat{\psi})\right)$ as $n\to\infty$. Thus, the $100(1-\alpha)\%$ ACIs for the parameters $\mu_k$ and $\lambda_k$ $(k=1,2)$ are derived as
$$\hat{\mu}_k\pm z_{\alpha/2}\sqrt{\hat{\sigma}_{kk}}\quad\text{and}\quad \hat{\lambda}_k\pm z_{\alpha/2}\sqrt{\hat{\sigma}_{pp}},\quad p=k+2,\ k=1,2,$$
where $z_{\alpha/2}$ is the upper $\alpha/2$ quantile of the standard normal distribution and $\hat{\sigma}_{jj}$, $j=1,\ldots,4$, denotes the diagonal elements of $I^{-1}(\hat{\psi})$ presented in (16). Regarding the binomial parameter, we first need to estimate the variance of the MLE of $\theta$ as follows:
$$\hat{\sigma}_{\theta}=\left[-\frac{d^{2}\ell_2(R=r;\theta)}{d\theta^{2}}\right]^{-1}_{\theta=\hat{\theta}},$$
where
$$\frac{d^{2}\ell_2(R=r;\theta)}{d\theta^{2}}=-\frac{\sum_{i=1}^{d}r_i}{\theta^{2}}-\frac{d(n-m)-\sum_{i=1}^{d}(d-i+1)r_i}{(1-\theta)^{2}}.$$
Then, the $100(1-\alpha)\%$ ACI for $\theta$ follows
$$\hat{\theta}\pm z_{\alpha/2}\sqrt{\hat{\sigma}_{\theta}}.$$
To construct the $100(1-\alpha)\%$ ACIs for the RF and HRF, it is necessary to estimate the variances of their respective MLEs. These variances are commonly approximated via the delta method. Prior to applying this method, the gradient vectors of the RF and HRF must first be derived using their expressions defined in (3) and (4) as
$$\hat{\Delta}_S=-\hat{S}(t)\left[0.5t^{2},\ 0.5t^{2},\ \log(1+t),\ \log(1+t)\right]^{\top}$$
and
$$\hat{\Delta}_h=\left[t,\ t,\ (1+t)^{-1},\ (1+t)^{-1}\right]^{\top},$$
with elements ordered as in $\psi=(\mu_1,\mu_2,\lambda_1,\lambda_2)$.
Then, the approximated variances can be obtained as
$$\hat{\sigma}_S\approx\hat{\Delta}_S^{\top}\,I^{-1}(\hat{\psi})\,\hat{\Delta}_S\quad\text{and}\quad \hat{\sigma}_h\approx\hat{\Delta}_h^{\top}\,I^{-1}(\hat{\psi})\,\hat{\Delta}_h.$$
Accordingly, the $100(1-\alpha)\%$ ACIs for the RF and HRF are
$$\hat{S}(t)\pm z_{\alpha/2}\sqrt{\hat{\sigma}_S}\quad\text{and}\quad \hat{h}(t)\pm z_{\alpha/2}\sqrt{\hat{\sigma}_h}.$$
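As a hedged continuation of the hypothetical MLE sketch above (same arrays x, delta and estimates mu1_hat, ..., lam2_hat), the fragment below assembles the observed information matrix from the analytic second derivatives preceding (16), inverts it, and applies the delta method to produce 95% ACIs for $S(t)$ and $h(t)$.

```python
import numpy as np
from scipy.stats import norm

def observed_info(params, x, delta):
    """4x4 observed information for psi = (mu1, mu2, lam1, lam2):
    the negated analytic second derivatives of the log-likelihood."""
    mu1, mu2, lam1, lam2 = params
    info = np.zeros((4, 4))
    for k, (mu, lam) in enumerate([(mu1, lam1), (mu2, lam2)]):
        xk = x[delta == k + 1]
        u = xk * (1 + xk)
        denom = (lam + mu * u) ** 2
        info[k, k] = np.sum(u**2 / denom)                    # -d2l/dmu_k^2
        info[k + 2, k + 2] = np.sum(1.0 / denom)             # -d2l/dlam_k^2
        info[k, k + 2] = info[k + 2, k] = np.sum(u / denom)  # -d2l/dmu_k dlam_k
    return info

def delta_method_ci(params, cov, t, level=0.95):
    """ACIs for S(t) and h(t) via the delta method, elements ordered as psi."""
    mu = np.array(params[:2]); lam = np.array(params[2:])
    S = np.prod((1 + t) ** (-lam) * np.exp(-0.5 * mu * t**2))
    h = np.sum(mu * t + lam / (1 + t))
    grad_S = -S * np.array([0.5 * t**2, 0.5 * t**2, np.log1p(t), np.log1p(t)])
    grad_h = np.array([t, t, 1.0 / (1 + t), 1.0 / (1 + t)])
    z = norm.ppf(0.5 + level / 2)
    se_S = np.sqrt(grad_S @ cov @ grad_S)
    se_h = np.sqrt(grad_h @ cov @ grad_h)
    return (S - z * se_S, S + z * se_S), (h - z * se_h, h + z * se_h)

psi_hat = np.array([mu1_hat, mu2_hat, lam1_hat, lam2_hat])  # from the MLE sketch
cov_hat = np.linalg.inv(observed_info(psi_hat, x, delta))
ci_S, ci_h = delta_method_ci(psi_hat, cov_hat, t=0.1)
```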

4. Bayesian Estimation

This section derives Bayes estimators for the parameters and reliability characteristics of the Hj competing risks model under APTIIC data as well as the binomial parameter. In Bayesian point estimation, selecting an appropriate loss function is a key consideration. In this study, the symmetric squared error loss function is utilized, but other loss functions can also be seamlessly integrated. In Bayesian inference, selecting appropriate prior distributions is critical. For the Hj distribution, conjugate priors are not available. Therefore, independent gamma priors are assigned to the Hj parameters μ k and λ k ( k = 1 , 2 ). The gamma distribution is widely used as a prior in Bayesian analysis because of its mathematical convenience and flexibility in modeling positive continuous parameters. Its shape and scale parameters allow it to represent a wide range of beliefs, from highly skewed to nearly symmetric distributions, while still ensuring analytical tractability in posterior computations. The gamma distribution also offers computational advantages which facilitate efficient sampling in MCMC algorithms. Its robustness and versatility make it particularly suitable for applications in reliability and survival analysis, reinforcing its status as a practical and reliable prior choice in Bayesian frameworks. We assume that μ k G a m m a ( a k , b k ) and λ k G a m m a ( c k , d k ) , ( k = 1 , 2 ). Then, the joint gamma prior distribution for the parameters μ k and λ k is defined as
$$\pi_1(\psi)\propto\prod_{k=1}^{2}\mu_k^{a_k-1}\lambda_k^{c_k-1}e^{-b_k\mu_k-d_k\lambda_k},\quad \mu_k,\lambda_k>0, \tag{17}$$
where a k , b k > 0 and c k , d k > 0 are hyper-parameters. Gamma priors offer flexibility, encompassing diverse shapes that can approximate other distributions. Noninformative priors emerge as a special case when hyper-parameters a k , b k , c k , d k approach zero.
In relation to the binomial parameter θ , examining the likelihood in (8) reveals that the natural conjugate prior distribution for θ is the beta distribution. We assume that ε 1 , ε 2 > 0 represent the hyper-parameters of the beta prior distribution. Then, the prior distribution of the parameter θ can be expressed in the following form:
$$\pi_2(\theta)\propto\theta^{\varepsilon_1-1}(1-\theta)^{\varepsilon_2-1},\quad 0<\theta<1. \tag{18}$$
To derive the Bayes estimators for the Hj competing risks model and its associated reliability metrics, we first determine the posterior distribution of ψ . This is achieved by combining the observed data, represented by the likelihood function in (9), with the joint prior distribution given in (17). The posterior distribution is then obtained by applying Bayes’ rule as follows:
$$H_1(\psi\mid\underline{x},R=r)=A_1\prod_{k=1}^{2}\mu_k^{a_k-1}\lambda_k^{c_k-1}\exp\!\left\{\sum_{k=1}^{2}\sum_{i=1}^{m_k}\log\!\left[\lambda_k+\mu_k x_i(1+x_i)\right]-\sum_{k=1}^{2}\lambda_k(d_k+W_1)-\sum_{k=1}^{2}\mu_k(b_k+W_2)\right\}, \tag{19}$$
where $A_1$ refers to the normalizing constant. On the other hand, by combining (8) and (18) and applying Bayes' rule, the posterior distribution of the binomial parameter can be derived as
$$H_2(\theta\mid R=r)=\frac{1}{B(\varepsilon_1^*,\varepsilon_2^*)}\,\theta^{\varepsilon_1^*-1}(1-\theta)^{\varepsilon_2^*-1}, \tag{20}$$
where $B(\cdot,\cdot)$ is the beta function, $\varepsilon_1^*=\sum_{i=1}^{d}r_i+\varepsilon_1$, and $\varepsilon_2^*=d(n-m)-\sum_{i=1}^{d}(d-i+1)r_i+\varepsilon_2$.
Owing to the straightforward and well-known form of the posterior distribution of the parameter θ , which follows the beta distribution with shape parameters ε 1 * and ε 2 * , we begin by deriving the Bayes estimator and the BCI for θ . Under the squared error loss function, the Bayes estimator of θ is expressed as
$$\tilde{\theta}=\int_{0}^{1}\theta\,H_2(\theta\mid R=r)\,d\theta=\frac{\varepsilon_1^*}{\varepsilon_1^*+\varepsilon_2^*}.$$
In addition, the $100(1-\alpha)\%$ BCI of $\theta$ can be obtained using the posterior distribution in (20) as
$$\left(B(\varepsilon_1^*,\varepsilon_2^*,\alpha/2),\ B(\varepsilon_1^*,\varepsilon_2^*,1-\alpha/2)\right),$$
where B ( . , . , . ) is the quantile function of the beta distribution.
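Because the posterior of $\theta$ in (20) is a standard beta distribution, its Bayes estimate and BCI need no MCMC. A minimal Python sketch, reusing the hypothetical removal counts from the earlier MLE example and assuming illustrative hyper-parameter values $(\varepsilon_1,\varepsilon_2)$, is as follows.

```python
import numpy as np
from scipy.stats import beta

r = np.array([1, 0, 2]); n, m, d = 15, 8, 3      # hypothetical removal pattern
eps1, eps2 = 0.5, 0.5                            # illustrative beta hyper-parameters

i = np.arange(1, d + 1)
eps1_star = r.sum() + eps1
eps2_star = d * (n - m) - np.sum((d - i + 1) * r) + eps2

theta_bayes = eps1_star / (eps1_star + eps2_star)           # posterior mean under SE loss
bci_theta = beta.ppf([0.025, 0.975], eps1_star, eps2_star)  # 95% BCI from beta quantiles
```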
Regarding the Bayes estimators of the Hj competing risks model, we let φ ( ψ ) be any function of the model parameters. Then, by employing the squared error loss function, the Bayes estimator of φ ( ψ ) can be obtained using the posterior distribution in (19) as
$$\tilde{\varphi}(\psi)=\int_{\psi}\varphi(\psi)\,H_1(\psi\mid\underline{x},R=r)\,d\psi. \tag{22}$$
The Bayesian estimator defined in (22) requires solving integrals that are analytically intractable, primarily due to the high-dimensional and non-conjugate nature of the posterior distribution. To address this issue, we utilize the MCMC methods to directly sample from the posterior distribution outlined in (19). These samples are then used to approximate posterior expectations for Bayes estimates and to construct the BCIs. A key step in implementing MCMC involves deriving the full conditional posterior distributions for each parameter. From the joint posterior in (19), the conditional distributions for μ k and λ k ( k = 1 , 2 ) are as follows:
$$H_1(\mu_k\mid\lambda_k,\underline{x},R=r)\propto\mu_k^{a_k-1}\exp\!\left\{\sum_{i=1}^{m_k}\log\!\left[\lambda_k+\mu_k x_i(1+x_i)\right]-\mu_k(b_k+W_2)\right\} \tag{23}$$
and
$$H_1(\lambda_k\mid\mu_k,\underline{x},R=r)\propto\lambda_k^{c_k-1}\exp\!\left\{\sum_{i=1}^{m_k}\log\!\left[\lambda_k+\mu_k x_i(1+x_i)\right]-\lambda_k(d_k+W_1)\right\}. \tag{24}$$
Upon examining the conditional distributions in (23) and (24), it is clear that these distributions do not simplify to any standard forms. As a result, Gibbs sampling cannot be applied in this case. Consequently, we propose the use of the Metropolis–Hastings (MH) algorithm to sample from these distributions, utilizing a normal distribution as the proposal distribution. The MH algorithm is a well-established MCMC procedure for sampling from complex probability distributions. This approach generates candidate samples using a proposal distribution, which are then evaluated for acceptance based on their probability under the target posterior. In this analysis, the proposal distribution is a normal distribution with the MLE of each parameter as its mean and the corresponding estimated variance from (16) as its variance. The algorithm iteratively proposes new parameter values from this distribution and computes their acceptance probabilities, ensuring convergence of the Markov chain to the target posterior distribution. Once sufficient samples are generated, it becomes feasible to compute the Bayes estimates and BCIs for $\mu_k$ and $\lambda_k$, $k=1,2$, as well as the reliability metrics. The MCMC sampling process follows the steps outlined below.
  • Put the initial values of μ k ( 0 ) and λ k ( 0 ) ; here, we use the MLEs as starting values.
  • Put j = 1 .
  • Perform the following MH steps to simulate μ k ( j ) , with k = 1 , 2 :
    (a)
Generate $\mu_k^*\sim N(\hat{\mu}_k,\hat{\sigma}_{kk})$, where $\hat{\mu}_k$ is the MLE of $\mu_k$ and $\hat{\sigma}_{kk}$ is its estimated variance obtained from (16).
    (b)
    Obtain the acceptance probability (AP):
$$AP_{\mu_k}=\min\!\left\{1,\ \frac{H_1\!\left(\mu_k^*\mid\lambda_k^{(j-1)},\underline{x},R=r\right)}{H_1\!\left(\mu_k^{(j-1)}\mid\lambda_k^{(j-1)},\underline{x},R=r\right)}\right\},\quad k=1,2.$$
    (c)
    Generate a random number u from the unit uniform distribution.
    (d)
Set the generated value $\mu_k^*$ as $\mu_k^{(j)}$ if $u\le AP_{\mu_k}$, and set $\mu_k^{(j)}=\mu_k^{(j-1)}$ if $u>AP_{\mu_k}$.
  • Repeat Step 3 to obtain λ k ( j ) from (24).
  • Compute the RF and HRF at iteration j as
$$S^{(j)}=\prod_{k=1}^{2}(1+t)^{-\lambda_k^{(j)}}e^{-0.5\mu_k^{(j)}t^2}\quad\text{and}\quad h^{(j)}=\sum_{k=1}^{2}\left[\mu_k^{(j)}t+\lambda_k^{(j)}(1+t)^{-1}\right].$$
  • Replace j by j + 1 .
  • Repeat Steps 3 to 6, M times.
  • Store the acquired sequence as
$$\left[\mu_1^{(j)},\ \mu_2^{(j)},\ \lambda_1^{(j)},\ \lambda_2^{(j)},\ S^{(j)},\ h^{(j)}\right],\quad j=1,\ldots,M. \tag{25}$$
Under the squared error loss function, the Bayes estimates for the parameters of the Hj competing risks model, as well as the two reliability metrics, are calculated as the means of their respective MCMC sample values generated in (25). To ensure convergence and reduce the influence of initialization bias, the initial B iterations are discarded as a burn-in period. The Bayes estimates in this context are given as follows:
$$\tilde{\mu}_k=\frac{1}{M-B}\sum_{j=B+1}^{M}\mu_k^{(j)},\qquad \tilde{\lambda}_k=\frac{1}{M-B}\sum_{j=B+1}^{M}\lambda_k^{(j)},\quad k=1,2,$$
$$\tilde{S}(t)=\frac{1}{M-B}\sum_{j=B+1}^{M}S^{(j)},\qquad \tilde{h}(t)=\frac{1}{M-B}\sum_{j=B+1}^{M}h^{(j)}.$$
To compute the BCIs, we first sort the acquired MCMC sequences in (25) in ascending order as $\mu_k^{[B+1]}<\cdots<\mu_k^{[M]}$, $\lambda_k^{[B+1]}<\cdots<\lambda_k^{[M]}$, $S^{[B+1]}<\cdots<S^{[M]}$, and $h^{[B+1]}<\cdots<h^{[M]}$. Accordingly, the $100(1-\alpha)\%$ BCIs are as follows:
$$\left(\mu_k^{[\alpha(M-B)/2]},\ \mu_k^{[(1-\alpha/2)(M-B)]}\right),\qquad \left(\lambda_k^{[\alpha(M-B)/2]},\ \lambda_k^{[(1-\alpha/2)(M-B)]}\right),\quad k=1,2,$$
$$\left(S^{[\alpha(M-B)/2]},\ S^{[(1-\alpha/2)(M-B)]}\right)\quad\text{and}\quad \left(h^{[\alpha(M-B)/2]},\ h^{[(1-\alpha/2)(M-B)]}\right).$$
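A hedged Python sketch of the sampler described in this section is given below; it is an illustration rather than the authors' implementation (their computations use R with the coda package). It assumes that x, delta and the quantities $W_1$ and $W_2$ defined after (9) have been computed, and that the MLEs and their estimated variances from (16) are available (e.g., mu1_hat, ..., cov_hat from the earlier sketches); small gamma hyper-parameters mimic near-noninformative priors. A symmetric random-walk normal proposal, with proposal variances taken from (16) and chains started at the MLEs, keeps the acceptance ratio in the form of Step 3(b).

```python
import numpy as np

rng = np.random.default_rng(2025)

def log_cond_mu(mu, lam, xk, a, b, W2):
    """Log conditional posterior of mu_k, Equation (23), up to a constant."""
    if mu <= 0:
        return -np.inf
    return (a - 1) * np.log(mu) + np.sum(np.log(lam + mu * xk * (1 + xk))) - mu * (b + W2)

def log_cond_lam(lam, mu, xk, c, d_hyp, W1):
    """Log conditional posterior of lambda_k, Equation (24), up to a constant."""
    if lam <= 0:
        return -np.inf
    return (c - 1) * np.log(lam) + np.sum(np.log(lam + mu * xk * (1 + xk))) - lam * (d_hyp + W1)

def mh_step(current, log_target, prop_sd):
    """One random-walk Metropolis step with a symmetric normal proposal."""
    proposal = current + rng.normal(0.0, prop_sd)
    if np.log(rng.uniform()) < log_target(proposal) - log_target(current):
        return proposal
    return current

def hj_mcmc(x, delta, W1, W2, mu0, lam0, sd_mu, sd_lam, hyp, M=12000, B=2000):
    """MH-within-Gibbs sampler for (mu_1, mu_2, lambda_1, lambda_2)."""
    mu, lam = np.array(mu0, dtype=float), np.array(lam0, dtype=float)
    out = np.empty((M, 4))                      # columns: mu1, mu2, lam1, lam2
    for j in range(M):
        for k in (0, 1):
            xk = x[delta == k + 1]
            mu[k] = mh_step(mu[k], lambda v: log_cond_mu(v, lam[k], xk, hyp["a"][k], hyp["b"][k], W2), sd_mu[k])
            lam[k] = mh_step(lam[k], lambda v: log_cond_lam(v, mu[k], xk, hyp["c"][k], hyp["d"][k], W1), sd_lam[k])
        out[j] = [mu[0], mu[1], lam[0], lam[1]]
    return out[B:]                              # drop the burn-in period

# Example call (inputs from the MLE sketch; tiny hyper-parameters give near-flat priors):
# draws = hj_mcmc(x, delta, W1, W2, mu0=[mu1_hat, mu2_hat], lam0=[lam1_hat, lam2_hat],
#                 sd_mu=np.sqrt([cov_hat[0, 0], cov_hat[1, 1]]),
#                 sd_lam=np.sqrt([cov_hat[2, 2], cov_hat[3, 3]]),
#                 hyp={"a": [1e-3] * 2, "b": [1e-3] * 2, "c": [1e-3] * 2, "d": [1e-3] * 2})
# bayes_point = draws.mean(axis=0)                   # SE-loss estimates of (mu1, mu2, lam1, lam2)
# bci = np.quantile(draws, [0.025, 0.975], axis=0)   # 95% BCIs
```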

5. Monte Carlo Simulations

To assess the performance of the theoretical results—specifically the point and interval estimates of λ i , i = 1 , 2 , μ i , i = 1 , 2 , S ( t ) , h ( t ) , and θ —we generate multiple adaptive Type II progressive hybrid censored datasets with binomial removals based on the Hj lifetime distribution under a competing risks setup.

5.1. Simulation Design

By assigning $(\lambda_1,\lambda_2)=(1.5,0.5)$ and $(\mu_1,\mu_2)=(0.5,1.5)$ in the Hj competing risks model, we simulate 1000 APTIIC datasets under a competing risks framework with binomial removals. These datasets are gathered across various configurations of binomial probability θ (=0.4, 0.8), threshold time T (=0.3, 0.5), total number of test units n (=30, 50, 80), and different effective sample sizes m for each n. At a distinct time $t=0.1$, the corresponding actual values of $S(t)$ and $h(t)$ are taken as 0.8182 and 0.8546, respectively. To perform the removal mechanism of the surviving items $r_i$ during the APTIIC strategy, follow the steps presented in Algorithm 1.
Algorithm 1 The APTIIC Generation Steps:
Step 1:
Set the values of λ i and μ i (for i = 1 , 2 ).
Step 2:
Set the binomial probability value θ .
Step 3:
Generate $r_1$ from $\text{Bin}(n-m,\theta)$.
Step 4:
Generate r i as
$$r_i\sim\text{Bin}\!\left(n-m-\sum_{v=1}^{i-1}r_v,\ \theta\right),\quad\text{for } i=\begin{cases}2,3,\ldots,m-1, & \text{for } X_{m:m:n}<T;\\ 2,3,\ldots,d, & \text{for } X_{m:m:n}>T.\end{cases}$$
Step 5:
Find r * according to the following relation:
$$r^*=\begin{cases}n-m-\sum_{v=1}^{m-1}r_v, & \text{if } n-m-\sum_{v=1}^{m-1}r_v>0,\ \text{for } X_{m:m:n}<T;\\ n-m-\sum_{v=1}^{d}r_v, & \text{if } n-m-\sum_{v=1}^{d}r_v>0,\ \text{for } X_{m:m:n}>T;\\ 0, & \text{otherwise}.\end{cases}$$
Step 6:
Given R = r , find a PTIIC sample with binomial removals as
(a)
 Generate $\Omega_i$, $i=1,2,\ldots,m$, independently from $U(0,1)$.
(b)
 Set $\varrho_i=\Omega_i^{\left(i+\sum_{v=m-i+1}^{m}r_v\right)^{-1}}$, $i=1,2,\ldots,m$.
(c)
 Set $u_i=1-\varrho_m\varrho_{m-1}\cdots\varrho_{m-i+1}$ for $i=1,2,\ldots,m$.
(d)
 Set $x_i=G^{-1}(u_i;\lambda_v,\mu_v)$, $i=1,2,\ldots,m$, $v=1,2$.
Step 7:
For $x_i$, $i=1,2,\ldots,m$, determine $d$ such that $x_d<T<x_{d+1}$, and discard $x_i$ for $i=d+2,\ldots,m$.
Step 8:
From the truncated density $g(x)/S(x_{d+1})$, obtain the first $m-d-1$ order statistics $(x_{d+2},\ldots,x_m)$ from a sample of size $n-d-\sum_{v=1}^{d}r_v-1$.
Step 9:
For $x_i$, $i=1,2,\ldots,m$, assign the corresponding cause of failure $c_i\in\{1,2\}$.
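A hedged Python sketch of Algorithm 1 follows. It covers Steps 1–6 together with Step 9 for the pre-threshold case $X_{m:m:n}<T$ (the adaptive adjustment of Steps 7 and 8 is omitted for brevity) and exploits the fact that the minimum of two independent Hjorth variables is again Hjorth with parameters $(\mu_1+\mu_2,\lambda_1+\lambda_2)$. Since Step 9 does not spell out how causes are attached, the sketch assumes the usual rule of assigning cause k with probability proportional to its cause-specific hazard at the observed failure time.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

def hj_min_cdf(x, mu, lam):
    """CDF of min(X_i1, X_i2): a Hjorth CDF with parameters (mu1 + mu2, lam1 + lam2)."""
    return 1.0 - (1.0 + x) ** (-(lam[0] + lam[1])) * np.exp(-0.5 * (mu[0] + mu[1]) * x ** 2)

def hj_min_quantile(u, mu, lam):
    """Numerical inverse of the minimum's CDF (no closed form is available)."""
    return brentq(lambda x: hj_min_cdf(x, mu, lam) - u, 0.0, 1e3)

def gen_ptiic_binomial(n, m, theta, mu, lam):
    """Steps 1-6 and 9 of Algorithm 1: a progressively Type II censored
    competing-risks sample with binomial removals (pre-threshold branch)."""
    # Steps 3-4: binomial removals r_1, ..., r_{m-1}
    r = np.zeros(m, dtype=int)
    for i in range(m - 1):
        avail = n - m - r[:i].sum()
        r[i] = rng.binomial(avail, theta) if avail > 0 else 0
    # Step 5: all remaining survivors are removed at the m-th failure
    r[m - 1] = n - m - r[:m - 1].sum()
    # Step 6: progressive Type II order statistics via the uniform transformation
    omega = rng.uniform(size=m)
    rho = omega ** (1.0 / (np.arange(1, m + 1) + np.cumsum(r[::-1])))
    u = 1.0 - np.cumprod(rho[::-1])
    x = np.array([hj_min_quantile(ui, mu, lam) for ui in u])
    # Step 9 (assumed rule): causes drawn with probability proportional to the
    # cause-specific hazards at each observed failure time
    h1 = mu[0] * x + lam[0] / (1 + x)
    h2 = mu[1] * x + lam[1] / (1 + x)
    cause = np.where(rng.uniform(size=m) < h1 / (h1 + h2), 1, 2)
    return x, cause, r

x_sim, cause_sim, r_sim = gen_ptiic_binomial(n=30, m=20, theta=0.4, mu=[0.5, 1.5], lam=[1.5, 0.5])
```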
The idea of setting prior hyper-parameters (like the suggested gamma density priors) often revolves around encoding prior beliefs. In particular, the method suggested by Kundu [28] proposes choosing hyper-parameters such that the prior mean reflects the true value of the parameter based on domain knowledge. Thus, to evaluate all acquired Bayes theoretical results of λ i , i = 1 , 2 , μ i , i = 1 , 2 , S ( t ) , h ( t ) , or θ , the prior distributions for λ i or μ i for i = 1 , 2 , are chosen such that their means match the true values. Two informative prior sets for ( a i , b i , c i , d i ) , i = 1 , 2 of λ i and μ i for i = 1 , 2 , are considered, namely the following:
  • For ( c i , d i ) , i = 1 , 2 of λ i :
    Prior-A:(7.5, 5) and Prior-B:(15, 10) of λ 1 ;
    Prior-A:(2.5, 5) and Prior-B:(5, 10) of λ 2 ,
  • For ( a i , b i ) , i = 1 , 2 of μ i :
    Prior-A:(2.5, 5) and Prior-B:(5, 10) of μ 1 ;
    Prior-A:(7.5, 5) and Prior-B:(15, 10) of μ 2 ,
On the other hand, to examine the hyper-parameter behavior of ( ε 1 , ε 2 ) for θ , we consider the following sets:
  • For θ = 0.4 : Prior-A:(0.2, 0.3) and Prior-B:(0.4, 0.6);
  • For θ = 0.8 : Prior-A:(1.6, 0.4) and Prior-B:(3.2, 0.8).
The sensitivity analysis shows varying degrees of dependence on prior choice across parameters. Figure 1 shows how the Bayesian posterior summaries for four model parameters ( λ i and μ i (for i = 1 , 2 )) change with different kinds of prior distributions: improper, informative, overdispersed, and weak. It states that λ 2 is highly robust, with posterior distributions nearly unaffected by prior type. In contrast, μ 2 demonstrates strong sensitivity: informative priors shift the posterior upward and reduce spread, while weak and overdispersed priors result in lower, more dispersed estimates. Both λ 1 and μ 1 exhibit moderate sensitivity, where informative priors slightly shift the posterior centers and reduce variability. These findings emphasize that while some parameters are data-driven and stable, others (particularly μ 2 ) require careful prior specification due to limited identifiability from the data alone.
For each simulated dataset, MLEs and their associated 95% ACI are computed for the Hj model parameters λ i and μ i (for i = 1 , 2 ), the reliability measures S ( t ) and h ( t ) , and the binomial parameters θ . The Newton–Raphson algorithm, implemented using the maxLik package (version 1.5-2.1) suggested by Henningsen and Toomet [29], is used to obtain all frequentist estimators. In contrast, Bayesian point estimates and 95% BCI estimates for the same quantities are obtained via MCMC sampling, using the coda package (version 0.19-4.1) developed by Plummer et al. [30]. To perform the MCMC estimation and associated 95% BCI for λ i , i = 1 , 2 , μ i , i = 1 , 2 , S ( t ) , h ( t ) , and θ (as outlined in Section 4), we generate 12,000 MCMC samples, discarding the first 2000 as burn-in.
To check the uniqueness of the calculated MLEs of λ i and μ i (for i = 1 , 2 ) in the proposed Monte Carlo simulations, using just one replicate for illustration, the contour diagrams for log-likelihoods of pair parameters ( λ i , μ i ) for i = 1 , 2 (when ( n , m , T , θ ) = ( 50 , 40 , 0.5 , 0.8 ) as an example) are plotted and shown in Figure 2. As a result, the ML estimates of ( λ 1 , λ 2 , μ 1 , μ 2 ) according to this simulation setting are (1.3539, 0.6198, 1.0384, 0.3717). The contour plots provide strong evidence that, in this simulation, the MLEs λ ^ i and μ ^ i (for i = 1 , 2 ) exist and are unique, as well as confirm stability and identifiability of the estimates in repeated sampling.
To assess the convergence of the MCMC chains for Hj ( λ i , μ i ) , i = 1 , 2 , parameters, in this simulation, both Geweke and Heidelberger–Welch diagnostics are applied. The Geweke diagnostic evaluates the convergence of an MCMC chain by comparing the means of its early and late segments. If the chain is stationary (i.e., converged), these means should be statistically indistinguishable, producing a Z-score close to zero. The Heidelberger–Welch diagnostic performs a two-part test called (1) a stationarity test to verify whether the chain has reached equilibrium and (2) a half-width test to check whether the Monte Carlo error in the posterior mean is sufficiently small relative to the estimate, ensuring estimation precision. The Geweke Z-scores (shown in Figure 3) for all parameters lie within ± 2 ( λ 1 = 1.63 , λ 2 = 0.29 , μ 1 = 0.44 , μ 2 = 0.77 ) , indicating no significant difference between early and late chain segments and thus supporting convergence. Consistently, the Heidelberger–Welch test confirms that all chains passed the stationarity and half-width criteria, with P -values ( λ 1 = 0.320 , λ 2 = 0.498 , μ 1 = 0.871 , μ 2 = 0.404 ) well above 0.05 and negligible Monte Carlo error. These results provide strong numerical evidence that the MCMC chains have adequately converged and that the resulting posterior estimates are both stable and reliable.
To visualize these findings, the autocorrelation function (ACF) and the trace (with its Gaussian kernel density) are plotted; see Figure 4 and Figure 5. Figure 4 indicates that the ACF values of $\lambda_i$ and $\mu_i$ (for $i=1,2$) resemble random noise and approach zero as the lag increases, while Figure 5 implies that the simulated chains are well mixed. In summary, Figure 4 and Figure 5 support the numerical evidence provided by the Geweke and Heidelberger–Welch diagnostics and confirm that the Markov chain draws of $\lambda_i$ and $\mu_i$ (for $i=1,2$) are acceptable.
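Simplified versions of two of these diagnostics are easy to code directly. The sketch below computes the sample ACF shown in Figure 4 and a Geweke-type z-score in which the spectral variance estimates used by coda's geweke.diag are replaced by naive variance estimates, so its values will differ slightly from those reported above.

```python
import numpy as np

def acf(chain, max_lag=40):
    """Sample autocorrelation function of an MCMC chain up to max_lag."""
    c = np.asarray(chain, dtype=float)
    c = c - c.mean()
    denom = np.sum(c ** 2)
    return np.array([np.sum(c[:len(c) - k] * c[k:]) / denom for k in range(max_lag + 1)])

def geweke_z(chain, first=0.1, last=0.5):
    """Geweke-type z-score comparing the means of the first 10% and last 50%
    of a chain, using naive (non-spectral) variance estimates."""
    c = np.asarray(chain, dtype=float)
    a, b = c[: int(first * len(c))], c[int((1 - last) * len(c)):]
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
```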
The mean point estimate (MPE), root mean square error (RMSE), mean relative absolute bias (MRAB), average interval length (AIL), and coverage percentage (CP) of θ (for instance) are obtained as
$$\text{MPE}(\hat{\theta})=\frac{1}{1000}\sum_{s=1}^{1000}\hat{\theta}^{[s]},$$
$$\text{RMSE}(\hat{\theta})=\sqrt{\frac{1}{1000}\sum_{s=1}^{1000}\left(\hat{\theta}^{[s]}-\theta\right)^{2}},$$
$$\text{MRAB}(\hat{\theta})=\frac{1}{1000}\sum_{s=1}^{1000}\frac{1}{\theta}\left|\hat{\theta}^{[s]}-\theta\right|,$$
$$\text{AIL}_{95\%}(\theta)=\frac{1}{1000}\sum_{s=1}^{1000}\left(U_{\hat{\theta}^{[s]}}-L_{\hat{\theta}^{[s]}}\right),$$
and
$$\text{CP}_{95\%}(\theta)=\frac{1}{1000}\sum_{s=1}^{1000}I_{\left(L_{\hat{\theta}^{[s]}};\,U_{\hat{\theta}^{[s]}}\right)}\!\left(\theta\right),$$
respectively, where $\hat{\theta}^{[s]}$ represents the estimate of $\theta$ obtained from the sth sample, $I(\cdot)$ is the indicator function, and $(L(\cdot),U(\cdot))$ denotes the estimated interval bounds. The simulated MPE, RMSE, MRAB, AIL, and CP values of $(\lambda_i,\mu_i)$, $i=1,2$, $S(t)$, and $h(t)$ are readily obtained in a similar manner.
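These criteria translate directly into code. The helper below (ours, not the authors' simulation script) computes all five summaries for any quantity from its replicated point estimates and 95% interval bounds.

```python
import numpy as np

def summarize(est, lower, upper, true_value):
    """MPE, RMSE, MRAB, AIL, and CP over replicated estimates of one quantity."""
    est, lower, upper = map(np.asarray, (est, lower, upper))
    return dict(
        MPE=est.mean(),
        RMSE=np.sqrt(np.mean((est - true_value) ** 2)),
        MRAB=np.mean(np.abs(est - true_value)) / true_value,
        AIL=np.mean(upper - lower),
        CP=np.mean((lower <= true_value) & (true_value <= upper)),
    )
```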

5.2. Simulation Results

All Monte Carlo results for λ i , i = 1 , 2 , μ i , i = 1 , 2 , S ( t ) , h ( t ) , and θ are presented in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. Now, based on lower RMSE, MRAB, AIL values, and higher CP values, the following observations are made:
  • All estimators for λ i , i = 1 , 2 , μ i , i = 1 , 2 , S ( t ) , h ( t ) , and θ perform well overall.
  • Increasing n (or m) improves the accuracy of both point and interval estimation results. A similar improvement is observed when $\sum_{i=1}^{m}r_i$ decreases.
  • Bayesian MCMC point estimates (and 95% BCIs) using informative gamma and beta priors are more precise than their counterparts obtained from the likelihood method.
  • For each unknown subject, Bayesian results under Prior-B outperform those under Prior-A due to the smaller prior variance in Prior-B. Both Bayesian priors outperform the frequentist estimators.
  • As T(or θ ) grows,
    the RMSEs, MRABs, and AILs of λ i , i = 1 , 2 , μ i , i = 1 , 2 , S ( t ) , h ( t ) , and θ decrease;
    the CPs of λ i , i = 1 , 2 , μ i , i = 1 , 2 , S ( t ) , h ( t ) , and θ increase. In most situations, the simulated CP values for both ACI and BCI intervals are close to the pre-specified 95% nominal level.
  • Overall, for two competing risks modeled via the APTIIC scheme with binomial removals, the Bayesian approach is recommended for estimating the Hj distribution parameters and its reliability measures.

6. Real Applications

To illustrate how the proposed inference techniques can be applied in practice, we analyze two sets of real competing risk data collected from the industrial field, one representing the breaking strength of jute fibers and the other representing the failure modes of electrical appliances.

6.1. Jute Fibers

Jute fiber is extracted from the jute plant and is important due to its sustainability, biodegradability, and low environmental impact, making it a preferred choice in eco-friendly materials and green manufacturing. Its advantages include high tensile strength, good insulating properties, low cost, and excellent breathability, making it suitable for textiles, geotextiles, and composites. In this application, we analyze a dataset comprising 58 industrial observations of jute fiber breaking strength obtained under two different gauge lengths, namely 15 mm (Cause 1) and 20 mm (Cause 2); see Xia et al. [31]. After dividing each breaking strength value by one hundred (for computational ease), the transformed jute fiber dataset is presented in Table 9.
First, at a 5% significance level, the Kolmogorov–Smirnov (KS) test is performed to check whether the Hj model fits the jute fiber data well. Given that the latent failure times have independent Hj($\lambda_i,\mu_i$), $i=1,2$, populations, we consider the following null and alternative hypotheses:
  • H 0 : The jute fiber data set follows a Hj distribution;
  • H 1 : The jute fiber data set does not follow a Hj distribution.
To achieve this, the MLEs (with their standard errors (Std.Ers)) and 95% ACIs (with their interval widths (IWs)) of the Hj parameters $\lambda_i$ and $\mu_i$ (for $i=1,2$), as well as the KS distance (with its P-value), are computed; see Table 10. The results indicate that there is no statistical reason to reject the null hypothesis, which states that the jute fiber failure times follow an Hj distribution.
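As a hedged illustration of this fitting-and-testing step, the function below fits the Hj model to a complete sample by maximum likelihood and runs the KS test against the fitted CDF (scipy.stats.kstest accepts a callable CDF). The name data_cause1 in the commented usage line is a placeholder for the transformed Cause-1 values of Table 9, which are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kstest

def hj_cdf(x, mu, lam):
    """Hjorth CDF, Equation (2)."""
    return 1.0 - (1.0 + x) ** (-lam) * np.exp(-0.5 * mu * x ** 2)

def hj_fit_ks(data):
    """Fit the Hjorth model by maximum likelihood (complete sample) and
    return the fitted parameters with the KS distance and its p-value."""
    data = np.asarray(data, dtype=float)

    def neg_loglik(params):
        mu, lam = params
        if mu <= 0 or lam <= 0:
            return np.inf
        return -np.sum(-(lam + 1) * np.log1p(data)
                       + np.log(lam + mu * data * (1 + data))
                       - 0.5 * mu * data ** 2)

    res = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
    mu_hat, lam_hat = res.x
    ks = kstest(data, lambda x: hj_cdf(x, mu_hat, lam_hat))
    return mu_hat, lam_hat, ks.statistic, ks.pvalue

# Usage (data_cause1 holds the transformed Cause-1 breaking strengths from Table 9):
# mu1_hat, lam1_hat, ks_distance, p_value = hj_fit_ks(data_cause1)
```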
To conduct a comprehensive visual assessment of the goodness-of-fit from the jute fiber datasets, we refer to Figure 6, which presents three types of diagnostic plots: the empirical and estimated survival functions S ( t ) , probability–probability (PP) plots, and contour diagrams of the fitted parameter surfaces. For both jute fiber datasets under consideration, the graphical comparisons between the empirical and theoretical curves indicate that the fitted RF model aligns closely with the empirical data. Specifically, the PP plots exhibit a near-linear correspondence between the empirical and model-based probabilities, suggesting a good agreement between the observed and expected distributions under the fitted model. Additionally, the subplots in Figure 6 highlight the behavior of the estimated parameter surfaces, revealing the existence and uniqueness of the MLEs λ ^ i and μ ^ i for i = 1 , 2 , corresponding to the two jute fiber datasets. These estimated values are reported in Table 10. Given these diagnostic findings, we recommend the use of the obtained parameter estimates λ ^ i and μ ^ i for i = 1 , 2 as initial values for future numerical optimization procedures and iterative estimation algorithms applied to jute fiber data.
By setting θ = 0.5 for the complete jute fiber data and different combinations of m and T, two APTIIC competing risk samples with binomial removals (denoted by $S^{[i]}$, $i=1,2$) are formed; see Table 11. The Bayes calculations of $\lambda_i$, $\mu_i$, $i=1,2$, $S(t)$, $h(t)$, and θ use non-informative priors due to a lack of prior knowledge about the Hj parameters $\lambda_i$ and $\mu_i$ (for $i=1,2$) and the binomial parameter θ. The MCMC sampler chain (detailed in Section 4) is run 50,000 times, and the first 10,000 results are discarded as a burn-in. The MCMC point and 95% BCI estimates of the Hj parameters $\lambda_i$ and $\mu_i$ (for $i=1,2$) and the reliability factors $S(t)$ and $h(t)$ are produced. Then, using Table 11, the derived maximum likelihood and Bayes MCMC estimates (with their Std.Ers) and 95% ACI/BCI estimates (with their IWs) of $\lambda_i$, $\mu_i$, $i=1,2$, $S(t)$, $h(t)$, and θ are evaluated; see Table 12. As a result, Table 12 shows that the Bayes estimates of $\lambda_i$, $i=1,2$, $S(t)$, and θ outperform the others regarding the lowest standard error and interval width values.
To show the existence and uniqueness features of the collected MLEs λ ^ i and μ ^ i (for i = 1 , 2 ) using S [ i ] , i = 1 , 2 gathered from jute fiber data, the log-likelihood contours of λ i , μ i , i = 1 , 2 are shown in Figure 7. As a consequence, it confirms that the acquired MLEs λ ^ i and μ ^ i (for i = 1 , 2 ) exist and are unique. We thus recommend utilizing the fitted values of λ ^ i , i = 1 , 2 and μ ^ i , i = 1 , 2 (reported in Table 12) as initial guesses to run Bayes’ computations.
To further validate the Bayesian inference and to assess the convergence and mixing behavior of the associated MCMC iterations, we present in Figure 8 the trace and kernel density plots for $\lambda_i$, $\mu_i$, $i=1,2$, $S(t)$, and $h(t)$ based on their remaining 40,000 MCMC simulated variables collected from $S^{[1]}$ as an example. These diagnostic plots exhibit good mixing and convergence, indicating that the MCMC chains are sampling effectively from the posterior distributions. The resulting posterior distributions for the parameters show that most are approximately symmetric. However, mild positive skewness is observed in the posterior of $\lambda_2$, while the posterior of $S(t)$ exhibits slight negative skewness. From these posterior samples, we compute key summary statistics, including the posterior mean, mode, the three quartiles ($Q_i$, $i=1,2,3$), standard deviation (Std.D), and skewness (Sk.) of $\lambda_i$, $\mu_i$, $i=1,2$, $S(t)$, and $h(t)$. These results are compiled in Table 13, providing further support and consistency with the Bayesian estimates listed in Table 12.

6.2. Electrical Appliances

Electrical appliances are essential in modern life as they enhance convenience, save time, improve efficiency in daily tasks, and support productivity both at home and in the workplace. This section examines real-world failure time statistics for electrical appliances evaluated in eighteen operating modes, as published by Lawless [32]. Failures in Mode 11 were identified as Cause 1, whereas failures in all other modes were classified as Cause 2. In this part, to make the calculations easier, each failure time in the electrical dataset is divided by one hundred. Table 14 shows eight failures attributed to Cause 1 and thirteen to Cause 2. This dataset was later analyzed by Alotaibi et al. [33]. Before analyzing the proposed point and interval estimation findings of $\lambda_i$, $i=1,2$, $\mu_i$, $i=1,2$, $S(t)$, $h(t)$, and θ, it is crucial to evaluate whether the Hj distribution fits the electrical appliances data (provided in Table 14). Since the estimated P-values (in Table 15) exceed the pre-specified significance level of 5%, there is no statistical reason to reject the null hypothesis, which claims that the electrical appliance failure times follow the Hj distribution.
Following the goodness-of-fit visualization tools discussed in Section 6.1, Figure 9 indicates that there is a strong relationship between the fitted and empirical RF (or PP) lines for both electrical appliances datasets. It also confirms that the estimated parameters λ ^ i , i = 1 , 2 and μ ^ i , i = 1 , 2 exist and are unique.
Taking various configurations of m and T, and using θ = 0.5 across the complete electrical appliances dataset, two APTIIC competing risks samples with binomial removals (denoted $S^{[i]}$ for $i=1,2$) are generated (see Table 16). Bayesian estimation of the Hj parameters $\lambda_i$, $i=1,2$, $\mu_i$, $i=1,2$, $S(t)$, and $h(t)$, and the binomial parameter θ is carried out using non-informative priors, reflecting the absence of prior information regarding the parameters $\lambda_i$, $i=1,2$, $\mu_i$, $i=1,2$, and θ. The proposed MCMC sampling, described in Section 4, is implemented with a total of 50,000 iterations, discarding the initial 10,000 samples as burn-in. Posterior point estimates and 95% BCIs are computed for the parameters $\lambda_i$, $i=1,2$, $\mu_i$, $i=1,2$, $S(t)$, and $h(t)$. Additionally, Table 17 provides a comparative summary of the classical and Bayesian estimates, including their associated Std.Ers and the IWs of the 95% ACI/BCI bounds. The results indicate that the Bayesian estimates of $\lambda_i$, $i=1,2$, $\mu_i$, $i=1,2$, $S(t)$, $h(t)$, and θ demonstrate improved precision, evidenced by reduced standard errors and narrower interval widths relative to their likelihood counterparts.
To rigorously demonstrate the existence and uniqueness of the ML estimates λ ^ i , i = 1 , 2 and μ ^ i , i = 1 , 2 , derived from the electrical appliances datasets S [ i ] , i = 1 , 2 , we examine the joint log-likelihood contour plots presented in Figure 10. These contours depict the shape and topology of the log-likelihood function with respect to the parameter pairs ( λ i , μ i ) , i = 1 , 2 , and confirm the existence and uniqueness of the corresponding MLEs λ ^ i , i = 1 , 2 and μ ^ i , i = 1 , 2 . Subsequently, the obtained MLEs (as reported in Table 17) are recommended as suitable initial values for the Bayesian analysis from the electrical appliances datasets. Convergence and mixing properties of the MCMC chains are illustrated in Figure 11, which presents the posterior density and trace plots for λ i , i = 1 , 2 , μ i , i = 1 , 2 , S ( t ) , and h ( t ) based on the remaining 40,000 post-burn-in samples from S [ 1 ] . The visual diagnostics indicate that the chains are well-mixed, suggesting adequate convergence. Most posterior distributions of λ i , i = 1 , 2 , μ 2 , S ( t ) , and h ( t ) are approximately symmetric, except those of μ 1 , which exhibit slight positive skewness. Descriptive statistics (provided in Table 18) further support the Bayesian inference results summarized in Table 17.
The results from both the jute fiber and electrical appliance datasets support the applicability of the Hjorth competing risks model under the proposed estimation framework. These findings confirm that the model performs well when applied to real-world data collected through adaptive progressive Type II censoring with binomial removals. Overall, the proposed methods offer a practical and effective approach for analyzing complex lifetime data within this modeling context.

7. Concluding Remarks

This study examined frequentist and Bayesian estimation frameworks for analyzing the Hjorth competing risks model under an adaptive progressive Type II censoring design. To address the limitations of conventional fixed removal schemes, we proposed a dynamic censoring approach that incorporated binomial removals, thereby enhancing adaptability in reliability assessments. The inferential focus included estimating the scale and shape parameters of the Hjorth distribution, the binomial removal probability, and key reliability metrics such as reliability and hazard rate functions. Within the frequentist paradigm, maximum likelihood estimation was utilized, with numerical optimization techniques employed to resolve the complex likelihood equations for the Hjorth parameters. The asymptotic normality of the MLEs facilitated the construction of confidence intervals for the parameters and reliability metrics. For Bayesian inference, independent gamma priors were specified for the Hjorth parameters, while a beta prior governed the binomial probability. Posterior estimates under squared error loss were derived through Markov Chain Monte Carlo algorithms, accompanied by symmetric Bayes credible intervals. A comprehensive simulation study was conducted to evaluate the precision of point and interval estimates, identifying scenarios where Bayesian or frequentist methods performed better. To highlight the practical relevance and adaptability of the proposed methodologies in diverse fields, two genuine real-world datasets, including jute fiber and electrical appliance datasets, were used to evaluate two competing risks, illustrating the practical utility of the proposed estimation methods. These applications highlight the importance of accurately estimating the reliability function in the presence of multiple failure causes and complex censoring schemes. One limitation of the current study is the assumption that latent failure times are independent, which may not hold when the failure causes are correlated. In such scenarios, a dependent competing risks model would provide a more appropriate framework for data analysis. For future work, it would be valuable to explore alternative algorithms for generating samples from the full conditional distributions, such as the Hamiltonian Monte Carlo algorithm, instead of relying solely on the Metropolis–Hastings algorithm. The results obtained using Hamiltonian Monte Carlo can then be compared with those from the current study to assess potential improvements in efficiency and accuracy.

Author Contributions

Methodology, R.A., M.N. and A.E.; Funding acquisition, R.A.; Software, A.E.; Supervision, M.N.; Writing—original draft, R.A. and A.E.; Writing—review and editing, R.A. and M.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R50), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The authors confirm that the data supporting the findings of this study are available within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cox, D.R. The analysis of exponentially distributed life-times with two types of failure. J. R. Stat. Soc. Ser. Stat. Methodol. 1959, 21, 411–421. [Google Scholar] [CrossRef]
  2. Kundu, D.; Basu, S. Analysis of incomplete data in presence of competing risks. J. Stat. Plan. Inference 2000, 87, 221–239. [Google Scholar] [CrossRef]
  3. Park, C. Parameter estimation of incomplete data in competing risks using the EM algorithm. IEEE Trans. Reliab. 2005, 54, 282–290. [Google Scholar] [CrossRef]
  4. Wang, L. Inference for Weibull competing risks data under generalized progressive hybrid censoring. IEEE Trans. Reliab. 2018, 67, 998–1007. [Google Scholar] [CrossRef]
  5. Park, C.; Wang, M. Parameter Estimation of Birnbaum-Saunders Distribution under Competing Risks Using the Quantile Variant of the Expectation-Maximization Algorithm. Mathematics 2024, 12, 1757. [Google Scholar] [CrossRef]
  6. Alok, A.K.; Singh, G.N.; Chandra, P. Competing risks model with partially observed failure causes based on generalized lifetime family using minimum ranked set sampling. Qual. Technol. Quant. Manag. 2025, 1–31. [Google Scholar]
  7. Hjorth, U. A reliability distribution with increasing, decreasing, constant and bathtub-shaped failure rates. Technometrics 1980, 22, 99–107. [Google Scholar] [CrossRef]
  8. Yadav, A.S.; Bakouch, H.S.; Chesneau, C. Bayesian estimation of the survival characteristics for Hjorth distribution under progressive type-II censoring. Commun. Stat.-Simul. Comput. 2020, 51, 882–900. [Google Scholar] [CrossRef]
  9. Elshahhat, A.; Nassar, M. Bayesian survival analysis for adaptive Type-II progressive hybrid censored Hjorth data. Comput. Stat. 2021, 36, 1965–1990. [Google Scholar] [CrossRef]
  10. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  11. Balakrishnan, N.; Hossain, A. Inference for the Type II generalized logistic distribution under progressive Type II censoring. J. Stat. Comput. Simul. 2007, 77, 1013–1031. [Google Scholar] [CrossRef]
  12. Wu, M.; Gui, W. Estimation and prediction for Nadarajah-Haghighi distribution under progressive type-II censoring. Symmetry 2021, 13, 999. [Google Scholar] [CrossRef]
  13. Baklizi, A. Third Order Likelihood Inference in the Generalized Exponential Distribution under Progressive Type II Censoring. J. Stat. Theory Appl. 2025, 1–28. [Google Scholar] [CrossRef]
  14. Ng, H.K.T.; Kundu, D.; Chan, P.S. Statistical analysis of exponential lifetimes under an adaptive Type-II progressive censoring scheme. Nav. Res. Logist. 2009, 56, 687–698. [Google Scholar] [CrossRef]
  15. Haj Ahmad, H.; Salah, M.M.; Eliwa, M.S.; Ali Alhussain, Z.; Almetwally, E.M.; Ahmed, E.A. Bayesian and non-Bayesian inference under adaptive type-II progressive censored sample with exponentiated power Lindley distribution. J. Appl. Stat. 2022, 49, 2981–3001. [Google Scholar] [CrossRef]
  16. Panahi, H.; Asadi, P. Estimating the parameters of a generalized inverted exponential distribution based on adaptive type II hybrid progressive censoring with application. J. Stat. Manag. Syst. 2022, 25, 433–455. [Google Scholar] [CrossRef]
  17. Lv, Q.; Tian, Y.; Gui, W. Statistical inference for Gompertz distribution under adaptive type-II progressive hybrid censoring. J. Appl. Stat. 2024, 51, 451–480. [Google Scholar] [CrossRef]
  18. Sharma, H.; Kumar, P. On survival estimation of Lomax distribution under adaptive progressive type-II censoring. Stat. Transit. New Ser. 2025, 26, 51–67. [Google Scholar] [CrossRef]
  19. Anakha, K.K.; Chacko, V.M. On comparative lifetime analysis with the generalized Lindley distribution: Insights from joint adaptive progressive Type-II censoring. J. Stat. Comput. Simul. 2025, 1–28. [Google Scholar] [CrossRef]
  20. Yuen, H.K.; Tse, S.K. Parameters estimation for Weibull distributed lifetimes under progressive censoring with random removals. J. Stat. Comput. Simul. 1996, 55, 57–71. [Google Scholar] [CrossRef]
  21. Singh, S.K.; Singh, U.; Kumar, M. Bayesian estimation for Poisson-exponential model under progressive type-II censoring data with binomial removal and its application to ovarian cancer data. Commun. Stat.-Simul. Comput. 2016, 45, 3457–3475. [Google Scholar] [CrossRef]
  22. Elbatal, I.; Hassan, A.S.; Diab, L.S.; Ben Ghorbal, A.; Elgarhy, M.; El-Saeed, A.R. Stress–strength reliability analysis for different distributions using Progressive Type-II censoring with Binomial removal. Axioms 2023, 12, 1054. [Google Scholar] [CrossRef]
  23. Chacko, M.; Mohan, R. Bayesian analysis of Weibull distribution based on progressive type-II censored competing risks data with binomial removals. Comput. Stat. 2019, 34, 233–252. [Google Scholar] [CrossRef]
  24. Qin, X.; Gui, W. Statistical inference of Burr-XII distribution under progressive Type-II censored competing risks data with binomial removals. J. Comput. Appl. Math. 2020, 378, 112922. [Google Scholar] [CrossRef]
  25. Elshahhat, A.; Nassar, M. Analysis of adaptive Type-II progressively hybrid censoring with binomial removals. J. Stat. Comput. Simul. 2023, 93, 1077–1103. [Google Scholar] [CrossRef]
  26. Alqasem, O.A.; Nassar, M.; Abd Elwahab, M.E.; Elshahhat, A. Analyzing Burr-X competing risk model using adaptive progressive Type-II censored binomial removal data with application to electrodes and electronics. J. Radiat. Res. Appl. Sci. 2024, 17, 101107. [Google Scholar] [CrossRef]
  27. Kundu, D.; Joarder, A. Analysis of Type-II progressively hybrid censored data. Comput. Stat. Data Anal. 2006, 50, 2509–2528. [Google Scholar] [CrossRef]
  28. Kundu, D. Bayesian inference and life testing plan for the Weibull distribution in presence of progressive censoring. Technometrics 2008, 50, 144–154. [Google Scholar] [CrossRef]
  29. Henningsen, A.; Toomet, O. maxLik: A package for maximum likelihood estimation in R. Comput. Stat. 2011, 26, 443–458. [Google Scholar] [CrossRef]
  30. Plummer, M.; Best, N.; Cowles, K.; Vines, K. CODA: Convergence diagnosis and output analysis for MCMC. R News 2006, 6, 7–11. [Google Scholar]
  31. Xia, Z.P.; Yu, J.Y.; Cheng, L.D.; Liu, L.F.; Wang, W.M. Study on the breaking strength of jute fibres using modified Weibull distribution. Compos. Part A Appl. Sci. Manuf. 2009, 40, 54–59. [Google Scholar] [CrossRef]
  32. Lawless, J.F. Statistical Models and Methods for Lifetime Data, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  33. Alotaibi, R.; Nassar, M.; Khan, Z.A.; Alajlan, W.A.; Elshahhat, A. Analysis and data modelling of electrical appliances and radiation dose from an adaptive progressive censored XGamma competing risk model. J. Radiat. Res. Appl. Sci. 2025, 18, 101188. [Google Scholar] [CrossRef]
Figure 1. Posterior sensitivity to prior type.
Figure 2. Contour diagrams of the Hj ( λ i , μ i ) , i = 1 , 2 , parameters in Monte Carlo simulations.
Figure 3. Geweke convergence diagnostics for the Hj ( λ i , μ i ) , i = 1 , 2 , parameters in Monte Carlo simulations.
Figure 4. The ACF diagrams of the Hj ( λ i , μ i ) , i = 1 , 2 , parameters in Monte Carlo simulations.
Figure 5. The trace diagrams of the Hj ( λ i , μ i ) , i = 1 , 2 , parameters in Monte Carlo simulations.
Figure 6. Fitting diagrams of the Hj ( λ i , μ i ) , i = 1 , 2 , model from jute fiber data.
Figure 7. The log-likelihood contours for λ i and μ i (for i = 1 , 2 ) from jute fiber data.
Figure 8. Two MCMC plots of λ i , μ i (for i = 1 , 2 ), S ( t ) , and h ( t ) from jute fiber data.
Figure 9. Fitting diagrams of the Hj ( λ i , μ i ) , i = 1 , 2 , model from electrical appliance data.
Figure 10. The log-likelihood contours for λ i and μ i (for i = 1 , 2 ) from electrical appliance data.
Figure 11. Two MCMC plots of λ i , μ i (for i = 1 , 2 ), S ( t ) , and h ( t ) from electrical appliance data.
Table 1. The point estimation results of λ i , i = 1 , 2 .
( n , m ) | θ | for each of T = 0.3 and T = 0.5: MLE, Bayes (Prior-A), Bayes (Prior-B), each reporting MPE, RMSE, MRAB
λ 1
(30, 10)0.41.6861.2350.8171.3101.1890.7941.4700.7910.780 1.5701.2251.5810.6401.1780.7881.6530.5180.442
0.81.4251.1460.7541.6341.0120.6611.5740.7960.442 1.5241.1201.4690.8540.9890.6481.4460.7110.400
(30, 20)0.41.5241.2050.7801.6551.1790.7051.7411.0680.444 1.4761.1881.4930.6151.0750.7011.3390.7480.424
0.81.5001.0780.7121.3870.9860.6461.8510.7770.413 1.3101.0331.5800.5480.9430.6191.6060.7040.398
(50, 20)0.41.5731.0030.6511.5280.7580.4891.7480.7260.394 1.6070.9861.6340.6640.7530.4851.5640.6580.329
0.81.4930.9160.5921.3900.7160.4511.6960.5960.321 1.5840.9081.5900.9010.6940.4401.4830.5420.292
(50, 40)0.41.4360.9670.6331.3790.7410.4591.5320.6980.333 1.6350.9571.4450.5640.7100.4461.7270.5810.322
0.81.4850.8880.5741.6110.6930.4371.7080.5840.293 1.4880.8701.5860.3940.6770.4351.5310.5290.281
(80, 30)0.41.5250.8580.5571.5720.6800.4321.6610.5270.286 1.5210.8251.4840.7390.6750.4201.4150.5060.263
0.81.6290.7150.4321.3990.6670.4151.5440.4880.256 1.5720.6931.4190.4330.5800.3651.5270.4680.235
(80, 60)0.41.5510.7960.5081.7760.6750.4191.3960.5070.265 1.4140.7351.4690.3380.6010.3681.4820.4800.246
0.81.4990.6390.4001.6850.6010.3831.7400.4630.249 1.6280.6211.3900.2850.5170.3151.4130.4430.227
λ 2
(30, 10)0.40.5540.4920.7930.4290.4000.7900.5230.3800.752 0.5840.4640.7910.4290.3890.7550.5160.3680.749
0.80.5190.4500.7370.4990.3440.6680.5450.3280.642 0.5130.4300.7020.5110.3430.6610.5550.3130.599
(30, 20)0.40.5260.4570.7730.5120.3980.7060.4170.3520.688 0.5080.4430.7400.5200.3680.7020.5120.3280.640
0.80.5000.4450.7030.4610.3390.6610.3920.3030.590 0.5260.4170.6660.4920.3330.6330.4160.2800.538
(50, 20)0.40.4910.4210.6640.4740.3230.6320.4510.2880.507 0.6300.4090.6440.5280.3200.5900.4800.2530.408
0.80.4660.4040.6590.6360.2610.4710.3800.2070.373 0.5950.3830.6290.4370.2580.4620.4860.1810.316
(50, 40)0.40.5410.4090.6620.4670.2860.5260.4720.2100.377 0.6620.3990.6340.5170.2630.4790.5130.2050.359
0.80.5310.3930.6120.5150.1990.3480.5400.1770.306 0.4800.3770.5900.4380.1950.3410.4650.1740.291
(80, 30)0.40.5390.2970.5190.5290.1930.3240.4550.1740.288 0.6290.1970.3340.3040.1870.2890.4570.1680.243
0.80.4680.1640.2850.4790.1450.2410.4310.1260.203 0.4780.1550.2590.4260.1370.2110.4940.1140.187
(80, 60)0.40.4810.2480.3030.4960.1820.2740.4800.1660.237 0.6980.1870.2960.4700.1670.2720.5180.1420.217
0.80.5870.1540.2590.5350.1350.2030.5230.1150.175 0.4670.1530.2450.5050.1260.1980.5120.1100.163
Table 2. The point estimation results of μ i , i = 1 , 2 .
( n , m ) | θ | for each of T = 0.3 and T = 0.5: MLE, Bayes (Prior-A), Bayes (Prior-B), each reporting MPE, RMSE, MRAB
μ 1
(30, 10)0.40.5361.7762.6120.5910.3610.6830.5270.3450.668 0.6031.7062.4420.5480.3460.6720.2930.3380.626
0.80.5471.5012.2500.5370.3290.5540.4940.2790.512 0.5411.4572.2070.5230.2800.5210.4570.2620.483
(30, 20)0.40.5781.6902.5220.4600.3410.6590.4750.2890.541 0.5151.6802.4390.5160.3300.5680.3280.2770.516
0.80.5251.4592.2050.4990.3130.5150.5160.2590.466 0.4121.3452.1890.4320.2750.4960.3640.2510.457
(50, 20)0.40.6891.2651.9340.5310.2930.4960.5280.2370.424 0.5901.2391.9270.5280.2690.4680.3400.2280.419
0.80.4051.1931.7170.4780.2670.4840.4910.2130.380 0.4841.1091.6960.4730.2430.4290.5430.2090.365
(50, 40)0.40.6491.2281.8790.5440.2840.4940.5270.2270.409 0.5951.2021.8380.5170.2560.4480.2160.2140.407
0.80.4811.0691.6840.4550.2360.4410.4770.1950.341 0.4871.0561.6630.4730.2100.5370.3330.1910.325
(80, 30)0.40.5210.9250.6070.5970.2060.3450.4600.1750.314 0.5100.7630.3690.5740.1980.3410.4480.1730.265
0.80.4840.5980.3060.4250.1800.3190.5210.1540.241 0.2960.4840.2910.3960.1650.4670.3310.1380.224
(80, 60)0.40.5380.6770.3270.5230.1970.3240.5240.1650.254 0.5020.5200.3170.4880.1860.5280.2410.1480.234
0.80.5210.4530.2490.5160.1580.2160.5160.1340.182 0.6000.3740.2320.5170.1430.6040.1680.1250.175
μ 2
(30, 10)0.41.6811.2490.8151.3841.1770.7661.7271.1620.757 1.6531.2150.8041.5941.1640.7601.6291.0850.593
0.81.6271.1150.7301.4781.0080.6461.3220.9770.575 1.5081.0730.6541.4361.0080.5841.6920.8870.509
(30, 20)0.41.7101.1560.7581.6891.0990.6741.6531.0310.586 1.5311.1310.7411.7221.0460.6641.5151.0230.577
0.81.6781.0760.6971.5900.9490.5281.4000.7930.484 1.4741.0480.6191.6380.9030.5041.5310.7250.452
(50, 20)0.41.6210.9300.5341.5790.8560.4921.6590.6790.423 1.4970.8980.5071.5860.7680.4801.4790.6680.403
0.81.5260.8880.4921.5760.7150.4431.5590.5720.351 1.6670.8400.4491.4990.5910.3521.6440.5600.325
(50, 40)0.41.6730.9020.5011.5560.7940.4721.3220.5900.365 1.5200.8700.4981.4710.7640.4631.5890.5770.354
0.81.5130.8690.4601.5500.5760.3381.5790.5230.313 1.4750.7850.4461.4620.5470.3151.4720.5070.289
(80, 30)0.41.6230.8030.4541.7440.5570.3071.4730.5000.247 1.6280.7800.3121.5680.5240.3021.5260.4660.225
0.81.4650.4820.2621.5260.4460.2251.4690.3670.204 1.5170.4490.2291.4640.4390.2221.6600.2590.187
(80, 60)0.41.5680.7980.2971.6810.4920.2471.3760.4440.223 1.5520.4930.2891.6480.4550.2351.4860.3240.219
0.81.4740.4160.2131.4380.3970.1801.6480.2850.164 1.6630.3980.1981.5480.3930.1761.5490.2330.152
Table 3. The point estimation results of S ( t ) and h ( t ) .
( n , m ) | θ | for each of T = 0.3 and T = 0.5: MLE, Bayes (Prior-A), Bayes (Prior-B), each reporting MPE, RMSE, MRAB
S ( t )
(30, 10)0.40.8130.1450.1760.9060.1380.1680.8710.1270.153 0.8100.1440.1740.8960.1370.1670.8560.0550.055
0.80.8180.1270.1510.8690.1100.1360.8640.0820.115 0.8170.1200.1470.8710.0960.1240.8660.0460.045
(30, 20)0.40.8180.1340.1620.9260.1220.1470.9020.0950.126 0.8170.1330.1610.9270.1220.1470.9030.0490.047
0.80.8180.1180.1430.8890.0940.1220.8770.0640.101 0.9430.1180.1330.9250.0910.1150.8920.0450.045
(50, 20)0.40.8480.1200.1310.9150.0900.1130.8850.0580.086 0.8510.1100.1300.9070.0870.1090.8750.0420.038
0.80.8320.1000.1200.8880.0810.0920.8830.0500.064 0.8300.0920.1090.8790.0730.0890.8740.0320.029
(50, 40)0.40.8510.1070.1290.9270.0870.1070.9030.0530.074 0.8470.1000.1180.9300.0860.1010.9060.0360.034
0.80.8310.0920.1090.9280.0720.0830.9150.0490.059 0.8280.0910.1080.9240.0710.0820.9120.0290.028
(80, 30)0.40.8490.0880.1030.9020.0680.0790.8880.0450.046 0.8500.0850.0980.8990.0620.0700.8850.0170.026
0.80.8280.0730.0860.9070.0580.0670.9030.0370.031 0.8270.0640.0740.9170.0540.0590.9110.0180.018
(80, 60)0.40.8450.0830.0960.9510.0630.0730.9390.0410.035 0.8410.0750.0860.9500.0590.0680.9380.0200.021
0.80.8280.0580.0640.9620.0520.0580.9560.0340.027 0.8260.0570.0630.9610.0440.0490.9550.0160.015
h ( t )
(30, 10)0.40.9470.8270.9540.8570.8030.9370.4790.7650.891 0.7260.8150.9350.9030.7800.8240.5710.4500.437
0.80.8660.7740.9020.7710.7090.8240.5690.5060.553 0.8730.7630.8880.7640.6880.7980.5510.3860.524
(30, 20)0.40.8640.7860.9140.6840.7580.8820.1500.5290.625 0.8700.7990.9130.6960.7570.8800.1730.4240.591
0.80.8550.7530.8730.8770.6780.7870.3960.4360.486 0.0930.7220.8350.7610.6480.7480.3240.3740.425
(50, 20)0.40.6760.7080.8220.8120.6390.7350.3480.4340.446 0.6470.7020.8120.8530.6130.7320.4160.3660.396
0.80.8410.6610.7680.7460.5870.6600.3770.4180.413 0.8810.6160.7130.7950.5380.6050.4230.3420.354
(50, 40)0.40.6510.6940.8060.7410.6130.7310.2270.4220.426 0.6970.6540.7510.7080.5900.6810.1820.3580.375
0.80.8590.6170.7040.7520.5380.6810.2160.4100.396 0.7360.6080.6880.7660.5260.6880.2290.3310.321
(80, 30)0.40.6630.6060.6840.8670.4990.7600.3370.3610.375 0.6570.5950.6710.8810.4780.7330.3520.3290.310
0.80.8790.5240.5940.8450.4140.7960.2730.3290.326 0.9080.4810.5390.7990.3580.7440.2300.2870.266
(80, 60)0.40.7060.5980.6750.7410.4680.8140.1020.3470.347 0.7510.5570.6230.7380.4580.6840.1010.2940.286
0.80.8840.4880.4460.6900.3490.8490.0540.3170.308 0.9130.4510.4320.8110.3390.6420.0560.2730.243
Table 4. The point estimation results of θ .
( n , m ) | θ | MLE, Bayes (Prior-A), Bayes (Prior-B), each reporting MPE, RMSE, MRAB
T = 0.3
(30, 10)0.40.3910.4000.4720.3680.3640.4540.3030.3540.398
0.80.8010.3760.4540.7570.3600.4500.7670.3410.347
(30, 20)0.40.4020.3890.4600.4410.3630.4530.4370.3460.378
0.80.8090.3640.4540.8870.3530.4490.8810.3360.321
(50, 20)0.40.4260.3560.4510.3290.3460.4480.2990.3290.289
0.80.8040.1210.2770.7580.1070.2700.7670.1100.241
(50, 40)0.40.4020.3260.4480.4420.3360.4460.4390.3120.243
0.80.8140.1110.2650.8880.1050.2530.8820.1070.191
(80, 30)0.40.4100.1090.2590.3890.1030.2440.2920.0970.182
0.80.8030.1040.2390.7460.0960.2210.7530.0820.140
(80, 60)0.40.4150.1060.2530.4430.1020.2420.4410.0850.177
0.80.8050.1020.2340.8900.0930.2190.8870.0800.115
T = 0.5
(30, 10)0.40.4070.3890.4560.3970.3620.4510.3530.3490.376
0.80.8060.3590.4540.8170.3460.4490.8160.3330.317
(30, 20)0.40.4230.3760.4550.4060.3560.4500.3800.3400.368
0.80.8090.3450.4490.7930.3410.4460.7950.3280.310
(50, 20)0.40.4080.3340.4450.2790.3370.4460.3300.3210.265
0.80.8050.1120.2750.7680.1190.2680.7870.1070.241
(50, 40)0.40.4260.3130.4400.6440.3260.4440.3920.3090.243
0.80.8140.1090.2630.7880.1100.2510.8180.0940.185
(80, 30)0.40.4030.1040.2590.2900.1030.2440.2930.0820.177
0.80.8030.1010.2340.7460.0950.2170.7530.0750.138
(80, 60)0.40.4100.1030.2510.4430.1010.2400.4410.0790.170
0.80.8050.0960.2310.8900.0920.2110.8870.0710.110
Table 5. The 95% interval estimation results of λ i , i = 1 , 2 .
( n , m ) | θ | for each of T = 0.3 and T = 0.5: ACI, BCI (Prior-A), BCI (Prior-B), each reporting AIL, CP
λ 1
(30, 10)0.43.1880.8471.1420.9120.9870.9142.9240.8500.9990.9130.9280.915
0.82.5600.8530.9200.9150.8850.9172.2720.8570.8970.9160.8470.919
(30, 20)0.42.8180.8510.9930.9140.9510.9162.5710.8540.9820.9150.9000.917
0.82.2560.8560.8830.9170.8510.9192.0780.8600.8760.9180.8370.921
(50, 20)0.41.9150.8600.8770.9180.8090.9201.8930.8620.8640.9190.7850.922
0.81.6580.8660.8600.9190.7100.9221.5910.8670.8290.9200.6940.924
(50, 40)0.41.8970.8640.8740.9180.7540.9211.6620.8650.8560.9200.7090.923
0.81.5850.8680.8140.9210.6870.9231.4540.8700.7770.9220.6870.925
(80, 30)0.41.4390.8720.7790.9230.6780.9251.4150.8720.7680.9240.6710.927
0.81.2720.8770.5850.9280.5790.9301.2400.8780.5840.9290.5630.931
(80, 60)0.41.3290.8750.6780.9250.6190.9271.2530.8760.6370.9260.5900.928
0.81.2270.8780.5410.9290.4990.9311.1450.8790.5740.9300.4940.932
λ 2
(30, 10)0.42.4420.8640.8140.9240.7260.9252.3490.8660.7370.9250.7010.927
0.81.9540.8710.6270.9300.5500.9311.9120.8730.6060.9310.5020.933
(30, 20)0.42.3950.8660.7160.9270.6390.9282.1840.8680.6530.9280.6250.930
0.81.7730.8730.5900.9310.5170.9321.7420.8750.5760.9320.4950.934
(50, 20)0.41.6290.8760.5780.9320.4880.9331.6140.8780.5290.9330.4510.935
0.81.5540.8800.4860.9350.4310.9361.5240.8820.4620.9360.4160.938
(50, 40)0.41.6000.8780.5330.9330.4510.9341.5930.8790.5080.9340.4270.936
0.81.5160.8820.4330.9360.4140.9371.4370.8840.4230.9370.4060.938
(80, 30)0.41.4640.8830.3910.9370.3770.9381.3980.8850.3870.9380.3680.939
0.81.2950.8860.3310.9380.2680.9391.2430.8870.3200.9390.2480.941
(80, 60)0.41.3920.8850.3700.9370.3560.9381.3280.8860.3600.9380.3370.940
0.81.2610.8870.2340.9400.2100.9411.2220.8880.2280.9410.1960.941
Table 6. The 95% interval estimation results of μ i , i = 1 , 2 .
( n , m ) | θ | for each of T = 0.3 and T = 0.5: ACI, BCI (Prior-A), BCI (Prior-B), each reporting AIL, CP
μ 1
(30, 10)0.43.8520.8420.8570.9070.6930.9083.5330.8440.8140.9090.6400.910
0.83.4370.8490.6730.9150.5750.9143.3930.8510.6460.9170.5650.916
(30, 20)0.43.5850.8470.7780.9110.6120.9123.4280.8490.6900.9130.5660.914
0.83.2540.8520.5630.9190.5230.9213.1970.8540.5420.9200.4920.922
(50, 20)0.42.9950.8550.5460.9210.5010.9222.8450.8570.5380.9230.4750.924
0.82.6880.8580.4800.9240.4560.9252.4240.8600.4790.9260.4520.927
(50, 40)0.42.8710.8570.5150.9220.4900.9232.7420.8590.5060.9240.4670.925
0.82.3470.8620.4540.9250.4300.9262.2090.8640.4450.9270.4190.928
(80, 30)0.42.1910.8640.4390.9260.4060.9271.9080.8660.4090.9280.3630.929
0.81.7540.8700.3520.9300.3390.9311.5640.8720.3480.9310.3250.933
(80, 60)0.41.9850.8670.4040.9280.3610.9291.7450.8690.3820.9300.3560.931
0.81.5490.8720.3430.9310.3090.9321.3910.8740.3300.9320.3140.934
μ 2
(30, 10)0.42.6340.8851.7370.8891.5310.8932.5930.8861.6330.8911.4890.894
0.82.2460.8901.3990.8941.2070.8982.2290.8911.3750.8961.1780.899
(30, 20)0.42.5710.8871.6650.8911.3150.8952.4550.8881.5630.8931.2820.896
0.82.2120.8911.2510.8951.1670.8992.1720.8921.2470.8970.9720.900
(50, 20)0.42.1000.8921.1620.8961.1010.9002.0720.8931.1410.8980.9160.901
0.81.7400.8961.1180.9000.9260.9041.7050.8971.0750.9020.8440.905
(50, 40)0.42.0730.8931.1390.8970.9540.9011.9740.8941.1150.8990.8620.902
0.81.6960.8981.1070.9020.8960.9061.6350.8991.0670.9040.8000.907
(80, 30)0.41.5110.9011.0840.9060.8690.9091.4790.9021.0010.9070.7920.910
0.81.2200.9060.9700.9090.7800.9131.1820.9070.8740.9100.7240.913
(80, 60)0.41.4470.9031.0040.9080.8530.9111.3420.9040.9590.9090.7840.912
0.81.1690.9080.8540.9110.7320.9141.0940.9090.8450.9120.6460.915
Table 7. The 95% interval estimation results of S ( t ) and h ( t ) .
( n , m ) | θ | for each of T = 0.3 and T = 0.5: ACI, BCI (Prior-A), BCI (Prior-B), each reporting AIL, CP
S ( t )
(30, 10)0.40.4620.9320.1170.9440.1040.9460.2890.9340.1070.9460.0950.948
0.80.4100.9350.0990.9450.0910.9470.2400.9370.0980.9470.0870.949
(30, 20)0.40.4340.9330.1020.9440.0970.9460.2750.9350.1020.9460.0900.948
0.80.3840.9360.0950.9450.0880.9470.2190.9380.0920.9470.0820.949
(50, 20)0.40.2750.9390.0930.9450.0830.9480.1900.9410.0910.9470.0790.950
0.80.2210.9400.0850.9470.0790.9490.1820.9420.0840.9490.0750.951
(50, 40)0.40.2300.9400.0900.9460.0800.9480.1840.9410.0890.9480.0780.950
0.80.2040.9410.0800.9470.0740.9490.1790.9430.0790.9490.0720.951
(80, 30)0.40.1930.9410.0770.9480.0710.9500.1730.9430.0770.9500.0710.952
0.80.1520.9430.0650.9500.0620.9520.1490.9440.0640.9510.0600.953
(80, 60)0.40.1860.9420.0710.9490.0680.9510.1580.9440.0700.9510.0670.953
0.80.1450.9440.0570.9500.0550.9520.1410.9450.0560.9520.0540.954
h ( t )
(30, 10)0.41.1090.9420.7850.9460.6890.9491.0930.9430.7330.9470.6320.950
0.81.0170.9430.6920.9470.5760.9500.9890.9440.6460.9480.5680.951
(30, 20)0.41.0840.9430.7260.9470.6240.9501.0020.9440.7140.9480.6030.951
0.80.9850.9440.6380.9480.5660.9510.8550.9450.6100.9490.5480.955
(50, 20)0.40.7810.9470.5590.9500.5180.9530.7600.9480.5210.9510.4910.954
0.80.7440.9480.4240.9520.3900.9540.7350.9490.4070.9530.3610.955
(50, 40)0.40.7670.9470.5460.9510.4870.9540.7540.9480.5150.9520.3710.955
0.80.6960.9500.3820.9540.3300.9560.6690.9510.3570.9540.3180.958
(80, 30)0.40.6590.9510.3350.9550.3160.9580.6230.9520.3220.9560.3050.959
0.80.6090.9530.2590.9570.2530.9600.5490.9540.2580.9580.2450.961
(80, 60)0.40.6310.9530.3100.9570.2750.9590.5980.9540.2950.9580.2750.960
0.80.5760.9540.2330.9580.2040.9610.5180.9550.2210.9590.1980.961
Table 8. The interval estimation results of θ .
( n , m ) | θ | 95% ACI, 95% BCI (Prior-A), 95% BCI (Prior-B), each reporting AIL, CP
T = 0.3
(30, 10)0.40.4270.9510.3760.9540.3480.957
0.80.3880.9530.3220.9560.2790.959
(30, 20)0.40.4130.9520.3600.9550.3330.958
0.80.3590.9530.2980.9560.2680.959
(50, 20)0.40.3300.9540.2870.9570.2590.960
0.80.2870.9560.2560.9590.2400.962
(50, 40)0.40.2990.9560.2710.9590.2510.961
0.80.2730.9570.2460.9600.2260.962
(80, 30)0.40.2670.9570.2390.9600.2190.963
0.80.2450.9580.2170.9610.1670.964
(80, 60)0.40.2520.9580.2280.9610.1750.964
0.80.2130.9600.1970.9620.1530.965
T = 0.5
(30, 10)0.40.4130.9520.3570.9550.3340.957
0.80.3780.9540.3180.9570.2690.960
(30, 20)0.40.4060.9530.3460.9560.3250.958
0.80.3680.9540.2860.9570.2570.960
(50, 20)0.40.3210.9550.2790.9580.2510.961
0.80.2690.9570.2500.9600.2360.963
(50, 40)0.40.2800.9570.2680.9600.2410.962
0.80.2570.9580.2390.9610.2210.963
(80, 30)0.40.2470.9580.2300.9610.2000.964
0.80.2220.9600.2110.9620.1570.965
(80, 60)0.40.2390.9590.2180.9620.1670.965
0.80.1970.9610.1860.9640.1480.966
Table 9. Newly transformed jute fiber data.
Time [Cause]
0.3675 [2], 0.4081 [2], 0.4558 [2], 0.4666 [1], 0.7009 [1], 0.7146 [2], 0.7638 [1], 0.8355 [2], 0.8364 [2], 0.8942 [1],
0.9072 [2], 1.1045 [2], 1.1385 [2], 1.1630 [1], 1.1896 [2], 1.1986 [2], 1.2781 [1], 1.3509 [1], 1.4596 [2], 1.4696 [1],
1.5029 [2], 1.6837 [1], 1.8266 [1], 1.9342 [1], 2.0275 [1], 2.2565 [1], 2.4443 [2], 2.4633 [2], 3.0000 [1], 3.1286 [2],
3.2831 [2], 3.6027 [1], 3.6084 [2], 3.7581 [2], 3.7822 [2], 4.0425 [1], 4.1902 [2], 4.2597 [1], 4.5771 [1], 4.6287 [1],
4.8966 [1], 4.9568 [1], 4.9794 [1], 5.4744 [2], 5.5042 [1], 5.6239 [1], 5.7486 [1], 5.7860 [2], 5.8557 [2], 5.9440 [1],
5.9670 [2], 6.4048 [1], 6.7262 [2], 6.8816 [2], 7.1636 [1], 7.6514 [2], 8.1387 [1], 16.6492 [2]
Table 10. Fitting the Hj model from jute fiber data.
Cause | Par. | MLE (Std.Er) | 95% ACI [IW] | KS (p-value)
Cause 1 | λ | 0.1139 (0.1117) | (0.0015, 0.3328) [0.3313] | 0.1314 (0.6505)
Cause 1 | μ | 0.1001 (0.0257) | (0.0496, 0.1505) [0.1009] |
Cause 2 | λ | 0.4439 (0.1468) | (0.1561, 0.7317) [0.5755] | 0.1320 (0.6458)
Cause 2 | μ | 0.0381 (0.0152) | (0.0084, 0.0678) [0.0594] |
Table 11. Two APTIIC competing risk samples with binomial removals from jute fiber data.
S[1]: T(d) = 0.8(3) and R* = 9
i: 1–10
r_i: 22, 13, 4, 0, 0, 0, 0, 0, 0, 0
δ_i: 2, 1, 1, 2, 2, 2, 2, 1, 2, 2
x_i: 0.3675, 0.4666, 0.7009, 0.8355, 0.9072, 1.1385, 1.1896, 1.2781, 1.4596, 1.5029
S[2]: T(d) = 0.6(3) and R* = 7
i: 1–20
r_i: 17, 11, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
δ_i: 2, 2, 1, 1, 1, 2, 1, 2, 2, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 2
x_i: 0.3675, 0.4558, 0.4666, 0.7009, 0.7638, 0.8355, 0.8942, 0.9072, 1.1385, 1.1630, 1.2781, 1.3509, 1.5029, 1.6837, 1.9342, 2.0275, 2.2565, 2.4633, 3.0000, 3.7822
Table 12. Estimates of λ i , μ i , i = 1 , 2 , S ( t ) , h ( t ) , and θ from jute fiber data.
Sample | Par. | Est. | S.Err | Lower | Upper | IW (for each parameter, the first row gives the MLE with its 95% ACI and the second row the Bayes estimate with its 95% BCI)
S [ 1 ] λ 1 4.21 × 10 8 5.00 × 10 7 0.00001.02 × 10 6 1.02 × 10 6
4.21 × 10 8 1.00 × 10 10 4.19 × 10 8 4.23 × 10 8 3.91 × 10 10
λ 2 2.64 × 10 9 4.43 × 10 8 0.00008.94 × 10 8 8.94 × 10 8
3.99 × 10 5 4.99 × 10 5 1.54 × 10 6 1.11 × 10 4 1.09 × 10 4
μ 1 2.58 × 10 1 1.49 × 10 1 0.00005.49 × 10 1 5.49 × 10 1
2.58 × 10 1 4.98 × 10 5 2.58 × 10 1 2.58 × 10 1 1.96 × 10 4
μ 2 6.08 × 10 1 2.30 × 10 1 1.57 × 10 1 0.11 × 10 + 1 9.00 × 10 1
6.08 × 10 1 4.97 × 10 5 6.07 × 10 1 6.08 × 10 1 1.95 × 10 4
S ( t ) 8.97 × 10 1 3.07 × 10 2 8.37 × 10 1 9.58 × 10 1 1.20 × 10 1
8.97 × 10 1 1.98 × 10 5 8.97 × 10 1 8.98 × 10 1 5.27 × 10 5
h ( t ) 3.91 × 10 2 2.70 × 10 2 0.00009.21 × 10 2 9.21 × 10 2
3.91 × 10 2 9.28 × 10 6 3.91 × 10 2 3.92 × 10 2 3.38 × 10 5
θ 2.23 × 10 1 2.58 × 10 2 1.72 × 10 1 2.73 × 10 1 1.01 × 10 1
1.82 × 10 1 4.06 × 10 2 1.34 × 10 1 2.37 × 10 1 1.03 × 10 1
S [ 2 ] λ 1 4.25 × 10 8 1.94 × 10 8 4.45 × 10 9 8.06 × 10 8 7.62 × 10 8
4.25 × 10 8 1.00 × 10 10 4.23 × 10 8 4.27 × 10 8 3.93 × 10 10
λ 2 5.96 × 10 8 1.23 × 10 6 0.00002.46 × 10 6 2.46 × 10 6
3.99 × 10 5 5.01 × 10 5 1.62 × 10 6 1.13 × 10 4 1.12 × 10 4
μ 1 3.96 × 10 1 1.14 × 10 1 1.72 × 10 1 6.20 × 10 1 4.48 × 10 1
3.96 × 10 1 4.99 × 10 5 3.96 × 10 1 3.96 × 10 1 1.94 × 10 4
μ 2 2.46 × 10 1 8.71 × 10 2 7.54 × 10 2 4.17 × 10 1 3.41 × 10 1
2.46 × 10 1 4.92 × 10 5 2.46 × 10 1 2.46 × 10 1 1.92 × 10 4
S ( t ) 9.23 × 10 1 1.66 × 10 2 8.90 × 10 1 9.55 × 10 1 6.50 × 10 2
9.23 × 10 1 2.04 × 10 5 9.23 × 10 1 9.23 × 10 1 5.47 × 10 5
h ( t ) 2.44 × 10 2 1.11 × 10 2 2.55 × 10 3 4.62 × 10 2 4.36 × 10 2
2.44 × 10 2 8.76 × 10 6 2.43 × 10 2 2.44 × 10 2 2.74 × 10 5
θ 6.59 × 10 2 1.09 × 10 2 4.45 × 10 2 8.72 × 10 2 4.27 × 10 2
6.18 × 10 2 4.07 × 10 3 4.30 × 10 2 8.37 × 10 2 4.07 × 10 2
Table 13. Summary statistics of λ i , μ i , i = 1 , 2 , S ( t ) , and h ( t ) from jute fiber data.
Sample | Par. | Mean | Mode | Q1 | Q2 | Q3 | Std.D | Sk.
S [ 1 ] λ 1 4.21 × 10 8 4.19 × 10 8 4.20 × 10 8 4.21 × 10 8 4.22 × 10 8 1.00 × 10 10 8.83 × 10 3
λ 2 3.99 × 10 5 1.17 × 10 5 1.59 × 10 5 3.40 × 10 5 5.77 × 10 5 3.00 × 10 5 9.84 × 10 1
μ 1 2.58 × 10 1 2.58 × 10 1 2.58 × 10 1 2.58 × 10 1 2.58 × 10 1 4.98 × 10 5 2.56 × 10 2
μ 2 6.08 × 10 1 6.08 × 10 1 6.07 × 10 1 6.08 × 10 1 6.08 × 10 1 4.97 × 10 5 2.14 × 10 2
S ( t ) 8.97 × 10 1 8.97 × 10 1 8.97 × 10 1 8.97 × 10 1 8.97 × 10 1 1.34 × 10 5 −5.21 × 10 1
h ( t ) 3.91 × 10 2 3.91 × 10 2 3.91 × 10 2 3.91 × 10 2 3.91 × 10 2 8.61 × 10 6 4.46 × 10 2
S [ 2 ] λ 1 4.25 × 10 8 4.27 × 10 8 4.25 × 10 8 4.25 × 10 8 4.26 × 10 8 1.00 × 10 10 −6.80 × 10 3
λ 2 3.99 × 10 5 4.81 × 10 5 1.56 × 10 5 3.38 × 10 5 5.76 × 10 5 3.04 × 10 5 0.11 × 10 + 1
μ 1 3.96 × 10 1 3.96 × 10 1 3.96 × 10 1 3.96 × 10 1 3.96 × 10 1 4.99 × 10 5 3.69 × 10 2
μ 2 2.46 × 10 1 2.46 × 10 1 2.46 × 10 1 2.46 × 10 1 2.46 × 10 1 4.92 × 10 5 −2.39 × 10 2
S ( t ) 9.23 × 10 1 9.23 × 10 1 9.23 × 10 1 9.23 × 10 1 9.23 × 10 1 1.40 × 10 5 −5.39 × 10 1
h ( t ) 2.44 × 10 2 2.44 × 10 2 2.44 × 10 2 2.44 × 10 2 2.44 × 10 2 7.02 × 10 6 1.76 × 10 1
Table 14. Times to failure of 21 electrical appliances.
Time [Cause]
0.12 [2], 0.16 [2], 0.16 [2], 0.46 [2], 0.46 [2], 0.52 [2], 0.98 [1], 0.98 [2], 2.70 [2], 4.13 [1], 4.95 [1],
4.95 [2], 5.57 [2], 6.16 [2], 6.92 [1], 10.65 [1], 11.07 [2], 11.93 [1], 14.67 [1], 14.67 [2], 19.37 [1]
Table 15. Fitting the Hj model from electrical appliance data.
Cause | Par. | MLE (Std.Er) | 95% ACI [IW] | KS (p-value)
Cause 1 | λ | 0.0750 (0.1030) | (0.0000, 0.2770) [0.2770] | 0.1322 (0.9950)
Cause 1 | μ | 0.0144 (0.0062) | (0.0022, 0.0266) [0.0244] |
Cause 2 | λ | 0.6843 (0.2473) | (0.1996, 1.1691) [0.9694] | 0.2109 (0.6093)
Cause 2 | μ | 0.0141 (0.0111) | (0.0014, 0.0358) [0.0344] |
Table 16. Two APTIIC competing risk samples with binomial removals from electrical appliance data.
S[1]: T(d) = 0.2(2) and R* = 3
i: 1–10
r_i: 4, 4, 0, 0, 0, 0, 0, 0, 0, 0
δ_i: 2, 2, 2, 2, 1, 2, 1, 2, 2, 2
x_i: 0.12, 0.16, 0.46, 0.52, 0.98, 2.70, 4.13, 4.95, 5.57, 6.16
S[2]: T(d) = 0.3(2) and R* = 2
i: 1–15
r_i: 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
δ_i: 2, 2, 2, 2, 1, 2, 1, 1, 2, 2, 1, 1, 1, 1, 1
x_i: 0.12, 0.16, 0.46, 0.52, 0.98, 2.70, 4.13, 4.95, 5.57, 6.16, 6.92, 10.65, 11.93, 14.67, 19.37
Table 17. Estimates of λ i , μ i , i = 1 , 2 , S ( t ) , h ( t ) , and θ from electrical appliance data.
Sample | Par. | Est. | S.Err | Lower | Upper | IW (for each parameter, the first row gives the MLE with its 95% ACI and the second row the Bayes estimate with its 95% BCI)
S [ 1 ] λ 1 7.40 × 10 2 1.32 × 10 1 0.00003.33 × 10 1 3.33 × 10 1
7.41 × 10 2 9.98 × 10 3 5.45 × 10 2 9.38 × 10 2 3.93 × 10 2
λ 2 3.92 × 10 1 2.10 × 10 1 0.00008.03 × 10 1 8.03 × 10 1
3.92 × 10 1 1.01 × 10 2 3.73 × 10 1 4.12 × 10 1 3.91 × 10 2
μ 1 2.00 × 10 2 2.65 × 10 2 0.00007.19 × 10 2 7.19 × 10 2
2.08 × 10 2 8.74 × 10 3 4.84 × 10 3 3.87 × 10 2 3.38 × 10 2
μ 2 5.56 × 10 2 3.70 × 10 2 0.00001.28 × 10 1 1.28 × 10 1
5.56 × 10 2 9.59 × 10 3 3.69 × 10 2 7.45 × 10 2 3.76 × 10 2
S ( t ) 8.20 × 10 1 8.07 × 10 2 6.62 × 10 1 9.78 × 10 1 3.16 × 10 1
8.20 × 10 1 4.87 × 10 3 8.10 × 10 1 8.29 × 10 1 1.90 × 10 2
h ( t ) 1.72 × 10 2 2.49 × 10 2 0.00006.60 × 10 2 6.60 × 10 2
1.73 × 10 2 2.33 × 10 3 1.28 × 10 2 2.20 × 10 2 9.13 × 10 3
θ 1.45 × 10 1 4.41 × 10 2 5.90 × 10 2 2.32 × 10 1 1.73 × 10 1
1.27 × 10 1 1.85 × 10 2 5.74 × 10 2 2.19 × 10 1 1.62 × 10 1
S [ 2 ] λ 1 3.68 × 10 2 6.50 × 10 2 0.00001.64 × 10 1 1.64 × 10 1
3.68 × 10 2 9.93 × 10 4 3.48 × 10 2 3.87 × 10 2 3.90 × 10 3
λ 2 2.89 × 10 1 1.09 × 10 1 7.48 × 10 2 5.03 × 10 1 4.28 × 10 1
2.89 × 10 1 1.00 × 10 3 2.87 × 10 1 2.91 × 10 1 3.90 × 10 3
μ 1 1.41 × 10 2 5.82 × 10 3 2.67 × 10 3 2.55 × 10 2 2.28 × 10 2
1.41 × 10 2 9.89 × 10 4 1.21 × 10 2 1.60 × 10 2 3.89 × 10 3
μ 2 1.33 × 10 6 1.60 × 10 5 0.00003.27 × 10 5 3.27 × 10 5
7.27 × 10 4 9.12 × 10 4 2.78 × 10 5 2.05 × 10 3 2.03 × 10 3
S ( t ) 8.75 × 10 1 4.50 × 10 2 7.87 × 10 1 9.63 × 10 1 1.76 × 10 1
8.75 × 10 1 5.24 × 10 4 8.74 × 10 1 8.76 × 10 1 2.03 × 10 3
h ( t ) 6.08 × 10 3 8.49 × 10 3 0.00002.27 × 10 2 2.27 × 10 2
6.10 × 10 3 1.62 × 10 4 5.78 × 10 3 6.41 × 10 3 6.32 × 10 4
θ 5.41 × 10 2 2.61 × 10 2 2.81 × 10 3 1.05 × 10 1 1.02 × 10 1
5.13 × 10 2 2.76 × 10 3 1.43 × 10 2 1.10 × 10 1 9.53 × 10 2
Table 18. Summary statistics of λ i , μ i , i = 1 , 2 , S ( t ) , and h ( t ) from electrical appliance data.
Sample | Par. | Mean | Mode | Q1 | Q2 | Q3 | St.D | Sk.
S [ 1 ] λ 1 7.41 × 10 2 6.02 × 10 2 6.73 × 10 2 7.41 × 10 2 8.09 × 10 2 9.98 × 10 3 1.34 × 10 2
λ 2 3.92 × 10 1 3.79 × 10 1 3.85 × 10 1 3.92 × 10 1 3.99 × 10 1 1.01 × 10 2 −6.51 × 10 3
μ 1 2.08 × 10 2 1.89 × 10 2 1.46 × 10 2 2.04 × 10 2 2.66 × 10 2 8.71 × 10 3 2.55 × 10 1
μ 2 5.56 × 10 2 4.01 × 10 2 4.90 × 10 2 5.56 × 10 2 6.20 × 10 2 9.59 × 10 3 2.66 × 10 2
S ( t ) 8.20 × 10 1 8.19 × 10 1 8.17 × 10 1 8.20 × 10 1 8.23 × 10 1 4.87 × 10 3 2.66 × 10 3
h ( t ) 1.73 × 10 2 1.50 × 10 2 1.57 × 10 2 1.73 × 10 2 1.89 × 10 2 2.33 × 10 3 9.83 × 10 2
S [ 2 ] λ 1 3.68 × 10 2 3.79 × 10 2 3.61 × 10 2 3.68 × 10 2 3.75 × 10 2 9.93 × 10 4 −1.63 × 10 2
λ 2 2.89 × 10 1 2.88 × 10 1 2.88 × 10 1 2.89 × 10 1 2.90 × 10 1 1.00 × 10 3 8.72 × 10 4
μ 1 1.41 × 10 2 1.30 × 10 2 1.34 × 10 2 1.41 × 10 2 1.48 × 10 2 9.89 × 10 4 −7.92 × 10 4
μ 2 7.27 × 10 4 1.00 × 10 3 2.89 × 10 4 6.10 × 10 4 1.06 × 10 3 5.52 × 10 4 0.11 × 10 + 1
S ( t ) 8.75 × 10 1 8.75 × 10 1 8.74 × 10 1 8.75 × 10 1 8.75 × 10 1 5.17 × 10 4 7.81 × 10 4
h ( t ) 6.10 × 10 3 6.12 × 10 3 5.99 × 10 3 6.10 × 10 3 6.21 × 10 3 1.61 × 10 4 −9.61 × 10 3
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
