1. Introduction
Modeling real-world data is a critical task in modern research, necessitating the use of appropriate statistical probability distributions that accurately capture the underlying shape of the data. Such distributions enable researchers to estimate key population characteristics and derive essential functions from the sampled data. The shape of the data refers not only to its distribution but also to the behavior of its failure rate or hazard rate function (HRF). In the statistical literature, numerous distributions have been developed, each with unique properties and applications. Among these, the Chen distribution, introduced by Chen [
1], stands out as a versatile model. It is particularly useful for modeling data with a bathtub-shaped or increasing HRF, making it a valuable tool for reliability analysis and survival studies. For more details on extensions of the Chen distribution, readers are referred to the works of Zamani et al. [
2,
3] and Tarvirdizade and Ahmadpour [
4]. Recently, Srivastava and Srivastava [
5] proposed the inverse Chen (IC) distribution, an inverted variant of the Chen distribution. This new distribution extends the applicability of the Chen model to diverse data patterns. Following its introduction, Agiwal [
6] conducted a comprehensive investigation to explore and analyze the properties of the IC distribution, offering some of its theoretical foundations and practical utility. Many real-world reliability data sets demonstrate asymmetric behavior, which traditional symmetric models cannot accurately capture. The inverse Chen distribution offers a flexible framework for modeling skewed lifetime data, making it suitable for such situations.
Suppose the random variable
X follows the Chen distribution. Then, the random variable
Y = 1/X follows the IC distribution. The probability density function (PDF) and cumulative distribution function (CDF) can be expressed, respectively, as
and
where
and
and
are the model parameters. Two other essential functions associated with the random variable
Y are the reliability function (RF) and the HRF. These functions can be expressed, respectively, at a specific time
as
and
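For concreteness, the forms of these functions as commonly stated in the IC literature are summarized below; writing the two model parameters as \(\lambda\) and \(\beta\) is a notational assumption here, not necessarily the symbols used in the original display equations.

```latex
% Inverse Chen (IC) functions as commonly stated in the literature;
% the parameter symbols \lambda > 0 and \beta > 0 are assumed notation.
F(y) = \exp\bigl\{\lambda\bigl(1 - e^{y^{-\beta}}\bigr)\bigr\}, \quad y > 0,
\qquad
f(y) = \lambda \beta\, y^{-(\beta+1)}\, e^{y^{-\beta}}
       \exp\bigl\{\lambda\bigl(1 - e^{y^{-\beta}}\bigr)\bigr\},
\qquad
R(t) = 1 - \exp\bigl\{\lambda\bigl(1 - e^{t^{-\beta}}\bigr)\bigr\},
\qquad
h(t) = \frac{f(t)}{R(t)}.
```

The CDF follows from inverting the Chen distribution, F_Y(y) = P(1/X ≤ y) = P(X ≥ 1/y), and the RF and HRF are the usual R(t) = 1 − F(t) and h(t) = f(t)/R(t).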
Agiwal [
6] noted that the IC distribution is well suited for modeling data with an upside-down bathtub shape. Additionally, Alotaibi et al. [
7] pointed out that the HRF of the IC distribution is highly flexible, capable of accommodating not only an upside-down bathtub shape but also increasing and decreasing patterns. Research on the IC distribution remains relatively limited, with only a few studies investigating its applications and properties. Among these, Kumar et al. [
8] explored inferences for two IC populations using joint Type-II censored data. Additionally, Alotaibi et al. [
7] investigated the estimation of the stress–strength index under an adaptive progressive Type-II censoring (APTIIC) scheme.
In reliability studies, censored data are prevalent because the event times of all test units are rarely observed in full. Censoring serves as a mechanism to manage both time and costs effectively, making it particularly useful when assessing highly reliable or modern products. A widely employed approach is progressive Type-II censoring (PTIIC), which allows selected operational items to be removed from the test and used in subsequent research. The PTIIC plan enables researchers to remove live items from the test according to a predefined removal pattern. For further details on this scheme, readers can refer to Balakrishnan et al. [
9], Kundu [
10], Ng et al. [
11], Ghanbari et al. [
12], and AL-Zaydi [
13]. Ng et al. [
14] introduced a more flexible censoring strategy called the APTIIC scheme, with the conventional PTIIC strategy being a special case. This approach ensures the test concludes once the predetermined number of failures is reached, while keeping the total test duration close to a prefixed optimal time, denoted by
. The APTIIC scheme can be described as follows: Suppose
n identical items are placed on a life test at time zero with progressive censoring plan
, where
m represents the desired number of observed failures. As the experiment progresses, the researcher removes
items randomly from the remaining units immediately after recording the
i-th failure time, denoted as
, for
. This plan has two possible ways to conclude the experiment.
First Scenario: If the m-th failure occurs before the prefixed time threshold, the experiment terminates at the m-th failure time, aligning with the conventional PTIIC strategy.
Second Scenario: If the time threshold is reached before the m-th failure, the removal pattern is adjusted so that no items are withdrawn at the subsequent failures, i.e., R_i = 0 for i = d + 1, ..., m - 1, where d is the number of failures observed before the threshold. At the m-th failure time, all remaining items are removed, i.e., R_m = n - m - (R_1 + R_2 + ... + R_d).
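The two termination scenarios can be sketched in code. The following minimal Python illustration (the function name and list-based interface are assumptions for exposition, not the authors' implementation) adjusts a planned removal scheme R under the adaptive rule, given the ordered failure times and the threshold time T:

```python
def aptiic_removals(failure_times, R, T):
    """Adjust a planned progressive censoring scheme (R_1, ..., R_m) under
    the adaptive Type-II rule with threshold time T.  failure_times are the
    m ordered observed failure times."""
    m = len(R)
    n = m + sum(R)  # total units implied by the planned scheme
    if failure_times[m - 1] <= T:
        # Scenario 1: the m-th failure occurs before T -> conventional PTIIC.
        return list(R)
    # Scenario 2: d failures observed before T; no removals afterwards
    # until the m-th failure, where all survivors are withdrawn.
    d = sum(1 for x in failure_times if x < T)
    adjusted = list(R[:d]) + [0] * (m - d)
    adjusted[m - 1] = n - m - sum(R[:d])
    return adjusted
```

For example, with n = 12, m = 4, and a planned scheme (2, 2, 2, 2), a threshold reached after the second failure turns the scheme into (2, 2, 0, 4), so the total number of removed units remains n − m.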
In recent years, numerous authors have adopted the APTIIC plan to explore various estimation challenges for lifetime distributions, for instance, Panahi and Moradi [
15], Kohansal and Shoaee [
16], Helu [
17], Sharma and Kumar [
18], and Kumari et al. [
19]. Let
represent an APTIIC sample obtained from the IC population. Thus, the joint likelihood function, ignoring the constant term, can be expressed as
where
is the observed realization of
, and
.
The APTIIC plan operates under the assumption that the removal pattern during the test is predetermined. However, in practical scenarios, the removal pattern may occur randomly. Yuen and Tse [
20] highlighted that in some studies, investigators might find it inappropriate or unsafe to continue examining certain tested units, irrespective of their failure status. In such circumstances, the removal of units after each failure is carried out in a random manner. This approach has attracted considerable attention from researchers, who seek to investigate the influence of random removal on the effectiveness and efficiency of estimation techniques; see, for example, Tse et al. [
21], Wu et al. [
22], and Elbatal et al. [
23]. Under the APTIIC framework, Elshahhat and Nassar [
24] suggested the APTIIC with binomial removals (APTIIC-BR) and discussed the estimation concerns associated with the Weibull lifetime model. According to Elshahhat and Nassar [
24], the joint probability of
can be expressed as follows:
where
is the binomial distribution parameter,
,
,
, and
. By integrating the likelihood function in (
5) with the joint probability in (
6), the complete likelihood function of the model parameter vector
and the binomial distribution parameter
can be expressed as follows:
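The binomial-removal mechanism underlying the APTIIC-BR plan can be sketched in code. In this minimal Python illustration, each removal count is drawn binomially with probability p from the units still eligible for removal, with the last removal clearing the survivors; this truncation convention is a common one and is an assumption here, not necessarily the authors' exact formulation:

```python
import random

def binomial_removals(n, m, p, seed=None):
    """Draw a removal pattern R_1, ..., R_m for a test of n units with m
    observed failures, where each R_i is binomial with probability p over
    the units still eligible for removal."""
    rng = random.Random(seed)
    R, removed = [], 0
    for _ in range(m - 1):
        cap = n - m - removed                          # survivors that may be removed
        r = sum(rng.random() < p for _ in range(cap))  # one Binomial(cap, p) draw
        R.append(r)
        removed += r
    R.append(n - m - removed)  # at the final failure, withdraw all survivors
    return R
```

By construction, the drawn pattern always satisfies R_1 + ... + R_m = n − m, so the test still observes exactly m failures.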
The primary motivation for this work lies in the critical role of the censoring plan in reliability analysis. Although the traditional PTIIC scheme has been widely used, it lacks flexibility because the number of units to be removed is fixed in advance. The APTIIC plan addresses this limitation by allowing removal decisions to be adjusted dynamically based on observed failures, thereby optimizing data collection. An additional advancement involves modeling unit removals using a binomial distribution, which introduces a probabilistic element that better captures real-world variability, unlike the deterministic nature of earlier methods. This combination of adaptability and randomness not only enhances the data collection process but also improves statistical efficiency. The second motivation arises from the importance of selecting an appropriate lifetime distribution in reliability engineering and survival analysis, where accurate modeling of failure mechanisms is essential. While traditional distributions such as the Weibull, exponential, and gamma have been widely used, their monotonic hazard rate structures often limit their effectiveness in capturing real-world failure behaviors, which frequently exhibit non-monotonic patterns. The IC distribution, known for its upside-down bathtub-shaped hazard function, addresses this limitation by providing a better fit for lifetime data with unimodal hazard rates. This motivates the investigation of the IC distribution as a promising addition to the suite of models available for analyzing data with unimodal failure patterns. Despite its potential, estimating the reliability metrics of the IC distribution has received limited attention, particularly in the context of modern censoring schemes, warranting further exploration.
As a result, this study aims to investigate the estimation of the IC distribution’s model parameters, the binomial distribution parameter, and two key reliability measures using data generated under the APTIIC-BR scheme. The main contributions of this study can be summarized as follows: (1) Investigating the parameter estimation and reliability metrics of the IC distribution within the APTIIC-BR framework, which is particularly well suited for resource-constrained environments. (2) Providing practical guidance for researchers and practitioners by comparing classical and Bayesian estimation methods, helping to identify the most appropriate approach for analyzing data with the IC distribution. (3) Demonstrating the real-world applicability of the proposed model, censoring scheme, and estimation techniques through the analysis of real data sets. To carry out this study, we employ the maximum likelihood method to derive point estimates and construct approximate confidence intervals (ACIs). In parallel, Bayesian estimates are obtained using the squared error (SE) loss function and Markov Chain Monte Carlo (MCMC) techniques, accompanied by symmetric Bayesian credible intervals (BCIs). The effectiveness of these estimation approaches is assessed through an extensive simulation study and the analysis of two real-world data sets. This analysis underscores the utility of the proposed approaches in addressing reliability estimation challenges.
The remainder of this study is structured as follows:
Section 2 focuses on the classical estimation method, providing detailed discussions on point and interval estimations for the model parameters, the binomial distribution parameter, and the two reliability measures.
Section 3 considers the Bayesian framework, presenting the derivation of Bayes point estimates and the construction of the BCIs.
Section 4 outlines the simulation study, including the design of the simulation experiments and a comprehensive analysis of the numerical results to evaluate the performance of the proposed estimation methods.
Section 5 applies the proposed methods to two real-world data sets, demonstrating their practical utility and effectiveness in real-life scenarios. Finally,
Section 6 concludes this paper by summarizing the key findings.
3. Bayesian Estimation
Bayesian estimation provides a powerful framework for reliability analysis by incorporating prior knowledge or expert opinions via prior distributions, as discussed in Zhuang et al. [
26]. This approach is particularly advantageous in situations characterized by limited or censored data, such as APTIIC-BR samples. In contrast to classical estimation methods, which depend exclusively on observed data and may produce unreliable estimates when sample sizes are small, Bayesian estimation integrates prior information with the joint likelihood function. Consequently, this methodology generates posterior distributions that yield both point estimates and credible intervals. In this section, we utilize the Bayesian estimation framework to compute the Bayes estimates and credible intervals for the unknown model parameters, the binomial parameter, and the two reliability metrics. The estimates are obtained under the SE loss function, assuming that the model parameters
and
are independent. Since conjugate priors are unavailable for the parameters
and
, we employ gamma distributions as their priors. This selection is justified by five key considerations: (1) The gamma distribution accommodates diverse prior beliefs by capturing a broad range of shapes via its shape parameter. Additionally, it includes the exponential and chi-square distributions as special cases, further enhancing its flexibility and applicability; see Al-Essa et al. [
27]. (2) Its support (positive real numbers) aligns with the natural domain of
and
, ensuring logical consistency in modeling. (3) Gamma priors are widely used in Bayesian frameworks, simplifying comparisons with existing studies and facilitating computational workflows. (4) Closed-form expressions for the gamma’s mean and variance facilitate direct hyper-parameter specification during simulations and empirical analyses. (5) The explicit variance formulation makes it straightforward to evaluate systematically how prior uncertainty impacts posterior estimates, enhancing interpretability. By utilizing these properties, the gamma prior balances theoretical considerations with computational practicality, making it a good candidate for Bayesian inference when conjugate priors are unavailable.
Assume that
and
, where
are known hyper-parameters. Then, the joint prior distribution for these random variables can be expressed as follows:
On the other hand, for the binomial distribution parameter
, it is well established that its natural conjugate prior is the beta distribution, see Soliman et al. [
28]. Let
and
be positive, known hyper-parameters; then, we can write the prior distribution of the binomial parameter as
Given that the model parameters are independent of the binomial distribution parameter, we first aim to derive the posterior distribution of the model parameters, followed by that of the binomial parameter. Using the likelihood function in (
8) and the prior distribution in (
14), the posterior distribution of
follows
where
A is the normalization factor that guarantees the integral of Equation (
16) evaluates to 1, and is given by
Likewise, the posterior distribution of the binomial distribution parameter
can be derived by combining (
6) and (
15) as
where
is the beta function. Employing the SE loss function, one can easily obtain the Bayes estimator of the binomial parameter
as the mean of the posterior distribution (
17) as
When
, it can be observed that the Bayes estimator of
coincides with its MLE.
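Since the posterior of the binomial parameter is a beta distribution, its SE-loss Bayes estimate is simply the beta posterior mean. The sketch below illustrates this; `successes` and `failures` are placeholder names standing in for the removal totals that enter the posterior in (17), not the paper's notation:

```python
def beta_posterior_mean(successes, failures, a, b):
    """SE-loss Bayes estimate of a binomial probability p under a
    Beta(a, b) prior: the mean of the Beta(a + successes, b + failures)
    posterior distribution."""
    return (a + successes) / (a + b + successes + failures)
```

As the hyper-parameters a and b tend to zero, the estimate tends to successes/(successes + failures), the familiar MLE, which matches the remark above.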
For the model parameter, let
be any function of
. The Bayes estimator of
is then derived from the posterior distribution (
16) under the SE loss function as
Computing the Bayes estimator in (
19) is challenging due to the complex posterior distribution, which involves intractable integrals. These integrals arise from normalization and expectation calculations, making analytical solutions impractical. To address this, MCMC techniques such as Gibbs sampling and the Metropolis–Hastings (MH) algorithm can be used to numerically approximate the posterior distribution without directly evaluating the integrals. By generating samples from the posterior, the Bayes estimates and BCIs can be approximated effectively. The process begins by deriving the full conditional distributions of the unknown model parameters
and
. Based on these conditionals, appropriate MCMC methods are selected. From the posterior in (
16), the conditional distributions of the model parameters
and
are obtained as
and
Obtaining samples from the full conditional distributions presented in (
20) and (
21) is not feasible, as these distributions do not correspond to any common or well-known probability distributions. In such instances, the MH algorithm offers an effective solution. Unlike methodologies that depend on standard distributions, the MH algorithm does not require the full conditional distributions to belong to a recognizable family of distributions. Instead, it generates candidate samples utilizing a proposal distribution, which are subsequently accepted or rejected based on an acceptance probability. This acceptance probability ensures that the Markov chain converges to the desired posterior distribution over time. In this analysis, we employ the normal distribution as a proposal distribution for both parameters
and
. Below, we outline the detailed steps of obtaining the required MCMC samples.
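A generic random-walk MH sampler of the kind described above can be sketched as follows. Here `log_post` is a placeholder for the (unnormalized) log posterior or full conditional; the interface and names are illustrative assumptions, not the authors' code:

```python
import math
import random

def metropolis_hastings(log_post, init, step, n_iter, burn_in, seed=None):
    """Random-walk Metropolis-Hastings with an independent normal proposal
    for each coordinate of a positive parameter vector; returns the
    post-burn-in samples as a list of parameter vectors."""
    rng = random.Random(seed)
    current = list(init)
    lp_current = log_post(current)
    samples = []
    for it in range(n_iter):
        # Candidate drawn from a normal proposal centred at the current state.
        proposal = [rng.gauss(c, s) for c, s in zip(current, step)]
        if all(v > 0 for v in proposal):  # both parameters must stay positive
            lp_prop = log_post(proposal)
            diff = lp_prop - lp_current
            # Accept with probability min(1, exp(diff)).
            if diff >= 0 or rng.random() < math.exp(diff):
                current, lp_current = proposal, lp_prop
        if it >= burn_in:
            samples.append(list(current))
    return samples
```

The acceptance step guarantees (under standard regularity conditions) that the chain's stationary distribution is the target posterior, regardless of the normal proposal's scale; the scale only affects mixing speed.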
The computation of Bayes estimates and the BCIs becomes straightforward once the MCMC samples have been generated. These samples, drawn from the posterior distribution, serve as the foundation for estimating various quantities of interest. Under the SE loss function, the Bayes estimates for the model parameters and the two reliability metrics can be obtained by calculating the mean of the corresponding MCMC samples as
After obtaining the Bayes point estimates for the various parameters and the two reliability measures, we can proceed to compute the BCIs. For the binomial parameter
, the BCI can be directly derived using the posterior distribution given in (
17). However, for the parameters
and the RF and HRF, the BCIs are computed using the generated MCMC samples. Specifically, for
, the
BCI can be obtained as follows:
where
and
are the lower and upper bounds of the BCI, respectively. Based on (
22), these bounds can be computed as
where
is the quantile function of the beta distribution.
For
,
, the RF, and the HRF, we first sort the generated MCMC samples as
,
,
and
. Accordingly, the
BCIs are
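The percentile construction of these BCIs from the sorted MCMC draws can be sketched as follows; the index convention used here is one simple illustrative choice:

```python
def credible_interval(samples, level=0.95):
    """Equal-tail 100*level% credible interval from MCMC draws: sort the
    draws and read off the level/2 and 1 - level/2 empirical quantiles."""
    s = sorted(samples)
    n = len(s)
    lo = s[int((1 - level) / 2 * n)]
    hi = s[min(n - 1, int((1 + level) / 2 * n))]
    return lo, hi
```

The same function applies unchanged to the MCMC samples of the parameters, the RF, and the HRF, since each is summarized by its own chain of draws.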
4. Monte Carlo Simulations
To evaluate the behavior and accuracy of the acquired estimators derived for the parameters , , , , and , we conducted a simulation study involving 1000 APTIIC-BR samples from two different IC lifespan populations, namely Pop-1:IC(0.5, 0.8) and Pop-2:IC(1.5, 2.0). The study design incorporated various configurations of the parameters (binomial percentage), (threshold), n (total testing units), and m (total failed units); specifically, (=0.4,0.8), n(=40,80), and m is determined as a failure percentage for each n, such as or 75%. To highlight the utility of the threshold on inferential calculations, for each scenario of , n, and m, two different levels of are utilized, namely (=1,3) and (=1.5,2.5) for Pop-1 and Pop-2, respectively. At and 1, the actual values of (,) in Pop-1 and Pop-2 are taken as (0.9048, 0.8359) and (0.9240, 0.6705), respectively.
To investigate the influence of prior information PDFs on Bayesian estimation, we suggest using historical data to determine the hyper-parameters and (for ) corresponding to the IC(,) distribution parameters. Accordingly, we generated 10,000 data sets, each consisting of observations, from IC(,) to serve as historical samples for estimating these unknown parameters. The resulting hyper-parameter values of used in Bayesian computations are specified as follows:
For Pop-1: (36.93075, 88.86014, 73.18852, 107.9357);
For Pop-2: (34.68326, 60.66929, 22.08699, 29.28567).
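One common way to turn such historical estimates into gamma hyper-parameters is moment matching, i.e., equating the gamma prior's mean and variance to the sample mean and variance of the historical estimates; whether the authors used exactly this elicitation is an assumption, and the sketch below is illustrative only:

```python
def gamma_hyperparams(estimates):
    """Moment-match a Gamma(shape a, rate b) prior to historical parameter
    estimates: solve a/b = sample mean and a/b**2 = sample variance."""
    n = len(estimates)
    mean = sum(estimates) / n
    var = sum((e - mean) ** 2 for e in estimates) / (n - 1)
    b = mean / var
    a = mean * b
    return a, b
```

Because the gamma mean and variance have closed forms (a/b and a/b^2), this elicitation requires no numerical optimization, which is one of the practical advantages listed for the gamma prior in Section 3.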
Following the methodology outlined by Kundu [
10], we also considered two different informative sets for beta hyper-parameters
and
for
, such as
(=4, 8) and
(=6, 2). To ensure convergence of the MCMC approach detailed in
Section 3, the initial 2000 iterations were discarded as burn-in, and a total of
= 10,000 iterations were retained for estimating the Bayesian point estimates and 95% BCIs for
,
,
,
, and
Initial parameter values for the MCMC sampling algorithm are set based on their respective frequentist estimates. Computational analyses are performed using R (version 4.2.2), with the frequentist and Bayesian MCMC procedures implemented via the 'maxLik' package (Henningsen and Toomet [29]) and the 'coda' package (Plummer et al. [
30]). For each experimental configuration, the average point estimate (APE) of
(as an example) is determined as follows:
where
represents the estimate of
for the
jth sample. To assess the precision of point estimates, the root mean squared error (RMSE) and average relative absolute bias (ARAB) are used as follows:
and
Furthermore, two key interval-based accuracy measures are considered, namely the average interval length (AIL) and the coverage probability (CP) at a 95% confidence level, as follows:
where
denotes the indicator function, and
and
represent 95% interval bounds.
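The accuracy measures defined above can be computed for one parameter across replicated samples as in the following sketch (function and argument names are illustrative):

```python
def simulation_metrics(true_value, estimates, intervals):
    """Accuracy measures used in the simulation study: average point
    estimate (APE), root mean squared error (RMSE), average relative
    absolute bias (ARAB), average interval length (AIL), and coverage
    probability (CP) of the reported 95% intervals."""
    n = len(estimates)
    ape = sum(estimates) / n
    rmse = (sum((e - true_value) ** 2 for e in estimates) / n) ** 0.5
    arab = sum(abs(e - true_value) for e in estimates) / (n * abs(true_value))
    ail = sum(hi - lo for lo, hi in intervals) / len(intervals)
    cp = sum(lo <= true_value <= hi for lo, hi in intervals) / len(intervals)
    return ape, rmse, arab, ail, cp
```

Lower RMSE, ARAB, and AIL values, together with CP values close to the nominal 95% level, indicate better-performing estimators, which is exactly the comparison criterion used for Tables 1 through 5.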
Simulation results for
,
,
,
, and
are summarized in
Table 1,
Table 2,
Table 3,
Table 4 and
Table 5. Based on key performance indicators, specifically, lower RMSE, ARAB, and AIL values, along with higher CP values, the following observations can be made:
The proposed inferential methods provide reliable results for , , , , and .
Increasing n or m enhances both point and interval estimation accuracy.
Bayesian estimators and credible intervals for , , , , or outperform frequentist estimates and asymptotic intervals, benefiting from the informative gamma and beta knowledge.
When increases, in both IC populations 1 and 2, the following trends are observed:
The RMSEs and ARABs of , , and decrease, whereas those of and increase;
The AILs of , , and decrease, whereas those of and increase;
The CPs of , , and increase, whereas those of and decrease.
When increases, in both IC populations 1 and 2, the following observations are noted:
The RMSEs and ARABs of , , and decrease, whereas those of and increase;
The AILs of , , , and decrease, whereas those of increase;
The CPs of , , , and increase, whereas those of decrease.
When and increase, the following observations are noted:
The RMSEs and ARABs of all parameters decrease;
The AILs of all parameters decrease except for , which increases; the opposite pattern is observed for the corresponding CP results.
In most situations, the CPs of the ACI (or BCI) of , , , , and align closely with the expected 95% level.
Overall, the Bayesian methodology, incorporating MCMC techniques with independent gamma and beta priors, is highly recommended for evaluating the lifetimes of components from an IC population when data are collected through binomial removal via the proposed censoring plan.