Stress–Strength Reliability Analysis for Different Distributions Using Progressive Type-II Censoring with Binomial Removal

Abstract: In the statistical literature, one of the most commonly used subjects is stress–strength reliability, defined as δ = P[W < V], where V and W are the strength and stress random variables, respectively, and δ is the reliability parameter. Type-II progressive censoring with binomial removal is used in this study to examine the inference of δ = P[W < V] for a component with strength V that is subjected to stress W. We suppose that V and W are independent random variables taken from the Burr XII distribution and the Burr III distribution, respectively, with a common shape parameter. The maximum likelihood estimator of δ is derived. The Bayes estimator of δ under the assumption of independent gamma priors is also derived. Because the Bayes estimates under the squared error and linear exponential loss functions lack explicit forms, the Metropolis–Hastings method is employed to approximate them. Utilizing comprehensive simulations and two metrics (average of estimates and root mean squared errors), we compare these estimators. Further, an analysis is performed on two actual data sets based on breakdown times for insulating fluid between electrodes recorded under varying voltages.


Introduction
A growing amount of pressure has been placed on manufacturers in recent years to create high-quality goods while lowering manufacturing costs and time frames. Studying reliability is increasingly important as global competitiveness increases. Reliability estimation, prediction, and optimization are built on the pillars of lifetime testing, structural reliability, and machine maintenance. The stress-strength (SS) model is mathematically written as δ = P[W < V], where V is the strength random variable, W is the stress random variable, and δ is the reliability parameter. In this model, the probability that the system can withstand the pressures placed on it is known as the system's reliability, δ = P[W < V]. A good illustration from both mechanical engineering and aerodynamics is the reliability of aircraft windshields. Various fields, including engineering, medicine, and the military, can employ SS models. SS reliability can provide scenarios for reliable structures such as carbon fiber, bridges, lifts, and others. The parameter δ is undoubtedly applicable in a wide range of sectors and offers more than just an SS model; it also gives a broad assessment of the differences between two populations. For instance, in clinical investigations, we may assess the effectiveness of two treatments by comparing V, the patient's life expectancy while receiving one medicine, and W, the patient's life expectancy when receiving a different medication. Information on more applications of this model can be found in [1]. Numerous studies on the SS model using complete and censored samples have been conducted by [2-12] and others. Some recent studies concerning SS models can be found in [13-19].
Censored samples are used to analyze lifetime data because, in life-testing trials, one frequently runs into circumstances where it takes a long time to accumulate the number of failures needed to make a meaningful judgment. In the past ten years, the Type-II progressive censoring (TII-PC) scheme has become one of the most popular censoring methods. It can be explained as follows. Assume that n identical units will be tested and m failures will be recorded. When the first failure occurs, R1 items are randomly selected and eliminated from the (n − 1) surviving units. Similarly, at the second failure, R2 items of the surviving units are selected at random and eliminated, and so on. The remaining items are all removed at the moment of the mth failure. The vector R = (R1, R2, . . ., Rm) denotes the TII-PC scheme. For R = (0, 0, . . ., n − m), Type-II censoring is obtained, and a complete sampling scheme arises when n = m and R1 = . . . = Rm−1 = Rm = 0. Research on the various characteristics of progressive censoring schemes was provided by Balakrishnan [20] and Aggarwala and Balakrishnan [21]. In this scheme, the numbers R1, R2, . . ., Rm are all prefixed. However, these numbers might occur at random in some real-world scenarios. According to Yuen and Tse [22], for instance, it is random and impossible to predict how many patients will withdraw from a clinical test at any given point. Additionally, even when some of the tested units have not failed, an experimenter may determine in some reliability trials that it is unsuitable or too unsafe to continue testing on some of the tested units. In these situations, the removal pattern is random at every failure (Yuen and Tse [22] and Amin [23]). This results in progressive censoring with random removals. As a result, several authors, including Wu et al. [24], Tse et al. [25], Dey and Dey [26], and Yan et al. [27], have examined statistical inference on lifetime distributions under TII-PC with random removals.
In the literature, there is only one study regarding the parametric inference of the SS model in which the stress and strength random variables belong to the Marshall-Olkin extended Weibull family and the observed samples are TII-PC with fixed or random removal, as reported by Mokhlis et al. [28]. The main goal of the present work is to examine the estimation of the SS reliability parameter δ = P[W < V] when W and V are independent random variables with distinct distributions and the observed samples are TII-PC with binomial removal. We now give a brief summary of our contributions.

2. An explicit expression of the SS reliability parameter δ is derived when V and W are independent random variables following BXII(ϑ, ϕ1) and BIII(ϑ, ϕ2), respectively. This expression shows that δ does not depend on ϑ.
3. The maximum likelihood estimate (MLE) of δ is obtained based on TII-PC with binomial removal.
4. Under two distinct loss functions (the squared error loss function (SEF) and the linear exponential loss function (LNx)), the Bayes estimators of δ utilizing informative (INF) and non-informative (N-INF) priors are provided.
5. The effectiveness of the developed estimates is evaluated using a Monte Carlo simulation analysis.
6. A real data example is provided that illustrates the theoretical findings.
This article is organized as follows. Section 2 provides the description of the parent distributions along with the SS reliability formula. The MLE of δ based on TII-PC is obtained in Section 3. Section 4 proposes Bayesian estimates using the Metropolis-Hastings algorithm for both symmetric and asymmetric loss functions. We provide a simulation analysis in Section 5 that compares the aforementioned estimation techniques. Section 6 demonstrates how the suggested model and approaches may be applied to engineering issues. Section 7 contains a summary and a few conclusions.

Description of the Parent Distributions and Expression of δ
In this section, a description of the parent distributions, namely the BXII and BIII distributions, is given. Also, the expression of the SS reliability δ = P[W < V] is provided, where V is the strength random variable that follows the BXII distribution and W is the stress random variable that has the BIII distribution.
Burr [29] created a distributional scheme with twelve families. Special focus has been placed on the BXII and BIII distributions. In the fields of lifetime and failure time modeling, the two-parameter BXII distribution is frequently used. In modeling lifetime or survival data, the BXII and BIII distributions have received special consideration because of their strong statistical and reliability characteristics.
Reference [30] noted that a significant portion of the curve-shape properties of the Pearson family is covered when the parameters of the Burr distribution are chosen suitably. Since its shape parameter generates a variety of forms that fit varied data well, the BXII distribution has been used in research related to medicine, business, chemical engineering, quality control, and reliability. For instance, Ref. [31] illustrated the general applicability of the BXII distribution to any given collection of unimodal data, as well as the distribution's links to other distributions. Ref. [32] employed the BXII distribution to create an economical statistical design of the control chart for non-normally distributed data. It was used by [33] to model inpatient costs in English hospitals. The BXII distribution has also been applied in a number of disciplines, including finance and economics (McDonald and Richards [34]), hydrology (Mielke and Johnson [35]), medicine (Wingo [36]), and mineralogy (Cook and Johnson [37]). The probability density function (PDF) and the survival function (SF) of the BXII distribution are defined by

f_V(v) = ϑ ϕ1 v^(ϑ−1) (1 + v^ϑ)^(−(ϕ1+1)), v > 0, (1)

and

S_V(v) = (1 + v^ϑ)^(−ϕ1), (2)

where ϑ, ϕ1 > 0 are the shape parameters. The BXII distribution's inferences have been the subject of several studies (see, for example, [38-44]). Figure 1 shows the plots of the PDF for the BXII distribution.

On the other hand, the BIII distribution has a wide range of applications in statistical modeling fields, including forestry (Gove et al. [45]), meteorology (Mielke [46]), fracture roughness data (Nadarajah and Kotz [47]), and life testing (Hassen et al. [48]). In studies of the distribution of income, wages, and wealth, the BIII distribution is also known as the Dagum distribution [30]. It is referred to as the inverse Burr distribution in the actuarial literature [49] and the Kappa distribution in the meteorological literature [46]. For a random variable w ∈ R+, the PDF and SF of the BIII distribution are given, respectively, by

f_W(w) = ϑ ϕ2 w^(−(ϑ+1)) (1 + w^(−ϑ))^(−(ϕ2+1)), w > 0, (3)

and

S_W(w) = 1 − (1 + w^(−ϑ))^(−ϕ2), (4)

where ϑ, ϕ2 > 0 are the shape parameters. Several studies have looked at inference for the BIII distribution (for instance, [50-53]). Figure 2 shows the plots of the PDF for the BIII distribution.

Let the strength V ∼ BXII(ϑ, ϕ1) and the stress W ∼ BIII(ϑ, ϕ2) be independently distributed random variables with the common shape parameter ϑ and the different shape parameters ϕ1 and ϕ2. The SS reliability δ = P[W < V] is computed as

δ = ∫_0^∞ F_W(v) f_V(v) dv = Γ(ϕ1 + 1) Γ(ϕ2 + 1) / Γ(ϕ1 + ϕ2 + 1), (5)

where Γ(.) is the gamma function and F_W = 1 − S_W. The SS parameter δ thus depends only on the shape parameters ϕ1 and ϕ2, and not on the common shape parameter ϑ.
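The closed-form expression δ = Γ(ϕ1 + 1)Γ(ϕ2 + 1)/Γ(ϕ1 + ϕ2 + 1) can be checked numerically against a direct Monte Carlo estimate of P[W < V], using inverse-CDF sampling from the two Burr distributions. The sketch below is illustrative only; the parameter values are arbitrary choices, not values from the paper.

```python
import math
import random

# Inverse-CDF draw from BXII(theta, phi1): F(v) = 1 - (1 + v^theta)^(-phi1)
def rburr12(theta, phi1):
    u = random.random()
    return ((1.0 - u) ** (-1.0 / phi1) - 1.0) ** (1.0 / theta)

# Inverse-CDF draw from BIII(theta, phi2): F(w) = (1 + w^(-theta))^(-phi2)
def rburr3(theta, phi2):
    u = random.random()
    return (u ** (-1.0 / phi2) - 1.0) ** (-1.0 / theta)

def delta_closed_form(phi1, phi2):
    # delta = Gamma(phi1+1) * Gamma(phi2+1) / Gamma(phi1+phi2+1)
    return math.exp(math.lgamma(phi1 + 1) + math.lgamma(phi2 + 1)
                    - math.lgamma(phi1 + phi2 + 1))

random.seed(1)
theta, phi1, phi2 = 2.0, 0.5, 0.75      # arbitrary illustrative values
n = 200_000
# Empirical proportion of pairs with W < V
empirical = sum(rburr3(theta, phi2) < rburr12(theta, phi1) for _ in range(n)) / n
print(empirical, delta_closed_form(phi1, phi2))
```

Changing θ while keeping ϕ1 and ϕ2 fixed leaves both values (up to Monte Carlo noise) unchanged, consistent with δ not depending on ϑ.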

Bayesian Estimation
This section provides the Bayesian estimator of δ based on TII-PC with binomial removals, under the SEF and LNx loss functions, using INF and N-INF priors. We assume independent gamma prior PDFs for ϑ, ϕ1, and ϕ2:

π1(ϑ) ∝ ϑ^(a1−1) e^(−b1 ϑ), π2(ϕ1) ∝ ϕ1^(a2−1) e^(−b2 ϕ1), and π3(ϕ2) ∝ ϕ2^(a4−1) e^(−b4 ϕ2),

with all hyper-parameters positive. Since 0 < Pj < 1, j = 1, 2, we consider beta prior PDFs for Pj, with density proportional to Pj^(cj−1)(1 − Pj)^(dj−1)/B(cj, dj), where B(., .) is the beta function. The joint posterior PDF of ϕ1, ϕ2, ϑ, P1, and P2 is then proportional to the product of the likelihood in Equation (12) and these priors, and the full conditional posteriors of ϑ, ϕ1, ϕ2, P1, and P2 follow from it.

Because these conditional posteriors appear complex, we cannot identify standard distributions from which to generate samples directly. Therefore, we use a numerical method, namely the Markov chain Monte Carlo (MCMC) method, to handle the integration of the posterior distribution in Equation (17). The Bayesian estimators of δ under the SEF and LNx loss functions, denoted δ̂_SEF and δ̂_LNx, minimize the posterior expected losses L_SEF(δ, δ̂_SEF) and L_LNx(δ, δ̂_LNx), respectively, where α is the LNx scale parameter (for further information, see [54]). The resulting posterior expectations in Equation (20) cannot be calculated analytically; they are approximated with the Metropolis-Hastings (MH) method within the MCMC technique.

MH Algorithm
The MH method (Algorithm 1) uses the stages listed below to draw samples of (ϑ, ϕ1, ϕ2) from the posterior density given by Equation (20), where P1 and P2 are fixed.

2.3: Compute the acceptance probability from the ratio of π••(•), the posterior density in Equation (20), at the candidate and current points.
2.4: Generate a sample u from the uniform U(0, 1) distribution.
2.5: Accept the new candidate ξ if u does not exceed the acceptance probability; otherwise, reject it.
In this way, MCMC samples ξ(i) = (ϑ(i), ϕ1(i), ϕ2(i)), i = 1, . . ., M, are obtained, and δ(i) can be computed by substituting ξ(i) in Equation (5). Eventually, a portion of the initial samples is removed (burn-in), and the remaining samples of size M drawn from the posterior density are used to calculate Bayesian estimates (BEs). The BEs of δ under SEF and LNx are given by

δ̂_SEF = (1/(M − lB)) ∑_{i=lB+1}^{M} δ(i)

and

δ̂_LNx = −(1/α) log[ (1/(M − lB)) ∑_{i=lB+1}^{M} exp(−α δ(i)) ],

where lB represents the number of burn-in samples.
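The MH stages above can be sketched generically. In the sketch below, log_post is only a placeholder (a product of independent gamma densities standing in for the posterior of Equation (20), which depends on the censored-data likelihood not reproduced here); the random-walk proposal, the accept/reject step, the burn-in, and the SEF/LNx estimates of δ are the parts meant to illustrate the algorithm.

```python
import math
import random

def delta_of(phi1, phi2):
    # SS reliability: delta = Gamma(phi1+1)Gamma(phi2+1)/Gamma(phi1+phi2+1)
    return math.exp(math.lgamma(phi1 + 1) + math.lgamma(phi2 + 1)
                    - math.lgamma(phi1 + phi2 + 1))

# PLACEHOLDER log-posterior: independent Gamma(2, 1) stand-ins for
# (theta, phi1, phi2); replace with the actual log-posterior of Eq. (20).
def log_post(theta, phi1, phi2):
    if min(theta, phi1, phi2) <= 0:
        return -math.inf
    return sum(math.log(x) - x for x in (theta, phi1, phi2))

def mh_chain(m=12000, burn=2000, step=0.3, seed=7):
    rng = random.Random(seed)
    cur = (1.0, 1.0, 1.0)
    cur_lp = log_post(*cur)
    deltas = []
    for i in range(m):
        # 2.1-2.2: propose a candidate by a normal random walk
        cand = tuple(x + rng.gauss(0.0, step) for x in cur)
        cand_lp = log_post(*cand)
        # 2.3-2.5: accept with probability min(1, pi(cand)/pi(cur))
        if math.log(rng.random()) < cand_lp - cur_lp:
            cur, cur_lp = cand, cand_lp
        if i >= burn:                       # discard l_B burn-in draws
            deltas.append(delta_of(cur[1], cur[2]))
    return deltas

deltas = mh_chain()
M = len(deltas)
d_sef = sum(deltas) / M                     # posterior mean (SEF estimate)
alpha = 1.5                                 # LNx scale parameter
d_lnx = -math.log(sum(math.exp(-alpha * d) for d in deltas) / M) / alpha
print(round(d_sef, 3), round(d_lnx, 3))
```

For α > 0 the LNx estimate never exceeds the SEF estimate (by Jensen's inequality), which is a useful sanity check on an implementation.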

Elicitation of Hyper-Parameters
The determination of hyper-parameters relies on informative priors derived from the MLEs for BXII(ϑ, ϕ1). This is achieved by aligning the mean and variance of the MLEs (ϑ̂(j), ϕ̂1(j)) with the corresponding moments of the gamma priors. Here, j = 1, 2, . . ., f, and f denotes the number of available samples from the BXII(ϑ, ϕ1) distribution (Dey et al. [55]). For ϑ, for example, equating the first two moments of ϑ̂(j) with those of the Gamma(a1, b1) prior (mean a1/b1, variance a1/b1²) yields the pair of equations

(1/f) ∑_{j=1}^{f} ϑ̂(j) = a1/b1 and (1/(f − 1)) ∑_{j=1}^{f} (ϑ̂(j) − (1/f) ∑_{j=1}^{f} ϑ̂(j))² = a1/b1².

By solving this pair of equations, the estimated hyper-parameters can be expressed as

a1 = (sample mean)² / sample variance and b1 = sample mean / sample variance,

and similarly for (a2, b2). We apply the identical technique to calculate the hyper-parameters (a3, b3, a4, b4) for the BIII(ϑ, ϕ2) case. Here, ϑ remains consistent across the two assumed distributions, implying that its hyper-parameters take identical values, specifically a1 = a3 and b1 = b3.
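The moment-matching elicitation above can be sketched in a few lines; the MLE values fed in below are hypothetical, used only to exercise the formulas.

```python
def gamma_hyperparams(mles):
    """Match the sample mean and variance of f MLEs to a Gamma(a, b) prior
    (mean a/b, variance a/b**2), as in the moment-matching elicitation."""
    f = len(mles)
    mean = sum(mles) / f
    var = sum((x - mean) ** 2 for x in mles) / (f - 1)
    a = mean ** 2 / var     # shape: mean^2 / variance
    b = mean / var          # rate:  mean / variance
    return a, b

# Hypothetical MLEs of phi1 from f = 5 samples (illustrative numbers only)
a2, b2 = gamma_hyperparams([0.52, 0.47, 0.55, 0.49, 0.51])
print(round(a2, 2), round(b2, 2))
```

By construction the fitted prior reproduces the sample mean exactly (a2/b2 equals the mean of the supplied MLEs).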

Numerical Outcomes
In this section, we investigate the application of Monte Carlo simulation to the proposed estimates of the SS reliability δ within the context of TII-PC, incorporating binomial removal. The primary objective of this simulation study is to scrutinize the properties and effectiveness of the estimates derived through both the ML and Bayesian methods. It is worth noting that the numerical calculations were executed using the R programming language, alongside various auxiliary software packages, to facilitate equation solving and result extraction. The following settings are assumed for the simulation process:
1. We assume a total of 1000 replications for our simulations.

3. We suggest a sample size of n = n1 = n2 with two values: 40 and 60. Furthermore, the number of stages m = m1 = m2 varies depending on the chosen n value. Specifically, when n = 40, we configure m to be either 20 or 30. On the other hand, for n = 60, we explore options with m = 25 and 40 stages.
4. In simulating the removal of units from the life test, we model it following a binomial distribution with probability P = P1 = P2. We explore the values P = 0.05, 0.20, 0.50, and 0.80. Concerning the random unit removal patterns in the TII-PC, we assume two primary patterns based on n, m, and the removal probability P, falling into two distinct cases:
Scheme 1 (Sch-1): R1 follows a binomial distribution with parameters (n − m − 1, P), and each subsequent Rj follows a binomial distribution with parameters (n − m − ∑_{i=1}^{j−1} Ri, P), j = 2, . . ., m − 1. In this scheme, Rm is set to zero.
Scheme 2 (Sch-2): Here, Rm follows a binomial distribution with parameters (n − m − 1, P), and each preceding Rm−j follows a binomial distribution with parameters (n − m − ∑_{i=0}^{j−1} Rm−i, P), j = 1, . . ., m − 2. In this scheme, R1 is set to zero.
Notably, Sch-1 involves a decreasing number of removals at each stage of censoring, while Sch-2 exhibits an increasing trend.
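Generating a Sch-1 removal vector can be sketched as follows. The extracted text is ambiguous about the exact binomial index sets, so the boundary conventions below (first draw on n − m − 1 units, later draws on the units still eligible for removal, Rm = 0) are an assumption.

```python
import random

def rbinom(k, p, rng):
    # Draw from Binomial(k, p) as a sum of Bernoulli trials (k clipped at 0)
    return sum(rng.random() < p for _ in range(max(k, 0)))

def sch1_removals(n, m, p, seed=0):
    """Sketch of Sch-1: R1 ~ Bin(n - m - 1, p); each later R_j is binomial
    on the units still available for removal; R_m = 0.  Boundary
    conventions are assumptions where the text is ambiguous."""
    rng = random.Random(seed)
    R = [rbinom(n - m - 1, p, rng)]
    for _ in range(2, m):
        left = n - m - sum(R)       # units still available for removal
        R.append(rbinom(left, p, rng))
    R.append(0)                     # R_m = 0 under Sch-1
    return R

R = sch1_removals(n=40, m=20, p=0.2)
print(R, sum(R))
```

By construction the removals sum to at most n − m, so m failures can always be observed; a Sch-2 generator would mirror the same logic from the last stage backwards.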
Step 2: Generate a random data set V of size n = n1 from BXII(ϑ, ϕ1) using the algorithm proposed by [56] and the generated removal scheme R.
Step 3: Similarly, generate a random data set W of size n = n2 from BIII(ϑ, ϕ2) given the corresponding removal scheme.
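Steps 2 and 3 can be illustrated by a direct simulation of a TII-PC sample. Rather than the transformation algorithm of [56], the sketch below simply withdraws randomly chosen survivors after each observed failure, which simulates the same scheme directly; the parameter values and removal vector are illustrative only.

```python
import math
import random

def rburr12(theta, phi1, rng):
    # Inverse-CDF draw from BXII: F(v) = 1 - (1 + v^theta)^(-phi1)
    u = rng.random()
    return ((1.0 - u) ** (-1.0 / phi1) - 1.0) ** (1.0 / theta)

def progressive_sample(n, R, draw, seed=0):
    """Simulate a TII-PC sample of size m = len(R): after the j-th observed
    failure, R[j] surviving units are withdrawn at random.
    Requires n = m + sum(R)."""
    rng = random.Random(seed)
    alive = sorted(draw(rng) for _ in range(n))   # latent lifetimes, sorted
    observed = []
    for r in R:
        observed.append(alive.pop(0))             # next (smallest) failure
        for _ in range(r):                        # random withdrawals
            alive.pop(rng.randrange(len(alive)))
    return observed

theta, phi1 = 2.0, 0.5
R = [2, 0, 1, 0, 2]                               # m = 5, n = m + sum(R) = 10
v = progressive_sample(n=10, R=R, draw=lambda g: rburr12(theta, phi1, g))
print(v)
```

Because the lifetimes are i.i.d., withdrawing uniformly chosen survivors yields observed failure times with the progressively censored joint distribution; the BIII sample of Step 3 would use the BIII inverse CDF in place of rburr12.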
Step 5: Compute the BE using the MH algorithm as follows:
1. Consider two scenarios for prior distributions. In the first scenario, an INF prior is employed, where hyper-parameter values are computed using the technique outlined in Section 4.2 and Equation (23).
2. Consider the second scenario, which involves the N-INF prior, where all hyper-parameter values are set to zero.
3. For the given hyper-parameters of the prior distributions, generate 10,000 samples of δ from the posterior density using MCMC and the MH algorithm.
4. Discard the initial 2000 samples as burn-in, leaving 8000 samples generated from the posterior density.
Step 6: Repeat Steps 2 to 5 a total of 1000 times and save all the estimates.
Step 7: Calculate statistical metrics for the point estimates: the average (A1) of the estimates and the root mean squared error (A2). These can be computed as

A1 = (1/1000) ∑_{k=1}^{1000} δ̂_k and A2 = sqrt( (1/1000) ∑_{k=1}^{1000} (δ̂_k − δ)² ),

where δ signifies the actual value of the SS reliability for the given parameters and δ̂_k indicates the estimated value in the k-th replication.
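The A1/A2 metrics of Step 7 amount to the sample mean of the replicated estimates and the root mean squared error against the true δ. A minimal sketch, with hypothetical replicated estimates:

```python
import math

def a1_a2(estimates, true_delta):
    """A1: average of the replicated estimates; A2: root mean squared
    error of the estimates against the true delta."""
    k = len(estimates)
    a1 = sum(estimates) / k
    a2 = math.sqrt(sum((d - true_delta) ** 2 for d in estimates) / k)
    return a1, a2

# Hypothetical estimates from four replications around a true value of 0.72
a1, a2 = a1_a2([0.70, 0.74, 0.71, 0.73], 0.72)
print(round(a1, 3), round(a2, 4))
```

In the simulation proper, the list of estimates would hold the 1000 replicated values of δ̂ from Step 6.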
Step 8: Repeat Steps 1 to 7 for the second scheme of removing items (Sch-2).
To provide point estimates of δ, we present the A1 and A2 results for various values of P and the two proposed TII-PC schemes. Tables 1 and 2 correspond to cases where ϕ1 = 0.5 and ϕ2 takes the values 0.75 and 1.75, respectively. Additionally, Tables 3 and 4 correspond to cases where ϕ1 = 1.5 and ϕ2 takes the values 0.75 and 1.75, respectively. In each cell, the first row gives the A1 of δ and the second row the A2 of δ. From the results in Tables 1-4, we can draw some observations:
1. As both n and m increase, there is a noticeable decrease in A2 for all proposed estimation methods, and A1 tends to converge to the true value of δ.

2. With an increase in the removal probability P, the A2 values also show an upward trend, indicating a decrease in the precision of the estimates as P rises.

3. In many instances, A2 estimates from Sch-2 are slightly higher than those from Sch-1 for all values of P except the smallest. This suggests that Sch-1 may exhibit better performance.
4. When comparing BEs obtained using MCMC under the INF and N-INF approaches, there is a clear indication that the INF prior case significantly outperforms the N-INF prior case.

5. The value of δ decreases with an increase in ϕ2, keeping ϑ and ϕ1 constant. The same occurs when ϕ1 increases.

Real Data Analysis
In this section, we analyze two actual datasets to illustrate the application of our proposed estimation techniques. These datasets consist of breakdown times for insulating fluid between electrodes recorded under varying voltages [57]. Table 5 displays the failure times (in minutes) for insulating fluid between two electrodes subjected to 36 kV (V) and 34 kV (W). Shapiro-Wilk normality tests were conducted to assess the normal distribution assumption for the two datasets, V and W. The test statistics were found to be 0.6082 and 0.7200, each with a corresponding p-value less than 0.001. Therefore, we conclude that the two datasets do not follow a normal distribution.
The BXII(ϑ, ϕ1) and BIII(ϑ, ϕ2) distributions are initially applied independently to datasets V and W. First and foremost, it is crucial to ascertain the suitability of each distribution for its respective dataset. This involves computing the MLEs of the parameters and assessing various goodness-of-fit criteria, including the negative log-likelihood criterion (NLC), the Akaike information criterion value (AICV), the Bayesian information criterion value (BICV), and the Anderson-Darling test (ADT) statistic, as well as the Kolmogorov-Smirnov test (K-ST) statistic and its corresponding p-value. These criteria are subsequently compared with those obtained from alternative distributions. For Dataset 1 with the BXII distribution, the alternatives include the Weibull (WE), generalized exponential (Gen-Exp), exponential (Exp), and Lindley (L) distributions. As for Dataset 2, the distributions compared with BIII are the inverse Weibull (Inv-WE), WE, Gen-Exp, and inverse gamma (Inv-Ga). Lower values of these criteria, coupled with larger p-values, indicate a superior fit. The findings, encompassing parameter estimates and goodness-of-fit statistics, are detailed in Table 6. The results from Table 6 indicate that, among the distributions considered, BXII and BIII serve as appropriate models for Dataset 1 and Dataset 2, respectively. Additionally, Figure 3 presents visualizations of the empirical and fitted distribution functions. These visuals distinctly highlight that the BXII and BIII distributions align more favorably with Dataset 1 and Dataset 2, respectively, in comparison to the other distributions under consideration. This observation holds true, at least within the confines of these specific datasets. Next, we check the null hypothesis H0: ϑ_DataW = ϑ_DataV against the alternative H1: ϑ_DataW ≠ ϑ_DataV. In this scenario, we calculate the likelihood-ratio test statistic from the maximized log-likelihoods as −2[ℓ(ϑ̂_DataW, ϕ̂2) − ℓ(ϑ̂_DataV, ϕ̂1)] = 69.2209, and its associated p-value is
found to be greater than 0.05. Consequently, we do not reject the null hypothesis, affirming the validity of the assumption H0: ϑ_DataW = ϑ_DataV.
From the initial pair of datasets, we produce two sets of TII-PC samples, one from each dataset. These samples are constructed with m = 10 stages, adhering to the item removal scheme outlined in Table 7. We compute the estimate of δ through the MLEs of the parameters ϑ, ϕ1, and ϕ2, considering the TII-PC patterns based on the two real datasets (V and W); the estimated value is found to be 0.7307. Furthermore, we calculate BEs using MCMC with the MH algorithm and the N-INF prior. While generating samples from the posterior distribution using MH, we initialize the value of δ as δ(0) = δ̂, where δ̂ represents the MLE of δ. Subsequently, we discard the initial 2000 burn-in samples from a total of 10,000 samples generated from the posterior density. BEs are then derived using different loss functions, including SEF and LNx (with α = −1.5 for LNx1 and α = 1.5 for LNx2). The obtained BEs under SEF, LNx1, and LNx2 are 0.7709, 0.7667, and 0.7750, respectively.
Finally, the convergence of the MCMC estimates obtained with the MH algorithm for δ is illustrated in Figure 4. This set of figures includes a trace plot, a histogram, and the cumulative mean for the estimated parameter δ under N-INF priors. These visualizations indicate approximate normality of the generated posterior samples for δ and convergence to approximately 0.76.

Conclusions
Progressive censoring is frequently used in life testing and reliability studies to address a variety of issues that experimenters face while conducting various sorts of experiments, including cutting down on overall test duration, saving experimental units, and estimating effectively. One sort of progressive censoring that has been created to enable removal with a specified distribution is the TII-PC with random removal. In this work, the estimation of the SS model is based on the assumption that the distributions of the stress and strength random variables are distinct with a common shape parameter. The point estimator for δ is generated using the TII-PC with binomial removal, taking the ML and Bayesian techniques into consideration. The MCMC approach and the MH algorithm, based on symmetric and asymmetric loss functions, are both carried out in light of INF and N-INF priors and yield Bayesian estimates. The effectiveness of the generated estimates is validated by a comprehensive simulation analysis. We found that the Bayes estimates employing the MCMC approach outperformed the MLEs. Therefore, when analyzing data, one may consider using the Bayesian approach with the MH algorithm if prior knowledge about the data is available; otherwise, one may use ML or the Bayesian method based on the N-INF prior. Finally, to illustrate how our SS reliability model may be applied, we examined a real-world case.

Figure 2. Plots of PDF for the BIII distribution.

Figure 3. The empirical distribution function and fitted distribution functions for Datasets 1 and 2.

Table 1. Measures of the MLEs and BEs for ϕ1 = 0.5 and ϕ2 = 0.75 under different values of P, m, and n.

Table 2. Measures of the MLEs and BEs for ϕ1 = 0.5 and ϕ2 = 1.75 under different values of P, m, and n.

Table 3. Measures of the MLEs and BEs for ϕ1 = 1.5 and ϕ2 = 0.75 under different values of P, m, and n.

Table 6. Evaluation of the goodness of fit for the provided two datasets.

Table 7. Generated m data of the TII-PC and corresponding censored schemes.