Article

Statistical Inference of Inverse Weibull Distribution Under Joint Progressive Censoring Scheme

by Jinchen Xiang, Yuanqi Wang and Wenhao Gui *
School of Mathematics and Statistics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(6), 829; https://doi.org/10.3390/sym17060829
Submission received: 8 March 2025 / Revised: 16 May 2025 / Accepted: 23 May 2025 / Published: 26 May 2025
(This article belongs to the Section Mathematics)

Abstract:
In recent years, there has been an increasing interest in the application of progressive censoring as a means to reduce both cost and experiment duration. In the absence of explanatory variables, the present study employs a statistical inference approach for the inverse Weibull distribution, using a progressive type II censoring strategy with two independent samples. The article expounds on the maximum likelihood estimation method, utilizing the Fisher information matrix to derive approximate confidence intervals. Moreover, interval estimations are computed by the bootstrap method. We explore the application of Bayesian methods for estimating model parameters under both the squared error and LINEX loss functions. The Bayesian estimates and corresponding credible intervals are calculated via Markov chain Monte Carlo (MCMC). Finally, comprehensive simulation studies and real data analysis are carried out to validate the precision of the proposed estimation methods.

1. Introduction

The modeling and analysis of lifetimes play a crucial role in many scientific and technological fields. In practice, studying a complete sample is often impractical because increasing product reliability results in longer lifetime data (see [1]). To save time and reduce costs, various censoring schemes have been proposed in reliability and survival analysis. Among them, the type II censoring scheme is one of the most commonly applied methods. Nevertheless, within industrial contexts, the manufacture of products occurs on multiple production lines, thus necessitating comparative lifecycle analyses. In the evaluation of the reliability of two populations produced by different lines, joint censoring schemes are a more appropriate methodology. In their seminal work, Balakrishnan et al. [2] pioneered the introduction of a novel joint type II censoring scheme, providing a framework for statistical inferences concerning two exponential distributions.
Unlike conventional type II censoring, which involves unit withdrawal only at the experiment’s termination, progressive type II censoring, as pioneered by [3], removes units at each failure occurrence. This strategy saves time by discarding observations with excessively long lifetimes, thereby allowing parameter estimation even without continuous data collection throughout the experiment. Compared to standard censoring techniques, progressive type II censoring provides notable benefits. For a detailed review of work on progressive censoring, please refer to [4].
Relying solely on censoring methods for a single population poses several limitations. While progressive type II censoring removes some data, obtaining sufficient observations remains costly. Moreover, experiments based on one population cannot capture the interaction between different groups. To overcome these drawbacks, Rasouli and Balakrishnan [5] proposed the joint progressive type II censoring (JPC) scheme. This method records failures from two distinct populations, reducing the time needed to collect comparable data and enabling direct comparison of failure times under identical conditions.
Within this framework, two independent samples of sizes m and n, one from each production line, undergo a simultaneous life-testing experiment. This method is particularly efficient for ending life tests once a specified number of failures is observed.
Several studies have investigated JPC and its related inference methods. Balakrishnan et al. [6] proposed likelihood-based inference for k exponential distributions under JPC, while Doostparast et al. [7] examined Bayesian estimation using a linear exponential loss function with JPC data. Mondal and Kundu [8] focused on point and interval estimation for Weibull parameters under JPC. Goel and Krishna [9] studied likelihood and Bayesian inference for k Lindley populations under joint type II censoring, with Krishna and Goel [10] extending this framework to JPC. Hassan et al. [11] addressed inference for the Burr type III distribution under JPC, while Kumar and Kumari [12] explored likelihood and Bayesian estimation for two inverse Pareto populations. More recently, Hasaballah et al. [13] analyzed reliability for two Nadarajah–Haghighi populations, and Abo-Kasem et al. [14] investigated inference for two Gompertz populations under similar censoring schemes.
The two-parameter inverse Weibull distribution (IWD) was first proposed by Keller and Kamath [15] to model the degradation of diesel engine components. Since its introduction, the IWD has gained widespread recognition as a suitable model for lifetime data analysis. Langlands et al. [16] demonstrated its effectiveness in modeling breast cancer mortality data. Abhijit and Anindya [17] showed that the IWD outperforms the normal distribution in analyzing ultrasonic pulse velocity data on concrete structures. Elio et al. [18] introduced a mixed IWD model to address extreme wind speed scenarios.
The IWD finds extensive applications in reliability research. Azm et al. [19] explored the estimation of unknown IWD parameters under competing risks using an adaptive type I progressive hybrid censoring scheme. They employed both maximum likelihood and Bayesian estimation methods, deriving asymptotic, bootstrap, and highest posterior density confidence intervals, and validated their approach with two real datasets. Bi and Gui [20] investigated stress–strength reliability estimation based on the IWD. They proposed an approximate maximum likelihood approach for point estimation and confidence interval construction, alongside Bayesian estimation and highest posterior density intervals derived via Gibbs sampling.
In another study, Alslman and Helu [21] estimated the stress–strength reliability for the IWD under adaptive type II progressive hybrid censoring. Shawky and Khan [22] employed maximum likelihood estimation for a multi-component IWD stress–strength model, confirming its effectiveness via Monte Carlo simulations. Ren et al. [23] demonstrated that Bayesian estimation outperforms other methods for the IWD under progressive type II censoring. However, statistical inference for the IWD under the JPC scheme has not yet been studied, and filling this gap is the aim of the present work.
This study applies the JPC scheme to estimate two independent IWD samples in scenarios where no explanatory variables are available. Point and interval estimators are constructed using maximum likelihood estimation (MLE) and Bayesian methods. Asymptotic confidence intervals (ACIs) are derived from the observed information matrix, while Bootstrap-P (Boot-P) and Bootstrap-T (Boot-T) methods yield bootstrap confidence intervals (CIs). Gamma priors are assumed for both shape and scale parameters, with Bayes estimates computed via the Metropolis–Hastings (M-H) algorithm under squared error (SE) and linear exponential (LINEX) loss functions. Monte Carlo simulations and a real-data analysis are conducted to assess the performance of these estimators.
The structure of the paper is as follows. Section 2 derives the maximum likelihood estimators for the IWD parameters. Section 3 discusses approximate confidence intervals based on these estimators, while Section 4 focuses on bootstrap-based confidence intervals. The Bayesian analysis is presented in Section 5, followed by the simulation results in Section 6. Section 7 applies the proposed methods to real datasets and Section 8 concludes the paper.

2. The Joint Progressive Type II Censor Scheme and Maximum Likelihood Estimation

2.1. Model Description

We begin by introducing the IWD, which is frequently employed to model lifetime data. The cumulative distribution function (CDF) of the IWD is given by
F(x) = e^{-\beta x^{-\alpha}}, \quad x > 0, \; \alpha, \beta > 0.
Figure 1 illustrates this CDF.
The associated probability density function (PDF) is
f(x) = \alpha \beta x^{-(\alpha + 1)} e^{-\beta x^{-\alpha}},
as shown in Figure 2, and it can be observed that the IWD exhibits a right-skewed, unimodal PDF with a heavy right tail, making it suitable for modeling extreme events and late-life failures.
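Since the CDF above inverts in closed form, IWD variates can be drawn by inverse-transform sampling. The following minimal Python sketch (the function names are ours, not the paper's) implements the CDF, PDF, and quantile function under the parameterization above:

```python
import math
import random

def iw_cdf(x, alpha, beta):
    """CDF of the inverse Weibull distribution: F(x) = exp(-beta * x**(-alpha))."""
    return math.exp(-beta * x ** (-alpha))

def iw_pdf(x, alpha, beta):
    """PDF: f(x) = alpha * beta * x**(-(alpha + 1)) * exp(-beta * x**(-alpha))."""
    return alpha * beta * x ** (-(alpha + 1)) * math.exp(-beta * x ** (-alpha))

def iw_quantile(u, alpha, beta):
    """Inverse CDF: solving exp(-beta * x**(-alpha)) = u gives x = (-beta / ln u)**(1/alpha)."""
    return (-beta / math.log(u)) ** (1.0 / alpha)

def iw_sample(n, alpha, beta, seed=0):
    """Draw n i.i.d. IWD(alpha, beta) variates by inverse-transform sampling."""
    rng = random.Random(seed)
    return [iw_quantile(rng.random(), alpha, beta) for _ in range(n)]
```

The quantile function follows directly from setting F(x) = u and solving for x, which is why the round trip F(F⁻¹(u)) = u holds exactly.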
This paper focuses on inference of the two-sample IWD under the JPC model. Following the idea of [24], for product A, let X 1 , X 2 , , X m denote the lifetimes of m units. These are assumed to be independent and identically distributed (i.i.d.) according to the IWD with a shape parameter α 1 and scale parameter β . Similarly, for product B, let Y 1 , Y 2 , , Y n be the lifetimes of n units, which are also i.i.d. following the IWD with a shape parameter α 2 and the same scale parameter β .
Let K = m + n be the total sample size, with λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_K representing the order statistics of the random variables {X_1, X_2, …, X_m; Y_1, Y_2, …, Y_n}. In the JPC method, after the first failure, R_1 units are randomly removed from the remaining K − 1 units. This procedure continues at the second failure, where R_2 units are randomly withdrawn from the remaining K − R_1 − 2 surviving units, and so forth. The process continues until exactly r failures have been observed; this termination criterion aligns with the definition of type II censoring. At the r-th failure, all remaining units are withdrawn, with the number of withdrawn units given by R_r = K − r − ∑_{i=1}^{r−1} R_i. The scheme is denoted by R = (R_1, R_2, …, R_r), where r is the predetermined number of observed failures.
Let R i = s i + t i for i = 1 , , r , where s i and t i denote the number of units withdrawn at the i-th failure for the X and Y samples, respectively. Both s i and t i are random variables with unknown values. The observed data consist of ( H , Λ , S ) , where H = ( h 1 , h 2 , , h r ) indicates the source of failure, with  h i = 1 if λ i corresponds to an X failure and h i = 0 if it corresponds to a Y failure. The vector Λ = ( λ 1 , λ 2 , , λ r ) contains the failure times, and  S = ( s 1 , s 2 , , s r ) represents the number of units withdrawn from the X sample at each failure time. A concise overview of this type II censoring method is provided in Figure 3.

2.2. Maximum Likelihood Estimation

The likelihood function can be expressed based on the joint progressive censoring scheme as follows:
L(\alpha_1, \alpha_2, \beta \mid H, \Lambda, S) = C \prod_{i=1}^{r} [f(\lambda_i)]^{h_i} [g(\lambda_i)]^{1-h_i} [\bar{F}(\lambda_i)]^{s_i} [\bar{G}(\lambda_i)]^{t_i},
where \bar{F} = 1 - F, \bar{G} = 1 - G, \sum_{i=1}^{r} s_i = m - m_r, \sum_{i=1}^{r} t_i = n - n_r (with m_r = \sum_{i=1}^{r} h_i and n_r = r - m_r denoting the numbers of observed X- and Y-failures), and C = D_1 D_2 with
D_1 = \prod_{j=1}^{r} \left[ \left( m - \sum_{i=1}^{j-1} h_i - \sum_{i=1}^{j-1} s_i \right) h_j + \left( n - \sum_{i=1}^{j-1} (1 - h_i) - \sum_{i=1}^{j-1} (R_i - s_i) \right) (1 - h_j) \right],
D_2 = \prod_{j=1}^{r} \binom{m - \sum_{i=1}^{j-1} h_i - \sum_{i=1}^{j-1} s_i}{s_j} \binom{n - \sum_{i=1}^{j-1} (1 - h_i) - \sum_{i=1}^{j-1} (R_i - s_i)}{t_j} \bigg/ \binom{m + n - j - \sum_{i=1}^{j-1} R_i}{R_j}.
Therefore, the likelihood function becomes
L(\alpha_1, \alpha_2, \beta \mid H, \Lambda, S) = C \prod_{i=1}^{r} \alpha_1^{h_i} \alpha_2^{1-h_i} \beta \, \lambda_i^{-(\alpha_1 + 1) h_i} \lambda_i^{-(\alpha_2 + 1)(1 - h_i)} e^{-\beta \lambda_i^{-\alpha_1} h_i} e^{-\beta \lambda_i^{-\alpha_2} (1 - h_i)} \left( 1 - e^{-\beta \lambda_i^{-\alpha_1}} \right)^{s_i} \left( 1 - e^{-\beta \lambda_i^{-\alpha_2}} \right)^{t_i}. \quad (1)
By taking the logarithm of both sides of Function (1), we obtain the following log-likelihood function:
\ln L = \ln C + m_r \ln \alpha_1 + n_r \ln \alpha_2 + r \ln \beta - \sum_{i=1}^{r} (\alpha_1 + 1) h_i \ln \lambda_i - \sum_{i=1}^{r} (\alpha_2 + 1)(1 - h_i) \ln \lambda_i - \sum_{i=1}^{r} h_i \beta \lambda_i^{-\alpha_1} - \sum_{i=1}^{r} (1 - h_i) \beta \lambda_i^{-\alpha_2} + \sum_{i=1}^{r} s_i \ln\left( 1 - e^{-\beta \lambda_i^{-\alpha_1}} \right) + \sum_{i=1}^{r} t_i \ln\left( 1 - e^{-\beta \lambda_i^{-\alpha_2}} \right). \quad (2)
To obtain the MLEs of α 1 , α 2 , β , we take the first derivative of the log-likelihood Function (2) with respect to each parameter as follows:
\frac{\partial \ln L}{\partial \alpha_1} = \frac{m_r}{\alpha_1} - \sum_{i=1}^{r} h_i \ln \lambda_i + \sum_{i=1}^{r} h_i \beta \lambda_i^{-\alpha_1} \ln \lambda_i - \sum_{i=1}^{r} \frac{s_i \beta \lambda_i^{-\alpha_1} e^{-\beta \lambda_i^{-\alpha_1}} \ln \lambda_i}{1 - e^{-\beta \lambda_i^{-\alpha_1}}} = 0, \quad (3)
\frac{\partial \ln L}{\partial \alpha_2} = \frac{n_r}{\alpha_2} - \sum_{i=1}^{r} (1 - h_i) \ln \lambda_i + \sum_{i=1}^{r} (1 - h_i) \beta \lambda_i^{-\alpha_2} \ln \lambda_i - \sum_{i=1}^{r} \frac{t_i \beta \lambda_i^{-\alpha_2} e^{-\beta \lambda_i^{-\alpha_2}} \ln \lambda_i}{1 - e^{-\beta \lambda_i^{-\alpha_2}}} = 0, \quad (4)
\frac{\partial \ln L}{\partial \beta} = \frac{r}{\beta} - \sum_{i=1}^{r} h_i \lambda_i^{-\alpha_1} - \sum_{i=1}^{r} (1 - h_i) \lambda_i^{-\alpha_2} + \sum_{i=1}^{r} \frac{s_i \lambda_i^{-\alpha_1} e^{-\beta \lambda_i^{-\alpha_1}}}{1 - e^{-\beta \lambda_i^{-\alpha_1}}} + \sum_{i=1}^{r} \frac{t_i \lambda_i^{-\alpha_2} e^{-\beta \lambda_i^{-\alpha_2}}}{1 - e^{-\beta \lambda_i^{-\alpha_2}}} = 0. \quad (5)
Remark 1.
MLEs are computed only for cases where 0 < m_r < r, as this ensures that the likelihood function contains useful information for parameter estimation.
Obtaining analytical solutions for these equations is challenging, which makes closed-form expressions for the parameters difficult to derive. Hence, parameter values are typically estimated using numerical techniques like the Newton–Raphson iteration.
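As a concrete illustration of the numerical step, the sketch below applies Newton–Raphson to the score equation for β alone, holding α_1 and α_2 fixed at known values; the simulated data use a scheme in which all removals occur at the final failure. Both simplifications, and all names, are our own, not the paper's general three-parameter iteration:

```python
import math
import random

def newton_beta(lam, h, s, t, a1, a2, beta0=1.0, tol=1e-10, iters=100):
    """Newton-Raphson for beta with alpha1, alpha2 held fixed, using the first
    and second derivatives of the JPC log-likelihood with respect to beta."""
    r, beta = len(lam), beta0
    for _ in range(iters):
        score, hess = r / beta, -r / beta ** 2
        for li, hi, si, ti in zip(lam, h, s, t):
            for ind, wd, a in ((hi, si, a1), (1 - hi, ti, a2)):
                w = li ** (-a)                      # lambda_i^(-alpha)
                score -= ind * w
                if wd:                              # removal terms (nonzero only at step r here)
                    e = math.exp(-beta * w)
                    score += wd * w * e / (1 - e)
                    hess -= wd * w ** 2 * e / (1 - e) ** 2
        step = score / hess
        beta -= step
        if beta <= 0:                               # safeguard: halve the previous value instead
            beta = (beta + step) / 2
        if abs(step) < tol:
            break
    return beta

# simulated joint sample: all removals deferred to the final (r-th) failure
rng = random.Random(7)
a1, a2, beta_true, m, n, r = 1.5, 2.0, 1.0, 30, 30, 40
draw = lambda a: (-beta_true / math.log(rng.random())) ** (1 / a)  # IWD variate
units = sorted([(draw(a1), 1) for _ in range(m)] + [(draw(a2), 0) for _ in range(n)])
lam = [u[0] for u in units[:r]]
h = [u[1] for u in units[:r]]
s = [0] * (r - 1) + [m - sum(h)]                    # X-units withdrawn at step r
t = [0] * (r - 1) + [n - (r - sum(h))]              # Y-units withdrawn at step r
beta_hat = newton_beta(lam, h, s, t, a1, a2)
```

Because the second derivative with respect to β is negative everywhere, the log-likelihood is concave in β, so the iteration converges to the same root from different starting values.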

3. Approximate Confidence Interval

Proposition 1.
The asymptotic normality of the MLEs allows us to compute the approximate 100 ( 1 η ) % ACIs for α 1 , α 2 , β as follows:
\left( \hat{\alpha}_1 - Z_{\eta/2} \sqrt{\mathrm{var}(\hat{\alpha}_1)}, \; \hat{\alpha}_1 + Z_{\eta/2} \sqrt{\mathrm{var}(\hat{\alpha}_1)} \right),
\left( \hat{\alpha}_2 - Z_{\eta/2} \sqrt{\mathrm{var}(\hat{\alpha}_2)}, \; \hat{\alpha}_2 + Z_{\eta/2} \sqrt{\mathrm{var}(\hat{\alpha}_2)} \right),
\left( \hat{\beta} - Z_{\eta/2} \sqrt{\mathrm{var}(\hat{\beta})}, \; \hat{\beta} + Z_{\eta/2} \sqrt{\mathrm{var}(\hat{\beta})} \right).
Here, Z η / 2 is the percentile of the standard normal distribution corresponding to a right-tail probability of η / 2 .
Proof. 
The asymptotic distribution of the MLE is utilized to derive confidence intervals for the unknown parameters α 1 , α 2 , and β . The second partial derivatives of the log-likelihood Function (2) are expressed as follows:
\frac{\partial^2 \ln L}{\partial \alpha_1^2} = -\frac{m_r}{\alpha_1^2} - \sum_{i=1}^{r} h_i \beta \lambda_i^{-\alpha_1} (\ln \lambda_i)^2 - \sum_{i=1}^{r} s_i \beta \lambda_i^{-\alpha_1} (\ln \lambda_i)^2 e^{-\beta \lambda_i^{-\alpha_1}} \frac{\beta \lambda_i^{-\alpha_1} - 1 + e^{-\beta \lambda_i^{-\alpha_1}}}{\left( 1 - e^{-\beta \lambda_i^{-\alpha_1}} \right)^2}, \quad (6)
\frac{\partial^2 \ln L}{\partial \alpha_2^2} = -\frac{n_r}{\alpha_2^2} - \sum_{i=1}^{r} (1 - h_i) \beta \lambda_i^{-\alpha_2} (\ln \lambda_i)^2 - \sum_{i=1}^{r} t_i \beta \lambda_i^{-\alpha_2} (\ln \lambda_i)^2 e^{-\beta \lambda_i^{-\alpha_2}} \frac{\beta \lambda_i^{-\alpha_2} - 1 + e^{-\beta \lambda_i^{-\alpha_2}}}{\left( 1 - e^{-\beta \lambda_i^{-\alpha_2}} \right)^2}, \quad (7)
\frac{\partial^2 \ln L}{\partial \beta^2} = -\frac{r}{\beta^2} - \sum_{i=1}^{r} \frac{s_i \lambda_i^{-2\alpha_1} e^{-\beta \lambda_i^{-\alpha_1}}}{\left( 1 - e^{-\beta \lambda_i^{-\alpha_1}} \right)^2} - \sum_{i=1}^{r} \frac{t_i \lambda_i^{-2\alpha_2} e^{-\beta \lambda_i^{-\alpha_2}}}{\left( 1 - e^{-\beta \lambda_i^{-\alpha_2}} \right)^2}, \quad (8)
\frac{\partial^2 \ln L}{\partial \alpha_1 \partial \beta} = \sum_{i=1}^{r} h_i \lambda_i^{-\alpha_1} \ln \lambda_i + \sum_{i=1}^{r} s_i \lambda_i^{-\alpha_1} e^{-\beta \lambda_i^{-\alpha_1}} \ln \lambda_i \left[ -\frac{1}{1 - e^{-\beta \lambda_i^{-\alpha_1}}} + \frac{\beta \lambda_i^{-\alpha_1}}{\left( 1 - e^{-\beta \lambda_i^{-\alpha_1}} \right)^2} \right], \quad (9)
\frac{\partial^2 \ln L}{\partial \alpha_2 \partial \beta} = \sum_{i=1}^{r} (1 - h_i) \lambda_i^{-\alpha_2} \ln \lambda_i + \sum_{i=1}^{r} t_i \lambda_i^{-\alpha_2} e^{-\beta \lambda_i^{-\alpha_2}} \ln \lambda_i \left[ -\frac{1}{1 - e^{-\beta \lambda_i^{-\alpha_2}}} + \frac{\beta \lambda_i^{-\alpha_2}}{\left( 1 - e^{-\beta \lambda_i^{-\alpha_2}} \right)^2} \right], \quad (10)
\frac{\partial^2 \ln L}{\partial \alpha_1 \partial \alpha_2} = \frac{\partial^2 \ln L}{\partial \alpha_2 \partial \alpha_1} = 0. \quad (11)
Taking the expectation of the negative second derivatives from Equations (6)–(11), the Fisher information matrix (FIM) is defined as I_{ij} = E[−∂²ln L/(∂δ_i ∂δ_j)], where i, j = 1, 2, 3 and (δ_1, δ_2, δ_3) = (α_1, α_2, β). The observed information matrix is computed by replacing each expected value, such as E[−∂²ln L/∂α_1²], with the corresponding derivative evaluated at the MLEs, e.g., −∂²ln L/∂α_1² |_{α_1 = α̂_1}. Therefore, the observed Fisher information is given by the following:
I(\hat{\alpha}_1, \hat{\alpha}_2, \hat{\beta}) = \left. \begin{pmatrix} -\frac{\partial^2 \ln L}{\partial \alpha_1^2} & -\frac{\partial^2 \ln L}{\partial \alpha_1 \partial \alpha_2} & -\frac{\partial^2 \ln L}{\partial \alpha_1 \partial \beta} \\ -\frac{\partial^2 \ln L}{\partial \alpha_2 \partial \alpha_1} & -\frac{\partial^2 \ln L}{\partial \alpha_2^2} & -\frac{\partial^2 \ln L}{\partial \alpha_2 \partial \beta} \\ -\frac{\partial^2 \ln L}{\partial \beta \partial \alpha_1} & -\frac{\partial^2 \ln L}{\partial \beta \partial \alpha_2} & -\frac{\partial^2 \ln L}{\partial \beta^2} \end{pmatrix} \right|_{\alpha_1 = \hat{\alpha}_1, \, \alpha_2 = \hat{\alpha}_2, \, \beta = \hat{\beta}}.
The covariance matrix C o v ( α ^ 1 , α ^ 2 , β ^ ) can be calculated by inverting the FIM. Therefore, the asymptotic variance–covariance matrix for the MLEs is obtained by inverting the observed Fisher information, as shown below:
\mathrm{Cov}(\hat{\alpha}_1, \hat{\alpha}_2, \hat{\beta}) = I^{-1}(\hat{\alpha}_1, \hat{\alpha}_2, \hat{\beta}) = \begin{pmatrix} \mathrm{var}(\hat{\alpha}_1) & \mathrm{cov}(\hat{\alpha}_1, \hat{\alpha}_2) & \mathrm{cov}(\hat{\alpha}_1, \hat{\beta}) \\ \mathrm{cov}(\hat{\alpha}_2, \hat{\alpha}_1) & \mathrm{var}(\hat{\alpha}_2) & \mathrm{cov}(\hat{\alpha}_2, \hat{\beta}) \\ \mathrm{cov}(\hat{\beta}, \hat{\alpha}_1) & \mathrm{cov}(\hat{\beta}, \hat{\alpha}_2) & \mathrm{var}(\hat{\beta}) \end{pmatrix}.
   □
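The mechanics of the ACI construction are easiest to see in a simplified case with closed forms. Assuming a complete sample with the shape α known (our simplification, not the paper's joint scheme), the MLE is β̂ = n/Σx_i^{−α}, the observed information for β is n/β̂², and the 95% ACI is β̂ ± 1.96 β̂/√n. A minimal sketch under those assumptions:

```python
import math
import random

def aci_beta(x, alpha, z=1.96):
    """95% asymptotic CI for beta in the simplified case: complete sample,
    shape alpha known, beta_hat = n / sum(x_i^(-alpha)), observed
    information n / beta_hat^2."""
    n = len(x)
    beta_hat = n / sum(v ** (-alpha) for v in x)
    se = beta_hat / math.sqrt(n)        # sqrt of inverse observed information
    return beta_hat, (beta_hat - z * se, beta_hat + z * se)

# simulated complete IWD sample via inverse-transform sampling
rng = random.Random(3)
alpha, beta_true = 1.5, 1.0
x = [(-beta_true / math.log(rng.random())) ** (1 / alpha) for _ in range(200)]
beta_hat, (lo, hi) = aci_beta(x, alpha)
```

In the joint three-parameter setting of this paper, the scalar 1/se² above is replaced by the full 3 × 3 observed information matrix, and the variances come from its inverse.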

4. Bootstrap Confidence Intervals

Bootstrap confidence intervals are an effective technique for measuring uncertainty in statistical analysis, especially when the underlying data distribution is complex or unknown. This method works by repeatedly resampling the observed data with replacement to create several “bootstrap samples”. Each of these samples generates a new estimate of the parameter of interest. By studying the spread and distribution of these estimates, bootstrap confidence intervals provide a range that likely contains the true value of the parameter. One of the key strengths of the bootstrap method is its flexibility, as it does not rely on assumptions about the specific form of the data distribution. This makes it particularly useful in real-world applications, where data may not follow standard parametric distributions.
We present bootstrap methods [25,26], which are particularly effective for small sample sizes. The steps outlined in Algorithms 1 and 2 are used to construct the 100(1 − η)% Boot-P and Boot-T CIs for α_1, α_2, and β.
Algorithm 1 Generation process of Boot-P CIs
1:
Based on the original JPC sample Λ = (λ_1, λ_2, …, λ_r), compute the MLEs of the parameters α_1, α_2, β from Equations (3)–(5).
2:
Use the estimated values α̂_1, α̂_2, β̂ to generate a bootstrap JPC sample, denoted Λ*, under the same censoring scheme.
3:
Calculate the bootstrap estimates of α_1, α_2, and β from the bootstrap sample, denoted α̂_1*, α̂_2*, and β̂*.
4:
Repeat steps 2 and 3 N times to obtain α̂*_{j1}, α̂*_{j2}, …, α̂*_{jN} and β̂*_1, β̂*_2, …, β̂*_N, where j = 1, 2.
5:
Sort α̂*_{ji} (i = 1, 2, …, N) and β̂*_i (i = 1, 2, …, N) in ascending order as α̂*_{(j1)}, α̂*_{(j2)}, …, α̂*_{(jN)} and β̂*_{(1)}, β̂*_{(2)}, …, β̂*_{(N)}, where j = 1, 2.
6:
The approximate Boot-P 100(1 − η)% CIs of α_1, α_2, and β are given by
\left( \hat{\alpha}^*_{(j[N(\eta/2)])}, \; \hat{\alpha}^*_{(j[N(1-\eta/2)])} \right), \quad j = 1, 2,
\left( \hat{\beta}^*_{([N(\eta/2)])}, \; \hat{\beta}^*_{([N(1-\eta/2)])} \right).
The symbol [·] denotes the floor function, which rounds down to the largest integer less than or equal to the given value.
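The percentile mechanics of Algorithm 1 can be sketched in a simplified one-parameter setting (complete sample, shape known, so β̂ = n/Σx_i^{−α} has a closed form; this reduction and all names are illustrative assumptions rather than the paper's full JPC procedure):

```python
import math
import random

def boot_p_ci(x, alpha, N=1000, eta=0.05, seed=0):
    """Parametric Boot-P CI for beta (complete sample, alpha known):
    refit beta on N samples drawn from IW(alpha, beta_hat), then take the
    floor-indexed empirical eta/2 and 1 - eta/2 percentiles."""
    rng = random.Random(seed)
    n = len(x)
    beta_hat = n / sum(v ** (-alpha) for v in x)
    boots = []
    for _ in range(N):
        # inverse-transform draw from IW(alpha, beta_hat), then refit beta
        xs = [(-beta_hat / math.log(rng.random())) ** (1 / alpha) for _ in range(n)]
        boots.append(n / sum(v ** (-alpha) for v in xs))
    boots.sort()
    lo, hi = boots[int(N * eta / 2)], boots[int(N * (1 - eta / 2)) - 1]
    return beta_hat, (lo, hi)

rng0 = random.Random(11)
x = [(-1.0 / math.log(rng0.random())) ** (1 / 1.5) for _ in range(50)]
beta_hat, (lo, hi) = boot_p_ci(x, 1.5)
```

The sorted resampled estimates play the role of α̂*_{(j1)}, …, α̂*_{(jN)} above; only the ordered-statistic indexing carries over unchanged to the three-parameter JPC case.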
Algorithm 2 Generation process of Boot-T CIs
1:
Perform steps 1 to 3 of the Boot-P procedure (Algorithm 1) to obtain bootstrap estimates α̂_1*, α̂_2*, and β̂*.
2:
Compute the Boot-T statistics T*_{α̂_1}, T*_{α̂_2}, and T*_{β̂} as follows:
T^*_{\hat{\alpha}_j} = \frac{\hat{\alpha}^*_j - \hat{\alpha}_j}{\sqrt{\mathrm{var}(\hat{\alpha}^*_j)}}, \quad T^*_{\hat{\beta}} = \frac{\hat{\beta}^* - \hat{\beta}}{\sqrt{\mathrm{var}(\hat{\beta}^*)}}, \quad j = 1, 2,
where the variances are the diagonal elements of Cov(α̂_1*, α̂_2*, β̂*), obtained from the observed information matrix of Section 3 evaluated at the bootstrap estimates.
3:
Repeat steps 1 and 2 N times to obtain T*_{α̂_j,1}, T*_{α̂_j,2}, …, T*_{α̂_j,N} and T*_{β̂,1}, T*_{β̂,2}, …, T*_{β̂,N}, where j = 1, 2.
4:
Sort T*_{α̂_j,i} and T*_{β̂,i} (i = 1, 2, …, N) in ascending order as T*_{α̂_j,(1)}, T*_{α̂_j,(2)}, …, T*_{α̂_j,(N)} and T*_{β̂,(1)}, T*_{β̂,(2)}, …, T*_{β̂,(N)}, where j = 1, 2.
5:
The approximate Boot-T 100(1 − η)% CIs for α_1, α_2, and β are given by
\left( \hat{\alpha}_j + T^*_{\hat{\alpha}_j, ([N(\eta/2)])} \sqrt{\mathrm{var}(\hat{\alpha}_j)}, \; \hat{\alpha}_j + T^*_{\hat{\alpha}_j, ([N(1-\eta/2)])} \sqrt{\mathrm{var}(\hat{\alpha}_j)} \right), \quad j = 1, 2,
\left( \hat{\beta} + T^*_{\hat{\beta}, ([N(\eta/2)])} \sqrt{\mathrm{var}(\hat{\beta})}, \; \hat{\beta} + T^*_{\hat{\beta}, ([N(1-\eta/2)])} \sqrt{\mathrm{var}(\hat{\beta})} \right).
The symbol [·] denotes the floor function, which rounds down to the largest integer less than or equal to the given value.
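The studentized variant of Algorithm 2 differs only in that each bootstrap estimate is centered and scaled by its own standard error before the percentiles are taken. In the same simplified known-shape, complete-sample setting used for Boot-P (our reduction, not the paper's JPC case), the information-based standard error is se(β̂*) = β̂*/√n, giving this sketch:

```python
import math
import random

def boot_t_ci(x, alpha, N=1000, eta=0.05, seed=0):
    """Parametric Boot-T CI for beta (complete sample, alpha known).
    Each resampled estimate is studentized by its own standard error
    se(beta*) = beta* / sqrt(n) before the percentiles are taken."""
    rng = random.Random(seed)
    n = len(x)
    beta_hat = n / sum(v ** (-alpha) for v in x)
    se_hat = beta_hat / math.sqrt(n)
    ts = []
    for _ in range(N):
        xs = [(-beta_hat / math.log(rng.random())) ** (1 / alpha) for _ in range(n)]
        b = n / sum(v ** (-alpha) for v in xs)
        ts.append((b - beta_hat) / (b / math.sqrt(n)))   # studentized statistic
    ts.sort()
    lo = beta_hat + ts[int(N * eta / 2)] * se_hat
    hi = beta_hat + ts[int(N * (1 - eta / 2)) - 1] * se_hat
    return beta_hat, (lo, hi)

rng0 = random.Random(11)
x = [(-1.0 / math.log(rng0.random())) ** (1 / 1.5) for _ in range(50)]
beta_hat, (lo, hi) = boot_t_ci(x, 1.5)
```

Studentizing lets the interval adapt to skewness in the sampling distribution of β̂, which is why Boot-T often outperforms the plain percentile interval for small samples.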

5. Bayesian Inference

5.1. Prior and Posterior Distribution

We assume that the parameters α_1, α_2, and β of the IWD follow independent gamma priors with hyperparameters (a_1, b_1), (a_2, b_2), and (c, d), respectively. Therefore, their joint prior distribution is expressed as
\pi(\alpha_1, \alpha_2, \beta) \propto \alpha_1^{a_1 - 1} e^{-b_1 \alpha_1} \, \alpha_2^{a_2 - 1} e^{-b_2 \alpha_2} \, \beta^{c - 1} e^{-d \beta}.
The joint posterior distribution can be formulated as
\pi(\alpha_1, \alpha_2, \beta \mid H, \Lambda, S) = \frac{\pi(\alpha_1, \alpha_2, \beta) \, L(\alpha_1, \alpha_2, \beta \mid H, \Lambda, S)}{\int_0^\infty \int_0^\infty \int_0^\infty \pi(\alpha_1, \alpha_2, \beta) \, L(\alpha_1, \alpha_2, \beta \mid H, \Lambda, S) \, d\alpha_1 \, d\alpha_2 \, d\beta},
where L(α_1, α_2, β | H, Λ, S) is the likelihood function and H, Λ, and S are, respectively, the failure-source indicators, the failure times, and the X-sample withdrawal counts defined in Section 2.1.
Moreover, using the joint prior distribution and the likelihood function (1), the joint posterior distribution is given by
\pi(\alpha_1, \alpha_2, \beta \mid H, \Lambda, S) \propto \alpha_1^{m_r + a_1 - 1} \alpha_2^{n_r + a_2 - 1} \beta^{r + c - 1} \exp\Bigg\{ -\alpha_1 \sum_{i=1}^{r} h_i \ln \lambda_i - b_1 \alpha_1 - \alpha_2 \sum_{i=1}^{r} (1 - h_i) \ln \lambda_i - b_2 \alpha_2 - \beta \sum_{i=1}^{r} h_i \lambda_i^{-\alpha_1} - \beta \sum_{i=1}^{r} (1 - h_i) \lambda_i^{-\alpha_2} - d\beta + \sum_{i=1}^{r} s_i \ln\left( 1 - e^{-\beta \lambda_i^{-\alpha_1}} \right) + \sum_{i=1}^{r} t_i \ln\left( 1 - e^{-\beta \lambda_i^{-\alpha_2}} \right) \Bigg\}. \quad (13)

5.2. Loss Functions

In this subsection, we emphasize the significance of selecting appropriate loss functions for parameter estimation within Bayesian inference. Specifically, we consider two types of loss functions: the SE loss function and the asymmetric LINEX loss function.
The SE loss function is represented by
\xi_{SE}(\Delta) \propto \Delta^2 = [u(\Phi) - \hat{u}(\Phi)]^2.
In this equation, Δ = u ^ ( Φ ) u ( Φ ) , u ( Φ ) is any function of Φ = ( α 1 , α 2 , β ) , and  u ^ ( Φ ) represents the SE estimate of u ( Φ ) . Let u ^ ( Φ ) S E denote the Bayesian estimate of u ( Φ ) under the SE function, which is given by
\hat{u}(\Phi)_{SE} = E[u(\Phi) \mid H, \Lambda, S] = \int_0^\infty \int_0^\infty \int_0^\infty u(\Phi) \, \pi(\Phi \mid H, \Lambda, S) \, d\alpha_1 \, d\alpha_2 \, d\beta.
The SE loss function is widely used in the literature and is one of the most common loss functions. It is symmetric, treating overestimation and underestimation of parameters equally. However, in life-testing situations, one type of estimation error may have more severe consequences. To address this, we use the LINEX loss function, defined as follows:
\xi_{LINEX}(\Delta) \propto e^{a\Delta} - a\Delta - 1, \quad a \neq 0.
Here, Δ is as previously defined, and  u ^ ( Φ ) represents the LINEX estimate of u ( Φ ) .
The shape parameter a controls the direction and degree of asymmetry in the loss function. When a > 0 , overestimation is penalized more heavily than underestimation, whereas the reverse is true when a < 0 . For values of a near zero, the LINEX loss function becomes similar to the SE loss function. When a = 1 , the function is highly asymmetric, with overestimation costing more than underestimation. If  a < 0 , the loss increases almost exponentially for Δ = u ^ ( Φ ) u ( Φ ) < 0 and decreases nearly linearly for Δ = u ^ ( Φ ) u ( Φ ) > 0 .
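The asymmetry described above is easy to verify numerically; this illustrative snippet evaluates the loss at Δ = ±1 for a = 1, where overestimation costs roughly twice as much as underestimation:

```python
import math

def linex_loss(delta, a):
    """LINEX loss: exp(a * delta) - a * delta - 1, defined for a != 0."""
    return math.exp(a * delta) - a * delta - 1.0

over = linex_loss(1.0, 1.0)     # overestimation by 1: e - 2, about 0.718
under = linex_loss(-1.0, 1.0)   # underestimation by 1: 1/e, about 0.368
```

Swapping the sign of a reverses the direction of the penalty, which is the sense in which the parameter a controls the asymmetry.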
The Bayes estimate u ^ ( Φ ) L I N E X is obtained to be
\hat{u}(\Phi)_{LINEX} = -\frac{1}{a} \ln E\left[ e^{-a u(\Phi)} \mid H, \Lambda, S \right] = -\frac{1}{a} \ln \int_0^\infty \int_0^\infty \int_0^\infty e^{-a u(\Phi)} \pi(\alpha_1, \alpha_2, \beta \mid H, \Lambda, S) \, d\alpha_1 \, d\alpha_2 \, d\beta.
We found that analytical solutions are not feasible for this problem. To address this, the Metropolis–Hastings algorithm is used to approximate explicit formulas for u ^ ( Φ ) S E and u ^ ( Φ ) L I N E X , as well as to construct the associated confidence intervals.

5.3. Metropolis–Hastings Algorithm

Proposition 2.
The Bayes estimates based on the SE loss function are as follows:
\hat{\alpha}_{1,SE} = \frac{1}{N - N_0} \sum_{i=N_0+1}^{N} \alpha_1^{(i)},
\hat{\alpha}_{2,SE} = \frac{1}{N - N_0} \sum_{i=N_0+1}^{N} \alpha_2^{(i)},
\hat{\beta}_{SE} = \frac{1}{N - N_0} \sum_{i=N_0+1}^{N} \beta^{(i)},
and the Bayes estimates based on the LINEX loss function are
\hat{\alpha}_{1,LINEX} = -\frac{1}{a} \ln\left[ \frac{1}{N - N_0} \sum_{i=N_0+1}^{N} e^{-a \alpha_1^{(i)}} \right],
\hat{\alpha}_{2,LINEX} = -\frac{1}{a} \ln\left[ \frac{1}{N - N_0} \sum_{i=N_0+1}^{N} e^{-a \alpha_2^{(i)}} \right],
\hat{\beta}_{LINEX} = -\frac{1}{a} \ln\left[ \frac{1}{N - N_0} \sum_{i=N_0+1}^{N} e^{-a \beta^{(i)}} \right],
where N denotes the total MCMC sample size and N 0 is the number of burn-in iterative values.
Proof. 
From Function (13), the conditional posterior density function of α 1 , α 2 , β can be obtained as the following proportionality:
\pi_1^*(\alpha_1 \mid \beta, H, \Lambda, S) \propto \alpha_1^{m_r + a_1 - 1} \exp\left\{ -\alpha_1 \sum_{i=1}^{r} h_i \ln \lambda_i - b_1 \alpha_1 - \beta \sum_{i=1}^{r} h_i \lambda_i^{-\alpha_1} + \sum_{i=1}^{r} s_i \ln\left( 1 - e^{-\beta \lambda_i^{-\alpha_1}} \right) \right\}, \quad (14)
\pi_2^*(\alpha_2 \mid \beta, H, \Lambda, S) \propto \alpha_2^{n_r + a_2 - 1} \exp\left\{ -\alpha_2 \sum_{i=1}^{r} (1 - h_i) \ln \lambda_i - b_2 \alpha_2 - \beta \sum_{i=1}^{r} (1 - h_i) \lambda_i^{-\alpha_2} + \sum_{i=1}^{r} t_i \ln\left( 1 - e^{-\beta \lambda_i^{-\alpha_2}} \right) \right\}, \quad (15)
\pi_3^*(\beta \mid \alpha_1, \alpha_2, H, \Lambda, S) \propto \beta^{r + c - 1} \exp\left\{ -\beta \sum_{i=1}^{r} h_i \lambda_i^{-\alpha_1} - \beta \sum_{i=1}^{r} (1 - h_i) \lambda_i^{-\alpha_2} - d\beta + \sum_{i=1}^{r} s_i \ln\left( 1 - e^{-\beta \lambda_i^{-\alpha_1}} \right) + \sum_{i=1}^{r} t_i \ln\left( 1 - e^{-\beta \lambda_i^{-\alpha_2}} \right) \right\}. \quad (16)
It is crucial to highlight that the conditional posterior distributions for α 1 , α 2 , and β , as given in Functions (14)–(16), do not reduce to any familiar standard forms. This complexity makes direct sampling using conventional methods rather challenging. However, these distributions appear to resemble the normal distribution.
The proposed distribution of π α 1 , α 2 , β | H , Λ , S is assumed to be multivariate normal, and the MCMC sample is generated using the Metropolis–Hastings method outlined in Algorithm 3.    □
Sort the retained draws α_j^{(i)} and β^{(i)} in ascending order to obtain {α_{j(1)}, α_{j(2)}, …, α_{j(N−N_0)}} and {β_{(1)}, β_{(2)}, …, β_{(N−N_0)}}. With that, the 100(1 − η)% credible intervals of the unknown parameters are constructed as
\left( \alpha_{j([(N-N_0)(\eta/2)])}, \; \alpha_{j([(N-N_0)(1-\eta/2)])} \right) \quad \text{and} \quad \left( \beta_{([(N-N_0)(\eta/2)])}, \; \beta_{([(N-N_0)(1-\eta/2)])} \right), \quad j = 1, 2.
The shortest of all intervals containing [(N − N_0)(1 − η)] consecutive ordered draws is the highest posterior density credible interval.
Algorithm 3 Generating samples following the posterior distribution.
1:
Choose initial values of ( α 1 , α 2 , β ) as ( α 1 ( 0 ) , α 2 ( 0 ) , β ( 0 ) ) .
2:
Generate candidate values (α_1′, α_2′, β′) from the proposal normal distributions N(α_1^{(i−1)}, σ_1²), N(α_2^{(i−1)}, σ_2²), and N(β^{(i−1)}, σ_3²). Here, σ_1², σ_2², and σ_3² represent the variances of the MLEs of α_1, α_2, and β, respectively, which are obtained from the diagonal elements of the inverse Fisher information matrix.
3:
Calculate
P_1^* = \frac{\pi_1^*(\alpha_1' \mid \beta^{(i-1)}, H, \Lambda, S)}{\pi_1^*(\alpha_1^{(i-1)} \mid \beta^{(i-1)}, H, \Lambda, S)},
P_2^* = \frac{\pi_2^*(\alpha_2' \mid \beta^{(i-1)}, H, \Lambda, S)}{\pi_2^*(\alpha_2^{(i-1)} \mid \beta^{(i-1)}, H, \Lambda, S)},
and
P_3^* = \frac{\pi_3^*(\beta' \mid \alpha_1, \alpha_2, H, \Lambda, S)}{\pi_3^*(\beta^{(i-1)} \mid \alpha_1, \alpha_2, H, \Lambda, S)}.
4:
Compute the acceptance probability P 1 = min { 1 , P 1 * } , P 2 = min { 1 , P 2 * } , and  P 3 = min { 1 , P 3 * } .
5:
Generate samples u_1, u_2, and u_3 from the uniform distribution U(0, 1). If u_1 ≤ P_1, accept α_1^{(i)} = α_1′; otherwise, set α_1^{(i)} = α_1^{(i−1)}. The same applies to α_2^{(i)} and β^{(i)} with u_2 and u_3.
6:
Repeat Steps 2–5 for N iterations to obtain enough samples.
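A minimal one-parameter sketch of the idea behind Algorithm 3: with the shape α known and a complete sample, the β-posterior under a Gamma(c, d) prior is exactly Gamma(n + c, d + Σx_i^{−α}), so a random-walk Metropolis–Hastings run can be checked against the analytic posterior mean, and the SE and LINEX estimates of Proposition 2 become averages over the retained draws. The reduction to one parameter and all names are our own illustrative choices:

```python
import math
import random

def mh_beta(x, alpha, c=2.0, d=2.0, N=6000, N0=1000, a=1.0, seed=5):
    """Random-walk M-H for beta (alpha known, complete sample, Gamma(c, d) prior).
    Returns the SE and LINEX Bayes estimates plus the exact posterior mean."""
    rng = random.Random(seed)
    n, S = len(x), sum(v ** (-alpha) for v in x)

    def log_post(b):                          # log posterior kernel, b > 0
        return (n + c - 1) * math.log(b) - (d + S) * b

    beta, sigma = 1.0, 1.0 / math.sqrt(n)     # start value and proposal scale
    draws = []
    for i in range(N):
        prop = rng.gauss(beta, sigma)
        if prop > 0 and math.log(rng.random()) < log_post(prop) - log_post(beta):
            beta = prop                       # accept the proposed move
        if i >= N0:                           # discard the burn-in draws
            draws.append(beta)
    se_est = sum(draws) / len(draws)
    linex_est = -math.log(sum(math.exp(-a * b) for b in draws) / len(draws)) / a
    return se_est, linex_est, (n + c) / (d + S)   # exact Gamma posterior mean

rng = random.Random(9)
alpha, beta_true = 1.5, 1.0
x = [(-beta_true / math.log(rng.random())) ** (1 / alpha) for _ in range(50)]
se_est, linex_est, exact_mean = mh_beta(x, alpha)
```

With a > 0, the LINEX estimate is strictly smaller than the posterior mean by Jensen's inequality, matching the heavier penalty on overestimation.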

6. Simulation Study

In this section, we assess the performance of the methods by simulating parameter estimates across different JPC schemes. The simulation follows a structured approach, including data generation, parameter estimation, and result evaluation. We reference the approach outlined in [27].
First, we drew samples from two populations, namely, population A and population B, which follow the distributions I W ( α 1 , β ) and I W ( α 2 , β ) , respectively. The true values of the parameters are set as α 1 = 1.5 , α 2 = 2 , and β = 1 . For the sample sizes ( m , n ) , we select the combinations ( 20 , 20 ) , ( 20 , 30 ) , ( 30 , 30 ) , ( 30 , 40 ) , and ( 40 , 50 ) , while the numbers of failures r are chosen as 20, 30, 40, 50, and 60. The procedure for generating JPC data is described in Algorithm 4.
Algorithm 4 The algorithm to generate samples under the JPC scheme.
1:
Generate m samples for population A from I W ( α 1 , β ) and n samples for population B from I W ( α 2 , β ) . Combine these samples and sort them in ascending order.
2:
At the i-th failure, the smallest value of the joint samples is noted as λ i . Then, it is determined whether this unit belongs to population A. If so, h i is set to 1; otherwise, it is set to 0.
3:
Given that R i is predetermined, randomly assign values to s i and t i under the condition s i + t i = R i . Then, s i units are randomly selected from population A and t i units from population B, respectively.
4:
Store the data as ( λ i , h i , s i , t i ) and reconstruct the joint samples using the remaining information.
5:
Repeat Steps 2–4 until the k-th failure occurs, where k = r is the target number of failures. There are k_1 = ∑_{i=1}^{k} h_i failures from population A and k_2 = k − k_1 from population B. At this point, withdraw all remaining units, recording s_k = m − ∑_{i=1}^{k−1} s_i − k_1 and t_k = n − ∑_{i=1}^{k−1} t_i − k_2.
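The steps of Algorithm 4 can be sketched as follows; the helper names and the uniformly random split of each R_i between the two populations are our own illustrative choices:

```python
import math
import random

def jpc_sample(x, y, R, seed=0):
    """Generate a JPC sample from lifetimes x (population A) and y (population B)
    under removal scheme R; returns (lambda_i, h_i, s_i, t_i) for each failure."""
    rng = random.Random(seed)
    pool = sorted([(v, 1) for v in x] + [(v, 0) for v in y])    # (lifetime, A-flag)
    data = []
    for i, Ri in enumerate(R):
        lam, h = pool.pop(0)                  # smallest remaining lifetime fails
        if i == len(R) - 1:
            Ri = len(pool)                    # final step: withdraw all survivors
        nx = sum(f for _, f in pool)          # surviving A units
        ny = len(pool) - nx                   # surviving B units
        si = rng.randint(max(0, Ri - ny), min(Ri, nx))   # random split s_i + t_i = R_i
        ti = Ri - si
        ax = [j for j, (_, f) in enumerate(pool) if f == 1]
        by = [j for j, (_, f) in enumerate(pool) if f == 0]
        drop = set(rng.sample(ax, si) + rng.sample(by, ti))
        pool = [u for j, u in enumerate(pool) if j not in drop]
        data.append((lam, h, si, ti))
    return data

rng = random.Random(2)
q = lambda a: (-1.0 / math.log(rng.random())) ** (1 / a)        # IW(alpha, 1) draw
m, n, r = 10, 10, 12
x = [q(1.5) for _ in range(m)]
y = [q(2.0) for _ in range(n)]
data = jpc_sample(x, y, [0] * (r - 1) + [m + n - r])
```

By construction, the observed failures plus all withdrawals account for every one of the m + n units, which is the bookkeeping identity used in Step 5.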
Once the censored data are obtained, the MLEs of α_1, α_2, and β are calculated by maximizing the likelihood in Function (1); we then present the mean values and MSEs of the MLEs. The bootstrap method is used to compute the Boot-P and Boot-T CIs for the parameters. The experiment was carried out using R, with the code written in RStudio version 2024.12.0+467. The RStudio environment was run on Windows 10 (64-bit), and the experimental computer was equipped with an AMD Ryzen 7 5800H with Radeon Graphics and an NVIDIA GeForce GTX 1650 graphics card.
Here, R = (0^{(19)}, 20) means R_1 = R_2 = ⋯ = R_{19} = 0, R_{20} = 20. For interval estimates, we construct 95% ACIs, Boot-T CIs, and Boot-P CIs. Then, based on 1000 repetitions, we calculate the average lengths (ALs) and coverage probabilities (CPs) of these intervals. Table 1 and Table 2 demonstrate that the average estimates (AMLEs) of α_1 and α_2 exhibit bias for small sample sizes m + n. As the sample size increases, the estimates progressively converge to the true values. Table 3 indicates that the MSEs of β̂ are generally smaller than those of α̂_1 and α̂_2 in most cases. The ALs of the approximate confidence intervals are similar to those of the two bootstrap confidence intervals. Since the coverage probability of the bootstrap intervals consistently reaches 100%, the results are not included in the table.
Under the Bayesian framework, we compute both the parameter estimates and their corresponding MSEs using the SE and LINEX loss functions, employing either gamma informative priors (IPs) or non-informative priors (NIPs).
To compare the performance of these Bayesian estimates, the hyperparameters for the IPs are set as (a_1, a_2, b_1, b_2, c, d) = (15, 10, 10.5, 5, 10, 10). We select these hyperparameters to ensure that the prior expectations of the two populations align closely with the true values. For the NIPs, we assign a_1 = a_2 = b_1 = b_2 = c = d = 10^{−5}.
Table 4 and Table 5 present the average Bayesian estimates (ABEs), MSEs, and ALs of the 95% credible intervals for α_1 under both IPs and NIPs. The corresponding results for α_2 and β are provided in Appendix A, in Table A1, Table A2, Table A3 and Table A4. In particular, the MSEs for α̂_1 remain consistently the lowest among the unknown parameters. Moreover, the Bayesian estimates under IPs tend to slightly underestimate the parameters, while those under NIPs tend to slightly overestimate them. For the LINEX loss function with a = 2, the credible intervals have the shortest average lengths. Additionally, the ALs of the confidence intervals are quite similar across all three loss functions. Furthermore, under both IPs and NIPs, the CPs for α_1 are consistently close to the nominal level of 95%, indicating reliable interval estimation for this parameter. However, the CPs for α_2 and β remain around 80%, suggesting that the current approach may not fully capture the uncertainty associated with these parameters. Future research is anticipated to explore appropriate methodologies to enhance the CPs, ensuring more robust and reliable interval estimation.
Histograms, cumulative mean plots, and estimation plots of the parameters produced by the MCMC method under gamma IPs and NIPs are shown in Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9 (taking ( m , n ) = ( 40 , 50 ) , R = ( 0 ( 49 ) , 40 ) as an example). These plots demonstrate good convergence behavior, as evidenced by the stable cumulative means and well-mixed histograms.

7. Real-Data Analysis

7.1. Example 1

The following section presents an analysis of real data obtained from Balakrishnan et al. [28], which documents the breakdown time of electrical current under varying voltage levels. A detailed statistical examination of these data was later conducted by Ding and Gui [29]. For clearer demonstration, this study focuses on breakdown times recorded at voltage levels of 32 kV and 34 kV. The corresponding data are summarized in Table 6 for reference.
The goodness of fit of the samples to the IWD is assessed using the Kolmogorov–Smirnov (K-S) test. For the first sample, the test yields a p-value of 0.2391, while the second sample produces a p-value of 0.5941. The test results are visually illustrated in Figure 10.
Using the datasets above, we generated JPC samples based on the censoring scheme. Suppose the first sample has size m = 15 and the second has size n = 19. Implementing JPC with total sample size K = m + n and setting r = 10, 19, with R = (0^{(8)}, 12, 12), (0^{(17)}, 14, 1), and (0^{(17)}, 10, 5), we proceed with the analysis.
The estimation results under different censoring schemes are presented in Table A5. The MLEs, MSEs, and AL for α 1 , α 2 , and β demonstrate variations across the censoring schemes, with noticeable differences in the estimated values and corresponding MSEs. In particular, the estimates of α 1 and α 2 show sensitivity to the censoring structure, while β remains relatively stable.
Table A6 and Table A7 in Appendix A further examine the influence of prior settings on parameter estimation, comparing the IP and NIP approaches. The hyperparameters for IPs are chosen based on the MLE results, ensuring that the prior mean approximates the MLE estimates. Table A6 presents the results under IPs with a 1 = 4 , b 1 = 10.5 , a 2 = 10 , b 2 = 20 , c = 18 , and d = 10 . In contrast, Table A7 in Appendix A reports the results using NIPs with a 1 = b 1 = a 2 = b 2 = c = d = 10 5 .
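One simple way to implement the hyperparameter matching described above: for a gamma( a , b ) prior the mean is a / b and the coefficient of variation is 1 / sqrt( a ), so fixing a target mean (the MLE) and an assumed spread pins down both values. The matching rule below is a hedged illustration, not the paper's exact procedure.

```python
def gamma_hyperparams(target_mean, cv=0.5):
    """Return (shape a, rate b) of a gamma prior with mean a/b = target_mean.

    The coefficient of variation of a gamma(a, b) variable is 1/sqrt(a),
    so an assumed cv fixes the shape; the rate then matches the mean.
    This matching rule is illustrative only.
    """
    if target_mean <= 0 or cv <= 0:
        raise ValueError("target_mean and cv must be positive")
    a = 1.0 / cv**2          # shape from the assumed coefficient of variation
    b = a / target_mean      # rate so that the prior mean equals target_mean
    return a, b

a, b = gamma_hyperparams(0.4, cv=0.5)  # prior centred at a hypothetical MLE of 0.4
```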

7.2. Example 2

The two datasets, each consisting of 30 observations ( m = n = 30 ) , concern the breaking strength of jute fibre at gauge lengths of 10 mm and 20 mm. The measurements are given in units of fiber strength (MPa). These data are taken from [30] and are presented in Table 7.
A rescaling factor of 1000 is applied to the data to prevent the estimates from becoming excessively large or small. The K-S test is also employed to evaluate the fit of the samples to the IWD. The resulting p-values are 0.6781 for the first sample and 0.8081 for the second. A visual representation of the test outcomes is provided in Figure 11.
Based on the datasets described above, JPC samples are constructed under specified censoring schemes. The analysis is conducted with r = 30 and r = 40 , using R = ( 0 ( 28 ) , 15 , 15 ) , R = ( 0 ( 38 ) , 15 , 5 ) , and R = ( 0 ( 38 ) , 10 , 10 ) .
Table A8 displays the parameter estimates obtained under various censoring schemes. To assess the impact of prior selection, Appendix A includes Table A9 and Table A10, which present results from the IP and NIP settings. Specifically, Table A9 uses a 1 = 10 , b 1 = 10.5 , a 2 = 9 , b 2 = 20 , c = 2 , and d = 10 . In comparison, Table A10 in Appendix A reports the results using NIPs with a 1 = b 1 = a 2 = b 2 = c = d = 10 5 .

8. Conclusions

This paper investigates statistical inference for two inverse Weibull distributions under a joint progressive type II censoring scheme, assuming a common shape parameter and differing scale parameters. The Newton–Raphson method is used to derive maximum likelihood estimates, and asymptotic confidence intervals are constructed from the Fisher information matrix. Additionally, bootstrap techniques are applied to improve interval estimation. Our Bayesian analysis is based on independent priors for the model parameters. We employ an MCMC algorithm to obtain Bayesian estimates under the squared error and LINEX loss functions, along with the corresponding credible intervals. These procedures are evaluated through simulations under various joint censoring schemes. Finally, we apply the proposed methods to real datasets to demonstrate their practical implementation.
In this study, the populations are assumed to follow inverse Weibull distributions with a common shape parameter. In practical applications, however, the shape parameters may vary across populations, highlighting the need for further investigation. Additionally, alternative approaches, such as the Jackknife method, could be explored in future work to further evaluate estimation performance and variability. Future research will also examine statistical inference methods for multiple populations, considering both identical and distinct lifetime distributions.

Author Contributions

Conceptualization and methodology, J.X. and Y.W.; software, J.X.; investigation, J.X.; writing—original draft preparation, J.X.; writing—review and editing, W.G.; supervision, W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Training Program of Innovation and Entrepreneurship for Undergraduates (Project 202510004188). Wenhao Gui's work was partially supported by the Science and Technology Research and Development Project of China State Railway Group Company, Ltd. (No. N2023Z020).

Data Availability Statement

The data presented in this study are openly available in [28,30].

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Bayesian estimations of parameter α 2 supposing informative priors ( a 1 , a 2 , b 1 , b 2 , c , d ) = ( 15 , 10 , 10.5 , 5 , 10 , 10 ) .
( m , n ) r ( R 1 , , R r ) SE | LINEX (a = −2) | LINEX (a = 2)
ABE (MSE) AL (CP) | ABE (MSE) AL (CP) | ABE (MSE) AL (CP)
(20, 20)20( 0 ( 19 ) , 20)2.04501.05782.08091.06931.95001.0111
(0.4506)(89.8%)(0.4816)(79.6%)(0.4609)(81.5%)
( 0 ( 18 ) , 5, 15)2.10211.06092.11781.09272.17831.0453
(0.4411)(87.8%)(0.4668)(81.6%)(0.4431)(83.7%)
30( 0 ( 29 ) , 10)1.98451.03492.02681.05342.05391.0484
(0.4671)(83.7%)(0.4497)(81.6%)(0.4357)(91.8%)
( 0 ( 28 ) , 8, 2)2.01081.02052.07201.06172.07981.0027
(0.4473)(81.6%)(0.4538)(79.6%)(0.4315)(83.6%)
(20, 30)30( 0 ( 29 ) , 20)1.98170.99241.97780.99082.00061.0493
(0.4466)(87.6%)(0.4440)(82.6%)(0.4557)(81.6%)
( 0 ( 28 ) , 15, 5)2.06851.08162.04011.11272.18271.0594
(0.4478)(87.8%)(0.4863)(79.6%)(0.4652)(79.9%)
40( 0 ( 39 ) , 10)2.04351.04052.09230.99161.99501.0081
(0.4413)(83.7%)(0.4240)(79.6%)(0.4425)(81.6%)
( 0 ( 38 ) , 5, 5)2.17661.07952.13141.04992.16751.0308
(0.4463)(89.8%)(0.4631)(83.7%)(0.4570)(83.7%)
(30, 30)30( 0 ( 29 ) , 30)2.00530.94721.99021.00671.98950.9612
(0.4272)(75.51%)(0.4457)(77.55%)(0.4396)(75.51%)
( 0 ( 28 ) , 20, 10)2.08561.01372.09361.03672.12221.0803
(0.4360)(86.6%)(0.4442)(81.6%)(0.4659)(81.6%)
40( 0 ( 39 ) , 20)1.98990.95202.00820.99222.03091.0350
(0.4474)(79.4%)(0.4364)(83.3%)(0.4418)(79.6%)
( 0 ( 38 ) , 20, 0)2.03481.04151.92781.03352.04161.0677
(0.4396)(75.5%)(0.4337)(79.6%)(0.4442)(87.8%)
(30, 40)40( 0 ( 39 ) , 30)1.94941.03392.01501.00221.99391.0178
(0.4650)(83.7%)(0.4368)(85.7%)(0.4475)(79.5%)
( 0 ( 38 ) , 29, 1)2.08651.02342.04931.01911.99921.0813
(0.4341)(77.5%)(0.4269)(83.7%)(0.4478)(85.5%)
50( 0 ( 49 ) , 20)1.98881.05242.02230.97361.99901.0195
(0.4689)(77.6%)(0.4222)(81.4%)(0.4347)(83.7%)
( 0 ( 48 ) , 15, 5)2.09361.00502.02241.02182.05350.9626
(0.4542)(83.5%)(0.4452)(83.7%)(0.4511)(87.3%)
(40, 50)50( 0 ( 49 ) , 40)1.95981.04262.02101.02412.01311.0456
(0.4476)(81.6%)(0.4403)(82.6%)(0.4434)(83.7%)
( 0 ( 48 ) , 35, 5)2.04471.01202.04111.01432.15821.0381
(0.4362)(89.6%)(0.4293)(87.6%)(0.4232)(85.7%)
60( 0 ( 59 ) , 30)1.98920.97262.00640.98212.02101.0211
(0.4316)(82.3%)(0.4224)(81.6%)(0.4438)(81.6%)
( 0 ( 58 ) , 25, 5)2.10371.05022.13641.10392.06561.0532
(0.4493)(79.5%)(0.4643)(85.7%)(0.4398)(85.7%)
Table A2. Bayesian estimations of parameter β supposing informative priors ( a 1 , a 2 , b 1 , b 2 , c , d ) = ( 15 , 10 , 10.5 , 5 , 10 , 10 ) .
( m , n ) r ( R 1 , , R r ) SE | LINEX (a = −2) | LINEX (a = 2)
ABE (MSE) AL (CP) | ABE (MSE) AL (CP) | ABE (MSE) AL (CP)
(20, 20)20( 0 ( 19 ) , 20)0.99371.09541.00391.06111.01801.0334
(0.5189)(83.7%)(0.4993)(79.4%)(0.4800)(76.4%)
( 0 ( 18 ) , 5, 15)0.97811.09440.94581.09911.02121.1931
(0.4930)(83.7%)(0.5209)(77.6%)(0.6462)(79.3%)
30( 0 ( 29 ) , 10)0.99491.05731.00500.98680.99921.0741
(0.4900)(77.6%)(0.4175)(77.3%)(0.4891)(75.5%)
( 0 ( 28 ) , 8, 2)1.04520.97980.99851.06830.96231.0461
(0.4417)(79.5%)(0.5132)(81.4%)(0.4756)(83.7%)
(20, 30)30( 0 ( 29 ) , 20)1.00681.06031.02781.02551.03880.9640
(0.4847)(75.5%)(0.4776)(85.3%)(0.4477)(79.2%)
( 0 ( 28 ) , 15, 5)0.97811.09440.94581.09911.02121.1931
(0.4930)(83.7%)(0.5209)(77.6%)(0.6462)(79.3%)
40( 0 ( 39 ) , 10)0.99631.07291.02381.00671.00400.9967
(0.5050)(79.4%)(0.4489)(79.4%)(0.4339)(85.7%)
( 0 ( 38 ) , 5, 5)0.98071.17031.00751.09501.03161.0621
(0.5575)(79.5%)(0.4910)(81.6%)(0.4717)(83.5%)
(30, 30)30( 0 ( 29 ) , 30)1.01071.00051.00551.02910.98761.0327
(0.4328)(81.6%)(0.4670)(77.3%)(0.4569)(79.6%)
( 0 ( 28 ) , 20, 10)0.98061.05251.01691.01731.00681.0439
(0.5004)(79.2%)(0.4542)(83.5%)(0.5093)(82.4%)
40( 0 ( 39 ) , 20)0.99480.91461.02060.92661.00671.0614
(0.3782)(85.5%)(0.4180)(81.2%)(0.4759)(83.5%)
( 0 ( 38 ) , 20, 0)1.00301.03661.01681.02791.00141.0452
(0.4935)(80.1%)(0.4827)(85.2%)(0.4877)(83.4%)
(30, 40)40( 0 ( 39 ) , 30)0.98501.09801.02600.99001.01901.0190
(0.5126)(79.6%)(0.4305)(71.4%)(0.4581)(69.4%)
( 0 ( 38 ) , 29, 1)1.02231.03151.03130.99911.01611.0187
(0.4671)(77.3%)(0.4496)(81.2%)(0.4723)(75.3%)
50( 0 ( 49 ) , 20)0.99330.99810.99991.00851.01430.9847
(0.4293)(82.6%)(0.4293)(83.6%)(0.4359)(80.2%)
( 0 ( 48 ) , 15, 5)0.99641.06251.00761.05800.97901.0990
(0.4780)(71.4%)(0.4708)(79.6%)(0.4960)(73.5%)
(40, 50)50( 0 ( 49 ) , 40)0.99930.99931.00551.00000.99281.0416
(0.4184)(79.5%)(0.4306)(77.6%)(0.4404)(85.7%)
( 0 ( 48 ) , 35, 5)0.99471.03500.97441.00480.97471.1015
(0.4617)(79.6%)(0.4355)(86.4%)(0.5212)(78.4%)
60( 0 ( 59 ) , 30)1.00001.00131.00160.99561.00901.0714
(0.4469)(85.3%)(0.4297)(79.5%)(0.4790)(83.5%)
( 0 ( 58 ) , 25, 5)0.98601.06540.98191.13621.00071.0541
(0.4726)(79.6%)(0.5216)(81.6%)(0.4794)(83.7%)
Table A3. Bayesian estimations of parameter α 2 supposing non-informative priors a 1 = a 2 = b 1 = b 2 = c = d = 10 5 .
( m , n ) r ( R 1 , , R r ) SE | LINEX (a = −2) | LINEX (a = 2)
ABE (MSE) AL (CP) | ABE (MSE) AL (CP) | ABE (MSE) AL (CP)
(20, 20)20( 0 ( 19 ) , 20)2.13691.11842.11721.06492.13621.0704
(0.5073)(83.7%)(0.5054)(78.6%)(0.5017)(83.5%)
( 0 ( 18 ) , 5, 15)2.26111.21952.28371.09992.24131.2725
(0.5829)(87.8%)(0.5677)(85.7%)(0.5045)(95.9%)
30( 0 ( 29 ) , 10)2.10791.11182.05601.02221.97631.0789
(0.4709)(87.8%)(0.4957)(83.5%)(0.4851)(85.5%)
( 0 ( 28 ) , 8, 2)2.12121.10702.26861.06592.14521.0980
(0.4977)(77.6%)(0.4464)(83.7%)(0.4805)(83.7%)
(20, 30)30( 0 ( 29 ) , 20)1.99371.03792.03000.95382.11001.0113
(0.4733)(83.5%)(0.4905)(80.1%)(0.4708)(79.6%)
( 0 ( 28 ) , 15, 5)2.13321.13782.25541.06952.29031.1828
(0.4858)(89.8%)(0.4885)(83.7%)(0.5124)(83.7%)
40( 0 ( 39 ) , 10)1.98051.04491.96611.05332.01591.0594
(0.4789)(81.4%)(0.4404)(83.7%)(0.4919)(75.5%)
( 0 ( 38 ) , 5, 5)2.15491.08982.20241.13042.26371.1337
(0.4618)(83.7%)(0.5000)(83.7%)(0.5218)(81.6%)
(30, 30)30( 0 ( 29 ) , 30)2.02021.14492.00381.07912.02691.0245
(0.5005)(83.7%)(0.4847)(77.6%)(0.4466)(81.6%)
( 0 ( 28 ) , 20, 10)2.27791.09532.14661.04632.12711.0983
(0.4688)(79.6%)(0.4722)(80.3%)(0.4939)(79.6%)
40( 0 ( 39 ) , 20)2.03151.03942.04781.02091.92221.0062
(0.4481)(83.7%)(0.4735)(79.4%)(0.4616)(85.6%)
( 0 ( 38 ) , 20, 0)2.05221.02822.06111.00412.00381.0683
(0.4450)(86.6%)(0.4517)(79.6%)(0.4733)(79.6%)
(30, 40)40( 0 ( 39 ) , 30)2.13361.06892.04331.06202.07621.0605
(0.4662)(77.6%)(0.4636)(81.6%)(0.4551)(85.7%)
( 0 ( 38 ) , 29, 1)2.03410.97902.12791.08852.02631.0263
(0.4332)(75.5%)(0.4606)(91.8%)(0.4820)(85.3%)
50( 0 ( 49 ) , 20)2.09321.03101.98050.96602.08801.0606
(0.4436)(79.6%)(0.4270)(63.3%)(0.4548)(75.5%)
( 0 ( 48 ) , 15, 5)2.11141.03672.07791.04682.15961.0724
(0.4478)(73.5%)(0.4443)(87.8%)(0.4442)(89.8%)
(40, 50)50( 0 ( 49 ) , 40)1.94940.96711.98030.99922.03391.0119
(0.4416)(81.4%)(0.4323)(77.5%)(0.4348)(81.6%)
( 0 ( 48 ) , 35, 5)2.09591.05942.08351.02812.20551.0858
(0.4488)(83.7%)(0.4599)(81.4%)(0.4593)(81.6%)
60( 0 ( 59 ) , 30)2.05260.95742.01401.01951.98200.9500
(0.4237)(79.5%)(0.4510)(81.4%)(0.4167)(85.3%)
( 0 ( 58 ) , 25, 5)2.09871.07042.09921.02802.07600.9836
(0.4684)(87.6%)(0.4454)(83.5%)(0.4272)(79.5%)
Table A4. Bayesian estimations of parameter β supposing non-informative priors a 1 = a 2 = b 1 = b 2 = c = d = 10 5 .
( m , n ) r ( R 1 , , R r ) SE | LINEX (a = −2) | LINEX (a = 2)
ABE (MSE) AL (CP) | ABE (MSE) AL (CP) | ABE (MSE) AL (CP)
(20, 20)20( 0 ( 19 ) , 20)1.01561.04431.03201.17961.02491.1103
(0.5076)(79.3%)(0.6132)(79.6%)(0.6268)(81.4%)
( 0 ( 18 ) , 5, 15)0.96601.29401.00271.31000.99791.2091
(0.7323)(79.6%)(0.7311)(83.7%)(0.6159)(81.4%)
30( 0 ( 29 ) , 10)1.00641.01800.98521.02470.98321.0145
(0.4826)(79.3%)(0.4801)(80.3%)(0.4279)(81.6%)
( 0 ( 28 ) , 8, 2)0.94841.17440.97511.06910.97621.1375
(0.5875)(83.4%)(0.4949)(81.6%)(0.5884)(82.5%)
(20, 30)30( 0 ( 29 ) , 20)0.99601.02051.05450.97280.96431.0074
(0.4698)(80.3%)(0.4499)(79.3%)(0.4287)(82.5%)
( 0 ( 28 ) , 15, 5)1.00091.19501.00811.14630.94241.2428
(0.6132)(79.6%)(0.5497)(75.5%)(0.6540)(85.7%)
40( 0 ( 39 ) , 10)0.99901.09701.00581.05300.99191.0736
(0.4729)(83.7%)(0.4792)(81.6%)(0.5184)(89.4%)
( 0 ( 38 ) , 5, 5)0.96671.10020.96941.15000.94871.2435
(0.5513)(77.5%)(0.5426)(79.4%)(0.6024)(89.8%)
(30, 30)30( 0 ( 29 ) , 30)1.01251.13290.99531.07731.05101.0657
(0.5555)(80.3%)(0.5020)(81.4%)(0.4924)(85.3%)
( 0 ( 28 ) , 20, 10)1.00121.12871.01901.08381.02831.1435
(0.5624)(83.5%)(0.5212)(83.3%)(0.5709)(85.5%)
40( 0 ( 39 ) , 20)1.00671.05270.99951.06001.01861.0202
(0.4831)(81.4%)(0.5084)(85.3%)(0.4666)(83.3%)
( 0 ( 38 ) , 20, 0)0.96171.03180.96821.04781.00711.0864
(0.4498)(80.6%)(0.4720)(79.4%)(0.5181)(79.5%)
(30, 40)40( 0 ( 39 ) , 30)0.97701.07181.00351.06891.02791.0498
(0.5480)(69.4%)(0.4782)(77.6%)(0.5072)(71.4%)
( 0 ( 38 ) , 29, 1)1.01401.01680.97131.12560.99211.0243
(0.4600)(80.3%)(0.5242)(85.7%)(0.4839)(81.2%)
50( 0 ( 49 ) , 20)1.02041.03571.00451.02240.97791.0610
(0.4712)(85.5%)(0.4547)(83.5%)(0.4787)(89.4%)
( 0 ( 48 ) , 15, 5)0.97831.02820.99421.12040.98831.0798
(0.4586)(79.2%)(0.5215)(80.6%)(0.4825)(85.6%)
(40, 50)50( 0 ( 49 ) , 40)1.02440.96671.01620.97960.99891.0231
(0.4103)(79.4%)(0.4253)(82.4%)(0.4731)(82.3%)
( 0 ( 48 ) , 35, 5)1.00691.06991.01811.01260.95491.1400
(0.5131)(81.4%)(0.4623)(78.5%)(0.5417)(81.4%)
60( 0 ( 59 ) , 30)0.97601.01661.00391.02640.99570.9585
(0.4538)(81.2%)(0.4558)(76.5%)(0.4469)(81.0%)
( 0 ( 58 ) , 25, 5)0.99381.07190.99001.05000.99881.0178
(0.4795)(79.6%)(0.4908)(77.6%)(0.4484)(82.5%)
Table A5. The AMLEs, MSEs, and AL for α 1 , α 2 , and β for example 1.
( R 1 , , R r ) Parameters AMLE MSE AL
( 0 ( 8 ) , 12 , 12 ) α 1 0.43030.12010.1410
α 2 0.45570.13300.1692
β 1.87390.03230.0282
( 0 ( 17 ) , 14 , 1 ) α 1 0.38070.10570.0674
α 2 0.60130.03890.0654
β 1.85490.03020.0252
( 0 ( 17 ) , 10 , 5 ) α 1 0.36140.10290.0977
α 2 0.63150.03850.1165
β 1.84340.03020.0532
Table A6. The estimation of parameters when a 1 = 4 , b 1 = 10.5 , a 2 = 10 , b 2 = 20 , c = 18 , and d = 10 for example 1.
( R 1 , , R r ) Parameters SE | LINEX (a = −2) | LINEX (a = 2)
ABE AL | ABE AL | ABE AL
( 0 ( 8 ) , 12 , 12 ) α 1 0.35831.44590.37401.44350.37041.4537
α 2 0.45931.44860.44501.43690.44791.4557
β 1.80471.47251.80801.46911.80681.4822
( 0 ( 17 ) , 14 , 1 ) α 1 0.34541.42070.34261.43780.34461.4406
α 2 0.54271.42200.54581.44350.54561.4522
β 1.77881.40131.78281.43491.77741.4672
( 0 ( 17 ) , 10 , 5 ) α 1 0.33571.43650.33521.42800.33151.4307
α 2 0.56151.40610.56431.44160.56561.4258
β 1.77271.41401.77051.42411.77331.4259
Table A7. The estimation of parameters when a 1 = b 1 = a 2 = b 2 = c = d = 10 5 for example 1.
( R 1 , , R r ) Parameters SE | LINEX (a = −2) | LINEX (a = 2)
ABE AL | ABE AL | ABE AL
( 0 ( 8 ) , 12 , 12 ) α 1 0.35521.43290.35931.45590.35881.4208
α 2 0.36701.45630.36181.47250.36691.4119
β 1.81701.46991.81791.46171.81811.4513
( 0 ( 17 ) , 14 , 1 ) α 1 0.32671.42060.32731.43130.32111.4573
α 2 0.56011.46170.56011.43430.56431.4491
β 1.76841.47061.76711.47231.76611.4453
( 0 ( 17 ) , 10 , 5 ) α 1 0.30711.42490.31191.44150.30621.4312
α 2 0.59511.45280.59201.41090.59321.4449
β 1.75601.44231.75141.42511.75371.4490
Table A8. The AMLEs, MSEs, and AL for α 1 , α 2 , and β for example 2.
( R 1 , , R r ) Parameters AMLE MSE AL
( 0 ( 28 ) , 15 , 15 ) α 1 0.99000.02060.0901
α 2 0.82690.02350.0456
β 0.22670.09260.0364
( 0 ( 38 ) , 15 , 5 ) α 1 1.02120.00160.0321
α 2 0.89430.00170.0185
β 0.20110.00760.0032
( 0 ( 38 ) , 10 , 10 ) α 1 1.03000.01570.0391
α 2 0.88200.01780.0218
β 0.20200.07580.0046
Table A9. The estimation of parameters when a 1 = 10 , b 1 = 10.5 , a 2 = 9 , b 2 = 20 , c = 2 , and d = 10 for example 2.
( R 1 , , R r ) Parameters SE | LINEX (a = −2) | LINEX (a = 2)
ABE AL | ABE AL | ABE AL
( 0 ( 28 ) , 15 , 15 ) α 1 0.97850.55540.98430.54350.98110.6022
α 2 0.82240.32000.82170.32330.82170.3236
β 0.23011.26630.22841.22920.22941.2336
( 0 ( 38 ) , 15 , 5 ) α 1 0.99780.50470.99810.58200.99760.5234
α 2 0.87700.24830.87590.32190.87720.2867
β 0.20931.21580.20991.24290.20961.2140
( 0 ( 38 ) , 10 , 10 ) α 1 1.00700.54551.00720.54621.00570.6085
α 2 0.86470.28870.86430.30710.86740.2868
β 0.21031.22180.21021.22140.21011.2162
Table A10. The estimation of parameters when a 1 = b 1 = a 2 = b 2 = c = d = 10 5 for example 2.
( R 1 , , R r ) Parameters SE | LINEX (a = −2) | LINEX (a = 2)
ABE AL | ABE AL | ABE AL
( 0 ( 28 ) , 15 , 15 ) α 1 0.96010.57630.96040.61130.95540.5686
α 2 0.80190.29690.80020.38790.80030.3585
β 0.24431.22650.24471.23090.24631.2479
( 0 ( 38 ) , 15 , 5 ) α 1 0.99910.57261.00080.54061.00120.5898
α 2 0.87430.32090.87550.28630.87440.3129
β 0.21201.23900.21091.21660.21141.2064
( 0 ( 38 ) , 10 , 10 ) α 1 1.00660.58941.01080.60591.01000.5671
α 2 0.86260.33310.86100.35100.86350.2783
β 0.21291.21000.21231.20990.21191.2343

References

1. Kumar, S.; Kumari, A.; Kumar, K. Bayesian and Classical Inferences in Two Inverse Chen Populations Based on Joint Type-II Censoring. Am. J. Theor. Appl. Stat. 2022, 11, 150–159.
2. Balakrishnan, N.; Rasouli, A. Exact likelihood inference for two exponential populations under joint Type-II censoring. Comput. Stat. Data Anal. 2008, 52, 2725–2738.
3. Balakrishnan, N.; Burkschat, M.; Cramer, E.; Hofmann, G. Fisher information based progressive censoring plans. Comput. Stat. Data Anal. 2008, 53, 366–380.
4. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring: Applications to Reliability and Quality; Statistics for Industry and Technology; Springer: New York, NY, USA, 2014.
5. Rasouli, A.; Balakrishnan, N. Exact likelihood inference for two exponential populations under joint progressive type-II censoring. Commun. Stat.-Theory Methods 2010, 39, 2172–2191.
6. Balakrishnan, N.; Su, F.; Liu, K.Y. Exact likelihood inference for k exponential populations under joint progressive type-II censoring. Commun. Stat.-Simul. Comput. 2015, 44, 902–923.
7. Doostparast, M.; Ahmadi, M.V.; Ahmadi, J. Bayes estimation based on joint progressive type II censored data under LINEX loss function. Commun. Stat.-Simul. Comput. 2013, 42, 1865–1886.
8. Mondal, S.; Kundu, D. Point and interval estimation of Weibull parameters based on joint progressively censored data. Sankhya B 2019, 81, 1–25.
9. Goel, R.; Krishna, H. Likelihood and Bayesian inference for k Lindley populations under joint type-II censoring scheme. Commun. Stat.-Simul. Comput. 2023, 52, 3475–3490.
10. Krishna, H.; Goel, R. Inferences for two Lindley populations based on joint progressive type-II censored data. Commun. Stat.-Simul. Comput. 2022, 51, 4919–4936.
11. Hassan, A.S.; Elsherpieny, E.; Aghel, W.E. Statistical inference of the Burr Type III distribution under joint progressively Type-II censoring. Sci. Afr. 2023, 21, e01770.
12. Kumar, K.; Kumari, A. Bayesian and likelihood estimation in two inverse Pareto populations under joint progressive censoring. J. Indian Soc. Probab. Stat. 2023, 24, 283–310.
13. Hasaballah, M.M.; Tashkandy, Y.A.; Balogun, O.S.; Bakr, M. Reliability analysis for two populations Nadarajah-Haghighi distribution under joint progressive type-II censoring. AIMS Math. 2024, 9, 10333–10352.
14. Abo-Kasem, O.; Almetwally, E.M.; Abu El Azm, W.S. Reliability analysis of two Gompertz populations under joint progressive type-II censoring scheme based on binomial removal. Int. J. Model. Simul. 2024, 44, 290–310.
15. Keller, A.Z.; Kamath, A.R.R. Alternate Reliability Models for Mechanical Systems. In Proceedings of the 3rd International Conference on Reliability and Maintainability (Fiabilité et Maintenabilité), Toulouse, France, 18–21 October 1982; pp. 411–415.
16. Langlands, A.O.; Pocock, S.J.; Kerr, G.R.; Gore, S.M. Long-term survival of patients with breast cancer: A study of the curability of the disease. Br. Med. J. 1979, 2, 1247–1251.
17. Chatterjee, A.; Chatterjee, A. Use of the Fréchet distribution for UPV measurements in concrete. NDT E Int. 2012, 52, 122–128.
18. Chiodo, E.; Falco, P.D.; Noia, L.P.D.; Mottola, F. Inverse Log-logistic distribution for Extreme Wind Speed modeling: Genesis, identification and Bayes estimation. AIMS Energy 2018, 6, 926–948.
19. El Azm, W.A.; Aldallal, R.; Aljohani, H.M.; Nassr, S.G. Estimations of competing lifetime data from inverse Weibull distribution under adaptive progressively hybrid censored. Math. Biosci. Eng. 2022, 19, 6252–6276.
20. Bi, Q.; Gui, W. Bayesian and classical estimation of stress-strength reliability for inverse Weibull lifetime models. Algorithms 2017, 10, 71.
21. Alslman, M.; Helu, A. Estimation of the stress-strength reliability for the inverse Weibull distribution under adaptive type-II progressive hybrid censoring. PLoS ONE 2022, 17, e0277514.
22. Shawky, A.I.; Khan, K. Reliability estimation in multicomponent stress-strength based on inverse Weibull distribution. Processes 2022, 10, 226.
23. Ren, H.; Hu, X. Estimation for inverse Weibull distribution under progressive type-II censoring scheme. AIMS Math. 2023, 8, 22808–22829.
24. Kumar, K.; Kumar, I. Estimation in inverse Weibull distribution based on randomly censored data. Statistica 2019, 79, 47–74.
25. Hall, P. Theoretical comparison of bootstrap confidence intervals. Ann. Stat. 1988, 16, 927–953.
26. Efron, B. The Jackknife, the Bootstrap and Other Resampling Plans; SIAM: Philadelphia, PA, USA, 1982.
27. El-Saeed, A.R.; Almetwally, E.M. On Algorithms and Approximations for Progressively Type-I Censoring Schemes. Stat. Anal. Data Min. ASA Data Sci. J. 2024, 17, e11717.
28. Balakrishnan, N.; Iliopoulos, G. Stochastic monotonicity of the MLE of exponential mean under different censoring schemes. Ann. Inst. Stat. Math. 2009, 61, 753–772.
29. Ding, L.; Gui, W. Statistical inference of two gamma distributions under the joint type-II censoring scheme. Mathematics 2023, 11, 2003.
30. Xia, Z.; Yu, J.; Cheng, L.; Liu, L.; Wang, W. Study on the breaking strength of jute fibres using modified Weibull distribution. Compos. Part A Appl. Sci. Manuf. 2009, 40, 54–59.
Figure 1. CDF of the inverse Weibull distribution.
Figure 2. PDF of the inverse Weibull distribution.
Figure 3. JPC scheme.
Figure 4. Simulation of α 1 with IPs. (a) The trace plot of MCMC samples of α 1 with IPs. (b) Histogram of α 1 with IPs. (c) Cumulative mean plots of α 1 with IPs.
Figure 5. Simulation of α 2 with IPs. (a) The trace plot of MCMC samples of α 2 with IPs. (b) Histogram of α 2 with IPs. (c) Cumulative mean plots of α 2 with IPs.
Figure 6. Simulation of β with IPs. (a) The trace plot of MCMC samples of β with IPs. (b) Histogram of β with IPs. (c) Cumulative mean plots of β with IPs.
Figure 7. Simulation of α 1 with NIPs. (a) The trace plot of MCMC samples of α 1 with NIPs. (b) Histogram of α 1 with NIPs. (c) Cumulative mean plots of α 1 with NIPs.
Figure 8. Simulation of α 2 with NIPs. (a) The trace plot of MCMC samples of α 2 with NIPs. (b) Histogram of α 2 with NIPs. (c) Cumulative mean plots of α 2 with NIPs.
Figure 9. Simulation of β with NIPs. (a) The trace plot of MCMC samples of β with NIPs. (b) Histogram of β with NIPs. (c) Cumulative mean plots of β with NIPs.
Figure 10. Fitness between the fitted distribution and the empirical distribution for example 1. (a) Fitness of dataset 1. (b) Fitness of dataset 2.
Figure 11. Fitness between the fitted distribution and the empirical distribution for example 2. (a) Fitness of dataset 1. (b) Fitness of dataset 2.
Table 1. AMLEs, MSEs, ACIs, Boot-T CIs, and Boot-P CIs of α 1 .
( m , n ) r ( R 1 , , R r ) AMLE (MSE) | ACI: AL, CP | Boot-T AL | Boot-P AL
(20, 20)20( 0 ( 19 ) , 20)1.7027 (0.2309)1.836089.5%1.67941.7570
( 0 ( 18 ) , 5, 15)1.6513 (0.2244)1.924991.3%1.78321.8529
30( 0 ( 29 ) , 10)1.5925 (0.1195)1.350791.5%1.29411.2804
( 0 ( 28 ) , 8, 2)1.6006 (0.1229)1.354391.0%1.27951.2911
(20, 30)30( 0 ( 29 ) , 20)1.6548 (0.1670)1.546588.7%1.49231.5469
( 0 ( 28 ) , 15, 5)1.6215 (0.1108)1.515789.6%1.42841.4801
40( 0 ( 39 ) , 10)1.6134 (0.1226)1.350190.0%1.23201.3142
( 0 ( 38 ) , 5, 5)1.4942 (0.0833)1.161895.1%1.08341.0957
(30, 30)30( 0 ( 29 ) , 30)1.6257 (0.1253)1.357291.5%1.32771.3003
( 0 ( 28 ) , 20, 10)1.6046 (0.1264)1.364091.7%1.29841.3362
40( 0 ( 39 ) , 20)1.5939 (0.0875)1.123590.9%1.04751.0692
( 0 ( 38 ) , 20, 0)1.6226 (0.0906)1.140789.2%1.10871.1204
(30, 40)40( 0 ( 39 ) , 30)1.5855 (0.1014)1.214891.0%1.16011.2213
( 0 ( 38 ) , 29, 1)1.6173 (0.1118)1.262589.3%0.89301.1819
50( 0 ( 49 ) , 20)1.5765 (0.0799)1.091492.2%1.06031.0860
( 0 ( 48 ) , 15, 5)1.5528 (0.0687)1.026391.6%0.91601.0000
(40, 50)50( 0 ( 49 ) , 40)1.5595 (0.0754)1.051592.6%0.92601.0616
( 0 ( 48 ) , 35, 5)1.5817 (0.0728)1.043692.7%0.92701.0140
60( 0 ( 59 ) , 30)1.5739 (0.0574)0.925092.6%0.92600.8984
( 0 ( 58 ) , 25, 5)1.5406 (0.0560)0.929892.6%0.92100.9249
Table 2. AMLEs, MSEs, ACIs, Boot-T CIs, and Boot-P CIs of α 2 .
( m , n ) r ( R 1 , , R r ) AMLE (MSE) | ACI: AL, CP | Boot-T AL | Boot-P AL
(20, 20)20( 0 ( 19 ) , 20)2.2568 (0.4263)2.526589.1%2.30582.3371
( 0 ( 18 ) , 5, 15)2.4729 (0.6462)2.702184.6%2.54682.5304
30( 0 ( 29 ) , 10)2.1285 (0.1726)1.613192.2%1.59261.5996
( 0 ( 28 ) , 8, 2)2.2105 (0.2826)1.920588.0%1.81801.7945
(20, 30)30( 0 ( 29 ) , 20)2.1136 (0.1569)1.498290.2%1.43571.4124
( 0 ( 28 ) , 15, 5)2.2705 (0.2505)1.782287.9%1.71991.6676
40( 0 ( 39 ) , 10)2.1024 (0.1122)1.284392.0%1.27831.2660
( 0 ( 38 ) , 5, 5)2.2802 (0.2066)1.490085.5%1.46191.4794
(30, 30)30( 0 ( 29 ) , 30)2.1330 (0.2243)1.787890.3%1.73241.7611
( 0 ( 28 ) , 20, 10)2.2581 (0.2875)1.904587.5%1.77201.7790
40( 0 ( 39 ) , 20)2.1206 (0.1340)1.405791.8%1.36821.3687
( 0 ( 38 ) , 20, 0)2.1560 (0.1590)1.487090.1%1.43971.4210
(30, 40)40( 0 ( 39 ) , 30)2.0866 (0.1098)1.279391.6%1.24281.2512
( 0 ( 38 ) , 29, 1)2.1423 (0.1422)1.382189.9%0.89901.3511
50( 0 ( 49 ) , 20)2.0665 (0.0871)1.157893.1%1.15951.1305
( 0 ( 48 ) , 15, 5)2.1550 (0.1136)1.205189.2%0.89201.0000
(40, 50)50( 0 ( 49 ) , 40)2.0872 (0.0907)1.149291.0%0.92001.1308
( 0 ( 48 ) , 35, 5)2.1518 (0.1080)1.188089.4%0.89401.1418
60( 0 ( 59 ) , 30)2.0577 (0.0775)1.073292.4%0.92401.0263
( 0 ( 58 ) , 25, 5)2.1192 (0.0891)1.101291.7%0.91701.0738
Table 3. AMLEs, MSEs, ACIs, Boot-T CIs, and Boot-P CIs of β .
( m , n ) r ( R 1 , , R r ) AMLE (MSE) | ACI: AL, CP | Boot-T AL | Boot-P AL
(20, 20)20( 0 ( 19 ) , 20)1.0003 (0.0405)0.813094.9%0.80970.7918
( 0 ( 18 ) , 5, 15)0.9954 (0.0285)0.673994.8%0.65710.6537
30( 0 ( 29 ) , 10)1.0326 (0.0343)0.733693.0%0.93000.7270
( 0 ( 28 ) , 8, 2)1.0057 (0.0320)0.717393.9%0.68080.6771
(20, 30)30( 0 ( 29 ) , 20)1.0055 (0.0286)0.676494.2%0.67000.6645
( 0 ( 28 ) , 15, 5)0.9954 (0.0285)0.673994.8%0.65710.6537
40( 0 ( 39 ) , 10)1.0079 (0.0258)0.639794.2%0.63020.6323
( 0 ( 38 ) , 5, 5)0.9949 (0.0257)0.640594.7%0.94700.6159
(30, 30)30( 0 ( 29 ) , 30)0.9927 (0.0255)0.636995.8%0.61940.6155
( 0 ( 28 ) , 20, 10)0.9822 (0.0260)0.641196.1%0.64030.6233
40( 0 ( 39 ) , 20)1.0116 (0.0220)0.590295.0%0.57930.5667
( 0 ( 38 ) , 20, 0)0.9980 (0.0245)0.624895.2%0.95200.5974
(30, 40)40( 0 ( 39 ) , 30)0.9964 (0.0191)0.547994.3%0.94300.5538
( 0 ( 38 ) , 29, 1)0.9920 (0.0194)0.549995.9%0.95900.5399
50( 0 ( 49 ) , 20)1.0113 (0.0179)0.527593.5%0.93500.5115
( 0 ( 48 ) , 15, 5)1.0038 (0.0187)0.543694.8%0.94800.5438
(40, 50)50( 0 ( 49 ) , 40)1.0088 (0.0158)0.496694.7%0.94700.5138
( 0 ( 48 ) , 35, 5)0.9917 (0.0156)0.494895.4%0.95400.4892
60( 0 ( 59 ) , 30)1.0033 (0.0137)0.462494.9%0.94900.4751
( 0 ( 58 ) , 25, 5)1.0028 (0.0139)0.466194.8%0.95100.4648
Table 4. Bayesian estimations of parameter α 1 supposing informative priors ( a 1 , a 2 , b 1 , b 2 , c , d ) = ( 15 , 10 , 10.5 , 5 , 10 , 10 ) .
( m , n ) r ( R 1 , , R r ) SE | LINEX (a = −2) | LINEX (a = 2)
ABE (MSE) AL (CP) | ABE (MSE) AL (CP) | ABE (MSE) AL (CP)
(20, 20)20( 0 ( 19 ) , 20)1.55401.08561.46271.01221.42121.0443
(0.2319)(98.0%)(0.1806)(100.0%)(0.2106)(100.0%)
( 0 ( 18 ) , 5, 15)1.42821.04791.44771.07221.46861.0198
(0.1956)(100.0%)(0.2308)(100.0%)(0.1920)(100.0%)
30( 0 ( 29 ) , 10)1.47211.02631.44890.98931.52361.0107
(0.1914)(100.0%)(0.1853)(95.9%)(0.1835)(100.0%)
( 0 ( 28 ) , 8, 2)1.47730.99431.45171.05491.46241.0138
(0.1852)(100.0%)(0.2119)(100.0%)(0.1923)(97.96%)
(20, 30)30( 0 ( 29 ) , 20)1.48510.97361.46650.96901.42830.9938
(0.1693)(100.0%)(0.1710)(97.96%)(0.1840)(100.0%)
( 0 ( 28 ) , 15, 5)1.47341.00721.40910.99711.45080.9837
(0.1755)(100.0%)(0.1994)(100.0%)(0.1726)(100.0%)
40( 0 ( 39 ) , 10)1.54370.98581.51551.03841.43271.0452
(0.1704)(100.0%)(0.2019)(100.0%)(0.2023)(100.0%)
( 0 ( 38 ) , 5, 5)1.41751.04801.40960.99871.44621.0027
(0.1946)(100.0%)(0.1742)(100.0%)(0.1820)(100.0%)
(30, 30)30( 0 ( 29 ) , 30)1.47170.96401.49300.97611.49120.9608
(0.1699)(97.96%)(0.1731)(97.96%)(0.1736)(97.96%)
( 0 ( 28 ) , 20, 10)1.50020.99971.53251.01721.45441.0484
(0.1766)(100.0%)(0.1886)(100.0%)(0.2044)(100.0%)
40( 0 ( 39 ) , 20)1.47820.94191.46531.00491.49871.0303
(0.1609)(98.0%)(0.1841)(98.0%)(0.1935)(100.0%)
( 0 ( 38 ) , 20, 0)1.47691.00731.49241.04121.50211.0273
(0.1797)(100.0%)(0.2009)(100.0%)(0.1902)(100.0%)
(30, 40)40( 0 ( 39 ) , 30)1.4871.0391.4520.9421.4961.019
(0.1902)(100.0%)(0.1586)(98.0%)(0.1857)(100.0%)
( 0 ( 39 ) , 29, 1)1.49130.98771.43691.01161.53071.0151
(0.1748)(100.0%)(0.1784)(100.0%)(0.1825)(100.0%)
50( 0 ( 49 ) , 20)1.49930.98161.47481.02071.45091.0532
(0.1740)(100.0%)(0.1860)(100.0%)(0.2010)(100.0%)
( 0 ( 48 ) , 15, 5)1.45261.04671.45651.04281.43911.0215
(0.1966)(100.0%)(0.1974)(100.0%)(0.1903)(100.0%)
(40, 50)50( 0 ( 49 ) , 40)1.48621.04491.49221.00401.50551.0486
(0.1940)(100.0%)(0.1771)(100.0%)(0.2009)(100.0%)
( 0 ( 48 ) , 35, 5)1.52311.01251.49240.97421.47711.0412
(0.1779)(100.0%)(0.1720)(100.0%)(0.1994)(100.0%)
60( 0 ( 59 ) , 30)1.49261.01051.48200.99051.49941.0187
(0.1826)(100.0%)(0.1685)(100.0%)(0.1794)(100.0%)
( 0 ( 58 ) , 25, 5)1.47401.04251.50411.02001.47191.0245
(0.1929)(100.0%)(0.1828)(100.0%)(0.1831)(100.0%)
Table 5. Bayesian estimations of the parameter α₁ under non-informative priors a₁ = a₂ = b₁ = b₂ = c = d = 10⁻⁵.

| (m, n) | r | (R₁, …, Rᵣ) | SE: ABE (MSE) | SE: AL (CP) | LINEX a = −2: ABE (MSE) | LINEX a = −2: AL (CP) | LINEX a = 2: ABE (MSE) | LINEX a = 2: AL (CP) |
|---|---|---|---|---|---|---|---|---|
| (20, 20) | 20 | (0^(19), 20) | 1.5415 (0.1891) | 0.9974 (98.0%) | 1.5202 (0.2478) | 1.1238 (100.0%) | 1.5560 (0.2064) | 1.0223 (95.9%) |
| | | (0^(18), 5, 15) | 1.3973 (0.2565) | 1.1096 (100.0%) | 1.4134 (0.1947) | 1.0052 (95.9%) | 1.4864 (0.2797) | 1.1652 (100.0%) |
| | 30 | (0^(29), 10) | 1.5459 (0.2048) | 1.0182 (98.0%) | 1.5821 (0.1954) | 1.0241 (100.0%) | 1.5657 (0.1936) | 1.0313 (100.0%) |
| | | (0^(28), 8, 2) | 1.5342 (0.2248) | 1.0839 (98.0%) | 1.4975 (0.1963) | 1.0356 (98.0%) | 1.5406 (0.2452) | 1.0968 (100.0%) |
| (20, 30) | 30 | (0^(29), 20) | 1.5068 (0.2120) | 1.0485 (100.0%) | 1.4969 (0.1945) | 0.9977 (98.0%) | 1.5988 (0.1946) | 1.0299 (100.0%) |
| | | (0^(28), 15, 5) | 1.5275 (0.2460) | 1.1028 (100.0%) | 1.4590 (0.2205) | 1.0865 (100.0%) | 1.5836 (0.2500) | 1.0912 (98.0%) |
| | 40 | (0^(39), 10) | 1.5544 (0.1981) | 1.0465 (100.0%) | 1.5168 (0.1808) | 1.0104 (100.0%) | 1.4995 (0.1987) | 1.0326 (98.0%) |
| | | (0^(38), 5, 5) | 1.4592 (0.1860) | 0.9943 (98.0%) | 1.4368 (0.1974) | 1.0164 (98.0%) | 1.4163 (0.2435) | 1.1130 (100.0%) |
| (30, 30) | 30 | (0^(29), 30) | 1.5452 (0.2185) | 1.0391 (98.0%) | 1.5198 (0.2299) | 1.0611 (95.9%) | 1.6159 (0.1652) | 0.9741 (100.0%) |
| | | (0^(28), 20, 10) | 1.5250 (0.2010) | 1.0462 (100.0%) | 1.4700 (0.1809) | 1.0006 (100.0%) | 1.4882 (0.2093) | 1.0511 (100.0%) |
| | 40 | (0^(39), 20) | 1.5112 (0.2065) | 1.0565 (100.0%) | 1.4932 (0.2073) | 1.0288 (98.0%) | 1.4970 (0.1920) | 1.0090 (95.9%) |
| | | (0^(38), 20, 0) | 1.5510 (0.2045) | 1.0584 (100.0%) | 1.5105 (0.1877) | 1.0100 (100.0%) | 1.5340 (0.2034) | 1.0369 (100.0%) |
| (30, 40) | 40 | (0^(39), 30) | 1.6226 (0.2186) | 1.0623 (98.0%) | 1.4931 (0.1990) | 1.0384 (100.0%) | 1.5335 (0.1993) | 1.0430 (100.0%) |
| | | (0^(38), 29, 1) | 1.5016 (0.1733) | 0.9801 (100.0%) | 1.4981 (0.1882) | 1.0258 (100.0%) | 1.5474 (0.1988) | 1.0125 (98.0%) |
| | 50 | (0^(49), 20) | 1.5536 (0.1907) | 1.0146 (100.0%) | 1.4953 (0.1738) | 0.9792 (100.0%) | 1.5270 (0.2026) | 1.0628 (100.0%) |
| | | (0^(48), 15, 5) | 1.5256 (0.1907) | 1.0128 (98.0%) | 1.4978 (0.1738) | 0.9998 (98.0%) | 1.4926 (0.2008) | 1.0616 (100.0%) |
| (40, 50) | 50 | (0^(49), 40) | 1.4773 (0.1596) | 0.9553 (100.0%) | 1.5163 (0.1797) | 0.9957 (100.0%) | 1.4868 (0.1919) | 1.0291 (100.0%) |
| | | (0^(48), 35, 5) | 1.5283 (0.1918) | 1.0298 (100.0%) | 1.5067 (0.1924) | 1.0215 (100.0%) | 1.5188 (0.2128) | 1.0704 (100.0%) |
| | 60 | (0^(59), 30) | 1.5016 (0.1678) | 0.9801 (100.0%) | 1.4780 (0.1822) | 1.0114 (100.0%) | 1.4594 (0.1738) | 0.9964 (100.0%) |
| | | (0^(58), 25, 5) | 1.4615 (0.1914) | 1.0400 (100.0%) | 1.5245 (0.1957) | 1.0407 (100.0%) | 1.5225 (0.1707) | 0.9925 (100.0%) |
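The two losses compared in Tables 4 and 5 lead to simple averages over MCMC output: under squared-error loss the Bayes estimate is the posterior mean, and under LINEX loss with parameter a it is −(1/a) ln E[e^(−aθ)]. A minimal sketch follows; the gamma draws are a stand-in for the actual MCMC sample of α₁, not the paper's sampler.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in posterior sample for alpha_1; in the paper these draws come from
# the MCMC sampler described in the text.
draws = rng.gamma(shape=15.0, scale=0.1, size=10_000)

def bayes_se(theta):
    """Bayes estimate under squared-error loss: the posterior mean."""
    return float(np.mean(theta))

def bayes_linex(theta, a):
    """Bayes estimate under LINEX loss: -(1/a) * ln E[exp(-a * theta)]."""
    theta = np.asarray(theta)
    return float(-np.log(np.mean(np.exp(-a * theta))) / a)

# a > 0 penalizes overestimation (estimate falls below the posterior mean),
# a < 0 penalizes underestimation (estimate rises above it).
print(bayes_linex(draws, -2.0), bayes_se(draws), bayes_linex(draws, 2.0))
```

By Jensen's inequality the three estimates are always ordered: LINEX with a = 2 lies below the posterior mean, and LINEX with a = −2 lies above it, which matches the asymmetric-loss interpretation.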
Table 6. Breakdown times at voltages of 32 and 34 kV.

| Dataset | Breakdown times |
|---|---|
| Breakdown at 32 kV | 0.27, 0.40, 0.69, 0.79, 2.75, 3.91, 9.88, 13.95, 15.93, 27.80, 53.24, 82.85, 89.29, 100.60, 215.10 |
| Breakdown at 34 kV | 0.19, 0.78, 0.96, 1.31, 2.78, 3.16, 4.15, 4.67, 4.85, 6.50, 7.35, 8.01, 8.27, 12.06, 31.75, 32.53, 33.91, 36.71, 72.89 |
Table 7. Breaking strength of jute fiber (JFGL) at gauge lengths of 10 mm and 20 mm.

| Gauge length | Fiber strength |
|---|---|
| 10 mm | 43.93, 50.16, 101.15, 108.94, 123.06, 141.38, 151.48, 163.40, 177.25, 183.16, 212.13, 257.44, 262.90, 291.27, 303.90, 323.83, 353.24, 376.42, 383.43, 422.11, 506.60, 530.55, 590.48, 637.66, 671.49, 693.73, 700.74, 704.66, 727.23, 778.17 |
| 20 mm | 36.75, 45.58, 48.01, 71.46, 83.55, 99.72, 113.85, 116.99, 119.86, 145.96, 166.49, 187.13, 187.85, 200.16, 244.53, 284.64, 350.70, 375.81, 419.02, 456.60, 547.44, 578.62, 581.60, 585.57, 594.29, 662.66, 688.16, 707.36, 756.70, 765.14 |
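As a rough check of how the inverse Weibull fits such data, the sketch below computes complete-sample MLEs for the 32 kV breakdown times of Table 6 by a profile-likelihood grid search. The parameterization f(x) = αλx^(−α−1)e^(−λx^(−α)) and the grid bounds are assumptions; the paper's joint progressive censoring likelihood is more involved than this complete-sample version.

```python
import numpy as np

# Breakdown times at 32 kV from Table 6.
x = np.array([0.27, 0.40, 0.69, 0.79, 2.75, 3.91, 9.88, 13.95, 15.93,
              27.80, 53.24, 82.85, 89.29, 100.60, 215.10])
n = len(x)

# Profile log-likelihood: with lam fixed at its conditional MLE n / S(alpha),
# where S(alpha) = sum(x**(-alpha)), only alpha remains to optimize.
alphas = np.linspace(0.05, 3.0, 3000)
S = np.array([np.sum(x ** (-a)) for a in alphas])
ll = (n * np.log(alphas) + n * np.log(n / S)
      - (alphas + 1.0) * np.sum(np.log(x)) - n)

alpha_hat = alphas[int(np.argmax(ll))]
lam_hat = n / np.sum(x ** (-alpha_hat))
print(f"alpha_hat = {alpha_hat:.4f}, lambda_hat = {lam_hat:.4f}")
```

The grid maximizer lands well inside the search interval, so the two-parameter fit is stable for this dataset; a finer grid or a Newton step could refine it further.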