Article

Inference for Two-Parameter Birnbaum–Saunders Distribution Based on Type-II Censored Data with Application to the Fatigue Life of Aluminum Coupon Cuts

1 Faculty of Engineering Technology, Al-Balqa Applied University, Amman 11134, Jordan
2 Faculty of Medicine, Memorial University of Newfoundland, St. John’s, NL A1C 5S7, Canada
3 Department of Mathematics and Statistics, McMaster University, Hamilton, ON L8S 4L8, Canada
Mathematics 2025, 13(4), 590; https://doi.org/10.3390/math13040590
Submission received: 7 January 2025 / Revised: 5 February 2025 / Accepted: 8 February 2025 / Published: 11 February 2025

Abstract
This study addresses parameter estimation and prediction for type-II censored data from the two-parameter Birnbaum–Saunders (BS) distribution. The BS distribution is commonly used in reliability analysis, particularly in modeling fatigue life. Accurate estimation and prediction are crucial in many fields where censored data frequently appear, such as material science, medical studies and industrial applications. This paper presents both frequentist and Bayesian approaches to estimating the shape and scale parameters of the BS distribution, along with the prediction of unobserved failure times. Random data are generated from the BS distribution under type-II censoring, where a pre-specified number of failures (m) is observed. The generated data are used to compute maximum likelihood estimates (MLEs) and Bayesian estimates and to evaluate their performance. The Bayesian method employs Markov chain Monte Carlo (MCMC) sampling for point predictions and credible intervals. We apply the methods both to datasets generated under type-II censoring and to real-world data on the fatigue life of 6061-T6 aluminum coupons. Although the two methods yield similar parameter estimates, the Bayesian approach offers more flexible and reliable prediction intervals. Extensive R code illustrates the practical application of these methods. Our findings confirm the advantages of Bayesian inference in handling censored data, especially when prior information is available. This work not only supports the theoretical understanding of the BS distribution under type-II censoring but also provides practical tools for analyzing real data in reliability and survival studies. Future research will extend these methods to the multi-sample progressive censoring model with larger datasets and to the integration of degradation models commonly encountered in industrial applications.

1. Introduction

The Birnbaum–Saunders (BS) distribution is built from a standard normal random variable through a monotone transformation and is also known as the fatigue life distribution. Birnbaum and Saunders [1] introduced it to model the fatigue life of metals subject to periodic stress. The BS distribution has applications in many fields of science; Birnbaum and Saunders [2] provided a physical justification of the model based on fatigue failure under cyclic loading, and Desmond [3] later gave a more general derivation based on a biological model, relaxing some of the original assumptions and thereby strengthening the physical justification for the use of this distribution. The hazard rate of the BS distribution is not monotone: it increases up to a point and then decreases, a behavior observed in many real-life situations. For example, in a study of recovery from breast cancer, Langlands et al. [4] observed that mortality among breast cancer patients peaks about three years after diagnosis and then declines slowly over a fixed interval of time.
For more details, one may refer to the comprehensive review paper by Balakrishnan and Kundu [5], which covers further applications of the BS distribution, such as Swedish third-party motor insurance data for 1977 in one of several geographical zones, the survival life in hours of ball bearings of a certain type and the bone mineral density (BMD), measured in gm/cm², of newborn babies. The BS model has received considerable attention in recent years for several reasons: its probability density function can take on different shapes, it has a non-monotonic hazard function and it admits a nice physical justification. Furthermore, many generalizations of the BS distribution have been presented in the literature, and abundant applications in many different research areas have been found.
The cumulative distribution function (CDF) and probability density function (PDF) of the Birnbaum–Saunders model with parameters $\alpha, \beta > 0$ are, respectively,
$$F_T(t;\alpha,\beta)=\Phi\!\left[\frac{1}{\alpha}\left\{\left(\frac{t}{\beta}\right)^{1/2}-\left(\frac{\beta}{t}\right)^{1/2}\right\}\right],\qquad 0<t<\infty,\ \alpha,\beta>0, \tag{1}$$
and
$$f(t;\alpha,\beta)=\frac{1}{2\sqrt{2\pi}\,\alpha\beta}\left[\left(\frac{\beta}{t}\right)^{1/2}+\left(\frac{\beta}{t}\right)^{3/2}\right]\exp\!\left\{-\frac{1}{2\alpha^{2}}\left(\frac{t}{\beta}+\frac{\beta}{t}-2\right)\right\},\qquad t>0,\ \alpha>0,\ \beta>0, \tag{2}$$
where $\alpha$ and $\beta$ are the shape and scale parameters, respectively, and $\Phi(\cdot)$ is the standard normal CDF. From now on, we adopt the notation $BS(\alpha,\beta)$ to denote the BS distribution with parameters $\alpha$ and $\beta$. Many authors in the literature have studied different aspects of the BS distribution; among them, Birnbaum and Saunders [2] studied the maximum likelihood (ML) estimators of the shape and scale parameters of the BS model, and their asymptotic distributions were obtained by Engelhardt et al. [6]. Balakrishnan and Zhu [7] established the existence and uniqueness of the ML estimators. Ng et al. [8] discussed the ML estimation of the $\alpha$ and $\beta$ parameters based on type-II right-censored samples. Bayesian inference for the scale parameter ($\beta$) was considered by Padgett [9] with the use of a noninformative prior, under the assumption that the shape parameter is known. The same problem with both parameters unknown was considered by Achcar [10], and Achcar and Moala [11] addressed the case of censored data and covariates. Bayesian estimates and associated credible intervals for the parameters of the BS distribution under a general class of priors were handled by Wang et al. [12]. More recent studies have further expanded the use of the BS distribution in various reliability contexts. Liu et al. [13] introduced a random-effect extension of the BS distribution to refine its modeling flexibility in reliability assessment. Sawlan et al. [14] applied the BS distribution to model metallic fatigue data, presenting its advantage in predicting fatigue life. Razmkhah et al. [15] developed a neutrosophic version of the BS distribution to account for uncertainty in data, with applications to industrial and environmental data analysis. Park and Wang [16] proposed a goodness-of-fit test for the BS distribution, improving fit evaluation through a new probability plot-based approach. Finally, Dorea et al. [17] presented a generalized class of BS-type distributions used to model fatigue-life data and address cases where crack damage follows heavy-tailed distributions.
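The CDF in (1) and the underlying normal transformation can be illustrated numerically. The following is a minimal Python sketch (the paper's own code is in R; the function names and parameter values here are our own illustration), exploiting the fact that if $Z=\xi(T/\beta)/\alpha$ is standard normal, then $T=\beta\left(\alpha Z/2+\sqrt{(\alpha Z/2)^{2}+1}\right)^{2}$:

```python
import numpy as np
from scipy.stats import norm

def xi(t):
    """xi(t) = t^{1/2} - t^{-1/2}; the BS CDF is Phi(xi(t/beta)/alpha)."""
    return np.sqrt(t) - 1.0 / np.sqrt(t)

def bs_cdf(t, alpha, beta):
    return norm.cdf(xi(t / beta) / alpha)

def bs_rvs(alpha, beta, size, rng):
    """Invert Z = xi(T/beta)/alpha: T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2."""
    w = alpha * rng.standard_normal(size) / 2.0
    return beta * (w + np.sqrt(w**2 + 1.0)) ** 2

rng = np.random.default_rng(0)
sample = bs_rvs(alpha=0.5, beta=2.0, size=100_000, rng=rng)

median_check = bs_cdf(2.0, 0.5, 2.0)  # beta is the median: xi(1) = 0, Phi(0) = 1/2
mean_check = sample.mean()            # E[T] = beta * (1 + alpha^2 / 2) = 2.25 here
```

The check on the median follows from $\xi(1)=0$, so $F_T(\beta)=\Phi(0)=1/2$ for every $\alpha$.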
Different types of censoring schemes (CSs) are adopted in the literature, the most common being type-I and type-II CSs. Data are usually censored in reliability and life-testing analysis. In type-II censoring, it is assumed that n items are tested, an integer m < n is pre-fixed and the experiment stops as soon as the m-th failure is observed. The problem of predicting unseen or removed observations based on the early observed values in the current sample has received considerable interest in numerous reliability and life-testing settings; one may refer to Kaminsky and Nelson [18] for more details. The Bayesian prediction of an unseen observation from a future sample based on a currently observed sample, known as the informative sample, is one of the most important aspects of Bayesian analysis. Prediction can be used effectively in the medical field to anticipate the future prognosis of patients or future side effects in patients treated with a medication. For more information about the importance of the prediction problem and its different applications, one may refer to Al-Hussaini [19], Kundu and Raqab [20] and Bdair et al. [21]. The phenomenon of observing censored data is of natural interest in survival, reliability and medical studies for a wide range of reasons (see, for example, Balakrishnan and Cohen [22]). Several applications of prediction problems can be found in meteorology, hydrology, industrial stress testing, athletic events and the medical field.
Our goals in this paper are two-fold. First, we employ importance sampling to compute the Bayesian estimates of $\alpha$ and $\beta$ from a BS type-II censored sample. We use extensive computer simulations to compare the performance of the Bayesian estimators with that of the maximum likelihood estimators (MLEs). The symmetric credible intervals (CRIs) are also computed and compared to the confidence intervals (CIs) based on asymptotic and bootstrap (Boot-t) arguments. The second goal is the prediction of the life lengths of the censored values. In this work, we implement the importance sampling and Metropolis–Hastings (M–H) algorithms to estimate the posterior predictive density of unseen units based on the current informative data, and we also construct prediction intervals (PIs) for these unseen units.
In many real-world applications, such as material testing and clinical trials, data are often incomplete due to censoring or accidental omission. Type-II censoring, in which a fixed number of failures is observed, is particularly common in reliability studies. Accurate estimation of failure parameters and prediction of unobserved data are crucial for understanding material behavior, ensuring the safety of industrial components and making informed decisions in medical research. This study presents a comprehensive comparison of frequentist and Bayesian approaches for estimating the parameters from type-II censored BS data. It introduces a fully Bayesian approach that uses Markov chain Monte Carlo (MCMC) sampling to generate point estimates and credible intervals, providing a more flexible way to quantify uncertainty. The calculation of point predictions and credible intervals via this Bayesian approach effectively reflects the uncertainty associated with the predictions, providing a more complete picture of the possible range of outcomes than classical statistical methods. Additionally, this study applies these methods to both simulated and real-world datasets, illustrating their effectiveness in practical reliability analysis.
The practical applications of this work cover many fields. Understanding the fatigue life of metals and other materials is important for validating the reliability of components used in auto manufacturing and structural engineering, which are typical applications in materials science. The methods developed in this paper provide rigorous procedures for estimating the fatigue life of materials from censored data, enabling engineers to predict failure times accurately and, in turn, to make better decisions. Another area of application is medical research; evaluating treatment efficacy and patient prognosis is essential and can be achieved effectively by predicting patient survival times from censored clinical trial data. The Bayesian approach discussed in this work provides a useful way of handling incomplete data and improving the accuracy of predictions. The primary objectives of this study are to explain the advantages of Bayesian inference and to provide practical guidelines for applying these methods in real-world reliability and survival studies.
Future research will extend the methods developed in this work to study multi-sample progressive censoring. This extension is particularly important in cases where multiple groups or samples are affected by different censoring scenarios, which usually occurs in industrial and clinical trials. By applying the methods to larger datasets, our goal will be to refine the robustness of the proposed approaches and to ensure the application of these methods to a wider range of real-world problems. Additionally, we plan to integrate degradation models, which are commonly used in industrial applications to monitor the deterioration of products over time. These models offer a deeper investigation of reliability, especially of products that deteriorate over time under stress. This approach can also provide more accurate predictions of failure times compared to classical lifetime models. With this combination of the strengths of type-II censoring, Bayesian inference and degradation modeling, we hope to offer more powerful tools for reliability and survival analysis.
The remaining sections of the paper are organized as follows. In Section 2, we describe the determination of the MLEs of the scale and shape parameters; asymptotic and Boot-t methods of constructing CIs for the parameters are also studied. In Section 3, we describe the Bayesian estimation of the $\alpha$ and $\beta$ model parameters using importance sampling. In Section 4, Gibbs and Metropolis sampling are employed to derive sample-based estimates of the predictive density functions of the parameters, as well as of the times to failure of the surviving units $U_{s:n-m}$ $(s=1,2,\ldots,n-m)$, based on the observed sample $(t_1,t_2,\ldots,t_m)$. In Section 5, we present a data analysis and a Monte Carlo simulation for numerical comparison purposes.

2. Maximum Likelihood Method

Let $T_1,\ldots,T_n$ be independent and identically distributed (iid) lifetimes, with the PDF in (2), placed on a life test. The integer m < n is pre-fixed, and the experiment stops as soon as the m-th failure is observed. The first m failure times $t_1<\cdots<t_m$ form an ordered type-II right-censored random sample, the largest $n-m$ lifetimes having been censored. The likelihood function based on a type-II censored sample is given by (Balakrishnan and Cohen [22])
$$L\propto\left[1-\Phi\!\left(\frac{1}{\alpha}\,\xi\!\left(\frac{t_m}{\beta}\right)\right)\right]^{n-m}\times\frac{1}{\alpha^{m}\beta^{m}}\prod_{i=1}^{m}\xi'\!\left(\frac{t_i}{\beta}\right)\exp\!\left\{-\frac{1}{2\alpha^{2}}\sum_{i=1}^{m}\xi^{2}\!\left(\frac{t_i}{\beta}\right)\right\}, \tag{3}$$
and the log-likelihood function of (3) is
$$\ln L=\text{const}+(n-m)\ln\left[1-\Phi\!\left(\frac{1}{\alpha}\,\xi\!\left(\frac{t_m}{\beta}\right)\right)\right]-m\ln\alpha-m\ln\beta+\sum_{i=1}^{m}\ln\xi'\!\left(\frac{t_i}{\beta}\right)-\frac{1}{2\alpha^{2}}\sum_{i=1}^{m}\xi^{2}\!\left(\frac{t_i}{\beta}\right), \tag{4}$$
where
$$\xi(t)=t^{1/2}-t^{-1/2},\qquad \xi^{2}(t)=t+\frac{1}{t}-2,\qquad \xi'(t)=\frac{1}{2t}\left(t^{1/2}+t^{-1/2}\right).$$
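The three $\xi$-functions above reappear throughout the paper, so it is worth checking them numerically. A small Python sketch (function names are ours) verifies that $\xi^{2}$ is indeed the square of $\xi$ and that $\xi'$ is its derivative:

```python
import numpy as np

def xi(t):        # xi(t) = t^{1/2} - t^{-1/2}
    return np.sqrt(t) - 1 / np.sqrt(t)

def xi2(t):       # xi^2(t) = t + 1/t - 2
    return t + 1 / t - 2

def xi_prime(t):  # xi'(t) = (t^{1/2} + t^{-1/2}) / (2t)
    return (np.sqrt(t) + 1 / np.sqrt(t)) / (2 * t)

t = np.linspace(0.2, 5.0, 50)
h = 1e-6
square_ok = np.allclose(xi(t) ** 2, xi2(t))                              # algebraic identity
deriv_ok = np.allclose((xi(t + h) - xi(t - h)) / (2 * h), xi_prime(t))   # central difference
```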
Based on Ng et al. [8], we consider
$$\alpha^{2}=\frac{h_{2}(\beta)\,h_{3}(\beta)-h_{1}(\beta)\,h_{4}(\beta)}{h_{1}(\beta)\,h_{3}(\beta)}\ \left(=\varphi^{2}(\beta),\ \text{say}\right), \tag{5}$$
where
$$h_{1}(\beta)=\xi\!\left(\frac{t_m}{\beta}\right),\qquad h_{2}(\beta)=\frac{1}{m}\sum_{i=1}^{m}\xi^{2}\!\left(\frac{t_i}{\beta}\right),$$
$$h_{3}(\beta)=\left[1+\sum_{i=1}^{m}\frac{t_i}{\beta}\,\xi'\!\left(\frac{t_i}{\beta}\right)\xi\!\left(\frac{t_i}{\beta}\right)\right]^{-1}\frac{t_m}{\beta}\,\xi'\!\left(\frac{t_m}{\beta}\right),$$
$$h_{4}(\beta)=\left[1+\sum_{i=1}^{m}\frac{t_i}{\beta}\,\xi'\!\left(\frac{t_i}{\beta}\right)\xi\!\left(\frac{t_i}{\beta}\right)\right]^{-1}\frac{1}{m}\sum_{i=1}^{m}\frac{t_i}{\beta}\,\xi'\!\left(\frac{t_i}{\beta}\right)\xi\!\left(\frac{t_i}{\beta}\right).$$
Then, we consider
$$Q(\beta)=\varphi^{2}(\beta)-\frac{1}{2}\left[\frac{1}{K(\beta)}-u-2\right]+\frac{1}{2}\,v\,\varphi(\beta)-\frac{(n-m)}{m}\,H\!\left[\frac{1}{\varphi(\beta)}\,\xi\!\left(\frac{t_m}{\beta}\right)\right]\frac{t_m}{\beta}\,\xi'\!\left(\frac{t_m}{\beta}\right),$$
where
$$u=\frac{1}{m}\sum_{i=1}^{m}\frac{t_i}{\beta},\qquad v=\left[\frac{1}{m}\sum_{i=1}^{m}\left(\frac{t_i}{\beta}\right)^{-1}\right]^{-1},$$
$$K(\beta)=\left[\frac{1}{m}\sum_{i=1}^{m}\left(1+\frac{t_i}{\beta}\right)^{-1}\right]^{-1},\qquad K'(\beta)=K^{2}(\beta)\,\frac{1}{m}\sum_{i=1}^{m}\left(1+\frac{t_i}{\beta}\right)^{-2},$$
and $H(x)=\phi(x)/\left[1-\Phi(x)\right]$ denotes the hazard function of the standard normal distribution, with $\phi(\cdot)$ its PDF.
The ML estimate of $\beta$ is the solution of $Q(\beta)=0$. Since this equation is nonlinear, a numerical procedure is needed to solve it for $\beta$. Once the ML estimate $\hat{\beta}$ of $\beta$ is obtained, the positive square root of the right-hand side of (5) gives the ML estimate $\hat{\alpha}$ of $\alpha$.
To construct confidence intervals for $\alpha$ and $\beta$, we need the observed Fisher information matrix $J(\hat{\delta}_M)$, the matrix of negative second partial derivatives of the log-likelihood function in (4). It is well known that the approximation $\hat{\delta}_M\sim N_2\!\left(\delta,\ J^{-1}(\hat{\delta}_M)\right)$ applies, where $\delta=(\alpha,\beta)$ and $\hat{\delta}_M=(\hat{\alpha}_M,\hat{\beta}_M)$. Therefore, approximate $100(1-\gamma)\%$ CIs for $\alpha$ and $\beta$ are $\left(\hat{\alpha}_M-z_{\gamma/2}\sqrt{\upsilon_{11}},\ \hat{\alpha}_M+z_{\gamma/2}\sqrt{\upsilon_{11}}\right)$ and $\left(\hat{\beta}_M-z_{\gamma/2}\sqrt{\upsilon_{22}},\ \hat{\beta}_M+z_{\gamma/2}\sqrt{\upsilon_{22}}\right)$, respectively, where $\upsilon_{11}$ and $\upsilon_{22}$ are the elements of the main diagonal of $J^{-1}(\hat{\delta}_M)$ and $z_\gamma$ is the upper $100\gamma$-th percentile of the standard Gaussian distribution. For small sample sizes, the CI based on this asymptotic result usually does not perform well; the Boot-t method is then an effective alternative (one may refer to, for example, Ahmed [23]). The method is summarized as follows:
  • Step 1: Use ML estimation to estimate α and β from the observed informative sample (denoted by $\hat{\alpha}_M$ and $\hat{\beta}_M$);
  • Step 2: Generate a bootstrap sample using $\hat{\alpha}_M$ and $\hat{\beta}_M$ obtained in Step 1, and take its first m observed censored units $(BU_{(1)}, BU_{(2)}, \ldots, BU_{(m)})$ under the BS model. Next, compute the corresponding bootstrap MLEs $\hat{\alpha}^{*}_M$ and $\hat{\beta}^{*}_M$ of α and β and the elements $\upsilon^{*}_{11}, \upsilon^{*}_{22}$ of the main diagonal of $J^{-1}(\hat{\alpha}^{*}_M,\hat{\beta}^{*}_M)$;
  • Step 3: Define the Boot-t statistics
$$Q_1=\frac{\hat{\alpha}^{*}_M-\hat{\alpha}_M}{\sqrt{\upsilon^{*}_{11}}}\quad\text{and}\quad Q_2=\frac{\hat{\beta}^{*}_M-\hat{\beta}_M}{\sqrt{\upsilon^{*}_{22}}},$$
    based on the bootstrap sample generated in Step 2;
  • Step 4: Repeat Steps 2 and 3 to generate M = 1000 bootstrap versions of $Q_1$ and $Q_2$; then obtain their $100(\gamma/2)$-th and $100(1-\gamma/2)$-th sample quantiles $\hat{q}_{k,\gamma/2}$ and $\hat{q}_{k,1-\gamma/2}$ $(k=1,2)$;
  • Step 5: Compute the approximate $100(1-\gamma)\%$ CIs for α and β as
$$\left(\hat{\alpha}_M-\hat{q}_{1,1-\gamma/2}\sqrt{\upsilon_{11}},\ \hat{\alpha}_M-\hat{q}_{1,\gamma/2}\sqrt{\upsilon_{11}}\right)\quad\text{and}\quad\left(\hat{\beta}_M-\hat{q}_{2,1-\gamma/2}\sqrt{\upsilon_{22}},\ \hat{\beta}_M-\hat{q}_{2,\gamma/2}\sqrt{\upsilon_{22}}\right).$$
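The bootstrap above presupposes the censored MLEs themselves. As an alternative to solving the profile equation numerically, the censored log-likelihood (4) can be maximized directly. A minimal Python sketch (the paper's implementation is in R; the simulated data, seed and function names here are our own illustration):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def xi(t):
    return np.sqrt(t) - 1 / np.sqrt(t)

def censored_loglik(params, t_obs, n):
    """Type-II censored BS log-likelihood: m observed failures, n-m censored at t_(m)."""
    alpha, beta = np.exp(params)       # log-parametrization keeps alpha, beta > 0
    m = len(t_obs)
    tb = t_obs / beta
    z = xi(tb) / alpha
    # log f(t_i) = log phi(z_i) + log xi'(t_i/beta) - log(alpha * beta)
    log_f = norm.logpdf(z) + np.log((np.sqrt(tb) + 1 / np.sqrt(tb)) / (2 * tb)) \
            - np.log(alpha * beta)
    log_surv = norm.logsf(xi(t_obs[-1] / beta) / alpha)  # log[1 - Phi(.)] at t_(m)
    return log_f.sum() + (n - m) * log_surv

# Simulated example: n = 300 units from BS(0.5, 2.0), first m = 240 failures observed
rng = np.random.default_rng(1)
w = 0.5 * rng.standard_normal(300) / 2
t = np.sort(2.0 * (w + np.sqrt(w**2 + 1)) ** 2)
t_obs, n = t[:240], 300

res = minimize(lambda p: -censored_loglik(p, t_obs, n),
               x0=np.log([0.3, 1.0]), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)   # should land near the true (0.5, 2.0)
```

A Boot-t interval then only requires repeating this fit on samples regenerated from $(\hat\alpha,\hat\beta)$.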

3. Bayesian Method

Here, based on a type-II censored sample from the two-parameter BS distribution, we obtain the posterior densities of the $\alpha$ and $\beta$ parameters and present the corresponding Bayesian estimators. To develop the Bayesian estimates, we first adopt independent priors for $\alpha$ and $\beta$ based on the approximation presented by Tsay et al. [24], who stated that the error function $\mathrm{erf}(\cdot)$ can be written in terms of an exponential function as
$$\mathrm{erf}(z)=1-\exp\left\{-c_1 z-c_2 z^{2}\right\}, \tag{6}$$
where $c_1=1.0950$ and $c_2=0.7565$. It is well known that the standard normal CDF can be written in terms of $\mathrm{erf}(\cdot)$ as
$$\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^{2}/2}\,dt=\frac{1}{2}\left[1+\mathrm{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right]. \tag{7}$$
Using (6) and (7), we conclude that
$$1-\Phi\!\left(\frac{1}{\alpha}\,\xi\!\left(\frac{t_i}{\beta}\right)\right)\approx\frac{1}{2}\exp\!\left\{-\frac{c_1}{\sqrt{2}\,\alpha}\,\xi\!\left(\frac{t_i}{\beta}\right)-\frac{c_2}{2\alpha^{2}}\,\xi^{2}\!\left(\frac{t_i}{\beta}\right)\right\}=\frac{1}{2}\exp\!\left\{-\frac{c_1}{\sqrt{2}\,\alpha}\,\xi\!\left(\frac{t_i}{\beta}\right)\right\}\exp\!\left\{-\frac{c_2}{2\alpha^{2}}\,\xi^{2}\!\left(\frac{t_i}{\beta}\right)\right\}. \tag{8}$$
According to (8), the likelihood function in (3) can be approximated by
$$L\propto\frac{1}{\alpha^{m}\beta^{m}}\prod_{i=1}^{m}\xi'\!\left(\frac{t_i}{\beta}\right)\exp\!\left\{-\frac{1}{2\alpha^{2}}\sum_{i=1}^{m}\xi^{2}\!\left(\frac{t_i}{\beta}\right)\right\}\times\exp\!\left\{-\frac{c_1(n-m)}{\sqrt{2}\,\alpha}\,\xi\!\left(\frac{t_m}{\beta}\right)\right\}\exp\!\left\{-\frac{c_2(n-m)}{2\alpha^{2}}\,\xi^{2}\!\left(\frac{t_m}{\beta}\right)\right\}$$
$$\propto\frac{1}{\alpha^{m}\beta^{m}}\prod_{i=1}^{m}\xi'\!\left(\frac{t_i}{\beta}\right)\exp\!\left\{-\frac{1}{2\alpha^{2}}\left[\sum_{i=1}^{m}\xi^{2}\!\left(\frac{t_i}{\beta}\right)+c_2(n-m)\,\xi^{2}\!\left(\frac{t_m}{\beta}\right)\right]\right\}\times\exp\!\left\{-\frac{c_1(n-m)}{\sqrt{2}\,\alpha}\,\xi\!\left(\frac{t_m}{\beta}\right)\right\}. \tag{9}$$
With non-informative priors, the posteriors are improper, and no continuous conjugate priors exist, so we use proper priors with known hyperparameters to ensure the propriety of the posteriors. As stated by Wang et al. [12], we assume that $\beta$ has an inverse gamma ($IG$) distribution with parameters $(a_1,b_1)$, denoted by $IG(\beta\mid a_1,b_1)$, and that $\lambda=2\alpha^{2}$ also has an inverse gamma distribution with parameters $(a_2,b_2)$, denoted by $IG(\lambda\mid a_2,b_2)$, with the following densities:
$$\nu\!\left(\beta\mid a_1,b_1\right)=\frac{b_1^{a_1}}{\Gamma(a_1)}\,\beta^{-a_1-1}e^{-b_1/\beta},\qquad \nu\!\left(\lambda\mid a_2,b_2\right)=\frac{b_2^{a_2}}{\Gamma(a_2)}\,\lambda^{-a_2-1}e^{-b_2/\lambda}, \tag{10}$$
where a 1 , b 1 , a 2 , b 2 are positive real constants that mirror prior knowledge about the β and λ parameters. By combining (9) and (10), the joint posterior density of β and λ can be written as
$$p(\lambda,\beta\mid T)\propto L(\lambda,\beta;T)\,\nu(\beta\mid a_1,b_1)\,\nu(\lambda\mid a_2,b_2)$$
$$\propto\nu\!\left(\beta\mid a_1+m,\ b_1\right)\nu\!\left(\lambda\ \Big|\ a_2+\frac{m}{2},\ \sum_{i=1}^{m}\xi^{2}\!\left(\frac{t_i}{\beta}\right)+c_2(n-m)\,\xi^{2}\!\left(\frac{t_m}{\beta}\right)+b_2\right)$$
$$\times\ \delta_m(\beta)\,\tau_m(\lambda,\beta)\left[\sum_{i=1}^{m}\xi^{2}\!\left(\frac{t_i}{\beta}\right)+c_2(n-m)\,\xi^{2}\!\left(\frac{t_m}{\beta}\right)+b_2\right]^{-\left(a_2+\frac{m}{2}\right)},$$
where
$$\delta_m(\beta)=\prod_{i=1}^{m}\xi'\!\left(\frac{t_i}{\beta}\right),$$
and
$$\tau_m(\lambda,\beta)=\exp\!\left\{-\frac{c_1(n-m)}{\sqrt{\lambda}}\,\xi\!\left(\frac{t_m}{\beta}\right)\right\}.$$
Based on the observed type-II censored sample, the marginal density of β is given by
$$\pi(\beta\mid T)\propto\nu\!\left(\beta\mid a_1+m,\ b_1\right)\omega(\beta),$$
where
$$\omega(\beta)=\left[\sum_{i=1}^{m}\xi^{2}\!\left(\frac{t_i}{\beta}\right)+c_2(n-m)\,\xi^{2}\!\left(\frac{t_m}{\beta}\right)+b_2\right]^{-\left(a_2+\frac{m}{2}\right)}\delta_m(\beta)\,E_\lambda\!\left[\tau_m(\lambda,\beta)\right],$$
with $E_\lambda$ denoting the expectation with respect to $IG\!\left(a_2+\frac{m}{2},\ \sum_{i=1}^{m}\xi^{2}(t_i/\beta)+c_2(n-m)\,\xi^{2}(t_m/\beta)+b_2\right)$. The conditional density of $\lambda$, given $\beta$ and the data, is
$$\nu(\lambda\mid\beta,T)\propto\nu\!\left(\lambda\ \Big|\ a_2+\frac{m}{2},\ \sum_{i=1}^{m}\xi^{2}\!\left(\frac{t_i}{\beta}\right)+c_2(n-m)\,\xi^{2}\!\left(\frac{t_m}{\beta}\right)+b_2\right)\tau_m(\lambda,\beta). \tag{11}$$
Now, we proceed to obtain the Bayesian estimators (BEs) of $\alpha$ and $\beta$ under the squared error loss (SEL) function. The BE of any function of $\alpha$ and $\beta$ (e.g., $\theta=\eta(\alpha,\beta)$) under the SEL function is given by
$$\hat{\theta}_{SE}=\frac{\int_{0}^{\infty}\!\int_{0}^{\infty}\eta(\alpha,\beta)\,g_1(\alpha\mid x,\beta)\,g_2(\beta\mid x)\,h(\beta\mid x)\,d\alpha\,d\beta}{\int_{0}^{\infty}\!\int_{0}^{\infty}g_1(\alpha\mid x,\beta)\,g_2(\beta\mid x)\,h(\beta\mid x)\,d\alpha\,d\beta}. \tag{12}$$
The BEs for α and β cannot be obtained in explicit form because (12) cannot be evaluated analytically. For this reason, we use an importance sampling technique, as suggested by Chen and Shao [25], to approximate (12) and to construct the corresponding credible intervals. Based on the observed type-II censored sample, the Bayes estimator of β is written as
$$\hat{\beta}_{B}=E(\beta\mid T)=\frac{E_\beta\!\left[\beta\,\omega(\beta)\right]}{E_\beta\!\left[\omega(\beta)\right]}, \tag{13}$$
where $E_\beta$ denotes the expectation with respect to $IG(a_1+m,\ b_1)$. Based on (11), we also have
$$E(\lambda\mid\beta,T)=\frac{E_\lambda\!\left[\lambda\,\tau_m(\lambda,\beta)\right]}{E_\lambda\!\left[\tau_m(\lambda,\beta)\right]}.$$
As a result, the Bayesian estimate of λ can be written as
$$\hat{\lambda}_{B}=E(\lambda\mid T)=E_\beta\!\left[E_\lambda(\lambda\mid\beta,T)\right]=\frac{E_\beta\!\left[\omega(\beta)\,E(\lambda\mid\beta,T)\right]}{E_\beta\!\left[\omega(\beta)\right]}. \tag{14}$$
Equations (13) and (14) cannot be evaluated analytically, so to produce consistent sample-based estimators of $\lambda$ and $\beta$ and to construct the corresponding credible intervals, we employ an importance sampling technique to approximate them. The Bayesian point estimators of $\lambda$ and $\beta$ can be determined using the following algorithm:
  • Importance sampling algorithm
1. Generate M values of $\beta$ from $IG(a_1+m,\ b_1)$ (say, $\beta_i$, $i=1,2,\ldots,M$);
2. For each $\beta_i$ generated in Step 1, generate M values of $\lambda$ from $IG\!\left(a_2+\frac{m}{2},\ \sum_{i=1}^{m}\xi^{2}(t_i/\beta_i)+c_2(n-m)\,\xi^{2}(t_m/\beta_i)+b_2\right)$;
3. Compute $E_\lambda\!\left[\lambda\,\tau_m(\lambda,\beta_i)\right]$ and $E_\lambda\!\left[\tau_m(\lambda,\beta_i)\right]$ from the $\lambda$ values simulated in Step 2;
4. Compute $\omega(\beta_i)$ and $E(\lambda\mid\beta_i,T)$;
5. Average the numerators and denominators of (13) and (14) over the $\beta$ values simulated in Step 1.
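The five steps above can be sketched compactly. The following Python illustration (the paper uses R; the simulated data, seed and the informative hyperparameter values are our own assumptions chosen so that the importance distribution covers the posterior region) computes the weighted-average estimates of $\beta$ and $\lambda=2\alpha^{2}$:

```python
import numpy as np
from scipy.stats import invgamma

c1, c2 = 1.0950, 0.7565

def xi(t):   return np.sqrt(t) - 1 / np.sqrt(t)
def xi2(t):  return t + 1 / t - 2
def xi_p(t): return (np.sqrt(t) + 1 / np.sqrt(t)) / (2 * t)

def importance_estimates(t_obs, n, a1, b1, a2, b2, M=2000, rng=None):
    """Importance-sampling Bayes estimates of beta and lambda = 2*alpha^2 (Steps 1-5)."""
    rng = np.random.default_rng(0) if rng is None else rng
    m, tm = len(t_obs), t_obs[-1]
    betas = invgamma.rvs(a1 + m, scale=b1, size=M, random_state=rng)        # Step 1
    S = xi2(t_obs[:, None] / betas).sum(axis=0) + c2 * (n - m) * xi2(tm / betas) + b2
    lam = invgamma.rvs(a2 + m / 2, scale=S, size=(200, M), random_state=rng)  # Step 2
    tau = np.exp(-c1 * (n - m) * xi(tm / betas) / np.sqrt(lam))               # tau_m
    E_tau = tau.mean(axis=0)                                                  # Step 3
    E_lam_given_beta = (lam * tau).mean(axis=0) / E_tau
    delta = np.exp(np.log(xi_p(t_obs[:, None] / betas)).sum(axis=0))          # delta_m
    w = S ** (-(a2 + m / 2)) * delta * E_tau                                  # omega, Step 4
    beta_hat = np.sum(betas * w) / np.sum(w)                                  # Step 5, (13)
    lam_hat = np.sum(w * E_lam_given_beta) / np.sum(w)                        # Step 5, (14)
    return beta_hat, lam_hat

# Illustrative run: BS(0.5, 2.0) data, n = 25, m = 20
rng = np.random.default_rng(1)
z = 0.5 * rng.standard_normal(25) / 2
t = np.sort(2.0 * (z + np.sqrt(z**2 + 1)) ** 2)
beta_hat, lam_hat = importance_estimates(t[:20], n=25, a1=2, b1=44, a2=2, b2=2, rng=rng)
```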
Now, the two-sided Bayesian credible intervals, as well as the highest posterior density (HPD) intervals for $\alpha$ and $\beta$, can be obtained. Using the simulated values, $100(1-\gamma)\%$ Bayesian CIs for the $\alpha$ and $\beta$ parameters can be computed: a $100(1-\gamma)\%$ Bayesian CI for $\theta$ ($\theta=\alpha$ or $\beta$) is $\left(\theta^{[100\gamma/2]},\ \theta^{[100(1-\gamma/2)]}\right)$, where $\theta^{[100\gamma]}$ is the $100\gamma$-th percentile of the $\theta$ values simulated using the above algorithm, with $[x]$ being the integer part of x. Since the Bayesian credible CIs do not guarantee that the values of $\theta$ inside the interval have a higher posterior density than the values outside it, we also present the HPD CI for $\theta$. The HPD CI is the shortest-width CI: the posterior density of any point outside the interval is less than that of any point within the interval. We use the Monte Carlo importance sampling technique first presented by Chen and Shao [25] to obtain the HPD CI for any function of the involved parameters (e.g., $\varrho(\alpha,\beta)$). We start by arranging the simulated values of $\varrho(\alpha,\beta)$ in ascending order to get
$$\varrho_{(1)}\le\varrho_{(2)}\le\cdots\le\varrho_{(M)},$$
then compute the ratios as
$$\zeta_i=\frac{\omega(\beta_{(i)})}{\sum_{i=1}^{M}\omega(\beta_{(i)})},\qquad i=1,2,\ldots,M.$$
Taking into consideration that M is sufficiently large, the $100(1-\gamma)\%$ HPD interval for $\varrho$ is the shortest interval among the intervals $I_j$, $j=1,2,\ldots,M-[(1-\gamma)M]$, with
$$I_j=\left(\varrho^{(j/M)},\ \varrho^{\left((j+[(1-\gamma)M])/M\right)}\right),$$
where $\varrho^{(\gamma)}$ is the $100\gamma$-th percentile of $\varrho$, which is obtained as follows:
$$\varrho^{(\gamma)}=\begin{cases}\varrho_{(1)}, & \gamma=0,\\[2pt] \varrho_{(i)}, & \displaystyle\sum_{j=1}^{i-1}\zeta_j<\gamma\le\sum_{j=1}^{i}\zeta_j.\end{cases}$$
Then, the HPD CIs of α and β can be obtained accordingly.
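The shortest-interval construction described above can be implemented generically for (possibly weighted) posterior draws. A Python sketch (our own helper, not the paper's R code) slides a window over the sorted draws and keeps the narrowest interval carrying the required mass:

```python
import numpy as np

def hpd_interval(draws, weights=None, cred=0.95):
    """Shortest interval containing a `cred` fraction of (weighted) posterior draws."""
    order = np.argsort(draws)
    x = np.asarray(draws, dtype=float)[order]
    w = np.ones_like(x) if weights is None else np.asarray(weights, dtype=float)[order]
    w = w / w.sum()
    cw = np.cumsum(w)                 # cumulative weights, as in the zeta_i ratios
    lo_best, hi_best = x[0], x[-1]
    j = 0
    for i in range(len(x)):
        left_mass = cw[i - 1] if i > 0 else 0.0
        while j < len(x) and cw[j] - left_mass < cred:   # grow window to reach `cred`
            j += 1
        if j == len(x):
            break                      # no interval starting at i carries enough mass
        if x[j] - x[i] < hi_best - lo_best:
            lo_best, hi_best = x[i], x[j]                # keep the shortest so far
    return lo_best, hi_best

rng = np.random.default_rng(0)
draws = rng.standard_normal(50_000)
lo, hi = hpd_interval(draws, cred=0.95)
# For a symmetric unimodal target, the HPD interval matches the central one
# (about -1.96 to 1.96 for a standard normal), up to Monte Carlo noise.
```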

4. Bayesian Prediction Method

Here, we discuss the problem of predicting the unseen items in type-II censored data from the BS distribution. Precisely, our main interest is in the posterior density of the s-th-order statistic from a sample of $n-m$ unseen items. For this, we predict $U_{s:n-m}$ $(s=1,2,\ldots,n-m)$ based on the observed type-II censored sample $(t_1,t_2,\ldots,t_m)$. We start by presenting the posterior predictive density of $Y=U_{s:n-m}$, given the observed censored data, which has the form of
$$p(y\mid\text{data})=\int_{0}^{\infty}\!\int_{0}^{\infty}f_{Y\mid\text{data}}(y\mid\alpha,\beta)\,\pi(\alpha,\beta\mid\text{data})\,d\alpha\,d\beta,\qquad y>t_m, \tag{15}$$
where $f_{Y\mid\text{data}}(y\mid\alpha,\beta)$ is the conditional density function of Y, given $T=t$. According to the Markovian property of order statistics (Arnold et al. [26]), this is just the conditional density of Y given $t_m$, that is, the PDF of the s-th-order statistic out of a sample of size $n-m$ from F left-truncated at $t_m$. Precisely, it can be rewritten as $f_{Y\mid\text{data}}(y\mid\alpha,\beta)=f_{Y\mid T=t_m}(y\mid\alpha,\beta)$.
Let us consider the case when s = m + 1 , in which case
$$f_{t_{m+1}\mid t_m}(y\mid\alpha,\beta)=(n-m)\,f(y;\alpha,\beta)\,\frac{\left[1-F(y;\alpha,\beta)\right]^{n-m-1}}{\left[1-F(t_m;\alpha,\beta)\right]^{n-m}},\qquad y>t_m$$
$$=\frac{(n-m)}{2\sqrt{2\pi}\,\alpha\beta}\left[\left(\frac{\beta}{y}\right)^{1/2}+\left(\frac{\beta}{y}\right)^{3/2}\right]\exp\!\left\{-\frac{1}{2\alpha^{2}}\left(\frac{y}{\beta}+\frac{\beta}{y}-2\right)\right\}\frac{\left[1-\Phi\!\left(\frac{1}{\alpha}\,\xi\!\left(\frac{y}{\beta}\right)\right)\right]^{n-m-1}}{\left[1-\Phi\!\left(\frac{1}{\alpha}\,\xi\!\left(\frac{t_m}{\beta}\right)\right)\right]^{n-m}}. \tag{16}$$
The predictive density of t m + 1 at any point ( y > t m ) is then
$$f_{t_{m+1}\mid t_m}(y\mid\text{data})=\int_{0}^{\infty}\!\int_{0}^{\infty}f_{t_{m+1}\mid t_m}(y\mid\alpha,\beta)\,\pi(\alpha,\beta\mid\text{data})\,d\alpha\,d\beta,\qquad y>t_m. \tag{17}$$
Clearly, the Bayesian predictive estimate $E(Y\mid\text{data})$ cannot be evaluated directly from (15) and (17). Therefore, we suggest Monte Carlo (MC) simulation to create a sample from the predictive distribution. Under the squared error loss (SEL) function, the Bayesian predictor (BP) of $Y=t_s$ can be computed as
$$\hat{Y}=E_{\text{posterior}}(Y\mid\text{data})=\int_{t_m}^{\infty}y\,p(y\mid\text{data})\,dy.$$
Based on MC samples $\{(\alpha_j,\beta_j):\ j=1,\ldots,M\}$, the simulation-based estimator of $p(y\mid\text{data})$ can be computed as
$$\hat{p}(y\mid\text{data})=\frac{1}{M}\sum_{j=1}^{M}f\!\left(y\mid t_m;\alpha_j,\beta_j\right).$$
Using algebra, the sample-based predictor of Y can be simplified as follows:
$$\hat{Y}=\frac{1}{M}\sum_{j=1}^{M}\sum_{i=0}^{s-1}\sum_{l=0}^{n-m-s}(-1)^{i+l}\binom{s-1}{i}\binom{n-m-s}{l}\,\beta_j\,\frac{\Phi^{\,i}\!\left(\frac{1}{\alpha_j}\,\xi\!\left(\frac{t_m}{\beta_j}\right)\right)}{\left[1-\Phi\!\left(\frac{1}{\alpha_j}\,\xi\!\left(\frac{t_m}{\beta_j}\right)\right)\right]^{n-m}}\int_{\frac{1}{\alpha_j}\xi\left(t_m/\beta_j\right)}^{\infty}\left[\frac{\alpha_j u+\sqrt{(\alpha_j u)^{2}+4}}{2}\right]^{2}\left[\Phi(u)\right]^{s+l-1-i}e^{-u^{2}/2}\,du. \tag{18}$$
Furthermore, the approximate estimator given in (18) can be used to find a two-sided prediction interval for Y. The $100(1-\gamma)\%$ prediction interval for Y is $(L,U)$, where L and U are computed numerically from
$$P(Y>L\mid t)=\int_{L}^{\infty}\hat{p}(y\mid t)\,dy=1-\frac{\gamma}{2}\quad\text{and}\quad P(Y>U\mid t)=\int_{U}^{\infty}\hat{p}(y\mid t)\,dy=\frac{\gamma}{2}.$$
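Given posterior draws $(\alpha_j,\beta_j)$, one practical way to obtain a point predictor and a prediction interval for $t_{m+1}$ without evaluating (18) is to sample directly from the truncated predictive distribution by inverse-CDF. The following Python sketch is our own construction (not the paper's R code): $t_{m+1}$ is the minimum of $n-m$ lifetimes left-truncated at $t_m$, whose survival function is $[S(y)/S(t_m)]^{n-m}$, which can be inverted in closed form through the BS quantile function:

```python
import numpy as np
from scipy.stats import norm

def xi(t):
    return np.sqrt(t) - 1 / np.sqrt(t)

def bs_quantile(p, alpha, beta):
    """Invert F(t) = Phi(xi(t/beta)/alpha) via the quadratic in sqrt(t/beta)."""
    w = alpha * norm.ppf(p) / 2
    return beta * (w + np.sqrt(w**2 + 1)) ** 2

def predict_next_failure(alpha_draws, beta_draws, t_m, n_minus_m, rng, gamma=0.05):
    """Sample t_{m+1} for each posterior draw and summarize (point, PI)."""
    a, b = np.asarray(alpha_draws), np.asarray(beta_draws)
    F_tm = norm.cdf(xi(t_m / b) / a)              # F(t_m) under each posterior draw
    u = rng.uniform(size=a.shape)
    # Invert [S(y)/S(t_m)]^(n-m) = 1 - u  =>  F(y) = 1 - (1 - F_tm)(1 - u)^(1/(n-m))
    p = 1 - (1 - F_tm) * (1 - u) ** (1 / n_minus_m)
    y = bs_quantile(p, a, b)
    lo, hi = np.quantile(y, [gamma / 2, 1 - gamma / 2])
    return y.mean(), (lo, hi)

# Toy run with degenerate posterior draws at (0.5, 2.0); t_m and n-m are assumptions
rng = np.random.default_rng(0)
point, (lo, hi) = predict_next_failure(np.full(2000, 0.5), np.full(2000, 2.0),
                                       t_m=3.0, n_minus_m=5, rng=rng)
```

Every sampled value exceeds $t_m$ by construction, so the resulting interval automatically respects the truncation.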
Now, we discuss the use of the Gibbs sampler in estimating the posterior distribution. To estimate the posterior distribution using the Gibbs sampler, we need to generate samples from the full conditional distribution of each quantity involved. This is possible for $\alpha$ and $Y=(t_{m+1},t_{m+2},\ldots,t_n)$ but not for $\beta$. Consequently, we sample $\alpha$ and Y directly from their full conditional distributions and update $\beta$ via an M–H step within the Gibbs sampler, as explained by Tierney [27], with the normal distribution as the proposal distribution. Using the extended likelihood in (9) and the joint prior of $\beta$ and $\lambda$, the full Bayesian model can be written as
$$\pi(\lambda,\beta\mid T)\propto\phi_1(\lambda\mid\beta,T)\,\phi_2(\beta\mid T), \tag{19}$$
where $\phi_1(\lambda\mid\beta,T)$ is the PDF of the inverse gamma distribution $IG\!\left(a_2+\frac{m}{2},\ \sum_{i=1}^{m}\xi^{2}(t_i/\beta)+c_2(n-m)\,\xi^{2}(t_m/\beta)+b_2\right)$ and $\phi_2(\beta\mid T)$ is given by
$$\phi_2(\beta\mid T)\propto\nu\!\left(\beta\mid a_1+m,\ b_1\right)\delta_m(\beta)\,\tau_m(\lambda,\beta)\times\left[\sum_{i=1}^{m}\xi^{2}\!\left(\frac{t_i}{\beta}\right)+c_2(n-m)\,\xi^{2}\!\left(\frac{t_m}{\beta}\right)+b_2\right]^{-\left(a_2+\frac{m}{2}\right)}.$$
It can easily be seen that the full conditional distribution of $\lambda$, given $\beta$ and T, is a known distribution, while the full conditional distribution of $\beta$, given T, in (19) is not a well-known distribution; therefore, we cannot generate $\beta$ directly. Accordingly, to generate $\beta$ from its full conditional distribution, we use the M–H algorithm with a normal proposal distribution. One important consideration in this process is to keep the rejection rate across iterations as low as possible. Below, we present an algorithm that relies on the choice of the normal distribution as the proposal distribution and on the use of the M–H algorithm. This method can also be used to find the BEs and to construct the credible intervals for $\lambda$ and $\beta$. The Gibbs sampler is a Markov chain Monte Carlo (MCMC) technique that simulates a Markov chain based on the full conditional distributions. We use it to obtain an approximation of $\theta_{BE}$ and the corresponding credible intervals based on MCMC samples $\left((\lambda_l,\beta_l),\ l=1,2,\ldots,M\right)$ drawn from the joint posterior distribution. For this purpose, we present the M–H algorithm according to the following steps:
  • M–H algorithm for prediction
1. Start with the MLEs $(\hat{\alpha},\hat{\beta})$ as the initial values (e.g., $(\alpha^{(0)},\beta^{(0)})$);
2. Set J = 1;
3. Given $\beta^{(J-1)}$, generate $\beta$ from $\phi_2(\beta\mid T)$ in (19) with $N(\beta^{(J-1)},S_\beta^{2})$ as the proposal distribution, where $S_\beta^{2}$ is the variance of $\beta$, which can be taken to be the inverse of the Fisher information. The new value of $\beta$ is updated as follows:
   a. Generate $\zeta_J$ from $N(\beta^{(J-1)},S_\beta^{2})$ and u from $U(0,1)$;
   b. If $u<\min(1,\varepsilon)$, set $\beta^{(J)}=\zeta_J$; otherwise, return to (a), where
   $$\varepsilon=\frac{\pi(\zeta_J\mid T)}{\pi(\beta^{(J-1)}\mid T)};$$
4. Given $\beta$, generate $\lambda$ from $IG\!\left(a_2+\frac{m}{2},\ \sum_{i=1}^{m}\xi^{2}(t_i/\beta)+c_2(n-m)\,\xi^{2}(t_m/\beta)+b_2\right)$ and set $\alpha=\sqrt{\lambda/2}$;
5. Set J = J + 1;
6. Repeat Steps 3–5 M times.
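The $\beta$-update in Steps 3(a) and 3(b) is a random-walk M–H step with a normal proposal. The accept/reject mechanics are shown in the following generic Python sketch (a toy standard normal target stands in for $\phi_2(\beta\mid T)$; this is an illustration of the step, not the paper's R implementation, and it retains the current state on rejection, the standard variant of the step):

```python
import numpy as np

def metropolis_hastings(log_target, x0, s, n_iter, rng):
    """Random-walk M-H with N(current, s^2) proposal, as in Steps 3(a)-(b)."""
    chain = np.empty(n_iter)
    x, lt_x = x0, log_target(x0)
    accepted = 0
    for j in range(n_iter):
        prop = rng.normal(x, s)                    # Step (a): draw zeta_J
        lt_p = log_target(prop)
        if np.log(rng.uniform()) < lt_p - lt_x:    # Step (b): u < min(1, eps)
            x, lt_x = prop, lt_p
            accepted += 1
        chain[j] = x
    return chain, accepted / n_iter

# Toy target: standard normal (log-density up to an additive constant)
rng = np.random.default_rng(0)
chain, acc = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, s=2.4,
                                 n_iter=50_000, rng=rng)
burned = chain[5_000:]   # discard burn-in, as done in Section 5
```

With a well-tuned proposal variance (here $s=2.4$), the acceptance rate lands in a healthy range and the retained draws recover the target's mean and standard deviation.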

5. Data Analysis and Simulation

Here, a comprehensive simulation study is performed to assess the performance of the sample-based estimates and predictors presented in the previous sections, and we discuss the analysis of data from a type-II censored BS model. All calculations are carried out using the R programming software. The pseudocode for the Bayesian estimation and prediction, the core functions in R and the software versions are presented in Appendix A, Appendix B, Appendix C and Appendix D, respectively.

5.1. Real Data Analysis

In this subsection, we present the analysis of a real dataset to illustrate the application of the previously presented methods. The data represent the fatigue life of 6061-T6 aluminum coupons cut parallel to the direction of rolling and oscillated at 18 cycles per second with a maximum stress per cycle of 31,000 psi. These data were originally reported by Birnbaum and Saunders [2]. The dataset is as follows:
70    90    96    97    99    100  103  104  104  105  107  108  108  108  109
109  112  112  113  114  114  114  116  119  120  120  120  121  121  123
124  124  124  124  124  128  128  129  129  130  130  130  131  131  131
131  131  132  132  132  133  134  134  134  134  134  136  136  137  138
138  138  139  139  141  141  142  142  142  142  142  142  144  144  145
146  148  148  149  151  151  152  155  156  157  157  157  157  158  159
162  163  163  164  166  166  168  170  174  196  212
For illustrative purposes, we consider the following type-II censored samples with $(n=101, m=90)$ and $(n=101, m=70)$; i.e., only the first 90 and 70 observations, respectively, are assumed available. Before proceeding with further analysis, we first present some basic descriptive statistics for this dataset. The mean, the standard deviation and the coefficient of skewness are 133.73, 22.36 and 0.3355, respectively. Since the data are positively skewed, the BS model may be used to analyze this dataset, and we can check the fit numerically. The Newton–Raphson method is employed to evaluate the MLEs of the parameters of the BS model. The MLEs of the shape and scale parameters are $\hat{\alpha}=0.1704$ and $\hat{\beta}=131.8213$, respectively.
The Kolmogorov–Smirnov (K–S) and Cramér–von Mises (CvM) distances between the empirical and fitted distribution functions, together with the associated p-values, are K–S = 0.0849 (p = 0.7052) and CvM = 0.0856 (p = 0.6759), respectively. These results indicate that the two-parameter Birnbaum–Saunders model fits this dataset very well. The empirical and fitted distribution functions are shown in Figure 1.
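The fit described above can be reproduced numerically. The following is a minimal Python sketch (the paper's own computations use R); it relies on SciPy's fatiguelife distribution, which is the Birnbaum–Saunders family, maximizes the log-likelihood directly, and then computes the K–S distance. The optimizer and starting values are our illustrative choices, not the paper's.

```python
import numpy as np
from scipy import stats, optimize

# Fatigue life of 101 aluminum coupons (Birnbaum and Saunders [2])
data = np.array([
     70,  90,  96,  97,  99, 100, 103, 104, 104, 105, 107, 108, 108, 108, 109,
    109, 112, 112, 113, 114, 114, 114, 116, 119, 120, 120, 120, 121, 121, 123,
    124, 124, 124, 124, 124, 128, 128, 129, 129, 130, 130, 130, 131, 131, 131,
    131, 131, 132, 132, 132, 133, 134, 134, 134, 134, 134, 136, 136, 137, 138,
    138, 138, 139, 139, 141, 141, 142, 142, 142, 142, 142, 142, 144, 144, 145,
    146, 148, 148, 149, 151, 151, 152, 155, 156, 157, 157, 157, 157, 158, 159,
    162, 163, 163, 164, 166, 166, 168, 170, 174, 196, 212])

def negloglik(theta):
    # theta holds (log alpha, log beta) so the optimizer works unconstrained
    alpha, beta = np.exp(theta)
    return -np.sum(stats.fatiguelife.logpdf(data, c=alpha, scale=beta))

res = optimize.minimize(negloglik, x0=np.log([0.3, np.median(data)]),
                        method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)   # should land near 0.170 and 131.82

# K-S distance between the empirical and fitted distribution functions
ks = stats.kstest(data, "fatiguelife", args=(alpha_hat, 0, beta_hat))
```

The reported p-value of such a test is only approximate when the parameters are estimated from the same data, which is why the paper's quoted p-values should be read as indicative.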
Using the type-II censored data described above, we compute the MLEs and BEs of α and β . We start by generating 50,000 observations to calculate the BEs of α and β using the importance sampler, discarding the initial 5000 samples as burn-in. Since we do not have any prior knowledge about the data, we assume improper priors (specifically, a 1 = b 1 = a 2 = b 2 = 0 ) to compute the BEs and HPD CIs. The M–H algorithm is also employed to calculate the BEs of β . The Gaussian distribution is an adequate proposal distribution for the full conditional distribution of β , as shown in Figure 2; we can therefore choose the parameters of the proposal distribution so that it closely matches the full conditional distribution. As a result, we use the M–H technique with a Gaussian proposal to generate samples from the target distribution. The natural choice of the initial value of β is its MLE ( β ^ M ), computed using the Newton–Raphson method, while the proposal variance is the reciprocal of the Fisher information, S β 2 = 5.960009 . We then generate 50,000 random variates with S β 2 = 5.960009 and find that the acceptance rate for this choice of variance is 71.03 % , which is quite favorable. We discard the initial 5000 samples as burn-in and use the remaining observations to compute the BEs.
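The random-walk M–H step described above can be sketched generically as follows. This is a hedged Python illustration (the paper uses R): the function accepts any log target, and the Gaussian stand-in target used in the demonstration merely mimics a full conditional centered at the MLE with the stated variance; in practice the log full conditional of β would be supplied.

```python
import numpy as np

def metropolis_hastings(log_target, init, prop_sd, n_iter=50_000,
                        burn_in=5_000, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.

    log_target : callable returning the log of the (unnormalized) density
    init       : starting value (e.g., the MLE of beta)
    prop_sd    : proposal standard deviation (e.g., sqrt of 1/Fisher info)
    """
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    x, lx = init, log_target(init)
    accepted = 0
    for t in range(n_iter):
        y = x + prop_sd * rng.normal()        # Gaussian random-walk proposal
        ly = log_target(y)
        if np.log(rng.uniform()) < ly - lx:   # M-H acceptance step
            x, lx = y, ly
            accepted += 1
        chain[t] = x
    return chain[burn_in:], accepted / n_iter

# Stand-in target for demonstration only: N(131.82, 5.96)
post, rate = metropolis_hastings(
    lambda b: -0.5 * (b - 131.82) ** 2 / 5.96,
    init=131.82, prop_sd=np.sqrt(5.96))
```

With a proposal scale equal to the target scale, the acceptance rate of a one-dimensional random-walk sampler is known to sit near 70%, consistent with the 71.03% reported above.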
The trace and autocorrelation function (ACF) plots are used as graphical diagnostics to check the convergence of the M–H algorithm; both plots for β are shown in Figure 2. The trace plot shows a random scatter about a mean value (solid line), with good mixing of the chain of simulated values of β , and the ACF plot shows that the autocorrelation of the chain is very low. Based on these plots, we conclude that the M–H algorithm converges rapidly under the proposed Gaussian proposal.
The outcomes for MLEs and BEs using importance and M–H samplers, in addition to the 95% Boot-t CI, asymptotic CI and HPD CI for α and β , are all displayed in Table 1.
Now, we consider the prediction of some of the unobserved values. The point predictors and PIs for the unobserved values are computed as presented in Table 2. The PIs are constructed under an SEL function based on the importance and M–H algorithms, the details of which are presented in Section 4. We can easily see that the PIs constructed using the M–H algorithm are the shortest in all cases.

5.2. Simulation Results

Now, we use Monte Carlo simulation to compare the performance of the different estimation and prediction methods presented in the previous sections, using the bias and mean square error (MSE) criteria for the MLEs and BEs. In this simulation, we set the values of the BS parameters to ( α = 2 , β = 1.5 ) and ( α = 1.5 , β = 1.0 ). Additionally, we consider different censoring schemes, along with sample sizes of n = 25 , 50 and 100. To perform the Bayesian analysis effectively, we assume three different priors. For prior 0, the improper prior, the prior parameters are set to a 1 = b 1 = a 2 = b 2 = 0 . Two additional proper priors are assumed with the same means but different variances to examine the sensitivity of our inference to the specification of the prior parameters. Prior 1 and prior 2 are taken as a 1 = a 2 = 3 , b 1 = b 2 = 1 and a 1 = a 2 = 5 , b 1 = b 2 = 2 , respectively, so that prior 2 is more informative than prior 1. This enables us to evaluate the extent to which an informative prior contributes to the results obtained from the observed values.
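The data-generation step of each replication can be sketched as follows. This is a minimal Python illustration (the paper's simulations use R's rfatigue from extraDistr); SciPy's fatiguelife distribution is the Birnbaum–Saunders family, and the function name is ours.

```python
import numpy as np
from scipy import stats

def type2_censored_bs(n, m, alpha, beta, seed=None):
    """Draw a type-II censored BS sample: only the m smallest of the n
    simulated failure times are observed."""
    rng = np.random.default_rng(seed)
    x = stats.fatiguelife.rvs(c=alpha, scale=beta, size=n, random_state=rng)
    return np.sort(x)[:m]   # the first m order statistics are observed

# One replication of the (n, m) = (50, 30) scheme with (alpha, beta) = (2, 1.5)
obs = type2_censored_bs(n=50, m=30, alpha=2.0, beta=1.5, seed=1)
```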
Table 3 presents the average widths (AWs) and coverage probabilities (CPs) of 95% CIs for α and β based on the Boot-t, asymptotic ML and Bayesian methods with the improper prior (prior 0) and informative priors (prior 1 and prior 2) under the SEL function. Based on 10,000 replications, the average bias and MSEs of the MLEs and BEs of α and β under the SEL function are displayed in Table 4a,b. We observe from Table 3a,b that the HPD CIs are shorter than the asymptotic and Boot-t CIs under all priors for n = 25 , 50 , 100 with different values of m, and that the HPD CIs perform better under informative priors than the asymptotic and Boot-t CIs do. The Boot-t method performs well compared to the asymptotic method in estimating all parameters. It can also be noticed that as n increases, all CI types become shorter, while the simulated CPs are very close to each other for all these CIs. As shown in Table 4a,b, in terms of bias and MSE, the BEs perform well for all values of n and m. For all sample sizes, the Bayesian estimates of α and β are better than the MLEs, as indicated by their lower bias and MSEs. Additionally, as expected, the BEs under the more informative prior 2 are better than those under the less informative and non-informative priors. Finally, we can clearly conclude that the BEs are sensitive to the presumed values of the prior parameters, particularly for the informative prior (prior 2).
For the prediction problem, different type-II censored samples are randomly generated from the BS model for n = 25 , 50 , 100 . Then, the average bias and mean square prediction errors (MSPEs) are computed for the predictors and PIs of the censored times. The bias and MSPEs of the BPs are computed over 10,000 replications for non-informative and informative priors and are reported in Table 5 and Table 6 for the predetermined censoring schemes. Table 7 and Table 8 include the AWs and CPs of 95 % PIs based on the importance, M–H and Boot-t methods. It can be noticed from Table 5 and Table 6 that the BPs obtained using the M–H method perform well compared to those from the importance sampler in terms of the reported bias and MSPE values for all priors. For all sample sizes, both the importance sampling and M–H methods perform better under the more informative prior 2 than under the less informative or non-informative priors, with a clear advantage for the M–H method due to its lower bias and MSPE values. As expected, when m increases, the MSPEs of the censored times tend to become smaller owing to the increase in the quantity of observed data. From Table 7 and Table 8, in terms of the AW criterion, the results clearly support the conclusion that the Bayesian PIs based on M–H sampling are the best PIs. We can also notice that the PIs calculated using importance sampling perform better than those calculated using the Boot-t method in terms of the reported AW values. Furthermore, as anticipated, the farther a censored value is from the last observed value, the larger the corresponding AW. The CP values are close to each other; the Bayesian PIs under the informative prior (prior 2) behave better than the PIs based on the other priors, and the simulated CPs are high and close to the nominal prediction coefficient ( 1 − γ = 0.95 ).
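The AW and CP criteria used in these comparisons can be computed from simulated predictive draws by the equal-tail quantile method. A minimal Python sketch follows (function names are ours; the demonstration uses a standard normal stand-in for the predictive distribution, not the BS model):

```python
import numpy as np

def pi_from_draws(draws, level=0.95):
    """Equal-tail prediction interval from posterior predictive draws."""
    lo, hi = np.quantile(draws, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

def average_width_and_coverage(all_draws, true_values, level=0.95):
    """AW and CP of the PIs over simulation replications."""
    widths, hits = [], []
    for draws, y in zip(all_draws, true_values):
        lo, hi = pi_from_draws(draws, level)
        widths.append(hi - lo)
        hits.append(lo <= y <= hi)   # does the PI cover the true value?
    return float(np.mean(widths)), float(np.mean(hits))

# Demonstration with a known predictive distribution (standard normal):
rng = np.random.default_rng(0)
reps = [rng.standard_normal(2000) for _ in range(500)]
truth = rng.standard_normal(500)
aw, cp = average_width_and_coverage(reps, truth)
```

For a standard normal predictive distribution, the AW should be close to 2 × 1.96 ≈ 3.92 and the CP close to the nominal 0.95.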

6. Discussion and Conclusions

In this study, we investigated parameter estimation and prediction methods for type-II censored BS data. We compared frequentist and Bayesian approaches, employing MCMC sampling to estimate parameters and construct credible intervals. Our results show that the two methods provide comparable estimates of the shape and scale parameters, while the Bayesian method provides more reliable and flexible prediction intervals than classical methods, especially when prior information is available. By applying these methods to simulated and real-world data, we have demonstrated the practical advantage of the presented approaches in reliability and survival analysis.
Failure data, such as the fatigue life of materials, are of considerable interest in reliability analysis. Methods based on degradation data are therefore increasingly common in industries such as manufacturing, healthcare and materials science. When the process leading to failure is slow or progressive, with performance deteriorating over time, degradation data models are usually used. These models can be employed effectively to better understand how a product or system degrades before it fails, including through the use of functional models, stochastic processes and random-effects models.
Ruiz et al. [28], Zhai et al. [29], and Xu et al. [30], among others, have recently focused on using degradation models to evaluate product reliability. For example, Ruiz et al. [28] introduced generalized functional mixed models for accelerated degradation testing, which allowed them to predict failure times accurately by modeling the degradation process under stress. Zhai et al. [29] suggested a random-effects Wiener process approach to model product degradation with heterogeneity, considering the variability between products that may fail at different rates. Similarly, Xu et al. [30] developed a multivariate Student-t process model for degradation data with tail-weighted distributions to study common problems in industrial reliability.
Our current study focuses mainly on failure time data under type-II censoring, where the data represent the times at which components fail under stress or other conditions. The two approaches are applied in different contexts. The main advantage of degradation models is their ability to describe the gradual deterioration of products, which allows more precise estimation of the remaining useful life before failure occurs. In contrast, failure time analysis under type-II censoring is more convenient when data are collected after a certain number of failures have been observed, which makes it ideal for situations where products are tested to failure.
There is a clear possibility that future research will integrate these models with type-II censoring, as our current methods do not directly incorporate degradation data models. This combination might provide more accurate and comprehensive reliability assessments, especially when there is a relationship between failure times and degradation patterns for some products. By combining degradation modeling with the Bayesian approach used in this work, we could offer a more flexible and thorough approach to handling both types of data, which, in turn, can enable better predictions of the possible failure times of a product based on its degradation path.
Moreover, this study provides valuable insights into parameter estimation and prediction using type-II censored BS data. However, the growing importance of degradation data models in reliability analysis calls for further consideration of how these models can be integrated with our presented methods. Promising future work will study these extensions, as well as the application of our methods to larger datasets, multi-sample progressive censoring schemes and industrial applications involving degradation data.

Funding

This research received no external funding.

Data Availability Statement

The original data presented in the study are openly available in Birnbaum and Saunders [2] at https://doi.org/10.2307/3212004.

Acknowledgments

This work was performed during a leave of the author from McMaster University during the academic year of 2020–2021.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Pseudocode for Bayesian Estimation

1. Initialize parameters:
   - α, β, λ: parameters of the distribution
   - m, n: number of observed failures (m) and total sample size (n)
   - M: number of posterior samples
   - a_1, b_1, a_2, b_2: hyperparameters of the prior distributions
   - c_1, c_2: constants for model calculation
   - prior distributions for β and λ
2. Set up the prior distributions; generate the three prior sets used in the simulations:
   - Prior 0: a_1 = b_1 = a_2 = b_2 = 0
   - Prior 1: a_1 = a_2 = 3; b_1 = b_2 = 1
   - Prior 2: a_1 = a_2 = 5; b_1 = b_2 = 2
3. For each simulation (i = 1 to num_simulations):
   - Generate observed data x.obs from the fatigue (BS) distribution using rfatigue
   - Sort x.obs
4. For each prior (j = 1 to 3):
   - Generate M samples of β from the inverse gamma distribution (rinvgamma)
   - Initialize variables: λ_before, λ_after, α_before, α_after, ξ_m, T_β, λ_{T_β}, ω, E_{λβ}, ξ, ξ²
   - For each posterior sample (k = 1 to M):
     a. Calculate ξ_m[k] = (x.obs[m]/β[k])^{0.5} − (x.obs[m]/β[k])^{−0.5}
     b. Calculate ξ²[k] = (x.obs[m]/β[k]) + (x.obs[m]/β[k])^{−1} − 2
     c. Sum of ξ²: sum_ξ² = Σ ξ²[k]
     d. Calculate ξ[k] = (β[k]/(2 x.obs)) ((x.obs/β[k])^{0.5} + (x.obs/β[k])^{−0.5})
     e. Generate λ_before from rfatigue: λ_before = rfatigue(M, a_2 + m/2, ξ² + c_2 (n − m) ξ_m[k] + b_2)
     f. Calculate λ_after and α_after:
        - λ_after[k] = Σ(λ_before)/M
        - α_after[k] = λ_after[k]/2
     g. Calculate T_β[k] using the exponential function
     h. Calculate λ_{T_β}[k] using the average of T_β[k]
     i. Calculate ω[k] using the formulas in the manuscript
     j. Calculate E_{λβ}[k]
     k. Store the ω, ξ and ξ² values for each iteration
5. Update β, λ and α according to the formulas:
   - β_B = Σ(β ω)/Σ(ω)
   - λ_B = Σ(ω E_{λβ})/Σ(ω)
   - α_B = λ_B/2
6. Store the results for each iteration:
   - Store β_B, λ_B and α_B for each prior set
7. Calculate the average CI lengths for α and β for each prior:
   - Average the CIs for α_B and β_B over all simulations
8. Output the results:
   - Display the average CI lengths for α and β for each prior
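The importance-weighted update in step 5 (β_B = Σ βω / Σ ω) is a self-normalized importance-sampling estimate. A minimal Python sketch follows (the paper's implementation is in R); the Gaussian target and proposal in the demonstration are stand-ins for the posterior and sampling densities, and all names are ours.

```python
import numpy as np
from scipy import stats

def is_posterior_mean(theta, log_w):
    """Self-normalized importance-sampling estimate: sum(w*theta)/sum(w)."""
    w = np.exp(log_w - np.max(log_w))   # stabilize before exponentiating
    return float(np.sum(w * theta) / np.sum(w))

# Demonstration: estimate the mean of N(2, 1) using draws from a N(0, 2) proposal
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 2.0, size=100_000)
log_w = stats.norm.logpdf(theta, 2.0, 1.0) - stats.norm.logpdf(theta, 0.0, 2.0)
est = is_posterior_mean(theta, log_w)
```

Subtracting the maximum log weight before exponentiating avoids overflow; since the weights are then renormalized, the estimate is unchanged.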

Appendix B. Pseudocode for Bayesian Prediction

1. Initialize parameters:
   - α, β, λ: parameters of the distribution
   - m, n: number of observed failures (m) and total sample size (n)
   - M: number of posterior samples
   - a_1, b_1, a_2, b_2: hyperparameters of the prior distributions
   - c_1, c_2: constants for model calculation
2. For each simulation (q = 1 to M):
   - Generate a new x.obs sample for each q using rfatigue
   - Sort the observations: x.obs = sort(x.obs1)
3. For each posterior sample (q = 1 to M):
   a. Generate posterior samples of β from the inverse gamma distribution (rinvgamma)
   b. Calculate ξ_m[q] = (x.obs[m]/β[q])^{0.5} − (x.obs[m]/β[q])^{−0.5}
   c. Calculate ξ²[q] = (x.obs[m]/β[q]) + (x.obs[m]/β[q])^{−1} − 2
   d. Sum of ξ² for the current sample: sum_ξ² = Σ ξ²[q]
   e. Calculate ξ[q] = (β[q]/(2 x.obs)) ((x.obs/β[q])^{0.5} + (x.obs/β[q])^{−0.5})
   f. Generate λ_before from rfatigue for each sample
   g. Update λ_after and α_after from the λ_before values:
      - λ_after[q] = Σ(λ_before)/M
      - α_after[q] = λ_after[q]/2
4. Define a function to calculate the predictor for a given censored value Y_s:
   a. Initialize the predictor value to 0
   b. Loop over all possible values of i and l (i from 0 to s − 1 and l from 0 to n − m − s):
      - Calculate the binomial coefficients binom_i and binom_l
      - Calculate sign_factor = (−1)^{i+l}
      - Define the integrand function for the integral calculation
      - Perform the integration for each posterior sample q (using the integrate() function)
   c. Sum all contributions from the integrations to compute the predictor
5. For each censored value Y_s (s from m + 1 to n), calculate the prediction:
   - Calculate the predictors using the function defined in step 4
   - Store the prediction results for each censored value Y_s in a matrix
6. Calculate prediction intervals for each censored value:
   - For each Y_s, compute the 95% prediction interval from the lower and upper quantiles (2.5% and 97.5%)
7. Store the lengths of the prediction intervals for each sample:
   - Compute each interval length as (upper bound − lower bound) and store it in a matrix
8. Calculate the average interval lengths over the M iterations:
   - Use the apply() function to compute the average interval length for each censored value
9. Output the results:
   - Display the average interval lengths for each censored value Y_s
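As a cross-check of the integration-based scheme above, the conditional distribution of an unobserved order statistic Y_s given the last observed value x_(m) can also be simulated directly: for each posterior draw, sample the n − m censored times from the BS distribution left-truncated at x_(m) and record the s-th smallest. A hedged Python sketch follows (SciPy's fatiguelife is the BS family; the function and argument names are ours, and this Monte Carlo variant is an alternative to, not a transcription of, the paper's R code).

```python
import numpy as np
from scipy import stats

def predict_censored_order_stat(s, x_m, n_minus_m, post_alpha, post_beta, seed=0):
    """Simulation-based predictor and 95% PI for the s-th unobserved order
    statistic Y_s under type-II censoring."""
    rng = np.random.default_rng(seed)
    draws = np.empty(len(post_alpha))
    for k, (a, b) in enumerate(zip(post_alpha, post_beta)):
        p_m = stats.fatiguelife.cdf(x_m, c=a, scale=b)
        u = rng.uniform(p_m, 1.0, size=n_minus_m)   # truncated-uniform trick
        y = stats.fatiguelife.ppf(u, c=a, scale=b)  # inverse-CDF sampling
        draws[k] = np.sort(y)[s - 1]                # s-th smallest censored time
    point = draws.mean()                            # SEL point predictor
    lo, hi = np.quantile(draws, [0.025, 0.975])     # 95% equal-tail PI
    return point, (lo, hi)
```

By construction every simulated censored time exceeds x_(m), so the predictor and PI always lie above the last observed failure, as they must.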

Appendix C. Core Functions in R

Below is a summary of the core R functions used in the proposed methods, covering both MLE and Bayesian sampling.
1. Core R functions for MLE:
   - mle2() (from the bbmle package): used for maximum likelihood estimation, fitting the model and estimating the parameters.
     Input: likelihood function and data. Output: MLEs of the parameters.
2. Core R functions for Bayesian sampling:
   - rinvgamma() (from the extraDistr package): generates samples from the inverse gamma distribution (used for sampling parameters such as β).
     Input: shape and scale parameters. Output: samples from the inverse gamma distribution.
   - rfatigue() (from the extraDistr package): generates random samples from the Birnbaum–Saunders (fatigue life) distribution.
     Input: parameters of the Birnbaum–Saunders distribution. Output: simulated fatigue-life values.
   - pnorm() (base R): the cumulative distribution function of the standard normal distribution, used in the integrals for Bayesian prediction.
     Input: values at which to evaluate the normal CDF. Output: CDF values.
   - integrate() (base R): performs numerical integration, used to compute the posterior predictive distribution.
     Input: function to integrate and the lower and upper bounds of integration. Output: numerical value of the integral.
3. Other utilities:
   - sort() (base R): sorts the observations.
     Input: a vector of observations. Output: the sorted observations.
   - quantile() (base R): computes the 95% prediction interval for each censored value.
     Input: a vector of values. Output: quantiles (e.g., 2.5% and 97.5% for a 95% interval).

Appendix D. Software Versions

Software version information:
- R version 4.3.2 (2023-10-31, “Eye Holes”)
- bbmle package version 1.0.23: used for maximum likelihood estimation (MLE)
- extraDistr package version 1.8: used for sampling from the Birnbaum–Saunders and inverse gamma distributions
- Base R functions used for data manipulation and numerical operations (e.g., sort(), quantile() and integrate())

References

  1. Birnbaum, Z.W.; Saunders, S.C. A new family of life distributions. J. Appl. Probab. 1969, 6, 319–327.
  2. Birnbaum, Z.W.; Saunders, S.C. Estimation for a family of life distributions with applications to fatigue. J. Appl. Probab. 1969, 6, 328–347.
  3. Desmond, A.F. Stochastic models of failure in random environments. Can. J. Stat. 1985, 13, 171–183.
  4. Langlands, A.O.; Pocock, S.J.; Kerr, G.R.; Gore, S.M. Long-term survival of patients with breast cancer: A study of the curability of the disease. Br. Med. J. 1979, 2, 1247–1251.
  5. Balakrishnan, N.; Kundu, D. Birnbaum–Saunders distribution: A review of models, analyses, and applications. Appl. Stoch. Models Bus. Ind. 2019, 35, 4–132 (with discussions).
  6. Engelhardt, M.; Bain, L.J.; Wright, F.T. Inferences on the parameters of the Birnbaum–Saunders fatigue life distribution based on maximum likelihood estimation. Technometrics 1981, 23, 251–256.
  7. Balakrishnan, N.; Zhu, X. On the existence and uniqueness of the maximum likelihood estimates of parameters of Birnbaum–Saunders distribution based on Type-I, Type-II and hybrid censored samples. Statistics 2014, 48, 1013–1032.
  8. Ng, H.K.T.; Kundu, D.; Balakrishnan, N. Point and interval estimations for the two-parameter Birnbaum–Saunders distribution based on type-II censored samples. Comput. Stat. Data Anal. 2006, 50, 3222–3242.
  9. Padgett, W.J. On Bayes estimation of reliability for the Birnbaum–Saunders fatigue life model. IEEE Trans. Reliab. 1982, 31, 436–438.
  10. Achcar, J.A. Inferences for the Birnbaum–Saunders fatigue life model using Bayesian methods. Comput. Stat. Data Anal. 1993, 15, 367–380.
  11. Achcar, J.A.; Moala, F.A. Use of MCMC methods to obtain Bayesian inferences for the Birnbaum–Saunders distribution in the presence of censored data and covariates. Adv. Appl. Stat. 2010, 17, 1–27.
  12. Wang, M.; Sun, X.; Park, C. Bayesian analysis of Birnbaum–Saunders distribution via the generalized ratio-of-uniforms method. Comput. Stat. 2016, 31, 207–225.
  13. Liu, Y.; Zhang, H.; Li, W. A novel random-effect Birnbaum–Saunders distribution for reliability assessment considering accelerated mechanism equivalence. Comput. Stat. 2024, 39, 501–515.
  14. Sawlan, Z.; Hasan, M.; Abdel-Mageed, R. Modeling metallic fatigue data using the Birnbaum–Saunders distribution. Fatigue Fract. Eng. Mater. Struct. 2023, 46, 1423–1434.
  15. Razmkhah, M.; Karami, S.; Salim, S. Neutrosophic Birnbaum–Saunders distribution with applications. J. Stat. Comput. Simul. 2024, 94, 298–315.
  16. Park, C.; Wang, M. A goodness-of-fit test for the Birnbaum–Saunders distribution based on the probability plot. Stat. Probab. Lett. 2023, 174, 125–136.
  17. Dorea, C.C.Y.; Silva, M.D.; Lima, G. A general class of fatigue-life distributions of Birnbaum–Saunders type. Reliab. Eng. Syst. Saf. 2023, 229, 108–118.
  18. Kaminsky, K.S.; Nelson, P.I. Prediction of order statistics. In Handbook of Statistics: Order Statistics: Applications; Balakrishnan, N., Rao, C.R., Eds.; North-Holland: Amsterdam, The Netherlands, 1998; Volume 17, pp. 431–450.
  19. Al-Hussaini, E.K. Predicting observables from a general class of distributions. J. Stat. Plan. Inference 1999, 79, 79–81.
  20. Kundu, D.; Raqab, M.Z. Bayesian inference and prediction of order statistics for type-II censored Weibull distribution. J. Stat. Plan. Inference 2012, 142, 41–47.
  21. Bdair, O.M.; Awwad, R.R.A.; Abufoudeh, G.K.; Naser, M.F.M. Estimation and prediction for flexible Weibull distribution based on progressive type-II censored data. Commun. Math. Stat. 2020, 8, 255–277.
  22. Balakrishnan, N.; Cohen, A.C. Order Statistics and Inference: Estimation Methods; Academic Press: San Diego, CA, USA, 1991.
  23. Ahmed, E.A. Bayesian estimation based on progressive type-II censoring from two-parameter bathtub-shaped lifetime model: A Markov chain Monte Carlo approach. J. Appl. Stat. 2014, 41, 752–768.
  24. Tsay, W.J.; Huang, C.J.; Fu, T.T.; Ho, I.L. A simple closed-form approximation for the cumulative distribution function of the composite error of stochastic frontier models. J. Prod. Anal. 2013, 39, 259–269.
  25. Chen, M.H.; Shao, Q.M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92.
  26. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. A First Course in Order Statistics; Wiley: New York, NY, USA, 1992.
  27. Tierney, L. Markov chains for exploring posterior distributions. Ann. Stat. 1994, 22, 1701–1728.
  28. Ruiz, C.; Liao, H.; Pohl, E.A. Generalized functional mixed models for accelerated degradation-based reliability analysis. IEEE Trans. Reliab. 2024.
  29. Zhai, Q.; Li, Y.; Chen, P. Modeling product degradation with heterogeneity: A general random-effects Wiener process approach. IISE Trans. 2025.
  30. Xu, A.; Fang, G.; Zhuang, L.; Gu, C. A multivariate Student-t process model for dependent tail-weighted degradation data. IISE Trans. 2024.
Figure 1. Empirical and fitted distribution functions and Q–Q plots for the two datasets.
Figure 2. Plots of Metropolis–Hastings Markov chains for β .
Table 1. Point estimates and 95% C I s for α and β .
m    Parameter  MLE       Importance (SEL)  M–H (SEL)  Approximate CI          Boot-t CI               HPD CI
90   α          0.1706    0.1769            0.1738     (0.1450, 0.1962)        (0.1465, 0.1967)        (0.1407, 0.1739)
     β          131.8776  132.2340          130.9982   (127.4480, 136.3064)    (127.5583, 135.8003)    (127.0845, 134.9120)
70   α          0.1735    0.1787            0.1762     (0.1423, 0.2008)        (0.1446, 0.1989)        (0.1462, 0.1857)
     β          132.1070  132.3581          131.7418   (126.7428, 136.9137)    (126.7720, 136.0554)    (128.1083, 136.6701)
Table 2. Point predictors and 95% PIs of censored observations based on Bayesian methods.
                     Point Predictors        95% PIs
                     Importance  M–H         Importance            M–H
m = 90 (n − m = 11)
Y_{1:11}             162.67      162.18      (162.35, 163.52)      (161.75, 162.71)
Y_{2:11}             166.23      163.65      (165.87, 167.12)      (163.36, 164.34)
Y_{3:11}             169.68      166.16      (169.27, 170.93)      (165.95, 166.97)
Y_{4:11}             173.05      170.61      (172.91, 174.61)      (170.14, 171.18)
Y_{5:11}             179.83      174.70      (179.15, 180.86)      (179.38, 180.45)
Y_{6:11}             187.14      180.22      (186.85, 188.62)      (179.88, 180.99)
Y_{7:11}             196.41      187.85      (195.74, 197.52)      (187.63, 188.78)
Y_{8:11}             204.38      195.79      (204.18, 205.97)      (195.42, 196.61)
Y_{9:11}             211.27      204.16      (210.86, 212.68)      (203.33, 204.54)
Y_{10:11}            217.02      213.06      (216.20, 218.06)      (212.35, 213.59)
m = 70 (n − m = 31)
Y_{1:31}             142.92      142.36      (142.12, 143.44)      (141.87, 143.07)
Y_{4:31}             145.64      144.39      (144.92, 146.30)      (143.93, 145.16)
Y_{7:31}             148.78      147.68      (148.06, 149.47)      (146.83, 148.11)
Y_{10:31}            150.93      150.51      (150.23, 151.70)      (149.22, 150.54)
Y_{13:31}            154.12      153.14      (153.64, 155.17)      (152.02, 153.38)
Y_{16:31}            158.34      156.75      (157.51, 159.12)      (155.81, 157.22)
Y_{19:31}            162.95      160.45      (162.33, 164.01)      (159.72, 161.18)
Y_{22:31}            165.44      164.80      (164.19, 165.93)      (164.03, 165.52)
Y_{25:31}            169.38      167.65      (168.48, 170.31)      (166.59, 168.14)
Y_{28:31}            173.19      169.28      (171.86, 173.81)      (168.65, 170.28)
Table 3. (a) AWs and CPs of 95% CIs when α = 2 and β = 1.5 ; (b) AWs and CPs of 95% CIs when α = 1.5 and β = 1.0 .
(a)
CSs                  Boot-t CI   Approximate CI   HPD CI (Prior 0)   HPD CI (Prior 1)   HPD CI (Prior 2)
(25, 15)   α   AW    2.2517      2.3739           1.9514             1.8017             1.7232
               CP    0.9317      0.9305           0.9344             0.9349             0.9412
           β   AW    1.7414      1.8509           1.6541             1.6247             1.5132
               CP    0.9423      0.9414           0.9450             0.9461             0.9485
(25, 20)   α   AW    2.1966      2.2978           1.8705             1.7751             1.6821
               CP    0.9327      0.9315           0.9352             0.9365             0.9425
           β   AW    1.7270      1.7962           1.5932             1.5701             1.4904
               CP    0.9431      0.9421           0.9459             0.9472             0.9494
(50, 30)   α   AW    2.0914      2.1885           1.8180             1.7237             1.6460
               CP    0.9333      0.9322           0.9357             0.9378             0.9425
           β   AW    1.6916      1.7351           1.5566             1.5202             1.4721
               CP    0.9438      0.9427           0.9466             0.9481             0.9504
(50, 40)   α   AW    1.9914      2.0805           1.7218             1.6191             1.5446
               CP    0.9384      0.9377           0.9404             0.9418             0.9448
           β   AW    1.5538      1.6101           1.4912             1.4593             1.4119
               CP    0.9457      0.9446           0.9473             0.9492             0.9510
(100, 50)  α   AW    1.8513      1.9223           1.6289             1.5094             1.4650
               CP    0.9407      0.9399           0.9427             0.9438             0.9490
           β   AW    1.4461      1.5338           1.3936             1.3527             1.3121
               CP    0.9470      0.9461           0.9483             0.9501             0.9519
(100, 75)  α   AW    1.6332      1.7618           1.5059             1.4639             1.4012
               CP    0.9422      0.9409           0.9440             0.9462             0.9499
           β   AW    1.2736      1.3812           1.2094             1.1509             1.4220
               CP    0.9479      0.9470           0.9491             0.9512             0.9523
(100, 90)  α   AW    1.4418      1.5642           1.3601             1.3025             1.2180
               CP    0.9431      0.9420           0.9449             0.9468             0.9513
           β   AW    1.0799      1.1846           0.9878             0.9222             0.8709
               CP    0.9488      0.9482           0.9504             0.9519             0.9530
(b)
CSs                  Boot-t CI   Approximate CI   HPD CI (Prior 0)   HPD CI (Prior 1)   HPD CI (Prior 2)
(25, 15)   α   AW    1.1287      1.1745           1.0749             1.0165             0.9624
               CP    0.9267      0.9249           0.9277             0.9289             0.9298
           β   AW    0.7641      0.8185           0.7141             0.6722             0.6104
               CP    0.9377      0.9363           0.9389             0.9396             0.9405
(25, 20)   α   AW    1.0628      1.1151           0.9856             0.9346             0.8957
               CP    0.9277      0.9261           0.9284             0.9293             0.9303
           β   AW    0.7338      0.7848           0.6984             0.6325             0.5974
               CP    0.9388      0.9372           0.9397             0.9406             0.9414
(50, 30)   α   AW    0.9792      1.0244           0.9184             0.8771             0.8263
               CP    0.9285      0.9273           0.9290             0.9298             0.9311
           β   AW    0.7269      0.7674           0.6752             0.6184             0.5841
               CP    0.9398      0.9387           0.9410             0.9421             0.9433
(50, 40)   α   AW    0.9176      0.9714           0.8768             0.8322             0.7968
               CP    0.9296      0.9285           0.9305             0.9312             0.9324
           β   AW    0.7055      0.7354           0.6554             0.6065             0.5692
               CP    0.9409      0.9401           0.9425             0.9438             0.9449
(100, 50)  α   AW    0.8620      0.9109           0.8224             0.7889             0.7405
               CP    0.9310      0.9299           0.9321             0.9327             0.9339
           β   AW    0.6908      0.7115           0.6381             0.5812             0.5534
               CP    0.9427      0.9419           0.9441             0.9453             0.9467
(100, 75)  α   AW    0.8119      0.8569           0.7751             0.7396             0.7008
               CP    0.9326      0.9312           0.9342             0.9359             0.9372
           β   AW    0.6684      0.6912           0.6119             0.5687             0.5404
               CP    0.9443      0.9437           0.9458             0.9470             0.9481
(100, 90)  α   AW    0.7668      0.8077           0.7259             0.6961             0.6643
               CP    0.9343      0.9329           0.9361             0.9384             0.9398
           β   AW    0.6178      0.6546           0.5961             0.5413             0.5176
               CP    0.9462      0.9455           0.9476             0.9489             0.9499
Table 4. (a) Bias and MSEs of the MLEs and BEs when α = 2 and β = 1.5 ; (b) Bias and MSEs of the MLEs and BEs when α = 1.5 and β = 1.0 .
(a)
                    MLE               Prior 0           Prior 1           Prior 2
CSs                 Bias     MSE      Bias     MSE      Bias     MSE      Bias     MSE
(25, 15)   α        0.5877   0.6634   −0.5362  0.6185   0.4581   0.5852   0.4208   0.5646
           β        0.6293   0.8816   0.5863   0.8205   0.5814   0.7848   0.5747   0.7659
(25, 20)   α        0.5615   0.6618   −0.5199  0.6138   0.4521   0.5832   0.4199   0.5629
           β        −0.6220  0.8758   0.5848   0.8095   0.5757   0.7765   0.5687   0.7593
(50, 30)   α        0.5558   0.6578   −0.5034  0.6114   0.4456   0.5793   0.4039   0.5577
           β        0.6142   0.8681   0.5803   0.7883   0.5741   0.7698   0.5647   0.7469
(50, 40)   α        0.5508   0.6507   −0.4984  0.6075   0.4361   0.5689   0.4004   0.5467
           β        0.6072   0.8577   0.5752   0.7778   0.5644   0.7602   −0.5604  0.7366
(100, 50)  α        0.5422   0.6481   0.4901   0.5911   0.4288   0.5541   0.3986   0.5371
           β        0.5994   0.8419   0.5721   0.7642   0.5684   0.7567   −0.5585  0.7287
(100, 75)  α        −0.5328  0.6342   −0.4862  0.5817   0.4194   0.5456   0.3934   0.5308
           β        0.5916   0.8221   0.5895   0.7475   0.5686   0.7390   −0.5524  0.7118
(100, 90)  α        −0.5378  0.6216   −0.4807  0.5739   0.4128   0.5359   0.3912   0.5252
           β        0.5888   0.7932   0.5805   0.7152   0.5642   0.6789   −0.5493  0.6518
(b)
                    MLE               Prior 0           Prior 1           Prior 2
CSs                 Bias     MSE      Bias     MSE      Bias     MSE      Bias     MSE
(25, 15)   α        −0.5214  0.5836   0.4835   0.5523   −0.4466  0.5254   0.4087   0.4978
           β        0.5417   0.7336   0.5263   0.6877   0.5183   0.6564   −0.5022  0.6328
(25, 20)   α        0.5068   0.5687   −0.4682  0.5363   0.4322   0.5081   0.3951   0.4795
           β        0.5359   0.7167   0.5177   0.6744   0.5089   0.6412   0.5002   0.6202
(50, 30)   α        0.4892   0.5522   0.4543   0.5207   −0.4262  0.4924   0.3849   0.4629
           β        −0.5246  0.6867   0.5119   0.6379   −0.4987  0.6184   0.4921   0.5962
(50, 40)   α        −0.4773  0.5419   0.4408   0.5067   0.4171   0.4761   −0.3762  0.4473
           β        −0.5061  0.6715   0.4970   0.6226   0.4831   0.6033   −0.4768  0.5829
(100, 50)  α        0.4685   0.5269   0.4278   0.4933   0.4103   0.4612   0.3679   0.4340
           β        0.4924   0.6547   −0.4856  0.6106   −0.4752  0.5942   0.4635   0.5718
(100, 75)  α        −0.4433  0.5157   0.4116   0.4793   0.3965   0.4536   −0.3539  0.4258
           β        0.4782   0.6387   −0.4668  0.6036   0.4572   0.5834   −0.4513  0.5611
(100, 90)  α        0.4318   0.4962   −0.4032  0.4674   0.3722   0.4426   −0.3469  0.4130
           β        0.4637   0.6247   0.4528   0.5886   −0.4450  0.5657   0.4377   0.5462
Table 5. Bias and MSPEs of BPs for censored units when α = 2 and β = 1.5 .
                             Prior 0              Prior 1              Prior 2
CSs        Y_s               Importance  M–H      Importance  M–H      Importance  M–H
(25, 15)   Y_{1:10}   Bias   0.4927      −0.4718  −0.4727     0.4558   0.4582      0.4414
                      MSPE   0.9638      0.9035   0.9287      0.8778   0.8536      0.7928
           Y_{3:10}   Bias   0.5225      0.5064   −0.4958     0.4726   0.4784      −0.4567
                      MSPE   0.9839      0.9189   0.9682      0.8965   0.9318      0.8590
           Y_{7:10}   Bias   0.5754      −0.5419  0.5234      0.5012   0.4935      −0.4731
                      MSPE   0.9986      0.9376   0.9712      0.9094   0.9515      0.8708
           Y_{10:10}  Bias   −0.6254     0.5659   0.6060      0.5422   0.5664      0.5033
                      MSPE   1.1903      0.9653   0.9925      0.9312   0.9674      0.8851
(25, 20)   Y_{1:5}    Bias   −0.4706     0.4553   0.4471      0.4232   0.4269      −0.4061
                      MSPE   0.9329      0.8874   0.8844      0.8375   0.8385      0.7862
           Y_{3:5}    Bias   0.4829      0.4646   0.4617      0.4351   0.4473      −0.4129
                      MSPE   0.9452      0.8963   0.8987      0.8566   0.8450      0.7921
           Y_{5:5}    Bias   −0.4992     0.4741   0.4836      0.4454   0.4692      −0.4235
                      MSPE   0.9556      0.9048   0.9116      0.8648   0.8509      0.8092
(50, 30)   Y_{1:20}   Bias   0.4595      0.4331   −0.4327     0.4119   0.4180      0.3968
                      MSPE   0.9065      0.8421   0.8562      0.8072   0.7945      0.7554
           Y_{5:20}   Bias   −0.4762     0.4571   0.4558      −0.4309  0.4397      0.4098
                      MSPE   0.9267      0.8690   0.8829      0.8348   0.8201      0.7860
           Y_{10:20}  Bias   −0.4956     0.4676   0.4879      0.4480   0.4722      0.4213
                      MSPE   0.9445      0.8905   0.9014      0.8536   0.8562      0.7965
           Y_{15:20}  Bias   0.5206      0.4882   −0.5063     0.4685   0.4885      0.4399
                      MSPE   0.9766      0.9231   0.93725     0.8761   0.8964      0.8260
           Y_{20:20}  Bias   −0.5451     0.4923   0.5268      0.4767   −0.5016     0.4456
                      MSPE   0.9966      0.9586   0.9782      0.9077   0.9375      0.8563
(50, 40)   Y_{1:10}   Bias   0.4406      −0.4193  0.4185      0.3933   −0.3946     0.3739
                      MSPE   0.8725      0.7917   0.8229      0.7724   0.7642      0.7311
           Y_{3:10}   Bias   0.4569      −0.4216  0.4332      0.4094   −0.4243     0.3817
                      MSPE   0.8962      0.8248   0.8536      0.7968   0.8047      0.7552
           Y_{7:10}   Bias   0.4765      −0.4374  0.4518      0.4147   −0.4386     −0.3984
                      MSPE   0.9261      0.8560   0.8825      0.8485   0.8492      0.7946
           Y_{10:10}  Bias   0.4980      0.4516   0.4753      0.4276   0.4522      −0.4058
                      MSPE   0.9508      0.8971   0.9360      0.8740   0.9313      0.8421
(100, 50)  Y_{1:50}   Bias   −0.4357     0.4031   0.4046      0.3802   0.3847      0.3631
                      MSPE   0.8270      0.7228   0.8069      0.7089   0.7226      0.6852
           Y_{10:50}  Bias   0.4524      0.4275   0.4215      −0.4082  0.4077      0.3829
                      MSPE   0.8533      0.7438   0.8366      0.7475   0.7896      0.7355
           Y_{20:50}  Bias   0.4713      0.4386   −0.4462     0.4123   −0.4227     0.3907
                      MSPE   0.8952      0.8205   0.8692      0.8067   0.8459      0.7847
           Y_{30:50}  Bias   −0.4982     0.4448   0.4615      0.4227   0.4462      −0.4079
                      MSPE   0.9568      0.8750   0.9187      0.8592   0.8852      0.8356
           Y_{40:50}  Bias   0.5095      0.4547   0.4791      0.4313   0.4562      0.4175
                      MSPE   0.9971      0.9467   0.9563      0.9168   0.9048      0.8653
           Y_{50:50}  Bias   0.5252      −0.4740  0.4962      0.4515   0.4708      −0.4356
                      MSPE   1.0964      0.9806   1.0167      0.9551   0.9719      0.9046
(100, 75)  Y_{1:25}   Bias   −0.4052     0.3761   0.3771      0.3356   0.3582      0.3174
                      MSPE   0.7849      0.7007   0.7584      0.6848   0.6938      0.6692
           Y_{10:25}  Bias   0.4450      0.3952   0.4097      0.3563   −0.3816     0.3373
                      MSPE   0.8253      0.7709   0.7967      0.7264   0.7510      0.6980
           Y_{20:25}  Bias   −0.4758     0.4136   0.4352      0.3866   −0.4127     0.3588
                      MSPE   0.8676      0.8471   0.8479      0.7812   0.8052      0.7249
Y 25 : 25 Bias0.49370.4311−0.46320.40590.4367−0.3742
MSPE0.90810.86890.86300.83510.84710.7763
Y 1 : 10 Bias−0.38690.35770.3550−0.31770.33780.2981
( 100 , 90 ) MSPE0.72550.65440.70650.64620.65520.6269
Y 3 : 10 Bias−0.40520.36500.3798−0.3262−0.35150.3074
MSPE0.77890.68370.74670.66410.69850.6480
Y 7 : 10 Bias−0.42590.38310.39540.34680.37220.3281
MSPE0.79650.71880.77130.68120.74610.6749
Y 10 : 10 Bias0.45370.4022−0.42390.36610.3953−0.3440
MSPE0.83120.74850.79630.70350.77130.6993
Table 6. Bias and MSPEs of BPs for censored units when α = 1.5 and β = 1.0.

| CSs | | | Prior 0 (Imp.) | Prior 0 (M–H) | Prior 1 (Imp.) | Prior 1 (M–H) | Prior 2 (Imp.) | Prior 2 (M–H) |
|---|---|---|---|---|---|---|---|---|
| (25, 15) | Y 1:10 | Bias | 0.4566 | 0.4186 | −0.4425 | 0.4101 | 0.4355 | −0.4031 |
| | | MSPE | 0.8114 | 0.7424 | 0.8059 | 0.7348 | 0.7968 | 0.7228 |
| | Y 5:10 | Bias | 0.4733 | 0.4291 | 0.4625 | 0.4207 | 0.4589 | 0.4116 |
| | | MSPE | 0.8365 | 0.7574 | 0.8221 | 0.7486 | 0.8133 | 0.7357 |
| | Y 10:10 | Bias | 0.4917 | 0.4390 | 0.4886 | 0.4282 | −0.4739 | 0.4189 |
| | | MSPE | 0.8520 | 0.7691 | 0.8406 | 0.7612 | 0.8311 | 0.7486 |
| (25, 20) | Y 1:5 | Bias | 0.4683 | −0.4274 | −0.4442 | −0.4155 | 0.4262 | 0.4012 |
| | | MSPE | 0.7952 | 0.7289 | 0.7872 | 0.7193 | 0.7786 | 0.7115 |
| | Y 3:5 | Bias | 0.4828 | 0.4438 | 0.4594 | 0.4208 | 0.4382 | −0.4087 |
| | | MSPE | 0.8108 | 0.7385 | 0.8023 | 0.7226 | 0.7983 | 0.7179 |
| | Y 5:5 | Bias | −0.5037 | 0.4556 | 0.4728 | 0.4372 | 0.4558 | −0.4154 |
| | | MSPE | 0.9556 | 0.7458 | 0.9116 | 0.7321 | 0.8509 | 0.7249 |
| (50, 30) | Y 1:20 | Bias | −0.4336 | 0.3923 | 0.4258 | 0.3852 | 0.4122 | −0.3721 |
| | | MSPE | 0.7728 | 0.7033 | 0.7619 | 0.6927 | 0.7542 | 0.6886 |
| | Y 10:20 | Bias | 0.4585 | −0.4135 | 0.4448 | −0.4089 | 0.4325 | 0.3942 |
| | | MSPE | 0.7862 | 0.7086 | 0.7741 | 0.7018 | 0.7639 | 0.6952 |
| | Y 20:20 | Bias | −0.4736 | −0.4392 | 0.4682 | 0.4282 | −0.4567 | 0.4122 |
| | | MSPE | 0.7958 | 0.7224 | 0.7854 | 0.7166 | 0.7741 | 0.7052 |
| (50, 40) | Y 1:10 | Bias | 0.4293 | 0.3858 | 0.4015 | 0.3643 | 0.3855 | 0.3597 |
| | | MSPE | 0.7346 | 0.6625 | 0.7162 | 0.6433 | 0.7016 | 0.6331 |
| | Y 5:10 | Bias | 0.4381 | 0.3951 | −0.4139 | 0.3779 | 0.3974 | 0.3686 |
| | | MSPE | 0.7406 | 0.6864 | 0.7324 | 0.6741 | 0.7152 | 0.6576 |
| | Y 10:10 | Bias | 0.4532 | 0.4120 | 0.4294 | 0.3906 | −0.4107 | 0.3775 |
| | | MSPE | 0.7745 | 0.7154 | 0.7486 | 0.6986 | 0.7374 | 0.6693 |
| (100, 75) | Y 1:25 | Bias | 0.3828 | −0.3569 | −0.3766 | 0.3458 | 0.3615 | 0.3346 |
| | | MSPE | 0.6589 | 0.5962 | 0.6413 | 0.5832 | 0.6280 | 0.5673 |
| | Y 13:25 | Bias | 0.4063 | −0.3779 | 0.3951 | 0.3665 | 0.3878 | 0.3532 |
| | | MSPE | 0.7066 | 0.6382 | 0.6871 | 0.6119 | 0.5714 | 0.5879 |
| | Y 25:25 | Bias | 0.4180 | −0.3913 | 0.4096 | 0.3857 | −0.3961 | −0.3677 |
| | | MSPE | 0.7668 | 0.6893 | 0.7503 | 0.6680 | 0.7387 | 0.6459 |
| (100, 90) | Y 1:10 | Bias | −0.3397 | 0.3067 | 0.3265 | 0.2854 | 0.3159 | 0.2678 |
| | | MSPE | 0.6158 | 0.5437 | 0.6008 | 0.5216 | 0.5837 | 0.5076 |
| | Y 5:10 | Bias | 0.3518 | 0.3196 | −0.3381 | 0.2955 | 0.3292 | −0.2779 |
| | | MSPE | 0.6374 | 0.5722 | 0.6192 | 0.5594 | 0.6083 | 0.5280 |
| | Y 10:10 | Bias | 0.3782 | 0.3346 | 0.3591 | −0.3083 | 0.3511 | 0.2936 |
| | | MSPE | 0.6733 | 0.6011 | 0.6527 | 0.5738 | 0.6339 | 0.5458 |
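Tables 5 and 6 report the bias and MSPE of Bayesian point predictors (BPs) of the unobserved order statistics Y k:(n−m). The importance and M–H samplers behind those columns are developed in the paper itself; the sketch below only illustrates the posterior-predictive idea, with `post_draws` standing in for a set of posterior draws of (α, β) obtained by either sampler. It relies on a standard fact: conditional on the largest observed failure Y m:n = t_m, the n − m censored lifetimes are distributed as order statistics of the BS law truncated below t_m, so each posterior draw yields one simulated value of the k-th censored failure.

```python
import random
from statistics import NormalDist, fmean

_PHI = NormalDist()  # standard normal, stdlib (cdf and inv_cdf)

def bs_cdf(t, alpha, beta):
    """BS(alpha, beta) CDF: Phi((sqrt(t/beta) - sqrt(beta/t)) / alpha)."""
    return _PHI.cdf(((t / beta) ** 0.5 - (beta / t) ** 0.5) / alpha)

def bs_quantile(u, alpha, beta):
    """BS quantile function via the normal representation."""
    w = alpha * _PHI.inv_cdf(u) / 2.0
    return beta * (w + (w * w + 1.0) ** 0.5) ** 2

def predict_order_stat(k, n, m, t_m, post_draws, rng):
    """Posterior-predictive point prediction of the k-th censored failure.
    Given Y_{m:n} = t_m, the n-m unobserved lifetimes are order statistics
    of the BS law truncated below t_m; simulate them for each posterior
    draw (alpha, beta) and average the k-th one."""
    preds = []
    for alpha, beta in post_draws:
        lo = bs_cdf(t_m, alpha, beta)
        # sorted uniforms on [lo, 1), mapped through the quantile function,
        # give the order statistics of the truncated distribution
        u = sorted(lo + (1.0 - lo) * rng.random() for _ in range(n - m))
        preds.append(bs_quantile(u[k - 1], alpha, beta))
    return fmean(preds)
```

Averaging bias and squared error of such predictions over replications produces summaries of the kind tabulated above.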
Table 7. AWs and CPs of PIs for censored units based on importance, M–H and Boot-t samplers when α = 2 and β = 1.5.

n = 25, m = 15
| Sampler | Y 1:10 | Y 3:10 | Y 7:10 | Y 10:10 |
|---|---|---|---|---|
| Boot-t | 2.2455 (0.9552) | 2.5622 (0.9532) | 2.7956 (0.9514) | 2.9261 (0.9493) |
| Importance | 1.9832 (0.9590) | 2.2071 (0.9577) | 2.4747 (0.9553) | 2.6437 (0.9531) |
| M–H | 1.7878 (0.9634) | 1.9144 (0.9617) | 2.1528 (0.9596) | 2.2952 (0.9580) |

n = 25, m = 20
| Sampler | Y 1:5 | Y 3:5 | Y 5:5 |
|---|---|---|---|
| Boot-t | 2.1034 (0.9541) | 2.2858 (0.9522) | 2.4235 (0.9509) |
| Importance | 1.8235 (0.9569) | 2.0486 (0.9548) | 2.1962 (0.9537) |
| M–H | 1.7168 (0.9619) | 1.8330 (0.9602) | 1.9553 (0.9589) |

n = 50, m = 30
| Sampler | Y 1:20 | Y 5:20 | Y 10:20 | Y 15:20 | Y 20:20 |
|---|---|---|---|---|---|
| Boot-t | 1.9202 (0.9521) | 2.1908 (0.9508) | 2.3212 (0.9488) | 2.5113 (0.9472) | 2.6883 (0.9457) |
| Importance | 1.7591 (0.9579) | 1.8105 (0.9564) | 1.9368 (0.9549) | 2.0827 (0.9534) | 2.1971 (0.9521) |
| M–H | 1.6044 (0.9585) | 1.7020 (0.9573) | 1.8218 (0.9562) | 1.9654 (0.9549) | 2.1340 (0.9537) |

n = 50, m = 40
| Sampler | Y 1:10 | Y 3:10 | Y 7:10 | Y 10:10 |
|---|---|---|---|---|
| Boot-t | 1.7352 (0.9508) | 1.8547 (0.9487) | 1.9252 (0.9471) | 2.1624 (0.9460) |
| Importance | 1.5765 (0.9563) | 1.6675 (0.9549) | 1.7428 (0.9535) | 1.9098 (0.9523) |
| M–H | 1.2809 (0.9574) | 1.4270 (0.9562) | 1.5151 (0.9550) | 1.6443 (0.9542) |

n = 100, m = 50
| Sampler | Y 1:50 | Y 10:50 | Y 20:50 | Y 30:50 | Y 40:50 | Y 50:50 |
|---|---|---|---|---|---|---|
| Boot-t | 1.5937 (0.9492) | 1.7908 (0.9479) | 1.9655 (0.9464) | 2.1903 (0.9451) | 2.3765 (0.9438) | 2.5826 (0.9425) |
| Importance | 1.3866 (0.9547) | 1.5624 (0.9533) | 1.7139 (0.9519) | 1.8608 (0.9505) | 1.9890 (0.9487) | 2.1423 (0.9465) |
| M–H | 1.2921 (0.9558) | 1.4002 (0.9549) | 1.5520 (0.9534) | 1.7292 (0.9520) | 1.8937 (0.9511) | 2.0282 (0.9497) |

n = 100, m = 75
| Sampler | Y 1:25 | Y 10:25 | Y 20:25 | Y 25:25 |
|---|---|---|---|---|
| Boot-t | 1.4652 (0.9478) | 1.6254 (0.9466) | 1.8429 (0.9453) | 1.9492 (0.9441) |
| Importance | 1.2236 (0.9521) | 1.4351 (0.9511) | 1.6023 (0.9496) | 1.7385 (0.9485) |
| M–H | 1.1305 (0.9532) | 1.2824 (0.9523) | 1.4325 (0.9510) | 1.5027 (0.9496) |

n = 100, m = 90
| Sampler | Y 1:10 | Y 3:10 | Y 7:10 | Y 10:10 |
|---|---|---|---|---|
| Boot-t | 1.2652 (0.9460) | 1.3824 (0.9446) | 1.4629 (0.9431) | 1.5492 (0.9418) |
| Importance | 1.0952 (0.9508) | 1.2251 (0.9485) | 1.3423 (0.9471) | 1.4138 (0.9556) |
| M–H | 0.8505 (0.9517) | 0.9824 (0.9504) | 1.1482 (0.9484) | 1.2627 (0.9471) |
Table 8. AWs and CPs of PIs for censored units based on importance, M–H and Boot-t samplers when α = 1.5 and β = 1.0.

n = 25, m = 15
| Sampler | Y 1:10 | Y 5:10 | Y 10:10 |
|---|---|---|---|
| Boot-t | 1.8862 (0.9528) | 1.9374 (0.9512) | 2.2138 (0.9501) |
| Importance | 1.6974 (0.9545) | 1.8062 (0.9533) | 1.9318 (0.9520) |
| M–H | 1.5217 (0.9571) | 1.6682 (0.9553) | 1.7991 (0.9533) |

n = 25, m = 20
| Sampler | Y 1:5 | Y 3:5 | Y 5:5 |
|---|---|---|---|
| Boot-t | 1.7605 (0.9516) | 1.8426 (0.9504) | 1.9571 (0.9490) |
| Importance | 1.6033 (0.9532) | 1.7283 (0.9519) | 1.8479 (0.9509) |
| M–H | 1.4880 (0.9547) | 1.6046 (0.9535) | 1.7379 (0.9522) |

n = 50, m = 30
| Sampler | Y 1:20 | Y 10:20 | Y 20:20 |
|---|---|---|---|
| Boot-t | 1.6254 (0.9502) | 1.7533 (0.9489) | 1.8438 (0.9473) |
| Importance | 1.5039 (0.9517) | 1.6117 (0.9504) | 1.6995 (0.9491) |
| M–H | 1.3906 (0.9530) | 1.5056 (0.9518) | 1.5927 (0.9505) |

n = 50, m = 40
| Sampler | Y 1:10 | Y 5:10 | Y 10:10 |
|---|---|---|---|
| Boot-t | 1.4995 (0.9488) | 1.6228 (0.9475) | 1.7168 (0.9462) |
| Importance | 1.3455 (0.9498) | 1.4570 (0.9487) | 1.5617 (0.9474) |
| M–H | 1.2018 (0.9515) | 1.2984 (0.9504) | 1.4118 (0.9492) |

n = 100, m = 75
| Sampler | Y 1:25 | Y 13:25 | Y 25:25 |
|---|---|---|---|
| Boot-t | 1.3161 (0.9459) | 1.4471 (0.9446) | 1.5682 (0.9432) |
| Importance | 1.1535 (0.9472) | 1.2659 (0.9460) | 1.3829 (0.9446) |
| M–H | 1.0259 (0.9488) | 1.1138 (0.9474) | 1.2492 (0.9461) |

n = 100, m = 90
| Sampler | Y 1:10 | Y 5:10 | Y 10:10 |
|---|---|---|---|
| Boot-t | 1.1788 (0.9443) | 1.2958 (0.9430) | 1.3823 (0.9418) |
| Importance | 1.0209 (0.9457) | 1.1327 (0.9444) | 1.2273 (0.9430) |
| M–H | 0.8027 (0.9469) | 0.8912 (0.9460) | 0.9798 (0.9448) |
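Tables 7 and 8 summarize prediction intervals (PIs) by their average width (AW) and, in parentheses, their coverage probability (CP). However the intervals themselves are constructed (importance, M–H, or Boot-t), these two summaries are tallied the same way over simulation replications; a minimal sketch, with hypothetical inputs:

```python
from statistics import fmean

def aw_cp(intervals, truths):
    """Average width (AW) and empirical coverage probability (CP) of
    prediction intervals over replications. `intervals` is a list of
    (lower, upper) pairs; `truths` holds the realized value of the
    censored order statistic in each replication."""
    widths = [upper - lower for lower, upper in intervals]
    covered = [1.0 if lower <= y <= upper else 0.0
               for (lower, upper), y in zip(intervals, truths)]
    return fmean(widths), fmean(covered)
```

For a nominal 95% interval, a well-calibrated sampler should give CP near 0.95 with AW as small as possible, which is the comparison the two tables make across the three samplers.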
Bdair, O.M. Inference for Two-Parameter Birnbaum–Saunders Distribution Based on Type-II Censored Data with Application to the Fatigue Life of Aluminum Coupon Cuts. Mathematics 2025, 13, 590. https://doi.org/10.3390/math13040590
