Parameter Estimation of Birnbaum-Saunders Distribution under Competing Risks Using the Quantile Variant of the Expectation-Maximization Algorithm

Abstract: Competing risks models, also known as weakest-link models, are utilized to analyze diverse strength distributions exhibiting multi-modality, often attributed to various types of defects within the material. The weakest-link theory posits that a material’s fracture is dictated by its most severe defect. However, multimodal problems can become intricate due to potential censoring, a common constraint stemming from time and cost limitations during experiments. Additionally, determining the mode of failure can be challenging due to factors like the absence of suitable diagnostic tools, costly autopsy procedures, and other obstacles, collectively referred to as the masking problem. In this paper, we investigate the distribution of strength for multimodal failures with censored data. We consider both full and partial masking and present an EM-type parameter estimate for the Birnbaum-Saunders distribution under competing risks. We compare the results with those obtained from other distributions, such as the lognormal, Weibull, and Wald (inverse-Gaussian) distributions. The effectiveness of the proposed method is demonstrated through two illustrative examples, as well as an analysis of the sensitivity of parameter estimates to variations in starting values.


Introduction
To design structures capable of withstanding predicted stresses, understanding the strength of the material they are constructed from is essential. Typically, materials undergo testing in a laboratory to ascertain their strength properties. Statistical models are then utilized to forecast the strength of structures or specimens of varying sizes. This is particularly critical for modern composite materials, where flaws may occur randomly during testing, consequently impacting the strength of individual specimens. Therefore, we may consider the individual strength of a specimen as a random variable, with its probability distribution function used to predict the strength of larger structures made from the same material. This underscores the importance of selecting suitable statistical models that effectively capture the observed data. The conventional approach in material property research has typically operated under the assumption that material strength adheres to the Weibull distribution, resulting in the creation of linear Weibull plots. However, several studies have noted the existence of various flaw types that can lead to material fractures (see, for example, [1][2][3][4][5][6], to name just a few).
It is noteworthy that several researchers have proposed statistical strength distributions based on concepts such as the "weakest-link theory" or "competing risks" to address the presence of multiple potential causes of failure. For example, Goda and Fukunaga [5] utilized a multi-modal Weibull distribution to analyze the strength distributions of silicon carbide and alumina fibers. Wagner [7] explored the competing risks model, while Taylor [8] proposed a Poisson-Weibull flaw model. Additionally, several authors have proposed end-effects models (or clamp-effects models) to study the strengths of exceptionally small fiber or composite specimens, as demonstrated by [9][10][11], among others.
These observations highlighted the need for developing efficient parameter estimation methodologies that can handle the Birnbaum-Saunders model in the context of the competing risks problem (see, for example, [25][26][27][28]). This paper makes a contribution by introducing an expectation-maximization (EM) algorithm for parameter estimation within the Birnbaum-Saunders competing risks model, addressing both masking and censoring effects. Initially, we delve into the complexities of multimodal problems involving masking and censoring under the Birnbaum-Saunders competing risks model. Subsequently, we propose a reliable parameter estimation approach utilizing the quantile variant of the EM (QEM) algorithm by [29].
Since implementing the EM algorithm can be challenging due to the requirement of explicit integration of the log-likelihood function in the E-step, the QEM variant provides a viable alternative.
We present an EM-type parameter estimate based on QEM, known for its stability in estimation and its capability to handle a large number of failure modes.
The structure of the remainder of this paper is as follows: A basic summary of the competing risks model is offered in Section 2. In Section 3, we present the distribution of material strength and its corresponding likelihood function. We discuss the EM and QEM algorithms in Section 4. We describe the QEM algorithm for parameter estimation in Section 5. We provide real-data examples for illustrative purposes in Section 6. We investigate the sensitivity of parameter estimates for the Newton-Raphson-type and QEM methods in Section 7. The paper ends with concluding remarks in Section 8.

Basics on Competing Risks Model
The examination of failure time or lifetime data has attracted much attention across various fields of study, including mechanical engineering, industrial engineering, electrical engineering, material science, and more. When examining industrial systems comprised of multiple interconnected components, system failure occurs when any of its components fails first, a phenomenon known as competing risks. Sometimes, identifying the exact cause of failure can be challenging or costly, resulting in cases of masking, where the failure time of an individual is observed without conducting a thorough investigation into the cause. This paper explores situations where the failure mode of the ith system may not always be definitively determined, and it is indicated by a subset of labels that define the component within the module.
As an illustration, if the ith system, consisting of J components, experiences failure because of the jth component only, the label is represented as a singleton M_i = {j} (indicating no masking). If the cause of failure is completely unknown, the set of labels becomes M_i = {1, 2, . . ., J} (indicating complete masking). In cases where failure is recognized by modes that involve two or more components but not necessarily all of them, the set of labels is denoted as M_i = {j_1, j_2, . . ., j_k} (indicating partial masking). Additionally, the problem can become more complex due to potential censoring, which commonly occurs in practical experiments due to time and cost constraints. Data are considered right- or left-censored when certain observations have either a lower or upper bound on their lifetime.
The conventional method for managing competing risks typically entails using hypothetical latent lifetimes corresponding to each possible cause of failure individually [30]. To illustrate this, consider the following notation. A subject is subjected to various potential causes of failure, where there are J distinct modes of failure, each denoted by j = 1, 2, . . ., J. We represent T_i^(j) (i = 1, 2, . . ., n) as the continuous lifetime of the ith individual attributed to the jth cause. We assume that the T_i^(j) are independent for all i and j and are identically distributed for all i given j. The cumulative distribution function, probability density function, survival function, and hazard rate function of T_i^(j) are denoted by F(·|θ^(j)), f(·|θ^(j)), S(·|θ^(j)), and h(·|θ^(j)), respectively, where θ^(j) represents a vector of parameters for the jth cause. Then, we observe the lifetime of the ith subject, denoted by T_i, which is determined by the random variable

T_i = min{T_i^(1), T_i^(2), . . ., T_i^(J)}.

In real-world reliability analysis scenarios, obtaining complete observations of T_i along with the jth cause may not always be feasible due to various masking and censoring mechanisms inherent in data collection. Hence, we additionally assume that each T_i could potentially undergo masking or censoring. We assume that observations may undergo random right-censoring, where censoring times C_i are independent of lifetimes T_i for all i, and masking occurs with the set of labels defining the failed components. Consequently, we can observe triplets (Y_i, M_i, ∆_i), where Y_i = min(T_i, C_i), and ∆_i is a censoring indicator variable defined as ∆_i = j if Y_i = T_i with the failure attributed to the jth cause, and ∆_i = 0 if Y_i = C_i (censoring). Let y_i and δ_i represent a realization of the random variables Y_i and ∆_i, respectively. Cox [31] initially explored the analysis of exponential data with dual causes, a study later expanded to encompass multiple causes by Herman and Patell [32]. Miyakawa [33] addressed the parametric estimation challenge in scenarios involving two causes and potential missing causes without censored data. The studies in [34][35][36][37] tackled the masking issue, albeit focusing solely on exponential models and offering explicit solutions under stringent assumptions. Kundu and Basu [38] built upon Miyakawa's research by providing approximations and asymptotic properties for parameter estimators, confidence intervals, and bootstrap confidence bounds. They obtained the exact maximum likelihood estimator for the exponential model with two causes and formulated likelihood equations for the Weibull case. However, their exact maximum likelihood estimator is only applicable in scenarios of complete masking without considering censored data. Park and Kulasekera [39] expanded upon earlier studies by proposing a closed-form maximum likelihood estimator for the exponential model. This estimator addresses scenarios involving multiple causes, censored data, and fully masked causes. They focused solely on cases where lifetime distributions adhered to exponential and Weibull distributions. Notably, the closed-form maximum likelihood estimator for the Weibull distribution necessitates prior estimation of the common shape parameter via the likelihood function.
Ishioka and Nonaka [40] introduced a reliable method for estimating the shared Weibull shape parameter involving two causes. They utilized a quasi-Newton method in cases where only system lifetime data are available and the associated indicator is undisclosed, akin to the masking issue under discussion. However, their methodology is confined to scenarios with two causes and a shared shape parameter. Another method suggested by [41] employs the EM algorithm. EM sequences for the exponential model, addressing diverse scenarios involving multiple causes, censoring, and extensive masking, were also developed by [41]. However, their method is limited to exponential lifetime distributions because it requires closed forms for the hazard rate and survival functions.
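To make the observed triplets (y_i, M_i, δ_i) concrete, the following sketch simulates data from a J-component series system with random right-censoring and complete masking. This is an illustrative sketch only: exponential latent lifetimes are used as a simple stand-in distribution, and the function name `simulate_competing_risks` and its parameters are ours, not from the paper.

```python
import random

def simulate_competing_risks(n, rates, censor_rate, mask_prob, seed=0):
    """Simulate (y_i, M_i, delta_i) triplets from a J-component series system.

    Latent lifetimes T_i^(j) are exponential with the given rates (a stand-in
    choice); censoring times are exponential with rate `censor_rate`; an
    observed failure is completely masked with probability `mask_prob`.
    """
    rng = random.Random(seed)
    J = len(rates)
    data = []
    for _ in range(n):
        latents = [rng.expovariate(r) for r in rates]   # T_i^(j), j = 1..J
        t = min(latents)                                # weakest-link lifetime T_i
        cause = latents.index(t) + 1                    # true failure mode
        c = rng.expovariate(censor_rate)                # censoring time C_i
        if c < t:
            data.append((c, set(), 0))                  # censored: delta_i = 0
        elif rng.random() < mask_prob:
            data.append((t, set(range(1, J + 1)), -1))  # completely masked
        else:
            data.append((t, {cause}, cause))            # cause observed
    return data
```

Partial masking would simply replace the full label set with a proper subset containing the true cause.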

Distribution of Material Strength
The weakest-link theory is widely utilized to analyze material strength across multiple failure modes. This theory operates under two key assumptions [3,5]: (i) the material possesses numerous defects that limit its strength, with the material's strength being determined by the weakest defect present, and (ii) defects do not interact with each other. These assumptions align with the competing risks model, which is based on hypothetical latent lifetimes. By employing the material's observed strengths rather than lifetimes, we can apply the theory of competing risks models in this scenario. Suppose that there are J independent defect types in the material specimen and each failure mode is denoted by j = 1, 2, . . ., J. Let T_i^(j) (i = 1, 2, . . ., n) represent the strength of the ith material specimen attributed to the jth defect type. As mentioned earlier, the recorded strength of the ith material specimen is determined by T_i = min{T_i^(1), T_i^(2), . . ., T_i^(J)}. Then, the strength distribution of T_i is expressed as

F(t|Θ) = 1 − ∏_{j=1}^{J} S(t|θ^(j)),

where Θ = (θ^(1), θ^(2), · · ·, θ^(J)).

Construction of Likelihood Function
In this section, we present a concise overview of the general maximum likelihood method proposed by [42,43]. We represent the indicator function of an event A as I[A]. For ease of notation, we define I_i(j) = I[δ_i = j], and we denote δ_i = 0 in cases of censoring. The likelihood function of the censored sample is then given by

L(Θ) = ∏_{i=1}^{n} { [ f(y_i|θ^(1)) / S(y_i|θ^(1)) ]^{I_i(1)} · · · [ f(y_i|θ^(J)) / S(y_i|θ^(J)) ]^{I_i(J)} ∏_{j=1}^{J} S(y_i|θ^(j)) } = ∏_{j=1}^{J} ∏_{i=1}^{n} L_i(θ^(j)),    (1)

where

L_i(θ^(j)) = f(y_i|θ^(j))^{I_i(j)} S(y_i|θ^(j))^{1−I_i(j)}    (2)

is the likelihood function for each cause j. This implies that we can streamline the joint maximum likelihood problem for a set of J parameters into J distinct estimation tasks for the single parameter θ^(j), thereby significantly simplifying the numerical computations.
Subsequently, we examine the lifetime of a subject T_i, which is subjected to masking, where the failure mode is only known to be one of the elements in a set M_i. We must incorporate this into the likelihood function. It is essential to note that the cumulative incidence function (CIF) for each jth cause is represented by

F_j(t|Θ) = P(T_i ≤ t, ∆_i = j) = ∫_0^t f_j(u|Θ) du,    (3)

with its corresponding sub-density function

f_j(t|Θ) = f(t|θ^(j)) ∏_{ℓ≠j} S(t|θ^(ℓ)),

where j = 1, 2, . . ., J. The probability density function of T_i with M_i is then given by

f_{M_i}(t|Θ) = ∑_{j∈M_i} f_j(t|Θ).    (4)

For notational convenience, we denote δ_i = −1 if the cause of failure is unknown. Hence, the overall likelihood incorporating masking and censoring is expressed as

L*(Θ) = ∏_{i=1}^{n} L*_i(Θ),    (5)

where

L*_i(Θ) = f_{M_i}(y_i|Θ)^{I_i(−1)} [ ∏_{j=1}^{J} L_i(θ^(j)) ]^{1−I_i(−1)},

and L_i(θ^(j)) is from (2). Generally, deriving the closed-form maximum likelihood estimate from the aforementioned likelihood function is highly challenging, if not impossible, requiring the use of numerical techniques to maximize L*(Θ). While the Newton-Raphson method is commonly employed, it might exhibit sensitivity to initial values and could potentially fail to converge towards a solution. Additionally, in cases where the likelihood function involves a large number of failure modes, it may become over-parameterized, making direct maximization using the Newton-Raphson method ineffective. To address these challenges, the EM algorithm is utilized, as elaborated in the subsequent section.
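The per-cause factorization in (2) can be sketched as follows. Each observation contributes a density term to cause j when it fails from that cause and a survival term otherwise (censored or failed from another cause). The function name and the exponential example are ours for illustration; the paper itself applies this to the Birnbaum-Saunders model.

```python
import math

def per_cause_loglik(pdf, surv, theta, data, j):
    """Log-likelihood of cause j under censoring (no masking), following the
    factorized form: log f for failures from cause j, log S otherwise.

    `data` is a list of (y_i, delta_i) pairs; delta_i = 0 means censored.
    """
    ll = 0.0
    for y, delta in data:
        if delta == j:
            ll += math.log(pdf(y, theta))   # failure attributed to cause j
        else:
            ll += math.log(surv(y, theta))  # censored, or failed from another cause
    return ll
```

For exponential lifetimes this log-likelihood is maximized in closed form at rate = (failures from cause j) / (total exposure), which provides a quick sanity check of the factorization.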

The EM and QEM Algorithms
Here, we present the EM algorithm and formulate likelihood functions that are appropriate for utilization in the E-step of both the EM and QEM algorithms.

The EM Algorithm for Competing Risks Model
The EM algorithm is an iterative technique utilized to compute the ML estimates of parametric models. It proves particularly valuable when closed-form ML estimates are unavailable or when dealing with incomplete data. The EM algorithm was originally proposed by Dempster et al. [44]. References such as [45][46][47][48] provide comprehensive information on the EM algorithm. For recent studies on the EM algorithm, readers are referred to [49,50].
The issue to address is whether the EM algorithm is applicable to the competing risks problem. When faced with masked data, where the failure mode belongs to a set M_i, the exact failure mode is unknown. Nonetheless, we can construct a complete-data likelihood, L_i^c(Θ), by treating the failure mode as missing information. To accomplish this, we can introduce an indicator variable to facilitate the construction of the complete-data likelihood.
Let U_i^(j) be an indicator variable that equals one if the jth cause is responsible for the failure of the ith subject and zero otherwise. Given a masked failure observed at y_i with label set M_i, U_i^(j) follows a Bernoulli distribution with probability given by

P(U_i^(j) = 1 | y_i, M_i, Θ) = f_j(y_i|Θ) / ∑_{ℓ∈M_i} f_ℓ(y_i|Θ).    (7)

It is immediate upon using (4) that we have ∑_{j∈M_i} U_i^(j) = 1. We follow Section 2.8.2 of [51] by replacing f(y_i|θ^(j)) = h(y_i|θ^(j)) × S(y_i|θ^(j)), along with L_i(θ^(j)) from (2). Then the complete-data likelihood with masking and censoring is expressed as

L_i^c(Θ) = ∏_{j=1}^{J} L_i^c(θ^(j)),    (8)

where

L_i^c(θ^(j)) = h(y_i|θ^(j))^{U_i^(j)} S(y_i|θ^(j)) if δ_i = −1, and L_i^c(θ^(j)) = h(y_i|θ^(j))^{I_i(j)} S(y_i|θ^(j)) otherwise.    (9)

When δ_i = j, we have a singleton M_i = {j} so that U_i^(j) ≡ 1. Then we have

h(y_i|θ^(j))^{I_i(j)} = h(y_i|θ^(j))^{U_i^(j) I_i(j)}.    (10)

It is immediate upon using (10) and ∑_{ℓ=−1}^{J} I_i(ℓ) = 1 that we can simplify (9) as follows:

L_i^c(θ^(j)) = h(y_i|θ^(j))^{U_i^(j) I_i(−1) + I_i(j)} S(y_i|θ^(j)).    (11)

It is important to highlight that the likelihood L_i^c(Θ) in (8) is fully decomposed into the factors L_i^c(θ^(j)). As a result, the estimation challenge can be addressed independently for each individual parameter θ^(j), utilizing L^c(θ^(j)) = ∏_{i=1}^{n} L_i^c(θ^(j)). Therefore, similar to our strategy in (2), employing this factorized complete-data likelihood L^c(θ^(j)) instead of ∏_{i=1}^{n} L*_i(Θ) enables us to simplify the joint maximum likelihood problem involving a set of J parameters into J individual estimation problems, each focused on a single parameter θ^(j).
Consequently, the maximization of the overall likelihood L*(Θ) is achieved by maximizing the J complete-data likelihoods L^c(θ^(j)) = ∏_{i=1}^{n} L_i^c(θ^(j)) for j = 1, 2, . . ., J. While solving the likelihood presented in (5) poses challenges due to numerical complexities, adopting an EM framework and treating masked data as missing data enables the construction of a likelihood comprising individual likelihoods for each parameter θ^(j). This transformation of the problem into a missing-data problem significantly simplifies the numerical challenges. However, applying the EM algorithm in the case of missing data may not be immediately apparent, as will be discussed in the following section.
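The E-step weight for a masked observation — the posterior probability that cause j produced the failure, which is proportional to the sub-density f_j(y_i|Θ) over the masking set — can be sketched as below. The function name is ours, and the sub-density values are assumed to have been evaluated elsewhere at the observed strength.

```python
def cause_posterior(sub_densities, mask):
    """Posterior probability that cause j produced a failure whose cause is
    only known to lie in the masking set `mask`.

    `sub_densities[j]` holds f_j(y|Theta) evaluated at the observed value;
    causes outside the mask receive probability zero.
    """
    total = sum(sub_densities[j] for j in mask)
    return {j: (sub_densities[j] / total if j in mask else 0.0)
            for j in sub_densities}
```

For a fully observed cause the mask is a singleton and the weight degenerates to one, matching the unmasked case.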
When assuming an exponential distribution of lifetimes and encountering both masked and censored data, using the EM algorithm based on (11) is straightforward since the hazard rate and survival functions have closed forms. However, in scenarios where lifetimes follow other distributions such as a normal distribution and the data consist of both masked and censored observations, utilizing (11) is not straightforward due to the absence of closed-form hazard rate and survival functions. In such cases, although the overall likelihood cannot be represented as a product of individual likelihoods with single parameters, we can express the complete-data likelihood in (11) using closed-form probability density functions by treating censored observations as missing data. Consequently, implementing the EM algorithm is streamlined, and this method is applicable to various distributions, such as exponential, normal, lognormal, Weibull, and Wald (inverse Gaussian) distributions. For further details on its implementations, readers are directed to [42,52].
Here, we illustrate the process of formulating the complete-data likelihood using closed-form probability density functions by treating censored observations as missing data. Let Z_i represent a truncated version of X_i at x_i, where Z_i > x_i. Then, the complete-data likelihood corresponding to (9) can be expressed with closed-form densities by replacing each censored survival factor with the density of Z_i, where the probability density function of Z_i is given by

f_{Z_i}(t|θ^(j)) = f(t|θ^(j)) / S(x_i|θ^(j)),

for t > x_i.
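The truncated density f(t)/S(x) for t > x is straightforward to sketch in code. The helper below is illustrative (names are ours); for the exponential distribution the truncated density reduces to a shifted copy of the original density, which gives a convenient check.

```python
import math

def truncated_pdf(pdf, surv, x):
    """Return the density of Z = (X | X > x): f(t)/S(x) for t > x, else 0."""
    sx = surv(x)  # normalizing constant S(x)
    def fz(t):
        return pdf(t) / sx if t > x else 0.0
    return fz
```

This is exactly the density used for each censored observation when censoring is treated as missing data.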

Quantile Variant of the EM Algorithm
An obstacle in implementing the EM algorithm lies in the necessity to integrate the log-likelihood function in closed form during each E-step. This explicit integration can be circumvented by employing the QEM algorithm. The QEM approximation of the expected log-likelihood in the E-step is expressed as

Q(Θ|Θ_s) ≈ (1/K) ∑_{k=1}^{K} log L^c(Θ; q_k).

Here, log L^c(·) denotes the complete-data log-likelihood in the EM algorithm, q_k denotes the ξ_k quantile of the conditional distribution of the missing data given the observed data and the current estimate Θ_s, and ξ_k represents various deterministic sequences such as k/K, k/(K + 1), (k − 1/2)/K, etc. In this paper, we utilize ξ_k = (k − 1/2)/K for k = 1, 2, . . ., K. It is important to note that the QEM achieves accuracy at the deterministic rate of O(1/K²). Alternatively, one may consider employing the Monte Carlo EM (MCEM) as proposed by [53]. However, the MCEM achieves accuracy at the probabilistic rate of O_p(1/√K). As a result, the QEM exhibits faster and more stable convergence properties in comparison to those of the MCEM. For further details, readers can refer to [29].
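The quantile approximation replaces the E-step integral E[g(Z)] by an average of g over K deterministic quantiles. The sketch below (function name ours) demonstrates this for a right-truncated exponential variable, for which the conditional mean E[Z | Z > x] = x + 1/r is known exactly by the memoryless property, so the approximation error can be checked directly.

```python
import math

def qem_expectation(quantile_fn, g, K):
    """QEM-style approximation of E[g(Z)]: average g over the K quantiles
    F^{-1}(xi_k) with xi_k = (k - 1/2)/K, avoiding explicit integration."""
    return sum(g(quantile_fn((k - 0.5) / K)) for k in range(1, K + 1)) / K

# Truncated exponential Z | Z > x with rate r: F^{-1}(xi) = x - log(1 - xi)/r.
x, r = 2.0, 1.5
q = lambda xi: x - math.log(1.0 - xi) / r
approx = qem_expectation(q, lambda z: z, K=1000)
exact = x + 1.0 / r  # memoryless property of the exponential
```

With K = 1000 the deterministic approximation already agrees with the exact value to within about 10^-3, whereas a Monte Carlo average of the same size fluctuates at the O_p(1/√K) scale.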

Parameter Estimation Using the QEM
In this section, we develop the EM-type maximum likelihood estimate of the parameters of the Birnbaum-Saunders distribution [12] under competing risks using the QEM algorithm in the presence of masking and censoring.

Suppose that T_i^(j) is a Birnbaum-Saunders random variable with shape and scale parameters θ^(j) = (α^(j), β^(j)). The respective probability density function and cumulative distribution function of the Birnbaum-Saunders distribution are then provided as follows:

f(t|α^(j), β^(j)) = 1/(2√(2π) α^(j) β^(j)) [ (β^(j)/t)^{1/2} + (β^(j)/t)^{3/2} ] exp{ −(1/(2[α^(j)]²)) (t/β^(j) + β^(j)/t − 2) },  t > 0,    (12)

and

F(t|α^(j), β^(j)) = Φ( (1/α^(j)) [ (t/β^(j))^{1/2} − (β^(j)/t)^{1/2} ] ),

respectively. Here, Φ(·) is the standard normal cumulative distribution function. It should be noted that Engelhardt et al. [14] provided a convenient method for obtaining the maximum likelihood estimate under no competing risks. It is immediate upon using (12) that the complete-data log-likelihood log L_i^c(θ^(j)) is available in closed form. The QEM approximation of E[log L_i^c(θ^(j))|Θ_s] is then obtained by averaging log L_i^c(θ^(j)) over the K quantiles q_{s,i,k}^(j) of the conditional distribution of the missing lifetime, and it involves only the two summary quantities

q̄_{s,i}^(j) = ∑_{k=1}^{K} q_{s,i,k}^(j)/K  and  q*_{s,i}^(j) = ( ∑_{k=1}^{K} [q_{s,i,k}^(j)]^{−1}/K )^{−1},

that is, the arithmetic and harmonic means of the quantiles. We can calculate the masking weight Υ_{s,i}^(j) using (7). Then we obtain the corresponding QEM approximation of Q(Θ|Θ_s) = ∑_{j=1}^{J} ∑_{i=1}^{n} E[log L_i^c(θ^(j))|Θ_s] in terms of Υ_{s,i}^(j), q̄_{s,i}^(j), and q*_{s,i}^(j).
It should be noted that the quantile q_{s,i,k}^(j) of the Birnbaum-Saunders distribution truncated at x_i is obtained by inverting the cumulative distribution function, that is,

q_{s,i,k}^(j) = F^{−1}( F(x_i|θ_s^(j)) + ξ_k [1 − F(x_i|θ_s^(j))] | θ_s^(j) ),

where F^{−1}(·|θ^(j)) denotes the quantile function of the Birnbaum-Saunders distribution. Differentiating Q(Θ|Θ_s) with respect to α^(j) and equating it to zero, we can solve for α^(j) in closed form as a function of β^(j); we denote this solution by (13). Differentiating Q(Θ|Θ_s) with respect to β^(j) and equating it to zero yields a second estimating equation, denoted by (14). Substituting (13) into (14), we have only a one-dimensional estimating equation for β^(j). Let the solution of (14) be denoted by β_{s+1}^(j); the updated estimate α_{s+1}^(j) then follows from (13). The QEM algorithm can stop if the changes are all relatively small, such as |α_{s+1}^(j) − α_s^(j)| < ε and |β_{s+1}^(j) − β_s^(j)| < ε for j = 1, 2, . . ., J.
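The Birnbaum-Saunders density and distribution function above can be implemented with only the standard library. The sketch below is ours (the paper's published code is in R); it also serves as a numerical check that the density is the derivative of the CDF and that β is the median.

```python
import math

def bs_cdf(t, alpha, beta):
    """Birnbaum-Saunders CDF: Phi((1/alpha)(sqrt(t/beta) - sqrt(beta/t)))."""
    z = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_pdf(t, alpha, beta):
    """Birnbaum-Saunders density, obtained by differentiating the CDF:
    phi(z) * dz/dt with dz/dt = (sqrt(t/beta) + sqrt(beta/t)) / (2*alpha*t)."""
    z = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    dz = (math.sqrt(t / beta) + math.sqrt(beta / t)) / (2.0 * alpha * t)
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi) * dz
```

Note that F(β|α, β) = Φ(0) = 1/2 for any α, so β is the median of the distribution, which is why it serves as the scale parameter.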

Examples
We consider two examples in this section.The analysis of these datasets was carried out using the R programming language [54].The R codes employed for analyzing the datasets presented in the examples can be found at the following URL: https://github.com/AppliedStat/R-code/tree/master/2024b.
To assess the goodness of fit of the models, we can utilize the mean square error (MSE) between the fitted model and the empirical distribution. For notational convenience, let the t_i be sorted such that t_1 ≤ t_2 ≤ · · · ≤ t_n. Let F_n(t_i) represent the empirical cumulative distribution function, and F(t_i; θ) denote the fitted cumulative distribution function using the maximum likelihood estimate of θ. Subsequently, the MSE for the fitted model can be computed as

MSE = (1/n) ∑_{i=1}^{n} [ F_n(t_i) − F(t_i; θ) ]².

Additionally, we consider the Kolmogorov-Smirnov statistic given by

D_n = max_i | F_n(t_i) − F(t_i; θ) |

to assess the goodness of fit of the models. Particularly, when the data are uncensored, the empirical cumulative distribution function F_n(·) can be readily determined by

F_n(t) = (1/n) ∑_{i=1}^{n} I[t_i ≤ t].

In cases where the dataset contains censoring, the straightforward calculation of the empirical cumulative distribution function F_n(t) becomes impractical. Instead, one can derive the empirical cumulative distribution function using the well-known product limit estimator of the survival function S(t) = 1 − F(t) originally proposed by [55], which is expressed as

S_n(t) = ∏_{t_i ≤ t} ( 1 − d_i/n_i ),

where n_i is the number of specimens at risk just prior to t_i and d_i is the number of failures at t_i. Consequently, we derive F_n(t) = 1 − S_n(t).
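The product-limit estimator and the two goodness-of-fit criteria above can be sketched in a few lines. The sketch assumes distinct, pre-sorted observation times; the function names are ours, not from the paper's R code.

```python
def kaplan_meier_cdf(times, deltas):
    """Product-limit estimate of F(t) = 1 - S(t) at each observation time.

    `times` must be sorted ascending with distinct values; delta = 0 marks a
    right-censored observation, any nonzero value marks a failure.
    """
    n = len(times)
    s, out = 1.0, []
    for i, (t, d) in enumerate(zip(times, deltas)):
        if d != 0:                    # a failure (from any cause)
            s *= 1.0 - 1.0 / (n - i)  # n - i subjects still at risk at t
        out.append((t, 1.0 - s))
    return out

def mse_and_ks(emp, fitted_cdf):
    """MSE and Kolmogorov-Smirnov distance between the empirical CDF values
    `emp` = [(t_i, Fn(t_i))] and a fitted CDF function."""
    diffs = [fn - fitted_cdf(t) for t, fn in emp]
    mse = sum(d * d for d in diffs) / len(diffs)
    ks = max(abs(d) for d in diffs)
    return mse, ks
```

With no censoring, the estimate reduces to the ordinary empirical CDF i/n, as expected.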
Another approach to comparing the goodness of fit for each failure mode among the models is to contrast the empirical CIF with the parametric CIF defined by (3), as suggested by Aalen [56]. However, in this paper, our focus lies on analyzing the strength distribution of the entire system or material specimen rather than the distribution for individual failure modes; hence, we do not delve into the CIF. Should one be interested in exploring the lifetime distribution for each failure mode, they are advised to examine the CIF. For additional insights into utilizing the CIF with masked data, interested readers can refer to [39,42].
In the following examples, the lognormal, Weibull, and Wald (inverse-Gaussian) distributions are employed to compare with the results obtained from the proposed method. For a comprehensive discussion of the EM sequences related to these models, readers are referred to [42,43].

Pitch-Based Fiber Microbond Testing
An experiment conducted by [57] at Clemson University aimed to explore the interfacial bond strength between carbon fiber and matrix material. It is worth noting that the average fiber diameter ranges approximately from 8 to 12 µm. In the microbond tests, ribbon fibers, which are flat-shaped rather than round-shaped, were utilized. These fibers were subjected to a droplet of epoxy resin, cured via heat treatment, and clamped in a micro-vise. Subsequently, the fiber underwent tensile loading to initiate debonding from the matrix droplet, and the resulting stress was monitored by a load cell. However, certain tested specimens exhibited inherent flaws, causing premature fiber breakage before debonding, which was treated as right-censored data. Kuhn and Padgett [58] analyzed this dataset employing kernel density estimation, focusing primarily on comparing the debonding strengths of ribbon fibers. In this scenario, all types of flaws were taken into account to estimate the strength distribution of the specimen.
The data in Table 1 display the tensile strength (in Newtons) of fibers. In this case, there are only two causes of failure: fiber breakage (denoted by B) and debonding (denoted by D). Table 2 summarizes the parameter estimates of the models under consideration. Based on the MSE criterion as provided in Table 3, the Birnbaum-Saunders model performs the best, although the MSEs of the Wald and lognormal models are also close to that of the Birnbaum-Saunders model. As depicted in the CDF plot and Weibull plot [59] in Figure 1, the lognormal and Wald models very closely approximate the Birnbaum-Saunders model.
It is important to note that the Weibull plot [59] is a visual tool used to determine if experimental data fit a Weibull distribution. This plot is generated by plotting log{−log(1 − F_n(t_i))} against log t_i. To draw the Weibull plot, one of the popular plotting position algorithms uses F_n(t_i) = (i − 3/8)/(n + 1/4) for n ≤ 10 and F_n(t_i) = (i − 1/2)/n for n ≥ 11, originating from [60,61]. For a comprehensive comparison of these plotting positions, we refer interested readers to [62].
It should also be noted that the Weibull plot is originally designed so that if the data follow a Weibull distribution, the data points will lie on a (nearly) straight line. However, this straightness can be violated due to competing risks or model departure. In the figure, we can easily observe that the data points deviate significantly from a straight line. In various tensile strength experiments, specimens often fail due to multiple factors, yet pinpointing the precise cause of failure proves challenging. Moreover, censoring occurs due to constraints in time and experiment costs.
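The Weibull plot coordinates described above can be sketched directly from the plotting positions. The function name is ours; as a check, exact Weibull quantiles produce points lying on a straight line whose slope equals the shape parameter.

```python
import math

def weibull_plot_points(ts):
    """Weibull plot coordinates (log t, log(-log(1 - Fn(t)))), using the
    plotting positions (i - 3/8)/(n + 1/4) for n <= 10 and (i - 1/2)/n
    for n >= 11."""
    ts = sorted(ts)
    n = len(ts)
    pts = []
    for i, t in enumerate(ts, start=1):
        fn = (i - 0.375) / (n + 0.25) if n <= 10 else (i - 0.5) / n
        pts.append((math.log(t), math.log(-math.log(1.0 - fn))))
    return pts
```

Departure from linearity on this plot is exactly the diagnostic used in the figure to flag competing risks or model departure.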
To illustrate the application of the proposed method, the strength data presented in Table 4 were generated from the Birnbaum-Saunders distribution with the following parameters: α^(1) = 1 and β^(1) = 1 for the first component, α^(2) = 2 and β^(2) = 1 for the second component, and α^(3) = 3 and β^(3) = 1 for the third component. The dataset assumes fractures can be attributed to surface defects (mode 1), internal defects (mode 2), or end effects caused by specimen clamping (mode 3). Censored observations are indicated by a value of 0. To showcase partial or complete masking alongside censoring, the data were censored at a value of 2.0.
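Data of this kind can be regenerated with a short sketch using the normal representation of the Birnbaum-Saunders distribution: if Z ~ N(0, 1), then T = β[(αZ/2) + √((αZ/2)² + 1)]² follows the BS(α, β) law. The function names below are ours; the paper's actual simulated dataset is the one in Table 4 and was produced with its own R code.

```python
import math
import random

def rbs(alpha, beta, rng):
    """Draw one Birnbaum-Saunders variate via its normal representation."""
    w = alpha * rng.gauss(0.0, 1.0) / 2.0
    return beta * (w + math.sqrt(w * w + 1.0)) ** 2

def simulate_bs_system(n, params, censor_at, seed=1):
    """Strengths of n specimens with latent BS failure modes given by
    `params` = [(alpha_j, beta_j), ...], right-censored at `censor_at`
    (cause recorded as 0 when censored)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        latents = [rbs(a, b, rng) for a, b in params]
        t = min(latents)                            # weakest-link strength
        if t >= censor_at:
            out.append((censor_at, 0))              # censored observation
        else:
            out.append((t, latents.index(t) + 1))   # recorded failure mode
    return out
```

Masking could then be layered on top by replacing some recorded causes with label sets, as in the earlier sections.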
Tables 5 and 6 present a summary of model estimates and MSEs, respectively. Their corresponding fits are depicted in Figure 2 through Weibull plots. Due to masking, it is not feasible to individually classify the cause of fracture; hence, only "failure" or "censored" is indicated in the figure.
By comparing the MSEs and Kolmogorov-Smirnov statistics, it is inferred that the strength distribution can be accurately represented by the Birnbaum-Saunders distribution.

Sensitivity of Parameter Estimates to Variations in Starting Values
In this section, we compare the sensitivity of estimates obtained from the Newton-Raphson-type and QEM methods to variations in starting values. Our investigation into the sensitivity of estimates unfolds as follows: We consider a sample of lifetimes with a size of n = 100 from a three-component serial system. Assuming that the lifetimes of these components adhere to the Birnbaum-Saunders distribution, we set the following population parameter values: α^(1) = 1 and β^(1) = 1 for the first component, α^(2) = 3 and β^(2) = 5 for the second component, and α^(3) = 5 and β^(3) = 10 for the third component.
We generate system lifetimes using the Birnbaum-Saunders distribution with the specified population parameter values. To obtain the six parameter estimates, we first select starting values for both methods. Here, we randomly choose six starting values from the range (1, 10) and estimate the parameters using the Newton-Raphson-type and QEM methods. Next, we generate system lifetimes again and estimate the six parameters with different starting values. By repeating this procedure 500 times, we obtain 500 sets of parameter estimates, denoted by α_i^(1), α_i^(2), α_i^(3), β_i^(1), β_i^(2), and β_i^(3) for i = 1, 2, . . ., 500. Using these 500 sets of parameter estimates, we create beanplots [63,64] of the estimates for each method in Figure 3 and calculate the empirical bias, variance, and MSE values for each method in Table 7. The figure illustrates that the Newton-Raphson-type method yields estimates with a wide spread, indicating sensitivity to starting values. In contrast, the QEM method produces estimates with a much narrower spread, indicating lower sensitivity to starting values.

Concluding Remarks
This research introduces a stable parameter estimation technique for the Birnbaum-Saunders distribution within the framework of competing risks, employing the quantile variant of the EM algorithm. One challenge in implementing the EM algorithm lies in the explicit integration of the log-likelihood function in each E-step, a hurdle circumvented by the quantile variant. With J failure modes, maximizing the original likelihood function translates to optimizing a function with 2 × J parameters. However, with our proposed method, estimation simplifies to determining only the scale parameter β^(j) through a one-dimensional root search for each failure mode j = 1, 2, . . ., J.
Exploring competing risks models under the Birnbaum-Saunders distribution alongside masking and right censoring, this study lays the groundwork for future research avenues, including the development of models accommodating broader forms of censoring.Investigating the Bayesian approach to competing risks modeling within the context of the Birnbaum-Saunders distribution also holds promise for further advancements.
Additionally, developing robust parameter estimators under the Birnbaum-Saunders distribution alongside masking and right censoring can be a future issue.One possible way of handling this issue is to consider the generalized Birnbaum-Saunders mixture model [24].
A rigorous proof of the existence and uniqueness of the QEM estimate in the M-step was not provided, although the numerical calculation of the QEM estimate is available. Similar works addressing this issue have been studied by several authors [13,65,66]. This could be a potential area for research extending this paper.

Figure 1. (a) CDF plot and (b) Weibull plot with the interfacial bond strength data in Table 1.

Table 4. Simulated strength data under the Birnbaum-Saunders model. There are three causes, along with masking and censoring.

Figure 2. (a) CDF plot and (b) Weibull plot with the simulated strength data in Table 4.


Figure 3. Beanplots of the parameter estimates based on the Newton-Raphson-type method and the QEM method.

Table 1. Interfacial bond strength data for the pitch-based fiber microbond test with failure modes.

Table 2. Parameter estimates using the interfacial bond strength data in Table 1.

Table 3. MSE and Kolmogorov-Smirnov statistic values for the interfacial bond strength data in Table 1 under the considered models.

Table 5. Parameter estimates using the masked and censored data in Table 4.


Table 6. MSE and Kolmogorov-Smirnov statistic values for the masked and censored data in Table 4 under the considered models.


Table 7. Empirical bias, variance, and MSE values from the parameter estimates.