Constant-Stress Modeling of Log-Normal Data under Progressive Type-I Interval Censoring: Maximum Likelihood and Bayesian Estimation Approaches

This paper discusses inferential approaches for the problem of constant-stress accelerated life testing when the failure data are progressive type-I interval censored. Both frequentist and Bayesian estimations are carried out under the assumption that the log-normal location parameter is nonconstant and follows a log-linear life-stress model. The confidence intervals of the unknown parameters are also constructed based on asymptotic theory and Bayesian techniques. An analysis of a real data set is combined with a Monte Carlo simulation to provide a thorough assessment of the proposed methods.


Introduction
Accelerated life testing (ALT) is a valuable technique used in manufacturing design to test the reliability and longevity of products in a cost-effective and efficient manner. ALTs involve subjecting the units to higher-than-use stress conditions, thereby accelerating the aging and failure processes, and then estimating the lifetime distribution features at the use condition through a statistically appropriate model. The constant-stress ALT (CS-ALT) model is one of the two main methods used in ALTs. In this method, the units are subjected to a constant stress level throughout the test cycle. The second method is the step-stress model, which involves subjecting the units to a series of increasing stress levels in a stepwise manner. For extensive coverage of various aspects of ALT models, including test designs, data analysis methods, and reliability estimation, one can consult the monographs by Nelson [1], Meeker and Escobar [2], Bagdonavicius and Nikulin [3], and Limon et al. [4].
Most published test plans use the concepts of type-I or type-II censoring for testing items under design stress in an accelerated setting. Type-I censoring involves stopping the test after a fixed duration, which provides the advantage of a precise experiment duration but leads to uncertainty about the number of failures observed. On the other hand, type-II censoring involves stopping the test after a predetermined number of failures, which provides certainty about the number of failures but leads to uncertainty about the experiment's duration; the remaining items are treated as censored observations. In this context, Zheng [5] expressed the asymptotic Fisher information for type-II censored data, relating it to the hazard function, and demonstrated that the factorization of the hazard function can be characterized by the linear property of the Fisher information under type-II censoring. Moreover, Xu and Fei [6] explored theories related to the approximate optimal design for a simplified step-stress ALT method under type-II censoring; they developed statistically approximated optimal ALT plans with the objective of minimizing the asymptotic variance of the maximum likelihood estimators of the p-th percentile of lifetime at the design stress level. Utilizing data from CS-ALT experiments with type-II censoring, Wenhui et al. [7] studied interval estimation for the two-parameter exponential distribution and derived generalized confidence intervals for the parameters in the life-stress model, including the location parameter, as well as the mean and reliability function at the designed stress level.
These two types of censoring are more distinct for smaller sample sizes, but their differences become less noticeable as the sample size increases. Several studies relevant to this issue can be found in Nelson and Kielpinski [8], Bai et al. [9], and Tang et al. [10]. While both types of censoring have benefits, they share a significant disadvantage in that they permit the removal of units only at the conclusion of an experiment. In contrast, "progressive censoring", which is a more comprehensive censoring approach, permits units to be removed during a test before its completion. For a more in-depth examination of progressive censoring, the monograph by Balakrishnan and Aggarwala [11] can be consulted. In certain situations, continuous monitoring of experiments to observe failure times is necessary. However, this may not always be feasible due to time and cost constraints. In such cases, the number of failures is recorded at predetermined time intervals, which is known as interval censoring. Aggarwala [12] developed a more general scheme, called progressive type-I interval censoring, by combining interval censoring and progressive type-I censoring. In this scheme, the experimental units are observed at predetermined inspection times, and surviving units may be progressively removed from the test at each inspection. This approach allows for more efficient use of resources. Progressive type-I interval censoring has received significant attention in recent years. For instance, Chandrakant and Tripathi [13], Singh and Tripathi [14], Chen et al. [15], and Arabi Belaghi et al. [16] have developed various methods for estimating the unknown parameters of different lifetime models under the progressive type-I interval censoring scheme.
Several studies have been conducted on statistical inference for ALT models under different types of censoring schemes, such as type-I, type-II, and hybrid censoring (see Abd El-Raheem [17-19], Sief et al. [20], Feng et al. [21], Balakrishnan et al. [22], and Nassar et al. [23]). However, the study of ALT models under a progressive type-I interval censoring scheme is still lacking in the literature, and as such, we aim to address this gap by exploring inferential approaches for CS-ALT models when the failure data are progressive type-I interval censored and log-normally distributed.
The log-normal distribution is widely used in failure time analysis and has proven to be a flexible and effective model for analyzing various physical phenomena. One notable characteristic of the log-normal distribution is that its hazard rate starts at zero, indicating a low failure rate at the beginning, gradually increases to its maximum value, and eventually approaches zero as time approaches infinity. This behavior makes the log-normal distribution suitable for modeling phenomena that exhibit an initially low failure rate, followed by an increasing failure rate, and then a declining failure rate as time progresses. The applications of the log-normal distribution extend to various fields of study, including actuarial science, business, economics, and the lifetime analysis of electronic components. Moreover, the log-normal distribution is valuable for analyzing both homogeneous and heterogeneous data. It can handle skewed data that deviate from a normal distribution, making it suitable for modeling real-world datasets that exhibit asymmetry. This versatility has led to its application in a wide range of practical studies. One can refer to [24-26] for further insights into the applications of the log-normal distribution in various fields.
Assuming that the lifetime of the test units, represented by a random variable T, follows a log-normal distribution with a nonconstant location parameter −∞ < µ < ∞ that is affected by stress and a scale parameter σ > 0, the probability density function (PDF) and the cumulative distribution function (CDF) of the log-normal distribution can be expressed as

f(t; µ, σ) = (1 / (tσ√(2π))) exp[ −(ln t − µ)² / (2σ²) ], t > 0,

F(t; µ, σ) = Φ( (ln t − µ) / σ ),

where Φ(·) is the standard normal CDF.
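For readers who want to check numbers, the PDF and CDF above can be evaluated in a few lines of Python. This is a generic sketch built on scipy.stats.norm, not code from the paper:

```python
import numpy as np
from scipy.stats import norm

def lognormal_pdf(t, mu, sigma):
    # f(t; mu, sigma) = (1 / (t * sigma * sqrt(2*pi))) * exp(-(ln t - mu)^2 / (2*sigma^2))
    return norm.pdf((np.log(t) - mu) / sigma) / (t * sigma)

def lognormal_cdf(t, mu, sigma):
    # F(t; mu, sigma) = Phi((ln t - mu) / sigma)
    return norm.cdf((np.log(t) - mu) / sigma)
```

At t = e^µ the CDF equals 1/2, which gives a quick sanity check of the parameterization (µ is the mean of ln T, not of T).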
The article is structured as follows: Section 2 provides a description of the test process and the assumptions that underlie it. In Section 3, the maximum likelihood estimates along with their associated asymptotic standard errors are discussed. Section 4 focuses on Bayesian estimation techniques. The methods proposed in Sections 3 and 4 are then evaluated in Section 5 using simulation studies. Finally, Section 6 provides a summary of the findings as a conclusion.

Model Description
In the CS-ALT method, the test units are divided into groups, and each group is subjected to a stress level higher than the typical one. The stress levels are denoted by S 0 for the standard stress level and S 1 < S 2 < ... < S k for the k test stress levels. The data are collected using a progressive type-I interval censored sampling approach for each stress level S i, i = 1, 2, ..., k.
In this approach, a set of n i identical units is simultaneously placed on test at time t i0 = 0 for each stress level S i. Inspections are performed at predetermined times t i1 < t i2 < ... < t im i, with t im i being the planned end time of the experiment. During each inspection, the number of failed units X ij within the interval (t i(j−1), t ij] is recorded. Additionally, at each inspection time t ij, a random selection process eliminates R ij surviving units from the test, where R ij cannot exceed the number of remaining units y ij. The value of R ij is determined as a specified percentage p ij of the remaining units at t ij, using the formula R ij = p ij × y ij, where j = 1, 2, ..., m i. The percentage values p ij are pre-specified, with p im i = 1, indicating that all remaining units are eliminated at the final inspection time.
In this scenario, a progressive type-I interval censored sample of size n i can be represented as D i = {(X ij, R ij, t ij); j = 1, 2, ..., m i}. Here, the total sample size n is given by the sum of the numbers of units at the stress levels, i.e., n = n 1 + n 2 + ... + n k.

Basic Assumptions
In the CS-ALT context, the following assumptions are considered:

1. The lifetime of the test units follows a log-normal distribution at stress level S i, with PDF given by

f i (t; µ i, σ) = (1 / (tσ√(2π))) exp[ −(ln t − µ i)² / (2σ²) ], t > 0.

2. For the log-normal location parameter µ i, the life-stress model is assumed to be log-linear, i.e., it is described as log(µ i) = a + b S i, i = 0, 1, ..., k.

Here, a and b (where b < 0) are unknown coefficients that depend on the product's nature and the test method used. Using this log-linear model, µ i can be further expressed as µ i = µ 0 e^(b(S i − S 0)) = µ 0 θ^(h i), where µ 0 represents the location parameter of the log-normal distribution under the reference stress level S 0. Additionally, θ = e^(b(S k − S 0)) = µ k / µ 0 < 1 and h i = (S i − S 0)/(S k − S 0). These assumptions provide the basis for analyzing and modeling the lifetime behavior of test units under different stress levels in CS-ALT experiments. Further details can be found in Chapter 2 of Nelson's book [1].
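To make the life-stress reparameterization concrete, here is a minimal sketch evaluating µ i = µ 0 θ^(h i) with h i = (S i − S 0)/(S k − S 0). It is not from the paper's code; the numerical values of µ 0, θ, and the stress levels are the simulation settings quoted later in the simulation section:

```python
import numpy as np

def location_at_stress(mu0, theta, S, S0, Sk):
    # h = (S - S0) / (Sk - S0);  mu(S) = mu0 * theta**h  (log-linear model)
    h = (S - S0) / (Sk - S0)
    return mu0 * theta ** h

# illustrative settings: S0 = 50, four test stresses, mu0 = 2, theta = 0.95
S0 = 50.0
stresses = np.array([60.0, 70.0, 80.0, 90.0])
mus = location_at_stress(2.0, 0.95, stresses, S0, stresses[-1])
```

Because θ < 1, the location parameter decreases as the stress level increases, with µ(S 0) = µ 0 and µ(S k) = µ 0 θ.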

Maximum Likelihood Estimation
Based on the observed lifetime data D and Assumptions 1 and 2, the likelihood function for µ 0, σ, and θ is given by

L(µ 0, σ, θ | D) ∝ ∏_{i=1}^{k} ∏_{j=1}^{m i} [ F i (t ij) − F i (t i(j−1)) ]^(X ij) [ 1 − F i (t ij) ]^(R ij),

where F i (t) = Φ( (ln t − µ i)/σ ) and µ i = µ 0 θ^(h i). The corresponding log-likelihood function is denoted by L = ln L(µ 0, σ, θ | D). Setting the partial derivatives of L with respect to µ 0, σ, and θ to zero yields the likelihood equations, and the maximum likelihood estimators (MLEs) of µ 0, σ, and θ are obtained by solving these equations simultaneously. Here, φ(·) represents the standard normal PDF. Since the solutions to these equations cannot be found in closed form, the Newton-Raphson method is frequently employed in these circumstances to obtain the desired MLEs.
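As a hedged illustration of how the MLEs could be computed in practice, the sketch below maximizes the interval-censored log-likelihood directly with a derivative-free optimizer instead of applying Newton-Raphson to the score equations. The failure counts, removal numbers, and inspection times are entirely hypothetical, and the 1e-300 floor is just a numerical guard against log(0):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_like(params, data, S0, Sk):
    """Negative log-likelihood for progressive type-I interval censored
    log-normal data under the log-linear life-stress model."""
    mu0, sigma, theta = params
    if sigma <= 0 or not (0 < theta < 1):
        return np.inf
    ll = 0.0
    for S, times, X, R in data:                 # one tuple per stress level
        h = (S - S0) / (Sk - S0)
        mu = mu0 * theta ** h
        F = lambda t: norm.cdf((np.log(t) - mu) / sigma)
        F_prev = 0.0
        for t, x, r in zip(times, X, R):
            p_int = F(t) - F_prev               # P(t_{j-1} < T <= t_j)
            p_surv = 1.0 - F(t)                 # P(T > t_j), for removed units
            ll += x * np.log(max(p_int, 1e-300))
            if r > 0:
                ll += r * np.log(max(p_surv, 1e-300))
            F_prev = F(t)
    return -ll

# hypothetical data: (stress, inspection times, failures X_ij, removals R_ij)
data = [(60.0, [3, 5, 9, 15, 25], [2, 3, 4, 3, 2], [1, 0, 0, 0, 5]),
        (90.0, [3, 5, 9, 15, 25], [4, 5, 4, 3, 1], [0, 1, 0, 0, 2])]
res = minimize(neg_log_like, x0=[2.0, 1.0, 0.9], args=(data, 50.0, 90.0),
               method="Nelder-Mead")
mu0_hat, sigma_hat, theta_hat = res.x
```

Nelder-Mead tolerates the infinite penalty used to keep σ > 0 and 0 < θ < 1, which makes it a convenient, if slow, alternative when derivatives are awkward.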

EM Algorithm
The expectation-maximization (EM) algorithm is a widely used tool for handling missing or incomplete data. It is a powerful iterative algorithm that seeks to maximize the likelihood function by estimating the missing data and the model parameters in an alternating manner. The EM algorithm is particularly useful when dealing with large amounts of missing data. Compared to other optimization methods such as the Newton-Raphson method, the EM algorithm is generally slower but more reliable in such cases.
The EM algorithm was first introduced by Dempster et al. [27] and has since been widely used in many different fields. McLachlan and Krishnan [28] provide a comprehensive treatment of the EM algorithm, while Little and Rubin [29] have highlighted its advantages over other methods for handling missing data. Considering progressive type-I interval censoring, the complete sample W i under stress level S i can be expressed as W i = (W ij, W* ij), where W ij = (ω ij1, ω ij2, ..., ω ijX ij) represents the lifetimes of the units that failed within the j-th interval (t i(j−1), t ij] and W* ij = (ω* ij1, ω* ij2, ..., ω* ijR ij) denotes the lifetimes of the units that were removed at time t ij, for j = 1, 2, ..., m i. As a result, we can express the log-likelihood function of the complete data set in Equation (6). By taking partial derivatives of Equation (6) with respect to µ 0, σ, and θ, we obtain the associated log-likelihood equations. The EM algorithm involves two main steps: the expectation step (E-step) and the maximization step (M-step). In the E-step, the unobserved exact lifetimes of the interval-censored and removed units are replaced by their respective conditional expected values. In our case, this involves calculating the expectations of four quantities that are functions of W ij and W* ij. Since W ij and W* ij are independent (see Ng et al. [30]), the process can be simplified using the following lemma (see Ng and Wang [31]).

Lemma 1. Given t ij and t i(j−1) for i = 1, 2, ..., k and j = 1, 2, ..., m i, the conditional distributions of W and W* can be expressed as

f W|D (w) = f(w) / [ F(t ij) − F(t i(j−1)) ], t i(j−1) < w ≤ t ij,

f W*|D (w) = f(w) / [ 1 − F(t ij) ], w > t ij.

Proof. The probability of W falling within the interval (t i(j−1), t ij] is F(t ij) − F(t i(j−1)). To normalize the distribution within this interval, the PDF of W is divided by the probability of W falling within the interval, which yields the conditional distribution of W ij. Similarly, the conditional distribution of W* ij can be deduced directly using the same approach.
Thus, based on this result, the necessary expected values can be readily obtained.
Subsequently, during the M-step, the quantities obtained from the E-step are maximized. Thus, if we denote the estimate of (µ 0, σ, θ) at the o-th stage by (µ 0^(o), σ^(o), θ^(o)), applying the M-step leads to updated estimates at the (o+1)-th stage. The updated estimates µ 0^(o+1) and θ^(o+1) are derived as the solution of the corresponding M-step equations, while the updated value σ^(o+1) is available in closed form. The iterative process continues until the desired convergence is achieved, which is determined by checking whether the absolute differences between the updated and previous values of µ 0, σ, and θ are all less than or equal to a predefined threshold ε > 0. In mathematical terms, the convergence criterion is |µ 0^(o+1) − µ 0^(o)| ≤ ε, |σ^(o+1) − σ^(o)| ≤ ε, and |θ^(o+1) − θ^(o)| ≤ ε.
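Because ln W is normally distributed, the E-step expectations reduce to truncated-normal moments. The sketch below implements the two conditional means of ln W that the E-step needs; it is a generic truncated-normal formula, not the paper's exact four quantities:

```python
import numpy as np
from scipy.stats import norm

def e_logW_interval(t_lo, t_hi, mu, sigma):
    """E[ln W | t_lo < W <= t_hi] when ln W ~ N(mu, sigma^2)
    (doubly truncated normal mean; used for the failed units)."""
    a = -np.inf if t_lo == 0 else (np.log(t_lo) - mu) / sigma
    b = (np.log(t_hi) - mu) / sigma
    Z = norm.cdf(b) - norm.cdf(a)               # mass of the interval
    phi_a = 0.0 if np.isinf(a) else norm.pdf(a)
    return mu + sigma * (phi_a - norm.pdf(b)) / Z

def e_logW_right(t, mu, sigma):
    """E[ln W | W > t] (left-truncated piece; used for the removed units)."""
    b = (np.log(t) - mu) / sigma
    return mu + sigma * norm.pdf(b) / (1.0 - norm.cdf(b))
```

The same moment identities, applied to (ln W)², give the second-order expectations that the closed-form σ-update requires.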

Midpoint Approximation Method
In this context, we assume that the X ij failures within each subinterval (t i(j−1), t ij] occurred at the midpoint of the interval, denoted by t* ij = (t i(j−1) + t ij)/2. Additionally, there are R ij censored items withdrawn at the censoring time t ij. Hence, the likelihood function can be approximated as

L*(µ 0, σ, θ | D) ∝ ∏_{i=1}^{k} ∏_{j=1}^{m i} [ f i (t* ij) ]^(X ij) [ 1 − F i (t ij) ]^(R ij).

The associated log-likelihood function for (µ 0, σ, θ) is denoted by L* = ln L*(µ 0, σ, θ | D). To find the midpoint (MP) estimators of µ 0, σ, and θ, we set the partial derivatives of L* with respect to µ 0, σ, and θ to zero. By simultaneously solving the resulting Equations (15)-(17), we obtain the MP estimators of the parameters µ 0, σ, and θ.
The advantage of the MP likelihood equations over the original likelihood equations is that they are often less complex and may lead to simpler numerical optimization procedures.This can enhance computational efficiency and facilitate the implementation of the estimation process.
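As a sketch of the MP approach for a single stress level (hypothetical counts and inspection times; the multi-level, θ-parameterized version follows the same pattern):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_like_mp(params, times, X, R):
    """Midpoint-approximated negative log-likelihood for one stress level:
    each of the X_j failures in (t_{j-1}, t_j] is placed at the midpoint."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    ll, t_prev = 0.0, 0.0
    for t, x, r in zip(times, X, R):
        tm = 0.5 * (t_prev + t)                         # interval midpoint
        # log of the log-normal density at the midpoint
        logf = norm.logpdf((np.log(tm) - mu) / sigma) - np.log(tm * sigma)
        ll += x * logf
        if r > 0:
            ll += r * norm.logsf((np.log(t) - mu) / sigma)  # survivors removed
        t_prev = t
    return -ll

# hypothetical single-level data
times, X, R = [3, 5, 9, 15, 25], [2, 3, 4, 3, 2], [0, 0, 0, 0, 6]
res = minimize(neg_log_like_mp, x0=[2.0, 1.0], args=(times, X, R),
               method="Nelder-Mead")
```

The objective uses densities at fixed midpoints rather than CDF differences, which is exactly why its score equations are algebraically simpler than the full likelihood's.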

Asymptotic Standard Errors
According to the missing information principle of Louis [32], the observed Fisher information matrix can be obtained as

I D (Θ) = I W (Θ) − I W|D (Θ),

where Θ = (µ 0, σ, θ), and I D (Θ), I W (Θ), and I W|D (Θ) represent the observed, complete, and missing information matrices, respectively. The complete information matrix I W (Θ) follows from the complete-data log-likelihood of the log-normal sample in Equation (6). Moreover, the missing information matrix can be expressed as

I W|D (Θ) = ∑_{i=1}^{k} ∑_{j=1}^{m i} [ X ij I ij W|D (Θ) + R ij I ij W*|D (Θ) ].

Here, I ij W|D (Θ) represents the information matrix for a single observation, conditioned on the event t i(j−1) < W ≤ t ij, while I ij W*|D (Θ) denotes the information matrix for a single observation censored at time t ij, conditioned on the event W > t ij. The elements of both matrices can be obtained easily by utilizing Lemma 1. Afterward, the asymptotic variance-covariance matrix of the MLEs Θ̂ = (µ̂ 0, σ̂, θ̂) can be obtained by inverting the matrix I D (Θ̂). The asymptotic standard errors (ASEs) of the MLEs are obtained by taking the square roots of the diagonal elements of the asymptotic variance-covariance matrix. Furthermore, the asymptotic two-sided confidence interval (CI) for Θ with a confidence level of 100(1 − γ)%, where 0 < γ < 1, is given by

Θ̂ ∓ Z γ/2 √(Var(Θ̂)),

where Z γ/2 is the upper (γ/2)-th percentile of the standard normal distribution and √(Var(Θ̂)) is the ASE of the estimated parameter.
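Once the observed information matrix has been assembled, the ASE and Wald-type CI computations are mechanical. The following sketch uses a made-up 3×3 information matrix purely to illustrate the linear-algebra steps:

```python
import numpy as np
from scipy.stats import norm

def wald_ci(theta_hat, info_obs, gamma=0.05):
    """100(1-gamma)% asymptotic CIs: theta_hat_r -/+ z_{gamma/2} * ASE_r,
    with ASEs from the inverse of the observed information matrix."""
    cov = np.linalg.inv(info_obs)            # asymptotic var-cov matrix
    ase = np.sqrt(np.diag(cov))              # asymptotic standard errors
    z = norm.ppf(1.0 - gamma / 2.0)          # upper gamma/2 normal percentile
    ci = np.column_stack([theta_hat - z * ase, theta_hat + z * ase])
    return ci, ase

# hypothetical MLEs and observed information for (mu0, sigma, theta)
theta_hat = np.array([2.03, 0.97, 0.94])
I_obs = np.array([[40.0, 5.0, 2.0],
                  [5.0, 60.0, 1.0],
                  [2.0, 1.0, 150.0]])
ci, ase = wald_ci(theta_hat, I_obs)
```

Note the full matrix is inverted before taking diagonal elements; inverting only the diagonal would ignore the parameter correlations and understate the ASEs.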

Bayesian Estimation
In this study, we utilize Markov Chain Monte Carlo (MCMC) and Tierney-Kadane (T-K) approximation methods to investigate Bayesian estimates (BEs) of unknown parameters.The selection of an appropriate decision in decision theory relies on specifying an appropriate loss function.Therefore, we consider the squared error (SE) loss function, LINEX loss function, and general entropy (GE) loss function.
The SE loss function is suitable when the effects of overestimation and underestimation of the same magnitude are considered equally important.This loss function quantifies the discrepancy between the estimated and true values using the squared difference.
On the other hand, asymmetric loss functions are employed to capture the varying effects of errors when the true loss is not symmetric in terms of overestimation and underestimation.The LINEX loss function is an example of an asymmetric loss function that allows for different weighting of overestimation and underestimation errors.
Furthermore, the GE loss function takes into account a broader range of loss structures and provides flexibility in capturing the impact of different errors.
Under the SE loss function, the BE of a parameter Θ is given by its posterior mean. In the case of the LINEX and GE loss functions, the BE of Θ is determined differently. For the LINEX loss function, the BE of Θ is given by

Θ̂ LINEX = −(1/ν) ln( E[ e^(−νΘ) | D ] ), ν ≠ 0.

Here, the sign of ν indicates the direction of the loss function (whether it penalizes overestimation or underestimation more), while the magnitude of ν indicates the degree of asymmetry in the loss function.
For the GE loss function, the BE of Θ is given by

Θ̂ GE = ( E[ Θ^(−κ) | D ] )^(−1/κ), κ ≠ 0.

In this case, the shape parameter κ of the GE loss function governs the deviation from symmetry in the loss function.
By using these specific formulas, we can determine the BE of Θ under the LINEX and GE loss functions, taking into account their respective characteristics and the implications of asymmetry or deviation from symmetry in the loss functions.
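Given a sample from the posterior (however obtained), the three BEs reduce to simple averages. The sketch below applies the SE, LINEX, and GE formulas to a toy gamma "posterior" with mean 1; it is not tied to the paper's model:

```python
import numpy as np

def bayes_estimates(draws, nu=2.0, kappa=2.0):
    """BEs from posterior draws: posterior mean (SE loss),
    -(1/nu) * ln E[exp(-nu*Theta)] (LINEX),
    (E[Theta^(-kappa)])^(-1/kappa) (GE)."""
    draws = np.asarray(draws, dtype=float)
    be_se = draws.mean()
    be_linex = -np.log(np.mean(np.exp(-nu * draws))) / nu
    be_ge = np.mean(draws ** (-kappa)) ** (-1.0 / kappa)
    return be_se, be_linex, be_ge

rng = np.random.default_rng(1)
post = rng.gamma(shape=50.0, scale=0.02, size=20000)  # toy posterior, mean 1
se, lin, ge = bayes_estimates(post)
```

By Jensen's inequality, with ν > 0 the LINEX estimate sits below the posterior mean (it penalizes overestimation more), and with κ > 0 so does the GE estimate; flipping the signs of ν or κ flips the direction of the shrinkage.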
In situations where it is challenging to select a joint prior in a straightforward manner, Arnold and Press [33] suggest adopting a piecewise independent approach for specifying priors. More specifically, we adopt a piecewise independent prior specification for the parameters, where µ 0 follows a normal prior distribution, σ is assigned a gamma prior distribution, and θ is assumed to have a uniform prior. Therefore, the joint prior distribution of µ 0, σ, and θ can be represented as the product π(µ 0, σ, θ) = π 1 (µ 0) π 2 (σ) π 3 (θ). By incorporating the likelihood function described in Equation (5) with the joint prior distribution outlined in Equation (21), the joint posterior distribution of µ 0, σ, and θ is

π(µ 0, σ, θ | D) ∝ L(µ 0, σ, θ | D) π(µ 0, σ, θ).

The posterior mean of a function g(µ 0, σ, θ) of the parameters can then be determined as the ratio of integrals

E[ g(µ 0, σ, θ) | D ] = ∫∫∫ g(µ 0, σ, θ) L(µ 0, σ, θ | D) π(µ 0, σ, θ) dµ 0 dσ dθ / ∫∫∫ L(µ 0, σ, θ | D) π(µ 0, σ, θ) dµ 0 dσ dθ.

However, it is not feasible to obtain an analytical closed-form solution for this ratio of integrals in Equation (23). As a result, it is advisable to employ an approximation technique to compute the desired estimates. In the subsequent subsections, we discuss approximation methods that can be employed for this purpose.

MCMC Method
In this approach, we adopt the MCMC method to generate sequences of samples from the complete conditional distributions of the parameters. Roberts and Smith [34] introduced the Gibbs sampling method, which is an efficient MCMC technique when the complete conditional distributions can be easily sampled. Alternatively, by using the Metropolis-Hastings (M-H) algorithm, random samples can be obtained from any complex target distribution of any dimension, as long as it is known up to a normalizing constant. The original work by Metropolis et al. [35] and its subsequent extension by Hastings [36] form the foundation of the M-H algorithm.
To implement the Gibbs algorithm, the conditional probability densities of the parameters µ 0, σ, and θ are first derived from the joint posterior distribution. Since these conditional distributions do not reduce to standard, easily sampled forms, we employ the M-H algorithm to generate the random numbers, choosing a normal distribution as the proposal density. The process of generating samples using the MCMC method follows the steps outlined in Algorithm 1.
Step 1: Start with initial values (µ 0^(0), σ^(0), θ^(0)).
Step 2: Set the iteration index i = 1.
Step 3: Generate a candidate value µ* from the normal proposal density centered at µ 0^(i−1).
Step 4: Compute the acceptance ratio r µ.
Step 5: Draw a random number u from a uniform distribution U(0, 1).
Step 6: If u ≤ r µ, accept the candidate and set µ 0^(i) = µ*; otherwise, set µ 0^(i) = µ 0^(i−1).
Step 7: Repeat Steps 3 to 6 for the parameters σ and θ.
Step 8: Set i = i + 1.
Step 9: Repeat Steps 3 to 8 a total of N times, generating a chain of parameter values.
After running the algorithm for a sufficient number of iterations, the first N b simulated values (burn-in period) are discarded to eliminate the influence of the initial value selection.
The remaining values (µ 0^(i), σ^(i), θ^(i)) for i = N b + 1, ..., N (where N is the total number of iterations) form an approximate posterior sample that can be used for Bayesian inference.
Based on this posterior sample, the BEs of a function of the parameters g(µ 0, σ, θ) under the SE, LINEX, and GE loss functions are given, respectively, by

ĝ SE = (1/(N − N b)) ∑_{i=N b+1}^{N} g(µ 0^(i), σ^(i), θ^(i)),

ĝ LINEX = −(1/ν) ln[ (1/(N − N b)) ∑_{i=N b+1}^{N} e^(−ν g(µ 0^(i), σ^(i), θ^(i))) ],

ĝ GE = [ (1/(N − N b)) ∑_{i=N b+1}^{N} g(µ 0^(i), σ^(i), θ^(i))^(−κ) ]^(−1/κ).

The Bayesian credible CIs for any parameter, such as µ 0, can be determined using the posterior MCMC sample after the burn-in period N b. The MCMC sample is sorted in ascending order as µ 0^[1] ≤ µ 0^[2] ≤ ... ≤ µ 0^[N − N b]. Based on this sorted sample, the two-sided Bayesian credible CI for µ 0 at a confidence level of 100(1 − γ)% is given by ( µ 0^[(γ/2)(N − N b)], µ 0^[(1 − γ/2)(N − N b)] ). Bayesian credible CIs for the parameters σ and θ can be constructed in the same manner.
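A compact random-walk M-H sampler in the spirit of Algorithm 1, restricted to one parameter and a toy normal target so that the credible-interval construction is easy to verify (the target, step size, and burn-in length are illustrative choices only):

```python
import numpy as np
from scipy.stats import norm

def mh_chain(log_post, x0, step, n_iter, rng):
    """Random-walk Metropolis-Hastings with a normal proposal:
    propose x* ~ N(x, step^2), accept with probability min(1, r)."""
    x, lp = x0, log_post(x0)
    out = np.empty(n_iter)
    for i in range(n_iter):
        x_star = x + step * rng.standard_normal()
        lp_star = log_post(x_star)
        # u <= r  is equivalent to  ln u <= ln r  (more stable numerically)
        if np.log(rng.uniform()) <= lp_star - lp:
            x, lp = x_star, lp_star
        out[i] = x
    return out

rng = np.random.default_rng(7)
target = lambda x: norm.logpdf(x, loc=2.0, scale=0.3)  # toy posterior for mu0
chain = mh_chain(target, x0=0.0, step=0.5, n_iter=20000, rng=rng)
burned = chain[5000:]                                  # discard burn-in N_b
lo, hi = np.percentile(burned, [2.5, 97.5])            # 95% credible CI
```

Working with log-densities and comparing ln u to the log acceptance ratio avoids overflow in the likelihood ratio; this is the standard way the acceptance test in Step 6 is coded.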

Tierney-Kadane Method
Tierney and Kadane [37] proposed the T-K methodology, which is a technique for approximating the BE of a target function g(µ 0 , σ, θ).
The T-K methodology provides an approximation for the BE by incorporating the likelihood, the prior, and the target function. Writing the posterior mean of g(µ 0, σ, θ) as a ratio of integrals, the T-K approximation takes the form

E[g | D] ≈ √( |Λ*| / |Λ| ) exp{ n [ δ*(µ̂ 0 δ*, σ̂ δ*, θ̂ δ*) − δ(µ̂ 0 δ, σ̂ δ, θ̂ δ) ] },

where δ = (1/n)[ ln π(µ 0, σ, θ) + ln L(µ 0, σ, θ | D) ], δ* = δ + (1/n) ln g(µ 0, σ, θ), the points (µ̂ 0 δ, σ̂ δ, θ̂ δ) and (µ̂ 0 δ*, σ̂ δ*, θ̂ δ*) maximize δ and δ*, respectively, and Λ and Λ* are the inverses of the negative Hessians of δ and δ* evaluated at their respective maxima.
Here, in our case, δ(µ 0, σ, θ) can be obtained directly from Equation (22). Thus, (µ̂ 0 δ, σ̂ δ, θ̂ δ) is obtained by solving the non-linear equations ∂δ/∂µ 0 = 0, ∂δ/∂σ = 0, and ∂δ/∂θ = 0, and |Λ| is computed from the second-order derivatives of δ(µ 0, σ, θ). In order to compute the BE of µ 0, we set g(µ 0, σ, θ) = µ 0, so that δ* µ 0 = δ + (1/n) ln µ 0. Further, (µ̂ 0 δ*, σ̂ δ*, θ̂ δ*) can be obtained by solving the analogous equations for δ* µ 0, and |Λ* µ 0| can be computed from its second-order derivatives. Therefore, the BE of µ 0 under the SE loss function, using the T-K methodology, can be expressed as

µ̂ 0 TK = √( |Λ* µ 0| / |Λ| ) exp{ n [ δ* µ 0 (µ̂ 0 δ*, σ̂ δ*, θ̂ δ*) − δ(µ̂ 0 δ, σ̂ δ, θ̂ δ) ] }.

By the same reasoning, the BEs of σ and θ under the SE loss function, using the T-K methodology, can be computed straightforwardly.
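The T-K recipe can be exercised on a one-parameter toy problem where the posterior mean is known exactly (exponential likelihood with a gamma prior), which makes the quality of the approximation visible. The finite-difference Hessian below is a rough stand-in for the analytic second derivatives used in the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tierney_kadane_mean(log_post_unnorm, n, lo, hi):
    """T-K approximation of E[theta | data] for a scalar parameter:
    maximize delta = log_post/n and delta* = delta + ln(theta)/n, then
    E[theta] ~ sqrt(sig2*/sig2) * exp(n * (delta*(t*) - delta(t)))."""
    delta = lambda t: log_post_unnorm(t) / n
    delta_star = lambda t: delta(t) + np.log(t) / n

    def maximize(f):
        res = minimize_scalar(lambda t: -f(t), bounds=(lo, hi), method="bounded")
        t_hat, h = res.x, 1e-4
        # central finite difference for f''(t_hat); sig2 = (-f'')^{-1}
        d2 = (f(t_hat + h) - 2.0 * f(t_hat) + f(t_hat - h)) / h**2
        return t_hat, -1.0 / d2

    t1, s1 = maximize(delta)
    t2, s2 = maximize(delta_star)
    return np.sqrt(s2 / s1) * np.exp(n * (delta_star(t2) - delta(t1)))

# toy check: exponential likelihood, Gamma(a, b) prior -> posterior Gamma(a+n, b+sum x)
a, b = 2.0, 1.0
x = np.array([0.8, 1.3, 0.5, 2.1, 0.9, 1.7, 0.4, 1.1, 0.6, 1.5])
n, sx = len(x), x.sum()
log_post = lambda t: (a - 1 + n) * np.log(t) - (b + sx) * t
approx = tierney_kadane_mean(log_post, n, 1e-3, 10.0)
exact = (a + n) / (b + sx)   # exact posterior mean of the gamma posterior
```

On this example the T-K value agrees with the exact posterior mean to well under one percent, which illustrates why the method is attractive when the posterior integrals themselves are intractable.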

Monte Carlo Simulation Study
Different estimation methods and censoring schemes cannot be compared directly on theoretical grounds. Therefore, the purpose of this section is to evaluate the performance of the estimates discussed in the previous sections using Monte Carlo simulations. The assessment of point estimates is based on their mean square error (MSE) and relative absolute bias (RAB), while the evaluation of interval estimates is based on their coverage probability (CP) and average width (AW). These measures provide insights into the accuracy and precision of the estimates.
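The four performance measures are straightforward to code. The definitions below follow one common convention (RAB as the absolute bias of the Monte Carlo mean relative to the true value), which may differ in detail from the paper's:

```python
import numpy as np

def mse_rab(estimates, true_value):
    """Monte Carlo MSE and relative absolute bias of a batch of estimates."""
    est = np.asarray(estimates, dtype=float)
    mse = np.mean((est - true_value) ** 2)
    rab = abs(est.mean() - true_value) / abs(true_value)
    return mse, rab

def cp_aw(lowers, uppers, true_value):
    """Coverage probability and average width of interval estimates."""
    lo, hi = np.asarray(lowers, dtype=float), np.asarray(uppers, dtype=float)
    cp = np.mean((lo <= true_value) & (true_value <= hi))
    aw = np.mean(hi - lo)
    return cp, aw
```
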
To simulate data under progressive type-I interval censoring, we utilize the algorithm of Aggarwala [12] for a given set of inputs, including the stress levels S i, the sample sizes n i, the numbers of subintervals m i, the prefixed inspection times, and the censoring schemes. Under each stress level S i, i = 1, 2, ..., k, starting with a sample of size n i placed on a life test at time t i0 = 0, we simulate the number of failed items X ij for each subinterval (t i(j−1), t ij] as follows: let X i0 = 0 and R i0 = 0, and for j = 1, 2, ..., m i,

X ij | X i(j−1), R i(j−1), ..., X i0, R i0 ~ Binomial( n i − ∑_{l=1}^{j−1} (X il + R il), [F i (t ij) − F i (t i(j−1))] / [1 − F i (t i(j−1))] ).

In the simulation study, we examine three distinct removal schemes, denoted by p 1, p 2, and p 3, each characterized by different probabilities of removing items at the intermediate inspection times, where p 1 = (0.25, 0.25, 0, 0, 1), p 2 = (0, 0, 0.25, 0.25, 1), and p 3 = (0, 0, 0, 0, 1). The third scheme, p 3, resembles a conventional interval censoring scheme, in which no removals occur at the intermediate inspection times. In addition to the normal stress level S 0 = 50, we include four stress levels: S 1 = 60, S 2 = 70, S 3 = 80, and S 4 = 90. These stress levels represent different levels of intensity or severity in the testing process. Additionally, for each stress level S i, we use the same inspection times: t i1 = 3, t i2 = 5, t i3 = 9, t i4 = 15, and t i5 = 25. These inspection times correspond to the specific intervals during which the items are assessed.
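The generation step can be sketched for a single stress level as follows. This is a plain reading of the binomial mechanism above; the removal rule R ij = ⌊p ij y ij⌋ uses the floor, which is an assumption about how fractional removals are handled:

```python
import numpy as np
from scipy.stats import norm

def simulate_prog_interval(n_i, times, p, mu, sigma, rng):
    """Simulate (X_ij, R_ij) for one stress level: X_ij is binomial on the
    units still on test, with success probability equal to the conditional
    failure probability of the interval; then floor(p_ij * survivors) are
    removed."""
    F = lambda t: norm.cdf((np.log(t) - mu) / sigma)
    alive, F_prev = n_i, 0.0
    X, R = [], []
    for t, pj in zip(times, p):
        q = (F(t) - F_prev) / (1.0 - F_prev)   # conditional failure prob.
        x = rng.binomial(alive, q)
        y = alive - x                          # survivors at t_ij
        r = int(np.floor(pj * y))              # progressive removals
        X.append(x)
        R.append(r)
        alive = y - r
        F_prev = F(t)
    return X, R

rng = np.random.default_rng(11)
X, R = simulate_prog_interval(100, [3, 5, 9, 15, 25], [0.25, 0.25, 0, 0, 1],
                              mu=2.0, sigma=1.0, rng=rng)
```

Because p im i = 1, every unit is eventually recorded as either a failure or a removal, so the counts always account for the full sample.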
Furthermore, we consider various sample sizes to assess the impact of the number of items tested. The sample sizes considered are n = 40, n = 60, n = 80, and n = 100. These different sample sizes allow us to investigate the influence of the number of tested items on the estimation performance. The generated data follow a log-normal distribution with true parameter values µ 0 = 2, σ = 1, and θ = 0.95.
For the Bayesian analyses, informative priors are employed with specific hyperparameters, chosen such that the prior means correspond to the true parameter values. The hyperparameters are set as follows: λ 1 = 2, µ 1 = 0.1, λ 2 = 100, and µ 2 = 100. The T-K BEs are calculated under the SE loss function. The MCMC BEs are computed using the SE loss function, as well as the LINEX loss function with different values of ν (−2, 0.001, and 2) and the GE loss function with different values of κ (−2, 0.001, and 2).
In Table 1, the MSE and RAB values are provided for the classical estimators of the parameters. The table clearly demonstrates that the EM estimators outperform the MLE and MP estimators in terms of having the lowest MSE and RAB values. This indicates that the EM estimators provide more accurate and less biased parameter estimates compared to their counterparts, as clearly demonstrated in Figure 1.
In Table 2, the MSE and RAB values are presented for the BEs of the parameters under the SE loss function, including results obtained using the T-K and MCMC techniques. The tabulated results indicate that the performance of the two methods under the different schemes is almost identical, as also depicted in Figure 2, suggesting that the T-K and MCMC methods yield similar results in terms of MSE and RAB values. Additionally, in Tables 3 and 4, the MSE and RAB values are presented for the MCMC BEs of the parameters for three distinct choices of the shape parameter of the LINEX and GE loss functions; Table 3 corresponds to the LINEX loss function and Table 4 to the GE loss function. Based on the tabulated results, we can conclude that, for the parameters µ 0 and σ under the LINEX loss function, the MCMC BE shows higher accuracy when ν is set to 2; that is, for ν = 2, the MCMC BE provides more precise and reliable estimates of µ 0 and σ than other values of ν. On the other hand, for the parameter θ under the same LINEX loss function, the MCMC BE performs better when ν is set to −2.
Moreover, when considering the GE loss function, the MCMC BE of µ 0 shows higher accuracy when κ is assigned a value of 2. Similarly, for the parameter σ, the MCMC BE exhibits better performance in terms of lower MSE and RAB values when κ is set to 0.001, while for the parameter θ, the MCMC BE performs better when κ is set to −2. In addition, Table 5 presents the AWs and coverage probabilities (CPs) of the 95% asymptotic and Bayesian credible CIs for the parameters µ 0, σ, and θ. The table clearly indicates that the Bayesian credible CIs have narrower widths than the asymptotic CIs, suggesting that they provide more precise and tighter estimation intervals for the parameters. The Bayesian credible CIs also demonstrate better overall performance, as indicated by their higher coverage probabilities, implying that they capture the true parameter values within the intervals in a higher proportion of cases.
Based on the tabulated results in Tables 1-5, the following general concluding remarks can be drawn:

1. For a fixed censoring scheme, as the sample size n increases, the MSE and RAB values of all estimates decrease. This trend aligns with statistical theory, which suggests that larger sample sizes tend to yield more accurate parameter estimates.

2. The Bayesian estimators consistently outperform the MLEs, EM estimators, and MP estimators in terms of MSE and RAB values, highlighting the superior performance of the Bayesian approach in estimation tasks.

3. Among the progressive censoring schemes p 1, p 2, and p 3, all estimates obtained under scheme p 3 (traditional type-I interval censoring) exhibit the smallest MSE and RAB values. This result is in line with expectations, as longer testing durations and lower censoring rates generally lead to more accurate parameter estimation.

4. The BEs of the parameters under the LINEX loss function display higher accuracy compared to the estimators under the SE and GE loss functions.
These conclusions provide insights into the behavior and performance of different estimation methods, sample sizes, censoring schemes, and loss functions based on the tabulated results.

Data Analysis
The life data from steel samples, which were randomly divided into groups of 20 items, indicate that each group experienced a different level of stress intensity (Kimber [38]; Lawless [39]). Cui et al. [40] demonstrated that the data can be well described by a log-normal distribution. Our analysis focuses on the data obtained from the stress levels ranging from 35 to 36 MPa, with the normal stress level set at 30 MPa. For convenience, the data are replicated in Table 6 for easy reference. A progressively type-I interval censored sample is generated randomly from this dataset, taking into account the predetermined inspection times t ij = 50j, under the scheme p = (0, 0, 0, 0, 1). The resulting sample is presented in Table 7.
Table 7. The progressively type-I interval censored sample.

Conclusions
This article discusses statistical analysis in the context of progressive type-I interval censoring for the log-normal distribution in a CS-ALT setting. Both classical and Bayesian inferential procedures are applied to estimate the unknown parameters. To approximate the MLEs of the model parameters, the EM algorithm and the midpoint approximation method are employed. For the Bayesian approach, the BEs are obtained under different loss functions, namely the SE, LINEX, and GE loss functions. The Tierney-Kadane and MCMC methods are used to obtain approximate BEs. Additionally, the article derives the asymptotic confidence intervals based on the asymptotic normality of the MLEs and the Bayesian credible intervals using the MCMC procedure. The performance of the different estimation methods is evaluated through a simulation study. The results indicate that the BEs perform well based on measures such as the mean squared error and relative absolute bias of the estimates.
While this article focuses on CS-ALT with progressive type-I interval censoring and the log-normal distribution, the same methodology can be applied to other lifetime distributions under different censoring schemes as well.
Overall, this article provides a comprehensive analysis of statistical procedures for estimating parameters in CS-ALT with progressive type-I interval censoring, specifically for the log-normal distribution.

Table 1. The MSEs of the classical estimates of µ 0, σ, and θ, along with their corresponding RABs (shown in parentheses).
Figure 1. Comparison of classical estimators: MSE values for the different parameter estimators.

Table 2. The MSEs of the BEs evaluated under the SE loss function, along with their corresponding RABs (shown in parentheses).

Figure 2. Comparison of BEs under the SE loss function: MSE values for the different parameter estimators.

Table 3. The MSEs of the BEs evaluated under the LINEX loss function with different values of ν (−2, 0.001, and 2), along with their corresponding RABs (shown in parentheses).

Table 4. The MSEs of the BEs evaluated under the GE loss function with different values of κ (−2, 0.001, and 2), along with their corresponding RABs (shown in parentheses).

Table 5. Comparison of AWs and CPs of the 95% asymptotic and Bayesian credible CIs for µ 0, σ, and θ.

Table 8. The classical point estimates of µ 0, σ, and θ for the real data.

Table 9. The BEs under the SE loss function for the real data.

Table 10. The BEs under the LINEX loss function with different values of ν for the real data.

Table 12. The ACI and BCI of µ 0, σ, and θ for the real data.