Optimal Design for a Bivariate Step-Stress Accelerated Life Test with Alpha Power Exponential Distribution Based on Type-I Progressive Censored Samples

Abstract: We consider an optimal design for the alpha power exponential (APE) distribution, an asymmetric probability distribution, under progressive type-I censoring for a step-stress accelerated life test. In this study, two stress variables are taken into account. To save the time and cost of lifetime testing, progressive censoring and accelerated life testing are utilized. The test units' lifespans are assumed to follow an APE distribution. A cumulative exposure model is used to study the impact of varying stress levels, and a log-linear relationship between the APE distribution's scale parameter and stress is postulated. Maximum likelihood estimators, Bayesian estimators of the model parameters based on a symmetric loss function, approximate confidence intervals (CIs), and credible intervals are provided. Under normal operating conditions, an optimal test plan is designed by minimizing the asymptotic variance of the percentile life.


Introduction
Accelerated life tests (ALTs) are a means of gathering more failure information in less time than would otherwise be possible. ALTs achieve this by subjecting units to higher stress than under usual operating conditions, which can save a significant amount of time and money. The step-stress accelerated life test (SSALT) is one such test. The SSALT approach allows the experimenter to increase the stress levels at pre-specified times during the experiment. In a typical SSALT, a number of samples are placed on test under a given stress level (usually slightly higher than normal operating conditions) for a specified period; if a unit does not fail by the pre-specified time, the stress is increased, and this process repeats until every test unit fails or censoring occurs.
When stress(es) are applied to life-test units in only two steps, this is referred to as a simple SSALT. Refs. [1,2] are excellent resources for interested readers. The authors in Ref. [3] also looked into other forms of accelerated tests that are routinely utilized in practice, with in-depth explanations of certain industrial applications. Ref. [2] examined numerous examples of the most common applications of accelerated testing in engineering and industry.
Many SSALTs with various types of censoring schemes have been examined in the reliability engineering literature; the majority of studies concentrate on Type-I and Type-II censoring. When specific sample units are removed before failing the test, the cost of the test is lowered, since the removed samples can be used elsewhere or in other tests [4]. Furthermore, in rare cases, components are misplaced or removed from the test prior to failure. This is referred to as progressive censoring; see [5,6] for further information on progressively censored samples. This method saves the experimenter time and money, which is especially useful when the objects being examined are expensive. Ref. [7] investigated Type-II progressive censoring with binomial removals for the Weibull extended distribution. Furthermore, Ref. [8] used progressively Type-II censored data with uniform removals to estimate the parameters of the Pareto distribution. Ref. [9] demonstrated inference for step-stress models under the exponential distribution in the context of progressively Type-I censored data, while [10] investigated the simple SSALT for the Weibull distribution using progressive first-failure censoring. Ref. [11] investigated the general k-level SSALT with the Rayleigh lifetime distribution for units subjected to stress under progressive Type-I censoring. This paper investigates Type-I progressive censoring with binomial removals. In most experiments, just one accelerating stress variable is used. However, using several stress factors is preferred, because using only one variable may result in insufficient failure data. Increasing the number of stress factors yields a greater understanding of the stress variables' simultaneous effects, as well as extra failure data. Ref. [12] investigated SSALT for two stress factors using Weibull failure times and Type-I censoring. Ref. [13] used a Type-I hybrid censoring method with an SSALT for two stress variables to achieve optimal holding times. The SSALT with two stress components is referred to as a bivariate SSALT.
In this research, we combine the bivariate SSALT with Type-I progressive censoring for products whose lifetimes follow the APE distribution. The APE distribution was created by [14] from the exponential baseline distribution; its key features were investigated and parameter estimation was performed. Ref. [15] created the alpha power Weibull distribution and demonstrated, using two real data sets, that it outperforms certain other Weibull distribution variants. Additionally, [16] introduced the alpha power generalized exponential (APGE) distribution using the generalized exponential baseline distribution and the alpha power transformation (APT) technique, while [17] developed closed-form formulations for the APGE distribution's moment characteristics. Ref. [18] introduced the alpha power Inverted Topp-Leone distribution after investigating the general mathematical features of the APT family. In the literature, [19] presented the Marshall-Olkin alpha power family as a new generalization of the APT class, and [20] used the Marshall-Olkin alpha power family to introduce a new extension of the Weibull distribution. The APE distribution's probability density function (PDF) and hazard function are similar to those of the Weibull, gamma, and generalized exponential distributions; as a result, it can be used in place of these more common distributions. Because the cumulative distribution function (CDF) of the APE distribution can be specified in closed form, it can also be used to analyze censored data. We also explore how to utilize maximum likelihood and Bayesian methods to estimate the unknown parameters. For demonstration purposes, real datasets are studied.
The problem of establishing the optimal stress-change times under various censoring schemes was examined in [21-26]. Most research in this area has concentrated either on SSALT data analysis or on the development of optimal test plans; this paper covers both topics. The author focuses on maximum likelihood methods to estimate the model parameters. Asymptotic and bootstrap confidence intervals (CIs), as well as credible intervals, for the model parameters are also calculated.
The primary goal of this study is to improve the test plan of the experiment; the change times of the stress levels are therefore optimized. In addition, we employ the intelligent particle swarm optimization (PSO) algorithm, which mimics bird swarm behavior. Compared to other optimization techniques, the PSO algorithm is more objective and simpler to use, and it is applied in a variety of fields. The proposed test plan's optimality criterion is to minimize the asymptotic variance (AV) of the logarithm of the 100(1 − R)th percentile life under the natural operating state with specified reliability R, and the PSO approach is utilized to optimize the results. PSO is a very simple algorithm that works well for optimizing a wide variety of functions. Its popularity is due to two important factors: (1) PSO's core algorithm is straightforward; (2) PSO has been found to be quite effective in a wide range of applications, i.e., it can give excellent results at a low computational cost [27,28].
The PSO method has been used in engineering and other relevant sciences, according to [29,30].
The remainder of this work is arranged as follows. Section 2 describes the model and provides the necessary assumptions. In Section 3, the approximate AVs and covariance matrix are presented using maximum likelihood estimation (MLE) of the model parameters. In Section 4, we consider approximate CIs and credible intervals. In Section 5, the optimal test design is determined. Bayesian estimators are discussed in Section 6, while simulation studies are provided in Section 7. To indicate the flexibility of the suggested method, two real data sets are analyzed in Section 8. Conclusions are presented in Section 9.

Model and Assumptions
Under type-I progressive censoring, the bivariate SSALT can be characterized as follows. Each stress variable has two levels. Let Slk be the k-th level of stress variable l, where l = 1, 2 and k = 0, 1, 2; the levels S10 and S20 represent normal operating conditions. All n units of the experiment start at step 1 with stress levels (S11, S21) and run until time τ1, during which n1 failures are observed. At time τ1, c1 units are randomly removed from the remaining n − n1 surviving units, and the first stress variable is increased from S11 to S12. The second step then runs until the predetermined time τ2, during which n2 failures are observed; at τ2, c2 units are randomly removed from the remaining n − n1 − c1 − n2 surviving units, and the second stress variable is increased from S21 to S22. The test continues until the censoring time T, by which point n3 units have failed in the third step. The remaining c3 = n − n1 − c1 − n2 − c2 − n3 surviving units are all removed from the test at time T.

Assumption
(1) The life of the test units follows an APE distribution with CDF, in each step,

G(t) = (α^(1 − e^(−θt)) − 1)/(α − 1), t > 0, θ > 0, α > 0, α ≠ 1,

where θ and α are the scale and shape parameters, respectively. The corresponding PDF and hazard function are

g(t) = (log α/(α − 1)) θ e^(−θt) α^(1 − e^(−θt)) and h(t) = g(t)/(1 − G(t)),

respectively. Note that as α → 1, the APE distribution reduces to an exponential distribution with rate parameter θ. Plots of the PDF, CDF, and hazard function in Figure 1 show that the APE distribution is flexible for analyzing various types of data.
(2) The scale parameter at test step i is θi, assumed to be a log-linear function of the stress levels in force at that step, for i = 1, 2, 3:

log θi = β0 + β1 S1k + β2 S2k,

where β0, β1, and β2 are unknown parameters that depend on the product and the test technique. It is assumed that there is no interaction between the two stresses.
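The APE distribution functions are simple to evaluate numerically. The following sketch (not part of the original paper; function names such as `ape_cdf` are ours) implements the CDF, PDF, and hazard function, falling back to the exponential case as α → 1:

```python
import numpy as np

def ape_cdf(t, theta, alpha):
    """CDF of the alpha power exponential (APE) distribution.
    Reduces to the exponential CDF as alpha -> 1."""
    base = 1.0 - np.exp(-theta * t)          # exponential baseline CDF
    if np.isclose(alpha, 1.0):
        return base
    return (alpha**base - 1.0) / (alpha - 1.0)

def ape_pdf(t, theta, alpha):
    """PDF of the APE distribution."""
    base = 1.0 - np.exp(-theta * t)
    if np.isclose(alpha, 1.0):
        return theta * np.exp(-theta * t)
    return (np.log(alpha) / (alpha - 1.0)) * theta * np.exp(-theta * t) * alpha**base

def ape_hazard(t, theta, alpha):
    """Hazard function h(t) = g(t) / (1 - G(t))."""
    return ape_pdf(t, theta, alpha) / (1.0 - ape_cdf(t, theta, alpha))
```

A quick sanity check is that the PDF is the numerical derivative of the CDF and that the CDF runs from 0 to 1.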
(3) The cumulative exposure model (CE) was taken into consideration as well. Regardless of how the probability is obtained, the remaining life in this model is simply decided by the current cumulative failure probability and the current stress level [31].
(4) The shape parameter remains constant for all stress levels.
For the bivariate SSALT under the CE model, the CDF of the test units can be written as

G(t) = G1(t), 0 ≤ t < τ1,
G(t) = G2(t − τ1 + s1), τ1 ≤ t < τ2,
G(t) = G3(t − τ2 + s2), τ2 ≤ t ≤ T,

where Gi denotes the APE CDF with scale parameter θi, and the equivalent start times s1 and s2 are chosen so that G2(s1) = G1(τ1) and G3(s2) = G2(τ2 − τ1 + s1); since all steps share the shape parameter α, these reduce to s1 = (θ1/θ2)τ1 and s2 = (θ2/θ3)(τ2 − τ1 + s1). The corresponding PDF follows by differentiating each branch.
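Because every step shares the shape parameter α and θ enters the APE CDF only through the product θt, the equivalent start times of the cumulative exposure model reduce to simple scale ratios. A minimal sketch of the resulting piecewise CDF under those assumptions (function names are ours, not the paper's):

```python
import numpy as np

def ape_cdf(t, theta, alpha):
    """APE CDF; exponential baseline raised into the alpha power form."""
    base = 1.0 - np.exp(-theta * t)
    return base if np.isclose(alpha, 1.0) else (alpha**base - 1.0) / (alpha - 1.0)

def ce_cdf(t, thetas, alpha, tau1, tau2):
    """Cumulative-exposure CDF for the 3-step bivariate SSALT.
    Equivalent start times s1, s2 make the branches continuous."""
    th1, th2, th3 = thetas
    s1 = (th1 / th2) * tau1                    # solves G2(s1) = G1(tau1)
    s2 = (th2 / th3) * (tau2 - tau1 + s1)      # solves G3(s2) = G2(tau2 - tau1 + s1)
    if t < tau1:
        return ape_cdf(t, th1, alpha)
    elif t < tau2:
        return ape_cdf(t - tau1 + s1, th2, alpha)
    return ape_cdf(t - tau2 + s2, th3, alpha)

# example evaluation at a point inside step 2 (illustrative parameter values)
val = ce_cdf(3.0, (0.5, 0.8, 1.2), 2.0, 2.0, 4.0)
```

Continuity at τ1 and τ2 is a useful check of the equivalent-time construction.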

Likelihood Function and Fisher Information Matrix
For life testing, the MLE approach is the most widely used estimation method. In a bivariate SSALT, let tij, i = 1, 2, 3, j = 1, 2, ..., ni denote the observations derived from a type-I progressively censored sample with random removals. The number of units removed from the test at each removal time follows a binomial distribution, with each surviving unit removed independently with the same probability p. That is,

C1 ~ Binomial(n − n1, p) and C2 ~ Binomial(n − n1 − c1 − n2, p),

so that, for example,

P(C1 = c1) = C(n − n1, c1) p^c1 (1 − p)^(n − n1 − c1), c1 = 0, 1, ..., n − n1,

and the joint probability mass function of the numbers of removed units follows by multiplying such terms. Assume that Ci is independent of tij for all i.
The joint likelihood function, conditional on the removals, is

L1(t; θ1, θ2, θ3, α, p | C = c) ∝ ∏(i=1..3) ∏(j=1..ni) g(tij) × [1 − G(τ1)]^c1 [1 − G(τ2)]^c2 [1 − G(T)]^c3,

where G(t) and g(t) are given by Equations (3) and (4); for more information see [32]. The full likelihood is then constructed by multiplying L1(t; θ1, θ2, θ3, α, p | C = c) by P(C = c). Maximizing the logarithm of the likelihood function instead of the likelihood function itself is frequently easier. The MLE values of θ1, θ2, θ3, α, and p are obtained by setting the first-order partial derivatives of the log-likelihood with respect to each parameter equal to zero and solving the resulting equations.
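In practice the score equations have no closed-form solution and the log-likelihood is maximized numerically. The sketch below minimizes the negative log-likelihood under the CE model; the binomial-removal factor is omitted because it does not involve (θ1, θ2, θ3, α), and all data values and starting points are hypothetical, chosen only for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def ape_cdf(t, theta, alpha):
    base = 1.0 - np.exp(-theta * t)
    return (alpha**base - 1.0) / (alpha - 1.0)

def ape_logpdf(t, theta, alpha):
    base = 1.0 - np.exp(-theta * t)
    return (np.log(np.log(alpha) / (alpha - 1.0)) + np.log(theta)
            - theta * t + base * np.log(alpha))

def neg_loglik(params, t1, t2, t3, c, tau1, tau2, T):
    """Negative log-likelihood of the 3-step SSALT under type-I progressive
    censoring; c = (c1, c2, c3) are the numbers of removed survivors."""
    th1, th2, th3, alpha = params
    if min(th1, th2, th3) <= 0 or alpha <= 0 or abs(alpha - 1.0) < 1e-6:
        return np.inf                        # keep the search in the valid region
    s1 = (th1 / th2) * tau1                  # CE-model equivalent start times
    s2 = (th2 / th3) * (tau2 - tau1 + s1)
    ll = ape_logpdf(t1, th1, alpha).sum()
    ll += ape_logpdf(t2 - tau1 + s1, th2, alpha).sum()
    ll += ape_logpdf(t3 - tau2 + s2, th3, alpha).sum()
    ll += c[0] * np.log1p(-ape_cdf(tau1, th1, alpha))            # survivors at tau1
    ll += c[1] * np.log1p(-ape_cdf(tau2 - tau1 + s1, th2, alpha))  # at tau2
    ll += c[2] * np.log1p(-ape_cdf(T - tau2 + s2, th3, alpha))     # at T
    return -ll

# hypothetical failure times per step and removal counts
t1 = np.array([0.5, 1.1, 1.8]); t2 = np.array([2.3, 3.0, 3.7]); t3 = np.array([4.2, 5.1])
c = [2, 1, 3]; tau1, tau2, T = 2.0, 4.0, 6.0
res = minimize(neg_loglik, x0=[0.3, 0.5, 0.8, 2.0],
               args=(t1, t2, t3, c, tau1, tau2, T), method="Nelder-Mead")
```

Nelder-Mead is used because the objective's derivatives are awkward; any derivative-free method would do.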

Confidence Intervals
The CIs of the parameters are calculated in this section. Because the point estimate is the most plausible value of the parameter, it makes sense to base the CIs on it. CIs are ranges of values that serve as good approximations of an unknown population parameter. Ref. [33] was the first to introduce CIs to statistics. Two types of CIs are calculated in this study.

Approximate Confidence Intervals
Because the PDF of the APE distribution is not symmetric, asymptotic CIs based on normality may not perform well. Since the underlying distribution is assumed to be APE, we consider the parametric bootstrap percentile interval more appropriate than the nonparametric one; it is known that the nonparametric bootstrap percentile interval does not work well in general (see Section 5.3.1 of [31]). One could instead use a parametric bootstrap interval with normal approximation or Studentization, but such CIs are symmetric, so they may not work well in our asymmetric case.
By large-sample theory, under certain regularity conditions the MLEs are consistent and asymptotically normally distributed. Because the MLEs of the parameters are not available in closed form, exact CIs cannot be calculated, so asymptotic CIs based on the asymptotic normal distribution of the MLEs are calculated instead.
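The construction is generic: invert the observed Fisher information at the MLE and form a Wald-type interval. The sketch below illustrates it on a one-parameter exponential sample (not the paper's model; this case has a closed-form MLE and information, which makes the numerical approximation easy to verify):

```python
import numpy as np

def wald_ci(theta_hat, neg_loglik, z=1.959964, h=1e-4):
    """Approximate 95% CI from the observed Fisher information,
    estimated by a central-difference second derivative of the
    negative log-likelihood at the MLE."""
    d2 = (neg_loglik(theta_hat + h) - 2.0 * neg_loglik(theta_hat)
          + neg_loglik(theta_hat - h)) / h**2   # observed information
    se = 1.0 / np.sqrt(d2)
    return theta_hat - z * se, theta_hat + z * se

# worked example: exponential data, MLE = 1/mean, information = n/theta^2
data = np.array([0.3, 1.2, 0.7, 2.5, 0.9, 1.8, 0.4, 1.1])
theta_hat = 1.0 / data.mean()
nll = lambda th: -(len(data) * np.log(th) - th * data.sum())
lo, hi = wald_ci(theta_hat, nll)
```

For the SSALT model the same recipe applies with a numerical Hessian of the full log-likelihood and the diagonal of its inverse.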

Bootstrap Confidence Intervals
CIs based on parametric bootstrap sampling percentile intervals can be produced in this subsection; for additional information, see [34]. The bootstrap CIs are calculated using the following Algorithm 1.

Algorithm 1:
The algorithm of the bootstrap confidence intervals.
Step 1 Resampling: Using the original sample t and the MLEs of the parameters, generate a bootstrap resample t*.
Step 2 Bootstrap estimates: Calculate the bth bootstrap estimates using the resample t* obtained in Step 1.
Step 3 Repetition: Repeat Steps 1 and 2 B times to obtain B bootstrap estimates.
Step 4 Arranging in ascending order: Sort each set of bootstrap estimates in ascending order. The 100(1 − δ)% percentile bootstrap CI for a parameter ω is then given by the δ/2 and 1 − δ/2 empirical quantiles of the ordered bootstrap estimates.
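The steps above can be sketched compactly. For brevity this sketch resamples the data nonparametrically, whereas Algorithm 1 resamples parametrically from the fitted APE model; the sorting and quantile steps are identical:

```python
import numpy as np

rng = np.random.default_rng(42)

def percentile_bootstrap_ci(data, estimator, B=2000, delta=0.05):
    """Percentile bootstrap CI: resample, re-estimate, sort, and read off
    the delta/2 and 1 - delta/2 empirical quantiles (Steps 1-4)."""
    n = len(data)
    boot = np.empty(B)
    for b in range(B):                       # Steps 1-3: resample and re-estimate
        resample = rng.choice(data, size=n, replace=True)
        boot[b] = estimator(resample)
    boot.sort()                              # Step 4: ascending order
    lo = boot[int(np.floor(B * delta / 2))]
    hi = boot[int(np.ceil(B * (1 - delta / 2))) - 1]
    return lo, hi

# illustrative data; the estimator here is the sample mean
data = rng.exponential(scale=2.0, size=100)
ci = percentile_bootstrap_ci(data, np.mean)
```

In the parametric variant, `resample` would be drawn from the APE model evaluated at the MLEs rather than from `data`.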

Optimization Criterion
One of the aims of this article is to investigate the selection of τ1 and τ2 in a bivariate SSALT with type-I progressive censoring, as stated in the introduction. The criterion for determining the best values of τ1 and τ2 is based on percentile life under normal operating conditions: the optimal plan minimizes the AV of the MLE of the logarithm of percentile life under the normal conditions (S10, S20). This is the criterion most frequently employed. Before computing the optimization criterion, it is important to note that the parameter values derive from prior experiments with similar products or from a small preliminary experiment; the best test design is then determined based on these estimates. For a specified reliability R, the 100(1 − R)th percentile life tR under the ordinary operating conditions (S10, S20) satisfies 1 − G0(tR) = R, where G0 is the APE CDF with scale parameter θ0 extrapolated from the log-linear model; when R = 0.5, log(tR) is the logarithm of the median life at usual operating conditions with stress level (S10, S20). According to assumption 2, log θ0 = β0 + β1 S10 + β2 S20, and the MLE of log tR is obtained by substituting the MLEs of the model parameters. The AV of log tR is calculated using the delta method:

AV(log tR) = H F^(−1) H^T,

where F is the Fisher information matrix and H is the row vector of first derivatives of log tR with respect to the estimates of θ1, θ2, θ3, α, and p. Thus, the objective is to minimize AV(log tR); the minimizing values τ*1 and τ*2 give the optimal SSALT plan.
Because the objective function is nonlinear, PSO is an effective optimization approach in this methodology and can be used to discover a near-optimal solution. PSO is a search algorithm and intelligence optimization method based on the social behaviors of fish and birds. PSO has proven to be effective in a wide range of optimization problems [31].
The PSO method's popularity has grown due to its simplicity and applicability in handling a wide range of engineering and management optimization challenges [35]. The PSO method has several advantages, including ease of implementation, robustness, and speed of computing.
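A minimal PSO implementation is only a few lines. The sketch below minimizes a smooth stand-in objective over (τ1, τ2) bounds, since the actual AV(log tR) surface depends on the fitted model; all tuning constants (inertia w, acceleration coefficients c1, c2) are conventional defaults, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each velocity is pulled toward the
    particle's personal best and the swarm's global best."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)            # keep particles inside the bounds
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# toy stand-in for AV(log t_R) over (tau1, tau2); minimum at (1.5, 3.0)
obj = lambda z: (z[0] - 1.5)**2 + (z[1] - 3.0)**2 + 1.0
best, fbest = pso(obj, ([0.0, 0.0], [5.0, 5.0]))
```

Replacing `obj` with the model-based AV(log tR) from Equation (30) gives the optimal change times (τ*1, τ*2).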

Bayesian Estimators
In recent years, Bayesian inference has gained popularity in a range of fields, including engineering, clinical medicine, biology, and so on. Its ability to incorporate prior information makes it particularly useful in reliability studies, where data availability is a big issue. In this part, Bayesian estimates of the model parameters α, θ1, θ2, and θ3 and the accompanying credible intervals are constructed.

Prior Information and Loss Function
Because the gamma distribution can take on a variety of shapes depending on its parameter values, employing independent gamma priors is straightforward and can result in more expressive posterior density estimates. We therefore adopt independent gamma priors, which are more flexible than many other prior distributions, for the parameters of the APE distribution under the bivariate SSALT and type-I progressive censoring model. The parameters are assigned independent Gamma(aj, bj) priors, j = 1, ..., 4, so the joint prior is

π(α, θ1, θ2, θ3) ∝ α^(a1−1) e^(−b1 α) θ1^(a2−1) e^(−b2 θ1) θ2^(a3−1) e^(−b3 θ2) θ3^(a4−1) e^(−b4 θ3),

where the hyperparameters aj, bj, j = 1, ..., 4, reflect prior knowledge of the unknown parameters α, θ1, θ2, and θ3 and are taken to be non-negative. See [36,37] for further details on eliciting the hyperparameters. According to the literature, choosing the loss function is a critical issue in Bayesian analysis; the squared error loss (SEL) function is the symmetric loss function most often utilized when estimating unknown parameters.

Posterior Analysis by SLF
The joint posterior density function is found by combining the likelihood function of the observed APE data under bivariate SSALT and type-I progressive censoring with the prior knowledge.
Under the SEL function, the Bayesian estimators of α, θ1, θ2, and θ3 are the corresponding posterior expectations. To produce these estimates, the marginal posterior distribution of each parameter (α, θ1, θ2, and θ3) is required. However, explicit formulations of the marginal PDFs are clearly not practicable due to the implied mathematical calculations. As a result, we compute Bayesian estimates and credible intervals using simulation methods, namely MCMC techniques.
One of the most useful MCMC algorithms is the Metropolis-Hastings (MH) algorithm, which is used to generate random samples using posterior density distribution and an independent proposal distribution to approximate Bayesian estimates and create the associated Highest Posterior Density (HPD) credible intervals. Furthermore, this method provides a chain form of the Bayesian estimate that is easy to utilize in practice. For more information on this algorithm, see [38,39].
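A random-walk MH sketch on a toy normal target illustrates the mechanics (the paper's chains target the joint posterior of α, θ1, θ2, and θ3 instead; the step size and iteration count below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_hastings(log_post, x0, n_iter=20000, step=1.0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and
    accept with probability min(1, post(x') / post(x))."""
    x = x0
    lp = log_post(x)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject in log space
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# toy target: posterior proportional to N(2, 1)
chain = metropolis_hastings(lambda x: -0.5 * (x - 2.0)**2, x0=0.0)
posterior_mean = chain[2000:].mean()              # discard burn-in
```

Credible intervals follow from empirical quantiles of the chain; HPD intervals additionally pick the shortest interval at the given coverage.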

Simulation Study
A simulation study was carried out to illustrate the theoretical results, to analyze the relative efficiency of step-stress testing versus constant-stress testing, and to evaluate the estimates as the model parameters change. For a better understanding of the test procedure, data were first simulated. The author uses the following procedure to produce samples from the bivariate SSALT model described in Section 2:
(a) From a uniform (0, 1) distribution, generate a random sample of size n and arrange it in ascending order to produce the order statistics (U1:n, U2:n, ..., Un:n).
(b) For the given stress change time τ1 and parameter θ1, find n1 as the number of order statistics satisfying Uj:n ≤ G1(τ1).
(c) Remove c1 units at random from the non-failed items at time τ1, where c1 follows a binomial distribution with removal probability p.
(d) For the given stress change time τ2 and parameter θ2, find n2 as the number of retained order statistics falling below the CE-model CDF evaluated at τ2.
(e) Remove c2 units at random from the non-failed items at time τ2, where c2 follows a binomial distribution with removal probability p.
(f) For the given prefixed censoring time T and parameter θ3, find n3 analogously.
(g) Remove the remaining c3 = n − n1 − c1 − n2 − c2 − n3 units from the non-failed items at time T.
(h) Obtain the ordered observations by inverting the CE-model CDF at the retained uniforms, e.g., t1j = G1^(−1)(Uj:n), j = 1, 2, ..., n1.
(i) The binomial removal probability p is set to 0.3 and 0.5.
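Steps (a)-(c) and (h) can be sketched directly using the closed-form inverse of the APE CDF (the function names and parameter values below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def ape_inv_cdf(u, theta, alpha):
    """Inverse CDF of the APE distribution: solves G(t) = u for t."""
    return -np.log(1.0 - np.log1p(u * (alpha - 1.0)) / np.log(alpha)) / theta

def simulate_step1(n, theta1, alpha, tau1, p):
    """Steps (a)-(c) and (h) for the first stage: ordered uniforms,
    step-1 failure count, failure times, and binomial removals."""
    u = np.sort(rng.uniform(size=n))                               # step (a)
    g_tau1 = (alpha**(1.0 - np.exp(-theta1 * tau1)) - 1.0) / (alpha - 1.0)
    n1 = int(np.sum(u <= g_tau1))                                  # step (b)
    t1 = ape_inv_cdf(u[:n1], theta1, alpha)                        # step (h)
    c1 = rng.binomial(n - n1, p)                                   # step (c)
    return t1, n1, c1

t1, n1, c1 = simulate_step1(n=50, theta1=0.5, alpha=2.0, tau1=2.0, p=0.3)
```

Steps (d)-(g) repeat the same idea with the CE-model equivalent start times for the later stages.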
The stress levels and simulated data from the APE distribution under bivariate SSALT and type-I progressive censoring are shown in Tables 1-8. A simulation study was carried out to obtain the MLE values and Bayesian estimates of the parameters, and to assess the precision of these estimates using bias, mean squared errors (MSEs), and lengths of CIs (L.CI). The MLE values of α, θ1, θ2, θ3, and p are obtained by solving Equations (12)-(16) using the optimization tools in R packages with the observations produced by the simulation algorithm. The Bayesian estimates of α, θ1, θ2, and θ3 are obtained by solving Equations (14)-(17) and (25) in the same way.
For various sample sizes and true values of the parameters, the results are presented in Tables 1-8. The results show that as the sample size grows, the bias, MSE, L.CI, and length of credible interval (L.CCI) of the MLE and Bayesian estimates approach zero. Since the bias approaches zero, we conclude that the estimator ŵn converges in probability to w. In addition, since the MSE approaches zero, the variance of ŵn also approaches zero. Thus, we can consider n^b(ŵn − w), where b is chosen such that the limiting distribution is non-degenerate, that is, n^b(ŵn − w) = Op(1); for more details, see Section 6.1.1 of [40]. Based on bias, MSE, and L.CCI, the Bayesian estimation is the best estimation method.
The stress levels and simulated data from the APE distribution under bivariate SSALT and type-I progressive censoring are shown in Table 9. The temperature levels are 358 and 378 degrees Fahrenheit, respectively, while the voltage levels are 12 and 16. Table 9 shows a basic SSALT strategy with two stress variables for the products. As can be seen, 49 of the samples failed in the first step, nine samples failed in the second step, and six samples failed in the third step. To acquire the MLE and Bayesian estimates of the parameters and to assess their precision, the standard error (SE) and confidence interval (CI) were computed for the simulated data in Table 10. Table 11 presents optimality A (OA) and optimality B (OB), from which we conclude that the model with p = 0.5 is better than the model with p = 0.3. By fixing one parameter and adjusting the others, we sketched the log-likelihood for each parameter, as shown in Figure 2. The simulated data set performs quite well, as the four roots of the parameters are global maxima, as seen in Figure 3. Trace and density plots for all parameters of the MCMC chains are given in Figure 4. Figure 5 shows the posterior density of the MCMC results for each parameter, which displays a symmetric, approximately normal shape similar to the proposed distribution. Figure 6 confirms that the MCMC results have converged.

Application of Real Data
In this section, we discuss two different datasets analyzed with this model. The real data are provided to illustrate the theoretical results.

Cancer Patient Data
This dataset refers to the remission times (in months) of a random sample of 128 bladder cancer patients; see [41]. The Kolmogorov-Smirnov goodness-of-fit test is employed for the real data, and the resulting Kolmogorov-Smirnov distance (KSD) and its p-value (KSPV) in Table 12 indicate that the APE distribution fits the data. Table 12 presents the MLE with standard error (SE), KSD, KSPV, AIC, and BIC. The KSPV is larger than 0.05, meaning that the distribution fits this data. Figure 7 confirms this conclusion, showing that the estimated PDF matches the histogram and the estimated CDF is close to the empirical CDF. Table 13 shows the point and interval estimation for the MLE and Bayesian estimation of the model when τ1 = 7, τ2 = 12, T = 50. From the results in Table 13, we conclude that the Bayesian estimation is the best estimation method for this model, since its SE values are smaller than those of the MLE and its confidence intervals are the shortest. By fixing the other parameters and adjusting one at a time, we sketched the log-likelihood for each parameter of the cancer patient data, as shown in Figure 8. The cancer patient data set performs quite well, as the four roots of the parameters are global maxima, as seen in Figure 8. Trace plots for all parameters, confirming the convergence of the MCMC results, are shown in Figure 9. Furthermore, Figure 9 shows the posterior density of the MCMC results for each parameter, which displays a symmetric, approximately normal shape similar to the proposed distribution.

Failure Times
All 50 items were put into use at t = 0, and failure times are recorded in weeks; these data were introduced by [ ]. Table 14 presents the MLE with SE, KSD, KSPV, AIC, and BIC. The KSPV is 0.9604, which is larger than 0.05, so the distribution fits the failure-times data. The stress levels for the failure-times data under bivariate SSALT and type-I progressive censoring are shown in Table 15. Table 16 shows the point and interval estimation for the MLE and the Bayesian estimation of the model when τ1 = 8, τ2 = 12, T = 17. From the results in Table 16, we conclude that the Bayesian estimation is the best estimation method for this model, since its SE values are smaller than those of the MLE and its confidence intervals are the shortest. Table 17 presents the OA, OB, AIC, and BIC values and the estimated binomial removal probability; we again conclude that the model with p = 0.5 is better than the other model. The failure-times dataset performs quite well, as the four roots of the parameters are global maxima, as seen in Figure 10. Trace and density plots of the MCMC results, confirming convergence, are given in Figure 11.

Conclusions
In most experiments, just one accelerating stress variable is used. Accelerating just one stress variable does not always produce enough failure data, so two stress factors may be required for further acceleration. Two stress variables are examined in this paper; their inclusion in a test design results in a better understanding of the effect of two stress variables acting concurrently. Furthermore, most studies in this area focus on the development of optimal test strategies. We examined an optimal test of the APE distribution for an SSALT under progressive type-I censoring in this paper, with both stress variables taken into consideration. Progressive censoring and accelerated life testing were used to reduce the time and cost of lifetime testing. The test units' lifespans were assumed to follow the APE distribution. The influence of varying stress levels was studied using a cumulative exposure model, and the scale parameter of the APE distribution was assumed to have a log-linear relationship with stress. The model parameters were estimated using maximum likelihood and Bayesian methods; the results show that as the sample size increases, the biases of the MLE and Bayesian estimates approach zero. Similarly, as the sample size grows, the MSEs and L.CIs fall. Based on bias, MSE, approximate CIs, credible intervals, and the length of the credible interval (L.CCI), it is clear that Bayesian estimation is the best estimation approach. An optimal test plan was created by minimizing the AV of the percentile life under normal operating conditions. The simulation study was provided to support the theoretical results.

Data Availability Statement:
The data used to support the findings of this study are included within the article.