Abstract
In this study, the estimation of the unknown parameters of an alpha power Weibull (APW) distribution under an optimal step-stress accelerated life testing (SSALT) plan is investigated from both classical and Bayesian viewpoints. Progressive type-II censoring and accelerated life testing are used to reduce testing time and costs, and a cumulative exposure model is used to examine the impact of the different stress levels. A log-linear relation between the scale parameter of the APW distribution and the stress level is proposed. Maximum likelihood estimators of the model parameters, as well as approximate and bootstrap confidence intervals (CIs), are calculated. Bayesian estimates of the model parameters are obtained under symmetric and asymmetric loss functions. An optimal test plan is created under typical operating conditions by minimizing the asymptotic variance (AV) of the percentile life. A simulation study is presented to demonstrate the optimality of the plan, and real-world data are analyzed to demonstrate the model's versatility.
1. Introduction
Recently, most manufactured products are highly reliable with long lifespans, resulting in high costs and long experimental durations when they are tested under normal operating conditions. When standard life testing is no longer practical, the reliability experimenter may employ accelerated life testing, in which the experimental units are subjected to stress levels higher than those of normal operating conditions. Accelerated life tests (ALTs) are used to obtain information on the lifetime distribution of items rapidly by testing them at higher than nominal levels of stress to trigger early failures; for more information on this topic, see [1,2]. Moreover, ALTs enable researchers to investigate the impact of stress factors such as pressure or temperature on the lives of experimental units. The ALT data must be fitted to a model that connects the lifetime to the stress and estimates the parameters of the lifetime distribution under normal conditions. This necessitates a model that connects the stress levels to the parameters of the lifetime distribution. The cumulative exposure model, presented in [3] and described further in [4,5], is one such model. ALTs can be performed at step-wise increasing or constant high stress levels. In practice, constant-stress ALT results in very few failures during the experimental duration, limiting the efficacy of accelerated testing. The step-stress paradigm, which allows the stress to be changed in steps at intermediate stages of the experiment, is an alternative form of accelerated testing. The step-stress paradigm has been extensively addressed in the literature; for example, [6,7,8,9,10] all considered inference for step-stress models assuming exponential lifetimes under different removal strategies. In the case of a simple step-stress model, [11,12] discussed determining the optimal time to change the stress from the first level to the second. The authors of [13] provided both inference and an optimal progressive strategy for progressive type-II censoring. While many of these discussions focused on exponential step-stress models, inferential approaches and optimal progressive plans for Weibull distributed lifetimes were studied in [14,15,16], and simple and multiple step-stress models with lognormally distributed lifetimes under type-I censoring have also been developed. For a comprehensive evaluation of step-stress models, see [17,18], which also review previous work on exact inferential methods for exponential step-stress models as well as optimal step-stress test design, and [19], which outlines several advances in progressive censoring.
Step-stress life testing (SSLT) is a type of ALT. The use of an SSLT allows the experimenter to control the level of stress during the experiment: items are first assessed at an initial stress level, which is then increased at prespecified times. In ALTs, tests are frequently halted before all units fail, and the resulting censored data are used to reduce test time and expenditure. Type-I and type-II censoring are the two most prevalent censoring schemes in life testing and reliability trials. The progressive type-II censoring technique has recently gained popularity and is frequently used for assessing highly reliable data. The key advantage of progressive censoring is that it provides more information on the lifetimes of the units without exposing all units to high levels of stress, leading to reduced costs. This type of censoring can be described as follows. In a life testing experiment, assume n identical items are tested, m is a predetermined number of failures, and R_1, R_2, ..., R_m are prefixed integers satisfying R_1 + R_2 + ... + R_m + m = n. At the time of the first failure, t_1, R_1 of the surviving units are randomly withdrawn. Similarly, when the second failure, t_2, occurs, R_2 of the surviving units are randomly withdrawn, and so on. At the mth failure, t_m, the experiment is terminated, and all remaining surviving units are withdrawn. See [20,21] for more information on progressive type-II censoring.
The rest of the paper is organized as follows. Section 2 presents a description of the lifetime model and test assumptions. The MLEs of the model parameters under the simple step-stress ALT are derived in Section 3. Section 4 gives the Fisher information matrix. Section 5 constructs the approximate and bootstrap confidence intervals for the model parameters. Section 6 presents the Bayes estimates (BEs) of the model parameters produced using the Markov chain Monte Carlo (MCMC) approach. Section 7 discusses the optimization criteria. The simulation studies are reported in Section 8. In Section 9, a real data set is analyzed to illustrate the flexibility of the model based on progressive type-II censoring. Section 10 contains the conclusion.
2. Model Description and Assumptions of Test
The alpha power Weibull (APW) distribution is important because it can accommodate monotone and non-monotone failure rate functions, which are prominent in reliability research, and it extends the Weibull distribution. In fact, the APW distribution was motivated by the widespread use of the Weibull distribution in reliability theory, together with the fact that the generalization allows for more flexibility in analyzing lifetime data. The APW model was proposed in [22] by applying the alpha power transformation to the Weibull distribution. Its cumulative distribution function (CDF) and probability density function (PDF) are given as follows:
\[
F(t)=\frac{\alpha^{\,1-e^{-\lambda t^{\beta}}}-1}{\alpha-1},\qquad t>0,\ \alpha,\beta,\lambda>0,\ \alpha\neq 1, \tag{1}
\]
and
\[
f(t)=\frac{\log\alpha}{\alpha-1}\,\lambda\beta t^{\beta-1}e^{-\lambda t^{\beta}}\,\alpha^{\,1-e^{-\lambda t^{\beta}}}, \tag{2}
\]
(for α = 1, the APW distribution reduces to the baseline Weibull distribution), and the associated hazard rate function (hrf) is given by:
\[
h(t)=\frac{f(t)}{1-F(t)}=\frac{\log\alpha\;\lambda\beta t^{\beta-1}e^{-\lambda t^{\beta}}\,\alpha^{\,1-e^{-\lambda t^{\beta}}}}{\alpha-\alpha^{\,1-e^{-\lambda t^{\beta}}}}. \tag{3}
\]
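To make these forms concrete, the following is a minimal numerical sketch of the APW CDF, PDF, and hazard under the parameterization used above (Weibull baseline 1 − exp(−λt^β)); the function names are illustrative and not part of any package.

```python
import numpy as np

# Minimal sketch of the APW density, CDF, and hazard (alpha > 0, alpha != 1),
# assuming the Weibull baseline 1 - exp(-lam * t**beta) used in the text.

def apw_cdf(t, alpha, beta, lam):
    """APW cumulative distribution function."""
    g = 1.0 - np.exp(-lam * t**beta)            # baseline Weibull CDF
    return (alpha**g - 1.0) / (alpha - 1.0)

def apw_pdf(t, alpha, beta, lam):
    """APW probability density function."""
    g = 1.0 - np.exp(-lam * t**beta)
    base_pdf = lam * beta * t**(beta - 1.0) * np.exp(-lam * t**beta)
    return np.log(alpha) / (alpha - 1.0) * base_pdf * alpha**g

def apw_hazard(t, alpha, beta, lam):
    """Hazard rate h(t) = f(t) / (1 - F(t))."""
    return apw_pdf(t, alpha, beta, lam) / (1.0 - apw_cdf(t, alpha, beta, lam))

# Example: the hazard can be increasing, decreasing, or non-monotone
# depending on (alpha, beta); values below are illustrative only.
t = np.linspace(0.1, 5.0, 5)
print(apw_hazard(t, alpha=2.0, beta=0.8, lam=1.0))
```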
Assumptions of Test.
Let x_1 < x_2 denote the stress levels used in the test, and let x_0 denote the use stress or design stress. Assume n identical units are tested under stress level x_1, and the surviving units are tested under stress level x_2 after a predetermined time τ. Progressive type-II censoring is applied as follows: R_1 units are randomly withdrawn from the surviving units at the time of the first failure; at the time of the second failure, R_2 units are picked at random from the remaining surviving units; and so on. When the mth failure occurs, the test is terminated, and all remaining units are withdrawn. Complete samples and conventional type-II censored samples are clearly special cases of this scheme. Let n_1 be the number of failures observed before time τ at stress level x_1. With these notations, the observed progressively censored data are t_1 < t_2 < ... < t_{n_1} ≤ τ < t_{n_1+1} < ... < t_m.
In the framework of simple step-stress ALT, the following assumptions are employed throughout the article:
- For stress level x_i, i = 1, 2, the failure time of a test unit follows an APW distribution with common shape parameters α and β and scale parameter λ_i.
- The association between the life characteristic θ and the stress loading can take one of the following forms:
- Arrhenius model: θ = exp(a + b/V), where V is the absolute temperature.
- Inverse power model: θ = exp(a + b log V), where V is the voltage.
- Exponential model: θ = exp(a + bV), where a and b are constants and V is a weathering variable.
More information on accelerated models can be found in [23]. Thus, for the above three models, log θ is a linear function of the transformed stress 1/V, log V, or V, respectively.
Furthermore, we assume that the relationship between the scale parameter λ_i and the stress level x_i is log-linear:
\[
\log\lambda_i = a + b\,S(x_i),\qquad i=1,2, \tag{4}
\]
where a and b are unknown parameters and S(x) is an increasing function of x.
- The cumulative exposure model is valid, as described in [23].
The CDF of a test unit under the simple step-stress ALT is obtained by combining the cumulative exposure model with the CDF presented in Equation (1):
\[
G(t)=
\begin{cases}
F_{1}(t), & 0<t\le\tau,\\
F_{2}(t-\tau+h), & t>\tau,
\end{cases} \tag{5}
\]
where F_i denotes the APW CDF with scale parameter λ_i, i = 1, 2, and h is the solution of F_2(h) = F_1(τ). The corresponding PDF is given by
\[
g(t)=
\begin{cases}
f_{1}(t), & 0<t\le\tau,\\
f_{2}(t-\tau+h), & t>\tau,
\end{cases} \tag{6}
\]
where f_i denotes the APW PDF with scale parameter λ_i.
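As a small worked step, under the APW parameterization assumed above, with the two stress levels sharing α and β and differing only in the scale λ_i, the shift h in Equations (5) and (6) has a simple closed form:

```latex
% Sketch of the cumulative exposure shift for APW lifetimes (assuming the
% baseline 1 - e^{-lambda t^beta} used above and a common alpha, beta).
\[
F_2(h)=F_1(\tau)
\;\Longleftrightarrow\;
\alpha^{\,1-e^{-\lambda_2 h^{\beta}}}=\alpha^{\,1-e^{-\lambda_1 \tau^{\beta}}}
\;\Longleftrightarrow\;
\lambda_2 h^{\beta}=\lambda_1 \tau^{\beta}
\;\Longrightarrow\;
h=\left(\frac{\lambda_1}{\lambda_2}\right)^{1/\beta}\tau .
\]
```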
3. Maximum Likelihood Estimation
The maximum likelihood estimators (MLEs) of the model parameters are obtained in this section. Let t_1 < t_2 < ... < t_m denote the observed lifetimes acquired from progressive type-II censoring with removal scheme (R_1, ..., R_m). The likelihood function of the four model parameters, based on the progressively type-II censored sample, is obtained from the CDF in Equation (5) and the associated PDF in Equation (6) as
\[
L \propto \prod_{i=1}^{m} g(t_{i})\,\bigl[1-G(t_{i})\bigr]^{R_{i}}. \tag{7}
\]
The MLEs of the model parameters exist only if at least one failure occurs before τ and at least one failure occurs after τ, in which case the log-likelihood function ℓ = log L can be maximized with respect to the parameters.
Setting the first-order partial derivatives of ℓ with respect to each parameter equal to zero yields a system of nonlinear equations in the unknown parameters. It is evident that obtaining a closed-form solution is quite challenging. As a result, an iterative approach such as the Newton–Raphson method can be employed to provide a numerical solution to this nonlinear system.
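As an illustration of this numerical step, the following is a minimal sketch of maximizing an APW log-likelihood with a general-purpose optimizer. For brevity it uses a complete sample at a single stress level; in the step-stress setting, the objective would be replaced by the progressive-censoring likelihood of this section. The data, starting values, and parameterization are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: numerical maximum likelihood for the APW distribution
# (complete sample, single stress level, for illustration only).

def apw_log_pdf(t, alpha, beta, lam):
    g = 1.0 - np.exp(-lam * t**beta)
    return (np.log(np.log(alpha) / (alpha - 1.0))
            + np.log(lam * beta) + (beta - 1.0) * np.log(t)
            - lam * t**beta + g * np.log(alpha))

def neg_log_lik(params, t):
    # log-parameterization keeps alpha, beta, lam positive during the search
    alpha, beta, lam = np.exp(params)
    return -np.sum(apw_log_pdf(t, alpha, beta, lam))

rng = np.random.default_rng(1)
t_obs = rng.weibull(1.5, size=100)          # hypothetical observed lifetimes

res = minimize(neg_log_lik, x0=np.log([2.0, 1.0, 1.0]),
               args=(t_obs,), method="Nelder-Mead")
alpha_hat, beta_hat, lam_hat = np.exp(res.x)
print(alpha_hat, beta_hat, lam_hat)
```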
4. Fisher Information Matrix
The Fisher information matrix is a fundamental statistical construct that quantifies how much information the data provide about an unknown quantity. It can be used to compute an estimator's variance as well as the asymptotic behavior of maximum likelihood estimators. The inverse of the Fisher information matrix is an estimator of the asymptotic covariance matrix. The Fisher information matrix is obtained by taking the expected values of the negative second-order partial and mixed partial derivatives of the log-likelihood function with respect to the model parameters. It is explained further below.
where the entries are the expectations of the negative second-order partial and mixed partial derivatives of the log-likelihood ℓ with respect to the model parameters.
Once the elements of these matrices are computed, the approximate variance–covariance matrix of the MLEs is obtained as the inverse of the Fisher information matrix.
5. Confidence Intervals
This section constructs confidence intervals (CIs) for the parameters. A confidence interval is a range of values that serves as a good approximation of an unknown population parameter, and it is naturally built around the point estimate, which is the most plausible value of the parameter. The first study to use confidence intervals in statistics was [24]. Two types of CIs are considered in the present study.
5.1. Approximate Confidence Intervals
Since the pdf of the APW distribution is not symmetric, CIs that rely on symmetry may not perform well. For the bootstrap intervals of Section 5.2, we therefore choose the parametric percentile interval over the nonparametric one, which is generally known to perform poorly; further material is provided in Section 5.3.1 of [25]. The parametric bootstrap interval can also be used with a normal approximation or Studentization; however, because such intervals are symmetric, they might not be appropriate for our asymmetric case. According to large-sample theory, the MLEs are consistent and asymptotically normally distributed, subject to certain regularity conditions. Exact CIs cannot be obtained since the MLEs are not available in closed form; instead, approximate CIs based on the asymptotic normal distribution of the MLEs are constructed, where the asymptotic variance–covariance matrix of the unknown parameters is estimated by the inverse of the observed Fisher information matrix.
As previously established, the inverse of the Fisher information matrix is an estimator of the asymptotic variance–covariance matrix. The approximate two-sided 100(1 − γ)% CI for each parameter θ_j is given by
\[
\hat{\theta}_j \pm z_{\gamma/2}\sqrt{\widehat{\operatorname{Var}}\bigl(\hat{\theta}_j\bigr)},
\]
where z_{γ/2} is the upper (γ/2)th critical value of the standard normal distribution.
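A minimal sketch of this construction follows, assuming an MLE and a negative log-likelihood are already available; a toy exponential likelihood stands in for the APW step-stress model. The observed information is approximated by a central-difference Hessian and inverted to obtain the Wald-type intervals above.

```python
import numpy as np
from scipy.stats import norm

# Sketch: Wald-type CIs from the inverse of the observed Fisher information,
# approximated numerically at the MLE.  Data and likelihood are hypothetical.

def numerical_hessian(fun, x, eps=1e-5):
    """Central-difference Hessian of a scalar function at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.zeros(n), np.zeros(n)
            ei[i], ej[j] = eps, eps
            H[i, j] = (fun(x + ei + ej) - fun(x + ei - ej)
                       - fun(x - ei + ej) + fun(x - ei - ej)) / (4.0 * eps**2)
    return H

def wald_ci(theta_hat, neg_log_lik, level=0.95):
    """Approximate two-sided CIs for each parameter."""
    H = numerical_hessian(neg_log_lik, theta_hat)   # observed information
    cov = np.linalg.inv(H)                          # asymptotic var-cov matrix
    se = np.sqrt(np.diag(cov))
    z = norm.ppf(0.5 + level / 2.0)
    return theta_hat - z * se, theta_hat + z * se

# Toy example: exponential lifetimes with rate th[0] (hypothetical data).
data = np.array([0.5, 1.2, 0.8, 2.1, 0.3, 1.7])
nll = lambda th: th[0] * data.sum() - len(data) * np.log(th[0])
mle = np.array([len(data) / data.sum()])
print(wald_ci(mle, nll))
```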
5.2. Bootstrap Confidence Intervals
This subsection focuses on the construction of CIs based on parametric bootstrap sampling with percentile intervals. Algorithm 1 below is used to compute the bootstrap CIs; for more information, see [26].
Algorithm 1. Bootstrap Algorithm
Step 0 (setup): Set b = 1 and determine the MLE of the model parameters from the original sample.
Step 1 (sampling): Generate a bootstrap resample of size n from the fitted model, using the MLE obtained in Step 0.
Step 2 (bootstrap estimates): Determine the bootstrap estimates of the parameters from the resample obtained in Step 1.
Step 3 (repetition): Set b = b + 1 and repeat Steps 1–3 until b = B.
Step 4 (sorting): Arrange the B bootstrap estimates of each parameter in ascending order.
The 100(1 − γ)% percentile bootstrap CI for each parameter is then computed as
\[
\Bigl(\hat{\theta}^{*}_{(B\gamma/2)},\;\hat{\theta}^{*}_{(B(1-\gamma/2))}\Bigr),
\]
where \(\hat{\theta}^{*}_{(k)}\) denotes the kth ordered bootstrap estimate.
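The following is a minimal sketch of Algorithm 1 with percentile intervals. A plain Weibull fit from scipy stands in for the APW step-stress MLE, since only the bootstrap structure (fit, resample from the fitted model, refit, take percentiles) is being illustrated; all data and values are hypothetical.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(42)
data = weibull_min.rvs(c=1.5, scale=2.0, size=80, random_state=rng)  # hypothetical sample

# Step 0: MLE from the original data (shape c and scale; location fixed at 0)
c_hat, _, scale_hat = weibull_min.fit(data, floc=0)

B, gamma = 1000, 0.05
boot = np.empty((B, 2))
for b in range(B):
    # Step 1: parametric resample from the fitted model
    resample = weibull_min.rvs(c=c_hat, scale=scale_hat,
                               size=len(data), random_state=rng)
    # Step 2: bootstrap estimate from the resample
    c_b, _, scale_b = weibull_min.fit(resample, floc=0)
    boot[b] = (c_b, scale_b)

# Steps 3-4: repeat, sort, and take the percentile interval
lower = np.percentile(boot, 100 * gamma / 2, axis=0)
upper = np.percentile(boot, 100 * (1 - gamma / 2), axis=0)
print("shape CI:", (lower[0], upper[0]), "scale CI:", (lower[1], upper[1]))
```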
6. Bayesian Estimation
In this section, we consider Bayesian estimators of the unknown parameters, which can be viewed as alternatives to the aforementioned MLEs. The prior specification for the unknown parameters is the starting point of a Bayesian analysis. In this study, the model parameters are assumed to be statistically independent, each following a gamma prior distribution with its own pair of hyperparameters, so that the joint prior for the APW distribution parameters is the product of the individual gamma densities. The hyperparameters reflect prior knowledge of the unknown parameters. It is worth noting that the use of independent gamma priors for the unknown parameters is not unreasonable and may result in more expressive posterior density estimates owing to the flexible forms of the gamma distribution. The resulting joint posterior distribution of the unknown parameters is proportional to the product of the likelihood and the joint prior.
This posterior is not of a recognizable standard form, which prompts us to use the Metropolis–Hastings (MH) technique to generate posterior samples from the conditional posterior distributions. The obtained samples are then used to approximate the Bayes estimates and to construct the highest posterior density (HPD) credible intervals of the parameters; for additional information on this approach, see [27,28]. We consider the symmetric (SLF) and general entropy (GE) loss functions for the Bayesian analysis in this study. Under the symmetric (squared error) loss function, the Bayes estimate of each parameter is its posterior mean.
The GE loss is an asymmetric loss function. It is a simple generalization of the entropy loss, which has been used by various authors with the shape parameter q set to 1, and is defined by
\[
L_{GE}\bigl(\hat{\theta},\theta\bigr)\propto\Bigl(\frac{\hat{\theta}}{\theta}\Bigr)^{q}-q\log\Bigl(\frac{\hat{\theta}}{\theta}\Bigr)-1,
\]
where \(\hat{\theta}\) is an estimate of θ. The Bayes estimator relative to the GE loss function is given by
\[
\hat{\theta}_{GE}=\bigl[E_{\theta}\bigl(\theta^{-q}\bigr)\bigr]^{-1/q},
\]
provided that E_θ(θ^{−q}) exists and is finite, where E_θ(·) denotes the posterior expectation. The author of [29] employed this loss function. It should be noted that any other loss function can be easily substituted in a similar manner.
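A minimal sketch of the MH sampling step and of the two Bayes estimates used here (posterior mean under the SLF, and [E(θ^{−q})]^{−1/q} under the GE loss) is given below, on a toy one-parameter model with a gamma prior that stands in for the APW posterior; the data, hyperparameters, and proposal scale are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.exponential(scale=1 / 0.8, size=50)   # hypothetical data, true rate 0.8
a0, b0 = 2.0, 2.0                                # assumed gamma prior hyperparameters

def log_post(lam):
    """Log posterior of an exponential rate with a gamma(a0, b0) prior."""
    if lam <= 0:
        return -np.inf
    log_lik = len(data) * np.log(lam) - lam * data.sum()
    log_prior = (a0 - 1) * np.log(lam) - b0 * lam
    return log_lik + log_prior

# Random-walk Metropolis-Hastings
n_iter, step = 10000, 0.15
chain = np.empty(n_iter)
lam = 1.0
for i in range(n_iter):
    prop = lam + step * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop                               # accept the proposal
    chain[i] = lam
post = chain[2000:]                              # discard burn-in

# Bayes estimate under the squared-error (SLF) loss: posterior mean
slf_est = post.mean()
# Bayes estimate under the general entropy (GE) loss with shape q
q = 1.0
ge_est = np.mean(post**(-q)) ** (-1 / q)
print(slf_est, ge_est)
```

In the APW setting, the same random-walk updates are applied parameter by parameter to the joint posterior, and the HPD intervals are read off the retained chain.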
7. Optimization Criterion
In recent years, there has been considerable interest in finding optimal censoring schemes in the statistical literature; for example, see [30,31,32,33]. For fixed n and m, the possible censoring schemes are all combinations (R_1, ..., R_m) such that R_1 + ... + R_m = n − m, and choosing the best sampling technique entails finding the progressive censoring scheme that provides the most information about the unknown parameters among all conceivable schemes. The first difficulty is, of course, how to define information measures for the unknown parameters based on specific progressively censored data, and the second is how to compare two information measures arising from two different progressive censoring schemes; see [34] for more information. The remainder of this section covers some of the optimality criteria employed in this situation. In practice, we want to select the censoring scheme that delivers the most information about the unknown parameters; see [35] for further details. Table 1 presents a number of regularly used measures that help us choose the appropriate progressive censoring scheme.
Table 1.
Some practical censoring plan optimum criteria.
Regarding the first criterion, we wish to maximize the observed Fisher information. Furthermore, for the second and third criteria, the goal is to minimize the determinant and the trace of the variance–covariance matrix, respectively. Comparing multiple criteria is simple when dealing with single-parameter distributions; however, for multiparameter distributions, comparing two Fisher information matrices becomes more difficult because the determinant- and trace-based criteria are not scale-invariant. The optimal censoring scheme for multiparameter distributions can instead be chosen using the scale-invariant fourth criterion, which depends on p and minimizes the variance of the logarithmic MLE of the pth quantile, t_p. The logarithm of the pth quantile of the APW distribution is given by
\[
\log t_{p}=\frac{1}{\beta}\left[\log\!\left(-\log\!\left(1-\frac{\log\bigl(1+p(\alpha-1)\bigr)}{\log\alpha}\right)\right)-\log\lambda\right],\qquad 0<p<1 .
\]
The delta method is then used to obtain the approximate variance of the logarithmic MLE of the pth quantile of the APW distribution, where the required gradient is taken with respect to the model parameters and evaluated at their MLEs.
The optimal progressive censoring scheme, therefore, corresponds to the maximum value of the first criterion and the smallest values of the remaining criteria.
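For reference, the delta-method step can be written compactly as the quadratic form below, where Σ̂ is the inverse observed Fisher information from Section 4 and the gradient of log t_p is taken with respect to the model parameters; this is a sketch of the criterion rather than its closed-form expansion.

```latex
% Delta-method approximation of the asymptotic variance of the log
% percentile-life estimator (Sigma-hat = inverse observed information).
\[
\operatorname{AV}\!\bigl(\log \hat{t}_{p}\bigr)
\;\approx\;
\left(\nabla_{\theta}\log t_{p}\right)^{\!\top}
\widehat{\Sigma}\,
\left(\nabla_{\theta}\log t_{p}\right)
\Big|_{\theta=\hat{\theta}} .
\]
```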
8. Simulation
For various choices of n, m, and the censoring schemes, simulation experiments were carried out to examine the performance of the likelihood and Bayesian estimators under the SLF and GE loss functions in terms of their values of bias (VB) and values of mean square error (VMSE). Based on the asymptotic distribution of the MLEs, 95% asymptotic confidence intervals are calculated. Additionally, two bootstrap confidence interval approaches for the MLEs are obtained, and 95% credible intervals are calculated using the highest posterior density (HPD) method. Since the MLE and posterior equations do not have closed-form solutions, appropriate numerical methods are used to solve these nonlinear equations: the iterative Newton–Raphson method for the MLEs and the Metropolis–Hastings algorithm for the Bayesian analysis. Two progressive censoring schemes are considered:
- Scheme I. and
- Scheme II. and
Different optimization criteria were used to select the best scheme: maximizing the principal diagonal elements of the Fisher information matrix, minimizing the determinant and the trace of the AV matrix, and minimizing the variance of the logarithmic MLE of the pth quantile.
The following algorithm is used to carry out the estimation procedure:
- Specify n, m, and the censoring scheme: n = 50 and 100; m = 40 and 45 when n = 50, and m = 80 and 90 when n = 100.
- Give the parameters and their values.
- Generate a random sample of size n from the APW distribution given by Equation (1) and sort it. An APW random variable is simple to generate by inversion: if U is a uniform random variable on [0, 1], then t = {−(1/λ) log[1 − log(1 + U(α − 1))/log α]}^{1/β} follows the APW distribution (see the code sketch after this list).
- For the given n and m, generate progressively censored data from the model provided by Equation (6); the resulting data set has the structure described in Section 2.
- To obtain the MLEs of the parameters, the nonlinear system is solved using the Newton–Raphson method.
- To obtain the Bayesian estimations of the parameters, the posterior distribution is solved using the MCMC method by Metropolis–Hastings algorithm.
- Repeat steps 3 through 6 for 1000 iterations.
- Calculate the average VB and VMSE of the MLE and Bayesian estimates of the parameters.
- Calculate the various model parameter estimators’ confidence intervals.
- Calculate the various optimization criteria.
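The inversion step referenced in the generation step above can be sketched as follows, together with a direct simulation of progressive type-II censoring in which R_i surviving units are withdrawn at random at the ith observed failure. The parameterization is the one assumed earlier; all parameter values and the removal scheme are illustrative.

```python
import numpy as np

def apw_quantile(u, alpha, beta, lam):
    """Inverse CDF of the APW distribution, for u in (0, 1) and alpha != 1."""
    inner = 1.0 - np.log(1.0 + u * (alpha - 1.0)) / np.log(alpha)
    return (-np.log(inner) / lam) ** (1.0 / beta)

def progressive_type2_sample(n, m, R, alpha, beta, lam, rng):
    """Return m progressively type-II censored APW failure times."""
    assert len(R) == m and sum(R) == n - m
    pool = list(apw_quantile(rng.uniform(size=n), alpha, beta, lam))
    obs = []
    for i in range(m):
        t = min(pool)                     # next observed failure
        obs.append(t)
        pool.remove(t)
        for _ in range(R[i]):             # withdraw R[i] survivors at random
            pool.pop(rng.integers(len(pool)))
    return np.array(obs)

rng = np.random.default_rng(0)
R = [1] * 10 + [0] * 30                   # example scheme: n = 50, m = 40
sample = progressive_type2_sample(50, 40, R, alpha=2.0, beta=1.2, lam=0.5, rng=rng)
print(sample[:5])
```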
The following observations can be drawn from the simulation results shown in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7:
Table 2.
Bias, MSE, and length of CI with Scheme I when .
Table 3.
Bias, MSE, and length of CI with Scheme II when .
Table 4.
Bias, MSE, and length of CI with Scheme I when .
Table 5.
Bias, MSE, and length of CI with Scheme II when .
Table 6.
Optimization criteria when .
Table 7.
Optimization criteria when .
- The VB, VMSE, and LCI of the estimates for the two alternative censoring schemes decrease for fixed values of the sample size n as the censored sample size m increases.
- By increasing the sample size n for fixed values of m, the VB, VMSE, and LCI for various censoring schemes drop.
- Under the cases taken into consideration, the symmetric and asymmetric Bayesian estimations are superior to the MLE in terms of VB and VMSE reduction.
- The LCI decreases dramatically, and the HPD credible intervals of the symmetric and asymmetric Bayesian estimates outperform the ACI of the MLE.
- We note that the bootstrap CIs have the smallest CI lengths.
9. Application of Real Data
In this section, we apply real data to demonstrate the value and adaptability of the APW distribution using various schemes and sampling strategies. The data set, which was used in [22], corresponds to the intervals in days between 109 successive coal-mining incidents in Great Britain. The information criteria (AIC, CAIC, BIC, and HQIC), the Cramér–von Mises (CVM), Anderson–Darling (AD), and Kolmogorov–Smirnov (KS) statistics, and the corresponding p-values for the data set are included in Table 8, along with the MLEs and the accompanying standard errors (SEs) of each model parameter. Figure 1 shows the relative histogram with the fitted APW density, the fitted APW CDF with the empirical CDF, and the Q–Q and P–P plots of the data set. The outcomes in Table 8 are also supported by these graphical goodness-of-fit checks in Figure 1.
Table 8.
MLE, SE, and different measures.
Figure 1.
APW plots with different shapes.
Using two distinct sampling schemes for the step-stress ALT (with the prefixed value 200), we generate samples based on progressive type-II censoring with m = 80 and 95 from the real data set. Table 9 reports the generated data and the corresponding censoring schemes. Using the data sets in Table 9, the MLEs and the Bayesian estimates, under various loss functions, of the unknown parameters of the APW model under step-stress ALT based on progressive type-II censoring are calculated and given in Table 10. We generate 10,000 MCMC samples using the MCMC algorithm discussed above; the initial values of the unknown parameters in the MCMC sampler were taken to be their MLEs.
Table 9.
Data based on the model when .
Table 10.
Parameter estimation of the model when .
Figure 2 shows trace plots of the posterior distributions of parameters tracking the convergence of MCMC outputs. It shows how well the MCMC process converges. Figure 3 also shows the histograms for the marginal posterior density estimates of the parameters based on 10,000 chain values and the Gaussian kernel. The estimations clearly show that all of the generated posteriors are symmetric with respect to the theoretical posterior density functions.
Figure 2.
MCMC trace plot.
Figure 3.
Histogram of MCMC results and kernel density estimates of parameters.
We estimated τ and the survival characteristics S1 and S2 for each scheme at distinct mission times; the results are computed and listed in Table 11. The estimate of τ is close to its prefixed value, and the estimated survival values increase as the censored sample size m increases. According to the τ and survival estimates, Scheme II is better than Scheme I.
Table 11.
Estimated and survival values of the model.
Additionally, the two-sided 95% HPD credible intervals, the approximate confidence intervals, and the standard errors (SEs) of the parameters, together with several MCMC characteristics, are computed and listed in Table 12. We also demonstrate the idea of optimal censoring under the four criteria listed in Table 1 using the data sets in Table 9. The first three criteria are easily obtained from the determinant and trace of the observed variance–covariance matrix of the MLEs; the results are reported in Table 13.
Table 12.
SE and confidence interval values of model for MLE and Bayesian estimation methods.
Table 13.
Optimization criteria for data modeling.
10. Conclusions
In this study, an optimal design is developed for a step-stress accelerated life test in which the lifetime of a test unit is assumed to follow an alpha power Weibull distribution. To reduce testing time and expense, we used progressive type-II censoring and accelerated life testing, and we examined the impact of the different stress levels using a cumulative exposure model. A log-linear relationship between the scale parameter of the APW distribution and the stress level was postulated. Maximum likelihood estimators of the model parameters were computed, as were approximate and bootstrap confidence intervals (CIs). Under symmetric and asymmetric loss functions, Bayesian estimates of the model parameters were obtained. Under normal operating conditions, an optimal test plan was developed by minimizing the asymptotic variance (AV) of the percentile life. A simulation study was conducted to assess the optimality of the plan. It shows that increasing the censored sample size m decreases the VB, VMSE, and LCI of the estimates for the two alternative censoring schemes for fixed values of the sample size n, and that the VB, VMSE, and LCI for the various censoring schemes decrease as the sample size n increases for fixed values of m; these results are consistent with the theoretical results. In terms of VB and VMSE reduction, the symmetric and asymmetric Bayesian estimates outperform the MLEs in the cases studied. The LCI drops considerably, and the HPD credible intervals of the symmetric and asymmetric Bayesian estimates outperform the ACI of the MLE. The bootstrap CIs have the shortest lengths. Real-world data were also analyzed to demonstrate the model's adaptability.
Author Contributions
Conceptualization, R.A., E.M.A., D.K. and H.R.; methodology, R.A., E.M.A., D.K. and H.R.; validation, R.A., E.M.A., D.K. and H.R.; formal analysis, R.A., E.M.A., D.K. and H.R.; investigation, R.A., E.M.A., D.K. and H.R.; resources, R.A., E.M.A., D.K. and H.R.; data curation, R.A., E.M.A., D.K. and H.R.; writing—original draft preparation, R.A., E.M.A., D.K. and H.R.; writing—review and editing, R.A., E.M.A., D.K. and H.R.; project administration, R.A., E.M.A., D.K. and H.R.; All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project (number PNURSP2022R50), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
We would like to thank the Editor-in-Chief, Associate Editor, and two referees for useful comments and suggestions that have significantly improved this article. We acknowledge Princess Nourah bint Abdulrahman University Researchers Supporting Project (number PNURSP2022R50), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Liu, X.; Qiu, W.S. Modeling and planning of step-stress accelerated life tests with independent competing risks. IEEE Trans. Reliab. 2011, 60, 712–720.
- Xu, A.; Basu, S.; Tang, Y. A full Bayesian approach for masked data in step-stress accelerated life testing. IEEE Trans. Reliab. 2014, 63, 798–806.
- Sedyakin, N.M. On one physical principle in reliability theory. Tech. Cybernetics 1966, 3, 80–87. (In Russian)
- Nelson, W. Accelerated life testing: Step-stress models and data analysis. IEEE Trans. Reliab. 1980, 29, 103–108.
- Bagdonavicius, V.; Nikulin, M. Accelerated Life Models: Modelling and Statistical Analysis; Chapman and Hall/CRC Press: Boca Raton, FL, USA, 2002.
- Ganguly, A.; Kundu, D.; Mitra, S. Bayesian analysis of a simple step-stress model under Weibull lifetimes. IEEE Trans. Reliab. 2015, 64, 473–485.
- Xiong, C. Inference on a simple step-stress model with Type-II censored exponential data. IEEE Trans. Reliab. 1998, 47, 142–146.
- Balakrishnan, N.; Kundu, D.; Ng, H.; Kannan, N. Point and interval estimation for a simple step-stress model with Type-II censoring. J. Qual. Technol. 2007, 39, 35–47.
- Balakrishnan, N.; Xie, Q. Exact inference for a simple step-stress model with Type-I hybrid censored data from the exponential distribution. J. Stat. Plan. Inference 2007, 137, 3268–3290.
- Balakrishnan, N.; Zhang, L.; Xie, Q. Inference for a simple step-stress model with Type-I censoring and lognormally distributed lifetimes. Commun. Stat. Theory Methods 2009, 38, 1690–1709.
- Bai, D.; Kim, M.; Lee, S. Optimum simple step-stress accelerated life test with censoring. IEEE Trans. Reliab. 1989, 38, 528–532.
- Xie, Q.; Balakrishnan, N.; Han, D. Exact inference and optimal censoring scheme for a simple step-stress model under progressive Type-II censoring. In Advances in Mathematical and Statistical Modeling; Arnold, B.C., Balakrishnan, N., Sarabia, J.M., Minguez, R., Eds.; Birkhäuser: Berlin, Germany, 2009; pp. 107–137.
- Ng, H.; Chan, P.; Balakrishnan, N. Optimal progressive censoring plan for the Weibull distribution. Technometrics 2004, 46, 470–481.
- Khamis, I.; Higgins, J. A new model for step-stress testing. IEEE Trans. Reliab. 1998, 47, 131–134.
- Kateri, M.; Balakrishnan, N. Inference for a simple step-stress model with Type-II censoring, and Weibull distributed lifetimes. IEEE Trans. Reliab. 2008, 57, 616–626.
- Miller, R.; Nelson, W.B. Optimum simple step-stress plans for accelerated life testing. IEEE Trans. Reliab. 1983, 32, 59–65.
- Lin, C.; Chou, C. Statistical inference for a lognormal step-stress model with Type-I censoring. IEEE Trans. Reliab. 2012, 61, 361–377.
- Gouno, E.; Balakrishnan, N. Step-stress accelerated life test. In Handbook of Statistics, Volume 20: Advances in Reliability; Balakrishnan, N., Rao, C., Eds.; North-Holland: Amsterdam, The Netherlands, 2001; pp. 623–639.
- Nelson, W. A bibliography of accelerated test plans, Part I: Overview. IEEE Trans. Reliab. 2005, 54, 194–197.
- Balakrishnan, N. A synthesis of exact inferential results for exponential step-stress models and associated optimal accelerated life-tests. Metrika 2009, 69, 351–396.
- Balakrishnan, N. Progressive censoring methodology: An appraisal. TEST Off. J. Span. Soc. Stat. Oper. Res. 2007, 16, 211–259.
- Efron, B.; Tibshirani, R. An Introduction to the Bootstrap; Chapman and Hall: New York, NY, USA, 1993.
- Nassar, M.; Alzaatreh, A.; Mead, M.; Abo-Kasem, O. Alpha power Weibull distribution: Properties and applications. Commun. Stat. Theory Methods 2017, 46, 10236–10252.
- Neyman, J. Outline of a theory of statistical estimation based on the classical theory of probability. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Sci. 1937, 236, 333–380.
- Alotaibi, R.; Mutairi, A.A.; Almetwally, E.M.; Park, C.; Rezk, H. Optimal design for a bivariate step-stress accelerated life test with alpha power exponential distribution based on Type-I progressive censored samples. Symmetry 2022, 14, 830.
- Gelman, A.; Carlin, J.B.; Stern, H.S.; Rubin, D.B. Bayesian Data Analysis, 2nd ed.; Chapman and Hall/CRC: Boca Raton, FL, USA, 2004.
- Lynch, S.M. Introduction to Applied Bayesian Statistics and Estimation for Social Scientists; Springer: New York, NY, USA, 2007.
- Dey, S. Bayesian estimation of the shape parameter of the generalized exponential distribution under different loss functions. Pak. J. Stat. Oper. Res. 2010, 6, 163–174.
- Balasooriya, U.; Balakrishnan, N. Reliability sampling plans for log-normal distribution, based on progressively-censored samples. IEEE Trans. Reliab. 2000, 49, 199–203.
- Balasooriya, U.; Saw, S.L.C.; Gadag, V. Progressively censored reliability sampling plans for the Weibull distribution. Technometrics 2000, 42, 160–167.
- Burkschat, M.; Cramer, E.; Kamps, U. On optimal schemes in progressive censoring. Stat. Probab. Lett. 2006, 76, 1032–1036.
- Burkschat, M.; Cramer, E.; Kamps, U. Optimality criteria and optimal schemes in progressive censoring. Commun. Stat. Theory Methods 2007, 36, 1419–1431.
- Burkschat, M. On optimality of extremal schemes in progressive Type-II censoring. J. Stat. Plan. Inference 2008, 138, 1647–1659.
- Pradhan, B.; Kundu, D. On progressively censored generalized exponential distribution. Test 2009, 18, 497–515.
- Elshahhat, A.; Rastogi, M.K. Estimation of parameters of life for an inverted Nadarajah–Haghighi distribution from Type-II progressively censored samples. J. Indian Soc. Probab. Stat. 2021, 22, 113–154.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).