Inference for the Exponential Distribution under Generalized Progressively Hybrid Censored Data from Partially Accelerated Life Tests with a Time Transformation Function

Abstract: In this article, the tampered failure rate model is used in partially accelerated life testing. A non-decreasing function of time, often called a "time transformation function", is proposed to tamper the failure rate under design conditions. Different types of the proposed function, together with sufficient conditions under which they are accelerating functions, are investigated. A baseline failure rate of the exponential distribution is considered. Some point estimation methods, as well as approximate confidence intervals, for the parameters involved are discussed based on generalized progressively hybrid censored data. The determination of the optimal stress change time is discussed under two different criteria of optimality. A real dataset is employed to illustrate the theoretical outcomes discussed in this article. Finally, a Monte Carlo simulation study is carried out to examine the performance of the estimation methods and the optimality criteria.


Introduction
In the modern era, manufactured products have attained a high quality and have become very reliable. Thus, it is hard to acquire sufficient failure information for products under normal operating conditions. Moreover, testing their lifetimes under these conditions takes a long time and is very expensive. To overcome these problems, accelerated life tests (ALTs) or partially ALTs (PALTs) are applied to obtain sufficient information about the products' failure behavior quickly. In ALTs, products are tested under higher-than-regular levels of stress to obtain early failure times. In PALTs, products are tested under both regular and accelerated conditions. The stress factors may be pressure, vibration, temperature, cycling rate, voltage, or any other factor that directly affects the lifetime of the products. The information obtained from the accelerated or partially accelerated tests is used to predict the failure behavior of the products under normal conditions. Various techniques can be used to apply stress in ALTs. Common techniques are the progressive-stress, step-stress, and constant-stress techniques. For a good survey on ALTs, see [1][2][3][4][5][6].
In a PALT experiment, a random sample of units is put through a life test at a normal stress level, and then the stress is raised after a pre-fixed time or a pre-fixed number of failed units. The experiment continues under accelerated conditions until the units fail or the experiment's time is finished. For more details, see [4] and [7][8][9][10][11][12][13].
Three major models relate the distribution under accelerated stress to that under normal stress. The models are (i) the tampered random variable (TRV) model suggested in [14], (ii) the cumulative exposure (CE) model given by [15], and (iii) the tampered failure rate (TFR) model suggested in [16].
Several authors used the TFR model in ALTs; Wang and Fei [17] studied the conditions under which the coincidence of the TRV, TFR, and CE models occurred. They also generalized the TFR model from the step-stress ALT setting to the progressive-stress ALT setting (see also [10,18-21]).
In our paper, we focus on the TFR model. As in Bhattacharyya and Soejoeti [16], the stress is raised after a change time point τ by multiplying the initial failure rate function (FRF) h_1(t) by an unknown factor λ (which may depend on τ). If h(t) is the total FRF in the PALT, then their proposed TFR model is defined by

h(t) = h_1(t) for 0 < t ≤ τ, and h(t) = λ h_1(t) for t > τ. (1)

Censoring schemes have a major role in lifetime and reliability studies. In many practical experiments that depend on the lifetimes of items, the experiment may be finished before all of the items have failed due to the experimental time considered and the associated cost. In such situations, only a part of the failure information of the items is recorded, and the data obtained are called censored data.
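As a minimal illustration (not part of the original article), the TFR model, in which the baseline FRF is multiplied by the factor λ after the change time τ, can be sketched in Python; the baseline hazard h1 and the values of β, τ, and λ below are illustrative choices:

```python
# Tampered failure rate (TFR) model: the baseline FRF is multiplied by a
# tampering factor lam (> 1) once the stress is raised at the change time tau.
def tfr_hazard(t, tau, lam, h1):
    """Total failure rate h(t) in the PALT under the TFR model."""
    return h1(t) if t <= tau else lam * h1(t)

# Illustrative exponential baseline, h1(t) = 1/beta (constant hazard).
beta, tau, lam = 2.0, 1.5, 3.0
h1 = lambda t: 1.0 / beta
print(tfr_hazard(1.0, tau, lam, h1))  # before tau: 0.5
print(tfr_hazard(2.0, tau, lam, h1))  # after tau: 3.0 * 0.5 = 1.5
```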
Type-I and type-II censoring schemes are two of the most common censoring schemes implemented in life tests. Epstein [22] suggested a mixture of the type-I and type-II censoring schemes called the hybrid censoring scheme. In many situations, it is planned in advance to remove items before failure at various stages of the experiment, but the above censoring schemes do not have the flexibility to allow items to be removed at stages other than the experiment's end point. To solve this problem, Cohen [23] introduced the type-II progressive censoring scheme as a generalization of the above censoring schemes.
The type-II progressive censoring scheme can be implemented as follows: Assume that a random sample of n items is subjected to a life test and that the number of observed failures m(< n) is predetermined, together with a pre-assigned progressive censoring scheme (R_1, R_2, ..., R_m), before the experiment begins. At the first failure time T_{1:n}, R_1 operating items are selected at random and excluded from the experiment. At the next failure time T_{2:n}, R_2 operating items are selected at random and excluded. The procedure continues in the same manner until the last failure time T_{m:n} occurs, at which point all of the remaining operating items R_m = n − m − ∑_{i=1}^{m−1} R_i are excluded. The experiment then stops at T_{m:n}.
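The removal mechanism just described can be simulated directly; the following sketch (illustrative, with hypothetical exponential lifetimes) withdraws R_i randomly chosen survivors at the i-th observed failure:

```python
import random

def progressive_type2_sample(lifetimes, R):
    """Simulate type-II progressive censoring: after the i-th observed
    failure, R[i] surviving items are withdrawn at random.
    Requires len(lifetimes) == len(R) + sum(R)."""
    alive = sorted(lifetimes)  # pending failure times of items still on test
    observed = []
    for r in R:
        observed.append(alive.pop(0))      # next failure among survivors
        for _ in range(r):                 # withdraw r random survivors
            alive.pop(random.randrange(len(alive)))
    return observed

random.seed(0)
lifetimes = [random.expovariate(1.0) for _ in range(10)]      # n = 10
observed = progressive_type2_sample(lifetimes, [2, 1, 0, 3])  # m = 4, sum(R) = 6
```

Note that n = m + ∑R_i must hold so that the last removal exhausts the remaining survivors.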
A major drawback of the type-II progressive censoring scheme is that, if the items are of high quality and reliable, the experimental time may be quite long. Kundu and Joarder [24] addressed this drawback with a modified scheme known as the type-I progressive hybrid censoring scheme, in which n, m, and (R_1, R_2, ..., R_m), as well as the experimental time η, are assigned before the experiment begins. The experiment then ends at time T* = min{T_{m:n}, η}. Except for the end time point, this scheme is the same as the type-II progressive censoring scheme.
One limitation of the type-I progressive hybrid censoring scheme is that the effective sample size is random and can be extremely small, so statistical inference techniques may be invalid or inefficient. A generalization, called the generalized type-I progressive hybrid censoring scheme, was suggested by Cho et al. [25] to avoid this defect. The scheme guarantees that at least k(< m < n) failures are observed, providing the efficiency required for statistical analysis, while also keeping the total experimental time close to the target time η when the number of failures observed up to η is very small. The experiment ends at the time T* = max{T_{k:n}, min{T_{m:n}, η}}, at which point all of the remaining operating items are excluded from the experiment.
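The stopping rule T* = max{T_{k:n}, min{T_{m:n}, η}} can be written as a one-line helper; the three calls below (with hypothetical failure times) show the three possible end points:

```python
def stopping_time(t_k, t_m, eta):
    """End time under generalized type-I progressive hybrid censoring:
    T* = max{T_k:n, min{T_m:n, eta}}."""
    return max(t_k, min(t_m, eta))

print(stopping_time(8.0, 12.0, 7.0))  # 8.0: wait past eta for the k-th failure
print(stopping_time(3.0, 6.0, 7.0))   # 6.0: the m-th failure occurs before eta
print(stopping_time(5.0, 9.0, 7.0))   # 7.0: stop at eta with at least k failures
```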
Several authors have discussed statistical inference based on the generalized type-I progressive hybrid censoring scheme. For example, Cho et al. [25,26] developed the exact likelihood inference technique for an exponential distribution and an entropy estimation technique for a Weibull distribution. Koley and Kundu [27] and Wang et al. [28] discussed the maximum likelihood and Bayes estimation techniques for exponential and Weibull competing risk models. Zhang and Shi [29] discussed the statistical prediction problem of unobserved failure times based on the simple step-stress ALT with the generalized exponential competing risk model.
What is new in the current article is the implementation of PALTs when the failure rate is accelerated under design conditions through a non-decreasing time function, which is often called a "time transformation function". Three different types of time functions are considered. The sufficient condition under which the time function can be an accelerating function is discussed in detail with each type. Based on generalized progressively hybrid censored data, some point and interval estimations for the parameters involved are discussed. The problem of determining the optimal stress change time is studied through two different criteria of optimality.
The remaining sections of the article are laid out as follows: In Section 2, the model is described with generalized type-I progressive hybrid censoring. Some estimation methods are discussed in Section 3. The optimal stress change time is determined in Section 4 based on two optimality criteria. An application to a real dataset is introduced in Section 5. Simulation studies are presented in Section 6, followed by concluding remarks in Section 7.

Model Description and Generalized Type-I Progressive Hybrid Censoring
DeGroot and Goel [14] considered a PALT in which a test unit is first run at normal conditions and, if it does not fail for a specified time τ, then it is run at accelerated conditions until failure occurs.
According to the TFR model (1), we assume that the n units to be tested are initially under normal conditions up to a fixed time τ; if they do not fail by this time, they are placed under accelerated conditions until failure occurs or the experimental time ends, whichever comes first. The acceleration occurs by multiplying the FRF of the units under normal conditions by a factor λ(> 1).
What is new here is the choice of λ as a non-negative, increasing, continuous, and differentiable function of time t, λ ≡ λ(t), t > τ. Therefore, Model (1) becomes

h(t) = h_1(t) for 0 < t ≤ τ, and h(t) = λ(t) h_1(t) for t > τ. (2)

According to Model (2), the accelerated conditions intensify over time, causing failures to occur more quickly than when λ is held constant. This also saves experimental time and cost.
In lifetime testing experiments and reliability studies, the exponential distribution (ED) is one of the most commonly used distributions due to its simplicity and easy mathematical manipulation, as it yields simple, elegant, closed-form solutions to many problems. We assume that the random variable T follows the ED with scale parameter β(> 0). Then, the probability density function (PDF), cumulative distribution function (CDF), and FRF of T are given, respectively, by

f_1(t) = (1/β) exp(−t/β), F_1(t) = 1 − exp(−t/β), h_1(t) = 1/β, t > 0. (3)

According to Model (2), the corresponding CDF of the total lifetime T in the PALT is given by

F(t) = F_1(t) for 0 < t ≤ τ, and F(t) = F_2(t) for t > τ, (4)

where F_2(t) = 1 − [1 − F_1(τ)] exp(−∫_τ^t λ(u) h_1(u) du). In the following, based on the FRF (3), the function F_2(t) defined in the CDF (4) is deduced for three different cases of λ(t) ≡ λ(t; α), where α is a positive parameter. In each case, F_2(t), τ < t < ∞, can be written in the form F_2(t) = 1 − exp(−ψ_j(t)/β), where ψ_j(t) = τ + ∫_τ^t λ(u) du, j = 1, 2, 3.

1. Case A: λ(t) is taken to be a power-type function of t − τ, with transformed time ψ_1(t). A sufficient condition for ψ_1(t) to be an accelerating function is ψ_1(t) ≥ t for all t > τ (see Mann et al. [30]).
2. Case B: If λ(t) = e^{α(t−τ)}, then ψ_2(t) = τ + (e^{α(t−τ)} − 1)/α. Since λ(t) ≥ 1 for all t > τ whenever α > 0, the sufficient condition ψ_2(t) ≥ t holds, and ψ_2 is an accelerating function for every α > 0.
3. Case C: If λ(t) = ln(e[α(t − τ) + 1]) = 1 + ln(α(t − τ) + 1), then ψ_3(t) = t + [(α(t − τ) + 1) ln(α(t − τ) + 1) − α(t − τ)]/α. Again, λ(t) ≥ 1 for all t > τ whenever α > 0, so ψ_3 is an accelerating function for every α > 0.

Hence, based on the FRF (3), the CDF (4) of the total lifetime T in the PALT for Cases A, B, and C can be rewritten in the form

F(t) = 1 − exp(−t/β) for 0 < t ≤ τ, and F(t) = 1 − exp(−ψ_j(t)/β) for t > τ, (14)

with the corresponding PDF

f(t) = (1/β) exp(−t/β) for 0 < t ≤ τ, and f(t) = (λ(t)/β) exp(−ψ_j(t)/β) for t > τ. (16)

The FRF of Model (2) is plotted in Figure 1 for the three cases of λ(t); the FRF increases over time after the change time τ and increases further as the value of the parameter α grows.
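As a numerical check, the total-lifetime CDF under Case B, λ(t) = e^{α(t−τ)}, can be sketched with the transformed time ψ2(t) = τ + (e^{α(t−τ)} − 1)/α, which follows from integrating λ over (τ, t); the parameter values below are illustrative:

```python
import math

def psi2(t, tau, alpha):
    """Transformed time under Case B, lam(t) = exp(alpha*(t - tau)):
    psi2(t) = tau + integral_tau^t lam(u) du
            = tau + (exp(alpha*(t - tau)) - 1) / alpha."""
    return tau + (math.exp(alpha * (t - tau)) - 1.0) / alpha

def cdf_palt(t, tau, alpha, beta):
    """Total-lifetime CDF under the TFR model with an ED baseline:
    F(t) = 1 - exp(-t/beta) for t <= tau, 1 - exp(-psi2(t)/beta) for t > tau."""
    x = t if t <= tau else psi2(t, tau, alpha)
    return 1.0 - math.exp(-x / beta)
```

Since ψ2(τ) = τ, the CDF is continuous at the change time, and ψ2(t) > t for t > τ whenever α > 0, so failures are indeed accelerated after τ.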

Generalized Type-I Progressive Hybrid Censoring
In a PALT, the generalized type-I progressive hybrid censoring can be described as follows:
1. Suppose that a set of n randomly chosen units is subjected to a lifetime testing experiment.
2. The units are initially tested at normal stress conditions up to a fixed time τ. If they do not fail by this time, then they are tested under accelerated conditions.
3. The associated lifetimes of the units, T_1, T_2, ..., T_n, have the same distribution as that of the CDF (14) and PDF (16).
4. The integers k and m (k < m < n), the progressive censoring scheme (R_1, R_2, ..., R_m), and the time η are fixed before the experiment begins.
5. At the first failure time T_{1:n}, R_1 operating units are selected at random and excluded from the experiment. At the next failure time T_{2:n}, R_2 operating units are selected at random and excluded, and this process continues. Finally, the experiment ends at the time T* = max{T_{k:n}, min{T_{m:n}, η}}, and all of the remaining operating units R_F are excluded from the experiment. The values of the final censoring number R_F are given in Table 1.

6. Let r (r*) be the number of units that fail under normal (accelerated) stress conditions, i.e., before (after) time τ, and let D be the number of units that fail before time η. Then, the experimental end time is T* = max{T_{k:n}, min{T_{m:n}, η}}.

One of the following six cases may be observed:
• Case 1: If the experimental time η is reached before the k-th failure time T_{k:n} occurs and D(< k) failures occur up to time η, τ < η < T_{k:n} < T_{m:n}, then no operating units are eliminated from the experiment directly following the (D + 1)-th, ..., (k − 1)-th failure times (R_{D+1} = · · · = R_{k−1} = 0), and all of the remaining operating units R*_k = n − k − ∑_{i=1}^{k−1} R_i are eliminated at the k-th failure time, so the experiment stops at T* = T_{k:n}; see Figure 2. In this case, the experiment is permitted to continue after time η to ensure that at least the k-th failure time T_{k:n} occurs. Here, r* = k − r, and the following observations are noticed: T_{1:n} < · · · < T_{r:n} ≤ τ < T_{r+1:n} < · · · < T_{D:n} ≤ η < T_{D+1:n} < · · · < T_{k:n}.
• Case 2: If the k-th failure time T_{k:n} occurs between times τ and η, τ < T_{k:n} ≤ η < T_{m:n}, and D(≥ k) failures occur up to time η, then all of the remaining operating units R*_η = n − D − ∑_{i=1}^{D} R_i are eliminated from the experiment, so the experiment stops at T* = η; see Figure 3. In this case, r* = D − r, and the following observations are noticed: T_{1:n} < · · · < T_{r:n} ≤ τ < T_{r+1:n} < · · · < T_{k:n} < · · · < T_{D:n} ≤ η.
• Case 3: If the k-th failure time T_{k:n} occurs before time τ, T_{k:n} ≤ τ < η < T_{m:n}, and D(≥ k) failures occur up to time η, then all of the remaining operating units are eliminated from the experiment, so the experiment stops at T* = η; see Figure 4. In this case, r* = D − r, and the following observations are noticed: T_{1:n} < · · · < T_{k:n} < · · · < T_{r:n} ≤ τ < T_{r+1:n} < · · · < T_{D:n} ≤ η.

• Case 4: If the k-th failure time T_{k:n} occurs after time τ and the m-th failure time T_{m:n} occurs before time η, τ < T_{k:n} < T_{m:n} ≤ η, then all of the remaining operating units R_m = n − m − ∑_{i=1}^{m−1} R_i are eliminated from the experiment, so the experiment stops at T* = T_{m:n}; see Figure 5. In this case, r* = m − r, and the following observations are noticed: T_{1:n} < · · · < T_{r:n} ≤ τ < T_{r+1:n} < · · · < T_{k:n} < · · · < T_{m:n} ≤ η.

• Case 5: If the k-th failure time T_{k:n} occurs before time τ and the m-th failure time T_{m:n} occurs between times τ and η, T_{k:n} ≤ τ < T_{m:n} ≤ η, then all of the remaining operating units R_m = n − m − ∑_{i=1}^{m−1} R_i are eliminated from the experiment, so the experiment stops at T* = T_{m:n}; see Figure 6. In this case, r* = m − r, and the following observations are noticed: T_{1:n} < · · · < T_{k:n} < · · · < T_{r:n} ≤ τ < T_{r+1:n} < · · · < T_{m:n} ≤ η.
• Case 6: If the m-th failure time T_{m:n} occurs before time τ, T_{k:n} < T_{m:n} ≤ τ < η, then all of the remaining operating units R_m = n − m − ∑_{i=1}^{m−1} R_i are eliminated from the experiment, so the experiment stops at T* = T_{m:n}; see Figure 7. In this case, r = m, r* = 0, and the following observations are noticed: T_{1:n} < · · · < T_{k:n} < · · · < T_{m:n} ≤ τ.
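The six cases above reduce to a small decision rule on T_{k:n}, T_{m:n}, τ, and η. A sketch (assuming τ < η and ordered failure times as input):

```python
def classify_case(times, tau, eta, k, m):
    """Return (case number, T*) for the six data cases of generalized
    type-I progressive hybrid censoring, assuming tau < eta.
    `times` holds the ordered failure times T_1:n < T_2:n < ... ."""
    t_k, t_m = times[k - 1], times[m - 1]
    if eta < t_k:                 # Case 1: fewer than k failures by eta
        return 1, t_k
    if t_m <= tau:                # Case 6: all m failures at normal stress
        return 6, t_m
    if t_m <= eta:                # Cases 4 and 5: stop at the m-th failure
        return (5 if t_k <= tau else 4), t_m
    return (3 if t_k <= tau else 2), eta   # Cases 2 and 3: stop at eta
```

Cases 4/5 and 2/3 differ only in whether the k-th failure occurs before or after the stress change time τ.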

Estimation Methods
This section deals with procedures for obtaining maximum likelihood estimates (MLEs) and percentile estimates (PEs) of the underlying parameters α and β based on data collected from the PALT under generalized type-I progressive hybrid censoring.

Maximum Likelihood Estimation
The likelihood function based on the generalized type-I progressively hybrid censored data from the PALT presented in Cases 1-6 of the previous section is given as follows:
1. For Cases 1-5: the likelihood is a function of the B observed ordered failure times t = (t_{1:n}, ..., t_{B:n}), where the values of the experimental end time T*, B, ξ, and the final censoring number R_F for Cases 1-5 are presented in Table 1. It can be noticed that, in Case 1, some values of R_i, i = 1, 2, ..., m, may be changed during the test rather than being fixed before the experiment begins.

2. For Case 6: the likelihood reduces to that of an ordinary type-II progressively censored sample, with t = (t_{1:n}, ..., t_{m:n}).

Generally, maximizing the natural logarithm of the likelihood function to determine the MLEs of the underlying parameters is easier than maximizing the likelihood function itself. Using the CDF (14) and PDF (16), the natural logarithm of the likelihood function takes the form (20) for Cases 1-5 and (21) for Case 6. Based on Equations (20) and (21), the MLEs (α̂, β̂) of (α, β) can be obtained as follows:
1. For Cases 1-5, the MLEs of α and β can be calculated by equating to zero the first partial derivatives of (20) with respect to α and β, which gives the likelihood equations (22) and (23). From (22), the MLE of β can be obtained as a function of α, β̂(α). By substituting β̂(α) into (23) and solving the likelihood equation ∂L/∂α = 0 with respect to α using any numerical iteration method, the MLE of α can be obtained.

2. For Case 6, no failures are detected under accelerated stress conditions, so the MLE of α does not exist, but the MLE of β can be calculated by equating to zero the first partial derivative of (21) with respect to β, which yields β̂ in closed form.

Remark 2. It can be noticed that:
1. In Cases 1, 2, and 4, if r = 0, then no failures are detected under normal stress conditions and the MLE of β does not exist.
2. In Case 3, if D = r, then no failures are detected under accelerated stress conditions and the MLE of α does not exist.
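The one-dimensional numerical step in the MLE computation (substituting β̂(α) into the second likelihood equation and solving ∂L/∂α = 0) can be carried out with any root finder. A generic bisection sketch, demonstrated on a score equation with a known root (a complete exponential sample, where the root is the sample mean):

```python
def solve_score(score, lo, hi, tol=1e-10):
    """Bisection for a one-dimensional score equation score(x) = 0,
    assuming a sign change on [lo, hi]."""
    f_lo = score(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (score(mid) > 0.0) == (f_lo > 0.0):
            lo, f_lo = mid, score(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Demonstration: for a complete exponential sample, dL/dbeta = -n/beta + S/beta^2,
# whose root is the sample mean S/n (here 7.0/4 = 1.75).
data = [1.2, 0.7, 3.1, 2.0]
n, S = len(data), sum(data)
beta_hat = solve_score(lambda b: -n / b + S / b**2, 0.01, 50.0)
```

In the PALT setting, the same solver would be applied to the profile score in α obtained after substituting β̂(α).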
Based on the common asymptotic normality theory of MLEs, the quantities (α̂ − α)/√Var(α̂) and (β̂ − β)/√Var(β̂) can be approximated by a standard normal distribution, where Var(α̂) and Var(β̂) are the variances of α̂ and β̂. These variances can be obtained from the diagonal of the inverse of the local Fisher information matrix (FIM), whose entries are the negative second partial derivatives of the natural logarithm of the likelihood function with respect to α and β, evaluated at (α̂, β̂); the caret ˆ denotes evaluation at (α̂, β̂). These second partial derivatives can be obtained without difficulty.
Suppose that ν_1 = α and ν_2 = β. Then, for i = 1, 2, a 100(1 − γ)% normal approximation confidence interval (NACI) for ν_i can be defined as ν̂_i ∓ z_{γ/2} √Var(ν̂_i), where z_{γ/2} is the upper γ/2 quantile of the standard normal distribution. Sometimes, the lower bound of the NACI may take a negative value for a positive parameter, so Meeker and Escobar [31] suggested using a log-transformation confidence interval (LTCI) instead. Based on the normal approximation of the log-transformed MLE, (ln ν̂_i − ln ν_i)/√(Var(ν̂_i)/ν̂_i²) can be approximated by a standard normal distribution. Therefore, a 100(1 − γ)% LTCI for ν_i can be defined as ν̂_i exp(∓ z_{γ/2} √Var(ν̂_i)/ν̂_i).
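The two interval types can be sketched as follows (z_{0.025} ≈ 1.96; `est` and `var` stand for ν̂_i and its estimated variance):

```python
import math

Z = 1.959963984540054  # standard normal 0.975 quantile

def naci(est, var, z=Z):
    """Normal approximation CI: est -/+ z*sqrt(var); may go negative."""
    half = z * math.sqrt(var)
    return est - half, est + half

def ltci(est, var, z=Z):
    """Log-transformed CI (Meeker and Escobar): (est/w, est*w) with
    w = exp(z*sqrt(var)/est); always positive when est > 0."""
    w = math.exp(z * math.sqrt(var) / est)
    return est / w, est * w
```

For a small positive estimate, e.g. est = 0.2 with var = 0.04, the NACI lower bound is negative while the LTCI stays positive, which is exactly the situation the log transformation addresses.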

Percentile Estimation
A percentile method for estimating unknown parameters was proposed by Kao [32]. When data are collected from a distribution with a closed-form CDF, it is natural to estimate the unknown parameters by fitting a straight line to the theoretical points obtained from the CDF and the percentile points of the sample. The empirical CDF applied in this process assigns a percentile point Ω_i, given by (33), to the i-th ordered observation, where B is defined as in Table 1. Based on generalized type-I progressive hybrid censoring under the PALT, the PEs of the parameters α and β can be obtained as follows:
1. For Cases 1-5: the PEs α̃ and β̃ of α and β can be obtained by minimizing the sum of the squared differences between the model CDF at the observed failure times and the corresponding percentile points, ∑_{i=1}^{B} [F(t_{i:n}) − Ω_i]², with respect to α and β.

2. For Case 6, the PE of α does not exist, but the PE of β can be obtained by minimizing the analogous quantity with respect to β alone, where Ω_i is given by (33).
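A sketch of the percentile method on a toy complete sample (the plotting positions Ω_i of Equation (33) are not reproduced here; the common choice i/(n + 1) is used instead, and a grid search stands in for a proper minimizer):

```python
import math

def percentile_estimate(times, cdf, grid):
    """Percentile estimation (Kao): pick the parameter value on `grid`
    minimizing the squared distance between the model CDF and the
    empirical percentile points (here Omega_i = i/(n+1))."""
    ts = sorted(times)
    n = len(ts)
    omegas = [(i + 1) / (n + 1) for i in range(n)]
    loss = lambda theta: sum((cdf(t, theta) - w) ** 2 for t, w in zip(ts, omegas))
    return min(grid, key=loss)

# Toy data, ED model with scale beta: F(t) = 1 - exp(-t/beta).
data = [0.3, 0.8, 1.4, 2.1, 3.5, 4.6]
beta_pe = percentile_estimate(data, lambda t, b: 1.0 - math.exp(-t / b),
                              [j / 10 for j in range(1, 101)])
```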

Common Optimality Criteria
The current section studies the problem of determining the optimal time τ of the increase in the stress level under generalized type-I progressively hybrid censored data, assuming the ED as a lifetime model. The A-optimality and D-optimality criteria, which allow the optimal value of τ to be determined, are considered; see, for example, [33][34][35].

1. A-optimality criterion: This criterion is concerned with maximizing the trace of the FIM, which provides an overall information measure based on the marginal information about the parameters. It is preferred when the estimates are at most moderately correlated. Therefore, the optimal value τ*_A of τ can be obtained by maximizing the trace of the FIM (29) of the MLEs α̂ and β̂, i.e., Maximize{tr(F)}.

2. D-optimality criterion: This criterion is concerned with maximizing the determinant of the FIM, which provides an overall measure of variability by taking into account the correlation between the estimates. It is preferred when the estimates are highly correlated, and it is also pertinent for constructing the joint confidence region for the parameters. Therefore, the optimal value τ*_D of τ can be obtained by maximizing the determinant of the FIM (29) (equivalently, minimizing the determinant of its inverse (28)). In both criteria, a numerical iterative method is needed in order to obtain a solution.
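Both criteria amount to maximizing a scalar functional of the 2×2 FIM over τ. A grid search sketch (the `fim` function below is a hypothetical stand-in for the matrix (29)):

```python
def optimal_tau(fim, taus):
    """Return (tau_A, tau_D): the grid points maximizing tr(F(tau)) and
    det(F(tau)) for a 2x2 Fisher information matrix F(tau)."""
    tr = lambda tau: fim(tau)[0][0] + fim(tau)[1][1]
    det = lambda tau: (fim(tau)[0][0] * fim(tau)[1][1]
                       - fim(tau)[0][1] * fim(tau)[1][0])
    return max(taus, key=tr), max(taus, key=det)

# Hypothetical FIM whose information about the first parameter peaks at tau = 1.
fim = lambda tau: [[1.0 + tau * (2.0 - tau), 0.0], [0.0, 1.0]]
tau_A, tau_D = optimal_tau(fim, [j / 100 for j in range(201)])
```

In practice, the FIM entries depend on the data through the second derivatives of the log-likelihood, so a finer iterative search around the grid maximizer would be used.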

Real Dataset
In this section, we consider a real dataset in order to illustrate the inference procedures presented in the previous sections. The real dataset was collected from an ALT experiment by Zhu [36] as follows: A sample of n = 64 light bulbs was subjected to a life test at a stress level of 2.25 V, and the sample was then subjected to accelerated conditions at a stress level of 2.44 V from the time point τ = 96 h. The stress level of 2.25 V is considered to correspond to the normal stress conditions. The failure times of m = 53 light bulbs were observed, R_m = 11 light bulbs were removed from the experiment, and the experiment was then finished. The dataset is shown in Table 2. A natural question is whether the CDF (14), with the ED as the lifetime distribution under normal stress conditions, fits the real dataset in Table 2. To answer it, the Kolmogorov-Smirnov (K-S) test statistic and its corresponding p-value were used. Table 3 shows the results; all of the p-values are greater than 0.05, so the ED fits the given real dataset well. This is also shown by drawing the empirical CDF of the real dataset together with the CDFs (14) for Cases A, B, and C; see Figure 8.
To apply the generalized type-I progressive hybrid censoring to the real dataset, we consider n = 64, m = 53, k = 30, R_1 = R_2 = · · · = R_{m−1} = 0, R_m = 11, τ = 96, and three values of η: 116 (Scheme I), 125 (Scheme II), and 140 (Scheme III); see Table 4. The MLEs, PEs, NACIs, and LTCIs of the parameters α and β were calculated, and the optimal stress change times under the two optimality criteria, τ*_D and τ*_A, were computed, as presented in Table 5.

Table 3. MLEs of the parameters (α, β), the K-S statistic, and the p-value.


Simulation Study
In the current section, we conduct a Monte Carlo simulation study to assess the efficiency and performance of the estimation methods and optimality criteria considered in the previous sections, since this is difficult to do theoretically. The following steps were used:

4. Apply the generalized type-I progressive hybrid censoring to the random sample obtained in Step 3, as shown in Section 2.1.
5. Find the value of r, the number of observations with values ≤ τ, i.e., t_{r:n} ≤ τ < t_{r+1:n}.
7. Compute the optimal values, τ*_D and τ*_A, of the stress change time, as shown in Section 4.
8. Calculate the mean of the estimates, the mean squared error (MSE), and the relative absolute bias (RAB) of ν̂ over the M samples, where ν̂ is an estimate of ν.
10. Using Step 9, calculate the mean of the estimates of the parameters (α, β) together with their MSEs and RABs.
11. Calculate the average of the RABs (ARAB) and the average of the MSEs (AMSE).
12. Evaluate the 95% NACIs and LTCIs of the parameters (α, β), and then evaluate their average interval lengths (AILs) and coverage probabilities (COVPs), as well as the average of the AILs (AAIL).
Three censoring schemes (CSs) are used in the generation of the samples.
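The Monte Carlo summaries of Steps 8-11 can be sketched as follows (the estimates below are illustrative):

```python
def mse_rab(estimates, true_value):
    """Monte Carlo summaries over M samples: mean of the estimates,
    mean squared error, and relative absolute bias."""
    M = len(estimates)
    mean = sum(estimates) / M
    mse = sum((e - true_value) ** 2 for e in estimates) / M
    rab = abs(mean - true_value) / true_value
    return mean, mse, rab

mean, mse, rab = mse_rab([1.0, 1.2, 0.8], 1.0)  # illustrative estimates of nu = 1
```

The ARAB and AMSE of Step 11 are then simply the averages of the RABs and MSEs over the parameters.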

Simulation Results
Based on the results of the calculations recorded in Tables 6-11, the following points can be observed:
1. The MLEs are better than the PEs in terms of the ARABs and AMSEs.

2. The NACIs are better than the LTCIs in terms of the AAILs.
3. The optimal values τ*_D and τ*_A are close to each other for Cases A and B. However, for Case C, the optimal value τ*_D is less than τ*_A and approaches the proposed value (τ = 0.2).
7. By increasing n, m, or η, the COVPs approach 95% for Cases A and B. However, for Case C, the COVPs are higher than 95% because the MSEs of the estimates are greater than those for Cases A and B, which leads to wider CIs.
8. For fixed values of n and η, increasing m increases the optimal values τ*_D and τ*_A.
9. For fixed m and η, increasing n decreases the optimal values τ*_D and τ*_A.
10. For fixed n and m, increasing η increases the optimal values τ*_D and τ*_A.
Except in rare situations, which might be due to fluctuations in the data, the above findings hold. Based on these observations, we recommend using the accelerating function λ(t) of the power series (Case A) or exponential (Case B) type.

Table 11. AILs and COVPs (in %) of the 95% CIs of α and β based on 5000 simulations. The population parameter values are α = 1.0 and β = 0.55 for Case C.