Study on Lindley Distribution Accelerated Life Tests: Application and Numerical Simulation

Abstract: Saving money and time is very important in any research project, so we must find a way to decrease the duration of the experiment. One such method is the accelerated life test (ALT) under censored samples, a very efficient way to reduce testing time and hence the cost of the experiment. This research project presents inference on the Lindley distribution under a simple step-stress ALT for Type II progressive censored samples. The paper contains two major parts: a simulation study and a real-data application to an industrial experiment on lamps; both are used to draw conclusions about the distribution. The simulation was carried out in the Mathematica 11 program. To use real data in the censored sample, we verified their compatibility with the Lindley distribution using the modified Kolmogorov–Smirnov (KS) goodness-of-fit test for progressive Type II censored data. We used the tampered random variable (TRV) acceleration model to generate early failures of items under stress. We also obtained the values of the distribution parameter and the accelerating factor via maximum likelihood estimates (MLEs) and Bayes estimates (BEs) under a symmetric loss function for both simulated and real data. Next, we estimated the upper and lower bounds of the parameters using three methods, namely approximate confidence intervals (CIs), bootstrap CIs, and credible CIs, for both parameters of the distribution, ψ and ζ. Finally, we found the value of the parameter for the real data set under normal-use and stress conditions and graphed the reliability functions under normal and accelerated use.


Introduction
Due to the use of high technology in manufacturing, the reliability of products has become very high; thus, it is very hard nowadays to observe a sufficient number of failure times in classical life-testing and reliability experiments, since most products have lifetimes that may reach thousands of hours. Our aim is to find a suitable way to produce early failures, because under normal-use conditions a lifetime experiment with limited testing time may yield very few failures, which is not sufficient to study the data and build a good model for them. Thus, we use the best-established way of decreasing the lifetime of products: accelerated life tests (ALT). In ALT, units or products are exposed to harsh conditions and high stress levels (humidity, temperature, pressure, voltage, etc.) according to the conditions of manufacturing. There are many methods and models of ALT; products are exposed to stress according to the purpose of the experiment and the type of product. Scientists apply different types and methods of acceleration, e.g., constant ALT, step-stress ALT, and progressive-stress ALT. Nelson [1] discussed these different types of ALT. The main purpose of using ALT is to drive items to failure quickly, because some items may have a long lifetime under normal operating conditions. Therefore, in ALT, we put products under higher stress than the usual conditions, e.g., by increasing operating temperature, pressure, or voltage, according to the physical use of the product, to generate failures more quickly.
There are several different forms of applying stress to products, e.g. step-stress (see, e.g., [1]). Two other common types are constant stress, where the test is conducted under a constant degree of stress for the entire experiment, and progressive stress, where all test units are subjected to stress as a function of time, and the stress increases on the experimented items as the time of the experiment increases (see El-Din et al. [2,3] for more details about acceleration and its models).
In step-stress ALT, we first apply a certain level of stress to the items under test for a certain time τ. After this time, the stress load is increased by a fixed amount for a certain period, and so on, until all units have failed or the experiment ends. One of the most common forms of simple step-stress ALT has two levels (see, e.g., El-Din et al. [2-4]). Two types of censoring can be applied to the units: Type I censoring and Type II censoring. Type II censoring makes efficient use of time because it presets the number of failures; this means the experiment does not end until the required number of failures has been reached. Type II censoring can be explained as follows. Suppose we have n independent and identical products under lifetime observation; the experiment ends when the m-th failure occurs, where m is preset before the experiment begins, with a fixed censoring scheme denoted R_1, R_2, ..., R_m. For more extensive reading on this subject, see the works of Balakrishnan [5], Fathi [6], and Abd El-Raheem [7]. This paper aims to provide a full study of the Lindley distribution under ALT using progressive Type II censored samples, together with an experimental application showing the importance of this distribution in fitting real data in many fields. We refer to different recent studies for the different kinds of ALT; for more on constant, step, and progressive ALT, see the works of El-Din et al. [8-11].
This paper is organized as follows. In Section 2, a brief literature review about the Lindley distribution and its applications in many fields of life, as well as the assumptions of the acceleration model used in this study, is presented. In Section 3, the maximum likelihood estimation (MLEs) of the parameters are obtained. We present another updated type of estimation, the Bayes estimation (BEs), using a symmetric loss function for model parameters in Section 4. We introduce three different types of intervals, namely asymptotic, bootstrap, and credible confidence intervals (CIs), for the parameters of the model in Section 5. In Section 6, a real data example for reliability engineering data is fitted and studied to apply the proposed methods (for more reading about reliability engineering modelling and applications, see [12][13][14]). Section 6 also includes the graphs of the reliability function and some elaboration about these graphs. Simulation study and some results and observations are presented in Section 7. Finally, the major findings are concluded in Section 8.

Lindley Distribution and Its Importance
This section highlights the importance of the Lindley distribution in fields such as business, pharmacy, and biology. For example, Gomez et al. [15] applied the Lindley distribution to the reliability of strength systems. Ghitany et al. [16] created a new bounded-domain probability density function based on the generalized Lindley distribution.
The novelty of this paper is that no one has used the tampered random variable (TRV) ALT model under progressive Type II censoring for the Lindley distribution while also applying a real-data experiment to a censored sample, not a complete sample, using the modified Kolmogorov-Smirnov (KS) test; we compared the fit with the two-parameter Weibull distribution and showed that the Lindley distribution fits the real-data experiment better. This inspires us to implement the SSALT model and estimate the parameters involved under the Lindley distribution for both simulated data and a real-data application. The first to introduce the Lindley distribution was Lindley [17], who mixed an exponential distribution with parameter ψ and a Gamma(2, ψ) distribution. In 2012, Bakouch et al. [18] introduced an extended Lindley distribution that now has many applications in finance and economics. Ghitany et al. [19] proved that the Lindley distribution is a weighted mixture of the gamma and exponential distributions; therefore, in many cases the Lindley distribution is more flexible than these two distributions. He studied its properties and showed through a numerical study that the Lindley distribution fits lifetime data better than the exponential distribution. One major advantage of the Lindley distribution over many distributions, such as the exponential, is its increasing hazard rate. Gomez et al. [15] introduced an improvement on the Lindley distribution, named the log-Lindley distribution, which was used as a replacement for the beta regression model. The probability density function (PDF) of the Lindley distribution can be written as f(z; ψ) = (ψ²/(1 + ψ))(1 + z)e^(-ψz), z > 0, ψ > 0 (Equation (1)). Its cumulative distribution function (CDF) is F(z; ψ) = 1 - [1 + ψz/(1 + ψ)]e^(-ψz) (Equation (2)). Graphing the Lindley PDF and CDF shows that both have asymmetric shapes.
The Lindley distribution's failure-rate function, also called the hazard rate function (HRF), is given by h(z; ψ) = ψ²(1 + z)/(1 + ψ + ψz) (Equation (3)). For more details about real-data applications using the Lindley distribution, see [17].
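The Lindley PDF, CDF, and HRF just described can be made concrete with a minimal sketch. Note that this is an illustrative Python snippet with function names of our own choosing; the paper's own computations were done in Mathematica 11.

```python
import math

def lindley_pdf(z, psi):
    """Lindley PDF: f(z) = psi^2/(1+psi) * (1+z) * exp(-psi*z), z > 0, psi > 0."""
    return psi**2 / (1.0 + psi) * (1.0 + z) * math.exp(-psi * z)

def lindley_cdf(z, psi):
    """Lindley CDF: F(z) = 1 - (1 + psi*z/(1+psi)) * exp(-psi*z)."""
    return 1.0 - (1.0 + psi * z / (1.0 + psi)) * math.exp(-psi * z)

def lindley_hrf(z, psi):
    """Hazard rate h(z) = f(z)/(1 - F(z)) = psi^2 (1+z) / (1 + psi + psi*z)."""
    return psi**2 * (1.0 + z) / (1.0 + psi + psi * z)
```

A quick check confirms that the hazard rate is increasing in z, the property cited above as an advantage over the exponential distribution.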
Assumptions and the Test Procedure and the Steps Used

1. Assume that n identical and independent products that follow the Lindley distribution are subjected to a lifetime experiment.
2. The examination of the products ends as soon as the m-th failure happens, with m ≤ n.
3. All units start running under normal-use conditions; after a prefixed time η, the stress is increased by a certain value.
4. From physical experiments on products, engineers have stated that an inverse power law (IPL) controls the connection between the stress S on the products and the scale parameter σ. In the log-linear form used later for the real data, the scale parameter satisfies ln σ = a + b ln S, where S denotes the voltage and a, b are the model parameters.
5. Progressive Type II censoring, as discussed above, is applied to the units of this experiment.
6. After running the test on the products, n_1 units fail before the stress change at time η, and n_2 is the total number of items that fail after the stress is applied.
7. We use the tampered random variable (TRV) model provided by [20]. Under a step-stress partially accelerated life test (SSPALT), the total lifetime of a unit can be written as T = Z if Z ≤ η, and T = η + (Z − η)/ζ if Z > η, where Z refers to the lifetime of the product under use conditions, η is the time at which we change the stress, and ζ > 1 is the factor we use to accelerate the failure time.
8. Accordingly, the PDF of the total lifetime is divided into two branches: f(t) = f_1(t; ψ) for 0 < t ≤ η, and f(t) = ζ f_1(η + ζ(t − η); ψ) for t > η.
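The TRV transformation and the resulting two-branch PDF of the total lifetime can be sketched as follows. This is an illustrative Python snippet (hypothetical function names; `pdf` is any baseline lifetime density), not the paper's code:

```python
import math

def trv_lifetime(z, eta, zeta):
    """TRV model: total lifetime under SSPALT.
    T = Z                     if Z <= eta (failure before the stress change),
    T = eta + (Z - eta)/zeta  if Z > eta  (remaining life shrunk by zeta > 1)."""
    return z if z <= eta else eta + (z - eta) / zeta

def trv_pdf(t, eta, zeta, pdf):
    """Density of T: f(t) for t <= eta; zeta * f(eta + zeta*(t - eta)) for t > eta."""
    if t <= eta:
        return pdf(t)
    return zeta * pdf(eta + zeta * (t - eta))
```

For example, a unit with baseline lifetime Z = 14 under a stress change at η = 10 with ζ = 2 fails at the observed time 10 + (14 − 10)/2 = 12.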

Estimation Using the Maximum Likelihood Function
Maximum likelihood estimation (MLE) is a statistical method for estimating the parameters of a probability distribution by maximizing the likelihood function, so that the observed data are most probable under the assumed statistical model. The maximum likelihood estimate gives a point estimate of the distribution parameter, namely the value that maximizes the likelihood function; one advantage of this method is that it is versatile and provides good estimates of the distribution parameters.
The method takes the first derivatives of the log-likelihood function with respect to the distribution parameters, solves the resulting equations simultaneously, and finds the estimates that maximize the log-likelihood function; these values are the estimates of the distribution parameters.
In this section, we use this method to estimate the parameters of the distribution and the accelerating factor. By substituting Equation (4) into Equation (5), we get the likelihood function under the progressive Type II censored sample. In the next subsection, we introduce the procedure of this method.

Point Estimation
Let z_i = z^R_{i:m:n} be the failure times of the products under SSPALT, with censoring scheme R = (R_1, ..., R_m); then the likelihood function is given in Equation (6) (see [4] for more details). By taking the logarithm of both sides of Equation (6), we get the log-likelihood function in Equation (7). Taking the first derivatives with respect to the distribution parameters ψ and ζ yields Equations (9) and (10). These two equations are very hard to solve analytically, so we solve them numerically using Mathematica 11 software and thereby find the estimates of the two parameters ψ and ζ.
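To illustrate the numerical solution, the sketch below maximizes a simplified complete-sample Lindley log-likelihood, standing in for the full SSPALT progressive-censoring log-likelihood of Equation (7), by golden-section search; the paper itself solves Equations (9)-(10) in Mathematica 11, and the function names here are ours.

```python
import math

def lindley_loglik(psi, data):
    """Complete-sample Lindley log-likelihood (illustrative stand-in for Eq. (7)):
    2n log(psi) - n log(1+psi) + sum log(1+z_i) - psi * sum z_i."""
    n = len(data)
    return (2 * n * math.log(psi) - n * math.log(1 + psi)
            + sum(math.log(1 + x) for x in data) - psi * sum(data))

def mle_psi(data, lo=1e-6, hi=50.0, iters=200):
    """Golden-section search maximizing the (concave) log-likelihood numerically,
    mirroring the numerical solution of the score equations."""
    gr = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - gr * (b - a), a + gr * (b - a)
        if lindley_loglik(c, data) > lindley_loglik(d, data):
            b = d
        else:
            a = c
    return (a + b) / 2
```

In the complete-sample case the score equation reduces to a quadratic with the known root ψ̂ = [−(z̄ − 1) + √((z̄ − 1)² + 8z̄)] / (2z̄), which the numerical search reproduces.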

Bayes Estimation
Bayesian estimation is a modern, efficient approach to parameter estimation compared with maximum likelihood estimation, as it takes into account both prior information and sample information when estimating the unknown parameters of interest. Bayesian estimation can be performed using symmetric or asymmetric loss functions, according to the needs of the experiment, and sometimes symmetric loss functions are preferable to asymmetric ones. There are different types of asymmetric loss functions, two of which are the linear exponential loss function and the general entropy loss function.
In this section, we propose Bayes estimators of the unknown parameters of the Lindley distribution using a symmetric loss function. In Bayesian estimation we must assign a prior distribution, and in order to choose a prior density that reflects our beliefs about the data, we must choose appropriate values for the hyper-parameters. Based on the Type II progressive censored sample, we use the squared error (SE) loss function to obtain the estimates of the model parameters ψ and ζ. Since ψ and ζ are independent, we choose gamma priors for the two parameters, because the gamma prior is versatile in adapting to different density shapes; it also provides conjugacy and mathematical convenience and has closed-form expressions for its moments. The gamma prior is very common and has been used by many authors. For more information about BEs, see Nassar and Eissa [21] and Singh et al. [22]. If we have no prior beliefs about the data, we must use non-informative priors; we do this by letting µ_i tend to zero and λ_i tend to infinity, i = 1, 2. In this way, informative priors become non-informative priors; see Singh et al. [22]. The joint prior PDF of ψ and ζ is given in Equation (11). Multiplying Equation (11) by Equation (5), we get the posterior function in Equation (12); simplifying Equation (12) gives Equation (13).
The posterior density function in Equation (14) for the two parameters ψ and ζ is formed by multiplying Equation (6) by Equation (11) and simplifying. Under the SE loss function, the Bayes estimator of B = B(Θ), Θ = (ψ, ζ), is given in Equation (15) (for more details, see Ahmadi et al. [23]), where π*(Θ) is given by Equation (14).
In fact, the integrals in Equation (15) cannot be evaluated in closed form. Thus, we used the Markov chain Monte Carlo (MCMC) technique to approximate them, employing the Metropolis-Hastings algorithm as an instance of MCMC to find the estimates.

Using MCMC Method in Bayesian Estimation
In this section, we use the MCMC method because the posterior density function does not have a well-known distributional form, and we then calculate the BEs of ψ and ζ. From Equation (14), the conditional posterior distribution functions for ψ and ζ are given in Equations (16) and (17), respectively. Since these conditional posteriors do not have closed forms corresponding to any known distribution, we used the Metropolis-Hastings algorithm (for more information, see Upadhyay and Gupta [24]). The algorithm below explains the steps required to compute the Bayes estimators of B = B(ψ, ζ) under the SE loss function.

The Metropolis-Hasting Algorithm
The Metropolis-Hastings algorithm, sometimes called the random-walk algorithm, is a Markov chain Monte Carlo (MCMC) method for generating draws from a target distribution. A normal proposal distribution is commonly used because its symmetry simplifies the acceptance ratio. The generated samples can be used to approximate the posterior distribution or to compute an integral (e.g., an expected value). We use Algorithm 1 because obtaining samples directly is difficult here, since our posterior does not belong to a known distribution family. The algorithm can be used with one-dimensional and two-dimensional data. For more reading, see [24-31].

4. Repeat step (3) for N = 10,000 iterations, obtaining the estimates at each iteration.
5. To find the values of the parameters, evaluate the approximate posterior means of B by averaging the draws after discarding the first M = N/5 iterations (the burn-in period).
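The steps above can be sketched as a generic random-walk Metropolis-Hastings routine. This illustrative Python version (function names are ours) accepts any log-target density and applies the same N = 10,000 iterations with an M = N/5 burn-in:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_iter=10_000, step=1.0, seed=1):
    """Random-walk Metropolis-Hastings with a symmetric normal proposal.
    Returns the chain mean after discarding a burn-in of M = n_iter // 5 draws,
    as in step 5 above."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    chain = []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, step)           # symmetric proposal
        lq = log_target(y)
        if math.log(rng.random()) < lq - lp:   # accept with prob min(1, ratio)
            x, lp = y, lq
        chain.append(x)
    burn = n_iter // 5
    kept = chain[burn:]
    return sum(kept) / len(kept)
```

Because the normal proposal is symmetric, the Hastings correction cancels and the acceptance probability reduces to the ratio of target densities, which is the simplification alluded to above.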

Interval Estimation
In this section, we estimate the upper and lower bounds of the parameters ψ and ζ using three methods: approximate CIs, bootstrap CIs, and credible CIs.

Finding Confidence Intervals for the Parameters
In statistics, a confidence interval (CI) is a type of interval estimate computed from the observed data. It identifies a range for the unknown parameter and changes according to the confidence level chosen by the investigator; a confidence interval for an unknown parameter is based on the sampling distribution of a corresponding estimator.
In this part of the paper, we find the upper and lower bounds for Θ = (ζ, ψ) using the MLEs of the two parameters, where the asymptotic distribution of the MLEs is given by [32], with I^(-1) the variance-covariance matrix of the unknown parameters (ψ, ζ).
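As a one-parameter illustration of this construction, the sketch below builds an approximate 95% CI for ψ from the observed Fisher information of the complete-sample Lindley log-likelihood; the full analysis uses the 2 × 2 matrix I^(-1) for (ψ, ζ), and the function name here is ours.

```python
import math

def approx_ci(psi_hat, n, alpha=0.05):
    """Approximate CI for psi from the observed Fisher information of the
    complete-sample Lindley log-likelihood (illustrative one-parameter case).
    I(psi) = 2n/psi^2 - n/(1+psi)^2;  CI = psi_hat +/- z * I(psi)^{-1/2}."""
    info = 2 * n / psi_hat**2 - n / (1 + psi_hat)**2
    z = 1.959963984540054  # Phi^{-1}(0.975) for a 95% interval
    half = z / math.sqrt(info)
    return psi_hat - half, psi_hat + half
```

As expected, the interval shrinks at the rate 1/sqrt(n) as the sample size grows.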

Bootstrap Confidence Intervals
Bootstrapping assigns measures of accuracy (bias, variance, confidence intervals, prediction error, etc.) to sample estimates. This technique allows an estimation of the sampling distribution of almost any statistic using random sampling methods. In this part of the paper, we find the values of bootstrap CIs for ψ, ζ (see Efron and Tibshirani [33] for more information). Algorithm 2 below specifies the steps for obtaining lower and upper bounds using bootstrap CIs.

1. Compute the MLEs of ψ and ζ, denoted ψ̂_ML and ζ̂_ML.
2. Use ψ̂_ML and ζ̂_ML to generate a bootstrap random sample under the same censoring scheme; call it f*.
3. Use f* to solve the likelihood equations numerically with the Mathematica 11 program and find the estimates corresponding to the bootstrap sample; call them ψ̂* and ζ̂*.
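Repeating these steps many times and taking empirical quantiles gives the percentile bootstrap interval. The generic sketch below illustrates the idea; the `estimator` and `sampler` arguments are placeholders standing in for the MLE fit and the censored-sample generator of the paper.

```python
import random

def bootstrap_ci(data, estimator, sampler, n_boot=1000, alpha=0.05, seed=2):
    """Percentile bootstrap CI: repeat steps 1-3 n_boot times, refitting the
    estimator on samples drawn from the fitted model, then take the
    alpha/2 and 1 - alpha/2 empirical quantiles."""
    rng = random.Random(seed)
    est = estimator(data)
    boots = sorted(estimator(sampler(est, len(data), rng)) for _ in range(n_boot))
    k = int(alpha / 2 * n_boot)       # draws trimmed from each tail
    return boots[k], boots[n_boot - 1 - k]
```

For example, with the sample mean as estimator and an exponential model as sampler, the resulting interval straddles the fitted value.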

Credible Confidence Intervals
In Bayesian statistics, a credible interval is an interval within which the parameter value falls with a given probability; it is an interval in the domain of the posterior probability distribution. Credible intervals are the Bayesian analogue of approximate confidence intervals, but Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value. In addition, Bayesian credible intervals use situation-specific prior knowledge, while frequentist confidence intervals do not. Algorithm 3 below was used to obtain the credible CIs of ψ and ζ.

4. Repeat step (3) for N = 10,000 iterations, obtaining the estimates at each iteration.
5. To find the values of the parameters, evaluate the approximate posterior means of B by averaging the draws after discarding the first M = N/5 iterations (the burn-in period).
6. Collect the N estimates produced by the MCMC algorithm.
7. Arrange the N estimates generated in the iterations of the MCMC algorithm in ascending order.
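The sorted estimates of step 7 yield the equal-tailed credible interval directly; a minimal sketch (illustrative function name):

```python
def credible_interval(samples, alpha=0.05):
    """Equal-tailed credible CI: sort the N MCMC estimates (step 7) and take
    the alpha/2 and 1 - alpha/2 empirical quantiles."""
    s = sorted(samples)
    k = int(alpha / 2 * len(s))   # number of draws trimmed from each tail
    return s[k], s[len(s) - 1 - k]
```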

Application on Real Data Set for Lindley Distribution
Here, we used a real data set as a real-life example of the step-stress model, fitted the data, and then made statistical inference on them to assess the performance of the Lindley distribution.

Example
The data set used in this application was collected from Chapter 5 of Zhu [26]. The data represent an experiment on light bulbs with a normal-use stress of 2 V. A sample of n = 64 light bulbs was lit at 2.25 V for a period of 96 h before the voltage was increased to 2.44 V; i.e., the stress-change time was η = 96 h. During this lifetime experiment, 11 bulbs were removed while still working and functioning, before reaching their failure point. Consequently, we observed only m = 53 failures, of which n_1 = 34 occurred at the first stress voltage S_1 = 2.25 V, with progressive censoring scheme R_i, i = 1, 2, ..., 53. From practical experiments, the best model to represent the relationship between acceleration and voltage is the inverse power model, which can be expressed in log-linear form as ln θ = a + b ln S. We used the modified Kolmogorov-Smirnov goodness-of-fit test for progressive Type II censored data, suggested by Pakyari and Balakrishnan [34], to assess the goodness of fit of the data in the experiment.

Comparison with Competitive Distribution
The importance of the Lindley distribution in this paper is that it fits the data better than its traditional competitor, the two-parameter Weibull distribution. We checked the p-values of both distributions when fitting the real data: the Lindley distribution has p-values greater than 0.05 at both stress levels of the experiment, whereas the Weibull distribution fits the real data set poorly because its p-value at the first acceleration level is less than 0.05, even though it fits well at the second level. We therefore conclude that the Lindley distribution fits both levels of the experiment better than the Weibull distribution and can be used in its place, which gives it an advantage over the two-parameter Weibull distribution in fitting this kind of experiment. For more information about reliability engineering data, see references [12-14].
The following table contains the test statistic values and p-values at each stress level for the Lindley distribution and the two-parameter Weibull distribution.

Important Results Conducted from Real Data
The following points briefly summarize the work done in this real-data example, according to the results in Tables 1-3.

1. In the experiment with real data, we used the modified K-S method to ensure that the data fit our distribution well.
2. According to the p-values in Table 1, our distribution fits the failure times of the experiment well. We then estimated the parameters from the real data and constructed the CIs.
3. Using the estimated parameters and the acceleration-model estimates â and b̂, we obtained the scale parameter under normal use from Equation (27): θ̂_0 = e^(â + b̂ ln(S_0)) = 0.0000214702.
4. With the parameter estimated under normal use, we computed the following quantities.
5. The mean time to failure (MTTF) under normal conditions.
6. The failure rate (hazard rate function) and the reliability function under normal conditions. Graphing the reliability function shows the following: under normal use, the reliability equals one at time zero (Figure 1); under stress conditions, the reliability decreases as time increases (Figure 2); and as the stress increases further, it approaches zero (Figure 3). Table 2 shows the test statistic values and corresponding p-values at each stress level for the Weibull distribution: the Weibull distribution fits the real data set poorly because the p-value at the first acceleration level is less than 0.05, although it fits well at the second level, whereas the Lindley distribution fits both levels of the experiment, so a single Lindley distribution can model the whole experiment with both acceleration levels and can be used instead of the Weibull distribution, according to the results in Table 2.

Simulation Studies
This part of the paper contains the simulation of the data used to estimate the parameters by both MLEs and BEs (under the squared error loss (SEL) function); decisions on the parameters are based on the mean squared errors (MSEs) reported in the tables below. In this simulation, we used different values of n, m, and R_i, i = 1, 2, ..., m. Tables 4 and 5 give the results deduced from the simulation. The censoring scheme (CS) vector used in the simulation is defined below. Scheme 1: To make the simulation complete, we used the following algorithm to clarify the steps used throughout.

Important Results Conducted from Simulated Data
The following points briefly summarize the results observed from simulation Algorithm 4, according to the results in Tables 4 and 5:
1. As the sample size increased, the MSEs of the BEs and MLEs of ψ and ζ decreased; occasional exceptions are due to small disturbances in the data generation.
2. The MSEs of the BEs of ψ and ζ are smaller than those of the MLEs, which is reasonable because the Bayes estimates incorporate prior information and are therefore more accurate here.
3. When the sample size increases, the lengths of the approximate, bootstrap, and credible CIs decrease, except in a few iterations, owing to the randomization of the data generation in the Mathematica package.
4. The credible CIs of ψ and ζ are the shortest intervals and have the highest coverage probability among all intervals.
5. The bootstrap CIs are shorter than the approximate CIs in most cases.

Algorithm 4: The complete algorithm for all simulations in the paper.

2. For a sample of size m, generate a random sample (v_1, v_2, ..., v_m) from the uniform (0, 1) distribution using Mathematica 11.
3. Assign values for the censored items R_i according to the CS above.
6. Find the number of units n_1 such that v*_{n_1:m:n} < F_1(η) ≤ v*_{n_1+1:m:n}.
7. Obtain the order observations c_{1:m:n}, c_{2:m:n}, ..., c_{n_1:m:n}, c_{n_1+1:m:n}, ..., c_{m:m:n}, computed from the inverse CDF of the Lindley distribution.
8. Use the generated data set to find the MLEs by solving (9) and (10).
9. Find the BEs of the model parameters under the SE loss function using Algorithm 1, with N = 10,000 MCMC iterations and M = 1000 burn-in iterations removed from the calculations.
10. Compute the upper and lower 95% approximate confidence bounds for the parameters ψ and ζ.
11. Find the upper and lower limits of the 95% bootstrap CI, and use the estimates generated by the MCMC algorithm to find the credible confidence intervals, using Algorithms 2 and 3, respectively.
12. Repeat steps (2)-(11) for 1000 iterations to make sure the results are unaffected by the random generation of the data.
13. Find the average MSEs of ψ and ζ for the two estimation methods.
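The uniform draws of step 2 can be turned into progressively Type-II censored order statistics with the standard Balakrishnan-Sandhu transformation, which step 7 then inverts through the Lindley CDF. Below is an illustrative Python sketch of that transformation (our function name; the paper used Mathematica 11):

```python
import random

def progressive_censored_uniform(R, seed=3):
    """Balakrishnan-Sandhu algorithm: generate progressively Type-II censored
    order statistics U_{1:m:n} < ... < U_{m:m:n} from the uniform (0, 1)
    distribution for a censoring scheme R = (R_1, ..., R_m)."""
    rng = random.Random(seed)
    m = len(R)
    w = [rng.random() for _ in range(m)]
    # V_i = W_i^{1 / (i + R_m + R_{m-1} + ... + R_{m-i+1})}
    v = []
    for i in range(1, m + 1):
        expo = i + sum(R[m - i:])
        v.append(w[i - 1] ** (1.0 / expo))
    # U_{i:m:n} = 1 - V_m * V_{m-1} * ... * V_{m-i+1}
    u = []
    for i in range(1, m + 1):
        prod = 1.0
        for j in range(m - i, m):
            prod *= v[j]
        u.append(1.0 - prod)
    return u
```

The output is an increasing sequence in (0, 1); applying the inverse Lindley CDF to each value yields the censored failure times of step 7.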

Conclusions on Real Data and Simulation Results
In this paper, we made statistical inference for step-stress accelerated life tests under progressive Type II censoring when the lifetimes follow the Lindley distribution. First, we used simulation studies to estimate the model parameters by the classical method, the MLEs, and by the Metropolis-Hastings algorithm to obtain the BEs. We conclude that the Bayesian method outperformed the classical method, since it produced smaller MSEs. CIs, including approximate CIs, bootstrap CIs, and credible CIs, were estimated for the model parameters, and we conclude that the credible interval was the best, having the shortest interval length and the highest coverage probability. All calculations were carried out for different sample sizes using censoring Scheme 1. In Section 6, we introduced a real-data application to test whether the data fit the Lindley distribution. This application consisted of two levels of acceleration, the first complete and the second censored and exposed to higher stress than the first. We fitted the data with both the Lindley distribution and the two-parameter Weibull distribution. We found that the data fit the Lindley distribution well at both levels according to the p-values, but fit the Weibull distribution poorly at the first level and well only at the second; thus, the Lindley distribution is a good candidate to model this application, while the Weibull distribution is not. We then made statistical inference for this application, estimating the distribution parameters by the two methods above (using non-informative priors for the BEs) and also estimating the three CIs for the model parameters.
Funding: This research was funded by Taif University Researchers Supporting Project number (TURSP-2020/160), Taif University, Taif, Saudi Arabia.