1. Introduction
Owing to the advanced technology used in manufacturing, the reliability of modern products has become very high, and it is therefore difficult nowadays to observe a sufficient number of failure times in classical life testing and reliability experiments; many products have lifetimes of thousands of hours. Our aim is to find a suitable way to produce early failures, because testing products under normal use conditions within a limited testing time may yield very few failures, which is not sufficient for analyzing the data and building a good model for them. The most effective way to shorten product lifetimes is the accelerated life test (ALT). In ALT, units or products are exposed to harsher conditions and higher stress levels (humidity, temperature, pressure, voltage, etc.) than those of normal use. There are many methods and models of ALT; products are exposed to stress according to the purpose of the experiment and the type of product. Researchers apply different types of acceleration, e.g., constant-stress ALT, step-stress ALT, and progressive-stress ALT. Nelson [1] discussed these different types of ALT. The main purpose of ALT is to drive items to failure quickly, because some items may have a long lifetime under normal operating conditions. Therefore, in ALT, products are placed under higher stress than the usual operating conditions, e.g., by increasing operating temperatures, pressures, or voltages, according to the physical use of the product, to generate failures more quickly.
There are several different forms of applying stress to products, e.g., step-stress (see, e.g., [1]). Two other common types are constant stress, where the test is conducted under a constant degree of stress for the entire experiment, and progressive stress, where all test units are subjected to a stress that is a function of time, so the stress on the tested items increases as the experiment progresses (see El-Din et al. [2,3] for more details about acceleration and its models).
In step-stress ALT, a certain level of stress is first applied to the items under test for a certain time $\tau$. After this time, the stress is increased by a fixed amount for a further period, and so on until all the units have failed or the experiment ends. One of the most common methods is the simple step-stress ALT with two stress levels (see, e.g., El-Din et al. [2,3,4]). Two types of censoring can be applied to the units: Type I censoring and Type II censoring. Type II censoring makes efficient use of the test units because it presets the number of failures, so the experiment does not end until the required number of failures has been achieved. Type II censoring can be explained as follows. Suppose we place $n$ independent and identical products under lifetime observation; the experiment then ends once $m$ failures, a number fixed before the experiment begins, have been observed. With a fixed progressive censoring scheme, we denote the numbers of surviving units withdrawn at the successive failure times by $R_1, R_2, \ldots, R_m$. For more extensive reading about this subject, see the works of Balakrishnan [5], Fathi [6], and Abd El-Raheem [7].
This paper aims to provide a full study of the Lindley distribution under ALT using progressively Type II censored samples and to apply the methods to an experimental data set, demonstrating the importance of this distribution in fitting real data from many fields. We refer to several recent studies that explain the different kinds of ALT; for more reading about constant, step, and progressive ALT, see the works of El-Din et al. [8,9,10,11].
This paper is organized as follows. In Section 2, a brief literature review of the Lindley distribution and its applications in many fields, as well as the assumptions of the acceleration model used in this study, is presented. In Section 3, the maximum likelihood estimates (MLEs) of the parameters are obtained. We present another type of estimation, the Bayes estimates (BEs), using a symmetric loss function for the model parameters in Section 4. We introduce three different types of confidence intervals (CIs), namely asymptotic, bootstrap, and credible intervals, for the parameters of the model in Section 5. In Section 6, a real reliability engineering data set is fitted and analyzed to illustrate the proposed methods (for more reading about reliability engineering modelling and applications, see [12,13,14]). Section 6 also includes graphs of the reliability function together with some discussion of these graphs. A simulation study and the resulting observations are presented in Section 7. Finally, the major findings are summarized in Section 8.
2. Lindley Distribution and Its Importance
This section highlights the importance of the Lindley distribution in fields such as business, pharmacy, and biology. For example, Gomez et al. [15] applied the Lindley distribution to the reliability of strength systems. Ghitany et al. [16] constructed a new probability density function with bounded domain based on the generalized Lindley distribution.
The novelty of this paper is that the tampered random variable (TRV) ALT model under progressive Type II censoring has not previously been applied to the Lindley distribution. We fit a real data experiment based on a censored sample, not a complete sample, using the modified Kolmogorov–Smirnov (KS) algorithm, compare the fit with that of the two-parameter Weibull distribution, and show that the Lindley distribution provides a better fit to the real data. This motivates us to implement the SSALT model and estimate the parameters involved under the Lindley distribution for both simulated data and a real data application. The Lindley distribution was first introduced by Lindley [17] as a mixture of an exponential distribution with parameter $\theta$ and a gamma distribution with parameters $(2, \theta)$. In 2012, Bakouch et al. [18] introduced an extended Lindley distribution that now has many applications in finance and economics. Ghitany et al. [19] proved that the Lindley distribution is a weighted mixture of the gamma and exponential distributions; therefore, in many cases, the Lindley distribution is more flexible than either of these two distributions. They studied its properties and showed through a numerical study that the Lindley distribution fits lifetime data better than the exponential distribution. One of the major advantages of the Lindley distribution over many other distributions, such as the exponential distribution, is that it has an increasing hazard rate. Gomez et al. [15] introduced a modification of the Lindley distribution named the Log-Lindley distribution, which was used as a replacement for the beta regression model. The probability density function (PDF) of the Lindley distribution can be written as follows:
$$ f(x;\theta)=\frac{\theta^{2}}{1+\theta}\,(1+x)\,e^{-\theta x},\qquad x>0,\ \theta>0. \quad (1) $$
Its cumulative distribution function (CDF) is given in Equation (2):
$$ F(x;\theta)=1-\frac{1+\theta+\theta x}{1+\theta}\,e^{-\theta x},\qquad x>0. \quad (2) $$
Graphs of the Lindley PDF and CDF show that they have asymmetric shapes. The failure-rate function of the Lindley distribution, also known as the hazard rate function (HRF), is given by:
$$ h(x;\theta)=\frac{f(x;\theta)}{1-F(x;\theta)}=\frac{\theta^{2}(1+x)}{1+\theta+\theta x},\qquad x>0. $$
For more details about real data applications using the Lindley distribution, see [17].
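For readers who wish to reproduce these quantities numerically, the following sketch evaluates the Lindley PDF, CDF, and HRF given above; the function names and the parameter value are our own illustrative choices, not part of the original study.

```python
import numpy as np

def lindley_pdf(x, theta):
    """Lindley PDF: f(x; theta) = theta^2/(1+theta) * (1+x) * exp(-theta*x), x > 0."""
    return theta**2 / (1.0 + theta) * (1.0 + x) * np.exp(-theta * x)

def lindley_cdf(x, theta):
    """Lindley CDF: F(x; theta) = 1 - (1 + theta + theta*x)/(1+theta) * exp(-theta*x)."""
    return 1.0 - (1.0 + theta + theta * x) / (1.0 + theta) * np.exp(-theta * x)

def lindley_hrf(x, theta):
    """Hazard rate h(x) = f(x)/(1 - F(x)) = theta^2 (1+x)/(1 + theta + theta*x); increasing in x."""
    return theta**2 * (1.0 + x) / (1.0 + theta + theta * x)

# illustrative evaluation on a grid with theta = 1.5 (hypothetical value)
x = np.linspace(0.01, 5.0, 100)
print(lindley_pdf(x, 1.5)[:3], lindley_cdf(x, 1.5)[:3], lindley_hrf(x, 1.5)[:3])
```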
Assumptions and Test Procedure
Let us assume that we have n identical and independent products that follow the Lindley distribution and are subjected to a lifetime examination in a life test;
The examination of the products ends as soon as the $m$th failure occurs, where $m \le n$;
All units start running under normal-use conditions and, after a prefixed time $\tau$, the stress is increased by a certain amount;
From physical experiments on products, engineers have established a law that links the stress $S$ applied to the products to the scale parameter $\theta$. This law is the inverse power law (IPL), under which the scale parameter is inversely proportional to a power of the applied voltage $S$, with $a$ denoting the model parameter;
We will apply progressive Type II censoring, as discussed above, to the units in this experiment;
After running the test on the products, a certain number of units fail before the stress is increased at time $\tau$, and the remaining observed failures occur after the stress is applied at $\tau$;
We use the tampered random variable (TRV) model provided by [20]. This model states that, under a step-stress partially accelerated life test (SSPALT), the total lifetime $Y$ of a unit can be written as
$$ Y=\begin{cases} Z, & Z\le\tau,\\[4pt] \tau+\dfrac{Z-\tau}{\beta}, & Z>\tau, \end{cases} $$
where $Z$ denotes the lifetime of the product under use conditions, $\tau$ is the time at which the stress is changed, and $\beta$ is the factor used to accelerate the failure time ($\beta>1$);
The PDF of the total lifetime $Y$ is then divided as follows:
$$ f_{Y}(y)=\begin{cases} f_{1}(y)=f(y;\theta), & 0<y\le\tau,\\[4pt] f_{2}(y)=\beta\, f\!\left(\tau+\beta(y-\tau);\theta\right), & y>\tau, \end{cases} $$
where $f(\cdot\,;\theta)$ is the Lindley PDF in Equation (1).
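To make the test procedure above concrete, the sketch below (our own illustration, with hypothetical parameter values and censoring scheme) simulates lifetimes from the Lindley distribution, applies the TRV transformation at the stress-change time, and then progressively Type II censors the sample by withdrawing $R_i$ surviving units at each observed failure.

```python
import numpy as np

rng = np.random.default_rng(2024)

def rlindley(n, theta, rng):
    """Draw Lindley(theta) variates via the exponential/gamma mixture representation."""
    comp = rng.random(n) < theta / (1.0 + theta)   # exponential component with prob theta/(1+theta)
    expo = rng.exponential(1.0 / theta, n)          # Exp(theta)
    gam = rng.gamma(2.0, 1.0 / theta, n)            # Gamma(2, theta)
    return np.where(comp, expo, gam)

def trv_lifetime(z, tau, beta):
    """TRV model: total lifetime under SSPALT."""
    return np.where(z <= tau, z, tau + (z - tau) / beta)

def progressive_type2(y, R, rng):
    """Progressively Type II censor the sample y with scheme R (len(R) = m failures)."""
    alive = list(y)
    obs = []
    for r in R:
        alive.sort()
        obs.append(alive.pop(0))                    # next observed failure
        for _ in range(r):                          # withdraw r surviving units at random
            alive.pop(rng.integers(len(alive)))
    return np.array(obs)

# hypothetical settings: n = m + sum(R) must hold
n, theta, beta, tau = 30, 1.2, 2.5, 0.8
R = [2, 0, 1, 0, 2] + [0] * 14 + [5]                # m = 20 observed failures, 10 withdrawals
z = rlindley(n, theta, rng)
y = trv_lifetime(z, tau, beta)
sample = progressive_type2(y, R, rng)
print(sample)
```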
3. Estimation Using the Maximum Likelihood Function
Maximum likelihood estimation (MLE) is a statistical method for estimating the parameters of a probability distribution by maximizing the likelihood function, so that the observed data are most probable under the assumed statistical model. The maximum likelihood estimate is the point estimate of the distribution parameters that maximizes the likelihood function; one advantage of this method is that it is versatile and provides good estimates of the distribution parameters.
The method works by taking the first derivatives of the log-likelihood function with respect to the distribution parameters, setting them equal to zero, and solving the resulting equations simultaneously; the values that maximize the log-likelihood function are the maximum likelihood estimates of the parameters.
In this section, we use this method to estimate the parameters of the distribution and the acceleration factor. By substituting Equation (4) into Equation (5), we obtain the likelihood function under the progressively Type II censored sample. In the next subsection, we introduce the procedure of this method.
Point Estimation
Let $y_{1:m:n}<y_{2:m:n}<\cdots<y_{m:m:n}$ be the observed failure times of the products under SSPALT, obtained under the progressive censoring scheme $(R_1,R_2,\ldots,R_m)$; then the likelihood function is as expressed below (see [4] for more reading):
$$ L(\theta,\beta)\;\propto\;\prod_{i=1}^{m} f_{Y}\!\left(y_{i:m:n}\right)\left[1-F_{Y}\!\left(y_{i:m:n}\right)\right]^{R_{i}}, \quad (6) $$
where $f_{Y}$ and $F_{Y}$ are the PDF and CDF of the total lifetime under SSPALT.
Taking the logarithm of both sides of Equation (6), we obtain the log-likelihood function shown below:
The first partial derivatives of the log-likelihood function with respect to the distribution parameter $\theta$ and the acceleration factor $\beta$ are obtained as follows:
Equations (9) and (10) are very hard to solve analytically, so we solve them numerically using Mathematica 11 software, thereby obtaining the estimates of the two parameters $\theta$ and $\beta$.
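As a hedged illustration of this numerical step (the paper uses Mathematica 11; here we sketch the same idea in Python with an illustrative data set), one can maximize the log-likelihood of Equation (6) directly instead of solving Equations (9) and (10). The data values, starting point, and optimizer choice below are assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import minimize

def lindley_pdf(x, theta):
    return theta**2 / (1.0 + theta) * (1.0 + x) * np.exp(-theta * x)

def lindley_sf(x, theta):
    # survival function 1 - F(x; theta)
    return (1.0 + theta + theta * x) / (1.0 + theta) * np.exp(-theta * x)

def neg_log_lik(params, y, R, tau):
    """Negative log-likelihood of a progressively Type II censored SSPALT sample
    under the TRV model: for y > tau the use-condition time is tau + beta*(y - tau)."""
    theta, beta = params
    if theta <= 0.0 or beta <= 1.0:
        return np.inf
    z = np.where(y <= tau, y, tau + beta * (y - tau))   # back-transformed times
    jac = np.where(y <= tau, 1.0, beta)                 # dz/dy
    dens = jac * lindley_pdf(z, theta)
    surv = lindley_sf(z, theta)
    if np.any(dens <= 0.0) or np.any(surv <= 0.0):
        return np.inf
    return -(np.sum(np.log(dens)) + np.sum(R * np.log(surv)))

# illustrative censored sample (y_i, R_i) and stress-change time tau
y = np.array([0.12, 0.25, 0.31, 0.47, 0.62, 0.85, 0.93, 1.10, 1.34, 1.62])
R = np.array([1, 0, 0, 2, 0, 0, 1, 0, 0, 3])
tau = 0.80

res = minimize(neg_log_lik, x0=[1.0, 2.0], args=(y, R, tau), method="Nelder-Mead")
theta_hat, beta_hat = res.x
print("MLEs:", theta_hat, beta_hat)
```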
4. Bayes Estimation
Bayesian estimation is a modern and efficient approach to parameter estimation compared with the maximum likelihood method, as it takes into account both prior information and sample information when estimating the unknown parameters of interest. Bayesian estimation can be performed using symmetric or asymmetric loss functions, according to the needs of the experiment, and in some situations symmetric loss functions perform better than asymmetric ones. There are several types of asymmetric loss functions, two of which are the linear exponential loss function and the general entropy loss function.
In this section, we propose Bayes estimators of the unknown parameters of the Lindley distribution using a symmetric loss function. In Bayesian estimation we must assign a prior distribution to the parameters, and in order to choose a prior density that reflects our beliefs about the data, we must choose appropriate values of the hyper-parameters. In this part of the paper, based on the progressively Type II censored sample, we use the squared error (SE) loss function to obtain the estimates of the model parameters $\theta$ and $\beta$. Since $\theta$ and $\beta$ are assumed independent, we choose gamma priors for the two parameters because the gamma family is flexible enough to accommodate different density shapes; it also provides conjugacy and mathematical convenience and has closed-form expressions for its moments. We therefore assign independent gamma priors to $\theta$ and $\beta$.
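For concreteness, a common way of writing two such independent gamma priors is sketched below; the hyper-parameter symbols $a_1, b_1, a_2, b_2$ are our own illustrative notation and may differ from those used in the original prior specification:
$$ \pi_{1}(\theta)\;\propto\;\theta^{\,a_{1}-1}e^{-b_{1}\theta},\quad \theta>0, \qquad \pi_{2}(\beta)\;\propto\;\beta^{\,a_{2}-1}e^{-b_{2}\beta},\quad \beta>0, $$
with the additional restriction $\beta>1$ imposed by the SSPALT model.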
As the gamma prior is very common, many authors have used it because it gives researchers confidence in the prior specification. For more information about BEs, see Nassar and Eissa [21] and Singh et al. [22]. If we do not have any prior belief about the data, we must use non-informative priors; this is done by letting the hyper-parameters tend to their limiting values, which turns the informative priors into non-informative ones (see Singh et al. [22]). After this, we can write the joint prior PDF of $\theta$ and $\beta$ as below:
By multiplying Equation (11) by Equation (5), we obtain the posterior function in Equation (12). After some simplification of Equation (12), we obtain Equation (13).
The posterior density function in Equation (14) for the two parameters $\theta$ and $\beta$ is formed by multiplying Equation (6) by Equation (11) and simplifying; its final form is given below:
According to the SE loss function, the Bayes estimators of $\theta$ and $\beta$ are their posterior means (for more details, see Ahmadi et al. [23]):
$$ \hat{\theta}_{BS}=\iint \theta\,\pi^{*}(\theta,\beta\mid\underline{y})\,d\theta\,d\beta, \qquad \hat{\beta}_{BS}=\iint \beta\,\pi^{*}(\theta,\beta\mid\underline{y})\,d\theta\,d\beta, \quad (15) $$
where the posterior density $\pi^{*}(\theta,\beta\mid\underline{y})$ is given by Equation (14).
In fact, the integrals in Equation (15) cannot be obtained in closed form. Thus, we use the Markov chain Monte Carlo (MCMC) technique to approximate these integrals, employing the Metropolis–Hastings algorithm as a particular MCMC method to find the estimates.
4.1. Using the MCMC Method in Bayesian Estimation
In this section, we use the MCMC method because the posterior density function does not correspond to a well-known distribution, and we then calculate the BEs of $\theta$ and $\beta$. From Equation (14), the conditional posterior distribution functions for $\theta$ and $\beta$ are shown below, respectively:
Since the conditional posterior distributions of $\theta$ and $\beta$ in Equations (16) and (17) do not have closed forms corresponding to any known distribution, we use the Metropolis–Hastings algorithm (for more information, see Upadhyay and Gupta [24]). The algorithm below explains the steps required to compute the Bayes estimators of $\theta$ and $\beta$ under the SE loss function.
4.2. The Metropolis–Hastings Algorithm
The Metropolis–Hastings algorithm, sometimes called the random-walk algorithm, is a Markov chain Monte Carlo (MCMC) method for generating data from a target distribution. A symmetric proposal distribution, commonly the normal distribution, is often used to generate candidate values, since its symmetry simplifies the acceptance ratio. The generated samples can be used to approximate the target distribution or to compute an integral (e.g., an expected value). We use Algorithm 1 because sampling directly is difficult here, as the posterior does not correspond to a known distribution. The algorithm can be applied to one-dimensional or two-dimensional data. For more reading, see [24,25,26,27,28,29,30,31].
Algorithm 1 MCMC algorithm
1. Set the starting values of $\theta$ and $\beta$.
2. Start the iterations with $j=1$.
3. Generate values of $\theta$ and $\beta$ from Equations (16) and (17), respectively, using the MCMC algorithm (Metropolis–Hastings algorithm).
4. Repeat step (3) for 10,000 iterations, recording the generated values at each iteration.
5. Evaluate the approximate posterior means of $\theta$ and $\beta$ as follows:
$$ \hat{\theta}_{BS}=\frac{1}{N-B}\sum_{j=B+1}^{N}\theta^{(j)},\qquad \hat{\beta}_{BS}=\frac{1}{N-B}\sum_{j=B+1}^{N}\beta^{(j)}, $$
where $B$ is the burn-in period and $N$ is the total number of iterations.
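The following Python sketch illustrates the scheme of Algorithm 1 under the simplified likelihood and gamma priors sketched earlier. It updates $(\theta,\beta)$ jointly with a symmetric random-walk proposal rather than alternating between the conditionals in Equations (16) and (17), and the data, hyper-parameter values, proposal scale, and burn-in are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np

rng = np.random.default_rng(7)

def lindley_pdf(x, theta):
    return theta**2 / (1.0 + theta) * (1.0 + x) * np.exp(-theta * x)

def lindley_sf(x, theta):
    return (1.0 + theta + theta * x) / (1.0 + theta) * np.exp(-theta * x)

def log_post(theta, beta, y, R, tau, a1=0.01, b1=0.01, a2=0.01, b2=0.01):
    """Log posterior: SSPALT Lindley likelihood (TRV model) + independent gamma priors
    (the prior on beta is truncated to beta > 1 by the model constraint)."""
    if theta <= 0.0 or beta <= 1.0:
        return -np.inf
    z = np.where(y <= tau, y, tau + beta * (y - tau))
    jac = np.where(y <= tau, 1.0, beta)
    dens = jac * lindley_pdf(z, theta)
    surv = lindley_sf(z, theta)
    if np.any(dens <= 0.0) or np.any(surv <= 0.0):
        return -np.inf
    loglik = np.sum(np.log(dens)) + np.sum(R * np.log(surv))
    logprior = (a1 - 1) * np.log(theta) - b1 * theta + (a2 - 1) * np.log(beta) - b2 * beta
    return loglik + logprior

def metropolis_hastings(y, R, tau, n_iter=10000, burn_in=2000, step=0.15):
    theta, beta = 1.0, 2.0                        # starting values (step 1 of Algorithm 1)
    chain = np.empty((n_iter, 2))
    lp = log_post(theta, beta, y, R, tau)
    for i in range(n_iter):
        prop = np.array([theta, beta]) + step * rng.standard_normal(2)  # random-walk proposal
        lp_prop = log_post(prop[0], prop[1], y, R, tau)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            theta, beta, lp = prop[0], prop[1], lp_prop
        chain[i] = theta, beta
    return chain[burn_in:].mean(axis=0)           # posterior means = Bayes estimates under SE loss

# illustrative data, as in the MLE sketch above
y = np.array([0.12, 0.25, 0.31, 0.47, 0.62, 0.85, 0.93, 1.10, 1.34, 1.62])
R = np.array([1, 0, 0, 2, 0, 0, 1, 0, 0, 3])
theta_bayes, beta_bayes = metropolis_hastings(y, R, tau=0.80)
print("Bayes estimates (posterior means):", theta_bayes, beta_bayes)
```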
8. Conclusions on Real Data and Simulation Results
In this paper, we carried out statistical inference for step-stress accelerated life tests under progressive Type II censoring when the lifetimes of the data follow the Lindley distribution. First, we used simulation studies to estimate the model parameters by the classical method, namely MLEs, and by the Metropolis–Hastings algorithm to obtain the BEs. We conclude that the Bayesian method performed better than the classical method because it yielded smaller MSEs. CIs, including approximate CIs, bootstrap CIs, and credible CIs, were computed for the model parameters, and we conclude that the credible interval was the best, as it had the shortest interval length and the highest coverage probability. All the calculations were carried out for different sample sizes using censoring Scheme 1. In Section 6, we presented a real data application to examine whether the Lindley distribution provides a good fit to the data. This application consisted of two levels of acceleration, the first being complete and the second being censored and exposed to higher stress than the first. We fitted the data using the Lindley distribution and the two-parameter Weibull distribution. Based on the p-values, the data fit the Lindley distribution well at both levels, whereas the Weibull distribution fit poorly at the first level and well at the second; thus, the Lindley distribution can be used as a good candidate to model this application, while the Weibull distribution cannot. We then performed statistical inference for this application and estimated the parameters of the distribution using the two methods described above, employing non-informative priors for the BEs, and we also computed the three CIs for the model parameters.