Article

Study on Lindley Distribution Accelerated Life Tests: Application and Numerical Simulation

1 Department of Mathematics, Faculty of Science, Helwan University, P.O. Box 11731, Cairo 11795, Egypt
2 Department of Mathematics, College of Science, Jouf University, P.O. Box 2014, Sakaka 72393, Saudi Arabia
3 Department of Mathematics, Faculty of Science, Minia University, Minia 61111, Egypt
4 The High Institute of Engineering and Technology, Ministry of Higher Education, El-Minia 61111, Egypt
5 Department of Mathematics, College of Science, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
6 Department of Mathematics, Faculty of Science, Al-Azher University, Nasr City 11884, Egypt
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(12), 2080; https://doi.org/10.3390/sym12122080
Submission received: 18 November 2020 / Revised: 9 December 2020 / Accepted: 10 December 2020 / Published: 15 December 2020

Abstract

Saving money and time is very important in any research project, so we must find a way to shorten the duration of the experiment. Accelerated life tests (ALT) under censored samples are an efficient way to reduce testing time and, consequently, the cost of the experiment. This research presents inference on the Lindley distribution in a simple step-stress ALT for Type II progressively censored samples. The paper contains two major parts, a simulation study and a real-data application to an industrial experiment on lamps, both of which are used to draw conclusions about the distribution. The simulation was carried out with the Mathematica 11 program. To use real data in the censored sample, we checked its compatibility with the Lindley distribution using the modified Kolmogorov–Smirnov (KS) goodness-of-fit test for progressive Type II censored data. We used the tampered random variable (TRV) acceleration model to generate early failures of items under stress. We obtained the values of the distribution parameter and the accelerating factor using maximum likelihood estimates (MLEs) and Bayes estimates (BEs) under a symmetric loss function for both the simulated data and the real data. Next, we estimated the upper and lower bounds of both distribution parameters, ψ and ζ, using three methods, namely approximate confidence intervals (CIs), bootstrap CIs, and credible CIs. Finally, we estimated the parameter of the real data set under normal-use and stress conditions and graphed the reliability functions under normal and accelerated use.

1. Introduction

Due to the use of high technology in manufacturing, the reliability of products has become very high, so it is now very hard to observe a sufficient number of failure times in classical life testing and reliability experiments; most products have an exponential lifetime that may reach thousands of hours. Our aim is to find a suitable way to produce early failures, because testing products under normal use conditions within a limited testing time may produce very few failures, which is not sufficient to study the data and build a good model for them. Thus, we use the best way of shortening the lifetime of the products: accelerated life tests (ALT). In ALT, units or products are exposed to tough conditions and high stress levels (humidity, temperature, pressure, voltage, etc.) according to the conditions of manufacturing. There are many methods and models of ALT; products are exposed to stress according to the purpose of the experiment and the type of product. Scientists apply different types and methods of acceleration, e.g., constant ALT, step-stress ALT, and progressive-stress ALT. Nelson [1] discussed these different types of ALT. The main purpose of using ALT is to drive items to failure quickly, because some items may have a long lifetime under normal operating conditions. Therefore, in ALT, we put products under higher stress than the usual stress conditions, e.g., by increasing operating temperatures, pressures, or voltages, according to the physical use of the product, to generate failures more quickly.
There are several different forms of applying stress to products, e.g. step-stress (see, e.g., [1]). Two other common types are constant stress, where the test is conducted under a constant degree of stress for the entire experiment, and progressive stress, where all test units are subjected to stress as a function of time, and the stress increases on the experimented items as the time of the experiment increases (see El-Din et al. [2,3] for more details about acceleration and its models).
In step-stress ALT, we first apply a certain level of stress to the items under test for a certain time τ. After this time, the stress is increased by a fixed amount for a certain period, and so on until all the units have failed or the experiment ends. One of the most common forms of simple step-stress ALT has two levels (see, e.g., El-Din et al. [2,3,4]). Two types of censoring can be applied to the units: Type I censoring and Type II censoring. Type II censoring makes efficient use of time because it presets the number of failures; this means that the experiment does not end until the required number of failures has been reached. Type II censoring can be explained as follows. Suppose we have several independent and identical products and place a sample of n products under lifetime observation; the experiment then ends when the m-th failure is reached, where m is fixed before the experiment begins, with a fixed censoring scheme denoted by R_1, R_2, \ldots, R_m. For more extensive reading on this subject, see the works of Balakrishnan and Cramer [5], Fathi [6], and Abd El-Raheem [7].
This paper aims to present a full study of the Lindley distribution under ALT using progressive Type II censored samples and to provide an experimental application that shows the importance of this distribution in fitting real data from many fields of life. We refer to several recent studies that explain the different kinds of ALT; for more reading about constant, step, and progressive ALT, see the works of El-Din et al. [8,9,10,11].
This paper is organized as follows. In Section 2, a brief literature review of the Lindley distribution and its applications in many fields of life, as well as the assumptions of the acceleration model used in this study, is presented. In Section 3, the maximum likelihood estimates (MLEs) of the parameters are obtained. We present another type of estimation, the Bayes estimates (BEs), using a symmetric loss function for the model parameters in Section 4. We introduce three different types of intervals, namely asymptotic, bootstrap, and credible confidence intervals (CIs), for the parameters of the model in Section 5. In Section 6, a real data example from reliability engineering is fitted and studied to apply the proposed methods (for more reading about reliability engineering modelling and applications, see [12,13,14]). Section 6 also includes the graphs of the reliability function and some elaboration on these graphs. A simulation study and some results and observations are presented in Section 7. Finally, the major findings are summarized in Section 8.

2. Lindley Distribution and Its Importance

This section identifies the importance of the Lindley distribution in the fields of business, pharmacy, biology, and so on. For example, Gómez et al. [15] applied the Lindley distribution to the reliability of strength systems. Ghitany et al. [16] created a new bounded-domain probability density function based on the generalized Lindley distribution.
The novelty of this paper is that no one has used the tampered random variable (TRV) ALT model under progressive Type II censoring for the Lindley distribution together with a real-data experiment on a censored (not complete) sample using the modified Kolmogorov–Smirnov (KS) procedure; we compared the Lindley distribution with the two-parameter Weibull distribution and showed that it fits the real data better. This inspired us to implement the SSALT model and estimate the parameters involved under the Lindley distribution for simulated data and a real data application. The Lindley distribution was first introduced by Lindley [17], who combined the exponential distribution with parameter ψ and the Gamma(2, ψ) distribution. In 2012, Bakouch et al. [18] introduced an extended Lindley distribution that now has many applications in finance and economics. Ghitany et al. [19] showed that the Lindley distribution is a two-component mixture of the gamma and exponential distributions; therefore, in many cases, the Lindley distribution is more flexible than these two distributions. They also studied its properties and showed through a numerical study that the Lindley distribution fits lifetime data better than the exponential distribution. One of the major advantages of the Lindley distribution over many distributions, for example the exponential distribution, is that it has an increasing hazard rate. Gómez et al. [15] introduced a modification of the Lindley distribution, named the log-Lindley distribution, which was used as a replacement for the beta regression model. The probability density function (PDF) of the Lindley distribution can be written as follows:
f(z) = \frac{\psi^{2} (1+z)\, e^{-\psi z}}{1+\psi}, \quad z > 0, \ \psi > 0,
F(z) = 1 - \left( 1 + \frac{\psi z}{1+\psi} \right) e^{-\psi z}, \quad z > 0, \ \psi > 0.
Its cumulative distribution function (CDF) is given in Equation (2). By graphing the Lindley PDF and CDF, we can see that they have asymmetric shapes. The Lindley distribution also has a failure-rate function, called the hazard rate function (HRF), which is given by:
h(z) = \frac{\psi^{2} (1+z)}{1 + \psi (1+z)}, \quad z > 0, \ \psi > 0.
For more details about real data applications using the Lindley distribution, see [17].
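For concreteness, the three functions above can be evaluated numerically. The following short Python sketch (an illustration added here; the paper's own computations were carried out in Mathematica 11) implements the Lindley PDF, CDF, and HRF and checks that the HRF equals f(z)/(1 − F(z)).

```python
import numpy as np

def lindley_pdf(z, psi):
    # PDF: psi^2 (1 + z) exp(-psi z) / (1 + psi)
    return psi**2 * (1.0 + z) * np.exp(-psi * z) / (1.0 + psi)

def lindley_cdf(z, psi):
    # CDF: 1 - (1 + psi z / (1 + psi)) exp(-psi z)
    return 1.0 - (1.0 + psi * z / (1.0 + psi)) * np.exp(-psi * z)

def lindley_hrf(z, psi):
    # HRF: psi^2 (1 + z) / (1 + psi (1 + z))
    return psi**2 * (1.0 + z) / (1.0 + psi * (1.0 + z))

# Quick consistency check at a few points (psi = 1.2 is the value used later in the simulation).
z, psi = np.array([0.5, 1.0, 2.0]), 1.2
print(np.allclose(lindley_hrf(z, psi),
                  lindley_pdf(z, psi) / (1.0 - lindley_cdf(z, psi))))
```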

Assumptions, Test Procedure, and Steps Used

• Let us assume that we have n identical and independent products that follow the Lindley distribution and that these are subjected to a lifetime examination in a lifetime experiment;
• The examination of the products ends as soon as the m-th failure happens, where m ≤ n;
• All units run under normal-use conditions, and after a prefixed time η the stress is increased by a certain value;
• From physical experiments on products, engineers have stated that the following law controls the connection between the stress S on the products and the scale parameter σ. The inverse power law (IPL) model is given by \ln(\sigma_i) = a + b \ln(S_i), where b > 0, S denotes the voltage, and a is a model parameter;
  • We will apply progressive Type II censoring, as discussed above, on the units of this experiment;
• After running the test on the products, the number of units that failed before the stress change is n_1. In addition, n_2 is the total number of items that failed after applying the stress at time η;
• We used the tampered random variable (TRV) model provided by [20]. This model states that, under a step-stress partially accelerated life test (SSPALT), the lifetime of a unit can be written as:
    Z = \begin{cases} z, & z \le \eta, \\ \eta + \dfrac{z - \eta}{\zeta}, & z > \eta, \end{cases}
    where z refers to the lifetime of the product under use conditions, η is the time at which we change the stress, and ζ is the factor used to accelerate the failure time (ζ > 1);
• The PDF is therefore divided as follows (a small data-generation sketch under these assumptions is given after this list):
    f(z) = \begin{cases} 0, & z < 0, \\ f_1(z) = \dfrac{\psi^{2} (1+z)\, e^{-\psi z}}{1+\psi}, & 0 < z < \eta, \\ f_2(z) = \dfrac{\psi^{2} \zeta \left[ 1 + \zeta(z-\eta) + \eta \right] e^{-\psi\left( \zeta(z-\eta) + \eta \right)}}{1+\psi}, & \eta < z < \infty. \end{cases}
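As an illustration of these assumptions (not the authors' Mathematica code), the sketch below draws Lindley lifetimes using the well-known mixture representation of the distribution, an exponential component with probability ψ/(1+ψ) and a Gamma(2, ψ) component otherwise, and then applies the TRV transformation; the parameter values are those used later in the simulation section.

```python
import numpy as np

rng = np.random.default_rng(2020)

def rlindley(size, psi, rng):
    # Lindley(psi) as a mixture: Exp(psi) w.p. psi/(1+psi), Gamma(2, rate psi) otherwise.
    expo = rng.exponential(scale=1.0 / psi, size=size)
    gam = rng.gamma(shape=2.0, scale=1.0 / psi, size=size)
    use_exp = rng.random(size) < psi / (1.0 + psi)
    return np.where(use_exp, expo, gam)

def trv_sspalt(t, eta, zeta):
    # TRV relation: observed lifetime Z = t if t <= eta, otherwise eta + (t - eta)/zeta.
    return np.where(t <= eta, t, eta + (t - eta) / zeta)

psi, zeta, eta = 1.2, 1.1, 2.8          # values used in the simulation section
t = rlindley(1000, psi, rng)            # lifetimes under use conditions
z = trv_sspalt(t, eta, zeta)            # lifetimes under the step-stress pattern
print("failures before eta:", int(np.sum(z <= eta)), "after eta:", int(np.sum(z > eta)))
```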

3. Estimation Using the Maximum Likelihood Function

Maximum likelihood estimation (MLE) is a statistical method for estimating the parameters of a probability distribution by maximizing the likelihood function, so that the observed data are most probable under the assumed statistical model. The maximum likelihood estimate is a point estimate of the distribution parameter, namely the value that maximizes the likelihood function; one advantage of this method is that it is versatile and provides good estimates of the distribution parameters.
The method works by taking the first derivatives of the log-likelihood function with respect to the distribution parameters, setting them to zero, and solving the resulting equations simultaneously; the values that maximize the log-likelihood function are the estimates of the distribution parameters.
In this section, we use this method to estimate the parameters of the distribution and the accelerating factor. By substituting Equation (4) into Equation (5), we get the likelihood function under the progressive Type II censored sample. In the next subsection, we introduce the procedure of this method.

Point Estimation

Let z_i = z_{i:m:n}^{R} denote the ordered failure times of the products under SSPALT, with censoring scheme R = (R_1, \ldots, R_m); then the likelihood function is as expressed below (see [4] for more reading):
L(\psi, \zeta) = A \prod_{i=1}^{n_1} f_1(z_i) \left[ 1 - F_1(z_i) \right]^{R_i} \prod_{i=n_1+1}^{m} f_2(z_i) \left[ 1 - F_2(z_i) \right]^{R_i},
L = A \prod_{i=1}^{n_1} \frac{\psi^{2} (1+z_i) \exp[-\psi z_i]}{1+\psi} \left[ \left( 1 + \frac{\psi z_i}{1+\psi} \right) \exp[-\psi z_i] \right]^{R_i} \prod_{i=1}^{n_2} \frac{\psi^{2} \zeta \left( 1 + \zeta(z_i-\eta) + \eta \right) \exp\left[ -\psi \left( \zeta(z_i-\eta) + \eta \right) \right]}{1+\psi} \left[ \left( 1 + \frac{\psi \left( \zeta(z_i-\eta) + \eta \right)}{1+\psi} \right) \exp\left[ -\psi \left( \zeta(z_i-\eta) + \eta \right) \right] \right]^{R_i}.
By taking the logarithm of both sides of Equation (6), we get the log-likelihood function, as shown below:
\ell(\psi, \zeta) = \log A + \sum_{i=1}^{m} \log \psi^{2} + \sum_{i=1}^{n_2} \log \zeta - \sum_{i=1}^{m} \log(1+\psi) + \sum_{i=1}^{n_1} \log(1+z_i) + \sum_{i=1}^{n_1} R_i \log\left( 1 + \frac{\psi z_i}{1+\psi} \right) - \sum_{i=1}^{n_1} \psi z_i (R_i+1) + \sum_{i=1}^{n_2} \log\left( 1 + \zeta(z_i-\eta) + \eta \right) + \sum_{i=1}^{n_2} R_i \log\left( 1 + \frac{\psi \left( \zeta(z_i-\eta) + \eta \right)}{1+\psi} \right) - \sum_{i=1}^{n_2} \psi \left( \zeta(z_i-\eta) + \eta \right)(R_i+1),
\ell(\psi, \zeta) = \log A + 2m \log \psi + n_2 \log \zeta - m \log(1+\psi) + \sum_{i=1}^{n_1} \log(1+z_i) + \sum_{i=1}^{n_1} R_i \log\left( 1 + \frac{\psi z_i}{1+\psi} \right) - \sum_{i=1}^{n_1} \psi z_i (R_i+1) + \sum_{i=1}^{n_2} \log\left( 1 + \zeta(z_i-\eta) + \eta \right) + \sum_{i=1}^{n_2} R_i \log\left( 1 + \frac{\psi \left( \zeta(z_i-\eta) + \eta \right)}{1+\psi} \right) - \sum_{i=1}^{n_2} \psi \left( \zeta(z_i-\eta) + \eta \right)(R_i+1).
The first derivatives of the log-likelihood with respect to the parameters ζ and ψ are as follows:
\frac{\partial \ell(\psi, \zeta)}{\partial \zeta} = \frac{n_2}{\zeta} + \sum_{i=1}^{n_2} \frac{z_i - \eta}{1 + \zeta(z_i-\eta) + \eta} + \sum_{i=1}^{n_2} \frac{R_i \psi (z_i - \eta)}{1 + \psi + \psi \left( \zeta(z_i-\eta) + \eta \right)} - \sum_{i=1}^{n_2} \psi (z_i - \eta)(R_i+1),
\frac{\partial \ell(\psi, \zeta)}{\partial \psi} = \frac{2m}{\psi} - \frac{m}{1+\psi} + \sum_{i=1}^{n_1} \frac{R_i z_i}{(1+\psi)^{2} + (1+\psi)\psi z_i} - \sum_{i=1}^{n_1} z_i (R_i+1) + \sum_{i=1}^{n_2} \frac{R_i \left( \zeta(z_i-\eta) + \eta \right)}{(1+\psi)^{2} + (1+\psi)\psi \left( \zeta(z_i-\eta) + \eta \right)} - \sum_{i=1}^{n_2} \left( \zeta(z_i-\eta) + \eta \right)(R_i+1).
Equations (9) and (10) are very hard to solve analytically, so we solve them numerically using the Mathematica 11 software and thereby obtain the estimates of the two parameters ψ and ζ; an illustrative sketch of this step is given below.
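The paper carries out this numerical step in Mathematica 11. Purely as an illustration, the following Python sketch maximizes the log-likelihood above with SciPy for a progressively censored SSPALT sample; the small data set `z`, `R` at the end is hypothetical and only shows the call.

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(params, z, R, eta):
    # Negative log-likelihood of the Lindley SSPALT model (up to the constant log A).
    psi, zeta = params
    if psi <= 0 or zeta <= 1:                 # zeta > 1 by assumption
        return np.inf
    m = len(z)
    use = z <= eta                            # failures before the stress change
    acc = ~use                                # failures after the stress change
    w = zeta * (z[acc] - eta) + eta           # tampered argument for the accelerated part
    ll = 2 * m * np.log(psi) + acc.sum() * np.log(zeta) - m * np.log(1 + psi)
    ll += np.sum(np.log(1 + z[use]))
    ll += np.sum(R[use] * np.log(1 + psi * z[use] / (1 + psi)))
    ll -= np.sum(psi * z[use] * (R[use] + 1))
    ll += np.sum(np.log(1 + w))
    ll += np.sum(R[acc] * np.log(1 + psi * w / (1 + psi)))
    ll -= np.sum(psi * w * (R[acc] + 1))
    return -ll

# Hypothetical progressively censored failure times and removal counts, for illustration only.
z = np.array([0.4, 0.9, 1.7, 2.5, 3.1, 3.6, 4.4])
R = np.array([0, 1, 0, 0, 1, 0, 2])
eta = 2.8
fit = minimize(negloglik, x0=[1.0, 1.5], args=(z, R, eta), method="Nelder-Mead")
psi_hat, zeta_hat = fit.x
print("MLEs:", psi_hat, zeta_hat)
```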

4. Bayes Estimation

Bayesian estimation is a modern and efficient alternative to maximum likelihood estimation, as it takes into account both prior information and sample information when estimating the unknown parameters of interest. Bayesian estimation can be performed with symmetric or asymmetric loss functions, according to the needs of the experiment, and sometimes symmetric loss functions perform better than asymmetric ones. There are different types of asymmetric loss functions, two of which are the linear exponential loss function and the general entropy loss function.
In this section, we propose the Bayes estimators of the unknown parameters of the Lindley distribution using a symmetric loss function. In Bayesian estimation we must assign a prior distribution to the parameters, and in order to choose a prior density function that reflects our belief about the data, we must choose appropriate values of the hyper-parameters. In this part of the paper, based on the Type II progressive censored sample, we use the squared error (SE) loss function to obtain the estimates of the model parameters ψ and ζ. Since ψ and ζ are assumed independent, we choose gamma priors for the two parameters because the gamma prior is versatile in adjusting to different shapes of the density. It has the further merits of providing conjugacy and mathematical ease and of having closed-form expressions for its moments. The two independent priors are as follows:
\pi_1(\psi) \propto \psi^{\mu_1 - 1} e^{-\psi/\lambda_1}, \quad \psi > 0, \ \mu_1, \lambda_1 > 0,
and
\pi_2(\zeta) \propto \zeta^{\mu_2 - 1} e^{-\zeta/\lambda_2}, \quad \zeta > 0, \ \mu_2, \lambda_2 > 0.
The gamma prior is very common, and many authors have used it because it allows researchers to express their confidence in the data. For more information about BEs, see Nassar and Eissa [21] and Singh et al. [22]. If we do not have any prior belief about the data, we must use non-informative priors; we do this by letting μ_i tend to zero and λ_i tend to infinity, i = 1, 2. In this way, we can change informative priors into non-informative priors; see Singh et al. [22]. After this, the joint prior PDF of ψ and ζ takes the form below:
\pi(\psi, \zeta) \propto \psi^{\mu_1 - 1} \zeta^{\mu_2 - 1} e^{-\left( \psi/\lambda_1 + \zeta/\lambda_2 \right)}, \quad \psi, \zeta > 0.
By multiplying Equation (11) by Equation (5), we get the posterior function in Equation (12):
\pi^{*}(\psi, \zeta) \propto L(\psi, \zeta)\, \pi(\psi, \zeta) \propto \psi^{\mu_1 - 1} \zeta^{\mu_2 - 1} \exp\left[ -\left( \psi/\lambda_1 + \zeta/\lambda_2 \right) \right] \prod_{i=1}^{n_1} \frac{\psi^{2} (1+z_i) \exp[-\psi z_i]}{1+\psi} \left[ \left( 1 + \frac{\psi z_i}{1+\psi} \right) \exp[-\psi z_i] \right]^{R_i} \prod_{i=1}^{n_2} \frac{\psi^{2} \zeta \left( 1 + \zeta(z_i-\eta) + \eta \right) \exp\left[ -\psi\left( \zeta(z_i-\eta) + \eta \right) \right]}{1+\psi} \left[ \left( 1 + \frac{\psi\left( \zeta(z_i-\eta) + \eta \right)}{1+\psi} \right) \exp\left[ -\psi\left( \zeta(z_i-\eta) + \eta \right) \right] \right]^{R_i}.
By making some simplifications of Equation (12), we get Equation (13):
\pi^{*}(\psi, \zeta) \propto L(\psi, \zeta)\, \pi(\psi, \zeta) \propto \psi^{\mu_1 - 1 + 2m} \zeta^{\mu_2 - 1 + n_2} \exp\left[ -\left( \psi/\lambda_1 + \zeta/\lambda_2 \right) \right] \prod_{i=1}^{n_1} \frac{(1+z_i) \exp[-\psi z_i]}{1+\psi} \left[ \left( 1 + \frac{\psi z_i}{1+\psi} \right) \exp[-\psi z_i] \right]^{R_i} \prod_{i=1}^{n_2} \frac{\left( 1 + \zeta(z_i-\eta) + \eta \right) \exp\left[ -\psi\left( \zeta(z_i-\eta) + \eta \right) \right]}{1+\psi} \left[ \left( 1 + \frac{\psi\left( \zeta(z_i-\eta) + \eta \right)}{1+\psi} \right) \exp\left[ -\psi\left( \zeta(z_i-\eta) + \eta \right) \right] \right]^{R_i}.
The posterior density function in (14) for the two parameters ψ and ζ is formed by multiplying Equation (6) by Equation (11) and simplifying; its final form is as below:
\pi^{*}(\psi, \zeta) \propto L(\psi, \zeta)\, \pi(\psi, \zeta) \propto \psi^{\mu_1 - 1 + 2m} \zeta^{\mu_2 - 1 + n_2} \exp\left[ -\left( \psi/\lambda_1 + \zeta/\lambda_2 \right) \right] \prod_{i=1}^{n_1} \frac{(1+z_i) \exp[-\psi z_i]}{1+\psi} \left[ \left( 1 + \frac{\psi z_i}{1+\psi} \right) \exp[-\psi z_i] \right]^{R_i} \prod_{i=1}^{n_2} \frac{\left( 1 + \zeta(z_i-\eta) + \eta \right) \exp\left[ -\psi\left( \zeta(z_i-\eta) + \eta \right) \right]}{1+\psi} \left[ \left( 1 + \frac{\psi\left( \zeta(z_i-\eta) + \eta \right)}{1+\psi} \right) \exp\left[ -\psi\left( \zeta(z_i-\eta) + \eta \right) \right] \right]^{R_i}.
According to the SE loss function, the Bayes estimator of B = B(Θ), Θ = (ψ, ζ), is given by (for more details see Ahmadi et al. [23]):
\hat{B}_{SE} = \int_{\Theta} B\, \pi^{*}(\Theta)\, d\Theta,
where π * ( Θ ) is given by Equation (14).
In fact, we cannot evaluate the integral in Equation (15) analytically. Thus, we used the Markov chain Monte Carlo (MCMC) technique to approximate it, employing the Metropolis–Hastings algorithm as an example of an MCMC technique to find the estimates.

4.1. Using MCMC Method in Bayesian Estimation

In this section, we use the MCMC method because the posterior density function does not belong to a well-known distribution, and we then calculate the BEs of ψ and ζ. From Equation (14), the conditional posterior distribution functions of ψ and ζ are, respectively:
\pi^{*}(\psi \mid \zeta) \propto \psi^{\mu_1 - 1} e^{-\psi/\lambda_1} \prod_{i=1}^{n_1} \psi^{2} (1+z_i) \left[ 1 + \psi(1+z_i) \right]^{R_i} (1+\psi)^{-(1+R_i)} \exp\left[ -(1+R_i)\psi z_i \right] \times \prod_{i=n_1+1}^{m} \psi^{2} \zeta \left[ 1 + \zeta(z_i-\eta) + \eta \right] \left[ 1 + \psi + \psi\left( \zeta(z_i-\eta) + \eta \right) \right]^{R_i} (1+\psi)^{-(1+R_i)} \exp\left\{ -\psi\left[ (z_i-\eta)(\zeta + R_i \zeta) + \eta(1+R_i) \right] \right\},
\pi^{*}(\zeta \mid \psi) \propto \zeta^{\mu_2 - 1} e^{-\zeta/\lambda_2} \prod_{i=1}^{n_1} \psi^{2} (1+z_i) \left[ 1 + \psi(1+z_i) \right]^{R_i} (1+\psi)^{-(1+R_i)} \exp\left[ -(1+R_i)\psi z_i \right] \times \prod_{i=n_1+1}^{m} \psi^{2} \zeta \left[ 1 + \zeta(z_i-\eta) + \eta \right] \left[ 1 + \psi + \psi\left( \zeta(z_i-\eta) + \eta \right) \right]^{R_i} (1+\psi)^{-(1+R_i)} \exp\left\{ -\psi\left[ (z_i-\eta)(\zeta + R_i \zeta) + \eta(1+R_i) \right] \right\}.
We do not have a closed form for the conditional posterior distributions of ψ and ζ in (16) and (17), as they do not correspond to any known distribution. We therefore used the Metropolis–Hastings algorithm (for more information, see Upadhyay and Gupta [24]). The algorithm below explains the steps required to compute the Bayes estimators of B = B(ψ, ζ) under the SE loss function.

4.2. The Metropolis–Hastings Algorithm

The Metropolis–Hastings algorithm, sometimes called the random-walk algorithm, is a Markov chain Monte Carlo (MCMC) method for generating samples from a target distribution; a symmetric proposal distribution, commonly the normal distribution, is used to propose new values. The generated samples can be used to approximate the target distribution or to compute an integral (e.g., an expected value). We use Algorithm 1 because obtaining samples directly is difficult here, since our posterior does not come from a known distribution. The algorithm can be applied to one-dimensional and two-dimensional data. For more reading, see [24,25,26,27,28,29,30,31]. A minimal computational sketch is given after Algorithm 1.
Algorithm 1 MCMC algorithm
• First, set the starting values \psi^{(0)} = \hat{\psi}_{MLE} and \zeta^{(0)} = \hat{\zeta}_{MLE}.
• Start the iterations with i = 1.
• Generate the estimates \psi^{(i)} and \zeta^{(i)}, i = 1, \ldots, N, from Equations (16) and (17), respectively, using the Metropolis–Hastings algorithm.
• Repeat step (3) for N = 10,000 iterations, recording the estimates each time.
• To find the values of the parameters, evaluate the approximate mean of B as follows:
E(B) = \frac{1}{N-M} \sum_{i=M+1}^{N} B\left( \psi^{(i)}, \zeta^{(i)} \right),
where M = N / 5 is the burn-in period.
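The sketch below is a minimal random-walk Metropolis–Hastings illustration of Algorithm 1 in Python (the paper used Mathematica 11). It reuses `negloglik`, `z`, `R`, `eta`, `psi_hat`, and `zeta_hat` from the MLE sketch in Section 3, assumes the gamma priors of Equation (11) with the hyper-parameters quoted in the simulation section, and uses arbitrary proposal step sizes.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(psi, zeta, z, R, eta,
             mu=(14400.0, 12100.0), lam=(0.000083, 0.000090)):
    # Log-posterior up to a constant: gamma log-priors (shape mu_i, scale lam_i)
    # plus the SSPALT log-likelihood (negloglik is from the MLE sketch).
    if psi <= 0 or zeta <= 1:
        return -np.inf
    lp = (mu[0] - 1.0) * np.log(psi) - psi / lam[0]
    lp += (mu[1] - 1.0) * np.log(zeta) - zeta / lam[1]
    return lp - negloglik((psi, zeta), z, R, eta)

def metropolis_hastings(z, R, eta, start, n_iter=10_000, step=(0.02, 0.02)):
    psi, zeta = start
    lp = log_post(psi, zeta, z, R, eta)
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        for j in range(2):                                # update psi, then zeta
            prop = [psi, zeta]
            prop[j] += rng.normal(0.0, step[j])           # symmetric normal proposal
            lp_new = log_post(prop[0], prop[1], z, R, eta)
            if np.log(rng.random()) < lp_new - lp:        # accept/reject step
                psi, zeta, lp = prop[0], prop[1], lp_new
        chain[i] = (psi, zeta)
    return chain

chain = metropolis_hastings(z, R, eta, start=(psi_hat, zeta_hat))
burn = len(chain) // 5                                    # M = N/5 burn-in, as in Algorithm 1
psi_BE, zeta_BE = chain[burn:].mean(axis=0)               # Bayes estimates under SE loss
print("Bayes estimates (SE loss):", psi_BE, zeta_BE)
```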

5. Interval Estimation

In this part, we estimate the upper and lower bounds of the parameters ψ and ζ using three methods: the first is approximate CIs, the second is bootstrap CIs, and the third is credible CIs.

5.1. Finding Confidence Intervals for the Parameters

In statistics, a confidence interval (CI) is a type of estimate computed from the statistics of the observed data. This interval identifies a range for the unknown parameter and changes according to the confidence level chosen by the investigator; a confidence interval for an unknown parameter is based on the sampling distribution of a corresponding estimator.
In this part of the paper, we find the upper and lower bounds for Θ = (ζ, ψ) using the MLEs of the two parameters, where the asymptotic distribution of the MLEs is given by [32]:
\left( \hat{\psi} - \psi,\ \hat{\zeta} - \zeta \right) \rightarrow N\left( 0,\ I^{-1}(\psi, \zeta) \right),
where I^{-1}(\psi, \zeta) is the variance–covariance matrix of the unknown parameters (ψ, ζ), obtained by inverting the observed information matrix built from the following second derivatives of the log-likelihood:
\frac{\partial^{2} \ell}{\partial \psi^{2}} = -\frac{2m}{\psi^{2}} + \frac{m}{(1+\psi)^{2}} - \sum_{i=1}^{n_1} \frac{R_i z_i \left[ 2(1+\psi) + (1+2\psi) z_i \right]}{\left[ (1+\psi)^{2} + (1+\psi)\psi z_i \right]^{2}} - \sum_{i=1}^{n_2} \frac{R_i \left( \zeta(z_i-\eta) + \eta \right) \left[ 2(1+\psi) + (1+2\psi)\left( \zeta(z_i-\eta) + \eta \right) \right]}{\left[ (1+\psi)^{2} + (1+\psi)\psi\left( \zeta(z_i-\eta) + \eta \right) \right]^{2}}.
\frac{\partial^{2} \ell}{\partial \zeta^{2}} = -\frac{n_2}{\zeta^{2}} - \sum_{i=1}^{n_2} \frac{(z_i - \eta)^{2}}{\left[ 1 + \zeta(z_i-\eta) + \eta \right]^{2}} - \sum_{i=1}^{n_2} \frac{R_i \psi^{2} (z_i - \eta)^{2}}{\left[ 1 + \psi + \psi\left( \zeta(z_i-\eta) + \eta \right) \right]^{2}}.
\frac{\partial^{2} \ell}{\partial \psi\, \partial \zeta} = \sum_{i=1}^{n_2} \frac{R_i \left[ \left( 1 + \psi + \psi\left( \zeta(z_i-\eta) + \eta \right) \right)(z_i - \eta) - \psi (z_i - \eta)\left( 1 + \zeta(z_i-\eta) + \eta \right) \right]}{\left[ 1 + \psi + \psi\left( \zeta(z_i-\eta) + \eta \right) \right]^{2}} - \sum_{i=1}^{n_2} (z_i - \eta)(R_i+1).
The approximate 95% CIs for ψ and ζ are given by:
\left( \hat{\psi}_L,\ \hat{\psi}_U \right) = \hat{\psi} \pm 1.96 \sqrt{\sigma_{11}},
\left( \hat{\zeta}_L,\ \hat{\zeta}_U \right) = \hat{\zeta} \pm 1.96 \sqrt{\sigma_{22}},
where \sigma_{11} and \sigma_{22} are the diagonal elements of I^{-1}(\psi, \zeta).
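As an illustration (again reusing `negloglik`, `z`, `R`, `eta`, `psi_hat`, and `zeta_hat` from the MLE sketch), the observed information matrix can be approximated by a finite-difference Hessian of the negative log-likelihood at the MLEs, and the approximate 95% intervals then follow from the expressions above; the step size `h` is an arbitrary choice.

```python
import numpy as np

def observed_information(theta, z, R, eta, h=1e-4):
    # Central finite-difference Hessian of the negative log-likelihood at theta = (psi, zeta),
    # i.e. an approximation of the observed Fisher information matrix I(psi, zeta).
    theta = np.asarray(theta, dtype=float)
    f = lambda t: negloglik(t, z, R, eta)
    k = len(theta)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = np.zeros(k), np.zeros(k)
            ei[i], ej[j] = h, h
            H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                       - f(theta - ei + ej) + f(theta - ei - ej)) / (4.0 * h * h)
    return H

I_obs = observed_information([psi_hat, zeta_hat], z, R, eta)
cov = np.linalg.inv(I_obs)                 # I^{-1}: asymptotic variance-covariance matrix
se_psi, se_zeta = np.sqrt(np.diag(cov))    # sqrt(sigma_11), sqrt(sigma_22)
print("95% ACI for psi :", psi_hat - 1.96 * se_psi, psi_hat + 1.96 * se_psi)
print("95% ACI for zeta:", zeta_hat - 1.96 * se_zeta, zeta_hat + 1.96 * se_zeta)
```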

5.2. Bootstrap Confidence Intervals

Bootstrapping assigns measures of accuracy (bias, variance, confidence intervals, prediction error, etc.) to sample estimates. This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods. In this part of the paper, we find the bootstrap CIs for ψ and ζ (see Efron and Tibshirani [33] for more information). Algorithm 2 below specifies the steps for obtaining the lower and upper bounds of the bootstrap CIs; a short computational sketch follows it.
Algorithm 2 Bootstrap algorithm
• Use the MLEs of ψ and ζ, which are \hat{\psi}_{ML} and \hat{\zeta}_{ML}.
• Use \hat{\psi}_{ML} and \hat{\zeta}_{ML} to generate a random sample under the same censoring scheme; let us call it f^{*}.
• Use f^{*} to solve the equations numerically with the Mathematica 11 program and find the estimates corresponding to the bootstrap sample; let us call them \hat{\psi}^{*} and \hat{\zeta}^{*}.
• Repeat steps (1)–(3) several times, for example B = 1000 times, and sort the resulting estimates from smallest to largest to obtain the bootstrap vectors of estimates \{ \hat{\psi}^{*}[1], \hat{\psi}^{*}[2], \ldots, \hat{\psi}^{*}[1000] \} and \{ \hat{\zeta}^{*}[1], \hat{\zeta}^{*}[2], \ldots, \hat{\zeta}^{*}[1000] \}.
Thus, we can get the 100(1 − α)% bootstrap CIs for θ_i, given by:
\left( \hat{\theta}_{iL}^{*},\ \hat{\theta}_{iU}^{*} \right) = \left( \hat{\theta}_{i}^{*}[\alpha B/2],\ \hat{\theta}_{i}^{*}[(1-\alpha/2) B] \right), \quad i = 1, 2,
where \hat{\theta}_{1}^{*} \equiv \hat{\psi}^{*} and \hat{\theta}_{2}^{*} \equiv \hat{\zeta}^{*}.
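A minimal parametric-bootstrap sketch of Algorithm 2 is shown below. It assumes a helper `generate_sample(psi, zeta, n, m, R, eta, rng)` that produces a progressive Type II censored SSPALT sample (a sketch of such a generator is given after Algorithm 4 in Section 7) and reuses `negloglik`, `psi_hat`, and `zeta_hat` from the MLE sketch; the illustrated scheme (n = 20, m = 10, all R_i = 1) follows censoring Scheme 1 of the simulation section.

```python
import numpy as np
from scipy.optimize import minimize

def bootstrap_ci(psi_hat, zeta_hat, n, m, R, eta, B=1000, alpha=0.05, rng=None):
    # Algorithm 2: simulate B censored samples at the MLEs, re-estimate each time,
    # and take the percentile interval of the ordered bootstrap estimates.
    rng = rng or np.random.default_rng(7)
    boot = np.empty((B, 2))
    for b in range(B):
        z_star = generate_sample(psi_hat, zeta_hat, n, m, R, eta, rng)
        refit = minimize(negloglik, x0=[psi_hat, zeta_hat],
                         args=(z_star, R, eta), method="Nelder-Mead")
        boot[b] = refit.x
    boot.sort(axis=0)                                 # order each parameter's estimates
    lo = int(np.floor(alpha / 2.0 * B))               # index of the (alpha B/2)-th estimate
    hi = int(np.ceil((1.0 - alpha / 2.0) * B)) - 1    # index of the ((1 - alpha/2) B)-th estimate
    return boot[lo], boot[hi]

R = np.ones(10, dtype=int)   # Scheme 1 with n = 20, m = 10: one unit removed at each failure
lower, upper = bootstrap_ci(psi_hat, zeta_hat, n=20, m=10, R=R, eta=2.8)
print("bootstrap 95% CI for psi :", lower[0], upper[0])
print("bootstrap 95% CI for zeta:", lower[1], upper[1])
```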

5.3. Credible Confidence Intervals

In Bayesian statistics, a credible interval is an interval within which the parameter value falls with a certain probability; it is an interval in the domain of the posterior probability distribution. Credible intervals are analogous in meaning to frequentist confidence intervals, but Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value. In addition, Bayesian credible intervals use knowledge of the situation-specific prior distribution, while frequentist confidence intervals do not. Algorithm 3 below was used to get the credible CIs of ψ and ζ; a short computational sketch follows it.
Algorithm 3 Credible interval
• First, set the starting values \psi^{(0)} = \hat{\psi}_{MLE} and \zeta^{(0)} = \hat{\zeta}_{MLE}.
• Start the iterations with i = 1.
• Generate the estimates \psi^{(i)} and \zeta^{(i)}, i = 1, \ldots, N, from Equations (16) and (17), respectively, using the MCMC algorithm.
• Repeat step (3) for N = 10,000 iterations, recording the estimates each time.
• To find the values of the parameters, evaluate the approximate mean of B as follows:
    E(B) = \frac{1}{N-M} \sum_{i=M+1}^{N} B\left( \psi^{(i)}, \zeta^{(i)} \right),
    where M = N/5 is the burn-in period.
• Obtain the N estimates generated by the MCMC algorithm.
• Arrange the N estimates generated by the MCMC algorithm in ascending order as \{ \hat{\theta}_{iSE}[1], \hat{\theta}_{iSE}[2], \ldots, \hat{\theta}_{iSE}[N] \}, i = 1, 2, where \hat{\theta}_{1SE} \equiv \hat{\psi}_{SE} and \hat{\theta}_{2SE} \equiv \hat{\zeta}_{SE}.
Thus, we can get the 100(1 − α)% credible CI for θ_i, given by:
\left( \hat{\theta}_{iSE}[\alpha N/2],\ \hat{\theta}_{iSE}[(1-\alpha/2) N] \right), \quad i = 1, 2.
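For illustration, the credible bounds can be read off the ordered MCMC draws. The sketch below reuses `chain` and `burn` from the Metropolis–Hastings sketch in Section 4.2.

```python
import numpy as np

# Percentile credible intervals (Algorithm 3) from the post-burn-in MCMC draws.
draws = chain[burn:]                            # columns: psi, zeta
alpha = 0.05
lower = np.quantile(draws, alpha / 2.0, axis=0)
upper = np.quantile(draws, 1.0 - alpha / 2.0, axis=0)
print("95% credible CI for psi :", lower[0], upper[0])
print("95% credible CI for zeta:", lower[1], upper[1])
```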

6. Application on Real Data Set for Lindley Distribution

Here, we used a real data set as a real-life example of the step-stress model; we fitted this data set and then carried out statistical inference on it to assess the performance of the Lindley distribution.

6.1. Example

The data set used in the application was collected from Chapter 5 of Zhu [26]. The data represent an experiment on light bulbs with a working (use) stress of 2 V. A sample of n = 64 light bulbs was run at 2.25 V for a period of 96 h before the voltage was increased to 2.44 V; this means the time of the stress change was η = 96 h. In our experiment, 11 bulbs were removed while they were still working, before they had reached their failure point. Consequently, we observed only m = 53 failures, of which n_1 = 34 failed at the stress voltage S_1 = 2.25 V, and the progressive censoring scheme used is R_i, i = 1, 2, \ldots, 53:
R_i = \begin{cases} 0, & i \neq 35, \\ 11, & i = 35. \end{cases}
From practical experiments, we deduced that the best model to represent the relationship between acceleration and voltage is the inverse power model. Thus, the acceleration model can be expressed as:
\ln(\sigma_i) = a + b \ln(S_i), \quad b > 0, \ i = 0, 1, 2.
We used the modified Kolmogorov–Smirnov goodness-of-fit test for progressive Type II censored data, suggested by Pakyari and Balakrishnan [34], to assess the goodness of fit of the data in the experiment.

6.2. Comparison with Competitive Distribution

The importance of the Lindley distribution in this paper is that it fits the data better than its traditional competitor, the two-parameter Weibull distribution. We checked the p-values of both distributions when fitting the real data application and found that the Lindley distribution has a p-value greater than 0.05 at both stress levels of the experiment. The Weibull distribution, in contrast, fits the real data set poorly, because the p-value at the first level of acceleration is less than 0.05, although it fits the second level well. We can therefore deduce that the Lindley distribution fits both levels of the experiment better than the Weibull distribution, so we can use it instead of the Weibull distribution; this is an advantage of the Lindley distribution over the two-parameter Weibull distribution in fitting this kind of experiment. For more information about reliability engineering data, please see references [12,13,14].
The following tables contain the values of the test statistic and the p-values at each stress level for the Lindley distribution and the two-parameter Weibull distribution.

6.3. Important Results Obtained from the Real Data

The following points briefly illustrate the work done in this real data example, according to the results in Table 1, Table 2 and Table 3:
• In the experiment with real data, we used the modified K–S method to ensure that the data fit our distribution well;
• According to the p-values in Table 1, we deduced that our distribution fits the failure times of the experiment well. After that, we first estimated the parameters using this real data and then computed the CIs;
• Using the estimated parameters and the acceleration model estimates \hat{a} and \hat{b}, we deduced θ_0, the scale parameter under normal use. From Equation (27), the MLE of the scale parameter under normal conditions is \hat{\theta}_0 = e^{\hat{a} + \hat{b} \ln(S_0)} = 0.0000214702;
  • By estimating the parameter under normal use we can use it to find the following:
• The mean time to failure (MTTF) under normal conditions is
    MTTF = \frac{2 + \theta_0}{\theta_0 (1 + \theta_0)} = 93,151.3 h;
• The failure rate (hazard rate function) under normal conditions is:
    h(z) = \frac{\theta_0^{2} (1+z)}{1 + \theta_0 + \theta_0 z}, \quad z > 0;
• The reliability function under normal conditions is:
    R(z) = \left( 1 + \frac{\theta_0 z}{1 + \theta_0} \right) e^{-\theta_0 z}, \quad z > 0;
• By graphing the reliability function, we deduced the following: under normal use, the reliability function equals one at time zero (see Figure 1); under stress conditions, the reliability function decreases as time increases (see Figure 2); and as the stress increases once more, it approaches zero more quickly (see Figure 3). A small computational sketch of these quantities is given below.
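The use-level quantities in the bullets above can be reproduced directly. The sketch below (illustrative Python, not the paper's Mathematica code) plugs the acceleration-model estimates \hat{a} = −51.8084 and \hat{b} = 59.2364 reported in Table 3 into the inverse power law with the use voltage S_0 = 2 V, and evaluates the MTTF and the reliability function of the Lindley distribution.

```python
import numpy as np

a_hat, b_hat = -51.8084, 59.2364          # acceleration-model MLEs from Table 3
S0 = 2.0                                  # use-condition voltage (V)

theta0 = np.exp(a_hat + b_hat * np.log(S0))          # scale parameter under normal use
mttf = (2.0 + theta0) / (theta0 * (1.0 + theta0))    # Lindley mean time to failure

def reliability(z, theta):
    # R(z) = (1 + theta z / (1 + theta)) exp(-theta z)
    return (1.0 + theta * z / (1.0 + theta)) * np.exp(-theta * z)

print(f"theta0 = {theta0:.10f}")          # approximately 0.0000215
print(f"MTTF   = {mttf:,.1f} hours")      # roughly 93,150 hours, cf. the value reported above
print("R(10,000 h) =", reliability(10_000.0, theta0))
```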

7. Simulation Studies

This part of the paper contains the simulation of the data used to estimate the parameters by both the MLEs and the BEs (under the squared error loss (SEL) function). According to the mean squared error (MSE) results in the tables below, we can draw conclusions about the parameters. In this simulation we used different values of n, m, and R_i, i = 1, 2, \ldots, m. Table 4 and Table 5 give the results deduced from the simulation. The censoring scheme (CS) vector used in the simulation is defined below.
Scheme 1: \quad R_i = \begin{cases} 1, & i = 1, 2, \ldots, n-m, \\ 0, & \text{otherwise}. \end{cases}
To make the simulation complete, we used the following algorithm (Algorithm 4) to clarify the steps used in the whole simulation.

Important Results Obtained from the Simulated Data

The following points briefly illustrate the results observed from the simulation carried out with Algorithm 4, according to the results in Table 4 and Table 5:
• As the sample size increased, the MSEs of the BEs and MLEs of the parameters ψ and ζ decreased. In a few cases this did not occur, because of small disturbances in the data generation;
• The MSEs of the BEs of ψ and ζ are smaller than the MSEs of the MLEs, which is reasonable because the BEs incorporate prior information and are therefore more accurate than the MLEs;
• When the sample size increases, the lengths of the approximate, bootstrap, and credible CIs decrease, except in a few iterations, owing to the randomness in the data generation with the Mathematica package;
• According to length, the credible CIs of ψ and ζ are the shortest, and the credible CIs had the highest coverage probability;
• The lengths of the bootstrap CIs are shorter than those of the approximate CIs in most cases;
• Overall, we deduced that the credible CIs were the shortest and had the highest coverage probability among all the intervals.
Algorithm 4 The complete algorithm for all simulations in the paper
  •  Put fixed values for n, m, η , ψ , ζ .
• Generate a random sample of size m from the uniform (0, 1) distribution, (v_1, v_2, \ldots, v_m), using Mathematica 11.
  •  Assign values for the censored items R i , according to the CS above.
• Put E_i = v_i^{1 / \left( i + \sum_{d=m-i+1}^{m} R_d \right)}, i = 1, 2, \ldots, m.
• Then, put v_{i:m:n}^{*} = 1 - \prod_{d=m-i+1}^{m} E_d, i = 1, 2, \ldots, m, so we get the progressively censored uniform sample (v_{1:m:n}^{*}, v_{2:m:n}^{*}, \ldots, v_{m:m:n}^{*}).
• We first must find the number of units n_1 that failed before the stress change, such that v_{n_1:m:n}^{*} < F_1(\eta) \le v_{n_1+1:m:n}^{*}.
• Now, we can get the ordered observations c_{1:m:n}, c_{2:m:n}, \ldots, c_{n_1:m:n}, c_{n_1+1:m:n}, \ldots, c_{m:m:n}, which are computed from the inverse CDF of the Lindley distribution (a computational sketch of steps (2)–(7) is given after Algorithm 4).
  •  Use the generated data set to find the MLEs estimations by finding a solution to (9) and (10).
• Now, we turn to finding the BEs of the model parameters under the SE loss function, using Algorithm 1 with a total number of MCMC iterations N = 10,000, of which M = 1000 iterations are removed from the calculations as burn-in.
  •  With 95 % confidence we compute the upper and lower bounds for the approximate confidence bounds of the following parameters ψ , ζ .
  •  Find the values of the upper and lower limits of the 95 % Bootstrap CI and use the estimates generated for the MCMC algorithm to find the intervals of credible confidence, by using both of Algorithms 2 and 3 respectively.
• Repeat steps (2)–(11) 1000 times to make sure that the results are unaffected by the randomness of the data generation.
  •  Find the average value of MSEs for both ψ and ζ from the two estimation methods.
• Repeat steps (1)–(13), assigning various values to n, m, and R_i, i = 1, 2, \ldots, m.
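The sample-generation steps (2)–(7) of Algorithm 4 can be sketched as follows (illustrative Python; the paper used Mathematica 11). The Lindley quantile is computed with the lower branch of the Lambert W function, and the observed step-stress times are obtained by applying the TRV relation to the generated use-condition lifetimes; this is also the `generate_sample` helper assumed in the bootstrap sketch of Section 5.2.

```python
import numpy as np
from scipy.special import lambertw

def lindley_quantile(u, psi):
    # Inverse Lindley CDF via the lower (k = -1) branch of the Lambert W function.
    arg = (1.0 + psi) * (u - 1.0) * np.exp(-(1.0 + psi))
    return -1.0 - 1.0 / psi - np.real(lambertw(arg, k=-1)) / psi

def generate_sample(psi, zeta, n, m, R, eta, rng):
    # Steps (2)-(7) of Algorithm 4: one progressive Type II censored SSPALT sample.
    R = np.asarray(R)
    assert m + R.sum() == n                                 # all n units are accounted for
    v = rng.random(m)                                       # step (2): m uniforms
    exps = np.arange(1, m + 1) + np.cumsum(R[::-1])         # step (4): i + R_m + ... + R_{m-i+1}
    E = v ** (1.0 / exps)
    u_star = 1.0 - np.cumprod(E[::-1])                      # step (5): censored uniform order stats
    t = lindley_quantile(u_star, psi)                       # step (7): invert the Lindley CDF
    z = np.where(t <= eta, t, eta + (t - eta) / zeta)       # observed time via the TRV relation
    return np.sort(z)

rng = np.random.default_rng(11)
n, m, eta = 20, 10, 2.8
R = np.ones(m, dtype=int)      # Scheme 1 with n = 20, m = 10: R_i = 1 for i = 1,...,n-m
z = generate_sample(psi=1.2, zeta=1.1, n=n, m=m, R=R, eta=eta, rng=rng)
n1 = int(np.sum(z <= eta))     # step (6): number of failures before the stress change
print(n1, np.round(z, 3))
```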

8. Conclusions on Real Data and Simulation Results

In this paper, we made statistical inference on step-stress accelerated life tests under progressive Type II censoring when the lifetimes follow the Lindley distribution. First, we used simulation studies to estimate the model parameters by the classical method (MLEs) and by the Metropolis–Hastings algorithm to obtain the BEs. We conclude that the Bayesian method was better than the classical method because it had smaller MSEs. CIs, including approximate CIs, bootstrap CIs, and credible CIs, were estimated for the parameters of the model, and we conclude that the credible interval was the best according to interval length, and it also had the highest coverage probability. All the calculations were carried out for different sample sizes using censoring Scheme 1. In Section 6, we introduced a real data application to check whether the data fit the Lindley distribution. This application consisted of two levels of acceleration, the first being complete and the second being censored and exposed to higher stress than the first. We fitted the data using the Lindley distribution and the two-parameter Weibull distribution. Based on the p-values, the data fit the Lindley distribution well at both levels, but fit the Weibull distribution poorly at the first level and well at the second level; thus, the Lindley distribution is a good candidate to model this application, while the Weibull distribution cannot be used for both levels. We then carried out statistical inference on this application and estimated the parameters of the distribution by the two methods above, using non-informative priors for the BEs, and we also estimated the three CIs for the model parameters.

Author Contributions

Formal analysis, F.H.R. and S.A.M.M.; Funding acquisition, E.H.H. and M.S.M.; Investigation, S.A.M.M. and M.S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Taif University Researchers Supporting Project number (TURSP-2020/160), Taif University, Taif, Saudi Arabia.

Acknowledgments

The authors are thankful for Taif University Researchers Supporting Project number TURSP-2020/160, Taif University, Taif, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PDF: Probability density function
CDF: Cumulative distribution function
SSPALT: Step-stress partially accelerated life test
CIs: Confidence intervals
BEs: Bayes estimates
MLEs: Maximum likelihood estimates
TRV: Tampered random variable
KS: Kolmogorov–Smirnov
ALT: Accelerated life test
IPL: Inverse power law
SE: Square error
MCMC: Markov chain Monte Carlo
MTTF: Mean time to failure
SEL: Square error loss
CS: Censoring scheme

References

1. Nelson, W. Accelerated Testing: Statistical Models, Test Plans and Data Analysis; Wiley: New York, NY, USA, 1990.
2. El-Din, M.M.M.; Amein, M.M.; El-Attar, H.E.; Hafez, E.H. Symmetric and Asymmetric Bayesian Estimation for Lindley Distribution Based on Progressive First Failure Censored Data. Math. Sci. Lett. 2017, 6, 255–261.
3. El-Din, M.M.M.; Amein, M.M.; Abd El-Raheem, A.M.; Hafez, E.H.; Riad, F.H. Bayesian inference on progressive-stress accelerated life testing for the exponentiated Weibull distribution under progressive Type II censoring. J. Stat. Appl. Probab. Lett. 2020, 7, 109–126.
4. El-Din, M.M.M.; Abu-Youssef, S.E.; Ali, N.S.A.; Abd El-Raheem, A.M. Estimation in step-stress accelerated life tests for Weibull distribution with progressive first-failure censoring. J. Stat. Appl. Probab. 2015, 3, 403–411.
5. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring: Applications to Reliability and Quality; Birkhäuser: New York, NY, USA, 2014.
6. Fathy, H.; Riad, E.; Hafez, H. Point and Interval Estimation for Frechet Distribution Based on Progressive First Failure Censored Data. J. Stat. Appl. Probab. 2020, 9, 181–191.
7. Abd El-Raheem, A.M.; Abu-Moussa, M.H.; Hafez, M.M. Accelerated life tests under Pareto-IV lifetime distribution: Real data application and simulation study. Mathematics 2020, 8, 1786.
8. El-Din, M.M.M.; Abu-Youssef, S.E.; Ali, N.S.A.; Abd El-Raheem, A.M. Optimal plans of constant-stress accelerated life tests for the Lindley distribution. J. Test. Eval. 2017, 45, 1463–1475.
9. El-Din, M.M.M.; Amein, M.M.; El-Raheem, A.M.A.; El-Attar, H.E.; Hafez, E.H. Estimation of the Coefficient of Variation for Lindley Distribution based on Progressive First Failure Censored Data. J. Stat. Appl. Probab. 2019, 8, 83–90.
10. El-Din, M.M.M.; Amein, M.M.; El-Attar, H.E.; Hafez, E.H. Estimation in Step-Stress Accelerated Life Testing for Lindley Distribution with Progressive First-Failure Censoring. J. Stat. Appl. Probab. 2016, 5, 393–398.
11. El-Din, M.M.M.; Abu-Youssef, S.E.; Ali, N.S.A.; El-Raheem, A.M.A. Estimation in constant-stress accelerated life tests for extension of the exponential distribution under progressive censoring. Metron 2016, 74, 253–273.
12. Ling, M.H.; Hu, X.W. Optimal design of simple step-stress accelerated life tests for one-shot devices under Weibull distributions. Reliab. Eng. Syst. Saf. 2020, 193, 1–20.
13. Cheng, Y.; Elsayed, E.A. Reliability modeling of mixtures of one-shot units under thermal cyclic stresses. Reliab. Eng. Syst. Saf. 2017, 167, 58–66.
14. Wang, J. Data Analysis of Step-Stress Accelerated Life Test with Random Group Effects under Weibull Distribution. Math. Probl. Eng. 2020, 2020, 4898123.
15. Gómez-Déniz, E.; Sordo, M.A.; Calderín-Ojeda, E. The log-Lindley distribution as an alternative to the beta regression model with applications in insurance. Insur. Math. Econ. 2014, 54, 49–57.
16. Ghitany, M.E.; Al-Mutairi, D.K.; Aboukhamseen, S.M. Estimation of the Reliability of a Stress-Strength System from Power Lindley Distributions. Commun. Stat. Simul. Comput. 2015, 44, 118–136.
17. Lindley, D.V. Fiducial distributions and Bayes' theorem. J. R. Stat. Soc. 1958, 20, 102–107.
18. Bakouch, H.S.; Al-Zahrani, B.M.; Al-Shomrani, A.A.; Marchi, V.A.; Louzada, F. An extended Lindley distribution. J. Korean Stat. Soc. 2012, 41, 75–85.
19. Ghitany, M.E.; Atieh, B.; Nadarajah, S. Lindley distribution and its application. Math. Comput. Simul. 2008, 78, 493–506.
20. DeGroot, M.H.; Goel, P.K. Bayesian and optimal design in partially accelerated life testing. Nav. Res. Logist. 1979, 16, 223–235.
21. Nassar, M.M.; Eissa, F.H. Bayesian estimation for the generalized Weibull model. Commun. Stat. Theor. 2004, 33, 2343–2362.
22. Singh, S.K.; Singh, U.; Sharma, V.K. Bayesian estimation and prediction for the generalized Lindley distribution under asymmetric loss function. Hacet. J. Math. Stat. 2014, 43, 661–678.
23. Ahmadi, J.; Jozani, M.J.; Marchand, E.; Parsian, A. Bayes estimation based on k-record data from a general class of distributions under balanced type loss functions. J. Stat. Plan. Inference 2009, 139, 1180–1189.
24. Upadhyay, S.K.; Gupta, A. A Bayes analysis of modified Weibull distribution via Markov chain Monte Carlo simulation. J. Stat. Comput. Simul. 2010, 80, 241–254.
25. Balakrishnan, N.; Sandhu, R.A. A simple simulation algorithm for generating progressively type-II censored samples. Am. Stat. 1995, 49, 229–230.
26. Zhu, Y. Optimal Design and Equivalency of Accelerated Life Testing Plans. Ph.D. Thesis, The State University of New Jersey, New Brunswick, NJ, USA, 2010.
27. El-Din, M.M.M.; Abu-Youssef, S.E.; Ali, N.S.A.; El-Raheem, A.M.A. Parametric inference on step-stress accelerated life testing for the extension of exponential distribution under progressive Type II censoring. Commun. Stat. Appl. Methods 2016, 23, 269–285.
28. El-Din, M.M.M.; Abu-Youssef, S.E.; Ali, N.S.A.; El-Raheem, A.M.A. Classical and Bayesian inference on progressive-stress accelerated life testing for the extension of the exponential distribution under progressive Type II censoring. Qual. Reliab. Eng. Int. 2017, 33, 2483–2496.
29. El-Din, M.M.M.; Abd El-Raheem, A.M.; Abd El-Azeem, S.O. On Step-Stress Accelerated Life Testing for Power Generalized Weibull Distribution Under Progressive Type-II Censoring. Ann. Data Sci. 2020.
30. Gepreel, K.A.; Mahdy, A.M.S.; Mohamed, M.S.; Al-Amiri, A. Reduced differential transform method for solving nonlinear biomathematics models. Comput. Mater. Contin. 2019, 61, 979–994.
31. Mahdy, A.M.S.; Mohamed, M.S.; Gepreel, K.A.; AL-Amiri, A.; Higazy, M. Dynamical characteristics and signal flow graph of nonlinear fractional smoking mathematical model. Chaos Solitons Fractals 2020, 141, 110308.
32. Miller, R. Survival Analysis; Wiley: New York, NY, USA, 1981.
33. Efron, B.; Tibshirani, R.J. An Introduction to the Bootstrap; Chapman & Hall: London, UK, 1993.
34. Pakyari, R.; Balakrishnan, N. A general purpose approximate goodness-of-fit test for progressively Type II censored data. IEEE Trans. Reliab. 2012, 61, 238–243.
Figure 1. Reliability function under normal-use conditions. At time zero the reliability equals one, and it approaches zero as time increases, which is expected for any item under normal use.
Figure 2. Reliability function under the stress level S = 2.25 V. The reliability decreases rapidly as time increases, which is expected under stress, since the items are subjected to a voltage higher than the use condition, leading to early failures.
Figure 3. Reliability function under the stress level S = 2.44 V. The reliability decreases even more rapidly as time and stress increase, which is expected, since any item subjected to a voltage higher than the use condition fails earlier.
Table 1. Values of the test statistic and the corresponding p-values of the modified KS test at each stress level for the Lindley distribution, showing that the distribution fits the real data set well.

Value of Voltage Stress    2.25 V       2.44 V
Statistic                  0.323529     0.945
p-value                    0.0563701    0.789
Table 2. Values of the test statistic and the corresponding p-values at each stress level for the two-parameter Weibull distribution. The Weibull distribution fits the real data set poorly, because the p-value at the first level of acceleration is less than 0.05, although it fits the second level well. The Lindley distribution fits both levels of the experiment better, and one distribution was used to fit the whole experiment with both levels of acceleration; thus, the Lindley distribution can be used instead of the Weibull distribution to model the whole experiment.

Value of Voltage Stress    2.25 V             2.44 V
Statistic                  0.794118           0.6646
p-value                    5.70644 × 10^-20   0.912
Table 3. MLEs, BEs (BSE) using non-informative priors, and lengths of the confidence intervals (CIs) of the parameters ψ and ζ, where \hat{\theta}_i = e^{\hat{a} + \hat{b} \ln(S_i)}, i = 1, 2. For this data set, the Bayesian analysis is carried out with non-informative priors.

MLEs                                                     BSE                             ACI         Credible Interval   Bootstrap CI
\hat{a}_{ML} = -51.8084   \hat{\psi}_{ML} = 0.0230107    \hat{\psi}_{BS} = 0.633091      0.0103733   0.73061             0.0276981
\hat{b}_{ML} = 59.2364    \hat{\zeta}_{ML} = 2.80211     \hat{\zeta}_{BS} = 4.29412      2.79169     2.11759             0.7885519
Table 4. MSEs of the MLEs and BEs (under the squared error loss (SEL) function) of ψ and ζ, with true values ψ = 1.2 and ζ = 1.1, prior parameters μ1 = 14,400, λ1 = 0.000083, μ2 = 12,100, and λ2 = 0.000090, and time of changing stress η = 2.8; the ACI, Credible, and Bootstrap columns give the corresponding 95% CI lengths.

n     m     C.S   Parameter   ML           SEL                ACI         Credible   Bootstrap
20    10    1     ψ           0.0285412    4.2 × 10^-6        1.03137     0.03463    0.31198
                  ζ           0.0188279    0.003221           3.48522     0.03546    0.41693
50    30    1     ψ           0.0410944    6.31019 × 10^-6    0.548811    0.03802    0.900282
                  ζ           0.437962     0.00227621         2.4142      0.03567    1.17764
45    30    1     ψ           0.0199778    8.26362 × 10^-6    0.602594    0.03751    0.352305
                  ζ           0.0172304    1.41265 × 10^-6    2.72445     0.03605    0.409257
65    45    1     ψ           0.0200046    8.26362 × 10^-6    0.6025941   0.03751    0.352305
                  ζ           0.0146925    3.27477 × 10^-6    2.53374     0.03745    0.410015
100   60    1     ψ           0.0500046    4.26362 × 10^-6    0.3025941   0.07001    0.302305
                  ζ           0.0126925    2.27477 × 10^-6    1.7924      0.03045    0.39105
120   65    1     ψ           0.0127985    0.0000159836       0.402973    0.03755    0.350897
                  ζ           0.0204538    4.19817 × 10^-6    2.44254     0.0377     0.415675
120   80    1     ψ           0.0204759    0.0000440016       0.368116    0.037755   0.297268
                  ζ           0.0150823    4.48929 × 10^-6    1.61679     0.0375     0.407272
165   120   1     ψ           0.0206759    0.0000550016       0.291216    0.03885    0.24556
                  ζ           0.03223343   6.39076 × 10^-6    1.2992      0.03603    0.40672
Table 5. Coverage probabilities of the 95% approximate, credible, and bootstrap CIs for ψ and ζ.

n     m     C.S   Parameter   ACI    Credible Interval   Bootstrap CI
20    10    1     ψ           0.60   0.87                0.65
                  ζ           0.88   0.871               0.88
50    30    1     ψ           0.65   0.98                0.7
                  ζ           1      0.94                0.78
45    30    1     ψ           0.75   0.92                0.88
                  ζ           0.9    0.91                0.92
65    45    1     ψ           0.85   0.94                0.96
                  ζ           0.93   0.93                0.95
120   65    1     ψ           0.86   0.95                0.96
                  ζ           0.94   0.95                0.96
120   80    1     ψ           0.90   0.97                0.98
                  ζ           0.95   0.97                0.98
120   80    1     ψ           0.90   0.97                0.98
                  ζ           0.95   0.97                0.98
165   120   1     ψ           0.92   0.98                0.99
                  ζ           0.96   0.97                0.98
