Article

Bayesian Estimation Using Product of Spacing for Modified Kies Exponential Progressively Censored Data

1 Department of Statistics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Statistics, Faculty of Commerce, Zagazig University, Zagazig 44519, Egypt
* Author to whom correspondence should be addressed.
Axioms 2023, 12(10), 917; https://doi.org/10.3390/axioms12100917
Submission received: 8 August 2023 / Revised: 2 September 2023 / Accepted: 22 September 2023 / Published: 27 September 2023
(This article belongs to the Section Mathematical Analysis)

Abstract: In life testing and reliability studies, most researchers use the maximum likelihood estimation method to estimate unknown parameters, even though the maximum product of spacing method has been shown to possess properties as good as, and sometimes better than, those of maximum likelihood estimation. In this study, we estimate the unknown parameters of the modified Kies exponential distribution, along with its reliability and hazard rate functions, under a progressive type-II censoring scheme. The maximum likelihood and maximum product of spacing methods are considered in order to find the point estimates and approximate confidence intervals of the various parameters. Moreover, Bayesian estimates of the unknown parameters based on both the likelihood function and the product of spacing function are obtained using the squared error loss function with independent gamma priors. Because the joint posterior distributions take complicated forms, Lindley’s approximation and the Markov chain Monte Carlo technique are used to obtain the Bayesian estimates and highest posterior density credible intervals. Monte Carlo simulations are performed in order to evaluate the performance of the proposed estimation methods. Two real datasets are studied to demonstrate the efficacy of the proposed methodologies and to illustrate how simply they can be applied in practice.

1. Introduction

Lifetime distributions are useful statistical tools for modeling many properties of lifetime data. Recently, many lifetime distributions have been proposed to add more flexibility in analysing lifetime data. One of the most important of these is the modified Kies exponential (MKiEx) distribution, proposed by Al-Babtain et al. [1]. Let X be a random variable that follows the MKiEx distribution; then, according to Al-Babtain et al. [1], the cumulative distribution function (CDF) of X is
F(x; a, \lambda) = 1 - e^{-(e^{\lambda x} - 1)^{a}}, \quad x > 0, \; a, \lambda > 0. \qquad (1)
The corresponding probability density function (PDF) is provided by
f(x; a, \lambda) = a\lambda e^{a\lambda x} \left(1 - e^{-\lambda x}\right)^{a-1} e^{-(e^{\lambda x} - 1)^{a}}. \qquad (2)
The reliability function (RF) and hazard rate function (HRF) at mission time t > 0 are respectively provided by
R(t) = 1 - F(t; a, \lambda) = e^{-(e^{\lambda t} - 1)^{a}} \qquad (3)
and
h(t) = a\lambda e^{a\lambda t} \left(1 - e^{-\lambda t}\right)^{a-1}. \qquad (4)
The MKiEx distribution has two parameters, the shape parameter a and the scale parameter λ. Al-Babtain et al. [1] showed that the density function of the MKiEx distribution can take different shapes based on the shape parameter a, and that its hazard rate function can be increasing or bathtub-shaped. Using real datasets, Al-Babtain et al. [1] showed that the MKiEx distribution performs better than a number of other distributions, such as the exponential and gamma distributions. Several studies have considered the estimation of the MKiEx distribution; for example, see Aljohani et al. [2], Nassar and Alam [3] and El-Raheem et al. [4].
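For readers who wish to experiment with the distribution, the four functions in (1)–(4) can be sketched in a few lines of Python (the function names are ours, not from the paper):

```python
import math

# Minimal sketch of the MKiEx distribution functions (1)-(4).

def mkiex_cdf(x, a, lam):
    """CDF (1): F(x) = 1 - exp(-(e^{lam*x} - 1)^a), x > 0."""
    return 1.0 - math.exp(-(math.exp(lam * x) - 1.0) ** a)

def mkiex_pdf(x, a, lam):
    """PDF (2): f(x) = a*lam*e^{a*lam*x} (1 - e^{-lam*x})^{a-1} e^{-(e^{lam*x}-1)^a}."""
    return (a * lam * math.exp(a * lam * x)
            * (1.0 - math.exp(-lam * x)) ** (a - 1.0)
            * math.exp(-(math.exp(lam * x) - 1.0) ** a))

def mkiex_rf(t, a, lam):
    """Reliability (3): R(t) = 1 - F(t)."""
    return math.exp(-(math.exp(lam * t) - 1.0) ** a)

def mkiex_hrf(t, a, lam):
    """Hazard (4): h(t) = f(t)/R(t) = a*lam*e^{a*lam*t}(1 - e^{-lam*t})^{a-1}."""
    return a * lam * math.exp(a * lam * t) * (1.0 - math.exp(-lam * t)) ** (a - 1.0)
```

The identity h(t) = f(t)/R(t) provides a quick numerical sanity check of the four expressions.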
In practice, testing the whole sample takes time. Therefore, it is recommended to stop the test before all items fail in order to save time and cost; in this case, the test produces censored data. Under such a procedure, the lifetimes of the first m (m < n) failed items are observed and the remaining (n − m) items are unobserved, i.e., censored. The two main types of conventional censoring schemes are type-I and type-II censoring. In the type-I censoring scheme, the n items are placed under test and the test is stopped at a prespecified time T. The sample obtained from the test is called a time-censored sample, and the number of observed failures in this case is random. On the other hand, in the type-II censoring scheme the n items are placed under test and the test is stopped at a preset number of failures m < n; in this case, the sample obtained from the test is called a failure-censored sample, and the termination time of the test is random. The conventional type-I and type-II censoring schemes are not flexible enough, as they allow items to be removed only when the experiment is terminated. Because of this, progressive censoring schemes, which contain a more general censoring plan, have been introduced. In progressive type-II censoring, R_1 items are randomly removed from the n − 1 remaining items when the first failure occurs. Likewise, R_2 units are randomly removed from the n − 2 − R_1 remaining items at the time of the second failure. This procedure continues until the m-th unit fails. It is important to mention here that m and (R_1, …, R_m) are fixed in advance. In the case of progressive type-II censoring, the likelihood function (LF) can be expressed as
L(\theta) = C \prod_{i=1}^{m} f(x_{(i)}; \theta) \left[ 1 - F(x_{(i)}) \right]^{R_i}, \qquad (5)
where C is a constant and θ is the vector of unknown parameters. Note that x_{(1)}, …, x_{(m)} are the realizations of the order statistics X_{(1)}, …, X_{(m)}. Many studies have investigated the estimation of the unknown parameters of various lifetime distributions under progressive type-II censoring schemes, including Ng et al. [5], Singh et al. [6], Alshenawy et al. [7], and Lin et al. [8]. For more details about censoring schemes, see Balakrishnan and Cramer [9] and Lawless [10].
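As an illustration of (5), a generic log-likelihood evaluator for a progressively type-II censored sample can be sketched as follows (a minimal Python sketch with the constant C dropped; the function and argument names are ours):

```python
import math

# Log-likelihood of a progressively type-II censored sample, per Eq. (5):
# sum of log f(x_(i)) plus R_i * log(1 - F(x_(i))) over the m failures.
# `pdf` and `cdf` are caller-supplied; `theta` is the parameter tuple.

def progressive_loglik(x, R, pdf, cdf, theta):
    ll = 0.0
    for xi, ri in zip(x, R):
        ll += math.log(pdf(xi, *theta)) + ri * math.log(1.0 - cdf(xi, *theta))
    return ll
```

Plugging the MKiEx pdf (2) and CDF (1) into this evaluator reproduces the log of the MKiEx likelihood up to the constant term.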
In certain distributions, especially those with three or more parameters, the ML estimation method can break down, and the resulting estimators may become inconsistent. The maximum product of spacing (MPS) method was proposed by Cheng and Amin [11] as an alternative to the ML method. The spacings are the differences between the CDF values of two consecutive ordered datapoints; these CDF values are uniformly distributed on the interval [0, 1]. The MPS method chooses the parameter value that makes the spacings as uniform as possible by maximizing their geometric mean. According to Cheng and Amin [11], the MPS and ML estimation approaches are asymptotically identical and share the same characteristics of asymptotic sufficiency, consistency, and efficiency. Ranneby [12] proved that the MPS estimator has the same invariance property as the ML method and showed that in certain situations, especially with small sample sizes, the MPS method performs better than the ML estimation method. According to Ng et al. [5], the product of the spacing function (PSF) under progressive type-II censoring becomes
S(\theta) = C \prod_{i=1}^{m+1} \delta_i(\theta) \prod_{i=1}^{m} \left[ 1 - F(x_{(i)}) \right]^{R_i}, \qquad (6)
where C is a constant and
\delta_i(\theta) = F(x_{(i)}; \theta) - F(x_{(i-1)}; \theta), \quad i = 1, 2, \ldots, m + 1,
where F(x_{(0)}; θ) = 0 and F(x_{(m+1)}; θ) = 1. Many authors have used the MPS method to obtain estimates of the unknown parameters of lifetime models. Rahman and Pearson [13] used the ML and MPS methods to obtain estimates for the two-parameter exponential distribution. Singh et al. [14] obtained point and asymptotic interval estimates of the parameters of the generalized inverted exponential distribution using the MPS, ML, and least squares methods. For more, see the works of Singh et al. [6], Rahman et al. [15], Aruna [16], Teimouri and Nadarajah [17], Almetwally et al. [18], and Tashkandy et al. [19].
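The log of the PSF in (6), with the boundary conventions F(x_{(0)}; θ) = 0 and F(x_{(m+1)}; θ) = 1, can be sketched in the same generic style (a hedged illustration; names are ours):

```python
import math

# Log product-of-spacings for progressive type-II data, per Eq. (6):
# spacings delta_i = F(x_(i)) - F(x_(i-1)) with F(x_(0)) = 0 and
# F(x_(m+1)) = 1, plus the censoring term sum R_i * log(1 - F(x_(i))).
# The constant C is dropped.

def progressive_log_psf(x, R, cdf, theta):
    F = [0.0] + [cdf(xi, *theta) for xi in x] + [1.0]
    log_s = sum(math.log(F[i] - F[i - 1]) for i in range(1, len(F)))
    log_s += sum(ri * math.log(1.0 - cdf(xi, *theta)) for xi, ri in zip(x, R))
    return log_s
```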
We are motivated to conduct the present work by (1) the flexibility of the MKiEx distribution and its ability to model different types of data; (2) the efficiency of the progressive type-II censoring scheme in terminating the experiment as compared to conventional censoring schemes; (3) the lack of studies on estimation of the MKiEx distribution’s parameters using the MPS method and Bayesian estimation under the PSF and the progressive type-II censoring scheme; and (4) the need for a comparative study; here, we compare six estimation methods to determine the best-performing one. Furthermore, Lindley’s approximation for Bayesian estimation under the PSF has not yet been studied. Thus, our main objective in this study is to discuss the estimation of the unknown parameters of the MKiEx distribution under progressive type-II data. The RF and HRF are estimated as well. To achieve this, the ML, MPS, and Bayesian estimation methods are considered under both the LF and the PSF. Based on the asymptotic properties of the ML estimates (MLEs) and the MPS estimates (MPSEs), the approximate confidence intervals (ACIs) of the unknown parameters, the RF, and the HRF are obtained. The Bayesian estimates are calculated through the Markov chain Monte Carlo (MCMC) procedure and Lindley’s approximation using the squared error loss function (SELF). The highest posterior density (HPD) credible intervals of the unknown parameters are investigated, along with those of the RF and HRF. A simulation study and two applications are provided to compare the various techniques and show the applicability of the proposed methods.
This article’s remaining sections are organized as follows. In Section 2, the ML method is applied to find the point and interval estimates of the unknown parameters. Section 3 discusses the point and interval estimations of the parameters using the MPS method. In Section 4, we conduct Bayesian estimation based on LF. Bayesian estimation based on PSF is considered in Section 5. The performance of the suggested methods is investigated through a simulation study in Section 6. The analysis of two real datasets is presented in Section 7. Finally, our conclusions are provided in Section 8.

2. Maximum Likelihood Estimation

Let x_{(1)}, x_{(2)}, …, x_{(m)} be a progressively type-II censored sample taken from a population that follows the MKiEx distribution, with the CDF and PDF provided by (1) and (2), respectively, and with a progressive censoring plan (R_1, R_2, …, R_m). For simplicity, let x_i = x_{(i)} and A_i = e^{λ x_i} − 1. From (1), (2), and (5), the LF and its logarithm without the constant term become
L(a, \lambda) = (a\lambda)^m \exp\left\{ a\lambda \sum_{i=1}^{m} x_i - \sum_{i=1}^{m} (R_i + 1) A_i^{a} \right\} \prod_{i=1}^{m} \left(1 - e^{-\lambda x_i}\right)^{a-1} \qquad (7)
and
l(a, \lambda) = m \log(a\lambda) + a\lambda \sum_{i=1}^{m} x_i - \sum_{i=1}^{m} (R_i + 1) A_i^{a} + (a - 1) \sum_{i=1}^{m} \log\left(1 - e^{-\lambda x_i}\right). \qquad (8)
The MLEs of the two parameters a and λ of the MKiEx distribution, denoted by a ^ and λ ^ , are acquired by solving the following two nonlinear equations simultaneously:
\frac{\partial l(a, \lambda)}{\partial a} = \frac{m}{a} + \lambda \sum_{i=1}^{m} x_i + \sum_{i=1}^{m} \log\left(1 - e^{-\lambda x_i}\right) - \sum_{i=1}^{m} (R_i + 1) A_i^{a} \log(A_i) = 0 \qquad (9)
and
\frac{\partial l(a, \lambda)}{\partial \lambda} = \frac{m}{\lambda} + a \sum_{i=1}^{m} x_i + (a - 1) \sum_{i=1}^{m} x_i A_i^{-1} - a \sum_{i=1}^{m} (R_i + 1) x_i e^{\lambda x_i} A_i^{a-1} = 0. \qquad (10)
It is hard to solve the previous equations in explicit form. Therefore, we use the Newton–Raphson method to solve these equations numerically. The MLEs of RF and HRF can be found using the invariance property of the MLEs by replacing a and λ in (3) and (4) by a ^ and λ ^ , respectively. Thus,
\hat{R}(t) = e^{-(e^{\hat{\lambda} t} - 1)^{\hat{a}}} \quad \text{and} \quad \hat{h}(t) = \hat{a}\hat{\lambda} e^{\hat{a}\hat{\lambda} t} \left(1 - e^{-\hat{\lambda} t}\right)^{\hat{a} - 1}.
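A minimal sketch of the Newton–Raphson iteration described above is given below in Python. It uses finite-difference derivatives of the log-LF (8) together with simple step-halving for stability, whereas the paper works with the analytic derivatives of Appendix A; the censored sample in the usage test is a small hypothetical one, not from the paper.

```python
import math

# Damped Newton-Raphson maximization of the MKiEx log-likelihood (8).

def loglik(a, lam, x, R):
    A = [math.exp(lam * xi) - 1.0 for xi in x]
    return (len(x) * math.log(a * lam) + a * lam * sum(x)
            - sum((r + 1) * Ai ** a for Ai, r in zip(A, R))
            + (a - 1.0) * sum(math.log(1.0 - math.exp(-lam * xi)) for xi in x))

def newton_raphson_mle(x, R, a0=1.0, lam0=1.0, tol=1e-8, max_iter=100):
    def f(a_, lam_):
        try:
            return loglik(a_, lam_, x, R)
        except (ValueError, OverflowError):   # outside the domain
            return float("-inf")

    a, lam = a0, lam0
    h = 1e-5
    for _ in range(max_iter):
        # central-difference gradient and Hessian of l(a, lam)
        g0 = (f(a + h, lam) - f(a - h, lam)) / (2 * h)
        g1 = (f(a, lam + h) - f(a, lam - h)) / (2 * h)
        H00 = (f(a + h, lam) - 2 * f(a, lam) + f(a - h, lam)) / h ** 2
        H11 = (f(a, lam + h) - 2 * f(a, lam) + f(a, lam - h)) / h ** 2
        H01 = (f(a + h, lam + h) - f(a + h, lam - h)
               - f(a - h, lam + h) + f(a - h, lam - h)) / (4 * h ** 2)
        det = H00 * H11 - H01 * H01
        if det == 0:
            break
        # Newton direction: -H^{-1} g
        da = (H11 * g0 - H01 * g1) / det
        dl = (H00 * g1 - H01 * g0) / det
        # damping: halve the step until it stays in the domain and ascends
        t = 1.0
        while a - t * da <= 0 or lam - t * dl <= 0 or f(a - t * da, lam - t * dl) <= f(a, lam):
            t *= 0.5
            if t < 1e-10:
                return a, lam   # no ascent step found; stop
        a, lam = a - t * da, lam - t * dl
        if max(abs(t * da), abs(t * dl)) < tol:
            break
    return a, lam
```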
We must confirm the existence and uniqueness of the estimates of a and λ before obtaining them. Due to the complex form of the normal equations, we use graphical techniques to check these properties. A three-dimensional (3D) plot of the log-LF in (8) is created using a simulated sample from the MKiEx distribution under the progressive censoring plan R_1 = R_2 = ⋯ = R_9 = 0 and R_10 = 20, with a = 1 and λ = 1, as shown in Figure 1. The 3D plot clearly shows a region containing a global maximum, which suggests that the MLEs exist and are unique. The Fisher information quantifies the amount of information carried by an observed random sample about the unknown parameters a and λ. The Fisher information matrix can be defined as
I(a, \lambda) = -E \begin{pmatrix} \frac{\partial^2 l(a,\lambda)}{\partial a^2} & \frac{\partial^2 l(a,\lambda)}{\partial a \partial \lambda} \\ \frac{\partial^2 l(a,\lambda)}{\partial \lambda \partial a} & \frac{\partial^2 l(a,\lambda)}{\partial \lambda^2} \end{pmatrix};
for more details, see DeGroot and Schervish [20]. In many cases, it is not possible to obtain explicit mathematical expressions for the previous expectation; therefore, Cohen [21] considered using the observed Fisher information by substituting ( a , λ ) with ( a ^ , λ ^ ) . In this case, the asymptotic variance–covariance matrix of the MLEs can be written as follows:
I^{-1}(\hat{a}, \hat{\lambda}) = \begin{pmatrix} -\frac{\partial^2 l(a,\lambda)}{\partial a^2} & -\frac{\partial^2 l(a,\lambda)}{\partial a \partial \lambda} \\ -\frac{\partial^2 l(a,\lambda)}{\partial \lambda \partial a} & -\frac{\partial^2 l(a,\lambda)}{\partial \lambda^2} \end{pmatrix}^{-1}_{(a,\lambda) = (\hat{a},\hat{\lambda})} = \begin{pmatrix} \widehat{\mathrm{var}}(\hat{a}) & \widehat{\mathrm{cov}}(\hat{a},\hat{\lambda}) \\ \widehat{\mathrm{cov}}(\hat{\lambda},\hat{a}) & \widehat{\mathrm{var}}(\hat{\lambda}) \end{pmatrix}, \qquad (11)
where the second derivatives ∂²l(a, λ)/∂a², ∂²l(a, λ)/∂λ², and ∂²l(a, λ)/∂a∂λ are shown in Appendix A.
Under mild regularity conditions, the MLEs are known to be asymptotically normally distributed, i.e., (â, λ̂) ∼ N[(a, λ), I⁻¹(â, λ̂)]; for more details, see Casella and Berger [22]. Therefore, we can calculate the ACIs based on the asymptotic properties of the MLEs. Hence, the 100(1 − γ)% ACIs of a and λ based on the MLEs are
\hat{a} \pm z_{\gamma/2} \sqrt{\widehat{\mathrm{var}}(\hat{a})} \quad \text{and} \quad \hat{\lambda} \pm z_{\gamma/2} \sqrt{\widehat{\mathrm{var}}(\hat{\lambda})},
respectively, where z_{γ/2} is the upper (γ/2)-th percentile of the standard normal distribution.
To find the ACIs for the RF and HRF, the delta method (see, e.g., Greene [23]) can be used to obtain the approximate estimates of the variances associated with the MLEs of RF and HRF. According to the delta method, the approximate estimates of the variances of RF and HRF based on (11) are
\widehat{\mathrm{var}}(\hat{R}) = \left[ G_1' \, I^{-1}(\hat{a}, \hat{\lambda}) \, G_1 \right]_{(a,\lambda) = (\hat{a},\hat{\lambda})} \quad \text{and} \quad \widehat{\mathrm{var}}(\hat{h}) = \left[ G_2' \, I^{-1}(\hat{a}, \hat{\lambda}) \, G_2 \right]_{(a,\lambda) = (\hat{a},\hat{\lambda})},
respectively, where
G_1' = \left( \frac{\partial R(t)}{\partial a}, \frac{\partial R(t)}{\partial \lambda} \right) \quad \text{and} \quad G_2' = \left( \frac{\partial h(t)}{\partial a}, \frac{\partial h(t)}{\partial \lambda} \right).
The elements of G_1' and G_2' are provided by
\frac{\partial R(t)}{\partial a} = -e^{-(e^{\lambda t}-1)^{a}} \left(e^{\lambda t}-1\right)^{a} \log\left(e^{\lambda t}-1\right),
\frac{\partial R(t)}{\partial \lambda} = -a t e^{\lambda t} e^{-(e^{\lambda t}-1)^{a}} \left(e^{\lambda t}-1\right)^{a-1},
\frac{\partial h(t)}{\partial a} = \lambda e^{(a+1)\lambda t} \left(1-e^{-\lambda t}\right)^{a} \left( a\lambda t + a\log\left(1-e^{-\lambda t}\right) + 1 \right) \left(e^{\lambda t}-1\right)^{-1}
and
\frac{\partial h(t)}{\partial \lambda} = a e^{(a+1)\lambda t} \left(1-e^{-\lambda t}\right)^{a} \left\{ e^{\lambda t}(a\lambda t+1) - \lambda t - 1 \right\} \left(e^{\lambda t}-1\right)^{-2}.
In this case, the 100(1 − γ)% ACIs of the RF and HRF based on the MLEs are respectively provided by
\hat{R}(t) \pm z_{\gamma/2} \sqrt{\widehat{\mathrm{var}}(\hat{R}(t))} \quad \text{and} \quad \hat{h}(t) \pm z_{\gamma/2} \sqrt{\widehat{\mathrm{var}}(\hat{h}(t))}.
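The delta-method computation above amounts to evaluating the quadratic form G′ I⁻¹ G and widening the point estimate by z_{γ/2} standard errors. A small Python sketch follows (the function name and all numbers in the usage test are purely illustrative, not values from the paper):

```python
import math

# Delta-method ACI: given a point estimate `est`, the gradient
# grad = (dU/da, dU/dlam), and the estimated variance-covariance
# matrix V = I^{-1}(a_hat, lam_hat), the 100(1-gamma)% ACI is
# est +/- z_{gamma/2} * sqrt(G' V G).

def delta_method_ci(est, grad, V, z=1.959964):
    var = (grad[0] * (V[0][0] * grad[0] + V[0][1] * grad[1])
           + grad[1] * (V[1][0] * grad[0] + V[1][1] * grad[1]))
    half = z * math.sqrt(var)
    return est - half, est + half
```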

3. Maximum Product of Spacing Estimation

Let x_{(1)}, x_{(2)}, …, x_{(m)} be a progressively type-II censored sample taken from an MKiEx population with the CDF and PDF provided by (1) and (2), respectively, and with a progressive censoring strategy (R_1, R_2, …, R_m). Then, based on (1), (2), and (6) and ignoring the constant term, the PSF and the associated logarithm function are provided by
S(a, \lambda) = e^{-\sum_{i=1}^{m} R_i A_i^{a}} \prod_{i=1}^{m+1} \delta_i \qquad (12)
and
s(a, \lambda) = \sum_{i=1}^{m+1} \log(\delta_i) - \sum_{i=1}^{m} R_i A_i^{a}, \qquad (13)
respectively, where δ_i = e^{−A_{i−1}^a} − e^{−A_i^a}. The MPSEs of the unknown parameters, denoted by ã and λ̃, are obtained by simultaneously solving the following normal equations:
\frac{\partial s(a,\lambda)}{\partial a} = \sum_{i=1}^{m+1} \frac{\phi_{1i} - \phi_{1,i-1}}{\delta_i} - \sum_{i=1}^{m} R_i A_i^{a} \log(A_i) = 0 \qquad (14)
and
\frac{\partial s(a,\lambda)}{\partial \lambda} = \sum_{i=1}^{m+1} \frac{\phi_{2i} - \phi_{2,i-1}}{\delta_i} - a \sum_{i=1}^{m} R_i x_i e^{\lambda x_i} A_i^{a-1} = 0, \qquad (15)
where ϕ_{1i} = e^{−A_i^a} A_i^a log(A_i) and ϕ_{2i} = a x_i e^{λ x_i} e^{−A_i^a} A_i^{a−1}. The previous equations are difficult to solve explicitly. In order to solve them numerically, we use the Newton–Raphson method, as in the case of the MLEs. The MPSEs of the RF and HRF can be acquired by employing the invariance property, replacing a and λ in (3) and (4) with ã and λ̃. Then, we can obtain the MPSEs of R(t) and h(t), respectively, as follows:
\tilde{R}(t) = e^{-(e^{\tilde{\lambda} t} - 1)^{\tilde{a}}} \quad \text{and} \quad \tilde{h}(t) = \tilde{a}\tilde{\lambda} e^{\tilde{a}\tilde{\lambda} t} \left(1 - e^{-\tilde{\lambda} t}\right)^{\tilde{a} - 1}.
Using the same simulated sample as in the previous section, Figure 2 presents the 3D plot of the log-PSF in (13), which we can use to check the existence and uniqueness of the MPSEs ã and λ̃. From Figure 2, it can be concluded that the MPSEs appear to exist and to be unique.
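A crude way to check the MPSEs numerically is to evaluate the log-PSF s(a, λ) in (13) over a parameter grid and take the best grid point, e.g., as a starting value for the Newton–Raphson iteration. The sketch below does exactly that; the grid search is our stand-in, not the paper's method, and the sample in the test is a small hypothetical one.

```python
import math

# Log-PSF of Eq. (13) for the MKiEx model and a crude grid maximizer.

def log_psf(a, lam, x, R):
    # F(x_(0)) = 0 and F(x_(m+1)) = 1; spacings are differences of CDF values
    F = [0.0] + [1.0 - math.exp(-(math.exp(lam * xi) - 1.0) ** a) for xi in x] + [1.0]
    s = sum(math.log(max(F[i] - F[i - 1], 1e-300)) for i in range(1, len(F)))
    # censoring term: -sum R_i * A_i^a, i.e., sum R_i * log(1 - F(x_(i)))
    s -= sum(r * (math.exp(lam * xi) - 1.0) ** a for xi, r in zip(x, R))
    return s

def mps_grid_search(x, R, grid=None):
    if grid is None:
        grid = [0.1 * k for k in range(1, 41)]   # 0.1, 0.2, ..., 4.0
    return max(((a, lam) for a in grid for lam in grid),
               key=lambda th: log_psf(th[0], th[1], x, R))
```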
Utilizing the asymptotic properties of the MPSEs, we can find the ACIs for a , λ , R ( t ) , and h ( t ) . In this case, the asymptotic variance–covariance matrix of the MPSEs can be expressed as follows:
I^{-1}(\tilde{a}, \tilde{\lambda}) = \begin{pmatrix} -\frac{\partial^2 s(a,\lambda)}{\partial a^2} & -\frac{\partial^2 s(a,\lambda)}{\partial a \partial \lambda} \\ -\frac{\partial^2 s(a,\lambda)}{\partial \lambda \partial a} & -\frac{\partial^2 s(a,\lambda)}{\partial \lambda^2} \end{pmatrix}^{-1}_{(a,\lambda) = (\tilde{a},\tilde{\lambda})} = \begin{pmatrix} \widetilde{\mathrm{var}}(\tilde{a}) & \widetilde{\mathrm{cov}}(\tilde{a},\tilde{\lambda}) \\ \widetilde{\mathrm{cov}}(\tilde{\lambda},\tilde{a}) & \widetilde{\mathrm{var}}(\tilde{\lambda}) \end{pmatrix}, \qquad (16)
where the second derivatives ∂²s(a, λ)/∂a², ∂²s(a, λ)/∂λ², and ∂²s(a, λ)/∂a∂λ are shown in Appendix B. Hence, the 100(1 − γ)% ACIs of a and λ based on the MPSEs are
\tilde{a} \pm z_{\gamma/2} \sqrt{\widetilde{\mathrm{var}}(\tilde{a})} \quad \text{and} \quad \tilde{\lambda} \pm z_{\gamma/2} \sqrt{\widetilde{\mathrm{var}}(\tilde{\lambda})},
respectively. To find the ACIs for the RF and HRF, we first approximate the estimates of the variances of RF and HRF based on (16). Thus,
\widetilde{\mathrm{var}}(\tilde{R}) = \left[ G_1' \, I^{-1}(\tilde{a}, \tilde{\lambda}) \, G_1 \right]_{(a,\lambda) = (\tilde{a},\tilde{\lambda})} \quad \text{and} \quad \widetilde{\mathrm{var}}(\tilde{h}) = \left[ G_2' \, I^{-1}(\tilde{a}, \tilde{\lambda}) \, G_2 \right]_{(a,\lambda) = (\tilde{a},\tilde{\lambda})},
respectively. As a result, the respective 100(1 − γ)% ACIs of the RF and HRF based on the MPSEs are provided by
\tilde{R}(t) \pm z_{\gamma/2} \sqrt{\widetilde{\mathrm{var}}(\tilde{R}(t))} \quad \text{and} \quad \tilde{h}(t) \pm z_{\gamma/2} \sqrt{\widetilde{\mathrm{var}}(\tilde{h}(t))}.

4. LF-Based Bayesian Estimation

In this section, the Bayesian estimation of a, λ, the RF, and the HRF is considered when the derivation of the joint posterior distribution is LF-based. The Bayesian estimation method assumes that the unknown parameters are random variables with prior information. In the case of the MKiEx model, we consider the prior distributions of a and λ to be independent, where the prior distribution of a is Gamma(a_1, b_1) and the prior distribution of λ is Gamma(a_2, b_2). As a result, the joint prior distribution of a and λ is
p(a, \lambda) \propto a^{a_1 - 1} \lambda^{a_2 - 1} e^{-(b_1 a + b_2 \lambda)}, \quad a_j, b_j > 0, \; j = 1, 2, \qquad (17)
where we choose the gamma prior due to its mathematical flexibility and its ability to cover a wide range of prior beliefs held by the experimenter; for more details, see Fathi et al. [24], Eliwa et al. [25], and Dey et al. [26]. By combining the LF in (7) and the joint prior distribution in (17), the joint posterior distribution is provided by
g(a, \lambda \mid \mathbf{x}) = B_1^{-1} \, a^{m + a_1 - 1} \lambda^{m + a_2 - 1} \exp\left\{ \lambda \left( a \sum_{i=1}^{m} x_i - b_2 \right) - \sum_{i=1}^{m} (R_i + 1) A_i^{a} - b_1 a \right\} \prod_{i=1}^{m} \left(1 - e^{-\lambda x_i}\right)^{a-1}, \qquad (18)
where x is the data vector and
B_1 = \int_0^\infty \int_0^\infty a^{m + a_1 - 1} \lambda^{m + a_2 - 1} \exp\left\{ \lambda \left( a \sum_{i=1}^{m} x_i - b_2 \right) - \sum_{i=1}^{m} (R_i + 1) A_i^{a} - b_1 a \right\} \prod_{i=1}^{m} \left(1 - e^{-\lambda x_i}\right)^{a-1} da \, d\lambda.
Using SELF, the Bayes estimator of any function of the parameters, say, U ( a , λ ) , can be obtained as follows:
\hat{U}_B(a, \lambda) = \int_0^\infty \int_0^\infty U(a, \lambda) \, g(a, \lambda \mid \mathbf{x}) \, da \, d\lambda. \qquad (19)
It is complicated to obtain the Bayes estimator in (19) in explicit form; therefore, we propose using Lindley’s approximation and the MCMC technique, described in the following subsections, to overcome this difficulty.

4.1. Lindley’s Approximation

Lindley’s approximation was developed by Lindley [27] to numerically evaluate Bayes estimators. Let U(a, λ) be a function of the parameters a and λ; its posterior expectation is
E[U(a, \lambda)] = \frac{\int U(a, \lambda) \, e^{l(a, \lambda) + \rho(a, \lambda)} \, d(a, \lambda)}{\int e^{l(a, \lambda) + \rho(a, \lambda)} \, d(a, \lambda)}, \qquad (20)
where l(a, λ) is the log-LF and ρ(a, λ) is the log of the joint prior of a and λ. The Bayes estimator of U(a, λ) can be obtained using Lindley’s approximation, which approximates the ratio of integrals as a whole using a Taylor series expansion, producing a single numerical value. Lindley’s approximation of the expression provided by (20) is
E[U(a,\lambda)] \approx U(a,\lambda) + 0.5\left[ (u_{aa} + 2u_a\rho_a)\sigma_{aa} + (u_{\lambda a} + 2u_\lambda\rho_a)\sigma_{\lambda a} + (u_{a\lambda} + 2u_a\rho_\lambda)\sigma_{a\lambda} + (u_{\lambda\lambda} + 2u_\lambda\rho_\lambda)\sigma_{\lambda\lambda} \right] + 0.5\left[ (u_a\sigma_{aa} + u_\lambda\sigma_{a\lambda})(q_{aaa}\sigma_{aa} + q_{a\lambda a}\sigma_{a\lambda} + q_{\lambda aa}\sigma_{\lambda a} + q_{\lambda\lambda a}\sigma_{\lambda\lambda}) + (u_a\sigma_{\lambda a} + u_\lambda\sigma_{\lambda\lambda})(q_{\lambda aa}\sigma_{aa} + q_{a\lambda\lambda}\sigma_{a\lambda} + q_{\lambda a\lambda}\sigma_{\lambda a} + q_{\lambda\lambda\lambda}\sigma_{\lambda\lambda}) \right], \qquad (21)
where u_a, u_λ, u_{aλ}, u_{λa}, u_{aa}, and u_{λλ} are the derivatives of U(a, λ) with respect to the indicated parameters. All terms in (21) are evaluated at the MLEs â and λ̂ of a and λ, respectively. The quantities in (21) are shown in Appendix C. In addition, from the joint prior distribution in (17), we have
\rho(a, \lambda) = \log[p(a, \lambda)] = (a_1 - 1)\log(a) + (a_2 - 1)\log(\lambda) - (b_1 a + b_2 \lambda).
Thus,
\rho_a = \frac{a_1 - 1}{a} - b_1
and
\rho_\lambda = \frac{a_2 - 1}{\lambda} - b_2.
Now, when U(a, λ) = a, we have u_a = 1 and u_λ = u_{aa} = u_{λλ} = u_{aλ} = u_{λa} = 0. Then, we can obtain the Bayesian estimator of a as follows:
\hat{a}_{BL} = \hat{a} + \Delta_1,
where
\Delta_1 = \rho_a\sigma_{aa} + \rho_\lambda\sigma_{a\lambda} + 0.5\left[ q_{aaa}\sigma_{aa}^2 + 3q_{aa\lambda}\sigma_{aa}\sigma_{a\lambda} + q_{a\lambda\lambda}\sigma_{aa}\sigma_{\lambda\lambda} + 2q_{a\lambda\lambda}\sigma_{a\lambda}^2 + q_{\lambda\lambda\lambda}\sigma_{a\lambda}\sigma_{\lambda\lambda} \right]. \qquad (24)
Similarly, for U(a, λ) = λ, we have u_λ = 1 and u_a = u_{aa} = u_{λλ} = u_{aλ} = u_{λa} = 0, and we can obtain the Bayesian estimator of λ as follows:
\hat{\lambda}_{BL} = \hat{\lambda} + \Delta_2,
where
\Delta_2 = \rho_a\sigma_{\lambda a} + \rho_\lambda\sigma_{\lambda\lambda} + 0.5\left[ q_{aaa}\sigma_{aa}\sigma_{a\lambda} + 2q_{aa\lambda}\sigma_{a\lambda}^2 + 3q_{a\lambda\lambda}\sigma_{a\lambda}\sigma_{\lambda\lambda} + q_{aa\lambda}\sigma_{aa}\sigma_{\lambda\lambda} + q_{\lambda\lambda\lambda}\sigma_{\lambda\lambda}^2 \right]. \qquad (25)
We can obtain the Bayesian estimator of the RF using the same approach. For simplicity, let A_t = e^{λt} − 1 and U(a, λ) = e^{−A_t^a}; then, we have
u_a = -e^{-A_t^{a}} A_t^{a} \log(A_t),
u_\lambda = -a t e^{\lambda t} e^{-A_t^{a}} A_t^{a-1},
u_{aa} = e^{-A_t^{a}} A_t^{a} \left(A_t^{a} - 1\right) \log^2(A_t),
u_{\lambda\lambda} = a t^2 e^{\lambda t} e^{-A_t^{a}} A_t^{a-2} \left( a e^{\lambda t} A_t^{a} - a e^{\lambda t} + 1 \right)
and
u_{a\lambda} = u_{\lambda a} = t e^{\lambda t} e^{-A_t^{a}} A_t^{a-1} \left[ a \left(A_t^{a} - 1\right) \log(A_t) - 1 \right].
Thus, the Bayesian estimator of the RF can be obtained as
\hat{R}_{BL} = \hat{R} + \Delta_3,
where
\Delta_3 = 0.5\left[ (u_{aa} + 2u_a\rho_a)\sigma_{aa} + (u_{\lambda a} + 2u_\lambda\rho_a)\sigma_{\lambda a} + (u_{a\lambda} + 2u_a\rho_\lambda)\sigma_{a\lambda} + (u_{\lambda\lambda} + 2u_\lambda\rho_\lambda)\sigma_{\lambda\lambda} \right] + 0.5\left[ (u_a\sigma_{aa} + u_\lambda\sigma_{a\lambda})(q_{aaa}\sigma_{aa} + q_{a\lambda a}\sigma_{a\lambda} + q_{\lambda aa}\sigma_{\lambda a} + q_{\lambda\lambda a}\sigma_{\lambda\lambda}) + (u_a\sigma_{\lambda a} + u_\lambda\sigma_{\lambda\lambda})(q_{\lambda aa}\sigma_{aa} + q_{a\lambda\lambda}\sigma_{a\lambda} + q_{\lambda a\lambda}\sigma_{\lambda a} + q_{\lambda\lambda\lambda}\sigma_{\lambda\lambda}) \right]. \qquad (26)
For the HRF, let B_t = 1 − e^{−λt} and U(a, λ) = aλ e^{aλt} B_t^{a−1}; then, we have
u_a = \lambda e^{a\lambda t} B_t^{a-1} \left[ a\left(\lambda t + \log(B_t)\right) + 1 \right],
u_\lambda = a e^{\lambda t} A_t^{a-2} \left[ e^{\lambda t}(a\lambda t + 1) - \lambda t - 1 \right],
u_{aa} = \lambda e^{(a+1)\lambda t} B_t^{a} \left(\lambda t + \log(B_t)\right) \left( a\lambda t + a\log(B_t) + 2 \right) A_t^{-1},
u_{\lambda\lambda} = a t e^{\lambda t} A_t^{a-3} \left\{ e^{\lambda t} \left[ a\left( e^{\lambda t}(a\lambda t + 2) - 3\lambda t - 2 \right) + \lambda t - 2 \right] + \lambda t + 2 \right\}
and
u_{a\lambda} = u_{\lambda a} = e^{\lambda t} A_t^{a-2} \left\{ e^{\lambda t}(2a\lambda t + 1) + a\left[ e^{\lambda t}(a\lambda t + 1) - \lambda t - 1 \right] \log(A_t) - \lambda t - 1 \right\}.
Therefore, the Bayesian estimator of the HRF can be obtained as
\hat{h}_{BL} = \hat{h} + \Delta_4,
where
\Delta_4 = 0.5\left[ (u_{aa} + 2u_a\rho_a)\sigma_{aa} + (u_{\lambda a} + 2u_\lambda\rho_a)\sigma_{\lambda a} + (u_{a\lambda} + 2u_a\rho_\lambda)\sigma_{a\lambda} + (u_{\lambda\lambda} + 2u_\lambda\rho_\lambda)\sigma_{\lambda\lambda} \right] + 0.5\left[ (u_a\sigma_{aa} + u_\lambda\sigma_{a\lambda})(q_{aaa}\sigma_{aa} + q_{a\lambda a}\sigma_{a\lambda} + q_{\lambda aa}\sigma_{\lambda a} + q_{\lambda\lambda a}\sigma_{\lambda\lambda}) + (u_a\sigma_{\lambda a} + u_\lambda\sigma_{\lambda\lambda})(q_{\lambda aa}\sigma_{aa} + q_{a\lambda\lambda}\sigma_{a\lambda} + q_{\lambda a\lambda}\sigma_{\lambda a} + q_{\lambda\lambda\lambda}\sigma_{\lambda\lambda}) \right]. \qquad (27)

4.2. MCMC Technique

In this subsection, we use the MCMC method to obtain the Bayesian estimates of a and λ as well as R ( t ) and h ( t ) . The MCMC technique can be used to simulate samples from (18), and in turn to obtain the Bayesian estimates. To generate samples from (18), we need to sample successively from a target distribution. Thus, in order to use the MCMC technique, we need to derive the full conditional distributions of a and λ , as follows:
g(a \mid \lambda, \mathbf{x}) \propto a^{m + a_1 - 1} \exp\left\{ a \left[ \lambda \sum_{i=1}^{m} x_i - b_1 + \sum_{i=1}^{m} \log\left(1 - e^{-\lambda x_i}\right) \right] \right\} e^{-\sum_{i=1}^{m} (R_i + 1)\left(e^{\lambda x_i} - 1\right)^{a}} \qquad (28)
and
g(\lambda \mid a, \mathbf{x}) \propto \lambda^{m + a_2 - 1} \exp\left\{ \sum_{i=1}^{m} \left[ (a - 1)\log\left(1 - e^{-\lambda x_i}\right) - (R_i + 1)\left(e^{\lambda x_i} - 1\right)^{a} \right] \right\} e^{\lambda \left( a \sum_{i=1}^{m} x_i - b_2 \right)}. \qquad (29)
It can be seen that (28) and (29) cannot be reduced to any well-known distributions. Therefore, we have used plots of these conditional distributions to show that (28) and (29) behave similarly to the normal distribution. Thus, we employ the Metropolis–Hastings (M-H) technique with normal proposal distributions, as follows:
Step 1: Set the initial values (a^{(0)}, λ^{(0)}) = (â, λ̂).
Step 2: Set j = 1.
Step 3: Generate a* from N(a^{(j−1)}, var̂(â)).
Step 4: Calculate β = min[1, g(a* | λ^{(j−1)}, x) / g(a^{(j−1)} | λ^{(j−1)}, x)].
Step 5: Generate a sample u from U(0, 1).
Step 6: If u ≤ β, accept the proposal and set a^{(j)} = a*; otherwise, reject it and set a^{(j)} = a^{(j−1)}.
Step 7: Follow the same lines as in Steps 3–6 to generate λ^{(j)} from (29).
Step 8: Obtain R^{(j)}(t) = e^{−(e^{λ^{(j)} t} − 1)^{a^{(j)}}} and h^{(j)}(t) = a^{(j)} λ^{(j)} e^{a^{(j)} λ^{(j)} t} (1 − e^{−λ^{(j)} t})^{a^{(j)} − 1}.
Step 9: Set j = j + 1.
Step 10: Repeat Steps 3–9 M times to obtain M random samples of a, λ, R(t), and h(t) as {a^{(1)}, a^{(2)}, …, a^{(M)}}, {λ^{(1)}, λ^{(2)}, …, λ^{(M)}}, {R^{(1)}(t), R^{(2)}(t), …, R^{(M)}(t)}, and {h^{(1)}(t), h^{(2)}(t), …, h^{(M)}(t)}, respectively.
Step 11: The LF-based Bayes estimate of a, λ, R(t), or h(t) (say, θ) under SELF is provided by
\hat{\theta}_{BM} = E(\theta \mid \mathbf{x}) = \frac{1}{M - Q} \sum_{j=Q+1}^{M} \theta^{(j)}, \qquad (30)
where Q is the burn-in period.
Step 12: Follow the approach suggested by Chen and Shao [28] to obtain the HPD credible interval of the parameter θ, as follows:
  • Sort the MCMC chain of θ after the burn-in period to obtain θ_{(Q+1)} ≤ θ_{(Q+2)} ≤ ⋯ ≤ θ_{(M)}.
  • The two-sided 100(1 − γ)% HPD credible interval of θ is then (θ_{(j*)}, θ_{(j* + [(1 − γ)(M − Q)])}), where j* = Q + 1, Q + 2, …, M is selected such that
    θ_{(j* + [(1 − γ)(M − Q)])} − θ_{(j*)} = min_{1 ≤ j ≤ γ(M − Q)} (θ_{(j + [(1 − γ)(M − Q)])} − θ_{(j)}),
    where [h] denotes the largest integer less than or equal to h.
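The steps above can be sketched as an M-H-within-Gibbs sampler that works directly with the log of the joint posterior kernel (18) (updating a with λ fixed, and vice versa, is equivalent to sampling from the conditionals (28) and (29)). The data, the prior hyperparameters (chosen by the rule "prior mean equals the parameter value", i.e., Gamma(1.5, 1) and Gamma(0.5, 1)), the proposal scales, and the chain length below are all illustrative choices, not the paper's exact settings.

```python
import math, random

# M-H-within-Gibbs sampler for the LF-based posterior (18), plus the
# Chen-Shao HPD interval of Step 12.

def log_post(a, lam, x, R, a1=1.5, b1=1.0, a2=0.5, b2=1.0):
    """Log of the joint posterior kernel (18), up to the constant B_1."""
    if a <= 0 or lam <= 0:
        return float("-inf")
    m = len(x)
    try:
        A = [math.exp(lam * xi) - 1.0 for xi in x]
        return ((m + a1 - 1) * math.log(a) + (m + a2 - 1) * math.log(lam)
                + lam * (a * sum(x) - b2) - b1 * a
                - sum((r + 1) * Ai ** a for Ai, r in zip(A, R))
                + (a - 1) * sum(math.log(1 - math.exp(-lam * xi)) for xi in x))
    except OverflowError:
        return float("-inf")

def mh_chain(x, R, M=2000, Q=400, sd_a=0.3, sd_lam=0.3, seed=7):
    rng = random.Random(seed)
    a, lam = 1.0, 1.0                       # Step 1 (arbitrary start here)
    chain = []
    for _ in range(M):
        prop = rng.gauss(a, sd_a)           # Steps 3-6: update a
        if rng.random() < math.exp(min(0.0, log_post(prop, lam, x, R)
                                       - log_post(a, lam, x, R))):
            a = prop
        prop = rng.gauss(lam, sd_lam)       # Step 7: update lam
        if rng.random() < math.exp(min(0.0, log_post(a, prop, x, R)
                                       - log_post(a, lam, x, R))):
            lam = prop
        chain.append((a, lam))
    kept = chain[Q:]                        # discard the burn-in (Step 11)
    est_a = sum(c[0] for c in kept) / len(kept)     # SELF estimate, Eq. (30)
    est_lam = sum(c[1] for c in kept) / len(kept)
    return est_a, est_lam, kept

def hpd(samples, gamma=0.05):
    """Chen-Shao HPD: the shortest interval covering (1 - gamma) of the draws."""
    s = sorted(samples)
    k = int((1 - gamma) * len(s))
    j = min(range(len(s) - k), key=lambda i: s[i + k] - s[i])
    return s[j], s[j + k]
```

In practice, the proposal variances would be set from the observed information as in Step 3, and convergence would be checked with trace plots before trusting the estimates.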

5. PSF-Based Bayesian Estimation

The standard Bayesian approach uses the LF to obtain the information from the sample. Coolen and Newby [29] stated that two problems occur when using the LF in the Bayesian method: (i) the LF is unbounded in certain cases, and (ii) the LF is sensitive to outliers. The unboundedness of the LF can be a serious barrier in Bayesian analysis. According to Cohen and Whitten [30], the unbounded likelihood problem typically arises when the unknown parameter is within the range of the support of the PDF. Cheng and Amin [11] proved that the MLE is bound to fail for any PDF which is heavy-tailed or J-shaped. In these circumstances, finding a posterior density function for the unknown parameter may be difficult. Therefore, Coolen and Newby [29] suggested the use of the PSF in Bayesian analysis instead of the LF. They justified this on the grounds that the PSF is an approximation of the LF and that the product of the spacings is actually a probability function, and they concluded that there are no theoretical problems related to using the PSF in place of the LF in Bayesian estimation. Furthermore, they showed that the numerical behaviour of the product of spacings is better than that of the likelihood. Basu et al. [31] proved that the Bayesian approach with the PSF can perform better than Bayesian estimation with the LF. Dey et al. [26] derived the estimates of the gamma distribution with progressively type-II censored data using the Bayesian method based on the LF and PSF, then compared them to the ML and MPS estimates. In addition, Nassar and Elshahhat [32] conducted a statistical analysis of inverse Weibull constant-stress partially accelerated life tests with adaptive progressively type-I censored data using the methods of ML, MPS, and Bayesian estimation based on the LF and PSF. In this section, the Bayesian estimation of the various parameters of the MKiEx distribution is considered.
By combining the PSF in (12) and the joint prior distribution (17), the joint posterior distribution of the unknown parameters a and λ can be expressed as follows:
g(a, \lambda \mid \mathbf{x}) = B_2^{-1} \, a^{a_1 - 1} \lambda^{a_2 - 1} e^{-(b_1 a + b_2 \lambda)} e^{-\sum_{i=1}^{m} R_i A_i^{a}} \prod_{i=1}^{m+1} \left( e^{-A_{i-1}^{a}} - e^{-A_i^{a}} \right), \qquad (32)
where
B_2 = \int_0^\infty \int_0^\infty a^{a_1 - 1} \lambda^{a_2 - 1} e^{-(b_1 a + b_2 \lambda)} e^{-\sum_{i=1}^{m} R_i A_i^{a}} \prod_{i=1}^{m+1} \left( e^{-A_{i-1}^{a}} - e^{-A_i^{a}} \right) da \, d\lambda.
Using SELF, the PSF-based Bayes estimator of any function U(a, λ) of the parameters can be found as follows:
\tilde{U}_B(a, \lambda) = \int_0^\infty \int_0^\infty U(a, \lambda) \, g(a, \lambda \mid \mathbf{x}) \, da \, d\lambda. \qquad (33)
The Bayes estimator in (33) is difficult to obtain in closed form. Therefore, we suggest using Lindley’s approximation and the MCMC technique.

5.1. Lindley’s Approximation

In the case of PSF, all terms in (21) are evaluated at the MPSEs in order to obtain the approximate explicit Bayesian estimators from (32). The other quantities in Lindley’s approximation provided by (21) are shown in Appendix D. Applying Lindley’s approximation as in the previous section, the Bayesian estimators of the various parameters can easily be obtained as follows:
\tilde{a}_{BL} = \tilde{a} + \Delta_1,
\tilde{\lambda}_{BL} = \tilde{\lambda} + \Delta_2,
\tilde{R}_{BL} = \tilde{R} + \Delta_3,
and
\tilde{h}_{BL} = \tilde{h} + \Delta_4,
where Δ_j, j = 1, 2, 3, 4, are provided by (24)–(27) and evaluated at the MPSEs of a and λ.

5.2. MCMC Technique

We suggest using the MCMC method discussed in Section 4.2 to obtain the Bayesian estimates of a and λ as well as the RF and HRF. We first need to derive the full conditional distribution of a and λ , as follows:
g^*(a \mid \lambda, \mathbf{x}) \propto a^{a_1 - 1} e^{-b_1 a} \prod_{i=1}^{m+1} \left( e^{-A_{i-1}^{a}} - e^{-A_i^{a}} \right) \prod_{i=1}^{m} e^{-R_i A_i^{a}} \qquad (34)
and
g^*(\lambda \mid a, \mathbf{x}) \propto \lambda^{a_2 - 1} e^{-b_2 \lambda} \prod_{i=1}^{m+1} \left( e^{-A_{i-1}^{a}} - e^{-A_i^{a}} \right) \prod_{i=1}^{m} e^{-R_i A_i^{a}}. \qquad (35)
It can be observed that while (34) and (35) cannot be reduced to any well-known distributions, their plots are similar to the normal distribution. Therefore, we employ the M-H technique with normal proposal distributions using the same steps presented in Section 4.2.

6. Simulation Study

To compare the suggested estimation approaches, we conducted a simulation study. We employed the MKiEx distribution with the parameters a = 1.5 and λ = 0.5. We chose different sample sizes n = 30, 50, 60 and different numbers of failures m for each sample size n as follows: m = 10, 15 for n = 30; m = 10, 25 for n = 50; and m = 15, 20 for n = 60. We considered the following three different progressive censoring schemes (sch):
sch 1: R_1 = R_2 = ⋯ = R_{m−1} = 0 and R_m = n − m;
sch 2: R_1 = n − m and R_2 = R_3 = ⋯ = R_m = 0;
sch 3: R_1 = ⋯ = R_{m−1} = R_m = (n − m)/m.
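The Balakrishnan–Sandhu [33] generator used below transforms independent uniform variates into a progressive type-II censored sample via the quantile function, which for the MKiEx distribution is x = log(1 + (−log(1 − u))^{1/a})/λ from inverting (1). The following is a sketch of that generator (function names are ours):

```python
import math, random

# Balakrishnan-Sandhu algorithm for a progressive type-II censored sample,
# specialized to the MKiEx quantile function.

def mkiex_quantile(u, a, lam):
    """Inverse of CDF (1): x = log(1 + (-log(1 - u))^{1/a}) / lam."""
    return math.log(1.0 + (-math.log(1.0 - u)) ** (1.0 / a)) / lam

def progressive_sample(a, lam, R, seed=None):
    rng = random.Random(seed)
    m = len(R)
    W = [rng.random() for _ in range(m)]
    # V_i = W_i^{1/(i + R_m + R_{m-1} + ... + R_{m-i+1})}
    V = []
    for i in range(1, m + 1):
        e = i + sum(R[m - i:])
        V.append(W[i - 1] ** (1.0 / e))
    # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1}: progressive uniform sample
    U = []
    prod = 1.0
    for i in range(1, m + 1):
        prod *= V[m - i]
        U.append(1.0 - prod)
    return [mkiex_quantile(u, a, lam) for u in U]
```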
Thus, with three sample sizes, two numbers of failures for each, and three censoring schemes, we have eighteen different cases in total. For each case, the point and interval estimates of the RF and HRF are obtained at t = 0.3. In all, 1000 progressive type-II censored samples were generated using the algorithm proposed by Balakrishnan and Sandhu [33]. To guarantee the elimination of the effects of the initial values in the MCMC algorithm for Bayesian estimation, the first 2000 iterations of the 12,000 in the MCMC chain were discarded. The hyperparameters of the gamma priors were selected in such a way that the gamma prior mean (shape/rate) was the same as the original mean (parameter value); hence, we chose the shape parameter as the true parameter value and the rate parameter as 1. Many authors have used this method of choosing the hyperparameters, including Kundu [34], Kundu and Howlader [35], Kundu and Raqab [36], Basu et al. [37], and Dey et al. [26]. It is possible to use other hyperparameter values, including non-informative priors (which are expected to produce estimates that coincide with the MLEs), or to select the hyperparameter values arbitrarily. For the point estimates, we consider the relative absolute bias (RAB) and root mean square error (RMSE) as criteria to compare the various estimation methods. The RAB is a measure that indicates how large the bias of the estimator is relative to the true parameter value. The RAB of the estimated values for the parameter θ is
RAB(\theta) = \frac{1}{S} \sum_{i=1}^{S} \frac{\left| \hat{\theta}_i - \theta \right|}{\theta},
where S is the number of simulated samples. The RMSE is the square root of the average squared distance between the estimated values and the true value θ; it is provided by
$$\mathrm{RMSE}(\hat{\theta}) = \left[\frac{1}{S}\sum_{i=1}^{S}\left(\hat{\theta}_i-\theta\right)^2\right]^{1/2},$$
where θ ^ i is the estimated value of the parameter θ using any estimation method. On the other hand, the average interval length (AIL) and the coverage probability (CP) are considered criteria to compare the efficiency of the interval estimations. The AIL is simply the average of the difference between the upper and lower bounds of the interval, while the CP is the percentage of interval estimates within which the true value of the parameter lies. The AIL and CP of θ can be found by
$$\mathrm{AIL}_{(1-\gamma)\%}(\theta) = \frac{1}{S}\sum_{i=1}^{S}\left(\hat{\theta}_{iU}-\hat{\theta}_{iL}\right)$$
and
$$\mathrm{CP}_{(1-\gamma)\%}(\theta) = \frac{1}{S}\sum_{i=1}^{S}\mathbf{1}\left(\hat{\theta}_{iL}\leq\theta\leq\hat{\theta}_{iU}\right),$$
respectively, where θ̂_{iL} and θ̂_{iU} denote the lower and upper bounds of the (1 − γ)% ACIs and HPDs, respectively, and 1(θ̂_{iL} ≤ θ ≤ θ̂_{iU}) is an indicator function that equals 1 if θ̂_{iL} ≤ θ ≤ θ̂_{iU} and 0 otherwise. The values of RAB, RMSE, AIL, and CP for all parameters were calculated using R software and are displayed using heat maps in Figure 3 and Figure 4. From the simulation results in Figure 3 and Figure 4, the following conclusions can be drawn:
  • In most cases, the MPS method performs better in terms of RAB and RMSE than the ML method for a, λ , and h ( t ) , especially in the case of small m.
  • When n is fixed, the values of RAB, RMSE, and AIL decrease as m increases for each censoring scheme.
  • In most cases, when n and m are fixed, Scheme 2 performs better than the other schemes in terms of RAB, RMSE, and AIL.
  • In the majority of cases, LF-based Bayesian estimation using the MCMC technique and Lindley’s approximation performs better than the other approaches for all parameters in terms of RAB.
  • In most cases, when comparing the RMSE of a for different approaches, PSF-based Bayesian estimation using Lindley’s approximation performs better than the others, while LF-based Bayesian estimation usually performs better than the other methods for λ , R ( t ) , and h ( t ) in terms of RMSE.
  • For the parameter a, it is observed that PSF-based Bayesian estimation usually provides better estimates than other methods in terms of AIL, while for the parameters λ and R ( t ) the ML performs better than the other methods in terms of AIL in most cases. The AILs of h ( t ) when using the LF-based Bayesian method are usually shorter than the AILs using other methods.
  • In terms of CP, the PSF-based Bayesian method usually performs better than the other approaches for λ , R ( t ) , and h ( t ) . On the other hand, for a the LF-based Bayesian method always surpasses the other methods in terms of CP.
  • It can be concluded that the different estimates exhibit asymptotic behaviour with large m: the RABs, RMSEs, and AILs all tend to zero as m increases. In the case of small m, the MLEs have larger RABs, RMSEs, and AILs in most cases when compared with the MPSEs and Bayes estimates. On the other hand, the various Bayes estimates perform well for both small and large m when compared with the classical estimates.
  • Overall, it can be noted that the MPS method is more accurate than the ML method in the case of a small number of observed failures. On the other hand, the Bayesian estimation methods perform better than the classical methods when there is prior information about the unknown parameters. The Bayesian method requires more computational time than the classical methods. Therefore, in the case of limited time the classical methods are to be preferred over the Bayesian methods. Finally, when no information is available about the unknown parameters and the decision is taken to use non-informative priors, it is advised to use the classical methods in this case, as the acquired estimates can be expected to approximately coincide.
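For completeness, the sample-generation step underlying all of the above — the Balakrishnan and Sandhu [33] algorithm combined with the MKiEx quantile function — can be sketched as follows. The function names are ours, and we assume the MKiEx CDF F(x) = 1 − exp(−(e^{λx} − 1)^a), whose inverse has the closed form used below:

```python
import math, random

def mkiex_quantile(u, a, lam):
    # inverse of the assumed MKiEx CDF F(x) = 1 - exp(-(exp(lam*x) - 1)**a)
    return math.log(1.0 + (-math.log(1.0 - u)) ** (1.0 / a)) / lam

def progressive_type2_sample(a, lam, R, rng=random):
    # Balakrishnan-Sandhu (1995): transform independent uniforms into a
    # progressive type-II censored sample for the removal vector R = (R_1,...,R_m)
    m = len(R)
    W = [rng.random() for _ in range(m)]
    V = [W[i - 1] ** (1.0 / (i + sum(R[m - i:]))) for i in range(1, m + 1)]
    U = [1.0 - math.prod(V[m - i:]) for i in range(1, m + 1)]
    return [mkiex_quantile(u, a, lam) for u in U]

random.seed(2023)
x = progressive_type2_sample(1.5, 0.5, [0] * 9 + [20])   # n = 30, m = 10, scheme 1
assert x == sorted(x)      # progressive order statistics are increasing
```

The U values are uniform progressive order statistics, so applying the model quantile function yields the censored sample directly.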

7. Real Data Analysis

In this section, we illustrate the applicability of the suggested estimation methods using two real-life datasets. After checking the suitability of the MKiEx distribution for the considered datasets, different censored samples were generated, and the point estimates with the associated standard errors (SEs) were obtained. The lower bound, upper bound, and length of the ACIs and HPD credible intervals were obtained as well. The point and interval estimates of the RF and HRF were obtained at t = 0.3. We eliminated the effects of the initial values in the MCMC procedure by discarding the first 2000 of the 12,000 iterations in each chain. Regarding the choice of the hyperparameters, we followed the method proposed by Kundu [34] and Dey et al. [38] to select the values of ( a 1 , b 1 ) and ( a 2 , b 2 ) for a and λ , respectively.

7.1. Failure Times of Aircraft Windshields

These data were reported by Ijaz et al. [39], and represent the uncensored failure times of 84 aircraft windshields. The data values are: 0.040, 0.301, 0.309, 0.557, 0.943, 1.070, 1.124, 1.248, 1.281, 1.281, 1.303, 1.432, 1.480, 1.505, 1.506, 1.568, 1.615, 1.619, 1.652, 1.652, 1.757, 1.866, 1.876, 1.899, 1.911, 1.912, 1.914, 1.981, 2.010, 2.038, 2.085, 2.089, 2.097, 2.135, 2.154, 2.190, 2.194, 2.223, 2.224, 2.229, 2.300, 2.324, 2.385, 2.481, 2.610, 2.625, 2.632, 2.646, 2.661, 2.688, 2.820, 2.890, 2.902, 2.934, 2.962, 2.964, 3.000, 3.000, 3.103, 3.114, 3.117, 3.166, 3.344, 3.376, 3.443, 3.467, 3.478, 3.578, 3.595, 3.699, 3.779, 3.924, 4.035, 4.121, 4.167, 4.240, 4.255, 4.278, 4.305, 4.376, 4.449, 4.485, 4.570, 4.602. To check the suitability of the MKiEx model for this dataset, the Kolmogorov–Smirnov (KS) distance based on the MLEs was obtained. The KS distance is 0.065647 with an associated p-value of 0.8621, implying that the MKiEx is an acceptable model for the given data. Moreover, Figure 5 displays the TTT plot of the data. According to Barlow and Campo [40], a concave TTT curve indicates that the hazard rate function is increasing, while a curve that is first convex and afterwards concave indicates a bathtub-shaped hazard rate function. The TTT plot here is concave, which indicates that the data have an increasing failure rate function. Figure 6 shows the histogram with the density curve, estimated CDF, probability–probability (PP), and quantile–quantile (QQ) plots for the complete data. Figure 5 and Figure 6 indicate that the MKiEx distribution is a suitable model for analyzing these data. Based on the complete data, the different progressive type-II censored samples shown in Table 1 were created.
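The KS distance used here compares the empirical CDF of the data with the fitted MKiEx CDF. A minimal sketch follows (the function names are ours, the MKiEx CDF form is assumed, and the parameter values in the example call are illustrative stand-ins, not the fitted MLEs):

```python
import math

def mkiex_cdf(x, a, lam):
    # assumed MKiEx CDF: F(x) = 1 - exp(-(exp(lam*x) - 1)**a)
    return 1.0 - math.exp(-((math.exp(lam * x) - 1.0) ** a))

def ks_distance(data, cdf):
    # D_n = sup_x |F_n(x) - F(x)|, evaluated at the jumps of the ECDF
    xs, n, d = sorted(data), len(data), 0.0
    for i, x in enumerate(xs, start=1):
        F = cdf(x)
        d = max(d, abs(i / n - F), abs((i - 1) / n - F))
    return d

# illustrative call with guessed parameters a = 1.5, lam = 0.4
sample = [0.3, 0.9, 1.5, 2.1, 2.8, 3.4, 4.2]
print(ks_distance(sample, lambda x: mkiex_cdf(x, 1.5, 0.4)))
```

Feeding the 84 failure times together with the actual MLEs into `ks_distance` would reproduce the reported distance.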
Table 2 shows the point estimates and SEs for a, λ , R ( t ) , and h ( t ) based on the generated samples presented in Table 1. It can be observed from Table 2 that in the case of small m, i.e., m = 10 , the SEs of the estimates of a and λ from PSF-based Bayesian estimation using the MCMC method are lower than those of the other methods, while the SEs of R ( t ) and h ( t ) using the LF-based Bayesian method are the lowest. When m = 30 and m = 42 , the SEs of λ , R ( t ) , and h ( t ) using LF-based Bayesian estimation with the MCMC approach are lower than those of the other approaches, while the SE of the estimate of a using PSF-based Bayesian estimation with the MCMC method is the smallest. In all cases, the SEs of all parameters decrease as the value of m increases. Moreover, Table 3 shows the lower and upper bounds of the ACIs and HPDs. For small m, the lengths of the HPDs are shorter than those of the ACIs in most cases. For larger sample sizes, the interval length of a using the PSF-based Bayesian method is shorter than those of the other methods, while for the remaining parameters the LF-based Bayesian method always yields shorter interval lengths compared with the other methods. Increasing the value of m always decreases the interval length. Figure 7 shows the interval estimates of R ( x ) and h ( x ) for the third censoring scheme. It can be observed that the lower limits of R ( x ) for all methods approach 0 as x increases. The HPDs of R ( x ) obtained based on both the LF and PSF perform well when compared with the classical intervals. For h ( x ) , it can be seen that the ACI based on MPS and the HPD interval based on PSF are shorter than the intervals obtained using the other methods.
Finally, to check the convergence of the MCMC procedures, using each Markovian chain of a and λ after burn-in from the third censoring sample as an example, a trace plot (which provides an essential tool to evaluate how well a chain is mixed) and density plot (which provides a histogram of chains) are shown in Figure 8. These results indicate that the chains converge well and that the burn-in period is suitable to remove the effect of the initial values.
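The trace and density diagnostics above apply to any Metropolis-type chain. The burn-in mechanics can be illustrated with a generic random-walk Metropolis sampler; the standard-normal target below is purely illustrative and is not the MKiEx posterior:

```python
import math, random

def metropolis(log_target, start, n_iter=12000, burn=2000, step=0.5, rng=random):
    # random-walk Metropolis; the first `burn` draws are discarded as burn-in
    chain, cur, cur_lp = [], start, log_target(start)
    for _ in range(n_iter):
        prop = cur + rng.gauss(0.0, step)
        prop_lp = log_target(prop)
        if math.log(rng.random()) < prop_lp - cur_lp:   # accept/reject step
            cur, cur_lp = prop, prop_lp
        chain.append(cur)
    return chain[burn:]

random.seed(42)
draws = metropolis(lambda t: -0.5 * t * t, start=3.0)   # N(0, 1) target
print(len(draws))    # 10,000 retained draws after burn-in
```

Discarding the first 2000 of 12,000 iterations, as done in the paper, removes the transient caused by the deliberately poor starting value.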

7.2. Number of 1000s of Cycles to Failure for Electrical Appliances

This dataset, which was reported by Lawless [10], represents a complete sample of the numbers of thousands of cycles to failure for sixty electrical appliances. The data values are 0.014, 0.034, 0.059, 0.061, 0.069, 0.080, 0.123, 0.142, 0.165, 0.210, 0.381, 0.464, 0.479, 0.556, 0.574, 0.839, 0.917, 0.969, 0.991, 1.064, 1.088, 1.091, 1.174, 1.270, 1.275, 1.355, 1.397, 1.477, 1.578, 1.649, 1.702, 1.893, 1.932, 2.001, 2.161, 2.292, 2.326, 2.337, 2.628, 2.785, 2.811, 2.886, 2.993, 3.122, 3.248, 3.715, 3.790, 3.857, 3.912, 4.100, 4.106, 4.116, 4.315, 4.510, 4.580, 5.267, 5.299, 5.583, 6.065, 9.701. To check whether the MKiEx model fits this dataset, we conducted the KS test based on the MLEs. The KS distance is 0.064269 with an associated p-value of 0.9518. According to these results, the MKiEx is a suitable model for the provided data. Figure 9 shows the TTT plot of the data, indicating that the data have a bathtub-shaped failure rate function. Figure 10 shows the histogram with the density curve, estimated CDF, PP, and QQ plots for the complete data. Figure 9 and Figure 10 indicate that the MKiEx distribution is an appropriate model for analyzing the data. Different progressive type-II censored samples were generated using the complete data, as shown in Table 4.
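The TTT plot referenced here is built from the scaled total-time-on-test transform; a minimal sketch (the function name is ours):

```python
def scaled_ttt(data):
    # points (i/n, T_i) with T_i = (sum of first i order stats + (n-i)*x_(i)) / total
    xs = sorted(data)
    n, total, running, pts = len(xs), sum(xs), 0.0, []
    for i, x in enumerate(xs, start=1):
        running += x
        pts.append((i / n, (running + (n - i) * x) / total))
    return pts

# a concave curve suggests an increasing hazard; convex-then-concave suggests a bathtub
pts = scaled_ttt([0.014, 0.034, 0.059, 0.061, 0.069, 0.080])
print(pts[-1])   # the transform always ends at (1.0, 1.0)
```

Plotting these points against the diagonal gives exactly the curves shown in Figure 5 and Figure 9.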
Table 5 shows the point estimates and SEs of a, λ , R ( t ) , and h ( t ) based on the generated samples presented in Table 4. It can be observed from Table 5 that in the case of small m, i.e., m = 10 , the SEs of the estimates of a, λ , and h ( t ) using PSF-based Bayesian estimation with the MCMC method are lower than those of the other methods, while the SEs of R ( t ) using the LF-based Bayesian method with Lindley's approximation are the lowest. When m = 20 and m = 30 , the SEs using both the LF- and PSF-based Bayesian estimation methods with the MCMC approach are lower than those of the other approaches for all parameters. In all cases, the SEs of all parameters decrease as the value of m increases. Table 6 shows the lower and upper bounds of the ACIs and HPDs. For small m, the lengths of the HPDs based on the PSF are shorter than the ACIs of both ML and MPS for a, λ , and h ( t ) , while the ACI of R ( t ) using the ML method is the shortest among all methods. For larger sample sizes, the interval lengths of the HPDs are always shorter than those of the ACIs obtained using the ML and MPS methods for all parameters. Increasing the value of m decreases the interval length.
Figure 11 shows the interval estimates of R ( x ) and h ( x ) for the third censoring scheme. It can be observed that the lower limits of R ( x ) for all methods approach 0 as x increases. The HPDs of R ( x ) obtained based on both the LF and PSF perform well when compared with the classical intervals. For h ( x ) , it can be seen that the ACI based on MPS and the HPD interval based on PSF are shorter than the ACI based on ML and the HPD interval based on LF, respectively. Finally, using the third censoring sample as an example, we checked the convergence of the MCMC method for each Markovian chain of a and λ after burn-in, with the trace and density plots shown in Figure 12. These results imply that the chains converge well and that the burn-in period is appropriate for removing the effects of the initial values.

8. Conclusions

In this paper, we have investigated the estimation problems of the modified Kies exponential distribution under a progressive type-II censoring scheme. Two classical methods, namely maximum likelihood and maximum product of spacing, are used to estimate the unknown parameters a and λ of the modified Kies exponential distribution as well as the associated reliability and hazard rate functions. Moreover, Bayesian methods based on the likelihood and product of spacing functions are applied using Lindley's approximation and the Markov chain Monte Carlo technique. We obtained the approximate confidence intervals of the maximum likelihood and maximum product of spacing estimates, as well as the highest posterior density credible intervals of the Bayesian estimates based on the likelihood and product of spacing functions. We conducted a simulation study using the modified Kies exponential model with different sample sizes, numbers of failures, and censoring schemes. Finally, two real datasets were examined to show the practicality of the proposed approaches and to demonstrate how easy to apply and flexible they can be in real-world scenarios. Our numerical results show that the Bayesian methods based on both the likelihood and product of spacing functions are more accurate than the maximum likelihood and maximum product of spacing techniques. It is important to mention that the Bayesian methods require more computational time than the classical methods; therefore, in the case of limited time or no available prior information about the unknown parameters, the classical methods can be preferred over the Bayesian methods. As future work, it would be of interest to study the behaviour of the presented estimation methods under more general progressive censoring schemes, such as hybrid, general progressive, and adaptive censoring schemes.

Author Contributions

Conceptualization, T.K. and M.N.; methodology, T.K. and M.N.; software, T.K. and F.M.A.A.; validation, T.K., M.N. and F.M.A.A.; investigation, T.K., M.N. and F.M.A.A.; writing, T.K., M.N. and F.M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Two real datasets are contained within the article.

Acknowledgments

The authors would like to thank the three reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Second Derivatives of log-LF

$$\frac{\partial^2 l(a,\lambda)}{\partial a^2} = -\frac{m}{a^2} - \sum_{i=1}^{m}(R_i+1)A_i^a\log^2(A_i),$$
$$\frac{\partial^2 l(a,\lambda)}{\partial\lambda^2} = \sum_{i=1}^{m}\frac{x_i^2 e^{\lambda x_i}}{A_i^2}\Big\{a\big[A_i^a\big(R_i - a(R_i+1)e^{\lambda x_i}\big) + A_i^a - 1\big] + 1\Big\} - \frac{m}{\lambda^2},$$
and
$$\frac{\partial^2 l(a,\lambda)}{\partial a\,\partial\lambda} = -\sum_{i=1}^{m}\frac{x_i e^{\lambda x_i}}{A_i}\Big[(R_i+1)A_i^a\big(a\log(A_i)+1\big) - 1\Big].$$
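These closed forms can be sanity-checked against finite differences of the log-likelihood. The sketch below assumes the progressive type-II log-likelihood of the MKiEx model takes the standard form l = m log a + m log λ + Σ [λx_i + (a − 1) log A_i − (R_i + 1) A_i^a] with A_i = e^{λx_i} − 1; the sample values and removal numbers are arbitrary toy inputs:

```python
import math

def loglik(a, lam, x, R):
    # assumed progressive type-II log-likelihood of the MKiEx model
    l = len(x) * (math.log(a) + math.log(lam))
    for xi, ri in zip(x, R):
        A = math.exp(lam * xi) - 1.0
        l += lam * xi + (a - 1.0) * math.log(A) - (ri + 1.0) * A ** a
    return l

def d2l_da2(a, lam, x, R):
    # analytic second derivative with respect to a (Appendix A)
    s = -len(x) / a ** 2
    for xi, ri in zip(x, R):
        A = math.exp(lam * xi) - 1.0
        s -= (ri + 1.0) * A ** a * math.log(A) ** 2
    return s

def d2l_da2_fd(a, lam, x, R, h=1e-4):
    # central finite difference in a
    return (loglik(a + h, lam, x, R) - 2 * loglik(a, lam, x, R)
            + loglik(a - h, lam, x, R)) / h ** 2

x, R = [0.3, 0.7, 1.1, 1.6], [1, 0, 2, 0]
assert abs(d2l_da2(1.5, 0.5, x, R) - d2l_da2_fd(1.5, 0.5, x, R)) < 1e-4
```

The same check applies verbatim to the λ and mixed derivatives.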

Appendix B. The Second Derivatives of log-PSF

$$\frac{\partial^2 s(a,\lambda)}{\partial a^2} = \sum_{i=1}^{m+1}\left[\frac{\phi_{11i}-\phi_{11,i-1}}{\delta_i} - \frac{(\phi_{1i}-\phi_{1,i-1})^2}{\delta_i^2}\right] - \sum_{i=1}^{m}R_i A_i^a\log^2 A_i,$$
$$\frac{\partial^2 s(a,\lambda)}{\partial\lambda^2} = \sum_{i=1}^{m+1}\left[\frac{\phi_{22i}-\phi_{22,i-1}}{\delta_i} - \frac{(\phi_{2i}-\phi_{2,i-1})^2}{\delta_i^2}\right] - a\sum_{i=1}^{m}R_i x_i^2 e^{\lambda x_i}A_i^{a-2}\big(a e^{\lambda x_i}-1\big),$$
and
$$\frac{\partial^2 s(a,\lambda)}{\partial a\,\partial\lambda} = \sum_{i=1}^{m+1}\left[\frac{\phi_{12i}-\phi_{12,i-1}}{\delta_i} - \frac{(\phi_{1i}-\phi_{1,i-1})(\phi_{2i}-\phi_{2,i-1})}{\delta_i^2}\right] - \sum_{i=1}^{m}R_i x_i e^{\lambda x_i}A_i^{a-1}\big(a\log(A_i)+1\big),$$
where
$$\phi_{11i} = e^{-A_i^a}\log^2(A_i)\big(A_i^a - A_i^{2a}\big),$$
$$\phi_{22i} = (a-1)a\,x_i^2 e^{2\lambda x_i}e^{-A_i^a}A_i^{a-2} + a\,x_i e^{\lambda x_i}e^{-A_i^a}\big(x_i - a\,x_i e^{\lambda x_i}A_i^{a-1}\big)A_i^{a-1},$$
and
$$\phi_{12i} = x_i e^{\lambda x_i}e^{-A_i^a}A_i^{a-1} + a\,x_i e^{\lambda x_i}e^{-A_i^a}A_i^{a-1}\log(A_i) - a\,x_i e^{\lambda x_i}e^{-A_i^a}A_i^{2a-1}\log(A_i).$$
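Reading ϕ_{11i} as ∂²F(x_i)/∂a² for the assumed MKiEx CDF F(x) = 1 − e^{−A^a} — an interpretation consistent with the expressions above — the closed form can likewise be verified by finite differences (sample point and parameter values are arbitrary):

```python
import math

def F(x, a, lam):
    # assumed MKiEx CDF
    return 1.0 - math.exp(-((math.exp(lam * x) - 1.0) ** a))

def phi11(x, a, lam):
    # second derivative of F with respect to a (Appendix B)
    A = math.exp(lam * x) - 1.0
    return math.exp(-A ** a) * math.log(A) ** 2 * (A ** a - A ** (2 * a))

def phi11_fd(x, a, lam, h=1e-4):
    # central finite difference in a
    return (F(x, a + h, lam) - 2 * F(x, a, lam) + F(x, a - h, lam)) / h ** 2

assert abs(phi11(0.8, 1.5, 0.5) - phi11_fd(0.8, 1.5, 0.5)) < 1e-6
```

Analogous checks can be run for ϕ_{22i} and ϕ_{12i} by differencing in λ and in both arguments.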

Appendix C. The Derivatives in Lindley’s Approximation Using LF

$$q_{aa} = \frac{\partial^2 l(a,\lambda)}{\partial a^2},\qquad q_{\lambda\lambda} = \frac{\partial^2 l(a,\lambda)}{\partial\lambda^2},\qquad q_{a\lambda} = q_{\lambda a} = \frac{\partial^2 l(a,\lambda)}{\partial a\,\partial\lambda},$$
$$q_{aaa} = \frac{2m}{a^3} - \sum_{i=1}^{m}(R_i+1)A_i^a\log^3 A_i,$$
$$q_{aa\lambda} = q_{a\lambda a} = q_{\lambda aa} = -\sum_{i=1}^{m}x_i(R_i+1)e^{\lambda x_i}A_i^{a-1}\log(A_i)\big(a\log(A_i)+2\big),$$
$$q_{\lambda\lambda\lambda} = \sum_{i=1}^{m}\Bigg\{\frac{x_i^3 e^{\lambda x_i}\Big[a\Big(A_i^a\big\{R_i - a[R_i+1]e^{\lambda x_i}\big\} + A_i^a - 1\Big) + 1\Big]}{A_i^2} - \frac{2x_i^3 e^{2\lambda x_i}\Big[a\Big(A_i^a\big\{R_i - a[R_i+1]e^{\lambda x_i}\big\} + A_i^a - 1\Big) + 1\Big]}{A_i^3} + \frac{a x_i^2 e^{\lambda x_i}\Big[a x_i e^{\lambda x_i}A_i^{a-1}\big(R_i - a\{R_i+1\}e^{\lambda x_i}\big) - a(R_i+1)x_i e^{\lambda x_i}A_i^a + a x_i e^{\lambda x_i}A_i^{a-1}\Big]}{A_i^2}\Bigg\} + \frac{2m}{\lambda^3},$$
$$q_{a\lambda\lambda} = q_{\lambda\lambda a} = q_{\lambda a\lambda} = \sum_{i=1}^{m}\frac{x_i^2 e^{\lambda x_i}\Big\{A_i^a\big[R_i - a(R_i+1)e^{\lambda x_i}\big] + a\Big[-(R_i+1)e^{\lambda x_i}A_i^a + A_i^a\log(A_i)\big(R_i - a\{R_i+1\}e^{\lambda x_i}\big) + A_i^a\log(A_i)\Big] + A_i^a - 1\Big\}}{A_i^2},$$
and
$$\sigma_{aa} = \frac{-q_{\lambda\lambda}}{q_{aa}q_{\lambda\lambda}-q_{a\lambda}^2},\qquad \sigma_{\lambda\lambda} = \frac{-q_{aa}}{q_{aa}q_{\lambda\lambda}-q_{a\lambda}^2},\qquad \sigma_{a\lambda} = \sigma_{\lambda a} = \frac{q_{a\lambda}}{q_{aa}q_{\lambda\lambda}-q_{a\lambda}^2}.$$
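The σ quantities are the elements of the inverse of the negative second-derivative matrix. A quick non-circular numerical check multiplies −H by the resulting matrix and verifies the identity (the q values below are arbitrary placeholders for a concave log-likelihood):

```python
def lindley_sigma(q_aa, q_ll, q_al):
    # elements of [-H]^{-1} for H = [[q_aa, q_al], [q_al, q_ll]] (Appendix C)
    det = q_aa * q_ll - q_al ** 2
    return -q_ll / det, -q_aa / det, q_al / det   # sigma_aa, sigma_ll, sigma_al

q_aa, q_ll, q_al = -2.5, -4.0, 0.7   # placeholder Hessian entries
s_aa, s_ll, s_al = lindley_sigma(q_aa, q_ll, q_al)
# multiplying -H by the sigma matrix must give the identity matrix
assert abs(-q_aa * s_aa - q_al * s_al - 1.0) < 1e-12
assert abs(-q_al * s_aa - q_ll * s_al - 0.0) < 1e-12
```

The same closed forms are reused in Appendix D with the PSF-based q values.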

Appendix D. The Derivatives in Lindley’s Approximation Using PSF

$$q_{aa} = \frac{\partial^2 s(a,\lambda)}{\partial a^2},\qquad q_{\lambda\lambda} = \frac{\partial^2 s(a,\lambda)}{\partial\lambda^2},\qquad q_{a\lambda} = q_{\lambda a} = \frac{\partial^2 s(a,\lambda)}{\partial a\,\partial\lambda},$$
$$q_{aaa} = \sum_{i=1}^{m+1}\left[\frac{\phi_{111i}-\phi_{111,i-1}}{\delta_i} - \frac{3(\phi_{11i}-\phi_{11,i-1})(\phi_{1i}-\phi_{1,i-1})}{\delta_i^2} + \frac{2(\phi_{1i}-\phi_{1,i-1})^3}{\delta_i^3}\right] - \sum_{i=1}^{m}R_i A_i^a\log^3(A_i),$$
$$q_{aa\lambda} = q_{a\lambda a} = q_{\lambda aa} = \sum_{i=1}^{m+1}\left[-\frac{(\phi_{2i}-\phi_{2,i-1})(\phi_{11i}-\phi_{11,i-1}) + 2(\phi_{12i}-\phi_{12,i-1})(\phi_{1i}-\phi_{1,i-1})}{\delta_i^2} + \frac{\phi_{112i}-\phi_{112,i-1}}{\delta_i} + \frac{2(\phi_{2i}-\phi_{2,i-1})(\phi_{1i}-\phi_{1,i-1})^2}{\delta_i^3}\right] - \sum_{i=1}^{m}x_i R_i e^{\lambda x_i}A_i^{a-1}\log(A_i)\big(a\log(A_i)+2\big),$$
$$q_{\lambda\lambda\lambda} = \sum_{i=1}^{m+1}\left[\frac{\phi_{222i}-\phi_{222,i-1}}{\delta_i} - \frac{3(\phi_{22i}-\phi_{22,i-1})(\phi_{2i}-\phi_{2,i-1})}{\delta_i^2} + \frac{2(\phi_{2i}-\phi_{2,i-1})^3}{\delta_i^3}\right] - a\sum_{i=1}^{m}R_i x_i^3 e^{\lambda x_i}A_i^{a-3}\Big\{e^{\lambda x_i}\big[a\big(a e^{\lambda x_i}-3\big)+1\big]+1\Big\},$$
and
$$q_{a\lambda\lambda} = q_{\lambda\lambda a} = q_{\lambda a\lambda} = \sum_{i=1}^{m+1}\left[-\frac{(\phi_{1i}-\phi_{1,i-1})(\phi_{22i}-\phi_{22,i-1}) + 2(\phi_{12i}-\phi_{12,i-1})(\phi_{2i}-\phi_{2,i-1})}{\delta_i^2} + \frac{\phi_{122i}-\phi_{122,i-1}}{\delta_i} + \frac{2(\phi_{1i}-\phi_{1,i-1})(\phi_{2i}-\phi_{2,i-1})^2}{\delta_i^3}\right] - \sum_{i=1}^{m}R_i x_i^2 e^{\lambda x_i}A_i^{a-2}\big[2a e^{\lambda x_i} + a\big(a e^{\lambda x_i}-1\big)\log(A_i) - 1\big],$$
where
$$\phi_{111i} = e^{-A_i^a}A_i^a\log^3(A_i) - 3e^{-A_i^a}A_i^{2a}\log^3(A_i) + e^{-A_i^a}A_i^{3a}\log^3(A_i),$$
$$\phi_{112i} = -a x_i e^{\lambda x_i}e^{-A_i^a}\big(A_i^a-1\big)A_i^{a-1}\log^2(A_i) - a x_i e^{\lambda x_i}e^{-A_i^a}A_i^{2a-1}\log^2(A_i) + a x_i e^{\lambda x_i}e^{-A_i^a}\big(A_i^a-1\big)A_i^{2a-1}\log^2(A_i) - 2x_i e^{\lambda x_i}e^{-A_i^a}\big(A_i^a-1\big)A_i^{a-1}\log(A_i),$$
$$\phi_{222i} = (a-2)(a-1)a\,e^{3\lambda x_i}e^{-A_i^a}x_i^3 A_i^{a-3} + (a-1)a\,e^{2\lambda x_i}e^{-A_i^a}x_i^2\big(x_i - a e^{\lambda x_i}A_i^{a-1}x_i\big)A_i^{a-2} + (a-1)a\,e^{2\lambda x_i}e^{-A_i^a}x_i^2\big(2x_i - a e^{\lambda x_i}A_i^{a-1}x_i\big)A_i^{a-2} + a e^{\lambda x_i}e^{-A_i^a}x_i\big(x_i - a e^{\lambda x_i}A_i^{a-1}x_i\big)^2 A_i^{a-1} - a e^{\lambda x_i}e^{-A_i^a}x_i\big[(a-1)a\,e^{2\lambda x_i}x_i^2 A_i^{a-2} + a e^{\lambda x_i}x_i^2 A_i^{a-1}\big]A_i^{a-1},$$
and
$$\phi_{122i} = -x_i^2 e^{\lambda x_i}e^{-A_i^a}A_i^{a-2}\Big\{2a e^{\lambda x_i}\big(A_i^a-1\big) - a\big[A_i^a + a e^{\lambda x_i}\big(A_i^{2a} - 3A_i^a + 1\big) - 1\big]\log(A_i) + 1\Big\}.$$

References

1. Al-Babtain, A.A.; Shakhatreh, M.K.; Nassar, M.; Afify, A.Z. A new modified Kies family: Properties, estimation under complete and type-II censored samples, and engineering applications. Mathematics 2020, 8, 1345.
2. Aljohani, H.M.; Almetwally, E.M.; Alghamdi, A.S.; Hafez, E. Ranked set sampling with application of modified Kies exponential distribution. Alex. Eng. J. 2021, 60, 4041–4046.
3. Nassar, M.; Alam, F.M.A. Analysis of Modified Kies Exponential Distribution with Constant Stress Partially Accelerated Life Tests under Type-II Censoring. Mathematics 2022, 10, 819.
4. El-Raheem, A.M.A.; Almetwally, E.M.; Mohamed, M.S.; Hafez, E.H. Accelerated life tests for modified Kies exponential lifetime distribution: Binomial removal, transformers turn insulation application and numerical results. AIMS Math. 2021, 6, 5222–5255.
5. Ng, H.K.; Luo, L.; Hu, Y.; Duan, F. Parameter estimation of three-parameter Weibull distribution based on progressively type-II censored samples. J. Stat. Comput. Simul. 2012, 82, 1661–1678.
6. Singh, R.; Kumar Singh, S.; Singh, U. Maximum product spacings method for the estimation of parameters of generalized inverted exponential distribution under progressive type II censoring. J. Stat. Manag. Syst. 2016, 19, 219–245.
7. Alshenawy, R.; Al-Alwan, A.; Almetwally, E.M.; Afify, A.Z.; Almongy, H.M. Progressive type-II censoring schemes of extended odd Weibull exponential distribution with applications in medicine and engineering. Mathematics 2020, 8, 1679.
8. Lin, Y.J.; Okasha, H.M.; Basheer, A.M.; Lio, Y.L. Bayesian estimation of Marshall Olkin extended inverse Weibull under progressive type II censoring. Qual. Reliab. Eng. Int. 2023, 39, 931–957.
9. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring: Applications to Reliability and Quality; Springer: Berlin/Heidelberg, Germany, 2016.
10. Lawless, J.F. Statistical Models and Methods for Lifetime Data; John Wiley & Sons: Hoboken, NJ, USA, 2003.
11. Cheng, R.C.; Amin, N.A. Estimating parameters in continuous univariate distributions with a shifted origin. J. R. Stat. Soc. Ser. B (Methodol.) 1983, 45, 394–403.
12. Ranneby, B. The Maximum Spacing Method. An Estimation Method Related to the Maximum Likelihood Method. Scand. J. Stat. 1984, 11, 93–112.
13. Rahman, M.; Pearson, L.M. Estimation in two-parameter exponential distributions. J. Stat. Comput. Simul. 2001, 70, 371–386.
14. Singh, U.; Singh, S.K.; Singh, R.K. A comparative study of traditional estimation methods and maximum product spacings method in generalized inverted exponential distribution. J. Stat. Appl. Probab. 2014, 3, 153–169.
15. Rahman, M.; An, D.; Patwary, M.S. Method of product spacings parameter estimation for beta inverse Weibull distribution. Far East J. Theor. Stat. 2016, 52, 1–15.
16. Aruna, C.V. Estimation of Parameters of Lomax Distribution by Using Maximum Product Spacings Method. Int. J. Res. Appl. Sci. Eng. Technol. 2022, 10, 1316–1322.
17. Teimouri, M.; Nadarajah, S. MPS: An R package for modelling shifted families of distributions. Aust. N. Z. J. Stat. 2022, 64, 86–108.
18. Almetwally, E.M.; Jawa, T.M.; Sayed-Ahmed, N.; Park, C.; Zakarya, M.; Dey, S. Analysis of unit-Weibull based on progressive type-II censored with optimal scheme. Alex. Eng. J. 2023, 63, 321–338.
19. Tashkandy, Y.A.; Almetwally, E.M.; Ragab, R.; Gemeay, A.M.; El-Raouf, M.A.; Khosa, S.K.; Hussam, E.; Bakr, M. Statistical inferences for the extended inverse Weibull distribution under progressive type-II censored sample with applications. Alex. Eng. J. 2023, 65, 493–502.
20. DeGroot, M.H.; Schervish, M.J. Probability and Statistics; Pearson Education, Inc.: London, UK, 2019.
21. Cohen, A.C. Maximum Likelihood Estimation in the Weibull Distribution Based on Complete and on Censored Samples. Technometrics 1965, 7, 579–588.
22. Casella, G.; Berger, R.L. Statistical Inference; Cengage Learning: Duxbury, CA, USA, 2002.
23. Greene, W.H. Econometric Analysis; Prentice Hall: Englewood Cliffs, NJ, USA, 2000.
24. Fathi, A.; Farghal, A.W.A.; Soliman, A.A. Bayesian and Non-Bayesian inference for Weibull inverted exponential model under progressive first-failure censoring data. Mathematics 2022, 10, 1648.
25. Eliwa, M.S.; EL-Sagheer, R.M.; El-Essawy, S.H.; Almohaimeed, B.; Alshammari, F.S.; El-Morshedy, M. General entropy with Bayes techniques under Lindley and MCMC for estimating the new Weibull–Pareto parameters: Theory and application. Symmetry 2022, 14, 2395.
26. Dey, S.; Elshahhat, A.; Nassar, M. Analysis of progressive type-II censored gamma distribution. Comput. Stat. 2022, 38, 481–508.
27. Lindley, D.V. Approximate Bayesian methods. Trab. Estad. Y Investig. Oper. 1980, 31, 223–245.
28. Chen, M.H.; Shao, Q.M. Monte Carlo Estimation of Bayesian Credible and HPD Intervals. J. Comput. Graph. Stat. 1998, 8, 69–92.
29. Coolen, F.; Newby, M. A note on the use of the product of spacings in Bayesian inference. KM 1991, 37, 19–32.
30. Cohen, A.C.; Whitten, B.J. Parameter Estimation in Reliability and Life Span Models; Dekker: Nottingham, UK, 1988.
31. Basu, S.; Singh, S.K.; Singh, U. Bayesian inference using product of spacings function for Progressive Hybrid Type-I censoring scheme. Statistics 2017, 52, 345–363.
32. Nassar, M.; Elshahhat, A. Statistical Analysis of Inverse Weibull Constant-Stress Partially Accelerated Life Tests with Adaptive Progressively Type I Censored Data. Mathematics 2023, 11, 370.
33. Balakrishnan, N.; Sandhu, R.A. A Simple Simulational Algorithm for Generating Progressive Type-II Censored Samples. Am. Stat. 1995, 49, 229–230.
34. Kundu, D. Bayesian Inference and Life Testing Plan for the Weibull Distribution in Presence of Progressive Censoring. Technometrics 2008, 50, 144–154.
35. Kundu, D.; Howlader, H. Bayesian inference and prediction of the inverse Weibull distribution for Type-II censored data. Comput. Stat. Data Anal. 2010, 54, 1547–1558.
36. Kundu, D.; Raqab, M.Z. Bayesian inference and prediction of order statistics for a Type-II censored Weibull distribution. J. Stat. Plan. Inference 2012, 142, 41–47.
37. Basu, S.; Singh, S.K.; Singh, U. Estimation of inverse Lindley distribution using product of spacings function for hybrid censored data. Methodol. Comput. Appl. Probab. 2019, 21, 1377–1394.
38. Dey, S.; Dey, T.; Luckett, D.J. Statistical inference for the generalized inverted exponential distribution based on upper record values. Math. Comput. Simul. 2016, 120, 64–78.
39. Ijaz, M.; Mashwani, W.K.; Belhaouari, S.B. A novel family of lifetime distribution with applications to real and simulated data. PLoS ONE 2020, 15, e0238746.
40. Barlow, R.E.; Campo, R.A. Total Time on Test Processes and Applications to Failure Data Analysis; Technical Report; California Univ. Berkeley Operations Research Center: Berkeley, CA, USA, 1975.
Figure 1. 3D plot of log-LF for simulated dataset.
Figure 2. 3D plot of log-PSF for a simulated dataset.
Figure 3. Heat map for the estimation results of a (left) and λ (right).
Figure 4. Heat map for the estimation results of R ( t ) (left) and h ( t ) (right).
Figure 5. TTT plot for the first dataset.
Figure 6. Histogram of the first dataset with density curve, Estimated CDF, PP plot, and QQ plot.
Figure 7. Interval estimates of R ( x ) (left) and h ( x ) (right) for the third censoring sample generated from the first dataset.
Figure 8. Trace and density plots of MCMC draws of a and λ using the LF-based (left) and PSF-based (right) Bayesian estimation methods for the first dataset.
Figure 9. TTT plot for the second dataset.
Figure 10. Histogram of the second dataset with density curve, Estimated CDF, PP plot, and QQ plot.
Figure 11. Interval estimates of R ( x ) (left) and h ( x ) (right) for the third censoring sample generated from the second dataset.
Figure 12. Trace and density plots of MCMC draws of a and λ using the LF-based (left) and PSF-based (right) Bayesian estimation methods for the second dataset.
Table 1. Different censored data generated from the first dataset.
m | Censoring Scheme | Generated Samples
10 | sch1: (2 × 9, 56) | 0.040, 0.301, 0.309, 0.557, 0.943, 1.124, 1.248, 1.281, 1.303, 1.432
30 | sch2: (2 × 27, 0 × 3) | 0.040, 0.301, 0.309, 0.557, 0.943, 1.070, 1.124, 1.248, 1.281, 1.303, 1.480, 1.615, 1.619, 1.757, 1.866, 1.876, 1.899, 1.912, 2.085, 2.097, 2.154, 2.223, 2.224, 2.324, 2.934, 3.000, 3.117, 3.166, 3.595, 4.305
42 | sch3: (1 × 42) | 0.040, 0.301, 0.309, 0.557, 1.070, 1.124, 1.248, 1.281, 1.303, 1.432, 1.480, 1.505, 1.568, 1.619, 1.652, 1.757, 1.866, 1.876, 1.899, 1.911, 1.912, 1.981, 2.010, 2.038, 2.135, 2.154, 2.190, 2.223, 2.229, 2.300, 2.324, 2.385, 2.481, 2.625, 2.632, 2.962, 3.117, 3.166, 3.344, 3.443, 3.699, 4.121
Table 2. Point estimates (with their SEs) for the first dataset.
sch | Parameter | ML | MPS | Lindley-ML | Lindley-MPS | MCMC-ML | MCMC-MPS
1 | a | 1.2206 | 0.9658 | 1.4325 | 0.8454 | 1.3504 | 0.9426
  |   | (0.3466) | (0.2977) | (0.2744) | (0.2723) | (0.2387) | (0.1940)
  | λ | 0.1294 | 0.0853 | 0.1639 | 0.0623 | 0.1502 | 0.0821
  |   | (0.0629) | (0.0585) | (0.0525) | (0.0538) | (0.0391) | (0.0363)
  | R(t) | 0.9808 | 0.9710 | 0.9833 | 0.9627 | 0.9819 | 0.9669
  |   | (0.0126) | (0.0164) | (0.0123) | (0.0141) | (0.0105) | (0.0169)
  | h(t) | 0.0805 | 0.0958 | 0.0710 | 0.0977 | 0.0773 | 0.0992
  |   | (0.0341) | (0.0337) | (0.0328) | (0.0336) | (0.0328) | (0.0365)
2 | a | 1.7523 | 1.5726 | 1.8108 | 1.4385 | 1.7857 | 1.5176
  |   | (0.2382) | (0.2218) | (0.2309) | (0.1767) | (0.1638) | (0.1437)
  | λ | 0.2243 | 0.2143 | 0.2271 | 0.2031 | 0.2253 | 0.2110
  |   | (0.0185) | (0.0204) | (0.0183) | (0.0170) | (0.0130) | (0.0138)
  | R(t) | 0.9907 | 0.9860 | 0.9902 | 0.9800 | 0.9906 | 0.9830
  |   | (0.0056) | (0.0078) | (0.0056) | (0.0050) | (0.0042) | (0.0066)
  | h(t) | 0.0566 | 0.0761 | 0.0569 | 0.0969 | 0.0561 | 0.0863
  |   | (0.0268) | (0.0326) | (0.0268) | (0.0251) | (0.0195) | (0.0257)
3 | a | 1.7968 | 1.6612 | 1.8294 | 1.5703 | 1.8196 | 1.6282
  |   | (0.2145) | (0.2041) | (0.2120) | (0.1827) | (0.1569) | (0.1354)
  | λ | 0.2309 | 0.2262 | 0.2318 | 0.2216 | 0.2317 | 0.2256
  |   | (0.0146) | (0.0157) | (0.0146) | (0.0150) | (0.0106) | (0.0111)
  | R(t) | 0.9913 | 0.9879 | 0.9906 | 0.9838 | 0.9910 | 0.9861
  |   | (0.0049) | (0.0064) | (0.0049) | (0.0049) | (0.0037) | (0.0051)
  | h(t) | 0.0545 | 0.0695 | 0.0562 | 0.0854 | 0.0549 | 0.0766
  |   | (0.0244) | (0.0290) | (0.0243) | (0.0243) | (0.0180) | (0.0217)
Table 3. Interval estimates (with their lengths) for the first dataset.
sch MLMPSMCMC-MLMCMC-MPS
1a(0.5412, 1.9000)(0.3823, 1.5494)(0.9035, 1.8212)(0.5824, 1.3149)
[1.3588][1.1671][0.9177][0.7325]
λ (0.0062, 0.2526)(0.0000, 0.2000)(0.0749, 0.2225)(0.0209, 0.1550)
[0.2464][0.2000][0.1476][0.1341]
R ( t ) (0.9561, 1.0000)(0.9389, 1.0000)(0.962, 0.9976)(0.935, 0.9946)
[0.0439][0.0611][0.0356][0.0596]
h ( t ) (0.0137, 0.1474)(0.0298, 0.1618)(0.0215, 0.142)(0.0346, 0.1733)
[0.1337][0.1320][0.1205][0.1388]
2a(1.2854, 2.2191)(1.1379, 2.0073)(1.4706, 2.1054)(1.2526, 1.8175)
[0.9337][0.8693][0.6348][0.5649]
λ (0.188, 0.2606)(0.1743, 0.2543)(0.1999, 0.2505)(0.1853, 0.2385)
[0.0726][0.0800][0.0506][0.0533]
R ( t ) (0.9797, 1.0000)(0.9707, 1.0000)(0.982, 0.9971)(0.9698, 0.9939)
[0.0203][0.0293][0.0151][0.0241]
h ( t ) (0.0040, 0.1092)(0.0121, 0.1400)(0.0228, 0.0960)(0.0391, 0.1367)
[0.1052][0.1279][0.0731][0.0976]
3a(1.3765, 2.2172)(1.2612, 2.0612)(1.5198, 2.1271)(1.3551, 1.8802)
[0.8407][0.8000][0.6074][0.5251]
λ (0.2023, 0.2595)(0.1955, 0.2569)(0.2126, 0.2542)(0.2026, 0.2459)
[0.0572][0.0614][0.0416][0.0432]
R ( t ) (0.9816, 1.0000)(0.9753, 1.0000)(0.9836, 0.9971)(0.9757, 0.9945)
[0.0184][0.0247][0.0135][0.0188]
h ( t ) (0.0067, 0.1023)(0.0126, 0.1263)(0.0239, 0.0908)(0.038, 0.1195)
[0.0956][0.1138][0.0670][0.0815]
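The ML and MPS intervals in Table 3 are approximate (Wald-type) confidence intervals: the variance–covariance matrix is typically approximated by the inverse of the Hessian of the negative log-likelihood (or negative log-spacing function) at the estimates. A generic sketch with a finite-difference Hessian, demonstrated on a toy exponential likelihood rather than the MKE model:

```python
import numpy as np

def num_hessian(f, theta, eps=1e-5):
    """Forward finite-difference Hessian of a scalar function f at theta."""
    theta = np.asarray(theta, dtype=float)
    k = theta.size
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = eps
            ej = np.zeros(k); ej[j] = eps
            H[i, j] = (f(theta + ei + ej) - f(theta + ei)
                       - f(theta + ej) + f(theta)) / eps**2
    return H

def wald_ci(negloglik, theta_hat, z=1.96):
    """95% Wald intervals: theta_hat +/- z * sqrt(diag(inv(Hessian)))."""
    se = np.sqrt(np.diag(np.linalg.inv(num_hessian(negloglik, theta_hat))))
    return np.column_stack([theta_hat - z * se, theta_hat + z * se]), se

# toy exponential(rate lam) example, where se = lam_hat / sqrt(n) is known
xdata = np.array([0.5, 1.2, 0.3, 2.1, 0.8, 1.5, 0.9, 0.4, 1.1, 0.7])
nll = lambda th: -xdata.size * np.log(th[0]) + th[0] * xdata.sum()
lam_hat = xdata.size / xdata.sum()            # exponential MLE
ci, se = wald_ci(nll, np.array([lam_hat]))
```

Plugging the MKE censored objective from the previous section into `wald_ci` gives intervals of the same type as the ML/MPS columns; the MCMC columns are instead highest posterior density intervals from the posterior samples.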
Table 4. Different censored data generated from the second dataset.

| m | Censoring Scheme | Generated Samples |
|---|---|---|
| 10 | sch1: (2 × 9, 32) | 0.014, 0.034, 0.059, 0.069, 0.080, 0.123, 0.142, 0.210, 0.464, 0.556 |
| 20 | sch2: (1 × 19, 21) | 0.014, 0.034, 0.059, 0.061, 0.069, 0.080, 0.123, 0.142, 0.165, 0.210, 0.464, 0.556, 0.574, 0.839, 0.991, 1.064, 1.088, 1.270, 1.275, 1.355 |
| 30 | sch3: (1 × 30) | 0.014, 0.034, 0.059, 0.061, 0.069, 0.080, 0.123, 0.142, 0.165, 0.210, 0.381, 0.464, 0.479, 0.556, 0.574, 0.839, 0.969, 0.991, 1.064, 1.270, 1.275, 1.397, 1.578, 1.702, 1.932, 2.292, 2.628, 3.248, 3.912, 4.116 |
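Samples like those in Table 4 are obtained by progressively censoring a complete dataset: at the i-th observed failure, R_i of the surviving units are withdrawn at random. A minimal sketch; the helper name and the synthetic stand-in data are ours, and because the withdrawals are random a fixed seed will not reproduce the paper's exact samples:

```python
import random

def progressive_type2(data, scheme, seed=1):
    """Draw a progressive type-II censored sample from complete data.
    scheme[i] = number of survivors withdrawn after the i-th failure."""
    rng = random.Random(seed)
    pool = sorted(data)
    assert len(pool) == len(scheme) + sum(scheme)  # n = m + sum(R_i)
    observed = []
    for r in scheme:
        observed.append(pool.pop(0))       # next failure = smallest survivor
        for _ in range(r):                 # withdraw r survivors at random
            pool.pop(rng.randrange(len(pool)))
    return observed

# e.g. sch1 above: m = 10, scheme (2 x 9, 32), so n = 10 + 18 + 32 = 60
complete = [i / 10.0 for i in range(60)]   # synthetic stand-in data
sample = progressive_type2(complete, [2] * 9 + [32])
```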
Table 5. Point estimates (with their SEs) for the second dataset.

| sch | Parameter | ML | MPS | Lindley-ML | Lindley-MPS | MCMC-ML | MCMC-MPS |
|---|---|---|---|---|---|---|---|
| 1 | a | 0.6877 (0.1876) | 0.5762 (0.1688) | 0.7767 (0.1651) | 0.5267 (0.1614) | 0.7271 (0.1387) | 0.5653 (0.1115) |
| | λ | 0.2084 (0.1543) | 0.1311 (0.1270) | 0.2935 (0.1287) | 0.1199 (0.1265) | 0.2423 (0.1186) | 0.1379 (0.0892) |
| | R(t) | 0.8591 (0.0421) | 0.8549 (0.0428) | 0.8569 (0.0420) | 0.8521 (0.0427) | 0.8590 (0.0432) | 0.8508 (0.0462) |
| | h(t) | 0.3591 (0.1367) | 0.3070 (0.1238) | 0.4131 (0.1256) | 0.2851 (0.1218) | 0.3779 (0.1257) | 0.3075 (0.1041) |
| 2 | a | 0.7036 (0.1314) | 0.6391 (0.1228) | 0.7479 (0.1237) | 0.5894 (0.1123) | 0.7226 (0.0970) | 0.6323 (0.0857) |
| | λ | 0.2547 (0.0828) | 0.2243 (0.0841) | 0.2816 (0.0783) | 0.1976 (0.0797) | 0.2675 (0.0603) | 0.2262 (0.0606) |
| | R(t) | 0.8452 (0.0408) | 0.8335 (0.0418) | 0.8472 (0.0407) | 0.8233 (0.0406) | 0.8447 (0.0358) | 0.8287 (0.0374) |
| | h(t) | 0.4098 (0.0865) | 0.4012 (0.0859) | 0.4194 (0.0860) | 0.3846 (0.0843) | 0.4149 (0.0749) | 0.4029 (0.0731) |
| 3 | a | 0.7150 (0.1004) | 0.6651 (0.0952) | 0.7300 (0.0993) | 0.6258 (0.0867) | 0.7227 (0.0698) | 0.6528 (0.0654) |
| | λ | 0.2967 (0.0533) | 0.2809 (0.0555) | 0.3055 (0.0526) | 0.2748 (0.0551) | 0.3022 (0.0396) | 0.2790 (0.0408) |
| | R(t) | 0.8327 (0.0419) | 0.8200 (0.0429) | 0.8318 (0.0419) | 0.8043 (0.0400) | 0.8320 (0.0291) | 0.8146 (0.0309) |
| | h(t) | 0.4561 (0.0791) | 0.4588 (0.0765) | 0.4573 (0.0791) | 0.4644 (0.0763) | 0.4581 (0.0569) | 0.4596 (0.0550) |
Table 6. Interval estimates (with their lengths) for the second dataset.

| sch | Parameter | ML | MPS | MCMC-ML | MCMC-MPS |
|---|---|---|---|---|---|
| 1 | a | (0.3200, 1.0554) [0.7354] | (0.2453, 0.9070) [0.6617] | (0.4783, 1.0092) [0.5308] | (0.3570, 0.7898) [0.4329] |
| | λ | (0.0000, 0.5108) [0.5108] | (0.0000, 0.3799) [0.3799] | (0.0365, 0.4687) [0.4322] | (0.0058, 0.3122) [0.3063] |
| | R(t) | (0.7767, 0.9416) [0.1649] | (0.7711, 0.9388) [0.1677] | (0.7739, 0.9404) [0.1666] | (0.7587, 0.9364) [0.1777] |
| | h(t) | (0.0910, 0.6271) [0.5360] | (0.0644, 0.5496) [0.4852] | (0.1447, 0.6213) [0.4766] | (0.1249, 0.5134) [0.3885] |
| 2 | a | (0.4461, 0.9611) [0.5150] | (0.3985, 0.8798) [0.4813] | (0.5417, 0.9116) [0.3698] | (0.4696, 0.7958) [0.3262] |
| | λ | (0.0924, 0.4169) [0.3245] | (0.0596, 0.3891) [0.3295] | (0.1491, 0.3818) [0.2327] | (0.1105, 0.3443) [0.2338] |
| | R(t) | (0.7652, 0.9251) [0.1599] | (0.7515, 0.9155) [0.1640] | (0.7721, 0.9099) [0.1378] | (0.7533, 0.8976) [0.1444] |
| | h(t) | (0.2403, 0.5794) [0.3391] | (0.2329, 0.5695) [0.3366] | (0.2672, 0.5588) [0.2916] | (0.2639, 0.5485) [0.2846] |
| 3 | a | (0.5183, 0.9118) [0.3935] | (0.4786, 0.8517) [0.3732] | (0.5887, 0.8609) [0.2721] | (0.5292, 0.7851) [0.2559] |
| | λ | (0.1921, 0.4012) [0.2091] | (0.1722, 0.3896) [0.2174] | (0.2291, 0.3843) [0.1552] | (0.2001, 0.3586) [0.1586] |
| | R(t) | (0.7506, 0.9148) [0.1642] | (0.7359, 0.9042) [0.1683] | (0.7757, 0.8896) [0.1139] | (0.7500, 0.8706) [0.1206] |
| | h(t) | (0.3011, 0.6112) [0.3101] | (0.3089, 0.6086) [0.2997] | (0.3443, 0.5681) [0.2238] | (0.3534, 0.5665) [0.2131] |
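The R(t) and h(t) rows throughout Tables 2, 3, 5 and 6 are plug-in values of the MKE reliability and hazard functions at the parameter estimates: R(t) = exp(−(e^{λt} − 1)^a) and h(t) = aλ e^{λt}(e^{λt} − 1)^{a−1}. A small sketch; the mission time t = 1 is an illustrative choice (the paper's t may differ), with parameter values borrowed from the sch3 ML column of Table 5:

```python
import math

def mke_reliability(t, a, lam):
    # R(t) = exp(-(e^{lam*t} - 1)^a)
    return math.exp(-math.expm1(lam * t) ** a)

def mke_hazard(t, a, lam):
    # h(t) = a * lam * e^{lam*t} * (e^{lam*t} - 1)^(a-1)
    u = math.expm1(lam * t)
    return a * lam * math.exp(lam * t) * u ** (a - 1.0)

a_ml, lam_ml = 0.7150, 0.2967   # sch3 ML point estimates, second dataset
t0 = 1.0                        # illustrative mission time
r_t = mke_reliability(t0, a_ml, lam_ml)
h_t = mke_hazard(t0, a_ml, lam_ml)
```

As a sanity check, h(t) must equal −d/dt log R(t), which ties the two functions together independently of the parameter values.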
Kurdi, T.; Nassar, M.; Alam, F.M.A. Bayesian Estimation Using Product of Spacing for Modified Kies Exponential Progressively Censored Data. Axioms 2023, 12, 917. https://doi.org/10.3390/axioms12100917