Article

Classical and Bayesian Inference of an Exponentiated Half-Logistic Distribution under Adaptive Type II Progressive Censoring

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Entropy 2021, 23(12), 1558; https://doi.org/10.3390/e23121558
Submission received: 18 October 2021 / Revised: 16 November 2021 / Accepted: 17 November 2021 / Published: 23 November 2021
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract:
The point and interval estimations of the unknown parameters of an exponentiated half-logistic distribution based on adaptive type II progressive censoring are obtained in this article. First, the maximum likelihood estimators are derived. Afterward, the observed and expected Fisher information matrices are obtained to construct the asymptotic confidence intervals. Meanwhile, the percentile bootstrap method and the bootstrap-t method are put forward for the establishment of confidence intervals. With respect to Bayesian estimation, the Lindley method is used under three different loss functions. The importance sampling method is also applied to calculate Bayesian estimates and construct corresponding highest posterior density (HPD) credible intervals. Finally, numerous simulation studies are conducted on the basis of Markov chain Monte Carlo (MCMC) samples to compare the performance of the estimates, and a real data set is analyzed for illustrative purposes.

1. Introduction

1.1. Adaptive Type II Progressive Censoring Scheme

Nowadays, owing to the development of science and technology, industrial products have become highly reliable; as a result, obtaining sufficient failure times during a life-testing experiment for statistical analysis leads to a sharp increase in cost and time. Hence, the aim of reducing test time and saving cost leads us into the realm of censoring. With units removed purposefully before their failure times, the duration and cost can be greatly reduced. Many statisticians have investigated various censoring schemes. The two most commonly used are the type I and type II censoring schemes. In type I censoring, the life-testing experiment terminates at a predetermined time, while, under type II censoring, the experiment stops once the number of observed failures reaches a predetermined value. For the sake of further reducing experimental cost and time, a combination of these two schemes called hybrid censoring was put forward. However, none of these schemes permits surviving units to be removed during the experiment, which lacks flexibility. Accordingly, the concept of progressive censoring was brought forward by [1] to allow removals of units at times other than the terminal experimental time. A concise presentation of progressive type II censoring is as follows. Assume that there are $n$ identical units in the test. The failure times are denoted by $X=(X_{(1:m:n)}, X_{(2:m:n)}, \ldots, X_{(m-1:m:n)}, X_{(m:m:n)})$, and the censoring scheme by $R=(R_1, R_2, \ldots, R_{m-1}, R_m)$, where $n-m=\sum_{i=1}^{m} R_i$. When the first unit fails at $X_{(1:m:n)}$, we randomly remove $R_1$ units from the $n-1$ remaining units. Then, on the occurrence of the $j$-th failure, we remove $R_j$ units from the $n-j-\sum_{i=1}^{j-1}R_i$ remaining units in the same way. The reader can refer to [1,2] for further information on progressive censoring.
However, one drawback of the scheme is that researchers cannot control the experiment time in practical terms. Recently, Ref. [3] proposed a new scheme called adaptive type II progressive censoring in the interest of saving total time and improving analysis efficiency. Based on progressive type II censoring, an expected total experimental time $T$ is also pre-fixed before the test. If $T>X_{(m:m:n)}$, the experiment is implemented according to the progressive type II censoring scheme with $R=(R_1, R_2, \ldots, R_{m-1}, R_m)$ and terminates at time $X_{(m:m:n)}$. However, once the actual time runs over $T$, namely $T<X_{(m:m:n)}$, we do not stop the test at $T$ but no longer remove surviving units after the prefixed time $T$. Suppose that the time runs over $T$ right after the occurrence of the $J$-th failure, namely $J=\max\{j: X_{(j:m:n)}<T\}$. Therefore, once the actual test time runs over $T$, the censoring scheme after time $T$ becomes $R_{J+1}=R_{J+2}=\cdots=R_{m-1}=0$ and $R_m=n-m-\sum_{i=1}^{J}R_i$. In particular, there are two special situations depending on $T$. If $T=\infty$, the scheme reduces to progressive type II censoring; if $T=0$, it becomes the common type II censoring scheme. Figure 1 presents adaptive type II progressive censoring.
Since the adaptive type II progressive censoring scheme was proposed, its good properties have attracted a great number of researchers to this field. The adaptive progressive type II censoring model was further studied in Ref. [4]. Under this censoring model, Ref. [5] also studied the estimation of the unknown parameters of the Weibull distribution, deriving both classical and Bayesian estimates. The adaptive type II progressive censoring scheme was combined with the exponential step-stress accelerated life-testing model to derive confidence intervals in Ref. [6]. Furthermore, this censoring scheme was also extended by taking account of competing risks under the two-parameter Rayleigh distribution, with classical and Bayesian inference carried out in Ref. [7].

1.2. The Exponentiated Half-Logistic Distribution

The exponentiated half-logistic distribution (EHL) is widely used in numerous applications, particularly in parameter estimation. It has been applied in many areas, including insurance, engineering, medicine, and education. This distribution is suitable for modeling lifetime data and closely parallels the two-parameter family of distributions, as noted in Ref. [8]. For example, the Gamma distribution is an important member of that family. However, compared to the Gamma distribution, the exponentiated half-logistic distribution has an advantage due to the closed form of its cumulative distribution function.
In this article, we focus on the exponentiated half-logistic distribution. The probability density function (PDF) is written as:
$$f(x;\lambda,\sigma)=\frac{\lambda}{\sigma}\left(\frac{1-e^{-\frac{x}{\sigma}}}{1+e^{-\frac{x}{\sigma}}}\right)^{\lambda-1}\frac{2e^{-\frac{x}{\sigma}}}{\big(1+e^{-\frac{x}{\sigma}}\big)^{2}},\quad x>0,\ \lambda,\sigma>0,$$
and the cumulative distribution function (CDF) is described as
$$F(x;\lambda,\sigma)=\left(\frac{1-e^{-\frac{x}{\sigma}}}{1+e^{-\frac{x}{\sigma}}}\right)^{\lambda},\quad x>0,\ \lambda,\sigma>0,$$
where $\lambda>0$ is the shape parameter and $\sigma>0$ is the scale parameter. We denote this distribution as EHL($\lambda,\sigma$).
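For reference, the PDF and CDF above translate directly into code (a sketch; the function names are ours):

```python
import math

def ehl_pdf(x, lam, sigma):
    # f(x; lam, sigma) = (lam/sigma) * ((1-u)/(1+u))^(lam-1) * 2u/(1+u)^2, u = exp(-x/sigma)
    u = math.exp(-x / sigma)
    return (lam / sigma) * ((1 - u) / (1 + u)) ** (lam - 1) * 2 * u / (1 + u) ** 2

def ehl_cdf(x, lam, sigma):
    # F(x; lam, sigma) = ((1-u)/(1+u))^lam
    u = math.exp(-x / sigma)
    return ((1 - u) / (1 + u)) ** lam
```

A quick sanity check is that a numerical derivative of `ehl_cdf` recovers `ehl_pdf`.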
The corresponding reliability function is written as:
$$R(t)=1-\left(\frac{1-e^{-\frac{t}{\sigma}}}{1+e^{-\frac{t}{\sigma}}}\right)^{\lambda},\quad t>0,$$
while the hazard rate function is:
$$h(t)=\frac{2\lambda e^{-\frac{t}{\sigma}}}{\sigma\big(1+e^{-\frac{t}{\sigma}}\big)^{2}\left[1-\left(\frac{1-e^{-\frac{t}{\sigma}}}{1+e^{-\frac{t}{\sigma}}}\right)^{\lambda}\right]}\left(\frac{1-e^{-\frac{t}{\sigma}}}{1+e^{-\frac{t}{\sigma}}}\right)^{\lambda-1},\quad t>0.$$
From Figure 2, when $\lambda\sigma>1$, the PDF of the exponentiated half-logistic distribution is unimodal, while for $\lambda\sigma<1$ it is monotonically decreasing. When $\lambda$ is fixed, the smaller $\sigma$ is, the more sharply the PDF decreases. As for the CDF, its growth becomes slower as $\sigma$ increases; furthermore, a smaller $\lambda$ results in a faster rise.
When $\lambda=1$, the exponentiated half-logistic distribution reduces to the well-known half-logistic distribution. The half-logistic distribution is used extensively, particularly for censored data in survival analysis, and has been studied by several researchers. The order statistics of the half-logistic distribution were studied in Ref. [9]. On the basis of progressively type II censored data, Ref. [10] derived the classical and Bayes estimators of the scale parameter of this distribution. Building on the results of [10], analytic expressions for the biases of the maximum likelihood estimators of the distribution were studied further in [11]. The generalized ranked-set sampling technique was employed to estimate the parameters of the half-logistic distribution in [12].
The exponentiated half-logistic distribution has recently attracted many researchers. On the basis of progressive type II censored data, Ref. [13] derived the maximum likelihood estimator of the scale parameter of an exponentiated half-logistic distribution and proposed some approximate maximum likelihood estimators as well. In addition to the MLE, Ref. [14] focused on the moment estimators and an entropy estimator for this distribution. To promote the practicability of the distribution, Ref. [15] extended it by putting forward the exponentiated half-logistic family, a new generator of continuous distributions with two extra parameters. Considering that a life test sometimes stops at a pre-determined time, Ref. [16] developed acceptance sampling plans for the percentiles of this distribution, presenting both the operating characteristic values of the sampling plans and the producer's risk. Based on the distribution, Ref. [17] proposed an attribute control chart for time-truncated life tests with different shape parameters. Thus far, research associated with this distribution still leaves much room to explore.
In this article, the point and interval estimation of the parameters of the exponentiated half-logistic distribution under adaptive type II progressive censored data is considered. We organize the remainder of the paper as follows. In Section 2, the maximum likelihood estimates are derived and computed. Meanwhile, the observed and expected Fisher information matrices are acquired, and the asymptotic confidence intervals are then established. We employ the bootstrap resampling method to build two bootstrap confidence intervals in Section 3. In Section 4, Bayesian estimation under several loss functions is carried out by utilizing the Lindley method. The importance sampling method is also used to calculate the Bayesian estimates and construct the highest posterior density (HPD) credible intervals. Simulations are conducted, and the behaviors of the estimators obtained with the diverse methods are evaluated and compared in Section 5. A real data set is studied in Section 6 to illustrate the effectiveness of the estimation methods of the preceding sections. Finally, conclusions on the point and interval estimations are drawn in Section 7.

2. Maximum Likelihood Estimation

2.1. Point Estimation

In this section, maximum likelihood estimation is used to estimate the unknown parameters on the basis of adaptive type II progressive censored data. Assume that the adaptive type II progressive censored data come from an exponentiated half-logistic distribution. Let $x_{i:m:n}$ denote the $i$-th observation; thus $x_{1:m:n}<x_{2:m:n}<\cdots<x_{m:m:n}$. In addition, $T$ represents the expected experimental time and $J$ denotes the index of the last failure before time $T$.
For the sake of simplicity, let $\underline{x}=(x_1,x_2,\ldots,x_m)$ denote $(x_{(1:m:n)},x_{(2:m:n)},\ldots,x_{(m:m:n)})$. The likelihood function is
$$L(\lambda,\sigma|\underline{x})=D_J\,[1-F(x_m)]^{\,n-m-\sum_{i=1}^{J}R_i}\prod_{i=1}^{J}[1-F(x_i)]^{R_i}\prod_{i=1}^{m}f(x_i),$$
where
$$D_J=\prod_{i=1}^{m}\Big(n+1-i-\sum_{k=1}^{\min\{J,i-1\}}R_k\Big).$$
Substituting the PDF and CDF, the likelihood function becomes
$$L(\lambda,\sigma|\underline{x})=D_J\,\frac{2^m\lambda^m}{\sigma^m}\,e^{-\lambda\sum_{i=1}^{m}\ln\frac{1+e^{-x_i/\sigma}}{1-e^{-x_i/\sigma}}-\frac{1}{\sigma}\sum_{i=1}^{m}x_i}\prod_{i=1}^{m}\frac{1}{1-e^{-2x_i/\sigma}}\times\prod_{i=1}^{J}\Big[1-\Big(\frac{1-e^{-x_i/\sigma}}{1+e^{-x_i/\sigma}}\Big)^{\lambda}\Big]^{R_i}\Big[1-\Big(\frac{1-e^{-x_m/\sigma}}{1+e^{-x_m/\sigma}}\Big)^{\lambda}\Big]^{n-m-\sum_{i=1}^{J}R_i}.$$
Therefore, the log-likelihood function is
$$l(\lambda,\sigma|\underline{x})=D+m\ln\lambda-m\ln\sigma-\frac{1}{\sigma}\sum_{i=1}^{m}x_i-\lambda\sum_{i=1}^{m}\ln\frac{1+e^{-x_i/\sigma}}{1-e^{-x_i/\sigma}}+\sum_{i=1}^{m}\ln\frac{1}{1-e^{-2x_i/\sigma}}+\sum_{i=1}^{J}R_i\ln(1-F(x_i))+\Big(n-m-\sum_{i=1}^{J}R_i\Big)\ln(1-F(x_m)),$$
where $D$ is a constant.
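As a cross-check, the log-likelihood above (up to the constant $D$) translates term by term into code; the function name and the toy censored sample in the test below are ours:

```python
import math

def adaptive_loglik(sigma, lam, xs, R, J, n):
    """Log-likelihood (up to the additive constant D) under adaptive type II
    progressive censoring, translated term by term from the text."""
    m = len(xs)
    ll = m * math.log(lam) - m * math.log(sigma)
    for x in xs:
        u = math.exp(-x / sigma)
        # -x_i/sigma - lam*ln((1+u)/(1-u)) + ln(1/(1-e^{-2x_i/sigma}))
        ll += -x / sigma - lam * math.log((1 + u) / (1 - u)) - math.log(1 - u * u)

    def log_cdf(x):
        u = math.exp(-x / sigma)
        return lam * math.log((1 - u) / (1 + u))

    # censoring terms: sum R_i ln(1-F(x_i)) + (n-m-sum R_i) ln(1-F(x_m))
    for i in range(J):
        ll += R[i] * math.log(1 - math.exp(log_cdf(xs[i])))
    ll += (n - m - sum(R[:J])) * math.log(1 - math.exp(log_cdf(xs[-1])))
    return ll
```

Evaluating it on a small censored sample gives a finite value that, as expected, favors scale parameters near the data over badly misspecified ones.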
Taking the partial derivatives with respect to $\sigma$ and $\lambda$ separately and setting them equal to zero yields
$$\frac{\partial l}{\partial\sigma}=-\frac{1}{\sigma}\Big\{m+\Big(1-\frac{1}{\lambda}\Big)\sum_{i=1}^{m}\zeta_i x_i-\frac{1}{\sigma}\sum_{i=1}^{m}(F(x_i))^{\frac{1}{\lambda}}x_i-\sum_{i=1}^{J}R_i\eta_i x_i-\Big(n-m-\sum_{i=1}^{J}R_i\Big)\eta_m x_m\Big\}=0,$$
$$\frac{\partial l}{\partial\lambda}=\frac{1}{\lambda}\Big\{m+\sum_{i=1}^{m}\ln F(x_i)-\sum_{i=1}^{J}R_iG_iF(x_i)-\Big(n-m-\sum_{i=1}^{J}R_i\Big)G_mF(x_m)\Big\}=0,$$
where $\zeta_i=\frac{f(x_i)}{F(x_i)}$, $\eta_i=\frac{f(x_i)}{1-F(x_i)}$, and $G_i=\frac{\ln F(x_i)}{1-F(x_i)}$.
The roots of these equations are the MLEs. However, owing to the nonlinearity of the equations, explicit expressions cannot be obtained. Thus, the Newton–Raphson method, which finds the roots of equations via a Taylor series expansion, is employed to acquire the MLEs, written as $\hat{\sigma}$ and $\hat{\lambda}$.
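To illustrate the Newton–Raphson step, the following sketch fits the two parameters by maximizing a complete-sample (uncensored) EHL log-likelihood with finite-difference gradients and Hessians. This is a simplification of the censored setting, and all function names are ours:

```python
import math, random

def rehl(n, lam, sigma, rng):
    """Draw from EHL(lam, sigma) by inverting the CDF: x = -sigma*ln((1-t)/(1+t)), t = w^(1/lam)."""
    out = []
    for _ in range(n):
        t = rng.random() ** (1.0 / lam)
        out.append(-sigma * math.log((1 - t) / (1 + t)))
    return out

def loglik(theta, xs):
    """Complete-sample EHL log-likelihood (illustration; no censoring terms)."""
    sigma, lam = theta
    if sigma <= 0 or lam <= 0:
        return -math.inf
    ll = 0.0
    for x in xs:
        u = math.exp(-x / sigma)
        ll += (math.log(2 * lam / sigma) + (lam - 1) * math.log((1 - u) / (1 + u))
               + math.log(u) - 2 * math.log(1 + u))
    return ll

def newton_mle(xs, theta0, h=1e-4, steps=60):
    """Damped Newton-Raphson on (sigma, lambda) with finite-difference score and Hessian."""
    th = list(theta0)
    e = [(h, 0.0), (0.0, h)]
    for _ in range(steps):
        g = [(loglik((th[0] + dx, th[1] + dy), xs)
              - loglik((th[0] - dx, th[1] - dy), xs)) / (2 * h) for dx, dy in e]
        H = [[0.0, 0.0], [0.0, 0.0]]
        for i, (ax, ay) in enumerate(e):
            for j, (bx, by) in enumerate(e):
                H[i][j] = (loglik((th[0] + ax + bx, th[1] + ay + by), xs)
                           - loglik((th[0] + ax - bx, th[1] + ay - by), xs)
                           - loglik((th[0] - ax + bx, th[1] - ay + by), xs)
                           + loglik((th[0] - ax - bx, th[1] - ay - by), xs)) / (4 * h * h)
        det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
        if abs(det) < 1e-12:
            break
        # Newton direction d = H^{-1} g, solved explicitly for the 2x2 system
        d = [(H[1][1] * g[0] - H[0][1] * g[1]) / det,
             (H[0][0] * g[1] - H[1][0] * g[0]) / det]
        step, cur = 1.0, loglik(th, xs)
        while step > 1e-6:   # halve the step until the log-likelihood does not decrease
            if loglik((th[0] - step * d[0], th[1] - step * d[1]), xs) > cur - 1e-9:
                break
            step /= 2
        th = [th[0] - step * d[0], th[1] - step * d[1]]
        if abs(d[0]) + abs(d[1]) < 1e-8:
            break
    return th
```

The step-halving guard keeps the iteration from overshooting into an invalid (non-positive) region of the parameter space.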

2.2. Asymptotic Confidence Interval

In this subsection, the asymptotic confidence intervals for $\sigma$ and $\lambda$ are established by employing $Var(\hat{\sigma})$ and $Var(\hat{\lambda})$. We acquire these intervals from the variance–covariance matrix, i.e., the inverse of the Fisher information matrix. The Fisher information matrix generalizes the Fisher information, which represents, in a certain sense, the average amount of information about the parameters that a sample of random variables can provide. The Fisher information matrix (FIM) $I(\sigma,\lambda)$ is
$$I(\sigma,\lambda)=-E\begin{pmatrix}\frac{\partial^2 l(\lambda,\sigma)}{\partial\sigma^2}&\frac{\partial^2 l(\lambda,\sigma)}{\partial\lambda\partial\sigma}\\ \frac{\partial^2 l(\lambda,\sigma)}{\partial\lambda\partial\sigma}&\frac{\partial^2 l(\lambda,\sigma)}{\partial\lambda^2}\end{pmatrix}.$$
Here,
$$\frac{\partial^2 l}{\partial\lambda^2}=-\frac{1}{\lambda^2}\Big[m+\sum_{i=1}^{J}R_iG_i^2F(x_i)+\Big(n-\sum_{i=1}^{J}R_i-m\Big)F(x_m)G_m^2\Big],$$
$$\frac{\partial^2 l}{\partial\lambda\partial\sigma}=\frac{1}{\lambda\sigma}\Big[-\sum_{i=1}^{m}\zeta_i x_i+\sum_{i=1}^{J}R_i x_i\eta_i(1+G_i)+\Big(n-m-\sum_{i=1}^{J}R_i\Big)x_m\eta_m(1+G_m)\Big],$$
$$\frac{\partial^2 l}{\partial\sigma^2}=\frac{1}{\sigma^2}\Big\{m+\Big(1-\frac{1}{\lambda}\Big)\sum_{i=1}^{m}x_i\big[(1-H_i)\zeta_i-\zeta_i^2\big]-\frac{1}{\sigma}\sum_{i=1}^{m}x_i(F(x_i))^{\frac{1}{\lambda}}\Big(2+\frac{1}{\lambda}\zeta_i\Big)+\sum_{i=1}^{J}R_ix_i\big[\eta_i^2+(H_i-1)\eta_i\big]+\Big(n-m-\sum_{i=1}^{J}R_i\Big)x_m\big[\eta_m^2+(H_m-1)\eta_m\big]\Big\},$$
where $H_i=1+\frac{x_i}{\sigma}(F(x_i))^{\frac{1}{\lambda}}+\frac{(1+\lambda)x_i}{\lambda}\zeta_i$.
The FIM $I(\sigma,\lambda)$ is called the expected Fisher information matrix. It is determined by the distribution of the order statistics $X_{(i)}$. The PDF of $X_{(i)}$ based on a progressive type II censored sample can be derived from [1]:
$$f_{X_{(i)}}(x_{(i)})=c_{i-1}^{0}\sum_{k=1}^{i}d_{k,i}^{0}\,f(x_{(i)})\,[1-F(x_{(i)})]^{r_k^0-1},$$
where
$$c_{i-1}^{0}=\prod_{k=1}^{i}r_k^0,\quad r_i^0=m+1-i+\sum_{k=i}^{m}R_k,\ i=1,2,\ldots,j,\quad d_{1,1}^0=1,\quad d_{k,i}^0=\prod_{h=1,h\neq k}^{i}\frac{1}{r_h^0-r_k^0},\ 1\le k\le i\le j.$$
Adaptive progressive type II censoring can be considered an improvement of progressive type II censoring. Actually, the PDF of $X_{(i)}$ of EHL($\lambda,\sigma$) under adaptive progressive type II censoring turns out to be
$$f_{X_{(i)}}(x_{(i)})=\frac{c_{i-1}^{1}}{c_{j-1}^{1}}\sum_{k=j+1}^{i}d_{k,i}^{1}\,v(x_{(i)})\,[1-V(x_{(i)})]^{r_k^1-1},$$
where
$$c_{i-1}^{1}=\prod_{k=1}^{i}r_k^1,\quad r_i^1=n-i+1-\sum_{k=1}^{j}R_k,\ i=j+1,j+2,\ldots,m,\quad d_{j+1,j+1}^1=1,\quad d_{k,i}^1=\prod_{h=j+1,h\neq k}^{i}\frac{1}{r_h^1-r_k^1},\ j+1\le k\le i\le m,$$
$$v(x_{(i)})=\frac{f(x_{(i)})}{1-F(x_{(j)})},\qquad V(x_{(i)})=\frac{F(x_{(i)})-F(x_{(j)})}{1-F(x_{(j)})}.$$
After simplification, Formula (15) can be written as
$$f_{X_{(i)}}(x_{(i)})=\begin{cases}c_{i-1}^{0}\sum\limits_{k=1}^{i}d_{k,i}^{0}\,\dfrac{\lambda}{\sigma}\Big(\dfrac{1-e^{-x_{(i)}/\sigma}}{1+e^{-x_{(i)}/\sigma}}\Big)^{\lambda-1}\dfrac{2e^{-x_{(i)}/\sigma}}{(1+e^{-x_{(i)}/\sigma})^{2}}\Big[1-\Big(\dfrac{1-e^{-x_{(i)}/\sigma}}{1+e^{-x_{(i)}/\sigma}}\Big)^{\lambda}\Big]^{r_k^0-1},& i=1,2,\ldots,j,\\[10pt]\dfrac{c_{i-1}^{1}}{c_{j-1}^{1}}\sum\limits_{k=j+1}^{i}d_{k,i}^{1}\,\dfrac{\lambda}{\sigma}\Big(\dfrac{1-e^{-x_{(i)}/\sigma}}{1+e^{-x_{(i)}/\sigma}}\Big)^{\lambda-1}\dfrac{2e^{-x_{(i)}/\sigma}}{(1+e^{-x_{(i)}/\sigma})^{2}}\,\dfrac{1}{1-\Big(\dfrac{1-e^{-x_{(j)}/\sigma}}{1+e^{-x_{(j)}/\sigma}}\Big)^{\lambda}}\left[\dfrac{1-\Big(\dfrac{1-e^{-x_{(i)}/\sigma}}{1+e^{-x_{(i)}/\sigma}}\Big)^{\lambda}}{1-\Big(\dfrac{1-e^{-x_{(j)}/\sigma}}{1+e^{-x_{(j)}/\sigma}}\Big)^{\lambda}}\right]^{r_k^1-1},& i=j+1,j+2,\ldots,m.\end{cases}$$
Afterwards, we can calculate the Fisher information matrix $I(\sigma,\lambda)$ directly based on (16). In order to simplify the complex calculation, the observed Fisher information matrix $I(\hat{\sigma},\hat{\lambda})$ is employed to approximate the expected Fisher information matrix, and then the variance–covariance matrix can be obtained. The matrix $I(\hat{\sigma},\hat{\lambda})$ is
$$I(\hat{\sigma},\hat{\lambda})=-\begin{pmatrix}\frac{\partial^2 l(\lambda,\sigma)}{\partial\sigma^2}&\frac{\partial^2 l(\lambda,\sigma)}{\partial\lambda\partial\sigma}\\ \frac{\partial^2 l(\lambda,\sigma)}{\partial\lambda\partial\sigma}&\frac{\partial^2 l(\lambda,\sigma)}{\partial\lambda^2}\end{pmatrix}\Bigg|_{(\sigma,\lambda)=(\hat{\sigma},\hat{\lambda})}.$$
Here, σ ^ and λ ^ are the MLEs of σ and λ separately.
Then, the asymptotic variance–covariance matrix is the inverse of the observed Fisher information matrix $I(\hat{\sigma},\hat{\lambda})$, denoted as $I^{-1}(\hat{\sigma},\hat{\lambda})$:
$$I^{-1}(\hat{\sigma},\hat{\lambda})=\begin{pmatrix}Var(\hat{\sigma})&Cov(\hat{\sigma},\hat{\lambda})\\ Cov(\hat{\lambda},\hat{\sigma})&Var(\hat{\lambda})\end{pmatrix}.$$
Thus, the $100(1-\alpha)\%$ asymptotic confidence intervals for $\sigma$ and $\lambda$ can be constructed as
$$\Big(\hat{\sigma}-d_{\frac{\alpha}{2}}\sqrt{Var(\hat{\sigma})},\ \hat{\sigma}+d_{\frac{\alpha}{2}}\sqrt{Var(\hat{\sigma})}\Big)$$
and
$$\Big(\hat{\lambda}-d_{\frac{\alpha}{2}}\sqrt{Var(\hat{\lambda})},\ \hat{\lambda}+d_{\frac{\alpha}{2}}\sqrt{Var(\hat{\lambda})}\Big),$$
where $d_{\alpha}$ denotes the upper $\alpha$-th quantile of the standard normal distribution.
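Given the observed information matrix, the interval endpoints follow mechanically; a minimal sketch (the 2×2 matrix in the test is illustrative, not taken from the paper):

```python
import math
from statistics import NormalDist

def asymptotic_ci(theta_hat, obs_info, alpha=0.05):
    """100(1-alpha)% normal-approximation CIs for (sigma, lambda) from a 2x2
    observed Fisher information matrix (the negative Hessian at the MLE)."""
    (a, b), (c, d) = obs_info
    det = a * d - b * c
    variances = [d / det, a / det]              # diagonal of the inverse 2x2 matrix
    z = NormalDist().inv_cdf(1 - alpha / 2)     # upper alpha/2 quantile d_{alpha/2}
    return [(t - z * math.sqrt(v), t + z * math.sqrt(v))
            for t, v in zip(theta_hat, variances)]
```

The larger the observed information in a given direction, the narrower the corresponding interval.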

3. Bootstrap Confidence Intervals

It should be noted that the classical asymptotic theory works well for large sample sizes but makes little sense when the sample size is small. Thus, bootstrap methods are applied to provide more precise confidence intervals.
The two most commonly used bootstrap methods, proposed in [18], are considered here. One is the percentile bootstrap method (boot-p), which replaces the distribution of the original sample statistic with the distribution of the bootstrap sample statistic to establish confidence intervals. The other is the bootstrap-t method (boot-t), whose core idea is to convert the bootstrap sample statistic into the corresponding t-statistic. The detailed simulation procedures of the two bootstrap methods are listed in Algorithms 1 and 2.
Algorithm 1: Constructing percentile bootstrap confidence intervals
step 1
Fix the number of bootstrap replications $N_{boot}$ in advance.
step 2
Compute the MLEs of $\sigma$ and $\lambda$ under the original censored sample $\underline{x}=(x_1,x_2,\ldots,x_m)$, denoted as $\hat{\sigma}$ and $\hat{\lambda}$. (In a simulation study, we first generate an adaptive progressive type II censored sample $\underline{x}=(x_1,x_2,\ldots,x_m)$ from EHL($\lambda,\sigma$) with $T,n,m,R$ as the original sample.)
step 3
Generate a bootstrap sample $\underline{x}^*$ using $\hat{\sigma}$, $\hat{\lambda}$ and the same censoring pattern $(n,m,T,R)$. Then, calculate the bootstrap MLEs under the sample $\underline{x}^*$, denoted as $\hat{\sigma}^*$ and $\hat{\lambda}^*$.
step 4
Repeat step 3 $N_{boot}$ times to obtain a series of bootstrap MLEs $(\hat{\sigma}^{**}(1),\hat{\sigma}^{**}(2),\ldots,\hat{\sigma}^{**}(N_{boot}))$ and $(\hat{\lambda}^{**}(1),\hat{\lambda}^{**}(2),\ldots,\hat{\lambda}^{**}(N_{boot}))$.
step 5
Arrange $(\hat{\sigma}^{**}(1),\ldots,\hat{\sigma}^{**}(N_{boot}))$ and $(\hat{\lambda}^{**}(1),\ldots,\hat{\lambda}^{**}(N_{boot}))$ in ascending order, respectively, to obtain $(\hat{\sigma}^{**}[1],\ldots,\hat{\sigma}^{**}[N_{boot}])$ and $(\hat{\lambda}^{**}[1],\ldots,\hat{\lambda}^{**}[N_{boot}])$.

3.1. Percentile Bootstrap Confidence Intervals

Then, the $100(1-\alpha)\%$ boot-p confidence intervals are given by $(\hat{\sigma}^{**}[K_1],\hat{\sigma}^{**}[K_2])$ and $(\hat{\lambda}^{**}[K_1],\hat{\lambda}^{**}[K_2])$, where $K_1$ and $K_2$ are the integer parts of $\frac{\alpha}{2}\times N_{boot}$ and $(1-\frac{\alpha}{2})\times N_{boot}$, respectively.
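The boot-p steps can be sketched as follows. To keep the example short, it bootstraps the closed-form complete-sample MLE of $\lambda$ with $\sigma$ treated as known (an assumption of the sketch, not of the paper), rather than the censored two-parameter MLE; function names are ours:

```python
import math, random

def rehl(n, lam, sigma, rng):
    """EHL(lam, sigma) draws via the inverse CDF."""
    return [-sigma * math.log((1 - t) / (1 + t))
            for t in (rng.random() ** (1.0 / lam) for _ in range(n))]

def lam_mle(xs, sigma):
    """Complete-sample MLE of lambda for known sigma: lam_hat = -m / sum ln F(x_i)^(1/lam)."""
    s = sum(math.log((1 - math.exp(-x / sigma)) / (1 + math.exp(-x / sigma)))
            for x in xs)
    return -len(xs) / s

def boot_p_ci(xs, sigma, nboot=1000, alpha=0.05, seed=1):
    """Percentile bootstrap interval for lambda (steps 1-5 plus Section 3.1)."""
    rng = random.Random(seed)
    lam_hat = lam_mle(xs, sigma)
    # parametric resampling from EHL(lam_hat, sigma), then sort the replicates
    reps = sorted(lam_mle(rehl(len(xs), lam_hat, sigma, rng), sigma)
                  for _ in range(nboot))
    k1 = int(alpha / 2 * nboot)                       # K1
    k2 = min(int((1 - alpha / 2) * nboot), nboot - 1) # K2
    return reps[k1], reps[k2]
```

The interval endpoints are simply the empirical $\alpha/2$ and $1-\alpha/2$ quantiles of the bootstrap replicates.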

3.2. Bootstrap-t Confidence Intervals

Then, the $100(1-\alpha)\%$ boot-t confidence intervals are given by
$$\Big(\hat{\sigma}-\tilde{S_1}^{**}[K_2]\sqrt{Var(\hat{\sigma})},\ \hat{\sigma}-\tilde{S_1}^{**}[K_1]\sqrt{Var(\hat{\sigma})}\Big)$$
and
$$\Big(\hat{\lambda}-\tilde{S_2}^{**}[K_2]\sqrt{Var(\hat{\lambda})},\ \hat{\lambda}-\tilde{S_2}^{**}[K_1]\sqrt{Var(\hat{\lambda})}\Big),$$
where $K_1$ and $K_2$ are the integer parts of $\frac{\alpha}{2}\times N_{boot}$ and $(1-\frac{\alpha}{2})\times N_{boot}$, respectively.
Algorithm 2: Constructing bootstrap-t confidence intervals
step 1
Fix the number of bootstrap replications $N_{boot}$ in advance.
step 2
Compute the MLEs of $\sigma$ and $\lambda$ under the original censored sample $\underline{x}=(x_1,x_2,\ldots,x_m)$, denoted as $\hat{\sigma}$ and $\hat{\lambda}$. (In a simulation study, we first generate an adaptive progressive type II censored sample $\underline{x}=(x_1,x_2,\ldots,x_m)$ from EHL($\lambda,\sigma$) with $T,n,m,R$ as the original sample.)
step 3
Generate a bootstrap sample $\underline{x}^*$ using $\hat{\sigma}$, $\hat{\lambda}$ and the same censoring pattern $(n,m,T,R)$. Then, calculate the bootstrap MLEs $\hat{\sigma}^*$ and $\hat{\lambda}^*$ and their variances $Var(\hat{\sigma}^*)$ and $Var(\hat{\lambda}^*)$.
step 4
Calculate the t-statistics $\tilde{S_1}=\frac{\hat{\sigma}^*-\hat{\sigma}}{\sqrt{Var(\hat{\sigma}^*)}}$ for $\hat{\sigma}^*$ and $\tilde{S_2}=\frac{\hat{\lambda}^*-\hat{\lambda}}{\sqrt{Var(\hat{\lambda}^*)}}$ for $\hat{\lambda}^*$.
step 5
Repeat steps 3–4 $N_{boot}$ times to acquire a series of bootstrap t-statistics $(\tilde{S_1}^{**}(1),\ldots,\tilde{S_1}^{**}(N_{boot}))$ and $(\tilde{S_2}^{**}(1),\ldots,\tilde{S_2}^{**}(N_{boot}))$.
step 6
Arrange them in ascending order, respectively, to obtain $(\tilde{S_1}^{**}[1],\ldots,\tilde{S_1}^{**}[N_{boot}])$ and $(\tilde{S_2}^{**}[1],\ldots,\tilde{S_2}^{**}[N_{boot}])$.
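In the same simplified setting as the boot-p sketch (complete sample, $\sigma$ known, closed-form $\hat{\lambda}$), the boot-t steps look as follows; here $Var(\hat{\lambda})\approx\hat{\lambda}^2/m$, the inverse Fisher information in this one-parameter case, and all names are ours:

```python
import math, random

def rehl(n, lam, sigma, rng):
    """EHL(lam, sigma) draws via the inverse CDF."""
    return [-sigma * math.log((1 - t) / (1 + t))
            for t in (rng.random() ** (1.0 / lam) for _ in range(n))]

def lam_mle(xs, sigma):
    """Complete-sample MLE of lambda for known sigma (sketch assumption)."""
    s = sum(math.log((1 - math.exp(-x / sigma)) / (1 + math.exp(-x / sigma)))
            for x in xs)
    return -len(xs) / s

def boot_t_ci(xs, sigma, nboot=1000, alpha=0.05, seed=1):
    """Bootstrap-t interval for lambda, mirroring steps 1-6 and Section 3.2."""
    rng = random.Random(seed)
    m = len(xs)
    lam_hat = lam_mle(xs, sigma)
    se = lam_hat / math.sqrt(m)                  # sqrt(Var(lam_hat)) approximation
    stats = []
    for _ in range(nboot):
        lam_star = lam_mle(rehl(m, lam_hat, sigma, rng), sigma)
        stats.append((lam_star - lam_hat) / (lam_star / math.sqrt(m)))
    stats.sort()
    k1 = int(alpha / 2 * nboot)
    k2 = min(int((1 - alpha / 2) * nboot), nboot - 1)
    # note the quantile reversal: upper quantile builds the lower endpoint
    return lam_hat - stats[k2] * se, lam_hat - stats[k1] * se
```

The reversal of $K_1$ and $K_2$ in the endpoints is the defining feature of the boot-t construction.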

4. Bayesian Estimation

In this section, we compute the Bayesian estimates of the unknown quantities by using the Lindley method and the importance sampling procedure. Unlike classical statistics, Bayesian statistics treats the quantities as random variables, combining prior information with observed information.
The choice of prior distribution is a pivotal problem. Generally speaking, the conjugate prior distribution is the first choice due to its algebraic simplicity. However, it is very difficult to find such a prior when both $\sigma$ and $\lambda$ are unknown. It is reasonable for the prior distribution to keep the same form as (6). Suppose that $\sigma\sim IG(\gamma,\delta)$ and $\lambda\sim Ga(\alpha,\beta)$, and that these two priors are independent. The PDFs of the prior distributions are
$$\pi(\sigma)=\frac{\delta^{\gamma}}{\Gamma(\gamma)}\sigma^{-\gamma-1}e^{-\frac{\delta}{\sigma}},\quad\gamma>0,\ \delta>0,$$
$$\pi(\lambda)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\lambda^{\alpha-1}e^{-\beta\lambda},\quad\alpha>0,\ \beta>0.$$
The corresponding joint prior distribution is
$$\pi(\sigma,\lambda)=\frac{\delta^{\gamma}\beta^{\alpha}}{\Gamma(\gamma)\Gamma(\alpha)}\sigma^{-\gamma-1}\lambda^{\alpha-1}e^{-(\frac{\delta}{\sigma}+\beta\lambda)}.$$
Given the sample $\underline{x}$, the posterior distribution $\pi(\sigma,\lambda|\underline{x})$ can be written as
$$\pi(\sigma,\lambda|\underline{x})=\frac{L(\underline{x}|\sigma,\lambda)\pi(\sigma,\lambda)}{\int_0^\infty\int_0^\infty L(\underline{x}|\sigma,\lambda)\pi(\sigma,\lambda)\,d\sigma\,d\lambda}.$$

4.1. Symmetric and Asymmetric Loss Functions

The loss function is employed to quantify the discrepancy between the estimate of a parameter and its true value. The squared error loss function is a symmetric loss function and is applied in many areas. However, when overestimation causes greater loss than underestimation, or vice versa, a symmetric loss function is not suitable; an asymmetric loss function is employed instead. Therefore, in this subsection we consider Bayesian estimation under one symmetric loss function, namely the squared error loss function (SELF), as well as two asymmetric loss functions, namely the Linex loss function (LLF) and the general entropy loss function (GELF).

4.1.1. Squared Error Loss Function (SELF)

The squared error loss function is a symmetric loss function, which puts overestimation and underestimation on the same level. It is the squared distance between the target value and the estimate:
$$L_{SE}(\upsilon,\hat{\upsilon})=(\hat{\upsilon}-\upsilon)^2,$$
where $\hat{\upsilon}$ is the estimate of $\upsilon$.
The Bayesian estimator of $\upsilon$ under SELF is given by
$$\hat{\upsilon}=E_{\upsilon}(\upsilon|\underline{x}).$$
Then, for the unknown parameters $\sigma$ and $\lambda$, the Bayesian estimates under SELF are
$$\hat{\sigma}_{SE}=\int_0^\infty\int_0^\infty\sigma\,\pi(\sigma,\lambda|\underline{x})\,d\sigma\,d\lambda,$$
$$\hat{\lambda}_{SE}=\int_0^\infty\int_0^\infty\lambda\,\pi(\sigma,\lambda|\underline{x})\,d\sigma\,d\lambda.$$

4.1.2. Linex Loss Function (LLF)

The Linex loss function is a well-known asymmetric loss function. It is defined as
$$L_{LL}(\upsilon,\hat{\upsilon})=e^{p(\hat{\upsilon}-\upsilon)}-p(\hat{\upsilon}-\upsilon)-1.$$
The magnitude of $p$ denotes the level of asymmetry, and its sign represents the direction of asymmetry. For $p<0$, LLF rises almost exponentially in the negative direction and linearly in the positive direction, so a negative bias has a more serious impact; for $p>0$, positive errors are punished heavily. The larger the magnitude of $p$ is, the larger the punishment intensity is. When $p$ approaches 0, LLF is almost symmetric.
The Bayesian estimator of $\upsilon$ under LLF is written as
$$\hat{\upsilon}_{LL}=-\frac{1}{p}\ln E_{\upsilon}(e^{-p\upsilon}|\underline{x}).$$
Then, for the unknown parameters $\sigma$ and $\lambda$, the Bayesian estimates under LLF are
$$\hat{\sigma}_{LL}=-\frac{1}{p}\ln\Big[\int_0^\infty\int_0^\infty e^{-p\sigma}\pi(\sigma,\lambda|\underline{x})\,d\sigma\,d\lambda\Big],$$
$$\hat{\lambda}_{LL}=-\frac{1}{p}\ln\Big[\int_0^\infty\int_0^\infty e^{-p\lambda}\pi(\sigma,\lambda|\underline{x})\,d\sigma\,d\lambda\Big].$$

4.1.3. General Entropy Loss Function (GELF)

The general entropy loss function (GELF) is another noted asymmetric loss function:
$$L_{GE}(\upsilon,\hat{\upsilon})=\Big(\frac{\hat{\upsilon}}{\upsilon}\Big)^{q}-q\ln\frac{\hat{\upsilon}}{\upsilon}-1.$$
For $q>0$, overestimation has a more serious impact than underestimation, and vice versa. The Bayesian estimator of $\upsilon$ under GELF is
$$\hat{\upsilon}_{GE}=\big[E_{\upsilon}(\upsilon^{-q}|\underline{x})\big]^{-\frac{1}{q}}.$$
Notably, when $q=-1$, the Bayesian estimate under GELF coincides with that under SELF. The Bayesian estimates of $\sigma$ and $\lambda$ under GELF are
$$\hat{\sigma}_{GE}=\Big[\int_0^\infty\int_0^\infty\sigma^{-q}\pi(\sigma,\lambda|\underline{x})\,d\sigma\,d\lambda\Big]^{-\frac{1}{q}},$$
$$\hat{\lambda}_{GE}=\Big[\int_0^\infty\int_0^\infty\lambda^{-q}\pi(\sigma,\lambda|\underline{x})\,d\sigma\,d\lambda\Big]^{-\frac{1}{q}}.$$
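When posterior draws are available, the three Bayes estimates above reduce to simple Monte Carlo averages; a sketch (the function name is ours):

```python
import math, random

def bayes_estimates(draws, p=0.5, q=0.5):
    """SELF, LINEX(p) and general-entropy(q) Bayes estimates from posterior draws."""
    M = len(draws)
    est_self = sum(draws) / M                                     # E[v | x]
    est_linex = -(1.0 / p) * math.log(sum(math.exp(-p * v) for v in draws) / M)
    est_ge = (sum(v ** (-q) for v in draws) / M) ** (-1.0 / q)    # [E v^-q]^(-1/q)
    return est_self, est_linex, est_ge
```

For $p>0$ and $q>0$, Jensen's inequality places both asymmetric estimates below the posterior mean, and setting $q=-1$ reproduces the SELF estimate exactly.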
The Bayesian estimates of $\sigma$ and $\lambda$ thus take the form of a ratio of two complicated integrals, for which explicit expressions cannot easily be obtained. Thus, the Lindley method is employed to solve this problem.

4.2. Lindley Approximation Method

In this subsection, in order to compute the Bayesian estimates, we apply the Lindley approximation method. Let $\varphi(\sigma,\lambda)$ denote any function of $\sigma$ and $\lambda$, $l$ denote the log-likelihood function, and $\rho(\sigma,\lambda)=\ln\pi(\sigma,\lambda)$. According to [19], the Bayesian estimate can be expressed through the posterior expectation of $\varphi(\sigma,\lambda)$:
$$E[\varphi(\sigma,\lambda)|\underline{x}]=\varphi(\hat{\sigma},\hat{\lambda})+\rho_1A_{12}+\rho_2A_{21}+\frac{1}{2}(A+l_{03}B_{21}+l_{30}B_{12}+l_{12}C_{21}+l_{21}C_{12}),$$
where
$$A=\sum_{i=1}^{2}\sum_{j=1}^{2}\varphi_{ij}b_{ij},\quad l_{ij}=\frac{\partial^{i+j}l}{\partial\theta_1^i\partial\theta_2^j},\ i+j=3,\ i,j=0,1,2,3,\quad\rho_i=\frac{\partial\rho}{\partial\theta_i},\quad\varphi_i=\frac{\partial\varphi}{\partial\theta_i},\quad\varphi_{ij}=\frac{\partial^2\varphi}{\partial\theta_i\partial\theta_j},$$
$$A_{ij}=\varphi_ib_{ii}+\varphi_jb_{ji},\quad B_{ij}=(\varphi_ib_{ii}+\varphi_jb_{ij})b_{ii},\quad C_{ij}=3\varphi_ib_{ii}b_{ij}+\varphi_j(b_{ii}b_{jj}+2b_{ij}^2).$$
Here, $\theta=(\theta_1,\theta_2)=(\sigma,\lambda)$, and $b_{ij}$ denotes the $(i,j)$-th element of the covariance matrix, i.e., of $(-l_{ij})^{-1}$ evaluated at the MLEs. Then, the Bayesian estimates under the three loss functions SELF, LLF, and GELF are derived.

4.2.1. Squared Error Loss Function (SELF)

For $\sigma$, let $\varphi(\sigma,\lambda)=\sigma$; therefore,
$$\varphi_1=1,\quad\varphi_{11}=\varphi_{12}=\varphi_2=\varphi_{21}=\varphi_{22}=0.$$
Then, the Bayesian estimate of $\sigma$ under SELF is
$$\hat{\sigma}_{SE}=\hat{\sigma}+\frac{1}{2}\big[b_{11}^2l_{30}+3b_{11}b_{12}l_{21}+b_{11}b_{22}l_{12}+2b_{21}^2l_{12}+b_{21}b_{22}l_{03}\big]+\rho_1b_{11}+\rho_2b_{12}.$$
Similarly, for the parameter $\lambda$, taking $\varphi(\sigma,\lambda)=\lambda$ gives
$$\varphi_2=1,\quad\varphi_{21}=\varphi_{22}=\varphi_1=\varphi_{11}=\varphi_{12}=0.$$
Then, the Bayesian estimate of $\lambda$ under SELF can be written as
$$\hat{\lambda}_{SE}=\hat{\lambda}+\frac{1}{2}\big[b_{11}b_{12}l_{30}+b_{11}b_{22}l_{21}+2b_{12}^2l_{21}+3b_{21}b_{22}l_{12}+b_{22}^2l_{03}\big]+\rho_1b_{21}+\rho_2b_{22}.$$

4.2.2. Linex Loss Function (LLF)

For $\sigma$, we take $\varphi(\sigma,\lambda)=e^{-p\sigma}$; hence
$$\varphi_1=-pe^{-p\sigma},\quad\varphi_{11}=p^2e^{-p\sigma},\quad\varphi_2=\varphi_{12}=\varphi_{21}=\varphi_{22}=0.$$
The Bayesian estimate of $\sigma$ under LLF is derived as
$$\hat{\sigma}_{LL}=-\frac{1}{p}\ln\Big\{e^{-p\hat{\sigma}}+\frac{1}{2}\varphi_{11}b_{11}+\frac{1}{2}\varphi_1\big[b_{11}^2l_{30}+3b_{11}b_{12}l_{21}+b_{11}b_{22}l_{12}+2b_{21}^2l_{12}+b_{21}b_{22}l_{03}\big]+\varphi_1(\rho_1b_{11}+\rho_2b_{12})\Big\}.$$
Similarly, for the parameter $\lambda$, let $\varphi(\sigma,\lambda)=e^{-p\lambda}$; hence
$$\varphi_2=-pe^{-p\lambda},\quad\varphi_{22}=p^2e^{-p\lambda},\quad\varphi_1=\varphi_{11}=\varphi_{12}=\varphi_{21}=0.$$
The Bayesian estimate of $\lambda$ under LLF can be written as
$$\hat{\lambda}_{LL}=-\frac{1}{p}\ln\Big\{e^{-p\hat{\lambda}}+\frac{1}{2}\varphi_{22}b_{22}+\frac{1}{2}\varphi_2\big[b_{22}^2l_{03}+3b_{22}b_{21}l_{12}+b_{11}b_{22}l_{21}+2b_{12}^2l_{21}+b_{12}b_{11}l_{30}\big]+\varphi_2(\rho_1b_{21}+\rho_2b_{22})\Big\}.$$

4.2.3. General Entropy Loss Function (GELF)

For the parameter $\sigma$, let $\varphi(\sigma,\lambda)=\sigma^{-q}$; hence
$$\varphi_1=-q\sigma^{-q-1},\quad\varphi_{11}=q(q+1)\sigma^{-q-2},\quad\varphi_2=\varphi_{12}=\varphi_{21}=\varphi_{22}=0.$$
The Bayesian estimate of $\sigma$ under GELF can be written as
$$\hat{\sigma}_{GE}=\Big\{\hat{\sigma}^{-q}+\frac{1}{2}\varphi_{11}b_{11}+\frac{1}{2}\varphi_1\big[b_{11}^2l_{30}+3b_{11}b_{12}l_{21}+b_{11}b_{22}l_{12}+2b_{21}^2l_{12}+b_{21}b_{22}l_{03}\big]+\varphi_1(\rho_1b_{11}+\rho_2b_{12})\Big\}^{-\frac{1}{q}}.$$
Similarly, for the parameter $\lambda$, let $\varphi(\sigma,\lambda)=\lambda^{-q}$; hence
$$\varphi_2=-q\lambda^{-q-1},\quad\varphi_{22}=q(q+1)\lambda^{-q-2},\quad\varphi_1=\varphi_{11}=\varphi_{21}=\varphi_{12}=0.$$
The Bayesian estimate of $\lambda$ under GELF can be written as
$$\hat{\lambda}_{GE}=\Big\{\hat{\lambda}^{-q}+\frac{1}{2}\varphi_{22}b_{22}+\frac{1}{2}\varphi_2\big[b_{22}^2l_{03}+3b_{22}b_{21}l_{12}+b_{11}b_{22}l_{21}+2b_{12}^2l_{21}+b_{12}b_{11}l_{30}\big]+\varphi_2(\rho_1b_{21}+\rho_2b_{22})\Big\}^{-\frac{1}{q}}.$$
Though the Lindley approximation is effective for obtaining point estimates by approximating the ratio of integrals, it cannot provide credible intervals for the unknown parameters. Therefore, the importance sampling method is adopted to obtain not only point estimates but also credible intervals.

4.3. Importance Sampling Procedure

The importance sampling procedure is an extension of the Monte Carlo method, which can greatly reduce the number of sample points drawn in the simulation, and is widely used in the reliability analysis of various models. From (6) and (21), the joint posterior distribution is derived as
$$\pi(\sigma,\lambda|\underline{x})\propto\sigma^{-m-\gamma-1}\lambda^{m+\alpha-1}e^{-\lambda\big(\beta+\sum_{i=1}^{m}\ln\frac{1+e^{-x_i/\sigma}}{1-e^{-x_i/\sigma}}\big)-\frac{1}{\sigma}\big(\delta+\sum_{i=1}^{m}x_i\big)}\prod_{i=1}^{m}\frac{1}{1-e^{-2x_i/\sigma}}\prod_{i=1}^{J}\Big[1-\Big(\frac{1-e^{-x_i/\sigma}}{1+e^{-x_i/\sigma}}\Big)^{\lambda}\Big]^{R_i}\Big[1-\Big(\frac{1-e^{-x_m/\sigma}}{1+e^{-x_m/\sigma}}\Big)^{\lambda}\Big]^{n-m-\sum_{i=1}^{J}R_i}\propto h_1(\sigma)h_2(\lambda|\sigma)h_3(\sigma,\lambda),$$
where
$$h_1(\sigma)=\frac{\big(\delta+\sum_{i=1}^{m}x_i\big)^{\gamma+m}}{\Gamma(\gamma+m)}\sigma^{-(\gamma+m+1)}e^{-\frac{\delta+\sum_{i=1}^{m}x_i}{\sigma}},$$
$$h_2(\lambda|\sigma)=\frac{\big[\beta+\sum_{i=1}^{m}\ln\frac{1+e^{-x_i/\sigma}}{1-e^{-x_i/\sigma}}\big]^{\alpha+m}}{\Gamma(\alpha+m)}\lambda^{\alpha+m-1}e^{-\lambda\big(\beta+\sum_{i=1}^{m}\ln\frac{1+e^{-x_i/\sigma}}{1-e^{-x_i/\sigma}}\big)},$$
$$h_3(\sigma,\lambda)=\frac{1}{\big[\beta+\sum_{i=1}^{m}\ln\frac{1+e^{-x_i/\sigma}}{1-e^{-x_i/\sigma}}\big]^{\alpha+m}}\prod_{i=1}^{m}\frac{1}{1-e^{-2x_i/\sigma}}\prod_{i=1}^{J}\Big[1-\Big(\frac{1-e^{-x_i/\sigma}}{1+e^{-x_i/\sigma}}\Big)^{\lambda}\Big]^{R_i}\Big[1-\Big(\frac{1-e^{-x_m/\sigma}}{1+e^{-x_m/\sigma}}\Big)^{\lambda}\Big]^{n-m-\sum_{i=1}^{J}R_i}.$$
It is clear that $h_1(\sigma)$ is the PDF of an inverse Gamma distribution, while $h_2(\lambda|\sigma)$ is the PDF of a Gamma distribution.
Therefore, the Bayesian estimate of $\varphi(\sigma,\lambda)$ is acquired by the following steps:
  • Generate $\sigma$ from $IG\big(\gamma+m,\ \delta+\sum_{i=1}^{m}x_i\big)$.
  • Given $\sigma$ from step 1, generate $\lambda$ from $Ga\big(m+\alpha,\ \beta+\sum_{i=1}^{m}\ln\frac{1+e^{-x_i/\sigma}}{1-e^{-x_i/\sigma}}\big)$.
  • Repeat steps 1 and 2 $M$ times to produce a series of samples $(\sigma_i,\lambda_i)$, $i=1,\ldots,M$.
  • The Bayesian estimate of $\varphi(\sigma,\lambda)$ is calculated by
$$\hat{\varphi}(\sigma,\lambda)=\frac{\sum_{i=1}^{M}\varphi(\sigma_i,\lambda_i)h_3(\sigma_i,\lambda_i)}{\sum_{i=1}^{M}h_3(\sigma_i,\lambda_i)}.$$
Therefore, the Bayesian estimates of the unknown parameters $\sigma$ and $\lambda$ are
$$\hat{\sigma}=\frac{\sum_{i=1}^{M}\sigma_ih_3(\sigma_i,\lambda_i)}{\sum_{i=1}^{M}h_3(\sigma_i,\lambda_i)},\qquad\hat{\lambda}=\frac{\sum_{i=1}^{M}\lambda_ih_3(\sigma_i,\lambda_i)}{\sum_{i=1}^{M}h_3(\sigma_i,\lambda_i)}.$$
Let
$$h_{3i}(\sigma_i,\lambda_i)=\frac{h_3(\sigma_i,\lambda_i)}{\sum_{i=1}^{M}h_3(\sigma_i,\lambda_i)}.$$
For the sake of simplicity, $h_{3i}(\sigma_i,\lambda_i)$ is denoted as $h_{3i}$. Then, we sort $\{\sigma_1,\sigma_2,\ldots,\sigma_M\}$ in ascending order as $\{\sigma_{(1)},\sigma_{(2)},\ldots,\sigma_{(M)}\}$, and we pair the corresponding normalized weights with the ordered values as $\{(\sigma_{(1)},h_{3(1)}),(\sigma_{(2)},h_{3(2)}),\ldots,(\sigma_{(M)},h_{3(M)})\}$. The quantile estimate is $\hat{\sigma}_p=\sigma_{(g_p)}$, where $g_p$ is the integer satisfying
$$\sum_{i=1}^{g_p}h_{3(i)}\le p\le\sum_{i=1}^{g_p+1}h_{3(i)}.$$
Hence, a $100(1-\alpha)\%$ credible interval can be represented as $(\hat{\sigma}_{\zeta},\hat{\sigma}_{\zeta+1-\alpha})$, $\zeta=h_{3(1)},h_{3(1)}+h_{3(2)},\ldots,\sum_{i=1}^{g_p}h_{3(i)}$. Therefore, the HPD credible interval for $\sigma$ is $(\hat{\sigma}_{\zeta^*},\hat{\sigma}_{\zeta^*+1-\alpha})$, where $\zeta^*$ is chosen such that $\hat{\sigma}_{\zeta^*+1-\alpha}-\hat{\sigma}_{\zeta^*}\le\hat{\sigma}_{\zeta+1-\alpha}-\hat{\sigma}_{\zeta}$ for all $\zeta$.
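The weighted estimate and the HPD search can be sketched as follows, with stand-in draws and weights in place of the $h_3$ values (function names ours):

```python
import random

def is_estimate(draws, weights):
    """Self-normalized importance-sampling estimate (the ratio of weighted sums)."""
    tot = sum(weights)
    return sum(d * w for d, w in zip(draws, weights)) / tot

def hpd_interval(draws, weights, alpha=0.05):
    """Shortest interval carrying posterior mass 1 - alpha from weighted draws."""
    tot = sum(weights)
    pairs = sorted((d, w / tot) for d, w in zip(draws, weights))
    best = None
    hi, mass = 0, 0.0
    # two-pointer sweep: for each left endpoint, extend the right endpoint
    # until the window holds mass 1 - alpha, then record its width
    for lo in range(len(pairs)):
        while hi < len(pairs) and mass < 1 - alpha:
            mass += pairs[hi][1]
            hi += 1
        if mass >= 1 - alpha:
            width = pairs[hi - 1][0] - pairs[lo][0]
            if best is None or width < best[0]:
                best = (width, pairs[lo][0], pairs[hi - 1][0])
        mass -= pairs[lo][1]
    return best[1], best[2]
```

With equal weights this reduces to the usual shortest-empirical-interval construction for unweighted posterior draws.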

5. Simulation

Numerous Monte Carlo simulation experiments are carried out to appraise the performance of our estimates. The R software is employed for all simulations. Point estimation is evaluated by the mean square error (MSE) and the estimated value (VALUE), while interval estimation is assessed by the coverage rate (CR) and the mean interval length (ML). For point estimation, a smaller mean square error and an estimated value closer to the true value suggest better performance. For interval estimation, the higher the coverage rate and the narrower the mean interval length, the better the estimation.
First of all, adaptive type II progressive censored data from an exponentiated half-logistic distribution should be generated. The algorithm for generating adaptive type II progressive censored data from a general distribution can be found in [3]; the algorithm used to generate the censored data is listed in Algorithm 3.
Algorithm 3: Generating adaptive type II progressive censored data from EHL( λ , σ ).
1.
Generate a Type II progressive censored sample from an exponentiated half-logistic distribution EHL( λ , σ ) with initial values of ( R 1 , R 2 , , R m ) and T , n , m :
(a)
Generate independent random variables U 1 , U 2 , , U m from the uniform distribution U ( 0 , 1 ) .
(b)
Let $V_i = U_i^{1/\left(i + \sum_{j=m-i+1}^{m} R_j\right)}$, $i = 1, 2, \dots, m$.
(c)
Let $W_i = 1 - V_m V_{m-1} \cdots V_{m-i+1}$, $i = 1, 2, \dots, m$.
(d)
For given $\sigma$ and $\lambda$, let $X_i = F^{-1}(W_i)$. Then, $X = (X_1, X_2, \dots, X_m)$ is the Type II progressively censored sample from EHL($\lambda$, $\sigma$).
2.
Determine the value of J, and discard the observations $X_{J+2}, \dots, X_m$.
3.
Generate the first $m - J - 1$ order statistics from the truncated distribution $f(x)/\left[1 - F(x_{J+1})\right]$ with sample size $n - \left(\sum_{i=1}^{J} R_i + J + 1\right)$ as $X_{J+2}, X_{J+3}, \dots, X_m$.
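A minimal Python sketch of step 1 of the algorithm follows, assuming the EHL CDF $F(x)=\left(\frac{1-e^{-\lambda x}}{1+e^{-\lambda x}}\right)^{\sigma}$ so that the inverse CDF is available in closed form; the adaptive truncation of steps 2 and 3 is omitted. The function names are illustrative, not from the paper.

```python
import numpy as np

def ehl_inv_cdf(u, lam, sigma):
    # Inverse of the assumed EHL CDF F(x) = ((1 - e^(-lam x)) / (1 + e^(-lam x)))^sigma.
    w = u ** (1.0 / sigma)
    return np.log((1.0 + w) / (1.0 - w)) / lam

def progressive_type2_sample(n, m, scheme, lam, sigma, rng):
    """Steps (a)-(d): a Type II progressively censored sample from EHL(lam, sigma)."""
    assert len(scheme) == m and n == m + sum(scheme)
    u = rng.uniform(size=m)                                            # step (a)
    # step (b): V_i = U_i ^ (1 / (i + R_{m-i+1} + ... + R_m))
    exps = np.array([1.0 / (i + sum(scheme[m - i:])) for i in range(1, m + 1)])
    v = u ** exps
    w = 1.0 - np.cumprod(v[::-1])                                      # step (c)
    return ehl_inv_cdf(w, lam, sigma)                                  # step (d)

rng = np.random.default_rng(42)
n, m = 30, 20
scheme = [n - m] + [0] * (m - 1)   # censoring scheme I
x = progressive_type2_sample(n, m, scheme, lam=1.0, sigma=1.5, rng=rng)
```

Since the $W_i$ are increasing by construction, the returned sample is already ordered, as required of progressively censored order statistics.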
In order to carry out simulations, we set σ = 1.5 and λ = 1 . For comparison purposes, we consider T = 2 , 4 and ( n , m ) = ( 30 , 20 ) , ( 30 , 25 ) , ( 50 , 40 ) , ( 50 , 45 ) , ( 80 , 60 ) , ( 80 , 70 ) . For all the combinations of sample size and time T, two different censoring schemes (CS) are chosen:
  • Scheme I (Sch I): $R_1 = n - m$, $R_k = 0$ for $k = 2, 3, \dots, m$.
  • Scheme II (Sch II): $R_1 = R_2 = \cdots = R_{n-m} = 1$, $R_k = 0$ for $k > n - m$.
In addition, the specific diverse censoring schemes conceived for the simulation are listed in Table 1.
For simplicity, we abbreviate the censoring schemes. For example, (1, 1, 1, 0, 0, 0, 0) is represented as (1*3, 0*4). In each case, the simulation is repeated 3000 times. Then, the associated MSEs and VALUEs with the point estimation and the related coverage rates and mean lengths with the interval estimation can be acquired through Monte Carlo simulations using R software.
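This abbreviation can be expanded mechanically; the following small helper is hypothetical, added purely to illustrate the convention, and is not part of the paper's code.

```python
import re

def expand_scheme(abbrev):
    """Expand an abbreviation such as "(1*3, 0*4)" into (1, 1, 1, 0, 0, 0, 0)."""
    out = []
    for part in abbrev.strip("() ").split(","):
        m = re.fullmatch(r"\s*(\d+)(?:\*(\d+))?\s*", part)
        value, count = int(m.group(1)), int(m.group(2) or 1)
        out.extend([value] * count)
    return tuple(out)
```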
For maximum likelihood estimation, the L-BFGS-B method is used, and the simulation results are reported in Table A1. In Bayesian estimation, we employ not only non-informative priors (non-infor) but also informative priors (infor). For the non-informative priors, we set α = β = γ = δ = 0.0001. For the informative priors, we must first determine the hyper-parameters. Generally speaking, the true value of the parameter is taken as the expectation of the prior distribution. However, due to the complexity and interaction of the two prior distributions, the optimal values cannot be found directly. Thus, we adopt a genetic algorithm and a simulated annealing algorithm to determine the optimal hyper-parameters, with the results γ = 4.5, δ = 7.5, α = 4.5, β = 4.5. To obtain Bayesian point estimates, the Lindley method and the importance sampling method are employed. Three loss functions are adopted for comparison purposes: the parameter p of the LLF is set to 1/2 and 1, and the parameter q of the GELF is set to −1/2 and 1/2.
The informative Bayes method relies on minimizing loss functions, and these minimizations can be performed only when the true parameter values are known. Hence, the informative Bayes results should be viewed only as a reference, or an oracle method.
The results are presented in Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8 and Table A9. In addition, the mean length and coverage rate of asymptotic confidence intervals, boot-t intervals, boot-p intervals, and HPD intervals at 95% confidence/credible level are also shown in Table A10 and Table A11.
Because of the large number of tables, it is not easy for readers to discern the patterns in the estimates. Therefore, figures presenting the most representative simulation results are provided to show these patterns more intuitively. Figure 3 and Figure 4 present the MSEs of the maximum likelihood estimates of the two parameters under censoring scheme I and censoring scheme II when T = 2. Figure 5 and Figure 6 compare the MSEs of the maximum likelihood estimates with the Bayesian estimates with non-informative and informative priors obtained by importance sampling under censoring scheme I and T = 2.
From Table A1, we can draw the following conclusions:
(1)
All the estimates generally approach the true value, and the MSEs tend to decrease as the sample size n, the number of observations m, or the ratio m/n increases. These patterns in the MSEs can be easily observed in Figure 3 and Figure 4.
(2)
The MLEs of λ perform better than the MLEs of σ in terms of MSE. However, the estimates of σ are closer to the true value than those of λ.
(3)
Diverse censoring schemes show a regular pattern in terms of MSE. From Figure 3 and Figure 4, we can see that, when σ is considered, Sch I performs better than Sch II in all cases, whereas when λ is considered, Sch II is more effective than Sch I except when n = 30.
(4)
No specific pattern is observed as T changes. This is understandable because the observed data may remain unaltered when T changes.
From Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8 and Table A9, we can draw the following conclusions:
(1)
Generally, the Bayesian estimates under the three loss functions with informative priors are more accurate than the MLEs in terms of MSE in all cases. This pattern can be intuitively observed in Figure 5 and Figure 6. The reason is that the Bayesian method considers not only the data but also the prior information on the unknown parameters. In addition, the importance sampling procedure outperforms the Lindley method.
(2)
From Figure 5 and Figure 6, it is clear that the performance of the Bayesian estimates with non-informative priors is very similar to that of the MLEs under all circumstances. This is because no information on the unknown parameters is available, so the estimation relies only on the data; it is therefore reasonable that the results are analogous to the MLEs.
(3)
The Bayesian estimates under the GELF are superior to those under the SELF and LLF. For the LLF, estimates with p = 1 are better than those with p = 1/2 for the parameter λ, while p = 1/2 is better than p = 1 for σ. For the GELF, both q = −1/2 and q = 1/2 are satisfactory and perform well. On the whole, the Bayesian estimates under the GELF using the importance sampling procedure are the most effective, as they possess the smallest MSEs and the estimates closest to the true values.
(4)
When σ is considered, Sch I performs better than Sch II except when n = 50, whereas when λ is taken into account, Sch II is superior to Sch I in most cases.
From Table A10 and Table A11, the following conclusions can be drawn:
(1)
The mean lengths of all the intervals become narrower as n and m increase, and this pattern holds for both σ and λ. In addition, as m and n increase, the coverage rate of the intervals for σ becomes higher, while the coverage rate of the intervals for λ remains stable.
(2)
The HPD credible intervals and boot-t intervals perform better than the asymptotic confidence intervals owing to their narrower mean lengths and higher coverage rates. Moreover, the HPD credible intervals possess the narrowest mean lengths, while the boot-t intervals have the highest coverage rates.
(3)
The interval results for the two parameters show no obvious dependence on the censoring scheme.

6. Real Data Analysis

An authentic data set is analyzed for illustrative purposes in this section by employing the methods described above. The data set originally appeared in [20] and was further employed by [21,22]. The complete data set consists of log times to breakdown from an insulating fluid testing experiment and is presented in Table 2.
First, we should consider whether the distribution EHL$(\lambda,\sigma)$ fits the data set well. The fit of the exponentiated half-logistic distribution, with CDF $F(x)=\left(\frac{1-e^{-\lambda x}}{1+e^{-\lambda x}}\right)^{\sigma}$, is compared with that of the half-logistic distribution. The criteria employed for examining the goodness of fit include the negative log-likelihood ($-\ln L$), the Kolmogorov–Smirnov (K-S) statistic with its p-value, the Bayesian Information Criterion (BIC), and the Akaike Information Criterion (AIC). The definitions are:
$$AIC = 2d - 2\ln L,$$
$$BIC = d\ln n - 2\ln L,$$
where d is the number of parameters, L is the maximized value of the likelihood function, and n denotes the total number of observed values.
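A minimal sketch of how these criteria can be computed, assuming the EHL CDF $F(x)=\left(\frac{1-e^{-\lambda x}}{1+e^{-\lambda x}}\right)^{\sigma}$; the data, fitted values, and log-likelihood below are placeholders rather than the values reported in Table 3.

```python
import math
import numpy as np

def aic(d, loglik):
    """AIC = 2d - 2 ln L."""
    return 2 * d - 2 * loglik

def bic(d, n, loglik):
    """BIC = d ln n - 2 ln L."""
    return d * math.log(n) - 2 * loglik

def ehl_cdf(x, lam, sigma):
    # Assumed EHL CDF, matching the fitting comparison above.
    e = np.exp(-lam * np.asarray(x, dtype=float))
    return ((1 - e) / (1 + e)) ** sigma

def ks_statistic(data, cdf):
    """One-sample Kolmogorov-Smirnov statistic sup_x |F_n(x) - F(x)|."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    f = cdf(x)
    d_plus = np.max(np.arange(1, n + 1) / n - f)
    d_minus = np.max(f - np.arange(0, n) / n)
    return float(max(d_plus, d_minus))

rng = np.random.default_rng(7)
data = rng.exponential(1.0, size=16)   # placeholder for the 16 observed times
loglik = -25.0                         # placeholder maximized log-likelihood
a = aic(2, loglik)
b = bic(2, len(data), loglik)
ks = ks_statistic(data, lambda t: ehl_cdf(t, 1.0, 1.5))
```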
The K-S statistic, p-value, AIC, BIC, and −ln L of the two distributions are listed in Table 3. Obviously, the exponentiated half-logistic distribution fits the data better, since it has lower K-S, AIC, BIC, and −ln L statistics and a higher p-value. We can therefore analyze this data set on the basis of our model.
We set n = 16, m = 12, and T = 1.5, 2. The two different censoring schemes are (4, 0*11) and (1*4, 0*8). Table 4 presents the specific adaptive type II censored data under the different schemes based on this data set.
The point estimations for σ and λ are presented in Table 5 and Table 6. For Bayesian estimation, since we have no informative prior, a non-informative prior is applied, namely α = β = γ = δ = 0.0001 . Three loss functions are considered, and we still use the parameters in the previous simulation. At the same time, 95% ACIs, boot-p, boot-t, and HPD intervals are established, while Table 7 and Table 8 display the corresponding results. Let Lower denote the lower bound and Upper denote the upper bound.
From Table 5, Table 6, Table 7 and Table 8, the following conclusions are drawn:
(1)
The estimates of the parameter σ using the Lindley method generally tend to be larger than those obtained by the importance sampling procedure.
(2)
The estimates under the first censoring scheme are closer to the MLEs under the full sample, and the estimates using the Lindley method are more effective than those obtained by importance sampling.
(3)
The results for T = 1.5 and T = 2 are relatively close under the first censoring scheme because the observed data remain unaltered as T increases.
(4)
The HPD credible intervals have the narrowest mean length among all the intervals while the ACIs possess the longest mean length.
(5)
The interval results for the two parameters show no obvious dependence on the censoring scheme.

7. Conclusions

In this manuscript, classical and Bayesian inference for exponentiated half-logistic distribution under adaptive Type II progressive censoring is considered. The maximum likelihood estimates are derived through the Newton–Raphson algorithm. Bayesian estimation under three loss functions is also considered and the estimates are derived through importance sampling and the Lindley method. Meanwhile, we establish the confidence and credible intervals of σ and λ and contrast them with each other. Asymptotic confidence intervals are constructed based on observed and expected Fisher information matrices. In order to tackle the problem of small sample size, boot-p and boot-t intervals are computed.
In the simulation section, average estimates and mean squared errors are calculated to assess the performance of the point estimators, while mean lengths and coverage rates are considered for the interval estimators. According to the simulation results, it is clear that Bayesian estimation with suitable informative priors performs better than the MLEs under all circumstances. In more detail, the Bayesian estimates under the GELF perform best among all the estimators, and the importance sampling procedure is more effective than the Lindley approximation. When it comes to interval estimation, boot-t and boot-p intervals perform better than asymptotic confidence intervals in the case of a small sample size. In addition, HPD credible intervals generally possess the shortest mean length, while boot-t intervals have the highest coverage rate compared with the other intervals.
The exponentiated half-logistic distribution under adaptive Type II progressive censoring is significant and practical owing to the flexibility of the censoring scheme and the favorable features of the distribution. Furthermore, competing risks models and accelerated life tests can be explored in future research. In brief, further research on this model has great potential for survival and reliability analysis.

Author Contributions

Investigation, Z.X.; Supervision, W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Project 202210004001, supported by the National Training Program of Innovation and Entrepreneurship for Undergraduates.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in [20].

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Simulation Results of MLEs

Table A1. The simulation results of MLEs for σ and λ .
T | n | m | Sch | σ: VALUE | σ: MSE | λ: VALUE | λ: MSE
2 | 30 | 20 | I | 1.4694 | 0.1207 | 1.1028 | 0.1075
2 | 30 | 20 | II | 1.4661 | 0.1457 | 1.1075 | 0.1138
2 | 30 | 25 | I | 1.4667 | 0.0937 | 1.1024 | 0.1010
2 | 30 | 25 | II | 1.4754 | 0.1005 | 1.1006 | 0.1025
2 | 50 | 40 | I | 1.4787 | 0.0622 | 1.0607 | 0.0533
2 | 50 | 40 | II | 1.4792 | 0.0649 | 1.0565 | 0.0496
2 | 50 | 45 | I | 1.4832 | 0.0558 | 1.0580 | 0.0477
2 | 50 | 45 | II | 1.4816 | 0.0559 | 1.0553 | 0.0458
2 | 80 | 60 | I | 1.4898 | 0.0404 | 1.0403 | 0.0297
2 | 80 | 60 | II | 1.4858 | 0.0425 | 1.0341 | 0.0272
2 | 80 | 70 | I | 1.4858 | 0.0351 | 1.0348 | 0.0271
2 | 80 | 70 | II | 1.4892 | 0.0364 | 1.0337 | 0.0258
4 | 30 | 20 | I | 1.4651 | 0.1260 | 1.1133 | 0.1140
4 | 30 | 20 | II | 1.4573 | 0.1316 | 1.1152 | 0.1170
4 | 30 | 25 | I | 1.4655 | 0.0990 | 1.1063 | 0.1039
4 | 30 | 25 | II | 1.4661 | 0.1025 | 1.1049 | 0.1057
4 | 50 | 40 | I | 1.4772 | 0.0583 | 1.0569 | 0.0503
4 | 50 | 40 | II | 1.4857 | 0.0655 | 1.0500 | 0.0454
4 | 50 | 45 | I | 1.4823 | 0.0572 | 1.0518 | 0.0474
4 | 50 | 45 | II | 1.4805 | 0.0580 | 1.0542 | 0.0456
4 | 80 | 60 | I | 1.4904 | 0.0419 | 1.0342 | 0.0305
4 | 80 | 60 | II | 1.4912 | 0.0445 | 1.0303 | 0.0288
4 | 80 | 70 | I | 1.4843 | 0.0352 | 1.0342 | 0.0266
4 | 80 | 70 | II | 1.4884 | 0.0369 | 1.0336 | 0.0264

Appendix B. The Simulation Results of Bayesian Estimates with Non-Informative Priors

Table A2. The results of Bayesian estimates with non-informative priors for σ using the Lindley method.
T | n | m | σ̂ SE (VALUE, MSE) | σ̂ LL, p = 1/2 (VALUE, MSE) | σ̂ LL, p = 1 (VALUE, MSE) | σ̂ GE, q = −1/2 (VALUE, MSE) | σ̂ GE, q = 1/2 (VALUE, MSE)
230201.53000.11871.53030.11831.53010.12011.53090.11561.52960.1117
1.53280.14151.53470.13761.53350.13971.53330.13861.53290.1397
251.52890.09261.53360.08911.53230.09161.53270.09171.53300.0864
1.53460.10031.52470.10101.52440.99381.52420.09711.52420.0965
50401.52090.05961.52210.06121.52090.06031.52100.06221.52010.0572
1.52060.06361.52100.06591.52000.06321.51990.06571.51980.0603
451.52020.05411.51750.05461.51670.05591.51640.05041.51620.0525
1.51950.05631.51940.05481.51810.05601.51740.05351.51830.0504
80601.50920.04021.51050.04011.50980.03971.51010.03531.50920.0285
1.51360.04381.51430.04311.51380.04191.51370.04151.51310.0404
701.51260.03411.51480.03391.51410.03481.51340.03121.51320.0329
1.51040.03581.51160.03711.51060.03601.51050.03361.51040.0375
430201.53240.12001.53590.12701.53390.12371.53410.11731.53430.1214
1.53670.12971.54330.12811.54270.12681.54210.12961.54150.1246
251.53270.09701.53540.09801.53400.09601.53381.00021.53430.0964
1.52920.09831.53470.10311.53390.10591.53310.10201.53310.1025
50401.51960.05791.52350.05791.52180.05211.52210.05611.52200.0524
1.51830.06331.51490.06521.51360.05651.51330.06281.51360.0586
451.51680.05801.51820.05521.51680.05321.51700.05211.51740.0542
1.51950.05841.51970.05981.51880.05701.51920.05791.51860.0559
80601.51090.04031.50970.04021.50910.04051.50860.04071.50850.0358
1.50880.04141.50980.04301.50880.04351.50830.04121.50800.0582
701.51210.03261.51600.03721.51500.03311.51510.02991.51560.0331
1.51050.03341.51240.03531.51080.03621.51130.03611.51050.0371
Table A3. The results of Bayesian estimates with non-informative priors for λ using the Lindley method.
T | n | m | λ̂ SE (VALUE, MSE) | λ̂ LL, p = 1/2 (VALUE, MSE) | λ̂ LL, p = 1 (VALUE, MSE) | λ̂ GE, q = −1/2 (VALUE, MSE) | λ̂ GE, q = 1/2 (VALUE, MSE)
230201.10210.10651.10240.10751.10190.10621.10170.10591.10180.1062
1.10640.11291.10710.11471.10630.11271.10710.11341.10550.1125
251.10240.10121.10210.10031.10190.10031.10060.10001.10190.0991
1.09990.10141.09960.10221.09900.10221.10050.10161.09950.1016
50401.05940.05241.06040.05391.06000.05231.06040.05141.05900.0524
1.05630.04911.05610.04891.05580.04811.05470.04871.05610.0487
451.05780.04741.05700.04611.05670.04631.05720.04791.05650.0462
1.05430.04541.05520.04391.05510.04491.05340.04541.05510.0444
80601.03930.02821.03930.02851.03920.03131.03870.02821.04000.0290
1.03280.02671.03250.02601.03300.02751.03250.02551.03390.0256
701.03420.02661.03380.02641.03470.02611.03300.02591.03410.0269
1.03270.02501.03240.02501.03350.02681.03360.02561.03170.0257
430201.11210.11381.11230.11201.11270.11221.11240.11351.11190.1128
1.11450.11761.11400.11581.11480.11641.11440.11571.11400.1155
251.10530.10321.10570.10281.10600.10241.10550.10311.10540.1023
1.10340.10591.10480.10381.10340.10541.10470.10421.10400.1049
50401.05660.05011.05600.04901.05580.04931.05680.04931.05600.0490
1.04930.04501.04910.04481.04870.04461.04910.04601.04920.0449
451.05090.04621.05170.04701.05170.04731.05010.04591.05070.0459
1.05270.04481.05300.04491.05250.04501.05270.04571.05270.0436
80601.03380.02971.03370.02971.03400.02931.03300.02971.03400.0292
1.02970.02901.02900.02721.03000.02781.02880.02871.02870.0285
701.03260.02601.03310.02681.03340.02641.03360.02521.03400.0263
1.03280.02631.03240.02531.03240.02581.03320.02621.03160.0247
Table A4. The results of Bayesian estimates with non-informative priors for σ using importance sampling.
T | n | m | σ̂ SE (VALUE, MSE) | σ̂ LL, p = 1/2 (VALUE, MSE) | σ̂ LL, p = 1 (VALUE, MSE) | σ̂ GE, q = −1/2 (VALUE, MSE) | σ̂ GE, q = 1/2 (VALUE, MSE)
230201.53020.12071.47090.10891.46410.11721.53260.12051.52510.1298
1.53820.14191.53210.13231.52920.13911.53280.13561.53070.1438
251.53050.08921.53010.08821.52460.08151.53270.08221.52990.0844
1.53930.09971.53270.09891.52570.09071.53110.09251.52790.0942
50401.52930.05951.52750.06161.52440.05891.52030.05851.52420.0558
1.52530.06311.52200.06021.52910.06751.52530.05931.52810.0653
451.52690.05681.52960.05671.52660.05411.52900.05571.52370.0536
1.52530.05621.52700.05271.52480.05481.52770.05711.52260.0550
80601.51260.04031.51250.03571.51090.03961.51790.03891.51470.0376
1.50980.04281.51510.04041.51350.04221.51400.04441.51030.0431
701.51170.03501.50910.03401.50780.03311.51300.03401.51040.0360
1.51480.03711.50780.03101.50640.03311.51080.03481.50820.0328
430201.52880.12111.53130.11211.53230.11851.53300.11451.53210.1225
1.53780.13721.53240.13661.53020.13091.53630.13351.53560.1371
251.52980.09431.53520.09061.52850.09241.52270.09731.52020.0990
1.53110.10211.52410.10811.53700.09911.53330.10541.52060.1078
50401.52650.05151.52500.05561.52220.05531.52740.05901.52120.0562
1.52500.06331.52410.06651.52210.06341.52530.06091.52900.0683
451.52660.05791.52410.05071.52140.05841.52900.05581.52390.0537
1.52400.05961.52620.05561.52390.05371.52630.05751.52120.0553
80601.51670.04441.51010.04801.51840.04681.51170.04301.51830.0416
1.511980.04791.51450.04991.51300.03891.51620.04651.51480.0451
701.50830.03371.51080.03551.50950.03461.51440.03271.51180.0318
1.50820.03531.50690.03341.51570.03561.51460.03421.51210.0332
Table A5. The results of Bayesian estimates with non-informative priors for λ using importance sampling.
T | n | m | λ̂ SE (VALUE, MSE) | λ̂ LL, p = 1/2 (VALUE, MSE) | λ̂ LL, p = 1 (VALUE, MSE) | λ̂ GE, q = −1/2 (VALUE, MSE) | λ̂ GE, q = 1/2 (VALUE, MSE)
230201.10760.10331.10790.10531.10420.10031.10020.10831.10040.1028
1.10610.11101.10860.11571.10060.11731.10040.11831.10780.1193
251.10630.09621.10230.09271.10130.09141.10310.09410.90850.0932
1.10890.09661.10400.09181.10260.09341.10510.09391.10060.0948
50401.05890.05261.05130.04980.95470.05011.05140.05030.94700.0508
1.05320.04681.05610.04590.94930.04781.05610.04630.95170.0484
450.94750.04320.95080.04290.95310.04240.95070.04320.95610.0433
1.05070.04591.05180.04530.95540.04571.05190.04560.95850.0463
80600.96480.02820.97020.02790.97230.02710.97020.02810.97740.0276
0.96940.02650.97550.02640.97930.02530.97540.02650.97490.0257
700.97730.02450.97370.02450.97740.02570.97350.02460.97310.0262
0.97310.02610.97970.02610.97050.02610.97960.02620.97640.0265
430201.11170.11151.11430.11581.11640.11081.11570.11821.11260.1130
1.11870.11111.11550.11531.11340.11551.11680.11751.11120.1172
251.10800.10961.10320.10471.10460.10841.10450.10661.10210.1099
1.10420.10031.10780.10961.10240.10811.10870.10130.90010.1097
50401.05400.04980.94890.04911.05270.05220.94890.04950.94500.0533
1.05760.04881.05020.04791.05520.03961.05040.03840.94790.0430
451.05370.04820.94680.04750.95480.04490.94680.04790.95770.0456
1.05440.04540.94770.04490.95870.04470.94770.04520.95180.0452
80600.96500.03070.96070.03060.96820.02920.97060.03070.97340.0366
0.96750.02680.96360.02670.96470.02590.96350.02680.97030.0363
700.96300.02450.96930.02450.96840.02320.96920.02460.96420.0337
0.96390.02610.96030.02610.96220.02340.97020.02620.96830.0338

Appendix C. The Simulation Results of Bayesian Estimates with Informative Priors

Table A6. The results of Bayesian estimates with informative priors for σ using the Lindley method.
T | n | m | σ̂ SE (VALUE, MSE) | σ̂ LL, p = 1/2 (VALUE, MSE) | σ̂ LL, p = 1 (VALUE, MSE) | σ̂ GE, q = −1/2 (VALUE, MSE) | σ̂ GE, q = 1/2 (VALUE, MSE)
230201.53190.11531.53070.11251.53150.12591.46990.10101.53290.1187
1.53520.12541.53380.11761.53100.13011.46370.10421.53050.1271
251.52350.08951.52980.08901.52120.11171.47370.07501.52950.0937
1.52950.11021.52570.09841.52050.12181.47410.08211.53770.1018
50401.52030.05421.52170.04991.52450.05161.47850.05311.52350.0567
1.52810.05661.52650.05191.52480.05931.47800.05451.52300.0632
451.51560.04171.51660.04851.51050.04391.48410.04401.51060.0498
1.52500.04731.52560.05351.52340.04481.48340.04761.52360.0509
80601.50590.03571.51140.03391.51330.03461.49610.03151.51650.0325
1.51540.03991.51510.04361.51490.04091.49920.03161.51760.0462
701.50330.02891.50100.02761.50520.02971.49930.02551.50950.0283
1.50590.02941.50340.02801.50230.03071.49340.02571.51640.0291
430201.53240.12341.53120.11441.53810.12921.46060.10251.53050.1202
1.53710.12351.53560.11541.53060.13641.46540.10261.53910.1250
251.52650.10561.52270.09451.52610.10941.47790.07751.52440.0920
1.52420.10641.53050.09511.52270.11151.47530.07891.53110.1141
50401.51730.06241.52580.05791.52390.06301.47720.05111.52270.0577
1.51980.06401.52840.05941.52900.06791.47500.05191.52750.0622
451.50910.05511.51990.04171.51290.05441.48530.04671.51320.0505
1.51210.05301.51300.04951.51460.05071.48990.04391.52530.0473
80601.51070.03801.51620.03601.51330.03461.49110.03201.51250.0325
1.51480.04711.51400.04041.51080.04061.49070.03291.51340.0457
701.50610.02841.50370.02701.50590.02781.49540.02601.51010.0264
1.50780.03001.50540.02851.50590.03181.49250.02481.51000.0302
Table A7. The results of Bayesian estimates with informative priors for λ using the Lindley method.
T | n | m | λ̂ SE (VALUE, MSE) | λ̂ LL, p = 1/2 (VALUE, MSE) | λ̂ LL, p = 1 (VALUE, MSE) | λ̂ GE, q = −1/2 (VALUE, MSE) | λ̂ GE, q = 1/2 (VALUE, MSE)
230201.09090.10121.08950.09951.09080.09231.08870.09831.09150.0934
1.09610.09751.08830.09761.08560.09221.08730.09621.07950.0920
251.08780.09691.07720.09570.92730.09321.07660.09491.07730.0924
1.07650.09081.06970.09290.92780.09191.06890.09321.69830.0910
50401.06640.04981.04960.04910.94240.04371.05940.04870.94900.0433
1.05790.04351.04110.04310.95480.04021.05100.04280.95180.0397
450.99520.03850.96910.03850.95480.03370.95110.03830.95330.0333
1.05470.03731.03620.03680.96790.03131.04800.03660.95450.0310
80601.04760.01911.03280.01900.97370.01341.03280.01880.96840.0132
1.03020.01410.97630.01431.03120.01350.97650.01421.02540.0144
700.97750.01350.97370.01340.98090.01370.97380.01330.98490.0133
0.97520.01190.97630.01190.98520.01390.98140.01190.97900.0115
430201.09270.10371.08920.10170.91860.09961.08820.10061.08970.0967
1.08990.09651.08820.09571.08330.09771.08730.09371.08350.0967
250.92710.09300.92650.09250.92020.08610.92620.09181.07080.0955
1.08350.08581.06290.08501.07600.08381.07260.08441.06530.0826
50401.05650.04081.06830.04011.05910.03171.05200.03971.05590.0317
1.05060.03020.93400.03000.95990.02910.95390.02981.04650.0289
451.05460.03231.04820.03200.95760.03141.04810.03170.95420.0310
1.04950.02861.04330.02831.04490.02541.04320.02801.04120.0253
80600.97290.01520.96850.01520.96520.01350.97860.01510.97970.0142
1.03270.01510.96880.01500.96700.01840.97870.01491.02100.0132
700.97650.01200.97250.01290.97080.01250.98250.01290.98470.0120
1.02250.01200.97890.01250.97400.01200.98890.01280.97800.0119
Table A8. The results of Bayesian estimates with informative priors for σ using importance sampling.
T | n | m | σ̂ SE (VALUE, MSE) | σ̂ LL, p = 1/2 (VALUE, MSE) | σ̂ LL, p = 1 (VALUE, MSE) | σ̂ GE, q = −1/2 (VALUE, MSE) | σ̂ GE, q = 1/2 (VALUE, MSE)
230201.53800.07901.53360.06781.53370.07451.53790.06941.53360.0632
1.53470.09851.53940.08481.53530.09181.53200.08451.53420.0778
251.53520.07291.53040.06751.53130.06911.52820.06521.52690.0561
1.53030.07961.53620.06351.53610.07571.53680.07151.53300.0598
50401.52960.05221.52100.04711.52220.05051.52740.04871.52950.0457
1.53320.04761.52890.04391.52600.04611.52150.04451.52710.0424
451.52070.04391.52760.03771.52160.04271.51770.04131.51580.0362
1.52700.04631.52010.04371.52790.04431.52210.04251.52860.0421
80601.51880.02711.51510.02611.51390.02611.51090.02521.51080.0253
1.52010.03141.51140.02641.51420.03011.50440.02901.51130.0255
701.50750.02141.50380.01871.50370.02051.50220.01981.50880.0181
1.51170.02171.50710.01961.50680.02101.50440.02041.51100.0190
430201.53220.05271.53710.07371.53510.04941.53450.04611.53330.0685
1.53310.08281.53090.09311.53460.07771.53250.07181.53440.0853
251.52770.07631.52920.08491.52340.07321.52430.06651.52490.0596
1.52610.07711.53210.06391.53340.08171.53440.06841.53580.0698
50401.52540.04531.52030.03921.52630.04341.52060.04161.51910.0378
1.52490.04531.52070.04651.52680.04401.52190.04241.51950.0451
451.52330.03521.52040.03461.52610.03451.52190.03361.52280.0394
1.51570.04051.51330.03541.51830.03911.51360.03771.51560.0342
80601.51680.02701.51260.02171.51160.02581.50930.02471.51140.0210
1.51270.02781.51610.02291.51300.02681.51050.02481.51480.0221
701.50720.02271.51080.02201.51320.02201.51080.02141.50970.0214
1.50240.02541.50880.02611.50820.02441.50550.02361.50780.0235
Table A9. The results of Bayesian estimates with informative priors for λ using importance sampling.
T | n | m | λ̂ SE (VALUE, MSE) | λ̂ LL, p = 1/2 (VALUE, MSE) | λ̂ LL, p = 1 (VALUE, MSE) | λ̂ GE, q = −1/2 (VALUE, MSE) | λ̂ GE, q = 1/2 (VALUE, MSE)
230201.09090.10121.08950.09951.09080.09231.08870.09831.09150.0934
1.09610.09751.08830.09761.08560.09221.08730.09621.07950.0920
251.08780.09691.07720.09570.92730.09321.07660.09491.07730.0924
1.07650.09081.06970.09290.92780.09191.06890.09321.69830.0910
50401.06640.04981.04960.04910.94240.04371.05940.04870.94900.0433
1.05790.04351.04110.04310.95480.04021.05100.04280.95180.0397
450.99520.03850.96910.03850.95480.03370.95110.03830.95330.0333
1.05470.03731.03620.03680.96790.03131.04800.03660.95450.0310
80601.04760.01911.03280.01900.97370.01341.03280.01880.96840.0132
1.03020.01410.97630.01431.03120.01350.97650.01421.02540.0144
700.97750.01350.97370.01340.98090.01370.97380.01330.98490.0133
0.97520.01190.97630.01190.98520.01390.98140.01190.97900.0115
430201.09270.10371.08920.10170.91860.09961.08820.10061.08970.0967
1.08990.09651.08820.09571.08330.09771.08730.09371.08350.0967
250.92710.09300.92650.09250.92020.08610.92620.09181.07080.0955
1.08350.08581.06290.08501.07600.08381.07260.08441.06530.0826
50401.05650.04081.06830.04011.05910.03171.05200.03971.05590.0317
1.05060.03020.93400.03000.95990.02910.95390.02981.04650.0289
451.05460.03231.04820.03200.95760.03141.04810.03170.95420.0310
1.04950.02861.04330.02831.04490.02541.04320.02801.04120.0253
80600.97290.01520.96850.01520.96520.01350.97860.01510.97970.0142
1.03270.01510.96880.01500.96700.01840.97870.01491.02100.0132
700.97650.01200.97250.01290.97080.01250.98250.01290.98470.0120
1.02250.01200.97890.01250.97400.01200.98890.01280.97800.0119

Appendix D. The Simulation Results of All Intervals

Table A10. The simulation results of five intervals for σ .
T | n | m | Sch | ACI (ML, CR) | boot-p (ML, CR) | boot-t (ML, CR) | HPD, non-infor (ML, CR) | HPD, infor (ML, CR)
23020I1.33170.89031.24900.88471.22060.91631.32260.88831.12320.8277
II1.39110.88931.40170.89171.38170.90471.37570.88911.19820.8383
25I1.19890.89231.20190.89701.11410.92101.16950.88871.00930.8410
II1.21530.90831.25040.89701.10930.91571.18350.90381.01410.8477
5040I0.96260.91600.99150.90560.87410.94400.94830.91060.76340.8580
II0.97120.92800.98170.91930.95960.92530.93820.92140.77620.8767
45I0.91310.92600.95400.92200.80970.94400.90800.92190.72330.8773
II0.91260.92500.94160.91470.81240.94330.89820.92370.71680.8753
8060I0.79190.92930.81170.91800.69170.95270.77320.92300.59020.8700
II0.80530.92830.79790.92930.70520.95200.78630.92720.60350.8800
70I0.73470.93170.75730.93960.63280.95000.70590.92520.53100.8847
II0.73130.92470.77660.93800.63730.94870.69820.92280.53850.8773
43020I1.33090.89431.37190.88971.23480.90071.32680.89031.13480.8367
II1.38530.88101.39720.89671.39120.91501.37400.88001.18380.8273
25I1.19810.90201.25430.89801.11310.92571.18970.89831.01040.8243
II1.22290.90501.27260.91131.12270.91901.19450.90151.01090.8387
5040I0.96210.92000.99060.91150.86220.95100.95620.91490.76140.8453
II0.97950.92300.98500.91600.87250.94870.96560.92020.76930.8513
45I0.91290.92230.93430.92670.81690.94670.91110.91830.71510.8680
II0.91140.92170.91620.92730.81480.95000.88820.91830.71540.8760
8060I0.78920.93400.81260.94380.68910.94670.76010.93140.59270.8673
II0.80000.92100.81650.92470.70620.95600.77230.91540.60340.8647
70I0.73540.93000.74430.91730.63230.95800.72860.92810.53740.8727
II0.73360.93100.73720.93330.63570.95200.71750.92580.53550.8713
Table A11. The simulation results of five intervals for λ .
T | n | m | Sch | ACI (ML, CR) | boot-p (ML, CR) | boot-t (ML, CR) | HPD, non-infor (ML, CR) | HPD, infor (ML, CR)
23020I1.17250.97571.24750.96831.13840.97271.15180.97400.98550.9333
II1.12560.97601.11270.97201.08050.97671.10100.97570.91490.9343
25I1.09750.96701.11810.96831.04670.97001.08000.96620.88880.9310
II1.05470.97371.07050.96601.03070.97371.03330.96910.86490.9290
5040I0.81370.95630.91250.95700.74540.96650.81300.95310.81490.9165
II0.78500.96000.78320.95930.75400.96970.78430.95990.68280.9167
45I0.78610.96100.79720.96200.74090.96870.75720.95830.57140.9127
II0.77150.95400.77160.95330.73730.95970.74830.94740.56200.9207
8060I0.64440.95530.66640.95600.60280.96800.61770.95240.44560.9180
II0.60720.95430.65520.94670.56870.95100.59850.95090.41040.9213
70I0.60710.95330.62630.95930.57070.95030.58910.95210.41140.9193
II0.61190.95470.62440.96130.55690.95070.59530.94860.39630.9120
43020I1.18040.97301.28760.96901.07010.97671.16750.96730.96490.9340
II1.11620.96701.15900.97301.07100.97731.10490.96430.91500.9300
25I1.08620.96601.15970.97071.03030.97501.07060.96210.87760.9260
II1.04590.96671.08280.97031.00230.97931.01720.96430.86110.9293
5040I0.81690.95730.92450.95950.71830.97450.79070.95430.61260.9153
II0.78390.95530.79130.96200.74820.97270.77080.95300.58990.9193
45I0.78060.95430.78590.96270.72770.96130.77390.95250.57060.9113
II0.77240.96100.76740.95130.71570.97600.74420.95480.56340.9180
8060I0.64370.95930.66380.95200.58280.96470.61750.95840.44630.9167
II0.61250.94730.64840.95470.59910.96000.59760.94320.41080.9107
70I0.60940.95500.62550.95870.52270.96870.58130.94930.41060.9067
II0.61240.95230.59590.96000.53470.96670.60930.94710.39670.9187

References

  1. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Springer Science & Business Media: Berlin, Germany, 2000.
  2. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring; Birkhäuser: New York, NY, USA, 2014.
  3. Ng, H.K.T.; Kundu, D.; Chan, P.S. Statistical analysis of exponential lifetimes under an adaptive Type-II progressive censoring scheme. Nav. Res. Logist. 2009, 56, 687–698.
  4. Cramer, E.; Iliopoulos, G. Adaptive progressive Type-II censoring. Test 2010, 19, 342–358.
  5. Nassar, M.; Abo-Kasem, O.E.; Zhang, C.; Dey, S. Analysis of Weibull distribution under adaptive Type-II progressive hybrid censoring scheme. J. Indian Soc. Probab. Stat. 2018, 19, 25–65.
  6. Wang, B.X. Interval estimation for exponential progressive Type-II censored step-stress accelerated life-testing. J. Stat. Plan. Inference 2010, 140, 2706–2718.
  7. Liu, S.; Gui, W. Estimating the parameters of the two-parameter Rayleigh distribution based on adaptive type II progressive hybrid censored data with competing risks. Mathematics 2020, 8, 1783.
  8. Seo, J.I.; Kang, S.B. Notes on the exponentiated half logistic distribution. Appl. Math. Model. 2015, 39, 6491–6500.
  9. Balakrishnan, N. Order statistics from the half logistic distribution. J. Stat. Comput. Simul. 1985, 20, 287–309.
  10. Kim, C.; Han, K. Estimation of the scale parameter of the half-logistic distribution under progressively type II censored sample. Stat. Pap. 2010, 51, 375–387.
  11. Giles, D.E. Bias reduction for the maximum likelihood estimators of the parameters in the half-logistic distribution. Commun. Stat.-Theory Methods 2012, 41, 212–222.
  12. Adatia, A. Estimation of parameters of the half-logistic distribution using generalized ranked set sampling. Comput. Stat. Data Anal. 2000, 33, 1–13.
  13. Kang, S.B.; Seo, J.I. Estimation in an exponentiated half logistic distribution under progressively type-II censoring. Commun. Stat. Appl. Methods 2011, 18, 657–666.
  14. Gui, W. Exponentiated half logistic distribution: Different estimation methods and joint confidence regions. Commun. Stat.-Simul. Comput. 2017, 46, 4600–4617.
  15. Cordeiro, G.M.; Alizadeh, M.; Ortega, E.M. The exponentiated half-logistic family of distributions: Properties and applications. J. Probab. Stat. 2014, 2014, 864396.
  16. Rao, G.S.; Naidu, R. Acceptance sampling plans for percentiles based on the exponentiated half logistic distribution. Appl. Appl. Math. Int. J. 2014, 9, 39–53.
  17. Rao, G.S. A control chart for time truncated life tests using exponentiated half logistic distribution. Appl. Math. Inf. Sci. 2018, 12, 125–131.
  18. Efron, B.; Tibshirani, R.J. An Introduction to the Bootstrap; CRC Press: Boca Raton, FL, USA, 1994.
  19. Lindley, D.V. Approximate Bayesian methods. Trab. Estadística Investig. Oper. 1980, 31, 223–245.
  20. Nelson, W.B. Applied Life Data Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2003; Volume 521.
  21. Rastogi, M.K.; Tripathi, Y.M. Parameter and reliability estimation for an exponentiated half-logistic distribution under progressive type II censoring. J. Stat. Comput. Simul. 2014, 84, 1711–1727.
  22. Kang, S.B.; Seo, J.I.; Kim, Y. Bayesian analysis of an exponentiated half-logistic distribution under progressively type-II censoring. J. Korean Data Inf. Sci. Soc. 2013, 24, 1455–1464.
Figure 1. Adaptive type II progressive censoring.
Figure 2. CDF (left) and PDF (right) of exponentiated half-logistic distribution.
Figure 3. The MSEs of the MLEs of parameter σ under two censoring schemes.
Figure 4. The MSEs of the MLEs of parameter λ under two censoring schemes.
Figure 5. The MSEs of MLEs and Bayesian estimates with non-informative and informative priors of parameter σ .
Figure 6. The MSEs of MLEs and Bayesian estimates with non-informative and informative priors of parameter λ .
Table 1. Different censoring schemes.
T    n    m    CS              T    n    m    CS
2    30   20   (10, 0*19)      4    30   20   (10, 0*19)
               (1*10, 0*10)                   (1*10, 0*10)
          25   (5, 0*24)                 25   (5, 0*24)
               (1*5, 0*20)                    (1*5, 0*20)
     50   40   (10, 0*39)           50   40   (10, 0*39)
               (1*10, 0*30)                   (1*10, 0*30)
          45   (5, 0*44)                 45   (5, 0*44)
               (1*5, 0*40)                    (1*5, 0*40)
     80   60   (20, 0*59)           80   60   (20, 0*59)
               (1*20, 0*40)                   (1*20, 0*40)
          70   (10, 0*69)                70   (10, 0*69)
               (1*10, 0*60)                   (1*10, 0*60)
Table 2. Real data set.
0.270027   1.02245   1.15057   1.42311   1.54116   1.57898   1.8718    1.9947
2.08069    2.11263   2.48989   3.45789   3.48187   3.52371   3.60305   4.28895
Table 3. The fitting results of the two distributions.
       λ        σ        −ln L      AIC       BIC       K-S Statistic   p-Value
HL     1.0023   0.6536   27.0313    56.6609   56.2061   0.2659          0.3749
EHL    2.4309   0.9639   24.4488    52.8976   54.4428   0.1836          0.5906
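The fit comparison above can be reproduced in outline. The sketch below assumes the EHL cdf F(x) = [(1 − e^{−x/σ})/(1 + e^{−x/σ})]^λ (which reduces to the half-logistic at λ = 1) and fits it to the Table 2 data by direct likelihood maximization with scipy; the function names are ours, and the exact figures may differ slightly from the table depending on parametrization and optimizer:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kstest

# Real data set from Table 2
data = np.array([0.270027, 1.02245, 1.15057, 1.42311, 1.54116, 1.57898,
                 1.8718, 1.9947, 2.08069, 2.11263, 2.48989, 3.45789,
                 3.48187, 3.52371, 3.60305, 4.28895])

def ehl_cdf(x, lam, sigma):
    """Assumed EHL cdf: F(x) = [(1 - e^{-x/sigma}) / (1 + e^{-x/sigma})]^lam."""
    u = np.exp(-x / sigma)
    return ((1.0 - u) / (1.0 + u)) ** lam

def neg_log_lik(theta):
    # Optimize on the log scale so both parameters stay positive.
    lam, sigma = np.exp(theta)
    u = np.exp(-data / sigma)
    logf = (np.log(2.0 * lam / sigma) - data / sigma
            + (lam - 1.0) * np.log1p(-u) - (lam + 1.0) * np.log1p(u))
    return -logf.sum()

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
lam_hat, sigma_hat = np.exp(res.x)
aic = 2 * 2 + 2 * res.fun          # k = 2 fitted parameters
ks = kstest(data, lambda x: ehl_cdf(x, lam_hat, sigma_hat))
```

Note that the K-S statistic here plugs the fitted parameters into the cdf as if they were fixed, which matches the plug-in comparison behind tables like Table 3.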
Table 4. Adaptive progressive type II censoring data under different schemes.
SchemeCensored Data
(4, 0*11), T = 1.5 0.270027, 1.57898, 1.8718, 1.9947, 2.08089, 2.11263
2.48989, 3.45789, 3.481865, 3.52371, 3.60305, 4.28895
(4, 0*11), T = 2 0.270027, 1.57898, 1.8718, 1.9947, 2.08089, 2.11263
2.48989, 3.45789, 3.481865, 3.52371, 3.60305, 4.28895
(1*4, 0*8), T = 1.5 0.270027, 1.15057, 1.54116, 1.57898, 1.8718, 1.9947
2.08089, 2.11263, 2.48989, 3.45789, 3.481865, 3.52371
(1*4, 0*8), T = 2 0.270027, 1.15057, 1.54116, 1.8718, 2.08089, 2.11263
2.48989, 3.45789, 3.48187, 3.52371, 3.60305, 4.28895
Table 5. The MLEs and Bayesian estimates of σ under SELF, LLF, and GELF by the Lindley approximation and the importance sampling.
T     R            MLE      SELF     LLF                 GELF                Method
                                     p = 1/2   p = 1     p = −1/2  p = 1/2
1.5   (4, 0*11)    1.1958   1.1285   1.1104    1.0942    1.1134    1.0875    Lindley
                            1.2577   1.3180    1.0408    1.0662    1.2887    Importance sampling
      (1*4, 0*11)  1.2014   1.0794   1.0626    1.0483    1.0654    1.0429    Lindley
                            1.2346   1.1696    1.1458    1.1257    1.1306    Importance sampling
2     (4, 0*11)    1.1958   1.0340   1.0197    1.0061    1.0206    0.9959    Lindley
                            1.3057   1.0047    1.2387    1.2787    0.9836    Importance sampling
      (1*4, 0*11)  1.2326   0.9577   0.9451    0.9333    0.9451    0.9223    Lindley
                            1.3420   1.2877    1.2147    1.1860    1.3209    Importance sampling
Table 6. The MLEs and Bayesian estimates of λ under SELF, LLF, and GELF by the Lindley approximation and the importance sampling.
T     R            MLE      SELF     LLF                 GELF                Method
                                     p = 1/2   p = 1     p = −1/2  p = 1/2
1.5   (4, 0*11)    2.4364   2.3591   2.3883    2.3182    2.2932    2.3234    Lindley
                            2.4817   2.2303    2.3475    2.5060    2.3174    Importance sampling
      (1*4, 0*11)  2.3748   2.5351   2.3908    2.1896    2.5062    2.3240    Lindley
                            2.5865   2.3003    2.4913    2.4253    2.4157    Importance sampling
2     (4, 0*11)    2.4364   2.5786   2.3282    2.1437    2.4798    2.2910    Lindley
                            2.1038   2.1082    1.8596    2.0381    2.0456    Importance sampling
      (1*4, 0*11)  2.3820   2.7732   2.5294    2.3202    2.7069    2.5050    Lindley
                            2.5353   2.1758    2.5118    2.4388    2.4546    Importance sampling
Table 7. The four intervals for σ at the 95% confidence/credible level.
T     R            ACI               boot-p            boot-t            HPD
                   Lower    Upper    Lower    Upper    Lower    Upper    Lower    Upper
1.5   (4, 0*11)    0.6568   1.7348   0.6645   1.7800   0.8590   1.6932   0.7425   1.6066
      (1*4, 0*11)  0.6243   1.7785   0.6399   2.0750   0.7176   1.7313   0.8460   1.7601
2     (4, 0*11)    0.6568   1.7348   0.6001   1.7640   0.6955   1.7824   0.6969   1.6236
      (1*4, 0*11)  0.6243   1.7785   0.6340   1.9732   0.8295   1.9510   0.9078   1.4993
Table 8. The four intervals for λ at the 95% confidence/credible level.
T     R            ACI               boot-p            boot-t            HPD
                   Lower    Upper    Lower    Upper    Lower    Upper    Lower    Upper
1.5   (4, 0*11)    0.5197   4.3530   1.3134   3.8975   0.8036   3.2435   1.0744   3.3039
      (1*4, 0*11)  0.5143   4.0354   1.2444   4.3882   1.0538   3.8101   0.7514   3.2800
2     (4, 0*11)    0.5197   4.3530   1.3085   4.0165   1.5664   4.3491   0.8151   3.4981
      (1*4, 0*11)  0.5143   4.0354   1.3628   3.8839   0.4289   3.2993   1.0254   3.7863
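The boot-p columns in Tables 7 and 8 follow the percentile recipe of Efron and Tibshirani [18]: refit the model on parametric resamples drawn from the fitted distribution, then read off empirical percentiles of the replicated estimates. A self-contained sketch under the same assumed EHL parametrization, using a synthetic complete sample for brevity (the paper applies the recipe to adaptively censored data, so this illustrates only the resampling loop):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2021)

def ehl_ppf(q, lam, sigma):
    # Inverse of the assumed EHL cdf F(x) = [(1 - e^{-x/s})/(1 + e^{-x/s})]^lam
    v = q ** (1.0 / lam)
    return -sigma * np.log((1.0 - v) / (1.0 + v))

def fit_ehl(x):
    # Maximum likelihood on the log scale so both parameters stay positive
    def nll(theta):
        lam, sigma = np.exp(theta)
        u = np.exp(-x / sigma)
        return -np.sum(np.log(2.0 * lam / sigma) - x / sigma
                       + (lam - 1.0) * np.log1p(-u) - (lam + 1.0) * np.log1p(u))
    return np.exp(minimize(nll, np.log([1.0, 1.0]), method="Nelder-Mead").x)

# Step 1: estimate (lambda, sigma) from the observed sample
# (a synthetic sample here, standing in for the real data).
x_obs = ehl_ppf(rng.uniform(size=40), lam=2.4, sigma=0.96)
lam_hat, sigma_hat = fit_ehl(x_obs)

# Steps 2-3: draw B parametric resamples from the fitted model and refit each.
B = 200
boot = np.array([fit_ehl(ehl_ppf(rng.uniform(size=x_obs.size), lam_hat, sigma_hat))
                 for _ in range(B)])

# Step 4: the 95% boot-p interval is the empirical 2.5th/97.5th percentile pair.
lam_lo, lam_hi = np.percentile(boot[:, 0], [2.5, 97.5])
sig_lo, sig_hi = np.percentile(boot[:, 1], [2.5, 97.5])
```

The boot-t variant instead studentizes each replicate by its estimated standard error before taking percentiles, which is why the two methods give different endpoints in the tables above.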
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Xiong, Z.; Gui, W. Classical and Bayesian Inference of an Exponentiated Half-Logistic Distribution under Adaptive Type II Progressive Censoring. Entropy 2021, 23, 1558. https://doi.org/10.3390/e23121558

