Article

Estimation for Entropy and Parameters of Generalized Bilal Distribution under Adaptive Type II Progressive Hybrid Censoring Scheme

1 School of Electronics Engineering, Xi’an University of Posts and Telecommunications, Xi’an 710121, China
2 School of Mathematics and Statistics, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Entropy 2021, 23(2), 206; https://doi.org/10.3390/e23020206
Submission received: 17 December 2020 / Revised: 31 January 2021 / Accepted: 4 February 2021 / Published: 8 February 2021
(This article belongs to the Special Issue The Statistical Foundations of Entropy)

Abstract

Entropy measures the uncertainty associated with a random variable. It has important applications in cybernetics, probability theory, astrophysics, life sciences and other fields. Recently, many authors have focused on the estimation of entropy for different life distributions. However, the estimation of entropy for the generalized Bilal (GB) distribution has not yet been addressed. In this paper, we consider the estimation of the entropy and the parameters of the GB distribution based on adaptive Type-II progressive hybrid censored data. Maximum likelihood estimates of the entropy and the parameters are obtained using the Newton–Raphson iteration method. Bayesian estimates under different loss functions are provided with the help of Lindley’s approximation. The approximate confidence intervals and the Bayesian credible intervals of the parameters and entropy are obtained by using the delta and Markov chain Monte Carlo (MCMC) methods, respectively. Monte Carlo simulation studies are carried out to observe the performances of the different point and interval estimates. Finally, a real data set is analyzed for illustrative purposes.

1. Introduction

To analyze and evaluate the reliability of products, life tests are often carried out. For products with long lives and high reliability, a censoring scheme is often adopted during the test to save time and costs. Two commonly used censoring schemes are Type-I and Type-II censoring, but these two schemes do not have the flexibility of allowing the removal of units at points other than the terminal point of the experiment. To allow for more flexibility in removing surviving units from the test, more general censoring approaches are required. The progressive Type-II censoring scheme is appealing and has attracted much attention in the literature; see [1], and one may also refer to [2] for a comprehensive review on progressive censoring. One drawback of the Type-II progressive censoring scheme is that the length of the experiment may be quite long for long-life products. Therefore, Kundu and Joarder [3] proposed a Type-II progressive hybrid censoring scheme where the experiment terminates at a pre-specified time. However, for the Type-II progressive hybrid censoring scheme, the drawback is that the effective sample size is a random variable, which may be very small or even zero. To strike a balance between the total testing time and the efficiency of statistical inference, Ng et al. [4] introduced an adaptive Type-II progressive hybrid censoring scheme (ATII-PHCS). This censoring scheme is described as follows. Suppose that $n$ units are placed on test and $X_1, X_2, \ldots, X_n$ denote the corresponding lifetimes from a distribution with cumulative distribution function (CDF) $F(x)$ and probability density function (PDF) $f(x)$. The number of observed failures $m$ and the time $T$ are specified in advance, with $m < n$. At the first failure time $X_{1:m:n}$, $R_1$ units are randomly removed from the remaining $n-1$ units. Similarly, at the second failure time $X_{2:m:n}$, $R_2$ units are randomly removed from the remaining $n-2-R_1$ units, and so on. If the $m$th failure occurs before time $T$ (i.e., $X_{m:m:n} < T$), the test terminates at time $X_{m:m:n}$ and all remaining $R_m$ units are removed, where $R_m = n - m - \sum_{i=1}^{m-1} R_i$ and $R_i$ is specified in advance ($i = 1, 2, \ldots, m$). If the $J$th failure occurs before time $T$ (i.e., $X_{J:m:n} < T < X_{J+1:m:n}$, where $J + 1 < m$), then we do not withdraw any units from the test by setting $R_{J+1} = R_{J+2} = \cdots = R_{m-1} = 0$, and the test continues until the number of failed units reaches the prefixed number $m$. At the time of the $m$th failure, all remaining $R_m$ units are removed and the test terminates, where $R_m = n - m - \sum_{i=1}^{J} R_i$.
The main advantage of ATII-PHCS is that it speeds up the test when the test duration exceeds the predetermined time $T$ while still ensuring the effective number of failures $m$. It also illustrates how an experimenter can control the experiment: if one is interested in getting observations early, one will remove fewer units (or even none). For convenience, we let $X_i = X_{i:m:n}$, $i = 1, 2, \ldots, m$. After the above test, we obtain one of the following two cases of observed data:
  • Case I: $(X_1, R_1), (X_2, R_2), \ldots, (X_m, R_m)$ if $X_m < T$, where $R_m = n - m - \sum_{i=1}^{m-1} R_i$.
  • Case II: $(X_1, R_1), (X_2, R_2), \ldots, (X_J, R_J), (X_{J+1}, 0), \ldots, (X_{m-1}, 0), (X_m, R_m)$ if $X_J < T < X_{J+1}$ and $J < m$, where $R_m = n - m - \sum_{i=1}^{J} R_i$.
The ATII-PHCS has been studied in recent years. Nassar et al. [5] discussed the statistical analysis of the Weibull distribution under an adaptive Type-II progressive hybrid censoring scheme. Zhang et al. [6] investigated the maximum likelihood estimates (MLEs) of the unknown parameters and acceleration factors in the step-stress accelerated life test, based on the tampered failure rate model with ATII-PHC samples. Cui et al. [7] studied the point and interval estimates of the parameters from the Weibull distribution, based on adaptive Type-II progressive hybrid censored data in a constant-stress accelerated life test. Ismail [8] derived the MLEs of the Weibull distribution parameters and the acceleration factor based on ATII-PHC schemes under a step-stress partially accelerated life test model. The statistical inference of the dependent competitive failure system under the constant-stress accelerated life test with ATII-PHC data was studied by Zhang et al. [9]. Under an adaptive Type-II progressive censoring scheme, Ye et al. [10] investigated the general statistical properties and then used the maximum likelihood technique to estimate the parameters of the extreme value distribution. Some other studies on the statistical inference of life models using ATII-PHCS were presented by Sobhi and Soliman [11] and Nassar et al. [12]. Xu and Gui [13] studied entropy estimation for the two-parameter inverse Weibull distribution under adaptive Type-II progressive hybrid censoring schemes.
Entropy measures the uncertainty associated with a random variable. Let $X$ be a random variable having a continuous CDF $F(x)$ and PDF $f(x)$. Then, the Shannon entropy is defined as
$$H(f) = -\int_{-\infty}^{+\infty} f(x)\ln f(x)\,dx.$$
In recent years, several scholars have studied the entropy estimation of different life distributions. Kang et al. [14] investigated the entropy estimators of a double exponential distribution based on multiply Type-II censored samples. Cho et al. [15] derived estimators for the entropy function of a Rayleigh distribution based on doubly generalized Type-II hybrid censored samples. Baratpour et al. [16] studied the entropy of upper record values and provided several upper and lower bounds for this entropy by using the hazard rate function. Cramer and Bagh [17] discussed the entropy of the Weibull distribution under progressive censoring. Cho et al. [18] obtained estimators for the entropy function of the Weibull distribution based on a generalized Type-II hybrid censored sample. Yu et al. [19] studied statistical inference on the Shannon entropy of the inverse Weibull distribution under progressive first-failure censoring.
In addition to the above-mentioned life distributions, the generalized Bilal (GB) distribution is also an important life distribution for analyzing lifetime data. The PDF and the CDF of the GB distribution, respectively, are given as
$$f(x; \beta, \lambda) = 6\beta\lambda x^{\lambda-1}\exp(-2\beta x^\lambda)\left[1 - \exp(-\beta x^\lambda)\right], \quad x > 0,\ \beta > 0,\ \lambda > 0,$$
$$F(x; \beta, \lambda) = 1 - \exp(-2\beta x^\lambda)\left[3 - 2\exp(-\beta x^\lambda)\right], \quad x > 0,\ \beta > 0,\ \lambda > 0.$$
The Shannon entropy of the GB distribution is given by
$$H(f) = H(\beta, \lambda) = 2.5 + \gamma - \ln(27/4) - \ln\left(\lambda \beta^{1/\lambda}\right) + \frac{1}{\lambda}\left(\ln(9/8) - \gamma\right), \quad \beta > 0,\ \lambda > 0,$$
where $\gamma$ denotes the Euler–Mascheroni constant, $\gamma \approx 0.5772$.
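The closed-form expression above can be checked numerically. The following is a minimal sketch (assuming NumPy and SciPy are available; all function names are illustrative) that compares the formula against direct numerical integration of $-\int f \ln f$:

```python
import numpy as np
from scipy.integrate import quad

GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gb_pdf(x, beta, lam):
    # PDF of the generalized Bilal distribution.
    u = beta * x ** lam
    return 6.0 * beta * lam * x ** (lam - 1) * np.exp(-2.0 * u) * (1.0 - np.exp(-u))

def gb_entropy(beta, lam):
    # Closed-form Shannon entropy H(beta, lambda) from the text above.
    return (2.5 + GAMMA - np.log(27.0 / 4.0) - np.log(lam)
            - np.log(beta) / lam + (np.log(9.0 / 8.0) - GAMMA) / lam)

def integrand(x, beta, lam):
    # -f ln f, with the 0*log(0) limit handled explicitly.
    p = gb_pdf(x, beta, lam)
    return 0.0 if p <= 0.0 else -p * np.log(p)

beta, lam = 1.0, 2.0
numeric, _ = quad(integrand, 0.0, np.inf, args=(beta, lam))
print(gb_entropy(beta, lam), numeric)  # both approximately 0.2448
```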
The GB distribution was first introduced by Abd-Elrahman [20]. He investigated the properties of the probability density and failure rate functions of this distribution, provided a comprehensive mathematical treatment of the GB distribution, and derived the maximum likelihood estimates of the unknown parameters under the complete sample. Abd-Elrahman [21] provided the MLEs and Bayesian estimates of the unknown parameters and the reliability function based on a Type-II censored sample. Since the failure rate function of the GB distribution has an upside-down bathtub shape, and can also be monotonically decreasing or monotonically increasing at some selected values of the shape parameter $\lambda$, the GB model is very useful in survival analysis and reliability studies.
To the best of our knowledge, there has been no published work on the estimation of the entropy and parameters of the GB distribution under an ATII-PHCS. As such, these issues are considered in this paper. The main objective of this paper is to provide the estimation of the entropy and unknown parameters of the GB distribution under an ATII-PHCS by using frequentist and Bayesian methods.
The rest of this paper is organized as follows. In Section 2, the MLEs of the parameters and entropy of GB distribution are obtained, and approximate confidence intervals are constructed using the ATII-PHC data. In Section 3, the Bayesian estimation of the parameters and entropy under three different loss functions are provided using Lindley’s approximation method. In addition, the Bayesian credible intervals of the parameters and entropy are also obtained by using the Markov chain Monte Carlo (MCMC) method. In Section 4, Monte Carlo simulations are carried out to investigate the performance of different point estimates and interval estimates. In Section 5, a real data set is analyzed for illustrative purposes. Some conclusions are presented in Section 6.

2. Maximum Likelihood Estimation

In this section, the MLEs and approximate confidence intervals of the parameters and entropy of the GB distribution will be discussed under the ATII-PHCS. Based on the data in Case I and Case II, the likelihood functions can be respectively written as
$$\text{Case I:}\quad L_I(\beta, \lambda \mid x) \propto \prod_{i=1}^m f(x_i; \beta, \lambda)\left[1 - F(x_i; \beta, \lambda)\right]^{R_i},$$
$$\text{Case II:}\quad L_{II}(\beta, \lambda \mid x) \propto \prod_{i=1}^m f(x_i; \beta, \lambda)\prod_{i=1}^J\left[1 - F(x_i; \beta, \lambda)\right]^{R_i}\left[1 - F(x_m; \beta, \lambda)\right]^{n - m - \sum_{i=1}^J R_i},$$
where $x = (x_1, x_2, \ldots, x_m)$.
By combining $L_I(\beta, \lambda \mid x)$ and $L_{II}(\beta, \lambda \mid x)$, the likelihood functions can be written uniformly as
$$L(\beta, \lambda \mid x) \propto \prod_{i=1}^m f(x_i; \beta, \lambda)\prod_{i=1}^D\left[1 - F(x_i; \beta, \lambda)\right]^{R_i}\left[1 - F(x_m; \beta, \lambda)\right]^{R^*} = \prod_{i=1}^m 6\beta\lambda x_i^{\lambda-1} e^{-2\beta x_i^\lambda}\left[1 - e^{-\beta x_i^\lambda}\right]\prod_{i=1}^D\left[e^{-2\beta x_i^\lambda}\left(3 - 2e^{-\beta x_i^\lambda}\right)\right]^{R_i} \times \left[e^{-2\beta x_m^\lambda}\left(3 - 2e^{-\beta x_m^\lambda}\right)\right]^{R^*},$$
where $R^* = n - m - \sum_{i=1}^D R_i$; for Case I, $D = m$ and $R^* = 0$, and for Case II, $D = J$ and $R^* = n - m - \sum_{i=1}^J R_i$.
The log-likelihood function is given by
$$l = \ln L(\beta, \lambda \mid x) = m\ln(6\beta\lambda) + \sum_{i=1}^m\left[(\lambda - 1)\ln x_i - 2\beta x_i^\lambda + \ln\left(1 - e^{-\beta x_i^\lambda}\right)\right] + \sum_{i=1}^D\left[-2R_i\beta x_i^\lambda + R_i\ln\left(3 - 2e^{-\beta x_i^\lambda}\right)\right] - 2R^*\beta x_m^\lambda + R^*\ln\left(3 - 2e^{-\beta x_m^\lambda}\right).$$
By taking the first partial derivatives of the log-likelihood function with respect to $\beta$ and $\lambda$ and equating them to zero, the following equations are obtained:
$$\frac{\partial l}{\partial \beta} = \frac{m}{\beta} + \sum_{i=1}^m\left[-3x_i^\lambda + x_i^\lambda\left(y_1(\theta)\right)^{-1}\right] + \sum_{i=1}^D\left[-3R_i x_i^\lambda + 3R_i x_i^\lambda\left(y_2(\theta)\right)^{-1}\right] - 3R^* x_m^\lambda + 3R^* x_m^\lambda\left(y_3(\theta)\right)^{-1} = 0,$$
$$\frac{\partial l}{\partial \lambda} = \frac{m}{\lambda} + \sum_{i=1}^m\left[\ln x_i - 3\beta x_i^\lambda\ln x_i + \beta x_i^\lambda\ln x_i\left(y_1(\theta)\right)^{-1}\right] + \sum_{i=1}^D\left[-3R_i\beta x_i^\lambda\ln x_i + 3R_i\beta x_i^\lambda\ln x_i\left(y_2(\theta)\right)^{-1}\right] - 3R^*\beta x_m^\lambda\ln x_m + 3R^*\beta x_m^\lambda\ln x_m\left(y_3(\theta)\right)^{-1} = 0,$$
where $\theta = (\beta, \lambda)$, $y_1(\theta) = 1 - e^{-\beta x_i^\lambda}$, $y_2(\theta) = 3 - 2e^{-\beta x_i^\lambda}$, and $y_3(\theta) = 3 - 2e^{-\beta x_m^\lambda}$.
The MLEs of $\beta$ and $\lambda$ can be obtained by solving Equations (7) and (8), but the above two equations do not yield an analytical solution. Thus, we use the Newton–Raphson iteration method to obtain the MLEs of the parameters. For this purpose, we first calculate the second partial derivatives of the log-likelihood function with respect to $\beta$ and $\lambda$:
$$\frac{\partial^2 l}{\partial \beta^2} = -\frac{m}{\beta^2} - \sum_{i=1}^m x_i^{2\lambda} e^{-\beta x_i^\lambda}\left(y_1(\theta)\right)^{-2} - \sum_{i=1}^D 6R_i x_i^{2\lambda} e^{-\beta x_i^\lambda}\left(y_2(\theta)\right)^{-2} - 6R^* x_m^{2\lambda} e^{-\beta x_m^\lambda}\left(y_3(\theta)\right)^{-2},$$
$$\frac{\partial^2 l}{\partial \beta \partial \lambda} = \sum_{i=1}^m\left\{-3x_i^\lambda\ln x_i + x_i^\lambda\ln x_i\left(y_1(\theta)\right)^{-1}\left[1 - \beta x_i^\lambda e^{-\beta x_i^\lambda}\left(y_1(\theta)\right)^{-1}\right]\right\} + \sum_{i=1}^D\left\{-3R_i x_i^\lambda\ln x_i + 3R_i x_i^\lambda\ln x_i\left(y_2(\theta)\right)^{-1}\left[1 - 2\beta x_i^\lambda e^{-\beta x_i^\lambda}\left(y_2(\theta)\right)^{-1}\right]\right\} - 3R^* x_m^\lambda\ln x_m + 3R^* x_m^\lambda\ln x_m\left(y_3(\theta)\right)^{-1}\left[1 - 2\beta x_m^\lambda e^{-\beta x_m^\lambda}\left(y_3(\theta)\right)^{-1}\right],$$
$$\frac{\partial^2 l}{\partial \lambda^2} = -\frac{m}{\lambda^2} + \sum_{i=1}^m\left\{\beta x_i^\lambda(\ln x_i)^2\left[-3 + \left(y_1(\theta)\right)^{-1}\right] - \beta^2 x_i^{2\lambda}(\ln x_i)^2 e^{-\beta x_i^\lambda}\left(y_1(\theta)\right)^{-2}\right\} + \sum_{i=1}^D\left\{-3R_i\beta x_i^\lambda(\ln x_i)^2\left[1 - \left(y_2(\theta)\right)^{-1}\right] - 6R_i\beta^2 x_i^{2\lambda}(\ln x_i)^2 e^{-\beta x_i^\lambda}\left(y_2(\theta)\right)^{-2}\right\} - 3R^*\beta x_m^\lambda(\ln x_m)^2\left[1 - \left(y_3(\theta)\right)^{-1}\right] - 6R^*\beta^2 x_m^{2\lambda}(\ln x_m)^2 e^{-\beta x_m^\lambda}\left(y_3(\theta)\right)^{-2}.$$
Let $I(\beta, \lambda) = \begin{pmatrix} I_{11} & I_{12} \\ I_{21} & I_{22} \end{pmatrix}$, where
$$I_{11} = -\frac{\partial^2 l}{\partial \beta^2}, \quad I_{22} = -\frac{\partial^2 l}{\partial \lambda^2}, \quad I_{12} = I_{21} = -\frac{\partial^2 l}{\partial \beta \partial \lambda}.$$
On the basis of the above calculation results, we can implement the Newton–Raphson iteration method to obtain the MLEs of the unknown parameters; the specific steps of this iteration method are given in Appendix B. After obtaining the MLEs $\hat\beta$ and $\hat\lambda$ of the parameters $\beta$ and $\lambda$, by the invariance property of MLEs, the MLE of the entropy $H(f)$ of the generalized Bilal distribution is given by
$$\hat H(f) = 2.5 + \gamma - \ln(27/4) - \frac{1}{\hat\lambda}\ln\hat\beta - \ln\hat\lambda + \frac{1}{\hat\lambda}\left(\ln(9/8) - \gamma\right).$$
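As a practical alternative to hand-coding the iteration, the unified log-likelihood can be maximized numerically. The sketch below (assuming NumPy and SciPy; the function names are illustrative, and a derivative-free optimizer is used in place of the Newton–Raphson steps of Appendix B) returns the MLEs together with the plug-in entropy estimate:

```python
import numpy as np
from scipy.optimize import minimize

def gb_loglik(theta, x, R, D, R_star):
    # Unified ATII-PHC log-likelihood of the GB distribution.
    # x: ordered failure times x_1..x_m (array); R: removal numbers R_1..R_m;
    # D and R_star follow the Case I / Case II convention of the paper.
    beta, lam = theta
    if beta <= 0 or lam <= 0:
        return -np.inf
    x, R = np.asarray(x, dtype=float), np.asarray(R)
    m = len(x)
    u = beta * x ** lam
    um = u[-1]
    ll = m * np.log(6 * beta * lam)
    ll += np.sum((lam - 1) * np.log(x) - 2 * u + np.log1p(-np.exp(-u)))
    ll += np.sum(-2 * R[:D] * u[:D] + R[:D] * np.log(3 - 2 * np.exp(-u[:D])))
    ll += -2 * R_star * um + R_star * np.log(3 - 2 * np.exp(-um))
    return ll

def gb_mle(x, R, D, R_star, start=(0.9, 1.9)):
    # Maximize the log-likelihood, then plug the MLEs into H(beta, lambda).
    res = minimize(lambda t: -gb_loglik(t, x, R, D, R_star), start,
                   method="Nelder-Mead")
    beta_hat, lam_hat = res.x
    gamma = 0.5772156649
    H_hat = (2.5 + gamma - np.log(27 / 4) - np.log(lam_hat)
             - np.log(beta_hat) / lam_hat + (np.log(9 / 8) - gamma) / lam_hat)
    return beta_hat, lam_hat, H_hat
```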

Approximate Confidence Interval

In this subsection, the approximate confidence intervals of the parameters $\beta$, $\lambda$ and the Shannon entropy $H(f)$ are derived. Under the usual regularity conditions, the MLEs $(\hat\beta, \hat\lambda)$ approximately follow a bivariate normal distribution $N\left((\beta, \lambda),\, I^{-1}(\hat\beta, \hat\lambda)\right)$, where the covariance matrix $I^{-1}(\hat\beta, \hat\lambda)$ is an estimate of $I^{-1}(\beta, \lambda)$, $I^{-1}(\hat\beta, \hat\lambda) = \begin{pmatrix} I_{11} & I_{12} \\ I_{21} & I_{22} \end{pmatrix}^{-1}\Big|_{(\beta, \lambda) = (\hat\beta, \hat\lambda)}$, and $I_{11}$, $I_{22}$, $I_{12}$ and $I_{21}$ are given by Equations (10)–(13), respectively.
Thus, the approximate $100(1-\alpha)\%$ two-sided confidence intervals (CIs) for the parameters $\beta$, $\lambda$ are given by
$$\left(\hat\beta \pm z_{\alpha/2}\sqrt{Var(\hat\beta)}\right), \quad \left(\hat\lambda \pm z_{\alpha/2}\sqrt{Var(\hat\lambda)}\right),$$
where $z_{\alpha/2}$ is the upper $\alpha/2$ percentile of the standard normal distribution and $Var(\hat\beta)$, $Var(\hat\lambda)$ are the main diagonal elements of the matrix $I^{-1}(\hat\beta, \hat\lambda)$.
Next, we use the delta method to obtain the asymptotic confidence interval of the entropy H   ( f ) . The delta method is a general approach to compute CIs for functions of MLEs. Under a progressive Type-II censored sample, the authors of [22] used the delta method to study the estimation of a new Weibull–Pareto distribution. The authors of [23] also used this method to investigate the estimation of the two-parameter bathtub lifetime model.
Let $M^T = \left(\frac{\partial H(f)}{\partial \beta}, \frac{\partial H(f)}{\partial \lambda}\right)$, where $\frac{\partial H(f)}{\partial \beta} = -\frac{1}{\beta\lambda}$ and $\frac{\partial H(f)}{\partial \lambda} = \frac{1}{\lambda^2}\ln\beta - \frac{1}{\lambda} - \frac{1}{\lambda^2}\left(\ln\frac{9}{8} - \gamma\right)$.
Then, the approximate estimate of $var(\hat H(f))$ is given by
$$\widehat{var}(\hat H(f)) = \left[M^T I^{-1}(\beta, \lambda) M\right]\Big|_{(\beta, \lambda) = (\hat\beta, \hat\lambda)},$$
where $\hat\beta$ and $\hat\lambda$ are the MLEs of $\beta$ and $\lambda$, respectively, and $I^{-1}(\beta, \lambda)$ denotes the inverse of the matrix $I(\beta, \lambda) = \begin{pmatrix} I_{11} & I_{12} \\ I_{21} & I_{22} \end{pmatrix}$, whose elements are given by Equations (10)–(13), respectively. Thus, $\frac{\hat H(f) - H(f)}{\sqrt{\widehat{var}(\hat H(f))}}$ is asymptotically distributed as $N(0, 1)$. The asymptotic $100(1-\alpha)\%$ CI for the entropy $H(f)$ is given by
$$\left(\hat H(f) \pm z_{\alpha/2}\sqrt{\widehat{var}(\hat H(f))}\right),$$
where $z_{\alpha/2}$ is the upper $\alpha/2$ percentile of the standard normal distribution.
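For illustration, the interval above can be assembled as in the following sketch (assuming NumPy and SciPy; `loglik` is any function returning the log-likelihood at `(beta, lam)`, such as the one sketched in Section 2, and the observed information is obtained here by finite differences rather than from the closed forms (10)–(13)):

```python
import numpy as np
from scipy.stats import norm

def observed_information(loglik, theta, eps=1e-5):
    # Negative Hessian of loglik at theta via central finite differences.
    theta = np.asarray(theta, dtype=float)
    k = len(theta)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = np.eye(k)[i] * eps, np.eye(k)[j] * eps
            H[i, j] = (loglik(theta + ei + ej) - loglik(theta + ei - ej)
                       - loglik(theta - ei + ej) + loglik(theta - ei - ej)) / (4 * eps**2)
    return -H

def entropy_ci(beta_hat, lam_hat, info, alpha=0.05):
    # Delta-method CI for H(f) at the MLEs; info is the 2x2 observed information.
    gamma = 0.5772156649
    H_hat = (2.5 + gamma - np.log(27 / 4) - np.log(lam_hat)
             - np.log(beta_hat) / lam_hat + (np.log(9 / 8) - gamma) / lam_hat)
    # Gradient M = (dH/dbeta, dH/dlambda) from the text above.
    M = np.array([-1 / (beta_hat * lam_hat),
                  np.log(beta_hat) / lam_hat**2 - 1 / lam_hat
                  - (np.log(9 / 8) - gamma) / lam_hat**2])
    var_H = M @ np.linalg.inv(info) @ M
    z = norm.ppf(1 - alpha / 2)
    return H_hat - z * np.sqrt(var_H), H_hat + z * np.sqrt(var_H)
```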

3. Bayesian Estimation

In this section, we discuss the Bayesian point estimation of the parameters and entropy $H(f)$ for the generalized Bilal distribution using Lindley’s approximation method under symmetric as well as asymmetric loss functions. Furthermore, the Bayesian credible intervals of the parameters and entropy are also derived by using the Markov chain Monte Carlo method.

3.1. Loss Functions and Posterior Distribution

Choosing the loss function is an important part of Bayesian inference. The commonly used symmetric loss function is the squared error loss (SEL) function, which is defined as
$$L_1(U, \hat U) = (\hat U - U)^2.$$
Two popular asymmetric loss functions are the Linex loss (LL) and general entropy loss (GEL) functions, which are respectively given by
$$L_2(U, \hat U) = \exp\left(h(\hat U - U)\right) - h(\hat U - U) - 1, \quad h \neq 0,$$
$$L_3(U, \hat U) \propto \left(\frac{\hat U}{U}\right)^q - q\ln\left(\frac{\hat U}{U}\right) - 1, \quad q \neq 0.$$
Here, $U = U(\beta, \lambda)$ is any function of $\beta$ and $\lambda$, and $\hat U$ is an estimate of $U$. The constants $h$ and $q$ represent the weight of errors in different decisions. Under the above loss functions, the Bayesian estimates of the function $U$ can be calculated by
$$\hat U_S = E(U \mid x),$$
$$\hat U_L = -\frac{1}{h}\ln\left[E\left(\exp(-hU) \mid x\right)\right], \quad h \neq 0,$$
$$\hat U_E = \left[E\left(U^{-q} \mid x\right)\right]^{-1/q}, \quad q \neq 0.$$
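When posterior draws of $U$ are available (e.g., from the MCMC sampler of Section 3.3), these three estimates reduce to simple Monte Carlo averages. A minimal sketch (assuming NumPy; names are illustrative):

```python
import numpy as np

def bayes_estimates(u_draws, h=1.0, q=1.0):
    # Bayes estimates of U under the SEL, LL and GEL functions,
    # computed from posterior draws of U.
    u = np.asarray(u_draws, dtype=float)
    u_sel = u.mean()                                  # squared error loss
    u_linex = -np.log(np.mean(np.exp(-h * u))) / h    # Linex loss
    u_gel = np.mean(u ** (-q)) ** (-1.0 / q)          # general entropy loss
    return u_sel, u_linex, u_gel
```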
To derive the Bayesian estimates of the function $U(\beta, \lambda)$, we consider prior distributions of the unknown parameters $\beta$ and $\lambda$ as independent Gamma distributions $Ga(a, b)$ and $Ga(c, d)$, respectively. Therefore, the joint prior distribution of $\beta$ and $\lambda$ becomes
$$\pi(\beta, \lambda) = \frac{b^a \beta^{a-1}}{\Gamma(a)}\exp(-b\beta)\cdot\frac{d^c \lambda^{c-1}}{\Gamma(c)}\exp(-d\lambda), \quad \beta, \lambda, a, b, c, d > 0.$$
Based on the likelihood function $L(\beta, \lambda \mid x)$ and the joint prior distribution of $\beta$ and $\lambda$, the joint posterior density of the parameters $\beta$ and $\lambda$ can be written as
$$\pi(\beta, \lambda \mid x) = \frac{\pi(\beta, \lambda) L(\beta, \lambda \mid x)}{\int_0^\infty\int_0^\infty \pi(\beta, \lambda) L(\beta, \lambda \mid x)\,d\beta\,d\lambda} \propto \pi(\beta, \lambda) L(\beta, \lambda \mid x) \propto \beta^{a-1} e^{-b\beta} \lambda^{c-1} e^{-d\lambda} A_1(\beta, \lambda) A_2(\beta, \lambda) A_3(\beta, \lambda),$$
where
$$A_1(\beta, \lambda) = \prod_{i=1}^m 6\beta\lambda x_i^{\lambda-1} e^{-2\beta x_i^\lambda}\left[1 - e^{-\beta x_i^\lambda}\right],$$
$$A_2(\beta, \lambda) = \prod_{i=1}^D\left[e^{-2\beta x_i^\lambda}\left(3 - 2e^{-\beta x_i^\lambda}\right)\right]^{R_i},$$
$$A_3(\beta, \lambda) = \left[e^{-2\beta x_m^\lambda}\left(3 - 2e^{-\beta x_m^\lambda}\right)\right]^{R^*}.$$
Therefore, the Bayesian estimates of $U(\beta, \lambda)$ under the SEL, LL and GEL functions are respectively given by
$$\hat U_S(\beta, \lambda) = \frac{\int_0^\infty\int_0^\infty U(\beta, \lambda)\pi(\beta, \lambda) L(\beta, \lambda \mid x)\,d\beta\,d\lambda}{\int_0^\infty\int_0^\infty \pi(\beta, \lambda) L(\beta, \lambda \mid x)\,d\beta\,d\lambda},$$
$$\hat U_L(\beta, \lambda) = -\frac{1}{h}\ln\left[\frac{\int_0^\infty\int_0^\infty \exp\left(-hU(\beta, \lambda)\right)\pi(\beta, \lambda) L(\beta, \lambda \mid x)\,d\beta\,d\lambda}{\int_0^\infty\int_0^\infty \pi(\beta, \lambda) L(\beta, \lambda \mid x)\,d\beta\,d\lambda}\right],$$
$$\hat U_E(\beta, \lambda) = \left[\frac{\int_0^\infty\int_0^\infty \left(U(\beta, \lambda)\right)^{-q}\pi(\beta, \lambda) L(\beta, \lambda \mid x)\,d\beta\,d\lambda}{\int_0^\infty\int_0^\infty \pi(\beta, \lambda) L(\beta, \lambda \mid x)\,d\beta\,d\lambda}\right]^{-\frac{1}{q}}.$$

3.2. Lindley’s Approximation

From Equations (23)–(25), it is observed that all of these estimates of $U(\beta, \lambda)$ are in the form of the ratio of two integrals, which cannot be reduced to a closed form. Therefore, we use Lindley’s approximation method to obtain the Bayesian estimates. If we let $\theta = (\theta_1, \theta_2)$, then the posterior expectation of a function $U(\theta_1, \theta_2)$ can be approximated as in [18]:
$$\hat U = U(\hat\theta_1, \hat\theta_2) + 0.5\left(A + z_{30}B_{12} + z_{03}B_{21} + z_{21}C_{12} + z_{12}C_{21}\right) + p_1 A_{12} + p_2 A_{21},$$
where $U(\hat\theta_1, \hat\theta_2)$ is the MLE of $U(\theta_1, \theta_2)$ and
$$A = \sum_{i=1}^2\sum_{j=1}^2 u_{ij}\tau_{ij}, \quad B_{ij} = (u_i\tau_{ii} + u_j\tau_{ij})\tau_{ii}, \quad C_{ij} = 3u_i\tau_{ii}\tau_{ij} + u_j\left(\tau_{ii}\tau_{jj} + 2\tau_{ij}^2\right),$$
$$p_i = \frac{\partial p}{\partial \theta_i}, \quad u_i = \frac{\partial U}{\partial \theta_i}, \quad u_{ij} = \frac{\partial^2 U}{\partial \theta_i \partial \theta_j}, \quad p = \ln\pi(\theta_1, \theta_2), \quad A_{ij} = u_i\tau_{ii} + u_j\tau_{ji}, \quad z_{ij} = \frac{\partial^{i+j} l(\theta_1, \theta_2)}{\partial \theta_1^i \partial \theta_2^j},\ i, j = 0, 1, 2, 3,\ i + j = 3,$$
where $l$ denotes the log-likelihood function and $\tau_{ij}$ denotes the $(i, j)$th element of the matrix $\left[-\partial^2 l / \partial\theta_i\partial\theta_j\right]^{-1}$. All terms are evaluated at the MLEs of the parameters $\theta_1$ and $\theta_2$.
Based on the above equations, we have
$$z_{30} = \frac{\partial^3 l}{\partial \beta^3} = \frac{2m}{\beta^3} + \sum_{i=1}^m\left\{x_i^{3\lambda} e^{-\beta x_i^\lambda}\left(y_1(\theta)\right)^{-2}\left[1 + 2\left(y_1(\theta)\right)^{-1} e^{-\beta x_i^\lambda}\right]\right\} + \sum_{i=1}^D\left\{6R_i x_i^{3\lambda} e^{-\beta x_i^\lambda}\left(y_2(\theta)\right)^{-2}\left[1 + 4e^{-\beta x_i^\lambda}\left(y_2(\theta)\right)^{-1}\right]\right\} + 6R^* x_m^{3\lambda} e^{-\beta x_m^\lambda}\left(y_3(\theta)\right)^{-2}\left[1 + 4e^{-\beta x_m^\lambda}\left(y_3(\theta)\right)^{-1}\right],$$
$$z_{03} = \frac{\partial^3 l}{\partial \lambda^3} = \frac{2m}{\lambda^3} + \sum_{i=1}^m\left\{\beta x_i^\lambda(\ln x_i)^3\left[-3 + \left(y_1(\theta)\right)^{-1}\right] - \beta^2 x_i^{2\lambda}(\ln x_i)^3 e^{-\beta x_i^\lambda}\left(y_1(\theta)\right)^{-2}\left[3 - \beta x_i^\lambda - 2\beta x_i^\lambda e^{-\beta x_i^\lambda}\left(y_1(\theta)\right)^{-1}\right]\right\} + \sum_{i=1}^D\left\{-3R_i\beta x_i^\lambda(\ln x_i)^3\left[1 - \left(y_2(\theta)\right)^{-1}\right] + 6R_i\beta^2 x_i^{2\lambda}(\ln x_i)^3 e^{-\beta x_i^\lambda}\left(y_2(\theta)\right)^{-2}\left[-3 + \beta x_i^\lambda + 4\beta x_i^\lambda\left(y_2(\theta)\right)^{-1} e^{-\beta x_i^\lambda}\right]\right\} - 3R^*\beta x_m^\lambda(\ln x_m)^3\left[1 - \left(y_3(\theta)\right)^{-1}\right] + 6R^*\beta^2 x_m^{2\lambda}(\ln x_m)^3 e^{-\beta x_m^\lambda}\left(y_3(\theta)\right)^{-2}\left[-3 + \beta x_m^\lambda + 4\beta x_m^\lambda\left(y_3(\theta)\right)^{-1} e^{-\beta x_m^\lambda}\right],$$
$$z_{21} = \frac{\partial^3 l}{\partial \beta^2 \partial \lambda} = -\sum_{i=1}^m x_i^{2\lambda}\ln x_i\, e^{-\beta x_i^\lambda}\left(y_1(\theta)\right)^{-2}\left[2 - \beta x_i^\lambda - 2\beta x_i^\lambda e^{-\beta x_i^\lambda}\left(y_1(\theta)\right)^{-1}\right] - \sum_{i=1}^D 6R_i x_i^{2\lambda}\ln x_i\, e^{-\beta x_i^\lambda}\left(y_2(\theta)\right)^{-2}\left[2 - \beta x_i^\lambda - 4\beta x_i^\lambda e^{-\beta x_i^\lambda}\left(y_2(\theta)\right)^{-1}\right] - 6R^* x_m^{2\lambda}\ln x_m\, e^{-\beta x_m^\lambda}\left(y_3(\theta)\right)^{-2}\left[2 - \beta x_m^\lambda - 4\beta x_m^\lambda\left(y_3(\theta)\right)^{-1} e^{-\beta x_m^\lambda}\right],$$
$$z_{12} = \frac{\partial^3 l}{\partial \beta \partial \lambda^2} = \sum_{i=1}^m\left\{-3x_i^\lambda(\ln x_i)^2 + x_i^\lambda(\ln x_i)^2\left(y_1(\theta)\right)^{-1} + \beta x_i^{2\lambda}(\ln x_i)^2 e^{-\beta x_i^\lambda}\left(y_1(\theta)\right)^{-2}\left[-3 + \beta x_i^\lambda + 2\beta x_i^\lambda e^{-\beta x_i^\lambda}\left(y_1(\theta)\right)^{-1}\right]\right\} + \sum_{i=1}^D\left\{-3R_i x_i^\lambda(\ln x_i)^2 + 3R_i x_i^\lambda(\ln x_i)^2\left(y_2(\theta)\right)^{-1} + 6R_i\beta x_i^{2\lambda}(\ln x_i)^2 e^{-\beta x_i^\lambda}\left(y_2(\theta)\right)^{-2}\left[-3 + \beta x_i^\lambda + 4\beta x_i^\lambda\left(y_2(\theta)\right)^{-1} e^{-\beta x_i^\lambda}\right]\right\} - 3R^* x_m^\lambda(\ln x_m)^2 + 3R^* x_m^\lambda(\ln x_m)^2\left(y_3(\theta)\right)^{-1} + 6R^*\beta x_m^{2\lambda}(\ln x_m)^2 e^{-\beta x_m^\lambda}\left(y_3(\theta)\right)^{-2}\left[-3 + \beta x_m^\lambda + 4\beta x_m^\lambda\left(y_3(\theta)\right)^{-1} e^{-\beta x_m^\lambda}\right].$$
$$p_1 = \frac{a - 1}{\beta} - b, \quad p_2 = \frac{c - 1}{\lambda} - d,$$
$$\tau_{11} = \frac{-z_{02}}{z_{20}z_{02} - z_{11}^2}, \quad \tau_{22} = \frac{-z_{20}}{z_{20}z_{02} - z_{11}^2}, \quad \tau_{12} = \tau_{21} = \frac{z_{11}}{z_{20}z_{02} - z_{11}^2},$$
$$z_{20} = \frac{\partial^2 l}{\partial \beta^2}, \quad z_{11} = \frac{\partial^2 l}{\partial \beta \partial \lambda}, \quad z_{02} = \frac{\partial^2 l}{\partial \lambda^2},$$
where $z_{20}$, $z_{11}$, $z_{02}$ are given by Equations (10)–(12), respectively.
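Before specializing to particular loss functions, it may help to see Equation (26) assembled from its ingredients. The following sketch (assuming NumPy; all inputs are evaluated at the MLEs, and the names are illustrative) mirrors the definitions above:

```python
import numpy as np

def lindley(U_hat, u, uij, tau, p, z30, z03, z21, z12):
    # Lindley's approximation (26). u = (u1, u2); uij = [[u11, u12], [u21, u22]];
    # tau = 2x2 inverse of the negative Hessian; p = (p1, p2).
    u1, u2 = u
    tau = np.asarray(tau, dtype=float)
    t11, t12, t21, t22 = tau[0, 0], tau[0, 1], tau[1, 0], tau[1, 1]
    A = np.sum(np.asarray(uij) * tau)                     # sum of u_ij * tau_ij
    B12 = (u1 * t11 + u2 * t12) * t11
    B21 = (u2 * t22 + u1 * t21) * t22
    C12 = 3 * u1 * t11 * t12 + u2 * (t11 * t22 + 2 * t12**2)
    C21 = 3 * u2 * t22 * t21 + u1 * (t22 * t11 + 2 * t21**2)
    A12 = u1 * t11 + u2 * t21
    A21 = u2 * t22 + u1 * t12
    return (U_hat + 0.5 * (A + z30 * B12 + z03 * B21 + z21 * C12 + z12 * C21)
            + p[0] * A12 + p[1] * A21)
```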
Based on Lindley’s approximation, we can derive the Bayesian estimation of the two parameters, β and λ , and the entropy under different loss functions.

3.2.1. Squared Error Loss Function

When $U(\beta, \lambda) = \beta$ or $\lambda$, the Bayesian estimates of the parameters $\beta$ and $\lambda$ under the SEL function are given by, respectively,
$$\hat\beta_S = \hat\beta + 0.5\left[\tau_{11}^2 z_{30} + \tau_{21}\tau_{22} z_{03} + 3\tau_{11}\tau_{12} z_{21} + \left(\tau_{11}\tau_{22} + 2\tau_{21}^2\right) z_{12}\right] + \tau_{11} p_1 + \tau_{12} p_2,$$
$$\hat\lambda_S = \hat\lambda + 0.5\left[\tau_{11}\tau_{12} z_{30} + \tau_{22}^2 z_{03} + 3\tau_{22}\tau_{21} z_{12} + \left(\tau_{11}\tau_{22} + 2\tau_{21}^2\right) z_{21}\right] + \tau_{21} p_1 + \tau_{22} p_2,$$
where β ^ and λ ^ are the MLEs of the parameters β and λ , respectively.
Similarly, the Bayesian estimate of the entropy can be derived. We notice that
$$U(\beta, \lambda) = H(\beta, \lambda) = 2.5 + \gamma - \ln(27/4) - \ln\lambda - \frac{1}{\lambda}\ln\beta + \frac{1}{\lambda}\left(\ln(9/8) - \gamma\right),$$
$$u_1 = -\frac{1}{\beta\lambda}, \quad u_2 = -\frac{1}{\lambda} + \frac{1}{\lambda^2}\left(\ln\beta - \ln(9/8) + \gamma\right),$$
$$u_{11} = \frac{1}{\beta^2\lambda}, \quad u_{22} = \frac{1}{\lambda^2} - \frac{2}{\lambda^3}\left(\ln\beta - \ln(9/8) + \gamma\right), \quad u_{12} = u_{21} = \frac{1}{\beta\lambda^2}.$$
Thus, the Bayesian estimation of the entropy H   ( f ) under the SEL function is given by
$$\hat H_S(f) = \hat H(f) + 0.5\left[u_{11}\tau_{11} + 2u_{12}\tau_{12} + u_{22}\tau_{22} + z_{30}\left(u_1\tau_{11} + u_2\tau_{12}\right)\tau_{11} + z_{03}\left(u_2\tau_{22} + u_1\tau_{21}\right)\tau_{22} + z_{21}\left(3u_1\tau_{11}\tau_{12} + u_2\left(\tau_{11}\tau_{22} + 2\tau_{12}^2\right)\right) + z_{12}\left(3u_2\tau_{22}\tau_{21} + u_1\left(\tau_{11}\tau_{22} + 2\tau_{21}^2\right)\right)\right] + p_1\left(u_1\tau_{11} + u_2\tau_{21}\right) + p_2\left(u_2\tau_{22} + u_1\tau_{12}\right),$$
where H ^ ( f ) represents the maximum likelihood estimate of H   ( f ) .

3.2.2. Linex Loss Function

Based on Lindley’s approximation, the Bayesian estimates of the two parameters, $\beta$ and $\lambda$, and the entropy under the LL function are given by, respectively,
$$\hat\beta_L = -\frac{1}{h}\ln\left\{\exp(-h\hat\beta) + 0.5\left[u_{11}\tau_{11} + u_1\tau_{11}^2 z_{30} + u_1\tau_{21}\tau_{22} z_{03} + 3u_1\tau_{11}\tau_{12} z_{21} + u_1\left(\tau_{11}\tau_{22} + 2\tau_{21}^2\right) z_{12}\right] + u_1\tau_{11} p_1 + u_1\tau_{12} p_2\right\},$$
$$\hat\lambda_L = -\frac{1}{h}\ln\left\{\exp(-h\hat\lambda) + 0.5\left[u_{22}\tau_{22} + u_2\tau_{11}\tau_{12} z_{30} + u_2\tau_{22}^2 z_{03} + u_2\left(\tau_{11}\tau_{22} + 2\tau_{12}^2\right) z_{21} + 3u_2\tau_{22}\tau_{21} z_{12}\right] + u_2\tau_{21} p_1 + u_2\tau_{22} p_2\right\},$$
$$\hat H_L(f) = -\frac{1}{h}\ln\left\{\exp\left[-h\hat H(f)\right] + 0.5\left[u_{11}\tau_{11} + 2u_{12}\tau_{12} + u_{22}\tau_{22} + z_{30}\left(u_1\tau_{11} + u_2\tau_{12}\right)\tau_{11} + z_{03}\left(u_2\tau_{22} + u_1\tau_{21}\right)\tau_{22} + z_{21}\left(3u_1\tau_{11}\tau_{12} + u_2\left(\tau_{11}\tau_{22} + 2\tau_{12}^2\right)\right) + z_{12}\left(3u_2\tau_{22}\tau_{21} + u_1\left(\tau_{11}\tau_{22} + 2\tau_{21}^2\right)\right)\right] + p_1\left(u_1\tau_{11} + u_2\tau_{21}\right) + p_2\left(u_2\tau_{22} + u_1\tau_{12}\right)\right\}.$$
Here, β ^ and λ ^ are the MLEs of the parameters β and λ , and H ^ ( f ) represents the MLE of H   ( f ) . The detailed derivation of these Bayesian estimates is shown in Appendix C.

3.2.3. General Entropy Loss Function

Using Lindley’s approximation method, the Bayesian estimations of two parameters, β and λ , and the entropy under the GEL function can, respectively, be given by
$$\hat\beta_E = \left\{\hat\beta^{-q} + 0.5\left[u_{11}\tau_{11} + u_1\tau_{11}^2 z_{30} + u_1\tau_{21}\tau_{22} z_{03} + 3u_1\tau_{11}\tau_{12} z_{21} + u_1\left(\tau_{11}\tau_{22} + 2\tau_{21}^2\right) z_{12}\right] + u_1\tau_{11} p_1 + u_1\tau_{12} p_2\right\}^{-1/q},$$
$$\hat\lambda_E = \left\{\hat\lambda^{-q} + 0.5\left[u_{22}\tau_{22} + u_2\tau_{11}\tau_{12} z_{30} + u_2\tau_{22}^2 z_{03} + u_2\left(\tau_{11}\tau_{22} + 2\tau_{12}^2\right) z_{21} + 3u_2\tau_{22}\tau_{21} z_{12}\right] + u_2\tau_{21} p_1 + u_2\tau_{22} p_2\right\}^{-1/q},$$
$$\hat H_E(f) = \left\{\left[\hat H(f)\right]^{-q} + 0.5\left[u_{11}\tau_{11} + 2u_{12}\tau_{12} + u_{22}\tau_{22} + z_{30}\left(u_1\tau_{11} + u_2\tau_{12}\right)\tau_{11} + z_{03}\left(u_2\tau_{22} + u_1\tau_{21}\right)\tau_{22} + z_{21}\left(3u_1\tau_{11}\tau_{12} + u_2\left(\tau_{11}\tau_{22} + 2\tau_{12}^2\right)\right) + z_{12}\left(3u_2\tau_{22}\tau_{21} + u_1\left(\tau_{11}\tau_{22} + 2\tau_{21}^2\right)\right)\right] + p_1\left(u_1\tau_{11} + u_2\tau_{21}\right) + p_2\left(u_2\tau_{22} + u_1\tau_{12}\right)\right\}^{-1/q}.$$
Here, β ^ and λ ^ are the MLEs of the parameters β and λ , and H ^ ( f ) represents the MLE of H   ( f ) . The detailed derivation of these Bayesian estimates is shown in Appendix D.

3.3. Bayesian Credible Interval

In the previous subsection, we used Lindley’s approximation method to obtain the Bayesian point estimates of the parameters and entropy. However, this approximation method cannot determine the Bayesian CIs. Thus, the MCMC method is applied to obtain the Bayesian CIs for the parameters and entropy. The MCMC method is a useful technique for estimating complex Bayesian models. The Gibbs sampling and Metropolis–Hastings algorithms are the two most frequently applied MCMC methods, used in reliability analysis, statistical physics and machine learning, among other applications. Due to their practicality, they have gained considerable attention among researchers, and interesting results have been obtained. For example, Gilks and Wild [24] proposed adaptive rejection sampling to handle non-conjugacy in applications of Gibbs sampling. Koch [25] studied the Gibbs sampler by means of the sampling–importance resampling algorithm. Martino et al. [26] established a new approach, namely recycling the Gibbs sampler to improve efficiency without adding any extra computational cost. Panahi and Moradi [27] developed a hybrid strategy, combining the Metropolis–Hastings [28,29] algorithm with the Gibbs sampler to generate samples from the respective posterior arising from the inverted exponentiated Rayleigh distribution. In this paper, we adopt the method proposed in [27] to generate samples from the respective posterior arising from the GB distribution. From Equations (6) and (22), the joint posterior of the parameters $\beta$, $\lambda$ can be written as
$$\pi(\beta, \lambda \mid x) \propto \pi(\beta, \lambda) L(\beta, \lambda \mid x) \propto [V(\lambda)]^{m+a}\beta^{m+a-1}\exp\left[-\beta V(\lambda)\right]\prod_{i=1}^m\left[1 - e^{-\beta x_i^\lambda}\right] \times \frac{1}{[V(\lambda)]^{m+a}}\prod_{i=1}^D\left(3 - 2e^{-\beta x_i^\lambda}\right)^{R_i}\left(3 - 2e^{-\beta x_m^\lambda}\right)^{R^*}\lambda^{m+c-1}\exp(-d\lambda)\prod_{i=1}^m x_i^{\lambda-1}.$$
Here, $V(\lambda) = b + 2\sum_{i=1}^m x_i^\lambda + 2\sum_{i=1}^D R_i x_i^\lambda + 2R^* x_m^\lambda$. Therefore, we have
$$\pi(\beta, \lambda \mid x) \propto \pi_1(\beta \mid \lambda, x)\,\pi_2(\lambda \mid \beta, x),$$
where
$$\pi_1(\beta \mid \lambda, x) \propto [V(\lambda)]^{m+a}\beta^{m+a-1}\exp\left[-\beta V(\lambda)\right],$$
$$\pi_2(\lambda \mid \beta, x) \propto \frac{\lambda^{m+c-1}}{[V(\lambda)]^{m+a}}\exp(-d\lambda)\exp\left[-\beta\left(2\sum_{i=1}^m x_i^\lambda + 2\sum_{i=1}^D R_i x_i^\lambda + 2R^* x_m^\lambda\right)\right] \times \prod_{i=1}^m\left[1 - e^{-\beta x_i^\lambda}\right]\prod_{i=1}^D\left(3 - 2e^{-\beta x_i^\lambda}\right)^{R_i}\left(3 - 2e^{-\beta x_m^\lambda}\right)^{R^*}\prod_{i=1}^m x_i^{\lambda-1}.$$
It is observed that the conditional posterior density $\pi_1(\beta \mid \lambda, x)$ of $\beta$, given $\lambda$, is the PDF of the Gamma distribution $Gamma\left(m + a,\ b + 2\sum_{i=1}^m x_i^\lambda + 2\sum_{i=1}^D R_i x_i^\lambda + 2R^* x_m^\lambda\right)$. However, the conditional posterior density $\pi_2(\lambda \mid \beta, x)$ of $\lambda$, given $\beta$, cannot be reduced analytically to a known distribution. Therefore, we use the Metropolis–Hastings method with a normal proposal distribution to generate random numbers from Equation (37). We use the following algorithm (Algorithm 1), proposed in [27], to generate random numbers from Equation (34) and construct the Bayesian credible intervals of $\beta$, $\lambda$ and the entropy $H(f)$.
Algorithm 1 The MCMC method
Step 1: Choose the initial value $(\beta^{(0)}, \lambda^{(0)})$.
Step 2: At stage $i$, for the given $m$, $n$ and the ATII-PH censored data, generate $\beta^{(i)}$ from
$$Gamma\left(m + a,\ b + 2\sum_{i=1}^m x_i^\lambda + 2\sum_{i=1}^D R_i x_i^\lambda + 2R^* x_m^\lambda\right).$$
Step 3: Generate $\lambda^{(i)}$ from $\pi_2(\lambda^{(i-1)} \mid \beta^{(i)}, x)$ using the following steps.
Step 3-1: Generate $\lambda^*$ from $N\left(\lambda^{(i-1)}, \mathrm{var}(\lambda)\right)$.
Step 3-2: Generate $\omega$ from the uniform distribution $U(0, 1)$.
Step 3-3: Set $\lambda^{(i)} = \lambda^*$ if $\omega \le r$ and $\lambda^{(i)} = \lambda^{(i-1)}$ if $\omega > r$, where $r = \min\left\{1, \frac{\pi_2(\lambda^* \mid \beta^{(i)}, x)}{\pi_2(\lambda^{(i-1)} \mid \beta^{(i)}, x)}\right\}$.
Step 4: Set $i = i + 1$.
Step 5: By repeating Steps 2–4 $N$ times, we get $(\beta_1, \lambda_1), (\beta_2, \lambda_2), \ldots, (\beta_N, \lambda_N)$. Furthermore, we compute $H_1, H_2, \ldots, H_N$, where $H_i = H(\beta_i, \lambda_i)$, $i = 1, 2, \ldots, N$, and $H(\beta, \lambda)$ is the Shannon entropy of the GB distribution.
Rearrange $(\beta_1, \beta_2, \ldots, \beta_N)$, $(\lambda_1, \lambda_2, \ldots, \lambda_N)$ and $(H_1, H_2, \ldots, H_N)$ in ascending order into $(\beta_{(1)}, \beta_{(2)}, \ldots, \beta_{(N)})$, $(\lambda_{(1)}, \lambda_{(2)}, \ldots, \lambda_{(N)})$ and $(H_{(1)}, H_{(2)}, \ldots, H_{(N)})$, where $\beta_{(1)} < \beta_{(2)} < \cdots < \beta_{(N)}$, $\lambda_{(1)} < \lambda_{(2)} < \cdots < \lambda_{(N)}$ and $H_{(1)} < H_{(2)} < \cdots < H_{(N)}$.
Then, the $100(1-\alpha)\%$ Bayesian credible intervals of the two parameters $\beta$, $\lambda$ and the entropy are given by $\left(\beta_{(N\alpha/2)}, \beta_{(N(1-\alpha/2))}\right)$, $\left(\lambda_{(N\alpha/2)}, \lambda_{(N(1-\alpha/2))}\right)$ and $\left(H_{(N\alpha/2)}, H_{(N(1-\alpha/2))}\right)$.
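A minimal sketch of Algorithm 1 follows (assuming NumPy; log-densities are used for numerical stability, and all names are illustrative). Here `x` is the ATII-PHC sample, `R` the removal vector, and `D`, `R_star` follow the Case I/Case II convention; a burn-in should be discarded before computing estimates or credible intervals:

```python
import numpy as np

rng = np.random.default_rng(0)

def V(lam, x, R, D, R_star, b):
    # V(lambda) = b + 2*sum(x_i^lam) + 2*sum(R_i x_i^lam, i<=D) + 2*R_star*x_m^lam
    u = x ** lam
    return b + 2 * np.sum(u) + 2 * np.sum(R[:D] * u[:D]) + 2 * R_star * u[-1]

def log_pi2(lam, beta, x, R, D, R_star, a, b, c, d):
    # Log of the conditional pi_2(lambda | beta, x) above, up to a constant.
    if lam <= 0:
        return -np.inf
    m = len(x)
    u = beta * x ** lam
    lp = (m + c - 1) * np.log(lam) - (m + a) * np.log(V(lam, x, R, D, R_star, b))
    lp += -d * lam - 2 * np.sum(u) - 2 * np.sum(R[:D] * u[:D]) - 2 * R_star * u[-1]
    lp += np.sum(np.log1p(-np.exp(-u))) + np.sum(R[:D] * np.log(3 - 2 * np.exp(-u[:D])))
    lp += R_star * np.log(3 - 2 * np.exp(-u[-1])) + (lam - 1) * np.sum(np.log(x))
    return lp

def gibbs_mh(x, R, D, R_star, a, b, c, d, N=10000, lam0=1.9, prop_sd=0.2):
    x, R = np.asarray(x, dtype=float), np.asarray(R)
    m = len(x)
    beta_s, lam_s = np.empty(N), np.empty(N)
    lam = lam0
    for i in range(N):
        # Step 2: beta | lambda is Gamma(m + a, V(lambda)); numpy uses scale = 1/rate.
        beta = rng.gamma(m + a, 1.0 / V(lam, x, R, D, R_star, b))
        # Steps 3-1 to 3-3: Metropolis-Hastings update of lambda.
        lam_prop = rng.normal(lam, prop_sd)
        log_r = (log_pi2(lam_prop, beta, x, R, D, R_star, a, b, c, d)
                 - log_pi2(lam, beta, x, R, D, R_star, a, b, c, d))
        if np.log(rng.uniform()) <= log_r:
            lam = lam_prop
        beta_s[i], lam_s[i] = beta, lam
    return beta_s, lam_s
```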

4. Simulation Study

In this section, a Monte Carlo simulation study is carried out to observe the performance of the different estimators of the entropy, in terms of the mean squared errors (MSEs), for different values of n, m, T and censoring schemes. In addition, the average 95% asymptotic confidence intervals (ACIs) and Bayesian credible intervals (BCIs) of β, λ and the entropy, as well as the average interval lengths (ILs), are computed, and the performances are also compared. We consider the following three different progressive censoring schemes (CSs); a small helper for constructing the corresponding removal vectors is sketched after the list:
  • CS I: $R_m = n - m$, $R_i = 0$ for $i \neq m$;
  • CS II: $R_1 = n - m$, $R_i = 0$ for $i \neq 1$;
  • CS III: $R_{m/2} = n - m$ and $R_i = 0$ for $i \neq \frac{m}{2}$ if $m$ is even, or $R_{(m+1)/2} = n - m$ and $R_i = 0$ for $i \neq \frac{m+1}{2}$ if $m$ is odd.
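The helper below (illustrative; assuming NumPy) builds the removal vector $(R_1, \ldots, R_m)$ for the three schemes:

```python
import numpy as np

def censoring_scheme(n, m, scheme):
    # Build the removal vector (R_1, ..., R_m) for CS I, II or III.
    R = np.zeros(m, dtype=int)
    if scheme == "I":
        R[m - 1] = n - m              # all removals at the last failure
    elif scheme == "II":
        R[0] = n - m                  # all removals at the first failure
    else:
        # 1-based index m/2 (m even) or (m+1)/2 (m odd), converted to 0-based.
        mid = m // 2 - 1 if m % 2 == 0 else (m + 1) // 2 - 1
        R[mid] = n - m
    return R

print(censoring_scheme(40, 15, "III"))  # R_8 = 25 (1-based), all other R_i = 0
```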
Based on the following algorithm proposed by Balakrishnan and Sandhu [30] (Algorithm 2), we can generate an adaptive Type-II progressive hybrid censored sample from the GB distribution.
Algorithm 2. Generating an adaptive Type-II progressive hybrid censored sample from the GB distribution.
Step 1: Generate $m$ independent observations $Z_1, Z_2, \ldots, Z_m$, where $Z_i$ follows the uniform distribution $U(0, 1)$, $i = 1, 2, \ldots, m$.
Step 2: For the known censoring scheme $(R_1, R_2, \ldots, R_m)$, let $\xi_i = Z_i^{1/(i + R_m + R_{m-1} + \cdots + R_{m-i+1})}$, $i = 1, 2, \ldots, m$.
Step 3: By setting $U_i = 1 - \xi_m \xi_{m-1} \cdots \xi_{m-i+1}$, then $U_1, U_2, \ldots, U_m$ is a Type-II progressive censored sample from the uniform distribution $U(0, 1)$.
Step 4: Using the inverse transformation $X_{i:m:n} = F^{-1}(U_i)$, $i = 1, 2, \ldots, m$, we obtain a Type-II progressive censored sample from the GB distribution; that is, $X_{1:m:n}, X_{2:m:n}, \ldots, X_{m:m:n}$, where $F^{-1}(\cdot)$ denotes the inverse CDF of the GB distribution with parameters $(\beta, \lambda)$. Theorem 1 below gives the uniqueness of the solution of the equation $X_{i:m:n} = F^{-1}(U_i)$, $i = 1, 2, \ldots, m$.
Step 5: If there exists a number $J$ satisfying $X_{J:m:n} < T \le X_{J+1:m:n}$, then set the index $J$ and record $X_{1:m:n}, X_{2:m:n}, \ldots, X_{J+1:m:n}$.
Step 6: Generate the first $m - J - 1$ order statistics $X_{J+2:m:n}, X_{J+3:m:n}, \ldots, X_{m:m:n}$ from the truncated distribution $f(x; \beta, \lambda)/\left[1 - F(x_{J+1}; \beta, \lambda)\right]$ with a sample size $n - J - 1 - \sum_{i=1}^J R_i$.
Theorem 1.
The equation $X_{i:m:n} = F^{-1}(U_i)$ has a unique solution, $i = 1, 2, \ldots, m$.
Proof. 
See Appendix A. □
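A sketch of Algorithm 2 is given below (assuming NumPy and SciPy; all helper names are illustrative). The inverse CDF is computed via the unique root guaranteed by Theorem 1, and the function returns the sample together with $D$ and $R^*$ in the Case I/Case II convention:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

def gb_cdf(x, beta, lam):
    u = beta * x ** lam
    return 1 - np.exp(-2 * u) * (3 - 2 * np.exp(-u))

def gb_ppf(p, beta, lam):
    # F^{-1}(p): solve 3y^2 - 2y^3 = 1 - p for y = exp(-beta x^lam) in (0, 1);
    # the root is unique by Theorem 1.
    y = brentq(lambda y: 3 * y**2 - 2 * y**3 - (1 - p), 1e-12, 1 - 1e-12)
    return (-np.log(y) / beta) ** (1 / lam)

def atii_phc_sample(n, m, T, R, beta, lam):
    R = np.asarray(R)
    # Steps 1-4: progressive Type-II sample via the Balakrishnan-Sandhu method.
    Z = rng.uniform(size=m)
    xi = Z ** (1 / (np.arange(1, m + 1) + np.cumsum(R[::-1])))
    U = 1 - np.cumprod(xi[::-1])      # U_i = 1 - xi_m * xi_{m-1} * ... * xi_{m-i+1}
    X = np.array([gb_ppf(u, beta, lam) for u in U])
    if X[m - 1] < T:                  # Case I: D = m, R* = 0
        return X, m, 0
    J = int(np.searchsorted(X, T))    # X_J < T <= X_{J+1} (1-based J)
    # Steps 5-6: regenerate the failures after T from the left-truncated GB law.
    n_rem = n - (J + 1) - int(np.sum(R[:J]))
    p_trunc = gb_cdf(X[J], beta, lam)
    U_new = p_trunc + (1 - p_trunc) * rng.uniform(size=n_rem)
    X_new = np.sort([gb_ppf(u, beta, lam) for u in U_new])[: m - J - 1]
    X = np.concatenate([X[: J + 1], X_new])
    R_star = n - m - int(np.sum(R[:J]))
    return X, J, R_star               # Case II: D = J
```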
In the simulation study, we took the values of the parameters of the GB distribution as $\beta = 1$ and $\lambda = 2$; in this case, $H(f) = 0.2448$. The hyperparameter values of the prior distribution were taken as $a = 1$, $b = 3$, $c = 2$, $d = 3$. For the Linex loss function and general entropy loss function, we set $h = 1.0, -1.0$ and $q = 1.0, -1.0$, respectively. In the Newton iterative algorithm and the MCMC sampling algorithm, we chose the initial values of $\beta$ and $\lambda$ as $\beta^{(0)} = 0.9$ and $\lambda^{(0)} = 1.9$; the threshold value $\varepsilon$ was taken as $10^{-6}$. For different sample sizes $n$, effective sample sizes $m$ and times $T$, we used 3000 simulated samples in each case. The average values and mean squared errors (MSEs) of the MLEs and Bayesian estimates (BEs) of $\beta$, $\lambda$ and the entropy were calculated. These results are reported in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6.
From Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6, the following observations can be made:
  • For the fixed m and T values, the MSEs of the MLEs and Bayesian estimations of the two parameters and the entropy decreased when n increased. As such, we tended to get better estimation results with an increase in the test sample size;
  • For the fixed n and m values, when T increased, the MSEs of the MLEs and Bayesian estimations of the two parameters and the entropy did not show any specific trend. This could be due to the fact that the number of observed failures was preplanned, and no additional failures were observed when T increased;
  • In most cases, the MSEs of the Bayesian estimations under a squared error loss function were smaller than those of the MLEs. There was no significant difference in the MSEs between the Linex loss and general entropy loss functions;
  • For fixed values of n, m and T, Scheme II produced smaller MSEs than Schemes I and III.
To further demonstrate the conclusions, the MSEs are plotted when the sample size increases under different censoring schemes. The trends are shown in Figure 1 (values come from Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6).
Furthermore, the average 95% ACIs and BCIs of β , λ and the entropy, as well as the average lengths (ALs) and coverage probabilities of the confidence intervals, were computed. These results are displayed in Table A1, Table A2, Table A3 and Table A4 (See Appendix E).
From Table A1, Table A2, Table A3 and Table A4, the following can be observed:
  • The coverage probabilities of the approximate confidence intervals and Bayesian credible intervals increased as n increased while m and T remained fixed;
  • For fixed values of n and m, when T increased, we did not observe any specific trend in the coverage probability of the approximate confidence intervals and Bayesian credible intervals;
  • For fixed values of m and T, the average lengths of the approximate confidence intervals and Bayesian credible intervals narrowed as n increased;
  • The average length of the Bayesian credible intervals was smaller than that of the asymptotic confidence intervals in most cases;
  • For fixed values of n and m, when T increased, we did not observe any specific trend in the average length of the confidence intervals;
  • For fixed values of n, m and T, Scheme II yielded shorter credible intervals than Schemes I and III;
  • For fixed values of n, m and T, the coverage probabilities of the approximate confidence intervals and Bayesian credible intervals under Scheme II were bigger than those under Schemes I and III.

5. Real Data Analysis

In this subsection, a real data set is considered to illustrate the use of the inference procedures discussed in this paper. This data set consists of 30 successive values of March precipitation (in inches) in Minneapolis–Saint Paul, reported by Hinkley [31]. The data points are as follows: 0.32, 0.47, 0.52, 0.59, 0.77, 0.81, 0.81, 0.9, 0.96, 1.18, 1.20, 1.20, 1.31, 1.35, 1.43, 1.51, 1.62, 1.74, 1.87, 1.89, 1.95, 2.05, 2.10, 2.20, 2.48, 2.81, 3.0, 3.09, 3.37 and 4.75.
These data were used by Barreto-Souza and Cribari-Neto [32] for fitting the generalized exponential-Poisson (GEP) distribution and by Abd-Elrahman [20] for fitting the Bilal and GB distributions. In the complete sample case, the MLEs of β and λ were 0.4168 and 1.2486, respectively. In this case, we calculated the maximum likelihood estimate of the entropy as H(f) = 1.2786. For the above data set, Abd-Elrahman [20] pointed out that the negative log-likelihood, the Kolmogorov–Smirnov (K–S) test statistic and its corresponding p value related to these MLEs were 38.1763, 0.0532 and 1.0, respectively. Based on the value of p, it is clear that the GB distribution fits the data very well. Using the above data set, we generated adaptive Type-II progressive hybrid censored samples with an effective failure number m = 20.
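The complete-sample fit reported above can be reproduced in a few lines. A sketch (assuming NumPy and SciPy; the optimizer and starting point are illustrative) that should give MLEs close to $\hat\beta = 0.4168$ and $\hat\lambda = 1.2486$, with a negative log-likelihood of about 38.1763:

```python
import numpy as np
from scipy.optimize import minimize

data = np.array([0.32, 0.47, 0.52, 0.59, 0.77, 0.81, 0.81, 0.90, 0.96, 1.18,
                 1.20, 1.20, 1.31, 1.35, 1.43, 1.51, 1.62, 1.74, 1.87, 1.89,
                 1.95, 2.05, 2.10, 2.20, 2.48, 2.81, 3.00, 3.09, 3.37, 4.75])

def neg_loglik(theta):
    # Complete-sample negative log-likelihood of the GB distribution.
    beta, lam = theta
    if beta <= 0 or lam <= 0:
        return np.inf
    u = beta * data ** lam
    return -np.sum(np.log(6 * beta * lam) + (lam - 1) * np.log(data)
                   - 2 * u + np.log1p(-np.exp(-u)))

res = minimize(neg_loglik, x0=(0.5, 1.0), method="Nelder-Mead")
print(res.x, res.fun)  # MLEs (beta, lambda) and the minimized -ln L
```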
When we took $T = 4.0$ and $R_1 = R_2 = \cdots = R_5 = 1$, $R_6 = R_7 = \cdots = R_{15} = 0$, $R_{16} = R_{17} = \cdots = R_{20} = 1$, the obtained data in Case I were as follows:
Case I: 0.32, 0.52, 0.77, 0.81, 0.96, 1.18, 1.20, 1.31, 1.35, 1.43, 1.51, 1.62, 1.74, 1.87, 1.89, 1.95, 2.10, 2.48, 2.81 and 3.37.
When we took $T = 2.0$, $R_1 = 1$, $R_2 = R_3 = \cdots = R_8 = 0$, $R_9 = R_{10} = \cdots = R_{15} = 1$, $R_{16} = R_{17} = \cdots = R_{19} = 0$ and $R_{20} = 2$, the obtained data in Case II were as follows:
Case II: 0.32, 0.47, 0.52, 0.59, 0.77, 0.81, 0.9, 0.96, 1.18, 1.20, 1.35, 1.43, 1.74, 1.87, 1.95, 2.10, 2.20, 2.48, 2.81 and 3.09.
Based on the above data, the maximum likelihood and Bayesian estimates of the entropy and the two parameters could be calculated. For the Bayesian estimation, since we had no prior information about the unknown parameters, we considered the noninformative gamma priors of the unknown parameters with a = b = c = d = 0. For the Linex loss and general entropy loss functions, we set $h = 1.0, -1.0$ and $q = 1.0, -1.0$, respectively. The MLEs and Bayesian estimates of the entropy and the two parameters were calculated by using the Newton–Raphson iteration and Lindley’s approximation method. These results are tabulated in Table 7 and Table 8. In addition, the 95% asymptotic confidence intervals (ACIs) and Bayesian credible intervals (BCIs) of the two parameters and the entropy were calculated using the Newton–Raphson iteration, the delta method and the MCMC method. These results are displayed in Table 9.
From Table 7, Table 8 and Table 9, we can observe that the MLEs and Bayesian estimations of the parameters and the entropy were close to the estimations in the complete sample case. In most cases, the length of the Bayesian credible intervals was smaller than that of the asymptotic confidence intervals.

6. Conclusions

In this paper, we considered the estimation of the parameters and entropy of the generalized Bilal distribution using adaptive Type-II progressive hybrid censored data. Using an iterative procedure and asymptotic normality theory, we developed the MLEs and approximate confidence intervals of the unknown parameters and the entropy. The Bayesian estimates were derived by Lindley’s approximation under the squared error, Linex and general entropy loss functions. Since Lindley’s method fails to construct intervals, we utilized Gibbs sampling together with the Metropolis–Hastings sampling procedure to construct the Bayesian credible intervals of the unknown parameters and the entropy. A Monte Carlo simulation was provided to show all the estimation results. The results illustrate that the proposed methods performed well. The applicability of the considered model in a real situation was illustrated, based on the data of March precipitation in Minneapolis–Saint Paul. It was observed that the considered model could be utilized to analyze these real data appropriately.

Author Contributions

Methodology and writing, X.S.; supervision, Y.S.; simulation study, K.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (71571144, 71401134, 71171164, 11701406) and the Program of International Cooperation and Exchanges in Science and Technology funded by Shaanxi Province (2016KW-033).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not available.

Acknowledgments

The authors would like to thank the editors and the anonymous reviewers.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Theorem 1

We set $y = \exp(-\beta x^\lambda)$; then $0 < y < 1$. The cumulative distribution function of the GB distribution can be written as
$$F(x; \beta, \lambda) = 1 - 3y^2 + 2y^3, \quad 0 < y < 1.$$
By setting $u = 1 - 3y^2 + 2y^3$, $0 < u < 1$, we get $3y^2 - 2y^3 + u - 1 = 0$, $0 < y < 1$.
Set $\rho(y) = 3y^2 - 2y^3 + u - 1$ and take the first derivative of $\rho(y)$ with respect to $y$; we have $\frac{d\rho(y)}{dy} = 6y - 6y^2 > 0$ for $0 < y < 1$.
Notice that $\rho(y)$ is a monotonically increasing function on $(0, 1)$; moreover, $\rho(0^+) = u - 1 < 0$ and $\rho(1^-) = u > 0$, so $\rho$ changes sign exactly once. Thus, there is a unique solution to the equation $3y^2 - 2y^3 + u - 1 = 0$ when $0 < y < 1$. As such, we have proven that the equation $X_{i:m:n} = F^{-1}(U_i)$ has a unique solution ($i = 1, 2, \ldots, m$).

Appendix B. The Specific Steps of the Newton–Raphson Iteration Method

Step 1: Give the initial values of $\theta = (\beta, \lambda)$; that is, $\theta^{(0)} = (\beta^{(0)}, \lambda^{(0)})$.
Step 2: In the $k$th iteration, calculate $\left(\frac{\partial l}{\partial \beta}, \frac{\partial l}{\partial \lambda}\right)\Big|_{\beta = \beta^{(k)}, \lambda = \lambda^{(k)}}$ and $I(\beta^{(k)}, \lambda^{(k)})$, where $I(\beta^{(k)}, \lambda^{(k)}) = \begin{pmatrix} I_{11} & I_{12} \\ I_{21} & I_{22} \end{pmatrix}\Big|_{\beta = \beta^{(k)}, \lambda = \lambda^{(k)}}$ is the observed information matrix of the parameters $\beta$ and $\lambda$, and the $I_{ij}$, $i, j = 1, 2$, are given by Equations (10)–(13).
Step 3: Update $(\beta, \lambda)^T$ with
$$\left(\beta^{(k+1)}, \lambda^{(k+1)}\right)^T = \left(\beta^{(k)}, \lambda^{(k)}\right)^T + I^{-1}(\beta^{(k)}, \lambda^{(k)}) \times \left(\frac{\partial l}{\partial \beta}, \frac{\partial l}{\partial \lambda}\right)^T\Big|_{\beta = \beta^{(k)}, \lambda = \lambda^{(k)}}.$$
Here, $(\beta, \lambda)^T$ is the transpose of the vector $(\beta, \lambda)$, and $I^{-1}(\beta^{(k)}, \lambda^{(k)})$ represents the inverse of the matrix $I(\beta^{(k)}, \lambda^{(k)})$.
Step 4: Setting $k = k + 1$, the MLEs of the parameters (denoted by $\hat\beta$ and $\hat\lambda$) can be obtained by repeating Steps 2 and 3 until $\left|\left(\beta^{(k+1)}, \lambda^{(k+1)}\right)^T - \left(\beta^{(k)}, \lambda^{(k)}\right)^T\right| < \varepsilon$, where $\varepsilon$ is a threshold value that is fixed in advance.
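A generic sketch of Steps 1–4 (assuming NumPy; `loglik` is any twice-differentiable log-likelihood, such as the one in Section 2, and the score and observed information are approximated by central finite differences instead of the closed forms (7)–(13)):

```python
import numpy as np

def newton_raphson(loglik, theta0, eps=1e-6, fd=1e-5, max_iter=200):
    # Newton-Raphson iteration: theta <- theta + I^{-1} * score, I = -Hessian.
    theta = np.asarray(theta0, dtype=float)
    k = len(theta)
    E = np.eye(k) * fd
    for _ in range(max_iter):
        grad = np.array([(loglik(theta + E[i]) - loglik(theta - E[i])) / (2 * fd)
                         for i in range(k)])
        hess = np.array([[(loglik(theta + E[i] + E[j]) - loglik(theta + E[i] - E[j])
                           - loglik(theta - E[i] + E[j]) + loglik(theta - E[i] - E[j]))
                          / (4 * fd**2) for j in range(k)] for i in range(k)])
        step = np.linalg.solve(-hess, grad)
        theta = theta + step
        if np.max(np.abs(step)) < eps:   # the stopping rule of Step 4
            break
    return theta
```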

Appendix C. The Detailed Derivation of Bayesian Estimates of two Parameters ( β , λ ) and the Entropy under the LL Function

In this case, we take $U(\beta, \lambda) = \exp(-h\beta)$, and then
$$u_1 = -h\exp(-h\beta), \quad u_{11} = h^2\exp(-h\beta), \quad u_{12} = u_{21} = u_{22} = u_2 = 0.$$
Using Equation (26), the Bayesian estimation of parameter β is given by
$$\hat\beta_L = -\frac{1}{h}\ln\left\{\exp(-h\hat\beta) + 0.5\left[u_{11}\tau_{11} + u_1\tau_{11}^2 z_{30} + u_1\tau_{21}\tau_{22} z_{03} + 3u_1\tau_{11}\tau_{12} z_{21} + u_1\left(\tau_{11}\tau_{22} + 2\tau_{21}^2\right) z_{12}\right] + u_1\tau_{11} p_1 + u_1\tau_{12} p_2\right\}.$$
Similarly, the Bayesian estimation of parameter λ is obtained by
$$\hat\lambda_L = -\frac{1}{h}\ln\left\{\exp(-h\hat\lambda) + 0.5\left[u_{22}\tau_{22} + u_2\tau_{11}\tau_{12} z_{30} + u_2\tau_{22}^2 z_{03} + u_2\left(\tau_{11}\tau_{22} + 2\tau_{12}^2\right) z_{21} + 3u_2\tau_{22}\tau_{21} z_{12}\right] + u_2\tau_{21} p_1 + u_2\tau_{22} p_2\right\},$$
where $U(\beta, \lambda) = \exp(-h\lambda)$, so that $u_2 = -h\exp(-h\lambda)$, $u_{22} = h^2\exp(-h\lambda)$ and $u_1 = u_{11} = u_{12} = u_{21} = 0$.
For the Bayesian estimation of the entropy, we have
$$U(\beta, \lambda) = \exp\left[-hH(f)\right], \quad u_1 = \frac{h}{\beta\lambda}\exp\left[-hH(f)\right], \quad u_2 = -h\left[-\frac{1}{\lambda} + \frac{1}{\lambda^2}\left(\ln\beta - \ln(9/8) + \gamma\right)\right]\exp\left[-hH(f)\right], \quad u_{11} = h\left[-\frac{1}{\beta^2\lambda} + \frac{h}{\beta^2\lambda^2}\right]\exp\left[-hH(f)\right],$$
$$u_{22} = \left\{-h\left[\frac{1}{\lambda^2} - \frac{2}{\lambda^3}\left(\ln\beta - \ln(9/8) + \gamma\right)\right] + h^2\left[-\frac{1}{\lambda} + \frac{1}{\lambda^2}\left(\ln\beta - \ln(9/8) + \gamma\right)\right]^2\right\}\exp\left[-hH(f)\right],$$
$$u_{12} = u_{21} = h\left[\frac{h - 1}{\beta\lambda^2} - \frac{h}{\beta\lambda^3}\left(\ln\beta - \ln(9/8) + \gamma\right)\right]\exp\left[-hH(f)\right].$$
The Bayesian estimation of the entropy under the LL function is given by
$$\hat H_L(f) = -\frac{1}{h}\ln\left\{\exp\left[-h\hat H(f)\right] + 0.5\left[u_{11}\tau_{11} + 2u_{12}\tau_{12} + u_{22}\tau_{22} + z_{30}\left(u_1\tau_{11} + u_2\tau_{12}\right)\tau_{11} + z_{03}\left(u_2\tau_{22} + u_1\tau_{21}\right)\tau_{22} + z_{21}\left(3u_1\tau_{11}\tau_{12} + u_2\left(\tau_{11}\tau_{22} + 2\tau_{12}^2\right)\right) + z_{12}\left(3u_2\tau_{22}\tau_{21} + u_1\left(\tau_{11}\tau_{22} + 2\tau_{21}^2\right)\right)\right] + p_1\left(u_1\tau_{11} + u_2\tau_{21}\right) + p_2\left(u_2\tau_{22} + u_1\tau_{12}\right)\right\}.$$

Appendix D. The Derivation of Bayesian Estimates of two Parameters ( β , λ ) and the Entropy under the GEL Function

In this case, we take $U(\beta, \lambda) = \beta^{-q}$, and then $u_1 = -q\beta^{-q-1}$, $u_{11} = q(q+1)\beta^{-q-2}$, and $u_{12} = u_{21} = u_{22} = u_2 = 0$.
Using Equation (26), the Bayesian estimation of parameter β is given by
$$\hat\beta_E = \left\{\hat\beta^{-q} + 0.5\left[u_{11}\tau_{11} + u_1\tau_{11}^2 z_{30} + u_1\tau_{21}\tau_{22} z_{03} + 3u_1\tau_{11}\tau_{12} z_{21} + u_1\left(\tau_{11}\tau_{22} + 2\tau_{21}^2\right) z_{12}\right] + u_1\tau_{11} p_1 + u_1\tau_{12} p_2\right\}^{-1/q}.$$
Similarly, the Bayesian estimation of parameter λ is obtained by
$$\hat\lambda_E = \left\{\hat\lambda^{-q} + 0.5\left[u_{22}\tau_{22} + u_2\tau_{11}\tau_{12} z_{30} + u_2\tau_{22}^2 z_{03} + u_2\left(\tau_{11}\tau_{22} + 2\tau_{12}^2\right) z_{21} + 3u_2\tau_{22}\tau_{21} z_{12}\right] + u_2\tau_{21} p_1 + u_2\tau_{22} p_2\right\}^{-1/q},$$
where $U(\beta, \lambda) = \lambda^{-q}$, so that $u_2 = -q\lambda^{-q-1}$, $u_{22} = q(q+1)\lambda^{-q-2}$ and $u_1 = u_{11} = u_{12} = u_{21} = 0$.
For the Bayesian estimation of the entropy under the GEL function, we take $U(\beta, \lambda) = [H(f)]^{-q}$, and then
$$u_1 = \frac{q}{\beta\lambda}\left[H(f)\right]^{-q-1}, \quad u_2 = \left[\frac{q}{\lambda} - \frac{q}{\lambda^2}\left(\ln\beta - \ln(9/8) + \gamma\right)\right]\left[H(f)\right]^{-q-1},$$
$$u_{11} = \frac{q(q+1)}{\beta^2\lambda^2}\left[H(f)\right]^{-q-2} - \frac{q}{\beta^2\lambda}\left[H(f)\right]^{-q-1},$$
$$u_{22} = \left[-\frac{q}{\lambda^2} + \frac{2q}{\lambda^3}\left(\ln\beta - \ln(9/8) + \gamma\right)\right]\left[H(f)\right]^{-q-1} + q(q+1)\left[\frac{1}{\lambda} - \frac{1}{\lambda^2}\left(\ln\beta - \ln(9/8) + \gamma\right)\right]^2\left[H(f)\right]^{-q-2},$$
$$u_{12} = u_{21} = q(q+1)\left[\frac{1}{\beta\lambda^2} - \frac{1}{\beta\lambda^3}\left(\ln\beta - \ln(9/8) + \gamma\right)\right]\left[H(f)\right]^{-q-2} - \frac{q}{\beta\lambda^2}\left[H(f)\right]^{-q-1}.$$
Using Equation (26), the approximate Bayesian estimation of the entropy is given by
$$\hat H_E(f) = \left\{\left[\hat H(f)\right]^{-q} + 0.5\left[u_{11}\tau_{11} + 2u_{12}\tau_{12} + u_{22}\tau_{22} + z_{30}\left(u_1\tau_{11} + u_2\tau_{12}\right)\tau_{11} + z_{03}\left(u_2\tau_{22} + u_1\tau_{21}\right)\tau_{22} + z_{21}\left(3u_1\tau_{11}\tau_{12} + u_2\left(\tau_{11}\tau_{22} + 2\tau_{12}^2\right)\right) + z_{12}\left(3u_2\tau_{22}\tau_{21} + u_1\left(\tau_{11}\tau_{22} + 2\tau_{21}^2\right)\right)\right] + p_1\left(u_1\tau_{11} + u_2\tau_{21}\right) + p_2\left(u_2\tau_{22} + u_1\tau_{12}\right)\right\}^{-1/q}.$$

Appendix E

Table A1. The average 95% approximate confidence intervals (ACIs), average lengths (ALs) and coverage probabilities (CPs) of β, λ and the entropy (β = 1, λ = 2, H(f) = 0.2448, T = 0.6).

| (n, m) | SC | ACI of β | AL | CP | ACI of λ | AL | CP | ACI of H | AL | CP |
|---|---|---|---|---|---|---|---|---|---|---|
| (40, 15) | I | (0.6598, 1.5736) | 0.9138 | 0.9042 | (1.2220, 3.1773) | 1.9573 | 0.9216 | (0.0293, 1.1866) | 1.1573 | 0.9184 |
| | II | (0.6711, 1.4742) | 0.8031 | 0.9253 | (1.4238, 2.8658) | 1.4420 | 0.9361 | (0.0393, 0.7733) | 0.7340 | 0.929 |
| | III | (0.6343, 1.5347) | 0.9004 | 0.9130 | (1.2645, 3.1064) | 1.9319 | 0.9281 | (0.0254, 1.1244) | 1.0990 | 0.9174 |
| (50, 15) | I | (0.6421, 1.5458) | 0.9037 | 0.9162 | (1.2837, 3.0913) | 1.8076 | 0.9314 | (0.0203, 1.0469) | 1.0266 | 0.9216 |
| | II | (0.7102, 1.3884) | 0.6782 | 0.9394 | (1.4416, 2.7246) | 1.2830 | 0.9406 | (0.0438, 0.6924) | 0.6486 | 0.9392 |
| | III | (0.6914, 1.5147) | 0.8233 | 0.9253 | (1.3021, 2.9705) | 1.6684 | 0.9370 | (0.0264, 1.0759) | 1.0495 | 0.9261 |
| (60, 30) | I | (0.6377, 1.5335) | 0.8958 | 0.9374 | (1.3388, 3.0191) | 1.6803 | 0.9487 | (0.0151, 0.9112) | 0.8959 | 0.9393 |
| | II | (0.7093, 1.3769) | 0.6676 | 0.9516 | (1.4807, 2.6886) | 1.2069 | 0.9542 | (0.0536, 0.6667) | 0.6131 | 0.9461 |
| | III | (0.6934, 1.4786) | 0.7852 | 0.9405 | (1.3955, 2.9630) | 1.5675 | 0.9506 | (0.0325, 0.8630) | 0.8305 | 0.9428 |
| (70, 30) | I | (0.7329, 1.4293) | 0.6964 | 0.9472 | (1.4068, 2.8432) | 1.4364 | 0.9534 | (0.0298, 0.7943) | 0.7645 | 0.9446 |
| | II | (0.7247, 1.2859) | 0.5602 | 0.9651 | (1.5369, 2.5891) | 1.0522 | 0.9680 | (0.0614, 0.5498) | 0.4884 | 0.9632 |
| | III | (0.7392, 1.3486) | 0.6154 | 0.9514 | (1.4476, 2.7845) | 1.3361 | 0.9573 | (0.0498, 0.7185) | 0.6687 | 0.9521 |
Table A2. The average 95% approximate confidence intervals (ACIs), average lengths (ALs) and coverage probabilities (CPs) of β, λ and the entropy (β = 1, λ = 2, H(f) = 0.2448, T = 1.5).

| (n, m) | SC | ACI of β | AL | CP | ACI of λ | AL | CP | ACI of H | AL | CP |
|---|---|---|---|---|---|---|---|---|---|---|
| (40, 15) | I | (0.5234, 1.8717) | 1.3483 | 0.9231 | (1.2469, 3.2287) | 1.9818 | 0.9274 | (0.0284, 1.1887) | 1.1603 | 0.9267 |
| | II | (0.6662, 1.4576) | 0.7914 | 0.9372 | (1.4322, 2.8760) | 1.4438 | 0.9405 | (0.0436, 0.7887) | 0.7451 | 0.9393 |
| | III | (0.5619, 1.8110) | 1.2491 | 0.9252 | (1.2679, 3.2045) | 1.9364 | 0.9364 | (0.0212, 1.1173) | 1.0961 | 0.9340 |
| (50, 15) | I | (0.5601, 1.6810) | 1.1209 | 0.9230 | (1.3076, 3.0214) | 1.7136 | 0.9363 | (0.0245, 0.9304) | 0.9059 | 0.9347 |
| | II | (0.7124, 1.3705) | 0.6581 | 0.9418 | (1.4548, 2.7213) | 1.2665 | 0.9462 | (0.0458, 0.6740) | 0.6282 | 0.9515 |
| | III | (0.6103, 1.5868) | 0.9765 | 0.9336 | (1.3320, 2.9769) | 1.6449 | 0.9372 | (0.0259, 0.8461) | 0.8202 | 0.9347 |
| (60, 30) | I | (0.6659, 1.5135) | 0.8476 | 0.9418 | (1.3454, 3.0335) | 1.6881 | 0.9521 | (0.0206, 1.0400) | 1.0194 | 0.9464 |
| | II | (0.7051, 1.3680) | 0.6619 | 0.9592 | (1.4812, 2.6942) | 1.2130 | 0.9574 | (0.0456, 0.6604) | 0.6148 | 0.9531 |
| | III | (0.6913, 1.4513) | 0.7600 | 0.9431 | (1.3775, 2.8768) | 1.4983 | 0.9520 | (0.0237, 0.9934) | 0.9697 | 0.9506 |
| (70, 30) | I | (0.7381, 1.3951) | 0.6570 | 0.9492 | (1.4501, 2.7820) | 1.3319 | 0.9582 | (0.0321, 0.7553) | 0.7232 | 0.9523 |
| | II | (0.7573, 1.2850) | 0.5277 | 0.9704 | (1.5514, 2.5845) | 1.0331 | 0.9726 | (0.0647, 0.5680) | 0.5033 | 0.9741 |
| | III | (0.7554, 1.3492) | 0.5938 | 0.9546 | (1.4967, 2.7071) | 1.2104 | 0.9615 | (0.0410, 0.7147) | 0.6737 | 0.9591 |
Table A3. The average 95% Bayesian credible intervals (BCIs), average lengths (ALs) and coverage probabilities (CPs) of β, λ and the entropy (β = 1, λ = 2, H(f) = 0.2448, T = 0.6).

| (n, m) | SC | BCI of β | AL | CP | BCI of λ | AL | CP | BCI of H | AL | CP |
|---|---|---|---|---|---|---|---|---|---|---|
| (40, 15) | I | (0.5521, 1.2841) | 0.7320 | 0.9194 | (1.0215, 2.4593) | 1.4378 | 0.9241 | (0.0213, 1.1750) | 1.1537 | 0.9263 |
| | II | (0.6378, 1.3228) | 0.6850 | 0.9433 | (1.2854, 2.5238) | 1.2384 | 0.9472 | (0.0395, 0.7752) | 0.7357 | 0.9380 |
| | III | (0.5670, 1.2953) | 0.7283 | 0.9253 | (1.0579, 2.4762) | 1.4183 | 0.9294 | (0.0224, 1.1192) | 1.0968 | 0.9308 |
| (50, 15) | I | (0.5924, 1.2871) | 0.6947 | 0.9312 | (1.1731, 2.5054) | 1.3323 | 0.9397 | (0.0298, 0.9231) | 0.8933 | 0.9386 |
| | II | (0.6897, 1.2921) | 0.6024 | 0.9491 | (1.3580, 2.4935) | 1.1355 | 0.9465 | (0.0548, 0.6751) | 0.6203 | 0.9507 |
| | III | (0.6067, 1.2854) | 0.6787 | 0.9342 | (1.2051, 2.4718) | 1.2667 | 0.9354 | (0.0278, 0.8553) | 0.8275 | 0.9326 |
| (60, 30) | I | (0.6450, 1.2925) | 0.6475 | 0.9481 | (1.1389, 2.4565) | 1.3176 | 0.9536 | (0.0397, 1.0509) | 1.0112 | 0.9394 |
| | II | (0.6870, 1.2905) | 0.6035 | 0.9614 | (1.3883, 2.4740) | 1.0857 | 0.9656 | (0.0578, 0.6717) | 0.6139 | 0.9562 |
| | III | (0.6565, 1.2812) | 0.6247 | 0.9532 | (1.1919, 2.4423) | 1.2504 | 0.9561 | (0.0319, 0.8408) | 0.8029 | 0.9528 |
| (70, 30) | I | (0.7062, 1.2494) | 0.5432 | 0.9512 | (1.3068, 2.4374) | 1.1306 | 0.9563 | (0.0324, 0.7516) | 0.7192 | 0.9536 |
| | II | (0.7451, 1.2449) | 0.4998 | 0.9711 | (1.4821, 2.4494) | 0.9673 | 0.9744 | (0.0701, 0.5672) | 0.4971 | 0.9783 |
| | III | (0.7162, 1.2359) | 0.5197 | 0.9583 | (1.3597, 2.4443) | 1.0846 | 0.9604 | (0.0440, 0.7067) | 0.6627 | 0.9578 |
Table A4. The average 95% Bayesian credible intervals (BCIs), average lengths (ALs) and coverage probabilities (CPs) of β, λ and the entropy (β = 1, λ = 2, H(f) = 0.2448, T = 1.5).

| (n, m) | SC | BCI of β | AL | CP | BCI of λ | AL | CP | BCI of H | AL | CP |
|---|---|---|---|---|---|---|---|---|---|---|
| (40, 15) | I | (0.5554, 1.2954) | 0.7400 | 0.9218 | (1.0243, 2.4612) | 1.4369 | 0.9354 | (0.0251, 1.1801) | 1.1550 | 0.9258 |
| | II | (0.6417, 1.3339) | 0.6922 | 0.9439 | (1.2824, 2.5169) | 1.2345 | 0.9485 | (0.0372, 0.7728) | 0.7356 | 0.9394 |
| | III | (0.5696, 1.3033) | 0.7337 | 0.9275 | (1.0556, 2.4672) | 1.4116 | 0.9318 | (0.0241, 1.1200) | 1.0959 | 0.9337 |
| (50, 15) | I | (0.5954, 1.2947) | 0.6993 | 0.9417 | (1.1722, 2.4804) | 1.3002 | 0.9420 | (0.0224, 1.0231) | 1.0007 | 0.9418 |
| | II | (0.68902, 1.2954) | 0.6062 | 0.9506 | (1.3599, 2.5034) | 1.1435 | 0.9525 | (0.0479, 0.6710) | 0.6239 | 0.9526 |
| | III | (0.6045, 1.2801) | 0.6756 | 0.9359 | (1.2337, 2.5094) | 1.2757 | 0.9364 | (0.0324, 1.0047) | 0.9723 | 0.9371 |
| (60, 30) | I | (0.6418, 1.2835) | 0.6417 | 0.9494 | (1.1349, 2.4455) | 1.3106 | 0.9548 | (0.0250, 0.9212) | 0.8960 | 0.9417 |
| | II | (0.6896, 1.2970) | 0.6074 | 0.9628 | (1.3987, 2.4911) | 1.0924 | 0.9662 | (0.0479, 0.6608) | 0.6129 | 0.9573 |
| | III | (0.6600, 1.2856) | 0.6256 | 0.9556 | (1.1549, 2.4283) | 1.2734 | 0.9571 | (0.0217, 0.8359) | 0.8142 | 0.9538 |
| (70, 30) | I | (0.7061, 1.2472) | 0.5411 | 0.9526 | (1.3179, 2.4521) | 1.1342 | 0.9571 | (0.0363, 0.7509) | 0.7146 | 0.9548 |
| | II | (0.7451, 1.2413) | 0.4962 | 0.9725 | (1.4663, 2.4268) | 0.9605 | 0.9757 | (0.0778, 0.5701) | 0.4923 | 0.9793 |
| | III | (0.7154, 1.2267) | 0.5113 | 0.9594 | (1.3542, 2.4118) | 1.0576 | 0.9624 | (0.0604, 0.7108) | 0.6504 | 0.9585 |

References

  1. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Birkhäuser: Boston, MA, USA, 2000.
  2. Balakrishnan, N. Progressive censoring methodology: An appraisal. Test 2007, 16, 211–259.
  3. Kundu, D.; Joarder, A. Analysis of Type-II progressively hybrid censored data. Comput. Stat. Data Anal. 2006, 50, 2509–2528.
  4. Ng, H.K.T.; Kundu, D.; Chan, P.S. Statistical analysis of exponential lifetimes under an adaptive Type-II progressive censoring scheme. Naval Res. Logist. 2010, 56, 687–698.
  5. Nassar, M.; Abo-Kasem, O.; Zhang, C.; Dey, S. Analysis of Weibull distribution under adaptive Type-II progressive hybrid censoring scheme. J. Indian Soc. Probab. Stat. 2018, 19, 25–65.
  6. Zhang, C.; Shi, Y. Estimation of the extended Weibull parameters and acceleration factors in the step-stress accelerated life tests under an adaptive progressively hybrid censoring data. J. Stat. Comput. Simul. 2016, 86, 3303–3314.
  7. Cui, W.; Yan, Z.; Peng, X. Statistical analysis for constant-stress accelerated life test with Weibull distribution under adaptive Type-II hybrid censored data. IEEE Access 2019.
  8. Ismail, A.A. Inference for a step-stress partially accelerated life test model with an adaptive Type-II progressively hybrid censored data from Weibull distribution. J. Comput. Appl. Math. 2014, 260, 533–542.
  9. Zhang, C.; Shi, Y. Inference for constant-stress accelerated life tests with dependent competing risks from bivariate Birnbaum–Saunders distribution based on adaptive progressively hybrid censoring. IEEE Trans. Reliab. 2017, 66, 111–122.
  10. Ye, Z.S.; Chan, P.S.; Xie, M. Statistical inference for the extreme value distribution under adaptive Type-II progressive censoring schemes. J. Stat. Comput. Simul. 2014, 84, 1099–1114.
  11. Sobhi, M.M.; Soliman, A.A. Estimation for the exponentiated Weibull model with adaptive Type-II progressive censored schemes. Appl. Math. Model. 2016, 40, 1180–1192.
  12. Nassar, M.; Abo-Kasem, O.E. Estimation of the inverse Weibull parameters under adaptive Type-II progressive hybrid censoring scheme. J. Comput. Appl. Math. 2017, 315, 228–239.
  13. Xu, R.; Gui, W.H. Entropy estimation of inverse Weibull distribution under adaptive Type-II progressive hybrid censoring schemes. Symmetry 2019, 11, 1463.
  14. Kang, S.B.; Cho, Y.S.; Han, J.T.; Kim, J. An estimation of the entropy for a double exponential distribution based on multiply Type-II censored samples. Entropy 2012, 14, 161–173.
  15. Cho, Y.; Sun, H.; Lee, K. An estimation of the entropy for a Rayleigh distribution based on doubly-generalized Type-II hybrid censored samples. Entropy 2014, 16, 3655–3669.
  16. Baratpour, S.; Ahmadi, J.; Arghami, N.R. Entropy properties of record statistics. Stat. Pap. 2017, 48, 197–213.
  17. Cramer, E.; Bagh, C. Minimum and maximum information censoring plans in progressive censoring. Commun. Stat. Theory Methods 2011, 40, 2511–2527.
  18. Cho, Y.; Sun, H.; Lee, K. Estimating the entropy of a Weibull distribution under generalized progressive hybrid censoring. Entropy 2015, 17, 102–122.
  19. Yu, J.; Gui, W.H.; Shan, Y.Q. Statistical inference on the Shannon entropy of inverse Weibull distribution under the progressive first-failure censoring. Entropy 2019, 21, 1209.
  20. Abd-Elrahman, A.M. A new two-parameter lifetime distribution with decreasing, increasing or upside-down bathtub-shaped failure rate. Commun. Stat. Theory Methods 2017, 46, 8865–8880.
  21. Abd-Elrahman, A.M. Reliability estimation under Type-II censored data from the generalized Bilal distribution. J. Egypt. Math. Soc. 2019, 27, 1–15.
  22. Mahmoud, M.; EL-Sagheer, R.M.; Abdallah, S. Inferences for new Weibull–Pareto distribution based on progressively Type-II censored data. J. Stat. Appl. Probab. 2016, 5, 501–514.
  23. Ahmed, E.A. Bayesian estimation based on progressive Type-II censoring from two-parameter bathtub-shaped lifetime model: A Markov chain Monte Carlo approach. J. Appl. Stat. 2014, 41, 752–768.
  24. Gilks, W.R.; Wild, P. Adaptive rejection sampling for Gibbs sampling. J. R. Stat. Soc. Ser. C 1992, 41, 337–348.
  25. Koch, K.R. Gibbs sampler by sampling-importance-resampling. J. Geod. 2007, 81, 581–591.
  26. Martino, L.; Elvira, V.; Camps-Valls, G. The recycling Gibbs sampler for efficient learning. Digit. Signal Process. 2018, 74, 1–13.
  27. Panahi, H.; Moradi, N. Estimation of the inverted exponentiated Rayleigh distribution based on adaptive Type II progressive hybrid censored sample. J. Comput. Appl. Math. 2020, 364, 112345.
  28. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equations of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1092.
  29. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109.
  30. Balakrishnan, N.; Sandhu, R.A. A simple simulational algorithm for generating progressive Type-II censored samples. Am. Stat. 1995, 49, 229–230.
  31. Hinkley, D. On quick choice of power transformations. Appl. Stat. 1977, 26, 67–96.
  32. Barreto-Souza, W.; Cribari-Neto, F. A generalization of the exponential-Poisson distribution. Stat. Probab. Lett. 2009, 79, 2493–2500.
Figure 1. MSEs of different entropy estimations. (a) MSEs of MLEs of entropy in the case of T = 0.6 and T = 1.5. (b) MSEs of Bayesian estimations of entropy under a squared error loss function in the case of T = 0.6 and T = 1.5. (c) MSEs of Bayesian estimations of entropy under a Linex loss function in the case of T = 0.6. (d) MSEs of Bayesian estimations of entropy under a Linex loss function in the case of T = 1.5. (e) MSEs of Bayesian estimations of entropy under a general entropy loss function in the case of T = 0.6. (f) MSEs of Bayesian estimations of entropy under a general entropy loss function in the case of T = 1.5.
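The averages and MSEs plotted in Figure 1 and tabulated below are summaries over Monte Carlo replications. As a minimal illustrative sketch (not the authors' code), assuming a hypothetical array `estimates` holding one point estimate per simulated censored sample, with the true values taken from the table captions:

```python
import numpy as np

# True values used throughout the simulation study
# (taken from the captions of Tables 1-6).
BETA_TRUE, LAMBDA_TRUE, H_TRUE = 1.0, 2.0, 0.2448

def average_and_mse(estimates, true_value):
    """Average estimate and mean square error over Monte Carlo replications.

    `estimates` is a hypothetical 1-D array holding one point estimate
    (e.g., the MLE of the entropy) per simulated censored sample; generating
    those samples is scheme- and distribution-specific and is described in
    the body of the paper.
    """
    estimates = np.asarray(estimates, dtype=float)
    avg = estimates.mean()
    mse = np.mean((estimates - true_value) ** 2)
    return avg, mse

# Hypothetical usage, e.g. for 1000 replicated entropy MLEs:
# avg_H, mse_H = average_and_mse(entropy_mles, H_TRUE)
```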
Table 1. The average maximum likelihood estimations (MLEs) and mean square errors (MSEs) of β, λ and the entropy (β = 1, λ = 2, H(f) = 0.2448). Each cell reports the average estimate with its MSE in parentheses.

| (n, m) | SC | β̂ (MSE), T = 0.6 | λ̂ (MSE), T = 0.6 | Ĥ (MSE), T = 0.6 | β̂ (MSE), T = 1.5 | λ̂ (MSE), T = 1.5 | Ĥ (MSE), T = 1.5 |
|---|---|---|---|---|---|---|---|
| (40, 15) | I | 1.1850 (0.1224) | 2.2096 (0.1428) | 0.1903 (0.0979) | 1.1875 (0.1213) | 2.2848 (0.1521) | 0.1950 (0.0963) |
| (40, 15) | II | 1.0727 (0.0709) | 2.1448 (0.1258) | 0.2015 (0.0376) | 1.0619 (0.0609) | 2.1541 (0.1336) | 0.2017 (0.0279) |
| (40, 15) | III | 1.1819 (0.1217) | 2.2354 (0.1413) | 0.1947 (0.0910) | 1.1864 (0.1208) | 2.2362 (0.1514) | 0.1968 (0.0902) |
| (50, 15) | I | 1.1326 (0.1053) | 2.1803 (0.1398) | 0.2086 (0.0797) | 1.0905 (0.0741) | 2.1931 (0.1483) | 0.1997 (0.0750) |
| (50, 15) | II | 1.0498 (0.0390) | 2.1017 (0.1243) | 0.2281 (0.0280) | 1.0390 (0.0374) | 2.1076 (0.1263) | 0.2169 (0.0197) |
| (50, 15) | III | 1.1184 (0.1013) | 2.1817 (0.1345) | 0.2035 (0.0742) | 1.0740 (0.0602) | 2.1284 (0.1448) | 0.2013 (0.0598) |
| (60, 30) | I | 1.1006 (0.0889) | 2.1758 (0.1374) | 0.2029 (0.0625) | 1.0689 (0.0683) | 2.1795 (0.1368) | 0.2033 (0.0547) |
| (60, 30) | II | 1.0451 (0.0363) | 2.0847 (0.1066) | 0.2260 (0.0231) | 1.0476 (0.0383) | 2.0877 (0.1048) | 0.2170 (0.0158) |
| (60, 30) | III | 1.0860 (0.0653) | 2.1528 (0.1368) | 0.2086 (0.0601) | 1.0583 (0.0592) | 2.1571 (0.1335) | 0.2090 (0.0418) |
| (70, 30) | I | 1.0641 (0.0704) | 2.1296 (0.1202) | 0.2163 (0.0516) | 1.0581 (0.0597) | 2.1197 (0.1278) | 0.2134 (0.0417) |
| (70, 30) | II | 1.0246 (0.0265) | 2.0785 (0.0849) | 0.2294 (0.0198) | 1.0231 (0.0317) | 2.0715 (0.0946) | 0.2245 (0.0148) |
| (70, 30) | III | 1.0517 (0.0580) | 2.1483 (0.1203) | 0.2199 (0.0591) | 1.0468 (0.0485) | 2.1132 (0.1203) | 0.2195 (0.0361) |
Table 2. The average Bayesian estimations and MSEs of β, λ and the entropy under the squared error loss function (β = 1, λ = 2, H(f) = 0.2448). Each cell reports the average estimate with its MSE in parentheses.

| (n, m) | SC | β̂ (MSE), T = 0.6 | λ̂ (MSE), T = 0.6 | Ĥ (MSE), T = 0.6 | β̂ (MSE), T = 1.5 | λ̂ (MSE), T = 1.5 | Ĥ (MSE), T = 1.5 |
|---|---|---|---|---|---|---|---|
| (40, 15) | I | 0.8625 (0.0353) | 1.8735 (0.1325) | 0.3357 (0.0930) | 0.8687 (0.0337) | 1.8761 (0.1317) | 0.3301 (0.0920) |
| (40, 15) | II | 0.9480 (0.0235) | 1.9583 (0.0954) | 0.2630 (0.0342) | 0.9546 (0.0255) | 1.9531 (0.0938) | 0.2616 (0.0217) |
| (40, 15) | III | 0.8795 (0.0340) | 1.8041 (0.1314) | 0.3264 (0.0948) | 0.8837 (0.0310) | 1.8996 (0.1299) | 0.3034 (0.0902) |
| (50, 15) | I | 0.9325 (0.0297) | 1.8917 (0.1185) | 0.3189 (0.0796) | 0.8973 (0.0289) | 1.8345 (0.0975) | 0.2732 (0.0741) |
| (50, 15) | II | 0.9645 (0.0218) | 1.9907 (0.0827) | 0.2580 (0.0260) | 0.9694 (0.0223) | 1.9763 (0.0812) | 0.2303 (0.0198) |
| (50, 15) | III | 0.9475 (0.0253) | 1.9013 (0.1072) | 0.3016 (0.0546) | 0.9824 (0.0234) | 1.9314 (0.0972) | 0.2661 (0.0486) |
| (60, 30) | I | 0.9274 (0.0224) | 1.8445 (0.1151) | 0.2357 (0.0575) | 0.9457 (0.0263) | 1.8781 (0.0919) | 0.2674 (0.0508) |
| (60, 30) | II | 0.9671 (0.0202) | 1.9932 (0.0728) | 0.2398 (0.0235) | 0.9688 (0.0207) | 2.0176 (0.0741) | 0.2235 (0.0179) |
| (60, 30) | III | 0.9185 (0.0211) | 1.8525 (0.1072) | 0.2301 (0.0534) | 0.9316 (0.0227) | 1.9427 (0.0954) | 0.2652 (0.0504) |
| (70, 30) | I | 0.9742 (0.0198) | 1.9360 (0.0775) | 0.2538 (0.0404) | 0.9515 (0.0213) | 1.9504 (0.0892) | 0.2553 (0.0401) |
| (70, 30) | II | 0.9895 (0.0174) | 2.0413 (0.0613) | 0.2506 (0.0195) | 0.9804 (0.0186) | 2.0378 (0.0537) | 0.2260 (0.0105) |
| (70, 30) | III | 0.9787 (0.0182) | 1.9746 (0.0761) | 0.2512 (0.0397) | 0.9713 (0.0194) | 1.9714 (0.0683) | 0.2537 (0.0346) |
Table 3. The average Bayesian estimations and MSEs of β, λ and the entropy under the Linex loss function (β = 1, λ = 2, T = 0.6, H(f) = 0.2448). Each cell reports the average estimate with its MSE in parentheses.

| (n, m) | SC | β̂ (MSE), h = −1 | λ̂ (MSE), h = −1 | Ĥ (MSE), h = −1 | β̂ (MSE), h = 1 | λ̂ (MSE), h = 1 | Ĥ (MSE), h = 1 |
|---|---|---|---|---|---|---|---|
| (40, 15) | I | 0.8835 (0.0355) | 1.8558 (0.1261) | 0.3583 (0.0964) | 0.8531 (0.0366) | 1.8248 (0.1343) | 0.2802 (0.0904) |
| (40, 15) | II | 0.9740 (0.0255) | 1.9161 (0.0885) | 0.2587 (0.0721) | 0.9308 (0.0246) | 1.9092 (0.1008) | 0.2469 (0.0304) |
| (40, 15) | III | 0.9047 (0.0308) | 1.8768 (0.1249) | 0.3343 (0.0929) | 0.8670 (0.0335) | 1.8405 (0.1889) | 0.2638 (0.0884) |
| (50, 15) | I | 0.9047 (0.0301) | 1.9415 (0.1238) | 0.3158 (0.0939) | 0.8704 (0.0337) | 1.9175 (0.1329) | 0.2736 (0.0764) |
| (50, 15) | II | 0.9852 (0.0218) | 2.0538 (0.0789) | 0.2502 (0.0623) | 0.9674 (0.0213) | 1.9201 (0.0912) | 0.2358 (0.0265) |
| (50, 15) | III | 0.9105 (0.0284) | 1.9771 (0.0986) | 0.3046 (0.0904) | 0.8924 (0.0293) | 1.9203 (0.1257) | 0.2604 (0.0654) |
| (60, 30) | I | 0.9341 (0.0223) | 1.9788 (0.1127) | 0.2792 (0.0836) | 0.9035 (0.0238) | 1.9221 (0.1308) | 0.2520 (0.0543) |
| (60, 30) | II | 0.9834 (0.0198) | 2.0465 (0.0664) | 0.3743 (0.0365) | 0.9609 (0.0211) | 1.9447 (0.0791) | 0.2118 (0.0220) |
| (60, 30) | III | 0.9498 (0.0204) | 1.9837 (0.0973) | 0.3424 (0.0829) | 0.9258 (0.0207) | 1.9253 (0.1227) | 0.2319 (0.0425) |
| (70, 30) | I | 0.9561 (0.0197) | 1.9889 (0.0768) | 0.2546 (0.0579) | 0.9378 (0.0184) | 1.9543 (0.0975) | 0.2407 (0.0403) |
| (70, 30) | II | 0.9957 (0.0174) | 2.0312 (0.0572) | 0.2371 (0.0281) | 0.9798 (0.0159) | 2.0164 (0.0614) | 0.2410 (0.0187) |
| (70, 30) | III | 0.9687 (0.0185) | 2.0024 (0.0746) | 0.2265 (0.0536) | 0.9451 (0.0120) | 1.9623 (0.0784) | 0.2409 (0.0354) |
Table 4. The average Bayesian estimations and MSEs of β, λ and the entropy under the Linex loss function (β = 1, λ = 2, T = 1.5, H(f) = 0.2448). Each cell reports the average estimate with its MSE in parentheses.

| (n, m) | SC | β̂ (MSE), h = −1 | λ̂ (MSE), h = −1 | Ĥ (MSE), h = −1 | β̂ (MSE), h = 1 | λ̂ (MSE), h = 1 | Ĥ (MSE), h = 1 |
|---|---|---|---|---|---|---|---|
| (40, 15) | I | 0.8896 (0.0330) | 1.8328 (0.1359) | 0.3492 (0.1025) | 0.8510 (0.0375) | 1.8127 (0.1396) | 0.3381 (0.0947) |
| (40, 15) | II | 0.9638 (0.0248) | 1.9177 (0.0863) | 0.2743 (0.0365) | 0.9272 (0.0265) | 1.9167 (0.0982) | 0.2657 (0.0301) |
| (40, 15) | III | 0.8922 (0.0321) | 1.8691 (0.1306) | 0.3424 (0.0948) | 0.8631 (0.0334) | 1.8430 (0.1328) | 0.3343 (0.0803) |
| (50, 15) | I | 0.9024 (0.0234) | 1.8678 (0.1094) | 0.3217 (0.0921) | 0.8823 (0.0315) | 1.8874 (0.1173) | 0.3216 (0.0810) |
| (50, 15) | II | 0.9713 (0.0221) | 1.9401 (0.0731) | 0.2601 (0.0262) | 0.9418 (0.0217) | 1.9824 (0.0884) | 0.2632 (0.0223) |
| (50, 15) | III | 0.9135 (0.0231) | 1.8792 (0.0900) | 0.3383 (0.0921) | 0.8975 (0.0314) | 1.8845 (0.1121) | 0.3210 (0.0693) |
| (60, 30) | I | 0.9470 (0.0219) | 1.8946 (0.0951) | 0.3222 (0.0727) | 0.9080 (0.0234) | 1.9012 (0.1075) | 0.3251 (0.0536) |
| (60, 30) | II | 0.9795 (0.0209) | 1.9452 (0.0719) | 0.2518 (0.0246) | 0.9548 (0.0199) | 1.9616 (0.0776) | 0.2513 (0.0219) |
| (60, 30) | III | 0.9425 (0.0213) | 1.8978 (0.0906) | 0.3197 (0.0648) | 0.9253 (0.0213) | 1.9041 (0.1069) | 0.3218 (0.0412) |
| (70, 30) | I | 0.9583 (0.0184) | 1.9562 (0.0748) | 0.3165 (0.0473) | 0.9491 (0.0179) | 1.9493 (0.0861) | 0.3314 (0.0392) |
| (70, 30) | II | 0.9901 (0.0163) | 2.0576 (0.0652) | 0.2318 (0.0168) | 0.9814 (0.0153) | 2.0997 (0.0608) | 0.2459 (0.0161) |
| (70, 30) | III | 0.9711 (0.0175) | 1.9230 (0.0697) | 0.3027 (0.0389) | 0.9502 (0.0162) | 1.9894 (0.0841) | 0.3267 (0.0304) |
Table 5. The average Bayesian estimations and MSEs of β, λ and the entropy under the general entropy loss function (β = 1, λ = 2, T = 0.6, H(f) = 0.2448). Each cell reports the average estimate with its MSE in parentheses.

| (n, m) | SC | β̂ (MSE), q = −1 | λ̂ (MSE), q = −1 | Ĥ (MSE), q = −1 | β̂ (MSE), q = 1 | λ̂ (MSE), q = 1 | Ĥ (MSE), q = 1 |
|---|---|---|---|---|---|---|---|
| (40, 15) | I | 0.8739 (0.0341) | 1.8380 (0.1348) | 0.3181 (0.0891) | 0.8288 (0.0437) | 1.8173 (0.1381) | 0.3558 (0.1091) |
| (40, 15) | II | 0.9546 (0.0239) | 1.9184 (0.0966) | 0.2832 (0.0234) | 0.9169 (0.0265) | 1.9081 (0.1084) | 0.2628 (0.0315) |
| (40, 15) | III | 0.8828 (0.0324) | 1.8422 (0.1306) | 0.3097 (0.0863) | 0.8494 (0.0389) | 1.8266 (0.1361) | 0.3207 (0.1063) |
| (50, 15) | I | 0.9013 (0.0305) | 1.8948 (0.1191) | 0.3017 (0.0463) | 0.8972 (0.0380) | 1.8728 (0.1231) | 0.3423 (0.0598) |
| (50, 15) | II | 0.9701 (0.0214) | 1.9386 (0.0803) | 0.2695 (0.0186) | 0.9430 (0.0236) | 1.9471 (0.0962) | 0.2268 (0.0271) |
| (50, 15) | III | 0.9251 (0.0263) | 1.8984 (0.1093) | 0.3023 (0.0486) | 0.8613 (0.0308) | 1.8498 (0.1176) | 0.3287 (0.0525) |
| (60, 30) | I | 0.9270 (0.0232) | 1.9089 (0.0824) | 0.2776 (0.0390) | 0.8975 (0.0276) | 1.8785 (0.1127) | 0.3270 (0.0477) |
| (60, 30) | II | 0.9610 (0.0190) | 2.0351 (0.0686) | 0.2318 (0.0197) | 0.9481 (0.0210) | 2.0453 (0.0791) | 0.2391 (0.0245) |
| (60, 30) | III | 0.9406 (0.0210) | 1.9105 (0.0874) | 0.2698 (0.0375) | 0.9116 (0.0231) | 1.8938 (0.1109) | 0.3168 (0.0418) |
| (70, 30) | I | 0.9501 (0.0171) | 1.9492 (0.0778) | 0.2536 (0.0265) | 0.9213 (0.0202) | 1.9308 (0.0840) | 0.2924 (0.0392) |
| (70, 30) | II | 0.9817 (0.0158) | 2.0147 (0.0436) | 0.2325 (0.0148) | 0.9681 (0.0151) | 2.1489 (0.0526) | 0.2410 (0.0272) |
| (70, 30) | III | 0.9546 (0.0174) | 1.9602 (0.0738) | 0.2513 (0.0168) | 0.9467 (0.0173) | 1.9436 (0.0724) | 0.2902 (0.0312) |
Table 6. The average Bayesian estimations and MSEs of β, λ and the entropy under the general entropy loss function (β = 1, λ = 2, T = 1.5, H(f) = 0.2448). Each cell reports the average estimate with its MSE in parentheses.

| (n, m) | SC | β̂ (MSE), q = −1 | λ̂ (MSE), q = −1 | Ĥ (MSE), q = −1 | β̂ (MSE), q = 1 | λ̂ (MSE), q = 1 | Ĥ (MSE), q = 1 |
|---|---|---|---|---|---|---|---|
| (40, 15) | I | 0.8770 (0.0335) | 1.8569 (0.1332) | 0.3564 (0.0903) | 0.8224 (0.0455) | 1.7924 (0.1331) | 0.3598 (0.1075) |
| (40, 15) | II | 0.9560 (0.0218) | 1.9221 (0.0914) | 0.2729 (0.0198) | 0.9112 (0.0257) | 1.9038 (0.0913) | 0.2786 (0.0294) |
| (40, 15) | III | 0.8836 (0.0315) | 1.8297 (0.1217) | 0.3519 (0.0841) | 0.8453 (0.0348) | 1.8374 (0.1224) | 0.3547 (0.1024) |
| (50, 15) | I | 0.8947 (0.0298) | 1.8979 (0.0981) | 0.3028 (0.0372) | 0.8631 (0.0362) | 1.8308 (0.1134) | 0.3143 (0.0483) |
| (50, 15) | II | 0.9685 (0.0206) | 1.9793 (0.0801) | 0.2610 (0.0164) | 0.9377 (0.0216) | 1.9467 (0.0910) | 0.2656 (0.0283) |
| (50, 15) | III | 0.8984 (0.0278) | 1.9078 (0.0931) | 0.3012 (0.0416) | 0.8702 (0.0302) | 1.8547 (0.1086) | 0.3125 (0.0502) |
| (60, 30) | I | 0.9244 (0.0221) | 1.8446 (0.0772) | 0.2731 (0.0283) | 0.8930 (0.0267) | 1.9208 (0.1041) | 0.2812 (0.0421) |
| (60, 30) | II | 0.9767 (0.0188) | 2.0526 (0.0614) | 0.2554 (0.0164) | 0.9440 (0.0202) | 2.0658 (0.0718) | 0.2627 (0.0238) |
| (60, 30) | III | 0.9387 (0.0198) | 1.9541 (0.0824) | 0.2709 (0.0346) | 0.9125 (0.0210) | 1.9435 (0.0983) | 0.2801 (0.0431) |
| (70, 30) | I | 0.9531 (0.0167) | 1.9578 (0.0738) | 0.2501 (0.0247) | 0.9230 (0.0188) | 1.9447 (0.0814) | 0.2523 (0.0370) |
| (70, 30) | II | 0.9814 (0.0140) | 2.2263 (0.0394) | 0.2309 (0.0135) | 0.9675 (0.0140) | 2.2680 (0.0338) | 0.2352 (0.0247) |
| (70, 30) | III | 0.9624 (0.0163) | 1.9795 (0.0745) | 0.2486 (0.0216) | 0.9457 (0.0164) | 1.9539 (0.0718) | 0.2501 (0.0306) |
Table 7. MLEs and Bayesian estimations of the parameters and the entropy.

| MLEs | Case I | Case II | BEs (Squared Loss) | Case I | Case II |
|---|---|---|---|---|---|
| β̂_M | 0.3289 | 0.3948 | β̂_S | 0.3428 | 0.4044 |
| λ̂_M | 1.0408 | 1.3373 | λ̂_S | 0.9974 | 1.2410 |
| Ĥ_M | 1.5890 | 1.3881 | Ĥ_S | 1.6230 | 1.4701 |
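The MLEs in Table 7 are found by Newton–Raphson iteration on the likelihood equations. A minimal generic sketch follows, not the authors' implementation: `score` and `hessian` are hypothetical callables standing in for the gradient and Hessian of the GB log-likelihood, whose explicit forms are derived in the body of the paper.

```python
import numpy as np

def newton_raphson(score, hessian, theta0, tol=1e-8, max_iter=200):
    """Solve the likelihood equations score(theta) = 0 by Newton-Raphson.

    `score` returns the gradient of the log-likelihood at theta = (beta, lam)
    and `hessian` its matrix of second derivatives; both are hypothetical
    placeholders for the GB expressions given in the paper.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        # Newton update: theta_new = theta - H(theta)^{-1} g(theta)
        step = np.linalg.solve(hessian(theta), score(theta))
        theta = theta - step
        if np.max(np.abs(step)) < tol:
            break
    return theta

# Hypothetical usage with a starting value near the data scale:
# beta_hat, lam_hat = newton_raphson(score, hessian, [0.5, 1.0])
```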
Table 8. Bayesian estimations of the parameters and the entropy under two loss functions.

| BEs (Linex Loss) | h = −1, Case I | h = −1, Case II | h = 1, Case I | h = 1, Case II |
|---|---|---|---|---|
| β̂_L | 0.3406 | 0.4031 | 0.3330 | 0.3958 |
| λ̂_L | 1.2893 | 1.0217 | 1.2442 | 0.9898 |
| Ĥ_L | 1.4714 | 1.6681 | 1.4385 | 1.6276 |

| BEs (Entropy Loss) | q = −1, Case I | q = −1, Case II | q = 1, Case I | q = 1, Case II |
|---|---|---|---|---|
| β̂_E | 0.3369 | 0.4025 | 0.3273 | 0.3852 |
| λ̂_E | 1.2618 | 1.0060 | 1.2173 | 0.9765 |
| Ĥ_E | 1.4608 | 1.6340 | 1.4370 | 1.6249 |
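Behind the entries of Tables 3–6 and 8 are the standard Bayes estimators under these losses: θ̂_L = −(1/h) ln E[e^{−hθ} | data] for the Linex loss and θ̂_E = (E[θ^{−q} | data])^{−1/q} for the general entropy loss. A minimal sketch approximating both from MCMC output, where `draws` is a hypothetical array of posterior samples of β, λ or H:

```python
import numpy as np

def bayes_linex(draws, h):
    """Bayes estimator under the Linex loss:
    theta_hat = -(1/h) * log E[exp(-h * theta) | data],
    with the posterior expectation replaced by the sample mean over draws."""
    draws = np.asarray(draws, dtype=float)
    return -np.log(np.mean(np.exp(-h * draws))) / h

def bayes_general_entropy(draws, q):
    """Bayes estimator under the general entropy loss:
    theta_hat = (E[theta^{-q} | data])^{-1/q}."""
    draws = np.asarray(draws, dtype=float)
    return np.mean(draws ** (-q)) ** (-1.0 / q)

# Hypothetical usage with posterior draws `beta_draws`, for h, q in {-1, 1}:
# beta_L = bayes_linex(beta_draws, h=1)
# beta_E = bayes_general_entropy(beta_draws, q=1)
```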
Table 9. The 95% asymptotic confidence intervals (ACIs) and Bayesian credible intervals (BCIs) with the corresponding interval lengths (ILs) of the two parameters and the entropy. Each cell shows the interval followed by its IL.

| Parameter | ACI, Case I (IL) | ACI, Case II (IL) | BCI, Case I (IL) | BCI, Case II (IL) |
|---|---|---|---|---|
| β | (0.2406, 0.5409); 0.3003 | (0.1812, 0.4564); 0.2752 | (0.2760, 0.5625); 0.2865 | (0.2210, 0.4923); 0.2713 |
| λ | (0.6899, 1.3918); 0.7019 | (0.9884, 1.7863); 0.7979 | (0.7021, 1.3566); 0.6545 | (0.8776, 1.6743); 0.7967 |
| H | (1.2012, 1.9314); 0.7302 | (1.0299, 1.7863); 0.7164 | (1.2487, 1.9707); 0.7220 | (1.1266, 1.8671); 0.7405 |
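The BCIs in Table 9 are obtained from the MCMC output; an equal-tailed 95% interval and its length can be read off the empirical quantiles of the chain. A minimal sketch, with `chain` a hypothetical 1-D array of post burn-in posterior draws of a parameter or of the entropy:

```python
import numpy as np

def credible_interval(chain, level=0.95):
    """Equal-tailed Bayesian credible interval from MCMC samples.

    Returns the (lower, upper) empirical quantile bounds and the interval
    length (IL), the two quantities reported in Table 9.
    """
    chain = np.asarray(chain, dtype=float)
    alpha = 1.0 - level
    lower, upper = np.quantile(chain, [alpha / 2, 1.0 - alpha / 2])
    return (lower, upper), upper - lower

# Hypothetical usage: (lo, hi), il = credible_interval(entropy_chain)
```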