Article

An Estimation of the Entropy for a Rayleigh Distribution Based on Doubly-Generalized Type-II Hybrid Censored Samples

Department of Statistics, Pusan National University, Geumjeong-gu, Busan 609-735, Korea
* Author to whom correspondence should be addressed.
Entropy 2014, 16(7), 3655-3669; https://doi.org/10.3390/e16073655
Submission received: 13 March 2014 / Revised: 9 June 2014 / Accepted: 26 June 2014 / Published: 1 July 2014

Abstract

In this paper, based on a doubly generalized Type II hybrid censored sample, the maximum likelihood estimator (MLE), an approximate MLE and Bayes estimators for the entropy of the Rayleigh distribution are derived. We compare the entropy estimators' root mean squared error (RMSE), bias and Kullback–Leibler divergence values. The simulation procedure is repeated 10,000 times for the sample sizes n = 10, 20, 40 and 100 and various doubly generalized Type II hybrid censoring schemes. Finally, a real data set is analyzed for illustrative purposes.

1. Introduction

Let Y be a random variable with a continuous cumulative distribution function (cdf) G(y) and probability density function (pdf) g(y). The differential entropy H(Y) of the random variable is defined by Cover and Thomas [1] to be:
H(Y) = −∫ g(y) log g(y) dy.
The cdf and pdf of the random variable Y having the Rayleigh distribution are given by:
G(y; σ) = 1 − exp(−y²/(2σ²)), y > 0, σ > 0,   (1)
and:
g(y; σ) = (y/σ²) exp(−y²/(2σ²)), y > 0, σ > 0.   (2)
Let Z = Y/σ; then Z has the standard form of the Rayleigh distribution, with cdf and pdf:
F(z) = 1 − exp(−z²/2), f(z) = z(1 − F(z)).   (3)
For the pdf (2), the entropy simplifies to:
H(f) = 1 + log(σ/√2) + γ/2,   (4)
where γ is the Euler–Mascheroni constant.
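As a quick numerical sanity check of Equation (4) (our own illustration, not part of the paper), the closed form can be compared against a direct trapezoidal evaluation of the defining integral; function names below are ours:

```python
import numpy as np

def rayleigh_entropy_closed(sigma):
    # Closed form of Equation (4): H = 1 + log(sigma / sqrt(2)) + gamma / 2
    return 1.0 + np.log(sigma / np.sqrt(2.0)) + np.euler_gamma / 2.0

def rayleigh_entropy_numeric(sigma, n_grid=200_000):
    # Direct trapezoidal evaluation of -∫ g(y) log g(y) dy; the tail beyond
    # 10*sigma contributes negligibly, so the grid is truncated there.
    y = np.linspace(1e-9, 10.0 * sigma, n_grid)
    g = (y / sigma**2) * np.exp(-y**2 / (2.0 * sigma**2))
    v = g * np.log(g)
    return -float(np.sum(0.5 * (v[:-1] + v[1:]) * np.diff(y)))
```

For σ = 1 both routes give H ≈ 0.9420, and the agreement holds for any σ since σ enters only through the log term.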
The estimation of parameters from censored samples has been investigated by many authors, such as Harter and Moore [2], Dyer and Whisenand [3], Balakrishnan [4], Fernández [5] and Kim and Han [6]. Harter and Moore [2] derived an explicit form of the maximum likelihood estimator (MLE) of the scale parameter σ based on Type II censored data. Dyer and Whisenand [3] considered the best linear unbiased estimator of σ based on Type II censored data. Balakrishnan [4] considered an approximate MLE of σ based on doubly Type II censored data. Fernández [5] considered Bayes estimation of σ based on doubly Type II censored data. Recently, Kim and Han [6] considered Bayes estimation of σ based on multiply Type II censored data.
In this paper, we derive the estimators for the entropy function of the Rayleigh distribution with an unknown scale parameter under doubly-generalized Type II hybrid censoring. We also compare the proposed estimators in the sense of the root mean squared error (RMSE) for various censored samples.
The rest of this paper is organized as follows. In Section 2, we introduce a doubly generalized Type II hybrid censoring scheme. In Section 3, we describe the computation of the entropy function with MLE and approximate the MLE and Bayes estimator of the unknown scale parameter in the Rayleigh distribution under doubly generalized Type II hybrid censored samples. A real data set has been analyzed in Section 4. In Section 5, the description of different estimators that are compared by performing the Monte Carlo simulation is presented, and Section 6 concludes.

2. Doubly-Generalized Type II Hybrid Censoring Scheme

Consider a life testing experiment in which n units are tested. Epstein [7] introduced a hybrid censoring scheme in which the test is terminated at the random time T1* = min{Yr:n, T}, where r ∈ {1, 2, ..., n} and T ∈ (0, ∞) are pre-fixed and Yr:n denotes the r-th ordered failure time when the sample size is n. Next, Childs et al. [8] introduced a Type I hybrid censoring scheme and a Type II hybrid censoring scheme. The disadvantage of the Type I hybrid censoring scheme is that there is a possibility that very few failures may occur before time T. The Type II hybrid censoring scheme, however, can guarantee a pre-fixed number of failures; in this case, the termination point is T2* = max{Yr:n, T}, where r ∈ {1, 2, ..., n} and T ∈ (0, ∞) are pre-fixed. Though the Type II hybrid censoring scheme guarantees a pre-fixed number of failures, it might take a long time to observe r failures. In order to provide a guarantee in terms of the number of failures observed, as well as the time to complete the test, Chandrasekar et al. [9] introduced a generalized Type II hybrid censoring scheme.
Lee et al. [10] introduced a doubly generalized Type II hybrid censoring scheme, which can be described as follows. Fix 1 ≤ l ≤ r ≤ n and T1, T2, T3 ∈ (0, ∞), such that T1 < T2 < T3. If the l-th failure occurs before time T1, start the observation at T1; if the l-th failure occurs after time T1, start at Yl:n. If the r-th failure occurs before time T2, terminate the experiment at T2; if the r-th failure occurs between T2 and T3, terminate at Yr:n; and in the other cases, terminate the test at T3. Therefore, T1 represents the time at which the researcher starts the observation in the experiment, T2 represents the least time for which the researcher conducts the experiment, and T3 represents the longest time for which the researcher allows the experiment to continue. For known l, r, T1, T2 and T3, we can observe one of the following six types of observations.
  • Case I: y1:n < ··· < yl:n < ··· < yd1−1:n < T1 < yd1:n < ··· < yr:n < ··· < yd3:n < T2 < yd3+1:n, if yl:n < T1 and yr:n < T2.
  • Case II: y1:n < ··· < T1 < ··· < yl:n < ··· < yr:n < ··· < yd3:n < T2 < yd3+1:n, if yl:n > T1 and yr:n < T2.
  • Case III: y1:n < ··· < yl:n < ··· < yd1−1:n < T1 < yd1:n < ··· < T2 < ··· < yr:n, if yl:n < T1 and T2 < yr:n < T3.
  • Case IV: y1:n < ··· < T1 < ··· < yl:n < ··· < T2 < ··· < yr:n, if yl:n > T1 and T2 < yr:n < T3.
  • Case V: y1:n < ··· < yl:n < ··· < yd1−1:n < T1 < yd1:n < ··· < yd2:n < T3 < yd2+1:n < ··· < yr:n, if yl:n < T1 and yr:n > T3.
  • Case VI: y1:n < ··· < T1 < ··· < yl:n < ··· < yd2:n < T3 < yd2+1:n < ··· < yr:n, if yl:n > T1 and yr:n > T3.
Note that, in Case I, Case III and Case V, we do not observe yd1−1:n, but yd1−1:n < T1 < yd1:n means that the d1-th failure took place after T1 and no failure took place between yd1−1:n and T1. In Case I and Case II, we do not observe yd3+1:n, but yd3:n < T2 < yd3+1:n means that the d3-th failure took place before T2 and no failure took place between yd3:n and T2. In Case V and Case VI, we do not observe yd2+1:n, but yd2:n < T3 < yd2+1:n means that the d2-th failure took place before T3 and no failure took place between yd2:n and T3. A doubly generalized Type II hybrid censoring scheme is presented in Figure 1.
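The six cases amount to a simple decision rule on yl:n and yr:n. The sketch below (function name and interface are ours, not from the paper) classifies a sorted sample and returns the observation window:

```python
def classify_case(y, l, r, T1, T2, T3):
    """Classify a doubly generalized Type II hybrid censored experiment.

    y  : sorted failure times y[0] <= ... <= y[n-1]
    l,r: 1-based ranks with 1 <= l <= r <= n; T1 < T2 < T3 are the fixed times.
    Returns (case_label, observation_start, observation_end)."""
    assert 1 <= l <= r <= len(y) and T1 < T2 < T3
    yl, yr = y[l - 1], y[r - 1]
    start = T1 if yl < T1 else yl              # begin observing at max(T1, y_l:n)
    if yr < T2:                                # r-th failure too early: run until T2
        return ("I" if yl < T1 else "II", start, T2)
    if yr <= T3:                               # r-th failure on time: stop at y_r:n
        return ("III" if yl < T1 else "IV", start, yr)
    return ("V" if yl < T1 else "VI", start, T3)   # too slow: truncate at T3
```

For example, with y = [0.2, 0.5, 0.9, 1.4, 2.1], l = 1, r = 4 and (T1, T2, T3) = (0.3, 1.0, 2.0), the fourth failure falls between T2 and T3, so the experiment is Case III and stops at y4:5 = 1.4.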

3. Estimation of the Entropy

3.1. Maximum Likelihood Estimators

Assume that the failure times of the units follow the Rayleigh distribution with cdf (1) and pdf (2). The likelihood functions for the six cases are as follows.
  • Case I:
    L_I(σ) = K_I σ^(−(d3−d1+1)) [F(zT1)]^(d1−1) [1 − F(zT2)]^(n−d3) ∏_{i=d1}^{d3} f(zi:n),
  • Case II:
    L_II(σ) = K_II σ^(−(d3−l+1)) [F(zl:n)]^(l−1) [1 − F(zT2)]^(n−d3) ∏_{i=l}^{d3} f(zi:n),
  • Case III:
    L_III(σ) = K_III σ^(−(r−d1+1)) [F(zT1)]^(d1−1) [1 − F(zr:n)]^(n−r) ∏_{i=d1}^{r} f(zi:n),
  • Case IV:
    L_IV(σ) = K_IV σ^(−(r−l+1)) [F(zl:n)]^(l−1) [1 − F(zr:n)]^(n−r) ∏_{i=l}^{r} f(zi:n),
  • Case V:
    L_V(σ) = K_V σ^(−(d2−d1+1)) [F(zT1)]^(d1−1) [1 − F(zT3)]^(n−d2) ∏_{i=d1}^{d2} f(zi:n),
  • Case VI:
    L_VI(σ) = K_VI σ^(−(d2−l+1)) [F(zl:n)]^(l−1) [1 − F(zT3)]^(n−d2) ∏_{i=l}^{d2} f(zi:n),
where K_I = n!/[(d1 − 1)!(n − d3)!], K_II = n!/[(l − 1)!(n − d3)!], K_III = n!/[(d1 − 1)!(n − r)!], K_IV = n!/[(l − 1)!(n − r)!], K_V = n!/[(d1 − 1)!(n − d2)!], K_VI = n!/[(l − 1)!(n − d2)!], zT1 = T1/σ, zT2 = T2/σ and zT3 = T3/σ.
Cases I, II, III, IV, V and VI can be combined and represented as:
L(σ) = K σ^(−A) [F(zU1)]^(D1−1) [1 − F(zU2)]^(n−D2) ∏_{i=D1}^{D2} f(zi:n).   (5)
Here, U1 = T1, U2 = T2, D1 = d1 and D2 = d3 for Case I; U1 = yl:n, U2 = T2, D1 = l and D2 = d3 for Case II; U1 = T1, U2 = yr:n, D1 = d1 and D2 = r for Case III; U1 = yl:n, U2 = yr:n, D1 = l and D2 = r for Case IV; U1 = T1, U2 = T3, D1 = d1 and D2 = d2 for Case V; and U1 = yl:n, U2 = T3, D1 = l and D2 = d2 for Case VI. Furthermore, zU1 = U1/σ, zU2 = U2/σ, K = n!/[(D1 − 1)!(n − D2)!] and A = D2 − D1 + 1.
From (5), the log-likelihood function can be expressed as:
ln L = ln K − A ln σ + (D1 − 1) ln F(zU1) + (n − D2) ln[1 − F(zU2)] + Σ_{i=D1}^{D2} ln f(zi:n).   (6)
On differentiating the log-likelihood function (6) with respect to σ and equating to zero, we obtain the estimating equation:
∂ ln L/∂σ = (1/σ)[−2A − (D1 − 1)(f(zU1)/F(zU1)) zU1 + (n − D2) zU2² + Σ_{i=D1}^{D2} zi:n²] = 0.   (7)
Equation (7) can be solved numerically using the Newton–Raphson method, and an estimate of the entropy function (4) is:
Ĥ = 1 + log(σ̂/√2) + γ/2.
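Since Equation (7) is one-dimensional in σ, any bracketing root finder can be used in place of Newton–Raphson: the bracketed score is positive for small σ and negative for large σ. A minimal sketch with our own naming, using the combined notation (U1, U2, D1, D2) of Equation (5):

```python
import numpy as np

def score(sigma, y_obs, U1, U2, D1, D2, n):
    # sigma times the bracketed expression of Equation (7); same root, better scaled.
    A = D2 - D1 + 1
    z1, z2 = U1 / sigma, U2 / sigma
    z = np.asarray(y_obs) / sigma              # observed z_{i:n}, i = D1, ..., D2
    F1 = 1.0 - np.exp(-z1 * z1 / 2.0)          # F(z_U1)
    f1 = z1 * np.exp(-z1 * z1 / 2.0)           # f(z_U1)
    return (-2.0 * A - (D1 - 1) * f1 * z1 / F1
            + (n - D2) * z2 * z2 + float(np.sum(z * z)))

def mle_sigma(y_obs, U1, U2, D1, D2, n, lo=1e-3, hi=1e3):
    # Bisection: the score is positive for small sigma and negative for large sigma.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if score(mid, y_obs, U1, U2, D1, D2, n) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A convenient check: for a complete sample (D1 = 1, D2 = n), the left-censoring and right-censoring terms vanish and the root reduces to the closed form σ̂² = Σ y²/(2n).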

3.2. Approximate Maximum Likelihood Estimators

Because the log-likelihood equation cannot be solved explicitly, it is desirable to consider an approximation that provides an explicit estimator of σ. We expand the function f(zU1)zU1/F(zU1) in a Taylor series around the point ξ = F^(−1)(p) = √(−2 ln q), where p = D1/(n + 1) and q = 1 − p.
We can approximate the function by:
(f(zU1)/F(zU1)) zU1 ≈ α + β zU1,   (8)
where:
α = (2q ln q/p)(1 + 2 ln q/p), β = (2ξq/p)(1 + ln q/p).
By substituting Equation (8) into Equation (7), we obtain:
∂ ln L*/∂σ = (1/σ)[−2A − (D1 − 1)(α + β zU1) + (n − D2) zU2² + Σ_{i=D1}^{D2} zi:n²] = 0.   (9)
From Equation (9), we obtain σ̂A as the solution of the quadratic equation:
K σ² + B σ − C = 0,
where K = 2A + (D1 − 1)α > 0, B = (D1 − 1)β U1 and C = (n − D2)U2² + Σ_{i=D1}^{D2} yi:n² > 0. Therefore,
σ̂A = [−B + √(B² + 4KC)]/(2K)
is its only positive root.
With σ replaced by σ̂A in Equation (4), the entropy estimator of the Rayleigh distribution based on doubly generalized Type II hybrid censored samples is obtained as:
ĤA = 1 + log(σ̂A/√2) + γ/2.
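The approximate MLE needs no iteration. A sketch (names ours) of the closed-form computation:

```python
import numpy as np

def approx_mle_sigma(y_obs, U1, U2, D1, D2, n):
    # Closed-form approximate MLE: positive root of K*sigma^2 + B*sigma - C = 0.
    A = D2 - D1 + 1
    p = D1 / (n + 1.0)
    q = 1.0 - p
    xi = np.sqrt(-2.0 * np.log(q))                         # expansion point F^{-1}(p)
    alpha = (2.0 * q * np.log(q) / p) * (1.0 + 2.0 * np.log(q) / p)
    beta = (2.0 * xi * q / p) * (1.0 + np.log(q) / p)
    K = 2.0 * A + (D1 - 1) * alpha
    B = (D1 - 1) * beta * U1
    C = (n - D2) * U2**2 + float(np.sum(np.asarray(y_obs) ** 2))
    return (-B + np.sqrt(B * B + 4.0 * K * C)) / (2.0 * K)  # only positive root
```

When D1 = 1 the α and β terms drop out (their coefficient is zero), so σ̂A = √(C/2A), which coincides with the exact MLE in that case.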

3.3. Bayes Estimation

In Bayesian estimation, unknown parameters are assumed to behave as random variables with distributions commonly known as prior probability distributions. In practice, a squared error loss function is usually taken into consideration to produce Bayes estimates. However, under this loss function, overestimation and underestimation are equally penalized, which is not a good criterion from a practical point of view. For example, in reliability estimation, overestimation is considered to be more serious than underestimation. Due to such restrictions, various asymmetric loss functions have been introduced in the literature, such as the general entropy loss function. These loss functions have proven useful for Bayesian analysis in reliability estimation and life testing problems (Rastogi and Tripathi [11]).
A very well-known symmetric loss function is the squared error loss function, defined as L1(d(σ), d̂(σ)) = (d̂(σ) − d(σ))², with d̂(σ) being an estimate of d(σ). Here, d(σ) denotes some parametric function of σ. In this situation, the Bayes estimate, say d̂S(σ), is given by the posterior mean of d(σ).
One of the most commonly used asymmetric loss functions is the general entropy loss, given by:
L2(d(σ), d̂(σ)) ∝ (d̂(σ)/d(σ))^q − q log(d̂(σ)/d(σ)) − 1, q ≠ 0.
In this case, the Bayes estimate of d(σ) is obtained as:
d̂E(σ) = (E_σ(σ^(−q) | x))^(−1/q),
provided the above expectation exists.

3.3.1. Non-Informative Prior

Since σ based on the doubly generalized Type II hybrid censored data is a random variable, we consider the non-informative prior distribution for σ:
π1(σ) ∝ (1/σ)^c.   (11)
By combining (5) with (11), the joint density function of σ and Y is given by:
π1(σ | Y) ∝ Σ_{j=0}^{D1−1} C(D1−1, j) (−1)^j σ^(−(2A+c)) exp[−(V1 + jU1²)/(2σ²)],
where C(·, ·) denotes the binomial coefficient and V1 = (n − D2)U2² + Σ_{i=D1}^{D2} yi:n².
Further, the posterior density function of σ is given by:
π1(σ | Y) = V1^(A+(c−1)/2) σ^(−(2A+c)) exp(−V1/(2σ²)) [1 − exp(−U1²/(2σ²))]^(D1−1) / {2^(A+(c−3)/2) Γ(A+(c−1)/2) Σ_{j=0}^{D1−1} C(D1−1, j) (−1)^j [1 + jU1²/V1]^(−A−(c−1)/2)}.
Under a squared error loss function, the Bayes estimator of σ is the mean of the posterior density, given by:
σ̃S1 = E1[σ | Y] = {Γ(A+(c−2)/2) Σ_{j=0}^{D1−1} C(D1−1, j) (−1)^j [1 + jU1²/V1]^(−A−(c−2)/2)} / {Γ(A+(c−1)/2) Σ_{j=0}^{D1−1} C(D1−1, j) (−1)^j [1 + jU1²/V1]^(−A−(c−1)/2)} × (V1/2)^(1/2).   (12)
Similarly, the Bayes estimator of σ for the general entropy loss function is:
σ̃E1 = {E1[σ^(−q) | x]}^(−1/q),   (13)
where:
E1[σ^(−q) | x] = {Γ(A+(c+q−1)/2) Σ_{j=0}^{D1−1} C(D1−1, j) (−1)^j [1 + jU1²/V1]^(−A−(c+q−1)/2)} / {Γ(A+(c−1)/2) Σ_{j=0}^{D1−1} C(D1−1, j) (−1)^j [1 + jU1²/V1]^(−A−(c−1)/2)} × (2/V1)^(q/2).
With σ replaced by σ̃S1 and σ̃E1 in Equation (4), the entropy estimators of the Rayleigh distribution based on doubly generalized Type II hybrid censored samples are obtained as:
H̃S1 = 1 + log(σ̃S1/√2) + γ/2, H̃E1 = 1 + log(σ̃E1/√2) + γ/2.

3.3.2. Natural Conjugate Prior

Since σ based on the doubly generalized Type II hybrid censored data is a random variable, we consider the natural conjugate family of prior distributions for σ that was used by Fernández [5]:
π2(σ) ∝ (1/σ)^(2α+1) exp(−β/(2σ²)), σ > 0,   (14)
where the shape parameter α > 0 and the scale parameter β > 0. This is known as the square root inverted gamma density. For β = 0, π2(σ) reduces to a general class of improper priors, and for α = β = 0, it reduces to the Jeffreys prior [12].
By combining (5) with (14), the joint density function of σ and Y is given by:
π2(σ | Y) ∝ Σ_{j=0}^{D1−1} C(D1−1, j) (−1)^j σ^(−(2A+2α+1)) exp[−(V2 + jU1²)/(2σ²)],
where V2 = β + (n − D2)U2² + Σ_{i=D1}^{D2} yi:n².
Further, the posterior density function of σ is given by:
π2(σ | Y) = V2^(A+α) σ^(−2(A+α)−1) exp(−V2/(2σ²)) [1 − exp(−U1²/(2σ²))]^(D1−1) / {2^(A+α−1) Γ(A+α) Σ_{j=0}^{D1−1} C(D1−1, j) (−1)^j [1 + jU1²/V2]^(−A−α)}.
Under the squared error loss function, the Bayes estimator of σ is the mean of the posterior density, given by:
σ̃S2 = E2[σ | Y] = {Γ(A+α−1/2) Σ_{j=0}^{D1−1} C(D1−1, j) (−1)^j [1 + jU1²/V2]^(−A−α+1/2)} / {Γ(A+α) Σ_{j=0}^{D1−1} C(D1−1, j) (−1)^j [1 + jU1²/V2]^(−A−α)} × (V2/2)^(1/2).   (15)
Similarly, the Bayes estimator of σ for the general entropy loss function is:
σ̃E2 = {E2[σ^(−q) | x]}^(−1/q),   (16)
where:
E2[σ^(−q) | x] = {Γ(A+α+q/2) Σ_{j=0}^{D1−1} C(D1−1, j) (−1)^j [1 + jU1²/V2]^(−A−α−q/2)} / {Γ(A+α) Σ_{j=0}^{D1−1} C(D1−1, j) (−1)^j [1 + jU1²/V2]^(−A−α)} × (2/V2)^(q/2).
With σ replaced by σ̃S2 and σ̃E2 in Equation (4), the entropy estimators of the Rayleigh distribution based on doubly generalized Type II hybrid censored samples are obtained as:
H̃S2 = 1 + log(σ̃S2/√2) + γ/2, H̃E2 = 1 + log(σ̃E2/√2) + γ/2.
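Equation (16) is computed in exactly the same way as its non-informative counterpart; only V2 and the Gamma arguments change. A sketch (names ours):

```python
import math

def bayes_sigma_gen_entropy(y_obs, U1, U2, D1, D2, n, alpha=2.0, beta=2.0, q=1.0):
    """General entropy loss Bayes estimate under the conjugate prior, Equation (16)."""
    A = D2 - D1 + 1
    V2 = beta + (n - D2) * U2**2 + sum(y * y for y in y_obs)
    def S(e):  # alternating binomial sum with exponent -(A + alpha + e)
        return sum(math.comb(D1 - 1, j) * (-1.0) ** j
                   * (1.0 + j * U1**2 / V2) ** (-(A + alpha + e))
                   for j in range(D1))
    Eq = (math.exp(math.lgamma(A + alpha + q / 2.0) - math.lgamma(A + alpha))
          * S(q / 2.0) / S(0.0) * (2.0 / V2) ** (q / 2.0))   # E2[sigma^(-q) | x]
    return Eq ** (-1.0 / q)
```

Setting β = 0 and replacing 2α + 1 by c recovers the non-informative-prior computation, which is a useful consistency check between the two subsections.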

3.3.3. Bayes Estimation Based on the Balanced Loss Function

From a Bayesian perspective, the choice of loss function is an essential part of the estimation and prediction problems. Recently, a more general loss function, called the balanced loss function (Jozani et al. [13]), has been considered, of the form:
Lρ,w,δ0(σ, δ) = w ρ(δ0, δ) + (1 − w) ρ(σ, δ),   (17)
where ρ is an arbitrary loss function and δ0 is a chosen a priori "target" estimator of σ, obtained, for instance, using the criterion of MLE; the weight w takes values in [0, 1). Bayesian estimators under Lρ,w,δ0 can be developed by relating them to the Bayesian solutions of the unbalanced case, i.e., Lρ,w,δ0 with w = 0. Lρ,w,δ0 can be specialized to various choices of loss function, such as the squared error and entropy losses (Ahmed [14]).
By choosing ρ(σ, δ) = (δ − σ)², Equation (17) reduces to the balanced squared error loss function:
Lw,δ0(σ, δ) = w(δ − δ0)² + (1 − w)(δ − σ)²,
and the corresponding Bayes estimate of the unknown parameter σ is given by:
δw,δ0(y) = w δ0 + (1 − w) E(σ | Y).
By choosing ρ(σ, δ) = (δ/σ)^q − q log(δ/σ) − 1, q ≠ 0, Equation (17) reduces to the balanced entropy loss function:
Lw,δ0(σ, δ) = w{(δ/δ0)^q − q log(δ/δ0) − 1} + (1 − w){(δ/σ)^q − q log(δ/σ) − 1},
and the corresponding Bayes estimate of the unknown parameter σ is given by:
δw,δ0(y) = {w (δ0(x))^(−q) + (1 − w) E(σ^(−q))}^(−1/q).
It is clear that the balanced loss functions are more general, including the maximum likelihood estimate and both symmetric and asymmetric Bayes estimates as special cases.
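Both balanced rules are simple combinations of a target estimate δ0 (e.g., the MLE) and the corresponding posterior quantity. A sketch with our own naming:

```python
def balanced_sq_error(w, delta0, post_mean):
    # Balanced squared error rule: delta = w*delta0 + (1 - w)*E(sigma | Y)
    return w * delta0 + (1.0 - w) * post_mean

def balanced_entropy(w, delta0, post_neg_moment, q=1.0):
    # Balanced entropy rule: delta = [w*delta0^(-q) + (1 - w)*E(sigma^(-q) | Y)]^(-1/q)
    return (w * delta0 ** (-q) + (1.0 - w) * post_neg_moment) ** (-1.0 / q)
```

The special cases are immediate: w = 0 recovers the usual Bayes estimate, while w = 1 returns the target δ0 itself.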
Based on the balanced squared error loss function, with the posterior means given by Equations (12) and (15), the approximate Bayes estimates of σ are given, respectively, by:
σ̃BS1 = w σ̂ + (1 − w) σ̃S1, σ̃BS2 = w σ̂ + (1 − w) σ̃S2.
Furthermore, based on the balanced entropy loss function, with Equations (13) and (16), the approximate Bayes estimates of σ are given, respectively, by:
σ̃BE1 = w σ̂ + (1 − w) σ̃E1, σ̃BE2 = w σ̂ + (1 − w) σ̃E2.
With σ replaced by σ̃BS1, σ̃BS2, σ̃BE1 and σ̃BE2 in Equation (4), the entropy estimators of the Rayleigh distribution based on doubly generalized Type II hybrid censored samples are obtained as:
H̃BS1 = 1 + log(σ̃BS1/√2) + γ/2, H̃BS2 = 1 + log(σ̃BS2/√2) + γ/2,
H̃BE1 = 1 + log(σ̃BE1/√2) + γ/2, H̃BE2 = 1 + log(σ̃BE2/√2) + γ/2.

4. Illustrative Example

Leiblein and Zelen [15] performed life tests and determined the number of revolutions to failure for 23 ball bearings. All 23 failure times were observed. The observed failure times are as follows: 0.1788, 0.2852, 0.3300, 0.4152, 0.4212, 0.4560, 0.4848, 0.5186, 0.5196, 0.5412, 0.5556, 0.6780, 0.6864, 0.6864, 0.6888, 0.8412, 0.9312, 0.9864, 1.0512, 1.0584, 1.2792, 1.2804, 1.7340.
In this example, we assume that the underlying distribution of these data is the Rayleigh distribution and impose doubly generalized Type II hybrid censoring schemes on them. We take Case I (T1 = 0.32, T2 = 0.7, T3 = 1.2, l = 1 and r = 17), Case II (T1 = 0.32, T2 = 0.7, T3 = 1.2, l = 4 and r = 20), Case III (T1 = 0.32, T2 = 0.7, T3 = 1.2, l = 7 and r = 23), Case IV (T1 = 0.64, T2 = 0.7, T3 = 1.5, l = 1 and r = 17), Case V (T1 = 0.64, T2 = 0.7, T3 = 1.5, l = 3 and r = 20) and Case VI (T1 = 0.64, T2 = 0.7, T3 = 1.5, l = 7 and r = 23). For the Bayesian inference, the prior parameters are chosen as (α, β) = (2.0, 2.0) and c = 3. The Bayes estimators based on the natural conjugate prior and the non-informative prior are obtained, as are the Bayes estimators based on the balanced loss function with w = 0.3, 0.5 and 0.7. Table 1 presents the entropy estimates under the doubly generalized Type II hybrid censoring schemes.
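As a check on the "Complete" column of Table 1 (our own illustration): for the full sample, the MLE is σ̂² = Σy²/(2n), and substituting it into Equation (4) reproduces the reported Ĥ to about three decimal places:

```python
import math

failures = [0.1788, 0.2852, 0.3300, 0.4152, 0.4212, 0.4560, 0.4848, 0.5186,
            0.5196, 0.5412, 0.5556, 0.6780, 0.6864, 0.6864, 0.6888, 0.8412,
            0.9312, 0.9864, 1.0512, 1.0584, 1.2792, 1.2804, 1.7340]

n = len(failures)
sigma_hat = math.sqrt(sum(y * y for y in failures) / (2 * n))   # complete-data MLE
euler_gamma = 0.5772156649015329
H_hat = 1.0 + math.log(sigma_hat / math.sqrt(2.0)) + euler_gamma / 2.0   # ≈ 0.3846
```

This gives σ̂ ≈ 0.5727 and Ĥ ≈ 0.3846, in agreement with the first column of Table 1.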

5. Results and Discussion

To compare the performance of the proposed estimators, we simulated the RMSE, bias and Kullback–Leibler divergence of all proposed estimators by employing the Monte Carlo simulation method. We used three different doubly generalized Type II hybrid censoring schemes, namely, Scheme I: T1 = 0.3, T2 = 1.7 and T3 = 2.0; Scheme II: T1 = 0.6, T2 = 1.7 and T3 = 2.0; and Scheme III: T1 = 0.3, T2 = 1.7 and T3 = 2.3. The doubly generalized Type II hybrid censored samples are generated from the Rayleigh distribution with σ = 1. Using these samples, the RMSE, bias and Kullback–Leibler divergence of the entropy estimators are simulated by the Monte Carlo method based on 10,000 runs for the sample sizes n = 10, 20, 40 and 100. The prior parameters are chosen as (α, β) = (2.0, 2.0) and c = 3. The Bayes estimators based on the natural conjugate prior and the non-informative prior are obtained, as are the Bayes estimators based on the balanced loss function with w = 0.3, 0.5 and 0.7. The simulation results are presented in Tables S1–S10, respectively.
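The structure of such a simulation can be sketched in the simplest (complete-sample) setting; the censoring schemes only change which order statistics enter the estimators. Function names and the reduced setting are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 1.0
H_true = 1.0 + np.log(sigma_true / np.sqrt(2.0)) + np.euler_gamma / 2.0

def simulate_H(n, n_rep=10_000):
    # Rayleigh(sigma) variates via the inverse cdf: y = sigma * sqrt(-2 log u), u ~ U(0, 1]
    u = 1.0 - rng.random((n_rep, n))
    y = sigma_true * np.sqrt(-2.0 * np.log(u))
    sigma_hat = np.sqrt(np.sum(y * y, axis=1) / (2.0 * n))      # complete-data MLE
    H_hat = 1.0 + np.log(sigma_hat / np.sqrt(2.0)) + np.euler_gamma / 2.0
    bias = float(np.mean(H_hat) - H_true)
    rmse = float(np.sqrt(np.mean((H_hat - H_true) ** 2)))
    return bias, rmse
```

Even in this reduced setting the qualitative findings below are visible: the RMSE shrinks roughly like 1/√n as the sample size grows.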
From Tables S1–S10, the following general observations can be made. The RMSEs and Kullback–Leibler divergences decrease as the sample size n increases. For a fixed sample size, the RMSEs and Kullback–Leibler divergences generally decrease as the number of censored observations decreases. For fixed sample and censored sample sizes, the RMSEs and Kullback–Leibler divergences generally decrease as the times T2 and T3 increase. It is also observed that the left censoring scheme yields smaller RMSEs and Kullback–Leibler divergences than the corresponding estimators for the right and doubly generalized censoring schemes. For Scheme I and the left censoring case, these results are presented in Figure 2.
In Table S1, the average RMSEs and biases of the entropy estimators based on the MLE and the approximate MLE are presented for various choices of n, l, r and censoring schemes. In general, the MLE and the approximate MLE behave quite similarly in terms of RMSE. In Tables S2 and S3, the average RMSEs and biases of the entropy estimators based on the Bayes estimators under the non-informative prior are presented. In general, the entropy estimator based on the Bayes estimator under the squared error loss function is superior, in terms of bias and RMSE, to the one under the general entropy loss function, and w = 0.7 seems to be a reasonable choice under the balanced squared error and balanced entropy loss functions. In Tables S4 and S5, the corresponding results for the Bayes estimators based on the natural conjugate prior are presented. Again, the squared error loss function is superior in terms of bias and RMSE, and w = 0.3 seems to be a reasonable choice under the balanced squared error and balanced entropy loss functions.
In Table S6, the average Kullback–Leibler divergences of the entropy estimators based on the MLE and the approximate MLE are presented for various choices of n, l, r and censoring schemes. In general, the MLE is superior to the approximate MLE in terms of Kullback–Leibler divergence. In Tables S7 and S8, the average Kullback–Leibler divergences of the entropy estimators based on the Bayes estimators under the non-informative prior are presented; the squared error loss function is again superior to the general entropy loss function, and w = 0.7 seems to be a reasonable choice under the balanced squared error and balanced entropy loss functions. Tables S9 and S10 give the corresponding results under the natural conjugate prior; the squared error loss function is superior in terms of Kullback–Leibler divergence, and w = 0.3 seems to be a reasonable choice. Overall, the Bayes estimator using the squared error loss function based on the natural conjugate prior provides better estimates compared with the other estimates.

6. Conclusions

In many life testing experiments, the experimenter may not observe the lifetimes of all inspected units in the life test. Censored data arise in such situations, wherein the experimenter does not obtain complete information for all of the units under study. Different types of censoring arise based on how the data are collected from the life testing experiment. In order to provide a guarantee in terms of the number of failures observed, as well as the time to complete the test, Chandrasekar et al. [9] introduced a generalized Type II hybrid censoring scheme, and Lee et al. [10] introduced a doubly generalized Type II hybrid censoring scheme, which can handle both right-censoring and left-censoring.
In this paper, we discussed entropy estimators for the Rayleigh distribution based on doubly generalized Type II hybrid censored samples. We derived entropy estimators by using the MLE, the approximate MLE and Bayes estimators of σ in the Rayleigh distribution based on doubly generalized Type II hybrid censored samples and compared them in terms of RMSE, bias and Kullback–Leibler divergence. Bayesian estimates using the non-informative and natural conjugate priors are obtained under three types of loss function, and it is observed that the Bayes estimate with respect to the natural conjugate prior under the squared error loss function works quite well in this case. Although we focused on the entropy estimate of the Rayleigh distribution in this article, the proposed estimation can easily be extended to other distributions. In particular, the Bayes estimation can be applied to other distributions, whereas the approximate MLE cannot simply be applied to distributions with a shape parameter. Estimation of the entropy of other distributions is of potential interest for future research.

Appendix

It is not possible to obtain an analytical upper bound for the approximation error of the MLE and the approximate MLE. Thus, we use Monte Carlo simulations to compute such a bound for various choices of n, l, r and censoring schemes. The results are presented in the following Table A1.
Table A1. Result of Monte Carlo simulations to compute an upper bound (entries are MSE values).
n    l    r     MSE (Scheme I)   MSE (Scheme II)   MSE (Scheme III)
10   1    8     0.0017           0.0018            0.0016
10   2    9     0.0016           0.0018            0.0015
10   3    10    0.0017           0.0017            0.0015
10   1    6     0.0016           0.0018            0.0016
10   3    8     0.0017           0.0018            0.0016
10   5    10    0.0017           0.0018            0.0015
20   1    18    0.0000           0.0000            0.0000
20   2    19    0.0000           0.0000            0.0000
20   3    20    0.0000           0.0000            0.0000
20   1    16    0.0000           0.0000            0.0000
20   3    18    0.0000           0.0000            0.0000
20   5    20    0.0000           0.0000            0.0000
Supplementary Materials

Supplementary Tables can be found at https://www.mdpi.com/1099-4300/16/7/3655/s1.

Acknowledgments

The authors would like to express deep thanks to the Editor-in-Chief and the referees for their helpful comments and suggestions, which led to a considerable improvement in the presentation of this paper. This work was supported for two years by a Pusan National University Research Grant.

Author Contributions

The authors contributed equally to the presented mathematical framework and the writing of the paper.

Conflict of Interest

The authors declare no conflict of interest.

References

  1. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: Hoboken, NJ, USA, 2005. [Google Scholar]
  2. Harter, H.L.; Moore, A.H. Point and interval estimators based on m order statistics for the scale parameter of a Weibull population with known shape parameter. Technometrics 1965, 7, 405–422. [Google Scholar]
  3. Dyer, D.D.; Whisenand, C.W. Best linear estimator of the parameter of the Rayleigh distribution-Part I: small sample theory for censored order statistics. IEEE Trans. Reliab 1973, 22, 27–34. [Google Scholar]
  4. Balakrishnan, N. Approximate MLE of the scale parameter of the Rayleigh distribution with censoring. IEEE Trans. Reliab 1989, 38, 355–357. [Google Scholar]
  5. Fernández, A.J. Bayesian inference from type II doubly censored Rayleigh data. Stat. Probab. Lett. 2000, 48, 393–399. [Google Scholar]
  6. Kim, C.; Han, K. Estimation of the scale parameter of the Rayleigh distribution with multiply type II censored sample. J. Stat. Comput. Simul 2009, 79, 965–976. [Google Scholar]
  7. Epstein, B. Truncated life tests in the exponential case. Ann. Math. Stat 1954, 25, 555–564. [Google Scholar]
  8. Childs, A.; Chandrasekar, B.; Balakrishnan, N.; Kundu, D. Exact likelihood inference based on type I and type II hybrid censored samples from the exponential distribution. Ann. Inst. Stat. Math 2003, 55, 319–330. [Google Scholar]
  9. Chandrasekar, B.; Childs, A.; Balakrishnan, N. Exact likelihood inference for the exponential distribution under generalized type I and type II hybrid censoring. Nav. Res. Logist. 2004, 51, 994–1004. [Google Scholar]
  10. Lee, K.; Park, C.; Cho, Y. Inference based on doubly generalized type II hybrid censored sample from a half logistic distribution. Commun. Korean Stat. Soc. 2011, 18, 645–655. [Google Scholar]
  11. Rastogi, M.K.; Tripathi, Y.M. Estimating the parameters of a Burr distribution under progressive type II censoring. Stat. Methodol. 2012, 9, 381–391. [Google Scholar]
  12. Jeffreys, H. Theory of Probability; Clarendon Press: Oxford, UK, 1961. [Google Scholar]
  13. Jozani, M.J.; Marchand, E.; Parsian, A. Bayesian and robust Bayesian estimation under a general class of balanced loss function. Stat. Pap 2012, 53, 51–60. [Google Scholar]
  14. Ahmed, E.A. Bayesian estimation based on progressive Type II censoring from two-parameter bathtub-shaped lifetime model: a Markov chain Monte Carlo approach. J. Appl. Stat. 2014, 41, 752–768. [Google Scholar]
  15. Leiblein, J.; Zelen, M. Statistical investigation of the fatigue life of deep-groove ball bearings. J. Res. Nat. Bur. Stand 1952. [Google Scholar]
Figure 1. The doubly generalized Type II hybrid censoring schemes.
Figure 2. The RMSEs of the estimators for Scheme I and left censoring.
Table 1. Estimation of entropy for example.
Estimator          Complete   Case I    Case II   Case III   Case IV   Case V    Case VI
Ĥ                  0.3846     0.3684    0.3479    0.3804     0.3623    0.3404    0.3762
ĤA                 0.3846     0.3661    0.3476    0.3804     0.3006    0.2861    0.3312
H̃S1                0.3792     0.3611    0.3416    0.3740     0.3541    0.3335    0.3701
H̃BS1 (w = 0.3)     0.3808     0.3633    0.3435    0.3759     0.3565    0.3356    0.3719
H̃BS1 (w = 0.5)     0.3819     0.3648    0.3448    0.3772     0.3582    0.3370    0.3732
H̃BS1 (w = 0.7)     0.3830     0.3662    0.3460    0.3785     0.3598    0.3383    0.3744
H̃E1                0.3634     0.3398    0.3234    0.3558     0.3323    0.3150    0.3533
H̃BE1 (w = 0.3)     0.3697     0.3482    0.3307    0.3630     0.3411    0.3225    0.3600
H̃BE1 (w = 0.5)     0.3739     0.3539    0.3555    0.3679     0.3471    0.3275    0.3646
H̃BE1 (w = 0.7)     0.3782     0.3597    0.3404    0.3729     0.3531    0.3326    0.3692
H̃S2                0.4204     0.4179    0.3935    0.4216     0.4131    0.3875    0.4148
H̃BS2 (w = 0.3)     0.4098     0.4033    0.3801    0.4094     0.3981    0.3736    0.4034
H̃BS2 (w = 0.5)     0.4027     0.3935    0.3710    0.4012     0.3880    0.3642    0.3957
H̃BS2 (w = 0.7)     0.3955     0.3835    0.3618    0.3929     0.3778    0.3548    0.3880
H̃E2                0.4052     0.3978    0.3762    0.4042     0.3927    0.3699    0.3987
H̃BE2 (w = 0.3)     0.3989     0.3888    0.3675    0.3969     0.3834    0.3609    0.3919
H̃BE2 (w = 0.5)     0.3948     0.3829    0.3618    0.3922     0.3772    0.3549    0.3873
H̃BE2 (w = 0.7)     0.3907     0.3771    0.3562    0.3874     0.3712    0.3491    0.3829

Cho, Y.; Sun, H.; Lee, K. An Estimation of the Entropy for a Rayleigh Distribution Based on Doubly-Generalized Type-II Hybrid Censored Samples. Entropy 2014, 16, 3655-3669. https://doi.org/10.3390/e16073655
