
Entropy 2014, 16(1), 443-454; doi:10.3390/e16010443

Article
Entropy Estimation of Generalized Half-Logistic Distribution (GHLD) Based on Type-II Censored Samples
Jung-In Seo and Suk-Bok Kang *
Department of Statistics, Yeungnam University, Dae-dong, Gyeongsan 712-749, Korea; E-Mail: leehoo1928@ynu.ac.kr
* Author to whom correspondence should be addressed; E-Mail: sbkang@yu.ac.kr; Tel.: +82-53-810-2323.
Received: 21 October 2013; in revised form: 5 December 2013 / Accepted: 16 December 2013 / Published: 2 January 2014

Abstract

This paper derives the entropy of the generalized half-logistic distribution (GHLD) based on Type-II censored samples, obtains entropy estimators by using Bayes estimators of the unknown shape parameter, and compares these estimators in terms of mean squared error (MSE) and bias through Monte Carlo simulations.
Keywords:
entropy; expected Bayesian estimation; generalized half-logistic distribution; Type-II censored sample

1. Introduction

Shannon [1], who proposed information theory to quantify information loss, introduced statistical entropy. Several researchers have discussed inferences about entropy. Baratpour et al. [2] developed the entropy of upper record values and provided several upper and lower bounds for this entropy by using the hazard rate function. Abo-Eleneen [3] suggested an efficient method for the simple computation of entropy in progressively Type-II censored samples. Kang et al. [4] derived estimators for the entropy function of a double exponential distribution based on multiply Type-II censored samples by using maximum likelihood estimators (MLEs) and approximate MLEs (AMLEs).

Arora et al. [5] obtained the MLE of the shape parameter in a generalized half-logistic distribution (GHLD) based on Type-I progressive censoring with varying failure rates. Kim et al. [6] proposed Bayes estimators of the shape parameter and the reliability function for the GHLD based on progressively Type-II censored data under various loss functions. Seo et al. [7] developed an entropy estimation method for upper record values from the GHLD. Azimi [8] derived the Bayes estimators of the shape parameter and the reliability function for the GHLD based on Type-II doubly censored samples.

The cumulative distribution function (cdf) and probability density function (pdf) of the random variable, X, with the GHLD are, respectively:

$$F(x) = 1-\left(\frac{2e^{-x}}{1+e^{-x}}\right)^{\lambda} \tag{1}$$
and:
$$f(x) = \lambda\left(\frac{2e^{-x}}{1+e^{-x}}\right)^{\lambda}\frac{1}{1+e^{-x}}, \quad x>0,\ \lambda>0 \tag{2}$$
where λ is the shape parameter. As a special case, when λ = 1, this reduces to the half-logistic distribution.
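Because the cdf above inverts in closed form, GHLD variates can be drawn by inverse-transform sampling. A minimal sketch in Python (the function names are illustrative, not from the paper):

```python
import math
import random

def ghld_cdf(x, lam):
    # F(x) = 1 - (2e^{-x} / (1 + e^{-x}))^lambda, x > 0
    return 1.0 - (2.0 * math.exp(-x) / (1.0 + math.exp(-x))) ** lam

def ghld_pdf(x, lam):
    # f(x) = lambda * (2e^{-x} / (1 + e^{-x}))^lambda / (1 + e^{-x})
    t = 2.0 * math.exp(-x) / (1.0 + math.exp(-x))
    return lam * t ** lam / (1.0 + math.exp(-x))

def ghld_ppf(u, lam):
    # Solve F(x) = u: with t = (1-u)^(1/lambda) = 2e^{-x}/(1+e^{-x}),
    # x = log(2/t - 1)
    t = (1.0 - u) ** (1.0 / lam)
    return math.log(2.0 / t - 1.0)

def ghld_sample(lam, rng=random):
    # Inverse-transform sampling from the GHLD
    return ghld_ppf(rng.random(), lam)
```

For λ = 1 this reduces to the half-logistic distribution, whose cdf simplifies to F(x) = (1 − e^{−x})/(1 + e^{−x}) = tanh(x/2), which gives a quick sanity check on the implementation.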

The rest of this paper is organized as follows: Section 2 develops an entropy estimation method with Bayes estimators of the shape parameter in the GHLD based on Type-II censored samples. Section 3 assesses the validity of the proposed method by using Monte Carlo simulations, and Section 4 concludes.

2. Entropy Estimation

Let X be a random variable with cdf F(x) and pdf f(x). Then, the differential entropy of X is defined by Cover and Thomas [9] as:

$$H(f) = -\int_{-\infty}^{\infty} f(x)\log f(x)\,dx \tag{3}$$

Let X1:n, X2:n, ··· , Xn:n be the order statistics of the random sample X1, X2, ··· , Xn from the continuous pdf f(x). In the conventional Type-II censoring scheme, r is assumed to be known in advance, and the experiment is terminated as soon as the r-th item fails (r ≤ n). Park [10] provided the entropy of a continuous probability distribution based on Type-II censored samples in terms of the hazard function, h(x) = f(x)/(1 − F(x)), as:

$$H(f) = \sum_{i=1}^{r}\left[1-\log(n-i+1)-\int_{-\infty}^{\infty} f_{i:n}(x)\log h(x)\,dx\right] \tag{4}$$
where f_{i:n}(x) is the pdf of the i-th order statistic, X_{i:n}.

He also derived a nonparametric estimator of the entropy in Equation (4) as:

$$H_m = \sum_{i=1}^{r}\left[\log\left\{\frac{n}{2m}\left(x_{i+m:n}-x_{i-m:n}\right)\right\}-\log(n-i+1)\right]-n\left(1-\frac{r}{n}\right)\log\left(1-\frac{r}{n}\right) \tag{5}$$
where m, a positive integer smaller than r/2, is referred to as the window size; X_{i−m:n} = X_{1:n} for i − m < 1 and X_{i+m:n} = X_{r:n} for i + m > n.
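The nonparametric estimator is straightforward to compute from the r observed order statistics. A sketch (the function name is illustrative; since only X_{1:n}, ..., X_{r:n} are observed, the upper window index is clamped at the largest observed order statistic):

```python
import math

def entropy_nonparametric(x, n, m):
    """Nonparametric entropy estimator H_m for a Type-II censored sample.
    x: the r smallest order statistics (sorted); n: full sample size; m: window size."""
    r = len(x)
    assert 0 < m < r / 2, "window size must satisfy 0 < m < r/2"
    total = 0.0
    for i in range(1, r + 1):
        lo = x[max(i - m, 1) - 1]   # X_{i-m:n} -> X_{1:n} when i - m < 1
        hi = x[min(i + m, r) - 1]   # X_{i+m:n} -> X_{r:n} past the observed part
        total += math.log(n / (2.0 * m) * (hi - lo)) - math.log(n - i + 1)
    if r < n:  # censoring correction term
        total -= n * (1.0 - r / n) * math.log(1.0 - r / n)
    return total
```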

Theorem 1. The entropy of the GHLD based on Type-II censored samples is:

$$H(f) = \sum_{i=1}^{r}\bigl[1-\log(n-i+1)-I\bigr] \tag{6}$$
where:
$$I = \log\lambda-\frac{\Gamma(n+1)}{\Gamma(n-i+1)}\sum_{j=1}^{\infty}\frac{2^{-j}\,\Gamma(n-i+j/\lambda+1)}{j\,\Gamma(n+j/\lambda+1)} \tag{7}$$
Proof. Let u = 1 − F(x). Then:
$$\begin{aligned}
I &= \int_{0}^{\infty} f_{i:n}(x)\log h(x)\,dx \\
&= \frac{\Gamma(n+1)}{\Gamma(i)\Gamma(n-i+1)}\int_{0}^{1} u^{n-i}(1-u)^{i-1}\log\left[\lambda\left(1-\tfrac{1}{2}u^{1/\lambda}\right)\right]du \\
&= \log\lambda+\frac{\Gamma(n+1)}{\Gamma(i)\Gamma(n-i+1)}\int_{0}^{1} u^{n-i}(1-u)^{i-1}\log\left(1-\tfrac{1}{2}u^{1/\lambda}\right)du \\
&= \log\lambda-\frac{\Gamma(n+1)}{\Gamma(i)\Gamma(n-i+1)}\sum_{j=1}^{\infty}\frac{2^{-j}}{j}\int_{0}^{1} u^{n-i+j/\lambda}(1-u)^{i-1}\,du \\
&= \log\lambda-\frac{\Gamma(n+1)}{\Gamma(n-i+1)}\sum_{j=1}^{\infty}\frac{2^{-j}\,\Gamma(n-i+j/\lambda+1)}{j\,\Gamma(n+j/\lambda+1)}
\end{aligned} \tag{8}$$
by using:
$$\log(1-z) = -\sum_{j=1}^{\infty}\frac{z^{j}}{j}, \quad |z|<1 \tag{9}$$
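Since the terms of the series in Theorem 1 are dominated by 2^{-j}, truncating it after a few hundred terms gives the entropy essentially to machine precision. A sketch using log-gamma for numerical stability (the function name and the truncation level are illustrative choices, not from the paper):

```python
import math

def ghld_entropy(lam, n, r, terms=200):
    """Entropy of a Type-II censored GHLD sample (Theorem 1), with the
    infinite series truncated at `terms` (it converges geometrically)."""
    H = 0.0
    for i in range(1, r + 1):
        s = 0.0
        for j in range(1, terms + 1):
            # Gamma ratios computed via lgamma to avoid overflow
            log_ratio = (math.lgamma(n + 1) - math.lgamma(n - i + 1)
                         + math.lgamma(n - i + j / lam + 1)
                         - math.lgamma(n + j / lam + 1))
            s += math.exp(log_ratio - j * math.log(2.0)) / j
        I = math.log(lam) - s
        H += 1.0 - math.log(n - i + 1) - I
    return H
```

As a check, for n = r = 1 and λ = 1 the formula collapses to the differential entropy of the half-logistic distribution, 2 − log 2.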

Note that the entropy in Equation (6) depends on the shape parameter, λ. The following subsections estimate λ by using two Bayesian inference methods: one uses a conjugate prior; the other places distributions on the hyperparameters of that prior.

2.1. Bayes Estimation

Theorem 2. A natural conjugate prior for the shape parameter, λ, is the gamma prior, given by:

$$\pi(\lambda\,|\,\alpha,\beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\lambda^{\alpha-1}e^{-\beta\lambda}, \quad \alpha,\beta>0 \tag{10}$$
Then, the Bayes estimator of λ based on the squared error loss function (SELF) is obtained as:
$$\hat{\lambda}_{S} = \frac{r+\alpha}{\beta+h} \tag{11}$$
where:
$$h = (n-r)\log\left(\frac{1+e^{-x_{r:n}}}{2e^{-x_{r:n}}}\right)+\sum_{i=1}^{r}\log\left(\frac{1+e^{-x_{i:n}}}{2e^{-x_{i:n}}}\right) \tag{12}$$
Proof. The likelihood function based on the Type-II censoring scheme is written as:
$$L(\lambda) = \frac{n!}{(n-r)!}\left[1-F(x_{r:n})\right]^{n-r}\prod_{i=1}^{r}f(x_{i:n}) = \frac{n!}{(n-r)!}\,\lambda^{r}\left(\frac{2e^{-x_{r:n}}}{1+e^{-x_{r:n}}}\right)^{\lambda(n-r)}\prod_{i=1}^{r}\left(\frac{2e^{-x_{i:n}}}{1+e^{-x_{i:n}}}\right)^{\lambda}\frac{1}{1+e^{-x_{i:n}}} \tag{13}$$
Since Equation (13) can be written as L(λ) ∝ λ^r e^{−λh}, the MLE of λ is easily obtained as λ̂ = r/h.

From gamma prior Equation (10) and likelihood function Equation (13), the posterior distribution of λ is obtained as:

$$\pi(\lambda\,|\,\underline{x}) = \frac{L(\lambda)\pi(\lambda\,|\,\alpha,\beta)}{\int_{\lambda}L(\lambda)\pi(\lambda\,|\,\alpha,\beta)\,d\lambda} = \frac{(\beta+h)^{r+\alpha}}{\Gamma(r+\alpha)}\lambda^{r+\alpha-1}e^{-(\beta+h)\lambda} \tag{14}$$
which is a gamma distribution with the parameters (r + α, β + h). Here, the Bayes estimator of λ based on the SELF is the posterior mean, and therefore, this completes the proof.
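In code, the statistic h and the two point estimators are one-liners once the censored sample is sorted. A sketch (the function names are illustrative):

```python
import math

def h_statistic(x, n):
    """The statistic h; x holds the r observed order statistics
    x_{1:n} <= ... <= x_{r:n} from an original sample of size n."""
    r = len(x)
    g = lambda t: math.log((1.0 + math.exp(-t)) / (2.0 * math.exp(-t)))
    return (n - r) * g(x[-1]) + sum(g(t) for t in x)

def lambda_mle(x, n):
    # MLE from the Type-II censored likelihood: lambda-hat = r / h
    return len(x) / h_statistic(x, n)

def lambda_bayes(x, n, alpha, beta):
    # Posterior mean of the gamma posterior: (r + alpha) / (beta + h)
    return (len(x) + alpha) / (beta + h_statistic(x, n))
```

As α, β → 0 (the vague prior used in Section 3), the Bayes estimator approaches the MLE, which matches the closeness of the two estimators seen in Table 1.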

2.2. E-Bayesian Estimation

Han [11] introduced a new Bayesian estimation method referred to as the expected Bayesian (E-Bayesian) estimation method to estimate failure probability. The present paper uses this method to obtain another Bayes estimator of the shape parameter, λ.

In Subsection 2.1, the gamma prior was assumed in order to obtain the Bayes estimator of λ. Hence, according to Han [12], the following prior distributions for α and β are considered:

$$\pi_{1}(\alpha,\beta) = \frac{2(c-\beta)}{c^{2}}, \quad 0<\alpha<1,\ 0<\beta<c \tag{15}$$
$$\pi_{2}(\alpha,\beta) = \frac{1}{c}, \quad 0<\alpha<1,\ 0<\beta<c \tag{16}$$
$$\pi_{3}(\alpha,\beta) = \frac{2\beta}{c^{2}}, \quad 0<\alpha<1,\ 0<\beta<c \tag{17}$$
which are decreasing, constant and increasing functions of β, respectively.

Theorem 3. For prior distributions (15), (16) and (17), the corresponding E-Bayes estimators of λ based on the SELF are, respectively:

$$\hat{\lambda}_{ES1} = \frac{2}{c^{2}}\left(r+\frac{1}{2}\right)\left[(c+h)\log\left(1+\frac{c}{h}\right)-c\right] \tag{18}$$
$$\hat{\lambda}_{ES2} = \frac{1}{c}\left(r+\frac{1}{2}\right)\log\left(1+\frac{c}{h}\right) \tag{19}$$
$$\hat{\lambda}_{ES3} = \frac{2}{c^{2}}\left(r+\frac{1}{2}\right)\left[c-h\log\left(1+\frac{c}{h}\right)\right] \tag{20}$$
Proof. For the prior distribution (15):
$$\begin{aligned}
\hat{\lambda}_{ES1} &= \int_{\alpha}\int_{\beta}\hat{\lambda}_{S}\,\pi_{1}(\alpha,\beta)\,d\beta\,d\alpha = \frac{2}{c^{2}}\int_{0}^{1}(r+\alpha)\int_{0}^{c}\frac{c-\beta}{\beta+h}\,d\beta\,d\alpha \\
&= \frac{2}{c^{2}}\left[(c+h)\log\left(\frac{c+h}{h}\right)-c\right]\int_{0}^{1}(r+\alpha)\,d\alpha = \frac{2}{c^{2}}\left(r+\frac{1}{2}\right)\left[(c+h)\log\left(1+\frac{c}{h}\right)-c\right]
\end{aligned}$$
Similarly:
$$\hat{\lambda}_{ES2} = \int_{\alpha}\int_{\beta}\hat{\lambda}_{S}\,\pi_{2}(\alpha,\beta)\,d\beta\,d\alpha = \frac{1}{c}\left(r+\frac{1}{2}\right)\log\left(1+\frac{c}{h}\right)$$
and:
$$\hat{\lambda}_{ES3} = \int_{\alpha}\int_{\beta}\hat{\lambda}_{S}\,\pi_{3}(\alpha,\beta)\,d\beta\,d\alpha = \frac{2}{c^{2}}\left(r+\frac{1}{2}\right)\left[c-h\log\left(1+\frac{c}{h}\right)\right]$$
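The three closed forms share the factor (r + 1/2) and differ only in how the β-integral weights (β + h)^{−1}. A sketch (the function name is illustrative), which also numerically exhibits the equal-spacing identity λ̂_ES1 − λ̂_ES2 = λ̂_ES2 − λ̂_ES3 used in the next proof:

```python
import math

def e_bayes_estimators(h, r, c):
    """E-Bayes estimators of lambda under the three hyperpriors."""
    L = math.log(1.0 + c / h)
    w = r + 0.5                      # integral of (r + alpha) over 0 < alpha < 1
    es1 = 2.0 / c**2 * w * ((c + h) * L - c)   # decreasing prior in beta
    es2 = w * L / c                            # uniform prior in beta
    es3 = 2.0 / c**2 * w * (c - h * L)         # increasing prior in beta
    return es1, es2, es3
```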
The following Theorem presents the relationships among E-Bayes estimators λ̂ESi (i = 1, 2, 3).

Theorem 4. For 0 < c < h, the E-Bayes estimators satisfy:

$$\hat{\lambda}_{ES3}<\hat{\lambda}_{ES2}<\hat{\lambda}_{ES1} \tag{21}$$
and:
$$\lim_{h\to\infty}\hat{\lambda}_{ES1} = \lim_{h\to\infty}\hat{\lambda}_{ES2} = \lim_{h\to\infty}\hat{\lambda}_{ES3} \tag{22}$$
Proof. From Equations (18)–(20):
$$\hat{\lambda}_{ES1}-\hat{\lambda}_{ES2} = \hat{\lambda}_{ES2}-\hat{\lambda}_{ES3} = \frac{2}{c^{2}}\left(r+\frac{1}{2}\right)\left[\left(\frac{c}{2}+h\right)\log\left(1+\frac{c}{h}\right)-c\right]$$
For 0 < c < h:
$$\begin{aligned}
\left(\frac{c}{2}+h\right)\log\left(1+\frac{c}{h}\right)-c &= \left(\frac{c}{2}+h\right)\sum_{i=1}^{\infty}\frac{(-1)^{i-1}}{i}\left(\frac{c}{h}\right)^{i}-c \\
&= \frac{c^{2}+2ch}{2h}-\frac{c^{3}+2c^{2}h}{4h^{2}}+\frac{c^{4}+2c^{3}h}{6h^{3}}-\cdots-c \\
&= c\left[\frac{c^{2}}{12h^{2}}-\frac{c^{3}}{12h^{3}}+\frac{3c^{4}}{40h^{4}}-\frac{c^{5}}{15h^{5}}+\cdots\right] \\
&= c\left[\frac{c^{2}}{12h^{2}}\left(1-\frac{c}{h}\right)+\frac{3c^{4}}{40h^{4}}\left(1-\frac{8c}{9h}\right)+\cdots\right]>0
\end{aligned}$$
by using:
$$\log(1+z) = \sum_{j=1}^{\infty}(-1)^{j-1}\frac{z^{j}}{j}, \quad |z|<1$$
That is:
$$\hat{\lambda}_{ES1}-\hat{\lambda}_{ES2} = \hat{\lambda}_{ES2}-\hat{\lambda}_{ES3}>0 \quad \text{for } 0<c<h$$
In addition:
$$\lim_{h\to\infty}\left(\hat{\lambda}_{ES1}-\hat{\lambda}_{ES2}\right) = \lim_{h\to\infty}\left(\hat{\lambda}_{ES2}-\hat{\lambda}_{ES3}\right) = \frac{2}{c}\left(r+\frac{1}{2}\right)\lim_{h\to\infty}\left[\frac{c^{2}}{12h^{2}}\left(1-\frac{c}{h}\right)+\frac{3c^{4}}{40h^{4}}\left(1-\frac{8c}{9h}\right)+\cdots\right]=0$$
This completes the proof.

With λ replaced by the estimators λ̂, λ̂_S and λ̂_ESi (i = 1, 2, 3) in Equation (7), entropy estimators of the GHLD based on Type-II censored samples are obtained as:

$$\hat{H}(f) = \sum_{i=1}^{r}\bigl[1-\log(n-i+1)-\hat{I}\bigr] \tag{23}$$
$$\hat{H}_{S}(f) = \sum_{i=1}^{r}\bigl[1-\log(n-i+1)-\hat{I}_{S}\bigr] \tag{24}$$
$$\hat{H}_{ESi}(f) = \sum_{i=1}^{r}\bigl[1-\log(n-i+1)-\hat{I}_{ESi}\bigr] \tag{25}$$
where:
$$\hat{I} = \log\hat{\lambda}-\frac{\Gamma(n+1)}{\Gamma(n-i+1)}\sum_{j=1}^{\infty}\frac{2^{-j}\,\Gamma(n-i+j/\hat{\lambda}+1)}{j\,\Gamma(n+j/\hat{\lambda}+1)} \tag{26}$$
$$\hat{I}_{S} = \log\hat{\lambda}_{S}-\frac{\Gamma(n+1)}{\Gamma(n-i+1)}\sum_{j=1}^{\infty}\frac{2^{-j}\,\Gamma(n-i+j/\hat{\lambda}_{S}+1)}{j\,\Gamma(n+j/\hat{\lambda}_{S}+1)} \tag{27}$$
$$\hat{I}_{ESi} = \log\hat{\lambda}_{ESi}-\frac{\Gamma(n+1)}{\Gamma(n-i+1)}\sum_{j=1}^{\infty}\frac{2^{-j}\,\Gamma(n-i+j/\hat{\lambda}_{ESi}+1)}{j\,\Gamma(n+j/\hat{\lambda}_{ESi}+1)} \tag{28}$$

3. Application

To evaluate the performance of the estimators, their MSEs and biases are simulated through Monte Carlo simulations. Type-II censored samples are generated from the GHLD with λ = 0.5 for various Type-II censoring schemes. For each scheme, the MLE and Bayes estimators of the shape parameter, λ, are calculated. For the Bayesian inference, hyperparameter values are chosen as α = 1 and β = 2, giving Var(λ) = 0.25 under the gamma prior of Equation (10). Then, the Bayes estimator based on this informative prior is obtained. To investigate the effect of the hyperparameters, the Bayes estimator is also calculated based on a vague prior with very small parameters (here, α = 0.01 and β = 0.01). In addition, the E-Bayes estimators, λ̂_ESi (i = 1, 2, 3), are calculated for c = 2(1)4 (i.e., c = 2, 3, 4), and the corresponding entropy estimators are obtained. This procedure is repeated 10,000 times to obtain the MSEs and biases of the estimators. Finally, MSEs and biases of the nonparametric entropy estimator in Equation (5) are simulated for window sizes m = 2(2)6 (i.e., m = 2, 4, 6). These values are tabulated in Tables 1–5.
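The simulation loop is easy to reproduce at small scale. A self-contained sketch (reduced replication count, illustrative function names) that estimates the MSE and bias of the MLE and the informative-prior Bayes estimator:

```python
import math
import random

def simulate(lam=0.5, n=20, r=18, alpha=1.0, beta=2.0, reps=2000, seed=1):
    """Monte Carlo MSE/bias sketch for the MLE r/h and the Bayes
    estimator (r + alpha)/(beta + h) from Type-II censored GHLD samples."""
    rng = random.Random(seed)
    g = lambda t: math.log((1.0 + math.exp(-t)) / (2.0 * math.exp(-t)))
    mle, bayes = [], []
    for _ in range(reps):
        # Type-II censored sample: r smallest of n GHLD variates (inverse cdf)
        u = sorted(rng.random() for _ in range(n))[:r]
        x = [math.log(2.0 / (1.0 - ui) ** (1.0 / lam) - 1.0) for ui in u]
        h = (n - r) * g(x[-1]) + sum(g(t) for t in x)
        mle.append(r / h)
        bayes.append((r + alpha) / (beta + h))
    mse = lambda e: sum((v - lam) ** 2 for v in e) / reps
    bias = lambda e: sum(e) / reps - lam
    return {"mle": (mse(mle), bias(mle)), "bayes": (mse(bayes), bias(bayes))}
```

With λ = 0.5, n = 20 and r = 18, the returned values should be close to the first row of Table 1, up to Monte Carlo noise.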

The tables show that the MLE and the Bayes estimator based on the vague prior are very close to each other, and that the Bayes estimator based on the informative prior is more efficient than both. In addition, except for λ̂_ES1 with c = 2, the E-Bayes estimators outperform the MLE and the vague-prior Bayes estimator. The same pattern holds for the corresponding entropy estimators, which are also more efficient than the nonparametric entropy estimator.

For λ = 1.5 and λ = 2.5, the same simulation method is carried out, and the results are consistent with those reported in Tables 1–5. Therefore, the results for λ = 1.5 and λ = 2.5 are not reported.

4. Conclusions

This paper discussed entropy estimators for the GHLD with a shape parameter based on Type-II censored samples. Entropy estimators were derived by using the MLE and Bayes estimators of the shape parameter and compared in terms of their MSE and bias. The results show that the E-Bayes estimators based on the priors in Equations (16) and (17) are more efficient than the MLE and the vague-prior Bayes estimator for the considered cases. This suggests that the E-Bayesian estimation method based on the constant prior of Equation (16) or the increasing prior of Equation (17) should be used when no prior belief is available.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
2. Baratpour, S.; Ahmadi, J.; Arghami, N.R. Entropy properties of record statistics. Stat. Pap. 2007, 48, 197–213.
3. Abo-Eleneen, Z.A. The entropy of progressively censored samples. Entropy 2011, 13, 437–449.
4. Kang, S.B.; Cho, Y.S.; Han, J.T.; Kim, J. An estimation of the entropy for a double exponential distribution based on multiply Type-II censored samples. Entropy 2012, 14, 161–173.
5. Arora, S.H.; Bhimani, G.C.; Patel, M.N. Some results on maximum likelihood estimators of parameters of generalized half logistic distribution under Type-I progressive censoring with changing. Int. J. Contemp. Math. Sci. 2010, 5, 685–698.
6. Kim, Y.; Kang, S.B.; Seo, J.I. Bayesian estimation in the generalized half logistic distribution under progressively Type-II censoring. J. Korean Data Inf. Sci. Soc. 2011, 22, 977–987.
7. Seo, J.I.; Lee, H.J.; Kang, S.B. Estimation for generalized half logistic distribution based on records. J. Korean Data Inf. Sci. Soc. 2012, 23, 1249–1257.
8. Azimi, R. Bayesian estimation of generalized half logistic Type-II doubly censored data. Int. J. Sci. World 2013, 1, 57–60.
9. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: Hoboken, NJ, USA, 2005.
10. Park, S. Testing exponentiality based on the Kullback-Leibler information with the Type-II censored data. IEEE Trans. Reliab. 2005, 54, 22–26.
11. Han, M. E-Bayesian estimation of failure probability and its application. Math. Comput. Model. 2007, 45, 1272–1279.
12. Han, M. E-Bayesian estimation and hierarchical Bayesian estimation of failure rate. Appl. Math. Model. 2009, 33, 1915–1922.
Table 1. MSEs (biases) of the maximum likelihood estimator (MLE) and the Bayes estimators.

| n  | r  | λ̂ (MLE)           | λ̂_S (α = 0.01, β = 0.01) | λ̂_S (α = 1, β = 2) |
|----|----|-------------------|---------------------------|---------------------|
| 20 | 18 | 0.00223 (0.01110) | 0.00223 (0.01124)         | 0.00198 (0.01029)   |
|    | 16 | 0.00268 (0.01415) | 0.00268 (0.01410)         | 0.00234 (0.01303)   |
|    | 14 | 0.00344 (0.01788) | 0.00344 (0.01790)         | 0.00294 (0.01626)   |
|    | 12 | 0.00450 (0.02232) | 0.00450 (0.02231)         | 0.00373 (0.01998)   |
| 30 | 28 | 0.00104 (0.00615) | 0.00104 (0.00624)         | 0.00096 (0.00587)   |
|    | 26 | 0.00112 (0.00791) | 0.00112 (0.00801)         | 0.00104 (0.00754)   |
|    | 24 | 0.00126 (0.00911) | 0.00126 (0.00921)         | 0.00115 (0.00865)   |
|    | 22 | 0.00146 (0.01060) | 0.00147 (0.01072)         | 0.00133 (0.01002)   |
|    | 20 | 0.00176 (0.01252) | 0.00177 (0.01264)         | 0.00159 (0.01176)   |
| 40 | 38 | 0.00060 (0.00409) | 0.00060 (0.00415)         | 0.00057 (0.00395)   |
|    | 36 | 0.00063 (0.00541) | 0.00064 (0.00548)         | 0.00060 (0.00523)   |
|    | 34 | 0.00069 (0.00626) | 0.00069 (0.00633)         | 0.00065 (0.00604)   |
|    | 32 | 0.00077 (0.00719) | 0.00077 (0.00727)         | 0.00072 (0.00693)   |
|    | 30 | 0.00086 (0.00814) | 0.00086 (0.00823)         | 0.00080 (0.00783)   |
|    | 28 | 0.00097 (0.00878) | 0.00097 (0.00887)         | 0.00090 (0.00842)   |
Table 2. MSEs (biases) of the E-Bayes estimators λ̂_ESi.

| n  | r  | i | c = 2              | c = 3               | c = 4               |
|----|----|---|--------------------|---------------------|---------------------|
| 20 | 18 | 1 | 0.00231 (0.01555)  | 0.00211 (0.01088)   | 0.00196 (0.00633)   |
|    |    | 2 | 0.00211 (0.01081)  | 0.00190 (0.00396)   | 0.00180 (−0.00265)  |
|    |    | 3 | 0.00195 (0.00608)  | 0.00179 (−0.00295)  | 0.00180 (−0.01162)  |
|    | 16 | 1 | 0.00279 (0.01912)  | 0.00252 (0.01382)   | 0.00231 (0.00868)   |
|    |    | 2 | 0.00251 (0.01374)  | 0.00222 (0.00600)   | 0.00207 (−0.00144)  |
|    |    | 3 | 0.00230 (0.00836)  | 0.00203 (−0.00183)  | 0.00203 (−0.01157)  |
|    | 14 | 1 | 0.00359 (0.02348)  | 0.00320 (0.01737)   | 0.00290 (0.01147)   |
|    |    | 2 | 0.00319 (0.01726)  | 0.00277 (0.00836)   | 0.00253 (−0.00013)  |
|    |    | 3 | 0.00288 (0.01104)  | 0.00243 (−0.00064)  | 0.00243 (−0.01173)  |
|    | 12 | 1 | 0.00469 (0.02875)  | 0.00413 (0.02155)   | 0.00369 (0.01464)   |
|    |    | 2 | 0.00411 (0.02140)  | 0.00349 (0.01098)   | 0.00312 (0.00109)   |
|    |    | 3 | 0.00364 (0.01406)  | 0.00294 (0.00040)   | 0.00294 (−0.01246)  |
| 30 | 28 | 1 | 0.00107 (0.00907)  | 0.00100 (0.00609)   | 0.00095 (0.00316)   |
|    |    | 2 | 0.00100 (0.00606)  | 0.00093 (0.00166)   | 0.00091 (−0.00265)  |
|    |    | 3 | 0.00095 (0.00306)  | 0.00091 (−0.00277)  | 0.00093 (−0.00845)  |
|    | 26 | 1 | 0.00117 (0.01104)  | 0.00108 (0.00782)   | 0.00101 (0.00465)   |
|    |    | 2 | 0.00108 (0.00778)  | 0.00099 (0.00302)   | 0.00095 (−0.00162)  |
|    |    | 3 | 0.00101 (0.00453)  | 0.00095 (−0.00177)  | 0.00096 (−0.00789)  |
|    | 24 | 1 | 0.00131 (0.01248)  | 0.00121 (0.00898)   | 0.00113 (0.00555)   |
|    |    | 2 | 0.00121 (0.00894)  | 0.00110 (0.00378)   | 0.00104 (−0.00125)  |
|    |    | 3 | 0.00112 (0.00540)  | 0.00104 (−0.00142)  | 0.00105 (−0.00805)  |
|    | 22 | 1 | 0.00153 (0.01427)  | 0.00140 (0.01044)   | 0.00130 (0.00669)   |
|    |    | 2 | 0.00140 (0.01039)  | 0.00126 (0.00475)   | 0.00118 (−0.00073)  |
|    |    | 3 | 0.00129 (0.00652)  | 0.00118 (−0.00094)  | 0.00118 (−0.00815)  |
|    | 20 | 1 | 0.00185 (0.01653)  | 0.00168 (0.01229)   | 0.00154 (0.00815)   |
|    |    | 2 | 0.00168 (0.01224)  | 0.00149 (0.00601)   | 0.00139 (−0.00002)  |
|    |    | 3 | 0.00154 (0.00795)  | 0.00138 (−0.00027)  | 0.00136 (−0.00819)  |
| 40 | 38 | 1 | 0.00062 (0.00625)  | 0.00059 (0.00406)   | 0.00056 (0.00190)   |
|    |    | 2 | 0.00059 (0.00405)  | 0.00056 (0.00080)   | 0.00055 (−0.00240)  |
|    |    | 3 | 0.00056 (0.00184)  | 0.00055 (−0.00247)  | 0.00057 (−0.00670)  |
|    | 36 | 1 | 0.00066 (0.00769)  | 0.00062 (0.00537)   | 0.00059 (0.00308)   |
|    |    | 2 | 0.00062 (0.00535)  | 0.00058 (0.00191)   | 0.00056 (−0.00147)  |
|    |    | 3 | 0.00059 (0.00301)  | 0.00056 (−0.00155)  | 0.00057 (−0.00602)  |
|    | 34 | 1 | 0.00072 (0.00867)  | 0.00067 (0.00620)   | 0.00064 (0.00377)   |
|    |    | 2 | 0.00067 (0.00619)  | 0.00062 (0.00253)   | 0.00060 (−0.00105)  |
|    |    | 3 | 0.00064 (0.00370)  | 0.00060 (−0.00113)  | 0.00061 (−0.00587)  |
|    | 32 | 1 | 0.00080 (0.00975)  | 0.00075 (0.00712)   | 0.00070 (0.00454)   |
|    |    | 2 | 0.00075 (0.00710)  | 0.00068 (0.00322)   | 0.00065 (−0.00059)  |
|    |    | 3 | 0.00070 (0.00446)  | 0.00065 (−0.00069)  | 0.00066 (−0.00572)  |
|    | 30 | 1 | 0.00090 (0.01086)  | 0.00083 (0.00805)   | 0.00078 (0.00529)   |
|    |    | 2 | 0.00083 (0.00803)  | 0.00076 (0.00388)   | 0.00072 (−0.00018)  |
|    |    | 3 | 0.00078 (0.00520)  | 0.00072 (−0.00030)  | 0.00072 (−0.00566)  |
|    | 28 | 1 | 0.00102 (0.01169)  | 0.00094 (0.00868)   | 0.00087 (0.00572)   |
|    |    | 2 | 0.00094 (0.00865)  | 0.00085 (0.00420)   | 0.00080 (−0.00014)  |
|    |    | 3 | 0.00087 (0.00562)  | 0.00080 (−0.00027)  | 0.00080 (−0.00600)  |
Table 3. MSEs (biases) of the nonparametric entropy estimators H_m.

| n  | r  | m = 2             | m = 4             | m = 6             |
|----|----|-------------------|-------------------|-------------------|
| 20 | 18 | 0.05727 (0.20756) | 0.06025 (0.22059) | 0.08642 (0.27390) |
|    | 16 | 0.06583 (0.22228) | 0.07309 (0.24550) | 0.10819 (0.31053) |
|    | 14 | 0.07890 (0.24193) | 0.09138 (0.27594) | 0.13773 (0.35178) |
|    | 12 | 0.09593 (0.26667) | 0.11657 (0.31389) | 0.17984 (0.40306) |
| 30 | 28 | 0.03308 (0.16156) | 0.02725 (0.14729) | 0.03620 (0.17493) |
|    | 26 | 0.03587 (0.16849) | 0.03088 (0.15896) | 0.04268 (0.19315) |
|    | 24 | 0.03880 (0.17525) | 0.03484 (0.16998) | 0.04947 (0.20955) |
|    | 22 | 0.04315 (0.18375) | 0.04048 (0.18340) | 0.05828 (0.22819) |
|    | 20 | 0.04891 (0.19439) | 0.04792 (0.19957) | 0.06942 (0.24957) |
| 40 | 38 | 0.02460 (0.14259) | 0.01640 (0.11477) | 0.02001 (0.12952) |
|    | 36 | 0.02624 (0.14731) | 0.01816 (0.12217) | 0.02306 (0.14138) |
|    | 34 | 0.02786 (0.15161) | 0.02000 (0.12880) | 0.02599 (0.15110) |
|    | 32 | 0.02972 (0.15641) | 0.02225 (0.13612) | 0.02943 (0.16125) |
|    | 30 | 0.03206 (0.16178) | 0.02487 (0.14381) | 0.03325 (0.17169) |
|    | 28 | 0.03429 (0.16653) | 0.02762 (0.15142) | 0.03753 (0.18264) |
Table 4. MSEs (biases) of entropy estimators with the MLE and the Bayes estimators.

| n  | r  | Ĥ(f)              | Ĥ_S(f) (α = 0.01, β = 0.01) | Ĥ_S(f) (α = 1, β = 2) |
|----|----|-------------------|-----------------------------|-----------------------|
| 20 | 18 | 0.00599 (0.01498) | 0.00599 (0.01522)           | 0.00536 (0.01403)     |
|    | 16 | 0.00684 (0.01922) | 0.00684 (0.01947)           | 0.00603 (0.01787)     |
|    | 14 | 0.00834 (0.02410) | 0.00833 (0.02412)           | 0.00722 (0.02218)     |
|    | 12 | 0.01042 (0.02988) | 0.01042 (0.03020)           | 0.00878 (0.02711)     |
| 30 | 28 | 0.00291 (0.00865) | 0.00291 (0.00880)           | 0.00271 (0.00830)     |
|    | 26 | 0.00303 (0.01140) | 0.00303 (0.01156)           | 0.00281 (0.01092)     |
|    | 24 | 0.00331 (0.01307) | 0.00331 (0.01324)           | 0.00305 (0.01248)     |
|    | 22 | 0.00376 (0.01513) | 0.00376 (0.01532)           | 0.00343 (0.01438)     |
|    | 20 | 0.00441 (0.01775) | 0.00441 (0.01795)           | 0.00399 (0.01679)     |
| 40 | 38 | 0.00172 (0.00590) | 0.00172 (0.00601)           | 0.00163 (0.00573)     |
|    | 36 | 0.00176 (0.00805) | 0.00176 (0.00817)           | 0.00167 (0.00781)     |
|    | 34 | 0.00188 (0.00932) | 0.00189 (0.00944)           | 0.00178 (0.00902)     |
|    | 32 | 0.00205 (0.01069) | 0.00205 (0.01082)           | 0.00192 (0.01033)     |
|    | 30 | 0.00225 (0.01205) | 0.00225 (0.01219)           | 0.00210 (0.01163)     |
|    | 28 | 0.00249 (0.01288) | 0.00250 (0.01303)           | 0.00232 (0.01239)     |
Table 5. MSEs (biases) of entropy estimators with the E-Bayes estimators, Ĥ_ESi(f).

| n  | r  | i | c = 2               | c = 3               | c = 4              |
|----|----|---|---------------------|---------------------|--------------------|
| 20 | 18 | 1 | 0.00605 (−0.02246)  | 0.00568 (−0.01481)  | 0.00542 (−0.00729) |
|    |    | 2 | 0.00567 (−0.01471)  | 0.00534 (−0.00335)  | 0.00526 (0.00776)  |
|    |    | 3 | 0.00541 (−0.00688)  | 0.00526 (0.00827)   | 0.00556 (0.02311)  |
|    | 16 | 1 | 0.00695 (−0.02740)  | 0.00644 (−0.01892)  | 0.00609 (−0.01060) |
|    |    | 2 | 0.00643 (−0.01879)  | 0.00595 (−0.00622)  | 0.00579 (0.00605)  |
|    |    | 3 | 0.00606 (−0.01008)  | 0.00579 (0.00669)   | 0.00607 (0.02307)  |
|    | 14 | 1 | 0.00850 (−0.03320)  | 0.00779 (−0.02362)  | 0.00729 (−0.01425) |
|    |    | 2 | 0.00778 (−0.02345)  | 0.00709 (−0.00928)  | 0.00681 (0.00450)  |
|    |    | 3 | 0.00725 (−0.01358)  | 0.00679 (0.00532)   | 0.00704 (0.02371)  |
|    | 12 | 1 | 0.01062 (−0.04023)  | 0.00962 (−0.02914)  | 0.00888 (−0.01834) |
|    |    | 2 | 0.00959 (−0.02891)  | 0.00858 (−0.01256)  | 0.00813 (0.00327)  |
|    |    | 3 | 0.00881 (−0.01743)  | 0.00810 (0.00437)   | 0.00833 (0.02548)  |
| 30 | 28 | 1 | 0.00295 (−0.01359)  | 0.00281 (−0.00861)  | 0.00272 (−0.00368) |
|    |    | 2 | 0.00281 (−0.00856)  | 0.00269 (−0.00114)  | 0.00269 (0.00617)  |
|    |    | 3 | 0.00272 (−0.00351)  | 0.00269 (0.00639)   | 0.00285 (0.01615)  |
|    | 26 | 1 | 0.00310 (−0.01662)  | 0.00292 (−0.01131)  | 0.00280 (−0.00606) |
|    |    | 2 | 0.00292 (−0.01126)  | 0.00276 (−0.00335)  | 0.00272 (0.00444)  |
|    |    | 3 | 0.00279 (−0.00586)  | 0.00272 (0.00469)   | 0.00286 (0.01509)  |
|    | 24 | 1 | 0.00340 (−0.01864)  | 0.00318 (−0.01295)  | 0.00303 (−0.00733) |
|    |    | 2 | 0.00318 (−0.01289)  | 0.00298 (−0.00441)  | 0.00292 (0.00393)  |
|    |    | 3 | 0.00302 (−0.00709)  | 0.00292 (0.00421)   | 0.00306 (0.01534)  |
|    | 22 | 1 | 0.00387 (−0.02112)  | 0.00360 (−0.01496)  | 0.00341 (−0.00889) |
|    |    | 2 | 0.00359 (−0.01489)  | 0.00333 (−0.00574)  | 0.00325 (0.00326)  |
|    |    | 3 | 0.00340 (−0.00861)  | 0.00324 (0.00360)   | 0.00339 (0.01560)  |
|    | 20 | 1 | 0.00455 (−0.02424)  | 0.00420 (−0.01751)  | 0.00396 (−0.01089) |
|    |    | 2 | 0.00420 (−0.01743)  | 0.00386 (−0.00743)  | 0.00373 (0.00237)  |
|    |    | 3 | 0.00394 (−0.01056)  | 0.00372 (0.00277)   | 0.00385 (0.01585)  |
| 40 | 38 | 1 | 0.00175 (−0.00958)  | 0.00167 (−0.00589)  | 0.00163 (−0.00222) |
|    |    | 2 | 0.00167 (−0.00586)  | 0.00162 (−0.00035)  | 0.00163 (0.00511)  |
|    |    | 3 | 0.00163 (−0.00213)  | 0.00163 (0.00523)   | 0.00173 (0.01251)  |
|    | 36 | 1 | 0.00181 (−0.01188)  | 0.00171 (−0.00801)  | 0.00165 (−0.00417) |
|    |    | 2 | 0.00171 (−0.00798)  | 0.00163 (−0.00220)  | 0.00162 (0.00351)  |
|    |    | 3 | 0.00165 (−0.00406)  | 0.00162 (0.00364)   | 0.00171 (0.01127)  |
|    | 34 | 1 | 0.00194 (−0.01333)  | 0.00183 (−0.00926)  | 0.00176 (−0.00523) |
|    |    | 2 | 0.00183 (−0.00923)  | 0.00173 (−0.00316)  | 0.00171 (0.00284)  |
|    |    | 3 | 0.00176 (−0.00511)  | 0.00171 (0.00298)   | 0.00179 (0.01098)  |
|    | 32 | 1 | 0.00211 (−0.01490)  | 0.00199 (−0.01061)  | 0.00190 (−0.00636) |
|    |    | 2 | 0.00199 (−0.01058)  | 0.00187 (−0.00418)  | 0.00183 (0.00214)  |
|    |    | 3 | 0.00189 (−0.00623)  | 0.00183 (0.00231)   | 0.00190 (0.01074)  |
|    | 30 | 1 | 0.00233 (−0.01650)  | 0.00218 (−0.01195)  | 0.00207 (−0.00745) |
|    |    | 2 | 0.00218 (−0.01191)  | 0.00203 (−0.00514)  | 0.00198 (0.00155)  |
|    |    | 3 | 0.00207 (−0.00730)  | 0.00197 (0.00174)   | 0.00205 (0.01066)  |
|    | 28 | 1 | 0.00258 (−0.01760)  | 0.00241 (−0.01276)  | 0.00229 (−0.00797) |
|    |    | 2 | 0.00241 (−0.01272)  | 0.00224 (−0.00551)  | 0.00218 (0.00160)  |
|    |    | 3 | 0.00228 (−0.00780)  | 0.00217 (0.00181)   | 0.00225 (0.01130)  |
Entropy EISSN 1099-4300, published by MDPI AG, Basel, Switzerland.