
# Estimating the Entropy for Lomax Distribution Based on Generalized Progressively Hybrid Censoring

by Shuhan Liu and Wenhao Gui *

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China

* Author to whom correspondence should be addressed.
Symmetry 2019, 11(10), 1219; https://doi.org/10.3390/sym11101219
Submission received: 22 August 2019 / Revised: 25 September 2019 / Accepted: 27 September 2019 / Published: 1 October 2019

## Abstract

As it is often unavoidable to obtain incomplete data in life testing and survival analysis, research on censored data is becoming increasingly popular. In this paper, the problem of estimating the entropy of a two-parameter Lomax distribution based on generalized progressively hybrid censoring is considered. The maximum likelihood estimators of the unknown parameters are derived to estimate the entropy. Further, Bayesian estimates are computed under symmetric and asymmetric loss functions, including the squared error, linex, and general entropy loss functions. As the Bayesian estimates cannot be obtained analytically, the Lindley method and the Tierney and Kadane method are applied. A simulation study is conducted and a real data set is analyzed for illustrative purposes.

## 1. Introduction

The Lomax distribution, also known as the Pareto Type II distribution, is a heavy-tailed distribution widely used in reliability analysis, life testing problems, information theory, business, economics, queuing problems, actuarial modeling and the biological sciences. It is essentially a Pareto Type II distribution shifted so that its support is non-negative. The Lomax distribution was first introduced in Reference [1]. Ahsanullah [2] derived some distributional properties and presented two types of estimates for the unknown parameters based on record values from a sequence of Lomax random variables. Afaq [3] derived the Bayesian estimators of the Lomax distribution under three different loss functions using Jeffery's prior and an extension of Jeffery's prior, and compared the Bayesian estimates with the maximum likelihood estimates in terms of mean squared error. Ismail [4] derived the maximum likelihood estimators and interval estimators of the unknown parameters under a step-stress model, supposing that the time to failure has a Lomax distribution with failure-censoring, and studied the optimal test designs.
The cumulative distribution function of Lomax distribution is given as follows:
$F(x) = 1 - \left(1 + \frac{x}{\lambda}\right)^{-\alpha}, \quad \alpha > 0, \; \lambda > 0, \; x \geq 0.$
The corresponding probability density function of Lomax distribution is given by:
$f(x) = \frac{\alpha}{\lambda}\left(1 + \frac{x}{\lambda}\right)^{-(\alpha+1)}, \quad x \geq 0.$
One characteristic of the Lomax distribution is that many distributions have a close relationship with it. The Lomax distribution is a special case of the q-exponential distribution, the generalized Pareto distribution, the beta prime distribution and the F distribution. In addition, it is a mixture of exponential distributions in which the mixing distribution of the rate is a gamma distribution. There are also many variants of the Lomax distribution, such as the five-parameter McDonald Lomax distribution ([5]), the gamma Lomax distribution ([6]) and the weighted Lomax distribution ([7]). Clearly, the Lomax distribution is of great importance in statistics and probability. Another special characteristic of the Lomax distribution is that it plays an important role in information theory. Ahmadi [8] showed that if (X,Y) has a bivariate Lomax joint survival function, then the bivariate dynamic residual mutual information is constant.
Entropy, which is one of the most significant terms in statistics and information theory, was originally proposed by Gibbs in the thermodynamic system. Shannon [9] re-defined it and introduced the concept of entropy into information theory to quantitatively measure the uncertainty of information. Cover and Thomas [10] extended Shannon’s idea and defined differential entropy (or continuous entropy) of a continuous random variable X with probability density function f as:
$H = H(X) = H(f) = -\int_{-\infty}^{\infty} f(x) \log(f(x)) \, dx.$
Differential entropy can be used to measure the uniformity of a distribution. A distribution that spreads out has a higher entropy, whereas a highly peaked distribution has a relatively lower entropy. Many authors have carried out their studies based on entropy. Siamak and Ehsan [11] proposed Shannon aromaticity based on the concept of Shannon entropy in information theory and applied it to describe the probability of electronic charge distribution between atoms in a given ring. Tahmasebi and Behboodian [12] derived the entropy and the order statistics for the Feller-Pareto family and presented the entropy ordering property for the sample minimum and maximum of Feller-Pareto subfamilies. Cho et al. [13] estimated the entropy for Weibull distribution under three different loss functions based on generalized progressively hybrid censoring scheme. Seo and Kang [14] discussed the entropy of a generalized half-logistic distribution based on Type II censored samples.
The entropy of the Lomax distribution is given by:
$H = H(f) = \log(\lambda) - \log(\alpha) + \frac{1}{\alpha} + 1.$
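The closed form above can be checked numerically against the definition of differential entropy. The sketch below (illustrative, not from the paper; all function names are ours) substitutes $u = F(x)$ so that $H = -\int_0^1 \log f(F^{-1}(u))\,du$, which avoids truncating the heavy-tailed integral, and compares the result with the closed form:

```python
import math

def lomax_pdf(x, alpha, lam):
    # f(x) = (alpha/lam) * (1 + x/lam)^{-(alpha+1)}
    return (alpha / lam) * (1.0 + x / lam) ** (-(alpha + 1.0))

def lomax_entropy(alpha, lam):
    # Closed form: H = log(lam) - log(alpha) + 1/alpha + 1
    return math.log(lam) - math.log(alpha) + 1.0 / alpha + 1.0

def entropy_numeric(alpha, lam, n=200_000):
    # H = -integral of f log f; substituting u = F(x) turns the
    # heavy-tailed integral into -int_0^1 log f(F^{-1}(u)) du,
    # evaluated here with the midpoint rule.
    total = 0.0
    for i in range(n):
        u = (i + 0.5) / n
        x = lam * ((1.0 - u) ** (-1.0 / alpha) - 1.0)  # F^{-1}(u)
        total -= math.log(lomax_pdf(x, alpha, lam))
    return total / n
```

For example, at $\alpha = 0.4$, $\lambda = 0.2$ (the values used in the simulation study of Section 4), both routes give $H \approx 2.8069$.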
As it is often inevitable to lose some experimental units before the terminal time of a test, censoring is becoming increasingly popular in lifetime testing and survival analysis. The two most popular censoring schemes in the literature are the Type I and Type II censoring schemes. Hybrid censoring, first proposed in Reference [15], is a mixture of the Type I and Type II censoring schemes. The conventional censoring schemes (Type I, Type II, hybrid censoring) do not allow experimental units to be removed at any time other than the terminal point. Early removals are desirable so that some experimental units can be removed and used in other experiments. Accordingly, the progressive censoring scheme was introduced by Reference [16]. The progressively hybrid censoring scheme can be described as follows. Suppose that n identical units are placed on a test. Let $X_1, X_2, \cdots, X_m$ be the ordered failure times and $R = (R_1, R_2, \cdots, R_m)$ be the censoring scheme, so that $\sum_{i=1}^{m} R_i + m = n$. We refer to Reference [17] for a detailed discussion of progressive censoring. The test is terminated either when a pre-assigned number of failures, say m, has been observed or when a pre-specified time, say T, has been reached; clearly the terminal time is $T_{end} = \min\{X_m, T\}$. When the first failure occurs, $R_1$ units are randomly removed from the test. When the second failure occurs, $R_2$ units are randomly removed. Eventually, when the m-th failure occurs or when the pre-specified time T is reached, all the remaining units are removed from the test.
The disadvantage of progressively hybrid censoring is that the accuracy of the estimates can be extremely low if only a few units fail before the terminal time $T_{end}$. For this reason, Cho et al. [18] proposed a generalized progressively hybrid censoring scheme. Suppose n identical units are placed on an experiment. To guarantee that at least k (pre-determined) failures are observed, the terminal time is adjusted to $T_{end}^* = \max\{X_k, \min\{X_m, T\}\}$. As before, $R_1$ units are randomly removed from the test when the first failure is observed, $R_2$ units are randomly removed when the second failure is observed, and so on, until all remaining units are removed when $T_{end}^*$ is reached. The three possible cases of the generalized progressively hybrid censoring scheme, with the corresponding conditions, terminal times ($T_{end}^*$) and terminal removals $R_1^*$, $R_2^*$, $R_3^*$, are as follows.
• Case I ($T < X_k < X_m$): $T_{end}^* = X_k$, $R_1^* = n - k - \sum_{i=1}^{k-1} R_i.$
• Case II ($X_k < T < X_m$): $T_{end}^* = T$, $R_2^* = n - D - \sum_{i=1}^{D} R_i,$ where D denotes the number of failures observed before time T.
• Case III ($X_k < X_m < T$): $T_{end}^* = X_m$, $R_3^* = n - m - \sum_{i=1}^{m-1} R_i = R_m.$
The objective of this paper is to derive the maximum likelihood estimator (MLE) and Bayesian estimators of entropy, and to compare the proposed estimates of entropy with respect to mean squared error (MSE) and bias value. We first derive the maximum likelihood estimators of the unknown parameters for the Lomax distribution and calculate the entropy based on the invariance property of the MLE. Next, we consider the Bayesian estimators of the entropy under squared error, linex and general entropy loss function. It is to be noted that it is not easy to obtain explicit expressions of Bayesian estimates and thus the Lindley method and the Tierney and Kadane method are applied. We conduct a simulation study and compare the performance of all the estimates of entropy with respect to MSE and bias values.
The rest of this paper is organized as follows. In Section 2, we deal with computing entropy based on MLEs. Bayesian estimators under squared error, linex and general entropy loss function are derived using the Lindley method and Tierney and Kadane method in Section 3. A simulation study is conducted and presented to study the performance of the estimates in Section 4. In Section 5, a real data set is analyzed for illustrative purposes. Finally, we conclude the paper in Section 6.

## 2. Maximum Likelihood Estimator

In this section, we obtain the maximum likelihood estimators of entropy based on generalized progressively hybrid censoring. The likelihood functions and log-likelihood functions for Cases I, II and III can be found in Appendix A; they can be combined and written as
$L(data|\alpha, \lambda) = L \propto \alpha^{J} \lambda^{-J} \prod_{i=1}^{J} \left(1 + \frac{X_i}{\lambda}\right)^{-[\alpha(1+R_i)+1]} e^{-\alpha W(\lambda)}.$
$l(data|\alpha, \lambda) = l \propto J \log(\alpha) - J \log(\lambda) - \alpha \sum_{i=1}^{J} (1+R_i) \log\left(1 + \frac{X_i}{\lambda}\right) - \sum_{i=1}^{J} \log\left(1 + \frac{X_i}{\lambda}\right) - \alpha W(\lambda),$
where $J = k$, $R_k = R_1^*$, $W(\lambda) = 0$ for Case I; $J = D$, $W(\lambda) = R_2^* \log\left(1 + \frac{T}{\lambda}\right)$ for Case II; $J = m$, $W(\lambda) = 0$ for Case III.
Taking derivatives of Equation (5) with respect to $\alpha$ and $\lambda$ and setting them equal to zero, we get
$\frac{\partial l}{\partial \alpha} = \frac{J}{\alpha} - \sum_{i=1}^{J} (1+R_i) \log\left(1 + \frac{X_i}{\lambda}\right) - W(\lambda) = 0.$
$\frac{\partial l}{\partial \lambda} = -\frac{J}{\lambda} + \alpha \sum_{i=1}^{J} (1+R_i) \left(\frac{1}{\lambda} - \frac{1}{\lambda + X_i}\right) + \sum_{i=1}^{J} \left(\frac{1}{\lambda} - \frac{1}{\lambda + X_i}\right) - \alpha W_1(\lambda) = 0,$
where $W_1(\lambda) = 0$ for Cases I and III; $W_1(\lambda) = R_2^* \left(\frac{1}{\lambda + T} - \frac{1}{\lambda}\right)$ for Case II.
Equation (6) can also be written as
$\alpha = A(\lambda),$
where $A(\lambda) = \frac{J}{\sum_{i=1}^{J} (1+R_i) \log\left(1 + \frac{X_i}{\lambda}\right) + W(\lambda)}.$
Using Equation (8), Equation (7) can be written as
$\lambda = B(\lambda),$
where $B(\lambda) = \frac{J}{A(\lambda) \sum_{i=1}^{J} (1+R_i) \left(\frac{1}{\lambda} - \frac{1}{X_i + \lambda}\right) + \sum_{i=1}^{J} \left(\frac{1}{\lambda} - \frac{1}{X_i + \lambda}\right) - A(\lambda) W_1(\lambda)}.$
Therefore, we can use an iterative procedure, proposed in Reference [19] and applied in Reference [20], to compute the MLEs. Set an initial guess for $\lambda$, say $\lambda^{(1)}$, then let $\lambda^{(2)} = B(\lambda^{(1)})$, $\lambda^{(3)} = B(\lambda^{(2)})$, $\cdots$, $\lambda^{(i+1)} = B(\lambda^{(i)})$, $\cdots$. The iteration continues until $|\lambda^{(k+1)} - \lambda^{(k)}| < \varepsilon$, where $\varepsilon$ is a pre-specified tolerance limit. Then $\hat{\lambda} = \lambda^{(k+1)}$.
Then according to the invariance property of the MLE, using Equations (3) and (8), we can obtain:
$\hat{\alpha} = A(\hat{\lambda}),$
$\hat{H} = \log(\hat{\lambda}) - \log(\hat{\alpha}) + \frac{1}{\hat{\alpha}} + 1.$
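The estimation route above can be sketched in code. The sketch below (ours, with our own function names) treats the complete-sample case (Case III with no removals before the final failure, so $J = n$ and $W = W_1 = 0$); because the direct iteration $\lambda^{(i+1)} = B(\lambda^{(i)})$ is not guaranteed to converge for every data set, the fixed point is instead located here by bracketing the sign change of $\lambda - B(\lambda)$ on a grid and bisecting, which solves the same equation:

```python
import math, random

def make_mle_funcs(X, R):
    # A(lam) and B(lam) from Equations (8) and (9), complete-sample
    # case (no time censoring: J = len(X), W = W1 = 0).
    J = len(X)
    def A(lam):
        s = sum((1 + r) * math.log(1 + x / lam) for x, r in zip(X, R))
        return J / s
    def B(lam):
        a = A(lam)
        s1 = sum((1 + r) * (1 / lam - 1 / (lam + x)) for x, r in zip(X, R))
        s2 = sum(1 / lam - 1 / (lam + x) for x in X)
        return J / (a * s1 + s2)
    return A, B

def solve_mle(X, R):
    # Locate the fixed point lam = B(lam) by bracketing the sign change
    # of h(lam) = lam - B(lam) on a log-spaced grid, then bisecting.
    A, B = make_mle_funcs(X, R)
    h = lambda lam: lam - B(lam)
    grid = [10.0 ** (p / 10.0) for p in range(-40, 41)]  # 1e-4 .. 1e4
    bracket = next(((g1, g2) for g1, g2 in zip(grid, grid[1:])
                    if h(g1) * h(g2) < 0), None)
    if bracket is None:
        raise RuntimeError("no interior MLE found")
    lo, hi = bracket
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    lam_hat = 0.5 * (lo + hi)
    alpha_hat = A(lam_hat)
    # entropy via the invariance property of the MLE
    H_hat = math.log(lam_hat) - math.log(alpha_hat) + 1 / alpha_hat + 1
    return alpha_hat, lam_hat, H_hat
```

With a sample simulated from a heavy-tailed Lomax model, the returned $\hat{\lambda}$ satisfies $\hat{\lambda} = B(\hat{\lambda})$ to numerical precision, so both score equations hold at $(\hat{\alpha}, \hat{\lambda})$ by construction.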

## 3. Bayesian Estimation

#### 3.1. Prior and Posterior Distributions

Note that both parameters $\alpha$ and $\lambda$ are unknown, so no natural conjugate bivariate prior distribution exists. We therefore assume that $\alpha$ and $\lambda$ have independent $Gamma(a,b)$ and $Gamma(c,d)$ priors, with corresponding means $\frac{a}{b}$ and $\frac{c}{d}$, respectively. The priors of $\alpha$ and $\lambda$ can be written as:
$\alpha \sim Ga(a, b), \quad \pi_1(\alpha) = \frac{b^a}{\Gamma(a)} \alpha^{a-1} e^{-b\alpha}.$
$\lambda \sim Ga(c, d), \quad \pi_2(\lambda) = \frac{d^c}{\Gamma(c)} \lambda^{c-1} e^{-d\lambda}.$
where $a , b , c , d$ are the positive hyperparameters containing the prior knowledge.
Thus, the joint prior distribution is given by:
$\pi_0(\alpha, \lambda) \propto \alpha^{a-1} \lambda^{c-1} e^{-b\alpha} e^{-d\lambda}.$
The posterior distribution of $α$ and $λ$ is:
$\pi(\alpha, \lambda | data) = \frac{L(data|\alpha,\lambda) \pi_0(\alpha,\lambda)}{\int_0^{\infty} \int_0^{\infty} L(data|\alpha,\lambda) \pi_0(\alpha,\lambda) \, d\alpha \, d\lambda}.$
Let $g ^ ( α , λ )$ be the expectation of a function of $α$ and $λ$, say $g ( α , λ )$. Then we obtain:
$\hat{g}(\alpha, \lambda) = E[g(\alpha,\lambda)|data] = \frac{\int_0^{\infty} \int_0^{\infty} g(\alpha,\lambda) L(data|\alpha,\lambda) \pi_0(\alpha,\lambda) \, d\alpha \, d\lambda}{\int_0^{\infty} \int_0^{\infty} L(data|\alpha,\lambda) \pi_0(\alpha,\lambda) \, d\alpha \, d\lambda}.$
In this paper, we compute Bayesian estimates only for the entropy, so $g(\alpha,\lambda)$ is in particular a function of H.

#### 3.2. Loss Function

In this subsection, we consider Bayesian estimators under symmetric and asymmetric loss functions. One of the most widely used symmetric loss functions is the squared error loss function, given by:
$L_1(\theta, \delta) = (\delta - \theta)^2,$
where $δ$ is an estimator of $θ$.
In this case, the Bayesian estimator (say $θ ^ S$) is obtained as:
$\hat{\theta}_S = E(\theta | data).$
As for asymmetric loss functions, we choose the two most commonly used: the linex loss function and the general entropy loss function. The linex loss function is defined as:
$L_2(\theta, \delta) = e^{h(\delta - \theta)} - h(\delta - \theta) - 1,$
where $\delta$ is an estimator of $\theta$, and the sign and magnitude of the constant h determine the direction and degree of asymmetry. We refer to Reference [21] for detailed information about the linex loss function.
The Bayesian estimator under linex function (say $θ ^ l$) is given by:
$\hat{\theta}_l = -\frac{1}{h} \log\left[E(e^{-h\theta} | data)\right].$
Next, the general entropy loss function is defined as:
$L_3(\theta, \delta) = \left(\frac{\delta}{\theta}\right)^q - q \log\left(\frac{\delta}{\theta}\right) - 1,$
where $\delta$ is an estimator of $\theta$, and the sign and magnitude of the constant q determine the direction and degree of asymmetry. For detailed information, we refer to Reference [22].
The Bayesian estimator in this situation (say $θ ^ e$) is given as:
$\hat{\theta}_e = \left[E(\theta^{-q} | data)\right]^{-\frac{1}{q}}.$
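The three estimators in Equations (12)-(14) are different functionals of the same posterior distribution. As an illustration (ours, not from the paper), given any set of draws standing in for posterior samples of a positive quantity $\theta$, each estimator reduces to a simple sample average; the synthetic draws and the values of h and q used below are chosen purely for demonstration:

```python
import math, random

def bayes_estimates(draws, h=0.5, q=0.2):
    # draws: samples standing in for posterior draws of a positive theta
    n = len(draws)
    # Squared error loss, Eq. (12): posterior mean
    theta_s = sum(draws) / n
    # Linex loss, Eq. (13): -(1/h) log E[exp(-h * theta)]
    theta_l = -math.log(sum(math.exp(-h * t) for t in draws) / n) / h
    # General entropy loss, Eq. (14): (E[theta^-q])^(-1/q)
    theta_e = (sum(t ** (-q) for t in draws) / n) ** (-1.0 / q)
    return theta_s, theta_l, theta_e
```

By Jensen's inequality the linex estimate with h > 0 lies below the posterior mean, the general entropy estimate with q > 0 is a power mean that also lies below it, and as h tends to 0 the linex estimate approaches the posterior mean.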
Using Equations (12)–(14), the Bayesian estimators of entropy under squared error loss function, linex loss function and general entropy loss function can be obtained as:
$\hat{H}_S(f) = \frac{\int_0^{\infty} \int_0^{\infty} H(f) L(data|\alpha,\lambda) \pi_0(\alpha,\lambda) \, d\alpha \, d\lambda}{\int_0^{\infty} \int_0^{\infty} L(data|\alpha,\lambda) \pi_0(\alpha,\lambda) \, d\alpha \, d\lambda}.$
$\hat{H}_L(f) = -\frac{1}{h} \log\left[\frac{\int_0^{\infty} \int_0^{\infty} e^{-h H(f)} L(data|\alpha,\lambda) \pi_0(\alpha,\lambda) \, d\alpha \, d\lambda}{\int_0^{\infty} \int_0^{\infty} L(data|\alpha,\lambda) \pi_0(\alpha,\lambda) \, d\alpha \, d\lambda}\right].$
$\hat{H}_E(f) = \left[\frac{\int_0^{\infty} \int_0^{\infty} H(f)^{-q} L(data|\alpha,\lambda) \pi_0(\alpha,\lambda) \, d\alpha \, d\lambda}{\int_0^{\infty} \int_0^{\infty} L(data|\alpha,\lambda) \pi_0(\alpha,\lambda) \, d\alpha \, d\lambda}\right]^{-\frac{1}{q}}.$
It is to be noted that all the Bayesian estimates of entropy take the form of a ratio of two integrals, which cannot be simplified or computed directly. Thus, we apply the Lindley method and the Tierney and Kadane method to compute the estimates.

#### 3.3. Lindley Method

In this subsection, we apply the Lindley method ([23]) to compute the approximate Bayesian estimates of entropy. For the two-parameter case, the Lindley method can be written as:
$\hat{g} = g(\hat{\alpha}, \hat{\lambda}) + 0.5\left[(u_{11}\tau_{11} + 2u_{12}\tau_{12} + u_{22}\tau_{22}) + l_{30}(u_1\tau_{11} + u_2\tau_{12})\tau_{11} + l_{03}(u_2\tau_{22} + u_1\tau_{21})\tau_{22} + l_{21}\left(3u_1\tau_{11}\tau_{12} + u_2(\tau_{11}\tau_{22} + 2\tau_{12}^2)\right) + l_{12}\left(3u_2\tau_{22}\tau_{21} + u_1(\tau_{22}\tau_{11} + 2\tau_{21}^2)\right)\right] + p_1(u_1\tau_{11} + u_2\tau_{21}) + p_2(u_2\tau_{22} + u_1\tau_{12}).$
The notations in this formula and some basic formulas are omitted here and presented in Appendix B.
• For squared error loss function, we take:
$g(\alpha, \lambda) = H(f) = \log(\lambda) - \log(\alpha) + \frac{1}{\alpha} + 1.$
Then we can compute that:
$u_1 = -\frac{1}{\alpha} - \frac{1}{\alpha^2}, \quad u_2 = \frac{1}{\lambda}, \quad u_{12} = u_{21} = 0, \quad u_{11} = \frac{1}{\alpha^2} + \frac{2}{\alpha^3}, \quad u_{22} = -\frac{1}{\lambda^2}.$
Then using Equation (18), the Bayesian estimates under squared error loss function can be obtained as:
$\hat{H}_S = \hat{g} = g(\hat{\alpha}, \hat{\lambda}) + 0.5\left[(u_{11}\tau_{11} + u_{22}\tau_{22}) + l_{30}(u_1\tau_{11} + u_2\tau_{12})\tau_{11} + l_{03}(u_2\tau_{22} + u_1\tau_{21})\tau_{22} + l_{12}\left(3u_2\tau_{22}\tau_{21} + u_1(\tau_{22}\tau_{11} + 2\tau_{21}^2)\right)\right] + p_1(u_1\tau_{11} + u_2\tau_{21}) + p_2(u_2\tau_{22} + u_1\tau_{12}).$
• Further, for the linex loss function, we take:
$g(\alpha, \lambda) = e^{-h H(f)}, \quad H(f) = \log(\lambda) - \log(\alpha) + \frac{1}{\alpha} + 1.$
We can obtain that:
$u_1 = h e^{-hH}\left(\frac{1}{\alpha} + \frac{1}{\alpha^2}\right), \quad u_2 = -\frac{h}{\lambda} e^{-hH}, \quad u_{12} = u_{21} = -\frac{h^2}{\lambda} e^{-hH}\left(\frac{1}{\alpha} + \frac{1}{\alpha^2}\right), \quad u_{11} = h e^{-hH}\left[h\left(\frac{1}{\alpha} + \frac{1}{\alpha^2}\right)^2 - \left(\frac{1}{\alpha^2} + \frac{2}{\alpha^3}\right)\right], \quad u_{22} = \frac{h}{\lambda^2} e^{-hH}(1 + h).$
Similarly, using Equation (18), we can derive the Bayesian estimates of entropy under the linex loss function as:
$\hat{H}_l = -\frac{1}{h} \log(\hat{g}) = -\frac{1}{h} \log\left\{g(\hat{\alpha}, \hat{\lambda}) + 0.5\left[(u_{11}\tau_{11} + 2u_{12}\tau_{12} + u_{22}\tau_{22}) + l_{30}(u_1\tau_{11} + u_2\tau_{12})\tau_{11} + l_{03}(u_2\tau_{22} + u_1\tau_{21})\tau_{22} + l_{12}\left(3u_2\tau_{22}\tau_{21} + u_1(\tau_{22}\tau_{11} + 2\tau_{21}^2)\right)\right] + p_1(u_1\tau_{11} + u_2\tau_{21}) + p_2(u_2\tau_{22} + u_1\tau_{12})\right\}.$
• For general entropy loss function, we take:
$g(\alpha, \lambda) = H(f)^{-q}, \quad H(f) = \log(\lambda) - \log(\alpha) + \frac{1}{\alpha} + 1.$
We can derive that:
$u_1 = q H^{-(q+1)}\left(\frac{1}{\alpha} + \frac{1}{\alpha^2}\right), \quad u_2 = -\frac{q}{\lambda} H^{-(q+1)}, \quad u_{12} = u_{21} = -\frac{q(q+1)}{\lambda} H^{-(q+2)}\left(\frac{1}{\alpha} + \frac{1}{\alpha^2}\right), \quad u_{11} = \frac{q H^{-(q+2)}}{\alpha^2}\left[(q+1)\left(1 + \frac{1}{\alpha}\right)^2 - H\left(1 + \frac{2}{\alpha}\right)\right], \quad u_{22} = \frac{q}{\lambda^2} H^{-(q+2)}(1 + q + H).$
Thus, the Bayesian estimate under general entropy loss function can be computed using:
$\hat{H}_e = \hat{g}^{-\frac{1}{q}} = \left\{g(\hat{\alpha}, \hat{\lambda}) + 0.5\left[(u_{11}\tau_{11} + 2u_{12}\tau_{12} + u_{22}\tau_{22}) + l_{30}(u_1\tau_{11} + u_2\tau_{12})\tau_{11} + l_{03}(u_2\tau_{22} + u_1\tau_{21})\tau_{22} + l_{12}\left(3u_2\tau_{22}\tau_{21} + u_1(\tau_{22}\tau_{11} + 2\tau_{21}^2)\right)\right] + p_1(u_1\tau_{11} + u_2\tau_{21}) + p_2(u_2\tau_{22} + u_1\tau_{12})\right\}^{-\frac{1}{q}}.$
In the next subsection, we apply the Tierney and Kadane method to compute the approximate Bayesian estimate.

#### 3.4. Tierney and Kadane Method

An alternative to the Lindley method for approximating the integrals is the method proposed by Tierney and Kadane. Detailed information about the Tierney and Kadane method can be found in Reference [24], and a comparison between the Tierney and Kadane method and the Lindley method can be found in Reference [25]. In this subsection, we use the Tierney and Kadane method to estimate the entropy. We consider a function of $\alpha$ and $\lambda$, say $g(\alpha, \lambda)$. The formulas in the Tierney and Kadane method are as follows:
$\delta(\alpha, \lambda) = \frac{l(data|\alpha,\lambda) + \log \pi_0(\alpha,\lambda)}{n}, \quad \delta^*(\alpha, \lambda) = \delta(\alpha, \lambda) + \frac{\log(g(\alpha,\lambda))}{n}, \quad \hat{g} = \sqrt{\frac{|\Sigma^*|}{|\Sigma|}} \, e^{n\left[\delta^*(\hat{\alpha}_{T^*}, \hat{\lambda}_{T^*}) - \delta(\hat{\alpha}_T, \hat{\lambda}_T)\right]},$
where $\Sigma$ and $\Sigma^*$ are the inverses of the negative Hessian matrices of $\delta$ and $\delta^*$ at their respective maxima $(\hat{\alpha}_T, \hat{\lambda}_T)$ and $(\hat{\alpha}_{T^*}, \hat{\lambda}_{T^*})$.
Note that using Equations (5) and (10), we have:
$\delta(\alpha, \lambda) = \frac{1}{n}\left[(J + a - 1)\log\alpha + (c - 1 - J)\log\lambda - (b + W(\lambda))\alpha - d\lambda - \alpha\sum_{i=1}^{J}(1+R_i)\log\left(1 + \frac{X_i}{\lambda}\right) - \sum_{i=1}^{J}\log\left(1 + \frac{X_i}{\lambda}\right)\right].$
In order to compute $α T ^$ and $λ T ^$, we need to solve the following equations:
$\frac{\partial \delta}{\partial \alpha} = \frac{1}{n}\left[\frac{J + a - 1}{\alpha} - (b + W(\lambda)) - \sum_{i=1}^{J}(1+R_i)\log\left(1 + \frac{X_i}{\lambda}\right)\right] = 0.$
$\frac{\partial \delta}{\partial \lambda} = \frac{1}{n}\left[\frac{c - J - 1}{\lambda} - \alpha W_1(\lambda) - d + \alpha\sum_{i=1}^{J}(1+R_i)\left(\frac{1}{\lambda} - \frac{1}{\lambda + X_i}\right) + \sum_{i=1}^{J}\left(\frac{1}{\lambda} - \frac{1}{\lambda + X_i}\right)\right] = 0.$
We can also find that
$\frac{\partial^2 \delta}{\partial \alpha^2} = -\frac{J + a - 1}{n\alpha^2}, \quad \frac{\partial^2 \delta}{\partial \alpha \partial \lambda} = \frac{1}{n}\left[-W_1(\lambda) + \sum_{i=1}^{J}(1+R_i)\left(\frac{1}{\lambda} - \frac{1}{\lambda + X_i}\right)\right], \quad \frac{\partial^2 \delta}{\partial \lambda^2} = \frac{1}{n}\left[\frac{J + 1 - c}{\lambda^2} - \alpha W_2(\lambda) + \alpha\sum_{i=1}^{J}(1+R_i)\left(\frac{1}{(\lambda + X_i)^2} - \frac{1}{\lambda^2}\right) + \sum_{i=1}^{J}\left(\frac{1}{(\lambda + X_i)^2} - \frac{1}{\lambda^2}\right)\right].$
Thus, we can compute $|\Sigma| = \left[\frac{\partial^2 \delta}{\partial \alpha^2}\frac{\partial^2 \delta}{\partial \lambda^2} - \left(\frac{\partial^2 \delta}{\partial \alpha \partial \lambda}\right)^2\right]^{-1}$ at $(\hat{\alpha}_T, \hat{\lambda}_T)$.
• For squared error loss function, we take:
$g(\alpha, \lambda) = H(f) = \log(\lambda) - \log(\alpha) + \frac{1}{\alpha} + 1, \quad \delta_S^* = \delta(\alpha, \lambda) + \frac{\log H}{n}.$
Taking derivatives of $\delta_S^*$ with respect to $\alpha$ and $\lambda$, we obtain:
$\frac{\partial \delta_S^*}{\partial \alpha} = \frac{\partial \delta}{\partial \alpha} - \frac{1}{nH}\left(\frac{1}{\alpha} + \frac{1}{\alpha^2}\right) = 0,$
$\frac{\partial \delta_S^*}{\partial \lambda} = \frac{\partial \delta}{\partial \lambda} + \frac{1}{nH\lambda} = 0.$
From the equations above, we can obtain $(\hat{\alpha}_{T^*}, \hat{\lambda}_{T^*})$. In order to compute $|\Sigma^*| = \left[\frac{\partial^2 \delta_S^*}{\partial \alpha^2}\frac{\partial^2 \delta_S^*}{\partial \lambda^2} - \left(\frac{\partial^2 \delta_S^*}{\partial \alpha \partial \lambda}\right)^2\right]^{-1}$ at $(\hat{\alpha}_{T^*}, \hat{\lambda}_{T^*})$, we also need to compute:
$\frac{\partial^2 \delta_S^*}{\partial \alpha \partial \lambda} = \frac{\partial^2 \delta}{\partial \alpha \partial \lambda} + \frac{1}{nH^2\lambda}\left(\frac{1}{\alpha} + \frac{1}{\alpha^2}\right), \quad \frac{\partial^2 \delta_S^*}{\partial \alpha^2} = \frac{\partial^2 \delta}{\partial \alpha^2} + \frac{1}{nH^2\alpha^2}\left[H\left(1 + \frac{2}{\alpha}\right) - \left(1 + \frac{1}{\alpha}\right)^2\right], \quad \frac{\partial^2 \delta_S^*}{\partial \lambda^2} = \frac{\partial^2 \delta}{\partial \lambda^2} - \frac{H + 1}{nH^2\lambda^2}.$
So the Bayesian estimate under the squared error loss function is:
$\hat{H}_S = \hat{g} = \sqrt{\frac{|\Sigma^*|}{|\Sigma|}} \, e^{n\left[\delta_S^*(\hat{\alpha}_{T^*}, \hat{\lambda}_{T^*}) - \delta(\hat{\alpha}_T, \hat{\lambda}_T)\right]}.$
• Further, for linex loss function, we take:
$g(\alpha, \lambda) = e^{-h H(f)}, \quad H(f) = \log(\lambda) - \log(\alpha) + \frac{1}{\alpha} + 1, \quad \delta_l^* = \delta(\alpha, \lambda) - \frac{h H}{n}.$
Thus, we have:
$\frac{\partial \delta_l^*}{\partial \alpha} = \frac{\partial \delta}{\partial \alpha} + \frac{h}{n}\left(\frac{1}{\alpha} + \frac{1}{\alpha^2}\right) = 0,$
$\frac{\partial \delta_l^*}{\partial \lambda} = \frac{\partial \delta}{\partial \lambda} - \frac{h}{n\lambda} = 0.$
Using Equations (23) and (24), we can obtain $( α T * ^ , λ T * ^ )$.
We can also compute that:
$\frac{\partial^2 \delta_l^*}{\partial \alpha \partial \lambda} = \frac{\partial^2 \delta}{\partial \alpha \partial \lambda},$
$\frac{\partial^2 \delta_l^*}{\partial \alpha^2} = \frac{\partial^2 \delta}{\partial \alpha^2} - \frac{h}{n\alpha^2}\left(1 + \frac{2}{\alpha}\right),$
$\frac{\partial^2 \delta_l^*}{\partial \lambda^2} = \frac{\partial^2 \delta}{\partial \lambda^2} + \frac{h}{n\lambda^2}.$
Thus, we can compute $|\Sigma^*| = \left[\frac{\partial^2 \delta_l^*}{\partial \alpha^2}\frac{\partial^2 \delta_l^*}{\partial \lambda^2} - \left(\frac{\partial^2 \delta_l^*}{\partial \alpha \partial \lambda}\right)^2\right]^{-1}$ at $(\hat{\alpha}_{T^*}, \hat{\lambda}_{T^*})$. The Bayesian estimate under the linex loss function can be derived as:
$\hat{H}_l = -\frac{1}{h} \log(\hat{g}) = -\frac{1}{h} \log\left[\sqrt{\frac{|\Sigma^*|}{|\Sigma|}} \, e^{n\left[\delta_l^*(\hat{\alpha}_{T^*}, \hat{\lambda}_{T^*}) - \delta(\hat{\alpha}_T, \hat{\lambda}_T)\right]}\right].$
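As a concrete sketch of the Tierney and Kadane computation under the linex loss (ours, not the paper's; all names are our own), the routine below handles the complete-sample case ($J = n$, all $R_i = 0$, $W = W_1 = 0$). Rather than using the closed-form derivatives above, it maximizes the two surfaces $\delta$ and $\delta_l^*$ numerically by coordinate ascent with golden-section line searches and approximates the Hessian determinants by finite differences:

```python
import math, random

def tk_linex_entropy(X, h=0.5, a=2.0, b=5.0, c=1.0, d=5.0):
    # Tierney-Kadane estimate of entropy under linex loss for a
    # complete sample (Case III, no removals: J = n, W = W1 = 0).
    n = len(X)

    def H(al, la):  # entropy of Lomax(alpha, lambda)
        return math.log(la) - math.log(al) + 1.0 / al + 1.0

    def delta(al, la):  # (log-likelihood + log prior) / n
        s = sum(math.log(1.0 + x / la) for x in X)
        return ((n + a - 1) * math.log(al) + (c - 1 - n) * math.log(la)
                - b * al - d * la - al * s - s) / n

    def delta_star(al, la):  # delta_l^* = delta - h H / n
        return delta(al, la) - h * H(al, la) / n

    def golden_max(f, lo, hi, iters=60):
        # golden-section search for the maximum of f on [lo, hi]
        gr = (math.sqrt(5.0) - 1.0) / 2.0
        x1, x2 = hi - gr * (hi - lo), lo + gr * (hi - lo)
        f1, f2 = f(x1), f(x2)
        for _ in range(iters):
            if f1 > f2:
                hi, x2, f2 = x2, x1, f1
                x1 = hi - gr * (hi - lo); f1 = f(x1)
            else:
                lo, x1, f1 = x1, x2, f2
                x2 = lo + gr * (hi - lo); f2 = f(x2)
        return 0.5 * (lo + hi)

    def maximize(f):  # coordinate ascent over (alpha, lambda)
        al, la = 1.0, 1.0
        for _ in range(100):
            al = golden_max(lambda t: f(t, la), 1e-3, 50.0)
            la = golden_max(lambda t: f(al, t), 1e-3, 50.0)
        return al, la

    def hess_det(f, al, la, eps=1e-4):
        # determinant of the Hessian by central differences;
        # for a 2x2 matrix, det(-M) = det(M)
        faa = (f(al + eps, la) - 2 * f(al, la) + f(al - eps, la)) / eps ** 2
        fll = (f(al, la + eps) - 2 * f(al, la) + f(al, la - eps)) / eps ** 2
        fal = (f(al + eps, la + eps) - f(al + eps, la - eps)
               - f(al - eps, la + eps) + f(al - eps, la - eps)) / (4 * eps ** 2)
        return faa * fll - fal ** 2

    al0, la0 = maximize(delta)
    al1, la1 = maximize(delta_star)
    # |Sigma| = 1/det(-Hess delta), so the determinant ratio flips:
    g_hat = math.sqrt(hess_det(delta, al0, la0) /
                      hess_det(delta_star, al1, la1)) \
        * math.exp(n * (delta_star(al1, la1) - delta(al0, la0)))
    return -math.log(g_hat) / h, H(al0, la0)  # T-K estimate, mode plug-in
```

For a reasonable sample size the Tierney and Kadane estimate should differ from the plug-in value at the posterior mode only by a small correction of order 1/n.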
• As for general entropy loss function, we take:
$g(\alpha, \lambda) = H(f)^{-q}, \quad H(f) = \log(\lambda) - \log(\alpha) + \frac{1}{\alpha} + 1, \quad \delta_e^* = \delta(\alpha, \lambda) - \frac{q}{n}\log H.$
We can compute that:
$\frac{\partial \delta_e^*}{\partial \alpha} = \frac{\partial \delta}{\partial \alpha} + \frac{q}{nH}\left(\frac{1}{\alpha} + \frac{1}{\alpha^2}\right) = 0,$
$\frac{\partial \delta_e^*}{\partial \lambda} = \frac{\partial \delta}{\partial \lambda} - \frac{q}{nH\lambda} = 0.$
Using Equations (25) and (26), we can compute $( α T * ^ , λ T * ^ )$.
We can also compute that:
$\frac{\partial^2 \delta_e^*}{\partial \alpha \partial \lambda} = \frac{\partial^2 \delta}{\partial \alpha \partial \lambda} - \frac{q}{nH^2\lambda}\left(\frac{1}{\alpha} + \frac{1}{\alpha^2}\right),$
$\frac{\partial^2 \delta_e^*}{\partial \alpha^2} = \frac{\partial^2 \delta}{\partial \alpha^2} + \frac{q}{nH^2\alpha^2}\left[\left(1 + \frac{1}{\alpha}\right)^2 - H\left(1 + \frac{2}{\alpha}\right)\right],$
$\frac{\partial^2 \delta_e^*}{\partial \lambda^2} = \frac{\partial^2 \delta}{\partial \lambda^2} + \frac{q}{nH^2\lambda^2}(1 + H).$
Then, we can obtain $|\Sigma^*| = \left[\frac{\partial^2 \delta_e^*}{\partial \alpha^2}\frac{\partial^2 \delta_e^*}{\partial \lambda^2} - \left(\frac{\partial^2 \delta_e^*}{\partial \alpha \partial \lambda}\right)^2\right]^{-1}$ at $(\hat{\alpha}_{T^*}, \hat{\lambda}_{T^*})$.
Obviously, the Bayesian estimate under the general entropy loss function can be derived as:
$\hat{H}_e = \hat{g}^{-\frac{1}{q}} = \left[\sqrt{\frac{|\Sigma^*|}{|\Sigma|}} \, e^{n\left[\delta_e^*(\hat{\alpha}_{T^*}, \hat{\lambda}_{T^*}) - \delta(\hat{\alpha}_T, \hat{\lambda}_T)\right]}\right]^{-\frac{1}{q}}.$

## 4. Simulation Results

In this section, a simulation study is conducted in order to compare different estimates of entropy with respect to MSEs and bias values. First, we describe how to generate a generalized progressively hybrid censored sample for Lomax distribution.
Let $X = ( X 1 , X 2 , ⋯ , X m )$ be the Type II progressively censored sample for Lomax distribution with censoring scheme $R = ( R 1 , R 2 , ⋯ , R m )$. Then
$Y_i = \alpha \log\left(1 + \frac{X_i}{\lambda}\right), \quad i = 1, 2, \ldots, m,$
is the Type II progressively censored sample from a standard exponential distribution. According to the transformation proposed by Reference [16], let
$Z_1 = n Y_1, \quad Z_i = \left(n - \sum_{j=1}^{i-1}(R_j + 1)\right)(Y_i - Y_{i-1}), \quad i = 2, \ldots, m.$
Then $Z_1, Z_2, \cdots, Z_m$ are independent random variables from the standard exponential distribution. Conversely, after generating a Type II progressively censored sample $X_1, X_2, \cdots, X_m$, we need to convert it to a generalized progressively hybrid censored sample for given T and k. The algorithm for generating a generalized progressively hybrid censored sample can be described as follows:
• Generate $Z_1, Z_2, \cdots, Z_m$, where each $Z_i$ $(i = 1, 2, \cdots, m)$ is a random variable from the standard exponential distribution.
• Let $Y_1 = \frac{Z_1}{n}$ and $Y_i = Y_{i-1} + \frac{Z_i}{n - (i-1) - \sum_{k=1}^{i-1} R_k}$ for $i = 2, \ldots, m$; then $Y = (Y_1, Y_2, \cdots, Y_m)$ is a Type II progressively censored sample from the standard exponential distribution.
• Further, let $X_i = F^{-1}(1 - e^{-Y_i})$, where $F^{-1}$ is the inverse of the cumulative distribution function. Then $X = (X_1, X_2, \cdots, X_m)$ is a Type II progressively censored sample from the Lomax distribution.
• For pre-fixed T and k: if $T < X_k < X_m$, the generalized progressively hybrid censored sample is $X = (X_1, X_2, \cdots, X_k)$; if $X_k < T < X_m$, it is $X = (X_1, X_2, \cdots, X_D)$; and if $X_k < X_m < T$, it is $X = (X_1, X_2, \cdots, X_m)$.
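The four steps above can be sketched directly (an illustrative implementation under our own naming, using the Lomax inverse CDF $F^{-1}(u) = \lambda[(1-u)^{-1/\alpha} - 1]$):

```python
import math, random

def lomax_inv_cdf(u, alpha, lam):
    # F^{-1}(u) = lam * ((1 - u)^{-1/alpha} - 1)
    return lam * ((1.0 - u) ** (-1.0 / alpha) - 1.0)

def gen_gphc_sample(n, m, R, k, T, alpha, lam, rng):
    # Step 1: independent standard exponential variates
    Z = [rng.expovariate(1.0) for _ in range(m)]
    # Step 2: Type II progressively censored standard exponential sample
    Y, y = [], 0.0
    for i in range(m):
        y += Z[i] / (n - i - sum(R[:i]))
        Y.append(y)
    # Step 3: transform to a progressively censored Lomax sample
    X = [lomax_inv_cdf(1.0 - math.exp(-yi), alpha, lam) for yi in Y]
    # Step 4: truncate according to the generalized scheme
    if T < X[k - 1]:                    # Case I: stop at X_k
        return X[:k], "I"
    D = sum(1 for x in X if x <= T)     # failures observed before T
    if D < m:                           # Case II: stop at T
        return X[:D], "II"
    return X, "III"                     # Case III: stop at X_m
```

For instance, with n = 20, m = 10, k = 5 and the scheme $R_1 = \cdots = R_{10} = 1$, a small T tends to produce Case I, a moderate T Case II, and a large T Case III.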
Without loss of generality, we take $T = 1$, $\alpha = 0.4$ and $\lambda = 0.2$. The MLE of entropy is obtained from the MLEs of the unknown parameters. The Bayesian estimates under the squared error, linex and general entropy loss functions are derived using the Lindley method and the Tierney and Kadane method. Under the linex loss function, we choose $h = 0.2$ and $h = 0.5$; under the general entropy loss function, we choose $q = 0.2$ and $q = -0.2$. For computing Bayesian estimates with proper priors, we assign the hyperparameter values $a = 2, b = 5, c = 1, d = 5$. We also compute Bayesian estimates with respect to a non-informative prior, corresponding to hyperparameter values $a = b = c = d = 0$. The simulation results with non-informative priors are presented in Table 1 and the results with informative priors are presented in Table 2. We apply two different types of censoring schemes (Sch). Sch I: $R_1 = n - m$, $R_i = 0$ for $i \neq 1$; Sch II: $R_1 = R_2 = \cdots = R_{n-m} = 1$, $R_i = 0$ for $i > n - m$.
In all the tables, the bias values and the MSEs of the MLEs of entropy are presented in the fifth column. All other columns contain four values: the first two are the bias value and the MSE of the Bayesian estimate obtained by the Lindley method, and the third and fourth are the bias value and the MSE of the estimate obtained by the Tierney and Kadane method.
In general, we can observe that the MSE values decrease as the sample size n increases. For fixed n and m, the MSEs decrease as k increases. Moreover, the Bayesian estimates with informative priors perform much better than those with non-informative priors with respect to bias value and MSE. Further, it is observed that Bayesian estimates with proper priors usually perform better than the MLEs, while the MLEs compete very well with Bayesian estimates with non-informative priors. When the prior is informative, both methods perform well; when the prior is non-informative, the Tierney and Kadane method is a better choice. For the linex loss function, $h = 0.5$ seems to be a better choice than $h = 0.2$, and for the general entropy loss function, $q = 0.2$ competes well with $q = -0.2$. Overall, the Bayesian estimates with proper priors under the linex loss function behave better than the other estimates.

## 5. Data Analysis

In this section, a real data set is analyzed for illustrative purposes. We consider the data set obtained from a meteorological study in Reference [26] and analyzed by Reference [27]. The data are the radar-evaluated rainfall from 52 cumulus clouds (26 seeded clouds and 26 control clouds), so the sample size n is 52. We apply the Akaike information criterion (AIC), defined by $2(p - \log(L))$, and the Bayesian information criterion (BIC), defined by $p\log(n) - 2\log(L)$, where p is the number of parameters, n is the number of observations and L is the maximized value of the likelihood function, as well as the Kolmogorov-Smirnov (K-S) statistic with its p-value. The competing models are the gamma distribution and the generalized inverted exponential distribution. If the Lomax model fits the data well, it will have low AIC, BIC and K-S statistic values and a high p-value. We also draw quantile-quantile plots to assess the goodness of fit. The results are presented in Table 3 and Figure 1.
Although the gamma distribution competes well with the Lomax distribution, its second parameter is too small. Moreover, compared with the gamma distribution, the Lomax distribution is computationally easier to handle. So the Lomax distribution is a better choice. We apply the same censoring scheme as Reference [28], which is $R_1 = R_2 = \cdots = R_{24} = 1, R_{25} = 3$, $m = 25$ and $n = 52$; the ordered progressively Type II censored sample generated by Reference [28] is: 1, 4.1, 4.9, 4.9, 7.7, 11.5, 17.3, 17.5, 21.7, 26.3, 28.6, 29, 31.4, 36.6, 40.6, 41.1, 68.5, 81.2, 92.4, 95, 115.3, 118.3, 119, 163, 198.6. We take Case I (k = 15, T = 80), Case II (k = 15, T = 100) and Case III (k = 18, T = 80). For Case I, the generalized progressively hybrid censored sample is 1.0, 4.1, 4.9, 4.9, 7.7, 11.5, 17.3, 17.5, 21.7, 26.3, 28.6, 29.0, 31.4, 36.6, 40.6, 41.1, 68.5; for Case II, it is 1.0, 4.1, 4.9, 4.9, 7.7, 11.5, 17.3, 17.5, 21.7, 26.3, 28.6, 29.0, 31.4, 36.6, 40.6, 41.1, 68.5, 81.2, 92.4, 95.0; and for Case III, it is 1.0, 4.1, 4.9, 4.9, 7.7, 11.5, 17.3, 17.5, 21.7, 26.3, 28.6, 29.0, 31.4, 36.6, 40.6, 41.1, 68.5, 81.2.
For the Bayesian estimation, we apply the non-informative prior $a = b = c = d = 0$. For the linex loss function, we choose $h = 0.2$ and $h = 0.5$; for the general entropy loss function, we assign $q = 0.2$ and $q = -0.2$. The results are shown in Table 4. We can observe that, across the different methods and loss functions, the estimates of entropy are quite close to each other.

## 6. Conclusions

The problem of estimating the entropy of the Lomax distribution based on generalized progressively hybrid censoring is considered in this paper. The maximum likelihood estimator of entropy is derived. Further, we apply the Lindley method and the Tierney and Kadane method to compute Bayesian estimates under the squared error, linex and general entropy loss functions. The proposed estimates are then compared with respect to bias values and MSEs. It is observed that Bayesian estimates with proper priors behave better than the corresponding MLEs, while the MLEs compete very well with Bayesian estimates based on non-informative priors. Overall, according to the simulation results presented in this paper, the Bayesian estimates with proper priors under the linex loss function behave better than the other estimates.
Much research has been done based on traditional censoring schemes and progressive censoring schemes, while few studies have been conducted based on generalized progressively hybrid censoring schemes. Moreover, compared with the parameters themselves, entropy now has a wider use in many fields. Therefore, the estimation of entropy for other distributions based on generalized progressively hybrid censoring still holds great potential for future study.

## Author Contributions

Investigation, S.L.; Supervision, W.G.

## Funding

This research was supported by Project 202010004004 of the National Training Program of Innovation and Entrepreneurship for Undergraduates.

## Acknowledgments

The authors would like to thank the four referees and the editor for their careful reading and constructive comments, which led to this substantially improved version.

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix A. Likelihood and Log-Likelihood Functions

The likelihood functions and log-likelihood functions are presented in this Section as follows:
Case I:
$L \propto \alpha^{k} \lambda^{-k} \left(1 + \frac{X_k}{\lambda}\right)^{-[\alpha(1+R_1^*)+1]} \prod_{i=1}^{k-1} \left(1 + \frac{X_i}{\lambda}\right)^{-[\alpha(1+R_i)+1]}.$
$l \propto k\log(\alpha) - k\log(\lambda) - \alpha(1+R_1^*)\log\left(1 + \frac{X_k}{\lambda}\right) - \log\left(1 + \frac{X_k}{\lambda}\right) - \alpha\sum_{i=1}^{k-1}(1+R_i)\log\left(1 + \frac{X_i}{\lambda}\right) - \sum_{i=1}^{k-1}\log\left(1 + \frac{X_i}{\lambda}\right).$
Case II:
$L \propto \alpha^{D} \lambda^{-D} \left(1 + \frac{T}{\lambda}\right)^{-\alpha R_2^*} \prod_{i=1}^{D} \left(1 + \frac{X_i}{\lambda}\right)^{-[\alpha(R_i+1)+1]}.$
$l \propto D\log(\alpha) - D\log(\lambda) - \alpha R_2^* \log\left(1 + \frac{T}{\lambda}\right) - \alpha\sum_{i=1}^{D}(1+R_i)\log\left(1 + \frac{X_i}{\lambda}\right) - \sum_{i=1}^{D}\log\left(1 + \frac{X_i}{\lambda}\right).$
Case III:
$L \propto \frac{\alpha^{m}}{\lambda^{m}} \prod_{i=1}^{m}\left(1+\frac{X_{i}}{\lambda}\right)^{-\left[\alpha(R_{i}+1)+1\right]}.$
$l \propto m\log(\alpha)-m\log(\lambda)-\alpha\sum_{i=1}^{m}(1+R_{i})\log\left(1+\frac{X_{i}}{\lambda}\right)-\sum_{i=1}^{m}\log\left(1+\frac{X_{i}}{\lambda}\right).$
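The three cases share the same structure, so a short code sketch suffices; below is Case III, with the two sums combined into the single exponent $-[\alpha(1+R_i)+1]$ and the constant terms dropped under proportionality. The function name is ours, not from the paper:

```python
import math

def loglik_case3(alpha, lam, x, R):
    """Case III log-likelihood (up to an additive constant) for a
    progressively Type-II censored Lomax sample: observed failures
    x[0..m-1], with R[i] surviving units removed at the i-th failure."""
    m = len(x)
    ll = m * math.log(alpha) - m * math.log(lam)
    for xi, ri in zip(x, R):
        # combines the two sums: -[alpha*(1 + R_i) + 1] * log(1 + X_i/lam)
        ll -= (alpha * (1.0 + ri) + 1.0) * math.log(1.0 + xi / lam)
    return ll
```

Maximizing this function in $\alpha$ and $\lambda$ (numerically, e.g. by Newton-type iteration) yields the MLEs that feed the entropy estimate.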

## Appendix B. Lindley Method

The notation and basic expressions appearing in (18) of the Lindley method are as follows:
$u_{i}=\frac{\partial g}{\partial \theta_{i}},\quad u_{ij}=\frac{\partial^{2} g}{\partial \theta_{i}\partial \theta_{j}},\quad l_{ij}=\frac{\partial^{i+j} l}{\partial \theta_{1}^{i}\partial \theta_{2}^{j}},\quad p=\log\left(\pi_{0}(\alpha,\lambda)\right),\quad p_{i}=\frac{\partial p}{\partial \theta_{i}},$
where $\tau_{ij}$ denotes the $(i,j)$-th element of the inverse of the negative Hessian matrix of the log-likelihood function.
For the Lomax distribution based on a generalized progressively hybrid censored sample, we have:
$l_{30}=\frac{2J}{\alpha^{3}},\quad l_{03}=-\frac{2J}{\lambda^{3}}+\alpha\sum_{i=1}^{J}(1+R_{i})\left(\frac{2}{\lambda^{3}}-\frac{2}{(\lambda+X_{i})^{3}}\right)+\sum_{i=1}^{J}\left(\frac{2}{\lambda^{3}}-\frac{2}{(\lambda+X_{i})^{3}}\right)-\alpha W_{3}(\lambda),$
$l_{21}=0,\quad l_{12}=\sum_{i=1}^{J}(1+R_{i})\left(\frac{1}{(\lambda+X_{i})^{2}}-\frac{1}{\lambda^{2}}\right)-W_{2}(\lambda),\quad l_{20}=-\frac{J}{\alpha^{2}},$
$l_{02}=\frac{J}{\lambda^{2}}+\alpha\sum_{i=1}^{J}(1+R_{i})\left(\frac{1}{(\lambda+X_{i})^{2}}-\frac{1}{\lambda^{2}}\right)+\sum_{i=1}^{J}\left(\frac{1}{(\lambda+X_{i})^{2}}-\frac{1}{\lambda^{2}}\right)-\alpha W_{2}(\lambda),$
$l_{11}=\sum_{i=1}^{J}(1+R_{i})\left(\frac{1}{\lambda}-\frac{1}{\lambda+X_{i}}\right)-W_{1}(\lambda),$
where $W_{1}(\lambda)=R_{2}^{*}\left(\frac{1}{T+\lambda}-\frac{1}{\lambda}\right)$, $W_{2}(\lambda)=R_{2}^{*}\left(\frac{1}{\lambda^{2}}-\frac{1}{(\lambda+T)^{2}}\right)$, and $W_{3}(\lambda)=R_{2}^{*}\left(\frac{2}{(\lambda+T)^{3}}-\frac{2}{\lambda^{3}}\right)$ for Case II, while $W_{1}(\lambda)=W_{2}(\lambda)=W_{3}(\lambda)=0$ for Cases I and III.
We also have:
$p_{1}=\frac{a-1}{\alpha}-b,\quad p_{2}=\frac{c-1}{\lambda}-d,\quad \tau_{11}=\frac{-l_{02}}{l_{20}l_{02}-l_{11}^{2}},\quad \tau_{12}=\tau_{21}=\frac{l_{11}}{l_{20}l_{02}-l_{11}^{2}},\quad \tau_{22}=\frac{-l_{20}}{l_{20}l_{02}-l_{11}^{2}}.$
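The second-order quantities above translate directly into code. The sketch below (our own naming, specialized to Cases I and III, where $W_1=W_2=W_3=0$) computes $l_{20}$, $l_{02}$, $l_{11}$, and the $\tau_{ij}$ entries:

```python
def lindley_tau(alpha, lam, x, R):
    """Second derivatives of the log-likelihood and the tau_{ij}
    elements of the inverse negative Hessian (Cases I/III: W_k = 0)."""
    J = len(x)
    l20 = -J / alpha ** 2
    l02 = J / lam ** 2
    l11 = 0.0
    for xi, ri in zip(x, R):
        # combines the alpha-weighted and unweighted sums in l02
        common = 1.0 / (lam + xi) ** 2 - 1.0 / lam ** 2
        l02 += (alpha * (1.0 + ri) + 1.0) * common
        l11 += (1.0 + ri) * (1.0 / lam - 1.0 / (lam + xi))
    det = l20 * l02 - l11 ** 2  # determinant of the negative Hessian
    return -l02 / det, l11 / det, -l20 / det  # tau11, tau12 (= tau21), tau22
```

Evaluated at the MLEs, these $\tau_{ij}$ are the ingredients that enter the Lindley correction terms in (18).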

## References

1. Lomax, K.S. Business Failures: Another Example of the Analysis of Failure Data. J. Am. Stat. Assoc. 1954, 49, 847–852. [Google Scholar] [CrossRef]
2. Ahsanullah, M. Record values of the Lomax distribution. Stat. Neerl. 2010, 45, 21–29. [Google Scholar] [CrossRef]
3. Afaq, A.; Ahmad, S.P.; Ahmed, A. Bayesian analysis of shape parameter of Lomax distribution using different loss functions. Int. J. Stat. Math. 2015, 2, 55–65. [Google Scholar]
4. Ismail, A.A. Optimum Failure-Censored Step-Stress Life Test Plans for the Lomax Distribution. Strength Mater. 2016, 48, 1–7. [Google Scholar] [CrossRef]
5. Lemonte, A.J.; Cordeiro, G.M. An extended Lomax distribution. Statistics 2013, 47, 800–816. [Google Scholar] [CrossRef]
6. Cordeiro, G.M.; Ortega, E.M.M.; Popovic, B.V. The gamma-Lomax distribution. J. Stat. Comput. Simul. 2015, 85, 305–319. [Google Scholar] [CrossRef]
7. Kilany, N.M. Weighted Lomax distribution. Springerplus 2016, 5, 1862–1880. [Google Scholar] [CrossRef]
8. Ahmadi, J.; Crescenzo, A.D.; Longobardi, M. On dynamic mutual information for bivariate lifetimes. Adv. Appl. Probab. 2015, 47, 1157–1174. [Google Scholar] [CrossRef] [Green Version]
9. Shannon, C.E. A mathematical theory of communication. Bell Labs Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
10. Cover, T.; Thomas, J. Elements of Information Theory; Wiley: Hoboken, NJ, USA, 2005. [Google Scholar]
11. Siamak, N.; Ehsan, S. Shannon entropy as a new measure of aromaticity, Shannon aromaticity. Phys. Chem. Chem. Phys. 2010, 12, 4742–4749. [Google Scholar]
12. Tahmasebi, S.; Behboodian, J. Shannon Entropy for the Feller-Pareto (FP) Family and Order Statistics of FP Subfamilies. Appl. Math. Sci. 2010, 4, 495–504. [Google Scholar]
13. Cho, Y.; Sun, H.; Lee, K. Estimating the Entropy of a Weibull Distribution under Generalized Progressive Hybrid Censoring. Entropy 2015, 17, 102–122. [Google Scholar] [CrossRef] [Green Version]
14. Seo, J.I.; Kang, S.B. Entropy Estimation of Generalized Half-Logistic Distribution (GHLD) Based on Type-II Censored Samples. Entropy 2014, 16, 443–454. [Google Scholar] [CrossRef] [Green Version]
15. Epstein, B. Truncated life-tests in the exponential case. Ann. Math. Stat. 1954, 25, 555–564. [Google Scholar] [CrossRef]
16. Balakrishnan, N.; Aggarwala, R. Progressive Censoring; Birkhauser: Boston, MA, USA, 2000. [Google Scholar]
17. Kundu, D.; Joarder, A. Analysis of Type-II progressively hybrid censored data. Comput. Stat. Data Anal. 2006, 50, 2509–2528. [Google Scholar] [CrossRef]
18. Cho, Y.; Sun, H.; Lee, K. An Estimation of the Entropy for a Rayleigh Distribution Based on Doubly-Generalized Type-II Hybrid Censored Samples. Entropy 2014, 16, 3655–3669. [Google Scholar] [CrossRef] [Green Version]
19. Kundu, D. On hybrid censored Weibull distribution. J. Stat. Plan. Inference 2007, 137, 2127–2142. [Google Scholar] [CrossRef] [Green Version]
20. Ma, Y.; Shi, Y. Inference for Lomax Distribution Based on Type-II Progressively Hybrid Censored Data; Vidyasagar University: Midnapore, India, 2013. [Google Scholar]
21. Parsian, A.; Kirmani, S. Handbook of Applied Econometrics and Statistical Inference; Chapter Estimation Under LINEX Loss Function; Marcel Dekker Inc.: New York, NY, USA, 2002; pp. 53–76. [Google Scholar]
22. Singh, P.K.; Singh, S.K.; Singh, U. Bayes Estimator of Inverse Gaussian Parameters Under General Entropy Loss Function Using Lindley’s Approximation. Commun. Stat. Simul. Comput. 2008, 37, 1750–1762. [Google Scholar] [CrossRef]
23. Lindley, D.V. Approximate Bayesian methods. Trab. Estad. Investig. Oper. 1980, 31, 223–245. [Google Scholar] [CrossRef]
24. Tierney, L.; Kadane, J. Accurate Approximations for Posterior Moments and Marginal Densities. J. Am. Stat. Assoc. 1986, 81, 82–86. [Google Scholar] [CrossRef]
25. Howlader, H.A. Bayesian survival estimation of Pareto distribution of the second kind based on failure-censored data. Comput. Stat. Data Anal. 2002, 38, 301–314. [Google Scholar] [CrossRef]
26. Simpson, J. Use of the gamma distribution in single-cloud rainfall analysis. Mon. Weather Rev. 1972, 100, 309–312. [Google Scholar] [CrossRef]
27. Giles, D.; Feng, H.; Godwin, R.T. On the bias of the maximum likelihood estimator for the two parameter Lomax distribution. Commun. Stat. Theory Methods 2013, 42, 1934–1950. [Google Scholar] [CrossRef]
28. Helu, A.; Samawi, H.; Raqab, M.Z. Estimation on Lomax progressive censoring using the EM algorithm. J. Stat. Comput. Simul. 2015, 85, 1035–1052. [Google Scholar] [CrossRef]
Figure 1. Quantile-quantile plots.
Table 1. The relative MSEs and biases of entropy estimates with MLE and the Bayes estimates with non-informative priors ($H ^$ for the MLEs, $H ^ s$ for the Bayesian estimates under the squared error loss function, $H ^ l$ for the Bayesian estimates under the linex loss function, $H ^ e$ for the Bayesian estimates under general entropy loss function, Lindley for the Lindley method and TK for the Tierney and Kadane method).
| n | m | k | Sch | Method | $H ^$ | $H ^ s$ | $H ^ l$ (h = 0.2) | $H ^ l$ (h = 0.5) | $H ^ e$ (q = 0.2) | $H ^ e$ (q = −0.2) |
|---|---|---|-----|--------|-------|--------|--------|--------|--------|--------|
| 60 | 52 | 40 | I | Lindley | −0.0157 (0.3544) | 0.0518 (0.4323) | 0.0171 (0.4141) | −0.0402 (0.4054) | −0.0712 (0.5412) | −0.0079 (0.4432) |
| | | | | TK | | 0.0282 (0.3818) | −0.0058 (0.3706) | −0.0566 (0.3562) | −0.0474 (0.3770) | −0.022 (0.3766) |
| | | | II | Lindley | −0.066 (0.374) | 0.0331 (0.4233) | 0.0001 (0.4065) | −0.0539 (0.395) | −0.0619 (0.4669) | −0.0303 (0.4521) |
| | | | | TK | | −0.0062 (0.35) | −0.0394 (0.341) | −0.0912 (0.3329) | −0.0818 (0.3516) | −0.0577 (0.3489) |
| | | 48 | I | Lindley | −0.0265 (0.2412) | 0.1006 (0.4526) | 0.077 (0.4618) | 0.0116 (0.3594) | 0.0433 (0.5667) | 0.0668 (0.5138) |
| | | | | TK | | 0.0861 (0.2981) | 0.0633 (0.2873) | 0.0238 (0.2744) | 0.0303 (0.2871) | 0.047 (0.2917) |
| | | | II | Lindley | −0.0224 (0.2501) | 0.0569 (0.2706) | 0.0312 (0.2632) | −0.0079 (0.2553) | 0.0013 (0.2662) | 0.02 (0.2669) |
| | | | | TK | | 0.0457 (0.2745) | 0.0221 (0.267) | −0.018 (0.2574) | −0.0135 (0.2744) | 0.0067 (0.2706) |
| 100 | 80 | 64 | I | Lindley | −0.0346 (0.206) | −0.0692 (0.2741) | −0.0978 (0.2948) | −0.1384 (0.3182) | −0.1935 (0.4859) | −0.1383 (0.3755) |
| | | | | TK | | 0.0319 (0.2074) | 0.0116 (0.2025) | −0.0189 (0.1967) | −0.0129 (0.2053) | 0.0033 (0.2056) |
| | | | II | Lindley | −0.0601 (0.2282) | −0.025 (0.2405) | −0.046 (0.238) | −0.077 (0.235) | −0.081 (0.2712) | −0.0567 (0.2443) |
| | | | | TK | | 0.0099 (0.2159) | −0.0105 (0.2115) | −0.042 (0.2068) | −0.0361 (0.2141) | −0.0213 (0.2147) |
| | | 72 | I | Lindley | −0.031 (0.1713) | 0.0379 (0.2766) | 0.0154 (0.2481) | −0.0125 (0.2291) | −0.0047 (0.2766) | 0.0066 (0.2538) |
| | | | | TK | | 0.022 (0.1773) | 0.0062 (0.174) | −0.0198 (0.1705) | −0.0148 (0.1763) | −0.0026 (0.1765) |
| | | | II | Lindley | −0.0138 (0.1814) | 0.03 (0.1914) | 0.0131 (0.1864) | −0.0128 (0.1809) | −0.0092 (0.1944) | 0.0059 (0.1877) |
| | | | | TK | | 0.0324 (0.1805) | 0.0161 (0.1767) | −0.0105 (0.1724) | −0.0052 (0.1779) | 0.0067 (0.1796) |

Entries are bias values with MSEs in parentheses; $H ^$ does not depend on the approximation method and is listed once per censoring scheme.
Table 2. The relative MSEs and biases of entropy estimates with MLE and the Bayes estimates with informative priors ($H ^$ for the MLEs, $H ^ s$ for the Bayesian estimates under the squared error loss function, $H ^ l$ for the Bayesian estimates under the linex loss function, $H ^ e$ for the Bayesian estimates under general entropy loss function, Lindley for the Lindley method and TK for the Tierney and Kadane method).
| n | m | k | Sch | Method | $H ^$ | $H ^ s$ | $H ^ l$ (h = 0.2) | $H ^ l$ (h = 0.5) | $H ^ e$ (q = 0.2) | $H ^ e$ (q = −0.2) |
|---|---|---|-----|--------|-------|--------|--------|--------|--------|--------|
| 60 | 52 | 40 | I | Lindley | −0.0157 (0.3544) | 0.152 (0.3253) | 0.1181 (0.3242) | 0.0733 (0.3577) | 0.0718 (0.3696) | 0.101 (0.3403) |
| | | | | TK | | 0.078 (0.3025) | 0.0481 (0.2911) | 0.0005 (0.2774) | 0.0095 (0.2936) | 0.0323 (0.2961) |
| | | | II | Lindley | −0.066 (0.374) | 0.0687 (0.289) | 0.0379 (0.2823) | −0.0173 (0.3057) | −0.0374 (0.4011) | 0.0012 (0.3508) |
| | | | | TK | | 0.0592 (0.2919) | 0.0294 (0.2818) | −0.0177 (0.2703) | −0.0106 (0.284) | 0.0139 (0.2873) |
| | | 48 | I | Lindley | −0.0265 (0.2412) | 0.064 (0.227) | 0.0391 (0.2199) | 0.0011 (0.2122) | 0.01 (0.2226) | 0.0282 (0.2233) |
| | | | | TK | | 0.0322 (0.2242) | 0.0091 (0.2184) | −0.0284 (0.2125) | −0.0219 (0.2225) | −0.004 (0.2233) |
| | | | II | Lindley | −0.0224 (0.2501) | 0.0692 (0.2307) | 0.0437 (0.2223) | 0.0048 (0.2129) | 0.0116 (0.2312) | 0.03 (0.2331) |
| | | | | TK | | 0.0338 (0.2408) | 0.0109 (0.2352) | −0.0268 (0.2289) | −0.0213 (0.2399) | −0.0032 (0.2394) |
| 100 | 80 | 64 | I | Lindley | −0.0346 (0.206) | 0.0229 (0.1948) | −0.0009 (0.2015) | −0.0309 (0.1963) | −0.0393 (0.2426) | −0.016 (0.2195) |
| | | | | TK | | 0.0563 (0.1748) | 0.0371 (0.1702) | 0.0073 (0.1644) | 0.0133 (0.1709) | 0.0278 (0.1716) |
| | | | II | Lindley | −0.0601 (0.2282) | 0.0254 (0.2015) | 0.0052 (0.1965) | −0.0256 (0.1912) | −0.0186 (0.1986) | −0.0037 (0.1989) |
| | | | | TK | | 0.0488 (0.1989) | 0.0295 (0.1942) | −0.0007 (0.188) | 0.0048 (0.1954) | 0.0197 (0.1966) |
| | | 72 | I | Lindley | −0.031 (0.1713) | 0.0289 (0.16) | 0.012 (0.1568) | −0.0137 (0.1534) | −0.0077 (0.1585) | 0.0046 (0.1586) |
| | | | | TK | | 0.0426 (0.1649) | 0.0269 (0.1614) | 0.0013 (0.1573) | 0.006 (0.1623) | 0.0183 (0.1631) |
| | | | II | Lindley | −0.0138 (0.1814) | 0.0484 (0.172) | 0.0316 (0.1659) | 0.0058 (0.1599) | 0.012 (0.1659) | 0.0244 (0.1671) |
| | | | | TK | | 0.0395 (0.1573) | 0.0231 (0.154) | −0.0026 (0.1502) | 0.0018 (0.1556) | 0.0149 (0.1552) |

Entries are bias values with MSEs in parentheses; $H ^$ does not depend on the approximation method and is listed once per censoring scheme.
Table 3. AIC, BIC, and K-S statistics with p-values for competing lifetime models.
| No. | Distribution | MLEs | AIC | BIC | K-S | p-Value |
|-----|--------------|------|-----|-----|-----|---------|
| 1 | Lomax distribution | $( α ^ , λ ^ ) = ( 1.3055 , 150.6285 )$ | 681.4757 | 685.3782 | 0.0952 | 0.7337 |
| 2 | Gamma distribution | $( α ^ , λ ^ ) = ( 0.5606 , 0.0021 )$ | 682.1883 | 686.0908 | 0.0893 | 0.8009 |
| 3 | Generalized inverted exponential distribution | $( α ^ , λ ^ ) = ( 0.1425 , 0.0819 )$ | 788.2046 | 792.1071 | 0.4184 | $2.4 × 10 − 8$ |
Table 4. Estimation of entropy based on a real data from a meteorological study.
| Case | Method | $H ^$ | $H ^ s$ | $H ^ l$ (h = 0.2) | $H ^ l$ (h = 0.5) | $H ^ e$ (q = 0.2) | $H ^ e$ (q = −0.2) |
|------|--------|-------|---------|--------|--------|--------|--------|
| I | Lindley | 7.514680 | 7.391941 | 7.199010 | 6.966619 | 7.236140 | 7.286647 |
| | TK | | 8.047029 | 7.699564 | 7.452938 | 7.731689 | 8.073602 |
| II | Lindley | 6.595185 | 6.167338 | 6.102223 | 6.029770 | 6.108076 | 6.126530 |
| | TK | | 6.975125 | 6.821545 | 6.717372 | 6.856352 | 6.883334 |
| III | Lindley | 6.956195 | 6.586184 | 6.462234 | 6.324426 | 6.478268 | 6.512196 |
| | TK | | 7.462613 | 7.187117 | 7.023052 | 7.259597 | 7.390569 |
