Article

Bayesian Estimations of Shannon Entropy and Rényi Entropy of Inverse Weibull Distribution

Haiping Ren 1,* and Xue Hu 2
1 Department of Basic Subjects, Jiangxi University of Science and Technology, Nanchang 330013, China
2 College of Science, Jiangxi University of Science and Technology, Ganzhou 341000, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(11), 2483; https://doi.org/10.3390/math11112483
Submission received: 6 May 2023 / Revised: 21 May 2023 / Accepted: 26 May 2023 / Published: 28 May 2023

Abstract: In this paper, under the symmetric entropy and the scale squared error loss functions, we consider the maximum likelihood (ML) estimation and Bayesian estimation of the Shannon entropy and Rényi entropy of the two-parameter inverse Weibull distribution. In the ML estimation, the dichotomy (bisection) method is used to solve the likelihood equation, and approximate confidence intervals are obtained by the Delta method. Because the Bayesian estimators take complicated forms, the Lindley approximation method is used to carry out the numerical calculation. Finally, Monte Carlo simulations and a real dataset are used to illustrate the derived results. By comparing the mean squared errors between the estimates and the true values, it is found that the ML estimation of Shannon entropy performs better than the Bayesian estimation, while there is no significant difference in performance between the ML and Bayesian estimations of Rényi entropy.

1. Introduction

Information is an abstract concept. Faced with a large amount of data, it is easy to know how much data there are, but not how much information the data contain. Entropy is an important concept in physics. Shannon [1] introduced the concept of entropy into statistics, where it represents the uncertainty of events; this entropy is generally called "Shannon entropy". Generally speaking, when we hear a message that is within expectation, we consider it to contain little information, whereas an unexpected message conveys a significant amount of information. In statistics, probability is usually used to describe the uncertainty of an event; therefore, Shannon proposed that probability can be used to quantify the amount of information contained in an event. Later, Rényi [2] generalized Shannon entropy and put forward the concept of Rényi entropy. Since then, the study of entropy has attracted a lot of attention [3,4]. For example, Chacko and Asha [5] considered the maximum likelihood (ML) estimation and Bayesian estimation of Shannon entropy for a generalized exponential distribution by the importance sampling method based on record values. Liu and Gui [6] considered the ML estimation and Bayesian estimation of Shannon entropy for a two-parameter Lomax distribution by the Lindley method and the Tierney–Kadane method under a generalized progressively hybrid censoring test. Shrahili et al. [7] considered the estimation of the entropy of a log-logistic distribution; estimates of different entropy functions were obtained by the ML method, and approximate confidence intervals were obtained under various censoring schemes and sample sizes. Mahmoud et al. [8] considered the estimation of the entropy and residual entropy of a two-parameter Lomax distribution based on the generalized type-II hybrid censoring scheme; the ML and Bayesian estimators were obtained, and a simulation study of the estimation performance under different sample sizes was reported. Hassan and Mazen [9] estimated three entropy measures, Shannon entropy, Rényi entropy, and q-entropy, for the inverse Weibull distribution using progressive type-II censored data; the methods of maximum likelihood and maximum product of spacing were used. Mavis et al. [10] proposed and studied a gamma-inverse Weibull distribution and derived some of its mathematical properties, including moments, mean deviations, Bonferroni and Lorenz curves, and entropies. Basheer [11] introduced a new generalized alpha power inverse Weibull distribution and obtained its Shannon entropy and Rényi entropy. Valeriia and Broderick [12] proposed the weighted inverse Weibull class of distributions and derived expressions for its Shannon entropy and Rényi entropy.
In 1982, Keller and Kamath [13] introduced the inverse Weibull distribution (IWD) to model the degradation of mechanical components of diesel engines. It is a useful lifetime probability distribution that can represent various failure characteristics: depending on the value of the shape parameter, the risk function of the IWD can change flexibly, so fitting data with an IWD is more appropriate in many cases. For example, Abhijit and Anindya [14] found that the IWD was superior to previously used normal models when measuring concrete structures using ultrasonic pulse velocities. Chiodo et al. [15] proposed a new model generated from an appropriate mixture of IWDs for modeling extreme wind speed scenarios. Langlands et al. [16] observed that breast cancer mortality data could be analyzed using IWDs. For these reasons, the two-parameter IWD has attracted increasing attention from researchers in recent years [17,18]. For example, Asuman and Mahmut [19] considered the classical and Bayesian estimation of the parameters and the reliability function of the IWD; in classical estimation they derived the ML and modified ML estimators, and in Bayesian estimation they utilized the Lindley method to calculate the Bayesian estimators of the parameters under symmetric and asymmetric loss functions. Sultan et al. [20] discussed the estimation of the parameters of the IWD based on progressive type-II censored samples; they put forward an approximate maximum likelihood method to obtain the ML estimator and used Lindley's approximation to obtain the Bayesian estimators. Amirzadi et al. [21] considered the Bayesian estimation of the scale parameter and the reliability of the inverse generalized Weibull distribution under the general entropy, squared log error, and weighted squared error loss functions, and introduced a new loss function for Bayesian estimation. Peng and Yan [22] studied the Bayesian estimation and prediction of the shape and scale parameters of the IWD under a general progressive censoring test. Sindhu et al. [23] assumed different priors and loss functions and discussed the Bayesian estimation of inverse Weibull mixture distributions based on doubly censored data. Mohammad and Sana [24] obtained the Bayes and ML estimators of the unknown parameters of the IWD under lower record values. Faud [25] developed a linear exponential loss function and estimated the parameter and reliability of the IWD based on lower record values under this loss function. Li and Hao [26] considered the estimation of a stress–strength model when stress and strength are two independent IWDs with different parameters. Ismail and Tamimi [27] proposed a constant-stress partially accelerated life test model and analyzed it using type-I censored data from an IWD. Kang and Han [28] derived the approximate maximum likelihood estimators of the parameters of an IWD under multiply type-II censoring and also proposed a simple graphical method for a goodness-of-fit test. Saboori et al. [29] introduced a generalized modified inverse Weibull distribution and derived some of its statistical and probabilistic properties.
This paper considers the Bayesian estimation of the Shannon entropy and Rényi entropy of a two-parameter IWD based on complete samples. In Section 2, some preliminary knowledge is introduced, and the explicit expressions of the Shannon entropy and Rényi entropy of the two-parameter IWD are derived. In Section 3, the ML estimators of the scale and shape parameters of the IWD are obtained, with the likelihood equation solved by the dichotomy (bisection) method, and the ML estimators of the Shannon entropy and Rényi entropy then follow. In Section 4, a gamma distribution is adopted as the prior distribution (PD) of the scale parameter and a non-informative PD is adopted for the shape parameter; the Bayesian estimators of the Shannon entropy and Rényi entropy are then obtained under the symmetric entropy loss function and the scale squared error loss function, and, on account of their complexity, the Lindley approximation is used to compute them numerically. In Section 5, Monte Carlo simulations are used to compare the estimators mentioned above. In Section 6, a real data set is analyzed for illustrative purposes. Finally, the conclusions of the article are given in Section 7.

2. Preliminary Knowledge

The probability density function (pdf) of the two-parameter IWD is defined as Equation (1):
$$f(t;\omega,\upsilon)=\omega\upsilon t^{-(\upsilon+1)}\exp(-\omega t^{-\upsilon}),\quad \omega>0,\ \upsilon>0,\ t>0, \tag{1}$$
and the cumulative distribution function (cdf) of the two-parameter IWD is defined as Equation (2):
$$F(t;\omega,\upsilon)=\exp(-\omega t^{-\upsilon}),\quad \omega>0,\ \upsilon>0,\ t>0, \tag{2}$$
where $\omega$ is the scale parameter and $\upsilon$ is the shape parameter.
Figure 1 shows the pdf of the IWD under different values of shape and scale parameters.
The Shannon entropy is defined in Equation (3) [1]:
$$H_s(t)=-\int_{-\infty}^{+\infty} f(t)\ln[f(t)]\,dt, \tag{3}$$
and the Rényi entropy is defined in Equation (4) [2]:
$$H_r(t)=\frac{1}{1-r}\ln\int_{-\infty}^{+\infty} f^{\,r}(t)\,dt,\quad r>0,\ r\neq 1, \tag{4}$$
where $f(t)$ is the pdf of a continuous random variable $T$.
Theorem 1.
Let $T_1,T_2,\ldots,T_n$ be a random sample that follows the IWD with the pdf (1), and let $t_1,t_2,\ldots,t_n$ be the sample observations of $T_1,T_2,\ldots,T_n$.
(i)
The Shannon entropy of the IWD is shown in Equation (5):
$$H_s=\frac{\upsilon+1}{\upsilon}(\ln\omega+\gamma)-\ln(\omega\upsilon)+1. \tag{5}$$
(ii)
The Rényi entropy of the IWD is shown in Equation (6):
$$H_r=\frac{1}{1-r}\left[-\frac{r\upsilon+r}{\upsilon}\ln r+\frac{r}{\upsilon}\ln\omega+r\ln\upsilon+\ln\Gamma\!\left(1+r+\frac{r}{\upsilon}\right)\right], \tag{6}$$
where $\Gamma(\cdot)$ is the gamma function and $\gamma$ is the Euler constant.
Proof. 
The log density of the pdf (1) of the IWD is shown in Equation (7):
$$\ln[f(t)]=\ln(\omega\upsilon)-(\upsilon+1)\ln t-\omega t^{-\upsilon}. \tag{7}$$
According to the log-density function (7) and Equation (3), the Shannon entropy of the IWD can be derived as follows:
$$H_s=-\int_0^{+\infty}f(t)\left[\ln(\omega\upsilon)-(\upsilon+1)\ln t-\omega t^{-\upsilon}\right]dt=-\ln(\omega\upsilon)+(\upsilon+1)E(\ln T)+\omega E(T^{-\upsilon}).$$
Obviously,
$$E(T^c)=\int_0^{+\infty}\omega\upsilon t^{\,c-\upsilon-1}e^{-\omega t^{-\upsilon}}dt=\omega^{c/\upsilon}\int_0^{+\infty}(\omega t^{-\upsilon})^{-c/\upsilon}e^{-\omega t^{-\upsilon}}\,d(\omega t^{-\upsilon})=\omega^{c/\upsilon}\,\Gamma\!\left(1-\frac{c}{\upsilon}\right).$$
Let $c=-\upsilon$; then
$$E(T^{-\upsilon})=\omega^{-1}\Gamma(2)=\frac{1}{\omega}.$$
Differentiating $E(T^c)$ with respect to $c$,
$$E(T^c\ln T)=\frac{dE(T^c)}{dc}=\frac{1}{\upsilon}\,\omega^{c/\upsilon}\left[\Gamma\!\left(1-\frac{c}{\upsilon}\right)\ln\omega-\Gamma'\!\left(1-\frac{c}{\upsilon}\right)\right].$$
Let $c=0$; then
$$E(\ln T)=\frac{1}{\upsilon}(\ln\omega+\gamma).$$
Therefore, the Shannon entropy of the two-parameter IWD can be expressed as
$$H_s=-\ln(\omega\upsilon)+(\upsilon+1)E(\ln T)+\omega E(T^{-\upsilon})=\frac{\upsilon+1}{\upsilon}(\ln\omega+\gamma)-\ln(\omega\upsilon)+1.$$
Obviously,
$$\int_{-\infty}^{+\infty}f^{\,r}(t)\,dt=\int_0^{+\infty}\left(\omega\upsilon t^{-\upsilon-1}e^{-\omega t^{-\upsilon}}\right)^r dt=(\omega\upsilon)^r\int_0^{+\infty}t^{-r\upsilon-r}e^{-r\omega t^{-\upsilon}}dt=r^{-\frac{r\upsilon+r}{\upsilon}}\,\omega^{\frac{r}{\upsilon}}\,\upsilon^{\,r}\,\Gamma\!\left(1+r+\frac{r}{\upsilon}\right).$$
Then, according to Equation (4), the Rényi entropy of the two-parameter IWD can be expressed as
$$H_r=\frac{1}{1-r}\left[-\frac{r\upsilon+r}{\upsilon}\ln r+\frac{r}{\upsilon}\ln\omega+r\ln\upsilon+\ln\Gamma\!\left(1+r+\frac{r}{\upsilon}\right)\right].\ \square$$
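As a quick numerical check of Theorem 1, Equations (5) and (6) can be evaluated directly. The following Python sketch (the helper names are ours, not part of the paper) reproduces the true values used in the simulation study of Section 5, $H_{s0}=1.1727$ and $H_{r0}=1.5641$ for $\omega=1$, $\upsilon=2$, and $r=0.5$.

```python
# A minimal check of Equations (5) and (6); function names are illustrative.
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant

def shannon_entropy_iwd(omega, upsilon):
    # Equation (5)
    return ((upsilon + 1) / upsilon * (math.log(omega) + EULER_GAMMA)
            - math.log(omega * upsilon) + 1)

def renyi_entropy_iwd(omega, upsilon, r):
    # Equation (6)
    bracket = (-(r * upsilon + r) / upsilon * math.log(r)
               + r / upsilon * math.log(omega)
               + r * math.log(upsilon)
               + math.lgamma(1 + r + r / upsilon))
    return bracket / (1 - r)

print(shannon_entropy_iwd(1, 2))     # ~1.1727, cf. Table 1
print(renyi_entropy_iwd(1, 2, 0.5))  # ~1.5641, cf. Table 2
```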

3. Maximum Likelihood Estimation

Suppose that $T_1,T_2,\ldots,T_n$ is a random sample that follows the IWD with the pdf (1), and $t_1,t_2,\ldots,t_n$ are the sample observations of $T_1,T_2,\ldots,T_n$. Then the likelihood function (LF) can be derived as Equation (8):
$$s(t;\omega,\upsilon)=\omega^n\upsilon^n\left(\prod_{i=1}^n t_i^{-\upsilon-1}\right)\exp\!\left(-\omega\sum_{i=1}^n t_i^{-\upsilon}\right). \tag{8}$$
Then, the corresponding log LF of Equation (8) is shown in Equation (9):
$$S(t;\omega,\upsilon)=\ln s(t;\omega,\upsilon)=n\ln(\omega\upsilon)-(\upsilon+1)\sum_{i=1}^n\ln t_i-\omega\sum_{i=1}^n t_i^{-\upsilon}. \tag{9}$$
For convenience, we denote $S(t;\omega,\upsilon)$ by $S$. Thus, the likelihood equations can be expressed, respectively, as Equations (10) and (11):
$$\frac{\partial S}{\partial\omega}=\frac{n}{\omega}-\sum_{i=1}^n t_i^{-\upsilon}=0, \tag{10}$$
$$\frac{\partial S}{\partial\upsilon}=\frac{n}{\upsilon}-\sum_{i=1}^n\ln t_i+\omega\sum_{i=1}^n t_i^{-\upsilon}\ln t_i=0. \tag{11}$$
By combining Equations (10) and (11), the ML estimator $\hat{\upsilon}$ is the unique root of $y(\upsilon)=0$ in Equation (13), and the ML estimator $\hat{\omega}$ is then obtained from Equation (12). The uniqueness of the root of Equation (13) is proved in Appendix A.
$$\hat{\omega}=n\left(\sum_{i=1}^n t_i^{-\hat{\upsilon}}\right)^{-1}, \tag{12}$$
$$y(\upsilon)=\frac{n}{\upsilon}-\sum_{i=1}^n\ln t_i+n\left(\sum_{i=1}^n t_i^{-\upsilon}\right)^{-1}\sum_{i=1}^n t_i^{-\upsilon}\ln t_i. \tag{13}$$
Since an analytical solution of Equation (13) is not available, $\hat{\upsilon}$ is computed numerically by the dichotomy (bisection) method. The steps are as follows, and a code sketch is given after the list.
(i)
Give the accuracy $\varepsilon$, determine an interval $[\upsilon_u,\upsilon_l]$, and verify that $y(\upsilon_u)\,y(\upsilon_l)<0$.
(ii)
Find the midpoint $\upsilon_m$ of the interval $[\upsilon_u,\upsilon_l]$ and calculate $y(\upsilon_m)$.
(iii)
If $y(\upsilon_m)=0$, then $\hat{\upsilon}=\upsilon_m$ and the search stops.
(iv)
If $y(\upsilon_u)\,y(\upsilon_m)<0$, set $\upsilon_l=\upsilon_m$; if $y(\upsilon_l)\,y(\upsilon_m)<0$, set $\upsilon_u=\upsilon_m$.
(v)
If $|\upsilon_u-\upsilon_l|<\varepsilon$, take $\hat{\upsilon}=\upsilon_u$ (or $\upsilon_l$); otherwise, return to step (ii).
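A minimal Python sketch of this procedure is given below; it assumes the initial interval brackets the root, and the function names (`y`, `mle_iwd`) are ours.

```python
# Bisection for the likelihood equation (13) and the plug-in step (12).
import numpy as np

def y(v, t):
    # Profile score y(upsilon) of Equation (13)
    n, tv = len(t), t ** (-v)
    return n / v - np.sum(np.log(t)) + n * np.sum(tv * np.log(t)) / np.sum(tv)

def mle_iwd(t, v_lo=1e-3, v_hi=50.0, eps=1e-8):
    t = np.asarray(t, dtype=float)
    assert y(v_lo, t) * y(v_hi, t) < 0, "bracket does not contain the root"
    while v_hi - v_lo > eps:
        v_mid = 0.5 * (v_lo + v_hi)
        # y is strictly decreasing (Appendix A), so keep the sign-change half
        if y(v_lo, t) * y(v_mid, t) < 0:
            v_hi = v_mid
        else:
            v_lo = v_mid
    v_hat = 0.5 * (v_lo + v_hi)
    omega_hat = len(t) / np.sum(t ** (-v_hat))  # Equation (12)
    return omega_hat, v_hat
```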
Due to the invariance of ML estimation, the ML estimators of the Shannon entropy and Rényi entropy can be obtained by substituting $\hat{\omega}$ and $\hat{\upsilon}$ into Equations (5) and (6); their expressions are shown in Equations (14) and (15):
$$\hat{H}_{s1}=\frac{\hat{\upsilon}+1}{\hat{\upsilon}}(\ln\hat{\omega}+\gamma)-\ln(\hat{\omega}\hat{\upsilon})+1, \tag{14}$$
$$\hat{H}_{r1}=\frac{1}{1-r}\left[-\frac{r\hat{\upsilon}+r}{\hat{\upsilon}}\ln r+\frac{r}{\hat{\upsilon}}\ln\hat{\omega}+r\ln\hat{\upsilon}+\ln\Gamma\!\left(1+r+\frac{r}{\hat{\upsilon}}\right)\right]. \tag{15}$$
Next, the Delta method is used to derive the approximate confidence intervals (briefly, ACIs) of Shannon entropy and Rényi entropy.
Denote the vectors $D_s$ and $D_r$, respectively, as
$$D_s=\left.\left(\frac{\partial H_s}{\partial\omega},\frac{\partial H_s}{\partial\upsilon}\right)\right|_{\omega=\hat{\omega},\,\upsilon=\hat{\upsilon}}, \tag{16}$$
$$D_r=\left.\left(\frac{\partial H_r}{\partial\omega},\frac{\partial H_r}{\partial\upsilon}\right)\right|_{\omega=\hat{\omega},\,\upsilon=\hat{\upsilon}}, \tag{17}$$
in which the partial derivatives are calculated through Equations (18) and (19), respectively:
$$\frac{\partial H_s}{\partial\omega}=\frac{1}{\omega\upsilon},\qquad \frac{\partial H_s}{\partial\upsilon}=-\frac{\gamma+\ln\omega}{\upsilon^2}-\frac{1}{\upsilon}, \tag{18}$$
$$\frac{\partial H_r}{\partial\omega}=\frac{r}{(1-r)\omega\upsilon},\qquad \frac{\partial H_r}{\partial\upsilon}=\frac{r}{(1-r)\upsilon}\left[1+\frac{\ln r-\ln\omega}{\upsilon}-\frac{\Gamma'\!\left(1+r+r\upsilon^{-1}\right)}{\upsilon\,\Gamma\!\left(1+r+r\upsilon^{-1}\right)}\right]. \tag{19}$$
According to the Delta method, the estimated variances of $\hat{H}_{s1}$ and $\hat{H}_{r1}$ are calculated as Equations (20) and (21), respectively, where $I$ is the Fisher information matrix of $\omega$ and $\upsilon$, whose elements are the negatives of the second derivatives in Equation (22), and $I^{-1}$ is the inverse matrix of $I$:
$$V_s=\left.D_s I^{-1} D_s^{T}\right|_{\omega=\hat{\omega},\,\upsilon=\hat{\upsilon}}, \tag{20}$$
$$V_r=\left.D_r I^{-1} D_r^{T}\right|_{\omega=\hat{\omega},\,\upsilon=\hat{\upsilon}}, \tag{21}$$
$$\frac{\partial^2 S}{\partial\omega^2}=-\frac{n}{\omega^2},\qquad \frac{\partial^2 S}{\partial\omega\,\partial\upsilon}=\frac{\partial^2 S}{\partial\upsilon\,\partial\omega}=\sum_{i=1}^n t_i^{-\upsilon}\ln t_i,\qquad \frac{\partial^2 S}{\partial\upsilon^2}=-\frac{n}{\upsilon^2}-\omega\sum_{i=1}^n t_i^{-\upsilon}(\ln t_i)^2. \tag{22}$$
Then, the $100(1-\alpha)\%$ ACI of the Shannon entropy is Equation (23) and the $100(1-\alpha)\%$ ACI of the Rényi entropy is Equation (24), where $z_{\alpha/2}$ is the upper $(\alpha/2)$th quantile of the standard normal distribution:
$$\left(\hat{H}_{s1}-z_{\alpha/2}\sqrt{V_s}\,,\ \hat{H}_{s1}+z_{\alpha/2}\sqrt{V_s}\right), \tag{23}$$
$$\left(\hat{H}_{r1}-z_{\alpha/2}\sqrt{V_r}\,,\ \hat{H}_{r1}+z_{\alpha/2}\sqrt{V_r}\right). \tag{24}$$
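The interval (23) can be assembled as in the following sketch, which reuses the hypothetical mle_iwd and shannon_entropy_iwd helpers from the earlier sketches and builds the information matrix from the second derivatives in Equation (22).

```python
# Delta-method ACI for Shannon entropy, Equation (23).
import numpy as np
from scipy.stats import norm

EULER_GAMMA = 0.5772156649015329

def shannon_aci(t, alpha=0.05):
    t = np.asarray(t, dtype=float)
    n = len(t)
    w, v = mle_iwd(t)
    lt, tv = np.log(t), t ** (-v)
    # Observed information: negatives of the second derivatives in Equation (22)
    info = np.array([[n / w**2,         -np.sum(tv * lt)],
                     [-np.sum(tv * lt),  n / v**2 + w * np.sum(tv * lt**2)]])
    # Gradient of H_s from Equation (18), evaluated at the MLE
    D = np.array([1 / (w * v), -(EULER_GAMMA + np.log(w)) / v**2 - 1 / v])
    V = D @ np.linalg.inv(info) @ D
    H, z = shannon_entropy_iwd(w, v), norm.ppf(1 - alpha / 2)
    return H - z * np.sqrt(V), H + z * np.sqrt(V)
```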

4. Bayesian Estimation

Bayesian estimation is a method of introducing prior information to deal with decision problems. Its advantage is that prior information can be included in the statistical inference, improving the accuracy of the resulting decision. Since Bayesian estimation was first proposed, many researchers have adopted this method for estimating parameters and related functions. For example, Liu and Wang [30] considered the Bayesian inference and prediction of the proportional hazards model based on interval-censored data; they assumed a non-informative prior distribution for the parameter and used the Metropolis–Hastings (MH) algorithm to obtain the parameter estimator. Ren [31] derived the Bayesian estimator and empirical Bayesian estimator of the parameter of the Rayleigh distribution under the symmetric entropy loss function, and also discussed the admissibility conditions for a class of inverse linear estimators. Mohammad and Mina [32] presented the Bayesian inference of the parameters of the inverse Weibull distribution based on type-I hybrid censored data and computed the Bayes estimates using the Lindley approximation. Algarni et al. [33] considered the Bayes estimation of the parameters of the inverse Weibull distribution employing a progressive type-I censored sample; the MH algorithm was used to compute the Bayesian estimates.
In addition to the areas mentioned above, there are some recent applications of the Bayesian method. Zhou and Luo [34] developed a supplier's recursive multiperiod discounted profit model based on Bayesian information updating. Yulin et al. [35] put forward a Bayesian approach to tackle misalignment in over-the-air computation. Taborsky et al. [36] presented a novel generic Bayesian probabilistic model to solve the problem of parameter marginalization under the constraint of forced community structure. Oliver [37] introduced the Bayesian toolkit and showed how geomorphic models might benefit from probabilistic concepts. Ran et al. [38] proposed a Bayesian approach to measure the loss of privacy in a mechanism. Luo et al. [39] used the Bayesian information criterion for model selection when revisiting the lifetime data of brake pads. Peng et al. [40] extended a general Bayesian framework to deal with the degradation analysis of sparse and evolving degradation observations. František et al. [41] illustrated the benefit of Bayesian estimation in parametric survival analysis. Liu et al. [42] proposed fuzzy Bayesian knowledge tracing models to address continuous score scenarios. In predictive maintenance, Zhuang et al. [43] applied the Bayes theorem to a bidirectional LSTM network to optimize prognostic performance.
In this paper, the Bayesian estimations of Shannon entropy and Rényi entropy of the IWD are investigated under symmetric entropy (SE) and scale squared error (SSE) loss functions, which are widely used in Bayesian statistical inference [44,45,46].
(i) The SE loss function is defined in Equation (25) [44]:
$$L_1(H,\hat{H})=\frac{H}{\hat{H}}+\frac{\hat{H}}{H}-2, \tag{25}$$
where $\hat{H}$ is the estimator of $H$.
Lemma 1.
Suppose that $T$ is the historical data information about the entropy function $H$. Then, under the SE loss function (25), the Bayesian estimator $\hat{H}_1$ for any prior distribution is shown in Equation (26):
$$\hat{H}_1=\left[\frac{E(H\mid T)}{E(H^{-1}\mid T)}\right]^{1/2}, \tag{26}$$
where $E(H\mid T)$ is the posterior expectation of $H$ and $E(H^{-1}\mid T)$ is the posterior expectation of $H^{-1}$.
Proof. 
Under the SE loss function (25), the Bayesian risk of $\hat{H}$ is
$$R(\hat{H})=E_H\big(E(L_1(H,\hat{H})\mid T)\big).$$
To minimize $R(\hat{H})$, we only need to minimize $E(L_1(H,\hat{H})\mid T)$. For convenience, let $g(\hat{H})=E(L_1(H,\hat{H})\mid T)$. Because
$$g(\hat{H})=\hat{H}^{-1}E(H\mid T)+\hat{H}\,E(H^{-1}\mid T)-2,$$
the derivative is
$$g'(\hat{H})=-\hat{H}^{-2}E(H\mid T)+E(H^{-1}\mid T).$$
The Bayesian estimator $\hat{H}_1$ is obtained by setting $g'(\hat{H})=0$. □
(ii) The SSE loss function is defined in Equation (27) [46]:
$$L_2(H,\hat{H})=\frac{(H-\hat{H})^2}{H^{k}}, \tag{27}$$
where $k$ is a nonnegative integer.
Lemma 2.
Suppose that $T$ is the historical data information about the entropy function $H$. Then, under the SSE loss function (27), the Bayesian estimator $\hat{H}_2$ for any prior distribution is shown in Equation (28):
$$\hat{H}_2=\frac{E(H^{1-k}\mid T)}{E(H^{-k}\mid T)}, \tag{28}$$
where $E(H^{1-k}\mid T)$ is the posterior expectation of $H^{1-k}$ and $E(H^{-k}\mid T)$ is the posterior expectation of $H^{-k}$.
Proof. 
Under the SSE loss function (27), the Bayesian risk of $\hat{H}$ is
$$R(\hat{H})=E_H\big(E(L_2(H,\hat{H})\mid T)\big).$$
To minimize $R(\hat{H})$, we only need to minimize $E(L_2(H,\hat{H})\mid T)$. Similarly, let $h(\hat{H})=E(L_2(H,\hat{H})\mid T)$. Because
$$h(\hat{H})=E\!\left(\frac{H^2-2H\hat{H}+\hat{H}^2}{H^{k}}\,\Big|\,T\right)=E(H^{2-k}\mid T)-2\hat{H}\,E(H^{1-k}\mid T)+\hat{H}^2 E(H^{-k}\mid T),$$
the derivative of $h(\hat{H})$ is
$$h'(\hat{H})=-2E(H^{1-k}\mid T)+2\hat{H}\,E(H^{-k}\mid T).$$
The Bayes estimator $\hat{H}_2$ is obtained by setting $h'(\hat{H})=0$. □
Assume that the scale parameter $\omega$ and shape parameter $\upsilon$ of the two-parameter IWD are independent random variables, where $\omega$ follows the gamma distribution $\Gamma(a,b)$ with the pdf (29) and $\upsilon$ follows the non-informative PD (30):
$$P_1(\omega)=\frac{a^{b}}{\Gamma(b)}\,\omega^{b-1}e^{-a\omega},\quad a>0,\ b>0, \tag{29}$$
$$P_2(\upsilon)\propto\frac{1}{\upsilon}. \tag{30}$$
Thus, the joint PD of $\omega$ and $\upsilon$ is
$$P(\omega,\upsilon)\propto\frac{a^{b}}{\upsilon\,\Gamma(b)}\,\omega^{b-1}e^{-a\omega}. \tag{31}$$
By the Bayesian formula, the posterior distribution of $\omega$ and $\upsilon$ is
$$P(\omega,\upsilon\mid T)=\frac{P(\omega,\upsilon)\,s(t;\omega,\upsilon)}{\int_0^{+\infty}\!\int_0^{+\infty}P(\omega,\upsilon)\,s(t;\omega,\upsilon)\,d\omega\,d\upsilon}. \tag{32}$$
Thus, the Bayesian estimators of the Shannon entropy and Rényi entropy under SE can be expressed, respectively, as
$$\hat{H}_{s2}=\left[\frac{E(H_s\mid T)}{E(H_s^{-1}\mid T)}\right]^{1/2}=\left[\frac{\int_0^{+\infty}\!\int_0^{+\infty}H_s\,P(\omega,\upsilon\mid T)\,d\omega\,d\upsilon}{\int_0^{+\infty}\!\int_0^{+\infty}H_s^{-1}\,P(\omega,\upsilon\mid T)\,d\omega\,d\upsilon}\right]^{1/2}, \tag{33}$$
$$\hat{H}_{r2}=\left[\frac{E(H_r\mid T)}{E(H_r^{-1}\mid T)}\right]^{1/2}=\left[\frac{\int_0^{+\infty}\!\int_0^{+\infty}H_r\,P(\omega,\upsilon\mid T)\,d\omega\,d\upsilon}{\int_0^{+\infty}\!\int_0^{+\infty}H_r^{-1}\,P(\omega,\upsilon\mid T)\,d\omega\,d\upsilon}\right]^{1/2}. \tag{34}$$
The Bayesian estimators of the Shannon entropy and Rényi entropy under SSE can be expressed, respectively, as
$$\hat{H}_{s3}=\frac{E(H_s^{1-k}\mid T)}{E(H_s^{-k}\mid T)}=\frac{\int_0^{+\infty}\!\int_0^{+\infty}H_s^{1-k}\,P(\omega,\upsilon\mid T)\,d\omega\,d\upsilon}{\int_0^{+\infty}\!\int_0^{+\infty}H_s^{-k}\,P(\omega,\upsilon\mid T)\,d\omega\,d\upsilon}, \tag{35}$$
$$\hat{H}_{r3}=\frac{E(H_r^{1-k}\mid T)}{E(H_r^{-k}\mid T)}=\frac{\int_0^{+\infty}\!\int_0^{+\infty}H_r^{1-k}\,P(\omega,\upsilon\mid T)\,d\omega\,d\upsilon}{\int_0^{+\infty}\!\int_0^{+\infty}H_r^{-k}\,P(\omega,\upsilon\mid T)\,d\omega\,d\upsilon}. \tag{36}$$
From Equations (33)–(36), it can be seen that the Bayesian estimators of the Shannon and Rényi entropy are ratios of double integrals that cannot be evaluated in closed form. Thus, the Lindley approximation will be employed to compute them numerically.

4.1. Bayesian Estimation by Using Lindley Approximation under SE Loss Function

Referring to the Lindley approximation, $I(t)$ can be defined as
$$I(t)=E[U(\omega,\upsilon)\mid T]=\frac{\int\!\!\int U(\omega,\upsilon)\,e^{S(t;\omega,\upsilon)+G(\omega,\upsilon)}\,d(\omega,\upsilon)}{\int\!\!\int e^{S(t;\omega,\upsilon)+G(\omega,\upsilon)}\,d(\omega,\upsilon)}, \tag{37}$$
where $U(\omega,\upsilon)$ is a function of the variables $\omega$ and $\upsilon$, $S(t;\omega,\upsilon)$ is the log LF defined in Equation (9), and $G(\omega,\upsilon)$ is the logarithm of the joint PD defined in Equation (31).
If the sample size is large, Equation (37) can be approximated as
$$I(t)=U(\hat{\omega},\hat{\upsilon})+\frac{1}{2}(A+B+C+D), \tag{38}$$
where $\hat{\omega}$ and $\hat{\upsilon}$ are the ML estimators of $\omega$ and $\upsilon$, respectively, and
$$\begin{aligned}
A&=(\hat{U}_{\omega\omega}+2\hat{U}_{\omega}\hat{G}_{\omega})\hat{\sigma}_{\omega\omega}+(\hat{U}_{\upsilon\omega}+2\hat{U}_{\upsilon}\hat{G}_{\omega})\hat{\sigma}_{\upsilon\omega},\\
B&=(\hat{U}_{\omega\upsilon}+2\hat{U}_{\omega}\hat{G}_{\upsilon})\hat{\sigma}_{\omega\upsilon}+(\hat{U}_{\upsilon\upsilon}+2\hat{U}_{\upsilon}\hat{G}_{\upsilon})\hat{\sigma}_{\upsilon\upsilon},\\
C&=(\hat{U}_{\omega}\hat{\sigma}_{\omega\omega}+\hat{U}_{\upsilon}\hat{\sigma}_{\omega\upsilon})(\hat{S}_{\omega\omega\omega}\hat{\sigma}_{\omega\omega}+\hat{S}_{\omega\upsilon\omega}\hat{\sigma}_{\omega\upsilon}+\hat{S}_{\upsilon\omega\omega}\hat{\sigma}_{\upsilon\omega}+\hat{S}_{\upsilon\upsilon\omega}\hat{\sigma}_{\upsilon\upsilon}),\\
D&=(\hat{U}_{\omega}\hat{\sigma}_{\upsilon\omega}+\hat{U}_{\upsilon}\hat{\sigma}_{\upsilon\upsilon})(\hat{S}_{\omega\omega\upsilon}\hat{\sigma}_{\omega\omega}+\hat{S}_{\omega\upsilon\upsilon}\hat{\sigma}_{\omega\upsilon}+\hat{S}_{\upsilon\omega\upsilon}\hat{\sigma}_{\upsilon\omega}+\hat{S}_{\upsilon\upsilon\upsilon}\hat{\sigma}_{\upsilon\upsilon}).
\end{aligned} \tag{39}$$
Here $\sigma_{ij}$ $(i,j=\omega,\upsilon)$ is the $(i,j)$th element of the inverse of the observed information matrix $[-S_{ij}]$, and $\hat{U}_{\omega\omega}$ denotes the second derivative of $U(\omega,\upsilon)$ with respect to $\omega$ evaluated at $(\hat{\omega},\hat{\upsilon})$; the other hatted quantities are defined analogously. The required derivatives of $S$ and $G$ are
$$\begin{aligned}
&S_{\omega\omega\upsilon}=S_{\omega\upsilon\omega}=S_{\upsilon\omega\omega}=0,\qquad S_{\omega\omega\omega}=\frac{2n}{\omega^3},\\
&S_{\omega\upsilon\upsilon}=S_{\upsilon\omega\upsilon}=S_{\upsilon\upsilon\omega}=-\sum_{i=1}^n t_i^{-\upsilon}(\ln t_i)^2,\qquad S_{\upsilon\upsilon\upsilon}=\frac{2n}{\upsilon^3}+\omega\sum_{i=1}^n t_i^{-\upsilon}(\ln t_i)^3,\\
&G_{\omega}=\frac{b-1}{\omega}-a,\qquad G_{\upsilon}=-\frac{1}{\upsilon}.
\end{aligned} \tag{40}$$
Under the SE loss function, the numerical calculation of the Shannon entropy estimator $\hat{H}_{s2}$ by the Lindley approximation proceeds as follows.
When $U(\omega,\upsilon)=H_s$,
$$\begin{aligned}
&U_{\omega}=\frac{1}{\omega\upsilon},\qquad U_{\upsilon}=-\frac{\gamma+\ln\omega}{\upsilon^2}-\frac{1}{\upsilon},\\
&U_{\omega\omega}=-\frac{1}{\omega^2\upsilon},\qquad U_{\upsilon\upsilon}=\frac{2\ln\omega+2\gamma}{\upsilon^3}+\frac{1}{\upsilon^2},\qquad U_{\omega\upsilon}=U_{\upsilon\omega}=-\frac{1}{\omega\upsilon^2}.
\end{aligned} \tag{41}$$
Putting Equations (40) and (41) into Equation (38), $E(H_s\mid T)$ is obtained.
Similarly, when $U(\omega,\upsilon)=H_s^{-1}$,
$$\begin{aligned}
&U_{\omega}=-H_s^{-2}\,\frac{1}{\omega\upsilon},\qquad U_{\upsilon}=H_s^{-2}\left(\frac{\gamma+\ln\omega}{\upsilon^2}+\frac{1}{\upsilon}\right),\\
&U_{\omega\omega}=2H_s^{-3}\,\frac{1}{\omega^2\upsilon^2}+H_s^{-2}\,\frac{1}{\omega^2\upsilon},\\
&U_{\upsilon\upsilon}=2H_s^{-3}\left(\frac{\gamma+\ln\omega}{\upsilon^2}+\frac{1}{\upsilon}\right)^2-H_s^{-2}\left(\frac{2\ln\omega+2\gamma}{\upsilon^3}+\frac{1}{\upsilon^2}\right),\\
&U_{\omega\upsilon}=U_{\upsilon\omega}=-2H_s^{-3}\left(\frac{\gamma+\ln\omega}{\omega\upsilon^3}+\frac{1}{\omega\upsilon^2}\right)+H_s^{-2}\,\frac{1}{\omega\upsilon^2}.
\end{aligned} \tag{42}$$
Then, putting Equations (40) and (42) into Equation (38), $E(H_s^{-1}\mid T)$ is obtained. Thus, the Bayesian estimator $\hat{H}_{s2}$ of the Shannon entropy is calculated by Equation (33).
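Because every ingredient of Equation (38) is available at the MLE, the approximation can also be coded generically. The following sketch is our own illustration, not the authors' code; it reuses the hypothetical mle_iwd helper from Section 3 and evaluates Equation (38) for an arbitrary $U(\omega,\upsilon)$ using central finite differences for the derivatives of $U$. Calling it with $U=H_s$ and $U=H_s^{-1}$ gives the two posterior expectations needed in Equation (33).

```python
# Generic Lindley approximation (38) for E[U(omega, upsilon) | T].
import numpy as np
from itertools import product

def lindley(U, t, a, b, h=1e-4):
    t = np.asarray(t, dtype=float)
    n = len(t)
    w, v = mle_iwd(t)
    lt, tv = np.log(t), t ** (-v)
    # sigma_ij of Equation (39): inverse of the observed information [-S_ij]
    info = np.array([[n / w**2,         -np.sum(tv * lt)],
                     [-np.sum(tv * lt),  n / v**2 + w * np.sum(tv * lt**2)]])
    sig = np.linalg.inv(info)
    # Third derivatives of S and gradient of the log prior G, Equation (40)
    S3 = np.zeros((2, 2, 2))
    S3[0, 0, 0] = 2 * n / w**3
    S3[1, 1, 1] = 2 * n / v**3 + w * np.sum(tv * lt**3)
    S3[0, 1, 1] = S3[1, 0, 1] = S3[1, 1, 0] = -np.sum(tv * lt**2)
    G = np.array([(b - 1) / w - a, -1 / v])
    # Central finite differences for the gradient and Hessian of U at the MLE
    f = lambda dw, dv: U(w + dw * h, v + dv * h)
    U0 = f(0, 0)
    Ug = np.array([(f(1, 0) - f(-1, 0)) / (2 * h),
                   (f(0, 1) - f(0, -1)) / (2 * h)])
    mixed = (f(1, 1) - f(1, -1) - f(-1, 1) + f(-1, -1)) / (4 * h**2)
    Uh = np.array([[(f(1, 0) - 2 * U0 + f(-1, 0)) / h**2, mixed],
                   [mixed, (f(0, 1) - 2 * U0 + f(0, -1)) / h**2]])
    # (A + B)/2 and (C + D)/2 of Equations (38) and (39)
    ab = 0.5 * sum((Uh[i, j] + 2 * Ug[i] * G[j]) * sig[i, j]
                   for i, j in product(range(2), repeat=2))
    cd = 0.5 * sum(S3[i, j, k] * sig[i, j] * sig[k, l] * Ug[l]
                   for i, j, k, l in product(range(2), repeat=4))
    return U0 + ab + cd

# SE-loss estimate (33) for a sample t, with prior hyperparameters a = 5, b = 1:
# H_s2 = np.sqrt(lindley(shannon_entropy_iwd, t, a=5, b=1)
#                / lindley(lambda w, v: 1 / shannon_entropy_iwd(w, v), t, a=5, b=1))
```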
Under the SE loss function, the numerical calculation of the Rényi entropy estimator $\hat{H}_{r2}$ by the Lindley approximation is analogous. When $U(\omega,\upsilon)=H_r$, writing $z=1+r+r\upsilon^{-1}$,
$$\begin{aligned}
&U_{\omega}=\frac{r}{(1-r)\omega\upsilon},\qquad U_{\omega\omega}=-\frac{r}{(1-r)\omega^2\upsilon},\qquad U_{\omega\upsilon}=U_{\upsilon\omega}=-\frac{r}{(1-r)\omega\upsilon^2},\\
&U_{\upsilon}=\frac{r}{(1-r)\upsilon}\left[1+\frac{\ln r-\ln\omega}{\upsilon}-\frac{\Gamma'(z)}{\upsilon\,\Gamma(z)}\right],\\
&U_{\upsilon\upsilon}=\frac{2r(\ln\omega-\ln r)}{(1-r)\upsilon^3}-\frac{r}{(1-r)\upsilon^2}+\frac{2r\,\Gamma'(z)}{(1-r)\upsilon^3\,\Gamma(z)}-\frac{\left[\Gamma'(z)\right]^2-\Gamma''(z)\,\Gamma(z)}{[\Gamma(z)]^2}\cdot\frac{r^2}{(1-r)\upsilon^4}.
\end{aligned} \tag{43}$$
Putting Equations (40) and (43) into Equation (38), $E(H_r\mid T)$ is obtained.
When $U(\omega,\upsilon)=H_r^{-1}$, the derivatives follow from Equation (43) by the chain rule:
$$\begin{aligned}
&U_{\omega}=-H_r^{-2}\frac{\partial H_r}{\partial\omega},\qquad U_{\upsilon}=-H_r^{-2}\frac{\partial H_r}{\partial\upsilon},\\
&U_{\omega\omega}=2H_r^{-3}\left(\frac{\partial H_r}{\partial\omega}\right)^{2}-H_r^{-2}\frac{\partial^2 H_r}{\partial\omega^2},\qquad U_{\upsilon\upsilon}=2H_r^{-3}\left(\frac{\partial H_r}{\partial\upsilon}\right)^{2}-H_r^{-2}\frac{\partial^2 H_r}{\partial\upsilon^2},\\
&U_{\omega\upsilon}=U_{\upsilon\omega}=2H_r^{-3}\frac{\partial H_r}{\partial\omega}\frac{\partial H_r}{\partial\upsilon}-H_r^{-2}\frac{\partial^2 H_r}{\partial\omega\,\partial\upsilon},
\end{aligned} \tag{44}$$
where the first and second partial derivatives of $H_r$ are those listed in Equation (43). Putting Equations (40) and (44) into Equation (38), $E(H_r^{-1}\mid T)$ is obtained. Thus, the Bayesian estimator $\hat{H}_{r2}$ of the Rényi entropy is calculated by Equation (34).

4.2. Bayesian Estimation by Using Lindley Approximation under SSE Loss Function

Under the SSE loss function, the numerical calculation of the Shannon entropy estimator $\hat{H}_{s3}$ by the Lindley approximation proceeds as follows.
When $U(\omega,\upsilon)=H_s^{1-k}$,
$$\begin{aligned}
&U_{\omega}=(1-k)H_s^{-k}\,\frac{1}{\omega\upsilon},\qquad U_{\upsilon}=-(1-k)H_s^{-k}\left(\frac{\gamma+\ln\omega}{\upsilon^2}+\frac{1}{\upsilon}\right),\\
&U_{\omega\omega}=-k(1-k)H_s^{-k-1}\left(\frac{1}{\omega\upsilon}\right)^2-(1-k)H_s^{-k}\,\frac{1}{\omega^2\upsilon},\\
&U_{\upsilon\upsilon}=-k(1-k)H_s^{-k-1}\left(\frac{\gamma+\ln\omega}{\upsilon^2}+\frac{1}{\upsilon}\right)^2+(1-k)H_s^{-k}\left(\frac{2\ln\omega+2\gamma}{\upsilon^3}+\frac{1}{\upsilon^2}\right),\\
&U_{\omega\upsilon}=U_{\upsilon\omega}=k(1-k)H_s^{-k-1}\,\frac{1}{\omega\upsilon}\left(\frac{\gamma+\ln\omega}{\upsilon^2}+\frac{1}{\upsilon}\right)-(1-k)H_s^{-k}\,\frac{1}{\omega\upsilon^2}.
\end{aligned} \tag{45}$$
Then, putting Equations (40) and (45) into Equation (38), $E(H_s^{1-k}\mid T)$ is obtained.
When $U(\omega,\upsilon)=H_s^{-k}$,
$$\begin{aligned}
&U_{\omega}=-kH_s^{-k-1}\,\frac{1}{\omega\upsilon},\qquad U_{\upsilon}=kH_s^{-k-1}\left(\frac{\gamma+\ln\omega}{\upsilon^2}+\frac{1}{\upsilon}\right),\\
&U_{\omega\omega}=k(k+1)H_s^{-k-2}\left(\frac{1}{\omega\upsilon}\right)^2+kH_s^{-k-1}\,\frac{1}{\omega^2\upsilon},\\
&U_{\upsilon\upsilon}=k(k+1)H_s^{-k-2}\left(\frac{\gamma+\ln\omega}{\upsilon^2}+\frac{1}{\upsilon}\right)^2-kH_s^{-k-1}\left(\frac{2\ln\omega+2\gamma}{\upsilon^3}+\frac{1}{\upsilon^2}\right),\\
&U_{\omega\upsilon}=U_{\upsilon\omega}=-k(k+1)H_s^{-k-2}\,\frac{1}{\omega\upsilon}\left(\frac{\gamma+\ln\omega}{\upsilon^2}+\frac{1}{\upsilon}\right)+kH_s^{-k-1}\,\frac{1}{\omega\upsilon^2}.
\end{aligned} \tag{46}$$
Then, putting Equations (40) and (46) into Equation (38), $E(H_s^{-k}\mid T)$ is obtained. Thus, the Bayesian estimator $\hat{H}_{s3}$ of the Shannon entropy is calculated by Equation (35).
Under the SSE loss function, the numerical calculation of the Rényi entropy estimator $\hat{H}_{r3}$ by the Lindley approximation proceeds in the same way, with the derivatives again following from Equation (43) by the chain rule.
When $U(\omega,\upsilon)=H_r^{1-k}$,
$$\begin{aligned}
&U_{\omega}=(1-k)H_r^{-k}\frac{\partial H_r}{\partial\omega},\qquad U_{\upsilon}=(1-k)H_r^{-k}\frac{\partial H_r}{\partial\upsilon},\\
&U_{\omega\omega}=(1-k)\left[H_r^{-k}\frac{\partial^2 H_r}{\partial\omega^2}-kH_r^{-k-1}\left(\frac{\partial H_r}{\partial\omega}\right)^{2}\right],\qquad
U_{\upsilon\upsilon}=(1-k)\left[H_r^{-k}\frac{\partial^2 H_r}{\partial\upsilon^2}-kH_r^{-k-1}\left(\frac{\partial H_r}{\partial\upsilon}\right)^{2}\right],\\
&U_{\omega\upsilon}=U_{\upsilon\omega}=(1-k)\left[H_r^{-k}\frac{\partial^2 H_r}{\partial\omega\,\partial\upsilon}-kH_r^{-k-1}\frac{\partial H_r}{\partial\omega}\frac{\partial H_r}{\partial\upsilon}\right].
\end{aligned} \tag{47}$$
Putting Equations (40) and (47) into Equation (38), $E(H_r^{1-k}\mid T)$ is obtained.
When $U(\omega,\upsilon)=H_r^{-k}$,
$$\begin{aligned}
&U_{\omega}=-kH_r^{-k-1}\frac{\partial H_r}{\partial\omega},\qquad U_{\upsilon}=-kH_r^{-k-1}\frac{\partial H_r}{\partial\upsilon},\\
&U_{\omega\omega}=k(k+1)H_r^{-k-2}\left(\frac{\partial H_r}{\partial\omega}\right)^{2}-kH_r^{-k-1}\frac{\partial^2 H_r}{\partial\omega^2},\qquad
U_{\upsilon\upsilon}=k(k+1)H_r^{-k-2}\left(\frac{\partial H_r}{\partial\upsilon}\right)^{2}-kH_r^{-k-1}\frac{\partial^2 H_r}{\partial\upsilon^2},\\
&U_{\omega\upsilon}=U_{\upsilon\omega}=k(k+1)H_r^{-k-2}\frac{\partial H_r}{\partial\omega}\frac{\partial H_r}{\partial\upsilon}-kH_r^{-k-1}\frac{\partial^2 H_r}{\partial\omega\,\partial\upsilon}.
\end{aligned} \tag{48}$$
Putting Equations (40) and (48) into Equation (38), $E(H_r^{-k}\mid T)$ is obtained. Thus, the Bayesian estimator $\hat{H}_{r3}$ of the Rényi entropy is calculated by Equation (36).
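In practice, Equations (35) and (36) reduce to two calls of the generic Lindley routine sketched in Section 4.1. The fragment below is an illustration under the same assumptions, with $k=10$ and prior hyperparameters $a=5$ and $b=1$ as in Section 5.

```python
# SSE-loss Bayes estimate (35) assembled from the lindley helper above.
import numpy as np

rng = np.random.default_rng(1)
t = (-np.log(rng.uniform(size=50))) ** (-1 / 2)  # IWD sample, omega=1, upsilon=2
k = 10
num = lindley(lambda w, v: shannon_entropy_iwd(w, v) ** (1 - k), t, a=5, b=1)
den = lindley(lambda w, v: shannon_entropy_iwd(w, v) ** (-k), t, a=5, b=1)
print("H_s3:", num / den)
```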

5. Monte Carlo Simulation

In this section, Monte Carlo simulation is used to generate random samples from the two-parameter IWD, and each experiment is repeated 1000 times for different sample sizes ($n=10,20,30,40,50,60,70,80,90,100$). The true values of the parameters of the two-parameter IWD are taken as $\omega=1$ and $\upsilon=2$, the hyperparameters of the gamma prior are taken as $a=5$ and $b=1$, the parameter of the SSE loss is taken as $k=10$, and the order of the Rényi entropy is taken as $r=0.5$. The mean squared error (briefly, MSE) is then used to compare the performance of the estimators. The results for the Shannon entropy are shown in Table 1 and those for the Rényi entropy in Table 2. To assess the performance of the $100(1-\alpha)\%$ ACIs, the coverage probabilities are calculated and reported in Table 3.
For convenience, $H_{s0}$ and $H_{r0}$ denote the true values of the Shannon entropy and Rényi entropy, respectively; $\hat{M}_{s1}$ and $\hat{M}_{r1}$ denote the means of the 1000 ML estimates of the entropies; $\hat{M}_{s2}$ and $\hat{M}_{r2}$ denote the means of the 1000 Bayesian estimates under the SE loss function; $\hat{M}_{s3}$ and $\hat{M}_{r3}$ denote the means of the 1000 Bayesian estimates under the SSE loss function; $MSE_{s1}$ and $MSE_{r1}$ denote the MSEs of the ML estimates; $MSE_{s2}$ and $MSE_{r2}$ denote the MSEs of the Bayesian estimates under the SE loss function; and $MSE_{s3}$ and $MSE_{r3}$ denote the MSEs of the Bayesian estimates under the SSE loss function. The $\hat{M}_{sj}$ and $MSE_{sj}$ $(j=1,2,3)$ are calculated by Equations (49) and (50), where $m=1000$ and $\hat{H}_{sj,i}$ is the $i$th ML or Bayesian estimate of the Shannon entropy; the $\hat{M}_{rj}$ and $MSE_{rj}$ $(j=1,2,3)$ are calculated by Equations (51) and (52), where $\hat{H}_{rj,i}$ is the $i$th ML or Bayesian estimate of the Rényi entropy.
$$\hat{M}_{sj}=\frac{1}{m}\sum_{i=1}^{m}\hat{H}_{sj,i}, \tag{49}$$
$$MSE_{sj}=\frac{1}{m}\sum_{i=1}^{m}\left(\hat{H}_{sj,i}-H_{s0}\right)^2, \tag{50}$$
$$\hat{M}_{rj}=\frac{1}{m}\sum_{i=1}^{m}\hat{H}_{rj,i}, \tag{51}$$
$$MSE_{rj}=\frac{1}{m}\sum_{i=1}^{m}\left(\hat{H}_{rj,i}-H_{r0}\right)^2. \tag{52}$$
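One cell of this simulation can be sketched as follows (again reusing the hypothetical helpers from the earlier sketches); samples from the IWD are drawn by inverting the cdf (2), i.e., $t=(-\ln u/\omega)^{-1/\upsilon}$ with $u\sim U(0,1)$.

```python
# One cell of the Monte Carlo study: MSE of the ML estimator of H_s.
import numpy as np

rng = np.random.default_rng(2023)
omega, upsilon, n, m = 1.0, 2.0, 50, 1000
H_true = shannon_entropy_iwd(omega, upsilon)  # H_s0 = 1.1727

estimates = np.empty(m)
for i in range(m):
    u = rng.uniform(size=n)
    t = (-np.log(u) / omega) ** (-1.0 / upsilon)  # inverse-cdf sampling from (2)
    w_hat, v_hat = mle_iwd(t)
    estimates[i] = shannon_entropy_iwd(w_hat, v_hat)  # Equation (14)

print("mean:", estimates.mean())                   # cf. M_s1 in Table 1
print("MSE:", np.mean((estimates - H_true) ** 2))  # cf. MSE_s1 in Table 1
```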
Based on the above tables, the following findings can be drawn:
(1)
For Shannon entropy, the ML estimation performs better than the Bayesian estimation, while for Rényi entropy, the performance of ML estimation is similar to that of Bayesian estimation.
(2)
In Bayesian estimation, it is better to select the SE loss to estimate the Shannon entropy; conversely, it is better to select the SSE loss to estimate the Rényi entropy.
(3)
The sample size has a greater influence on the Shannon entropy than on the Rényi entropy. As the sample size increases, the Bayesian estimate of the Shannon entropy under SE approaches the ML estimate, whereas no comparable effect is observed for the Rényi entropy.
(4)
In Table 3, it can be noted that the coverage probabilities of the ACIs are close to, and in fact above, the nominal confidence levels.

6. Real Data Analysis

There is a real data set given by Bjerkdal [47] that represents the survival times (in days) of guinea pigs after injection with different doses of tubercle bacilli. Kundu and Howlader [48] showed that the IWD fits this data set very well; therefore, it can be regarded as a sample from the IWD. In Reference [47], the regimen number refers to the common logarithm of the number of bacillary units in 0.5 mL of challenge solution; in other words, regimen 6.6 represents 4.0 × 10^6 bacillary units per 0.5 mL. Corresponding to regimen 6.6, the 72 observations are listed as follows:
12, 15, 22, 24, 24, 32, 32, 33, 34, 38, 38, 43, 44, 48, 52, 53, 54, 54, 55, 56, 57, 58, 58, 59, 60, 60, 60, 60, 61, 62, 63, 65, 65, 67, 68, 70, 70, 72, 73, 75, 76, 76, 81, 83, 84, 85, 87, 91, 95, 96, 98, 99, 109, 110, 121, 127, 129, 131, 143, 146, 146, 175, 175, 211, 233, 258, 258, 263, 297, 341, 341, 376.
Using the estimators described in the sections above, the ML and Bayesian estimates of the Shannon entropy and Rényi entropy are displayed in Table 4. The Bayesian estimates under SSE are smaller than the corresponding ML estimates for both entropies, while under SE the Bayesian estimate exceeds the ML estimate for the Shannon entropy but not for the Rényi entropy.
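For reference, the ML column of Table 4 can be checked with the helper functions sketched earlier (our own illustration, not the authors' code):

```python
# ML estimates of the entropies for the guinea pig data (cf. Table 4).
import numpy as np

pigs = np.array([12, 15, 22, 24, 24, 32, 32, 33, 34, 38, 38, 43, 44, 48, 52,
                 53, 54, 54, 55, 56, 57, 58, 58, 59, 60, 60, 60, 60, 61, 62,
                 63, 65, 65, 67, 68, 70, 70, 72, 73, 75, 76, 76, 81, 83, 84,
                 85, 87, 91, 95, 96, 98, 99, 109, 110, 121, 127, 129, 131,
                 143, 146, 146, 175, 175, 211, 233, 258, 258, 263, 297, 341,
                 341, 376], dtype=float)

w_hat, v_hat = mle_iwd(pigs)
print("Shannon:", shannon_entropy_iwd(w_hat, v_hat))     # cf. 5.6307
print("Renyi:  ", renyi_entropy_iwd(w_hat, v_hat, 0.5))  # cf. 5.4129
```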

7. Conclusions

Entropy is a significant indicator for quantifying the uncertainty of information, and the IWD is an important lifetime model in the field of reliability; a numerical description of the entropy can be used to improve lifetime tests. Therefore, this paper considers the Bayesian estimation of the Shannon entropy and Rényi entropy of the two-parameter IWD. First, the expressions of these entropies are derived in Theorem 1. For ML estimation, the ML estimators of the parameters are obtained with the likelihood equation solved by the dichotomy method, and, by the invariance of ML estimation, the ML estimators of the entropies follow; approximate confidence intervals are also given by the Delta method. For Bayesian estimation, the symmetric entropy and scale squared error loss functions are chosen; because the resulting Bayesian estimators have complicated forms that are difficult to calculate, the Lindley approximation is used. Finally, the mean squared errors of the above estimators are used to compare their performance. From the findings in Section 5, the ML estimator is preferable for the Shannon entropy, while for the Rényi entropy the performances of the ML and Bayesian estimators are comparable. We hope these results will be helpful to researchers conducting lifetime tests.

Author Contributions

Conceptualization, H.R. and X.H.; methodology, H.R.; software, X.H.; validation, H.R. and X.H.; writing—original draft preparation, X.H.; writing—review and editing, H.R.; funding acquisition, H.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 71661012.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

First, we analyze the monotonicity of $y(\upsilon)$ with respect to $\upsilon$. The derivative of $y(\upsilon)$ is
$$y'(\upsilon)=-n\left[\frac{1}{\upsilon^2}+\frac{\left(\sum_{i=1}^n t_i^{-\upsilon}\right)\sum_{i=1}^n t_i^{-\upsilon}(\ln t_i)^2-\left(\sum_{i=1}^n t_i^{-\upsilon}\ln t_i\right)^2}{\left(\sum_{i=1}^n t_i^{-\upsilon}\right)^2}\right]. \tag{A1}$$
According to the Cauchy inequality,
$$\left(\sum_{i=1}^n t_i^{-\upsilon}\right)\sum_{i=1}^n t_i^{-\upsilon}(\ln t_i)^2-\left(\sum_{i=1}^n t_i^{-\upsilon}\ln t_i\right)^2\geq\left(\sum_{i=1}^n \sqrt{t_i^{-\upsilon}}\cdot\sqrt{t_i^{-\upsilon}}\,\ln t_i\right)^2-\left(\sum_{i=1}^n t_i^{-\upsilon}\ln t_i\right)^2=0. \tag{A2}$$
Therefore, $y'(\upsilon)<0$; that is, $y(\upsilon)$ is a strictly monotonically decreasing function of $\upsilon$.
Next, the range of $y(\upsilon)$ is considered. Since $y(\upsilon)$ is strictly decreasing, its range is determined by its limits as $\upsilon\to 0^{+}$ and $\upsilon\to+\infty$. It is clear that $\lim_{\upsilon\to 0^{+}}y(\upsilon)=+\infty$. Denote $t_{\min}=\min\{t_1,t_2,\ldots,t_n\}$. Because
$$\lim_{\upsilon\to+\infty}y(\upsilon)=\lim_{\upsilon\to+\infty}\frac{n\sum_{i=1}^n (t_i/t_{\min})^{-\upsilon}\ln t_i}{\sum_{i=1}^n (t_i/t_{\min})^{-\upsilon}}-\sum_{i=1}^n\ln t_i=n\ln t_{\min}-\sum_{i=1}^n\ln t_i, \tag{A3}$$
we have $\lim_{\upsilon\to+\infty}y(\upsilon)<0$ (provided the observations are not all equal).
According to the monotonicity and the range of $y(\upsilon)$, the equation $y(\upsilon)=0$ has a unique solution.

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; University of California Press: Berkeley, CA, USA, 1970; Volume 4, pp. 547–561. [Google Scholar]
  3. Bulinski, A.; Dimitrov, D. Statistical Estimation of the Shannon Entropy. Acta Math. Sin. Engl. Ser. 2018, 35, 17–46. [Google Scholar] [CrossRef]
  4. Wolf, R. Information and entropies. Quantum Key Distrib. Introd. Exerc. 2021, 988, 53–89. [Google Scholar]
  5. Chacko, M.; Asha, P.S. Estimation of entropy for generalized exponential distribution based on record values. J. Indian Soc. Prob. St. 2019, 19, 79–96. [Google Scholar] [CrossRef]
  6. Liu, S.; Gui, W. Estimating the entropy for Lomax distribution based on generalized progressively hybrid censoring. Symmetry 2019, 11, 1219. [Google Scholar] [CrossRef]
  7. Shrahili, M.; El-Saeed, A.R.; Hassan, A.S.; Elbatal, I. Estimation of entropy for Log-Logistic distribution under progressive type II censoring. J. Nanomater. 2022, 3, 2739606. [Google Scholar] [CrossRef]
  8. Mahmoud, M.R.; Ahmad, M.A.M.; Mohamed, B.S.K. Estimating the entropy and residual entropy of a Lomax distribution under generalized type-II hybrid censoring. Math. Stat. 2021, 9, 780–791. [Google Scholar] [CrossRef]
  9. Hassan, O.; Mazen, N. Product of spacing estimation of entropy for inverse Weibull distribution under progressive type-II censored data with applications. J. Taibah Univ. Sci. 2022, 16, 259–269. [Google Scholar]
  10. Mavis, P.; Gayan, W.L.; Broderick, O.O. A New Class of Generalized Inverse Weibull Distribution with Applications. J. Appl. Math. Bioinform. 2014, 4, 17–35. [Google Scholar]
  11. Basheer, A.M. Alpha power inverse Weibull distribution with reliability application. J. Taibah Univ. Sci. 2019, 13, 423–432. [Google Scholar] [CrossRef]
  12. Valeriia, S.; Broderick, O.O. Weighted Inverse Weibull Distribution: Statistical Properties and Applications. Theor. Math. Appl. 2014, 4, 1–30. [Google Scholar]
  13. Keller, A.Z.; Kamath, A.R.R. Alternative reliability models for mechanical systems. In Proceedings of the 3rd International Conference on Reliability and Maintainability, Toulouse, France, 16–21 October 1982. [Google Scholar]
  14. Abhijit, C.; Anindya, C. Use of the Fréchet distribution for UPV measurements in concrete. NDT E Int. 2012, 52, 122–128. [Google Scholar]
  15. Chiodo, E.; Falco, P.D.; Noia, L.P.D.; Mottola, F. Inverse loglogistic distribution for Extreme Wind Speed modeling: Genesis, identification and Bayes estimation. AIMS Energy 2018, 6, 926–948. [Google Scholar] [CrossRef]
  16. Langlands, A.; Pocock, S.J.; Kerr, G.R.; Gore, S.M. Long-term survival of patients with breast cancer: A study of the curability of the disease. BMJ 1979, 2, 1247–1251. [Google Scholar] [CrossRef]
  17. Ellah, A. Bayesian and non-Bayesian estimation of the inverse Weibull model based on generalized order statistics. Intell. Inf. Manag. 2012, 4, 23–31. [Google Scholar]
  18. Singh, S.K.; Singh, U.; Kumar, D. Bayesian estimation of parameters of inverse Weibull distribution. J. Appl. Stat. 2013, 40, 1597–1607. [Google Scholar] [CrossRef]
  19. Asuman, Y.; Mahmut, K. Reliability estimation and parameter estimation for inverse Weibull distribution under different loss functions. Kuwait J. Sci. 2022, 49, 2023051037. [Google Scholar]
  20. Sultan, K.S.; Alsadat, N.H.; Kundu, D. Bayesian and maximum likelihood estimations of the inverse Weibull parameters under progressive type-II censoring. J. Stat. Comput. Sim. 2014, 84, 2248–2265. [Google Scholar] [CrossRef]
  21. Amirzadi, A.; Jamkhaneh, E.B.; Deiri, E. A comparison of estimation methods for reliability function of inverse generalized Weibull distribution under new loss function. J. Stat. Comput. Sim. 2021, 91, 2595–2622. [Google Scholar] [CrossRef]
  22. Peng, X.; Yan, Z.Z. Bayesian estimation and prediction for the inverse Weibull distribution under general progressive censoring. Commun. Stat. Theory Methods 2016, 45, 621–635. [Google Scholar]
  23. Sindhu, T.N.; Feroze, N.; Aslam, M. Doubly censored data from two-component mixture of inverse Weibull distributions: Theory and Applications. J. Mod. Appl. Stat. Meth. 2016, 15, 322–349. [Google Scholar] [CrossRef]
  24. Mohammad, F.; Sana, S. Bayesian estimation and prediction for the inverse Weibull distribution based on lower record values. J. Stat. Appl. Probab. 2021, 10, 369–376. [Google Scholar]
  25. Al-Duais, F.S. Bayesian analysis of record statistics from the inverse Weibull distribution under balanced loss function. Math. Probl. Eng. 2021, 2021, 6648462. [Google Scholar] [CrossRef]
  26. Li, C.P.; Hao, H.B. Reliability of a stress-strength model with inverse Weibull distribution. Int. J. Appl. Math. 2017, 47, 302–306. [Google Scholar]
  27. Ismail, A.; Al Tamimi, A. Optimum constant-stress partially accelerated life test plans using type-I censored data from the inverse Weibull distribution. Strength Mater. 2017, 49, 847–855. [Google Scholar] [CrossRef]
  28. Kang, S.B.; Han, J.T. The graphical method for goodness of fit test in the inverse Weibull distribution based on multiply type-II censored samples. SpringerPlus 2015, 4, 768. [Google Scholar] [CrossRef]
  29. Saboori, H.; Barmalzan, G.; Ayat, S.M. Generalized modified inverse Weibull distribution: Its properties and applications. Sankhya B 2020, 82, 247–269. [Google Scholar] [CrossRef]
  30. Liu, B.X.; Wang, C.J. Bayesian estimation of interval censored data with proportional hazards model under generalized exponential distribution. J. Appl. Stat. Manag. 2023, 42, 293–301. [Google Scholar]
  31. Ren, H.P. Bayesian estimation of parameter of Rayleigh distribution under symmetric entropy loss function. J. Jiangxi Univ. Sci. Technol. 2009, 31, 64–66. [Google Scholar]
  32. Mohammad, K.; Mina, A. Estimation of the Inverse Weibull Distribution Parameters under Type-I Hybrid Censoring. Austrian J. Stat. 2021, 50, 38–51. [Google Scholar]
  33. Algarni, A.; Elgarhy, M.; Almarashi, A.M.; Fayomi, A.; El-Saeed, A.R. Classical and Bayesian Estimation of the Inverse Weibull Distribution: Using Progressive Type-I Censoring Scheme. Adv. Civ. Eng. 2021, 2021, 5701529. [Google Scholar] [CrossRef]
  34. Zhou, J.H.; Luo, Y. Bayes information updating and multiperiod supply chain screening. Int. J. Prod. Econ. 2023, 256, 108750–108767. [Google Scholar] [CrossRef]
  35. Yulin, S.; Deniz, G.; Soung, C.L. Bayesian Over-the-Air Computation. IEEE J. Sel. Areas Commun. 2023, 41, 589–606. [Google Scholar]
  36. Taborsky, P.; Vermue, L.; Korzepa, M.; Morup, M. The Bayesian Cut. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 4111–4124. [Google Scholar] [CrossRef]
  37. Korup, O. Bayesian geomorphology. Earth Surf. Process Landf. 2021, 46, 151–172. [Google Scholar] [CrossRef]
  38. Ran, E.; Kfir, E.; Mu, X.S. Bayesian privacy. Theor. Econ. 2021, 16, 1557–1603. [Google Scholar]
  39. Luo, C.L.; Shen, L.J.; Xu, A.C. Modelling and estimation of system reliability under dynamic operating environments and lifetime ordering constraints. Reliab. Eng. Syst. Saf. 2022, 218, 108136–108145. [Google Scholar] [CrossRef]
  40. Peng, W.W.; Li, Y.F.; Yang, Y.J.; Mi, J.H.; Huang, H.Z. Bayesian Degradation Analysis with Inverse Gaussian Process Models Under Time-Varying Degradation Rates. IEEE Trans. Reliab. 2017, 66, 84–96. [Google Scholar] [CrossRef]
  41. František, B.; Frederik, A.; Julia, M.H. Informed Bayesian survival analysis. BMC Med. Res. Methodol. 2022, 22, 238–260. [Google Scholar]
  42. Liu, F.; Hu, X.G.; Bu, C.Y.; Yu, K. Fuzzy Bayesian Knowledge Tracing. IEEE Trans. Fuzzy Syst. 2022, 30, 2412–2425. [Google Scholar] [CrossRef]
  43. Zhuang, L.L.; Xu, A.C.; Wang, X.L. A prognostic driven predictive maintenance framework based on Bayesian deep learning. Reliab. Eng. Syst. Saf. 2023, 234, 109181–109192. [Google Scholar] [CrossRef]
  44. Xu, B.; Wang, D.H.; Wang, R.T. Estimator of scale parameter in a subclass of the exponential family under symmetric entropy loss. Northeast Math. J. 2008, 24, 447–457. [Google Scholar]
  45. Li, Q.; Wu, D. Bayesian analysis of Rayleigh distribution under progressive type-II censoring. J. Shanghai Polytech. Univ. 2019, 36, 114–117. [Google Scholar]
  46. Song, L.X.; Chen, Y.S.; Xu, J.M. Bayesian estimation of Poisson distribution parameter under scale squared error loss function. J. Lanzhou Univ. Tech. 2008, 34, 152–154. [Google Scholar]
  47. Bjerkdal, T. Acquisition of resistance in guinea pigs infected with different doses of virulent tubercle bacilli. Am. J. Epidemiol. 1960, 72, 130–148. [Google Scholar] [CrossRef]
  48. Kundu, D.; Howlader, H. Bayesian inference and prediction of the inverse Weibull distribution for Type-II censored data. Comput. Stat. Data Anal. 2010, 54, 1547–1558. [Google Scholar] [CrossRef]
Figure 1. The curves of the pdf of the IWD for different values of the shape and scale parameters.
Table 1. Estimates and MSEs of the Shannon entropy ($H_{s0}=1.1727$).

| Sample Size ($n$) | $\hat{M}_{s1}$ | $\hat{M}_{s2}$ | $\hat{M}_{s3}$ | $MSE_{s1}$ | $MSE_{s2}$ | $MSE_{s3}$ |
|---|---|---|---|---|---|---|
| 10 | 1.0604 | 0.8259 | 0.8914 | 0.1903 | 0.3666 | 0.2065 |
| 20 | 1.1183 | 0.9631 | 0.9683 | 0.0863 | 0.1282 | 0.1186 |
| 30 | 1.1388 | 1.0301 | 1.0076 | 0.0558 | 0.0751 | 0.0766 |
| 40 | 1.1355 | 1.0526 | 1.0292 | 0.0445 | 0.0574 | 0.0592 |
| 50 | 1.1461 | 1.0788 | 1.0379 | 0.0323 | 0.0404 | 0.0472 |
| 60 | 1.1503 | 1.0938 | 1.0506 | 0.0287 | 0.0343 | 0.0399 |
| 70 | 1.1579 | 1.1093 | 1.0646 | 0.0244 | 0.0282 | 0.0334 |
| 80 | 1.1623 | 1.1196 | 1.0694 | 0.0197 | 0.0224 | 0.0284 |
| 90 | 1.1653 | 1.1272 | 1.0803 | 0.0171 | 0.0191 | 0.0256 |
| 100 | 1.1628 | 1.1284 | 1.0777 | 0.0161 | 0.0183 | 0.0244 |
Table 2. Estimates and MSEs of the Rényi entropy ($H_{r0}=1.5641$).

| Sample Size ($n$) | $\hat{M}_{r1}$ | $\hat{M}_{r2}$ | $\hat{M}_{r3}$ | $MSE_{r1}$ | $MSE_{r2}$ | $MSE_{r3}$ |
|---|---|---|---|---|---|---|
| 10 | 1.6681 | 1.7793 | 1.7682 | 0.0525 | 0.1075 | 0.0954 |
| 20 | 1.6056 | 1.6512 | 1.6587 | 0.0178 | 0.0218 | 0.0186 |
| 30 | 1.5999 | 1.6278 | 1.6229 | 0.0129 | 0.0136 | 0.0112 |
| 40 | 1.5903 | 1.6113 | 1.6082 | 0.0103 | 0.0103 | 0.0075 |
| 50 | 1.5829 | 1.5992 | 1.5972 | 0.0072 | 0.0071 | 0.0064 |
| 60 | 1.5809 | 1.5954 | 1.5896 | 0.0055 | 0.0057 | 0.0049 |
| 70 | 1.5765 | 1.5885 | 1.5878 | 0.0046 | 0.0046 | 0.0045 |
| 80 | 1.5781 | 1.5886 | 1.5857 | 0.0044 | 0.0041 | 0.0034 |
| 90 | 1.5752 | 1.5845 | 1.5779 | 0.0038 | 0.0038 | 0.0032 |
| 100 | 1.5731 | 1.5814 | 1.5775 | 0.0032 | 0.0032 | 0.0031 |
Table 3. The coverage probability of $100(1-\alpha)\%$ ACIs with different $\alpha$.

| Sample Size ($n$) | Shannon, $\alpha=0.1$ | Shannon, $\alpha=0.05$ | Rényi, $\alpha=0.1$ | Rényi, $\alpha=0.05$ |
|---|---|---|---|---|
| 10 | 0.9637 | 0.9752 | 0.9662 | 0.9791 |
| 20 | 0.9798 | 0.9894 | 0.9789 | 0.9884 |
| 30 | 0.9829 | 0.9916 | 0.9847 | 0.9930 |
| 40 | 0.9839 | 0.9941 | 0.9860 | 0.9953 |
| 50 | 0.9857 | 0.9946 | 0.9894 | 0.9957 |
| 60 | 0.9876 | 0.9947 | 0.9936 | 0.9954 |
| 70 | 0.9875 | 0.9947 | 0.9925 | 0.9965 |
| 80 | 0.9875 | 0.9940 | 0.9929 | 0.9972 |
| 90 | 0.9894 | 0.9955 | 0.9934 | 0.9966 |
| 100 | 0.9865 | 0.9950 | 0.9929 | 0.9971 |
Table 4. The estimates and ACIs of the entropies based on the real data set.

| | ML Estimate | Bayesian (SE) | Bayesian (SSE) | $90\%$ ACI ($\alpha=0.1$) | $95\%$ ACI ($\alpha=0.05$) |
|---|---|---|---|---|---|
| Shannon entropy | 5.6307 | 5.6998 | 4.8706 | (5.1858, 6.0757) | (5.1328, 6.1287) |
| Rényi entropy | 5.4129 | 4.7280 | 4.8706 | (5.1877, 5.6381) | (5.1609, 5.6649) |