New Information Measures for the Generalized Normal Distribution

We introduce a three-parameter generalized normal distribution, belonging to the Kotz type distribution family, in order to study generalized entropy type measures of information. For this generalized normal, the Kullback-Leibler information is evaluated, extending the well known result for the normal distribution, and it plays an important role for the introduced generalized information measures. These generalized entropy type measures of information are also evaluated and presented.


Introduction
The aim of this paper is to study the new entropy type information measures introduced by Kitsos and Tavoularis [1] and the multivariate hyper-normal distribution defined by them. These information measures are defined, adopting a parameter $\alpha$, as the $\alpha$-moment of the score function (see Section 2), where $\alpha$ is in principle an integer with $\alpha \ge 2$. One of the merits of this generalized normal distribution is that it belongs to the Kotz-type distribution family [2], i.e., it is an elliptically contoured distribution (see Section 3). Therefore it has all the characteristics and applications discussed in Baringhaus and Henze [3], Liang et al. [4] and Nadarajah [5]. The parametric information measures related to the entropy are often crucial to optimal design theory applications, see [6]. Moreover, it is proved that the defined generalized normal distribution provides equality in a new generalized information inequality (Kitsos and Tavoularis [1,7]) regarding the generalized information measure as well as the generalized Shannon entropy power (see Section 3).
In principle, information measures are divided into three main categories: parametric (a typical example is Fisher's information [8]), non-parametric (with Shannon's information measure being the most well known) and entropy type [9].
The new generalized entropy type measure of information $J_\alpha(X)$, defined by Kitsos and Tavoularis [1], is a function of the density $f = f_X$:

$$ J_\alpha(X) = \int_{\mathbb{R}^p} f(x)\, \big\| \nabla \ln f(x) \big\|^{\alpha}\, dx . \qquad (1) $$

From (1) we obtain that $J_\alpha(X)$ equals the expectation $\mathrm{E}\big[ \| \nabla \ln f(X) \|^{\alpha} \big]$. For $\alpha = 2$, the measure of information $J_2(X)$ is Fisher's information measure.

Proposition 1.1. It holds that $J_2(X) = J(X)$, Fisher's information measure; that is, $J_\alpha(X)$ is a generalized Fisher's entropy type information measure and, as the entropy, it is a function of the density. Moreover, when $\theta$ is a location parameter, $J_\alpha(\theta; X) = J_\alpha(X)$.

Proof. Considering the parameter $\theta$ as a location parameter and transforming the family to $f(x; \theta) = f(x - \theta)$, the differentiation with respect to $\theta$ is equivalent to the differentiation with respect to $x$. Therefore we can prove that $J_\alpha(\theta; X) = J_\alpha(X)$. Indeed, from (1) we have $J_\alpha(\theta; X) = \mathrm{E}\big[ \| \nabla_\theta \ln f(X; \theta) \|^{\alpha} \big] = \mathrm{E}\big[ \| \nabla_x \ln f(X - \theta) \|^{\alpha} \big] = J_\alpha(X)$, and the proposition has been proved.
Recall that the score function is defined as

$$ U(\theta; x) = \nabla_{\theta} \ln f(x; \theta) $$

under some regularity conditions; see Schervish [10] for details. It can be easily shown that, when $\theta$ is a location parameter, $J_\alpha(X)$ behaves as the $\alpha$-moment of the score function of $f(X; \theta)$.
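To make (1) concrete, the following minimal sketch (ours, not from the paper) estimates $J_\alpha(X)$ as the $\alpha$-moment of the score by Monte Carlo, for a univariate normal where the score is $-x/\sigma^2$; for $\alpha = 2$ it should recover the Fisher information $1/\sigma^2$.

```python
import numpy as np

def J_alpha_mc(samples, grad_log_f, alpha):
    """Monte Carlo estimate of J_alpha(X) = E ||grad ln f(X)||^alpha, cf. (1)."""
    return np.mean(np.abs(grad_log_f(samples)) ** alpha)

sigma = 2.0
rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma, size=1_000_000)   # X ~ N(0, sigma^2)
score = lambda t: -t / sigma**2              # d/dx ln f(x) for N(0, sigma^2)

print(J_alpha_mc(x, score, 2))   # ~ 1/sigma^2 = 0.25 (Fisher information)
print(J_alpha_mc(x, score, 4))   # ~ 3/sigma^4 = 0.1875 (4th score moment)
```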
Recall that the Shannon entropy $H(X)$ is defined as

$$ H(X) = - \int f(x) \ln f(x)\, dx , $$

see [9]. The entropy power $N(X)$ of a random variable $X$ was introduced by Shannon in 1948 [11] as the power of a $p$-dimensional white Gaussian random variable, with independent and identically distributed components, that has the same entropy $H(X)$:

$$ N(X) = \frac{1}{2 \pi e} \exp\!\Big( \frac{2}{p} H(X) \Big) . $$

The generalized entropy power $N_\alpha(X)$ is defined as

$$ N_\alpha(X) = M_\alpha \exp\!\Big( \frac{\alpha}{p} H(X) \Big) , \qquad (6) $$

with the normalizing factor $M_\alpha$ being the appropriate generalization of $(2 \pi e)^{-1}$, so that $N_\alpha(X)$ is still the power of the white Gaussian noise with the same entropy. Trivially, with $\alpha = 2$, the definition in (6) is reduced to the entropy power, i.e., $N_2(X) = N(X)$. In turn, the quantity $M_\alpha$ appears very often when we define various normalizing factors under this line of thought.
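As a quick numerical illustration (ours): for $X \sim N(0, \sigma^2 \mathrm{I}_p)$ the entropy is $H(X) = \frac{p}{2} \ln(2 \pi e \sigma^2)$, so the entropy power returns exactly the per-component variance $\sigma^2$; this is precisely what the factor $(2 \pi e)^{-1}$ is chosen to achieve.

```python
import numpy as np

def entropy_power(H, p):
    """Shannon entropy power N(X) = (2*pi*e)^(-1) * exp(2*H/p)."""
    return np.exp(2.0 * H / p) / (2.0 * np.pi * np.e)

p, sigma = 3, 1.5
H_gauss = 0.5 * p * np.log(2.0 * np.pi * np.e * sigma**2)  # H of N(0, sigma^2 I_p)
print(entropy_power(H_gauss, p), sigma**2)                 # both equal 2.25
```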
For the above introduced generalized entropy measures of information we need a distribution to play the "role of the normal", as the normal distribution does for Fisher's information measure and the Shannon entropy. Kitsos and Tavoularis [12] extend the normal distribution in the light of the introduced generalized information measures and of the optimal function satisfying the extension of the logarithmic Sobolev inequality (LSI). We form the following general definition for an extension of the multivariate normal distribution, the $\gamma$-order generalized normal, as follows:

Definition 1.1. The $p$-dimensional random variable $X$ follows the $\gamma$-order generalized normal $N_\gamma^p(\mu, \Sigma)$, with mean $\mu$ and scale matrix $\Sigma$, when the density function is of the form

$$ f(x) = C_\gamma(p, \Sigma) \exp\!\Big\{ - \tfrac{\gamma - 1}{\gamma}\, Q(x)^{\frac{\gamma}{2(\gamma - 1)}} \Big\} , \qquad Q(x) = (x - \mu)^{\mathrm{T}} \Sigma^{-1} (x - \mu) . $$

The normality factor $C_\gamma(p, \Sigma)$ is defined as

$$ C_\gamma(p, \Sigma) = \pi^{-p/2}\, \frac{\Gamma\big( \frac{p}{2} + 1 \big)}{\Gamma\big( p \frac{\gamma - 1}{\gamma} + 1 \big)} \Big( \frac{\gamma - 1}{\gamma} \Big)^{p \frac{\gamma - 1}{\gamma}} |\Sigma|^{-1/2} . $$

Notice that for $\gamma = 2$, $N_2^p(\mu, \Sigma)$ is the well known multivariate normal distribution. Recall that the symmetric Kotz type multivariate distribution [2] has density

$$ f(x) = K_{m, r, s}\, |\Sigma|^{-1/2}\, Q(x)^{m - 1} \exp\{ - r\, Q(x)^{s} \} , \qquad r > 0, \ s > 0, \ 2m + p > 2 , \qquad (9) $$

where the normalizing constant $K_{m, r, s}$ is given by

$$ K_{m, r, s} = \frac{s\, \Gamma(p/2)}{\pi^{p/2}\, \Gamma\big( \frac{2m + p - 2}{2s} \big)}\, r^{\frac{2m + p - 2}{2s}} , $$

see also [1] and [12]. Therefore, it can be shown that the distribution $N_\gamma^p(\mu, \Sigma)$ is a Kotz type distribution with $m = 1$, $r = \frac{\gamma - 1}{\gamma}$ and $s = \frac{\gamma}{2(\gamma - 1)}$. Also note that for the normal distribution ($\gamma = 2$) it holds that $C_2(p, \Sigma) = (2\pi)^{-p/2} |\Sigma|^{-1/2}$.
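For $p = 1$ the normality factor simplifies to $C_\gamma(1, \sigma^2) = \big( \tfrac{\gamma - 1}{\gamma} \big)^{\frac{\gamma - 1}{\gamma}} \big/ \big( 2 \sigma\, \Gamma\big( \tfrac{\gamma - 1}{\gamma} + 1 \big) \big)$. The sketch below (ours) checks numerically that the density of Definition 1.1 integrates to one and that $\gamma = 2$ recovers the normal density.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G
from scipy.stats import norm

def gn_pdf(x, mu, sigma, g):
    """gamma-order generalized normal density of Definition 1.1 with p = 1."""
    k = (g - 1.0) / g                        # the recurring factor (gamma-1)/gamma
    C = k**k / (2.0 * sigma * G(k + 1.0))    # C_gamma(1, sigma^2)
    return C * np.exp(-k * (np.abs(x - mu) / sigma) ** (g / (g - 1.0)))

for g in (1.5, 2.0, 3.0, 10.0):
    total, _ = quad(gn_pdf, -np.inf, np.inf, args=(0.0, 1.0, g))
    print(g, round(total, 6))                       # ~1.0 for every gamma > 1

print(gn_pdf(0.7, 0.0, 1.0, 2.0), norm.pdf(0.7))    # gamma = 2: the normal pdf
```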

The Kullback-Leibler Information for the γ-Order Generalized Normal Distribution
Recall that the Kullback-Leibler (K-L) information for two $p$-variate density functions $f, g$ is defined as [13]

$$ \mathrm{KL}(f, g) = \int f(x) \ln \frac{f(x)}{g(x)}\, dx . $$

The following lemma provides a generalization of the Kullback-Leibler information measure for the introduced generalized normal distribution.

Lemma 2.1. For $X \sim N_\gamma^p(\mu_0, \Sigma_0)$, the K-L information of $N_\gamma^p(\mu_0, \Sigma_0)$ with respect to $N_\gamma^p(\mu_1, \Sigma_1)$ is equal to

$$ \ln \frac{C_\gamma(p, \Sigma_0)}{C_\gamma(p, \Sigma_1)} + \frac{\gamma - 1}{\gamma} \Big( \mathrm{E}\big[ Q_1(X)^{\frac{\gamma}{2(\gamma - 1)}} \big] - p \Big) . $$

Proof. We have consecutively

$$ \mathrm{E}\big[ \ln f_0(X) - \ln f_1(X) \big] = \ln \frac{C_\gamma(p, \Sigma_0)}{C_\gamma(p, \Sigma_1)} - \frac{\gamma - 1}{\gamma}\, \mathrm{E}\big[ Q_0(X)^{\frac{\gamma}{2(\gamma - 1)}} \big] + \frac{\gamma - 1}{\gamma}\, \mathrm{E}\big[ Q_1(X)^{\frac{\gamma}{2(\gamma - 1)}} \big] . $$

Notice that the quadratic forms $Q_i(x) = (x - \mu_i)^{\mathrm{T}} \Sigma_i^{-1} (x - \mu_i)$, $i = 0, 1$, enter only through these two expectations, and that $\mathrm{E}\big[ Q_0(X)^{\frac{\gamma}{2(\gamma - 1)}} \big] = p$ (by switching to hyperspherical coordinates), and thus the lemma has been proved.
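The definition can be checked directly by quadrature (our sketch): for two univariate normals, $\mathrm{KL}(f, g) = \ln \frac{\sigma_1}{\sigma_0} + \frac{\sigma_0^2 + (\mu_0 - \mu_1)^2}{2 \sigma_1^2} - \frac{1}{2}$, which the numerical integral reproduces.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def kl_numeric(f, g):
    """K-L information KL(f, g) = integral of f * ln(f/g), univariate."""
    h = lambda x: f(x) * np.log(f(x) / g(x)) if f(x) > 0.0 else 0.0
    val, _ = quad(h, -np.inf, np.inf)
    return val

f = lambda x: norm.pdf(x, 0.0, 1.0)      # N(0, 1)
g = lambda x: norm.pdf(x, 0.0, 2.0)      # N(0, 4)
print(kl_numeric(f, g))                  # numerical value
print(np.log(2.0) + 1.0 / 8.0 - 0.5)     # closed form: ~0.3181
```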
Recall now the well known multivariate K-L information measure between two multivariate normal distributions $N^p(\mu_0, \sigma_0^2 \mathrm{I}_p)$ and $N^p(\mu_1, \sigma_1^2 \mathrm{I}_p)$ with $\mu_1 = \mu_0$, which is

$$ \mathrm{KLI}_p = p \Big[ \ln \frac{\sigma_1}{\sigma_0} + \frac{1}{2} \Big( \frac{\sigma_0^2}{\sigma_1^2} - 1 \Big) \Big] , $$

and which, for the univariate case, is

$$ \mathrm{KLI}_1 = \ln \frac{\sigma_1}{\sigma_0} + \frac{1}{2} \Big( \frac{\sigma_0^2}{\sigma_1^2} - 1 \Big) . $$

In what follows, an investigation of the K-L information measure is presented and discussed, concerning the introduced generalized normal. New results are provided that generalize the notion of K-L information. In fact, the following Theorem 2.1 generalizes $\mathrm{KLI}_p$ for the $\gamma$-order generalized normal, assuming that $\mu_1 = \mu_0$; various "sequences" of the K-L information measures are obtained as $\gamma$ varies.

Theorem 2.1. For $X_i \sim N_\gamma^p(\mu, \sigma_i^2 \mathrm{I}_p)$, $i = 0, 1$, the K-L information is

$$ \mathrm{KLI}_p^{\gamma} = p \Big[ \ln \frac{\sigma_1}{\sigma_0} + \frac{\gamma - 1}{\gamma} \Big( \Big( \frac{\sigma_0}{\sigma_1} \Big)^{\frac{\gamma}{\gamma - 1}} - 1 \Big) \Big] . \qquad (10) $$

Proof. Applying Lemma 2.1 with $\Sigma_i = \sigma_i^2 \mathrm{I}_p$ and $\mu_1 = \mu_0 = \mu$, we get

$$ \mathrm{KLI}_p^{\gamma} = p \ln \frac{\sigma_1}{\sigma_0} + \frac{\gamma - 1}{\gamma} \Big( \mathrm{E}\big[ Q_1(X_0)^{\frac{\gamma}{2(\gamma - 1)}} \big] - p \Big) , \qquad (11) $$

where the remaining expectation can be calculated by writing it as an integral against the density of Definition 1.1, switching to hyperspherical coordinates and taking into account the value of $C_\gamma(p, \sigma_0^2 \mathrm{I}_p)$. Using the known integral

$$ \int_0^{\infty} r^{k - 1} e^{-a r^{b}}\, dr = \frac{1}{b}\, a^{-k/b}\, \Gamma\big( \tfrac{k}{b} \big) , \qquad a, b, k > 0 , $$

we obtain

$$ \mathrm{E}\big[ Q_1(X_0)^{\frac{\gamma}{2(\gamma - 1)}} \big] = p \Big( \frac{\sigma_0}{\sigma_1} \Big)^{\frac{\gamma}{\gamma - 1}} , $$

and by substitution into (11) we get (10). For $\gamma = 2$ we have the only case in which the exponent $\frac{\gamma}{2(\gamma - 1)}$ equals 1; the known integral then reduces to the usual gamma integrals of the normal density, and (10) reduces to the classical $\mathrm{KLI}_p$ above. Thus the theorem has been proved.

Notice also that, since $\frac{\gamma - 1}{\gamma} \to 1$ and $\frac{\gamma}{\gamma - 1} \to 1$ as $\gamma \to \infty$, we get that $\mathrm{KLI}_p^{\gamma}$ decreases to $p \big[ \ln \frac{\sigma_1}{\sigma_0} + \frac{\sigma_0}{\sigma_1} - 1 \big]$, so the "sequence" of K-L information measures is decreasing in $\gamma$ (see Figure 2). The Shannon entropy of the $\gamma$-order generalized normal, used below, is due to the entropy of the symmetric Kotz type distribution (9), and it has been calculated (see [7]).

Theorem 2.2. The generalized Fisher's information of the generalized normal $X \sim N_\gamma^p(\mu, \sigma^2 \mathrm{I}_p)$ is

$$ J_\gamma(X) = p\, \sigma^{-\gamma} . $$

Proof. From $\| \nabla \ln f(x) \| = \sigma^{-\frac{\gamma}{\gamma - 1}} \| x - \mu \|^{\frac{1}{\gamma - 1}}$ and the value of $C_\gamma(p, \sigma^2 \mathrm{I}_p)$, we have the result (see [7] for details).
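Both closed forms above can be sanity-checked numerically in the univariate case; the sketch below (ours) compares Theorem 2.1 against direct quadrature and verifies $J_\gamma(X) = \sigma^{-\gamma}$ for $p = 1$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

def log_gn(x, sigma, g):
    """ln of the gamma-order generalized normal density (p = 1, mu = 0)."""
    k = (g - 1.0) / g
    return k * np.log(k) - np.log(2.0 * sigma * G(k + 1.0)) \
        - k * (np.abs(x) / sigma) ** (g / (g - 1.0))

g, s0, s1 = 3.0, 0.8, 1.3

# Theorem 2.1: KLI_1^gamma by quadrature vs. the closed form (10).
num, _ = quad(lambda x: np.exp(log_gn(x, s0, g)) * (log_gn(x, s0, g) - log_gn(x, s1, g)),
              -np.inf, np.inf)
closed = np.log(s1 / s0) + (g - 1.0) / g * ((s0 / s1) ** (g / (g - 1.0)) - 1.0)
print(num, closed)                        # should agree

# Theorem 2.2: J_gamma(X) = sigma^(-gamma) for X ~ N_gamma(0, sigma^2), p = 1.
b = g / (g - 1.0)
score = lambda x: np.abs(x) ** (b - 1.0) / s0**b          # |d/dx ln f(x)|
j_num, _ = quad(lambda x: np.exp(log_gn(x, s0, g)) * score(x) ** g, -np.inf, np.inf)
print(j_num, s0 ** -g)                    # should agree: ~1.953
```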

Discussion and Further Analysis
We examine the behavior of the multivariate $\gamma$-order generalized normal distribution. Using Mathematica, we proceed to the following helpful calculations to analyze further the above theoretical results; see also [12].
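The original computations were done in Mathematica; an equivalent Python sketch (ours), built on the closed form (10), reproduces the curves of Figure 1:

```python
import numpy as np
import matplotlib.pyplot as plt

def kli(ratio, p, g):
    """KLI_p^gamma of Theorem 2.1 as a function of ratio = sigma0/sigma1 (mu0 = mu1)."""
    return p * (-np.log(ratio) + (g - 1.0) / g * (ratio ** (g / (g - 1.0)) - 1.0))

r = np.linspace(0.05, 3.0, 400)
for p in range(1, 21):                    # dimensions p = 1, ..., 20
    plt.plot(r, kli(r, p, 3.0), lw=0.8)
plt.xlabel(r"$\sigma_0/\sigma_1$")
plt.ylabel(r"$\mathrm{KLI}_p^{\gamma}$ ($\gamma = 3$)")
plt.show()
```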


Figure 1. The graphs of $\mathrm{KLI}_p^{\gamma}$, for dimensions $p = 1, 2, \ldots, 20$, as functions of the quotient $\sigma_0 / \sigma_1$ (provided that $\mu_0 = \mu_1$).

Figure 2. The graphs of $\mathrm{KLI}_p^{\gamma}$, $\gamma = 2, 3, \ldots$, as functions of the quotient $\sigma_0 / \sigma_1$ (provided that $\mu_0 = \mu_1$), where we can see that $\mathrm{KLI}_p^{2} \ge \mathrm{KLI}_p^{3} \ge \cdots$, i.e., $\mathrm{KLI}_p^{\gamma} \ge \mathrm{KLI}_p^{\gamma + 1}$, $\gamma = 2, 3, \ldots$.

Figure 3. The univariate $\gamma$-order generalized normal distribution for various values of $\gamma$, among them $\gamma = 2$ (the normal distribution).
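A rendering in the spirit of Figure 3 (our sketch; the particular $\gamma$ values are chosen for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import gamma as G

def gn_pdf(x, g, sigma=1.0):
    """Univariate gamma-order generalized normal density (mu = 0)."""
    k = (g - 1.0) / g
    return k**k / (2.0 * sigma * G(k + 1.0)) * \
        np.exp(-k * (np.abs(x) / sigma) ** (g / (g - 1.0)))

x = np.linspace(-4.0, 4.0, 801)
for g in (1.2, 2.0, 3.0, 10.0):           # gamma = 2 is the standard normal
    plt.plot(x, gn_pdf(x, g), label=rf"$\gamma = {g}$")
plt.legend()
plt.show()
```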
The Shannon entropy of a random variable $X$ which follows the generalized normal $N_\gamma^p(\mu, \Sigma)$ is

$$ H(X) = p\, \frac{\gamma - 1}{\gamma} - \ln C_\gamma(p, \Sigma) . $$

Proof. We write the result from the previous Lemma 2.1, $\mathrm{E}\big[ Q(X)^{\frac{\gamma}{2(\gamma - 1)}} \big] = p$, so that $H(X) = - \mathrm{E}[\ln f(X)] = - \ln C_\gamma(p, \Sigma) + \frac{\gamma - 1}{\gamma}\, p$, and the result has been proved.
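The entropy expression can be verified by quadrature for $p = 1$ (our sketch), where $H(X) = \frac{\gamma - 1}{\gamma} - \ln C_\gamma(1, \sigma^2)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

g, sigma = 3.0, 1.5
k = (g - 1.0) / g
C = k**k / (2.0 * sigma * G(k + 1.0))           # C_gamma(1, sigma^2)
log_f = lambda x: np.log(C) - k * (np.abs(x) / sigma) ** (g / (g - 1.0))

H_num, _ = quad(lambda x: -np.exp(log_f(x)) * log_f(x), -np.inf, np.inf)
print(H_num, k - np.log(C))                     # both ~ H(X) for p = 1
```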