
Projective Power Entropy and Maximum Tsallis Entropy Distributions

The Institute of Statistical Mathematics, Tachikawa, Tokyo 190-8562, Japan
* Author to whom correspondence should be addressed.
Entropy 2011, 13(10), 1746-1764; https://doi.org/10.3390/e13101746
Submission received: 26 July 2011 / Revised: 20 September 2011 / Accepted: 20 September 2011 / Published: 26 September 2011
(This article belongs to the Special Issue Tsallis Entropy)

Abstract

We discuss a one-parameter family of generalized cross entropies between two distributions, indexed by a power parameter and called the projective power entropy. The cross entropy essentially reduces to the Tsallis entropy when the two distributions are equal. Statistical and probabilistic properties associated with the projective power entropy are investigated extensively, including a characterization problem of which conditions uniquely determine the projective power entropy up to the power index. A close relation of the entropy with the Lebesgue space $L_p$ and its dual $L_q$ is explored, in which the escort distribution plays an interesting role. When we consider maximum Tsallis entropy distributions under constraints on the mean vector and variance matrix, the model becomes a multivariate $q$-Gaussian model with elliptical contours, including the Gaussian and $t$-distribution models. We discuss statistical estimation by minimization of the empirical loss associated with the projective power entropy. It is shown that the minimum loss estimators for the mean vector and variance matrix under the maximum entropy model are the sample mean vector and the sample variance matrix. The escort distribution of the maximum entropy distribution plays a key role in the derivation.

1. Introduction

In classical statistical physics and information theory, the close relation with the Boltzmann-Shannon entropy has been well established and offers an elementary and clear understanding. The Kullback-Leibler divergence is directly connected with maximum likelihood, which is one of the most basic ideas in statistics. Tsallis opened new perspectives for the power entropy to elucidate non-equilibrium states in statistical physics, which have strongly influenced research on non-extensive and chaotic phenomena, cf. [1,2]. Several generalized versions of entropy and divergence have been proposed, cf. [3,4,5,6,7]. We consider generalized entropy and divergence defined on the space of density functions with finite mass,
$\mathcal{F} = \left\{ f : \int f(x)\,dx < \infty,\ f(x) \ge 0 \ \text{for almost every } x \right\}$
in a framework of information geometry originated by Amari, cf. [8,9].
A functional $D : \mathcal{F}\times\mathcal{F}\to[0,\infty)$ is called a divergence if $D(g,f)\ge 0$ with equality if and only if $g = f$. It is shown in [10,11] that, under mild conditions, any divergence is associated with a Riemannian metric and a pair of conjugate connections on a manifold modeled in $\mathcal{F}$.
We begin with the original form of the power cross entropy [12] with index $\beta\in\mathbb{R}$, defined by
$C_\beta^{(o)}(g,f) = -\frac{1}{\beta}\int g(x)\{f(x)^\beta - 1\}\,dx + \frac{1}{1+\beta}\int f(x)^{1+\beta}\,dx$
for all $g$ and $f$ in $\mathcal{F}$, and so the power (diagonal) entropy
$H_\beta^{(o)}(f) = C_\beta^{(o)}(f,f) = -\frac{1}{\beta(\beta+1)}\int f(x)^{1+\beta}\,dx + \frac{1}{\beta}\int f(x)\,dx$
See [13,14] for the information geometry and for statistical applications to independent component analysis and pattern recognition. Note that this is defined in the continuous case for probability density functions, but it can be reduced to the discrete case; see Tsallis [2] for an extensive discussion in statistical physics. In fact, the Tsallis entropy
$S_q(f) = \frac{1}{q-1}\left( 1 - \int f(x)^q\,dx \right)$
for a probability density function $f(x)$ coincides with the power entropy up to a constant, $S_q(f) = q\,H_\beta^{(o)}(f) - 1$, where $q = 1+\beta$. The power divergence is given by
$D_\beta^{(o)}(g,f) = C_\beta^{(o)}(g,f) - H_\beta^{(o)}(g)$
which, in general, is defined as the difference between the cross entropy and the diagonal entropy.
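As a quick illustration of these definitions, the following is a minimal numerical sketch (ours, not part of the original derivation) evaluating the power cross entropy, the diagonal entropy, the power divergence and the Tsallis entropy on discrete probability vectors, where the integrals become sums; the function names and test vectors are arbitrary.

```python
# Minimal numerical sketch (not from the paper): the power cross entropy,
# the diagonal entropy and the power divergence on discrete probability
# vectors, where the integrals reduce to sums, together with the relation
# S_q(f) = q H_beta^(o)(f) - 1 for q = 1 + beta.
import numpy as np

def C_beta(g, f, beta):
    return -np.sum(g * (f**beta - 1.0)) / beta + np.sum(f**(1.0 + beta)) / (1.0 + beta)

def H_beta(f, beta):
    return C_beta(f, f, beta)

def S_q(f, q):
    return (1.0 - np.sum(f**q)) / (q - 1.0)

rng = np.random.default_rng(0)
g = rng.dirichlet(np.ones(5))
f = rng.dirichlet(np.ones(5))
beta = 0.5
q = 1.0 + beta

print(C_beta(g, f, beta) - H_beta(g, beta) >= 0.0)        # power divergence is nonnegative
print(np.isclose(S_q(f, q), q * H_beta(f, beta) - 1.0))   # True
```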
In this paper we focus on the projective power cross entropy defined by
$C_\gamma(g,f) = -\frac{1}{\gamma(1+\gamma)}\,\frac{\int g(x)\,f(x)^\gamma\,dx}{\left(\int f(x)^{1+\gamma}\,dx\right)^{\frac{\gamma}{1+\gamma}}}$
and so the projective power entropy is
$H_\gamma(f) = -\frac{1}{\gamma(1+\gamma)}\left(\int f(x)^{1+\gamma}\,dx\right)^{\frac{1}{1+\gamma}}$
The log expression for $C_\gamma(g,f)$ is defined by
$C_\gamma^{\log}(g,f) = -\frac{1}{\gamma}\log\{-\gamma(1+\gamma)\,C_\gamma(g,f)\}$
See [15,16] for the derivation of $C_\gamma^{\log}$ and a detailed discussion on the relation between $C_\beta^{(o)}(g,f)$ and $C_\gamma(g,f)$. The projective power cross entropy $C_\gamma(g,f)$ satisfies linearity with respect to $g$ and the projective invariance, that is, $C_\gamma(g,\lambda f) = C_\gamma(g,f)$ for any constant $\lambda > 0$. Note that $H_\gamma(f)$ has a one-to-one correspondence with $S_q(f)$ as given by
$H_\gamma(f) = -\frac{1}{q(q-1)}\{1-(q-1)S_q(f)\}^{\frac{1}{q}}$
where q = 1 + γ . The projective power divergence is
$D_\gamma(g,f) = C_\gamma(g,f) - H_\gamma(g)$
which will be discussed in close relation with Hölder's inequality. The divergence defined by $C_\gamma(g,f)$ satisfies
$D_\gamma^{\log}(g,f) = C_\gamma^{\log}(g,f) - C_\gamma^{\log}(g,g) \ge 0$
for all $\gamma\in\mathbb{R}$ provided the integrals in $D_\gamma^{\log}(g,f)$ exist. The nonnegativity leads to
$D_\gamma(g,f) \ge 0$
We remark that the range of the power index $\gamma$ for which $C_\gamma(g,f)$ and $H_\gamma(f)$ exist depends on the sample space on which $f$ and $g$ are defined. If the sample space is compact, both $C_\gamma(g,f)$ and $H_\gamma$ are well-defined for all $\gamma\in\mathbb{R}$. If the sample space is not compact, $C_\gamma(g,f)$ is defined for $\gamma\ge 0$ and $H_\gamma(f)$ for $\gamma > -1$. More precisely, we will explore the case in which the sample space is $\mathbb{R}^d$ in a subsequent discussion, together with moment conditions. Typically we observe that
$\lim_{\gamma\to 0} D_\gamma(g,f) = D_0(g,f)$
where $D_0(g,f)$ denotes the Kullback-Leibler divergence,
$D_0(g,f) = \int g(x)\log\frac{g(x)}{f(x)}\,dx$
See Appendix 1 for the derivation of (5).
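The following minimal numerical sketch (our own illustration, not code from the paper) evaluates the projective power cross entropy, the diagonal entropy and the divergence on discrete probability vectors, and illustrates the limit (5) recovering the Kullback-Leibler divergence; the vectors and the random seed are arbitrary.

```python
# Minimal numerical sketch (not from the paper): the projective power cross
# entropy, the projective power entropy and the divergence D_gamma on discrete
# probability vectors, and the gamma -> 0 limit recovering the Kullback-Leibler
# divergence of Equation (5).
import numpy as np

def C_gamma(g, f, gamma):
    num = np.sum(g * f**gamma)
    den = np.sum(f**(1.0 + gamma)) ** (gamma / (1.0 + gamma))
    return -num / den / (gamma * (1.0 + gamma))

def H_gamma(f, gamma):
    return -np.sum(f**(1.0 + gamma)) ** (1.0 / (1.0 + gamma)) / (gamma * (1.0 + gamma))

def D_gamma(g, f, gamma):
    return C_gamma(g, f, gamma) - H_gamma(g, gamma)

rng = np.random.default_rng(1)
g = rng.dirichlet(np.ones(6))
f = rng.dirichlet(np.ones(6))

kl = np.sum(g * np.log(g / f))
for gamma in (1.0, 0.1, 0.01, 0.001):
    print(gamma, D_gamma(g, f, gamma))   # nonnegative, approaches the KL value below
print("KL:", kl)
```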
Let $\{x_1,\ldots,x_n\}$ be a random sample from a distribution with probability density function $g(x)$. A statistical model $\{f(x,\theta) : \theta\in\Theta\}$ with parameter $\theta$ is assumed to sufficiently approximate the underlying density function $g(x)$, where $\Theta$ is a parameter space. Then the loss function associated with the projective power entropy $C_\gamma(g, f(\cdot,\theta))$ based on the sample is given by
$L_\gamma(\theta) = -\frac{1}{\gamma(1+\gamma)}\,\frac{1}{n}\sum_{i=1}^n k_\gamma(\theta)\,f(x_i,\theta)^\gamma$
in which we call
$\hat\theta_\gamma = \operatorname*{argmin}_{\theta\in\Theta} L_\gamma(\theta)$
the γ-estimator, where
$k_\gamma(\theta) = \left(\int f(x,\theta)^{1+\gamma}\,dx\right)^{-\frac{\gamma}{1+\gamma}}$
We note that
$\mathrm{E}_g\{L_\gamma(\theta)\} = C_\gamma(g, f(\cdot,\theta))$
where $\mathrm{E}_g$ denotes the statistical expectation with respect to $g$. It is observed that the 0-estimator is nothing but the maximum likelihood estimator (MLE) since the loss $L_\gamma(\theta)$ converges to the minus log-likelihood function,
$L_0(\theta) = -\frac{1}{n}\sum_{i=1}^n \log f(x_i,\theta)$
in the sense that
$\lim_{\gamma\to 0}\left\{ L_\gamma(\theta) + \frac{1}{\gamma(1+\gamma)} \right\} = L_0(\theta)$
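A minimal sketch of the γ-loss follows (ours; the one-dimensional $N(\theta,1)$ model, the quadrature routine and the grid search are illustrative choices, not part of the paper); for small $\gamma$ the minimizer is close to the maximum likelihood estimate.

```python
# Minimal sketch (not code from the paper): the empirical gamma-loss for a
# one-dimensional N(theta, 1) model, with k_gamma(theta) evaluated by numerical
# quadrature, and the gamma-estimator obtained by a crude grid search.
import numpy as np
from scipy.integrate import quad

def normal_pdf(x, theta):
    return np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2.0 * np.pi)

def k_gamma(theta, gamma):
    integral, _ = quad(lambda x: normal_pdf(x, theta) ** (1.0 + gamma), -np.inf, np.inf)
    return integral ** (-gamma / (1.0 + gamma))

def loss(theta, x, gamma):
    return -k_gamma(theta, gamma) * np.mean(normal_pdf(x, theta) ** gamma) / (gamma * (1.0 + gamma))

rng = np.random.default_rng(2)
x = rng.normal(0.3, 1.0, size=200)

grid = np.linspace(-1.0, 1.5, 501)
for gamma in (0.01, 0.5):
    theta_hat = grid[np.argmin([loss(t, x, gamma) for t in grid])]
    print(gamma, theta_hat, x.mean())   # for small gamma the estimate is close to the sample mean
```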
If the underlying density function $g(x)$ belongs to a Gaussian model with mean $\mu$ and variance $\sigma^2$, then the MLEs for $\mu$ and $\sigma^2$ are the sample mean and sample variance. The reverse statement is shown in [17,18]. We will extend this theory to the case of the γ-estimator under the γ-model.
In Section 2 we discuss the characterization of the projective power entropy. In Section 3 the maximum entropy distribution with the Tsallis entropy $S_q$ with $q = 1+\gamma$ under the constraints of mean vector $\mu$ and variance matrix $\Sigma$ is considered. We discuss the model of maximum entropy distributions, called the γ-model, in which the 0-model and the 2-model are the Gaussian and Wigner models, respectively. Then we show that the γ-estimators for $\mu$ and $\Sigma$ under the γ-model are the sample mean and sample variance. Section 4 gives concluding remarks and further comments.

2. Projective Invariance

Let us look at a close relation of F with Lebesgue’s space
$L_p = \left\{ f(x) : \int |f(x)|^p\,dx < \infty \right\}$
where $p\ge 1$ and the $L_p$-norm $\|\cdot\|_p$ is defined by
$\|f\|_p = \left(\int |f(x)|^p\,dx\right)^{\frac{1}{p}}$
Let $q$ be the conjugate index of $p$ satisfying $1/p + 1/q = 1$, in which $p$ and $q$ can be expressed as functions of the parameter $\gamma > 0$ such that $p = 1+\gamma^{-1}$ and $q = 1+\gamma$. We note that this $q$ is equal to the index $q$ in the Tsallis entropy $S_q$ in the relation $q = 1+\gamma$. For any probability density function $f(x)$ we define the escort distribution with the probability density function
$e_q(f(x)) = \frac{f(x)^q}{\int f(y)^q\,dy}$
cf. [2] for extensive discussion. We discuss an interesting relation of the projective cross entropy (1) with the escort distribution. By the definition of the escort distribution,
$C_\gamma(g,f) = -\frac{1}{\gamma(1+\gamma)}\int \{e_q(f(x))\}^{\frac{1}{p}}\,g(x)\,dx$
We note that $e_q(f)^{\frac{1}{p}}$ is in the unit sphere of $L_p$ in the representation. The projective power diagonal entropy (2) is proportional to the $L_q$-norm, that is,
$H_\gamma(f) = -\frac{1}{\gamma(1+\gamma)}\,\|f\|_q$
from which Hölder's inequality
$\int g(x)\,f(x)^\gamma\,dx \le \|g\|_q\,\|f^\gamma\|_p$
claims that $C_\gamma(g,f) \ge H_\gamma(g)$, or equivalently
$D_\gamma(g,f) \ge 0$
for all $f$ and $g$ in $\mathcal{F}$, which can also be derived from $C_\gamma^{(o)}(g,f)\ge H_\gamma^{(o)}(g)$. The equality in (10) holds if and only if $f(x) = \lambda g(x)$ for almost every $x$, where $\lambda$ is a positive constant. The power transform suggests an interplay between the spaces $L_p$ and $L_q$ by the relation
$\|f^\gamma\|_p = \|f\|_q^\gamma$
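Below is a minimal numerical sketch (our own check, on discrete probability vectors with an arbitrary seed) of these facts: the escort transform lands on the unit sphere of $L_p$, the identity $\|f^\gamma\|_p = \|f\|_q^\gamma$ holds, and Hölder's inequality gives $C_\gamma(g,f)\ge H_\gamma(g)$.

```python
# Minimal numerical sketch (not from the paper): for discrete probability
# vectors, e_q(f)^(1/p) lies on the unit sphere of L_p, the power transform
# satisfies ||f^gamma||_p = ||f||_q^gamma, and Hoelder's inequality gives
# C_gamma(g, f) >= H_gamma(g).
import numpy as np

gamma = 0.7
p, q = 1.0 + 1.0 / gamma, 1.0 + gamma

rng = np.random.default_rng(3)
f = rng.dirichlet(np.ones(8))
g = rng.dirichlet(np.ones(8))

def norm(v, r):
    return np.sum(np.abs(v) ** r) ** (1.0 / r)

e_q = f**q / np.sum(f**q)                                   # escort distribution
print(np.isclose(norm(e_q ** (1.0 / p), p), 1.0))           # unit L_p sphere
print(np.isclose(norm(f**gamma, p), norm(f, q) ** gamma))   # interplay of L_p and L_q

C = -np.sum(g * f**gamma) / (np.sum(f**(1 + gamma)) ** (gamma / (1 + gamma))) / (gamma * (1 + gamma))
H = -norm(g, q) / (gamma * (1 + gamma))
print(C >= H)   # True, with equality iff f is proportional to g
```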
Taking the limit of $\gamma$ to 0 in Hölder's inequality (9) yields that
$\int g(x)\log f(x)\,dx \le \int g(x)\log g(x)\,dx$
since
$\lim_{\gamma\to 0}\int g(x)\,\frac{f(x)^\gamma - 1}{\gamma}\,dx = \int g(x)\log f(x)\,dx$
and
$\lim_{\gamma\to 0}\frac{\|f^\gamma\|_p\,\|g\|_q - 1}{\gamma} = \int g(x)\log g(x)\,dx$
This limit regarding $p$ is associated with another space rather than the $L_\infty$ space, which is nothing but the space of all density functions with finite Boltzmann-Shannon entropy, say $L_{\log}$. The power index $\gamma$ reparameterizes the Lebesgue space $L_p$ and the dual space $L_q$ by the relation $p = 1+\gamma^{-1}$; however, taking the power transform $f(x)^\gamma$ is totally different from the ordinary discussion of the Lebesgue space, so that the duality converges to $(L_{\log}, L_1)$ as observed in (11). In information geometry the pair $(L_{\log}, L_1)$ corresponds to that of the mixture and exponential connections, cf. [9]. See also another one-parameterization of the $L_p$ space [19].
We now discuss the uniqueness problem for $C_\gamma(g,f)$, as given in the following theorem. A general discussion on the characterization is given in [16]; however, the derivation there is rather complicated. Here we assume the key condition that a cross entropy $\Gamma(g,f)$ is linear in $g$, which allows an elementary proof. The Riesz representation theorem suggests
$\Gamma(g,f) = c(f)\int g(x)\,\psi(f(x))\,dx$
where $c(f)$ is a constant depending on $f$. Thus we obtain the following theorem when we take a specific form of $c(f)$ that guarantees the scale invariance.
Theorem 1. Define a functional $\Gamma : \mathcal{F}\times\mathcal{F}\to\mathbb{R}$ by
$\Gamma(g,f) = \varphi\!\left(\int\rho(f(x))\,dx\right)\int g(x)\,\psi(f(x))\,dx$
where $\varphi$, $\rho$ and $\psi$ are differentiable and monotonic functions. Assume that
(i). $\Gamma(g,g) = \min_{f\in\mathcal{F}}\Gamma(g,f)$ for all $g\in\mathcal{F}$,
and that
(ii). $\Gamma(g,f) = \Gamma(g,\lambda f)$ for all $\lambda > 0$ and all $g,f\in\mathcal{F}$.
Then there exists $\gamma$ such that $\Gamma(g,f) = C_\gamma(g,f)$ up to a constant factor, where $C_\gamma(g,f)$ is the projective power cross entropy defined by (1).
Proof. The requirement (ii) means that
$\frac{\partial}{\partial\lambda}\left\{\varphi\!\left(\int\rho(\lambda f(x))\,dx\right)\int\psi(\lambda f(x))\,g(x)\,dx\right\} = 0$
which implies that, if $f$ is absolutely continuous and $g$ is the Dirac measure at $x_0$, then
$\frac{\dot\psi(\lambda f(x_0))}{\psi(\lambda f(x_0))}\,\lambda f(x_0) = c(\lambda)$
where
$c(\lambda) = -\frac{\lambda\,\dot\varphi\!\left(\int\rho(\lambda f(x))\,dx\right)\int\dot\rho(\lambda f(x))\,f(x)\,dx}{\varphi\!\left(\int\rho(\lambda f(x))\,dx\right)}$
Since we can take an arbitrary value $f(x_0)$ for any fixed $\lambda$,
$\frac{\dot\psi(t)}{\psi(t)} = c(\lambda)\,t^{-1}$
which is uniquely solved as $\psi(t) = t^\gamma$, where $\gamma = c(\lambda)$. Next let us consider the case of a finite discrete space $\{x_i : 1\le i\le m\}$. Then, since $\psi(f) = f^\gamma$, we can write
$\Gamma(g,f) = \varphi\!\left(\sum_{i=1}^m \rho(f_i)\right)\sum_{i=1}^m g_i f_i^\gamma$
where $f_i = f(x_i)$ and $g_i = g(x_i)$. The requirement (i) leads to $(\partial/\partial f_j)\Gamma(g,f)\big|_{f=g} = 0$ for all $j$, $1\le j\le m$, which implies that
$\dot\rho(g_j) = -\gamma\,c(g_1,\ldots,g_m)\,g_j^\gamma$
where
$c(g_1,\ldots,g_m) = \frac{\varphi\!\left(\sum_{i=1}^m\rho(g_i)\right)}{\sum_{i=1}^m g_i^{1+\gamma}\,\dot\varphi\!\left(\sum_{i=1}^m\rho(g_i)\right)}$
It follows from (13) that $c(g_1,\ldots,g_m)$ must be constant in $g_1,\ldots,g_m$, say $C$, so that we solve (13) as $\rho(g_j) = -\gamma C g_j^{1+\gamma}/(1+\gamma)$. Therefore, Equation (14) is written as
$\frac{\dot\varphi(t)}{\varphi(t)} = -\frac{\gamma}{1+\gamma}\,t^{-1}$
which leads to $\varphi(t) = t^{-\frac{\gamma}{1+\gamma}}$. We conclude that $\Gamma(g,f)\propto C_\gamma(g,f)$, which completes the proof. ☐
Remark 1. The proof above is essentially applicable to the case in which the integral in (11) is given by a summation over just a binary distribution. In this sense the statement of Theorem 1 is not tight; however, it allows statistical inference to be discussed in a unified manner whether the distribution is continuous or discrete. In the subsequent discussion we focus on the case of continuous distributions defined on $\mathbb{R}^d$.
Remark 2. We see a multiplicative decomposition of $C_\gamma(g,f)$ under statistical independence. In fact, if $f$ and $g$ are decomposed as $f = f_1 f_2$ and $g = g_1 g_2$ in the same partition, then
$C_\gamma(g,f) = C_\gamma(g_1,f_1)\,C_\gamma(g_2,f_2)$
This property is also elemental, but we do not assume this decomposability as a requirement in Theorem 1.

3. Model of Maximum Entropy Distributions

We will elucidate a dualistic structure between the maximum entropy model for $H_\gamma$, defined in (2), and the minimum cross entropy estimator for $C_\gamma$, defined in (1). Before this discussion we review the classical case, in which maximum likelihood estimation performs well under the maximum entropy model for the Boltzmann-Shannon entropy, that is, a Gaussian model if we impose the mean and variance constraints. We use the conventional notation that $X$ denotes a random variable with value $x$. Let $\{x_1,\ldots,x_n\}$ be a random sample from a Gaussian distribution with density function
$f_0(x,\mu,\Sigma) = \det(2\pi\Sigma)^{-\frac{1}{2}}\exp\left\{-\tfrac{1}{2}(x-\mu)^{\mathsf{T}}\Sigma^{-1}(x-\mu)\right\}$
The Gaussian density function is written in the canonical form
$(2\pi)^{-\frac{d}{2}}\exp\left\{-\tfrac{1}{2}(x-\mu)^{\mathsf{T}}\Xi(x-\mu) + \tfrac{1}{2}\log\det(\Xi)\right\}$
where $\Xi$ is called the canonical parameter, defined by $\Sigma^{-1}$. Differentiation of (15) with respect to $\mu$ and $\Xi$ yields
$\mathrm{E}_{f_0(\cdot,\mu,\Sigma)}(X) = \mu \quad\text{and}\quad \mathrm{V}_{f_0(\cdot,\mu,\Sigma)}(X) = \Sigma$
where $\mathrm{E}_f$ and $\mathrm{V}_f$ denote the expectation vector and variance matrix with respect to a probability density function $f(x)$, respectively.
The maximum likelihood estimator is given by
$(\hat\mu_0, \hat\Sigma_0) = (\bar{x}, S)$
where $\bar{x}$ and $S$ are the sample mean vector and the sample variance matrix,
$\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i, \qquad S = \frac{1}{n}\sum_{i=1}^n (x_i-\bar{x})(x_i-\bar{x})^{\mathsf{T}}$
This is because the minus log-likelihood function is
$L_0(\mu,\Sigma) = -\frac{1}{n}\sum_{i=1}^n \log f_0(x_i,\mu,\Sigma)$
which is written as
$\tfrac{1}{2}\,\mathrm{trace}(S(\mu)\,\Xi) - \tfrac{1}{2}\log\det(\Xi)$
apart from a constant, where
$S(\mu) = \frac{1}{n}\sum_{i=1}^n (x_i-\mu)(x_i-\mu)^{\mathsf{T}}$
Hence the estimating system is
$\begin{pmatrix} \dfrac{\partial}{\partial\mu}L_0(\mu,\Sigma) \\[4pt] \dfrac{\partial}{\partial\Xi}L_0(\mu,\Sigma) \end{pmatrix} = \begin{pmatrix} \Xi(\bar{x}-\mu) \\[2pt] \tfrac{1}{2}\{S(\mu)-\Sigma\} \end{pmatrix} = \begin{pmatrix} 0 \\ O \end{pmatrix}$
which yields the expression (16) of the MLE, since $S(\mu) = S + (\bar{x}-\mu)(\bar{x}-\mu)^{\mathsf{T}}$. Alternatively, we have another route to show (16) as follows. The Kullback-Leibler divergence defined in (6) is given by
$D_0(f_0(\cdot,\mu,\Sigma), f_0(\cdot,\mu_1,\Sigma_1))$
$= \tfrac{1}{2}(\mu-\mu_1)^{\mathsf{T}}\Sigma_1^{-1}(\mu-\mu_1) + \tfrac{1}{2}\,\mathrm{trace}\{(\Sigma-\Sigma_1)\Sigma_1^{-1}\} - \tfrac{1}{2}\log\det(\Sigma\Sigma_1^{-1})$
Thus, we observe that
$L_0(\mu,\Sigma) - L_0(\bar{x},S) = D_0(f_0(\cdot,\bar{x},S), f_0(\cdot,\mu,\Sigma))$
which is nonnegative with equality if and only if $(\mu,\Sigma) = (\bar{x},S)$. This implies (16).
Under mild regularity conditions the reverse statement holds; that is, the MLE for a location and scatter model satisfies (16) if and only if the model is Gaussian, cf. [17,18]. However, even if we assume nothing about the underlying distribution $g(x)$, the statistics $\bar{x}$ and $S$ are asymptotically consistent for
$\mu_g = \mathrm{E}_g(X) \quad\text{and}\quad \Sigma_g = \mathrm{V}_g(X)$
This is a direct result of the strong law of large numbers, and the central limit theorem yields the asymptotic normality of these two statistics. In this sense, $(\bar{x}, S)$ is also a nonparametric estimator for $(\mu_g,\Sigma_g)$.
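A minimal numerical sketch of this classical setting follows (our own check, not code from the paper); it computes $\bar{x}$ and the $1/n$ sample variance matrix and verifies the identity (19) against the Gaussian Kullback-Leibler formula (18) for an arbitrarily chosen $(\mu,\Sigma)$.

```python
# Minimal numerical sketch (not from the paper): the sample mean and the (1/n)
# sample variance matrix of (17), and a check of the identity
# L_0(mu, Sigma) - L_0(xbar, S) = D_0(f_0(., xbar, S), f_0(., mu, Sigma)) in (19),
# using the Gaussian Kullback-Leibler formula (18).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 2)) @ np.array([[1.0, 0.3], [0.0, 0.8]]) + np.array([1.0, -2.0])
n, d = X.shape

xbar = X.mean(axis=0)
S = (X - xbar).T @ (X - xbar) / n          # note the 1/n convention used in the paper

def L0(mu, Sigma):
    return -np.mean(multivariate_normal.logpdf(X, mean=mu, cov=Sigma))

def kl_gauss(mu0, S0, mu1, S1):
    S1inv = np.linalg.inv(S1)
    quad_term = (mu0 - mu1) @ S1inv @ (mu0 - mu1)
    return 0.5 * (quad_term + np.trace((S0 - S1) @ S1inv) - np.log(np.linalg.det(S0 @ S1inv)))

mu, Sigma = np.array([0.5, -1.5]), np.eye(2)
print(np.isclose(L0(mu, Sigma) - L0(xbar, S), kl_gauss(xbar, S, mu, Sigma)))   # True
```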
We explore a close relation of the statistical model and the estimation method. We consider a maximum entropy distribution with the γ-entropy H γ over the space of d-dimensional distributions with a common mean and variance,
$\mathcal{F}(\mu,\Sigma) = \left\{ f\in\mathcal{F} : \int f(x)\,dx = 1,\ \mathrm{E}_f(X) = \mu,\ \mathrm{V}_f(X) = \Sigma \right\}$
Then we define a distribution with a probability density function written by
$f_\gamma(x,\mu,\Sigma) = \frac{c_\gamma}{\det(2\pi\Sigma)^{\frac{1}{2}}}\left(1 - \frac{\gamma}{2+d\gamma+2\gamma}\,(x-\mu)^{\mathsf{T}}\Sigma^{-1}(x-\mu)\right)_+^{\frac{1}{\gamma}}$
where $(\,\cdot\,)_+$ denotes the positive part and $c_\gamma$ is the normalizing factor,
$c_\gamma = \begin{cases} \left(\dfrac{2\gamma}{2+d\gamma+2\gamma}\right)^{\frac{d}{2}}\dfrac{\Gamma\!\left(1+\frac{d}{2}+\frac{1}{\gamma}\right)}{\Gamma\!\left(1+\frac{1}{\gamma}\right)} & \text{if } \gamma > 0 \\[12pt] \left(-\dfrac{2\gamma}{2+d\gamma+2\gamma}\right)^{\frac{d}{2}}\dfrac{\Gamma\!\left(-\frac{1}{\gamma}\right)}{\Gamma\!\left(-\frac{1}{\gamma}-\frac{d}{2}\right)} & \text{if } -\frac{2}{d+2} < \gamma < 0 \end{cases}$
See Appendix 2 for the derivation of $c_\gamma$. If the dimension $d$ equals 1, then $f_\gamma(x,\mu,\Sigma)$ is a $q$-Gaussian distribution with $q = \gamma+1$. We remark that
$\lim_{\gamma\downarrow 0} c_\gamma = \lim_{\gamma\uparrow 0} c_\gamma = 1$
so that $f_\gamma(x,\mu,\Sigma)$ reduces to the $d$-variate Gaussian density in the limit $\gamma\to 0$. The support of $f_\gamma(\cdot,\mu,\Sigma)$ becomes an ellipsoid defined as
$\left\{ x\in\mathbb{R}^d : (x-\mu)^{\mathsf{T}}\Sigma^{-1}(x-\mu) \le \frac{2+d\gamma+2\gamma}{\gamma} \right\}$
if $\gamma > 0$. On the other hand, if $-\frac{2}{d+2} < \gamma < 0$, the density function (21) is written as
$f_\gamma(x,\mu,\Sigma) = \det(\pi\tau\Sigma)^{-\frac{1}{2}}\,\frac{\Gamma\!\left(-\frac{1}{\gamma}\right)}{\Gamma\!\left(-\frac{1}{\gamma}-\frac{d}{2}\right)}\left(1 + \frac{1}{\tau}(x-\mu)^{\mathsf{T}}\Sigma^{-1}(x-\mu)\right)^{\frac{1}{\gamma}}$
where
$\tau = -\frac{2+(d+2)\gamma}{\gamma}$
The d-variate t-distribution is defined by
$g_\nu(x,\mu,P) = \det(\pi\nu P)^{-\frac{1}{2}}\,\frac{\Gamma\!\left(\frac{\nu+d}{2}\right)}{\Gamma\!\left(\frac{\nu}{2}\right)}\left(1 + \frac{1}{\nu}(x-\mu)^{\mathsf{T}}P^{-1}(x-\mu)\right)^{-\frac{\nu+d}{2}}$
cf. [20] for an extensive discussion. Assume that
$\frac{\nu+d}{2} = -\frac{1}{\gamma} \quad\text{and}\quad \nu P = \tau\Sigma$
Then we observe from (23) and (24) that
$f_\gamma(x,\mu,\Sigma) = g_\nu(x,\mu,P)$
Accordingly, the density function $f_\gamma(x,\mu,\Sigma)$ with $-\frac{2}{d+2} < \gamma < 0$ is a $t$-distribution. The distribution has elliptical contours on the Euclidean space $\mathbb{R}^d$ for any $\gamma > -\frac{2}{d+2}$, as shown in Figure 1 for typical cases of $\gamma$.
Figure 1. $t$-distribution ($\gamma = -0.4$), Gaussian ($\gamma = 0$) and Wigner ($\gamma = 2$) distributions.
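To make the correspondence (23)-(24) concrete, the following is a minimal sketch for $d = 1$ (ours; the particular values of $\gamma$, $\mu$ and $\Sigma$ are arbitrary) comparing the γ-model density with the Student $t$ density of matching degrees of freedom and scale.

```python
# Minimal sketch (not code from the paper): for d = 1 and a negative index, the
# gamma-model density f_gamma coincides with a Student t density having
# nu = -2/gamma - d degrees of freedom and nu * P = tau * Sigma.
import numpy as np
from scipy.special import gamma as Gamma
from scipy.stats import t as student_t

d = 1
gam = -0.4
mu, sigma2 = 1.0, 2.0

omega = gam / (2.0 + d * gam + 2.0 * gam)
c_gam = (-2.0 * omega) ** (d / 2.0) * Gamma(-1.0 / gam) / Gamma(-1.0 / gam - d / 2.0)

def f_gamma(x):
    z2 = (x - mu) ** 2 / sigma2
    return c_gam / np.sqrt(2.0 * np.pi * sigma2) * (1.0 - omega * z2) ** (1.0 / gam)

nu = -2.0 / gam - d                 # degrees of freedom
tau = -(2.0 + (d + 2.0) * gam) / gam
P = tau * sigma2 / nu               # squared scale of the t-distribution

x = np.linspace(-6.0, 8.0, 7)
print(np.allclose(f_gamma(x), student_t.pdf(x, df=nu, loc=mu, scale=np.sqrt(P))))  # True
```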
Let
$\mathcal{M}_\gamma = \left\{ f_\gamma(x,\mu,\Sigma) : \mu\in\mathbb{R}^d,\ \Sigma\in\mathcal{S}_d \right\}$
which we call the γ-model, where $\mathcal{S}_d$ denotes the space of all symmetric, positive-definite matrices of order $d$. We confirm the mean and variance of the γ-model as follows.
Lemma. Under the model $\mathcal{M}_\gamma$ defined in (25) with index $\gamma > -\frac{2}{d+2}$,
$\mathrm{E}_{f_\gamma(\cdot,\mu,\Sigma)}(X) = \mu \quad\text{and}\quad \mathrm{V}_{f_\gamma(\cdot,\mu,\Sigma)}(X) = \Sigma$
Proof. We need to consider a family of escort distributions. In the model M γ we can define the escort distribution as
$e_q(f_\gamma(x,\mu,\Sigma)) = \frac{c_\gamma^*}{\det(\Sigma)^{\frac{1}{2}}}\left(1 - \frac{\gamma}{2+d\gamma+2\gamma}\,(x-\mu)^{\mathsf{T}}\Sigma^{-1}(x-\mu)\right)_+^{\frac{1+\gamma}{\gamma}}$
where q = 1 + γ and c γ * is the normalizing factor. Hence,
$e_q(f_\gamma(x,\mu,\Sigma)) = c_\gamma^*\left(\det(\Sigma)^{-\frac{1}{2}\frac{\gamma}{1+\gamma}} - \frac{\gamma}{2+d\gamma+2\gamma}\,(x-\mu)^{\mathsf{T}}\left\{\det(\Sigma)^{-\frac{1}{2}\frac{\gamma}{1+\gamma}}\,\Sigma^{-1}\right\}(x-\mu)\right)_+^{\frac{1+\gamma}{\gamma}}$
Here we define an alternative parameter $\Xi_\gamma$ to the original parameter $\Sigma$ by the transform
$\Xi_\gamma = \det(\Sigma)^{-\frac{1}{2}\frac{\gamma}{1+\gamma}}\,\Sigma^{-1}$
so that the inverse transform is given by
$\Sigma = \det(\Xi_\gamma)^{\frac{\gamma}{d\gamma+2\gamma+2}}\,\Xi_\gamma^{-1}$
noting that $\det(\Xi_\gamma) = \det(\Sigma)^{-\frac{1}{2}\frac{d\gamma+2\gamma+2}{1+\gamma}}$. Thus, we get a canonical form of (26) as
$e_q(f_\gamma(x,\mu,\Sigma)) = c_\gamma^*\left(\det(\Xi_\gamma)^{\frac{\gamma}{2+d\gamma+2\gamma}} - \frac{\gamma}{2+d\gamma+2\gamma}\,(x-\mu)^{\mathsf{T}}\Xi_\gamma(x-\mu)\right)_+^{\frac{1+\gamma}{\gamma}}$
By analogy with the discussion for an exponential family, we have the following expression for the braced term in (30):
$-\frac{2\gamma}{2+d\gamma+2\gamma}\left\{\tfrac{1}{2}\,\mathrm{trace}(x x^{\mathsf{T}}\Xi_\gamma) - \mu^{\mathsf{T}}\Xi_\gamma x + \tfrac{1}{2}\mu^{\mathsf{T}}\Xi_\gamma\mu - \frac{2+d\gamma+2\gamma}{2\gamma}\det(\Xi_\gamma)^{\frac{\gamma}{d\gamma+2\gamma+2}}\right\}$
A property of the escort distribution suggests moment formulae for the distribution (25) as follows: We have an identity
$\frac{\partial}{\partial\mu}\int c_\gamma^*\left(\det(\Xi_\gamma)^{\frac{\gamma}{d\gamma+2\gamma+2}} - \frac{\gamma}{2+d\gamma+2\gamma}\,(x-\mu)^{\mathsf{T}}\Xi_\gamma(x-\mu)\right)_+^{\frac{1+\gamma}{\gamma}} dx = 0$
which implies that
$\int \left(\det(\Xi_\gamma)^{\frac{\gamma}{d\gamma+2\gamma+2}} - \frac{\gamma}{2+d\gamma+2\gamma}\,(x-\mu)^{\mathsf{T}}\Xi_\gamma(x-\mu)\right)_+^{\frac{1}{\gamma}}\,\Xi_\gamma(x-\mu)\,dx = 0$
which concludes that
$\mathrm{E}_{f_\gamma(\cdot,\mu,\Sigma)}(X) = \mu$
Similarly,
$\frac{\partial}{\partial\Xi_\gamma}\int c_\gamma^*\left(\det(\Xi_\gamma)^{\frac{\gamma}{d\gamma+2\gamma+2}} - \frac{\gamma}{2+d\gamma+2\gamma}\,(x-\mu)^{\mathsf{T}}\Xi_\gamma(x-\mu)\right)_+^{\frac{1+\gamma}{\gamma}} dx = 0$
which is
$\int c_\gamma^*\left(\det(\Xi_\gamma)^{\frac{\gamma}{d\gamma+2\gamma+2}} - \frac{\gamma}{2+d\gamma+2\gamma}\,(x-\mu)^{\mathsf{T}}\Xi_\gamma(x-\mu)\right)_+^{\frac{1}{\gamma}}$
$\times\left\{\frac{\gamma}{d\gamma+2\gamma+2}\det(\Xi_\gamma)^{\frac{\gamma}{d\gamma+2\gamma+2}}\,\Xi_\gamma^{-1} - \frac{\gamma}{2+d\gamma+2\gamma}\,(x-\mu)(x-\mu)^{\mathsf{T}}\right\} dx = 0$
which concludes that
$\mathrm{V}_{f_\gamma(\cdot,\mu,\Sigma)}(X) = \Sigma$
because of the relation of Ξ γ and Σ as observed in (29). The proof is complete.  ☐
Remark 3. The canonical form (30) of the escort distribution (26) plays an important role in the proof of the Lemma. In principle, a canonical form of (21) itself can also be written down; however, no link to distributional properties analogous to the exponential family case is known for it.
Remark 4. In Equation (31) the function
$\varphi(\Xi) = \frac{1}{2\omega}\det(\Xi)^{\omega}$
is viewed as a potential function in the Fenchel convex duality, where
$\omega = \frac{\gamma}{2+d\gamma+2\gamma}$
cf. [21,22] for the covariance structure model.
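As a small sanity check of the Lemma (ours, by one-dimensional quadrature; the particular index $\gamma = 2$ and the values of $\mu$ and $\Sigma$ are arbitrary), the γ-model density integrates to one and has the prescribed mean and variance.

```python
# Minimal numerical sketch (not from the paper): for d = 1 and gamma = 2 (the
# Wigner-type case), the gamma-model density integrates to one and has mean mu
# and variance sigma2, as stated in the Lemma.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

d, gam = 1, 2.0
mu, sigma2 = 0.5, 1.5

omega = gam / (2.0 + d * gam + 2.0 * gam)
c_gam = (2.0 * omega) ** (d / 2.0) * Gamma(1.0 + d / 2.0 + 1.0 / gam) / Gamma(1.0 + 1.0 / gam)

def f_gamma(x):
    z2 = (x - mu) ** 2 / sigma2
    return c_gam / np.sqrt(2.0 * np.pi * sigma2) * np.maximum(1.0 - omega * z2, 0.0) ** (1.0 / gam)

lo, hi = mu - np.sqrt(sigma2 / omega), mu + np.sqrt(sigma2 / omega)   # edge of the elliptical support
mass, _ = quad(f_gamma, lo, hi)
mean, _ = quad(lambda x: x * f_gamma(x), lo, hi)
var, _ = quad(lambda x: (x - mu) ** 2 * f_gamma(x), lo, hi)
print(round(mass, 6), round(mean, 6), round(var, 6))   # approximately 1, mu, sigma2
```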
From the Lemma we observe that $f_\gamma(\cdot,\mu,\Sigma)\in\mathcal{F}(\mu,\Sigma)$. Next we show that the distribution with density $f_\gamma(\cdot,\mu,\Sigma)$ maximizes the γ-entropy $H_\gamma$ over the space $\mathcal{F}(\mu,\Sigma)$, where $H_\gamma$ is defined in (2).
Theorem 2.
(i). If $-\frac{2}{d+2} < \gamma \le 0$, then
$f_\gamma(\cdot,\mu,\Sigma) = \operatorname*{argmax}_{f\in\mathcal{F}(\mu,\Sigma)} H_\gamma(f)$
where $\mathcal{F}(\mu,\Sigma)$ is defined in (20).
(ii). If $\gamma > 0$, then
$f_\gamma(\cdot,\mu,\Sigma) = \operatorname*{argmax}_{f\in\mathcal{F}(\mu,\Sigma)^{(\gamma)}} H_\gamma(f)$
where
$\mathcal{F}(\mu,\Sigma)^{(\gamma)} = \left\{ f\in\mathcal{F}(\mu,\Sigma) : f(x) = 0\ \text{for almost every}\ x\in B(\mu,\Sigma) \right\}$
with $B(\mu,\Sigma)$ being $\left\{ x\in\mathbb{R}^d : (x-\mu)^{\mathsf{T}}\Sigma^{-1}(x-\mu) > \frac{2+d\gamma+2\gamma}{\gamma} \right\}$.
Proof. By the definition of $\mathcal{F}(\mu,\Sigma)$, we see from the Lemma that $f_\gamma(\cdot,\mu,\Sigma)\in\mathcal{F}(\mu,\Sigma)$ for any $\gamma\in\left(-\frac{2}{d+2},\,0\right)$. This leads to
$\mathrm{E}_{f_\gamma(\cdot,\mu,\Sigma)}\{f_\gamma(X,\mu,\Sigma)^\gamma\} = \mathrm{E}_f\{f_\gamma(X,\mu,\Sigma)^\gamma\}$
for any $f$ in $\mathcal{F}(\mu,\Sigma)$, which implies that
$H_\gamma(f_\gamma(\cdot,\mu,\Sigma)) = C_\gamma(f, f_\gamma(\cdot,\mu,\Sigma))$
Hence
$H_\gamma(f_\gamma(\cdot,\mu,\Sigma)) - H_\gamma(f) = D_\gamma(f, f_\gamma(\cdot,\mu,\Sigma))$
which is nonnegative as discussed in (4). This concludes (34). Similarly, we observe that (36) holds for any $\gamma > 0$ and any $f$ in $\mathcal{F}(\mu,\Sigma)^{(\gamma)}$, since the support of $f$ is contained in that of $f_\gamma(\cdot,\mu,\Sigma)$. This concludes (35). ☐
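The following minimal numerical sketch (ours, for $d = 1$ and $\gamma = -0.4$; the comparison density is the standard Gaussian member of $\mathcal{F}(0,1)$) illustrates the maximum entropy property of Theorem 2 by quadrature.

```python
# Minimal numerical sketch (not from the paper): for gamma = -0.4 and d = 1, the
# gamma-model density with mean 0 and variance 1 has a larger H_gamma than the
# standard Gaussian, which shares the same mean and variance.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm, t as student_t

gam, d = -0.4, 1
nu = -2.0 / gam - d
tau = -(2.0 + (d + 2.0) * gam) / gam
scale = np.sqrt(tau / nu)           # chosen so that the t density has variance 1

def H_gamma(pdf):
    integral, _ = quad(lambda x: pdf(x) ** (1.0 + gam), -np.inf, np.inf)
    return -integral ** (1.0 / (1.0 + gam)) / (gam * (1.0 + gam))

H_model = H_gamma(lambda x: student_t.pdf(x, df=nu, scale=scale))   # f_gamma(., 0, 1)
H_gauss = H_gamma(norm.pdf)                                          # N(0, 1)
print(H_model > H_gauss)   # True
```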
We would like to elucidate a similar structure for statistical inference by minimum projective cross entropy, in which the data set $\{x_1,\ldots,x_n\}$ is assumed to follow the model $\mathcal{M}_\gamma$. We recall from (8) the relation of the projective cross entropy with the escort distribution,
$C_\gamma(g,f) = -\frac{1}{\gamma(1+\gamma)}\int \{e_q(f(x))\}^{\frac{\gamma}{1+\gamma}}\,g(x)\,dx$
Given data $\{x_1,\ldots,x_n\}$ to be fitted to the model $\mathcal{M}_\gamma$, the loss function is
$L_\gamma(\mu,\Sigma) = -\frac{1}{\gamma(1+\gamma)}\,\frac{1}{n}\sum_{i=1}^n \{e_q(f_\gamma(x_i,\mu,\Sigma))\}^{\frac{\gamma}{1+\gamma}}$
where $f_\gamma(x,\mu,\Sigma)$ is defined in (21). The γ-estimator is defined by
$(\hat\mu_\gamma, \hat\Sigma_\gamma) = \operatorname*{argmin}_{(\mu,\Sigma)} L_\gamma(\mu,\Sigma)$
see the general definition (7). It follows from the canonical form defined in (30) with the canonical parameter Ξ γ defined in (28) that
$L_\gamma(\mu,\Sigma) = -\frac{1}{\gamma(1+\gamma)}(c_\gamma^*)^{\frac{\gamma}{\gamma+1}}\left[\det(\Xi_\gamma)^{\omega} - \omega\left\{\mathrm{trace}(\Xi_\gamma S) + (\mu-\bar{x})^{\mathsf{T}}\Xi_\gamma(\mu-\bar{x})\right\}\right]$
where $(\bar{x}, S)$ and $\omega$ are defined in (17) and (33), and $c_\gamma^*$ is the normalizing factor defined in (27). Here we note that if $\gamma > 0$, then the parameter $(\mu,\Sigma)$ must be assumed to be in $\Theta_n$, where
$\Theta_n = \left\{ (\mu,\Sigma)\in\mathbb{R}^d\times\mathcal{S}_d : (x_i-\mu)^{\mathsf{T}}\Sigma^{-1}(x_i-\mu) < \omega^{-1}\ (i=1,\ldots,n) \right\}$
We note that $L_\gamma(\mu,\Sigma) = C_\gamma(f_\gamma(\cdot,\bar{x},S), f_\gamma(\cdot,\mu,\Sigma))$ and $L_\gamma(\bar{x},S) = H_\gamma(f_\gamma(\cdot,\bar{x},S))$ since
$\mathrm{E}_{f_\gamma(\cdot,\bar{x},S)}(X) = \bar{x} \quad\text{and}\quad \mathrm{V}_{f_\gamma(\cdot,\bar{x},S)}(X) = S$
Accordingly, we observe an argument similar to (19) for the MLE. The projective divergence $D_\gamma$ defined in (3) equals the difference of the γ-loss functions:
$L_\gamma(\mu,\Sigma) - L_\gamma(\bar{x},S) = D_\gamma(f_\gamma(\cdot,\bar{x},S), f_\gamma(\cdot,\mu,\Sigma))$
which is nonnegative with equality if and only if $(\mu,\Sigma) = (\bar{x},S)$. See the discussion after equation (10). In this way, we can summarize the above discussion as follows:
Theorem 3. Let $\{x_1,\ldots,x_n\}$ be a random sample from the γ-model defined in (21). Then the γ-estimator defined in (7) for $(\mu,\Sigma)$ is $(\bar{x},S)$, where $(\bar{x},S)$ is defined in (17).
Proof. Let us give another proof. The estimating system is given by
$\begin{pmatrix} \dfrac{\partial}{\partial\mu}L_\gamma(\mu,\Sigma) \\[4pt] \dfrac{\partial}{\partial\Xi_\gamma}L_\gamma(\mu,\Sigma) \end{pmatrix} = \begin{pmatrix} \Xi_\gamma(\bar{x}-\mu) \\[2pt] \omega\{\det(\Xi_\gamma)^{\omega}\,\Xi_\gamma^{-1} - S(\mu)\} \end{pmatrix} = \begin{pmatrix} 0 \\ O \end{pmatrix}$
which is equivalent to
$\begin{pmatrix} \mu - \bar{x} \\ \Sigma - S(\mu) \end{pmatrix} = \begin{pmatrix} 0 \\ O \end{pmatrix}$
because of the relation of $\Xi_\gamma$ to $\Sigma$ as given in (29). Thus, we again reach the conclusion $(\hat\mu_\gamma, \hat\Sigma_\gamma) = (\bar{x}, S)$. In this way, we obtain the solution of the equation system defined by (40) via the parameter $\Xi_\gamma$, using the relation of the escort distribution with the loss function (37). ☐
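A minimal numerical check of Theorem 3 follows (ours, for $d = 1$ and $\gamma = -0.4$; the optimizer, the parametrization by $\log\sigma^2$ and the simulated data are arbitrary choices): the minimizer of the empirical γ-loss coincides with the sample mean and the $1/n$ sample variance.

```python
# Minimal sketch (not code from the paper): numerical check of Theorem 3 for a
# one-dimensional gamma-model with gamma = -0.4 (a t-distribution); the minimizer
# of the empirical gamma-loss over (mu, sigma^2) matches (sample mean, 1/n variance).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize
from scipy.stats import t as student_t

gam, d = -0.4, 1
nu = -2.0 / gam - d
tau = -(2.0 + (d + 2.0) * gam) / gam

def f_gamma(x, mu, sigma2):
    return student_t.pdf(x, df=nu, loc=mu, scale=np.sqrt(tau * sigma2 / nu))

def loss(params, x):
    mu, log_s2 = params
    s2 = np.exp(log_s2)
    k, _ = quad(lambda y: f_gamma(y, mu, s2) ** (1.0 + gam), -np.inf, np.inf)
    k = k ** (-gam / (1.0 + gam))
    return -k * np.mean(f_gamma(x, mu, s2) ** gam) / (gam * (1.0 + gam))

rng = np.random.default_rng(5)
x = student_t.rvs(df=nu, loc=2.0, scale=np.sqrt(tau * 1.5 / nu), size=300, random_state=rng)

res = minimize(loss, x0=np.array([0.0, 0.0]), args=(x,), method="Nelder-Mead")
mu_hat, s2_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, x.mean())     # nearly equal
print(s2_hat, x.var())      # np.var uses the 1/n convention, matching S
```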
Remark 5. Consider the location model $\{f_\gamma(\cdot,\mu,\Sigma)\}$ with location parameter $\mu$, where $\Sigma$ is known, in Theorem 3. Then we easily see that the γ-estimator for $\mu$ is $\bar{x}$. What about the reverse statement? We observe that if the γ-estimator for $\mu$ is $\bar{x}$ for sample size $n\ge 3$, then the model is the γ-model $\{f_\gamma(\cdot,\mu,\Sigma)\}$ with the known $\Sigma$. The proof is parallel to that of Theorem 2 given in [17]. In fact, we conclude that the model density function $f(x)$ satisfies
$\{f(x-\mu)\}^\gamma = a + b\,(x-\mu)^{\mathsf{T}}\Sigma^{-1}(x-\mu)$
where a and b are constants.
Remark 6. If we look at Theorems 2 and 3 jointly, then
$\min_{(\mu,\Sigma)\in\mathbb{R}^d\times\mathcal{S}_d} L_\gamma(\mu,\Sigma) = \max_{f\in\mathcal{F}(\bar{x},S)} H_\gamma(f)$
since $L_\gamma(\bar{x},S) = H_\gamma(f_\gamma(\cdot,\bar{x},S))$. The two sides of (41) are associated with the inequalities (39) and (36) on the γ-divergence in separate discussions.
Remark 7. The derivation of the γ-estimator in Theorem 3 is provided by the canonical parameter $\Xi_\gamma$ of the escort distribution as given in (28). Here we directly calculate the gradient of the loss with respect to $\Sigma$ as follows:
$\frac{\partial}{\partial\Sigma}L_\gamma(\mu,\Sigma) = -\frac{1}{2(1+\gamma)^2}\det(\Sigma)^{-\frac{1}{2}\frac{\gamma}{1+\gamma}}\left(1-\omega\,\mathrm{trace}\{S(\mu)\Sigma^{-1}\}\right)\Sigma^{-1} + \frac{\omega}{\gamma(1+\gamma)}\det(\Sigma)^{-\frac{1}{2}\frac{\gamma}{1+\gamma}}\,\Sigma^{-1}S(\mu)\Sigma^{-1}$
$= -\frac{1}{2(1+\gamma)^2}\det(\Sigma)^{-\frac{1}{2}\frac{\gamma}{1+\gamma}}\times\left\{\left(1-\omega\,\mathrm{trace}\{S(\mu)\Sigma^{-1}\}\right)\Sigma^{-1} - \frac{1+\gamma}{1+\frac{1}{2}d\gamma+\gamma}\,\Sigma^{-1}S(\mu)\Sigma^{-1}\right\}$
Therefore we observe that if we put $\mu = \bar{x}$ and $\Sigma = \alpha S(\bar{x})$, then
$\frac{\partial}{\partial\Sigma}L_\gamma(\bar{x},\alpha S(\bar{x})) = -\frac{1}{2(1+\gamma)^2}\det(\alpha S(\bar{x}))^{-\frac{1}{2}\frac{\gamma}{1+\gamma}}\,(\alpha S(\bar{x}))^{-1}$
$\times\left\{\left(1-\omega\,\mathrm{trace}\{S(\bar{x})(\alpha S(\bar{x}))^{-1}\}\right)\alpha S(\bar{x}) - \frac{1+\gamma}{1+\frac{1}{2}d\gamma+\gamma}\,S(\bar{x})\right\}(\alpha S(\bar{x}))^{-1}$
The bracketed term of (42) is given by
$\left\{\alpha\left(1-\omega\,\mathrm{trace}\{S(\bar{x})(\alpha S(\bar{x}))^{-1}\}\right) - \frac{1+\gamma}{1+\frac{1}{2}d\gamma+\gamma}\right\}S(\bar{x}) = \left(\alpha - \frac{d\gamma}{2+d\gamma+2\gamma} - \frac{1+\gamma}{1+\frac{1}{2}d\gamma+\gamma}\right)S(\bar{x})$
which shows that if $\alpha = 1$, then $(\partial/\partial\Sigma)L_\gamma(\bar{x},\alpha S(\bar{x})) = 0$. This gives a direct proof of Theorem 3, but it relies on the heuristic step of substituting $(\bar{x},\alpha S(\bar{x}))$ for $(\mu,\Sigma)$.

4. Concluding Remarks

We explored the elegant property (39), the empirical Pythagoras relation between the γ-model and the γ-estimator, in the sense that (39) directly shows Theorem 3 without any differential calculus. Another elegant expression appears in the minimax game between Nature and a decision maker, see [23]. Consider the space $\mathcal{F}(\mu,\Sigma)$ defined in (20). The intersection of the γ-model (21) and $\mathcal{F}(\mu,\Sigma)$ is a singleton $\{f_\gamma(\cdot,\mu,\Sigma)\}$, which is the minimax solution of
$\max_{g\in\mathcal{F}(\mu,\Sigma)}\ \min_{f\in\mathcal{F}} C_\gamma(g,f) = C_\gamma(f_\gamma(\cdot,\mu,\Sigma), f_\gamma(\cdot,\mu,\Sigma))$
Consider different indices $\gamma$ and $\gamma^*$ which specify the γ-model and the $\gamma^*$-estimator, respectively. Basically the $\gamma^*$-estimator is consistent under the γ-model for any choice of $\gamma$ and $\gamma^*$. If we specifically fix $\gamma = 0$ for the model, that is, a Gaussian model, then the $\gamma^*$-estimator is shown to be qualitatively robust for any $\gamma^* > 0$, see [16]. The degree of robustness is proportional to the value of $\gamma^*$, with a trade-off against efficiency. The $\gamma^*$-estimator for $(\mu,\Sigma)$ of the Gaussian model is given by the solution of
$\mu = \frac{\sum_{i=1}^n f_0(x_i,\mu,\Sigma)^{\gamma^*}\,x_i}{\sum_{i=1}^n f_0(x_i,\mu,\Sigma)^{\gamma^*}}$
$\Sigma = (1+\gamma^*)\,\frac{\sum_{i=1}^n f_0(x_i,\mu,\Sigma)^{\gamma^*}\,(x_i-\mu)(x_i-\mu)^{\mathsf{T}}}{\sum_{i=1}^n f_0(x_i,\mu,\Sigma)^{\gamma^*}}$
The weight function $f_0(x_i,\mu,\Sigma)^{\gamma^*}$ for the $i$-th observation $x_i$ becomes almost 0 when $x_i$ is an outlier. Alternatively, the classical robust method employs $\gamma^* = 0$, that is, the MLE for the misspecified model with $\gamma < 0$, i.e., a $t$-distribution model, see [24,25]. Thus, the different indices $\gamma$ and $\gamma^*$ serve robust statistics in a dualistic manner.
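A minimal sketch of these fixed-point equations follows (our own implementation; the iteration scheme, seed and contamination pattern are illustrative choices, not from the paper): the weighted updates downweight gross outliers, whereas the plain sample mean is pulled toward them.

```python
# Minimal sketch (not code from the paper): fixed-point iteration of the displayed
# gamma*-estimating equations under a Gaussian working model; outliers receive
# weights f_0(x_i, mu, Sigma)^gamma* close to zero.
import numpy as np
from scipy.stats import multivariate_normal

def gamma_star_estimator(X, gamma_star, n_iter=100):
    mu, Sigma = X.mean(axis=0), np.cov(X, rowvar=False)
    for _ in range(n_iter):
        w = multivariate_normal.pdf(X, mean=mu, cov=Sigma) ** gamma_star
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        R = X - mu
        Sigma = (1.0 + gamma_star) * (w[:, None, None] * (R[:, :, None] * R[:, None, :])).sum(axis=0) / w.sum()
    return mu, Sigma

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 2))
X[:10] += 12.0                             # gross outliers
print(gamma_star_estimator(X, 0.5)[0])     # close to (0, 0) despite the outliers
print(X.mean(axis=0))                      # pulled toward the outliers
```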
This dualistic property extends the one associated with the exponential model and the MLE; however, it is fragile in the sense that (19) no longer holds if the indices of the γ-model and the $\gamma^*$-estimator differ even slightly. In practice, we find numerical difficulties in solving for the MLE under the γ-model with $\gamma > 0$ because the support of the density depends on the parameter and on the index $\gamma$. We have discussed statistical and probabilistic properties of the model and of the estimation associated with this specific cross entropy. Some of the properties discussed still hold for any cross entropy in a much wider class, which is investigated from the viewpoint of the Fenchel duality in [13,26].

Acknowledgements

We would like to express our thanks to two referees for their helpful comments and constructive suggestions.

References

  1. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Statist. Physics. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  2. Tsallis, C. Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World; Springer-Verlag: New York, NY, USA, 2009. [Google Scholar]
  3. Cichocki, A.; Cruces, S.; Amari, S. Families of alpha- beta- and gamma- divergences: Flexible and robust measures of similarities. Entropy 2010, 12, 1532–1568. [Google Scholar] [CrossRef]
  4. Cichocki, A.; Cruces, S.; Amari, S. Generalized alpha-beta divergences and their application to robust nonnegative matrix factorization. Entropy 2011, 13, 134–170. [Google Scholar] [CrossRef]
  5. Csiszár, I. Information-type measures of difference of probability distributions and indirect observation. Studia Scientiarum Mathematicarum Hungarica 1967, 2, 229–318. [Google Scholar]
  6. Rényi, A. On measures of entropy and information. Proc. Fourth Berkeley Symp. Math. Statist. Prob. 1961, 1, 547–561. [Google Scholar]
  7. Topsøe, F. Some inequalities for information divergence and related measures of discrimination. IEEE Trans. Inform. Theor. 2000, 46, 1602–1609. [Google Scholar]
  8. Amari, S. Differential-geometrical methods in statistics. In Lecture Notes in Statistics; Springer-Verlag: New York, NY, USA, 1985; Volume 28. [Google Scholar]
  9. Amari, S.; Nagaoka, H. Methods of information geometry. In Translations of Mathematical Monographs; American Mathematical Society: Providence, RI, USA, 2000; Volume 191. [Google Scholar]
  10. Eguchi, S. Second order efficiency of minimum contrast estimators in a curved exponential family. Ann. Statist. 1983, 11, 793–803. [Google Scholar] [CrossRef]
  11. Eguchi, S. Geometry of minimum contrast. Hiroshima Math. J. 1992, 22, 631–647. [Google Scholar]
  12. Basu, A.; Harris, I.R.; Hjort, N.L.; Jones, M.C. Robust and efficient estimation by minimizing a density power divergence. Biometrika 1998, 85, 549–559. [Google Scholar] [CrossRef]
  13. Eguchi, S. Information divergence geometry and the application to statistical machine learning. In Information Theory and Statistical Learning; Emmert-Streib, F., Dehmer, M., Eds.; Springer: New York, NY, USA, 2008; pp. 309–332. [Google Scholar]
  14. Minami, M.; Eguchi, S. Robust blind source separation by beta-divergence. Neural Comput. 2002, 14, 1859–1886. [Google Scholar]
  15. Eguchi, S.; Kato, S. Entropy and divergence associated with power function and the statistical application. Entropy 2010, 12, 262–274. [Google Scholar] [CrossRef]
  16. Fujisawa, H.; Eguchi, S. Robust parameter estimation with a small bias against heavy contamination. J. Multivariate Anal. 2008, 99, 2053–2081. [Google Scholar] [CrossRef]
  17. Azzalini, A.; Genton, M.G. On Gauss’s characterization of the normal distribution. Bernoulli 2007, 13, 169–174. [Google Scholar] [CrossRef]
  18. Teicher, H. Maximum likelihood characterization of distributions. Ann. Math. Statist. 1961, 32, 1214–1222. [Google Scholar] [CrossRef]
  19. Amari, S.; Ohara, A. Geometry of q-exponential family of probability distributions. Entropy 2011, 13, 1170–1185. [Google Scholar] [CrossRef]
  20. Kotz, S.; Nadarajah, S. Multivariate T Distributions and Their Applications; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  21. Eguchi, S. A differential geometric approach to statistical inference on the basis of contrast functionals. Hiroshima Math. J. 1985, 15, 341–391. [Google Scholar]
  22. Wakaki, H.; Eguchi, S.; Fujikoshi, Y. A class of tests for general covariance structure. J. Multivariate Anal. 1990, 32, 313–325. [Google Scholar] [CrossRef]
  23. Grünwald, P.D.; Dawid, A.P. Game theory, maximum entropy, minimum discrepancy, and robust Bayesian decision theory. Ann. Statist. 2004, 32, 1367–1433. [Google Scholar]
  24. Kent, J.T.; Tyler, D.E. Redescending M-estimates of multivariate location and scatter. Ann. Statist. 1991, 19, 2102–2119. [Google Scholar] [CrossRef]
  25. Maronna, R.A. Robust M-estimators of multivariate location and scatter. Ann. Statist. 1976, 4, 51–67. [Google Scholar] [CrossRef]
  26. Eguchi, S. Information geometry and statistical pattern recognition. Sugaku Exposition 2006, 19, 197–216. [Google Scholar]

Appendix 1

We show (5). It follows from l'Hôpital's rule that
$\lim_{\gamma\to 0} D_\gamma(g,f) = \left.\frac{\partial}{\partial\gamma}\left[\left(\int g(x)^{1+\gamma}\,dx\right)^{\frac{1}{1+\gamma}} - \frac{\int g(x)\,f(x)^\gamma\,dx}{\left(\int f(x)^{1+\gamma}\,dx\right)^{\frac{\gamma}{1+\gamma}}}\right]\right|_{\gamma=0}$
which is written as
$\left[\frac{1}{1+\gamma}\left(\int g(x)^{1+\gamma}\,dx\right)^{-\frac{\gamma}{1+\gamma}}\int g(x)^{1+\gamma}\log g(x)\,dx - \frac{\int g(x)\,f(x)^\gamma\log f(x)\,dx}{\left(\int f(x)^{1+\gamma}\,dx\right)^{\frac{\gamma}{1+\gamma}}} + \frac{\gamma}{1+\gamma}\,\frac{\int g(x)\,f(x)^\gamma\,dx}{\left(\int f(x)^{1+\gamma}\,dx\right)^{\frac{1+2\gamma}{1+\gamma}}}\int f(x)^{1+\gamma}\log f(x)\,dx\right]_{\gamma=0}$
which is reduced to
$\int g(x)\log g(x)\,dx - \int g(x)\log f(x)\,dx$
This completes the proof of (5). ☐

Appendix 2

First, we give the formula for c γ in (22) when γ > 0 . Let
$I = \int \frac{1}{\det(2\pi\Sigma)^{\frac{1}{2}}}\left(1 - \omega\,(x-\mu)^{\mathsf{T}}\Sigma^{-1}(x-\mu)\right)_+^{\frac{1}{\gamma}}\,dx$
where $\omega = \frac{\gamma}{2+d\gamma+2\gamma}$. The integral is rewritten as
$I = (2\pi\omega)^{-\frac{d}{2}}\int (1-y^{\mathsf{T}}y)_+^{\frac{1}{\gamma}}\,dy$
where $y = \omega^{\frac{1}{2}}\Sigma^{-1/2}(x-\mu)$. It is expressed in polar coordinates as
$I = (2\pi\omega)^{-\frac{d}{2}}\,S_{d-1}\int_0^1 (1-r^2)^{\frac{1}{\gamma}}\,r^{d-1}\,dr$
where $S_{d-1}$ is the surface area of the unit sphere of dimension $d-1$, that is,
$S_{d-1} = \frac{2\pi^{\frac{d}{2}}}{\Gamma\!\left(\frac{d}{2}\right)}$
Since the integral in (43) is expressed by a beta function, we have
$c_\gamma = I^{-1} = (2\omega)^{\frac{d}{2}}\,\frac{\Gamma\!\left(1+\frac{d}{2}+\frac{1}{\gamma}\right)}{\Gamma\!\left(1+\frac{1}{\gamma}\right)}$
Second, we give the formula when $-\frac{2}{d+2} < \gamma < 0$. An argument similar to the above gives
$I = (-2\pi\omega)^{-\frac{d}{2}}\int (1+y^{\mathsf{T}}y)^{\frac{1}{\gamma}}\,dy$
where $y = (-\omega)^{\frac{1}{2}}\Sigma^{-1/2}(x-\mu)$. It is expressed in polar coordinates as
$I = (-2\pi\omega)^{-\frac{d}{2}}\,S_{d-1}\int_0^\infty (1+r^2)^{\frac{1}{\gamma}}\,r^{d-1}\,dr$
which leads to
$c_\gamma = (-2\omega)^{\frac{d}{2}}\,\frac{\Gamma\!\left(-\frac{1}{\gamma}\right)}{\Gamma\!\left(-\frac{1}{\gamma}-\frac{d}{2}\right)}$
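As a quick check of these constants (ours, for $d = 1$ with $\mu = 0$ and $\Sigma = 1$; the quadrature routine is an arbitrary choice), the density built from (22) integrates to one for both a positive and a negative index.

```python
# Minimal numerical sketch (not from the paper): with c_gamma as in (22) and
# d = 1, mu = 0, Sigma = 1, the density f_gamma integrates to one for a positive
# and for a negative index.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

def c_gamma(gam, d=1):
    omega = gam / (2.0 + d * gam + 2.0 * gam)
    if gam > 0:
        return (2.0 * omega) ** (d / 2.0) * Gamma(1.0 + d / 2.0 + 1.0 / gam) / Gamma(1.0 + 1.0 / gam)
    return (-2.0 * omega) ** (d / 2.0) * Gamma(-1.0 / gam) / Gamma(-1.0 / gam - d / 2.0)

def total_mass(gam, d=1):
    omega = gam / (2.0 + d * gam + 2.0 * gam)
    c = c_gamma(gam, d)
    f = lambda x: c / np.sqrt(2.0 * np.pi) * np.maximum(1.0 - omega * x * x, 0.0) ** (1.0 / gam)
    if gam > 0:
        r = 1.0 / np.sqrt(omega)          # edge of the support
        return quad(f, -r, r)[0]
    return quad(f, -np.inf, np.inf)[0]

print(total_mass(2.0), total_mass(-0.4))   # both approximately 1
```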
