Projective Power Entropy and Maximum Tsallis Entropy Distributions

We discuss a one-parameter family of generalized cross entropies between two distributions, indexed by the power parameter, called the projective power entropy. The cross entropy essentially reduces to the Tsallis entropy when the two distributions are taken to be equal. Statistical and probabilistic properties associated with the projective power entropy are extensively investigated, including a characterization problem of which conditions uniquely determine the projective power entropy up to the power index. A close relation of the entropy with the Lebesgue space L_p and its dual L_q is explored, in which the escort distribution plays an interesting role. When we consider maximum Tsallis entropy distributions under constraints on the mean vector and variance matrix, the model becomes a multivariate q-Gaussian model with elliptical contours, including the Gaussian and t-distribution models. We discuss statistical estimation by minimization of the empirical loss associated with the projective power entropy. It is shown that the minimum loss estimators for the mean vector and variance matrix under the maximum entropy model are the sample mean vector and the sample variance matrix. The escort distribution of the maximum entropy distribution plays the key role in the derivation.


Introduction
In classical statistical physics and information theory, the close relation with the Boltzmann-Shannon entropy has been well established and offers elementary and clear understanding. The Kullback-Leibler divergence is directly connected with maximum likelihood, which is one of the most basic ideas in statistics. Tsallis opened new perspectives for the power entropy to elucidate non-equilibrium states in statistical physics, and these strongly influenced the research on non-extensive and chaotic phenomena, cf. [1,2]. Several generalized versions of entropy and divergence have been proposed, cf. [3-7]. We consider generalized entropy and divergence defined on the space of density functions with finite mass, F = {f : ∫ f(x) dx < ∞, f(x) ≥ 0 for almost every x}, in a framework of information geometry originated by Amari, cf. [8,9].
A functional D : F × F → [0, ∞) is called a divergence if D(g, f) ≥ 0 with equality if and only if g = f. It is shown in [10,11] that any divergence is associated with a Riemannian metric and a pair of conjugate connections in a manifold modeled on F under mild conditions.
We begin with the original form of the power cross entropy [12] with index β ∈ R, C^(o)_β(g, f), defined for all g and f in F, and the corresponding power (diagonal) entropy H^(o)_β(f). See [13,14] for the information geometry and for statistical applications to independent component analysis and pattern recognition. Note that these are defined in the continuous case for probability density functions, but can be reduced to the discrete case; see Tsallis [2] for extensive discussion in statistical physics. In fact, the Tsallis entropy S_q(f) for a probability density function f(x) agrees with the power entropy H^(o)_β(f) up to affine constants, where q = 1 + β. The power divergence is then given, as in general, by the difference of the cross entropy and the diagonal entropy.
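As a quick numerical companion (plain Python; the function name is ours), the discrete Tsallis entropy S_q(p) = (1 − Σ_i p_i^q)/(q − 1) recovers the Boltzmann-Shannon entropy in the limit q → 1:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1) of a discrete
    distribution p; the q -> 1 limit is the Boltzmann-Shannon entropy."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.3, 0.2]
print(tsallis_entropy(p, 1.0))    # Shannon entropy, ≈ 1.0297 nats
print(tsallis_entropy(p, 1.001))  # close to the Shannon value for q near 1
print(tsallis_entropy(p, 2.0))    # (1 - sum p_i^2) / 1 = 0.62
```

The continuity at q = 1 is exactly the sense in which the power-entropy family interpolates the classical entropy.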
In this paper we focus on the projective power cross entropy C_γ(g, f) defined by (1), and so the projective power entropy H_γ(f) is given by (2). The log expression C^log_γ(g, f) is defined analogously; see [15,16] for the derivation of C^log_γ and for detailed discussion on the relation between C^(o)_β(g, f) and C_γ(g, f). The projective power cross entropy C_γ(g, f) satisfies linearity with respect to g and projective invariance, that is, C_γ(g, λf) = C_γ(g, f) for any constant λ > 0. Note that H_γ(f) has a one-to-one correspondence with S_q(f), where q = 1 + γ. The projective power divergence D_γ(g, f) = C_γ(g, f) − H_γ(g) is given in (3), and its close relation with Hölder's inequality will be discussed below. The log-type divergence D^log_γ(g, f) is defined for all γ ∈ R whenever the integrals involved exist, and its nonnegativity leads to the corresponding inequality between the cross and diagonal entropies. We remark that the existence range of the power index γ for C_γ(g, f) and H_γ(f) depends on the sample space on which f and g are defined. If the sample space is compact, both C_γ(g, f) and H_γ(f) are well defined for all γ ∈ R. If the sample space is not compact, C_γ(g, f) is defined for γ ≥ 0 and H_γ(f) for γ > −1. More precisely, we will explore the case where the sample space is R^d in a subsequent discussion, together with moment conditions. Typically we observe the limit relation (5), where D_0(g, f) denotes the Kullback-Leibler divergence. See Appendix 1 for the derivation of (5).
Let {x_1, ..., x_n} be a random sample from a distribution with probability density function g(x). A statistical model {f(x, θ) : θ ∈ Θ} with parameter θ is assumed to approximate the underlying density function g(x) sufficiently well, where Θ is a parameter space. The loss function L_γ(θ) associated with the projective power entropy C_γ(g, f(·, θ)) is its empirical analogue based on the sample, and the γ-estimator is defined as its minimizer. We note that E_g[L_γ(θ)] = C_γ(g, f(·, θ)), where E_g denotes the statistical expectation with respect to g. It is observed that the 0-estimator is nothing but the maximum likelihood estimator (MLE), since the loss L_γ(θ) converges to the minus log-likelihood function as γ → 0. If the underlying density function g(x) belongs to a Gaussian model with mean μ and variance σ², then the MLEs for μ and σ² are the sample mean and sample variance. The converse statement is shown in [17,18]. We will extend this theory to the case of the γ-estimator under the γ-model.
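This limit can be illustrated numerically (a hedged sketch: the Gaussian location model with known σ is our choice, and μ-independent normalizing factors of the projective loss are dropped, which does not change the minimizer). A grid minimization shows the γ-estimate approaching the sample mean as γ → 0:

```python
import math

def gamma_loss(mu, data, gamma, sigma=1.0):
    # Projective-type gamma-loss for a Gaussian location model with known
    # sigma; mu-independent normalizing factors are dropped, so minimizing
    # this is equivalent to minimizing L_gamma(mu).
    return -sum(math.exp(-gamma * (x - mu) ** 2 / (2 * sigma ** 2))
                for x in data) / len(data)

data = [0.1, 0.5, 0.9, 2.0]                    # sample mean = 0.875
grid = [i / 1000 for i in range(0, 2001)]      # mu in [0, 2], step 0.001
for g in (0.01, 0.5):
    mu_hat = min(grid, key=lambda m: gamma_loss(m, data, g))
    print("gamma =", g, "estimate =", mu_hat)
```

For γ = 0.01 the minimizer is numerically indistinguishable from the sample mean 0.875; for larger γ the largest observation is downweighted, anticipating the robustness discussion in Section 4.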
In Section 2 we discuss the characterization of the projective power entropy. In Section 3 the maximum entropy distribution on the Tsallis entropy S_q with q = 1 + γ under the constraints of mean vector μ and variance matrix Σ is considered. We discuss the model of maximum entropy distributions, called the γ-model, in which the 0-model and the 2-model equal the Gaussian and Wigner models, respectively. Then we show that the γ-estimators for μ and Σ under the γ-model are the sample mean and sample variance. Section 4 gives concluding remarks and further comments.

Projective Invariance
Let us look at a close relation of F with the Lebesgue space L_p, where p ≥ 1 and the L_p-norm is defined by ‖f‖_p = (∫ |f(x)|^p dx)^{1/p}. Let q be the conjugate index of p satisfying 1/p + 1/q = 1, in which p and q can be expressed as functions of the parameter γ > 0 such that p = 1 + γ^{-1} and q = 1 + γ. We note that this q is equal to the index q in the Tsallis entropy S_q in the relation q = 1 + γ. For any probability density function f(x) we define the escort distribution with probability density function e_q(f(x)) = f(x)^q / ∫ f(y)^q dy; cf. [2] for extensive discussion. We discuss an interesting relation of the projective cross entropy (1) with the escort distribution. By the definition of the escort distribution, C_γ(g, f) can be expressed through e_q(f); we note that e_q(f)^{1/p} lies on the unit sphere of L_p in this representation. The projective power diagonal entropy (2) is proportional to the L_q-norm, from which Hölder's inequality follows for all f and g in F; this is also implied by the nonnegativity of D_γ(g, f). The equality in (10) holds if and only if f(x) = λg(x) for almost every x, where λ is a positive constant. The power transform suggests an interplay between the spaces L_p and L_q. Taking the limit of γ to 0 in the Hölder inequality (9) yields (11). This limit with respect to p associates with another space rather than the L_∞ space, which is nothing but the space of all density functions with finite Boltzmann-Shannon entropy, say L_log. The power index γ reparameterizes the Lebesgue space L_p and the dual space L_q with the relation p = 1 + γ^{-1}; however, taking the power transform f(x)^γ is totally different from the ordinary discussion of the Lebesgue space, so that the duality converges to (L_log, L_1) as observed in (11). In information geometry the pair (L_log, L_1) corresponds to that of the mixture and exponential connections, cf. [9]. See also another one-parameterization of the L_p space [19].
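In the discrete case the escort distribution takes the same form with sums in place of integrals; a minimal sketch (function name ours) shows how q > 1 sharpens and 0 < q < 1 flattens a distribution:

```python
def escort(p, q):
    """Discrete escort distribution: e_q(p)_i = p_i**q / sum_j p_j**q."""
    w = [pi ** q for pi in p]
    z = sum(w)
    return [wi / z for wi in w]

p = [0.6, 0.3, 0.1]
print(escort(p, 2.0))   # q > 1 sharpens: largest mass grows
print(escort(p, 0.5))   # 0 < q < 1 flattens: largest mass shrinks
print(escort(p, 1.0))   # q = 1 returns p itself
```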
We now discuss the uniqueness problem for C_γ(g, f), as given in the following theorem. A general discussion on the characterization is given in [16]; however, the derivation there is rather complicated. Here we assume a key condition that a cross entropy Γ(g, f) is linear in g, which allows an elementary proof. The Riesz representation theorem suggests a representation Γ(g, f) = c(f) ∫ g(x) ψ(f(x)) dx, where c(f) is a constant depending on f. Thus we obtain the following theorem when we take a specific form of c(f) that guarantees the scale invariance.
Theorem 1. Under the requirements above, there exists γ such that Γ(g, f) = C_γ(g, f) up to a constant factor, where C_γ(g, f) is the projective power cross entropy defined by (1).
Proof. Requirement (ii) implies a functional equation for ψ and c; in particular, if f is absolutely continuous and g is the Dirac measure at x_0, the equation constrains the values ψ(f(x_0)). Since we can take an arbitrary value of f(x_0) for any fixed λ, it is uniquely solved as ψ(t) = t^γ, where γ = c(λ). Next let us consider the case of a finite discrete space {x_i : 1 ≤ i ≤ m}. Then, since ψ(f) = f^γ, we can write the requirement in the form (13). It follows from (13) that we can solve it as ρ(g_j) = −γC g_j^{1+γ}/(1 + γ). Therefore Equation (14) follows, which completes the proof.
Remark 1. The proof above essentially applies to the case where the integral (11) is replaced by a summation, even just for binary distributions. In this sense the statement of Theorem 1 is not tight; however, statistical inference can be discussed in a unified manner whether the distribution is continuous or discrete. In the subsequent discussion we focus on the case of continuous distributions defined on R^d.
Remark 2. We see a multiplicative decomposition of C_γ(g, f) under statistical independence: if f and g decompose into products of independent components, then C_γ(g, f) factorizes accordingly. This property is also elemental, but we do not assume this decomposability as a requirement in Theorem 1.
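For discrete distributions the factorization can be verified directly on the unsigned core of the projective cross entropy; overall constant factors such as −1/γ are suppressed here, since they cancel in the comparison (a sketch, names ours):

```python
def kappa(g, f, gamma):
    # Unsigned core of the projective cross entropy for discrete
    # distributions: sum_i g_i f_i^gamma / (sum_j f_j^(1+gamma))^(gamma/(1+gamma)).
    num = sum(gi * fi ** gamma for gi, fi in zip(g, f))
    den = sum(fi ** (1 + gamma) for fi in f) ** (gamma / (1 + gamma))
    return num / den

def product(p, q):
    # Joint distribution of two independent components, flattened.
    return [pi * qj for pi in p for qj in q]

g1, f1 = [0.7, 0.3], [0.5, 0.5]
g2, f2 = [0.2, 0.5, 0.3], [0.4, 0.4, 0.2]
gamma = 0.7
lhs = kappa(product(g1, g2), product(f1, f2), gamma)
rhs = kappa(g1, f1, gamma) * kappa(g2, f2, gamma)
print(lhs, rhs)   # equal: the core factorizes under independence
```

Both the numerator and the normalizer factor over independent components, which is the mechanism behind the decomposition.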

Model of Maximum Entropy Distributions
We will elucidate a dualistic structure between the maximum entropy model on H_γ, defined in (2), and the minimum cross entropy estimation on C_γ, defined in (1). Before that discussion we review the classical case, in which maximum likelihood estimation performs well under the maximum entropy model on the Boltzmann-Shannon entropy, that is, a Gaussian model if we impose the mean and variance constraints. We use the conventional notation that X denotes a random variable with value x. Let {x_1, ..., x_n} be a random sample from a Gaussian distribution with density function (15). The Gaussian density function is written in a canonical form with the canonical parameter Ξ defined by Σ^{-1}. Differentiation of (15) with respect to μ and Ξ yields the moment identities, where E_f and V_f denote the expectation vector and variance matrix with respect to a probability density function f(x), respectively. The maximum likelihood estimator is given by (16), where x̄ and S are the sample mean vector and the sample variance matrix defined in (17). This is because the minus log-likelihood function can be written, apart from a constant, in terms of S(μ) = S + (x̄ − μ)(x̄ − μ)^T, and hence the estimating equation system concludes Expression (16) of the MLE. Alternatively, we have another route to show (16) as follows. The Kullback-Leibler divergence defined in (6) can be evaluated in closed form, and we observe that the resulting difference is nonnegative with equality if and only if (μ, Σ) = (x̄, S). This implies (16). Under mild regularity conditions the converse holds, that is, the MLE for a location and scatter model satisfies (16) if and only if the model is Gaussian, cf. [17,18]. However, even if we assume nothing about the underlying distribution g(x), the statistics x̄ and S are asymptotically consistent for (μ_g, Σ_g). This is a direct result of the strong law of large numbers, and the central limit theorem gives the asymptotic normality of these two statistics. In this sense, (x̄, S) is also a nonparametric estimator for (μ_g, Σ_g).
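The classical fact can be checked numerically (plain Python, univariate): the minus log-likelihood is minimized exactly at the sample mean and the biased sample variance.

```python
import math

def neg_loglik(mu, var, data):
    """Minus log-likelihood of a univariate Gaussian N(mu, var)."""
    n = len(data)
    return 0.5 * n * math.log(2 * math.pi * var) \
        + sum((x - mu) ** 2 for x in data) / (2 * var)

data = [1.2, -0.4, 0.7, 2.1, 0.3]
n = len(data)
xbar = sum(data) / n
s2 = sum((x - xbar) ** 2 for x in data) / n   # biased (MLE) variance

base = neg_loglik(xbar, s2, data)
# Any perturbation of (xbar, s2) strictly increases the minus log-likelihood.
for dmu, dv in [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.2), (0.0, -0.2)]:
    assert neg_loglik(xbar + dmu, s2 + dv, data) > base
print(xbar, s2)
```

Note the divisor n rather than n − 1: the MLE of the variance is the biased sample variance, matching the definition of S in (17).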
We explore a close relation between the statistical model and the estimation method. We consider the maximum entropy distribution with respect to the γ-entropy H_γ over the space F(μ, Σ) of d-dimensional distributions with common mean μ and variance Σ, defined in (20). We then define a distribution with probability density function f_γ(x, μ, Σ) as in (21), where (·)_+ denotes the positive part and c_γ is the normalizing factor; see the derivation of c_γ in Appendix 2. If the dimension d equals 1, then f_γ(x, μ, Σ) is a q-Gaussian distribution with q = γ + 1. We remark that for γ > 0 the support of f_γ(·, μ, Σ) becomes an ellipsoid. On the other hand, if −2/(d+2) < γ < 0, the density function (21) can be written in the form (23). The d-variate t-distribution is defined by (24); cf. [20] for extensive discussion. Under the correspondence between (23) and (24), the density function f_γ coincides with a d-variate t-distribution with suitably matched degrees of freedom and scale. The distribution has elliptical contours on the Euclidean space R^d for any γ > −2/(d+2), as shown in Figure 1 for typical cases of γ.
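The closed form of the normalizer c_γ is deferred to Appendix 2; as an illustration, the sketch below (our construction, univariate, γ > 0) instead fixes the shape numerically. It bisects on b so that the standardized density proportional to (1 − b z²)_+^{1/γ} has unit variance. For γ = 1 the shape is a truncated parabola (the Epanechnikov form), whose unit-variance scaling is known to be b = 1/5, and the numerical solve reproduces this value.

```python
import math

def shape(z, b, gamma):
    """Un-normalized standardized density (1 - b z^2)_+^(1/gamma), gamma > 0."""
    t = 1.0 - b * z * z
    return t ** (1.0 / gamma) if t > 0.0 else 0.0

def variance(b, gamma, n=4001):
    # Midpoint Riemann sum over the compact support |z| <= 1/sqrt(b).
    zmax = 1.0 / math.sqrt(b)
    h = 2.0 * zmax / n
    m0 = m2 = 0.0
    for i in range(n):
        z = -zmax + (i + 0.5) * h
        w = shape(z, b, gamma)
        m0 += w
        m2 += z * z * w
    return m2 / m0

def solve_b(gamma):
    # Bisect on b: the variance is proportional to 1/b (support shrinks as
    # b grows), so the unit-variance b is unique.
    lo, hi = 1e-6, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if variance(mid, gamma) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(solve_b(1.0))   # ≈ 0.2, the Epanechnikov unit-variance scaling
```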
We define M_γ = {f_γ(·, μ, Σ) : μ ∈ R^d, Σ ∈ S_d}, which we call the γ-model, where S_d denotes the space of all symmetric, positive-definite matrices of order d. We confirm the mean and variance of the γ-model as follows.
Lemma. Under the model M_γ defined in (25) with index γ > −2/(d+2), the mean vector and variance matrix of f_γ(·, μ, Σ) are μ and Σ. Proof. We need to consider a family of escort distributions. In the model M_γ we can define the escort distribution as in (26), where q = 1 + γ and c*_γ is the normalizing factor. We define an alternative parameter Ξ_γ to the original parameter Σ by the transform (28), so that the inverse transform is given by (29). Thus we get a canonical form (30) of the escort density (26). By analogy with the discussion for an exponential family, we have the expression (31) for the braced term in (30). A property of the escort distribution then yields moment formulae for the distribution (25), and the resulting identity concludes the assertion, because of the relation between Ξ_γ and Σ observed in (29). The proof is complete.
Remark 3. The canonical form (30) of the escort distribution (26) plays an important role in the proof of the Lemma. We can also write a canonical form of (21) itself; however, no link to distributional properties, as in the case of an exponential family, is known for it.
Remark 4. In Equation (31) the function can be viewed as a potential function in the sense of Fenchel convex duality; cf. [21,22] for the covariance structure model.
We would like to elucidate a similar structure for statistical inference by minimum projective cross entropy, in which the data set {x_1, ..., x_n} is assumed to follow the model M_γ. We recall from (8) the relation of the projective cross entropy with the escort distribution. Given data {x_1, ..., x_n} to be fitted to the model M_γ, the loss function is L_γ(μ, Σ), where f_γ(x, μ, Σ) is defined in (21); the γ-estimator is defined as its minimizer, see the general definition (7). It follows from the canonical form (30) with the canonical parameter Ξ_γ defined in (28) that the loss can be expressed through (x̄, S) and ω defined in (17) and (33), and the normalizing factor c*_γ defined in (27). Here we note that if γ > 0, then the parameter (μ, Σ) must be assumed to lie in Θ_n, the set of parameters for which all the observations belong to the support of the density. Accordingly, we observe an argument similar to (19) for the MLE. The projective divergence D_γ defined in (3) equals the difference of the γ-loss functions, which is nonnegative with equality if and only if (μ, Σ) = (x̄, S); see the discussion after Equation (10).
In this way, we can summarize the above discussion as follows. Theorem 3. Let {x_1, ..., x_n} be a random sample from the γ-model defined in (21). Then the γ-estimator defined in (7) for (μ, Σ) is (x̄, S), where (x̄, S) is defined in (17).
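Theorem 3 can be checked numerically in the simplest case (a sketch, names ours). For the univariate model with γ = 1 and known scale, the density is proportional to (1 − b(x − μ)²)_+ with b = 1/5 chosen for unit variance; up to μ-free constants the γ-loss is −(1/n) Σ_i f(x_i, μ)^γ, and whenever all observations lie inside the support this is an affine function of Σ_i (x_i − μ)², so its minimizer is exactly the sample mean.

```python
def loss_gamma1(mu, data, b=0.2):
    # gamma = 1 projective loss up to mu-free constants:
    # -(1/n) * sum_i f(x_i, mu) with f proportional to (1 - b (x - mu)^2)_+.
    return -sum(max(1.0 - b * (x - mu) ** 2, 0.0) for x in data) / len(data)

data = [-0.6, 0.2, 0.4, 1.0]          # all observations inside the support
xbar = sum(data) / len(data)           # sample mean = 0.25
grid = [i / 10000 for i in range(-10000, 10001)]   # mu in [-1, 1]
mu_hat = min(grid, key=lambda m: loss_gamma1(m, data))
print(xbar, mu_hat)   # the grid minimizer coincides with the sample mean
```

When some observation falls outside the support (the positive part is active), the loss is no longer quadratic, which is the source of the support condition Θ_n noted above.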
Proof. Let us give another proof. The estimating equation system (40) follows because of the relation of Ξ_γ to Σ as given in (29). Thus we again reach the conclusion (μ̂_γ, Σ̂_γ) = (x̄, S). In this way, we obtain the solution of the equation system (40) via the parameter Ξ_γ, using the relation of the escort distribution to the loss function (37).
Remark 5. Consider the location model {f_γ(·, μ, Σ)} with location parameter μ, where Σ is known, in Theorem 3. Then we easily see that the γ-estimator for μ is x̄. What about the converse statement?
Remark 6. We observe that if the γ-estimator for μ is x̄ for sample size n ≥ 3, then the model is the γ-model {f_γ(·, μ, Σ)} with known Σ. The proof is parallel to that of Theorem 2 given in [17]. In fact, we conclude that the model density function f(x) satisfies a functional equation whose solution has the γ-model form, where a and b are constants.
Remark 7. The derivation of the γ-estimator in Theorem 3 relies on the canonical parameter Ξ_γ of the escort distribution as given in (28). Alternatively, we can directly calculate the gradient of the loss with respect to Σ. We then observe that, putting μ = x̄ and Σ = αS(x̄), the bracketed term of (42) vanishes when α = 1, so that (∂/∂Σ)L_γ(x̄, αS(x̄)) = 0 for α = 1. This gives a direct proof of Theorem 3, but it involves a heuristic step in substituting (x̄, αS(x̄)) for (μ, Σ).

Concluding Remarks
We explored the elegant property (39), the empirical Pythagorean relation between the γ-model and the γ-estimator, in the sense that (39) directly shows Theorem 3 without any differential calculus. Another elegant expression arises in the minimax game between Nature and a decision maker; see [23]. Consider the space F(μ, Σ) defined in (20). The intersection of the γ-model (21) and F(μ, Σ) is a singleton {f_γ(·, μ, Σ)}, which is the solution of the maximin problem over F(μ, Σ). Consider different indices γ and γ*, which specify the γ-model and the γ*-estimator, respectively. Basically the γ*-estimator is consistent under the γ-model for any choice of γ and γ*. If we specifically fix γ = 0 for the model, that is, a Gaussian model, then the γ*-estimator is shown to be qualitatively robust for any γ* > 0; see [16]. The degree of robustness increases with the value of γ*, traded against efficiency. The γ*-estimator for (μ, Σ) of the Gaussian model is given by the solution of a weighted estimating equation, in which the weight function f_0(x_i, μ, Σ)^{γ*} for the i-th observation x_i becomes almost 0 when x_i is an outlier. Alternatively, the classical robust method employs γ* = 0, that is, the MLE for a misspecified model with γ < 0, or a t-distribution model; see [24,25]. Thus, the different indices γ and γ* serve robust statistics in a dualistic manner.
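The weighting mechanism can be sketched as a fixed-point iteration (our construction, univariate, known σ): the weighted estimating equation for μ reweights each observation by f_0(x_i, μ, σ)^{γ*}, up to constants that cancel in the ratio, so a gross outlier receives a weight that is numerically zero.

```python
import math

def gamma_star_mean(data, gamma_star, sigma=1.0, iters=50):
    """Fixed-point iteration for the gamma*-estimator of the location mu
    under a Gaussian working model with known sigma: a weighted mean with
    weights proportional to f_0(x_i, mu, sigma)**gamma_star."""
    mu = sorted(data)[len(data) // 2]      # start from the median
    for _ in range(iters):
        w = [math.exp(-gamma_star * (x - mu) ** 2 / (2 * sigma ** 2))
             for x in data]
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu

data = [0.0, 0.1, -0.1, 0.2, 10.0]          # 10.0 is a gross outlier
print(sum(data) / len(data))                # raw mean = 2.04, ruined
print(gamma_star_mean(data, 1.0))           # ≈ 0.05, outlier ignored
```

The weight exp(−γ*(10 − μ)²/2) is of order e^{−49} near the fixed point, so the estimate is essentially the mean of the four clean observations; larger γ* downweights moderate observations as well, which is the robustness-efficiency trade-off mentioned above.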
This property is an extension of the one associated with the exponential model and the MLE; however, it is fragile in the sense that (39) does not hold if the indices in the γ-model and the γ*-estimator differ even slightly. In practice, we find some numerical difficulties in computing the MLE under the γ-model with γ > 0, because the support of the density depends on the parameter and the index γ. We have discussed statistical and probabilistic properties of the model and estimation associated with this specific cross entropy. Some of the properties discussed still hold for any cross entropy in a much wider class, which is investigated from the point of view of Fenchel duality in [13,26].