Entropy and divergence associated with power function and the statistical application. Entropy 2010

In statistical physics, the Boltzmann-Shannon entropy provides a good understanding of the equilibrium states of a number of phenomena. In statistics, this entropy corresponds to the maximum likelihood method, in which the Kullback-Leibler divergence connects the Boltzmann-Shannon entropy with the expected log-likelihood function. Maximum likelihood estimation is supported by its optimal performance, but this optimality is easily broken down in the presence of even a small degree of model uncertainty. To deal with this problem, a new statistical method, closely related to the Tsallis entropy, is proposed and shown to be robust against outliers, and a local learning property associated with the method is discussed.


Introduction
Consider a practical situation in which a data set {x_1, ..., x_n} is randomly sampled from a probability density function of a statistical model {f_θ(x) : θ ∈ Θ}, where θ is a parameter vector and Θ is the parameter space. A fundamental tool for the estimation of the unknown parameter θ is the mean log-likelihood function, defined by

ℓ(θ) = (1/n) ∑_{i=1}^n log f_θ(x_i),  (2)

and the maximum likelihood estimator (MLE) is the maximizer of ℓ(θ). Write the Fisher information matrix as

I_θ = E_θ{s(x, θ) s(x, θ)^T},  (3)

where s(x, θ) = (∂/∂θ) log f_θ(x) is the score vector. Then the Cramér-Rao type inequality

AV_θ(θ̂) ≥ I_θ^{-1}  (4)

holds for any asymptotically consistent estimator θ̂ of θ, where AV_θ denotes the limiting variance matrix under the distribution with the density f_θ(x).
On the other hand, the Boltzmann-Shannon entropy plays a fundamental role in various fields, such as statistical physics and information science. It is directly related to the MLE. Let us consider an underlying distribution with density function p(x). The cross entropy is defined by

C_0(p, q) = −∫ p(x) log q(x) dx.  (6)

We note that C_0(p, f_θ) = E_p{−ℓ(θ)}, where E_p denotes the expectation with respect to p(x). Hence, the maximum likelihood principle is equivalent to the minimum cross entropy principle. The Kullback-Leibler (KL) divergence is defined by

D_0(p, q) = ∫ p(x) log {p(x)/q(x)} dx,  (7)

which gives a kind of information distance between p and q. Note that D_0(p, f_θ) = C_0(p, f_θ) − C_0(p, p). An exponential (type) distribution model is defined by the density form

f_θ(x) = exp{θ^T t(x) − ψ(θ)},  (8)

where ψ(θ) is the cumulant transform defined by log ∫ exp{θ^T t(x)} dx. Under the assumption of this family, the MLE has a number of convenient properties such as minimal sufficiency, unbiasedness and efficiency [1]. In particular, the MLE for the expectation parameter η = E_θ{t(X)} is explicitly given by η̂ = (1/n) ∑_{i=1}^n t(x_i), which is associated with a dualistic relation between the canonical parameter θ and the expectation parameter η [2,3]. Thus the MLE enjoys these excellent properties, which are associated with the logarithmic and exponential functions as in (2) and (8).
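As a quick numerical illustration, the following sketch checks the decomposition D_0(p, q) = C_0(p, q) − C_0(p, p) and the nonnegativity of the KL divergence on a grid; the two normal densities are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of D_0(p, q) = C_0(p, q) - C_0(p, p) >= 0 for two
# illustrative normal densities (p plays the underlying density, q a model).
def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-15, 15, 200001)
dx = x[1] - x[0]
p = normal_pdf(x, 0.0, 1.0)
q = normal_pdf(x, 1.0, 1.5)

C_pq = -np.sum(p * np.log(q)) * dx    # cross entropy C_0(p, q)
C_pp = -np.sum(p * np.log(p)) * dx    # diagonal entropy C_0(p, p)
D_pq = np.sum(p * np.log(p / q)) * dx # KL divergence D_0(p, q)

print(D_pq, C_pq - C_pp)   # the two values agree, and both are positive
```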
The MLE has been widely employed in statistics, and its properties are supported by theoretical discussion, for example in [4]. However, the MLE has some inappropriate properties when the underlying distribution does not belong to the model {f_θ(x) : θ ∈ Θ}. A statistical model is just a simulation of the true distribution, as Fisher pointed out in [1]. The model, used only as a working model, is wrong in most practical cases. In such situations the MLE does not perform properly because of model uncertainty. In this paper we explore estimation methods alternative to the MLE.

Power Divergence
The logarithmic transform of observed values is widely employed in data analysis. On the other hand, the power transformation defined by

z^{(β)} = (z^β − 1)/β

often gives more flexible performance in obtaining a good approximation to the normal distribution [5]. In analogy with this transform, the power cross entropy is defined by

C_β(p, q) = −(1/β) ∫ p(x){q(x)^β − 1} dx + (1/(β+1)) ∫ {q(x)^{β+1} − q(x)} dx,

where β is a positive parameter; thus it is defined through the power transform of the density. If we take the limit of β to 0, then C_β(p, q) converges to C_0(p, q), which is given in (6). In fact, the power parameter β is not fixed, so that different β's give different behaviors of the power entropy. The diagonal power entropy is defined by

H_β(p) = C_β(p, p),

which is obtained by taking the diagonal of C_β. This is equivalent to the Tsallis q-entropy under the relation β = q − 1.
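The convergence of C_β to C_0 can be checked numerically. The sketch below assumes the normalization C_β(p, q) = −(1/β)∫p(q^β − 1) + (1/(β+1))∫(q^{β+1} − q), chosen so that the β → 0 limit is exact; the two densities are illustrative.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-15, 15, 200001)
dx = x[1] - x[0]
p = normal_pdf(x, 0.0, 1.0)
q = normal_pdf(x, 0.5, 1.2)

def C_beta(p, q, beta, dx):
    # power cross entropy with the normalisation assumed in the lead-in;
    # beta = 0 returns the Boltzmann-Shannon cross entropy C_0
    if beta == 0.0:
        return -np.sum(p * np.log(q)) * dx
    t1 = -np.sum(p * (q**beta - 1.0)) * dx / beta
    t2 = np.sum(q**(beta + 1.0) - q) * dx / (beta + 1.0)
    return t1 + t2

C0 = C_beta(p, q, 0.0, dx)
gaps = {b: abs(C_beta(p, q, b, dx) - C0) for b in [1.0, 0.01, 0.001]}
print(C0, gaps)   # the gap |C_beta - C_0| shrinks as beta -> 0
```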
Let {x_1, ..., x_n} be a random sample from the unknown density function p(x). Then we define the empirical mean power likelihood by

ℓ_β(θ) = (1/n) ∑_{i=1}^n {f_θ(x_i)^β − 1}/β − κ_β(θ),  (9)

where κ_β(θ) = ∫ f_θ(x)^{β+1} dx/(β + 1). See [6-9] for statistical applications. Accordingly, the negative expectation of ℓ_β(θ) equals C_β(p, f_θ) up to a constant not depending on θ. In general, the relation between the cross and diagonal entropies leads to the inequality C_β(p, q) ≥ C_β(p, p), from which we define the power divergence by

D_β(p, q) = C_β(p, q) − C_β(p, p).

We extend the power entropy and divergence to the space M of all nonnegative integrable functions, which are not assumed to have total mass one. In particular, this extension is useful for proposing boosting methods [10-16].
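A small simulation illustrates the empirical mean power likelihood for a normal location model; since κ_β(θ) does not depend on μ there, it is dropped. The sample size, grid, and values of β are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(1.5, 1.0, 2000)   # clean sample; true location mu = 1.5

def l_beta(mu, x, beta):
    # empirical mean power likelihood for the N(mu, 1) model; the term
    # kappa_beta is constant in mu for a location model and is omitted
    f = np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)
    return np.mean((f**beta - 1.0) / beta)

grid = np.linspace(0.0, 3.0, 3001)
ests = {}
for beta in [0.1, 0.5, 1.0]:
    vals = [l_beta(m, x, beta) for m in grid]
    ests[beta] = grid[int(np.argmax(vals))]
print(ests)   # each maximiser lies close to the true value 1.5
```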
This derivation can be extended by a generator function U. Assume that U(t) is strictly increasing and convex. The Fenchel duality discussion leads to the conjugate convex function of U(t), defined by

U*(s) = sup_t {st − U(t)},  (10)

which reduces to U*(s) = sξ(s) − U(ξ(s)), where ξ(s) is the inverse function of the derivative U′ of U. Then the U-cross entropy is defined by

C_U(μ, ν) = ∫ {U(ξ(ν(x))) − μ(x) ξ(ν(x))} dx.

Similarly, the U-divergence is defined by

D_U(μ, ν) = C_U(μ, ν) − C_U(μ, μ) = ∫ {U*(μ(x)) − μ(x) ξ(ν(x)) + U(ξ(ν(x)))} dx.  (11)

By the definition of U* in (10), we see that the integrand on the right-hand side of (11) is always nonnegative. The power divergence is one example of the U-divergence, obtained by fixing

U_β(t) = (1/(β+1)) (1 + βt)^{(β+1)/β},

for which ξ_β(s) = (s^β − 1)/β. Thus the power divergence can be defined on M as

D_β(μ, ν) = ∫ { μ(x)^{β+1}/(β(β+1)) − μ(x)ν(x)^β/β + ν(x)^{β+1}/(β+1) } dx  (12)

for μ and ν in M [17]. Since U_β(t) is strictly increasing and convex, the integrand on the right-hand side of (12) is nonnegative.
For statistical considerations it seems sufficient to restrict the definition domain of D_β to the space P of all probability density functions. However, we observe that this restriction is not useful; we instead discuss the restriction to the projective space as follows. Fix two functions μ and ν in M. We say that μ and ν are projectively equivalent if there exists a positive scalar λ such that ν(x) = λμ(x), and we then write ν ∼ μ. Similarly, we call a divergence D defined on M projectively invariant if

D(λμ, κν) = D(μ, ν) for all λ > 0, κ > 0.  (13)

We can derive a variant of the power divergence as

Δ_β(μ, ν) = (1/(β(β+1))) log ∫ μ(x)^{β+1} dx − (1/β) log ∫ μ(x) ν(x)^β dx + (1/(β+1)) log ∫ ν(x)^{β+1} dx.

See Appendix 1 for the derivation. We immediately observe that Δ_β satisfies (13), that is, projective invariance. Hereafter we call Δ_β the projective power divergence. In this way, for p and q in P,

lim_{β→0} Δ_β(p, q) = D_0(p, q),

where D_0 is nothing but the KL divergence (7). We observe that the projective power divergence satisfies information additivity. In fact, if we write p and q as p(x_1, x_2) = p_1(x_1)p_2(x_2) and q(x_1, x_2) = q_1(x_1)q_2(x_2), respectively, then

Δ_β(p, q) = Δ_β(p_1, q_1) + Δ_β(p_2, q_2),

which means information additivity. We note that this property is not satisfied by the original power divergence D_β. Furthermore, Δ_β is associated with the Pythagorean identity, as follows. Proposition 1. Assume that there exist three different points p, q and r in M satisfying Δ_β(p, r) = Δ_β(p, q) + Δ_β(q, r). Define a path {p_t}_{0≤t≤1} connecting p with q by p_t = (1 − t)p + tq, and a path {r_s}_{0≤s≤1} connecting r with q by r_s = {(1 − s)r^β + s q^β}^{1/β}. Then Δ_β(p_t, r_s) = Δ_β(p_t, q) + Δ_β(q, r_s) holds for all t (0 < t < 1) and all s (0 < s < 1).
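Projective invariance and the β → 0 limit can be verified numerically. The sketch below assumes the form Δ_β(μ, ν) = (1/(β(β+1))) log∫μ^{β+1} − (1/β) log∫μν^β + (1/(β+1)) log∫ν^{β+1}; the densities and scaling constants are arbitrary.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-20, 20, 400001)
dx = x[1] - x[0]
p = normal_pdf(x, 0.0, 1.0)
q = normal_pdf(x, 1.0, 1.5)

def proj_power_div(p, q, beta, dx):
    # projective power divergence Delta_beta (form assumed in the lead-in)
    t1 = np.log(np.sum(p ** (beta + 1)) * dx) / (beta * (beta + 1))
    t2 = np.log(np.sum(q ** (beta + 1)) * dx) / (beta + 1)
    t3 = np.log(np.sum(p * q**beta) * dx) / beta
    return t1 + t2 - t3

d = proj_power_div(p, q, 0.5, dx)
d_scaled = proj_power_div(3.7 * p, 0.25 * q, 0.5, dx)  # projective invariance
kl = np.sum(p * np.log(p / q)) * dx

print(d, d_scaled)                          # identical up to rounding
print(proj_power_div(p, q, 1e-4, dx), kl)   # Delta_beta -> KL as beta -> 0
```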
The proof is given in Appendix 2. A Pythagorean-type identity of this form is also satisfied by D_β [16].

Minimum Power Divergence Method
In this section we introduce a statistical method defined by minimization of the projective power divergence discussed in the previous section. By the definition of Δ_β, the cross projective power entropy is given by

Γ_β(p, q) = −(1/β) log ∫ p(x) q(x)^β dx + (1/(β+1)) log ∫ q(x)^{β+1} dx,

so that Δ_β(p, q) = Γ_β(p, q) − Γ_β(p, p). Hence, this decomposition leads to the empirical analogue based on a given data set,

L_β(θ) = (1/β) log {(1/n) ∑_{i=1}^n f_θ(x_i)^β} − (1/(β+1)) log ∫ f_θ(x)^{β+1} dx,  (16)

which we call the mean power likelihood with the index β. Thus, the population counterpart of −L_β(θ) with respect to the unknown density function p(x) equals Γ_β(p, f_θ). In the limit of β to 0, L_β(θ) reduces to the mean log-likelihood ℓ(θ). The strong law of large numbers yields that −L_β(θ) converges to Γ_β(p, f_θ) almost surely as n increases to infinity. From the divergence property of the projective power divergence it follows that, when p = f_θ, the function Γ_β(f_θ, f_{θ′}) is minimized at θ′ = θ. Consequently, we conclude that the estimator θ̂_β = argmin_{θ′ ∈ Θ} {−L_β(θ′)}, that is, the maximizer of L_β, converges to θ almost surely. The proof is similar to that for the MLE in Wald [18]. In general, any minimum divergence estimator satisfies strong consistency in this asymptotic sense.
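A grid-search sketch of this method for the N(μ, σ²) model follows. It assumes the form L_β(θ) = (1/β) log{(1/n)∑ f_θ(x_i)^β} − (1/(β+1)) log ∫ f_θ^{β+1} dx and uses the closed form ∫ f^{β+1} = (2πσ²)^{−β/2}(β+1)^{−1/2} for the normal density; the data, grids, and β = 0.5 are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(2.0, 0.5, 3000)   # true (mu, sigma) = (2.0, 0.5)
beta = 0.5

def L_beta(mu, sigma, x, beta):
    # mean power likelihood with index beta for the N(mu, sigma^2) model;
    # log_int is the closed form of log of the integral of f^(beta+1)
    f = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    log_int = -0.5 * beta * np.log(2 * np.pi * sigma**2) - 0.5 * np.log(beta + 1)
    return np.log(np.mean(f**beta)) / beta - log_int / (beta + 1)

mus = np.linspace(1.5, 2.5, 101)
sigmas = np.linspace(0.3, 0.8, 51)
vals = np.array([[L_beta(m, s, x, beta) for s in sigmas] for m in mus])
i, j = np.unravel_index(np.argmax(vals), vals.shape)
print(mus[i], sigmas[j])   # close to the true pair (2.0, 0.5)
```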
The estimator θ̂_β is associated with the estimating function

s_β(x, θ) = f_θ(x)^β {s(x, θ) − c_β(θ)}, where c_β(θ) = ∫ f_θ(y)^{β+1} s(y, θ) dy / ∫ f_θ(y)^{β+1} dy,

and s(x, θ) is the score vector (∂/∂θ) log f_θ(x). We observe that the estimating function is unbiased in the sense that E_θ{s_β(x, θ)} = 0, because ∫ f_θ^{β+1} s dx − c_β(θ) ∫ f_θ^{β+1} dx = 0. Thus the estimating equation is given by

S_β(θ) = ∑_{i=1}^n s_β(x_i, θ) = 0.  (17)

We see that the gradient vector of L_β(θ) is proportional to S_β(θ); hence, the estimating function (17) exactly leads to the estimator θ̂_β. Accordingly, we obtain the asymptotic normality

√n (θ̂_β − θ) →_D N(0, AV_β(θ)),

where →_D denotes convergence in law, and N(μ, V) denotes a normal distribution with mean vector μ and variance matrix V. Here the limiting variance matrix is of the sandwich form

AV_β(θ) = J_β(θ)^{-1} V_β(θ) J_β(θ)^{-1}, with V_β(θ) = E_θ{s_β(x, θ) s_β(x, θ)^T} and J_β(θ) = E_θ{s_β(x, θ) s(x, θ)^T}.

The inequality (4) implies AV_β(θ) ≥ I_θ^{-1} for any β, so that the estimator θ̂_β is not asymptotically efficient, where I_θ denotes the Fisher information matrix defined in (3). In fact, θ̂_β becomes efficient only when β = 0, in which case it reduces to the MLE. Hence, as far as asymptotic efficiency is concerned, there is no optimal estimator except for the MLE in the class {θ̂_β}_{β≥0}.
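The efficiency loss can be illustrated by a Monte Carlo sketch for the normal location model with known scale, where the estimating equation reduces to a weighted-mean fixed point. The sample size, replication count, and β values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
# Monte Carlo sketch of the efficiency loss: the scaled sampling variance
# n * Var(mu_hat) is about 1 for beta = 0 (the MLE) and grows with beta.
n, reps = 200, 2000
scaled_var = {}
for beta in [0.0, 0.5, 1.0]:
    est = np.empty(reps)
    for r in range(reps):
        x = rng.normal(0.0, 1.0, n)
        mu = np.median(x)
        for _ in range(30):   # weighted-mean fixed point of the estimating equation
            w = np.exp(-0.5 * beta * (x - mu) ** 2)
            mu = w @ x / w.sum()
        est[r] = mu
    scaled_var[beta] = n * est.var()
    print(beta, scaled_var[beta])
```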

Super Robustness
We would like to investigate the influence of outliers on the estimator θ̂_β. We consider outliers in a probabilistic manner: an observation x_o is called an outlier if f_θ(x_o) is very small. Let us look carefully at the estimating equation (17). We observe that the larger the value of β is, the smaller ∥s_β(x_o, θ)∥ is for any outlier x_o, because of the factor f_θ(x_o)^β. Indeed, the estimating equation can be written in the weighted form

∑_{i=1}^n f_θ(x_i)^β s(x_i, θ) = c_β(θ) ∑_{i=1}^n f_θ(x_i)^β,

which implies that, for a sufficiently large β, the estimating equation receives little impact from outliers contaminated in the data set, because the terms with small f_θ(x_i) contribute almost nothing. In this sense θ̂_β is robust for such β [19]. From an empirical viewpoint, it is sufficient to fix β ≥ 0.1. In the case that f_θ(x) is absolutely continuous on R^p, we see that lim_{|x|→∞} ∥s_β(x, θ)∥ = 0, which is in sharp contrast with the optimal robust method (cf. [20]). Consider an ϵ-contamination model

f_{θ,ϵ}(x) = (1 − ϵ) f_θ(x) + ϵ δ(x).

In this context, δ(x) is the density for outliers, which departs from the assumed density f_θ(x) to a large degree. It seems reasonable to suppose that ∫ f_θ(x) δ(x) dx ≃ 0.
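The redescending behaviour is easy to see for the N(0, 1) model, where the score is s(x) = x: the MLE score grows without bound, while the power-weighted score f(x)^β x returns to zero for outlying x. A minimal sketch (β = 0.5 and the evaluation points are arbitrary):

```python
import numpy as np

beta = 0.5
x = np.array([1.0, 3.0, 6.0, 10.0])
f = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)   # N(0, 1) density
mle_score = x                  # unbounded as |x| grows
power_score = f**beta * x      # redescends to 0 as |x| grows
print(power_score)
```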
Thus, if the true density function p(x) equals f_{θ,ϵ}(x), then θ̂_β is a consistent estimator of θ for all ϵ, 0 ≤ ϵ < 1. In this sense we say that θ̂_β satisfies super robustness. On the other hand, the mean power likelihood function ℓ_β(θ) given in (9) is associated with an estimating function that is unbiased, but the corresponding estimator does not satisfy such super robustness. Let us consider a multivariate normal model N(μ, V) with mean vector μ and variance matrix V, for which the minimum projective power divergence method by (16) is applicable for the estimation of (μ, V) over R^p × S, where S denotes the space of all symmetric, positive definite matrices. Noting the projective invariance, we obtain from the estimating equation the weighted mean and variance

μ = ∑_{i=1}^n w(x_i, μ, V) x_i / ∑_{i=1}^n w(x_i, μ, V),  (19)

V = (β + 1) ∑_{i=1}^n w(x_i, μ, V)(x_i − μ)(x_i − μ)^T / ∑_{i=1}^n w(x_i, μ, V),  (20)

where w(x, μ, V) is the weight function defined by exp{−(β/2)(x − μ)^T V^{-1}(x − μ)}. Although the explicit solution is not known, a natural iteration algorithm can be proposed in which the left-hand sides of (19) and (20), say (μ_{t+1}, V_{t+1}), are updated by plugging (μ_t, V_t) into the right-hand sides of (19) and (20). Obviously, for the estimator (μ̂_β, V̂_β) with β = 0, that is, the MLE, no iteration step is needed: the sample mean vector and sample variance matrix give the exact solution.
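The iteration can be sketched as follows for a contaminated two-dimensional sample. The sketch assumes the weight exp{−(β/2)(x − μ)^T V^{-1}(x − μ)} and the (β + 1) correction in the variance update, both of which follow from differentiating the mean power likelihood for the normal model; the data and β = 0.5 are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# 2-d sample: 190 points from N(mu0, V0) plus 10 gross outliers
mu0 = np.array([1.0, -1.0])
V0 = np.array([[1.0, 0.3], [0.3, 0.5]])
x = np.vstack([rng.multivariate_normal(mu0, V0, 190),
               rng.multivariate_normal([8.0, 8.0], np.eye(2), 10)])

beta = 0.5
mu = x.mean(axis=0)   # start from the non-robust sample moments
V = np.cov(x.T)
for _ in range(100):
    d = x - mu
    m = np.einsum('ij,jk,ik->i', d, np.linalg.inv(V), d)  # Mahalanobis^2
    w = np.exp(-0.5 * beta * m)                           # weight function
    mu = w @ x / w.sum()                                  # weighted mean
    d = x - mu
    V = (beta + 1) * (w[:, None] * d).T @ d / w.sum()     # weighted variance

print(mu)   # near the true centre, despite the outliers
print(V)    # near the true variance matrix
```

The outliers receive weights that are numerically negligible after a few iterations, so the fixed point is driven by the clean part of the sample.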

Local Learning
We now discuss a statistical idea beyond robustness. Since the expression (16) is inconvenient for investigating the behavior of the mean expected power likelihood function, we focus on the core term

I_β(θ) = (1/β) log ∫ p(x) f_θ(x)^β dx,

where p(x) is the true density function, that is, the underlying distribution generating the data set. Whereas p(x) was modeled as the ϵ-contaminated density f_{θ,ϵ}(x) in the previous section, we now consider a K-mixture model, in which p(x) is written in terms of K different density functions p_k(x) as

p(x) = ∑_{k=1}^K π_k p_k(x),  (21)

where π_k denotes the mixing ratio. We note that this modeling is redundant unless the p_k(x)'s are specified; in fact, the case in which π_1 = 1 and p_1(x) is arbitrary means no restriction on p(x). We nevertheless discuss I_β(θ) on this redundant model. Taking the limit of β to 0 gives I_0(θ) = ∫ p(x) log f_θ(x) dx. For the normal model this term I_0(μ, V) has a global maximizer (μ̄, V̄) given by the mean vector and variance matrix with respect to p(x), since we can write

I_0(μ, V) = −(1/2) { log det(2πV) + tr(V^{-1} V̄) + (μ − μ̄)^T V^{-1} (μ − μ̄) }.
This suggests a limitation of the maximum likelihood method: the MLE cannot move its solution away from N(μ̄, V̄) even if the true density function in (21) varies arbitrarily. On the other hand, if β becomes larger, then the graph of I_β(μ, V) changes flexibly in accordance with p(x) in (21). For example, we assume the normal mixture

p(x) = ∑_{k=1}^K π_k φ(x; μ_k, V_k),  (23)

where φ(x; μ, V) denotes the density of N(μ, V). Here, a Gaussian convolution formula shown in Appendix 3 allows I_β(μ, V) to be evaluated in closed form. In particular, when β = 1,

I_1(μ, V) = log ∑_{k=1}^K π_k φ(μ; μ_k, V_k + V),

with p(·) defined as in (23). If the normal mixture model has K modes, then I_1(μ, V) has the same K modes for sufficiently small det V. Therefore I_β(μ, V) with a large β adaptively behaves according to the true density function. This suggests that the minimum projective power divergence method can improve on the weak point of the MLE when the true density function has a large degree of model uncertainty. For example, such an adaptive selection of β is discussed in principal component analysis (PCA), where it enables us to provide exploratory analysis beyond the conventional PCA. Consider the problem of extracting principal components when the data distribution has a multimodal density as described in (21). Then we wish to find all the sets of principal vectors of V_k, for k = 1, ..., K. The minimum projective power divergence method can properly provide a PCA that finds the principal vectors of V_k at the centers μ_k separately, for k = 1, ..., K.
First we determine a starting point, say (μ^(1), V^(1)), from which we run the iteratively reweighted algorithm (19) and (20), obtaining the first estimator (μ̂^(1), V̂^(1)). The estimator V̂^(1) then gives the first PCA, with center μ̂^(1), by the standard method. Next, we choose a second starting point (μ^(2), V^(2)) kept away from the first estimator (μ̂^(1), V̂^(1)) by a heuristic procedure based on the weight function w(x, μ, V) (see [22] for a detailed discussion). Starting from (μ^(2), V^(2)), the same algorithm (19) and (20) leads to the second estimator (μ̂^(2), V̂^(2)), which gives the second PCA with center μ̂^(2). In this way, the sequential procedure can explore the multimodal structure, with an appropriately determined stopping rule.
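The sequential procedure can be sketched as follows on two well-separated Gaussian clusters. The restart heuristic used here (start the second fit from the observation most strongly downweighted by the first fit) is a simplified stand-in for the procedure of [22], and the weight and variance updates follow the conventions assumed in the previous section.

```python
import numpy as np

rng = np.random.default_rng(4)
# two well-separated clusters; we seek the local principal axes of each
x = np.vstack([rng.multivariate_normal([0.0, 0.0], [[2.0, 0.9], [0.9, 1.0]], 150),
               rng.multivariate_normal([9.0, 9.0], [[1.0, -0.5], [-0.5, 2.0]], 150)])
beta = 1.0

def local_fit(x, mu, V, iters=300):
    # iteratively reweighted mean/variance updates, cf. (19) and (20)
    for _ in range(iters):
        d = x - mu
        m = np.einsum('ij,jk,ik->i', d, np.linalg.inv(V), d)
        w = np.exp(-0.5 * beta * m)
        mu = w @ x / w.sum()
        d = x - mu
        V = (beta + 1) * (w[:, None] * d).T @ d / w.sum()
    m = np.einsum('ij,jk,ik->i', x - mu, np.linalg.inv(V), x - mu)
    return mu, V, np.exp(-0.5 * beta * m)

mu1, V1, w1 = local_fit(x, x[0], np.eye(2))    # first local fit
start2 = x[int(np.argmin(w1))]                 # most downweighted observation
mu2, V2, _ = local_fit(x, start2, np.eye(2))   # second local fit

for mu, V in [(mu1, V1), (mu2, V2)]:
    vals, vecs = np.linalg.eigh(V)
    print(mu, vecs[:, -1])   # local centre and leading principal axis
```

Each local fit concentrates on one cluster, and eigendecomposing the local variance matrices yields one PCA per mode.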

Concluding Remarks
We have focused on the fact that the optimality of the likelihood method is fragile under model uncertainty. Such weakness frequently appears in practice when a data set comes from an observational study rather than a purely randomized experimental study. Nevertheless, the likelihood method remains the most well-supported method in statistics. We note that the minimum projective power divergence method has one degree of freedom, the choice of the index β, and it reduces to the MLE in the limit of β to 0. A data-adaptive selection of β is possible by the cross validation method; however, an appropriate model selection criterion is still required for faster computation.
Recently, novel methods for pattern recognition have been proposed in the machine learning paradigm [23-25]. These approaches are directly concerned with the true distribution, in the framework of probably approximately correct (PAC) learning in computational learning theory. We need to employ this theory for the minimum projective power divergence method. In statistical physics there are remarkable developments on the Tsallis entropy in connection with nonequilibrium states, chaotic phenomena, scale-free networks and econophysics. We should explore these developments from the statistical point of view.