Abstract
In statistical physics, the Boltzmann-Shannon entropy provides a good understanding of the equilibrium states of a number of phenomena. In statistics, the entropy corresponds to the maximum likelihood method, in which the Kullback-Leibler divergence connects the Boltzmann-Shannon entropy and the expected log-likelihood function. Maximum likelihood estimation is supported by its optimal performance; however, this optimality is known to break down easily in the presence of even a small degree of model uncertainty. To deal with this problem, a new statistical method, closely related to Tsallis entropy, is proposed and shown to be robust against outliers, and we discuss a local learning property associated with the method.
1. Introduction
Consider a practical situation in which a data set {x_1, …, x_n} is randomly sampled from a probability density function f(x, θ) of a statistical model M = {f(x, θ) : θ ∈ Θ}, where θ is a parameter vector and Θ is the parameter space. A fundamental tool for the estimation of the unknown parameter θ is the log-likelihood function defined by
$$\ell(\theta)=\sum_{i=1}^{n}\log f(x_i,\theta),\qquad(1)$$
which is commonly utilized by statistical researchers, frequentists and Bayesians alike. The maximum likelihood estimator (MLE) is defined by
$$\hat{\theta}=\arg\max_{\theta\in\Theta}\,\ell(\theta).\qquad(2)$$
The Fisher information matrix for θ is defined by
$$I(\theta)=\int\frac{\partial\log f(x,\theta)}{\partial\theta}\,\frac{\partial\log f(x,\theta)}{\partial\theta^{\top}}\,f(x,\theta)\,dx,\qquad(3)$$
where θ^⊤ denotes the transpose of θ. As the sample size n tends to infinity, the variance matrix of √n(θ̂ − θ) converges to I(θ)^{-1}. This inverse matrix gives exactly the lower bound in the class of asymptotically consistent estimators in the sense of the matrix inequality
$$\mathrm{AV}_{\theta}(\tilde{\theta})\succeq I(\theta)^{-1}\qquad(4)$$
for any asymptotically consistent estimator θ̃ of θ, where AV_θ(θ̃) denotes the limiting variance matrix of √n(θ̃ − θ) under the distribution with the density f(x, θ).
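As a small numerical illustration of (1)-(4), the following sketch fits a normal model by maximizing the log-likelihood with a generic optimizer and reads off asymptotic standard errors from the inverse Fisher information; the model choice, simulated data and function names are ours and are not taken from the paper.

```python
# Illustrative sketch (not from the paper): numerical MLE for N(mu, sigma^2)
# and the asymptotic standard errors suggested by the Fisher information (3)-(4).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=500)        # simulated sample

def neg_log_lik(par):
    mu, log_sigma = par
    sigma = np.exp(log_sigma)                        # keep sigma positive
    return -np.sum(-0.5 * np.log(2 * np.pi) - np.log(sigma)
                   - 0.5 * ((x - mu) / sigma) ** 2)

res = minimize(neg_log_lik, x0=[0.0, 0.0])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# For N(mu, sigma^2) the Fisher information per observation is
# diag(1/sigma^2, 2/sigma^2), so its inverse divided by n gives the
# asymptotic variances of (mu_hat, sigma_hat).
n = x.size
se_mu, se_sigma = sigma_hat / np.sqrt(n), sigma_hat / np.sqrt(2 * n)
print(mu_hat, sigma_hat, se_mu, se_sigma)
```

For this model the numerical maximizer agrees with the closed-form MLE (the sample mean and the maximum likelihood scale), which provides a convenient check of the optimization.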
On the other hand, the Boltzmann-Shannon entropy
$$H(p)=-\int p(x)\log p(x)\,dx\qquad(5)$$
plays a fundamental role in various fields, such as statistical physics, information science and so forth. This is directly related to the MLE. Let us consider an underlying distribution with the density function p(x). The cross entropy is defined by
$$C(p,q)=-\int p(x)\log q(x)\,dx.\qquad(6)$$
We note that C(p, f_θ) = −E_p{log f(X, θ)}, where E_p denotes the expectation with respect to p. Hence, the maximum likelihood principle is equal to the minimum cross entropy principle. The Kullback-Leibler (KL) divergence is defined by
$$D(p,q)=\int p(x)\log\frac{p(x)}{q(x)}\,dx,\qquad(7)$$
which gives a kind of information distance between p and q. Note that
$$D(p,q)=C(p,q)-C(p,p)\geq 0,$$
with equality if and only if p = q.
An exponential (type) distribution model is defined by the density form
$$f(x,\theta)=\exp\{\theta^{\top}x-\psi(\theta)\},\qquad(8)$$
where ψ(θ) is the cumulant transform defined by ψ(θ) = log ∫ exp(θ^⊤x) dΛ(x) with respect to a carrier measure Λ. Under the assumption of this family, the MLE has a number of convenient properties such as minimal sufficiency, unbiasedness and efficiency [1]. In particular, the MLE for the expectation parameter η = E_θ(X) is explicitly given by
$$\hat{\eta}=\frac{1}{n}\sum_{i=1}^{n}x_i,$$
which is associated with a dualistic relation between the canonical parameter θ and the expectation parameter η [2,3]. Thus the MLE enjoys these excellent properties, which are associated with the logarithmic and exponential functions as in (2) and (8).
The MLE has been widely employed in statistics, and its properties are supported by theoretical discussion, for example, as in [4]. However, the MLE has some inappropriate properties when the underlying distribution does not belong to the model M. A statistical model is just a simulation of the true distribution, as Fisher pointed out in [1]. The model, which is used only as a working model, is wrong in most practical cases. In such situations, the MLE does not show proper performance because of model uncertainty. In this paper we explore an alternative estimation method to the MLE.
2. Power Divergence
The logarithmic transform of observed values is widely employed in data analysis. On the other hand, a power transformation defined by
$$x^{(\lambda)}=\frac{x^{\lambda}-1}{\lambda}\quad(\lambda\neq 0),\qquad x^{(0)}=\log x,$$
often gives more flexible performance for obtaining a good approximation to the normal distribution [5]. In analogy with this transform, the power cross entropy C_β(p, q) is defined by applying the power transform to the density, where β is a positive parameter. If we take the limit of β to 0, then C_β(p, q) converges to the cross entropy, which is given in (6). In fact, the power parameter β is not fixed, so that different β's give different behaviors of the power entropy. The diagonal power entropy H_β(p) = C_β(p, p) is given by taking the diagonal. Actually, this is equivalent to Tsallis q-entropy with the relation q = 1 + β.
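For concreteness, one common convention for the power cross entropy and its diagonal, consistent with the β-divergence literature cited here (e.g., [7,17]), is recorded below; the normalization in the original displays may differ by additive constants, so this is our transcription rather than a verbatim copy.
$$C_{\beta}(p,q)=-\frac{1}{\beta}\Bigl\{\int p(x)\,q(x)^{\beta}\,dx-1\Bigr\}+\frac{1}{1+\beta}\Bigl\{\int q(x)^{1+\beta}\,dx-1\Bigr\},$$
$$H_{\beta}(p)=C_{\beta}(p,p)=-\frac{1}{\beta(1+\beta)}\Bigl\{\int p(x)^{1+\beta}\,dx-1\Bigr\}.$$
Under this convention C_β(p, q) → −∫ p log q dx, the cross entropy (6), as β → 0, and H_β(p) coincides with the Tsallis q-entropy S_q(p) = (1 − ∫ p^q dx)/(q − 1) with q = 1 + β up to the factor 1/(1 + β).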
Let {x_1, …, x_n} be a random sample from an unknown density function p(x). Then we define the empirical mean power likelihood as the empirical counterpart of −C_β(p, f_θ). See [6,7,8,9] for statistical applications. Accordingly, the minus expectation value of the empirical mean power likelihood is equal to the power cross entropy C_β(p, f_θ). In general, the relation between the cross and diagonal entropies leads to the inequality C_β(p, q) ≥ C_β(p, p), from which we define the power divergence by
$$D_{\beta}(p,q)=C_{\beta}(p,q)-C_{\beta}(p,p).$$
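To make the cross/diagonal inequality concrete, the following sketch (ours) numerically integrates C_β(p, q) and C_β(p, p) for two normal densities, using the convention written out above, and checks that their difference, the power divergence, is nonnegative; the additive constants in the convention do not affect the difference.

```python
# Illustrative check (not from the paper): C_beta(p,q) >= C_beta(p,p) for two
# normal densities, with C_beta written in the convention recorded above.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

beta = 0.5
p = norm(loc=0.0, scale=1.0).pdf      # "true" density
q = norm(loc=1.0, scale=2.0).pdf      # candidate density

def cross_power_entropy(p, q, beta):
    term1 = quad(lambda x: p(x) * q(x) ** beta, -np.inf, np.inf)[0]
    term2 = quad(lambda x: q(x) ** (1.0 + beta), -np.inf, np.inf)[0]
    return -(term1 - 1.0) / beta + (term2 - 1.0) / (1.0 + beta)

C_pq = cross_power_entropy(p, q, beta)
C_pp = cross_power_entropy(p, p, beta)
print(C_pq, C_pp, C_pq - C_pp)        # the last value, D_beta(p,q), should be >= 0
```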
We extend the power entropy and divergence to be defined over the space of all nonnegative density functions, which are not always assumed to have total mass one. In particular, this extension is useful for proposing boosting methods [10,11,12,13,14,15,16].
This derivation can be extended by a generator function U. Assume that U is strictly increasing and convex. The Fenchel duality discussion leads to a conjugate convex function U* of U defined by
$$U^{*}(s)=\sup_{t}\{st-U(t)\},$$
which reduces to U*(s) = s ξ(s) − U(ξ(s)), where ξ is the inverse function of the derivative of U. Then the U-cross entropy is defined in terms of U and ξ, and similarly the U-divergence is defined as the difference between the cross and the diagonal U-entropies. We note that D_U(p, q) ≥ 0, with equality if and only if p = q. By the definition of ξ in (10) we see that the integrand of the right-hand side of (11) is always nonnegative. The power divergence is one example of U-divergence obtained by fixing the generator appropriately.
The power divergence can also be defined on the extended space of nonnegative functions, for μ and ν in that space [17]. The generator is strictly increasing and convex, which implies that the integrand of the right-hand side of (12) is nonnegative.
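As a reference point, the U-cross entropy and U-divergence in this line of work (cf. [13,16]) are commonly written in the form below, and a power-type generator of the kind meant here can be taken as shown; the exact notation of the original displays may differ, so this should be read as our hedged transcription.
$$C_{U}(p,q)=\int\bigl\{U(\xi(q(x)))-p(x)\,\xi(q(x))\bigr\}\,dx,\qquad D_{U}(p,q)=C_{U}(p,q)-C_{U}(p,p),$$
so that
$$D_{U}(p,q)=\int\bigl\{U(\xi(q(x)))-U(\xi(p(x)))-p(x)\bigl(\xi(q(x))-\xi(p(x))\bigr)\bigr\}\,dx\;\geq\;0$$
by the convexity of U, since p = U′(ξ(p)). A generator yielding the power divergence is
$$U_{\beta}(t)=\frac{1}{1+\beta}\,(1+\beta t)^{\frac{1+\beta}{\beta}},\qquad \xi_{\beta}(s)=\frac{s^{\beta}-1}{\beta},$$
for which U_β(ξ_β(s)) = s^{1+β}/(1 + β), recovering the power divergence up to the convention of additive constants.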
To explore this further, it seems sufficient to restrict the definition domain of the power divergence to density functions with total mass one. However, we observe that this restriction is not useful for statistical considerations. We instead discuss the restriction to the projective space as follows. Fix two nonnegative functions μ and ν. We say that μ and ν are projectively equivalent if there exists a positive scalar λ such that ν = λμ. Thus, we write μ ∼ ν. Similarly, we call a divergence D defined on the space of nonnegative functions projectively invariant if, for all positive scalars λ and λ′,
$$D(\lambda\mu,\lambda'\nu)=D(\mu,\nu).\qquad(13)$$
We can derive a variant of the power divergence along these lines, which we denote by Δ_β; see Appendix 1 for the derivation. Immediately, we observe that Δ_β satisfies (13), or projective invariance. Hereafter, we call Δ_β the projective power divergence. In this way, for probability densities p and q, an explicit expression of Δ_β(p, q) is obtained.
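For reference, the γ-divergence of Fujisawa and Eguchi [19] has the following explicit form for probability densities p and q, and the projective power divergence here is expected to be of this type, possibly up to a different normalization; we record it as our reading rather than as the paper's exact display.
$$\Delta_{\beta}(p,q)=\frac{1}{\beta(1+\beta)}\log\int p(x)^{1+\beta}\,dx-\frac{1}{\beta}\log\int p(x)\,q(x)^{\beta}\,dx+\frac{1}{1+\beta}\log\int q(x)^{1+\beta}\,dx.$$
Each of the three terms changes only by an additive constant when p or q is multiplied by a positive scalar, and these constants cancel, which is exactly the projective invariance (13).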
If we take specific values of β, explicit forms of Δ_β follow; in particular, in the limit
$$\lim_{\beta\to 0}\Delta_{\beta}(p,q)=D(p,q),$$
where D(p, q) is nothing but the KL divergence (7). We observe that the projective power divergence satisfies information additivity. In fact, if we write p and q in product form as p(x, y) = p_1(x) p_2(y) and q(x, y) = q_1(x) q_2(y), respectively, then
$$\Delta_{\beta}(p,q)=\Delta_{\beta}(p_{1},q_{1})+\Delta_{\beta}(p_{2},q_{2}),$$
which means information additivity. We note that this property is not satisfied by the original power divergence D_β. Furthermore, Δ_β is associated with the Pythagorean identity in the following.
Proposition 1.
Assume that there exist three different points p, q and r in the space of density functions satisfying
Define a path connecting p with q and a path connecting r with q as
Then
holds for all t and all s.
The proof is given in Appendix 2. This Pythagorean-type identity is also satisfied by the U-divergence [16].
3. Minimum Power Divergence Method
In the previous section we introduced the projective power divergence; here we consider the statistical method defined by its minimization. By definition, the projective power cross entropy decomposes into a term written as an expectation under the true density and a term depending only on the model density f(x, θ), and minimizing the cross entropy in θ is equivalent to minimizing the divergence. Hence, this decomposition leads to the empirical analogue based on a given data set,
which we call the mean power likelihood ℓ_β(θ) with the index β. Thus, the minus expectation of ℓ_β(θ) with respect to the unknown density function equals the projective power cross entropy. The limit of β to 0 leads ℓ_β(θ) to converge to the mean log-likelihood. Assume that {x_1, …, x_n} is a random sample exactly from f(x, θ). Then the strong law of large numbers yields that ℓ_β converges almost surely to its expectation
as n increases to infinity. From the property associated with the projective power divergence it follows that this limit is uniquely maximized at the true parameter θ, which implies the convergence of its maximizer. Consequently, we conclude that the estimator converges to θ almost surely. The proof is similar to that for the MLE in Wald [18]. In general, any minimum divergence estimator satisfies strong consistency in this asymptotic sense.
The estimator of the minimum projective power divergence method is associated with the estimating function s_β(x, θ) given in (17),
where s(x, θ) is the score vector, s(x, θ) = (∂/∂θ) log f(x, θ). We observe that the estimating function is unbiased in the sense that E_θ{s_β(X, θ)} = 0, which follows by a direct calculation under the model density.
Thus the estimating equation is given by
$$\sum_{i=1}^{n}s_{\beta}(x_i,\theta)=0.$$
We see that the gradient vector of ℓ_β(θ) is proportional to the sum of the estimating functions evaluated at the observations. Hence, the estimating function (17) exactly leads to the minimum projective power divergence estimator.
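Writing out the gradient condition for a γ-type mean power likelihood suggests an estimating function of the weighted, centered-score form below; we give it as a hedged reconstruction of the structure behind (17), since only this structure is pinned down by the surrounding discussion.
$$s_{\beta}(x,\theta)=f(x,\theta)^{\beta}\Bigl\{s(x,\theta)-\frac{\int f(y,\theta)^{1+\beta}\,s(y,\theta)\,dy}{\int f(y,\theta)^{1+\beta}\,dy}\Bigr\},\qquad s(x,\theta)=\frac{\partial}{\partial\theta}\log f(x,\theta).$$
With this form the unbiasedness E_θ{s_β(X, θ)} = 0 is immediate, since ∫ f^{1+β} s dy is exactly cancelled by the centering term, and the factor f(x, θ)^β is what downweights observations with small model density in the next subsection.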
Accordingly, we obtain the following asymptotic normality,
$$\sqrt{n}\,(\hat{\theta}_{\beta}-\theta)\ \xrightarrow{\;L\;}\ N(0,V_{\beta}(\theta)),$$
where →_L denotes convergence in law, θ̂_β denotes the minimum projective power divergence estimator, and N(μ, V) denotes a normal distribution with mean vector μ and variance matrix V. Here, the limiting variance matrix V_β(θ) takes the usual sandwich form determined by the estimating function.
The inequality (4) implies that V_β(θ) ⪰ I(θ)^{-1} for any β, which implies that the estimator with β > 0 is not asymptotically efficient, where I(θ) denotes the Fisher information matrix defined in (3). In fact, the estimator becomes efficient only in the limit β → 0, in which it reduces to the MLE. Hence, there is no optimal estimator except for the MLE in this class as far as asymptotic efficiency is concerned.
3.1. Super Robustness
We would like to investigate the influence of outliers on the estimator. We consider outliers in a probabilistic manner: an observation x_o is called an outlier if f(x_o, θ) is very small. Let us look carefully at the estimating equation (17). Then we observe that the larger the value of β is, the smaller the weight f(x_o, θ)^β becomes for all outliers x_o. The estimator is solved as a weighted expression in which each observation contributes through this weight,
which implies that, for a sufficiently large β, the estimating equation receives little impact from outliers contaminated in the data set, because the value of the corresponding integral is hardly influenced by the outliers. In this sense, the estimator is robust for such β [19]. From an empirical viewpoint, a fixed moderate value of β is known to suffice. In the case that f(x, θ) is absolutely continuous in x, the influence of an observation vanishes as f(x, θ) tends to zero, which is in sharp contrast with the optimally robust method (cf. [20]). Consider an ϵ-contamination model
$$f_{\epsilon}(x)=(1-\epsilon)\,f(x,\theta)+\epsilon\,\delta(x).$$
In this context, δ is the density for outliers, which departs from the assumed density f(x, θ) to a large degree. It seems reasonable to suppose that ∫ δ(x) f(x, θ)^β dx ≈ 0. Thus, if the true density function equals f_ϵ, then the minimum projective power divergence estimator becomes a consistent estimator for θ for any ϵ with 0 ≤ ϵ < 1. In this sense we say that the estimator satisfies super robustness. On the other hand, the mean power likelihood function as given in (9) is associated with its own estimating function,
which is unbiased, but the corresponding estimator does not satisfy such super robustness.
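As a toy illustration of this insensitivity, the following sketch (ours) estimates a normal location under 10% gross contamination by iterating the β-weighted mean suggested by the weighting f(x, θ)^β, and compares it with the sample mean; the fixed-point update, the fixed scale and all names are our own simplifications, not the paper's algorithm.

```python
# Illustrative sketch (not from the paper): beta-weighted location estimate
# under epsilon-contamination, with weights w_i proportional to phi(x_i)^beta.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, eps, beta = 1000, 0.10, 0.5
clean = rng.normal(0.0, 1.0, size=int(n * (1 - eps)))
outliers = rng.normal(20.0, 1.0, size=int(n * eps))    # gross outliers
x = np.concatenate([clean, outliers])

mu, sigma = np.median(x), 1.0       # crude start; the scale is held fixed here
for _ in range(100):
    w = norm.pdf(x, loc=mu, scale=sigma) ** beta        # tiny weight for outliers
    mu_new = np.sum(w * x) / np.sum(w)
    if abs(mu_new - mu) < 1e-10:
        break
    mu = mu_new

print("sample mean:", x.mean())     # pulled toward 20 by the outliers
print("beta-weighted mean:", mu)    # stays near 0
```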
Let us consider a multivariate normal model with mean vector μ and variance matrix V, in which the minimum projective power divergence method given by (16) is applicable for the estimation of μ and V as follows. The parameter space consists of all mean vectors together with the space of all symmetric, positive definite variance matrices.
Noting the projective invariance, we obtain an objective function in which the normalizing constant of the normal density drops out,
from which the estimating equation gives the weighted mean and variance as in (19) and (20),
where the weight function w(x) is defined through the β-th power of the normal density, w(x) ∝ φ(x; μ, V)^β. Although we do not know the explicit solution, a natural iteration algorithm can be proposed in which the left-hand sides of (19) and (20), say μ and V, are both updated by plugging the current values into the right-hand sides of (19) and (20). Obviously, for the estimator with β = 0, or the MLE, we need no iteration step: the sample mean vector and sample variance matrix give the exact solution.
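Since (19) and (20) are weighted-mean and weighted-variance equations, the iteration can be sketched as below; note that the weight w(x) ∝ φ(x; μ, V)^β and the (1 + β) inflation of the weighted covariance come from our own derivation of the normal-model estimating equations, so the exact constants should be checked against the original displays.

```python
# Hedged sketch of the iteration suggested by (19)-(20): weights proportional
# to phi(x_i; mu, V)^beta, then a weighted mean and a weighted covariance
# inflated by (1 + beta).  The (1 + beta) factor is our own derivation, given
# as an assumption about the exact form of (20).
import numpy as np
from scipy.stats import multivariate_normal

def minimum_power_normal(x, beta, n_iter=200, tol=1e-8):
    """x: (n, p) data matrix; returns the weighted mean/covariance fixed point."""
    mu = np.median(x, axis=0)                  # crude robust start
    V = np.cov(x, rowvar=False)
    for _ in range(n_iter):
        w = multivariate_normal.pdf(x, mean=mu, cov=V) ** beta
        w = w / w.sum()
        mu_new = w @ x
        d = x - mu_new
        V_new = (1.0 + beta) * (d * w[:, None]).T @ d
        converged = np.allclose(mu_new, mu, atol=tol) and np.allclose(V_new, V, atol=tol)
        mu, V = mu_new, V_new
        if converged:
            break
    return mu, V

rng = np.random.default_rng(2)
x = np.vstack([rng.normal(0, 1, size=(450, 2)),
               rng.normal(8, 1, size=(50, 2))])   # 10% cluster of outliers
print(minimum_power_normal(x, beta=0.5))
```

Setting beta=0 makes the weights constant, so a single pass returns the sample mean vector and the (biased) sample covariance matrix, in agreement with the remark that the MLE requires no iteration.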
3.2. Local Learning
We discuss a statistical idea beyond robustness. Since the expression (16) is inconvenient for investigating the behavior of the expected mean power likelihood function, we focus on the data-dependent core term ∫ g(x) f(x, θ)^β dx,
where g is the true density function, that is, the underlying distribution generating the data set. We consider a K-component mixture model, whereas g was modeled as an ϵ-contaminated density function in the previous section. Thus, g is written by K different density functions as follows:
$$g(x)=\sum_{k=1}^{K}\pi_{k}\,g_{k}(x),\qquad(21)$$
where π_k denotes the mixing ratio. We note that there exists redundancy in this modeling unless the component densities g_k are specified. In fact, the case in which K = 1 and g_1 is arbitrary means no restriction on g. However, we discuss this redundant model and find that the expected power likelihood reflects the mixture structure of g.
We confirm the corresponding expression for the expected log-likelihood by taking the limit of β to 0. It is noted that the expected log-likelihood has a global maximizer that is the pair of the mean vector and variance matrix with respect to g, since we can write it explicitly in terms of these two moments, as in (23).
This suggests a limitation of the maximum likelihood method. The MLE cannot change its limiting solution even if the true density function g in (21) varies arbitrarily, so long as its mean vector and variance matrix are unchanged. On the other hand, if β becomes larger, then the graph of the expected power likelihood changes flexibly in accordance with g in (21). For example, we assume that each component density g_k is normal,
where φ(x; μ_k, V_k) is a normal density function with mean vector μ_k and variance matrix V_k. Then the core term becomes a weighted sum of Gaussian integrals of the form ∫ φ(x; μ_k, V_k) φ(x; μ, V)^β dx.
Here, we see a formula for this type of integral,
as shown in Appendix 3, from which we get a closed-form expression for the expected power likelihood.
In particular, in the limiting case in which the model variance matrix tends to the zero matrix O and the limit is understood as in (23), the following holds:
if the normal mixture model has K modes, then the expected power likelihood has the same K modes whenever the induced weighting is sufficiently localized. Therefore, the expected power likelihood with a large β adaptively behaves according to the true density function. This suggests that the minimum projective power divergence method can improve upon the weak point of the MLE when the true density function involves a large degree of model uncertainty. For example, such an adaptive selection of β is useful in principal component analysis (PCA), which enables us to provide exploratory analysis beyond the conventional PCA.
Consider the problem of extracting principal components when the data distribution has a density function with multimodality as described in (21). Then we wish to find all the sets of principal vectors of the component variance matrices together with the corresponding centers. The minimum projective power divergence method can properly provide such a PCA, searching the principal vectors at each center separately. First we determine the first starting point, from which we employ the iteratively reweighted algorithm (19) and (20), so that we get the first estimator of a mean vector and variance matrix. This estimator gives the first PCA, centered at the estimated mean, by the standard method. Next, we update to a second starting point chosen to keep away from the first estimator by a heuristic procedure based on the weight function (see [22] for the detailed discussion). Starting from the second starting point, the same algorithm (19) and (20) leads to the second estimator and the second PCA with its own center. In this way, we can continue this sequential procedure to explore the multimodal structure with an appropriately determined stopping rule.
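A rough sketch of this sequential exploration is given below: each local fit reuses the reweighted iteration, its principal vectors are read off from an eigendecomposition of the fitted variance matrix, and the next starting point is taken as the observation with the smallest weight under the current fit; that restart rule is only a hypothetical stand-in for the heuristic of [22], and a fixed number of components stands in for a stopping rule.

```python
# Hedged sketch of sequential local PCA.  The restart rule (least-weight
# observation) is a hypothetical placeholder for the heuristic of [22].
import numpy as np
from scipy.stats import multivariate_normal

def local_fit(x, beta, mu0, n_iter=200):
    """Reweighted mean/variance iteration started near mu0 (see Section 3.1)."""
    mu, V = np.asarray(mu0, dtype=float), np.eye(x.shape[1])   # local start
    for _ in range(n_iter):
        w = multivariate_normal.pdf(x, mean=mu, cov=V) ** beta
        w = w / w.sum()
        mu = w @ x
        d = x - mu
        V = (1.0 + beta) * (d * w[:, None]).T @ d   # (1+beta): see previous sketch
    return mu, V

def sequential_local_pca(x, beta, K):
    start, results = x[0], []                       # first start: any observation
    for _ in range(K):                              # fixed K in place of a stopping rule
        mu, V = local_fit(x, beta, start)
        eigval, eigvec = np.linalg.eigh(V)          # principal vectors of the local fit
        results.append((mu, eigval[::-1], eigvec[:, ::-1]))
        w = multivariate_normal.pdf(x, mean=mu, cov=V)
        start = x[np.argmin(w)]                     # hypothetical next starting point
    return results

rng = np.random.default_rng(3)
x = np.vstack([rng.multivariate_normal([0, 0], [[3.0, 1.0], [1.0, 1.0]], 300),
               rng.multivariate_normal([9, 9], [[1.0, -0.5], [-0.5, 2.0]], 300)])
for mu, eigval, eigvec in sequential_local_pca(x, beta=1.0, K=2):
    print(mu, eigval)
```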
4. Concluding Remarks
We have focused on the fact that the optimality property of the likelihood method is fragile under model uncertainty. Such weakness frequently appears in practice when the data set comes from an observational study rather than a purely randomized experimental study. However, the usefulness of the likelihood method is still supported, as it is regarded as the most excellent method in statistics. We note that the minimum projective power divergence method reduces to the MLE in the limit of the index β to 0, since the method has one degree of freedom in the choice of β. A data-adaptive selection of β is possible by the cross-validation method; however, an appropriate model selection criterion is desired for faster computation.
Recently, novel methods for pattern recognition have been proposed in the machine learning paradigm [23,24,25]. These approaches are directly concerned with the true distribution in the framework of probably approximately correct (PAC) learning in computational learning theory. We need to employ this theory for the minimum projective power divergence method. In statistical physics there are remarkable developments on Tsallis entropy with reference to nonequilibrium states, chaotic phenomena, scale-free networks and econophysics. We should explore these developments from the statistical point of view.
Acknowledgements
We thank the anonymous referees for their useful comments and suggestions, in particular on Proposition 1.
Appendix 1
We introduce the derivation of the projective power divergence as follows. Consider the minimization with respect to the scalar multiplier as
The gradient is
which leads to . Hence
Taking the ratio as
concludes the derivation of the projective power divergence in (13).
Appendix 2
We give a proof of Proposition 1.
Proof.
By definition we get that
which implies
from (14). Similarly,
which is written as
Furthermore, (26) is rewritten as
which is
From (25) we can write
Then, we conclude that
which vanishes for any and . This completes the proof. □
Appendix 3
By writing a p-variate normal density function as
we have the formula
The proof of this formula is immediate. In fact, the left-hand side of (27) is written by
where
Hence, we get
Noting that
and
it is obtained that
because of . Therefore, (28) and (29) imply (24). □
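For the reader's convenience, two standard Gaussian identities of the kind such a proof relies on are recorded below, in our own notation; they are background facts rather than a copy of the displays (24) and (27).
$$\phi_{p}(x;\mu,V)^{\beta}=(2\pi)^{-\frac{p(\beta-1)}{2}}\,\beta^{-\frac{p}{2}}\,|V|^{-\frac{\beta-1}{2}}\,\phi_{p}\Bigl(x;\mu,\frac{V}{\beta}\Bigr),\qquad \int\phi_{p}(x;\mu_{1},V_{1})\,\phi_{p}(x;\mu_{2},V_{2})\,dx=\phi_{p}(\mu_{1};\mu_{2},V_{1}+V_{2}).$$
Combining the two gives the closed form ∫ φ_p(x; μ_k, V_k) φ_p(x; μ, V)^β dx = (2π)^{−p(β−1)/2} β^{−p/2} |V|^{−(β−1)/2} φ_p(μ_k; μ, V_k + V/β), which is the type of expression needed for the normal mixture calculation in Section 3.2.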
References
- Fisher, R.A. On the mathematical foundations of theoretical statistics. Philos. Trans. Roy. Soc. London Ser. A 1922, 222, 309–368.
- Amari, S. Lecture Notes in Statistics. In Differential-Geometrical Methods in Statistics; Springer-Verlag: New York, NY, USA, 1985; Volume 28.
- Amari, S.; Nagaoka, H. Translations of Mathematical Monographs. In Methods of Information Geometry; Oxford University Press: Oxford, UK, 2000; Volume 191.
- Akahira, M.; Takeuchi, K. Lecture Notes in Statistics. In Asymptotic Efficiency of Statistical Estimators: Concepts and Higher Order Asymptotic Efficiency; Springer-Verlag: New York, NY, USA, 1981; Volume 7.
- Box, G.E.P.; Cox, D.R. An analysis of transformations. J. R. Statist. Soc. B 1964, 26, 211–252.
- Fujisawa, H.; Eguchi, S. Robust estimation in the normal mixture model. J. Stat. Plan. Inference 2006, 136, 3989–4011.
- Minami, M.; Eguchi, S. Robust blind source separation by beta-divergence. Neural Comput. 2002, 14, 1859–1886.
- Mollah, N.H.; Minami, M.; Eguchi, S. Exploring latent structure of mixture ICA models by the minimum beta-divergence method. Neural Comput. 2006, 18, 166–190.
- Scott, D.W. Parametric statistical modeling by minimum integrated square error. Technometrics 2001, 43, 274–285.
- Eguchi, S.; Copas, J.B. A class of logistic type discriminant functions. Biometrika 2002, 89, 1–22.
- Kanamori, T.; Takenouchi, T.; Eguchi, S.; Murata, N. Robust loss functions for boosting. Neural Comput. 2007, 19, 2183–2244.
- Lebanon, G.; Lafferty, J. Boosting and maximum likelihood for exponential models. In Advances in Neural Information Processing Systems; MIT Press: New York, NY, USA, 2002; Volume 14, pp. 447–454.
- Murata, N.; Takenouchi, T.; Kanamori, T.; Eguchi, S. Information geometry of U-Boost and Bregman divergence. Neural Comput. 2004, 16, 1437–1481.
- Takenouchi, T.; Eguchi, S. Robustifying AdaBoost by adding the naive error rate. Neural Comput. 2004, 16, 767–787.
- Takenouchi, T.; Eguchi, S.; Murata, N.; Kanamori, T. Robust boosting algorithm for multiclass problem by mislabelling model. Neural Comput. 2008, 20, 1596–1630.
- Eguchi, S. Information geometry and statistical pattern recognition. Sugaku Expo. 2006, 19, 197–216.
- Basu, A.; Harris, I.R.; Hjort, N.L.; Jones, M.C. Robust and efficient estimation by minimising a density power divergence. Biometrika 1998, 85, 549–559.
- Wald, A. Note on the consistency of the maximum likelihood estimate. Ann. Math. Statist. 1949, 20, 595–601.
- Fujisawa, H.; Eguchi, S. Robust parameter estimation with a small bias against heavy contamination. J. Multivariate Anal. 2008, 99, 2053–2081.
- Hampel, F.R.; Ronchetti, E.M.; Rousseeuw, P.J.; Stahel, W.A. Robust Statistics: The Approach Based on Influence Functions; Wiley: New York, NY, USA, 2005.
- Eguchi, S.; Copas, J.B. A class of local likelihood methods and near-parametric asymptotics. J. R. Statist. Soc. B 1998, 60, 709–724.
- Mollah, N.H.; Sultana, N.; Minami, M.; Eguchi, S. Robust extraction of local structures by the minimum beta-divergence method. Neural Netw. 2010, 23, 226–238.
- Friedman, J.H.; Hastie, T.; Tibshirani, R. Additive logistic regression: A statistical view of boosting. Ann. Statist. 2000, 28, 337–407.
- Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning; Springer: New York, NY, USA, 2001.
- Schapire, R.E.; Freund, Y.; Bartlett, P.; Lee, W.S. Boosting the margin: A new explanation for the effectiveness of voting methods. Ann. Statist. 1998, 26, 1651–1686.
© 2010 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license http://creativecommons.org/licenses/by/3.0/.