Open Access
*Entropy* **2017**, *19*(7), 361; doi:10.3390/e19070361

Article

Estimating Mixture Entropy with Pairwise Distances

^{1} Santa Fe Institute, Santa Fe, NM 87501, USA

^{2} Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA

^{*} Author to whom correspondence should be addressed.

Received: 8 June 2017 / Accepted: 12 July 2017 / Published: 14 July 2017

## Abstract


Mixture distributions arise in many parametric and non-parametric settings—for example, in Gaussian mixture models and in non-parametric estimation. It is often necessary to compute the entropy of a mixture, but, in most cases, this quantity has no closed-form expression, making some form of approximation necessary. We propose a family of estimators based on a pairwise distance function between mixture components, and show that this estimator class has many attractive properties. For many distributions of interest, the proposed estimators are efficient to compute, differentiable in the mixture parameters, and become exact when the mixture components are clustered. We prove this family includes lower and upper bounds on the mixture entropy. The Chernoff $\alpha $-divergence gives a lower bound when chosen as the distance function, with the Bhattacharyya distance providing the tightest lower bound for components that are symmetric and members of a location family. The Kullback–Leibler divergence gives an upper bound when used as the distance function. We provide closed-form expressions of these bounds for mixtures of Gaussians, and discuss their applications to the estimation of mutual information. Using numeric simulations, we then demonstrate that our bounds are significantly tighter than well-known existing bounds. This estimator class is very useful in optimization problems involving maximization/minimization of entropy and mutual information, such as MaxEnt and rate distortion problems.

Keywords: mixture distribution; mixture of Gaussians; entropy estimation; MaxEnt; rate distortion

## 1. Introduction

A mixture distribution is a probability distribution whose density function is a weighted sum of individual densities. Mixture distributions are a common choice for modeling probability distributions, in both parametric settings, for example, learning a mixture of Gaussians statistical model [1], and non-parametric settings, such as kernel density estimation.

It is often necessary to compute the differential entropy [2] of a random variable with a mixture distribution, which is a measure of the inherent uncertainty in the outcome of the random variable. Entropy estimation arises in image retrieval tasks [3], image alignment and error correction [4], speech recognition [5,6], analysis of debris spread in rocket launch failures [7], and many other settings. Entropy also arises in optimization contexts [4,8,9,10], where it is minimized or maximized under some constraints (e.g., MaxEnt problems). Finally, entropy also plays a central role in minimization or maximization of mutual information, such as in problems related to rate distortion [11].

Unfortunately, in most cases, the entropy of a mixture distribution has no known closed-form expression [12]. This is true even when the entropy of each component distribution does have a known closed-form expression. For instance, the entropy of a Gaussian has a well-known form, while the entropy of a mixture of Gaussians does not [13]. As a result, the problem of finding a tractable and accurate estimate for mixture entropy has been described as “a problem of considerable current interest and practical significance” [14].

One way to approximate mixture entropy is with Monte Carlo (MC) sampling. MC sampling provides an unbiased estimate of the entropy, and this estimate can be made arbitrarily accurate by increasing the number of MC samples. Unfortunately, MC sampling is very computationally intensive: for each sample, the (log) probability of the sample location must be computed under every component in the mixture. MC sampling typically requires a large number of samples to estimate entropy well, especially in high dimensions. Sampling is thus often impractical, especially for optimization problems, where a new entropy estimate is required after every parameter change. Alternatively, it is possible to approximate entropy using numerical integration, but this is also computationally expensive and limited to low-dimensional applications [15,16].
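To make the cost of MC sampling concrete, the following minimal sketch (our own illustration, not from the paper; restricted to 1-D Gaussian components, with hypothetical function names) shows that every sample requires evaluating the density of every component:

```python
import math
import random

def mixture_logpdf(x, weights, means, sigmas):
    """Log-density of a 1-D Gaussian mixture: ln sum_i c_i N(x; mu_i, sigma_i^2)."""
    total = sum(
        c * math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
        for c, m, s in zip(weights, means, sigmas)
    )
    return math.log(total)

def mc_entropy(weights, means, sigmas, n_samples=20000, seed=0):
    """Unbiased MC estimate of H(X): sample a component, then a point,
    and average -ln p_X(x).  Each sample touches all N components."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        i = rng.choices(range(len(weights)), weights=weights)[0]
        x = rng.gauss(means[i], sigmas[i])
        acc -= mixture_logpdf(x, weights, means, sigmas)
    return acc / n_samples
```

The per-sample loop over all components is exactly the expense noted above; the analytic estimators discussed next avoid it.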

Instead of Monte Carlo sampling or numerical integration, one may use an analytic estimator of mixture entropy. Analytic estimators have estimation bias but are much more computationally efficient. There are several existing analytic estimators of entropy, discussed in-depth below. To summarize, however, commonly-used estimators have significant drawbacks: they have large bias relative to the true entropy, and/or they are invariant to the amount of “overlap” between mixture components. For example, many estimators do not depend on the locations of the means in a Gaussian mixture model.

In this paper, we introduce a novel family of estimators for the mixture entropy. Each member of this family is defined via a pairwise-distance function between component densities. The estimators in this family have several attractive properties. They are computationally efficient, as long as the pairwise-distance function and the entropy of each component distribution are easy to compute. The estimation bias of any member of this family is bounded by a constant. The estimator is continuous and smooth and is therefore useful for optimization problems. In addition, we show that when the Chernoff $\alpha $-divergence (i.e., a scaled Rényi divergence) is used as a pairwise-distance function, the corresponding estimator is a lower bound on the mixture entropy. Furthermore, among all the Chernoff $\alpha $-divergences, the Bhattacharyya distance ($\alpha =0.5$) provides the tightest lower bound when the mixture components are symmetric and belong to a location family (such as a mixture of Gaussians with equal covariances). We also show that when the Kullback–Leibler (KL) divergence is used as a pairwise-distance function, the corresponding estimator is an upper bound on the mixture entropy. Finally, our family of estimators can compute the exact mixture entropy when the component distributions are grouped into well-separated clusters, a property not shared by other analytic estimators of entropy. In particular, the bounds mentioned above converge to the same value for well-separated clusters.

The paper is laid out as follows. We first review mixture distributions and entropy estimation in Section 2. We then present the class of pairwise distance estimators in Section 3, prove bounds on the error of any estimator in this class, and show distance functions that bound the entropy as discussed above. In Section 4, we consider the special case of mixtures of Gaussians, and give explicit expressions for lower and upper bounds on the mixture entropy. When all the Gaussian components have the same covariance matrix, we show that these bounds have particularly simple expressions. In Section 5, we consider the closely related problem of estimating the mutual information between two random variables, and show that our estimators can be directly used to estimate and bound the mutual information. For the Gaussian case, these can be used to bound the mutual information across a type of additive white Gaussian noise channel. Finally, in Section 6, we run numerical experiments and compare the performance of our lower and upper bounds relative to existing estimators. We consider both mixtures of Gaussians and mixtures of uniform distributions.

## 2. Background and Definitions

We consider the differential entropy of a continuous random variable X, defined as

$$H\left(X\right):=-\int {p}_{X}\left(x\right)\mathrm{ln}{p}_{X}\left(x\right)\phantom{\rule{0.166667em}{0ex}}dx\phantom{\rule{0.166667em}{0ex}},$$

where ${p}_{X}$ is a mixture distribution,

$${p}_{X}\left(x\right)=\sum _{i=1}^{N}{c}_{i}{p}_{i}\left(x\right)\phantom{\rule{0.166667em}{0ex}},$$

and where ${c}_{i}$ indicates the weight of component i (${c}_{i}\ge 0$, ${\sum}_{i}{c}_{i}=1$) and ${p}_{i}$ the probability density of component i.

We can treat the set of component weights as the probabilities of outcomes $1\cdots N$ of a discrete random variable C, where $\mathrm{Pr}(C=i)={c}_{i}$. Consider the mixed joint distribution of the discrete random variable C and the continuous random variable X,

$${p}_{X,C}(x,i)={p}_{i}\left(x\right){c}_{i}\phantom{\rule{0.166667em}{0ex}},$$

and note the following identities for conditional and joint entropy [17],

$$H\left(X,C\right)=H\left(X|C\right)+H\left(C\right)=H\left(C|X\right)+H\left(X\right),$$

where we use H for discrete and differential entropy interchangeably. Here, the conditional entropies are defined as

$$H\left(X\right|C)=\sum _{i}{c}_{i}H\left({p}_{i}\right)\phantom{\rule{2.em}{0ex}}\mathrm{and}\phantom{\rule{2.em}{0ex}}H\left(C\right|X)=-\sum _{i}\int {p}_{X,C}(x,i)\mathrm{ln}\frac{{p}_{X,C}(x,i)}{{p}_{X}\left(x\right)}\phantom{\rule{0.166667em}{0ex}}dx\phantom{\rule{0.166667em}{0ex}}.$$

Using elementary results from information theory [2], $H\left(X\right)$ can be bounded from below by

$$H\left(X\right)\ge H\left(X\right|C)\phantom{\rule{0.222222em}{0ex}},$$

since conditioning can only decrease entropy. Similarly, $H\left(X\right)$ can be bounded from above by

$$H\left(X\right)\le H(X,C)=H\left(X\right|C)+H(C)\phantom{\rule{0.166667em}{0ex}},$$

following from $H\left(X\right)=H(X,C)-H\left(C\right|X)$ and the non-negativity of the conditional discrete entropy $H\left(C\right|X)$. This upper bound on the mixture entropy was previously proposed by Huber et al. [18].

It is easy to see that the bound in Equation (1) is tight when all the components have the same distribution, since then $H\left({p}_{X}\right)=H\left({p}_{i}\right)$ for all i. The bound in Equation (2) becomes tight when $H\left(C\right|X)=0$, i.e., when any sample from ${p}_{X}$ uniquely determines the component identity C. This occurs when the different mixture components have non-overlapping supports, ${p}_{i}\left(x\right)>0\Rightarrow {p}_{j}\left(x\right)=0$ for all x and $i\ne j$. More generally, the bound of Equation (2) becomes increasingly tight as the mixture distributions move farther apart from one another.

In the case where the entropy of each component density, $H\left({p}_{i}\right)$ for $i=1\cdots N$, has a simple closed form expression, the bounds in Equations (1) and (2) can be easily computed. However, neither bound depends on the “overlap” between components. For instance, in a Gaussian mixture model, these bounds are invariant to changes in the component means. The bounds are thus unsuitable for many problems; for instance, in optimization, one typically tunes parameters to adjust component means, but the above entropy bounds remain the same regardless of mean location.

There are two other estimators of the mixture entropy that should be mentioned. The first estimator is based on kernel density estimation [16,19]. It estimates the entropy using the mixture probability of the component means, ${\mu}_{i}$,

$${\widehat{H}}_{\mathrm{KDE}}\left(X\right):=-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{p}_{j}\left({\mu}_{i}\right)\phantom{\rule{0.166667em}{0ex}}.$$

The second estimator is a lower bound that is derived using Jensen’s inequality [2], $-\mathrm{E}\left[\mathrm{ln}f\left(X\right)\right]\ge -\mathrm{ln}\mathrm{E}\left[f\left(X\right)\right]$, giving

$$H\left(X\right):=-\int \sum _{i}{c}_{i}{p}_{i}\left(x\right)\mathrm{ln}\sum _{j}{c}_{j}{p}_{j}\left(x\right)\phantom{\rule{0.166667em}{0ex}}dx\phantom{\rule{0.166667em}{0ex}}\ge -\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}\left(\int {p}_{i}\left(x\right){p}_{j}\left(x\right)\phantom{\rule{0.166667em}{0ex}}dx\right)=:{\widehat{H}}_{\mathrm{ELK}}\phantom{\rule{0.166667em}{0ex}}.$$

In the literature, the term $\int {p}_{i}\left(x\right){p}_{j}\left(x\right)\phantom{\rule{0.166667em}{0ex}}dx$ has been referred to as the “Cross Information Potential” [20,21] and the “Expected Likelihood Kernel” [22,23] (ELK; we use this second acronym to label this estimator). When the component distributions are Gaussian, ${p}_{i}:=\mathcal{N}({\mu}_{i},{\Sigma}_{i})$, the ELK has a simple closed-form expression,

$${\widehat{H}}_{\mathrm{ELK}}\left(X\right)=-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{q}_{j,i}\left({\mu}_{i}\right)\phantom{\rule{0.166667em}{0ex}},$$

where each ${q}_{j,i}$ is a Gaussian defined as ${q}_{j,i}:=\mathcal{N}({\mu}_{j},{\Sigma}_{i}+{\Sigma}_{j})$. This lower bound was previously proposed for Gaussian mixtures in [18] and in a more general context in [12].

Both ${\widehat{H}}_{\mathrm{KDE}}$, Equation (3), and ${\widehat{H}}_{\mathrm{ELK}}$, Equation (5), are computationally efficient, continuous and differentiable, and depend on component overlap, making them suitable for optimization. However, as will be shown via numerical experiments (Section 6), they exhibit significant underestimation bias. At the same time, we will show that for Gaussian mixtures with equal covariance, ${\widehat{H}}_{\mathrm{KDE}}$ is only an additive constant away from an estimator in our proposed class.
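For concreteness, the two estimators above can be written down directly for a 1-D Gaussian mixture. The sketch below is our own (the function names `gauss_pdf`, `h_kde`, and `h_elk` are illustrative, and Equations (3) and (5) are specialized to scalar components):

```python
import math

def gauss_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def h_kde(weights, means, variances):
    """KDE-style estimator, Equation (3): -sum_i c_i ln sum_j c_j p_j(mu_i)."""
    return -sum(
        c_i * math.log(sum(c_j * gauss_pdf(m_i, m_j, v_j)
                           for c_j, m_j, v_j in zip(weights, means, variances)))
        for c_i, m_i in zip(weights, means))

def h_elk(weights, means, variances):
    """ELK lower bound, Equation (5): q_{j,i} = N(mu_j, var_i + var_j), at mu_i."""
    return -sum(
        c_i * math.log(sum(c_j * gauss_pdf(m_i, m_j, v_i + v_j)
                           for c_j, m_j, v_j in zip(weights, means, variances)))
        for c_i, m_i, v_i in zip(weights, means, variances))
```

For a single standard Gaussian, for example, `h_elk` returns $\frac{1}{2}\mathrm{ln}4\pi \approx 1.266$, below the true entropy $\frac{1}{2}\mathrm{ln}2\pi e\approx 1.419$, consistent with its role as a lower bound.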

## 3. Estimators Based on Pairwise-Distances

#### 3.1. Overview

Let $D({p}_{i}\parallel {p}_{j})$ be some (generalized) distance function between probability densities ${p}_{i}$ and ${p}_{j}$. Formally, we assume that D is a premetric, meaning that it is non-negative and $D({p}_{i}\parallel {p}_{j})=0$ if ${p}_{i}={p}_{j}$. We do not assume that D is symmetric, nor that it obeys the triangle inequality, nor that it is strictly greater than 0 when ${p}_{i}\ne {p}_{j}$.

For any allowable distance function D, we propose the following entropy estimator:

$${\widehat{H}}_{D}\left(X\right):=H\left(X\right|C)-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}\mathrm{exp}(-D({p}_{i}\parallel {p}_{j}))\phantom{\rule{0.166667em}{0ex}}.$$

This estimator can be efficiently computed if the entropy of each component and $D({p}_{i}\parallel {p}_{j})$ for all $i,j$ have simple closed-form expressions. There are many distribution-distance function pairs that satisfy these conditions (e.g., Kullback–Leibler divergence, Renyi divergences, Bregman divergences, f-divergences, etc., for Gaussian, uniform, exponential, etc.) [24,25,26,27,28].
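The estimator in Equation (6) is straightforward to implement once the component entropies and pairwise distances are available in closed form. A generic sketch (our own illustrative code, not from the paper; `dist` is any premetric supplied by the caller as a function of component indices):

```python
import math

def h_pairwise(weights, comp_entropies, dist):
    """Pairwise-distance estimator, Equation (6):
    H(X|C) - sum_i c_i ln sum_j c_j exp(-D(p_i || p_j))."""
    h_cond = sum(c * h for c, h in zip(weights, comp_entropies))  # H(X|C)
    n = len(weights)
    return h_cond - sum(
        weights[i] * math.log(
            sum(weights[j] * math.exp(-dist(i, j)) for j in range(n)))
        for i in range(n))
```

With an identically zero distance this returns $H\left(X\right|C)$, and with a very large distance between distinct components it returns $H\left(X\right|C)+H\left(C\right)$, the two extreme cases analyzed next.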

It is straightforward to show that for any D, ${\widehat{H}}_{D}$ falls between the bounds of Equations (1) and (2),

$$H\left(X\right|C)\le {\widehat{H}}_{D}\left(X\right)\le H(X,C)\phantom{\rule{0.166667em}{0ex}}.$$

To do so, consider the “smallest” and “largest” allowable distance functions,

$${D}_{\mathrm{min}}({p}_{i}\parallel {p}_{j})=0\phantom{\rule{2.em}{0ex}}\mathrm{and}\phantom{\rule{2.em}{0ex}}{D}_{\mathrm{max}}({p}_{i}\parallel {p}_{j})=\left\{\begin{array}{cc}0,\hfill & \mathrm{if}\phantom{\rule{4.pt}{0ex}}{p}_{i}={p}_{j},\hfill \\ \infty ,\hfill & \mathrm{otherwise}.\hfill \end{array}\right.$$

For any D and ${p}_{i},{p}_{j}$, ${D}_{\mathrm{min}}({p}_{i}\parallel {p}_{j})\le D({p}_{i}\parallel {p}_{j})\le {D}_{\mathrm{max}}({p}_{i}\parallel {p}_{j})$, thus

$${\widehat{H}}_{{D}_{\mathrm{min}}}\left(X\right)\le {\widehat{H}}_{D}\left(X\right)\le {\widehat{H}}_{{D}_{\mathrm{max}}}\left(X\right).$$

Plugging ${D}_{\mathrm{min}}$ into Equation (6) (noting that ${\sum}_{j}{c}_{j}=1$) gives ${\widehat{H}}_{{D}_{\mathrm{min}}}\left(X\right)=H\left(X\right|C)$, while plugging ${D}_{\mathrm{max}}$ into Equation (6) gives

$${\widehat{H}}_{{D}_{\mathrm{max}}}=H\left(X|C\right)-\sum _{i}{c}_{i}\mathrm{ln}\left({c}_{i}+\sum _{j\ne i}{c}_{j}{e}^{-{D}_{\mathrm{max}}({p}_{i}\parallel {p}_{j})}\right)\le H\left(X|C\right)-\sum _{i}{c}_{i}\mathrm{ln}{c}_{i}=H(X,C).$$

These two inequalities yield Equation (7). The true entropy, as shown in Section 2, also obeys $H\left(X|C\right)\le H\left(X\right)\le H\left(X,C\right)$. The magnitude of the bias of ${\widehat{H}}_{D}$ is thus bounded by

$$\begin{array}{cc}\hfill \left|{\widehat{H}}_{D}\left(X\right)-H\left(X\right)\right|& \le H\left(X,C\right)-H\left(X|C\right)=H\left(C\right)\phantom{\rule{0.166667em}{0ex}}.\hfill \end{array}$$

#### 3.2. Lower Bound

The “Chernoff $\alpha $-divergence” [28,29] for some real-valued $\alpha $ is defined as

$${C}_{\alpha}(p\parallel q):=-\mathrm{ln}\int {p}^{\alpha}\left(x\right){q}^{1-\alpha}\left(x\right)\phantom{\rule{0.166667em}{0ex}}dx\phantom{\rule{0.166667em}{0ex}}.$$

Note that ${C}_{\alpha}(p\parallel q)=(1-\alpha ){R}_{\alpha}(p\parallel q)$, where ${R}_{\alpha}$ is the Rényi divergence of order $\alpha $ [30].

We show that for any $\alpha \in \left[0,1\right]$, ${\widehat{H}}_{{C}_{\alpha}}\left(X\right)$ is a lower bound on the entropy; for $\alpha \notin \left[0,1\right]$, ${C}_{\alpha}$ is not a valid distance function (see Appendix A). To do so, we make use of a derivation from [31] and write,

$$\begin{array}{cc}\hfill H\left(X\right)& =H\left(X\right|C)-\int \sum _{i}{c}_{i}{p}_{i}\left(x\right)\mathrm{ln}\frac{{p}_{X}\left(x\right)}{{p}_{i}\left(x\right)}\phantom{\rule{0.166667em}{0ex}}dx\hfill \\ & =H\left(X\right|C)-\int \sum _{i}{c}_{i}{p}_{i}\left(x\right)\mathrm{ln}\frac{{p}_{X}\left(x\right)}{{{p}_{i}\left(x\right)}^{\alpha}{\sum}_{j}{c}_{j}{{p}_{j}\left(x\right)}^{1-\alpha}}\phantom{\rule{0.166667em}{0ex}}dx-\int \sum _{i}{c}_{i}{p}_{i}\left(x\right)\mathrm{ln}\left(\frac{{\sum}_{j}{c}_{j}{{p}_{j}\left(x\right)}^{1-\alpha}}{{{p}_{i}\left(x\right)}^{1-\alpha}}\right)\phantom{\rule{0.166667em}{0ex}}dx\hfill \\ & \stackrel{\left(a\right)}{\ge}H\left(X\right|C)-\int \sum _{i}{c}_{i}{p}_{i}\left(x\right)\mathrm{ln}\left(\frac{{\sum}_{j}{c}_{j}{{p}_{j}\left(x\right)}^{1-\alpha}}{{{p}_{i}\left(x\right)}^{1-\alpha}}\right)\phantom{\rule{0.166667em}{0ex}}dx\hfill \\ & \stackrel{\left(b\right)}{\ge}H\left(X\right|C)-\sum _{i}{c}_{i}\mathrm{ln}\int {{p}_{i}\left(x\right)}^{\alpha}\sum _{j}{c}_{j}{{p}_{j}\left(x\right)}^{1-\alpha}\phantom{\rule{0.166667em}{0ex}}dx\hfill \\ & =H\left(X\right|C)-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{e}^{-{C}_{\alpha}\left({p}_{i}\parallel {p}_{j}\right)}:={\widehat{H}}_{{C}_{\alpha}}\left(X\right)\phantom{\rule{0.166667em}{0ex}}.\hfill \end{array}$$

The inequalities (a) and (b) follow from Jensen’s inequality. This inequality is used directly in $\left(b\right)$, while in $\left(a\right)$ it follows from

$$\begin{array}{cc}\hfill -\int \sum _{i}{c}_{i}{p}_{i}\left(x\right)\mathrm{ln}\frac{{p}_{X}\left(x\right)}{{{p}_{i}\left(x\right)}^{\alpha}{\sum}_{j}{c}_{j}{{p}_{j}\left(x\right)}^{1-\alpha}}\phantom{\rule{0.166667em}{0ex}}dx& \ge -\mathrm{ln}\int \sum _{i}{c}_{i}{p}_{i}\left(x\right)\frac{{p}_{X}\left(x\right)}{{{p}_{i}\left(x\right)}^{\alpha}{\sum}_{j}{c}_{j}{{p}_{j}\left(x\right)}^{1-\alpha}}\phantom{\rule{0.166667em}{0ex}}dx\hfill \\ & =-\mathrm{ln}\int \sum _{i}{c}_{i}{{p}_{i}\left(x\right)}^{1-\alpha}\frac{{p}_{X}\left(x\right)}{{\sum}_{j}{c}_{j}{{p}_{j}\left(x\right)}^{1-\alpha}}\phantom{\rule{0.166667em}{0ex}}dx\hfill \\ & =-\mathrm{ln}\sum _{i}{c}_{i}\int {p}_{X}\left(x\right)\phantom{\rule{0.166667em}{0ex}}dx=0.\hfill \end{array}$$

Note that Jensen’s inequality is used in the derivations of both this lower bound as well as the lower bound ${\widehat{H}}_{\mathrm{ELK}}$ in Equation (4). However, the inequality is applied differently in the two cases, and, as will be demonstrated in Section 6, the estimators have different performance.

We have shown that using ${C}_{\alpha}$ as a distance function gives a lower bound on the mixture entropy for any $\alpha \in [0,1]$. For a general mixture distribution, one could optimize over the value of $\alpha $ to find the tightest lower bound. However, we can show that the tightest bound is achieved for $\alpha =0.5$ in the special case when all of the mixture components ${p}_{i}$ are symmetric and come from a location family,

$${p}_{i}\left(x\right)=b(x-{\mu}_{i})=b({\mu}_{i}-x)\phantom{\rule{0.166667em}{0ex}}.$$

Examples of this situation include mixtures of Gaussians with the same covariance (“homoscedastic” mixtures), multivariate t-distributions with the same covariance, location-shifted bounded uniform distributions, most kernels used in kernel density estimation, etc. It does not apply to skewed distributions, such as the skew-normal distribution [12].

To show that $\alpha =0.5$ is optimal, first define the Chernoff $\alpha $-coefficient as

$${c}_{\alpha}({p}_{i}\parallel {p}_{j}):=\int {{p}_{i}\left(x\right)}^{\alpha}{{p}_{j}\left(x\right)}^{1-\alpha}dx\phantom{\rule{0.166667em}{0ex}}.$$

We show that for any pair ${p}_{i},{p}_{j}$ of symmetric distributions from a location family, ${c}_{\alpha}({p}_{i}\parallel {p}_{j})$ is minimized by $\alpha =0.5$. This means that all pairwise distances ${C}_{\alpha}({p}_{i}\parallel {p}_{j})\equiv -\mathrm{ln}{c}_{\alpha}({p}_{i}\parallel {p}_{j})$ are maximized by $\alpha =0.5$, and, therefore, the entropy estimator ${\widehat{H}}_{{C}_{\alpha}}$ (Equation (6)) is maximized by $\alpha =0.5$.

First, define the change of variables

$$y:={\mu}_{i}+{\mu}_{j}-x\phantom{\rule{0.166667em}{0ex}},$$

which gives $x-{\mu}_{i}={\mu}_{j}-y$ and $x-{\mu}_{j}={\mu}_{i}-y$. This allows us to write the Chernoff $\alpha $-coefficient as

$$\begin{array}{cc}\hfill {c}_{\alpha}({p}_{i}\parallel {p}_{j})=& \int {{p}_{i}\left(x\right)}^{\alpha}{{p}_{j}\left(x\right)}^{1-\alpha}dx\hfill \\ \hfill =& \int {b(x-{\mu}_{i})}^{\alpha}{b(x-{\mu}_{j})}^{1-\alpha}dx\hfill \\ \hfill \stackrel{\left(a\right)}{=}& \int {b({\mu}_{j}-y)}^{\alpha}{b({\mu}_{i}-y)}^{1-\alpha}dy\hfill \\ \hfill \stackrel{\left(b\right)}{=}& \int {b(y-{\mu}_{j})}^{\alpha}{b(y-{\mu}_{i})}^{1-\alpha}dy\hfill \\ \hfill =& {c}_{1-\alpha}({p}_{i}\parallel {p}_{j}),\hfill \end{array}$$

where, in $\left(a\right)$, we have substituted variables, and in $\left(b\right)$ we used the assumption that $b\left(x\right)=b(-x)$. Since we have shown that ${c}_{\alpha}({p}_{i}\parallel {p}_{j})={c}_{1-\alpha}({p}_{i}\parallel {p}_{j})$, ${c}_{\alpha}$ is symmetric in $\alpha $ about $\alpha =0.5$. In Appendix A, we show that ${c}_{\alpha}(p\parallel q)$ is everywhere convex in $\alpha $. Together, these two facts imply that ${c}_{\alpha}({p}_{i}\parallel {p}_{j})$ achieves its minimum at $\alpha =0.5$.

The Chernoff $\alpha $-coefficient for $\alpha =0.5$ is known as the Bhattacharyya coefficient, with the corresponding Bhattacharyya distance [32] defined as

$$\begin{array}{cc}\hfill \mathrm{BD}(p\parallel q)& :=-\mathrm{ln}\int \sqrt{p\left(x\right)q\left(x\right)}dx={C}_{0.5}(p\parallel q).\hfill \end{array}$$

Since any Chernoff $\alpha $-divergence yields a lower bound on the entropy, we write the particular case of the Bhattacharyya-distance lower bound as

$$H\left(X\right)\ge {\widehat{H}}_{\mathrm{BD}}\left(X\right):=H\left(X\right|C)-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{e}^{-\mathrm{BD}({p}_{i}\parallel {p}_{j})}\phantom{\rule{0.166667em}{0ex}}.$$

#### 3.3. Upper Bound

The Kullback–Leibler (KL) divergence [2] is defined as

$$\mathrm{KL}(p\parallel q):=\int p\left(x\right)\mathrm{ln}\frac{p\left(x\right)}{q\left(x\right)}dx\phantom{\rule{0.166667em}{0ex}}.$$

Using the KL divergence as the pairwise distance provides an upper bound on the mixture entropy. We show this as follows:

$$\begin{array}{cc}\hfill H\left(X\right)& =-\sum _{i}{c}_{i}{\mathrm{E}}_{{p}_{i}}\left[\mathrm{ln}\sum _{j}{c}_{j}{p}_{j}\left(X\right)\right]\hfill \\ & \stackrel{\left(a\right)}{\le}-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{e}^{{\mathrm{E}}_{{p}_{i}}\left[\mathrm{ln}{p}_{j}\left(X\right)\right]}\hfill \\ & =-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{e}^{-H({p}_{i}\parallel {p}_{j})}\hfill \\ & =\sum _{i}{c}_{i}H\left({p}_{i}\right)-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{e}^{-\mathrm{KL}({p}_{i}\parallel {p}_{j})},\hfill \end{array}$$

where ${\mathrm{E}}_{{p}_{i}}$ indicates expectation when X is distributed according to ${p}_{i}$, $H(\cdot \parallel \cdot )$ indicates the cross-entropy function, and we employ the identity $H({p}_{i}\parallel {p}_{j})=H\left({p}_{i}\right)+\mathrm{KL}({p}_{i}\parallel {p}_{j})$. The inequality in step $\left(a\right)$ uses a variational lower bound on the expectation of a log-sum [5,33],

$$\mathrm{E}\left[\mathrm{ln}\sum _{j}{Z}_{j}\right]\ge \mathrm{ln}\sum _{j}\mathrm{exp}\left(\mathrm{E}\left[\mathrm{ln}{Z}_{j}\right]\right)\phantom{\rule{0.166667em}{0ex}}.$$

Combining yields the upper bound

$$H\left(X\right)\le {\widehat{H}}_{\mathrm{KL}}:=H\left(X\right|C)-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{e}^{-\mathrm{KL}({p}_{i}\parallel {p}_{j})}\phantom{\rule{0.166667em}{0ex}}.$$
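To make the pair of bounds concrete, consider a 1-D Gaussian mixture with shared variance ${\sigma}^{2}$, for which the standard closed forms are $\mathrm{KL}({p}_{i}\parallel {p}_{j})={({\mu}_{i}-{\mu}_{j})}^{2}/(2{\sigma}^{2})$ and $\mathrm{BD}({p}_{i}\parallel {p}_{j})={({\mu}_{i}-{\mu}_{j})}^{2}/(8{\sigma}^{2})$. The code below is our own sketch of ${\widehat{H}}_{\mathrm{BD}}$ and ${\widehat{H}}_{\mathrm{KL}}$ for this special case:

```python
import math

def gmm_entropy_bounds(weights, means, sigma):
    """Return (H_BD, H_KL): Bhattacharyya lower bound and KL upper bound
    for a 1-D Gaussian mixture whose components share variance sigma^2."""
    h_comp = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)  # H(p_i), same for all i

    def estimate(scale):
        # H(X|C) - sum_i c_i ln sum_j c_j exp(-(mu_i - mu_j)^2 / (scale * sigma^2))
        return h_comp - sum(
            c_i * math.log(sum(c_j * math.exp(-(m_i - m_j) ** 2 / (scale * sigma ** 2))
                               for c_j, m_j in zip(weights, means)))
            for c_i, m_i in zip(weights, means))

    return estimate(8.0), estimate(2.0)  # BD uses d^2/(8s^2), KL uses d^2/(2s^2)
```

For widely separated equal-weight components, both bounds approach $H\left({p}_{i}\right)+\mathrm{ln}N$, previewing the clustered-case result of the next subsection.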

#### 3.4. Exact Estimation in the “Clustered” Case

In the previous sections, we derived lower and upper bounds on the mixture entropy, using estimators based on Chernoff $\alpha $-divergence and KL divergence, respectively.

There are situations in which the lower and upper bounds become similar. Consider a pair of component distributions, ${p}_{i}$ and ${p}_{j}$. By applying Jensen’s inequality to Equation (9), we can derive the inequality ${C}_{\alpha}({p}_{i}\parallel {p}_{j})\le \alpha \mathrm{KL}({p}_{j}\parallel {p}_{i})$. There are two cases in which a pair of components contributes similarly to the lower and upper bounds. The first case is when ${C}_{\alpha}({p}_{i}\parallel {p}_{j})$ is very large, meaning that the KL divergence is also very large. By Equation (6), distances enter into our estimators as $\mathrm{exp}(-D({p}_{i}\parallel {p}_{j}))$, and, in this case, $\mathrm{exp}(-\mathrm{KL}({p}_{i}\parallel {p}_{j}))\approx \mathrm{exp}(-{C}_{\alpha}({p}_{i}\parallel {p}_{j}))\approx 0$. In the second case, $\mathrm{KL}({p}_{i}\parallel {p}_{j})\approx 0$, meaning that ${C}_{\alpha}({p}_{i}\parallel {p}_{j})$ must also be near zero, and, in this case, $\mathrm{exp}(-\mathrm{KL}({p}_{i}\parallel {p}_{j}))\approx \mathrm{exp}(-{C}_{\alpha}({p}_{i}\parallel {p}_{j}))\approx 1$. Thus, the lower and upper bounds become similar when all pairs of components are either very close together or very far apart.

In this section, we analyze this special case. Specifically, we consider the situation when mixture components are “clustered”, meaning that there is a grouping of component distributions such that distributions in the same group are approximately the same and distributions assigned to different groups are very different from one another. We show that in this case our lower and upper bounds become equal and our pairwise-distance estimate of the entropy is tight. Though this situation may seem like an edge case, clustered distributions do arise in mixture estimation, e.g., when there are repeated data points, or as solutions to information-theoretic optimization problems [11]. Note that the number of groups is arbitrary, and therefore this situation includes the extreme cases of a single group (all component distributions are nearly the same) as well as N different groups (all component distributions are very different).

Formally, let the function $g\left(i\right)$ indicate the group of component i. We say that the components are “clustered” with respect to grouping g iff $\mathrm{KL}({p}_{i}\parallel {p}_{j})\le \kappa $ whenever $g\left(i\right)=g\left(j\right)$, for some small $\kappa $, and $\mathrm{BD}({p}_{i}\parallel {p}_{j})\ge \beta $ whenever $g\left(i\right)\ne g\left(j\right)$, for some large $\beta $. We use the notation ${p}_{G}\left(k\right)={\sum}_{i}{\delta}_{g\left(i\right),k}{c}_{i}$ to indicate the sum of the weights of the components in group k, where ${\delta}_{ij}$ indicates the Kronecker delta function. For technical reasons, below we only consider ${C}_{\alpha}$ where $\alpha $ is strictly greater than 0.

We show that when $\kappa $ is small and $\beta $ is large, both ${\widehat{H}}_{{C}_{\alpha}}$ for $\alpha \in (0,1]$ and ${\widehat{H}}_{\mathrm{KL}}$ approach

$$H\left(X\right|C)-\sum _{k}{p}_{G}\left(k\right)\mathrm{ln}{p}_{G}\left(k\right)\phantom{\rule{0.166667em}{0ex}}.$$

Since one is a lower bound and one is an upper bound on the true entropy, the estimators become exact as they converge in value.

Recall that $\mathrm{BD}(p\parallel q)={C}_{0.5}(p\parallel q)$. The function ${\alpha}^{-1}{C}_{\alpha}(p\parallel q)$ is monotonically decreasing for $\alpha \in (0,1]$ [34], meaning that ${C}_{\alpha}(p\parallel q)\ge 2\alpha \mathrm{BD}(p\parallel q)$ for $\alpha \in (0,0.5]$. In addition, ${(1-\alpha )}^{-1}{C}_{\alpha}(p\parallel q)$ is monotonically increasing for $\alpha >0$ [34], thus ${C}_{\alpha}(p\parallel q)\ge 2(1-\alpha )\mathrm{BD}(p\parallel q)$ for $\alpha \in [0.5,1]$. Using the assumption that $\mathrm{BD}({p}_{i}\parallel {p}_{j})\ge \beta $ and combining gives the bound

$${C}_{\alpha}(p\parallel q)\ge (1-\left|1-2\alpha \right|)\beta $$

for $\alpha \in (0,1]$, leading to

$$\begin{array}{cc}\hfill {\widehat{H}}_{{C}_{\alpha}}\left(X\right)& :=H\left(X\right|C)-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}\mathrm{exp}(-{C}_{\alpha}({p}_{i}\parallel {p}_{j}))\hfill \\ & \ge H\left(X\right|C)-\sum _{i}{c}_{i}\mathrm{ln}\left[\sum _{j}{\delta}_{g\left(i\right),g\left(j\right)}{c}_{j}+\sum _{j}\left(1-{\delta}_{g\left(i\right),g\left(j\right)}\right){c}_{j}\mathrm{exp}(-{C}_{\alpha}({p}_{i}\parallel {p}_{j}))\right]\hfill \\ & \ge H\left(X\right|C)-\sum _{k}{p}_{G}\left(k\right)\mathrm{ln}\left[{p}_{G}\left(k\right)+(1-{p}_{G}\left(k\right))\mathrm{exp}(-(1-\left|1-2\alpha \right|)\beta )\right]\phantom{\rule{0.166667em}{0ex}}.\hfill \end{array}$$

In the second line, for the summation over $i,j$ in the same group, we used the non-negativity of ${C}_{\alpha}$, i.e., $\mathrm{exp}(-{C}_{\alpha}({p}_{i}\parallel {p}_{j}))\le 1$.

For the upper bound ${\widehat{H}}_{\mathrm{KL}}$, we use that $\mathrm{KL}({p}_{i}\parallel {p}_{j})\le \kappa $ for i and j in the same group, and otherwise $\mathrm{exp}(-\mathrm{KL}({p}_{i}\parallel {p}_{j}))\ge 0$. This gives the bound

$$\begin{array}{cc}\hfill {\widehat{H}}_{\mathrm{KL}}\left(X\right)& :=H\left(X\right|C)-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}\mathrm{exp}(-\mathrm{KL}({p}_{i}\parallel {p}_{j}))\hfill \\ & \le H\left(X\right|C)-\sum _{i}{c}_{i}\mathrm{ln}\left[\sum _{j}{\delta}_{g\left(i\right),g\left(j\right)}{c}_{j}{e}^{-\kappa}+\sum _{j}\left(1-{\delta}_{g\left(i\right),g\left(j\right)}\right){c}_{j}\mathrm{exp}(-\mathrm{KL}({p}_{i}\parallel {p}_{j}))\right]\hfill \\ & \le H\left(X\right|C)-\sum _{k}{p}_{G}\left(k\right)\mathrm{ln}{p}_{G}\left(k\right){e}^{-\kappa}\phantom{\rule{0.166667em}{0ex}}.\hfill \end{array}$$

The difference between the bounds is bounded by

$$\begin{array}{cc}\hfill {\widehat{H}}_{\mathrm{KL}}\left(X\right)-{\widehat{H}}_{{C}_{\alpha}}\left(X\right)& \le \kappa +\sum _{k}{p}_{G}\left(k\right)\mathrm{ln}\left[1+\frac{(1-{p}_{G}\left(k\right))\mathrm{exp}(-(1-\left|1-2\alpha \right|)\beta )}{{p}_{G}\left(k\right)}\right]\hfill \\ & \le \kappa +\sum _{k}{p}_{G}\left(k\right)\frac{(1-{p}_{G}\left(k\right))\mathrm{exp}(-(1-\left|1-2\alpha \right|)\beta )}{{p}_{G}\left(k\right)}\hfill \\ & =\kappa +\left(\left|G\right|-1\right)\mathrm{exp}(-(1-\left|1-2\alpha \right|)\beta )\phantom{\rule{0.166667em}{0ex}},\hfill \end{array}$$

where $\left|G\right|$ is the number of groups. Thus, the difference decreases at least linearly in $\kappa $ and exponentially in $\beta $. This shows that, in the clustered case, when $\kappa \approx 0$ and $\beta $ is very large, our lower and upper bounds become exact.

This argument also shows that any distance measure bounded between BD and KL gives an exact estimate of the entropy in the clustered case. Furthermore, the idea behind this proof can be extended to estimators induced by other bounding distances, beyond BD and KL, so as to show that a particular estimator converges to an exact entropy estimate in the clustered case. Note, however, that, for some distribution–distance pairs, the components will never be considered “clustered”; e.g., the Chernoff $\alpha $-divergence for $\alpha =0$ between any two Gaussians is 0, and so a Gaussian mixture distribution will never be considered clustered according to this distance.

Finally, in the perfectly clustered case, we can show that our lower bound, ${\widehat{H}}_{\mathrm{BD}}$, is at least as good as the Expected Likelihood Kernel lower bound, ${\widehat{H}}_{\mathrm{ELK}}$, as defined in Equation (4). See Appendix B for details.

## 4. Gaussian Mixtures

Gaussians are very frequently used as components in mixture distributions. Our family of estimators is well-suited to estimating the entropies of Gaussian mixtures, since the entropy of a d-dimensional Gaussian ${p}_{i}=\mathcal{N}\left({\mu}_{i},{\mathrm{\Sigma}}_{i}\right)$ has the simple closed-form expression

$$H\left({p}_{i}\right)=\frac{1}{2}\left[\mathrm{ln}\left|{\mathrm{\Sigma}}_{i}\right|+d\mathrm{ln}2\pi +d\right]\phantom{\rule{0.166667em}{0ex}},$$

and because there are many distance functions between Gaussians with closed-form expressions (the KL divergence, the Chernoff $\alpha $-divergences [35], the 2-Wasserstein distance [36,37], etc.). In this section, we consider Gaussian mixtures and state explicit expressions for the lower and upper bounds on the mixture entropy derived in the previous section. We also consider these bounds in the special case where all Gaussian components have the same covariance matrix (homoscedastic mixtures).
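This closed form is straightforward to implement; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def gaussian_entropy(cov):
    # Differential entropy of N(mu, cov), in nats; the mean does not enter
    d = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (logdet + d * np.log(2 * np.pi) + d)

# 1-d check against the familiar formula H = 0.5 ln(2 pi e sigma^2)
h1 = gaussian_entropy(np.array([[2.0]]))
```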

We first consider the lower bound, ${\widehat{H}}_{{C}_{\alpha}}$, based on the Chernoff $\alpha $-divergence distance function. For two multivariate Gaussians ${p}_{1}\sim \mathcal{N}\left({\mu}_{1},{\mathrm{\Sigma}}_{1}\right)$ and ${p}_{2}\sim \mathcal{N}\left({\mu}_{2},{\mathrm{\Sigma}}_{2}\right)$, this distance is defined as [35]:

$${C}_{\alpha}({p}_{1}\parallel {p}_{2})=\frac{\left(1-\alpha \right)\alpha}{2}{({\mu}_{1}-{\mu}_{2})}^{T}{\left((1-\alpha ){\mathrm{\Sigma}}_{1}+\alpha {\mathrm{\Sigma}}_{2}\right)}^{-1}({\mu}_{1}-{\mu}_{2})+\frac{1}{2}\mathrm{ln}\left(\frac{\left|(1-\alpha ){\mathrm{\Sigma}}_{1}+\alpha {\mathrm{\Sigma}}_{2}\right|}{{\left|{\mathrm{\Sigma}}_{1}\right|}^{1-\alpha}{\left|{\mathrm{\Sigma}}_{2}\right|}^{\alpha}}\right)\phantom{\rule{0.166667em}{0ex}}.$$

(As a warning, note that most sources show erroneous expressions for the Chernoff and/or Rényi $\alpha $-divergence between two multivariate Gaussians, including [27,29,38,39,40], and even a late draft of this manuscript.)

For the upper bound ${\widehat{H}}_{\mathrm{KL}}$, the KL divergence between two multivariate Gaussians ${p}_{1}\sim \mathcal{N}\left({\mu}_{1},{\mathrm{\Sigma}}_{1}\right)$ and ${p}_{2}\sim \mathcal{N}\left({\mu}_{2},{\mathrm{\Sigma}}_{2}\right)$ is

$$\begin{array}{cc}\hfill \mathrm{KL}({p}_{1}\parallel {p}_{2})& =\frac{1}{2}\left[\mathrm{ln}\left|{\mathrm{\Sigma}}_{2}\right|-\mathrm{ln}\left|{\mathrm{\Sigma}}_{1}\right|+{\left({\mu}_{1}-{\mu}_{2}\right)}^{T}{\mathrm{\Sigma}}_{2}^{-1}\left({\mu}_{1}-{\mu}_{2}\right)+\mathrm{tr}\left({\mathrm{\Sigma}}_{2}^{-1}{\mathrm{\Sigma}}_{1}\right)-d\right]\phantom{\rule{0.166667em}{0ex}}.\hfill \end{array}$$

The appropriate lower and upper bounds are found by plugging Equations (14) and (15) into Equation (6).
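Putting these pieces together, a minimal numpy sketch of the resulting bounds (the function names are ours):

```python
import numpy as np

def chernoff_alpha(mu1, cov1, mu2, cov2, alpha=0.5):
    # Chernoff alpha-divergence between two Gaussians (Equation (14));
    # alpha = 0.5 gives the Bhattacharyya distance
    S = (1 - alpha) * cov1 + alpha * cov2
    dm = mu1 - mu2
    quad = 0.5 * (1 - alpha) * alpha * dm @ np.linalg.solve(S, dm)
    ldS = np.linalg.slogdet(S)[1]
    ld1 = np.linalg.slogdet(cov1)[1]
    ld2 = np.linalg.slogdet(cov2)[1]
    return quad + 0.5 * (ldS - (1 - alpha) * ld1 - alpha * ld2)

def kl_gauss(mu1, cov1, mu2, cov2):
    # KL(p1 || p2) between two Gaussians (Equation (15))
    d = len(mu1)
    dm = mu1 - mu2
    inv2 = np.linalg.inv(cov2)
    ld1 = np.linalg.slogdet(cov1)[1]
    ld2 = np.linalg.slogdet(cov2)[1]
    return 0.5 * (ld2 - ld1 + dm @ inv2 @ dm + np.trace(inv2 @ cov1) - d)

def pairwise_entropy_bounds(c, mus, covs):
    # H_hat_D = H(X|C) - sum_i c_i ln sum_j c_j exp(-D_ij), with
    # D = Bhattacharyya (lower bound) and D = KL (upper bound)
    hxc = sum(ci * 0.5 * (np.linalg.slogdet(S)[1] + len(m) * np.log(2 * np.pi) + len(m))
              for ci, m, S in zip(c, mus, covs))
    def estimate(dist):
        return hxc - sum(
            ci * np.log(sum(cj * np.exp(-dist(mus[i], covs[i], mus[j], covs[j]))
                            for j, cj in enumerate(c)))
            for i, ci in enumerate(c))
    return estimate(chernoff_alpha), estimate(kl_gauss)

# two well-separated components: both bounds approach H(X|C) + ln 2
c = [0.5, 0.5]
mus = [np.zeros(2), np.array([100.0, 0.0])]
covs = [np.eye(2), np.eye(2)]
h_low, h_up = pairwise_entropy_bounds(c, mus, covs)
```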

These bounds have simple forms when all of the mixture components have equal covariance matrices; i.e., ${\mathrm{\Sigma}}_{i}=\mathrm{\Sigma}$ for all i. First, define a transformation in which each Gaussian component ${p}_{j}$ is mapped to a different Gaussian ${\tilde{p}}_{j,\alpha}$, which has the same mean but where the covariance matrix is rescaled by $\frac{1}{\alpha (1-\alpha )}$,

$${p}_{j}:=\mathcal{N}\left({\mu}_{j},\mathrm{\Sigma}\right)\phantom{\rule{1.em}{0ex}}\mapsto \phantom{\rule{1.em}{0ex}}{\tilde{p}}_{j,\alpha}:=\mathcal{N}\left({\mu}_{j},\frac{1}{\alpha (1-\alpha )}\mathrm{\Sigma}\right)\phantom{\rule{0.166667em}{0ex}}.$$

Then, the lower bound of Equation (10) can be written as

$${\widehat{H}}_{{C}_{\alpha}}=\frac{d}{2}+\frac{d}{2}\mathrm{ln}\left(\alpha (1-\alpha )\right)-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{\tilde{p}}_{j,\alpha}\left({\mu}_{i}\right)\phantom{\rule{0.166667em}{0ex}}.$$

This is derived by combining the expressions for ${C}_{\alpha}$, Equation (14), the entropy of a Gaussian, Equation (13), and the Gaussian density function. For a homoscedastic mixture, the tightest lower bound among the Chernoff $\alpha $-divergences is given by $\alpha =0.5$, corresponding to the Bhattacharyya distance,

$${\widehat{H}}_{\mathrm{BD}}=\frac{d}{2}+\frac{d}{2}\mathrm{ln}\frac{1}{4}-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{\tilde{p}}_{j,0.5}\left({\mu}_{i}\right)\phantom{\rule{0.166667em}{0ex}}.$$

(This is derived above in Section 3.2.)
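The equivalence between this rescaled-density form and the general pairwise estimator can be checked numerically; a minimal numpy sketch (the setup and names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 2, 4
mus = rng.normal(size=(n, d))
Sigma = np.eye(d)          # shared (homoscedastic) covariance
c = np.full(n, 1.0 / n)

def npdf(x, mu, cov):
    # multivariate Gaussian density
    dm = x - mu
    ld = np.linalg.slogdet(cov)[1]
    return np.exp(-0.5 * (dm @ np.linalg.solve(cov, dm)
                          + ld + len(mu) * np.log(2 * np.pi)))

# rescaled-density form: alpha = 0.5 rescales the covariance by 4
h_bd_tilde = (d / 2 + (d / 2) * np.log(0.25)
              - sum(ci * np.log(sum(cj * npdf(mus[i], mus[j], 4 * Sigma)
                                    for j, cj in enumerate(c)))
                    for i, ci in enumerate(c)))

# general pairwise form: H(X|C) - sum_i c_i ln sum_j c_j exp(-BD_ij), with
# BD_ij = (1/8)(mu_i - mu_j)^T Sigma^{-1} (mu_i - mu_j) for shared Sigma
hxc = 0.5 * (np.linalg.slogdet(Sigma)[1] + d * np.log(2 * np.pi) + d)
bd = lambda i, j: 0.125 * (mus[i] - mus[j]) @ np.linalg.solve(Sigma, mus[i] - mus[j])
h_bd = hxc - sum(ci * np.log(sum(cj * np.exp(-bd(i, j)) for j, cj in enumerate(c)))
                 for i, ci in enumerate(c))
```

The two expressions agree to floating-point precision.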

For the upper bound, when all Gaussians have the same covariance matrix, we again combine the expressions for KL, Equation (15), the entropy of a Gaussian, Equation (13), and the Gaussian density function to give

$${\widehat{H}}_{\mathrm{KL}}\left(X\right)=\frac{d}{2}-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{p}_{j}\left({\mu}_{i}\right)\phantom{\rule{0.166667em}{0ex}}.$$

Note that this is exactly the expression for the kernel density estimator ${\widehat{H}}_{\mathrm{KDE}}$ (Equation (3)), plus a dimensional correction. Thus, surprisingly, ${\widehat{H}}_{\mathrm{KDE}}$ is a reasonable entropy estimator for homoscedastic Gaussian mixtures, since it is only an additive constant away from the KL-divergence-based estimator ${\widehat{H}}_{\mathrm{KL}}$ (which has various beneficial properties, as described above). This may explain why ${\widehat{H}}_{\mathrm{KDE}}$ has been used effectively in optimization contexts [4,8,9,10], where the additive constant is often irrelevant, despite lacking a principled justification as a bound on entropy.
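The $d/2$ offset between ${\widehat{H}}_{\mathrm{KL}}$ and ${\widehat{H}}_{\mathrm{KDE}}$ can be confirmed numerically; a minimal numpy sketch under the homoscedastic assumption (the setup is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 5
mus = rng.normal(size=(n, d))
cov = np.eye(d)            # homoscedastic: one shared covariance
c = np.full(n, 1.0 / n)

def gauss_pdf(x, mu, cov):
    dm = x - mu
    ld = np.linalg.slogdet(cov)[1]
    return np.exp(-0.5 * (dm @ np.linalg.solve(cov, dm)
                          + ld + len(mu) * np.log(2 * np.pi)))

# H_hat_KDE = -sum_i c_i ln sum_j c_j p_j(mu_i)   (Equation (3))
h_kde = -sum(ci * np.log(sum(cj * gauss_pdf(mus[i], mus[j], cov)
                             for j, cj in enumerate(c)))
             for i, ci in enumerate(c))

# H_hat_KL = H(X|C) - sum_i c_i ln sum_j c_j exp(-KL_ij), with
# KL_ij = 0.5 (mu_i - mu_j)^T cov^{-1} (mu_i - mu_j) for shared cov
hxc = 0.5 * (np.linalg.slogdet(cov)[1] + d * np.log(2 * np.pi) + d)
kl = lambda i, j: 0.5 * (mus[i] - mus[j]) @ np.linalg.solve(cov, mus[i] - mus[j])
h_kl = hxc - sum(ci * np.log(sum(cj * np.exp(-kl(i, j)) for j, cj in enumerate(c)))
                 for i, ci in enumerate(c))

gap = h_kl - h_kde         # should be exactly d/2
```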

## 5. Estimating Mutual Information

It is often of interest, for example in rate distortion and related problems [11], to calculate the mutual information across a communication channel,

$$MI(X;U)=H\left(X\right)-H\left(X\right|U),$$

where U is the distribution of signals sent across the channel, and X is the distribution of messages received on the other end of the channel. As with mixture distributions, it is often easy to compute $H\left(X\right|U)$, the entropy of the received signal given the sent signal (i.e., the distribution of noise on the channel). The marginal entropy of the received signals, $H\left(X\right)$, on the other hand, is often difficult to compute.

In some cases, the distribution of U may be well approximated by a mixture model. In this case, we can estimate the entropy of the received signals, $H\left(X\right)$, using our pairwise distance estimators, as discussed in Section 3. In particular, we have the lower bound

$$MI(X;U)\ge -\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}\mathrm{exp}(-{C}_{\alpha}({p}_{i}\parallel {p}_{j})),$$

and the upper bound,

$$MI(X;U)\le -\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}\mathrm{exp}(-\mathrm{KL}({p}_{i}\parallel {p}_{j})),$$

where ${p}_{i}$ is the density of component i; note that the $H\left(X\right|U)$ terms cancel in these expressions.

This also shows that the pairwise portion of the estimator, $-{\sum}_{i}{c}_{i}\mathrm{ln}{\sum}_{j}{c}_{j}\mathrm{exp}(-D({p}_{i}\parallel {p}_{j}))$, is a measure of the mutual information between the random variable specifying the component identity and the random variable distributed as the mixture of the component densities. If the components are identical, this mutual information is zero, since knowing the component identity tells one nothing about the outcome of X. On the other hand, when all of the components are very different from one another, knowing which component generated X is very informative, giving the maximum amount of information, $H\left(C\right)$.

As a practical example, consider a scenario in which U is a random variable representing the outside temperature on any particular day. This temperature is measured with a thermometer with Gaussian measurement noise (the “additive white Gaussian noise channel”). This gives the measurement distribution

$$X=U+\mathcal{N}\left(0,{\mathrm{\Sigma}}^{\prime}\right).$$

If the actual temperature U is (approximately or exactly) distributed as a mixture of M Gaussians, each one having mixture weight ${c}_{i}$, mean ${\mu}_{i}$, and covariance matrix ${\mathrm{\Sigma}}_{i}$, then X will also be distributed as a mixture of M Gaussians, each with weight ${c}_{i}$, mean ${\mu}_{i}$, and covariance matrix ${\mathrm{\Sigma}}_{i}^{\prime}:={\mathrm{\Sigma}}_{i}+{\mathrm{\Sigma}}^{\prime}$. We can then use our estimators to estimate the mutual information between the actual temperature, U, and the thermometer measurements, X.
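A 1-d numpy sketch of this construction, with illustrative weights, means, and variances of our choosing:

```python
import numpy as np

# Temperature prior U: a two-component Gaussian mixture; the thermometer adds
# noise of variance s2n, so X is a mixture with the same weights and means
# and component variances s2 + s2n.
c = np.array([0.3, 0.7])
mu = np.array([-5.0, 5.0])
s2 = np.array([1.0, 2.0])
s2n = 0.5
s2x = s2 + s2n

def chernoff(i, j, a=0.5):
    # Chernoff alpha-divergence (Equation (14)), scalar case;
    # a = 0.5 gives the Bhattacharyya distance
    S = (1 - a) * s2x[i] + a * s2x[j]
    return (0.5 * a * (1 - a) * (mu[i] - mu[j]) ** 2 / S
            + 0.5 * np.log(S / (s2x[i] ** (1 - a) * s2x[j] ** a)))

def kl(i, j):
    # KL divergence (Equation (15)), scalar case
    return 0.5 * (np.log(s2x[j] / s2x[i]) + (mu[i] - mu[j]) ** 2 / s2x[j]
                  + s2x[i] / s2x[j] - 1.0)

def pairwise(dist):
    # the pairwise term bounding MI(X;U); the H(X|U) terms have cancelled
    return -sum(c[i] * np.log(sum(c[j] * np.exp(-dist(i, j))
                                  for j in range(len(c))))
                for i in range(len(c)))

mi_lower = pairwise(chernoff)
mi_upper = pairwise(kl)
```

Both bounds lie between 0 and $H\left(C\right)$, as they must for a mutual information with a two-valued component variable.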

## 6. Numerical Results

In this section, we run numerical experiments and compare estimators of mixture entropy under a variety of conditions. We consider two different types of mixtures, mixtures of Gaussians and mixtures of uniform distributions, for a variety of parameter values. We evaluate the following estimators:

- The true entropy, $H\left(X\right)$, as estimated by Monte Carlo sampling of the mixture model. We used 2000 samples for each MC estimate for the mixtures of Gaussians, and 5000 samples for the mixtures of uniform distributions.
- Our proposed upper-bound, based on the KL divergence, ${\widehat{H}}_{\mathrm{KL}}$ (Equation (12))
- Our proposed lower-bound, based on the Bhattacharyya distance, ${\widehat{H}}_{\mathrm{BD}}$ (Equation (11))
- The kernel density estimate based on the component means, ${\widehat{H}}_{\mathrm{KDE}}$ (Equation (3))
- The lower bound based on the “Expected Likelihood Kernel”, ${\widehat{H}}_{\mathrm{ELK}}$ (Equation (4))
- The lower bound based on the conditional entropy, $H\left(X\right|C)$ (Equation (1))
- The upper bound based on the joint entropy, $H(X,C)$ (Equation (2)).

We show the values of estimators 1–5 as line plots, while the region between the conditional entropy (6) and the joint entropy (7) is shown shaded in green. The code for these figures can be found at [41], and uses the Gonum numeric library [42].
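The Monte Carlo baseline can be sketched as follows (our own implementation, not the Gonum code of [41]):

```python
import numpy as np

def mc_mixture_entropy(c, mus, covs, n_samples=2000, seed=0):
    # Monte Carlo estimate of H(X) = -E[ln p_X(X)] for a Gaussian mixture:
    # sample a component, sample a point, average the negative log-density
    rng = np.random.default_rng(seed)
    c = np.asarray(c, dtype=float)
    d = len(mus[0])
    comp = rng.choice(len(c), size=n_samples, p=c)
    total = 0.0
    for k in comp:
        x = rng.multivariate_normal(mus[k], covs[k])
        dens = 0.0
        for ck, mk, Sk in zip(c, mus, covs):
            dm = x - mk
            ld = np.linalg.slogdet(Sk)[1]
            dens += ck * np.exp(-0.5 * (dm @ np.linalg.solve(Sk, dm)
                                        + ld + d * np.log(2 * np.pi)))
        total -= np.log(dens)
    return total / n_samples

# sanity check on a single Gaussian, whose entropy is known in closed form
h_mc = mc_mixture_entropy([1.0], [np.zeros(1)], [np.eye(1)])
```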

#### 6.1. Mixture of Gaussians

In the first experiment, we evaluate the estimators on a mixture of randomly placed Gaussians, and look at their behavior as the distance between the means of the Gaussians increases. The mixture is composed of 100 10-dimensional Gaussians, each distributed as ${p}_{i}=\mathcal{N}\left({\mu}_{i},{\mathit{I}}_{\left(10\right)}\right)$, where ${\mathit{I}}_{\left(d\right)}$ indicates the $d\times d$ identity matrix. Means are sampled from ${\mu}_{i}\sim \mathcal{N}(0,\sigma {\mathit{I}}_{\left(10\right)})$. Figure 1A depicts the change in estimated entropy as the means grow farther apart, plotted as a function of $\mathrm{ln}\left(\sigma \right)$. We see that our proposed bounds are closer to the true entropy than the other estimators over the whole range of $\sigma $ values, and in the extremes, our bounds approach the exact value of the true entropy. This is as expected, since as $\sigma \to 0$ all of the Gaussian mixture components become identical, and as $\sigma \to \infty $ all of the Gaussian components grow very far apart, approaching the case where each Gaussian is in its own “cluster”. The ELK lower bound is a strictly worse estimate than ${\widehat{H}}_{\mathrm{BD}}$ in this experiment. As expected, the KDE estimator differs by exactly $d/2$ from the KL estimator.

In the second experiment, we evaluate the entropy estimators as the covariance matrices change from less to more similar. We again generate 100 10-dimensional Gaussians. Each Gaussian is distributed as ${p}_{i}=\mathcal{N}\left({\mu}_{i},{\mathrm{\Sigma}}_{i}\right)$, where now ${\mu}_{i}\sim \mathcal{N}(0,{\mathit{I}}_{\left(10\right)})$ and ${\mathrm{\Sigma}}_{i}\sim \mathcal{W}(\frac{1}{10+n}{\mathit{I}}_{\left(10\right)},n)$, where $\mathcal{W}(\mathit{V},n)$ is a Wishart distribution with scale-matrix $\mathit{V}$ and n degrees of freedom. Figure 1B compares the estimators with the true entropy as a function of $\mathrm{ln}\left(n\right)$. When n is small, the Wishart distribution is broad and the covariance matrices differ significantly from one another, while as $n\to \infty $, all the covariance matrices become close to the identity ${\mathit{I}}_{\left(10\right)}$. Thus, for small n, we essentially recover a “clustered” case, in which every component is in its own cluster and our lower and upper bounds give highly accurate estimates. For large n, we converge to the $\sigma =1$ case of the first experiment.

In the third experiment, we again generate a mixture of 100 10-dimensional Gaussians. Now, however, the Gaussians are grouped into five “clusters”, with each Gaussian component randomly assigned to one of the clusters. We use $g\left(i\right)\in \left\{1\cdots 5\right\}$ to indicate the group of each Gaussian component $i\in \left\{1\cdots 100\right\}$, and each of the 100 Gaussians is distributed as ${p}_{i}=\mathcal{N}({\tilde{\mu}}_{g\left(i\right)},{\mathit{I}}_{\left(10\right)})$. The cluster centers ${\tilde{\mu}}_{k}$ for $k\in \left\{1\cdots 5\right\}$ are drawn from $\mathcal{N}(0,\sigma {\mathit{I}}_{\left(10\right)})$. The results are depicted in Figure 1C as a function of $\mathrm{ln}\left(\sigma \right)$. In the first experiment, we saw that the joint entropy $H(X,C)$ became an increasingly better estimator as the Gaussians grew increasingly far apart. Here, however, we see that there is a significant difference between $H(X,C)$ and the true entropy, even as the groups become increasingly separated. Our proposed bounds, on the other hand, provide accurate estimates of the entropy across the entire parameter sweep. As expected, they become exact in the limit when all clusters are at the same location, as well as when all clusters are very far apart from each other.

Finally, we evaluate the entropy estimators while changing the dimension of the Gaussian components. We again generate 100 Gaussian components, each distributed as ${p}_{i}=\mathcal{N}({\mu}_{i},{\mathit{I}}_{\left(d\right)})$, with ${\mu}_{i}\sim \mathcal{N}(0,\sigma {\mathit{I}}_{\left(d\right)})$. We vary the dimensionality d from 1 to 60. The results are shown in Figure 1D. First, we see that when $d=1$, the KDE estimator and the KL-divergence based estimator give a very similar prediction (differing only by $0.5$), but as the dimension increases, the two estimates diverge at a rate of $d/2$. Similarly, ${\widehat{H}}_{\mathrm{ELK}}$ grows increasingly less accurate as the dimension increases. Our proposed lower and upper bounds provide good estimates of the mixture entropy across the whole sweep of dimensions.

As previously mentioned, our lower and upper bounds tend to perform best at the “extremes” and worse in the intermediate regimes. In particular, in Figure 1A,C,D, the distances between component means increase from left to right. On the left hand side of these figures, all of the component means are close and the component distributions overlap, as evidenced by the fact that the mixture entropy is $\approx H\left(X\right|C)$, i.e., $I(X;C)\approx 0$. In this regime, there is essentially a single “cluster”, and our bounds become tight (see Section 3.4). On the right hand side of these figures, the components’ means are all far apart from each other, and the mixture entropy $\approx H(X,C)$, i.e., $I(X;C)\approx H\left(C\right)$ (in Figure 1C, it is the five clusters that become far apart, and the mixture entropy $\approx H\left(X\right|C)+\mathrm{ln}5$). In this regime, where there are many well-separated clusters, our bounds again become tight. In between these two extremes, however, there is no clear clustering of the mixture components, and the entropy bounds are not as tight.

As noted in the previous paragraph, the extremes in three out of the four subfigures approach the perfectly clustered case. In this situation, we show in Appendix B that the BD-based estimator is a better bound on the true entropy than the Expected Likelihood Kernel estimator. We see confirmation of this in the experimental results, where ${\widehat{H}}_{\mathrm{ELK}}$ performs worse than the pairwise-distance based estimators.

#### 6.2. Mixture of Uniforms

In the second set of experiments, we consider a mixture of uniform distributions. Unlike Gaussians, uniform distributions are bounded within a hyper-rectangle and do not have full support over the domain. In particular, a uniform distribution $p=\mathcal{U}(a,b)$ over d dimensions is defined as

$$p\left(x\right)\propto \left\{\begin{array}{cc}1,\hfill & \mathrm{if}\phantom{\rule{4.pt}{0ex}}{x}_{i}\in [{a}_{i},{b}_{i}]\phantom{\rule{0.277778em}{0ex}}\forall i=1\cdots d,\hfill \\ 0,\hfill & \mathrm{otherwise},\hfill \end{array}\right.$$

where x, a, and b are d-dimensional vectors, and the subscript ${x}_{i}$ refers to the value of x in dimension i. Note that when ${p}_{X}$ is a mixture of uniforms, there can be significant regions where ${p}_{X}\left(x\right)>0$ but ${p}_{i}\left(x\right)=0$ for some i.

Here, we list the formulae for the pairwise distance measures between uniform distributions. In the following, we use ${V}_{i}:={\int}_{x}1\{{p}_{i}\left(x\right)>0\}dx$ to indicate the “volume” of distribution ${p}_{i}$. Uniform components have a constant $p\left(x\right)$ over their support, and so ${p}_{i}\left(x\right)=1/{V}_{i}$ for all x where ${p}_{i}\left(x\right)>0$. Similarly, we use ${V}_{i\cap j}$ to denote the “volume of overlap” between ${p}_{i}$ and ${p}_{j}$, i.e., the volume of the intersection of the supports of ${p}_{i}$ and ${p}_{j}$, ${V}_{i\cap j}:={\int}_{x}1\{{p}_{i}\left(x\right)>0\}1\{{p}_{j}\left(x\right)>0\}dx$. The distance measures between uniforms are then

$$\begin{array}{cc}\hfill H\left({p}_{i}\right)& =\mathrm{ln}{V}_{i},\hfill \\ \hfill \mathrm{KL}({p}_{i}\parallel {p}_{j})& =\left\{\begin{array}{cc}\mathrm{ln}({V}_{j}/{V}_{i}),\hfill & \mathrm{supp}\phantom{\rule{4.pt}{0ex}}{p}_{i}\subseteq \mathrm{supp}\phantom{\rule{4.pt}{0ex}}{p}_{j},\hfill \\ \infty ,\hfill & \mathrm{otherwise},\hfill \end{array}\right.\hfill \end{array}$$

$$\begin{array}{cc}\hfill \mathrm{BD}({p}_{i}\parallel {p}_{j})& =0.5\mathrm{ln}{V}_{i}+0.5\mathrm{ln}{V}_{j}-\mathrm{ln}{V}_{i\cap j},\hfill \end{array}$$

$$\begin{array}{cc}\hfill {\widehat{H}}_{\mathrm{ELK}}& =-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}\frac{{V}_{i\cap j}}{{V}_{i}{V}_{j}}\phantom{\rule{0.166667em}{0ex}}.\hfill \end{array}$$
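For axis-aligned boxes, these volumes and distances reduce to coordinate-wise computations; a minimal numpy sketch (the function names are ours):

```python
import numpy as np

def box_volume(a, b):
    # volume of the axis-aligned box [a, b]
    return float(np.prod(np.maximum(b - a, 0.0)))

def overlap_volume(a1, b1, a2, b2):
    # V_{i∩j}: volume of the intersection of two axis-aligned boxes
    return box_volume(np.maximum(a1, a2), np.minimum(b1, b2))

def kl_uniform(a1, b1, a2, b2):
    # ln(V_j / V_i) if supp(p_i) is contained in supp(p_j); infinite otherwise
    if np.all(a2 <= a1) and np.all(b1 <= b2):
        return np.log(box_volume(a2, b2) / box_volume(a1, b1))
    return np.inf

def bd_uniform(a1, b1, a2, b2):
    # BD = 0.5 ln V_i + 0.5 ln V_j - ln V_{i∩j}; infinite for disjoint supports
    vo = overlap_volume(a1, b1, a2, b2)
    if vo == 0.0:
        return np.inf
    return (0.5 * np.log(box_volume(a1, b1))
            + 0.5 * np.log(box_volume(a2, b2)) - np.log(vo))

# two unit squares shifted by 0.5 in each coordinate: BD is finite (ln 4)
# while the KL between them is infinite, as discussed in Section 6.2
a1, b1 = np.zeros(2), np.ones(2)
a2, b2 = a1 + 0.5, b1 + 0.5
```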

As in the Gaussian case, we run four different computational experiments and compare the mixture entropy estimates to the true entropy, as determined by Monte Carlo sampling.

In the first experiment, the mixture consists of 100 10-dimensional uniform components, with ${p}_{i}=\mathcal{U}({\mu}_{i}-{\mathbf{1}}_{\left(10\right)},{\mu}_{i}+{\mathbf{1}}_{\left(10\right)})$ and ${\mu}_{i}\sim \mathcal{N}(0,\sigma {\mathit{I}}_{\left(10\right)})$, where ${\mathbf{1}}_{\left(d\right)}$ refers to a d-dimensional vector of 1s. Figure 2A depicts the change in entropy as a function of $\mathrm{ln}\left(\sigma \right)$. For very small $\sigma $, the distributions are almost entirely overlapping, while for large $\sigma $ they lie very far apart. As expected, the entropy increases with $\sigma $. Here, we see that the prediction of ${\widehat{H}}_{\mathrm{KL}}$ is identical to $H(X,C)$, which arises because $\mathrm{KL}({p}_{i}\parallel {p}_{j})$ is infinite whenever the support of ${p}_{i}$ is not entirely contained in the support of ${p}_{j}$. Uniform components with equal size and non-equal means must have some region of non-overlap, and so the KL is infinite between all pairs of components; thus, KL is effectively ${D}_{\mathrm{max}}$ (Equation (8)). In contrast, we see that ${\widehat{H}}_{\mathrm{BD}}$ estimates the true entropy quite well. This example demonstrates that getting an accurate estimate of mixture entropy may require selecting a distance function that works well with the component distributions. Finally, it turns out that, for uniform components of equal size, ${\widehat{H}}_{\mathrm{ELK}}={\widehat{H}}_{\mathrm{BD}}$. This can be seen by combining Equations (6) and (16), and comparing to Equation (17) (note that ${V}_{i}={V}_{j}$ when the components have equal size).

In the second experiment, we adjust the variance in the size of the uniform components. We again use 100 10-dimensional components, ${p}_{i}=\mathcal{U}({\mu}_{i}-{\gamma}_{i}{\mathbf{1}}_{\left(10\right)},{\mu}_{i}+{\gamma}_{i}{\mathbf{1}}_{\left(10\right)})$, where ${\mu}_{i}\sim \mathcal{N}(0,{\mathit{I}}_{\left(10\right)})$, and ${\gamma}_{i}\sim \mathrm{\Gamma}(1+\sigma ,1+\sigma )$, where $\mathrm{\Gamma}(\alpha ,\beta )$ is the Gamma distribution with shape parameter $\alpha $ and rate parameter $\beta $. Figure 2B shows the change in entropy estimates as a function of $\mathrm{ln}\left(\sigma \right)$. When $\sigma $ is small, the sizes have significant spread, while as $\sigma $ grows the distributions become close to equally sized. We again see that ${\widehat{H}}_{\mathrm{BD}}$ is a good estimator of entropy, outperforming all of the other estimators. Generally, not all supports will be non-overlapping, so ${\widehat{H}}_{\mathrm{KL}}$ will not necessarily be equal to $H(X,C)$, though we find the two to be numerically quite close. In this experiment, we find that the lower and upper bounds specified by ${\widehat{H}}_{\mathrm{BD}}$ and ${\widehat{H}}_{\mathrm{KL}}$ provide a tight estimate of the true entropy.

In the third experiment, we again consider a clustered mixture, and evaluate the entropy estimators as these clusters grow apart. Here, there are 100 components with ${p}_{i}=\mathcal{U}({\tilde{\mu}}_{g\left(i\right)}-{\mathbf{1}}_{\left(10\right)},{\tilde{\mu}}_{g\left(i\right)}+{\mathbf{1}}_{\left(10\right)})$, where $g\left(i\right)\in \left\{1\cdots 5\right\}$ is the randomly assigned cluster identity of component i. The cluster centers ${\tilde{\mu}}_{k}$ for $k\in \left\{1\cdots 5\right\}$ are generated according to $\mathcal{N}(0,\sigma {\mathit{I}}_{\left(10\right)})$. Figure 2C shows the change in entropy as the cluster locations move apart. Note that, in this case, the upper bound ${\widehat{H}}_{\mathrm{KL}}$ significantly outperforms $H(X,C)$, unlike in the first and second experiments, because in this experiment, components in the same cluster have perfect overlap. We again see that ${\widehat{H}}_{\mathrm{BD}}$ provides a relatively accurate lower bound for the true entropy.

In the final experiment, the dimension of the components is varied. There are again 100 components, with ${p}_{i}=\mathcal{U}({\mu}_{i}-{\mathbf{1}}_{\left(d\right)},{\mu}_{i}+{\mathbf{1}}_{\left(d\right)})$ and ${\mu}_{i}\sim \mathcal{N}(0,\sigma {\mathit{I}}_{\left(d\right)})$. Figure 2D shows the change in entropy as the dimension increases from $d=1$ to $d=16$. Interestingly, in the low-dimensional case, $H\left(X\right|C)$ is a very close estimate for the true entropy, while in the high-dimensional case, the entropy becomes very close to $H(X,C)$. This is because in higher dimensions, there is more “space” for the components to be far from each other. As in the first experiment, ${\widehat{H}}_{\mathrm{KL}}$ is equal to $H(X,C)$. We again observe that ${\widehat{H}}_{\mathrm{BD}}$ provides a tight lower bound on the mixture entropy, regardless of dimension.

## 7. Discussion

We have presented a new class of estimators for the entropy of a mixture distribution. We have shown that any estimator in this class has a bounded estimation bias, and that this class includes useful lower and upper bounds on the entropy of a mixture. Finally, we have shown that these bounds become exact when the mixture components are grouped into well-separated clusters.

Our derivation of the bounds makes use of some existing results [5,31]. However, to our knowledge, these results have not been previously used to estimate mixture entropies. Furthermore, they have not been compared numerically or analytically to better-known bounds.

We evaluated these estimators using numerical simulations of mixtures of Gaussians as well as mixtures of bounded (hypercube) uniform distributions. Our results demonstrate that our estimators perform much better than existing well-known estimators.

This estimator class can be especially useful for optimization problems that involve minimization of entropy or mutual information. If the distance function used in the pairwise estimator class is continuous and smooth in the parameters of the mixture components, then the entropy estimate is also continuous and smooth. This permits our estimators to be used within gradient-based optimization techniques, for example gradient descent, as often done in machine learning problems.
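This smoothness can be illustrated with a finite-difference check on ${\widehat{H}}_{\mathrm{KL}}$ for a homoscedastic 1-d mixture. The sketch below is ours; a real optimizer would use automatic differentiation rather than finite differences:

```python
import numpy as np

# H_hat_KL for a homoscedastic 1-d Gaussian mixture, written as a smooth
# function of the component means
def h_kl(means, c, s2=1.0):
    means = np.asarray(means, dtype=float)
    hxc = 0.5 * (np.log(s2) + np.log(2 * np.pi) + 1.0)
    out = hxc
    for i, ci in enumerate(c):
        kls = 0.5 * (means[i] - means) ** 2 / s2
        out -= ci * np.log(np.sum(c * np.exp(-kls)))
    return out

c = np.array([0.5, 0.5])

def f(m0):
    # entropy estimate as the first mean moves, second mean fixed at 1.0
    return h_kl([m0, 1.0], c)

# central finite differences at two step sizes agree, as expected for a
# smooth function of the mixture parameters
g1 = (f(0.1 + 1e-3) - f(0.1 - 1e-3)) / 2e-3
g2 = (f(0.1 + 1e-4) - f(0.1 - 1e-4)) / 2e-4
```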

In fact, we have used our upper bound to implement a non-parametric, nonlinear version of the “Information Bottleneck” [43]. Specifically, we minimized an upper bound on the mutual information between the input and hidden layer of a neural network [11]. We found that the optimal distributions were often clustered (Section 3.4). That work demonstrates the practical value of having an accurate, differentiable upper bound on mixture entropy that performs well in the clustered regime.

Note that we have not proved that the bounds derived here are the best possible. Identifying better bounds, or proving that our results are optimal within some class of bounds, remains for future work.

## Acknowledgments

We thank David H. Wolpert for useful discussions. We would also like to thank the Santa Fe Institute for helping to support this research. This work was made possible through the support of AFOSR MURI on multi-information sources of multi-physics systems under Award Number FA9550-15-1-0038.

## Author Contributions

Artemy Kolchinsky and Brendan D. Tracey designed the method and experiments; Brendan D. Tracey performed the experiments; Artemy Kolchinsky and Brendan D. Tracey wrote the paper.

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix A. Chernoff α-Divergence Is Not a Distance Function for α ∉ [0, 1]

For any pair of densities p and q, consider the Chernoff $\alpha $-divergence

$${C}_{\alpha}(p\parallel q):=-\mathrm{ln}\int {p}^{\alpha}\left(x\right){q}^{1-\alpha}\left(x\right)\phantom{\rule{0.166667em}{0ex}}dx=-\mathrm{ln}{c}_{\alpha}(p\parallel q),$$

where the quantity ${c}_{\alpha}$ is called the Chernoff $\alpha $-coefficient [29]. Taking the second derivative of ${c}_{\alpha}$ with respect to $\alpha $ gives

$$\frac{{d}^{2}}{d{\alpha}^{2}}{c}_{\alpha}(p\parallel q)=\int {p}^{\alpha}\left(x\right){q}^{1-\alpha}\left(x\right){\left(\mathrm{ln}\frac{p\left(x\right)}{q\left(x\right)}\right)}^{2}dx.$$

Observe that this quantity is everywhere positive, meaning that ${c}_{\alpha}(p\parallel q)$ is convex everywhere. For simplicity, consider the case $p\ne q$, in which case this function is strictly convex. In addition, observe that for any p and q, ${c}_{\alpha}(p\parallel q)=1$ when $\alpha =0$ and $\alpha =1$. If ${c}_{\alpha}(p\parallel q)$ is strictly convex in $\alpha $, this must mean that ${c}_{\alpha}(p\parallel q)>1$ for $\alpha \notin \left[0,1\right]$. This in turn implies that the Chernoff $\alpha $-divergence ${C}_{\alpha}$ is strictly negative for $\alpha \notin \left[0,1\right]$. Thus, ${C}_{\alpha}$ is not a valid distance function for $\alpha \notin \left[0,1\right]$, as defined in Section 3.
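For two unit-variance 1-d Gaussians whose means differ by $\Delta \mu $, Equation (14) reduces to ${C}_{\alpha}=\alpha (1-\alpha ){\Delta \mu}^{2}/2$, which makes the sign flip outside $\left[0,1\right]$ easy to check (the sketch and its parameter values are ours):

```python
import numpy as np

# C_alpha for two unit-variance 1-d Gaussians a distance dmu apart:
# the sign of C_alpha is the sign of alpha * (1 - alpha)
dmu = 2.0

def C(alpha):
    return 0.5 * alpha * (1 - alpha) * dmu ** 2

inside = [C(a) for a in (0.1, 0.5, 0.9)]   # alpha in [0, 1]: nonnegative
outside = [C(a) for a in (-0.5, 1.5)]      # alpha outside [0, 1]: negative
```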

## Appendix B. For Clustered Mixtures, Ĥ_{BD} ≥ Ĥ_{ELK}

Assume a mixture with perfect clustering (Section 3.4). Specifically, we assume that if $g\left(i\right)=g\left(j\right)$, then ${p}_{i}\left(x\right)={p}_{j}\left(x\right)$, and if $g\left(i\right)\ne g\left(j\right)$ then both $\mathrm{exp}(-{C}_{\alpha}({p}_{i}\left|\right|{p}_{j}\left)\right)\approx 0$ and $\int {p}_{i}\left(x\right){p}_{j}\left(x\right)dx\approx 0$.

In this case, our lower bound ${\widehat{H}}_{\mathrm{BD}}$ is at least as good as ${\widehat{H}}_{\mathrm{ELK}}$. Specifically, ${\widehat{H}}_{\mathrm{ELK}}$ becomes

$$\begin{array}{cc}\hfill {\widehat{H}}_{\mathrm{ELK}}& =-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}\int {p}_{i}\left(x\right){p}_{j}\left(x\right)dx\hfill \\ & =-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{\delta}_{g\left(i\right),g\left(j\right)}\int {p}_{i}{\left(x\right)}^{2}dx\hfill \\ & =-\sum _{k}{p}_{G}\left(k\right)\mathrm{ln}\left[{p}_{G}\left(k\right)\int {p}_{k}{\left(x\right)}^{2}dx\right]\phantom{\rule{0.166667em}{0ex}},\hfill \end{array}$$

where ${p}_{k}\left(x\right)$ is shorthand for the density of any component in cluster k (remember that all components in the same cluster have equal density). ${\widehat{H}}_{{C}_{\alpha}}$ becomes

$$\begin{array}{cc}\hfill {\widehat{H}}_{{C}_{\alpha}}& =H\left(X\right|C)-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}\mathrm{exp}(-{C}_{\alpha}({p}_{i}\parallel {p}_{j}))\hfill \\ & =-\sum _{i}{c}_{i}\int {p}_{i}\left(x\right)\mathrm{ln}{p}_{i}\left(x\right)dx-\sum _{i}{c}_{i}\mathrm{ln}\sum _{j}{c}_{j}{\delta}_{g\left(i\right),g\left(j\right)}\hfill \\ & =-\sum _{k}{p}_{G}\left(k\right)\int {p}_{k}\left(x\right)\mathrm{ln}{p}_{k}\left(x\right)dx-\sum _{k}{p}_{G}\left(k\right)\mathrm{ln}{p}_{G}\left(k\right)\hfill \\ & \stackrel{\left(a\right)}{\ge}-\sum _{k}{p}_{G}\left(k\right)\mathrm{ln}\int {p}_{k}{\left(x\right)}^{2}dx-\sum _{k}{p}_{G}\left(k\right)\mathrm{ln}{p}_{G}\left(k\right)\hfill \\ & =-\sum _{k}{p}_{G}\left(k\right)\mathrm{ln}\left[{p}_{G}\left(k\right)\int {p}_{k}{\left(x\right)}^{2}dx\right]={\widehat{H}}_{\mathrm{ELK}}\phantom{\rule{0.166667em}{0ex}},\hfill \end{array}$$

where (a) uses Jensen’s inequality.

## References

- McLachlan, G.; Peel, D. Finite Mixture Models; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
- Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
- Goldberger, J.; Gordon, S.; Greenspan, H. An Efficient Image Similarity Measure Based on Approximations of KL-Divergence Between Two Gaussian Mixtures. In Proceedings of the 9th International Conference on Computer Vision, Nice, France, 13–16 October 2003; Volume 3, pp. 487–493. [Google Scholar]
- Viola, P.; Schraudolph, N.N.; Sejnowski, T.J. Empirical Entropy Manipulation for Real-World Problems. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 1996; pp. 851–857. [Google Scholar]
- Hershey, J.R.; Olsen, P.A. Approximating the Kullback Leibler divergence between Gaussian mixture models. In Proceedings of the 2007 IEEE International Conference on Acoustics, Speech and Signal Processing, Honolulu, HI, USA, 15–20 April 2007; Volume 4, pp. 317–320. [Google Scholar]
- Chen, J.Y.; Hershey, J.R.; Olsen, P.A.; Yashchin, E. Accelerated monte carlo for kullback-leibler divergence between gaussian mixture models. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 4553–4556. [Google Scholar]
- Capristan, F.M.; Alonso, J.J. Range Safety Assessment Tool (RSAT): An analysis environment for safety assessment of launch and reentry vehicles. In Proceedings of the 52nd Aerospace Sciences Meeting, National Harbor, MD, USA, 13–17 January 2014; p. 304. [Google Scholar]
- Schraudolph, N.N. Optimization of Entropy with Neural Networks. Ph.D. Thesis, University of California, San Diego, CA, USA, 1995. [Google Scholar]
- Schraudolph, N.N. Gradient-based manipulation of nonparametric entropy estimates. IEEE Trans. Neural Netw. **2004**, 15, 828–837. [Google Scholar] [CrossRef] [PubMed]
- Shwartz, S.; Zibulevsky, M.; Schechner, Y.Y. Fast kernel entropy estimation and optimization. Signal Process. **2005**, 85, 1045–1058. [Google Scholar] [CrossRef]
- Kolchinsky, A.; Tracey, B.D.; Wolpert, D.H. Nonlinear Information Bottleneck. arXiv, 2017; arXiv:1705.02436. [Google Scholar]
- Contreras-Reyes, J.E.; Cortés, D.D. Bounds on Rényi and Shannon Entropies for Finite Mixtures of Multivariate Skew-Normal Distributions: Application to Swordfish (Xiphias gladius Linnaeus). Entropy **2016**, 18, 382. [Google Scholar] [CrossRef]
- Carreira-Perpinan, M.A. Mode-finding for mixtures of Gaussian distributions. IEEE Trans. Pattern Anal. Mach. Intell. **2000**, 22, 1318–1323. [Google Scholar] [CrossRef]
- Zobay, O. Variational Bayesian inference with Gaussian-mixture approximations. Electron. J. Stat. **2014**, 8, 355–389. [Google Scholar] [CrossRef]
- Beirlant, J.; Dudewicz, E.J.; Györfi, L.; van der Meulen, E.C. Nonparametric entropy estimation: An overview. Int. J. Math. Stat. Sci. **1997**, 6, 17–39. [Google Scholar]
- Joe, H. Estimation of entropy and other functionals of a multivariate density. Ann. Inst. Stat. Math. **1989**, 41, 683–697. [Google Scholar] [CrossRef]
- Nair, C.; Prabhakar, B.; Shah, D. On entropy for mixtures of discrete and continuous variables. arXiv, 2006; arXiv:cs/0607075. [Google Scholar]
- Huber, M.F.; Bailey, T.; Durrant-Whyte, H.; Hanebeck, U.D. On entropy approximation for Gaussian mixture random vectors. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul, Korea, 20–22 August 2008; pp. 181–188. [Google Scholar]
- Hall, P.; Morton, S.C. On the estimation of entropy. Ann. Inst. Stat. Math. **1993**, 45, 69–88. [Google Scholar] [CrossRef]
- Principe, J.C.; Xu, D.; Fisher, J. Information theoretic learning. Unsuperv. Adapt. Filter **2000**, 1, 265–319. [Google Scholar]
- Xu, J.W.; Paiva, A.R.; Park, I.; Principe, J.C. A reproducing kernel Hilbert space framework for information-theoretic learning. IEEE Trans. Signal Process. **2008**, 56, 5891–5902. [Google Scholar]
- Jebara, T.; Kondor, R. Bhattacharyya and expected likelihood kernels. In Learning Theory and Kernel Machines; Springer: Berlin, Germany, 2003; pp. 57–71. [Google Scholar]
- Jebara, T.; Kondor, R.; Howard, A. Probability product kernels. J. Mach. Learn. Res. **2004**, 5, 819–844. [Google Scholar]
- Banerjee, A.; Merugu, S.; Dhillon, I.S.; Ghosh, J. Clustering with Bregman divergences. J. Mach. Learn. Res. **2005**, 6, 1705–1749. [Google Scholar]
- Cichocki, A.; Zdunek, R.; Phan, A.H.; Amari, S.I. Similarity Measures and Generalized Divergences. In Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation; John Wiley & Sons: Hoboken, NJ, USA, 2009; pp. 81–129. [Google Scholar]
- Cichocki, A.; Amari, S.I. Families of alpha-beta-and gamma-divergences: Flexible and robust measures of similarities. Entropy
**2010**, 12, 1532–1568. [Google Scholar] [CrossRef] - Gil, M.; Alajaji, F.; Linder, T. Rényi divergence measures for commonly used univariate continuous distributions. Inf. Sci.
**2013**, 249, 124–131. [Google Scholar] [CrossRef] - Crooks, G.E. On Measures of Entropy and Information. Available online: http://threeplusone.com/on_information.pdf (accessed on 12 July 2017).
- Nielsen, F. Chernoff information of exponential families. arXiv, 2011; arXiv:1102.2684. [Google Scholar]
- Van Erven, T.; Harremos, P. Rényi divergence and Kullback-Leibler divergence. IEEE Trans. Inf. Theory
**2014**, 60, 3797–3820. [Google Scholar] [CrossRef] - Haussler, D.; Opper, M. Mutual information, metric entropy and cumulative relative entropy risk. Ann. Stat.
**1997**, 25, 2451–2492. [Google Scholar] [CrossRef] - Fukunaga, K. Introduction to Statistical Pattern Recognition, 2nd ed.; Academic Press: Boston, MA, USA, 1990. [Google Scholar]
- Paisley, J. Two Useful Bounds for Variational Inference; Princeton University: Princeton, NJ, USA, 2010. [Google Scholar]
- Sason, I.; Verdú, S. f-Divergence Inequalities. IEEE Trans. Inf. Theory
**2016**, 62, 5973–6006. [Google Scholar] [CrossRef] - Hero, A.O.; Ma, B.; Michel, O.; Gorman, J. Alpha-Divergence for Classification, Indexing and Retrieval. Available online: https://pdfs.semanticscholar.org/6d51/fbf90c59c2bb8cbf0cb609a224f53d1b68fb.pdf (accessed on 14 July 2017).
- Dowson, D.; Landau, B. The Fréchet distance between multivariate normal distributions. J. Multivar. Anal.
**1982**, 12, 450–455. [Google Scholar] [CrossRef] - Olkin, I.; Pukelsheim, F. The distance between two random vectors with given dispersion matrices. Linear Algebra Appl.
**1982**, 48, 257–263. [Google Scholar] [CrossRef] - Pardo, L. Statistical Inference Based on Divergence Measures; CRC Press: Boca Raton, FL, USA, 2005. [Google Scholar]
- Hobza, T.; Morales, D.; Pardo, L. Rényi statistics for testing equality of autocorrelation coefficients. Stat. Methodol.
**2009**, 6, 424–436. [Google Scholar] - Nielsen, F. Generalized Bhattacharyya and Chernoff upper bounds on Bayes error using quasi-arithmetic means. Pattern Recognit. Lett.
**2014**, 42, 25–34. [Google Scholar] [CrossRef] - GitHub. Available online: https://www.github.com/btracey/mixent (accessed on 14 July 2017).
- Gonum Numeric Library. Available online: https://www.gonum.org (accessed on 14 July 2017).
- Tishby, N.; Pereira, F.; Bialek, W. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 22–24 September 1999. [Google Scholar]

**Figure 1.** Entropy estimates for a mixture of 100 Gaussians. In each plot, the vertical axis shows the entropy of the distribution, and the horizontal axis varies a feature of the components: (**A**) the distance between means is increased; (**B**) the component covariances become more similar (at the right side of the plot, all Gaussians have covariance matrices approximately equal to the identity matrix); (**C**) the components are grouped into five “clusters”, and the distance between the locations of the clusters is increased; (**D**) the dimension is increased.

**Figure 2.** Entropy estimates for a mixture of 100 uniform components. In each plot, the vertical axis shows the entropy of the distribution, and the horizontal axis varies a feature of the components: (**A**) the distance between means is increased; (**B**) the component sizes become more similar (at the right side of the plot, all components have approximately the same size); (**C**) the components are grouped into five “clusters”, and the distance between these clusters is increased; (**D**) the dimension is increased.
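The pairwise-distance estimators compared in these figures take the form $\hat{H}_D = \sum_i w_i H(p_i) - \sum_i w_i \ln \sum_j w_j e^{-D(p_i \| p_j)}$, which is an upper bound on the mixture entropy when $D$ is the Kullback–Leibler divergence and a lower bound when $D$ is the Bhattacharyya distance. A minimal sketch for the univariate Gaussian case is below; the function name and test mixture are illustrative, not taken from the authors' `mixent` code (which is written in Go against Gonum):

```python
import numpy as np

def pairwise_entropy_bounds(w, mu, sigma):
    """Pairwise-distance entropy bounds for a 1-D Gaussian mixture.

    Evaluates H_D = sum_i w_i H(p_i) - sum_i w_i log(sum_j w_j exp(-D_ij))
    with D_ij the KL divergence (upper bound) and the Bhattacharyya
    distance (lower bound), using the closed-form divergences between
    univariate Gaussians.
    """
    w, mu, sigma = (np.asarray(a, dtype=float) for a in (w, mu, sigma))
    # Differential entropy of each Gaussian component.
    h_comp = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
    dmu2 = (mu[:, None] - mu[None, :]) ** 2
    s2i, s2j = sigma[:, None] ** 2, sigma[None, :] ** 2
    # Closed-form KL(p_i || p_j) and Bhattacharyya distance BD(p_i, p_j).
    kl = 0.5 * (np.log(s2j / s2i) + (s2i + dmu2) / s2j - 1.0)
    bd = dmu2 / (4 * (s2i + s2j)) + 0.5 * np.log(
        (s2i + s2j) / (2 * sigma[:, None] * sigma[None, :]))

    def estimate(dist):
        # sum_j w_j exp(-D_ij) is row i of exp(-dist) @ w.
        return w @ h_comp - w @ np.log(np.exp(-dist) @ w)

    return estimate(bd), estimate(kl)  # (lower bound, upper bound)

# Example: three moderately separated components.
w = np.array([0.3, 0.5, 0.2])
mu = np.array([-3.0, 0.0, 3.0])
sigma = np.array([1.0, 0.5, 2.0])
lower, upper = pairwise_entropy_bounds(w, mu, sigma)
```

A Monte Carlo estimate of the mixture entropy (as used for the "exact" curves in the figures) should fall between `lower` and `upper`, up to sampling noise.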

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).