Inference of entropies of discrete random variables with unknown cardinalities

We examine the recently introduced NSB estimator of entropies of severely undersampled discrete variables and devise a procedure for calculating the integrals involved. We discover that the output of the estimator has a well-defined limit for large cardinalities of the variables being studied. Thus one can estimate entropies with no a priori assumptions about these cardinalities, and a closed form solution for such estimates is given.


Introduction
Estimation of functions of a discrete random variable with an unknown probability distribution from independent samples of this variable seems to be an almost trivial problem, familiar to many since high school [1]. However, this simplicity vanishes in the extremely undersampled regime, where K, the cardinality or alphabet size of the random variable, is much larger than N, the number of independent samples of the variable. In this case the average number of samples per possible outcome (also called a bin in this paper) is less than one, the relative uncertainty in the underlying probability distribution is large, and the usual formulas for estimating various statistics fail miserably. One then has to use the power of Bayesian statistics to constrain the set of allowable distributions a priori and thus decrease the posterior error. Unfortunately, due to the usual bias-variance tradeoff, decreasing the variance this way may lead to an increased bias, i.e., the estimator becomes a function of the prior rather than of the experimental data.
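The failure of the usual formulas is easy to see in a minimal simulation (an illustration of the regime described above; the particular K, N, and seed are invented for this sketch): draw N = 100 samples from a uniform distribution over K = 1000 bins and apply the naive maximum likelihood ("plugin") entropy formula.

```python
import math
import random
from collections import Counter

random.seed(0)
K, N = 1000, 100                      # cardinality far exceeds the sample size
counts = Counter(random.randrange(K) for _ in range(N))

# Naive maximum likelihood ("plugin") estimate: S = -sum (n_i/N) ln(n_i/N)
S_ml = -sum((n / N) * math.log(n / N) for n in counts.values())

print(f"plugin: {S_ml:.3f} nats, true entropy ln K = {math.log(K):.3f} nats")
```

The plugin estimate can never exceed ln N ≈ 4.6 nats here, no matter what the data look like, while the true entropy is ln K ≈ 6.9 nats: the bias is of order ln(K/N) and does not shrink until N approaches K.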
The situation is particularly bad for inferring the Boltzmann-Shannon entropy, S, one of the most important characteristics of a discrete variable. Its frequentist as well as common Bayesian estimators have low variances, but high biases that are very difficult to calculate (see Ref. [2] for a review). However, recently ideas from Bayesian model selection [3,4,5,6] were used by Nemenman, Shafee, and Bialek to suggest a solution to the problem [7]. Their method, hereafter called NSB, is robust and unbiased even for severely undersampled problems. We will review it and point out that it is equivalent to finding the number of yet unseen bins with nonzero probability given K, the maximum cardinality of the variable. While estimation of K by model selection techniques will not work, we will show that the method has a proper limit as K → ∞. Thus one should be able to calculate entropies of discrete random variables even without knowing their cardinality.

Summary of the NSB method
In Bayesian statistics, one uses Bayes' rule to express the posterior probability of a probability distribution q ≡ {q_i}, i = 1 . . . K, of a discrete random variable with the help of its a priori probability, P(q). Thus if n_i identical and independent samples from q are observed in bin i, such that Σ_{i=1}^K n_i = N, then the posterior, P(q|n), is
P(q|n) = P(n|q) P(q) / P(n) ,  P(n|q) = Π_{i=1}^K q_i^{n_i} .  (1)
Following Ref. [7], we will focus on the popular Dirichlet family of priors, indexed by a (hyper)parameter β:
P_β(q) = [1/Z(β)] δ(1 − Σ_{i=1}^K q_i) Π_{i=1}^K q_i^{β−1} ,  Z(β) = Γ^K(β)/Γ(Kβ) .  (2)
Here the δ-function and Z(β) enforce the normalizations of q and P_β(q), respectively, and Γ stands for Euler's Γ-function. These priors are common in applications [8] since they, as well as the data term P(n|q), are of a multinomial structure, which is analytically tractable. For example, in Ref. [9] Wolpert and Wolf calculated the posterior averages, here denoted as ⟨. . .⟩_β, of many interesting quantities, including the distribution itself,
⟨q_i⟩_β = (n_i + β)/(N + Kβ) ,  (3)
and the moments of its entropy, which we will not reprint here.
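For concreteness, the Wolpert-Wolf posterior mean of the entropy under a fixed-β Dirichlet prior can be sketched in code. The closed form ⟨S⟩_β = ψ_0(N + κ + 1) − Σ_i [(n_i + β)/(N + κ)] ψ_0(n_i + β + 1), κ = Kβ, is quoted here from memory of Ref. [9], and the hand-rolled digamma routine is this sketch's own; treat both as assumptions:

```python
import math

def digamma(x):
    """psi_0(x) via upward recurrence plus the large-x asymptotic series."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def dirichlet_mean_entropy(counts, K, beta):
    """Posterior mean <S>_beta for counts n_i; unlisted bins are empty."""
    N = sum(counts)
    kappa = K * beta
    total = sum((n + beta) / (N + kappa) * digamma(n + beta + 1)
                for n in counts)
    # the K - len(counts) empty bins contribute identical terms
    total += (K - len(counts)) * beta / (N + kappa) * digamma(beta + 1)
    return digamma(N + kappa + 1) - total

K, beta = 100, 0.1
print(dirichlet_mean_entropy([], K, beta))           # no data: a priori value
print(dirichlet_mean_entropy([4, 2, 2, 1, 1], K, beta))
```

A useful sanity check: with no data the function falls back to the a priori mean ψ_0(Kβ + 1) − ψ_0(β + 1), and the result always stays between 0 and ln K.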
As suggested by Eq. (3), Dirichlet priors add β extra sample points to each possible bin. Thus for β ≫ N/K the data are unimportant, and P(q|n) is dominated by distributions close to the uniform one, q_i ≈ 1/K. The posterior mean of the entropy is then strongly biased upwards, towards its maximum possible value of S_max = ln K. Similarly, for β ≪ N/K, distributions in the vicinity of the frequentist's maximum likelihood estimate, q_i = n_i/N, are important, and ⟨S⟩_β has a strong downward bias [2].
In Ref. [7], Nemenman et al. traced this problem to the properties of the Dirichlet family: its members encode reasonable a priori assumptions about q, but not about S(q). Indeed, the a priori assumptions about the entropy turn out to be extremely biased, as may be seen from the following a priori moments:
ξ(β) ≡ ⟨S⟩_β |_{N=0} = ψ_0(Kβ + 1) − ψ_0(β + 1) ,  (4)
σ²(β) ≡ ⟨(δS)²⟩_β |_{N=0} = [(β + 1)/(Kβ + 1)] ψ_1(β + 1) − ψ_1(Kβ + 1) ,  (5)
where ψ_m(x) = (d/dx)^{m+1} ln Γ(x) are the polygamma functions. ξ(β) varies smoothly from 0 for β = 0, through 1 for β ≈ 1/K, to ln K for β → ∞. σ(β) scales as 1/√K for almost all β (see Ref. [7] for details), which is negligibly small for large K. Thus a q that is typical in P_β(q) usually has its entropy extremely close to some predetermined β-dependent value. It is not surprising, then, that this bias persists even after N < K data are collected.
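A quick numerical check of this behavior, assuming the a priori moments take the forms ξ(β) = ψ_0(Kβ + 1) − ψ_0(β + 1) and σ²(β) = [(β + 1)/(Kβ + 1)] ψ_1(β + 1) − ψ_1(Kβ + 1) as in Ref. [7] (the hand-rolled polygamma routines are also this sketch's own assumptions):

```python
import math

def digamma(x):
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def trigamma(x):
    r = 0.0
    while x < 6.0:
        r += 1.0 / (x * x)
        x += 1.0
    f = 1.0 / (x * x)
    return r + 1.0 / x + 0.5 * f + f / x * (1/6 - f * (1/30 - f / 42))

def xi(beta, K):
    """A priori mean entropy under the Dirichlet prior."""
    return digamma(K * beta + 1.0) - digamma(beta + 1.0)

def sigma2(beta, K):
    """A priori variance of the entropy."""
    return ((beta + 1.0) / (K * beta + 1.0) * trigamma(beta + 1.0)
            - trigamma(K * beta + 1.0))

K = 1000
for b in (1e-9, 1.0 / K, 1.0, 1e6):
    print(b, xi(b, K), sigma2(b, K))
```

The printout shows ξ sweeping from 0 through roughly 1 (at β ≈ 1/K) up to ln K, while the standard deviation stays of order 1/√K: each β pins the a priori entropy to a narrow band.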
The NSB method suggests that, to estimate entropy with a small bias, one should not look for priors that seem reasonable on the space of q; rather, the a priori distribution of entropy, P(S(q)), should be flattened. (In this paper the unit of entropy is the nat; thus all logarithms are natural.) This can be done approximately by noting that Eqs. (4, 5) ensure that, for large K, P(S) is almost a δ-function. Thus a prior that enforces
integration over all non-negative values of β, which correspond to all a priori expected entropies between 0 and ln K, should do the job of eliminating the bias in the entropy estimation even for N ≪ K. While there are probably other options, Ref. [7] centered on the following prior, which generalizes the Dirichlet mixture priors [10] to an infinite mixture:
P(q; β) = (1/Z) δ(1 − Σ_{i=1}^K q_i) Π_{i=1}^K q_i^{β−1} [dξ(β)/dβ] P(β) .  (6)
Here Z is again a normalizing coefficient, and the term dξ/dβ ensures uniformity in the a priori expected entropy, ξ, rather than in β. A non-constant prior on β, P(β), may be used if sufficient reasons for this exist, but we will set it to one in all further developments.
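Since ξ(β) = ψ_0(Kβ + 1) − ψ_0(β + 1), the flattening weight follows by differentiation, dξ/dβ = K ψ_1(Kβ + 1) − ψ_1(β + 1). A small sketch verifying this analytic derivative against a numerical one (the polygamma routines and the particular K and β are this sketch's assumptions):

```python
import math

def digamma(x):
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def trigamma(x):
    r = 0.0
    while x < 6.0:
        r += 1.0 / (x * x)
        x += 1.0
    f = 1.0 / (x * x)
    return r + 1.0 / x + 0.5 * f + f / x * (1/6 - f * (1/30 - f / 42))

def xi(beta, K):
    return digamma(K * beta + 1.0) - digamma(beta + 1.0)

def dxi_dbeta(beta, K):
    # derivative of xi(beta); the weight that makes the prior uniform in xi
    return K * trigamma(K * beta + 1.0) - trigamma(beta + 1.0)

K, b, h = 200, 0.05, 1e-6
numeric = (xi(b + h, K) - xi(b - h, K)) / (2.0 * h)
print(dxi_dbeta(b, K), numeric)
```

The weight is strictly positive, so ξ(β) is monotone and the change of variables between β and ξ is well defined.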
Inference with the prior, Eq. (6), involves an additional averaging over β (or, equivalently, ξ), but is nevertheless straightforward. The a posteriori moments of the entropy are
⟨S^m⟩ = ∫_0^{ln K} dξ ρ(ξ|n) ⟨S^m⟩_{β(ξ)} / ∫_0^{ln K} dξ ρ(ξ|n) ,  (7)
where the posterior density is
ρ(ξ|n) = P(β(ξ)) [Γ(κ(ξ))/Γ(N + κ(ξ))] Π_{i=1}^K [Γ(n_i + β(ξ))/Γ(β(ξ))] ,  κ ≡ Kβ .  (8)
Nemenman et al. explain why this method should work using the theory of Bayesian model selection [3,4,5,6]. All possible probability distributions, even those that fit the data extremely badly, are included in the posterior averaging. For models with a larger volume in q space, the number of such bad q's is greater, and thus the probability of the model decreases. Such contributions from the phase space factors are usually termed Occam factors because they automatically discriminate against bigger, more complex models. If the maximum likelihood solution of a complex model explains the data better than that of a simpler one, then the total probability, a certain combination of the maximum likelihood value and the Occam factors, has a maximum for some non-trivial model, and the sharpness of this maximum grows with N. In other words, the data select a model which is simple, yet explains them well.
In the case of Eq. (6), we can view different values of β as different models. The smaller β is, the closer we are to the frequentist's maximum likelihood solution, so the data are explained better. However, as there is less smoothing [cf. Eq. (3)], smaller β results in a larger phase space. Thus, according to Ref. [7], one may expect that the integrals in Eq. (7) will be dominated by some β*, that the appropriate smoothing will be sharply selected, and that ⟨· · ·⟩ ≈ ⟨· · ·⟩_{β*}. In the current paper we investigate whether a maximum of the integrand in Eq. (7) indeed exists and study its properties. The results of the analysis will lead us to an extension and a simplification of the NSB method.
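This selection mechanism can be watched numerically. The sketch below scans the Γ-function part of the posterior density over β (dropping β-independent factors and the smooth prior terms; the toy counts and K are invented for illustration) and locates an interior maximum β* when coincidences are present:

```python
import math

def log_rho(beta, counts, K):
    """log of the Gamma-factor of the posterior density, up to constants."""
    N = sum(counts)
    kappa = K * beta
    val = math.lgamma(kappa) - math.lgamma(kappa + N)
    for n in counts:
        # empty bins contribute lgamma(beta) - lgamma(beta) = 0
        val += math.lgamma(n + beta) - math.lgamma(beta)
    return val

counts = [4, 2, 2, 1, 1]     # N = 10 samples, K1 = 5 occupied bins, Delta = 5
K = 1000
betas = [10 ** (-5 + 0.01 * j) for j in range(601)]   # 1e-5 ... 10, log-spaced
vals = [log_rho(b, counts, K) for b in betas]
j_star = max(range(len(vals)), key=vals.__getitem__)
print("beta* ~", betas[j_star], " kappa* ~", K * betas[j_star])
```

For these counts the maximum sits near κ = Kβ* ≈ 3.3, far from both ends of the scanned range; with no coincidences (all n_i = 1) the integrand instead increases monotonically toward the β → ∞ limit.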

Calculation of the NSB integrals
We will calculate the integrals in Eq. (7) using the saddle point method. Since the moments of S have no strong N dependence, when N is large only the Γ-terms in ρ are important for estimating the position of the saddle and the curvature around it. We therefore write ρ ∝ exp(−L) with the effective Lagrangian
L(κ) = ln Γ(κ + N) − ln Γ(κ) − Σ_{i=1}^K [ln Γ(n_i + κ/K) − ln Γ(κ/K)] .  (10)
Then the saddle point (equivalently, the maximum likelihood) value, κ* = Kβ*, solves the following equation, obtained by differentiating Eq. (10):
ψ_0(κ* + N) − ψ_0(κ*) = (1/K) Σ_m K_m/(β* + m − 1) ,  β* = κ*/K ,  (11)
where we use K_m to denote the number of bins that have at least m counts, and where the identity Σ_i [ψ_0(n_i + β) − ψ_0(β)] = Σ_m K_m/(β + m − 1) has been used. Note that K_1 is the number of occupied bins and Σ_m K_m = N. We notice that if K ≫ N, and if there are at least a few bins that have more than one datum in them, i.e., K_1 < N, then the distribution the data is taken from is highly non-uniform, and the entropy should be much smaller than its maximum value of S_max. Since for any β = O(1) the a priori entropy is extremely close to S_max (cf. Ref. [7]), a small entropy is achievable only if β* → 0 as K → ∞. Thus we will look for a solution of the form
κ* = κ_0 + κ_1/K + κ_2/K² + · · · ,  (12)
where none of the κ_j depends on K. Plugging Eq. (12) into Eq. (11), after a little algebra we obtain the first few terms in the expansion of κ*; the zeroth order term solves the algebraic equation
K_1/κ_0 = ψ_0(κ_0 + N) − ψ_0(κ_0) .  (15)
If required, more terms in the expansion can be calculated, but for common applications K is so big that none are usually needed.
We now focus on solving Eq. (15). For κ_0 → 0 and N > 0, the r.h.s. of the equation is approximately 1/κ_0 [11]. On the other hand, for κ_0 → ∞, it is close to N/κ_0. Thus if N = K_1, that is, if the number of coincidences among the data, ∆ ≡ N − K_1, is zero, then the l.h.s. always majorizes the r.h.s., and the equation has no solution. If there are coincidences, a unique solution exists, and the smaller ∆ is, the bigger κ_0 is; thus we should search for a κ_0 that diverges as ∆ → 0. Using standard results for polygamma functions [11], Eq. (15) can be rewritten in terms of the relative number of coincidences, δ ≡ ∆/N; to leading order in 1/N the rewritten equation (17) reads 1 − δ = (κ_0/N) ln(1 + N/κ_0). Combined with the previous observation, this suggests that we look for κ_0 of the form
κ_0 = N (b_0/δ + b_1 + b_2 δ + · · ·) ,  (18)
where each of the b_j is independent of δ and scales as N^0.
Substituting this expansion for κ_0 into Eq. (17), we see that it is self-consistent, and the coefficients b_j can be determined order by order in δ; in particular, b_0 = 1/2, so that κ_0 ≈ N²/(2∆) at leading order. Again, more terms can be calculated if needed.
This expresses the saddle point value β * (or κ * , or ξ * ) as a power series in 1/K and δ.
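Eq. (15) is also easy to solve numerically. The sketch below assumes the form K_1/κ_0 = ψ_0(κ_0 + N) − ψ_0(κ_0), which reproduces all the limits just discussed, brackets the root, and bisects; it then checks the advertised behavior: no solution without coincidences, and κ_0 approaching N²/(2∆) for small δ.

```python
import math

def digamma(x):
    """psi_0 via upward recurrence and the large-x asymptotic series."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def solve_kappa0(counts):
    """Bisect K1/kappa = psi_0(kappa+N) - psi_0(kappa); assumes K1 >= 2."""
    N = sum(counts)
    K1 = sum(1 for n in counts if n > 0)
    if N == K1:                 # Delta = 0: l.h.s. majorizes r.h.s., no root
        return None
    def f(k):
        return K1 / k - (digamma(k + N) - digamma(k))
    lo, hi = 1e-8, 1.0
    while f(hi) > 0.0:          # expand the bracket until f changes sign
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(solve_kappa0([1, 1, 1]))            # None: Delta = 0
print(solve_kappa0([4, 2, 2, 1, 1]))      # N = 10, Delta = 5: kappa0 ~ 3.3
big = [2] * 10 + [1] * 980                # N = 1000, Delta = 10, delta = 0.01
print(solve_kappa0(big), 1000**2 / 20)    # close to N^2/(2 Delta)
```
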
In order to complete the evaluation of the integrals in Eq. (7), we now need to calculate the curvature at this saddle point. Simple algebra shows that, to leading order, the inverse curvature, i.e., the squared posterior uncertainty of ξ, is
(δξ)² ≈ 1/∆ .  (22)
Notice that the curvature does not scale as a power of N, as was suggested in Ref. [7]: our uncertainty in the value of ξ* is determined, to the first order, only by the coincidences. One can understand this by considering a very large K with most of the bins having negligible probabilities. Then counts of n_i = 1 are not informative for entropy estimation, since they can correspond to massive bins as well as to random bins from the sea of negligible ones. However, coinciding counts necessarily signify an important bin, which should influence the entropy estimator. Note also that, to the first order in 1/K, the exact positioning of the coincidences does not matter: a few coincidences in many bins or many coincidences in a single one produce the same saddle point and the same curvature around it, provided that ∆ stays the same. While this is an artifact of our choice of the underlying prior P_β(q) and may change in a different realization of the NSB method, this behavior parallels the famous Ma entropy estimator, which is also coincidence based [12].
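The insensitivity to the positioning of coincidences is easy to check with the zeroth order saddle equation (again assuming the form K_1/κ_0 = ψ_0(κ_0 + N) − ψ_0(κ_0), which depends on the data only through N and K_1): two count configurations with the same ∆ give the same κ_0.

```python
import math

def digamma(x):
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def solve_kappa0(counts):
    """Zeroth order saddle: K1/kappa = psi_0(kappa + N) - psi_0(kappa)."""
    N = sum(counts)
    K1 = sum(1 for n in counts if n > 0)
    def f(k):
        return K1 / k - (digamma(k + N) - digamma(k))
    lo, hi = 1e-8, 1.0
    while f(hi) > 0.0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Same N = 10 and Delta = 2, but the coincidences are arranged differently:
a = solve_kappa0([3, 1, 1, 1, 1, 1, 1, 1])   # one triple count
b = solve_kappa0([2, 2, 1, 1, 1, 1, 1, 1])   # two double counts
print(a, b)                                  # identical to first order in 1/K
```
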
In conclusion, if the number of coincidences, rather than N, is large, then a proper value of β is selected, and the variance of the entropy is small. The results of this section then reduce the calculation of the complicated integrals in Eq. (7) to purely algebraic operations. This analysis has been used to write a general purpose software library for estimating entropies of discrete variables; the library is available from the author.

Choosing a value for K?
A question is in order now. If N ≪ K, the regime we are mostly interested in, then the number of extra counts in occupied bins, K 1 β, is negligible compared to the number of extra counts in empty bins, (K −K 1 )β ≈ Kβ. Then Eqs. (3,8) tell us that selecting β (that is, integrating over it) means balancing N , the number of actual counts versus κ = Kβ, the number of pseudocounts, or, equivalently, the scaled number of unoccupied bins. Why do we vary the pseudocounts by varying β? Can we instead use Bayesian model selection methods to set K? Indeed, not having a good handle on the value of K is usually one of the main reasons why entropy estimation is difficult. Can we circumvent this problem?
To answer this, note that a smaller K leads to a higher maximum likelihood value, since the total number of pseudocounts is smaller. Unfortunately, smaller K also means a smaller volume in the distribution space, since there are fewer bins, and hence fewer degrees of freedom, available. As a result, Bayesian averaging over K is trivial: the smallest possible number of bins, that is, no empty bins, dominates. This is easy to see from Eq. (8): only the first ratio of Γ-functions in the posterior density depends on K, and it is maximized for K = K_1. Thus straightforward selection of the value of K is not an option. However, in the next Section we will suggest a way around this hurdle.
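The statement can be checked directly: at fixed β, the empty bins contribute Γ(β)/Γ(β) = 1 to the product in the posterior density, so the only K-dependent factor is Γ(Kβ)/Γ(Kβ + N), and it only decreases as K grows. A minimal check, with invented values of β and N:

```python
import math

beta, N = 0.1, 10      # invented hyperparameter and sample size
K1 = 5                 # number of occupied bins: the smallest admissible K

def log_K_factor(K):
    """log of Gamma(K beta)/Gamma(K beta + N), the only K-dependent piece."""
    return math.lgamma(K * beta) - math.lgamma(K * beta + N)

vals = [log_K_factor(K) for K in range(K1, 2001)]
# The factor decreases monotonically in K, so K = K1 wins the model selection
print(vals[0], vals[-1])
```
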

Unknown or infinite K
When one is not sure about the value of K, it is usually because its simple estimate is intolerably large. For example, consider measuring the entropy of ℓ-grams in printed English [13] using an alphabet with 29 characters: 26 different letters, one symbol for digits, one space, and one punctuation mark. Then even for ℓ as low as 7, a naive value for K is 29^7 ∼ 10^10. Obviously, only a minuscule fraction of all possible 7-grams will ever occur, but one does not know how many exactly. Thus one is forced to work in the space of full cardinality, which is ridiculously undersampled.
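The arithmetic of the example (the corpus size below is an invented round number for illustration):

```python
import math

K = 29 ** 7          # naive number of 7-grams over a 29-character alphabet
N = 10 ** 8          # an invented, generous corpus of 7-gram samples
print(K)             # 17249876309, i.e. about 1.7e10
print(N / K)         # average samples per bin, well below one
print(math.log(K))   # maximum possible entropy, about 23.6 nats
```

Even a hundred-million-sample corpus leaves fewer than 0.01 samples per bin on average, exactly the regime where the usual estimators fail.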
A remarkable property of the NSB method, as follows from the saddle point solution in Sec. 3, is that it works even for finite N and extremely big K (provided, of course, that there are coincidences). Moreover, if K → ∞, the method simplifies since then one should only keep the first term in the expansion, Eq. (12). Even more interestingly, for every β ≫ 1/K the a priori distribution of entropy becomes an exact delta function since the variance of entropy drops to zero as 1/K, see Eq. (5). Thus the NSB technique becomes more precise as K increases. So the solution to the problem of unknown cardinality is to use an upper bound estimate for K: it is much better to overestimate K than to underestimate it. If desired, one may even assume that K → ∞ to simplify the calculations.
It is important to understand which additional assumptions are used to reach this conclusion. How can a few data points specify the entropy of a variable with potentially infinite cardinality? As explained in Ref. [7], a typical distribution in the Dirichlet family has a very particular rank ordered (Zipf) plot: the number of bins with probability mass less than some q is given by an incomplete B-function, I, where B stands for the usual complete B-function. NSB fits for a proper value of β (and κ = Kβ) using the bins with coincidences, the head of the rank ordered plot. But knowing β immediately defines the tails, where no data have been observed yet, and the entropy can be calculated. Thus if the Zipf plot of the distribution being studied has a substantially longer tail than allowed by Eq. (23), then one should suspect the results of the method. For example, NSB will produce wrong estimates for a distribution with q_1 = 0.5, q_2 = · · · = q_K = 0.5/(K − 1), and K → ∞.
With this caution in mind, we may now try to calculate the estimates of the entropy and its variance for extremely large K. We want them to be valid even if the saddle point analysis of Sec. 3 fails because ∆ is not large enough. In this case β* → 0, but κ* = Kβ* remains an ordinary number. The range of entropies is now 0 ≤ S ≤ ln K → ∞, so the prior on S produced by P(q; β) is (almost) uniform over a semi-infinite range and thus non-normalizable. Similarly, there is a problem normalizing P_β(q), Eq. (2). However, as is common in Bayesian statistics, these problems are easily removed by an appropriate limiting procedure, and we will not dwell on them in what follows.
When doing the integrals in Eq. (7), we need to find out how ⟨S(n)⟩_β depends on ξ(β). In the vicinity of the maximum of ρ, the formula for ⟨S(n)⟩_β from Ref. [9] shows that, for K → ∞, δ = ∆/N → 0, and κ in the vicinity of κ*, the posterior average of the entropy is almost indistinguishable from ξ, the a priori average; the expression for the second moment is similar, but complicated enough that we choose not to write it here, and it is likewise indistinguishable from ξ². Since we are now interested in small ∆ (otherwise we can use the saddle point analysis), we will use ξ^m instead of ⟨S^m⟩_β in Eq. (7). The error of this approximation is O(δ, 1/K) = O(1/N, 1/K).

Now we need to slightly transform the Lagrangian, Eq. (10). First, we drop terms that do not depend on κ, since they appear in both the numerator and the denominator of Eq. (7) and thus cancel. Second, we expand around 1/K = 0. This gives
L ≈ ln Γ(κ + N) − ln Γ(κ) − K_1 ln κ .  (25)
We note that κ is large in the vicinity of the saddle if δ is small and N is large, cf. Eq. (18). Thus, by the definition of the ψ-functions, ln Γ(κ + N) − ln Γ(κ) ≈ N ψ_0(κ) + N² ψ_1(κ)/2. Further, ψ_0(κ) ≈ ln κ and ψ_1(κ) ≈ 1/κ [11]. Finally, since ψ_0(1) = −C_γ, where C_γ is Euler's constant, Eq. (4) says that ξ − C_γ ≈ ln κ. Combining all this, we get
L ≈ ∆ (ξ − C_γ) + (N²/2) exp(C_γ − ξ) ,  (26)
where the ≈ sign means that we are working with precision O(1/N, 1/K). Now we can write
⟨S⟩ ≈ ∫ dξ ξ e^{−L(ξ)} / ∫ dξ e^{−L(ξ)} ,  (27)
⟨S²⟩ ≈ ∫ dξ ξ² e^{−L(ξ)} / ∫ dξ e^{−L(ξ)} .  (28)
The integrals involved in these expressions are easily calculated by substituting exp(C_γ − ξ) = τ and replacing the limits of integration, (1/K) exp(C_γ) ≤ τ ≤ exp(C_γ), by 0 ≤ τ ≤ ∞. This replacement introduces errors of the order (1/K)^∆ at the lower limit and δ² exp(−1/δ²) at the upper limit; both errors are within our approximation precision if there is at least one coincidence. The integrals then reduce to moments of a Γ-distribution in τ,
∫_0^∞ dτ τ^{∆−1} e^{−N²τ/2} = Γ(∆) (2/N²)^∆ ,  ⟨ln τ⟩ = ψ_0(∆) − ln(N²/2) ,  ⟨ln²τ⟩ − ⟨ln τ⟩² = ψ_1(∆) .  (29)
Finally, substituting Eq. (29) into Eqs. (27, 28), we get for the moments of the entropy
⟨S⟩ ≈ (C_γ − ln 2) + 2 ln N − ψ_0(∆) ,  (30)
⟨(δS)²⟩ ≈ ψ_1(∆) .  (31)
These equations are valid to zeroth order in 1/K and 1/N. They provide a simple, yet nontrivial, estimate of the entropy that can be used even if the cardinality of the variable is unknown. Note that Eq. (31) agrees with Eq. (22) since, for large ∆, ψ_1(∆) ≈ 1/∆. Interestingly, Eqs. (30, 31) bear a remarkable resemblance to Ma's method [12].
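The closed form is trivial to implement. The sketch below codes Eqs. (30, 31) with hand-rolled polygamma routines (the routines, the function name, and the example counts are this sketch's assumptions) and checks the consistency just stated, ψ_1(∆) ≈ 1/∆ for large ∆:

```python
import math

def digamma(x):
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def trigamma(x):
    r = 0.0
    while x < 6.0:
        r += 1.0 / (x * x)
        x += 1.0
    f = 1.0 / (x * x)
    return r + 1.0 / x + 0.5 * f + f / x * (1/6 - f * (1/30 - f / 42))

EULER_GAMMA = 0.5772156649015329

def entropy_nsb_infinite_K(counts):
    """Eqs. (30, 31): entropy estimate and its variance for K -> infinity."""
    N = sum(counts)
    delta = N - sum(1 for n in counts if n > 0)   # number of coincidences
    if delta == 0:
        raise ValueError("no coincidences: the estimator is undefined")
    S = (EULER_GAMMA - math.log(2)) + 2 * math.log(N) - digamma(delta)
    return S, trigamma(delta)

S, var = entropy_nsb_infinite_K([4, 2, 2, 1, 1])   # N = 10, Delta = 5
print(f"S = {S:.3f} +- {math.sqrt(var):.3f} nats")
```

For the toy counts this gives S ≈ 2.98 ± 0.47 nats; with ∆ = 0 the routine refuses to answer, mirroring the breakdown of Eq. (15) without coincidences.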

Conclusion
We have further developed the NSB method for estimating entropies of discrete random variables. The saddle point of the posterior integrals has been found in terms of a power series in 1/K and δ. It is now clear that the validity of the saddle point approximation depends not on the total number of samples, but only on the number of coinciding ones. Further, we have extended the method to the case of an infinite or unknown number of bins and very few coincidences, obtaining closed form solutions for the estimates of the entropy and of its variance. Moreover, we specified an easily verifiable condition (extremely long tails) under which the estimator is not to be trusted. To our knowledge, this is the first estimator that can boast all of these features simultaneously. This brings us one more step closer to a reliable, model independent estimation of statistics of undersampled probability distributions.