
Entropy 2011, 13(3), 650-667; https://doi.org/10.3390/e13030650

Article
k-Nearest Neighbor Based Consistent Entropy Estimation for Hyperspherical Distributions
Affiliations:
1 Health Effects Laboratory Division, National Institute for Occupational Safety and Health, Morgantown, WV 26505, USA
2 Department of Statistics, West Virginia University, Morgantown, WV 26506, USA
* Authors to whom correspondence should be addressed.
Received: 22 December 2010; in revised form: 27 January 2011 / Accepted: 28 February 2011 / Published: 8 March 2011

Abstract: A consistent entropy estimator for hyperspherical data is proposed based on the k-nearest neighbor (knn) approach. The asymptotic unbiasedness and consistency of the estimator are proved. Moreover, cross entropy and Kullback-Leibler (KL) divergence estimators are also discussed. Simulation studies are conducted to assess the performance of the estimators for models including uniform and von Mises-Fisher distributions. The proposed knn entropy estimator is compared with its moment-based counterpart via simulations. The results show that the two methods are comparable.
Keywords:
hyperspherical distribution; directional data; differential entropy; cross entropy; Kullback-Leibler divergence; k-nearest neighbor

1. Introduction

The Shannon (or differential) entropy of a continuously distributed random variable (r.v.) X with probability density function (pdf) f is widely used in probability theory and information theory as a measure of uncertainty. It is defined as the negative mean of the logarithm of the density function, i.e.,
$H(f) = -E_f[\ln f(X)]$
k-Nearest neighbor (knn) density estimators were proposed by Mack and Rosenblatt [1]. Penrose and Yukich [2] studied laws of large numbers for k-nearest neighbor distances. Nearest neighbor entropy estimators for $X \in R^p$ were studied by Kozachenko and Leonenko [3]. Singh et al. [4] and Leonenko et al. [5] extended these estimators using k-nearest neighbors. Mnatsakanov et al. [6] studied knn entropy estimators with variable rather than fixed k. Eggermont and LaRiccia [7] studied the kernel entropy estimator for univariate smooth distributions. Li et al. [8] studied parametric and nonparametric entropy estimators for univariate multimodal circular distributions. Misra et al. [9] studied knn entropy estimators for data from the Cartesian product $[0, 2\pi)^p$. Recently, Mnatsakanov et al. [10] proposed an entropy estimator for hyperspherical data based on the moment-recovery (MR) approach (see also Section 4.3).
In this paper, we propose k-nearest neighbor entropy, cross-entropy and KL-divergence estimators for hyperspherical random vectors defined on a unit p-hypersphere $S p - 1$ centered at the origin in p-dimensional Euclidean space. Formally,
$S^{p-1} = \{ x \in R^p : \|x\| = 1 \}$
The surface area $S_p$ of the hypersphere is well known: $S_p = 2\pi^{p/2} / \Gamma(p/2)$, where Γ is the gamma function. For a part of the hypersphere, the area of a cap with solid angle ϕ relative to its pole is given by Li [11] (cf. Gray [12]):
$S(\phi) = \frac{1}{2} S_p \left[ 1 - \mathrm{sgn}(\cos\phi)\, I_{\cos^2\phi}\left(\frac{1}{2}, \frac{p-1}{2}\right) \right]$
where sgn is the sign function, and $I x ( α , β )$ is the regularized incomplete beta function.
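As a numerical sketch (our own illustration, not code from the paper), the two area formulas above can be written directly with SciPy's regularized incomplete beta function; the function names `sphere_area` and `cap_area` are ours:

```python
import numpy as np
from scipy.special import betainc, gamma as gamma_fn

def sphere_area(p):
    """Total surface area S_p = 2 pi^{p/2} / Gamma(p/2) of the unit hypersphere in R^p."""
    return 2.0 * np.pi ** (p / 2.0) / gamma_fn(p / 2.0)

def cap_area(phi, p):
    """Area S(phi) of the cap with solid angle phi about its pole (Equation (3))."""
    return 0.5 * sphere_area(p) * (1.0 - np.sign(np.cos(phi)) *
                                   betainc(0.5, (p - 1) / 2.0, np.cos(phi) ** 2))
```

For $p = 3$ the formula reduces to the familiar $2\pi(1 - \cos\phi)$, so `cap_area(np.pi / 2, 3)` returns the hemisphere area $2\pi$.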
For a random vector from the unit circle $S 1$, the von Mises distribution vM$( μ , κ )$ is the most widely used model:
$f_{\mathrm{vM}}(x; \mu, \kappa) = \frac{1}{2\pi I_0(\kappa)}\, e^{\kappa \mu^T x}$
where T is the transpose operator, $\mu$ (with $\|\mu\| = 1$) is the mean direction vector, $\kappa \ge 0$ is the concentration parameter, and $I_0$ is the zero-order modified Bessel function of the first kind. Note that the von Mises distribution has a single mode. The multimodal extension of the von Mises distribution is the so-called generalized von Mises model; its properties are studied by Yfantis and Borgman [13] and Gatto and Jammalamadaka [14].
The generalization of von Mises distribution onto $S p - 1$ is the von Mises-Fisher distribution (also known as Langevin distribution) vMF$p ( μ , κ )$ with pdf,
$f_p(x; \mu, \kappa) = c_p(\kappa)\, e^{\kappa \mu^T x}$
where the normalization constant is
$c_p(\kappa) = \frac{\kappa^{p/2 - 1}}{(2\pi)^{p/2}\, I_{p/2 - 1}(\kappa)}$
and $I ν ( x )$ is the ν-order modified Bessel function of the first kind. See Mardia and Jupp [15] (p. 167) for details.
Since von Mises-Fisher distributions are members of the exponential family, by differentiating the cumulant generating function, one can obtain the mean and variance of $μ T X$:
$E_f[\mu^T X] = A_p(\kappa)$
and
$V_f[\mu^T X] = A_p'(\kappa)$
where $A_p(\kappa) = I_{p/2}(\kappa) / I_{p/2-1}(\kappa)$ and $A_p'(\kappa) = \frac{d}{d\kappa} A_p(\kappa) = 1 - A_p(\kappa)^2 - \frac{p-1}{\kappa} A_p(\kappa)$. See Watamori [16] for details. Thus the entropy of $f_p$ is:
$H(f_p) = -E_f[\ln f_p(X)] = -\ln c_p(\kappa) - \kappa E_f[\mu^T X] = -\ln c_p(\kappa) - \kappa A_p(\kappa)$
and
$V_f[\ln f_p(X)] = \kappa^2 V_f[\mu^T X] = \kappa^2 A_p'(\kappa)$
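The closed-form entropy above is what the simulations later use as the theoretical value. A short sketch of it in Python (our own, assuming SciPy; `vmf_entropy` is our name) reads:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def vmf_entropy(p, kappa):
    """Closed-form entropy of vMF_p(mu, kappa): -ln c_p(kappa) - kappa * A_p(kappa)."""
    nu = p / 2.0 - 1.0
    # log of the normalizing constant c_p(kappa)
    log_c = nu * np.log(kappa) - (p / 2.0) * np.log(2.0 * np.pi) - np.log(iv(nu, kappa))
    a_p = iv(nu + 1.0, kappa) / iv(nu, kappa)  # A_p(kappa)
    return -log_c - kappa * a_p
```

On the sphere ($p = 3$) the Bessel functions have elementary forms, so the result can be cross-checked against $\ln(4\pi \sinh\kappa / \kappa) - \kappa \coth\kappa + 1$; as $\kappa \to 0$ the entropy approaches the uniform value $\ln S_p$.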
Spherical distributions have been used to model the orientation distribution functions (ODF) in HARDI (High Angular Resolution Diffusion Imaging). Knutsson [17] proposed a mapping from 3-D ($p = 3$) orientation to a continuous and distance-preserving vector space ($p = 5$). Rieger and van Vliet [18] generalized the representation of orientation to p-dimensional spaces. McGraw et al. [19] used a vMF$_3$ mixture to model the 3-D ODF, and Bhalerao and Westin [20] applied a vMF$_5$ mixture to the 5-D ODF in the mapped space. The entropy of the ODF has been proposed as a measure of anisotropy (Özarslan et al. [21], Leow et al. [22]). McGraw et al. [19] used the Rényi entropy for the vMF$_3$ mixture since it has a closed form. Leow et al. [22] proposed an exponential isotropy measure based on the Shannon entropy. In addition, KL-divergence can be used to measure the closeness of two ODFs. A nonparametric knn-based entropy estimator for hyperspherical data provides an easy way to compute these entropy-related quantities.
In Section 2, we will propose the knn based entropy estimator for hyperspherical data. The unbiasedness and consistency are proved in this section. In Section 3, the knn estimator is extended to estimate cross entropy and KL-divergence. In Section 4, we present simulation studies using uniform hyperspherical distributions and aforementioned vMF probability models. In addition, the knn entropy estimator is compared with the MR approach proposed in Mnatsakanov et al. [10]. We conclude this study in Section 5.

2. Construction of knn Entropy Estimators

Let $X ∈ S p - 1$ be a random vector having pdf f and $X 1 , X 2 , … , X n$ be a set of i.i.d. random vectors drawn from f. To measure the nearness of two vectors x and $y$, we define a distance measure as the angle between them: $ϕ = arccos ( x T y )$ and denote the distance between $X i$ and its k-th nearest neighbor in the set of n random vectors by $ϕ i : = ϕ n , k , i$.
With the distance measure defined above and without loss of generality, the naïve k-nearest neighbor density estimate at $X i$ is thus,
$f_n(X_i) = \frac{k/n}{S(\phi_i)}$
where $S ( ϕ i )$ is the cap area as expressed by (3).
Let $L n , i$ be the natural logarithm of the density estimate at $X i$,
$L_{n,i} = \ln f_n(X_i) = \ln \frac{k/n}{S(\phi_i)}$
and thus we construct a similar k-nearest neighbor entropy estimator (cf. Singh et al. [4]):
$H_n(f) = -\frac{1}{n} \sum_{i=1}^n \left[ L_{n,i} - \ln k + \psi(k) \right] = \frac{1}{n} \sum_{i=1}^n \ln[n S(\phi_i)] - \psi(k)$
where $ψ ( k ) = Γ ′ ( k ) Γ ( k )$ is the digamma function.
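The estimator (9) is straightforward to implement. The following is our own sketch (not the authors' code), assuming SciPy; it computes all pairwise angles at once, which is adequate for moderate n:

```python
import numpy as np
from scipy.special import betainc, digamma, gamma as gamma_fn

def cap_area(phi, p):
    """Hyperspherical cap area S(phi) from Equation (3)."""
    sp = 2.0 * np.pi ** (p / 2.0) / gamma_fn(p / 2.0)  # total surface area S_p
    return 0.5 * sp * (1.0 - np.sign(np.cos(phi)) *
                       betainc(0.5, (p - 1) / 2.0, np.cos(phi) ** 2))

def knn_entropy(X, k=1):
    """knn entropy estimator H_n(f) of Equation (9); X is an (n, p) array of unit vectors."""
    n, p = X.shape
    ang = np.arccos(np.clip(X @ X.T, -1.0, 1.0))  # pairwise angular distances
    np.fill_diagonal(ang, np.inf)                 # a point is not its own neighbor
    phi = np.sort(ang, axis=1)[:, k - 1]          # angle to the k-th nearest neighbor
    return np.mean(np.log(n * cap_area(phi, p))) - digamma(k)
```

For a uniform sample on $S^2$ the estimate should be near $\ln 4\pi \approx 2.531$.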
In the sequel, we shall prove the asymptotic unbiasedness and consistency of $H n ( f )$.

2.1. Unbiasedness of $H n$

To prove the asymptotic unbiasedness, we first introduce the following lemma:
Lemma 2.1. For a fixed integer $k < n$, the asymptotic conditional mean of $L n , i$ given $X i = x$, is
$E\left[ \lim_{n \to \infty} L_{n,i} \mid X_i = x \right] = \ln f(x) + \ln k - \psi(k)$
Proof. For all $\ell \in R$, consider the conditional probability
$P\{ L_{n,i} < \ell \mid X_i = x \} = P\{ f_n(X_i) < e^{\ell} \mid X_i = x \} = P\left\{ S(\phi_i) > \frac{k}{n} e^{-\ell} \right\}$
Equation (11) implies that fewer than k of the samples fall within the cap $C_i$ centered at $X_i = x$ with area $S_{c_i} = \frac{k}{n} e^{-\ell}$.
If we let
$p_{n,i} = \int_{C_i} f(y)\, dy$
and $Y_{n,i}$ be the number of samples falling into the cap $C_i$, then $Y_{n,i} \sim \mathrm{BIN}(n, p_{n,i})$ is a binomial random variable. Therefore,
$P\{ L_{n,i} < \ell \mid X_i = x \} = P\{ Y_{n,i} < k \}$
If we let $k/n \to 0$ as $n \to \infty$, then $p_{n,i} \to 0$ as $n \to \infty$. It is reasonable to consider the Poisson approximation of $Y_{n,i}$ with mean $\lambda_{n,i} = n p_{n,i} = k e^{-\ell}\, \frac{p_{n,i}}{S_{c_i}}$. Thus, the limiting distribution of $Y_{n,i}$ is a Poisson distribution with mean:
$\lambda_i = \lim_{n \to \infty} \lambda_{n,i} = k e^{-\ell} \lim_{n \to \infty} \frac{p_{n,i}}{S_{c_i}} = k e^{-\ell} f(x)$
Define a random variable $L_i$ having the conditional cumulative distribution function
$F_{L_i, x}(\ell) = \lim_{n \to \infty} P\{ L_{n,i} < \ell \mid X_i = x \}$
then
$F_{L_i, x}(\ell) = \sum_{j=0}^{k-1} \frac{[k f(x) e^{-\ell}]^j}{j!}\, e^{-k f(x) e^{-\ell}}$
By taking the derivative with respect to ℓ, we obtain the conditional pdf of $L_i$:
$f_{L_i, x}(\ell) = \frac{[k f(x) e^{-\ell}]^k}{(k-1)!}\, e^{-k f(x) e^{-\ell}}$
The conditional mean of $L i$ is
$E[L_i \mid X_i = x] = \int_{-\infty}^{\infty} \ell \cdot \frac{[k f(x) e^{-\ell}]^k}{(k-1)!}\, e^{-k f(x) e^{-\ell}}\, d\ell$
By the change of variable $z = k f(x) e^{-\ell}$,
$E[L_i \mid X_i = x] = \int_0^{\infty} [\ln f(x) + \ln k - \ln z]\, \frac{z^{k-1}}{(k-1)!}\, e^{-z}\, dz = \ln f(x) + \ln k - \int_0^{\infty} \ln z\, \frac{z^{k-1}}{(k-1)!}\, e^{-z}\, dz = \ln f(x) + \ln k - \psi(k)$
Corollary 2.2. Given $X_i = x$, let $\eta_{n,k,x} := n S(\phi_i) = k e^{-L_{n,i}}$; then $\ln \eta_{n,k,x} = \ln k - L_{n,i}$ converges in distribution to $\ln \eta_{k,x} = \ln k - L_i$, and
$E[\ln \eta_{k,x}] = -\ln f(x) + \psi(k)$
Moreover, $\eta_{k,x}$ is a gamma r.v. with shape parameter k and rate parameter $f(x)$.
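Corollary 2.2 admits a quick numerical check (our own sketch, not from the paper): sampling from the limiting gamma law with a hypothetical density value $f(x)$ reproduces the identity $E[\ln \eta_{k,x}] = \psi(k) - \ln f(x)$:

```python
import numpy as np
from scipy.special import digamma

# Assumed illustration: eta_{k,x} ~ Gamma(shape k, rate f(x)), per Corollary 2.2
rng = np.random.default_rng(1)
k, fx = 3, 0.25                               # k neighbors; fx is a hypothetical f(x)
eta = rng.gamma(shape=k, scale=1.0 / fx, size=200_000)
# Monte Carlo mean of ln(eta) versus the closed form psi(k) - ln f(x)
print(np.mean(np.log(eta)), digamma(k) - np.log(fx))
```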
Theorem 2.3. If a pdf f satisfies the following conditions for some $\epsilon > 0$:
$(A1): \int_{S^{p-1}} |\ln f(x)|^{1+\epsilon} f(x)\, dx < \infty$,
$(A2): \int_{S^{p-1}} \int_{S^{p-1}} \left| \ln\left[ 1 - I_{(x^T y)^2}\left(\frac{1}{2}, \frac{p-1}{2}\right) \right] \right|^{1+\epsilon} f(x) f(y)\, dx\, dy < \infty$, then the estimator proposed in (9) is asymptotically unbiased.
Proof. According to Corollary 2.2 and condition ($A 2$), we can show (see (16)–(22)) that for almost all values of $x ∈ S p - 1$, there exists a positive constant C such that
$(i): E[|\ln \eta_{n,k,x}|^{1+\epsilon}] < C$ for all sufficiently large n.
Hence, applying the moment convergence theorem [23] (p. 186), it follows that
$\lim_{n \to \infty} E[\ln \eta_{n,k,x}] = E[\ln \eta_{k,x}] = -\ln f(x) + \psi(k)$
for almost all values of $x ∈ S p - 1$. In addition, using Fatou’s lemma and condition ($A 1$), we have that
$\limsup_{n \to \infty} \int_{S^{p-1}} |E \ln \eta_{n,k,x}|^{1+\epsilon} f(x)\, dx \le \int_{S^{p-1}} \limsup_{n \to \infty} |E \ln \eta_{n,k,x}|^{1+\epsilon} f(x)\, dx = \int_{S^{p-1}} |-\ln f(x) + \psi(k)|^{1+\epsilon} f(x)\, dx \le C_\epsilon \int_{S^{p-1}} |\ln f(x)|^{1+\epsilon} f(x)\, dx + C_\epsilon |\psi(k)|^{1+\epsilon} < \infty$
where $C ϵ$ is a constant. Therefore,
$\lim_{n \to \infty} E[H_n(f)] = \lim_{n \to \infty} E_f[\ln(n S(\phi_i))] - \psi(k) = \lim_{n \to \infty} \int_{S^{p-1}} E \ln \eta_{n,k,x}\, f(x)\, dx - \psi(k) = \int_{S^{p-1}} \lim_{n \to \infty} E \ln \eta_{n,k,x}\, f(x)\, dx - \psi(k) = \int_{S^{p-1}} E \ln \eta_{k,x}\, f(x)\, dx - \psi(k) = \int_{S^{p-1}} [-\ln f(x) + \psi(k)] f(x)\, dx - \psi(k) = H(f)$
To show $( i )$, one can follow the arguments similar to those used in the proof of Theorem 1 in [24]. Indeed, we can first establish
$(ii): E[|\ln \eta_{2,1,x}|^{1+\epsilon}] < C$.
Namely, we justify that $( i )$ is valid when $n = 2$ and $k = 1$. But the inequality $( i i )$ follows immediately from the condition $( A 2 )$ and
$E\left[|\ln \eta_{2,1,x}|^{1+\epsilon}\right] = E\left[\left|\ln[2 S(\phi_{1,2})]\right|^{1+\epsilon} \,\middle|\, X_1 = x\right] = E\left[\left|\ln\left( S_p\left[ 1 - \mathrm{sgn}(x^T X_2)\, I_{(x^T X_2)^2}\left(\tfrac{1}{2}, \tfrac{p-1}{2}\right) \right] \right)\right|^{1+\epsilon}\right] \le C_\epsilon |\ln S_p|^{1+\epsilon} + C_\epsilon |\ln 2|^{1+\epsilon} + C_\epsilon\, E_f\left[\left|\ln\left[ 1 - I_{(x^T X_2)^2}\left(\tfrac{1}{2}, \tfrac{p-1}{2}\right) \right]\right|^{1+\epsilon} 1(x^T X_2 > 0)\right] = C_\epsilon\left[ |\ln S_p|^{1+\epsilon} + |\ln 2|^{1+\epsilon} \right] + \tfrac{1}{2} C_\epsilon\, E_f\left[\left|\ln\left[ 1 - I_{(x^T X_2)^2}\left(\tfrac{1}{2}, \tfrac{p-1}{2}\right) \right]\right|^{1+\epsilon}\right]$
Here $ϕ 1 , 2 = arccos ( X 1 T X 2 )$ and $1 ( · )$ is the indicator function.
Now let us denote the distribution function of $η n , k , x$ by
$G_{n,k,x}(u) = P(\eta_{n,k,x} \le u) = P(n S(\phi_{n,k,1}) \le u \mid X_1 = x) = 1 - \sum_{j=0}^{k-1} \binom{n-1}{j} \left[ \int_{C_x(\phi_n(u))} f(y)\, dy \right]^j \left[ 1 - \int_{C_x(\phi_n(u))} f(y)\, dy \right]^{n-1-j}$
where $\phi_n(u) = S^{-1}(u/n)$ and $C_x(\phi)$ is the cap $\{ y \in S^{p-1} : y^T x \ge \cos\phi \}$ with pole x and base radius $\sin\phi$. Note also that $S(\phi)$ (see (3)) and $\phi_n(u) = S^{-1}(u/n)$ are both increasing functions.
Now, one can see (cf. (66) in [24]):
$E[|\ln \eta_{n,k,x}|^{1+\epsilon}] \le I_1 + I_2 + I_3$
where
$I_1 = (1+\epsilon) \int_0^1 \left( \ln \frac{1}{u} \right)^{\epsilon} u^{-1} G_{n,k,x}(u)\, du$
$I_2 = (1+\epsilon) \int_1^n (\ln u)^{\epsilon} u^{-1} (1 - G_{n,k,x}(u))\, du$
$I_3 = (1+\epsilon) \int_n^{n S_p} (\ln u)^{\epsilon} u^{-1} (1 - G_{n,k,x}(u))\, du$
It is easy to see that for sufficiently large n and almost all $x ∈ S p - 1$:
$I_1 < (1+\epsilon)\, f(x)\, \Gamma(1+\epsilon) < \infty$
and
$I_2 \le (1+\epsilon) \sum_{j=0}^{k-1} \left[ \sup_{y \in S^{p-1}} f(y) \right]^j f(x)^{-j-\epsilon}\, \Gamma(j+\epsilon) < \infty$
(cf. (89) and (85) in [24], respectively).
Finally, let us show that $I_3 \to 0$ as $n \to \infty$. For each x with $f(x) > 0$, if we choose $\delta \in (0, f(x))$, then for all sufficiently large n, $n \int_{C_x(\phi_n(n))} f(y)\, dy > f(x) - \delta$, since the area of $C_x(\phi_n(n))$ is equal to $\frac{1}{n}$. Using arguments similar to those used in (69)–(72) from [24], we have
$I_3 \le (1+\epsilon)\, n^{k-1} k\, e^{-(n-k-1)(f(x)-\delta)\frac{1}{n}} \times \int_n^{n S_p} (\ln u)^{\epsilon} u^{-1} \left[ 1 - \int_{C_x(\phi_n(u))} f(y)\, dy \right] du$
The integral in (19), after the change of variable $t = 2u/n$, takes the form
$\int_{2/n}^{2 S_p} \left( \ln \frac{nt}{2} \right)^{\epsilon} t^{-1} (1 - G_{2,1,x}(t))\, dt = \left( \int_{2/n}^{1} + \int_1^{2 S_p} \right) \left( \ln \frac{nt}{2} \right)^{\epsilon} t^{-1} (1 - G_{2,1,x}(t))\, dt$
since $\phi_n\left(\frac{nt}{2}\right) = S^{-1}\left(\frac{t}{2}\right) = \phi_2(t)$ and $1 - \int_{C_x(\phi_2(t))} f(y)\, dy = 1 - G_{2,1,x}(t)$. The first integral on the right side of (20) is bounded as follows:
$\int_{2/n}^{1} \left( \ln \frac{nt}{2} \right)^{\epsilon} t^{-1} (1 - G_{2,1,x}(t))\, dt \le \frac{n}{2} \left( \ln \frac{n}{2} \right)^{\epsilon}$
while for the second one, we have
$\int_1^{2 S_p} \left( \ln \frac{nt}{2} \right)^{\epsilon} t^{-1} (1 - G_{2,1,x}(t))\, dt \le C_\epsilon \left( \ln \frac{n}{2} \right)^{\epsilon} E[\eta_{2,1,x}] + C_\epsilon \left( \ln \frac{n}{2} \right)^{\epsilon} B$
where
$B = \int_1^{2 S_p} (\ln t)^{\epsilon} t^{-1} (1 - G_{2,1,x}(t))\, dt = \frac{1}{1+\epsilon}\, E\left[ |\ln \eta_{2,1,x}|^{1+\epsilon} \right]$
Combination of (15)–(22) and $( i i )$ yields $( i )$.
Remark. Note that
$1 - I_{t^2}\left( \frac{1}{2}, \frac{p-1}{2} \right) \approx \frac{1}{B\left(\frac{1}{2}, \frac{p-1}{2}\right)} (t^2)^{-1/2} (1 - t^2)^{\frac{p-1}{2}} \approx \frac{2^{\frac{p-1}{2}}}{B\left(\frac{1}{2}, \frac{p-1}{2}\right)} (1 - t)^{\frac{p-1}{2}} \quad \text{as } t \uparrow 1$
where $B(\cdot, \cdot)$ is the beta function. Hence, in the conditions $(A_j)$, $j = 2$, 4, 6 and 8, the difference $1 - I_{(x^T y)^2}\left(\frac{1}{2}, \frac{p-1}{2}\right)$ can be replaced by $1 - x^T y$.

2.2. Consistency of $H n$

Lemma 2.4. Under the following conditions for some $\epsilon > 0$:
$(A3): \int_{S^{p-1}} |\ln f(x)|^{2+\epsilon} f(x)\, dx < \infty$,
$(A4): \int_{S^{p-1}} \int_{S^{p-1}} \left| \ln\left[ 1 - I_{(x^T y)^2}\left(\frac{1}{2}, \frac{p-1}{2}\right) \right] \right|^{2+\epsilon} f(x) f(y)\, dx\, dy < \infty$,
the asymptotic variance of $L_{n,i}$ is finite and equals $V_f[\ln f(X)] + \psi_1(k)$, where $\psi_1(k)$ is the trigamma function.
Proof. The conditions $(A3)$ and $(A4)$, together with an argument similar to the one used in the proof of Theorem 2.3, yield
$\lim_{n \to \infty} E[L_{n,i}^2 \mid X_i = x] = E[L_i^2 \mid X_i = x]$
Therefore, it is sufficient to prove that $V_f[L_i] = V_f[\ln f(X)] + \psi_1(k)$. Similarly to (14), we have
$E[L_i^2 \mid X_i = x] = \int_0^{\infty} [\ln f(x) + \ln k - \ln z]^2\, \frac{z^{k-1}}{(k-1)!}\, e^{-z}\, dz = [\ln f(x) + \ln k]^2 - 2[\ln f(x) + \ln k]\, \psi(k) + \Gamma''(k)/\Gamma(k)$
Since $\Gamma''(k)/\Gamma(k) = \psi(k)^2 + \psi_1(k)$,
$E[L_i^2 \mid X_i = x] = [\ln f(x) + \ln k - \psi(k)]^2 + \psi_1(k)$
After some algebra, it can be shown that
$V_f[L_i] = E_f[(\ln f(X))^2] - (E_f[\ln f(X)])^2 + \psi_1(k) = V_f[\ln f(X)] + \psi_1(k)$
Lemma 2.5. For a fixed integer $k < n$, the $L_{n,i}$ are asymptotically pairwise independent.
Proof. Consider a pair of random variables $L_{n,i}$ and $L_{n,j}$ with $i \ne j$ and $X_i \ne X_j$. Following an argument similar to that of Lemma 2.1, the caps $C_i$ and $C_j$ shrink as n increases. Thus, it is safe to assume that $C_i$ and $C_j$ are disjoint for large n, so that $L_{n,i}$ and $L_{n,j}$ are independent. Hence Lemma 2.5 follows. ☐
Theorem 2.6. Under the conditions $( A 1 )$ through $( A 4 )$, the variance of $H n ( f )$ decreases with sample size n, that is
$\lim_{n \to \infty} V_f[H_n(f)] = 0$
and $H n ( f )$ is a consistent estimator of $H ( f )$.
Theorem 2.6 can be established by using Theorem 2.3, Lemmas 2.4 and 2.5, and
$\lim_{n \to \infty} V_f[H_n(f)] = \lim_{n \to \infty} \frac{1}{n} \left\{ V_f[\ln f(X)] + \psi_1(k) \right\} = 0$
For a finite sample, the variance of $H_n(f)$ can be approximated by $\frac{1}{n} \{ V_f[\ln f(X)] + \psi_1(k) \}$. For instance, for the uniform distribution, $V_f[\ln f(X)] = 0$ and $V[H_n(f)] \approx \psi_1(k)/n$; for a vMF$_p(\mu, \kappa)$, $V[H_n(f)] \approx \frac{1}{n}[\kappa^2 A_p'(\kappa) + \psi_1(k)]$. See the illustration in Figure 1. The simulation was done with sample size $n = 1000$ and $N = 10,000$ replications. Since $\psi_1(k)$ is a decreasing function, the variance of $H_n(f)$ decreases as k increases.
Figure 1. Variances of $H n ( f )$ by simulation and approximation.

3. Estimation of Cross Entropy and KL-divergence

3.1. Estimation of Cross Entropy

The cross entropy between continuous pdfs f and g is defined as
$H(f, g) = -\int f(x) \ln g(x)\, dx$
Given a random sample of size n from f, {$X 1 , X 2 , … , X n$}, and a random sample of size m from g, {$Y 1 , Y 2 , … , Y m$}, on a hypersphere, denote the knn density estimator of g by $g m$. Similarly to (7),
$g_m(X_i) = \frac{k/m}{S(\varphi_i)}$
where $φ i$ is the distance from $X i$ to its k-th nearest neighbor in {$Y 1 , Y 2 , … , Y m$}. Analogously to the entropy estimator (9), the cross entropy can be estimated by:
$H_{n,m}(f, g) = \frac{1}{n} \sum_{i=1}^n \ln S(\varphi_i) + \ln m - \psi(k)$
Under the conditions $(A1)$–$(A4)$, for a fixed integer $k < \min(n, m)$, one can show that $H_{n,m}(f, g)$ is asymptotically unbiased. Moreover, by reasoning similar to that applied to $H_n(f)$, one can show that $H_{n,m}(f, g)$ is also consistent and that $V[H_{n,m}(f, g)] \approx \frac{1}{n} \{ V_f[\ln g(X)] + \psi_1(k) \}$. For example, when both f and g are vMF with the same mean direction and different concentration parameters, $\kappa_1$ and $\kappa_2$ respectively, the approximate variance is $\frac{1}{n}[\kappa_2^2 A_p'(\kappa_1) + \psi_1(k)]$. Figure 2 shows that the approximated and simulated variances of the knn cross-entropy estimator are close to each other and both decrease with k. The simulation was done with sample sizes $n = m = 1000$ and $N = 10,000$ replications.
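A sketch of the cross-entropy estimator (29), again our own illustration assuming SciPy rather than the authors' code:

```python
import numpy as np
from scipy.special import betainc, digamma, gamma as gamma_fn

def cap_area(phi, p):
    """Hyperspherical cap area S(phi) from Equation (3)."""
    sp = 2.0 * np.pi ** (p / 2.0) / gamma_fn(p / 2.0)
    return 0.5 * sp * (1.0 - np.sign(np.cos(phi)) *
                       betainc(0.5, (p - 1) / 2.0, np.cos(phi) ** 2))

def knn_cross_entropy(X, Y, k=1):
    """Cross-entropy estimator H_{n,m}(f, g) of Equation (29); X ~ f (n, p), Y ~ g (m, p)."""
    n, p = X.shape
    m = Y.shape[0]
    ang = np.arccos(np.clip(X @ Y.T, -1.0, 1.0))  # angles from each X_i to all Y_j
    varphi = np.sort(ang, axis=1)[:, k - 1]       # angle to the k-th nearest neighbor in Y
    return np.mean(np.log(cap_area(varphi, p))) + np.log(m) - digamma(k)
```

When f = g, the cross entropy reduces to the entropy, so two independent uniform samples on $S^2$ should give an estimate near $\ln 4\pi$.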
Figure 2. Variances of $H n , m$ by simulation and approximation.

3.2. Estimation of KL-Divergence

KL-divergence, also known as relative entropy, is used to measure the discrepancy between two distributions. Wang et al. [24] studied the knn estimator of KL-divergence for distributions defined on $R^p$. Here we propose the knn estimator of the KL-divergence of a continuous distribution f from g defined on a hypersphere. The KL-divergence is defined as:
$KL(f \| g) = E_f[\ln f(X)/g(X)] = \int f(x) \ln \frac{f(x)}{g(x)}\, dx$
Equation (30) can also be expressed as $K L ( f ∥ g ) = H ( f , g ) - H ( f )$. Then the knn estimator of KL-divergence is constructed as $H n , m ( f , g ) - H n ( f )$, i.e.,
$KL_{n,m}(f \| g) = \frac{1}{n} \sum_{i=1}^n \ln \frac{f_n(X_i)}{g_m(X_i)} = \frac{1}{n} \sum_{i=1}^n \ln \frac{S(\varphi_i)}{S(\phi_i)} + \ln \frac{m}{n}$
where $g_m(X_i)$ is defined as in (28). Moreover, for finite samples, the variance of the estimator, $V[KL_{n,m}]$, is approximately $\frac{1}{n} \{ V_f[\ln f(X)] + V_f[\ln g(X)] - 2\,\mathrm{Cov}_f[\ln f(X), \ln g(X)] + 2\psi_1(k) \}$. When f and g are vMF as mentioned above, with concentration parameters $\kappa_1$ and $\kappa_2$, respectively, we have:
$V_f[\ln f(X)] = \kappa_1^2 A_p'(\kappa_1), \quad V_f[\ln g(X)] = \kappa_2^2 A_p'(\kappa_1)$
and
$\mathrm{Cov}_f[\ln f(X), \ln g(X)] = \kappa_1 \kappa_2 A_p'(\kappa_1)$
So the approximate variance is $\frac{1}{n}[(\kappa_1 - \kappa_2)^2 A_p'(\kappa_1) + 2\psi_1(k)]$. Figure 3 shows the approximated and simulated variances of the knn estimators for KL-divergence. The approximation for the von Mises-Fisher distribution is not as good as that for the uniform distribution. This could be due to the modality of the von Mises-Fisher distribution or the finite sample sizes; the larger the sample size, the closer the approximation is to the true value.
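The estimator (31) combines the within-sample and cross-sample nearest neighbor distances; note that the two $\psi(k)$ terms cancel. A self-contained sketch (ours, assuming SciPy):

```python
import numpy as np
from scipy.special import betainc, gamma as gamma_fn

def cap_area(phi, p):
    """Hyperspherical cap area S(phi) from Equation (3)."""
    sp = 2.0 * np.pi ** (p / 2.0) / gamma_fn(p / 2.0)
    return 0.5 * sp * (1.0 - np.sign(np.cos(phi)) *
                       betainc(0.5, (p - 1) / 2.0, np.cos(phi) ** 2))

def knn_kl(X, Y, k=1):
    """KL-divergence estimator KL_{n,m}(f || g) of Equation (31); digamma terms cancel."""
    n, p = X.shape
    m = Y.shape[0]
    ang_xx = np.arccos(np.clip(X @ X.T, -1.0, 1.0))
    np.fill_diagonal(ang_xx, np.inf)
    phi = np.sort(ang_xx, axis=1)[:, k - 1]        # within-sample k-nn angle
    ang_xy = np.arccos(np.clip(X @ Y.T, -1.0, 1.0))
    varphi = np.sort(ang_xy, axis=1)[:, k - 1]     # cross-sample k-nn angle
    return np.mean(np.log(cap_area(varphi, p) / cap_area(phi, p))) + np.log(m / n)
```

For two independent samples from the same distribution the estimate should be near zero.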
Figure 3. Variances of $K L n , m$ by simulation and approximation.
In summary, we have
Corollary 3.1. (1) Under conditions $(A1)$, $(A2)$ and, for some $\epsilon > 0$,
$(A5): \int_{S^{p-1}} |\ln g(x)|^{1+\epsilon} f(x)\, dx < \infty$,
$(A6): \int_{S^{p-1}} \int_{S^{p-1}} \left| \ln\left[ 1 - I_{(x^T y)^2}\left(\frac{1}{2}, \frac{p-1}{2}\right) \right] \right|^{1+\epsilon} f(x) g(y)\, dx\, dy < \infty$,
for a fixed integer $k < \min(n, m)$, the knn estimator of KL-divergence given in (31) is asymptotically unbiased.
(2) Under conditions $(A3)$, $(A4)$ and, for some $\epsilon > 0$,
$(A7): \int_{S^{p-1}} |\ln g(x)|^{2+\epsilon} f(x)\, dx < \infty$,
$(A8): \int_{S^{p-1}} \int_{S^{p-1}} \left| \ln\left[ 1 - I_{(x^T y)^2}\left(\frac{1}{2}, \frac{p-1}{2}\right) \right] \right|^{2+\epsilon} f(x) g(y)\, dx\, dy < \infty$,
for a fixed integer $k < \min(n, m)$, the knn estimator of KL-divergence given in (31) is consistent.
To prove the two parts of this corollary, one can follow steps similar to those proposed in Wang et al. [24].

4. Simulation Study

To demonstrate the proposed knn entropy estimators and assess their performance for finite samples, we conducted simulations for the uniform distribution and von Mises-Fisher distributions with the p-coordinate unit vector, $e p$, as the common mean direction for $p = 3$ and 10. For each distribution, we drew samples of size $n = 100$, 500 and 1000. All simulations were repeated $N = 10 , 000$ times. Bias, standard deviation (SD) and root mean squared error (RMSE) were calculated.

4.1. Bias and Standard Deviation

Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9 show the simulated bias and standard deviation of the proposed entropy, cross-entropy and KL-divergence estimators for different values of k. The pattern for the standard deviation is clear: it decreases sharply at first and then slowly as k increases. This is consistent with the variance approximations described in Section 2 and Section 3. The pattern for the bias is more diverse. For uniform distributions, the bias is very small. When the underlying distribution has a mode, as for the vMF models used in the current simulations, the relation between bias and k becomes complex and the bias can be larger for larger values of k.
Figure 4. $| Bias |$ (dashed line) and standard deviation (solid line) of entropy estimate $H n$ for uniform distributions.
Figure 5. $| Bias |$ (dashed line) and standard deviation (solid line) of entropy estimate $H n$ for vMF$p ( e p , 1 )$ distributions.
Figure 6. $| Bias |$ (dashed line) and standard deviation (solid line) of cross entropy estimate $H n , m$ for uniform distributions.
Figure 7. $| Bias |$ (dashed line) and standard deviation (solid line) of cross entropy estimate $H n , m$ for $f = vMF p ( e p , 1 )$ and $g =$ uniform distributions.
Figure 8. $| Bias |$ (dashed line) and standard deviation (solid line) of KL-divergence estimate $K L n , m$ for uniform distributions.
Figure 9. $| Bias |$ (dashed line) and standard deviation (solid line) of KL-divergence estimate $K L n , m$ for $f = vMF p ( e p , 1 )$ and $g =$ uniform distributions.

4.2. Convergence

To validate the consistency, we conducted simulations with sample sizes n from 10 to 100,000 for the distribution models used above. Figure 10 and Figure 11 show the estimates and the theoretical values of entropy, cross-entropy and KL-divergence for different sample sizes with $k = 1$ and $k = \lfloor \ln n + 0.5 \rfloor$ (2–12), respectively. The proposed estimators converge to the corresponding theoretical values quickly, which verifies their consistency. The choice of k is an open problem for knn based estimation approaches. These figures show that using a larger k for larger n, e.g., of the order of the logarithm of n, gives slightly better performance.
Figure 10. Convergence of estimates with sample size n using the first nearest neighbor. For vMF$p$, $κ = 1$.
Figure 11. Convergence of estimates with sample size n using $k = ⌊ ln n + 0 . 5 ⌋$ nearest neighbors. For vMF$p$, $κ = 1$.

4.3. Comparison with the Moment-Recovered Construction

Another entropy estimator for hyperspherical data was developed recently by Mnatsakanov et al. [10] using the MR approach. We call this estimator the MR entropy estimator and denote it by $H_n^{(MR)}(f)$:
$H_n^{(MR)}(f) = -\frac{1}{n} \sum_{i=1}^n \ln P_{n,t}(X_i) + \ln S(\arccos t)$
where $P_{n,t}(X_i)$ is the estimated probability of the cap $\{ y \in S^{p-1} : y^T X_i \ge t \}$ defined by the revolution axis $X_i$, and t is the distance from the cap base to the origin and acts as a tuning parameter. Namely (see Mnatsakanov et al. [10]),
$P_{n,t}(X_i) = \frac{1}{n-1} \sum_{j=1, j \ne i}^{n} \sum_{k=\lfloor nt \rfloor + 1}^{n} \binom{n}{k} (X_j^T X_i)^k (1 - X_j^T X_i)^{n-k}$
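The inner sum in (33) is a binomial upper tail and can be evaluated as a regularized incomplete beta function. The following is our own reading of the estimator, not the authors' code: we clip negative cosines to zero, on the assumption that points on the far hemisphere contribute no cap probability mass, and we assume SciPy is available.

```python
import numpy as np
from scipy.special import betainc, gamma as gamma_fn

def cap_area(phi, p):
    """Hyperspherical cap area S(phi) from Equation (3)."""
    sp = 2.0 * np.pi ** (p / 2.0) / gamma_fn(p / 2.0)
    return 0.5 * sp * (1.0 - np.sign(np.cos(phi)) *
                       betainc(0.5, (p - 1) / 2.0, np.cos(phi) ** 2))

def mr_entropy(X, t=0.01):
    """MR entropy estimator H_n^(MR)(f) of Equations (32)-(33); X is (n, p) unit vectors."""
    n, p = X.shape
    m = int(np.floor(n * t))
    # sum_{k=m+1}^{n} C(n,k) x^k (1-x)^{n-k} = I_x(m+1, n-m); negative cosines clipped
    x = np.clip(X @ X.T, 0.0, 1.0)
    tail = betainc(m + 1, n - m, x)
    np.fill_diagonal(tail, 0.0)           # exclude j = i
    P = tail.sum(axis=1) / (n - 1)
    return -np.mean(np.log(P)) + np.log(cap_area(np.arccos(t), p))
```

For a uniform sample on $S^2$ with $t = 0.01$, the estimate should be near $\ln 4\pi$, consistent with the uniform rows of Table 1.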
The empirical comparison between $H_n(f)$ and $H_n^{(MR)}(f)$ was carried out via simulation for the uniform and vMF distributions. The results are presented in Table 1. The values of k and t listed in the table are optimal in the sense of minimizing RMSE. Z-tests and F-tests (at $\alpha = 0.05$) were performed to compare the bias, standard deviation (variance) and RMSE (MSE) between the knn estimators and the corresponding MR estimators. In general, for uniform distributions, there is no significant difference between the biases. For the other comparisons, the differences are significant. Specifically, knn achieves slightly smaller bias and RMSE values than the MR method. The standard deviations of the knn method are also smaller for the uniform distribution, but larger for vMF distributions, than those based on the MR approach.
Table 1. Comparison of knn and moment methods by simulations for spherical distributions.

                 knn                               MR
p    n     k    bias     SD       RMSE     t     bias     SD       RMSE
Uniform:
3    100   99   0.00500  0.00147  0.00521  0.01  0.00523  0.01188  0.01298
3    500   499  0.00100  0.00013  0.00101  0.01  0.00107  0.00233  0.00257
3    1000  999  0.00050  0.00005  0.00050  0.01  0.00051  0.00120  0.00130
10   100   99   0.00503  0.00130  0.00520  0.01  0.00528  0.01331  0.01432
10   500   499  0.00100  0.00011  0.00101  0.01  0.00102  0.00264  0.00283
10   1000  999  0.00050  0.00004  0.00050  0.01  0.00052  0.00130  0.00140
vMF$_p(e_p, 1)$:
3    100   71   0.01697  0.05142  0.05415  0.30  0.02929  0.04702  0.05540
3    500   337  0.00310  0.02336  0.02356  0.66  0.00969  0.02318  0.02512
3    1000  670  0.00145  0.01662  0.01668  0.74  0.00620  0.01658  0.01770
10   100   46   0.02395  0.02567  0.03511  0.12  0.02895  0.02363  0.03737
10   500   76   0.00702  0.01361  0.01531  0.40  0.01407  0.01247  0.01881
10   1000  90   0.00366  0.01026  0.01089  0.47  0.01115  0.00907  0.01437

5. Discussion and Conclusions

In this paper, knn based estimators of entropy, cross-entropy and Kullback-Leibler divergence are proposed for distributions on hyperspheres. Asymptotic properties such as unbiasedness and consistency are proved and validated by simulation studies using uniform and von Mises-Fisher distribution models. The variances of these estimators decrease with k. For uniform distributions, the variance is dominant and the bias is negligible. When the underlying distribution is modal, the bias can be large if k is large. In general, we conclude that the knn and MR entropy estimators have similar performance in terms of root mean squared error.

Acknowledgements and Disclaimer

The authors thank the anonymous referees for their helpful comments and suggestions. The research of Robert Mnatsakanov was supported by NSF grant DMS-0906639. The findings and conclusions in this report are those of the author(s) and do not necessarily represent the views of the National Institute for Occupational Safety and Health.

References

1. Mack, Y.; Rosenblatt, M. Multivariate k-nearest neighbor density estimates. J. Multivar. Anal. 1979, 9, 1–15. [Google Scholar] [CrossRef]
2. Penrose, M.D.; Yukich, J.E. Laws of large numbers and nearest neighbor distances. In Advances in Directional and Linear Statistics; Wells, M.T., SenGupta, A., Eds.; Physica-Verlag: Heidelberg, Germany, 2011; pp. 189–199. [Google Scholar]
3. Kozachenko, L.; Leonenko, N. On statistical estimation of entropy of a random vector. Probl. Inform. Transm. 1987, 23, 95–101. [Google Scholar]
4. Singh, H.; Misra, N.; Hnizdo, V.; Fedorowicz, A.; Demchuk, E. Nearest neighbor estimates of entropy. Am. J. Math. Manag. Sci. 2003, 23, 301–321. [Google Scholar] [CrossRef]
5. Leonenko, N.; Pronzato, L.; Savani, V. A class of Rényi information estimators for multidimensional densities. Ann. Stat. 2008, 36, 2153–2182, Correction: 2010, 38, 3837–3838. [Google Scholar] [CrossRef]
6. Mnatsakanov, R.; Misra, N.; Li, S.; Harner, E. kn-Nearest neighbor estimators of entropy. Math. Meth. Stat. 2008, 17, 261–277. [Google Scholar] [CrossRef]
7. Eggermont, P.P.; LaRiccia, V.N. Best asymptotic normality of the kernel density entropy estimator for smooth densities. IEEE Trans. Inf. Theor. 1999, 45, 1321–1326. [Google Scholar] [CrossRef]
8. Li, S.; Mnatsakanov, R.; Fedorowicz, A.; Andrew, M.E. Entropy estimation of multimodal circular distributions. In Proceedings of Joint Statistical Meetings, Denver, CO, USA, 3–7 August 2008; pp. 1828–1835.
9. Misra, N.; Singh, H.; Hnizdo, V. Nearest neighbor estimates of entropy for multivariate circular distributions. Entropy 2010, 12, 1125–1144. [Google Scholar] [CrossRef]
10. Mnatsakanov, R.M.; Li, S.; Harner, E.J. Estimation of multivariate Shannon entropies using moments. Aust. N. Z. J. Stat. 2011, in press. [Google Scholar]
11. Li, S. Concise formulas for the area and volume of a hyperspherical cap. Asian J. Math. Stat. 2011, 4, 66–70. [Google Scholar] [CrossRef]
12. Gray, A. Tubes, 2nd ed.; Birkhäuser-Verlag: Basel, Switzerland, 2004. [Google Scholar]
13. Yfantis, E.; Borgman, L. An extension of the von Mises distribution. Comm. Stat. Theor. Meth. 1982, 11, 1695–1076. [Google Scholar] [CrossRef]
14. Gatto, R.; Jammalamadaka, R. The generalized von Mises distribution. Stat. Methodol. 2007, 4, 341–353. [Google Scholar] [CrossRef]
15. Mardia, K.; Jupp, P. Directional Statistics; John Wiley & Sons, Ltd.: New York, NY, USA, 2000. [Google Scholar]
16. Watamori, Y. Statistical inference of Langevin distribution for directional data. Hiroshima Math. J. 1995, 26, 25–74. [Google Scholar]
17. Knutsson, H. Producing a continuous and distance preserving 5-D vector representation of 3-D orientation. In Proceedings of IEEE Computer Society Workshop on Computer Architecture for Pattern Analysis and Image Database Management, Miami Beach, FL, November 1985; pp. 175–182.
18. Rieger, B.; van Vliet, L. Representing orientation in n-dimensional spaces. Lect. Notes Comput. Sci. 2003, 2756, 17–24. [Google Scholar]
19. McGraw, T.; Vemuri, B.; Yezierski, R.; Mareci, T. Segmentation of high angular resolution diffusion MRI modeled as a field of von Mises-Fisher mixtures. Lect. Notes Comput. Sci. 2006, 3953, 463–475. [Google Scholar]
20. Bhalerao, A.; Westin, C.F. Hyperspherical von Mises-Fisher mixture (HvMF) modelling of high angular resolution diffusion MRI. Lect. Notes Comput. Sci. 2007, 4791, 236–243. [Google Scholar]
21. Özarslan, E.; Vemuri, B.C.; Mareci, T.H. Generalized scalar measures for diffusion MRI using trace, variance, and entropy. Magn. Reson. Med. Sci. 2005, 53, 866–876. [Google Scholar] [CrossRef] [PubMed]
22. Leow, A.; Zhu, S.; McMahon, K.; de Zubicaray, G.; Wright, M.; Thompson, P. A study of information gain in high angular resolution diffusion imaging (HARDI). In Proceedings of 2008 MICCAI Workshop on Computational Diffusion MRI, New York, NY, USA, 10 September 2008.
23. Loève, M. Probability Theory I, 4th ed.; Springer-Verlag: New York, NY, USA, 1977. [Google Scholar]
24. Wang, Q.; Kulkarni, S.R.; Verdú, S. Divergence estimation for multidimensional densities via k-nearest-neighbor distances. IEEE Trans. Inf. Theor. 2009, 55, 2392–2405. [Google Scholar] [CrossRef]