Article

Nearest Neighbor Estimates of Entropy for Multivariate Circular Distributions

Neeraj Misra, Harshinder Singh and Vladimir Hnizdo
1 Department of Statistics, West Virginia University, Morgantown, West Virginia 26506, USA
2 National Institute for Occupational Safety and Health, Morgantown, West Virginia 26505, USA
3 Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, Kanpur 208 016, India
* Author to whom correspondence should be addressed.
† Deceased.
Entropy 2010, 12(5), 1125-1144; https://doi.org/10.3390/e12051125
Submission received: 26 February 2010 / Accepted: 29 April 2010 / Published: 6 May 2010
(This article belongs to the Special Issue Configurational Entropy)

Abstract
In molecular sciences, the estimation of entropies of molecules is important for the understanding of many chemical and biological processes. Motivated by these applications, we consider the problem of estimating the entropies of circular random vectors and introduce non-parametric estimators based on circular distances between n sample points and their k-th nearest neighbors (NN), where k (≤ n − 1) is a fixed positive integer. The proposed NN estimators are based on two different circular distances, and are proven to be asymptotically unbiased and consistent. The performance of one of the circular-distance estimators is investigated and compared with that of the already established Euclidean-distance NN estimator, using Monte Carlo samples from an analytic distribution of six circular variables with an exactly known entropy and a large sample of seven internal-rotation angles in the molecule of tartaric acid obtained by a realistic molecular-dynamics simulation.

1. Introduction

Estimation of entropies of molecules is an important problem in molecular sciences. The internal configurational entropy of a molecule is the entropy of the joint distribution of the internal molecular coordinates (bond lengths, bond angles, and dihedral angles), and as such it is a measure of random fluctuations in these coordinates. The most significant contribution to the internal configurational entropy of a molecule comes from the fluctuations in dihedral angles (also called internal-rotation angles). Many important properties of complex molecules, such as their stability and adopted conformation, depend on random fluctuations in their internal coordinates. Estimation of the internal configurational entropy of molecules is therefore important for understanding many chemical and biological processes, such as the spontaneity of a chemical reaction, protein folding, intermolecular protein-protein interactions, and protein-ligand interactions. It is also key to the design of drugs that can stabilize the normally folded molecular structure or correct a misfolded structure, since protein misfolding is a cause of several diseases, such as Alzheimer's disease, mad cow disease, cystic fibrosis, and some types of cancer.
Estimation of the internal entropy of macromolecules, such as proteins, is a challenging problem because of the large number of correlated internal molecular coordinates. A commonly used method of estimating the internal entropy of a molecule, known as the quasi-harmonic approach, is based on the assumption of a multivariate normal distribution for the internal molecular coordinates [1]. Misra et al. [2] discussed the decision-theoretic estimation of the entropy of a multivariate normal distribution and obtained improvements over the best affine equivariant estimator under the squared error loss function. However, the assumption of a multivariate normal distribution for the internal coordinates of a molecule is appropriate only at low temperatures, when the fluctuations in its internal coordinates are small. At higher temperatures, the distributions of the dihedral angles of a complex molecule exhibit multimodality and skewness, and the multivariate normal distribution becomes inadequate.
Demchuk and Singh [3] discussed a circular probability approach for modeling the dihedral angles of a molecule in the estimation of internal rotational entropy. As an illustration, they modeled the torsional angle of the methanol molecule by a trimodal von Mises distribution and derived a bath-tub-shaped distribution for the torsional potential energy of the molecule. Singh et al. [4] introduced a torus version of a bivariate normal distribution for modeling two dihedral angles. The marginal distributions of the model are symmetric unimodal or symmetric bimodal, depending on the configuration of the parameters. A multivariate generalization of this bivariate model has been proposed by Mardia et al. [5]. Hnizdo et al. [6] and Darian et al. [7] used a Fourier series expansion approach for modeling univariate and bivariate distributions of molecular dihedral angles. Complex molecules, however, have many significantly correlated dihedral angles, whose joint distribution can take an arbitrary form. For this reason, a non-parametric approach to estimating the entropy of a circular random vector of arbitrary dimension is desirable.
Several non-parametric estimators of the entropy of an m-dimensional random variable X have been discussed in the literature. A common approach is to replace the probability density function (pdf) f ( · ) in the definition of the differential entropy,
H(f) = E_f\left[ -\ln f(\mathbf{X}) \right]    (1)
by its non-parametric kernel or histogram density estimator [8,9]. However, in most practical situations, implementation of such estimates in higher dimensions becomes difficult. In one dimension ( m = 1 ), several authors have proposed estimates of entropy in the context of testing goodness of fit [10,11]. Singh et al. [12] proposed the following asymptotically unbiased and consistent nearest-neighbor (NN) estimator of the entropy H ( f ) :
\hat{H}_{k,n} = \frac{m}{n} \sum_{i=1}^{n} \ln R_{i,k,n} + \ln\frac{\pi^{m/2}}{\Gamma(m/2+1)} + \gamma - L_{k-1} + \ln n    (2)
Here, R_{i,k,n} is the Euclidean distance of a point X_i to its k-th (k ≤ n − 1) nearest (in the Euclidean-distance sense) neighbor in a random sample X_1, X_2, …, X_n from the distribution f(·); γ ≈ 0.5772 is Euler's constant, L_0 = 0, L_j = \sum_{i=1}^{j} 1/i for j = 1, 2, …, and Γ(·) is the usual gamma function. For k = 1, the estimator Ĥ_{k,n} reduces to the NN estimator proposed by Kozachenko and Leonenko [13]. Results similar to those of [12] have also been reported by Goria et al. [14]. For the purpose of estimating the information-theoretic quantity of mutual information, Kraskov et al. [15] generalized the first-nearest-neighbor estimator of [13] in terms of k-th nearest-neighbor distances in a general metric, giving, however, explicit expressions only for the maximum and Euclidean metrics and without providing formal proofs of asymptotic unbiasedness and consistency. For m = k = 1, Tsybakov and van der Meulen [16] established the mean-square-root-n consistency of a truncated version of Ĥ_{k,n}. Earlier, Loftsgaarden and Quesenberry [17] had used NN distances to construct non-parametric estimates of a multivariate pdf. Recently, Mnatsakanov et al. [18] studied k-NN estimators of entropy in which the parameter k is assumed to be a function of the sample size.
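As a concrete illustration of (2), the following is a minimal Python sketch of the estimator (our own code, not that of [12] or of the ANN package cited later); it assumes NumPy and SciPy, and the function name nn_entropy_euclidean is ours:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gammaln

def nn_entropy_euclidean(x, k=1):
    """Euclidean-distance k-th NN entropy estimate of Eq. (2).

    x : (n, m) array of sample points; k : order of the nearest neighbor."""
    n, m = x.shape
    euler_gamma = 0.5772156649015329
    # query k+1 neighbors because the closest "neighbor" of x_i is x_i itself
    r = cKDTree(x).query(x, k=k + 1)[0][:, k]          # R_{i,k,n}
    l_km1 = sum(1.0 / i for i in range(1, k))          # L_{k-1}, with L_0 = 0
    log_unit_ball = (m / 2.0) * np.log(np.pi) - gammaln(m / 2.0 + 1.0)
    return (m * np.mean(np.log(r)) + log_unit_ball
            + euler_gamma - l_km1 + np.log(n))
```

For a large standard-normal sample in m dimensions the returned value should approach the exact entropy (m/2) ln(2πe), which is a convenient sanity check.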
The NN entropy estimator (2) uses Euclidean distances between the sample points. However, when the random variable X is circular, it is natural to base an NN estimate of entropy on a circular distance rather than the Euclidean distance. A circular observation can be regarded as a point on a circle of unit radius. Once an initial direction and an orientation of the circle have been chosen, each circular observation can be specified by the angle from the initial direction to the point on the circle corresponding to the observation. In this paper, we construct estimates of the entropy of an m-dimensional circular random vector Θ ∈ [0, 2π)^m based on two different definitions of circular distances. Let φ = (φ_1, …, φ_m) ∈ [0, 2π)^m and ψ = (ψ_1, …, ψ_m) ∈ [0, 2π)^m be two observations on an m-dimensional circular random vector Θ. We define two circular distance functions d_1(·,·) and d_2(·,·) as follows:
d_1(\phi, \psi) = \left[ \sum_{i=1}^{m} \left( \pi - \big|\pi - |\phi_i - \psi_i|\big| \right)^2 \right]^{1/2}    (3)
and
d_2(\phi, \psi) = \left\{ 2 \sum_{i=1}^{m} \left[ 1 - \cos(\phi_i - \psi_i) \right] \right\}^{1/2}    (4)
Note that π − |π − |φ_i − ψ_i||, i = 1, …, m, is the arc length between the points (cos φ_i, sin φ_i) and (cos ψ_i, sin ψ_i) on the unit circle S^1. On the other hand, [2(1 − cos(φ_i − ψ_i))]^{1/2} is the Euclidean (chord) distance between the points (cos φ_i, sin φ_i) and (cos ψ_i, sin ψ_i) on the unit circle S^1.
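Both distances are straightforward to evaluate; a minimal NumPy sketch (the function names d1 and d2 are ours) that transcribes (3) and (4) directly:

```python
import numpy as np

def d1(phi, psi):
    """Circular distance (3): root-sum-square of per-coordinate arc lengths."""
    arc = np.pi - np.abs(np.pi - np.abs(phi - psi))
    return np.sqrt(np.sum(arc ** 2, axis=-1))

def d2(phi, psi):
    """Circular distance (4): root-sum-square of per-coordinate chord lengths."""
    return np.sqrt(np.sum(2.0 * (1.0 - np.cos(phi - psi)), axis=-1))
```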
In Section 2 and Section 3, we propose explicit expressions for NN estimators of entropy based on the circular distance functions d_1(·,·) and d_2(·,·), respectively, and prove their asymptotic unbiasedness and consistency (with some mathematical details given in an Appendix). In Section 4, we compare the performance of the estimator based on the circular distance d_1 with that of the Euclidean-distance estimator (2), using Monte Carlo simulations from an analytic 6-dimensional circular distribution whose exact joint entropy is known. There we also apply the d_1-distance estimator to the problem of estimating the entropy of the 7-dimensional joint distribution of the internal-rotation angles in the molecule of tartaric acid, using a large sample of these angles obtained by a realistic molecular-dynamics simulation.

2. Nearest Neighbor Estimates of Entropy Based on Circular Distance d 1

For constructing nearest neighbor estimates of the entropy of a circular random vector based on the distance function (3), we first derive the expression for the volume of a ball
N_r(\psi) = \left\{ \theta \in [0, 2\pi)^m : d_1(\theta, \psi) < r \right\}    (5)
centered at ψ ∈ [0, 2π)^m and having a radius r ∈ [0, \sqrt{m}\,π], \sqrt{m}\,π being the maximum value of d_1(·,·).
Lemma 2.1. Let r ∈ [0, \sqrt{m}\,π], ψ ∈ [0, 2π)^m and let V_r be the volume of the ball N_r(ψ) defined by (5). Then
V_r = (2\pi)^m A_m\!\left( \frac{r^2}{\pi^2} \right)    (6)
where A_m(·) denotes the cumulative distribution function of the sum of m independent and identically distributed random variables, each having a beta distribution with parameters α = 1/2 and β = 1.
Proof. Without loss of generality, we may take ψ = ( 0 , , 0 ) . Then
V_r = \int_{R} d\theta = (2\pi)^m \Pr\left( \sum_{i=1}^{m} \left( \pi - |\pi - U_i| \right)^2 < r^2 \right)
where R = {θ = (θ_1, …, θ_m) ∈ [0, 2π)^m : \sum_{i=1}^{m} (π − |π − θ_i|)^2 < r^2}, dθ = dθ_1 ⋯ dθ_m, and U_1, …, U_m are independent and identically distributed uniform random variables on the interval (0, 2π). Define C_i = (π − |π − U_i|)^2/π^2, i = 1, …, m. Then C_1, …, C_m are independent and identically distributed beta random variables with parameters α = 1/2 and β = 1. Hence the result follows.
Remark 2.1. (i) For m = 1 and x ∈ [0, 1], we have A_1(x) = \sqrt{x}. For m = 2 and x ∈ [0, 2], it can be verified that
A_2(x) = \begin{cases} \dfrac{\pi x}{4}, & \text{if } 0 \le x \le 1 \\[4pt] \sqrt{x-1} + \dfrac{x}{2}\left( 2\arcsin\dfrac{1}{\sqrt{x}} - \dfrac{\pi}{2} \right), & \text{if } 1 < x \le 2 \end{cases}
(ii) For m = 3 and x ∈ [0, 3], it can be verified that
A_3(x) = \begin{cases} \dfrac{\pi x^{3/2}}{6}, & \text{if } 0 \le x \le 1 \\[4pt] \dfrac{\pi}{12}\left( -3 + 9x - 4x^{3/2} \right), & \text{if } 1 < x \le 2 \\[4pt] \sqrt{x-2} + (3x-1)\arcsin\dfrac{1}{\sqrt{x-1}} - \dfrac{\pi}{4}(3x-1) - \dfrac{\pi}{3} x^{3/2} + \dfrac{2}{3} x^{3/2}\left[ \arctan\sqrt{x(x-2)} + \arctan\sqrt{\dfrac{x-2}{x}} \right], & \text{if } 2 < x \le 3 \end{cases}
(iii) For a general m (≥ 1) and x ∈ [0, 1], it can be verified that
A_m(x) = \frac{(\pi x)^{m/2}}{2^m\, \Gamma(1 + m/2)}
(iv) For any m ≥ 2 and 1 ≤ x ≤ m, A_m(x) satisfies the following recursion relation
A_m(x) = \begin{cases} \displaystyle \int_{0}^{1} A_{m-1}(x-s)\, f_C(s)\, ds, & \text{if } 1 \le x \le m-1 \\[6pt] \displaystyle A_1(x-m+1) + \int_{x-m+1}^{1} A_{m-1}(x-s)\, f_C(s)\, ds, & \text{if } m-1 \le x \le m \end{cases}
where f_C(·) is the pdf of a beta random variable with parameters α = 1/2 and β = 1.
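For readers who want to evaluate A_m(·) numerically, a simple sketch (function names are ours) uses the closed form of Remark 2.1 (iii) for x ≤ 1 and, as an easy alternative to the recursion in (iv), a Monte Carlo evaluation of the Beta(1/2, 1) sum for general arguments:

```python
import numpy as np
from scipy.special import gamma

def A_m_exact_small_x(x, m):
    """Exact A_m(x) for 0 <= x <= 1, Remark 2.1 (iii)."""
    return (np.pi * x) ** (m / 2.0) / (2.0 ** m * gamma(1.0 + m / 2.0))

def A_m_monte_carlo(x, m, n_draws=500_000, seed=0):
    """Monte Carlo estimate of A_m(x): the CDF at x of a sum of m
    independent Beta(1/2, 1) random variables (valid for 0 <= x <= m)."""
    rng = np.random.default_rng(seed)
    s = rng.beta(0.5, 1.0, size=(n_draws, m)).sum(axis=1)
    return np.mean(s <= x)

# consistency check against Remark 2.1 (i)-(iii), e.g. A_2(0.5) = pi/8:
# print(A_m_exact_small_x(0.5, 2), A_m_monte_carlo(0.5, 2))
```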
The circular distance (3) becomes the Euclidean distance d_E(φ, ψ) = [\sum_{i=1}^{m} (φ_i − ψ_i)^2]^{1/2} when |φ_i − ψ_i| ≤ π, i = 1, 2, …, m. Circular-distance balls N_{r_q}(θ), where θ ∈ (0, 2π)^m and \lim_{q\to\infty} r_q = 0, thus tend to the corresponding Euclidean-distance balls as q → ∞. We can therefore apply the Lebesgue differentiation theorem [19] to the probability density function f(θ) of a circular random variable Θ ∈ [0, 2π)^m in the form
\lim_{q\to\infty} \frac{1}{V_{r_q}} \int_{N_{r_q}(\theta)} f(\mu)\, d\mu = f(\theta), \quad \text{at almost all } \theta \in [0, 2\pi)^m    (7)
where V_{r_q} is given by (6). Equation (7) suggests that, given a sufficiently large random sample Θ_1, …, Θ_n from the distribution of Θ, the probability density function f(θ) can be approximated at almost all θ ∈ [0, 2π)^m by
\hat{f}_n^{(1)}(\theta) = \frac{N_r(\theta)}{n V_r} = \frac{N_r(\theta)}{n (2\pi)^m A_m(r^2/\pi^2)}    (8)
where N_r(θ) denotes the cardinality of the set {i : Θ_i ∈ N_r(θ)} and r is sufficiently small.
Guided by this insight, we will now construct nearest neighbor estimates of entropy for an m-variate circular random vector Θ having a probability density function f(·). Let Θ_1, …, Θ_n be a random sample from the distribution of Θ and let k ∈ {1, …, n − 1} be a given positive integer. For i ∈ {1, …, n}, let d_1(i,k,n) denote the circular distance of Θ_i from its k-th closest neighbor, with respect to the circular distance d_1(·,·), i.e.,
d_1(i,k,n) = k\text{-th smallest of } \left\{ d_1(\Theta_j, \Theta_i),\ j = 1, \ldots, n,\ j \ne i \right\}, \quad i = 1, \ldots, n
Assume that the sample size n is sufficiently large, so that the distances d 1 ( i , k , n ) are small, on the average. Then, based on approximation (8), a reasonable estimator of f ( Θ i ) is
\hat{f}_{k,n}^{(1)}(\Theta_i) = \frac{k}{n (2\pi)^m A_m\!\left( d_1^2(i,k,n)/\pi^2 \right)}
and thus a reasonable estimator of the entropy H ( f ) = E ( - ln f ( Θ ) ) is
\hat{G}_{k,n}^{(1)} = -\frac{1}{n} \sum_{i=1}^{n} \ln \hat{f}_{k,n}^{(1)}(\Theta_i) = \frac{1}{n} \sum_{i=1}^{n} \ln\left[ \frac{n}{k} (2\pi)^m A_m\!\left( \frac{d_1^2(i,k,n)}{\pi^2} \right) \right]    (9)
In the following theorem, we derive the expression for the asymptotic mean of the estimator G ^ k , n ( 1 ) . Apart from the arguments for the interchange of the limit and the integral signs, the proof is similar to that of Theorem 8 of Singh et al. [12], who in their proof interchange the limit and the integral signs without mentioning the conditions under which it is allowed.
Theorem 2.1. Suppose that there exists an ε > 0 such that
\int_{[0,2\pi)^m} \left| \ln f(\theta) \right|^{1+\epsilon} f(\theta)\, d\theta < \infty    (10)
and
\int_{[0,2\pi)^m} \int_{[0,2\pi)^m} \left| \ln d_1(\theta, \mu) \right|^{1+\epsilon} f(\theta) f(\mu)\, d\theta\, d\mu < \infty    (11)
Then, for a fixed k ∈ {1, 2, …} (not depending on n),
\lim_{n\to\infty} E_f\left[ \hat{G}_{k,n}^{(1)} \right] = L_{k-1} - \gamma - \ln k + H(f)
where \hat{G}_{k,n}^{(1)} is defined by (9), L_0 = 0, L_j = \sum_{i=1}^{j} 1/i for j = 1, 2, …, and \gamma = -\int_{0}^{\infty} (\ln t)\, e^{-t}\, dt \approx 0.5772 is Euler's constant.
Proof. Let
T_{i,k,n} = \ln\left[ \frac{n}{k} (2\pi)^m A_m\!\left( \frac{d_1^2(i,k,n)}{\pi^2} \right) \right], \quad i = 1, \ldots, n
Then T 1 , k , n , , T n , k , n are identically distributed random variables. Therefore,
E_f\left[ \hat{G}_{k,n}^{(1)} \right] = E_f\left[ T_{1,k,n} \right] = \int_{[0,2\pi)^m} E_f\left[ S_{\theta,k,n} \right] f(\theta)\, d\theta    (12)
where, for a given θ ∈ [0, 2π)^m, S_{θ,k,n} is a random variable having the same distribution as the conditional distribution of T_{1,k,n} given Θ_1 = θ. For −∞ < u < ∞, define
\rho_{k,n}(u) = \pi \left[ A_m^{-1}\!\left( \frac{k e^{u}}{n (2\pi)^m} \right) \right]^{1/2}
where A_m^{-1}(·) denotes the inverse function of A_m(·). Using standard arguments, we get
P_f\left( S_{\theta,k,n} \le u \right) = 1 - P_f\left( d_1(1,k,n) > \rho_{k,n}(u) \,\middle|\, \Theta_1 = \theta \right) = 1 - \sum_{j=0}^{k-1} \binom{n-1}{j} \left[ P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right]^{j} \left[ 1 - P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right]^{n-1-j}
where N_r(·) is defined by (5). For a fixed u ∈ (−∞, ∞), k ∈ {1, 2, …} and for almost all values of θ ∈ [0, 2π)^m, using Lemma 2.1, we have
\lim_{n\to\infty} n\, P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) = k e^{u} \lim_{n\to\infty} \frac{1}{V_{\rho_{k,n}(u)}} \int_{N_{\rho_{k,n}(u)}(\theta)} f(\mu)\, d\mu = k e^{u} f(\theta)
Therefore, using the Poisson approximation to the binomial distribution, we get
\lim_{n\to\infty} P_f\left( S_{\theta,k,n} \le u \right) = 1 - \sum_{j=0}^{k-1} e^{-k f(\theta) e^{u}} \frac{\left[ k f(\theta) e^{u} \right]^{j}}{j!} = \frac{1}{\Gamma(k)} \int_{0}^{k f(\theta) e^{u}} e^{-t}\, t^{k-1}\, dt    (13)
for almost all values of θ ∈ [0, 2π)^m. For a fixed θ ∈ [0, 2π)^m, let S_{θ,k} be a random variable having the pdf
g_{\theta,k}(u) = \frac{e^{-k f(\theta) e^{u}} \left[ k f(\theta) e^{u} \right]^{k}}{\Gamma(k)}, \quad -\infty < u < \infty
Then, in view of (13),
S_{\theta,k,n} \xrightarrow{d} S_{\theta,k}, \quad \text{as } n \to \infty    (14)
where \xrightarrow{d} stands for convergence in distribution. For each fixed θ ∈ [0, 2π)^m, it can be verified that
E_f\left[ S_{\theta,k} \right] = \frac{1}{\Gamma(k)} \int_{0}^{\infty} (\ln t)\, e^{-t}\, t^{k-1}\, dt - \ln k - \ln f(\theta) = L_{k-1} - \gamma - \ln k - \ln f(\theta)
Under condition (11), it can be shown (for details, see the Appendix) that, for almost all values of θ ∈ [0, 2π)^m, there exists a constant C (not depending on n) such that, for all sufficiently large values of n,
E_f\left[ \left| S_{\theta,k,n} \right|^{1+\epsilon} \right] < C    (15)
Then, in view of (14) and the moment convergence theorem, it follows that
\lim_{n\to\infty} E_f\left[ S_{\theta,k,n} \right] = E_f\left[ S_{\theta,k} \right] = L_{k-1} - \gamma - \ln k - \ln f(\theta)
for almost all values of θ ∈ [0, 2π)^m. Using Fatou's lemma, we get
\limsup_{n\to\infty} \int_{[0,2\pi)^m} \left| E_f\left[ S_{\theta,k,n} \right] \right|^{1+\epsilon} f(\theta)\, d\theta \le \int_{[0,2\pi)^m} \limsup_{n\to\infty} \left| E_f\left[ S_{\theta,k,n} \right] \right|^{1+\epsilon} f(\theta)\, d\theta = \int_{[0,2\pi)^m} \left| L_{k-1} - \gamma - \ln k - \ln f(\theta) \right|^{1+\epsilon} f(\theta)\, d\theta \le 2^{\epsilon} \left[ \left| L_{k-1} - \gamma - \ln k \right|^{1+\epsilon} + \int_{[0,2\pi)^m} \left| \ln f(\theta) \right|^{1+\epsilon} f(\theta)\, d\theta \right] < \infty
by (10). Therefore,
\lim_{n\to\infty} \int_{[0,2\pi)^m} E_f\left[ S_{\theta,k,n} \right] f(\theta)\, d\theta = \int_{[0,2\pi)^m} \lim_{n\to\infty} E_f\left[ S_{\theta,k,n} \right] f(\theta)\, d\theta = \int_{[0,2\pi)^m} E_f\left[ S_{\theta,k} \right] f(\theta)\, d\theta    (16)
Now the result follows from (12) and (16).
Since the estimator G ^ k , n ( 1 ) is not asymptotically unbiased, we propose the following (asymptotic) bias corrected estimator for estimating the entropy H ( f ) :
\hat{H}_{k,n}^{(1)} = \frac{1}{n} \sum_{i=1}^{n} \ln A_m\!\left( \frac{d_1^2(i,k,n)}{\pi^2} \right) + \ln\left[ n (2\pi)^m \right] - L_{k-1} + \gamma    (17)
Thus, we have the following corollary to Theorem 2.1.
Corollary 2.1. Under the assumptions of Theorem 2.1, the estimator H ^ k , n ( 1 ) is asymptotically unbiased for estimating the entropy H ( f ) .
The following theorem provides conditions under which the estimator H ^ k , n ( 1 ) is consistent for estimating the entropy H ( f ) .
Theorem 2.2. Suppose that there exists an ε > 0 such that
\int_{[0,2\pi)^m} \left| \ln f(\theta) \right|^{2+\epsilon} f(\theta)\, d\theta < \infty    (18)
and
\int_{[0,2\pi)^m} \int_{[0,2\pi)^m} \left| \ln d_1(\theta, \mu) \right|^{2+\epsilon} f(\theta) f(\mu)\, d\theta\, d\mu < \infty    (19)
Then, for a fixed k ∈ {1, 2, …} (not depending on n),
\lim_{n\to\infty} \mathrm{Var}_f\left( \hat{H}_{k,n}^{(1)} \right) = 0
and thus Ĥ_{k,n}^{(1)} is a consistent estimator of the entropy H(f).
In the proof of Theorem 2.2, the steps involved in justifying the interchange of the limit and the integral sign under conditions (18) and (19) are tedious but virtually identical to the arguments used in the proof of Theorem 2.1. The remaining part of the proof is identical to that of Theorem 11 of Singh et al. [12]. We therefore omit the proof of Theorem 2.2.
Remark 2.2. For small values of m (say, m ≤ 4), the function A_m(·) involved in the evaluation of the estimate Ĥ_{k,n}^{(1)} can be computed using numerical integration. For moderate and large values of m, which is the case for many molecules encountered in the molecular sciences, the central limit theorem provides a reasonable approximation of A_m(·) by the cumulative distribution function of a normal distribution with mean m/3 and variance 4m/45.
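To illustrate how (17) and Remark 2.2 combine in practice, here is a sketch of the estimator Ĥ_{k,n}^{(1)} (our own illustrative code, not the ANN-based implementation used in Section 4); it evaluates A_m by the exact closed form of Remark 2.1 (iii) when the argument does not exceed 1 and by the normal approximation otherwise, and it finds the circular NN distances by brute force:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import norm

def A_m_approx(x, m):
    """A_m(x): exact form of Remark 2.1 (iii) for x <= 1, CLT approximation
    N(m/3, 4m/45) of Remark 2.2 for x > 1."""
    x = np.asarray(x, dtype=float)
    exact = np.exp(0.5 * m * np.log(np.pi * np.clip(x, 1e-300, None))
                   - m * np.log(2.0) - gammaln(1.0 + m / 2.0))
    clt = norm.cdf(x, loc=m / 3.0, scale=np.sqrt(4.0 * m / 45.0))
    return np.where(x <= 1.0, exact, clt)

def d1_knn_distances(theta, k):
    """k-th smallest circular distance d_1(i,k,n) for every sample point,
    by brute force (O(n^2 m); the k-d-tree search of Section 4 scales better)."""
    n = theta.shape[0]
    out = np.empty(n)
    for i in range(n):
        arc = np.pi - np.abs(np.pi - np.abs(theta - theta[i]))
        d = np.sqrt((arc ** 2).sum(axis=1))
        d[i] = np.inf                                  # exclude the point itself
        out[i] = np.partition(d, k - 1)[k - 1]
    return out

def entropy_circular_d1(theta, k=1):
    """Circular-distance NN entropy estimate of Eq. (17)."""
    n, m = theta.shape
    euler_gamma = 0.5772156649015329
    l_km1 = sum(1.0 / i for i in range(1, k))          # L_{k-1}, with L_0 = 0
    a = A_m_approx(d1_knn_distances(theta, k) ** 2 / np.pi ** 2, m)
    return np.mean(np.log(a)) + np.log(n * (2.0 * np.pi) ** m) - l_km1 + euler_gamma
```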

3. Nearest Neighbor Estimates of Entropy Based on Circular Distance d 2

Let ψ ∈ [0, 2π)^m. In order to construct nearest neighbor estimates of entropy based on the distance function d_2(·,·), defined by (4), we require the volume of the ball
S_r(\psi) = \left\{ \theta \in [0, 2\pi)^m : d_2(\theta, \psi) < r \right\}    (20)
centered at ψ ∈ [0, 2π)^m and having radius r ∈ [0, 2\sqrt{m}].
Lemma 3.1. Let r ∈ [0, 2\sqrt{m}], ψ ∈ [0, 2π)^m and let W_r be the volume of the ball S_r(ψ) defined by (20). Then
W_r = (2\pi)^m B_m\!\left( \frac{r^2}{4} \right)
where B_m(·) denotes the cumulative distribution function of the sum of m independent and identically distributed random variables, each having a beta distribution with parameters α = 1/2 and β = 1/2.
Proof. We have
W_r = \int_{\left\{ \theta \in [0,2\pi)^m :\ \sum_{i=1}^{m} (1 - \cos\theta_i) < r^2/2 \right\}} d\theta = (2\pi)^m \Pr\left( \sum_{i=1}^{m} \frac{1 - \cos U_i}{2} < \frac{r^2}{4} \right)
where U 1 , , U m are independent and identically distributed uniform random variables over the interval [ 0 , 2 π ) .
Define D_i = (1 − cos U_i)/2, i = 1, …, m. Then D_1, …, D_m are independent and identically distributed beta random variables with parameters α = 1/2 and β = 1/2. Hence the result follows.
Remark 3.1. (i) For m = 1 and x ∈ (0, 1),
B_1(x) = \frac{2}{\pi} \arcsin\sqrt{x} = \frac{1}{\pi} \arccos(1 - 2x)
(ii) B_m(x) satisfies a recursion relation similar to the one satisfied by A_m(x), given in Remark 2.1 (iv).
For i ∈ {1, …, n}, let d_2(i,k,n) denote the circular distance of Θ_i from its k-th closest neighbor with respect to the circular distance d_2(·,·), defined by (4). Assume that the sample size n is sufficiently large, so that on average the distances d_2(i,k,n) are small. Then, based on approximation (7), a reasonable estimator of f(Θ_i) is
\hat{f}_{k,n}^{(2)}(\Theta_i) = \frac{k}{n (2\pi)^m B_m\!\left( \frac{1}{4} d_2^2(i,k,n) \right)}
and thus a reasonable estimator of the entropy H ( f ) = E ( - ln f ( Θ ) ) is
\hat{G}_{k,n}^{(2)} = -\frac{1}{n} \sum_{i=1}^{n} \ln \hat{f}_{k,n}^{(2)}(\Theta_i) = \frac{1}{n} \sum_{i=1}^{n} \ln\left[ \frac{n (2\pi)^m}{k} B_m\!\left( \frac{1}{4} d_2^2(i,k,n) \right) \right]    (21)
The proof of the following theorem is identical to the proof of Theorem 2.1 and therefore it is omitted.
Theorem 3.1. Suppose that there exists an ϵ > 0 such that (10) holds and
\int_{[0,2\pi)^m} \int_{[0,2\pi)^m} \left| \ln d_2(\theta, \mu) \right|^{1+\epsilon} f(\theta) f(\mu)\, d\theta\, d\mu < \infty    (22)
Then, for a fixed k ∈ {1, 2, …} (not depending on n),
\lim_{n\to\infty} E_f\left[ \hat{G}_{k,n}^{(2)} \right] = L_{k-1} - \gamma - \ln k + H(f)
where \hat{G}_{k,n}^{(2)} is defined by (21).
Since the estimator G ^ k , n ( 2 ) is not asymptotically unbiased, we propose the following (asymptotic) bias corrected estimator for estimating the entropy H ( f ) :
\hat{H}_{k,n}^{(2)} = \frac{1}{n} \sum_{i=1}^{n} \ln B_m\!\left( \frac{1}{4} d_2^2(i,k,n) \right) + \ln\left[ n (2\pi)^m \right] - L_{k-1} + \gamma    (23)
Thus, we have the following corollary to Theorem 3.1.
Corollary 3.1. Under the assumptions of Theorem 3.1, the estimator H ^ k , n ( 2 ) is asymptotically unbiased for estimating the entropy H ( f ) .
The following theorem provides conditions under which the estimator Ĥ_{k,n}^{(2)} is consistent for estimating the entropy H(f). The proof follows from arguments similar to those given for the proof of Theorem 2.2.
Theorem 3.2. Suppose that there exists an ϵ > 0 such that (18) holds and
\int_{[0,2\pi)^m} \int_{[0,2\pi)^m} \left| \ln d_2(\theta, \mu) \right|^{2+\epsilon} f(\theta) f(\mu)\, d\theta\, d\mu < \infty    (24)
Then, for a fixed k ∈ {1, 2, …} (not depending on n),
\lim_{n\to\infty} \mathrm{Var}_f\left( \hat{H}_{k,n}^{(2)} \right) = 0
and therefore, under conditions (18) and (24), Ĥ_{k,n}^{(2)} is a consistent estimator of the entropy H(f).
Remark 3.2. (i) Using Remark 3.1 (i) and the fact that arccos(x) ∈ [0, π] is a decreasing function of x ∈ [−1, 1], for m = 1 and i ∈ {1, …, n}, we have
B_1\!\left( \tfrac{1}{4} d_2^2(i,k,n) \right) = \frac{1}{\pi} \arccos\!\left( 1 - \frac{d_2^2(i,k,n)}{2} \right) = \frac{1}{\pi} \times k\text{-th smallest of } \left\{ \arccos\left[ \cos(\Theta_j - \Theta_i) \right],\ j = 1, \ldots, n,\ j \ne i \right\} = \frac{1}{\pi} \times k\text{-th smallest of } \left\{ \pi - \big| \pi - |\Theta_j - \Theta_i| \big|,\ j = 1, \ldots, n,\ j \ne i \right\}
since arccos(cos x) = π − |π − |x|| for x ∈ [−2π, 2π].
Therefore, for m = 1 and i ∈ {1, …, n}, we have
B_1\!\left( \tfrac{1}{4} d_2^2(i,k,n) \right) = \frac{d_1(i,k,n)}{\pi}, \qquad \hat{G}_{k,n}^{(1)} = \hat{G}_{k,n}^{(2)}, \qquad \hat{H}_{k,n}^{(1)} = \hat{H}_{k,n}^{(2)}
Thus for m = 1 , estimators H ^ k , n ( 1 ) and H ^ k , n ( 2 ) , based on circular distance functions d 1 ( · , · ) and d 2 ( · , · ) respectively, are identical.
(ii) For small values of m (m ≥ 2), the function B_m(·) involved in the evaluation of the estimate Ĥ_{k,n}^{(2)} can be computed using numerical integration. For moderate and large values of m, which is the case for many molecules encountered in the molecular sciences, the central limit theorem provides a reasonable approximation of B_m(·) by the cumulative distribution function of a normal distribution with mean m/2 and variance m/8.
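A practical remark of ours, not made in the text: because each coordinate of d_2 contributes the squared chord length, d_2(φ, ψ) equals the ordinary Euclidean distance between the embedded points (cos φ_1, sin φ_1, …, cos φ_m, sin φ_m) and (cos ψ_1, sin ψ_1, …, cos ψ_m, sin ψ_m) in R^{2m}, so standard k-d tree software finds d_2 nearest neighbors directly. A sketch of the resulting evaluation of (23), with B_m replaced by the normal approximation of (ii) (function names are ours):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import norm

def d2_knn_distances(theta, k):
    """k-th NN distances under d_2 of Eq. (4): the Euclidean distance between
    the (cos, sin) embeddings equals d_2 between the original angle vectors."""
    emb = np.concatenate([np.cos(theta), np.sin(theta)], axis=1)   # shape (n, 2m)
    dist, _ = cKDTree(emb).query(emb, k=k + 1)
    return dist[:, k]

def entropy_circular_d2(theta, k=1):
    """Estimator (23), with B_m approximated by N(m/2, m/8) (Remark 3.2 (ii));
    adequate only for moderately large m."""
    n, m = theta.shape
    euler_gamma = 0.5772156649015329
    l_km1 = sum(1.0 / i for i in range(1, k))
    b = norm.cdf(d2_knn_distances(theta, k) ** 2 / 4.0,
                 loc=m / 2.0, scale=np.sqrt(m / 8.0))
    return np.mean(np.log(b)) + np.log(n * (2.0 * np.pi) ** m) - l_km1 + euler_gamma
```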
With k ∈ {1, 2, …}, (17) and (23) define two classes of estimators for the entropy H(f). The biases and variances of these estimators depend on k, the sample size n, the pdf f(·) and its dimension m. It would be useful to know the biases and variances as functions of k, n, m, and some characteristic of f(·), such as μ = E(h(f(Θ))), where h(·) is a function such that a reliable estimate of μ can be obtained from the available data on Θ. We have not been able to derive any meaningful expressions for the biases and variances of the proposed estimators, and this problem is under further investigation.

4. Monte Carlo Results and a Molecular Entropy Example

The performance of an entropy estimator can be investigated rigorously by using Monte Carlo samples from a distribution for which the entropy is known exactly. While analytic distributions of more than two correlated circular variables with exactly calculable entropic attributes do not seem available, one may construct a distribution of higher dimensionality as a product of a suitable number of bivariate distributions. To test the performance of the circular-distance estimator (17), we used an analytic 6-dimensional circular distribution given as the product of three bivariate circular distributions, each of the form [4]
f(\theta_1, \theta_2) = C \exp\left[ \kappa_1 \cos(\theta_1 - \mu_1) + \kappa_2 \cos(\theta_2 - \mu_2) + \lambda \sin(\theta_1 - \mu_1) \sin(\theta_2 - \mu_2) \right]
which, as a circular analogue of the bivariate normal distribution, can be called the bivariate von Mises distribution. Details pertaining to the 6-dimensional distribution used and the Monte Carlo sampling are given in [20], where the same circular distribution was used in an investigation of the combined mutual-information-expansion and Euclidean-distance-NN method of entropy estimation.
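For a single factor of this form, the normalizing constant C and the exact entropy can be obtained by two-dimensional numerical quadrature, and the entropy of the 6-dimensional product distribution is then the sum of the three bivariate entropies. A sketch of ours (the parameter values shown are placeholders, not those used in [20]):

```python
import numpy as np
from scipy.integrate import dblquad

def bivariate_vonmises_entropy(kappa1, kappa2, lam, mu1=0.0, mu2=0.0):
    """Differential entropy (to quadrature accuracy) of the bivariate von Mises
    density C*exp(k1*cos(t1-mu1) + k2*cos(t2-mu2) + lam*sin(t1-mu1)*sin(t2-mu2))."""
    def unnorm(t1, t2):
        return np.exp(kappa1 * np.cos(t1 - mu1) + kappa2 * np.cos(t2 - mu2)
                      + lam * np.sin(t1 - mu1) * np.sin(t2 - mu2))

    z, _ = dblquad(unnorm, 0.0, 2 * np.pi,
                   lambda x: 0.0, lambda x: 2 * np.pi)          # z = 1 / C
    def neg_f_log_f(t1, t2):
        f = unnorm(t1, t2) / z
        return -f * np.log(f)

    h, _ = dblquad(neg_f_log_f, 0.0, 2 * np.pi,
                   lambda x: 0.0, lambda x: 2 * np.pi)
    return h

# e.g. entropy of a product of three identical factors (placeholder parameters):
# H_total = 3 * bivariate_vonmises_entropy(2.0, 1.5, 1.0)
```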
Table 1. Circular- and Euclidean-distance estimates Ĥ_{k,n}^{(1)} and Ĥ_{k,n}, respectively, from samples of size n of the analytic distribution of 6 circular variables; the exact entropy value is (to 4 decimals) H = 1.8334. Each cell gives Ĥ_{k,n}^{(1)} / Ĥ_{k,n}.

n × 10^{-6}   Ĥ_{1,n}^{(1)} / Ĥ_{1,n}   Ĥ_{2,n}^{(1)} / Ĥ_{2,n}   Ĥ_{3,n}^{(1)} / Ĥ_{3,n}   Ĥ_{4,n}^{(1)} / Ĥ_{4,n}   Ĥ_{5,n}^{(1)} / Ĥ_{5,n}
0.05          1.86851 / 2.01225        1.92019 / 2.09929        1.96379 / 2.16540        2.00395 / 2.22380        2.03653 / 2.27390
0.10          1.81503 / 1.95640        1.86968 / 2.01952        1.90481 / 2.07339        1.92899 / 2.11255        1.95164 / 2.14865
0.20          1.80070 / 1.92305        1.83713 / 1.96691        1.85672 / 2.00110        1.87498 / 2.03086        1.89122 / 2.05650
0.40          1.8007  / 1.89363        1.81582 / 1.92652        1.82840 / 1.95001        1.83904 / 1.96939        1.84862 / 1.98668
0.60          1.80066 / 1.88696        1.80693 / 1.90952        1.81496 / 1.92764        1.82296 / 1.94358        1.83097 / 1.95783
0.80          1.79837 / 1.87936        1.80297 / 1.89971        1.81063 / 1.91649        1.81660 / 1.92946        1.82222 / 1.94125
1.00          1.79539 / 1.87317        1.80017 / 1.89238        1.80566 / 1.90694        1.81030 / 1.91850        1.81566 / 1.92915
2.00          1.79660 / 1.86471        1.79533 / 1.87555        1.79736 / 1.88493        1.79970 / 1.89298        1.80266 / 1.90073
4.00          1.79673 / 1.85696        1.79383 / 1.86477        1.79404 / 1.87136        1.79480 / 1.87702        1.79613 / 1.88211
6.00          1.79795 / 1.85403        1.79491 / 1.86071        1.79385 / 1.86552        1.79419 / 1.87029        1.79458 / 1.87414
8.00          1.79893 / 1.85240        1.79484 / 1.85745        1.79373 / 1.86197        1.79322 / 1.86545        1.79337 / 1.86891
10.00         1.80036 / 1.85170        1.79562 / 1.85578        1.79426 / 1.85969        1.79350 / 1.86287        1.79329 / 1.86583
Table 1 presents the circular-distance estimates Ĥ_{k,n}^{(1)}, k = 1, …, 5, obtained from samples of sizes in the range n = 5 × 10^4 to 1 × 10^7, together with the corresponding Euclidean-distance estimates Ĥ_{k,n}. Figure 1 displays the estimates Ĥ_{1,n}^{(1)} and Ĥ_{1,n} as functions of the sample size n. Noting that the exact entropy value here is, to 4 decimal places, H = 1.8334, we observe that as n increases the circular-distance estimates initially "undershoot" the exact value and then start to approach it slowly from below. In contrast, the Euclidean-distance estimates approach the exact value monotonically from above. Interestingly, the biases of the two kinds of estimates at sample sizes n ≳ 1 million are approximately equal in absolute value. The behavior of the circular-distance estimates at k = 2, …, 5 is similar to that at k = 1, and the estimate values at different k's become very close at n ≳ 1 million.
Figure 1. Plots of the circular- and Euclidean-distance estimates Ĥ_{1,n}^{(1)} and Ĥ_{1,n}, respectively, as functions of the sample size n, for the analytic distribution of 6 circular variables; the exact entropy value is (to 4 decimals) H = 1.8334.
To investigate the usefulness of circular-distance NN estimators in the problem of evaluating the configurational entropy of internal rotations in molecules, we used the circular-distance estimator Ĥ_{k,n}^{(1)} to estimate the entropy of the joint distribution of the internal-rotation angles in the molecule of tartaric acid, where the number of variables is m = 7. Samples of size n up to 14.4 million of the internal-rotation angles were obtained from a molecular dynamics simulation of the (R,S) stereoisomer of this molecule [21]. Figure 2 shows marginal histograms, smoothed using a Gaussian kernel, of the seven internal-rotation angles of tartaric acid; note that these marginals display markedly non-Gaussian features. The code ANN [22] (with our modification for the circular distance d_1), which utilizes a k-d tree algorithm [23], was used for finding the k-th NN distances between sample points. Figure 3 presents the estimates Ĥ_{k,n}^{(1)}, k = 1, …, 5, as functions of the sample size n. The values of Ĥ_{k,n}^{(1)} decrease as n increases, while, at a fixed value of n ≲ 7 million, they increase as k increases; at greater values of n, the estimates at different k's become quite close in value. Figure 4 compares the circular-distance estimates Ĥ_{1,n}^{(1)} and Ĥ_{5,n}^{(1)} with the corresponding Euclidean-distance estimates Ĥ_{k,n}. We note that an n → ∞ extrapolated Euclidean-distance estimate Ĥ = 5.04 ± 0.01 was obtained for this entropy in [21]. Again, as in the case of the analytic circular distribution, this value approximately equals the arithmetic mean of the circular- and Euclidean-distance estimates at sample sizes n ≳ 1 million.
Figure 2. Smoothed marginal histograms of the internal-rotation angles φ_i, i = 1, …, 7, of the (R,S) isomer of tartaric acid obtained by molecular dynamics simulations.
Figure 3. Circular-distance nearest-neighbor estimates Ĥ_{k,n}^{(1)}, k = 1, …, 5, of the entropy of the 7-dimensional joint distribution of internal-rotation angles in the (R,S) isomer of tartaric acid as functions of the sample size n. The estimates Ĥ_{k,n}^{(1)} at a fixed n ≲ 7 million increase in value as k increases.
Figure 4. Circular-distance nearest-neighbor estimates Ĥ_{1,n}^{(1)} and Ĥ_{5,n}^{(1)} of the internal-rotation entropy of tartaric acid as functions of the sample size n, compared with the Euclidean-distance nearest-neighbor estimates Ĥ_{1,n} and Ĥ_{5,n}. An n → ∞ extrapolated estimate is Ĥ = 5.04 ± 0.01 [21].
Perhaps surprisingly, the results of both the analytic-distribution and molecular-simulation studies undertaken here indicate that the use of a circular-distance estimator has some advantage over the Euclidean-distance estimator only when relatively small data samples are available. On samples of the large sizes needed for sufficient convergence of an NN estimate of the entropy of a multivariate distribution, the circular-distance estimates obtained did not have a significantly smaller bias than the Euclidean-distance estimates. In view of such findings, one may question whether the additional computational complexity of a circular-distance estimate is worth the effort. However, we observed that as the sample size increased, the circular NN distances in the sample quickly became so small that the circular-distance estimator Ĥ_{k,n}^{(1)} coincided in value with the simpler Euclidean-distance estimator Ĥ_{k,n} in which the same NN-distance values were used. This is explained by the fact that when the circular NN distances d_1(i,k,n) ≤ π, the estimator Ĥ_{k,n}^{(1)} can be replaced with the estimator Ĥ_{k,n} in which the circular NN distances d_1(i,k,n) are substituted for the Euclidean NN distances R_{i,k,n}; this fact follows directly from Remark 2.1 (iii). The only extra computational effort is then expended in finding the circular, instead of Euclidean, NN distances in a given sample.
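To make this observation explicit (our rewriting, under the assumption that all d_1(i,k,n) ≤ π so that the argument of A_m in (17) does not exceed 1 and the closed form of Remark 2.1 (iii) applies), the estimator (17) collapses to the Euclidean-distance formula (2) with d_1(i,k,n) in place of R_{i,k,n}:
\hat{H}_{k,n}^{(1)} = \frac{1}{n} \sum_{i=1}^{n} \ln \frac{d_1^{\,m}(i,k,n)}{2^m \pi^{m/2}\, \Gamma(m/2+1)} + \ln\left[ n (2\pi)^m \right] - L_{k-1} + \gamma = \frac{m}{n} \sum_{i=1}^{n} \ln d_1(i,k,n) + \ln\frac{\pi^{m/2}}{\Gamma(m/2+1)} + \gamma - L_{k-1} + \ln n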

Acknowledgments

The authors are thankful to Jun Tan for carrying out the circular-distance NN calculations, and to E. James Harner, Cecil Burchfiel, Robert Mnatsakanov and Dan S. Sharp for helpful discussions. The findings and conclusions in this paper are those of the authors and do not necessarily represent the views of the National Institute for Occupational Safety and Health.

Appendix

Here we provide the proof of (15). For a fixed θ ∈ [0, 2π)^m, let F_{θ,k,n}(·) be the distribution function of the random variable S_{θ,k,n}. To establish (15), we will first show that, for n = 2 and k = 1,
E_f\left[ \left| S_{\theta,1,2} \right|^{1+\epsilon} \right] = \int_{-\infty}^{\infty} |u|^{1+\epsilon}\, dF_{\theta,1,2}(u) < \infty    (25)
for almost all values of θ ∈ [0, 2π)^m. In order to establish (25), consider
E_f\left[ \left| S_{\theta,1,2} \right|^{1+\epsilon} \right] = E_f\left[ \left| \ln\left( 2 (2\pi)^m A_m\!\left( d_1^2(1,1,2)/\pi^2 \right) \right) \right|^{1+\epsilon} \,\middle|\, \Theta_1 = \theta \right] = E_f\left[ \left| \ln\left( 2 (2\pi)^m \right) + \ln A_m\!\left( d_1^2(\theta, \Theta_2)/\pi^2 \right) \right|^{1+\epsilon} \right] \le 2^{\epsilon} \left[ \left| \ln\left( 2 (2\pi)^m \right) \right|^{1+\epsilon} + E_f\left| \ln A_m\!\left( d_1^2(\theta, \Theta_2)/\pi^2 \right) \right|^{1+\epsilon} \right]
Thus, to establish (25), it is enough to show that the second term in the above expression is finite. Note that, for x ∈ [0, m],
A_m(x) = \Pr\left( \sum_{i=1}^{m} C_i \le x \right) \ge \prod_{i=1}^{m} \Pr\left( C_i \le \frac{x}{m} \right) = \left( \frac{x}{m} \right)^{m/2}
and thus
E_f\left| \ln A_m\!\left( d_1^2(\theta, \Theta_2)/\pi^2 \right) \right|^{1+\epsilon} \le m^{1+\epsilon}\, E_f\left| \ln\frac{d_1(\theta, \Theta_2)}{\pi\sqrt{m}} \right|^{1+\epsilon} \le m^{1+\epsilon}\, 2^{\epsilon} \left[ E_f\left| \ln d_1(\theta, \Theta_2) \right|^{1+\epsilon} + \left| \ln(\pi\sqrt{m}) \right|^{1+\epsilon} \right] = m^{1+\epsilon}\, 2^{\epsilon} \left[ \int_{[0,2\pi)^m} \left| \ln d_1(\theta, \mu) \right|^{1+\epsilon} f(\mu)\, d\mu + \left| \ln(\pi\sqrt{m}) \right|^{1+\epsilon} \right]
In view of assumption (11), it follows that
\int_{[0,2\pi)^m} \left| \ln d_1(\theta, \mu) \right|^{1+\epsilon} f(\mu)\, d\mu < \infty
for almost all values of θ ∈ [0, 2π)^m. Therefore, (25) is established.
Now we will establish (15). Consider
E_f\left[ \left| S_{\theta,k,n} \right|^{1+\epsilon} \right] = \int_{-\infty}^{0} |u|^{1+\epsilon}\, dF_{\theta,k,n}(u) + \int_{0}^{\infty} u^{1+\epsilon}\, dF_{\theta,k,n}(u)    (26)
We can write
\int_{0}^{\infty} u^{1+\epsilon}\, dF_{\theta,k,n}(u) = (1+\epsilon) \left[ \int_{0}^{\frac{1}{2}\ln n} u^{\epsilon} \left( 1 - F_{\theta,k,n}(u) \right) du + \int_{\frac{1}{2}\ln n}^{\infty} u^{\epsilon} \left( 1 - F_{\theta,k,n}(u) \right) du \right] = (1+\epsilon) \left[ I_1(n) + I_2(n) \right], \text{ say}.    (27)
We have
I_2(n) = \int_{\frac{1}{2}\ln n}^{\infty} u^{\epsilon} \sum_{j=0}^{k-1} \binom{n-1}{j} \left[ P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right]^{j} \left[ 1 - P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right]^{n-1-j} du
For j ∈ {0, 1, …, k − 1}, we have \binom{n-1}{j} \le \frac{n-1}{n-k} \binom{n-2}{j}. Therefore,
I_2(n) \le \frac{n-1}{n-k} \int_{\frac{1}{2}\ln n}^{\infty} u^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right] \sum_{j=0}^{k-1} \binom{n-2}{j} \left[ P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right]^{j} \left[ 1 - P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right]^{n-2-j} du = \frac{n-1}{n-k} \int_{\frac{1}{2}\ln n}^{\infty} u^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right] P\!\left( B_{k,n-k-1} \ge P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right) du    (28)
where B_{a,b} denotes a beta random variable with parameters (a, b), a > 0, b > 0. For u > \frac{1}{2}\ln n, we have P_f(N_{\rho_{k,n}(u)}(\theta)) \ge P_f(N_{\rho_{k,n}(\frac{1}{2}\ln n)}(\theta)). Therefore, (28) yields
I_2(n) \le \frac{n-1}{n-k}\, P\!\left( B_{k,n-k-1} \ge P_f\!\left( N_{\rho_{k,n}(\frac{1}{2}\ln n)}(\theta) \right) \right) \int_{\frac{1}{2}\ln n}^{\infty} u^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right] du    (29)
Note that \lim_{n\to\infty} \rho_{k,n}\!\left( \frac{1}{2}\ln n \right) = 0. Thus, on using (7), we get
\lim_{n\to\infty} \frac{\sqrt{n}}{k}\, P_f\!\left( N_{\rho_{k,n}(\frac{1}{2}\ln n)}(\theta) \right) = f(\theta)
for almost all values of θ ∈ [0, 2π)^m.
For a θ ∈ [0, 2π)^m for which f(θ) > 0, choose δ ∈ (0, f(θ)). Then, for sufficiently large values of n,
P_f\!\left( N_{\rho_{k,n}(\frac{1}{2}\ln n)}(\theta) \right) > \frac{k}{\sqrt{n}} \left( f(\theta) - \delta \right)
and therefore, for sufficiently large values of n,
P\!\left( B_{k,n-k-1} \ge P_f\!\left( N_{\rho_{k,n}(\frac{1}{2}\ln n)}(\theta) \right) \right) \le P\!\left( B_{k,n-k-1} \ge \frac{k}{\sqrt{n}} \left( f(\theta) - \delta \right) \right) \le \frac{E\!\left[ B_{k,n-k-1}^{2} \right]}{\frac{k^2}{n} \left( f(\theta) - \delta \right)^2} = \frac{k+1}{(n-1)\, k \left( f(\theta) - \delta \right)^2}
Therefore, for sufficiently large values of n and for almost all values of θ ∈ [0, 2π)^m, (29) yields
I_2(n) \le \frac{k+1}{k (n-k) \left( f(\theta) - \delta \right)^2} \int_{\frac{1}{2}\ln n}^{\infty} u^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right] du    (30)
On making the change of variable z = \ln(2k/n) + u in the integral in (30), we get
\int_{\frac{1}{2}\ln n}^{\infty} u^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right] du = \int_{\ln\frac{2k}{\sqrt{n}}}^{\infty} \left( u + \ln\frac{n}{2k} \right)^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{k,n}(u + \ln\frac{n}{2k})}(\theta) \right) \right] du = \int_{\ln\frac{2k}{\sqrt{n}}}^{\infty} \left( u + \ln\frac{n}{2k} \right)^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{1,2}(u)}(\theta) \right) \right] du = \int_{\ln\frac{2k}{\sqrt{n}}}^{0} \left( u + \ln\frac{n}{2k} \right)^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{1,2}(u)}(\theta) \right) \right] du + \int_{0}^{\infty} \left( u + \ln\frac{n}{2k} \right)^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{1,2}(u)}(\theta) \right) \right] du    (31)
We also have
\int_{\ln\frac{2k}{\sqrt{n}}}^{0} \left( u + \ln\frac{n}{2k} \right)^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{1,2}(u)}(\theta) \right) \right] du \le \left( \ln\frac{n}{2k} \right)^{\epsilon} \int_{\ln\frac{2k}{\sqrt{n}}}^{0} \left[ 1 - P_f\!\left( N_{\rho_{1,2}(u)}(\theta) \right) \right] du = \left( \ln\frac{n}{2k} \right)^{\epsilon} \int_{\frac{2k}{\sqrt{n}}}^{1} \frac{1}{u} \left[ 1 - P_f\!\left( N_{\rho_{1,2}(\ln u)}(\theta) \right) \right] du \le \frac{\sqrt{n}}{2k} \left( \ln\frac{n}{2k} \right)^{\epsilon} \int_{\frac{2k}{\sqrt{n}}}^{1} \left[ 1 - P_f\!\left( N_{\rho_{1,2}(\ln u)}(\theta) \right) \right] du \le \frac{\sqrt{n}}{2k} \left( \ln\frac{n}{2k} \right)^{\epsilon}    (32)
and
\int_{0}^{\infty} \left( u + \ln\frac{n}{2k} \right)^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{1,2}(u)}(\theta) \right) \right] du \le C_{\epsilon} \left[ \left( \ln\frac{n}{2k} \right)^{\epsilon} \int_{0}^{\infty} \left[ 1 - P_f\!\left( N_{\rho_{1,2}(u)}(\theta) \right) \right] du + \int_{0}^{\infty} u^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{1,2}(u)}(\theta) \right) \right] du \right]    (33)
where C_{\epsilon} = \max(1, 2^{\epsilon-1}).
Note that, for u ∈ (−∞, ∞),
1 - P_f\!\left( N_{\rho_{1,2}(u)}(\theta) \right) = 1 - P_f\!\left( S_{\theta,1,2} \le u \right) = 1 - F_{\theta,1,2}(u)    (34)
Therefore, (33) yields
\int_{0}^{\infty} \left( u + \ln\frac{n}{2k} \right)^{\epsilon} \left[ 1 - P_f\!\left( N_{\rho_{1,2}(u)}(\theta) \right) \right] du \le C_{\epsilon} \left[ \left( \ln\frac{n}{2k} \right)^{\epsilon} \int_{0}^{\infty} \left[ 1 - F_{\theta,1,2}(u) \right] du + \int_{0}^{\infty} u^{\epsilon} \left[ 1 - F_{\theta,1,2}(u) \right] du \right]    (35)
In view of (25), we have
\int_{0}^{\infty} \left[ 1 - F_{\theta,1,2}(u) \right] du < \infty \quad \text{and} \quad \int_{0}^{\infty} u^{\epsilon} \left[ 1 - F_{\theta,1,2}(u) \right] du < \infty
Therefore, using (30)-(35), we conclude that
\lim_{n\to\infty} I_2(n) = 0    (36)
for almost all values of θ.
Now consider
I_1(n) = \int_{0}^{\frac{1}{2}\ln n} u^{\epsilon} \left[ 1 - F_{\theta,k,n}(u) \right] du = \int_{0}^{\frac{1}{2}\ln n} u^{\epsilon} \sum_{j=0}^{k-1} \binom{n-1}{j} \left[ P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right]^{j} \left[ 1 - P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right]^{n-1-j} du = \int_{0}^{\frac{1}{2}\ln n} u^{\epsilon}\, P\!\left( B_{k,n-k} \ge P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right) du    (37)
For u < \frac{1}{2}\ln n, we have
0 \le \rho_{k,n}(u) \le \pi \left[ A_m^{-1}\!\left( \frac{k}{\sqrt{n}\,(2\pi)^m} \right) \right]^{1/2} \to 0, \quad \text{as } n \to \infty
Therefore, for u ∈ (−∞, \frac{1}{2}\ln n) and for almost all values of θ ∈ [0, 2π)^m, we have
\lim_{n\to\infty} \frac{n}{k e^{u}}\, P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) = f(\theta)    (38)
uniformly in u.
For f(θ) > 0, let δ ∈ (0, f(θ)). Then, for sufficiently large n, u < \frac{1}{2}\ln n and for almost all values of θ ∈ [0, 2π)^m, we have
P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) > \frac{k}{n} \left( f(\theta) - \delta \right) e^{u}
and therefore
P\!\left( B_{k,n-k} \ge P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right) \le P\!\left( B_{k,n-k} \ge \frac{k}{n} \left( f(\theta) - \delta \right) e^{u} \right) \le \frac{E\!\left[ B_{k,n-k} \right]}{\frac{k}{n} \left( f(\theta) - \delta \right) e^{u}} = \frac{e^{-u}}{f(\theta) - \delta}    (39)
Using (39) in (37), we conclude that, for sufficiently large values of n and for almost all values of θ,
I_1(n) \le \frac{1}{f(\theta) - \delta} \int_{0}^{\frac{1}{2}\ln n} u^{\epsilon} e^{-u}\, du \le \frac{1}{f(\theta) - \delta} \int_{0}^{\infty} u^{\epsilon} e^{-u}\, du < \infty    (40)
Using (36) and (40) in (27), we conclude further that there exists a constant D_1 such that, for sufficiently large values of n,
\int_{0}^{\infty} u^{1+\epsilon}\, dF_{\theta,k,n}(u) < D_1    (41)
for almost all values of θ ∈ [0, 2π)^m.
Now consider
\int_{-\infty}^{0} |u|^{1+\epsilon}\, dF_{\theta,k,n}(u) = \int_{-\infty}^{0} (-u)^{1+\epsilon}\, dF_{\theta,k,n}(u) = (1+\epsilon) \int_{-\infty}^{0} (-u)^{\epsilon}\, F_{\theta,k,n}(u)\, du    (42)
Note that, for each u ∈ (−∞, 0) and for almost all values of θ ∈ [0, 2π)^m,
\lim_{n\to\infty} \frac{n}{k e^{u}}\, P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) = f(\theta)
uniformly in u < 0, i.e., for almost all values of θ ∈ [0, 2π)^m, u ∈ (−∞, 0) and every δ > 0,
P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) < \frac{k}{n} \left( f(\theta) + \delta \right) e^{u}
for sufficiently large values of n. Therefore,
F_{\theta,k,n}(u) = P\!\left( B_{k,n-k} \le P_f\!\left( N_{\rho_{k,n}(u)}(\theta) \right) \right) \le P\!\left( B_{k,n-k} \le \frac{k}{n} \left( f(\theta) + \delta \right) e^{u} \right) = \sum_{j=k}^{n-1} \binom{n-1}{j} \left[ \frac{k}{n} \left( f(\theta) + \delta \right) e^{u} \right]^{j} \left[ 1 - \frac{k}{n} \left( f(\theta) + \delta \right) e^{u} \right]^{n-1-j} \le \frac{n-1}{n} \left( f(\theta) + \delta \right) e^{u} \le \left( f(\theta) + \delta \right) e^{u}    (43)
Using (43) in (42), we conclude that, for sufficiently large values of n and for almost all values of θ ∈ [0, 2π)^m,
\int_{-\infty}^{0} |u|^{1+\epsilon}\, dF_{\theta,k,n}(u) = (1+\epsilon) \int_{-\infty}^{0} (-u)^{\epsilon}\, F_{\theta,k,n}(u)\, du \le (1+\epsilon) \left( f(\theta) + \delta \right) \int_{-\infty}^{0} (-u)^{\epsilon}\, e^{u}\, du < \infty    (44)
Finally, on using (41) and (44) in (26), we conclude (15).

References

  1. Karplus, M.; Kushick, J.N. Method for estimating the configurational entropy of macromolecules. Macromolecules 1981, 14, 325–332.
  2. Misra, N.; Singh, H.; Demchuk, E. Estimation of the entropy of a multivariate normal distribution. J. Multivar. Anal. 2005, 92, 324–342.
  3. Demchuk, E.; Singh, H. Statistical thermodynamics of hindered rotation from computer simulations. Mol. Phys. 2001, 99, 627–636.
  4. Singh, H.; Hnizdo, V.; Demchuk, E. Probabilistic modeling of two dependent circular variables. Biometrika 2002, 89, 719–723.
  5. Mardia, K.V.; Hughes, G.; Taylor, C.C.; Singh, H. A multivariate von Mises distribution with applications to bioinformatics. Can. J. Stat. 2008, 36, 99–109.
  6. Hnizdo, V.; Fedorowicz, A.; Singh, H.; Demchuk, E. Statistical thermodynamics of internal rotation in a hindering potential of mean force obtained from computer simulations. J. Comput. Chem. 2003, 24, 1172–1183.
  7. Darian, E.; Hnizdo, V.; Fedorowicz, A.; Singh, H.; Demchuk, E. Estimation of the absolute internal-rotation entropy of molecules with two torsional degrees of freedom from stochastic simulations. J. Comput. Chem. 2005, 26, 651–660.
  8. Beirlant, J.; Dudewicz, E.J.; Gyorfi, L.; van der Meulen, E.C. Nonparametric estimation of entropy: An overview. Internat. J. Math. Stat. 1997, 6, 17–39.
  9. Scott, D. Multivariate Density Estimation: Theory, Practice and Visualization; Wiley: New York, NY, USA, 1992.
  10. Vasicek, O. On a test for normality based on sample entropy. J. R. Stat. Soc. Series B 1976, 38, 54–59.
  11. Dudewicz, E.J.; van der Meulen, E.C. Entropy-based tests of uniformity. J. Am. Stat. Assoc. 1981, 76, 967–974.
  12. Singh, H.; Misra, N.; Hnizdo, V.; Fedorowicz, E.; Demchuk, E. Nearest neighbor estimates of entropy. Am. J. Math. Manag. Sci. 2003, 23, 301–321.
  13. Kozachenko, L.F.; Leonenko, N.N. Sample estimates of entropy of a random vector. Prob. Inf. Trans. 1987, 23, 95–101.
  14. Goria, M.N.; Leonenko, N.N.; Novi Inveradi, P.L. A new class of random vector entropy estimators and its applications. Nonparam. Stat. 2005, 17, 277–297.
  15. Kraskov, A.; Stögbauer, H.; Grassberger, P. Estimating mutual information. Phys. Rev. E 2004, 69, 066138-1–066138-16.
  16. Tsybakov, A.B.; van der Meulen, E.C. Root-n consistent estimators of entropy for densities with unbounded support. Scan. J. Stat. 1996, 23, 75–83.
  17. Loftsgaarden, D.O.; Quesenberry, C.P. A non-parametric estimate of a multivariate density function. Ann. Math. Stat. 1965, 36, 1049–1051.
  18. Mnatsakanov, R.M.; Misra, N.; Li, Sh.; Harner, E.J. kn-Nearest neighbor estimators of entropy. Math. Meth. Stat. 2008, 17, 261–277.
  19. Lebesgue, H. Sur l'intégration des fonctions discontinues. Ann. Ecole Norm. 1910, 27, 361–450.
  20. Hnizdo, V.; Tan, J.; Killian, B.J.; Gilson, M.K. Efficient calculation of configurational entropy from molecular simulations by combining the mutual-information expansion and nearest-neighbor methods. J. Comput. Chem. 2008, 29, 1605–1614.
  21. Hnizdo, V.; Darian, E.; Fedorowicz, A.; Demchuk, E.; Li, S.; Singh, H. Nearest-neighbor nonparametric method for estimating the configurational entropy of complex molecules. J. Comput. Chem. 2006, 28, 655–668.
  22. Arya, S.; Mount, D.M. Approximate nearest neighbor searching. In Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, 25–27 January 1993; p. 271. Available online: http://www.cs.umd.edu/~mount/ANN/ (accessed on 5 May 2010).
  23. Friedman, J.H.; Bentley, J.L.; Finkel, R.A. An algorithm for finding best matches in logarithmic expected time. ACM Trans. Math. Software 1977, 3, 209–226.
