Local Intrinsic Dimensionality, Entropy and Statistical Divergences

Properties of data distributions can be assessed at both global and local scales. At a highly localized scale, a fundamental measure is the local intrinsic dimensionality (LID), which assesses growth rates of the cumulative distribution function within a restricted neighborhood and characterizes the geometry of a local neighborhood. In this paper, we explore the connection of LID to other well-known measures for complexity assessment and comparison, namely, entropy and statistical distances or divergences. In an asymptotic context, we develop new analytical expressions for these quantities in terms of LID. This reveals the fundamental nature of LID as a building block for characterizing and comparing data distributions, opening the door to new methods for distributional analysis at a local scale.


Introduction
Fundamental activities for analyzing data include both an ability to characterize data complexity and an ability to make comparisons between distributions. Widely used measures for these activities include entropy (for assessing uncertainty) and statistical divergences or distances (to compare distributions) [1]. Such analysis can be performed at either a global scale across the entire data distribution or at a local scale, in the vicinity of a given location in the distribution.
An important measure of global complexity is intrinsic dimensionality, which captures the effective number of degrees of freedom needed to describe the entire dataset. On the other hand, local intrinsic dimensionality (LID) [2] is capable of characterizing the complexity of the data distribution around a specified query location, thus capturing the number of degrees of freedom present at a local scale. LID is a unitless quantity that can also be interpreted as a relative growth rate of probability measure within an expanding neighborhood around the specified query location, or the intrinsic dimension of the space immediately around the query point.
Our focus in this paper is to characterize entropy and statistical divergences at a highly local scale, within an asymptotically small vicinity of a specified location. We show that it is possible to leverage properties arising from LID-based characterizations of lower tail distributions [3] to develop analytical expressions for a wide selection of entropy variants and statistical divergences, in both univariate and multivariate settings. This yields expressions for tail entropies and tail divergences.
Analytical characterizations for tail divergences and tail entropies are appealing from a number of perspectives:
• For univariate scenarios, where we work with the tail of a distribution of a single variable, we can conduct:
- Temporal analysis: when a distribution models some property varying over time (e.g., survival analysis), we can analyze the entropy of a univariate distribution, or the divergence between two univariate distributions, within an asymptotically short window of time.
- Distance-based analysis: when a distribution models distances from a query location to its nearest neighbors, with the distances induced by a global data distribution. Here, our results can be used to analyze the tail entropy of, or the divergence between, distance distributions within an asymptotically small distance interval.
In the case of the latter, this can provide insight into multivariate properties, since under minimal assumptions the divergences between univariate distance distributions provide lower bounds for distances between multivariate distributions [4,5]. This is applicable to models such as generative adversarial networks (GANs), where it is important to test the correspondence between synthetic and true distributions at a local level [6].
• For multivariate scenarios, where we analyze distributions with multiple variables:
- If an assumption of locally spherical symmetry of the distribution holds, then we can directly compute the tail entropy of a distribution, or the divergence between two tail distributions, in the vicinity of a single point. Such an assumption is suitable for analyzing data distributions of many types of physical systems, such as fluids, glasses, metals and polymers, where local isotropy holds.
A key challenge in developing analytical characterizations for tail entropies and tail divergences is how to avoid or minimize assumptions about the form of the local distribution in the vicinity of the query (for example, assumptions such as a local normal distribution or a local uniform distribution). As we will see, analytical results are in fact possible: as the neighborhood radius asymptotically tends to zero, the tail distribution (a truncated distribution induced from the global distribution) is guaranteed to converge to a generalized Pareto distribution (GPD), with the GPD parameter determined by the LID value of the tail distribution. The technical challenge is to rigorously delineate under what circumstances it is possible to leverage this relationship to achieve a dramatic simplification of the integrals that are required to compute varieties of tail entropy or distribution divergences. Our results in this paper show that such simplifications are in fact possible, for a wide range of tail entropies and divergences. This allows us to characterize and analyze fundamental properties of local neighborhood geometry, with results holding asymptotically for essentially all smooth data distributions.
In summary, our key contribution is the development of substantial new theory that asymptotically relates tail entropy, divergences and LID. It builds on and extends earlier work by Bailey et al. [3], which focused solely on univariate entropies, without reference to divergences or multivariate settings. Specifically, in this paper we:
• Formulate technical lemmas which delineate when it is possible to substitute certain types of tail distributions by simple formulations that depend only on their associated LID values.
• Use these lemmas to compute univariate tail formulations of entropy, cross entropy, cumulative entropy, entropy power and generalized q-entropies, all in terms of the LID values of the original tail distributions.
• Use these lemmas to compute tail formulations of univariate statistical divergences and distances (Kullback-Leibler divergence, Jensen-Shannon divergence, Hellinger distance, χ²-divergence, α-divergence, Wasserstein distance and L2 distance).
• Extend the univariate results to a multivariate context, when local spherical symmetry of the distribution holds.

Related Work
The core of our study involves intrinsic dimensionality (ID) and we begin by reviewing previous work on this topic.
There is a long history of work on ID, which can be assessed either globally (for the dataset as a whole) or locally (with respect to a chosen query point). Surveys of the field provide a good overview [7][8][9]. In the global case, a range of previous works have focused on topological models and appropriate estimation methods [10][11][12]. Examples encompass techniques such as PCA and its variants [13], graph-based methods [14] and fractal models [7,15]. Other approaches, such as IDEA [16,17], DANCo [18] and 2-NN [19], estimate the (global) intrinsic dimension based on the concentration of norms and angles, or on 2-nearest-neighbor distances.
Local intrinsic dimensionality focuses on the intrinsic dimension in the vicinity of a particular query point and has been used in a range of applications. These include modeling deformation in granular materials [20,21], climate science [22,23], dimension reduction via local PCA [24], similarity search [25], clustering [26], outlier detection [27], statistical manifold learning [28], adversarial example detection [29], adversarial nearest neighbor characterization [30,31] and deep learning understanding [32,33]. In deep learning, it has been shown that adversarial examples are associated with high LID estimates, a characteristic that can be leveraged to build accurate adversarial example detectors [29]. It has also been found that the LID of deep representations [33] learned by Deep Neural Networks (DNNs), or of the raw input data [34,35], is correlated with the generalization performance of DNNs. A 'dimensionality expansion' phenomenon has been observed when DNNs overfit to noisy class labels [32], and this can be leveraged to develop improved loss functions. The use of a 'cross-LID' measure to evaluate the quality of synthetic examples generated by GANs has been proposed in [36]. Connections between local intrinsic dimensionality and global intrinsic dimensionality were explored by Romano et al. in [37]. In the area of climate science and dynamical systems, a formulation similar to local intrinsic dimensionality has been developed and referred to as local dimension or instantaneous dimension [22,23,38], using links to extreme-value-theoretic methods. It has proved useful as a measure for characterizing the predictability of states and explaining system dynamics.
For local intrinsic dimensionality, a popular estimator is the maximum likelihood estimator, studied in the Euclidean setting by Levina and Bickel [39] and later formulated under the more general assumptions of extreme value theory by Houle [2] and Amsaleg et al. [40], who showed it to be equivalent to the classic Hill estimator [41]. Other local estimators include expected simplex skewness [42], the tight locality estimator [43], the MiND framework [17], manifold adaptive dimension [44], statistical distance [45] and angle-based approaches [46]. Smoothing approaches for estimation have also been used with success [47,48].
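As a concrete illustration of the MLE (Hill-type) estimation discussed above, the following sketch shows one common form of the estimator applied to the smallest neighbor distances of a query point. This is our own minimal illustration under a simplified setup (a query at the center of a uniform ball), not code from any of the cited works:

```python
import numpy as np

def lid_mle(distances):
    """MLE (Hill-type) estimate of LID from the sorted nearest-neighbor
    distances of a single query point: the negative inverse of the mean
    log-ratio of each distance to the largest one."""
    r = np.sort(np.asarray(distances, dtype=float))
    return -1.0 / np.mean(np.log(r[:-1] / r[-1]))

# Distances from the center of a d-dimensional uniform ball follow
# F(r) = r^d (for r in [0, 1]), so the LID at the center is exactly d.
rng = np.random.default_rng(0)
d = 5
dists = rng.random(20000) ** (1.0 / d)   # inverse-CDF sampling of F(r) = r^d
est = lid_mle(np.sort(dists)[:2000])     # use the k = 2000 smallest distances
print(round(est, 2))                     # close to the true value 5
```

With k = 2000 samples the estimate typically lands within a few percent of the true dimension; the standard error of this estimator shrinks at rate roughly d/√k.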
Local intrinsic dimensionality is closely related to (univariate) distance distributions. Fundamental relations for interpoint distances, connecting multivariate distributions and univariate distributions, have been explored in both [4] and [5]. The former showed that two multivariate distributions are equal whenever the interpoint distances both within and between samples have the same univariate distribution, while the latter showed that two multivariate distributions F and G are different if their univariate distance distributions from some randomly chosen point z are different. This can form the basis of a two-sample test for comparing F and G. These studies have implications for our work in this paper, since they characterize the role that comparisons between univariate distributions can play as a necessary condition for establishing equality of multivariate distributions.
Our work in this paper formulates results for different varieties of entropy and different types of divergences. Entropy is a fundamental notion used across many scientific disciplines. A good overview of its role in information theory is presented in [49]. Entropy power (the exponential of entropy) is commonly used in signal processing and information theory, and is a building block for the well-known Shannon entropy power inequality which can be used to analyze the convolution of two independent random variables [50]. Entropy power goes under the name of perplexity in the field of natural language processing [51] and true diversity in the field of ecology [52]. It also corresponds to the volume of the smallest set that contains most of the probability measure [49], and it can be interpreted as a measure of statistical dispersion [53]. It is also related to Fisher information via Stam's inequality [54].
Cumulative entropy was formulated in [55] and is a modification of cumulative residual entropy [56]. It is popular in reliability theory, where it is used to characterize uncertainty over time intervals. Apart from reliability theory analysis, it has been used in data mining tasks such as dependency analysis [57] and subspace cluster analysis [58], where it has proved more effective due to its good estimation properties. These data mining investigations have used cumulative entropy at a global level (over the entire data domain), rather than at the local (tail) level, as in our study. Generalized variants based on Tsallis q-statistics have been developed for both entropy [59] and cumulative entropy [60]. Inclusion of the extra q parameter can assist with higher robustness to anomalies and better fitting to characteristics of data distributions. Tail entropy has been used in financial applications for measuring the expected shortfall [61] in the upper tail using quantization. This is different from our context, where our exclusive focus is on lower tails and we develop exact results for an asymptotic regime where the lower tail size approaches zero.
Divergences between probability distributions are a fundamental building block in statistics and are used to assess the degree to which one probability distribution differs from another. They have a wide range of formulations [1] and applications, which range from use as objective functions in supervised and unsupervised machine learning [62], to hypothesis testing and two-sample or goodness-of-fit testing in statistics [63], as well as generative modeling in deep learning, particularly using the Wasserstein distance [64]. Asymptotic forms of KL divergence have been investigated by Contreras-Reyes [65] for the comparison of multivariate asymmetric heavy-tailed distributions.
Finally, we note that this work considerably expands a recent study by Bailey et al. [3], which established relationships between tail entropies and LID. This current paper extends and generalizes that work in several directions: (i) We establish general lemmas that provide sufficient conditions for when it is possible to substitute a tail distribution with components such as a power law, inside an integral. The techniques of [3] were specially crafted for specific integrals. (ii) We provide results for statistical divergences and distances (the work of [3] only considers entropy). (iii) We show how to formulate results for the multivariate context (as [3] only considers univariate scenarios).

Local Intrinsic Dimensionality
In this section, we summarize the LID model using the presentation of [2]. LID can be regarded as a continuous extension of the expansion dimension [66,67]. Like earlier expansion-based models of intrinsic dimension, its motivation comes from the relationship between volume and radius in an expanding ball, where (as originally stated in [68]) the volume of the ball is taken to be the probability measure associated with the region it encloses. The probability as a function of radius, denoted by F(r), has the form of a univariate cumulative distribution function (CDF). The model formulation (as stated in [2]) generalizes this notion to real-valued functions F for which F(0) = 0, under appropriate assumptions of smoothness.

Definition 1 ([2]). Let F be a real-valued function that is non-zero over some open interval containing r ∈ R, r ≠ 0. The intrinsic dimensionality of F at r is defined as follows, whenever the limit exists:

ID_F(r) ≜ lim_{ε→0+} ln( F((1+ε)·r) / F(r) ) / ln(1+ε).

When F satisfies certain smoothness conditions in the vicinity of r, its intrinsic dimensionality has a convenient known form:

Theorem 1 ([2]). Let F be a real-valued function that is non-zero over some open interval containing r ∈ R, r ≠ 0. If F is continuously differentiable at r, and using F'(r) to denote the derivative dF(r)/dr, then

ID_F(r) = r · F'(r) / F(r).

Let x be a location of interest within a data domain S for which the distance measure d : S × S → R+ ∪ {0} has been defined. To any generated sample s ∈ S we associate the distance d(x, s); in this way, a global distribution that produces the sample s can be said to induce the random value d(x, s) from a local distribution of distances taken with respect to x. The CDF F(r) of the local distance distribution is simply the probability of the sample distance lying within a threshold r; that is, F(r) ≜ Pr[d(x, s) ≤ r]. In characterizing the local intrinsic dimensionality in the vicinity of location x, we are interested in the limit of ID_F(r) as the distance r tends to 0, which we denote by

ID*_F ≜ lim_{r→0+} ID_F(r).

Henceforth, when we refer to the local intrinsic dimensionality (LID) of a function F, or of a point x whose induced distance distribution has F as its CDF, we will take 'LID' to mean the quantity ID*_F. In general, ID*_F is not necessarily an integer. In practice, estimation of the LID at x would give an indication of the dimension of the submanifold containing x that best fits the distribution.
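As a quick sanity check of the closed form ID_F(r) = r·F'(r)/F(r) and its limit as r tends to zero, the snippet below (our own illustration, not taken from [2]) uses the hypothetical smooth growth function F(r) = r³(1 + r), whose LID should be 3:

```python
import numpy as np

def F(r):
    return r**3 * (1.0 + r)     # hypothetical smooth growth function, LID = 3

def id_F(r, h=1e-7):
    """Theorem 1: ID_F(r) = r * F'(r) / F(r), with F' by central difference."""
    Fprime = (F(r + h) - F(r - h)) / (2.0 * h)
    return r * Fprime / F(r)

for r in [0.5, 0.1, 0.01, 0.001]:
    print(r, id_F(r))           # values approach ID*_F = 3 as r -> 0
```

For this F, the exact value is ID_F(r) = 3 + r/(1 + r), which visibly tends to 3 as r shrinks.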
The function ID_F can be seen to fully characterize its associated function F. This result is analogous to a foundational result from the statistical theory of extreme values (EVT), in that it corresponds under an inversion transformation to the Karamata representation theorem [69] for the upper tails of regularly varying functions. For more information on EVT and how the LID model relates to the extreme-value-theoretic generalized Pareto distribution, we refer the reader to [2,70,71].
Theorem 2 (LID Representation Theorem [2]). Let F : R → R be a real-valued function, and assume that ID*_F exists. Let x and w be values for which x/w and F(x)/F(w) are both positive. If F is non-zero and continuously differentiable everywhere in the interval [min{x, w}, max{x, w}], then

F(x) / F(w) = (x/w)^{ID*_F} · A_F(x, w), where A_F(x, w) ≜ exp( ∫_x^w (ID*_F − ID_F(t)) / t dt ),

whenever the integral exists.
In [2], conditions on x and w are provided under which the factor A_F(x, w) can be seen to tend to 1 as x, w → 0. The convergence characteristics of F to its asymptotic form are expressed by the factor A_F(x, w), which is related to the slowly varying component of functions as studied in EVT [70]. As we will show in the next section, we make use of the LID Representation Theorem in our analysis of the limits of tail entropy variants under a form of normalization.

Definitions of Tail Entropies and Tail Dissimilarity Measures
In this section, we present the formulations of entropy, divergences and distances that will be studied in the later sections, in the light of the model of local intrinsic dimensionality outlined in Section 3. These entropies and dissimilarity measures will all be conditioned on the lower tails of smooth functions on domains bounded from below at zero. In each case, the formulations involve one or more non-negative real-valued functions whose restriction to [0, w] satisfies certain smooth growth properties: Definition 2. Let F : R+ ∪ {0} → R+ ∪ {0} be a function that is positive except at F(0) = 0. We say that F is a smooth growth function if

• There exists a value r > 0 such that F is monotonically increasing over (0, r);
• F is continuous over [0, r);
• F is differentiable over (0, r); and
• The local intrinsic dimensionality ID*_F exists and is positive.
Given a smooth growth function F and a value w > 0, we define F_w(t) ≜ F(t)/F(w), which can in turn be interpreted as the CDF of the distribution of X conditioned to the lower tail [0, w]. It is easy to see that for a sufficiently small choice of w, F_w must also be a smooth growth function. Its derivative F'_w(t) = F'(t)/F(w) exists since F'(t) exists, and can thus be regarded as the probability density function (PDF) of the restriction of F to [0, w]. In addition, it can easily be shown (using Theorem 1) that the LID of F_w is identical to that of F.
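The invariance of LID under tail conditioning can also be observed numerically. The sketch below is our own illustration, reusing the hypothetical function F(t) = t³(1 + t) (so ID*_F = 3) together with the closed form of Theorem 1:

```python
import numpy as np

def F(t):
    return t**3 * (1.0 + t)      # hypothetical smooth growth function

w = 0.1
def F_w(t):
    return F(t) / F(w)           # tail-conditioned CDF on [0, w]

def id_of(g, t, h=1e-8):
    # Theorem 1 with a central-difference derivative: t * g'(t) / g(t)
    return t * (g(t + h) - g(t - h)) / (2.0 * h) / g(t)

t = 0.001
print(id_of(F, t), id_of(F_w, t))   # identical: the constant 1/F(w) cancels
```

The constant factor 1/F(w) appears in both the numerator and denominator of Theorem 1's formula, so it cancels exactly; this is why tail conditioning preserves the LID.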
If the monotonicity of the function F is strict over the domain of interest [0, r), its inverse function F^{-1} exists and satisfies the smooth growth conditions within some neighborhood of the origin. Moreover, F^{-1}_w is also a smooth growth function over [0, 1], with F^{-1}_w(0) = 0 and F^{-1}_w(1) = w. The following tail entropy, tail divergence and tail distance formulations all apply to any functions F and G satisfying the conditions stated above; in particular, they involve one or more of F_w, F'_w, G_w, G'_w, and (if the monotonicity of the functions is strict) F^{-1}_w and G^{-1}_w. In their definitions, the only difference between the tail variants and the original versions is that the distributions are conditioned on the lower tail [0, w]. In the tail measures involving one or more of F_w, F'_w, G_w and G'_w, integration is performed over the lower tail and not the entire distributional range [0, +∞); for the variant involving F^{-1}_w and G^{-1}_w, integration is performed over [0, 1] for values of w constrained to the lower tail.
We begin with (differential) tail entropy. Entropy is perhaps the most fundamental and widely used model of data complexity and can be regarded as a measure of the uncertainty of a distribution. Differential entropy assesses the expected surprisal of a random variable and can take negative values.

Definition 3 (Tail Entropy). The entropy of F conditioned on [0, w] is defined to be

H(F, w) ≜ −∫_0^w F'_w(t) log F'_w(t) dt.
The tail entropy is equal to E[−log F'_w(X)], the expected value of the (tail) negative log-likelihood. It is also possible to define the variance of the (tail) negative log-likelihood; this is known as the varentropy. To understand this further, note that one may define the information content of a random variable X with density function f to be −log f(X). The entropy (uncertainty) then corresponds to the expected value of the information content of X, and the varentropy corresponds to the variance of the information content of X. The varentropy was introduced by Song [72] as an intrinsic measure of the shape of a distribution and has been explored in a range of studies [73][74][75].

Definition 4 (Tail Varentropy). The varentropy of F conditioned on [0, w] is defined to be

VH(F, w) ≜ ∫_0^w F'_w(t) (log F'_w(t))² dt − H(F, w)².
Cumulative entropy [55,56] is an information-theoretic measure popular in reliability theory, where it is used to model uncertainty over time intervals; it corresponds to the expected value of the mean inactivity time. Compared to ordinary Shannon differential entropy, cumulative entropy has certain attractive properties, such as non-negativity and ease of estimation. Tail conditioning for the cumulative entropy has the same general form as for the tail entropy.

Definition 5 (Cumulative Tail Entropy). The cumulative entropy of F conditioned on [0, w] is defined to be

CE(F, w) ≜ −∫_0^w F_w(t) log F_w(t) dt.
The entropy power is the exponential of the entropy, and is also known as perplexity in the natural language processing community. It corresponds to the volume of the smallest set that contains most of the probability measure [49], and can be interpreted as a measure of statistical dispersion [53]. There are several standard definitions of entropy power in the research literature. For our purposes, we adopt the simplest (the exponential of Shannon entropy) for our definition conditioned to the tail.

Definition 6 (Tail Entropy Power). The entropy power of F conditioned on [0, w] is defined to be

HP(F, w) ≜ exp(H(F, w)).
In the introduction, we briefly mentioned some motivation for the entropy power HP(F, w). We can add to this as follows:
• It can be interpreted as a diversity. Observe that when F is a (univariate) uniform distance distribution ranging over the interval [0, w], we have ID*_F = 1 and HP(F, w) = w. In other words, the entropy power is equal to the 'effective diversity' of the distribution (the number of neighbor distance possibilities).
• Given two different queries, each with its own neighborhood, one with tail entropy power equal to 2 and the other with tail entropy power equal to 4, we can say that the distance distribution of the second query is twice as diverse as that of the first query.
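The uniform example above can be checked numerically. Taking the tail entropy to be the differential entropy of the tail-conditioned density (as described earlier in this section), the entropy power of a uniform distance distribution on [0, w] recovers w itself; this sketch is our own illustration:

```python
import numpy as np
from scipy.integrate import quad

# For a uniform distance distribution on [0, w], the tail density is
# f_w(t) = 1/w, the tail entropy is log(w), and the entropy power
# exp(H) recovers w itself -- the 'effective diversity' of the tail.
w = 0.25
f_w = lambda t: 1.0 / w
H, _ = quad(lambda t: -f_w(t) * np.log(f_w(t)), 0.0, w)
HP = np.exp(H)
print(HP)    # equals w = 0.25
```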
For each of the tail entropy variants introduced above, we also propose analogous variants based on the q-entropy formulation due to Tsallis [59]. Generalized Tsallis entropies [59,60] are a family of entropies characterized via an exponent parameter q applied to the probabilities, in which the traditional (Shannon) entropy variants are obtained as the special case when q is allowed to tend to 1. The use of such a parameter q can often facilitate more accurate fitting of data characteristics and robustness to outliers.

Definition 7 (Tail q-Entropy).
For any q > 0 (q ≠ 1), the q-entropy of F conditioned on [0, w] is defined to be

H_q(F, w) ≜ (1/(q−1)) · ( 1 − ∫_0^w F'_w(t)^q dt ).

Definition 8 (Cumulative Tail q-Entropy).
For any q > 0 (q ≠ 1), the cumulative q-entropy of F conditioned on [0, w] is defined to be

CE_q(F, w) ≜ (1/(q−1)) · ∫_0^w ( F_w(t) − F_w(t)^q ) dt.

We define the tail q-entropy power using the q-exponential function from Tsallis statistics [59], exp_q(x) ≜ [1 + (1 − q)x]^{1/(1−q)}. Note that L'Hôpital's rule can be used to show that exp_q(x) → e^x as q → 1.
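The convergence of the q-exponential to the ordinary exponential as q approaches 1 can be observed directly; this is our own small illustration:

```python
import numpy as np

def exp_q(x, q):
    """Tsallis q-exponential: [1 + (1-q) x]^(1/(1-q)) for q != 1."""
    return (1.0 + (1.0 - q) * x) ** (1.0 / (1.0 - q))

x = 0.7
for q in [0.5, 0.9, 0.99, 0.999]:
    print(q, exp_q(x, q))    # approaches e^0.7 as q -> 1
```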

Definition 9 (Tail q-Entropy Power).
For any q > 0 (q ≠ 1), the q-entropy power of F conditioned on [0, w] is defined to be

HP_q(F, w) ≜ exp_q(H_q(F, w)).

We next define the tail cross entropy. Cross entropy can be used to compare two probability distributions and is often employed as a loss function in machine learning, comparing a true distribution and a learned distribution. From an information-theoretic perspective, cross entropy corresponds to the expected coding length when a wrong distribution G is assumed while the data actually follows a distribution F.

Definition 10 (Tail Cross Entropy). The cross entropy from F to G, conditioned on [0, w], is defined to be

XH(F; G, w) ≜ −∫_0^w F'_w(t) log G'_w(t) dt.

Similar to the entropy power, we can also define the cross entropy power, which is the exponential of the cross entropy.

Definition 11 (Tail Cross Entropy Power). The cross entropy power from F to G, conditioned on [0, w], is defined to be

XHP(F; G, w) ≜ exp(XH(F; G, w)).
A classic and fundamental method for comparing two probability distributions is the Kullback-Leibler divergence (KL Divergence) [76]. KL(F, G) measures the degree to which a probability distribution G is different from a reference probability distribution F. It is a member of both the family of f -divergences and Bregman divergences. It is widely used in statistics, machine learning and information theory.
Definition 12 (Tail KL Divergence). The Kullback-Leibler divergence from F to G, conditioned on [0, w], is defined to be

KL(F; G, w) ≜ ∫_0^w F'_w(t) log( F'_w(t) / G'_w(t) ) dt.

The tail KL divergence can be connected to the tail entropy and the tail cross entropy through the relationship KL(F; G, w) = XH(F; G, w) − H(F, w).
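The relationship between tail KL divergence, tail cross entropy and tail entropy can be verified numerically. In the sketch below (our own illustration) we use hypothetical exact power-law tails F_w(t) = (t/w)^a and G_w(t) = (t/w)^b, whose tail densities are a·t^(a−1)/w^a and b·t^(b−1)/w^b:

```python
import numpy as np
from scipy.integrate import quad

a, b, w = 3.0, 5.0, 0.2
f = lambda t: a * t**(a - 1.0) / w**a     # tail density of F on [0, w]
g = lambda t: b * t**(b - 1.0) / w**b     # tail density of G on [0, w]

KL, _ = quad(lambda t: f(t) * np.log(f(t) / g(t)), 0.0, w)
XH, _ = quad(lambda t: -f(t) * np.log(g(t)), 0.0, w)
H,  _ = quad(lambda t: -f(t) * np.log(f(t)), 0.0, w)

print(KL, XH - H)   # the two values agree
```

For these exact power laws, one can also verify the closed form KL = log(a/b) + b/a − 1, which depends only on the two exponents and not on w, suggestive of the kind of LID-only asymptotic limits developed later in the paper.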
The Jensen-Shannon divergence (JS divergence) [77] is another popular measure of distance between probability distributions. It is based on the KL divergence, but unlike the KL, the square root of the JS divergence is a true metric.

Definition 13 (Tail JS Divergence). The Jensen-Shannon divergence between F and G, conditioned on [0, w], is defined to be

JS(F; G, w) ≜ (1/2) ∫_0^w F'_w(t) log( F'_w(t) / M'_w(t) ) dt + (1/2) ∫_0^w G'_w(t) log( G'_w(t) / M'_w(t) ) dt,

where M'_w ≜ (F'_w + G'_w)/2 denotes the mixture of the tail densities.
The tail JS divergence can also be written in terms of the tail entropies: JS(F; G, w) = −∫_0^w M'_w(t) log M'_w(t) dt − (1/2)(H(F, w) + H(G, w)), where M'_w ≜ (F'_w + G'_w)/2. The L2 distance is the squared Euclidean distance between the density functions of two probability distributions. It is a member of the family of β-divergences, obtained by setting β = 2 [78].

Definition 14 (Tail L2 Distance). The L2 distance between F and G, conditioned on [0, w], is defined to be

L2(F; G, w) ≜ ∫_0^w ( F'_w(t) − G'_w(t) )² dt.
The Hellinger distance [79] is a true metric for comparing two probability distributions. The squared Hellinger distance is a member of the family of f-divergences, and is part of the family of α-divergences when setting α = 1/2 [80].
Definition 15 (Tail Hellinger Distance). The Hellinger distance between F and G, conditioned on [0, w], is defined to be

HD(F; G, w) ≜ ( (1/2) ∫_0^w ( √(F'_w(t)) − √(G'_w(t)) )² dt )^{1/2}.

The χ² divergence between two probability distributions [81] is a member of the family of f-divergences, and is part of the family of α-divergences when setting α = 2 [80].

Definition 16 (Tail χ²-Divergence). The χ² divergence between F and G, conditioned on [0, w], is defined to be

χ²(F; G, w) ≜ ∫_0^w ( F'_w(t) − G'_w(t) )² / G'_w(t) dt.

The asymmetric α-divergence [80] is another member of the family of f-divergences. When α = 2 it is proportional to the χ² divergence. When α = 0.5 it is proportional to the squared Hellinger distance. When α → 1 it corresponds to the KL divergence.

Definition 17 (Tail α-Divergence). For α ∉ {0, 1}, the α-divergence from F to G, conditioned on [0, w], is defined to be

D_α(F; G, w) ≜ (1/(α(α−1))) · ( ∫_0^w F'_w(t)^α G'_w(t)^{1−α} dt − 1 ).
The Wasserstein distance between two probability distributions is also known as the Kantorovich-Rubinstein metric [82] or the earth mover's distance. It has become very popular as part of the loss function used in generative adversarial networks [83]. In the univariate case it can be expressed in a simple analytic form.

Definition 18 (Tail Wasserstein Distance). The p-th Wasserstein distance between F and G, conditioned on [0, w], is defined to be

W_p(F; G, w) ≜ ( ∫_0^1 | F^{-1}_w(u) − G^{-1}_w(u) |^p du )^{1/p}.
For some of the aforementioned tail measures, we will also consider a normalization of the entropy, divergence or distance (as the case may be) with respect to w, the length of the tail. In Sections 5 and 6, we will show that as w tends to zero, the limits of these (possibly normalized) tail entropies and tail divergences can be expressed in terms of the local intrinsic dimensionalities of F and G. The notation for these variants, and our results for their limits in terms of ID*_F and ID*_G, are summarized in Table 1. Table 1. Asymptotic equivalences between LID formulations and tail measures of entropy or divergence. In each case, the functions F and G are assumed to be smooth growth functions. In addition, for the Normalized Wasserstein Distance, F and G must be strictly monotonically increasing, thereby guaranteeing that the inverses of F_w and G_w exist near zero. In some cases, for the asymptotic limit to exist non-trivially (that is, to be both finite and non-zero), the tail entropy or tail divergence must be normalized by a multiplicative power of w (such as 1/w or w). For the Tail Entropy and Tail Cross Entropy, no reweighting by powers of w can lead to a non-trivial asymptotic limit as w tends to zero.

[Table 1: column headings 'Tail Measure' and 'Formulation'; the table body is not reproduced here.]
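As a numeric sketch of the kind of asymptotic equivalence summarized in Table 1, consider the cumulative tail entropy CE(F, w) = −∫_0^w F_w(t) log F_w(t) dt. For an exact power-law tail F_w(t) = (t/w)^a, this integral evaluates to w·a/(a+1)². The code below is our own illustration, using the hypothetical perturbed power law F(t) = t³(1 + t) (for which ID*_F = 3); the normalized quantity CE(F, w)/w approaches a/(a+1)² = 3/16 as w shrinks:

```python
import numpy as np
from scipy.integrate import quad

a = 3.0
F = lambda t: t**a * (1.0 + t)    # hypothetical smooth growth function, LID = 3

def cum_tail_entropy(w):
    F_w = lambda t: F(t) / F(w)   # tail-conditioned CDF on [0, w]
    val, _ = quad(lambda t: -F_w(t) * np.log(F_w(t)), 0.0, w)
    return val

for w in [0.5, 0.1, 0.01]:
    print(w, cum_tail_entropy(w) / w)   # tends to 3/16 = 0.1875
```

The perturbing factor (1 + t) contributes only a correction of order w, which vanishes in the limit, leaving a value that depends on the LID alone.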

Simplification of Tail Measures
Next, we present the main theoretical contributions of the paper: three technical lemmas that will later be used to establish relationships between local intrinsic dimensionality and a variety of tail measures based on entropy, divergences or distances. The results presented in this section all apply asymptotically, as the tail boundary tends toward zero.
Each of the three lemmas allows, under certain conditions, the simplification of limits of integrals involving smooth growth functions of the form F_w (as defined in Section 4), or its associated first derivative F'_w or inverse function F^{-1}_w. The limit integral simplifications allow for the substitution of the function (or derivative, or inverse) by expressions that involve one or more of the following: the LID value of the function, the variable of integration, or the tail boundary w. Moreover, the lemmas require that the integrand be monotone with respect to small variations in the targeted function.
The first lemma allows terms of the form F_w (resembling the CDF of a tail-conditioned distribution) to be converted into a term that depends only on the variable of integration, the tail length w, and the local intrinsic dimension ID*_F.

Lemma 1.
Let F be a smooth growth function over the interval [0, r). Consider the function φ : R²+ → R admitting a representation of the form

φ(t, w) ≡ ψ(t, w, F_w(t)),

where: for all fixed choices of t and w satisfying 0 < t ≤ w < r, ψ(t, w, z) is monotone and continuously partially differentiable with respect to z over the interval z ∈ (0, 1]. Then

lim_{w→0+} ∫_0^w φ(t, w) dt = lim_{w→0+} ∫_0^w ψ(t, w, (t/w)^{ID*_F}) dt

whenever the latter limit exists or diverges to +∞ or −∞.

Proof. Since F is assumed to be a smooth growth function, the limit ID*_F = lim_{v→0+} ID_F(v) exists and is positive. We present an 'epsilon-delta' argument based on this limit. For any real value ε > 0 satisfying ε < min{r, ID*_F}, there must exist a value δ ∈ (0, ε) such that |ID_F(v) − ID*_F| ≤ ε for all v ∈ (0, δ). Exponentiating, we obtain bounds on the factor A_F(t, w) of Theorem 2. Applying these bounds together with Theorem 2, the ratio F_w(t) = F(t)/F(w) can be seen to satisfy

(t/w)^{ID*_F + ε} ≤ F_w(t) ≤ (t/w)^{ID*_F − ε}.   (1)

Over the domain of interest 0 < t ≤ w < δ, the assumption that 0 < ε < min{r, ID*_F} ensures that 0 < t/w ≤ 1, and that the upper and lower bounds of Inequality (1) lie in the interval (0, 1]. Since ψ(t, w, z) has been assumed to be monotone with respect to z ∈ (0, 1], the maximum and minimum attained by ψ over choices of z restricted to any (closed) subinterval of (0, 1] must occur at opposite endpoints of the subinterval. With this in mind, for any choice of ε ∈ (0, min{r, ID*_F}), Inequality (1) implies that ∫_0^w φ(t, w) dt lies between ∫_0^w ψ(t, w, (t/w)^{ID*_F − ε}) dt and ∫_0^w ψ(t, w, (t/w)^{ID*_F + ε}) dt. Since ψ(t, w, z) and ∫_0^w ψ(t, w, z) dt are also continuously partially differentiable with respect to z over z ∈ (0, 1], these bounding integrals converge to ∫_0^w ψ(t, w, (t/w)^{ID*_F}) dt as ε → 0. It therefore follows from the squeeze theorem for integrals that

lim_{w→0+} ∫_0^w φ(t, w) dt = lim_{w→0+} ∫_0^w ψ(t, w, (t/w)^{ID*_F}) dt

whenever the right-hand limit exists or diverges.

In a manner similar to that of the preceding lemma, the following result allows terms of the form F^{-1}_w (the inverse of F_w) to be converted into a term that depends only on the variable of integration, the tail length w, and the local intrinsic dimension ID*_F. Here, in order to ensure the existence of the inverse function, F (and by extension F_w and F^{-1}_w) must be strictly monotonically increasing over the tail.

Lemma 2.
Let F be a smooth growth function over the interval [0, r). Let us also assume that, over this interval, the monotonicity of F is strict. Consider a function φ : R²₊ → R admitting a representation of the form φ(u, w) ≡ ψ(u, w, F_w^{−1}(u)), where: for all fixed choices of u and w satisfying u ∈ [0, 1] and 0 < w < r, ψ(u, w, z) is monotone and continuously partially differentiable with respect to z over the interval z ∈ (0, r). Then

lim_{w→0+} ∫_0^1 φ(u, w) du = lim_{w→0+} ∫_0^1 ψ(u, w, w·u^{1/ID*_F}) du

whenever the latter limit exists or diverges to +∞ or −∞.
Proof. First, we note that the strict monotonicity of F implies that for all u ∈ [0, 1] and w ∈ (0, r), the function F_w^{−1}(u) is uniquely defined when F_w is restricted to [0, w]. As in the proof of Lemma 1, an 'epsilon-delta' argument based on the existence of the limit ID*_F = lim_{v→0+} ID_F(v) yields the following: for any real value ε > 0 satisfying ε < min{r, ID*_F}, there exists a value δ ∈ (0, ε) such that

(t/w)^{ID*_F + ε} ≤ F_w(t) ≤ (t/w)^{ID*_F − ε}

holds for all 0 < t ≤ w < δ. Solving for t through exponentiation of the bounds, and then setting t = F_w^{−1}(u), we obtain

w·u^{1/(ID*_F − ε)} ≤ F_w^{−1}(u) ≤ w·u^{1/(ID*_F + ε)} .

The remainder of the proof follows essentially the same path as that of Lemma 1. Over the domain of interest 0 < t ≤ w < δ, the assumption that 0 < ε < min{r, ID*_F} ensures that 0 < t/w ≤ 1, and that F_w^{−1}(u) lies in the interval (0, w] ⊂ (0, r). Since ψ(u, w, z) has been assumed to be monotone with respect to z ∈ (0, r), the maximum and minimum attained by ψ over choices of z restricted to any (closed) subinterval of (0, r) must occur at opposite endpoints. Therefore, for any choice of ε ∈ (0, min{r, ID*_F}), the integral ∫_0^1 φ(u, w) du is bounded between ∫_0^1 ψ(u, w, w·u^{1/(ID*_F − ε)}) du and ∫_0^1 ψ(u, w, w·u^{1/(ID*_F + ε)}) du. Since ψ(u, w, z) is also continuously partially differentiable with respect to z over z ∈ (0, r), it therefore follows from the squeeze theorem for integrals that the stated equality holds whenever the right-hand limit exists or diverges.
The third lemma facilitates the conversion of a term of the form F'_w to F_w, together with a factor that depends only on the variable of integration and ID_F. Since F is assumed to be a smooth growth function, F_w must be smooth as well, and therefore F_w satisfies the conditions of Theorem 1 over [0, w). Hence, F'_w can be substituted by an expression involving F_w:

F'_w(t) = ID_F(t) · F_w(t) / t .

The substitution comes at the cost of introducing a non-constant factor ID_F(t). The following lemma shows that ID_F(t) can in turn be substituted by the constant ID*_F, provided that certain monotonicity assumptions are satisfied.

Lemma 3. Let F be a smooth growth function over the interval [0, r). Consider a function φ : R²₊ → R admitting a representation of the form φ(t, w) ≡ ψ(t, w, ID_F(t)), where: there exists a value γ ∈ (0, ID*_F) such that for all fixed choices of t and w satisfying 0 < t ≤ w < r, ψ(t, w, z) is monotone with respect to z over the interval z ∈ (ID*_F − γ, ID*_F + γ). Then

lim_{w→0+} ∫_0^w φ(t, w) dt = lim_{w→0+} ∫_0^w ψ(t, w, ID*_F) dt

whenever the latter limit exists or diverges to +∞ or −∞.
Proof. Since F is assumed to be a smooth growth function, the limit ID*_F = lim_{v→0+} ID_F(v) exists and is positive. We present an 'epsilon-delta' argument based on this limit. For any real value ε > 0 satisfying ε < min{r, γ}, there must exist a value 0 < δ < ε such that v < δ implies that |ID_F(v) − ID*_F| < ε. Since ψ(t, w, z) has been assumed to be monotone with respect to z over the interval z ∈ (ID*_F − γ, ID*_F + γ), the restriction v < δ < ε < min{r, γ} ensures that ψ(t, w, ID_F(t)) is monotone over the entire domain of interest 0 < t ≤ w < δ. Therefore, the maximum and minimum attained by ψ over choices of z restricted to any (closed) subinterval of (ID*_F − γ, ID*_F + γ) must occur at opposite endpoints of the subinterval. As in the proof of Lemma 1, ∫_0^w φ(t, w) dt is bounded between ∫_0^w ψ(t, w, ID*_F − ε) dt and ∫_0^w ψ(t, w, ID*_F + ε) dt. Since ψ(t, w, z) is also continuously partially differentiable with respect to z over this range, it therefore follows from the squeeze theorem for integrals that the stated equality holds whenever the right-hand limit exists or diverges.
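To make the mechanics of the first substitution lemma concrete, the following numerical sketch (our own illustration, not from the paper) compares the two integrals for a hypothetical smooth growth function F(t) = t²(1 + t), for which ID_F(t) = (2 + 3t)/(1 + t) and hence ID*_F = 2, taking ψ(t, w, z) = z:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical smooth growth function with ID*_F = 2:
# F(t) = t^2 (1 + t), so ID_F(t) = t F'(t)/F(t) = (2 + 3t)/(1 + t) -> 2 as t -> 0+.
F = lambda t: t**2 * (1 + t)

def lhs(w):
    # integral of psi(t, w, F_w(t)) over (0, w], with psi(t, w, z) = z
    return quad(lambda t: F(t) / F(w), 0, w)[0]

def rhs(w):
    # integral of psi(t, w, (t/w)^{ID*_F}), the substituted form from Lemma 1
    return quad(lambda t: (t / w)**2, 0, w)[0]

for w in [0.5, 0.05, 0.005]:
    print(w, lhs(w) / rhs(w))   # ratio approaches 1 as w -> 0+
```

In closed form, the ratio here is (1 + 3w/4)/(1 + w), which tends to 1 as w → 0+, in agreement with the lemma.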

Derivation of the Limits of Tail Measures
In this section, we will see how the three substitution lemmas can be applied to the limits of tail measures of entropy, divergence or distance, so as to produce formulations that depend only on the local intrinsic dimensions of the functions involved. All three lemmas require that the integral function be monotone with respect to small variations in the term that is targeted for substitution. In the discussion, we choose two tail measures as running examples: the tail KL divergence and the second tail Wasserstein distance (p = 2).

Handling Derivatives of Smooth Growth Functions
In the case of the tail KL divergence, Theorem 1 allows us to substitute out the first derivatives F'_w and G'_w in favor of the functions F_w and G_w:

KL(F_w ‖ G_w) = ∫_0^w F'_w(t) ln( F'_w(t) / G'_w(t) ) dt = ∫_0^w ( ID_F(t)·F_w(t)/t ) ln( ID_F(t)·F_w(t) / ( ID_G(t)·G_w(t) ) ) dt .

Substitution of LID Functions by Constants
In the limit of the tail KL divergence, the functions ID_F(t) and ID_G(t) can be replaced by the constants ID*_F and ID*_G, respectively, through three successive applications of Lemma 3. To verify that the monotonicity condition of the lemma is satisfied, we choose one of the terms and replace it by a new variable z:

∫_0^w ( z·F_w(t)/t ) ln( ID_F(t)·F_w(t) / ( ID_G(t)·G_w(t) ) ) dt .

For any fixed values of t and w, it is easy to see that the integrand is locally monotone in the vicinity of z = ID_F(t): if ln[ ID_F(t)·F_w(t) / ( ID_G(t)·G_w(t) ) ] is positive, a small increase in z (above the value ID_F(t)) would result in an increase in the value of the integrand, and a small decrease would cause the integrand to decrease. If instead the logarithmic factor were negative, an increase in z would result in a decrease in the value of the integrand. Either way, the integrand would be monotone in the vicinity of z = ID_F(t) at each fixed value of t and w. Its monotonicity condition thus being satisfied, Lemma 3 allows the targeted instance of ID_F(t) to be substituted by ID*_F. Similarly, it can be verified that the resulting integrand is monotone in each of the remaining two factors ID_F(t) and ID_G(t); consequently, they too can be substituted by ID*_F and ID*_G, one at a time, to yield

lim_{w→0+} ∫_0^w ( ID*_F·F_w(t)/t ) ln( ID*_F·F_w(t) / ( ID*_G·G_w(t) ) ) dt .

Elimination of Tail-Conditioned Smooth Growth Functions
Now that the tail KL divergence has been reformulated in terms of the tail-conditioned smooth growth functions F_w and G_w, these two functions can be substituted out via three successive applications of Lemma 1, so as to obtain the limit of an integral involving only the variable of integration t and the constants w, ID*_F and ID*_G:

lim_{w→0+} ∫_0^w ( ID*_F / t ) (t/w)^{ID*_F} ln( ID*_F·(t/w)^{ID*_F} / ( ID*_G·(t/w)^{ID*_G} ) ) dt .

As in the previous step in which ID_F(t) and ID_G(t) were substituted out, the monotonicity conditions of Lemma 1 can easily be verified. Now that the integral involves only constants and the variable t, it can be solved straightforwardly using integration by parts, yielding

lim_{w→0+} KL(F_w ‖ G_w) = ln( ID*_F / ID*_G ) + ID*_G / ID*_F − 1 .
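As a numerical sanity check (our own illustration) of the limiting value ln(ID*_F/ID*_G) + ID*_G/ID*_F − 1, consider the idealized growth functions F(t) = t^a and G(t) = t^b, for which ID*_F = a and ID*_G = b exactly; for these, the tail KL divergence matches the limiting expression for every tail length w:

```python
import numpy as np
from scipy.integrate import quad

a, b = 3.0, 1.5   # ID*_F and ID*_G for F(t) = t^a, G(t) = t^b

def tail_kl(w):
    # KL(F_w || G_w) = integral of f_w(t) ln(f_w(t)/g_w(t)) dt over (0, w],
    # with tail densities f_w(t) = a t^(a-1)/w^a and g_w(t) = b t^(b-1)/w^b.
    f = lambda t: a * t**(a - 1) / w**a
    g = lambda t: b * t**(b - 1) / w**b
    return quad(lambda t: f(t) * np.log(f(t) / g(t)), 0, w)[0]

limit = np.log(a / b) + b / a - 1   # ln(ID*_F/ID*_G) + ID*_G/ID*_F - 1
print(tail_kl(0.1), tail_kl(0.5), limit)   # all values agree
```

For these power-law growth functions the tail KL divergence is scale-free, so the agreement holds for every w, not only in the limit.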

Elimination of the Inverses of Tail-Conditioned Smooth Growth Functions
We now turn our attention to the limit of the tail Wasserstein distance for the case p = 2. Using Lemma 2, the inverse functions F_w^{−1} and G_w^{−1} can be substituted out, provided that the monotonicity requirements are satisfied. However, immediate application of the lemma to F_w^{−1}(u) or G_w^{−1}(u) does not necessarily work: to see this, consider substituting F_w^{−1}(u) by the new variable z.
Clearly, the integrand (z − G_w^{−1}(u))² is not necessarily monotone in z in the vicinity of those values of the integration variable u where G_w^{−1}(u) = z. Instead, we expand the squared difference and apply Lemma 2 to each of the resulting four occurrences of F_w^{−1} and G_w^{−1}, one by one. By way of illustration, we consider substitution by z for the factor F_w^{−1}(u) in the cross term:

−2 ∫_0^1 z · G_w^{−1}(u) du .

With respect to small variations in the variable z about the value F_w^{−1}(u), noting that G_w^{−1} is always non-negative, the integrand is easily seen to be monotone in z when G_w^{−1}(u) is non-zero: for any increase in z, the value of the integrand decreases, and for any decrease in z, the value of the integrand increases. Lemma 2 can therefore be applied, producing

−2 ∫_0^1 w·u^{1/ID*_F} · G_w^{−1}(u) du .

After three more applications of Lemma 2, followed by taking the square root of the integral, we obtain

lim_{w→0+} W_2(F_w, G_w) = lim_{w→0+} ( ∫_0^1 ( w·u^{1/ID*_F} − w·u^{1/ID*_G} )² du )^{1/2} = 0 .

Normalization
Even though the limit of the second tail Wasserstein distance is zero and therefore uninteresting, we observe that by normalizing it by the tail length w, we arrive at a more useful result:

lim_{w→0+} W_2(F_w, G_w) / w = ( ID*_F/(ID*_F + 2) + ID*_G/(ID*_G + 2) − 2·ID*_F·ID*_G/(ID*_F·ID*_G + ID*_F + ID*_G) )^{1/2} .

In general, reweighting by a power of w may be required to expose a relationship between the tail limit of an entropy measure or divergence and an expression in terms of the local intrinsic dimensions of the functions involved. Since local intrinsic dimension is a unitless quantity, in order to establish a non-trivial formulation solely in terms of LID values, any tail measure whose values are not unitless will generally require some form of normalization. Generally speaking, for the normalized tail Wasserstein distances with p non-integer or p odd (Table 5), Lemma 2 cannot be applied, due to the absolute value operation in the integrand. The functions F^{−1}(u) and G^{−1}(u) may have crossing points for many (possibly even infinitely many) values of u between 0 and 1. At these values of u, F^{−1}(u) − G^{−1}(u) = 0, and neither z − G^{−1}(u) nor F^{−1}(u) − y would be monotone in the vicinity of z = F^{−1}(u) or y = G^{−1}(u), as the case may be.

Summary of Results
For the tail JS divergence (Table 3), the derivation relies on the fact that the LID of the sum (or average) of two non-negative smooth growth functions is the smaller of the two individual LID values. This is an implication of the fact that lim_{t→0+} V(t)/W(t) = 0 whenever the smooth growth functions V(t) and W(t) satisfy 0 < ID*_W < ID*_V (see [84] for more details). Accordingly, if ID*_F ≠ ID*_G, then the function (F or G) with the smaller LID value must have the same LID value as the average function M(t) = (F(t) + G(t))/2, and the other function (G or F) must have LID value equal to the maximum of the two. From these observations, the derivation can be seen to hold.
The result for the limit of the tail KL divergence has an interesting interpretation in light of the so-called Itakura-Saito (IS) divergence (or distance) [85]. As the tail boundary w tends to 0, the tail KL divergence between smooth growth functions F and G tends to the (univariate) IS divergence between their associated LID values ID*_G and ID*_F:

lim_{w→0+} KL(F_w ‖ G_w) = d_IS( ID*_G | ID*_F ) = ID*_G/ID*_F − ln( ID*_G/ID*_F ) − 1 .

When F and G are interpreted as the CDFs of distance distributions, the shape parameters of the extreme-value-theoretic generalized Pareto distributions (GPDs) that asymptotically characterize their lower tails are known to equal −1/ID*_F and −1/ID*_G, respectively [40]. Since the ratio of these parameters is equal to (the reciprocal of) the ratio of LID values, the tail KL divergence between F and G can also be interpreted as tending to the IS divergence between GPD parameters. Table 2. Derivations of asymptotic relationships between tail entropy variants and local intrinsic dimensionality. Each step shows the equivalences between the formulations when w is allowed to tend to zero. In the comments column, for each step of the derivation, the lemmas invoked are stated, as well as any additional assumptions made. If a normalization or other weighting is needed to avoid divergence, or convergence to a constant (independent of F), the details are shown in a comment in the final step. In all cases, F is assumed to be a smooth growth function.

Table 3. Derivations of asymptotic relationships between tail divergences and local intrinsic dimensionality. Each step shows the equivalences between the formulations when w is allowed to tend to zero. In the comments column, for each step of the derivation, the lemmas invoked are stated, as well as any additional assumptions made. If a normalization or weighting is needed, the details are shown in a comment in the final step. In all cases, F and G are assumed to be smooth growth functions.

The IS divergence is popular as an objective for matrix factorization of audio spectra [86], where it assesses the loss incurred by using an entry y_{i,j} to approximate a true entry x_{i,j}. More precisely, to approximate a matrix V by the factorization WH, the loss is

∑_{i,j} d_IS( V_{i,j} | (WH)_{i,j} ) ,   where   d_IS( x | y ) = x/y − ln( x/y ) − 1 .

The IS divergence is a convenient choice for this scenario due to its scale-free property (d_IS(x|y) = d_IS(αx|αy) for any α > 0), which gives the same relative weight to both small and large values of x and y, since they appear only through the ratio x/y. This is important for scenarios such as audio spectra, where the magnitudes of the entries can vary greatly.
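The scale-free property is easy to verify directly; the following snippet (our own illustration) checks it for the univariate IS divergence:

```python
import math

def d_is(x, y):
    """Itakura-Saito divergence d_IS(x | y) = x/y - ln(x/y) - 1, for x, y > 0."""
    return x / y - math.log(x / y) - 1

# Scale-free property: rescaling both arguments leaves the divergence unchanged.
print(d_is(2.0, 5.0), d_is(200.0, 500.0))   # identical values

# Non-negativity, with equality exactly when x == y.
print(d_is(3.0, 3.0))   # 0.0
```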
The Itakura-Saito divergence falls into the family of so-called Bregman divergences (or distances) [87], which have a geometric interpretation as the difference between the value of a convex generator function at x on the one hand, and on the other, the value at x of the hyperplane function that is tangent to the generator curve at y. Bregman divergences are a highly expressive family of distances with a wide range of applications [88]. For the IS divergence, the convex generator function is the negative logarithm −∑_{i=1}^n ln x_i. Interestingly, the KL divergence is also a Bregman divergence, with its convex generator being the negative entropy function ∑_{i=1}^n x_i ln x_i [89]. Table 4. Derivations of asymptotic relationships between tail distances and local intrinsic dimensionality. Each step shows the equivalences between the formulations when w is allowed to tend to zero. In the comments column, for each step of the derivation, the lemmas invoked are stated, as well as any additional assumptions made. For each tail distance, the first step of the derivation shows an expansion by which the monotonicity of each factor can be verified. If a normalization or weighting is needed, the details are shown in a comment in the final step. In all cases, F and G are assumed to be smooth growth functions. Table 5. Derivations of asymptotic relationships between tail Wasserstein distances and local intrinsic dimensionality. Each step shows the equivalences between the formulations when w is allowed to tend to zero. In the comments column, for each step of the derivation, the lemmas invoked are stated, as well as any additional assumptions made. Normalization details are shown in a comment in the final step. In all cases, F and G are assumed to be invertible smooth growth functions.
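The two generator functions mentioned above can be checked against the general Bregman form D_φ(x, y) = φ(x) − φ(y) − φ'(y)(x − y); the snippet below (our own univariate illustration) recovers the IS divergence from −ln x and the generalized KL divergence from x ln x:

```python
import math

def bregman(phi, dphi, x, y):
    # D_phi(x, y) = phi(x) - phi(y) - phi'(y)(x - y): the gap between phi at x
    # and the tangent line to phi drawn at y, evaluated at x.
    return phi(x) - phi(y) - dphi(y) * (x - y)

# Generator -ln x yields the (univariate) Itakura-Saito divergence.
is_breg = bregman(lambda t: -math.log(t), lambda t: -1 / t, 2.0, 5.0)
is_direct = 2.0 / 5.0 - math.log(2.0 / 5.0) - 1
print(is_breg, is_direct)   # equal

# Generator x ln x yields the (univariate, generalized) KL divergence.
kl_breg = bregman(lambda t: t * math.log(t), lambda t: math.log(t) + 1, 2.0, 5.0)
kl_direct = 2.0 * math.log(2.0 / 5.0) - 2.0 + 5.0
print(kl_breg, kl_direct)   # equal
```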


Extension to Multivariate Distributions
Thus far, our results have focused on a univariate scenario, wherein entropy and divergence variants were shown to be asymptotically equivalent to formulations involving the local intrinsic dimensionalities of smooth distributions of a single random variable. As discussed in Section 3, these results can be applied to distance-based analysis, through characterizations involving the LIDs of local (univariate) distance distributions induced by the overall (global) multivariate distribution. These characterizations are indirect, in that they do not explicitly involve (nor do they require) any knowledge of the underlying global distribution and its parameters. However, characterizations in terms of induced distance distributions may not be entirely satisfying when the nature of the global multivariate distribution is either known or assumed. In this section, we will assume that our domain S is the n-dimensional space R^n equipped with the Euclidean distance d(x, y) = ‖x − y‖. Within S, we will also assume that we are given a data distribution D with probability density function p : R^n → R_{≥0}.

Multivariate Tail Distributions with Local Spherical Symmetry
Within the Euclidean domain, the challenge is to analyze distributions in terms of the probability measure captured within volumes associated with a distributional tail. However, unlike in univariate distributions, there is no universally accepted notion of 'distributional tail' for multivariate distributions. For our purposes, given a distance r > 0, we define the tail of D of length r to be the region enclosed by the ball of radius r centered at the origin; that is, B(r) ≜ {x ∈ R^n : ‖x‖ ≤ r}. The boundary of the tail is the (n − 1)-dimensional surface of B(r), which we denote by ∂B(r) ≜ {x ∈ R^n : ‖x‖ = r}.
To enable tractable analysis, we will assume that the PDF can be expressed in terms of a locally spherically symmetrical function. One example of where local spherical symmetry can be expected to hold is for a locally isotropic context. This is a common assumption for physical systems, including metals, glasses, fluids and polymers, for which the distribution locally surrounding a particle in the system does not have a directional preference.
Formally, we say that a density function f is locally spherically symmetrical within radius w if, for all ‖x‖ ≤ w, we have f(x) = f*(r) for some univariate function f*, where r = ‖x‖. For f to be locally spherically symmetrical, it suffices that f(x) be equal to f(y) whenever 0 ≤ ‖x‖ = ‖y‖ ≤ w: this implies the existence of a function f* for which f(x) = f*(r), and therefore that f must be constant over all points of the sphere ∂B(r), for every r ≤ w.
The probability measure captured by B(r), which we denote by F(r), is obtained through the integration of f over this ball:

F(r) = ∫_{B(r)} f(x) dx .   (2)

It is not difficult to see that the univariate function F is simply the CDF of the distribution of distances to the origin induced by the global distribution D. If F is differentiable over the tail interval (0, r], then the integral of F' over this interval exists, and equals F. The derivative F'(r) can therefore be interpreted as the PDF of the radial distance distribution as measured from the origin. For spherically symmetric distributions in Euclidean spaces, the multivariate density and the radial density are related through a factor that depends on the surface area of spherical volumes. The formulae for the volume of an n-dimensional ball and its (n − 1)-dimensional surface area are given by

V_n(r) ≜ π^{n/2} / Γ((n/2) + 1) · r^n   and   S_{n−1}(r) ≜ 2π^{n/2} / Γ(n/2) · r^{n−1} ,

respectively, where Γ is the usual gamma function: Γ(n) = (n − 1)! when n is a positive integer, and Γ(n + 1/2) = (n − 1/2)(n − 3/2) · · · (1/2)·√π when n is a non-negative integer. Furthermore, the volume and surface area have a simple relationship that allows for easy conversion between the two:

r · S_{n−1}(r) = n · V_n(r) .   (3)
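These formulae and the conversion identity can be verified directly (our own numerical check):

```python
import math

def V(n, r):
    # volume of the n-dimensional ball of radius r: pi^(n/2)/Gamma(n/2 + 1) * r^n
    return math.pi**(n / 2) / math.gamma(n / 2 + 1) * r**n

def S(n, r):
    # surface area of its boundary sphere: 2 pi^(n/2)/Gamma(n/2) * r^(n-1)
    return 2 * math.pi**(n / 2) / math.gamma(n / 2) * r**(n - 1)

# Familiar low-dimensional cases:
print(V(2, 1.0), math.pi)        # area of the unit disc
print(S(3, 1.0), 4 * math.pi)    # surface area of the unit sphere

# The conversion identity r * S_{n-1}(r) = n * V_n(r), for several n:
for n in range(1, 8):
    print(n, 1.7 * S(n, 1.7) - n * V(n, 1.7))   # differences are ~0
```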

Lemma 4 ([90]). Let X be an n-dimensional random vector that is spherically symmetric with a radial distribution R. Then X has a density f(x) if and only if R has a density s, in which case

f(x) = s(‖x‖) / S_{n−1}(‖x‖) .

If f is locally spherically symmetric over B(r) and its radial CDF F is a smooth growth function, Equation (2) and Lemma 4 together give us the following relationship between the radial density F' and the multivariate density f:

F'(‖x‖) = S_{n−1}(‖x‖) · f(x) .

Conditioning the distribution to the ball B(r), the tail distribution PDF becomes f_r(x) = f(x) / F(r).
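Lemma 4 can be illustrated with a concrete spherically symmetric example (our own choice): for the standard n-variate normal distribution, the radial (distance-to-origin) distribution is the chi distribution with n degrees of freedom, and the identity f(x) = s(‖x‖)/S_{n−1}(‖x‖) holds exactly:

```python
import math
import numpy as np
from scipy.stats import chi

n, r = 5, 1.3   # dimension, and a test radius ||x|| = r

f = (2 * np.pi)**(-n / 2) * np.exp(-r**2 / 2)   # standard normal density at ||x|| = r
s = chi(df=n).pdf(r)                             # radial density (chi with n dof)
S = 2 * math.pi**(n / 2) / math.gamma(n / 2) * r**(n - 1)   # (n-1)-sphere surface area

print(f, s / S)   # the two values agree, as Lemma 4 requires
```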

Multivariate Tail Entropy Variants
The aforementioned relationships between multivariate and radial densities can be immediately used to compute the various tail entropies for the locally spherically symmetric multivariate case. Useful background on evaluating radial integrals can be found in Baker [91]. For example, the multivariate Tail Entropy is

H( f , w) = − ∫_{B(w)} f_w(x) ln f_w(x) dx .

Although the multivariate formulation of Tail Entropy H( f , w) resembles the univariate formulation H(F, w), the two are not identical. Nevertheless, the multivariate formulation can still be simplified using the technical lemmas introduced in Section 5. In much the same way as for the univariate Tail Entropy Power, we can use Theorem 1 together with Lemmas 1 and 3 to determine the limit of H( f , w) as w tends to 0. Solving the integral, and then using Equation (3) to convert the surface area factor S_{n−1} to an expression involving the volume V_n, we eventually arrive at a limit that diverges even when the Tail Entropy is reweighted by V_n(w) (or indeed, by any other polynomial in w). However, the Tail Entropy Power, when normalized by V_n(w), does converge to a strictly positive value. As one might expect in the n-dimensional Euclidean setting, the (normalized asymptotic) multivariate Tail Entropy Power is maximized whenever ID*_F, the local intrinsic dimensionality of the associated radial CDF F, is equal to n.

Multivariate Cumulative Tail Entropy
In the multivariate setting, cumulative entropy is defined in terms of the distributional tail, according to the notion laid out in Section 7.1. In place of the usual probability density f(x), the entropy function is applied to the probability measure associated with the ball centered at the origin with radius ‖x‖; that is, to F(‖x‖). Note that since F takes the same value at x and y whenever ‖x‖ = ‖y‖, the quantity F(‖x‖) is locally spherically symmetric even when the underlying density function f is not.
We can adapt the multivariate formulation of cumulative residual entropy that was originally proposed by Rao [56]. The multivariate Cumulative Tail Entropy, conditioned to a distributional tail of radius w, can be expressed either as a multivariate integral involving F_w(‖x‖), or as a radial integral involving F_w. As in the treatment of the univariate tail entropies, we can use Lemma 1 to determine its limit as w tends to 0. Solving the integral, and then converting the surface area factor S_{n−1} to a volume factor V_n using Equation (3), we find that although the multivariate Cumulative Tail Entropy vanishes as the tail boundary w tends to zero, when normalized by the tail volume V_n(w) it converges to a strictly positive value. Again, as with the Normalized Tail Entropy Power, the (asymptotic) multivariate Cumulative Tail Entropy is maximized whenever φ = ID*_F / n = 1; that is, when ID*_F = n.

Multivariate Tail Divergences
Several of the tail divergence measures, when considered in the multivariate setting under the assumption of local spherical symmetry, turn out to be identical to those of the radial (univariate) setting. As an example, consider the multivariate Tail KL Divergence, defined as

KL( f_w ‖ g_w ) = ∫_{B(w)} f_w(x) ln( f_w(x) / g_w(x) ) dx .

Applying Lemma 4 and integrating radially over the tail, we see that this coincides with the (univariate) Tail KL Divergence of F and G, which (as stated in Table 1) has the limit

lim_{w→0+} KL(F_w ‖ G_w) = ln( ID*_F / ID*_G ) + ID*_G / ID*_F − 1 .

Similarly, it can easily be seen that the multivariate versions of the JS Divergence, the Hellinger Distance, the χ²-Divergence and the α-Divergence all have radial integral formulations identical to their corresponding univariate versions.

Observations
The general strategy for deriving these results is essentially the same as for the multivariate Tail Entropy: first use Lemma 4 to convert the multidimensional integral to an integral in one dimension, then use the technical lemmas of Section 5 to simplify the univariate integral as before.
Our results for the locally spherically symmetric multivariate case are shown in Table 6; however, since their derivations greatly resemble those of the analogous univariate cases, we omit the details. Some remarks:

1. A result for the Wasserstein Distance is not included, since its formulation does not generalize straightforwardly to higher dimensions, unlike the other divergence measures.

2. The normalizations and weightings used depend only on the tail volume V_n(w) and (for the Tsallis entropy variants) the parameter q. This generalizes our earlier univariate results, where normalization was performed with respect to the tail length w.

3. All the multivariate tail variants considered are listed in Table 6. Among these, the Normalized Entropy Power and the Normalized Cumulative Entropy are maximized when ID*_F = n, which can occur when the tail distribution is uniform. The Varentropy is minimized when ID*_F = n, consistent with the variance of the log-likelihood of a uniform distribution being equal to zero.

4. As mentioned in Related Work, a number of previous studies in deep learning have found that the local intrinsic dimension in learned representations is lower than the dimension of the full space [32][33][34][35] (i.e., ID*_F < n), and that the learning process progressively reduces local intrinsic dimension. Consider a concrete example where n = 100 and the learning process reduces ID*_F at a point from 12 to 11. The consequent effect on entropy can be interpreted from two different perspectives, either as an increase in tail distance entropy or as a decrease in tail location entropy:
• Considering the univariate normalized entropy power or normalized cumulative entropy (Table 1), reduction of ID*_F corresponds to an increase in entropy. Here, the entropy is measuring the uncertainty of the univariate random variable modeling distances to nearest neighbors. Thus, reduction of ID*_F corresponds to an increase in "distance entropy".
• Considering the multivariate normalized entropy power or multivariate normalized cumulative entropy (Table 6), reduction of ID*_F corresponds to a decrease in entropy. Here, the entropy is measuring the uncertainty of the multivariate random variable modeling locations of nearest neighbors, assuming local spherical symmetry. So reduction of ID*_F corresponds to a decrease in "location entropy".
We will see a visualization of these scenarios in Section 7.6.

5. All four of the multivariate tail divergences listed in Table 6, as well as the Hellinger Distance, have radial integral formulations that are identical to their univariate counterparts. All the divergences and distances (including the Weighted L2 Distance) are minimized when ID*_F = ID*_G.

6. By setting n = 1, we can recover the univariate results from Table 1. However, note that the range of integration used in Table 6 is a hypersphere of radius w, which for n = 1 is the interval [−w, w]. In contrast, the integral formulations listed in Table 1 were taken over the interval [0, w]. For some results, this means a minor difference (a constant factor of 2) between Table 1 and the result from Table 6 when n = 1.

Table 6. Asymptotic equivalences between LID formulations and tail measures of entropy or divergence for locally spherically symmetric distributions in the n-dimensional Euclidean setting. In each case, the density functions are assumed to be f and g, and the CDFs F and G of their induced distance distributions are assumed to be smooth growth functions. In the results, V_n(r) and S_{n−1}(r) denote the volume and surface area of the n-dimensional ball with radius r (respectively). In some cases, for the asymptotic limit to exist non-trivially (that is, to be both finite and non-zero), the tail entropy or tail divergence must be normalized by some multiplicative factor dependent on the tail volume V_n(w).
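Remark 4's example can be made concrete on the univariate ("distance entropy") side, using the closed form ID*_F/(ID*_F + 1)² for the univariate Normalized Cumulative Tail Entropy stated in the Conclusions (the multivariate "location entropy" side depends on the Table 6 formulas and is not reproduced here):

```python
def nce(lid):
    # univariate normalized cumulative tail entropy as a function of the LID value
    return lid / (lid + 1)**2

# Remark 4's example: learning reduces the LID at a point from 12 to 11.
print(nce(12.0), nce(11.0))   # the distance entropy increases as the LID decreases

# The measure is maximized at LID = 1, matching ID*_F = n in the n = 1 case.
print(max(nce(k / 10) for k in range(1, 101)) == nce(1.0))
```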

Visualization of Behavior
Our results in Table 6 relate local intrinsic dimensionality to entropies and divergences. If analyzing an n-dimensional global distribution such as the standard normal distribution or the uniform distribution, then the dimension of every sub-manifold (i.e., the local intrinsic dimensionality ID*_F) will be n. However, our interest is in situations where the local intrinsic dimensionality differs from the representation dimension n. To provide further intuition on this aspect, two plots are shown in Figure 1. Figure 1a compares the behavior of the normalized entropy power and the normalized cumulative entropy (multiplied by a constant factor of 4) in n-dimensional space, as the ratio φ = ID*_F / n is varied. We see that these measures have similar trends, and that they are maximized when ID*_F = n. We also see that when 1 ≤ ID*_F < n, these entropic measures will decrease if ID*_F is decreased (for a fixed n). On the other hand, if n = 1 and ID*_F ≥ 1, then these entropic measures will increase if ID*_F is decreased; here, n = 1 corresponds to the scenario where we are modeling the uncertainty of a distance distribution. This illustrates remark number 4 from Section 7.5 above. Figure 1b compares the behavior of different tail divergences as the ratio ρ = ID*_G / ID*_F varies. The divergences shown are the KL divergence, the Jensen-Shannon divergence and the Hellinger distance. These measures have similar trends as ρ varies, and are minimized (and equal to zero) when ID*_F = ID*_G. Also, the Hellinger distance is bounded above by 1.

Conclusions
In this theoretical investigation, we have established asymptotic relationships between tail entropy variants, tail divergences and the theory of local intrinsic dimensionality. Our results are derived under the assumption that the distributions under consideration are analyzed in a highly local context, within the distributional tails: asymptotically small neighborhoods whose radii approach zero. These results show that tail entropies and tail divergences depend in a fundamental way on local intrinsic dimensionality, and they help form a theoretical foundation for cross-fertilization between intrinsic dimensionality research and entropy research. As future work, we plan to investigate the potential of these new characterizations in a range of application settings: for example, as a basis in machine learning for characterizing and improving representations and representation learning, and as a tool for understanding the behavior of physical systems such as fluids and for characterizing their critical transitions in time and space.
Our results from both the univariate and multivariate cases show that the tail entropies and divergences considered in this paper depend only on (i) the embedding (representation) dimension in which the distribution is situated, and (ii) the local intrinsic dimension(s) of the distribution(s). Furthermore, in many cases the dependence involves the ratio between the intrinsic dimension and the embedding dimension.
Consider the context of distance-based analysis, where a distribution models the distances from a central query location to its nearest neighbors, and the distances are induced by global data. In this situation, our characterization of entropy might be termed 'personalized', in that the entropy expresses the uncertainty (or complexity) from the perspective of the query, in regard to the distances to samples within an asymptotically small neighborhood. Phrased another way, these local entropies are 'observer-dependent', since they are tied to the choice of query (the observer). This can be contrasted with the more common notion of entropy, where one analyzes a global distribution, and there is no requirement of a query point or its local neighborhood.
As alluded to in the introduction, divergences between tail distributions could be used for the comparison of real and synthetic distributions, as is commonly required for generative adversarial networks (GANs). Given a particular query location, we may either: (i) compute the divergence between the univariate tail distance distributions of synthetic and real examples, as measured from the query point; or (ii) compute the divergence between the multivariate tail distributions of synthetic and real examples, again as measured from the query, under an assumption of local isotropy. Our results show that under the assumption of local spherical symmetry, the use of divergences (such as KL) between tail distance distributions is asymptotically equivalent to the standard multivariate formulations of the same divergences, when restricted to the neighborhoods around locations of interest. For future work, it will be interesting to consider whether it is possible to further extend our multivariate results to elliptically symmetric distributions or skew-elliptical distributions, such as those studied by Contreras-Reyes [65].
Lastly, our results in Tables 1 and 6 show theoretical relationships for entropies and divergences, but in practice one must estimate the measures using samples of data. A natural approach here is to first estimate local intrinsic dimensional values such as ID*_F and ID*_G using any desired estimator (such as the maximum likelihood estimator [39][40][41]), and then plug the estimated LID values into the desired tail entropy or tail divergence formula. For example, an estimator of the (univariate) Normalized Cumulative Tail Entropy could be obtained by computing ID*_F / (ID*_F + 1)², with ID*_F replaced by its estimated value.
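As a minimal sketch of this plug-in approach (our own illustration; `lid_mle` follows the maximum-likelihood form of the estimators cited above, and the synthetic sample is drawn so that the distance CDF is F(t) = t^a, giving ID*_F = a):

```python
import numpy as np

rng = np.random.default_rng(0)

def lid_mle(dists):
    # MLE (Hill-type) LID estimate from a sample of neighbor distances,
    # using the largest observed distance as the tail boundary w.
    d = np.sort(np.asarray(dists))
    return -1.0 / np.mean(np.log(d[:-1] / d[-1]))

# Synthetic distances with CDF F(t) = t^a on [0, 1], so that ID*_F = a = 2.
a, k = 2.0, 1000
sample = rng.random(k) ** (1.0 / a)

est = lid_mle(sample)
nce_hat = est / (est + 1)**2   # plug-in estimate of the normalized cumulative tail entropy
print(est, nce_hat)            # est is typically close to a = 2.0
```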