Convergence Rates for Empirical Estimation of Binary Classification Bounds

Bounding the best achievable error probability for binary classification problems is relevant to many applications including machine learning, signal processing, and information theory. Many bounds on the Bayes binary classification error rate depend on information divergences between the pair of class distributions. Recently, the Henze–Penrose (HP) divergence has been proposed for bounding the classification error probability. We consider the problem of empirically estimating the HP-divergence from random samples. We derive a bound on the convergence rate for the Friedman–Rafsky (FR) estimator of the HP-divergence, which is related to a multivariate runs statistic for testing between two distributions. The FR estimator is derived from a multicolored Euclidean minimal spanning tree (MST) that spans the merged samples. We also obtain a concentration inequality for the FR estimator. We validate our results experimentally and illustrate their application to real datasets.


Introduction
Divergence measures between probability density functions are used in many signal processing applications including classification, segmentation, source separation, and clustering (see [1][2][3]). For more applications of divergence measures, we refer to [4].
In classification problems, the Bayes error rate is the expected risk of the Bayes classifier, which assigns a given feature vector x to the class with the highest posterior probability. The Bayes error rate is the lowest possible error rate of any classifier for a particular joint distribution. Mathematically, let $x_1, x_2, \ldots, x_N \in \mathbb{R}^d$ be realizations of a random vector X with class labels $S \in \{0, 1\}$, prior probabilities $p = P(S = 0)$ and $q = P(S = 1)$ such that p + q = 1, and class-conditional probability densities $f_0(x)$ and $f_1(x)$. The Bayes error rate is then given by
$$\epsilon^{\mathrm{Bayes}}(f_0, f_1) = \int \min\{\, p f_0(x),\; q f_1(x) \,\}\, dx. \qquad (1)$$
The Bayes error rate provides a measure of classification difficulty. Thus, when known, it can guide the user in the choice of classifier and in tuning parameter selection. In practice, the Bayes error is rarely known and must be estimated from data. Estimation of the Bayes error rate is difficult due to the nonsmooth min function within the integral in (1). Thus, research has focused on deriving tight bounds on the Bayes error rate based on smooth relaxations of the min function.
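For concreteness, the integral in (1) can be evaluated numerically. The following sketch uses two hypothetical univariate Gaussian class-conditional densities with equal priors; the densities, grid, and use of NumPy/SciPy are illustrative assumptions rather than an example from the paper.

import numpy as np
from scipy.stats import norm

# Hypothetical example for (1): Bayes error rate for 1-D Gaussian class densities
# f0 = N(0, 1) and f1 = N(1, 1) with priors p = q = 0.5.
p, q = 0.5, 0.5
x = np.linspace(-10.0, 11.0, 200001)          # fine grid covering both densities
f0, f1 = norm.pdf(x, 0.0, 1.0), norm.pdf(x, 1.0, 1.0)

# Bayes error = integral of min{p*f0(x), q*f1(x)} dx, approximated by the trapezoid rule.
bayes_error = np.trapz(np.minimum(p * f0, q * f1), x)
print(bayes_error)                            # ~0.3085 for this symmetric case

For this symmetric example, the exact value is $\Phi(-1/2) \approx 0.3085$, which the numerical integral reproduces.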
Many of these bounds can be expressed in terms of divergence measures such as the Bhattacharyya [5] and Jensen–Shannon [6] divergences. Tighter bounds on the Bayes error rate can be obtained using an important divergence measure known as the Henze–Penrose (HP) divergence [7,8]. Many techniques have been developed for estimating divergence measures. These methods can be broadly classified into two categories: (i) plug-in estimators, in which the probability densities are estimated and then plugged into the divergence functional [9][10][11][12]; and (ii) entropic graph approaches, in which a relationship between the divergence functional and a graph functional in Euclidean space is exploited [8,13]. Examples of plug-in methods include k-nearest neighbor (K-NN) and kernel density estimator (KDE) divergence estimators. Examples of entropic graph approaches include methods based on minimal spanning trees (MST), K-nearest neighbor graphs (K-NNG), minimal matching graphs (MMG), the traveling salesman problem (TSP), and their power-weighted variants.
Disadvantages of plug-in estimators are that these methods often require assumptions on the support set boundary and are more computationally complex than direct graph-based approaches. Thus, for practical and computational reasons, the asymptotic behavior of entropic graph approaches has been of great interest, and asymptotic analysis has been used to justify graph-based approaches. For instance, in [14], the authors showed that a cross-match statistic based on optimal weighted matching converges to the HP-divergence. In [15], a more complex approach based on the K-NNG was proposed that also converges to the HP-divergence.
The first contribution of our paper is a bound on the convergence rate of the Friedman–Rafsky (FR) estimator of the HP-divergence, which is based on a multivariate extension of the nonparametric runs test of equality of distributions. This estimator is constructed from a multicolored MST on the labeled training set, where MST edges connecting samples with dichotomous labels are colored differently from edges connecting identically labeled samples. While previous works have investigated the FR test statistic in the context of estimating the HP-divergence (see [8,16]), to the best of our knowledge, its minimax MSE convergence rate has not been previously derived. The bound on the convergence rate is established using the umbrella theorem of [17], for which we define a dual version of the multicolored MST. The proposed dual MST in this work differs from the standard dual MST introduced by Yukich in [17]. We show that the bias rate of the FR estimator is bounded by a function of N, η, and d as $O\big(N^{-\eta^2/(d(\eta+1))}\big)$, where N is the total sample size, d ≥ 2 is the dimension of the data samples, and η, 0 < η ≤ 1, is the Hölder smoothness parameter. We also obtain the variance rate bound $O\big(N^{-1}\big)$.
The second contribution of our paper is a new concentration bound for the FR test statistic. The bound is obtained by establishing a growth bound and a smoothness condition for the multicolored MST. Since the FR test statistic is not a Euclidean functional, we cannot use the standard subadditivity and superadditivity approaches of [17][18][19]. Our concentration inequality is derived using a different Hamming distance approach and a dual graph to the multicolored MST.
We experimentally validate our theoretical results. We compare the theoretical and simulated MSE in three experiments with dimensions d = 2, 4, 8. We observe that, in all three experiments, the MSE decreases as the sample size increases, and the rate is slower for higher dimensions. In all sets of experiments, our theory matches the experimental results. Furthermore, we illustrate the application of our results to estimation of the Bayes error rate on three real datasets.

Related Work
Much research on minimal graphs has focused on the use of Euclidean functionals for signal processing and statistics applications such as image registration [20,21], pattern matching [22], and non-parametric divergence estimation [23]. A K-NNG-based estimator of Rényi and f-divergence measures has been proposed in [13]. Additional examples of direct estimators of divergence measures include statistics for the nonparametric two-sample problem, such as the Smirnov maximum deviation test [24] and the Wald–Wolfowitz runs test [25], which have been studied in [26].
Many entropic graph estimators, such as the MST, K-NNG, MMG, and TSP, have been considered for multivariate data from a single probability density f. In particular, the normalized weight functions of these graph constructions all converge almost surely to the Rényi entropy of f [17,27]. For N uniformly distributed points, the MSE is $O(N^{-1/d})$ [28,29]. Later, Hero et al. [30,31] reported bounds on the $L_\gamma$-norm bias convergence rates of power-weighted Euclidean weight functionals of order γ for densities f belonging to the space of Hölder continuous functions $\Sigma_d(\eta, K)$ as $O\big((N^{-\alpha\eta/(\alpha\eta+1)})^{1/d}\big)$, where 0 < η ≤ 1, d ≥ 1, γ ∈ (1, d), and α = (d − γ)/d. In this work, we derive a bound on the convergence rate of the FR estimator of the HP-divergence when the density functions belong to the Hölder class $\Sigma_d(\eta, K)$, 0 < η ≤ 1, d ≥ 2 [32]. Note that throughout the paper we assume the density functions are absolutely continuous and bounded with support on the unit cube $[0, 1]^d$.
In [28], Yukich introduced the general framework of continuous and quasi-additive Euclidean functionals. This has led to many convergence rate bounds of entropic graph divergence estimators.
The framework of [28] is as follows: let F be a finite subset of points in $[0, 1]^d$, d ≥ 2, drawn from an underlying density. A real-valued function $L_\gamma$ defined on F is called a Euclidean functional of order γ if it is of the form
$$L_\gamma(F) = \min_{E \in \mathcal{E}} \sum_{e \in E} |e|^{\gamma},$$
where $\mathcal{E}$ is a set of graphs, e is an edge in the graph E, |e| is the Euclidean length of e, and γ is called the edge exponent or power-weighting constant. The MST, TSP, and MMG are some examples for which γ = 1.
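As a concrete instance of this framework, the following sketch computes $L_\gamma(F)$ for the MST case γ = 1 on uniform random points in $[0,1]^2$; the use of SciPy's minimum_spanning_tree is an implementation choice of this illustration, not a construction taken from the paper.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_functional(points, gamma=1.0):
    # Euclidean functional L_gamma(F): sum of |e|^gamma over the edges of the MST of F.
    dists = squareform(pdist(points))         # dense matrix of pairwise Euclidean distances
    mst = minimum_spanning_tree(dists)        # sparse matrix whose entries are MST edge lengths
    return float((mst.data ** gamma).sum())

rng = np.random.default_rng(0)
F = rng.uniform(size=(500, 2))                # 500 uniform points in the unit square
print(mst_functional(F, gamma=1.0))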
Following this framework, we show that the FR test statistic satisfies the required continuity and quasi-additivity properties to obtain similar convergence rates to those predicted in [28]. What distinguishes our work from previous work is that the count of dichotomous edges in the multicolored MST is not Euclidean. Therefore, the results in [17,27,30,31] are not directly applicable.
Using the isoperimetric approach, Talagrand [33] showed that, when the Euclidean functional $L_\gamma$ is based on the MST or TSP, the functional $L_\gamma$ of random vertices uniformly distributed in the hypercube $[0, 1]^d$ is concentrated around its mean. Namely, with high probability, the functional $L_\gamma$ and its mean do not differ by more than $C(N \log N)^{(d-\gamma)/2d}$. In this paper, we establish concentration bounds for the FR statistic: with high probability 1 − δ, the FR statistic differs from its mean by not more than $O\big(N^{(d-1)/d} \log(C/\delta)^{(d-1)/d}\big)$, where C is a function of N and d.

Organization
This paper is organized as follows. In Section 2, we first introduce the HP-divergence and the FR multivariate test statistic. We then present the bias and variance rates of the FR-based estimator of HP-divergence followed by the concentration bounds and the minimax MSE convergence rate. Section 3 provides simulations that validate the theory. All proofs and relevant lemmas are given in the Appendices A-E.
Throughout the paper, we denote expectation by E and variance by Var. Boldface type indicates random variables. When we refer to the number of samples, we mean the number of observations.

The Henze-Penrose Divergence Measure
Consider parameters p ∈ (0, 1) and q = 1 − p. We focus on estimating the HP-divergence measure between distributions $f_0$ and $f_1$ with domain $\mathbb{R}^d$, defined by
$$D_p(f_0, f_1) = \frac{1}{4pq}\left[\int \frac{\big(p f_0(x) - q f_1(x)\big)^2}{p f_0(x) + q f_1(x)}\, dx - (p - q)^2\right]. \qquad (2)$$
It can be verified that this measure is bounded between 0 and 1 and that, if $f_0(x) = f_1(x)$, then $D_p = 0$. In contrast with some other divergences such as the Kullback–Leibler [34] and Rényi [35] divergences, the HP-divergence is symmetric, i.e., $D_p(f_0, f_1) = D_q(f_1, f_0)$. By invoking relation (3) in [8], one can rewrite $D_p$ in the alternative form
$$D_p(f_0, f_1) = 1 - A_p(f_0, f_1), \qquad \text{where } A_p(f_0, f_1) = \int \frac{f_0(x)\, f_1(x)}{p f_0(x) + q f_1(x)}\, dx. \qquad (3)$$
Throughout the paper, we refer to $A_p(f_0, f_1)$ as the HP-integral. The HP-divergence measure belongs to the class of φ-divergences [36]. For the special case p = 0.5, the divergence (2) becomes the symmetric χ²-divergence and is similar to the Rukhin f-divergence; see [37,38].
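The two forms in (2) and (3) can be checked numerically. The following sketch evaluates both for a pair of hypothetical univariate Gaussian densities; the densities and integration grid are illustrative assumptions of this example.

import numpy as np
from scipy.stats import norm

p = 0.5
q = 1.0 - p
x = np.linspace(-12.0, 13.0, 400001)
f0, f1 = norm.pdf(x, 0.0, 1.0), norm.pdf(x, 1.0, 1.0)

mix = p * f0 + q * f1
hp_integral = np.trapz(f0 * f1 / mix, x)                    # A_p(f0, f1), the HP-integral in (3)
D_p = (np.trapz((p * f0 - q * f1) ** 2 / mix, x) - (p - q) ** 2) / (4 * p * q)   # definition (2)
print(D_p, 1.0 - hp_integral)                               # the two forms agree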

The Multivariate Runs Test Statistic
The MST is the graph of minimum total edge weight among all graphs that span n vertices. The MST has many applications including pattern recognition [39], clustering [40], nonparametric regression [41], and testing of randomness [42]. In this section, we focus on the FR multivariate two-sample test statistic constructed from the MST.
Assume that sample realizations from $f_0$ and $f_1$, denoted by $X_m \in \mathbb{R}^{m \times d}$ and $Y_n \in \mathbb{R}^{n \times d}$, respectively, are available. Construct an MST spanning the samples from both $f_0$ and $f_1$, color the edges in the MST that connect dichotomous samples green, and color the remaining edges black. The FR test statistic $R_{m,n} := R_{m,n}(X_m, Y_n)$ is the number of green edges in the MST. Note that the test assumes a unique MST; therefore, all inter-point distances between data points must be distinct. We recall the following theorem from [7,8]:

Theorem 1. As $m \to \infty$ and $n \to \infty$ such that $\frac{m}{n+m} \to p$ and $\frac{n}{n+m} \to q$,
$$\frac{R_{m,n}(X_m, Y_n)}{m+n} \longrightarrow 2pq \int \frac{f_0(x)\, f_1(x)}{p f_0(x) + q f_1(x)}\, dx \quad a.s.,$$
so that $1 - R_{m,n}(X_m, Y_n)\,\frac{m+n}{2mn}$ is an asymptotically consistent estimator of $D_p(f_0, f_1)$.

In the next section, we obtain bounds on the MSE convergence rates of the FR approximation of the HP-divergence between densities that belong to $\Sigma_d(\eta, K)$, the class of Hölder continuous functions with Lipschitz constant K and smoothness parameter 0 < η ≤ 1 [32]:

Definition 1 (Hölder class). Let $\mathcal{X} \subset \mathbb{R}^d$ be a compact space. The Hölder class $\Sigma_d(\eta, K)$, with Hölder parameter η, of functions with the $L_d$-norm consists of the functions g that satisfy
$$\big| g(z) - p_x^{\lfloor \eta \rfloor}(z) \big| \le K \, \| z - x \|^{\eta}, \qquad x, z \in \mathcal{X},$$
where $p_x^k(z)$ is the Taylor polynomial (multinomial) of g of order k expanded about the point x and $\lfloor \eta \rfloor$ is defined as the greatest integer strictly less than η.
In what follows, we will use both notations R m,n and R m,n (X m , Y n ) for the FR statistic over the combined samples.
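As an illustration of the construction just described, the following sketch builds the MST of the pooled samples, counts the dichotomous (green) edges to obtain $R_{m,n}$, and plugs it into the estimator suggested by Theorem 1. The use of SciPy and the Gaussian samples with a particular mean shift are illustrative assumptions, not the paper's code.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def fr_statistic(X, Y):
    # Friedman-Rafsky statistic R_{m,n}: number of MST edges joining an X-sample to a Y-sample.
    Z = np.vstack([X, Y])
    labels = np.concatenate([np.zeros(len(X)), np.ones(len(Y))])
    mst = minimum_spanning_tree(squareform(pdist(Z))).tocoo()
    return int(np.sum(labels[mst.row] != labels[mst.col]))

def hp_divergence_fr(X, Y):
    # Estimator 1 - R_{m,n}(m+n)/(2mn), consistent for D_p by Theorem 1 and (3).
    m, n = len(X), len(Y)
    return 1.0 - fr_statistic(X, Y) * (m + n) / (2.0 * m * n)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(400, 2))       # samples from f0 (illustrative)
Y = rng.normal(0.75, 1.0, size=(400, 2))      # samples from f1 (illustrative)
print(hp_divergence_fr(X, Y))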

Convergence Rates
In this subsection, we obtain mean convergence rate bounds for general non-uniform Lebesgue densities $f_0$ and $f_1$ belonging to the Hölder class $\Sigma_d(\eta, K)$. Since the expectation of $R_{m,n}$ can be closely approximated by the sum of the expectations of the FR statistic constructed on a dense partition of $[0, 1]^d$, $R_{m,n}$ is a quasi-additive functional in mean. The family of bounds (A16) in Appendix B enables us to achieve the minimax convergence rate for the mean under the Hölder class assumption with smoothness parameter 0 < η ≤ 1, d ≥ 2:

Theorem 2 (Convergence Rate of the Mean). Let d ≥ 2, and let $R_{m,n}$ be the FR statistic for samples drawn from Hölder continuous and bounded density functions $f_0$ and $f_1$ in $\Sigma_d(\eta, K)$. Then,
$$\left| \mathbb{E}\!\left[ \frac{R_{m,n}}{m+n} \right] - 2pq \int \frac{f_0(x)\, f_1(x)}{p f_0(x) + q f_1(x)}\, dx \right| \le O\!\left( (m+n)^{-\eta^2 / (d(\eta+1))} \right). \qquad (5)$$
This bound holds over the class of Lebesgue densities $f_0, f_1 \in \Sigma_d(\eta, K)$, 0 < η ≤ 1. Note that this assumption can be relaxed to $f_0 \in \Sigma_d^s(\eta, K_0)$ and $f_1 \in \Sigma_d^s(\eta, K_1)$, that is, Lebesgue densities $f_0$ and $f_1$ belonging to the strong Hölder class with the same Hölder parameter η and different constants $K_0$ and $K_1$, respectively.
The following variance bound uses the Efron–Stein inequality [43]. Note that in Theorem 3 we do not impose any strict assumptions; we only assume that the density functions are absolutely continuous and bounded with support on the unit cube $[0, 1]^d$. Appendix C contains the proof.

Theorem 3. The variance of the HP-integral estimator based on the FR statistic, $R_{m,n}/(m+n)$, is bounded by
$$\mathrm{Var}\!\left( \frac{R_{m,n}}{m+n} \right) \le \frac{c_d}{m+n},$$
where the constant $c_d$ depends only on d.
By combining Theorems 2 and 3, we obtain an MSE rate of the form $O\big((m+n)^{-\eta^2/(d(\eta+1))}\big) + O\big((m+n)^{-1}\big)$. Figure 1 shows a heat map of the MSE rate as a function of d and N = m = n. The heat map shows that the MSE rate of the FR test statistic-based estimator given in (3) is small for large sample size N.
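For intuition about how this bound scales, the following sketch evaluates the two dominant terms, $(m+n)^{-\eta^2/(d(\eta+1))}$ from the bias rate and $(m+n)^{-1}$ from the variance rate, for the dimensions used later in the simulations; the omitted constants and the choice η = 1 are assumptions of this illustration.

import numpy as np

eta = 1.0
N = np.array([100, 400, 1600, 6400])          # N = m + n
for d in (2, 4, 8):
    bias_term = N ** (-eta ** 2 / (d * (eta + 1)))
    var_term = 1.0 / N
    print(d, bias_term + var_term)            # the rate degrades as d grows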

Proof Sketch of Theorem 2
In this subsection, we first establish subadditivity and superadditivity properties of the FR statistic, which will be employed to derive the MSE convergence rate bound. This establishes that the mean of the FR test statistic is a quasi-additive functional: for a uniform partition of $[0, 1]^d$ into $l^d$ subcubes $Q_i$, there exists a constant $c_1$ such that
$$\mathbb{E}\big[R_{m,n}(X_m, Y_n)\big] \le \sum_{i=1}^{l^d} \mathbb{E}\big[R_{m_i,n_i}\big] + c_1\, l^{d-1} (m+n)^{1/d}. \qquad (7)$$
Here, $R_{m_i,n_i}$ is the number of dichotomous edges in partition $Q_i$. Conversely, under the same conditions on the partitions $Q_i$, there exists a constant $c_2$ such that
$$\mathbb{E}\big[R_{m,n}(X_m, Y_n)\big] \ge \sum_{i=1}^{l^d} \mathbb{E}\big[R_{m_i,n_i}\big] - c_2\, l^{d-1} (m+n)^{1/d}. \qquad (8)$$
The inequalities (7) and (8) are inspired by corresponding inequalities in [30,31]. The full proof is given in Appendix A. The key result in the proof is the inequality
$$R_{m,n}(X_m, Y_n) \le \sum_{i=1}^{l^d} R_{m_i,n_i} + 2|D|,$$
where |D| denotes the number of edges of the MST that intersect two different partitions.
Furthermore, we adapt the theory developed in [17,30] to derive the MSE convergence rate of the FR statistic-based estimator by defining a dual MST and a dual FR statistic, denoted by MST* and $R^*_{m,n}$, respectively (see Figure 2).

Figure 2. The dual MST spanning the merged set $X_m$ (blue points) and $Y_n$ (red points) drawn from two Gaussian distributions. The dual FR statistic ($R^*_{m,n}$) is the number of edges in the MST* (which contains the nodes in $X_m \cup Y_n \cup \{2$ corner points$\}$) that connect samples of different colors or connect to the corners (shown in green). Black edges are the non-dichotomous edges in the MST*.

Definition 2 (Dual MST, MST*, and dual FR statistic $R^*_{m,n}$). Let $F_i$ be the set of corner points of the subsection $Q_i$ for $1 \le i \le l^d$. We define MST*($X_m \cup Y_n \cap Q_i$) as the boundary MST graph of partition $Q_i$ [17], which contains the $X_m$ and $Y_n$ points falling inside the section $Q_i$ together with those corner points in $F_i$ that minimize the total MST length. Notice that the MSTs in $Q_i$ and $Q_j$ are allowed to be connected through points strictly contained in $Q_i$ and $Q_j$, and corner points are taken into account under the condition of minimizing the total MST length. In other words, the dual MST can connect a point in $Q_i \cup Q_j$ by a direct edge either to another point in $Q_i \cup Q_j$ or to a corner point (we assume that all corner points are connected), in order to minimize the total length. To clarify this, assume that there are two points in $Q_i \cup Q_j$; then the dual MST consists of the two edges connecting these points to the corner if they are close to a corner point; otherwise, the dual MST consists of an edge connecting one point to the other. Furthermore, we define $R^*_{m,n}(X_m, Y_n \cap Q_i)$ as the number of edges in the MST* graph connecting nodes from different samples plus the number of edges connecting to the corner points. Note that the edges connected to the corner nodes (regardless of the type of points) are always counted in the dual FR test statistic $R^*_{m,n}$.
In Appendix B, we show that the dual FR test statistic is a quasi-additive functional in mean and that $R^*_{m,n}(X_m, Y_n) \ge R_{m,n}(X_m, Y_n)$. This property holds because the MST($X_m, Y_n$) and MST*($X_m, Y_n$) graphs can differ only in the edges connected to the corner nodes, and in $R^*_{m,n}(X_m, Y_n)$ we count all of the edges between these nodes and the corner nodes.
To prove Theorem 2, we partition $[0, 1]^d$ into $l^d$ subcubes. Then, by applying Theorem 4 and the dual MST, we derive the bias rate in terms of the partition parameter l (see (A16) in Theorem A1); see Appendices B and E for details. According to (A16), for d ≥ 2 and l = 1, 2, ..., the slowest rates as a function of l are $l^d (m+n)^{-\eta/d}$ and $l^{-\eta d}$. Therefore, we obtain an l-independent bound by letting l be a function of m + n that minimizes the maximum of these rates, i.e., $l = (m+n)^{\eta/(d^2(\eta+1))}$. The full proof of the bound in Theorem 2 is given in Appendix B.

Concentration Bounds
Another main contribution of this work is an exponential concentration bound for the FR estimator of the HP-divergence. The error of this estimator can be decomposed into a bias term and a variance-like term via the triangle inequality. The bias bound was given in Theorem 2; therefore, we focus on an exponential concentration bound for the variance-like term. One application of such concentration bounds is to construct confidence intervals on the HP-divergence measure in terms of the FR estimator. In [44,45], the authors provided an exponential concentration bound for an estimator of the Rényi divergence for a smooth Hölder class of densities on the d-dimensional unit cube $[0, 1]^d$. We show that, if $X_m$ and $Y_n$ are sets of m and n points drawn from any two distributions $f_0$ and $f_1$, respectively, then the FR criterion $R_{m,n}$ is tightly concentrated. Namely, we establish that, with high probability, $R_{m,n}$ is within $\epsilon^*$ of its expected value, where $\epsilon^*$ is the solution of the convex optimization problem given in (9). Note that, under the assumption $(m+n)^{1/d} \gg 1$, $C_{m,n}(\epsilon)$ becomes a constant depending only on ε, where c is a constant. This is inferred from Theorems 5 and 6 below as $(m+n)^{1/d} \gg 1$. See Appendix D, specifically Lemmas A8–A12, for more detail. Indeed, we first show concentration around the median. A median is by definition any real number $M_e$ that satisfies the inequalities $P(X \le M_e) \ge 1/2$ and $P(X \ge M_e) \ge 1/2$. To derive the concentration results, the growth bound and smoothness properties of $R_{m,n}$, given in Appendix D, are exploited.

Theorem 5 (Concentration around the median).
Let $M_e$ be a median of $R_{m,n}$, i.e., $P(R_{m,n} \le M_e) \ge 1/2$. Recall $\epsilon^*$ from (9); then we have
$$P\big(|R_{m,n} - M_e| \ge t\big) \le 8\, \exp\!\left( - \frac{t^{d/(d-1)}}{C_{m,n}(\epsilon^*)} \right). \qquad (11)$$

Theorem 6 (Concentration of $R_{m,n}$ around the mean). Let $R_{m,n}$ be the FR statistic. Then,
$$P\big(|R_{m,n} - \mathbb{E}[R_{m,n}]| \ge t\big) \le \tilde{C}\, \exp\!\left( - \frac{t^{d/(d-1)}}{C_{m,n}(\epsilon^*)} \right). \qquad (12)$$
Here, $\tilde{C} = 8\,(4)^{d/(d-1)}$ and the explicit form for $C_{m,n}(\epsilon^*)$ is given by (10) when $\epsilon = \epsilon^*$.
See Appendix D for the full proofs of Theorems 5 and 6. Here, we sketch the proofs. The proof of the concentration inequality for $R_{m,n}$ around the mean, Theorem 6, requires introducing the median $M_e$, where $P(R_{m,n} \le M_e) \ge 1/2$, inside the probability term. To prove the concentration around the median, Theorem 5, we first consider the $h^d$ uniform partitions of $[0, 1]^d$, with edges parallel to the coordinate axes, having edge lengths $h^{-1}$ and volumes $h^{-d}$. Then, by applying the Markov inequality, we show that, with high probability, $R_{m,n}$ is subadditive with a 2ε threshold. Afterward, owing to the induction method of [17], the growth bound can be derived with high probability as well. The growth bound states that, with high probability, there exists a constant $C_{\epsilon,h}$, depending on ε and h, such that $R_{m,n} \le C_{\epsilon,h}\,(\#X_m\, \#Y_n)^{1-1/d}$. Applying the law of total probability and the semi-isoperimetric inequality (A108) in Lemma A11 gives us (A35). By considering the solution $\epsilon^*$ of the convex optimization problem (9) and the optimal value h = 7, the claimed results (11) and (12) are derived. The only constraint here is that ε is lower bounded by a function of $\delta^h_{m,n} = O\big(h^{d-1}(m+n)^{1/d}\big)$.
Next, we provide a bound on the variance-like term that holds with probability at least 1 − δ. According to the previous results, we expect this bound to depend on $\epsilon^*$, d, m, and n. The proof is short and is given in Appendix D.
Theorem 7. Let $R_{m,n}$ be the FR statistic. Then, with probability at least 1 − δ,
$$\big| R_{m,n} - \mathbb{E}[R_{m,n}] \big| \le \big( C_{m,n}(\epsilon^*)\, \log(\tilde{C}/\delta) \big)^{(d-1)/d}, \qquad (13)$$
or, equivalently,
$$\left| \frac{R_{m,n}}{m+n} - \mathbb{E}\!\left[ \frac{R_{m,n}}{m+n} \right] \right| \le \frac{\big( C_{m,n}(\epsilon^*)\, \log(\tilde{C}/\delta) \big)^{(d-1)/d}}{m+n},$$
where $C_{m,n}(\epsilon^*)$ depends on m, n, and d and is given in (10) when $\epsilon = \epsilon^*$.

Simulation Study
In this section, we apply the FR statistic estimate of the HP-divergence to both simulated and real data sets. We present results of a simulation study that evaluates the proposed bound on the MSE. We numerically validate the theory stated in Sections 2.2 and 2.4 using multiple simulations. In the first set of simulations, we consider two multivariate normal random vectors X, Y and perform three experiments, with d = 2, 4, 8, to analyze the performance of the FR test statistic-based estimator as the sample sizes m, n increase. For the three dimensions d = 2, 4, 8, we generate samples from two normal distributions with identity covariance and shifted means. For all of the following experiments, the sample sizes for each class are equal (m = n).
We vary N = m = n up to 800. From Figure 3, we deduce that, when the sample size increases, the MSE decreases, and that for higher dimensions the rate is slower. Furthermore, we compare the experiments with the theory in Figure 3. Our theory generally matches the experimental results. However, the MSE in the experiments tends to decrease to zero faster than the theoretical bound. Since the Gaussian distribution has a smooth density, this suggests that a tighter bound on the MSE may be possible by imposing stricter assumptions on the density smoothness, as in [12]. In our next simulation, we compare three bivariate cases: first, we generate samples from a standard normal distribution; second, we consider a distinct smooth class of distributions, i.e., a bivariate Gamma density with standard parameters and dependency coefficient ρ = 0.5; third, we generate samples from standard t-Student distributions. Our goal in this experiment is to compare the MSE of the HP-divergence estimator between two identical distributions, $f_0 = f_1$, when $f_0$ is one of the Gamma, normal, and t-Student density functions. In Figure 4, we observe that the MSE decreases as N increases for all three distributions.
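A minimal sketch of the first simulation is given below; it reuses hp_divergence_fr() from the earlier sketch. The mean shift, the number of trials, and the Monte Carlo reference value used as the true HP-divergence are illustrative assumptions rather than the paper's exact protocol.

import numpy as np
from scipy.stats import multivariate_normal as mvn

def true_hp_divergence(mu0, mu1, p=0.5, n_mc=200000, seed=3):
    # Monte Carlo value of D_p = 1 - E_{f0}[ f1(X) / (p f0(X) + q f1(X)) ] for identity-covariance Gaussians.
    rng = np.random.default_rng(seed)
    q = 1.0 - p
    x = rng.normal(size=(n_mc, len(mu0))) + mu0               # samples from f0
    f0, f1 = mvn.pdf(x, mean=mu0), mvn.pdf(x, mean=mu1)
    return 1.0 - np.mean(f1 / (p * f0 + q * f1))

rng = np.random.default_rng(4)
for d in (2, 4, 8):
    mu0, mu1 = np.zeros(d), np.full(d, 1.0 / np.sqrt(d))      # assumed mean shift of norm 1
    d_true = true_hp_divergence(mu0, mu1)
    for n in (100, 200, 400, 800):                            # n = m samples per class
        est = [hp_divergence_fr(rng.normal(size=(n, d)) + mu0,
                                rng.normal(size=(n, d)) + mu1) for _ in range(20)]
        print(d, n, np.mean((np.array(est) - d_true) ** 2))   # empirical MSE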

Real Datasets
We now show the results of applying the FR test statistic to estimate the HP-divergence using three different real datasets [46]:
• Human Activity Recognition (HAR) dataset: Features are extracted from motion sensor recordings of subjects performing everyday activities, including "sitting" and "standing".
• Skin Segmentation (SKIN) dataset: The skin dataset is collected by randomly sampling B, G, R values from face images of various age groups (young, middle, and old), race groups (white, black, and asian), and genders, obtained from the FERET and PAL databases [47].
• Sensorless Drive Diagnosis (ENGIN) dataset: In this dataset, features are extracted from electric current drive signals. The drive has intact and defective components. The dataset contains 11 different classes with different conditions. Each condition has been measured several times under 12 different operating conditions, e.g., different speeds, load moments, and load forces.
We focus on two classes from each of the HAR, SKIN, and ENGIN datasets: for the HAR dataset, the classes "sitting" and "standing"; for the SKIN dataset, the classes "Skin" and "Non-skin". In the ENGIN dataset, the drive has intact and defective components, which results in 11 different classes with different conditions; we choose conditions 1 and 2.
In the first experiment, we computed the HP-divergence using the KDE plug-in estimator and then derived the MSE of the FR test statistic estimator as the sample size N = m = n increases. We used 95% confidence intervals as error bars. We observe in Figure 5 that the estimated HP-divergence ranges in [0, 1], which is one of the HP-divergence properties [8]. Interestingly, when N increases, the HP-divergence tends to 1 for all of the HAR, SKIN, and ENGIN datasets. Note that in this set of experiments we repeated the experiments on independent parts of the datasets to obtain the error bars. Figure 6 shows that the MSE expectedly decreases as the sample size grows for all three datasets. Here, we used the KDE plug-in estimator [12], implemented on all available samples, to determine the true HP-divergence. Furthermore, according to Figure 6, the FR test statistic-based estimator suggests that the Bayes error rate is larger for the SKIN dataset than for the HAR and ENGIN datasets.

In our next experiment, we add the first six features (dimensions), in order, to our datasets and evaluate the FR test statistic's performance as the HP-divergence estimator. Surprisingly, the estimated HP-divergence does not change for the HAR sample; however, large changes are observed for the SKIN and ENGIN samples (see Figure 7).

Finally, we apply the concentration bounds on the FR test statistic (i.e., Theorems 6 and 7) and compute the theoretical implicit variance-like bound for the FR criterion with δ = 0.05 error for the real datasets ENGIN, HAR, and SKIN. Since the datasets ENGIN, HAR, and SKIN have the same total sample size N = m + n = 1200 but different dimensions d = 14, 12, 4, respectively, we first compare the concentration bound (13) on the FR statistic in terms of the dimension d when δ = 0.05. For the real datasets ENGIN, HAR, and SKIN, we obtain bounds proportional to ξ · 0.257, ξ · 0.005, and ξ · 0.6 × 10⁻¹¹, respectively, where ξ is a constant not dependent on d.
One observes that, as the dimension decreases, the interval becomes significantly tighter. However, this need not hold in general, and computing the bound (13) precisely requires knowledge of the distributions and of unknown constants. In Table 1, we compute the standard variance-like bound by applying the percentile technique and observe that the bound threshold is not monotonic in the dimension d. Table 1 shows the FR test statistic, the HP-divergence estimate (denoted by $R_{m,n}$ and $D_p$, respectively), and the standard variance-like interval for the FR statistic for the three real datasets HAR, SKIN, and ENGIN.
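A minimal sketch of the percentile technique behind the standard variance-like intervals reported in Table 1 is given below; it reuses fr_statistic() from the earlier sketch. The subsample size, the number of repetitions, and the 2.5/97.5 percentiles are assumptions made for illustration.

import numpy as np

def fr_percentile_interval(X, Y, n_sub=300, n_rep=100, seed=5):
    # Repeatedly subsample both classes, recompute the FR statistic, and report empirical percentiles.
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_rep):
        Xs = X[rng.choice(len(X), n_sub, replace=False)]
        Ys = Y[rng.choice(len(Y), n_sub, replace=False)]
        stats.append(fr_statistic(Xs, Ys))
    return np.percentile(stats, [2.5, 97.5])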

Conclusions
We derived a bound on the MSE convergence rate for the Friedman-Rafsky estimator of the Henze-Penrose divergence assuming the densities are sufficiently smooth. We employed a partitioning strategy to derive the bias rate which depends on the number of partitions, the sample size m + n, the Hölder smoothness parameter η, and the dimension d. However, by using the optimal partition number, we derived the MSE convergence rate only in terms of m + n, η, and d. We validated our proposed MSE convergence rate using simulations and illustrated the approach for the meta-learning problem of estimating the HP-divergence for three real-world data sets. We also provided concentration bounds around the median and mean of the estimator. These bounds explicitly provide the rate that the FR statistic approaches its median/mean with high probability, not only as a function of the number of samples, m, n, but also in terms of the dimension of the space d. By using these results, we explored the asymptotic behavior of a variance-like rate in terms of m, n, and d.

Funding:
The work presented in this paper was partially supported by ARO grant W911NF-15-1-0479 and DOE grants DE-NA0002534 and DE-NA0003921.

Conflicts of Interest:
The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations
The following abbreviations are used in this manuscript: HP, Henze–Penrose; FR, Friedman–Rafsky; MST, minimal spanning tree; MSE, mean squared error; KDE, kernel density estimator; K-NN, k-nearest neighbor; K-NNG, k-nearest neighbor graph; MMG, minimal matching graph; TSP, traveling salesman problem.

Appendix A. Subadditivity and Superadditivity of the Mean of the FR Statistic
In this appendix, we prove the subadditivity and superadditivity of the mean of the FR test statistic. For this, we first need the following lemma.

Lemma A1. Let $Q_1, \ldots, Q_{l^d}$ be a uniform partition of $[0, 1]^d$ into $l^d$ subcubes $Q_i$ with edges parallel to the coordinate axes, having edge lengths $l^{-1}$ and volumes $l^{-d}$. Let $D_{ij}$ be the set of edges of the MST graph between $Q_i$ and $Q_j$, with cardinality $|D_{ij}|$. Then, for |D| defined as the sum of $|D_{ij}|$ over all $i, j = 1, \ldots, l^d$, $i \ne j$, we have
$$\mathbb{E}|D| \le c\, l^{d-1}(m+n)^{1/d},$$
where η > 0 is the Hölder smoothness parameter and c is a constant.
Here and in what follows, denote by $\Xi_{MST}(X_n)$ the length of the shortest spanning tree on $X_n$, namely $\Xi_{MST}(X_n) = \min_T \sum_{e \in T} |e|$, where the minimum is over all spanning trees T of the vertex set $X_n$. We use the subadditivity relation for $\Xi_{MST}$ in [17], with the uniform partition of $[0, 1]^d$ into $l^d$ subcubes $Q_i$ with edges parallel to the coordinate axes, having edge lengths $l^{-1}$ and volumes $l^{-d}$; note also that, using the result from ([31], Proposition 3), an analogous bound holds for some constants $C_{i1}$ and $C_{i2}$. To work toward the goal (7), we partition $[0, 1]^d$ into $M := l^d$ subcubes $Q_i$ of side 1/l. Recalling Lemma 2.1 in [48], we therefore have the set inclusion (A5), where D is defined as in Lemma A1. Let $m_i$ and $n_i$ be the numbers of samples from $\{X_1, \ldots, X_m\}$ and $\{Y_1, \ldots, Y_n\}$, respectively, falling into partition $Q_i$. Since set B has fewer edges than set A, (A5) implies that the difference set of B and A contains at most 2|D| edges, where |D| is the number of edges in D. In other words, the number of edges linking nodes from different samples in set A is bounded by the number of edges linking nodes from different samples in set B plus 2|D|:
$$R_{m,n}(X_m, Y_n) \le \sum_{i=1}^{M} R_{m_i,n_i} + 2|D|.$$
Here, $R_{m_i,n_i}$ denotes the number of edges linking nodes from different samples in partition $Q_i$, $1 \le i \le M$. Next, we refer the reader to Lemma A1, where it is shown that there is a constant c such that $\mathbb{E}|D| \le c\, l^{d-1}(m+n)^{1/d}$. This establishes the claimed assertion (7). To complete the proof, the lower bound in (8) is obtained with a similar methodology and the corresponding set inclusion. This completes the proof.
Appendix B. Proof of Theorem 2

Lemma A2. (i) For the dual FR statistic, we have
$$R_{m,n}(X_m, Y_n) \le R^*_{m,n}(X_m, Y_n) \le R_{m,n}(X_m, Y_n) + 2^d c_d. \qquad (A8)$$
(ii) (Subadditivity and superadditivity of $\mathbb{E}[R^*_{m,n}]$) Partition $[0, 1]^d$ into $l^d$ subcubes $Q_i$, and let $m_i$, $n_i$ be the numbers of samples from $X_m = \{X_1, \ldots, X_m\}$ and $Y_n = \{Y_1, \ldots, Y_n\}$, respectively, falling into partition $Q_i$, with dual statistic $R^*_{m_i,n_i}$. Then, we have (A9), where c is a constant.
Proof. (i) Consider the nodes connected to the corner points. Since MST($X_m, Y_n$) and MST*($X_m, Y_n$) can differ only in the edges connected to these nodes, and since in $R^*_{m,n}(X_m, Y_n)$ we count all of the edges between these nodes and the corner nodes, we immediately have the second relation in (A8). In addition, for the first inequality in (A8), it suffices to note that the total number of edges connected to the corner nodes is upper bounded by $2^d c_d$.
(ii) Let D* be the set of edges of the MST* graph which intersect two different partitions, with cardinality |D*|. The MST and MST* differ only in edges of points connected to the corners and in edges crossing different partitions; therefore, $|D^*| \le |D|$. By eliminating one edge in the set D, in the worst case we face two possibilities: either the corresponding node is connected to a corner, in which case it is counted anyway, or it is connected to another point of the MST graph, which does not change the FR test statistic. This implies the corresponding subadditivity relation. Further, from Lemma A1, we know that there is a constant c such that $\mathbb{E}|D| \le c\, l^{d-1}(m+n)^{1/d}$. Hence, the first inequality in (A9) is obtained. Next, consider $|D^*_c|$, the total number of edges from both samples that are connected to the corner points in the MST* graph; one can easily obtain the corresponding superadditivity relation. In addition, we know that $|D^*_c| \le 2^d l^d c_d$, where $c_d$ denotes the largest possible degree of any vertex. One can then write the claimed second inequality in (A9).
The following Lemmas A3, A4, and A6 are inspired by [49] and are required to prove Theorem A1. See Appendix E for their proofs.

Lemma A3. Let g(x) be a density function with support $[0, 1]^d$ belonging to the Hölder class $\Sigma_d(\eta, L)$, 0 < η ≤ 1, stated in Definition 1. In addition, assume that P(x) is an η-Hölder smooth function whose absolute value is bounded from above by a constant. Define the quantized density function with parameter l and constants $\phi_i$ as $\hat{g}(x) = \sum_{i=1}^{M} \phi_i\, \mathbf{1}\{x \in Q_i\}$, with $M = l^d$.

Lemma A4. Denote by ∆(x, S) the degree of vertex x ∈ S in the MST over a set S with n vertices. For a given function P(x, x), one obtains the bound restated as (A49) in Appendix E, where, for a constant η > 0, $\varsigma_\eta(l, k)$ is given by (A13).

Lemma A5. Assume that, for a given k, $g_k(x)$ is a bounded function belonging to $\Sigma_d(\eta, L)$. Let $P : \mathbb{R}^d \times \mathbb{R}^d \to [0, 1]$ be a symmetric, smooth, jointly measurable function such that, for a given k and almost every $x \in \mathbb{R}^d$, P(x, ·) is measurable, with x a Lebesgue point of the function $g_k(·)P(x, ·)$. Assume that the first derivative of P is bounded. For each k, let $Z^k_1, Z^k_2, \ldots, Z^k_k$ be independent d-dimensional random variables with common density function $g_k$. Then, the bound restated as (A54) in Appendix E holds.

Lemma A6. Consider the notation and assumptions in Lemma A5. Then, the bound restated as (A56) in Appendix E holds. Here, MST(S) denotes the MST graph over a nice and finite set $S \subset \mathbb{R}^d$, and η is the Hölder smoothness parameter. Note that $\varsigma_\eta(l, k)$ is given as before in Lemma A4, (A13).
Theorem A1. Assume that $R_{m,n} := R(X_m, Y_n)$ denotes the FR test statistic and that the densities $f_0$ and $f_1$ belong to the Hölder class $\Sigma_d(\eta, L)$, 0 < η ≤ 1. Then, for d ≥ 2, the bias of the $R_{m,n}$ estimator is bounded by a sum of terms whose slowest rates in the partition parameter l are $l^d(m+n)^{-\eta/d}$ and $l^{-\eta d}$; this family of bounds is denoted (A16). The proof and a more explicit form of the bound (A16) are given in Appendix E.
We are now in a position to prove the assertion in (5). Without loss of generality, assume that $(m+n)l^{-d} > 1$. In the range d ≥ 2 and 0 < η ≤ 1, we select l as a function of m + n to be an increasing sequence in m + n that minimizes the maximum of the rates appearing in (A16). The solution l = l(m + n) occurs when $l^d(m+n)^{-\eta/d} = l^{-\eta d}$, or equivalently $l = (m+n)^{\eta/(d^2(\eta+1))}$. Substituting this value of l into the bound (A16), the RHS expression in (5) for d ≥ 2 is established.

Appendix C. Proof of Theorem 3
To bound the variance, we will apply one of the first concentration inequalities, which was proved by Efron and Stein [43] and further improved by Steele [18].
Lemma A7 (The Efron–Stein Inequality). Let $X_m = \{X_1, \ldots, X_m\}$ be a random vector on the space S, and let $X'_m = \{X'_1, \ldots, X'_m\}$ be an independent copy of $X_m$. Then, if $f : S \times \cdots \times S \to \mathbb{R}$, we have
$$\mathrm{Var}\big(f(X_m)\big) \le \frac{1}{2} \sum_{i=1}^{m} \mathbb{E}\Big[\big(f(X_1, \ldots, X_m) - f(X_1, \ldots, X'_i, \ldots, X_m)\big)^2\Big].$$

Consider two sets of nodes $X_i$, 1 ≤ i ≤ m, and $Y_j$, 1 ≤ j ≤ n. Without loss of generality, assume that m < n. Then, consider the n − m virtual random points $X_{m+1}, \ldots, X_n$ with the same distribution as the $X_i$, and define $Z_i := (X_i, Y_i)$. To use the Efron–Stein inequality on the set $Z_n = \{Z_1, \ldots, Z_n\}$, we introduce an independent copy of $Z_n$, denoted $Z'_n = \{Z'_1, \ldots, Z'_n\}$, and define $Z_n^{(i)}$ by replacing $Z_i$ with its independent copy $Z'_i$. Next, define the function $r_{m,n}(Z_n) := R_{m,n}/(m+n)$, which means that we discard the virtual random samples $X_{m+1}, \ldots, X_n$, evaluate the previously defined $R_{m,n}$ function on the nodes $X_i$, 1 ≤ i ≤ m, and $Y_j$, 1 ≤ j ≤ n, and normalize by m + n. Then, according to the Efron–Stein inequality, we have (A18), whose RHS can be split into two summands. To bound the first summand, we observe how much changing a point's position modifies $R_{m,n}(X_m, Y_n)$. We consider changing $X_1$'s position in two steps: we first remove it from the graph and then add it at the new position. Removing it changes $R_{m,n}(X_m, Y_n)$ by at most $2c_d$, because $X_1$ has degree at most $c_d$, so at most $c_d$ edges are removed from the MST graph and at most $c_d$ edges are added to it. Similarly, adding $X_1$ at the new position affects $R_{m,n}(X_m, Y_n)$ by at most $2c_d$, and we can reason similarly for $Y_1$. Therefore, in total, replacing $Z_1$ changes $R_{m,n}$ by a bounded multiple of $c_d$. Furthermore, for the second summand in (A18), the point $X'_{m+1}$ is a copy of the virtual random point $X_{m+1}$; therefore, this point does not change the FR test statistic $R_{m,n}$, and, following the above arguments, only the change in the Y point matters. Hence, we can bound the variance as in (A20). Combining all of the results with the fact that $n/(m+n) \to q$ concludes the proof.

Appendix D. Proof of Theorems 5-7
We will need the following prominent results for the proofs.
Lemma A8. For h = 1, 2, ..., let $\delta^h_{m,n}$ be the function $c\, h^{d-1}(m+n)^{1/d}$, where c is a constant. Then, for ε > 0, we have
$$P\!\left( R_{m,n}(X_m, Y_n) > \sum_{i=1}^{h^d} R_{m_i,n_i} + 2\epsilon \right) \le \frac{\delta^h_{m,n}}{\epsilon}.$$
Note that, in the case $\epsilon \le \delta^h_{m,n}$, the claimed inequality becomes trivial.
The subadditivity property for the FR test statistic $R_{m,n}$ in Lemma A8, like that of Euclidean functionals, leads to several non-trivial consequences. The growth bound was first explored by Rhee (1993b) [50] and, as illustrated in [17,27], has a wide range of applications. In this paper, we investigate a probabilistic growth bound for $R_{m,n}$. This observation leads us to our main goal in this appendix, namely the proof of Theorem 6. In what follows, we use the notation $\delta^h_{m,n}$ for the expression $O\big(h^{d-1}(m+n)^{1/d}\big)$.

Lemma A9 (Growth bound for $R_{m,n}$). Let $R_{m,n}$ be the FR test statistic. Then, for given non-negative ε such that $\epsilon \ge h^2 \delta^h_{m,n}$, with probability at least $g(\epsilon) := 1 - h\, \delta^h_{m,n}/\epsilon$, h = 2, 3, ..., we have
$$R_{m,n}(X_m, Y_n) \le c_{\epsilon,h}\, \big(\#X_m\, \#Y_n\big)^{1-1/d}.$$
Here, $c_{\epsilon,h}$ is a constant depending only on ε and h.
The complexity of R m,n 's behavior and the need to pursue the proof encouraged us to explore the smoothness condition for R m,n . In fact, this is where both subadditivity and superadditivity for the FR statistic are used together and become more important.
Remark: Using Lemma A10, we can infer the continuity property; i.e., for all observations $(X_m, Y_n)$ and $(X'_m, Y'_n)$, with probability at least 2g(ε) − 1, one obtains a bound on the difference of the corresponding FR statistics in terms of the symmetric differences. Here, $X_m \Delta X'_m$ denotes the symmetric difference of the observations $X_m$ and $X'_m$.
The path to the assertions (11) and (12) proceeds via a semi-isoperimetric inequality for $R_{m,n}$ involving the Hamming distance.
Lemma A11 (Semi-Isoperimetry). Let µ be a measure on $[0, 1]^d$, and let $\mu^n$ denote the product measure on the space $([0, 1]^d)^n$. In addition, let $M_e$ denote a median of $R_{m,n}$. Following the notation in [17], for a set A, let $H(x, x') = \#\{i : x_i \ne x'_i\}$,
$$\phi_A(x') + \phi_A(y') = \min\{ H(x, x') + H(y, y') : x, y \in A \},$$
and
$$\phi_A(x')\, \phi_A(y') = \min\{ H(x, x')\, H(y, y') : x, y \in A \}.$$
Then, the semi-isoperimetric inequality (A108) holds.

Now, we continue by providing the proof of Theorem 5. Recall (A25), and, for a given integer h, define the events B and B' in terms of the growth bound with constant $c_{\epsilon,h}$. By virtue of the smoothness property, Lemma A10, for $\epsilon \ge h^2 \delta^h_{m,n}$, we know that $P(B) \ge 2g(\epsilon) - 1$ and $P(B') \ge 2g(\epsilon) - 1$. Moreover, $P(R_{m,n}(X_m, Y_n) \le M_e) \ge 1/2$, and $P(B \cap B') = P(B)\, P(B') \ge (2g(\epsilon) - 1)^2$. In other words, recalling $\phi_A(x')$ and $\phi_A(y')$ in Lemma A11 and applying the law of total probability, one can bound $P\big(R_{m,n}(X_m, Y_n) \ge M_e + t\big)$; the last inequality follows from (A29) and (A30) together with the bound on $\mu^{m+n}$. By virtue of Lemma A11, we finally obtain the one-sided bound. Similarly, we can derive the same bound on $P\big(R_{m,n}(X_m, Y_n) \le M_e - t\big)$, and so we obtain (A35). We will analyze (A35) together with Theorem 6. The next lemma will be employed in Theorem 6's proof.

Lemma A12 (Deviation of the mean and the median). The mean and the median of $R_{m,n}$ differ by a bounded amount, where $C_{m,n}(\epsilon, h)$ is a constant depending on ε, h, m, and n, and C is a constant.

We conclude this part by pursuing our primary intention, the proof of Theorem 6. Observe from Theorem 5, (11), that the concentration around the median can be transferred to concentration around the mean; the last bound is derived from (11). In other words, there exist constants $C_{m,n}(\epsilon, h)$ depending on m, n, ε, and h such that (A40) holds. To verify the behavior of the bound (A40) in terms of ε, observe (A35) first; it is not hard to see that this function is decreasing in ε. However, the function $C_{m,n}(\epsilon, h)$ increases in ε. Therefore, one cannot immediately infer that the bound in (12) is monotonic with respect to ε. For fixed N = n + m, d, and h, the first and second derivatives of the bound (12) with respect to ε are quite complicated functions. Thus, deriving an explicit optimal solution for the minimization problem with the objective function (12) is not feasible. However, in what follows, we argue that, when t is not much larger than N = m + n, this bound becomes convex with respect to ε. Set K(ε), where $C_{m,n}$ is given in (10). By taking the derivative with respect to ε, we obtain an expression in which $a_h = h\, \delta^h_{m,n}$. The second derivative of K(ε) with respect to ε, after simplification, is given by (A44). The first term in (A44) and K(ε) are non-negative, so K(ε) is convex if the second term in the second line of (A44) is non-negative. We know that $\epsilon \ge h^2 \delta^h_{m,n} = h\, a_h$; when h = 7, we can parameterize ε by setting it equal to $\gamma a_h$, where γ ≥ 7. After simplification, K(ε) is convex if (A46) holds, with γ ≥ 7. One can check that, as γ → ∞, (A46) tends to $-\frac{1}{8}h$. This term is negligible unless t is much larger than N = m + n, with a threshold depending on d. Here, by setting $B(t)/a_h = 1$, a rough threshold $t = O\big((7^{d-1}(m+n)^{1-1/d})^{2}\big)$, depending on d and m + n, is proposed. Therefore, minimizing (A35) and (A40) with respect to ε at the optimal h = 7 is a convex optimization problem. Denote by $\epsilon^*$ the solution of the convex optimization problem (9). By plugging the optimal h (h = 7) and ε = $\epsilon^*$ into (A35) and (A40), we derive (11) and (12), respectively.
In this appendix, we also analyze the bound numerically. By simulation, we observed that the lowest admissible value, h = 7, is optimal experimentally. Indeed, this can also be seen from the proofs: we refer the reader to Lemma A8 in Appendices D and E, where, as h increases, the lower bound on the probability increases too. In other words, for fixed N = m + n and d, the lowest h gives the maximum bound in (A92). For this reason, we set h = 7 in our experiments. We vary the dimension d and the sample size N = m + n over relatively large and small ranges. In Table A1, we solve (9) for various values of d and N = m + n. We also compute the lower bound for ε, i.e., $7^{d+1}N^{1/d}$, per experiment. In Table A1, we observe that, for higher dimensions, the optimal value $\epsilon^*$ equals the lower bound $h^{d+1}N^{1/d}$, but this is not true for smaller dimensions, even with relatively large sample sizes.

Table A1. d, N, and $\epsilon^*$ are the dimension, the total sample size m + n, and the optimal ε for the bound in (12), respectively. The column $h^{d+1}N^{1/d}$ represents approximately the lower bound for ε, which is our constraint in the minimization problem and our assumption in Theorems 5 and 6. Here, we set h = 7.

To validate our proposed bound in (12), we again set h = 7, and for d = 4, 5, 7 we ran experiments with sample sizes N = m + n = 9000, 1100, 140, respectively. Then, we solved the minimization problem to derive the optimal bound for t in the range $[1, 3] \times 10^{10}$. Note that we chose this range to obtain a non-trivial bound for all three curves; otherwise, the bounds partly become equal to one. Figure A1 shows that, when t increases in the given range, the optimal curves approach zero. Finally, to prove Theorem 7, we use the concentration of $R_{m,n}$ around its mean (Theorem 6) and set the right-hand side of (12) equal to δ.

Appendix E. Additional Proofs
Lemma A3: Let g(x) be a density function with support $[0, 1]^d$ belonging to the Hölder class $\Sigma_d(\eta, L)$, 0 < η ≤ 1, expressed in Definition 1. In addition, assume that P(x) is an η-Hölder smooth function whose absolute value is bounded from above by some constant c. Define the quantized density function with parameter l and constants $\phi_i$ as $\hat{g}(x) = \sum_{i=1}^{M} \phi_i\, \mathbf{1}\{x \in Q_i\}$, with $M = l^d$.

Proof. By the mean value theorem, there exist points $\xi_i \in Q_i$ such that $\phi_i = g(\xi_i)$. Using the fact that $g \in \Sigma_d(\eta, L)$ and that P(x) is a bounded function, we obtain the corresponding bound, where L is the Hölder constant. As $x, \xi_i \in Q_i$, a subcube with edge length $l^{-1}$, the claim follows. This concludes the proof.
Lemma A4: Let ∆(x, S) denote the degree of vertex x ∈ S in the MST over a set $S \subset \mathbb{R}^d$ with n vertices. For a given function P(x, x), one obtains (A49), where, for a constant η > 0, $\varsigma_\eta(l, k)$ is defined in (A50).

Proof. Recall the notation of Lemma A3. By substituting the quantized density $\hat{g}$, defined in (A47), for g, and accounting for the resulting error, we have (A51). Here, $Q_i$ is as before in Lemma A3, so the RHS of (A51) becomes the required expression. This gives the assertion (A49).
Lemma A5: Assume that, for a given k, $g_k(x)$ is a bounded function belonging to $\Sigma_d(\eta, L)$. Let $P : \mathbb{R}^d \times \mathbb{R}^d \to [0, 1]$ be a symmetric, smooth, jointly measurable function such that, for a given k and almost every $x \in \mathbb{R}^d$, P(x, ·) is measurable, with x a Lebesgue point of the function $g_k(·)P(x, ·)$. Assume that the first derivative of P is bounded. For each k, let $Z^k_1, Z^k_2, \ldots, Z^k_k$ be independent d-dimensional random variables with common density function $g_k$.

Proof. Let $B(x, r) = \{ y : \| y - x \| \le r \}$. For any positive K, we can obtain (A55), where V is the volume of the ball B, which equals $O(k^{-1})$. Note that this inequality holds because $g_k(x) \in \Sigma_d(\eta, L)$ and $P(x, x) \in [0, 1]$. The first-order Taylor series expansion of P(x, y) around x, together with the Hölder class property, shows that the RHS of (A55) becomes the required expression. The expression in (A54) is then obtained by an appropriate choice of K.
Lemma A6: Consider the notation and assumptions in Lemma A5. Then, (A56) holds. Here, MST(S) denotes the MST graph over a nice and finite set $S \subset \mathbb{R}^d$, and η is the Hölder smoothness parameter. Note that $\varsigma_\eta(l, k)$ is given as before in (A50).
Proof. Following the notation in [49], let ∆(x, S) denote the degree of vertex x in the MST(S) graph. Moreover, let x be a Lebesgue point of $g_k$ with $g_k(x) > 0$, and let $Z^x_k$ be the point process $\{x, Z^k_2, Z^k_3, \ldots, Z^k_k\}$. Now, by virtue of (A55) in Lemma A5, we can write (A57). On the other hand, recalling (A57) and then, by virtue of Lemma A4, substituting (A49) into expression (A59), we obtain (A56).
Theorem A1: Assume that $R_{m,n} := R(X_m, Y_n)$ denotes the FR test statistic as before. Then, the rate for the bias of the $R_{m,n}$ estimator, for 0 < η ≤ 1 and d ≥ 2, is of the form (A60), where η is the Hölder smoothness parameter. A more explicit form of the bound on the RHS is given in (A61) below.

Proof. Let $M_m$ and $N_n$ be Poisson variables with means m and n, respectively, independent of one another and of $\{X_i\}$ and $\{Y_j\}$. Let $X'_m$ and $Y'_n$ be the Poisson processes $\{X_1, \ldots, X_{M_m}\}$ and $\{Y_1, \ldots, Y_{N_n}\}$, and set $R'_{m,n} := R_{m,n}(X'_m, Y'_n)$. Applying Lemma 1 and (12) of [49], we can write (A62). Here, $K_d$ denotes the largest possible degree of any vertex of the MST graph in $\mathbb{R}^d$. Moreover, by properties of the Poisson distribution and the Stirling approximation [51], we have $\mathbb{E}|M_m - m| = O(m^{1/2})$ and, similarly, $\mathbb{E}|N_n - n| = O(n^{1/2})$. Therefore, from (A62), it suffices to obtain the rate of convergence of $\mathbb{E}[R'_{m,n}]/(m+n)$ on the RHS of (A65). For this, let $m_i$, $n_i$ denote the numbers of Poisson process samples from $X'_m$ and $Y'_n$ falling into partition $Q_i$, with FR statistic $R'_{m_i,n_i}$. Then, by virtue of Lemma 4, we can write (A66). Note that the binomial RVs $m_i$, $n_i$ are independent with marginal distributions $m_i \sim B(m, a_i l^{-d})$, $n_i \sim B(n, b_i l^{-d})$, where $a_i$, $b_i$ are non-negative constants satisfying $a_i \le b_i$ for all i. Let $Y'_{n_i}$ be the set of points with mark 2. Owing to the marking theorem [52], $X'_{m_i}$ and $Y'_{n_i}$ are independent Poisson processes with the same distributions as $X_{m_i}$ and $Y_{n_i}$, respectively. Considering $R'_{m_i,n_i}$ as the FR statistic over the nodes in $Q_i$, again using Lemma 1 and arguments analogous to those in [49], together with the bound on $\mathbb{E}|M_m - m|$, and then applying Lemma A6, we obtain (A67). The expression in (A67) can be bounded using Jensen's inequality for concave functions; in addition, since η < d and d ≥ 2, the analogous bounds follow. Next, we state the following lemma (Lemma 1 from [30,31]), which will be used in the sequel:

Lemma A13. Let k(x) be a continuously differentiable function of $x \in \mathbb{R}$ which is convex and monotone decreasing over x ≥ 0, and set $k'(x) = \frac{dk(x)}{dx}$. Then, for any $x_0 > 0$, the corresponding linearization bound holds.

Next, continuing the proof of (A60), we seek an upper bound for the remaining expectation. To pursue this aim, in Lemma A13 take $k(x) = 1/x$; since the function k(x) is decreasing and convex, one can write the corresponding bound, and using the Hölder inequality implies the inequality (A74). As the random variables $m_i$, $n_i$ are independent, and because $\mathbb{V}[m_i] \le m a_i l^{-d}$ and $\mathbb{V}[n_i] \le n b_i l^{-d}$, we can claim that the RHS of (A74) is bounded above accordingly. Going back to (A66), and owing to $a_i \le b_i$, we obtain the corresponding upper bound. Passing to Definition 2, MST*, and Lemma A2, and arguing as above for the Poisson process samples and the FR statistic over the union of samples, denoted by $R'^*_{m,n}$, the superadditivity of the dual $R^*_{m,n}$ gives the corresponding lower bound. Furthermore, by using Jensen's inequality, and since $a_i \le b_i$, the RHS of (A79) is greater than or equal to the corresponding expression. By the definition of the dual $R^*_{m,n}$ and (i) in Lemma A2, we can conclude (A84). The combination of the lower and upper bounds (A84) and (A77) yields the following result. Recalling $\varsigma_\eta(l, m_i, n_i)$, we then obtain the corresponding bound; in addition, we have the bound implied by (A80) and $m/(m+n) \to p$. Furthermore, the last line holds because $m_i n_i \le (m_i + n_i)^2$.
Going back to (A73), we can give an upper bound for the RHS of the above inequality. Note that we have assumed $a_i \le b_i$, and by using the Hölder inequality we can write the corresponding bound. As a consequence, owing to (A85), for 0 < η ≤ 1 and d ≥ 2, which implies η ≤ d − 1, we can derive (A61). Thus, the proof is concluded by the summarized bound in (A60).
Lemma A8: For h = 1, 2, ..., let $\delta^h_{m,n}$ be the function $c\, h^{d-1}(m+n)^{1/d}$. Then, for ε > 0, we have
$$P\!\left( R_{m,n}(X_m, Y_n) > \sum_{i=1}^{h^d} R_{m_i,n_i} + 2\epsilon \right) \le \frac{\delta^h_{m,n}}{\epsilon}.$$
Note that, in the case $\epsilon \le \delta^h_{m,n}$, the claimed inequality is trivial.
We apply induction on $\#X_m$ and $\#Y_n$. Set c := sup(·) as in the statement. As the induction hypothesis, we assume that the stronger bound
$$R_{m,n}(X_m, Y_n) \le c_1 \big(\#X_m\, \#Y_n\big)^{(d-1)/d} - c_2 \qquad (A98)$$
holds whenever $\#X_m < m$ and $\#Y_n < n$, with probability at least g(ε). Note that d ≥ 2, ε > 0, and $c_1$, $c_2$ both depend on ε and h. Hence, $P(R_{m,n} \le c_1 - c_2) \ge P(R_{m,n} \le c)$. In addition, we know that $P(R_{m,n} \le c) = 1 \ge g(\epsilon)$; therefore, the induction hypothesis holds, in particular, for $\#X_m = 1$ and $\#Y_n = 1$. Now, consider the partition $Q_i$ of $[0, 1]^d$; for all $1 \le i \le h^d$, we have $m_i := \#(X_m \cap Q_i) < m$ and $n_i := \#(Y_n \cap Q_i) < n$, and thus, by the induction hypothesis, with probability at least g(ε),
$$R_{m_i,n_i} \le c_1 (m_i n_i)^{1-1/d} - c_2.$$
Let B be the event that $R_{m_i,n_i} \le c_1 (m_i n_i)^{1-1/d} - c_2$ for all i, and let $B_i$ be the event $R_{m_i,n_i} \le c_1 (m_i n_i)^{1-1/d} - c_2$. From (A96), and since the $Q_i$ form a partition, the corresponding bounds follow. Now, let $\gamma := \#\{i : m_i, n_i > 0\}$; using the Hölder inequality gives (A100). Next, we need only show that $c_1 \gamma^{1/d}(mn)^{1-1/d} - \gamma c_2 + c_2(h^{d-1} - 1)$ in (A100) is less than or equal to $c_1 (mn)^{1-1/d} - c_2$, which is equivalent to showing (A101). We know that m, n ≥ 1 and $c_1 \ge d\, h^{d-1} c_2$, so it is sufficient to choose t such that $\gamma = t\, h^d$; then 0 < t ≤ 1, and (A101) becomes (A102). Note that the function $d\, h^{-1}\big(1 - (h t)^{1/d}\big) + t - h^{-1}$ has a minimum at t = 1, which implies (A101) and subsequently (A95). Hence, the proof is complete.
Lemma A10 (Smoothness for $R_{m,n}$). Given observations $X_m := (X_{m'}, X_{m''}) = \{X_1, \ldots, X_{m'}, X_{m'+1}, \ldots, X_m\}$, such that $m' + m'' = m$, and $Y_n := (Y_{n'}, Y_{n''}) = \{Y_1, \ldots, Y_{n'}, Y_{n'+1}, \ldots, Y_n\}$, where $n' + n'' = n$, denote by $R_{m,n}(X_m, Y_n)$, as before, the number of edges of MST($X_m, Y_n$) which connect a point of $X_m$ to a point of $Y_n$. Then, for integer h ≥ 2, for all $(X_m, Y_n) \in [0, 1]^d$ and $\epsilon \ge h^2 \delta^h_{m,n}$, where $\delta^h_{m,n} = O\big(h^{d-1}(m+n)^{1/d}\big)$, we have, with probability at least 2g(ε) − 1,
$$\big| R_{m,n}(X_m, Y_n) - R_{m',n'}(X_{m'}, Y_{n'}) \big| \le c\, h^2 \big(\#X_{m''}\, \#Y_{n''}\big)^{1-1/d}.$$
For the case $\epsilon < h^2 \delta^h_{m,n}$, this holds trivially.
Proof. We begin by removing the edges of the minimal spanning tree on $(X_m, Y_n)$ which contain a vertex in $X_{m''}$ or $Y_{n''}$. Since each vertex has bounded degree, say $c_d$, this generates a subgraph with at most $c_d(\#X_{m''} + \#Y_{n''})$ components. Next, choose one vertex from each component and form the minimal spanning tree on these vertices; assuming all of them can be counted in the FR test statistic, we can write the corresponding bound with probability at least g(ε), where g(ε) is as in Lemma A9. Note that this expression is obtained from Lemma A9. At this stage, it remains to show that, with probability at least g(ε),
$$R_{m,n}(X_m, Y_n) \ge R_{m',n'}(X_{m'}, Y_{n'}) - \tilde{c}_{\epsilon,h}\, \big(\#X_{m''}\, \#Y_{n''}\big)^{1-1/d},$$
which, again by the method used before, holds with probability at least g(ε); one derives
$$R_{m',n'}(X_{m'}, Y_{n'}) \le R_{m,n}(X_m, Y_n) + \hat{c}_{\epsilon,h}\, c_d^2\, \big(\#X_{m''}\, \#Y_{n''}\big)^{1-1/d},$$
or, equivalently, $\le R_{m,n}(X_m, Y_n) + c\, h^2 \big(\#X_{m''}\, \#Y_{n''}\big)^{1-1/d}$.
Hence, the smoothness property holds with probability at least 2g(ε) − 1, as in the statement of Lemma A10.
where $C_{m,n}(\epsilon, h)$ has a form depending on ε, h, m, and n, and C is a constant.
Proof. Following arguments analogous to those in [17,53], we have (A113). The inequality in (A113) follows from Theorem 5.
Hence, the proof is completed.