On Wasserstein Two Sample Testing and Related Families of Nonparametric Tests

Nonparametric two sample or homogeneity testing is a decision theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions. The literature is old and rich, with a wide variety of statistics having been intelligently designed and analyzed, both for the unidimensional and the multivariate setting. Our contribution is to tie together many of these tests, drawing connections between seemingly very different statistics. In this work, our central object is the Wasserstein distance, as we form a chain of connections from univariate methods like the Kolmogorov-Smirnov test, PP/QQ plots and ROC/ODC curves, to multivariate tests involving energy statistics and kernel-based maximum mean discrepancy. Some connections proceed through the construction of a \textit{smoothed} Wasserstein distance, and others through the pursuit of a "distribution-free" Wasserstein test. Some observations in this chain are implicit in the literature, while others seem to have not been noticed thus far. Given the classical and continued importance of nonparametric two sample testing, we aim to provide useful connections for theorists and practitioners familiar with one subset of methods but not others.


Introduction
Nonparametric two sample testing (or homogeneity testing) deals with detecting differences between two d-dimensional distributions, given samples from both, without making any parametric distributional assumptions. The popular tests for d = 1 are rather different from those for d > 1, and our interest is in tying together the tests used in both settings. There is a massive literature on the two-sample problem, which has been formally studied for nearly a century, and there is no way we can cover the breadth of this huge and historic body of work. Our aim is much more restricted: we wish to study this problem through the eyes of the beautiful Wasserstein distance, and to form connections between several seemingly distinct families of such tests, both intuitively and formally, in the hope of informing both practitioners and theorists who may have familiarity with some sets of tests but not others. We will also only introduce related work that has a direct relationship with this paper.
There are also a large number of tests for parametric two-sample testing (assuming a form for the underlying distributions, like Gaussianity), and yet others for testing only differences in means of distributions (like Hotelling's t-test, Wilcoxon's signed rank test, and Mood's median test). Our focus will be much more restricted: in this paper, we restrict our attention to nonparametric tests for detecting differences in (any moment of) the underlying distributions.
Our paper started as an attempt to understand testing with the Wasserstein distance (also called the earth-mover's distance or transportation distance). The main prior work in this area involves studying the "trimmed" comparison of distributions by Munk and Czado [1998] and Freitag et al. [2007], with applications to biostatistics, specifically population bioequivalence, and later by Álvarez-Esteban et al. [2008, 2012]. Apart from two-sample testing, the study of univariate goodness-of-fit testing (or one-sample testing) was undertaken in del Barrio et al. [1999, 2000, 2005], and summarized extremely well in del Barrio [2004]. There are other semiparametric works specific to goodness-of-fit testing for location-scale families that we do not mention here, since they diverge from our interest in fully nonparametric two-sample testing for generic distributions.
In this paper, we uncover an interesting relationship between the multivariate Wasserstein test and the (Euclidean) Energy distance test, also called the Cramer test, proposed independently by Székely and Rizzo [2004] and Baringhaus and Franz [2004]. This proceeds through the construction of a smoothed Wasserstein distance, obtained by adding an entropic penalty/regularization: varying the weight of the regularization interpolates between the Wasserstein distance at one extreme and the Energy distance at the other. This also gives rise to a new connection between the univariate Wasserstein test and popular univariate data analysis tools like quantile-quantile (QQ) plots and the Cramer-von Mises (CvM) test. Due to the relationship between distances and kernels, we will also establish connections to the kernel-based multivariate test by Gretton et al. [2012], called the Maximum Mean Discrepancy, or MMD. Finally, the desire to design a univariate distribution-free Wasserstein test will lead us to the formal study of Receiver Operating Characteristic (ROC) and Ordinal Dominance (ODC) curves.
Intuitively, the underlying reasons for the similarities and differences between these tests can be seen through two lenses. First is the population viewpoint of how different tests work with different representations of distributions: most of these tests are based on differences between quantities that completely specify a distribution, namely (a) cumulative distribution functions (CDFs), (b) quantile functions (QFs), and (c) characteristic functions (CFs). Second is the sample viewpoint of how these statistics behave under the null hypothesis: most of these tests have null distributions based on norms of Brownian bridges, alternatively viewed as infinite sums of weighted chi-squared distributions (due to the Karhunen-Loève expansion).
While we connect a wide variety of popular and seemingly disparate families of tests, there are still further classes of tests that we do not have space to discuss. Some examples of tests quite different from the ones studied here include rank-based tests as covered by the excellent book of Lehmann and D'Abrera [2006], and graphical tests that include spanning tree methods by Friedman and Rafsky [1979] (generalizing the runs test of Wald and Wolfowitz [1940]), nearest-neighbor based tests by Schilling [1986] and Henze [1988], and the cross-match tests by Rosenbaum [2005]. The book by Thas [2010] is also a very useful reference.
Paper Outline and Contributions. The rest of this paper proceeds as follows. In Section 2, we formally present the notation and setup of nonparametric two sample testing, and briefly introduce three different ways of comparing distributions: using CDFs, QFs and CFs. In Section 3 we form a novel connection between the multivariate Wasserstein distance, the multivariate Energy Distance, and the kernel MMD, through an entropy-smoothed Wasserstein distance. In Section 4 we relate the univariate Wasserstein two-sample test to PP and QQ plots/tests. Lastly, in Section 5, we design a univariate Wasserstein test statistic that, unlike its classical counterpart, is "distribution-free", providing a careful and rigorous analysis of its limiting distribution by connecting it to ROC/ODC curves.

Nonparametric Two Sample Testing
More formally, we are given i.i.d. samples $X_1, \ldots, X_n \sim P$ and $Y_1, \ldots, Y_m \sim Q$, where $P$ and $Q$ are probability measures on $\mathbb{R}^d$. We denote by $P_n$ and $Q_m$ the corresponding empirical measures. A test $\eta$ is a function from the data $D_{m,n} := \{X_1, \ldots, X_n, Y_1, \ldots, Y_m\} \in \mathbb{R}^{d(m+n)}$ to $\{0, 1\}$ (or to $[0, 1]$ if it is a randomized test).
Most tests proceed by calculating a scalar test statistic $T_{m,n} := T(D_{m,n}) \in \mathbb{R}$ and deciding $H_0$ or $H_1$ depending on whether $T_{m,n}$, after suitable normalization, is smaller or larger than a threshold $t_\alpha$. The threshold $t_\alpha$ is calculated based on a prespecified false positive rate $\alpha$, chosen so that $\mathbb{E}_{H_0}\,\eta \leq \alpha$, at least asymptotically. Indeed, all tests considered in this paper are of the form
$$\eta(X_1, \ldots, X_n, Y_1, \ldots, Y_m) = \mathbb{I}\left(T_{m,n} > t_\alpha\right).$$
We follow the Neyman-Pearson paradigm, where a test is judged by its power $\phi = \phi(m, n, d, P, Q, \alpha) = \mathbb{E}_{H_1}\,\eta$. We say that a test $\eta$ is consistent, in the classical sense, when $\phi \to 1$ as $m, n \to \infty$ and $\alpha \to 0$.
All the tests we consider in this paper will be consistent in the classical sense mentioned above. Establishing general conditions under which these tests are consistent in the high-dimensional setting is largely open. All the test statistics considered here are typically small under $H_0$ and large under $H_1$ (usually, with appropriate scaling, they converge to zero and to infinity respectively with infinite samples). The aforementioned threshold $t_\alpha$ is determined by the distribution of the test statistic under the null hypothesis: assuming the null were true, we would like to know the typical variation of the statistic, and we reject the null if our observation is far from what is typically expected under the null. This naturally leads us to study the null distribution of our test statistic, i.e., the distribution of our statistic under the null hypothesis. Since these are crucial to running and understanding the corresponding tests, we will pursue their description in some detail in this paper.
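In practice, when the null distribution of $T_{m,n}$ is not available in closed form, the threshold $t_\alpha$ is often calibrated by a permutation test, which is exact in level for finite samples (up to the discreteness of the permutation distribution). The following is a minimal sketch in Python; the function name `permutation_test` and the generic `statistic` argument are our own illustrative choices, applicable to any of the statistics discussed in this paper.

```python
import numpy as np

def permutation_test(X, Y, statistic, alpha=0.05, n_perm=1000, seed=0):
    """Generic permutation test: reject H0: P = Q iff the observed statistic
    exceeds the (1 - alpha) quantile of its permutation distribution."""
    rng = np.random.default_rng(seed)
    n = len(X)
    Z = np.concatenate([X, Y])           # pooled sample; exchangeable under H0
    t_obs = statistic(X, Y)
    t_perm = np.empty(n_perm)
    for b in range(n_perm):
        idx = rng.permutation(len(Z))    # random relabeling of the pooled sample
        t_perm[b] = statistic(Z[idx[:n]], Z[idx[n:]])
    t_alpha = np.quantile(t_perm, 1 - alpha)
    return int(t_obs > t_alpha)          # 1 = reject H0
```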

Three Ways to Compare Distributions
The literature broadly has three dominant ways of comparing distributions, both in one and in multiple dimensions. These are based on three different ways of characterizing distributions -cumulative distribution functions (CDFs), characteristic functions (CFs) and quantile functions (QFs). Many of the tests we will consider involve calculating differences between (empirical estimates of) these quantities.
For example, it is well known that the Kolmogorov-Smirnov (KS) test by Kolmogorov [1933] and Smirnov [1948] involves differences in empirical CDFs. We shall later see that in one dimension, the Wasserstein distance calculates differences in QFs.
The KS test, the related Cramér-von Mises criterion by Cramér [1928] and von Mises [1928], and the Anderson-Darling test by Anderson and Darling [1952] are very popular in one dimension, but their usage has been more restricted in higher dimensions. This is mostly due to the curse of dimensionality involved in estimating multivariate empirical CDFs. While there has been work on generalizing these popular one-dimensional tests to higher dimensions, like Bickel [1969], these are seemingly not the most common multivariate tests.
Two classes of tests that are actually quite popular are kernel and distance based tests. As we will recap in more detail in later sections, it is known that the Gaussian kernel MMD implicitly calculates a (weighted) difference in CFs and the Euclidean energy distance implicitly works with a difference in (projected) CDFs.

Entropy Smoothed Wasserstein Distances
The theory of optimal transport (see [Villani, 2009]) provides a set of powerful tools to compare probability measures and distributions on R d through the knowledge of a metric on R d , which we assume to be the usual Euclidean metric in what follows. Among that set of tools, the following family of p-Wasserstein distances between probability measures is the best known.

Wasserstein Distance
Given an exponent p ≥ 1, the definition of the p-Wasserstein distance reads:

Definition 1 (Wasserstein Distances). For $p \in [1, \infty)$ and Borel probability measures $P, Q$ on $\mathbb{R}^d$ with finite $p$-moments, their $p$-Wasserstein distance [Villani, 2009, Sect. 6] is
$$W_p(P, Q) := \left( \inf_{\pi \in \Gamma(P, Q)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \|x - y\|^p \, d\pi(x, y) \right)^{1/p}, \tag{1}$$
where $\Gamma(P, Q)$ is the set of all joint probability measures $\pi$ on $\mathbb{R}^d \times \mathbb{R}^d$ whose marginals are $P$ and $Q$, i.e., such that for all Borel subsets $A \subseteq \mathbb{R}^d$, $\pi(A \times \mathbb{R}^d) = P(A)$ and $\pi(\mathbb{R}^d \times A) = Q(A)$.

A remarkable feature of Wasserstein distances is that Definition 1 applies to all measures regardless of their absolute continuity with respect to the Lebesgue measure: the same definition works for both empirical measures and for their densities if they exist.
Writing $\mathbf{1}_n$ for the $n$-dimensional vector of ones, when comparing two empirical measures with uniform weight vectors $\mathbf{1}_n/n$ and $\mathbf{1}_m/m$, the Wasserstein distance $W_p(P_n, Q_m)$ raised to the power $p$ is the optimum of a network flow problem known as the transportation problem [Bertsimas and Tsitsiklis, 1997, Section 7.2]. This problem has a linear objective and a polyhedral feasible set, defined respectively through the matrix $M_{XY}$ of pairwise distances between elements of $X$ and $Y$ raised to the power $p$,
$$M_{XY} := \left[\, \|X_i - Y_j\|^p \,\right]_{ij} \in \mathbb{R}^{n \times m}, \tag{2}$$
and the polytope $U_{nm}$ defined as the set of $n \times m$ nonnegative matrices whose row and column marginals are equal to $\mathbf{1}_n/n$ and $\mathbf{1}_m/m$ respectively:
$$U_{nm} := \left\{ T \in \mathbb{R}_+^{n \times m} : T \mathbf{1}_m = \mathbf{1}_n/n, \; T^T \mathbf{1}_n = \mathbf{1}_m/m \right\}. \tag{3}$$
Let $\langle A, B \rangle := \mathrm{trace}(A^T B)$ be the usual Frobenius dot-product of matrices. Combining Eqs. (2) and (3), we obtain the linear program
$$W_p^p(P_n, Q_m) = \min_{T \in U_{nm}} \langle T, M_{XY} \rangle \tag{4}$$
over the feasible set $U_{nm}$ with cost matrix $M_{XY}$. We finish this section by pointing out that the rate of convergence of $W_p(P_n, Q_m)$ towards $W_p(P, Q)$ as $n, m \to \infty$ gets slower as the dimension $d$ grows, under mild assumptions. For simplicity of exposition consider $m = n$. For any $p \in [1, \infty)$, it follows from Dudley [1968] that for $d \geq 3$, the difference between $W_p(P_n, Q_n)$ and $W_p(P, Q)$ scales as $n^{-1/d}$. We also point out that when $d = 2$ the rate actually scales as $\sqrt{\ln(n)}/\sqrt{n}$ (see Ajtai et al. [1984]). Finally, we note that for $p = \infty$ the rates of convergence differ from those for $1 \leq p < \infty$: the work of Leighton and Shor [1989], Shor and Yukich [1991], and García and Slepčev [2015] shows that the rate of convergence of $W_\infty(P_n, Q_n)$ towards $W_\infty(P, Q)$ is of order $(\ln(n)/n)^{1/d}$ when $d \geq 3$ and $(\ln(n))^{3/4}/n^{1/2}$ when $d = 2$. Hence, the original Wasserstein distance by itself may not be a favorable choice for a multivariate two sample test.
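To make the linear program (4) concrete, here is a minimal sketch that solves it with an off-the-shelf LP solver (scipy's `linprog`); this is purely illustrative, as its cost grows quickly with $n$ and $m$, in line with the complexity discussion below.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def wasserstein_pp(X, Y, p=2):
    """W_p^p(P_n, Q_m) between empirical measures, via the transportation LP (4).
    X: (n, d) array, Y: (m, d) array. Variables are the n*m entries of T."""
    n, m = len(X), len(Y)
    M = cdist(X, Y) ** p                      # cost matrix M_XY of Eq. (2)
    A_eq = np.zeros((n + m, n * m))           # marginal constraints of Eq. (3)
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0      # row sums of T equal 1/n
    for j in range(m):
        A_eq[n + j, j::m] = 1.0               # column sums of T equal 1/m
    b_eq = np.concatenate([np.full(n, 1.0 / n), np.full(m, 1.0 / m)])
    res = linprog(M.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun                            # optimal value <T, M_XY>
```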

Smoothed Wasserstein Distance
Aside from the slow convergence rate of the Wasserstein distance between samples from two different measures to their distance in population, computing the optimum of (4) is expensive. This can be easily seen by noticing that the transportation problem boils down to an optimal assignment problem when $n = m$. Since the resolution of the latter has a cubic cost in $n$, all known algorithms that solve the optimal transport problem exactly scale at least super-cubically in $n$. Using an idea that can be traced back as far as Schrödinger [1931], Cuturi [2013] recently proposed to use an entropic regularization of the optimal transport problem, defining the Sinkhorn divergence between $P_n, Q_m$, parameterized by $\lambda \geq 0$, as
$$W_{p,\lambda}^p(P_n, Q_m) := \langle T_\lambda, M_{XY} \rangle, \quad \text{where } T_\lambda := \operatorname*{argmin}_{T \in U_{nm}} \; \lambda \langle T, M_{XY} \rangle - E(T), \tag{5}$$
where $E(T)$ is the entropy of $T$ seen as a discrete joint probability distribution, namely $E(T) := -\sum_{ij} T_{ij} \log(T_{ij})$. This approach has two benefits: (i) because the negative entropy $-E(T)$ is 1-strongly convex with respect to the $\ell_1$ norm, the regularized problem is itself strongly convex and admits a unique optimal solution $T_\lambda$, as opposed to the initial OT problem, for which the minimizer may not be unique; (ii) this optimal solution $T_\lambda$ is a diagonal scaling of $e^{-\lambda M_{XY}}$, the elementwise exponential matrix of $-\lambda M_{XY}$. One can easily show using the Lagrange method of multipliers that there must exist two non-negative vectors $u \in \mathbb{R}^n, v \in \mathbb{R}^m$ such that
$$T_\lambda = \operatorname{diag}(u)\, e^{-\lambda M_{XY}}\, \operatorname{diag}(v).$$
The solution to this diagonal scaling problem can be found efficiently through Sinkhorn's algorithm [Sinkhorn, 1967], which has a linear convergence rate [Franklin and Lorenz, 1989]. Sinkhorn's algorithm can be implemented in a few lines of code that require only matrix-vector products and elementary operations, and is hence easily parallelized on modern hardware.
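A minimal sketch of Sinkhorn's algorithm for the uniform-marginal case of (5) follows (written for readability; numerically stabilized log-domain variants are preferable for large $\lambda$):

```python
import numpy as np

def sinkhorn_divergence(M, lam, n_iter=1000):
    """Sinkhorn divergence <T_lam, M> of Eq. (5) for uniform marginals.
    M: (n, m) cost matrix M_XY; lam >= 0 is the regularization parameter."""
    n, m = M.shape
    r, c = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-lam * M)              # elementwise exponential of -lam * M_XY
    u = np.ones(n)
    for _ in range(n_iter):           # alternately match row and column marginals
        v = c / (K.T @ u)
        u = r / (K @ v)
    T = u[:, None] * K * v[None, :]   # T_lam = diag(u) K diag(v)
    return np.sum(T * M)              # <T_lam, M_XY>
```

Note that for $\lambda = 0$ the kernel $K$ is the all-ones matrix and the iterations return the independent coupling $T_0 = (\mathbf{1}_n \mathbf{1}_m^T)/nm$, consistent with the discussion in the next subsection.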

Smoothing the Wasserstein Distance to Energy Distance
An interesting class of tests are the distance-based "energy statistics" introduced in parallel by Baringhaus and Franz [2004] and Székely and Rizzo [2004]. The statistic, called the Cramer statistic by the former paper and the Energy Distance by the latter, corresponds to the population quantity
$$ED(P, Q) := 2\,\mathbb{E}\|X - Y\| - \mathbb{E}\|X - X'\| - \mathbb{E}\|Y - Y'\|,$$
where $X, X' \sim P$ and $Y, Y' \sim Q$ are all independent. An associated test statistic can be calculated as
$$ED_b := \frac{2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \|X_i - Y_j\| - \frac{1}{n^2} \sum_{i,j=1}^{n} \|X_i - X_j\| - \frac{1}{m^2} \sum_{i,j=1}^{m} \|Y_i - Y_j\|.$$
Remarkably, $ED(P, Q) = 0$ iff $P = Q$. Hence, rejecting when $ED_b$ is larger than an appropriate threshold leads to a test which is consistent against all fixed alternatives with $P \neq Q$ under mild conditions (like finiteness of $\mathbb{E}\|X\|, \mathbb{E}\|Y\|$); see the aforementioned references for details. The Sinkhorn divergence defined in (5) can then be linked to the energy distance when the regularization parameter is set to $\lambda = 0$, through the following formula:
$$ED_b = 2\, W_{1,0}^1(P_n, Q_m) - W_{1,0}^1(P_n, P_n) - W_{1,0}^1(Q_m, Q_m). \tag{6}$$
Indeed, notice first that $T_0$ is the maximal entropy table in $U_{nm}$, namely the outer product $(\mathbf{1}_n \mathbf{1}_m^T)/nm$ of the marginals $\mathbf{1}_n/n$ and $\mathbf{1}_m/m$. Then (6) follows from the observation that, for $p = 1$, $\langle T_0, M_{XY} \rangle = \frac{1}{nm} \sum_{ij} \|X_i - Y_j\|$.
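For reference, the statistic $ED_b$ can be computed in a few lines; a sketch using scipy's pairwise-distance helpers (note that `pdist` returns each unordered pair once, hence the factor of 2):

```python
import numpy as np
from scipy.spatial.distance import cdist, pdist

def energy_distance(X, Y):
    """Plugin energy distance statistic ED_b for samples X (n, d) and Y (m, d)."""
    n, m = len(X), len(Y)
    cross = cdist(X, Y).mean()                 # (1/nm) sum_{ij} ||X_i - Y_j||
    within_x = 2.0 * pdist(X).sum() / n**2     # (1/n^2) sum_{ij} ||X_i - X_j||
    within_y = 2.0 * pdist(Y).sum() / m**2     # (1/m^2) sum_{ij} ||Y_i - Y_j||
    return 2.0 * cross - within_x - within_y
```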

From Energy Distance to Kernel Maximum Mean Discrepancy
Another popular class of tests, which has emerged over the last decade, are the kernel-based tests introduced independently by Gretton et al. [2006] and Fernández et al. [2008], and expanded on in Gretton et al. [2012]. Without getting into technicalities that are irrelevant for this paper, the Maximum Mean Discrepancy between $P, Q$ is defined as
$$MMD(\mathcal{H}_k, P, Q) := \sup_{\|f\|_{\mathcal{H}_k} \leq 1} \left( \mathbb{E}_P f(X) - \mathbb{E}_Q f(Y) \right),$$
where $\mathcal{H}_k$ is a Reproducing Kernel Hilbert Space associated with a Mercer kernel $k(\cdot, \cdot)$, and the supremum is taken over its unit norm ball. While it is easy to see that $MMD \geq 0$ always, and also that $P = Q$ implies $MMD = 0$, Gretton et al. [2006] show that if $k$ is "characteristic", the equality holds iff $P = Q$ (the Gaussian kernel $k(a, b) = \exp(-\|a - b\|^2/\gamma^2)$ is a popular example). Using the Riesz representation theorem and the reproducing property of $\mathcal{H}_k$, one can argue that $MMD(\mathcal{H}_k, P, Q) = \|\mathbb{E}_P k(X, \cdot) - \mathbb{E}_Q k(Y, \cdot)\|_{\mathcal{H}_k}$, and hence, using the reproducing property again, one can conclude that
$$MMD^2(\mathcal{H}_k, P, Q) = \mathbb{E}\, k(X, X') + \mathbb{E}\, k(Y, Y') - 2\,\mathbb{E}\, k(X, Y),$$
where $X, X' \sim P$ and $Y, Y' \sim Q$ are all independent. This gives rise to a natural associated test statistic, an unbiased estimator of $MMD^2$:
$$MMD_u^2 := \frac{1}{n(n-1)} \sum_{i \neq j} k(X_i, X_j) + \frac{1}{m(m-1)} \sum_{i \neq j} k(Y_i, Y_j) - \frac{2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} k(X_i, Y_j).$$
Apart from the fact that $MMD(P, Q) = 0$ iff $P = Q$, the other fact that makes this a useful test statistic is that its estimation error, i.e., the error of $MMD_u^2$ in estimating $MMD^2$, scales like $\sqrt{\frac{m+n}{mn}}$, independent of $d$. See Gretton et al. [2012] for a detailed proof. At first sight, the Energy Distance and the MMD look like fairly different tests. However, there is a natural connection that proceeds in two steps. Firstly, there is no reason to stick to only the Euclidean norm $\|\cdot\|_2$ to measure distances for ED: the test can be extended to other norms, and in fact also other metrics; Lyons [2013] explains the details for the closely related independence testing problem. Following that, Sejdinovic et al. [2013] discuss the relationship between distances and kernels (again for independence testing, but the same arguments hold in the two sample testing setting as well). Loosely speaking, for every kernel $k$ there exists a metric $d$ (and also vice versa), given by $d(x, y) := (k(x, x) + k(y, y))/2 - k(x, y)$, such that MMD with kernel $k$ equals ED with metric $d$. This is a very strong connection between these two families of tests.
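A sketch of the statistic $MMD_u^2$ with the Gaussian kernel follows (the bandwidth $\gamma$ is a free parameter; the median pairwise distance is a common heuristic choice):

```python
import numpy as np
from scipy.spatial.distance import cdist

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimator MMD_u^2 with kernel k(a,b) = exp(-||a-b||^2 / gamma^2)."""
    k = lambda A, B: np.exp(-cdist(A, B) ** 2 / gamma ** 2)
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))   # mean over i != j
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * Kxy.mean()
```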

Wasserstein Distance and PP or QQ tests
For univariate random variables, a PP plot is a graphical way to view differences in empirical CDFs, while a QQ plot analogously compares empirical QFs. Instead of relying on graphs, we can make such comparisons more formal and rigorous as follows. We first present some results on the asymptotic distribution of the difference between $P_n$ and $Q_m$ when using the distance between the CDFs $F_n$ and $G_m$, and later when using the distance between the QFs $F_n^{-1}$ and $G_m^{-1}$. For simplicity we assume that both distributions $P$ and $Q$ are supported on the interval $[0, 1]$; we remark that under mild assumptions on $P$ and $Q$, the results we present in this section still hold without such a boundedness assumption. Moreover, we assume for simplicity that the CDFs $F$ and $G$ have positive densities on $[0, 1]$.

Comparing CDFs (PP)
We start by noting that $F_n$ may be interpreted as a random element taking values in the space $D([0, 1])$ of right continuous functions with left limits. It is well known that
$$\sqrt{n}\,(F_n - F) \to_w B \circ F, \tag{7}$$
where $B$ is a standard Brownian bridge on $[0, 1]$ and where the weak convergence $\to_w$ is understood as convergence of probability measures in the space $D([0, 1])$; see Chapter 3 in Billingsley [1968] for details. From this fact and the independence of the samples, it follows that under the null hypothesis $H_0: P = Q$, as $n, m \to \infty$,
$$\sqrt{\frac{mn}{m+n}}\,(F_n - G_m) \to_w B \circ F.$$
The previous fact, and the continuity of the function $h \in D([0, 1]) \mapsto \int_0^1 (h(t))^2\, dt$, imply that as $n, m \to \infty$, we have under the null,
$$\frac{mn}{m+n} \int_0^1 (F_n(t) - G_m(t))^2\, dt \to_w \int_0^1 (B(F(t)))^2\, dt.$$
Observe that the above asymptotic null distribution depends on $F$, which is unknown in practice. This is an obstacle when considering any $L^p$-distance, with $1 \leq p < \infty$, between the empirical CDFs $F_n$ and $G_m$. Luckily, a different situation occurs when one considers the $L^\infty$-distance between $F_n$ and $G_m$. Under the null, using again (7), we deduce that
$$\sqrt{\frac{mn}{m+n}}\, \|F_n - G_m\|_\infty \to_w \sup_{t \in [0,1]} |B(F(t))| = \sup_{t \in [0,1]} |B(t)|,$$
where the equality in the previous expression follows from the fact that the continuity of $F$ implies that the interval $[0, 1]$ is mapped onto the interval $[0, 1]$. This test statistic, the so-called Kolmogorov-Smirnov test statistic, is hence appropriate for two sample problems.
Similarly, since the function $h \in D([0,1]) \mapsto \int_0^1 (h(t))^2\, dF(t)$ is continuous, we deduce
$$\frac{mn}{m+n} \int_0^1 (F_n(t) - G_m(t))^2\, dF(t) \to_w \int_0^1 (B(F(t)))^2\, dF(t) = \int_0^1 (B(u))^2\, du,$$
where the second equality follows by a change of variables, leading to an expression that does not depend on $F$; this is the limit underlying the Cramér-von Mises criterion mentioned earlier.
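Returning to the Kolmogorov-Smirnov statistic: it is simple to compute, since the supremum is attained at one of the pooled sample points. A minimal sketch follows (scipy's `ks_2samp` provides the same statistic together with an asymptotic p-value):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic sup_t |F_n(t) - G_m(t)|."""
    pooled = np.sort(np.concatenate([x, y]))
    F_n = np.searchsorted(np.sort(x), pooled, side="right") / len(x)
    G_m = np.searchsorted(np.sort(y), pooled, side="right") / len(y)
    return np.max(np.abs(F_n - G_m))
```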

Comparing QFs (QQ)
We now turn our attention to QQ (quantile-quantile) plots, and specifically the $L^2$-distance between $F_n^{-1}$ and $G_m^{-1}$. It can be shown that if $F$ has a differentiable density $f$ which (for the sake of simplicity) we assume is bounded away from zero, then
$$\sqrt{n}\,(F_n^{-1} - F^{-1}) \to_w \frac{B}{f \circ F^{-1}},$$
where $B$ is a standard Brownian bridge. For a proof of the above statement see Chapter 18 in Shorack and Wellner [1986]; for an alternative proof, where the weak convergence is considered in the space of probability measures on $L^2((0, 1))$ (as opposed to the space $D([0, 1])$ we have been considering thus far), see del Barrio [2004].
We note that from the previous result and independence, it follows that under the null hypothesis $H_0: P = Q$,
$$\sqrt{\frac{mn}{m+n}}\,(F_n^{-1} - G_m^{-1}) \to_w \frac{B}{f \circ F^{-1}}.$$
In particular, by continuity of the function $h \in L^2((0,1)) \mapsto \int_0^1 (h(t))^2\, dt$, we deduce that
$$\frac{mn}{m+n} \int_0^1 (F_n^{-1}(t) - G_m^{-1}(t))^2\, dt \to_w \int_0^1 \left( \frac{B(t)}{f(F^{-1}(t))} \right)^2 dt.$$
Hence, as was the case when we considered the difference of the CDFs $F_n$ and $G_m$, the asymptotic distribution of the $L^2$-difference (or analogously any $L^p$-difference for finite $p$) of the empirical quantile functions is also distribution dependent. Note however that there is an important difference between QQ and PP plots when using the $L^\infty$ norm. We saw that the asymptotic distribution of the $L^\infty$ norm of the difference of $F_n$ and $G_m$ is (under the null hypothesis) distribution free. Unfortunately, in the quantile case we obtain
$$\sqrt{\frac{mn}{m+n}}\, \|F_n^{-1} - G_m^{-1}\|_\infty \to_w \sup_{t \in (0,1)} \left| \frac{B(t)}{f(F^{-1}(t))} \right|,$$
which of course is distribution dependent. Since one would have to resort to computer-intensive Monte-Carlo techniques (like bootstrap or permutation testing) to control type-1 error, these tests are sometimes overlooked (though with modern computing speeds, they merit further study).

Wasserstein is a QQ test
We recall that in general, for $p \in [1, \infty)$, the $p$-Wasserstein distance between two probability measures $P, Q$ on $\mathbb{R}$ with finite $p$-moments is given by
$$W_p(P, Q) = \left( \inf_{\pi \in \Gamma(P, Q)} \int_{\mathbb{R} \times \mathbb{R}} |x - y|^p\, d\pi(x, y) \right)^{1/p}. \tag{10}$$
Because the Wasserstein distance measures the cost of transporting mass from the original distribution $P$ into the target distribution $Q$, one can say that it measures "horizontal" discrepancies between $P$ and $Q$. Intuitively, two probability distributions $P$ and $Q$ that differ over "long" (horizontal) regions will be far away from each other in the Wasserstein sense, because in that case mass has to travel long distances to go from the original distribution to the target distribution. In the one dimensional case (in contrast with what happens in dimension $d > 1$), the $p$-Wasserstein distance has a simple interpretation in terms of the quantile functions $F^{-1}$ and $G^{-1}$ of $P$ and $Q$ respectively. The reason for this is that the optimal way to transport mass from $P$ to $Q$ has to satisfy a certain monotonicity property, which we describe in the proof of the following proposition. This is a well known fact that can be found, for example, in Thas [2010]; nevertheless, we present its proof in the Appendix for the sake of completeness.
Proposition 1. The $p$-Wasserstein distance between two probability measures $P$ and $Q$ on $\mathbb{R}$ with finite $p$-moments can be written as
$$W_p(P, Q) = \left( \int_0^1 |F^{-1}(t) - G^{-1}(t)|^p\, dt \right)^{1/p},$$
where $F^{-1}$ and $G^{-1}$ are the quantile functions of $P$ and $Q$ respectively.
Having considered the $p$-Wasserstein distance $W_p(P, Q)$ for $p \in [1, \infty)$ in the one dimensional case, we conclude this section by considering the case $p = \infty$. Let $P, Q$ be two probability measures on $\mathbb{R}$ with bounded support; that is, assume that there exists a number $N > 0$ such that $\mathrm{supp}(P) \subseteq [-N, N]$ and $\mathrm{supp}(Q) \subseteq [-N, N]$. We define the $\infty$-Wasserstein distance between $P$ and $Q$ by
$$W_\infty(P, Q) := \inf_{\pi \in \Gamma(P, Q)} \operatorname*{ess\,sup}_{\pi} |x - y|,$$
where the essential supremum is taken with respect to $\pi$. Proceeding as in the case $p \in [1, \infty)$, it is possible to show that the $\infty$-Wasserstein distance between $P$ and $Q$ with bounded supports can be written in terms of the difference of the corresponding quantile functions as
$$W_\infty(P, Q) = \sup_{t \in (0,1)} |F^{-1}(t) - G^{-1}(t)|.$$
The Wasserstein distance is also sometimes called the Kantorovich-Rubinstein metric or the Mallows distance in the statistical literature, where it has been studied extensively due to its ability to capture weak convergence precisely: $W_p(F_n, F)$ converges to 0 if and only if $F_n$ converges in distribution to $F$ and the $p$-th moment of $X$ under $F_n$ converges to the corresponding moment under $F$; see Dobrushin [1970], Mallows [1972], Bickel and Freedman [1981].
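Proposition 1 makes the univariate Wasserstein distance directly computable: for two samples of equal size, the optimal coupling simply matches order statistics, so the distance costs only a sort. A minimal sketch follows (for unequal sample sizes one would interpolate empirical quantiles instead; scipy's `wasserstein_distance` handles the $p = 1$ case):

```python
import numpy as np

def wasserstein_1d(x, y, p=2):
    """p-Wasserstein distance between two 1-d samples of equal size n, via
    Proposition 1: the L^p distance between empirical quantile functions."""
    assert len(x) == len(y), "equal sizes assumed; otherwise interpolate quantiles"
    return np.mean(np.abs(np.sort(x) - np.sort(y)) ** p) ** (1.0 / p)
```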

A Distribution-Free Wasserstein Test
As we saw earlier, under the null hypothesis $H_0: P = Q$, the statistic $\frac{mn}{m+n} W_2^2(P_n, Q_m)$ has an asymptotic distribution which is not distribution free, i.e., it depends on $F$. We also saw that, as opposed to what happens with the asymptotic distribution of the $L^\infty$ distance between $F_n$ and $G_m$, the asymptotic distribution of $\|F_n^{-1} - G_m^{-1}\|_\infty$ does depend on the CDF $F$.
In this section we show how we can construct a distribution-free Wasserstein test. To prove that it is distribution-free, we connect it to the theory of ROC and ODC curves.

Relating Wasserstein Distance to ROC and ODC curves
Let $P$ and $Q$ be two distributions on $\mathbb{R}$ with CDFs $F$ and $G$ and quantile functions $F^{-1}$ and $G^{-1}$ respectively. We define the ROC curve between $F$ and $G$ as the function
$$ROC(t) := 1 - F(G^{-1}(1 - t)), \quad t \in [0, 1].$$
In addition, we define their ODC curve by
$$ODC(t) := G(F^{-1}(t)), \quad t \in [0, 1].$$
The following are straightforward properties of the ROC curve.
1. If F = G, then ROC(t) = t for all t.
2. If G(t) ≥ F (t) for all t then ROC(t) ≥ t for all t.
3. If F and G have densities with monotone likelihood ratio, then the ROC curve is concave.
4. The area under the ROC curve is equal to P(Y < X), where Y ∼ Q and X ∼ P .
Intuitively speaking, the faster the ROC curve increases towards the value 1, the easier it is to distinguish the distributions $P$ and $Q$. Observe from their definitions that the ROC curve can be obtained from the ODC curve by reversing the axes. Given this, we focus from this point onwards on only one of them, the ODC curve being the more convenient for our purposes.
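Empirical ODC curves and the AUC of property 4 are easy to compute from data; a sketch follows (the `method="inverted_cdf"` option of `np.quantile`, available in recent numpy versions, gives the standard empirical quantile function):

```python
import numpy as np

def empirical_odc(x, y, t):
    """Empirical ODC curve G_m(F_n^{-1}(t)) at points t in (0, 1)."""
    xq = np.quantile(x, t, method="inverted_cdf")                  # F_n^{-1}(t)
    return np.searchsorted(np.sort(y), xq, side="right") / len(y)  # G_m evaluated there

def empirical_auc(x, y):
    """Empirical area under the ROC curve, estimating P(Y < X) (property 4)."""
    x, y = np.asarray(x), np.asarray(y)
    return np.mean(x[:, None] > y[None, :])
```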
The first observation about the ODC curve is that it can be regarded as the quantile function of the distribution $G_\sharp P$ (the push-forward of $P$ by $G$) on $[0, 1]$, which is defined by
$$G_\sharp P(A) := P(G^{-1}(A)) \quad \text{for Borel sets } A \subseteq [0, 1].$$
Similarly, we can consider the measure $G_{m\sharp} P_n$, that is, the push-forward of $P_n$ by $G_m$. We crucially note that the empirical ODC curve $G_m \circ F_n^{-1}$ is the quantile function of $G_{m\sharp} P_n$. From Section 4, we deduce that
$$W_p^p(G_{m\sharp} P_n, G_\sharp P) = \int_0^1 |G_m(F_n^{-1}(t)) - G(F^{-1}(t))|^p\, dt.$$
That is, the $p$-Wasserstein distance between the measures $G_{m\sharp} P_n$ and $G_\sharp P$ can be computed by considering the $L^p$ distance between the ODC curve and its empirical version. First, we argue that under the null hypothesis $H_0: P = Q$, the distribution of the empirical ODC curve is actually independent of $P$. In particular, $W_p^p(G_{m\sharp} P_n, G_\sharp P)$ and $W_\infty(G_{m\sharp} P_n, G_\sharp P)$ are distribution free under the null! This is the content of the next lemma, proved in the Appendix.
Lemma 1 (Reduction to uniform distribution). Let $F, G$ be two continuous and strictly increasing CDFs and let $X_1, \ldots, X_n \sim F$ and $Y_1, \ldots, Y_m \sim G$ be two independent samples. We let $F_n$ and $G_m$ be the CDFs associated to the empirical distributions induced by the $X$s and the $Y$s respectively. Consider the (unknown) random variables
$$U_k^X := F(X_k), \quad k = 1, \ldots, n, \qquad U_k^Y := G(Y_k), \quad k = 1, \ldots, m,$$
which are distributed uniformly on $[0, 1]$. Let $F_n^U$ be the empirical CDF associated to the (uniform) $U^X$s and let $G_m^U$ be the empirical CDF associated to the (uniform) $U^Y$s. Then, under the null hypothesis $H_0: P = Q$,
$$G_m \circ F_n^{-1} = G_m^U \circ (F_n^U)^{-1}.$$
Note that since the $U_k^X, U_k^Y$ are instantiations of uniformly distributed random variables, the right hand side of the last equation only involves uniform random variables; hence, the distribution of $G_m \circ F_n^{-1}$ is independent of $F, G$ under the null. Now we are almost done: this lemma implies that the Wasserstein distance between $G_{m\sharp} P_n$ (the measure whose quantile function is the empirical ODC curve $G_m \circ F_n^{-1}$) and the uniform distribution on $[0, 1]$ is distribution free under the null. More formally, we establish a result on the asymptotic distribution of $W_p^p(G_{m\sharp} P_n, G_\sharp P)$ and $W_\infty(G_{m\sharp} P_n, G_\sharp P)$. We do this by first considering the asymptotic distribution of the difference between the empirical ODC curve and the population ODC curve, regarding both of them as elements of the space $D([0, 1])$. This is the content of the following theorem, which follows directly from the work of Komlós et al. [1976].
Theorem 1. Suppose that $F$ and $G$ are two CDFs with densities $f, g$ satisfying $f(F^{-1}(t)) > 0$ and $g(F^{-1}(t)) > 0$ for all $t \in [0, 1]$. Also, assume that $n/m \to \lambda \in [0, \infty)$ as $n, m \to \infty$. Then,
$$\sqrt{\frac{mn}{m+n}} \left( G_m \circ F_n^{-1} - G \circ F^{-1} \right) \to_w \sqrt{\frac{\lambda}{1+\lambda}}\, B_1 \circ G \circ F^{-1} + \sqrt{\frac{1}{1+\lambda}}\, \frac{g \circ F^{-1}}{f \circ F^{-1}}\, B_2,$$
where $B_1$ and $B_2$ are two independent Brownian bridges, and where the weak convergence must be interpreted as weak convergence in the space of probability measures on the space $D([0, 1])$.
As a corollary, under the null hypothesis $H_0: P = Q$ we obtain the following. Suppose that the CDF $F$ of $P$ is continuous and strictly increasing. Then,
$$\frac{mn}{m+n} W_2^2(G_{m\sharp} P_n, G_\sharp P) = \frac{mn}{m+n} \int_0^1 (G_m(F_n^{-1}(t)) - t)^2\, dt \to_w \int_0^1 (B(t))^2\, dt$$
and
$$\sqrt{\frac{mn}{m+n}}\, W_\infty(G_{m\sharp} P_n, G_\sharp P) = \sqrt{\frac{mn}{m+n}} \sup_{t \in [0,1]} |G_m(F_n^{-1}(t)) - t| \to_w \sup_{t \in [0,1]} |B(t)|.$$
To see this, note that by Lemma 1 it suffices to consider $F(t) = t$ on $[0, 1]$. In that case, the assumptions of Theorem 1 are satisfied and the result follows directly.
The takeaway message of this section is that instead of considering the Wasserstein distance between $F_n$ and $G_m$, whose null distribution depends on the unknown $F$, one can instead consider the Wasserstein distance between $G_{m\sharp} P_n$ (with quantile function $G_m \circ F_n^{-1}$) and the uniform distribution $U[0, 1]$; since its null distribution is independent of $F$, we have constructed a distribution-free test.
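Putting the pieces together, here is a sketch of the resulting distribution-free test: compute the statistic $\frac{mn}{m+n} W_2^2(G_{m\sharp} P_n, G_\sharp P)$ on the data, and compare it to a null quantile that, by Lemma 1, can be obtained once and for all by simulating uniform samples (the grid-based integral below is a simple approximation of the piecewise-constant integrand):

```python
import numpy as np

def odc_w2_statistic(x, y, grid=10000):
    """Statistic (mn / (m+n)) * int_0^1 (G_m(F_n^{-1}(t)) - t)^2 dt."""
    n, m = len(x), len(y)
    t = (np.arange(grid) + 0.5) / grid
    odc = empirical_odc(x, y, t)          # empirical ODC from the earlier sketch
    return (n * m / (n + m)) * np.mean((odc - t) ** 2)

def null_quantile(n, m, alpha=0.05, n_sim=2000, seed=0):
    """Monte-Carlo (1 - alpha) null quantile: by Lemma 1 it suffices to
    simulate both samples from the uniform distribution on [0, 1]."""
    rng = np.random.default_rng(seed)
    sims = [odc_w2_statistic(rng.random(n), rng.random(m)) for _ in range(n_sim)]
    return np.quantile(sims, 1 - alpha)
```

The test then rejects $H_0$ when `odc_w2_statistic(x, y)` exceeds `null_quantile(n, m, alpha)`; crucially, the same table of null quantiles is valid for every continuous $F$.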

Conclusion
In this paper, we connect a wide variety of univariate and multivariate test statistics, with the central piece being the Wasserstein distance. The Wasserstein statistic is closely related to univariate tests like the Kolmogorov-Smirnov test, graphical QQ plots, and a distribution-free variant of the test is proposed by connecting it to ROC/ODC curves. Through entropic smoothing, the Wasserstein test is also related to the multivariate tests of Energy Distance and hence transitively to the Kernel Maximum Mean Discrepancy.
We hope that this is a useful resource connecting the seemingly vastly different families of two sample tests, many of which can be analyzed under the two umbrellas of our paper: whether they differentiate between CDFs, QFs or CFs, and what their null distributions look like. A comprehensive empirical survey is also of interest, but out of our current scope.

A Proof of Proposition 1
Proof: We first observe that the infimum in the definition of $W_p(P, Q)$ can be replaced by a minimum; namely, there exists a transportation plan $\pi \in \Gamma(P, Q)$ that achieves the infimum in (10). This can be deduced in a straightforward way by noting that the expression $\int_{\mathbb{R} \times \mathbb{R}} |x - y|^p\, d\pi(x, y)$ is linear in $\pi$ and that the set $\Gamma(P, Q)$ is compact in the sense of weak convergence of probability measures on $\mathbb{R} \times \mathbb{R}$. Let us denote by $\pi^*$ an element of $\Gamma(P, Q)$ realizing the minimum in (10). Let $(x_1, y_1) \in \mathrm{supp}(\pi^*)$ and $(x_2, y_2) \in \mathrm{supp}(\pi^*)$ (here $\mathrm{supp}(\pi^*)$ stands for the support of $\pi^*$) and suppose that $x_1 < x_2$. We claim that the optimality of $\pi^*$ implies that $y_1 \leq y_2$. To see this, suppose for the sake of contradiction that this is not the case, that is, suppose that $y_2 < y_1$. We claim that in that case
$$|x_1 - y_2|^p + |x_2 - y_1|^p < |x_1 - y_1|^p + |x_2 - y_2|^p.$$
Rerouting mass accordingly (sending mass near $x_1$ to $y_2$ and mass near $x_2$ to $y_1$) would then produce a feasible plan with strictly smaller cost, contradicting the optimality of $\pi^*$. Hence the support of $\pi^*$ is monotone, and such a monotone coupling is precisely the one induced by $t \mapsto (F^{-1}(t), G^{-1}(t))$ for $t$ uniform on $(0, 1)$, from which the representation in Proposition 1 follows.

B Proof of Lemma 1
Proof: We denote by $Y_{(1)} \leq \cdots \leq Y_{(m)}$ the order statistics associated to the $Y$s. For $k = 1, \ldots, m - 1$ and $t \in (0, 1)$, we have $G_m(t) = \frac{k}{m}$ if and only if $t \in [Y_{(k)}, Y_{(k+1)})$, which (using $F = G$ under the null, so that $Y_{(k)} = F^{-1}(U_{(k)}^Y)$) holds if and only if $t \in [F^{-1}(U_{(k)}^Y), F^{-1}(U_{(k+1)}^Y))$, which in turn is equivalent to $F(t) \in [U_{(k)}^Y, U_{(k+1)}^Y)$. Thus, $G_m(t) = \frac{k}{m}$ if and only if $G_m^U(F(t)) = \frac{k}{m}$. From the previous observations we conclude that $G_m = G_m^U \circ F$. Finally, since $X_k = F^{-1}(U_k^X)$, we conclude that $F_n = F_n^U \circ F$, hence $F_n^{-1} = F^{-1} \circ (F_n^U)^{-1}$, so that
$$G_m \circ F_n^{-1} = G_m^U \circ F \circ F^{-1} \circ (F_n^U)^{-1} = G_m^U \circ (F_n^U)^{-1},$$
as claimed.