Abstract
Two-sample and independence tests with the kernel-based maximum mean discrepancy (MMD) and Hilbert–Schmidt independence criterion (HSIC) have shown remarkable results on i.i.d. data and stationary random processes. However, these statistics are not directly applicable to nonstationary random processes, a prevalent form of data in many scientific disciplines. In this work, we extend the application of MMD and HSIC to nonstationary settings by assuming access to independent realisations of the underlying random process. These realisations—in the form of nonstationary time-series measured on the same temporal grid—can then be viewed as i.i.d. samples from a multivariate probability distribution, to which MMD and HSIC can be applied. We further show how to choose suitable kernels over these high-dimensional spaces by maximising the estimated test power with respect to the kernel hyperparameters. In experiments on synthetic data, we demonstrate superior performance of our proposed approaches in terms of test power when compared to current state-of-the-art functional or multivariate two-sample and independence tests. Finally, we employ our methods on a real socioeconomic dataset as an example application.
1. Introduction
Nonstationary processes are the rule rather than the exception in many scientific disciplines such as epidemiology, biology, sociology, economics, or finance. In recent years, there has been a surge of interest in the analysis of problems described by large sets of interrelated variables with few observations over time, often involving complex nonlinear and nonstationary behaviours. Examples of such problems include the longitudinal spread of obesity in social networks [1], disease modelling from time-varying inter- and intracellular relationships [2], behavioural responses to losses of loved ones within social groups [3], and the linkage between climate change and the global financial system [4]. All such analyses rely on the statistical assessment of the similarity between, or the relationship amongst, noisy time series that exhibit temporal memory. Therefore, the ability to test the statistical significance of homogeneity and dependence between random processes that cannot be assumed to be independent and identically distributed (i.i.d.) is of fundamental importance in many fields.
Kernel-based methods provide a popular framework for homogeneity and independence tests by embedding probability distributions in reproducing kernel Hilbert spaces (RKHS) [5] (Section 2.2). Of particular interest are the kernel-based two-sample statistic maximum mean discrepancy (MMD) [6], which is used to assess whether two samples were drawn from the same distribution, hence testing for homogeneity; and the related Hilbert–Schmidt independence criterion (HSIC) [7], which is used to assess dependence between two random variables, thus testing for independence. These methods are nonparametric, i.e., they do not make any assumptions on the underlying distribution or the type of dependence. However, in their original form, both MMD and HSIC assume access to a sample of i.i.d. observations—an assumption that is often violated for temporally-dependent data such as random processes.
Extensions of MMD and HSIC to random processes have been proposed [8,9]. Yet, these methods require the random process to be stationary, meaning that its distribution does not change over time. While it is sometimes possible to approximately achieve stationarity with preprocessing techniques such as (seasonal) differencing or square root and power transformations, such approaches become cumbersome and notoriously difficult, particularly with large sets of variables. The stationarity assumption can therefore pose severe limitations in many application areas where multiple nonstationary processes must be taken into consideration. When studying the relationships of climate change to the global financial system, for example, factors such as greenhouse gas emissions, stock market indices, government spending, and corporate profits would have to be transformed or assumed to be stationary over time.
In this paper, we show how the kernel-based statistics MMD and HSIC can be applied to nonstationary random processes. At the heart of our proposed approach is the simple, yet effective idea that realisations of a random process in the form of temporally-dependent measurements (i.e., the observed time series) can be viewed as independent samples from a multivariate probability distribution, provided that they are observed at the same points in time, i.e., over the same temporal grid. Then, MMD and HSIC can be applied to these distributions to test for homogeneity and independence, respectively.
The remainder of this paper is structured as follows. After discussing related work in Section 2, we introduce our applications of two-sample and independence testing with MMD and HSIC to nonstationary random processes in Section 3. We then carry out experiments on multiple synthetic datasets in Section 4 and demonstrate that the proposed tests have higher power compared with current functional or multivariate two-sample and independence tests under the same conditions. We provide an example application of our proposed methods to a socioeconomic dataset in Section 5 and conclude the paper with a brief discussion in Section 6.
2. Related Work
Two-sample and independence tests on stochastic processes have been widely studied in recent years. Under the stationarity assumption, ref. [8] investigate how the kernel cross-spectral density operator may be used to test for independence, and [9] formulate a wild bootstrap-based approach for both two-sample and independence tests, which outperforms [8] in various experiments. The wild bootstrap in [9] approximates the null hypothesis by assuming there exists a time lag l0 such that the pair of measurements at any point in time t is independent of the pair of measurements more than l0 time steps apart. This method is applicable to test for instantaneous homogeneity and independence in stationary processes but requires further assumptions to investigate noninstantaneous cases: a maximum lag must be defined as the largest absolute lag for the test, which results in multiple hypothesis testing and requires adjustment by a Bonferroni correction. Further, ref. [10] have applied distance correlation [11], an HSIC-related statistic, to independence testing on stationary random processes.
Beyond the stationarity assumption, two-sample testing in the functional data analysis literature has mostly focused on differences of mean [12] or covariance structures [13,14]. However, ref. [15] have developed a two-sample test for distributions based on generalisations of a finite-dimensional test by utilising functional principal component analysis, and [16] have derived kernels over functions to be used with mmd for the two-sample test. Independence testing for functional data using kernels was recently proposed in [17] but assumes the samples lie on a finite-dimensional subspace of the function space—an assumption not required in our work. Moreover, ref. [18] have developed computationally efficient methods to test for independence on high-dimensional distributions and large sample sizes by using eigenvalues of centred kernel matrices to approximate the distribution under the null hypothesis instead of simulating a large number of permutations.
3. MMD and HSIC for Nonstationary Random Processes
3.1. Notation and Assumptions
Let X and Y denote two nonstationary stochastic processes with probability laws P_X and P_Y, respectively. We assume that we observe m independent realisations of X and n independent realisations of Y in the form of time series measured at T_X and T_Y time points, respectively. Said differently, the data samples are a set of nonstationary time series x^(1), …, x^(m) arriving over the same temporal grid, and similarly y^(1), …, y^(n) for Y. Note that the measurements within each realisation are not independent across time (we use the terms ‘sample’ and ‘realisation’ interchangeably to denote x^(i) and y^(j), and the term ‘measurement’ to denote the temporally dependent entries of these vectors).
We may view the realisations x^(i) and y^(j) as samples from multivariate probability distributions of dimension T_X and T_Y, respectively, with the realisations mutually independent of one another at any given point in time. Consequently, we can represent these distributions by their mean embeddings μ_X and μ_Y in reproducing kernel Hilbert spaces (RKHS) and use these to conduct kernel-based two-sample and independence tests. Given a characteristic kernel k, the mean embedding captures all information of a distribution [19]; the dependence between measurements in time is captured by the ordering of the variables; and the embedding of any probability distribution into the RKHS is injective, thus guaranteeing a unique mapping [20].
For homogeneity testing (H0: P_X = P_Y), we use the kernel-based MMD statistic and require an equal number of measurements, T_X = T_Y, but allow different sample sizes, m ≠ n. For independence testing (H0: P_XY = P_X P_Y), we employ the related HSIC; here, the numbers of measurements T_X and T_Y can differ, but we require the same number of realisations, m = n. We now describe how two-sample and independence tests can be performed under these assumptions.
3.2. MMD for Nonstationary Random Processes
Let k be a characteristic kernel, such as the Gaussian kernel k(x, x′) = exp(−‖x − x′‖²/(2σ²)), which uniquely maps P_X and P_Y to their associated RKHS H_k via the mean embeddings μ_X and μ_Y [5] (Section 2.1). The squared MMD between P_X and P_Y in H_k is defined as [6]:

MMD²(P_X, P_Y) = ‖μ_X − μ_Y‖²_{H_k}.
Given samples {x_i} from P_X and {y_j} from P_Y, MMD² can then be approximated by the following unbiased estimator [6]:

MMD²_u = 1/(m(m−1)) Σ_{i≠j} k(x_i, x_j) + 1/(n(n−1)) Σ_{i≠j} k(y_i, y_j) − 2/(mn) Σ_{i,j} k(x_i, y_j).
Henceforth, we drop the implied arguments (P_X, P_Y) for ease of notation.
Using MMD²_u as a test statistic, one can construct a statistical two-sample test for the null hypothesis H0: P_X = P_Y against the alternative hypothesis H1: P_X ≠ P_Y [21].
Let α be the significance level of the test, i.e., the maximum allowable probability of falsely rejecting H0 and hence an upper bound on the type-I error. Given α, the threshold for the test statistic can be approximated with a permutation test as follows. We first generate P randomly permuted partitions of the set of all realisations into two groups of sizes m and n. We then compute MMD²_u for each permuted partition and sort the results in descending order. Finally, we select the statistic at position ⌈αP⌉ as our empirical threshold c_α. The null hypothesis is then rejected if the statistic on the original partition exceeds c_α. For a computationally less expensive (but generally less accurate) option, the inverse cumulative density function of a Gamma distribution can be computed to approximate the null distribution [22].
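As a concrete sketch of the estimator and permutation procedure above (illustrative code, not the implementation used in the paper; the Gaussian kernel and the bandwidth parameter `sigma` are assumptions for concreteness):

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Gaussian kernel matrix between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2_unbiased(X, Y, sigma):
    """Unbiased estimator of the squared MMD between samples X and Y."""
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())

def mmd_permutation_test(X, Y, sigma, alpha=0.05, n_perm=500, seed=0):
    """Reject H0 if the statistic exceeds the permutation threshold."""
    rng = np.random.default_rng(seed)
    Z, m = np.vstack([X, Y]), len(X)
    stat = mmd2_unbiased(X, Y, sigma)
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))  # random repartition of all realisations
        null.append(mmd2_unbiased(Z[idx[:m]], Z[idx[m:]], sigma))
    return stat > np.quantile(null, 1 - alpha)
```

Taking the (1 − α)-quantile of the permuted statistics is equivalent to selecting the statistic at position ⌈αP⌉ after sorting in descending order.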
3.3. HSIC for Nonstationary Random Processes
Let P_XY denote the joint distribution of X and Y, and let H_k and H_l be separable RKHSs with characteristic kernels k and l, respectively. HSIC is then defined as the squared MMD between P_XY and the product of the marginals P_X P_Y [7]:

HSIC(P_XY) = ‖μ_XY − μ_X ⊗ μ_Y‖²_{H_k ⊗ H_l}.
Here, ⊗ denotes the tensor product. Recall that we assume an equal number of realisations m for both processes, and let K and L be the kernel matrices with entries K_ij = k(x_i, x_j) and L_ij = l(y_i, y_j), respectively. Given i.i.d. samples {(x_i, y_i)}, i = 1, …, m, an unbiased empirical estimator of HSIC is given by [23] (Theorem 2):

HSIC_u = 1/(m(m−3)) [ tr(K̃L̃) + (1ᵀK̃1)(1ᵀL̃1)/((m−1)(m−2)) − (2/(m−2)) 1ᵀK̃L̃1 ],
where K̃ and L̃ equal K and L with their diagonal entries set to zero, and 1 is the m-dimensional vector of ones. To ease our notation, we henceforth omit the implied arguments.
To test for statistical significance, we define the null hypothesis H0: P_XY = P_X P_Y and the alternative H1: P_XY ≠ P_X P_Y. We broadly repeat the procedure outlined in Section 3.2 by bootstrapping the distribution under H0 via permutations, with the distinction that we only permute the samples y_i, whilst the x_i are kept unchanged [7]. HSIC_u is then computed for each permutation, and the empirical threshold c_α is taken as the statistic at position ⌈αP⌉. The null hypothesis is rejected if the statistic on the original pairing exceeds c_α.
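The HSIC test can be sketched analogously. For brevity, this illustration uses the simpler biased V-statistic tr(KHLH)/m² rather than the unbiased estimator of [23]; the Gaussian kernels and bandwidth parameters are likewise assumptions:

```python
import numpy as np

def gaussian_gram(A, sigma):
    """Gaussian kernel Gram matrix of the rows of A."""
    sq = np.sum(A**2, 1)
    d2 = sq[:, None] + sq[None, :] - 2 * A @ A.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic_biased(X, Y, sigma_x, sigma_y):
    """Biased (V-statistic) HSIC estimate tr(KHLH)/m^2."""
    m = len(X)
    H = np.eye(m) - np.ones((m, m)) / m          # centring matrix
    K = gaussian_gram(X, sigma_x)
    L = gaussian_gram(Y, sigma_y)
    return np.trace(K @ H @ L @ H) / m**2

def hsic_permutation_test(X, Y, sigma_x, sigma_y, alpha=0.05,
                          n_perm=300, seed=0):
    """Permute only the y-samples to simulate the null distribution."""
    rng = np.random.default_rng(seed)
    stat = hsic_biased(X, Y, sigma_x, sigma_y)
    null = [hsic_biased(X, Y[rng.permutation(len(Y))], sigma_x, sigma_y)
            for _ in range(n_perm)]
    return stat > np.quantile(null, 1 - alpha)
```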
3.4. Maximising the Test Power
The power of both MMD-based two-sample and HSIC-based independence tests is prone to decay in high-dimensional spaces [24,25], as in our setting where each measurement point in time is treated as a separate dimension. Hence, we describe here how a kernel k can be chosen to maximise the test power, i.e., the probability of correctly rejecting H0 given that it is false. First, note that under H1, both MMD²_u [21] (Corollary 16) and HSIC_u [7] (Theorem 1) are asymptotically Gaussian:

(MMD²_u − MMD²)/√V_MMD → N(0, 1) and (HSIC_u − HSIC)/√V_HSIC → N(0, 1),

where V_MMD and V_HSIC denote the asymptotic variances of MMD²_u and HSIC_u, respectively [26] (Section 5.5.1 (A)).
Given a significance level α, we define the test thresholds c_MMD and c_HSIC and reject H0 if MMD²_u > c_MMD or HSIC_u > c_HSIC, respectively. Following [27], the test power is defined in terms of the distributions of the statistics under H1, with equal sample sizes m = n, as:

Pr(MMD²_u > c_MMD) ≈ Φ((MMD² − c_MMD)/√V_MMD),   (7)

Pr(HSIC_u > c_HSIC) ≈ Φ((HSIC − c_HSIC)/√V_HSIC),   (8)
where Φ is the cumulative distribution function of the standard Gaussian distribution, and the threshold terms vanish relative to the leading terms with increasing sample size. To maximise the test power, we maximise the argument of Φ, which we approximate by maximising MMD²_u and minimising the variance V_MMD for (7), and similarly for (8). The empirical unbiased variance estimator for (7) was derived in [27], and we use [23] (Theorem 5) for the variance in (8).
We perform this optimisation by splitting our samples into training and testing sets, of which we take the former to learn the kernel hyperparameters and the latter to conduct the final hypothesis test with the learnt kernel.
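The splitting and hyperparameter search can be sketched as follows. Note that the closed-form variance estimators of [27] and [23] are replaced here by a bootstrap proxy for the mean-to-standard-deviation ratio, purely for illustration; `statistic` stands for any estimator of MMD² or HSIC with signature (X, Y, sigma), and the candidate grid is an assumption:

```python
import numpy as np

def split_train_test(X, Y, frac=0.5, seed=0):
    """Equal split: the train half learns the kernel hyperparameters,
    the test half runs the final hypothesis test."""
    rng = np.random.default_rng(seed)
    ix, iy = rng.permutation(len(X)), rng.permutation(len(Y))
    mx, my = int(frac * len(X)), int(frac * len(Y))
    return (X[ix[:mx]], Y[iy[:my]]), (X[ix[mx:]], Y[iy[my:]])

def select_bandwidth(X, Y, statistic, candidates, n_boot=100, seed=0):
    """Pick the bandwidth maximising mean/std of the bootstrapped
    statistic -- a proxy for the argument of the Gaussian CDF in the
    asymptotic power expression."""
    rng = np.random.default_rng(seed)
    def power_proxy(sigma):
        stats = [statistic(X[rng.integers(0, len(X), len(X))],
                           Y[rng.integers(0, len(Y), len(Y))], sigma)
                 for _ in range(n_boot)]
        return np.mean(stats) / (np.std(stats) + 1e-12)
    return max(candidates, key=power_proxy)
```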
4. Experimental Results on Synthetic Data
To evaluate our proposed tests empirically, we first apply our homogeneity and independence tests to various nonstationary synthetic datasets. We report test performance as the percentage of rejections of the null hypothesis H0 over 200 trials (i.e., 200 independently generated synthetic datasets); this rejection rate equals the test power when H0 is false. We provide binomial confidence intervals for the rejection rate.
4.1. Homogeneity Tests with MMD
4.1.1. Setup
We evaluate our MMD-based homogeneity test against shifts in the mean and variance of two nonstationary stochastic processes X and Y by establishing whether the null hypothesis H0: P_X = P_Y is correctly accepted or rejected. For ease of comparison, we adopt the experimental protocol of [15] and consider two stochastic processes based on a linear mixed effects model. We generate independent samples of X and Y on an equally spaced temporal grid of length T,
where the signal is a linear combination of Fourier basis functions. The basis coefficients and the additive noises are all independent Gaussian-distributed random variables with means and variances specified below.
We evaluate the test power against varying values of shifts in mean and variance as follows:
- Mean shift: and . The basis coefficients are sampled as and , and the additive noises are sampled as .
- Variance shift: We take , and introduce a shift in variance in the first basis function coefficients via and . The second coefficients are sampled as , and the noises as .
The shift coefficients for mean and variance, respectively, determine the departure from the null hypothesis: a zero shift means H0 is true, whereas a nonzero shift means H0 is false. Although this is not a necessity, we set the number of independent samples of X and Y to be equal, m = n. To test for statistical significance, we follow the procedure described in Section 3.2 and perform permutation tests for varying shift levels and different sample sizes.
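To make the protocol concrete, a sketch of a linear mixed effects generator of this kind is given below. The grid length, the two Fourier basis functions, and all means and variances are illustrative stand-ins, not the exact specification of [15]:

```python
import numpy as np

def simulate_process(m, T=100, mean_shift=0.0, coeff_std=(1.0, 1.0),
                     noise_std=0.5, seed=0):
    """m realisations of a linear mixed effects model on an equally
    spaced grid of T time points: random Fourier-basis coefficients
    plus additive Gaussian noise. `mean_shift` displaces the first
    basis coefficient, mimicking the mean shift experiments."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, T)
    basis = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    coeffs = rng.normal(0.0, coeff_std, (m, 2))   # random effects
    coeffs[:, 0] += mean_shift                    # departure from H0
    noise = rng.normal(0.0, noise_std, (m, T))    # measurement noise
    return coeffs @ basis + noise
```

Each returned row is one realisation (time series), i.e., one sample of the T-dimensional distribution to be tested.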
4.1.2. Baseline Results without Test Power Optimisation
Our baseline results are obtained with a Gaussian kernel whose bandwidth equals the median distance between observations of the aggregated samples. Figure 1 shows how our method (solid lines) compares to [15] (dashed lines). For all sample sizes, the type-I error rate lies at or below the allowable probability of false rejection α, and our method significantly outperforms [15] for nearly all levels of mean and variance shifts. Both shifts become easier to detect for larger sample sizes. Particularly strong improvements are achieved for mean shifts: our method makes no type-II errors at far smaller sample sizes and shift levels than [15]. We obtain similar test power results (see Appendix A.1) for coarser realisations over the same interval.
Figure 1.
Results of our MMD-based homogeneity test for nonstationary random processes: percentage of rejected H0 as mean shift (left) and variance shift (right) are varied. Our baseline method (solid lines) is compared to [15] (dashed lines) for different sample sizes and discrete time points.
4.1.3. Results of the Optimised Test
Next, we apply the method described in Section 3.4 to maximise the test power. Specifically, we search for the Gaussian kernel bandwidth σ (over the spaces defined in Table A1 in Appendix A.2) that maximises the argument of Φ in our approximation of (7) on our training samples. For demonstrative purposes, we choose to split our dataset equally into training and testing sets, although other ratios may lead to higher test power. Figure 2 shows the results of the optimised test (dotted lines) against the baseline results (solid lines) and the results of [15] (dashed lines). We find that the test power is significantly improved by our optimisation for the detection of mean shifts: the test power rises fourfold compared to our baseline method, and the optimised test makes no type-II errors at smaller mean shifts than both our baseline test and [15]. In its current form, however, our optimisation does not yield higher test power for the detection of variance shifts, a fact that we discuss in Section 6.
Figure 2.
Results of the homogeneity test with test power optimisation: percentage of rejected H0 for mean shift (left) and variance shift (right). Our optimised test power method (dotted lines) is compared to our baseline method (solid lines) and [15] (dashed lines).
4.2. Independence Tests with HSIC
4.2.1. Setup
To test for independence, the null hypothesis is H0: P_XY = P_X P_Y. We assume we observe measurements of X and Y over temporal grids of lengths T_X and T_Y, respectively. To measure type-I and type-II error rates, we use the following experimental protocols, partly adopted from [7,18,28]:
- Linear dependence: is generated as in (9) with , basis coefficients , , and noise . The samples of the second process are where , as in [18].
- Dependence through a shared coefficient: and are generated as in (9) with and independently sampled , , , as in the mean shift experiments of Section 4.1, but where the stochastic processes now share the second basis function coefficient: .
- Dependence through rotation: We start by generating independent samples as in (9), but with basis coefficients drawn from: (i) Student-t, (ii) uniform, or (iii) exponential distributions [28] (Table 3). We next multiply by a rotation matrix with angle θ to generate new rotated samples, which we then test for independence. Clearly, for θ = 0 our samples are independent, and as θ is increased their dependence becomes easier to detect (see [7] (Section 4) and Figure A3 for implementation details).
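The rotation mixing in the last protocol can be sketched as follows (a minimal illustration; for θ = 0 the mixed samples remain independent, and for non-Gaussian coefficients dependence grows with θ):

```python
import numpy as np

def rotate_samples(U, V, theta):
    """Coordinate-wise 2-D rotation of two independent sample arrays.
    Returns the mixed pair (X, Y) to be tested for independence."""
    c, s = np.cos(theta), np.sin(theta)
    return c * U - s * V, s * U + c * V
```

Note that for equal-variance Gaussian coefficients a rotation would leave the pair independent, which is why non-Gaussian coefficient distributions are used here.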
Statistical significance is computed using permutations of the samples of one process whilst the other is kept fixed to approximate the distribution under H0. Test power is calculated for varying dependence strengths and different sample sizes.
4.2.2. Baseline Results without Test Power Optimisation
Our baseline results are computed using a Gaussian kernel with bandwidth equal to the median distance between measurements in the corresponding sample. Figure 3 (left) shows the results of our test on the linear dependence experiments, which demonstrate how dependencies between individual points in time and an entire time series can be detected. We compare our method to: (i) SubCorr, a statistic explicitly aimed at linear dependence and based on the Pearson correlation coefficient; and (ii) SubHSIC. For both of these methods, the distribution under H0 is also approximated via permutations. We find that SubCorr outperforms the other methods in experiments with small sample sizes, and SubHSIC achieves comparable results to our method. The results for other numbers of time points (see Appendix A.1) are similar.
Figure 3.
Results of the HSIC-based independence test: Test power for linear dependence (left) and dependence through shared coefficients (right) as sample size is varied for various numbers of time points. For the linear dependence, we compare our baseline results to SubCorr and SubHSIC; for the shared coefficient, we compare against two spectral approximations [18] (Section 5.1).
Figure 3 (right) displays the power of our independence test for the case of dependent samples through a shared coefficient for varying sample sizes m and measurements T. We compare our results to two spectral methods [18] that approximate the distribution under H0 using eigenvalues of the centred kernel matrices of X and Y: spectral HSIC uses the unbiased estimator (4) as the test statistic with the eigenvalue-based null distribution; and spectral RFF uses a test statistic induced by a number of random Fourier features (RFF; set here to 10) that approximate the kernel matrices of X and Y. Our method and spectral HSIC achieve improvement in test power compared to spectral RFF. For small numbers of samples, our method outperforms spectral HSIC, which converges to the performance of our method with increasing sample size, as we would expect [22] (Theorem 1).
Figure 4 shows the rotation dependence experiments, where θ = 0 corresponds to the null hypothesis (independence) and θ > 0 to the alternative. The distribution hyperparameters are detailed in Appendix A.3, and we set T_X = T_Y, although equality is not required. As expected, dependence is easier to detect with increasing θ. We observe that denser temporal measurements do not result in enhanced test power. Note that the test power is highly dependent on the distribution of the coefficients of the basis functions.
Figure 4.
Results of the HSIC-based independence test: Percentage of rejected H0 in rotation dependence experiments for different numbers of discrete time points T and basis function coefficients drawn from three distributions: (i) Student-t, (ii) uniform, and (iii) exponential (see Appendix A.3). The sample size is fixed. The violet dotted lines are the results of our test power maximisation.
4.2.3. Results of the Optimised Test
The test power maximisation was applied to the rotation dependence experiments by searching for optimal Gaussian kernel bandwidths σ_x and σ_y over predefined intervals (specified in Appendix A.2). Figure 4 shows that the test power is improved when the basis function coefficients are drawn from uniform distributions: the percentage of rejected H0 is higher at intermediate angles θ and levels off at the same level achieved by our baseline method for larger θ. With our current test-train split, our optimised test does not improve the test power if the basis function coefficients are drawn from Student-t or exponential distributions.
5. Application to a Socioeconomic Dataset
As a further illustration, we apply our method to the United Nations’ socioeconomic Sustainable Development Goals (SDGs) dataset (see Appendix A.4 for details). Specifically, we investigate whether some so-called Targets of the 17 SDGs have been homogeneous over the last 20 years across low- and high-income countries and whether certain SDGs in African countries exhibit dependence over the same period. In both settings, we assume countries are independent.
For our homogeneity tests, we classify countries into low- and high-income according to [29]. We use temporal data of 76 Targets for which [30] provides data collected over the last 20 years for low- and high-income countries. Applying our baseline method without test power optimisation, we find that, out of the 76 Targets for which data are available, only 38 have had homogeneous trajectories in low- and high-income countries. For instance, whereas the ‘death rate due to road traffic injuries’ (Target 3.6) has been homogeneous between these two groups, the ‘fight the epidemics of AIDS, tuberculosis, malaria and others’ (Target 3.3) has not been homogeneous in low- and high-income countries.
For our independence tests, we consider temporal data from African countries over years and test any two Targets for pairwise independence. Of the total 2850 possible pairwise combinations, the null hypothesis of independence is rejected for 357. As an illustration, we examine the dependencies of ‘implementation of national social protection systems’ (Target 1.3) with ‘economic growth’ (Target 8.1) and the ‘proportion of informally employed workers’ (Target 8.3). Applying our baseline method, we accept the null hypothesis of independence between Target 1.3 and 8.1, i.e., we find that the ‘implementation of national social protection systems’ has been independent of economic growth. In contrast, we find that Target 1.3 has been dependent on the ‘proportion of informally employed workers’ (Target 8.3).
6. Discussion and Conclusions
Building on ideas from functional data analysis, we have presented approaches to testing for homogeneity and independence between two nonstationary random processes with the kernel-based statistics MMD and HSIC. We view independent realisations of the underlying processes as samples from multivariate probability distributions to which MMD and HSIC can be applied. Our tests are shown to outperform current state-of-the-art methods in a range of experiments. Furthermore, we optimise the test power over the choice of kernel and achieve improved results in most settings. However, we also observe that our optimisation procedure does not always yield an increase in test power. We leave the investigation of this behaviour open for future research with the possibility of defining search spaces and step sizes over kernel hyperparameters differently or of choosing a gradient-based approach for optimisation [27]. Our results show that small sample sizes of less than 40 independent realisations can already achieve high test power and that denser measurements over the same time period do not necessarily lead to enhanced test power.
The proposed tests can be of interest in many areas where nonstationary and nonlinear multivariate temporal datasets constitute the norm, as illustrated by our application to test for homogeneity and independence between the United Nations’ Sustainable Development Goals measured in different countries over the last 20 years.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The socioeconomic dataset is freely available at [30].
Appendix A
Appendix A.1. Results for Realisations with Varying Number of Time Points, T
MMD. We show here the results for mean and variance shifts for varying T; the results are similar for all tested sample sizes.
Figure A1.
Results of the MMD-based homogeneity test: Percentage of rejected H0 for mean shift (left) and variance shift (right) for varying sample sizes and T discrete time points.

HSIC. Experiments for linear dependence and dependence through the shared second basis function coefficient for various T. We find that the granularity of measurements over time does not significantly influence the test power.
Figure A2.
Results of the HSIC-based independence test: Test power for linear dependence (left) and dependence through shared coefficient (right) as sample size is varied for various numbers of time points T.

Appendix A.2. Test Power Maximisation
MMD. For the mean and variance shift experiments, we predefine linear search spaces with 11 values each for the Gaussian kernel bandwidth, depending on the shift level (both stated in Table A1). These search spaces resulted from extensive manual exploration across all shifts and sample sizes. We acknowledge that the test power may be further improved with search spaces of finer granularity.
HSIC. We define search intervals for both bandwidths σ_x and σ_y that are shared across all angles but differ between the Student-t, uniform, and exponential distributions. For the Student-t and exponential distributions, both bandwidths were chosen as 20 evenly spaced values on a linear scale between 1 and 20; for the uniform distribution, as 40 evenly spaced values on a linear scale between 1 and 40. These search spaces resulted from extensive manual exploration across all angles and distributions. We acknowledge that the test power may be further improved with search spaces of finer granularity.
Table A1.
Linear search spaces for bandwidth in mmd mean (left) and variance (right) shift experiments.
Mean shift experiments (left):

| Mean Shift Range | 0–2 | 2.25–3 | 3.25–5 | 5.5–8 |
|---|---|---|---|---|
| Search space for σ | 1, 3, 5, …, 21 | 6, 8, 10, …, 26 | 11, 13, 15, …, 31 | 16, 18, 20, …, 36 |

Variance shift experiments (right):

| Variance Shift Range | 0–4 | 5–14 | 15–32 |
|---|---|---|---|
| Search space for σ | 10, 12, 14, …, 30 | 20, 22, 24, …, 40 | 30, 32, 34, …, 50 |

Each search space contains 11 bandwidth values in steps of 2; the shift values themselves were varied in steps of 0.25, 0.5, and 1.
Appendix A.3. Distribution Specifications for Basis Function Coefficients in Rotation Mixing
Table A2.
Specifications of distributions for the rotation mixing. They are a subset of the distributions in [28] (Table 3), and each specification applies to both basis function coefficients.
| Distribution | Fourier Basis Function Coefficients | |
|---|---|---|
| Exponential | ||
| Student-t | ||
| Uniform | ||
Figure A3.
Illustration of the rotated samples with (i) Student-t, (ii) uniform, and (iii) exponential basis function coefficients being mixed by different rotation angles θ, ordered clockwise by increasing θ.

Appendix A.4. SDG Dataset
Data of the Indicators measuring the progress of the Targets of the SDGs can be found at [30]. Each of these Indicators measures the progress towards a specific Target. For instance, an Indicator for Target 1.1, ‘by 2030, eradicate extreme poverty for all people everywhere, currently measured as people living on less than $1.90 a day’, is the ‘proportion of population below the international poverty line, by gender, age, employment status and geographical location (urban/rural)’. Each of the Targets belongs to one specific Goal (e.g., Target 1.1 belongs to Goal 1, ‘end poverty in all its forms everywhere’). There are 17 such Goals, which are commonly referred to as the Sustainable Development Goals (SDGs). We compute averages over all Indicators belonging to one Target for our analyses in Section 5.
The dataset of [30] has many missing values, especially for the time span 2000–2005. We impute these values using a weighted average across countries (where data is available) with weights inversely proportional to the Euclidean distance between indicators.
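A minimal sketch of such an inverse-distance weighted imputation (the exact weighting and edge-case handling in our pipeline may differ):

```python
import numpy as np

def impute_inverse_distance(D, eps=1e-8):
    """Fill NaNs in a countries x indicators matrix with a weighted
    average over countries that have the value, with weights inversely
    proportional to the Euclidean distance between the countries'
    commonly observed indicators."""
    D = np.asarray(D, dtype=float)
    filled = D.copy()
    m = len(D)
    for i in range(m):
        for j in np.where(np.isnan(D[i]))[0]:
            vals, wts = [], []
            for k in range(m):
                if k == i or np.isnan(D[k, j]):
                    continue          # donor must have the value
                common = ~np.isnan(D[i]) & ~np.isnan(D[k])
                if not common.any():
                    continue          # no shared indicators to compare
                dist = np.linalg.norm(D[i, common] - D[k, common])
                wts.append(1.0 / (dist + eps))
                vals.append(D[k, j])
            if wts:
                filled[i, j] = np.average(vals, weights=wts)
    return filled
```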
References
- Christakis, N.A.; Fowler, J.H. The spread of obesity in a large social network over 32 years. N. Engl. J. Med. 2007, 357, 370–379.
- Barabási, A.L.; Gulbahce, N.; Loscalzo, J. Network medicine: A network-based approach to human disease. Nat. Rev. Genet. 2011, 12, 56–68.
- Bond, R. Complex networks: Network healing after loss. Nat. Hum. Behav. 2017, 1, 1–2.
- Battiston, S.; Mandel, A.; Monasterolo, I.; Schütze, F.; Visentin, G. A climate stress-test of the financial system. Nat. Clim. Chang. 2017, 7, 283–288.
- Muandet, K.; Fukumizu, K.; Sriperumbudur, B.; Schölkopf, B. Kernel mean embedding of distributions: A review and beyond. Found. Trends Mach. Learn. 2017, 10, 1–141.
- Gretton, A.; Borgwardt, K.; Rasch, M.; Schölkopf, B.; Smola, A.J. A kernel method for the two-sample-problem. arXiv 2008, arXiv:0805.2368.
- Gretton, A.; Fukumizu, K.; Teo, C.H.; Song, L.; Schölkopf, B.; Smola, A.J. A kernel statistical test of independence. NIPS 2008, 20, 585–592.
- Besserve, M.; Logothetis, N.K.; Schölkopf, B. Statistical analysis of coupled time series with Kernel Cross-Spectral Density operators. In Advances in Neural Information Processing Systems 26; Curran Associates, Inc.: Red Hook, NY, USA, 2013; pp. 2535–2543.
- Chwialkowski, K.; Sejdinovic, D.; Gretton, A. A wild bootstrap for degenerate kernel tests. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2014; pp. 3608–3616.
- Davis, R.A.; Matsui, M.; Mikosch, T.; Wan, P. Applications of distance correlation to time series. Bernoulli 2018, 24, 3087–3116.
- Székely, G.J.; Rizzo, M.L.; Bakirov, N.K. Measuring and testing dependence by correlation of distances. Ann. Stat. 2007, 35, 2769–2794.
- Horváth, L.; Kokoszka, P.; Reeder, R. Estimation of the mean of functional time series and a two-sample problem. J. R. Stat. Soc. Ser. B 2012, 75, 103–122.
- Fremdt, S.; Steinbach, J.G.; Horváth, L.; Kokoszka, P. Testing the Equality of Covariance Operators in Functional Samples. Scand. J. Stat. 2012, 40, 138–152.
- Panaretos, V.M.; Kraus, D.; Maddocks, J.H. Second-Order Comparison of Gaussian Random Functions and the Geometry of DNA Minicircles. J. Am. Stat. Assoc. 2010, 105, 670–682.
- Pomann, G.M.; Staicu, A.M.; Ghosh, S. A two-sample distribution-free test for functional data with application to a diffusion tensor imaging study of multiple sclerosis. J. R. Stat. Soc. Ser. C 2016, 65, 395–414.
- Wynne, G.; Duncan, A.B. A kernel two-sample test for functional data. arXiv 2020, arXiv:2008.11095.
- Górecki, T.; Krzyśko, M.; Wołyński, W. Independence test and canonical correlation analysis based on the alignment between kernel matrices for multivariate functional data. Artif. Intell. Rev. 2018, 53, 475–499. [Google Scholar] [CrossRef] [Green Version]
- Zhang, Q.; Filippi, S.; Gretton, A.; Sejdinovic, D. Large-scale kernel methods for independence testing. Stat. Comput. 2018, 28, 113–130. [Google Scholar] [CrossRef] [Green Version]
- Sriperumbudur, B.K.; Gretton, A.; Fukumizu, K.; Schölkopf, B.; Lanckriet, G.R. Hilbert space embeddings and metrics on probability measures. J. Mach. Learn. Res. 2010, 11, 1517–1561. [Google Scholar]
- Sriperumbudur, B.K.; Fukumizu, K.; Lanckriet, G.R. Universality, Characteristic Kernels and RKHS Embedding of Measures. J. Mach. Learn. Res. 2011, 12, 2389–2410. [Google Scholar]
- Gretton, A.; Borgwardt, K.M.; Rasch, M.J.; Schölkopf, B.; Smola, A.J. A kernel two-sample test. J. Mach. Learn. Res. 2012, 13, 723–773. [Google Scholar]
- Gretton, A.; Fukumizu, K.; Harchaoui, Z.; Sriperumbudur, B.K. A fast, consistent kernel two-sample test. NIPS 2009, 23, 673–681. [Google Scholar]
- Song, L.; Smola, A.J.; Gretton, A.; Bedo, J.; Borgwardt, K. Feature selection via dependence maximization. J. Mach. Learn. Res. 2012, 13, 1393–1434. [Google Scholar]
- Ramdas, A.; Reddi, S.J.; Póczos, B.; Singh, A.; Wasserman, L. On the decreasing power of kernel and distance based nonparametric hypothesis tests in high dimensions. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015. [Google Scholar]
- Reddi, S.; Ramdas, A.; Póczos, B.; Singh, A.; Wasserman, L. On the high dimensional power of a linear-time two sample test under mean-shift alternatives. Artif. Intell. Stat. 2015, 38, 772–780. [Google Scholar]
- Serfling, R.J. Approximation Theorems of Mathematical Statistics; Wiley Series in Probability and Mathematical Statistics; Wiley: New York, NY, USA, 2002. [Google Scholar]
- Sutherland, D.J.; Tung, H.Y.; Strathmann, H.; De, S.; Ramdas, A.; Smola, A.J.; Gretton, A. Generative models and model criticism via optimized maximum mean discrepancy. arXiv 2016, arXiv:1611.04488. [Google Scholar]
- Gretton, A.; Herbrich, R.; Smola, A.J.; Bousquet, O.; Schölkopf, B. Kernel methods for measuring independence. J. Mach. Learn. Res. 2005, 6, 2075–2129. [Google Scholar]
- World Bank. World Bank Country and Lending Groups. 2020. Available online: https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups (accessed on 28 January 2020).
- World Bank. Sustainable Development Goals. 2020. Available online: https://datacatalog.worldbank.org/dataset/sustainable-development-goals (accessed on 28 January 2020).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).