Open Access Article

*Entropy* **2019**, *21*(9), 878; https://doi.org/10.3390/e21090878

Two Tests for Dependence (of Unknown Form) between Time Series

^{1} Métodos Cuantitativos para la Economía y la Empresa, Universidad de Murcia, 30100 Murcia, Spain

^{2} Facultad de Económicas y Empresariales, Universidad Nacional de Educación a Distancia (UNED), 28040 Madrid, Spain

^{3} Departamento Métodos Cuantitativos, Ciencias Jurídicas y Lenguas Modernas, Universidad Politécnica de Cartagena, 30201 Cartagena, Spain

^{*} Author to whom correspondence should be addressed.

Received: 26 July 2019 / Accepted: 5 September 2019 / Published: 9 September 2019

## Abstract


This paper proposes two new nonparametric tests for independence between time series. Both tests are based on symbolic analysis, specifically on the symbolic correlation integral, in order to be robust to potentially unknown nonlinearities. The first test is developed for a scenario in which each considered time series is serially independent, so the interest is to ascertain whether two internally independent time series share a relationship of unknown form. This is especially relevant because the test is nuisance-parameter free, as proved in the paper. The second proposed statistic tests for independence among variables while allowing the time series to exhibit within-dependence. Monte Carlo experiments are conducted to show the empirical properties of the tests.

Keywords: dependence; permutation entropy; symbolic correlation integral

## 1. Introduction

In Science, whether Natural or Social, a primary focus of attention has been the evaluation of whether two (or more) variables measured over time are related. If there is a relation, then further research into its nature and strength could be worthwhile in order to make new discoveries and/or gain predictive accuracy. Accordingly, testing for dependence between two univariate time series has been widespread, as we will show below. However, the vast majority of the literature on the topic has relied on correlation as the main tool for developing more sophisticated statistical devices (tests). Any correlation-based test, however, is designed to detect linear relationships between time series, which limits its scope and utility for dealing with (unknown) potential forms of dependence beyond linear ones. The relatively few attempts to deal with nonlinear relationships have relied on nonparametric methods that generally require large data sets. In this age of massive data, this requirement is no longer a limitation, and, therefore, new statistical tests with fewer assumptions and a wider range of detectable relationships can be devised. The goal of this paper is to develop statistical tools for testing dependence among time series that are robust to the form of the potential dependence established among the variables.

The classical Pearson cross-correlation coefficient assesses a linear relationship between two time series. However, it is well known that it is not reliable when the time series under study are autocorrelated. To overcome this limitation, scholars have developed several alternative statistical tests. Most of the work done is parametric and is based on the residuals of estimated models. The starting point was Haugh [1], who proposed a test for non-correlation between two jointly Gaussian covariance-stationary time series, say $\left\{{X}_{t}\right\}$ and $\left\{{Y}_{t}\right\}$, by first prewhitening ${X}_{t}$ and ${Y}_{t}$ and then basing the test on the residual cross-correlation function. However, Haugh’s innovative test was rapidly criticized because of its low power against popular alternative hypotheses (see [2,3] for the earliest documented critiques). Since then, Haugh’s technique has been extended in several ways, and the common denominator has been to improve the power of the test (see [4,5,6,7,8,9,10,11] for details). Many of the results are summarized in [12]. Basically, all these procedures follow the main skeleton given by Haugh: first, obtain white noise residuals by performing univariate autoregressions; then, check the sample cross-correlation of the residuals.

There have also been several attempts to avoid parametric approaches: see [13,14,15,16]. The last of these proposed a nonparametric test of independence designed to avoid both the autoregressive moving average (ARMA) pre-specification and kernel selection methods, which were potential limitations of Haugh’s [1] and Hong’s [8] tests. In this vein, this paper proposes two new nonparametric tests for independence between time series. The first test is developed for a scenario similar to that of Pearson’s original test, namely, for situations in which each considered time series is independent (within-independent or serially independent). Therefore, the interest is to investigate whether two (or more) internally independent time series can be interrelated (either in a linear way, as in the case of correlation, or, more generally, in a nonlinear way). As mentioned earlier, there are many instances in several scientific domains where each time series is serially dependent (auto-dependent). The second proposed test checks for independence between variables while allowing the time series to exhibit within-dependence. We use the concept of the symbolic correlation integral, as introduced in [17], to construct tests robust to nonlinearities.

The symbolic correlation integral is directly linked to, and influenced by, the popular and well-known correlation integral, which can be understood as a measure of dependence. In this regard, dependence is not restricted to linear correlation (as in the case of Haugh-based tests), and therefore the tests will be robust to other complex forms of dependence that might occur between variables. Symbolic analysis is a field that has attracted the interest of many scholars and practitioners from several scientific disciplines (see [17]). By relying on these kinds of nonparametric concepts and techniques, we avoid imposing restrictive parametric assumptions, such as linearity and normality, and, therefore, more generally applicable tests can be constructed. There are other possibilities, such as the entropy-based statistics introduced in [18], that could avoid some of these restrictions.

The rest of the paper is structured as follows: in Section 2, we present the notation that will be used throughout the paper and the definition of symbolic correlation that serves as a common thread in the rest of the paper. Section 3 is devoted to presenting and defining the concept of joint symbolic correlation integral and its corresponding estimator. In Section 4, two tests for independence between series and their corresponding asymptotic treatment are described. In Section 5, empirical size and power are analyzed to better understand the finite sample behavior of the tests under several scenarios. We make a multi-level comparison with other tests available in the literature, and we provide some guidelines about how to fix the parameters of the new tests.

## 2. Definitions and Notation

Let ${\left\{{x}_{t}\right\}}_{t\in I}$ be a real-valued time series from a strictly stationary stochastic process of real random variables, where I is a set of time indexes. For a positive integer $m\ge 2$, we denote by ${S}_{m}$ the symmetric group of order $m!$, that is, the group formed by all the permutations of length m. Let $\pi =({i}_{1},{i}_{2},\cdots ,{i}_{m})\in {S}_{m}$. We will call an element $\pi $ in the symmetric group ${S}_{m}$ a symbol. We consider that the time series is embedded in an m-dimensional space as follows:

$${\overline{x}}_{t}=({x}_{t},{x}_{t+1},...,{x}_{t+(m-1)}).$$

When the set of indexes I is finite and has cardinality T, ${\left\{{\overline{x}}_{t}\right\}}_{t=1}^{n}$ is a vectorial time series of length $n=T-m+1$. Each ${\overline{x}}_{t}$ is called an $m$-history, and the positive integer m is usually known as the embedding dimension.

We say that ${\overline{x}}_{t}$ is of $\pi $-type if and only if $\pi =({i}_{1},{i}_{2},\cdots ,{i}_{m})$ is the unique symbol in the group ${S}_{m}$ satisfying the two following conditions:

$$\begin{array}{c}\left(a\right)\phantom{\rule{1.em}{0ex}}{x}_{t+{i}_{1}}\le {x}_{t+{i}_{2}}\le \cdots \le {x}_{t+{i}_{m}},\phantom{\rule{4.pt}{0ex}}\mathrm{and}\hfill \\ \left(b\right)\phantom{\rule{1.em}{0ex}}{i}_{s-1}<{i}_{s}\phantom{\rule{4.pt}{0ex}}\mathrm{if}\phantom{\rule{4.pt}{0ex}}{x}_{t+{i}_{s-1}}={x}_{t+{i}_{s}}.\hfill \end{array}$$

Condition $\left(b\right)$ guarantees the uniqueness of the symbol $\pi $. This is justified if the values of ${x}_{t}$ have a continuous distribution, so equal values are very uncommon, with a theoretical probability of occurrence of 0. In the case of a discrete distribution, condition $\left(b\right)$ guarantees that the symbolization map is well defined.
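As an illustration, the symbolization just described can be sketched in a few lines of Python (the function names `ordinal_pattern` and `symbolize` are ours, not from the paper; a stable sort on (value, position) pairs implements the tie-breaking rule of condition (b)):

```python
def ordinal_pattern(window):
    """Return the symbol (permutation of positions) of an m-history.

    Positions are listed in non-decreasing order of their values; ties
    are broken by temporal order, matching condition (b) above.
    """
    return tuple(sorted(range(len(window)), key=lambda i: (window[i], i)))

def symbolize(series, m):
    """Map a series to its n = T - m + 1 overlapping ordinal patterns."""
    return [ordinal_pattern(series[t:t + m]) for t in range(len(series) - m + 1)]

print(symbolize([4.0, 7.0, 2.0, 2.0, 9.0], 3))
# -> [(2, 0, 1), (1, 2, 0), (0, 1, 2)]
```

Note how the tied pair 2.0, 2.0 in the second window is resolved by temporal order, so every $m$-history receives exactly one symbol.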

Next, we define the symbolization map

$${s}_{x}:{\mathbb{R}}^{m}\longrightarrow {S}_{m}$$

given by ${s}_{x}\left({\overline{x}}_{t}\right)=\pi $ if and only if ${\overline{x}}_{t}$ is of $\pi $-type. Notice that the symbolization map ${s}_{x}$ transforms the vectorial time series of m-histories into a sequence of symbols. Moreover, each element of ${\mathbb{R}}^{m}$ is mapped to a symbol of ${S}_{m}$, providing a partition of ${\mathbb{R}}^{m}$ of size $m!$, called the symbolic partition of ${\mathbb{R}}^{m}$. Given two time instants, t and s, and two ordinal patterns $\pi ,\delta \in {S}_{m}$, we define ${p}_{ts}^{\pi \delta}$ as the probability that, at time t, ${\overline{x}}_{t}$ is of $\pi $-type and, at time s, ${\overline{x}}_{s}$ is of $\delta $-type. Thus, we can construct the following $m!\times m!$ probability matrix

$$P{M}_{x}\left(|t-s|\right)={\left({p}_{ts}^{\pi \delta}\right)}_{\pi \delta}.$$

Within this context, we define the indicator function

$$I({s}_{x}\left({\overline{x}}_{t}\right),{s}_{x}\left({\overline{x}}_{s}\right))=\left\{\begin{array}{cc}1,& \phantom{\rule{4.pt}{0ex}}\mathrm{if}\phantom{\rule{4.pt}{0ex}}{s}_{x}\left({\overline{x}}_{t}\right)={s}_{x}\left({\overline{x}}_{s}\right),\\ 0,& \mathrm{otherwise},\end{array}\right.$$

which takes the value 1 exactly when the ordinal patterns of the m-histories ${\overline{x}}_{t}$ and ${\overline{x}}_{s}$ coincide. The indicator variable $I({s}_{x}\left({\overline{x}}_{t}\right),{s}_{x}\left({\overline{x}}_{s}\right))$ is a Bernoulli random variable with probability of success

$${\mu}_{ts}^{x}=\sum _{\pi \in {S}_{m}}{p}_{ts}^{\pi \pi}.$$

Moreover, if $\left\{{x}_{t}\right\}$ is i.i.d. and $|t-s|\ge m$, then ${\mu}_{ts}^{x}={\displaystyle \frac{1}{m!}}$ (as shown in [17]).

Based on these concepts, Caballero et al. [17] defined the symbolic correlation integral of a time series ${\left\{{\overline{x}}_{t}\right\}}_{t\in I}$ for an embedding dimension $m\ge 2$ as

$$S{C}^{m}=\int \int I({s}_{x}\left(\overline{x}\right),{s}_{x}\left(\overline{y}\right))d\mu \left(\overline{x}\right)d\mu \left(\overline{y}\right),$$

and they proved that, under the null that ${\left\{{x}_{t}\right\}}_{t\in I}$ is i.i.d., the statistic

$${\widehat{SC}}^{m}=\frac{2}{n(n-1)}\sum _{t>s}I({s}_{x}\left({\overline{x}}_{t}\right),{s}_{x}\left({\overline{x}}_{s}\right))$$

is asymptotically distributed as $N(\frac{1}{m!},{\sigma}_{m})$, where the standard deviation ${\sigma}_{m}$ does not depend on the sample time series.

An interesting relation between ${\widehat{SC}}^{m}$ and a well-known index can be obtained. If $N\left(\pi \right)$ denotes the absolute frequency of $\pi \in {S}_{m}$ in the symbolic time series (of length n), then the symbolic correlation integral can be rewritten as

$${\widehat{SC}}^{m}={\displaystyle \frac{1}{n(n-1)}}\left(\sum _{\pi}N{\left(\pi \right)}^{2}-n\right),$$

which is closely related to the normalized Gini index or Index of Qualitative Variation (see [19] for details), given by

$$\widehat{\mathrm{IQV}}\phantom{\rule{4pt}{0ex}}=\phantom{\rule{4pt}{0ex}}\frac{m!}{m!-1}\phantom{\rule{0.166667em}{0ex}}\left(1-\sum _{\pi}{\displaystyle \frac{N{\left(\pi \right)}^{2}}{{n}^{2}}}\right).$$
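A minimal sketch of the frequency-based estimator and the IQV above (the function names are ours; symbols can be any hashable objects, e.g. the permutation tuples of Section 2):

```python
from collections import Counter
from math import factorial

def sc_hat(symbols):
    """Estimate SC^m from pattern frequencies:
    (sum_pi N(pi)^2 - n) / (n (n - 1))."""
    n = len(symbols)
    return (sum(c * c for c in Counter(symbols).values()) - n) / (n * (n - 1))

def iqv_hat(symbols, m):
    """Normalized Gini index (Index of Qualitative Variation)."""
    n = len(symbols)
    f = factorial(m)
    return f / (f - 1) * (1 - sum((c / n) ** 2 for c in Counter(symbols).values()))
```

For an i.i.d. series and large n, `sc_hat` should be close to $1/m!$, in line with the asymptotic result quoted above.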

## 3. Joint Symbolic Correlation Integral

There is a natural extension of the symbolic correlation integral to a multivariate scenario. Similar to the symbolic correlation integral, the joint symbolic correlation integral measures the probability that each of the univariate time series forming the multivariate time series repeats its own ordinal pattern at two different time periods (where the patterns could potentially vary along the time series). Since we are interested in analyzing a k-dimensional time series, first we will define the Joint Symbolic Correlation Integral ($JSC$) of a set $X\subset {\mathbb{R}}^{{m}_{1}}\times \cdots \times {\mathbb{R}}^{{m}_{k}}$, and then we will focus on multivariate time series.

**Definition 1** (Joint Symbolic Correlation Integral)**.**

Let ${X}_{j}\subset {\mathbb{R}}^{{m}_{j}}$ with $j=1,2,...,k$ be distributed according to invariant measures ${\nu}_{1},{\nu}_{2},\cdots ,{\nu}_{k}$, respectively, and let $\nu =({\nu}_{1},\cdots ,{\nu}_{k})$ be an invariant measure on $X={X}_{1}\times {X}_{2}\times \cdots \times {X}_{k}$. The joint symbolic correlation integral of the set $X={X}_{1}\times {X}_{2}\times \cdots \times {X}_{k}$ is defined as

$$JS{C}^{\mathbf{m}}\left(X\right)=\int \underset{2k}{\underbrace{\cdots}}\int \prod _{j=1}^{k}I({s}_{{x}_{j}}\left({\overline{x}}_{j}\right),{s}_{{x}_{j}}\left({\overline{y}}_{j}\right))d\nu ({\overline{x}}_{1},\cdots ,{\overline{x}}_{k})d\nu ({\overline{y}}_{1},\cdots ,{\overline{y}}_{k}).$$

Notice that the symbolization map for the multivariate time series is component-wise, and we allow for each component of the multivariate time series to have a different embedding dimension.

In the case in which the invariant measures are independent, that is, $\nu =({\nu}_{1},\cdots ,{\nu}_{k})={\displaystyle \prod _{j=1}^{k}}{\nu}_{j}$, it follows that

$$\begin{array}{ccc}\hfill JS{C}^{\mathbf{m}}\left(X\right)& =& \int \underset{2k}{\underbrace{\cdots}}\int \prod _{j=1}^{k}I({s}_{{x}_{j}}\left({\overline{x}}_{j}\right),{s}_{{x}_{j}}\left({\overline{y}}_{j}\right))d\nu ({\overline{x}}_{1},\cdots ,{\overline{x}}_{k})d\nu ({\overline{y}}_{1},\cdots ,{\overline{y}}_{k})=\hfill \\ \hfill & =& \prod _{j=1}^{k}\int \int I({s}_{{x}_{j}}\left({\overline{x}}_{j}\right),{s}_{{x}_{j}}\left({\overline{y}}_{j}\right))d{\nu}_{j}=\prod _{j=1}^{k}S{C}^{{m}_{j}}\left({X}_{j}\right),\hfill \end{array}$$

where $\mathbf{m}=({m}_{1},{m}_{2},\cdots ,{m}_{k})$.

Notice that, only under the null of independence among the time series, $JSC={\displaystyle \prod _{j=1}^{k}}S{C}^{{m}_{j}}={\displaystyle \prod _{j=1}^{k}}(1-{\displaystyle \frac{{m}_{j}!-1}{{m}_{j}!}}IQ{V}_{j})$.

Based on this definition, we also have a natural extension of $JSC$ for multivariate time series. To this end, given a multivariate time series ${\{{w}_{t}=({x}_{1t},{x}_{2t},\cdots ,{x}_{kt})\}}_{t\in I}$, each of the time series $\left\{{x}_{jt}\right\}$ is embedded in ${\mathbb{R}}^{{m}_{j}}$, as in the previous section, to construct the embedded multivariate time series ${\overline{w}}_{t}=({\overline{x}}_{1t},{\overline{x}}_{2t},\cdots ,{\overline{x}}_{kt})$. Then, the symbol space for ${\left\{{\overline{w}}_{t}\right\}}_{t\in I}$ is defined as ${\Gamma}_{k}={\prod}_{j=1}^{k}{S}_{{m}_{j}}$, the Cartesian product of symmetric groups ${S}_{{m}_{j}}$ with $j=1,2,...,k$, and the symbolization map is defined as ${s}_{w}=({s}_{{x}_{1}},{s}_{{x}_{2}},\cdots ,{s}_{{x}_{k}})$. Thus, the symbolization of the multivariate time series is component-wise.

Next, we are interested in computing the estimator, ${\widehat{JSC}}^{\mathbf{m}}$, of the joint symbolic correlation integral for ${\{{w}_{t}=({x}_{1t},{x}_{2t},\cdots ,{x}_{kt})\}}_{t=1}^{T}$. To this end, we define the indicator function

$$J{S}_{ts}=\prod _{j=1}^{k}I({s}_{{x}_{j}}\left({\overline{x}}_{jt}\right),{s}_{{x}_{j}}\left({\overline{x}}_{js}\right))$$

and then

$${\widehat{JSC}}^{\mathbf{m}}=\frac{2}{n(n-1)}\sum _{t=1}^{n-1}\sum _{s=t+1}^{n}J{S}_{ts},$$

where $n=min\{{n}_{1},{n}_{2},\cdots ,{n}_{k}\}$ with ${n}_{j}=T-{m}_{j}+1$ for $j=1,2,\cdots ,k$. In addition, the indicator function defined by Equation (6) is a Bernoulli random variable with probability of success ${\tilde{\mu}}_{ts}$. When the time series that form the vectorial time series $\left\{{w}_{t}\right\}$ are independent between themselves, we have that

$${\tilde{\mu}}_{ts}=\prod _{j=1}^{k}{\mu}_{ts}^{{x}_{j}}.$$

We should highlight that, when every ${\left\{{x}_{jt}\right\}}_{t=1}^{T}$ is a stationary time series, from [20], the estimator of joint symbolic correlation integral is asymptotically unbiased:

$$\underset{n\to \infty}{lim}E\left[{\widehat{JSC}}^{\mathbf{m}}\right]=JS{C}^{\mathbf{m}}.$$
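Assuming each component series has already been symbolized as in Section 2, the estimator ${\widehat{JSC}}^{\mathbf{m}}$ admits a direct $O(n^2)$ sketch (the function name is our own):

```python
def jsc_hat(symbol_seqs):
    """Estimate the joint symbolic correlation integral from k symbol
    sequences (one per component series).

    JS_ts equals 1 only when every component repeats its own ordinal
    pattern at times t and s; the average runs over all pairs t < s up
    to the common length n = min(n_1, ..., n_k)."""
    n = min(len(seq) for seq in symbol_seqs)
    hits = sum(all(seq[t] == seq[s] for seq in symbol_seqs)
               for t in range(n - 1) for s in range(t + 1, n))
    return 2 * hits / (n * (n - 1))
```

With a single component (k = 1), this reduces to the estimator ${\widehat{SC}}^{m}$ of Section 2.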

## 4. Testing Independence between Time Series with $\mathbf{JSC}$

In this section, we construct a test for independence between time series based on $JSC$. We will distinguish between two null hypotheses. In the first, we will test independence between time series when they are i.i.d. In this case, we will provide the asymptotic distribution of the test statistic and will show the conditions under which this test is not affected by the intermediate step of estimating the parameters of a given model. Such tests are called nuisance-parameter-free tests. In the second, we will relax the null hypothesis to allow for the time series under consideration not to be i.i.d. In this second case, we will not give the asymptotic distribution but use bootstrapping to test for significance. Proofs can be found in Appendix A.

#### 4.1. ${H}_{0}$: Independent between Themselves While Being I.I.D.

We are going to consider a vectorial time series, ${\{{w}_{t}=({x}_{1t},{x}_{2t},\cdots ,{x}_{kt})\}}_{t\in I}$, where each $\left\{{x}_{jt}\right\}$ is i.i.d. and they are independent between themselves.

Since the time series $\left\{{x}_{jt}\right\}$ are i.i.d., from [17], we have that $S{C}^{{m}_{j}}=\frac{1}{{m}_{j}!}$ for $j=1,2,...,k$. Moreover, since the time series are independent between themselves, from Equations (8) and (9), we have that

$$\begin{array}{ccc}\underset{n\to \infty}{lim}E\left[{\widehat{JSC}}^{\mathbf{m}}\right]\hfill & =& JS{C}^{\mathbf{m}}={\displaystyle \prod _{j=1}^{k}}S{C}^{{m}_{j}}={\displaystyle \prod _{j=1}^{k}}{\displaystyle \frac{1}{{m}_{j}!}}.\hfill \end{array}$$

As the time series $\left\{{x}_{jt}\right\}$ are i.i.d. for all $j=1,2,...,k$, using Equation (8), it follows that

$$E\left[{\widehat{JSC}}^{\mathbf{m}}\right]={\displaystyle \frac{2}{n(n-1)}}\left(\sum _{t=1}^{n-m+1}\sum _{s=t+1}^{t+m-1}{\tilde{\mu}}_{ts}+\sum _{t=n-m+2}^{n-1}\sum _{s=t+1}^{n}{\tilde{\mu}}_{ts}+\sum _{t=1}^{n-m}\sum _{s=t+m}^{n}\prod _{j=1}^{k}{\displaystyle \frac{1}{{m}_{j}!}}\right),$$

where $m=max\{{m}_{1},{m}_{2},...,{m}_{k}\}$, and then

$$\underset{n\to \infty}{lim}E\left[{\widehat{JSC}}^{\mathbf{m}}\right]=\prod _{j=1}^{k}{\displaystyle \frac{1}{{m}_{j}!}}.$$

Under these conditions, the following result provides the asymptotic distribution of ${\widehat{JSC}}^{\mathbf{m}}$.

**Theorem 1.**

Let ${\left\{{x}_{jt}\right\}}_{t\in I}$ with $j=1,2,...,k$ be k i.i.d. time series that are independent between themselves. Given embedding dimensions ${m}_{1},{m}_{2},...,{m}_{k}\ge 2$ and $m=max\{{m}_{1},{m}_{2},...,{m}_{k}\}$, it follows that

$$\sqrt{{\displaystyle \frac{n(n-1)}{2}}}\left({\widehat{JSC}}^{\mathbf{m}}-JS{C}^{\mathbf{m}}\right)$$

is asymptotically $\mathcal{N}(0,{\sigma}_{m})$ distributed, where

$$\begin{array}{ccc}JS{C}^{\mathbf{m}}\hfill & =& \prod _{j=1}^{k}\frac{1}{{m}_{j}!},\hfill \\ {\sigma}_{m}^{2}\hfill & =& \sum _{{h}_{1}=-m}^{m}\sum _{{h}_{2}=-m}^{m}\phantom{\rule{0.166667em}{0ex}}E\left[(J{S}_{ts}-{\tilde{\mu}}_{ts})(J{S}_{t+{h}_{1},s+{h}_{2}}-{\tilde{\mu}}_{t+{h}_{1},s+{h}_{2}})\right].\hfill \end{array}$$
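Since ${\sigma}_{m}$ does not depend on the observed sample under the null, one practical way to approximate it is a one-off Monte Carlo under the null. The sketch below is our own (function names, standard Gaussian replications, and the default number of replications are all assumptions, not prescriptions of the paper):

```python
import math
import random

def symbolize(series, m):
    # ordinal-pattern symbolization with temporal tie-breaking
    return [tuple(sorted(range(m), key=lambda i: (series[t + i], i)))
            for t in range(len(series) - m + 1)]

def jsc_hat(seqs):
    n = min(len(s) for s in seqs)
    hits = sum(all(s[t] == s[u] for s in seqs)
               for t in range(n - 1) for u in range(t + 1, n))
    return 2 * hits / (n * (n - 1))

def sigma_m_montecarlo(ms, T, reps=200, seed=1):
    """Approximate sigma_m by the sample standard deviation of the
    scaled statistic sqrt(n(n-1)/2) * (JSC_hat - prod_j 1/m_j!) over
    i.i.d. Gaussian replications generated under the null."""
    rng = random.Random(seed)
    jsc0 = 1.0
    for m in ms:
        jsc0 /= math.factorial(m)
    n = T - max(ms) + 1
    scale = math.sqrt(n * (n - 1) / 2)
    draws = []
    for _ in range(reps):
        seqs = [symbolize([rng.gauss(0.0, 1.0) for _ in range(T)], m) for m in ms]
        draws.append(scale * (jsc_hat(seqs) - jsc0))
    mean = sum(draws) / reps
    return math.sqrt(sum((d - mean) ** 2 for d in draws) / (reps - 1))
```

The resulting value can then be reused for any data set with the same embedding dimensions, precisely because the null distribution is nuisance free in this respect.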

In many cases, the researcher wants to apply statistics, like the statistic of Equation (11), to the estimated errors (residuals) of a model fitted to the raw data. The asymptotic distribution of Equation (11) is derived under the assumption that the true errors are used. Even when the true errors are i.i.d., the estimated errors might exhibit some form of dependence, and therefore the asymptotic distribution might be affected by the estimation process. A natural question is thus whether the statistic of Equation (11) can be safely applied after the estimation of some intermediate parameters.

**Definition 2.**

Let $S\left(\widehat{\theta}\right)$ be a statistic that depends upon some consistently estimated parameter, $\widehat{\theta}$. Assume that, at the true value $\theta $,

$$\sqrt{n}\left(\frac{{S}_{n}\left(\theta \right)-\mu \left(\theta \right)}{\sigma \left(\theta \right)}\right){\to}^{d}\mathcal{N}(0,1),$$

then $S\left(\widehat{\theta}\right)$ is a nuisance-parameter free statistic if

$$\sqrt{n}\left({S}_{n}\left(\theta \right)-{S}_{n}\left(\widehat{\theta}\right)\right){\to}^{P}0,$$

where $\mu \left(\theta \right)=\underset{n\to \infty}{lim}E\left[{S}_{n}\left(\theta \right)\right]$ and ${\sigma}^{2}\left(\theta \right)=\underset{n\to \infty}{lim}E\left[{({S}_{n}\left(\theta \right)-\mu \left(\theta \right))}^{2}\right]$.

**Theorem 2** (Nuisance-parameter-free property)**.**

Assume that the data-generating process for each time series ${\left\{{x}_{jt}\right\}}_{t\in I}$ is given by

$${x}_{jt}={G}_{j}({I}_{t-1}^{j},{\theta}_{j})+{u}_{jt},$$

where ${I}_{t-1}^{j}$ is a finite set of regressors, ${\theta}_{j}$ is a vector of parameters, $\left\{{u}_{jt}\right\}$ are i.i.d., and ${G}_{j}$ is a continuous function defined on a compact set for $j=1,2,...,k$. Then, given embedding dimensions ${m}_{1},{m}_{2},...,{m}_{k}\ge 2$ and $m=max\{{m}_{1},{m}_{2},...,{m}_{k}\}$, the statistic ${\widehat{JSC}}^{\mathbf{m}}$ applied to the residuals ${\left\{({u}_{1t},{u}_{2t},\cdots ,{u}_{kt})\right\}}_{t}$ is nuisance-parameter free.

#### 4.2. ${H}_{0}$: Time Series Independent between Themselves

When the time series under consideration, ${\left\{{x}_{jt}\right\}}_{t\in I}$, are not i.i.d., we consider the following statistic:

$$\widehat{\delta}\left(\mathbf{m}\right)={\widehat{JSC}}^{\mathbf{m}}-\prod _{j=1}^{k}{\widehat{SC}}^{{m}_{j}}.$$

Notice that, when the time series are independent between themselves, the population counterpart of $\widehat{\delta}\left(\mathbf{m}\right)$ equals zero, and it is nonzero otherwise. Thus, to test for significance, we use a bootstrap test based on the block bootstrap proposed by Politis and White [21] and corrected in Patton et al. [22]. In order to remain under the null of independence among the time series, we resample each time series independently rather than jointly.

Let us illustrate the procedure. Given the original time series ${\left\{{x}_{jt}\right\}}_{t=1}^{T}$, we compute $\widehat{\delta}\left(\mathbf{m}\right)$ and evaluate the same quantity, ${\widehat{\delta}}^{\left(b\right)}\left(\mathbf{m}\right)$, for each of the $B-1$ bootstrap realizations of the k time series. Next, we compute the upper and lower p-values as

$$\begin{array}{ccc}\hfill \mathrm{upper}-p& =& \frac{1}{B}\sum _{b=1}^{B-1}\mathbf{1}({\widehat{\delta}}^{\left(b\right)}\left(\mathbf{m}\right)>\widehat{\delta}\left(\mathbf{m}\right)),\hfill \end{array}$$

$$\begin{array}{ccc}\hfill \mathrm{lower}-p& =& \frac{1}{B}\sum _{b=1}^{B-1}\mathbf{1}({\widehat{\delta}}^{\left(b\right)}\left(\mathbf{m}\right)<\widehat{\delta}\left(\mathbf{m}\right)),\hfill \end{array}$$

where $\mathbf{1}$ is the indicator (Heaviside) function. Based on these values, the null hypothesis of independence among time series is rejected at a significance level $\alpha $ if $\mathrm{upper}-p<\frac{\alpha}{2}$ or $\mathrm{lower}-p<\frac{\alpha}{2}$.

## 5. Simulations

#### 5.1. Empirical Size and Power

We present some evidence of the performance of the proposed tests when they are applied to systems that might (or might not) exhibit within-dependence (i.e., autodependent processes, whose simplest form is autocorrelation) and/or dependence between processes (between-dependence or cross-dependence). We refer to the statistics in Equations (11) and (13) as Test1 and Test2, respectively. For simplicity, the statistics have been computed assuming that the correct time lag has been obtained. To this end, we have considered the following systems of relations:

$$\begin{array}{cc}S1:{Y}_{t}={\epsilon}_{t}\hfill & {X}_{t}={\nu}_{t}\hfill \\ S2:{Y}_{t}=0.5{Y}_{t-1}+{\epsilon}_{t}\hfill & {X}_{t}=0.5{X}_{t-1}+{\nu}_{t}\hfill \\ S3:{Y}_{t}=0.5{\epsilon}_{t-1}^{2}+{\epsilon}_{t}\hfill & {X}_{t}=0.5{\nu}_{t-1}^{2}+{\nu}_{t}\hfill \\ S4:{Y}_{t}=0.8{Y}_{t-1}+{X}_{t}+{\epsilon}_{t}\hfill & {X}_{t}=0.8{X}_{t-1}+{\nu}_{t}\hfill \\ S5:{Y}_{t}=0.3{Y}_{t-1}+0.5{Y}_{t-2}{X}_{t-1}+{\epsilon}_{t}\hfill & {X}_{t}=0.7{X}_{t-1}+{\nu}_{t}\hfill \\ S6:{Y}_{t}={\epsilon}_{t}\hfill & {X}_{t}=0.8{Y}_{t-1}^{2}+{\nu}_{t}\hfill \\ S7:{Y}_{t}=0.6{\nu}_{t-1}^{2}+{\epsilon}_{t}\hfill & {X}_{t}=0.6{X}_{t-1}+{\nu}_{t}\hfill \\ S8:{Y}_{t}={X}_{t-1}\hfill & {X}_{t}=4{X}_{t-1}(1-{X}_{t-1}).\hfill \end{array}$$

We have selected these systems (with these parameters, sample sizes and relationships) because they have been studied in previous works, so comparability is easy. In particular, system S1 specifies two within-independent processes that are independent between themselves, while S2 specifies two between-independent Gaussian autoregressive AR(1) processes. System S3 represents two between-independent processes that exhibit a nonlinear form of within-dependence. On the other hand, systems S4, S5, S6, and S7 are between-dependent processes. S4 is a linear within-dependent model that entails ideal conditions (normality and linearity) for the application of parametric statistics. System S5 has a stable bivariate nonlinear autoregressive within-dependence. System S6 shows within-dependence in one of the variables, but not in the other. System S7 shows, first, a nontrivial nonlinear relation between the processes and, second, two different forms of within-dependence. Finally, system S8 is very interesting as it is a nonlinear deterministic process formed from the chaotic logistic equation.
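For concreteness, the following is a minimal sketch of how inputs for such a Monte Carlo could be generated, here for systems S2 and S8 (function names, seeds, and unit-variance Gaussian innovations are our assumptions; the paper does not publish code):

```python
import random

def simulate_s2(T, seed=0):
    """S2: two between-independent Gaussian AR(1) processes."""
    rng = random.Random(seed)
    y = x = 0.0
    ys, xs = [], []
    for _ in range(T):
        y = 0.5 * y + rng.gauss(0.0, 1.0)
        x = 0.5 * x + rng.gauss(0.0, 1.0)
        ys.append(y)
        xs.append(x)
    return ys, xs

def simulate_s8(T, x0=0.3):
    """S8: deterministic pair Y_t = X_{t-1}, X_t = 4 X_{t-1}(1 - X_{t-1})."""
    x = x0
    ys, xs = [], []
    for _ in range(T):
        ys.append(x)            # Y_t stores the previous value of X
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return ys, xs
```

Note that S8 contains no random term at all: the pair (Y, X) is perfectly (deterministically) dependent, which is exactly the case a linear correlation test fails to detect.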

Each system was simulated 2000 times, and the following tables collect the proportion of rejections of the null hypothesis at the 5% nominal level. The tests were performed for $m=3$. The selection of the parameter m is open to the practitioner, and we will return to this point later in this section.

Table 1 shows the power and size results of Test1 for the eight models $S1$–$S8$ for the null that ${X}_{t}$ and ${Y}_{t}$ are i.i.d. and independent between themselves.

We observe first that, when the stochastic system is generated under the null (i.e., two independent processes that are both auto-independent), the test rejects at approximately the fixed 5% level. In this regard, the new Test1 behaves as expected under the null, regardless of the sample size. The results for systems S2 and S3 suggest that Test1 easily detects a departure from one of the conditions of the null (namely, within-dependence), even when the processes are between-independent. The results for S4, S5 and S7 indicate that, when the departure from the null is larger (in this case, both null conditions are violated), Test1 behaves powerfully, even in the case of nonlinear dependencies, as in systems S5 and S7. As regards system S6, notice that, despite the fact that process Y is i.i.d., process X depends on Y. Even when there is between-dependence of this kind, the test exhibits power to detect the departure. Finally, the results for system S8 are especially interesting because the processes involved are purely deterministic (there is no random term) and the dependence between them is evident. Despite this peculiar dynamic structure, it is noticeable that the Pearson cross-correlation test has a rejection rate of only 5%; in other words, Pearson’s test rejects at the nominal level of the statistic, which implies that it systematically suggests not rejecting the null of independence between processes even when there is an obvious deterministic dependence. In contrast, Test1 rejects the null with great power. The explanation for this performance is that Pearson’s test is limited to detecting linear relationships between variables, while Test1 considers any form of potential relationship.

On the other hand, it can also be concluded from the above comments on the simulations that Test1 is not capable of distinguishing between the forms of dependence (between and/or within). In other words, if the final user obtains a rejection of the null, she does not know the reason for the rejection. Given the simplicity of the test and its power against complex dependence forms, it is advisable to use Test1 as a first step. However, to complete the process, it is necessary either to use Test2 or to apply some sort of pre-whitening process. We now explore the first solution (Test2) and, later in the paper, we consider the behavior of the test in the case of pre-whitening in a multivariate scenario.

Table 2 shows the power and size results of Test2 for the eight models $S1$–$S8$ for the null that ${X}_{t}$ and ${Y}_{t}$ are independent between themselves only (with no restriction on within-dependence).

The empirical behavior of other available tests on the same systems and sample sizes is reported in the Appendix. Based on the results for these models, we make the following remarks:

(i) The output for systems S1, S2 and S3 hints that Test2 can correctly deal with models that exhibit several forms of within-dependence, and this internal dependence does not contaminate the ability of Test2 to indicate, at a nominal level of 0.05, that both processes are independent. In this regard, it is noteworthy that a Haugh-type test could not have been used with system S3, because these tests report reliable results only for systems of linear and Gaussian dependence, as is the case of S2.

(ii) When within-dependence is linear, as in S4, the power of the test is impressive regardless of the sample size. This empirical fact implies that Test2 can be used to detect simple cases of linear relationships between variables. As reported in [14], Haugh-based tests also have extremely good power for this type of linear dependence. Accordingly, the final user could safely use either the nonparametric Test2 or parametric Haugh-based tests.

(iii) However, when there is within-dependence of a nonlinear nature, as in S5, S6, S7 and S8, it is well known that Haugh-based tests are unable to detect dependence, regardless of the sample size. As can be observed from this simulation, Test2 detects the dependence when the sample is large enough.

From (i)–(iii), it can be concluded that Test2 can be used to effectively detect dependence between variables with fewer restrictions than other available tests in the literature, and, from a practical point of view, the larger the sample size, the more reliable the results are.

As mentioned earlier, an interesting advantage of both tests is that they can be used in a multivariate setting. We now consider two new sets of multivariate systems. The first set is formed by systems S9 and S10. Each is a three-variable stochastic linear system, used here to show that Test1 can be satisfactorily applied to pre-whitened data, as proved in the previous section:

$$\begin{array}{cccc}S9:\hfill & {Y}_{t}=0.6{Y}_{t-1}+{\epsilon}_{yt}\hfill & {X}_{t}=0.5{X}_{t-1}+{\epsilon}_{xt}\hfill & {Z}_{t}=0.7{Z}_{t-1}+{\epsilon}_{zt},\hfill \\ S10:\hfill & {Y}_{t}=0.6{X}_{t-1}+{\epsilon}_{yt}\hfill & {X}_{t}=0.5{X}_{t-1}+{\epsilon}_{xt}\hfill & {Z}_{t}=0.7{Y}_{t-1}+{\epsilon}_{zt}.\hfill \end{array}$$

These systems were generated and estimated by Ordinary Least Squares (OLS) to obtain the linear structure of each variable and residuals were then tested with Test1. The results are in Table 3.

After removing the linear structure, independent estimated errors are obtained. Since the errors are simulated independently, Test1 is expected not to reject the null at the nominal level, as shown in the table above.
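The pre-whitening step can be sketched as follows. This is our own illustration (the function names, seed and sample size are arbitrary choices, not the authors' code): it simulates the $Y_t$ equation of S9 and recovers the residuals by OLS regression of $Y_t$ on $Y_{t-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, n, rng):
    """Simulate x_t = phi * x_{t-1} + eps_t with standard normal innovations."""
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def ar1_ols_residuals(x):
    """Regress x_t on x_{t-1} by OLS and return (residuals, phi_hat)."""
    y, lag = x[1:], x[:-1]
    phi_hat = (lag @ y) / (lag @ lag)
    return y - phi_hat * lag, phi_hat

y = simulate_ar1(0.6, 2000, rng)          # the Y_t equation of S9
residuals, phi_hat = ar1_ols_residuals(y)
# phi_hat is close to the true coefficient 0.6, and the residuals
# approximate the independent errors to which Test1 is applied
```

The same regression is run for each equation of the system before the residuals are passed to Test1.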

The second set of systems is designed to study the behavior of Test2 in a multivariate system with complex relationships. To this end, we have considered three systems:

S11 | ${Y}_{t}=0.5{Y}_{t-1}+{Z}_{t}$ | ${X}_{t}=0.5{X}_{t-1}+{Z}_{t}$ | ${Z}_{t}={\epsilon}_{t}$ |

S12 | ${R}_{{Y}_{t}}={Y}_{t-1}(2.9(1-{Y}_{t-1}))exp(-0.36{Z}_{t})$ | ${R}_{{X}_{t}}={X}_{t-1}(3.1(1-{X}_{t-1}))exp(-0.3{Z}_{t})$ | ${Z}_{t}={\epsilon}_{t}$ |

${Y}_{t}=0.35{Y}_{t-1}+max({R}_{{Y}_{t-4}},0)$ | ${X}_{t}=0.4{X}_{t-1}+max({R}_{{X}_{t-4}},0)$ | ||

S13 | ${Y}_{t}=0.5{Y}_{t-1}+{\epsilon}_{{y}_{t}}$ | ${X}_{t}=0.5{X}_{t-1}+0.5{Y}_{t-1}+{\epsilon}_{{x}_{t}}$ | ${Z}_{t}={\epsilon}_{{z}_{t}}.$ |

S11 considers the case where two variables, X and Y, have no between-dependence (cross-dependence) but are both driven by a common variable Z. One can think of Z as an environmental variable that determines (explains) X and Y, so that both are related through this external variable. Here, we expect Test2 to reject the null. S12 is a nonlinear and more complex model than S11 but is similar in essence: it has two non-interacting variables, X and Y, that share common environmental forcing. Finally, S13 considers the case of three variables where one of them has no dynamic structure and the other two have only one-sided dependence. We expect Test2 to be able to detect this dependence. The results are given in Table 4.
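As an illustration of the one-sided link in S13, the sketch below (our own code, with an arbitrary seed and sample size, not taken from the paper) simulates the system and checks the sample cross-correlations: the only linear trace of dependence runs from $Y_{t-1}$ to $X_t$, while $Z$ is unrelated to the other variables.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_s13(n, rng):
    """Simulate system S13: Y is AR(1), X depends on its own lag and Y_{t-1},
    and Z is pure noise with no dynamic structure."""
    ey, ex, ez = rng.standard_normal((3, n))
    y = np.zeros(n)
    x = np.zeros(n)
    for t in range(1, n):
        y[t] = 0.5 * y[t - 1] + ey[t]
        x[t] = 0.5 * x[t - 1] + 0.5 * y[t - 1] + ex[t]
    return y, x, ez

y, x, z = simulate_s13(20000, rng)
r_yx = np.corrcoef(y[:-1], x[1:])[0, 1]   # clearly nonzero: the Y -> X link
r_z = np.corrcoef(z, x)[0, 1]             # close to zero: Z is unrelated
```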

The results suggest that Test2 is able to clearly detect departures from the null in a multivariate context. Even in the case of hidden common variables, the test unveils the indirect relationship, whether it is linear or nonlinear and regardless of the sample size. It is also concluded that Test2 needs more observations (larger sample sizes) to detect dependence in scenarios like S13, where, of six potential relationships between variables, only one exists (from Y to X).

#### 5.2. Comparison with Other Tests

As we indicated in the introductory section, the technical literature on this topic has produced several statistics that test for independence between time series. This subsection compares the most relevant of these tests.

A comparison among tests can be conducted at several levels. We compare at three: the assumptions required to derive and implement each test, the parameters that the final user has to fix to conduct it, and its empirical power.

According to the literature, improvements have centered on criteria that are, to some extent, related to the assumptions required to derive the statistics and implement the tests. In this regard, scholars have mainly focused on the following criteria:

- (i)
- stationarity (or not) of the system generating process,
- (ii)
- linearity (or not) of the system, and
- (iii)
- robustness (or not) to the presence of outliers.

On the other hand, all available statistics require the final user to make certain decisions that will necessarily affect the final result of the test. Since the tests are applied to model residuals, one of the most important decisions is the estimation of a correct model: whether or not to pre-estimate an autoregressive model before using the test is a critical choice. Another important choice stems from the fact that some of the tests rely on kernels. Throughout the literature, there has been some controversy over how to choose the kernel and to what extent the empirical behavior of the test depends on that choice. Along with the kernel, truncation parameters must also be selected. Finally, all the tests require choosing the number of observations in the lag vector, which is equivalent to the embedding parameter m of our tests.

These observations lead us to complete the previous list with the following items:

- (iv)
- pre-estimation,
- (v)
- kernel selection,
- (vi)
- embedding selection.

Table 5 allows us to compare the tests considered in terms of robustness to processes that might be nonstationary and nonlinear, and to the presence of outliers (criteria (i)–(iii)). The table also facilitates comparisons regarding the choices that the user has to make before applying the test(s) (criteria (iv)–(vi)).

According to the previous table, the tests presented in this work have a greater range of applicability: the data they can analyze are compatible with an ample number of models. In other words, other tests are less generally applicable. From a practical point of view, the new tests also ease the user's work, since fewer parameters have to be selected. This is especially relevant because we remove the burden of modeling a (correct) autoregressive process. Our techniques only require selecting the parameter m, which is a necessary parameter in all the available tests.

As explained in the introductory section, nearly all the available tests derive from Haugh's seminal test; among its descendants, Hong's test is the best known and shows the best behavior in terms of power. To complete the comparison, we now compare the tests in terms of power, considering these two well-known tests, namely the Haugh and Hong tests, against Test2, which is the most general of ours. To make the comparison fair, it is conducted only on models to which all the tests can be applied.

We first describe these competing tests and then show their results on the corresponding systems.

The Haugh (1976) [1] procedure considers the following portmanteau statistic:

$${S}_{M}=n\sum _{j=-M}^{M}{r}_{\widehat{u}\widehat{v}}^{2}\left(j\right),$$

where ${r}_{\widehat{u}\widehat{v}}\left(j\right)={\sum}_{t=j+1}^{n}{\widehat{u}}_{t}{\widehat{v}}_{t-j}/{\left({\sum}_{t=1}^{n}{\widehat{u}}_{t}^{2}{\sum}_{t=1}^{n}{\widehat{v}}_{t}^{2}\right)}^{1/2}$ are the residual cross-correlations for $0\le j\le n-1$, ${r}_{\widehat{u}\widehat{v}}\left(j\right)={r}_{\widehat{u}\widehat{v}}(-j)$ for $1-n\le j<0$, and ${\widehat{u}}_{t},{\widehat{v}}_{t}$, $t=1,...,n$, are the two residual series of length n, obtained by fitting univariate models to each of the series. The constant $M\le n-1$ is a fixed integer and must be chosen a priori. Under the null hypothesis of independence, the asymptotic distribution of ${S}_{M}$ is chi-square, and the hypothesis is rejected for large values of the test statistic.
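A minimal implementation of $S_M$ is straightforward. The sketch below is our own illustration (function names and data are ours, not the authors' code): it computes the residual cross-correlations at lags $-M,\dots,M$ and the statistic, which would be compared against a chi-square quantile with $2M+1$ degrees of freedom.

```python
import numpy as np

def cross_correlation(u, v, j):
    """r_{uv}(j) = sum_t u_t v_{t-j} / sqrt(sum u_t^2 * sum v_t^2)."""
    n = len(u)
    denom = np.sqrt(np.sum(u**2) * np.sum(v**2))
    if j >= 0:
        return np.sum(u[j:] * v[:n - j]) / denom
    return np.sum(u[:n + j] * v[-j:]) / denom

def haugh_statistic(u, v, M):
    """S_M = n * sum_{j=-M}^{M} r_{uv}(j)^2; chi-square(2M+1) under the null."""
    n = len(u)
    return n * sum(cross_correlation(u, v, j) ** 2 for j in range(-M, M + 1))

rng = np.random.default_rng(1)
u, v = rng.standard_normal((2, 1000))
S = haugh_statistic(u, v, M=5)   # moderate values: no evidence against independence
```

Note that for $M=0$ and $v=u$ the statistic equals $n$ exactly, since $r_{uu}(0)=1$.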

Hong (1996) [8] generalizes Haugh’s statistic. In fact, Hong’s test is a weighted sum of residual cross-correlations of the form

$${Q}_{n}=\frac{n{\sum}_{j=1-n}^{n-1}{k}^{2}(j/d){r}_{\widehat{u}\widehat{v}}^{2}\left(j\right)-{M}_{n}\left(k\right)}{{\left[2{V}_{n}\left(k\right)\right]}^{1/2}},$$

where ${M}_{n}\left(k\right)={\sum}_{j=1-n}^{n-1}\left(1-\left|j\right|/n\right){k}^{2}(j/d)$ and ${V}_{n}\left(k\right)={\sum}_{j=2-n}^{n-2}\left(1-\left|j\right|/n\right)\left(1-(\left|j\right|+1)/n\right){k}^{4}(j/d).$ The weighting depends on a kernel function k and a smoothing parameter d (both have to be selected a priori). Under the null hypothesis, the test statistic ${Q}_{n}$ is asymptotically $N(0,1)$, and the null is rejected for large values of ${Q}_{n}.$
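Hong's statistic can be sketched in the same way. In the illustration below (our own code), we use the Daniell kernel $k(z)=\sin(\pi z)/(\pi z)$ as one admissible choice; the kernel and the smoothing parameter $d$ are up to the user, as noted above.

```python
import numpy as np

def cross_correlation(u, v, j):
    """r_{uv}(j) = sum_t u_t v_{t-j} / sqrt(sum u_t^2 * sum v_t^2)."""
    n = len(u)
    denom = np.sqrt(np.sum(u**2) * np.sum(v**2))
    if j >= 0:
        return np.sum(u[j:] * v[:n - j]) / denom
    return np.sum(u[:n + j] * v[-j:]) / denom

def hong_statistic(u, v, d, kernel=np.sinc):
    """Q_n = (n * sum_j k^2(j/d) r_{uv}^2(j) - M_n(k)) / sqrt(2 V_n(k));
    asymptotically N(0,1) under the null.  np.sinc is the Daniell kernel."""
    n = len(u)
    js = np.arange(1 - n, n)
    k2 = kernel(js / d) ** 2
    weighted = n * np.sum(k2 * np.array([cross_correlation(u, v, j) ** 2 for j in js]))
    M_n = np.sum((1 - np.abs(js) / n) * k2)
    js4 = np.arange(2 - n, n - 1)
    k4 = kernel(js4 / d) ** 4
    V_n = np.sum((1 - np.abs(js4) / n) * (1 - (np.abs(js4) + 1) / n) * k4)
    return (weighted - M_n) / np.sqrt(2 * V_n)

rng = np.random.default_rng(2)
u, v = rng.standard_normal((2, 300))
Q = hong_statistic(u, v, d=5)   # roughly standard normal under independence
```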

The empirical power results on these systems are collected in Table 6.

The results point to several observations: (a) all tests have maximum power for the simplest system (S4), so all are highly competitive, and the following comments therefore apply only to systems S5, S6 and S7. (b) None of the compared tests is powerful for the (small) sample sizes of 200 and 500. (c) The Haugh and Hong tests do not gain power as the sample size increases; Test2, however, not only improves but reaches levels close to full power.

#### 5.3. Selecting Parameter m

As mentioned earlier, all tests involve the selection of a parameter m that defines the basic units of analysis. This parameter is an integer that specifies the fixed length of the vectors fed into the tests. These vectors are generally formed by the first m consecutive observations from a time series (raw data or residuals). Throughout this paper, we have referred to this parameter as the embedding dimension, and to the vectors as m-histories. This terminology is very common in the fields of entropy and nonlinear chaotic dynamical systems.

None of the cited tests provides advice on how to select this parameter. In this section, we reflect on how to select m and analyze the empirical effect of increasing it.

An obvious observation is that, if the system generating process is constructed from two (or more) cross-dependent equations whose dependence occurs at lags larger than m, then no test will capture such dependence and the time series will be deemed independent. One natural solution is to construct the m-histories with some fixed time-delay, namely ${\overline{x}}_{t,\tau}=({x}_{t},{x}_{t+\tau},{x}_{t+2\tau},...,{x}_{t+(m-1)\tau})$.
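The delayed m-histories, and the ordinal (permutation) symbols built from them, can be sketched as follows; the helper names are our own, not taken from the paper.

```python
import numpy as np

def m_histories(x, m, tau=1):
    """Stack the delayed m-histories (x_t, x_{t+tau}, ..., x_{t+(m-1)tau}) as rows.
    tau = 1 recovers the plain m consecutive observations used in the tests."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def symbol(history):
    """Ordinal (permutation) symbol of one m-history: the ordering of its entries."""
    return tuple(np.argsort(history))

x = np.array([0.3, 1.2, 0.7, 2.5, 0.1, 0.9, 1.8, 0.4])
H = m_histories(x, m=3, tau=2)   # rows: (x_t, x_{t+2}, x_{t+4})
s0 = symbol(H[0])                # symbol of (0.3, 0.7, 0.1) -> (2, 0, 1)
```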

This problem, and this solution, are rarely found in Economics and Finance; they are more frequent in Physics and in subfields related to nonlinear and chaotic behavior. Indeed, the modeling and prediction of chaotic time series require a proper reconstruction of the state space from the available data in order to successfully estimate invariant properties of the embedded attractor. Thus, one must choose an appropriate delay time $\tau $ and embedding dimension m for phase space reconstruction. For the purposes of the presented tests, there is no need to go beyond what other tests do for analyzing dependence.

Since the new tests rely on symbolic measures, it is worth noting that previous research on these measures (see [24]) has provided rules for selecting m. Each m induces a number of symbols, which has to be large enough to capture departures from the null. For $m=2$, only ${(2!)}^{2}=4$ symbols are evaluated, which is expected to be too few to detect departures from the null(s). For the next integer, $m=3$, 36 symbols are evaluated, and with an embedding dimension of 4, 576 symbols are used.

According to the authors, gains in power can be obtained by increasing m according to the following rule: given a data set of T observations, the embedding dimension is the largest m, with m = 2, 3, 4, ..., that satisfies $5\times \mathrm{number}\phantom{\rule{4.pt}{0ex}}\mathrm{of}\phantom{\rule{4.pt}{0ex}}\mathrm{symbols}<T$. In our case, the rule is $5{(m!)}^{2}<T$. The intuition behind this rule is clear: on the one hand, the larger the number of symbols, the greater the sensitivity for detecting departures from the null; on the other hand, $m!$ grows very fast and statistical devices require enough data to behave well. Note that the larger the m, the finer the search for dependence, but at the cost of increasing the sample size required to satisfy the rule.
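The rule $5\,(m!)^2 < T$ is easy to automate. A sketch (the function name is ours):

```python
import math

def select_m(T):
    """Largest embedding dimension m >= 2 with 5 * (m!)**2 < T, per the rule
    in the text.  Returns None when even m = 2 does not satisfy the rule."""
    if 5 * math.factorial(2) ** 2 >= T:
        return None
    m = 2
    while 5 * math.factorial(m + 1) ** 2 < T:
        m += 1
    return m

# 5*(2!)^2 = 20, 5*(3!)^2 = 180, 5*(4!)^2 = 2880, 5*(5!)^2 = 72000
m_small = select_m(200)    # -> 3 (180 < 200 but 2880 >= 200)
m_paper = select_m(3000)   # -> 4; m = 5 would require more than 72,000 observations
```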

Finally, the study is completed by empirically analyzing the effect (in terms of power) of a change in the embedding parameter. We consider the basic models (S4–S8) and we evaluate for $m=2,3,4$. It makes no sense to use $m=5$ because, in this case, 14,400 symbols will be used while (at most) only 3000 observations are available. Results are collected in Table 7.

From these results, we firstly observe that, for a small number of symbols (i.e., for $m=2$), the test has no power (as expected), except for the deterministic system and the linear one. Secondly, the power of the test tends to increase with m. For this reason, we recommend that potential users of the test adhere to the automatic rule for choosing the parameter m.

## 6. Conclusions

In this work, we have extended the concept of symbolic correlation integral to the multivariate domain with the intention of studying, in a nonparametric way, the dependency relationships that might occur between observed variables. Taking the multivariate correlation integral as our starting point, we have developed two statistical tests that can be used to detect dependence between series, even when the relations between them are complex. Each new test is characterized by its corresponding null hypothesis. In both cases, these tests improve on the existing ones, which are mostly designed to capture linear relationships, in terms of power and usability. As a consequence of the nonparametric character of both tests, each can be used with guarantees and practically without restrictions as long as the series are stationary and the sample size is not excessively small. This means that both tests are appropriate for massive data analysis. However, for small data sets (say, below 200 observations), users should be aware that nonlinearity is highly difficult to detect, and our advice is to rely on other statistics if linear relations provide a plausible link among the series.

## Author Contributions

Conceptualization, M.V.C.-P., M.M.-G., J.M.R. and M.R.M.; methodology, M.V.C.-P., M.M.-G., J.M.R. and M.R.M.; software, M.V.C.-P., M.M.-G., J.M.R. and M.R.M.; writing—original draft preparation, M.V.C.-P., M.M.-G., J.M.R. and M.R.M.

## Funding

This research is the result of the activity performed under the program Groups of Excellence of the Region of Murcia, the Fundación Seneca, Science and Technology Agency of the region of Murcia project under grant 19884/GERM/15. All remaining errors are our responsibility.

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix A. Proofs

This section is devoted to the proof of Theorem 1. To this end, we need to prove the following proposition where ${J}_{ts}=J{S}_{ts}-{\tilde{\mu}}_{ts}$, which is a random variable with zero mean by Equations (6) and (8).

**Proposition**

**A1.**

Let ${\{{w}_{t}=({x}_{1t},{x}_{2t},\cdots ,{x}_{kt})\}}_{t=1}^{T}$ be a k-dimensional time series, where $\left\{{x}_{jt}\right\}$ is i.i.d. for all $j=1,2,...,k$ and the series are independent between themselves. Given embedding dimensions ${m}_{1},{m}_{2},...,{m}_{k}\ge 2$ and $m=max\{{m}_{1},{m}_{2},...,{m}_{k}\}$, it follows that

$$\underset{n\to \infty}{lim}{\displaystyle \frac{n(n-1)}{2}}Var({\widehat{JSC}}^{\mathbf{m}}-JS{C}^{\mathbf{m}})={\sigma}_{m}^{2},$$

where

$$JS{C}^{\mathbf{m}}=\prod _{j=1}^{k}\frac{1}{{m}_{j}!},$$

$${\sigma}_{m}^{2}=\sum _{{h}_{1}=-m}^{m}\sum _{{h}_{2}=-m}^{m}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}E\left[{J}_{ts}{J}_{t+{h}_{1}s+{h}_{2}}\right],$$

with $s\ge t+3m-2$.

In order to prove Proposition A1, we need the following two technical lemmas.

**Lemma**

**A1.**

Let ${\{{w}_{t}=({x}_{1t},{x}_{2t},\cdots ,{x}_{kt})\}}_{t=1}^{T}$ be a k-dimensional time series where $\left\{{x}_{jt}\right\}$ is i.i.d. for all $j=1,2,...,k$ and the series are independent between themselves. Given embedding dimensions ${m}_{1},{m}_{2},...,{m}_{k}\ge 2$ and $m=max\{{m}_{1},{m}_{2},...,{m}_{k}\}$, it follows that

$$E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]=0,$$

provided that $|{t}^{\prime}-t|>m-1$ or $|s-{s}^{\prime}|>m-1$.

**Proof.**

$$\begin{array}{ccc}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]\hfill & =& E\left[(J{S}_{ts}-{\tilde{\mu}}_{ts})(J{S}_{{t}^{\prime}{s}^{\prime}}-{\tilde{\mu}}_{{t}^{\prime}{s}^{\prime}})\right],\hfill \\ E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]\hfill & =& E\left[{\displaystyle \prod _{j=1}^{k}}\left(I(s\left({\overline{x}}_{jt}\right),s\left({\overline{x}}_{js}\right))-{\tilde{\mu}}_{ts}\right){\displaystyle \prod _{j=1}^{k}}\left(I(s\left({\overline{x}}_{j{t}^{\prime}}\right),s\left({\overline{x}}_{j{s}^{\prime}}\right))-{\tilde{\mu}}_{{t}^{\prime}{s}^{\prime}}\right)\right].\hfill \end{array}$$

As time series are independent between themselves, if we call ${I}_{ts}^{{x}_{j}}=I(s\left({\overline{x}}_{jt}\right),s\left({\overline{x}}_{js}\right))-{\mu}_{ts}^{{x}_{j}}$, we have

$$\begin{array}{ccc}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]\hfill & =& {\displaystyle \prod _{j=1}^{k}}E\left[\left(I(s\left({\overline{x}}_{jt}\right),s\left({\overline{x}}_{js}\right))-{\mu}_{ts}^{{x}_{j}}\right)\left(I(s\left({\overline{x}}_{j{t}^{\prime}}\right),s\left({\overline{x}}_{j{s}^{\prime}}\right))-{\mu}_{{t}^{\prime}{s}^{\prime}}^{{x}_{j}}\right)\right]=\hfill \\ & =& E\left[{\displaystyle \prod _{j=1}^{k}}{I}_{ts}^{{x}_{j}}\xb7{I}_{{t}^{\prime}{s}^{\prime}}^{{x}_{j}}\right].\hfill \end{array}$$

As $m=max\{{m}_{1},{m}_{2},...,{m}_{k}\}$, we have that ${I}_{ts}^{{x}_{j}}$ and ${I}_{{t}^{\prime}{s}^{\prime}}^{{x}_{j}}$ are independent, then $E\left[{I}_{ts}^{{x}_{j}}{I}_{{t}^{\prime}{s}^{\prime}}^{{x}_{j}}\right]=0$ ([17], p. 549) provided that $|{t}^{\prime}-t|>m-1$ or $|s-{s}^{\prime}|>m-1$ and $E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]=0$ as desired. □

**Lemma**

**A2.**

Let ${\{{w}_{t}=({x}_{1t},{x}_{2t},\cdots ,{x}_{kt})\}}_{t=1}^{T}$ be a k-dimensional time series, where $\left\{{x}_{jt}\right\}$ is i.i.d. for all $j=1,2,...,k$, and the series are independent between themselves. Let $m=max\{{m}_{1},{m}_{2},...,{m}_{k}\}$. Then, for $s\ge t+3m-2$, the covariance of the variable ${J}_{ts}$ with any other ${J}_{{t}^{\prime}{s}^{\prime}}$ depends only on ${h}_{1}=|{t}^{\prime}-t|$ and ${h}_{2}=|{s}^{\prime}-s|$ and not on the time periods themselves, that is,

$$\delta ({h}_{1},{h}_{2})=E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]$$

for all $t,s,{t}^{\prime},{s}^{\prime}$.

**Proof.**

Recall that, for $\pi ,\delta \in {S}_{{m}_{j}}$, ${\left({p}_{t{t}^{\prime}}^{\pi \delta}\right)}_{\pi \delta}=P{M}_{{x}_{j}}\left(\right|t-{t}^{\prime}\left|\right)$ and ${\left({p}_{s{s}^{\prime}}^{\pi \delta}\right)}_{\pi \delta}=P{M}_{{x}_{j}}\left(\right|s-{s}^{\prime}\left|\right)$ for every $j=1,2,...,k$. Fix t and s such that $s\ge t+3m-2$. Then, it follows that

- If ${h}_{1}\ge m$ or ${h}_{2}\ge m$, then, by Lemma A1, we have that $\delta ({h}_{1},{h}_{2})=0$.
- If ${h}_{1}<m$ and ${h}_{2}<m$, we have the following cases:
	- If ${t}^{\prime}>t$ and ${s}^{\prime}>s$, then
	$$\begin{array}{ccc}\hfill E\left[J{S}_{ts}J{S}_{{t}^{\prime}{s}^{\prime}}\right]& =& E\left[{\displaystyle \prod _{j=1}^{k}}{I}_{j}(s\left({\overline{x}}_{jt}\right),s\left({\overline{x}}_{js}\right)){I}_{j}(s\left({\overline{x}}_{j{t}^{\prime}}\right),s\left({\overline{x}}_{j{s}^{\prime}}\right))\right]=\hfill \\ & =& {\displaystyle \prod _{j=1}^{k}}\sum _{\pi \in {S}_{{m}_{j}}}\left(\sum _{\delta \in {S}_{{m}_{j}}}{p}_{t{t}^{\prime}}^{\pi \delta}{p}_{s{s}^{\prime}}^{\pi \delta}\right)\hfill \end{array}$$
	and hence
	$$\delta ({h}_{1},{h}_{2})=E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]=\prod _{j=1}^{k}\sum _{\pi \in {S}_{{m}_{j}}}\sum _{\delta \in {S}_{{m}_{j}}}{p}_{t{t}^{\prime}}^{\pi \delta}{p}_{s{s}^{\prime}}^{\pi \delta}-\prod _{j=1}^{k}{\left({\displaystyle \frac{1}{{m}_{j}!}}\right)}^{2}.$$
	- If ${t}^{\prime}>t$ and $s>{s}^{\prime}$, then
	$$\begin{array}{ccc}\hfill E\left[J{S}_{ts}J{S}_{{t}^{\prime}{s}^{\prime}}\right]& =& E\left[\prod _{j=1}^{k}{I}_{j}(s\left({\overline{x}}_{jt}\right),s\left({\overline{x}}_{js}\right)){I}_{j}(s\left({\overline{x}}_{j{t}^{\prime}}\right),s\left({\overline{x}}_{j{s}^{\prime}}\right))\right]=\hfill \\ & =& \prod _{j=1}^{k}\sum _{\pi \in {S}_{{m}_{j}}}\left(\sum _{\delta \in {S}_{{m}_{j}}}{p}_{t{t}^{\prime}}^{\pi \delta}{p}_{{s}^{\prime}s}^{\pi \delta}\right)\hfill \end{array}$$
	and hence
	$$\delta ({h}_{1},{h}_{2})=E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]=\prod _{j=1}^{k}\sum _{\pi \in {S}_{{m}_{j}}}\sum _{\delta \in {S}_{{m}_{j}}}{p}_{t{t}^{\prime}}^{\pi \delta}{p}_{{s}^{\prime}s}^{\pi \delta}-\prod _{j=1}^{k}{\left({\displaystyle \frac{1}{{m}_{j}!}}\right)}^{2}.$$
	- If ${t}^{\prime}=t$ and ${s}^{\prime}\ne s$, then
	$$\begin{array}{ccc}\hfill E\left[J{S}_{ts}J{S}_{t{s}^{\prime}}\right]& =& E\left[\prod _{j=1}^{k}{I}_{j}(s\left({\overline{x}}_{jt}\right),s\left({\overline{x}}_{js}\right)){I}_{j}(s\left({\overline{x}}_{jt}\right),s\left({\overline{x}}_{j{s}^{\prime}}\right))\right]=\hfill \\ & =& \prod _{j=1}^{k}\left(\sum _{\pi \in {S}_{{m}_{j}}}{p}_{t}^{\pi}{p}_{s{s}^{\prime}}^{\pi \pi}\right)\hfill \end{array}$$
	and hence
	$$\delta ({h}_{1},{h}_{2})=E\left[{J}_{ts}{J}_{t{s}^{\prime}}\right]=\prod _{j=1}^{k}\left(\sum _{\pi \in {S}_{{m}_{j}}}{p}_{t}^{\pi}{p}_{s{s}^{\prime}}^{\pi \pi}\right)-\prod _{j=1}^{k}{\left({\displaystyle \frac{1}{{m}_{j}!}}\right)}^{2}.$$

The remaining cases (${t}^{\prime}<t$, ${s}^{\prime}>s$), (${t}^{\prime}<t$, ${s}^{\prime}<s$), and ($t\ne {t}^{\prime}$, $s={s}^{\prime}$) are symmetric to the previous ones. Since the probabilities used in Equations (A1)–(A3) depend only on ${h}_{1}$ and ${h}_{2}$, the proof follows. □

**Proof**

**of Proposition A1.**

We consider

$$\begin{array}{ccc}\hfill {\displaystyle \frac{n(n-1)}{2}}Var({\widehat{JSC}}^{\mathbf{m}}-JS{C}^{\mathbf{m}})& =& \frac{2}{n(n-1)}Var\left(\sum _{t=1}^{n-1}\sum _{s=t+1}^{n}{J}_{ts}\right)\hfill \\ & =& \frac{2}{n(n-1)}\sum _{t=1}^{n-1}\sum _{{t}^{\prime}=1}^{n-1}\sum _{s=t+1}^{n}\sum _{{s}^{\prime}={t}^{\prime}+1}^{n}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right].\hfill \end{array}$$

From Lemma A1, the variance Equation (A4) can be rewritten as

$$\begin{array}{ccc}\hfill \frac{2}{n(n-1)}\sum _{t=1}^{n-1}\sum _{{t}^{\prime}=1}^{n-1}\sum _{s=t+1}^{n}\sum _{{s}^{\prime}={t}^{\prime}+1}^{n}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]& =& \frac{2}{n(n-1)}\sum _{t=1}^{n-1}\sum _{{t}^{\prime}\in N\left(t\right)}\sum _{s\in N\left(t\right)}\sum _{{s}^{\prime}\in N\left(s\right)}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]\hfill \\ & +& \frac{2}{n(n-1)}\sum _{t=1}^{n-1}\sum _{{t}^{\prime}\in N\left(t\right)}\sum _{s\notin N\left(t\right)}\sum _{{s}^{\prime}\in N\left(s\right)}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right].\hfill \end{array}$$

Since $N\left(t\right)$ and $N\left(s\right)$ are finite sets, the first summand in Equation (A5) converges to zero as n approaches infinity. The second summand in Equation (A5) can be written as

$$\begin{array}{ccc}\frac{2}{n(n-1)}\sum _{t=1}^{n-1}\sum _{{t}^{\prime}\in N\left(t\right)}\sum _{s\notin N\left(t\right)}\sum _{{s}^{\prime}\in N\left(s\right)}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]& =& \frac{2}{n(n-1)}\sum _{t=1}^{n-1}\sum _{{t}^{\prime}\in N\left(t\right)}\sum _{s\notin N\left(t\right)}\sum _{{s}^{\prime}\in N\left(s\right)\cap N\left({t}^{\prime}\right)}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]\\ \phantom{\rule{1.em}{0ex}}& +& \frac{2}{n(n-1)}\sum _{t=1}^{n-1}\sum _{{t}^{\prime}\in N\left(t\right)}\sum _{s\notin N\left(t\right)}\sum _{{s}^{\prime}\in N\left(s\right)\cap \overline{N\left({t}^{\prime}\right)}}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right],\end{array}$$

where we denote by $\overline{N\left(t\right)}$ the complementary set of $N\left(t\right)$.

In the first summand in Equation (A6), since ${t}^{\prime}\in N\left(t\right)$ and $s\notin N\left(t\right)$, we have that, for a fixed t, the number of time indexes s that make ${s}^{\prime}\in N\left(s\right)\cap N\left({t}^{\prime}\right)$ possible is finite, and therefore this summand converges to zero as n goes to infinity.

Notice that, for fixed t and s with $s\ge t+2m$, we have that $E\left[{J}_{ts}{J}_{t+{h}_{1}s+{h}_{2}}\right]=E\left[{J}_{{t}^{\prime}{s}^{\prime}}{J}_{{t}^{\prime}+{h}_{1}{s}^{\prime}+{h}_{2}}\right]$ for all ${t}^{\prime},{s}^{\prime}$ with ${s}^{\prime}\notin N\left({t}^{\prime}\right)$. Furthermore, from Lemma A1, it follows that

$$\sum _{{t}^{\prime}}\sum _{{s}^{\prime}\ge {t}^{\prime}+m}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]=\sum _{{t}^{\prime}=t-m+1}^{t+m-1}\sum _{{s}^{\prime}=s-m+1}^{s+m-1}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]={\sigma}_{m}^{2}.$$

Then, from Equation (A7), we have that

$$\begin{array}{ccc}\underset{n\to \infty}{lim}\frac{n(n-1)}{2}Var({\widehat{JSC}}^{\mathbf{m}}-JS{C}^{\mathbf{m}})& =& \underset{n\to \infty}{lim}\frac{2}{n(n-1)}\sum _{t=1}^{n-1}\sum _{{t}^{\prime}\in N\left(t\right)}\sum _{s\notin N\left(t\right)}\sum _{{s}^{\prime}\in N\left(s\right)\cap \overline{N\left({t}^{\prime}\right)}}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]\\ \phantom{\rule{1.em}{0ex}}& =& \underset{n\to \infty}{lim}\frac{2}{n(n-1)}\sum _{t=1}^{n-1}\sum _{s=t+2m}^{n}\sum _{{t}^{\prime}=t-m+1}^{t+m-1}\sum _{{s}^{\prime}=s-m+1}^{s+m-1}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right]\\ \phantom{\rule{1.em}{0ex}}& =& \underset{n\to \infty}{lim}\frac{2}{n(n-1)}\sum _{t=1}^{n-1}\sum _{s=t+2m}^{n}{\sigma}_{m}^{2}.\end{array}$$

Since the number of summands in ${\sum}_{t=1}^{n-1}{\sum}_{s=t+2m}^{n}$ is $\frac{1}{2}(-2-4m(-1+n)+n+{n}^{2})$, then

$$\begin{array}{ccc}\hfill \underset{n\to \infty}{lim}\frac{n(n-1)}{2}Var({\widehat{JSC}}^{\mathbf{m}}-\prod _{j=1}^{k}{\displaystyle \frac{1}{{m}_{j}!}})& =& \underset{n\to \infty}{lim}\frac{2}{n(n-1)}\sum _{t=1}^{n-1}\sum _{s=t+2m}^{n}{\sigma}_{m}^{2}={\sigma}_{m}^{2},\hfill \end{array}$$

as desired. □

**Proof**

**of Theorem 1.**

For each integer $k>4(m-1)$, we define

$${Y}_{n}^{k}=\sqrt{{\displaystyle \frac{2}{n(n-1)}}}\left(\sum _{{j}_{1}=1}^{r}\sum _{{j}_{2}={j}_{1}+1}^{r}{S}_{{j}_{1}{j}_{2}}+\sum _{j=1}^{r}{T}_{j}\right),$$

where

$${S}_{{j}_{1}{j}_{2}}=\sum _{t=({j}_{1}-1)k+1}^{{j}_{1}k-2m+2}\sum _{s=({j}_{2}-1)k+1}^{{j}_{2}k-2m+2}{J}_{ts},$$

$${T}_{j}=\sum _{t=(j-1)k+1}^{jk-2m+1}\sum _{s=t+1}^{jk-2m+2}{J}_{ts},$$

with $r=[(n-1)/k]$ the integer part of $(n-1)/k$.

It is interesting to note that the variables ${S}_{{j}_{1}{j}_{2}}$ are i.i.d. with zero mean and variance

$$v=\sum _{t=1}^{k-2m+2}\sum _{s=k+1}^{2k-2m+2}\sum _{{t}^{\prime}=1}^{k-2m+2}\sum _{{s}^{\prime}=k+1}^{2k-2m+2}E\left[{J}_{ts}{J}_{{t}^{\prime}{s}^{\prime}}\right].$$

Then, from Lemma A2, it follows that

$$v=\sum _{|{h}_{1}|<m}\sum _{|{h}_{2}|<m}(k-2m+2-|{h}_{1}\left|\right)(k-2m+2-|{h}_{2}\left|\right)E\left[{J}_{ts}{J}_{t+{h}_{1}s+{h}_{2}}\right].$$

Therefore, from the Central Limit Theorem for i.i.d. random variables, we have that

$$\sqrt{\left({\displaystyle \frac{2}{n(n-1)}}\right)}\left(\sum _{{j}_{1}=1}^{r}\sum _{{j}_{2}={j}_{1}+1}^{r}{S}_{{j}_{1}{j}_{2}}\right)$$

is asymptotically $\mathcal{N}\left(0,{\displaystyle \frac{v}{{k}^{2}}}\right)$ as $n\to \infty $.

On the other hand, the variables ${T}_{j}$ are i.i.d with zero mean, independent of ${\sum}_{{j}_{1}=1}^{r}{\sum}_{{j}_{2}={j}_{1}+1}^{r}{S}_{{j}_{1}{j}_{2}}$ and $\underset{n\to \infty}{lim}Var\left(\sqrt{{\displaystyle \frac{2}{n(n-1)}}}{\sum}_{j=1}^{r}{T}_{j}\right)=0$. Moreover,

$$\underset{k\to \infty}{lim}{\displaystyle \frac{v}{{k}^{2}}}={\sigma}_{m}^{2}.$$

Then,

$$\underset{k\to \infty}{lim}\underset{n\to \infty}{lim}{Y}_{n}^{k}=Y,$$

where $Y\sim \mathcal{N}(0,{\sigma}_{m}^{2}).$

It only remains to show that

$$\underset{k\to \infty}{lim}\underset{n\to \infty}{lim}supP\left(\left|\sqrt{{\displaystyle \frac{n(n-1)}{2}}}\left({\widehat{JSC}}^{\mathbf{m}}-E\left[{\widehat{JSC}}^{\mathbf{m}}\right]\right)-{Y}_{n}^{k}\right|>\epsilon \right)=0$$

for all $\epsilon >0$, since Equation (11) will then follow from Proposition 6.3.9 in Brockwell and Davis (1987).

To establish Equation (A12), we write

$$\left(\sqrt{{\displaystyle \frac{n(n-1)}{2}}}\left({\widehat{JSC}}^{\mathbf{m}}-E\left[{\widehat{JSC}}^{\mathbf{m}}\right]\right)-{Y}_{n}^{k}\right)=\sqrt{{\displaystyle \frac{2}{n(n-1)}}}\left(U{S}_{n}^{k}+{U}_{n}^{k}+U{T}_{n}^{k}\right),$$

where

$$U{S}_{n}^{k}=\sum _{{j}_{1}=1}^{r-1}\sum _{t=({j}_{1}-1)k+1}^{{j}_{1}k-2m+2}\sum _{{j}_{2}={j}_{1}+1}^{r-1}\sum _{s={j}_{2}k-2m+3}^{{j}_{2}k}{J}_{ts}+\sum _{{j}_{1}=1}^{r-1}\sum _{t={j}_{1}k-2m+3}^{{j}_{1}k}\sum _{s={j}_{1}k+1}^{rk-2m+2}{J}_{ts},$$

$${U}_{n}^{k}=\sum _{t=1}^{rk-2m+2}\sum _{s=rk-2m+3}^{n}{J}_{ts}+\sum _{t=rk-2m+3}^{n-1}\sum _{s=t+1}^{n}{J}_{ts},$$

$$U{T}_{n}^{k}=\sum _{{j}_{1}=1}^{r-1}\sum _{t=({j}_{1}-1)k+1}^{{j}_{1}k-2m+2}\sum _{s={j}_{1}k-2m+3}^{{j}_{1}k}{J}_{ts}+\sum _{{j}_{1}=1}^{r-1}\sum _{t={j}_{1}k-2m+3}^{{j}_{1}k-1}\sum _{s=t+1}^{{j}_{1}k}{J}_{ts}.$$

Notice that, by Chebyshev's inequality, to prove Equation (A12) it is sufficient to prove the following equation:

$$\underset{k\to \infty}{lim}\underset{n\to \infty}{lim}supVar\left(\sqrt{{\displaystyle \frac{n(n-1)}{2}}}\left({\widehat{JSC}}^{\mathbf{m}}-E\left[{\widehat{JSC}}^{\mathbf{m}}\right]\right)-{Y}_{n}^{k}\right)=0.$$

Taking into account that $Var\left({J}_{ts}\right)\le 1$ for all $t,s$, that $\left|Cov(X,Y)\right|\le \sqrt{Var\left(X\right)Var\left(Y\right)}$, and Lemma A1, we have that

$$Var\left(\sum _{t={\alpha}_{1}}^{{\alpha}_{2}}\sum _{s={\beta}_{1}}^{{\beta}_{2}}{J}_{ts}\right)\le ({\alpha}_{2}-{\alpha}_{1}+1)({\beta}_{2}-{\beta}_{1}+1)(1+{(2m-1)}^{2}),$$

$$\begin{array}{ccc}Var\left(\sqrt{{\displaystyle \frac{2}{n(n-1)}}}U{S}_{n}^{k}\right)& \le & \frac{2}{n(n-1)}\left({\displaystyle \frac{(r-1)(r-2)}{2}}(k-2m+2)(2m-2)(1+{(2m-1)}^{2})\right.\\ \phantom{\rule{1.em}{0ex}}& +& (r-1)((r-1)k-2m+2)(1+{(2m-1)}^{2})\\ \phantom{\rule{1.em}{0ex}}& +& \left.2(r-1)(1+{(2m-1)}^{2})\sqrt{{\displaystyle \frac{r-2}{2}}(k-2m+2)(2m-2)((r-1)k-2m+2)}\right),\end{array}$$

$$\begin{array}{ccc}\hfill Var\left(\sqrt{{\displaystyle \frac{2}{n(n-1)}}}{U}_{n}^{k}\right)& \le & {\displaystyle \frac{2}{n(n-1)}}\left((rk-2m+2)m(1+{(2m-1)}^{2})+\right.\hfill \\ & +& {m}^{2}(1+{(2m-1)}^{2})+\hfill \\ & +& \left.2m(1+{(2m-1)}^{2})\sqrt{(rk-2m+2)m}\right),\hfill \end{array}$$

$$\begin{array}{ccc}Var\left(\sqrt{{\displaystyle \frac{2}{n(n-1)}}}U{T}_{n}^{k}\right)& \le & {\displaystyle \frac{2}{n(n-1)}}\left((r-1)(k-2m+2)(2m-2)(1+{(2m-1)}^{2})\right.+\\ \phantom{\rule{1.em}{0ex}}& \phantom{\rule{1.em}{0ex}}& (r-1){(2m-3)}^{2}(1+{(2m-1)}^{2})+\\ \phantom{\rule{1.em}{0ex}}& +& \left.2(r-1)(2m-3)(1+{(2m-1)}^{2})\sqrt{(k-2m+2)(2m-2)}\right).\end{array}$$

Then, it follows that

$$\begin{array}{c}\hfill \underset{k\to \infty}{lim}\underset{n\to \infty}{lim}supVar\left(\sqrt{{\displaystyle \frac{2}{n(n-1)}}}U{S}_{n}^{k}\right)=0,\\ \hfill \underset{k\to \infty}{lim}\underset{n\to \infty}{lim}supVar\left(\sqrt{{\displaystyle \frac{2}{n(n-1)}}}{U}_{n}^{k}\right)=0,\\ \hfill \underset{k\to \infty}{lim}\underset{n\to \infty}{lim}supVar\left(\sqrt{{\displaystyle \frac{2}{n(n-1)}}}U{T}_{n}^{k}\right)=0.\end{array}$$

Hence,
$$\underset{k\to \infty}{lim}\underset{n\to \infty}{lim}supVar\left(\sqrt{{\displaystyle \frac{n(n-1)}{2}}}\left({\widehat{JSC}}^{\mathbf{m}}-E\left[{\widehat{JSC}}^{\mathbf{m}}\right]\right)-{Y}_{n}^{k}\right)=0.$$

From Chebyshev’s inequality, we have that

$$\underset{k\to \infty}{lim}\underset{n\to \infty}{lim}supP\left(\left|\sqrt{{\displaystyle \frac{n(n-1)}{2}}}\left({\widehat{JSC}}^{\mathbf{m}}-E\left[{\widehat{JSC}}^{\mathbf{m}}\right]\right)-{Y}_{n}^{k}\right|>\epsilon \right)=0,$$

which finishes the proof of the theorem. □

**Proof**

**of Theorem 2.**

Let ${\widehat{\theta}}_{j}$ be a consistent estimator of ${\theta}_{j}$ and ${m}_{j}\ge 2$ for all $j=1,2,...,k$, and let $m=max\{{m}_{1},{m}_{2},...,{m}_{k}\}$. Now, we want to prove that, for the multidimensional time series given by ${\left\{({u}_{1t},{u}_{2t},\cdots ,{u}_{kt})\right\}}_{t}$, it follows that

$$\sqrt{\frac{n(n-1)}{2}}\left({\widehat{JSC}}^{m}({\theta}_{1},{\theta}_{2},\cdots ,{\theta}_{k})-{\widehat{JSC}}^{m}({\widehat{\theta}}_{1},{\widehat{\theta}}_{2},\cdots ,{\widehat{\theta}}_{k})\right){\to}^{P}0.$$

For all $j=1,2,\cdots ,k$, the residual function is given by

$${u}_{jt}\left({\widehat{\theta}}_{j}\right)={x}_{jt}-{G}_{j}({I}_{t-1}^{j},{\widehat{\theta}}_{j})={u}_{jt}\left({\theta}_{j}\right)+{G}_{j}({I}_{t-1}^{j},{\theta}_{j})-{G}_{j}({I}_{t-1}^{j},{\widehat{\theta}}_{j}).$$

Then,

$$|{u}_{jt}\left({\theta}_{j}\right)-{u}_{jt}\left({\widehat{\theta}}_{j}\right)|=|{G}_{j}({I}_{t-1}^{j},{\theta}_{j})-{G}_{j}({I}_{t-1}^{j},{\widehat{\theta}}_{j})|.$$

The ${m}_{j}!$ symbols provide a partition of ${\mathbb{R}}^{{m}_{j}}$ into ${m}_{j}!$ subsets $\{{A}_{{\pi}_{1}}^{j},{A}_{{\pi}_{2}}^{j},\cdots ,{A}_{{\pi}_{{m}_{j}!}}^{j}\}$ such that $P({\overline{u}}_{jt}\left({\theta}_{j}\right)\in {A}_{{\pi}_{i}}^{j}\cap {A}_{{\pi}_{l}}^{j})=0$ for all $i\ne l$, provided the values of ${u}_{jt}$ come from a continuous distribution, since ties then occur with probability zero. Therefore, for a given ${m}_{j}$-history ${\overline{u}}_{jt}\left({\theta}_{j}\right)$, we may assume without loss of generality that it belongs to a ball of radius ${\epsilon}_{j}>0$ satisfying $B({\overline{u}}_{jt}\left({\theta}_{j}\right),{\epsilon}_{j})\subset {A}_{\pi}^{j}$ for some $\pi \in {S}_{{m}_{j}}$.

For each ${\epsilon}_{j}$, and given that ${G}_{j}$ is a continuous map on a compact set (and hence uniformly continuous) for all $j=1,2,\cdots ,k$, there exists a ${\delta}_{j}>0$ independent of ${I}_{t-1}^{j}$, ${\theta}_{j}$ and ${\widehat{\theta}}_{j}$ such that, if $\Vert ({I}_{t-1}^{j},{\theta}_{j})-({I}_{t-1}^{j},{\widehat{\theta}}_{j})\Vert <{\delta}_{j}$, then by Equation (A18) $|{u}_{jt}\left({\theta}_{j}\right)-{u}_{jt}\left({\widehat{\theta}}_{j}\right)|<{\epsilon}_{j}$ and hence ${\overline{u}}_{jt}\left({\widehat{\theta}}_{j}\right)\in {A}_{\pi}^{j}$.

Thus, it follows that $I({s}_{{x}_{j}}\left({\overline{u}}_{jt}\left({\theta}_{j}\right)\right),{s}_{{x}_{j}}\left({\overline{u}}_{js}\left({\theta}_{j}\right)\right))=I({s}_{{x}_{j}}\left({\overline{u}}_{jt}\left({\widehat{\theta}}_{j}\right)\right),{s}_{{x}_{j}}\left({\overline{u}}_{js}\left({\widehat{\theta}}_{j}\right)\right))$ for almost all $t,s$, and hence

$$J{S}_{ts}({\theta}_{1},{\theta}_{2},\cdots ,{\theta}_{k})=J{S}_{ts}({\widehat{\theta}}_{1},{\widehat{\theta}}_{2},\cdots ,{\widehat{\theta}}_{k})$$

and therefore

$$\lim_{n\to \infty}P\left(\sqrt{\frac{n(n-1)}{2}}\left|{\widehat{JSC}}^{m}({\theta}_{1},{\theta}_{2},\cdots ,{\theta}_{k})-{\widehat{JSC}}^{m}({\widehat{\theta}}_{1},{\widehat{\theta}}_{2},\cdots ,{\widehat{\theta}}_{k})\right|>0\right)=0,$$

which finishes the proof of the theorem. □

**Remark A1.**

Notice that we have used only the fact that the maps ${G}_{j}$, $j=1,2,\ldots,k$, are uniformly continuous. Therefore, we could state just this condition in Theorem 2 and drop the requirement that every ${G}_{j}$ has a compact domain.
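To complement the notation used in the proof, the estimator ${\widehat{JSC}}^{m}$ can be sketched as the U-statistic whose kernel $JS_{ts}$ equals 1 exactly when times $t$ and $s$ carry the same ordinal symbol in every coordinate series simultaneously. The sketch below is our illustration, not the authors' code; `symbolize` and `jsc_hat` are assumed names.

```python
import numpy as np

def symbolize(x, m):
    """Ordinal-pattern symbols of all overlapping m-histories of x."""
    return [tuple(np.argsort(x[t:t + m])) for t in range(len(x) - m + 1)]

def jsc_hat(series, m):
    """U-statistic estimate of the joint symbolic correlation integral:
    the average over all pairs t < s of the kernel JS_ts, which is 1
    iff the ordinal symbols at t and s coincide in every coordinate
    series simultaneously."""
    symbol_lists = [symbolize(x, m) for x in series]
    n = min(len(sl) for sl in symbol_lists)
    matches = 0
    for t in range(n - 1):
        for s in range(t + 1, n):
            matches += all(sl[t] == sl[s] for sl in symbol_lists)
    return 2.0 * matches / (n * (n - 1))

rng = np.random.default_rng(1)
x, y = rng.standard_normal(300), rng.standard_normal(300)
jsc = jsc_hat([x, y], m=3)
# Under independence each series matches with probability about 1/3! = 1/6,
# so for this bivariate example jsc should be near (1/6)**2.
```

Plugging residual series $u_{jt}(\widehat{\theta}_j)$ into such an estimator leaves it asymptotically unchanged, which is precisely the content of Theorem 2.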

## References

1. Haugh, L.D. Checking the independence of two covariance-stationary time series: A univariate residual cross-correlation approach. *J. Am. Stat. Assoc.* **1976**, *71*, 378–385.
2. Pierce, A. Lack of dependence among economic variables. *J. Am. Stat. Assoc.* **1977**, *72*, 11–22.
3. Geweke, J. A comparison of tests of independence of two covariance stationary time series. *J. Am. Stat. Assoc.* **1981**, *76*, 363–373.
4. Bouhaddioui, C.; Dufour, J.-M. Tests for non-correlation of two infinite-order cointegrated vector autoregressive series. *J. Appl. Probab. Stat.* **2008**, *3*, 78–94.
5. Bouhaddioui, C.; Roy, R. A generalized portmanteau test for independence of two infinite-order vector autoregressive series. *J. Time Ser. Anal.* **2006**, *27*, 505–544.
6. Duchesne, P.; Roy, R. Robust tests for independence of two time series. *Stat. Sin.* **2003**, *13*, 827–852.
7. El Himdi, K.; Roy, R. Tests for noncorrelation of two multivariate ARMA time series. *Can. J. Stat.* **1997**, *25*, 233–256.
8. Hong, Y. Testing for independence between two covariance stationary time series. *Biometrika* **1996**, *83*, 615–625.
9. Koch, P.D.; Yang, S.S. A method for testing the independence of two time series that accounts for a potential pattern in the cross-correlation function. *J. Am. Stat. Assoc.* **1986**, *81*, 533–544.
10. Li, W.K.; Hui, Y.V. Robust residual cross correlation tests for lagged relations in time series. *J. Stat. Comput. Simul.* **1994**, *49*, 103–109.
11. Pham, D.T.; Roy, R.; Cedras, L. Tests for non-correlation of two cointegrated ARMA time series. *J. Time Ser. Anal.* **2003**, *24*, 553–577.
12. Muhammad, K.W.; Islam, K.A. Most stringent test of independence for time series. *Commun. Stat. Simul. Comput.* **2018**.
13. Chan, N.H.; Tran, L.T. Nonparametric tests for serial dependence. *J. Time Ser. Anal.* **1992**, *13*, 19–28.
14. Matilla-Garcia, M.; Rodriguez, J.M.; Ruiz, M. A symbolic test for testing independence between time series. *J. Time Ser. Anal.* **2010**, *31*, 76–85.
15. Robinson, P.M. Consistent nonparametric entropy-based testing. *Rev. Econ. Stud.* **1991**, *58*, 437–453.
16. Skaug, H.J.; Tjøstheim, D. A nonparametric test of serial independence based on the empirical distribution function. *Biometrika* **1993**, *80*, 591–602.
17. Caballero-Pintado, M.V.; Matilla-Garcia, M.; Marin, M.R. Symbolic correlation integral. *Econom. Rev.* **2019**, *38*, 533–556.
18. Granger, C.W.; Maasoumi, E.; Racine, J. A dependence metric for possibly nonlinear processes. *J. Time Ser. Anal.* **2004**, *25*, 649–669.
19. Agresti, A.; Agresti, B.F. Statistical analysis of qualitative variation. *Sociol. Methodol.* **1978**, *9*, 204.
20. Aaronson, J.; Burton, R.; Dehling, H.; Gilat, D.; Hill, T.; Weiss, B. Strong laws for L- and U-statistics. *Trans. Am. Math. Soc.* **1996**, *348*, 2845–2866.
21. Politis, D.N.; White, H. Automatic block-length selection for the dependent bootstrap. *Econom. Rev.* **2004**, *23*, 53–70.
22. Patton, A.J.; Politis, D.N.; White, H. Correction: Automatic block-length selection for the dependent bootstrap. *Econom. Rev.* **2009**, *28*, 372–375.
23. Hallin, M.; Saidi, A. Testing noncorrelation and noncausality between multivariate ARMA time series. *J. Time Ser. Anal.* **2005**, *26*, 83–105.
24. Matilla-Garcia, M.; Ruiz, M. A non-parametric independence test using permutation entropy. *J. Econom.* **2008**, *144*, 139–155.

| | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 |
|---|---|---|---|---|---|---|---|---|
| T = 200 | 0.045 | 0.288 | 0.175 | 1 | 0.389 | 0.190 | 0.271 | 1 |
| T = 500 | 0.048 | 0.838 | 0.528 | 1 | 0.938 | 0.670 | 0.849 | 1 |
| T = 1000 | 0.048 | 1 | 0.943 | 1 | 1 | 0.990 | 0.999 | 1 |
| T = 3000 | 0.049 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |

| | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 |
|---|---|---|---|---|---|---|---|---|
| T = 200 | 0.049 | 0.056 | 0.056 | 0.998 | 0.068 | 0.247 | 0.104 | 1 |
| T = 500 | 0.053 | 0.054 | 0.057 | 1 | 0.091 | 0.742 | 0.254 | 1 |
| T = 1000 | 0.058 | 0.055 | 0.057 | 1 | 0.200 | 0.992 | 0.636 | 1 |
| T = 3000 | 0.050 | 0.049 | 0.059 | 1 | 0.816 | 1 | 1 | 1 |

| | S9 | S10 |
|---|---|---|
| T = 200 | 0.053 | 0.051 |
| T = 500 | 0.050 | 0.050 |
| T = 1000 | 0.053 | 0.055 |
| T = 3000 | 0.053 | 0.054 |

| | S11 | S12 | S13 |
|---|---|---|---|
| T = 200 | 1 | 0.855 | 0.121 |
| T = 500 | 1 | 0.935 | 0.415 |
| T = 1000 | 1 | 0.974 | 0.879 |
| T = 3000 | 1 | 0.998 | 1 |

| Test | Robust to nonstationarity | Robust to nonlinearity | Robust to outliers | Pre-estimation (choice) | Kernel (choice) | Embedding (choice) |
|---|---|---|---|---|---|---|
| Haugh | No | No | No | Yes | No | Yes |
| Hong | No | No | No | Yes | Yes | Yes |
| Koch & Yang | No | No | No | Yes | No | Yes |
| Li & Hui | No | No | Yes | Yes | No | Yes |
| Hallin & Saidi [23] | No | No | No | Yes | No | Yes |
| Pham et al. | Allows cointegration | No | No | Yes | No | Yes |
| Test 2 (Test 1) | No (No) | Yes (Yes) | Yes (Yes) | No (No) | No (No) | Yes (Yes) |

| | Test | S4 | S5 | S6 | S7 |
|---|---|---|---|---|---|
| T = 200 | ${S}_{M}$ | 1 | 0.21 | 0.11 | 0.11 |
| | ${Q}_{m}$ | 1 | 0.23 | 0.20 | 0.29 |
| | Test 2 | 0.99 | 0.07 | 0.25 | 0.10 |
| T = 500 | ${S}_{M}$ | 1 | 0.20 | 0.10 | 0.14 |
| | ${Q}_{m}$ | 1 | 0.22 | 0.23 | 0.31 |
| | Test 2 | 1 | 0.09 | 0.74 | 0.25 |
| T = 1000 | ${S}_{M}$ | 1 | 0.31 | 0.11 | 0.15 |
| | ${Q}_{m}$ | 1 | 0.24 | 0.28 | 0.34 |
| | Test 2 | 1 | 0.20 | 0.99 | 0.64 |
| T = 3000 | ${S}_{M}$ | 1 | 0.29 | 0.37 | 0.17 |
| | ${Q}_{m}$ | 1 | 0.24 | 0.27 | 0.33 |
| | Test 2 | 1 | 0.82 | 1 | 1 |

| m | T | S4 | S5 | S6 | S7 | S8 |
|---|---|---|---|---|---|---|
| 2 | 200 | 0.996 | 0.07 | 0.052 | 0.064 | 0.927 |
| 3 | 200 | 0.999 | 0.046 | 0.209 | 0.082 | 1 |
| 4 | 200 | 0.983 | 0.054 | 0.195 | 0.087 | 1 |
| 2 | 500 | 1 | 0.057 | 0.053 | 0.066 | 0.998 |
| 3 | 500 | 1 | 0.098 | 0.732 | 0.271 | 1 |
| 4 | 500 | 1 | 0.112 | 0.699 | 0.289 | 1 |
| 2 | 1000 | 1 | 0.06 | 0.052 | 0.059 | 1 |
| 3 | 1000 | 1 | 0.226 | 0.995 | 0.64 | 1 |
| 4 | 1000 | 1 | 0.321 | 0.994 | 0.749 | 1 |
| 2 | 3000 | 1 | 0.062 | 0.037 | 0.052 | 1 |
| 3 | 3000 | 1 | 0.799 | 1 | 0.998 | 1 |
| 4 | 3000 | 1 | 0.98 | 1 | 1 | 1 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).