# Modeling Spectral Properties in Stationary Processes of Varying Dimensions with Applications to Brain Local Field Potential Signals

Department of Statistical Science, Southern Methodist University, Dallas, TX 75275, USA

School of Biological Sciences, University of California Irvine, Irvine, CA 92697, USA

Statistics Program, King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia

Author to whom correspondence should be addressed.

Received: 22 October 2020 / Revised: 28 November 2020 / Accepted: 3 December 2020 / Published: 5 December 2020

(This article belongs to the Special Issue Time Series Modelling)

In some applications, it is important to compare the stochastic properties of two multivariate time series that have unequal dimensions. A new method is proposed to compare the spread of spectral information in two multivariate stationary processes with different dimensions. To measure discrepancies, a frequency-specific spectral ratio (FS-ratio) statistic is proposed and its asymptotic properties are derived. The FS-ratio is blind to the dimension of the stationary process and captures the proportion of spectral power in various frequency bands. Here we develop a technique to automatically identify frequency bands that carry significant spectral power. We apply our method to track changes in the complexity of a 32-channel local field potential (LFP) signal from a rat following an experimentally induced stroke. At every epoch (a distinct time segment from the duration of the experiment), the nonstationary LFP signal is decomposed into stationary and nonstationary latent sources and the complexity is analyzed through these latent stationary sources and their dimensions, which can change across epochs. The analysis indicates that spectral information in the Beta frequency band (12–30 Hertz) demonstrated the greatest change in structure and complexity due to the stroke.

Numerous applications require comparing two multivariate time series of unequal dimensions. Neuroscience experiments often yield stationary or nonstationary multivariate signals across different epochs (distinct non-overlapping successive time segments of the duration of the experiment). A popular approach to modeling such data decomposes the observed signal at every epoch into useful latent sources that can be stationary or nonstationary. These latent sources are lower-dimensional time series obtained by linear transforms of the components of the observed multivariate series and they aim to capture important statistical properties of the observed series. At these epochs, dimension reduction techniques such as principal component analysis (PCA), factor modeling, independent component analysis (ICA) and stationary subspace analysis (SSA) are often applied to extract useful lower-dimensional latent sources. Artificially setting the dimension of these latent sources to be the same across the epochs results in loss of important information since these changes could be indicative of useful brain processes such as learning (Fiecas and Ombao [1]). Indeed, brain processes evolve across the entire recording period (Fiecas and Ombao [1], Ombao et al. [2]), leading to changes in the dimension of the latent sources across epochs. Moreover, the evolution of the dimension can itself serve as a feature in understanding how brain function evolves during an experiment. As another example in neuroscience, the aim in functional connectivity is to model dependence between different brain regions at various epochs in an experiment; Cribben et al. [3], Cribben et al. [4], Cribben and Yu [5], Zhu and Cribben [6]. To mitigate the problem of high dimensionality arising from signals on a densely voxelated cortical surface, parcellation leads to disjoint regions of interest (ROI) of the brain and signal summaries are obtained in each of these regions. Dependence measures between these ROIs are then computed using their respective signal summaries. In the above pursuit of region-wise comparison of the brain, it is natural to encounter the problem of comparing multivariate processes, say from two different regions, that have unequal dimensions. In Wang et al. [7] the problem of modeling effective connectivity in high-dimensional cortical surface signals is pursued wherein a factor analysis is carried out on each ROI and vector autoregressive (VAR) models are used to jointly model the latent factors. Here again, one can potentially end up with an unequal number of optimal latent factors from different ROIs, thereby making the comparisons challenging.

The application that motivates our methodology is the analysis of local field potentials (LFP) in an experiment that simulates ischemic stroke in humans (Data source: Stroke experiment conducted in the lab of co-author (Ron Frostig) at his Neurobiology lab; http://frostiglab.bio.uci.edu/Home.html). The dataset comprises 600 epochs worth of LFP recordings (each epoch is 1 s long) from 32 microelectrodes implanted in a rat’s cortex. Figure 1 below depicts the rat’s cortex and the locations of the 32 sensors implanted on the cortical surface from which the LFP signal is recorded. This 32-dimensional signal is our observed time series. A stroke is induced midway through the experiment (epoch 300) by clamping the medial cerebral artery. The goal is to develop a method that tracks changes in the complexity of signals following the stroke. From the observed LFP signal, useful lower-dimensional sources are extracted at each epoch and we shall characterize complexity in LFP through these useful latent sources and their varying dimensions across epochs.

Motivated by such applications, we propose a new method to compare spectral information in different multivariate stationary processes of varying dimensions. More specifically, the aim is to capture the amount of spectral information in various frequency bands in different stationary processes of unequal dimensions. There are already many methods and models that discuss evolution of spectral information, but the key contribution of this paper is in modeling evolution of the spectrum while allowing the dimension to also evolve over time. We introduce a frequency-specific spectral ratio statistic, which we call the FS-ratio, that measures the proportion of spectral power in various frequency bands. The FS-ratio can be used to (i) identify frequency bands where there are significant discrepancies between pre- and post-stroke epochs, (ii) identify frequency bands that account for most variation within pre- (and post-) stroke epochs and (iii) identify the frequency bands that are consistent (vs. inconsistent) across all the 600 epochs. One of the key features of this statistic is that it is blind to the dimension of the multivariate stationary process and can be used to compare successive epochs with possibly different dimensions in the stationary sources. Thus, the proposed FS-ratio is very useful in (a) discriminating between pre- and post-stroke onset and (b) tracking changes over the entire course of the experiment while allowing for varying dimensions. In Section 2 we develop our FS-ratio statistic and derive its asymptotic properties. We return to the LFP dataset in Section 3 and discuss the usefulness of the proposed ratio statistic in discriminating between pre- and post-stroke onset. Section 4 concludes. Finally, we evaluate the performance of the proposed FS-ratio statistic through several simulation examples and the results are provided in Appendix A.

In this section, we first describe our FS-ratio statistic and the method to analyze the evolution of spectral information in stationary processes with varying dimensions. Using the FS-Ratio statistic, a technique to locate the frequency bands carrying significant spectral power is discussed in Section 2.1.1. The theoretical properties of the proposed statistic along with the required assumptions are discussed in Section 2.1.2.

Let ${X}_{t}$ be a ${d}_{1}$-variate time series and ${Y}_{t}$ be a ${d}_{2}$-variate time series where ${d}_{1}\ne {d}_{2}$ and $t=1,2,\dots ,T$. The spectral matrices of the two zero-mean multivariate stationary series are given by ${f}_{X}\left(\omega \right)\in {\mathbb{C}}^{{d}_{1}\times {d}_{1}}$ and ${f}_{Y}\left(\omega \right)\in {\mathbb{C}}^{{d}_{2}\times {d}_{2}}$ for $\omega \in (-\pi ,\pi )$. Here $(-\pi ,\pi )$ represents the normalized frequency range used to take care of aliasing of frequency components outside this range. This range $(-\pi ,\pi )$ is sometimes referred to as the angular frequency scale, with the frequency $\pi $ being called the Nyquist or folding frequency. With the discrete Fourier transforms of ${X}_{t}$ and ${Y}_{t}$ expressed as ${J}_{X,T}\left(\omega \right)=\frac{1}{\sqrt{2\pi T}}\sum _{t=1}^{T}{X}_{t}{e}^{-it\omega}$ and ${J}_{Y,T}\left(\omega \right)=\frac{1}{\sqrt{2\pi T}}\sum _{t=1}^{T}{Y}_{t}{e}^{-it\omega}$, respectively, the periodogram matrices ${I}_{X,T}\left(\omega \right)\in {\mathbb{C}}^{{d}_{1}\times {d}_{1}}$ and ${I}_{Y,T}\left(\omega \right)\in {\mathbb{C}}^{{d}_{2}\times {d}_{2}}$ of the two series are obtained by
where ${J}_{X,T}{\left(\omega \right)}^{\ast}$ denotes the conjugate transpose. The estimated spectral matrices, for $\omega \in (-\pi ,\pi )$, are given by
where ${\omega}_{j}=\frac{2\pi}{T}j$ and ${K}_{h}(\cdot)=\frac{1}{h}K\left(\frac{\cdot}{h}\right)$ where $K(\cdot)$ is a nonnegative symmetric kernel function and $h$ denotes the bandwidth. Assumptions on the kernel and bandwidth to ensure uniform consistency in $\omega \in (-\pi ,\pi )$ of the estimated spectral matrices are listed in Section 2.1.2.

$${I}_{X,T}\left(\omega \right)={J}_{X,T}\left(\omega \right){J}_{X,T}{\left(\omega \right)}^{\ast}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\mathrm{and}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}{I}_{Y,T}\left(\omega \right)={J}_{Y,T}\left(\omega \right){J}_{Y,T}{\left(\omega \right)}^{\ast},$$

$${\widehat{f}}_{X}\left(\omega \right)=\frac{1}{T}\sum _{j=-\lfloor \frac{T}{2}\rfloor +1}^{\lfloor \frac{T}{2}\rfloor}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}{K}_{h}(\omega -{\omega}_{j})\phantom{\rule{0.277778em}{0ex}}{I}_{X,T}\left({\omega}_{j}\right)\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\mathrm{and}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}{\widehat{f}}_{Y}\left(\omega \right)=\frac{1}{T}\sum _{j=-\lfloor \frac{T}{2}\rfloor +1}^{\lfloor \frac{T}{2}\rfloor}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}{K}_{h}(\omega -{\omega}_{j})\phantom{\rule{0.277778em}{0ex}}{I}_{Y,T}\left({\omega}_{j}\right),$$
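As a concrete illustration, the smoothed-periodogram estimator above can be sketched in Python. This is a minimal sketch, not the authors' implementation: the Epanechnikov kernel and the uniform frequency grid are our own choices (any kernel and bandwidth satisfying the assumptions of Section 2.1.2 would do).

```python
import numpy as np

def spectral_matrix_estimate(X, h, n_freq=None):
    """Kernel-smoothed periodogram estimate of the spectral matrix f_X(omega).

    X : (T, d) zero-mean stationary series; h : kernel bandwidth.
    Returns (freqs, f_hat) with freqs in [0, pi) and f_hat of shape
    (n_freq, d, d).
    """
    T, d = X.shape
    # Fourier frequencies omega_j = 2*pi*j/T for j = -floor(T/2)+1, ..., floor(T/2)
    j = np.arange(-(T // 2) + 1, T // 2 + 1)
    omega_j = 2 * np.pi * j / T
    t = np.arange(1, T + 1)
    # DFT J_T(omega_j) = (2*pi*T)^{-1/2} sum_t X_t exp(-i t omega_j)
    J = (X.T @ np.exp(-1j * np.outer(t, omega_j))) / np.sqrt(2 * np.pi * T)
    # Periodogram matrices I_T(omega_j) = J(omega_j) J(omega_j)^*
    I = np.einsum('aj,bj->jab', J, J.conj())
    if n_freq is None:
        n_freq = T // 2
    freqs = np.linspace(0, np.pi, n_freq, endpoint=False)
    # Epanechnikov kernel K_h(u) = K(u/h)/h -- one admissible choice
    z = (freqs[:, None] - omega_j[None, :]) / h
    W = 0.75 * np.maximum(1 - z ** 2, 0.0) / h
    # f_hat(omega) = (1/T) sum_j K_h(omega - omega_j) I_T(omega_j)
    f_hat = np.einsum('fj,jab->fab', W, I) / T
    return freqs, f_hat
```

By construction each `f_hat[k]` is a Hermitian nonnegative-definite matrix, since it is a nonnegative combination of rank-one periodogram matrices.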

The aim of this work is to compare the two spectral matrices ${f}_{X}\left(\omega \right)$ and ${f}_{Y}\left(\omega \right)$ over a specific frequency range $(a,b)$ for some $0<a<b<\pi $. The challenge here, however, is that the dimensions of the processes ${X}_{t}$ and ${Y}_{t}$ are unequal and hence their spectral matrices have varying dimensions. We thus focus on the spread or distribution of spectral power in each of these stationary processes across different frequency ranges. More precisely, for the ${d}_{1}$-variate series ${X}_{t}$ we define the frequency-specific spectral (FS-ratio) parameter as
for some frequency band $(a,b)\subset (0,\pi )$ where $vec(\cdot)$ denotes vectorization of a matrix into a single column vector and $\left|\right|\cdot{\left|\right|}_{2}^{2}$ is the squared Euclidean norm. Observe that ${R}_{X,a,b}\in (0,1)$ can be viewed as a measure that captures the proportion of spectral power found in the frequency range $(a,b)$. Similarly using the spectral matrix ${f}_{Y}\left(\omega \right)$, ${R}_{Y,a,b}\in (0,1)$ can be defined for the ${d}_{2}$-variate series ${Y}_{t}$. Comparisons can now be made between the parameters ${R}_{X,a,b}$ and ${R}_{Y,a,b}$ to understand the amount of spectral power in the frequency range $(a,b)$ for the two multivariate series with unequal dimensions.

$${R}_{X,a,b}\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}\frac{\phantom{\rule{0.277778em}{0ex}}{r}_{X,a,b}}{{r}_{X,0,\pi}}\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}\frac{{\int}_{a}^{b}\left|\right|vec\left({f}_{X}\left(\omega \right)\right){\left|\right|}_{2}^{2}d\omega}{{\int}_{0}^{\pi}\left|\right|vec\left({f}_{X}\left(\omega \right)\right){\left|\right|}_{2}^{2}d\omega}$$

The data analogue of the FS-ratio parameter in (3) is then given by the FS-ratio statistic:
for some $0<a<b<\pi $. Similarly, the data analogue ${\widehat{R}}_{Y,a,b}$ can be obtained for the ${d}_{2}$-variate series ${Y}_{t}$. The asymptotic properties of the quantities ${\widehat{r}}_{X,a,b}$ and ${\widehat{R}}_{X,a,b}$ are discussed in Section 2.1.2. In neuroscience applications such as the one in Section 3, pre-defined frequency bands such as Theta, Alpha, Beta and Gamma are often used to understand the distribution of spectral power across these frequency bands. As opposed to using pre-specified frequency bands, in Section 2.1.1 below we provide a data-driven technique to locate the various frequency ranges $(a,b)$ that carry significant spectral power.

$${\widehat{R}}_{X,a,b}\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}\frac{\phantom{\rule{0.277778em}{0ex}}{\widehat{r}}_{X,a,b}}{{\widehat{r}}_{X,0,\pi}}\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}\frac{{\int}_{a}^{b}\left|\right|vec\left({\widehat{f}}_{X}\left(\omega \right)\right){\left|\right|}_{2}^{2}d\omega}{{\int}_{0}^{\pi}\left|\right|vec\left({\widehat{f}}_{X}\left(\omega \right)\right){\left|\right|}_{2}^{2}d\omega}$$
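Given a spectral matrix estimate on a discrete frequency grid (for instance from any smoothed periodogram), the statistic in the display above can be computed directly; the Riemann-sum approximation of the two integrals is our simplification.

```python
import numpy as np

def fs_ratio(freqs, f_hat, a, b):
    """FS-ratio statistic: proportion of total spectral power in the band (a, b).

    freqs : grid of frequencies in (0, pi);
    f_hat : (n_freq, d, d) spectral matrix estimates on that grid.
    The statistic is blind to d, so series of unequal dimensions can be compared.
    """
    # ||vec(f_hat(omega))||_2^2 at each grid frequency
    power = np.sum(np.abs(f_hat) ** 2, axis=(1, 2))
    in_band = (freqs >= a) & (freqs <= b)
    # Ratio of Riemann sums; the common grid spacing cancels
    return np.sum(power[in_band]) / np.sum(power)
```

Because only the squared Euclidean norm of the vectorized spectral matrix enters, the same function applies unchanged to a $d_1$-variate and a $d_2$-variate series.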

In this section we describe our technique that uses the FS-Ratio statistic to find the frequency bands of interest. More precisely, we aim to locate the intervals $(a,b)$ used in (3) and (4) wherein the multivariate time series has significant proportions of spectral power.

Let ${X}_{t}$ be a ${d}_{1}$-variate zero-mean second order stationary time series with its ${d}_{1}\times {d}_{1}$ spectral matrix given by ${f}_{X}\left(\omega \right)$. With the FS-Ratio parameter defined in (3), we consider the scan parameter
for a small $\Delta >0$ and $0<a<\pi $. For the data analogue of the parameter above we consider a discretized sequence of frequency points $0<{a}_{1}<{a}_{2}<\dots <{a}_{Q}<\pi $ and evaluate the scan statistic as
for $j=1,2,\dots ,Q-1$. A plot of the scan statistic ${\widehat{\lambda}}_{X,{a}_{j}}$ across the various frequency points ${a}_{j}$ will indicate the frequency ranges over which the spectral matrix of ${X}_{t}$ has significant proportions of spectral power. Typically, one notices upward bumps in these plots over frequency ranges that carry significant spectral power; see Example 1 below and the top panel of Figure 2. Similarly for the ${d}_{2}$-variate series ${Y}_{t}$ one can define ${\lambda}_{Y,a}$, find the estimated version ${\widehat{\lambda}}_{Y,{a}_{j}}$ and obtain the plot of it across the various frequency points ${a}_{j}$. Comparisons can then be made between the series ${X}_{t}$ and ${Y}_{t}$ using these plots. The choice of $\Delta $ in (5) and the number of points $Q$ in (6) depend on the application under consideration. Certain applications demand attention to spectral power in very small frequency ranges and certain others might not. A multiscale approach can also be used where one obtains plots of the scan statistic ${\widehat{\lambda}}_{X,{a}_{j}}$ across frequency points ${a}_{j}$ for a sequence of $\Delta $ values. Visual inspection of these plots will help detect frequency ranges wherein the upward bumps are consistent across most values of $\Delta $. If we let the interval $(0,0.5)$ correspond to the interval $(0,\pi )$, our simulation study and real data analysis indicate a choice of $\Delta =0.01$ and $Q=49$ as reasonable.

$${\lambda}_{X,a}=1-\frac{{R}_{X,0,a-\Delta}}{{R}_{X,0,a}}=1-\frac{{\int}_{0}^{a-\Delta}\left|\right|vec\left({f}_{X}\left(\omega \right)\right){\left|\right|}_{2}^{2}d\omega}{{\int}_{0}^{a}\left|\right|vec\left({f}_{X}\left(\omega \right)\right){\left|\right|}_{2}^{2}d\omega}$$

$${\widehat{\lambda}}_{X,{a}_{j}}=1-\frac{{\widehat{R}}_{X,0,{a}_{j}}}{{\widehat{R}}_{X,0,{a}_{j+1}}}$$
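The scan statistic in (6) follows the same pattern as the FS-ratio computation; a minimal sketch, where the placement of the grid points ${a}_{j}$ is an illustrative choice.

```python
import numpy as np

def scan_statistic(freqs, f_hat, Q=49):
    """Scan statistic lambda_hat_{X,a_j} of (6) on a grid a_1 < ... < a_Q.

    freqs : frequency grid in (0, pi);
    f_hat : (n_freq, d, d) spectral matrix estimates.
    Returns the grid points a_1, ..., a_{Q-1} and the scan statistic values.
    """
    # ||vec(f_hat(omega))||_2^2 at each grid frequency
    power = np.sum(np.abs(f_hat) ** 2, axis=(1, 2))
    a_grid = np.linspace(0, np.pi, Q + 2)[1:-1]   # Q interior points of (0, pi)
    # Cumulative proportion R_hat_{X,0,a_j} at each grid point (Riemann sums)
    R = np.array([power[freqs <= a].sum() for a in a_grid]) / power.sum()
    lam = 1.0 - R[:-1] / R[1:]                    # equation (6)
    return a_grid[:-1], lam
```

Since $R$ is nondecreasing in the upper endpoint, the statistic lies in $[0,1]$, and upward bumps occur over bands contributing a large share of power.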

We next provide a simple illustration of the scan statistic ${\widehat{\lambda}}_{X,a}$ through the following simulation example and show how it is useful in detecting important frequency ranges. The simulation scheme in this illustration is designed to mimic the real data situation in Section 3. There the entire duration of the neuroscience experiment is divided into non-overlapping successive time segments (a total of N epochs). The multivariate stationary processes of interest in these N epochs tend to have different dimensions and we attempt to mimic that scenario.

We consider N stationary processes, ${X}_{1,t},{X}_{2,t},\dots ,{X}_{N,t}$, with the series ${X}_{i,t}$ given by
where $i=1,2,\dots ,N=600$ epochs, $t=1,2,\dots ,T=1000$. Here ${V}_{i,t}^{\left(1\right)}\in {\mathbb{R}}^{3}$ and its components are given by ${v}_{0,t+k-1}+{v}_{1,t+k-1}$ for $k=1,2,3$, where ${v}_{0,t}$ follows an AR$\left(2\right)$ process with coefficients $(-0.8,-0.7)$ and ${v}_{1,t}$ follows an AR$\left(2\right)$ process with coefficients $(0.25,-0.75)$. The components of ${V}_{i,t}^{\left(2\right)}\in {\mathbb{R}}^{2}$ are given by ${v}_{2,t+k-1}$ for $k=1,2$, where ${v}_{2,t}$ follows an AR$\left(2\right)$ process with coefficients $(1.25,-0.75)$.

$${X}_{i,t}=\left\{\begin{array}{cc}{V}_{i,t}^{\left(1\right)}\hfill & \mathrm{if}\phantom{\rule{4pt}{0ex}}i<\frac{N}{2}\hfill \\ {V}_{i,t}^{\left(2\right)}\hfill & \mathrm{if}\phantom{\rule{4pt}{0ex}}i\ge \frac{N}{2}\hfill \end{array}\right.$$
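The simulation design in (7) can be sketched as follows; the innovation distribution (standard normal) and the burn-in length are our own assumptions, as they are not specified above.

```python
import numpy as np

def ar2(T, phi1, phi2, rng, burn=200):
    """Simulate T observations from an AR(2) process with the given coefficients."""
    e = rng.standard_normal(T + burn)
    x = np.zeros(T + burn)
    for s in range(2, T + burn):
        x[s] = phi1 * x[s - 1] + phi2 * x[s - 2] + e[s]
    return x[burn:]

def simulate_epoch(i, N=600, T=1000, seed=None):
    """One epoch of the design in (7): a 3-variate series with components
    v_{0,t+k-1} + v_{1,t+k-1} (k = 1, 2, 3) before epoch N/2, and a
    2-variate series with components v_{2,t+k-1} (k = 1, 2) afterwards."""
    rng = np.random.default_rng(seed if seed is not None else i)
    if i < N // 2:
        v0 = ar2(T + 2, -0.8, -0.7, rng)
        v1 = ar2(T + 2, 0.25, -0.75, rng)
        return np.stack([v0[k:k + T] + v1[k:k + T] for k in range(3)], axis=1)
    v2 = ar2(T + 1, 1.25, -0.75, rng)
    return np.stack([v2[k:k + T] for k in range(2)], axis=1)
```

All three coefficient pairs satisfy the AR(2) stationarity conditions, so each epoch yields a stationary series whose dimension changes from 3 to 2 at epoch $N/2$.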

We consider a discretized set of frequency points $\{{a}_{1},{a}_{2},\dots ,{a}_{Q}\}$ of the interval $(0,\pi )$. At each point ${a}_{j}$ we evaluate the average of the scan statistic ${\widehat{\lambda}}_{X,{a}_{j}}$ over epochs 1–299 and the average of the scan statistic over epochs 300–600. More precisely, at each frequency point ${a}_{j}$ and at each epoch, we obtain ${\widehat{\lambda}}_{X,{a}_{j}}$ and compute averages of these quantities over the respective epochs. In the top panel of Figure 2 we plot this average scan statistic. For epochs 1–299, ${V}_{i,t}^{\left(1\right)}$ from (7) is a combination of two AR(2) processes with spectral density peaks at roughly 0.22 and 0.33. The top left plot in Figure 2 shows the scan statistic exhibiting bumps around those frequencies. Similarly, for epochs 300–600, ${V}_{i,t}^{\left(2\right)}$ from (7) is generated from an AR(2) process with a peak at roughly 0.12. The top right plot in Figure 2 shows the scan statistic exhibiting a bump around that frequency.

In the bottom panel of Figure 2 we plot averages of the statistic ${\widehat{R}}_{X,0,{a}_{j}}$ for $j=1,2,\dots ,Q-1$. We observe that this statistic is not as capable as the scan statistic ${\widehat{\lambda}}_{X,{a}_{j}}$ in bringing out the frequency ranges of significant spectral proportions.

In this section we list the required assumptions and discuss the asymptotic properties of the statistics ${\widehat{r}}_{X,a,b}$ and FS-ratio ${\widehat{R}}_{X,a,b}$.

Let ${Z}_{t}={({X}_{t},{Y}_{t})}^{{}^{\prime}},\phantom{\rule{0.277778em}{0ex}}t\in \mathbb{Z}$ be a $({d}_{1}+{d}_{2})$-variate zero-mean second-order stationary time series. For any $k>0$, the kth order cumulants of ${Z}_{t}$ satisfy
for $j=1,2,\dots ,k-1$ and ${b}_{1},{b}_{2},\dots ,{b}_{k}=1,2,\dots ,d={d}_{1}+{d}_{2}$ where ${c}_{{b}_{1},{b}_{2},\dots ,{b}_{k}}({u}_{1},{u}_{2},\dots ,{u}_{k-1})$ is the kth order joint cumulant of ${Z}_{{b}_{1},{u}_{1}},\dots ,{Z}_{{b}_{k-1},{u}_{k-1}},{Z}_{{b}_{k},0}$ as defined in Brillinger [8].

$$\sum _{{u}_{1},{u}_{2},\dots ,{u}_{k-1}\in \mathbb{Z}}[\phantom{\rule{0.277778em}{0ex}}1+|{u}_{j}{|}^{2}\phantom{\rule{0.277778em}{0ex}}]\phantom{\rule{0.277778em}{0ex}}\left|{c}_{{b}_{1},{b}_{2},\dots ,{b}_{k}}({u}_{1},{u}_{2},\dots ,{u}_{k-1})\right|\phantom{\rule{0.277778em}{0ex}}<\infty \phantom{\rule{-5.69046pt}{0ex}}$$

Please note that the kth order cumulant is given by ${c}_{{b}_{1},{b}_{2},\dots ,{b}_{k}}({u}_{1},{u}_{2},\dots ,{u}_{k-1})$$=cum\{{Z}_{{b}_{1},{u}_{1}},\dots ,{Z}_{{b}_{k-1},{u}_{k-1}},{Z}_{{b}_{k},0}\}$ where ${Z}_{{b}_{r},{u}_{s}}$ refers to component ${b}_{r}$ of the vector ${Z}_{{u}_{s}}$ with ${u}_{s}$ being the time point; see Theorem 2.3.2 of Brillinger [8]. For example when $k=2$, the 2nd order cumulant $cum\{{Z}_{{b}_{1},{u}_{1}},{Z}_{{b}_{2},{u}_{2}}\}=cov({Z}_{{b}_{1},{u}_{1}},{Z}_{{b}_{2},{u}_{2}})$ is the covariance between those two random variables.

(a). The kernel function $K(\cdot)$ is bounded, symmetric, nonnegative and Lipschitz-continuous with compact support $[-\pi ,\pi ]$ and
where $K\left(\omega \right)$ has a continuous Fourier transform $k\left(u\right)$ such that

$${\int}_{-\pi}^{\pi}K\left(\omega \right)d\omega =1.\phantom{\rule{-5.69046pt}{0ex}}$$

$$\int {k}^{2}\left(u\right)\mathrm{d}u<\infty \phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}and\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\int {k}^{4}\left(u\right)\mathrm{d}u<\infty .\phantom{\rule{-5.69046pt}{0ex}}$$

(b). The bandwidth h is such that ${h}^{9/2}T\to 0$ and ${h}^{2}T\to \infty $ as $T\to \infty $.

Assumptions 1 and 2 above are the same as in Eichler [9]: the first requires the existence of moments of all orders of ${Z}_{t}$ and the second ensures consistency of the estimated spectral matrix. It must be noted that the assumptions on the kernel and bandwidth are primarily for establishing the asymptotic result in (13) and can be weakened for Theorem 1.

Suppose that Assumptions 1,2 are satisfied. Then as $T\to \infty $,
where ${f}_{X}\left(\omega \right)={\left({f}_{X,rs}\right)}_{r,s=1,2,\dots ,{d}_{1}}$ is the ${d}_{1}\times {d}_{1}$ spectral matrix of ${X}_{t}$ and $\stackrel{P}{\to}$ denotes convergence in probability. Furthermore, let ${\overline{\Pi}}_{(a,b)}=(0,\pi )\backslash (a,b)$ for some $0<a<b<\pi $. If ${r}_{X,a,b}>0$ and ${r}_{X,{\overline{\Pi}}_{(a,b)}}>0$,
where ${r}_{X,{\overline{\Pi}}_{(a,b)}}={\int}_{{\overline{\Pi}}_{(a,b)}}\phantom{\rule{0.277778em}{0ex}}\left|\right|vec\left({f}_{X}\left(\omega \right)\right){\left|\right|}_{2}^{2}\phantom{\rule{0.277778em}{0ex}}d\omega $.

$$\left(a\right).\phantom{\rule{56.9055pt}{0ex}}{\widehat{r}}_{X,a,b}\phantom{\rule{0.277778em}{0ex}}\stackrel{P}{\to}\phantom{\rule{0.277778em}{0ex}}{\int}_{a}^{b}\phantom{\rule{0.277778em}{0ex}}\sum _{r,s=1}^{{d}_{1}}\phantom{\rule{0.277778em}{0ex}}{f}_{X,rs}\left(\omega \right)\phantom{\rule{0.277778em}{0ex}}\overline{{f}_{X,rs}\left(\omega \right)}\phantom{\rule{0.277778em}{0ex}}d\omega ,\phantom{\rule{56.9055pt}{0ex}}\phantom{\rule{-5.69046pt}{0ex}}$$

$$\left(b\right).\phantom{\rule{73.97733pt}{0ex}}{\widehat{R}}_{X,a,b}\phantom{\rule{0.277778em}{0ex}}\stackrel{P}{\to}\phantom{\rule{0.277778em}{0ex}}{\left(1+\frac{{r}_{X,{\overline{\Pi}}_{(a,b)}}}{\phantom{\rule{0.277778em}{0ex}}{r}_{X,a,b}}\right)}^{-1}\phantom{\rule{91.04872pt}{0ex}}\phantom{\rule{-5.69046pt}{0ex}}$$

See Appendix B for details of the proof. □

Please note that in finite sample situations explored using simulation examples in Appendix A and the real data application in Section 3, we use the block bootstrap technique of Politis and Romano [10] for resampling from a stationary process. This is done to obtain sample quantiles of the FS-ratio statistic ${\widehat{R}}_{X,a,b}$.
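A sketch of the stationary bootstrap of Politis and Romano [10] applied to a generic statistic follows; the expected-block-length parameter `p`, the quantile levels and the number of replicates are placeholder choices, and `stat` stands in for the FS-ratio statistic computed on the resampled series.

```python
import numpy as np

def stationary_bootstrap_indices(T, p, rng):
    """Index sequence for the stationary bootstrap: blocks of geometric(p)
    lengths with uniformly random starting points, wrapped around the sample."""
    idx = np.empty(T, dtype=int)
    t = 0
    while t < T:
        start = rng.integers(T)
        length = rng.geometric(p)          # expected block length 1/p
        for k in range(min(length, T - t)):
            idx[t + k] = (start + k) % T   # circular wrap at the boundary
        t += length
    return idx

def fs_ratio_quantiles(X, stat, n_boot=200, p=0.05, q=(0.025, 0.975), seed=0):
    """Bootstrap quantiles of a statistic computed on resampled series.

    stat : callable mapping a (T, d) array to a scalar
           (e.g. an FS-ratio over a fixed band).
    """
    rng = np.random.default_rng(seed)
    T = X.shape[0]
    vals = [stat(X[stationary_bootstrap_indices(T, p, rng)])
            for _ in range(n_boot)]
    return np.quantile(vals, q)
```

The geometric block lengths make the resampled series stationary, which is the property that motivates this scheme over fixed-block resampling here.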

In a special case wherein the dimensions of the two processes are the same (${d}_{1}={d}_{2}$), we wish to test for the equality of spectral matrices of same dimensions over an interval $0<a<b<\pi $. Let us assume ${d}_{1}={d}_{2}$ and $d={d}_{1}+{d}_{2}$ and define the $d\times d$ spectral matrix of ${Z}_{t}={({X}_{t},{Y}_{t})}^{{}^{\prime}}$ as
where the ${d}_{1}\times {d}_{1}$ matrix ${f}_{Z,12}\left(\omega \right)$ is the cross-spectral matrix of the processes ${X}_{t}$ and ${Y}_{t}$ and ${f}_{Z,11}\left(\omega \right)$ and ${f}_{Z,22}\left(\omega \right)$ are the spectral matrices of ${X}_{t}$ and ${Y}_{t}$ respectively. We consider testing
where $0<a<b<\pi $. The test statistic is

$${f}_{Z}\left(\omega \right)\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}\left[\begin{array}{cc}{f}_{Z,11}\left(\omega \right)& {f}_{Z,12}\left(\omega \right)\\ {f}_{Z,21}\left(\omega \right)& {f}_{Z,22}\left(\omega \right)\end{array}\right]$$

$${H}_{0}\phantom{\rule{0.277778em}{0ex}}:\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{4pt}{0ex}}{f}_{X}\left(\omega \right)={f}_{Y}\left(\omega \right)\phantom{\rule{0.277778em}{0ex}}\forall \phantom{\rule{0.277778em}{0ex}}\omega \in (a,b)$$

$${\widehat{D}}_{X,Y}={\int}_{a}^{b}\left|\right|vec({\widehat{f}}_{X}\left(\omega \right)-{\widehat{f}}_{Y}\left(\omega \right)){\left|\right|}_{2}^{2}d\omega .$$
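Given spectral matrix estimates of equal dimension on a common uniform frequency grid, the statistic in (12) is a band-restricted $L_2$ distance; the Riemann-sum integration below is our simplification.

```python
import numpy as np

def discrepancy_statistic(freqs, fX_hat, fY_hat, a, b):
    """D_hat_{X,Y} of (12): integrated squared distance between two spectral
    matrix estimates of the same dimension over the frequency band (a, b)."""
    if fX_hat.shape != fY_hat.shape:
        raise ValueError("the two spectral matrices must have equal dimensions")
    in_band = (freqs >= a) & (freqs <= b)
    d_omega = freqs[1] - freqs[0]          # uniform grid spacing
    diff = fX_hat[in_band] - fY_hat[in_band]
    # ||vec(fX - fY)||_2^2 summed over the band times the grid spacing
    return float(np.sum(np.abs(diff) ** 2) * d_omega)
```

Unlike the FS-ratio, this statistic requires ${d}_{1}={d}_{2}$, which is exactly the special case considered here.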

The ${L}_{2}$ norm above in (12) on the spectral matrices is similar to the statistics considered in Eichler [9] and Dette and Paparoditis [11], wherein the problem of testing equality of spectral matrices is discussed. Suppose that Assumptions 1 and 2 are satisfied; then an application of Theorem 3.5 of Eichler [9] yields, under ${H}_{0}$,
where
and
where $\stackrel{D}{\to}$ denotes convergence in distribution,
and ${\delta}_{rs}=I(r=s)$ is the Kronecker delta and tr(·) denotes the trace of a matrix.

$$2\pi T\sqrt{h}\phantom{\rule{0.277778em}{0ex}}{\widehat{D}}_{X,Y}-\frac{{\mu}_{XY}}{\sqrt{h}}\stackrel{D}{\to}N(0,{\sigma}_{XY}^{2})\phantom{\rule{-5.69046pt}{0ex}}$$

$${\mu}_{XY}={A}_{K}{\int}_{-\pi}^{\pi}{1}_{\omega \in (a,b)}\left(\phantom{\rule{0.277778em}{0ex}}\sum _{{p}_{1},{p}_{2}=1}^{2}\left(\phantom{\rule{0.277778em}{0ex}}-1\phantom{\rule{0.277778em}{0ex}}+\phantom{\rule{0.277778em}{0ex}}2{\delta}_{{p}_{1}{p}_{2}}\phantom{\rule{0.277778em}{0ex}}\right)tr{\left({f}_{Z,{p}_{1}{p}_{2}}\left(\omega \right)\right)}^{2}\right)d\omega \phantom{\rule{-5.69046pt}{0ex}}$$

$${\sigma}_{XY}^{2}={B}_{K}{\int}_{-\pi}^{\pi}{1}_{\omega \in (a,b)}\left(\phantom{\rule{0.277778em}{0ex}}\sum _{{p}_{1},{p}_{2},{p}_{3},{p}_{4}=1}^{2}(\phantom{\rule{0.277778em}{0ex}}-1\phantom{\rule{0.277778em}{0ex}}+\phantom{\rule{0.277778em}{0ex}}2{\delta}_{{p}_{1}{p}_{2}}\phantom{\rule{0.277778em}{0ex}})\phantom{\rule{0.277778em}{0ex}}(\phantom{\rule{0.277778em}{0ex}}-1\phantom{\rule{0.277778em}{0ex}}+\phantom{\rule{0.277778em}{0ex}}2{\delta}_{{p}_{3}{p}_{4}}\phantom{\rule{0.277778em}{0ex}})\phantom{\rule{0.277778em}{0ex}}tr{\left(\phantom{\rule{0.277778em}{0ex}}{f}_{Z,{p}_{1}{p}_{3}}\left(\omega \right)\phantom{\rule{0.277778em}{0ex}}{\overline{{f}_{Z,{p}_{2}{p}_{4}}\left(\omega \right)}}^{T}\phantom{\rule{0.277778em}{0ex}}\right)}^{2}\right)d\omega .\phantom{\rule{-5.69046pt}{0ex}}$$

$${A}_{K}={\int}_{-\pi}^{\pi}{K}^{2}\left(v\right)dv,\phantom{\rule{0.277778em}{0ex}}{B}_{K}=4{\int}_{a-\pi}^{b+\pi}\phantom{\rule{0.277778em}{0ex}}{\left({\int}_{-\pi}^{\pi}K\left(u\right)K(u+v)du\right)}^{2}\phantom{\rule{0.277778em}{0ex}}dv$$

In the neuroscience application in Section 3, the entire duration of the experiment is divided into non-overlapping successive time segments (a total of N epochs). Each epoch results in a multivariate stationary process of interest with the dimensions of these processes varying across epochs. Letting ${X}_{1,t},{X}_{2,t},\dots ,{X}_{N,t}$ be the N stationary processes at these epochs, one can obtain the FS-ratio statistics ${\widehat{R}}_{{X}_{i},a,b}$, for $i=1,2,\dots ,N$, and view this as a series with time index being the epoch index i. Applying change point detection to this series to formally test for the significance of change points would require a divergence measure that captures the distance between ${\widehat{R}}_{{X}_{i},a,b}$ and ${\widehat{R}}_{{X}_{j},a,b}$ when $i\ne j$. Different norms can be used to construct this divergence measure and this would serve as the test statistic. Large sample distributions of this statistic would provide the critical values necessary for the test. One of the issues here would be in dealing with differing errors in estimating the FS-ratio statistics when the dimensions of the two series are very different, and this needs further investigation.

In this section, we investigate the ability of the FS-ratio statistic to identify changes in the spectral properties of the local field potential (LFP) of a rat (Local field potential data on the experimental rat comes from the stroke experiment conducted at Frostig laboratory at University of California Irvine: http://frostiglab.bio.uci.edu/Home.html). The aim is to identify changes in complexity and structure of the multivariate cortex signal over the course of the experiment. It is also of interest to understand the differential roles of frequency bands and determine the specific bands that demonstrate the most significant changes that occurred due to the stroke.

At 32 locations on the rat’s cortex, microelectrodes are inserted: 4 layers in the cortex, at 300 $\mathsf{\mu}$m, 700 $\mathsf{\mu}$m, 1100 $\mathsf{\mu}$m and 1500 $\mathsf{\mu}$m, with 8 microelectrodes lined up in each of the 4 layers. We look at the field potential specific to the 32 locations recorded for a total duration of 10 min. This 10 min duration is divided into $N=600$ epochs (distinct successive non-overlapping time segments of the duration of the experiment) with each epoch comprising 1 s worth of data. The sampling rate here is 1000 Hz, resulting in $T=1000$ observations per epoch. Midway through the recording period (after epoch 300) a stroke is artificially induced by clamping the medial cerebral artery that supplied blood to the recorded area.

As a first step in our analysis, we applied a component-wise univariate test of second-order stationarity (Dwivedi and Subba Rao [12]) of the LFP signal at each epoch. In Figure 3, we present the p-values from a test of second-order stationarity carried out on each of the $p=32$ microelectrodes at each epoch. We notice that these individual microelectrodes are more stationary after the stroke than before.

Next, we model the observed 32-dimensional signal as a multivariate nonstationary time series using the stationary subspace analysis (SSA) setup. We assume the observed $p=32$ dimensional LFP signal ${S}_{i,t}$ is linearly generated by stationary and nonstationary sources in the cortex. More precisely we have,
where ${X}_{i,t}\in {\mathbb{R}}^{{d}_{i}}$ is the latent stationary source, ${A}_{i}$ is a $p\times {d}_{i}$ unknown mixing matrix and ${\epsilon}_{i,t}$ represents the nonstationary sources. This setup of starting with an observed nonstationary time series and, after some transformation, arriving at a lower-dimensional stationary time series has interesting applications in neuroscience. For instance, EEG signals measuring brain activity often appear as multivariate nonstationary time series; see Ombao et al. [13], Srinivasan [14], Nunez and Srinivasan [15], von Bünau et al. [16], Wu et al. [17], Gao et al. [18], Euán et al. [19] for examples. Kaplan et al. [20] regard the nonstationarity as background activity in the brain signal, and removing this nonstationarity was seen to improve prediction accuracy in neuroscience experiments; von Bünau et al. [21] and von Bünau et al. [16]. Thus, the aim of SSA is to separate the stationary from the nonstationary sources within each epoch and focus attention on the stationary sources. From a stroke neuroscientist’s perspective, the stationary sources within a short epoch of 1 s are considered to be the “stable” components of the signal since they are consistent within that short interval. The word consistent here refers to the statistical properties of the signal remaining the same within an epoch. Of course the transient (nonstationary) components may also be of interest in other applications.

$${S}_{i,t}\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}{A}_{i}{X}_{i,t}\phantom{\rule{0.277778em}{0ex}}+\phantom{\rule{0.277778em}{0ex}}{\epsilon}_{i,t},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}i=1,2,\dots ,N=600,$$

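The generative model above can be made concrete with a short simulation. This is a minimal sketch, not the paper's data-generating process: the AR(1) sources, the random-walk nonstationary term, and the particular half-orthogonal choice of ${A}_{i}$ are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ssa_epoch(p=32, d=10, T=1000, rng=rng):
    """Simulate one epoch of the model S_t = A X_t + eps_t with d
    stationary AR(1) sources mixed into p channels and an additive
    nonstationary (random-walk) term."""
    # Stationary sources: independent AR(1) processes.
    X = np.zeros((T, d))
    e = rng.standard_normal((T, d))
    for t in range(1, T):
        X[t] = 0.5 * X[t - 1] + e[t]
    # Half-orthogonal p x d mixing matrix (A^T A = I_d).
    A, _ = np.linalg.qr(rng.standard_normal((p, d)))
    # Nonstationary sources: small random walks in each channel.
    eps = np.cumsum(0.1 * rng.standard_normal((T, p)), axis=0)
    S = X @ A.T + eps
    return S, X, A

S, X, A = simulate_ssa_epoch()
print(S.shape)  # (1000, 32)
```

The half-orthogonal ${A}_{i}$ mirrors the identifiability convention discussed later for the simulation Scheme 4.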
The next goal in the data analysis is to estimate the epoch-evolving dimension ${d}_{i}$ and the latent stationary time series ${X}_{i,t}\in {\mathbb{R}}^{{d}_{i}}$ where ${d}_{i}<p$. In Figure 4, we apply SSA and plot the estimates of the stationary subspace dimension ${d}_{i}$ across $N=600$ epochs using the method in Sundararajan et al. [22].

The evolutionary dimension ${d}_{i}$ of the latent stationary sources is presented in Figure 4. The plot indicates an increase in the number of stationary sources in the post-stroke epochs (after epoch 300), and this agrees with the results in Figure 3, wherein more epochs after the stroke exhibit stationary behavior in the individual LFP components. It is indeed interesting that immediately post-occlusion (immediately after stroke onset) the LFPs are highly synchronized: the plots of the observed LFP ${S}_{i,t}$ and the estimated squared coherence between the 32 components (Figure 5) suggest that the different electrodes look very similar and that there is high coherence across the entire network of electrodes at various frequency bands. Please note that for the observed 32-dimensional signal ${S}_{i,t}$ in epoch i, the squared coherence between two components ${S}_{p,i,t}$ and ${S}_{q,i,t}$, for $p\ne q$, at frequency $\omega $ is given by
$${C}_{p,q}\left(\omega \right)=\frac{{|{f}_{S,pq}\left(\omega \right)|}^{2}}{{f}_{S,pp}\left(\omega \right){f}_{S,qq}\left(\omega \right)},$$

where ${f}_{S,pq}\left(\omega \right)$ denotes the cross-spectrum between the two components and ${f}_{S,pp}\left(\omega \right)$ and ${f}_{S,qq}\left(\omega \right)$ are the univariate spectra of the component series ${S}_{p,i,t}$ and ${S}_{q,i,t}$, respectively. This observation of high coherence across electrodes immediately post-occlusion was confirmed by the neuroscientists and also reported in Ellen Wann’s PhD dissertation (Wann [23]). Next, we investigate the lead-lag cross-dependence between microelectrodes. We pre-whitened the observed time series so that the lag-0 covariance matrix is the identity. More precisely, one considers ${\Sigma}_{i}^{-1/2}{S}_{i,t}$, where ${\Sigma}_{i}^{-1/2}$ is the inverse square root of the lag-0 covariance matrix $V\left({S}_{i,t}\right)$. We observe, in Figure 5, a significant drop in the magnitude of squared coherence after pre-whitening, indicating that the dependence among the 32 components is predominantly contemporaneous (i.e., lag-0). One can also notice, from the right plot in Figure 5, a drop in the coherence in the Gamma frequency band after the stroke.
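The quantity ${C}_{p,q}(\omega)$ is exactly the magnitude-squared coherence produced by standard Welch-type estimators. A minimal sketch, using a synthetic pair of channels sharing a 20 Hz (Beta-band) component; the 1000 Hz sampling rate and noise levels are illustrative assumptions, not the LFP recording parameters.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 1000.0                      # assumed sampling rate in Hertz
t = np.arange(2000) / fs

# Two synthetic channels sharing a 20 Hz (Beta-band) component.
common = np.sin(2 * np.pi * 20.0 * t)
s_p = common + 0.5 * rng.standard_normal(t.size)
s_q = common + 0.5 * rng.standard_normal(t.size)

# Welch-type estimate of C_pq(w) = |f_pq(w)|^2 / (f_pp(w) f_qq(w)).
f, C = coherence(s_p, s_q, fs=fs, nperseg=256)
print(f[np.argmax(C)])           # frequency of maximal coherence (near 20 Hz)
```

Because the numerator is normalized by the univariate spectra, $C_{p,q}(\omega)$ always lies in $[0,1]$, which is what makes it comparable across electrode pairs and frequency bands.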

We then estimated the latent stationary sources ${X}_{i,t}$ for the $i=1,2,\dots ,N=600$ epochs using the DSSA method in Sundararajan and Pourahmadi [24]. To overcome identifiability issues in the model in (16), SSA and PCA methods for time series assume an identity lag-0 covariance matrix for ${X}_{i,t}$ and resort to a pre-whitening technique to achieve this. Figure 6 plots the average squared coherence in the non-pre-whitened and pre-whitened stationary sources across different frequency bands. Similar to the coherence pattern in the observed LFP in Figure 5, the left plot in Figure 6 shows an increase in the coherence after the occurrence of the stroke. This indicates the importance of the stationary components in explaining the high degree of synchronicity. The right plot in Figure 6 indicates a substantial drop in the magnitude of coherence after pre-whitening: the pre-whitened stationary sources have lower coherence than the non-pre-whitened sources. As noted, previous findings have already indicated increased coherence post stroke onset. Our analysis provides the additional insight that the increase in coherence post-stroke is due only to contemporaneous (lag-0) dependence. This indicates perfect temporal synchrony in the sense that there is no lead-lag cross-dependence between the electrodes. This was suggested by visual inspection of the LFP traces and hypothesized by neuroscientists, though never formally confirmed until now.
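The pre-whitening step ${\Sigma}_{i}^{-1/2}{S}_{i,t}$ can be sketched as follows, with the inverse square root computed from an eigendecomposition of the sample lag-0 covariance. The simulated mixing matrix is an illustrative stand-in for correlated channels.

```python
import numpy as np

def prewhiten(S):
    """Return Sigma^{-1/2} (S_t - mean), so that the lag-0 sample
    covariance of the output is the identity."""
    Sc = S - S.mean(axis=0)
    Sigma = np.cov(Sc, rowvar=False)
    vals, vecs = np.linalg.eigh(Sigma)           # Sigma is symmetric PSD
    Sigma_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return Sc @ Sigma_inv_sqrt

rng = np.random.default_rng(2)
# Cross-sectionally correlated noise through an arbitrary mixing matrix.
M = np.array([[2.0, 0.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.0, 0.5, 1.0, 0.0],
              [0.0, 0.0, 0.3, 1.0]])
S = rng.standard_normal((5000, 4)) @ M
W = prewhiten(S)
print(np.round(np.cov(W, rowvar=False), 2))      # approximately the identity
```

Since pre-whitening removes exactly the lag-0 dependence, any coherence that survives it reflects genuine lead-lag cross-dependence, which is the logic behind the comparison in Figures 5 and 6.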

Next, the FS-ratio statistic was evaluated on these estimated stationary sources at each of the 600 epochs and at various frequency bands. Figure 7 plots the estimated FS-ratio statistic ${\widehat{R}}_{{X}_{i},a,b}$, $i=1,2,\dots ,N=600$, for the known frequency bands: Theta (4–8 Hertz), Alpha (8–12 Hertz), Beta (12–30 Hertz) and Gamma (30–50 Hertz). At each epoch i, we obtained a $95\%$ confidence interval for the FS-ratio statistic using the block bootstrap technique of Politis and Romano [10]. To select the block length, we follow the procedure in Politis and White [25] and Patton et al. [26]. Please note that this procedure is for the univariate case; hence, we apply it to each component of the multivariate process ${X}_{i,t}$ and take the block length to be the average over all components. The confidence intervals are the blue shaded regions in Figure 7 and Figure 8.
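The FS-ratio computation can be sketched in a few lines. This is a simplified illustration of the statistic, not the paper's implementation: it uses a Welch cross-spectral estimate in place of the kernel-smoothed periodogram, a toy AR(2) process rather than the estimated sources, and band edges in cycles per sample chosen for the example.

```python
import numpy as np
from scipy.signal import csd

def fs_ratio(X, band, fs=1.0, nperseg=256):
    """FS-ratio over `band` = (a, b): the proportion of total spectral
    power, measured by the squared L2-norm of vec(f(w)) summed over the
    frequency grid, that lies inside the band.  Because it is a
    proportion, the statistic is blind to the dimension of X."""
    T, d = X.shape
    f = np.fft.rfftfreq(nperseg, d=1.0 / fs)     # Welch frequency grid
    F = np.zeros((f.size, d, d), dtype=complex)
    for r in range(d):
        for s in range(d):
            _, F[:, r, s] = csd(X[:, r], X[:, s], fs=fs, nperseg=nperseg)
    power = np.sum(np.abs(F) ** 2, axis=(1, 2))  # ||vec f(w)||_2^2
    a, b = band
    in_band = (f >= a) & (f < b)
    return power[in_band].sum() / power.sum()

rng = np.random.default_rng(3)
# Toy bivariate process: independent AR(2) components peaking near
# 0.1 cycles/sample (phi1 = 2*rho*cos(2*pi*0.1), phi2 = -rho^2).
e = rng.standard_normal((4000, 2))
X = np.zeros_like(e)
phi1, phi2 = 2 * 0.95 * np.cos(2 * np.pi * 0.1), -0.95 ** 2
for t in range(2, 4000):
    X[t] = phi1 * X[t - 1] + phi2 * X[t - 2] + e[t]
print(round(fs_ratio(X, band=(0.05, 0.15)), 2))  # most power lies near the peak
```

By construction the ratios over a partition of $(0,\pi)$ sum to one, so FS-ratios are directly comparable across epochs even when the estimated dimension ${d}_{i}$ changes.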

The FS-ratio statistic differs between the pre and post stroke epochs in the Theta, Alpha, and Beta bands but not in the Gamma band. The biggest difference in FS-ratio between pre and post stroke is in the Beta band, wherein there is a decrease in the amount of spectral information after the stroke. Figure 8 also presents the FS-ratio statistic on other specified frequency bands wherein one notices differences between the pre and post stroke epochs.

Table 1 and Table 2 contain numerical summaries of the FS-ratio statistic for the pre and post stroke epochs at various frequency bands. The Beta band shows the largest difference between the pre and post stroke epochs, while the Gamma band is consistent throughout the experiment’s 600 epochs. Within the pre-stroke epochs (and also within the post-stroke epochs), the most variation in FS-ratio is observed in the Beta band.

The p-values presented in Figure 3 represent a test of second-order stationarity carried out on each of the $p=32$ microelectrodes at each epoch. We noticed that immediately after stroke the individual microelectrodes behaved in a more stationary manner and this was visibly different from what was observed before the stroke. Based on this analysis, it might be plausible that the LFP signal, under normal circumstances, exhibits nonstationary behavior and immediately post stroke the signal behaves in a more stationary manner thereby showing that the brain’s typical functions are affected. The plots of the observed LFP ${S}_{i,t}$ and the estimated squared coherence between the 32 components (Figure 5) indicate high cross-electrode coherence at various frequency bands immediately post stroke. This observation was also confirmed by the neuroscientists and also reported in Ellen Wann’s PhD dissertation (Wann [23]).

In Fontaine et al. [27], a univariate microelectrode-wise change point analysis was performed on the same LFP dataset. In their work, for various frequency bands, changes in the non-linear spectral dependence of the LFP signal are modeled using parametric copulas, and change-points are detected for a fixed microelectrode and fixed frequency band. One can notice the detection of numerous change points in the Delta, Theta, Alpha, Beta and Gamma bands for individual microelectrodes 1, 9 and 17. The detected change points span several epochs, with very few of them being close to the time of the occlusion (the induced stroke), which was epoch $i=300$.

In contrast, the advantages of our method are as follows: (i). The method treats the observed LFP signal as a multivariate nonstationary time series. Using (16), we model this observed multivariate signal as a mixture of stationary and nonstationary components. Figure 4 presents the dimension of the stationary subspace (dimension of ${X}_{i,t}$) across the 600 epochs, and this is seen to be a useful feature in understanding changes in the cortical signal after the occurrence of the induced stroke (epoch 300). In other words, an increase in the dimension ${d}_{i}$ after the stroke points to a more stationary behavior of the LFP signal. (ii). The FS-ratio statistic, with the ability to compare two multivariate processes with unequal dimensions, is applied on the estimated processes ${X}_{i,t}$ for each of the 600 epochs and frequency band specific numerical summaries are presented. The Beta frequency band is seen to display the greatest changes within the pre stroke and post stroke epochs and also between the pre stroke and post stroke epochs. Also, from Figure 7 and Figure 8, it is very easy to spot a change point at epoch 300 when the stroke was induced.

In this work, we proposed a new frequency-specific spectral ratio statistic, the FS-ratio, that is demonstrated to be useful in comparing spectral information in two multivariate stationary processes of different dimensions. The method is motivated by applications in neuroscience wherein a brain signal is recorded across several epochs and the widely used tactic is to assume the observed signal to be linearly generated by latent lower dimensional sources of interest. Applying PCA/ICA/SSA and other dimension reduction methods to the observed signal in different epochs of the experiment results in different estimates of the dimensions of the latent sources. In these situations, the FS-ratio is useful because (i). It captures the proportion of spectral power in various frequency bands by means of an ${L}_{2}$-norm on the spectral matrices and (ii). It is blind to the dimension of the stationary process as it only looks at the proportion of spectral power in frequency bands. Under mild assumptions, the asymptotic properties of the FS-ratio statistic are derived. We also provide a data-driven technique to locate the frequency bands that carry significant proportions of spectral power. In the application of our method to the LFP dataset, we witness the ability of our method in (i). identifying frequency bands where the pre and post stroke epochs are different, (ii). identifying frequency bands that account for most discrepancies within pre (and post) stroke epochs, (iii). identifying the frequency bands that are consistent across all 600 epochs of the experiment and (iv). understanding the importance of contemporaneous dependence, both in the observed LFP and the stationary sources, across the 600 epochs; this indicated perfect synchrony (no lead-lag cross-dependence) among microelectrodes immediately after the stroke.

Topological data analysis (TDA) methods for characterizing complexity and detecting phase transitions exist in the literature; see Piangerelli et al. [28], Rucco et al. [29], Wang et al. [30]. Topological features of the observed series are extracted using techniques such as persistent entropy, persistence diagrams and Betti numbers, and this can be viewed as another approach to identifying changes in a multivariate time series due to events such as epilepsy and seizure.

The stroke experiment was conducted in R.F.’s laboratory (Frostig laboratory at University of California Irvine: http://frostiglab.bio.uci.edu/Home.html) and the LFP data came from this experiment. The statistical methodology, theory and computations were performed by R.R.S. and H.O. All authors have read and agreed to the published version of the manuscript.

This work is supported in part by KAUST, NIH NS066001, Leducq Foundation 15CVD02 and NIH MH115697.

The authors declare no conflict of interest.

In this section, we illustrate the performance of the FS-ratio statistic in capturing the spread of spectral information using simulated examples. We consider four simulation schemes and report the key summaries of the FS-ratio statistic across repetitions of each scheme. In addition, 95% bootstrap confidence limits for the FS-ratio statistic are computed from $B=500$ bootstrap replications, using the block bootstrap procedure of Politis and White [25] and Patton et al. [26]. For the estimate of the spectral matrix defined in (2), the Bartlett–Priestley kernel with bandwidth $h={T}^{-0.3}$ and the Daniell kernel (see Example 10.4.1 in Brockwell and Davis [31]) with $m=\sqrt{T}$ were implemented. Similar results were obtained for the two kernel choices and only the results from the latter are presented.
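The block bootstrap used for these confidence limits can be sketched as a moving-block resampler. This is a simplified stand-in: the block length is fixed by hand rather than selected by the Politis–White procedure, and the statistic shown is a placeholder (overall variance) rather than the FS-ratio.

```python
import numpy as np

def block_bootstrap_ci(X, stat, block_len, B=500, alpha=0.05, rng=None):
    """Moving-block bootstrap CI for a statistic of a stationary series:
    draw overlapping blocks of length block_len uniformly at random,
    concatenate them to length T, and recompute the statistic."""
    if rng is None:
        rng = np.random.default_rng()
    T = X.shape[0]
    n_blocks = int(np.ceil(T / block_len))
    reps = np.empty(B)
    for b in range(B):
        starts = rng.integers(0, T - block_len + 1, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:T]
        reps[b] = stat(X[idx])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi

rng = np.random.default_rng(4)
X = rng.standard_normal((1000, 3))
# Placeholder statistic (overall variance) standing in for the FS-ratio.
lo, hi = block_bootstrap_ci(X, lambda Z: Z.var(), block_len=25, B=200, rng=rng)
print(lo, hi)
```

Resampling whole blocks, rather than individual time points, preserves the within-block serial dependence, which is why this scheme is appropriate for the serially dependent sources ${X}_{i,t}$.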

The simulation schemes presented below are designed to mimic the real data situation in Section 3. There, the entire duration of the neuroscience experiment is divided into non-overlapping successive time segments (N epochs), and from these N epochs, lower dimensional stationary sources of varying dimensions were extracted. Similarly, in the simulations below we simulate N stationary processes with varying dimensions and investigate the evolution of the FS-ratio statistic. Scheme 1 simulates N multivariate stationary VAR(2) processes ${X}_{i,t}$, $i=1,2,\dots ,N=500$, with dimensions randomly chosen from $\{2,3,\dots ,30\}$ and error vectors that are component-wise i.i.d. $N(0,1)$. The phase parameter $\theta $ of the AR components is allowed to vary across epochs, i.e., two different choices, ${\theta}_{1}$ and ${\theta}_{2}$, are included, with ${\theta}_{1}=\frac{4\pi}{25}$ for epochs $i<N/2$ and ${\theta}_{2}=\frac{4\pi}{5}$ for epochs $i\ge N/2$. This causes a shift in the frequency bands of interest as we move across the epochs. Scheme 2 is similar to Scheme 1 but includes an interaction between dimension and frequency: the lower dimensional signals simulated in epochs $i<N/2$ have a spectral peak at frequency ${\theta}_{1}=\frac{4\pi}{25}$, and the higher dimensional signals simulated in epochs $i\ge N/2$ have a peak at frequency ${\theta}_{2}=\frac{4\pi}{5}$. Scheme 3 is also similar to Scheme 1 but the error vectors are allowed to have contemporaneous dependence. The N multivariate processes in Scheme 4 are first simulated as in Scheme 1 but are then pre-multiplied with a half-orthogonal matrix, in line with the model assumption (16) made in Section 3.

$${X}_{k,i,t}={\varphi}_{i,1}{X}_{k,i,t-1}+{\varphi}_{i,2}{X}_{k,i,t-2}+{\u03f5}_{k,i,t}$$

$${\theta}_{i}=\left\{\begin{array}{cc}\mathrm{cos}\left(\frac{4\pi}{25}\right)\hfill & \mathrm{if}\phantom{\rule{0.277778em}{0ex}}i<\frac{N}{2}\hfill \\ \mathrm{cos}\left(\frac{4\pi}{5}\right)\hfill & \mathrm{if}\phantom{\rule{0.277778em}{0ex}}i\ge \frac{N}{2}\hfill \end{array}\right.$$

$${d}_{i}=\left\{\begin{array}{cc}{d}_{1,i}\hfill & \mathrm{if}\phantom{\rule{0.277778em}{0ex}}i<\frac{N}{2}\hfill \\ {d}_{2,i}\hfill & \mathrm{if}\phantom{\rule{0.277778em}{0ex}}i\ge \frac{N}{2}\hfill \end{array}\right.$$

$${X}_{k,i,t}={\varphi}_{i,1}{X}_{k,i,t-1}+{\varphi}_{i,2}{X}_{k,i,t-2}+{\u03f5}_{k,i,t}$$

$$V\left({\u03f5}_{i,t}\right)=\left[\begin{array}{ccccc}1& \rho & {\rho}^{2}& \dots & {\rho}^{{p}_{i}-1}\\ \rho & 1& \rho & \dots & {\rho}^{{p}_{i}-2}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ {\rho}^{{p}_{i}-1}& {\rho}^{{p}_{i}-2}& {\rho}^{{p}_{i}-3}& \dots & 1\end{array}\right]$$

$${\theta}_{i}=\left\{\begin{array}{cc}\mathrm{cos}\left(\frac{4\pi}{25}\right)\hfill & \mathrm{if}i<\frac{N}{2}\hfill \\ \mathrm{cos}\left(\frac{4\pi}{5}\right)\hfill & \mathrm{if}i\ge \frac{N}{2}\hfill \end{array}\right.$$
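Scheme 1 can be sketched as follows. The AR(2) parameterization ${\varphi}_{i,1}=2\rho \mathrm{cos}({\theta}_{i})$, ${\varphi}_{i,2}=-{\rho}^{2}$ (which places the complex AR roots at angle ${\theta}_{i}$, so the spectrum peaks near that frequency) and $\rho =0.9$ are assumptions for illustration, since the exact coefficients are not reproduced here; N is reduced for speed.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_epoch(theta, d, T=500, rho=0.9, rng=rng):
    """One epoch of Scheme 1: d independent AR(2) components whose
    spectra peak near angular frequency theta."""
    # Assumed parameterization: phi1 = 2*rho*cos(theta), phi2 = -rho^2
    # places a complex-conjugate pair of AR roots at angle theta.
    phi1, phi2 = 2 * rho * np.cos(theta), -rho ** 2
    X = np.zeros((T, d))
    e = rng.standard_normal((T, d))   # component-wise i.i.d. N(0,1) errors
    for t in range(2, T):
        X[t] = phi1 * X[t - 1] + phi2 * X[t - 2] + e[t]
    return X

N = 10                                # small stand-in for N = 500 epochs
epochs = []
for i in range(N):
    theta = 4 * np.pi / 25 if i < N // 2 else 4 * np.pi / 5
    d = int(rng.integers(2, 31))      # dimension drawn from {2,...,30}
    epochs.append(simulate_epoch(theta, d))
print([e.shape[1] for e in epochs])   # varying dimensions across epochs
```

Shifting ${\theta}_{i}$ at epoch $N/2$ moves the spectral peak from a low to a high frequency band, which is exactly the change the FS-ratio is expected to track in Tables A1 and A2.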

Table A1 and Table A2 contain the numerical summaries of the FS-ratio statistic over 100 replications of Scheme 1. Please note that the phase parameter ${\theta}_{i}$ for $i<N/2$ in Scheme 1 is at $4\pi /25$ on a $(0,\pi )$ scale or equivalently at 0.0796 on a $(0,0.5)$ scale. We see from Table A1 that almost all of the spectral information is contained in the first two chosen frequency ranges around this peak. Similarly for $i\ge N/2$, the phase parameter is at $4\pi /5$ on a $(0,\pi )$ scale or equivalently at 0.3981 on a $(0,0.5)$ scale. Figure A1 plots a histogram density of the FS-ratio statistic from the 100 replications and similar histogram densities for Schemes 2, 3 and 4 can be found in Figure A2, Figure A3 and Figure A4. From Table A2 we notice that the last two chosen frequency ranges have all of the spectral information.

| Frequency Range (a,b) | Mean | Median | SD | Lower CI | Upper CI |
|---|---|---|---|---|---|
| (0, 0.08) | 0.5342 | 0.5391 | 0.0221 | 0.4984 | 0.6253 |
| (0.08, 0.16) | 0.4486 | 0.4544 | 0.0227 | 0.3566 | 0.4831 |
| (0.16, 0.24) | 0.0002 | 0.0002 | 0.0001 | 0.0005 | 0.0019 |
| (0.24, 0.32) | 0 | 0 | 0 | 0 | 0.0002 |
| (0.32, 0.40) | 0 | 0 | 0 | 0 | 0 |
| (0.40, 0.48) | 0 | 0 | 0 | 0 | 0 |

| Frequency Range (a,b) | Mean | Median | SD | Lower CI | Upper CI |
|---|---|---|---|---|---|
| (0, 0.08) | 0 | 0 | 0 | 0 | 0 |
| (0.08, 0.16) | 0 | 0 | 0 | 0 | 0 |
| (0.16, 0.24) | 0 | 0 | 0 | 0 | 0 |
| (0.24, 0.32) | 0.0003 | 0.0002 | 0.0002 | 0.0005 | 0.0017 |
| (0.32, 0.40) | 0.4561 | 0.4595 | 0.0181 | 0.3786 | 0.4903 |
| (0.40, 0.48) | 0.5205 | 0.5210 | 0.0169 | 0.4759 | 0.5826 |

Table A3 and Table A4 include numerical summaries of the FS-ratio statistic over 100 replications of the model in Scheme 2. Similar to Scheme 1, the phase parameter ${\theta}_{i}$ is at 0.0796 on a $(0,0.5)$ scale for $i<N/2$ and at 0.3981 on a $(0,0.5)$ scale for $i\ge N/2$. The results in Table A3 indicate that most of the spectral information is present in the first two chosen frequency ranges. Similarly, for $i\ge N/2$, Table A4 shows that the last two chosen frequency ranges contain all of the spectral information.

| Frequency Range (a,b) | Mean | Median | SD | Lower CI | Upper CI |
|---|---|---|---|---|---|
| (0, 0.08) | 0.5407 | 0.5349 | 0.0286 | 0.5041 | 0.6360 |
| (0.08, 0.16) | 0.4423 | 0.4482 | 0.0284 | 0.3475 | 0.4778 |
| (0.16, 0.24) | 0.0002 | 0.0003 | 0.0002 | 0.0006 | 0.0022 |
| (0.24, 0.32) | 0 | 0 | 0 | 0 | 0.0002 |
| (0.32, 0.40) | 0 | 0 | 0 | 0 | 0 |
| (0.40, 0.48) | 0 | 0 | 0 | 0 | 0 |

| Frequency Range (a,b) | Mean | Median | SD | Lower CI | Upper CI |
|---|---|---|---|---|---|
| (0, 0.08) | 0 | 0 | 0 | 0 | 0 |
| (0.08, 0.16) | 0 | 0 | 0 | 0 | 0 |
| (0.16, 0.24) | 0 | 0 | 0 | 0 | 0.0001 |
| (0.24, 0.32) | 0.0003 | 0.0002 | 0.0001 | 0.0005 | 0.0016 |
| (0.32, 0.40) | 0.4605 | 0.4616 | 0.0108 | 0.3862 | 0.4938 |
| (0.40, 0.48) | 0.5194 | 0.5186 | 0.0096 | 0.4800 | 0.5863 |

Table A5 and Table A6 contain the numerical summaries of the FS-ratio statistic over 100 replications of the model in Scheme 3. As in Scheme 1, the phase parameter ${\theta}_{i}$ is at 0.0796 on a $(0,0.5)$ scale for $i<N/2$ and at 0.3981 on a $(0,0.5)$ scale for $i\ge N/2$. As in Scheme 1, the results in Table A5 indicate that most of the spectral information is present in the first two chosen frequency ranges. Similarly, for $i\ge N/2$, Table A6 shows that the last two chosen frequency ranges contain all of the spectral information.

| Frequency Range (a,b) | Mean | Median | SD | Lower CI | Upper CI |
|---|---|---|---|---|---|
| (0, 0.08) | 0.5371 | 0.5327 | 0.0238 | 0.4977 | 0.6284 |
| (0.08, 0.16) | 0.4459 | 0.4504 | 0.0239 | 0.3549 | 0.4843 |
| (0.16, 0.24) | 0.0002 | 0.0002 | 0.0001 | 0.0005 | 0.0020 |
| (0.24, 0.32) | 0 | 0 | 0 | 0 | 0.0002 |
| (0.32, 0.40) | 0 | 0 | 0 | 0 | 0 |
| (0.40, 0.48) | 0 | 0 | 0 | 0 | 0 |

| Frequency Range (a,b) | Mean | Median | SD | Lower CI | Upper CI |
|---|---|---|---|---|---|
| (0, 0.08) | 0 | 0 | 0 | 0 | 0 |
| (0.08, 0.16) | 0 | 0 | 0 | 0 | 0 |
| (0.16, 0.24) | 0 | 0 | 0 | 0 | 0.0001 |
| (0.24, 0.32) | 0.0003 | 0.0003 | 0.0002 | 0.0005 | 0.0018 |
| (0.32, 0.40) | 0.4531 | 0.4566 | 0.0196 | 0.3758 | 0.4907 |
| (0.40, 0.48) | 0.5252 | 0.5225 | 0.0172 | 0.4810 | 0.5948 |

Table A7 and Table A8 contain the numerical summaries of the FS-ratio statistic over 100 replications of the model in Scheme 4. Here we look at ${M}_{i,t}={A}_{i}{X}_{i,t}$ which is a mixture of the components of ${X}_{i,t}$ generated as in Scheme 1. Please note that the peak of the spectral densities of the components of ${M}_{i,t}$ is still at the phase parameter ${\theta}_{i}$ defined in Scheme 1. Hence, the results from Table A7 and Table A8 are similar to the results from Scheme 1.

| Frequency Range (a,b) | Mean | Median | SD | Lower CI | Upper CI |
|---|---|---|---|---|---|
| (0, 0.08) | 0.5342 | 0.5321 | 0.0155 | 0.4871 | 0.5955 |
| (0.08, 0.16) | 0.4489 | 0.4510 | 0.0159 | 0.3872 | 0.4949 |
| (0.16, 0.24) | 0.0002 | 0.0002 | 0.0001 | 0.0003 | 0.0011 |
| (0.24, 0.32) | 0.0001 | 0.0001 | 0 | 0 | 0.0001 |
| (0.32, 0.40) | 0 | 0 | 0 | 0 | 0 |
| (0.40, 0.48) | 0 | 0 | 0 | 0 | 0 |

| Frequency Range (a,b) | Mean | Median | SD | Lower CI | Upper CI |
|---|---|---|---|---|---|
| (0, 0.08) | 0 | 0 | 0 | 0 | 0 |
| (0.08, 0.16) | 0 | 0 | 0 | 0 | 0 |
| (0.16, 0.24) | 0 | 0 | 0 | 0 | 0.0001 |
| (0.24, 0.32) | 0.0003 | 0.0003 | 0.0001 | 0.0003 | 0.0011 |
| (0.32, 0.40) | 0.4553 | 0.4570 | 0.0130 | 0.4021 | 0.4987 |
| (0.40, 0.48) | 0.5234 | 0.5219 | 0.0119 | 0.4782 | 0.5738 |

Here we present the proofs of the theoretical results in Section 2.1.2.

Recall that for some $0<a<b<\pi $,

$$\begin{array}{c}\hfill {\widehat{r}}_{X,a,b}={\int}_{a}^{b}\left|\right|vec\left({\widehat{f}}_{X}\left(\omega \right)\right){\left|\right|}_{2}^{2}d\omega ={\int}_{a}^{b}\left|\right|\frac{1}{T}\sum _{j=-\lfloor T/2\rfloor}^{\lfloor T/2\rfloor}\phantom{\rule{0.277778em}{0ex}}{K}_{h}(\omega -{\omega}_{j})vec\left({I}_{X,T}\left({\omega}_{j}\right)\right){\left|\right|}^{2}\phantom{\rule{0.277778em}{0ex}}d\omega \\ \hfill ={\int}_{a}^{b}\frac{1}{{T}^{2}}\sum _{{j}_{1},{j}_{2}=-\lfloor T/2\rfloor}^{\lfloor T/2\rfloor}\phantom{\rule{0.277778em}{0ex}}{K}_{h}(\omega -{\omega}_{{j}_{1}}){K}_{h}(\omega -{\omega}_{{j}_{2}})\sum _{r,s=1}^{{d}_{i}}\phantom{\rule{0.277778em}{0ex}}{I}_{X,T,rs}\left({\omega}_{{j}_{1}}\right)\overline{{I}_{X,T,rs}\left({\omega}_{{j}_{2}}\right)}\phantom{\rule{0.277778em}{0ex}}d\omega .\end{array}$$

We first consider the expected value of this quantity.
It can be seen that as $T\to \infty $, $h\to 0$ and $Th\to \infty $ the above quantity converges to
Next, for the variance we have $V\left({\widehat{r}}_{X,a,b}\right)={A}_{1}-{A}_{2}$, where
For the difference in the expectations between ${A}_{1}$ and ${A}_{2}$ we discuss the relevant cases and their convergence to 0. Firstly, it can be seen that for the following three cases the difference in the expectations is asymptotically 0: (a). ${\omega}_{{j}_{1}}\ne {\omega}_{{j}_{2}}\ne {\omega}_{{j}_{3}}\ne {\omega}_{{j}_{4}}$, (b). ${\omega}_{{j}_{1}}={\omega}_{{j}_{2}}\ne {\omega}_{{j}_{3}}\ne {\omega}_{{j}_{4}}$, (c). ${\omega}_{{j}_{1}}={\omega}_{{j}_{2}}\ne {\omega}_{{j}_{3}}={\omega}_{{j}_{4}}$. Next, when ${\omega}_{{j}_{1}}={\omega}_{{j}_{3}}\ne {\omega}_{{j}_{2}}={\omega}_{{j}_{4}}$ we have,
The case when ${\omega}_{{j}_{1}}={\omega}_{{j}_{2}}={\omega}_{{j}_{3}}\ne {\omega}_{{j}_{4}}$ would have the same rate of decay as above. Next, when ${\omega}_{{j}_{1}}={\omega}_{{j}_{3}}\ne {\omega}_{{j}_{2}}\ne {\omega}_{{j}_{4}}$ we have,
Finally, we look at the case ${\omega}_{{j}_{1}}={\omega}_{{j}_{2}}={\omega}_{{j}_{3}}={\omega}_{{j}_{4}}$. We have
□

$$\begin{array}{c}\hfill E\left({\widehat{r}}_{X,a,b}\right)={\int}_{a}^{b}\frac{1}{{T}^{2}}\sum _{{j}_{1},{j}_{2}=-\lfloor T/2\rfloor}^{\lfloor T/2\rfloor}\phantom{\rule{0.277778em}{0ex}}{K}_{h}(\omega -{\omega}_{{j}_{1}}){K}_{h}(\omega -{\omega}_{{j}_{2}})\sum _{r,s=1}^{{d}_{i}}\phantom{\rule{0.277778em}{0ex}}E\left({I}_{X,T,rs}\left({\omega}_{{j}_{1}}\right)\overline{{I}_{X,T,rs}\left({\omega}_{{j}_{2}}\right)}\right)\phantom{\rule{0.277778em}{0ex}}d\omega \\ \hfill ={\int}_{a}^{b}\frac{1}{{T}^{2}}\sum _{{j}_{1},{j}_{2}=-\lfloor T/2\rfloor}^{\lfloor T/2\rfloor}\phantom{\rule{0.277778em}{0ex}}{K}_{h}(\omega -{\omega}_{{j}_{1}}){K}_{h}(\omega -{\omega}_{{j}_{2}})\sum _{r,s=1}^{{d}_{i}}\phantom{\rule{0.277778em}{0ex}}{f}_{X,rs}\left({\omega}_{{j}_{1}}\right)\overline{{f}_{X,rs}\left({\omega}_{{j}_{2}}\right)}\phantom{\rule{0.277778em}{0ex}}d\omega \phantom{\rule{0.277778em}{0ex}}+\phantom{\rule{0.277778em}{0ex}}o\left(1\right).\end{array}$$

$$\begin{array}{c}\hfill {\int}_{a}^{b}\phantom{\rule{0.277778em}{0ex}}\sum _{r,s=1}^{{d}_{i}}{\left({\int}_{-\pi}^{\pi}K\left(v\right)dv\right)}^{2}\phantom{\rule{0.277778em}{0ex}}{f}_{X,rs}\left(\omega \right)\phantom{\rule{0.277778em}{0ex}}\overline{{f}_{X,rs}\left(\omega \right)}\phantom{\rule{0.277778em}{0ex}}d\omega \phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}{\int}_{a}^{b}\phantom{\rule{0.277778em}{0ex}}\sum _{r,s=1}^{{d}_{i}}\phantom{\rule{0.277778em}{0ex}}{f}_{X,rs}\left(\omega \right)\phantom{\rule{0.277778em}{0ex}}\overline{{f}_{X,rs}\left(\omega \right)}\phantom{\rule{0.277778em}{0ex}}d\omega .\end{array}$$

$$\begin{array}{c}\hfill {A}_{1}={\int}_{a}^{b}{\int}_{a}^{b}\frac{1}{{T}^{4}}\sum _{{j}_{1},{j}_{2},{j}_{3},{j}_{4}}{K}_{h}(\omega -{\omega}_{{j}_{1}}){K}_{h}(\omega -{\omega}_{{j}_{2}}){K}_{h}(\lambda -{\omega}_{{j}_{3}}){K}_{h}(\lambda -{\omega}_{{j}_{4}})\phantom{\rule{0.277778em}{0ex}}\times \\ \hfill \sum _{r,s,t,u=1}^{{d}_{i}}E\left({I}_{X,T,rs}\left({\omega}_{{j}_{1}}\right)\overline{{I}_{X,T,rs}\left({\omega}_{{j}_{2}}\right)}{I}_{X,T,tu}\left({\omega}_{{j}_{3}}\right)\overline{{I}_{X,T,tu}\left({\omega}_{{j}_{4}}\right)}\right)\phantom{\rule{0.277778em}{0ex}}d\omega \phantom{\rule{0.277778em}{0ex}}d\lambda \phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\mathrm{and}\\ \hfill {A}_{2}={\int}_{a}^{b}{\int}_{a}^{b}\frac{1}{{T}^{4}}\sum _{{j}_{1},{j}_{2},{j}_{3},{j}_{4}}{K}_{h}(\omega -{\omega}_{{j}_{1}}){K}_{h}(\omega -{\omega}_{{j}_{2}}){K}_{h}(\lambda -{\omega}_{{j}_{3}}){K}_{h}(\lambda -{\omega}_{{j}_{4}})\phantom{\rule{0.277778em}{0ex}}\times \\ \hfill \sum _{r,s,t,u=1}^{{d}_{i}}E\left({I}_{X,T,rs}\left({\omega}_{{j}_{1}}\right)\overline{{I}_{X,T,rs}\left({\omega}_{{j}_{2}}\right)}\right)E\left({I}_{X,T,tu}\left({\omega}_{{j}_{3}}\right)\overline{{I}_{X,T,tu}\left({\omega}_{{j}_{4}}\right)}\right)\phantom{\rule{0.277778em}{0ex}}d\omega \phantom{\rule{0.277778em}{0ex}}d\lambda .\end{array}$$

$$\begin{array}{c}{\int}_{a}^{b}{\int}_{a}^{b}\frac{1}{{T}^{4}}\sum _{{j}_{1},{j}_{2}}{K}_{h}(\omega -{\omega}_{{j}_{1}}){K}_{h}(\omega -{\omega}_{{j}_{2}}){K}_{h}(\lambda -{\omega}_{{j}_{1}}){K}_{h}(\lambda -{\omega}_{{j}_{2}})\sum _{r,s,t,u=1}^{{d}_{i}}\left[E\right({I}_{X,T,rs}\left({\omega}_{{j}_{1}}\right)\overline{{I}_{X,T,rs}\left({\omega}_{{j}_{2}}\right)}\times \\ {I}_{X,T,tu}\left({\omega}_{{j}_{1}}\right)\overline{{I}_{X,T,tu}\left({\omega}_{{j}_{2}}\right)})-E\left({I}_{X,T,rs}\left({\omega}_{{j}_{1}}\right)\overline{{I}_{X,T,rs}\left({\omega}_{{j}_{2}}\right)}\right)E\left({I}_{X,T,tu}\left({\omega}_{{j}_{1}}\right)\overline{{I}_{X,T,tu}\left({\omega}_{{j}_{2}}\right)}\right)]d\omega \phantom{\rule{0.277778em}{0ex}}d\lambda \\ =\phantom{\rule{0.277778em}{0ex}}{\int}_{a}^{b}{\int}_{a}^{b}\frac{1}{{T}^{4}}\sum _{{j}_{1},{j}_{2}}{K}_{h}(\omega -{\omega}_{{j}_{1}}){K}_{h}(\omega -{\omega}_{{j}_{2}}){K}_{h}(\lambda -{\omega}_{{j}_{1}}){K}_{h}(\lambda -{\omega}_{{j}_{2}})\sum _{r,s,t,u=1}^{{d}_{i}}[E\left({I}_{X,T,rs}\left({\omega}_{{j}_{1}}\right){I}_{X,T,tu}\left({\omega}_{{j}_{1}}\right)\right)\times \\ E\left(\overline{{I}_{X,T,rs}\left({\omega}_{{j}_{2}}\right)}\overline{{I}_{X,T,tu}\left({\omega}_{{j}_{2}}\right)}\right)-E\left({I}_{X,T,rs}\left({\omega}_{{j}_{1}}\right)\right)E\left(\overline{{I}_{X,T,rs}\left({\omega}_{{j}_{2}}\right)}\right)E\left({I}_{X,T,tu}\left({\omega}_{{j}_{1}}\right)\right)E\left(\overline{{I}_{X,T,tu}\left({\omega}_{{j}_{2}}\right)}\right)]d\omega \phantom{\rule{0.277778em}{0ex}}d\lambda +o\left(1\right)\\ =\phantom{\rule{0.277778em}{0ex}}\frac{1}{{T}^{4}}\sum _{{j}_{1},{j}_{2}}{\left({\int}_{a}^{b}{K}_{h}(\omega -{\omega}_{{j}_{1}}){K}_{h}(\omega -{\omega}_{{j}_{2}})\phantom{\rule{0.277778em}{0ex}}d\omega \right)}^{2}\phantom{\rule{0.277778em}{0ex}}\sum 
_{r,s,t,u=1}^{{d}_{i}}[\left({f}_{X,rt}\left({\omega}_{{j}_{1}}\right){f}_{X,su}\left({\omega}_{{j}_{1}}\right)+{f}_{X,rs}\left({\omega}_{{j}_{1}}\right){f}_{X,tu}\left({\omega}_{{j}_{1}}\right)\right)\times \\ \left(\overline{{f}_{X,rt}\left({\omega}_{{j}_{2}}\right)}\phantom{\rule{0.277778em}{0ex}}\overline{{f}_{X,su}\left({\omega}_{{j}_{2}}\right)}+\overline{{f}_{X,rs}\left({\omega}_{{j}_{2}}\right)}\phantom{\rule{0.277778em}{0ex}}\overline{{f}_{X,tu}\left({\omega}_{{j}_{2}}\right)}\right)-\left({f}_{X,rs}\left({\omega}_{{j}_{1}}\right)\overline{{f}_{X,rs}\left({\omega}_{{j}_{2}}\right)}{f}_{X,tu}\left({\omega}_{{j}_{1}}\right)\overline{{f}_{X,tu}\left({\omega}_{{j}_{2}}\right)}\right)]\\ \phantom{\rule{0.277778em}{0ex}}+\phantom{\rule{0.277778em}{0ex}}o\left(1\right)\phantom{\rule{0.277778em}{0ex}}=\frac{1}{{T}^{4}{h}^{2}}\sum _{{j}_{1},{j}_{2}}{\left({\int}_{\frac{a-{\omega}_{{j}_{1}}}{h}}^{\frac{b-{\omega}_{{j}_{1}}}{h}}K\left(u\right)K(u+\frac{{\omega}_{{j}_{1}}-{\omega}_{{j}_{2}}}{h})du\right)}^{2}\phantom{\rule{0.277778em}{0ex}}\sum _{r,s,t,u=1}^{{d}_{i}}\left[\cdots \right]\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}O(\frac{1}{{T}^{2}h}).\end{array}$$

$$\begin{array}{c}{\int}_{a}^{b}{\int}_{a}^{b}\frac{1}{{T}^{4}}\sum _{{j}_{1},{j}_{2},{j}_{4}}{K}_{h}(\omega -{\omega}_{{j}_{1}}){K}_{h}(\omega -{\omega}_{{j}_{2}}){K}_{h}(\lambda -{\omega}_{{j}_{1}}){K}_{h}(\lambda -{\omega}_{{j}_{4}})\sum _{r,s,t,u=1}^{{d}_{i}}\left[E\right({I}_{X,T,rs}\left({\omega}_{{j}_{1}}\right)\overline{{I}_{X,T,rs}\left({\omega}_{{j}_{2}}\right)}\times \\ {I}_{X,T,tu}\left({\omega}_{{j}_{1}}\right)\overline{{I}_{X,T,tu}\left({\omega}_{{j}_{4}}\right)})-E\left({I}_{X,T,rs}\left({\omega}_{{j}_{1}}\right)\overline{{I}_{X,T,rs}\left({\omega}_{{j}_{2}}\right)}\right)E\left({I}_{X,T,tu}\left({\omega}_{{j}_{1}}\right)\overline{{I}_{X,T,tu}\left({\omega}_{{j}_{4}}\right)}\right)]d\omega \phantom{\rule{0.277778em}{0ex}}d\lambda \\ =\phantom{\rule{0.277778em}{0ex}}{\int}_{a}^{b}{\int}_{a}^{b}\frac{1}{{T}^{4}}\sum _{{j}_{1},{j}_{2},{j}_{4}}{K}_{h}(\omega -{\omega}_{{j}_{1}}){K}_{h}(\omega -{\omega}_{{j}_{2}}){K}_{h}(\lambda -{\omega}_{{j}_{1}}){K}_{h}(\lambda -{\omega}_{{j}_{4}})\sum _{r,s,t,u=1}^{{d}_{i}}[E\left({I}_{X,T,rs}\left({\omega}_{{j}_{1}}\right){I}_{X,T,tu}\left({\omega}_{{j}_{1}}\right)\right)\times \\ E\left(\overline{{I}_{X,T,rs}\left({\omega}_{{j}_{2}}\right)}\right)E\left(\overline{{I}_{X,T,tu}\left({\omega}_{{j}_{4}}\right)}\right)-E\left({I}_{X,T,rs}\left({\omega}_{{j}_{1}}\right)\right)E\left(\overline{{I}_{X,T,rs}\left({\omega}_{{j}_{2}}\right)}\right)E\left({I}_{X,T,tu}\left({\omega}_{{j}_{1}}\right)\right)E\left(\overline{{I}_{X,T,tu}\left({\omega}_{{j}_{4}}\right)}\right)]d\omega \phantom{\rule{0.277778em}{0ex}}d\lambda +o\left(1\right)\\ =\phantom{\rule{0.277778em}{0ex}}\frac{1}{{T}^{4}}\sum _{{j}_{1},{j}_{2},{j}_{4}}\left({\int}_{a}^{b}{K}_{h}(\omega -{\omega}_{{j}_{1}}){K}_{h}(\omega -{\omega}_{{j}_{2}})\phantom{\rule{0.277778em}{0ex}}d\omega \right)\left({\int}_{a}^{b}{K}_{h}(\lambda -{\omega}_{{j}_{1}}){K}_{h}(\lambda -{\omega}_{{j}_{4}})\phantom{\rule{0.277778em}{0ex}}d\lambda \right)\sum 
_{r,s,t,u=1}^{{d}_{i}}\left[\right({f}_{X,rt}\left({\omega}_{{j}_{1}}\right)\times \\ {f}_{X,su}\left({\omega}_{{j}_{1}}\right)+{f}_{X,rs}\left({\omega}_{{j}_{1}}\right){f}_{X,tu}\left({\omega}_{{j}_{1}}\right))\times \left(\overline{{f}_{X,rs}\left({\omega}_{{j}_{2}}\right)}\phantom{\rule{0.277778em}{0ex}}\overline{{f}_{X,tu}\left({\omega}_{{j}_{4}}\right)}\right)-\left({f}_{X,rs}\left({\omega}_{{j}_{1}}\right)\overline{{f}_{X,rs}\left({\omega}_{{j}_{2}}\right)}{f}_{X,tu}\left({\omega}_{{j}_{1}}\right)\overline{{f}_{X,tu}\left({\omega}_{{j}_{4}}\right)}\right)]\\ \phantom{\rule{0.277778em}{0ex}}+\phantom{\rule{0.277778em}{0ex}}o\left(1\right)=\frac{1}{{T}^{4}{h}^{2}}\sum _{{j}_{1},{j}_{2},{j}_{4}}\left({\int}_{\frac{a-{\omega}_{{j}_{1}}}{h}}^{\frac{b-{\omega}_{{j}_{1}}}{h}}K\left(u\right)K(u+\frac{{\omega}_{{j}_{1}}-{\omega}_{{j}_{2}}}{h})\phantom{\rule{0.277778em}{0ex}}du\right)\left({\int}_{\frac{a-{\omega}_{{j}_{1}}}{h}}^{\frac{b-{\omega}_{{j}_{1}}}{h}}K\left(v\right)K(v+\frac{{\omega}_{{j}_{1}}-{\omega}_{{j}_{4}}}{h})\phantom{\rule{0.277778em}{0ex}}dv\right)\times \\ \sum _{r,s,t,u=1}^{{d}_{i}}\phantom{\rule{0.277778em}{0ex}}\left[\phantom{\rule{0.277778em}{0ex}}\cdots \phantom{\rule{0.277778em}{0ex}}\right]\phantom{\rule{0.277778em}{0ex}}+o\left(1\right)\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}O(\frac{1}{T}).\end{array}$$

$$\begin{aligned}
&\int_a^b\!\int_a^b \frac{1}{T^4}\sum_{j_1} K_h^2(\omega-\omega_{j_1})K_h^2(\lambda-\omega_{j_1})\sum_{r,s,t,u=1}^{d_i}\Big[E\big(I_{X,T,rs}(\omega_{j_1})\overline{I_{X,T,rs}(\omega_{j_1})}\,I_{X,T,tu}(\omega_{j_1})\overline{I_{X,T,tu}(\omega_{j_1})}\big)\\
&\qquad -E\big(I_{X,T,rs}(\omega_{j_1})\overline{I_{X,T,rs}(\omega_{j_1})}\big)E\big(I_{X,T,tu}(\omega_{j_1})\overline{I_{X,T,tu}(\omega_{j_1})}\big)\Big]\,d\omega\,d\lambda\\
&=\frac{1}{T^4}\sum_{j_1}\left(\int_a^b K_h^2(\omega-\omega_{j_1})\,d\omega\right)^{\!2}\sum_{r,s,t,u=1}^{d_i}\big[\cdots\big]
=\frac{1}{T^4h^4}\sum_{j_1}\left(\int_a^b K^2\!\left(\frac{\omega-\omega_{j_1}}{h}\right)d\omega\right)^{\!2}\sum_{r,s,t,u=1}^{d_i}\big[\cdots\big]\\
&=\frac{1}{T^4h^2}\sum_{j_1}\left(\int_{(a-\omega_{j_1})/h}^{(b-\omega_{j_1})/h} K^2(u)\,du\right)^{\!2}\sum_{r,s,t,u=1}^{d_i}\big[\cdots\big]
=O\!\left(\frac{1}{T^3h^2}\right).
\end{aligned}$$

First, we observe that

$$\begin{aligned}
\widehat{R}_{X,a,b}&=\frac{\int_a^b \|\mathrm{vec}(\widehat{f}_X(\omega))\|_2^2\,d\omega}{\int_0^{\pi} \|\mathrm{vec}(\widehat{f}_X(\omega))\|_2^2\,d\omega}
=\frac{\int_a^b \|\mathrm{vec}(\widehat{f}_X(\omega))\|_2^2\,d\omega}{\int_a^b \|\mathrm{vec}(\widehat{f}_X(\omega))\|_2^2\,d\omega+\int_{\overline{\Pi}_{(a,b)}} \|\mathrm{vec}(\widehat{f}_X(\omega))\|_2^2\,d\omega}\\
&=\left(1+\frac{\int_{\overline{\Pi}_{(a,b)}} \|\mathrm{vec}(\widehat{f}_X(\omega))\|_2^2\,d\omega}{\int_a^b \|\mathrm{vec}(\widehat{f}_X(\omega))\|_2^2\,d\omega}\right)^{-1}.
\end{aligned}$$
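The identity above shows that the estimated FS-ratio is a ratio of band-limited to total spectral power of the smoothed periodogram. A minimal numerical sketch of this computation is given below; the function name `fs_ratio`, the Gaussian smoothing kernel, and the Riemann-sum approximation of the two integrals are illustrative choices, not specifications from the paper (the constant factors in the periodogram and kernel normalizations cancel in the ratio):

```python
import numpy as np

def fs_ratio(X, a, b, h=0.1):
    """Illustrative FS-ratio estimate for a stationary epoch.

    X : (T, d) array, one multivariate stationary segment.
    a, b : band endpoints in radians, 0 <= a < b <= pi.
    h : kernel bandwidth for smoothing the periodogram.
    """
    T, d = X.shape
    # Discrete Fourier transform at Fourier frequencies omega_j = 2*pi*j/T
    J = np.fft.rfft(X, axis=0) / np.sqrt(2 * np.pi * T)     # (n, d)
    # Cross-periodogram matrices I(omega_j) = J(omega_j) J(omega_j)^*
    I = J[:, :, None] * J.conj()[:, None, :]                # (n, d, d)
    n = I.shape[0]
    omega = 2 * np.pi * np.arange(n) / T                    # grid on [0, pi]
    # Kernel-smoothed spectral matrix estimate at each grid frequency
    Kh = np.exp(-0.5 * ((omega[:, None] - omega[None, :]) / h) ** 2)
    f_hat = np.tensordot(Kh, I, axes=1)                     # (n, d, d)
    # ||vec(f_hat(omega))||_2^2 at each frequency
    power = np.sum(np.abs(f_hat) ** 2, axis=(1, 2))
    in_band = (omega >= a) & (omega <= b)
    # Riemann-sum ratio; the uniform grid spacing cancels
    return power[in_band].sum() / power.sum()
```

By construction the statistic lies in $[0,1]$ and equals 1 when the band is the full interval $(0,\pi]$, mirroring the decomposition of the denominator into band and complement above.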

Hence consistency of $\widehat{R}_{X,a,b}$ follows from joint consistency of ${({\widehat{r}}_{X,a,b},{\widehat{r}}_{X,{\overline{\Pi}}_{(a,b)}})}^{\top}$. Following the proof of Theorem 1 (a), we have $cov({\widehat{r}}_{X,a,b},{\widehat{r}}_{X,{\overline{\Pi}}_{(a,b)}})={C}_{1}-{C}_{2}$, where

$$\begin{aligned}
C_1&=\int_{\overline{\Pi}_{(a,b)}}\!\int_a^b \frac{1}{T^4}\sum_{j_1,j_2,j_3,j_4} K_h(\omega-\omega_{j_1})K_h(\omega-\omega_{j_2})K_h(\lambda-\omega_{j_3})K_h(\lambda-\omega_{j_4})\,\times\\
&\qquad \sum_{r,s,t,u=1}^{d_i} E\big(I_{X,T,rs}(\omega_{j_1})\overline{I_{X,T,rs}(\omega_{j_2})}\,I_{X,T,tu}(\omega_{j_3})\overline{I_{X,T,tu}(\omega_{j_4})}\big)\,d\omega\,d\lambda \quad \text{and}\\
C_2&=\int_{\overline{\Pi}_{(a,b)}}\!\int_a^b \frac{1}{T^4}\sum_{j_1,j_2,j_3,j_4} K_h(\omega-\omega_{j_1})K_h(\omega-\omega_{j_2})K_h(\lambda-\omega_{j_3})K_h(\lambda-\omega_{j_4})\,\times\\
&\qquad \sum_{r,s,t,u=1}^{d_i} E\big(I_{X,T,rs}(\omega_{j_1})\overline{I_{X,T,rs}(\omega_{j_2})}\big)E\big(I_{X,T,tu}(\omega_{j_3})\overline{I_{X,T,tu}(\omega_{j_4})}\big)\,d\omega\,d\lambda.
\end{aligned}$$

As in the proof of Theorem 1 (a), it can be seen that, for the various cases, the covariance terms are of order $O(\frac{1}{{T}^{{\delta}_{1}}{h}^{{\delta}_{2}}})$, where ${\delta}_{1},{\delta}_{2}\in \{0,1,2,3\}$ and ${\delta}_{1}>{\delta}_{2}$. This bound, together with Theorem 1, implies

$${\left({\widehat{r}}_{X,a,b},{\widehat{r}}_{X,{\overline{\Pi}}_{(a,b)}}\right)}^{\top}\ \stackrel{P}{\to}\ {\left({r}_{X,a,b},{r}_{X,{\overline{\Pi}}_{(a,b)}}\right)}^{\top}.$$

Finally, an application of the continuous mapping theorem yields the result. □


| Frequency Band | Mean | Median | SD | Lower CI | Upper CI |
|---|---|---|---|---|---|
| Theta (4–8 Hz) | 0.079 | 0.079 | 0.004 | 0.061 | 0.081 |
| Alpha (8–12 Hz) | 0.076 | 0.077 | 0.0035 | 0.059 | 0.078 |
| Beta (12–30 Hz) | 0.332 | 0.332 | 0.0129 | 0.267 | 0.341 |
| Gamma (30–50 Hz) | 0.144 | 0.144 | 0.006 | 0.141 | 0.191 |

| Frequency Band | Mean | Median | SD | Lower CI | Upper CI |
|---|---|---|---|---|---|
| Theta (4–8 Hz) | 0.062 | 0.062 | 0.004 | 0.0422 | 0.0669 |
| Alpha (8–12 Hz) | 0.060 | 0.061 | 0.004 | 0.041 | 0.064 |
| Beta (12–30 Hz) | 0.283 | 0.285 | 0.018 | 0.202 | 0.292 |
| Gamma (30–50 Hz) | 0.146 | 0.146 | 0.006 | 0.135 | 0.187 |
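Band-wise summaries like those tabulated above (mean, median, standard deviation, and interval limits over replicated FS-ratio values) can be computed with a few lines of code. The sketch below is illustrative: the function name `summarize_fs_ratios`, the 95% default `level`, and the percentile-interval construction are assumptions for this example, not the paper's exact procedure:

```python
import numpy as np

def summarize_fs_ratios(reps, level=0.95):
    """Summarize replicated FS-ratio values for one frequency band.

    reps : 1-D array of FS-ratio replicates (e.g., from a resampling scheme).
    level : nominal coverage of the percentile interval.
    """
    reps = np.asarray(reps, dtype=float)
    alpha = 1.0 - level
    lower, upper = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return {
        "mean": reps.mean(),
        "median": np.median(reps),
        "sd": reps.std(ddof=1),   # sample standard deviation
        "lower": lower,
        "upper": upper,
    }
```

Applying this per band to the replicate FS-ratios would yield one row per frequency band in the format of the tables above.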

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).