Article

Estimation of Dynamic Networks for High-Dimensional Nonstationary Time Series

1 Department of Statistics and Data Science, University of Central Florida, 4000 Central Florida Blvd, Orlando, FL 32816, USA
2 Department of Statistics, University of Illinois at Urbana-Champaign, S. Wright Street, Champaign, IL 61820, USA
3 Department of Statistics, University of Chicago, 5747 S. Ellis Avenue, Jones 311, Chicago, IL 60637, USA
* Author to whom correspondence should be addressed.
Entropy 2020, 22(1), 55; https://doi.org/10.3390/e22010055
Submission received: 14 November 2019 / Revised: 25 December 2019 / Accepted: 26 December 2019 / Published: 31 December 2019

Abstract

This paper is concerned with the estimation of time-varying networks for high-dimensional nonstationary time series. Two types of dynamic behaviors are considered: structural breaks (i.e., abrupt change points) and smooth changes. To simultaneously handle these two types of time-varying features, a two-step approach is proposed: multiple change point locations are first identified by comparing the differences between localized averages of sample covariance matrices, and graph supports are then recovered via a kernelized time-varying constrained $L_1$-minimization for inverse matrix estimation (CLIME) estimator on each segment. We derive the rates of convergence for estimating the change points and precision matrices under mild moment and dependence conditions. In particular, we show that this two-step approach is consistent in estimating the change points and the piecewise smooth precision matrix function under a certain high-dimensional scaling limit. The method is applied to the analysis of the network structure of the S&P 500 index between 2003 and 2008.

1. Introduction

Networks are useful tools to visualize the relational information among a large number of variables. An undirected graphical model belongs to a rich class of statistical network models that encode conditional independence [1]. Canonically, Gaussian graphical models (or their normalized version, partial correlations [2]) can be represented by the inverse covariance matrix (i.e., the precision matrix), where a zero entry corresponds to a missing edge between two vertices in the graph. Specifically, two vertices are not connected if and only if they are conditionally independent given the values of all other variables.
On one hand, there is a large volume of literature on estimating the (static) precision matrix for graphical models in the high-dimensional setting, where the sample size and the dimension are both large [3,4,5,6,7,8,9,10,11,12,13,14,15,16]. Most of the earlier work along this line assumes that the underlying network is time-invariant. This assumption is quite restrictive in practice and hardly plausible for many real-world applications, such as gene regulatory networks, social networks, and the stock market, where the underlying data-generating mechanisms are often dynamic. On the other hand, dynamic random networks have been extensively studied from the perspective of large random graphs, such as community detection and edge probability estimation for dynamic stochastic block models (DSBMs) [17,18,19,20,21,22,23,24,25,26,27,28,29,30]. Such approaches do not model the sampling distributions of the error (or noise), since the "true" networks are connected with random edges sampled from certain probability models, such as Erdős–Rényi graphs [31] and random geometric graphs [32].
In this paper, we view the (time-varying) networks of interest as non-random graphs. We adopt the graph signal processing approach for denoising the nonstationary time series and aim to estimate the true unknown underlying graphs. Despite recent attempts toward more flexible time-varying models [33,34,35,36,37,38,39,40], there are still a number of major limitations in the current high-dimensional literature. First, theoretical analysis has been derived under the fundamental assumption that the observations are either temporally independent or that the temporal dependence has very specific forms, such as Gaussian processes or (linear) vector autoregression (VAR) [14,33,34,37,41,42,43]. Such dynamic structures are unduly restrictive, given that many time series encountered in real applications have very complex nonlinear spatial-temporal dependency [44,45]. Second, most existing work assumes that the data have time-varying distributions with sufficiently light tails, such as Gaussian graphical models and Ising models [33,34,36,41,42]. Third, in change point estimation problems for high-dimensional time series, piecewise constancy is widely assumed [41,42,46,47], which can be fragile in practice. For instance, financial data often appear to have time-dependent cross-volatility with structural breaks [48]. For resting-state fMRI signals, correlation analysis reveals both slowly varying and abruptly changing characteristics corresponding to modularities in brain functional networks [49,50].
Advances in analyzing high-dimensional (stationary) time series have been made recently to address the aforementioned nonlinear spatial-temporal dependency issue [14,37,43,51,52,53,54,55,56,57]. In [53,56,57], the authors considered the theoretical properties of regularized estimation of covariance and precision matrices based on various dependence measures of high-dimensional time series. Reference [38] considered nonparanormal graphs that evolve with a random variable. Reference [37] discussed the joint estimation of Gaussian graphical models based on a stationary VAR(1) model with special coefficient matrices, which may also depend on certain covariates; the authors applied a constrained $L_1$-minimization for inverse matrix estimation (CLIME) estimator with a kernel estimator of the covariance matrix and established consistency of the graph recovery at a given time point. Reference [14] studied the recovery of Granger causality across time and nodes, assuming a stationary Gaussian VAR model with unknown order.
In this paper, we focus on the recovery of time-varying undirected graphs on the basis of the regularized estimation of the precision matrices for a general class of nonstationary time series. We simultaneously model two types of dynamics: abrupt changes with an unknown number of change points and the smooth evolution between the change points. In particular, we study a class of high-dimensional piecewise locally stationary processes in a general nonlinear temporal dependency framework, where the observations are allowed to have a finite polynomial moment.
More specifically, there are two main goals of this paper: first, to estimate the change point locations, as well as the number of change points, and second, to estimate the smooth precision matrix functions between the change points. Accordingly, our proposed method contains two steps. In the first step, the maximum norm of the local difference matrix is computed at each time point, and jumps in the covariance matrices are detected at the locations where the maximum norms exceed a certain threshold. In the second step, the precision matrices before and after each jump are estimated by a regularized kernel smoothing estimator. These two steps are performed recursively until a stopping criterion is met. Moreover, a boundary correction procedure based on data reflection is employed to reduce the bias near the change points.
We provide an asymptotic theory to justify the proposed method in high dimensions: pointwise and uniform rates of convergence are derived for the change point estimation and graph recovery under mild and interpretable conditions. The convergence rates are determined via a subtle interplay among the sample size, dimensionality, temporal dependence, moment condition, and the choice of bandwidth in the kernel estimator. Our analysis is significantly more involved than that for problems with sub-Gaussian tails and independent samples. We highlight that uniform consistency in terms of time-varying network structure recovery is much more challenging than pointwise consistency. For the multiple change point detection problem, we also characterize the threshold of the difference statistic that gives a consistent selection of the number of change points.
We fix some notation: positive, finite, and non-random constants, independent of the sample size $n$ and dimension $p$, are denoted by $C, C_1, C_2, \ldots$, whose values may differ from line to line. For sequences of real numbers $a_n$ and $b_n$, we write $a_n = O(b_n)$ or $a_n \lesssim b_n$ if $\limsup_{n\to\infty}(a_n/b_n) \le C$ for some constant $C < \infty$, and $a_n = o(b_n)$ if $\lim_{n\to\infty}(a_n/b_n) = 0$. We say $a_n \asymp b_n$ if $a_n = O(b_n)$ and $b_n = O(a_n)$. For a sequence of random variables $Y_n$ and a corresponding set of constants $a_n$, we write $Y_n = O_{\mathbb{P}}(a_n)$ if, for any $\varepsilon > 0$, there is a constant $C > 0$ such that $\mathbb{P}(|Y_n|/a_n > C) < \varepsilon$ for all $n$. For a vector $x \in \mathbb{R}^p$, we write $|x| = (\sum_{j=1}^p x_j^2)^{1/2}$. For a matrix $\Sigma$, $|\Sigma|_1 = \sum_{j,k} |\sigma_{jk}|$, $|\Sigma|_\infty = \max_{j,k} |\sigma_{jk}|$, $|\Sigma|_{L_1} = \max_k \sum_j |\sigma_{jk}|$, $|\Sigma|_F = (\sum_{j,k} \sigma_{jk}^2)^{1/2}$, and $\rho(\Sigma) = \max\{|\Sigma x| : |x| = 1\}$. For a random vector $z \in \mathbb{R}^p$, we write $z \in \mathcal{L}^a$, $a > 0$, if $\|z\|_a := [\mathbb{E}(|z|^a)]^{1/a} < \infty$, and let $\|z\| = \|z\|_2$. Denote $a \wedge b = \min(a,b)$ and $a \vee b = \max(a,b)$.
The rest of the paper is organized as follows: Section 2 presents the time series model, as well as the main assumptions, which can simultaneously capture smooth and abrupt changes. In Section 3, we introduce the two-step method that first segments the time series based on the differences between localized averages of sample covariance matrices and then recovers the graph support based on a kernelized CLIME estimator. In Section 4, we state the main theoretical results for the change point estimation and support recovery. Simulation examples are presented in Section 5, and a real data application is given in Section 6. Proofs of the main results can be found in Section 7.

2. Time Series Model

We first introduce a class of causal vector stochastic processes. Next, we state the assumptions needed to derive the asymptotic theory in Section 4 and explain their implications. Let $\varepsilon_i \in \mathbb{R}^p$, $i \in \mathbb{Z}$, be independent and identically distributed (i.i.d.) random vectors and $\mathcal{F}_i = (\ldots, \varepsilon_{i-1}, \varepsilon_i)$ be a shift process. Let $X_i(t) = (X_{i1}(t), \ldots, X_{ip}(t))^\top$ be a $p$-dimensional nonstationary time series generated by

$$X_i(t) = H(\mathcal{F}_i; t), \qquad (1)$$

where $H(\cdot;\cdot) = (H_1(\cdot;\cdot), \ldots, H_p(\cdot;\cdot))^\top$ is an $\mathbb{R}^p$-valued jointly measurable function. Suppose we observe the data points $X_i = X_{i,n} = X_i(t_i)$ at the evenly spaced time points $t_i = i/n$, $i = 1, 2, \ldots, n$:

$$X_{i,n} = H(\mathcal{F}_i; i/n). \qquad (2)$$
We drop the subscript $n$ in $X_{i,n}$ in the rest of this section. Since our focus is on second-order properties, the data are assumed to have mean zero.
Model (1) was first introduced in [58]. The stochastic process $\{X_i(t)\}_{i\in\mathbb{Z},\, t\in[0,1)}$ can be thought of as a triangular array system, doubly indexed by $i$ and $t$, while the observations $(X_i)_{i=1}^n$ are sampled from the diagonal of the array. On one hand, when fixing the time index $t$, the (vertical) process $\{X_i(t)\}_{i\in\mathbb{Z}}$ is stationary. On the other hand, since $H(\mathcal{F}_i; t_i)$ is allowed to vary with $t_i$, the diagonal process (2) is able to capture nonstationarity.
The process $(X_i)_{i\in\mathbb{Z}}$ is causal or non-anticipative, as $X_i$ is an output of the past innovations $(\varepsilon_j)_{j\le i}$ and does not depend on future innovations. In fact, it covers a broad range of linear and nonlinear, stationary and nonstationary processes, such as vector autoregressive moving average processes, locally stationary processes, Markov chains, and nonlinear functional processes [53,58,59,60,61].
Motivated by real applications in which nonstationary time series data can involve both abrupt breaks and smooth variation between the breaks, we model the underlying processes as piecewise locally stationary with a finite number of structural breaks.
Definition 1 (Piecewise locally stationary time series model).
Define $\mathrm{PLS}_\iota([0,1], L)$ as the collection of mean-zero piecewise locally stationary processes on $[0,1]$: for each $(X(t))_{0\le t\le 1} \in \mathrm{PLS}_\iota([0,1], L)$, there is a nonnegative integer $\iota$ such that $X(t)$ is piecewise stochastic Lipschitz continuous in $t$, with Lipschitz constant $L$, on the intervals $[t^{(l)}, t^{(l+1)})$, $l = 0, \ldots, \iota$, where $0 = t^{(0)} < t^{(1)} < \cdots < t^{(\iota)} < t^{(\iota+1)} = 1$. A vector stochastic process $(X(t))_{0\le t\le 1} \in \mathrm{PLS}_\iota([0,1], L)$ if all of its coordinates belong to $\mathrm{PLS}_\iota([0,1], L)$. For the process $(X_0(t))_{0\le t\le 1}$ defined in (1), this means that there exist a nonnegative integer $\iota$ and a constant $L > 0$ such that

$$\max_{1\le j\le p} \| H_j(\mathcal{F}_0; t) - H_j(\mathcal{F}_0; t') \| \le L |t - t'| \quad \text{for all } t^{(l)} \le t, t' < t^{(l+1)}, \; 0 \le l \le \iota.$$
Remark 1.
If we assume $(X_i(t))_{0\le t\le 1} \in \mathrm{PLS}_\iota([0,1], L)$, $i \in \mathbb{Z}$, then it follows that, for each $i' = i-k, \ldots, i+k$, where $k/n \to 0$ and $t^{(l)} \le i/n, i'/n < t^{(l+1)}$ for some $0 \le l \le \iota$, we have

$$\max_{1\le j\le p} \| H_j(\mathcal{F}_{i'}; i'/n) - H_j(\mathcal{F}_{i'}; i/n) \| \le L k/n = o(1).$$
In other words, within a locally stationary time period, in a local window around $i$, $(X_{i'j})_{i-k\le i'\le i+k}$ can be approximated by the stationary process $(X_{i'j}(i/n))_{i-k\le i'\le i+k}$ for each $j = 1, \ldots, p$. This justifies the terminology of local stationarity.
The covariance matrix function of the underlying process is $\Sigma(t) = (\sigma_{jk}(t))_{1\le j,k\le p}$, $t \in [0,1]$, where $\sigma_{jk}(t) = \mathbb{E}(H_j(\mathcal{F}_0; t) H_k(\mathcal{F}_0; t))$, and the precision matrix function is $\Omega(t) = \Sigma(t)^{-1} = (\omega_{jk}(t))_{1\le j,k\le p}$. The graph at time $t$ is denoted by $G(t) = (V, E(t))$, where $V$ is the vertex set and $E(t) = \{(j,k) : \omega_{jk}(t) \ne 0\}$. Note that $(X_i(t))_t \in \mathrm{PLS}_\iota([0,1], L)$, $i \in \mathbb{Z}$, implies piecewise Lipschitz continuity of $\Sigma(t)$ except at the breaks $t^{(1)}, \ldots, t^{(\iota)}$. In particular, if $\sup_{0\le t\le 1} \max_{1\le j\le p} \| H_j(\mathcal{F}_0; t) \| \le C$ for some constant $C > 0$, then

$$|\Sigma(s) - \Sigma(t)|_\infty \le 2CL|s-t|, \quad s, t \in [t^{(l)}, t^{(l+1)}), \; l = 0, \ldots, \iota. \qquad (3)$$
The reverse direction is not necessarily true; i.e., (3) does not imply $(X_i(t))_t \in \mathrm{PLS}_\iota([0,1], L)$, $i \in \mathbb{Z}$, in general. As a trivial example, let the $\varepsilon_{ij}$ be i.i.d. for all $i, j$, with $\varepsilon_{ij} = 2^{-1/2}$ with probability $2/3$ and $\varepsilon_{ij} = -2^{1/2}$ with probability $1/3$. At time $t_k = k/n$, let $X_{ij}(t_k) = (-1)^k t_k^{1/2} \varepsilon_{ij}$. Then, for any $k$ and $k'$ such that $k + k'$ is odd, $|\Sigma(t_k) - \Sigma(t_{k'})|_\infty = |t_k - t_{k'}|$, while $\| X_{01}(t_k) - X_{01}(t_{k'}) \| = t_k^{1/2} + t_{k'}^{1/2}$.
Assumption 1 (Piecewise smoothness).
(i) Assume $(X_i(t))_{0\le t\le 1} \in \mathrm{PLS}_\iota([0,1], L)$ for each $i \in \mathbb{Z}$, where $L > 0$ and $\iota \ge 0$ are constants independent of $n$ and $p$. (ii) For each $l = 0, \ldots, \iota$ and $1 \le j,k \le p$, we have $\sigma_{jk}(t) \in \mathcal{C}^2[t^{(l)}, t^{(l+1)})$.
Now we introduce the temporal dependence measure. We quantify the dependence of $\{X_i(t)\}_{i\in\mathbb{Z}}$ by the dependence adjusted norm (DAN) (cf. [62]). Let $\varepsilon_i'$ be an independent copy of $\varepsilon_i$ and $\mathcal{F}_{i,\{m\}} = (\ldots, \varepsilon_{i-m-1}, \varepsilon_{i-m}', \varepsilon_{i-m+1}, \ldots, \varepsilon_i)$. Denote $X_{i,\{m\}}(t) = (X_{i1,\{m\}}(t), \ldots, X_{ip,\{m\}}(t))^\top$, where $X_{ij,\{m\}}(t) = H_j(\mathcal{F}_{i,\{m\}}; t)$, $1 \le j \le p$. Here, $X_{i,\{m\}}(t)$ is a coupled version of $X_i(t)$, with the same generating mechanism and input, except that $\varepsilon_{i-m}$ is replaced by the independent copy $\varepsilon_{i-m}'$.
Definition 2 (Dependence adjusted norm (DAN)).
Let constants $a \ge 1$, $A > 0$. Assume $\sup_{0\le t\le 1} \| X_{1j}(t) \|_a < \infty$, $j = 1, \ldots, p$. Define the uniform functional dependence measure for sequences $(X_{ij}(t))_{i\in\mathbb{Z},\, t\in[0,1]}$ of the form (1) as

$$\theta_{m,a,j} = \sup_{0\le t\le 1} \| X_{ij}(t) - X_{ij,\{m\}}(t) \|_a, \quad j = 1, \ldots, p, \qquad (4)$$

and $\Theta_{m,a,j} = \sum_{i=m}^\infty \theta_{i,a,j}$. The dependence adjusted norm of $(X_{ij}(t))_{i\in\mathbb{Z},\, t\in[0,1]}$ is defined as

$$\| X_{\cdot,j} \|_{a,A} = \sup_{m\ge 0} (m+1)^A \Theta_{m,a,j},$$

whenever $\| X_{\cdot,j} \|_{a,A} < \infty$.
Intuitively, the physical dependence measure quantifies the adjusted stochastic difference between the random variable and its coupled version obtained by replacing a past innovation. Indeed, $\theta_{m,a,j}$ measures the impact on $X_{ij}(t)$, uniformly over $t$, of replacing $\varepsilon_{i-m}$ while freezing all the other inputs, while $\Theta_{m,a,j}$ quantifies the cumulative influence of replacing $\varepsilon_{-m}$ on $(X_{ij}(t))_{i\ge 0}$, uniformly over $t$. Then, $\| X_{\cdot,j} \|_{a,A}$ controls the uniform polynomial decay in the lag of the cumulative physical dependence, where $a$ depends on the tail of the marginal distributions of $X_{1j}(t)$ and $A$ quantifies the polynomial decay power and thus the temporal dependence strength. It is clear that $\| X_{\cdot,j} \|_{a,A}$ is a semi-norm; i.e., it is subadditive and absolutely homogeneous.
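To make the coupling behind the DAN concrete, the following is a minimal Monte Carlo sketch (our illustration, not part of the paper) that estimates $\theta_{m,a,j}$ for a univariate AR(1) process, for which $\theta_{m,2} = \sqrt{2}\,|\phi|^m$ is known in closed form; the coefficient $\phi$ and the Monte Carlo sizes are illustrative assumptions.

```python
# Monte Carlo sketch of the uniform functional dependence measure theta_{m,a}
# for the AR(1) process X_i = phi * X_{i-1} + eps_i: the coupled version
# replaces the single innovation eps_{i-m} by an independent copy.
import numpy as np

rng = np.random.default_rng(0)

def theta_ar1(m, a=2.0, phi=0.5, burn=200, reps=20000):
    eps = rng.standard_normal((reps, burn))
    eps_prime = eps.copy()
    eps_prime[:, burn - 1 - m] = rng.standard_normal(reps)  # replace eps_{i-m}
    def last_x(e):
        x = np.zeros(reps)
        for t in range(burn):
            x = phi * x + e[:, t]                           # causal recursion
        return x
    diff = np.abs(last_x(eps) - last_x(eps_prime)) ** a
    return diff.mean() ** (1.0 / a)                         # L^a norm of the gap

# theta_m decays geometrically, so Theta_{m,a} is summable and the DAN is finite
print([round(theta_ar1(m), 3) for m in range(5)])           # ~ sqrt(2) * 0.5**m
```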
Assumption 2 (Dependence and moment conditions).
Let $X_i(t)$ be defined in (1) and $X_i$ in (2). There exist $q > 2$ and $A > 0$ such that

$$\nu_{2q} := \sup_{t\in[0,1]} \max_{1\le j\le p} \mathbb{E}|X_j(t)|^{2q} < \infty \quad \text{and} \quad N_{X,2q} := \max_{1\le j\le p} \| X_{\cdot,j} \|_{2q,A} < \infty.$$

We let $M_{X,q} := \big( \sum_{1\le j\le p} \| X_{\cdot,j} \|_{2q,A}^q \big)^{1/q}$ and write $N_X = N_{X,4}$, $M_X = M_{X,2}$. The quantities $M_{X,q}$ and $N_{X,2q}$ measure the $\ell^q$-norm aggregated effect and the largest effect of the element-wise DANs, respectively. Both quantities play a role in the convergence rates of our estimator.
Obviously, we have $\| X_{ij} - X_{ij,\{m\}} \|_a \le \theta_{m,a,j}$ and $\max_{1\le j\le p} \mathbb{E}|X_{ij}|^{2q} \le \nu_{2q}$ for all $1 \le i \le n$. In contrast to other work on high-dimensional covariance matrix and network estimation, where sub-Gaussian tails and independence are the keys to ensuring consistent estimation, Assumption 2 only requires the time series to have a finite polynomial moment, and it allows linear and nonlinear processes with short memory in the time domain.
Example 1 (Vector linear process).
Consider the following vector linear process model:

$$H(\mathcal{F}_i; t) = \sum_{m=0}^\infty A_m(t)\, \varepsilon_{i-m},$$

where $\varepsilon_i = (\varepsilon_{i1}, \ldots, \varepsilon_{ip})^\top$, the $\varepsilon_{ij}$ are i.i.d. with mean 0 and variance 1, and $\|\varepsilon_{ij}\|_q \le C_q$ for each $i \in \mathbb{Z}$ and $1 \le j \le p$, with constants $q > 2$ and $C_q > 0$. The vector linear process is commonly seen in the literature and in applications [63]. It includes the time-varying VAR model, where $A_m(t) = A(t)^m$, as a special example.
Suppose that the coefficient matrices $A_m(t) = (a_{m,jk}(t))_{1\le j,k\le p}$, $m = 0, 1, \ldots$, satisfy the following conditions.

(A1) For each $1 \le j,k \le p$, $a_{m,jk}(t) \in \mathcal{C}^2[0,1]$.

(A2) For each $1 \le j \le p$, there is a constant $C_{A,j} > 0$ such that, for each $t \in [0,1]$, $\sum_{k=1}^p a_{m,jk}(t)^2 \le C_{A,j} (m+1)^{-2(A+1)}$ for all $m \ge 0$.

(A3) For any $t, t' \in [0,1]$, $\sum_{m=0}^\infty \sum_{k=1}^p [a_{m,jk}(t) - a_{m,jk}(t')]^2 \le L^2 |t-t'|^2$ for each $j = 1, \ldots, p$.
Note that

$$\sigma_{jk}(t) = \sum_{m\ge 0} A_{m,j\cdot}(t) A_{m,k\cdot}(t)^\top, \qquad \Theta_{m,q,j} \le C_q \sqrt{q-1} \sum_{i=m}^\infty \big( A_{i,j\cdot} A_{i,j\cdot}^\top \big)^{1/2}, \qquad \| X_{ij}(t) - X_{ij}(t') \|^2 = \sum_{m=0}^\infty \sum_{k=1}^p [a_{m,jk}(t) - a_{m,jk}(t')]^2,$$

where $A_{m,j\cdot}(t)$ is the $j$th row of $A_m(t)$. Under conditions (A1)–(A3), one can easily verify that, for each $1 \le j,k \le p$, the process satisfies: (1) $\sigma_{jk}(t) \in \mathcal{C}^2[0,1]$; (2) $\| X_{\cdot,j} \|_{q,A} \lesssim C_q \sqrt{q-1}\, C_{A,j}$ (due to Burkholder's inequality, cf. [64]); (3) $\| H_j(\mathcal{F}_0; t) - H_j(\mathcal{F}_0; t') \| \le L|t-t'|$.
Conditions (A1)–(A3) implicitly impose smoothness on each entry of the coefficient matrices, sparsity on each row of the coefficients and of their evolution, and a polynomial decay rate in the lag $m$ for each entry and its derivative.
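As an illustration of how a process satisfying (A1)–(A3) can be generated, here is a hedged simulation sketch; the specific coefficient choice $A_m(t) = (1+m)^{-2} B(t)$ with a banded, smoothly varying $B(t)$ is our own assumption, used only to make the decay and smoothness conditions visible in code.

```python
# Sketch: simulate a time-varying vector linear process X_i = sum_m A_m(t_i) eps_{i-m}
# with coefficients that are smooth in t (A1), row-sparse with polynomial decay
# in m (A2), and Lipschitz in t (A3).
import numpy as np

rng = np.random.default_rng(1)
n, p, M = 500, 20, 50                # sample size, dimension, lag truncation

def A_m(m, t):
    B = np.eye(p) + 0.4 * np.sin(np.pi * t) * np.eye(p, k=1)  # smooth in t
    return (1 + m) ** (-2.0) * B                               # decay in m

eps = rng.standard_normal((n + M, p))
X = np.empty((n, p))
for i in range(n):
    t = (i + 1) / n
    X[i] = sum(A_m(m, t) @ eps[M + i - m] for m in range(M + 1))
```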
For $1 \le l \le \iota$, let $\delta_{jk}(t^{(l)}) := \sigma_{jk}(t^{(l)}) - \sigma_{jk}(t^{(l)-})$ and $\Delta(t^{(l)}) = (\delta_{jk}(t^{(l)}))_{1\le j,k\le p}$, where $\sigma_{jk}(t^{(l)-}) = \lim_{t\uparrow t^{(l)}} \sigma_{jk}(t)$ is well-defined in view of (3). We assume that the change points are separated and sizeable.
Assumption 3 (Separability and sizeability of change points).
There exist positive constants $c_1 \in (0,1)$ and $c_2 > 0$, independent of $n$ and $p$, such that $\min_{0\le l\le \iota} (t^{(l+1)} - t^{(l)}) \ge c_1$ and $\delta(t^{(l)}) := |\Delta(t^{(l)})|_\infty \ge c_2$ for all $1 \le l \le \iota$.
In the high-dimensional context, we assume that the inverse covariance matrices are sparse in the sense of their $L_1$ norms.
Assumption 4 (Sparsity of precision matrices).
The precision matrix satisfies $|\Omega(t)|_{L_1} \le \kappa_p$ for each $t \in [0,1]$, where $\kappa_p$ is allowed to grow with $p$.
If we further assume that the eigenvalues of the covariance matrices are bounded from below and above, i.e., there exists a constant $0 < c < 1$ such that $c \le \inf_{t\in[0,1]} \lambda_{\min}(\Sigma(t)) \le \sup_{t\in[0,1]} \lambda_{\max}(\Sigma(t)) \le c^{-1}$, then the covariance and precision matrices are well-conditioned. In particular, since $\rho(\Omega(t) - \Omega(t')) \le c^{-2} \rho(\Sigma(t) - \Sigma(t'))$, a small perturbation of the covariance matrix guarantees a small change of the same order in the precision matrix under the spectral norm.

3. Method: Change Point Estimation and Support Recovery

In graphical models (such as the Gaussian graphical model or the partial correlation graph), network structures relevant to correlations or partial correlations are second-order characteristics of the data distributions. Specifically, the existence of edges coincides with non-zero entries of the inverse covariance matrix. We consider the dynamics of time series with both structural breaks and smooth changes. The piecewise stochastic Lipschitz continuity in Definition 1 allows the time series to have discontinuities in the covariance matrix function at the time points $t^{(l)}$, $l = 1, \ldots, \iota$ (i.e., the change points), while only smooth changes (i.e., twice continuous differentiability of the covariance matrix function, as in Assumption 1) can occur between the change points.
In the presence of change points, we must first remove the change points before applying any smoothing procedure, since $|\Omega(t) - \Omega(t-)|_\infty \ge |\Sigma(t)|_{L_1}^{-1} |\Sigma(t-)|_{L_1}^{-1} |\Delta(t)|_\infty$; i.e., a non-negligible abrupt change in the covariance matrix results in a substantial change of the graph structure for sparse and smooth covariance matrices. Thus, our proposed graph recovery method consists of two steps: change point detection and support recovery.
Let $h \equiv h_n > 0$ be a bandwidth parameter such that $h = o(1)$ and $n^{-1} = o(h)$, and let $\mathcal{D}_h^{(0)} = \{h, h+1/n, \ldots, 1-h\}$ be a search grid in $(0,1)$. Define

$$D(s) = n^{-1} \Big( \sum_{i=0}^{\lfloor hn\rfloor - 1} X_{\lceil ns\rceil - i} X_{\lceil ns\rceil - i}^\top - \sum_{i=1}^{\lfloor hn\rfloor} X_{\lceil ns\rceil + i} X_{\lceil ns\rceil + i}^\top \Big), \quad s \in \mathcal{D}_h^{(0)}. \qquad (5)$$

To estimate the change points, compute

$$\hat{s}_1 = \arg\max_{s\in\mathcal{D}_h^{(0)}} |D(s)|_\infty. \qquad (6)$$

The following steps are performed recursively. For $l = 1, 2, \ldots$, let

$$\mathcal{D}_h^{(l)} = \mathcal{D}_h^{(l-1)} \cap \{\hat{s}_l - 2h, \ldots, \hat{s}_l + 2h\}^c, \qquad (7)$$

$$\hat{s}_{l+1} = \arg\max_{s\in\mathcal{D}_h^{(l)}} |D(s)|_\infty, \qquad (8)$$

until the following stopping criterion is attained:

$$\max_{s\in\mathcal{D}_h^{(l)}} |D(s)|_\infty < \nu, \qquad (9)$$
where $\nu$ is an early stopping threshold. The value of $\nu$ is determined in Section 4; it depends on the dimension and sample size, as well as on the serial dependence level, the tail condition, and the local smoothness. Since our method only utilizes data in a localized neighborhood, multiple change points can be estimated and ranked in a single pass, which offers a computational advantage over the binary segmentation algorithm [41,46].
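The following is a minimal sketch of the single-pass detection step, Equations (5)–(9); the grid granularity and array indexing conventions are our own simplifications, and the threshold `nu` is taken as an input (in practice it is chosen as in Section 4 or by the ratio rule of Section 5).

```python
# Single-pass change point detection: compute |D(s)|_inf on the grid (Eq. (5)),
# then repeatedly take the argmax (Eqs. (6) and (8)), masking +/- 2h
# neighborhoods (Eq. (7)), until the maximum falls below nu (Eq. (9)).
import numpy as np

def detect_change_points(X, h, nu):
    """X: (n, p) data matrix; h: bandwidth in (0, 1); nu: stopping threshold."""
    n, p = X.shape
    hn = int(h * n)
    stats = np.full(n, -np.inf)
    for s in range(hn, n - hn):
        left = X[s - hn:s]                 # hn points up to s
        right = X[s:s + hn]                # hn points after s
        D = (left.T @ left - right.T @ right) / n
        stats[s] = np.abs(D).max()         # |D(s)|_inf
    active = stats > -np.inf
    cps = []
    while True:
        s_hat = int(np.argmax(np.where(active, stats, -np.inf)))
        if stats[s_hat] < nu or not active[s_hat]:
            break                          # stopping rule (9)
        cps.append(s_hat / n)
        lo, hi = max(0, s_hat - 2 * hn), min(n, s_hat + 2 * hn + 1)
        active[lo:hi] = False              # exclude the +/- 2h neighborhood (7)
    return sorted(cps)
```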
Once the change points are identified, in the second step we consider recovering the networks from the locally stationary time series before and after the structural breaks. In [11], where $X_i$, $i = 1, \ldots, n$, are assumed to have an identical covariance matrix, the precision matrix is estimated as

$$\hat{\Omega}_\lambda = \arg\min_{\Omega\in\mathbb{R}^{p\times p}} |\Omega|_1 \quad \text{s.t.} \quad |\hat{\Sigma}\Omega - \mathrm{Id}_p|_\infty \le \lambda, \qquad (10)$$
where $\hat{\Sigma}$ is the sample covariance matrix. Inspired by (10), we apply a kernelized time-varying (tv-)CLIME estimator to the covariance matrix functions of the multiple pieces of locally stationary processes before and after the structural breaks. Let

$$\hat{\Sigma}(t) = \sum_{i=1}^n w(t, t_i)\, X_i X_i^\top, \qquad (11)$$

where

$$w(t, t_i) = \frac{K_b(t_i, t)}{\sum_{i'=1}^n K_b(t_{i'}, t)} \qquad (12)$$

and $K_b(u, v) = K(|u-v|/b)/b$. The bandwidth parameter $b$ satisfies $b = o(1)$ and $n^{-1} = o(b)$. Denote $B_n = \lfloor nb\rfloor$. The kernel function $K(\cdot)$ is chosen to have the following properties.
Assumption 5 (Regularity of kernel function).
The kernel function $K(\cdot)$ is non-negative, symmetric, and Lipschitz continuous with bounded support in $[-1, 1]$, and $\int_{-1}^1 K(u)\, du = 1$.
Assumption 5 is a common requirement on kernel functions and is fulfilled by a range of kernels, such as the uniform, triangular, and Epanechnikov kernels. Now, the tv-CLIME estimator of the precision matrix $\Omega(t)$ is defined by $\tilde{\Omega}(t) = (\tilde{\omega}_{jk}(t))_{1\le j,k\le p}$, where $\tilde{\omega}_{jk}(t) = \min(\hat{\omega}_{jk}(t), \hat{\omega}_{kj}(t))$ and $\hat{\Omega}(t) \equiv \hat{\Omega}_\lambda(t) = (\hat{\omega}_{jk}(t))_{1\le j,k\le p}$ with

$$\hat{\Omega}_\lambda(t) = \arg\min_{\Omega\in\mathbb{R}^{p\times p}} |\Omega|_1 \quad \text{s.t.} \quad |\hat{\Sigma}(t)\Omega - \mathrm{Id}_p|_\infty \le \lambda. \qquad (13)$$
A similar hybrid of kernel smoothing and the CLIME method for estimating sparse and smooth transition matrices in a high-dimensional VAR model was considered in [65], where change points are not considered. Thus, in the current setting, we need to carefully control the effect of (consistently) removing the change points before smoothing.
Then, the network is estimated by the "effective support", defined as follows:

$$\hat{G}(t; u) = (\hat{g}_{jk}(t; u))_{1\le j,k\le p}, \quad \text{where} \quad \hat{g}_{jk}(t; u) = \mathbf{1}\{ |\tilde{\omega}_{jk}(t)| \ge u \}. \qquad (14)$$
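For concreteness, here is a hedged sketch of the tv-CLIME estimator (11)–(14): the kernel-weighted covariance, the column-wise linear programming reformulation of (13) (a standard way to solve CLIME, here via scipy's HiGHS solver), and the smaller-magnitude symmetrization used in the CLIME literature [11]; the Epanechnikov default kernel and all numerical settings are illustrative assumptions.

```python
# tv-CLIME sketch: kernel smoothing (Eqs. (11)-(12)) + column-wise CLIME LP (Eq. (13)).
import numpy as np
from scipy.optimize import linprog

def kernel_cov(X, t, b, kernel=lambda u: 0.75 * (1 - u ** 2) * (np.abs(u) <= 1)):
    """Kernel-smoothed covariance at time t in (0, 1) with bandwidth b."""
    n, _ = X.shape
    ti = np.arange(1, n + 1) / n
    w = kernel(np.abs(ti - t) / b)
    w = w / w.sum()                       # normalized weights w(t, t_i)
    return (X * w[:, None]).T @ X

def tv_clime(X, t, b, lam):
    """Solve Eq. (13) column by column as an LP, then symmetrize."""
    Sigma = kernel_cov(X, t, b)
    p = Sigma.shape[0]
    Omega = np.zeros((p, p))
    for j in range(p):
        e = np.zeros(p); e[j] = 1.0
        # minimize |omega|_1 s.t. |Sigma omega - e_j|_inf <= lam, with omega = u - v
        c = np.ones(2 * p)
        A = np.vstack([np.hstack([Sigma, -Sigma]), np.hstack([-Sigma, Sigma])])
        ub = np.concatenate([lam + e, lam - e])
        res = linprog(c, A_ub=A, b_ub=ub, bounds=[(0, None)] * (2 * p), method="highs")
        Omega[:, j] = res.x[:p] - res.x[p:]
    keep = np.abs(Omega) <= np.abs(Omega.T)
    return np.where(keep, Omega, Omega.T)  # keep the smaller-magnitude entry

def support(Omega, u):
    """Effective support (Eq. (14)): edges with |omega| >= u."""
    return (np.abs(Omega) >= u).astype(int)
```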
It should be noted that the (vanilla) kernel smoothing estimator (11) of the covariance matrix does not adjust for the boundary effect due to the change points in the covariance matrix function. Thus, in a neighborhood of the change points, a larger bias can be induced when estimating $\Sigma(t)$ by $\hat{\Sigma}(t)$. As a remedy, we apply the following reflection procedure for boundary correction. Denote $\hat{T}_d(j) := [\hat{s}_j - d, \hat{s}_j + d)$ for $d \in (0,1)$, and suppose $t \in \hat{T}_{b+h^2}(j)$ for some $1 \le j \le \iota$. We replace (11) by

$$\hat{\Sigma}(t) = \sum_{i=1}^n w(t, t_i)\, \breve{x}_i \breve{x}_i^\top, \qquad (15)$$

and then apply the rest of the tv-CLIME approach. Here,

$$\breve{x}_i = \begin{cases} x_i & \text{if } (i - \hat{s}_j n)(t - \hat{s}_j) \ge 0; \\ x_{2\lceil \hat{s}_j n\rceil - i} & \text{otherwise}. \end{cases}$$
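A sketch of the reflection correction (15) follows, reusing `kernel_cov` from the previous snippet; the index clipping at the sample edges is our own guard, not part of the stated procedure.

```python
# Reflect observations across the estimated change point s_hat before smoothing,
# so that only data on the same side of the break as t enters the local average.
import numpy as np

def reflected_cov(X, t, s_hat, b):
    n = X.shape[0]
    i = np.arange(1, n + 1)
    same_side = (i - s_hat * n) * (t - s_hat) >= 0
    idx = np.where(same_side, i, np.ceil(2 * s_hat * n) - i).astype(int)
    idx = np.clip(idx, 1, n) - 1          # guard: stay inside the sample
    return kernel_cov(X[idx], t, b)       # then proceed with tv-CLIME as before
```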

4. Theoretical Results

In this section, we derive the theoretical guarantees for the change point estimation and graph support recovery. Roughly speaking, Propositions 1 and 2 below show that, under appropriate conditions, if each element of the covariance matrix varies smoothly in time, one can obtain an accurate snapshot estimate of the precision matrices, as well as the time-varying graphs, with high probability via the proposed kernel smoothed constrained $\ell_1$ minimization approach.
Define $J_{q,A}(n,p) = M_{X,q} (p\, \varpi_{q,A}(n))^{1/q}$, where $\varpi_{q,A}(n) = n$, $n(\log n)^{1+2q}$, or $n^{q/2 - Aq}$ if $A > 1/2 - 1/q$, $A = 1/2 - 1/q$, or $0 < A < 1/2 - 1/q$, respectively.
Proposition 1 (Rate of convergence for estimating precision matrices: pointwise and uniform).
Suppose Assumptions 2, 4, and 5 hold with $\iota = 0$. Let $B_n = \lfloor bn\rfloor$ with $n^{-1} = o(b)$ and $b = o(1)$.
(i) 
Pointwise. Choose the parameter $\lambda \ge C \kappa_p (b^2 + B_n^{-1} J_{q,A}(B_n, p) + N_X (\log p / B_n)^{1/2})$ in the tv-CLIME estimator $\hat{\Omega}_\lambda(t)$ in (13), where $C$ is a sufficiently large constant independent of $n$ and $p$. Then, for any $t \in [b, 1-b]$, we have

$$|\hat{\Omega}_\lambda(t) - \Omega(t)|_\infty = O_{\mathbb{P}}(\kappa_p \lambda). \qquad (16)$$
(ii) 
Uniform. Choose $\lambda^* \ge C \kappa_p (b^2 + B_n^{-1} J_{q,A}(n, p) + N_X B_n^{-1} (n \log p)^{1/2})$ in the tv-CLIME estimator $\hat{\Omega}_{\lambda^*}(t)$ in (13), where $C$ is a sufficiently large constant independent of $n$ and $p$. Then, we have

$$\sup_{t\in[b,1-b]} |\hat{\Omega}_{\lambda^*}(t) - \Omega(t)|_\infty = O_{\mathbb{P}}(\kappa_p \lambda^*). \qquad (17)$$
The optimal order of the bandwidth parameter $b = b^*$ in (17) solves the equation

$$b^2 = B_n^{-1} \max\big( J_{q,A}(n,p),\, N_X (n \log p)^{1/2} \big),$$

which implies that the closed-form expression for $b^*$ is given by

$$b^* = C_1 \big( n^{-1} J_{q,A}(n,p) \big)^{1/3} + C_2 N_X^{1/3} n^{-1/6} (\log p)^{1/6}$$

for some constants $C_1$ and $C_2$ that are independent of $n$ and $p$.
Given a finite sample, distinguishing the small entries of the precision matrix from noise is challenging. Since a smaller magnitude of an element of the precision matrix implies a weaker connection of the corresponding edge in the graphical model, we instead consider the estimation of significant edges in the graph. Define the set of significant edges at level $u$ as $E^*(t; u) = \{(j,k) : g^*_{jk}(t; u) \ne 0\}$, where

$$g^*_{jk}(t; u) = \mathbf{1}\{ |\omega_{jk}(t)| > u \}.$$
Then, as a consequence of (17), we have the following support recovery consistency result.
Proposition 2 (Consistency of support recovery: significant edges).
Choose $u$ as $u^* = C_0 \kappa_p^2 (b^*)^2$, where $C_0$ is a sufficiently large constant independent of $n$ and $p$. Suppose that $u^* = o(1)$ as $n, p \to \infty$. Then, under the conditions of Proposition 1, we have, as $n, p \to \infty$,

$$\mathbb{P}\Big( \sup_{t\in[b,1-b]} \sum_{(j,k)\in E^c(t)} \mathbf{1}\{\hat{g}_{jk}(t; u^*) \ne 0\} \ne 0 \Big) \to 0, \qquad (18)$$

$$\mathbb{P}\Big( \sup_{t\in[b,1-b]} \sum_{(j,k)\in E^*(t; 2u^*)} \mathbf{1}\{\hat{g}_{jk}(t; u^*) = 0\} \ne 0 \Big) \to 0. \qquad (19)$$
Proposition 2 shows that the pattern of significant edges in the time-varying true graphs $G(t)$, $t \in [b, 1-b]$, can be correctly recovered with high probability. However, it remains an open question to what extent edges with magnitude below $u^*$ can be consistently estimated; this can naturally be studied in a multiple hypothesis testing framework. Nonetheless, hypothesis testing for graphical models on nonstationary high-dimensional time series is rather challenging. We leave it as a future problem.
Propositions 1 and 2 together yield that consistent estimation of the precision matrices and the graphs can be achieved before and after the change points. Now, we provide the theoretical result for the change point estimation. Theorem 1 below shows that, if the change points are separated and sizeable, we can consistently identify them via the single-pass segmentation approach under suitable conditions. Denote
$$h^* = C_1 \big( n^{-1} J_{q,A}(n,p) \big)^{1/3} + C_2 N_X^{1/3} n^{-1/6} (\log p)^{1/6},$$

where $C_1$ and $C_2$ are constants independent of $n$ and $p$.
Theorem 1 (Consistency of change point estimation).
Assume that $X_i \in \mathbb{R}^p$ admits the form (2). Suppose that Assumptions 2 and 3 are satisfied. Choose the bandwidth $h = h^*$ and the threshold $\nu = (1+L)(h^*)^2$ in (5) and (9), respectively. Assume that $h^* = o(1)$ as $n, p \to \infty$. Then, there exist constants $C_1, C_2, C_3$, independent of $n$ and $p$, such that

$$\mathbb{P}(|\hat{\iota} - \iota| > 0) \le C_1 \Big( \frac{p\, \varpi_{q,A}(n)\, M_{X,q}^q\, \nu_{2q}^q}{n^q c_2^q} \Big)^{1/3} + C_2\, p^2 \exp\Big( -C_3 \Big( \frac{n}{\log^2(p)\, N_X^2} \Big)^{1/3} \Big). \qquad (20)$$
Furthermore, on the event $\{\hat{\iota} = \iota\}$, the ordered change point estimators $\hat{s}_{(1)} < \hat{s}_{(2)} < \cdots < \hat{s}_{(\hat{\iota})}$ defined by (6) and (8) satisfy

$$\max_{1\le j\le \iota} |\hat{s}_{(j)} - t^{(j)}| = O_{\mathbb{P}}\big( (h^*)^2 \big). \qquad (21)$$
Proposition 2 and Theorem 1 together indicate consistency of the snapshot estimation of the time-varying graphs before and after the change points. In a close neighborhood of the change points, we have the following result on the recovery of the time-varying network. Denote by $\mathcal{S} := [b, 1-b] \cap \big( \bigcup_{1\le j\le \hat{\iota}} \hat{T}_{(h^*)^2 + b}(j) \big)^c$ the time intervals between the estimated change points, and by $\mathcal{N} := [0, b) \cup \bigcup_{1\le j\le \hat{\iota}} \big( \hat{T}_{(h^*)^2 + b}(j) \cap \hat{T}_{(h^*)^2}(j)^c \big) \cup (1-b, 1]$ the recoverable neighborhood of the jumps.
Theorem 2.
Let Assumptions 2–5 be satisfied. The following results hold as $n, p \to \infty$.
(i) 
Between change points. For $t \in \mathcal{S}$, take $b = b^*$ and $u = u^*$, where $b^*$ and $u^*$ are defined in Proposition 2. Suppose $u^* = o(1)$. We have

$$\sup_{t\in\mathcal{S}} \max_{j,k} |\hat{\sigma}_{jk}(t) - \sigma_{jk}(t)| = O_{\mathbb{P}}\big( (b^*)^2 \big). \qquad (22)$$

Choose the penalty parameter $\lambda := C_1 \kappa_p (b^*)^2$, where $C_1$ is a constant independent of $n$ and $p$. Then,

$$\sup_{t\in\mathcal{S}} |\hat{\Omega}_\lambda(t) - \Omega(t)|_\infty = O_{\mathbb{P}}\big( \kappa_p^2 (b^*)^2 \big). \qquad (23)$$

Moreover,

$$\mathbb{P}\Big( \sup_{t\in\mathcal{S}} \sum_{(j,k)\in E^c(t)} \mathbf{1}\{\hat{g}_{jk}(t; u^*) \ne 0\} = 0 \Big) \to 1, \qquad (24)$$

$$\mathbb{P}\Big( \sup_{t\in\mathcal{S}} \sum_{(j,k)\in E^*(t; 2u^*)} \mathbf{1}\{\hat{g}_{jk}(t; u^*) = 0\} = 0 \Big) \to 1. \qquad (25)$$
(ii) 
Around change points. For $t \in \mathcal{N}$, take $b = b' := C_1 (n^{-1} J_{q,A}(n,p))^{1/2} + C_2 N_X^{1/2} n^{-1/4} (\log p)^{1/4}$ and $u = u' := C_0 \kappa_p^2 b'$, where $C_0$, $C_1$, and $C_2$ are constants independent of $n$ and $p$. Suppose $u' = o(1)$. We have

$$\sup_{t\in\mathcal{N}} \max_{j,k} |\hat{\sigma}_{jk}(t) - \sigma_{jk}(t)| = O_{\mathbb{P}}(b'). \qquad (26)$$

Choose the penalty parameter $\lambda := C_1 \kappa_p b'$, where $C_1$ is a constant independent of $n$ and $p$. Then,

$$\sup_{t\in\mathcal{N}} |\hat{\Omega}_\lambda(t) - \Omega(t)|_\infty = O_{\mathbb{P}}(\kappa_p^2 b'). \qquad (27)$$

Moreover,

$$\mathbb{P}\Big( \sup_{t\in\mathcal{N}} \sum_{(j,k)\in E^c(t)} \mathbf{1}\{\hat{g}_{jk}(t; u') \ne 0\} = 0 \Big) \to 1,$$

$$\mathbb{P}\Big( \sup_{t\in\mathcal{N}} \sum_{(j,k)\in E^*(t; 2u')} \mathbf{1}\{\hat{g}_{jk}(t; u') = 0\} = 0 \Big) \to 1.$$
Note that the convergence rates for the covariance and precision matrix entries in case (ii), around the jump locations, are slower than those in case (i), for points well separated from the jump locations. This is because, on the boundary, due to the reflection, the smoothness condition may no longer hold; indeed, we only exploit the Lipschitz continuity of the covariance matrix function. Thus, we lose one degree of regularity in the covariance matrix function, and the bias term $b^2$ in the convergence rate of the between-jump region becomes $b$ around the jumps. We also note that in the smaller neighborhood of the jumps, $\mathcal{J} := \bigcup_{1\le j\le \hat{\iota}} \hat{T}_{(h^*)^2}(j)$, due to the larger error in the change point estimation, consistent recovery of the graphs is not achievable.

5. A Simulation Study

We simulate data from the following multivariate time series model:
$$X_i = \sum_{m=0}^{100} A_m(i)\, \epsilon_{i-m}, \quad i = 1, \ldots, n,$$

where $A_m(i) \in \mathbb{R}^{p\times p}$, $1 \le m \le 100$, $1 \le i \le n$, and $\epsilon_{i-m} = (\epsilon_{i-m,1}, \ldots, \epsilon_{i-m,p})^\top$, with $\epsilon_{m,k}$, $m \in \mathbb{Z}$, $k = 1, \ldots, p$, generated as i.i.d. standardized Student's $t(8)$ random variables. In the simulation, we fix $n = 1000$ and vary $p = 50$ and $p = 100$. For each $m = 1, \ldots, 100$, the coefficient matrices are $A_m(i) = (1+m)^{-\beta} B_m(i)$, where $\beta = 1$ and $B_m(1)$ is a $p \times p$ block diagonal matrix. The $5 \times 5$ diagonal blocks of $B_m(i)$ are fixed with i.i.d. $N(0,1)$ entries, and all other entries are 0.
We set the number of abrupt changes to $\iota = 2$, with $(n t^{(1)}, n t^{(2)}) = (300, 650)$. The matrix $A_0(i)$ is set to be the zero matrix for $i = 1, 2, \ldots, 299$, while $A_0(i) = A_0(299) + \alpha\alpha^\top$ for $i = 300, 301, \ldots, 649$ and $A_0(i) = A_0(649) - \alpha\alpha^\top$ for $i = 650, 651, \ldots, 1000$, where the first 20 entries of $\alpha$ are a constant $\delta_0$ and the others are 0.
We let the coefficient matrices $A_1(i) = \{a_{1,jk}(i)\}_{1\le j,k\le p}$ evolve at each time point, such that two entries are soft-thresholded and another two elements increase. Specifically, at time $i$, we randomly select two elements from the support of $A_1(i)$, denoted by $a_{1,j_l k_l}(i)$, $l = 1, 2$, with $a_{1,j_l k_l}(i) \ne 0$, and set $a_{1,j_l k_l}(i) = \mathrm{sign}(a_{1,j_l k_l}(i))\, (|a_{1,j_l k_l}(i)| - 0.05)_+$. We also randomly select two elements of $A_1(i)$ and increase their values by 0.03.
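As a hedged sketch of this design, the rank-one jump construction for $A_0(i)$ can be written as follows; the default $\delta_0$ is an illustrative value.

```python
# Build A_0(i): zero before n t^(1) = 300, plus alpha alpha^T on [300, 650),
# minus alpha alpha^T from 650 on (returning A_0 to the zero matrix).
import numpy as np

def make_A0(n=1000, p=50, delta0=0.5, cps=(300, 650)):
    alpha = np.zeros(p)
    alpha[:20] = delta0                   # first 20 entries equal delta0
    jump = np.outer(alpha, alpha)
    A0 = np.zeros((n, p, p))
    A0[cps[0] - 1:cps[1] - 1] += jump     # time points 300, ..., 649 (1-based)
    return A0
```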
Figure 1 and Figure 2 show the support of the true covariance matrices at $i = 100, 200, \ldots, 900$.
In detecting the change points, the cutoff value $\nu$ is chosen as follows. Running the detection with $\nu = 0$ in (9) yields candidates $\hat{s}_1, \ldots, \hat{s}_{\bar{l}}$; after removing the neighborhoods of the detected change points, we obtain the ordered values $\tilde{D}_h(1) \ge \tilde{D}_h(2) \ge \cdots \ge \tilde{D}_h(\bar{l})$ of $|D(\hat{s}_l)|_\infty$. For $l = 1, 2, \ldots, \bar{l}-1$, compute

$$R_h(l) = \frac{\tilde{D}_h(l)}{\tilde{D}_h(l+1)}.$$

We let $\hat{\iota} = \arg\max_{1\le l\le \bar{l}-1} R_h(l)$ and set $\nu = \tilde{D}_h(\hat{\iota})$.
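In code, the ratio rule amounts to sorting the candidate maxima and locating the largest consecutive ratio; a minimal sketch (with our own 0-based indexing) follows.

```python
# Ratio-based cutoff: given the values |D(s_hat_l)|_inf from a run with nu = 0,
# pick iota_hat at the largest gap R_h(l) and set nu to the iota_hat-th value.
import numpy as np

def choose_nu(D_values):
    D = np.sort(np.asarray(D_values))[::-1]   # ordered maxima, descending
    ratios = D[:-1] / D[1:]                   # R_h(l) = D(l) / D(l+1)
    iota_hat = int(np.argmax(ratios))         # 0-based index of the largest ratio
    return iota_hat + 1, D[iota_hat]          # (number of change points, nu)
```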
We report the number of estimated jumps and the average absolute estimation error, where the average absolute estimation error is the mean of the distances between the estimated change points and the true change points. As is shown in Table 1 and Table 2, there is an apparent improvement in the estimation accuracy as the jump magnitude increases and the dimension decreases. The detection is relatively robust to the choice of bandwidth.
We evaluate the support recovery performance of the time-varying CLIME on the lattice $\{100, 200, \ldots, 900\}$ with $\lambda = 0.02, 0.06, 0.1$. We use the uniform kernel function, and the bandwidth is fixed at 0.2. At each time point $t_0$, two quantities are computed, sensitivity and specificity, defined as

$$\mathrm{sensitivity} = \frac{\sum_{1\le j,k\le p} \mathbf{1}\{\hat{g}_{jk}(t_0; u) \ne 0,\ g_{jk}(t_0; u) \ne 0\}}{\sum_{1\le j,k\le p} \mathbf{1}\{g_{jk}(t_0; u) \ne 0\}}, \qquad \mathrm{specificity} = \frac{\sum_{1\le j,k\le p} \mathbf{1}\{\hat{g}_{jk}(t_0; u) = 0,\ g_{jk}(t_0; u) = 0\}}{\sum_{1\le j,k\le p} \mathbf{1}\{g_{jk}(t_0; u) = 0\}}.$$
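These two quantities can be computed directly from the estimated and true supports; the helper below is a small sketch (the `max(..., 1)` guards against empty edge sets and is our own addition).

```python
# Sensitivity and specificity of support recovery at a fixed time point.
import numpy as np

def sens_spec(Omega_hat, Omega_true, u):
    est = np.abs(Omega_hat) >= u              # g_hat_{jk}(t0; u) != 0
    tru = np.abs(Omega_true) > u              # g_{jk}(t0; u) != 0
    sensitivity = (est & tru).sum() / max(tru.sum(), 1)
    specificity = (~est & ~tru).sum() / max((~tru).sum(), 1)
    return sensitivity, specificity
```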
We plot the Receiver Operating Characteristic (ROC) curve, that is, sensitivity against 1 − specificity. From Figure 3 and Figure 4, we observe that, due to the screening step, the support recovery is robust to the choice of $\lambda$, except at the change points, where a non-negligible estimation error of the covariance matrix is induced and the overall estimation is less accurate. Since the effective dimension of the network remains the same at $p = 50$ and $p = 100$ by the construction of the coefficient matrices $A_m(i)$, there is no significant difference between the ROC curves at different dimensions.

6. A Real Data Application

Understanding the interconnections among financial entities, and how they vary over time, provides investors and policy makers with insights into risk control and decision making. Reference [66] presents a comprehensive study of the applications of network theory in financial systems. In this section, we apply our method to a real financial dataset from Yahoo! Finance (finance.yahoo.com). The data matrix contains the daily closing prices of 420 stocks that remained in the S&P 500 index from 2 January 2002 through 30 December 2011; in total, there are $n = 2519$ time points. We select the 100 stocks with the largest volatility and consider their log-returns; that is, for $j = 1, \ldots, 100$,

$$X_{ij} = \log( p_{i+1,j} / p_{ij} ),$$

where $p_{ij}$ is the daily closing price of stock $j$ at time point $i$. We first compute the statistics (5) and (6) for the change point detection and look at the top three statistics for different bandwidths. For the bandwidth $h = n^{-1/5} \approx 0.21$, we rank the test statistic and find that the location of the top change point is 7 February 2008 ($n\hat{s}_1 = 1536$), as shown in Figure 5. The detected change point is quite robust to a variety of choices of bandwidth. Our result is partially consistent with the change point detection method in [48]; in particular, the two breaks in 2006 and 2007 were also found in [48], and it is conjectured that the 2007 break may be associated with the U.S. housing market collapse. Meanwhile, it is interesting to observe the increased volatility before the 2008 financial crisis.
Next, we estimate the time-varying networks before and after the change point at 26 May 2006, which has the largest jump size. Specifically, we look at four time points, 813, 828, 888, and 903, corresponding to 23 March 2006, 13 April 2006, 11 July 2006, and 1 August 2006. We use tv-CLIME (13) with the Epanechnikov kernel and the same bandwidth as in the change point detection to estimate the networks at the four points. The tuning parameter $\lambda$ is automatically selected according to the stability approach [67]. The matrix below reports the number of differing edges between the networks at these four time points; the first two time points (813 and 828) and the last two (888 and 903) have a higher similarity to each other than across the change point at time 858. The estimated networks are shown in Figure 6; networks in the first and second rows are estimated before and after the estimated change point at time 858, respectively. It is observed that, at each time point, the companies in the same sector tend to be clustered together, such as the companies in the Energy sector: OXY, NOV, TSO, MRO, and DO (highlighted in cyan). The distance matrix of the estimated networks is
$$\begin{pmatrix} 0 & 332 & 350 & 396 \\ 332 & 0 & 394 & 428 \\ 350 & 394 & 0 & 234 \\ 396 & 428 & 234 & 0 \end{pmatrix}.$$

7. Proof of Main Results

7.1. Preliminary Lemmas

Lemma 1.
Let $(Y_i)_{i\in\mathbb{Z}}$ be a sequence that admits (2). Assume $Y_i \in \mathcal{L}^q$ for $i = 1, 2, \ldots$, and that the dependence adjusted norm (DAN) of the corresponding underlying array $(Y_i(t))$ satisfies $\|Y_\cdot\|_{q,A} < \infty$ for $q > 2$ and $A > 0$. Let $(w(t,t_i))_{i=1}^n$ be defined in (12) and suppose that the kernel function $K(\cdot)$ satisfies Assumption 5. Denote $\varpi_{q,A}(n) = n$, $n(\log n)^{1+2q}$, or $n^{q/2-Aq}$ if $A > 1/2 - 1/q$, $A = 1/2 - 1/q$, or $0 < A < 1/2 - 1/q$, respectively. Then, there exist constants $C_1, C_2$, and $C_3$, independent of $n$, such that for all $x > 0$,

$$\sup_{t\in(0,1)} \mathbb{P}\Big( \Big| \sum_{i=1}^n w(t,t_i)(Y_i - \mathbb{E} Y_i) \Big| > x \Big) \le C_1 \frac{\varpi_{q,A}(B_n)\, \|Y_\cdot\|_{q,A}^q}{B_n^q x^q} + C_2 \exp\Big( -\frac{C_3 B_n x^2}{\|Y_\cdot\|_{2,A}^2} \Big), \qquad (28)$$

$$\mathbb{P}\Big( \sup_{t\in(0,1)} \Big| \sum_{i=1}^n w(t,t_i)(Y_i - \mathbb{E} Y_i) \Big| > x \Big) \le C_1 \frac{\varpi_{q,A}(n)\, \|Y_\cdot\|_{q,A}^q}{B_n^q x^q} + C_2 \exp\Big( -\frac{C_3 B_n^2 x^2}{n \|Y_\cdot\|_{2,A}^2} \Big). \qquad (29)$$
Proof. 
Let $S_i = \sum_{j=1}^i (Y_j - \mathbb{E} Y_j)$. Note that

$$\sup_{t\in(0,1)} \Big| \sum_{i=1}^n w(t,t_i)(Y_i - \mathbb{E} Y_i) \Big| = \sup_{t\in(0,1)} \Big| \sum_{i=1}^n w(t,t_i)(S_i - S_{i-1}) \Big| \le \sup_t \sum_{i=1}^{n-1} |w(t,t_i) - w(t,t_{i+1})|\, |S_i| + \sup_t w(t,1)|S_n| \lesssim B_n^{-1} \max_{1\le i\le n} |S_i|,$$

where the last inequality follows from the fact that $\sup_t \sum_{i=1}^{n-1} |w(t,t_i) - w(t,t_{i+1})| \lesssim B_n^{-1}$, due to Assumption 5.
To see (29), it suffices to show

$$\mathbb{P}\Big( \max_{1\le i\le n} |S_i| > x \Big) \le C_1 \frac{\varpi_{q,A}(n)\, \|Y_\cdot\|_{q,A}^q}{x^q} + C_2 \exp\Big( -\frac{C_3 x^2}{n \|Y_\cdot\|_{2,A}^2} \Big). \qquad (30)$$
Now, we develop a probability deviation inequality for $\max_{1\le i\le n} |\sum_{j=1}^i \alpha_j Y_j|$, where the $\alpha_j \ge 0$, $1 \le j \le n$, are constants such that $\sum_{1\le j\le n} \alpha_j = 1$. Denote $\mathcal{P}_0(Y_i) = \mathbb{E}(Y_i \mid \varepsilon_i) - \mathbb{E}(Y_i)$ and

$$\mathcal{P}_k(Y_i) = \mathbb{E}(Y_i \mid \varepsilon_{i-k}, \ldots, \varepsilon_i) - \mathbb{E}(Y_i \mid \varepsilon_{i-k+1}, \ldots, \varepsilon_i).$$
Then, we can write

$$\max_{1\le i\le n} \Big| \sum_{j=1}^i \alpha_j Y_j \Big| \le \max_{1\le i\le n} \Big| \sum_{j=1}^i \alpha_j \mathcal{P}_0(Y_j) \Big| + \max_{1\le i\le n} \Big| \sum_{k=1}^n \sum_{j=1}^i \alpha_j \mathcal{P}_k(Y_j) \Big| + \max_{1\le i\le n} \Big| \sum_{k=n+1}^\infty \sum_{j=1}^i \alpha_j \mathcal{P}_k(Y_j) \Big|. \qquad (31)$$
Note that $(\mathcal{P}_0(Y_j))_{j\in\mathbb{Z}}$ is an independent sequence. By Nagaev's inequality and Ottaviani's inequality, we have

$$\mathbb{P}\Big( \max_{1\le i\le n} \Big| \sum_{j=1}^i \alpha_j \mathcal{P}_0(Y_j) \Big| \ge x \Big) \lesssim \frac{\sum_{j=1}^n \alpha_j^q \|\mathcal{P}_0(Y_j)\|_q^q}{x^q} + \exp\Big( -\frac{C_3 x^2}{\sum_{j=1}^n \alpha_j^2 \|\mathcal{P}_0(Y_j)\|_2^2} \Big) \lesssim \frac{\sum_{j=1}^n \alpha_j^q \|Y_j\|_q^q}{x^q} + \exp\Big( -\frac{C_3 x^2}{\sum_{j=1}^n \alpha_j^2} \Big), \qquad (32)$$
where the last inequality holds because $\|\mathcal{P}_0(Y_j)\|_q \le 2\|Y_j\|_q$ by Jensen's inequality. Since $\sum_{j=i+1}^n \alpha_j \mathcal{P}_k(Y_j)$ is a martingale difference sequence with respect to $\sigma(\varepsilon_{i+1-k}, \varepsilon_{i+2-k}, \ldots)$, $|\sum_{k=n+1}^\infty \sum_{j=i+1}^n \alpha_j \mathcal{P}_k(Y_j)|$ is a non-negative sub-martingale. Then, by Doob's inequality and Burkholder's inequality, we have

$$\mathbb{P}\Big( \max_{1\le i\le n} \Big| \sum_{k=n+1}^\infty \sum_{j=1}^i \alpha_j \mathcal{P}_k(Y_j) \Big| \ge x \Big) \le \mathbb{P}\Big( \Big| \sum_{k=n+1}^\infty \sum_{j=1}^n \alpha_j \mathcal{P}_k(Y_j) \Big| \ge \frac{x}{2} \Big) + \mathbb{P}\Big( \max_{1\le i\le n} \Big| \sum_{k=n+1}^\infty \sum_{j=1+i}^n \alpha_j \mathcal{P}_k(Y_j) \Big| \ge \frac{x}{2} \Big) \lesssim \frac{\sum_{k=1+n}^\infty \big\| \sum_{j=1}^n \alpha_j \mathcal{P}_k(Y_j) \big\|_q^q}{x^q} \lesssim \frac{\big( \sum_{j=1}^n \alpha_j^2 \big)^{q/2} \Theta_{n,q}^q}{x^q} \lesssim \frac{\Theta_{n,q}^q\, n^{q/2 - 1} \sum_{j=1}^n \alpha_j^q}{x^q}. \qquad (33)$$
Now, we deal with the term $\max_{1\le i\le n} |\sum_{k=1}^n \sum_{j=1}^i \alpha_j \mathcal{P}_k(Y_j)|$. Define $a_m = \min(2^m, n)$ and $M_n = \lceil \log n / \log 2 \rceil$. Then,

$$\max_{1\le i\le n} \Big| \sum_{k=1}^n \sum_{j=1}^i \alpha_j \mathcal{P}_k(Y_j) \Big| \le \sum_{m=1}^{M_n} \max_{1\le i\le n} \Big| \sum_{l=1}^{\lceil i/a_m\rceil} \sum_{j=1+(l-1)a_m}^{\min(l a_m, i)} \sum_{k=1+a_{m-1}}^{a_m} \alpha_j \mathcal{P}_k(Y_j) \Big|. \qquad (34)$$
Let $A_{\mathrm{odd}} = \{1 \le l \le \lceil i/a_m\rceil : l \text{ is odd}\}$ and $A_{\mathrm{even}} = \{1 \le l \le \lceil i/a_m\rceil : l \text{ is even}\}$. We have

$$\mathbb{P}\Big( \max_{1\le i\le n} \Big| \sum_{l=1}^{\lceil i/a_m\rceil} Z_{l,m,i} \Big| \ge x \Big) \le \mathbb{P}\Big( \max_{1\le i\le n} \Big| \sum_{l\in A_{\mathrm{odd}}} Z_{l,m,i} \Big| \ge x/2 \Big) + \mathbb{P}\Big( \max_{1\le i\le n} \Big| \sum_{l\in A_{\mathrm{even}}} Z_{l,m,i} \Big| \ge x/2 \Big),$$

where $Z_{l,m,i} := \sum_{j=1+(l-1)a_m}^{\min(l a_m, i)} \alpha_j \mathcal{P}_{a_{m-1}}^{a_m}(Y_j)$ is independent of $Z_{l+2,m,i}$ for $1 \le l \le \lceil i/a_m\rceil$, $1 \le m \le M_n$, $1 \le i \le n$, since $\mathcal{P}_{a_{m-1}}^{a_m}(Y_j) := \sum_{k=1+a_{m-1}}^{a_m} \mathcal{P}_k(Y_j)$ is $a_m$-dependent. Therefore, we can apply Ottaviani's inequality and Nagaev's inequality for independent variables. As a consequence,

$$\mathbb{P}\Big( \max_{1\le i\le n} \Big| \sum_{l=1}^{\lceil i/a_m\rceil} Z_{l,m,i} \Big| \ge x \Big) \lesssim \frac{\sum_{1\le l\le \lceil n/a_m\rceil} \|Z_{l,m,n}\|_q^q}{x^q} + \exp\Big( -\frac{C_3 x^2}{\sum_{1\le l\le \lceil n/a_m\rceil} \|Z_{l,m,n}\|_2^2} \Big).$$
Again, by Burkholder's inequality, we have, for $q \ge 2$,

$$\|Z_{l,m,n}\|_q \le \sum_{k=1+a_{m-1}}^{a_m} \Big\| \sum_{j=1+(l-1)a_m}^{\min(l a_m, n)} \alpha_j \mathcal{P}_k(Y_j) \Big\|_q \lesssim \Big( \sum_{j=1+(l-1)a_m}^{\min(l a_m, n)} \alpha_j^2 \Big)^{1/2} \big( \Theta_{a_{m-1},q} - \Theta_{a_m,q} \big).$$
Note that $\sum_{j=1+(l-1)a_m}^{\min(l a_m, n)} \alpha_j^2 \le a_m^{(q-2)/q} \big( \sum_{j=1+(l-1)a_m}^{\min(l a_m, n)} \alpha_j^q \big)^{2/q}$. Let $\tau_m = m^{-2} / \sum_{m'=1}^{M_n} m'^{-2}$; then $\tau_m \asymp m^{-2}$, since $1 \le \sum_{m=1}^{M_n} m^{-2} \le \pi^2/6$. With respect to (34), we have

$$\mathbb{P}\Big( \max_{1\le i\le n} \Big| \sum_{k=1}^n \sum_{j=1}^i \alpha_j \mathcal{P}_k(Y_j) \Big| \ge x \Big) \le \sum_{m=1}^{M_n} \mathbb{P}\Big( \max_{1\le i\le n} \Big| \sum_{l=1}^{\lceil i/a_m\rceil} Z_{l,m,i} \Big| \ge \tau_m x \Big) \lesssim \frac{\sum_{j=1}^n \alpha_j^q}{x^q}\, \|Y_\cdot\|_{q,A}^q \sum_{m=1}^{M_n} \tau_m^{-q} a_m^{(1/2 - A)q - 1} + \sum_{m=1}^{M_n} \exp\Big( -\frac{C_3 x^2 \tau_m^2 a_m^{2A}}{\sum_{j=1}^n \alpha_j^2 \|Y_\cdot\|_{2,A}^2} \Big). \qquad (35)$$
Note that $\sum_{m=1}^{M_n} \tau_m^{-q} a_m^{(1/2 - A)q - 1} \lesssim n^{-1} \varpi_{q,A}(n)$, and

$$\sum_{m=1}^{M_n} \exp\Big( -\frac{C_3 x^2 \tau_m^2 a_m^{2A}}{\sum_{j=1}^n \alpha_j^2 \|Y_\cdot\|_{2,A}^2} \Big) \lesssim \exp\Big( -\frac{C_3 x^2}{\sum_{j=1}^n \alpha_j^2 \|Y_\cdot\|_{2,A}^2} \Big).$$
Combining (31), (32), (33), and (35), we obtain

$$\mathbb{P}\Big( \max_{1\le i\le n} \Big| \sum_{j=1}^i \alpha_j (Y_j - \mathbb{E} Y_j) \Big| > x \Big) \le C_1 \frac{\varpi_{q,A}(n)\, \sum_{j=1}^n \alpha_j^q\, \|Y_\cdot\|_{q,A}^q}{n x^q} + C_2 \exp\Big( -\frac{C_3 x^2}{\sum_{j=1}^n \alpha_j^2 \|Y_\cdot\|_{2,A}^2} \Big). \qquad (36)$$
Now, we have (30) by taking $\alpha_j = n^{-1}$ for $j = 1, \ldots, n$. Since $K(\cdot)$ has bounded support, for any given $t \in [b, 1-b]$, we have

$$\mathbb{P}\Big( \Big| \sum_{i=1}^n w(t,t_i)(Y_i - \mathbb{E} Y_i) \Big| > x \Big) = \mathbb{P}\Big( \Big| \sum_{i=-B_n}^{B_n} w(t, t_{\lceil tn\rceil+i})(Y_{\lceil tn\rceil+i} - \mathbb{E} Y_{\lceil tn\rceil+i}) \Big| > x \Big) \le C_1 \frac{\varpi_{q,A}(B_n)\, \sum_{i=-B_n}^{B_n} w(t, t_{\lceil tn\rceil+i})^q\, \|Y_\cdot\|_{q,A}^q}{B_n x^q} + C_2 \exp\Big( -\frac{C_3 x^2}{\sum_{i=-B_n}^{B_n} w(t, t_{\lceil tn\rceil+i})^2 \|Y_\cdot\|_{2,A}^2} \Big).$$

Therefore, (28) follows from (36) by taking $\alpha_j = w(t, t_{\lceil tn\rceil+j})$, noting that, for any $t \in [b, 1-b]$, $\sum_{i=-B_n}^{B_n} w(t, t_{\lceil tn\rceil+i})^\beta \lesssim B_n^{1-\beta}$ for any constant $\beta \ge 2$. □
Lemma 2.
Suppose $(X_{ij})_{i\in\mathbb{Z},\, 1\le j\le p}$ satisfies Assumption 2, and let Assumption 5 hold. Let $\varpi_{q,A}(n)$ be defined as in Lemma 1. Then, there exist constants $C_1$, $C_2$, and $C_3$, independent of $n$ and $p$, such that for all $x > 0$, we have

$$\sup_{t\in(0,1)} \mathbb{P}\Big( \Big| \sum_{i=1}^n w(t,t_i) \big( X_i X_i^\top - \mathbb{E}(X_i X_i^\top) \big) \Big|_\infty \ge x \Big) \le C_1 \frac{\nu_{2q}^q\, p\, \varpi_{q,A}(B_n)\, M_{X,q}^q}{B_n^q x^q} + C_2\, p^2 \exp\Big( -\frac{C_3 B_n x^2}{\nu_4^2 N_X^2} \Big), \qquad (37)$$

and

$$\mathbb{P}\Big( \sup_{t\in(0,1)} \Big| \sum_{i=1}^n w(t,t_i) \big( X_i X_i^\top - \mathbb{E}(X_i X_i^\top) \big) \Big|_\infty \ge x \Big) \le C_1 \frac{\nu_{2q}^q\, p\, \varpi_{q,A}(n)\, M_{X,q}^q}{B_n^q x^q} + C_2\, p^2 \exp\Big( -\frac{C_3 B_n^2 x^2}{n \nu_4^2 N_X^2} \Big). \qquad (38)$$
Proof. 
For $1 \le j,k \le p$, let $Y_{i,jk} = X_{ij} X_{ik}$. We now check the conditions of Lemma 1 for $(Y_{i,jk})_{1\le i\le n}$. Denote $Y_{i,jk,\{m\}} = X_{ij,\{m\}} X_{ik,\{m\}}$. Then, the uniform functional dependence measure of $(Y_{i,jk})_i$ satisfies

$$\theta^Y_{m,q,jk} = \sup_i \| Y_{i,jk} - Y_{i,jk,\{m\}} \|_q = \sup_i \| X_{ij} X_{ik} - X_{ij,\{m\}} X_{ik,\{m\}} \|_q \le \sup_i \| X_{ij} (X_{ik} - X_{ik,\{m\}}) \|_q + \sup_i \| X_{ik,\{m\}} (X_{ij} - X_{ij,\{m\}}) \|_q.$$

Thus, the DAN of the process $Y_{\cdot,jk}$ satisfies

$$\| Y_{\cdot,jk} \|_{q,A} \le \sup_i \| X_{ij} \|_{2q} \| X_{\cdot,k} \|_{2q,A} + \sup_i \| X_{ik} \|_{2q} \| X_{\cdot,j} \|_{2q,A} \lesssim \nu_{2q} \big( \| X_{\cdot,k} \|_{2q,A} + \| X_{\cdot,j} \|_{2q,A} \big).$$
The result follows immediately from Lemma 1 and the Bonferroni inequality. □
Lemma 3.
We adopt the notation of Lemma 2. Suppose Assumptions 1, 2, and 5 hold with $\iota = 0$. Recall $B_n = \lfloor nb\rfloor$, where $b \to 0$ and $B_n \to \infty$ as $n \to \infty$. Then, the estimator $\hat{\Sigma}(t)$ in (11) satisfies, for any $t \in [b, 1-b]$,

$$|\hat{\Sigma}(t) - \Sigma(t)|_\infty = O_{\mathbb{P}}\Big( b^2 + M_{X,q} \nu_{2q} B_n^{-1} (p\, \varpi_{q,A}(B_n))^{1/q} + \nu_4 N_X (\log p / B_n)^{1/2} \Big). \qquad (39)$$

Furthermore,

$$\sup_{t\in[b,1-b]} |\hat{\Sigma}(t) - \Sigma(t)|_\infty = O_{\mathbb{P}}\Big( b^2 + M_{X,q} \nu_{2q} B_n^{-1} (p\, \varpi_{q,A}(n))^{1/q} + \nu_4 N_X B_n^{-1} (n \log p)^{1/2} \Big). \qquad (40)$$
Proof. 
First, we have

$$\mathbb{E}\,\hat{\sigma}_{jk}(t) - \sigma_{jk}(t) = \sum_{i=1}^n w(t,t_i) [\sigma_{jk}(t_i) - \sigma_{jk}(t)].$$

Approximating the discrete summation by an integral, we obtain, for all $1 \le j,k \le p$,

$$\sup_{t\in[b,1-b]} \Big| \mathbb{E}\,\hat{\sigma}_{jk}(t) - \sigma_{jk}(t) - \int_{-1}^1 K(u) [\sigma_{jk}(ub+t) - \sigma_{jk}(t)]\, du \Big| = O(B_n^{-1}).$$

By Assumption 1, we have

$$\sigma_{jk}(ub+t) - \sigma_{jk}(t) = ub\,\sigma_{jk}'(t) + \tfrac{1}{2} u^2 b^2 \sigma_{jk}''(t) + o(b^2 u^2).$$

Thus, $\sup_{t\in[b,1-b]} |\mathbb{E}\,\hat{\sigma}_{jk}(t) - \sigma_{jk}(t)| = O(B_n^{-1} + b^2)$, in view of Assumption 5. By Lemma 2, we have

$$\sup_{t\in(0,1)} \mathbb{P}\Big( |\hat{\Sigma}(t) - \mathbb{E}\hat{\Sigma}(t)|_\infty \ge x \Big) \le C_1 \frac{p\, \nu_{2q}^q\, M_{X,q}^q\, \varpi_{q,A}(B_n)}{B_n^q x^q} + C_2\, p^2 \exp\Big( -\frac{C_3 B_n x^2}{N_X^2} \Big).$$

Denote $u_0 = C_4 \big( M_{X,q} \nu_{2q} B_n^{-1} (p\, \varpi_{q,A}(B_n))^{1/q} + \nu_4 N_X (\log p / B_n)^{1/2} \big)$ for a large enough constant $C_4$; then, for any $t \in (0,1)$,

$$|\hat{\Sigma}(t) - \mathbb{E}\hat{\Sigma}(t)|_\infty = O_{\mathbb{P}}(u_0).$$
Thus (39) is proved. The result (40) can be obtained similarly. □

7.2. Proof of Main Results

Proof of Proposition 1. 
Given (39) and (40), the proof of (16) is standard (see, e.g., Theorem 6 of [11]). For the $\lambda$ and $\lambda^*$ given in Proposition 1, by Lemma 3 we have, respectively,

$$\lambda \ge \kappa_p \sup_t \mathbb{E}\, |\hat{\Sigma}(t) - \Sigma(t)|_\infty, \qquad (41)$$

$$\lambda^* \ge \kappa_p\, \mathbb{E} \sup_t |\hat{\Sigma}(t) - \Sigma(t)|_\infty. \qquad (42)$$

Then, note that for any $t \in [0,1]$ and any $\lambda > 0$,

$$|\hat{\Omega}_\lambda(t) - \Omega(t)|_\infty \le |\Omega(t)|_{L_1} |\Sigma(t)\hat{\Omega}_\lambda(t) - \mathrm{Id}_p|_\infty \le |\Omega(t)|_{L_1} \Big( |\hat{\Sigma}(t)\hat{\Omega}_\lambda(t) - \mathrm{Id}_p|_\infty + |(\Sigma(t) - \hat{\Sigma}(t))\Omega(t)|_\infty + |\hat{\Omega}_\lambda(t) - \Omega(t)|_{L_1} |\hat{\Sigma}(t) - \Sigma(t)|_\infty \Big),$$

where, by construction, $|\hat{\Sigma}(t)\hat{\Omega}_\lambda(t) - \mathrm{Id}_p|_\infty \le \lambda$ and $|\hat{\Omega}_\lambda(t) - \Omega(t)|_{L_1} \le 2\kappa_p$. Consequently,

$$|\hat{\Omega}_\lambda(t) - \Omega(t)|_\infty \le \kappa_p \lambda + 3\kappa_p |\hat{\Sigma}(t) - \Sigma(t)|_\infty. \qquad (43)$$

Then, (16) and (17) follow from (41)–(43). □
Proof of Proposition 2. 
Proposition 2 is an immediate consequence of (17). □
Proof of Theorem 1. 
Denote by $r_j$, $1 \le j \le \iota$, the jump time points, ordered decreasingly in terms of the max norm of the covariance jump, i.e., $|\Delta(r_1)|_\infty \ge |\Delta(r_2)|_\infty \ge \cdots \ge |\Delta(r_\iota)|_\infty \ge |\Delta(s)|_\infty$ for $s \in (0,1) \cap \{r_1, \ldots, r_\iota\}^c$ (temporal order is applied if there is a tie). Let $T_h(j) = [r_j - h, r_j + h)$. For $h = o(1)$, as a result of Assumption 3, $T_h(j) \cap T_h(i) = \emptyset$ if $i \ne j$, for $n$ sufficiently large. That is to say, each time point $s \in (0,1)$ is in the neighborhood of at most one change point.
For any $s \in [t^{(j)}, t^{(j+1)})$, $j = 0, 1, \ldots, \iota$, denote $\bar{D}(s) = \mathbb{E}[D(s)]$ and

$$\tilde{D}(s) = \begin{cases} (h - s + t^{(j)})\, \Delta(t^{(j)}), & t^{(j)} \le s < t^{(j)} + h, \\ 0, & t^{(j)} + h \le s < t^{(j+1)} - h, \\ (h + s - t^{(j+1)})\, \Delta(t^{(j+1)}), & t^{(j+1)} - h \le s \le t^{(j+1)}. \end{cases} \qquad (44)$$
Then, for $s \in \bigcup_{0\le j\le \iota} [t^{(j)} + h, t^{(j+1)} - h)$, by (3) we have

$$|\Sigma(s+t) - \Sigma(s)|_\infty \le L|t|, \quad |t| \le h,$$

and we can easily verify that

$$\sup_{s\in[0,1]} |\bar{D}(s) - \tilde{D}(s)|_\infty \le L h^2. \qquad (45)$$
Note that $|\tilde{D}(s)|_\infty$ is maximized at $s = r_1$, with $|\tilde{D}(r_1)|_\infty = h |\Delta(r_1)|_\infty$. By the triangle inequality, we have, for any $s \in [0,1]$,

$$|\bar{D}(r_1)|_\infty - |\bar{D}(s)|_\infty \ge h c_2 - |\bar{D}(r_1) - \tilde{D}(r_1)|_\infty - |\tilde{D}(s)|_\infty - |\bar{D}(s) - \tilde{D}(s)|_\infty \ge h c_2 - |\tilde{D}(s)|_\infty - 2Lh^2 \ge c_2 (|s - r_1| \wedge h) - 2Lh^2. \qquad (46)$$
On the other hand, since $|D(r_1)|_\infty \le |D(\hat{s}_1)|_\infty$, we have

$$|\bar{D}(r_1)|_\infty - |\bar{D}(\hat{s}_1)|_\infty \le |D(r_1)|_\infty - |D(\hat{s}_1)|_\infty + |D(r_1) - \bar{D}(r_1)|_\infty + |D(\hat{s}_1) - \bar{D}(\hat{s}_1)|_\infty \le |D(r_1) - \bar{D}(r_1)|_\infty + |D(\hat{s}_1) - \bar{D}(\hat{s}_1)|_\infty. \qquad (47)$$
Denote the event $\mathcal{A} := \{ \sup_{s\in[h,1-h]} |D(s) - \bar{D}(s)|_\infty \le h^2 \}$, and let $Y_i = (Y_{i,jk})_{1\le j,k\le p}$ with $Y_{i,jk} = X_{ij} X_{ik} - \sigma_{i,jk}$. Note that

$$|D_{jk}(s) - \bar{D}_{jk}(s)| = \frac{1}{n} \Big| \sum_{i=1}^{\lfloor hn\rfloor} Y_{\lceil ns\rceil + 1 - i, jk} - \sum_{i=1}^{\lfloor hn\rfloor} Y_{\lceil ns\rceil + i, jk} \Big|.$$
By Lemma 2, we have, for any $x > 0$,

$$\mathbb{P}\Big( \sup_{s\in[h,1-h]} |D(s) - \bar{D}(s)|_\infty \ge x \Big) \le C_1 \frac{p\, \varpi_{q,A}(n)\, M_{X,q}^q\, \nu_{2q}^q}{n^q x^q} + C_2\, p^2 \exp\Big( -\frac{C_3 n x^2}{N_X^2} \Big). \qquad (48)$$
It follows that

$$\big| |D(r_1)|_\infty - |D(\hat{s}_1)|_\infty \big| = O_{\mathbb{P}}\Big( n^{-1} J_{q,A}(n,p) + N_X (n^{-1} \log p)^{1/2} \Big).$$

Combining this with (46) and (47) and taking $h = h^*$, we have

$$|\hat{s}_1 - r_1| = O_{\mathbb{P}}\big( (h^*)^2 \big).$$
Furthermore, we have

$$\mathbb{P}(\mathcal{A}) \ge 1 - C_1 \Big( \frac{p\, \varpi_{q,A}(n)\, M_{X,q}^q\, \nu_{2q}^q}{n^q c_2^q} \Big)^{1/3} - C_2\, p^2 \exp\Big( -C_3 \Big( \frac{n}{\log^2(p)\, N_X^2} \Big)^{1/3} \Big). \qquad (49)$$
Let $\mathcal{A}_k := \{ \max_{1\le j\le k} |\hat{s}_j - r_j| \le c_2^{-1} 2(L+1) h^2 \}$ for $1 \le k \le \iota$, and assume $\mathcal{A}_k \supseteq \mathcal{A}$. Under $\mathcal{A}_k$, we have $[r_j - h, r_j + h) \subseteq \hat{T}_{2h}(j) := [\hat{s}_j - 2h, \hat{s}_j + 2h)$ for $1 \le j \le k$, and $r_{k+1} \notin \bigcup_{1\le j\le k} \hat{T}_{2h}(j)$, as a consequence of Assumption 3. According to (46) and (47), if $\mathcal{A}$ holds, then $|\hat{s}_{k+1} - r_{k+1}| \le c_2^{-1} 2(L+1) h^2$, which implies $\mathcal{A}_{k+1} \supseteq \mathcal{A}$. The result (21) then follows by induction.
Suppose $\mathcal{A}$ holds. By the choice of $\nu$, as a consequence of (45) and (49), and since $\nu = o(h)$, we have

$$\sup_{s\in[0,1]} |D(s) - \tilde{D}(s)|_\infty \le \nu.$$
As a result,

$$\min_{1\le j\le \iota} |D(r_j)|_\infty \ge c_2 h - \nu \ge \nu,$$

i.e., $\hat{\iota} \ge \iota$. On the other hand, since $\bigcup_{1\le j\le \iota} \hat{T}_{2h}(j)$ is excluded from the search region for $\hat{s}_{\iota+1}$, we have

$$\sup_{s \in (\bigcup_{1\le j\le \iota} \hat{T}_{2h}(j))^c} |D(s)|_\infty \le \nu.$$
In other words, $\{\hat{\iota} = \iota\} \supseteq \mathcal{A}$. Thus, (20) is proved. □
Proof of Theorem 2. 
We adopt the notation from the proof of Theorem 1 and assume that the event $\mathcal{E} := \{\hat{\iota} = \iota\}$ holds. Similar to Lemma 3, we have, by Lemma 2, that for any $t \in (0,1)$,

$$|\hat{\Sigma}(t) - \mathbb{E}\hat{\Sigma}(t)|_\infty = O_{\mathbb{P}}(u_0),$$

where $u_0 = C_4 \big( M_{X,q} \nu_{2q} B_n^{-1} (p\, \varpi_{q,A}(B_n))^{1/q} + \nu_4 N_X (\log p / B_n)^{1/2} \big)$ for a large enough constant $C_4$.
Since, under $\mathcal{E}$, $T_b(j) \subseteq \hat{T}_{b+h^2}(j)$, it follows that for $t \in \big( \bigcup_{1\le j\le \iota} \hat{T}_{b+h^2}(j) \big)^c \cap [b, 1-b]$ and all $1 \le j,k \le p$,

$$\mathbb{E}\hat{\sigma}_{jk}(t) - \sigma_{jk}(t) = \int_{-1}^1 K(u) [\sigma_{jk}(ub+t) - \sigma_{jk}(t)]\, du + O(B_n^{-1}) = b\,\sigma_{jk}'(t) \int_{-1}^1 u K(u)\, du + \Big( \tfrac{1}{2} b^2 \sigma_{jk}''(t) + o(b^2) \Big) \int_{-1}^1 u^2 K(u)\, du + O(B_n^{-1}) = O(b^2 + B_n^{-1}).$$
On the other hand, for $t \in \bigcup_{1\le j\le \iota} \big( \hat{T}_{b+h^2}(j) \cap \hat{T}_{h^2}(j)^c \big) \cup [0, b) \cup (1-b, 1]$, due to the reflection, we no longer have differentiability. As a result of the Lipschitz continuity, we get

$$\mathbb{E}\hat{\sigma}_{jk}(t) - \sigma_{jk}(t) = \int_{-1}^1 K(u) [\sigma_{jk}(ub+t) - \sigma_{jk}(t)]\, du + O(B_n^{-1}) = O(b + B_n^{-1}).$$
The result (22) follows from the choices of $b$. The rest of the proof is similar to the proofs of Propositions 1 and 2. □

Author Contributions

Methodology, M.X., X.C., and W.B.W.; writing—original draft preparation, M.X., X.C., and W.B.W.; writing—review and editing, M.X., X.C., and W.B.W.; software, M.X. All authors have read and agreed to the published version of the manuscript.

Funding

X.C.’s research is supported in part by NSF CAREER Award DMS-1752614 and UIUC Research Board Award RB18099. W.B.W.’s research is supported in part by NSF DMS-1405410.

Acknowledgments

X.C. acknowledges that part of this work was carried out at the MIT Institute for Data, System, and Society (IDSS).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lauritzen, S. Graphical Models; Clarendon Press: Oxford, UK, 1996.
2. Peng, J.; Wang, P.; Zhou, N.; Zhu, J. Partial Correlation Estimation by Joint Sparse Regression Models. J. Am. Stat. Assoc. 2009, 104, 735–746.
3. Meinshausen, N.; Bühlmann, P. High-dimensional graphs and variable selection with the lasso. Ann. Stat. 2006, 34, 1436–1462.
4. Friedman, J.; Hastie, T.; Tibshirani, R. Sparse Inverse Covariance Estimation with the Graphical Lasso. Biostatistics 2008, 9, 432–441.
5. Banerjee, O.; El Ghaoui, L.; d’Aspremont, A. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J. Mach. Learn. Res. 2008, 9, 485–516.
6. Rothman, A.J.; Bickel, P.J.; Levina, E.; Zhu, J. Sparse Permutation Invariant Covariance Estimation. Electron. J. Stat. 2008, 2, 494–515.
7. Yuan, M. High Dimensional Inverse Covariance Matrix Estimation via Linear Programming. J. Mach. Learn. Res. 2010, 11, 2261–2286.
8. Yuan, M.; Lin, Y. Model selection and estimation in the Gaussian graphical model. Biometrika 2007, 94, 19–35.
9. Ravikumar, P.; Wainwright, M.J.; Raskutti, G.; Yu, B. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electron. J. Stat. 2011, 5, 935–980.
10. Candès, E.; Tao, T. Rejoinder: “The Dantzig selector: Statistical estimation when p is much larger than n”. Ann. Stat. 2007, 35, 2392–2404.
11. Cai, T.; Liu, W.; Luo, X. A constrained ℓ1 minimization approach to sparse precision matrix estimation. J. Am. Stat. Assoc. 2011, 106, 594–607.
12. Cai, T.; Liu, W. Adaptive thresholding for sparse covariance matrix estimation. J. Am. Stat. Assoc. 2011, 106, 672–684.
13. Fan, J.; Feng, Y.; Wu, Y. Network Exploration via the Adaptive Lasso and SCAD Penalties. Ann. Appl. Stat. 2009, 3, 521–541.
14. Basu, S.; Shojaie, A.; Michailidis, G. Network Granger causality with inherent grouping structure. J. Mach. Learn. Res. 2015, 16, 417–453.
15. Loh, P.L.; Bühlmann, P. High-dimensional learning of linear causal networks via inverse covariance estimation. J. Mach. Learn. Res. 2014, 15, 3065–3105.
16. Loh, P.L.; Wainwright, M.J. Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses. Ann. Stat. 2013, 41, 3022–3049.
17. Lèbre, S.; Becq, J.; Devaux, F.; Stumpf, M.P.; Lelandais, G. Statistical inference of the time-varying structure of gene-regulation networks. BMC Syst. Biol. 2010, 4, 1–16.
18. Przytycka, T.M.; Singh, M.; Slonim, D.K. Toward the dynamic interactome: It’s about time. Brief. Bioinform. 2010, 11, 15–29.
19. Khandani, A.E.; Lo, A.W. What happened to the quants in August 2007? Evidence from factors and transactions data. J. Financ. Mark. 2011, 14, 1–46.
20. Chi, K.T.; Liu, J.; Lau, F.C. A network perspective of the stock market. J. Empir. Financ. 2010, 17, 659–667.
21. Durante, D.; Dunson, D.B.; Vogelstein, J.T. Nonparametric Bayes modeling of populations of networks. J. Am. Stat. Assoc. 2017, 112, 1516–1530.
22. Durante, D.; Dunson, D.B. Locally adaptive dynamic networks. Ann. Appl. Stat. 2016, 10, 2203–2232.
23. Han, Q.; Xu, K.; Airoldi, E. Consistent estimation of dynamic and multi-layer block models. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 1511–1520.
24. Danaher, P.; Wang, P.; Witten, D.M. The joint graphical lasso for inverse covariance estimation across multiple classes. J. R. Stat. Soc. Ser. B Stat. Methodol. 2014, 76, 373–397.
25. Dondelinger, F.; Lèbre, S.; Husmeier, D. Non-homogeneous dynamic Bayesian networks with Bayesian regularization for inferring gene regulatory networks with gradually time-varying structure. Mach. Learn. 2013, 90, 191–230.
26. Pensky, M. Dynamic network models and graphon estimation. Ann. Stat. 2019, 47, 2378–2403.
27. Pensky, M.; Zhang, T. Spectral clustering in the dynamic stochastic block model. Electron. J. Stat. 2019, 13, 678–709.
28. Bhattacharjee, M.; Banerjee, M.; Michailidis, G. Change Point Estimation in a Dynamic Stochastic Block Model. arXiv 2018, arXiv:1812.03090.
29. Bartlett, T.E.; Kosmidis, I.; Silva, R. Two-way sparsity for time-varying networks, with applications in genomics. arXiv 2018, arXiv:1802.08114.
30. Gaucher, S.; Klopp, O. Maximum likelihood estimation of sparse networks with missing observations. arXiv 2019, arXiv:1902.10605.
31. Erdős, P.; Rényi, A. On Random Graphs I. Publ. Math. Debr. 1959, 6, 290–297.
32. Penrose, M. Random Geometric Graphs; Oxford University Press: Oxford, UK, 2003.
33. Zhou, S.; Lafferty, J.; Wasserman, L. Time Varying Undirected Graphs. Mach. Learn. 2010, 80, 295–319.
34. Kolar, M.; Xing, E. On time varying undirected graphs. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS 2011), Ft. Lauderdale, FL, USA, 11–13 April 2011.
35. Kolar, M.; Song, L.; Xing, E. Estimating time-varying networks. Ann. Appl. Stat. 2010, 4, 94–123.
36. Kolar, M.; Xing, E.P. Sparsistent Estimation of Time-Varying Markov Random Fields. arXiv 2009, arXiv:0907.2337.
37. Qiu, H.; Han, F.; Liu, H.; Caffo, B. Joint estimation of multiple graphical models from high dimensional time series. J. R. Stat. Soc. Ser. B Stat. Methodol. 2015, 78, 487–504.
38. Lu, J.; Kolar, M.; Liu, H. Post-regularization Inference for Dynamic Nonparanormal Graphical Models. arXiv 2015, arXiv:1512.08298.
39. Ahmed, A.; Xing, E.P. Recovering time-varying networks of dependencies in social and biological studies. Proc. Natl. Acad. Sci. USA 2009, 106, 11878–11883.
40. Tibshirani, R.; Saunders, M.; Rosset, S.; Zhu, J.; Knight, K. Sparsity and smoothness via the fused lasso. J. R. Stat. Soc. Ser. B 2005, 67, 91–108.
41. Cho, H.; Fryzlewicz, P. Multiple-change-point detection for high dimensional time series via sparsified binary segmentation. J. R. Stat. Soc. Ser. B Stat. Methodol. 2015, 77, 475–507.
42. Roy, S.; Atchadé, Y.; Michailidis, G. Change-point estimation in high-dimensional Markov random field models. J. R. Stat. Soc. Ser. B Stat. Methodol. 2017, 79, 1187–1206.
43. Zhou, S. Gemini: Graph estimation with matrix variate normal instances. Ann. Stat. 2014, 42, 532–562.
44. Tong, H. Non-Linear Time Series: A Dynamical System Approach; Oxford University Press: Oxford, UK, 1993.
45. Fan, J.; Yao, Q. Nonlinear Time Series: Nonparametric and Parametric Methods; Springer: Berlin/Heidelberg, Germany, 2003.
46. Fryzlewicz, P. Wild Binary Segmentation for multiple change-point detection. Ann. Stat. 2014, 42, 2243–2281.
47. Kokoszka, P.; Leipus, R. Change-point estimation in ARCH models. Bernoulli 2000, 6, 513–539.
48. Aue, A.; Hörmann, S.; Horváth, L.; Reimherr, M. Break detection in the covariance structure of multivariate time series models. Ann. Stat. 2009, 37, 4046–4087.
49. Chang, C.; Glover, G.H. Time-frequency dynamics of resting-state brain connectivity measured with fMRI. NeuroImage 2010, 50, 81–98.
50. Hutchison, M.; Womelsdorf, T.; Gati, J.; Everling, S.; Menon, R. Resting-state networks show dynamic functional connectivity in awake humans and anesthetized macaques. Hum. Brain Mapp. 2013, 34, 2154–2177.
51. Wiesel, A.; Bibi, O.; Globerson, A. Time varying autoregressive moving average models for covariance estimation. IEEE Trans. Signal Process. 2013, 61, 2791–2801.
52. Qiu, H.; Han, F.; Liu, H.; Caffo, B. Robust Portfolio Optimization under High Dimensional Heavy-Tailed Time Series; Technical Report; Johns Hopkins University: Baltimore, MD, USA, 2014.
53. Chen, X.; Xu, M.; Wu, W.B. Covariance and precision matrix estimation for high-dimensional time series. Ann. Stat. 2013, 41, 2994–3021.
54. Chen, X.; Xu, M.; Wu, W.B. Regularized Estimation of Linear Functionals of Precision Matrices for High-Dimensional Time Series. IEEE Trans. Signal Process. 2016, 64, 6459–6470.
55. Basu, S.; Michailidis, G. Regularized estimation in sparse high-dimensional time series models. Ann. Stat. 2015, 43, 1535–1567.
56. Bhattacharjee, M.; Bose, A. Consistency of large dimensional sample covariance matrix under weak dependence. Stat. Methodol. 2014, 20, 11–26.
57. Shu, H.; Nan, B. Estimation of Large Covariance and Precision Matrices from Temporally Dependent Observations. arXiv 2014, arXiv:1412.5059.
58. Draghicescu, D.; Guillas, S.; Wu, W.B. Quantile curve estimation and visualization for nonstationary time series. J. Comput. Graph. Stat. 2009, 18, 1–20.
59. Wu, W.B. Nonlinear system theory: Another look at dependence. Proc. Natl. Acad. Sci. USA 2005, 102, 14150–14154.
60. Zhou, Z.; Wu, W.B. Local linear quantile estimation for nonstationary time series. Ann. Stat. 2009, 37, 2696–2729.
61. Zhou, Z.; Wu, W.B. Simultaneous inference of linear models with time varying coefficients. J. R. Stat. Soc. Ser. B 2010, 72, 513–531.
62. Wu, W.B.; Wu, Y.N. Performance bounds for parameter estimates of high-dimensional linear models with correlated errors. Electron. J. Stat. 2016, 10, 352–379.
63. Lütkepohl, H. New Introduction to Multiple Time Series Analysis; Springer: Berlin/Heidelberg, Germany, 2007.
64. Chow, Y.; Teicher, H. Probability Theory: Independence, Interchangeability, Martingales; Springer: New York, NY, USA, 1997; p. 414.
65. Ding, X.; Qiu, Z.; Chen, X. Sparse transition matrix estimation for high-dimensional and locally stationary vector autoregressive models. Electron. J. Stat. 2017, 11, 3871–3902.
66. Allen, F.; Babus, A. Networks in Finance. In The Network Challenge: Strategy, Profit, and Risk in an Interlinked World; FT Press: Hoboken, NJ, USA, 2009.
67. Liu, H.; Roeder, K.; Wasserman, L. Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models. In Proceedings of the 23rd International Conference on Neural Information Processing Systems (NIPS’10), Vancouver, BC, Canada, 6–9 December 2010.
Figure 1. Support of the true covariance matrices, p = 50.
Figure 2. Support of the true covariance matrices, p = 100.
Figure 3. ROC curve of the time-varying CLIME, p = 50.
Figure 4. ROC curve of the time-varying CLIME, p = 100.
Figure 5. Break size |D_s|, from 4 February 2004 to 30 November 2009.
Figure 6. Estimated networks at time points 813, 828, 888, and 903, corresponding to 23 March 2006, 13 April 2006, 11 July 2006, and 1 August 2006. Colors correspond to the nine sectors in the S&P dataset.
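
The ROC curves in Figures 3 and 4 trace off-diagonal support recovery of the time-varying CLIME estimates as the regularization level varies. Below is a minimal sketch of how one operating point on such a curve could be computed from an estimated and a true precision matrix; the function name, the hard zero tolerance `tol`, and the restriction to off-diagonal entries are illustrative assumptions, not the paper's evaluation code.

```python
import numpy as np

def support_roc_point(omega_hat, omega_true, tol=1e-8):
    """One (FPR, TPR) point for off-diagonal support recovery.

    Illustrative convention: an edge is "present" when the
    corresponding precision entry exceeds tol in absolute value.
    """
    off_diag = ~np.eye(omega_true.shape[0], dtype=bool)
    truth = np.abs(omega_true[off_diag]) > tol
    est = np.abs(omega_hat[off_diag]) > tol
    tpr = (est & truth).sum() / max(truth.sum(), 1)       # true positive rate
    fpr = (est & ~truth).sum() / max((~truth).sum(), 1)   # false positive rate
    return fpr, tpr
```

Sweeping the CLIME tuning parameter and collecting the resulting (FPR, TPR) pairs traces out a curve of the kind shown in Figures 3 and 4.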
Table 1. Average distance.

Bandwidth          0.14    0.16    0.18    0.20    0.22    0.24
p = 50,  δ0 = 1    23.4    21.0    17.47   16.6    14.7    16.5
p = 50,  δ0 = 2     7.4     6.9     8.3     8.1     7.2     6.3
p = 100, δ0 = 1    37.2    30.1    26.4    25.5    21.2    21.3
p = 100, δ0 = 2     7.8     8.2     9.9     6.9     8.9     7.6
Table 2. Number of estimated change points.

Bandwidth          0.14    0.16    0.18    0.20    0.22    0.24
p = 50,  δ0 = 1    2.38    2.16    1.99    2.00    2.00    2.00
p = 50,  δ0 = 2    2.46    2.31    2.00    2.00    2.00    2.00
p = 100, δ0 = 1    2.25    2.09    1.99    1.99    2.00    2.00
p = 100, δ0 = 2    2.38    2.19    2.00    2.00    2.00    2.00
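
Tables 1 and 2 summarize how change-point estimation responds to the kernel bandwidth, and Figure 5 plots the break-size statistic |D_s| over time. The sketch below illustrates the general idea of measuring a break at time s by comparing sample covariance matrices averaged over windows on either side; the uncentered covariance, the maximum-entry norm, and the window width b are assumptions made for illustration rather than the paper's exact statistic.

```python
import numpy as np

def break_size(X, s, b):
    """Illustrative break measure at time s: the maximum absolute
    entrywise difference between (uncentered) sample covariance
    matrices averaged over the b observations before and after s.
    """
    left, right = X[s - b:s], X[s:s + b]
    S_left = left.T @ left / b      # localized covariance, left window
    S_right = right.T @ right / b   # localized covariance, right window
    return np.abs(S_left - S_right).max()

# Toy example: p = 20 series whose variance quadruples at t = 200.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(size=(200, 20)),
                    2.0 * rng.normal(size=(200, 20))])
b = 40  # window width, playing the role of the bandwidth
stats = [break_size(X, s, b) for s in range(b, X.shape[0] - b)]
print(b + int(np.argmax(stats)))  # peaks near the true break at 200
```

On this toy series the statistic peaks near the true break; in practice the window width plays the role of the bandwidth varied across the columns of Tables 1 and 2.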
