Entropy 2014, 16(12), 6722-6738; doi:10.3390/e16126722

Article
A Large Deviation Principle and an Expression of the Rate Function for a Discrete Stationary Gaussian Process
Olivier Faugeras and James MacLaurin *
INRIA Sophia Antipolis Méditerranée, 2004 Route des Lucioles, Sophia Antipolis, France
External Editor: Antonio Scarfone
* Author to whom correspondence should be addressed.
Received: 3 November 2014; in revised form: 17 December 2014 / Accepted: 18 December 2014 / Published: 22 December 2014

Abstract

We prove a large deviation principle for a stationary Gaussian process over ℝb indexed by ℤd (for some positive integers d and b), with positive definite spectral density, and provide an expression of the corresponding rate function in terms of the mean of the process and its spectral density. Such an explicit expression is needed in applications, for example in mathematical neuroscience, where the rate function must be evaluated or simulated numerically.
Keywords:
stationary Gaussian process; large deviation principle; spectral density

1. Introduction

In this paper, we prove a large deviation principle (LDP) for a spatially-stationary Gaussian process over ℝb indexed by ℤd and obtain an expression for the rate function. Our work in mathematical neuroscience involves the search for asymptotic descriptions of large ensembles of neurons [1–3]. Since there are many sources of noise in the brains of mammals [4], the mathematician interested in modeling certain aspects of brain functioning is often led to consider spatial Gaussian processes as models of these noise sources. This motivates our use of large deviation techniques. Being also interested in formulating predictions that can be experimentally tested in neuroscience laboratories, we strive to obtain analytical results, i.e., effective results from which, for example, numerical simulations can be developed. This is why we determine a more tractable expression for the rate function in this article.

Our result concerns the large deviations of ergodic phenomena, the literature of which we now briefly survey. Donsker and Varadhan obtained a large deviation estimate for the law governing the empirical process generated by a Markov process [5]. They then determined a large deviation principle for a ℤ-indexed stationary Gaussian process, obtaining a particularly elegant expression for the rate function using spectral theory [14]. Chiyonobu, Kusuoka and others [6–8] obtain large deviation estimates for the empirical measures generated by ergodic processes satisfying certain mixing conditions. Baxter and Jain [9] obtain a variety of results for the large deviations of ergodic phenomena, including one for the large deviations of ℤ-indexed, ℝb-valued stationary Gaussian processes. Steinberg and Zeitouni [10] have proven an LDP for a stationary ℤd-indexed Gaussian process over ℝ, and Bryc and Dembo [11] obtain an LDP for an ℝ-indexed, ℝ-valued stationary Gaussian process.

In the work we are developing [12], we need large deviation results for spatially-ergodic Ornstein–Uhlenbeck processes. This requires Theorem 1 of this paper.

In the first section, we make some preliminary definitions and state the chief theorem, Theorem 1, for zero-mean processes. In the second section, we prove the theorem. In Appendix A, we state and prove several identities involving the relative entropy, which are necessary for the proof of Theorem 1. In Appendix B, we prove a general result for the large deviations of exponential approximations. In Appendix C, we prove Corollary 2, which extends the result of Theorem 1 to non-zero-mean processes.

2. Preliminary Definitions

For some topological space Ω equipped with its Borelian σ-algebra B(Ω), we denote the set of all probability measures on Ω by $\mathcal{M}(\Omega)$. We equip $\mathcal{M}(\Omega)$ with the topology of weak convergence. Our process is indexed by ℤd. For j ∈ ℤd, we write j = (j(1),…,j(d)). For some positive integer n > 0, we let Vn = {j ∈ ℤd: |j(δ)| ≤ n for all 1 ≤ δ ≤ d}. Let T = ℝb, for some positive integer b. We equip T with the Euclidean topology and $T^{\mathbb{Z}^d}$ with the cylindrical topology, and we denote the Borelian σ-algebra generated by this topology by $\mathcal{B}(T^{\mathbb{Z}^d})$. For some $\mu \in \mathcal{M}(T^{\mathbb{Z}^d})$ governing a process $(X^j)_{j \in \mathbb{Z}^d}$, we let $\mu_{V_n} \in \mathcal{M}(T^{V_n})$ denote the marginal governing $(X^j)_{j \in V_n}$. For some j ∈ ℤd, let the shift operator $S^j: T^{\mathbb{Z}^d} \to T^{\mathbb{Z}^d}$ be $S^j(\omega)^k = \omega^{j+k}$. We let $\mathcal{M}_s(T^{\mathbb{Z}^d})$ be the set of all stationary probability measures μ on $(T^{\mathbb{Z}^d}, \mathcal{B}(T^{\mathbb{Z}^d}))$, i.e., such that, for all j ∈ ℤd, μ ∘ (Sj)−1 = μ. We use Herglotz's theorem to characterize the law $Q \in \mathcal{M}_s(T^{\mathbb{Z}^d})$ governing a stationary process (Xj) in the following manner. We assume that E[Xj] = 0 and:

$$\mathbb{E}\left[X^0 (X^k)^\dagger\right] = \frac{1}{(2\pi)^d}\int_{[-\pi,\pi]^d} \exp\left(i\langle k,\theta\rangle\right)\tilde{G}(\theta)\,d\theta.$$

Our convention throughout this paper is to denote the transpose of X by $X^\dagger$ and the spectral density with a tilde; 〈⋅,⋅〉 is the standard inner product on ℝb. Here, $\tilde{G}$ is a continuous function [−π, π]d → ℂb×b, where we consider [−π, π]d to have the topology of a torus. In addition, $\tilde{G}(-\theta) = \tilde{G}(\theta)^\dagger = \overline{\tilde{G}(\theta)}$ ($\bar{x}$ indicates the complex conjugate of x). We assume that $\det \tilde{G}(\theta) \ge \tilde{G}_{min}$ for all θ ∈ [−π, π]d and some $\tilde{G}_{min} > 0$, from which it follows that, for each θ, $\tilde{G}(\theta)$ is Hermitian positive definite. If x ∈ ℂb, then $x^\dagger X^j$ is a stationary sequence with spectral density $x^\dagger \tilde{G} \bar{x}$. We employ the operator norm over ℂb×b. Let $p_n: T^{\mathbb{Z}^d} \to T^{\mathbb{Z}^d}$ be such that $p_n(\omega)^k = \omega^{k \bmod V_n}$. Here, and throughout the paper, we take k mod Vn to be the element l ∈ Vn, such that, for all 1 ≤ γ ≤ d, l(γ) = k(γ) mod (2n + 1). Define the process-level empirical measure $\hat{\mu}_n: T^{\mathbb{Z}^d} \to \mathcal{M}_s(T^{\mathbb{Z}^d})$ as:

$$\hat{\mu}_n(\omega) = \frac{1}{(2n+1)^d}\sum_{k \in V_n} \delta_{S^k p_n(\omega)}.$$

Let $\Pi^n \in \mathcal{M}(\mathcal{M}_s(T^{\mathbb{Z}^d}))$ be the image law of Q under $\hat{\mu}_n$. We note that in this definition, we need not have chosen Vn to have an odd side length (2n + 1): this choice is for notational convenience, and these results could easily be reproduced in the case that Vn has side length n.
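The Herglotz representation above can be illustrated numerically in the simplest case d = b = 1. The moving-average process $X^k = 2\xi^k + \xi^{k-1}$, with $(\xi^k)$ i.i.d. standard Gaussian, is an assumed toy example (not taken from the paper); its spectral density is $\tilde{G}(\theta) = 5 + 4\cos\theta$, which is continuous and bounded away from zero, and the inversion formula recovers the autocovariances $c_0 = 5$, $c_{\pm 1} = 2$ and $c_k = 0$ for $|k| \ge 2$:

```python
import numpy as np

def autocovariance_from_density(k, n_grid=20000):
    """Midpoint-rule evaluation of (1/2pi) * int exp(i*k*theta) G(theta) d theta."""
    theta = -np.pi + (np.arange(n_grid) + 0.5) * (2.0 * np.pi / n_grid)
    G = 5.0 + 4.0 * np.cos(theta)  # spectral density of X^k = 2 xi^k + xi^(k-1)
    return np.mean(np.exp(1j * k * theta) * G).real

print(autocovariance_from_density(0))  # ~ 5.0
print(autocovariance_from_density(1))  # ~ 2.0
print(autocovariance_from_density(3))  # ~ 0.0
```

The midpoint rule converges spectrally fast for periodic analytic integrands, so even a modest grid recovers the autocovariances to near machine precision.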

In the context of mathematical neuroscience, (Xj) could correspond to a model of interacting neurons on a lattice (d = 1, 2 or 3), as in [12,13]. We note that the large deviation principle of this paper may be used to obtain an LDP for other processes using standard methods, such as the contraction principle or Lemma 13.

Definition 1. Let (Ω, H) be a measurable space and μ, ν probability measures on it. The relative entropy of μ with respect to ν is

$$I^{(2)}(\mu \| \nu) = \sup_{f \in \mathcal{B}}\left\{\mathbb{E}^\mu[f] - \log \mathbb{E}^\nu[\exp(f)]\right\},$$

where $\mathcal{B}$ is the set of all bounded measurable functions. If Ω is Polish and H = B(Ω), then we only need to take the supremum over the set of all continuous bounded functions.
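The supremum in Definition 1 can be made concrete for scalar Gaussians. The following sketch (assuming μ = N(0, s²) and ν = N(0, 1), an illustrative choice not taken from the paper) evaluates the variational lower bound over quadratic functions f(x) = ax², for which $\mathbb{E}^\mu[f] - \log\mathbb{E}^\nu[\exp(f)] = a s^2 + \frac{1}{2}\log(1-2a)$; its maximum over a < ½ recovers the closed-form relative entropy $\frac{1}{2}(s^2 - 1 - \log s^2)$:

```python
import numpy as np

s2 = 4.0  # variance of mu = N(0, s2); nu = N(0, 1)
kl_closed_form = 0.5 * (s2 - 1.0 - np.log(s2))

# Variational lower bounds E_mu[f] - log E_nu[exp(f)] for f(x) = a * x^2, a < 1/2
a_grid = np.linspace(-2.0, 0.499, 100001)
lower_bounds = a_grid * s2 + 0.5 * np.log(1.0 - 2.0 * a_grid)
best = lower_bounds.max()  # attained near a = (1 - 1/s2) / 2

print(kl_closed_form, best)
```

Every value in `lower_bounds` is a valid lower bound on $I^{(2)}(\mu \| \nu)$; the fact that the maximum over quadratics attains the closed form reflects that the optimal f in the Gaussian case is itself quadratic.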

Let (Yj) be a stationary Gaussian process on T, such that $\mathbb{E}[Y^j] = 0$, $\mathbb{E}[Y^j (Y^k)^\dagger] = 0$ for j ≠ k and $\mathbb{E}[Y^j (Y^j)^\dagger] = \mathrm{Id}_b$. Each Yj is governed by a law P, and we write the governing law in $\mathcal{M}_s(T^{\mathbb{Z}^d})$ as $P^{\mathbb{Z}^d}$. It is clear that the governing law over Vn may be written as $P^{V_n}$ (that is, the product measure of P, indexed over Vn).

Definition 2. Let ε2 be the subset of $\mathcal{M}_s(T^{\mathbb{Z}^d})$ defined by:

$$\varepsilon_2 = \left\{\mu \in \mathcal{M}_s(T^{\mathbb{Z}^d}) \,\middle|\, \mathbb{E}^\mu\left[\|\omega^0\|^2\right] < \infty\right\}.$$

We define the process-level entropy to be, for $\mu \in \mathcal{M}_s(T^{\mathbb{Z}^d})$:

$$I^{(3)}(\mu) = \lim_{n\to\infty} \frac{1}{(2n+1)^d}\, I^{(2)}\left(\mu_{V_n} \middle\| P^{V_n}\right).$$

It is a consequence of Lemma 11 that if μ ∉ ε2, then I(3)(μ) = ∞. For further discussion of this rate function and a proof that I(3) is well-defined, see [8].

Definition 3. A sequence of probability laws (Γn) on some topological space Ω equipped with its Borelian σ-algebra is said to satisfy a strong large deviation principle (LDP) with rate function I: Ω → [0, ∞] if I is lower semicontinuous, for all open sets O,

$$\varliminf_{n\to\infty} \frac{1}{(2n+1)^d}\log\Gamma_n(O) \ge -\inf_{x\in O} I(x),$$

and for all closed sets F:

$$\varlimsup_{n\to\infty} \frac{1}{(2n+1)^d}\log\Gamma_n(F) \le -\inf_{x\in F} I(x).$$

If, furthermore, the set {x: I(x) ≤ α} is compact for all α ≥ 0, we say that I is a good rate function.

Definition 4. For μ ∈ ε2, we denote the ℂb×b-valued spectral measure on ([−π, π]d, B([−π, π]d)) (which exists due to Herglotz's theorem) by $\tilde{\mu}$. We have:

$$\frac{1}{(2\pi)^d}\int_{[-\pi,\pi]^d} \exp\left(i\langle k,\theta\rangle\right) d\tilde{\mu}(\theta) = \mathbb{E}^\mu\left[\omega^0 (\omega^k)^\dagger\right].$$

For θ ∈ [−π, π]d, let $\tilde{H}(\theta) = \tilde{G}(\theta)^{-\frac{1}{2}}$ be the Hermitian positive definite square root of $\tilde{G}(\theta)^{-1}$ and:

$$\tilde{H}(\theta) = \sum_{j \in \mathbb{Z}^d} H_j \exp\left(i\langle j,\theta\rangle\right).$$

The b × b matrices Hj are the coefficients of the absolutely convergent Fourier series (due to Wiener's theorem) of $\tilde{G}^{-1/2}$. Define $\beta: T^{\mathbb{Z}^d} \to T^{\mathbb{Z}^d}$ as follows:

$$(\beta(\omega))^k = \sum_{j \in \mathbb{Z}^d} H_j\, \omega^{k-j}.$$

The theorem below is the chief result of this paper.

Theorem 1. $(\Pi^n)$ satisfies a strong LDP with good rate function I(3)(μ ∘ β−1). Here:

$$I^{(3)}\left(\mu \circ \beta^{-1}\right) = \begin{cases} I^{(3)}(\mu) - \Gamma(\mu) & \text{if } \mu \in \varepsilon_2, \\ \infty & \text{otherwise}, \end{cases}$$

where:

$$\Gamma(\mu) = \begin{cases} \Gamma_1 + \Gamma_2(\mu) & \text{if } \mu \in \varepsilon_2, \\ 0 & \text{otherwise}. \end{cases}$$

Here:

$$\Gamma_1 = -\frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \log\det\tilde{G}(\theta)\,d\theta, \qquad \Gamma_2(\mu) = \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \operatorname{tr}\left(\left(\mathrm{Id}_b - \tilde{G}(\theta)^{-1}\right)d\tilde{\mu}(\theta)\right).$$

Finally, the rate function uniquely vanishes at μ = Q.
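For d = b = 1 and a scalar density of the form $\tilde{G}(\theta) = p + q\cos\theta$ with p > |q|, the integral in Γ1 has a closed form by Szegő's theorem, $\frac{1}{2\pi}\int_{-\pi}^{\pi}\log(p + q\cos\theta)\,d\theta = \log\frac{p + \sqrt{p^2 - q^2}}{2}$, which makes a quick numerical check of Γ1 possible. The density below is an assumed illustrative example, not one from the paper:

```python
import numpy as np

# Gamma_1 = -(1/(2*(2*pi)^d)) * int log det G(theta) d theta, here with d = b = 1
# and G(theta) = 5 + 4*cos(theta); Szego gives (1/2pi) * int log G = log 4,
# so Gamma_1 should equal -0.5 * log(4) = -log(2).
n_grid = 100000
theta = -np.pi + (np.arange(n_grid) + 0.5) * (2.0 * np.pi / n_grid)  # midpoint rule
gamma1 = -0.5 * np.mean(np.log(5.0 + 4.0 * np.cos(theta)))

print(gamma1, -np.log(2.0))
```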

Corollary 2. Suppose that the underlying process Q is defined as previously, except with mean $\mathbb{E}^Q[X^j] = c$ for all j ∈ ℤd and some constant c ∈ ℝb. If we denote the image law of the empirical measure by $\Pi^c_n$, then $(\Pi^c_n)$ satisfies a strong LDP with good rate function (for μ ∈ ε2):

$$I^{(3)}(\mu) - \Gamma(\mu) + \frac{1}{2} c^\dagger \tilde{G}(0)^{-1} c - c^\dagger \tilde{G}(0)^{-1} m_\mu.$$

Here, $m_\mu = \mathbb{E}^{\mu(\omega)}[\omega^j]$ for all j ∈ ℤd. If μ ∉ ε2, then the rate function is infinite. The rate function has a unique minimum, i.e., $I^{(3)}(\mu) - \Gamma(\mu) + \frac{1}{2} c^\dagger \tilde{G}(0)^{-1} c - c^\dagger \tilde{G}(0)^{-1} m_\mu = 0$ if and only if μ = Q.

We prove this in Appendix C.

3. Proof of Theorem 1

In this proof, we essentially adapt the methods of [9,14]. We introduce the following metric over $T^{\mathbb{Z}^d}$. For j ∈ ℤd, let $\lambda_j = 3^{-d}\prod_{\delta=1}^d 2^{-|j(\delta)|}$. Define the metric dλ as follows,

$$d_\lambda(x,y) = \sum_{j \in \mathbb{Z}^d} \lambda_j \min\left(\left\|x^j - y^j\right\|, 1\right),$$

where ‖⋅‖ is the Euclidean norm. Let $d_{\lambda,\mathcal{M}}$ be the induced Prokhorov metric over $\mathcal{M}(T^{\mathbb{Z}^d})$. These metrics are compatible with their respective topologies.

For θ ∈ [−π, π]d, let $\tilde{F}(\theta) = \tilde{G}(\theta)^{1/2}$ be the Hermitian positive definite square root of $\tilde{G}$ and:

$$\tilde{F}(\theta) = \sum_{j \in \mathbb{Z}^d} F_j \exp\left(i\langle j,\theta\rangle\right).$$

The b × b matrices Fj are the coefficients of the absolutely convergent Fourier series of the positive square root. Define $\tau: T^{\mathbb{Z}^d} \to T^{\mathbb{Z}^d}$ and $\tau_{(n)}: T^{\mathbb{Z}^d} \to T^{\mathbb{Z}^d}$ as follows,

$$(\tau(\omega))^k = \sum_{j \in \mathbb{Z}^d} F_j\, \omega^{k-j}, \qquad (\tau_{(n)}(\omega))^k = \sum_{j \in V_n} F_j\, \omega^{k-j} \prod_{\delta=1}^d \left(1 - \frac{|j(\delta)|}{2n+1}\right).$$

We note that τ = β−1 (on a suitable domain, where the series are convergent) and that τ(n) is a continuous map, but τ is not continuous (in general).

We note (using Lemma 6) that $P^{\mathbb{Z}^d} \circ \tau^{-1}$ has spectral density $\tilde{G}$. We define:

$$\tilde{F}_{(n)}(\theta) = \sum_{j \in V_n} F_j \exp\left(i\langle j,\theta\rangle\right) \prod_{\delta=1}^d \left(1 - \frac{|j(\delta)|}{2n+1}\right).$$

We write $\varepsilon(n) = \sup_{\theta \in [-\pi,\pi]^d} \|\tilde{F}(\theta) - \tilde{F}_{(n)}(\theta)\|^2$. By Fejér's theorem, ε(n) → 0 as n → ∞. We define $\tilde{G}_{(n)}(\theta) = \tilde{F}_{(n)}(\theta)^2$, noting that this is the spectral density of $P^{\mathbb{Z}^d} \circ \tau_{(n)}^{-1}$. Let Γ(n)(μ) = Γ1,(n) + Γ2,(n)(μ), where for μ ∈ ε2,

$$\Gamma_{1,(n)} = -\frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \log\det\tilde{G}_{(n)}(\theta)\,d\theta,$$

$$\Gamma_{2,(n)}(\mu) = \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \operatorname{tr}\left(\left(\mathrm{Id}_b - \tilde{G}_{(n)}^{-1}(\theta)\right)d\tilde{\mu}(\theta)\right).$$

If μ ∉ ε2, let Γ1,(n) = Γ2,(n)(μ) = 0. Let $R_n \in \mathcal{M}(\mathcal{M}_s(T^{\mathbb{Z}^d}))$ be the law governing $\hat{\mu}_n(Y)$, where we recall that the stationary process (Yj) is defined just below Definition 1. Let $\Pi_{(m)}^n \in \mathcal{M}(\mathcal{M}_s(T^{\mathbb{Z}^d}))$ be the law governing $\hat{\mu}_n(\tau_{(m)}(Y))$.
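The convergence ε(n) → 0 asserted above rests on Fejér's theorem: Cesàro-weighted partial Fourier sums of a continuous periodic function converge uniformly. A small d = 1 sketch (using the assumed illustrative square root $\tilde{F}(\theta) = \sqrt{5 + 4\cos\theta}$, not a density from the paper) builds the weighted truncations with the weights $(1 - |j|/(2m+1))$ appearing in the definition of $\tilde{F}_{(m)}$ and watches the sup-norm error decrease:

```python
import numpy as np

n_grid = 4096
theta = 2.0 * np.pi * np.arange(n_grid) / n_grid
F = np.sqrt(5.0 + 4.0 * np.cos(theta))  # continuous, positive, periodic

coeffs = np.fft.fft(F) / n_grid                       # Fourier coefficients F_j
freqs = np.fft.fftfreq(n_grid, d=1.0 / n_grid).astype(int)

def fejer_sup_error(m):
    """Sup-norm error of the weighted sum with weights (1 - |j|/(2m+1)), |j| <= m."""
    weights = np.where(np.abs(freqs) <= m, 1.0 - np.abs(freqs) / (2 * m + 1), 0.0)
    F_m = np.fft.ifft(coeffs * weights * n_grid).real
    return np.abs(F - F_m).max()

errors = [fejer_sup_error(m) for m in (2, 8, 32)]
print(errors)  # decreasing toward zero
```

The sup is evaluated on the discrete grid, which is enough to see the monotone decay; the exact supremum over θ behaves the same way for this smooth example.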

Lemma 3. (Rn) satisfies a large deviation principle with good rate function I(3)(μ). If μ ∉ ε2, then I(3)(μ) = ∞.

Proof. The first statement is proven in [8]. The last statement follows from Lemma 11 in Appendix A. □

Lemma 4. $(\Pi_{(m)}^n)$ satisfies a strong LDP with good rate function given by, for μ ∈ ε2:

$$I_{(m)}^{(3)}(\mu) = \inf_{\nu \in \varepsilon_2 :\ \mu = \nu \circ \tau_{(m)}^{-1}} I^{(3)}(\nu) = I^{(3)}(\mu) - \Gamma_{(m)}(\mu).$$

If μ ∉ ε2, then $I_{(m)}^{(3)}(\mu) = \infty$.

Proof. The sequence of laws governing $\hat{\mu}_n(Y) \circ \tau_{(m)}^{-1}$ (as n → ∞, with m fixed) satisfies a strong LDP with good rate function as a consequence of an application of the contraction principle to Lemma 3 (since τ(m) is continuous). Now, through the same reasoning as in Lemma 2.1 and Theorem 2.3 in [14], it follows from this that $(\Pi_{(m)}^n)$ satisfies a strong LDP with the same rate function as that of $\hat{\mu}_n(Y) \circ \tau_{(m)}^{-1}$. The last identification in (11) follows from Lemmas 6 and 9 in Appendix A. We only need to take the infimum over ε2, because by Lemma 3, I(3)(ν) is infinite for ν ∉ ε2. □

Lemma 5. If $0 < b < \frac{1}{2\varepsilon(m)}$, then for all n > 0:

$$\frac{1}{(2n+1)^d} \log \mathbb{E}^{P^{\mathbb{Z}^d}}\left[\exp\left(b \sum_{k \in V_n} \left\|\tau_{(m)}(\omega)^k - \tau(\omega)^k\right\|^2\right)\right] \le -\frac{1}{2}\log\left(1 - 2b\,\varepsilon(m)\right).$$

The proof is almost identical to that of Lemma 2.4 in [14]. We are now ready to prove Theorem 1.

Proof. We apply Lemma 13 in the Appendix to the above result. We substitute Y(m) = τ(m)(ω) and W = τ(ω). Taking m → ∞ in the equation in Lemma 5, we find that (38) is satisfied if we stipulate that κ = 0. After noting the LDP governing $(\Pi_{(m)}^n)$ in (11), we may thus conclude that (Πn) satisfies a strong LDP with good rate function:

$$\lim_{\delta\to 0}\, \varlimsup_{m\to\infty}\, \inf_{\gamma \in B_\delta(\mu)} I_{(m)}^{(3)}(\gamma),$$

where $B_\delta(\mu) = \{\gamma : d_{\lambda,\mathcal{M}}(\mu,\gamma) \le \delta\}$.

It remains for us to identify (12) with the rate function in (4). We claim that for each δ > 0,

$$\varlimsup_{m\to\infty}\, \inf_{\gamma \in B_\delta(\mu)} I_{(m)}^{(3)}(\gamma) = \inf_{\gamma \in B_\delta(\mu)} \left(I^{(3)}(\gamma) - \Gamma(\gamma)\right).$$

To see this, we have from Lemma 11 that for all m and all γ, and for constants α1 < 1, α3 > 1 and α2, α4 ∈ ℝ (note that if γ ∉ ε2, the inequalities below are immediate from the definitions):

$$I^{(3)}(\gamma) - \Gamma(\gamma) \ge (1 - \alpha_1)\, I^{(3)}(\gamma) - \alpha_2, \qquad I_{(m)}^{(3)}(\gamma) \ge (1 - \alpha_1)\, I^{(3)}(\gamma) - \alpha_2,$$

$$I^{(3)}(\gamma) - \Gamma(\gamma) \le (1 + \alpha_3)\, I^{(3)}(\gamma) + \alpha_4, \qquad I_{(m)}^{(3)}(\gamma) \le (1 + \alpha_3)\, I^{(3)}(\gamma) + \alpha_4.$$

We thus see that if I(3)(γ) = ∞ for all γ ∈ Bδ(μ), then (13) is identically infinite on both sides. Otherwise, it may be seen from (14) and (15) that it suffices to establish (13) in the case that $B_\delta(\mu) = B_\delta^l(\mu) := \{\gamma : d_{\lambda,\mathcal{M}}(\mu,\gamma) \le \delta,\ I^{(3)}(\gamma) \le l\}$ for some l < ∞. However, it follows from (29) and (34) that for all $\gamma \in B_\delta^l(\mu)$, there exist constants $(\alpha_5^m)$, which converge to zero as m → ∞ and are such that $|I_{(m)}^{(3)}(\gamma) - I^{(3)}(\gamma) + \Gamma(\gamma)| \le \alpha_5^m$. We may thus conclude (13). The expression for the rate function in (4) now follows, since I(3)(γ) − Γ(γ) is lower semicontinuous, by Lemma 12.

For the second statement in the Theorem, if I(3)(μ ∘ β−1) = 0, then $I^{(2)}((\mu \circ \beta^{-1})_{V_n} \| P^{V_n}) = 0$ for all n. Since the relative entropy has a unique zero, this means that $(\mu \circ \beta^{-1})_{V_n} = P^{V_n}$ for all n, so that $\mu \circ \beta^{-1} = P^{\mathbb{Z}^d}$, and therefore (using Lemma 6) μ = Q. □

Appendix

A. Properties of the Entropy

Let $\tilde{K}: [-\pi,\pi]^d \to \mathbb{C}^{b\times b}$ possess an absolutely convergent Fourier series and be such that the eigenvalues of $\tilde{K}(\theta)$ are strictly greater than zero for all θ. We require that $\tilde{K}$ is the density of a stationary sequence, which means that we must also assume that for all θ:

$$\tilde{K}(-\theta) = \tilde{K}(\theta)^\dagger = \overline{\tilde{K}(\theta)}.$$

This means, in particular, that $\tilde{K}(\theta)$ is Hermitian. We write:

$$(\Delta(\omega))^k = \sum_{j \in \mathbb{Z}^d} S_j\, \omega^{k-j}, \quad \text{where} \quad \sum_{j \in \mathbb{Z}^d} S_j \exp\left(i\langle j,\theta\rangle\right) = \tilde{K}(\theta)^{-\frac{1}{2}}.$$

Here, $\tilde{K}^{-\frac{1}{2}}$ is understood to be the positive Hermitian square root of $\tilde{K}^{-1}$. The Fourier series of $\tilde{K}^{-\frac{1}{2}}$ is absolutely convergent as a consequence of Wiener's theorem. In this section, we determine a general expression for I(3)(ξ ∘ ∆−1). We are generalizing the result for b = 1 with ℤ-indexing given in [14]. These results are necessary for the proofs in the previous section.

We similarly write:

$$(\Upsilon(\omega))^k = \sum_{j \in \mathbb{Z}^d} R_j\, \omega^{k-j}, \quad \text{where} \quad \sum_{j \in \mathbb{Z}^d} R_j \exp\left(i\langle j,\theta\rangle\right) = \tilde{K}(\theta)^{\frac{1}{2}}.$$

As previously, $\tilde{K}^{\frac{1}{2}}$ is understood to be the positive definite Hermitian square root of $\tilde{K}$. We note that $R_{-j} = R_j^\dagger$ and $S_{-j} = S_j^\dagger$.

Lemma 6. For all ξ ∈ ε2, ξ ∘ ∆−1 and ξ ∘ Υ−1 are in ε2 and:

$$\xi \circ \Delta^{-1} \circ \Upsilon^{-1} = \xi \circ \Upsilon^{-1} \circ \Delta^{-1} = \xi.$$

Proof. We make use of the following standard Lemma from [15], to which the reader is referred for the definition of an orthogonal stochastic measure. Let (Uj) ∈ ℝb be a zero-mean stationary sequence governed by ξ ∈ ε2. Then, there exists an orthogonal ℝb-valued stochastic measure Zξ = Zξ(∆) (∆ ∈ B([−π, π]d)), such that for every j ∈ ℤd (ξ a.s.):

$$U^j = \frac{1}{(2\pi)^{d/2}}\int_{[-\pi,\pi]^d} \exp\left(i\langle j,\theta\rangle\right) Z_\xi(d\theta).$$

Conversely, any orthogonal stochastic measure defines a zero-mean stationary sequence through (20). It may be inferred from this representation that:

$$Z_{\xi\circ\Delta^{-1}}(d\theta) = \tilde{K}(\theta)^{-\frac{1}{2}} Z_\xi(d\theta), \qquad Z_{\xi\circ\Upsilon^{-1}}(d\theta) = \tilde{K}(\theta)^{\frac{1}{2}} Z_\xi(d\theta).$$

The proof that this is well-defined makes use of the fact that $\tilde{K}^{\frac{1}{2}}$ and $\tilde{K}^{-\frac{1}{2}}$ are uniformly continuous, since their Fourier series each converge uniformly. This gives us the lemma. We note for future reference that, if ξ has spectral measure $d\tilde{\xi}(\theta)$, then the spectral measure of ξ ∘ ∆−1 is:

$$\tilde{K}^{-\frac{1}{2}}(\theta)\, d\tilde{\xi}(\theta)\, \tilde{K}^{-\frac{1}{2}}(\theta). \qquad \square$$

It remains for us to determine a specific expression for I(3)(ξ ∘ ∆−1).

Definition 5. If ξ ∈ ε2, we define:

$$\Gamma^\Delta(\xi) = \frac{1}{2}\left(\mathbb{E}^\xi\left[\|\omega^0\|^2\right] - \mathbb{E}^{\xi\circ\Delta^{-1}}\left[\|\omega^0\|^2\right]\right) - \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \log\det\tilde{K}(\theta)\,d\theta.$$

Otherwise, we define Γ∆(ξ) = 0.

Lemma 7. If ξ ∈ ε2,

$$\Gamma^\Delta(\xi) = \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \operatorname{tr}\left(\left(\mathrm{Id}_b - \tilde{K}(\theta)^{-1}\right)d\tilde{\xi}(\theta)\right) - \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \log\det\tilde{K}(\theta)\,d\theta.$$

Proof. We see from (21) that ξ ∘ ∆−1 has spectral measure $\tilde{K}^{-\frac{1}{2}}(\theta)\, d\tilde{\xi}(\theta)\, \tilde{K}^{-\frac{1}{2}}(\theta)$. We thus find that:

$$\Gamma^\Delta(\xi) = \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \operatorname{tr}\left(d\tilde{\xi}(\theta)\right) - \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \operatorname{tr}\left(\tilde{K}^{-\frac{1}{2}}(\theta)\, d\tilde{\xi}(\theta)\, \tilde{K}^{-\frac{1}{2}}(\theta)\right) - \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \log\det\tilde{K}(\theta)\,d\theta$$
$$= \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \operatorname{tr}\left(d\tilde{\xi}(\theta)\right) - \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \operatorname{tr}\left(\tilde{K}(\theta)^{-1}\, d\tilde{\xi}(\theta)\right) - \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \log\det\tilde{K}(\theta)\,d\theta. \qquad \square$$

Lemma 8. For all ξ ∈ ε2,

$$I^{(3)}\left(\xi \circ \Delta^{-1}\right) \le I^{(3)}(\xi) - \Gamma^\Delta(\xi).$$

Proof. We assume for now that there exists a q, such that Sj = 0 for all |j| ≥ q, denoting the corresponding map by ∆q. Let $N_q^m: T^{V_m} \to T^{V_m}$ be the following linear operator. For j ∈ Vm, let $(N_q^m \omega)^j = \sum_{k \in V_m} S_{(k-j) \bmod V_m}\, \omega^k$. Let $\xi^{q,n} = \xi_{V_n} \circ (N_q^n)^{-1}$. It follows from this assumption that the Vl marginals of ξq,n and $\xi \circ \Delta_q^{-1}$ are the same, as long as l ≤ n − q. Thus:

$$I^{(2)}\left(\left(\xi \circ \Delta_q^{-1}\right)_{V_l} \middle\| P^{V_l}\right) = I^{(2)}\left(\left(\xi^{q,n}\right)_{V_l} \middle\| P^{V_l}\right) \le I^{(2)}\left(\xi^{q,n} \middle\| P^{V_n}\right).$$

This last inequality follows from a property of the relative entropy I(2), namely that it is nondecreasing as we take a 'finer' σ-algebra (it is a direct consequence of Lemma 2.3 in [5]). If $\xi_{V_n}$ does not have a density for some n, then I(3)(ξ) is infinite and the Lemma is trivial. Otherwise, we may readily evaluate $I^{(2)}(\xi^{q,n} \| P^{V_n})$ using a change of variable to find that:

$$I^{(2)}\left(\xi^{q,n} \middle\| P^{V_n}\right) = I^{(2)}\left(\xi_{V_n} \middle\| P^{V_n}\right) + \frac{1}{2}\,\mathbb{E}^{\xi_{V_n}}\left[\left\|N_q^n \omega\right\|^2 - \|\omega\|^2\right] - \log\det\left(N_q^n\right).$$

We divide (24) by (2l + 1)d, substitute the above result and, finally, take l → ∞ (while fixing n = l + q) to find that:

$$I^{(3)}\left(\xi \circ \Delta_q^{-1}\right) \le I^{(3)}(\xi) + \frac{1}{2}\left(\mathbb{E}^{\xi\circ\Delta_q^{-1}}\left[\|\omega^0\|^2\right] - \mathbb{E}^\xi\left[\|\omega^0\|^2\right]\right) + \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \log\det\tilde{K}(\theta)\,d\theta = I^{(3)}(\xi) - \Gamma_q^\Delta(\xi).$$

Here, $\Gamma_q^\Delta(\xi)$ is equal to $\Gamma^\Delta(\xi)$, as defined above, subject to the above assumption that Sj = 0 for |j| ≥ q. On taking q → ∞, it may be readily seen that $\Gamma_q^\Delta \to \Gamma^\Delta$ pointwise. Furthermore, the lower semicontinuity of I(3) dictates that:

$$I^{(3)}\left(\xi \circ \Delta^{-1}\right) \le \varliminf_{q\to\infty} I^{(3)}\left(\xi \circ \Delta_q^{-1}\right),$$

which gives us the Lemma. □

Lemma 9. For all ξ ∈ ε2, $I^{(3)}(\xi \circ \Delta^{-1}) = I^{(3)}(\xi) - \Gamma^\Delta(\xi)$.

Proof. We find, similarly to the previous Lemma, that if $\gamma \in \mathcal{M}_s(T^{\mathbb{Z}^d})$, then:

$$I^{(3)}\left(\gamma \circ \Upsilon^{-1}\right) \le I^{(3)}(\gamma) + \frac{1}{2}\left(\mathbb{E}^{\gamma\circ\Upsilon^{-1}}\left[\|\omega^0\|^2\right] - \mathbb{E}^\gamma\left[\|\omega^0\|^2\right]\right) - \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \log\det\tilde{K}(\theta)\,d\theta.$$

We substitute γ = ξ ∘ ∆−1 into the above and, after noting Lemma 6, we find that:

$$I^{(3)}(\xi) \le I^{(3)}\left(\xi \circ \Delta^{-1}\right) + \frac{1}{2}\left(\mathbb{E}^\xi\left[\|\omega^0\|^2\right] - \mathbb{E}^{\xi\circ\Delta^{-1}}\left[\|\omega^0\|^2\right]\right) - \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \log\det\tilde{K}(\theta)\,d\theta = I^{(3)}\left(\xi \circ \Delta^{-1}\right) + \Gamma^\Delta(\xi).$$

The result now follows from the previous Lemma and (26). □

We next prove some matrix identities, which are needed in the proof of Lemma 11.

Lemma 10. If A, B ∈ ℂl×l (for some positive integer l) are both Hermitian, then:

$$|\operatorname{tr}(AB)| \le l\,\|A\|\,\|B\| \quad \text{and} \quad |\operatorname{tr}(AB)| \le \begin{cases} \|A\|\operatorname{tr}(B) & \text{if } B \text{ is positive}, \\ \operatorname{tr}(A)\,\|B\| & \text{if } A \text{ is positive}. \end{cases}$$

If A and B are positive and, in addition, A is invertible, then:

$$\operatorname{tr}(B) \le l\,\left\|A^{-1}\right\|^2 \operatorname{tr}(ABA).$$

Proof. The first part of (27) follows from von Neumann's trace inequality. Given two matrices A and B in ℂl×l:

$$|\operatorname{tr}(AB)| \le \sum_{t=1}^l \alpha_t \beta_t,$$

where the αt's and βt's are the singular values of A and B. In the case where A and B are Hermitian, the singular values are the magnitudes of the (real) eigenvalues. By Cauchy–Schwarz, $\sum_{t=1}^l \alpha_t \beta_t \le \sqrt{\sum_{t=1}^l \alpha_t^2} \times \sqrt{\sum_{t=1}^l \beta_t^2} \le l\,\|A\|\,\|B\|$. If B is positive, so are its eigenvalues, and ‖B‖ ≤ tr(B); hence (27). If A is invertible, tr(B) = tr(A−2ABA). If, moreover, A and B are both Hermitian positive, we obtain the second identity by applying (27) to the Hermitian matrix A−2 and the Hermitian positive matrix ABA. □

Lemma 11. For all $0 < a < \frac{1}{2}$,

$$a\,\mathbb{E}^\mu\left[\|\omega^0\|^2\right] \le -\frac{b}{2}\log(1-2a) + I^{(3)}(\mu).$$

For all μ ∈ ε2, there exist constants α1 < 1, α2 ∈ ℝ, α3 > 1, α4 ∈ ℝ and $m^\star > 0$, such that for all $m > m^\star$,

$$\Gamma_{(m)}(\mu) \le \alpha_1 I^{(3)}(\mu) + \alpha_2,$$
$$\Gamma(\mu) \le \alpha_1 I^{(3)}(\mu) + \alpha_2,$$
$$\Gamma_{(m)}(\mu) \ge -\alpha_3 I^{(3)}(\mu) - \alpha_4,$$
$$\Gamma(\mu) \ge -\alpha_3 I^{(3)}(\mu) - \alpha_4.$$

There exist constants $\alpha_3^m, \alpha_4^m > 0$ that converge to zero as m → ∞, such that:

$$\left|\Gamma_{(m)}(\mu) - \Gamma(\mu)\right| \le \alpha_3^m\, \mathbb{E}^\mu\left[\|\omega^0\|^2\right] + \alpha_4^m.$$

Proof. If I(3)(μ) = ∞, the first result is evident; thus, let $\mu \in \mathcal{M}_s(T^{\mathbb{Z}^d})$ with I(3)(μ) < ∞. For $\omega \in T^{V_n}$, we let $f(\omega) = \sum_{k \in V_n} \|\omega^k\|^2$. The function $f_M(\omega) = f(\omega)\,\mathbb{1}_{f(\omega) \le M}$ is bounded, and hence, from Definition 1, we have:

$$a \int_{T^{V_n}} f_M\, d\mu_{V_n} \le \log \int_{T^{V_n}} \exp(a f_M)\, dP^{V_n} + I^{(2)}\left(\mu_{V_n} \middle\| P^{V_n}\right).$$

We obtain using an easy Gaussian computation that:

$$\log \int_{T^{V_n}} \exp(a f)\, dP^{V_n} = -(2n+1)^d\, \frac{b}{2}\log(1-2a).$$

Upon taking M → ∞ and applying the dominated convergence theorem, we obtain:

$$a\,\mathbb{E}^\mu\left[\|\omega^0\|^2\right] \le -\frac{b}{2}\log(1-2a) + \frac{1}{(2n+1)^d}\, I^{(2)}\left(\mu_{V_n} \middle\| P^{V_n}\right).$$

We have used the stationarity of μ. By taking the limit n → ∞, we obtain the first inequality (29).
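The 'easy Gaussian computation' quoted above reduces, per site and per coordinate, to $\mathbb{E}[\exp(aX^2)] = (1-2a)^{-1/2}$ for a standard scalar Gaussian X and a < ½, which gives $\log\int \exp(af)\,dP^{V_n} = -(2n+1)^d\,\frac{b}{2}\log(1-2a)$. A one-dimensional quadrature check (illustrative only):

```python
import numpy as np

a = 0.25
x = np.linspace(-12.0, 12.0, 2_000_001)
dx = x[1] - x[0]
density = np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)  # standard Gaussian pdf
moment = np.sum(np.exp(a * x * x) * density) * dx       # E[exp(a X^2)]

print(moment, (1.0 - 2.0 * a) ** -0.5)  # both close to sqrt(2)
```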

It follows from the definition that (30)–(33) are true if μ ∉ ε2. Thus, we may assume that μ ∈ ε2. We choose $m^\star$ to be such that the eigenvalues of $\tilde{F}_{(m)}$ (as defined in (8)) are strictly greater than zero for all $m > m^\star$. It may be easily verified that:

$$\sum_{s=1}^b \tilde{\mu}_{ss}\left([-\pi,\pi]^d\right) = (2\pi)^d\, \mathbb{E}^\mu\left[\|\omega^0\|^2\right].$$

We observe the following upper and lower bounds, which hold for all $m > m^\star$ (and for Γ1, too),

$$\Gamma_{1,(m)} \le -\frac{1}{2}\inf_{m > m^\star,\ \theta \in [-\pi,\pi]^d} \log\det\tilde{G}_{(m)}(\theta) < \infty,$$
$$\Gamma_{1,(m)} \ge -\frac{1}{2}\sup_{m > m^\star,\ \theta \in [-\pi,\pi]^d} \log\det\tilde{G}_{(m)}(\theta) > -\infty.$$

We recall that, since $\tilde{G}_{(m)} = \tilde{F}_{(m)}^2$,

$$\Gamma_{2,(m)}(\mu) = \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \operatorname{tr}\left(d\tilde{\mu}(\theta)\right) - \frac{1}{2(2\pi)^d}\int_{[-\pi,\pi]^d} \operatorname{tr}\left(\tilde{F}_{(m)}^{-2}(\theta)\, d\tilde{\mu}(\theta)\right).$$

Note that $\operatorname{tr}(\tilde{F}_{(m)}^{-2}(\theta)\, d\tilde{\mu}(\theta)) = \operatorname{tr}(\tilde{F}_{(m)}^{-1}(\theta)\, d\tilde{\mu}(\theta)\, \tilde{F}_{(m)}^{-1}(\theta))$, and apply Lemma 10, (28), to this, obtaining:

$$\Gamma_{2,(m)}(\mu) \le \frac{1}{2}\left(1 - \alpha_1^*\right)\mathbb{E}^\mu\left[\|\omega^0\|^2\right],$$

where:

$$\alpha_1^* = \frac{1}{b}\inf_{\theta \in [-\pi,\pi]^d,\ m > m^\star} \left\|\tilde{F}_{(m)}(\theta)^2\right\|^{-1} > 0.$$

If $\alpha_1^* \ge 1$, then (30) is clear, because I(3)(μ) ≥ 0 and the above inequality (36) implies that Γ2,(m)(μ) is negative. Otherwise, we may use (29) and (36) to find that:

$$\Gamma_{(m)}(\mu) \le \frac{1 - \alpha_1^*}{2a}\, I^{(3)}(\mu) - \frac{b(1-\alpha_1^*)}{4a}\log(1-2a) - \frac{1}{2}\inf_{m > m^\star,\ \theta \in [-\pi,\pi]^d} \log\det\tilde{G}_{(m)}(\theta).$$

We may substitute any $a > \frac{1}{2}(1-\alpha_1^*)$, letting $\alpha_1 = \frac{1-\alpha_1^*}{2a}$, into the above to obtain (30). The second inequality (31) follows by taking m → ∞ in the first.

For the third inequality, we find using (27) that:

$$\Gamma_{2,(m)}(\mu) \ge \frac{1}{2}\left(1 - \alpha_3^*\right)\mathbb{E}^\mu\left[\|\omega^0\|^2\right],$$

where:

$$\alpha_3^* = b \sup_{\theta \in [-\pi,\pi]^d,\ m > m^\star} \left\|\tilde{F}_{(m)}(\theta)^{-2}\right\|.$$

If $\alpha_3^* \le 1$, then (32) is clear, because I(3)(μ) ≥ 0 and the above inequality (37) implies that Γ2,(m)(μ) is positive. Otherwise, we may use (29) and (37) to find that:

$$\Gamma_{(m)}(\mu) \ge \frac{1 - \alpha_3^*}{2a}\, I^{(3)}(\mu) - \frac{b(1-\alpha_3^*)}{4a}\log(1-2a) - \frac{1}{2}\sup_{m > m^\star,\ \theta \in [-\pi,\pi]^d} \log\det\tilde{G}_{(m)}(\theta).$$

This yields (32), with $\alpha_3 = \frac{\alpha_3^* - 1}{2a}$, on taking $a < \frac{\alpha_3^* - 1}{2}$. Taking limits as m → ∞ yields (33).

Now, making use of Lemma 10, (27), it may be seen that:

$$\left|\Gamma_{2,(m)}(\mu) - \Gamma_2(\mu)\right| = \frac{1}{2(2\pi)^d}\left|\int_{[-\pi,\pi]^d} \operatorname{tr}\left(\left(\tilde{G}^{-1}(\theta) - \tilde{G}_{(m)}^{-1}(\theta)\right)d\tilde{\mu}(\theta)\right)\right| \le \frac{b}{2(2\pi)^d}\, G^m \int_{[-\pi,\pi]^d} \operatorname{tr}\left(d\tilde{\mu}(\theta)\right).$$

Here, $G^m = \sup_{\theta \in [-\pi,\pi]^d} \left\|\tilde{G}^{-1}(\theta) - \tilde{G}_{(m)}^{-1}(\theta)\right\|$. The convergence of Γ1,(m) to Γ1 is clear. We thus obtain (34). □

Lemma 12. I(3)(μ) − Γ(μ) is lower-semicontinuous.

Proof. Since I(3)(μ) − Γ(μ) is infinite for all μ ∉ ε2, we only need to prove the Lemma for μ ∈ ε2. We need to prove that if μ(j) → μ, then $\varliminf_{j\to\infty} I^{(3)}(\mu^{(j)} \circ \beta^{-1}) \ge I^{(3)}(\mu \circ \beta^{-1})$. We may assume, without loss of generality (by passing to a subsequence), that:

$$\varliminf_{j\to\infty} I^{(3)}\left(\mu^{(j)} \circ \beta^{-1}\right) = \lim_{j\to\infty} I^{(3)}\left(\mu^{(j)} \circ \beta^{-1}\right).$$

If $\mathbb{E}^{\mu^{(j)}}[\|\omega^0\|^2] \to \infty$, then by (29) and (30) in Lemma 11, $\lim_{j\to\infty} I^{(3)}(\mu^{(j)} \circ \beta^{-1}) = \infty$, satisfying the requirements of the Lemma.

Thus, we may assume that there exists a constant l, such that $\mathbb{E}^{\mu^{(j)}}[\|\omega^0\|^2] \le l$ for all j. We therefore have, for all m, because of (11) and (4),

$$\varliminf_{j\to\infty} I^{(3)}\left(\mu^{(j)} \circ \beta^{-1}\right) = \varliminf_{j\to\infty} \left(I_{(m)}^{(3)}\left(\mu^{(j)}\right) + \Gamma_{(m)}\left(\mu^{(j)}\right) - \Gamma\left(\mu^{(j)}\right)\right).$$

Making use of (34), we may thus conclude that:

$$\varliminf_{j\to\infty} I^{(3)}\left(\mu^{(j)} \circ \beta^{-1}\right) \ge \varliminf_{j\to\infty} I_{(m)}^{(3)}\left(\mu^{(j)}\right) - \epsilon_{(m)}^*,$$

for some $\epsilon_{(m)}^*$, which goes to zero as m → ∞. In addition, $\varliminf_{j\to\infty} I_{(m)}^{(3)}(\mu^{(j)}) \ge I_{(m)}^{(3)}(\mu)$ due to the lower semicontinuity of $I_{(m)}^{(3)}$. On taking m → ∞, since $I_{(m)}^{(3)}(\mu) \to I^{(3)}(\mu \circ \beta^{-1})$, we find that:

$$\varliminf_{j\to\infty} I^{(3)}\left(\mu^{(j)} \circ \beta^{-1}\right) \ge I^{(3)}\left(\mu \circ \beta^{-1}\right). \qquad \square$$

B. A Lemma on the Large Deviations of Stationary Random Variables

The following lemma is an adaptation of Theorem 4.9 in [9] to ℤd. We state it in a general context. Let B be a Banach space with norm ‖⋅‖. For j ∈ ℤd, let $\lambda_j = 3^{-d}\prod_{\delta=1}^d 2^{-|j(\delta)|}$. We note that $\sum_{k \in \mathbb{Z}^d} \lambda_k = 1$. Define the metric dλ on $B^{\mathbb{Z}^d}$ by:

$$d_\lambda(x,y) = \sum_{j \in \mathbb{Z}^d} \lambda_j \min\left(\left\|x^j - y^j\right\|, 1\right).$$

Let the induced Prokhorov metric on $\mathcal{M}(B^{\mathbb{Z}^d})$ be $d_{\lambda,\mathcal{M}}$. For $\omega \in B^{\mathbb{Z}^d}$ and j ∈ ℤd, we define the shift operator as $S^j(\omega)^k = \omega^{j+k}$. Let $B_\delta(A) = \{x \in B^{\mathbb{Z}^d} : d_\lambda(x,y) \le \delta \text{ for some } y \in A\}$ be the closed blowup of A, and B(δ) be the closed blowup of {0}.

Suppose that for m ∈ ℤ+, $Y^{(m)}, W \in B^{\mathbb{Z}^d}$ are stationary random variables, governed by a probability law ℙ. We suppose that $\hat{\mu}_n(Y^{(m)})$ is governed by $\Pi_{(m)}^n$ and $\hat{\mu}_n(W)$ is governed by $\Pi_W^n$; these being the empirical process measures, defined analogously to (2). Suppose that for each m, $(\Pi_{(m)}^n)$ satisfies an LDP with good rate function J(m). Suppose that W = Y(m) + Z(m) for some series of stationary random variables Z(m) on $B^{\mathbb{Z}^d}$.

Lemma 13. If there exists a constant κ > 0, such that for all b > 0:

$$\varlimsup_{m\to\infty}\, \varlimsup_{n\to\infty}\, \frac{1}{(2n+1)^d} \log \mathbb{E}\left[\exp\left(b \sum_{j \in V_n} \min\left(\left\|Z^{(m),j}\right\|^2, 1\right)\right)\right] < \kappa,$$

then $(\Pi_W^n)$ satisfies an LDP with good rate function:

$$J(x) = \lim_{\epsilon\to 0}\, \varliminf_{m\to\infty}\, \inf_{y \in B_\epsilon(x)} J^{(m)}(y) = \lim_{\epsilon\to 0}\, \varlimsup_{m\to\infty}\, \inf_{y \in B_\epsilon(x)} J^{(m)}(y).$$

Proof. It suffices, thanks to Theorem 4.2.16 and Exercise 4.2.29 in [16], to prove that for all ϵ > 0,

$$\lim_{m\to\infty}\, \varlimsup_{n\to\infty}\, \frac{1}{(2n+1)^d} \log \mathbb{P}\left(d_{\lambda,\mathcal{M}}\left(\hat{\mu}_n(W), \hat{\mu}_n\left(Y^{(m)}\right)\right) > \epsilon\right) = -\infty.$$

For $x \in B^{\mathbb{Z}^d}$, write $|x|_\lambda := d_\lambda(x, 0)$. Let $B \in \mathcal{B}(B^{\mathbb{Z}^d})$. Then, noting the definition of pn just above (2),

$$\hat{\mu}_n(W)(B) = \frac{1}{(2n+1)^d}\sum_{j \in V_n} \mathbb{1}_B\left(S^j p_n\left(Y^{(m)}\right) + S^j p_n\left(Z^{(m)}\right)\right)$$
$$\le \frac{1}{(2n+1)^d}\sum_{j \in V_n} \left\{\mathbb{1}_B\left(S^j p_n\left(Y^{(m)}\right) + S^j p_n\left(Z^{(m)}\right)\right)\mathbb{1}_{B(\epsilon)}\left(S^j p_n\left(Z^{(m)}\right)\right) + \mathbb{1}_{B(\epsilon)^c}\left(S^j p_n\left(Z^{(m)}\right)\right)\right\}$$
$$\le \frac{1}{(2n+1)^d}\sum_{j \in V_n} \mathbb{1}_{B_\epsilon}\left(S^j p_n\left(Y^{(m)}\right)\right) + \frac{1}{(2n+1)^d}\#\left\{j \in V_n : \left|S^j p_n\left(Z^{(m)}\right)\right|_\lambda > \epsilon\right\}$$
$$= \hat{\mu}_n\left(Y^{(m)}\right)(B_\epsilon) + \frac{1}{(2n+1)^d}\#\left\{j \in V_n : \left|S^j p_n\left(Z^{(m)}\right)\right|_\lambda > \epsilon\right\}.$$

Thus:

$$\mathbb{P}\left(d_{\lambda,\mathcal{M}}\left(\hat{\mu}_n(W), \hat{\mu}_n\left(Y^{(m)}\right)\right) > \epsilon\right) \le \mathbb{P}\left(\frac{1}{(2n+1)^d}\#\left\{j \in V_n : \left|S^j p_n\left(Z^{(m)}\right)\right|_\lambda > \epsilon\right\} > \epsilon\right)$$
$$\le \mathbb{P}\left(\sum_{j \in V_n} \left|S^j p_n\left(Z^{(m)}\right)\right|_\lambda^2 > (2n+1)^d \epsilon^3\right)$$
$$\le \exp\left(-b(2n+1)^d \epsilon^3\right)\mathbb{E}^{\mathbb{P}}\left[\exp\left(b \sum_{j \in V_n} \left|S^j p_n\left(Z^{(m)}\right)\right|_\lambda^2\right)\right]$$
$$\le \exp\left(-b(2n+1)^d \epsilon^3\right)\mathbb{E}^{\mathbb{P}}\left[\exp\left(b \sum_{j \in V_n} \sum_{k \in \mathbb{Z}^d} \lambda_k \min\left(\left\|p_n\left(Z^{(m)}\right)^{j+k}\right\|^2, 1\right)\right)\right],$$

for an arbitrary b > 0. Since $\sum_{k \in \mathbb{Z}^d} \lambda_k = 1$ and the exponential function is convex, by Jensen's inequality the last expression is at most:

$$\exp\left(-b(2n+1)^d \epsilon^3\right)\mathbb{E}^{\mathbb{P}}\left[\sum_{k \in \mathbb{Z}^d} \lambda_k \exp\left(b \sum_{j \in V_n} \min\left(\left\|p_n\left(Z^{(m)}\right)^{j+k}\right\|^2, 1\right)\right)\right] = \exp\left(-b(2n+1)^d \epsilon^3\right)\mathbb{E}^{\mathbb{P}}\left[\exp\left(b \sum_{j \in V_n} \min\left(\left\|Z^{(m),j}\right\|^2, 1\right)\right)\right],$$

by the stationarity of Z(m). We may thus infer, using (38), that:

$$\varlimsup_{m\to\infty}\, \varlimsup_{n\to\infty}\, \frac{1}{(2n+1)^d} \log \mathbb{P}\left(d_{\lambda,\mathcal{M}}\left(\hat{\mu}_n(W), \hat{\mu}_n\left(Y^{(m)}\right)\right) > \epsilon\right) \le -b\epsilon^3 + \kappa.$$

Since b is arbitrary, we may take b → ∞ to obtain (39). □

C. Proof of Corollary 2

We now prove Corollary 2.

Proof. Let ϕ: T → T be ϕ(ω) = ω + c and $\phi^{\mathbb{Z}^d}: T^{\mathbb{Z}^d} \to T^{\mathbb{Z}^d}$ be $\phi^{\mathbb{Z}^d}(\omega)^j = \phi(\omega^j)$. Let $\Psi^{\mathbb{Z}^d}: \mathcal{M}_s(T^{\mathbb{Z}^d}) \to \mathcal{M}_s(T^{\mathbb{Z}^d})$ be $\Psi^{\mathbb{Z}^d}(\mu) = \mu \circ (\phi^{\mathbb{Z}^d})^{-1}$ and $\Psi: \mathcal{M}_s(T) \to \mathcal{M}_s(T)$ be Ψ(ν) = ν ∘ ϕ−1. It is easily checked that these maps are bicontinuous bijections for their respective topologies. Since $\Pi_n^c = \Pi_n \circ (\Psi^{\mathbb{Z}^d})^{-1}$, we have, by the contraction principle, Theorem 4.2.1 in [16], that $\Pi_n^c$ satisfies a strong LDP with good rate function:

I ( 3 ) ( ( Ψ Z d ) 1 ( μ ) ) Γ ( ( Ψ Z d ) 1 ( μ ) ) .

Clearly, $(\Psi^{\mathbb{Z}^d})^{-1}(\mu)$ is in ε2 if and only if μ is in ε2. Let $\nu = (\Psi^{\mathbb{Z}^d})^{-1}(\mu)$. It is well known that if $\nu_{V_n}$ is absolutely continuous relative to $P^{V_n}$, then the relative entropy may be written as:

$$I^{(2)}\left(\nu_{V_n} \middle\| P^{V_n}\right) = \mathbb{E}^{\nu_{V_n}(x)}\left[\log\frac{d\nu_{V_n}}{dP^{V_n}}(x)\right] = \mathbb{E}^{\mu_{V_n}(x)}\left[\log\frac{d\mu_{V_n}}{d(\Psi(P))^{V_n}}(x)\right] = I^{(2)}\left(\mu_{V_n} \middle\| (\Psi(P))^{V_n}\right).$$

Otherwise, the relative entropy is infinite. Thus, if the relative entropy is finite, $\nu_{V_n}$ must possess a density, and this means that $\mu_{V_n}$ possesses a density, which we denote by $r(x): T^{V_n} \to \mathbb{R}$. We note that the density of $(\Psi(P))^{V_n}$ is:

$$\rho_{V_n}(x) = (2\pi)^{-\frac{(2n+1)^d b}{2}} \exp\left(-\frac{1}{2}\sum_{j \in V_n} \left\|x^j - c\right\|^2\right).$$

Accordingly, we find that:

$$I^{(2)}\left(\mu_{V_n} \middle\| (\Psi(P))^{V_n}\right) = \mathbb{E}^{\mu_{V_n}(x)}\left[\log\left(\frac{r(x)}{\rho_{V_n}(x)}\right)\right] = I^{(2)}\left(\mu_{V_n} \middle\| P^{V_n}\right) - (2n+1)^d\, m_\mu^\dagger c + \frac{(2n+1)^d}{2}\|c\|^2.$$

We divide by (2n + 1)d and take n to infinity to obtain:

$$I^{(3)}\left((\Psi^{\mathbb{Z}^d})^{-1}(\mu)\right) = I^{(3)}(\mu) - m_\mu^\dagger c + \frac{1}{2}\|c\|^2.$$

If $\mu_{V_n}$ does not possess a density for some n, then both sides of the above equation are infinite. It may be verified that the spectral measure of ν is given by:

$$d\tilde{\nu}(\theta) = d\tilde{\mu}(\theta) + (2\pi)^d\, \delta(\theta)\left(c\,c^\dagger - c\, m_\mu^\dagger - m_\mu\, c^\dagger\right).$$

On substituting this into the expression in Theorem 1, we find that:

$$\Gamma_2(\nu) = \Gamma_2(\mu) + \frac{1}{2}\operatorname{tr}\left(\left(\mathrm{Id}_b - \tilde{G}(0)^{-1}\right)c\,c^\dagger\right) - \frac{1}{2}\operatorname{tr}\left(\left(\mathrm{Id}_b - \tilde{G}(0)^{-1}\right)\left(m_\mu c^\dagger + c\, m_\mu^\dagger\right)\right)$$
$$= \Gamma_2(\mu) + \frac{1}{2}\, c^\dagger\left(\mathrm{Id}_b - \tilde{G}(0)^{-1}\right)c - c^\dagger\left(\mathrm{Id}_b - \tilde{G}(0)^{-1}\right)m_\mu.$$

We have used the fact that $\tilde{G}(0)$ is symmetric. We thus obtain (5). The minimum of the rate function remains unique because of the bijectivity of $\Psi^{\mathbb{Z}^d}$. □
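The single-site computation behind the proof above can be sanity-checked on Gaussian marginals. If μ = N(m, Id_b), then $I^{(2)}(\mu \| N(c, \mathrm{Id}_b)) = I^{(2)}(\mu \| N(0, \mathrm{Id}_b)) - m^\dagger c + \frac{1}{2}\|c\|^2$, mirroring how the density ρ shifts the relative entropy by $-(2n+1)^d\, m_\mu^\dagger c + (2n+1)^d \|c\|^2/2$. The Gaussian μ and the vectors below are assumed test data, not objects from the paper:

```python
import numpy as np

m = np.array([1.0, -2.0, 0.5])   # mean of mu = N(m, I)
c = np.array([0.3, 0.7, -1.1])   # shift vector

kl_vs_standard = 0.5 * m @ m              # I2(N(m,I) || N(0,I))
kl_vs_shifted = 0.5 * (m - c) @ (m - c)   # I2(N(m,I) || N(c,I))

# Shift identity: I2(mu || N(c,I)) = I2(mu || N(0,I)) - <m,c> + ||c||^2 / 2
print(kl_vs_shifted, kl_vs_standard - m @ c + 0.5 * c @ c)
```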

Acknowledgments

This work was partially supported by the European Union Seventh Framework Programme (FP7/2007-2013) under Grant agreements No. 269921 (BrainScaleS) and No. 318723 (Mathemacs), and by the ERC advanced grant NerVi, No. 227747.


Author Contributions

Both authors contributed to all parts of the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baladron, J.; Fasoli, D.; Faugeras, O.; Touboul, J. Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and Fitzhugh-Nagumo neurons. J. Math. Neurosci. 2012, 2. [Google Scholar] [CrossRef]
  2. Faugeras, O.; MacLaurin, J. Asymptotic Description of Neural Networks with Correlated Synaptic Weights; Rapport de recherche RR-8495; INRIA: Sophia Antipolis, France, March 2014. [Google Scholar]
  3. Faugeras, O.; Touboul, J.; Cessac, B. A constructive mean field analysis of multi population neural networks with random synaptic weights and stochastic inputs. Front. Comput. Neurosci. 2009, 3. [Google Scholar] [CrossRef]
  4. Rolls, E.T.; Deco, G. The Noisy Brain: Stochastic Dynamics as a Principle of Brain Function; Oxford University Press: Oxford, UK, 2010. [Google Scholar]
  5. Donsker, M.D.; Varadhan, S.R.S. Asymptotic evaluation of certain Markov process expectations for large time, IV. Comm. Pure Appl. Math. 1983, XXXVI, 183–212. [Google Scholar]
  6. Chiyonobu, T.; Kusuoka, S. The large deviation principle for hypermixing processes. Probab. Theor. Relat. Field. 1988, 78, 627–649. [Google Scholar]
  7. Bryc, W.; Dembo, A. Large deviations and strong mixing. In Annales de l’IHP Probabilités et statistiques; Elsevier: Amsterdam, The Netherlands, 1996; Volume 32, pp. 549–569. [Google Scholar]
  8. Deuschel, J.D.; Stroock, D.W.; Zessin, H. Microcanonical distributions for lattice gases. Comm. Math. Phys. 1991, 139, 83–101. [Google Scholar]
  9. Baxter, J.R.; Jain, N.C. An approximation condition for large deviations and some applications. In Convergence in Ergodic Theory and Probability; Bergelson, V., Ed.; Ohio State University, Mathematical Research Institute: Columbus, OH, USA, 1993. [Google Scholar]
  10. Steinberg, Y.; Zeitouni, O. On tests for normality. IEEE Trans. Inf. Theory. 1992, 38, 1779–1787. [Google Scholar]
  11. Bryc, W.; Dembo, A. On large deviations of empirical measures for stationary Gaussian processes. Stoch. Process. Appl. 1995, 58, 23–34. [Google Scholar]
  12. Faugeras, O.; MacLaurin, J. Asymptotic description of stochastic neural networks. I. Existence of a large deviation principle. Comptes Rendus Mathematiques 2014, 352(10). [Google Scholar]
  13. Faugeras, O.; MacLaurin, J. Large deviations of a spatially-ergodic neural network with learning. arXiv 2014, arXiv:1404.0732.
  14. Donsker, M.D.; Varadhan, S.R.S. Large deviations for stationary Gaussian processes. Commun. Math. Phys. 1985, 97, 187–210. [Google Scholar]
  15. Shiryaev, A.N. Probability; Springer: Berlin, Germany, 1996. [Google Scholar]
  16. Dembo, A.; Zeitouni, O. Large Deviations Techniques and Applications, 2nd ed.; Springer: Berlin, Germany, 1997. [Google Scholar]