Mathematics 2019, 7(5), 427; https://doi.org/10.3390/math7050427

Article
Retrieving a Context Tree from EEG Data
1 Instituto de Matemática e Estatística, Universidade de São Paulo, São Paulo 05508-090, Brazil
2 Centro de Matemática, Universidad de la República, Uruguay, and Instituto Pasteur de Montevideo, Montevideo 11400, Uruguay
3 Instituto de Matemática, Universidade Federal do Rio de Janeiro, Rio de Janeiro 21941-909, Brazil
4 Instituto de Biofísica, Universidade Federal do Rio de Janeiro, Rio de Janeiro 21941-902, Brazil
* Author to whom correspondence should be addressed.
Received: 28 March 2019 / Accepted: 5 May 2019 / Published: 14 May 2019

Abstract
It has been repeatedly conjectured that the brain retrieves statistical regularities from stimuli. Here, we present a new statistical approach that makes it possible to address this conjecture. This approach is based on a new class of stochastic processes, namely, sequences of random objects driven by chains with memory of variable length.
Keywords:
stochastic chains with memory of variable length; sequences of random objects driven by context tree models; stochastic modeling of EEG data

1. Introduction

Consider the following experimental situation. A listener is exposed to a sequence of auditory stimuli, generated by a stochastic chain, while electroencephalographic (EEG) signals are recorded from the scalp. Going back to Von Helmholtz [1], a classical conjecture in neurobiology claims that the listener’s brain automatically identifies statistical regularities in the sequence of stimuli (see, for instance, [2,3]). If this is the case, then a signature of the stochastic chain generating the stimuli should somehow be encoded in the brain activity. The question is whether this signature can be identified in the EEG data recorded during the experiment. The goal of this paper is to discuss a new probabilistic framework in which this conjecture can be formally addressed.
To model the relationship between the random chain of auditory stimuli and the corresponding EEG data, we introduce a new class of stochastic processes. A process in this class has two components. The first one is a stochastic chain taking values in the set of auditory units. The second one is a sequence of functions corresponding to the sequence of EEG chunks recorded during the exposure of the successive auditory stimuli.
We use a stochastic chain with memory of variable length to model the dependence on the past characterizing the sequence of auditory stimuli. Stochastic chains with memory of variable length were introduced by Rissanen [4] as a universal system for data compression. In his seminal paper, Rissanen observed that, in many real-life stochastic chains, the dependence on the past does not have a fixed length; instead, it changes at each step as a function of the past itself. He called the smallest final string of past symbols containing all the information required to predict the next symbol a context. The set of all contexts defines a partition of the past and can be represented by a rooted, labeled, oriented tree. For this reason, many authors call stochastic chains with memory of variable length context tree models. We adopt this terminology here. A non-exhaustive list of articles on context tree models, with applications in biology and linguistics, includes [5,6,7,8,9,10,11,12,13].
An interesting point about stochastic chains with memory of variable length with finite context trees is that they are dense, in the $\bar{d}$-topology, in the class of chains of infinite order with continuous, non-null transition probabilities and summable continuity rates. This result follows easily from Fernández and Galves [14] and Duarte et al. [15]. We refer the reader to these articles for definitions and further details.
Besides modeling the chain of auditory units, we must also model the relationship between the chain of stimuli and the sequence of EEG chunks. To that end, we assume that at each time step a new EEG chunk is chosen according to a probability measure (defined on a suitable class of functions) which depends only on the context assigned to the sequence of auditory units generated up to that time. In particular, this implies that, to describe the new class of stochastic chains introduced in this paper, we also need a family of probability measures on the set of functions corresponding to the EEG chunks, indexed by the contexts of the context tree characterizing the chain of auditory stimuli.
In this probabilistic framework, the neurobiological question can now be rigorously addressed as follows. Is it possible to retrieve the context tree characterizing the chain of stimuli from the corresponding EEG data? This is a problem of statistical model selection in the class of stochastic processes we have just informally described.
This article is organized as follows. In Section 2, we provide an informal overview of our approach. In Section 3, we introduce the notation, recall the definition of a context tree model and introduce the new class of sequences of random objects driven by context tree models. A statistical procedure to select, given the data, a member of the class of sequences of random objects driven by context tree models is presented in Section 4. The theoretical result supporting the proposed method, namely Theorem 1, is given in the same section. In Section 5, we conduct a brief simulation study to illustrate the statistical selection procedure of Section 4. The proof of Theorem 1 is given in Section 6.

2. Informal Presentation of Our Approach

Volunteers are exposed to sequences of auditory stimuli generated by a context tree model while EEG signals are recorded. The auditory units used as stimuli are strong beats, weak beats and silent units, represented by the symbols 2, 1 and 0, respectively.
The way the sequence of auditory units was generated can be informally described as follows. Start with the deterministic sequence
2 1 1 2 1 1 2 1 1 2 1 1 2 ⋯
Then, replace each weak beat (symbol 1) by a silent unit (symbol 0), independently, with probability $\epsilon$.
An example of a sequence produced by this procedure acting on the basic sequence would be
2 1 1 2 0 1 2 1 1 2 0 0 2 ⋯
In the sequel, this stochastic chain is denoted by $(X_0, X_1, X_2, \ldots)$.
The stochastic chain just described can be generated step by step by an algorithm using only information from the past. We impose on the algorithm the condition that it uses, at each step, the shortest string of past symbols needed to generate the next symbol.
This algorithm can be described as follows. To generate $X_n$, given the past $X_{n-1}, X_{n-2}, \ldots$, we first look at the last symbol $X_{n-1}$.
  • If $X_{n-1} = 2$, then
$$X_n = \begin{cases} 1, & \text{with probability } 1 - \epsilon, \\ 0, & \text{with probability } \epsilon. \end{cases}$$
  • If $X_{n-1} = 1$ or $X_{n-1} = 0$, then we need to go back one more step:
    if $X_{n-2} = 2$, then
$$X_n = \begin{cases} 1, & \text{with probability } 1 - \epsilon, \\ 0, & \text{with probability } \epsilon; \end{cases}$$
    if $X_{n-2} = 1$ or $X_{n-2} = 0$, then $X_n = 2$ with probability 1.
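As an illustration, the step-by-step rule above can be sketched in Python. The function names and the choice of an initial past are our own; the contexts and transition probabilities follow the description in the text.

```python
import random

def next_symbol(past, eps, rng=random):
    """Generate X_n using the shortest relevant suffix of the past."""
    if past[-1] == 2:
        # context "2": weak beat (1) w.p. 1-eps, silent unit (0) w.p. eps
        return 1 if rng.random() < 1 - eps else 0
    # last symbol is 0 or 1: look one step further back
    if past[-2] == 2:
        # contexts "20" and "21": same law as above
        return 1 if rng.random() < 1 - eps else 0
    # contexts "00", "10", "01", "11": strong beat with probability 1
    return 2

def generate_chain(n, eps, seed=None):
    """Generate the first n symbols of the stimulus chain."""
    rng = random.Random(seed)
    chain = [2, 1]  # an arbitrary valid initial past (our choice)
    while len(chain) < n:
        chain.append(next_symbol(chain, eps, rng))
    return chain[:n]
```

With $\epsilon = 0$ the generator reproduces the deterministic sequence 2 1 1 2 1 1 ⋯ exactly; with $\epsilon > 0$ each weak beat is independently replaced by a silence with probability $\epsilon$.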
The algorithm described above is characterized by two elements. The first one is a partition of the set of all possible sequences of past units. This partition is represented by the set
$$\tau = \{00,\ 10,\ 20,\ 01,\ 11,\ 21,\ 2\}.$$
In the partition τ, the string 00 represents the set of all strings ending with the ordered pair (0, 0); 10 represents the set of all strings ending with the ordered pair (1, 0); …; and finally the symbol 2 represents the set of all strings ending with 2. Following Rissanen [4], we call any element of this partition a context.
For instance, if
$$\ldots,\ X_{n-3} = 1,\ X_{n-2} = 2,\ X_{n-1} = 0,\ X_n = 1,$$
the context associated with this past sequence is 01.
The partition τ of the past as described above can be represented by a rooted and labeled tree (see Figure 1) where each element of the partition is described as a leaf of the tree.
In the construction described above, for each sequence of past symbols, the algorithm first identifies the corresponding context w in the partition τ. Once the context w is identified, the algorithm chooses the next symbol $a \in \{0, 1, 2\}$ using the transition probability $p(a \mid w)$. In other words, each context w in τ defines a probability measure on $\{0, 1, 2\}$. The family of transition probabilities indexed by the elements of the partition is the second element characterizing the algorithm.
The family of transition probabilities associated with τ is shown in Table 1.
Using the notion of context tree, the neurobiological conjecture can now be rephrased as follows. Is the brain able to identify the context tree generating the sample of auditory stimuli? From an experimental point of view, the question is whether it is possible to retrieve the tree presented in Figure 1 from the corresponding EEG data. To deal with this question we introduce a new statistical model selection procedure described below.
Let $Y_n$ be the chunk of EEG data recorded while the volunteer is exposed to the auditory stimulus $X_n$. Observe that $Y_n$ is a continuous function taking values in $\mathbb{R}^d$, where $d \ge 1$ is the number of electrodes. Its domain is the time interval of length, say, T during which the acoustic stimulus $X_n$ is presented.
The statistical procedure introduced in this paper can be informally described as follows. Given a sample $(X_0, Y_0), \ldots, (X_n, Y_n)$ of auditory stimuli and associated EEG chunks, and a suitable initial integer $k \ge 1$, do the following.
  • For each string $u = u_1, u_2, \ldots, u_k$ of symbols in $\{0, 1, 2\}$, identify all occurrences in the sequence $X_0, X_1, \ldots, X_n$ of the string $au$, obtained by concatenating a symbol $a \in \{0, 1, 2\}$ and the string u.
  • For each $a \in \{0, 1, 2\}$, form the subsample of all EEG chunks $Y_m = Y_m(au)$ such that $X_{m-k} = a,\ X_{m-k+1} = u_1,\ \ldots,\ X_m = u_k$ (see Figure 2).
  • For any pair $a, b \in \{0, 1, 2\}$, test the null hypothesis that the laws of the EEG chunks $Y(au)$ and $Y(bu)$ collected in the previous step are equal.
    (a) If the null hypothesis is not rejected for any pair of final symbols a and b, we conclude that the occurrence of a or b before the string u does not affect the law of the EEG chunks. Then, we restart the procedure with the one-step-shorter string $u = u_2, \ldots, u_k$.
    (b) If the null hypothesis is rejected for at least one pair of final symbols a and b, we conclude that the law of the EEG chunks depends on the entire string $au$, and we stop the pruning procedure.
  • We keep pruning the string $u_1, \ldots, u_k$ until the null hypothesis is rejected for the first time.
  • Call $\hat\tau_n$ the tree constituted by the strings that remain after the pruning procedure.
The question is whether $\hat\tau_n$ coincides with the context tree τ generating the sequence of auditory stimuli.
An important technical issue must be clarified at this point, namely, how to test the equality of the laws of two subsamples of EEG chunks. This is done using the projective method informally explained below.
Suppose we have two samples of random functions, each composed of independent realizations of some fixed law. To test whether the two samples are generated by the same law, we choose a “direction” at random and project each function in the samples onto this direction. This produces two new samples of real numbers. We then test whether the two samples of projected real numbers have the same law. Under suitable conditions, a theorem by Cuesta-Albertos et al. [16] ensures that, for almost all directions, if the test does not reject the null hypothesis that the projected samples have the same law, then the original samples also have the same law.
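The projective method can be sketched numerically: represent each curve by its values on a common time grid, draw one random direction, and compare the projected samples through the Kolmogorov–Smirnov distance. This is a minimal sketch with names of our own choosing, not the authors' implementation.

```python
import numpy as np

def ks_distance(x, y):
    """Kolmogorov-Smirnov distance between the empirical CDFs of x and y."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    fx = np.searchsorted(x, grid, side="right") / len(x)
    fy = np.searchsorted(y, grid, side="right") / len(y)
    return np.abs(fx - fy).max()

def projected_ks(sample_a, sample_b, rng):
    """Project two samples of discretized curves (rows = curves,
    columns = points of a common time grid) onto one random direction
    and return the KS distance between the projected samples."""
    h = rng.standard_normal(sample_a.shape[1])  # the random direction
    return ks_distance(sample_a @ h, sample_b @ h)
```

Two identical samples project to identical empirical distributions, so their KS distance is 0; two samples with disjoint projected supports are at distance 1.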
The arguments informally sketched in this section are formally developed in the subsequent sections.

3. Notation and Definitions

Let A be a finite alphabet. Given two integers $m, n \in \mathbb{Z}$ with $m \le n$, the string $(u_m, \ldots, u_n)$ of symbols in A is often denoted by $u_m^n$; its length is $\ell(u_m^n) = n - m + 1$. The empty string is denoted by ∅ and its length is $\ell(\emptyset) = 0$. Given two strings u and v of elements of A, we denote by $uv$ the string in $A^{\ell(u)+\ell(v)}$ obtained by the concatenation of u and v. By definition, $\emptyset u = u \emptyset = u$ for any string $u \in A^{\ell(u)}$. The string u is said to be a suffix of v if there exists a string s satisfying $v = su$. This relation is denoted by $u \preceq v$. When $v \neq u$, we say that u is a proper suffix of v and write $u \prec v$. Hereafter, the set of all finite strings of symbols in A is denoted by $A^* := \bigcup_{k=1}^{\infty} A^k$. For any finite string $w = w_{-k}^{-1}$ with $k \ge 2$, we write $\mathrm{suf}(w)$ to denote the one-step shorter string $w_{-k+1}^{-1}$.
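The suffix relations above translate directly into code. A minimal sketch of our own, representing strings of symbols as Python strings read left to right, with the suffix at the right end:

```python
def is_suffix(u, v):
    """True if u is a suffix of v, i.e. v = s u for some (possibly empty) s."""
    return len(u) <= len(v) and v[len(v) - len(u):] == u

def is_proper_suffix(u, v):
    """True if u is a suffix of v and u != v."""
    return u != v and is_suffix(u, v)

def suf(w):
    """One-step shorter string: drop the first (oldest) symbol of w."""
    return w[1:]
```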
Definition 1.
A finite subset τ of $A^*$ is a context tree if it satisfies the following conditions:
1. Suffix Property. For no $w \in \tau$ do we have $u \in \tau$ with $u \prec w$.
2. Irreducibility. No string belonging to τ can be replaced by a proper suffix without violating the suffix property.
The set τ can be identified with the set of leaves of a rooted tree with a finite set of labeled branches. The elements of τ are always denoted by $w, u, v, \ldots$
The height of the context tree τ is defined as $\ell(\tau) = \max\{\ell(w) : w \in \tau\}$. In the present paper, we only consider context trees of finite height.
Definition 2.
Let τ and τ′ be two context trees. We say that τ is smaller than τ′, and write $\tau \preceq \tau'$, if for every $w \in \tau$ there exists $w' \in \tau'$ such that $w \preceq w'$.
Given a context tree τ, let $p = \{p(\cdot \mid w) : w \in \tau\}$ be a family of probability measures on A indexed by the elements of τ. The pair $(\tau, p)$ is called a probabilistic context tree on A. Each element of τ is called a context. For any string $x \in A^n$ with $n \ge \ell(\tau)$, we write $c_\tau(x)$ to denote the only context in τ which is a suffix of x.
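A minimal sketch of our own of the context map $c_\tau$, using the tree of Section 2 as an example; the uniqueness check encodes the suffix property:

```python
def context_of(tau, past):
    """Return the unique context in tau that is a suffix of `past`
    (symbols as a string, most recent symbol at the right end)."""
    matches = [w for w in tau if past.endswith(w)]
    # the suffix property guarantees exactly one match for long enough pasts
    assert len(matches) == 1, "tau must partition the set of pasts"
    return matches[0]

# The context tree of Section 2 (see Figure 1)
tau = ["00", "10", "20", "01", "11", "21", "2"]
```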
Definition 3.
A probabilistic context tree $(\tau, p)$ with height $\ell(\tau) = k$ is irreducible if for any $a_{-k}^{-1} \in A^k$ and $b \in A$ there exist a positive integer $n = n(a_{-k}^{-1}, b)$ and symbols $a_0, a_1, \ldots, a_n = b$ in A such that
$$p(a_0 \mid c_\tau(a_{-k}^{-1})) > 0,\quad p(a_1 \mid c_\tau(a_{-k}^{-1} a_0)) > 0,\quad \ldots,\quad p(a_n \mid c_\tau(a_{-k}^{-1} a_0 \cdots a_{n-1})) > 0.$$
Definition 4.
Let $(\tau, p)$ be a probabilistic context tree on A. A stochastic chain $(X_n)_{n \in \mathbb{N}}$ taking values in A is called a context tree model compatible with $(\tau, p)$ if
1. for any $n \ge \ell(\tau)$ and any finite string $x_0^{n-1} \in A^n$ such that $P(X_0^{n-1} = x_0^{n-1}) > 0$, it holds that
$$P(X_n = a \mid X_0^{n-1} = x_0^{n-1}) = p(a \mid c_\tau(x_0^{n-1})) \quad \text{for all } a \in A,$$
where $c_\tau(x_0^{n-1})$ is the only context in τ which is a suffix of $x_0^{n-1}$;
2. for any $1 \le j < \ell(c_\tau(x_0^{n-1}))$, there exists $a \in A$ such that
$$P(X_n = a \mid X_0^{n-1} = x_0^{n-1}) \neq P(X_n = a \mid X_{n-j}^{n-1} = x_{n-j}^{n-1}).$$
With this notation, we can now introduce the class of random objects driven by a context tree model.
Definition 5.
Let A be a finite alphabet, $(\tau, p)$ a probabilistic context tree on A, $(F, \mathcal{F})$ a measurable space and $(Q_w : w \in \tau)$ a family of probability measures on $(F, \mathcal{F})$. The bivariate stochastic chain $(X_n, Y_n)_{n \in \mathbb{N}}$ taking values in $A \times F$ is a sequence of random objects driven by a context tree model compatible with $(\tau, p)$ and $(Q_w : w \in \tau)$ if the following conditions are satisfied:
1. $(X_n)_{n \in \mathbb{N}}$ is a context tree model compatible with $(\tau, p)$.
2. The random elements $Y_0, Y_1, \ldots$ are $\mathcal{F}$-measurable. Moreover, for any integers $\ell(\tau) \le m \le n$, any string $x_{m-\ell(\tau)+1}^{n} \in A^{n-m+\ell(\tau)}$ and any sequence $J_m, \ldots, J_n$ of $\mathcal{F}$-measurable sets,
$$P\big(Y_m \in J_m, \ldots, Y_n \in J_n \mid X_{m-\ell(\tau)+1}^{n} = x_{m-\ell(\tau)+1}^{n}\big) = \prod_{k=m}^{n} Q_{c_\tau(x_{k-\ell(\tau)+1}^{k})}(J_k),$$
where $c_\tau(x_{k-\ell(\tau)+1}^{k})$ is the context in τ assigned to the string of symbols $x_{k-\ell(\tau)+1}^{k}$.
Definition 6.
A sequence of random objects driven by a context tree model compatible with $(\tau, p)$ and $(Q_w : w \in \tau)$ is identifiable if for any context $w \in \tau$ there exists a context $u \in \tau$ such that $\mathrm{suf}(w) = \mathrm{suf}(u)$ and $Q_w \neq Q_u$.
The process $(X_n)$ is called the stimulus chain and $(Y_n)$ is called the response chain.
The experimental situation described in Section 2 can now be formally presented as follows.
  • The stimulus chain $(X_n)$ is a context tree model taking values in an alphabet whose elements are symbols indicating the different types of auditory units appearing in the sequence of stimuli. We call $(\tau, p)$ its probabilistic context tree.
  • Each element $Y_n$ of the response chain $(Y_n)$ represents the EEG chunk recorded while the volunteer is exposed to the auditory stimulus $X_n$. Thus, $Y_n = (Y_n(t),\ t \in [0, T])$ is a function taking values in $\mathbb{R}^d$, where T is the time distance between the onsets of two consecutive auditory stimuli and d the number of electrodes used in the analysis. The sample space F is the Hilbert space $L^2([0,T], \mathbb{R}^d)$ of $\mathbb{R}^d$-valued functions on $[0, T]$ having square-integrable components. The Hilbert space F is endowed with its usual Borel σ-algebra $\mathcal{F}$.
  • Finally, $(Q_w : w \in \tau)$ is a family of probability measures on $L^2([0,T], \mathbb{R}^d)$ describing the laws of the EEG chunks.
From now on, the pair $(F, \mathcal{F})$ always denotes the Hilbert space $L^2([0,T], \mathbb{R}^d)$ endowed with its usual Borel σ-algebra.

4. Statistical Selection for Sequences of Random Objects Driven by Context Tree Models

Let $(X_0, Y_0), \ldots, (X_n, Y_n)$, with $X_k \in A$ and $Y_k \in F$ for $0 \le k \le n$, be a sample produced by a sequence of random objects driven by a context tree model compatible with $(\bar\tau, \bar p)$ and $(\bar Q_w : w \in \bar\tau)$. Before introducing the statistical selection procedure, we need two more definitions.
Definition 7.
Let τ be a context tree and fix a finite string $s \in A^*$. We define the branch in τ induced by s as the set $B_\tau(s) = \{w \in \tau : s \preceq w\}$. The set $B_\tau(s)$ is called a terminal branch if for all $w \in B_\tau(s)$ it holds that $w = as$ for some $a \in A$.
Given a sample $X_0, \ldots, X_n$ of symbols in A and a finite string $u \in A^*$, the number of occurrences of u in the sample $X_0, \ldots, X_n$ is defined as
$$N_n(u) = \sum_{m=\ell(u)-1}^{n} \mathbf{1}\{X_{m-\ell(u)+1}^{m} = u\}.$$
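The count $N_n(u)$ and the induced subsamples of the next section can be sketched as follows (0-based indexing; the function names are ours):

```python
def occurrences(x, u):
    """Indices m with x[m-len(u)+1 : m+1] == u, i.e. the positions
    where the string u ends inside the sample x (0-based)."""
    k = len(u)
    return [m for m in range(k - 1, len(x)) if x[m - k + 1 : m + 1] == u]

def subsample(y, x, u):
    """The chunks Y_m(u) observed at the occurrence times of u in x."""
    return [y[m] for m in occurrences(x, u)]
```

Here `len(occurrences(x, u))` plays the role of $N_n(u)$, and `subsample` extracts $Y_1(u), \ldots, Y_{N_n(u)}(u)$.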
Definition 8.
Given integers $n > L \ge 1$, an admissible context tree of maximal height L for the sample $X_0, \ldots, X_n$ of symbols in A is any context tree τ satisfying:
1. $w \in \tau$ if and only if $\ell(w) \le L$ and $N_n(w) \ge 1$.
2. Any string $u \in A^*$ with $N_n(u) \ge 1$ is a suffix of some $w \in \tau$ or has a suffix $w \in \tau$.
For any pair of integers $1 \le L < n$ and any string $u \in A^*$ with $\ell(u) \le L$, call $I_n(u)$ the set of indexes belonging to $\{\ell(u)-1, \ldots, n\}$ at which the string u appears in the sample $X_0, \ldots, X_n$, that is,
$$I_n(u) = \{\ell(u)-1 \le m \le n : X_{m-\ell(u)+1}^{m} = u\}.$$
Observe that by definition $|I_n(u)| = N_n(u)$. If $I_n(u) = \{m_1, \ldots, m_{N_n(u)}\}$, we set $Y_k(u) = Y_{m_k}$ for each $1 \le k \le N_n(u)$. Thus, $Y_1(u), \ldots, Y_{N_n(u)}(u)$ is the subsample of $Y_0, \ldots, Y_n$ induced by the string u.
Given $u \in A^*$ such that $N_n(u) \ge 1$ and $h \in F$, we define the empirical distribution associated with the projection of the sample $Y_1(u), \ldots, Y_{N_n(u)}(u)$ onto the direction h as
$$\hat Q_n^{u,h}(t) = \frac{1}{N_n(u)} \sum_{m=1}^{N_n(u)} \mathbf{1}_{(-\infty, t]}\big(\langle Y_m(u), h \rangle\big), \quad t \in \mathbb{R},$$
where, for any pair of functions $f, h \in F$,
$$\langle f, h \rangle = \sum_{i=1}^{d} \int_0^T f_i(t)\, h_i(t)\, dt.$$
For a given pair $u, v \in A^*$ with $\max\{\ell(u), \ell(v)\} \le L$ and $h \in F$, the Kolmogorov–Smirnov distance between the empirical distributions $\hat Q_n^{u,h}$ and $\hat Q_n^{v,h}$ is defined by
$$\mathrm{KS}(\hat Q_n^{u,h}, \hat Q_n^{v,h}) = \sup_{t \in \mathbb{R}} \big|\hat Q_n^{u,h}(t) - \hat Q_n^{v,h}(t)\big|.$$
Finally, we define, for any pair $u, v \in A^*$ such that $\max\{\ell(u), \ell(v)\} \le L$ and $h \in F$,
$$D_n^h\big((Y_1(u), \ldots, Y_{N_n(u)}(u)), (Y_1(v), \ldots, Y_{N_n(v)}(v))\big) = \sqrt{\frac{N_n(u)\, N_n(v)}{N_n(u) + N_n(v)}}\; \mathrm{KS}(\hat Q_n^{u,h}, \hat Q_n^{v,h}).$$
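The displays above can be sketched numerically. We read the scaling factor as $\sqrt{N_n(u)N_n(v)/(N_n(u)+N_n(v))}$ and, anticipating the threshold used later, take $c_\alpha = \sqrt{\tfrac12\ln(2/\alpha)}$, the classical two-sample Kolmogorov–Smirnov critical value; both readings and all names below are our own.

```python
import math
import numpy as np

def ks_distance(x, y):
    """KS distance between the empirical CDFs of two 1-D samples."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    fx = np.searchsorted(x, grid, side="right") / len(x)
    fy = np.searchsorted(y, grid, side="right") / len(y)
    return np.abs(fx - fy).max()

def d_stat(proj_u, proj_v):
    """Scaled statistic D_n^h between two projected subsamples."""
    n_u, n_v = len(proj_u), len(proj_v)
    return math.sqrt(n_u * n_v / (n_u + n_v)) * ks_distance(proj_u, proj_v)

def threshold(alpha):
    """KS-type critical value c_alpha = sqrt(0.5 * ln(2/alpha))."""
    return math.sqrt(0.5 * math.log(2.0 / alpha))
```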
Our selection procedure can now be described as follows. Fix an integer $1 \le L < n$ and let $\mathcal{T}_n$ be the largest admissible context tree of maximal height L for the sample $X_0, \ldots, X_n$; largest means that if τ is any other admissible context tree of maximal height L for this sample, then $\tau \preceq \mathcal{T}_n$.
For any string $u \in A^*$ such that $B_{\mathcal{T}_n}(u)$ is a terminal branch, we test the null hypothesis
$$H_0(u):\ \mathcal{L}\big(Y_1(au), \ldots, Y_{N_n(au)}(au)\big) = \mathcal{L}\big(Y_1(bu), \ldots, Y_{N_n(bu)}(bu)\big) \quad \text{for all } au, bu \in B_{\mathcal{T}_n}(u),$$
using the test statistic
$$\Delta_n^W(u) = \max_{a, b \in A} D_n^W\big((Y_1(au), \ldots, Y_{N_n(au)}(au)), (Y_1(bu), \ldots, Y_{N_n(bu)}(bu))\big),$$
where $W = ((W_1(t), \ldots, W_d(t)) : t \in [0,T])$ is a realization of a d-dimensional Brownian motion on $[0, T]$.
We reject the null hypothesis $H_0(u)$ when $\Delta_n^W(u) > c$, where $c > 0$ is a suitable threshold. When $H_0(u)$ is not rejected, we prune the branch $B_{\mathcal{T}_n}(u)$ from $\mathcal{T}_n$ and set as a new candidate context tree
$$\mathcal{T}_n = \big(\mathcal{T}_n \setminus B_{\mathcal{T}_n}(u)\big) \cup \{u\}.$$
On the other hand, if the null hypothesis $H_0(u)$ is rejected, we keep $B_{\mathcal{T}_n}(u)$ in $\mathcal{T}_n$ and stop testing $H_0(s)$ for strings $s \in A^*$ such that $s \prec u$.
At each pruning step, we take a string $s \in A^*$ that induces a terminal branch in $\mathcal{T}_n$ and has not been tested yet. This is repeated until no more pruning can be performed. We denote by $\hat\tau_n$ the final context tree obtained by this procedure. A formal description of the pruning procedure is given as pseudocode in Algorithm 1.
Algorithm 1 Pseudocode describing the pruning procedure used to select the tree $\hat\tau_n$.
Input: a sample $(X_0, Y_0), \ldots, (X_n, Y_n)$ with $X_k \in A$ and $Y_k \in F$ for $0 \le k \le n$, a positive threshold c and a positive integer L.
Output: a tree $\hat\tau_n$.
1: $\tau \leftarrow \mathcal{T}_n$
2: Flag(s) ← “not visited” for all strings s such that $s \prec w$ for some $w \in \mathcal{T}_n$
3: for k = L down to 1 do
4:   while there exists s with $\ell(s) = k$, Flag(s) = “not visited” and $B_\tau(s)$ a terminal branch do
5:     choose a string s such that $\ell(s) = k$, Flag(s) = “not visited” and $B_\tau(s)$ is a terminal branch
6:     compute the test statistic $\Delta_n^W(s)$ to test $H_0(s)$
7:     if $\Delta_n^W(s) > c$ then
8:       Flag(u) ← “visited” for all $u \preceq s$
9:     else
10:      $\tau \leftarrow (\tau \setminus B_\tau(s)) \cup \{s\}$
11:    end if
12:  end while
13: end for
14: return $\hat\tau_n = \tau$
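Algorithm 1 can also be sketched as runnable Python. The set-of-strings tree representation, the candidate ordering and the stub statistic in the demo are our own choices; in practice the statistic would be $\Delta_n^W$ computed from the data.

```python
def prune(T, test_stat, c, L):
    """Sketch of Algorithm 1: prune the maximal admissible tree T.
    T: set of context strings of length <= L (suffix at the right end).
    test_stat: maps a string s to the value of the statistic for H0(s).
    c: rejection threshold; L: maximal height."""
    tree, visited = set(T), set()

    def terminal_branch(s):
        branch = {w for w in tree if w != s and w.endswith(s)}
        if branch and all(len(w) == len(s) + 1 for w in branch):
            return branch
        return None

    for k in range(L - 1, -1, -1):        # suffix lengths k = L-1, ..., 0
        while True:
            candidates = sorted(
                s for s in {w[1:] for w in tree if len(w) == k + 1}
                if s not in visited and terminal_branch(s) is not None
            )
            if not candidates:
                break
            s = candidates[0]
            visited.add(s)
            if test_stat(s) > c:
                # reject H0(s): keep the branch, never test suffixes of s
                visited.update(s[j:] for j in range(1, len(s) + 1))
            else:
                tree = (tree - terminal_branch(s)) | {s}
    return tree

# Toy demo: full height-2 tree over {0,1,2}; the stub statistic rejects
# H0 for the suffixes "0" and "1" and accepts for "2" (assumed values).
full = {a + b for a in "012" for b in "012"}
toy_stat = lambda s: 10.0 if s in ("0", "1") else 0.1
pruned = prune(full, toy_stat, c=1.0, L=2)
```

On this toy input the procedure recovers the context tree of Section 2.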
To state the consistency theorem, we need the following definitions.
Definition 9.
A probability measure P defined on $(F, \mathcal{F})$ satisfies the Carleman condition if all the absolute moments $m_k = \int \|h\|^k\, P(dh)$, $k \ge 1$, are finite and
$$\sum_{k \ge 1} m_k^{-1/k} = +\infty.$$
Definition 10.
Let P be a probability measure on $(F, \mathcal{F})$. We say that P is continuous if $P^h$ is continuous for any $h \in F$, where $P^h$ is defined by
$$P^h((-\infty, t]) = P\big(\{x \in F : \langle x, h \rangle \le t\}\big), \quad t \in \mathbb{R}.$$
Let V be a finite set of indexes and $(P_i : i \in V)$ a family of probability measures on $(F, \mathcal{F})$. We say that $(P_i : i \in V)$ is continuous if, for all $i \in V$, the probability measure $P_i$ is continuous.
In what follows, let $c_\alpha = \sqrt{(1/2)\ln(2/\alpha)}$, where $\alpha \in (0,1)$. We say that $\alpha_n \to 0$ slowly enough as $n \to \infty$ if
$$\sqrt{n}/c_{\alpha_n} \to +\infty \quad \text{as } n \to \infty.$$
Theorem 1.
Let $(X_0, Y_0), \ldots, (X_n, Y_n)$ be a sample produced by an identifiable sequence of random objects driven by a context tree model compatible with $(\bar\tau, \bar p)$ and $(\bar Q_w : w \in \bar\tau)$, and let $\hat\tau_n$ be the context tree selected from the sample by Algorithm 1 with $L \ge \ell(\bar\tau)$ and threshold $c_{\alpha_n} = \sqrt{(1/2)\ln(2/\alpha_n)}$, where $\alpha_n \in (0,1)$. If $(\bar\tau, \bar p)$ is irreducible and $(\bar Q_w : w \in \bar\tau)$ is continuous and satisfies the Carleman condition, then, for $\alpha_n \to 0$ slowly enough as $n \to \infty$,
$$\lim_{n \to \infty} P(\hat\tau_n \neq \bar\tau) = 0.$$
The proof of Theorem 1 is presented in Section 6.

5. Simulation Study

In this section, we illustrate the performance of Algorithm 1 by applying it to a toy example. We consider the context tree model compatible with $(\bar\tau, \bar p)$ described in Section 2 with $\epsilon = 0.2$. For each $w \in \bar\tau$, we assume that $\bar Q_w$ is the law of a diffusion process with drift coefficient $f_w = (f_w(t))_{0 \le t \le 1}$ and constant diffusion coefficient. For simplicity, all diffusion coefficients are taken equal to 1. For each context $w \in \bar\tau$, we assume $f_w = K g_w$, where K is a positive constant and $g_w$ is the density of a Gaussian random variable with mean $\mu_w$ and standard deviation $\sigma_w$, restricted to the interval $[0, 1]$. In the simulation, we take $K = 5$. The shapes of the functions $f_w$ and the corresponding values of $\mu_w$ and $\sigma_w$ are shown in Figure 3. One can check that the assumptions of Theorem 1 are satisfied by this toy example.
To numerically implement Algorithm 1, we assume that all trajectories of the diffusion processes are observed at the equally spaced points $0 = t_0 < t_1 < \cdots < t_{100} = 1$, where $t_i = i/100$ for each $1 \le i \le 100$. For each sample size $n = 100, 120, 140, \ldots, 1000$, we estimate the fraction of times that Algorithm 1, with $\alpha_n = 1/n$ and $L = 4$, correctly identifies the context tree $\bar\tau$, based on 100 random samples of size n from the model. The results are reported in Figure 4.
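A minimal sketch of the trajectory simulation under the stated assumptions (Euler–Maruyama discretization of $dY_t = K g_w(t)\,dt + dB_t$ on $[0,1]$ with unit diffusion coefficient and $K = 5$). The particular values of $\mu_w$ and $\sigma_w$ passed in the test are placeholders, since the actual values are given in Figure 3.

```python
import math
import random

def gaussian_density(t, mu, sigma):
    """Density of a Gaussian with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def simulate_chunk(mu, sigma, K=5.0, n_steps=100, rng=random):
    """Euler-Maruyama path of dY_t = K g(t) dt + dB_t on [0, 1],
    observed at the n_steps + 1 grid points t_i = i / n_steps."""
    dt = 1.0 / n_steps
    y, path = 0.0, [0.0]
    for i in range(n_steps):
        t = i * dt
        y += K * gaussian_density(t, mu, sigma) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(y)
    return path
```

Drawing one such path per stimulus, with the drift parameters of the current context, produces the simulated response chain $(Y_n)$.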

6. Proof of Theorem 1

The proof of Theorem 1 is a direct consequence of Propositions 1 and 2 presented below.
Proposition 1.
Let $(X_0, Y_0), \ldots, (X_n, Y_n)$ be a sample produced by a sequence of random objects driven by a context tree model compatible with $(\bar\tau, \bar p)$ and $(\bar Q_w : w \in \bar\tau)$ satisfying the assumptions of Theorem 1. Let $\alpha \in (0,1)$ and set $c_\alpha = \sqrt{(1/2)\ln(2/\alpha)}$. For any integer $L \ge \ell(\bar\tau)$, context $w \in \bar\tau$, direction $h \in F \setminus \{0\}$, and strings $u, v \in \bigcup_{k=\ell(w)}^{L} A^k$ such that $w \preceq u$ and $w \preceq v$, it holds that
$$\limsup_{n \to \infty} P\big(D_n^h((Y_1(u), \ldots, Y_{N_n(u)}(u)), (Y_1(v), \ldots, Y_{N_n(v)}(v))) > c_\alpha\big) \le \alpha.$$
In particular, for any $\alpha_n \to 0$ as $n \to \infty$, we have
$$\lim_{n \to \infty} P\big(D_n^h((Y_1(u), \ldots, Y_{N_n(u)}(u)), (Y_1(v), \ldots, Y_{N_n(v)}(v))) > c_{\alpha_n}\big) = 0.$$
Proof. 
The irreducibility of $(\bar\tau, \bar p)$ implies that, P-a.s., both $N_n(u)$ and $N_n(v)$ tend to $+\infty$ as n diverges. Thus, Theorem 3.1(a) of [16] implies that the law of $D_n^h((Y_1(u), \ldots, Y_{N_n(u)}(u)), (Y_1(v), \ldots, Y_{N_n(v)}(v)))$ does not depend on the strings u and v, nor on the direction $h \in F \setminus \{0\}$. It also implies that $D_n^h((Y_1(u), \ldots, Y_{N_n(u)}(u)), (Y_1(v), \ldots, Y_{N_n(v)}(v)))$ converges in distribution, as $n \to \infty$, to $K = \sup_{t \in [0,1]} |B(t)|$, where $B = (B(t) : t \in [0,1])$ is a Brownian bridge. Since $P(K > c_\alpha) \le \alpha$, the first part of the result follows.
By the first part of the proof, for any fixed $\alpha \in (0,1)$ and all n large enough,
$$P\big(D_n^h((Y_1(u), \ldots, Y_{N_n(u)}(u)), (Y_1(v), \ldots, Y_{N_n(v)}(v))) > c_\alpha\big) \le 2\alpha.$$
Thus, given $\epsilon > 0$, take $\alpha \in (0,1)$ such that $2\alpha < \epsilon$ to deduce that, for all n large enough,
$$P\big(D_n^h((Y_1(u), \ldots, Y_{N_n(u)}(u)), (Y_1(v), \ldots, Y_{N_n(v)}(v))) > c_\alpha\big) < \epsilon.$$
Since $c_{\alpha_n} \to \infty$ as $n \to \infty$, we have $c_{\alpha_n} > c_\alpha$ for all n sufficiently large, so the result follows from the previous inequality. □
Proposition 2 reads as follows.
Proposition 2.
Let $(X_0, Y_0), \ldots, (X_n, Y_n)$ be a sample produced by an identifiable sequence of random objects driven by a context tree model compatible with $(\bar\tau, \bar p)$ and $(\bar Q_w : w \in \bar\tau)$ satisfying the assumptions of Theorem 1, and let $\alpha_n \in (0,1)$ with $c_{\alpha_n} = \sqrt{(1/2)\ln(2/\alpha_n)}$. For any string $s \in A^*$ such that $B_{\bar\tau}(s)$ is a terminal branch, there exists a pair $w, w' \in B_{\bar\tau}(s)$ such that, for almost all realizations of a Brownian motion $W = (W(t) : t \in [0,T])$ on $[0, T]$,
$$\lim_{n \to \infty} P\big(D_n^W((Y_1(w), \ldots, Y_{N_n(w)}(w)), (Y_1(w'), \ldots, Y_{N_n(w')}(w'))) \le c_{\alpha_n}\big) = 0,$$
whenever $\alpha_n \to 0$ slowly enough as $n \to \infty$.
Proof. 
Since the sequence of random objects $(X_0, Y_0), (X_1, Y_1), \ldots$ is identifiable and $B_{\bar\tau}(s)$ is a terminal branch, there exists a pair $w, w' \in B_{\bar\tau}(s)$ whose associated distributions $\bar Q_w$ and $\bar Q_{w'}$ on F are different, and both $\bar Q_w$ and $\bar Q_{w'}$ satisfy the Carleman condition. For each $n \ge 1$, define
$$\mathcal{N}_n := \sqrt{\frac{N_n(w)\, N_n(w')}{N_n(w) + N_n(w')}}$$
if $\min\{N_n(w), N_n(w')\} \ge 1$; otherwise, we set $\mathcal{N}_n = 0$. The irreducibility of $(\bar\tau, \bar p)$ implies that $n^{-1/2}\mathcal{N}_n \to C$ as $n \to \infty$, P-a.s., where C is a positive constant depending on w and w′.
Now, Theorem 3.1(b) of [16] implies that, for almost all realizations of a Brownian motion W,
$$\liminf_{n \to \infty} \mathrm{KS}(\hat Q_n^{w,W}, \hat Q_n^{w',W}) > 0 \quad P\text{-a.s.}$$
Since
$$\frac{D_n^W\big((Y_1(w), \ldots, Y_{N_n(w)}(w)), (Y_1(w'), \ldots, Y_{N_n(w')}(w'))\big)}{c_{\alpha_n}} = \frac{\sqrt{n}}{c_{\alpha_n}} \cdot \frac{\mathcal{N}_n}{\sqrt{n}} \cdot \mathrm{KS}(\hat Q_n^{w,W}, \hat Q_n^{w',W})$$
and $\alpha_n \to 0$ slowly enough, the result follows. □
Proof of Theorem 1.
Let $C_{\bar\tau}$ be the set of contexts belonging to a terminal branch of $\bar\tau$. Define the events
$$U_n = \bigcup_{w \in C_{\bar\tau}} \{\Delta_n^W(\mathrm{suf}(w)) \le c_{\alpha_n}\} \quad \text{and} \quad O_n = \bigcup_{w \in \bar\tau}\ \bigcup_{s:\, w \preceq s,\ \ell(s) \le L} \{\Delta_n^W(s) > c_{\alpha_n}\}.$$
It follows from the definition of Algorithm 1 that
$$P(\hat\tau_n \neq \bar\tau) \le P(U_n) + P(O_n).$$
Thus, it is enough to prove that, for any $\epsilon > 0$, there exists $n_0 = n_0(\epsilon)$ such that $P(U_n) \le \epsilon/2$ and $P(O_n) \le \epsilon/2$ for all $n \ge n_0$.
By the union bound, we see that
$$P(U_n) \le \sum_{w \in C_{\bar\tau}} P\big(\Delta_n^W(\mathrm{suf}(w)) \le c_{\alpha_n}\big).$$
The sequence of random objects $(X_0, Y_0), (X_1, Y_1), \ldots$ is identifiable. Thus, observing that for each $w \in C_{\bar\tau}$ the set $B_{\bar\tau}(\mathrm{suf}(w))$ is a terminal branch, there exists $w' \in B_{\bar\tau}(\mathrm{suf}(w))$ such that the associated distributions $\bar Q_w$ and $\bar Q_{w'}$ on F are different, and both satisfy the Carleman condition. Since
$$\{\Delta_n^W(\mathrm{suf}(w)) \le c_{\alpha_n}\} \subseteq \{D_n^W\big((Y_1(w), \ldots, Y_{N_n(w)}(w)), (Y_1(w'), \ldots, Y_{N_n(w')}(w'))\big) \le c_{\alpha_n}\}$$
and $\bar\tau$ is finite, Proposition 2 implies that $P(U_n) \to 0$ as $n \to \infty$ if $\alpha_n \to 0$ slowly enough. As a consequence, for any $\epsilon > 0$ there exists $n_0 = n_0(\epsilon)$ such that $P(U_n) \le \epsilon/2$ for all $n \ge n_0$.
Using the union bound again, we have
$$P(O_n) \le \sum_{w \in \bar\tau}\ \sum_{s:\, w \preceq s,\ \ell(s) \le L} P\big(\Delta_n^W(s) > c_{\alpha_n}\big).$$
Observing that $\bar\tau$ and the alphabet A are finite and that
$$\{\Delta_n^W(s) > c_{\alpha_n}\} = \bigcup_{a, b \in A} \{D_n^W\big((Y_1(as), \ldots, Y_{N_n(as)}(as)), (Y_1(bs), \ldots, Y_{N_n(bs)}(bs))\big) > c_{\alpha_n}\},$$
we deduce from Proposition 1 and the inequality above that, for any $\epsilon > 0$, we have $P(O_n) \le \epsilon/2$ for all n large enough. This concludes the proof of the theorem. □

Author Contributions

All authors contributed equally to this work.

Funding

This work is part of USP project Mathematics, computation, language and the brain, FAPESP project Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0), and project Plasticity in the brain after a brachial plexus lesion (FAPERJ grants E26/010002902/2014, E26/010002474/2016). A. Galves and C.D. Vargas are partially supported by CNPq fellowships (grants 311 719/2016-3 and 309560/2017-9, respectively). C.D. Vargas is also partially supported by a FAPERJ fellowship (CNE 202.785/2018). A. Duarte was fully and successively supported by CNPq and FAPESP fellowships (grants 201696/2015-0 and 2016/17791-9). G. Ost was fully and successively supported by CNPq and FAPESP fellowships (grants 201572/2015-0 and 2016/17789-4).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Von Helmholtz, H. Handbuch der Physiologischen Optik; Translated by The Optical Society of America in 1924 from the third German edition, 1910, Treatise on Physiological Optics, Volume III; Leopold Voss: Leipzig, Germany, 1867; Volume 3. [Google Scholar]
  2. Garrido, M.I.; Sahani, M.; Dolan, R.J. Outlier responses reflect sensitivity to statistical structure in the human brain. PLOS Comput. Biol. 2013, 9. [Google Scholar] [CrossRef] [PubMed]
  3. Wacongne, C.; Changeux, J.; Dehaene, S. A Neuronal Model of Predictive Coding Accounting for the Mismatch Negativity. J. Neurosci. 2012, 32, 3665–3678. [Google Scholar] [CrossRef] [PubMed][Green Version]
  4. Rissanen, J. A Universal Data Compression System. IEEE Trans. Inf. Theory 1983, 29, 656–664. [Google Scholar] [CrossRef]
  5. Bühlmann, P.; Wyner, A.J. Variable length Markov chains. Ann. Stat. 1999, 27, 480–513. [Google Scholar] [CrossRef][Green Version]
  6. Csiszár, I.; Talata, Z. Context tree estimation for not necessarily finite memory processes, Via BIC and MDL. IEEE Trans. Inf. Theory 2006, 52, 1007–1016. [Google Scholar] [CrossRef]
  7. Leonardi, F.G. A generalization of the PST algorithm: modeling the sparse nature of protein sequences. Bioinformatics 2006, 22, 1302–1307. [Google Scholar] [CrossRef] [PubMed][Green Version]
  8. Galves, A.; Löcherbach, E. Stochastic chains with memory of variable length. TICSP Ser. 2008, 38, 117–133. [Google Scholar]
  9. Garivier, A.; Leonardi, F. Context tree selection: A unifying view. Stoch. Processes Appl. 2011, 121, 2488–2506. [Google Scholar] [CrossRef][Green Version]
  10. Gallo, S. Chains with unbounded variable length memory: perfect simulation and a visible regeneration scheme. Adv. Appl. Probab. 2011, 43, 735–759. [Google Scholar] [CrossRef][Green Version]
  11. Galves, A.; Galves, C.; García, J.E.; Garcia, N.L.; Leonardi, F. Context tree selection and linguistic rhythm retrieval from written texts. Ann. Appl. Stat. 2012, 6, 186–209. [Google Scholar] [CrossRef]
  12. Galves, A.; Garivier, A.; Gassiat, E. Joint Estimation of Intersecting Context Tree Models. Scand. J. Stat. 2013, 40, 344–362. [Google Scholar] [CrossRef]
  13. Belloni, A.; Oliveira, R.I. Approximate group context tree. Ann. Stat. 2017, 45, 355–385. [Google Scholar] [CrossRef][Green Version]
  14. Fernández, R.; Galves, A. Markov approximations of chains of infinite order. Bull. Braz. Math. Soc. 2002, 33, 295–306. [Google Scholar] [CrossRef]
  15. Duarte, D.; Galves, A.; Garcia, N.L. Markov approximation and consistent estimation of unbounded probabilistic suffix trees. Bull. Braz. Math. Soc. 2006, 37, 581–592. [Google Scholar] [CrossRef]
  16. Cuesta-Albertos, J.A.; Fraiman, R.; Ransford, T. Random projections and goodness-of-fit tests in infinite-dimensional spaces. Bull. Braz. Math. Soc. New Ser. 2006, 37, 477–501. [Google Scholar] [CrossRef]
Figure 1. Graphical representation of the context tree τ.
Figure 2. EEG signals recorded from four electrodes. The sequence of stimuli is indicated in the top horizontal line. Vertical lines indicate the beginning of the successive auditory units; the distance between two successive vertical lines is T > 0. Solid vertical lines indicate the successive occurrence times of the string u. The first yellow strip corresponds to the chunk Y_n(au) associated to the string au; the second yellow strip corresponds to the chunk Y_n(bu) associated to the string bu.
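The chunking described in the caption of Figure 2 can be sketched in code. The following Python function is a hypothetical reconstruction (the function name, the exact window convention, and the alignment of chunks to stimulus units are assumptions, not specified in this excerpt): for a given string w, it collects the signal segments of duration T recorded during the units at which w ends in the stimulus sequence.

```python
def chunks_for_string(signal, stimuli, w, T):
    """Collect the chunks Y_n(w): for every unit n at which the string w
    ends in the stimulus sequence, take the signal segment of length T
    (in samples) recorded during that unit."""
    chunks = []
    for n in range(len(w) - 1, len(stimuli)):
        # Does the string of stimuli ending at unit n equal w?
        if "".join(stimuli[n - len(w) + 1 : n + 1]) == w:
            chunks.append(signal[n * T : (n + 1) * T])
    return chunks
```

For instance, with stimulus sequence "aauau", string "au" and T = 2 samples per unit, the function returns the two segments recorded during the units where "au" ends.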
Figure 3. Functions g_w and the corresponding values of μ_w and σ_w for w ∈ τ = {2, 11, 21, 01, 00, 10, 20}: (top) function g_2; (middle) functions g_21, g_11 and g_01; and (bottom) functions g_20, g_10 and g_00.
Figure 4. Proportion of correct identification of the context tree τ̄ = {2, 01, 11, 21, 20, 10, 00} obtained by applying Algorithm 1 to simulated data with sample sizes n = 100, 120, 140, …, 1000. For sample sizes larger than 200, the proportion of correct identification is at least 95%.
Table 1. Transition probabilities associated with the context tree τ.

Context w | p(0|w) | p(1|w) | p(2|w)
2         | ϵ      | 1 − ϵ  | 0
21        | ϵ      | 1 − ϵ  | 0
20        | ϵ      | 1 − ϵ  | 0
11        | 0      | 0      | 1
10        | 0      | 0      | 1
01        | 0      | 0      | 1
00        | 0      | 0      | 1
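Read together with the context tree τ = {2, 21, 20, 11, 10, 01, 00} of Figure 1, the table defines a stochastic chain with memory of variable length. The following Python sketch simulates such a chain (the value of ϵ, the starting symbol, and all function names are illustrative choices, not taken from the article): at each step the current context is the unique suffix of the past belonging to τ, and the next symbol is drawn from the corresponding row of Table 1.

```python
import random

# Contexts of the tree tau; the parameter eps plays the role of ϵ.
TAU = {"2", "21", "20", "11", "10", "01", "00"}

def probs(context, eps=0.2):
    """Transition probabilities p(.|w) from Table 1."""
    if context in ("2", "21", "20"):
        return {"0": eps, "1": 1.0 - eps, "2": 0.0}
    # Contexts 11, 10, 01 and 00 emit the symbol 2 with probability one.
    return {"0": 0.0, "1": 0.0, "2": 1.0}

def context_of(past):
    """Return the unique suffix of the past that belongs to tau."""
    for k in (1, 2):
        w = "".join(past[-k:])
        if w in TAU:
            return w
    raise ValueError("no context found")

def sample(n, seed=0):
    """Sample n symbols of the chain, started from the context '2'."""
    rng = random.Random(seed)
    seq = ["2"]
    while len(seq) < n:
        p = probs(context_of(seq))
        r = rng.random()
        if r < p["0"]:
            seq.append("0")
        elif r < p["0"] + p["1"]:
            seq.append("1")
        else:
            seq.append("2")
    return "".join(seq)
```

Two structural properties of the table are easy to check on simulated samples: the symbol 2 is never followed by another 2, and any two consecutive symbols in {0, 1} are deterministically followed by a 2.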

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).