Critical Behavior from Deep Dynamics: A Hidden Dimension in Natural Language

We show that in many data sequences - from texts in different languages to melodies and genomes - the mutual information between two symbols decays roughly like a power law with the number of symbols in between the two. In contrast, we prove that Markov/hidden Markov processes generically exhibit exponential decay in their mutual information, which explains why natural languages are poorly approximated by Markov processes. We present a broad class of models that naturally reproduce this critical behavior. They all involve deep dynamics of a recursive nature, as can be approximately implemented by tree-like or recurrent deep neural networks. This model class captures the essence of probabilistic context-free grammars as well as recursive self-reproduction in physical phenomena such as turbulence and cosmological inflation. We derive an analytic formula for the asymptotic power law and elucidate our results in a statistical physics context: 1-dimensional "shallow" models (such as Markov models or regular grammars) will fail to model natural language, because they cannot exhibit criticality, whereas "deep" models with one or more "hidden" dimensions representing levels of abstraction or scale can potentially succeed.


I. INTRODUCTION
Critical behavior, where long-range correlations decay as a power law with distance, has many important physics applications, ranging from phase transitions in condensed matter experiments to turbulence and inflationary fluctuations in our early Universe. It has important applications beyond the traditional purview of physics as well [1][2][3][4][5], including new results which we report in Figure 1: the number of bits of information provided by a symbol about another drops roughly as a power law with distance in sequences as diverse as the human genome, music by Bach, and text in English and French. Why is this, when so many other correlations in nature instead drop exponentially [9]?
Better understanding such statistical properties of natural languages (in the broad sense of information-transmitting sequences) is interesting not only for geneticists, musicologists and linguists, but also for the machine learning community. Consider how your phone can autocorrect your typing, how you can free up disk space with data compression software, and how speech-to-text conversion enables you to talk to digital personal assistants such as Siri, Cortana and Google Now: these technologies all exploit statistical properties of language, and can all be further improved if we can better understand these properties. Such deepened understanding is the goal of the present paper, focusing on critical behavior.

Natural languages are difficult for machines to understand. This has been known at least as far back as Turing, whose eponymous test [14] relies upon this key fact. A tempting explanation is that natural language is something uniquely human. But this is far from a satisfactory explanation, especially given the recent successes of machines at performing tasks as complex and as "human" as playing Jeopardy! [15], chess [16], Atari games [17] and Go [18]. We will show that computer descriptions of language tend to suffer from a much simpler problem that has nothing to do with meaning, understanding or being non-human: they tend to get the basic statistical properties wrong. We will prove that Markov processes, the workhorse for modeling any sequential data with translational symmetry and one of the handful of models that are analytically tractable, fail epically by predicting exponentially decaying mutual information. On the other hand, impressive progress has been made by using deep neural networks for natural language processing (see, e.g., [19][20][21][22]); for recent reviews of deep neural networks, see [23, 24]. Unfortunately, unlike Markov and related n-gram models, these deep networks are often treated like inscrutable black boxes, given their enormous complexity. This has triggered many recent efforts to understand their advantages analytically [25][26][27][28], from a functional [29], topological [30], and geometric [31, 32] perspective; this paper explores their advantages from a statistical physics perspective [33].
We will see that a key reason that currently popular recurrent neural networks with long short-term memory (LSTM) [34] do much better is that they can replicate critical behavior, but that even they can be further improved, since they can under-predict long-range mutual information.
A final goal of this paper is to ameliorate the following problem: machine learning typically involves using something we do not fully understand (neural nets, etc.) to study something we also do not fully understand (English, etc.). If we are ever to understand how some learning algorithm works, we must first understand what we are trying to learn. For this reason, we will construct a simple class of analytically tractable models which qualitatively reproduce (some of) the statistics of natural languages - specifically, critical behavior.

This paper is organized as follows. In Section II, we show how Markov processes exhibit exponential decay in mutual information with scale; we give a rigorous proof of this and other results in a series of appendices. To enable such proofs, we introduce a convenient quantity that we term rational mutual information, which bounds the mutual information and converges to it in the near-independence limit. In Section III, we define a subclass of generative grammars and show that they exhibit critical behavior with power-law decays. We then generalize our discussion using Bayesian nets and relate our findings to theorems in statistical physics. In Section IV, we discuss our results and explain how LSTM RNNs can reproduce critical behavior by emulating our generative grammar model.

II. MARKOV IMPLIES EXPONENTIAL DECAY
For two random variables X and Y, the following definitions of mutual information are all equivalent:

$$I(X,Y) \equiv S(X) + S(Y) - S(X,Y) = D_{KL}\big(P(X,Y)\,\|\,P(X)P(Y)\big) = \sum_{ab} P(a,b)\,\log_B \frac{P(a,b)}{P(a)P(b)}, \qquad (1)$$

where $S \equiv -\langle\log_B P\rangle$ is the Shannon entropy [35] and $D_{KL}$ is the Kullback-Leibler divergence [36] between the joint probability distribution and the product of the individual marginals. If the base of the logarithm is taken to be B = 2, then I(X,Y) is measured in bits. The mutual information can be interpreted as how much one variable knows about the other: I(X,Y) is the reduction in the number of bits needed to specify X once Y is specified. Equivalently, it is the number of encoding bits saved by using the true joint probability P(X,Y) instead of approximating X and Y as independent. It is thus a measure of statistical dependencies between X and Y. Although it is more conventional to measure quantities such as the correlation coefficient ρ in statistics and statistical physics, the mutual information is more suitable for generic data, since it does not require that the variables X and Y be numbers or have any algebraic structure, whereas ρ requires that we be able to multiply X · Y and average. Whereas it makes sense to multiply numbers, it is meaningless to multiply or average two characters such as "!" and "?".
The rest of this paper is largely a study of the mutual information between two random variables that are realizations of a discrete stochastic process, separated by some time interval τ. More concretely, we can think of sequences {X₁, X₂, X₃, ...} of random variables, where each one might take values from some finite alphabet.
For example, if we model English as a discrete stochastic process and take τ = 2, X could represent the first character ("F") in this sentence, whereas Y could represent the third character ("r") in this sentence.
In particular, we start by studying the mutual information function of a Markov process, which is analytically tractable. Let us briefly recapitulate some basic facts about Markov processes (see, e.g., [37] for a pedagogical review). A Markov process is defined by a matrix M of conditional probabilities M_{ab} = P(X_{t+1} = a | X_t = b). Such Markov matrices (also known as stochastic matrices) thus have the properties M_{ab} ≥ 0 and Σ_a M_{ab} = 1. They fully specify the dynamics of the model:

$$\mathbf{p}_{t+1} = \mathbf{M}\,\mathbf{p}_t, \qquad (2)$$

where p_t is a vector with components P(X_t = a) that specifies the probability distribution at time t. Let λ_i denote the eigenvalues of M, sorted by decreasing magnitude: |λ₁| ≥ |λ₂| ≥ |λ₃| ≥ ... All Markov matrices have |λ_i| ≤ 1, which is why blowup is avoided when equation (2) is iterated, and λ₁ = 1, with the corresponding eigenvector giving a stationary probability distribution µ satisfying Mµ = µ. To rule out behavior such as a deterministic cycle 1 → 2 → 1 → 2 → ⋯ that will never converge, we take the Markov process to be aperiodic; we also take it to be irreducible, so that every state can be reached from every other. It is easy to show using the Perron-Frobenius theorem that being irreducible and aperiodic implies |λ₂| < 1.

This section is devoted to the intuition behind the following theorem, whose full proof is given in Appendices A and B. The theorem states roughly that for a Markov process, the mutual information between two points in time t₁ and t₂ decays exponentially for large separation |t₂ − t₁|:

Theorem 1: Let M be a Markov matrix that generates a Markov process. If M is irreducible and aperiodic, then the asymptotic behavior of the mutual information I(t₁, t₂) is exponential decay toward zero for |t₂ − t₁| ≫ 1/γ, with decay rate γ = 2 ln(1/|λ₂|), where λ₂ is the second largest eigenvalue of M. If M is reducible or periodic, I can instead decay to a constant; no Markov process whatsoever can produce power-law decay.

Suppose M is irreducible and aperiodic, so that p_t → µ as t → ∞ as mentioned above. This convergence of one-point statistics, e.g., p_t, has been well studied [37]. However, one can also study higher-order statistics, such as the joint probability distribution for two points in time. For succinctness, let us write P(a,b) ≡ P(X = a, Y = b), where X = X_{t₁}, Y = X_{t₂} and τ ≡ |t₂ − t₁|. We are interested in the asymptotic situation where the Markov process has converged to its steady state, so the marginal distribution P(a) ≡ Σ_b P(a,b) = µ_a, independently of time.
If the joint probability distribution approximately factorizes as P(a,b) ≈ µ_a µ_b for sufficiently large and well-separated times t₁ and t₂ (as we will soon prove), the mutual information will be small. We can therefore Taylor expand the logarithm in equation (1) using ln x ≈ x − 1, which gives

$$I(X,Y) \approx \sum_{ab} \frac{\big[P(a,b) - P(a)P(b)\big]^2}{P(a)P(b)} \equiv I_R(X,Y), \qquad (3)$$

where we have defined the rational mutual information I_R. For comparing the rational mutual information with the usual mutual information, it will be convenient to take e as the base B of the logarithm. We derive useful properties of the rational mutual information in Appendix A.
To mention just one, we note that the rational mutual information is not just asymptotically equal to the mutual information in the limit of near-independence, but it also provides a strict upper bound on it: 0 ≤ I ≤ I R .
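To make these definitions concrete, here is a minimal Python sketch (our own illustration, not part of the original analysis; it assumes only numpy) that computes both quantities from a joint distribution:

```python
import numpy as np

def mutual_information(P, base=2.0):
    """I(X,Y) from a joint distribution P[a, b] = P(X=a, Y=b); base=2 gives bits."""
    P = np.asarray(P, dtype=float)
    px = P.sum(axis=1, keepdims=True)   # marginal P(a)
    py = P.sum(axis=0, keepdims=True)   # marginal P(b)
    prod = px @ py                      # product of marginals P(a)P(b)
    mask = P > 0                        # 0 log 0 = 0 by convention
    return float((P[mask] * np.log(P[mask] / prod[mask])).sum() / np.log(base))

def rational_mutual_information(P):
    """I_R(X,Y) = sum_ab [P(a,b) - P(a)P(b)]^2 / [P(a)P(b)], cf. equation (3)."""
    P = np.asarray(P, dtype=float)
    prod = np.outer(P.sum(axis=1), P.sum(axis=0))
    return float(((P - prod) ** 2 / prod).sum())

# A weakly dependent joint distribution: I (in nats) is bounded above by I_R.
P = np.array([[0.30, 0.20],
              [0.20, 0.30]])
print(mutual_information(P, base=np.e), "<=", rational_mutual_information(P))
```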
Let us without loss of generality take t₂ > t₁. Then iterating equation (2) τ times gives P(b|a) = (M^τ)_{ba}. Since P(a,b) = P(a)P(b|a), we obtain

$$P(a,b) = \mu_a\,(\mathbf{M}^\tau)_{ba}. \qquad (4)$$

We will continue the proof by considering the typical case where the eigenvalues of M are all distinct (non-degenerate) and the Markov matrix is irreducible and aperiodic; we will generalize to the other cases (which form a set of measure zero) in Appendix B. Since the eigenvalues are distinct, we can diagonalize M by writing

$$\mathbf{M} = \mathbf{B}\,\mathbf{D}\,\mathbf{B}^{-1} \qquad (5)$$

for some invertible matrix B and a diagonal matrix D whose diagonal elements are the eigenvalues: D_{ii} = λ_i. Raising equation (5) to the power τ gives M^τ = BD^τB^{−1}, i.e.,

$$(\mathbf{M}^\tau)_{ba} = \sum_c B_{bc}\,\lambda_c^\tau\,(B^{-1})_{ca}. \qquad (6)$$

Since M is non-degenerate, irreducible and aperiodic, 1 = λ₁ > |λ₂| > |λ₃| > ⋯, so all terms except the first in the sum of equation (6) decay exponentially with τ, at a decay rate that grows with c. Defining r = λ₃/λ₂, we have

$$(\mathbf{M}^\tau)_{ba} = \mu_b + \lambda_2^\tau \left[B_{b2}(B^{-1})_{2a} + \mathcal{O}(r^\tau)\right] \equiv \mu_b + \lambda_2^\tau A_{ba}, \qquad (7)$$

where we have made use of the fact that an irreducible and aperiodic Markov process must converge to its stationary distribution for large τ, and we have defined A as the expression in square brackets above, satisfying lim_{τ→∞} A_{ba} = B_{b2}(B^{-1})_{2a}. Note that Σ_b A_{ba} = 0 in order for M to be properly normalized.
Substituting equation (7) into equation (3) and using the facts that Σ_a µ_a = 1 and Σ_b A_{ba} = 0, we obtain

$$I_R(X,Y) = \lambda_2^{2\tau} \left[\sum_{ab} \frac{\mu_a}{\mu_b}\,A_{ba}^2\right], \qquad (8)$$

where the term in brackets asymptotically approaches a constant, since A_{ba} converges to B_{b2}(B^{-1})_{2a}. In summary, we have shown that an irreducible and aperiodic Markov process with non-degenerate eigenvalues cannot produce critical behavior, because the mutual information I(X,Y) ≤ I_R(X,Y) decays exponentially, with decay rate γ = 2 ln(1/|λ₂|). In fact, no Markov process can, as we show in Appendix B.
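The exponential decay predicted by Theorem 1 is easy to verify numerically. The following sketch (again our own illustration, assuming numpy) builds a random irreducible, aperiodic Markov matrix and compares the decay of I_R(τ) with λ₂^{2τ}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 4-state Markov matrix with strictly positive entries, hence
# irreducible and aperiodic; columns sum to 1, matching M[a, b] = P(a | b).
M = rng.random((4, 4)) + 0.1
M /= M.sum(axis=0)

evals, evecs = np.linalg.eig(M)
order = np.argsort(-np.abs(evals))
evals, evecs = evals[order], evecs[:, order]
mu = np.real(evecs[:, 0])
mu /= mu.sum()                      # stationary distribution (eigenvalue 1)
lam2 = np.abs(evals[1])             # second largest eigenvalue

for tau in [1, 2, 5, 10, 20, 40]:
    P = np.linalg.matrix_power(M, tau) * mu[None, :]   # P(a,b) = (M^tau)_{ab} mu_b
    prod = np.outer(P.sum(axis=1), P.sum(axis=0))
    IR = ((P - prod) ** 2 / prod).sum()
    print(tau, IR, lam2 ** (2 * tau))   # their ratio approaches a constant
```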
To hammer the final nail into the coffin of Markov processes as models of critical behavior, we need to close a final loophole. Their fundamental problem is lack of long-term memory, which can be superficially overcome by redefining the state space to include symbols from the past. For example, if the current state is one of n and we wish the process to depend on the last τ symbols, we can define an expanded state space consisting of the n^τ possible sequences of length τ, and a corresponding n^τ × n^τ Markov matrix (or an n^τ × n table of conditional probabilities for the next symbol given the last τ symbols). Although such a model could fit the curves in Figure 1 in theory, it cannot in practice, because M requires far more parameters than there are atoms in our observable universe (∼ 10^78): even for as few as n = 4 symbols and τ = 1000, the Markov process involves over 4^1000 ∼ 10^602 parameters. Scale-invariance aside, we can also see how Markov processes fail simply by considering the structure of text. To model English well, M would need to correctly close parentheses even if they were opened more than τ = 100 characters ago, requiring an M-matrix with more than n^100 parameters, where n > 26 is the number of characters used.
We can significantly generalize Theorem 1 into a theorem about hidden Markov models (HMMs). In an HMM, the observed sequence {X_i} is generated from a hidden Markovian sequence {Y_i}. We can think of an HMM as follows: imagine a machine with an internal state space Y that updates itself according to some Markovian dynamics. The internal dynamics are never observed, but at each time step the machine also produces some output Y_i → X_i, and these outputs form the sequence which we can observe. These models are quite general and are used to model a wealth of empirical data (see, e.g., [38]).
Theorem 2: Let M be a Markov matrix that generates the transitions between hidden states Y_i in an HMM. If M is irreducible and aperiodic, then the asymptotic behavior of the mutual information I(X_{t₁}, X_{t₂}) is exponential decay (or faster) toward zero, with decay rate γ = 2 ln(1/|λ₂|), where λ₂ is the second largest eigenvalue of M.

This theorem is a strict generalization of Theorem 1, since given any Markov process M with corresponding matrix M, we can construct an HMM that reproduces the exact statistics of M by using M as the transition matrix between the Y's and generating X_i from Y_i by simply setting X_i = Y_i with probability 1.
The proof is very similar in spirit to the proof of Theorem 1, so we will just present a sketch here, leaving a full proof to Appendix B. Let G be the Markov matrix of emission probabilities that governs Y_i → X_i. To compute the joint probability between two random variables X_{t₁} and X_{t₂}, we simply compute the joint probability distribution between Y_{t₁} and Y_{t₂}, which again involves a factor of M^τ, and then use two factors of G to convert the joint probability on Y_{t₁}, Y_{t₂} into a joint probability on X_{t₁}, X_{t₂}. These additional two factors of G will not change the fact that there is an exponential decay given by M^τ.
A simple, intuitive bound from information theory (namely the data processing inequality [37]) gives I(X_{t₁}, X_{t₂}) ≤ I(Y_{t₁}, Y_{t₂}). However, Theorem 1 implies that I(Y_{t₁}, Y_{t₂}) decays exponentially. Hence I(X_{t₁}, X_{t₂}) must also decay at least exponentially fast.
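A small numerical illustration of this sketch (our own, assuming numpy): the two factors of G merely rescale the exponential envelope set by M^τ.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_stochastic(rows, cols):
    A = rng.random((rows, cols)) + 0.1
    return A / A.sum(axis=0)

def rational_mi(P):
    prod = np.outer(P.sum(axis=1), P.sum(axis=0))
    return ((P - prod) ** 2 / prod).sum()

M = random_stochastic(3, 3)   # hidden transitions P(Y_{t+1} | Y_t)
G = random_stochastic(5, 3)   # emissions P(X_t | Y_t)

evals, evecs = np.linalg.eig(M)
order = np.argsort(-np.abs(evals))
mu = np.real(evecs[:, order[0]])
mu /= mu.sum()                          # stationary distribution of M
lam2 = np.abs(evals[order[1]])

for tau in [1, 2, 5, 10, 20]:
    PY = np.linalg.matrix_power(M, tau) * mu[None, :]  # joint of (Y_{t+tau}, Y_t)
    PX = G @ PY @ G.T                                  # joint of (X_{t+tau}, X_t)
    print(tau, rational_mi(PX), rational_mi(PY), lam2 ** (2 * tau))
```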
There is a well-known correspondence between so-called probabilistic regular grammars [39] (sometimes referred to as stochastic regular grammars) and HMMs. Given a probabilistic regular grammar, one can generate an HMM that reproduces all statistics, and vice versa. Hence we can restate Theorem 2 as follows:

Corollary: No probabilistic regular grammar exhibits criticality.
In the next section, we will show that this statement is not true for context-free grammars.

III. POWER LAWS FROM GENERATIVE GRAMMAR
If computationally feasible Markov processes cannot produce genomes, melodies, texts or other sequences with roughly critical behavior, then how do such sequences arise? What sort of alternative processes can generate them? This question is not only interesting theoretically, but also important practically: models which can approximate such critical sequences could explain why some machine learning algorithms perform better than others in tasks such as language processing, and might suggest ways to improve existing algorithms. In the best-case scenario, theoretical models may even shed light on how human brains can efficiently generate English sentences without storing googols of parameters.
One answer, advanced by Chomsky [40] and others, that will loosely inspire our work is the idea of deep structure.

Roughly speaking, the idea is that language is generated in a hierarchical rather than linear fashion. When we write an essay, we do so by thinking about some big idea, and then breaking it into sub-ideas, and sub-sub-ideas, etc. Similarly, we generate a sentence by first choosing its basic structure and then fleshing out each part of the sentence with modifiers, etc. For example, the two sentences "Bob loves Alice" and "Alice is loved by Bob" are "close" in meaning, but there could be nothing similar about them if English were Markovian. On the other hand, if English is generated hierarchically, these sentences might be close in the sense that they diverged close to the leaves of the generative tree.

A. A simple recursive grammar model
We can formalize the above considerations by giving production rules for a toy language L over an alphabet A.
In the parlance of theoretical linguistics, our language is generated by a stochastic or probabilistic context-free grammar (PCFG) [41][42][43][44]. We will discuss the relationship between our model and a generic PCFG in Section III C. The language is defined by how a native speaker of L produces sentences: first, she draws one of the |A| characters from some probability distribution µ on A. She then takes this character x₀ and replaces it with q new symbols, drawn from a probability distribution P(b|a), where a ∈ A is the parent symbol and b ∈ A is any one of the q child symbols. This is repeated over and over. After u steps, she has a sentence of length q^u. A minimal sketch of this generative process is given below.

One can ask for the character statistics of the sentence at production step u given the statistics of the sentence at production step u − 1. The character distribution simply evolves as

$$\mathbf{p}_u = \mathbf{G}\,\mathbf{p}_{u-1}, \qquad (9)$$

where G is the matrix of conditional probabilities G_{ba} = P(b|a). Of course, this equation does not imply that the process is a Markov process when the sentences are read left to right. To characterize the statistics as read from left to right, we really want to compute the statistical dependencies within a given sequence, e.g., at fixed u.
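The following is a minimal sketch of such a speaker (our own illustration, assuming numpy), for the weakly correlated model where each child is drawn independently given its parent:

```python
import numpy as np

rng = np.random.default_rng(2)

def deep_grammar_sequence(G, mu, q, depth):
    """One 'sentence' of length q**depth from the recursive grammar.

    G[b, a] = P(child = b | parent = a). Starting from a single symbol
    drawn from mu, every symbol is replaced by q children drawn
    independently from its column of G, repeated `depth` times.
    """
    seq = [rng.choice(len(mu), p=mu)]
    for _ in range(depth):
        seq = [rng.choice(G.shape[0], p=G[:, a]) for a in seq for _ in range(q)]
    return np.array(seq)

# Two symbols; a child copies its parent 90% of the time.
G = np.array([[0.9, 0.1],
              [0.1, 0.9]])
mu = np.array([0.5, 0.5])
x = deep_grammar_sequence(G, mu, q=2, depth=16)   # length 2**16 = 65536
print(len(x), x[:20])
```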
To see that the mutual information decays like a power law rather than exponentially with separation, consider two random variables X and Y separated by τ. One can ask how many generations took place between X and the nearest common ancestor of X and Y. Typically, this will be about log_q τ generations. Hence in the tree graph shown in Figure 2, which illustrates the special case q = 2, the number of edges ∆ between X and Y is about 2 log_q τ. Hence by the previous result for Markov processes, we expect an exponential decay of the mutual information in the variable ∆ ≈ 2 log_q τ. This means that I(X,Y) should be of the form

$$I(X,Y) \propto e^{-\gamma\Delta} \approx e^{-2\gamma\log_q\tau} = \tau^{-2\gamma/\ln q}, \qquad (10)$$

where γ is controlled by the second-largest eigenvalue of G, the matrix of conditional probabilities P(b|a). But this exponential decay in ∆ is exactly a power-law decay in τ! This intuitive argument is transformed into a rigorous proof in Appendix C.
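One can check this prediction directly on the sequence x generated by the sketch above: plug-in estimates of I(X_t, X_{t+d}) fall roughly on a straight line in a log-log plot. The mi_profile helper below is our own hypothetical utility (assuming numpy), not the authors' measurement code:

```python
import numpy as np

def mi_profile(x, distances, base=2.0):
    """Plug-in estimates of I(X_t, X_{t+d}) in bits for each d in distances."""
    x = np.asarray(x)
    _, ix = np.unique(x, return_inverse=True)
    k = ix.max() + 1
    out = []
    for d in distances:
        a, b = ix[:-d], ix[d:]
        P = np.zeros((k, k))
        np.add.at(P, (a, b), 1.0)        # count symbol pairs at separation d
        P /= P.sum()
        prod = np.outer(P.sum(axis=1), P.sum(axis=0))
        mask = P > 0
        out.append(float((P[mask] * np.log(P[mask] / prod[mask])).sum() / np.log(base)))
    return out

# With x from the previous sketch: roughly constant slope in log-log.
ds = [1, 2, 4, 8, 16, 32, 64, 128]
print(list(zip(ds, mi_profile(x, ds))))
```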

B. Further Generalization: strongly correlated characters in words
In the model we have been describing so far, all nodes emanating from the same parent can be freely permuted, since they are conditionally independent given the parent. In this sense, characters within a newly generated word are uncorrelated. We call models with this property weakly correlated. There are still arbitrarily large correlations between words, but not inside of words. If a weakly correlated grammar allows a → ab, it must allow a → ba with the same probability. We now wish to relax this property to allow for the strongly correlated case, where variables may not be conditionally independent given the parents. This allows us to take a big step towards modeling realistic languages: in English, "god" differs significantly in meaning and usage from "dog".
In the previous computation, the crucial ingredient was the joint probability P(a,b) = P(X = a, Y = b). Let us start with a seemingly trivial remark. This joint probability can be re-interpreted as a conditional joint probability: instead of X and Y being random variables at specified sites t₁ and t₂, we can view them as random variables at randomly chosen locations, conditioned on their locations being t₁ and t₂. Somewhat pedantically, we write P(a,b) = P(a,b|t₁,t₂). This clarifies the important fact that the only way that P(a,b|t₁,t₂) depends on t₁ and t₂ is via a dependence on the causal distance ∆(t₁,t₂). Hence

$$P(a,b|t_1,t_2) = P(a,b|\Delta). \qquad (11)$$

This equation is specific to weakly correlated models and does not hold for generic strongly correlated models.
In computing the mutual information as a function of separation, the relevant quantity is the right-hand side of equation (11). The reason is that in practical scenarios, we estimate probabilities by sampling pairs of symbols in a sequence at fixed separation, which corresponds to measuring P(a,b|∆). Now whereas P(a,b|t₁,t₂) will change when strong correlations are introduced, P(a,b|∆) will retain a very similar form. This can be seen as follows: knowledge of the geodesic distance corresponds to knowledge of how high up the closest parent node is in the hierarchy (see Figure 2). Imagine flowing down from the parent node to the leaves. We start with the stationary distribution µ_i at the parent node. At the first layer below the parent node (corresponding to a causal distance ∆ − 2), we get Q_{rr'} ≡ P(rr') = Σ_i P_S(rr'|i) P(i), where the symmetrized probability P_S(rr'|i) = ½[P(rr'|i) + P(r'r|i)] comes into play because knowledge of the fact that r, r' are separated by ∆ − 2 gives no information about their order. To continue this process to the second stage and beyond, we only need the matrix G_{sr} = P(s|r) = Σ_{s'} P_S(ss'|r). The reason is that since we only wish to compute the two-point function at the bottom of the tree, the only place where a three-point function is ever needed is at the very top of the tree, where a single parent is taken into two child nodes. After that, the computation only involves evolving a child node into a grandchild node, and so forth. Hence the overall two-point probability matrix P(ab|∆) is given by the simple equation

$$P(a,b|\Delta) = \sum_{rs}\big(\mathbf{G}^{\Delta/2-1}\big)_{ar}\,Q_{rs}\,\big(\mathbf{G}^{\Delta/2-1}\big)_{bs}. \qquad (12)$$

As we can see from the above formula, the strongly correlated case essentially reduces to the weakly correlated case, except for a perturbation near the top of the tree. We can think of the generalization as equivalent to the old model with a different initial condition. We thus expect on intuitive grounds that the model will still exhibit power-law decay. This intuition is correct, as we will prove rigorously in Appendix C; the sketch below illustrates the computation numerically.
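The sketch below (our own illustration, assuming numpy) evaluates equation (12) for a weakly correlated Q and a strongly correlated Q: both decay with the same power of λ₂, differing only in normalization.

```python
import numpy as np

def rational_mi(P):
    prod = np.outer(P.sum(axis=1), P.sum(axis=0))
    return ((P - prod) ** 2 / prod).sum()

def two_point(G, Q, Delta):
    """P(a,b|Delta) = G^{Delta/2-1} Q (G^{Delta/2-1})^T, cf. equation (12)."""
    Gk = np.linalg.matrix_power(G, Delta // 2 - 1)
    return Gk @ Q @ Gk.T

G = np.array([[0.9, 0.1],
              [0.1, 0.9]])
mu = np.array([0.5, 0.5])
Q_weak = G @ np.diag(mu) @ G.T        # children independent given the parent
Q_strong = np.array([[0.35, 0.15],    # correlated children; rows/columns still
                     [0.15, 0.35]])   # sum to mu, as required
for Delta in [4, 8, 16, 32]:
    print(Delta, rational_mi(two_point(G, Q_weak, Delta)),
          rational_mi(two_point(G, Q_strong, Delta)))
```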
C. Further Generalization: Bayesian networks and context-free grammars

Just how generic is the scaling behavior of our model? What if the length of the words is not constant? What about more complex dependencies between layers? If we retrace the derivation in the above arguments, it becomes clear that the only key feature of all of our models considered so far is that the rational mutual information decays exponentially with the causal distance ∆:

$$I_R \propto e^{-\gamma\Delta}. \qquad (13)$$

This is true for (hidden) Markov processes and the hierarchical grammar models that we have considered above. So far we have defined ∆ in terms of quantities specific to these models; for a Markov process, ∆ is simply the time separation. Can we define ∆ more generically? In order to do so, let us make a brief aside about Bayesian networks. Formally, a Bayesian net is a directed acyclic graph (DAG), where the vertices are random variables and conditional dependencies are represented by the arrows. Now instead of thinking of X and Y as living at certain times (t₁, t₂), we can think of them as living at vertices (i, j) of the graph.
We define ∆(i,j) as follows. Since the Bayesian net is a DAG, it is equipped with a partial order ≤ on vertices: we write k ≤ l iff there is a directed path from k to l, in which case we say that k is an ancestor of l. We define L(k,l) to be the number of edges on the shortest directed path from k to l. Finally, we define the causal distance ∆(i,j) to be

$$\Delta(i,j) \equiv \min_{k \le i,\; k \le j}\big[L(k,i) + L(k,j)\big]. \qquad (14)$$

It is easy to see that this reduces to our previous definition of ∆ for Markov processes and recursive generative trees (see Figure 2).
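As a concrete illustration, here is a small self-contained sketch (ours, in plain Python) that computes ∆(i,j) on any DAG by breadth-first search over ancestors:

```python
from collections import deque

def causal_distance(edges, i, j):
    """Delta(i, j) = min over common ancestors k of L(k, i) + L(k, j),
    where L is the shortest directed path length in the DAG."""
    parents = {}                         # reverse adjacency: edges into v
    for u, v in edges:
        parents.setdefault(v, []).append(u)

    def dists_to_ancestors(v):
        d = {v: 0}                       # a node counts as its own ancestor
        queue = deque([v])
        while queue:
            x = queue.popleft()
            for p in parents.get(x, []):
                if p not in d:
                    d[p] = d[x] + 1
                    queue.append(p)
        return d

    di, dj = dists_to_ancestors(i), dists_to_ancestors(j)
    common = set(di) & set(dj)
    return min(di[k] + dj[k] for k in common) if common else None

# The q = 2 tree of Figure 2: node 0 has children 1, 2; node 1 has 3, 4; ...
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
print(causal_distance(edges, 3, 4))  # siblings: Delta = 2
print(causal_distance(edges, 3, 6))  # opposite subtrees: Delta = 4
```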
Is it true that our exponential decay result from equation (13) holds even for a generic Bayesian net? The answer is yes, under a suitable approximation. The approximation is to ignore long paths in the network when computing the mutual information. In other words, the mutual information tends to be dominated by the shortest paths via a common ancestor, whose length is ∆. This is generally a reasonable approximation, because longer paths give exponentially weaker correlations, so unless the number of paths increases exponentially (or faster) with length, the overall scaling will not change.
With this approximation, we can state a key finding of our theoretical work. Deep models are important because without the extra "dimension" of depth/abstraction, there is no way to construct "shortcuts" between random variables separated by large amounts of time using only short-range interactions; 1D models are doomed to exponential decay. Hence the ubiquity of power laws goes some way toward explaining the success of deep learning. In fact, this can be seen as the Bayesian net version of the important result in statistical physics that there are no phase transitions in 1D [46, 47].
There are close analogies between our deep recursive grammar and more conventional physical systems. For example, according to the emerging standard model of cosmology, there was an early period of cosmological inflation when density fluctuations kept getting added on a fixed scale as space itself underwent repeated doublings, combining to produce an excellent approximation to a power-law correlation function. This inflationary process is simply a special case of our deep recursive model (generalized from 1 to 3 dimensions). In this case, the hidden "depth" dimension in our model corresponds to cosmic time, and the time parameter which labels the place in the sequence of interest corresponds to space. A similar physical analogy is turbulence in a fluid, where energy in the form of vortices cascades from large scales to ever smaller scales through a recursive process in which larger vortices create smaller ones, leading to a scale-invariant power spectrum. There is also a close analogy to quantum mechanics: equation (13) expresses the exponential decay of the mutual information with geodesic distance through the Bayesian network; in quantum mechanics, the correlation function of a many-body system decays exponentially with the geodesic distance defined by the tensor network which represents the wavefunction [48].
It is also worth examining our model using techniques from linguistics. A generic PCFG G consists of three ingredients:

1. An alphabet Ā = A ∪ T, which consists of non-terminal symbols A and terminal symbols T.

2. A set of production rules of the form a → B, where the left-hand side a ∈ A is always a single non-terminal character and B is a string consisting of symbols in Ā.

3. Probabilities associated with each production rule.
It is a remarkable fact that any stochastic context-free grammar can be put in Chomsky normal form [43, 49]. This means that given G, there exists some other grammar Ḡ such that all the production rules are either of the form a → bc or a → α, where a, b, c ∈ A and α ∈ T, and such that the corresponding languages agree: L(G) = L(Ḡ).
In other words, given some complicated grammar G, we can always find a grammar Ḡ such that the corresponding statistics of the languages are identical and all the production rules replace a symbol by at most two symbols (at the cost of increasing the number of production rules in Ḡ).
This formalism allows us to strengthen our claims. Our model with a branching factor q = 2 is precisely the class of all context-free grammars that are generated by production rules of the form a → bc. While this might naively seem like a very small subset of all possible context-free grammars, the fact that any context-free grammar can be converted into Chomsky normal form shows that our theory deals with a generic context-free grammar, except for the additional step of producing terminal symbols from non-terminal symbols. Starting from a single symbol, the deep dynamics of the PCFG in normal form are given by a strongly correlated branching process with q = 2, which proceeds for a characteristic number of productions before terminal symbols are produced. Before most symbols have been converted to terminal symbols, our theory applies, and power-law correlations will exist amongst the non-terminal symbols.
To the extent that the terminal symbols that are then produced from non-terminal symbols reflect the correlations of the non-terminal symbols, we expect context-free grammars to be able to produce power law correlations.
From our corollary to Theorem 2, we know that regular grammars cannot exhibit power-law decays in mutual information. Hence context-free grammars are the simplest grammars which support criticality, i.e., they are the lowest level of the Chomsky hierarchy that supports criticality. Note that our corollary to Theorem 2 also implies that not all context-free grammars exhibit criticality, since regular grammars are a strict subset of context-free grammars. Whether one can formulate an even sharper criterion should be the subject of future work.

IV. DISCUSSION
We have shown that many data sequences generated for the purpose of communication - from English and French text to Bach and the human genome - exhibit critical behavior, where the mutual information between symbols decays roughly like a power law with separation. By introducing a quantity we term rational mutual information, we have proved that (hidden) Markov processes generically exhibit exponential decay, whereas deep generative grammars exhibit power-law decays. This explains why natural languages are poorly approximated by Markov processes, but better approximated by the deep recurrent neural networks now widely used in machine learning for natural language processing, as we will discuss in detail below. Furthermore, we have identified a crucial ingredient of any successful natural language model: it must have one or more "hidden" dimensions, which can be used to provide shortcuts between distant parts of the network, leading to the longer-range correlations that are crucial for critical behavior.
Let us now explore some useful implications of these results, both for understanding the success of certain neural network architectures and for using the mutual information function as a tool for validating machine learning algorithms.

A. Connection to Recurrent Neural Networks
While the generative grammar model is appealing from a linguistic perspective, it may superficially appear to have little to do with machine learning algorithms that are implemented in practice. However, as we will now see, this model can in fact be viewed as an idealized version of a long short-term memory (LSTM) recurrent neural network (RNN) that is generating ("hallucinating") a sequence.
First of all, Figure 4 shows that an LSTM RNN can in fact reproduce critical behavior.In this example, we trained an RNN (consisting of three hidden LSTM layers of size 256 as described in [20]) to predict the next character in the 100MB Wikipedia sample known as enwik8 [12].We then used the LSTM to hallucinate 1 MB of text and measured the mutual information as a function of distance.Figure 4 shows that not only is the resulting mutual information function a rough power law, but it also has a slope that is relatively similar to the original.
We can understand this success by considering a simplified model that is less powerful and complex than a full LSTM, but retains some of its core features -such an approach to studying deep neural nets has proved fruitful in the past (e.g., [27]).
The usual implementation of an LSTM consists of multiple cells stacked one on top of another. Each cell of the LSTM (depicted as a yellow circle in Figure 3) has a state that is characterized by a matrix of numbers C_t and is updated according to the rule

$$C_t = f_t \circ C_{t-1} + D_t, \qquad (15)$$

where ∘ denotes element-wise multiplication, f_t is the forget gate, and D_t is some function of the input x_t from the cell in the layer above (denoted by downward arrows in Figure 3), the details of which do not concern us. Generically, a graph of this picture would look like a rectangular lattice, with each node having an arrow to its right (corresponding to the first term in the above equation) and an arrow from above (corresponding to the second term in the equation). However, if the forget weights f decay rapidly with depth (e.g., as we go from the bottom cell towards the top), so that the timescales for forgetting grow exponentially, we will show that a reasonable approximation to the dynamics is given by Figure 3.
If we neglect the dependency of D_t on C_{t−1}, the forget gate leads to exponential decay of the initial cell state, e.g., C_t = f^t ∘ C₀; this is how LSTMs forget their past. Note that all operations, including exponentiation, are performed element-wise in this section only.
In general, a cell will smoothly forget its past over a timescale of τ_f ≡ 1/log(1/f). On timescales ≫ τ_f, the cells are weakly correlated; on timescales ≪ τ_f, the cells are strongly correlated. Hence a discrete approximation to the above equation is the following:

$$C_t = \begin{cases} C_{t-1} & \text{if } t \text{ is not a multiple of } \tau_f, \\ \text{drawn from } P(C_t\,|\,C_{t-1}) & \text{otherwise.} \end{cases}$$

This simple approximation leads us right back to the hierarchical grammar! The first line of the above equation is labeled "remember" in Figure 3, and the second line is what we refer to as "Markov," since the next state depends only on the previous one. Since each cell perfectly remembers its previous state for τ_f time steps, the tree can be reorganized so that it is exactly of the form shown in Figure 3, by omitting nodes which simply copy the previous state. Now supposing that τ_f grows exponentially with depth, τ_f(layer i) ∝ q τ_f(layer i+1), we see that the successive layers become exponentially sparse, which is exactly what happens in our deep grammar model, identifying the parameter q, governing the growth of the forget timescale, with the branching parameter of the deep grammar model. (Compare Figure 2 and Figure 3.) A toy simulation of this approximation is sketched below.
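Here is a toy implementation of this idealization (our own sketch, assuming numpy; the update rule and parameters are illustrative placeholders, not the trained network of Figure 4): each layer holds its state for an exponentially growing number of steps and, when it updates, applies a two-state Markov rule conditioned on the layer above.

```python
import numpy as np

rng = np.random.default_rng(3)

def hallucinate(n_layers, q, steps, p_same=0.9):
    """Idealized stacked LSTM: layer k updates only every q**(n_layers-1-k)
    steps ('remember'); when it does update, it copies the layer above with
    probability p_same and flips otherwise ('Markov')."""
    state = rng.integers(0, 2, size=n_layers)   # state[0] = top, slowest layer
    out = []
    for t in range(1, steps + 1):
        for k in range(n_layers):
            if t % q ** (n_layers - 1 - k) == 0:
                parent = state[k - 1] if k > 0 else state[0]
                state[k] = parent if rng.random() < p_same else 1 - parent
        out.append(state[-1])                   # bottom layer emits the symbol
    return np.array(out)

x = hallucinate(n_layers=12, q=2, steps=2 ** 14)
print(x[:32])   # feeding x to an MI-vs-distance estimator shows slow decay
```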

B. A new diagnostic for machine learning
How can one tell whether a neural network can be further improved? For example, an LSTM RNN similar to the one we used in Figure 4 can predict Wikipedia text with a residual entropy of ∼ 1.4 bits/character [20], which is very close to the performance of current state-of-the-art custom compression software, which achieves ∼ 1.3 bits/character [50]. Is that essentially the best compression possible, or can significant improvements be made?
Our results provide a powerful diagnostic for shedding further light on this question: measuring the mutual information as a function of separation between symbols is a computationally efficient way of extracting much more meaningful information about the performance of a model than simply evaluating the loss function, which is usually given by the conditional entropy H(X_t | X_{t−1}, X_{t−2}, ...).
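In practice the diagnostic is a few lines of code. A hypothetical usage, reusing the mi_profile sketch from Section III (file names and the hallucinated sample are placeholders, not the authors' pipeline):

```python
import numpy as np

# mi_profile as defined in the earlier sketch (Section III).
corpus = np.frombuffer(open("enwik8", "rb").read(), dtype=np.uint8)
fake = np.frombuffer(open("hallucinated.txt", "rb").read(), dtype=np.uint8)
ds = [1, 2, 4, 8, 16, 32, 64, 128, 256]
for d, i_real, i_fake in zip(ds, mi_profile(corpus, ds), mi_profile(fake, ds)):
    # A shortfall of i_fake at large d flags missing long-range structure,
    # even when the per-character loss looks competitive.
    print(d, i_real, i_fake)
```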
Figure 4 shows that even with just three layers, the LSTM-RNN is able to learn long-range correlations: the slope of the mutual information of hallucinated text is comparable to that of the training set. However, the figure also shows that the predictions of our LSTM-RNN are far from optimal. Interestingly, the hallucinated text shows about the same mutual information at distances of order unity, but significantly less mutual information at large separation. Without requiring any knowledge about the true entropy of the input text (which is famously NP-hard to compute), this figure immediately shows that the LSTM-RNN we trained is performing sub-optimally; it is not able to capture all the long-term dependencies found in the training data.
As a comparison, we also calculated the bigram transition matrix P(X₃X₄ | X₁X₂) from the data and used it to hallucinate 1 MB of text. Despite the fact that this higher-order Markov model needs ∼ 10³ times more parameters than our LSTM-RNN, it captures less than a fifth of the mutual information captured by the LSTM-RNN, even at modest separations d ∼ 5.
In summary, Figure 4 shows both the successes and shortcomings of machine learning. On the one hand, LSTM-RNNs can capture long-range correlations much more efficiently than Markovian models; on the other hand, they cannot even match the two-point functions of the training data, never mind higher-order statistics! One might wonder how the lack of mutual information at large scales for the bigram Markov model is manifested in the hallucinated text. Below we give a line from the Markov hallucinations:

[[computhourgist, Flagesernmenserved whirequotes or thand dy excommentaligmaktophy as its:Fran at ||&lt;If ISBN 088;&ampategor and on of to [[Prefung]]' and at them rector>

This can be compared with an example from the LSTM RNN: despite using many fewer parameters, the LSTM manages to produce a realistic-looking URL and is able to close brackets correctly [51], something that the Markov model struggles with.

FIG. 4: Diagnosing different models by hallucinating text and then measuring the mutual information as a function of separation. The red line is the mutual information of enwik8, a 100 MB sample of English Wikipedia. In shaded blue is the mutual information of hallucinated Wikipedia from a trained LSTM with 3 layers of size 256. We plot in solid black the mutual information of a Markov process on single characters, which we compute exactly. (This would correspond to the mutual information of hallucinations in the limit where the length of the hallucinations goes to infinity.) This curve shows a sharp exponential decay after a distance of ∼ 10, in agreement with our theoretical predictions. We also measured the mutual information for hallucinated text from a Markov process on bigrams, which still underperforms the LSTM in long-range correlations, despite having ∼ 10³ times more parameters.

C. Outlook
Although great challenges remain in accurately modeling natural languages, our results at least allow us to improve on some earlier answers to key questions we sought to address:

1. Why is natural language so hard? The old answer was that language is uniquely human. The new answer is that at least part of the difficulty is that natural language is a critical system, with long-range correlations that are difficult for machines to learn.
2. Why are machines bad at natural languages, and why are they good?The old answer is that Markov models are simply not brain/human-like, whereas neural nets are more brain-like and hence better.
The new answer is that Markov models or other 1-dimensional models cannot exhibit critical behavior, whereas neural nets and other deep models (where an extra hidden dimension is formed by the layers of the network) are able to exhibit critical behavior.
3. How can we know when machines are bad or good?
The old answer is to compute the loss function.
The new answer is to also compute the mutual information as a function of separation, which can immediately show how well the model is doing at capturing correlations on different scales.
Future studies could include more comprehensive measurements from multiple language/music corpuses and other one-dimensional data. In addition, more theoretical work is required to explain why natural languages have slopes that are all comparable (∼ 1/2). In statistical physics, "coincidences" of this sort are usually signs of universality: many seemingly unrelated systems have the same long-wavelength effective field theory at the critical point and hence display the same power-law slopes. Perhaps something similar is happening here, though this is unexpected from the deep generative grammar model, where any power-law slope is possible.
Appendix B: Proofs for general Markov processes

The degenerate case

First, we consider the case where the Markov matrix M has degenerate eigenvalues. In this case, we cannot guarantee that M can be diagonalized. However, any complex matrix can be put into Jordan normal form. In Jordan normal form, a matrix is block diagonal, with each d × d block corresponding to an eigenvalue with degeneracy d. These blocks have a particularly simple form, with block i having λ_i on the diagonal and ones right above the diagonal. For example, if there are only three distinct eigenvalues and λ₂ is threefold degenerate, the Jordan form of M would be

$$\mathbf{M}_J = \begin{pmatrix} 1 & & & & \\ & \lambda_2 & 1 & & \\ & & \lambda_2 & 1 & \\ & & & \lambda_2 & \\ & & & & \lambda_3 \end{pmatrix}.$$

Note that the largest eigenvalue is unique and equal to 1 for all irreducible and aperiodic M. In this example, the matrix power M_J^τ is

$$\mathbf{M}_J^\tau = \begin{pmatrix} 1 & & & & \\ & \lambda_2^\tau & \binom{\tau}{1}\lambda_2^{\tau-1} & \binom{\tau}{2}\lambda_2^{\tau-2} & \\ & & \lambda_2^\tau & \binom{\tau}{1}\lambda_2^{\tau-1} & \\ & & & \lambda_2^\tau & \\ & & & & \lambda_3^\tau \end{pmatrix}.$$

In the general case, raising a matrix to an arbitrary power will yield a matrix which is still block diagonal, with each block being an upper triangular matrix. The important point is that in block i, every entry scales ∝ λ_i^τ, up to a combinatorial factor. Each combinatorial factor grows only polynomially with τ, with the degree of the polynomials in the ith block bounded by the multiplicity of λ_i minus one.
Using this Jordan decomposition M = BJB⁻¹, we can replicate equation (7) and write

$$(\mathbf{M}^\tau)_{ba} = \big(\mathbf{B}\,\mathbf{J}^\tau\,\mathbf{B}^{-1}\big)_{ba} = \mu_b + \text{(subleading terms)}.$$

There are two cases, depending on whether the second eigenvalue λ₂ is degenerate or not. If not, then the equation lim_{τ→∞} A_{ba} = B_{b2}(B⁻¹)_{2a} still holds, since for i ≥ 3, (λ_i/λ₂)^τ times any polynomial of finite degree still decays to zero. On the other hand, if the second eigenvalue is degenerate with multiplicity m₂, we instead define A with the leading combinatorial factor removed:

$$A_{ba}(\tau) \equiv \frac{(\mathbf{M}^\tau)_{ba} - \mu_b}{\binom{\tau}{m_2-1}\,\lambda_2^{\tau-m_2+1}}.$$

If m₂ = 1, this definition simply reduces to the previous definition of A; with this definition, lim_{τ→∞} A_{ba} exists and is finite. Hence in the most general case, the mutual information decays like a polynomial times an exponential, P(τ)e^{−γτ}, where γ = 2 ln(1/λ₂). The polynomial is non-constant if and only if the second largest eigenvalue is degenerate. Note that even in this case, the mutual information decays exponentially in the sense that it is possible to bound the mutual information by a decaying exponential.

The reducible case
Now let us generalize to the case where the Markov process is reducible, i.e., decomposable into a set of m non-interacting Markov processes for some integer m > 1. This means that the state space can be decomposed as

$$S = S_1 \cup S_2 \cup \cdots \cup S_m,$$

where restricting the Markov process to each S_i results in a well-defined Markov process on S_i that does not interact with any other S_j. If the system starts off in S₁, it can never transition to S₂, and so forth. Now if we restrict our model to the subset of interest, the mutual information will decay exponentially; however, for a generic initial state, the total probability within each set S_i remains constant, so the mutual information will asymptotically approach the entropy of the probability distribution across sets, which can be at most I = log₂ m.
In the language of statistical physics, this is an example of topological order which leads to constant terms in the correlation functions; here, the Markov graph of M is disconnected, so there are m degenerate equilibrium states.

The periodic case
If a Markov process is periodic, with a sequence that repeats forever, then the mutual information is a constant and never decays to zero, so the only power law that can be attained is the case of slope zero, which does not correspond to critical behavior.

The n > 1 case
The preceding proof holds only for Markov processes of order n = 1, but we can easily extend the results to arbitrary n. Any n = 2 Markov process can be converted into an n = 1 Markov process on pairs of letters X₁X₂.
Hence our proof shows that I(X₁X₂, Y₁Y₂) decays exponentially. But for any random variables X, Y, the data processing inequality [37] states that I(X, g(Y)) ≤ I(X, Y), where g is an arbitrary function of Y. Letting g(Y₁Y₂) = Y₁, and then permuting and applying g(X₁X₂) = X₁, gives

$$I(X_1, Y_1) \le I(X_1 X_2, Y_1 Y_2).$$

Hence, we see that I(X₁, Y₁) must decay exponentially.
The preceding remarks can be easily formalized into a proof for an arbitrary Markov process by induction on n.

The detailed balance case
This asymptotic relation can be strengthened for a subclass of Markov processes which obey a condition known as detailed balance. This subclass arises naturally in the study of statistical physics [52]. For our purposes, this simply means that there exist some real numbers K_m and a symmetric matrix S_{ab} = S_{ba} such that

$$M_{ab} = e^{K_a/2}\,S_{ab}\,e^{-K_b/2}.$$

Let us note the following facts. (1) The matrix power is simply (M^τ)_{ab} = e^{K_a/2}(S^τ)_{ab}e^{−K_b/2}. (2) By the spectral theorem, we can diagonalize S into an orthonormal basis of eigenvectors, which we label as v (or sometimes w), e.g., Sv = λ_v v and v · w = δ_{vw}. Notice that

$$\sum_n M_{mn}\,e^{K_n/2}\,v_n = \sum_n e^{K_m/2}\,S_{mn}\,v_n = \lambda_v\,e^{K_m/2}\,v_m.$$

Hence we have found an eigenvector of M for every eigenvector of S: defining (Pv)_m ≡ e^{K_m/2} v_m, we have M(Pv) = λ_v Pv. Conversely, the set of eigenvectors of S forms a basis, so there cannot be any more eigenvectors of M. In other words, M and S share the same eigenvalues. (3) µ_a = (1/Z)e^{K_a}, where Z is a normalization constant, is an eigenvector with eigenvalue 1, and hence is the stationary state: Σ_b M_{ab} µ_b = µ_a. (B11)

These facts, together with the observation that ||A||² ≡ tr(AᵀA) is invariant under an orthogonal change of basis, let us finish the calculation:

$$1 + I_R(t_1,t_2) = \sum_{ab}\frac{P(a,b)^2}{P(a)P(b)} = \sum_{ab}\big[(S^\tau)_{ba}\big]^2 = \|\mathbf{S}^\tau\|^2 = \sum_i \lambda_i^{2\tau}.$$

Since the λ_i are both the eigenvalues of M and of S, and since M is irreducible and aperiodic, there is exactly one eigenvalue λ₁ = 1, and all other eigenvalues are less than one in magnitude. Altogether,

$$I_R(t_1,t_2) = \sum_{i=2}\lambda_i^{2\tau} \approx \lambda_2^{2\tau}.$$

Hence one can easily estimate the asymptotic behavior of the mutual information if one has knowledge of the spectrum of M. We see that the mutual information decays exponentially, with a decay timescale given by the second largest eigenvalue λ₂: I(t₁,t₂) ≲ λ₂^{2τ} = e^{−2τ ln(1/λ₂)}.

Hidden Markov Model
In this subsection, we generalize our findings to hidden Markov models and present a proof of Theorem 2. Based on the considerations in the main body of the text, the joint probability distribution between two visible states X_{t₁}, X_{t₂} is given by

$$P(a,b) = \sum_{cd} G_{ac}\,\big[(\mathbf{M}^\tau)_{dc}\,\mu_c\big]\,G_{bd},$$

where the term in brackets would have been there in an ordinary Markov model, and the two new factors of G, the matrix of emission probabilities, are the result of the generalization. Note that, as before, µ is the stationary state corresponding to M. We will only consider the typical case where M is aperiodic, irreducible, and non-degenerate; once we have this case, the other cases can be easily treated by mimicking our above proof for ordinary Markov processes. Using equation (7) and defining g = Gµ, the marginal distribution of the visible states, gives

$$P(a,b) = g_a g_b + \lambda_2^\tau \sum_{cd} G_{ac}\,\mu_c\,A_{dc}\,G_{bd}.$$

Plugging this into our definition of rational mutual information gives

$$I_R = \lambda_2^{2\tau}\sum_{ab}\frac{1}{g_a g_b}\left(\sum_{cd} G_{ac}\,\mu_c\,A_{dc}\,G_{bd}\right)^2 \equiv C\,\lambda_2^{2\tau},$$

where we have used the facts that Σ_i G_{ij} = 1 and Σ_i A_{ij} = 0, and, as before, C is asymptotically constant. This shows that I_R ∝ λ₂^{2τ} decays exponentially.
Appendix C: Power-law decay for hierarchical grammars

We now prove the power-law decay of mutual information for the recursive grammar model. Consider first the weakly correlated model, for which the joint distribution of two symbols at causal distance ∆ is

$$P(a,b|\Delta) = \sum_r \big(\mathbf{G}^{\Delta/2}\big)_{ar}\,\mu_r\,\big(\mathbf{G}^{\Delta/2}\big)_{br}.$$

Diagonalizing G in analogy with equation (7) and writing (G^{∆/2})_{ar} = µ_a + εA_{ar}, where we have defined ε ≡ λ₂^{∆/2}, gives

$$P(a,b|\Delta) = \mu_a\mu_b + \varepsilon\left(\mu_b\sum_r A_{ar}\mu_r + \mu_a\sum_r A_{br}\mu_r\right) + \varepsilon^2\sum_r \mu_r\,A_{ar}A_{br}.$$

Now note that Σ_r A_{ar} µ_r = 0, since µ is an eigenvector with eigenvalue 1 of G^{∆/2}. Hence this simplifies to just

$$P(a,b|\Delta) = \mu_a\mu_b + \varepsilon^2\sum_r \mu_r\,A_{ar}A_{br}.$$

From the definition of rational mutual information, and employing the fact that Σ_i A_{ij} = 0, this gives

$$I_R = \varepsilon^4\sum_{ab}\frac{1}{\mu_a\mu_b}\left(\sum_r \mu_r\,A_{ar}A_{br}\right)^2 = \|\mathbf{N}\|^2\,\varepsilon^4 = \|\mathbf{N}\|^2\,\lambda_2^{2\Delta},$$

where N_{ab} ≡ (µ_aµ_b)^{−1/2} Σ_r µ_r A_{ar}A_{br} is a symmetric matrix and ‖·‖ denotes the Frobenius norm.

Let us now generalize to the strongly correlated case. As discussed in the text (cf. equation (12)), the joint probability is modified to

$$P(a,b|\Delta) = \sum_{rs}\big(\mathbf{G}^{\Delta/2-1}\big)_{ar}\,Q_{rs}\,\big(\mathbf{G}^{\Delta/2-1}\big)_{bs},$$

where Q is some symmetric matrix which satisfies Σ_r Q_{rs} = µ_s. We now employ our favorite trick of diagonalizing G and writing (G^{∆/2−1})_{ar} = µ_a + εA_{ar}, where now ε ≡ λ₂^{∆/2−1}. This gives

$$P(a,b|\Delta) = \mu_a\mu_b + \varepsilon^2\sum_{rs} Q_{rs}\,A_{ar}A_{bs}.$$

Now defining the symmetric matrix R_{ab} ≡ Σ_{rs} Q_{rs} A_{ar}A_{bs} ≡ (µ_aµ_b)^{1/2} N_{ab}, and noting that Σ_a R_{ab} = 0, we have

$$I_R = \varepsilon^4\sum_{ab}\frac{R_{ab}^2}{\mu_a\mu_b} = \|\mathbf{N}\|^2\,\varepsilon^4 = \|\mathbf{N}\|^2\,\lambda_2^{2\Delta-4}.$$

In either the strongly or the weakly correlated case, note that N is asymptotically constant. Writing the second largest eigenvalue as |λ₂|² = q^{−k₂/2}, where q is the branching factor, and using ∆ ≈ 2 log_q |i − j|, we find I_R ∝ |i − j|^{−k₂}. Behold the glorious power law! We note that the normalization C must be a function of the form C = m₂ f(λ₂, q), where m₂ is the multiplicity of the eigenvalue λ₂. We evaluate this normalization in the next section.
As before, this result can be sharpened if we assume that G satisfies detailed balance, G_{mn} = e^{K_m/2}S_{mn}e^{−K_n/2}, where S is a symmetric matrix and the K_n are just numbers. Let us only consider the weakly correlated case. By the spectral theorem, we diagonalize S into an orthonormal basis of eigenvectors v; as before, G and S share the same eigenvalues, and Z denotes the constant that ensures that P is properly normalized. Let us move full steam ahead and compute the rational mutual information: proceeding as in Appendix B, it reduces to the squared Frobenius norm of the symmetric matrix H_{ab} ≡ Σ_{v≥2} λ_v^∆ v_a v_b, whose eigenvalues can be read off, so we have

$$I_R(\Delta) = \|\mathbf{H}\|^2 = \sum_{i=2}|\lambda_i|^{2\Delta}. \qquad (C15)$$

Hence we have computed the rational mutual information exactly as a function of ∆. In the next section, we use this result to compute the mutual information as a function of separation |i − j|, which leads to a precise evaluation of the normalization constant C in the equation

$$I(a,b) \approx C\,|i-j|^{-k_2}. \qquad (C16)$$

Detailed evaluation of the normalization

For simplicity, we specialize to the case q = 2, although our results can surely be extended to q > 2. Define δ = ∆/2 and d = |i − j|. We wish to compute the expected value of I_R conditioned on knowledge of d. By Bayes' rule, p(δ|d) ∝ p(d|δ)p(δ). Now p(d|δ) is given by a triangle distribution with mean 2^{δ−1} and compact support (0, 2^δ). On the other hand, p(δ) ∝ 2^δ for δ ≤ δ_max and p(δ) = 0 for δ ≤ 0 or δ > δ_max. This new constant δ_max serves two purposes. First, it can be thought of as a way to regulate the probability distribution p(δ) so that it is normalizable; at the end of the calculation we formally take δ_max → ∞ without obstruction. Second, if we are interested in empirically sampling the mutual information, we cannot generate an infinite string, so setting δ_max to a finite value accounts for the fact that our generated string may be finite.
We now assume d ≫ 1 so that we can swap discrete sums for integrals. We can then compute the conditional expectation value of 2^{−k₂δ}. This yields

$$I_R \approx \int_0^\infty 2^{-k_2\delta}\,P(d|\delta)\,d\delta = \frac{1 - 2^{-k_2}}{k_2(k_2+1)\log 2}\,d^{-k_2}, \qquad (C17)$$

which makes the normalization constant in equation (C16) explicit. It turns out it is also possible to compute the answer exactly, without approximating the discrete sums by integrals. The resulting predictions are compared in Figure 5.
Appendix D: Estimating (rational) mutual information from empirical data

Estimating mutual information or rational mutual information from empirical data is fraught with subtleties.

FIG. 5: Decay of rational mutual information with separation for a binary sequence from a numerical simulation with probabilities p(0|0) = p(1|1) = 0.9 and a branching factor q = 2. The blue curve is not a fit to the simulated data but rather an analytic calculation. The smooth power law displayed on the left is what is predicted by our "continuum" approximation. The very small discrepancies (right, shown as residuals from the power law) are not random but are fully accounted for by more involved exact calculations with discrete sums.
It is well known that the naive plug-in estimate of the Shannon entropy,

$$\hat{S} = -\sum_{i=1}^{K}\frac{N_i}{N}\log\frac{N_i}{N},$$

is biased, generally underestimating the true entropy from finite samples. We therefore use the estimator advocated by Grassberger [53], in which the logarithms are replaced by digamma functions,

$$\hat{S} = \psi(N) - \frac{1}{N}\sum_{i=1}^{K} N_i\,\psi(N_i), \qquad (D1)$$

where ψ(x) is the digamma function, N = Σ_i N_i, and K is the number of characters in the alphabet. The mutual information can then be estimated by Î(X,Y) = Ŝ(X) + Ŝ(Y) − Ŝ(X,Y). The variance of this estimator is the sum of the variances

$$\mathrm{var}(\hat{I}) = \mathrm{varEnt}(X) + \mathrm{varEnt}(Y) + \mathrm{varEnt}(X,Y), \qquad (D2)$$

where the varentropy is defined as varEnt(X) = var(−log p(X)), and where we can again replace logarithms with the digamma function ψ. The uncertainty after N measurements is then ≈ √(var(Î)/N).
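A sketch of these estimators (our own illustration; the digamma formula implements our reconstruction of equation (D1) above, and it assumes numpy and scipy):

```python
import numpy as np
from scipy.special import digamma

def entropy_grassberger(counts):
    """Digamma-based entropy estimate in nats from symbol counts;
    see the reconstructed estimator (D1) and [53]."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    N = counts.sum()
    return digamma(N) - (counts * digamma(counts)).sum() / N

def mi_grassberger(pair_counts):
    """I(X,Y) = S(X) + S(Y) - S(X,Y) from a table of pair counts."""
    sx = entropy_grassberger(pair_counts.sum(axis=1))
    sy = entropy_grassberger(pair_counts.sum(axis=0))
    sxy = entropy_grassberger(pair_counts.ravel())
    return sx + sy - sxy

# Counts of pairs (X_t, X_{t+d}) harvested from some sequence:
table = np.array([[40, 10],
                  [12, 38]])
print(mi_grassberger(table) / np.log(2), "bits")
```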
To compare our theoretical results with experiment in Figure 5, we must measure the rational mutual information for a binary sequence from (simulated) data. For a binary sequence with covariance ρ(X,Y) = P(1,1) − P(1)², the rational mutual information is

$$I_R(X,Y) = \left[\frac{\rho(X,Y)}{P(0)P(1)}\right]^2. \qquad (D4)$$

This was essentially calculated in [54] by considering the limit where the covariance is small, ρ ≪ 1 (note that in their paper there is an erroneous factor of 2). To estimate the covariance ρ(d) as a function of separation d (sometimes confusingly referred to as the correlation function), we use an unbiased estimator of the covariance at lag d for a data sequence {x₁, x₂, ..., x_n}. However, it is important to note that estimating the covariance function ρ by averaging and then squaring will generically yield a biased estimate of ρ²; we circumvent this by simply estimating I_R(X,Y)^{1/2} ∝ ρ(X,Y), as in the sketch below.
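A sketch of this procedure (ours, assuming numpy; the covariance estimator shown is the simple plug-in version rather than the unbiased one referred to above):

```python
import numpy as np

def sqrt_rational_mi(x, d):
    """Estimate I_R(d)^{1/2} = |rho(d)| / (P(0)P(1)) for a binary sequence.

    Estimating rho first and taking its square only at the end (rather than
    averaging squared quantities) avoids the bias discussed in the text."""
    x = np.asarray(x, dtype=float)
    x0, xd = x[:-d], x[d:]
    rho = (x0 * xd).mean() - x0.mean() * xd.mean()   # plug-in covariance at lag d
    p1 = x.mean()
    return abs(rho) / (p1 * (1.0 - p1))

rng = np.random.default_rng(4)
x = rng.integers(0, 2, size=100_000)   # i.i.d. bits: I_R^{1/2} should be ~ 0
print(sqrt_rational_mi(x, 10))
```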

FIG. 1: Decay of mutual information with separation. Here the mutual information in bits per symbol is shown as a function of separation d(X,Y) = |i − j|, where the symbols X and Y are located at positions i and j in the sequence in question, and shaded bands correspond to 1σ error bars. All measured curves are seen to decay roughly as power laws, explaining why they cannot be accurately modeled as Markov processes, for which the mutual information instead plummets exponentially (the example shown has I ∝ e^{−d/6}). The measured curves are seen to be qualitatively similar to that of a famous critical system in physics: a 1D slice through a critical 2D Ising model, where the slope is −1/2. The human genome data consists of 177,696,512 base pairs {A, C, T, G} from chromosome 5 from the National Center for Biotechnology Information [10], with unknown base pairs omitted. The Bach data consists of 5727 notes from Partita No. 2 [11], with all notes mapped into a 12-symbol alphabet consisting of the 12 half-tones {C, C#, D, D#, E, F, F#, G, G#, A, A#, B}, with all timing, volume and octave information discarded. The three text corpuses are 100 MB from Wikipedia [12] (206 symbols), the first 114 MB of a French corpus [13] (185 symbols), and 27 MB of English articles from slate.com (143 symbols). The large long-range information appears to be dominated by poems in the French sample and by html-like syntax in the Wikipedia sample.


FIG. 2: Both a traditional Markov process (top) and our recursive generative grammar process (bottom) can be represented as Bayesian networks, where the random variable at each node depends only on the node pointing to it with an arrow. The numbers show the geodesic distance ∆ to the leftmost node, defined as the smallest number of edges that must be traversed to get there. Our results show that the mutual information decays exponentially with ∆. Since this geodesic distance grows only logarithmically with the separation in time in a hierarchical generative grammar (the hierarchy creates very efficient shortcuts), the exponential kills the logarithm and we are left with power-law decays of mutual information in such languages.