Article

Minimum Description Length Codes Are Critical

1 Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology (NTNU), 7030 Trondheim, Norway
2 The Abdus Salam International Center for Theoretical Physics, 34151 Trieste, Italy
3 Scuola Internazionale Superiore di Studi Avanzati, 34136 Trieste, Italy
* Author to whom correspondence should be addressed.
Entropy 2018, 20(10), 755; https://doi.org/10.3390/e20100755
Submission received: 4 September 2018 / Revised: 26 September 2018 / Accepted: 28 September 2018 / Published: 1 October 2018

Abstract

In the Minimum Description Length (MDL) principle, learning from the data is equivalent to an optimal coding problem. We show that the codes that achieve optimal compression in MDL are critical in a very precise sense. First, when they are taken as generative models of samples, they generate samples with broad empirical distributions and with a high value of the relevance, defined as the entropy of the empirical frequencies. These results are derived for different statistical models (Dirichlet model, independent and pairwise dependent spin models, and restricted Boltzmann machines). Second, MDL codes sit precisely at a second order phase transition point where the symmetry between the sampled outcomes is spontaneously broken. The order parameter controlling the phase transition is the coding cost of the samples. The phase transition is a manifestation of the optimality of MDL codes, and it arises because codes that achieve a higher compression do not exist. These results suggest a clear interpretation of the widespread occurrence of statistical criticality as a characterization of samples which are maximally informative on the underlying generative process.

1. Introduction

Empirical data exhibiting broad frequency distributions are found in the most disparate domains. Broad distributions manifest in the fact that, if outcomes are ranked in order of decreasing frequency of their occurrence, the rank-frequency plot spans several orders of magnitude on both axes. Figure 1 reports a few cases (see caption for details), but many more have been reported in the literature (see e.g., [1,2]). A straight line in the rank plot corresponds to a power law frequency distribution, where the number of outcomes that are observed k times behaves as $m_k \sim k^{-\mu-1}$ (with $1/\mu$ being the slope of the rank plot). Yet, as Figure 1 shows, empirical distributions are not always power laws, even though they are broad nonetheless. Countless mechanisms have been advanced to explain this behaviour [1,2,3,4,5,6]. It has recently been suggested that broad distributions arise from efficient representations, i.e., when the data samples relevant variables, which are those carrying the maximal amount of information on the generative process [7,8,9]. Such Maximally Informative Samples (MIS) are those for which the entropy of the frequency with which outcomes occur (called relevance in [8,9]) is maximal at a given resolution, which is measured by the number of bits needed to encode the sample (see Section 1.1). MIS exhibit power law distributions with the exponent $\mu$ governing the tradeoff between resolution and relevance [9]. This argument for the emergence of broad distributions is independent of any mechanism or model. A direct way to confirm this claim is to check that samples generated from models that are known to encode efficient representations are actually maximally informative. Along this line, [10] found strong evidence that MIS occur in the representations that deep learning extracts from data. This paper explores the same issue in efficient coding as defined in Minimum Description Length theory [11].
Regarding empirical data as a message sent from nature, we expect it to be expressed in an efficient manner if relevant variables are chosen. This requirement can be made quantitative and precise, in information theoretic terms, following Minimum Description Length (MDL) theory [11]. MDL seeks the optimal encoding of data generated by a parametric model with unknown parameters (see Section 1.2) and derives a probability distribution over samples that embodies the requirement of optimal encoding. This distribution is the Normalized Maximum Likelihood (NML). This paper takes the NML as a generative process of samples and studies both its typical and atypical properties. In a series of cases, we find that samples generated by NMLs are typically close to being maximally informative, in the sense of [9], and that their frequency distribution is typically broad. In addition, we find that NMLs are critical in a very precise sense, because they sit at a second order phase transition that separates typical from atypical behavior. More precisely, we find that large deviations, for which the resolution attains atypically low values, exhibit a condensation phenomenon whereby all N points in the sample coincide. This is consistent with the fact that NMLs correspond to efficient coding of random samples generated from a model, so that codes achieving higher compression do not exist. Large deviations enforcing higher compression force the parameters to corners of the allowed space where the model becomes deterministic.
The rest of the paper is organized as follows: the rest of the introduction lays out the background for what follows by recalling the characterization of samples in terms of resolution and relevance, as in [9], and the derivation of the NML in MDL, following [11]. Section 2.1 discusses typical properties of the NML and Section 2.2 discusses large deviations of the coding cost. We conclude with a series of remarks on the significance of these results.
Setting the Scene
Let $\hat{s} = (s^{(1)},\ldots,s^{(N)})$ be a sample of N observations, $s^{(i)}\in\chi$, of a system, where $\chi$ is a countable finite state space. We define $k_s$ as the number of observations in $\hat{s}$ for which $s^{(i)} = s$, i.e., the frequency of s. The number of states s that occur k times will be denoted by $m_k$. Both $k_s$ and $m_k$ depend on the sample $\hat{s}$.
We assume that $\hat{s}$ is generated in a series of independent experiments or observations, all in the same conditions. This is equivalent to taking $\hat{s}$ as a sequence of N independent draws from an unknown distribution $p(s)$ (i.e., the generative process).

1.1. Resolution, Relevance and Maximally Informative Samples

The information content of the sample is measured by the number of bits needed to encode a single data point. This is given by Shannon entropy [18]. Taking the frequency k s / N as the probability of point s, this leads to:
$\hat{H}[s] = -\sum_{s\in\chi}\frac{k_s}{N}\log\frac{k_s}{N} = -\sum_k \frac{k\,m_k}{N}\log\frac{k}{N},$
where the hat indicates that the entropy is computed from the empirical frequency. This quantity specifies the level of detail of the description provided by the variable s. At one extreme, all the data points are equal, i.e., $s^{(i)} = s$ for all $i = 1,\ldots,N$, so that $m_k = 0$ for $k = 1,\ldots,N-1$ and $m_{k=N} = 1$. In this case one finds $\hat{H}[s] = 0$. At the other extreme, all the data points are different, i.e., $s^{(i)}\neq s^{(j)}$ for $i\neq j$, so that $m_{k=1} = N$ and $m_k = 0$ for $k > 1$. Hence, one finds $\hat{H}[s] = \log N$. This is why we call $\hat{H}[s]$ the resolution, following [9]. The resolution clearly depends on the cardinality of $\chi$. Only a part of $\hat{H}[s]$ provides information on the generative process $p(s)$, and this is given by the relevance
$\hat{H}[k] = -\sum_k \frac{k\,m_k}{N}\log\frac{k\,m_k}{N}.$
A simple argument, which is elaborated in detail in [9], is that the empirical frequency $k_s/N$ is the best estimate of $p(s)$, so that, conditional on $k_s$, the sample does not contain any further information on $p(s)$. Note that $k_s$ is a function of s, which implies $\hat{H}[s,k] = \hat{H}[s]\geq\hat{H}[k]$. Therefore, the difference $\hat{H}[s] - \hat{H}[k] = \hat{H}[s|k]$ quantifies the amount of noise the sample contains.
We call $\hat{s}$ a Maximally Informative Sample (MIS) if $m_k$ is such that the relevance is maximal at a given resolution $\hat{H}[s] = H_0$. This implies the maximization of the functional
$\mathcal{F} = \hat{H}[k] + \mu\left(\hat{H}[s] - H_0\right) + \lambda\sum_k \frac{k\,m_k}{N}$
over m k , where the Lagrange multipliers μ and λ are adjusted to enforce the conditions H ^ [ s ] = H 0 and k k m k = N , respectively. As shown in [7,8], MIS exhibit a power law frequency distribution
$m_k \simeq c\, k^{-\mu-1}$
where c is a normalization constant such that $\sum_k k\,m_k = N$. As $H_0$ varies from 0 to $\log N$, MIS trace a curve in the resolution-relevance plane (see solid lines in Figure 2 and Figure 3 (B,C)) with slope $-\mu$. As discussed in [9,10], $\mu$ quantifies the trade-off between resolution and relevance: a decrease in resolution of one bit leads to an increase of $\mu$ bits in relevance. The point $\mu = 1$, which corresponds to Zipf's law, sets the limit beyond which further reduction in $\hat{H}[s]$ results in lossy compression, because, for $\mu < 1$, the increase in $\hat{H}[k]$ cannot compensate the loss in resolution.
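As a concrete illustration of the two quantities defined above, the following minimal Python sketch (ours, not the authors' code) computes the empirical resolution and relevance of a sample directly from the counts $k_s$ and the degeneracies $m_k$.

```python
import numpy as np
from collections import Counter

def resolution_and_relevance(sample):
    """Empirical resolution H[s] and relevance H[k] (in nats) from a list of
    outcomes, following the definitions of Section 1.1."""
    N = len(sample)
    k_s = Counter(sample)                 # frequency k_s of each observed state
    m_k = Counter(k_s.values())           # degeneracy m_k of each frequency k
    H_s = -sum(k * m / N * np.log(k / N) for k, m in m_k.items())
    H_k = -sum(k * m / N * np.log(k * m / N) for k, m in m_k.items())
    return H_s, H_k

# example: N independent draws from a uniform distribution over 100 states
rng = np.random.default_rng(0)
print(resolution_and_relevance(rng.integers(0, 100, size=1000)))
```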

1.2. Minimum Description Length and the Normalized Maximum Likelihood

The main insight of MDL is that learning from data is equivalent to data compression [11]. In turn, data compression is equivalent to assigning a probability distribution over the space of samples. This section provides a brief derivation of this distribution, whereas the rest of the paper discusses its typical and atypical properties. We refer the interested reader to [11,19] for a more detailed discussion of MDL.
From an information theoretic perspective, one can think of the sample, $\hat{s}$, as a message generated by some source (e.g., nature) that we wish to compress as much as possible. This entails translating $\hat{s}$ into a sequence of bits. A code is a rule that achieves this for any $\hat{s}\in\chi^N$, and its efficiency depends on whether frequent patterns are assigned short codewords or not. Conversely, any code implies a distribution $P(\hat{s})$ over the space of samples, and the cost of encoding the sample $\hat{s}$ under the code P is given by [18]
$\mathcal{E} = -\log P(\hat{s})$
bits (assuming logarithm base two). Optimal compression is achieved when the code P coincides with the data generating process [18].
Consider the situation where the data is generated as independent draws from a parametric model $f(s|\theta)$. If the value of $\theta$ were known, then the optimal code would be given by $P(\hat{s}) = \prod_i f(s^{(i)}|\theta)\equiv f(\hat{s}|\theta)$. MDL seeks to derive P in the case where $\theta$ is not known. (Indeed, MDL aims at deriving efficient coding under f irrespective of whether $f(s|\theta)$ is the "true" generative model or not; this allows one to compare different models and choose the one providing the most concise description of the data.) This applies, for example, to the situation where $\hat{s}$ is a series of experiments or observations aimed at measuring the parameters $\theta$ of a theory.
In hindsight, i.e., upon seeing the sample, the best code is $f(\hat{s}|\hat{\theta}(\hat{s}))$, where $\hat{\theta}(\hat{s})$ is the maximum likelihood estimator of $\theta$, which depends on the sample $\hat{s}$. Therefore, one can define the regret $\mathcal{R}$ as the additional encoding cost that one needs to spend if one uses the code $P(\hat{s})$ to compress $\hat{s}$, i.e.,
$\mathcal{R} = -\log P(\hat{s}) - \min_\theta\left[-\log f(\hat{s}|\theta)\right].$
Notice that $\min_\theta\left[-\log f(\hat{s}|\theta)\right] = -\log f(\hat{s}|\hat{\theta}(\hat{s}))$. $\mathcal{R}$ is called the regret of P relative to f for the sample $\hat{s}$, because it depends both on P and on $\hat{s}$.
MDL derives the optimal code, $\bar{P}(\hat{s})$, that minimizes the regret, assuming that for any P the source produces the worst possible sample [11]. The solution [20]
$\bar{P}(\hat{s}) = \frac{f(\hat{s}|\hat{\theta}(\hat{s}))}{\sum_{\hat{x}\in\chi^N} f(\hat{x}|\hat{\theta}(\hat{x}))}$
is called the Normalized Maximum Likelihood (NML). The optimal regret is given by
$\bar{\mathcal{R}} = \log\sum_{\hat{s}\in\chi^N} f(\hat{s}|\hat{\theta}(\hat{s}))$
which is known in MDL as the parametric complexity. (Notice that $e^{\bar{\mathcal{R}}}$ can be seen as a partition sum; hence, throughout the paper, we shall refer to the parametric complexity as the UC partition function.) For models in the exponential family, Rissanen showed that the parametric complexity is asymptotically given by [21]
$\bar{\mathcal{R}} \simeq \frac{k}{2}\log\frac{N}{2\pi} + \log\int\!\sqrt{\det I(\theta)}\, d\theta + O(1)$
where $I(\theta)$ is the Fisher information matrix, with matrix elements defined by the expectation $I_{ij}(\theta) = -\left\langle\frac{\partial^2\log f(s|\theta)}{\partial\theta_i\,\partial\theta_j}\right\rangle_\theta$ over the parametric model $f(s|\theta)$ (see Appendix A for a simple derivation). The NML code is a universal code because, for large enough samples, it achieves a compression per data point as good as the one that would be achieved with the optimal choice of $\theta$. This is easy to see, because the per-data-point regret $\bar{\mathcal{R}}/N$ vanishes in the limit $N\to\infty$; hence the NML code achieves the same compression as $f(\hat{s}|\hat{\theta})$.
Notice also that the optimal regret, $\bar{\mathcal{R}}$, in Equation (8) is independent of the sample $\hat{s}$. It provides a measure of complexity of the model f that can be used in model selection schemes. For exponential families, the MDL procedure penalizes models with a cost which equals the one obtained in Bayesian model selection [22] under a Jeffreys prior. Indeed, considering $\bar{P}(\hat{s})$ as a generative model for samples, one can show that the induced distribution on $\theta$ is given by Jeffreys prior (see Appendix A).
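As a sanity check of the asymptotic formula above, the regret can be computed exactly for a model of a single 0/1 variable (a one-parameter Bernoulli model, for which $\int\sqrt{\det I(\theta)}\,d\theta = \pi$), since the sum over samples reduces to a sum over the count of ones. The following sketch is an illustration under these assumptions, not the authors' code; it compares the exact parametric complexity with $\frac{1}{2}\log\frac{\pi N}{2}$.

```python
import numpy as np
from scipy.special import gammaln

def parametric_complexity_bernoulli(N):
    """Exact regret log sum_l C(N,l) (l/N)^l (1-l/N)^(N-l) for a single 0/1
    variable, by brute force over the count l (with 0 log 0 = 0)."""
    l = np.arange(N + 1)
    with np.errstate(divide='ignore', invalid='ignore'):
        log_terms = (gammaln(N + 1) - gammaln(l + 1) - gammaln(N - l + 1)
                     + np.where(l > 0, l * np.log(l / N), 0.0)
                     + np.where(l < N, (N - l) * np.log(1 - l / N), 0.0))
    shift = log_terms.max()
    return np.log(np.exp(log_terms - shift).sum()) + shift

for N in (100, 1000, 10000):
    # Rissanen's asymptotic for one parameter: (1/2) log(N/2pi) + log(pi)
    print(N, parametric_complexity_bernoulli(N), 0.5 * np.log(np.pi * N / 2))
```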

2. Results

2.1. NML Codes Provide Efficient Representations

In this section we consider P ¯ as a generative model for samples and we investigate its typical properties for some representative statistical models.

2.1.1. Dirichlet Model

Let us start by considering the Dirichlet model, with distribution $f(s|\theta) = \theta_s$, $s\in\chi$. The parameters $\theta_s\geq0$ are constrained by the normalization condition $\sum_{s\in\chi}\theta_s = 1$. Let $S = |\chi|$ denote the cardinality of $\chi$ and define, for convenience, $\rho = N/S$ as the average number of points per state. Because the observations are independent, the likelihood of a sample $\hat{s}$ given $\theta = (\theta_1,\ldots,\theta_S)$ can be written as
$f(\hat{s}|\theta) = \prod_{s\in\chi}\theta_s^{k_s},$
where $k_s$ is the number of times that the state s occurs in the sample $\hat{s}$. From here, it can be seen that $\hat{\theta}_s = k_s/N$ is the maximum likelihood estimator of $\theta_s$. Thus, the universal code for the Dirichlet model can now be constructed as
$\bar{P}(\hat{s}) = e^{-\bar{\mathcal{R}}}\prod_{s\in\chi}\left(\frac{k_s}{N}\right)^{k_s}$
which can be read as saying that, for each s, the code spends $-k_s\log(k_s/N) + \bar{\mathcal{R}}/N$ bits. In terms of the frequencies, $\{k_1,\ldots,k_S\}$, the universal code can be written as
$\bar{P}(k_1,\ldots,k_S) = e^{-\bar{\mathcal{R}}}\,\frac{N!}{\prod_{s\in\chi}k_s!}\prod_{s\in\chi}\left(\frac{k_s}{N}\right)^{k_s}\delta\!\left(\sum_{s\in\chi}k_s - N\right)$
wherein the multinomial coefficient, $\frac{N!}{\prod_{s\in\chi}k_s!}$, counts the number of samples with a given frequency profile $k_1,\ldots,k_S$. In order to compute the optimal regret $\bar{\mathcal{R}}$, we have to evaluate the partition function
$e^{\bar{\mathcal{R}}} = \frac{N!}{N^N e^{-N}}\int_{-\pi}^{\pi}\frac{d\mu}{2\pi}\, e^{i\mu N}\sum_{k_1=0}^{\infty}\frac{k_1^{k_1}e^{-k_1}e^{-i\mu k_1}}{k_1!}\cdots\sum_{k_S=0}^{\infty}\frac{k_S^{k_S}e^{-k_S}e^{-i\mu k_S}}{k_S!}$
$= \frac{N!}{N^N e^{-N}}\int_{-\pi}^{\pi}\frac{d\mu}{2\pi}\, e^{i\mu N}\,\mathcal{N}(i\mu)^S$
$\simeq \sqrt{2\pi N}\int_{-\pi}^{\pi}\frac{d\mu}{2\pi}\, e^{S\Phi(i\mu)}$
where
$\Phi(z) = \rho z + \log\mathcal{N}(z)$
and
$\mathcal{N}(z) = \sum_{k=0}^{\infty}\frac{k^k e^{-(1+z)k}}{k!}.$
The integral in Equation (15) is dominated by the saddle point $z^*(\rho)$ of the function $\Phi$, which is given by the condition
$\frac{d\Phi}{dz} = \rho - \langle k\rangle_z = 0$
where the average $\langle\cdot\rangle_z$ is taken with respect to the distribution
$q(k|z) = \frac{1}{\mathcal{N}(z)}\frac{k^k e^{-(1+z)k}}{k!}.$
Gaussian integration around the saddle point leads then to
$e^{\bar{\mathcal{R}}} \simeq \sqrt{\frac{\rho}{\langle k^2\rangle_{z^*} - \langle k\rangle_{z^*}^2}}\; e^{S\Phi(z^*(\rho))}$
where we used the identity $\Phi''(z) = \langle k^2\rangle_z - \langle k\rangle_z^2$.
The distribution Equation (12) can also be written introducing the Fourier representation of the delta function
$\bar{P}(k_1,\ldots,k_S) = \frac{N!\, e^{-\bar{\mathcal{R}}}}{N^N e^{-N}}\int_{-\pi}^{\pi}\frac{d\mu}{2\pi}\, e^{i\mu N}\prod_{s\in\chi}\frac{k_s^{k_s}e^{-(1+i\mu)k_s}}{k_s!}.$
For typical sequences $k_1,\ldots,k_S$, the integral is again dominated by the value $\mu = -iz^*(\rho)$ that dominates Equation (15), which means that the distribution factorizes as
$\bar{P}(k_1,\ldots,k_S) \simeq \prod_{s\in\chi} q(k_s|z^*).$
This means that the NML is, to a good approximation, equivalent to S independent draws from the distribution $q(k|z^*)$ or, equivalently, that $q(k|z^*)$ is the distribution that characterizes typical samples. This is fully confirmed by Figure 2A, which compares $q(k|z^*)$ with the empirical distribution of $k_s$ drawn from $\bar{P}$. For large k, we find $q(k|z^*)\sim e^{-z^* k}/\sqrt{k}$, which shows that the distribution of frequencies is broad, with a cutoff at $k\sim 1/z^*$. This underlying broad distribution is confirmed by Figure 2B, which shows the dependence of the degeneracy $m_k$ on the frequency k.
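A minimal numerical sketch of this computation (ours; the function names are hypothetical) solves the saddle point condition $\langle k\rangle_z = \rho$ for $z^*$ by truncating the sum over k, and checks that $z^*(\rho)$ approaches the value $1/(2\rho)$ derived below for large ρ.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaln

def q_weights(z, k_max=100000):
    """Normalized weights k^k e^{-(1+z)k} / k! on k = 0..k_max-1 (k=0 term is 1)."""
    k = np.arange(1, k_max)
    logw = k * np.log(k) - (1 + z) * k - gammaln(k + 1)
    w = np.concatenate(([1.0], np.exp(logw)))
    return w / w.sum()

def z_star(rho):
    """Solve the saddle point condition <k>_z = rho for z (numerical sketch)."""
    mean_k = lambda z: np.arange(len(q_weights(z))) @ q_weights(z) - rho
    return brentq(mean_k, 1e-6, 10.0)

rho = 10.0
print(z_star(rho), 1 / (2 * rho))   # the saddle point approaches 1/(2 rho) for rho >> 1
```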
In the regime where $\rho\gg1$ and k is large, the cutoff extends to large values of k and we find $z^*(\rho)\simeq\frac{1}{2\rho}$ (see Appendix B.1). In addition, the parametric complexity can be computed explicitly via Equation (9) in this regime, with the result
$\bar{\mathcal{R}} \simeq \frac{S}{2}\left(1+\log\rho\right) - \frac{1}{2}\log(2\rho), \qquad \rho\gg1.$
The coding cost of a typical sample is given by
$\mathcal{E} = -\log\bar{P}(\hat{s}) = -\sum_{s\in\chi}k_s\log\frac{k_s}{N} + \bar{\mathcal{R}} = N\hat{H}[s] + \bar{\mathcal{R}}.$
The number of samples with encoding cost $\mathcal{E}$ can be computed in the following way. The number of samples that correspond to a given degeneracy $m_k$ of the states that occur $k_s = k$ times in $\hat{s}$ is given by
$\frac{N!}{\prod_k (k\,m_k)!}.$
Therefore, the number of samples with coding cost E is
W ( E ) = { m k } M ( E ) N ! k ( k m k ) !
= { m k } M ( E ) e log N ! k log ( k m k ) !
{ m k } M ( E ) e N H ^ [ k ] , ρ 1
where $\mathcal{M}(\mathcal{E})$ is the set of all sequences $\{m_k\}$ that are consistent with samples in $\chi^N$ and satisfy Equation (26). The last expression assumes $\log M!\simeq M\log M - M$, which is reasonable for $M = k\,m_k\gg1$, i.e., when $\rho\gg1$. In this regime we expect the sum over $\mathcal{M}(\mathcal{E})$ to be dominated by samples with maximal $\hat{H}[k]$. Indeed, Figure 2C,D show that samples drawn from $\bar{P}$ achieve values of $\hat{H}[k]$ close to the theoretical maximum, especially in the region $\rho\gg1$.
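The Stirling step above can be checked numerically: for a well-sampled dataset ($\rho\gg1$), $\log N! - \sum_k\log(k\,m_k)!$ is close to $N\hat{H}[k]$. A short sketch (ours, with arbitrary parameter choices):

```python
import numpy as np
from collections import Counter
from scipy.special import gammaln

rng = np.random.default_rng(3)

# draw a well-sampled dataset (rho = N/S >> 1) and compare
# log N! - sum_k log (k m_k)!  with  N * H[k]
N, S = 100000, 1000
sample = rng.integers(0, S, size=N)
k_s = Counter(sample)
m_k = Counter(k_s.values())

log_W = gammaln(N + 1) - sum(gammaln(k * m + 1) for k, m in m_k.items())
H_k = -sum(k * m / N * np.log(k * m / N) for k, m in m_k.items())
print(log_W / N, H_k)   # the two values should be close when k * m_k >> 1
```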

2.1.2. A Model of Independent Spins

In order to corroborate our results for the Dirichlet model, we study the properties of the universal codes for a model of independent spins, i.e., a paramagnet. For a single spin, s = ± 1 , in a local field h, the probability distribution is given by
$P(s|h) = \frac{e^{sh}}{2\cosh h}.$
Thus, for a sample $\hat{s}$ of size N,
$P(\hat{s}|h) = e^{Nmh - N\log(2\cosh h)}$
where $m = \frac{1}{N}\sum_{i=1}^{N}s^{(i)}$ is the magnetization. The maximum likelihood estimate of h is $\hat{h}(m) = \tanh^{-1}m$; hence the universal code for a single spin can be written as
$\bar{P}(\hat{s}) = e^{N\left[m\hat{h}(m) - \log 2\cosh\hat{h}(m)\right] - \bar{\mathcal{R}}}$
where $\bar{\mathcal{R}}\simeq\frac{1}{2}\log\frac{\pi N}{2}$ (see Appendix B.2). Note that a sample with magnetization m can be realized by permuting the up-spins ($s = +1$, of which there are $\ell = \frac{N+Nm}{2}$) and the down-spins ($s = -1$, of which there are $N-\ell$). Consequently, the magnetization of samples drawn from $\bar{P}$ has a broad distribution given by the arcsine law (see Appendix B.2)
$\bar{P}(m) = \binom{N}{\frac{N-Nm}{2}}\, e^{N\left[m\tanh^{-1}m - \log 2\cosh\tanh^{-1}m\right] - \bar{\mathcal{R}}}$
$\simeq \frac{1}{\pi\sqrt{1-m^2}}, \qquad m\in[-1,1].$
It is straightforward to see that the model of a single spin is equivalent to a Dirichlet model with two states, $\chi = \{-1,+1\}$. In terms of the number of up-spins $\ell$, using $m = 2\ell/N - 1$, the NML for a single spin can be written as
$\bar{P}(\ell) = e^{-\bar{\mathcal{R}}}\binom{N}{\ell}\left(\frac{\ell}{N}\right)^{\ell}\left(1-\frac{\ell}{N}\right)^{N-\ell}.$
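The following sketch (ours, not the authors' code) evaluates $\bar{P}(\ell)$ exactly for a single spin, recovers the regret $\bar{\mathcal{R}}\simeq\frac{1}{2}\log\frac{\pi N}{2}$ from the normalization, and compares the induced distribution of the magnetization with the arcsine law.

```python
import numpy as np
from scipy.special import gammaln

N = 1000
l = np.arange(N + 1)
# log of the unnormalized NML weight for a single spin as a function of the
# number of up-spins l (with the convention 0 log 0 = 0)
with np.errstate(divide='ignore', invalid='ignore'):
    log_w = (gammaln(N + 1) - gammaln(l + 1) - gammaln(N - l + 1)
             + np.where(l > 0, l * np.log(l / N), 0.0)
             + np.where(l < N, (N - l) * np.log(1 - l / N), 0.0))
w = np.exp(log_w)
P = w / w.sum()                            # bar P(l); log(w.sum()) is the regret
m = 2 * l / N - 1                          # magnetization
arcsine = 2 / (np.pi * N * np.sqrt(1 - m[1:-1] ** 2))   # arcsine density per step of l
print(np.log(w.sum()), 0.5 * np.log(np.pi * N / 2))     # regret vs 1/2 log(pi N / 2)
print(np.abs(P[1:-1] - arcsine).max())                  # broad, arcsine-like profile
```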
The NML for a paramagnet with n independent spins reads as
$\bar{P}(\ell_1,\ldots,\ell_n) = e^{-n\bar{\mathcal{R}}}\prod_{i=1}^{n}\binom{N}{\ell_i}\left(\frac{\ell_i}{N}\right)^{\ell_i}\left(1-\frac{\ell_i}{N}\right)^{N-\ell_i}.$
Figure 3 reports the properties of the typical samples of the NML of a paramagnet. We observe that the frequency distribution of typical samples is broad (Figure 3A) and that typical samples attain values of $\hat{H}[k]$ very close to the maximum for a given value of $\hat{H}[s]$ (Figure 3B,C). As the size N of the data increases, the NML enters the well-sampled regime where $\hat{H}[k]\simeq\hat{H}[s]$, indicating that the data processing inequality [18] is saturated. In this regime, typical samples are those which maximize the entropy $\hat{H}[s]$.

2.1.3. Sherrington-Kirkpatrick Model

In the following sections, we extend our findings to systems of interacting variables (graphical models) and discuss the properties of typical samples drawn from the corresponding NML distribution. We shall first consider a model in which the observed variables interact directly (the Sherrington-Kirkpatrick model) and then restricted Boltzmann machines, where the variables interact indirectly through hidden variables.
In this section, $s = (s_1,\ldots,s_n)$ is a configuration of n spins $s_i\in\{\pm1\}$. In the Sherrington-Kirkpatrick (SK) model, the distribution of s includes all interactions up to two-body:
$P(s|\mathbf{J},\mathbf{h}) = \frac{1}{Z(\mathbf{J},\mathbf{h})}\exp\Big(\sum_i h_i s_i + \sum_{i<j}J_{ij}s_i s_j\Big), \qquad s = (s_1,\ldots,s_n),$
where the partition function
$Z(\mathbf{J},\mathbf{h}) = \sum_{s_1=\pm1}\cdots\sum_{s_n=\pm1}\exp\Big(\sum_i h_i s_i + \sum_{i<j}J_{ij}s_i s_j\Big)$
is a normalization constant which depends on the pairwise couplings $\mathbf{J}$ (with $J_{ij} = J_{ji}$ the coupling strength between $s_i$ and $s_j$) and on the external local fields $\mathbf{h}$. Thus, given a sample $\hat{s} = (s^{(1)},\ldots,s^{(N)})$ of N observations, the likelihood reads
$P(\hat{s}|\mathbf{J},\mathbf{h}) = \exp\Big(N\sum_i h_i m_i + N\sum_{i<j}J_{ij}c_{ij} - N\log Z(\mathbf{J},\mathbf{h})\Big),$
where $m_i = \frac{1}{N}\sum_{l=1}^{N}s_i^{(l)}$ and $c_{ij} = \frac{1}{N}\sum_{l=1}^{N}s_i^{(l)}s_j^{(l)}$ are the magnetizations and pairwise correlations, respectively. Note that all the needed information about the SK model is encapsulated in the free energy, $\phi(\mathbf{J},\mathbf{h}) = \log Z(\mathbf{J},\mathbf{h})$. Indeed, the maximum likelihood estimators of the couplings, $\hat{\mathbf{J}}$, and local fields, $\hat{\mathbf{h}}$, are the solutions of the self-consistency equations
$\frac{\partial\phi(\mathbf{J},\mathbf{h})}{\partial h_i} = m_i, \qquad \frac{\partial\phi(\mathbf{J},\mathbf{h})}{\partial J_{ij}} = c_{ij}, \qquad i,j = 1,\ldots,n.$
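For small n, these self-consistency equations can be solved by exact enumeration of the $2^n$ states and gradient ascent on the log-likelihood. The sketch below is our illustration (function and variable names are hypothetical), feasible only for small n; it is not the authors' inference code.

```python
import itertools
import numpy as np

def fit_sk_exact(m, c, n, lr=0.1, n_iter=5000):
    """Maximum-likelihood fit of fields h and couplings J for a small SK model
    by exact enumeration of the 2^n states (gradient ascent on the likelihood).
    m: empirical magnetizations (n,), c: empirical pairwise correlations (n, n)."""
    states = np.array(list(itertools.product([-1, 1], repeat=n)))   # (2^n, n)
    h = np.zeros(n)
    J = np.zeros((n, n)) 
    iu = np.triu_indices(n, k=1)
    for _ in range(n_iter):
        # Boltzmann weights of every state under the current parameters
        energy = states @ h + np.einsum('ki,kj,ij->k', states, states, np.triu(J, 1))
        p = np.exp(energy - energy.max())
        p /= p.sum()
        # model averages <s_i> and <s_i s_j>
        m_model = p @ states
        c_model = np.einsum('k,ki,kj->ij', p, states, states)
        # gradient of the per-data-point log-likelihood
        h += lr * (m - m_model)
        J[iu] += lr * (c[iu] - c_model[iu])
    return h, J
```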
The universal code for the SK model then reads as
$\bar{P}(\hat{s}) = \exp\Big(N\Big[\sum_i \hat{h}_i m_i + \sum_{i<j}\hat{J}_{ij}c_{ij} - \phi(\hat{\mathbf{J}},\hat{\mathbf{h}})\Big] - \bar{\mathcal{R}}\Big).$
However, unlike for the Dirichlet and paramagnet models, the UC partition function, $e^{\bar{\mathcal{R}}}$, of the SK model is analytically intractable. (For SK models which possess particular structures, a calculation of the UC partition function has been done in [23].) To this end, we resort to a Markov chain Monte Carlo (MCMC) approach to sample the universal codes (see Appendix C.1). Figure 4A,C show the properties of the typical samples drawn from the universal codes of the SK model in Equation (42).

2.1.4. Restricted Boltzmann Machines

We consider a restricted Boltzmann machine (RBM) in which a layer of $n_v$ visible boolean units, $\mathbf{v} = (v_1,\ldots,v_{n_v})$, interacts with a layer of $n_h$ hidden boolean units, $\mathbf{h} = (h_1,\ldots,h_{n_h})$, where $v_i, h_j\in\{0,1\}$. The probability distribution can be written as
$P(\mathbf{v},\mathbf{h}|\theta=(\mathbf{a},\mathbf{b},\mathbf{w})) = \frac{1}{Z(\theta)}\exp\Big(\sum_{i=1}^{n_v}a_i v_i + \sum_{j=1}^{n_h}b_j h_j + \sum_{i=1}^{n_v}\sum_{j=1}^{n_h}v_i w_{ij}h_j\Big)$
where the partition function
$Z(\theta) = \sum_{v_1=0,1}\cdots\sum_{v_{n_v}=0,1}\;\sum_{h_1=0,1}\cdots\sum_{h_{n_h}=0,1}\exp\Big(\sum_{i=1}^{n_v}a_i v_i + \sum_{j=1}^{n_h}\Big(b_j + \sum_{i=1}^{n_v}v_i w_{ij}\Big)h_j\Big)$
is a function of the parameters $\theta$, where $w_{ij}$ is the interaction strength between $v_i$ and $h_j$, and $\mathbf{a}$ and $\mathbf{b}$ are the local fields acting on the visible and hidden units, respectively. Because the hidden units $\mathbf{h}$ are mutually independent given the visible units, we can factorize and then marginalize the sum over the hidden variables to obtain the distribution of a single observation $\mathbf{v}$ as
$P(\mathbf{v}|\theta) = \frac{1}{Z(\theta)}\exp\Big(\sum_{i=1}^{n_v}a_i v_i + \sum_{j=1}^{n_h}\log\Big(1 + e^{\sum_{i=1}^{n_v}v_i w_{ij} + b_j}\Big)\Big).$
Then, the probability distribution of a sample, $\hat{\mathbf{v}} = (\mathbf{v}^{(1)},\ldots,\mathbf{v}^{(N)})$, of N observations is simply
$P(\hat{\mathbf{v}}|\theta) = \prod_{k=1}^{N}P(\mathbf{v}^{(k)}|\theta).$
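The marginalization over the hidden units can be checked numerically on a tiny instance: the expression for $P(\mathbf{v}|\theta)$ above (written in softplus form for 0/1 hidden units) coincides with a brute-force sum over all hidden configurations. A minimal sketch of this check (ours, with random parameters):

```python
import itertools
import numpy as np

def log_p_visible(v, a, b, W):
    """log of the unnormalized marginal P(v|theta):
    a.v + sum_j log(1 + exp(b_j + v.W_j)), for 0/1 hidden units."""
    return v @ a + np.sum(np.logaddexp(0.0, b + v @ W))

def log_p_visible_bruteforce(v, a, b, W):
    """Same quantity by explicit summation over all hidden configurations."""
    total = -np.inf
    for h in itertools.product([0, 1], repeat=len(b)):
        h = np.array(h, dtype=float)
        total = np.logaddexp(total, v @ a + h @ b + v @ W @ h)
    return total

# tiny random instance: the two expressions agree up to floating point error
rng = np.random.default_rng(2)
n_v, n_h = 4, 3
a, b, W = rng.normal(size=n_v), rng.normal(size=n_h), rng.normal(size=(n_v, n_h))
v = rng.integers(0, 2, size=n_v).astype(float)
assert np.isclose(log_p_visible(v, a, b, W), log_p_visible_bruteforce(v, a, b, W))
```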
The parameters, $\hat{\theta}$, can be estimated by maximizing the likelihood using the Contrastive Divergence (CD) algorithm [24,25] (see Appendix C.2). Once the maximum likelihood parameters have been inferred, the universal code for the RBM can be built as
$\bar{P}(\hat{\mathbf{v}}) = e^{-\bar{\mathcal{R}}}\prod_{k=1}^{N}P(\mathbf{v}^{(k)}|\hat{\theta}).$
As in the SK model, the UC partition function, $e^{\bar{\mathcal{R}}}$, of the RBM cannot be computed analytically. To this end, we also resort to an MCMC approach to sample the universal codes (see Appendix C.1). Figure 4B,D show the properties of the typical samples drawn from the universal codes of the RBM in Equation (47).
Taken together, we see that even for models that incorporate interactions, the typical samples of the NML (i) have broad frequency distributions and (ii) achieve values of $\hat{H}[k]$ close to the maximum, given $\hat{H}[s]$. Due to computational constraints, we only present results for $N = 10^3$; however, we expect that increasing N will only shift the NML towards the well-sampled regime.

2.2. Large Deviations of the Universal Codes Exhibit Phase Transitions

In this section, we focus on the distribution of the resolution $\hat{H}[s]$ for samples $\hat{s}$ drawn from $\bar{P}$. We note that
$\hat{H}[s] = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{k_{s^{(i)}}}{N}$
has the form of an empirical average. Hence, we expect it to attain a given value for typical samples drawn from $\bar{P}$. This also suggests that the probability to draw samples with resolution $\hat{H}[s] = E$ different from the typical value has the large deviation form $P\{\hat{H}[s]=E\}\sim e^{-NI(E)}$, to leading order for $N\gg1$. In order to establish this result and to compute the function $I(E)$, as in [26,27], we observe that
$P\{\hat{H}[s]=E\} = \sum_{\hat{s}}\bar{P}(\hat{s})\,\delta\big(\hat{H}[s]-E\big)$
$= N\int\frac{dq}{2\pi}\sum_{\hat{s}}\bar{P}(\hat{s})\, e^{iqN(\hat{H}[s]-E)},$
where we used the integral representation of the δ-function and $\bar{P}(\hat{s})$ is the NML distribution in Equation (7). Upon defining
$\sum_{\hat{s}}\bar{P}(\hat{s})\, e^{iqN\hat{H}[s]} = e^{N\phi(iq)},$
let us assume, as in the Gärtner-Ellis theorem [26], that $\phi(iq)$ remains finite for $N\gg1$ for all q in the complex plane. Then Equation (49) can be evaluated by a saddle point integration
$P\{\hat{H}[s]=E\} = N\int\frac{d\alpha}{2\pi}\, e^{-N[i\alpha E - \phi(i\alpha)]}$
$\simeq e^{-N[\beta E - \phi(\beta)]},$
where we account only for the leading order. β is related to the saddle point value $q^* = -i\beta$ that dominates the integral, and it is given by the solution of the saddle point condition
$E = \frac{d\phi(\beta)}{d\beta}.$
Equation (52) shows that the function I ( E ) is the Legendre transform of ϕ ( β ) , i.e.,
$I(E) = \beta E - \phi(\beta)$
with $\beta(E)$ given by the condition (53), as in the Gärtner-Ellis theorem [26]. A direct calculation from the definition in Equation (50) reveals that Equation (53) can also be written as
$E = \sum_{\hat{s}}\bar{P}_\beta(\hat{s})\,\hat{H}[s],$
which is the average of $\hat{H}[s]$ over a "tilted" probability distribution [26]
$\bar{P}_\beta(\hat{s}) = \bar{P}(\hat{s})\, e^{N\left[\beta\hat{H}[s] - \phi(\beta)\right]};$
hence β arises as the Lagrange multiplier enforcing the condition $\hat{H}[s] = E$. Conversely, when $\beta(E)$ is fixed by the condition in Equation (53), samples drawn from $\bar{P}_\beta$ have $\hat{H}[s]\simeq E$. In other words, $\bar{P}_\beta$ describes how large deviations with $\hat{H}[s] = E$ are realized. Therefore, typical samples that realize such large deviations can be obtained by sampling the distribution $\bar{P}_\beta(\hat{s})$ in Equation (56). Figure 5 shows that, for Dirichlet models, samples obtained from $\bar{P}_\beta$ exhibit a sharp transition at $\beta = 0$. The resolution (see green lines in Figure 5) sharply vanishes for negative values of β as a consequence of the fact that the distribution localizes on samples where almost all outcomes coincide, i.e., $s^{(i)} = \bar{s}$. This is evidenced by the fact that the maximal frequency $k_{\bar{s}} = \max_s k_s$ approaches N very fast (see purple lines in Figure 5). In other words, $\beta = 0$ marks a localization transition where the symmetry between the states in $\chi$ is broken, because one state $\bar{s}$ is sampled an extensive number of times, $k_{\bar{s}}\simeq N$.
One direct way to see this is to consider the Dirichlet model and use the “tilted” distribution in Equation (56) to compute the distribution
$q_\beta(k|z) = \frac{1}{\mathcal{N}(z)}\frac{k^{(1-\beta)k}\, e^{-(1+z)k}}{k!}$
of $k_s$, following the same steps leading to Equation (19), where again z is fixed by the condition $\sum_k q_\beta(k|z)\,k = N/S$. For $\beta\geq0$, we again find, as in Equation (22), that the $k_s$ can be considered as independent draws from the same distribution $q_\beta(k|z)$. For $\beta<0$, we find that the distribution $q_\beta(k|z)$ develops a sharp maximum at $k = N$, indicating that, as mentioned above, the sample concentrates on one state $\bar{s}$.
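The localization at $\beta<0$ can be seen directly from $q_\beta(k|z)$. The following sketch (ours, with an arbitrary choice of z) evaluates the tilted distribution on $k = 0,\ldots,N$ and shows that its mode jumps to $k = N$ as soon as β becomes negative.

```python
import numpy as np
from scipy.special import gammaln

def q_beta(beta, z, N):
    """Tilted frequency distribution q_beta(k|z) on k = 0..N: unnormalized
    weights k^{(1-beta)k} e^{-(1+z)k} / k!, then normalized (numerical sketch)."""
    k = np.arange(1, N + 1)
    logw = np.concatenate(([0.0],
                           (1 - beta) * k * np.log(k) - (1 + z) * k - gammaln(k + 1)))
    w = np.exp(logw - logw.max())
    return w / w.sum()

N = 1000
for beta in (0.5, 0.0, -0.1):
    q = q_beta(beta, z=0.05, N=N)
    print(beta, np.argmax(q))   # for beta < 0 the mode jumps to k = N (localization)
```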
This behavior is generic whenever the underlying model $f(s|\theta)$ itself localizes for certain values $\bar{\theta}$ of the parameters, i.e., when $f(s|\bar{\theta}) = \delta_{s,\bar{s}}$. In order to see this, notice that, in general, we can write
$f(\hat{s}|\hat{\theta}(\hat{s})) = \prod_s f(s|\hat{\theta}(\hat{s}))^{k_s}.$
Thus, by inserting the identity $e^{-N\hat{H}[s]+N\hat{H}[s]} = 1$, the NML distribution in Equation (7) can be re-cast as
$\bar{P}(\hat{s}) = e^{-N\hat{H}[s] - N D_{KL}(\hat{p}\,\|\,\hat{\theta}) - \bar{\mathcal{R}}}$
where $\hat{p}_s = k_s/N$ is the empirical distribution and
$D_{KL}(\hat{p}\,\|\,\hat{\theta}) = \sum_s \hat{p}_s\log\frac{\hat{p}_s}{f(s|\hat{\theta}(\hat{s}))}$
is a Kullback-Leibler divergence.
Now, we observe that
$e^{N\phi(\beta)} = e^{-\bar{\mathcal{R}}}\sum_{\hat{s}} e^{-(1-\beta)N\hat{H}[s] - N D_{KL}(\hat{p}\,\|\,\hat{\theta})}$
$\geq e^{-\bar{\mathcal{R}}}\sum_{\hat{s}} e^{-(1-\beta)N\hat{H}[s] - N D_{KL}(\hat{p}\,\|\,\theta_0)}$
$= e^{-\bar{\mathcal{R}}}$
where the inequality in Equation (61) derives from the fact that $\hat{\theta}(\hat{s})$, the maximum likelihood estimator for the sample $\hat{s}$, is replaced by a generic value $\theta_0$, and consequently $D_{KL}(\hat{p}\,\|\,\hat{\theta})\leq D_{KL}(\hat{p}\,\|\,\theta_0)$. The equality in Equation (62), instead, derives from the choice $\theta_0 = \bar{\theta}$ such that $f(s|\bar{\theta}) = \delta_{s,\bar{s}}$. Under this choice, only the term corresponding to "localized" samples, where $s^{(i)} = \bar{s}$ for all points in the sample, survives in the sum over $\hat{s}$. For such localized samples, $\hat{H}[s] = D_{KL}(\hat{p}\,\|\,\theta_0) = 0$, hence Equation (62) follows.
Because of the logarithmic dependence of the regret R ¯ on N (see Equation (9)), Equation (62) implies that, for all β ,
$\phi(\beta) \geq -\bar{\mathcal{R}}/N \to 0$
for $N\gg1$. Given that $\hat{H}[s]\geq0$ in Equation (55), we have $E\geq0$, and therefore Equation (53) implies that $\phi(\beta)$ is a non-decreasing function of β. In addition, $\phi(0) = 0$ by Equation (50). Taken together, these facts require that $\phi(\beta) = 0$ for all values $\beta\leq0$. On the other hand, for $\beta>0$, the function $\phi(\beta)$ is analytic with all finite derivatives, which correspond to higher moments of $\hat{H}[s]$ under $\bar{P}_\beta$. Therefore, $\beta = 0$, which corresponds to the typical behavior of the NML, coincides with a second order phase transition point, because the function $\phi(\beta)$ exhibits a discontinuity in the second derivative. In terms of $\bar{P}_\beta(\hat{s})$, the phase transition separates a region ($\beta\geq0$) where all samples $\hat{s}$ have a finite probability from a region ($\beta<0$) where only one sample, the one with $s^{(i)} = \bar{s}$ for all i, has non-zero probability and $\hat{H}[s] = 0$.
The phase transition is a natural consequence of the fact that the NML provides efficient coding of samples generated from $f(s|\theta)$. It states that codes $\bar{P}_\beta$ that achieve a compression different from the one achieved by the NML exist only at higher coding costs. Codes with lower coding cost describe only non-random samples that correspond to deterministic models $f(s|\bar{\theta}) = \delta_{s,\bar{s}}$.

3. Discussion

The aim of this paper is to elucidate the properties of efficient representations of data corresponding to the universal codes that arise in MDL. Taking the NML as a generative model, we find that typical samples are characterized by broad frequency distributions and that they achieve values of the relevance which are close to the maximal possible $\hat{H}[k]$.
In addition, we find that samples generated from the NML are critical in a very precise sense. If we force the NML to use fewer bits to encode samples, then the code localizes on deterministic samples. This is a consequence of the fact that if there were codes that required fewer bits, then the NML would not be optimal.
This contributes to the discussion on the ubiquitous finding of statistical criticality [1,4] by providing a clear understanding of its origin. It suggests that statistical criticality can be related to a precise second order phase transition in terms of large deviations of the coding cost. This phase transition separates random samples that span a large range of possible outcomes (the set $\chi$ in the models discussed above) from deterministic ones, where one outcome occurs most of the time. The phase transition is accompanied by a spontaneous breaking of the permutation symmetry between outcomes. The frequencies of outcomes in the symmetric phase ($\beta\geq0$) are generated as independent draws from the same distribution, which is sharply peaked for $\beta>0$, as can be checked in the case of the Dirichlet model. Instead, for $\beta<0$, only one state is sampled. In the typical case, $\beta = 0$, the symmetry between outcomes is weakly broken, as there are outcomes that occur more frequently than others. At $\beta = 0$, the samples maintain the maximal discriminative power over outcomes. This type of phase transition is very generic, and it occurs in large deviations whenever the underlying distribution develops fat tails (see e.g., [27]).
This leads to the conjecture that broad distributions arise as a consequence of efficient coding. More precisely, broad distributions arise when the variables sampled are relevant, i.e., when they provide an optimal representation. This is precisely the point which has been made in [7,8,9]. The results in the present paper add a new perspective whereby maximally informative samples can be seen as universal codes.

Author Contributions

R.J.C., M.M. and Y.R. conceptualized the research, performed the analysis and wrote the paper.

Funding

This work was supported by the Kavli Foundation and the Centre of Excellence scheme of the Research Council of Norway—Centre for Neural Computation (grant number 223262).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MDL: minimum description length
NML: normalized maximum likelihood
MIS: maximally informative sample
SK: Sherrington-Kirkpatrick
RBM: restricted Boltzmann machine
CD: contrastive divergence
PCD: persistent contrastive divergence
MCMC: Markov chain Monte Carlo

Appendix A. Derivation for the Parametric Complexity

In order to compute the parametric complexity, given in Equation (8), let us consider the integral $\int d\theta\, f(\hat{s}|\theta)g(\theta)$ for a generic function $g(\theta)$. For $N\gg1$, the integral is dominated by the point $\theta = \hat{\theta}(\hat{s})$ that maximizes $\log f(\hat{s}|\theta)$, and it can be computed by the saddle point method. Performing a Taylor expansion around the maximum likelihood parameters, $\hat{\theta}(\hat{s})$, one finds (up to leading orders in N)
$\log f(\hat{s}|\theta) = \log f(\hat{s}|\hat{\theta}(\hat{s})) - \frac{1}{2}\sum_{i,j}N(\theta_i-\hat{\theta}_i)\, I_{i,j}(\hat{\theta})\,(\theta_j-\hat{\theta}_j) + O\big((\theta-\hat{\theta})^3\big),$
where
$I_{i,j}(\hat{\theta}) = -\frac{1}{N}\frac{\partial^2\log f(\hat{s}|\theta)}{\partial\theta_i\,\partial\theta_j}\bigg|_{\theta=\hat{\theta}}$
$= -\sum_{s\in\chi}\frac{k_s}{N}\frac{\partial^2\log f(s|\theta)}{\partial\theta_i\,\partial\theta_j}\bigg|_{\theta=\hat{\theta}}.$
Note that for exponential families, the Hessian of the log-likelihood is independent of the data, and hence it coincides with the Fisher Information matrix [22]
$I_{i,j}(\theta) = -\sum_{s\in\chi} f(s|\theta)\frac{\partial^2\log f(s|\theta)}{\partial\theta_i\,\partial\theta_j}.$
The integral can then be computed by Gaussian integration, as
$\int d\theta\, f(\hat{s}|\theta)g(\theta) \simeq f(\hat{s}|\hat{\theta}(\hat{s}))\, g(\hat{\theta}(\hat{s}))\int d\theta\, e^{-\frac{N}{2}\sum_{i,j}(\theta_i-\hat{\theta}_i)I_{ij}(\hat{\theta})(\theta_j-\hat{\theta}_j)}$
$= f(\hat{s}|\hat{\theta}(\hat{s}))\, g(\hat{\theta}(\hat{s}))\left(\frac{2\pi}{N}\right)^{k/2}\frac{1}{\sqrt{\det I(\hat{\theta})}},$
where k is the number of parameters. If we choose g ( θ ) to be
$g(\theta) = \left(\frac{N}{2\pi}\right)^{k/2}\sqrt{\det I(\theta)}$
and take a sum over all samples $\hat{s}$ on both sides of Equation (A5), Equation (A6) becomes
$\sum_{\hat{s}} f(\hat{s}|\hat{\theta}(\hat{s})) \simeq \sum_{\hat{s}}\left(\frac{N}{2\pi}\right)^{k/2}\int d\theta\, f(\hat{s}|\theta)\sqrt{\det I(\theta)}$
$= \left(\frac{N}{2\pi}\right)^{k/2}\int\sqrt{\det I(\theta)}\, d\theta.$
Hence, the parametric complexity, $\bar{\mathcal{R}} = \log\sum_{\hat{s}} f(\hat{s}|\hat{\theta}(\hat{s}))$, is asymptotically given by Equation (9) when $N\gg1$.
Notice also that $\bar{P}(\hat{s})$ induces a distribution over the space of parameters $\theta$. With the choice
$g(\theta) = \left(\frac{N}{2\pi}\right)^{k/2}\sqrt{\det I(\theta)}\;\delta(\theta-\theta_0),$
the same procedure as above shows that
$\sum_{\hat{s}}\bar{P}(\hat{s})\,\delta\big(\hat{\theta}(\hat{s})-\theta_0\big) = e^{-\bar{\mathcal{R}}}\sum_{\hat{s}} f(\hat{s}|\hat{\theta}(\hat{s}))\,\delta\big(\hat{\theta}(\hat{s})-\theta_0\big)$
$= e^{-\bar{\mathcal{R}}}\left(\frac{N}{2\pi}\right)^{k/2}\sqrt{\det I(\theta_0)}$
$= \frac{\sqrt{\det I(\theta_0)}}{\int d\theta\,\sqrt{\det I(\theta)}},$
which is the Jeffreys prior.
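For a single 0/1 variable (a one-parameter Bernoulli model), this identity can be checked numerically: the distribution of $\hat{\theta} = \ell/N$ induced by the NML approaches the Jeffreys prior Beta(1/2, 1/2). A short sketch (ours, not the authors' code), reusing the same single-variable NML weights as in the earlier checks:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import beta

N = 2000
l = np.arange(N + 1)
with np.errstate(divide='ignore', invalid='ignore'):
    log_w = (gammaln(N + 1) - gammaln(l + 1) - gammaln(N - l + 1)
             + np.where(l > 0, l * np.log(l / N), 0.0)
             + np.where(l < N, (N - l) * np.log(1 - l / N), 0.0))
P = np.exp(log_w)
P /= P.sum()                               # induced distribution of theta_hat = l/N
theta = l[1:-1] / N
jeffreys = beta(0.5, 0.5).pdf(theta) / N   # Jeffreys density per step of size 1/N
bulk = (theta > 0.05) & (theta < 0.95)
print(np.abs(P[1:-1][bulk] / jeffreys[bulk] - 1).max())   # small relative deviation
```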

Appendix B. Calculating the Parametric Complexity

In this section, we calculate the parametric complexity for the Dirichlet model in the regime $\rho = N/S\gg1$, where N is the number of observations in the sample $\hat{s}$ and S is the size of the state space $\chi$, and for the paramagnetic Ising model.

Appendix B.1. Dirichlet Model

In the regime where $\rho\gg1$ and k is large, such that we can employ Stirling's approximation, $k!\simeq\sqrt{2\pi k}\, k^k e^{-k}$, the normalization can be calculated as
$\sum_{k=0}^{\infty}\frac{k^k e^{-k} e^{-z^*(\rho)k}}{k!} \simeq \sum_{k=0}^{\infty}\frac{e^{-z^*(\rho)k}}{\sqrt{2\pi k}}$
$\simeq \int_0^{\infty}\frac{e^{-z^*(\rho)k}}{\sqrt{2\pi k}}\, dk$
$= \frac{1}{\sqrt{2\pi}}\sqrt{\frac{\pi}{z^*(\rho)}}$
$= \frac{1}{\sqrt{2z^*(\rho)}}.$
Similarly, we can also calculate
$\sum_{k=0}^{\infty}\frac{k^{k+1} e^{-k} e^{-z^*(\rho)k}}{k!} \simeq \sum_{k=0}^{\infty}\sqrt{\frac{k}{2\pi}}\, e^{-z^*(\rho)k}$
$\simeq \int_0^{\infty}\sqrt{\frac{k}{2\pi}}\, e^{-z^*(\rho)k}\, dk$
$= \frac{1}{\sqrt{2\pi}}\frac{\sqrt{\pi}}{2\,(z^*(\rho))^{3/2}}$
$= \frac{1}{(2z^*(\rho))^{3/2}}$
and thus, using the saddle point condition $\langle k\rangle_{z^*} = \rho$, the saddle point value $z^*$ can be evaluated as
$z^*(\rho) \simeq \frac{1}{2\rho}.$
In the same regime, given the determinant of the Fisher information matrix for the Dirichlet model,
$\det I(\theta) = \prod_{s\in\chi}\frac{1}{\theta_s},$
the parametric complexity can be approximated as
e R ¯ N 2 π ( S 1 ) / 2 d θ det I ( θ )
= N 2 π ( S 1 ) / 2 Γ ( 1 2 ) S Γ ( S 2 )
e S 2 ( 1 + log ρ ) 2 ρ
which, together with Equation (20) and the fact that the variance is $\langle k^2\rangle_{z^*} - \langle k\rangle_{z^*}^2 = 2\rho^2$, implies that $\Phi(z^*(\rho)) = \frac{1}{2}(1+\log\rho)$.

Appendix B.2. Paramagnet Model

The parametric complexity of the paramagnetic Ising model, given $\bar{P}(m)$ in Equation (34), is
$e^{\bar{\mathcal{R}}} = \sum_{M=-N}^{N}\binom{N}{\frac{N-M}{2}}\, e^{M\tanh^{-1}(M/N) - N\log 2\cosh\tanh^{-1}(M/N)},$
where $M = -N, -N+2,\ldots,N-2,N$ runs over $N+1$ values. When $N\gg1$, the magnetization, $m = M/N$, can be treated as a continuous variable and consequently the sum can be approximated by an integral: $\sum_M \simeq \frac{N}{2}\int_{-1}^{1}dm$. Hence, by using the identities $\tanh^{-1}(m) = \frac{1}{2}\log\frac{1+m}{1-m}$, $\cosh\tanh^{-1}(m) = \frac{1}{\sqrt{1-m^2}}$ and $K!\simeq K^K e^{-K}\sqrt{2\pi K}$, one finds that
$e^{\bar{\mathcal{R}}} \simeq \frac{N}{2}\int_{-1}^{1}\sqrt{\frac{2}{\pi N(1-m^2)}}\, dm$
$= \sqrt{\frac{\pi N}{2}}.$

Appendix C. Simulation Details

Appendix C.1. Sampling Universal Codes through Markov Chain Monte Carlo

Unlike the Dirichlet model and the independent spin model, analytic calculations for the Sherrington-Kirkpatrick (SK) model and the restricted Boltzmann machine (RBM) are generally not possible, because the partition function Z, and consequently the UC partition function $e^{\bar{\mathcal{R}}}$, is computationally intractable. In order to sample the NML for these graphical models, we turn to a Markov chain Monte Carlo (MCMC) approach in which the transition probability, $P(\hat{s}\to\hat{s}')$, is built using the following heuristics (a minimal sketch of the procedure, for a tractable special case, is given after the list):
  • Starting from the sample $\hat{s}$, we calculate the maximum likelihood estimates, $\hat{\theta}(\hat{s})$, of the parameters of the model $p(\hat{s}|\theta)$, either by solving Equation (41) for the SK model or by Contrastive Divergence (CD$_\kappa$) [24,25] for the RBM (see Appendix C.2).
  • We generate a new sample, $\hat{s}'$, from $\hat{s}$ by flipping a spin in each of r randomly selected points $s^{(i)}$ of the sample. The number of selected spins, r, must be chosen carefully: r must be large enough to ensure fast mixing but small enough that the newly inferred model, $p(\hat{s}'|\theta')$, is not too far from the starting model, $p(\hat{s}|\theta)$.
  • The maximum likelihood estimators, $\hat{\theta}(\hat{s}')$, for the new sample are calculated as in Step 1.
  • Compute
    $\Delta E = \frac{1}{N}\left[\log p(\hat{s}'\,|\,\hat{\theta}(\hat{s}')) - \log p(\hat{s}\,|\,\hat{\theta}(\hat{s}))\right]$
    and accept the move $\hat{s}\to\hat{s}'$ with probability $\min\!\left(e^{N\Delta E},\, 1\right)$.
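The sketch below illustrates this procedure in a case where Step 1 is closed-form, namely a model of independent ±1 spins with $\hat{h}_i = \tanh^{-1}m_i$; for the SK model or the RBM, the maximum likelihood fit would be replaced by the solvers described in this appendix. It is our illustration, not the authors' script, and the variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def nml_loglik(sample):
    """log f(sample | theta_hat(sample)) for independent +-1 spins: the ML fields
    are h_i = atanh(m_i), giving the plug-in log-likelihood of the whole sample."""
    N, n = sample.shape
    m = sample.mean(axis=0).clip(-1 + 1e-12, 1 - 1e-12)
    h = np.arctanh(m)
    return N * np.sum(m * h - np.log(2 * np.cosh(h)))

def sample_nml_mcmc(n=10, N=1000, r=5, n_steps=20000):
    """Metropolis chain over whole samples s_hat, targeting the NML weight
    f(s_hat | theta_hat(s_hat)); the normalization cancels in the acceptance ratio."""
    s = rng.choice([-1, 1], size=(N, n))
    logl = nml_loglik(s)
    for _ in range(n_steps):
        s_new = s.copy()
        rows = rng.integers(0, N, size=r)    # r randomly selected data points
        cols = rng.integers(0, n, size=r)    # one spin flipped in each of them
        s_new[rows, cols] *= -1
        logl_new = nml_loglik(s_new)
        if np.log(rng.random()) < logl_new - logl:   # Metropolis acceptance
            s, logl = s_new, logl_new
    return s
```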

Appendix C.2. Estimating RBM Parameters through Contrastive Divergence

Given a sample, $\hat{\mathbf{v}} = (\mathbf{v}^{(1)},\ldots,\mathbf{v}^{(N)})$, of N observations, the log-likelihood of the restricted Boltzmann machine (RBM) is given by
$\log\mathcal{L}(\theta) = \sum_{k=1}^{N}\log\sum_{\mathbf{h}}P(\mathbf{v}^{(k)},\mathbf{h}|\theta).$
The inference of the parameters, $\theta$, proceeds by updating $\theta$ such that the log-likelihood, $\log\mathcal{L}(\theta)$, is maximized. The update rule for the parameters is given by
$\Delta\theta = \frac{\epsilon}{N}\frac{\partial\log\mathcal{L}(\theta)}{\partial\theta}$
where $\epsilon$ is the learning rate. The corresponding gradients with respect to the parameters $\mathbf{w}$, $\mathbf{a}$ and $\mathbf{b}$ can then be written down, respectively, as
$\frac{\partial\log\mathcal{L}(\theta)}{\partial w_{ij}} = \sum_{k=1}^{N}\Big[\sum_{\mathbf{h}} v_i^{(k)}h_j\, P(\mathbf{h}|\mathbf{v}^{(k)},\theta) - \sum_{\mathbf{v},\mathbf{h}} v_i h_j\, P(\mathbf{v},\mathbf{h}|\theta)\Big]$
$\frac{\partial\log\mathcal{L}(\theta)}{\partial a_i} = \sum_{k=1}^{N}\Big[\sum_{\mathbf{h}} v_i^{(k)}\, P(\mathbf{h}|\mathbf{v}^{(k)},\theta) - \sum_{\mathbf{v},\mathbf{h}} v_i\, P(\mathbf{v},\mathbf{h}|\theta)\Big]$
$\frac{\partial\log\mathcal{L}(\theta)}{\partial b_j} = \sum_{k=1}^{N}\Big[\sum_{\mathbf{h}} h_j\, P(\mathbf{h}|\mathbf{v}^{(k)},\theta) - \sum_{\mathbf{v},\mathbf{h}} h_j\, P(\mathbf{v},\mathbf{h}|\theta)\Big]$
where the first terms denote averages over the data distribution while the second terms denote averages over the model distribution.
Here, we use the contrastive divergence (CD) approach, which is a variation of steepest gradient ascent on $\mathcal{L}(\theta)$. Instead of performing the average over the model distribution, CD approximates it by averaging over the distribution obtained after taking $\kappa$ Gibbs sampling steps away from the data distribution.
To do this, we exploit the factorizability of the conditional distributions of the RBM. In particular, the conditional probability for the forward propagation (i.e., sampling the hidden variables given the visible variables) from v to h j reads as
$P(h_j = 1|\mathbf{v},\theta) = \frac{1}{1+\exp\left(-b_j - \sum_i v_i w_{ij}\right)}.$
Similarly, the conditional probability for the backward propagation (i.e., sampling the visible variables from the hidden variables) from h to v i reads as
$P(v_i = 1|\mathbf{h},\theta) = \frac{1}{1+\exp\left(-a_i - \sum_j h_j w_{ij}\right)}.$
The Gibbs sampling is done by propagating a sample, $\mathbf{v}^{(k)} = \mathbf{v}^{(k)}(0)$, forward and backward $\kappa$ times: $\mathbf{v}^{(k)}(0)\to\mathbf{h}^{(k)}(0)\to\mathbf{v}^{(k)}(1)\to\cdots\to\mathbf{h}^{(k)}(\kappa-1)\to\mathbf{v}^{(k)}(\kappa)\to\mathbf{h}^{(k)}(\kappa)$. Thus, the Gibbs sampling approximates the gradient in Equation (A33) as
$\frac{\partial\log\mathcal{L}(\theta)}{\partial w_{ij}} \simeq \sum_{k=1}^{N}\left[v_i^{(k)}(0)\, h_j^{(k)}(0) - v_i^{(k)}(\kappa)\, h_j^{(k)}(\kappa)\right].$
In the CD approach, each parameter update over a batch is called an epoch. While a larger $\kappa$ gives a better approximation of the model averages, it also incurs an additional computational cost. To find the global maximum more efficiently, we randomly divided the samples into mini-batches. This approach introduces stochasticity and consequently reduces the likelihood that the learning algorithm becomes confined in a local optimum. However, a mini-batch approach can result in data-biased sampling. To circumvent this issue, we adopted the Persistent CD (PCD) algorithm, where the Gibbs sampling extends over several epochs, each using different mini-batches. In the PCD approach, the initial visible configuration, $\mathbf{v}^{(k)}(0)$, was set at random for the first mini-batch, but the final configurations, $(\mathbf{v}^{(k)}(\kappa),\mathbf{h}^{(k)}(\kappa))$, of the current batches become the initial configurations for the next mini-batches. In this paper, we performed Gibbs sampling with $\kappa = 10$ steps and updated the parameters, $\theta$, over 2500 epochs at a learning rate $\epsilon = 0.01$ with 200 mini-batches per epoch. For other details regarding the inference of the parameters of the RBM, we refer the reader to [24,25].
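A minimal numpy sketch of a single CD$_\kappa$ update on a mini-batch, following the steps described above (ours, not the authors' script; variable names and defaults are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd_step(V, a, b, W, eps=0.01, kappa=10):
    """One CD_kappa update on a mini-batch V (shape: batch x n_v) of 0/1 vectors;
    a and b are the visible/hidden fields, W the couplings."""
    # forward pass on the data ("positive" statistics)
    ph0 = sigmoid(V @ W + b)
    h = (rng.random(ph0.shape) < ph0).astype(float)
    # kappa alternating Gibbs steps away from the data ("negative" statistics)
    Vk = V
    for _ in range(kappa):
        pv = sigmoid(h @ W.T + a)
        Vk = (rng.random(pv.shape) < pv).astype(float)
        ph = sigmoid(Vk @ W + b)
        h = (rng.random(ph.shape) < ph).astype(float)
    # gradient estimates as in the text above, averaged over the mini-batch
    batch = V.shape[0]
    W += eps * (V.T @ ph0 - Vk.T @ ph) / batch
    a += eps * (V.mean(0) - Vk.mean(0))
    b += eps * (ph0.mean(0) - ph.mean(0))
    return a, b, W
```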

Appendix C.3. Source Codes

All the calculations in this manuscript were done using custom scripts written in Python 3. The source code is available online at https://github.com/rcubero/UniversalCodes.

References

  1. Muñoz, M.A. Colloquium: Criticality and dynamical scaling in living systems. Rev. Mod. Phys. 2018, 90, 031001. [Google Scholar] [CrossRef]
  2. Newman, M.E.J. Power laws, Pareto distributions and Zipf’s law. Contemp. Phys. 2005, 46, 323–351. [Google Scholar] [CrossRef]
  3. Bak, P. How Nature Works: The Science of Self-Organized Criticality; Copernicus: Göttingen, Germany, 1996. [Google Scholar]
  4. Mora, T.; Bialek, W. Are biological systems poised at criticality? J. Stat. Phys. 2011, 144, 268–302. [Google Scholar] [CrossRef]
  5. Simini, F.; González, M.C.; Maritan, A.; Barabási, A.L. A universal model for mobility and migration patterns. Nature 2012, 484, 96. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Schwab, D.J.; Nemenman, I.; Mehta, P. Zipf’s law and criticality in multivariate data without fine-tuning. Phys. Rev. Lett. 2014, 113, 068102. [Google Scholar] [CrossRef] [PubMed]
  7. Marsili, M.; Mastromatteo, I.; Roudi, Y. On sampling and modeling complex systems. J. Stat. Mech. Theory Exp. 2013, 9, 1267–1279. [Google Scholar] [CrossRef]
  8. Haimovici, A.; Marsili, M. Criticality of mostly informative samples: A bayesian model selection approach. J. Stat. Mech. Theory Exp. 2015, 10, P10013. [Google Scholar] [CrossRef]
  9. Cubero, R.J.; Jo, J.; Marsili, M.; Roudi, Y.; Song, J. Minimally sufficient representations, maximally informative samples and Zipf’s law. arXiv, 2018; arXiv:1808.00249. [Google Scholar]
  10. Song, J.; Marsili, M.; Jo, J. Resolution and relevance trade-offs in deep learning. arXiv, 2017; arXiv:1710.11324. [Google Scholar]
  11. Grünwald, P.D. The Minimum Description Length Principle; MIT Press: Massachusetts, MA, USA, 2007. [Google Scholar]
  12. Ter Steege, H.; Pitman, N.C.A.; Sabatier, D.; Baraloto, C.; Salomão, R.P.; Guevara, J.E.; Phillips, O.L.; Castilho, C.V.; Magnusson, W.E.; Molino, J.F.; et al. Hyperdominance in the Amazonian tree flora. Science 2013, 342, 1243092. [Google Scholar] [CrossRef] [PubMed]
  13. Condit, R.; Lao, S.; Pérez, R.; Dolins, S.B.; Foster, R.; Hubbell, S. Barro Colorado Forest Census Plot Data (Version 2012). Available online: https://repository.si.edu/handle/10088/20925 (accessed on 1 October 2018).
  14. Combine Your Old LEGO® to Build New Creations. Available online: https://rebrickable.com/ (accessed on 1 October 2018).
  15. Mazzolini, A.; Gherardi, M.; Caselle, M.; Lagomarsino, M.C.; Osella, M. Statistics of shared components in complex component systems. Phys. Rev. X 2018, 8, 021023. [Google Scholar] [CrossRef]
  16. Gama-Castro, S.; Salgado, H.; Santos-Zavaleta, A.; Ledezma-Tejeida, D.; Muñiz-Rascado, L.; García-Sotelo, J.S.; Alquicira-Hernández, K.; Martínez-Flores, I.; Pannier, L.; Castro-Mondragón, J.A.; et al. Regulondb version 9.0: High-level integration of gene regulation, coexpression, motif clustering and beyond. Nucleic Acids Res. 2015, 44, 133–143. [Google Scholar] [CrossRef] [PubMed]
  17. Balakrishnan, R.; Park, J.; Karra, K.; Hitz, B.C.; Binkley, G.; Hong, E.L.; Sullivan, J.; Micklem, G.; Cherry, J.M. Yeastmine—An integrated data warehouse for Saccharomyces cerevisiae data as a multipurpose tool-kit. Database 2012, 2012, bar062. [Google Scholar] [CrossRef] [PubMed]
  18. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: New York, NY, USA, 2012. [Google Scholar]
  19. Grünwald, P.D. A tutorial introduction to the minimum description length principle. arXiv, 2004; arXiv:math/0406077. [Google Scholar]
  20. Shtarkov, Y.M. Universal sequential coding of single messages. Transl. Prob. Inf. Transm. 1987, 23, 175–186. [Google Scholar]
  21. Rissanen, J.J. Fisher information and stochastic complexity. IEEE Trans. Inf. Theory 1996, 42, 40–47. [Google Scholar] [CrossRef]
  22. Balasubramanian, V. MDL, Bayesian inference, and the geometry of the space of probability distributions. In Advances in Minimum Description Length: Theory and Applications; Grünwald, P.D., Myung, I.J., Pitt, M.A., Eds.; The MIT Press: Massachusetts, MA, USA, 2005. [Google Scholar]
  23. Beretta, A.; Battistin, C.; de Mulatier, C.; Mastromatteo, I.; Marsili, M. The stochastic complexity of spin models: How simple are simple spin models? arXiv, 2017; arXiv:1702.07549. [Google Scholar]
  24. Hinton, G.E. Training products of experts by minimizing contrastive divergence. Neural Comput. 2002, 14, 1771–1800. [Google Scholar] [CrossRef] [PubMed]
  25. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [PubMed]
  26. Mezard, M.; Montanari, A. Information, Physics, and Computation; Oxford University Press: Oxford, UK, 2009. [Google Scholar]
  27. Filiasi, M.; Livan, G.; Marsili, M.; Peressi, M.; Vesselli, E.; Zarinelli, E. On the concentration of large deviations for fat tailed distributions, with application to financial data. J. Stat. Mech. Theory Exp. 2014, 9, P09030. [Google Scholar] [CrossRef]
Figure 1. Rank plot of the frequencies across a broad range of datasets. Log-log plots of rank versus frequency from diverse datasets: a survey of 4962 species of trees across 116 families sampled from the Amazonian lowlands [12]; a survey of 1053 species of trees across 376 genera and 89 families sampled across a 50 hectare plot in Barro Colorado Island (BCI), Panama [13]; counts of the inclusion of each of 13,001 LEGO parts in 2613 distributed toy sets [14,15]; and the number of genes regulated by each of the 203 transcription factors (TFs) in E. coli [16] and 188 TFs in S. cerevisiae (yeast) [17] through binding with transcription factor binding sites (TFBS).
Figure 2. Properties of the typical samples generated from the NML of the Dirichlet model. (A) Frequency distribution of the typical samples of the Dirichlet NML code. Given the cardinality S of the state space $\chi$, with $S = 1.0\times10^3$ (orange dots), $5.0\times10^3$ (green squares), and $1.0\times10^4$ (red triangles), we compute the average frequency distribution across 100 samples of size $N = 10S$ generated from the Dirichlet NML, such that the average frequency per state, $\rho$, is fixed. This is compared against the theoretical calculation (solid black line) for $q(k|z^*)$ in Equation (19). (B) Degeneracy, $m_k$, of the frequencies, k, in a representative typical sample of length $N = 10^3$ generated from the Dirichlet NML code with average frequency per state $\rho = 100$ (yellow triangles), $\rho = 10$ (orange x-marks) and $\rho = 2$ (red crosses). The corresponding dashed lines depict the best-fit lines. (C,D) $\hat{H}[s]$ versus $\hat{H}[k]$ for the typical samples of the Dirichlet NML code. For a fixed size of the data, N ($N = 10^3$ in C and $N = 10^4$ in D), we have drawn 100 samples from the Dirichlet NML code with $\rho$ ranging from 2 to 100. The results are compared against $\hat{H}[k]$ and $\hat{H}[s]$ for maximally informative samples (MIS, solid black line) and random samples (dashed black lines). For the MIS, the theoretical lower bound is reported [8]. For the random samples, we compute the averages of $\hat{H}[s]$ and $\hat{H}[k]$ over $10^7$ realizations of random distributions of N balls in L boxes, with L ranging from 2 to $10^7$. Here, each box corresponds to one state $s = 1,\ldots,L$ and $k_s$ is the number of balls in box s. Note that all the calculated values of $\hat{H}[k]$ and $\hat{H}[s]$ are normalized by $\log N$.
Figure 3. Properties of typical samples for the NML codes of the paramagnet. (A) Degeneracy, $m_k$, of the frequencies, k, in a representative typical sample of length $N = 10^4$ generated from the NML of a paramagnet with different numbers of independent spins: n = 4 (blue star), n = 12 (red cross) and n = 20 (yellow diamond). The corresponding dashed lines depict the best-fit lines. (B,C) $\hat{H}[k]$ versus $\hat{H}[s]$ of the typical samples generated from the paramagnet NML code for varying sizes of the data, $N = 10^4$ (B) and $N = 10^5$ (C), and for varying numbers of spins, n, ranging from 3 to 20. Given N and n, we compute $\hat{H}[k]$ and $\hat{H}[s]$ over 100 realizations of the NML code of a paramagnet. The results are compared against $\hat{H}[k]$ and $\hat{H}[s]$ for maximally informative samples (solid black line) and random samples (dashed black line) as described in Figure 2. Note that all the calculated $\hat{H}[k]$ and $\hat{H}[s]$ are normalized by $\log N$.
Figure 4. Properties of typical samples for the NML codes of two graphical models: the Sherrington-Kirkpatrick (SK) model and the restricted Boltzmann machine (RBM). Left panels (A,C) show the degeneracy, $m_k$, of the frequency, k, for representative typical samples generated from the NML codes of the SK model (A) and of the RBM with $n_h = 7$ hidden variables (C), for different numbers of (visible) spins, n. The corresponding dashed lines show the best-fit lines. Right panels (B,D) show $\hat{H}[k]$ versus $\hat{H}[s]$ of the typical samples drawn from the NML codes of the SK model (B) and of the RBM with $n_h = 7$ (D), for $N = 10^3$ and for varying numbers of spins, n, ranging from 3 to 12. Given N and n of a graphical model, we compute $\hat{H}[k]$ and $\hat{H}[s]$ for 100 samples drawn from the respective NML codes through a Markov chain Monte Carlo (MCMC) approach (see Appendix C.1). Note that for the RBM, varying $n_h$ does not qualitatively affect the observations made in this paper. As before, $\hat{H}[k]$ and $\hat{H}[s]$ are normalized by $\log N$ and the typical NML samples are compared against maximally informative samples (solid black line) and random samples (dashed black line) as described in Figure 2.
Figure 5. Typical realizations of large deviations from the NML code of the Dirichlet model. For a fixed parameter β, ranging from $\beta = -1$ to $\beta = 1$, samples are obtained from $\bar{P}_\beta$ in Equation (56) for varying length of the dataset, N ($N = 10^4$, solid lines with circle markers; $N = 10^5$, dashed lines with square markers). The resolution $\hat{H}[s]$ normalized by $\log N$ (green lines) and the maximal frequency $k_{\bar{s}}$ normalized by N (purple lines) are calculated as averages over 100 realizations of $\bar{P}_\beta$ given β. The point $\beta = 0$ corresponds to the typical samples realized from the Dirichlet NML code in Equation (12).
