Article

Empirical Lossless Compression Bound of a Data Sequence

1
State Key Laboratory of Mathematical Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
2
School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
Entropy 2025, 27(8), 864; https://doi.org/10.3390/e27080864
Submission received: 15 July 2025 / Revised: 10 August 2025 / Accepted: 11 August 2025 / Published: 14 August 2025
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

We consider the lossless compression bound of any individual data sequence. Conceptually, its Kolmogorov complexity is such a bound yet uncomputable. According to Shannon's source coding theorem, the average compression bound is $nH$, where $n$ is the number of words and $H$ is the entropy of an oracle probability distribution characterizing the data source. The quantity $nH(\hat{\theta}_n)$ obtained by plugging in the maximum likelihood estimate is an underestimate of the bound. Shtarkov showed that the normalized maximum likelihood (NML) distribution is optimal in a minimax sense for any parametric family. Fitting a data sequence by a relevant exponential family, without any a priori distributional assumption, we apply the local asymptotic normality to show that the NML code length is $nH(\hat{\theta}_n) + \frac{d}{2}\log\frac{n}{2\pi} + \log\int_\Theta |I(\theta)|^{1/2}\,d\theta + o(1)$, where $d$ is the dictionary size, $|I(\theta)|$ is the determinant of the Fisher information matrix, and $\Theta$ is the parameter space. We demonstrate that sequentially predicting the optimal code length for the next word via a Bayesian mechanism leads to the mixture code whose length is given by $nH(\hat{\theta}_n) + \frac{d}{2}\log\frac{n}{2\pi} + \log\frac{|I(\hat{\theta}_n)|^{1/2}}{w(\hat{\theta}_n)} + o(1)$, where $w(\theta)$ is a prior. The asymptotics apply not only to discrete symbols but also to continuous data if the code length for the former is replaced by the description length for the latter. The analytical result is exemplified by calculating compression bounds of protein-encoding DNA sequences under different parsing models. Typically, compression is maximized when parsing aligns with amino acid codons, while pseudo-random sequences remain incompressible, as predicted by Kolmogorov complexity. Notably, the empirical bound becomes more accurate as the dictionary size increases.

1. Introduction

The computation of the compression bound of any individual sequence is both a philosophical and a practical problem. It touches on the fundamentals of human intelligence. After several decades of effort, many insights have been gained by experts from various disciplines.
In essence, the bound is the length of the shortest program that prints the sequence on a Turing machine, referred to as the Solomonoff–Kolmogorov–Chaitin algorithmic complexity. Under this framework, if a sequence cannot be compressed by any computer program, it is considered random. On the other hand, if we can compress the sequence using a certain program or coding scheme, then it is not random, and we uncover some pattern or knowledge within the sequence. Nevertheless, Kolmogorov complexity is not computable.
Along another line, the source coding theorem proposed by Shannon [1] claimed that the average shortest code length is no less than $nH$, where $n$ is the number of words and $H$ is the entropy of the source, assuming its distribution can be specified. Although Shannon's probabilistic framework has inspired the invention of some ingenious compression methods, $nH$ is an oracle bound. Some further questions need to be addressed. First, where does the probability distribution come from? A straightforward approach is to infer it from the data themselves. However, in the case of discrete symbols, plugging in the empirical word frequencies $\hat{\theta}_n$ observed in the sequence results in $nH(\hat{\theta}_n)$, which can be shown to be an underestimate of the bound. Second, the word frequencies are counted according to a dictionary. Different dictionaries yield different distributions and thus different codes. What is the criterion for selecting a good dictionary? Third, the behavior of some compression algorithms, such as Lempel–Ziv coding, shows that as the sequence length increases, the size of the dictionary also grows. What is the exact impact of the dictionary size on the compression? Fourth, can we achieve the compression limit using a predictive code that processes the data in only one pass? Fifth, how is the bound derived from the probabilistic framework, if at all, connected to the conclusions drawn from the algorithmic complexity?
In this article, we review the key ideas of lossless compression and present some new mathematical results relevant to the aforementioned problems. Besides the algorithmic complexity and the Shannon source coding theorem, the technical tools center around the normalized maximum likelihood (NML) coding [2,3] and predictive coding [4,5]. The expansions of these code lengths lead to an empirical compression bound that is indeed sequence specific and naturally linked to algorithmic complexity. Although the primary theme is the pathwise asymptotics, the related average results are also discussed for the sake of comparison. The analytical results apply not only to discrete symbols but also to continuous data, provided the code length for the former is replaced by the description length for the latter [6]. Beyond the theoretical justification, the empirical bound is exemplified by protein-coding DNA sequences and pseudo-random sequences.

2. A Brief Review of the Key Concepts

2.1. Data Compression

The basic concepts of lossless coding can be found in the textbook [7]. Before we proceed, it is helpful to clarify the jargon used in this paper: strings, symbols, and words. We illustrate them with an example. The following, "studydnasequencefromthedatacompressionpointofviewforexampleabcdefghijklmnopqrstuvwxyz", is a string. The 26 distinct lowercase English letters appearing in the string are called symbols, and they form an alphabet. If we parse the string into "study", "dnasequence", "fromthedata", "compressionpointofview", "forexample", "abcdefg", "hijklmnopq", and "rstuvwxyz", these substrings are called words.
The implementation of data compression involves an encoder and a decoder. The encoder parses the string to be compressed into words and replaces each word by its codeword. This produces a new string, which is hopefully shorter than the original one in terms of bits. The decoder, conversely, parses the new string into codewords, and interprets each codeword back to a word of the original symbols. The collection of all distinct words in a parsing is called a dictionary.
In the context of data compression, two issues arise naturally. First, is there a lower bound? Second, how do we compute this bound, or is it computable at all?

2.2. Prefix Code

A fundamental concept in lossless compression is the prefix code or instantaneous code. A code is called a prefix one if no codeword is a prefix of any other codeword. The prefix constraint has a close relationship to the metaphor of the Turing machine, by which the algorithmic complexity is defined. Given a prefix code over an alphabet of $\alpha$ symbols, the codeword lengths $l_1, l_2, \ldots, l_m$, where $m$ is the dictionary size, must satisfy the Kraft inequality: $\sum_{i=1}^{m} \alpha^{-l_i} \le 1$. Conversely, given a set of code lengths that satisfy this inequality, there exists a prefix code with those code lengths. Note that the dictionary size in a prefix code could be either finite or countably infinite.
The class of prefix codes is a subset of the more general class of uniquely decodable codes, and one may expect that some uniquely decodable codes could be advantageous over prefix codes in terms of data compression. However, this is not necessarily the case, for it can be shown that the codeword lengths of any uniquely decodable code must satisfy the Kraft inequality. Therefore, a prefix code can always be constructed to match the codeword lengths of any given uniquely decodable code.
A prefix code has an attractive self-punctuating feature: it can be decoded without reference to the future codewords since the end of a codeword is immediately recognizable. For these reasons, prefix coding is commonly used in practice. A conceptual yet convenient generalization of the Kraft inequality is to drop the integer requirement for code lengths and ignore the effect of rounding. A general set of code lengths can be implemented by arithmetic coding [8,9]. This generalization leads to a correspondence between probability distributions and prefix code lengths: for every distribution $P$ over the dictionary, there exists a prefix code $C$ whose length $L_C(x)$ is equal to $-\log P(x)$ for all words $x$. Conversely, for every prefix code $C$ on the dictionary, there exists a probability measure $P$ such that $-\log P(x)$ is equal to the code length $L_C(x)$ for all words $x$.
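As a small illustration of this correspondence (not taken from the paper), the following Python snippet computes the ideal code lengths $-\log_2 P(x)$ for a toy word distribution and checks that they satisfy the Kraft inequality; the words and probabilities are made up.

```python
import math

def kraft_sum(lengths, alphabet_size=2):
    """Sum of alphabet_size**(-l) over the codeword lengths; at most 1 for a prefix code."""
    return sum(alphabet_size ** (-l) for l in lengths)

# Ideal (non-integer) code lengths L(x) = -log2 P(x) for a toy word distribution.
probs = {"the": 0.5, "cat": 0.25, "sat": 0.125, "mat": 0.125}
ideal_lengths = {w: -math.log2(p) for w, p in probs.items()}

print(ideal_lengths)                      # {'the': 1.0, 'cat': 2.0, 'sat': 3.0, 'mat': 3.0}
print(kraft_sum(ideal_lengths.values()))  # exactly 1.0 here, so a prefix code with these lengths exists
```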

2.3. Shannon’s Probability-Based Coding

In his seminal work [1], Shannon proposed the source coding theorem based on a probabilistic framework. Supposing a finite number of words $A_1, A_2, \ldots, A_m$ are generated from a probabilistic source denoted by a random variable $X$ with frequencies $p_i$, $i = 1, \ldots, m$, then the expected length of any prefix code is no shorter than the entropy of this source, defined as $H(X) = -\sum_{i=1}^m p_i \log p_i$. This result offers a lower bound of data compression if a probabilistic model can be assumed. Throughout this paper, we take 2 as the base of the logarithm operation, and thereby bit is the unit of code lengths.
The Huffman code is an optimal prefix code with respect to the expected code length. The codewords are defined by a binary tree constructed from the word frequencies. Another well-known method is the Shannon–Fano–Elias code, which uses at most two bits more than the theoretical lower bound per word. The code length of $A_i$ in the Shannon–Fano–Elias code is approximately equal to $-\log p_i$.
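For concreteness, here is a minimal Huffman construction in Python (an illustrative sketch, not the paper's code); the symbols and frequencies are made up for the example.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a binary Huffman code from a dict {symbol: frequency}."""
    tiebreak = count()
    # Each heap entry: (total frequency, tiebreaker, {symbol: partial codeword}).
    heap = [(f, next(tiebreak), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate case: a single symbol
        return {s: "0" for s in heap[0][2]}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

codes = huffman_code({"A": 0.45, "B": 0.25, "C": 0.15, "D": 0.15})
print(codes)   # e.g. {'A': '0', 'B': '10', 'C': '110', 'D': '111'}
# Expected length 1.85 bits/word, slightly above the entropy of about 1.84 bits.
```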

2.4. Kolmogorov Complexity and Algorithm-Based Coding

Kolmogorov, who laid the foundation of probability theory, interestingly set aside probabilistic models and, along with other researchers including Solomonoff and Chaitin, pursued an alternative path to understanding the informational structure of data based on the notion of a universal Turing machine. Kolmogorov [10] stated that “information theory must precede probability theory, and not be based on it.”
We give a brief account of some facts about Kolmogorov complexity relevant to our study, and refer readers to Li and Vitányi [11], Vitányi and Li [12] for further detail. A Turing machine is a computer with finitely many states operating on a finite symbol set; it is essentially the abstraction of any physical computer that has CPUs, memory, and input and output devices. At each unit of time, the machine reads one operation command from the program tape, writes some symbols on a work tape, and changes its state according to a transition table. Two important features need more explanation. First, the program is linear, namely, the machine reads the tape from left to right and never goes back. Second, the program is prefix-free, namely, no program that leads to a halting computation can be the prefix of another such program. This feature is an analog of the concept of prefix coding. A universal Turing machine can reproduce the results of any other machine. The Kolmogorov complexity of a word $x$ with respect to a universal computer $U$, denoted by $K_U(x)$, is defined as the minimum length over all programs that print $x$ and then halt.
The Kolmogorov complexities of all words satisfy the Kraft inequality, due to its natural connection to prefix coding. In fact, for a fixed machine U , we can encode x by the minimum length program that prints x and halt. Given a long string, if we define a way to parse it into words, we can then encode each word by the above program. Consequently, we encode the string by concatenating the programs one after another. Decoding is straightforward: we input the concatenated program into U , and it reconstructs the original string.
One obvious way of parsing is to take the string itself as the only word. Thus, how much we can compress the string depends on the complexity of this string. At this point, we see the connection between data compression and the Kolmogorov complexity, which is defined for each string with respect to an implementable type of computational machine—the Turing machine.
Next, we highlight some theoretical results about Kolmogorov complexity. First, it is machine independent up to a machine-specific additive constant. Second, the Kolmogorov complexity is unfortunately not computable. Third, there exists a universal probability $P_U(x)$ with respect to a universal machine such that $2^{-K(x)} \le P_U(x) \le c\,2^{-K(x)}$ for all strings, where $c$ is a constant independent of $x$. This means that, up to an additive constant, $K(x)$ is equivalent to $-\log P_U(x)$, which can be viewed as the code length of a prefix code in light of the Shannon–Fano–Elias code. Because of the non-computability of Kolmogorov complexity, the universal probability is likewise not computable.
The study of the Kolmogorov complexity reveals that the assessment of the exact compression bounds of strings is beyond the ability of any specific Turing machine. However, any program executed on a Turing machine provides, up to an additive constant, an upper bound on the complexity.

2.5. Correspondence Between Probability Models and String Parsing

A critical question remaining to be answered in the Shannon source coding theorem is the following: Where does the model that defines probabilities come from? According to the theorem, the optimal code lengths are proportional to the negative logarithm of the word frequencies. Once the dictionary is defined, the word frequencies can be counted for any individual string to be compressed. Equivalently, a dictionary can be induced by the way we parse a string, cf. Figure 1. It is worth noting that the term "letter" instead of "word" was used in Shannon's original paper [1], which did not address how to parse strings into words at all.

2.6. Fixed-Length and Variable-Length Parsing

The words generated from the parsing process could be either of the same length or of variable lengths. For example, we can encode Shakespeare’s work letter by letter, or encode it by natural words of varying lengths. A choice made at this point leads to two quite different coding schemes.
If we decompose a string into words containing the same number of symbols, this is a fixed-length parsing. The up-to-two extra bits per word incurred by a practical prefix code such as the Shannon–Fano–Elias code matter when each word contains only a few symbols. As the word length gets longer, the relative impact of the extra bits on each block becomes negligible. An effective alternative that avoids the issue of extra bits is arithmetic coding, which integrates the codes of successive words at the cost of more computation.
Variable-length parsing decomposes a string into words containing a variable number of symbols. The popular Lempel–Ziv coding is such a scheme. Although the complexity of a string $x$ is not computable, the complexity of '$x1$' relative to '$x$' is small. To append a '1' to the end of '$x$', we can simply use the program that prints $x$ followed by printing '1'. A recursive implementation of this idea leads to the Lempel–Ziv coding, which concatenates the address of '$x$' and the code of '1'.
Please notice that as the data length increases, the dictionary size resulting from the parsing scheme of the Lempel–Ziv coding increases as well, unless an upper limit is imposed. During encoding, each parsed word occurs only once: once a word enters the dictionary, any later occurrence of it is extended by a suffix symbol to form a new word. To a good approximation, all the words encountered up to a point are equally likely. If we use the same number of bits to store the addresses of these words, their code lengths are equal. Approximately, this obeys Shannon's source coding theorem too.
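The incremental parsing described above can be sketched as follows; this is a simplified LZ78-style parse (an illustration, not the paper's implementation) in which each new word is a previously seen word extended by one symbol and is represented by the pair (prefix address, last symbol).

```python
def lz78_parse(s):
    """Incrementally parse s into distinct words, each a previously seen word plus one symbol."""
    dictionary = {"": 0}          # word -> address; address 0 is the empty word
    phrases = []                  # list of (prefix_address, new_symbol)
    w = ""
    for ch in s:
        if w + ch in dictionary:
            w += ch               # keep extending while the current word is already known
        else:
            phrases.append((dictionary[w], ch))
            dictionary[w + ch] = len(dictionary)
            w = ""
    if w:                         # leftover word at the end of the string (already in the dictionary)
        phrases.append((dictionary[w[:-1]], w[-1]))
    return phrases

print(lz78_parse("ABAABABAABAB"))
# [(0, 'A'), (0, 'B'), (1, 'A'), (2, 'A'), (4, 'A'), (4, 'B')]
```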

2.7. Parametric Models and Complexity

Hereafter, we use parametric probabilistic models to compute prefix code lengths. The specification of a parametric model includes three aspects: a model class, a model dimension, and parameter values. Suppose we restrict our attention to some hypothetical model classes. Each model class is indexed by a set of parameters, and we call the number of parameters of a model its dimension. We also assume the identifiability of the parameterization, that is, different parameter values correspond to different models. Let us denote one such model class by probability measures $\{P_\theta : \theta \in \Theta\}$, where $\Theta$ is an open subset of $\mathbb{R}^d$, and their corresponding frequency functions by $\{p(x;\theta)\}$. The model class is usually defined by a parsing scheme. For example, if we parse a string symbol by symbol, then the number of words equals the number of symbols appearing in the string. If we denote the number of symbols by $\alpha$, then $d = \alpha - 1$. If we parse the string by every two symbols, then the number of free parameters increases to $d = \alpha^2 - 1$, and so on.
From the above review of Kolmogorov complexity, it is clear that strings themselves do not admit probability models in the first place. Nevertheless, we can fit a string using a parametric model. By doing so, we need to pay extra bits to describe the model, as observed by Rissanen, who termed these extra bits the stochastic complexity or parametric complexity. The total code length under a model includes both the data description and the parametric complexity.

2.8. Two References for Code Length Evaluation

The evaluation of the redundancy of a given code requires a reference. Two such references are discussed in the literature. In the first scenario, we assume that the words $X^{(n)} = \{X_1, X_2, \ldots, X_n\}$ are generated according to $P_{\theta_0}$ as independent and identically distributed (i.i.d.) random variables, whose outcomes are denoted by $\{x_i\}$. Then the optimal code length is given by $L_0 = -\sum_{i=1}^n \log p(X_i;\theta_0)$, with expected code length $E(L_0) = nH(\theta_0)$. In general, the code length corresponding to any distribution $Q(x)$ is given by $L_Q = -\sum_{i=1}^n \log q(X_i)$, and its redundancy is $R_Q = L_Q - L_0$. The expected redundancy is the Kullback–Leibler divergence between the two distributions:
$$E_{P_{\theta_0}}(L_Q - L_0) = E_{P_{\theta_0}}\log\frac{P_{\theta_0}(X^{(n)})}{Q(X^{(n)})} = D(P_{\theta_0}\,\|\,Q) \ge 0.$$
It can be shown that the minimax and maximin values of the average redundancy are equal [13]:
$$\inf_Q \sup_\theta E_{P_\theta}\log\frac{P_\theta(X^{(n)})}{Q(X^{(n)})} = \sup_\theta \inf_Q E_{P_\theta}\log\frac{P_\theta(X^{(n)})}{Q(X^{(n)})} = I(\Theta; X^{(n)}).$$
A key historical result on redundancy [14,15] is that for each positive number $\epsilon$ and for all $\theta_0 \in \Theta$, except in a set whose volume goes to zero as $n \to \infty$,
$$E_{P_{\theta_0}}(L_Q - L_0) \ge \frac{d-\epsilon}{2}\log n.$$
All these results are about average code length over all possible strings.
Another reference with which any code can be compared is obtained by replacing $\theta_0$ with the maximum likelihood estimate $\hat\theta_n$ in $L_0$; that is, $L_{\hat\theta_n} = -\sum_{i=1}^n \log p(X_i;\hat\theta_n)$. Please notice that $L_{\hat\theta_n}$ does not satisfy the Kraft inequality. This perspective is a practical one, since in reality $x^{(n)}$ is simply data without any probability measure. Given a parametric model class $\{P_\theta\}$, we fit the data using one surrogate model that maximizes the likelihood. Then we consider
$$L_Q - L_{\hat\theta_n} = \log\frac{p(x^{(n)};\hat\theta(x^{(n)}))}{q(x^{(n)})}.$$

2.9. Optimality of Normalized Maximum Likelihood Code Length

Minimizing the above quantity leads to the normalized maximum-likelihood (NML) distribution:
$$\hat{p}(x^{(n)}) = \frac{p(x^{(n)};\hat\theta(x^{(n)}))}{\sum_{x^{(n)}} p(x^{(n)};\hat\theta(x^{(n)}))}.$$
The NML code length is thus given by
$$L_{NML} = -\log p(x^{(n)};\hat\theta(x^{(n)})) + \log\sum_{x^{(n)}} p(x^{(n)};\hat\theta(x^{(n)})).$$
Shtarkov [3] proved the optimality of NML code by showing it solves
$$\min_q \max_{x^{(n)}} \log\frac{p(x^{(n)};\hat\theta(x^{(n)}))}{q(x^{(n)})},$$
where q ranges over the set of virtually all distributions. Later Rissanen [2] further proved that NML code solves
$$\min_q \max_g E_g\left[\log\frac{p(X^{(n)};\hat\theta(X^{(n)}))}{q(X^{(n)})}\right],$$
where q and g range over the set of virtually all distributions. This result states that the NML code is still optimal even if the data are generated from outside the parametric model family. Namely, regardless of the source nature in practice, we can always find the optimal code length from a distribution family.
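To make the NML normalizer concrete, here is an illustrative Python sketch (not part of the paper's supplementary code) that computes the exact Shtarkov sum $\sum_{x^{(n)}} p(x^{(n)};\hat\theta(x^{(n)}))$ for the Bernoulli family by grouping binary sequences by their count of ones, and compares its logarithm with the asymptotic value derived later in Section 3, namely $\frac{d}{2}\log\frac{n}{2\pi} + \log\int_\Theta|I(\theta)|^{1/2}\,d\theta$ with $d = 1$ and $\int_0^1 (p(1-p))^{-1/2}\,dp = \pi$.

```python
import math

def shtarkov_log2(n):
    """Exact log2 of the NML normalizer for the Bernoulli family, computed in log space."""
    log_terms = []
    for k in range(n + 1):
        lt = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)   # log C(n, k)
        if 0 < k < n:                      # convention 0^0 = 1 when k = 0 or k = n
            lt += k * math.log(k / n) + (n - k) * math.log(1 - k / n)
        log_terms.append(lt)
    m = max(log_terms)                     # log-sum-exp for numerical stability
    return (m + math.log(sum(math.exp(t - m) for t in log_terms))) / math.log(2)

def shtarkov_log2_asymptotic(n):
    """(d/2) log2(n / 2pi) + log2 of the Fisher-information integral (= pi for Bernoulli)."""
    return 0.5 * math.log2(n / (2 * math.pi)) + math.log2(math.pi)

for n in (100, 1000, 10000):
    print(n, round(shtarkov_log2(n), 4), round(shtarkov_log2_asymptotic(n), 4))
# The exact and asymptotic values get closer as n grows.
```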

3. Empirical Code Lengths Based on Exponential Family Distributions

In this section, we fit data from a source, either discrete or continuous, by an exponential family due to the following considerations. First, the multinomial distribution, which is used to encode discrete symbols, is an exponential family. Second, according to the Pitman–Koopman–Darmois theorem, exponential families are, under certain regularity conditions, the only models that admit sufficient statistics whose dimensions remain bounded as the sample size grows. On one hand, this property is most desirable in data compression. On the other hand, the results would be valid in the broader context of statistical learning, beyond source coding. Third, as we will show, the first term in the code length expansion is nothing but the empirical entropy for exponential families, which is a straightforward extension of Shannon’s source coding theorem.

3.1. Exponential Families

Consider a canonical exponential family of distributions $\{P_\theta : \theta \in \Theta\}$, where the natural parameter space $\Theta$ is an open subset of $\mathbb{R}^d$. The density function is given by
$$p(x;\theta) = \exp\{\theta^T S(x) - A(\theta)\},$$
with respect to some measure $\mu(dx)$ on the support of the data. The transposition of a matrix (or vector) $V$ is represented by $V^T$ here and throughout the paper. $S(\cdot)$ is the sufficient statistic for the parameter $\theta$. We denote the first and the second derivatives of $A(\theta)$ respectively by $\dot A(\theta)$ and $\ddot A(\theta)$. The entropy or differential entropy of $P_\theta$ is $H(\theta) = A(\theta) - \theta^T\dot A(\theta)$. The following result is an empirical and pathwise version of Shannon's source coding theorem.
Theorem 1.
[Empirical optimal source code length] If we fit an individual data sequence by an exponential family distribution, the NML code length is given by
$$L_{NML} = nH(\hat\theta_n) + \frac{d}{2}\log\frac{n}{2\pi} + \log\int_\Theta |I(\theta)|^{1/2}\,d\theta + o(1),$$
where $H(\hat\theta_n)$ is the entropy evaluated at the maximum likelihood estimate (MLE) $\hat\theta_n = \hat\theta(x^{(n)})$, and $|I(\theta)|$ is the determinant of the Fisher information $I(\theta) = \big[E\big(-\frac{\partial^2 \log p(X;\theta)}{\partial\theta_j\,\partial\theta_k}\big)\big]_{j,k=1,\ldots,d}$. The integral in the expression is taken over the parameter space $\Theta$ and is assumed to be finite.
Importantly, we do not assume that the data are generated from an exponential family. Rather, for any given data sequence, we fit a distribution from a relevant exponential family to describe the data. In this context, the distribution serves purely as a modeling tool, and any appropriate option from the exponential family toolbox may be used.
The first term in (2) is $nA(\hat\theta_n) - \big[\sum_{i=1}^n S(x_i)\big]^T\hat\theta_n = nA(\hat\theta_n) - n\dot A(\hat\theta_n)^T\hat\theta_n = nH(\hat\theta_n)$, namely, the entropy in Shannon's theorem except that the model parameter is replaced by the MLE. The second term has a close relationship to the BIC introduced by Akaike [16] and Schwarz [17], and the third term involves the Fisher information, which characterizes the local property of a distribution family. Surprisingly and interestingly, this empirical version of the lossless coding theorem brings together three foundational contributions: those of Shannon, Akaike–Schwarz, and Fisher.
Next, we give a heuristic proof of (5) by the local asymptotic normality (LAN) [18], though a complete proof can be found in Appendix A. In the definition of the NML code length (2), the first term becomes the empirical entropy for exponential families. Namely,
$$L_{NML} = nH(\hat\theta_n) + \log\sum_{x^{(n)}} p(x^{(n)};\hat\theta_n).$$
The remaining difficulty is the computation of the summation. In a general problem of data description length, Rissanen [19] derived an analytical expansion requiring five assumptions, which were hard to verify. Here we show that, for exponential families, the expansion is valid as long as the integral is finite.
Let $U(\theta, \frac{r}{\sqrt{n}})$ be a cube of side length $\frac{r}{\sqrt{n}}$ centered at $\theta$, where $r$ is a constant. LAN states that we can expand the probability density in each neighborhood $U(\theta, \frac{r}{\sqrt{n}})$ as follows:
$$\log\frac{p(x^{(n)};\theta+h)}{p(x^{(n)};\theta)} = h^T\Big[\sum_{i=1}^n S(x_i) - n\dot A(\theta)\Big] - \frac12 h^T[nI(\theta)]h + o(\|h\|),$$
where $I(\theta) = \ddot A(\theta)$. Maximizing the likelihood in $U(\theta, \frac{r}{\sqrt{n}})$ with respect to $h$ leads to
$$\max_h \log\frac{p(x^{(n)};\theta+h)}{p(x^{(n)};\theta)} = \frac12\Big[\sum_{i=1}^n S(x_i) - n\dot A(\theta)\Big]^T[nI(\theta)]^{-1}\Big[\sum_{i=1}^n S(x_i) - n\dot A(\theta)\Big] + o\Big(\frac{r}{\sqrt{n}}\Big).$$
Consequently, if $\hat\theta_n(x^{(n)})$ falls into the neighborhood $U(\theta, \frac{r}{\sqrt{n}})$, we have
$$p(x^{(n)};\hat\theta_n) = e^{\frac12[\sum_{i=1}^n S(x_i) - n\dot A(\theta)]^T[nI(\theta)]^{-1}[\sum_{i=1}^n S(x_i) - n\dot A(\theta)] + o(\frac{r}{\sqrt{n}})}\, p(x^{(n)};\theta),$$
where $\hat\theta_n$ solves $\sum_{i=1}^n S(x_i) = n\dot A(\hat\theta_n)$. Applying the Taylor expansion, we get
$$\sum_{i=1}^n S(x_i) - n\dot A(\theta) = n\dot A(\hat\theta_n) - n\dot A(\theta) = [n\ddot A(\theta)](\hat\theta_n - \theta) + o\Big(\frac{r}{\sqrt{n}}\Big).$$
Plugging it into (7) leads to
$$p(x^{(n)};\hat\theta_n) = e^{\frac12(\hat\theta_n-\theta)^T[n\ddot A(\theta)](\hat\theta_n-\theta) + o(\frac{r}{\sqrt{n}})}\, p(x^{(n)};\theta).$$
If we consider i.i.d. random variables $Y_1,\ldots,Y_n$ sampled from the exponential distribution (4), then the MLE $\hat\theta(Y^{(n)})$ is a random variable. The summation of the quantity (8) over the neighborhood $U(\theta, \frac{r}{\sqrt{n}})$ can be expressed as the following expectation involving $\hat\theta(Y^{(n)})$:
$$E\Big[e^{\frac12(\hat\theta_n-\theta)^T[n\ddot A(\theta)](\hat\theta_n-\theta)}\,\mathbf{1}\big(\hat\theta_n \in U(\theta, \tfrac{r}{\sqrt{n}})\big)\Big].$$
Due to the asymptotic normality of the MLE $\hat\theta(Y^{(n)})$, namely, $\hat\theta_n - \theta \xrightarrow{d} N(0, [nI(\theta)]^{-1})$, the density of $\hat\theta(Y^{(n)})$ is approximated by
$$\frac{|nI(\theta)|^{1/2}}{(2\pi)^{d/2}}\, e^{-\frac12(\hat\theta_n-\theta)^T[n\ddot A(\theta)](\hat\theta_n-\theta)}\, d\hat\theta_n.$$
Applying this density to the expectation in (9), we find that the two exponential terms cancel out, leaving $\frac{|nI(\theta)|^{1/2}}{(2\pi)^{d/2}}$ times the volume of the neighborhood. Summing over all neighborhoods $U(\theta, \frac{r}{\sqrt{n}})$ and taking the logarithm leads to the remaining terms in (5).
The optimality of the NML code is established in the minimax setting. Yet its implementation requires two passes over the data: one for word counting with respect to a dictionary, and another for encoding, cf. Figure 2.

3.2. Bayesian Predictive Coding

It is natural to ask whether there exists a scheme that passes through the data only once and still compresses the data equally well. It turns out that predictive coding is such a scheme for a given dictionary. The idea of predictive coding is to sequentially make inferences about the parameters in the probability function $p(x;\theta)$, which is then used to update the code book. That is, after obtaining observations $x_1,\ldots,x_i$, we calculate the MLE $\hat\theta_i$, and in turn encode the next observation according to the currently estimated distribution. Its code length is thus $L_{predictive} = -\sum_{i}\log p(X_{i+1}\mid\hat\theta_i)$. This procedure (Rissanen [15,20]) is closely related to the prequential approach to statistical inference as advocated by Dawid [21,22]. Predictive coding is intuitively optimal due to two fundamental results. First, the MLE $\hat\theta_i$ is asymptotically most accurate since it gathers all the information in $X_1,\ldots,X_i$ for inference in the parametric model $p(x;\theta)$. Second, the code length $-\log p(X_{i+1}\mid\hat\theta_i)$ is optimal as dictated by the Shannon source coding theorem. In the case of exponential families, Proposition 2.2 in [5] showed that $L_{predictive}$ can be expanded as follows:
$$L_{predictive} = nH(\hat\theta_n) + \frac{d}{2}\log n + \tilde D_n(\omega),$$
where the sequence of random variables $\{\tilde D_n(\omega)\}$ converges to an almost surely finite random variable $\tilde D(\omega)$.
Alternatively, we can use Bayesian estimates in the predictive coding. Starting from a prior distribution $w(\theta)$, we encode $x_1$ by the marginal distribution $q_1(x_1) = \int_\Theta p(x_1\mid\theta)w(\theta)\,d\theta$ resulting from $w(\cdot)$. The posterior is given by
$$w_1(\theta) = p(x_1\mid\theta)w(\theta)\Big/\int_\Theta p(x_1\mid\theta)w(\theta)\,d\theta.$$
We then use this posterior as the updated prior to encode the next word $x_2$. Using induction, we can show that the marginal distribution used to encode the $k$-th word is
$$q_k(x_k) = \frac{\int_\Theta\big[\prod_{i=1}^k p(x_i\mid\theta)\big]w(\theta)\,d\theta}{\int_\Theta\big[\prod_{i=1}^{k-1} p(x_i\mid\theta)\big]w(\theta)\,d\theta}.$$
Meanwhile, the updated posterior, also the prior for the next round of encoding, becomes
$$w_k(\theta) = \frac{\big[\prod_{i=1}^k p(x_i\mid\theta)\big]w(\theta)}{\int_\Theta\big[\prod_{i=1}^k p(x_i\mid\theta)\big]w(\theta)\,d\theta}.$$
Proposition 1.
[Bayesian predictive code length] The total Bayesian predictive code length for a string of $n$ words is
$$L_{Bpredictive} = L_{mixture} = -\sum_{k=1}^n \log q_k(x_k) = -\log\int_\Theta\Big[\prod_{i=1}^n p(x_i\mid\theta)\Big]w(\theta)\,d\theta.$$
Thus the Bayesian predictive code is nothing but the mixture code referred to in reference [4]. The above scheme is illustrated in Figure 3.
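Proposition 1 is easy to check numerically. The sketch below (illustrative code, not the paper's script) encodes a symbol sequence word by word with a symmetric Dirichlet$(a,\ldots,a)$ prior, accumulating $-\log_2 q_k(x_k)$, and compares the total with the closed-form mixture (marginal) code length; the two agree up to floating-point error.

```python
import math
from collections import Counter

def predictive_code_length(seq, symbols, a=0.5):
    """Sequential Bayesian code length (bits) with a symmetric Dirichlet(a,...,a) prior."""
    counts = {s: 0 for s in symbols}
    total_bits, seen = 0.0, 0
    for x in seq:
        prob = (counts[x] + a) / (seen + a * len(symbols))   # posterior predictive probability
        total_bits += -math.log2(prob)
        counts[x] += 1
        seen += 1
    return total_bits

def mixture_code_length(seq, symbols, a=0.5):
    """Closed-form mixture (marginal) code length in bits under the same Dirichlet prior."""
    counts = Counter(seq)
    m, n = len(symbols), len(seq)
    log_marginal = (math.lgamma(m * a) - math.lgamma(n + m * a)
                    + sum(math.lgamma(counts.get(s, 0) + a) - math.lgamma(a) for s in symbols))
    return -log_marginal / math.log(2)

seq = "ACGTTGCAACGTACGGTACA"
print(predictive_code_length(seq, "ACGT"))   # the two printed numbers coincide
print(mixture_code_length(seq, "ACGT"))
```

With $a = 1/2$ this is the Jeffreys (Dirichlet$(1/2,\ldots,1/2)$) prior discussed in Section 3.4.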
Theorem 2.
[Expansion of the Bayesian predictive code length] If we fit a data sequence by an exponential family distribution, the mixture code length has the expansion
$$L_{Bpredictive} = nH(\hat\theta_n) + \frac{d}{2}\log\frac{n}{2\pi} + \log\frac{|I(\hat\theta_n)|^{1/2}}{w(\hat\theta_n)} + o(1),$$
where $w(\theta)$ is any mixture of conjugate prior distributions.
Once again, we do not assume that the data are generated from an exponential family. Rather than using a single distribution from an exponential family, Bayesian predictive coding employs a mixture of such distributions, thereby generally enhancing its approximation capability.
The result holds for general priors that can be approximated by a mixture of conjugate ones. In the case of multinomial distributions, the conjugate prior is the Dirichlet distribution. Any prior $w(\theta)$ continuous on the $d$-dimensional simplex within the cube $[0,1]^{m+1}$ can be uniformly approximated by Bernstein polynomials of $m$ variables, each term of which corresponds to a Dirichlet distribution [23,24]. It is important to note that in the current setting, the source is not assumed to be i.i.d. samples from an exponential family distribution as in Theorem 2.2 and Proposition 2.3 in [5].
When $\int_\Theta|I(\theta)|^{1/2}\,d\theta$ is finite, we can take the Jeffreys prior, $w(\theta) = \frac{|I(\theta)|^{1/2}}{\int_\Theta|I(\theta)|^{1/2}\,d\theta}$; then (10) becomes (5). Putting them together, we have shown that the optimal code length can be achieved by the Bayesian predictive coding scheme.

3.3. Redundancy

Now we examine the empirical code length under Shannon's setting. That is, we evaluate the redundancy of the code length assuming the source follows a hypothetical distribution. The reference is the optimal code length $L_0 = -\sum_{i=1}^n\log p(X_i;\theta_0)$ as introduced in Section 2.8, with $E(L_0) = nH(\theta_0)$.
Proposition 2.
If we assume that a source follows an exponential family distribution, then
$$nH(\hat\theta_n) - L_0 = -C_n\log\log n + o(1),$$
where the sequence of non-negative random variables $\{C_n\}$ has a bounded upper limit, $\overline{\lim}_n C_n \le d$, almost surely. If we further assume that $\int_\Theta|I(\theta)|^{1/2}\,d\theta < \infty$, then
$$R_{NML} = L_{NML} - L_0 = \frac{d}{2}\log\frac{n}{2\pi} - C_n\log\log n + \log\int_\Theta|I(\theta)|^{1/2}\,d\theta + o(1),$$
where $\{C_n\}$ is the same as above.
The left side of (11) is $\big[-\sum_{i=1}^n\log p(X_i\mid\hat\theta_n)\big] - \big[-\sum_{i=1}^n\log p(X_i\mid\theta_0)\big]$, and the rest follows from the proof of Proposition 2.2 in [5], Equation (18). The NML code is a special case of the mixture code, whose redundancy is given by Theorem 2.2 in [5]. The details on the law of the iterated logarithm can be found in the book [25]. We note that $\overline{\lim}_n C_n$ is bounded below by 1. This proposition confirms that $nH(\hat\theta_n)$ is an underestimate of the theoretical optimal code length. Notably, this setting, in which the data are assumed to originate from a probabilistic source of fixed dimension, serves only the theoretical analysis.

3.4. Coding of Discrete Symbols and Multinomial Model

For compressing strings of discrete symbols, it is sufficient to consider the discrete distribution specified by a probability vector $\theta = (p_1, p_2, \ldots, p_d, p_{d+1})$, where $\sum_{k=1}^{d+1} p_k = 1$. Its frequency function is $P(X = k) = \prod_{k=1}^{d+1} p_k^{\mathbf{1}(X=k)}$. The Fisher information matrix $I(p_1,\ldots,p_d)$ can be shown to be
$$\Big[E\Big(-\frac{\partial^2\log P(X=k)}{\partial p_j\,\partial p_k}\Big)\Big]_{j,k=1,\ldots,d} = \begin{pmatrix} \frac{1}{p_1}+\frac{1}{p_{d+1}} & \frac{1}{p_{d+1}} & \cdots & \frac{1}{p_{d+1}} \\ \frac{1}{p_{d+1}} & \frac{1}{p_2}+\frac{1}{p_{d+1}} & \cdots & \frac{1}{p_{d+1}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{p_{d+1}} & \frac{1}{p_{d+1}} & \cdots & \frac{1}{p_d}+\frac{1}{p_{d+1}} \end{pmatrix}.$$
Thus $|I(p_1,\ldots,p_d)| = 1/\prod_{k=1}^{d+1} p_k$.
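A quick numerical check of this determinant identity, with arbitrary illustrative cell probabilities:

```python
import numpy as np

# Check that det I(p_1,...,p_d) = 1 / prod_{k=1}^{d+1} p_k for a discrete distribution.
p = np.array([0.1, 0.2, 0.3, 0.15, 0.25])    # d + 1 = 5 cell probabilities summing to 1
d = len(p) - 1
I = np.diag(1.0 / p[:d]) + 1.0 / p[d]         # I_jk = delta_jk / p_j + 1 / p_{d+1}
print(np.linalg.det(I), 1.0 / np.prod(p))     # both approximately 4444.44
```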
Suppose $X_1,\ldots,X_n$ are i.i.d. random variables obeying the above discrete distribution. Then $S = \sum_{i=1}^n X_i$ follows a multinomial distribution $\mathrm{Multi}(n; p_1, p_2, \ldots, p_d, p_{d+1})$. Its conjugate prior distribution is the Dirichlet distribution $(\alpha_1, \alpha_2, \ldots, \alpha_{d+1})$, whose density function is
$$\frac{\Gamma\big(\sum_{k=1}^{d+1}\alpha_k\big)}{\prod_{k=1}^{d+1}\Gamma(\alpha_k)}\prod_{k=1}^{d+1} p_k^{\alpha_k-1},$$
where $\Gamma(t) = \int_0^{+\infty} u^{t-1}e^{-u}\,du$. Since the Jeffreys prior is proportional to $|I(p_1,\ldots,p_d)|^{1/2}$, in this case it equals Dirichlet$(1/2, 1/2, \ldots, 1/2)$, whose density is $\frac{\Gamma((d+1)/2)}{\Gamma(1/2)^{d+1}}\prod_{k=1}^{d+1} p_k^{-1/2}$. The Jeffreys prior was also used by Krichevsky [14] to derive optimal universal codes.
It is noticed that $\Gamma(1/2) = \sqrt{\pi}$. Plugging it into Equation (10), we have the following specific form of the NML code length for the multinomial distribution. Remember that the distribution, or the word frequencies, is specific to a given dictionary $\Phi$, and we thus denote it by $L_{NML@\Phi}$. If we change the dictionary, the code length changes accordingly.
Proposition 3.
[Optimal code length for a multinomial distribution]
$$L_{NML@\Phi} = nH(\hat\theta_n) + \frac{d}{2}\log\frac{n}{2} - \log\Gamma\Big(\frac{d+1}{2}\Big) + \frac{1}{2}\log\pi,$$
where $nH(\hat\theta_n) = -n\sum_{k=1}^{d+1}\hat p_k\log\hat p_k$ and $\hat p_k = n_k/n$ is the frequency of the $k$-th word appearing in the string.
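The following Python sketch evaluates $L_{NML@\Phi}$ for a parsed word sequence using the form of (13) as reconstructed above; the helper name and the toy input are illustrative, not the paper's supplementary script.

```python
import math
from collections import Counter

def nml_code_length(words):
    """L_NML@Phi (bits) of Proposition 3 for a parsed word sequence, multinomial model."""
    counts = Counter(words)
    n = len(words)                       # total number of words
    d = len(counts) - 1                  # dictionary size minus one (model dimension)
    n_entropy = -sum(c * math.log2(c / n) for c in counts.values())   # n * H(theta_hat)
    return (n_entropy + 0.5 * d * math.log2(n / 2)
            - math.lgamma((d + 1) / 2) / math.log(2)
            + 0.5 * math.log2(math.pi))

# Symbol-by-symbol parsing of the example string from Section 2.1.
words = list("studydnasequencefromthedatacompression")
print(nml_code_length(words))
```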

4. Compression of Random Sequences and DNA Sequences

4.1. Lossless Compression Bound and Description Length

Given a dictionary of words, we parse a string into words and count their frequencies $\hat p_k = n_k/n$, the total number of words $n$, and the number of distinct words $d$. Plugging them into expression (13), we obtain the lossless compression bound for this dictionary or parsing. If a different parsing is tried, the three quantities (word frequencies, number of words, and dictionary size, i.e., the number of distinct words) change, and the resulting bound changes accordingly. In the general situation where the data are not necessarily discrete symbols, we replace the code length with the description length (10), as termed by Rissanen.
Since each parsing corresponds to a probabilistic model, the code length is model dependent. The comparison of two or more coding schemes is exactly the selection of models, with the expression (13) as the objective function.

4.2. Rissanen’s Principle of Minimum Description Length and Model Selection

Rissanen, in his works [15,19,26], proposed the principle of minimum description length (MDL) as a more general modeling rule than that of maximum likelihood, which was recommended, analyzed, and popularized by R. A. Fisher. From the information-theoretic point of view, when we encode data from a source by prefix coding, the optimal code is the one that achieves the minimum description length. Because of the equivalence between a prefix code length and the negative logarithm of the corresponding probability distribution, via Kraft's inequality, this leads naturally to a modeling rule: the MDL principle. That is, one should choose the model or prefix coding algorithm that gives the minimal description of the data; see Hansen and Yu [27] for a review of this topic. We also refer readers to [6,28] for a more complete account of the MDL principle.
MDL is a mathematical formulation of the general principle known as Occam's razor: choose the simplest explanation consistent with the observed data [7]. We make one remark about the significance of MDL. On the one hand, Shannon's work establishes the connection between optimal coding and probabilistic models. On the other hand, Kolmogorov's algorithmic theory says that the complexity, or the absolutely optimal code, cannot be established by any Turing machine. MDL offers a practical principle: it allows us to make choices among possible models and coding algorithms without requiring a proof of optimality. As more model candidates are evaluated over time, our understanding continues to progress.

4.3. Compression Bounds of Random Sequences

A random sequence is incompressible by any model-based or algorithmic prefix coding, as indicated by the complexity results [11,12]. Thus a legitimate compression bound of a random sequence should be no less than one, up to certain variations. Conversely, if the compression rates of a sequence, using $L_{NML@\Phi}$ as the compression bound, are no less than 1 under all dictionaries $\Phi$, namely,
$$\min_{\Phi:\ \mathrm{dictionaries}} \frac{L_{NML@\Phi}}{L_{RAW}} = 1 + \frac{L_{NML@\Phi} - L_{RAW}}{L_{RAW}} \ge 1,$$
where $L_{RAW}$ is the theoretical bit-length of the raw sequence, then the sequence can be considered random. If we assume the source follows a uniform distribution, $L_{RAW} = nH$, while $L_{NML@\Phi}$ can be calculated by (13). Although testing all dictionaries is challenging, we can experiment with a subset, particularly those suggested by domain experience. More theoretical analysis on randomness testing based on universal codes can be found in the book [29].

4.4. A Simulation Study: Compression Bounds of Pseudo-Random Sequences

Simulations were carried out to test the theoretical bounds. First, a pseudo-random binary string of size 3000 was simulated in R according to Bernoulli trials with a probability of 0.5. In Table 1, the first column shows the word length used for parsing the data; the second column shows the number of words; and the third column shows the number of distinct words. We group the terms in (13) into three parts: the term involving $n$, the term involving $\log n$, and the others. The bounds by $nH(\hat\theta_n)$, $nH(\hat\theta_n) + \frac{d}{2}\log n$, and $L_{NML}$ in (13) are respectively shown in the next three columns. As the word length increases, $d$ increases, and the bounds by $nH(\hat\theta_n)$ exhibit a decreasing trend. A bound smaller than 1 indicates the sequence can be compressed, contradicting the assertion that random sequences cannot be so. When the word length is 8, the dictionary size is 375, and the bound by $nH(\hat\theta_n)$ is only 0.929. The incompressibility of random sequences falsifies $nH(\hat\theta_n)$ as a legitimate compression bound. If the $\log n$ term is included, the bounds are always larger than 1. The bounds by $L_{NML}$ in (13) are tighter while remaining larger than 1, except the case in the bottom row, where the number of distinct words approaches the total number of words, making the expansion insufficient. Since $L_{NML}$ is an achievable bound, $nH(\hat\theta_n) + \frac{d}{2}\log n$ is an overestimate.
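The following sketch mirrors the flavor of this simulation (it is not the paper's R script): it generates a pseudo-random binary string, parses it with several word lengths, and reports the three bounds as compression rates relative to the raw bit length. It uses the reconstructed form of (13); the exact numbers depend on the random seed.

```python
import math, random
from collections import Counter

def bounds(words):
    """Return (nH, nH + (d/2)log2 n, L_NML) in bits for a parsed word sequence."""
    counts = Counter(words)
    n, d = len(words), len(counts) - 1
    nH = -sum(c * math.log2(c / n) for c in counts.values())
    lnml = (nH + 0.5 * d * math.log2(n / 2)
            - math.lgamma((d + 1) / 2) / math.log(2) + 0.5 * math.log2(math.pi))
    return nH, nH + 0.5 * d * math.log2(n), lnml

random.seed(0)
bits = "".join(random.choice("01") for _ in range(3000))   # pseudo-random Bernoulli(0.5) string
for wl in (1, 2, 4, 6, 8):
    words = [bits[i:i + wl] for i in range(0, len(bits) - wl + 1, wl)]
    raw = len(words) * wl                                   # L_RAW: one bit per binary symbol
    nH, bic, lnml = bounds(words)
    print(wl, len(set(words)), round(nH / raw, 3), round(bic / raw, 3), round(lnml / raw, 3))
```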

4.5. Knowledge Discovery by Data Compression

On the other hand, if we can compress a sequence by a certain prefix coding scheme, then this sequence is not random. In the meantime, this coding scheme presents a clue to understanding the information structure hidden in the sequence. Data compression is one general learning mechanism, among others, to discover knowledge from nature and other sources.
Ryabko, Astola, and Gammerman [30] applied the idea of Kolmogorov complexity to the statistical testing of some typical hypotheses. This approach was used to analyze DNA sequences in [31].

4.6. DNA Sequences of Proteins

The information carried by the DNA double helix consists of two long complementary strings of the letters A, G, C, and T. It is interesting to see if we can compress DNA sequences at all. Next, we carried out the lossless compression experiments on a couple of protein-encoding DNA sequences.

4.7. Rediscovery of the Codon Structure

In Table 2, we present the results of applying the NML code length $L_{NML}$ in (13) to an E. coli protein gene sequence labeled b0059 [32], which has 2907 nucleotides. Each row corresponds to one model used for encoding. All the models being tested are listed in the first column. In the first model, we encode the DNA nucleotides one by one and name it Model 1. In the second or third model, we parse the DNA sequence into pairs and then encode the resulting dinucleotide sequence according to their frequencies. Different starting positions result in two different phases, denoted by 2.0 and 2.1, respectively. Other models are understood in the same fashion. Note that all these models are generated by fixed-length parsing. The last model "a.a." means we first translate DNA triplets into amino acids and then encode the resulting amino acid sequence. The second column shows the total number of words in each parsed sequence. The third column shows the number of distinct words in each parsed sequence, or the dictionary size. The fourth column is the empirical entropy estimated from the observed frequencies. The next column is the first term in expression (13), which is the product of the second and fourth columns. Then we calculate the remaining terms in (13). The total bits are then calculated, and the compression rates are the ratios $L_{NML}/(2907 \times 2)$. The last column displays the compression rate for each model.
All the compression rates are around 1, except for the rate obtained from Model 3.0, which aligns with the correct codon and phase. Thus the comparison of compression bounds rediscovers the codon structure of this protein-coding DNA sequence and the phase of the open reading frame. It is somewhat surprising that the optimal code length $L_{NML}$ allows us to mathematically identify the triplet coding system using only the sequence of one gene. Historically, this system was discovered by Francis Crick and his colleagues in the early 1960s using frame-shift mutations in bacteriophage T4.
Next, we take a closer look at the results. The compression rate of the four-nucleotide word coding is closest to 1, and thus it behaves more like "random". For example, it is 0.9947 for Model 4.2. The first term of empirical entropy contributes 5431 bits, while the remaining terms contribute 346 bits. If we use $\frac{d}{2}\log n$ instead, the remaining term is $0.5 \times (219-1) \times \log(726) \approx 1036$ bits, and the compression rate becomes 1.11, which is less tight. If the Lempel–Ziv algorithm is applied to the b0059 sequence, 635 words are generated along the way. Each word requires $\log(635)$ bits for keeping the address of its prefix and 2 bits for the last nucleotide. In total, it needs $635 \times \log(635) \approx 5912$ bits for storing addresses and $635 \times 2 = 1270$ bits for storing the words' last symbols. The compression rate of Lempel–Ziv coding is 1.24.
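The parsing-model comparison can be sketched as follows. The DNA string here is a toy sequence drawn from a reduced codon set rather than the b0059 gene, and the bound uses the reconstructed form of (13), so the absolute rates differ from Table 2; but the codon-aligned model 3.0 again yields the smallest compression rate.

```python
import math, random
from collections import Counter

def nml_bits(words):
    """NML code length (13) in bits for a parsed word sequence under the multinomial model."""
    counts = Counter(words)
    n, d = len(words), len(counts) - 1
    nH = -sum(c * math.log2(c / n) for c in counts.values())
    return (nH + 0.5 * d * math.log2(n / 2)
            - math.lgamma((d + 1) / 2) / math.log(2) + 0.5 * math.log2(math.pi))

def rate(dna, word_len, phase):
    """Compression rate of word_len-mer parsing at the given phase, against 2 bits per base."""
    words = [dna[i:i + word_len] for i in range(phase, len(dna) - word_len + 1, word_len)]
    return nml_bits(words) / (len(words) * word_len * 2)

random.seed(1)
codons = ["ATG", "GCT", "GGT", "AAA", "GAA", "CTG", "CGT", "TAA"]  # reduced codon usage (toy)
dna = "".join(random.choice(codons) for _ in range(1000))          # toy protein-like sequence
for wl in (1, 2, 3):
    for ph in range(wl):
        print(f"model {wl}.{ph}: {rate(dna, wl, ph):.3f}")
```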

4.8. Redundant Information in Protein Gene Sequences

It is known that the $4^3 = 64$ triplets correspond to only 20 amino acids plus stop codons. Thus redundancy does exist in protein-coding sequences. Most of the redundancy lies in the third position of a codon. For example, GGA, GGC, GGT, and GGG all correspond to glycine. According to Table 2, the amino acid sequence contains 4048.22 bits of information, while there are 5277.51 bits of information in Model 3.0. Thus the redundancy in this sequence is estimated to be (5277.51 − 4048.22)/4048.22 = 0.30.

4.9. Randomization

To evaluate the accuracy or significance of the compression rates of a DNA sequence, we need a reference distribution for comparison. A common method is to consider the randomness obtained by permutations. That is, given a DNA sequence, we permute the nucleotide bases and re-calculate the compression rates. By repeating this permutation procedure, we generate a reference distribution.
In Table 3, we examine the compression rates for E. coli ORF b0060, which has 2352 nucleotides. First, the optimal compression rate of 0.958 is achieved by Model 3.0. Second, we further carry out the calculations for permuted sequences. The averages, standard deviations, and 1% (or 99%) quantiles of the compression rates under different models are shown in Table 3 as well. Except for Model 1, all the compression rates, in terms of either averages or 1% quantiles, are above 1 for both $L_{NML}$ and $nH(\hat\theta_n) + \frac{d}{2}\log n$. Third, the results by the single term $nH(\hat\theta_n)$ are about 0.996, 0.994, 0.986, and 0.952, respectively, for the one-, two-, three-, and four-nucleotide models. The 99% quantiles of $nH(\hat\theta_n)$ for the four-nucleotide models are no larger than 0.961. Fourth, the results of $nH(\hat\theta_n) + \frac{d}{2}\log n$ show extra bits compared to those of $L_{NML}$, and the compression ratios go from 1.02 to 1.17, suggesting the remaining terms in (13) are not negligible.
It is noted that Models 3.1 and 3.2 are obtained by phase-shifting from the correct Model 3.0. Other models are obtained by incorrect parsing. These models can serve as references for Model 3.0. The incorrect parsing and phase-shifting resemble the behavior of a linear congruential pseudo-random number generator, and serve as a form of randomization.

5. Discussion

Putting together the analytical results and numerical examples, we show that the compression bound of a data sequence using an exponential family is the code length derived from the NML distribution (5). The empirical bound can be implemented by Bayesian predictive coding for any given dictionary or model. Different models are then compared by their empirical compression bounds.
The examples of DNA sequences and pseudo-random sequences indicate that the compression rates by any dictionary are indeed larger than 1 for random sequences, in line with the assertions of the Kolmogorov complexity theory. Conversely, if significant compression is achieved by a specific model, certain knowledge is gained. The codon structure is such an instance.
Unlike the algorithmic complexity, which includes an additive constant, the results based on probability distributions give the exact bits of the code lengths. All three terms in (5) are important for the compression bound. Using only the first term $nH(\hat\theta_n)$ can lead to bounds for random sequences smaller than 1. The discrepancy increases as the dictionary size grows, as seen from Table 1 and Table 3. The bound obtained by adding the second term $\frac{d}{2}\log n$ had been proposed in the context of two-part coding and Kolmogorov complexity. It is equivalent to the BIC widely used in model selection. However, it overestimates the influence of the dictionary size, as shown by the examples of simulations and DNA sequences. The inclusion of the Fisher information in the third term results in a tighter bound. The terms other than $nH(\hat\theta_n)$ become larger as the dictionary size increases in Table 1 and Table 3. The observation that the compression bounds, considering all terms in (5), remain slightly above 1 for all tested dictionaries meets our expectation of the incompressibility of random sequences.
Although the empirical compression bound is obtained under the i.i.d. model, the word length can be set sufficiently large to capture the local dependencies between symbols. Indeed, as shown in the examples of DNA sequences, the empirical entropy term in (5) may become smaller, for either the original sequences or the permuted ones. Meanwhile, the second term becomes larger. For a specific sequence, a better dictionary is selected by trading off the entropy term against the model complexity term.
Rissanen [19] obtained an expansion of the NML code length, in which the first term is the negative log-likelihood of the data with the parameters plugged in by the MLE. In this article, we show that it is exactly the empirical entropy if the parametric model is any exponential family. According to this formulation, the NML code length can be viewed as an empirical and pathwise version of Shannon's source coding theorem. Furthermore, the asymptotics in [19] require five assumptions, which are hard to verify. Suzuki and Yamanishi proposed a Fourier approach to calculate the NML code length [33] for continuous random variables under certain assumptions. In contrast, we show that (5) is valid for exponential families, as long as $\int_\Theta|I(\theta)|^{1/2}\,d\theta < \infty$, without any additional assumptions. If the Jeffreys prior is improper in the interior of the full parameter space, we can restrict the parameter to a compact subset. Exponential families include not only distributions of discrete symbols, such as the multinomial, but also continuous distributions, such as the normal distribution.
The mathematics underlying the expansion of the NML is the structure of local asymptotic normality as proposed by Le Cam [18]. LAN has been used to demonstrate the optimality of certain statistical estimates. This article connects LAN to the compression bound. We have shown that, as long as LAN is valid, an expansion similar to (2) can be obtained.
Compared to Shannon’s source coding theorem that assumes the data are from a source with a given distribution, we reported empirical versions that apply to each individual data sequence without any a priori distributional assumption. In contrast to the average optimal code length in Shannon’s theorem, the results presented in Theorems 1 and 2 are sequence specific, although distributions are used as tools for analysis. We describe the data by fitting distributions from exponential families and mixtures of exponential families, which are sufficient for most scenarios in coding and statistical learning. Knowledge discovery is illustrated through an example involving DNA sequences. The results are primarily conceptual, and their connection to recent progress in data compression—such as [34]—is worth further investigation.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/e27080864/s1.

Funding

This research is supported by the National Key Research and Development Program of China (2022YFA1004801), the National Natural Science Foundation of China (Grant No. 11871462, 32170679, 91530105), the National Center for Mathematics and Interdisciplinary Sciences of the Chinese Academy of Sciences, and the Key Laboratory of Systems and Control of the CAS.

Data Availability Statement

The numerical results reported in the article can be reproduced by the Supplementary Files. One file with R code corresponds to Table 1. The other with Python 3.13 code corresponds to Table 2 and Table 3.

Acknowledgments

The author is grateful to Bin Yu and Jorma Rissanen for their guidance in exploring this topic. He would like to thank Xiaoshan Gao for his support of this research, Zhanyu Wang for proofreading the manuscript, and his family for their unwavering support.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NML    normalized maximum likelihood
$L_{NML@\Phi}$    NML code length under dictionary $\Phi$
$L_{Bpredictive}$    Bayesian predictive code length
AC    arithmetic coding
LAN    local asymptotic normality

Appendix A

This section contains the proofs of the results in Section 2 and Section 3. The following basic facts about the exponential family (4) are needed, see [35].
  • $E(S(X)) = \dot A(\theta)$, and $\mathrm{Var}(S(X)) = \ddot A(\theta)$.
  • $\dot A(\cdot)$ is one to one on the natural parameter space.
  • The MLE $\hat\theta_n$ based on $(X_1,\ldots,X_n)$ is given by $\hat\theta_n = \dot A^{-1}(\bar S_n)$, where $\bar S_n = \frac{1}{n}\sum_{i=1}^n S(X_i)$.
  • The Fisher information matrix $I(\theta) = \ddot A(\theta)$.
Proof of Theorem 1.
In the canonical exponential family, the natural parameter space is open and convex. Since $\int_\Theta|I(\theta)|^{1/2}\,d\theta < \infty$, we can find a sequence of bounded sets $\{\Theta^{(k)}, k=1,2,\ldots\}$ such that $\log\big[\int_\Theta|I(\theta)|^{1/2}\,d\theta\big] - \log\big[\int_{\Theta^{(k)}}|I(\theta)|^{1/2}\,d\theta\big] = \epsilon_k$, where $\epsilon_k \to 0$. Furthermore, we can select each bounded set $\Theta^{(k)}$ so that it can be partitioned into disjoint cubes, each denoted by $U(\theta_j^{(k)}, \frac{r}{\sqrt{n}})$ with $\theta_j^{(k)}$ as its center and $\frac{r}{\sqrt{n}}$ as its side length. Namely, $\Theta^{(k)} = \bigcup_j U(\theta_j^{(k)}, \frac{r}{\sqrt{n}})$, and $U(\theta_{j_1}^{(k)}, \frac{r}{\sqrt{n}}) \cap U(\theta_{j_2}^{(k)}, \frac{r}{\sqrt{n}}) = \emptyset$ for $j_1 \ne j_2$.
The normalizing constant in Equation (6) can be summed (integrated in the case of continuous variables) by grouping over the sufficient statistic $\sum_{i=1}^n S(x_i)$, and in turn over the MLE $\hat\theta_n$:
$$\sum_{\{x^{(n)}:\ \hat\theta_n\in\Theta^{(k)}\}} p(x^{(n)};\hat\theta_n) = \sum_{U(\theta_j^{(k)}, \frac{r}{\sqrt{n}})}\ \sum_{\{x^{(n)}:\ \hat\theta_n\in U(\theta_j^{(k)}, \frac{r}{\sqrt{n}})\}} p(x^{(n)};\hat\theta_n),$$
$$p(x^{(n)};\hat\theta_n) = e^{\hat\theta_n^T\sum_{i=1}^n S(x_i) - nA(\hat\theta_n)}\,\mu^n(dx^n) = e^{n\hat\theta_n^T\bar S_n - nA(\hat\theta_n)}\,\mu^n(dx^n).$$
Now expand $n[\theta^T\bar S_n - A(\theta)]$ around $\hat\theta_n$ within the neighborhood $U(\theta_j^{(k)}, \frac{r}{\sqrt{n}})$:
$$n[\theta^T\bar S_n - A(\theta)] = n[\hat\theta_n^T\bar S_n - A(\hat\theta_n)] + (\theta-\hat\theta_n)^T[n\bar S_n - n\dot A(\hat\theta_n)] - \frac12(\theta-\hat\theta_n)^T[n\ddot A(\hat\theta_n)](\theta-\hat\theta_n) + M_1 n\|\theta-\hat\theta_n\|^3.$$
Since the MLE $\hat\theta_n = \dot A^{-1}(\bar S_n)$, the second term is zero. Furthermore, we expand $\ddot A(\hat\theta_n)$ around $\ddot A(\theta)$ and rearrange the terms in the equation; then we have
$$n[\hat\theta_n^T\bar S_n - A(\hat\theta_n)] = n[\theta^T\bar S_n - A(\theta)] + \frac12(\hat\theta_n-\theta)^T[n\ddot A(\theta)](\hat\theta_n-\theta) + M_2 n\|\theta-\hat\theta_n\|^3,$$
where the constant $M_2$ involves the third-order derivatives of $A(\theta)$, which are continuous in the canonical exponential family and thus bounded on the bounded set $\Theta^{(k)}$. In other words, $M_2$ is bounded uniformly across all $\{U(\theta_j^{(k)}, \frac{r}{\sqrt{n}})\}$. Similar bounded constants will be used repeatedly hereafter. Then Equation (A2) becomes
$$p(x^{(n)};\hat\theta_n) = e^{\frac12(\theta-\hat\theta_n)^T[n\ddot A(\theta)](\theta-\hat\theta_n) + M_2 n\|\theta-\hat\theta_n\|^3}\, e^{n[\theta^T\bar S_n - A(\theta)]}\,\mu^n(dx^n).$$
Notice that the exponential form $e^{n[\theta^T\bar S_n - A(\theta)]}$ is the density of $n\bar S_n$. If we consider i.i.d. random variables $Y_1,\ldots,Y_n$ sampled from the exponential distribution (4), the MLE $\hat\theta(Y^{(n)})$ is a random variable. Take $\theta = \theta_j^{(k)}$; then the sum of the above quantity over the neighborhood $U(\theta_j^{(k)}, \frac{r}{\sqrt{n}})$ is nothing but the following expectation involving $\hat\theta(Y^{(n)}) = \dot A^{-1}(\bar S_n)$ with respect to the distribution of $n\bar S_n$, evaluated at the parameter $\theta_j^{(k)}$:
$$\sum_{\{x^{(n)}:\ \hat\theta_n\in U(\theta_j^{(k)}, \frac{r}{\sqrt{n}})\}} p(x^{(n)};\hat\theta_n) = E_{\theta_j^{(k)}}\Big[\mathbf 1\big[\hat\theta_n\in U(\theta_j^{(k)}, \tfrac{r}{\sqrt{n}})\big]\, e^{\frac12(\hat\theta_n-\theta_j^{(k)})^T[n\ddot A(\theta_j^{(k)})](\hat\theta_n-\theta_j^{(k)}) + M_2 n\|\hat\theta_n-\theta_j^{(k)}\|^3}\Big].$$
Let $\xi_n = \sqrt{n}(\hat\theta_n - \theta_j^{(k)})$. Now $\hat\theta_n \in U(\theta_j^{(k)}, \frac{r}{\sqrt{n}})$ if and only if $\xi_n \in U(0, r)$, where $U(0,r)$ is the $d$-dimensional cube centered at zero with side length $r$. Next, expanding $e^{M_2 n\|\hat\theta_n - \theta_j^{(k)}\|^3}$ in the neighborhood, the above becomes
$$\sum_{\{\hat\theta_n\in U(\theta_j^{(k)}, \frac{r}{\sqrt{n}})\}} p(x^{(n)};\hat\theta_n) = E\Big[\mathbf 1[\xi_n\in U(0,r)]\, e^{\frac12\xi_n^T I(\theta_j^{(k)})\xi_n}\big(1 + M_3 n^{-\frac12}\big)\Big].$$
According to the central limit theorem, $\sqrt{n}\,[\bar S_n - \dot A(\theta_j^{(k)})] \xrightarrow{d} N(0, \ddot A(\theta_j^{(k)}))$. Moreover, the approximation error has the Berry–Esseen bound $O(n^{-\frac12})$, where the constant is determined by the bound on the third-order derivatives of $A(\theta)$. Similarly, we have the asymptotic normality of the MLE, $\xi_n(Y^{(n)}) \xrightarrow{d} N(0, I(\theta_j^{(k)})^{-1})$, where the Berry–Esseen bound is valid for the convergence; see [36]. Therefore, the expectation converges as follows:
$$E\Big\{\mathbf 1[\xi_n\in U(0,r)]\, e^{\frac12\xi_n^T I(\theta_j^{(k)})\xi_n}\big(1+M_3 n^{-\frac12}\big)\Big\} = \int \mathbf 1[\xi_n\in U(0,r)]\, e^{\frac12\xi_n^T I(\theta_j^{(k)})\xi_n}\big(1+M_3 n^{-\frac12}\big)\, \frac{|I(\theta_j^{(k)})|^{1/2}}{(2\pi)^{d/2}}\, e^{-\frac12\xi_n^T I(\theta_j^{(k)})\xi_n}\, d\xi_n + M_4 n^{-\frac12} = (2\pi)^{-\frac d2}|I(\theta_j^{(k)})|^{\frac12}\int \mathbf 1[\xi_n\in U(0,r)]\big(1+M_3 n^{-\frac12}\big)\, d\xi_n + M_4 n^{-\frac12} = (2\pi)^{-\frac d2}|I(\theta_j^{(k)})|^{\frac12}\, r^d\big(1+M_3 n^{-\frac12}\big) + M_4 n^{-\frac12} = \Big(\frac{n}{2\pi}\Big)^{\frac d2}|I(\theta_j^{(k)})|^{\frac12}\Big(\frac{r^d}{n^{d/2}}\Big)\big(1+M_3 n^{-\frac12}\big).$$
Plugging this into the sum (A1), we obtain
$$\sum_{\{x^{(n)}:\ \hat\theta_n\in\Theta^{(k)}\}} p(x^{(n)};\hat\theta_n) = \Big(\frac{n}{2\pi}\Big)^{\frac d2}\Big[\sum_{U(\theta_j^{(k)}, \frac{r}{\sqrt{n}})} |I(\theta_j^{(k)})|^{1/2}\Big(\frac{r^d}{n^{d/2}}\Big)\Big]\big(1+M_3 n^{-\frac12}\big) \approx \Big(\frac{n}{2\pi}\Big)^{\frac d2}\Big[\int_{\Theta^{(k)}}|I(\theta)|^{1/2}\,d\theta\Big]\big(1+M_3 n^{-\frac12}\big),$$
$$\log\Big[\sum_{\{x^{(n)}:\ \hat\theta_n\in\Theta^{(k)}\}} p(x^{(n)};\hat\theta_n)\Big] = \frac d2\log\frac{n}{2\pi} + \log\int_{\Theta^{(k)}}|I(\theta)|^{1/2}\,d\theta + M_3 n^{-\frac12} = \frac d2\log\frac{n}{2\pi} + \log\int_\Theta|I(\theta)|^{1/2}\,d\theta + \Big[\log\int_{\Theta^{(k)}}|I(\theta)|^{1/2}\,d\theta - \log\int_\Theta|I(\theta)|^{1/2}\,d\theta\Big] + M_3 n^{-\frac12} = \frac d2\log\frac{n}{2\pi} + \log\int_\Theta|I(\theta)|^{1/2}\,d\theta - \epsilon_k + M_3 n^{-\frac12}.$$
Note that the bound $M_3$ of the last term relies solely on $\Theta^{(k)}$. For a given $k$, we select $n$ such that the last term is sufficiently small. This completes the proof. □
Proof of Theorem 2.
First we consider the conjugate prior of (4), which takes the form
$$ u(\theta) = \exp\big\{\alpha^{T}\theta - \beta A(\theta) - B(\alpha,\beta)\big\}, $$
where $\alpha$ is a vector in $\mathbb{R}^{d}$, $\beta$ is a scalar, and $B(\alpha,\beta) = \log\int_{\Theta}\exp\{\alpha^{T}\theta - \beta A(\theta)\}\, d\theta$. Then the marginal density is
$$ m(x^{(n)}) = \exp\Big\{B\Big(\sum_{i=1}^{n} S(x_i)+\alpha,\; n+\beta\Big) - B(\alpha,\beta)\Big\}, $$
according to the definition of $B(\cdot,\cdot)$. Therefore
$$ L_{\mathrm{mixture}} = B(\alpha,\beta) - B\Big(\sum_{i=1}^{n} S(x_i)+\alpha,\; n+\beta\Big) = B(\alpha,\beta) - \log\Big(\int_{\Theta}\exp\{-n L_n(t)\}\, dt\Big), $$
where $n L_n(t) = -\big[\sum_{i=1}^{n} S(x_i)+\alpha\big]^{T} t + (n+\beta) A(t)$. The minimum of $L_n(t)$ is achieved at
$$ \tilde{\theta}_n = \tilde{\theta}(x^{(n)}) = \dot{A}^{-1}\Big(\frac{\sum_{i=1}^{n} S(x_i)+\alpha}{n+\beta}\Big). $$
Notice that
$$ \dot{A}(\tilde{\theta}_n) = \frac{\sum_{i=1}^{n} S(x_i)+\alpha}{n+\beta} = \frac{\sum_{i=1}^{n} S(x_i)}{n} + O\Big(\frac{1}{n}\Big) = \dot{A}(\hat{\theta}_n) + O\Big(\frac{1}{n}\Big). $$
Through Taylor’s expansion, it can be shown that
$$ \tilde{\theta}_n = \hat{\theta}_n + O\Big(\frac{1}{n}\Big). $$
Notice that $\ddot{L}_n(t) = \frac{n+\beta}{n}\ddot{A}(t)$. Let $\Sigma = \ddot{L}_n^{-1}(\tilde{\theta}_n) = \frac{n}{n+\beta}\ddot{A}^{-1}(\tilde{\theta}_n)$. By expanding $L_n(t)$ at the saddle point $\tilde{\theta}_n$ and applying the Laplace method (see [37]), we have
$$ \log\Big(\int_{\Theta}\exp\{-n L_n(t)\}\, dt\Big) = -\frac{d}{2}\log\frac{n}{2\pi} + \frac{1}{2}\log(\det\Sigma) - n L_n(\tilde{\theta}_n) + O\Big(\frac{1}{n}\Big) \approx -\frac{d}{2}\log\frac{n}{2\pi} - \frac{1}{2}\log\big(\det I(\tilde{\theta}_n)\big) - n L_n(\tilde{\theta}_n). $$
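For completeness, the version of the Laplace approximation invoked here (see [37]) may be recorded in the present notation as follows; this is a standard formula restated for the reader, not an additional result:
$$ \int_{\Theta} e^{-nL_n(t)}\,dt = e^{-nL_n(\tilde{\theta}_n)}\Big(\frac{2\pi}{n}\Big)^{d/2}\big(\det\ddot{L}_n(\tilde{\theta}_n)\big)^{-1/2}\Big(1+O\Big(\frac{1}{n}\Big)\Big), \qquad \det\ddot{L}_n(\tilde{\theta}_n)=\Big(\frac{n+\beta}{n}\Big)^{d}\det\ddot{A}(\tilde{\theta}_n)=\det I(\tilde{\theta}_n)\,\big(1+O(\tfrac{1}{n})\big). $$
Taking logarithms and using $\det\Sigma=\big(\det\ddot{L}_n(\tilde{\theta}_n)\big)^{-1}$ yields the display above.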
Next,
$$
\begin{aligned}
L_{\mathrm{mixture}} - nH(\hat{\theta}_n)
&\approx B(\alpha,\beta) + \frac{d}{2}\log\frac{n}{2\pi} + \frac{1}{2}\log\big(\det I(\tilde{\theta}_n)\big) + n L_n(\tilde{\theta}_n) - nH(\hat{\theta}_n) \\
&= B(\alpha,\beta) + \frac{d}{2}\log\frac{n}{2\pi} + \frac{1}{2}\log\big(\det I(\tilde{\theta}_n)\big) - \Big[\sum_{i=1}^{n}S(x_i)+\alpha\Big]^{T}\tilde{\theta}_n + (n+\beta)A(\tilde{\theta}_n) - nA(\hat{\theta}_n) + \Big[\sum_{i=1}^{n}S(x_i)\Big]^{T}\hat{\theta}_n \\
&= \frac{d}{2}\log\frac{n}{2\pi} + \frac{1}{2}\log\big(\det I(\tilde{\theta}_n)\big) - \big[\alpha^{T}\tilde{\theta}_n - \beta A(\tilde{\theta}_n) - B(\alpha,\beta)\big] - \Big[\sum_{i=1}^{n}S(x_i)\Big]^{T}(\tilde{\theta}_n-\hat{\theta}_n) + n\big[A(\tilde{\theta}_n)-A(\hat{\theta}_n)\big] \\
&= \frac{d}{2}\log\frac{n}{2\pi} + \frac{1}{2}\log\big(\det I(\tilde{\theta}_n)\big) - \log w(\tilde{\theta}_n) - \Big[\sum_{i=1}^{n}S(x_i)\Big]^{T}(\tilde{\theta}_n-\hat{\theta}_n) + n\Big[\dot{A}(\hat{\theta}_n)^{T}(\tilde{\theta}_n-\hat{\theta}_n) + \frac{1}{2}(\tilde{\theta}_n-\hat{\theta}_n)^{T}\ddot{A}(\hat{\theta}_n)(\tilde{\theta}_n-\hat{\theta}_n)\Big] + O\Big(\frac{1}{n}\Big) \\
&= \frac{d}{2}\log\frac{n}{2\pi} + \frac{1}{2}\log\big(\det I(\tilde{\theta}_n)\big) - \log w(\tilde{\theta}_n) - n\big[\bar{S}_n - \dot{A}(\hat{\theta}_n)\big]^{T}(\tilde{\theta}_n-\hat{\theta}_n) + \frac{n}{2}(\tilde{\theta}_n-\hat{\theta}_n)^{T}\ddot{A}(\hat{\theta}_n)(\tilde{\theta}_n-\hat{\theta}_n) + O\Big(\frac{1}{n}\Big) \\
&\approx \frac{d}{2}\log\frac{n}{2\pi} + \frac{1}{2}\log\big(\det I(\hat{\theta}_n)\big) - \log w(\hat{\theta}_n).
\end{aligned}
$$
The last step is valid because $\hat{\theta}_n = \dot{A}^{-1}(\bar{S}_n)$ and (A5) hold. This proves the case of the conjugate prior $u(\theta)$ in (A3). Meanwhile, we obtain the expansion of (A4):
$$
\begin{aligned}
m(x^{(n)}) = \exp\{-L_{\mathrm{mixture}}\}
&= \exp\Big\{B\Big(\sum_{i=1}^{n}S(x_i)+\alpha,\; n+\beta\Big) - B(\alpha,\beta)\Big\} \\
&= \exp\Big\{-nH(\hat{\theta}_n) - \frac{d}{2}\log\frac{n}{2\pi} - \frac{1}{2}\log\big(\det I(\hat{\theta}_n)\big) + \log w(\hat{\theta}_n) + o(1)\Big\} \\
&= \big[w(\hat{\theta}_n)+o(1)\big]\exp\Big\{-nH(\hat{\theta}_n) - \frac{d}{2}\log\frac{n}{2\pi} - \frac{1}{2}\log\big(\det I(\hat{\theta}_n)\big)\Big\}.
\end{aligned}
$$
Suppose now that the prior of $\theta$ is a finite mixture of conjugate distributions of the form (A3),
$$ w(\theta) = \sum_{j=1}^{J}\lambda_j \exp\big\{\alpha_j^{T}\theta - \beta_j A(\theta) - B(\alpha_j,\beta_j)\big\} = \sum_{j=1}^{J}\lambda_j\, u_j(\theta), $$
where $\sum_{j=1}^{J}\lambda_j = 1$ and $0<\lambda_j<1$ for $j=1,\dots,J$. Then the marginal density is given by
$$
\begin{aligned}
m(x^{(n)}) &= \sum_{j=1}^{J}\lambda_j \exp\Big\{B\Big(\sum_{i=1}^{n}S(x_i)+\alpha_j,\; n+\beta_j\Big) - B(\alpha_j,\beta_j)\Big\} \\
&= \sum_{j=1}^{J}\lambda_j \big[u_j(\hat{\theta}_n)+o(1)\big]\exp\Big\{-nH(\hat{\theta}_n) - \frac{d}{2}\log\frac{n}{2\pi} - \frac{1}{2}\log\big(\det I(\hat{\theta}_n)\big)\Big\} \\
&= \exp\Big\{-nH(\hat{\theta}_n) - \frac{d}{2}\log\frac{n}{2\pi} - \frac{1}{2}\log\big(\det I(\hat{\theta}_n)\big)\Big\}\Big[\sum_{j=1}^{J}\lambda_j u_j(\hat{\theta}_n) + o(1)\Big] \\
&= \exp\Big\{-nH(\hat{\theta}_n) - \frac{d}{2}\log\frac{n}{2\pi} - \frac{1}{2}\log\big(\det I(\hat{\theta}_n)\big)\Big\}\big[w(\hat{\theta}_n)+o(1)\big] \\
&= \exp\Big\{-nH(\hat{\theta}_n) - \frac{d}{2}\log\frac{n}{2\pi} - \frac{1}{2}\log\big(\det I(\hat{\theta}_n)\big) + \log w(\hat{\theta}_n) + o(1)\Big\}.
\end{aligned}
$$
Each summand above is approximated by (A6). This completes the proof, because $L_{\mathrm{mixture}} = -\log m(x^{(n)})$. □
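To make the mixture code length concrete, the following sketch (illustrative only, not the paper's implementation) specializes the above to a binary alphabet with the Jeffreys conjugate prior Beta(1/2, 1/2), a member of family (A3). It computes $-\log m(x^{(n)})$ three ways: in closed form via log-Beta functions, sequentially as a sum of predictive code lengths (the Bayesian prediction mechanism), and through the asymptotic expansion $nH(\hat{\theta}_n)+\frac{d}{2}\log\frac{n}{2\pi}+\frac{1}{2}\log\det I(\hat{\theta}_n)-\log w(\hat{\theta}_n)$. All lengths are in nats; the function names are illustrative.

import math
import random

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def mixture_code_length_exact(x, a=0.5, b=0.5):
    # -log m(x^(n)) in closed form with a Beta(a, b) conjugate prior
    k, n = sum(x), len(x)
    return log_beta(a, b) - log_beta(k + a, n - k + b)

def mixture_code_length_sequential(x, a=0.5, b=0.5):
    # one-pass predictive coding: accumulate -log P(x_i | x_1, ..., x_{i-1})
    total, k = 0.0, 0
    for i, xi in enumerate(x):
        p_one = (k + a) / (i + a + b)
        total -= math.log(p_one if xi == 1 else 1.0 - p_one)
        k += xi
    return total

def mixture_code_length_asymptotic(x, a=0.5, b=0.5):
    # n*H(theta_hat) + (1/2) log(n/(2*pi)) + (1/2) log det I(theta_hat) - log w(theta_hat);
    # assumes 0 < theta_hat < 1
    k, n = sum(x), len(x)
    th = k / n
    entropy = -th * math.log(th) - (1 - th) * math.log(1 - th)
    fisher = 1.0 / (th * (1 - th))  # I(theta) for the Bernoulli family
    log_prior = (a - 1) * math.log(th) + (b - 1) * math.log(1 - th) - log_beta(a, b)
    return n * entropy + 0.5 * math.log(n / (2 * math.pi)) + 0.5 * math.log(fisher) - log_prior

random.seed(0)
x = [random.randint(0, 1) for _ in range(2000)]
print(mixture_code_length_exact(x),
      mixture_code_length_sequential(x),
      mixture_code_length_asymptotic(x))

The first two values coincide exactly, since the product of predictive probabilities telescopes to the marginal $m(x^{(n)})$, and the third differs only by the $o(1)$ remainder, which illustrates why the sequential Bayesian scheme attains the mixture code length in a single pass.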

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
  2. Rissanen, J. Strong optimality of the normalized ML models as universal codes and information in data. IEEE Trans. Inf. Theory 2001, 47, 1712–1717.
  3. Shtarkov, Y.M. Universal sequential coding of single messages. Probl. Inf. Transm. 1987, 23, 3–17.
  4. Clarke, B.S.; Barron, A.R. Jeffreys' prior is asymptotically least favorable under entropy risk. J. Stat. Plan. Inference 1994, 41, 37–64.
  5. Li, L.M.; Yu, B. Iterated logarithmic expansions of the pathwise code lengths for exponential families. IEEE Trans. Inf. Theory 2000, 46, 2683–2689.
  6. Barron, A.; Rissanen, J.; Yu, B. The minimum description length principle in coding and modeling. IEEE Trans. Inf. Theory 1998, 44, 2743–2760.
  7. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: Hoboken, NJ, USA, 1991.
  8. Rissanen, J. Generalized Kraft inequality and arithmetic coding. IBM J. Res. Dev. 1976, 20, 199–203.
  9. Rissanen, J.J.; Langdon, G.G. Arithmetic coding. IBM J. Res. Dev. 1979, 23, 149–162.
  10. Cover, T.M.; Gacs, P.; Gray, R.M. Kolmogorov's contributions to information theory and algorithmic complexity. Ann. Probab. 1989, 17, 840–865.
  11. Li, M.; Vitányi, P. An Introduction to Kolmogorov Complexity and Its Applications; Springer: New York, NY, USA, 1996.
  12. Vitányi, P.; Li, M. Minimum description length induction, Bayesianism, and Kolmogorov complexity. IEEE Trans. Inf. Theory 2000, 46, 446–464.
  13. Haussler, D. A general minimax result for relative entropy. IEEE Trans. Inf. Theory 1997, 43, 1276–1280.
  14. Krichevsky, R.E. The connection between the redundancy and reliability of information about the source. Probl. Inf. Transm. 1968, 4, 48–57.
  15. Rissanen, J. Stochastic complexity and modeling. Ann. Stat. 1986, 14, 1080–1100.
  16. Akaike, H. On entropy maximisation principle. In Applications of Statistics; Krishnaiah, P.R., Ed.; North-Holland: Amsterdam, The Netherlands, 1977; pp. 27–41.
  17. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464.
  18. Le Cam, L.; Yang, G. Asymptotics in Statistics: Some Basic Concepts; Springer: New York, NY, USA, 2000. Available online: https://link.springer.com/book/10.1007/978-1-4612-1166-2 (accessed on 14 July 2025).
  19. Rissanen, J. Fisher information and stochastic complexity. IEEE Trans. Inf. Theory 1996, 42, 40–47.
  20. Rissanen, J. A predictive least squares principle. IMA J. Math. Control Inf. 1986, 3, 211–222.
  21. Dawid, A.P. Present position and potential developments: Some personal views, statistical theory, the prequential approach. J. R. Stat. Soc. Ser. A 1984, 147, 278–292.
  22. Dawid, A.P. Prequential analysis, stochastic complexity and Bayesian inference. In Proceedings of the Fourth Valencia International Meeting on Bayesian Statistics, Peñíscola, Spain, 15–20 April 1991; pp. 15–20.
  23. Powell, M.J.D. Approximation Theory and Methods; Cambridge University Press: Cambridge, UK, 1981.
  24. Prolla, J.B. A generalized Bernstein approximation theorem. Math. Proc. Camb. Philos. Soc. 1988, 104, 317–330.
  25. Durrett, R. Probability: Theory and Examples; Wadsworth & Brooks/Cole: Pacific Grove, CA, USA, 1991.
  26. Rissanen, J. Stochastic Complexity in Statistical Inquiry; World Scientific: Singapore, 1989.
  27. Hansen, M.; Yu, B. Model selection and the principle of minimum description length. J. Am. Stat. Assoc. 2001, 96, 746–774.
  28. Grünwald, P. The Minimum Description Length Principle; MIT Press: Cambridge, MA, USA; London, UK, 2007.
  29. Ryabko, B.; Astola, J.; Malyutov, M. Statistical Methods Based on Universal Codes; Springer International Publishing: Cham, Switzerland, 2016.
  30. Ryabko, B.; Astola, J.; Gammerman, A. Application of Kolmogorov complexity and universal codes to identity testing and nonparametric testing of serial independence for time series. Theor. Comput. Sci. 2006, 359, 440–448.
  31. Usotskaya, N.; Ryabko, B. Applications of information-theoretic tests for analysis of DNA sequences based on Markov chain models. Comput. Stat. Data Anal. 2009, 53, 1861–1872.
  32. E. coli Genome and Protein Genes. Available online: https://www.ncbi.nlm.nih.gov/genome (accessed on 14 July 2025).
  33. Suzuki, A.; Yamanishi, K. Exact calculation of normalized maximum likelihood code length using Fourier analysis. In Proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 1211–1215.
  34. Ulacha, G.; Łazoryszczak, M. Lossless image compression using context-dependent linear prediction based on mean absolute error minimization. Entropy 2024, 26, 1115.
  35. Brown, L.D. Fundamentals of Statistical Exponential Families: With Applications in Statistical Decision Theory; Institute of Mathematical Statistics: Hayward, CA, USA, 1986.
  36. Pinelis, I.; Molzon, R. Optimal-order bounds on the rate of convergence to normality in the multivariate delta method. Electron. J. Stat. 2016, 10, 1001–1063.
  37. De Bruijn, N.G. Asymptotic Methods in Analysis; North-Holland: Amsterdam, The Netherlands, 1958.
Figure 1. Correspondence between probability models and string parsing by a dictionary. The parsing of a string requires a dictionary Φ , which can be defined either prior to parsing or dynamically during parsing as in Lempel–Ziv coding.
Figure 2. An implementation of NML code length. The data were read in two rounds, one for counting word frequencies, and the other for encoding.
Figure 3. Scheme of Bayesian predictive coding.
Table 1. The data compression rates of a binary string of size 3000 under different parsing models. The data were simulated in R according to Bernoulli trials with probability 0.5.
Parsing of the string by nine libraries (columns 1–3) and compression rates by three quantities (columns 4–6):

| Word Length | n: Word Number | d: Size of Dictionary | $nH(\hat{\theta}_n)$ | $nH(\hat{\theta}_n)+\frac{d}{2}\log n$ | $L_{\mathrm{NML}}$ (13) |
|---|---|---|---|---|---|
| 1 | 3000 | 2 | 0.999918 | 1.001843 | 1.001952 |
| 2 | 1500 | 4 | 0.998591 | 1.003866 | 1.003641 |
| 3 | 1000 | 8 | 0.998961 | 1.010587 | 1.008834 |
| 4 | 750 | 16 | 0.995422 | 1.019299 | 1.012974 |
| 5 | 600 | 32 | 0.989603 | 1.037285 | 1.018977 |
| 6 | 500 | 64 | 0.981597 | 1.075738 | 1.027959 |
| 7 | 428 | 124 | 0.964885 | 1.144324 | 1.031261 |
| 8 | 375 | 196 | 0.928930 | 1.206829 | 1.006312 |
| 9 | 333 | 252 | 0.872958 | 1.223846 | 0.950283 |
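The first numeric column of Table 1 can be reproduced, up to simulation noise, with a short script of the following kind (a sketch under stated assumptions: the pseudo-random bits stand in for the R-simulated Bernoulli trials, the string is parsed into non-overlapping words of a fixed length, and the reported rate is the plug-in code length $nH(\hat{\theta}_n)$ in bits divided by the original 3000 bits; the remaining columns additionally involve the penalty terms of (13), which are not re-implemented here).

import math
import random
from collections import Counter

def plug_in_rate(bits, word_len):
    # parse into non-overlapping words, compute n * H(theta_hat) in bits,
    # and normalize by the original string length
    n = len(bits) // word_len
    words = [tuple(bits[i * word_len:(i + 1) * word_len]) for i in range(n)]
    counts = Counter(words)
    entropy_bits = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * entropy_bits / len(bits)

random.seed(1)
bits = [random.randint(0, 1) for _ in range(3000)]
for L in range(1, 10):
    print(L, round(plug_in_rate(bits, L), 6))

Only the first of the three quantities in Table 1 is reproduced here; it illustrates how the plug-in term alone drifts downward as the dictionary grows.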
Table 2. The data compression rates of E. coli ORF b0059 calculated by (13) under different parsing models.
| Model | n: Word Number | d: Size of Φ | Empirical Entropy | 1st Term | Rest Terms | $L_{\mathrm{NML}}@\Phi$ Total Bits | Compression Rate |
|---|---|---|---|---|---|---|---|
| 1 | 2907 | 4 | 1.9924 | 5792.00 | 16.58 | 5808.58 | 0.9991 |
| 2.0 | 1453 | 16 | 3.9570 | 5749.49 | 59.81 | 5809.31 | 0.9995 |
| 2.1 | 1453 | 16 | 3.9425 | 5728.44 | 59.81 | 5788.25 | 0.9959 |
| 3.0 | 969 | 58 | 5.2842 | 5120.39 | 157.11 | 5277.51 | 0.9077 |
| 3.1 | 968 | 63 | 5.5905 | 5411.63 | 167.13 | 5578.76 | 0.9605 |
| 3.2 | 968 | 64 | 5.6706 | 5489.10 | 169.11 | 5658.21 | 0.9742 |
| 4.0 | 726 | 218 | 7.4507 | 5409.24 | 345.07 | 5754.31 | 0.9908 |
| 4.1 | 726 | 217 | 7.4337 | 5396.87 | 344.20 | 5741.07 | 0.9885 |
| 4.2 | 726 | 219 | 7.4814 | 5431.49 | 345.94 | 5777.43 | 0.9947 |
| 4.3 | 726 | 221 | 7.4678 | 5421.64 | 347.67 | 5769.31 | 0.9933 |
| a.a. | 969 | 21 | 4.1056 | 3978.31 | 69.92 | 4048.22 | 0.6963 |
Table 3. The data compression rates of E. coli ORF b0060 and statistics from permutations. The protein-coding gene sequence consists of 2352 nucleotide bases. The column headers list the parsing models, and the first data row shows the compression rates computed by $L_{\mathrm{NML}}$ (13) for the raw sequence. The subsequent rows present statistics obtained from permutations, in order, for $L_{\mathrm{NML}}$ (13), $nH(\hat{\theta}_n)$, and $nH(\hat{\theta}_n)+\frac{d}{2}\log n$, respectively. For $nH(\hat{\theta}_n)$, the 99%-quantiles are shown, all of which are smaller than 1. For the other two, the 1%-quantiles are shown, all of which are larger than 1 except in Model 1.0.
The ten column headers 1.0–4.3 are the models generated by different parsing; the first data row is computed from the raw sequence, and the subsequent rows are statistics of the permutations.

| Quantity | Statistic | 1.0 | 2.0 | 2.1 | 3.0 | 3.1 | 3.2 | 4.0 | 4.1 | 4.2 | 4.3 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| $L_{\mathrm{NML}}$ (13) | raw data | 0.999 | 1.000 | 1.001 | 0.958 | 0.980 | 0.989 | 1.001 | 1.002 | 0.997 | 0.999 |
| $L_{\mathrm{NML}}$ (13) | average | 0.999 | 1.006 | 1.006 | 1.020 | 1.020 | 1.020 | 1.020 | 1.020 | 1.020 | 1.020 |
|  | SD (×10⁻³) | 0.00 | 0.74 | 0.76 | 1.69 | 1.77 | 1.71 | 4.22 | 4.33 | 4.15 | 3.92 |
|  | 1%-quantile | 0.999 | 1.004 | 1.004 | 1.016 | 1.016 | 1.016 | 1.011 | 1.010 | 1.011 | 1.011 |
| $nH(\hat{\theta}_n)$ | average | 0.996 | 0.994 | 0.994 | 0.986 | 0.987 | 0.986 | 0.952 | 0.952 | 0.952 | 0.952 |
|  | SD (×10⁻³) | 0.00 | 0.74 | 0.76 | 1.69 | 1.77 | 1.71 | 3.69 | 3.79 | 3.63 | 3.42 |
|  | 99%-quantile | 0.996 | 0.995 | 0.995 | 0.990 | 0.990 | 0.990 | 0.961 | 0.961 | 0.960 | 0.960 |
| $nH(\hat{\theta}_n)+\frac{d}{2}\log n$ | average | 0.999 | 1.010 | 1.010 | 1.051 | 1.051 | 1.051 | 1.174 | 1.174 | 1.174 | 1.174 |
|  | SD (×10⁻³) | 0.00 | 0.74 | 0.76 | 1.70 | 1.78 | 1.71 | 7.49 | 7.66 | 7.37 | 7.01 |
|  | 1%-quantile | 0.999 | 1.008 | 1.008 | 1.046 | 1.047 | 1.047 | 1.157 | 1.156 | 1.157 | 1.157 |