Effective Complexity of Stationary Process Realizations

Nihat Ay, Markus Müller and Arleta Szkoła
1 Max Planck Institute for Mathematics in the Sciences, Inselstr. 22, Leipzig 04103, Germany
2 Institute of Mathematics 7-2, Technical University Berlin, Straße des 17. Juni 136, Berlin 10623, Germany
3 Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
* Author to whom correspondence should be addressed.
Entropy 2011, 13(6), 1200-1211; https://doi.org/10.3390/e13061200
Submission received: 8 May 2011 / Revised: 15 June 2011 / Accepted: 17 June 2011 / Published: 22 June 2011

Abstract

The concept of the effective complexity of an object as the minimal description length of its regularities was initiated by Gell-Mann and Lloyd. The regularities are modeled by means of ensembles, i.e., probability distributions on finite binary strings. In our previous paper [1] we proposed a definition of effective complexity in precise terms of algorithmic information theory. Here we investigate the effective complexity of binary strings generated by stationary, in general not computable, processes. We show that under not too strong conditions long typical process realizations are effectively simple. Our results become most transparent in the context of coarse effective complexity, a modification of the original notion of effective complexity that needs fewer parameters in its definition. A similar modification of the related concept of sophistication has been suggested by Antunes and Fortnow.

1. Introduction

The concept of effective complexity was introduced by Gell-Mann and Lloyd in [2], see also [3]. The main motivation was to define a complexity measure that distinguishes between the regular and the random aspects of a given object, typically encoded as a binary string. This is in contrast to Kolmogorov complexity, which is not sensitive to the source of incompressibility and in this sense fails to capture what is meant by complexity in common language.
The main idea underlying the concept has been considered in various places in the literature, see [4,5,6,7,8,9,10]. It may be summarized as follows. One considers programs computing a given binary string as consisting of two parts: the implementation of an algorithm and a valid input for that algorithm. The corresponding measures of complexity then refer to the algorithm part.
In [2] the algorithm part has been motivated as the description of a physical theory, represented by a probability distribution on finite binary strings, while the second part serves to single out one among all possible objects contained in the (typical) support of the distribution. Effective complexity is the length of the algorithm/theory part, minimized over the set of programs that compute the string and that are almost minimal, i.e., whose length is close to the Kolmogorov complexity of the string.
In [1] we have proposed a definition of effective complexity in precise terms of algorithmic information theory. Our formalization allows us to include the concept in the context of algorithmic statistics, which also deals with two-part codings of binary strings [5]. Instances of corresponding measures of complexity are the Kolmogorov minimal sufficient statistic and sophistication [5,10]. Roughly speaking, while the Kolmogorov minimal sufficient statistic of a binary string x is the minimal algorithmic statistic of x from the model class of finite sets, and sophistication refers to the model class of total programs, effective complexity essentially coincides with the length of algorithmic statistics of x minimized over computable probability distributions.
More precisely, the minimization domain of effective complexity consists of computable probability distributions whose total information is approximately equal to the Kolmogorov complexity of the string, the tolerance level being specified by a parameter Δ. Total information has been defined by Gell-Mann and Lloyd in [2,3] as the sum of the Kolmogorov complexity and the Shannon entropy of a given computable ensemble. It is worth mentioning that it is equivalent to the concept of physical entropy introduced by Zurek for large physical systems such as thermodynamic engines [8].
Restricting the minimization domain of effective complexity by intersecting it with subsets corresponding to pre-knowledge about the object, which is subjective to the observer, one ends up with a version of effective complexity with constraints. As far as we know, there is no literature other than the papers by Gell-Mann and Lloyd [2] where the idea of incorporating subjective pre-knowledge into a measure of complexity has been considered explicitly.
Compared to effective complexity without constraints, which we will refer to as plain effective complexity or simply effective complexity, this gives a larger value, and it is the reason why Gell-Mann and Lloyd suggest using the constrained version instead of the plain one: “If we impose no other conditions, every entity would come out simple!”, see [2] (p. 392). This statement has to be contrasted with the fact that there exist strings with large plain effective complexity, cf. Theorem 13 of our previous work [1]. See also the corresponding results in the context of algorithmic statistics and sophistication, Theorem 2.2 in [5] and Theorem 6.5 in [6], respectively. Hence, the above conviction can be substantiated only in a weaker version referring to typical behaviour. In the present contribution, we find a framework to this end in the form of almost sure statements in terms of probability theory. The focus is on mathematical foundations for the concept of effective complexity. In particular, we extend the analysis of [1] to the context of asymptotic behaviour.
In more detail, we investigate discrete-time stochastic processes with binary state space in the context of effective complexity as defined in [1]. In addition to proving that typical strings are simple with respect to plain effective complexity, our results also allow a deeper understanding of the dependence of effective complexity on the parameter Δ. Recall that this parameter determines the minimization domain, consisting of computable ensembles with total information Δ-close to the Kolmogorov complexity of the string. A corresponding parameter also appears in the context of sophistication and, more generally, algorithmic sufficient statistics [5,10]. Conceptually, it also corresponds to the significance level of Bennett’s logical depth defined in [11]. The relation between effective complexity and logical depth has been elaborated in [1].
In [10] Antunes and Fortnow suggested a modification of sophistication called coarse sophistication. In an analogous way, we introduce coarse effective complexity. It modifies the original concept of plain effective complexity by, roughly speaking, incorporating Δ into the definition as a further minimization argument. As a consequence, the definition becomes independent of the choice of this parameter. Our main results on effective complexity have direct implications for the asymptotic behaviour of coarse effective complexity. In particular, for an arbitrary stationary process the coarse effective complexity of a typical finite string is asymptotically upper bounded by any linear function of the string’s length.
After fixing notations and the mathematical framework in Section 2, we formulate and prove our main result, Theorem 1, in Section 3. It states that sufficiently long typical strings generated by a stationary process are effectively simple. The proof relies on the observation that the total information of uniform distributions on universally typical subsets is upper bounded by a value that exceeds the Kolmogorov complexity of a typical string by an arbitrarily small linearly growing amount in the string’s length. In Section 4 we introduce the concept of coarse effective complexity. We show that strings of moderate value of coarse effective complexity exist, see Theorem 3, and derive from our main theorem an upper bound on the coarse effective complexity of long typical realizations of a stationary process, see Theorem 4. Finally, Section 5 contains some conclusions and an outlook for further analysis of effective complexity in its constrained version.

2. Notations and Preliminaries

We denote by $\{0,1\}^*$ the set of finite binary strings, i.e., $\{0,1\}^* = \{\lambda\} \cup \bigcup_{n \in \mathbb{N}} \{0,1\}^n$, where λ is the empty string, while the set of doubly infinite sequences $(\ldots, x_{-1}, x_0, x_1, \ldots)$ with $x_i \in \{0,1\}$, $i \in \mathbb{Z}$, is denoted by $\{0,1\}^{\mathbb{Z}}$. We write $\ell(x)$ for the length of $x \in \{0,1\}^*$. Finite blocks $x_m^n = (x_m, x_{m+1}, \ldots, x_n)$, $m \leq n$, of $x \in \{0,1\}^{\mathbb{Z}}$ are elements of $\{0,1\}^*$ of length $\ell(x_m^n) = n - m + 1$. We may identify them with cylinder sets $[x_m^n] := \{ y \in \{0,1\}^{\mathbb{Z}} : y_i = x_i,\ m \leq i \leq n \}$. In a similar fashion, strings $x \in \{0,1\}^*$ are associated to cylinder sets of the form $[x] := \{ y \in \{0,1\}^{\mathbb{Z}} : y_i = x_i,\ 1 \leq i \leq \ell(x) \}$. The σ-algebra on $\{0,1\}^{\mathbb{Z}}$ generated by the cylinder sets $[x_m^n]$, $m, n \in \mathbb{Z}$, $m \leq n$, is denoted by Σ. We write $\mathcal{T}(\{0,1\}^{\mathbb{Z}})$ for the convex set of probability measures on $(\{0,1\}^{\mathbb{Z}}, \Sigma)$ which are invariant with respect to the left shift T on $\{0,1\}^{\mathbb{Z}}$. The subset of ergodic T-invariant probability measures, i.e., the extremal points of $\mathcal{T}(\{0,1\}^{\mathbb{Z}})$, is denoted by $\mathcal{E}(\{0,1\}^{\mathbb{Z}})$.
Let $P \in \mathcal{T}(\{0,1\}^{\mathbb{Z}})$. The random variables $X_i$, $i \in \mathbb{Z}$, given by the coordinate projections $X_i(x) := x_i$, $x \in \{0,1\}^{\mathbb{Z}}$, represent a stationary process with values in {0,1}. Typical outcomes of such stochastic processes are the main focus of the present paper; the goal is to estimate their effective complexity. We will refer to elements of $\mathcal{T}(\{0,1\}^{\mathbb{Z}})$ as stationary probability measures and stationary stochastic processes interchangeably.
Adopting the setup of [1] as far as possible, we refer to probability distributions on $\{0,1\}^*$ as ensembles.
For each $n \in \mathbb{N}$ we identify the joint distribution (alternatively called the n-block distribution) $P^{(n)}$ of n successive outcomes $(X_1, X_2, \ldots, X_n)$ of a stationary process with an ensemble $E_n$ on $\{0,1\}^*$ through the relation $E_n(x) = P^{(n)}(x)$ if $\ell(x) = n$ and $E_n(x) = 0$ otherwise.
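To illustrate this identification, the following minimal Python sketch builds $E_n$ for the n-block distribution of an i.i.d. Bernoulli(p) process (the simplest case covered by our results below); the function name block_ensemble and the dictionary representation are illustrative choices of ours, not part of the formalism.

    from itertools import product

    def block_ensemble(n: int, p: float) -> dict:
        """n-block distribution P^(n) of an i.i.d. Bernoulli(p) process,
        packaged as an ensemble E_n on {0,1}*: strings of length n carry the
        block probabilities; every other string implicitly has probability 0."""
        E = {}
        for bits in product("01", repeat=n):
            x = "".join(bits)
            ones = x.count("1")
            E[x] = p ** ones * (1 - p) ** (n - ones)
        return E

    E2 = block_ensemble(2, 0.25)                 # e.g. E2["11"] = 0.0625
    assert abs(sum(E2.values()) - 1.0) < 1e-12   # E_2 sums to one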
Recall the definition of the prefix Kolmogorov complexity $K(x)$ of a binary string $x \in \{0,1\}^*$:
$K(x) := \min \{ \ell(p) : U(p) = x \}$
where U is an arbitrary but fixed universal prefix computer. For details concerning the basics as well as deeper results on Kolmogorov complexity theory we refer to the book by Li and Vitányi [12].
We call an ensemble computable if there exists a program for the universal computer U that, given $x \in \{0,1\}^*$ and $m \in \mathbb{N}$ as inputs, computes an approximation of the probability $E(x)$ with accuracy of at least $2^{-m}$.
In [1] we have introduced an extension of the notion of Kolmogorov complexity to the case of computable ensembles E with computable and finite entropies $H(E)$. By the entropy $H(E)$ we mean the Shannon entropy of the probability distribution E, defined by $H(E) := -\sum_{x \in \{0,1\}^*} E(x) \log E(x)$. Note that a computable ensemble does not necessarily have a computable entropy, so that the corresponding requirement is a genuine restriction, see [1] for details. In what follows we only distinguish between general ensembles and computable ones with computable and finite entropies; we refer to the latter as computable for short.
The Kolmogorov complexity $K(E)$ of a computable ensemble E is defined as the length of the shortest computer program that, given $x \in \{0,1\}^*$ and $m \in \mathbb{N}$ as inputs, outputs both $E(x)$ and $H(E)$ with an accuracy of at least $2^{-m}$.
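As a toy illustration of this two-output interface, the sketch below packages the uniform ensemble on $\{0,1\}^n$ (our choice of example; genuine instances would only approximate the values) as a single short description answering both queries, so any accuracy $2^{-m}$ is met trivially.

    from fractions import Fraction

    def describe_uniform_ensemble(n: int):
        """One short description answering both queries in the definition of
        K(E), here for the uniform ensemble E_n on {0,1}^n: given x and m,
        return E_n(x) and H(E_n) to accuracy 2**(-m) (here: exactly)."""
        def E(x: str, m: int) -> Fraction:
            return Fraction(1, 2 ** n) if len(x) == n else Fraction(0)

        def H(m: int) -> Fraction:
            return Fraction(n)  # Shannon entropy of E_n is exactly n bits

        return E, H

    E, H = describe_uniform_ensemble(8)
    assert E("0" * 8, m=20) == Fraction(1, 256) and H(m=20) == 8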
Additionally, we need to define computability of stochastic processes. The following definition is a reformulation of the notion of a “computable measure” in [12].
A stationary process P is called computable if there exists a program $p \in \{0,1\}^*$ for a universal computer U that, given $x \in \{0,1\}^*$ and $m \in \mathbb{N}$ as inputs, computes the probability $P([x])$ up to accuracy $2^{-m}$.

3. Effective Complexity of Stationary Processes

The goal is to show that under not too strong conditions long typical samples of stationary processes are effectively simple. Before we make rigorous statements we need a number of definitions. The first ones we adopt from our previous paper [1].
Let $\delta \geq 0$. We say that an ensemble E is δ-typical for a string $x \in \{0,1\}^*$, or alternatively, we call x δ-typical for E, if the Shannon entropy $H(E)$ of E is finite and
$-\log E(x) \leq H(E)\,(1 + \delta)$
In particular, an equidistributed ensemble is δ-typical for all strings in its support and any $\delta \geq 0$. The total information $\Sigma(E)$ of a computable ensemble E is defined by
$\Sigma(E) := H(E) + K(E)$
For a motivation of the two definitions above, see [1].
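As a simple illustration (using bounds that reappear in the proof of Proposition 2 below), the two summands of the total information behave in opposite ways for two extreme kinds of ensembles. For the uniform ensemble $E_n$ on $\{0,1\}^n$ and the singleton ensemble $E_x$ concentrated on a single string x,
$H(E_n) = n, \quad K(E_n) \leq \log n + c_2, \quad \text{hence} \quad \Sigma(E_n) \leq n + \log n + c_2$
$H(E_x) = 0, \quad K(E_x) \leq K(x) + c_1, \quad \text{hence} \quad \Sigma(E_x) \leq K(x) + c_1$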
Let $\delta \geq 0$ and $\Delta > 0$. The effective complexity $\mathcal{E}_{\delta,\Delta}(x)$ of a finite string $x \in \{0,1\}^*$ is defined by
$\mathcal{E}_{\delta,\Delta}(x) := \min \{ K(E) : E \in \mathcal{P}_{\delta,\Delta}(x) \}$
where $\mathcal{P}_{\delta,\Delta}(x)$ denotes the minimization domain associated to x:
$\mathcal{P}_{\delta,\Delta}(x) := \{ E : E \text{ computable ensemble},\ E\ \delta\text{-typical for } x,\ \Sigma(E) \leq K(x) + \Delta \}$
We refer to elements of $\mathcal{P}_{\delta,\Delta}(x)$ as effective ensembles for x.
Taking the point of view of [2], which was reviewed in [1], effective ensembles represent theories that are judged to be good explanations for the appearance of x.
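To see how the definition plays out in an extreme case, consider an algorithmically random string $x \in \{0,1\}^n$, i.e., $K(x) \geq n$ (a standard observation, cf. [1]). The uniform ensemble $E_n$ on $\{0,1\}^n$ is δ-typical for x, since $-\log E_n(x) = n = H(E_n)$, and $\Sigma(E_n) \leq n + \log n + c_2 \leq K(x) + \Delta$ whenever $\Delta \geq \log n + c_2$. Hence
$\mathcal{E}_{\delta,\Delta}(x) \leq K(E_n) \leq \log n + c_2$
so that incompressible strings are effectively simple: their incompressibility is explained as pure randomness rather than structure.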
The more general notion of effective complexity with constraints has been suggested in [2] mainly to circumvent problems of plain effective complexity, which we have briefly discussed in the Introduction. The main idea is that the constraints reflect some pre-knowledge about the possible theory for x. In [1] we have proposed a formalization of the constrained version in the following manner:
$\mathcal{E}_{\delta,\Delta}(x \mid C) := \min \{ K(E) : E \in \mathcal{P}_{\delta,\Delta}(x),\ E \in C \}$
where C is a subset of $\mathcal{P}(\{0,1\}^*)$, the set of all ensembles. Note that with $C = \mathcal{P}(\{0,1\}^*)$ we have $\mathcal{E}_{\delta,\Delta}(x) = \mathcal{E}_{\delta,\Delta}(x \mid C)$ for all $x \in \{0,1\}^*$.
Other essential concepts in what follows are those of typical and universally typical subsets.
Let P be a T-invariant probability measure on $(\{0,1\}^{\mathbb{Z}}, \Sigma)$. We call a sequence of subsets $M_n \in \Sigma$, $n \in \mathbb{N}$, P-typical if
$\lim_{n \to \infty} P(M_n) = 1$
We call $(M_n)$ strongly P-typical if for P-almost all x there exists an $N_x \in \mathbb{N}$ such that
$x \in M_n \quad \text{for every } n \geq N_x$
The above notions of typicality apply naturally to sequences $M_n \subseteq \{0,1\}^n$, $n \in \mathbb{N}$, if we identify the subsets $M_n$ with the cylinder sets $[M_n] \subseteq \{0,1\}^{\mathbb{Z}}$, respectively.
Let Λ be a set of stationary processes with values in {0,1}. We call $M_n \subseteq \{0,1\}^n$, $n \in \mathbb{N}$, universally typical for Λ if the sequence is P-typical for every $P \in \Lambda$, i.e., $\lim_{n \to \infty} P^{(n)}(M_n) = 1$. We call the sequence universally strongly typical for Λ if it is strongly P-typical for every $P \in \Lambda$.
For sets $\Lambda_r \subseteq \mathcal{E}(\{0,1\}^{\mathbb{Z}})$ consisting of ergodic processes with entropy rate upper bounded by $r > 0$ there exist universally typical subsets $T_{r,n} \subseteq \{0,1\}^n$ with
$|T_{r,n}| \leq 2^{rn}$ (2)
for all $n \in \mathbb{N}$. Moreover, there are methods to construct such sequences of universally typical subsets for $\Lambda_r$. We will apply the Lempel-Ziv algorithm in the construction procedure below. This famous algorithm performs universal sequential data compression, see [13,14,15]. The main point is that all we need to know about an ergodic process P is its entropy rate $h_P$. This allows us to prove the following theorem for stationary, in general not computable, processes.
Theorem 1. Let P be a stationary process, $\delta \geq 0$, $\epsilon > 0$ and $\Delta_n = \epsilon n$. Then P is effectively simple in the sense that for P-almost every x,
$\mathcal{E}_{\delta,\Delta_n}(x_1^n) \stackrel{+}{<} \log n + O(\log \log n)$ (3)
where $\stackrel{+}{<}$ denotes an inequality valid up to an additive constant.
Proof. Firstly, assume that P is an ergodic process with entropy rate $h_P$. We construct universally typical subsets $T_{r,n} \subseteq \{0,1\}^n$, $n \in \mathbb{N}$, such that for an appropriate choice of the parameter $r = r(h_P, \epsilon)$ the total information of the uniform distributions $E_{r,n}$ on $T_{r,n}$ is upper bounded by $K(x_1^n) + \Delta_n$ for P-almost every $x \in \{0,1\}^{\mathbb{Z}}$ and n large enough. Hence the Kolmogorov complexity of $E_{r,n}$, which is approximately upper bounded by $\log n$, gives an estimate from above on the effective complexity $\mathcal{E}_{\delta,\Delta_n}(x_1^n)$ of sufficiently long P-typical strings $x_1^n$.
First, let $r > 0$ be arbitrary and define for each $n \in \mathbb{N}$ the subset $T_{r,n} \subseteq \{0,1\}^n$ as the set of all binary strings $x_1^n$ which are mapped by the Lempel-Ziv (LZ) algorithm to a code word of length $LZ(x_1^n)$ smaller than $nr$. Then the sequence $T_{r,n}$, $n \in \mathbb{N}$, is universally typical for the set $\Lambda_r$ of ergodic processes with entropy rates smaller than r.
Recall the following remarkable property of the LZ algorithm: for every ergodic process Q with entropy rate $h_Q$ it holds that $\lim_{n \to \infty} \frac{1}{n} LZ(x_1^n) = h_Q$ for Q-almost all $x \in \{0,1\}^{\mathbb{Z}}$. This implies that the subsets $T_{r,n}$ as constructed above are indeed typical for any ergodic Q with $h_Q < r$.
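As an aside (not part of the proof), the construction of $T_{r,n}$ is easy to make concrete. The following minimal Python sketch uses an LZ78-style incremental parsing; the exact per-phrase bit accounting differs between the LZ variants of [13,14,15], so the code-length convention below is an illustrative assumption, not the definition used in the references.

    import math

    def lz78_code_length(x: str) -> int:
        """Bits used by an LZ78-style code for x: the string is parsed into
        phrases, each extending a previously seen phrase by one symbol;
        phrase i costs ceil(log2(i)) bits for the back-reference plus one
        literal bit."""
        seen = {""}              # previously parsed phrases
        bits, current, i = 0, "", 1
        for symbol in x:
            current += symbol
            if current not in seen:
                seen.add(current)
                bits += math.ceil(math.log2(i)) + 1   # back-reference + literal
                current, i = "", i + 1
        if current:              # dangling final phrase, coded the same way
            bits += math.ceil(math.log2(i)) + 1
        return bits

    def in_T(x: str, r: float) -> bool:
        """x belongs to T_{r,n} iff its LZ code word is shorter than r*n."""
        return lz78_code_length(x) < r * len(x)

    assert in_T("0" * 256, 0.5)   # highly regular strings compress well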
The upper bound (2) on the size $|T_{r,n}|$ follows from the fact that the LZ algorithm is a faithful coder. Hence the code word lengths satisfy the Kraft inequality:
$1 \geq \sum_{x_1^n \in \{0,1\}^n} 2^{-LZ(x_1^n)} \geq \sum_{x_1^n \in T_{r,n}} 2^{-LZ(x_1^n)} \geq \sum_{x_1^n \in T_{r,n}} 2^{-nr} = |T_{r,n}| \, 2^{-nr}$
Next, we show that if r is chosen to be a positive rational number satisfying $0 < r - h_P < \epsilon/4$ then for P-almost every $x \in \{0,1\}^{\mathbb{Z}}$ there exists an $N_x \in \mathbb{N}$ such that
$\Sigma(E_{r,n}) \leq K(x_1^n) + \Delta_n \quad \text{for every } n \geq N_x$ (4)
where again $E_{r,n}$ denotes the uniform distribution on the universally typical subset $T_{r,n}$. First note that for all $n \in \mathbb{N}$
$H(E_{r,n}) = \log |T_{r,n}| \leq rn$
Secondly, we prove that there is a constant $c \in \mathbb{N}$ such that for all $n \in \mathbb{N}$
$K(E_{r,n}) \leq K(n) + K(r) + c$ (5)
This is derived from the existence of a program p of length c which expects as inputs $n \in \mathbb{N}$, $r \in \mathbb{Q}$ and $x \in \{0,1\}^*$, and outputs the value $\frac{1}{|T_{r,n}|}$ if $x \in T_{r,n} \subseteq \{0,1\}^n$ and 0 otherwise. Thus for fixed inputs n and r it provides a description of the uniform distribution $E_{r,n}$ on $T_{r,n}$.
Indeed, p may be constructed on the basis of a program $p_{LZ}$ implementing the Lempel-Ziv (LZ) algorithm on the given reference universal computer U. For given inputs n and r, let p apply $p_{LZ}$ as a subroutine in order to determine the elements of $T_{r,n}$. Then for fixed $n \in \mathbb{N}$ the number $|T_{r,n}|$, and hence the probability value $1/|T_{r,n}|$ of each $x \in T_{r,n}$, may easily be calculated.
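Again purely as an illustration, the program p can be mimicked in a few lines, reusing lz78_code_length from the sketch above; the brute-force enumeration takes exponential time, but this is irrelevant for the bound (5), which only concerns the length of p.

    from itertools import product

    def E_rn(x: str, n: int, r: float) -> float:
        """Sketch of the program p: for fixed inputs n and r, output the
        uniform probability 1/|T_{r,n}| if x lies in T_{r,n}, and 0 otherwise.
        Uses lz78_code_length from the previous sketch."""
        T = ["".join(w) for w in product("01", repeat=n)
             if lz78_code_length("".join(w)) < r * n]
        return 1.0 / len(T) if x in T else 0.0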
To specify $r \in \mathbb{Q}$ and $n \in \mathbb{N}$, $K(r) + K(n)$ bits are sufficient. With $c = \ell(p)$ the estimate (5) follows.
Next, fix an $N \in \mathbb{N}$ such that $K(n) + K(r) + c \leq \frac{\epsilon}{4} n$ for all $n \geq N$. Then
$\Sigma(E_{r,n}) \leq nr + K(n) + K(r) + c \leq nr + \frac{\epsilon}{4} n \leq n \left( h_P + \frac{\epsilon}{2} \right)$ (6)
where the last inequality holds by the assumption $r - h_P < \frac{\epsilon}{4}$. According to the theorem by Brudno, see [16], for P-almost all x there exists an $N_{x,\epsilon} \in \mathbb{N}$ such that $K(x_1^n) \geq n \left( h_P - \frac{\epsilon}{2} \right)$ for all $n \geq N_{x,\epsilon}$. It follows for $\Delta_n = \epsilon n$ that
$K(x_1^n) + \Delta_n \geq n \left( h_P + \frac{\epsilon}{2} \right), \quad n \geq N_{x,\epsilon}$ (7)
Relations (6) and (7) together imply (4) for P-almost all x and $n \geq N_x := \max \{ N_{x,\epsilon}, N \}$. It follows that P-almost surely the effective complexity $\mathcal{E}_{\delta,\Delta_n}(x_1^n)$ is upper bounded by the Kolmogorov complexity of $E_{r,n}$ for $n \geq N_x$:
$\mathcal{E}_{\delta,\Delta_n}(x_1^n) \leq K(E_{r,n}) \leq K(n) + K(r) + c \stackrel{+}{<} \log n + O(\log \log n)$
Now let P be an arbitrary stationary process. Recall that there is a unique ergodic decomposition of P:
$P = \int_{\mathcal{E}(\{0,1\}^{\mathbb{Z}})} Q \, d\mu(Q)$
Moreover, to P-almost every $x \in \{0,1\}^{\mathbb{Z}}$ we may associate an ergodic component $Q_x$ of P such that x is a typical element of $Q_x$. Then there exists an $N_{x,\epsilon}$ such that
$K(x_1^n) + \Delta_n \geq n \left( h_x + \frac{\epsilon}{2} \right), \quad n \geq N_{x,\epsilon}$
where h x denotes the entropy rate of Q x . Hence the proof for the stationary case reduces to the ergodic situation considered in the first part above.
Finally, we remark that Theorem 1 applies to the large class of all stationary processes. This covers, in particular, the case of independent identically distributed processes, which represent the simplest and at the same time one of the best studied classes of stochastic processes. Further, our theorem is valid for stationary Markov chains, which represent another important and rather tractable class of stationary processes.

4. Coarse Effective Complexity

Our main result becomes most transparent if presented in the context of coarse effective complexity. This is a modification of plain effective complexity which incorporates the parameter Δ as a penalty into the original formula. It is inspired by a corresponding modification of sophistication, called coarse sophistication, which has been introduced by Antunes and Fortnow in [10].
Let $\delta \geq 0$. The coarse effective complexity $\mathcal{E}_\delta(x)$ of a finite string $x \in \{0,1\}^*$ is defined by
$\mathcal{E}_\delta(x) := \min \{ K(E) + \Sigma(E) - K(x) : E \text{ computable ensemble},\ E\ \delta\text{-typical for } x \}$
The term $\Sigma(E) - K(x)$ accounts for the exact value by which the total information of an ensemble E exceeds the Kolmogorov complexity of x. By the definition of total information $\Sigma(E)$, an equivalent expression for $\mathcal{E}_\delta(x)$ reads:
$\mathcal{E}_\delta(x) = \min \{ 2K(E) + H(E) - K(x) : E \text{ computable ensemble},\ E\ \delta\text{-typical for } x \}$
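Indeed, spelling out the substitution $\Sigma(E) = H(E) + K(E)$, the objective function transforms as
$K(E) + \Sigma(E) - K(x) = K(E) + \big( H(E) + K(E) \big) - K(x) = 2K(E) + H(E) - K(x)$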
We derive the basic properties of coarse effective complexity similarly to the treatment of coarse sophistication in [10]. First, in the proposition below, we prove an upper bound on coarse effective complexity. Second, we show the existence of strings which come close to saturating this bound.
Proposition 2. Let $\delta \geq 0$. There is a constant c such that for all $x \in \{0,1\}^*$ we have
$\mathcal{E}_\delta(x) \leq \frac{n}{2} + \log n + c$
where $n = \ell(x)$.
Proof. Suppose that $K(x) \leq \frac{n}{2} + \log n$. Let $E_x$ denote the ensemble with $E_x(x) = 1$ and $E_x(y) = 0$ for $y \neq x$. Note that $E_x$ is trivially δ-typical for x for any $\delta \geq 0$, and obviously $H(E_x) = 0$. Moreover, there is a constant $c_1$ such that $K(E_x) \leq K(x) + c_1$. This implies the upper bound
$\mathcal{E}_\delta(x) \leq 2K(E_x) + 0 - K(x) \leq K(x) + 2c_1 \leq \frac{n}{2} + \log n + 2c_1$
where the last inequality holds by assumption.
Now suppose that $K(x) > \frac{n}{2} + \log n$. Let $E_n$ be the ensemble on $\{0,1\}^*$ given by $E_n(y) = 2^{-n}$ for all y with $\ell(y) = n$, and vanishing elsewhere. Then $H(E_n) = n$ and there exists a constant $c_2$, independent of n, such that $K(E_n) \leq \log n + c_2$. It follows that
$\mathcal{E}_\delta(x) \leq 2 \log n + 2c_2 + n - K(x) < \frac{n}{2} + \log n + 2c_2$
where, again, the second inequality holds by the assumption on $K(x)$. Setting $c := \max \{ 2c_1, 2c_2 \}$ completes the proof.
Theorem 3. Let $\delta \geq 0$. For every sufficiently large $n \in \mathbb{N}$ there exists a string $x \in \{0,1\}^n$ with
$\mathcal{E}_\delta(x) \geq \frac{(1 - 3\delta)}{2}\, n - (2 + 3\delta) \log n - 2 \log \log n + C$
where C is a global constant.
Proof. For $x \in \{0,1\}^*$ and $\Delta \geq 0$ denote by $E_x^\Delta$ the minimal ensemble associated to $\mathcal{E}_{\delta,\Delta}(x)$. Due to Lemma 22 in [1], for every $\epsilon > 0$ there exists a subset $S_x^\Delta$ of $\{0,1\}^*$ such that
$\log |S_x^\Delta| \leq H(E_x^\Delta)(1 + \delta) + \epsilon$ (10)
$K(S_x^\Delta) \leq K(E_x^\Delta) + c_1$ (11)
where $c_1$ is a global constant. In [1] we have proven the relation
$K(x \mid S_x^\Delta, K(S_x^\Delta)) \geq \frac{\log |S_x^\Delta|}{1 + \delta} - \log n - 2 \log \log n - \Lambda_\Delta$ (12)
which holds for arbitrary $x \in \{0,1\}^n$, $n \in \mathbb{N}$. The term $\Lambda_\Delta$ is constant in $x \in \{0,1\}^*$ and monotonically increasing in Δ, cf. (32) in [1]. Now, let $K_n := \max \{ K(t) : t \in \{0,1\}^n \}$ and define
$k := n - \delta(K_n + \Delta_n + \epsilon) - \log n - 2 \log \log n - \Lambda_{\Delta_n} - c_2$
where $\Delta_n := \frac{n}{2} + \log n + c$ is the upper bound on $\mathcal{E}_\delta(x)$ obtained in the previous proposition, and $c_2$ is a global constant from Theorem IV.2 in [5], see also Lemma 12 in [1]. If n is large enough then $0 < k < n$ holds, and Theorem IV.2 in [5] applies: there is a string $x_k \in \{0,1\}^n$ such that
$K(x_k \mid S, K(S)) < \log |S| - n + k + c_2$
for every set $S \ni x_k$ with $K(S) < k - c_3$, where $c_3$ is another global constant. Let $E_x$ denote the minimizing ensemble associated to the coarse effective complexity $\mathcal{E}_\delta(x)$ and set $\Delta_x := K(E_x) + H(E_x) - K(x)$, such that $\mathcal{E}_\delta(x) = K(E_x) + \Delta_x$. Further, define $S_x := S_x^{\Delta_x}$. We have the inequality
$\delta(K_n + \Delta_n + \epsilon) \geq \delta(K(x_k) + \Delta_{x_k} + \epsilon) \geq \delta \left( H(E_{x_k}) + \epsilon \right) \geq \delta \left( H(E_{x_k}) + \frac{\epsilon}{1 + \delta} \right) = \frac{\delta}{1 + \delta} \left( H(E_{x_k})(1 + \delta) + \epsilon \right) \geq \left( 1 - \frac{1}{1 + \delta} \right) \log |S_{x_k}|$
where the last bound holds by (10). Now suppose that $K(S_{x_k}) < k - c_3$. Then
$K(x_k \mid S_{x_k}, K(S_{x_k})) < \log |S_{x_k}| - n + k + c_2 \leq \log |S_{x_k}| - \delta(K_n + \Delta_n + \epsilon) - \log n - 2 \log \log n - \Lambda_{\Delta_n} \leq \frac{\log |S_{x_k}|}{1 + \delta} - \log n - 2 \log \log n - \Lambda_{\Delta_n} \leq \frac{\log |S_{x_k}|}{1 + \delta} - \log n - 2 \log \log n - \Lambda_{\Delta_{x_k}}$
But this strict inequality contradicts (12). Hence our assumption must be false, and we instead have $K(S_{x_k}) \geq k - c_3$. By $\mathcal{E}_\delta(x_k) = K(E_{x_k}) + \Delta_{x_k}$ and using both (11) and the bound $K_n \leq n + 2 \log n + \gamma$, where γ is a global constant, we finally obtain
$\mathcal{E}_\delta(x_k) = K(E_{x_k}) + \Delta_{x_k} \geq K(S_{x_k}) - c_1 + \Delta_{x_k} \geq k - c_3 - c_1 + \Delta_{x_k} \geq n - \delta \left( \frac{3}{2} n + 3 \log n + \gamma + c + \epsilon \right) - \log n - 2 \log \log n - \left( \frac{n}{2} + \log n + c \right) - c_2 - 1 - c_3 - c_1 = \frac{(1 - 3\delta)}{2}\, n - (2 + 3\delta) \log n - 2 \log \log n + C$
where $C := -\delta(\gamma + \epsilon) - 1 - (1 + \delta)c - c_1 - c_2 - c_3$.
Although, according to the above theorem, for arbitrarily large n the existence of strings of length n with moderate coarse effective complexity is ensured, the coarse effective complexity of sufficiently long prefixes of a typical stationary process realization becomes small. This is a direct implication of Theorem 1.
Theorem 4. Let P be a stationary process, $\delta \geq 0$ and $\epsilon > 0$. Then for P-almost every x
$\mathcal{E}_\delta(x_1^n) \stackrel{+}{<} \epsilon n + \log n + O(\log \log n)$ (14)
Proof. By the definition of coarse effective complexity it holds that $\mathcal{E}_\delta(x) \leq \Delta + \mathcal{E}_{\delta,\Delta}(x)$ for all $x \in \{0,1\}^*$ and $\Delta > 0$. We set $\Delta_n = \epsilon n$. Then the conditions of Theorem 1 are satisfied, and applying (3) we arrive at (14).
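For completeness, the inequality used above follows in one line: every $E \in \mathcal{P}_{\delta,\Delta}(x)$ satisfies $\Sigma(E) - K(x) \leq \Delta$, hence
$\mathcal{E}_\delta(x) \leq K(E) + \Sigma(E) - K(x) \leq K(E) + \Delta \quad \text{for all } E \in \mathcal{P}_{\delta,\Delta}(x)$
and minimizing the right-hand side over $\mathcal{P}_{\delta,\Delta}(x)$ yields $\mathcal{E}_\delta(x) \leq \Delta + \mathcal{E}_{\delta,\Delta}(x)$.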

5. Conclusions

In this contribution we studied the notion of plain effective complexity, assigned to a given string, within the context of an underlying stochastic process as a model of the string-generating mechanism. In [1] we have shown that strings which are called “non-stochastic” in the context of Kolmogorov minimal sufficient statistics have a large value of plain effective complexity. The existence of such strings has been proven by Gács, Tromp and Vitányi in [5]. Here, our aim was to understand how properties of the stochastic process, such as ergodicity and stationarity, influence the effective complexity of the corresponding typical realizations. Is it possible that the prefixes of a typical process realization represent a sequence of finite strings of increasing length n that eventually have a high or moderate value of effective complexity? Our main theorem refers to stationary and in general non-computable processes. It proves that modelling the regularities of strings by computable ensembles whose total information is allowed to exceed the string’s Kolmogorov complexity by up to a linearly growing amount ϵn, with an arbitrarily small ϵ > 0, is sufficient for typically generating non-complex strings.
The value ϵn plays the role of a parameter in the concept of effective complexity. In order to have a notion that is independent of this parameter we introduced coarse effective complexity. It corresponds to the coarse sophistication introduced by Antunes and Fortnow in [10] and modifies effective complexity by incorporating the parameter as a further minimization argument. Our result on effective complexity has a direct implication for the asymptotic behaviour of the coarse effective complexity of typical realizations of a stationary process. The main statement in this context demonstrates the utility of the linear parameter scaling which we have considered. Moreover, it allows us to analyze the interplay between the complexity of a stochastic process and the complexity of its typical realizations. In particular, it demonstrates that, in order to have a notion of effective complexity that also reflects the complexity of a stochastic process, further modifications of plain effective complexity are necessary, for instance the introduction of appropriate constraints. This possibility is in line with Gell-Mann and Lloyd’s suggestion in [2], which we discussed in the Introduction.
Finally, we point out that, continuing our previous work [1], we have formulated our results for the concept of effective complexity only. However, in line with the general equivalence statements obtained in the literature, cf. Section V in [6] or Lemma 20 in [1], it should be possible to reformulate our main theorem in the more general context of algorithmic statistics. Indeed, our upper bound on the effective complexity of typical process realizations is derived in terms of computable ensembles that are uniform distributions on finite sets (universally typical subsets). This demonstrates the close relation, in particular, to the concept of the Kolmogorov minimal sufficient statistic, which refers to the model class of finite sets.

Acknowledgements

The authors would like to thank their colleagues at the MPI MiS, in particular Eckehard Olbrich, Wolfgang Löhr and Nils Bertschinger for their interest and helpful discussions. This work has been supported by the Santa Fe Institute.

References

1. Ay, N.; Müller, M.; Szkoła, A. Effective complexity and its relation to logical depth. IEEE Trans. Inform. Theory 2010, 56, 4593–4607.
2. Gell-Mann, M.; Lloyd, S. Effective complexity. In Nonextensive Entropy: Interdisciplinary Applications; Gell-Mann, M., Tsallis, C., Eds.; Oxford University Press: New York, NY, USA, 2004; pp. 387–398.
3. Gell-Mann, M.; Lloyd, S. Information measures, effective complexity, and total information. Complexity 1996, 2, 44–52.
4. Rissanen, J. Stochastic Complexity in Statistical Inquiry; Series in Computer Science 15; World Scientific Publishing Co.: Singapore, 1988.
5. Gács, P.; Tromp, J.T.; Vitányi, P.M. Algorithmic statistics. IEEE Trans. Inform. Theory 2001, 47, 2443–2463.
6. Vitányi, P.M. Meaningful information. IEEE Trans. Inform. Theory 2006, 52, 4617–4626.
7. Koppel, M. Structure. In The Universal Turing Machine: A Half-Century Survey; Herken, R., Ed.; Oxford University Press: Oxford, UK, 1988; pp. 235–252.
8. Zurek, W.H. Algorithmic randomness and physical entropy. Phys. Rev. A 1989, 40, 4731–4751.
9. Vereshchagin, N.K.; Vitányi, P.M.B. Kolmogorov’s structure function and model selection. IEEE Trans. Inform. Theory 2004, 50, 3265–3290.
10. Antunes, L.; Fortnow, L. Sophistication revisited. Theory Comput. Syst. 2009, 45, 150–161.
11. Bennett, C. Logical depth and physical complexity. In The Universal Turing Machine: A Half-Century Survey; Herken, R., Ed.; Oxford University Press: Oxford, UK, 1988.
12. Li, M.; Vitányi, P. An Introduction to Kolmogorov Complexity and Its Applications, 2nd ed.; Springer-Verlag: New York, NY, USA, 1997.
13. Kieffer, J. A unified approach to weak universal source coding. IEEE Trans. Inform. Theory 1978, 24, 674–682.
14. Ziv, J. Coding of sources with unknown statistics-I: Probability of encoding error. IEEE Trans. Inform. Theory 1972, 18, 384–389.
15. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: New York, NY, USA, 1991.
16. Brudno, A.A. Entropy and the complexity of the trajectories of a dynamical system. Trans. Moscow Math. Soc. 1983, 2, 127–151.
