Ordinal Pattern Based Entropies and the Kolmogorov–Sinai Entropy: An Update

Different authors have shown strong relationships between ordinal pattern based entropies and the Kolmogorov–Sinai entropy, including equality of the latter and the permutation entropy; the whole picture, however, is far from complete. This paper updates the picture by summarizing known results and by discussing, on a theoretical level, some mainly combinatorial aspects behind the dependence of the Kolmogorov–Sinai entropy on ordinal pattern distributions. The paper is more than a review: a new statement concerning the conditional permutation entropy is given, as well as a new proof of the fact that the permutation entropy is an upper bound for the Kolmogorov–Sinai entropy. As a main result, general conditions are stated under which the permutation entropy is a lower bound for the Kolmogorov–Sinai entropy. Additionally, a previously introduced method for analyzing the relationship between permutation and Kolmogorov–Sinai entropies is investigated, together with its limitations.


Introduction
The Kolmogorov-Sinai entropy is a central measure for quantifying the complexity of a measure-preserving dynamical system. Although it is conceptually simple, its determination and its estimation from given data can be challenging. Since Bandt, Keller, and Pompe showed the coincidence of Kolmogorov-Sinai entropy and permutation entropy for interval maps (see [1]), there have been various attempts to approach the Kolmogorov-Sinai entropy via ordinal pattern based entropies (see e.g., [2][3][4][5][6] and references therein), which has developed into a rich subject of study. In this paper, we discuss the relationship of the Kolmogorov-Sinai entropy to this latter kind of entropy. We review the state of the art and give some generalizations and new results, mainly emphasizing combinatorial aspects.
For this, let (Ω, A , µ, T) be a measure-preserving dynamical system, which we consider fixed throughout the paper. Here, (Ω, A , µ) is a probability space equipped with an A -A -measurable map T : Ω → Ω satisfying µ(T −1 (A)) = µ(A) for all A ∈ A . Certain properties of the system will be specified at the places where they are of interest. In the following, it is helpful to interpret Ω as the set of states of a system, µ as their distribution, and T as a description of the dynamics underlying the system: the system is in state T(ω) at time t + 1 if it is in state ω ∈ Ω at time t.
In the following, we give the definitions of the central entropies considered in this paper.

The Kolmogorov-Sinai Entropy
The basis of quantifying dynamical complexity is to consider the development of partitions and their entropies under the given dynamics. Recall that the coarsest partitions refining given partitions P_1, P_2, . . . , P_k and P, Q of Ω are defined by

⋁_{s=1}^{k} P_s := { ⋂_{s=1}^{k} P_s ≠ ∅ | P_s ∈ P_s for s = 1, 2, . . . , k } and P ∨ Q := {P ∩ Q ≠ ∅ | P ∈ P, Q ∈ Q},

respectively. The entropy of a finite or countably infinite partition Q ⊂ A of Ω is given by

H(Q) := − ∑_{Q ∈ Q} µ(Q) log µ(Q).

For a finite or countably infinite partition P := {P_i}_{i∈I} ⊂ A of Ω and some k ∈ N, consider the partition P^{(k)} := ⋁_{s=0}^{k−1} T^{−s}(P), consisting of the sets

P(i) := ⋂_{s=0}^{k−1} T^{−s}(P_{i_s})

for each multiindex i = (i_0, i_1, . . . , i_{k−1}) ∈ I^k. The entropy rate of T with regard to a finite or countably infinite partition P ⊂ A with H(P) < ∞ is defined by h(P) := lim_{n→∞} (1/n) H(P^{(n)}).
The Kolmogorov-Sinai entropy is then defined as KS := sup_P h(P), where the supremum is taken over all finite or over all countably infinite partitions P ⊂ A with H(P) < ∞.
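To make these definitions concrete, the following sketch estimates the entropy rate h(P) for the doubling map T(x) = 2x mod 1 on ([0, 1), B, λ) with the binary partition P = {[0, 1/2), [1/2, 1)}. The map, the partition, and all function names are illustrative choices, not taken from the text above; for this generating partition, h(P) equals the Kolmogorov-Sinai entropy log 2.

```python
import math
import random
from collections import Counter

def doubling(x):
    """The doubling map T(x) = 2x mod 1, preserving Lebesgue measure."""
    return (2.0 * x) % 1.0

def empirical_entropy(counts):
    """Plug-in Shannon entropy (natural log) of an empirical distribution."""
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

def entropy_rate(n_steps, n_samples=100_000, seed=1):
    """Monte Carlo estimate of (1/n) H(P^(n)) for P = {[0,1/2), [1/2,1)}."""
    rng = random.Random(seed)
    itineraries = Counter()
    for _ in range(n_samples):
        x = rng.random()
        word = []
        for _ in range(n_steps):
            word.append(0 if x < 0.5 else 1)  # which cell of P the point visits
            x = doubling(x)
        itineraries[tuple(word)] += 1
    return empirical_entropy(itineraries) / n_steps

print(entropy_rate(10))  # approaches log 2 ≈ 0.693
```

The plug-in estimate carries a small downward bias of order (#cells)/(2 · #samples), which is negligible here.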

Ordinal Pattern Based Entropy Measures
As the determination and estimation of the Kolmogorov-Sinai entropy based on the given definition are often not easy, many alternative approaches to it exist, among them the permutation entropy approach of Bandt and Pompe [7]. The latter is built on the concept of ordinal patterns, which we now describe in a general manner.
For this, let X = (X 1 , X 2 , . . . , X d ) : Ω → R d be a random vector for d ∈ N. Here, each of the random variables X i can be interpreted as an observable measuring some quantity in the following sense: If the system is in state ω at time 0, then the value of the quantity measured at time t is given by X i (T t (ω)). This general approach includes the one-dimensional case that states and measurements coincide, i.e., that Ω ⊆ R and X = id is the identity map on Ω. This case, originally considered in [7] and subsequent papers, is discussed in Section 3. We refer to it as the simple one-dimensional case.
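For concreteness in the simple one-dimensional case (X = id), the following sketch extracts ordinal patterns from a single orbit and computes the empirical version of H(OP^X(n))/n, the quantity underlying the permutation entropies. The encoding of a pattern as the tuple of time indices sorted by value, the logistic map as example system, and all names are our own illustrative choices.

```python
import math
from collections import Counter

def ordinal_pattern(window):
    """Encode the ordinal pattern of a window of pairwise distinct values
    as the tuple of time indices sorted by their values."""
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def empirical_permutation_entropy(series, n):
    """Estimate H(OP(n)) / n from the sliding windows of one orbit."""
    counts = Counter(
        ordinal_pattern(series[i:i + n]) for i in range(len(series) - n + 1)
    )
    total = sum(counts.values())
    h = -sum(c / total * math.log(c / total) for c in counts.values())
    return h / n

# orbit of the logistic map x -> 4x(1-x), observed directly (X = id)
orbit = [0.123]
for _ in range(50_000):
    orbit.append(4.0 * orbit[-1] * (1.0 - orbit[-1]))

print(empirical_permutation_entropy(orbit, 5))
```

Note that for fixed n this is only a finite-order estimate; the permutation entropies of the text arise in the limit n → ∞.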
We speak of the permutation entropy if the upper and lower permutation entropies coincide.

Outline of This Paper
In Section 2, we will focus on the relationship between permutation and Kolmogorov-Sinai entropies in the general setting. With Theorems 1 and 3, we will restate two known statements. A new proof of Theorem 1 will be given in Appendix A.2. Theorem 3 is stated for completeness. Theorem 2 establishes a new relationship between the conditional permutation entropy and the Kolmogorov-Sinai entropy.
In Section 3, the relationship between permutation and Kolmogorov-Sinai entropies in the one-dimensional case is investigated. Conditions are introduced under which the permutation entropy is equal to the Kolmogorov-Sinai entropy. The given conditions allow for a generalization of previous results. We will explain why (countably) piecewise monotone functions satisfy these conditions and consider two examples.
In Section 4, we will investigate a method to analyze the relationship between permutation and Kolmogorov-Sinai entropies that was first introduced in [5]. We will use this method to relate two different kinds of conditional permutation entropies in the general setting. Theorem 5 shows that this method cannot be used directly to prove equality between permutation and Kolmogorov-Sinai entropies.
The results of the paper are summarized in Section 5. The proofs for all new results can be found in the Appendix A.

Partitions via Ordinal Patterns
Given some d, n ∈ N and some random vector X = (X 1 , X 2 , . . . , X d ), the partition described above can be defined in an alternative way, which is a bit more abstract but better suited to the approach used in the proof of Theorem 4: We can determine to which set P_π ∈ OP^{X_i}(n) a point ω ∈ Ω belongs if we know, for all s, t ∈ {0, 1, . . . , n − 1} with s < t, whether X_i(T^s(ω)) < X_i(T^t(ω)) holds true. Therefore, we can write Throughout this paper, we will use the set to describe the order relation between two points. This notation allows us to write
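The equivalence of the two descriptions can be checked directly: the answers to all pairwise questions "X_i(T^s(ω)) < X_i(T^t(ω))?" with s < t determine the pattern. A minimal sketch (function names are ours; values are assumed pairwise distinct):

```python
import random

def pattern_by_ranks(xs):
    """Standard encoding: time indices sorted by their values."""
    return tuple(sorted(range(len(xs)), key=xs.__getitem__))

def pattern_by_comparisons(xs):
    """Recover the same pattern knowing only, for each pair s < t,
    whether xs[s] < xs[t] holds."""
    n = len(xs)
    less = {(s, t) for s in range(n) for t in range(s + 1, n) if xs[s] < xs[t]}
    def rank(i):
        # rank(i) = number of indices j with xs[j] < xs[i]
        below = sum(1 for j in range(i) if (j, i) in less)
        below += sum(1 for j in range(i + 1, n) if (i, j) not in less)
        return below
    return tuple(sorted(range(n), key=rank))

rng = random.Random(7)
xs = [rng.random() for _ in range(6)]
assert pattern_by_ranks(xs) == pattern_by_comparisons(xs)
```

Both encodings agree on every window with pairwise distinct entries, which mirrors the fact that the set of pairs (s, t) with X_i(T^s(ω)) < X_i(T^t(ω)) determines the cell of OP^{X_i}(n).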

Ordinal Characterization of the Kolmogorov-Sinai Entropy
To be able to reconstruct all information of the given system via quantities based on the random vector X = (X 1 , X 2 , . . . , X d ) : Ω → R d , we need to assume that the latter does not itself reduce information. From the mathematical viewpoint, this means that the σ-algebra generated by X is equivalent to the originally given σ-algebra A , i.e., that (1) holds true, which roughly speaking says that orbits are separated by the given random vector.
For definitions and some more details concerning σ-algebras and partitions, see Appendix A.1.
The following statement, saying that under (1) ordinal patterns entail the complete information of the given system, has been shown in [3] in a slightly weaker form than given here. Theorem 1. Let X : Ω → R d be a random vector satisfying (1). Then, holds true.
Note that the inequality in (2) is a relatively simple fact: Since the partition OP X (n) is finer than the partition (OP X (k)) (n−k) for all n ≥ k, we have H(OP^X(n)) ≥ H((OP^X(k))^{(n−k)}). Dividing both sides by n and letting first n and subsequently k tend to infinity proves this inequality.
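The refinement fact used in this argument can be verified numerically: the ordinal pattern of length n of a window determines the ordinal patterns of length k of all its subwindows, so no cell of OP(n) meets two cells of the join of shifted copies of OP(k). A small randomized check (names are ours):

```python
import random

def pattern(xs):
    """Ordinal pattern as the tuple of indices sorted by value."""
    return tuple(sorted(range(len(xs)), key=xs.__getitem__))

def check_refinement(n, k, trials=2000, seed=3):
    """Check that the length-n pattern of a window determines the
    length-k patterns of all of its subwindows."""
    rng = random.Random(seed)
    seen = {}
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        full = pattern(xs)
        subs = tuple(pattern(xs[j:j + k]) for j in range(n - k + 1))
        if full in seen and seen[full] != subs:
            return False  # would contradict the refinement relation
        seen[full] = subs
    return True

assert check_refinement(6, 3)
```

The check always succeeds because a subpattern is just the restriction of the full ordering to a subwindow.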
Proofs of the inequality PE X ≥ KS are also implicitly given in [1,8]. One-dimensional systems with direct observation as considered there are discussed in Section 3 in detail.
We will give a proof of the equality in (2) in Appendix A.2 being alternative to that in [3].

Conditional Entropies
In the case that (1) holds and that KS and PE X coincide, we will prove in Appendix A.3 different representations of the Kolmogorov-Sinai entropy by ordinal pattern based conditional entropies, as given in the following theorem. Theorem 2. Let X : Ω → R d be a random vector satisfying (1). If KS ≥ PE X is true, then holds true for all k ∈ N; in particular, in the case k = 1, one has KS = CE X = PE X = PE X .

Amigó's Approach
Amigó et al. [2,8] describe an alternative ordinal way to the Kolmogorov-Sinai entropy, which is based on a refining sequence of finite partitions. We present it in a slightly more general manner than originally given, in the language of finite-valued random variables. Note that the basic result behind Amigó's approach in [2,8] is that the Kolmogorov-Sinai entropy of a finite-alphabet source and its permutation entropy, given some order on the alphabet, coincide (see also [9] for an alternative algebraic proof of the statement).
Theorem 3. Given a sequence (X k ) k∈N of R-valued random variables satisfying

The Simple One-Dimensional Case
In the following, we consider the case that Ω is a subset of R, with A coinciding with the Borel σ-algebra B on Ω, and with X = id being the identity map on Ω. The superscript X is superfluous here, which is why we omit it; for example, we write OP(n) instead of OP^{id}(n).

(Countably) Piecewise Monotone Maps
We discuss a generalization of the result of Bandt, Keller, and Pompe that the Kolmogorov-Sinai entropy and the permutation entropy coincide for interval maps (see [1]), on the basis of a statement given in [10]. The discussion sheds some light on structural aspects of the proofs given in that paper, with some potential for further generalizations.

Definition 1.
Let Ω be a subset of R, B be the Borel σ-algebra on Ω, and µ be a probability measure on (Ω, B). Then, we call a partition M = {M i } i∈I of Ω ordered (with regard to µ) if M ⊂ B and holds true for all i 1 , i 2 ∈ I with i 1 ≠ i 2 . Here, µ 2 denotes the product measure of µ with itself. We call a map T : Ω → Ω (countably) piecewise monotone (with regard to µ) if there exists a finite (or countably infinite) ordered partition M = {M i } i∈I of Ω with H(M) < ∞ such that (4) holds true for all i ∈ I.
Given a probability space (Ω, A , µ), for two families of sets P, Q ⊆ A , we write P ≺ Q if, for all Q ∈ Q, there exists a P ∈ P with µ(Q \ P) = 0. If P = {P i } i∈I and Q = {Q j } j∈J are partitions of Ω in A , then P ≺ Q is equivalent to the fact that for every i ∈ I there exists a set J i ⊆ J such that P i and ∪_{j∈J_i} Q j are equal up to some set of measure 0. Moreover, given a partition M = {M i } i∈I of a set Ω, let In Appendix A.4, we will show the following statement (Theorem 4): Let Ω be a subset of R and A = B be the Borel σ-algebra on Ω, and assume that the following conditions are satisfied: Condition 2: For all ε > 0, there exists a finite or countably infinite ordered partition Q with H(Q) < ∞ and Then, PE ≤ KS holds true.
Theorem 4 extracts the two central arguments in proving the main statement of [10] in the form of Conditions 1 and 2. This statement is given in a slightly stronger form in Corollary 1. In the proof of [10], the m in Condition 1 is equal to 1. We will discuss in Section 3.2 a situation where Condition 1 with m = 2 is of interest.

Corollary 1.
Let Ω be a compact subset of R and A = B be the Borel σ-algebra on Ω. If T is (countably) piecewise monotone, then PE ≤ KS holds true.
Since below we directly refer to the main statement in [10], which assumes compactness, and for simplicity, the theorem is formulated under this assumption; we will, however, discuss a relaxation of the assumption in Remark A1.
To prove the above corollary, one needs to verify that Conditions 1 and 2 are satisfied for one-dimensional systems if T is piecewise monotone. It is easy to see that Condition 2 holds true for T being aperiodic and ergodic: If T is aperiodic, for any ε > 0, one can choose a finite ordered partition Q such that µ(Q) < ε holds true for all Q ∈ Q. The ergodicity then implies One can also show that Condition 2 is true for non-ergodic aperiodic compact systems (see Remark A1 and [10]).
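The choice of a finite ordered partition Q with µ(Q) < ε for all Q ∈ Q can be made concrete via quantiles of µ, here approximated empirically from a sample. The construction and all names below are illustrative assumptions of ours, not taken from [10]:

```python
import bisect
import random

def ordered_partition_cuts(samples, eps):
    """Empirical quantile cut points yielding an ordered partition of R into
    intervals whose empirical masses stay below eps (approximately, for
    large samples)."""
    k = int(1 / eps) + 1          # number of cells, each of mass ~ 1/k < eps
    xs = sorted(samples)
    return [xs[(i * len(xs)) // k] for i in range(1, k)]

rng = random.Random(5)
samples = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
cuts = ordered_partition_cuts(samples, 0.1)

# empirical mass of each cell of the induced interval partition
xs = sorted(samples)
idx = [0] + [bisect.bisect_left(xs, c) for c in cuts] + [len(xs)]
masses = [(idx[j + 1] - idx[j]) / len(xs) for j in range(len(idx) - 1)]
print(max(masses))  # below eps = 0.1, up to the sampling approximation
```

Since the cells are intervals, the resulting partition is ordered in the sense of Definition 1; aperiodicity of T is what guarantees, in the text's argument, that an atomless choice of µ-masses below ε is possible.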
If T is (countably) piecewise monotone, there exists a finite (or countably infinite) ordered partition M = {M i } i∈I with H(M) < ∞ satisfying (4), which is equivalent to (7) being true for all i ∈ I. Because M is an ordered partition, we have Hence, Condition 1 holds true if T is (countably) piecewise monotone. To show that Corollary 1 holds true if the dynamical system is not aperiodic, one splits the system into a periodic part and an aperiodic part in the following way: Let Θ be the set of periodic points. Assume that µ(Θ) ∉ {0, 1} is true. Then, the upper permutation entropy decomposes into the sum of (9) and (10), where (9) is the periodic part of the upper permutation entropy and (10) the aperiodic part. One can use the aperiodic version of Corollary 1 to show that the Kolmogorov-Sinai entropy is an upper bound for (10). The proof of Corollary 1 for non-aperiodic dynamical systems is then completed by Lemma A5 in Appendix A.4, which shows that (9) is equal to 0.

Examples
In order to illustrate the discussion in Section 3.1, we consider two examples. The first one reflects the situation in Corollary 1, and the second one discusses the case m = 2 in Condition 1 in Theorem 4.
The map T : (0, 1] → [0, 1) with T(ω) = 1/ω − ⌊1/ω⌋ is called a Gaussian map (see Figure 2a). This map is measure-preserving with regard to the measure µ defined by µ(A) = (1/ log 2) ∫_A 1/(1 + ω) dλ(ω), and M := {(1/(i + 1), 1/i]}_{i∈N} is a countably infinite partition into monotonicity intervals of T satisfying H(M) < ∞. This map is countably piecewise monotone and ergodic. Thus, its Kolmogorov-Sinai entropy is equal to its permutation entropy.
For the second example, set a_0 := 0 and define the remaining endpoints a_i in terms of (i + 1)(log(i + 1))², with M_i the corresponding intervals. The map T : Ω → Ω is defined as piecewise linear on each set M i (see Figure 2b). Let λ be the one-dimensional Lebesgue measure, and define a measure µ on (Ω, B) accordingly. One can verify that T is measure-preserving and ergodic with regard to µ. The partition M := {M i } i∈N does satisfy (5) for m = 1, but H(M) = ∞ holds true. Therefore, Condition 1 does not hold true for m = 1. However, one can show that Condition 1 holds true for m = 2 and the partition M̃ := {M_1, ∪_{i=2}^{∞} M_i}, which implies that the Kolmogorov-Sinai entropy is equal to the permutation entropy of this map due to Theorem 4.
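To complement the examples, one can estimate H(OP(n))/n numerically for a (countably) piecewise monotone map with known Kolmogorov-Sinai entropy. The sketch below uses the logistic map x → 4x(1−x), whose Kolmogorov-Sinai entropy with regard to its invariant measure is log 2; the orbit length, seed, and function names are our own choices, and finite-n estimates need not be close to the limit, since convergence is slow.

```python
import math
from collections import Counter

def pattern(xs):
    """Ordinal pattern as the tuple of indices sorted by value."""
    return tuple(sorted(range(len(xs)), key=xs.__getitem__))

def pe_estimate(series, n):
    """Empirical H(OP(n))/n from the sliding windows of one long orbit."""
    counts = Counter(pattern(series[i:i + n]) for i in range(len(series) - n + 1))
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values()) / n

# long orbit of the logistic map x -> 4x(1-x); its Kolmogorov-Sinai
# entropy with regard to the invariant measure is log 2
orbit = [0.3141]
for _ in range(200_000):
    orbit.append(4.0 * orbit[-1] * (1.0 - orbit[-1]))

for n in (3, 5, 7):
    print(n, pe_estimate(orbit, n))  # the limit for n -> infinity is log 2
```

Such experiments illustrate the theory but do not replace it: the finite-order values mix the entropy rate with combinatorial overhead from the number of admissible patterns.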

A Supplementary Aspect
Determining under which conditions the Kolmogorov-Sinai entropy and the upper or lower permutation entropies coincide remains an open problem in the general case, and in the simple one-dimensional case of maps that are not (countably) piecewise monotone, the relation of the Kolmogorov-Sinai entropy to the upper and lower permutation entropies is not completely understood. Not even an example where the entropies differ is known. Finally, we briefly discuss a further approach to the relationship of the Kolmogorov-Sinai entropy and the upper and lower permutation entropies.
In [12], it was shown that, under (1), the Kolmogorov-Sinai entropy is equal to the permutation entropy if, roughly speaking, the information content of 'words' of k 'successive' ordinal patterns of large length n is not too far from the information content of ordinal patterns of length n + k − 1. We want to explain this for the simple one-dimensional case and k = 2.
The following is shown in Appendix A.5:

Lemma 1.
Let Ω be a subset of R and A = B be the Borel σ-algebra on Ω. Then, holds true for all n ∈ N.
This indicates that analyzing the measure of V n as defined in (11) can be a useful approach to gain insight into the relationship between different kinds of entropies based on ordinal patterns. In particular, the behavior of µ(V n ) for n → ∞ is of interest.
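In the simple one-dimensional case, V_n can be read as the set of points ω for which no intermediate iterate T^s(ω), 1 ≤ s ≤ n − 1, lies strictly between ω and T^n(ω), since exactly then transitivity fails to settle their order relation. Under this reading, µ(V_n) can be estimated by Monte Carlo; the logistic map and the sampling of its invariant measure via the conjugacy x = sin²(πy/2) with the tent map are illustrative choices of ours:

```python
import math
import random

def logistic(x):
    return 4.0 * x * (1.0 - x)

def sample_invariant(rng):
    # the logistic map is conjugate to the tent map via x = sin^2(pi*y/2),
    # so pushing forward a uniform y samples the invariant measure
    return math.sin(math.pi * rng.random() / 2.0) ** 2

def in_Vn(x0, n):
    """V_n read as: no iterate T^s(x0), 1 <= s <= n-1, lies strictly
    between x0 and T^n(x0), so transitivity cannot settle their order."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(logistic(orbit[-1]))
    lo, hi = sorted((orbit[0], orbit[n]))
    return not any(lo < orbit[s] < hi for s in range(1, n))

def mu_Vn(n, n_samples=50_000, seed=2):
    rng = random.Random(seed)
    hits = sum(in_Vn(sample_invariant(rng), n) for _ in range(n_samples))
    return hits / n_samples

print(mu_Vn(2), mu_Vn(12))  # the estimate shrinks as n grows
```

Such estimates illustrate the decay of µ(V_n) discussed below, although they say nothing about the summability question addressed by Theorem 5.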

Lemma 2.
Let Ω be a subset of R and A = B be the Borel σ-algebra on Ω. If T is ergodic, then The statement under the assumption of mixing has been shown in [5], and the proof in the ergodic case is given in Appendix A.5.
One can show that in the simple one-dimensional case the Kolmogorov-Sinai entropy is equal to the permutation entropy if (13) holds true. Using (12), this is the case when ∑_{n=1}^{∞} µ(V_n) is finite, i.e., when the µ(V_n) decay sufficiently fast. However, we have ∑_{n=1}^{∞} µ(V_n) = ∞, as stated in Theorem 5, which will be proved in Appendix A.6.

Theorem 5.
Let Ω be a subset of R, A = B be the Borel σ-algebra on Ω, and T be aperiodic and ergodic. Then, ∑_{n=1}^{∞} µ(V_n) = ∞ holds true. Although ∑_{n=1}^{∞} µ(V_n) < ∞ is thus false, we cannot answer the question of whether or when (13) is valid. Possibly, an answer to this question, and a better understanding of the kind of decay of the µ(V_n), could be helpful in further investigating the relationship of the Kolmogorov-Sinai entropy and the upper and lower permutation entropies, at least in the simple one-dimensional ergodic case.

Conclusions
With Theorem 1, we have slightly generalized a statement given in [3] by removing a technical assumption and using more basic combinatorial arguments. The remaining assumption (1) on the random vector X cannot be weakened in general.
In Section 2.3, we have shown that the equality of the permutation entropy and the Kolmogorov-Sinai entropy implies the equality of the conditional permutation entropy and the Kolmogorov-Sinai entropy as well. We considered two different kinds of conditional permutation entropy, which turned out to be equal in the cases considered in Section 2.3; it is, however, not clear whether these two kinds of conditional permutation entropy are equal in general.
In Section 4, we have established a condition under which these two kinds of conditional entropy are equal, independently of the equality between permutation and Kolmogorov-Sinai entropies. This condition is based on a concept from [5] that was originally introduced as a tool for better understanding the relationship between permutation and Kolmogorov-Sinai entropies in a general setting. However, with Theorem 5, we have shown that this tool cannot directly be used to show the equality between permutation and Kolmogorov-Sinai entropies. It is an interesting question whether and how a clever adaptation and improvement of it can allow for new insights into the relationship between permutation and Kolmogorov-Sinai entropies.
In Section 3, we considered the simple one-dimensional case. With Theorem 4, we have given two conditions under which the permutation entropy is a lower bound for the Kolmogorov-Sinai entropy. This theorem generalizes previous statements in [1] and slightly generalizes a statement in [10]. One of the conditions (Condition 2) holds true for a large class of dynamical systems, while, for the other one (Condition 1) to hold true, it is necessary that the system is in some sense 'order preserving'. It is still an open and interesting question whether Condition 1 can be weakened, especially since, to the best of our knowledge, no counterexample to the equality of permutation and Kolmogorov-Sinai entropies exists. Finding a generalization of Theorem 4 to a multidimensional setting is a further interesting question.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Proofs
Appendix A.1. Preliminaries
Given a probability space (Ω, A , µ) and two σ-algebras A 1 , A 2 ⊆ A , we write For a collection of R-valued random variables {X i } i∈I defined on some measure space (Ω, A ), we denote by Given a family of disjoint sets P ⊆ A and some set Q ∈ A , we define =µ(Q) · (log(µ(Q)) − log(#∆(P |Q))) for all Q ∈ Q. Using the above inequality implies

This is equivalent to
Appendix A.2. Proof of the Equality in Formula (2)
The proof is based on the following Lemma A1 and Corollary A1.
Lemma A1. Let X : Ω → R be a random variable and U a finite ordered partition of R with regard to the image measure µ X . Then, for P := X −1 (U ), n ∈ N and all P π ∈ OP X (n) Proof. Set I = {1, 2, . . . , #U } and label the sets U i ∈ U with i ∈ I in such a way that holds true for all i 1 , i 2 ∈ I. Since U is assumed to be an ordered partition, this is always possible. Set P i := X −1 (U i ) for all i ∈ I so that P = {P i } i∈I . Fix n ∈ N and P π ∈ OP X (n). Using for all i = (i 0 , i 1 , . . . i n−1 ) ∈ I n , we have #∆(P (n) |P π ) = #{i ∈ I n : µ(P(i) ∩ P π ) > 0}.
The lemma can be used to directly prove the following result.
Corollary A1. Let X = (X 1 , X 2 , . . . , X d ) : Ω → R d be a random vector and U a finite partition of R into intervals. Then, Proof. Take k, m ∈ N and set P i := X −1 i (U ). Then, holds true for all i ∈ {1, 2, . . . , d}. Together with (A1) and Lemma A1, this provides We are now able to prove the equality in (2). Let p i : R d → R with p i ((x 1 , x 2 , . . . , x d )) = x i be the projection onto the i-th coordinate, and let B(R d ) denote the Borel σ-algebra on R d . Since this σ-algebra is generated by sets of the type where the I i are intervals, there exists an increasing sequence of finite partitions (U l ) l∈N of R into intervals such that holds true. Using (1), this implies Thus, (P l ) l∈N with is a generating sequence of finite partitions, which implies (see e.g., [13]) Corollary A1 provides h(P l ) ≤ lim k→∞ h(OP X (k)) for all l ∈ N. Combining the two previous statements yields On the other hand, holds true, which, together with (A3), finishes the proof of the equality in (2).

Appendix A.3. Proof of Theorem 2
For preparing the proof of Theorem 2, let us first give two lemmata.
Lemma A2. Let (P n ) n∈N be a sequence of finite partitions of Ω in A satisfying Then, holds true for all k ∈ N.
Proof. Take k ∈ N. We have The Stolz-Cesàro theorem further provides Notice that (A4) is fulfilled for P n := OP X (n).
Using the future formula for the entropy rate (see e.g., [13]), we can write h(OP X (n)) = lim l→∞ H OP X (n) T −1 (OP X (n)) (l) for all n ∈ N. This implies for all k ∈ N. Now, Lemmas A2 and A3 provide The assumption KS ≥ PE X then implies

Appendix A.4. Proofs for the Simple One-Dimensional Case
This subsection is mainly devoted to the proofs of Theorem 4 and to the proof of Lemma A5 mentioned at the end of Section 3.1. Recall the assumption that Ω is a subset of R and A = B the Borel σ-algebra on Ω. The following lemma is a step to the proof of Theorem 4.
Proof. Fix m ∈ N and n ∈ N with n ≥ m and i = (i 0 , i 1 , . . . , i n−1 ) ∈ I n . We will show that holds true for all s ∈ N with s ≥ m using induction over s: The above statement is trivial for s = m. Suppose that (A5) holds true for some s ∈ N with s ≥ m. We will show that (A5) then holds true for s + 1: In (A6), the induction hypothesis was used.
Therefore, #∆(OP(n)|M(i)) = #∆(M (n) ∨ OP(n)|M(i)) If i s = i t is true, using the fact that M is an ordered partition yields For all other cases, we have The above observations can be summarized as ≤ 2 if s = t and i s = i t .
We come now to the proof of Theorem 4, which slightly generalizes a proof given in [10], where the case m = 1 was considered. For better readability, we restate this proof with the generalization to arbitrary m ∈ N at the appropriate places within the proof.

Lemma A5.
Let Ω be a subset of R and A = B be the Borel σ-algebra on Ω. If T is (countably) piecewise monotone and completely periodic, i.e., µ-almost every ω ∈ Ω is a periodic point of T, then PE = 0. This provides PE = 0 because ε can be chosen arbitrarily close to 0.

Remark A1.
To be able to show that Condition 2 holds true under the assumptions of Corollary 1 for non-ergodic systems via ergodic decomposition, one needs to require that (Ω, B, µ) is a Lebesgue space, i.e., a probability space where Ω is a complete separable metric space and B is the completion of the Borel σ-algebra on Ω, meaning that B additionally contains all subsets of Borel sets with measure 0. If Ω ⊆ R is a Borel subset, (Ω, B, µ) is a Lebesgue space if B is complete with regard to µ (see e.g., [14]). Alternatively, one can use Rokhlin-Halmos towers to show that Condition 2 holds true for non-ergodic systems (see [10]). For this approach, it is only necessary to require that Ω is a separable metric space, B the Borel σ-algebra on Ω, and T : Ω → Ω an aperiodic map [15].
Moreover, notice that, in [10], it was required that Ω is a compact metric space so that µ is regular, which allowed for approximating any set of B by a finite union of intervals. However, this is not necessary because the Borel σ-algebra is generated by the algebra containing all sets of the type I ∩ Ω, where I is an open or closed interval, and every set of a σ-algebra can be approximated by a set of the algebra that generates that σ-algebra (see, e.g., [16]).
Appendix A.5. Proof of Lemma 1 and the 'Ergodic Part' of Lemma 2
Let Ω be a subset of R and A = B be the Borel σ-algebra on Ω. We start with showing Lemma 1. For this, fix some n ∈ N. By its definition, the set V n can be written as a union of sets in OP(n) ∨ T −1 (OP(n)). Notice that For Q ∈ OP(n) ∨ T −1 (OP(n)), consider some ω ∈ Q. If ω ∉ V n is true, we can use the transitivity of the order relation to determine the order relation of ω and T n (ω) from the ordering given by Q.
for all Q ⊆ Ω \ V n . Thus, This shows Lemma 1.
To prove the 'ergodic part' of Lemma 2, take ε > 0. Choose an ordered partition U = {U_i}_{i=1}^{N} of Ω such that 0 < µ(U_i) < ε holds true for all i ∈ {1, 2, . . . , N}. This is always possible because T was assumed to be aperiodic. Label the sets U_i ∈ U in such a way that holds true for all i_1, i_2 ∈ {1, 2, . . . , N}. Since T is ergodic, there exists an n_0 ∈ N such that The set ⋂_{i=1}^{N} ⋃_{s=1}^{n_0−1} T^{−s}(U_i) consists of all ω ∈ Ω whose orbit (ω, T(ω), . . . , T^{n_0−1}(ω)) visits each of the sets in U. Thus, if such an ω lies in U_i with 1 < i < N and in V_t for t ≥ n_0, then, by the definition of V_t, the point T^t(ω) must belong to U_{i−1} ∪ U_i ∪ U_{i+1}. With a similar argumentation for ω ∈ U_1 or ω ∈ U_N, one obtains the following: holds true for all t ∈ N with t > n_0. Using the ergodicity of T implies The Stolz-Cesàro theorem then provides Since ε can be chosen arbitrarily close to 0, this implies lim inf_{n→∞} µ(V_n) = 0.
Appendix A.6. Proof of Theorem 5
Let Ω be a subset of R and A = B be the Borel σ-algebra on Ω, and consider the set Θ of periodic points. Since T is µ-almost surely aperiodic, we have µ(Θ) = 0.