
On the Connections of Generalized Entropies With Shannon and Kolmogorov–Sinai Entropies

Fryderyk Falniowski
Department of Mathematics, Cracow University of Economics, Rakowicka 27, 31-510 Kraków, Poland
Entropy 2014, 16(7), 3732-3753; https://doi.org/10.3390/e16073732
Submission received: 17 May 2014 / Revised: 19 June 2014 / Accepted: 30 June 2014 / Published: 3 July 2014

Abstract

We consider the concept of generalized Kolmogorov–Sinai entropy, where instead of the Shannon entropy function, we consider an arbitrary concave function defined on the unit interval, vanishing at the origin. Under mild assumptions on this function, we show that this isomorphism invariant is linearly dependent on the Kolmogorov–Sinai entropy.

1. Introduction

The Boltzmann–Gibbs entropy and the Shannon entropy are measures that lie at the foundations of statistical physics and modern information theory, respectively. The dynamical counterparts of the latter—the dynamical entropy of a partition and the Kolmogorov–Sinai entropy—have had a great impact on the modern theory of dynamical systems and mathematical physics (see, e.g., [1,2]). They were extensively studied and successfully applied, among others, to statistical physics and quantum information, and proved to be an exceptionally powerful tool for exploring nonlinear systems. One of the biggest advantages of the Kolmogorov–Sinai entropy lies in the fact that it makes it possible to distinguish the formally regular systems (those with measure-theoretic entropy equal to zero) from the chaotic ones (those with positive entropy, which implies the positivity of topological entropy [3]).
The Kolmogorov–Sinai entropy of a given transformation T acting on a probability space (X, Σ, μ) is defined as the supremum over all finite measurable partitions 𝒫 of the dynamical entropy of T with respect to 𝒫, denoted by h(T, 𝒫). As a dynamical counterpart of Shannon entropy, the entropy of the transformation T with respect to a given partition 𝒫 is defined as the limit of the sequence $\left(\frac{1}{n}H(\mathcal{P}^n)\right)_{n=1}^{\infty}$, where:
$$H(\mathcal{P}^n) = \sum_{A\in\mathcal{P}^n} \eta(\mu(A))$$
with η being the Shannon function given by η(x) = −x log x for x > 0 and η(0) := 0, and 𝒫^n is the joint partition of the partitions T^{−i}𝒫 for i = 0, ..., n − 1. The existence of the limit in the definition of the dynamical entropy follows from the subadditivity of η. The most common interpretation of this quantity is the average (over time and the phase space) one-step gain of information about the initial state. Taking the supremum over all finite partitions, we obtain an isomorphism invariant that measures the rate at which the system produces randomness (chaos).
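For a concrete feel for these definitions, the following minimal numerical sketch (our own illustration, not part of the paper; it assumes logarithms base 2 and uses the doubling map as an example) evaluates the static entropy H(𝒫^n) and the normalized sequence (1/n)H(𝒫^n):

```python
import numpy as np

def eta(x):
    """Shannon function: eta(x) = -x*log2(x) for x > 0, with eta(0) := 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, -x * np.log2(np.where(x > 0, x, 1.0)), 0.0)

def H(cell_measures):
    """Static entropy H(P) = sum of eta(mu(A)) over the cells A of a partition."""
    return float(np.sum(eta(cell_measures)))

# Example: doubling map T(x) = 2x mod 1, P = {[0,1/2), [1/2,1)}, Lebesgue measure.
# The join P^n consists of 2^n dyadic intervals, each of measure 2^(-n), so
# (1/n) H(P^n) = 1 = log2(2) for every n, which equals h(T, P).
for n in (1, 5, 10):
    cells = np.full(2**n, 2.0**-n)  # measures of the cells of P^n
    print(n, H(cells) / n)          # prints 1.0 each time
```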
Since Shannon’s seminal paper [4], several refinements of the concept of Shannon static entropy have been considered (see, e.g., Rényi [5], Arimoto [6], Wu and Verdú [7] and Csiszár’s survey article [8]). Their dynamical and measure-theoretic counterparts were also considered by a few authors. De Paly [9] proposed generalized dynamical entropies based on the concept of relative static entropies. Unfortunately, it appeared that, apart from some special cases [9,10], the explicit calculation of this invariant may not be possible. Grassberger and Procaccia proposed in [11] a dynamical counterpart of the well-known generalization of Shannon entropy—the Rényi entropy—and its measure-theoretic counterpart was considered by Takens and Verbitski. They showed that for ergodic transformations with positive measure-theoretic entropy, Rényi entropies of a measure-preserving transformation are either infinite or equal to the measure-theoretic entropy [12]. The answer for non-ergodic aperiodic transformations is different: Rényi entropies of order α > 1 are equal to the essential infimum of the measure-theoretic entropies of the measures forming the decomposition of a given measure into ergodic components, while for α < 1, they are still infinite [13]. In particular, this means that Rényi entropies of order α < 1 are metric invariants sensitive to ergodicity. A similar generalization was made by Mesón and Vericat [14–16] for the so-called Havrda–Charvát–Tsallis entropy [17], and their results were similar to the ones obtained by Takens and Verbitski in [12].
In this paper, we present a generalization of the concept of dynamical entropy. This was done for a few reasons. First of all, the entropy used in the theory of dynamical systems is a natural isomorphism invariant. However, outside of the class of Bernoulli automorphisms, it is not a complete invariant, i.e., two systems with the same entropy need not be isomorphic. In particular, it is not applicable to a wide class of systems with zero entropy. Moreover, we can ask whether considering a function different from the Shannon function (at the expense of losing its interpretations from information theory) can give a significantly new invariant. This can show which properties of the Shannon function are crucial for the usage of entropy in dynamical systems. Finally, different dynamical generalizations of entropy have been used by physicists for years [18–22]. However, until now, there have not been many rigorous results. We hope that this note will help to fill in this gap.
Our approach is based on the generalization suggested by Rényi and extended by Arimoto, applied to the dynamical case. Instead of the Shannon function η, we consider a concave function g : [0, 1] → ℝ, such that $\lim_{x\to 0^+} g(x) = g(0) = 0$, and define the dynamical g-entropy of the finite partition 𝒫 as:
$$h(g,T,\mathcal{P}) = \limsup_{n\to\infty} \frac{1}{n} \sum_{A\in\mathcal{P}^n} g(\mu(A)).$$
The behavior of the quotient g(x)/η(x) as x converges to zero appears to be crucial for our considerations. Defining:
$$\mathrm{Ci}(g) := \liminf_{x\to 0^+} \frac{g(x)}{\eta(x)} \quad\text{and}\quad \mathrm{Cs}(g) := \limsup_{x\to 0^+} \frac{g(x)}{\eta(x)},$$
we will prove that:
$$\mathrm{Ci}(g)\, h(T,\mathcal{P}) \le h(g,T,\mathcal{P}) \le \mathrm{Cs}(g)\, h(T,\mathcal{P}).$$
In the case of Ci(g) = ∞, we will show that in every aperiodic system and for every γ ≥ 0, there exists a finite partition 𝒫, such that h(g, T, 𝒫) ≥ γ.
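To illustrate the role of Ci(g) and Cs(g), here is a small numerical sketch (our own illustration, not from the paper) using the Havrda–Charvát–Tsallis function g_α(x) = (x − x^α)/(α − 1), which the Discussion mentions for α ∈ (0, 1); the ratio g/η diverges at zero for α < 1 and tends to zero for α > 1:

```python
import numpy as np

def eta(x):
    return -x * np.log2(x)

def tsallis(x, alpha):
    """Havrda-Charvat-Tsallis entropy function g_alpha(x) = (x - x**alpha)/(alpha - 1)."""
    return (x - x**alpha) / (alpha - 1.0)

xs = 10.0 ** -np.arange(2, 11, 2)  # x -> 0+
for alpha in (0.5, 2.0):
    print(alpha, tsallis(xs, alpha) / eta(xs))
# alpha = 0.5: the ratio blows up, so Ci(g) = Cs(g) = infinity (g lies in G_0^inf)
# alpha = 2.0: the ratio tends to 0, so Ci(g) = Cs(g) = 0     (g lies in G_0^0)
```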
Taking the supremum over all partitions, we obtain a Kolmogorov entropy-like isomorphism invariant, which we will call the measure-theoretic g-entropy of a transformation with respect to an invariant measure. One might ask whether this invariant can give any new information about the system. We will prove (Theorem 5) that for g with Cs(g) < ∞, this new invariant is linearly dependent on the Kolmogorov–Sinai entropy. This shows that the Shannon entropy function is special from the point of view of the theory of dynamical entropies; it is the most natural one: not only does it have all of the properties that an entropy function should have [1], but considering different entropy functions does not yield an essentially different invariant. This result admits another interpretation. Ornstein and Weiss showed in [23] that every finitely observable invariant for the class of all ergodic processes has to be a continuous function of the entropy. It is easy to see that any continuous function of the entropy is finitely observable; one simply composes the entropy estimators with the continuous function itself. In other words, an isomorphism invariant is finitely observable if and only if it is a continuous function of the Kolmogorov–Sinai entropy. Therefore, our result implies that the generalized measure-theoretic entropy is, in fact, finitely observable. It should be possible to give a more direct proof of the finite observability of the generalized measure-theoretic entropy, but the proof cannot be easier than the proof that entropy itself is finitely observable [24,25]. On the other hand, different entropy functions might still be of use, e.g., in the case of zero entropy systems, where we may consider generalizations of concepts used in this case: entropy convergence rates [26,27], generalized topological entropies [28] or entropy dimensions [29]. The generalization of entropy convergence rates can be found in [30]. Our results imply that, from the point of view of the theory of entropy in dynamical systems, the crucial property of the Shannon function η is its behavior in the neighborhood of zero.
The note is organized as follows: in the next section, we introduce the dynamical g-entropy and establish its basic properties. The subsequent section is devoted to the construction of a zero dynamical entropy process with sufficiently large g-entropy. Finally, in the last section, we define a measure-theoretic g-entropy of a transformation and show connections between this new invariant and the Kolmogorov-Sinai entropy.

2. Results

2.1. Basic Facts and Definitions

Let (X, Σ, μ) be a Lebesgue space, and let g : [0, 1] → ℝ be a concave function with $g(0) = \lim_{x\to 0^+} g(x) = 0$. (We might assume only that g(0) = 0; but then, the idea of the dynamical g-entropy would fail, since if 𝒫^{n+1} ⪰ 𝒫^n for every n and $\lim_{x\to 0^+} g(x) \ne 0$, then the dynamical g-entropy of the partition 𝒫 would be infinite. Therefore, if g is not well-defined at zero, we will assume that $g(0) := \lim_{x\to 0^+} g(x)$.) By 𝒢₀, we will denote the set of all such functions. Every g ∈ 𝒢₀ is subadditive, i.e., g(x + y) ≤ g(x) + g(y) for every x, y with x + y ∈ [0, 1], and quasi-homogenic, i.e., φ_g : (0, 1] → ℝ defined by φ_g(x) := g(x)/x is decreasing (see [31]). If g is fixed, we will omit the index, writing just φ.
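Both properties are easy to check numerically; the sketch below (our own verification on a grid, with g(x) = √x as an assumed sample element of 𝒢₀) tests the monotonicity of φ_g and the subadditivity of g:

```python
import numpy as np

g = np.sqrt                      # sample g in G_0: concave, g(0) = 0 (an assumed example)
x = np.linspace(1e-6, 1.0, 1000)

phi = g(x) / x                   # quasi-homogeneity: phi should be decreasing
assert np.all(np.diff(phi) <= 1e-12)

# subadditivity g(x + y) <= g(x) + g(y) on a grid of pairs with x + y <= 1
u = np.linspace(0.0, 1.0, 201)
X, Y = np.meshgrid(u, u)
mask = X + Y <= 1.0
assert np.all(g(X + Y)[mask] <= g(X)[mask] + g(Y)[mask] + 1e-12)
print("phi decreasing and g subadditive on the grid")
```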
Any finite family 𝒫 of pairwise disjoint subsets of X, such that $\bigcup_{A_i\in\mathcal{P}} A_i = X$, is called a partition. The set of all finite partitions will be denoted by 𝔅. For a given 𝒫 ∈ 𝔅, we define the g-entropy of the partition 𝒫 as:
$$H(g,\mathcal{P}) := \sum_{A\in\mathcal{P}} g(\mu(A)).$$
For g = η, the latter is equal to the Shannon entropy of the partition 𝒫. For partitions 𝒫, 𝒬 ∈ 𝔅 of the space X, we define a new partition 𝒫 ∨ 𝒬 (the joint partition of 𝒫 and 𝒬) consisting of the subsets of the form B ∩ C, where B ∈ 𝒫 and C ∈ 𝒬.
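As a quick illustration of the joint partition (a toy example of ours, with sets coded as Python sets over a six-point space):

```python
from itertools import product

# P v Q consists of the nonempty intersections B ∩ C with B in P and C in Q
P = [{1, 2, 3}, {4, 5, 6}]
Q = [{1, 4}, {2, 3, 5, 6}]
join = [B & C for B, C in product(P, Q) if B & C]
print(join)  # [{1}, {2, 3}, {4}, {5, 6}]
```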

2.1.1. Dynamical g-Entropies

For an automorphism T : X → X and a partition 𝒫 = {E₁, ..., E_k}, we put:
$$T^{-j}\mathcal{P} := \{T^{-j}E_1, \dots, T^{-j}E_k\}$$
and
$$\mathcal{P}^n = \mathcal{P} \vee T^{-1}\mathcal{P} \vee \dots \vee T^{-n+1}\mathcal{P}.$$
Now, for a given g𝒢0 and a finite partition 𝒫, we can define the dynamical g-entropy of the transformation T with respect to 𝒫 as:
$$h_\mu(g,T,\mathcal{P}) = \limsup_{n\to\infty} \frac{1}{n} H(g,\mathcal{P}^n).$$
Alternatively, we will call it the g-entropy of the process (X, Σ, μ, T, 𝒫). If the dynamical system (X, Σ, μ, T) is fixed, then we omit T, writing just h(g, 𝒫). As in the case of Shannon dynamical entropies, we are interested in the existence of the limit of $\left(\frac1n H(g,\mathcal{P}^n)\right)_{n=1}^\infty$. If g = η, we obtain the Shannon dynamical entropy h(T, 𝒫). However, in the general case, we cannot replace the upper limit in Equation (2) by the limit, since it might not exist. The existence of the limit in the case of the Shannon function follows from the subadditivity of the static Shannon entropy. Every subderivative function, i.e., a function for which the inequality g(xy) ≤ xg(y) + yg(x) holds for any x, y ∈ [0, 1], has this property, but it is not true in general (an appropriate example will be given in Section 2.1.3). Therefore, we propose more general classes of functions for which this limit exists:
$$\mathcal{G}_0^0 := \left\{ g\in\mathcal{G}_0 \,\middle|\, \lim_{x\to 0^+}\frac{g(x)}{\eta(x)} = 0 \right\} \quad\text{or}\quad \mathcal{G}_0^{\mathrm{Sh}} := \left\{ g\in\mathcal{G}_0 \,\middle|\, 0 < \lim_{x\to 0^+}\frac{g(x)}{\eta(x)} < \infty \right\}.$$
It is easy to show that if g is subderivative, then the limit $\lim_{x\to 0^+} g(x)/\eta(x)$ is finite. Moreover, we will see that the values of dynamical g-entropies depend on the behavior of g in the neighborhood of zero. We will prove that if $g \in \mathcal{G}_0^0 \cup \mathcal{G}_0^{\mathrm{Sh}}$, then there is a linear dependence between the dynamical g-entropy and the Shannon dynamical entropy of a given partition. Before we give the general result (Theorem 1), we will state a few facts, which we will use in the proof of this theorem. We give the following lemmas, omitting their elementary proofs.
Lemma 1. Let b_i > 0, a_i ∈ ℝ for i = 1, ..., m; then:
$$\min_{i=1,\dots,m} \frac{a_i}{b_i} \le \frac{\sum_{i=1}^m a_i}{\sum_{i=1}^m b_i} \le \max_{i=1,\dots,m} \frac{a_i}{b_i}.$$
Lemma 2. If 𝒫 ∈ 𝔅, δ > 0 and g : [0, 1] → ℝ, then:
$$\sum_{A\in\mathcal{P},\, \mu(A)\ge\delta} g(\mu(A)) \le \frac{1}{\delta} \max_{x\in[\delta,1]} g(x).$$
The following lemma states that the value of the dynamical g-entropy is determined by the behavior of g in the neighborhood of zero.
Lemma 3. If g₁, g₂ ∈ 𝒢₀ and there exists c > 0, such that g₁(x) = g₂(x) for x ∈ [0, c], then for every 𝒫 ∈ 𝔅, h(g₁, 𝒫) = h(g₂, 𝒫).
Proof. Let 𝒫 ∈ 𝔅, and let g₁, g₂ ∈ 𝒢₀ and c > 0 fulfill the assumptions. Because every g ∈ 𝒢₀ is bounded, we have:
$$\left| H(g_1,\mathcal{P}^n) - H(g_2,\mathcal{P}^n) \right| = \left| \sum_{A\in\mathcal{P}^n:\, \mu(A)>c} \big( g_1(\mu(A)) - g_2(\mu(A)) \big) \right| \le \frac{1}{c} \max_{x\in[c,1]} |g_1(x) - g_2(x)|.$$
Dividing by n and letting n tend to infinity, we obtain:
$$h(g_1,\mathcal{P}) = h(g_2,\mathcal{P}).$$
We may now state the main theorem of this section.
Theorem 1. Let 𝒫 ∈ 𝔅.
(1)
If g ∈ 𝒢₀ is such that g′(0) < ∞, then h(g, 𝒫) = 0.
(2)
If g₁, g₂ ∈ 𝒢₀ are such that g₁′(0) = g₂′(0) = ∞,
$$\liminf_{x\to 0^+} \frac{g_1(x)}{g_2(x)} < \infty,$$
and h(g₂, 𝒫) < ∞, then:
$$\liminf_{x\to 0^+} \frac{g_1(x)}{g_2(x)}\, h(g_2,\mathcal{P}) \le h(g_1,\mathcal{P}).$$
If, additionally, $\limsup_{x\to 0^+} \frac{g_1(x)}{g_2(x)} < \infty$, then:
$$h(g_1,\mathcal{P}) \le \limsup_{x\to 0^+} \frac{g_1(x)}{g_2(x)}\, h(g_2,\mathcal{P}).$$
(3)
If h(g₂, 𝒫) = ∞ and $\liminf_{x\to 0^+} \frac{g_1(x)}{g_2(x)} > 0$, then h(g₁, 𝒫) = ∞.
Remark 1. Whenever g₂ : [0, 1] → ℝ is a nonnegative concave function satisfying g₂(0) = 0 and g₂′(0) = ∞, we can obtain any pair 0 < a ≤ b ≤ ∞ as the limit inferior and limit superior of g₁/g₂ at zero by choosing a suitable function g₁. The idea is as follows: construct g₁ piecewise linear. To do so, define inductively a strictly decreasing sequence x_k → 0 and a decreasing sequence of values y_k = g₁(x_k) → 0, thus defining intervals J_k := [x_{k+1}, x_k] on which g₁ is affine. The only constraint needed to get a concave function is that the slope of g₁ on each interval J_k has to be smaller than y_k/x_k and increasing with respect to k; this is not an obstruction to approaching any limit inferior and limit superior for g₁(x)/g₂(x), provided that x_{k+1} > 0 is chosen small enough.
Proof of Theorem 1. Let 𝒫 ∈ 𝔅. Suppose that g ∈ 𝒢₀ and g′(0) < ∞. Then, φ(x) = g(x)/x ≤ g′(0) for x ∈ (0, 1], so by the concavity of g:
$$h(g,\mathcal{P}) = \limsup_{n\to\infty} \frac1n H(g,\mathcal{P}^n) \le \limsup_{n\to\infty} \frac1n\, \varphi\!\left(\frac{1}{\operatorname{card}\mathcal{P}^n}\right) \le \limsup_{n\to\infty} \frac{g'(0)}{n} = 0,$$
which completes the proof of Point 1. To show Point 2, let g₁, g₂ ∈ 𝒢₀ be such that g₁′(0) = g₂′(0) = ∞ and h(g₂, 𝒫) < ∞. W.l.o.g., we can assume that g₁(x), g₂(x) > 0 for x ∈ (0, 1), since if there exists x₀ ∈ (0, 1), such that g_i(x₀) = 0 for i = 1 or i = 2, then we can define g̃_i : [0, 1] → ℝ as:
$$\tilde g_i(x) := \begin{cases} g_i(x), & \text{for } x\in[0,s_i)\\ g_i(s_i), & \text{for } x\in[s_i,1] \end{cases}$$
where s_i ∈ (0, 1] is such that $\max_{x\in[0,1]} g_i(x) = g_i(s_i)$. Then, g̃_i is strictly positive on (0, 1], and by Lemma 3, we have:
$$h(\tilde g_i,\mathcal{P}) = h(g_i,\mathcal{P}).$$
Let us assume that:
$$\limsup_{x\to 0^+} \frac{g_1(x)}{g_2(x)} < \infty.$$
Since g₂ is subadditive, the sequence $(H(g_2,\mathcal{P}^n))_{n=1}^\infty$ is nondecreasing, and the limit of H(g₂, 𝒫^n) exists. If it is finite, then h(g₂, 𝒫) = 0, and by Equation (3) and Lemma 1, we have:
$$\sum_{A\in\mathcal{P}^n} g_1(\mu(A)) \le \sum_{A\in\mathcal{P}^n:\,\mu(A)<\frac12} g_1(\mu(A)) + 2\max_{x\in[\frac12,1]} g_1(x) \le \sup_{x\in(0,\frac12)}\frac{g_1(x)}{g_2(x)} \sum_{A\in\mathcal{P}^n:\,\mu(A)<\frac12} g_2(\mu(A)) + 2\max_{x\in[\frac12,1]} g_1(x).$$
Because $\limsup_{x\to 0^+} \frac{g_1(x)}{g_2(x)} < \infty$, there exists M > 0, such that g₁(x)/g₂(x) < M for x < 1/2. Therefore, $\sup_{x\in(0,\frac12)} \frac{g_1(x)}{g_2(x)} < \infty$, and we obtain:
$$0 \le h(g_1,\mathcal{P}) = \limsup_{n\to\infty} \frac1n H(g_1,\mathcal{P}^n) \le \sup_{x\in(0,\frac12)}\frac{g_1(x)}{g_2(x)}\, \limsup_{n\to\infty} \frac1n H(g_2,\mathcal{P}^n) = 0.$$
Thus, we can assume that $\lim_{n\to\infty} H(g_2,\mathcal{P}^n) = \infty$.
Fix ε > 0. There exists δ > 0, such that, for x ∈ (0, δ], we have:
$$\liminf_{x\to 0^+}\frac{g_1(x)}{g_2(x)} - \varepsilon < \frac{g_1(x)}{g_2(x)} < \limsup_{x\to 0^+}\frac{g_1(x)}{g_2(x)} + \varepsilon.$$
Lemma 1 implies that:
$$\liminf_{x\to 0^+}\frac{g_1(x)}{g_2(x)} - \varepsilon \le \frac{\sum_{A\in\mathcal{P}^n,\,\mu(A)<\delta} g_1(\mu(A))}{\sum_{A\in\mathcal{P}^n,\,\mu(A)<\delta} g_2(\mu(A))} \le \limsup_{x\to 0^+}\frac{g_1(x)}{g_2(x)} + \varepsilon.$$
Using Equation (3) for every n > 0, we get:
$$\sum_{A\in\mathcal{P}^n,\,\mu(A)\ge\delta} g_i(\mu(A)) \le \frac1\delta\, \overline{G_\delta^i},$$
where $\overline{G_\delta^i} := \max_{x\in[\delta,1]} g_i(x)$ for i = 1, 2. Therefore:
$$\frac{\sum_{A\in\mathcal{P}^n:\,\mu(A)<\delta} g_1(\mu(A))}{\sum_{A\in\mathcal{P}^n:\,\mu(A)<\delta} g_2(\mu(A)) + \frac1\delta \overline{G_\delta^2}} \le \frac{\sum_{A\in\mathcal{P}^n} g_1(\mu(A))}{\sum_{A\in\mathcal{P}^n} g_2(\mu(A))} \le \frac{\sum_{A\in\mathcal{P}^n:\,\mu(A)<\delta} g_1(\mu(A)) + \frac1\delta \overline{G_\delta^1}}{\sum_{A\in\mathcal{P}^n:\,\mu(A)<\delta} g_2(\mu(A))},$$
and $\sum_{A\in\mathcal{P}^n:\,\mu(A)<\delta} g_2(\mu(A)) \to \infty$ as $n\to\infty$. Dividing the sums by $\sum_{A\in\mathcal{P}^n:\,\mu(A)<\delta} g_2(\mu(A))$ and using Equation (4), we obtain:
$$\frac{\liminf_{x\to 0^+}\frac{g_1(x)}{g_2(x)} - \varepsilon}{1 + \overline{G_\delta^2}\Big/\Big(\delta \sum_{A\in\mathcal{P}^n:\,\mu(A)<\delta} g_2(\mu(A))\Big)} \le \frac{\sum_{A\in\mathcal{P}^n} g_1(\mu(A))}{\sum_{A\in\mathcal{P}^n} g_2(\mu(A))} \le \limsup_{x\to 0^+}\frac{g_1(x)}{g_2(x)} + \varepsilon + \frac{\overline{G_\delta^1}}{\delta \sum_{A\in\mathcal{P}^n:\,\mu(A)<\delta} g_2(\mu(A))}.$$
Letting n tend to infinity, we obtain:
$$\liminf_{x\to 0^+}\frac{g_1(x)}{g_2(x)} - \varepsilon \le \liminf_{n\to\infty}\frac{H(g_1,\mathcal{P}^n)}{H(g_2,\mathcal{P}^n)} \le \limsup_{n\to\infty}\frac{H(g_1,\mathcal{P}^n)}{H(g_2,\mathcal{P}^n)} \le \limsup_{x\to 0^+}\frac{g_1(x)}{g_2(x)} + \varepsilon.$$
Therefore:
$$\left(\liminf_{x\to 0^+}\frac{g_1(x)}{g_2(x)} - \varepsilon\right) h(g_2,\mathcal{P}) \le \liminf_{n\to\infty}\frac{H(g_1,\mathcal{P}^n)}{H(g_2,\mathcal{P}^n)}\cdot \limsup_{n\to\infty}\frac1n H(g_2,\mathcal{P}^n) \le \limsup_{n\to\infty}\frac1n H(g_1,\mathcal{P}^n) \le \limsup_{n\to\infty}\frac{H(g_1,\mathcal{P}^n)}{H(g_2,\mathcal{P}^n)}\cdot \limsup_{n\to\infty}\frac1n H(g_2,\mathcal{P}^n) \le \left(\limsup_{x\to 0^+}\frac{g_1(x)}{g_2(x)} + \varepsilon\right) h(g_2,\mathcal{P}).$$
Thus, we obtain the assertion. In the case of an infinite upper limit of the quotient g₁(x)/g₂(x), we can repeat the above reasoning, simply omitting the upper bound for the considered expressions.
If $\liminf_{x\to 0^+} \frac{g_1(x)}{g_2(x)} > 0$ and h(g₂, 𝒫) = ∞, then for sufficiently small ε > 0, we have $\liminf_{x\to 0^+} \frac{g_1(x)}{g_2(x)} > \varepsilon$, and using similar arguments, we obtain Point 3.
Similar arguments lead us to the following statement:
Theorem 2. Let g₁, g₂ ∈ 𝒢₀ be such that $\lim_{x\to 0^+} g_1(x)/g_2(x) = \infty$, and let a finite partition 𝒫 have positive g₂-entropy. Then, h(g₁, 𝒫) is infinite.
Theorems 1 and 2 imply a few corollaries:
Corollary 1. If the limit $\lim_{x\to 0^+} \frac{g_1(x)}{g_2(x)} < \infty$ exists, then $h(g_1,\mathcal{P}) = \lim_{x\to 0^+} \frac{g_1(x)}{g_2(x)}\, h(g_2,\mathcal{P})$.
Let $\mathcal{G}_0^\infty := \left\{ g\in\mathcal{G}_0 \,\middle|\, \lim_{x\to 0^+}\frac{g(x)}{\eta(x)} = \infty \right\}$. If we take g₁ = g and g₂ = η, then we have the following corollary:
Corollary 2. Let 𝒫 ∈ 𝔅 and g ∈ 𝒢₀. Then:
(1)
If Ci(g) < ∞, then h(g, 𝒫) ≥ Ci(g) · h(𝒫).
(2)
If Cs(g) < ∞, then h(g, 𝒫) ∈ [Ci(g) · h(𝒫), Cs(g) · h(𝒫)].
(3)
If $g \in \mathcal{G}_0^0 \cup \mathcal{G}_0^{\mathrm{Sh}}$, then h(g, 𝒫) = C(g) · h(𝒫), where $C(g) := \lim_{x\to 0^+} g(x)/\eta(x)$.
(4)
If $g \in \mathcal{G}_0^\infty$ and h(𝒫) > 0, then h(g, 𝒫) = ∞.
Corollary 3. If (X, Σ, μ, T) has positive Kolmogorov–Sinai entropy and g ∈ 𝒢₀, then: Cs(g) < ∞ ⇒ the g-entropy of any process (X, Σ, μ, T, 𝒫) is finite ⇒ Ci(g) < ∞.
Corollary 4. If $g \in \mathcal{G}_0^0 \cup \mathcal{G}_0^{\mathrm{Sh}}$, then $h(g,\mathcal{P}) = \lim_{n\to\infty} \frac1n H(g,\mathcal{P}^n)$.

2.1.2. Case of $g \in \mathcal{G}_0^\infty$

We will show that for every $g \in \mathcal{G}_0^\infty$, any aperiodic automorphism T and every γ ∈ ℝ, there exists a partition 𝒫 ∈ 𝔅, such that h(g, 𝒫) ≥ γ. Since we omit the assumption of ergodicity, we will use different techniques, mainly based on the well-known Rokhlin Lemma, which guarantees the existence of so-called Rokhlin towers of a given height, covering a sufficiently large part of X. Using such towers, we will find lower bounds for the g-entropy of a process.
We will assume that we have an aperiodic system, i.e., (X, Σ, μ, T) for which:
$$\mu\big(\{x\in X : \exists\, n\in\mathbb{N}\ \ T^n x = x\}\big) = 0.$$
If M₀, ..., M_{n−1} ⊂ X are pairwise disjoint sets of equal measure, then τ = (M₀, M₁, ..., M_{n−1}) is called a tower. If, additionally, M_k = T^{−(n−k−1)} M_{n−1} for k = 0, ..., n − 1, then τ is called a Rokhlin tower (it is also known as a Rokhlin–Halmos or Rokhlin–Kakutani tower). By the same bold letter τ, we will denote the set $\bigcup_{k=0}^{n-1} M_k$. Obviously, μ(τ) = nμ(M_{n−1}). The integer n is called the height of the tower τ. Moreover, for i < j, we define a sub-tower:
$$\tau_i^j := (M_i, \dots, M_j) \quad\text{and}\quad \boldsymbol{\tau}_i^j = \bigcup_{k=i}^j M_k.$$
In aperiodic systems, there exist Rokhlin towers of a given height that cover a sufficiently large part of X:
Lemma 4 ([32]). If T is an aperiodic and surjective transformation of a Lebesgue space (X, Σ, μ), then for every ε > 0 and every integer n ≥ 2, there exists a Rokhlin tower τ of height n with μ(τ) > 1 − ε.
Our goal is to find a lower bound for the dynamical g-entropy of a given partition. For this purpose, we will use Rokhlin towers, and we will calculate the dynamical g-entropy with respect to a given Rokhlin tower. This leads to the following quantity: for a finite partition 𝒫 of X and F ∈ Σ, we define the (static) g-entropy of 𝒫 restricted to F as:
$$H_F(g,\mathcal{P}) := \sum_{B\in\mathcal{P}} g(\mu(B\cap F)).$$
The following lemma estimates H(g, 𝒫) from below by the value of the g-entropy restricted to a subset of X.
Lemma 5. Let g ∈ 𝒢₀. Let 𝒫 be a finite partition, such that there exists a set E ∈ 𝒫 with 0 < μ(E) < 1. If F ∈ Σ, then:
$$H(g,\mathcal{P}) \ge H_F(g,\mathcal{P}) - 3 d_{\max},$$
where $d_{\max} := \max_{x,y\in[0,1]} |g(x) - g(y)|$.
Proof. Let A ∈ 𝒫. The three-slope inequality for the concave function g implies that:
$$\frac{g(\mu(A)) - g(\mu(A\cap F))}{\mu(A) - \mu(A\cap F)} \ge \frac{g(1) - g(\mu(A))}{1 - \mu(A)}.$$
Thus, for sets of measure $\mu(A) \le \frac12$, we have:
$$g(\mu(A)) - g(\mu(A\cap F)) \ge -\frac{\mu(A) - \mu(A\cap F)}{1 - \mu(A)}\, d_{\max} \ge -2\big(\mu(A) - \mu(A\cap F)\big)\, d_{\max}.$$
Therefore, we obtain:
$$\sum_{A\in\mathcal{P}:\,\mu(A)\le 1/2} \big(g(\mu(A)) - g(\mu(A\cap F))\big) \ge -2 d_{\max} \sum_{A\in\mathcal{P}:\,\mu(A)\le 1/2} \mu(A\setminus F) \ge -2 d_{\max}$$
and:
$$\sum_{A\in\mathcal{P}:\,\mu(A)>1/2} \big(g(\mu(A)) - g(\mu(A\cap F))\big) \ge -d_{\max},$$
which implies that:
$$H(g,\mathcal{P}) \ge H_F(g,\mathcal{P}) - 3 d_{\max}.$$
The following lemma will play an important role in the proof of the main theorem of this section.
Lemma 6. Let n ∈ ℕ and E ∈ Σ. Let 𝒫_E := {E, X∖E}. Suppose that g ∈ 𝒢₀ is nonnegative on [0, α], where α is some positive number. Then, there exist δ > 0 and s ∈ (0, α), such that:
$$\left| H(g,\mathcal{P}_E^n) - H(g,\mathcal{P}_F^n) \right| \le 1 + \frac{2}{s}\, d_{\max}$$
for every F ∈ Σ such that μ(EΔF) < δ (Δ denotes the symmetric difference), where $d_{\max} := \max_{x,y\in[0,1]} |g(x) - g(y)|$.
Proof. It is easy to show that for every n ∈ ℕ, E ∈ Σ and ε > 0, there exists δ > 0, such that:
$$\min_{\pi} \sum_{i} \mu\big(A_i \Delta B_{\pi(i)}\big) < \varepsilon \quad\text{for } F\in\Sigma \text{ such that } \mu(E\Delta F) < \delta,$$
where A_i are the atoms of 𝒫_E^n, B_j are the atoms of 𝒫_F^n, and the minimum is taken over all pairings π of the atoms. W.l.o.g., we may assume that $\sum_{i=1}^{2^n} \mu(A_i\Delta B_i) < \varepsilon$ (adding empty sets if necessary). The nonnegativity of g for x ∈ [0, α] and its concavity imply that there exists s ∈ (0, α), such that g is nondecreasing on [0, s]. Fix n ∈ ℕ and E ∈ Σ. There exists ε ∈ (0, s/2), such that:
$$g(\varepsilon) < 2^{-n}.$$
It is easy to see that for x, y ∈ [0, s], the monotonicity and subadditivity of g imply that:
$$|g(y) - g(x)| \le g(|y - x|).$$
Let F ∈ Σ be such that μ(EΔF) < δ. Define $\mathcal{D}_s = \{ i\in\{1,\dots,2^n\} \mid \max\{\mu(A_i),\mu(B_i)\} < s \}$. Since there are at most 2/s indices outside 𝒟_s, from Equations (5) and (6) and the monotonicity of g on [0, s], we obtain:
$$\left| H(g,\mathcal{P}_E^n) - H(g,\mathcal{P}_F^n) \right| \le \sum_{i\in\mathcal{D}_s} |g(\mu(A_i)) - g(\mu(B_i))| + \frac{2}{s} d_{\max} \le \sum_{i\in\mathcal{D}_s} g\big(|\mu(A_i) - \mu(B_i)|\big) + \frac{2}{s} d_{\max} \le \sum_{i=1}^{2^n} g\big(\mu(A_i\Delta B_i)\big) + \frac{2}{s} d_{\max} \le 2^n g(\varepsilon) + \frac{2}{s} d_{\max} \le 1 + \frac{2}{s} d_{\max}.$$
To find the lower bound for the g-entropy of a partition, we will use so-called independent sets. We construct an independent set in the following way: Let τ be a tower of height m. We divide the highest level of this tower (M_{m−1}) into two sets of equal measure, say I^{(m−1)} and M_{m−1}∖I^{(m−1)}. Next, we consider T^{−1}I^{(m−1)} and T^{−1}(M_{m−1}∖I^{(m−1)}). We divide each of them into two sets of equal measure, obtaining sets $I_1^{(m-2)}, I_2^{(m-2)}, I_3^{(m-2)}, I_4^{(m-2)}$, and define the set I^{(m−2)} as the union of two of those sets—one subset of T^{−1}I^{(m−1)} and one of T^{−1}(M_{m−1}∖I^{(m−1)}). We repeat this procedure until we reach the lowest level of the tower, M₀ (see Figure 1). Eventually, we define $I := \bigcup_{j=0}^{m-1} I^{(j)}$. We call this set an independent set in τ.
We can make this construction because an aperiodic system has no atoms of positive measure, and in every non-atomic Lebesgue space, for every measurable set A and every α ∈ [0, μ(A)], there exists B ⊂ A, such that μ(B) = α.
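The following small simulation (our own illustration, with the tower columns parametrized by u ∈ [0, 1) and I chosen via binary digits; none of this notation comes from the paper) shows the effect of independence: the itinerary of a column through n levels behaves like n independent fair bits, so {I, X∖I} refined along the tower cuts it into 2^n cells of equal mass:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                           # number of tower levels inspected
u = rng.random(200_000)         # tower "columns" sampled uniformly (Lebesgue mass)

def bit(u, j):
    """j-th binary digit of u (j = 1 is the most significant)."""
    return (np.floor(u * 2**j) % 2).astype(int)

# declare that the point of column u at level j belongs to I iff bit j+1 of u is 1;
# the itinerary over levels 0..n-1 is then encoded as an integer in {0,...,2^n - 1}
code = sum(bit(u, j + 1) << j for j in range(n))
freq = np.bincount(code, minlength=2**n) / len(u)
print(freq.min(), freq.max())   # both close to 2**-n, i.e., about 0.0156
```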
We are able to give an explicit formula for the g-entropy of the partition generated by the independent set in τ.
Lemma 7. Let τ = (M, TM, ..., T^{2n−1}M) be a Rokhlin tower of height 2n, and let I ∈ Σ be an independent set in τ. If g ∈ 𝒢₀, then:
$$H_{\boldsymbol{\tau}_0^{n-1}}(g,\mathcal{P}_I^n) = \frac{\mu(\boldsymbol{\tau})}{2}\, \varphi\!\left(\frac{\mu(\boldsymbol{\tau})}{2^{n+1}}\right),$$
where φ(x) = g(x)/x for x > 0.
Proof. The independence of I in τ implies that the partition 𝒫_I^n restricted to $\boldsymbol{\tau}_0^{n-1}$ divides $\boldsymbol{\tau}_0^{n-1}$ into 2^n sets of equal measure $2^{-n}\mu(\boldsymbol{\tau}_0^{n-1})$. Therefore:
$$H_{\boldsymbol{\tau}_0^{n-1}}(g,\mathcal{P}_I^n) = \sum_{A\in\mathcal{P}_I^n} g\big(\mu(A\cap\boldsymbol{\tau}_0^{n-1})\big) = 2^n g\!\left(\frac{\mu(\boldsymbol{\tau}_0^{n-1})}{2^n}\right) = \mu(\boldsymbol{\tau}_0^{n-1})\, \varphi\!\left(\frac{\mu(\boldsymbol{\tau}_0^{n-1})}{2^n}\right) = \frac{\mu(\boldsymbol{\tau})}{2}\, \varphi\!\left(\frac{\mu(\boldsymbol{\tau})}{2^{n+1}}\right).$$
Theorem 3. Let $g \in \mathcal{G}_0^\infty$, let T be an aperiodic, surjective automorphism of a Lebesgue space (X, Σ, μ), and let γ ∈ ℝ. Then, there exists a partition 𝒫 ∈ 𝔅, such that:
$$h(g,\mathcal{P}) \ge \gamma.$$
Proof. We will prove that for any γ > 0, there exists a partition 𝒫_E = {E, X∖E}, such that h(g, 𝒫_E) ≥ γ. We define recursively a sequence of sets E_n ∈ Σ. Let:
$$E_0 := \emptyset, \qquad N_0 := \delta_0 := 1.$$
Let n > 0, and assume that we have already defined E_{n−1}, N_{n−1} and δ_{n−1}. Using Lemma 6, we can choose δ_n > 0, such that:
$$\delta_n < \frac12 \delta_{n-1}$$
and:
$$\left| H(g,\mathcal{P}_{E_{n-1}}^{N_{n-1}}) - H(g,\mathcal{P}_F^{N_{n-1}}) \right| < 1 + \frac{2}{s}\, d_{\max}$$
for any F ∈ Σ, for which μ(E_{n−1}ΔF) < 2δ_n.
Since:
$$\lim_{x\to 0^+} \frac{g(x)}{\eta(x)} = \infty,$$
we can choose N_n ∈ ℕ such that:
$$\frac{\varphi\big(\delta_n 2^{-N_n-1}\big)}{\varphi_\eta\big(\delta_n 2^{-N_n-1}\big)} > \frac{2\gamma}{\delta_n \log 2}.$$
By Lemma 4, there exists M_n ∈ Σ, such that τ_n = (M_n, TM_n, ..., T^{2N_n−1}M_n) is a Rokhlin tower of measure μ(τ_n) = δ_n. Let I_n ⊂ τ_n be an independent set in τ_n, and:
$$E_n := (E_{n-1}\setminus \boldsymbol{\tau}_n) \cup I_n.$$
Then:
$$\mu(E_{n-1}\Delta E_n) \le \mu(\boldsymbol{\tau}_n) = \delta_n$$
for all positive integers n. By Equation (7), we have δ_n < 2^{−n}, and we conclude that $(\mathbb{1}_{E_n})_{n=0}^\infty$ is a Cauchy sequence in L¹(X). Therefore, there exists E ∈ Σ, such that $\mathbb{1}_{E_n}$ converges to $\mathbb{1}_E$. For this set, we have:
$$\mu(E_n\Delta E) \le \sum_{k=n+1}^\infty \mu(E_k\Delta E_{k-1}) \le \sum_{k=n+1}^\infty \delta_k < 2\delta_{n+1}.$$
Since E_n ∩ τ_n = I_n, applying Equation (8) and Lemmas 5 and 7, we obtain that for N_n, such that δ_n 2^{−N_n−1} < s:
$$\begin{aligned} H(g,\mathcal{P}_E^{N_n}) &\ge H(g,\mathcal{P}_{E_n}^{N_n}) - 1 - \frac{2}{s} d_{\max} \ge H_{(\boldsymbol{\tau}_n)_0^{N_n-1}}(g,\mathcal{P}_{E_n}^{N_n}) - \left(\frac{2}{s}+3\right) d_{\max} - 1 \\ &= H_{(\boldsymbol{\tau}_n)_0^{N_n-1}}(g,\mathcal{P}_{I_n}^{N_n}) - \left(\frac{2}{s}+3\right) d_{\max} - 1 \\ &= \left[\frac{\mu(\boldsymbol{\tau}_n)\ln 2}{2}(N_n+1) - \frac{\mu(\boldsymbol{\tau}_n)\ln \mu(\boldsymbol{\tau}_n)}{2}\right] \frac{\varphi\big(\mu(\boldsymbol{\tau}_n)\, 2^{-N_n-1}\big)}{-\ln\big(\mu(\boldsymbol{\tau}_n)\, 2^{-N_n-1}\big)} - \left(\frac{2}{s}+3\right) d_{\max} - 1 \\ &\ge \frac{\ln 2}{2}\, \delta_n (N_n+1)\, \frac{\varphi\big(\delta_n 2^{-N_n-1}\big)}{\varphi_\eta\big(\delta_n 2^{-N_n-1}\big)} - \left(\frac{2}{s}+3\right) d_{\max} - 1. \end{aligned}$$
From Equation (9), we obtain that:
$$\lim_{n\to\infty} \frac{H(g,\mathcal{P}_E^{N_n})}{N_n} \ge \frac{\ln 2}{2} \lim_{n\to\infty} \delta_n\, \frac{N_n+1}{N_n}\, \frac{\varphi\big(\delta_n 2^{-N_n-1}\big)}{\varphi_\eta\big(\delta_n 2^{-N_n-1}\big)} \ge \gamma.$$
Thus:
$$\limsup_{n\to\infty} \frac1n H(g,\mathcal{P}_E^n) \ge \gamma.$$

2.1.3. Bernoulli Shifts

Let 𝒜 = {1, ..., k} be a finite alphabet. Let $X = \{ x = \{x_i\}_{i=-\infty}^{\infty} : x_i\in\mathcal{A} \}$, and let σ be the left shift:
$$\sigma(x)_i = x_{i+1}.$$
For any s ≤ t and any block [ω₀, ..., ω_{t−s}] with ω_i ∈ 𝒜, we define a cylinder:
$$C_s^t(\omega_0,\dots,\omega_{t-s}) = \{ x\in X : x_i = \omega_{i-s} \text{ for } i = s,\dots,t \}.$$
We consider the Borel σ-algebra with respect to the metric given by d(x, y) = 2^{−N}, where N = min{|i| : x_i ≠ y_i}. Let p = (p₁, ..., p_k) be a probability vector. We define a measure ρ = ρ(p) on 𝒜 by setting ρ({i}) = p_i. Then, μ_p is the corresponding product measure on X = 𝒜^ℤ. Thus, the static g-entropy of the partition 𝒫_𝒜 = {[1], [2], ..., [k]} is equal to:
$$H_{\mu_p}(g,\mathcal{P}_\mathcal{A}^n) = \sum_{\omega\in\mathcal{A}^n} g\big(\mu\big(C_0^{n-1}(\omega_0,\dots,\omega_{n-1})\big)\big) = \sum_{\omega\in\mathcal{A}^n} g\big(p_{\omega_0}\cdots p_{\omega_{n-1}}\big),$$
where ω = (ω₀, ..., ω_{n−1}). By the concavity of g, we have:
$$H_{\mu_p}(g,\mathcal{P}_\mathcal{A}^n) \le \varphi\!\left(\frac{1}{k^n}\right),$$
where equality holds only when $p = p^* = \left(\frac1k,\dots,\frac1k\right)$. Before calculating the dynamical g-entropy of 𝒫_𝒜 with respect to μ_{p*}, we give the following lemma, the proof of which will be given later:
Lemma 8. If g ∈ 𝒢₀, then:
$$\mathrm{Cs}(g) = \limsup_{n\to\infty} \frac{g(\kappa^{-n})}{\eta(\kappa^{-n})} \quad\text{and}\quad \mathrm{Ci}(g) = \liminf_{n\to\infty} \frac{g(\kappa^{-n})}{\eta(\kappa^{-n})}$$
for any κ > 1.
Therefore, applying Lemma 8 for the partition 𝒫_𝒜 and κ = k, we obtain:
$$h_{\mu_{p^*}}(g,\mathcal{P}_\mathcal{A}) = \limsup_{n\to\infty} \frac1n \varphi\!\left(\frac{1}{k^n}\right) = \begin{cases} \mathrm{Cs}(g)\log k, & \text{if } \mathrm{Cs}(g) < \infty;\\ \infty, & \text{otherwise}. \end{cases}$$
Remark 2. If we considered the lower limit instead of the upper limit, we would obtain:
$$\liminf_{n\to\infty} \frac1n \varphi\!\left(\frac{1}{k^n}\right) = \begin{cases} \mathrm{Ci}(g)\log k, & \text{if } \mathrm{Ci}(g) < \infty;\\ \infty, & \text{otherwise}. \end{cases}$$
Therefore, we cannot replace an upper limit by the limit in the definition of the dynamical g-entropy.
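Numerically, the dichotomy in the displayed formula is easy to observe; the sketch below (our own, reusing the Tsallis family g_α(x) = (x − x^α)/(α − 1) mentioned in the Discussion as an assumed example) evaluates (1/n)φ(1/k^n) for the uniform Bernoulli shift on k = 2 symbols:

```python
import numpy as np

def phi(g, x):
    return g(x) / x

eta = lambda x: -x * np.log2(x)
tsallis = lambda a: (lambda x: (x - x**a) / (a - 1.0))

k = 2  # uniform Bernoulli shift on k symbols: H(g, P^n) = phi(1/k^n)
for name, g in [("eta", eta), ("alpha=2", tsallis(2.0)), ("alpha=0.5", tsallis(0.5))]:
    print(name, [phi(g, float(k) ** -n) / n for n in (5, 15, 30)])
# eta:       equals log2(k) = 1 for every n, the Shannon dynamical entropy
# alpha=2:   tends to 0  (Cs(g) = 0,        class G_0^0)
# alpha=0.5: diverges    (Cs(g) = infinity, class G_0^inf)
```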
Proof of Lemma 8. We will show just the equality for the upper limit, since the equality for the lower limit may be obtained analogously. Let $(x_n)_{n=1}^\infty$ be a sequence realizing the upper limit, i.e., $\lim_{n\to\infty} g(x_n)/\eta(x_n) = \mathrm{Cs}(g)$, and let $(m_n)_{n=1}^\infty$ be such that $x_n \in (\kappa^{-m_n}, \kappa^{-m_n+1})$ for every n ∈ ℕ. Then, $-\log x_n \ge -\log\kappa^{-m_n+1} = (m_n-1)\log\kappa$. Every g ∈ 𝒢₀ is quasi-homogenic; so, for every positive integer n:
$$\frac{g(x_n)}{x_n} < \frac{g(\kappa^{-m_n})}{\kappa^{-m_n}}.$$
Therefore:
$$\frac{g(x_n)}{\eta(x_n)} = \frac{g(x_n)}{x_n}\cdot\frac{1}{-\log x_n} \le \frac{g(\kappa^{-m_n})}{\kappa^{-m_n}}\cdot\frac{1}{(m_n-1)\log\kappa} = \frac{g(\kappa^{-m_n})}{\eta(\kappa^{-m_n})}\cdot\frac{m_n}{m_n-1},$$
and:
$$\limsup_{x\to 0^+} \frac{g(x)}{\eta(x)} = \limsup_{n\to\infty} \frac{g(\kappa^{-n})}{\eta(\kappa^{-n})}.$$

2.2. Kolmogorov-Sinai Entropy-Like Invariant

The basic tool of ergodic theory is the Kolmogorov–Sinai entropy, which is the supremum of the Shannon dynamical entropies over all finite partitions:
$$h_\mu(T) = \sup_{\mathcal{P}\in\mathfrak{B}} h(T,\mathcal{P}).$$
It is invariant under metric isomorphism. Following Kolmogorov's proposition, we take the supremum of the dynamical g-entropy of a partition over all partitions. For a given system (X, Σ, μ, T), we define:
$$h_\mu(g,T) = \sup_{\mathcal{P}\in\mathfrak{B}} h(g,T,\mathcal{P})$$
and call it the measure-theoretic g-entropy of transformation T with respect to measure μ.
It is easy to see that it is an isomorphism invariant. Ornstein and Weiss [23] showed the striking result that measure-theoretic entropy is the only finitely observable invariant for the class of all ergodic processes. More precisely, every finitely observable invariant for the class of all ergodic processes is a continuous function of the entropy. Of course, in the case of $g \in \mathcal{G}_0^0 \cup \mathcal{G}_0^{\mathrm{Sh}}$, by Corollary 2, we have:
$$h_\mu(g,T) = \lim_{x\to 0^+}\frac{g(x)}{\eta(x)}\, h_\mu(T).$$
We will show that for a wider class of functions, namely for those for which:
$$\mathrm{Cs}(g) = \limsup_{x\to 0^+} \frac{g(x)}{\eta(x)} < \infty,$$
we have:
$$h_\mu(g,T) = \mathrm{Cs}(g)\, h_\mu(T)$$
for any ergodic transformation T. This shows that the measure-theoretic g-entropy is, in fact, finitely observable: one might simply compose the entropy estimators [25] with the linear function itself. Our proof will be similar to the proof of ([12], Theorem 1.1), where Takens and Verbitski showed that for ergodic transformations, the supremum over all finite partitions of the dynamical Rényi entropies of order α > 1 is equal to the measure-theoretic entropy of T with respect to the measure μ.
Let us introduce the necessary definitions and facts. Let T_i be automorphisms of Lebesgue spaces (X_i, Σ_i, μ_i) for i = 1, 2, respectively. Then, we say that T₂ is a factor of the transformation T₁ if there exists a homomorphism ϕ : X₁ → X₂, such that:
$$\phi\circ T_1 = T_2\circ\phi \qquad \mu_1\text{-a.e. on } X_1.$$
Suppose that T₂ is a factor of T₁ under the homomorphism ϕ. Then, for an arbitrary finite partition 𝒫 of X₂, we have:
$$H\!\left(g, \bigvee_{i=0}^{k-1} T_2^{-i}\mathcal{P}\right) = H\!\left(g, \phi^{-1}\!\bigvee_{i=0}^{k-1} T_2^{-i}\mathcal{P}\right) = H\!\left(g, \bigvee_{i=0}^{k-1} T_1^{-i}\phi^{-1}\mathcal{P}\right).$$
Hence, h(g, T₂, 𝒫) = h(g, T₁, ϕ^{−1}𝒫). Therefore:
$$h_\mu(g,T_2) = \sup_{\mathcal{P}\ \mathrm{finite}} h(g,T_2,\mathcal{P}) = \sup_{\mathcal{P}\ \mathrm{finite}} h(g,T_1,\phi^{-1}\mathcal{P}) \le h_\mu(g,T_1).$$
This implies the following proposition:
Proposition 1. If T₂ is a factor of T₁, then for every function g ∈ 𝒢₀:
$$h_\mu(g,T_2) \le h_\mu(g,T_1).$$

2.2.1. Measure-Theoretic g-Entropies for Bernoulli Automorphisms

An automorphism T on (X, Σ, μ) is called a Bernoulli automorphism if it is isomorphic to some Bernoulli shift. The crucial role in the proof of the main theorem of this section (Theorem 5) will be played by the well-known Sinai theorem:
Theorem 4 (Sinai [33]). Let T be an arbitrary ergodic automorphism of a Lebesgue space (X, Σ, μ). Then, each Bernoulli automorphism T₁ with h_μ(T₁) ≤ h_μ(T) is a factor of the automorphism T.
We start by proving the following proposition:
Proposition 2. Let T be an arbitrary ergodic automorphism with h_μ(T) ≥ log M for some integer M ≥ 2. Then, for every g ∈ 𝒢₀:
$$h_\mu(g,T) \ge \mathrm{Cs}(g)\log M.$$
Proof. Consider the shift σ over the alphabet 𝒜 = {0, 1, ..., M − 1} with the corresponding Bernoulli measure generated by $p_1 = \dots = p_M = \frac1M$. It is easy to see that h_μ(σ) = log M. From Theorem 4, we conclude that σ is a factor of T. Therefore, applying Formula (10), we obtain:
$$h_\mu(g,T) \ge h_\mu(g,\sigma) \ge h(g,\sigma,\mathcal{P}_\mathcal{A}) = \limsup_{n\to\infty} \frac1n \varphi(M^{-n}) = \log M\, \limsup_{n\to\infty} \frac{\varphi(M^{-n})}{\varphi_\eta(M^{-n})}.$$
Applying Lemma 8 completes the proof.

2.2.2. Main Theorem

Our goal in this section is the following result:
Theorem 5. Let T be an ergodic automorphism of a Lebesgue space (X, Σ, μ), and let g ∈ 𝒢₀ be such that Cs(g) ∈ (0, ∞). Then:
$$h_\mu(g,T) = \begin{cases} \mathrm{Cs}(g)\, h_\mu(T), & \text{if } h_\mu(T) < \infty,\\ \infty, & \text{otherwise}. \end{cases}$$
If $g \in \mathcal{G}_0^0$, then h_μ(g, T) = 0. If g ∈ 𝒢₀ is such that Cs(g) = ∞ and T has positive measure-theoretic entropy, then h_μ(g, T) = ∞.
Moreover, for $g \in \mathcal{G}_0^\infty$, from Theorem 3, we have:
Corollary 5. Let $g \in \mathcal{G}_0^\infty$. If the system (X, Σ, μ, T) is aperiodic and T is surjective, then h_μ(g, T) = ∞.
To prove Theorem 5, we first need a few preliminary lemmas.
Lemma 9. If T is an automorphism of the Lebesgue space (X, Σ, μ), then for every g ∈ 𝒢₀:
$$h_\mu(g,T^m) \le m\, h_\mu(g,T).$$
Proof. Let 𝒫 ∈ 𝔅 and m ∈ ℕ. We have:
$$h(g,T^m,\mathcal{P}) = \limsup_{k\to\infty} \frac1k H\big(g, \mathcal{P}\vee T^{-m}\mathcal{P}\vee\dots\vee T^{-m(k-1)}\mathcal{P}\big) = \lim_{n\to\infty}\sup_{k\ge n} \frac1k H\big(g, \mathcal{P}\vee T^{-m}\mathcal{P}\vee\dots\vee T^{-m(k-1)}\mathcal{P}\big).$$
Fix k ∈ ℕ. Then, $\mathcal{P}^n = \bigvee_{i=0}^{n-1} T^{-i}\mathcal{P}$ is a refinement of 𝒫 ∨ T^{−m}𝒫 ∨ ... ∨ T^{−m(k−1)}𝒫 for n = km, ..., km + m − 1.
Therefore:
$$\frac1k H\big(g, \mathcal{P}\vee T^{-m}\mathcal{P}\vee\dots\vee T^{-m(k-1)}\mathcal{P}\big) \le \frac1k H(g,\mathcal{P}^n) = \frac{n}{k}\cdot\frac1n H(g,\mathcal{P}^n) \le \frac{km+m-1}{k}\cdot\frac1n H(g,\mathcal{P}^n) \le m\left(1+\frac1k\right)\frac1n H(g,\mathcal{P}^n)$$
for n = km, ..., km + m − 1. Let us introduce the following notation:
$$c_k := \frac1k H\big(g, \mathcal{P}\vee T^{-m}\mathcal{P}\vee\dots\vee T^{-m(k-1)}\mathcal{P}\big), \qquad a_n := \frac1n H(g,\mathcal{P}^n).$$
Then, we can rewrite Equation (12) in the form:
$$c_k \le m\left(1+\frac1k\right) a_n$$
for n = km, ..., km + m − 1. Taking the supremum in Equation (13), we obtain:
$$\sup_{l\ge k} c_l \le m\left(1+\frac1k\right) \sup_{l\ge k}\ \max_{n=lm,\dots,lm+m-1} a_n \le m\left(1+\frac1k\right) \sup_{n\ge km} a_n.$$
Therefore:
$$\limsup_{k\to\infty} c_k \le m \limsup_{n\to\infty} a_n,$$
and this is equivalent to the statement:
$$h(g,T^m,\mathcal{P}) \le m\, h(g,T,\mathcal{P}).$$
Taking the supremum over all finite partitions, we obtain the assertion.
The next lemma is a weaker version of Theorem 5.
Lemma 10. If the automorphism T^m of a Lebesgue space (X, Σ, μ) is ergodic for every m ∈ ℕ, then for every g ∈ 𝒢₀ such that Cs(g) < ∞, the following holds:
$$h_\mu(g,T) = \mathrm{Cs}(g)\, h_\mu(T).$$
If $g \in \mathcal{G}_0^0$, then h_μ(g, T) = 0. If g ∈ 𝒢₀ is such that Cs(g) = ∞ and T has positive Kolmogorov–Sinai entropy, then h_μ(g, T) = ∞.
Proof. The case of $g \in \mathcal{G}_0^0$ follows from Corollary 2. Suppose, on the contrary, that there exists $g \in \mathcal{G}_0\setminus\mathcal{G}_0^0$ fulfilling the assumptions of the lemma for which:
$$\mathrm{Cs}(g)\, h_\mu(T) - h_\mu(g,T) > 0.$$
Then, applying Lemma 9 to the transformation T^m and using the equality h_μ(T^m) = m h_μ(T) (see [2], Theorem 4.3.16), we obtain:
$$\mathrm{Cs}(g)\, h_\mu(T^m) - h_\mu(g,T^m) \ge m\big(\mathrm{Cs}(g)\, h_\mu(T) - h_\mu(g,T)\big) \to \infty \quad\text{as } m\to\infty.$$
Therefore, for sufficiently large m, there exists an integer M ≥ 2 for which:
$$h_\mu(g,T^m) \le m\, h_\mu(g,T) < \mathrm{Cs}(g)\log M \le m\,\mathrm{Cs}(g)\, h_\mu(T) = \mathrm{Cs}(g)\, h_\mu(T^m).$$
Proposition 2 applied to the transformation T^m guarantees that for every g ∈ 𝒢₀ with positive (finite) Cs(g), we have:
$$h_\mu(g,T^m) \ge \mathrm{Cs}(g)\log M.$$
Comparing Equations (14) and (15), we obtain a contradiction, which implies that:
$$h_\mu(g,T) = \mathrm{Cs}(g)\, h_\mu(T).$$
If Cs(g) = ∞ and h_μ(T) > 0, then for a given integer M ≥ 2, there exists an integer m > 0, such that:
$$h_\mu(T^m) = m\, h_\mu(T) > \log M,$$
and by Proposition 2 and Lemma 9:
$$h_\mu(g,T) \ge \frac1m\, h_\mu(g,T^m) = \infty,$$
which completes the proof.
Proof of Theorem 5. If h_μ(T) = 0, the statement is true, because for any 𝒫 ∈ 𝔅, we have:
$$0 \le h(g,\mathcal{P}) \le \mathrm{Cs}(g)\, h(\mathcal{P}) = 0.$$
Suppose that 0 < h_μ(T) < ∞. The automorphism T is ergodic; thus, by Theorem 4, it has a factor T′ which is a Bernoulli automorphism with h_μ(T) = h_{μ′}(T′). Every Bernoulli automorphism is mixing, so (T′)^m is ergodic for each m. Applying Lemma 10, we obtain:
$$h_{\mu'}(g,T') = \mathrm{Cs}(g)\, h_{\mu'}(T') = \mathrm{Cs}(g)\, h_\mu(T).$$
Since T′ is a factor of T, Proposition 1 implies that:
$$\mathrm{Cs}(g)\, h_\mu(T) = \mathrm{Cs}(g)\, h_{\mu'}(T') = h_{\mu'}(g,T') \le h_\mu(g,T) \le \mathrm{Cs}(g)\, h_\mu(T),$$
which completes the proof in the case of finite h_μ(T). If h_μ(T) = ∞, then Proposition 2 implies that:
$$h_\mu(g,T) \ge \mathrm{Cs}(g)\log M$$
for every integer M ≥ 2, and hence h_μ(g, T) = ∞.

2.2.3. Generator Theorem Counterpart

In the case of $g \in \mathcal{G}_0^\infty$, there is no counterpart of the Kolmogorov–Sinai generator theorem, which states that the measure-theoretic entropy of the transformation T is realized on every generator of the σ-algebra Σ. Let us consider Sturmian shifts, i.e., shifts that model translations of the circle 𝕋 = [0, 1). Let β ∈ [0, 1), and consider the translation ϕ_β : [0, 1) → [0, 1) defined by ϕ_β(x) = x + β (mod 1). Let 𝒫 denote the partition of [0, 1) given by 𝒫 = {[0, β), [β, 1)}. Then, we associate a binary sequence to each t ∈ [0, 1) according to its itinerary relative to 𝒫; that is, we associate to t ∈ [0, 1) the bi-infinite sequence x defined by x_i = 0 if $\phi_\beta^i(t) \in [0,\beta)$ and x_i = 1 if $\phi_\beta^i(t) \in [\beta,1)$. The set of such sequences is not necessarily closed, but it is shift-invariant, and so its closure is a shift space called the Sturmian shift. If β is irrational, then the Sturmian shift is minimal, i.e., there is no proper subshift. Moreover, for a minimal Sturmian shift, the number of n-blocks that occur in the shift space is exactly n + 1. Therefore, for the zero-coordinate partition 𝒫_𝒜, which is a finite generator of the σ-algebra Σ, and for any function g ∈ 𝒢₀, we have:
$$H(g,\mathcal{P}_\mathcal{A}^n) = \sum_{A\in\mathcal{P}_\mathcal{A}^n} g(\mu_S(A)) \le \varphi\!\left(\frac{1}{n+1}\right),$$
where μ_S is the unique invariant measure of the Sturmian shift. Thus:
$$h(g,\mathcal{P}_\mathcal{A}) \le \limsup_{n\to\infty} \frac{n+1}{n}\, g\!\left(\frac{1}{n+1}\right) = 0.$$
On the other hand, since the Sturmian shift is strictly ergodic (thus aperiodic), Theorem 3 implies that for any $g \in \mathcal{G}_0^\infty$:
$$h_\mu(g,T) = \infty.$$
Therefore, we have a finite generator for which the supremum is not attained.
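The complexity claim behind this example is easy to test; the following sketch (our own experiment, with β taken to be the golden-mean rotation number as an assumed choice) codes orbits of the rotation relative to {[0, β), [β, 1)} and counts the distinct n-blocks, recovering n + 1:

```python
from math import sqrt

beta = (sqrt(5) - 1) / 2  # an irrational rotation number

def block(t, n):
    """First n itinerary symbols of t: x_i = 0 iff (t + i*beta) mod 1 lies in [0, beta)."""
    return tuple(0 if ((t + i * beta) % 1.0) < beta else 1 for i in range(n))

for n in range(1, 9):
    blocks = {block(j / 5000.0, n) for j in range(5000)}  # many starting points t
    print(n, len(blocks))  # prints n + 1 distinct n-blocks, as claimed
```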

3. Discussion

In this note, we discussed the generalization of the dynamical and Kolmogorov–Sinai entropy based on replacing the Shannon entropy function η in the definition of the dynamical entropy with a concave function vanishing at the origin. The connections between dynamical entropies and g-entropies show that the crucial property of η for the applications of the Shannon entropy in dynamical systems is the behavior of η in the neighborhood of zero. Additionally, the main result of the paper, obtained for the generalization of the KS entropy, states that, usually, there is a linear dependence between the obtained invariant and the KS entropy. It also implies (due to the fact that it is a continuous function of the entropy) that the measure-theoretic g-entropy is finitely observable (see, e.g., [23]). Moreover, considering functions that behave in the neighborhood of zero differently than the Shannon function usually trivializes the theory. On the other hand, we showed that if the limit of g/η at zero is infinite, then for every positive number γ, there exists a partition for which the g-entropy is greater than or equal to γ. Thus, the measure-theoretic g-entropy is infinite. The example from Section 2.2.3, based on this result, implies that there is no counterpart of the generator theorem (e.g., the Tsallis entropies for α < 1, which we obtain considering $g(x) = \frac{x - x^\alpha}{\alpha - 1}$ with α ∈ (0, 1), fit into this scheme). However, the concept of g-entropies is still of use; e.g., considering the rate of convergence of partial g-entropies may give additional information about the system [30]. The most promising direction in this context seems to be considering functions for which C(g) = 0 and g′(0) = ∞.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Downarowicz, T. Entropy in Dynamical Systems; Cambridge University Press: New York, NY, USA, 2011.
  2. Katok, A.; Hasselblatt, B. Introduction to the Modern Theory of Dynamical Systems; Cambridge University Press: New York, NY, USA, 1997.
  3. Misiurewicz, M. A short proof of the variational principle for a $\mathbb{Z}_+^N$ action on a compact space. Astérisque 1976, 40, 147–157.
  4. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
  5. Rényi, A. On measures of entropy and information. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; University of California Press: Berkeley, CA, USA, 1961; Volume 1, pp. 547–561.
  6. Arimoto, S. Information-theoretical considerations on estimation problems. Inf. Control 1971, 19, 181–194.
  7. Wu, Y.; Verdú, S. Rényi information dimension: Fundamental limits of almost lossless analog compression. IEEE Trans. Inf. Theory 2010, 56, 3721–3748.
  8. Csiszár, I. Axiomatic characterization of information measures. Entropy 2008, 10, 261–273.
  9. De Paly, T. On entropy-like invariants for dynamical systems. Z. Anal. Anwend. 1982, 1, 69–79.
  10. De Paly, T. On a class of generalized K-entropies and Bernoulli shifts. Z. Anal. Anwend. 1982, 1, 87–96.
  11. Grassberger, P.; Procaccia, I. Estimation of the Kolmogorov entropy from a chaotic signal. Phys. Rev. A 1983, 28, 2591–2593.
  12. Takens, F.; Verbitski, E. Generalized entropies: Rényi and correlation integral approach. Nonlinearity 1998, 11, 771–782.
  13. Takens, F.; Verbitski, E. Rényi entropies of aperiodic dynamical systems. Isr. J. Math. 2002, 127, 279–302.
  14. Liu, Q.; Cao, K.-F.; Peng, S.-L. A generalized Kolmogorov–Sinai-like entropy under Markov shifts in symbolic dynamics. Physica A 2009, 388, 4333–4344.
  15. Mesón, A.M.; Vericat, F. Invariant of dynamical systems: A generalized entropy. J. Math. Phys. 1996, 37, 4480–4483.
  16. Mesón, A.M.; Vericat, F. On the Kolmogorov-like generalization of Tsallis entropy, correlation entropies and multifractal analysis. J. Math. Phys. 2002, 43, 904–918.
  17. Havrda, J.; Charvát, F. Quantification method of classification processes. Concept of structural α-entropy. Kybernetika 1967, 3, 30–35.
  18. Abe, S. Tsallis entropy: How unique? Contin. Mech. Thermodyn. 2004, 16, 237–244.
  19. Furuichi, S. Information theoretical properties of Tsallis entropies. J. Math. Phys. 2006, 47, 023302.
  20. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
  21. Tsallis, C. Entropic nonextensivity: A possible measure of complexity. Chaos Solitons Fractals 2002, 13, 371–391.
  22. Tsallis, C.; Plastino, A.R.; Zheng, W.-M. Power-law sensitivity to initial conditions—New entropic representation. Chaos Solitons Fractals 1997, 8, 885–891.
  23. Ornstein, D.S.; Weiss, B. Entropy is the only finitely observable invariant. J. Mod. Dyn. 2007, 1, 93–105.
  24. Weiss, B. (The Hebrew University of Jerusalem). Personal communication, 2013.
  25. Weiss, B. Single Orbit Dynamics; American Mathematical Society: Providence, RI, USA, 2000.
  26. Blume, F. The Rate of Entropy Convergence. Ph.D. Thesis, University of North Carolina, Chapel Hill, NC, USA, 1995.
  27. Blume, F. Possible rates of entropy convergence. Ergod. Theory Dyn. Syst. 1997, 17, 45–70.
  28. Galatolo, S. Global and local complexity in weakly chaotic dynamical systems. Discret. Contin. Dyn. Syst. 2003, 9, 1607–1624.
  29. Ferenczi, S.; Park, K.K. Entropy dimensions and a class of constructive examples. Discret. Contin. Dyn. Syst. 2007, 17, 133–141.
  30. Falniowski, F. Possible g-entropy convergence rates. arXiv 2013, arXiv:1309.6246.
  31. Rosenbaum, R.A. Sub-additive functions. Duke Math. J. 1950, 17, 227–247.
  32. Heinemann, S.-M.; Schmitt, O. Rokhlin's Lemma for non-invertible maps. Dyn. Syst. Appl. 2001, 10, 201–214.
  33. Sinai, Y.G. Weak isomorphism of transformations with an invariant measure. Sov. Math. 1962, 3, 1725–1729.
Figure 1. Set I (with dashes) in a tower of height five.
