Open Access

*Entropy* **2019**, *21*(1), 7; doi:10.3390/e21010007

Article

On the Relation between Topological Entropy and Restoration Entropy

Fakultät für Informatik und Mathematik, Universität Passau, Innstraße 33, 94032 Passau, Germany

Received: 28 November 2018 / Accepted: 20 December 2018 / Published: 23 December 2018

## Abstract

In the context of state estimation under communication constraints, several notions of dynamical entropy play a fundamental role, among them: topological entropy and restoration entropy. In this paper, we present a theorem that demonstrates that for most dynamical systems, restoration entropy strictly exceeds topological entropy. This implies that robust estimation policies in general require a higher rate of data transmission than non-robust ones. The proof of our theorem is quite short, but uses sophisticated tools from the theory of smooth dynamical systems.

**Keywords:** topological entropy; restoration entropy; state estimation under communication constraints; SRB measures; Anosov diffeomorphisms

## 1. Introduction

This paper compares two notions of entropy that are relevant in the context of state estimation under communication constraints. Since the work of Savkin [1], it has been well known that the topological entropy of a dynamical system characterizes the smallest rate of information above which an estimator, receiving its state information at the corresponding rate, is able to generate a state estimate of arbitrary precision. Topological entropy is a quantity that has been studied in the mathematical field of dynamical systems since the 1960s and has turned out to be a useful tool for solving many theoretical and practical problems, cf. the survey [2] and the monograph [3]. A big drawback of this notion in the context of state estimation is that topological entropy is highly discontinuous with respect to the dynamical system under consideration in any reasonable topology, cf. [4]. As a consequence, estimation policies based on topological entropy are likely to suffer from a lack of robustness. Additionally, topological entropy is very hard to compute or estimate. There are only few numerical approaches that potentially work for multi-dimensional systems, cf. [5,6,7,8], and each of them has its drawbacks and restrictions.

A possible remedy for these problems is provided in the works [9,10] of Matveev and Pogromsky. One of the main ideas in these papers is to replace the topological entropy as a figure-of-merit for the necessary rate of data transmission with a possibly larger quantity, named restoration entropy, which describes the smallest data rate above which a more robust form of state estimation can be achieved (called regular observability in [9,10]).

Looking at one of the simplest types of nonlinear dynamical systems, namely Anosov diffeomorphisms, the main result of the paper at hand demonstrates that for most dynamical systems, we have to expect that the restoration entropy strictly exceeds the topological entropy. That is, to achieve a state estimation objective that is more robust with respect to perturbations, one has to pay the price of using a channel that allows for a larger rate of data transmission. More specifically, our result shows that the equality of topological and restoration entropy implies a great amount of uniformity in the dynamical system under consideration, which can be expressed in terms of the unstable Lyapunov exponents at each point, whose sum essentially has to be a constant. Such a property can easily be destroyed by a small perturbation, showing that arbitrarily close to the given system, we find systems whose restoration entropy strictly exceeds their topological entropy. Since Anosov diffeomorphisms are considered as a paradigmatic class of chaotic dynamical systems, this property can be expected for a much larger class of systems.

To prove our result, we need a number of high-level concepts and results from the theory of topological, measurable, and smooth dynamical systems. This includes the concepts of topological and metric pressure, Lyapunov exponents, SRB measures, and uniform hyperbolicity.

For further reading on the topic of state estimation under communication constraints, we refer the reader to [1,9,10,11,12,13,14] and the references given therein.

The structure of this paper is as follows: In Section 2, we collect all necessary definitions and results from the theory of dynamical systems. Section 3 introduces the concept of restoration entropy and explains its operational meaning in the context of estimation under communication constraints. In Section 4, we prove our main result and provide some interpretation and an example. Finally, Section 5 contains some concluding remarks.

## 2. Tools from Dynamical Systems

Notation: By $\mathbb{Z}$, we denote the set of all integers, by $\mathbb{N}$ the set of positive integers, and ${\mathbb{N}}_{0}:=\{0\}\cup \mathbb{N}$. All logarithms are taken to the base two. If M is a Riemannian manifold, we write $|\cdot |$ for the induced norm on any tangent space ${T}_{x}M$, $x\in M$. The notation $\parallel \cdot \parallel $ is reserved for operator norms. We write $\mathrm{cl}A$ and $\mathrm{int}A$ for the closure and the interior of a set A in a metric space, respectively. Finally, the notation $A\subset B$ (A subset of B) does not exclude the case $A=B$.

In this paper, we use several sophisticated results from the theory of dynamical systems, in particular from smooth ergodic theory. In the following, we try to explain these results without going too much into technical details.

Let $T:X\to X$ be a continuous map on a compact metric space $(X,d)$. Via its iterates:

$${T}^{0}:={\mathrm{id}}_{X},\phantom{\rule{1.em}{0ex}}{T}^{n+1}:=T\circ {T}^{n},\phantom{\rule{1.em}{0ex}}n=0,1,2,\dots $$

the map T generates a discrete-time dynamical system on X with associated orbits ${\{{T}^{n}(x)\}}_{n\in {\mathbb{N}}_{0}}$, $x\in X$. We call the pair $(X,T)$ a topological dynamical system (TDS).

#### 2.1. Entropy and Pressure

Let $(X,T)$ be a TDS. The topological entropy ${h}_{\mathrm{top}}(T)$ measures the total exponential complexity of the orbit structure of $(X,T)$ in terms of the maximal number of finite-time orbits that are distinguishable with respect to a finite resolution. One among several possible formal definitions is as follows. For $n\in \mathbb{N}$ and $\epsilon >0$, a set $E\subset X$ is called $(n,\epsilon ,T)$-separated if for any $x,y\in E$ with $x\ne y$, we have:

$$d({T}^{i}(x),{T}^{i}(y))\ge \epsilon \phantom{\rule{1.em}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}\mathrm{at}\phantom{\rule{4pt}{0ex}}\mathrm{least}\phantom{\rule{4pt}{0ex}}\mathrm{one}\phantom{\rule{4pt}{0ex}}0\le i<n.$$

That is, we can distinguish any two points in E at a resolution of $\epsilon $ by looking at their length-n finite-time orbits. By the compactness of X, there is a uniform upper bound on the cardinality of any (n, ε, T)-separated set. Writing r(n, ε, T) for the maximal possible cardinality,

$$\begin{array}{c}\hfill {h}_{\mathrm{top}}(T):=\underset{\epsilon \downarrow 0}{lim}\underset{n\to \infty}{lim\; sup}\frac{1}{n}logr(n,\epsilon ,T).\end{array}$$
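
As a minimal numerical sketch of this definition (the doubling map and the dyadic sample points are our illustrative assumptions, not part of the paper): for $T(x)=2x \phantom{\rule{4pt}{0ex}} (\mathrm{mod}\phantom{\rule{4pt}{0ex}}1)$ on the circle, the exponential growth rate of distinct length-n symbolic orbits with respect to the partition $\{[0,1/2),[1/2,1)\}$ recovers ${h}_{\mathrm{top}}(T)=1$ (recall that logarithms are taken to the base two).

```python
import math

# Doubling map T(x) = 2x mod 1 (an illustrative example, not from the
# paper).  Counting distinct length-n itineraries w.r.t. the partition
# {[0,1/2), [1/2,1)} recovers h_top(T) = 1 in base-2 logarithms.

def doubling(x):
    return (2.0 * x) % 1.0

def itinerary(x, n):
    """Length-n symbolic orbit: bit i records whether T^i(x) >= 1/2."""
    bits = []
    for _ in range(n):
        bits.append(1 if x >= 0.5 else 0)
        x = doubling(x)
    return tuple(bits)

def entropy_estimate(n):
    # Dyadic sample points (j + 0.5)/2**n realize each itinerary once,
    # so the count is exactly 2**n and the estimate is exactly 1.
    points = [(j + 0.5) / 2.0**n for j in range(2**n)]
    distinct = {itinerary(x, n) for x in points}
    return math.log2(len(distinct)) / n

print(entropy_estimate(10))  # → 1.0
```

All arithmetic here is exact in binary floating point, so the estimate is exactly 1.0 rather than an approximation.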

This definition is due to Bowen [15] and (independently) Dinaburg [16]. However, it should be noted that the first definition of topological entropy, given by Adler, Konheim, and McAndrew [17], was in terms of open covers of X and was modeled in strict analogy to the metric (= measure-theoretic) entropy defined earlier by Kolmogorov and Sinai [18,19].

To define metric entropy, one additionally needs a Borel probability measure $\mu $ on X that is preserved by T in the sense that $\mu (A)=\mu ({T}^{-1}(A))$ for every Borel set A. By the theorem of Krylov–Bogolyubov, every continuous map on a compact space admits at least one such measure, cf. [20], Theorem 4.1.1. We write ${\mathcal{M}}_{T}$ for the set of all T-invariant Borel probability measures. For any finite measurable partition $\mathcal{P}$ of X, we define the entropy of T on $\mathcal{P}$ by:

$${h}_{\mu}(T;\mathcal{P}):=\underset{n\to \infty}{lim}\frac{1}{n}{H}_{\mu}\left(\underset{i=0}{\overset{n-1}{\bigvee}}{T}^{-i}\mathcal{P}\right).$$

Here, ⋁ denotes the join operation. That is, ${\bigvee}_{i=0}^{n-1}{T}^{-i}\mathcal{P}$ is the partition of X whose elements are all intersections of the form ${P}_{0}\cap {T}^{-1}({P}_{1})\cap \dots \cap {T}^{-n+1}({P}_{n-1})$ with ${P}_{i}\in \mathcal{P}$. Moreover, ${H}_{\mu}(\cdot )$ denotes the Shannon entropy of a partition, i.e., ${H}_{\mu}(\mathcal{Q})=-{\sum}_{Q\in \mathcal{Q}}\mu (Q)log\mu (Q)$ for any finite partition $\mathcal{Q}$. The metric entropy of T w.r.t. $\mu $ is then defined by:

$${h}_{\mu}(T):=\underset{\mathcal{P}}{sup}{h}_{\mu}(T;\mathcal{P}),$$

the supremum taken over all finite measurable partitions $\mathcal{P}$ of X (replacing measurable partitions with open covers and Shannon entropy with the logarithm of the cardinality of a minimal finite subcover, the same construction yields the topological entropy as defined in [17]).

To understand the meaning of ${h}_{\mu}$, note that ${H}_{\mu}(\mathcal{Q})$ is the average amount of uncertainty as one attempts to predict the partition element to which a randomly-chosen point belongs. Hence, ${h}_{\mu}(T)$ measures the average uncertainty per iteration in guessing the partition element of a typical length-n orbit.
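
These definitions can be made concrete in a small worked sketch (the doubling map and the partition are illustrative assumptions): for $T(x)=2x\phantom{\rule{4pt}{0ex}}(\mathrm{mod}\phantom{\rule{4pt}{0ex}}1)$ with Lebesgue measure $\mu $ and $\mathcal{P}=\{[0,1/2),[1/2,1)\}$, the join ${\bigvee}_{i=0}^{n-1}{T}^{-i}\mathcal{P}$ consists of the ${2}^{n}$ dyadic intervals of length ${2}^{-n}$, so its Shannon entropy is exactly n and ${h}_{\mu}(T;\mathcal{P})=1$.

```python
import math

def shannon_entropy(probs):
    """H_mu(Q) = -sum over cells of mu(Q) * log2(mu(Q))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Doubling map, Lebesgue measure, P = {[0,1/2), [1/2,1)} (our choices).
# The join of T^{-i}P for i = 0..n-1 is the partition into the 2**n
# dyadic intervals [k/2**n, (k+1)/2**n), each of measure 2**(-n).
def join_entropy(n):
    return shannon_entropy([2.0**(-n)] * 2**n)

# H grows linearly: H(join_n) = n, hence h_mu(T; P) = lim H/n = 1.
for n in (2, 8, 16):
    print(n, join_entropy(n) / n)  # → 1.0 each time
```

This matches the value ${h}_{\mathrm{top}}(T)=1$ obtained from separated sets, as the variational principle predicts.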

The variational principle for entropy states that:

$${h}_{\mathrm{top}}(T)=\underset{\mu \in {\mathcal{M}}_{T}}{sup}{h}_{\mu}(T),$$

where the supremum is not necessarily a maximum. This variational principle can be regarded as a quantitative version of the theorem of Krylov–Bogolyubov.

Another concept (of which entropy is a special case) used in dynamical systems and inspired by ideas in thermodynamics is pressure. In this context, any continuous function $\varphi :X\to \mathbb{R}$, also called a potential or an observable, gives rise to the metric pressure of T w.r.t. $\varphi $ for a given $\mu \in {\mathcal{M}}_{T}$, defined as:

$${P}_{\mu}(T,\varphi ):={h}_{\mu}(T)+\int \varphi \mathrm{d}\mu .$$

To define an associated notion of topological pressure, put ${S}_{n}\varphi (x):={\sum}_{i=0}^{n-1}\varphi ({T}^{i}(x))$ and:

$$R(n,\epsilon ,\varphi ;T):=sup\left\{\sum _{x\in E}{2}^{{S}_{n}\varphi (x)}:E\subset X\text{}\mathrm{is}\text{}(n,\epsilon ,T)-\mathrm{separated}\right\}.$$

Then, the topological pressure of T w.r.t. $\varphi $ is given by:

$${P}_{\mathrm{top}}(T,\varphi ):=\underset{\epsilon \downarrow 0}{lim}\underset{n\to \infty}{lim\; sup}\frac{1}{n}logR(n,\epsilon ,\varphi ;T).$$
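
A quick numerical sanity check of this definition (the doubling map and the locally constant potential are assumptions chosen for illustration): for $T(x)=2x\phantom{\rule{4pt}{0ex}}(\mathrm{mod}\phantom{\rule{4pt}{0ex}}1)$ and a potential equal to a on $[0,1/2)$ and b on $[1/2,1)$, summing ${2}^{{S}_{n}\varphi}$ over one representative per length-n itinerary cell gives ${({2}^{a}+{2}^{b})}^{n}$, so the pressure is ${log}_{2}({2}^{a}+{2}^{b})$; for $a=b=0$, this reduces to ${h}_{\mathrm{top}}(T)=1$.

```python
import math

# Pressure of the doubling map for a potential that is constant on the
# cells of {[0,1/2), [1/2,1)}: phi = a on the left half, b on the right.
def doubling(x):
    return (2.0 * x) % 1.0

def birkhoff_sum(x, n, a, b):
    """S_n phi(x) = sum of phi along the first n iterates of x."""
    s = 0.0
    for _ in range(n):
        s += a if x < 0.5 else b
        x = doubling(x)
    return s

def pressure_estimate(a, b, n):
    # One representative per itinerary cell; the sum of 2**(S_n phi)
    # over these points equals (2**a + 2**b)**n.
    points = [(j + 0.5) / 2.0**n for j in range(2**n)]
    total = sum(2.0 ** birkhoff_sum(x, n, a, b) for x in points)
    return math.log2(total) / n

print(pressure_estimate(0.0, 0.0, 10))  # → 1.0 (pressure of phi = 0 is h_top)
print(pressure_estimate(1.0, 0.0, 10))  # ≈ log2(3) ≈ 1.585
```

Note how the potential re-weights the count of distinguishable orbits: entropy is recovered as the special case $\varphi \equiv 0$.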

#### 2.2. Subadditive Cocycles

Let $T:X\to X$ be a map. A subadditive cocycle over $(X,T)$ is a sequence ${({f}_{n})}_{n\in {\mathbb{N}}_{0}}$ of functions ${f}_{n}:X\to \mathbb{R}$ satisfying:

$${f}_{n+m}(x)\le {f}_{n}(x)+{f}_{m}({T}^{n}(x)),\phantom{\rule{1.em}{0ex}}\forall n,m\in {\mathbb{N}}_{0},\phantom{\rule{4pt}{0ex}}x\in X.$$

If equality holds in this relation, we call ${({f}_{n})}_{n\in {\mathbb{N}}_{0}}$ an additive cocycle over $(X,T)$.

If X has the structure of a probability space with a $\sigma $-algebra $\mathcal{F}$ and a probability measure $\mu $ on $\mathcal{F}$, T is measurable, and $\mu $ is T-invariant, we speak of a measurable subadditive cocycle provided that all ${f}_{n}$ are measurable. In the context of a TDS $(X,T)$, we speak of a continuous subadditive cocycle if all ${f}_{n}$ are continuous.

The most fundamental result about subadditive cocycles is Kingman’s subadditive ergodic theorem, cf. [3], Theorem 2.1.4:

**Theorem**

**1.**

Let $T:X\to X$ be a measure-preserving map on a probability space $(X,\mathcal{F},\mu )$ and ${({f}_{n})}_{n\in {\mathbb{N}}_{0}}$ a measurable subadditive cocycle over $(X,T)$ such that each ${f}_{n}$ is integrable. Then, the limit:

$$\underset{n\to \infty}{lim}\frac{1}{n}{f}_{n}(x)$$

exists for μ-almost every $x\in X$. If, additionally, μ is ergodic, then the limit is constant with:

$$\underset{n\to \infty}{lim}\frac{1}{n}{f}_{n}(x)=\underset{n\to \infty}{lim}\frac{1}{n}\int {f}_{n}\mathrm{d}\mu .$$

Observe that the limit on the right-hand side of (3) always exists by Fekete’s subadditivity lemma (see [3], Fact 2.1.1), because the sequence ${a}_{n}:=\int {f}_{n}\mathrm{d}\mu $ is subadditive, i.e., ${a}_{n+m}\le {a}_{n}+{a}_{m}$. Kingman’s theorem can, in particular, be applied if $(X,T)$ is a TDS, $\mu \in {\mathcal{M}}_{T}$, and ${({f}_{n})}_{n\in {\mathbb{N}}_{0}}$ is a continuous subadditive cocycle.
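
The subadditivity mechanism is easy to watch in a small computation (the matrix below is an illustrative assumption): for ${a}_{n}:={log}_{2}\parallel {A}^{n}\parallel $ with a fixed square matrix A, submultiplicativity of the operator norm makes $({a}_{n})$ subadditive, and Fekete's lemma forces ${a}_{n}/n$ to converge to ${inf}_{n}{a}_{n}/n$, which by Gelfand's formula equals ${log}_{2}$ of the spectral radius.

```python
import math

def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def op_norm(A):
    """Spectral norm of a 2x2 matrix = sqrt(top eigenvalue of A^T A)."""
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    M = mat_mul(At, A)
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return math.sqrt((tr + math.sqrt(max(tr * tr - 4.0 * det, 0.0))) / 2.0)

# a_n = log2 ||A^n|| is subadditive since ||A^(n+m)|| <= ||A^n|| ||A^m||.
# Fekete: a_n / n decreases along n -> 2n to inf a_n / n, which equals
# log2(spectral radius of A) = 1 for this Jordan-type block.
A = [[2.0, 1.0], [0.0, 2.0]]
a = {}
P = [[1.0, 0.0], [0.0, 1.0]]
for n in range(1, 33):
    P = mat_mul(P, A)
    a[n] = math.log2(op_norm(P))

assert a[12] <= a[5] + a[7] + 1e-9   # subadditivity, sampled
for n in (4, 8, 16, 32):
    print(n, a[n] / n)               # slowly decreasing toward 1
```

The slow $O((logn)/n)$ convergence here is typical: the infimum characterization in Fekete's lemma, not fast convergence, is what the proofs above rely on.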

Now, we consider again a TDS $(X,T)$ and a continuous subadditive cocycle ${({f}_{n})}_{n\in {\mathbb{N}}_{0}}$ over $(X,T)$. We define the extremal growth rate of $({f}_{n})$ by:

$$\beta [({f}_{n})]:=\underset{x\in X}{sup}\underset{n\to \infty}{lim\; sup}\frac{1}{n}{f}_{n}(x).$$

The following result is well known and can be found in [22], Theorem A.3, for instance:

**Lemma**

**1.**

Let ${({f}_{n})}_{n\in {\mathbb{N}}_{0}}$ be a continuous subadditive cocycle over a TDS $(X,T)$. Then:

$$\beta [({f}_{n})]=\underset{\mu \in {\mathcal{M}}_{T}}{sup}\underset{n>0}{inf}\frac{1}{n}\int {f}_{n}\mathrm{d}\mu =\underset{n>0}{inf}\underset{x\in X}{sup}\frac{1}{n}{f}_{n}(x)=\underset{n>0}{inf}\underset{\mu \in {\mathcal{M}}_{T}}{sup}\frac{1}{n}\int {f}_{n}\mathrm{d}\mu .$$

Here, all infima can be replaced with limits. Moreover, every supremum is attained.

#### 2.3. Lyapunov Exponents, SRB Measures, and Pesin’s Formula

To describe the long-term dynamical behavior of smooth systems, the notion of Lyapunov exponents is crucial. Given a ${C}^{1}$-diffeomorphism $T:M\to M$ on a compact Riemannian manifold M, the Lyapunov exponent at $x\in M$ in direction $0\ne v\in {T}_{x}M$ is the number:

$$\lambda (x,v):=\underset{n\to \infty}{lim}\frac{1}{n}log|\mathrm{D}{T}^{n}(x)v|,$$

provided that the limit exists. Lyapunov exponents measure how fast nearby solutions diverge from each other. The most general result on their existence and their properties is the multiplicative ergodic theorem (MET), also known as Oseledets theorem, cf. [23,24]. We need the following version of the theorem (which is not the most general):

**Theorem**

**2.**

Let $T:M\to M$ be a ${C}^{1}$-diffeomorphism of a compact Riemannian manifold M and $\mu \in {\mathcal{M}}_{T}$. Then, there exists a Borel set $\Omega \subset M$ with $\mu (\Omega )=1$ and $T(\Omega )=\Omega $ such that the following holds: for every $x\in \Omega $, there exist numbers ${\lambda}_{1}(x)>\dots >{\lambda}_{r(x)}(x)$, and the tangent space at x splits into linear subspaces as:

$${T}_{x}M={E}_{1}(x)\oplus \cdots \oplus {E}_{r(x)}(x)$$

such that the following properties hold:

- (i)
- For every $0\ne v\in {E}_{i}(x)$, we have:$$\underset{n\to \pm \infty}{lim}\frac{1}{n}log|\mathrm{D}{T}^{n}(x)v|={\lambda}_{i}(x).$$
- (ii)
- The functions $r(\cdot )$, $dim{E}_{i}(\cdot )$, and ${\lambda}_{i}(\cdot )$ are measurable and constant along orbits. Moreover,$$\mathrm{D}T(x){E}_{i}(x)={E}_{i}(T(x)),\phantom{\rule{1.em}{0ex}}i=1,\dots ,r(x).$$
- (iii)
- For every $x\in \Omega $, the limit:$${\Lambda}_{x}:=\underset{n\to \infty}{lim}{(\mathrm{D}{T}^{n}{(x)}^{*}\mathrm{D}{T}^{n}(x))}^{1/2n}$$exists, and the logarithms of the eigenvalues of ${\Lambda}_{x}$ are precisely the Lyapunov exponents ${\lambda}_{i}(x)$.
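
The limit in (i) can be observed numerically. A hedged sketch (the matrix $A=[[2,1],[1,1]]$ is our illustrative choice; for the linear system it induces, $\mathrm{D}{T}^{n}(x)={A}^{n}$ at every x): iterating a generic vector with renormalization recovers the top Lyapunov exponent ${log}_{2}((3+\sqrt{5})/2)$.

```python
import math

# Top Lyapunov exponent of v -> A v for the (assumed) matrix
# A = [[2,1],[1,1]].  For any v outside the contracting eigendirection,
# (1/n) log2 |A^n v| converges to log2 of the largest eigenvalue.
A = [[2.0, 1.0], [1.0, 1.0]]

def apply_mat(v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def top_lyapunov(v, n):
    """(1/n) log2 |A^n v| computed by iterate-and-renormalize."""
    total = 0.0
    for _ in range(n):
        v = apply_mat(v)
        norm = math.hypot(v[0], v[1])
        total += math.log2(norm)      # accumulate the growth factor
        v = [v[0] / norm, v[1] / norm]
    return total / n

est = top_lyapunov([1.0, 0.0], 200)
exact = math.log2((3.0 + math.sqrt(5.0)) / 2.0)
print(abs(est - exact) < 1e-2)  # → True
```

The renormalization step is what makes long products numerically stable; without it, $|{A}^{n}v|$ would overflow long before the limit is visible.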

Typically, a given map has a huge number of associated invariant measures. To obtain a good description of the global dynamical behavior, one has to select specific invariant measures that determine the behavior of the system on a large set of initial states. In this context, the notion of an SRB measure (Sinai–Ruelle–Bowen measure) comes into play. An SRB measure is a measure with at least one positive Lyapunov exponent almost everywhere, having absolutely continuous conditional measures on unstable manifolds. We are not going to give a technical definition of the latter property. Instead, we state the following celebrated theorem due to Ledrappier and Young [25], which characterizes this property in terms of metric entropy. Here, we use the short-cut:

$${\lambda}^{+}(x):=\sum _{i=1}^{r(x)}max\{0,{\lambda}_{i}(x)dim{E}_{i}(x)\}$$

for the sum of all positive Lyapunov exponents at a point $x\in \Omega $, counted with multiplicities.

**Theorem**

**3.**

Let $T:M\to M$ be a ${C}^{2}$-diffeomorphism of a compact manifold M and $\mu \in {\mathcal{M}}_{T}$. Then, the formula:

$${h}_{\mu}(T)=\int {\lambda}^{+}\mathrm{d}\mu $$

holds if and only if μ has absolutely continuous conditional measures on unstable manifolds.

Additionally, note that for any ${C}^{1}$-diffeomorphism T and any $\mu \in {\mathcal{M}}_{T}$, the inequality:

$${h}_{\mu}(T)\le \int {\lambda}^{+}\mathrm{d}\mu $$

holds, which is known as Ruelle’s inequality or Ruelle–Margulis inequality [26] (Formula (4) was first proven by Pesin for smooth invariant measures).

#### 2.4. Anosov Diffeomorphisms

One of the simplest classes of smooth dynamical systems with complicated dynamical behavior is the class of Anosov diffeomorphisms. In this paper, we use these systems for two reasons. First, they have positive topological entropy, and second, they are very well understood and there are many tools available to describe their properties.

Let M be a compact Riemannian manifold. A ${C}^{1}$-diffeomorphism $T:M\to M$ is called an Anosov diffeomorphism if there exists a splitting:

$${T}_{x}M={E}_{x}^{u}\oplus {E}_{x}^{s},\phantom{\rule{1.em}{0ex}}\forall x\in M$$

into linear subspaces such that the following conditions are satisfied:

- (A1)
- $\mathrm{D}T(x){E}_{x}^{u}={E}_{T(x)}^{u}$ and $\mathrm{D}T(x){E}_{x}^{s}={E}_{T(x)}^{s}$ for all $x\in M$.
- (A2)
- There are constants $c\ge 1$ and $\lambda \in (0,1)$, so that, for all $x\in M$ and $n\in {\mathbb{N}}_{0}$,$$\begin{array}{cc}\hfill |\mathrm{D}{T}^{n}(x)v|& \le c{\lambda}^{n}|v|\phantom{\rule{1.em}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}\mathrm{all}\phantom{\rule{4pt}{0ex}}v\in {E}_{x}^{s},\hfill \\ \hfill |\mathrm{D}{T}^{-n}(x)v|& \le c{\lambda}^{n}|v|\phantom{\rule{1.em}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}\mathrm{all}\phantom{\rule{4pt}{0ex}}v\in {E}_{x}^{u}.\hfill \end{array}$$

From (A1) and (A2), it automatically follows that ${E}_{x}^{s}$ and ${E}_{x}^{u}$ vary continuously with x, cf. [20], Proposition 6.4.4. The existence of a splitting as above is also known as uniform hyperbolicity.

The simplest examples of Anosov diffeomorphisms are hyperbolic linear torus automorphisms, i.e., maps on the n-dimensional torus ${\mathbb{T}}^{n}={\mathbb{R}}^{n}/{\mathbb{Z}}^{n}$ of the form:

$${T}_{A}(x)=Ax\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}(\mathrm{mod}\phantom{\rule{4pt}{0ex}}{\mathbb{Z}}^{n}),\phantom{\rule{1.em}{0ex}}{T}_{A}:{\mathbb{T}}^{n}\to {\mathbb{T}}^{n},$$

where $A\in {\mathbb{Z}}^{n\times n}$ is an integer matrix satisfying $|detA|=1$ and $|\lambda |\ne 1$ for all eigenvalues $\lambda $ of A. Observe that the assumption $|detA|=1$ guarantees that ${T}_{A}$ is invertible with inverse ${T}_{A}^{-1}={T}_{{A}^{-1}}$ (because ${A}^{-1}$ also has integer entries) and at the same time implies that ${T}_{A}$ is area-preserving. That is, the normalized Lebesgue measure on ${\mathbb{T}}^{n}$ is an element of ${\mathcal{M}}_{{T}_{A}}$. The assumption on the eigenvalues of A together with the fact that the derivative $\mathrm{D}{T}_{A}(x)$ at any point $x\in {\mathbb{T}}^{n}$ can be identified with A itself implies the Anosov Properties (A1) and (A2).
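
A concrete instance (the matrix, Arnold's "cat map", is our choice for illustration): the sketch below checks $|detA|=1$ and hyperbolicity for $A=[[2,1],[1,1]]$ and computes $\sum _{|\lambda |>1}{log}_{2}|\lambda |$, which for a hyperbolic linear torus automorphism equals its topological entropy. Since $\mathrm{D}{T}_{A}\equiv A$, the sum of unstable Lyapunov exponents is the same constant at every point, so this uniform linear case is one where topological and restoration entropy coincide.

```python
import math

# Checks for A = [[2,1],[1,1]] (Arnold's cat map; an assumed example):
# |det A| = 1 makes T_A invertible and area-preserving, and having no
# eigenvalue on the unit circle yields properties (A1)-(A2).
A = [[2, 1], [1, 1]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
tr = A[0][0] + A[1][1]
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0   # (3 ± sqrt 5)/2

assert abs(det) == 1
assert abs(abs(lam1) - 1.0) > 0.1 and abs(abs(lam2) - 1.0) > 0.1

# For hyperbolic linear torus automorphisms, the topological entropy is
# the sum of log2|lam| over the eigenvalues with |lam| > 1.
h_top = sum(math.log2(abs(l)) for l in (lam1, lam2) if abs(l) > 1.0)
print(h_top)  # ≈ 1.3885 = log2((3 + sqrt 5)/2)
```

The single unstable eigenvalue $(3+\sqrt{5})/2$ carries all the expansion; its base-2 logarithm is both the entropy and the constant value of ${\lambda}^{+}$.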

It is well known that Anosov diffeomorphisms are structurally stable, i.e., any sufficiently small ${C}^{1}$-perturbation ${T}_{\epsilon}$ of an Anosov diffeomorphism $T:M\to M$ is also an Anosov diffeomorphism, which is topologically conjugate to T, see [20], Proposition 6.4.6 and Corollary 18.2.2. That is, there exists a homeomorphism $h:M\to M$, so that:

$${h}^{-1}\circ {T}_{\epsilon}\circ h=T.$$

If we assume that T is an arbitrary Anosov diffeomorphism of the torus, the existence of a unique entropy-maximizing measure $\mu $ follows. That is, $\mu $ is the unique element of ${\mathcal{M}}_{T}$ satisfying:

$${h}_{\mathrm{top}}\left(T\right)={h}_{\mu}\left(T\right).$$

This follows from a combination of results that can be found in Katok and Hasselblatt [20], namely Theorem 20.3.7, Proposition 18.6.5, Theorem 18.3.9, and Corollary 6.4.10. The entropy-maximizing measure $\mu $ is also known as the Bowen measure.

In this context, also the notion of topological mixing is important. An Anosov diffeomorphism (or simply a continuous map) $T:M\to M$ is called topologically mixing if for any two nonempty open sets $A,B\subset M$, there exists an integer N such that ${T}^{n}(A)\cap B\ne \varnothing $ for all $n\ge N$. In particular, all Anosov diffeomorphisms on ${\mathbb{T}}^{n}$ are topologically mixing ([20], Proposition 18.6.5).

## 3. State Estimation and Restoration Entropy

The notion of restoration entropy was introduced in [10] for systems given by ODEs on ${\mathbb{R}}^{n}$. However, it is immediately clear from the definition that restoration entropy can be defined for any continuous map on a compact metric space as follows. Let $T:X\to X$ be a continuous map on a metric space $(X,d)$ and $K\subset X$ a compact set with $T(K)\subset K$. For every $x\in X$, $n\in \mathbb{N}$ and $\epsilon >0$, let $p(n,x,\epsilon )$ denote the smallest number of $\epsilon $-balls needed to cover the image ${T}^{n}({B}_{\epsilon}(x)\cap K)$. If the map is not clear from the context, we also write $p(n,x,\epsilon ;T)$. Then:

$${h}_{\mathrm{res}}({T}_{|K}):=\underset{n\to \infty}{lim}\frac{1}{n}\underset{\epsilon \downarrow 0}{lim\; sup}\underset{x\in X}{sup}logp(n,x,\epsilon ).$$

The existence of the limit in n follows from the subadditivity of the sequence ${a}_{n}:={lim\; sup}_{\epsilon \downarrow 0}{sup}_{x\in X}logp(n,x,\epsilon )$ (using Fekete’s lemma). If we assume that T is a ${C}^{1}$-diffeomorphism of a compact Riemannian manifold, the numbers $p(n,x,\epsilon )$ can be estimated in terms of the unstable singular values of $\mathrm{D}{T}^{n}(x)$. This is related to the simple fact that the image of a ball under a linear map (in our case, the local linear approximation $\mathrm{D}{T}^{n}(x)$ to ${T}^{n}$) is an ellipsoid with semi-axes of lengths proportional to the singular values. This leads to the following result, proven in [10], Theorem 11, for continuous-time systems. The proof carries over to discrete-time systems on Riemannian manifolds without any problem.

**Theorem**

**4.**

Let $T:M\to M$ be a ${C}^{1}$-diffeomorphism of a d-dimensional Riemannian manifold M and $K\subset M$ a forward-invariant compact set of T with $\mathrm{cl}K=\mathrm{cl}(\mathrm{int}K)$. Then:

$${h}_{\mathrm{res}}({T}_{|K})=\underset{n\to \infty}{lim}\frac{1}{n}\underset{x\in K}{max}\sum _{i=1}^{d}max\{0,log{\alpha}_{i}(n,x)\},$$

where ${\alpha}_{1}(n,x)\ge \dots \ge {\alpha}_{d}(n,x)$ denote the singular values of $\mathrm{D}{T}^{n}(x)$.

For the analysis of ${h}_{\mathrm{res}}$, based on the above formula, the following observations are crucial:

- We have$$\sum _{i=1}^{d}max\{0,log{\alpha}_{i}(n,x)\}=log\prod _{i=1}^{d}max\{1,{\alpha}_{i}(n,x)\}=log\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel ,$$where ${\alpha}_{1}(n,x)\ge \dots \ge {\alpha}_{d}(n,x)$ are the singular values of $\mathrm{D}{T}^{n}(x)$ and ${(\cdot)}^{\wedge}$ denotes the operator induced on the full exterior algebra of the tangent space.
- The sequence ${f}_{n}(x):=log\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel $, ${f}_{n}:M\to \mathbb{R}$, is a continuous subadditive cocycle over $(K,{T}_{|K})$, since:$$\begin{array}{cc}\hfill {f}_{n+m}(x)& =log\parallel \mathrm{D}{T}^{n+m}{(x)}^{\wedge}\parallel =log\parallel \mathrm{D}{T}^{m}{({T}^{n}(x))}^{\wedge}\mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel \hfill \\ & \le log\left(\parallel \mathrm{D}{T}^{m}{({T}^{n}(x))}^{\wedge}\parallel \xb7\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel \right)\hfill \\ & =log\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel +log\parallel \mathrm{D}{T}^{m}{({T}^{n}(x))}^{\wedge}\parallel ={f}_{n}(x)+{f}_{m}({T}^{n}(x)).\hfill \end{array}$$Alternatively, this follows from Horn’s inequality for singular values; see [27], Chapter I, Proposition 2.3.1.
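
The identity in the first bullet is easy to verify numerically (the sample singular-value lists below are arbitrary choices): for ${\alpha}_{1}\ge \dots \ge {\alpha}_{d}$, the sum $\sum _{i}max\{0,{log}_{2}{\alpha}_{i}\}$ coincides with ${log}_{2}max\{1,{max}_{k}\prod _{i\le k}{\alpha}_{i}\}$, the quantity that controls $\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel $.

```python
import math

# Verify, on arbitrary decreasing lists of "singular values", the identity
#   sum_i max(0, log2 a_i) = log2 max(1, max_k prod_{i<=k} a_i)
# used in the computation of h_res (the sample lists are our choices).
def sum_of_positive_logs(alphas):
    return sum(max(0.0, math.log2(a)) for a in alphas)

def log_max_partial_product(alphas):
    best, prod = 1.0, 1.0
    for a in alphas:       # alphas must be sorted in decreasing order
        prod *= a
        best = max(best, prod)
    return math.log2(best)

for alphas in ([4.0, 2.0, 0.5, 0.25], [0.9, 0.5], [8.0, 1.0, 0.125]):
    assert math.isclose(sum_of_positive_logs(alphas),
                        log_max_partial_product(alphas), abs_tol=1e-12)
    print(alphas, sum_of_positive_logs(alphas))
```

The maximizing k is exactly the number of singular values exceeding 1, which is why the two expressions agree on sorted input.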

In the following, we explain the operational meaning of the quantity ${h}_{\mathrm{res}}({T}_{|K})$.

Consider the dynamical system given by:

$${x}_{t+1}=T({x}_{t}),\phantom{\rule{1.em}{0ex}}{x}_{0}\in K,\phantom{\rule{4pt}{0ex}}t=0,1,2,\dots $$

Suppose that a sensor, fully observing the state ${x}_{t}$, sends its data to an encoder. At the sampling times $t=0,1,2,\dots $, the encoder sends a signal ${e}_{t}$ through a noise-free discrete channel to a decoder (without transmission delay). The decoder acts as an observer of the system, trying to reconstruct the state from the received data. We write ${\widehat{x}}_{t}$ for the estimate generated by the observer at time t. Moreover, we assume that we start with an initial estimate ${\widehat{x}}_{0}\in K$ of a specified accuracy.

With $\mathcal{M}$ denoting the coding alphabet, the encoder and the observer are described by mappings:

$${e}_{t}={\mathcal{C}}_{t}({x}_{0},{x}_{1},\dots ,{x}_{t};{\widehat{x}}_{0},\delta ),\phantom{\rule{1.em}{0ex}}{\mathcal{C}}_{t}:{K}^{t+1}\times K\times {\mathbb{R}}_{>0}\to \mathcal{M},$$

and:

$${\widehat{x}}_{t}={\mathcal{E}}_{t}({e}_{0},{e}_{1},\dots ,{e}_{t};{\widehat{x}}_{0},\delta ),\phantom{\rule{1.em}{0ex}}{\mathcal{E}}_{t}:{\mathcal{M}}^{t+1}\times K\times {\mathbb{R}}_{>0}\to X.$$

The argument $\delta $ corresponds to the initial error at time zero, i.e., $d({x}_{0},{\widehat{x}}_{0})\le \delta $. In particular, we assume that both the encoder and the observer are given the data ${\widehat{x}}_{0}$ and $\delta $.

We assume that the channel can transmit at least ${b}_{-}(r)$ and at most ${b}_{+}(r)$ bits in any time interval of length r. The capacity of the channel is then defined by:

$$C:=\underset{r\to \infty}{lim}\frac{{b}_{-}(r)}{r}=\underset{r\to \infty}{lim}\frac{{b}_{+}(r)}{r},$$

assuming that these limits exist and coincide.

We consider the following two observation objectives:

- (O1)
- The observer observes the system with exactness $\epsilon >0$ if there exists $\delta =\delta (\epsilon ,K)$, so that ${x}_{0},{\widehat{x}}_{0}\in K$ with $d({x}_{0},{\widehat{x}}_{0})\le \delta $ implies:$$\underset{t\ge 0}{sup}d({x}_{t},{\widehat{x}}_{t})\le \epsilon .$$
- (O2)
- The observer regularly observes the system if there exist $G,{\delta}_{*}>0$, so that for all $\delta \in (0,{\delta}_{*})$ and ${x}_{0},{\widehat{x}}_{0}\in K$ with $d({x}_{0},{\widehat{x}}_{0})\le \delta $,$$\underset{t\ge 0}{sup}d({x}_{t},{\widehat{x}}_{t})\le G\delta .$$

We say that the system is:

- observable on K over a channel of capacity C if for every $\epsilon >0$, an observer exists that observes the system with exactness $\epsilon $ over this channel;
- regularly observable on K over a channel of capacity C if there exists an observer that regularly observes the system over this channel.
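
The operational content of (O1)/(O2) can be sketched with a toy scalar system (everything here — the map $x\mapsto ax$, the rate R, the interval-subdivision scheme — is an illustrative assumption, not the paper's construction): the encoder sends R bits per step naming the subinterval of the current uncertainty interval that contains the state, and the error stays proportional to $\delta $ exactly when ${2}^{R}$ exceeds the expansion factor, i.e., when the rate exceeds ${log}_{2}a$, which is both the topological and the restoration entropy of this uniform linear system.

```python
# Toy state-estimation loop for x_{t+1} = a * x_t (an illustrative
# stand-in, not the paper's construction).  Per step, the encoder sends
# R bits naming which of 2**R equal subintervals of the current
# uncertainty interval contains x_t; the decoder's estimate is the
# midpoint.  The uncertainty width scales by a / 2**R per step, so the
# error remains O(delta) exactly when R > log2(a).

def simulate(a, R, x0, delta, steps):
    """Returns (per-step estimation errors, final uncertainty width)."""
    lo, hi = x0 - delta, x0 + delta       # initial uncertainty interval
    x = x0
    errors = []
    for _ in range(steps):
        x, lo, hi = a * x, a * lo, a * hi  # propagate state and interval
        width = (hi - lo) / 2 ** R
        k = max(0, min(int((x - lo) / width), 2 ** R - 1))  # R-bit symbol
        lo, hi = lo + k * width, lo + (k + 1) * width
        errors.append(abs(x - (lo + hi) / 2.0))
    return errors, hi - lo

errs_good, w_good = simulate(a=1.8, R=1, x0=0.3, delta=0.01, steps=40)
errs_bad, w_bad = simulate(a=3.0, R=1, x0=0.3, delta=0.01, steps=40)
# 2**R = 2 > 1.8: the uncertainty contracts; 2 < 3.0: it blows up.
print(errs_good[-1] < 1e-3, w_bad > 1e4)  # → True True
```

The contracting case realizes regular observability in the sense of (O2): the error stays bounded by a fixed multiple of the initial error $\delta $, uniformly in time.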

**Theorem**

**5.**

The smallest channel capacity ${C}_{0}$, so that System (6) is:

- observable on K over every channel of capacity $C>{C}_{0}$ is given by:$${C}_{0}={h}_{\mathrm{top}}({T}_{|K}).$$
- regularly observable on K over every channel of capacity $C>{C}_{0}$ is given by:$${C}_{0}={h}_{\mathrm{res}}({T}_{|K}).$$

Since regular observability implies observability, it is clear that:

$${h}_{\mathrm{top}}({T}_{|K})\le {h}_{\mathrm{res}}({T}_{|K}).$$

As already pointed out in the Introduction, the quantity ${h}_{\mathrm{top}}(\cdot )$ is highly discontinuous w.r.t. the dynamical system. Moreover, the corresponding data-rate theorem has the disadvantage that the final error $\epsilon $ may be much larger than the initial error $\delta $, which cannot happen in the case of regular observability. From Theorem 4 in combination with Lemma 1, one sees that in the smooth case, ${h}_{\mathrm{res}}$ is an infimum over functions that are continuous w.r.t. T in the ${C}^{1}$-topology. This implies at least upper semicontinuity. Hence, we can expect that coding and estimation strategies based on restoration entropy enjoy better properties than those based on topological entropy.

## 4. Results

Before we present our main result, we prove two lemmas, which are of independent interest.

**Lemma**

**2.**

Let $T:M\to M$ be a ${C}^{2}$-diffeomorphism on a compact Riemannian manifold M. Then, for any $\mu \in {\mathcal{M}}_{T}$, we have:

$$\int {\lambda}^{+}\mathrm{d}\mu =\underset{n\to \infty}{lim}\frac{1}{n}\int log\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel \mathrm{d}\mu (x).$$

**Proof.**

Let $d=dimM$. First observe that we have the identity:

$$\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel =max\left\{1,\underset{1\le k\le d}{max}\prod _{i=1}^{k}{\alpha}_{i}(n,x)\right\},$$

where ${\alpha}_{1}(n,x)\ge \dots \ge {\alpha}_{d}(n,x)$ are the singular values of $\mathrm{D}{T}^{n}(x)$, see [27], Chapter I, Proposition 7.4.2. Hence,

$$log\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel =max\left\{0,\underset{1\le k\le d}{max}\sum _{i=1}^{k}log{\alpha}_{i}(n,x)\right\}.$$

The maximum over k is clearly attained when k is the maximal number such that ${\alpha}_{i}(n,x)>1$ for all $1\le i\le k$. Hence,

$$log\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel =max\left\{0,\sum _{{\alpha}_{i}(n,x)>1}log{\alpha}_{i}(n,x)\right\}.$$

The numbers ${\alpha}_{i}(n,x)$ are the eigenvalues of ${A}_{n}(x):={(\mathrm{D}{T}^{n}{(x)}^{*}\mathrm{D}{T}^{n}(x))}^{1/2}$. Theorem 2 states that ${A}_{n}{(x)}^{1/n}\to {\Lambda}_{x}$ for $\mu $-almost every $x\in M$ and the logarithms of the eigenvalues of ${\Lambda}_{x}$ are the Lyapunov exponents at x. Since eigenvalues depend continuously on the matrix, it follows that:

$$\underset{n\to \infty}{lim}\frac{1}{n}log\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel ={\lambda}^{+}(x)\phantom{\rule{1.em}{0ex}}\mu -\mathrm{a}.\mathrm{e}.$$

and consequently

$$\int {\lambda}^{+}\mathrm{d}\mu =\int \underset{n\to \infty}{lim}\frac{1}{n}log\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel \mathrm{d}\mu (x).$$

Applying the theorem of dominated convergence then yields the result. □

**Lemma**

**3.**

Let $T:M\to M$ be a ${C}^{2}$-diffeomorphism on a compact Riemannian manifold M such that ${h}_{\mathrm{top}}(T)={h}_{\mathrm{res}}(T)$. Then, if T has an entropy-maximizing measure ${\mu}_{*}$, it follows that:

$${h}_{{\mu}_{*}}(T)=\int {\lambda}^{+}\mathrm{d}{\mu}_{*}.$$

**Proof.**

Assume to the contrary that ${h}_{{\mu}_{*}}(T)<\int {\lambda}^{+}\mathrm{d}{\mu}_{*}$ (using Ruelle’s inequality (5)). Then, Lemma 2 implies:

$${h}_{\mathrm{top}}(T)={h}_{{\mu}_{*}}(T)<\int {\lambda}^{+}\mathrm{d}{\mu}_{*}=\underset{n\to \infty}{lim}\frac{1}{n}\int log\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel \mathrm{d}{\mu}_{*}(x).$$

According to Theorem 4 and the subsequent observation, an application of Lemma 1 yields:

$${h}_{\mathrm{res}}(T)=\underset{\mu \in {\mathcal{M}}_{T}}{sup}\underset{n\to \infty}{lim}\frac{1}{n}\int log\parallel \mathrm{D}{T}^{n}{(x)}^{\wedge}\parallel \mathrm{d}\mu (x).$$

Combining these observations gives ${h}_{\mathrm{top}}(T)<{h}_{\mathrm{res}}(T)$, in contradiction to our assumption. □

Now, we are in a position to state our main result.

**Theorem**

**6.**

Let $T:M\to M$ be a topologically mixing ${C}^{2}$-Anosov diffeomorphism on a compact Riemannian manifold M such that ${h}_{\mathrm{top}}(T)={h}_{\mathrm{res}}(T)$. Then, the unique entropy-maximizing measure ${\mu}_{*}\in {\mathcal{M}}_{T}$ is an SRB measure. Moreover, the function:

$$\mu \mapsto \int {\lambda}^{+}\mathrm{d}\mu ,\phantom{\rule{1.em}{0ex}}{\mathcal{M}}_{T}\to {\mathbb{R}}_{\ge 0}$$

is constant.

**Proof.**

First note that the existence and uniqueness of an entropy-maximizing measure ${\mu}_{*}$ follows from [20], Theorem 20.3.7, Theorem 18.3.9, and Corollary 6.4.10. Here, the assumption that T is topologically mixing is crucial. By the preceding lemma combined with Theorem 3, we already know that ${\mu}_{*}$ has absolutely continuous conditional measures on unstable manifolds. Since an Anosov diffeomorphism has positive Lyapunov exponents everywhere (where they exist), attained in all directions of the unstable subspace ${E}_{x}^{u}$, it follows that ${\mu}_{*}$ is an SRB measure.

Now, let $\mu \in {\mathcal{M}}_{T}$ be chosen arbitrarily. Due to the invariance of $\mu $, we have:

$$\begin{array}{cc}\hfill {\int log|det\mathrm{D}T(x)}_{|{E}_{x}^{u}}|\mathrm{d}\mu (x)& =\int \frac{1}{n}\sum _{i=0}^{n-1}log|det\mathrm{D}T{({T}^{i}(x))}_{|{E}_{{T}^{i}(x)}^{u}}|\mathrm{d}\mu (x)\hfill \\ & =\int \frac{1}{n}log|det\mathrm{D}{T}^{n}{(x)}_{|{E}_{x}^{u}}|\mathrm{d}\mu (x)\hfill \end{array}$$

for every $n\in \mathbb{N}$, implying:

$$\begin{array}{cc}\hfill \int {\lambda}^{+}\mathrm{d}\mu & =\int \underset{n\to \infty}{lim}\frac{1}{n}log|det\mathrm{D}{T}^{n}{(x)}_{|{E}_{x}^{u}}|\mathrm{d}\mu (x)\hfill \\ & =\underset{n\to \infty}{lim}\int \frac{1}{n}log|det\mathrm{D}{T}^{n}{(x)}_{|{E}_{x}^{u}}|\mathrm{d}\mu (x)=\int log|det\mathrm{D}T{(x)}_{|{E}_{x}^{u}}|\mathrm{d}\mu (x),\hfill \end{array}$$

where we use Kingman’s subadditive ergodic theorem, applied to the continuous additive cocycle ${f}_{n}(x):=log|det\mathrm{D}{T}^{n}{(x)}_{|{E}_{x}^{u}}|$ ($n\in {\mathbb{N}}_{0}$), and the theorem of dominated convergence. Observe that the function ${J}^{u}T(x):=log|det\mathrm{D}T{(x)}_{|{E}_{x}^{u}}|$ is continuous (using the fact that $x\mapsto {E}_{x}^{u}$ is continuous). Hence, we can consider the affine function:

$${\alpha}_{\mu}:\mathbb{R}\to \mathbb{R},\phantom{\rule{1.em}{0ex}}{\alpha}_{\mu}(t):={P}_{\mu}(T,-t{J}^{u}T)={h}_{\mu}(T)-t\int {\lambda}^{+}\mathrm{d}\mu .$$

The variational principle (2) for pressure tells us that:

$${P}_{\mathrm{top}}(-t{J}^{u}T)=\underset{\mu \in {\mathcal{M}}_{T}}{sup}{\alpha}_{\mu}(t),\phantom{\rule{1.em}{0ex}}\forall t\in \mathbb{R}.$$

Hence, $t\mapsto {P}_{\mathrm{top}}(-t{J}^{u}T)$, as the supremum over affine functions, is a convex function.
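Spelled out, this convexity is immediate from the supremum representation: each ${\alpha}_{\mu}$ is affine, so for $t=s{t}_{1}+(1-s){t}_{2}$ with $s\in [0,1]$:

$${P}_{\mathrm{top}}(-t{J}^{u}T)=\underset{\mu \in {\mathcal{M}}_{T}}{sup}\left(s{\alpha}_{\mu}({t}_{1})+(1-s){\alpha}_{\mu}({t}_{2})\right)\le s{P}_{\mathrm{top}}(-{t}_{1}{J}^{u}T)+(1-s){P}_{\mathrm{top}}(-{t}_{2}{J}^{u}T).$$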

Using that ${\mu}_{*}$ is the entropy-maximizing measure and Theorem 3, respectively, we obtain:

$${\alpha}_{{\mu}_{*}}(0)={h}_{\mathrm{top}}(T)\phantom{\rule{1.em}{0ex}}\mathrm{and}\phantom{\rule{1.em}{0ex}}{\alpha}_{{\mu}_{*}}(1)=0.$$

On the other hand, we also have:

$${P}_{\mathrm{top}}(-0\cdot {\lambda}^{+})={h}_{\mathrm{top}}(T)\phantom{\rule{1.em}{0ex}}\mathrm{and}\phantom{\rule{1.em}{0ex}}{P}_{\mathrm{top}}(-1\cdot {J}^{u}T)=0.$$

The second identity here follows from the fact that ${P}_{\mathrm{top}}(-1\cdot {J}^{u}T)={sup}_{\mu \in {\mathcal{M}}_{T}}({h}_{\mu}(T)-\int {\lambda}^{+}\mathrm{d}\mu )$ and ${h}_{\mu}(T)\le \int {\lambda}^{+}\mathrm{d}\mu $ by Ruelle’s inequality (5). Hence, ${P}_{\mathrm{top}}(-1\cdot {J}^{u}T)={h}_{{\mu}_{*}}(T)-\int {\lambda}^{+}\mathrm{d}{\mu}_{*}=0$.

By convexity of $t\mapsto {P}_{\mathrm{top}}(-t{J}^{u}T)$ and (7), this implies:

$${P}_{\mathrm{top}}(-t{J}^{u}T)={\alpha}_{{\mu}_{*}}(t),\phantom{\rule{1.em}{0ex}}\forall t\in \mathbb{R}.$$

From (7), it now follows that all of the maps ${\alpha}_{\mu}$ have the same slope, i.e., $\int {\lambda}^{+}\mathrm{d}\mu $ is independent of $\mu $. □

The above theorem shows that the equality ${h}_{\mathrm{top}}(T)={h}_{\mathrm{res}}(T)$ is a very restrictive condition. This can be seen as follows. Any topologically mixing Anosov diffeomorphism has an abundance of periodic points; indeed, the set of periodic points is dense in M; see [20], Corollary 6.4.19. Given a periodic point $p\in M$ of period ${n}_{p}\in \mathbb{N}$, we can consider the invariant measure ${\mu}_{p}$ given by:

$${\mu}_{p}:=\frac{1}{{n}_{p}}\sum _{i=0}^{{n}_{p}-1}{\delta}_{{T}^{i}(p)}$$

with ${\delta}_{(\cdot)}$ being the Dirac measure at a point. The above theorem implies that, under ${h}_{\mathrm{top}}(T)={h}_{\mathrm{res}}(T)$, the number:

$$\gamma (p):=\int {\lambda}^{+}\mathrm{d}{\mu}_{p}=\frac{1}{{n}_{p}}log\left|det\left(\mathrm{D}{T}^{{n}_{p}}{(p)}_{|{E}_{p}^{u}}:{E}_{p}^{u}\to {E}_{p}^{u}\right)\right|$$

is independent of the periodic point p chosen. On the other hand, we know that every sufficiently small ${C}^{2}$-perturbation of T yields another ${C}^{2}$-Anosov diffeomorphism, topologically conjugate to T, hence also topologically mixing. If this perturbation is performed only in a small vicinity of a fixed periodic orbit, it can easily change the number $\gamma (p)$, while not changing it for most of the other periodic orbits. As a consequence, the perturbed diffeomorphism ${T}_{\epsilon}$ cannot satisfy ${h}_{\mathrm{top}}({T}_{\epsilon})={h}_{\mathrm{res}}({T}_{\epsilon})$.
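For a hyperbolic linear automorphism such as Arnold’s Cat Map (see Example 1 below), the constancy of $\gamma (p)$ can be checked numerically: $\mathrm{D}{T}^{{n}_{p}}(p)={A}^{{n}_{p}}$ for every periodic point, so $\gamma (p)=log|{\lambda}_{1}|$ for every orbit. The following sketch is an illustration added here, not part of the original argument; it assumes numpy and uses the fact that rational points are periodic under the cat map:

```python
import numpy as np

# Cat map matrix; D(T_A)(x) = A for every x, so D(T_A)^{n_p}(p) = A^{n_p}.
A = np.array([[2, 1], [1, 1]])
lam1 = (3 + np.sqrt(5)) / 2  # unstable eigenvalue of A

def period(p, q):
    """Period of the rational point p/q under the cat map (arithmetic mod q)."""
    x = p
    for n in range(1, 10**6):
        x = tuple(int(v) for v in A.dot(x) % q)
        if x == p:
            return n
    raise RuntimeError("period not found")

for p, q in [((1, 0), 5), ((1, 2), 7), ((3, 1), 11)]:
    n_p = period(p, q)
    # gamma(p) = (1/n_p) * log |det(D T^{n_p}(p) restricted to E^u)|; here E^u
    # is one-dimensional and the restriction of A^{n_p} expands by lam1^{n_p}.
    An = np.linalg.matrix_power(A.astype(float), n_p)
    gamma = np.log(max(abs(np.linalg.eigvals(An)))) / n_p
    assert abs(gamma - np.log(lam1)) < 1e-9  # gamma(p) = log(lam1) for all p
```

The perturbation argument above destroys exactly this uniformity: changing the expansion rate at one periodic orbit makes $\gamma (p)$ non-constant.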

The following corollary gives another characterization of Anosov diffeomorphisms with ${h}_{\mathrm{top}}={h}_{\mathrm{res}}$ in a two-dimensional case.

**Corollary 1.**

Consider a ${C}^{2}$ area-preserving Anosov diffeomorphism $T:{\mathbb{T}}^{2}\to {\mathbb{T}}^{2}$ of the two-torus. Then, the equality ${h}_{\mathrm{top}}\left(T\right)={h}_{\mathrm{res}}\left(T\right)$ is equivalent to the existence of a hyperbolic linear automorphism ${T}_{A}:{\mathbb{T}}^{2}\to {\mathbb{T}}^{2}$ and a ${C}^{1}$-diffeomorphism $h:{\mathbb{T}}^{2}\to {\mathbb{T}}^{2}$ such that ${h}^{-1}\circ T\circ h={T}_{A}$.

**Proof.**

It follows immediately from Theorem 6 in combination with [20], Corollary 20.4.4, that the identity ${h}_{\mathrm{top}}\left(T\right)={h}_{\mathrm{res}}\left(T\right)$ implies the existence of a ${C}^{1}$-conjugacy, as asserted. The other direction is easy to see, using the definition of restoration entropy. If ${h}^{-1}\circ T\circ h={T}_{A}$, then also ${h}^{-1}\circ {T}^{n}\circ h={T}_{A}^{n}$ for all $n\in \mathbb{N}$. We use that a ${C}^{1}$-map on a compact manifold has a global Lipschitz constant. Let $L:=\mathrm{Lip}\left(h\right)$ and ${L}^{\prime}:=\mathrm{Lip}\left({h}^{-1}\right)$ be Lipschitz constants of h and ${h}^{-1}$, respectively. Then:

$${T}^{n}\left({B}_{\epsilon}\left(x\right)\right)=h\circ {T}_{A}^{n}\circ {h}^{-1}\left({B}_{\epsilon}\left(x\right)\right).$$

Observe that ${h}^{-1}\left({B}_{\epsilon}\left(x\right)\right)\subset {B}_{{L}^{\prime}\epsilon}\left({h}^{-1}\left(x\right)\right)$. Let $N\left(l\right)$ denote the minimal number of $\epsilon $-balls needed to cover an $l\epsilon $-ball in ${\mathbb{T}}^{2}$ for any $l>0$. Then, the minimal number of $\epsilon $-balls needed to cover ${T}_{A}^{n}{h}^{-1}\left({B}_{\epsilon}\left(x\right)\right)$ is bounded from above by $N\left({L}^{\prime}\right){sup}_{z\in {\mathbb{T}}^{2}}p(n,z,\epsilon ;{T}_{A})$. Applying h maps each of these $\epsilon $-balls into an $L\epsilon $-ball, which in turn can be covered by $N\left(L\right)$ $\epsilon $-balls. This implies:

$$p(n,x,\epsilon ;T)\le N\left(L\right)N\left({L}^{\prime}\right)\underset{z\in {\mathbb{T}}^{2}}{sup}p(n,z,\epsilon ;{T}_{A}).$$

Hence,

$$\underset{x\in {\mathbb{T}}^{2}}{sup}\frac{1}{n}logp(n,x,\epsilon ;T)\le \frac{1}{n}logN\left(L\right)N\left({L}^{\prime}\right)+\underset{x\in {\mathbb{T}}^{2}}{sup}\frac{1}{n}logp(n,x,\epsilon ;{T}_{A}).$$

Taking the lim sup for $\epsilon \downarrow 0$ and subsequently the limit for $n\to \infty $, we obtain that ${h}_{\mathrm{res}}\left(T\right)\le {h}_{\mathrm{res}}\left({T}_{A}\right)$. The other inequality can be proven analogously, so:

$${h}_{\mathrm{res}}\left(T\right)={h}_{\mathrm{res}}\left({T}_{A}\right).$$

Since T and ${T}_{A}$ are topologically conjugate (the ${C}^{1}$-diffeomorphism h is a homeomorphism, in particular), they also have the same topological entropy:

$${h}_{\mathrm{top}}\left(T\right)={h}_{\mathrm{top}}\left({T}_{A}\right).$$

To complete the proof, it now suffices to show that ${h}_{\mathrm{res}}\left({T}_{A}\right)={h}_{\mathrm{top}}\left({T}_{A}\right)$. We can compute ${h}_{\mathrm{res}}\left({T}_{A}\right)$ using Theorem 4. To this end, observe that A is a hyperbolic matrix. If $|{\lambda}_{1}|>1>|{\lambda}_{2}|$ are its eigenvalues, we obtain:

$$\underset{n\to \infty}{lim}\frac{1}{n}\sum _{i=1}^{2}max\{0,log{\alpha}_{i}(n,x)\}=log\left|{\lambda}_{1}\right|\phantom{\rule{1.em}{0ex}}\forall x\in {\mathbb{T}}^{2},$$

implying ${h}_{\mathrm{res}}\left({T}_{A}\right)=log\left|{\lambda}_{1}\right|$. It is well known that this is also the value of the topological entropy ${h}_{\mathrm{top}}\left({T}_{A}\right)$; see [20], Section 4. This also follows from the combination of the variational principle with Theorem 3. □
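This limit is easy to illustrate numerically for a concrete hyperbolic matrix: the singular values ${\alpha}_{i}(n,x)$ of $\mathrm{D}{T}_{A}^{n}={A}^{n}$ do not depend on x, and the averaged sum of their positive logarithms converges to $log|{\lambda}_{1}|$. A minimal sketch added for illustration (numpy assumed; the cat map matrix of Example 1 serves as the concrete A):

```python
import numpy as np

# Hyperbolic matrix inducing a linear torus automorphism (Arnold's cat map).
A = np.array([[2.0, 1.0], [1.0, 1.0]])
lam1 = max(abs(np.linalg.eigvals(A)))  # |lambda_1| = (3 + sqrt(5)) / 2

# Singular values alpha_i(n, x) of D(T_A)^n = A^n (independent of x for a
# linear map); singular values <= 1 contribute max{0, log alpha_i} = 0.
n = 20
sv = np.linalg.svd(np.linalg.matrix_power(A, n), compute_uv=False)
rate = sum(float(np.log(s)) for s in sv if s > 1.0) / n

assert abs(rate - np.log(lam1)) < 1e-9  # h_res(T_A) = log|lambda_1|
```

Here n = 20 already reproduces the limit to high accuracy, since for a linear map the convergence is exact up to rounding.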

The following example demonstrates how restrictive the condition ${h}_{\mathrm{res}}(T)={h}_{\mathrm{top}}(T)$ is by looking at small perturbations of Arnold’s Cat Map.

**Example 1.**

Arnold’s Cat Map is the hyperbolic linear two-torus automorphism ${T}_{A}:{\mathbb{T}}^{2}\to {\mathbb{T}}^{2}$ induced by the integer matrix:

$$A:=\left(\begin{array}{cc}2& 1\\ 1& 1\end{array}\right)$$

with determinant $detA=1$. Observe that the derivative $\mathrm{D}{T}_{A}(x)$ can be identified with A for each $x\in {\mathbb{T}}^{2}$. Since A is a hyperbolic matrix with eigenvalues:

$${\gamma}_{1}=\frac{3}{2}-\frac{1}{2}\sqrt{5}\phantom{\rule{1.em}{0ex}}and\phantom{\rule{1.em}{0ex}}{\gamma}_{2}=\frac{3}{2}+\frac{1}{2}\sqrt{5}$$

satisfying $|{\gamma}_{2}|>1>|{\gamma}_{1}|$, it follows that ${T}_{A}$ is a ${C}^{\infty}$ area-preserving Anosov diffeomorphism. Hence, Corollary 1 yields:

$${h}_{\mathrm{top}}({T}_{A})={h}_{\mathrm{res}}({T}_{A})=log|{\gamma}_{2}|.$$

Now, we consider a perturbation of the form:

$${T}_{A}^{\epsilon}(x,y):=\left(2x+y+\epsilon sin(2\pi x),x+y\right)\phantom{\rule{4pt}{0ex}}(\mathrm{mod}\phantom{\rule{4pt}{0ex}}{\mathbb{Z}}^{2}),\phantom{\rule{1.em}{0ex}}\epsilon >0,$$

which is well defined as a torus map, since the sine function is $2\pi $-periodic. By the structural stability of Anosov diffeomorphisms, for a sufficiently small $\epsilon $, this map is topologically conjugate to ${T}_{A}$, and hence has the same topological entropy $log|{\gamma}_{2}|$. However, its restoration entropy is strictly greater. This can be seen by looking at the fixed point $(0,0)$ with the associated derivative:

$$D{T}_{A}^{\epsilon}(0,0)=\left(\begin{array}{cc}2+2\pi \epsilon & 1\\ 1& 1\end{array}\right).$$

The eigenvalues of this matrix can be computed as:

$${\lambda}_{\pm}=\frac{3}{2}+\pi \epsilon \pm \frac{1}{2}\sqrt{5+4\pi \epsilon (1+\pi \epsilon )}.$$

Since ${\lambda}_{+}>{\gamma}_{2}$, Theorem 4 yields ${h}_{\mathrm{res}}({T}_{A}^{\epsilon})\ge log|{\lambda}_{+}|>{h}_{\mathrm{top}}({T}_{A}^{\epsilon})$ for $\epsilon >0$ sufficiently small.
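The eigenvalue computation in this example is straightforward to confirm numerically. The sketch below is an added illustration (numpy assumed): it evaluates $D{T}_{A}^{\epsilon}(0,0)$ for several small values of $\epsilon $ and checks that ${\lambda}_{+}$ agrees with the closed-form expression and strictly exceeds ${\gamma}_{2}$:

```python
import numpy as np

gamma2 = (3 + np.sqrt(5)) / 2  # unstable eigenvalue of the unperturbed cat map

for eps in (1e-1, 1e-2, 1e-3):
    # Derivative of the perturbed map T_A^eps at its fixed point (0, 0).
    D = np.array([[2 + 2 * np.pi * eps, 1.0], [1.0, 1.0]])
    lam_plus = max(np.linalg.eigvals(D).real)
    # Closed-form expression for lambda_+ from the example.
    closed = 1.5 + np.pi * eps + 0.5 * np.sqrt(5 + 4 * np.pi * eps * (1 + np.pi * eps))
    assert abs(lam_plus - closed) < 1e-12
    # Strict inequality lambda_+ > gamma_2 forces h_res > h_top for small eps.
    assert lam_plus > gamma2
```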

## 5. Conclusions

In this paper, we compared two notions of entropy for dynamical systems that have an operational meaning in the context of state estimation over digital channels: topological entropy and restoration entropy. Looking at Anosov diffeomorphisms (a paradigmatic class of chaotic dynamical systems), our main result demonstrates that the equality of these two quantities implies a great amount of uniformity in the given system. For area-preserving Anosov diffeomorphisms on the two-torus, this uniformity can be expressed in terms of the existence of a ${C}^{1}$-conjugacy to a linear system. Hence, we can conclude that for most dynamical systems, the strict inequality ${h}_{\mathrm{top}}<{h}_{\mathrm{res}}$ holds. The operational meaning of this inequality is that regular observability, as defined in Section 3, requires a strictly larger channel capacity than observability.

## Funding

This research received no external funding.

## Acknowledgments

The author owes particular thanks to Katrin Gelfert, who provided one of the main ideas in the proof of Theorem 6 during the Mini-Workshop Entropy, Information and Control held at the Mathematisches Forschungsinstitut Oberwolfach from 4–10 March 2018. The author also thanks Alexander Pogromsky for fruitful discussions on restoration entropy.

## Conflicts of Interest

The author declares no conflict of interest.

## References

- Savkin, A.V. Analysis and synthesis of networked control systems: Topological entropy, observability, robustness and optimal control. Autom. J. IFAC
**2006**, 42, 51–62. [Google Scholar] [CrossRef] - Katok, A. Fifty years of entropy in dynamics: 1958–2007. J. Mod. Dyn.
**2007**, 1, 545–596. [Google Scholar] [CrossRef] - Downarowicz, T. Entropy in Dynamical Systems; New Mathematical Monographs Volume 18; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
- Misiurewicz, M. On non-continuity of topological entropy. Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys.
**1971**, 19, 319–320. [Google Scholar] - Chen, Q.; Ott, E.; Hurd, L. Calculating topological entropies of chaotic dynamical systems. Phys. Lett. A
**1991**, 156, 48–52. [Google Scholar] [CrossRef] - D’Alessandro, G.; Grassberger, P.; Isola, S.; Politi, A. On the topology of the Hénon map. J. Phys. A
**1990**, 23, 5285–5294. [Google Scholar] - Froyland, G.; Junge, O.; Ochs, G. Rigorous computation of topological entropy with respect to a finite partition. Phys. D
**2001**, 154, 68–84. [Google Scholar] [CrossRef] - Newhouse, S.; Pignataro, T. On the estimation of topological entropy. J. Stat. Phys.
**1993**, 72, 1331–1351. [Google Scholar] [CrossRef] - Matveev, A.; Pogromsky, A. Observation of nonlinear systems via finite capacity channels: Constructive data rate limits. Autom. J. IFAC
**2016**, 70, 217–229. [Google Scholar] [CrossRef] - Matveev, A.; Pogromsky, A. Observation of nonlinear systems via finite capacity channels. Part II: Restoration entropy and its estimates. Automatica
**2016**, 70, 217–229. [Google Scholar] [CrossRef] - Matveev, A.S.; Savkin, A.V. Estimation and Control over Communication Networks; Birkhäuser Boston: Boston, MA, USA, 2009. [Google Scholar]
- Liberzon, D.; Mitra, S. Entropy and minimal bit rates for state estimation and model detection. IEEE Trans. Autom. Control
**2018**, 63, 3330–3344. [Google Scholar] [CrossRef] - Matveev, A.S. State estimation via limited capacity noisy communication channels. Math. Control Signals Syst.
**2008**, 20, 1–35. [Google Scholar] [CrossRef] - Kawan, C.; Yüksel, S. On optimal coding of non-linear dynamical systems. IEEE Trans. Inf. Theory
**2018**, 64, 6816–6829. [Google Scholar] [CrossRef] - Bowen, R. Entropy for group endomorphisms and homogeneous spaces. Trans. Am. Math. Soc.
**1971**, 153, 401–414. [Google Scholar] [CrossRef] - Dinaburg, E.I. A connection between various entropy characterizations of dynamical systems. Izv. Akad. Nauk SSSR Ser. Mat.
**1971**, 35, 324–366. [Google Scholar] - Adler, R.L.; Konheim, A.G.; McAndrew, M.H. Topological entropy. Trans. Am. Math. Soc.
**1965**, 114, 309–319. [Google Scholar] [CrossRef] - Kolmogorov, A.N. A new metric invariant of transient dynamical systems and automorphisms in Lebesgue spaces. Dokl. Akad. Nauk SSSR
**1958**, 119, 861–864. [Google Scholar] - Sinai, J. On the concept of entropy for a dynamic system. Dokl. Akad. Nauk SSSR
**1959**, 124, 768–771. [Google Scholar] - Katok, A.; Hasselblatt, B. Introduction to the Modern Theory of Dynamical Systems; Encyclopedia of Mathematics and its Applications Series 54; Cambridge University Press: Cambridge, UK, 1995. [Google Scholar]
- Walters, P. A variational principle for the pressure of continuous transformations. Am. J. Math.
**1975**, 97, 937–971. [Google Scholar] [CrossRef] - Morris, I.D. Mather sets for sequences of matrices and applications to the study of joint spectral radii. Proc. Lond. Math. Soc.
**2013**, 107, 121–150. [Google Scholar] [CrossRef] - Arnold, L. Random Dynamical Systems; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 1998. [Google Scholar]
- Colonius, F.; Kliemann, W. Dynamical Systems and Linear Algebra; American Mathematical Society: Providence, RI, USA, 2014; Volume 158. [Google Scholar]
- Ledrappier, F.; Young, L.-S. The metric entropy of diffeomorphisms. I. Characterization of measures satisfying Pesin’s entropy formula. Ann. Math.
**1985**, 122, 509–539. [Google Scholar] [CrossRef] - Ruelle, D. An inequality for the entropy of differentiable maps. Bol. Soc. Brasil. Mat.
**1978**, 9, 83–87. [Google Scholar] [CrossRef] - Boichenko, V.A.; Leonov, G.A.; Reitmann, V. Dimension Theory for Ordinary Differential Equations; Teubner: Stuttgart, Germany, 2005. [Google Scholar]

© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).