# Entropy, Age and Time Operator

## Abstract


## 1. Introduction

The age Age($X_t$) of a stochastic process is defined at each clock time t as the expectation of the time operator (Section 2). The aging of canonical processes (Bernoulli) has been shown to keep step with the clock time t [9,10].

## 2. Time Operator and Age

The time operator of a stochastic process $X_t$, t = 1, 2, … is constructed as follows: At each stage t, the observation defines a partition of the sample space Ω into two sets ${\mathrm{\Xi}}_{t}^{1}$, ${\mathrm{\Xi}}_{t}^{0}$ corresponding to the values 1, 0. At the next stage t + 1, the sample space Ω is partitioned into the corresponding sets ${\mathrm{\Xi}}_{t+1}^{1}$, ${\mathrm{\Xi}}_{t+1}^{0}$. Therefore, the knowledge obtained after all successive observations up to time t is the common refinement of the corresponding partitions. We denote by ${\mathbb{E}}_{t}$ the conditional expectation projecting onto the space of fluctuations $\mathscr{H}$ observed up to time t. The sequence of conditional expectation projections ${\mathbb{E}}_{t}$, t = 0, 1, … is a resolution of the identity in $\mathscr{H}$, i.e., ${\mathbb{E}}_{0}=O$, ${\mathbb{E}}_{\infty}=I$ and ${\mathbb{E}}_{{t}_{1}}\le {\mathbb{E}}_{{t}_{2}}$ for ${t}_{1}\le {t}_{2}$. We omit the mathematical and technical details, as they are presented elsewhere [19,20] and are not necessary for the scope of this work.
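As a concrete illustration (a sketch of ours, not the authors' construction), the conditional expectations ${\mathbb{E}}_{t}$ can be realized for a fair three-step Bernoulli process: ${\mathbb{E}}_{t}$ averages a random variable over all paths that agree with the observed outcomes up to time t. On the full space, ${\mathbb{E}}_{0}$ is the plain expectation; on the space of fluctuations, where the mean is removed, ${\mathbb{E}}_{0}=O$ as stated above.

```python
# Sketch: conditional expectations E_t for a fair Bernoulli process on
# Omega = {0,1}^3.  E_t averages over all paths agreeing on the first t outcomes.
from itertools import product

T = 3
Omega = list(product([0, 1], repeat=T))   # 8 equiprobable paths

def E(t, f):
    """Conditional expectation given the first t coordinates."""
    def Ef(omega):
        matching = [w for w in Omega if w[:t] == omega[:t]]
        return sum(f(w) for w in matching) / len(matching)
    return Ef

f = lambda w: w[0] + 2 * w[1] + 4 * w[2]   # an arbitrary random variable

mean = sum(f(w) for w in Omega) / len(Omega)
assert all(abs(E(0, f)(w) - mean) < 1e-12 for w in Omega)   # E_0 = expectation
assert all(E(T, f)(w) == f(w) for w in Omega)               # E_T acts as identity here
# Ordering of the projections: E_{t1} E_{t2} = E_{t1} for t1 <= t2
E1E2 = E(1, E(2, f))
assert all(abs(E1E2(w) - E(1, f)(w)) < 1e-12 for w in Omega)
```

The nesting check in the last two lines is the discrete counterpart of the ordering ${\mathbb{E}}_{{t}_{1}}\le {\mathbb{E}}_{{t}_{2}}$.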

**Definition 1**. The self-adjoint operator with spectral projections being the conditional expectations ${\mathbb{E}}_{t}$ on the space of fluctuations $\mathscr{H}$ is called the time operator of the stochastic process $X_t$, t = 1, 2, ….

where ${\rho}_{eq}$ is the equilibrium distribution of the process. Formula (7) follows from estimations for Markov chains, verified by a Monte Carlo method [20].

Consider a two-state Markov chain $X_t$, t = 1, 2, … [20] with state space S = {0, 1}.

Here $\gamma =1-{w}_{01}-{w}_{10}$, $|\gamma| < 1$, is the second largest eigenvalue of the stochastic transition matrix W [32], and ${w}_{\kappa \lambda}$ are the transition probabilities.
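The eigenvalue statement is easy to confirm numerically; a minimal Python sketch (the transition probabilities are the example values used later in the paper):

```python
# Sketch: eigenvalues of the two-state stochastic matrix W.
# The second eigenvalue is gamma = 1 - w01 - w10, as stated above.
import numpy as np

w01, w10 = 0.245, 0.095
W = np.array([[1 - w01, w01],
              [w10,     1 - w10]])   # rows sum to 1 (stochastic)

eigvals = sorted(np.linalg.eigvals(W).real, reverse=True)
assert abs(eigvals[0] - 1.0) < 1e-12                 # Perron eigenvalue of any stochastic matrix
assert abs(eigvals[1] - (1 - w01 - w10)) < 1e-12     # gamma = 0.66 for these values
```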

**Lemma 1**. For any two-state Markov chain:

**Proof**. From Equation (12), the result follows for the r-norm distance $\Vert \rho (t)-{\rho}_{eq}\Vert_{r}$. □

The uniformity of this bound with respect to all initial distributions ρ(0) relates the age with any r-norm distance, generalizing Equation (22) as follows:

## 3. Lyapunov Functionals and Entropies

**LF1** $\mathcal{V}(y)\ge 0$, for all y.

**LF2** The equation $\mathcal{V}(y)=0$ has the unique solution $y=0$: $\mathcal{V}(y)=0\iff y=0$.

**LF3** $\mathcal{V}$ vanishes as $t\to \infty$: $\underset{t\to \infty}{\mathrm{lim}}\mathcal{V}({y}_{t})=0=\mathcal{V}(0)$.

**LF4** $\mathcal{V}$ is monotonically decreasing: $\mathcal{V}({y}_{{t}_{2}})\le \mathcal{V}({y}_{{t}_{1}})$, if ${t}_{2} > {t}_{1}$.
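The properties LF1, LF3 and LF4 can be verified numerically for the Lyapunov functional defined by the total variation distance; a Python sketch of ours, using the two-state chain with $w_{01}=0.245$, $w_{10}=0.095$ considered in this paper:

```python
# Sketch: the total variation distance from equilibrium as a Lyapunov functional.
import numpy as np

w01, w10 = 0.245, 0.095
W = np.array([[1 - w01, w01], [w10, 1 - w10]])
rho_eq = np.array([w10, w01]) / (w01 + w10)      # stationary distribution of W

rho = np.array([1.0, 0.0])
V = []
for t in range(60):
    V.append(0.5 * np.abs(rho - rho_eq).sum())   # total variation distance
    rho = rho @ W

assert all(v >= 0 for v in V)                        # LF1: non-negative
assert all(V[t + 1] <= V[t] for t in range(59))      # LF4: monotone decrease
assert V[-1] < 1e-9                                  # LF3: vanishes as t -> infinity
```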

- Boltzmann–Gibbs–Shannon entropy [43]:$${\mathcal{I}}^{BGS}(\rho (t))=-{\rho}_{0}(t)\mathrm{ln}{\rho}_{0}(t)-(1-{\rho}_{0}(t))\mathrm{ln}(1-{\rho}_{0}(t))$$
- Tsallis entropies [37]:$${\mathcal{I}}_{q}^{T}(\rho (t))=-\frac{1-{({\rho}_{0}(t))}^{q}-{(1-{\rho}_{0}(t))}^{q}}{1-q}$$
- Kaniadakis entropies [38–40]:$${\mathcal{I}}_{\kappa}^{K}(\rho (t))=-{\rho}_{0}(t){\mathrm{ln}}_{\left\{\kappa \right\}}({\rho}_{0}(t))-(1-{\rho}_{0}(t)){\mathrm{ln}}_{\left\{\kappa \right\}}(1-{\rho}_{0}(t)),\phantom{\rule{0.3em}{0ex}}\left|\kappa \right|\le 1$$$${\mathrm{ln}}_{\kappa}(x)=\frac{{x}^{\kappa}-{x}^{-\kappa}}{2\kappa}$$
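As a numerical sanity check (a Python sketch of ours, not from the paper), the three entropies above can be implemented directly, together with the limiting cases q → 1 and κ → 0 that recover the Boltzmann–Gibbs–Shannon entropy:

```python
# Sketch: the three entropies of a two-state distribution (p, 1-p).
import numpy as np

def bgs(p):                      # Boltzmann-Gibbs-Shannon entropy, Equation (17)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def tsallis(p, q):               # Tsallis entropy, Equation (18)
    return (1 - p**q - (1 - p)**q) / (q - 1)

def kaniadakis(p, kappa):        # Kaniadakis entropy, Equations (19)-(20)
    ln_k = lambda x: (x**kappa - x**(-kappa)) / (2 * kappa)
    return -p * ln_k(p) - (1 - p) * ln_k(1 - p)

p = 0.3
assert abs(tsallis(p, 1 + 1e-8) - bgs(p)) < 1e-6     # q -> 1 recovers BGS
assert abs(kaniadakis(p, 1e-8) - bgs(p)) < 1e-6      # kappa -> 0 recovers BGS
```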

The properties of the deformed logarithm ${\mathrm{ln}}_{\kappa}(x)$, Equation (20), are discussed in [44]. The Boltzmann–Gibbs–Shannon entropy, Equation (17), is a special case of the Tsallis entropies, Equation (18), for q → 1. The limiting case κ → 0 reduces the Kaniadakis entropy, Equation (19), to the Boltzmann–Gibbs–Shannon entropy, Equation (17). Moreover, the Kaniadakis entropies are related to the Tsallis entropies as follows [38]:

Consider the two-state Markov chain with ${w}_{01}=0.245$, ${w}_{10}=0.095$. The evolution of the Boltzmann–Gibbs–Shannon entropy, Equations (16) and (17), for the initial distribution ${\rho}_{0}(0)=0$, ${\rho}_{1}(0)=1$ is monotonic (Figure 1), while for the initial distribution ${\rho}_{0}(0)=1$, ${\rho}_{1}(0)=0$ it is non-monotonic (Figure 2). For both initial conditions, however, the evolution of the total variation distance is monotonic. This is also the case for all r-norm distances, as demonstrated by Lemma 1.
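The contrast between the monotone distance and the non-monotone entropy can be reproduced directly; a Python sketch under the paper's parameters:

```python
# Sketch: evolve rho(t) = rho(0) W^t and compare the total variation distance
# with the Boltzmann-Gibbs-Shannon entropy for the initial condition (1, 0).
import numpy as np

w01, w10 = 0.245, 0.095
W = np.array([[1 - w01, w01], [w10, 1 - w10]])
rho_eq = np.array([w10, w01]) / (w01 + w10)      # approx (0.2794, 0.7206)

def H(p):                                        # BGS entropy of (p, 1-p)
    return -p * np.log(p) - (1 - p) * np.log(1 - p) if 0 < p < 1 else 0.0

rho = np.array([1.0, 0.0])                       # rho_0(0) = 1, rho_1(0) = 0
d, h = [], []
for t in range(40):
    d.append(0.5 * np.abs(rho - rho_eq).sum())
    h.append(H(rho[0]))
    rho = rho @ W

assert all(d[t + 1] < d[t] for t in range(39))   # distance: strictly monotone
steps = [h[t + 1] - h[t] for t in range(39)]
assert any(s > 0 for s in steps) and any(s < 0 for s in steps)  # entropy: non-monotone
```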

The entropies increase monotonically for the initial distribution ${\rho}_{0}(0)=0$, ${\rho}_{1}(0)=1$ (Figure 4), in accordance with the second law of thermodynamics, while for the initial distribution ${\rho}_{0}(0)=1$, ${\rho}_{1}(0)=0$ (Figure 3), they violate the second law of thermodynamics in three ways: (1) the approach to equilibrium is non-monotonic; (2) the equilibrium distribution is not the maximum entropy distribution; and (3) the system begins in a state with entropy lower than the equilibrium entropy, then evolves to the state ${\rho}_{0}(3)=0.4866$, ${\rho}_{1}(3)=0.5134$ with maximal entropy and then relaxes monotonically to the equilibrium with lower entropy.
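The maximal-entropy state ρ(3) and violation (2) can be checked directly; a Python sketch under the paper's parameters:

```python
# Sketch: three steps from rho(0) = (1, 0) reach the maximal-entropy state rho(3),
# whose entropy exceeds the equilibrium entropy.
import numpy as np

w01, w10 = 0.245, 0.095
W = np.array([[1 - w01, w01], [w10, 1 - w10]])
rho_eq = np.array([w10, w01]) / (w01 + w10)
H = lambda p: float(-p @ np.log(p))              # BGS entropy, Equation (17)

rho = np.array([1.0, 0.0])
for _ in range(3):
    rho = rho @ W

assert np.allclose(rho, [0.4866, 0.5134], atol=5e-5)   # the maximal-entropy state
assert H(rho) > H(rho_eq)    # violation (2): equilibrium entropy is not the maximum
```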

## 4. Age in Terms of Entropy

In this section, we compare the evolution of the age Age($X_t$) with the evolution of the total variation distance and the evolution of the Lyapunov functionals, Equation (16), defined from the Tsallis and Kaniadakis entropies. The age Age($X_t$) can be expressed in terms of entropies, as both distances and entropies estimate the “distance” from equilibrium:

We computed the differences t − Age($X_t$) for several rates of convergence to equilibrium γ. Figure 5 demonstrates the convergence of the differences t − Age($X_t$) to a constant α for n = 8 different doubly-stochastic Markov chains with rates of convergence $\gamma =2{\lambda}_{n}-1$. The differences t − Age($X_t$) become constant after a time instant t, which depends on the rate of convergence $2{\lambda}_{n}-1$ (Figure 5). In this case, ${d}_{TV}(\rho (t),{\rho}_{eq})=0.5{(2{\lambda}_{n}-1)}^{t}$, because $|{\rho}_{0}-{\rho}_{eq,0}|=0.5$ for ${\rho}_{0}=0$ or ${\rho}_{0}=1$ and ${\rho}_{eq,0}=0.5$.
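The closed form for the total variation distance is easy to confirm for a symmetric, doubly-stochastic two-state chain; a Python sketch of ours, where `lam` plays the role of the diagonal entry ${\lambda}_{n}$:

```python
# Sketch: for the doubly-stochastic chain W = [[lam, 1-lam], [1-lam, lam]],
# the equilibrium is uniform and d_TV(t) = 0.5 * (2*lam - 1)^t from rho(0) = (1, 0).
import numpy as np

lam = 0.8                                        # illustrative diagonal entry
W = np.array([[lam, 1 - lam], [1 - lam, lam]])   # doubly stochastic
rho_eq = np.array([0.5, 0.5])

rho = np.array([1.0, 0.0])
for t in range(20):
    d_tv = 0.5 * np.abs(rho - rho_eq).sum()
    assert abs(d_tv - 0.5 * (2 * lam - 1) ** t) < 1e-10
    rho = rho @ W
```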

We present the linear regression of t − Age($X_t$) versus the total variation distance, Equation (8), in Table 1. The objective is to compare the mean square error (MSE) of the estimations of Equation (22), based on the total variation distance, with the estimations of Equation (23), based on the Boltzmann–Gibbs–Shannon entropy, the Tsallis entropy and the Kaniadakis entropy.

The linear relation between ${d}_{TV}(\rho (t),{\rho}_{eq})$ and the age differences t − Age($X_t$) is statistically significant (Table 1), verifying the results of the Monte Carlo simulation technique that we followed in [20]. Concerning the validity of the linear regression formula, Equation (23), for the Boltzmann–Gibbs–Shannon entropy, we repeat the same analysis in Table 2.

The Tsallis and Kaniadakis entropies estimate the age Age($X_t$) better than the Boltzmann–Gibbs–Shannon entropy (Figures 6 and 7). More specifically, in Figure 6, the mean square error (MSE) of the linear fit to the Tsallis-entropy (q = 2.5) Lyapunov functional is less than the MSE of the linear fit to the Boltzmann–Gibbs–Shannon entropy Lyapunov functional. Searching over other entropic indices q > 0 to reduce the MSE further, we found that the MSE of the Tsallis-entropy Lyapunov functional attains its minimum at q = 2.5 (Figure 8), for all rates of convergence γ.

Similarly, the MSE of the linear fit to the Kaniadakis-entropy Lyapunov functional attains its minimum at κ = 1 (Figure 9). Both generalized entropies therefore improve the estimation of the age Age($X_t$) significantly when compared to the Boltzmann–Gibbs–Shannon entropy:

- Tsallis entropy:$$Age({X}_{t})=t-\alpha -\beta \cdot |{\mathcal{I}}_{q=2.5}^{T}(\rho (t))-{\mathcal{I}}_{q=2.5}^{T}({\rho}_{eq})|$$
- Kaniadakis entropy:$$Age({X}_{t})=t-\alpha -\beta \cdot |{\mathcal{I}}_{\left|\kappa \right|=1}^{K}(\rho (t))-{\mathcal{I}}_{\left|\kappa \right|=1}^{K}({\rho}_{eq})|$$
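The entropy differences entering the two formulas above behave as Lyapunov functionals along the chain's evolution; a Python sketch of ours (the values q = 2.5 and κ = 1 are those selected above, the chain parameters are the paper's example):

```python
# Sketch: the functionals |I(rho(t)) - I(rho_eq)| for the Tsallis (q = 2.5) and
# Kaniadakis (kappa = 1) entropies decrease monotonically along the evolution.
import numpy as np

def tsallis(rho, q=2.5):                       # Equation (18)
    return (1 - np.sum(rho ** q)) / (q - 1)

def kaniadakis(rho, kappa=1.0):                # Equations (19)-(20)
    ln_k = (rho ** kappa - rho ** (-kappa)) / (2 * kappa)
    return float(-rho @ ln_k)

w01, w10 = 0.245, 0.095
W = np.array([[1 - w01, w01], [w10, 1 - w10]])
rho_eq = np.array([w10, w01]) / (w01 + w10)

rho = np.array([0.0, 1.0]) @ W    # start at t = 1: the kappa-logarithm diverges at p = 0
V_T, V_K = [], []
for _ in range(30):
    V_T.append(abs(tsallis(rho) - tsallis(rho_eq)))
    V_K.append(abs(kaniadakis(rho) - kaniadakis(rho_eq)))
    rho = rho @ W

assert all(V_T[i + 1] <= V_T[i] for i in range(29))   # monotone decrease (LF4)
assert all(V_K[i + 1] <= V_K[i] for i in range(29))
```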

## 5. Concluding Remarks

We summarize our conclusions for Markov chains $X_t$, t = 1, 2, …:

- The Lyapunov functionals defined in terms of norms (total variation and r-norms; Equation (29) in [20] and Equation (14), respectively) evolve monotonically towards equilibrium, respecting the second law of thermodynamics; Figures 1 and 2:$$\begin{array}{c}{\mathcal{V}}_{TV}(t)={d}_{TV}(t)=|{\rho}_{0}-{\rho}_{eq,0}|\cdot |\gamma {|}^{t}\\ {\mathcal{V}}_{r}(t)={\Vert \rho (t)-{\rho}_{eq}\Vert}_{r}=\sqrt[r]{2}\cdot |{\rho}_{0}-{\rho}_{eq,0}|\cdot |\gamma {|}^{t}\end{array}$$
- For the same system, the Lyapunov functionals defined in terms of entropies evolve violating the second law of thermodynamics in three ways (Figure 3): (1) the approach to equilibrium is non-monotonic; (2) the equilibrium distribution is not the maximum entropy distribution; and (3) the initial entropy is lower than the equilibrium entropy; the entropy then increases above the equilibrium entropy and then decreases monotonically to equilibrium. Monotonicity violations (1) have been reported for non-doubly-stochastic regular Markov chains ([42] (Theorem 5, p. 104); [43] (p. 81)). Examples of evolutions where the maximum entropy is not the equilibrium entropy (2), therefore violating Jaynes' maximum entropy principle [47], have also been reported ([48]; [43] (pp. 82–83)). We did not find in the literature entropy evolutions with the behavior (3) of Figure 3.
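The two norm identities in the first point can be verified numerically; a short Python check of ours for the example chain:

```python
# Sketch: verify ||rho(t) - rho_eq||_r = 2^(1/r) * |rho_0 - rho_eq,0| * |gamma|^t
# for the two-state chain with w01 = 0.245, w10 = 0.095 (gamma = 0.66).
import numpy as np

w01, w10 = 0.245, 0.095
gamma = 1 - w01 - w10
W = np.array([[1 - w01, w01], [w10, 1 - w10]])
rho_eq = np.array([w10, w01]) / (w01 + w10)

rho = np.array([1.0, 0.0])
d0 = abs(rho[0] - rho_eq[0])          # |rho_0 - rho_eq,0| at t = 0
for t in range(15):
    for r in (1, 2, 3, 4):
        norm_r = float(np.sum(np.abs(rho - rho_eq) ** r) ** (1.0 / r))
        assert abs(norm_r - 2 ** (1.0 / r) * d0 * abs(gamma) ** t) < 1e-10
    rho = rho @ W
```

The factor $\sqrt[r]{2}$ arises because the two components of $\rho(t)-{\rho}_{eq}$ have equal absolute value.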

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

1. Pauli, W. Prinzipien der Quantentheorie I. In Encyclopedia of Physics; Flugge, S., Ed.; Springer-Verlag: Berlin, Germany, 1958; Volume 5; English translation: General Principles of Quantum Mechanics; Achuthan, P., Venkatesan, K., Translators; Springer: Berlin, Germany, 1980.
2. Putnam, C.R. Commutation Properties of Hilbert Space Operators and Related Topics; Springer: Berlin, Germany, 1967.
3. Misra, B. Nonequilibrium entropy, Lyapounov variables, and ergodic properties of classical systems. Proc. Natl. Acad. Sci. USA **1978**, 75, 1627–1631.
4. Misra, B.; Prigogine, I.; Courbage, M. Lyapunov variable: Entropy and measurements in quantum mechanics. Proc. Natl. Acad. Sci. USA **1979**, 76, 4768–4772.
5. Courbage, M. On necessary and sufficient conditions for the existence of Time and Entropy Operators in Quantum Mechanics. Lett. Math. Phys. **1980**, 4, 425–432.
6. Lockhart, C.M.; Misra, B. Irreversibility and measurement in quantum mechanics. Physica A **1986**, 136, 47–76.
7. Antoniou, I.; Suchanecki, Z.; Laura, R.; Tasaki, S. Intrinsic irreversibility of quantum systems with diagonal singularity. Physica A **1997**, 241, 737–772.
8. Courbage, M.; Fathi, S. Decay probability distribution of quantum-mechanical unstable systems and time operator. Physica A **2008**, 387, 2205–2224.
9. Misra, B.; Prigogine, I.; Courbage, M. From deterministic dynamics to probabilistic descriptions. Physica A **1979**, 98, 1–26.
10. Prigogine, I. From Being to Becoming; Freeman: New York, NY, USA, 1980.
11. Courbage, M.; Misra, B. On the equivalence between Bernoulli dynamical systems and stochastic Markov processes. Physica A **1980**, 104, 359–377.
12. Courbage, M. Intrinsic Irreversibility of Kolmogorov Dynamical Systems. Physica A **1983**, 122, 459–482.
13. Antoniou, I. The Time Operator of the Cusp Map. Chaos Solitons Fractals **2001**, 12, 1619–1627.
14. Gustafson, K.; Misra, B. Canonical Commutation Relations of Quantum Mechanics and Stochastic Regularity. Lett. Math. Phys. **1976**, 1, 275–280.
15. Gustafson, K.; Goodrich, R. Kolmogorov systems and Haar systems. Colloq. Math. Soc. Janos Bolyai **1987**, 49, 401–416.
16. Gustafson, K. Lectures on Computational Fluid Dynamics, Mathematical Physics and Linear Algebra; Abe, T., Kuwahara, K., Eds.; World Scientific: Singapore, 1997.
17. Antoniou, I.; Gustafson, K. Wavelets and Stochastic Processes. Math. Comput. Simul. **1999**, 49, 81–104.
18. Antoniou, I.; Prigogine, I.; Sadovnichii, V.; Shkarin, S. Time Operator for Diffusion. Chaos Solitons Fractals **2000**, 11, 465–477.
19. Antoniou, I.; Christidis, T. Bergson's Time and the Time Operator. Mind Matter **2010**, 8, 185–202.
20. Gialampoukidis, I.; Gustafson, K.; Antoniou, I. Time Operator of Markov Chains and Mixing Times. Applications to Financial Data. Physica A **2014**, 415, 141–155.
21. Gialampoukidis, I.; Gustafson, K.; Antoniou, I. Financial Time Operator for random walk markets. Chaos Solitons Fractals **2013**, 57, 62–72.
22. Aldous, D.; Fill, J. Reversible Markov Chains and Random Walks on Graphs; 2002; accessed on 14 January 2015.
23. Levin, D.A.; Peres, Y.; Wilmer, E.L. Markov Chains and Mixing Times; American Mathematical Society: Providence, RI, USA, 2009.
24. Aldous, D.; Lovász, L.; Winkler, P. Mixing times for uniformly ergodic Markov chains. Stoch. Process. Appl. **1997**, 71, 165–185.
25. Levene, M.; Loizou, G. Kemeny's constant and the random surfer. Am. Math. Mon. **2002**, 741–745.
26. Jenamani, M.; Mohapatra, P.K.; Ghose, S. A stochastic model of e-customer behavior. Electron. Commer. R. A. **2003**, 2, 81–94.
27. Kirkland, S. Fastest expected time to mixing for a Markov chain on a directed graph. Linear Algebra Appl. **2010**, 433, 1988–1996.
28. Crisostomi, E.; Kirkland, S.; Shorten, R. A Google-like model of road network dynamics and its application to regulation and control. Int. J. Control **2011**, 84, 633–651.
29. Brin, S.; Page, L. The anatomy of a large-scale hypertextual Web search engine. Comput. Netw. ISDN **1998**, 30, 107–117.
30. Hamilton, J.D. A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica **1989**, 357–384.
31. Lockhart, C.M.; Misra, B.; Prigogine, I. Geodesic instability and internal time in relativistic cosmology. Phys. Rev. D **1982**, 25, 921.
32. Kemeny, J.G.; Snell, J.L. Finite Markov Chains; D. Van Nostrand: Princeton, NJ, USA, 1960.
33. Howard, R.A. Dynamic Probabilistic Systems; Wiley: New York, NY, USA, 1971; Volume I.
34. De La Llave, R. Rates of convergence to equilibrium in the Prigogine–Misra–Courbage theory of irreversibility. J. Stat. Phys. **1982**, 29, 17–31.
35. Atmanspacher, H. Dynamical Entropy in Dynamical Systems; Springer: Berlin, Germany, 1997.
36. Mackey, M.C. The dynamic origin of increasing entropy. Rev. Mod. Phys. **1989**, 61, 981.
37. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. **1988**, 52, 479–487.
38. Kaniadakis, G. Non-linear kinetics underlying generalized statistics. Physica A **2001**, 296, 405–425.
39. Kaniadakis, G. H-theorem and generalized entropies within the framework of nonlinear kinetics. Phys. Lett. A **2001**, 288, 283–291.
40. Kaniadakis, G.; Lissia, M.; Scarfone, A.M. Deformed logarithms and entropies. Physica A **2004**, 340, 41–49.
41. Gorban, A.N. Entropy: The Markov Ordering Approach. Entropy **2010**, 12, 1145–1193.
42. Cover, T.M. Which processes satisfy the second law? In Physical Origins of Time Asymmetry; Halliwell, J.J., Pérez-Mercader, J., Zurek, W.H., Eds.; Cambridge University Press: Cambridge, UK, 1994; pp. 98–107.
43. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: New York, NY, USA, 2006.
44. Kaniadakis, G. Theoretical foundations and mathematical formalism of the power-law tailed statistical distributions. Entropy **2013**, 15, 3983–4010.
45. Kondepudi, D.; Prigogine, I. Modern Thermodynamics: From Heat Engines to Dissipative Structures; Wiley: Chichester, UK, 1998.
46. Tsallis, C. The Nonadditive Entropy S_q and Its Applications in Physics and Elsewhere: Some Remarks. Entropy **2011**, 13, 1765–1804.
47. Jaynes, E.T. Probability Theory: The Logic of Science; Cambridge University Press: Cambridge, UK, 2003.
48. Landsberg, P.T. Is equilibrium always an entropy maximum? J. Stat. Phys. **1984**, 35, 159–169.

**Figure 1.** The evolution of the total variation distance (solid line) from equilibrium and the Boltzmann–Gibbs–Shannon entropy, Equations (16) and (17), for the two-state Markov chain with ${w}_{01}=0.245$, ${w}_{10}=0.095$ and initial distribution ${\rho}_{0}(0)=0$, ${\rho}_{1}(0)=1$.

**Figure 2.** The evolution of the total variation distance (solid line) from equilibrium and the Boltzmann–Gibbs–Shannon entropy, Equations (16) and (17), for the two-state Markov chain with ${w}_{01}=0.245$, ${w}_{10}=0.095$ and initial distribution ${\rho}_{0}(0)=1$, ${\rho}_{1}(0)=0$.

**Figure 3.** The evolution of the Boltzmann–Gibbs–Shannon entropy, the Tsallis entropy for q = 1.3648 and the Kaniadakis entropy for κ = 0.0045 for the initial distribution ${\rho}_{0}(0)=1$, ${\rho}_{1}(0)=0$. The Kaniadakis entropy is indistinguishable from the Boltzmann–Gibbs–Shannon entropy, because the value of κ is very close to zero. The values of the entropic indices q, κ were selected so that the difference between the maximum entropy and the equilibrium entropy is maximal. The distribution of maximal entropy is ${\rho}_{0}(3)=0.4866$, ${\rho}_{1}(3)=0.5134$.

**Figure 4.** The evolution of the Boltzmann–Gibbs–Shannon entropy, the Tsallis entropy for q = 1.3648 and the Kaniadakis entropy for κ = 0.0045 for the initial distribution ${\rho}_{0}(0)=0$, ${\rho}_{1}(0)=1$. The three entropies increase monotonically to the equilibrium entropy, which is the maximum entropy.

**Figure 6.**The mean square error of the linear fit to the total variation distance, the Boltzmann–Gibbs–Shannon entropy and the Tsallis entropy for q = 1.5, q = 2 and q = 2.5.

**Figure 7.**The mean square error of the linear fit to the total variation distance, the Boltzmann–Gibbs–Shannon entropy and the Kaniadakis entropy for κ = 0.8, κ = 0.9 and κ = 1.

**Figure 8.**The mean square error (MSE) as a function of the entropic index q ∈ [1.4, 3.5] for all rates of convergence γ ∈ [0.2, 0.8]. The MSE attains its minimum for q = 2.5.

**Figure 9.**The mean square error (MSE) as a function of the entropic index κ ∈ (0, 1] for all rates of convergence γ ∈ [0.2, 0.8]. The MSE attains its minimum for κ = 1.

**Figure 10.**The percentage decrease in the mean square error using the Lyapunov functionals associated with Tsallis entropy, Equation (28), and Kaniadakis entropy, Equation (29), compared with the MSE of the Boltzmann–Gibbs–Shannon Lyapunov functional, Equation (30).

**Table 1.** Linear regression between t − Age($X_t$) and the total variation distance, Equation (22).

Rate γ | $\widehat{\alpha}(\mathrm{SE})$ | $\widehat{\beta}(\mathrm{SE})$ | Pearson Coefficient | Mean Square Error
---|---|---|---|---
0.8 | 1.975(0.007) | −4.932(0.030) | −1.000 | 0.00002844
0.7 | 1.047(0.012) | −2.941(0.067) | −0.998 | 0.00025143
0.6 | 0.600(0.010) | −1.933(0.075) | −0.996 | 0.00034544
0.5 | 0.350(0.007) | −1.340(0.067) | −0.993 | 0.00023253
0.4 | 0.198(0.004) | −0.947(0.053) | −0.991 | 0.00010321
0.3 | 0.102(0.002) | −0.656(0.037) | −0.990 | 0.00003005
0.2 | 0.043(0.001) | −0.417(0.021) | −0.992 | 0.00000431
0.1 | 0.100(0.000) | −0.203(0.007) | −0.997 | 0.00000011

**Table 2.** Linear regression between t − Age($X_t$) and the Lyapunov functional defined in terms of the Boltzmann–Gibbs–Shannon entropy, Equation (23).

Rate γ | $\widehat{\alpha}(\mathrm{SE})$ | $\widehat{\beta}(\mathrm{SE})$ | Pearson Coefficient | Mean Square Error
---|---|---|---|---
0.8 | 1.410(0.068) | −2.874(0.269) | −0.979 | 0.01402500
0.7 | 0.846(0.038) | −2.331(0.228) | −0.978 | 0.00609177
0.6 | 0.522(0.020) | −1.980(0.176) | −0.981 | 0.00191404
0.5 | 0.319(0.009) | −1.743(0.127) | −0.987 | 0.00046904
0.4 | 0.185(0.004) | −1.586(0.085) | −0.993 | 0.00008470
0.3 | 0.097(0.001) | −1.486(0.051) | −0.997 | 0.00000927
0.2 | 0.041(0.000) | −1.427(0.023) | −0.999 | 0.00000039
0.1 | 0.010(0.000) | −1.396(0.006) | −1.000 | 0.00000000

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Gialampoukidis, I.; Antoniou, I.
Entropy, Age and Time Operator. *Entropy* **2015**, *17*, 407-424.
https://doi.org/10.3390/e17010407
