# Generalized (c,d)-Entropy and Aging Random Walks


## Abstract


## 1. Introduction: Mini-Review of $(c,d)$-Entropy

- Khinchin’s first axiom states that for a system with W potential outcomes (states), each occurring with probability ${p}_{i}\ge 0$ and with ${\sum}_{i=1}^{W}{p}_{i}=1$, the entropy, $S({p}_{1},\cdots ,{p}_{W})$, as a measure of uncertainty about the system, must take its maximum for the equi-distribution ${p}_{i}=1/W$, for all i.
- Khinchin’s second axiom (missing in [4]) states that any entropy should remain invariant under adding zero-probability states to the system, i.e., $S({p}_{1},\cdots ,{p}_{W})=S({p}_{1},\cdots ,{p}_{W},0)$.
- Khinchin’s third axiom (separability axiom) finally makes a statement about the composition of two finite probabilistic systems, A and B. If the systems are independent of each other, entropy should be additive, meaning that the entropy of the combined system, $A+B$, should be the sum of the entropies of the individual systems, $S\left(A+B\right)=S\left(A\right)+S\left(B\right)$. If the two systems are dependent on each other, the entropy of the combined system, i.e., the information given by the realization of the two finite schemes, A and B, $S(A+B)$, is equal to the information gained by a realization of system A, $S\left(A\right)$, plus the mathematical expectation of the information gained by a realization of system B after the realization of system A, $S\left(A+B\right)=S\left(A\right)+S{|}_{A}\left(B\right)$.
- Khinchin’s fourth axiom is the requirement that entropy is a continuous function of all its arguments, ${p}_{i}$, and does not depend on anything else.
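For the Boltzmann–Gibbs–Shannon entropy, the first three axioms are easy to verify numerically. The following sketch (plain Python with NumPy; the helper name `shannon` and the test distributions are ours) checks maximality at the equi-distribution, invariance under adding a zero-probability state, and additivity for independent systems:

```python
import numpy as np

def shannon(p):
    """Boltzmann-Gibbs-Shannon entropy S = -sum_i p_i ln p_i."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # zero-probability states contribute nothing
    return -np.sum(p * np.log(p))

W = 4
uniform = np.full(W, 1.0 / W)
biased = np.array([0.7, 0.1, 0.1, 0.1])

# First axiom: entropy is maximal for the equi-distribution p_i = 1/W.
assert shannon(uniform) > shannon(biased)

# Second axiom: adding a zero-probability state leaves S unchanged.
assert np.isclose(shannon(uniform), shannon(np.append(uniform, 0.0)))

# Third axiom, independent case: S(A+B) = S(A) + S(B)
# for the product (joint) distribution of independent A and B.
pA, pB = np.array([0.5, 0.5]), np.array([0.2, 0.3, 0.5])
joint = np.outer(pA, pB).ravel()
assert np.isclose(shannon(joint), shannon(pA) + shannon(pB))
```

For the uniform distribution the entropy equals $\ln W$, the familiar maximum.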

**Table 1.** Order in the zoo of recently introduced entropies for which SK1–SK3 hold. All of them are special cases of the entropy given in Equation (3), and their asymptotic behavior is uniquely determined by c and d. It can be seen immediately that ${S}_{q>1}$, ${S}_{b}$ and ${S}_{E}$ are asymptotically identical; so are ${S}_{q<1}$ and ${S}_{\kappa}$, as well as ${S}_{\eta}$ and ${S}_{\gamma}$.

| Entropy | c | d | Reference |
|---|---|---|---|
| ${S}_{c,d}=er{\sum}_{i}\Gamma (d+1,1-c\ln {p}_{i})-cr$, with $r={(1-c+cd)}^{-1}$ | $c$ | $d$ | |
| ${S}_{BGS}={\sum}_{i}{p}_{i}\ln (1/{p}_{i})$ | 1 | 1 | [5] |
| ${S}_{q<1}\left(p\right)=\frac{1-\sum {p}_{i}^{q}}{q-1}$, $(q<1)$ | $c=q<1$ | 0 | [6] |
| ${S}_{\kappa}\left(p\right)=-{\sum}_{i}{p}_{i}\frac{{p}_{i}^{\kappa}-{p}_{i}^{-\kappa}}{2\kappa}$, $(0<\kappa \le 1)$ | $c=1-\kappa$ | 0 | [8] |
| ${S}_{q>1}\left(p\right)=\frac{1-\sum {p}_{i}^{q}}{q-1}$, $(q>1)$ | 1 | 0 | [6] |
| ${S}_{b}\left(p\right)={\sum}_{i}(1-{e}^{-b{p}_{i}})+{e}^{-b}-1$, $(b>0)$ | 1 | 0 | [9] |
| ${S}_{E}\left(p\right)={\sum}_{i}{p}_{i}(1-{e}^{\frac{{p}_{i}-1}{{p}_{i}}})$ | 1 | 0 | [10] |
| ${S}_{\eta}\left(p\right)={\sum}_{i}\Gamma (\frac{\eta +1}{\eta},-\ln {p}_{i})-{p}_{i}\Gamma \left(\frac{\eta +1}{\eta}\right)$, $(\eta >0)$ | 1 | $d=\frac{1}{\eta}$ | [7] |
| ${S}_{\gamma}\left(p\right)={\sum}_{i}{p}_{i}{\ln}^{1/\gamma}(1/{p}_{i})$ | 1 | $d=1/\gamma$ | [12,13] |
| ${S}_{\beta}\left(p\right)={\sum}_{i}{p}_{i}^{\beta}\ln (1/{p}_{i})$ | $c=\beta$ | 1 | [14] |

- SK1: The requirement that S depends continuously on p implies that g is a continuous function.
- SK2: The requirement that the entropy is maximal for the equi-distribution ${p}_{i}=1/W$ (for all i) implies that g is a concave function.
- SK3: The requirement that adding a zero-probability state to a system, $W+1$ with ${p}_{W+1}=0$, does not change the entropy implies that $g\left(0\right)=0$.
- SK4 (separability axiom): The entropy of a system, composed of sub-systems A and B, equals the entropy of A plus the expectation value of the entropy of B, conditional on A. Note that this also corresponds exactly to Markovian processes.

- Boltzmann–Gibbs entropy belongs to the $(c,d)=(1,1)$ class. One gets from Equation (3)$${S}_{1,1}\left[p\right]=\sum _{i}{g}_{1,1}\left({p}_{i}\right)=-\sum _{i}{p}_{i}ln{p}_{i}+1$$
- Tsallis entropy belongs to the $(c,d)=(c,0)$ class. From Equation (3) and the choice $r=1/(1-c)$ (see below), we get$$\begin{array}{c}{S}_{c,0}\left[p\right]=\sum _{i}{g}_{c,0}\left({p}_{i}\right)=\frac{1-{\sum}_{i}{p}_{i}^{c}}{c-1}+1\hfill \end{array}$$
- The entropy related to stretched exponentials [7] belongs to the $(c,d)=(1,d)$ classes; see Table 1. As a specific example, we compute the $(c,d)=(1,2)$ case$${S}_{1,2}\left[p\right]=2\left(1-\sum _{i}{p}_{i}ln{p}_{i}\right)+\frac{1}{2}\sum _{i}{p}_{i}{\left(ln{p}_{i}\right)}^{2}$$
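These special cases can be checked numerically against the general form of Equation (3) and the first row of Table 1. Below is a minimal sketch using SciPy's regularized upper incomplete gamma function; the helper name `S_cd` and the test distribution are ours, and $r=(1-c+cd)^{-1}$ as in Table 1:

```python
import numpy as np
from scipy.special import gammaincc, gamma

def S_cd(p, c, d):
    """General (c,d)-entropy: S = e*r*sum_i Gamma(d+1, 1 - c*ln p_i) - c*r,
    with r = (1 - c + c*d)^(-1).  The upper incomplete gamma function
    Gamma(a, x) is gammaincc(a, x) * gamma(a) in SciPy."""
    p = np.asarray(p, dtype=float)
    r = 1.0 / (1.0 - c + c * d)
    x = 1.0 - c * np.log(p)
    return np.e * r * np.sum(gammaincc(d + 1, x) * gamma(d + 1)) - c * r

p = np.array([0.5, 0.25, 0.125, 0.125])

# (c,d) = (1,1): Boltzmann-Gibbs, S = -sum_i p_i ln p_i + 1
assert np.isclose(S_cd(p, 1, 1), -np.sum(p * np.log(p)) + 1)

# (c,d) = (c,0): Tsallis, S = (1 - sum_i p_i^c)/(c - 1) + 1
c = 0.7
assert np.isclose(S_cd(p, c, 0), (1 - np.sum(p**c)) / (c - 1) + 1)

# (c,d) = (1,2): stretched-exponential class,
# S = 2(1 - sum_i p_i ln p_i) + (1/2) sum_i p_i (ln p_i)^2
rhs = 2 * (1 - np.sum(p * np.log(p))) + 0.5 * np.sum(p * np.log(p) ** 2)
assert np.isclose(S_cd(p, 1, 2), rhs)
```

The agreement follows from $\Gamma(2,x)=(1+x)e^{-x}$, $\Gamma(1,x)=e^{-x}$ and $\Gamma(3,x)=(x^2+2x+2)e^{-x}$, which reduce Equation (3) to the three closed forms above.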

**Figure 1.** Entropies parametrized in the $(c,d)$-plane, with their associated distribution functions. Boltzmann–Gibbs–Shannon (BGS) entropy corresponds to $(1,1)$, Tsallis entropy to $(c,0)$ and entropies for stretched exponentials to $(1,d>0)$. Entropies leading to distribution functions with compact support belong to equivalence class $(1,0)$. Figure from [3].

#### 1.1. Distribution Functions

#### Special Cases of Distribution Functions

#### 1.2. How to Determine the Exponents c and d

#### 1.3. A Note on Rényi-type Entropies

## 2. Aging Random Walks

#### 2.1. Accelerating and Auto-Correlated Random Walks

**Figure 2.** An example of an auto-correlated random walk that persistently walks in the same direction for $\propto {n}^{1-\alpha}$ steps ($\alpha =0.5$).
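A walk with this persistence behavior can be sketched in a few lines. The update rule below is our own illustrative assumption, not necessarily the exact process of the full text: reversing direction at step n with probability $\alpha n^{\alpha-1}$ makes the expected number of reversals up to N grow like $N^{\alpha}$, so the typical persistent stretch at time n grows like $n^{1-\alpha}$.

```python
import numpy as np

def persistent_walk(N, alpha, seed=0):
    """Random walk that reverses direction at step n with probability
    alpha * n**(alpha - 1) (illustrative rule).  Reversals up to N then
    number ~ N**alpha, so persistent stretches grow like n**(1 - alpha)."""
    rng = np.random.default_rng(seed)
    x, step, reversals = 0, 1, 0
    xs = np.empty(N, dtype=int)
    for n in range(1, N + 1):
        if rng.random() < alpha * n ** (alpha - 1.0):
            step = -step
            reversals += 1
        x += step
        xs[n - 1] = x
    return xs, reversals

xs, k = persistent_walk(50_000, alpha=0.5)
# Expected reversals ~ sum_n alpha * n^(alpha-1) ~ N^alpha, i.e., ~ sqrt(N) here.
```

For $\alpha=0.5$ and $N=50{,}000$ this gives on the order of $\sqrt{N}\approx 224$ reversals, far fewer than the $N/2$ of an ordinary unbiased walk.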

#### 2.2. Generalization to Aging (Path-Dependent) Random Walks

**Figure 3.** Comparison of the first three even moments, $\langle {x}^{2}\left(N\right)\rangle $, $\langle {x}^{4}\left(N\right)\rangle $ and $\langle {x}^{6}\left(N\right)\rangle $, and the average number of direction-reversal decisions, ${k}_{-}\left(N\right)$, with $1\le N\le 50,000$, for the auto-correlated random walk (blue lines) and aging random walks (red dashed lines) for values $\alpha =0.2$, $0.5$ and $0.8$.

#### 2.3. General Classes of Aging Random Walks

**Figure 4.** The maximal number of direction-reversal decisions in random walks in entropic classes $(c,d=0)$ with $0<c<1$ for the values $\lambda =1.1$, $1.2$ and $1.3$.

**Figure 5.** In the three top panels, the second moment, $\langle {x}^{2}\left(N\right)\rangle $, is shown for $\nu =0.2$, $0.5$ and $0.8$, for $\lambda =1.1$ (black), $1.2$ (red) and $1.3$ (green). The blue dotted and dashed lines indicate the functions ${N}^{2}$ and $N$, respectively. A cross-over from $\langle {x}^{2}\left(N\right)\rangle \sim N$ to $\langle {x}^{2}\left(N\right)\rangle \sim {N}^{2}$ (free motion) is clearly visible for $\nu =0.5$ and $0.8$. The three bottom panels show the average number of direction-reversal decisions, ${k}_{-}\left(N\right)$. Simulations were performed in the range $1\le N\le 50,000$. For $\nu \to 1$, the crossover happens at smaller N, for all values of λ.

## 3. Conclusions

## Conflicts of Interest

## References

1. Hanel, R.; Thurner, S. A comprehensive classification of complex statistical systems and an axiomatic derivation of their entropy and distribution functions. Europhys. Lett. **2011**, 93, 20006.
2. Hanel, R.; Thurner, S. When do generalized entropies apply? How phase space volume determines entropy. Europhys. Lett. **2011**, 96, 50003.
3. Thurner, S.; Hanel, R. What Do Generalized Entropies Look Like? An Axiomatic Approach for Complex, Non-Ergodic Systems. In Recent Advances in Generalized Information Measures and Statistics; Kowalski, A.M., Rossignoli, R., Curado, E.M.F., Eds.; Bentham Science eBook: Sharjah, United Arab Emirates, 2013; in press.
4. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. **1948**, 27, 379–423, 623–656.
5. Khinchin, A.I. Mathematical Foundations of Information Theory; Dover Publications: Mineola, NY, USA, 1957.
6. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. **1988**, 52, 479–487.
7. Anteneodo, C.; Plastino, A.R. Maximum entropy approach to stretched exponential probability distributions. J. Phys. A: Math. Gen. **1999**, 32, 1089–1097.
8. Kaniadakis, G. Statistical mechanics in the context of special relativity. Phys. Rev. E **2002**, 66, 056125.
9. Curado, E.M.F.; Nobre, F.D. On the stability of analytic entropic forms. Physica A: Stat. Mech. Appl. **2004**, 335, 94–106.
10. Tsekouras, G.A.; Tsallis, C. Generalized entropy arising from a distribution of q indices. Phys. Rev. E **2005**, 71, 046144.
11. Hanel, R.; Thurner, S. Generalized Boltzmann factors and the maximum entropy principle: Entropies for complex systems. Physica A: Stat. Mech. Appl. **2007**, 380, 109–114.
12. Ubriaco, M.R. Entropies based on fractional calculus. Phys. Lett. A **2009**, 373, 2516–2519.
13. Tsallis, C. Introduction to Nonextensive Statistical Mechanics; Springer: New York, NY, USA, 2009.
14. Shafee, F. Lambert function and a new non-extensive form of entropy. IMA J. Appl. Math. **2007**, 72, 785–800.
15. Hanel, R.; Thurner, S.; Gell-Mann, M. Generalized entropies and the transformation group of superstatistics. Proc. Natl. Acad. Sci. USA **2011**, 108, 6390–6394.
16. Hanel, R.; Thurner, S.; Gell-Mann, M. Generalized entropies and logarithms and their duality relations. Proc. Natl. Acad. Sci. USA **2012**, 109, 19151–19154.
17. Lesche, B. Instabilities of Rényi entropies. J. Stat. Phys. **1982**, 27, 419–422.
18. Abe, S. Stability of Tsallis entropy and instabilities of Rényi and normalized Tsallis entropies. Phys. Rev. E **2002**, 66, 046134.
19. Jizba, P.; Arimitsu, T. Observability of Rényi’s entropy. Phys. Rev. E **2004**, 69, 026128.
20. Kaniadakis, G.; Scarfone, A.M. Lesche stability of κ-entropy. Physica A: Stat. Mech. Appl. **2004**, 340, 102–109.
21. Hanel, R.; Thurner, S.; Tsallis, C. On the robustness of q-expectation values and Rényi entropy. Europhys. Lett. **2009**, 85, 20005.
22. Tsallis, C.; Gell-Mann, M.; Sato, Y. Asymptotically scale-invariant occupancy of phase space makes the entropy S_q extensive. Proc. Natl. Acad. Sci. USA **2005**, 102, 15377–15382.

© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

**MDPI and ACS Style**

Hanel, R.; Thurner, S. Generalized (c,d)-Entropy and Aging Random Walks. *Entropy* **2013**, *15*, 5324-5337.
https://doi.org/10.3390/e15125324
