A Brief Review of Generalized Entropies

Entropy appears in many contexts (thermodynamics, statistical mechanics, information theory, measure-preserving dynamical systems, topological dynamics, etc.) as a measure of different properties (energy that cannot produce work, disorder, uncertainty, randomness, complexity, etc.). In this review, we focus on the so-called generalized entropies, which from a mathematical point of view are nonnegative functions defined on probability distributions that satisfy the first three Shannon–Khinchin axioms: continuity, maximality and expansibility. While these three axioms are expected to be satisfied by all macroscopic physical systems, the fourth axiom (separability or strong additivity) is in general violated by non-ergodic systems with long-range forces, which has been the main reason for exploring weaker axiomatic settings. Currently, non-additive generalized entropies are also being used to study new phenomena in complex dynamics (multifractality), quantum systems (entanglement), soft sciences, and more. Besides going through the axiomatic framework, we review the characterization of generalized entropies via two scaling exponents introduced by Hanel and Thurner. In turn, the first of these exponents is related to the diffusion scaling exponent of diffusion processes, as we also discuss. Applications are addressed as the description of the main generalized entropies advances.


Introduction
The concept of entropy was introduced by Clausius [1] in thermodynamics to measure the amount of energy in a system that cannot produce work, and was given an atomic interpretation in the foundational works of statistical mechanics and gas dynamics by Boltzmann [2,3], Gibbs [4], and others. Since then, entropy has played a central role in many-particle physics, notably in the description of non-equilibrium processes through the second principle of thermodynamics and the principle of maximum entropy production [5,6]. Moreover, Shannon made entropy the cornerstone on which he built his theory of information and communication [7]. Entropy and the associated entropic forces also play a central role in recent innovative approaches to artificial intelligence and collective behavior [8,9]. Our formalism is information-theoretic (i.e., entropic forms are functions of probability distributions) owing to the mathematical properties that we discuss along the way, but it can be translated to a physical context through the concept of microstate.
The prototype of entropy that we are going to consider below is the Boltzmann-Gibbs-Shannon (BGS) entropy,

S_BGS(p_1, ..., p_W) = -k ∑_{i=1}^{W} p_i ln p_i. (1)

In its physical interpretation, k = 1.3807 × 10^{-23} J/K is the Boltzmann constant, W is the number of microstates consistent with the macroscopic constraints of a given thermodynamical system, and p_i is the probability (i.e., the asymptotic fraction of time) that the system is in the microstate i. In information theory, k is set equal to 1 for mathematical convenience, as we do hereafter, and S_BGS measures the average information conveyed by the outcomes of a random variable with probability distribution {p_1, ..., p_W}. We use natural logarithms unless otherwise stated, although base-2 logarithms are the natural choice in binary communications (the difference being the units, nats or bits, respectively). Remarkably enough, Shannon proved in Appendix B of his seminal paper [7] that Equation (1) follows necessarily from three properties or axioms (actually, four are needed; more on this below).
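As a quick numerical companion to Equation (1) with k = 1, the following Python sketch (the function name is ours, not from the references) computes S_BGS in nats or bits; terms with p_i = 0 are handled by the usual convention 0 ln 0 = 0:

```python
import math

def bgs_entropy(p, base=math.e):
    """Boltzmann-Gibbs-Shannon entropy of a discrete distribution (k = 1).

    Terms with p_i = 0 contribute 0, by the convention 0 * log 0 = 0.
    base=2 gives bits; the default natural logarithm gives nats.
    """
    assert abs(sum(p) - 1.0) < 1e-12, "probabilities must sum to 1"
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

# For the uniform distribution over W microstates, S_BGS = ln W.
W = 8
uniform = [1.0 / W] * W
print(bgs_entropy(uniform))          # ln 8, approx. 2.0794 nats
print(bgs_entropy(uniform, base=2))  # approx. 3 bits
```

The uniform case reproduces Boltzmann's S = k ln W, which is also the maximal value over all distributions on W outcomes.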
BGS entropy was later generalized by other "entropy-like" quantities in dynamical systems (Kolmogorov-Sinai entropy [10], etc.), information theory (Rényi entropy [11], etc.), and statistical physics (Tsallis entropy [12], etc.), to mention the most familiar ones (see, e.g., [13] for an account of some entropy-like quantities and their applications, especially in time series analysis). As with S_BGS, the essence of these new entropic forms was distilled into a small number of properties that allow sorting them out in a more systematic way [13,14]. Currently, the uniqueness of S_BGS is derived from the four Shannon-Khinchin axioms (Section 2). However, the fourth axiom, called the separability or strong additivity axiom (which implies additivity, i.e., S(A_1 + A_2) = S(A_1) + S(A_2), where A_1 + A_2 stands for a system composed of any two probabilistically independent subsystems A_1 and A_2), is violated by physical systems with long-range interactions [15,16]. This poses the question of which mathematical properties the "generalized entropies" satisfying only the other three axioms have. These are the primary candidates for extensive entropic forms, i.e., functions S such that S(B_1 ∪ B_2) = S(B_1) + S(B_2), the shorthand B_1 ∪ B_2 standing for the physical system composed of the subsystems B_1 and B_2. Note that B_1 ∪ B_2 ≠ B_1 + B_2 in non-ergodic interacting systems simply because the number of states in B_1 ∪ B_2 is different from the number of states in B_1 + B_2. A related though different question is how to weaken the separability axiom so as to identify the extensive generalized entropies; we come back briefly to this point in Section 2 when speaking of the composability property.
Along with S_BGS, typical examples of generalized entropies are the Tsallis entropy [12],

T_q(p_1, ..., p_W) = (1/(q - 1)) (1 - ∑_{i=1}^{W} p_i^q) (2)

(q ∈ R, q ≠ 1, with the proviso that for q < 0 terms with p_i = 0 are omitted), and the Rényi entropy [11],

R_q(p_1, ..., p_W) = (1/(1 - q)) ln ∑_{i=1}^{W} p_i^q (3)

(q ≥ 0, q ≠ 1). The Tsallis and Rényi entropies are related to the BGS entropy through the limits

lim_{q→1} T_q = lim_{q→1} R_q = S_BGS, (4)

this being one of the reasons they are considered generalizations of the BGS entropy. Both T_q and R_q have found interesting applications [15,17]; in particular, the parametric weighting of the probabilities in their definitions endows data analysis with additional flexibility. Other generalized entropies that we consider in this paper are related to ongoing work on graphs [18]. Further instances of generalized entropies are also referred to below. Let us remark at this point that S_BGS, T_q, R_q and other generalized entropies considered in this review can be viewed as special cases of the (h, φ)-entropies introduced in [19] for the study of asymptotic probability distributions. In turn, (h, φ)-entropies were generalized to quantum information theory in [20]. Quantum (h, φ)-entropies, which include von Neumann's entropy [21] as well as the quantum versions of Tsallis' and Rényi's entropies, have been applied, for example, to the detection of quantum entanglement (see [20] and references therein). In this review, we do not consider quantum entropies, which would require advanced mathematical concepts, but only entropies defined on classical, discrete and finite probability distributions. If necessary, the transition to continuous distributions is done by formally replacing probability mass functions by densities and sums by integrals. For other approaches to the concept of entropy in more general settings, see [22][23][24][25].
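The two families and their common q → 1 limit can be checked numerically with a few lines of Python (helper names are ours):

```python
import math

def tsallis(p, q):
    """Tsallis entropy T_q; terms with p_i = 0 are omitted."""
    if q == 1:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)  # S_BGS
    return (1.0 - sum(pi ** q for pi in p if pi > 0)) / (q - 1.0)

def renyi(p, q):
    """Rényi entropy R_q (q >= 0, q != 1)."""
    return math.log(sum(pi ** q for pi in p if pi > 0)) / (1.0 - q)

# Both families approach S_BGS as q -> 1:
p = [0.5, 0.3, 0.2]
s_bgs = tsallis(p, 1)
for q in (0.999, 1.001):
    assert abs(tsallis(p, q) - s_bgs) < 2e-3
    assert abs(renyi(p, q) - s_bgs) < 2e-3
```

Tightening q toward 1 shrinks both deviations linearly in |q - 1|, in agreement with the limits above.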
Generalized entropies can be characterized by two scaling exponents in the limit W → ∞, which we call Hanel-Thurner exponents [16]. For the simplest generalized entropies, which include T_q but not R_q (see Section 2), these exponents allow establishing a relationship between the abstract concept of generalized entropy and the physical properties of the system they describe through their asymptotic scaling behavior in the thermodynamic limit. That is, the two exponents label equivalence classes of systems, which are universal in the sense that the corresponding entropies have the same thermodynamic limit. In this regard, it is interesting to mention that, for any pair of Hanel-Thurner exponents (at least within certain ranges), there is a generalized entropy with those exponents, i.e., systems with the sought asymptotic behavior. Furthermore, the first Hanel-Thurner exponent also allows establishing a second relation with physical properties, namely, with the diffusion scaling exponents of diffusion processes, under some additional assumptions.
The rest of this review is organized as follows. The concept of generalized entropy, along with some formal preliminaries and its basic properties, is discussed in Section 2. By way of illustration, we discuss in Section 3 the Tsallis and Rényi entropies, as well as more recent entropic forms. The choice of the former is justified by their uniqueness properties under quite natural axiomatic formulations. The Hanel-Thurner exponents are introduced in Section 4, where their computation is also exemplified. Their aforementioned relation to diffusion scaling exponents is explained in Section 5. The main messages are recapped in Section 6. There is no section devoted to applications; rather, these are progressively addressed as the different generalized entropies are presented. The main text is supplemented with three appendices at the end of the paper.

Generalized Entropies
Let P be the set of probability mass distributions {p_1, ..., p_W} for all W ≥ 2. For any function H : P → R_+ (R_+ being the nonnegative real numbers), the Shannon-Khinchin axioms for an entropic form H are the following.
(SK1) Continuity: H(p_1, ..., p_W) depends continuously on all its arguments.

(SK2) Maximality: H(p_1, ..., p_W) ≤ H(1/W, ..., 1/W), i.e., H is maximal at the uniform distribution.

(SK3) Expansibility: H(p_1, ..., p_W, 0) = H(p_1, ..., p_W).

(SK4) Separability (or strong additivity): for two random variables X and Y with joint probabilities p_ij and marginals p_i· = ∑_j p_ij, p_·j = ∑_i p_ij,

H(X, Y) = H(X) + H(Y|X),

where H(Y|X) is the entropy of Y conditional on X. In particular, if X and Y are independent (i.e., p_ij = p_i· p_·j), then H(Y|X) = H(Y) and

H(X, Y) = H(X) + H(Y). (5)

A function H such that Equation (5) holds (for independent random variables X and Y) is called additive. Physicists prefer writing X + Y for composed systems with microstate probabilities p_ij = p_i· p_·j; this condition holds only approximately, for weakly interacting systems X and Y.
With regard to Equation (5), let us recall that, for two general random variables X and Y, the difference I(X; Y) = H(X) + H(Y) − H(X, Y) ≥ 0 is the mutual information of X and Y. It holds that I(X; Y) = 0 if and only if X and Y are independent [26].
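The defining identity I(X; Y) = H(X) + H(Y) − H(X, Y) is easy to exercise numerically. A small Python sketch (with illustrative joint distributions of our own choosing):

```python
import math

def entropy(p):
    """BGS entropy (nats) of a flat list of probabilities."""
    return -sum(x * math.log(x) for x in p if x > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint distribution p_ij (nested list)."""
    px = [sum(row) for row in joint]                      # marginal p_i.
    py = [sum(col) for col in zip(*joint)]                # marginal p_.j
    hxy = entropy([pij for row in joint for pij in row])
    return entropy(px) + entropy(py) - hxy

# Independent variables: p_ij = p_i. * p_.j  =>  I(X;Y) = 0.
px, py = [0.6, 0.4], [0.2, 0.5, 0.3]
joint_indep = [[a * b for b in py] for a in px]
print(abs(mutual_information(joint_indep)) < 1e-12)  # True

# A correlated joint distribution has I(X;Y) > 0:
joint_corr = [[0.4, 0.1, 0.0], [0.0, 0.2, 0.3]]
print(mutual_information(joint_corr) > 0)            # True
```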
More generally, a function H such that

H(p_1 q_1, ..., p_1 q_U, p_2 q_1, ..., p_2 q_U, ..., p_W q_1, ..., p_W q_U)
= H(p_1, ..., p_W) + H(q_1, ..., q_U) + (1 − α) H(p_1, ..., p_W) H(q_1, ..., q_U) (6)

(α > 0) is called α-additive. With the same notation as above, we can write this property as

H(X, Y) = H(X) + H(Y) + (1 − α) H(X) H(Y), (7)

where, again, X and Y are independent random variables. In a statistical mechanical context, X and Y may also stand for two probabilistically independent (or weakly interacting) physical systems. If α = 1, we recover additivity (Equation (5)). In turn, additivity and α-additivity are special cases of composability [15,27]:

H(X, Y) = Φ(H(X), H(Y)), (8)

with the same caveats for X and Y. Here, Φ is a symmetric function of two variables. Composability was proposed in [15] to replace axiom SK4. Interestingly, it has been proved in [27] that, under some technical assumptions, the only composable generalized entropy of the form in Equation (10) is T_q, up to a multiplicative constant. As mentioned in Section 1, a function F : P → R_+ satisfying axioms SK1-SK4 is necessarily of the form F(p_1, ..., p_W) = k S_BGS(p_1, ..., p_W) for every W, where k is a positive constant ([28], Theorem 1). The same conclusion can be derived using other equivalent axioms [14,29]. For instance, Shannon used continuity, the property that H(1/n, ..., 1/n) increases with n, and a property called grouping [29] or decomposability [30], which he defined graphically in Figure 6 of [7]:

H(p_1, p_2, p_3, ..., p_W) = H(p_1 + p_2, p_3, ..., p_W) + (p_1 + p_2) H(p_1/(p_1 + p_2), p_2/(p_1 + p_2)). (9)

This property allows reducing the computation of H(p_1, ..., p_W) to the computation of the entropy of dichotomic random variables. According to ([15], Section 2.1.2.7), Shannon omitted to formulate in his uniqueness theorem the condition in Equation (5), X and Y being independent random variables.
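As a concrete instance of α-additivity, the Tsallis entropy T_q satisfies Equation (6) with α = q, which can be confirmed numerically (Python sketch; helper names are ours):

```python
import math

def tsallis(p, q):
    """Tsallis entropy T_q of a probability distribution (q != 1)."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def product_distribution(p, r):
    """Joint distribution {p_i r_j} of two independent variables X and Y."""
    return [a * b for a in p for b in r]

# T_q satisfies Equation (6) with alpha = q, i.e., it is q-additive:
q = 0.7
p, r = [0.5, 0.3, 0.2], [0.8, 0.2]
lhs = tsallis(product_distribution(p, r), q)
rhs = tsallis(p, q) + tsallis(r, q) + (1 - q) * tsallis(p, q) * tsallis(r, q)
assert abs(lhs - rhs) < 1e-12
```

The identity is exact (not asymptotic): it follows from the factorization ∑_{ij} (p_i r_j)^q = (∑_i p_i^q)(∑_j r_j^q).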
Nonnegative functions defined on P that satisfy axioms SK1-SK3 are called generalized entropies [16]. In the simplest situation, a generalized entropy has the sum property [14], i.e., the algebraic form

F_g(p_1, ..., p_W) = ∑_{i=1}^{W} g(p_i), (10)

where g is a real function. The following propositions are immediate.
Note that Proposition (iv) follows from the symmetry and concavity of F_g (since the unique maximum of F_g must occur at equal probabilities).
We conclude from Propositions (ii), (iv) and (v) that, for F_g to be a generalized entropy, the following three conditions suffice:

(C1) g is continuous on [0, 1];
(C2) g is concave (∩-convex);
(C3) g(0) = 0.

As in [16], we say that a macroscopic statistical system is admissible if it is described by a generalized entropy F_g of the form in Equation (10) such that g verifies Conditions (C1)-(C3). By extension, we also say that the generalized entropy F_g is admissible. Admissible systems and generalized entropies are the central subject of this review. Clearly, S_BGS is admissible because it corresponds to

g(x) = −x ln x.

On the other hand, T_q corresponds to

g(x) = (x − x^q)/(q − 1).

For T_q to be admissible, Condition (C1) requires q ≥ 0 and Condition (C3) requires q > 0. An example of a function F : P → R_+ with the sum property that does not qualify as an admissible generalized entropy was used in [31] to classify sleep stages.
Other generalized entropies that are considered below have the form

F_{G,g}(p_1, ..., p_W) = G(∑_{i=1}^{W} g(p_i)), (14)

where G is a continuous monotonic function, and g is continuous with g(0) = 0. By definition, F_{G,g} is also symmetric, and Proposition (iii) holds with the obvious changes. However, the concavity of g is no longer a sufficient condition for F_{G,g} to be a generalized entropy. Such is the case of the Rényi entropy R_q (Equation (3)); here

G(u) = (1/(1 − q)) ln u, g(x) = x^q,

but g(x) (and, hence, ∑_{i=1}^{W} g(p_i)) is not ∩-convex for q > 1. Furthermore, note that axiom SK3 requires q > 0 for R_q to be a generalized entropy.
Since Equation (10) is a special case of Equation (14) (set G to be the identity map id(u) = u), we can refer to both cases just by using the notation F_{G,g}, as we do hereafter.
We say that two probability distributions {p_1, ..., p_W} and {p'_1, ..., p'_W} are close if

‖p − p'‖_1 = ∑_{i=1}^{W} |p_i − p'_i| ≤ δ,

where 0 < δ ≪ 1; other norms, such as the two-norm and the max-norm, will do as well since they are all equivalent in the metric sense. A function F : P → R_+ is said to be Lesche-stable if, for all W and ε > 0, there exists δ > 0 such that

‖p − p'‖_1 ≤ δ ⇒ |F(p_1, ..., p_W) − F(p'_1, ..., p'_W)| / F_max < ε,

where F_max is the maximum value of F on distributions of W outcomes. Lesche stability is called experimental robustness in [15] because it guarantees that similar experiments performed on similar physical systems provide similar results for the function F. According to [16], all admissible systems are Lesche-stable.

Examples of Generalized Entropies
By way of illustration, we focus in this section on two classical generalized entropies as well as on some newer ones. The classical examples are the Tsallis entropy and the Rényi entropy, because they have been extensively studied in the literature, also from an axiomatic point of view. As it turns out, they are unique under some natural assumptions, such as additivity, α-additivity or composability (see below for details). The newer entropies are related to potential applications of the concept of entropy to graph theory [18]. Other examples of generalized entropies are listed in Appendix A for further reference.

Tsallis Entropy
A simple way to introduce Tsallis' entropy as a generalization of the BGS entropy is the following [15]. Given q ∈ R, q ≠ 1, define the q-logarithm of a real number x > 0 as

ln_q x = (x^{1−q} − 1)/(1 − q),

while ln_1 x = ln x is defined by continuity, since lim_{q→1} ln_q x = ln x. If the logarithm in the definition of S_BGS, Equation (1), is replaced by ln_q, then we obtain the Tsallis entropy:

T_q(p_1, ..., p_W) = ∑_{i=1}^{W} p_i ln_q (1/p_i) = (1/(q − 1)) (1 − ∑_{i=1}^{W} p_i^q). (17)

As noted before, q > 0 is required for T_q to be an admissible generalized entropy. Alternatively, the Tsallis entropy can also be obtained via the q-derivative [15]. Although Tsallis proposed his entropy (Equation (17)) in 1988 to go beyond standard statistical mechanics [12], basically the same formula had already been proposed in 1967 by Havrda and Charvát (with a different multiplying factor) in the realm of cybernetics and control theory [32].
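The identity T_q = ∑_i p_i ln_q(1/p_i), which mirrors S_BGS = ∑_i p_i ln(1/p_i), can be verified in a few lines of Python (function names are ours):

```python
import math

def qlog(x, q):
    """q-logarithm: ln_q x = (x**(1-q) - 1)/(1-q); ln_1 x = ln x by continuity."""
    if q == 1:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def tsallis_via_qlog(p, q):
    """T_q = sum_i p_i ln_q(1/p_i), mirroring S_BGS = sum_i p_i ln(1/p_i)."""
    return sum(pi * qlog(1.0 / pi, q) for pi in p if pi > 0)

p = [0.5, 0.3, 0.2]
# ln_q -> ln as q -> 1:
assert abs(qlog(2.0, 1.0001) - math.log(2.0)) < 1e-4
# Agreement with the closed form (1 - sum p_i^q)/(q - 1):
q = 0.5
closed = (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)
assert abs(tsallis_via_qlog(p, q) - closed) < 1e-12
```

Note that the q-logarithm must be applied to 1/p_i (not to p_i directly) to reproduce Equation (17) exactly.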
Some basic properties of T_q follow.
(T5) Similar to what happens with the BGS entropy, the Tsallis entropy can be uniquely determined (except for a multiplicative positive constant) by a small number of axioms. Thus, Abe [35] characterized the Tsallis entropy by: (i) continuity; (ii) the increasing monotonicity of T_q(1/W, ..., 1/W) with respect to W; (iii) expansibility; and (iv) a property involving conditional entropies. Dos Santos [36], on the other hand, used the previous Axioms (i) and (ii), q-additivity, and a generalization of the grouping axiom (Equation (9)). Suyari [37] derived T_q from the first three Shannon-Khinchin axioms and a generalization of the fourth one. Perhaps the most economical characterization of T_q was given by Furuichi [38]; it consists of continuity, symmetry under permutations of p_1, ..., p_W, and a property called q-recursivity. As mentioned in Section 2, Tsallis entropy was recently shown [27] to be the only composable generalized entropy of the form in Equation (10), under some technical assumptions. Further axiomatic characterizations of the Tsallis entropy can be found in [39]. An observable of a thermodynamical (i.e., many-particle) system, say its energy or entropy, is said to be extensive if (among other characterizations), for a large number N of particles, that observable is (asymptotically) proportional to N. For example, for a system whose particles are weakly interacting (think of a dilute gas), the additive S_BGS is extensive, whereas the non-additive T_q (q ≠ 1) is non-extensive. The same happens with ergodic systems [40]. However, according to [15], for a non-ergodic system with strong correlations, S_BGS can be non-extensive while T_q can be extensive for a particular value of q; such is the case of a microcanonical spin system on a network with growing constant connectancy [40]. This is why T_q represents a physically relevant generalization of the traditional S_BGS. Recall that axioms SK1-SK3 are expected to hold true also in strongly interacting systems.

Rényi Entropy
A simple way to introduce Rényi's entropy as a generalization of S_BGS is the following [17]. By definition, the BGS entropy of the probability distribution {p_1, ..., p_W} (or of a random variable X with that probability distribution) is the linear average of the information function ln(1/p_i) or, equivalently, the expected value of the random variable ln(1/p(X)):

S_BGS = ∑_{i=1}^{W} p_i ln(1/p_i) = E[ln(1/p(X))].

In the general theory of expected values, for any invertible function φ and realizations x_1, ..., x_W of X in the definition domain of φ, an expected value can be defined as

E_φ[X] = φ^{-1}(∑_{i=1}^{W} p_i φ(x_i)).

Applying this definition to ln(1/p(X)), we obtain

E_φ[ln(1/p(X))] = φ^{-1}(∑_{i=1}^{W} p_i φ(ln(1/p_i))).

If this generalized average has to be additive for independent events, i.e., if it has to satisfy Equation (6) with α = 1, then φ must be either linear,

φ(x) = c_1 x,

or exponential,

φ(x) = c_2^{(1−q)x},

where c_1, c_2 are positive constants, and q > 0, q ≠ 1. The first case leads to S_BGS, Equation (1). The second case leads to the Rényi entropy (actually, a one-parameter family of entropies) R_q, Equation (3), after choosing c_2 = e.
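The derivation above can be replayed numerically with the quasi-linear (Kolmogorov-Nagumo) mean E_φ: a linear φ returns S_BGS, and the exponential φ(x) = e^{(1−q)x} returns R_q (Python sketch; names are ours):

```python
import math

def kn_mean(values, probs, phi, phi_inv):
    """Kolmogorov-Nagumo (quasi-linear) mean: phi^{-1}(sum_i p_i phi(x_i))."""
    return phi_inv(sum(p * phi(x) for p, x in zip(probs, values)))

p = [0.5, 0.3, 0.2]
info = [math.log(1.0 / pi) for pi in p]   # the information function ln(1/p_i)

# Linear phi reproduces the ordinary average, i.e., S_BGS:
s_bgs = kn_mean(info, p, lambda x: x, lambda y: y)
assert abs(s_bgs - (-sum(pi * math.log(pi) for pi in p))) < 1e-12

# Exponential phi(x) = e^{(1-q)x} reproduces the Rényi entropy R_q:
q = 2.0
r_q = kn_mean(info, p,
              lambda x: math.exp((1 - q) * x),
              lambda y: math.log(y) / (1 - q))
closed = math.log(sum(pi ** q for pi in p)) / (1 - q)
assert abs(r_q - closed) < 1e-12
```

The key step is that φ(ln(1/p_i)) = p_i^{q−1}, so the weighted sum collapses to ∑_i p_i^q, exactly the argument of the logarithm in Equation (3).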
Next, we summarize some important properties of the Rényi entropy.
(R1) R_q is additive by construction.
(R3) R_q is ∩-convex for 0 < q ≤ 1, and it is neither ∩-convex nor ∪-convex for q > 1. Figure 2 plots R_q(p, 1 − p) for q = 0.5, 1, 2 and 5.

(R4) R_q is Lesche-unstable for all q > 0, q ≠ 1 [49].

(R5) The entropies R_q are monotonically decreasing with respect to the parameter q for any probability distribution, i.e., R_q ≥ R_{q'} for q ≤ q'. This property follows from the formula

dR_q/dq = −(1/(1 − q)^2) ∑_{i=1}^{W} z_i ln(z_i/p_i), with z_i = p_i^q / ∑_{j=1}^{W} p_j^q,

the sum on the right-hand side being a Kullback-Leibler divergence and, hence, nonnegative. However, the axiomatic characterizations of the Rényi entropy are not as simple as those for the Tsallis entropy. See [27,51,52] for some contributions in this regard. For some values of q, R_q has particular names. Thus, R_0 = ln W is called the Hartley or max-entropy, which coincides numerically with S_BGS for the uniform probability distribution. We saw in (R2) that R_q converges to the BGS entropy in the limit q → 1. R_2 = −ln ∑_{i=1}^{W} p_i^2 is called the collision entropy. In the limit q → ∞, R_q converges to the min-entropy

R_∞ = −ln max_{1≤i≤W} p_i = min_{1≤i≤W} ln(1/p_i).

The name of R_∞ is due to property (R5). Rényi entropy has found interesting applications in random search [53], information theory (especially in source coding [54,55]), cryptography [56], time series analysis [57], and classification [46,58], as well as in statistical signal processing and machine learning [17].
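Property (R5) and the named special cases can be checked directly (Python sketch; the function name is ours):

```python
import math

def renyi(p, q):
    """Rényi entropy R_q, including the limit cases q = 1 and q = infinity."""
    if q == 1:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)  # S_BGS
    if math.isinf(q):
        return -math.log(max(p))                              # min-entropy
    return math.log(sum(pi ** q for pi in p if pi > 0)) / (1 - q)

p = [0.5, 0.3, 0.2]
qs = [0, 0.5, 1, 2, 5, math.inf]
values = [renyi(p, q) for q in qs]

# (R5): R_q is non-increasing in q.
assert all(a >= b - 1e-12 for a, b in zip(values, values[1:]))
# Special cases: max-entropy, collision entropy, min-entropy.
assert abs(renyi(p, 0) - math.log(len(p))) < 1e-12                  # R_0 = ln W
assert abs(renyi(p, 2) + math.log(sum(x * x for x in p))) < 1e-12   # R_2
assert abs(renyi(p, math.inf) + math.log(max(p))) < 1e-12           # R_inf
```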
As for H_2, this probability functional is of the type in Equation (14), with G(u) = e^u. To prove that H_2 is a generalized entropy, note that ln H_2 satisfies axioms SK1-SK3 for the same reasons as H_1 does. Therefore, the same happens with H_2, on account of the exponential function being continuous (SK1), monotonically increasing (SK2), and single-valued (SK3). Finally, H_3 is of the type in Equation (10). Since H_3 = 1 + ln H_2, it is a generalized entropy because, as shown above, ln H_2 satisfies axioms SK1-SK3.
The entropies H̄_i(p, 1 − p) (i = 1, 2, 3), shown in Figure 4, approximate S_BGS(p, 1 − p) measured in bits very well. In particular, the relative error in the approximation of S_BGS(p, 1 − p) by H̄_2(p, 1 − p) is less than 2.9 × 10^{-4}, so their graphs overlap when plotted. A further description of the entropies in Equations (18)-(20) is beyond the scope of this section. Let us only mention in this regard that these entropies can be extended into the realm of acyclic directed graphs.

Hanel-Thurner Exponents
All generalized entropies F_{G,g} group into classes labeled by two exponents (c, d) introduced by Hanel and Thurner [16], which are determined by the limits

lim_{W→∞} F_{G,g}(p_1, ..., p_{λW}) / F_{G,g}(p_1, ..., p_W) = λ^{1−c} (25)

(W being, as before, the cardinality of the probability distribution or the total number of microstates in the system, and λ > 1), and

lim_{W→∞} F_{G,g}(p_1, ..., p_{W^{1+a}}) / [W^{a(1−c)} F_{G,g}(p_1, ..., p_W)] = (1 + a)^d (26)

(a > 0). Note that the limit in Equation (26) does not actually depend on c. The limits in Equations (25) and (26) can be computed via the asymptotic equipartition property [26]: asymptotically, p_i ≃ 1/W for ever larger W (thermodynamic limit). Set now x = 1/W to derive

lim_{x→0} G(λ g(x/λ)/x) / G(g(x)/x) = λ^{1−c} (27)

and

lim_{x→0} G(g(x^{1+a})/x^{1+a}) / [x^{−a(1−c)} G(g(x)/x)] = (1 + a)^d. (28)

Clearly, the scaling exponents c, d of a generalized entropy F_{G,g} depend on the behavior of g in an infinitesimal neighborhood (0, ε] of 0 (i.e., on g(ε) with 0 < ε ≪ 1), as well as on the properties of G if G ≠ id. We call (c, d) the Hanel-Thurner (HT) exponents of the generalized entropy F_{G,g}.
When G = id, Equations (27) and (28) abridge to (after replacing λ^{−1} by z)

lim_{x→0} g(zx)/g(x) = z^c (29)

and

lim_{x→0} g(x^{1+a}) / (x^{ac} g(x)) = (1 + a)^d, (30)

respectively. In this case, 0 < c ≤ 1, while d can be any real number. If c = 1, the concavity of g implies d ≥ 0 [16]. The physical properties of admissible systems are uniquely characterized by their HT exponents, i.e., by their asymptotic properties in the limit W → ∞ [16]. In this sense, we can also speak of the universality class (c, d).
By way of illustration, we now derive the HT exponents of S_BGS, T_q and R_q.
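For entropies of the sum form (G = id), the first HT exponent can also be estimated numerically from Equation (29), c = ln(g(zx)/g(x))/ln z at small x. A Python sketch (names are ours) recovering c = 1 for S_BGS and c = q for T_q with 0 < q < 1:

```python
import math

def c_exponent(g, z=0.5, x=1e-300):
    """Estimate the first HT exponent via g(z*x)/g(x) ~ z^c as x -> 0 (Equation (29))."""
    return math.log(g(z * x) / g(x)) / math.log(z)

def g_bgs(x):
    """S_BGS: g(x) = -x ln x, expected c = 1."""
    return -x * math.log(x)

def g_tsallis(x, q=0.4):
    """T_q: g(x) = (x - x^q)/(q - 1), expected c = q for 0 < q < 1."""
    return (x - x ** q) / (q - 1.0)

print(round(c_exponent(g_bgs), 2))      # close to 1 (logarithmically slow convergence)
print(round(c_exponent(g_tsallis), 4))  # close to 0.4
```

The very small x is needed for g_bgs because the ratio carries a ln z / ln x correction that vanishes only logarithmically; for g_tsallis, the power-law term x^q dominates almost immediately.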
As for the generalized entropies H_1, H_2, and H_3 considered in Section 3.3, we show in Appendix B that their HT exponents are (1, 1), (0, 0), and (1, 1), respectively. Thus, H_1 and H_3 belong to the same universality class as S_BGS, while the HT exponents of H_2 and R_q (both of the type in Equation (14)) are different. Moreover, the interested reader will find in Table 1 of [16] the HT exponents of the generalized entropies listed in Appendix A.
An interesting issue that arises at this point is the inverse question: Given c ∈ (0, 1] and d ∈ R, is there an admissible system whose HT exponents are precisely (c, d)? The answer is yes, at least under some restrictions on the values of c and d. Following [16], we show in Appendix C that, under such restrictions, the "generalized (c, d)-entropy"

S_{c,d}(p_1, ..., p_W) = A ∑_{i=1}^{W} Γ(1 + d, 1 − c ln p_i) (32)

has HT exponents (c, d). Here, A > 0 and Γ is the incomplete Gamma function (Section 6.5 of [59]), that is,

Γ(s, x) = ∫_x^∞ t^{s−1} e^{−t} dt.

Several application cases where generalized (c, d)-entropies are relevant have been discussed by Hanel and Thurner in [40] (super-diffusion, spin systems, binary processes, and self-organized critical systems) and [60] (aging random walks, i.e., random walks whose transition rates between states are path- and time-dependent).
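Evaluating S_{c,d} numerically requires the upper incomplete Gamma function. The following Python sketch (names ours; the quadrature is deliberately naive, and the proportionality constant A is left free, so this is an unnormalized evaluation of the sum ∑_i Γ(1 + d, 1 − c ln p_i)):

```python
import math

def upper_inc_gamma(s, x, n=50_000, cutoff=60.0):
    """Upper incomplete gamma Gamma(s, x) = int_x^inf t^(s-1) e^(-t) dt,
    via a crude midpoint rule on [x, cutoff]; the tail beyond 60 is negligible."""
    h = (cutoff - x) / n
    total = 0.0
    for k in range(n):
        t = x + (k + 0.5) * h
        total += t ** (s - 1.0) * math.exp(-t) * h
    return total

# Sanity checks against closed forms: Gamma(1, x) = e^{-x}, Gamma(2, 0) = 1.
assert abs(upper_inc_gamma(1.0, 2.0) - math.exp(-2.0)) < 1e-6
assert abs(upper_inc_gamma(2.0, 0.0) - 1.0) < 1e-5

def s_cd(p, c, d, A=1.0):
    """Generalized (c, d)-entropy, A * sum_i Gamma(1 + d, 1 - c ln p_i)."""
    return A * sum(upper_inc_gamma(1.0 + d, 1.0 - c * math.log(pi))
                   for pi in p if pi > 0)

print(s_cd([0.5, 0.3, 0.2], c=0.8, d=0.5))
```

In production code one would use a library routine (e.g., SciPy's `gammaincc` times `gamma`) instead of hand-rolled quadrature; the sketch only serves to make Equation (32) concrete.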

Asymptotic Relation between the HT Exponent c and the Diffusion Scaling Exponent
In contrast to "non-interacting" systems, where both the additivity and the extensivity of the BGS entropy S_BGS hold, in the case of general interacting statistical systems these properties can no longer be simultaneously satisfied, requiring a more general concept of entropy [16,40]. Following [16] (Section 4), a possible generalization of S_BGS for admissible systems is defined via the two asymptotic scaling relations in Equations (29) and (30), i.e., the HT exponents c and d, respectively. These asymptotic exponents can be interpreted as a measure of deviation from the "non-interacting" case regarding the stationary behavior.

The Non-Stationary Regime
In this section, we describe a relation between the exponent c and a similar macroscopic measure that characterizes the system in the non-stationary regime, thus providing a meaningful interpretation of the exponent. The non-stationary behavior of a system can, in many cases, be described by the Fokker-Planck (FP) equation governing the time evolution of a probability density function p = p(x, t). In this continuous limit, the generalized entropy F_g is assumed to be of the form F_g[p] = ∫ g(p(s)) ds, where g is asymptotically characterized by Equation (29) and s = s(x) is a time-independent scalar function of the space coordinate x (for example, a potential) [61,62].
Going beyond the scope of the simplest FP equation, we consider systems for which the correlation among their (sub-)units can be taken into account by replacing the diffusive term ∂²_x p with an effective term ∂²_x Φ[p], where Φ[p] is a pre-defined functional of the probability density. Φ[p] can either be derived directly from the microscopic transition rules or be defined based on macroscopic assumptions. The resulting FP equation can be written as

∂_t p(x, t) = D ∂²_x Φ[p](x, t) + β ∂_x (p(x, t) ∂_x u(x)), (34)

where D, β are constants and u(x) is a time-independent external potential. For simplicity, hereafter we focus exclusively on one-dimensional FP equations. In the special case of Φ[p] = p and no external forces, Equation (34) reduces to the well-known linear diffusion equation

∂_t p(x, t) = D ∂²_x p(x, t). (35)

The above equation is invariant under the space-time scaling transformation

x → λx, t → λ^{1/γ} t, p → λp, (36)

with γ = 1/2 [63,64]. This scaling property opens up the possibility of a phenomenological and macroscopic characterization of anomalous diffusion processes [15,44] as well, which correspond to more complicated non-stationary processes described by FP equations of the form of Equation (34) with a non-trivial value of γ. With the help of the transformation in Equation (36), we can also classify correlated statistical systems according to the rate of spread of their probability density functions over time in the asymptotic limit and, thus, quantitatively describe their behavior in the non-stationary regime.
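The γ = 1/2 invariance of the linear case can be verified on the fundamental solution of the diffusion equation, the Gaussian heat kernel (Python sketch; we assume the scaling convention x → λx, t → λ^{1/γ}t, p → λp described in the text):

```python
import math

def heat_kernel(x, t, D=1.0):
    """Fundamental solution of the linear diffusion equation dp/dt = D d2p/dx2."""
    return math.exp(-x * x / (4.0 * D * t)) / math.sqrt(4.0 * math.pi * D * t)

# Scaling invariance p(x, t) = lambda * p(lambda*x, lambda^{1/gamma}*t), gamma = 1/2:
gamma = 0.5
lam = 3.0
for x, t in [(0.7, 0.2), (-1.5, 1.0), (0.0, 2.5)]:
    lhs = heat_kernel(x, t)
    rhs = lam * heat_kernel(lam * x, lam ** (1.0 / gamma) * t)
    assert abs(lhs - rhs) < 1e-12
```

The invariance is exact here: the λ factors from the exponent and the normalization cancel identically, which is what singles out γ = 1/2 for normal diffusion.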

Relation between the Stationary and Non-Stationary Regime
To reasonably and consistently relate the generalized entropies to the formalism of FP equations (corresponding to the stationary and non-stationary regimes, respectively), the functional Φ[p] has to be chosen such that the stationary solution of the general FP equation becomes equivalent to the Maximum Entropy (MaxEnt) probability distribution calculated with the generalized entropies. These MaxEnt distributions can be obtained analogously to the results by Hanel and Thurner in [16,40], where standard constrained optimization was used to find the most general form of MaxEnt distributions, which turned out to be p(ε) = E_{c,d,r}(−ε) with

E_{c,d,r}(x) = exp(−(d/(1 − c)) [W_k(B (1 − x/r)^{1/d}) − W_k(B)]). (37)

Here, B, r are constants depending only on the parameters c, d, and W_k is the kth branch of the Lambert-W function (specifically, branch k = 0 for d ≥ 0 and branch k = −1 for d < 0). The consistency criterion imposed above accords with the fact that many physical systems tend to converge towards a maximum entropy configuration over time; at the same time, it delimits the scope of our assumptions.
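Evaluating E_{c,d,r} numerically requires the Lambert-W function, which is not in the Python standard library (SciPy provides `scipy.special.lambertw`). A minimal, self-contained Newton-iteration sketch for the principal branch W_0 (function name ours):

```python
import math

def lambert_w0(x, tol=1e-14):
    """Principal branch W_0 of the Lambert-W function (w e^w = x, x >= -1/e),
    computed by Newton iteration; sufficient for evaluating E_{c,d,r} with d >= 0."""
    w = math.log1p(x) if x > -0.3 else -0.9   # rough starting point
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Defining property: W(x) e^{W(x)} = x.
for x in (0.1, 1.0, 5.0):
    w = lambert_w0(x)
    assert abs(w * math.exp(w) - x) < 1e-12
print(lambert_w0(1.0))   # the omega constant, approx. 0.567143
```

For d < 0, the branch W_{-1} would be needed instead, which requires a different starting point for the iteration.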
Consider systems described by Equation (34) in the absence of external force, i.e.,

∂_t p(x, t) = D ∂²_x Φ[p](x, t). (38)

By assuming that the corresponding stationary solutions can be identified with the MaxEnt distributions in Equation (37), it can be shown that the functional form of the effective density Φ[p] must take the specific form given in Equation (39), where additive and multiplicative constant factors are neglected for the sake of simplicity. Similar implicit equations have already been investigated in [61,62,65]. Once the asymptotic phase space volume scaling relation in Equation (29) holds, it can also be shown that the generalized FP equation (38) (with Φ as in Equation (39)) obeys the diffusion scaling property in Equation (36) with a non-trivial value of γ in the p → 0 asymptotic limit [66] (assuming additionally that a solution of Equation (38) exists, at least for an appropriate initial condition). A simple algebraic relation between the diffusion scaling exponent γ and the phase space volume scaling exponent c can be established [66]:

γ = 1/(1 + c). (40)

Therefore, this relation between c and γ defines families of FP equations which show asymptotic invariance under the scaling relation in Equation (36).

Conclusions
This review concentrates on the concept of generalized entropy (Section 2), which is relevant in the study of real thermodynamical systems and, more generally, in the theory of complex systems. Possibly the first example of a generalized entropy was introduced by Rényi (Section 3.2), who was interested in the most general information measure which is additive in the sense of Equation (5), with the random variables X and Y being independent. Another very popular generalized entropy was introduced by Tsallis as a generalization of the Boltzmann-Gibbs entropy (Section 3.1) to describe the properties of physical systems with long range forces and complex dynamics in equilibrium. Some more exotic generalized entropies are considered in Section 3.3, while other examples that have been published in the last two decades are gathered in Appendix A. Our approach was to a great extent formal, with special emphasis in Sections 2 and 3 on axiomatic formulations and mathematical properties. For expository reasons, applications are mentioned and the original references given as our description of the main generalized entropies progressed, rather than addressing them jointly in a separate section.
An alternative approach to generalized entropies, other than the axiomatic one (Section 2), consists in characterizing their asymptotic behavior in the thermodynamic limit W → ∞. Hanel and Thurner showed that two scaling exponents (c, d) suffice for admissible generalized entropies, i.e., those entropies of the form in Equation (10) with g continuous, concave and g(0) = 0 (Section 4); it holds that c ∈ (0, 1] and d ∈ R. As a result, the admissible systems fall into equivalence classes labeled by the exponents (c, d) of the corresponding entropies. Conversely, for each (c, d), there is a generalized entropy with those Hanel-Thurner exponents (see Equation (32)), at least for the most interesting value ranges.
It is also remarkable that, at asymptotically large times and volumes, there is a one-to-one relation between the equivalence class of generalized entropies with a given c ∈ (0, 1] and the equivalence class of Fokker-Planck equations in which the invariance in Equation (36) holds with γ = 1/(1 + c) ∈ [1/2, 1) (Section 5). This means that the equivalence classes of admissible systems can generally be mapped into anomalous diffusion processes and vice versa, thus conveying the same information about the system in the asymptotic limit (i.e., when p(x, t) → 0) [66]. A schematic visualization of this relation is provided in Figure 5. Moreover, the above result can actually be understood as a possible generalization of the Tsallis-Bukman relation [44].