Abstract
Mathematical generalizations of the additive Boltzmann–Gibbs–Shannon entropy formula have been numerous since the 1960s. In this paper we seek an interpretation of the single parameter in the Rényi and Tsallis q-entropy formulas in terms of physical properties of a finite-capacity heat bath and of fluctuations of temperature. Ideal gases of non-interacting particles are used as a demonstrative example.
1. Introduction
Entropy is a great tool in thermodynamics and statistical physics. Conceptualized originally by Clausius [1] as a state descriptor, which distinguishes it from heat, it became a basic principle for statistical and informatics calculations. Its classical form, the Boltzmann entropy, is in wide use [2,3,4,5]. Nevertheless, mostly in mathematical approaches to informatics, generalizations have occurred: altered, non-logarithmic formulas connecting the probability of a given state and the total entropy of the system [6,7,8,9,10].
Still, the classical logarithmic formula is the most widely used, having a number of properties that make it destined to be a convex measure of information and probability. The best known generalization is due to Alfréd Rényi [6], who constructed a formula that is also additive for factorizing joint probabilities, but abandoned the logarithm as the sole function with this property. There is a parameter, occurring as a power of the probability, denoted by q (or α in Rényi's original notation). The classical formula emerges formally in the q → 1 limit.
As interesting as the Rényi entropy is, its form is not an expectation value. The q-entropy as an expectation value was suggested by C. Tsallis [11,12,13], although that form is not additive for factorizing joint probabilities (or, equivalently, it is additive only for certain correlated, non-factorizing probabilities). Other and further generalizations, involving more parameters, were also suggested, formulated either in terms of leading-order corrections to the Boltzmann formula in the thermodynamical limit or simply utilizing more parameters for possible deformations of the original formula [9,14,15,16,17,18,19]. Properties of generalized entropy formulas were intensely studied; for two selected examples with respect to the Tsallis entropy see [20,21].
In the present paper another viewpoint is presented: (i) first we identify deviations from the classical logarithmic formula as derived from deviations from additivity; (ii) then we demonstrate how phase space finiteness effects cause corrections to the additivity of entropy, coinciding with the factorization of probability in a microcanonical approach; (iii) and finally we ask which modified entropy can be the most additive one in this respect. More closely, a general group entropy [22] following a non-addition rule is considered, and the limit of infinitely repeated composition of small amounts is taken as an asymptotic rule of composition [23]. Associative rules form group operations; therefore, a logarithm of the formal group can be derived from a general composition law, and this logarithm is then additive. This will be the content of the next section.
There have been decade-long discussions about the physical (or statistical) meaning of the parameter q, the first non-universal parameter occurring in generalized entropy formulas. It may be bound to the sort of system under discussion, to its material properties, but at the same time it occurs generally in a given class of statistical systems, including informatics, statistical mechanics, dynamics at the edge of chaos, and complex random networks. A few approaches to uncovering the physical mechanisms that determine the value of q in particular cases, in which the present author was involved, are found in Refs. [23,24,25,26,27]. Similar studies by others are copiously cited in review books, cf. [12,24].
Following this, a physical interpretation of the parameter q will be established, connected to the finite heat capacity of an environment [28] and to possible fluctuations in phase space dimensionality. The latter is akin to the superstatistical approach [29,30,31,32,33]. A balance between the physical factors reducing and increasing the value of q may ensure the classical case; in a general setting, however, it is not guaranteed. One is then tempted to restore additivity as well as possible, since classical thermodynamics is based on this property. The attempt is made by using the logarithm of the formal group of entropy composition, K(S), instead of the original entropy, and deriving again the associated parameter. The requirement that this parameter equal one generates a differential equation for the function K(S) and, through that, a new composition rule.
Finally, some examples will be discussed and families of forms will be established. The Boltzmann, Rényi, and Tsallis entropies are all special cases, and they represent physical extremes in terms of the heat container capacity and the relative size of superstatistical fluctuations.
2. Logarithm of the Formal Group
It is worth starting our mathematical considerations with the composition law of entropy, or of any other real-valued physical quantity, when contacting and combining two systems into a bigger, unified one. Abandoning additivity, which ensures the co-extensivity of entropy with other extensive quantities, also constructed as expectation values, further composition rules are considered in non-extensive statistical mechanics [12]. The Abe–Tsallis composition law,
S_12 = S_1 + S_2 + a S_1 S_2,
with the constant a traditionally written as 1 − q, is a particular case of a more general rule, described by a two-variable function, S_12 = h(S_1, S_2). In such composition rules the entropies S_i (i = 1, 2, or 12) can be any state functions, and the rules are assumed to be valid in general, both for equilibrium and non-equilibrium entropies. Here we do not address such questions; we just look for the consequences of adopting non-additive rules for the entropy. The value of the parameter q is usually restricted to a finite open interval; in measured cases it is very often close to, but not equal to, one. Formally, however, it can be any real number, even negative. Certainly, for q > 1 (negative a), the entropies cannot be too large, in order to avoid a negative composite result. In thermodynamics we deal mostly with large systems; thus, the requirement of associativity is natural:
h( h(x, y), z ) = h( x, h(y, z) ).
Having the third law in mind, on the other hand, zero entropy is a valid value, and its addition must be trivial:
h(x, 0) = x,
and similarly h(0, y) = y. These two requirements already go most of the way toward establishing the composition as a group operation. Only the question of constructing an inverse remains nontrivial.
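As a quick numerical illustration (not code from the paper; the parameter value a = 0.3 and the test entropies are arbitrary choices), the following Python sketch checks that the Tsallis–Abe rule is associative and that adding zero entropy is trivial:

```python
# Minimal check of the group-like properties of the Tsallis-Abe rule h(x, y) = x + y + a*x*y.
a = 0.3                               # arbitrary illustrative deformation parameter
h = lambda x, y: x + y + a * x * y

x, y, z = 0.4, 1.1, 2.5
print(h(h(x, y), z), h(x, h(y, z)))   # identical values: associativity
print(h(x, 0.0), h(0.0, y))           # returns x and y: zero entropy composes trivially
```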
Here, the logarithm of the formal group is helpful. It can be shown that from the associativity Equation (2) the existence of a monotonic, and hence invertible, mapping, K, follows, which maps the general rule to simple addition:
K( h(x, y) ) = K(x) + K(y).
The function K is the formal logarithm; it can be constructed asymptotically from the rule as follows. Imagine we compose from x towards the composed value by adding small amounts, y/N, a number of times, N, so that their sum is y. Then, in a general step, one proceeds as
x_{n+1} = h( x_n, y/N ).
The index n in this recursion is additive, while x and y are not. Seeking a continuous limit in the scaled variable t = n/N, running between zero and one, we arrive at
dx/dt = y h_2(x),   or equivalently   ∫ dz / h_2(z) = y,
with the integral running from x to the composed value. Here h_2(x) denotes the partial derivative ∂h(x, y)/∂y taken at y = 0. Denoting the primitive function of the above integrand by K, our result reads as
K( φ(x, y) ) = K(x) + y.
This form breaks the symmetry between x and y. We remedy this problem by re-defining the elementary step, i.e., taking steps in the additive quantity K(y)/N instead of y/N. By this we obtain the K-additivity
K( φ(x, y) ) = K(x) + K(y).
This is the sought mapping to additivity; therefore, K is called the formal logarithm. We note in passing that, due to the continuous limit in the above derivation, the new rule is only asymptotic:
φ(x, y) = K^{-1}( K(x) + K(y) )
does not always coincide with the starting rule, h(x, y). Such asymptotic rules, however, constitute attractors among all composition rules. The result Equation (6) can also be obtained by taking the partial derivative of Equation (4) at vanishing second argument, using h(x, 0) = x, and integrating. The asymptotic rule is a reconstruction of the composition rule from its first derivative at a very small (zero) second argument.
Here, we mention examples. The Tsallis–Abe rule, with h(x, y) = x + y + a x y, leads to the formal logarithm K(x) = (1/a) ln(1 + a x) and does not change in the asymptotics: φ(x, y) = h(x, y). A more general rule, h(x, y) = x + y + G(x y), using a general function of the product of the composables, on the other hand leads back to the Tsallis–Abe rule with a = G′(0). Triviality requires G(0) = 0, of course. The properties h(x, 0) = x and h(0, y) = y also do hold.
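The asymptotic construction can be mimicked numerically. The sketch below is only an illustration under the Tsallis–Abe example above (the value a = 0.3 is an arbitrary choice, and the function names are ours): it builds the formal logarithm by quadrature, K(x) = ∫₀ˣ dz / h₂(z) with h₂(z) = ∂h/∂y at (z, 0), and verifies both K-additivity and the closed form (1/a) ln(1 + a x):

```python
import numpy as np
from scipy.integrate import quad

a = 0.3                                    # illustrative Tsallis-Abe parameter
h = lambda x, y: x + y + a * x * y         # composition rule h(x, y)

def h2(z, eps=1e-7):
    """Numerical partial derivative of h with respect to its second argument at (z, 0)."""
    return (h(z, eps) - h(z, 0.0)) / eps

def K(x):
    """Formal logarithm obtained by quadrature: K(x) = integral_0^x dz / h2(z)."""
    return quad(lambda z: 1.0 / h2(z), 0.0, x)[0]

x, y = 0.7, 1.9
print(K(h(x, y)), K(x) + K(y))             # K-additivity: the two numbers agree
print(K(x), np.log(1.0 + a * x) / a)       # K matches (1/a) ln(1 + a x)
```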
Lesser-known, more complex rules can also be investigated; there are examples for which the formal logarithm, K(x), turns out to be the logarithm of a rational function. Instead of listing more and more examples, however, let us close this section with a more general comment. Once we change simple additivity to K-additivity, equivalent to the use of an associative composition rule, the entropy formula in terms of the probability also changes.
Considering an ensemble in the Gibbs sense, the state i is realized N_i times, while altogether N instances are investigated. The probability of being in state i approaches the ratio p_i = N_i/N. Since the individual contribution to the entropy would be −ln p_i in the classical approach, a composition of W such instances, of which the i-th is repeated N_i times, shall be constructed by K-additivity. The logarithm of the probability being additive for factorizing joint probabilities, a non-additive entropy contribution can be constructed by the inverse function of the formal logarithm:
s_i = K^{-1}( −ln p_i ).
In this way the generalized entropy formula belongs to an ensemble average value (or expectation value):
S = Σ_i p_i K^{-1}( −ln p_i ).
This formula may need a little more explanation. Since we replaced the original additivity assumption with K-additivity, the additive quantity, reflected in the −ln p_i term that is additive for factorizing probabilities, must be the K-image of the non-additive one. The above Equations (10) and (11) are written for the non-additive quantities; therefore, the inverse function, K^{-1}, is applied to the additive logarithm.
With the example of the Tsallis–Abe composition law, we have K(S) = (1/a) ln(1 + a S) and K^{-1}(x) = (e^{a x} − 1)/a. This delivers the Tsallis entropy,
S_T = Σ_i p_i ( p_i^{−a} − 1 ) / a = ( 1 − Σ_i p_i^q ) / (q − 1),
with a = 1 − q, as the non-additive but expectation value-like construction, and the corresponding Rényi entropy,
S_R = K(S_T) = ( 1/(1−q) ) ln Σ_i p_i^q,
as the version additive for factorizing probabilities, but not an expectation value (ensemble average).
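As a small cross-check of the formulas above (a sketch with an arbitrary three-state distribution and q = 1.2, assuming the reconstructed notation a = 1 − q), the Rényi entropy is indeed the formal logarithm of the Tsallis entropy, and both tend to the Boltzmann–Gibbs–Shannon value as q → 1:

```python
import numpy as np

def tsallis(p, q):
    """Tsallis entropy (1 - sum p_i^q) / (q - 1)."""
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def renyi(p, q):
    """Renyi entropy ln(sum p_i^q) / (1 - q)."""
    return np.log(np.sum(p ** q)) / (1.0 - q)

p = np.array([0.5, 0.3, 0.2])
q = 1.2
a = 1.0 - q
print(renyi(p, q), np.log(1.0 + a * tsallis(p, q)) / a)   # Renyi equals K(Tsallis)
print(tsallis(p, 1.0 + 1e-6), renyi(p, 1.0 + 1e-6),       # both approach ...
      -np.sum(p * np.log(p)))                              # ... the Shannon entropy
```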
Consequently, equilibrium distributions, obtained by maximizing the entropy or a monotonic function of it, K(S), deliver corresponding solutions of
δ/δp_i [ S − α Σ_j p_j − β Σ_j p_j ε_j ] = 0,
while keeping an average energy fixed. Both for the Rényi and the Tsallis q-entropy the resulting canonical distribution becomes a cut power law in the individual energy, ε_i:
p_i = (1/Z) [ 1 + (q − 1) ε_i / T ]^{ −1/(q−1) }.
In the q → 1 limit the Boltzmann–Gibbs exponential emerges. The factor Z ensures the normalization, Σ_i p_i = 1.
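A one-line numerical check (again only an illustration, with arbitrary ε values and T = 1) shows how the cut power law approaches the Boltzmann–Gibbs exponential as q → 1:

```python
import numpy as np

def cut_power_law(eps, T, q):
    """Tsallis-Pareto shape (1 + (q-1) eps/T)^(-1/(q-1)), unnormalized."""
    return (1.0 + (q - 1.0) * eps / T) ** (-1.0 / (q - 1.0))

eps, T = np.linspace(0.0, 5.0, 11), 1.0
for q in (1.2, 1.02, 1.002):
    print(q, np.max(np.abs(cut_power_law(eps, T, q) - np.exp(-eps / T))))
# the maximal deviation from exp(-eps/T) shrinks as q approaches 1
```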
3. q Parameter in the Boltzmannian Approach
At first glance, it seems arbitrary which composition rule, and consequently which formal logarithm, is to be used in our models. However, the parameters occurring in a modification of the entropy addition law need some connection to physical reality. In this section we first extend the familiar textbook derivation of the exponential canonical distribution by going a step further in the thermodynamical-limit expansion, and then compare the result with the cut power-law canonical distribution. This gives rise to a possible physical interpretation of the parameter q.
Following the classical argumentation, there is a factor in the probability of a subsystem having energy ω out of the total E, occurring as a ratio of the corresponding phase space volumes:
P(ω) ∝ ⟨ Ω(E − ω) / Ω(E) ⟩.
Here, the averaging is over parallel ensemble copies of the same system, allowing for microscopical fluctuations in parameters beyond the total energy, E, such as particle numbers, charges, etc. The occupied phase space volumes are connected to the entropy, Ω = e^S. Expanding the expression Equation (16) up to first order in ω, one arrives at the well-known canonical factor
P(ω) ≈ (1/Z) e^{ −ω/T }.
Here Z is a normalization factor ensuring that P is normalized to one. The temperature is interpreted as 1/T = ∂S/∂E. Traditionally, the information about the environment (heat bath) is compressed into this single parameter.
Now we look at the consequences of going one step further, i.e., performing the expansion up to second order in ω:
⟨ e^{ S(E−ω) − S(E) } ⟩ ≈ e^{ −⟨β⟩ ω } [ 1 + (ω²/2) ( Δβ² + ⟨S″(E)⟩ ) + … ],
with β = S′(E). This result is to be compared with the canonical distribution following from the Rényi and Tsallis entropy, with the Tsallis–Pareto distribution [12,34], to the same order:
(1/Z) [ 1 + (q−1) ω/T ]^{ −1/(q−1) } ≈ (1/Z) e^{ −ω/T } [ 1 + (q−1) ω²/(2T²) + … ].
Term by term comparison between Equations (18) and (19) interprets the parameters T and q in the Tsallis–Pareto distribution as being
1/T = ⟨β⟩,   (q − 1)/T² = Δβ² + ⟨S″(E)⟩.
Denoting the fluctuating quantity β = S′(E) = 1/T, one uses its variance, Δβ², in the above interpretation of q, besides the derivative of the temperature, dT/dE = 1/C, with C being the total capacity of the heat bath, entering the formula S″(E) = −1/(C T²):
q = 1 + Δβ²/⟨β⟩² − 1/C.
This result allows for q values both smaller and larger than one; q = 1 remains a special choice. Indeed, textbooks [35] suggest that Δβ²/⟨β⟩² = 1/C is the only possible choice and then conclude that this argues for the "one over square root law" for energy fluctuations. This argumentation is, however, misleading: the physical situation and the size of the heat bath actually present must determine the value of the parameter q.
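The fluctuation part of this interpretation can be illustrated with a small Monte Carlo sketch. It assumes an infinite-capacity bath, so only the Δβ² term contributes, and a Gamma-distributed inverse temperature with arbitrary shape parameter k = 5; under these assumptions the ensemble average of the exponential factor reproduces the cut power law with q = 1 + Δβ²/⟨β⟩²:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 5.0                                                     # Gamma shape; relative variance 1/k
beta = rng.gamma(shape=k, scale=1.0 / k, size=1_000_000)    # fluctuating 1/T with <beta> = 1

q = 1.0 + beta.var() / beta.mean() ** 2                     # superstatistical q (no 1/C term here)
T = 1.0 / beta.mean()

omega = np.linspace(0.0, 4.0, 9)
averaged = np.array([np.exp(-beta * w).mean() for w in omega])
cut_power = (1.0 + (q - 1.0) * omega / T) ** (-1.0 / (q - 1.0))
print(np.max(np.abs(averaged - cut_power)))                 # tiny: the two curves coincide
```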
4. Optimal Restoration of Additivity
In the present section we shall optimize the choice of K(S), used instead of S in estimating the phase space volumes above, in order to achieve q_K = 1. This requirement leads to a differential equation restricting the function K(S). Since, according to the Tsallis–Abe composition law, q = 1 is the additive case, seeking q_K = 1 is what we call the restoration of additivity. This is the best result possible, keeping in mind that the Tsallis–Pareto distribution is itself only an approximation in the case of finite heat baths, although improved by one term beyond the traditional Boltzmann–Gibbs exponential.
The canonical statistical factor in this case transmutes to
P(ω) ∝ ⟨ e^{ K(S(E−ω)) − K(S(E)) } ⟩.
Expanding up to second-order terms in ω, as in the previous section, with the assumption that the function K is universal, i.e., independent of the energy stored in the heat bath, we obtain
Comparing this with a Tsallis–Pareto distribution in the same approximation, we obtain another temperature and a different variance-dependent parameter q, namely T_K and q_K:
1/T_K = K′(S)/T,   q_K = 1 + ΔT²/T² + ( K″(S) − K′(S)/C ) / K′(S)².
The requirement q_K = 1 singles out a formal logarithm for the entropy composition rule which optimizes additivity to the subleading order in general. This results in a simple, solvable differential equation for K(S):
(ΔT²/T²) K′(S)² + K″(S) − (1/C) K′(S) = 0.
Here, we again used the facts that S′(E) = 1/T and S″(E) = −1/(C T²). Ordering this equation for the highest derivative, one obtains
K″(S) = (1/C) K′(S) − (ΔT²/T²) K′(S)².
Such an equation can be solved by quadrature even for complicated functions of the entropy.
The simplest physical system is an ideal gas: in this case, the heat capacity is independent of the total entropy, so C is a constant. On the other hand, we may also assume that the relative variance of the temperature, comprised in the term ΔT²/T², is also constant with respect to the total entropy. In this simplest case, it is straightforward to obtain the optimal formal logarithm, K(S), from Equation (26).
Here we present the solution, which contains two parameters:
K(S) = (1/λ) ln( 1 + λ C ( e^{S/C} − 1 ) ).
This ansatz is a "to and back" construction, in the form K(S) = k_λ( k_{1/C}^{-1}(S) ), with k_a(x) = (1/a) ln(1 + a x) being the formal logarithm associated to the Tsallis–Abe composition rule. The reciprocal of the first derivative of K(S) is obtained as
1/K′(S) = e^{ −S/C } + λ C ( 1 − e^{ −S/C } ),
satisfying the K′(0) = 1 condition. Substituting this function and its first derivative into Equation (26), we conclude that it is indeed a solution, with the identification λ = ΔT²/T².
It is interesting to check some limits of this expression. For λ → 0, i.e., in the ΔT² = 0 case, corresponding to zero fluctuations in the thermodynamical temperature, one obtains
K(S) = C ( e^{S/C} − 1 ).
This generates a non-additive entropy formula,
S = Σ_i p_i C ln( 1 − ln p_i / C ).
The corresponding canonical distribution is a complicated expression involving Lambert's W function.
In the other extreme, C → ∞, meaning the presence of an infinite heat capacity (ideal) heat bath, we arrive at
K(S) = (1/λ) ln( 1 + λ S ).
In this case the non-additive entropy formula delivers a Tsallis entropy, cf. Equations (11) and (12), and the canonical distribution is a Tsallis–Pareto distribution.
It is most intriguing that for the choice λ = ΔT²/T² = 1/C, i.e., stating that the temperature fluctuations exactly follow the inverse square root law, ΔT/T = 1/√C, and therefore q = 1, the Boltzmann–Gibbs formula always emerges:
K(S) = S,
and S itself is additive.
One realizes that the balance between ensemble fluctuations and the finiteness of the heat bath determines which is the optimally additive entropy formula. It is convenient to use the formal logarithm K(S) instead of S as the additive quantity. Still, the traditional balance, ΔT²/T² = 1/C, is not always realized in physical situations. Some might find it strange that the parameter q depends on the heat capacity characterizing the reservoir. Here the relation is with the same system's heat capacity. Analogously, a much simpler correspondence holds for a fixed volume of photon gas, where C = 3S, since at fixed volume E ∝ T⁴, from which S = 4E/(3T) and C = ∂E/∂T = 4E/T = 3S follow.
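The photon gas relation quoted above can be checked symbolically. The short sketch below assumes the standard black-body expressions E = aVT⁴ and S = (4/3)aVT³ (with a the radiation constant) and confirms C = 3S at fixed volume:

```python
import sympy as sp

a, V, T = sp.symbols('a V T', positive=True)
E = a * V * T**4                      # black-body energy at fixed volume
S = sp.Rational(4, 3) * a * V * T**3  # corresponding entropy
C = sp.diff(E, T)                     # heat capacity at constant volume
print(sp.simplify(C / S))             # prints 3, i.e. C = 3 S
```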
5. Fluctuations in Phase Space Dimension
One of the most prominent cases when fluctuations are "external", occurring due to the way of collecting data and not stemming from the finiteness of the heat bath, is the study of single-particle energy spectra in high energy collisions. Hadronization makes n particles, a different number event by event, while the total energy shared by them is approximately constant. This situation is the opposite of energy fluctuations at a fixed number of particles (atoms).
In this case the phase space to be filled by the individual energies has a fluctuating dimension, while the total energy determining the microcanonical hypersurface is fixed. Then, depending on the actual probability, P_n, of making n hadrons in a single collision event, the summed distribution of the single-particle energy, frequently measured via the transverse momentum for very energetic particles, will differ from the traditionally expected exponential or Gaussian. In order to dwell on this problem, let us first review how the microcanonical phase space is calculated at a given total energy, E, for n particles moving in some spatial dimensions.
Phase space is taken over the momenta. Individual energies are functions of the momenta according to the corresponding dispersion relation. A number of such relations look like a power of the absolute value, so they can be comprised into an L^ν norm condition:
( Σ_j |p_j|^ν )^{1/ν} ≤ R,
with p_j being the individual momentum components, altogether n dimensions in phase space, and E the total energy setting the radius R = R(E). The function R(E) also reflects the dispersion relation between energy and momenta. For a one-dimensional jet of extreme relativistic particles, ε_j = |p_j|; here simply R = E and the L¹ norm is to be used. For nonrelativistic ideal gases ε_j = p_j²/(2m); therefore, R = √(2mE) and the L² norm is used.
For extreme relativistic particles ε_j = |p_j|, and V_n(E) measures the volume satisfying
Σ_j |p_j| ≤ E.
The general formula for the volume of the n-dimensional ball of radius R in the L^ν norm reads as
V_n^{(ν)}(R) = [ 2 Γ(1/ν + 1) ]^n R^n / Γ(n/ν + 1).
Originally, Dirichlet obtained this formula in a French publication [36]. More contemporary popularizations are due to Smith and Vamanamurthy [37] from 1989 and to Xianfu Wang [38] from 2005. Wikipedia also has an entry on this formula [39], and a recursive proof in a few lines can be obtained from [40]. The microcanonically constrained hypersurface is the derivative of the above volume formula with respect to the total energy, while the surface is the derivative with respect to R.
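Dirichlet's volume formula is easy to evaluate; the following sketch (an illustration, not code from the paper) checks it against the familiar Euclidean ball and the L¹ "diamond" volumes:

```python
from math import gamma, pi

def ball_volume(n, p, R):
    """Volume of the n-dimensional ball of radius R in the L^p norm (Dirichlet's formula)."""
    return (2.0 * gamma(1.0 / p + 1.0)) ** n * R ** n / gamma(n / p + 1.0)

print(ball_volume(3, 2, 1.0), 4.0 * pi / 3.0)         # Euclidean unit ball: 4*pi/3
print(ball_volume(4, 1, 1.0), 2.0 ** 4 / gamma(5.0))  # L^1 unit "diamond": 2^n / n!
```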
We define a ratio of volumes and consider the L¹ case, related to relativistic particles in a one-dimensional jet. For the normalization we use the pure environmental factor, i.e., the ratio of phase space volumes without any further single-particle factor:
V_{n−1}(E − ω) / V_n(E) = ( n / (2E) ) (1 − ω/E)^{n−1}.
This is normalized by an integral over the single-particle phase space. This means an integral over p between −E and +E, with ω = |p|. Due to the absolute value, this integral is twice that between 0 and E, whence the factor 2 in the denominator cancels.
Once its integral is normalized to 1, the mixture
P(ω) = Σ_n P_n V_{n−1}(E − ω) / V_n(E)
is also normalized to 1, provided that the multiplicity distribution is normalized as well, Σ_n P_n = 1.
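Using the ratio reconstructed above, folded to ω = |p| in [0, E], the normalization claims can be verified numerically; in the sketch below the total energy E = 10 and the Poissonian multiplicity distribution are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import poisson

E = 10.0

def rho(omega, n):
    """Volume ratio folded to omega = |p| in [0, E]: (n/E)(1 - omega/E)^(n-1)."""
    return (n / E) * (1.0 - omega / E) ** (n - 1)

print(quad(lambda w: rho(w, 8), 0.0, E)[0])           # ~ 1 for any fixed n

ns = np.arange(1, 60)
P = poisson.pmf(ns, 8.0)
P /= P.sum()                                          # restrict to n >= 1 and renormalize
mixture = lambda w: float(np.sum(P * rho(w, ns)))
print(quad(mixture, 0.0, E)[0])                       # ~ 1 for the P_n-weighted mixture
```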
Finally, note that the ratios of volumes and of energy shells in the one-dimensional, relativistic case are simply related: in the (relativistic) L¹ norm V_n(E) = (2E)^n / n!, and its derivative with respect to the total energy delivers
∂V_n/∂E = (n/E) V_n(E),
due to the pure power-law dependence on E. So we have, for the ratio of energy shells,
[ ∂V_{n−1}(E−ω)/∂E ] / [ ∂V_n(E)/∂E ] = ( (n−1) / (2E) ) (1 − ω/E)^{n−2}.
Finally, we have the microcanonical ratio
ρ_n(ω) = ( (n−1) / E ) (1 − ω/E)^{n−2},
which is normalized to 1 for an integral over ω between 0 and E.
For ideal gases, one considers the multiplicity-averaged mixture of such microcanonical ratios, delivering a cut power-law, Tsallis–Pareto-like form.
Here, q is actually the measure of non-Poissonity,
q = ⟨ n(n−1) ⟩ / ⟨n⟩².
For the negative binomial distribution (NBD) q > 1, while for the Poissonian exactly q = 1. In hadronization statistics the Tsallis–Pareto distribution extracted from transverse momentum spectra and the event-by-event multiplicity fluctuations go hand in hand [41,42,43]. Such distributions may also be a consequence of dynamical random processes, as investigated recently in the framework of the Local Growth Global Reset (LGGR) model on the master equation level [44], or earlier in the framework of the generalization of Boltzmann's kinetic approach and the related H theorem [45,46,47].
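A final sketch (with illustrative parameters only: E = 10, ⟨n⟩ = 8, and an NBD with r = 4 in scipy's parametrization) contrasts Poissonian and negative binomial multiplicity fluctuations: the Poissonian mixture stays exponential, while the NBD mixture develops a visibly enhanced, cut power-law-like tail, in line with the non-Poissonity measure above:

```python
import numpy as np
from scipy.stats import poisson, nbinom

E, mean_n = 10.0, 8.0
omega = np.linspace(0.0, 0.8 * E, 200)
ns = np.arange(1, 200)

def mixed_spectrum(P):
    """Multiplicity-averaged spectrum sum_n P_n (n/E)(1 - omega/E)^(n-1)."""
    return sum(Pn * (n / E) * (1.0 - omega / E) ** (n - 1) for n, Pn in zip(ns, P))

def non_poissonity(P):
    """Scaled second factorial moment <n(n-1)>/<n>^2."""
    return np.sum(ns * (ns - 1) * P) / np.sum(ns * P) ** 2

exponential = (mean_n / E) * np.exp(-mean_n * omega / E)      # Boltzmann-like reference shape
for name, P in (("Poisson", poisson.pmf(ns, mean_n)),
                ("NBD    ", nbinom.pmf(ns, 4, 4.0 / (4.0 + mean_n)))):
    P = P / P.sum()
    tail_ratio = mixed_spectrum(P)[-1] / exponential[-1]
    print(name, non_poissonity(P), tail_ratio)
# Poisson: measure ~ 1 and tail ratio ~ 1; NBD: measure > 1 and a clearly enhanced tail.
```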
6. Conclusions
In conclusion, we investigated the physical background for applying non-additive entropy in three steps: (i) we reviewed associative composition rules and the derivation of their asymptotic version valid in the thermodynamical limit; (ii) we optimized which formal logarithm of the entropy, K(S), is to be used in phase space occupation probability arguments, including but not restricted to the case of the Tsallis entropy; and finally, (iii) we reviewed phase space ratios in high energy jets as an application of superstatistical fluctuations in the dimensionality of phase space volumes. The coupling of these three aspects in a particular chain leads to the conclusion that even ideal gases in a finite heat bath environment and away from thermal equilibrium can show a certain ambiguity, best removed by using non-additive entropy formulas.
To show an example, a certain ambiguity in estimating the heat capacity from maximizing the mutual entropy between the observed subsystem and the heat bath is discussed in detail for ideal gases in Appendix A. We demonstrate in Appendix B that using K(S) instead of S indeed clears the mismatch between statistical informatics, which postulates a mutual entropy maximum in equilibrium, and thermodynamics, obtaining the heat capacity of a subsystem independently of the heat bath, i.e., using the Universal Thermostat Independence (UTI) principle.
Funding
This research was funded by NKFIH grant number K123815.
Data Availability Statement
Not applicable.
Conflicts of Interest
The author declares no conflict of interest.
Appendix A. Ideal Gas with Finite Heat-Bath
First, we describe how a finite heat capacity heat bath influences the heat capacity of an observed subsystem. The effect stems from maximizing the mutual entropy instead of the subsystem’s entropy alone.
We consider ideal gases, occupying phase space volumes according to the N-ball in L^p norm picture. N is the number of degrees of freedom, or particles, while the dimensionality factorizes. For one-dimensional extreme relativistic particles, the radius is R = E and one uses the diamond-shaped L¹ norm. For traditional, nonrelativistic particles the radius is R = √(2mE) and one uses the spherical L² norm.
In both cases the entropy is given as
S(E) = a ln E + const.
We have a = N for one-dimensional jets, and a = 3N/2 for traditional ideal gases in 3 dimensions. The first derivative with respect to the energy defines the temperature variable:
1/T = S′(E) = a/E.
The second derivative defines implicitly the heat capacity:
S″(E) = −a/E² = −1/(C T²),
with
C = a.
For a subsystem connected with another system (which may be called a reservoir if it is large enough), not the individual but the mutual entropy maximum describes the most probable energy of the subsystem. We compose the two entropies and, fixing the energy constraint and all the N's, obtain from that the first derivative,
and the second derivative with respect to the subsystem energy is given by
Due to the concavity of the entropy, the most probable energy value for the subsystem is obtained from the vanishing of the first derivative of the mutual entropy. This occurs at equal temperatures of the two parts, according to the definition of the temperature and Equation (A5). At this point, from the second derivative, an effective, intercorrelated heat capacity for the subsystem appears. We have
and from that the effective heat capacity
Exactly this correction is the effect of a finite heat bath reservoir; it vanishes in the thermodynamical limit of an infinitely large reservoir.
A similar result can be derived when fixing the total energy, E = E_1 + E_2, and then looking for the most probable subsystem energy, E_1. We get
In this case the total system has fixed parameters (energy, heat capacity). Again, in the thermodynamical limit the correction vanishes.
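The finite-reservoir effect discussed in this appendix can be illustrated numerically. The sketch below is not the paper's own calculation; it assumes the ideal-gas entropy S_i = a_i ln E_i used above, fixes the total energy, and extracts an effective heat capacity from the curvature of the composed entropy at the most probable sharing, which lies below the subsystem's own C_1 = a_1 and tends to it as the reservoir grows:

```python
import numpy as np

a1, a2, E = 3.0, 300.0, 1000.0          # subsystem and reservoir "heat capacities", total energy

def mutual_entropy(E1):
    """S_1(E_1) + S_2(E - E_1) for two ideal-gas-like parts, S_i = a_i ln E_i."""
    return a1 * np.log(E1) + a2 * np.log(E - E1)

E1 = E * a1 / (a1 + a2)                 # equal-temperature point: E1/a1 = (E - E1)/a2
T = E1 / a1

h = 1e-3                                # finite-difference step for the curvature
curv = (mutual_entropy(E1 + h) - 2 * mutual_entropy(E1) + mutual_entropy(E1 - h)) / h**2
C_eff = -1.0 / (curv * T**2)            # effective capacity from S'' = -1/(C_eff T^2)
print(C_eff, a1 * a2 / (a1 + a2))       # agrees; below a1 = 3, and -> a1 as a2 -> infinity
```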
Appendix B. Universal Thermostat Independence
The above corrections, in one case positive, in the other negative, make the concept of heat capacity ambiguous. In order to avoid this discrepancy, we can follow two strategies: (i) ignore the problem and restrict ourselves to the infinite capacity reservoir limit, or (ii) compensate for this leading effect near the maximal probability. Choosing the second strategy is equivalent to admitting that the original additive concept of mutual entropy has to be generalized.
Following the second option, we consider the maximum of the mutual K-entropy instead of the original entropy, and the additive quantity associated with a possibly non-additive entropy, K(S), is constructed in such a way that the corresponding heat capacity appears infinite.
This is easy to achieve by choosing K(S) accordingly. All the usual quantities, temperature and heat capacity, acquire a K index by doing so, and we obtain
Solving it for the K-capacity of heat, we have
The universal thermostat independence requires that 1/C_K = 0, which is a second-order differential equation for K(S), quoted as the UTI principle in [28,48],
With the boundary conditions K(0) = 0 (for the sake of keeping the third law of thermodynamics) and K′(0) = 1 (just keeping the Boltzmann constant at its original value, k_B), we arrive at a solution, for a constant C independent of S, as being
Applying now K-additivity with this formula, we require zero mutual K-entropy reflecting total independence of the energy states of reservoir and subsystem,
we derive the Tsallis–Abe composition law. From Equations (A13) and (A14), one writes
From here, a product rule follows,
which in turn simplifies to
Comparing this with the Tsallis–Abe rule Equation (1), one obtains a = 1/C, i.e., q − 1 = −1/C. This is already a part of the result presented in Equation (21). For the total result, the previously known temperature fluctuations can be accounted for in addition, which may or may not under- or overcompensate this effect.
References
- Clausius, R. Théorie Mécanique de la Chaleur; Librairie Scientifique, Industrielle et Agricole, Série B, No. 2; Eugène Lacroix: Paris, France, 1868. [Google Scholar]
- Boltzmann, L. Weitere Studien über das Wärmegleichgewicht unter Gasmolekülen. Wien. Ber. 1872, 66, 275. [Google Scholar]
- Boltzmann, L. Über die Beziehung eines allgemeinen mechanischen Satzes zum zweiten Hauptsatze der Wärmetheorie. Sitzungsberichte K. Akad. Wiss. Wien 1877, 75, 67. [Google Scholar]
- Gibbs, J.W. Elementary Principles in Statistical Mechanics; C. Scriber’s Sons: New York, NY, USA, 1902. [Google Scholar]
- Shannon, C. A mathematical theory of communication. Bell. Syst. Tech. J. 1948, 27, 379–423; ibid 623–656. [Google Scholar] [CrossRef]
- Renyi, A. On measures of information and entropy. In Proceedings of the Fourth Berkeley Symposium on Mathematics, Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; Volume 1, pp. 547–561. [Google Scholar]
- Havrda, J.; Charvat, F. Quantification Method of Classification Processes. Concept of Structural Entropy. Kybernetika 1967, 3, 30–35. [Google Scholar]
- Daróczy, Z. Generalized Information Functions. Inf. Control 1970, 16, 36. [Google Scholar] [CrossRef]
- Sharma, B.D.; Mittal, D.P. New nonadditive measures of inaccuracy. J. Math. Sci. 1975, 10, 122. [Google Scholar]
- Nielsen, F.; Nock, R. A closed-form expression for the Sharma-Mittal entropy of exponential families. J. Phys. A 2011, 45, 032003. [Google Scholar] [CrossRef]
- Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479. [Google Scholar] [CrossRef]
- Tsallis, C. Introduction to Non-Extensive Statistical Mechanics: Approaching a Complex World; Springer Science+Business Media LLC: New York, NY, USA, 2009. [Google Scholar]
- Tsallis, C. Nonadditive entropy: The concept and its use. EPJ A 2009, 40, 257–266. [Google Scholar] [CrossRef]
- Hanel, R.; Thurner, S. A comprehensive classification of complex statistical systems and an axiomatic derivation of their entropy and distribution functions. EPL 2011, 93, 20006. [Google Scholar] [CrossRef]
- Hanel, R.; Thurner, S. When do generalized entropies apply? How phase space volume determines entropy. EPL 2011, 96, 50003. [Google Scholar] [CrossRef]
- Landsberg, P.T. Is equilibrium always an entropy maximum? J. Stat. Phys. 1984, 35, 159. [Google Scholar] [CrossRef]
- Tisza, L. Generalized Thermodynamics; MIT Press: Cambridge, MA, USA, 1961. [Google Scholar]
- Maddox, J. When entropy does not seem extensive. Nature 1993, 365, 103. [Google Scholar] [CrossRef]
- Zapirov, R.G. Novije Meri i Metodi v Teorii Informacii; New Measures and Methods in Information Theory; Kazan State Tech University: Kazan, Russia, 2005; ISBN 5-7579-0815-7. [Google Scholar]
- Abe, S. Axioms and uniqueness theorem for Tsallis entropies. Phys. Lett. A 2000, 271, 74. [Google Scholar] [CrossRef]
- Santos, R.J.V. Generalization of Shannon’s theorem for Tsallis entropy. J. Math. Phys. 1997, 38, 4104. [Google Scholar] [CrossRef]
- Jensen, H.J.; Tempesta, P. Group Entropies: From Phase Space Geometry to Entropy Functionals via Group Theory. Entropy 2018, 20, 804. [Google Scholar] [CrossRef]
- Biro, T.S. Abstract composition rule for relativistic kinetic energy in the thermodynamical limit. EPL 2008, 84, 56003. [Google Scholar] [CrossRef]
- Biro, T.S. Is There a Temperature? Conceptual Challenges at High Energy, Acceleration and Complexity; Springer Series on Fundamental Theories of Physics 1014; Springer Science+Business Media LLC: New York, NY, USA, 2011. [Google Scholar]
- Biro, T.S.; Jakovac, A. Power-law tails from multiplicative noise. Phys. Rev. Lett. 2005, 94, 132302. [Google Scholar] [CrossRef]
- Biro, T.S.; Purcsel, G. Non-extensive Boltzmann-equation and hadronization. Phys. Rev. Lett. 2005, 95, 062302. [Google Scholar] [CrossRef]
- Biro, T.S.; Purcsel, G.; Urmossy, K. Non-extensive approach to quark matter. EPJ A 2009, 40, 325–340. [Google Scholar] [CrossRef]
- Biro, T.S.; Van, P.; Barnafoldi, G.G.; Urmossy, K. Statistical Power Law due to Reservoir Fluctuations and the Universal Thermostat Independence Principle. Entropy 2014, 16, 6497–6514. [Google Scholar] [CrossRef]
- Beck, C.; Cohen, E.G.D. Superstatistics. Physica A 2003, 322, 267–275. [Google Scholar] [CrossRef]
- Cohen, E.G.D. Superstatistics. Physica D 2004, 193, 35. [Google Scholar] [CrossRef]
- Beck, C. Recent developments in superstatistics. Braz. J. Phys. 2009, 38, 357. [Google Scholar] [CrossRef]
- Beck, C. Superstatistics in high-energy physics, Application to cosmic ray energy spectra and e+e- annihilation. EPJ A 2009, 40, 267–273. [Google Scholar] [CrossRef]
- Beck, C. Dynamical foundations of nonextensive statistical mechanics. Phys. Rev. Lett. 2001, 87, 180601. [Google Scholar] [CrossRef]
- Available online: https://en.wikipedia.org/wiki/Pareto_distribution#Generalized_Pareto_distributions (accessed on 26 October 2022).
- Landau, L.D.; Lifshitz, E.M. Course of Theoretical Physics; Statistical Physics; Elsevier Science: Amsterdam, The Netherlands, 2013; Volume 5, ISBN 9781483103372. [Google Scholar]
- Dirichlet, L.P.G. Sur une nouvelle méthode pour la détermination des intégrales multiples. J. Math. Pures Appl. 1839, 4, 164–168. [Google Scholar]
- Smith, D.J.; Vamanamurthy, M.K. How Small Is a Unit Ball? Math. Mag. 1989, 62, 101–107. [Google Scholar] [CrossRef]
- Wang, X. Volumes of Generalized Unit Balls. Math. Mag. 2005, 78, 390–395. [Google Scholar] [CrossRef]
- Available online: https://en.wikipedia.org/wiki/Volume_of_an_n-ball (accessed on 26 October 2022).
- Available online: https://math.stackexchange.com/questions/301506/hypervolume-of-a-n-dimensional-ball-in-p-norm (accessed on 26 October 2022).
- Wilk, G.; Wlodarczyk, Z. On the interpretation of nonextensive parameter q in Tsallis statistics and Levy distributions. Phys. Rev. Lett. 2000, 84, 2770. [Google Scholar] [CrossRef]
- Wilk, G. Fluctuations, correlations and non-extensivity. Braz. J. Phys. 2007, 37, 714. [Google Scholar] [CrossRef]
- Wilk, G.; Wlodarczyk, Z. Power laws in elementary and heavy-ion collisions. A story of fluctuations and nonextensivity? EPJ A 2009, 40, 299–312. [Google Scholar] [CrossRef]
- Biro, T.S.; Neda, Z. Unidirectional random growth with resetting. Physica A 2018, 499, 335–361. [Google Scholar] [CrossRef]
- Kaniadakis, G. Non-linear kinetics underlying generalized statistics. Physica A 2001, 296, 405. [Google Scholar] [CrossRef]
- Kaniadakis, G. H-theorem and generalized entropies within the framework of nonlinear kinetics. Phys. Lett. A 2001, 288, 283. [Google Scholar] [CrossRef]
- Kaniadakis, G. Relativistic entropy and related Boltzmann kinetics. EPJ A 2009, 40, 275–287. [Google Scholar] [CrossRef]
- Biró, T.S.; Barnaföldi, G.G.; Ván, P. New entropy formula with fluctuating reservoir. Phys. A 2015, 417, 215–220. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).