
Non-Extensive Entropic Distance Based on Diffusion: Restrictions on Parameters in Entropy Formulae

by Tamás Sándor Biró 1,* and Zsolt Schram 2

1 Wigner Research Center for Physics, Konkoly-Thege M. 29-33, Budapest H-1121, Hungary
2 Department of Theoretical Physics, University of Debrecen, Bem tér 18/B, Debrecen H-4026, Hungary
* Author to whom correspondence should be addressed.
Entropy 2016, 18(2), 42; https://doi.org/10.3390/e18020042
Submission received: 25 November 2015 / Revised: 15 January 2016 / Accepted: 22 January 2016 / Published: 27 January 2016

Abstract: Based on a diffusion-like master equation, we propose a formula using the Bregman divergence for measuring entropic distance in terms of different non-extensive entropy expressions. We obtain the non-extensivity parameter range for a universal approach to the stationary distribution by simple diffusive dynamics for the Tsallis and the Kaniadakis entropies, for the Hanel–Thurner generalization, and finally for a recently suggested log-log type entropy formula, which belongs to superstatistics with diverging variance of the inverse temperature.

1. Introduction

Over the last decades, there have been several suggestions for generalizations of the Boltzmann–Gibbs–Shannon (BGS) entropy formula [1,2,3,4,5,6]. Most formulas can be grouped into categories either by their mathematical form (trace form or a function of the trace form) [7], or by the scaling properties for large systems, usually providing large entropies, $S$, even if not necessarily proportional to the logarithm of the number of states, $\ln W$ [8,9,10,11]. Such entropy formulas are often presented based on axioms, like the Khinchin or Shannon axioms for the best-known logarithmic entropy formula due to Boltzmann and Gibbs. Non-extensive entropy then abandons the additivity axiom and replaces it with a more general assumption. For the group entropy, the assumption of the associativity of the entropy composition rule is used [7]. Another way to obtain a generalized entropy formula starts with physical properties, like the universality of the thermal equilibrium between two systems as described by the zeroth law of thermodynamics [12,13], and by this it also includes the associativity assumption [14]. It is also noteworthy that, starting with a general, non-associative composition prescription, one arrives asymptotically at an associative one by repeating it in small steps and reconstructing the effective composition formula in the continuous scaling limit [15].
Deformations of the original Boltzmann–Gibbs–Shannon entropy formula can also be achieved by tracing it back from the canonical distribution, exponential in the subsystem energy, when parameters of this distribution are treated as stochastic ones, or otherwise randomized, and hence integrated over another, superstatistical, distribution [16,17,18]. Instead of treating the temperature or its inverse as a fluctuating quantity, one may also consider the number of degrees of freedom, and hence the dimensionality of phase space, as fluctuating. In [19,20,21,22,23] it has been shown that simple fluctuation patterns of the particle number, $N$, influencing the phase space volume $\Omega$ occupied by an ideal gas, can lead either to an exponential distribution, $\exp(-\omega/T)$, for a Poissonian $N$ distribution, or to a cut power-law, Tsallis–Pareto type distribution for a negative binomial $N$ distribution. In general, a trace form entropy,

$$K(S) := \sum_{i=1}^{W} p_i\, K(-\ln p_i) \qquad (1)$$

emerges, with $K(S)$ satisfying a second order differential equation, the additivity restoration condition (ARC) [24]. Connecting to a more traditional notation, the $K$ function is related to the deformed logarithm function as

$$\ln_{\mathrm{def}}(x) = -K(-\ln x) \qquad (2)$$

This form includes the one-parameter deformed logarithms as well as the generalizations with more parameters [12,13,25,26,27].
In the present paper we investigate whether such entropy formulas define an entropic distance between two probability distributions, which has the following useful properties:
  • it is positive for any two different distributions;
  • it is zero for comparing any distribution with itself;
  • it is symmetric.
A symmetric distance measure is most naturally formulated as a sum of relative divergences, $\rho(P,Q) = \sigma(P|Q) + \sigma(Q|P)$. For deformed logarithms two basic forms of suggestion occur in the literature: one based on the ratio $P_n/Q_n$ and one based on the difference of their logarithms, cf. refs. [28,29,30,31]. Furthermore, we are interested in a definition which describes a distance that is always shrinking between a dynamically evolving distribution and the one belonging to maximal entropy under a spontaneous dynamics.
In the following we show this behavior for the traditional logarithmic entropy formula and a generally state-dependent nearest neighbour master equation (defined below), and then propose a generalization of the symmetrized entropic distance measure based on the deformed logarithm function. We conclude that some proposals ensure a shrinking of the entropic distance during the approach to the stationary distribution only for a restricted range of the non-extensivity parameter(s) used in the entropy formula.

2. Probability Distributions

The Poisson distribution is ubiquitous in statistical phenomena where the discreteness of the basic variable, $n$, and its positivity are natural constraints. This distribution,

$$P_n^{\mathrm{POI}} = \frac{a^n}{n!}\, e^{-a} \qquad (3)$$

is parameterized by a single parameter, coinciding with the mean value of the variable $n$: $a = \langle n \rangle$. The logarithm of its characteristic function,

$$\ln \chi^{\mathrm{POI}} = \ln \sum_n e^{-nx}\, P_n^{\mathrm{POI}} = \langle n \rangle \left( e^{-x} - 1 \right) \qquad (4)$$

generates all cumulants of the distribution. Notably, due to $\langle \Delta n^2 \rangle = \langle n \rangle$, the Poisson distribution narrows relative to its mean for a growing mean value of $n$.
This distribution plays a central role in different areas of physics: it occurs as the statistics of spontaneous radioactive decays during given time intervals, as the number of photons in a Glauber coherent state, or as the number distribution of randomly produced hadrons in high energy experiments when uniformly filling the available phase space. In the latter case a given total energy, $E$, is distributed among $n$ newly made particles in an experiment. The statistical weight to find a single particle with individual energy $\omega$ among the products is roughly proportional to the relative phase space, described by a ratio of corresponding $n$-dimensional spheres. Its average over the Poisson distribution is given by the following remarkable formula:

$$\left\langle \frac{(E-\omega)^n}{E^n} \right\rangle = \sum_n \left( 1 - \frac{\omega}{E} \right)^n P_n^{\mathrm{POI}} = e^{-\langle n \rangle\, \omega / E} \qquad (5)$$

Several authors use the power $n-1$ instead of $n$ in this formula. The difference traces back to different concepts of the hyperspheric phase space volume vs. surface. Since for large $n$ it does not matter, we do not go into details here.
This result allows for a Gibbsean interpretation of this statistical factor with the kinetic temperature $T = E/\langle n \rangle$.
Beyond the Poissonian case, sub- and super-Poissonian bosonic states also occur in experiments, with the corresponding Bernoulli or negative binomial multiplicity distributions. The above ideal phase space ratio then averages to a statistical factor generalizing the exponential to a cut power-law form, also known as the canonical distribution belonging to the Tsallis entropy, e.g.,

$$\left\langle \frac{(E-\omega)^n}{E^n} \right\rangle = \sum_n \left( 1 - \frac{\omega}{E} \right)^n P_n^{\mathrm{NBD}} = \left( 1 + \frac{1}{k+1}\, \langle n \rangle\, \frac{\omega}{E} \right)^{-k-1} \qquad (6)$$

for the negative binomial distribution

$$P_n^{\mathrm{NBD}} = \binom{k+n}{n}\, \frac{p^n}{(1+p)^{n+k+1}} \qquad (7)$$

with $\langle n \rangle = p\,(k+1)$.
The above Tsallis–Pareto distribution, Equation (6), can also be obtained as a canonical distribution to the Tsallis entropy formula [3,4,5],

$$K(S) = \frac{1}{1-q} \sum_i \left( p_i^q - p_i \right) \qquad (8)$$

with $q = 1 + 1/(k+1)$ and $T = E/\langle n \rangle$.
These and similar experiences suggest that different entropy formulas can be used as a basis for defining the entropic distance. Earlier attempts mostly defined a divergence formula based on the deformed logarithm, $\ln_{\mathrm{def}}$, of the ratio of the two respective probabilities [31,32]. An alternative approach, which we propose to follow, uses the difference of the deformed logarithms in the definition; it is based on a Bregman type divergence. While for the traditional logarithm function these alternative definitions coincide (due to $\ln(x/y) = \ln x - \ln y$), they differ appreciably for the deformed logarithm functions in general use.

3. Master Equation

Let us denote the probability of having exactly $n$ quanta in the observed system (part of phase space) at time $t$ by $P_n(t)$. This quantity may depend on further parameters, which we suppress in the notation for now. However, in any blind choice case it is proportional to $1/W_n$, the inverse of the number of possible ways to realize the state with $n$ quanta.
A wide range of phenomena can be described by a simplified dynamics, assuming that only one quantum can be exchanged at a time instant. In this case, the evolution of $P_n(t)$ and $W_n$ depends only on the state probabilities of having one more or one less quantum. We consider here only the linearized version of this dynamics, the so-called master equation:

$$\dot{P}_n = (\lambda P)_{n+1} - (\lambda P)_n + (\mu P)_{n-1} - (\mu P)_n \qquad (9)$$

Here, $\lambda_n$ is the transition (decay) rate from a state with $n$ quanta to the state with $n-1$, while $\mu_n$ is the corresponding reverse (growth) rate from $n$ to $n+1$. In this case the occurrence of state $n$ in a huge parallel ensemble (Gibbs ensemble) of systems is fed by both the $(n+1) \to n$ and $(n-1) \to n$ processes and is diminished by the reverse processes. It is of special interest to investigate processes when $\lambda$ and $\mu$ are related by symmetry principles, like time reversal invariance or subsystem–reservoir homogeneity.
We shall quote the general detailed balance solution of Equation (9), $P_n^{\mathrm{eq}} = Q_n$, as the equilibrium distribution. Since the equation is homogeneous and linear in the $Q_n$-s, the overall normalization is not fixed by it. Being the equilibrium distribution, all $\dot{Q}_n$-s should be zero. That condition is fulfilled only if

$$(\lambda Q)_{n+1} = (\mu Q)_n \qquad (10)$$

from which it follows that also

$$(\lambda Q)_n = (\mu Q)_{n-1} \qquad (11)$$

holds, annulling all evolution. Based on this observation, the detailed balance distribution satisfies

$$Q_n = \frac{\prod_{i=0}^{n-1} \mu_i}{\prod_{j=1}^{n} \lambda_j}\; Q_0 \qquad (12)$$

where $Q_0$ can be obtained from the normalization condition

$$\sum_n Q_n = 1 \qquad (13)$$

This normalization is kept during the evolution, since $\sum_n \dot{P}_n = 0$ upon Equation (9).
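A minimal numerical sketch of this statement, with rates of our own choosing ($\lambda_n = n$ and constant $\mu_n = a$, which make Equation (12) a Poisson distribution), evolves Equation (9) by explicit Euler steps on a truncated grid and compares the late-time state to the detailed balance solution:

```python
# Evolve the master equation (9) and compare with the detailed balance
# distribution (12). Rates, grid size and time step are illustrative assumptions.
import numpy as np

N, a = 60, 8.0
n = np.arange(N)
lam = n.astype(float)            # decay rate n -> n-1 (lambda_0 = 0 automatically)
mu = np.full(N, a)               # growth rate n -> n+1 (constant, assumed)
mu[-1] = 0.0                     # truncated grid: no growth out of the top state

# detailed balance solution, Eq. (12): Q_n = Q_{n-1} mu_{n-1} / lambda_n
Q = np.ones(N)
for m in range(1, N):
    Q[m] = Q[m - 1] * mu[m - 1] / lam[m]
Q /= Q.sum()

def pdot(P):
    """Right hand side of the master equation (9)."""
    out = -(lam + mu) * P            # losses (lambda P)_n + (mu P)_n
    out[:-1] += lam[1:] * P[1:]      # gain from the (n+1) -> n decays
    out[1:] += mu[:-1] * P[:-1]      # gain from the (n-1) -> n growths
    return out

P = np.zeros(N)
P[0] = 1.0                           # start far from equilibrium
dt = 0.005
for _ in range(20000):
    P += dt * pdot(P)

print("normalization kept:", P.sum())            # stays 1 by construction
print("max |P - Q| after evolution:", np.abs(P - Q).max())
```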

4. Entropic Distance

Based on the Boltzmann–Gibbs entropy,

$$S = -\sum_n P_n \ln P_n \qquad (14)$$

a distance measure from a reference distribution, $Q_n$, is based on the quantity

$$\sigma(P|Q) = \sum_n P_n \ln \frac{P_n}{Q_n} \qquad (15)$$

By this definition $\sigma(Q|Q) = 0$, and it gives a positive Kullback–Leibler divergence [33,34,35] between the normalized distributions $P_n$ and $Q_n$. This can be easily proved by using the Jensen inequality, $\ln \langle A_i \rangle_{P} \ge \langle \ln A_i \rangle_{P}$, applied to $A_i = Q_i/P_i$. The symmetric combination, based on a sum of Bregman type divergences [28,29,30,31],

$$\rho(P,Q) = \sigma(P|Q) + \sigma(Q|P) = \sum_n \left( P_n - Q_n \right) \ln \frac{P_n}{Q_n} \qquad (16)$$
is then non-negative term by term. For a fixed, time-independent $Q_n$, the master Equation (9) causes $\sigma(P|Q)$ to decrease:

$$\dot{\sigma}(P|Q) = \sum_n \left[ (\lambda P)_{n+1} - (\lambda P)_n + (\mu P)_{n-1} - (\mu P)_n \right] \ln \frac{P_n}{Q_n} \qquad (17)$$

The sum can be re-arranged, by a re-definition of the summation index, to

$$\dot{\sigma}(P|Q) = \sum_n \left[ (\lambda P)_{n+1} - (\mu P)_n \right] \ln \frac{P_n\, Q_{n+1}}{Q_n\, P_{n+1}} \qquad (18)$$

This expression is non-positive for arbitrary $P_n(t)$ values only if

$$\lambda_{n+1}\, Q_{n+1} = \mu_n\, Q_n \qquad (19)$$

Namely, one arrives solely in this case at a $(1-x)\ln x$ type expression under the sum. But this means that the reference distribution, to which any initial distribution approaches if evolving according to the master equation, is the detailed balance distribution! It can be shown that, by virtue of the master equation, the distance $\sigma(Q|P)$ is also decreasing term by term. The symmetrically summed Kullback–Leibler distance to the detailed balance distribution is also reduced, $\dot{\rho}(P,Q) \le 0$, due to

$$\dot{\sigma}(Q|P) = \sum_n \left( P_n - Q_n \right) \frac{\dot{P}_n}{P_n} = -\sum_n (\lambda P)_{n+1}\, \frac{Q_n}{P_n} \left( 1 - x \right)^2 \le 0 \qquad (20)$$

(Here $x = (\mu P)_n / (\lambda P)_{n+1}$.)
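To see this shrinking in practice, the sketch below (our illustration, with the same assumed rates as above; the sampling interval and tolerance are arbitrary) tracks the symmetrized divergence $\rho(P,Q)$ of Equation (16) along the Euler-integrated master equation:

```python
# Track sigma(P|Q) + sigma(Q|P), Eq. (16), along the evolution of Eq. (9);
# by Eqs. (18) and (20) it should decrease monotonically towards zero.
import numpy as np

N, a = 60, 8.0
n = np.arange(N)
lam = n.astype(float)
mu = np.full(N, a); mu[-1] = 0.0
Q = np.ones(N)
for m in range(1, N):
    Q[m] = Q[m - 1] * mu[m - 1] / lam[m]
Q /= Q.sum()

def pdot(P):
    out = -(lam + mu) * P
    out[:-1] += lam[1:] * P[1:]
    out[1:] += mu[:-1] * P[:-1]
    return out

def kl(A, B):
    """Kullback-Leibler divergence sum_n A_n ln(A_n/B_n), Eq. (15)."""
    m = A > 0
    return (A[m] * np.log(A[m] / B[m])).sum()

P = np.ones(N) / N                   # flat initial distribution
dt, last = 0.005, np.inf
for step in range(20001):
    if step % 4000 == 0:
        rho = kl(P, Q) + kl(Q, P)
        assert rho <= last + 1e-9    # monotone shrinking
        print(f"t={step * dt:7.2f}  rho(P,Q)={rho:.6e}")
        last = rho
    P += dt * pdot(P)
```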
In principle, different distance measures (based on deformed entropy formulas) can also be used to investigate this property of a linear master equation. If one considers an entropy defined by a deformed logarithm function,

$$S = -\sum_n P_n \ln_{\mathrm{def}} P_n \qquad (21)$$

then the Kullback–Leibler divergence and the definition of the distance between probability distributions can be modified accordingly as

$$\rho(P,Q) = \sum_n \left( P_n - Q_n \right) \left[ \ln_{\mathrm{def}} P_n - \ln_{\mathrm{def}} Q_n \right] = \sum_n R_n \qquad (22)$$

This measure is zero only if $P_n = Q_n$ for all $n$; otherwise it is positive if $\ln_{\mathrm{def}} P$ is a strictly increasing function, i.e., $\ln'_{\mathrm{def}} P > 0$ on the whole interval $(0,1)$.
The condition of convergence to the $P_n = Q_n$ detailed balance solution then reads as

$$\dot{\rho} = \sum_n \dot{P}_n\, R'_n \le 0 \qquad (23)$$

with

$$R'_n = \frac{\partial R_n}{\partial P_n} = \ln_{\mathrm{def}} P_n - \ln_{\mathrm{def}} Q_n + \left( P_n - Q_n \right) \ln'_{\mathrm{def}} P_n \qquad (24)$$

On the other hand, from the master equation one obtains

$$\dot{P}_n = \Gamma_n \left( x_{n+1} - x_n \right) - \Gamma_{n-1} \left( x_n - x_{n-1} \right) \qquad (25)$$

with $x_n = P_n/Q_n$ and $\Gamma_n = (\lambda Q)_{n+1} = (\mu Q)_n$. Substituting Equation (25) into Equation (23) and re-arranging the summation indices, we finally arrive at the requirement

$$\dot{\rho} = \sum_n \Gamma_n \left( x_{n+1} - x_n \right) \left( R'_n - R'_{n+1} \right) \le 0 \qquad (26)$$

This is satisfied if $R'_n$ is strictly increasing in $x_n$, which means that $dR'_n/dx_n > 0$ for all possible $P_n$ and $Q_n$ values; $\dot{\rho} = 0$ being realized only if $x_{n+1} = x_n = 1$ (the last equality is due to the normalization $\sum_n P_n = \sum_n Q_n x_n = 1$ while $\sum_n Q_n = 1$). The condition for approaching the detailed balance distribution then finally reads as

$$\frac{dR'_n}{dx_n} = Q_n \left[\, 2 \ln'_{\mathrm{def}} P_n + \left( P_n - Q_n \right) \ln''_{\mathrm{def}} P_n \,\right] = Q_n R''_n > 0 \qquad (27)$$
It is clear that this is a more detailed restriction than just the concavity of the entropy formula. The $Q$-independent part of the expression in the bracket equals $(P \ln_{\mathrm{def}} P)''$, whose positivity is the well-known concavity requirement for the entropy formula. In cases when also $\ln''_{\mathrm{def}} P < 0$, this concavity suffices for the uniform approach to the stationary distribution. In the opposite case we arrive at the nontrivial constraint $(P \ln_{\mathrm{def}} P)'' > \ln''_{\mathrm{def}} P$ at $Q_n = 1$.
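Condition (27) is easy to probe numerically. The helper below is our own construction (the grid resolution and lower cutoff are arbitrary): it takes the first and second derivatives of a deformed logarithm as callables and returns the minimum of the bracket $2 \ln'_{\mathrm{def}} P + (P - Q) \ln''_{\mathrm{def}} P$ over a $(P,Q)$ grid, so a negative result signals a violation.

```python
# Numerical probe of condition (27): scan P, Q over (0,1] and report the
# minimum of 2 ln'_def(P) + (P - Q) ln''_def(P). Positivity of this
# bracket (times Q_n >= 0) is the term-by-term approach criterion.
import numpy as np

def criterion_min(dlog, ddlog, lo=1e-4, num=400):
    """Minimum of the bracket of Eq. (27) over a (P, Q) grid in (0, 1]."""
    g = np.linspace(lo, 1.0, num)
    P, Q = np.meshgrid(g, g)
    return (2.0 * dlog(P) + (P - Q) * ddlog(P)).min()

# sanity check with the classical logarithm: ln'(P) = 1/P, ln''(P) = -1/P^2
# gives the bracket (P + Q)/P^2 > 0, so the minimum must come out positive
print(criterion_min(lambda P: 1.0 / P, lambda P: -1.0 / P ** 2))
```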
Let us now test whether some familiar suggestions for deformed logarithms satisfy this constraint. The Tsallis logarithm is defined as

$$\ln_{\mathrm{def}} P_n = \frac{P_n^{\,q-1} - 1}{q-1} \qquad (28)$$

It results in $\ln'_{\mathrm{def}} P_n = P_n^{\,q-2} > 0$, and due to Equation (27) the final constraint is

$$Q_n\, P_n^{\,q-3} \left[\, q\, P_n + (2-q)\, Q_n \,\right] > 0 \qquad (29)$$

This is fulfilled for arbitrary $(P,Q)$ probability distribution pairs if $0 \le q \le 2$; otherwise it might be violated. The classical logarithm is reconstructed when $q = 1$.
The kappa-exponential, promoted by Kaniadakis [36], belongs to the deformed logarithm

$$\ln_{\mathrm{def}} P_n = \frac{P_n^{\,\kappa} - P_n^{-\kappa}}{2\kappa} \qquad (30)$$

In this case one obtains

$$(1+\kappa) \left[\, P_n^{\,1+\kappa} + Q_n\, P_n^{-\kappa} \,\right] + (1-\kappa) \left[\, P_n^{\,1-\kappa} + Q_n\, P_n^{\,\kappa} \,\right] > 0 \qquad (31)$$

as the condition for the correct evolution towards the detailed balance solution by the linear master equation. This is fulfilled universally (i.e., for arbitrary $P_n$ and $Q_n$ in the physical interval $[0,1]$) as long as $\kappa^2 \le 1$. One obtains this result as follows. Both square bracket expressions in Equation (31) are non-negative. Their sum is also non-negative if $-1 \le \kappa \le +1$. Otherwise there are $P_n$ values near zero for which one of the summands, either due to $(1-\kappa) < 0$ or due to $(1+\kappa) < 0$, diverges to negative infinity, spoiling the inequality in this way. The classical logarithm is reconstructed for $\kappa = 0$, and in order to have the same power-law tail in canonical distributions one takes $\kappa = q - 1$. The condition for universal evolution then translates to $0 \le q \le 2$.
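Using the `criterion_min` sketch above (assumed to be in scope; the probe values of $q$ and $\kappa$ are arbitrary picks inside and outside the allowed ranges), both parameter windows can be confirmed numerically:

```python
# Scan the Tsallis (Eq. (28)) and Kaniadakis (Eq. (30)) deformed
# logarithms: the minimum of the bracket should be positive for
# 0 <= q <= 2 and |kappa| <= 1, and turn negative outside.
for q in (0.5, 1.5, 2.0, 2.5):
    dlog = lambda P, q=q: P ** (q - 2)
    ddlog = lambda P, q=q: (q - 2) * P ** (q - 3)
    print(f"Tsallis q={q}:        min bracket = {criterion_min(dlog, ddlog):+.3e}")

for kappa in (0.5, 1.0, 1.2):
    dlog = lambda P, k=kappa: (P ** (k - 1) + P ** (-k - 1)) / 2
    ddlog = lambda P, k=kappa: ((k - 1) * P ** (k - 2) - (k + 1) * P ** (-k - 2)) / 2
    print(f"Kaniadakis kappa={kappa}: min bracket = {criterion_min(dlog, ddlog):+.3e}")
```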
By now there is a much bigger variety of suggested entropy formulas, many of them having the trace form using a deformed logarithm function [37,38,39]. Here we analyse one of them, the $(c,d)$ entropy suggested by Hanel and Thurner [8], since it includes a number of different entropy formulas suggested earlier as particular cases. It is given as

$$S_{c,d} = \sum_{i=1}^{W} \left[\, A\, \Gamma(d+1,\, 1 - c \ln p_i) - B\, p_i \,\right] \qquad (32)$$

with

$$\Gamma(d+1, z) = \int_z^{\infty} t^{\,d}\, e^{-t}\, dt \qquad (33)$$

being the incomplete gamma function. The coefficients $A$ and $B$ can be determined by considering the equiprobability case, $p_i = 1/W$, resulting in $S_{c,d}^{\mathrm{eq.prob.}} = K(\ln W)$, and applying the conditions $\ln_{\mathrm{def}} 1 = -K(0) = 0$ and $\ln'_{\mathrm{def}} 1 = K'(0) = 1$. By this procedure one obtains $A = 1/(\Gamma_1 + c\, \Gamma'_1)$ and $B = A\, \Gamma_1$, with $\Gamma_1 = \Gamma(d+1, 1)$ and $\Gamma'_1 = -1/e$.
The corresponding deformed logarithm of the probability, based on the form of Equation (21), is

$$\ln_{\mathrm{def}} P_n = -\frac{A}{P_n}\, \Gamma(d+1,\, 1 - c \ln P_n) + B \qquad (34)$$

Here, for instance, the choice $c = q$, $d = 0$ gives the Tsallis logarithm, Equation (28), with $A = e/(1-c)$ and $B = 1/(1-c)$.
The first and second derivatives of the deformed logarithm are given as

$$\ln'_{\mathrm{def}} P_n = \frac{A}{P_n^2} \left[\, \Gamma + c\, \Gamma' \,\right] \qquad (35)$$

$$\ln''_{\mathrm{def}} P_n = -\frac{A}{P_n^3} \left[\, 2\Gamma + 3c\, \Gamma' + c^2\, \Gamma'' \,\right] \qquad (36)$$

with

$$\Gamma' = -\frac{P_n^{\,c}}{e} \left( 1 - c \ln P_n \right)^{d} \qquad (37)$$

and

$$\Gamma'' = \frac{P_n^{\,c}}{e} \left( 1 - c \ln P_n \right)^{d-1} \left( 1 - d - c \ln P_n \right) \qquad (38)$$

Here we used the abbreviations $\Gamma$, $\Gamma'$ and $\Gamma''$ without explicitly writing out the arguments $d+1$ and $1 - c \ln P_n$.
All this leads to the following criterion for the term by term approach to the stationary distribution $Q_n$ by a general $P_n$ via the diffusion-like master equation:

$$A \left\{\, Q_n \left[\, 2\Gamma + 3c\, \Gamma' + c^2\, \Gamma'' \,\right] - P_n \left[\, c\, \Gamma' + c^2\, \Gamma'' \,\right] \right\} > 0 \qquad (39)$$

Suppose now that $A > 0$. This requires $e\, \Gamma(d+1, 1) > c$. (In the above $d = 0$ example that translates to $c < 1$.) Furthermore, considering the extreme case $Q_n = 0$ (equivalent to the concavity condition), we deal with

$$c\, \Gamma' + c^2\, \Gamma'' = \frac{c}{e}\, P_n^{\,c} \left( 1 - c \ln P_n \right)^{d-1} \left[\, (c-1)\left( 1 - c \ln P_n \right) - c\, d \,\right] < 0 \qquad (40)$$

This is fulfilled for $0 < c < 1$ and $d > 1 - 1/c$. Once this is so, observing that the left hand side of the inequality Equation (39) is linear in $Q_n$, the condition has to be checked only at the other endpoint of the $Q_n$ interval. At $Q_n = 1$ we have:

$$2\Gamma + 3c\, \Gamma' + c^2\, \Gamma'' > P_n \left[\, c\, \Gamma' + c^2\, \Gamma'' \,\right] \qquad (41)$$

Although this criterion looks involved, we note that at $P_n = 1$ it simplifies to $1/A > 0$, i.e., $\Gamma(d+1, 1) > c/e$, as supposed above. A similar analysis can be done for the opposite case, $A < 0$.
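Criterion (39) can also be probed numerically. The sketch below is ours: it uses SciPy's regularized incomplete gamma function, via $\Gamma(s,z) =$ `gammaincc(s, z) * gamma(s)`, and the $(c,d)$ pairs are arbitrary picks inside and outside the window derived above.

```python
# Probe criterion (39) for the Hanel-Thurner (c,d) entropy on a (P,Q) grid.
import numpy as np
from scipy.special import gammaincc, gamma

def ht_criterion_min(c, d, lo=1e-3, num=300):
    """Minimum of A { Q [2G + 3cG' + c^2 G''] - P [cG' + c^2 G''] }, Eq. (39)."""
    g = np.linspace(lo, 1.0, num)
    P, Q = np.meshgrid(g, g)
    z = 1.0 - c * np.log(P)
    G = gammaincc(d + 1, z) * gamma(d + 1)        # Gamma(d+1, z), Eq. (33)
    G1 = -(z ** d) * np.exp(-z)                   # Gamma', cf. Eq. (37)
    G2 = (z ** (d - 1)) * np.exp(-z) * (z - d)    # Gamma'', cf. Eq. (38)
    A = 1.0 / (gammaincc(d + 1, 1.0) * gamma(d + 1) - c / np.e)
    return (A * (Q * (2 * G + 3 * c * G1 + c ** 2 * G2)
                 - P * (c * G1 + c ** 2 * G2))).min()

# inside the window (0 < c < 1, d > 1 - 1/c) vs. outside of it
for c, d in ((0.8, 1.0), (0.8, -0.5), (1.2, 1.0)):
    print(f"c={c}, d={d}: min = {ht_criterion_min(c, d):+.4e}")
```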
Finally, the recently suggested doubly logarithmic entropy formula, designed by Biró et al. [23,24] for extremely large fluctuations in a reservoir, considers

$$\ln_{\mathrm{def}} P_n = -\ln \left( 1 - \ln P_n \right) \qquad (42)$$

as the deformed logarithm, leading to

$$\frac{1}{P_n^2 \left( 1 - \ln P_n \right)^2} \left[\, 2 P_n - \left( P_n + Q_n \right) \ln P_n \,\right] > 0 \qquad (43)$$

for the evolution condition $\dot{\rho} < 0$. This is fulfilled for any pair of distributions, since $\ln P_n < 0$ is always true.
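A direct grid check (our sketch, with an arbitrary lower cutoff) confirms the universal positivity of the bracket in Equation (43):

```python
# The bracket of Eq. (43), 2P - (P + Q) ln P, stays positive on (0,1]^2
# because ln P <= 0 there; the prefactor 1/(P^2 (1 - ln P)^2) is positive.
import numpy as np

g = np.linspace(1e-6, 1.0, 500)
P, Q = np.meshgrid(g, g)
print("min of 2P - (P+Q) ln P:", (2 * P - (P + Q) * np.log(P)).min())
```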

5. Conclusions

By now the use of non-extensive entropy formulas is widespread. A review of all areas of physics where a power-law tailed canonical distribution occurs can be found in [5]. Furthermore, applications to non-Gaussian velocity distributions in dusty plasmas can be found in [40,41]. In this paper we have concentrated on a mathematical problem in the background: testing generalized entropy formulas with respect to whether they serve as a basis for a well-behaved distance measure which describes an approach to the stationary distribution of state-dependent diffusion-like master equations. We have found that, in some parameter range of the suggested modern entropy formulas, the distance measure based on the difference of deformed logarithms of the respective distributions behaves well, shrinking uniformly, term by term, during the time evolution towards the stationary distribution of general, state-dependent diffusion-like master equations. In particular, while the Tsallis entropy and other generalized entropy formulas (involving further deformed logarithm functions) guarantee the above term by term approach only in a restricted range of parameters, the extreme case belonging to a diverging variance of the inverse temperature in superstatistics, with a log-log form entropy formula, behaves universally well in this respect.

Acknowledgments

This work was supported by the Hungarian National Research Fund (Grant K 104260) and by the Helmholtz International Center for FAIR (Facility for Antiproton and Ion Research) within the framework of the LOEWE program (Landes Offensive zur Entwicklung Wissenschaftlich-ökonomischer Exzellenz) launched by the State of Hesse.

Author Contributions

Tamás Sándor Biró and Zsolt Schram performed and controlled the calculations and prepared the manuscript in alternation. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rényi, A. On Measures of Entropy and Information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; University of California Press: Berkeley, CA, USA, 1961; Volume 1, pp. 547–561. [Google Scholar]
  2. Rényi, A. Probability Theory; North Holland: Amsterdam, The Netherlands, 1970. [Google Scholar]
  3. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  4. Tsallis, C. Nonextensive statistics: Theoretical, experimental and computational evidences and connections. Braz. J. Phys. 1999, 29. [Google Scholar] [CrossRef]
  5. Tsallis, C. Introduction to Nonextensive Statistical Mechanics; Springer: New York, NY, USA, 2009. [Google Scholar]
  6. Almeida, M.P. Generalized entropies from first principles. Physica A 2001, 300, 424–432. [Google Scholar] [CrossRef]
  7. Tempesta, P. Group entropies, correlation laws and zeta functions. Phys. Rev. E 2011, 84, 021121. [Google Scholar] [CrossRef] [PubMed]
  8. Hanel, R.; Thurner, S. A comprehensive classification of complex statistical systems and axiomatic derivation of their entropy and distribution functions. Europhys. Lett. 2011, 93, 20006. [Google Scholar] [CrossRef]
  9. Hanel, R.; Thurner, S. When do generalised entropies apply? How phase space volume determines entropy. Europhys. Lett. 2011, 96, 50003. [Google Scholar] [CrossRef]
  10. Hanel, R.; Thurner, S.; Gell-Mann, M. How multiplicity determines entropy and derivation of the maximum entropy principle for complex systems. Proc. Natl. Acad. Sci. USA 2014, 111, 6905–6910. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Hanel, R.; Thurner, S. Generalized (c,d) entropies and aging random walks. Entropy 2013, 15, 5324–5337. [Google Scholar] [CrossRef]
  12. Kaniadakis, G. Maximum entropy principle and power-law tailed distributions. Eur. Phys. J. B 2009, 70, 3–13. [Google Scholar] [CrossRef]
  13. Kaniadakis, G. Theoretical Foundations and Mathematical Formalism of the Power-Law Tailed Statistical Distributions. Entropy 2013, 15, 3983–4010. [Google Scholar] [CrossRef]
  14. Biró, T.S.; Ván, P. Zeroth law compatibility of non-additive thermodynamics. Phys. Rev. E 2011, 83, 061187. [Google Scholar] [CrossRef] [PubMed]
  15. Biró, T.S. Abstract composition rule for relativistic kinetic theory in the thermodynamical limit. Europhys. Lett. 2008, 84, 56003. [Google Scholar] [CrossRef]
  16. Beck, C. Dynamical foundations of nonextensive statistical mechanics. Phys. Rev. Lett. 2001, 87, 180601. [Google Scholar] [CrossRef]
  17. Beck, C.; Cohen, E.G.D. Superstatistics. Physica A 2003, 322, 267–275. [Google Scholar] [CrossRef]
  18. Abe, S.; Beck, C.; Cohen, E.G.D. Superstatistics, thermodynamics and fluctuations. Phys. Rev. E 2007, 76, 031102. [Google Scholar] [CrossRef] [PubMed]
  19. Biró, T.S. Is There a Temperature? Springer: New York, NY, USA, 2011. [Google Scholar]
  20. Ván, P.; Barnaföldi, G.G.; Biró, T.S.; Ürmössy, K. Nonadditive thermostatistics and thermodynamics. J. Phys. Conf. Ser. 2012, 394, 012002. [Google Scholar] [CrossRef]
  21. Biró, T.S. Ideal gas provides q-entropy. Physica A 2013, 392, 3132–3139. [Google Scholar] [CrossRef]
  22. Biró, T.S.; Ván, P.; Barnaföldi, G.G. Quark-gluon plasma connected to finite heat bath. Eur. Phys. J. A 2013, 49. [Google Scholar] [CrossRef]
  23. Biró, T.S.; Barnaföldi, G.G.; Ván, P. New Entropy Formula with Fluctuating Reservoir. Physica A 2015, 417, 215–220. [Google Scholar] [CrossRef]
  24. Biró, T.S.; Ván, P.; Barnaföldi, G.G.; Ürmössy, K. Statistical Power Law due to Reservoir Fluctuations and the Universal Thermostat Independence Principle. Entropy 2014, 16, 6497–6514. [Google Scholar] [CrossRef]
  25. Abe, S. A note on the q-deformation-theoretic aspect of the generalized entropies in nonextensive physics. Phys. Lett. A 1997, 224, 326–330. [Google Scholar] [CrossRef]
  26. Kaniadakis, G. Relativistic entropy and related Boltzmann kinetics. Eur. Phys. J. A 2009, 40, 275–287. [Google Scholar] [CrossRef]
  27. Ubriaco, M.R. Entropies based on fractional calculus. Phys. Lett. A 2009, 373, 2516–2519. [Google Scholar] [CrossRef]
  28. Naudts, J. Deformed exponentials and logarithms in generalized thermostatistics. Physica A 2002, 316, 323–334. [Google Scholar] [CrossRef]
  29. Naudts, J. Continuity of a class of entropies and relative entropies. Rev. Math. Phys. 2004, 16, 809–822. [Google Scholar] [CrossRef]
  30. Naudts, J. Generalized thermostatistics based on deformed exponential and logarithmic functions. Physica A 2004, 340, 32–40. [Google Scholar] [CrossRef]
  31. Naudts, J. Generalised Exponential Families and Associated Entropy Functions. Entropy 2008, 10, 131–149. [Google Scholar] [CrossRef]
  32. Tsallis, C. Generalized entropy-based criterion for consistent testing. Phys. Rev. E 1998, 58. [Google Scholar] [CrossRef]
  33. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  34. Kullback, S. Letter to the Editor: The Kullback–Leibler distance. Am. Stat. 1987, 41, 340–341. [Google Scholar]
  35. Van Erven, T.; Harremoës, P. Rényi Divergence and Kullback–Leibler Divergence. IEEE Trans. Inf. Theory 2014, 60, 3797–3820. [Google Scholar] [CrossRef]
  36. Kaniadakis, G. Non-linear kinetics underlying generalized statistics. Physica A 2001, 296, 405–425. [Google Scholar] [CrossRef]
  37. Borges, E.P.; Roditi, I. A family of nonextensive entropies. Phys. Lett. A 1998, 246, 399–402. [Google Scholar] [CrossRef]
  38. Schwämmle, V.; Tsallis, C. Two-parameter generalization of the logarithm and exponential functions and the Boltzmann–Gibbs–Shannon entropy. J. Math. Phys. 2007, 48, 113301. [Google Scholar] [CrossRef]
  39. Tsallis, C.; Cirto, L.J.L. Black hole thermodynamical entropy. Eur. Phys. J. C 2013, 73. [Google Scholar] [CrossRef]
  40. Liu, B.; Goree, J. Superdiffusion and non-Gaussian statistics in a driven-dissipative 2D dusty plasma. Phys. Rev. Lett. 2008, 100, 055003. [Google Scholar] [CrossRef] [PubMed]
  41. Amour, R.; Tribeche, M. Variable charge dust acoustic solitary waves in a dusty plasma with non-extensive electron velocity distribution. Phys. Plasmas 2010, 17, 063702. [Google Scholar] [CrossRef]
