Article

Entropic Distance for Nonlinear Master Equation

by Tamás Sándor Biró 1,*,†, András Telcs 1 and Zoltán Néda 2
1 Wigner Research Centre for Physics, Hungarian Academy of Sciences, 1121 Budapest, Hungary
2 Department of Physics, Babeș-Bolyai University, Cluj-Napoca 400084, Romania
* Author to whom correspondence should be addressed.
† A talk based on this work was presented by T.S. Biró at the BGL 2017 conference, Gyöngyös, Hungary.
Universe 2018, 4(1), 10; https://doi.org/10.3390/universe4010010
Submission received: 31 October 2017 / Revised: 11 December 2017 / Accepted: 27 December 2017 / Published: 4 January 2018

Abstract

More and more works deal with statistical systems far from equilibrium, dominated by unidirectional stochastic processes augmented by rare resets. We analyze the construction of the entropic distance measure appropriate for such dynamics. We demonstrate that a power-like nonlinearity in the state probability in the master equation naturally leads to the Tsallis (Havrda–Charvát, Aczél–Daróczy) q-entropy formula when seeking the maximal-entropy state at stationarity. A few possible applications of a certain simple and linear master equation to phenomena studied in statistical physics are listed at the end.

1. Definition and Properties of Entropic Distance

Dealing with the dynamics of classical probabilities, we propose a general recipe for defining the entropic divergence between two probability distributions. Our goal is to handle complex systems with stochastic dynamics, generalized to a nonlinear dependence on the probabilities. For the study of quantum state probabilities and their distance measures we refer to a recent paper [1] and references therein.
Entropic distance, more properly called “entropic divergence”, is traditionally interpreted as a relative entropy: a difference between entropies with and without a prior condition [2]. It is also the Boltzmann–Shannon entropy of a distribution relative to another [3]. Looking at this construction from the viewpoint of a generalized entropy [4], however, the simple difference or logarithm of a ratio can no longer serve as a definition.
Instead, in this paper we explore a reverse-engineering concept: seeking an entropic divergence formula subject to certain desired properties, we treat entropy as a derived quantity. More precisely, we seek entropic divergence formulas appropriate for a given stochastic dynamics, shrinking during the approach to a stationary distribution whenever one exists, and establish the entropy formula from this distance to the uniform distribution. By doing so we serve two goals: (i) having constructed a non-negative entropic distance, we derive an entropy formula which is maximal for the uniform distribution; and (ii) we come as near as possible to the classical difference formula for the relative entropy.
Starting from a given master equation, it is far from trivial which entropic divergence formula is the most suitable for analyzing the stability of a stationary solution. In the present paper we provide a general procedure to obtain an entropic divergence formula for atypical cases. Although we exemplify only the well-known cases of the logarithmic Kullback–Leibler formula and of the Rényi divergence, our result readily generalizes to an infinite number of cases, distinguished by the dependence on the initial state probability at each transition term.
We start our discussion by contrasting the definition of the metric distance, known from geometry, with the basic properties of an entropic distance. The metric distance possesses the following properties:
  • $\rho(P,Q) \ge 0$ for a pair of points P and Q,
  • $\rho(P,Q) = 0$ only for $P = Q$,
  • $\rho(P,Q) = \rho(Q,P)$, a symmetric measure,
  • $\rho(P,Q) \le \rho(P,R) + \rho(R,Q)$, the triangle inequality in elliptic spaces.
The entropic divergence, on the other hand, is neither necessarily symmetric, nor does it necessarily satisfy a triangle inequality. It is, however, subject to the second law of thermodynamics, distinguishing the time arrow from the past to the future. We require the following to hold for a real functional, $\rho[P,Q]$, depending on the distributions $P_n$ and $Q_n$:
  1. $\rho[P,Q] \ge 0$ for a pair of distributions $P_n$ and $Q_n$,
  2. $\rho[P,Q] = 0$ only if the distributions coincide, $P_n = Q_n$,
  3. $\frac{d}{dt}\rho[P,Q] \le 0$ if $Q_n$ is the stationary distribution,
  4. $\frac{d}{dt}\rho[P,Q] = 0$ only for $P_n = Q_n$, i.e., the stationary distribution is unique.
Although this definition is not symmetric in the handling of the normalized distributions $P_n$ and $Q_n$, it is an easy task to consider the symmetrized version, $s[P,Q] \equiv \rho[P,Q] + \rho[Q,P]$. This symmetrized entropic divergence inherits some properties from the fiducial construction. Considering, to begin with, a scaling trace-form entropic divergence, $\rho[P,Q] = \sum_n \sigma(\xi_n)\, Q_n$ with $\xi_n = P_n/Q_n$, we identify the following symmetrized kernel function:
$$s(\xi) := \sigma(\xi) + \xi\,\sigma(1/\xi).$$
The only constraint is to start with a core function, $\sigma(\xi)$, of definite concavity. The Jensen inequality tells us for $\sigma'' > 0$ that
$$\sum_n \sigma(\xi_n)\, Q_n \;\ge\; \sigma\Big(\sum_n \xi_n Q_n\Big) = \sigma\Big(\sum_n P_n\Big) = \sigma(1).$$
To satisfy properties 1 and 2, one simply sets $\sigma(1) = 0$. Interestingly enough, this setting also suffices for the satisfaction of the second law of thermodynamics, formulated above as the further constraints 3 and 4. As a consequence of the symmetrization, it also follows that $s(1) = 0$ and $s'' > 0$.
The symmetrized entropic divergence shows some new, emergent properties. We list its derivatives as follows:
$$\begin{aligned} s(\xi) &= \sigma(\xi) + \xi\,\sigma(1/\xi), \\ s'(\xi) &= \sigma'(\xi) + \sigma(1/\xi) - \frac{1}{\xi}\,\sigma'(1/\xi), \\ s''(\xi) &= \sigma''(\xi) - \frac{1}{\xi^2}\,\sigma'(1/\xi) + \frac{1}{\xi^2}\,\sigma'(1/\xi) + \frac{1}{\xi^3}\,\sigma''(1/\xi) \;=\; \sigma''(\xi) + \frac{1}{\xi^3}\,\sigma''(1/\xi). \end{aligned}$$
The consequences, listed below, can be derived from these general relations:
  • $s(1) = 2\,\sigma(1) = 0$,
  • $s'(1) = \sigma(1) = 0$,
  • $s'' > 0$, hence $\xi_m = 1$ is a minimum,
  • $s(\xi) \ge 0$.
In this way the kernel function, and hence each summand in the symmetrized entropic divergence formula, is non-negative, not only the total sum.
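These kernel properties are easy to check numerically. Below is a minimal sketch (our illustration; the grid and the classical choice $\sigma(\xi) = -\ln\xi$, whose symmetrized kernel is $s(\xi) = (\xi-1)\ln\xi$, are picked for concreteness):

```python
import numpy as np

# Core function sigma(xi) = -ln(xi) and its symmetrization
# s(xi) = sigma(xi) + xi * sigma(1/xi) = (xi - 1) * ln(xi).
sigma = lambda xi: -np.log(xi)
s = lambda xi: sigma(xi) + xi * sigma(1.0 / xi)

xi = np.linspace(0.01, 10.0, 2001)
assert np.all(s(xi) >= -1e-12)                 # each summand is non-negative
assert abs(s(1.0)) < 1e-12                     # s(1) = 0
assert abs(xi[np.argmin(s(xi))] - 1.0) < 0.01  # the minimum sits at xi = 1
print("all symmetrized-kernel properties verified")
```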

2. Entropic Distance Evolution Due to Linear Stochastic Dynamics

Now we study properties 3 and 4 by evaluating the rate of change of the entropic divergence in time. This change is based on the dynamics (time evolution) of the evolving distribution, $P_n(t)$, while the targeted stationary distribution, $Q_n$, is by definition time independent. First we consider a class of stochastic evolutions governed by differential equations for $\dot{P}_n(t) \equiv dP_n/dt$, linear in the distribution $P_n(t)$ [5]. We consider the trace form $\rho[P,Q] = \sum_n Q_n\, \sigma\!\left(\frac{P_n}{Q_n}\right)$ and the background master equation
$$\dot{P}_n = \sum_m \left( w_{nm} P_m - w_{mn} P_n \right).$$
The antisymmetrized sum in the above equation merely ensures the conservation of the norm, $\sum_n P_n = 1$, during the time evolution. Using again the notation $\xi_n = P_n/Q_n$, we obtain
$$\dot\rho = \sum_n \sigma'(\xi_n)\, \dot{P}_n = \sum_{n,m} \sigma'(\xi_n) \left( w_{nm}\, \xi_m Q_m - w_{mn}\, \xi_n Q_n \right).$$
The basic trick is to apply the splitting $\xi_m = \xi_n + (\xi_m - \xi_n)$ to get
$$\dot\rho = \sum_n \sigma'(\xi_n)\, \xi_n \sum_m \left( w_{nm} Q_m - w_{mn} Q_n \right) + \sum_{n,m} \sigma'(\xi_n)\,(\xi_m - \xi_n)\, w_{nm} Q_m.$$
Here the sum in the first term vanishes due to the very definition of the stationary distribution, $Q_n$. For estimating the remaining term, we utilize the Taylor series remainder theorem in the Lagrange form. We recall the Taylor expansion of the kernel function $\sigma(\xi)$,
$$\sigma(\xi_m) = \sigma(\xi_n) + \sigma'(\xi_n)\,(\xi_m - \xi_n) + \frac{1}{2}\,\sigma''(c_{mn})\,(\xi_n - \xi_m)^2,$$
with $c_{mn} \in [\xi_m, \xi_n]$. Here the first derivative term has already occurred in Equation (6). This construction delivers
$$\dot\rho = \sum_{n,m} \left( \sigma(\xi_m) - \sigma(\xi_n) \right) w_{nm} Q_m \;-\; \frac{1}{2} \sum_{n,m} \sigma''(c_{mn})\, \left( \xi_m - \xi_n \right)^2 w_{nm} Q_m.$$
Here the first sum vanishes again: after exchanging the indices m and n in the first summand, the result is proportional to the total balance expression, which is zero for the stationary distribution. With positive transition rates, $w_{nm} > 0$, the approach to the stationary distribution, $\dot\rho \le 0$, is hence proven for all $\sigma'' > 0$. We note that we never used the detailed balance condition for the transition rates, only the vanishing of the total balance, which defines the stationary distribution.
This proof, not recalling the detailed balance condition as Boltzmann's famous H-theorem did, is quite general. Any core function with positive second derivative and the scaling trace form co-act to ensure the correct change in time. Using the traditional choice, $\sigma(\xi) = -\ln\xi$, we have $\sigma'(\xi) = -1/\xi$ and $\sigma''(\xi) = 1/\xi^2 > 0$, indeed satisfying all requirements. The integrated entropic divergence formula (without symmetrization) in this case is the Kullback–Leibler divergence:
$$\rho[P,Q] = \sum_n Q_n \ln\frac{Q_n}{P_n}.$$
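This H-theorem is straightforward to observe numerically. A minimal sketch of ours (the rates are arbitrary positive numbers, no detailed balance imposed): evolve a small linear master equation and watch the divergence shrink monotonically toward the stationary distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
w = rng.uniform(0.1, 1.0, size=(N, N))    # positive rates w[n, m] for m -> n
np.fill_diagonal(w, 0.0)

# Generator: Pdot = L @ P, with loss terms on the diagonal.
L = w - np.diag(w.sum(axis=0))

# Stationary distribution Q: the (normalized) null vector of L.
eigvals, eigvecs = np.linalg.eig(L)
Q = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
Q = Q / Q.sum()

P = rng.dirichlet(np.ones(N))             # arbitrary initial distribution
dt, prev = 1e-3, np.inf
for _ in range(20000):
    cur = np.sum(Q * np.log(Q / P))       # rho[P, Q] as above
    assert cur <= prev + 1e-9             # monotone shrinking (second law)
    prev = cur
    P = P + dt * (L @ P)                  # Euler step of the master equation
print("final divergence:", prev)
```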
There is a rationale behind using the logarithm function: it is the only function additive over products of its argument, mapping factorizing, and hence statistically independent, distributions to an additive entropic divergence kernel. For $P_n^{(12)} = P_n^{(1)} P_n^{(2)}$ also $Q_n^{(12)} = Q_n^{(1)} Q_n^{(2)}$, and therefore $\xi_n^{(12)} = \xi_n^{(1)} \xi_n^{(2)}$. Aiming at $\sigma(\xi^{(12)}) = \sigma(\xi^{(1)}) + \sigma(\xi^{(2)})$, the solution is $\sigma(\xi) = \alpha \ln\xi$. For $\sigma'' > 0$ it must be $\alpha < 0$, so without restricting generality one chooses $\alpha = -1$.
Finally, we would like to treat this entropic divergence as an entropy difference. This is achieved when comparing the stationary distribution to the uniform distribution, $U_n = 1/W$, $n = 1, 2, \ldots, W$. Using the above Kullback–Leibler divergence formula one easily derives
$$\rho[U,Q] = \sum_{n=1}^{W} Q_n \ln(W Q_n) = \ln W + \sum_n Q_n \ln Q_n = S_{BG}[U] - S_{BG}[Q],$$
with
$$S_{BG}[Q] = -\sum_n Q_n \ln Q_n,$$
being the Boltzmann–Gibbs–Planck–Shannon entropy formula. From the Jensen inequality it follows that $\rho[U,Q] \ge 0$, so $S_{BG}[U] \ge S_{BG}[Q]$.
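This identity is easy to verify numerically; a minimal sketch (with a randomly drawn distribution Q, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W = 8
Q = rng.dirichlet(np.ones(W))                    # random normalized distribution
S_BG = lambda p: -np.sum(p * np.log(p))          # Boltzmann-Gibbs-Shannon entropy

rho_UQ = np.sum(Q * np.log(W * Q))               # rho[U, Q] with U_n = 1/W
assert np.isclose(rho_UQ, np.log(W) - S_BG(Q))   # = S_BG[U] - S_BG[Q]
assert rho_UQ >= 0.0                             # hence S_BG is maximal for U
print("rho[U,Q] =", rho_UQ)
```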

3. Entropic Divergence Evolution for Nonlinear Master Equations

Detailed balance is not needed for a more general dynamics either. We consider Markovian dynamics with a master equation nonlinear in the distribution, $P_n$, as
$$\dot{P}_n = \sum_m \left( w_{nm}\, a(P_m) - w_{mn}\, a(P_n) \right).$$
The stationarity condition defines
$$0 = \sum_m \left( w_{nm}\, a(Q_m) - w_{mn}\, a(Q_n) \right).$$
The entropic distance formula is sought in the trace form (but this time without the scaling assumption):
$$\rho[P,Q] = \sum_n \sigma(P_n, Q_n),$$
where the dependence on $Q_n$ is fixed by $\rho[Q,Q] = 0$. The change of the entropic divergence in this case is given by
$$\dot\rho = \sum_{m,n} \frac{\partial\sigma}{\partial P_n} \left( w_{nm}\, a(Q_m)\, \xi_m - w_{mn}\, a(Q_n)\, \xi_n \right)$$
with $\xi_n := a(P_n)/a(Q_n)$. We again put $\xi_m = \xi_n + (\xi_m - \xi_n)$ in the first summand:
$$\dot\rho = \sum_n \frac{\partial\sigma}{\partial P_n}\, \xi_n \sum_m \left( w_{nm}\, a(Q_m) - w_{mn}\, a(Q_n) \right) + \sum_{n,m} \frac{\partial\sigma}{\partial P_n}\, w_{nm}\, a(Q_m)\, \left( \xi_m - \xi_n \right).$$
In order to use the remainder theorem one has to identify
$$\frac{\partial\sigma}{\partial P_n} = \kappa'(\xi_n) = \kappa'\!\left( \frac{a(P_n)}{a(Q_n)} \right).$$
This ensures $\dot\rho < 0$ for any $\kappa'' > 0$ and $P \ne Q$.
We examine the example of the q-Kullback–Leibler, or Rényi, divergence. Starting with the classical logarithmic kernel, $\kappa(\xi) = -\ln\xi$, we have $\kappa''(\xi) = 1/\xi^2 > 0$. Now, having a nonlinear stochastic dynamics with $a(P) = P^q$, the integrated entropic divergence formula (without symmetrization) delivers the Tsallis divergence [6,7,8],
$$\frac{\partial\sigma}{\partial P_n} = -\frac{Q_n^q}{P_n^q}, \qquad \rho[P,Q] = \sum_n Q_n \ln_q\frac{Q_n}{P_n},$$
with
$$\ln_q(x) = \frac{1 - x^{q-1}}{1 - q}$$
being the so-called deformed logarithm with the real parameter q.
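A numerical sketch of ours (with arbitrarily chosen positive rates and q, for illustration only) shows that this Tsallis divergence indeed shrinks monotonically under the nonlinear master equation with $a(P) = P^q$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, q, dt = 4, 1.5, 1e-3
w = rng.uniform(0.5, 1.5, size=(N, N))
np.fill_diagonal(w, 0.0)
a = lambda p: p**q                                  # nonlinear kernel a(P) = P^q

def pdot(P):                                        # sum_m (w_nm a(P_m) - w_mn a(P_n))
    aP = a(P)
    return w @ aP - w.sum(axis=0) * aP

Q = rng.dirichlet(np.ones(N))                       # find Q by relaxing for a long time
for _ in range(200_000):
    Q = Q + dt * pdot(Q)

ln_q = lambda x: (1.0 - x**(q - 1.0)) / (1.0 - q)   # deformed logarithm, as above
P, prev = rng.dirichlet(np.ones(N)), np.inf
for _ in range(50_000):
    cur = np.sum(Q * ln_q(Q / P))                   # Tsallis divergence rho[P, Q]
    assert cur <= prev + 1e-9                       # monotone approach to stationarity
    prev = cur
    P = P + dt * pdot(P)
print("final Tsallis divergence:", prev)
```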
We again would like to interpret this entropic divergence as an entropy difference. The entropic divergence of the stationary distribution from the uniform distribution, $U_n = 1/W$, $n = 1, 2, \ldots, W$, is given by:
$$\rho[U,Q] = \sum_{n=1}^{W} Q_n\, \frac{1 - (W Q_n)^{q-1}}{1 - q} = W^{q-1} \left( S_T[U] - S_T[Q] \right),$$
with $S_T$ being the Tsallis entropy formula:
$$S_T[Q] = \frac{1}{1-q} \sum_n \left( Q_n^q - Q_n \right) = -\sum_n Q_n \ln_q(Q_n).$$
From the Jensen inequality it follows that $\rho[U,Q] \ge 0$, so $S_T[U] \ge S_T[Q]$; i.e., the Tsallis entropy formula is also maximal for the uniform distribution. The factor $W^{q-1}$ signifies non-extensivity: a dependence on the number of states in the relation between the entropic divergence and the relative Tsallis entropy.
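Again, a quick numeric check of the stated relation (our sketch; W and q are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(3)
W, q = 6, 1.7
Q = rng.dirichlet(np.ones(W))
U = np.full(W, 1.0 / W)

ln_q = lambda x: (1.0 - x**(q - 1.0)) / (1.0 - q)
S_T = lambda p: -np.sum(p * ln_q(p))             # Tsallis entropy, as above

rho_UQ = np.sum(Q * ln_q(W * Q))                 # divergence of Q from uniform
assert np.isclose(rho_UQ, W**(q - 1.0) * (S_T(U) - S_T(Q)))
assert rho_UQ >= 0.0                             # Tsallis entropy maximal at U
print("rho[U,Q] =", rho_UQ)
```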

4. Master Equation for Unidirectional Growth and Reset

With the particular choice of the transition rates, $w_{nm} = \mu_m\, \delta_{n-1,m} + \gamma_m\, \delta_{n,0}$, one describes a local growth process augmented with direct resetting transitions from any state to the ground state, labeled by the index zero [9]. The corresponding master equation
$$\dot{P}_n = \mu_{n-1} P_{n-1} - \left( \mu_n + \gamma_n \right) P_n$$
is terminated at n = 1, and the equation for the n = 0 state takes care of the conservation of the norm:
$$\dot{P}_0 = \sum_{n=1}^{\infty} \gamma_n P_n - \mu_0 P_0.$$
For the stationary distribution one obtains
$$Q_n = \frac{\mu_{n-1}}{\mu_n + \gamma_n}\, Q_{n-1} = \cdots = \frac{\mu_0 Q_0}{\mu_n} \prod_{j=1}^{n} \left( 1 + \frac{\gamma_j}{\mu_j} \right)^{-1},$$
where $Q_0$ has to be obtained from the normalization. Table 1 summarizes some well-known probability density functions (PDFs) which emerge as stationary distributions of this simplified stochastic dynamics upon different choices of the growth and reset rates $\mu_n$ and $\gamma_n$; a numerical check of the product formula is sketched below.
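In this sketch of ours the rates (a linear $\mu_n$ and a constant $\gamma_n$, cf. the Waring row of Table 1) are illustrative; the point is that the product formula nullifies the right-hand side of the master equation:

```python
import numpy as np

N = 200
n = np.arange(N)
mu = 1.0 + 0.5 * n               # linear growth rate mu_n (illustrative)
gamma = np.full(N, 0.2)          # constant reset rate gamma_n

# Q_n = (mu_0 Q_0 / mu_n) * prod_{j=1}^{n} (1 + gamma_j/mu_j)^(-1)
factors = np.concatenate(([1.0], 1.0 / (1.0 + gamma[1:] / mu[1:])))
Q = (mu[0] / mu) * np.cumprod(factors)
Q = Q / Q.sum()                  # fixes Q_0 via the normalization

# Stationarity: mu_{n-1} Q_{n-1} - (mu_n + gamma_n) Q_n = 0 for n >= 1.
residual = mu[:-1] * Q[:-1] - (mu[1:] + gamma[1:]) * Q[1:]
print("max |residual| =", np.abs(residual).max())   # at machine precision
```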
In the continuous limit we obtain
$$\frac{\partial}{\partial t} P(x,t) = -\frac{\partial}{\partial x} \left( \mu(x)\, P(x,t) \right) - \gamma(x)\, P(x,t),$$
with the stationary distribution
$$Q(x) = \frac{K}{\mu(x)}\, e^{-\int_0^x \frac{\gamma(u)}{\mu(u)}\, du}.$$
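For instance (our numerical sketch, with illustrative parameter values), constant rates $\mu(x) = \mu$ and $\gamma(x) = \gamma$ give $Q(x) = (\gamma/\mu)\, e^{-\gamma x/\mu}$, the exponential stationary law listed in Table 1:

```python
import numpy as np

mu0, gamma0 = 1.0, 0.5
x = np.linspace(0.0, 40.0, 4001)
dx = x[1] - x[0]
mu = np.full_like(x, mu0)
gamma = np.full_like(x, gamma0)

# Q(x) = (K / mu(x)) * exp(-int_0^x gamma(u)/mu(u) du), K from normalization.
integral = np.concatenate(([0.0], np.cumsum((gamma / mu)[:-1] * dx)))
Q = np.exp(-integral) / mu
Q = Q / (Q.sum() * dx)                               # normalize on the grid

Q_exact = (gamma0 / mu0) * np.exp(-gamma0 * x / mu0)
print("max deviation:", np.abs(Q - Q_exact).max())   # small discretization error
```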
Finally we derive a bound for the entropy production in the continuous model of unidirectional growth with resetting.
First we study the time evolution of the ratio, $\xi(t,x) = P(x,t)/Q(x)$. Using $P = \xi Q$ we get from Equation (25):
$$Q\, \frac{\partial\xi}{\partial t} = -\xi\, \frac{\partial (\mu Q)}{\partial x} - \mu Q\, \frac{\partial\xi}{\partial x} - \gamma Q\, \xi.$$
Using the same equation for the stationary $Q(x)$ and dividing by $Q$, we obtain
$$\frac{\partial\xi}{\partial t} = -\mu(x)\, \frac{\partial\xi}{\partial x}.$$
Now we turn to the evolution of the entropic divergence,
$$\rho(t) \equiv \int_0^\infty s(\xi(t,x))\, Q(x)\, dx.$$
With the symmetrized kernel, $s(\xi) = \sigma(\xi) + \xi\,\sigma(1/\xi) \ge 0$, and using $\partial s/\partial t = -\mu(x)\, \partial s/\partial x$, one gets the following distance evolution, considering the boundary condition $\xi(t,0) = 1$ and $s(1) = 0$:
$$\frac{d\rho}{dt} = -\int_0^\infty s(\xi(t,x))\, Q(x)\, \gamma(x)\, dx.$$
We note that for the Kullback–Leibler divergence the following symmetrized kernel function has to be used: $\sigma(\xi) = -\ln\xi$ leads to $s(\xi) = (\xi - 1)\ln\xi$, and in this way ensures $\frac{d\rho}{dt} \le 0$.
In order to obtain a lower bound for the speed of the approach to stationarity, we use again the Jensen inequality for s ( ξ ) :
$$\int p(x)\, s(\xi(x))\, dx \;\ge\; s\!\left( \int p(x)\, \xi(x)\, dx \right)$$
with an arbitrary $p(x) \ge 0$ satisfying $\int p(x)\, dx = 1$. For our purpose we choose $p(x) = \gamma(x)\, Q(x) / \int \gamma Q\, dx$. This leads to the following result:
$$\frac{d\rho}{dt} \;\le\; -\langle\gamma\rangle \cdot s\!\left( \frac{\langle\gamma\rangle_t}{\langle\gamma\rangle} \right) = \left( \langle\gamma\rangle - \langle\gamma\rangle_t \right) \cdot \ln\frac{\langle\gamma\rangle_t}{\langle\gamma\rangle}.$$
Note that the controlling quantity is the expectation value of the resetting rate with respect to the evolving distribution, $\langle\gamma\rangle_t = \int \gamma P\, dx$, while $\langle\gamma\rangle = \int \gamma Q\, dx$ is its stationary counterpart, so that $\int p(x)\, \xi(x)\, dx = \langle\gamma\rangle_t / \langle\gamma\rangle$. Since $s(\xi)$ reaches its minimum value of zero only at the argument 1, the entropic divergence $\rho(t)$ stops changing only once the stationary distribution is achieved. In all other cases it shrinks.
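The Jensen step underlying this bound is easy to sanity-check numerically. A minimal sketch (the grid, the weights and the ratio field are arbitrary; the convex kernel is the symmetrized Kullback–Leibler one from above):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 1001)
dx = x[1] - x[0]
s = lambda xi: (xi - 1.0) * np.log(xi)       # convex symmetrized KL kernel

for _ in range(100):
    p = rng.uniform(0.1, 1.0, x.size)
    p = p / (p.sum() * dx)                   # arbitrary normalized weight p(x)
    xi = rng.uniform(0.2, 5.0, x.size)       # arbitrary positive ratio field
    lhs = np.sum(p * s(xi)) * dx             # <s(xi)> under p
    rhs = s(np.sum(p * xi) * dx)             # s(<xi>)
    assert lhs >= rhs - 1e-12                # Jensen inequality
print("Jensen inequality verified in all trials")
```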

5. Conclusions

Summarizing: in this paper we have presented a construction strategy for the entropic distance formula, designed to shrink for a given, wide class of stochastic dynamics. The entropy formula itself was then derived by inspecting this distance between the uniform distribution and the stationary PDF of the corresponding master equation. In this way, for linear master equations the well-known Kullback–Leibler definition arises, while for a nonlinear dependence on the occupation probabilities one always arrives at an accordingly modified expression. In particular, for a general power-like dependence the Tsallis q-entropy occurs as the “natural” relative entropy interpretation of the proper entropic divergence. In the continuous version of the growth and reset master equation, a dissipative probability flow supported by an inflow at the boundary, a lower bound was given for the shrinking speed of the symmetrized entropic divergence using the Jensen inequality.
To finish this paper we would like to make some remarks on real-world applications of the mathematical treatment discussed above. Among possible applications of the growth and resetting model, we mention network degree distributions, showing exponential behavior for constant rates and a Tsallis–Pareto distribution [10] (in the discrete version a Waring distribution [11,12]) for a linear preference in the growth rate, $\mu_n = \alpha(n + b)$. For high-energy particle abundance (hadron multiplicity) distributions the negative binomial PDF is an excellent approximation [13], when both rates $\mu$ and $\gamma$ are linear functions of the state label. For middle and small settlement size distributions a log-normal PDF arises, achievable with a linear growth rate, $\mu(x)$, and a logarithmic reset rate, $\gamma(x) \sim \ln x$. Citations of scientific papers and Facebook shares and likes also follow a scaling Tsallis–Pareto distribution [14,15], characteristic of constant resetting and linear growth rates. While wealth seems to be distributed according to a Pareto-law tail, middle-class incomes rather show a gamma distribution, stemming from linear reset and growth rates. For a review of such applications see our forthcoming work.

Acknowledgments

This work has been supported by the Hungarian National Bureau for Research Development and Innovation, NKFIH under project Nr. K 123815.

Author Contributions

This paper is based on the talk given by T.S. Biró at the Bolyai-Gauss-Lobachevskii conference, BGL 2017, in Gyöngyös, Hungary. T.S. Biró invented, suggested and worked out the unidirectional reset model and delivered the derivation of the evolution of the entropic distance based on the stochastic master equations. A. Telcs contributed the lower-bound estimate on the rate of change of the entropic distance based on the Jensen inequality. Z. Néda worked on the resetting model, its generalization to expanding sample space processes, and on collecting and convincingly interpreting data for the applications.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PDF: probability density function

References

  1. Hardal, A.Ü.C.; Müstecaplıoğlu, Ö.E. Rényi Divergences, Bures Geometry and Quantum Statistical Thermodynamics. Entropy 2016, 18, 455.
  2. Mahnke, R.; Kaupužs, J.; Lubashevsky, I. Physics of Stochastic Processes; Wiley & Sons: Hoboken, NJ, USA, 2009.
  3. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
  4. Csiszár, I. Axiomatic Characterizations of Information Measures. Entropy 2008, 10, 261–273.
  5. Biró, T.S.; Schram, Z.; Jenkovszky, L. Entropy Production During Hadronization of a Quark-Gluon Plasma. arXiv 2017, arXiv:1707.07912.
  6. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
  7. Havrda, J.; Charvát, F. Quantification method of classification processes. Concept of structural α-entropy. Kybernetika 1967, 3, 30–35.
  8. Aczél, J.; Daróczy, Z. On Measures of Information and Their Characterizations; Academic Press: New York, NY, USA, 1975.
  9. Biró, T.; Néda, Z. Dynamical Stationarity as a Result of Sustained Random Growth. Phys. Rev. E 2017, 95, 032130.
  10. Thurner, S.; Kyriakopoulos, F.; Tsallis, C. Unified model for network dynamics exhibiting nonextensive statistics. Phys. Rev. E 2007, 76, 036111.
  11. Irwin, J.O. The Place of Mathematics in Medical and Biological Statistics. J. R. Stat. Soc. A 1963, 126, 1–45.
  12. Krapivsky, P.L.; Rodgers, G.J.; Redner, S. Degree Distributions of Growing Networks. Phys. Rev. Lett. 2001, 86, 5401–5404.
  13. Bíró, G. The Application of the New Generation of Detector Simulations in High Energy Physics for the Investigation of Identified Hadron Spectra. Master's Thesis, Eötvös University, Budapest, Hungary, 2016. Available online: http://birogabesz.web.elte.hu/MScDiplomamunka (accessed on 30 October 2017).
  14. Schubert, A.; Glänzel, W. A dynamic look at a class of skew distributions. A model with scientometric applications. Scientometrics 1984, 6, 149–167.
  15. Néda, Z.; Varga, L.; Biró, T.S. Science and Facebook: The same popularity law! PLoS ONE 2017, 12, e0179656.
Table 1. Summary of rates and stationary PDFs.

γ_n, γ(x)        μ_n, μ(x)               Q_n, Q(x)
const            const                   geometrical → exponential
const            linear                  Waring → Tsallis/Pareto
const            sublinear power         Weibull
const            quadratic polynomial    Pearson
const            exp                     Gompertz
ln(x/a)          αx                      log-normal
linear           const                   Gauss
α(ax − c)        αx                      Gamma
