Tsallis Entropy, Escort Probability and the Incomplete Information Theory

Non-extensive statistical mechanics is a powerful framework for describing complex systems. Tsallis entropy, the core of this theory, has remained an unproven assumption. Several authors have tried to derive the Tsallis entropy axiomatically. Here we follow the work of Wang (EPJB, 2002) and use the incomplete information theory to retrieve the Tsallis entropy. We modify the axioms of the incomplete information theory to accommodate the escort probability, and we obtain the correct form of the Tsallis entropy, in contrast with Wang's result.


Introduction
Entropy is the key concept for extracting universal features of a system from its microscopic details. In statistical mechanics two forms are used to describe the concept of entropy. In the first form, the entropy is defined as the logarithm of the total number of microstates in phase/Hilbert space multiplied by a constant coefficient (the Boltzmann viewpoint). In the second form it is written in terms of the probabilities of occupying the microstates (the Gibbs viewpoint) [1]. Using simple algebraic manipulation, which can be found in every textbook of statistical mechanics, the equality of these two definitions is proved. In information theory, the entropy measures the uncertainty in an ensemble. Shannon derived a form for the entropy similar to the Gibbs relation. He used the axioms of information theory that are intuitively correct for non-interacting systems in physics [2]. Jaynes showed that all results obtained in statistical mechanics are an immediate consequence of the Shannon entropy [3]. In spite of all the successes of the Boltzmann-Gibbs-Shannon (BGS) entropy in deriving the thermodynamics of systems from their mechanical details under equilibrium conditions, it is not able to describe the complexity of natural systems. In 1988 Tsallis introduced a new definition of entropy which successfully describes the statistical features of complex systems [4]:

S_q = k \frac{1 - \sum_{i=1}^{N} p_i^q}{q - 1},

where k is a positive constant (choosing an appropriate unit we can set it to one), p_i stands for the probability of occupation of the i-th state of the system, N counts the known microstates of the system, and q is a positive real parameter. Tsallis entropy is non-extensive, which means that if two identical systems are combined, the entropy of the combined system is not equal to the sum of the entropies of its subsystems. A simple calculation shows that in the limit q → 1 the Tsallis entropy tends to the BGS entropy. Non-extensive statistical mechanics, which is established by optimization of the Tsallis entropy in
presence of appropriate constraints, can describe the properties of many physical systems [5-8]. Since the thermodynamic limit is a meaningless concept for nanosystems, many familiar thermodynamical results cannot be derived for them. It has been shown that nanosystems obey non-extensive statistical mechanics [9,10]. The existence of long range interactions between a system's entities violates the condition of non-interacting components which is required to derive the BGS entropy. In this case simulations show that the entropy and the energy are non-extensive [11,12]. There are many natural or social systems with small size or long range interactions between their components, and non-extensive statistical mechanics can be used to study such systems. It was found that the wealth distribution of an economic agent in a conservative exchange market can be classified by the Tsallis entropic index q, which distinguishes two different regimes, the large and the small size market [13]. Tsallis statistics can also be used to obtain the spatio-temporal and magnitudinal distributions of seismic activities [14,15]. Many other applications of non-extensive statistical mechanics are found in the references of the Tsallis book [8].
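The q → 1 limit mentioned above can be illustrated numerically; a minimal sketch (with k = 1 and an arbitrary example distribution) comparing the Tsallis entropy against the BGS entropy:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1), with k = 1."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def bgs_entropy(p):
    """Boltzmann-Gibbs-Shannon entropy S = -sum_i p_i ln p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.3, 0.2]           # example probability distribution
for q in (1.1, 1.01, 1.001):  # approach q -> 1 from above
    print(q, tsallis_entropy(p, q))
print(bgs_entropy(p))         # the q -> 1 limit
```

As q approaches 1 the Tsallis values converge to the BGS value, in agreement with the limit stated in the text.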
Several attempts have been made to extract the Tsallis entropy from the axioms of information theory. Among them the works of Wang are the most plausible [16,17], but he obtained a form for the non-extensive entropy that differs from the usual form of the Tsallis entropy.
Here we follow Wang's method in using the axioms of the incomplete information theory, except that we employ the escort probability, which will be introduced in the next section. The usual form of the Tsallis entropy, expressed in terms of the escort probability, is the outcome of our approach.
We discuss incomplete knowledge and its relation to information theory in the next section. The third section is devoted to introducing the escort probability and deriving the Tsallis entropy from the axioms of the incomplete information theory. Finally we summarize our work and discuss its advantages and disadvantages.

Incomplete Information Theory
The set of outcomes of a random variable, {x_1, …, x_N}, together with their corresponding occurrence probabilities, {p_1, …, p_N}, is called an ensemble. N is the number of distinct outcomes. The main goal of statistical mechanics is the determination of the outcome probabilities subject to some constraints. The constraints are usually given as equations of the form f(x_1, …, x_N, p_1, …, p_N) = 0. According to the maximum entropy principle, the entropy, which is a function of the probabilities, S(p_1, …, p_N), should be maximized under equilibrium conditions. The method of Lagrange multipliers can be used to optimize the entropy subject to the constraints.
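The maximum entropy principle can be illustrated numerically; a minimal sketch (with a hypothetical three-outcome variable and a mean-value constraint) comparing the analytic Boltzmann solution with a brute-force search over constrained distributions:

```python
import math

def shannon(p):
    """BGS entropy S = -sum_i p_i ln p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

x = [0.0, 1.0, 2.0]   # outcome values (hypothetical example)
U = 0.8               # constraint on the mean, <x> = U

# Boltzmann solution p_i ~ exp(-beta * x_i); find beta by bisection on the mean.
def mean_boltzmann(beta):
    w = [math.exp(-beta * xi) for xi in x]
    return sum(xi * wi for xi, wi in zip(x, w)) / sum(w)

lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_boltzmann(mid) > U:   # mean decreases as beta grows
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
w = [math.exp(-beta * xi) for xi in x]
p_star = [wi / sum(w) for wi in w]

# Brute-force search over distributions satisfying (approximately) the same mean.
best = 0.0
n = 400
for i in range(1, n):
    for j in range(1, n - i):
        p = [1 - (i + j) / n, i / n, j / n]
        if abs(sum(pi * xi for pi, xi in zip(p, x)) - U) < 1e-3:
            best = max(best, shannon(p))

print(shannon(p_star), best)  # the Boltzmann distribution attains the maximum
```

No searched distribution exceeds the entropy of the Boltzmann solution, which is the content of the maximum entropy principle for this constraint.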
The functional form of the entropy is derived from the axioms of information theory. These axioms consider the entropy of a microcanonical ensemble, in which all probabilities are equal to each other. In this case the entropy is equal to the Hartley measure, I(N) = S(1/N, …, 1/N). For other cases the entropy is obtained directly from the Hartley measure. The axioms of information theory are:

(i) I(1) = 0,
(ii) I(e) = 1,
(iii) I(N + 1) > I(N),
(iv) I(N × M) = I(N) + I(M),
(v) S(p_1, …, p_N) = S(p_1, …, p_n) + \sum_{a=1}^{n} p_a S(p_{a,1}/p_a, …, p_{a,N_a}/p_a).

The first axiom states that for a certain outcome the entropy is equal to zero. The second axiom fixes a unit for measuring the uncertainty. The Hartley measure is an increasing function of the number of outcomes, as stated in the third axiom. The fourth axiom states the extensivity of the Hartley measure and the entropy. The fifth axiom is called the composition rule; it states that if we partition the set of outcomes into n distinct subsets, \sum_{a=1}^{n} N_a = N, then any subset can be considered as a new event, with probability p_a of an outcome belonging to the a-th subset. The uncertainty of the mother set in terms of its outcomes is equal to the uncertainty of the mother set in terms of its subsets plus the weighted combination of the uncertainties of the subsets. The fifth axiom allows us to derive the entropy for an ensemble with unequal outcome probabilities. It can be shown that the unique function satisfying the above axioms is I(N) = ln N; the Shannon entropy is then obtained by a simple calculation,

S(p_1, …, p_N) = −\sum_{i=1}^{N} p_i \ln p_i.

The last two axioms rely on complete knowledge of the subsets, \sum_{a=1}^{n} p_a = 1. Sometimes this assumption fails [16-19] and the sum of the probabilities of the events becomes less than unity, \sum_{a=1}^{n} p_a < 1, which means that we do not know the set of events completely and some events may remain unknown. For example, earthquakes of magnitude greater than ten on the Richter scale have not been recorded yet, but they are possible. Our knowledge about seismic events is incomplete in this respect [15]. Wang, in his works
assumed that a real parameter q exists such that \sum_a p_a^q = 1. He also changed the last two axioms of information theory to include the incomplete information condition [16,17]:

(iv′) I(N × M) = I(N) + I(M) + (1 − q) I(N) I(M),
(v′) S(p_1, …, p_N) = S(p_1, …, p_n) + \sum_{a=1}^{n} p_a^q S(p_{a,1}/p_a, …, p_{a,N_a}/p_a).

The second axiom is also changed into I((1 + (1 − q))^{1/(1−q)}) = 1 for simplicity of the algebraic manipulations. It is clear that

I(N) = \frac{N^{1−q} − 1}{1 − q}

is a monotonically increasing function and satisfies axiom (iv′). The entropy can then be obtained by using axiom (v′); it has a different form than the usual Tsallis entropy. In the following section we introduce the escort probability and modify the axioms of information theory accordingly. Using the same method as Wang's work, we obtain the Tsallis entropy in terms of the escort probability.
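Both the classical composition rule (v) for the Shannon entropy and the deformed additivity (iv′) of Wang's generalized Hartley measure can be checked numerically; a sketch with arbitrary example values:

```python
import math

def shannon(p):
    """BGS entropy S = -sum_i p_i ln p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def hartley_q(n, q):
    """Wang's generalized Hartley measure I(N) = (N^(1-q) - 1) / (1 - q)."""
    return (n ** (1.0 - q) - 1.0) / (1.0 - q)

# Composition rule (v): four outcomes partitioned into two subsets {1,2}, {3,4}.
p = [0.1, 0.2, 0.3, 0.4]
pa = [p[0] + p[1], p[2] + p[3]]                 # subset probabilities
cond = [[p[0] / pa[0], p[1] / pa[0]],           # conditional distributions
        [p[2] / pa[1], p[3] / pa[1]]]
lhs_v = shannon(p)
rhs_v = shannon(pa) + sum(pa[a] * shannon(cond[a]) for a in range(2))
print(lhs_v, rhs_v)   # equal: the grouping axiom holds for the Shannon form

# Deformed additivity (iv'): I(NM) = I(N) + I(M) + (1 - q) I(N) I(M).
q, N, M = 0.7, 5, 8
lhs_iv = hartley_q(N * M, q)
rhs_iv = (hartley_q(N, q) + hartley_q(M, q)
          + (1 - q) * hartley_q(N, q) * hartley_q(M, q))
print(lhs_iv, rhs_iv)  # equal: the q-deformed additivity holds

# q -> 1 recovers the logarithm, I(N) -> ln N.
print(hartley_q(100, 1.0001), math.log(100))
```

The last line also confirms that the generalized measure reduces to the complete information result I(N) = ln N as q → 1.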

Tsallis Entropy in Terms of the Escort Probability
Corresponding to any probability p_a in an incomplete set of probabilities, we can define an effective real probability π_a as follows [20-23]:

π_a = \frac{p_a^q}{\sum_{b} p_b^q},

where q is a real positive parameter. This is the actual probability which can be measured from empirical data, and it is called the escort probability. In order to include the escort probabilities in the entropy we should change the last two axioms of information theory. The dependence of the entropy on the escort probabilities represents the incompleteness of our knowledge:

(iv′) I(N × M) = I(N) + I(M) + g(q) I(N) I(M),

while in the composition rule (v′) the subset uncertainties are now weighted by the escort probabilities π_a. The function g(q) will be determined later as a consequence of the entropy maximization principle.
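A minimal sketch of the escort transformation, assuming an example incomplete distribution whose raw probabilities sum to less than one:

```python
def escort(p, q):
    """Map raw probabilities p_a to escort probabilities pi_a = p_a^q / sum_b p_b^q."""
    w = [pa ** q for pa in p]
    Z = sum(w)
    return [wi / Z for wi in w]

p = [0.5, 0.3, 0.1]   # incomplete: sum(p) = 0.9 < 1
pi = escort(p, 0.8)
print(pi, sum(pi))    # the escort probabilities are normalized to unity
```

Whatever the value of q > 0, the escort probabilities form a complete (normalized) distribution, which is why they can be identified with measurable frequencies.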
If we choose

I(N) = \frac{N^{g(q)} − 1}{g(q)}

as a solution satisfying axiom (iv′), then the entropy takes the following form:

S(π_1, …, π_N) = \frac{1 − \sum_{a=1}^{N} π_a^{g(q)+1}}{g(q)}.
A straightforward calculation shows that the above entropy approaches the BGS form in the limit g(q) → 0. Any result obtained in the incomplete knowledge condition should approach its counterpart in the complete information state as g(q) tends to zero. As an example we consider the canonical ensemble. The above entropy should be maximized subject to the normalization of the escort probabilities,

\sum_{a} π_a = 1,

and the constraint on the expectation value of the random variable,

\sum_{a} π_a x_a = U.
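The g(q) → 0 limit can be verified numerically; a sketch, assuming the generalized form S_g(π) = (1 − \sum_a π_a^{g+1})/g for the entropy and an arbitrary escort distribution:

```python
import math

def s_g(pi, g):
    """Generalized entropy S_g = (1 - sum_a pi_a^(g+1)) / g (assumed form)."""
    return (1.0 - sum(p ** (g + 1.0) for p in pi)) / g

def bgs(pi):
    """BGS entropy, the expected g -> 0 limit."""
    return -sum(p * math.log(p) for p in pi if p > 0)

pi = [0.2, 0.3, 0.5]
for g in (0.1, 0.01, 0.001):  # let g(q) tend to zero
    print(g, s_g(pi, g))
print(bgs(pi))
```

The generalized values converge to the BGS entropy, consistent with the requirement that incomplete information results recover the complete information ones.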
Using the method of Lagrange multipliers, the escort probabilities are derived. To this end we define the auxiliary function

R(π_1, …, π_N) = S(π_1, …, π_N) − α(\sum_{a} π_a − 1) − γβ(\sum_{a} π_a x_a − U),

where α and γβ are the Lagrange multipliers associated with the normalization and expectation value constraints.
The equation δR(π_i) = 0 gives the stationary form of the escort probability, and applying the normalization condition fixes the remaining multiplier. The stationary distribution approaches π_a ∼ exp(−βx_a) in the limit g(q) → 0 if we put γ = 1/g(q). To ensure that the derived form of the escort probability maximizes the entropy, we examine the second derivative of the function R,

\frac{d^2 R(π_a)}{dπ_a^2} = −(g(q) + 1) π_a^{g(q)−1}.

This gives the first criterion for the function g(q),

g(q) + 1 > 0.

Monotonicity of the entropy gives the second criterion,

\frac{dg(q)}{dq} > 0.

These criteria do not determine the function g(q) uniquely. In the special case g(q) = q − 1 we recover the usual form of the Tsallis entropy,

S(π_1, …, π_N) = \frac{1 − \sum_{a=1}^{N} π_a^q}{q − 1}.

In the above proof we used the escort expectation value, ⟨x⟩ = \sum_{a} π_a x_a, as the definition of the expectation value of the random variable x.
Other definitions of the expectation value [24,25] also lead to the same form of the Tsallis entropy.
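The two closing observations, that the choice g(q) = q − 1 recovers the Tsallis form and that the concavity criterion requires g(q) + 1 > 0, can be checked numerically. A sketch, assuming the generalized form S_g(π) = (1 − \sum_a π_a^{g+1})/g for the entropy:

```python
def s_g(pi, g):
    """Generalized entropy S_g = (1 - sum_a pi_a^(g+1)) / g (assumed form)."""
    return (1.0 - sum(p ** (g + 1.0) for p in pi)) / g

def tsallis(pi, q):
    """Usual Tsallis entropy, here written in terms of the escort probabilities."""
    return (1.0 - sum(p ** q for p in pi)) / (q - 1.0)

pi = [0.1, 0.2, 0.3, 0.4]
for q in (0.5, 1.5, 2.0):
    print(q, s_g(pi, q - 1.0), tsallis(pi, q))   # identical for g(q) = q - 1

# Concavity: d^2 R / d pi_a^2 = -(g + 1) pi_a^(g - 1) is negative iff g + 1 > 0,
# so the stationary escort distribution is indeed a maximum of the entropy.
g = 0.5
second = -(g + 1.0) * 0.3 ** (g - 1.0)
print(second)
```

With g(q) = q − 1 the two expressions coincide for every tested q, and the second derivative is negative whenever g(q) + 1 > 0, as stated in the text.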

Conclusions
In conclusion, we have changed the axioms of information theory to include the escort probability as a signature of the incompleteness of our knowledge. In a way similar to Wang's method, we obtain the usual form of the Tsallis entropy, but in terms of the escort probability.