Open Access
*Entropy* **2008**, *10*(2), 124–130; https://doi.org/10.3390/entropy-e10020124

Article

Incremental Entropy Relation as an Alternative to MaxEnt

^{1} Exact Sciences Faculty, National University La Plata (UNLP), IFLP-CCT-CONICET, Argentina

^{2} CREG-UNLP and CONICET, Argentina; Department of Physics, University of Pretoria, South Africa; Instituto Carlos I de Fisica Teorica y Computacional, Universidad de Granada, Spain

^{3} CBPF, Rio de Janeiro, Brazil

^{4} Departament de Fisica and IFISC, Universitat de les Illes Balears, 07122 Palma de Mallorca, Spain

^{*} Author to whom correspondence should be addressed.

Received: 5 March 2008; in revised form: 14 June 2008 / Accepted: 22 June 2008 / Published: 24 June 2008

## Abstract

We show that, to generate the statistical operator appropriate for a given system, one can use, as an alternative to Jaynes’ MaxEnt approach (which refers to the entropy S itself), the increments δS in S. To such an effect, one uses the macroscopic thermodynamic relation that links δS to changes in (i) the internal energy E and (ii) the remaining M relevant extensive quantities A_i, i = 1, . . . , M, that characterize the context one is working with.

**Keywords:** MaxEnt; probability density function; Thermodynamics’ relations

## 1. Introduction

Here we wish to address an issue belonging to the foundations of statistical mechanics (SM) by revisiting the role of the entropy S in Jaynes’ SM formulation [1,2], based upon the MaxEnt axiom: entropy is to be extremized (with suitable constraints). Of course, SM microscopically “explains” thermodynamics. The latter can be axiomatized, as is well known, using four macroscopic postulates [3]. Now, Jaynes’ axioms for SM and those of thermodynamics belong to altogether different worlds. The former speak of “observers’ ignorance”, a concept foreign to the language of thermodynamics, which is couched in laboratory parlance. Of course, there is nothing to object to in this respect. However, one might ask whether it is possible to find an SM counterpart of Jaynes’ axiomatics that speaks a language similar to that of thermodynamics. We address this issue below and try to provide answers. Our starting point is a brief revisitation of the axioms of thermodynamics. Its four postulates are enumerated below and are entirely equivalent to the celebrated three laws of thermodynamics [3]:

- For every system there exists a quantity E, called the internal energy, such that a unique E-value is associated with each of its states. For a closed system, the difference between the E-values of two states equals the work required to bring the system, while adiabatically enclosed, from one state to the other.
- There exist particular states of a system, called equilibrium states, that are uniquely determined by E and a set of, say, M extensive (macroscopic) parameters R_ν. The number and characteristics of the R_ν depend on the nature of the system [4].
- For every system there exists a state function S(E, ∀R_ν) that (i) always grows if internal constraints are removed and (ii) is a monotonically growing function of E. S remains constant in quasi-static adiabatic changes.
- S and the temperature vanish for the state of minimum energy and are ≥ 0 for all other states.

From axiom 3 one extracts, in particular, the following two statements, essential for our purposes:

- Statement 3a) For every system there exists a state function S, a function of E and the R_ν:

$$S = S(E, R_1, \ldots, R_M). \qquad (1)$$

- Statement 3b) S is a monotonic (growing) function of E, so that one can interchange the roles of E and S in (1) and write

$$E = E(S, R_1, \ldots, R_M). \qquad (2)$$
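The differential consequence of statement 3b follows from elementary calculus: taking the total differential of E(S, R_1, . . . , R_M) and naming the partial derivatives gives

$$dE = \left(\frac{\partial E}{\partial S}\right)_{R_1,\ldots,R_M} dS + \sum_{\nu=1}^{M}\left(\frac{\partial E}{\partial R_\nu}\right)_{S,\,R_{\mu\neq\nu}} dR_\nu \equiv T\,dS + \sum_{\nu=1}^{M} P_\nu\, dR_\nu,$$

with T ≡ (∂E/∂S) the temperature and P_ν ≡ (∂E/∂R_ν) the generalized pressures.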

Eq. (2) clearly indicates that

$$dE = T\,dS + \sum_{\nu=1}^{M} P_\nu\, dR_\nu, \qquad (3)$$

with P_ν generalized pressures and the temperature T defined as [3]

$$T = \left(\frac{\partial E}{\partial S}\right)_{R_1, \ldots, R_M}. \qquad (4)$$

## 2. Our goal

We will show here, as our goal, that one can give Eq. (3) the status of an axiom of statistical mechanics! We first introduce a set of new extensive quantities A_ν, appropriately related (see below) to the R_ν, and postulate for statistical mechanics that (**Axiom (1)**, the incremental entropy postulate)

$$dE = T\,dS + \sum_{\nu} P_\nu\, dA_\nu, \qquad (5)$$

a macroscopic statement whose microscopic import will become evident once we establish the relation between the R_ν and the A_ν (Axiom 2 below). Obviously, more is needed for the microscopic theory one is building up here. The minimum amount of microscopic information that we would still have to add to our axiomatics in order to obtain all the results of equilibrium statistical mechanics is precisely such a relation. At this point we merely conjecture that the following statements might suffice:

**Axiom (2)** If there are W accessible microscopic states labelled by i, with microscopic probabilities p_i, then

**(2-i)**

$$S = S(p_1, \ldots, p_W), \qquad (6)$$

and

**(2-ii)**

$$E = E(\mathcal{F}),$$

where $\mathcal{F}$ stands for any set of additional quantities on which S and E may putatively also depend. Moreover,

**(2-iii)** E and the external parameters are now to be regarded as expectation values of suitable operators, respectively the Hamiltonian H and the quantum operators corresponding to the macroscopic quantities R_ν, here called ${\mathcal{R}}_{\nu}$, so that

$$A_\nu \equiv \langle \mathcal{R}_\nu \rangle. \qquad (7)$$

Thus the A_ν, and also E (we realize at this stage), will depend on the eigenvalues of these operators and on the probability set. One may recognize now that **Axiom (2)** is just a form of Boltzmann’s “atomic” conjecture, pure and simple: macroscopic quantities are statistical averages evaluated using a microscopic probability distribution [5]. In order to prove that the above two postulates indeed allow one to erect the mighty SM edifice, we show below that they are equivalent to Jaynes’ SM axiomatics [1]. A brief sketch reviewing MaxEnt follows, for the reader’s benefit.

#### 2.1. Information theory and the MaxEnt approach of Jaynes

The main idea of information theory (IT) is to associate a degree of knowledge (or ignorance) I to any normalized probability distribution (PD) p_i, (i = 1, . . . , W), I being determined by a functional of the {p_i}. I is called an information measure [6,7,8]. Shannon, IT’s founder [7,8], proposed for I in 1948 the form

$$I = -k \sum_{i=1}^{W} p_i \ln p_i, \qquad (8)$$

k being an appropriate information unit (for instance, the bit). The quantum I-version replaces the probability distribution by the density operator ρ and the sum by the trace operation. The main SM objective thus gets translated into the issue of finding the PD (or the density operator) that best describes the system of interest. Jaynes appeals for this to his MaxEnt postulate, the only one needed in his formulation [6]:

**MaxEnt axiom.** Assume your prior knowledge about the system is given by the values of M expectation values

$$A_1 = \langle \mathcal{R}_1 \rangle, \; \ldots, \; A_M = \langle \mathcal{R}_M \rangle. \qquad (9)$$

Then ρ is uniquely fixed by extremizing I(ρ) subject to ρ-normalization plus the constraints given by the M conditions (9) constituting our assumed foreknowledge. This leads, after a Lagrange-constrained extremizing process, to the introduction of M Lagrange multipliers λ_ν, which one assimilates to the generalized pressures P_ν. The truth, the whole truth, nothing but the truth [6]: if the entropic measure that reflects our ignorance were not maximized, we would be inventing information that we do not possess.

In performing the variational process Jaynes discovers that, provided one sets k = k_B in (8) (k_B being Boltzmann’s constant), the information measure equals the entropic one. Thus I ≡ S, the equilibrium thermodynamic entropy, with the caveat that our prior knowledge must refer just to extensive quantities. Once ρ is at hand, it yields complete microscopic information on the system of interest. The path to be followed should be clear now: we need to prove that the incremental-entropy axiomatics, i.e., the set (5)–(7), is equivalent to MaxEnt.
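For concreteness, the MaxEnt prescription can be sketched numerically. The three-level energies and the value ⟨H⟩ = 0.8 below are illustrative assumptions, not taken from the paper; the multiplier β attached to the energy constraint (β = 1/k_BT in equilibrium) is fixed by bisection, and a perturbation preserving both constraints is checked to lower the Shannon measure:

```python
import numpy as np

eps = np.array([0.0, 1.0, 2.0])   # illustrative microstate energies
E_target = 0.8                    # assumed foreknowledge: <H> = 0.8

def mean_energy(beta):
    """Canonical mean energy for multiplier beta."""
    w = np.exp(-beta * eps)
    return float(eps @ w / w.sum())

# Fix the Lagrange multiplier by enforcing <H> = E_target.
# mean_energy is decreasing in beta, so plain bisection suffices.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_energy(mid) > E_target:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)

p = np.exp(-beta * eps)
p /= p.sum()                      # normalization constraint
S = -float(p @ np.log(p))         # Shannon measure (8) with k = 1

# Any perturbation preserving both constraints must lower the entropy:
d = np.array([1.0, -2.0, 1.0])    # sum(d) = 0 and eps @ d = 0
q = p + 1e-3 * d
S_q = -float(q @ np.log(q))
```

The extremum is the canonical form p_i ∝ exp(−βε_i): every distribution compatible with the same foreknowledge carries strictly less Shannon entropy, which is the operational content of the MaxEnt axiom.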

## 3. The proof

We cover here only the classical instance; the quantal extension is straightforward. We start with the generic differential change p_i → p_i + dp_i, constrained by Eq. (5): the differentials dp_i must be such that (5) holds. Of course, S, the A_j, and E will change with the dp_i, and the concomitant changes are constrained by (5). We need specify neither the explicit form of the information measure nor the way in which mean values are evaluated; in both cases several possibilities have been advanced during the last 20 years [9]. For a detailed discussion of this issue, Ref. [10] is recommended. The ingredients of our scenario are

- an arbitrary, smooth function I ≡ S({p_i}), where S({p_i}) is a concave function,
- M quantities A_ν that represent mean values of extensive physical quantities ${\mathcal{R}}_{\nu}$; these physical quantities ${\mathcal{R}}_{\nu}$ take, for the micro-state i, the value ${a}_{i}^{\nu}$ with probability p_i,
- another arbitrary smooth, monotonic function g(p_i) (g(0) = 0; g(1) = 1), customarily taken (when the ordinary logarithmic Shannon entropy is used) to be just g(p_i) = p_i.
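The role of g(p_i) can be made concrete with a toy computation. The probabilities, micro-state values, and the particular choice g(p) = p^q (q = 0.9) below are assumptions for illustration only; p^q is one admissible smooth, monotonic g with g(0) = 0 and g(1) = 1, in the spirit of the generalized measures of Ref. [9], while g(p) = p recovers the ordinary expectation value:

```python
import numpy as np

def gen_mean(a, p, g):
    """Generalized mean value: A = sum_i a_i * g(p_i)."""
    return float(np.sum(a * g(p)))

p = np.array([0.5, 0.3, 0.2])    # a normalized toy probability set
a = np.array([1.0, 2.0, 3.0])    # illustrative micro-state values a_i

A_lin = gen_mean(a, p, lambda x: x)       # g(p) = p: ordinary <a> = 1.7
A_q   = gen_mean(a, p, lambda x: x**0.9)  # a deformed (nonlinear) mean
```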

We deal then with (we take A_1 ≡ E)

$$A_\nu = \sum_{i=1}^{W} a_i^{\nu}\, g(p_i), \qquad (11)$$

$$E = \sum_{i=1}^{W} \epsilon_i\, g(p_i), \qquad (12)$$

where ϵ_i is the energy associated to the microstate i. The probability variations dp_i in turn generate corresponding changes dS, dA_ν, and dE in, respectively, S, the A_ν, and E:

$$dS = \sum_{i=1}^{W} \frac{\partial S}{\partial p_i}\, dp_i, \qquad dA_\nu = \sum_{i=1}^{W} a_i^{\nu}\, g'(p_i)\, dp_i, \qquad dE = \sum_{i=1}^{W} \epsilon_i\, g'(p_i)\, dp_i. \qquad (13)$$

#### 3.1. Part I

The essential point that we are introducing is to enforce obedience to

$$T\,dS = dE + \sum_{\nu} \lambda_\nu\, dA_\nu, \qquad (14)$$

with T the temperature and λ_ν generalized pressures: λ_ν = −P_ν. We use now the expressions (11), (12), and (13) so as to cast (14) in terms of the probabilities, according to the change p_i → p_i + dp_i. If we expand the resulting equation up to first order in the dp_i, it is immediately found that the following set of equations ensues [11,12] (remember that the Lagrange multipliers are identical to minus the generalized pressures P_ν of Eq. (3)):

$$T\,\frac{\partial S}{\partial p_i} = \Big(\epsilon_i + \sum_{\nu} \lambda_\nu\, a_i^{\nu}\Big)\, g'(p_i), \qquad i = 1, \ldots, W, \qquad (15)$$

where primes denote p_i-derivatives. Eq. (15) should yield one and just one p_i-expression (one probability distribution), which it indeed does (see [11,12]). We do not need here, however, an explicit expression for this probability distribution, as will be immediately realized below.

#### 3.2. Part II

Alternatively, one proceeds à la MaxEnt. This requires extremizing the entropy S subject to the usual constraints in E, the A_ν, and normalization. The ensuing Jaynes variational treatment is seen in [11,12], after appropriately dealing with delicate normalization-related issues [11,12], to yield the very same set of Eqs. (15)! These equations thus arise out of two clearly separate treatments: (I) our methodology, based on Eqs. (5) and (7), and (II) following the MaxEnt prescriptions. This entails that MaxEnt and our axiomatics co-imply each other, becoming thus equivalent ways of building up statistical mechanics.

## 4. Conclusions

We have seen that the set of equations (15) yields a probability distribution that coincides with the PD provided by either

- the MaxEnt SM axiomatics of Jaynes’, or
- our two postulates, namely the incremental entropy relation (5) and the Boltzmann conjecture (7); i.e.,
  - the macroscopic thermodynamic relation dE = TdS + ∑_ν P_ν dA_ν, together with
  - Boltzmann’s conjecture of an underlying microscopic scenario ruled by microstate probability distributions.

## Acknowledgments

This work was partially supported by (i) the MEC grant FIS2005-02796 (Spain) and FEDER (EU), (ii) the grant FQM 2445, Junta de Andalucia (Spain), and (iii) PIP6036-CONICET (Argentine Agency).

## References

- Jaynes, E.T. Papers on Probability, Statistics and Statistical Physics; Rosenkrantz, R.D., Ed.; Reidel: Dordrecht, The Netherlands, 1987.
- Grandy, W.T.; Milonni, P.W. (Eds.) Physics and Probability: Essays in Honor of Edwin T. Jaynes; Cambridge University Press: Cambridge, UK, 1993.
- Desloge, E.A. Thermal Physics; Holt, Rinehart and Winston: New York, NY, USA, 1968.
- The MaxEnt treatment assumes that these macroscopic parameters are the expectation values of appropriate operators.
- Lindley, D. Boltzmann’s Atom; The Free Press: New York, NY, USA, 2001.
- Katz, A. Principles of Statistical Mechanics: The Information Theory Approach; Freeman: San Francisco, CA, USA, 1967.
- Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. **1948**, 27, 379–423.
- Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: New York, NY, USA, 1991.
- Gell-Mann, M.; Tsallis, C. (Eds.) Nonextensive Entropy: Interdisciplinary Applications; Oxford University Press: Oxford, UK, 2004.
- Ferri, G.L.; Martinez, S.; Plastino, A. Equivalence of the four versions of Tsallis’s statistics. J. Stat. Mech. **2005**, P04009.
- Curado, E.; Plastino, A. Equivalence between maximum entropy principle and enforcing dU = TdS. Phys. Rev. E **2005**, 72, 047103.
- Curado, E.; Plastino, A. Generating statistical distributions without maximizing the entropy. Physica A **2006**, 365, 24–27.

© 2008 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).