Thermodynamics Beyond Molecules: Statistical Thermodynamics of Probability Distributions

Themis Matsoukas
Department of Chemical Engineering, Pennsylvania State University, University Park, PA 16802, USA
Entropy 2019, 21(9), 890; https://doi.org/10.3390/e21090890
Submission received: 26 July 2019 / Revised: 5 September 2019 / Accepted: 11 September 2019 / Published: 13 September 2019
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

Statistical thermodynamics has a universal appeal that extends beyond molecular systems, and yet, as its tools are being transplanted to fields outside physics, the fundamental question, what is thermodynamics, has remained unanswered. We answer this question here. Generalized statistical thermodynamics is a variational calculus of probability distributions. It is independent of physical hypotheses but provides the means to incorporate our knowledge, assumptions and physical models about the stochastic process that gives rise to the probability in question. We derive the familiar calculus of thermodynamics via a probabilistic argument that makes no reference to physics. At the heart of the theory is a space of distributions and a special functional that assigns probabilities to this space. The maximization of this functional generates the mathematical network of thermodynamic relationships. We obtain statistical mechanics as a special case and make contact with Information Theory and Bayesian inference.


1. Introduction

What is thermodynamics? The question, so central to physics, has been asked numerous times and has been given nearly as many different answers. To quote just a few: thermodynamics is “the branch of science concerned with the relations between heat and other forms of energy involved in physical and chemical processes” [1]; “the study of the restrictions on the possible properties of matter that follow from the symmetry properties of the fundamental laws of physics” [2]; “concerned with the relationships between certain macroscopic properties of a system in equilibrium” [3]; “a phenomenological theory of matter” [4]. Such statements, while strictly true, focus on aspects that are far too narrow to converge to a definition of sufficient generality as to what to call thermodynamics or how to carry it outside physics. And yet, since Gibbs [5], Shannon [6] and Jaynes [7] drew quantitative connections between entropy and probability distributions, thermodynamics has been spreading to new fields. The tools of statistical thermodynamics are now used in network theory [8], ecology [9], epidemics [10], neuroscience [11], financial markets [12], and in the study of complexity in general. What motivates the impulse to apply thermodynamics to such vastly diverse problems? Is thermodynamics even applicable outside classical or quantum mechanical systems? And if so, what is the scope of its applicability?
Here we answer these fundamental questions: Statistical thermodynamics is variational calculus applied to probability distributions, and by extension to stochastic processes in general; it is independent of physical hypotheses but provides the means to incorporate our knowledge and model assumptions about the particular problem. The fundamental ensemble is a space of probability distributions sampled via a bias functional. The maximization of this functional expresses a distribution—any distribution—via a set of parameters (microcanonical partition function, canonical partition function and generalized temperature) that are connected through a set of mathematical relationships that we recognize as the familiar equations of thermodynamics. Entropy and the second law have simple interpretations in this theory. We obtain statistical mechanics as a special case and make contact with Information Theory and Bayesian inference.

2. The Calculus of Statistical Thermodynamics

Before we derive a theory of generalized thermodynamics we review the key elements of the standard thermodynamic calculus. The central quantity of interest in statistical thermodynamics is the probability of a microstate. For a system of N particles in volume V at temperature T this probability is given by the exponential (canonical) distribution,
$$\mathrm{Prob}(\text{microstate } i) = \frac{e^{-\beta E_i}}{Q},$$
where Q is the canonical partition function, $E_i$ is the energy of microstate i, $\beta = 1/k_B T$ and $k_B$ is Boltzmann's constant. The corresponding probability to find the system at energy E is obtained by summing over all microstates with fixed energy E and is given by
$$\mathrm{Prob}(E) = \frac{\Omega\, e^{-\beta E}}{Q},$$
where $\Omega$ is the microcanonical partition function, also equal to the number of microstates with energy E, volume V and number of particles N. The mean energy $\bar E$ and the parameters $\Omega$, Q and $\beta$ that appear in Equation (2) are interrelated:
$$\log\Omega = \beta \bar E + \log Q,$$
$$\beta = \frac{\partial \log\Omega}{\partial \bar E},$$
$$\bar E = -\frac{\partial \log Q}{\partial \beta},$$
$$\frac{\partial^2 \log\Omega}{\partial \bar E^2} \le 0.$$
Equations (3) and (4) establish that $\log\Omega(\bar E, V, N)$ and $\log Q(\beta, V, N)$ are Legendre pairs; Equation (6) states that $\log\Omega$ is concave. In addition, any probability distribution $p_i$ that could be assigned to microstate i under fixed $(\bar E, V, N)$ satisfies the inequality
$$-\sum_i p_i \log p_i \le \log\Omega,$$
with the equal sign only for the canonical distribution in Equation (1). This inequality is the statistical expression of the second law. If we identify $k_B \log\Omega$ with entropy and $-(\log Q)/\beta$ with free energy, Equations (3)–(6) represent the familiar relationships of classical thermodynamics. Along with Equations (2) and (7), which provide the probabilistic context, the above set comprises the core relationships of statistical thermodynamics. The physical assumptions and postulates that produce these results can be found in any standard textbook (for example [3]). We will now show that this network of mathematical relationships arises naturally via a probabilistic construction that makes no reference to physics and endows any probability distribution $f(x)$, $x \ge 0$, with the thermodynamic relationships shown here.
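As a quick numerical illustration of this set of relationships, the sketch below (our own toy example, not from the paper) uses a system of N independent two-level units with level energies 0 and 1, for which $\Omega$ and Q are known in closed form, and checks Equations (3), (4) and (6) up to corrections that vanish in the thermodynamic limit.

```python
import numpy as np
from math import lgamma

# Hypothetical toy model (not from the paper): N independent two-level units
# with level energies 0 and 1, so a microstate with n excited units has E = n
# and Omega(E) = binomial(N, n). Units are chosen so that k_B = 1.
N, beta = 2000, 0.7

log_Q = N * np.log1p(np.exp(-beta))                   # Q = (1 + e^{-beta})^N
E_bar = N * np.exp(-beta) / (1.0 + np.exp(-beta))     # Eq. (5): -d log Q/d beta

def log_Omega(n):                                     # log binomial(N, n)
    return lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)

n = round(E_bar)

# Eq. (3): log Omega = beta*E_bar + log Q, up to corrections of order log N
print(log_Omega(n), beta * E_bar + log_Q)

# Eq. (4): beta = d log Omega/d E, estimated by a central difference
print(beta, (log_Omega(n + 1) - log_Omega(n - 1)) / 2.0)

# Eq. (6): concavity of log Omega (the second difference is negative)
print(log_Omega(n + 1) - 2 * log_Omega(n) + log_Omega(n - 1))
```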

3. Theory

3.1. Random Sampling

Consider the continuous probability distribution $h_0(x) \ge 0$, $x \in (x_a, x_b)$, normalized to unit area. We define a discrete grid $x_i = x_a + (i-1)\Delta$ with $\Delta = (x_b - x_a)/K$, $i = 1, 2, \ldots, K+1$, such that the probability to sample a value of x in the ith interval is
$$p_i = h_0(x_i)\,\Delta,$$
if $\Delta$ is sufficiently small. We sample N values from $h_0$ and construct the frequency distribution $\mathbf{n} = (n_1, n_2, \ldots)$, where $n_i$ is the number of sampled values that lie in the ith interval. The probability to observe distribution $\mathbf{n}$ in a random sample of size N is
$$P(\mathbf{n}\,|\,\mathbf{p}, N) = N! \prod_i \frac{p_i^{\,n_i}}{n_i!},$$
and its logarithm is
$$\log P(\mathbf{n}\,|\,\mathbf{p}, N) = -\sum_i n_i \log\frac{n_i}{p_i N} + O(\log N),$$
where $\mathbf{p} = (p_1, p_2, \ldots)$. We define $h(x_i) = n_i/(N\Delta)$ and take the limit $\Delta \to 0$, $N \to \infty$ in Equation (10). We then have $P(\mathbf{n}\,|\,\mathbf{p}, N) \to \delta P(h\,|\,h_0, N)$ and
$$\frac{\log \delta P(h\,|\,h_0, N)}{N} = -\int h(x)\log\frac{h(x)}{h_0(x)}\,dx \equiv -D(h\,\|\,h_0),$$
where $\delta P(h\,|\,h_0, N)$ is the probability to sample region $(h, h+\delta h)$ in the continuous space of distributions while taking a random sample of size N from $h_0$ (all integrals are understood to be taken over the domain of $h_0$). Any probability distribution $h(x)$ defined in the domain of $h_0$ may materialize in a random sample taken from $h_0$. Clearly, the most probable distribution in this space is $h_0$, and indeed $h_0$ maximizes Equation (11). For all other distributions we must have $\delta P(h\,|\,h_0,N) \le \delta P(h_0\,|\,h_0,N) = 1$, or
$$D(h\,\|\,h_0) \ge 0,$$
with the equal sign only for $h = h_0$. The probability in the limit $N \to \infty$ to obtain $h_0$ relative to the probability to obtain any other distribution is
$$\frac{\delta P(h_0\,|\,h_0,N)}{\delta P(h\,|\,h_0,N)} = e^{N D(h\,\|\,h_0)}.$$
Accordingly, $h_0$ is overwhelmingly more probable than any other distribution in its domain.
These results make contact with a broader mathematical literature. The quantity $D(h\,\|\,h_0)$ in Equation (11) is the relative entropy (Kullback–Leibler divergence) of distribution h with respect to $h_0$, and plays an important role in Information Theory [13,14,15]; Equation (12) is the Gibbs inequality, a well-known property of relative entropy; the relationship between relative entropy and the probability of a sample drawn from $h_0$ is a known result in the theory of large deviations [16]. The key point we take from these results is that the process of sampling distribution $h_0$ establishes a probability space of distributions with the same domain as $h_0$—these are the distributions obtained as samples. The Gibbs inequality states the elementary fact that the most probable distribution in this space is $h_0$. We will now generalize this probability space and the Gibbs inequality.
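A minimal numerical sketch of this construction (our own illustration; the discretized exponential distributions and the sample size are arbitrary choices): to produce a distribution h that differs appreciably from $h_0$, we draw the sample from a second exponential, evaluate the probability, Equation (9), of observing that frequency distribution in a random sample from $h_0$, and compare $-\log P/N$ with $D(h\,\|\,h_0)$, as implied by Equations (10)–(13).

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)

# Discretized exponentials on [0, 10] (illustrative choices, not from the
# paper): h0 has rate 1; the sample is drawn from a rate-0.7 exponential.
edges = np.linspace(0.0, 10.0, 51)
def cell_probs(rate):
    p = np.diff(-np.exp(-rate * edges))
    return p / p.sum()

p  = cell_probs(1.0)        # generating distribution h0
p2 = cell_probs(0.7)        # the distribution we actually sample

N = 100_000
counts = rng.multinomial(N, p2)          # frequency distribution n
h = counts / N

# Relative entropy D(h || h0), Eq. (11); empty cells contribute zero
mask = counts > 0
D = np.sum(h[mask] * np.log(h[mask] / p[mask]))

# Probability of observing n in a random sample from h0, Eq. (9)
logP = gammaln(N + 1) - gammaln(counts + 1).sum() + (counts * np.log(p)).sum()

# Eqs. (10)-(13): -log P / N ~ D, so h is exponentially unlikely relative to h0
print(D, -logP / N)
```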

3.2. Biased Sampling

Random sampling always converges to the distribution from which the sample is taken; the probability of all other distributions vanishes as $N \to \infty$. We now modify the sampling process in order to obtain some different limiting distribution $h^*$ while still sampling from $h_0$. We do this by applying a bias, such that a random sample of size N from $h_0$ is accepted with probability proportional to $W[Nh]$, where $Nh$ is the frequency distribution of the sample and W is a functional with the homogeneous property $\log W[Nh] = N \log W[h]$. We require homogeneity so that the limiting distribution is independent of N as $N \to \infty$. By virtue of homogeneity, $\log W$ can be written as
$$\log W[h] = \int h(x) \log w(x;h)\,dx,$$
where $\log w(x;h)$ is the variational derivative of $\log W[h]$ with respect to h. The probability to obtain a sample with distribution h under this biased sampling is
$$P(h\,|\,\mathbf{p}, W, N) = \frac{W[Nh]}{r_N}\, N! \prod_i \frac{p_i^{\,n_i}}{n_i!},$$
where $r_N$ is a normalizing constant; the logarithm of this probability in the continuous limit is
$$\frac{\log \delta P(h\,|\,h_0, W, N)}{N} = -\int h(x) \log\frac{h(x)}{w(x;h)\,h_0(x)}\,dx - \log r.$$
We define the probability functional
$$\log \varrho[h\,|\,h_0, W] \equiv -\int h(x) \log\frac{h(x)}{w(x;h)\,h_0(x)}\,dx - \log r,$$
so that the probability to observe a distribution within $(h, h+\delta h)$ in a biased sample taken from $h_0$ is $\delta P(h\,|\,h_0, W, N) = \varrho^N[h\,|\,h_0, W]$. The ratio of the probability to sample the most probable distribution $h^*$ relative to that for any other distribution in the continuous limit is
$$\frac{\delta P(h^*\,|\,h_0, W, N)}{\delta P(h\,|\,h_0, W, N)} = \left(\frac{\varrho[h^*\,|\,h_0, W]}{\varrho[h\,|\,h_0, W]}\right)^{N}.$$
As in random sampling, the most probable distribution is overwhelmingly more probable than any other feasible distribution. Then we must have
$$\varrho[h\,|\,h_0, W] \le 1,$$
with the equal sign only for the most probable distribution $h^*$. This distribution is (see Supplementary Information)
$$h^*(x) = \frac{w(x; h^*)\,h_0(x)}{r},$$
with r determined by normalization. If we choose $w(x;h) = f(x)/h_0(x)$, where f is any other normalized distribution in the domain of $h_0$, we obtain $h^* = f$. Therefore, a suitable bias can always be constructed such that any distribution in the domain of $h_0$ may be obtained as the most probable distribution by biased sampling of $h_0$; conversely, any distribution $h_0$ may be used to generate a sample of any other distribution f over the same domain by biased sampling.
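A small sketch of this construction (our illustration; the specific $h_0$ and f are arbitrary choices): for the linear functional of Equation (14), the bias of a sample factorizes over its points, so biased sampling reduces to rejection against $w(x) = f(x)/h_0(x)$, and the accepted points are distributed according to f.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative choices (not from the paper): generating distribution
# h0(x) = e^{-x} and target f(x) = 4 x e^{-2x} (a gamma density with mean 1).
def w(x):                         # bias w(x) = f(x)/h0(x) = 4 x e^{-x}
    return 4.0 * x * np.exp(-x)

w_max = 4.0 / np.e                # maximum of w, attained at x = 1

# For the linear functional of Eq. (14), log W[Nh] = sum_k log w(x_k), so the
# bias of a sample factorizes over its points and biased sampling reduces to
# accepting each draw from h0 with probability w(x)/w_max. The accepted points
# then follow h*(x) = w(x) h0(x)/r = f(x).
x = rng.exponential(1.0, 1_000_000)             # random sample from h0
keep = x[rng.uniform(0.0, w_max, x.size) < w(x)]

print(keep.mean(), keep.var())    # should approach 1.0 and 0.5, the moments of f
```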

3.3. Canonical Sampling

We now choose the generating distribution $h_0$ to be the normalized exponential distribution with parameter $\beta$,
$$h_0(x) = \beta e^{-\beta x}, \quad 0 \le x < \infty,$$
and write the probability functional $\varrho$ in Equation (17) as
$$\log \varrho[h\,|\,W, \beta] = -\int h(x) \log\frac{h(x)}{w(x;h)}\,dx - \beta \bar x - \log q,$$
where $q = r/\beta$ and $\bar x$ is the mean of $h(x)$. We call this probability space canonical. The probability of h is $\varrho^N[h\,|\,W, \beta]$ and by the same argument that led to Equation (19) we now have
$$\varrho[h\,|\,W, \beta] \le 1.$$
The equal sign defines the most probable distribution $h^*$; this distribution is
$$h^*(x) = \frac{w(x; h^*)\,e^{-\beta x}}{q}.$$
The parameter q is fixed by the normalization condition and satisfies
$$\bar x = -\frac{d \log q}{d \beta}.$$
(Details are given in the Supplementary Information.)
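A quick numerical check of this relation (a sketch with an assumed bias derivative $w(x) = x$, our choice for illustration; the corresponding $h^*$ is a gamma density with $q = 1/\beta^2$ and $\bar x = 2/\beta$):

```python
import numpy as np
from scipy.integrate import quad

# Assumed bias derivative for this sketch: w(x) = x.
def w(x):
    return x

def log_q(beta):
    q, _ = quad(lambda x: w(x) * np.exp(-beta * x), 0, np.inf)
    return np.log(q)

beta = 1.5

# Mean of h*(x) = w(x) e^{-beta x}/q, computed directly
q = np.exp(log_q(beta))
xbar, _ = quad(lambda x: x * w(x) * np.exp(-beta * x) / q, 0, np.inf)

# Eq. (25): xbar = -d log q / d beta, estimated by a central difference
eps = 1e-5
xbar_from_q = -(log_q(beta + eps) - log_q(beta - eps)) / (2 * eps)

print(xbar, xbar_from_q, 2 / beta)   # all three should agree (analytically 2/beta)
```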

3.4. Microcanonical Sampling

Next we define the microcanonical space as the subset of distributions with fixed mean $\bar x$. The generating distribution is again the exponential function, which we now write as
$$h_0(x) = \frac{e^{-x/\bar x}}{\bar x},$$
with $\bar x$ fixed. The probability to observe distribution h while sampling $h_0$ is still given by Equation (16), except that r is replaced with a new normalizing factor $r'$. We define the microcanonical probability functional
$$\log \varrho[h\,|\,W, \bar x] = -\int h(x) \log\frac{h(x)}{w(x;h)}\,dx - \log \omega,$$
with $\log \omega = 1 + \log \bar x + \log r'$, and write the probability of h as $\varrho^N[h\,|\,W, \bar x]$. The argument that produced Equations (19) and (23) now gives
$$\varrho[h\,|\,W, \bar x] \le 1.$$
This functional is maximized by the same distribution $h^*$ that maximizes the canonical functional, Equation (24), except that both q and $\beta$ are now Lagrange multipliers and are fixed by normalization and by the known mean $\bar x$. As in the canonical case, $h^*$ is overwhelmingly more probable than any other distribution in the microcanonical space and its mean satisfies Equation (25). We insert Equation (24) into (28) to obtain
$$\log \omega = S[h^*] + \log W[h^*],$$
where $S[h^*]$ is the Gibbs–Shannon entropy of the most probable distribution,
$$S[h^*] = -\int_0^\infty h^*(x) \log h^*(x)\,dx.$$
Substituting Equation (24) for $h^*$ in (29) we obtain a relationship between $\omega$, $\beta$, q and $\bar x$:
$$\log \omega = \beta \bar x + \log q.$$
In combination with Equation (25), this result defines $\log \omega(\bar x)$ as the Legendre transformation of $\log q(\beta)$ with respect to $\beta$. By the reciprocal property of the transformation we then have
$$\beta = \frac{d \log \omega}{d \bar x}.$$
Given Equation (31), the canonical probability functional in Equation (22) and the microcanonical functional in Equation (27) are seen to be the same. The difference is that in the canonical maximization $\bar x$ is a floating parameter, whereas in the microcanonical maximization it is held constant. Both functionals are maximized by the same distribution and have the same $\beta$, q, $\omega$ at the same $\bar x$: the two ensembles are equivalent. Finally, the maximization of the microcanonical functional implies that $\varrho[h\,|\,W, \bar x]$ is a concave functional of h. It follows that $\log \omega$ is a concave function of $\bar x$, therefore we must have
$$\frac{d^2 \log \omega}{d \bar x^2} = \frac{d \beta}{d \bar x} \le 0.$$
The details are shown in the Supplementary Information.
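These relations are easy to check for the simplest choice of bias. Below is a minimal sketch (our own, for the unbiased case $w(x) = 1$ that reappears in Section 4.2, with an arbitrary value of $\bar x$):

```python
import numpy as np
from scipy.integrate import quad

# Unbiased case w(x) = 1: h*(x) = e^{-x/xbar}/xbar, beta = 1/xbar, q = xbar.
# The value xbar = 2.5 is an arbitrary choice for this sketch.
xbar = 2.5
beta, q = 1.0 / xbar, xbar
h_star = lambda x: np.exp(-x / xbar) / xbar

# Eq. (30): Gibbs-Shannon entropy of the most probable distribution
S, _ = quad(lambda x: -h_star(x) * np.log(h_star(x)), 0, np.inf)

# Eq. (29) with log W[h*] = 0, and Eq. (31): both give log omega = 1 + log xbar
print(S, beta * xbar + np.log(q))

# Eqs. (32)-(33): log omega = 1 + log xbar, so d log omega/d xbar = 1/xbar = beta
# and d^2 log omega/d xbar^2 = -1/xbar^2 <= 0, consistent with concavity.
```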

4. Generalized Statistical Thermodynamics

These results can be summarized in the form of the following theorem:
Theorem 1.
Given a normalized distribution $f(x)$, $x \ge 0$, with mean $\bar x$, it is possible to construct a functional W such that:
(a) All distributions $h(x)$, $x \ge 0$, with mean $\bar x$ satisfy the inequality
$$\log W[h] - \int_0^\infty h(x) \log h(x)\,dx \le \log \omega,$$
with the equal sign only for $h = f$, a condition that defines $\omega$;
(b) f can be expressed in canonical form as
$$f(x) = \frac{w(x)\,e^{-\beta x}}{q},$$
where $\log w$ is the variational derivative of $\log W$ at $h = f$; and
(c) the parameters $\bar x$, $\beta$, q and $\omega$ satisfy
$$\bar x = -\frac{d \log q}{d \beta},$$
$$\beta = \frac{d \log \omega}{d \bar x},$$
$$\log \omega = \beta \bar x + \log q,$$
$$\frac{d^2 \log \omega}{d \bar x^2} \le 0.$$
The existence of W is established by the fact that the functional
$$\log W[h] = \int_0^\infty h(x) \log f(x)\,dx$$
satisfies the theorem. This is a linear functional whose derivative is $\log f$ for all h. More generally, any homogeneous functional $\log W[h]$ of degree 1, linear or non-linear, whose derivative at $h = f$ is given by
$$\left.\frac{\delta \log W[h]}{\delta h}\right|_{h=f} = \log f(x) + a_0 + a_1 x \equiv \log w(x),$$
where $a_0$ and $a_1$ satisfy
$$\frac{d a_0}{d a_1} = -\bar x,$$
but are otherwise arbitrary, also satisfies the theorem. The inequality in Equation (39) follows from the concavity requirement that ensures the maximization of Equation (34).
We recognize Equation (35) as the canonical distribution of statistical mechanics, Equations (36)–(38) and (33), which relate its parameters, as the core set of thermodynamic relationships, and Equation (34) as the inequality of the second law. The probabilistic interpretation is that any distribution f may be obtained as the most probable distribution under a probability measure defined via a suitable functional W. Whereas in statistical mechanics the central stochastic variable is the mechanical microstate, in generalized thermodynamics it is the probability distribution itself. Thermodynamics may be condensed into the microcanonical inequality in Equation (34), a generalized expression of the second law that defines the most probable distribution in the microcanonical space. All relationships between $\omega$ (microcanonical partition function), q (canonical partition function), $\beta$ (generalized inverse temperature) and $\bar x$ follow from the maximization expressed by this inequality and have equivalents in familiar thermodynamics. The derivatives $d\log q/d\beta$ and $d\log\omega/d\bar x$ in Equations (36) and (37) may be viewed as equations of change along a path ("process") in the space of distributions under fixed bias W. This path is described parametrically in terms of $\bar x$ and represents a nonstationary stochastic process. We call this process quasistatic—a continuous path of distributions that locally maximize the thermodynamic functional.
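As an illustration of the theorem (our example, not from the paper), take f to be the gamma density $f(x) = \beta^2 x\, e^{-\beta x}$; with $w(x) = x$ it is already in canonical form, and the parameter relations (36)–(39) can be verified numerically along a path parameterized by $\bar x$.

```python
import numpy as np

# Worked check of the theorem (our example): f(x) = beta^2 * x * e^{-beta x},
# a gamma density. In canonical form f = w e^{-beta x}/q with w(x) = x,
# q = 1/beta^2 and xbar = 2/beta.
xbar = np.linspace(1.0, 10.0, 2001)
beta = 2.0 / xbar
log_q = -2.0 * np.log(beta)
log_omega = beta * xbar + log_q                    # Eq. (38)

# Eq. (36): xbar = -d log q / d beta (chain rule through xbar)
dlogq_dbeta = np.gradient(log_q, xbar) / np.gradient(beta, xbar)
print(np.max(np.abs(dlogq_dbeta + xbar)))          # small (finite-difference error)

# Eq. (37): beta = d log omega / d xbar
print(np.max(np.abs(np.gradient(log_omega, xbar) - beta)))   # small

# Eq. (39): log omega is concave in xbar
print(np.gradient(np.gradient(log_omega, xbar), xbar).max()) # negative
```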

4.1. Contact with Statistical Mechanics

The obvious way to make contact with statistical mechanics is to take f to be the probability of a microstate at fixed temperature, volume and number of particles. The postulate of equal a priori probabilities fixes the selection functional and its derivative to $W = w = 1$; if we identify x as the energy $E_i$ of microstate i, $\beta$ as $1/k_B T$, q as the thermodynamic canonical partition function, and $\omega$ as the thermodynamic microcanonical partition function, Equations (24)–(33) map onto the standard thermodynamic relationships. From Equation (29) we obtain $\varrho = e^{S[h]}/\omega$: the canonical probability f maximizes entropy, and thus we obtain the second law.
This is not the only way to establish contact with statistical mechanics. We may choose f to be some other probability distribution, for example, the probability to find a macroscopic system of fixed $(T, V, N)$ at energy E. We write the energy distribution in the form of Equation (24) with w, $\beta$ and q to be determined. From Equations (25), (31) and (32) with $\bar x = \bar E$ we make the identifications $\beta \to 1/k_B T$, $\log q \to -F/k_B T$ (F the free energy), and $\log \omega \to$ thermodynamic entropy. To identify w we require input from physics, and this comes via the observation that the probability density of the macroscopic energy E is asymptotically a Dirac delta function at $E = \bar E$. Then $S[f] = 0$ (this is the entropy of the energy distribution, not to be confused with thermodynamic entropy). From Equations (14) and (29) we find $\log W[f] = \log w(x; f) = \log \omega$, and conclude that $\log \omega$ is the thermodynamic entropy. This establishes correspondence between generalized thermodynamics and macroscopic (classical) thermodynamics. If we further postulate, again motivated by physics, that $w(E)$ is the number of microstates under fixed volume and number of particles, we establish the microscopic connection. Since $f(E)$ is proportional to the number of microstates with energy E and individual microstates are unobservable, we may as well ascribe equal probability to all microstates. Thus we recover the postulate of equal a priori probabilities (statistical thermodynamics). Finally, by adopting a physical model of the microstate, classical, quantum or other, we obtain classical statistical mechanics, quantum statistical mechanics or a yet-to-be-discovered statistical mechanics, depending on the model. In all cases the thermodynamic calculus is the same; only the enumeration of microstates—that is, W—depends on the physical model.

4.2. What is W?

Once the selection functional W is specified, the most probable distribution is fixed and all canonical variables become known functions of $\bar x$. But what is W? The selection functional is a placeholder for our knowledge, hypotheses and model assumptions about the stochastic process that gives rise to the probability distribution of interest. This knowledge fully specifies the distribution. The opposite is not true: given a distribution f there is an infinite number of functionals W that produce that distribution as the most probable distribution in their probability space. This nonuniqueness is a feature, not a bug: it allows models that are quite different in their details to produce the same final distribution. Here is an example. The unbiased functional $W[h] = w(x) = 1$ produces the exponential distribution
$$h^*(x) = \frac{e^{-\beta x}}{q},$$
with canonical parameters
$$\beta = 1/\bar x, \qquad q = \bar x, \qquad \log \omega = 1 + \log \bar x.$$
Now consider the nonlinear selection functional
$$\log W[h] = S[h] = -\int_0^\infty h(x) \log h(x)\,dx,$$
whose logarithm is equal to the entropy of h. The corresponding microcanonical probability functional is obtained by inserting this into Equation (27),
$$\log \varrho[h\,|\,W, \bar x] = -2\int_0^\infty h(x) \log h(x)\,dx - \log \omega,$$
and is maximized by (see Supplementary Information)
$$h^*(x) = \frac{w(x)\,e^{-\beta x}}{q},$$
with
$$w(x) = \bar x\, e^{x/\bar x}, \qquad \beta = 2/\bar x, \qquad q = \bar x^2, \qquad \log \omega = 2 + 2\log \bar x.$$
We have arrived at the exponential distribution, the same distribution that is obtained with the unbiased functional $w(x) = 1$, but with different canonical parameters, because the probability space from which it arises is now different. If all we know is that the probability distribution in a stochastic process is exponential, it is not possible to determine whether it was obtained using $W[h] = 1$, $W[h] = e^{S[h]}$, or any other functional that is capable of reproducing the exponential distribution. While the selection bias identifies the most probable distribution uniquely, the opposite is not true.
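These parameters are easy to verify numerically; the sketch below (with an arbitrary choice $\bar x = 3$) checks that q equals the normalization integral of $w(x)e^{-\beta x}$ and that $h^*$ reduces to the exponential density.

```python
import numpy as np
from scipy.integrate import quad

xbar = 3.0                                    # arbitrary mean for this sketch
w = lambda x: xbar * np.exp(x / xbar)         # derivative of the entropic functional
beta, q = 2.0 / xbar, xbar**2                 # canonical parameters quoted above

# q should equal the normalization integral of w(x) e^{-beta x}
q_check, _ = quad(lambda x: w(x) * np.exp(-beta * x), 0, np.inf)
print(q, q_check)

# h*(x) = w(x) e^{-beta x}/q reduces to the exponential e^{-x/xbar}/xbar
x = np.linspace(0.0, 20.0, 5)
print(np.allclose(w(x) * np.exp(-beta * x) / q, np.exp(-x / xbar) / xbar))
```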
The selection functional represents external input to thermodynamics and is fixed by the rules that govern the stochastic process that produces the distribution in question. In the case of statistical mechanics it is fixed by the postulate of equal a priori probabilities. In another example, recently given for stochastic binary clustering, it is fixed by the aggregation kernel, a function that determines the aggregation probability between clusters of different sizes [17]. The selection functional is the contact point between generalized statistical thermodynamics—a mathematical theory for generic distributions—and physics, i.e., our knowledge in the form of model assumptions and postulates about the process that gives rise to the observed distribution. It is interesting to point out that the variational derivative w in Equation (27) appears in the form of a Bayesian prior [18]. In the context of generalized thermodynamics w is not a prior distribution—although it might be if $a_0 = a_1 = 0$ in Equation (41). In general, w is a non-normalizable derivative of the functional that represents our knowledge about the process, an improper prior that points nonetheless to a proper distribution.

5. Thermodynamic Sampling of Distributions

We have shown that any distribution $f(x)$ defined in $\mathbb{R}_+$ can be viewed as the most probable distribution in an appropriately constructed probability space. Here we will show that any distribution f in this domain can be obtained as the equilibrium distribution of reacting clusters under an appropriately constructed equilibrium constant. Consider a population of M identical particles ("monomers") distributed into N clusters and let $\mathbf{m} = (m_1, m_2, \ldots, m_N)$ be an ordered list of N cluster masses with total mass M, such that $m_k$ is the mass of the kth cluster in the list ("configuration"). The complete set of configurations with N clusters and total mass M comprises the cluster ensemble $(M, N)$. Let $\mathbf{n} = (n_1, n_2, \ldots)$ be the size distribution of the clusters in configuration $\mathbf{m}$, such that $n_i$ is the number of clusters with i monomers. With $M, N \to \infty$ at fixed $M/N = \bar x$, the cluster ensemble contains every discrete distribution $h_i = n_i/N$ with mean $\bar x$. We now construct the following stochastic process: given a configuration $\mathbf{m}$, pick two clusters at random, merge them, then split them back into two clusters at random. This amounts to an exchange of mass between two clusters that is represented schematically by the reaction
$$m_i + m_j \rightleftharpoons m_i' + m_j',$$
and transforms the parent configuration $\mathbf{m}$ into a new configuration $\mathbf{m}'$ with the same number of clusters N and total mass M. This process may also be represented as a reaction that transforms a parent configuration into an offspring,
$$\mathbf{m} \xrightarrow{\;K\;} \mathbf{m}'.$$
We define the equilibrium constant of this reaction as
$$K_{\mathbf{m} \to \mathbf{m}'} = \frac{W(\mathbf{n}')}{W(\mathbf{n})},$$
where $\mathbf{n}'$ and $\mathbf{n}$ are the cluster size distributions of the product and reactant configurations, respectively, and $W(\mathbf{n})$ is the selection functional applied to distribution $\mathbf{n}$. The change $\delta\mathbf{n}$ of the corresponding distributions upon the exchange reaction is a change of $-1$ in the number of clusters with masses $m_i$ and $m_j$ on the reactant side, and $+1$ for the cluster masses on the product side. By virtue of the homogeneous property of $\log W$, its change for large M and N is a differential that can be expressed in terms of the derivatives of $\log W$:
$$\log W(\mathbf{n}') - \log W(\mathbf{n}) = -\log w(m_i) - \log w(m_j) + \log w(m_i') + \log w(m_j'),$$
where $\log w$ is the functional derivative of $\log W$ evaluated at distribution $\mathbf{n}$. Using this result the equilibrium constant becomes
$$K_{\mathbf{m} \to \mathbf{m}'} = \frac{w(m_i')\,w(m_j')}{w(m_i)\,w(m_j)}.$$
This has the standard form of an equilibrium constant for the reaction in Equation (49). We may identify $w(x)$ as the "fugacity" of species x, a "species" being a cluster with mass x. The reaction can be simulated by Monte Carlo using the Metropolis transition rule
$$P_{\mathbf{n} \to \mathbf{n}'} = \begin{cases} \text{accept}, & \mathrm{rnd} \le K_{\mathbf{n} \to \mathbf{n}'}, \\ \text{reject}, & \mathrm{rnd} > K_{\mathbf{n} \to \mathbf{n}'}, \end{cases}$$
where rnd is a uniform random number in $(0, 1)$. This forms a Markov process that samples the microcanonical space of distributions $\mathbf{n}$ with fixed zeroth-order moment N and first moment M. Its stationary distribution is [19]
$$h^*(x) = \frac{w(x)\,e^{-\beta x}}{q},$$
where $\log w(x)$ is the functional derivative of $\log W$ evaluated at $h = h^*$ and the parameters $\beta$ and q are obtained by solving the set of equations
$$q = \int_0^\infty w(x)\,e^{-\beta x}\,dx,$$
$$\bar x = \frac{1}{q}\int_0^\infty x\,w(x)\,e^{-\beta x}\,dx.$$
With $W[h] = w(x) = 1$ we obtain the exponential distribution, which implies that the exchange reaction with equilibrium constant $K = 1$ for all transitions is equivalent to unbiased sampling from an exponential distribution with fixed mean $\bar x = M/N$.
Once the selection functional W is given, the most probable distribution is fixed and may be obtained either by simulation or, in many cases, analytically. We will now construct W such that the most probable distribution is any desired distribution f defined in $\mathbb{R}_+$. We construct the linearized selection functional
$$\log W[h] = \int_0^\infty h(x) \log w(x)\,dx,$$
with w from Equation (41), which we write in the form
$$w(x) = f(x)\,e^{a_0 + a_1 x},$$
with $a_0$ and $a_1$ arbitrary constants. It is easy to show that the selection of $a_0$ and $a_1$ is immaterial because both constants drop out of Equation (53). If we choose $a_0 = a_1 = 0$, then $w(x) = f(x)$; alternatively, we may choose these constants so as to obtain simpler forms for $w(x)$. We demonstrate the construction of w with three examples, using the exponential, the Weibull, and the uniform distribution (a code sketch of these three bias functions follows the list):
  • Exponential distribution
    $$f(x) = \frac{e^{-x/\bar x}}{\bar x}.$$
    The function w is
    $$w(x) = \frac{e^{-x/\bar x + a_0 + a_1 x}}{\bar x}.$$
    Choosing $a_0 = \log \bar x$, $a_1 = 1/\bar x$ we obtain $w_{\mathrm{exp}}(x) = 1$, which represents the unbiased selection functional.
  • Weibull distribution
    $$f(x) = \frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1} e^{-(x/\lambda)^k}.$$
    Using $a_0 = k\log\lambda - \log k$ and $a_1 = 0$ in Equation (59) we obtain
    $$w_{\mathrm{Weibull}}(x) = x^{k-1}\,e^{-(x/\lambda)^k}.$$
  • Uniform distribution
    $$f(x) = \begin{cases} 1/(b-a), & a \le x \le b, \\ 0, & \text{otherwise}. \end{cases}$$
    With $a_0 = a_1 = 0$ we obtain
    $$w_{\mathrm{uniform}}(x) = f(x).$$
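The three bias functions above can be coded directly. This is our own sketch; the Weibull and uniform parameters are set to the values used in Figure 1, and any positive multiplicative constant in w is immaterial because it cancels in the equilibrium constant.

```python
import numpy as np

# Parameters assumed from Figure 1: mean 30, Weibull (lam, k), uniform [a, b].
xbar = 30.0
lam, k = 33.8514, 2.0
a, b = 20.0, 40.0

def w_exp(x):
    # Exponential target: constant bias, i.e., unbiased sampling.
    return np.ones_like(np.asarray(x, dtype=float))

def w_weibull(x):
    # Weibull target: w(x) = x^{k-1} exp(-(x/lam)^k)
    return x**(k - 1) * np.exp(-(x / lam)**k)

def w_uniform(x):
    # Uniform target: w = f up to a constant, i.e., an indicator on [a, b].
    x = np.asarray(x, dtype=float)
    return np.where((x >= a) & (x <= b), 1.0, 0.0)
```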
We implement thermodynamic sampling using Monte Carlo. We begin with an ordered list of N positive integers whose sum is M. We then pick two numbers at random and implement a random exchange reaction to produce a new pair of integers with the same combined sum. The new pair replaces the old with acceptance probability computed according to Equation (54), using $K_{\mathrm{eq}}$ from Equation (53) and the function $w(x)$ obtained above. Following a successful trial we calculate the distribution of the current configuration. The mean distribution is obtained by averaging over a large number of trials. For these simulations $N = 100$, $M = 3000$, $\bar x = 30$, and the mean distribution is calculated over 20,000 trials. As we discuss elsewhere, the mean distribution and the most probable distribution converge to each other unless the system exhibits phase separation [17,19,20]. The results in Figure 1 make it clear that thermodynamic sampling indeed converges to the distribution for which the w function was derived. Any discrete distribution $h_i$, and with proper scaling any continuous distribution $h(x)$, may be associated with the equilibrium distribution of reacting clusters under a suitable equilibrium constant.
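Below is a compact sketch of this Monte Carlo procedure (function and variable names are ours, and bookkeeping details such as when the distribution is accumulated may differ from the author's implementation); it targets the Weibull case of Figure 1b.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bias function for the Weibull target of Figure 1b (parameters assumed there)
lam, k = 33.8514, 2.0
def w(x):
    return x**(k - 1) * np.exp(-(x / lam)**k)

def exchange_mc(w, N=100, M=3000, trials=20_000):
    """Exchange-reaction Monte Carlo: a sketch of the procedure in the text."""
    m = np.full(N, M // N, dtype=int)          # initial configuration, sum = M
    hist = np.zeros(M + 1)
    for _ in range(trials):
        i, j = rng.choice(N, size=2, replace=False)
        s = m[i] + m[j]                        # merge the two clusters...
        mi_new = rng.integers(1, s)            # ...and split at random, each part >= 1
        mj_new = s - mi_new
        # Equilibrium constant, Eq. (53)
        K = (w(mi_new) * w(mj_new)) / (w(m[i]) * w(m[j]))
        if rng.uniform() <= K:                 # Metropolis acceptance, Eq. (54)
            m[i], m[j] = mi_new, mj_new
        np.add.at(hist, m, 1)                  # accumulate cluster-size counts
    return hist / hist.sum()

h = exchange_mc(w)
print((np.arange(h.size) * h).sum())           # mean cluster size stays at M/N = 30
```

With the unbiased choice $w(x) = 1$ every trial is accepted ($K = 1$) and the run reproduces the exponential distribution, consistent with the limit noted above.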
The selection functionals constructed by the procedure discussed here apply the variational derivative at f to all distributions h, i.e., they are linearized at the most probable distribution. Any nonlinear functional $\log W$ with the same derivative at $h = f$ will produce the same distribution as the stationary distribution under exchange reactions. One example is the entropic functional in Equation (45), a nonlinear functional that produces the exponential distribution. Even though the entropic and unbiased functionals both produce the same distribution (Figure 2a), their corresponding ensembles are distinctly different, because each functional assigns different probabilities to the distributions of the ensemble. This difference can be seen in the fluctuations (Figure 2b). The entropic functional is more selective than the unbiased one, which picks configurations with equal probability. Accordingly, fluctuations in the entropic ensemble have a narrower distribution, as shown in Figure 2b for the number of monomers under the entropic and the unbiased functionals.

6. Conclusions

Stripped to its core, what we call statistical thermodynamics is a mapping between a probability distribution f and a set of functions $\{w, \beta, q, \omega\}$ from which the distribution may be reconstructed. What we call classical thermodynamics is the set of relationships among $\{\beta, q, \omega, \bar x\}$—relationships that are the same for all distributions. What we call the second law is the variational condition that identifies the most probable distribution in the domain of feasible distributions. What we call a quasistatic process is a path in the space of distributions under fixed W. Physics enters through W. This generic mathematical formalism applies to any distribution. To use an analogy, thermodynamics is a universal grammar that becomes a language when applied to specific problems. It is a fitting coincidence—or perhaps an inevitable consequence—that it was the human desire to maximize the amount of useful work in the steam engine that would eventually make contact with the variational foundation of thermodynamics. Gibbs's breakthrough was to connect thermodynamics to a probability distribution, and that of Shannon and Jaynes to transplant it outside physics. In the time since, the vocabulary of statistical thermodynamics has felt intuitively familiar across disciplines in a déjà vu sort of manner, even as its grammar remained undeciphered. This intuition can now be understood: the common thread that runs through every discipline that has adopted the thermodynamic language is an underlying stochastic process, and where there is probability, there is statistical thermodynamics.

Supplementary Materials

The following are available online at https://www.mdpi.com/1099-4300/21/9/890/s1.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. The Concise Oxford English Dictionary, 11th ed.; Oxford University Press: Oxford, UK, 2008.
  2. Callen, H. Thermodynamics and an Introduction to Thermostatistics, 2nd ed.; Wiley: Hoboken, NJ, USA, 1985. [Google Scholar]
  3. Hill, T.L. Statistical Mechanics Principles and Selected Applications; Reprint of the 1956 Edition; Dover: Mineola, NY, USA, 1987. [Google Scholar]
  4. Huang, K. Statistical Mechanics, 2nd ed.; Wiley: New York, NY, USA, 1963. [Google Scholar]
  5. Gibbs, J.W. Elementary Principles in Statistical Mechanics; Reprint of the 1902 Edition; Ox Bow Press: Woodbridge, CT, USA, 1981. [Google Scholar]
  6. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  7. Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  8. Albert, R.; Barabási, A.L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47–97. [Google Scholar] [CrossRef] [Green Version]
  9. Harte, J.; Zillio, T.; Conlisk, E.; Smith, A.B. Maximum Entropy and the State-Variable approach to macroecology. Ecology 2008, 89, 2700–2711. [Google Scholar] [CrossRef] [PubMed]
  10. Durrett, R. Stochastic Spatial Models. SIAM Rev. 1999, 41, 677–718. [Google Scholar] [CrossRef]
  11. Timme, N.M.; Lapish, C. A Tutorial for Information Theory in Neuroscience. eNeuro 2018. [Google Scholar] [CrossRef] [PubMed]
  12. Voit, J. The Statistical Mechanics of Financial Markets; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar] [CrossRef]
  13. Kullback, S.; Leibler, R.A. On Information and Sufficiency. Ann. Math. Statist. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  14. Gray, R.M. Entropy and Information Theory, 2nd ed.; Springer: New York, NY, USA, 2011. [Google Scholar]
  15. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley Interscience: Hoboken, NJ, USA, 2006. [Google Scholar]
  16. Touchette, H. The large deviation approach to statistical mechanics. Phys. Rep. 2009, 478, 1–69. [Google Scholar] [CrossRef] [Green Version]
  17. Matsoukas, T. Statistical Thermodynamics of Irreversible Aggregation: The Sol-Gel Transition. Sci. Rep. 2015, 5, 8855. [Google Scholar] [CrossRef] [PubMed]
  18. Jaynes, E. Prior Probabilities. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 227–241. [Google Scholar] [CrossRef]
  19. Matsoukas, T. Statistical thermodynamics of clustered populations. Phys. Rev. E 2014, 90, 022113. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Matsoukas, T. Abrupt percolation in small equilibrated networks. Phys. Rev. E 2015, 91, 052105. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The exchange reaction transfers mass between two clusters and samples the space of all distributions with fixed number of clusters N and fixed total number of monomers M. We may construct the equilibrium constant of this reaction so as to obtain any desired equilibrium distribution. Any distribution $f(x)$, $x \ge 0$, may be obtained as the equilibrium distribution. In this example we obtain (a) the exponential distribution; (b) the Weibull distribution with $\lambda = 33.8514$, $k = 2$; and (c) the uniform distribution between $a = 20$ and $b = 40$. In all cases $\bar x = 30$.
Figure 2. (a) The entropic selection functional, $W[h] = e^{S[h]}$, and the unbiased functional, $W[h] = 1$, both produce the same equilibrium distribution (exponential). Nonetheless the two selection functionals represent distinctly different ensembles, as can be seen in the fluctuations of the number of monomers (b). The entropic functional is more selective than the unbiased one and produces a tighter distribution of fluctuations.