Article

Entropy Maximization, Time Emergence, and Phase Transition

Jonathan D. H. Smith
Department of Mathematics, Iowa State University, 411 Morrill Rd., Ames, IA 50011, USA
Entropy 2025, 27(6), 586; https://doi.org/10.3390/e27060586
Submission received: 25 April 2025 / Revised: 28 May 2025 / Accepted: 29 May 2025 / Published: 30 May 2025
(This article belongs to the Section Thermodynamics)

Abstract

We survey developments in the use of entropy maximization for applying the Gibbs Canonical Ensemble to finite situations. Biological insights are invoked along with physical considerations. In the game-theoretic approach to entropy maximization, the interpretation of the two player roles as predator and prey provides a well-justified and symmetric analysis. The main focus is placed on the Lagrange multiplier approach. Using natural physical units with Planck’s constant set to unity, it is recognized that energy has the dimensions of inverse time. Thus, the conjugate Lagrange multiplier, traditionally related to absolute temperature, is now taken with time units and oriented to follow the Arrow of Time. In quantum optics, where energy levels are bounded above and below, artificial singularities involving negative temperatures are eliminated. In a biological model where species compete in an environment with a fixed carrying capacity, use of the Canonical Ensemble solves an instance of Eigen’s phenomenological rate equations. The Lagrange multiplier emerges as a statistical measure of the ecological age. Adding a weak inequality on an order parameter for the entropy maximization, the phase transition from initial unconstrained growth to constrained growth at the carrying capacity is described, without recourse to a thermodynamic limit for the finite system.

1. Introduction

Jaynes popularized the entropy maximization technique as a powerful modeling tool for working with finite systems, where results like the Central Limit Theorem or the Stirling Approximation are neither necessary nor appropriate [1,2,3]. On the basis of Jaynes’ work, this survey is designed to highlight some selected aspects of the technique that have appeared over the last thirty years, particularly driven by biological insights in parallel with more traditional topics from physics. Our examples are chosen to be as simple as possible, while still illustrating the key points we wish to convey. In particular, we avoid any reprise of the dependence of entropy and randomness on computational complexity or instrumental resolving power, as discussed in [4]. Further details may be found in the cited references and their bibliographies, but there are many opportunities for interested readers to continue the development and refinement of the topics that we raise.
In Section 2, we set the framework for most of the paper by revisiting the very well-known Gibbs Canonical Ensemble. In particular, we draw attention to an important but rarely mentioned subtlety, namely the strength of the inequality constraints in the procedure of entropy maximization by the method of Lagrange multipliers (7). For perspective, Section 3 takes a brief look at the alternative game-theoretic approach adopted by Topsøe and his school (cf. e.g., [5,6,7]). We propose a biological interpretation for the game as ecological co-evolution between a pair of species: predator and prey. The prey’s interest in randomizing the interactions with the predator, and the predator’s interest in regularizing those interactions, exactly capture the roles of the two players in the abstract game. Thus, the issue,
“The sense in assuming that Player I has the opposite aim, namely to maximize the cost function is more dubious”.
of Ref. [5] (p. 198) is resolved by assigning the role of Player I to the prey and Player II to the predator. In this interpretation, the cost function measures how long it will take the predator to determine the prey’s strategy for escape from pursuit.
Section 4 reviews the standard interpretation of the entropy maximization approach to the Canonical Ensemble within statistical mechanics, where macrostates $1, \dots, i, \dots, r$ are identified by respective energies $E_1, \dots, E_i, \dots, E_r$. In preparation for the subsequent application to a phase transition in an ecological system (Section 6), we go one step beyond Baez’s advocacy of the Lagrange multiplier $\beta$ as a “coolness” parameter [8] in preference to the temperature $T$, arguing instead for $\tau$, the negative of the coolness $\beta$, as the best choice. Certainly, the temperature $T$ is ill-suited to the treatment of condensed matter situations where energies of states are bounded both below and above (e.g., in quantum optics, cf. [9,10]). The use of $-\beta$ as a coordinate in condensed matter physics [9] (Figure 2) then naturally leads to our preference for $\tau$, whose increase is subsequently seen to concur with the Arrow of Time. Compare (27) with (31), for example.
Section 5 uses the Canonical Ensemble for the analysis of an ecology, Lake Gibbs, where species $1, \dots, i, \dots, r$ with respective natural growth rates
$$E_1 < \dots < E_i < \dots < E_r$$
compete within an environment having a fixed carrying capacity of N individuals. This system provides a macroscopic model of Eigen’s phenomenological rate equations [11]. While the equations may be solved using standard techniques for handling ordinary differential equations (ODEs), starting from known initial conditions [12,13], the entropy maximization technique offers a novel approach to the solution of the system of coupled ODEs, without the need for initial conditions [14]. This feature of the entropy maximization technique is especially relevant for biological applications, where one encounters existing systems whose genesis is uncertain: the classic “chicken-and-egg” dilemma!
At first glance, use of the Canonical Ensemble in biology may appear to be unrelated to the classical use case of statistical mechanics. However, following the lead of the particle physicists in using natural units with Planck’s constant set to 1 [15] (§III.2), the energies $E_i$ that appear in the statistical mechanics applications of the Canonical Ensemble are seen to have the dimensions of inverse time, exactly like the growth rates $E_i$ that appear in the ecological application. Thus, our preferred conjugate Lagrange multiplier $\tau$ becomes directly identifiable as an emergent time parameter, sharing the statistical macroscopic nature of temperature. As $\tau$ increases, the ecology of Lake Gibbs ages by moving from a diverse mix of the species $1, \dots, r$ towards an unhealthy monoculture dominated by the most prolific species $r$—compare (1). The ecology could be rejuvenated by restocking the lake with a broad variety of species, thereby resetting the emergent system time $\tau$ back to a lower value.
Section 6 extends the entropy maximization treatment of the Lake Gibbs ecology: not only to cover the constrained phase analyzed in Section 5, but also the earlier unconstrained phase where each species $i$ (for $1 \le i \le r$) is growing exponentially at its natural, unchecked pace $E_i$, before the carrying capacity of the lake is reached [16]. Thus, entropy maximization is shown to handle a phase transition, for a finite system, without resort to any infinite “thermodynamic limit”. This is achieved by moving beyond the strict inequalities for the constraints on the optimization domain noted in Section 2. Along with positivity constraints for parameters $p_1, \dots, p_r$ tracking the respective species $1, \dots, r$, an order parameter $p_0$ subject to a weak non-negativity constraint is added (38). If the constraint is binding, i.e., $p_0 = 0$, then the entropy maximization reduces to its previous form for the constrained phase as described in Section 5. On the other hand, if the constraint is slack, i.e., $p_0 > 0$, then the entropy maximization returns the unconstrained phase where each species is growing exponentially. As a proof of concept, this basic example suggests that future research, working with richer constellations of strong and weak inequality constraints, should provide finitary entropy maximization analyses of more elaborate, multidimensional phase diagrams.

2. The Canonical Ensemble

Consider a finite, nonempty set or phase space (Figure 1) that comprises $N$ equally likely individual elements or microstates (dots in Figure 1) with a partition
$$\Pi = \{C_1, \dots, C_r\}$$
into the disjoint union of a family of $r$ subsets or macrostates (boxes in Figure 1). Suppose that the macrostate $C_i$ comprises $n_i$ microstates, for $1 \le i \le r$, so that
$$N = n_1 + \dots + n_r.$$
The set $\Pi$ of (2) may be considered as (invoking) an experiment: take a microstate and determine the macrostate $C_i$ to which it belongs.
Figure 1. A phase space of microstates, with a partition $\Pi = \{C_1, \dots, C_r\}$ into macrostates [17] (Figure 2).
What information is gained by performing the experiment? Initially, a store of size $\log N$ would be required to tag the microstates. Suppose that we perform the experiment and obtain outcome $C_i$. In that case, knowing that the microstates are localized within $C_i$, we now only require a store of size $\log n_i$. The information gain as a result of the experiment is $\log N - \log n_i$. However, there is only a probability
$$p_i = \frac{n_i}{N}$$
of obtaining outcome $C_i$. The expected information gain from the experiment is the weighted average
$$H(\Pi) = \sum_{i=1}^{r} p_i \left(\log N - \log n_i\right) = -\sum_{i=1}^{r} p_i \log p_i$$
of the information gains from each of the possible outcomes. This quantity is described as the (information-theoretic) entropy of the partition $\Pi$. It may be characterized as the expected value of the logarithm of the odds, namely $p_i^{-1}$ to one, of obtaining a particular macrostate $C_i$.
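As a minimal numerical sketch of (4) and (5), with illustrative macrostate counts rather than data from the paper, the information-gain form and the $-\sum p_i \log p_i$ form of the entropy can be checked to coincide:

```python
import numpy as np

# Entropy of a partition from macrostate counts, per (4) and (5).
# The counts n are illustrative placeholders.
n = np.array([4, 3, 2, 1])     # microstates per macrostate C_1, ..., C_r
N = n.sum()                    # total number of microstates
p = n / N                      # probabilities p_i = n_i / N, as in (4)
H = np.sum(p * (np.log(N) - np.log(n)))       # expected information gain
assert np.isclose(H, -np.sum(p * np.log(p)))  # the second form in (5)
print(H)                       # about 1.28 nats
```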
In practice, the specific partition of the phase space into macrostates is not known a priori. However, suppose that a numerical value (typically representing a dimensioned scientific quantity like an energy or a growth rate) is associated with each macrostate. A particular macrostate $C_i$ then consists of all those microstates that yield an observed numerical value $E_i$ when the experiment is performed. While the actual probabilities $p_i$ of the individual macrostates may be unknown themselves, suppose that the expected outcome
$$\langle E \rangle = \sum_{i=1}^{r} p_i E_i$$
is known, say from a measurement performed on a sample of the microstates. In order to construct a model, the probabilities $p_i$ have to be assigned. If the expected value (6) is the only information available, then the truest model is the one that maximizes the entropy $H(\Pi)$ subject to the constraint (6). This model is known as Gibbs’ canonical ensemble.
The maximization problem is usually solved by the method of Lagrange multipliers [18] (Th. 3.2.2). Here, we wish to draw attention to a subtle detail of the procedure, which is rarely stated explicitly: the nonemptiness of the macrostates in the family (2) means that the optimization is taken over the set
$$\Delta_{r-1}^{\circ} = \left\{ (p_1, \dots, p_r) \;\middle|\; p_1 + \dots + p_r = 1,\ 0 < p_1, \dots, p_r \right\}$$
of positive probabilities, the interior of the $(r-1)$-dimensional simplex $\Delta_{r-1}$. Thus, in maximization of the Lagrangian
$$L(p_i, \sigma, \tau) = -\sum_{i=1}^{r} p_i \log p_i + \sigma \left(1 - \sum_{i=1}^{r} p_i\right) - \tau \left(\langle E \rangle - \sum_{i=1}^{r} p_i E_i\right),$$
the stationarity conditions
$$\frac{\partial L}{\partial p_i} = 0$$
for all $1 \le i \le r$ apply, since the maximization is performed over the open set $\Delta_{r-1}^{\circ}$. They give
$$\log p_i = -(1+\sigma) + \tau E_i$$
or
$$p_i = \exp(\tau E_i) / \exp(1+\sigma).$$
Substitution in the completeness constraint from (3) yields
$$1 = \sum_{i=1}^{r} p_i = \sum_{i=1}^{r} \exp(\tau E_i) / \exp(1+\sigma)$$
or
$$\exp(1+\sigma) = \sum_{i=1}^{r} \exp(\tau E_i).$$
Defining the partition function (or “Zustandssumme”)
$$Z(\tau) = \sum_{i=1}^{r} \exp(\tau E_i)$$
of $\tau$, we have
$$p_i = \frac{\exp(\tau E_i)}{Z(\tau)}$$
in Gibbs’ canonical ensemble. The entropy (5) may be written as
$$H(\Pi) = -\tau \langle E \rangle + \log Z(\tau)$$
in terms of the variable $\tau$.
in terms of the variable τ . As another point worthy of special attention, we emphasize that the units for the multiplier τ are inverse to those for the numerical values E i .

3. The Game Theoretic Approach

While the entropy maximization procedure outlined in the previous section has a number of advantages, most notably the identification of the Lagrange multiplier $\tau$ as a conjugate to the numerical values $E_i$ assigned to the macrostates, it is not the only approach. The Danish school (cf. [5,6,7], for example) have strongly advocated for an approach that involves a game between two players. In their version,
Player I chooses a consistent distribution, and Player II chooses a general code. …[T]he objective of Player II appears well motivated. [A] cost function can be interpreted as mean representation time, …and it is natural for Player II to attempt to minimize this quantity. The sense in assuming that Player I has the opposite aim, namely to maximize the cost function, is more dubious. [5] (p.198).
Here, in a simplified setting chosen to avoid detailed topological and analytical concerns, we present a brief, introductory alternative account, founded upon a more adequate, symmetrical notation and a representative interpretation of the two respective Players I and II as prey and predator in an ecological context. A parallel interpretation in a political science context might identify “people” and “government” as the two players. A relationship of this kind is implied, for example, in the work of J.C. Scott [19,20]. We refer to our description as the Predator–Prey Representation.
The ecological principle underlying the Predator–Prey Representation is that predators must seek to regularize their relationships with their prey, while the prey seek to randomize those relationships. To catch their prey, predators need encoded hunting strategies. But to evade their predators, the prey need unpredictable escape strategies.
In the model, the prey may draw from a set $P \subseteq \Delta_{r-1}$ of so-called consistent probability distributions $\Pi$ on a set of evasive actions that includes no more than a finite number $r$ of elements. Thus, a consistent distribution gives a specific option for an escape strategy. Figure 2 shows a toy model strategy where the prey is aiming to distract its predator when being pursued from behind, by swinging its tail in a pendulum-like motion. In Figure 2b, the particular consistent distribution $\Pi$ exhibited is related to (and identified with) a macrostate partition $\Pi = \{C_1, C_2, C_3\}$ of the type of phase space displayed in Figure 1. Furthermore, the phase space in this particular example may be overlaid on the classical phase space of a linear harmonic oscillator (such as a pendulum) with position variable $q$ and momentum variable $p$. Thus, a microstate (such as $m$ that appears in the macrostate $C_1$) may be considered to describe a brief video clip of a specific tail motion.
Table 1 identifies the macrostates with specific tail motion features that would be perceived by the predator in their encoding of the prey’s behavior, using the binary prefix code $\kappa$ displayed in Figure 2a.
When applying the code $\kappa$ during the chase in its attempt to learn what the prey is doing, the predator first checks if the prey’s tail is on the left side ($q < 0$). If the answer is “yes” or 1, the predator has taken just 1 step to correctly identify that the prey has selected macrostate $C_1$. This event occurs with probability $2^{-1}$ within the prey’s strategy $\Pi$. On the other hand, if the answer is “no” or 0, so the tail is on the right, the predator then checks if the prey’s tail is swinging to the right ($p > 0$). If the answer to this question is “yes” or 1, the predator has taken 2 steps to correctly identify that the prey has selected macrostate $C_2$, an event that occurs with probability $2^{-2}$ within the strategy $\Pi$. Finally, if the answer to the latter question is “no” or 0, the predator has taken 2 steps to correctly identify that the prey has selected macrostate $C_3$, an event that, again, occurs with probability $2^{-2}$ within strategy $\Pi$. Thus, with the code $\kappa$, the expected number of steps that the predator takes to recognize the prey’s strategy $\Pi$ is
$$\langle \Pi \,|\, \kappa \rangle = \frac{1}{2} \cdot 1 + \frac{1}{4} \cdot 2 + \frac{1}{4} \cdot 2 = \frac{3}{2},$$
matching the entropy
$$H(\Pi) = -\sum_{i=1}^{3} p_i \log_2 p_i$$
of the consistent distribution $\Pi$ in bits.
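In code, the match between the cost of the code $\kappa$ and the entropy of $\Pi$ for this toy strategy is a one-line check (a sketch; the arrays simply transcribe Figure 2):

```python
import numpy as np

kappa = np.array([1, 2, 2])        # code lengths of 1, 01, 00 for C_1, C_2, C_3
p = np.array([0.5, 0.25, 0.25])    # consistent distribution (2^-1, 2^-2, 2^-2)
cost = p @ kappa                   # expected identification time <Pi|kappa>
H_bits = -np.sum(p * np.log2(p))   # entropy of Pi in bits
print(cost, H_bits)                # both equal 1.5: the code matches Pi exactly
```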
In the “bra-ket” notation that appears on the left of (14), we switch the sides relative to [5] (p. 196) and [7] (8) so that the distribution $\Pi$ belonging to Player I (the prey) appears first, while the code $\kappa$ belonging to Player II (the predator) appears second. In general, for a code $\kappa$ with respective code lengths $\kappa_1, \dots, \kappa_r$ and a consistent distribution $\Pi$ with respective probabilities $p_1, \dots, p_r$, the cost function
$$\langle \Pi \,|\, \kappa \rangle = \sum_{i=1}^{r} p_i \kappa_i$$
is defined. As seen on the basis of the illustrative example from Figure 2, the cost function represents the expected time (number of questions asked and answered) taken for Player II, using a binary prefix code $\kappa$, to recognize which macrostate has been chosen from the consistent distribution $\Pi$ adopted by Player I. The abstract information-theoretic game between Player I and Player II, as envisaged by the Danish school, is instantiated by the concrete ecological games that take place over multiple generations as prey and predator population pairs co-evolve. In particular, the problem raised in the earlier quotation from [5] (p. 198),
“The sense in assuming that Player I has the opposite aim, namely to maximize the cost function is more dubious”,
is clearly solved by the prey’s interest in extending the time it takes its predators to identify an escape strategy.
The full set of escape strategies $\Pi$ available to the prey is identified as the set $P$ of consistent distributions. Now, consider a particular code $\kappa$ available to the predator as a hunting strategy. The risk [5] (3.11)
$$R(P \,|\, \kappa) = \sup_{\Pi \in P} \langle \Pi \,|\, \kappa \rangle$$
associated with that hunting strategy expresses the maximum length of time it might take the predator to identify the prey’s behavior using $\kappa$—a measure of the predator’s risk of starvation if it were to stubbornly rely on $\kappa$ as its hunting strategy. Successful predators deploy multiple hunting strategies, assembled in a set $K$ of codes $\kappa$. Their risk of starvation is reduced to their minimum risk value [5] (3.12)
$$R_{\min}(P \,|\, K) = \inf_{\kappa \in K} R(P \,|\, \kappa)$$
if they are able to draw on any one of these strategies.
Dually, we begin by noting that the full set of hunting strategies $\kappa$ available to the predator has been identified as the set $K$ of codes. We may now consider a particular consistent distribution $\Pi$ that is available to the prey as an escape strategy. The coded entropy
$$H(\Pi \,|\, K) = \inf_{\kappa \in K} \langle \Pi \,|\, \kappa \rangle$$
(cf. [5] (3.9)) associated with that escape strategy expresses the minimum length of time it might take a predator to identify the strategy—a measure of the prey’s risk of capture, or randomization success—if it were to stubbornly rely on $\Pi$ as its escape strategy. Successful prey species deploy multiple escape strategies, assembled in their repertoire $P$. Their time of freedom when pursued is maximized to the maximum coded entropy
$$H_{\max}(P \,|\, K) = \sup_{\Pi \in P} H(\Pi \,|\, K)$$
if they are able to draw on any one of these strategies (cf. [5] (3.10)).
Taking infima over $K$ on each side of the quantified statement
$$\forall\, (\Pi, \lambda) \in P \times K, \quad \langle \Pi \,|\, \lambda \rangle \le \sup_{\Pi' \in P} \langle \Pi' \,|\, \lambda \rangle$$
gives $\forall\, \Pi \in P$, $\inf_{\kappa \in K} \langle \Pi \,|\, \kappa \rangle \le \inf_{\kappa \in K} \sup_{\Pi' \in P} \langle \Pi' \,|\, \kappa \rangle$. The inequality
$$\sup_{\Pi \in P} \inf_{\kappa \in K} \langle \Pi \,|\, \kappa \rangle \;\le\; \inf_{\kappa \in K} \sup_{\Pi \in P} \langle \Pi \,|\, \kappa \rangle$$
then follows on taking the supremum over $P$. It provides the final link in the full chain
$$\inf_{\kappa \in K} \langle \Pi \,|\, \kappa \rangle \overset{(18)}{=} H(\Pi \,|\, K) \le \sup_{\Pi \in P} H(\Pi \,|\, K) \overset{(19)}{=} H_{\max}(P \,|\, K) \overset{(20)}{\le} R_{\min}(P \,|\, K) \overset{(17)}{=} \inf_{\kappa \in K} R(P \,|\, \kappa) \le R(P \,|\, \kappa) \overset{(16)}{=} \sup_{\Pi \in P} \langle \Pi \,|\, \kappa \rangle$$
of inequalities that summarizes the relationships between the behaviors of the prey and the predator. Reference [5] continues with a general abstract analysis of when equality is obtained. In the ecological setting, equality is to be expected for stable predator–prey population pairs that have co-evolved over multiple generation times.
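A finite numerical illustration of the minimax inequality in the chain above, with two consistent distributions for the prey and two prefix-code length vectors for the predator (all values are illustrative assumptions, not taken from [5]):

```python
import numpy as np

P = [np.array([0.50, 0.25, 0.25]),   # prey repertoire: consistent distributions
     np.array([0.25, 0.50, 0.25])]
K = [np.array([1, 2, 2]),            # predator codes: binary prefix code lengths
     np.array([2, 1, 2])]

def cost(pi, kappa):                 # the cost function <Pi|kappa>
    return float(pi @ kappa)

H_max = max(min(cost(pi, k) for k in K) for pi in P)   # sup_P inf_K
R_min = min(max(cost(pi, k) for pi in P) for k in K)   # inf_K sup_P
assert H_max <= R_min
print(H_max, R_min)                  # 1.5 <= 1.75
```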

4. Statistical Mechanics

After the brief excursion into the game-theoretic approach, we return to a consideration of the Lagrangian approach as presented in Section 2. In the classical applications of the canonical ensemble, one may consider the microstates as particles having a certain energy. Thus, the numerical value $E_i$ associated with macrostate $C_i$ is an energy (say in joules). The conjugate variable $\tau$, which was carefully chosen to match the non-classical applications in the subsequent sections, is connected to the temperature $T$ (say in kelvins) by
$$(kT)\tau + 1 = 0$$
using Boltzmann’s constant $k$. Baez [8] (p. 30) refers to the traditional conjugate variable $\beta = -\tau = 1/kT$ as the coolness: the lower the (non-negative) temperature $T$, the higher the value of $\beta$. The problem with such traditional conventions, even within statistical mechanics, is that they are ill-adapted to handling negative temperatures, which are readily observed in condensed matter situations where energy levels are bounded both below and above [9,10]. In particular, the use of $-\beta$ (i.e., our $\tau$!) as an abscissa coordinate in the first figure of [9] should be noted. If $T = 0$ and $\tau = 0$ are avoided, (22) shows that an increase in $\tau$ conveniently corresponds to an increase in $T$, and vice versa.
The relation (22) gives some insight into the nature of the quantity $\tau$ in the canonical ensemble: just like the temperature, it is a statistical property of collections of microstates. The thermodynamic entropy is
$$S = kH$$
in joules per kelvin. The thermodynamic potential is the dimensionless quantity
$$\Psi = \log Z(\tau),$$
while the Helmholtz free energy is
$$F = -kT\,\Psi = \Psi/\tau$$
in joules. The relation (13) takes the form
$$F = \langle E \rangle - TS.$$
Equation (12) becomes
$$p_i = e^{-E_i/kT} \big/ e^{-F/kT},$$
a well-known formula of kinetic theory (compare [21]). For example, it may be used to describe the distribution of atmospheric particles at different heights, according to their potential energy in the Earth’s gravitational field [21] (§40-1,2), [3] (§6.1.2(a)).
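As a sketch of this barometric application (the parameter values are rough, illustrative assumptions), the Boltzmann factors give the relative occupation of a few discrete heights:

```python
import numpy as np

k = 1.380649e-23                 # Boltzmann's constant, J/K
T = 300.0                        # absolute temperature, K
m = 4.65e-26                     # approximate mass of an N2 molecule, kg
g = 9.81                         # gravitational acceleration, m/s^2
h = np.array([0.0, 2000.0, 4000.0, 8000.0])   # heights, m
E = m * g * h                    # potential energies E_i, J
w = np.exp(-E / (k * T))         # Boltzmann factors exp(-E_i / kT)
p = w / w.sum()                  # normalized over these four levels only
print(p)                         # occupation decreases with height
```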
When considering physical applications of the canonical ensemble, it may prove useful to use natural units or Planck units with Planck’s constant set to 1 [15] (§III.2). Then, the energies that appear in the statistical mechanics applications of the Gibbs ensemble are seen to have the dimension $(\text{time})^{-1}$. For example, the energy of a photon of light is given as the product of Planck’s constant with the frequency of the corresponding wave.

5. Time Emergence

In the statistical mechanical applications of the canonical ensemble discussed in Section 4, the coolness $\beta = 1/kT$, a Lagrange conjugate of energy, emerges as a statistical property of a collection of microstates. On the other hand, using natural units, such Lagrange conjugates of energy as $\beta$ or our preferred $\tau$ should appear with the units of time. Now, following [14], we examine a model where $\tau$ does indeed represent an emergent intrinsic age of a biological system. It provides a conceptually instructive model of competition between $r$ different species, labeled $1, \dots, r$, as described by Eigen’s phenomenological rate equations [11]. Suppose that species $i$ has an unconstrained growth rate of $E_i$ (say in per annum units). This means that a population of $n_i$ individuals of species $i$ growing without constraint has a rate of change
$$\dot{n}_i = n_i E_i$$
(using Newton’s dot notation for the derivative). At a Newtonian time $t$ in an interval $[s, u]$ from a start time $s$ to an ultimate time $u$, the population $n_i(t)$ is given as
$$n_i(t) = n_i(s) \exp\big((t-s) E_i\big)$$
—exponential growth. Competition (as modeled by Eigen’s equations) arises when the individuals of the r species form a joint population maintained at a constant total count N. The birth of one individual is compensated for by the death of another.
Figure 3 visualizes the individuals as fish in a lake, where the food requirements of each individual fish are the same, and the food supply sustains the constant total number N. Figure 3 may also be viewed as a slightly less abstract version of Figure 1. The individual fish correspond to the microstates, which share a macrostate if they belong to the same species.
The traditional treatment to determine the population $n_i(t)$ of species $i$ at given Newtonian times $t$ in an interval $[s, u]$ (cf. [12,13] or [16] (§4)) involves solving the coupled quasilinear system
$$\dot{n}_i(t) = \big(E_i - \bar{E}(t)\big)\, n_i(t) = E_i\, n_i(t) - \sum_{j=1}^{r} \frac{n_i(t)\, E_j}{N}\, n_j(t)$$
of ordinary differential equations. Although (26) may be seen as a special case of [3] (20 Equation (80)), the treatment in that reference, concerned as it is with equilibrium or long-term values, does not relate to our analysis, the coupling being introduced through the final, quadratic or two-point interaction term of (26). Based on an Ansatz to translate to a linear system [16] (26), the classical solutions
$$n_i(t) = n_i(s) \exp\big((t-s) E_i\big) \Big/ \exp\left(\int_s^t \bar{E}(t')\, dt'\right)$$
are valid over the Newtonian time interval $[s, u]$. Here, the initial conditions that are required to solve the system of ordinary differential equations record the population count $n_i(s)$ at $t = s$.
The function $\bar{E}(t)$ appearing in the middle part of (26), whose specification follows from
$$0 = \frac{d}{dt} \sum_{i=1}^{r} n_i(t) = \sum_{i=1}^{r} \dot{n}_i(t) = \sum_{i=1}^{r} \big(E_i - \bar{E}(t)\big)\, n_i(t) = \sum_{i=1}^{r} E_i\, n_i(t) - N\, \bar{E}(t),$$
represents the instantaneous cull rate
$$\bar{E}(t) = \sum_{i=1}^{r} \frac{n_i(t)}{N}\, E_i$$
required to hold the total population constant at the carrying capacity $N$. The argument
$$\int_s^t \bar{E}(t')\, dt'$$
of the exponential in the denominator of the right-hand side of (27) is then recognized as counting the total number of starvation victims registered over the time interval $[s, t]$.
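The following sketch (with illustrative growth rates and initial populations) integrates the coupled system (26) numerically and confirms the closed form (27), with the integrated cull rate evaluated explicitly:

```python
import numpy as np
from scipy.integrate import solve_ivp

E = np.array([1.0, 2.0, 3.0])          # unconstrained growth rates E_i
N = 1000.0                             # carrying capacity
n0 = np.array([0.5, 0.3, 0.2]) * N     # populations at the start time s = 0

def rhs(t, n):
    Ebar = (n @ E) / N                 # instantaneous cull rate (28)
    return (E - Ebar) * n              # Eigen's equations (26)

t_end = 2.0
sol = solve_ivp(rhs, (0.0, t_end), n0, rtol=1e-10, atol=1e-8)
numeric = sol.y[:, -1]

# Closed form (27) with s = 0: the exponentiated integral of the cull rate
# works out to sum_j n_j(0) exp(t E_j) / N, keeping the total population at N.
closed = N * n0 * np.exp(t_end * E) / (n0 * np.exp(t_end * E)).sum()
assert np.allclose(numeric, closed, rtol=1e-6)
print(closed)
```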
In the approach through the canonical ensemble, we imagine going to the lake and catching a moderately sized group of fish in a net. The relative frequency $f_i$ of each species $i$ in the catch is taken as a good approximation to the overall probability $p_i$ of catching a member of that species. In other words, we take
$$f_i = \frac{n_i}{N}$$
for $1 \le i \le r$ at any given time. Knowing these relative frequencies $f_i$, together with the unconstrained growth rates $E_i$ of each species, we obtain the expected average value (6) that is the premise for the canonical ensemble. The relationship (28), fundamental to our approach, equates the cull rate $\bar{E}$ to the average
$$\sum_{i=1}^{r} f_i E_i$$
or gross growth rate (GGR) of the population. Equation (12) yields
$$n_i(\tau) = N \exp(\tau E_i) \Big/ \sum_{j=1}^{r} \exp(\tau E_j),$$
in particular with
$$n_i(s) = N \exp(s E_i) \Big/ \sum_{j=1}^{r} \exp(s E_j)$$
as an initial condition that does not need to be specified separately in our approach. When the species are labeled by increasing unconstrained birth rates (assuming non-degeneracy), say
$$0 < E_1 < E_2 < \dots < E_r$$
with $1 < r$, then the most prolific species $r$ will ultimately dominate.
The intrinsic time $\tau$ that appears in (31) is an emergent statistical property of the complex system. As the system ages, the proportion of the dominant species $r$ increases, leading to a lack of biodiversity. Restocking the lake with a good mix of the various species would rejuvenate the ecosystem, resetting the system time $\tau$ independently of the relentless forward progress of the Newtonian time $t$.
In conjunction with the emergence of the system time $\tau$, the canonical ensemble treatment of the ecosystem with the fixed carrying capacity $N$ has an additional feature, which will be exploited further in the following section. For this discussion, assume the start time $s$ is set to $t = 0$, with a uniform distribution at that time in which each species $i$ has a population
$$n_i(0) = \frac{N}{r}.$$
Then, the solutions (25) of the unconstrained Equation (24) for a negative time $t$ take the form
$$n_i(t) = \frac{N}{r} \exp(t E_i).$$
Let $M$ denote the total fish population at any given time. Thus, $M = N$ for $t > 0$, while Equation (35) gives
$$M = \frac{N}{r} \sum_{i=1}^{r} \exp(t E_i)$$
for $t < 0$. As a consequence, the equations
$$f_i = \exp(t E_i) \Big/ \sum_{j=1}^{r} \exp(t E_j)$$
for the relative frequencies, obtained for a positive time t from (12) making use of the canonical ensemble description of the constrained ecology, are equally valid during the unconstrained population growth at negative times. It is indeed remarkable that the relative frequencies extrapolate backwards from the canonical ensemble, even though the assumptions leading to the canonical ensemble do not apply in this unconstrained regime.
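A quick numerical check of this backward extrapolation (with illustrative rates): under pure exponential growth (35) from the uniform distribution (34), the observed relative frequencies at a negative time $t$ coincide with the canonical-ensemble formula (37):

```python
import numpy as np

E = np.array([1.0, 2.0, 3.0])          # illustrative growth rates E_i
r, N = len(E), 900.0                   # species count and carrying capacity
t = -1.5                               # a pre-transition (negative) time

n = (N / r) * np.exp(t * E)            # unconstrained populations (35)
M = n.sum()                            # total population (36), below N here
f = n / M                              # observed relative frequencies
canonical = np.exp(t * E) / np.exp(t * E).sum()   # formula (37)
assert M < N and np.allclose(f, canonical)
print(M, f)
```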

6. A Phase Transition

In the classical entropy-maximization treatment of the canonical ensemble (Section 2), the optimization is taken over the open set
$$\Delta_{r-1}^{\circ} = \left\{ (p_1, \dots, p_r) \;\middle|\; p_1 + \dots + p_r = 1,\ 0 < p_1, \dots, p_r \right\}$$
(7) of positive probabilities, the interior of the $(r-1)$-dimensional simplex $\Delta_{r-1}$. In the competition model presented in Section 5, $p_i$ represents the probability of catching a fish of species $i$, for $1 \le i \le r$, although we often prefer to use the relative frequency $f_i$ of (29) as a proxy. In this section, following [16], we now consider an additional variable $p_0$ and the subset
$$\Delta_r^{\ast} = \left\{ (p_0, p_1, \dots, p_r) \;\middle|\; p_0 + p_1 + \dots + p_r = 1,\ 0 \le p_0,\ 0 < p_1, \dots, p_r \right\}$$
of the $r$-dimensional simplex $\Delta_r$, maximizing the parametric entropy
$$-\sum_{i=0}^{r} p_i \log p_i$$
over the non-open set $\Delta_r^{\ast}$. This procedure leads to a more complete description of the ecology depicted in Figure 3 that also applies to the unconstrained phase where the total fish population $M$ is below the carrying capacity $N$. In this broadened context, the regime analyzed in Section 5 is called the constrained phase. For simplicity of exposition, the phase transition is assumed to take place at system time $\tau = 0$ with a uniform distribution of all the species at that time, as in (34) above. Following the analogy between time and temperature, one might regard setting the time of the phase transition to zero as analogous to the (pre-1948) Celsius scale setting of zero for a phase transition of water.
Given the various kinds of phase transition that physicists might recognize, we invoke Penrose’s general definition [22] (§28.1),
“A phenomenon of this nature, where a reduction in the ambient temperature induces an abrupt gross overall change in the nature of the stable equilibrium state of the material, is called a phase transition”,
to justify our current terminology. In the ecological setting, a “reduction in the ambient temperature” $T$ is interpreted as an increase in the system time $\tau$ in accordance with the relation (22). Co-opting common physical terminology, we describe the variable $p_0$ in (38) as the order parameter [23]. Since the variables $p_1, \dots, p_r$ no longer function directly as naive catch probabilities during the unconstrained phase, the way they do during the constrained phase, we refer to them as (additional) parameters in the complete history of the ecosystem, thereby leading to the terminology of (39) for the corresponding entropy. We then define
$$H = -\sum_{i=1}^{r} f_i \log f_i$$
as the population entropy (Figure 4), to contrast with the parametric entropy of (39).
Since the population counts of the various fish species do not undergo a discontinuous change at the phase transition, one may wonder where the “abrupt gross overall change in the nature of the stable equilibrium state” comes in. Mathematically, it is seen in the change from the exponential growth solution (25) to the modified version (27), where the denominator with the exponentiated integral suddenly appears. For the individual fish, it means the drastic arrival of the possibility of death by starvation, where previously they were always able to live out their natural lifespans. It is also worth noting the emergence of the “long-range correlations” indicated by the addition of the final term of (26) to the original unconstrained growth Equation (24).
Heuristically, if not too literally, the order parameter $p_0$ may be associated with a ghost species 0 having a natural unconstrained growth rate $E_0 = 0$. Consider maximization of the parametric entropy (39) over the non-open set $\Delta_r^{\ast}$ of (38), subject to the equality constraint
$$D = \sum_{i=0}^{r} p_i E_i$$
on the parameters. The quantity $D$ appearing in (41)—obviously motivated as a version of (6) that has been extended to include the ghosts—is discussed at the end of this section using (51). We take the Lagrangian
$$L(p_i, \sigma, \tau) = -\sum_{i=0}^{r} p_i \log p_i + \sigma \left(1 - \sum_{i=0}^{r} p_i\right) - \tau \left(D - \sum_{i=0}^{r} p_i E_i\right)$$
in terms of the parameters $p_0, p_1, \dots, p_r$. When the weak inequality constraint on $p_0$ from the definition (38) of $\Delta_r^{\ast}$ is binding or “active” [18] (p. 221), i.e., $p_0 = 0$ and $p_0 \log p_0 = 0$ (either by convention or as the result of the limiting procedure $\lim_{p_0 \to 0^+} p_0 \log p_0 = 0$), we have
$$L(p_0, p_1, \dots, p_r, \sigma, \tau) = L(p_1, \dots, p_r, \sigma, \tau)$$
in terms of the original Lagrangian (8). Thus, in the situation where the order parameter $p_0$ is zero and the remaining parameters $p_1, \dots, p_r$ are recognized as the corresponding relative frequencies, the extended description reduces to the original description. In particular, when $p_0 = 0$ (i.e., there are no ghosts), the parametric entropy reduces to the population entropy.
When $p_0 > 0$, the weak inequality constraint on $p_0$ from the definition (38) of $\Delta_r^{\ast}$ is slack or “inactive” [18] (p. 221); the analysis of the Lagrangian (42) for the parametric entropy proceeds in similar fashion to the analysis of the original Lagrangian (8) in Section 2. The stationarity conditions $\partial L / \partial p_i = 0$ for $0 \le i \le r$ reduce to $\log p_i = -(1+\sigma) + \tau E_i$ or $p_i = \exp(\tau E_i) / \exp(1+\sigma)$. A substitution in the completeness constraint yields
$$1 = \sum_{i=0}^{r} p_i = \sum_{i=0}^{r} \exp(\tau E_i) / \exp(1+\sigma)$$
or
$$\exp(1+\sigma) = \sum_{i=0}^{r} \exp(\tau E_i) = 1 + \sum_{i=1}^{r} \exp(\tau E_i) = 1 + Z(\tau),$$
using (11) for the latter term, yielding the expressions
$$p_i = \exp(\tau E_i) \Big/ \sum_{j=0}^{r} \exp(\tau E_j) = \frac{\exp(\tau E_i)}{1 + Z(\tau)}$$
for the parameters $p_0, p_1, \dots, p_r$.
For $i = 0$, the expression (43) determines the order parameter as
$$p_0 = \left(1 + \sum_{j=1}^{r} \exp(\tau E_j)\right)^{-1} = \frac{1}{1 + Z(\tau)}.$$
Taking Equation (43) for $0 < i \le r$, the remaining parameters may be rewritten in the form
$$p_i = p_0 \exp(\tau E_i).$$
For $1 \le i \le r$, a substitution of this expression into (35) yields
$$n_i = \frac{N p_i}{r\, p_0},$$
whence (36) may be rewritten as
$$M = \sum_{j=1}^{r} n_j = \frac{N (1 - p_0)}{r\, p_0}$$
to determine the total population in the unconstrained phase in terms of the order parameter. The assignment $p_0 \mapsto M$ represented by (47) may also be inverted to yield
$$p_0 = \frac{N}{N + rM}$$
as an equivalent expression of $p_0$ in terms of $M$. It is clear that the expressions (46) and (47), valid for negative times, will not continue to hold in the constrained regime where $p_0 = 0$.
In the unconstrained phase, an experiment may be conducted to determine the total population (47) and, thus, the order parameter as given by (48). A fisherman trawls a fixed volume of water and counts the number $M'$ of fish caught in the trawl. The number of fish $N'$ that would be caught in the trawl at the carrying capacity $N$ is presumed to be known, so the total population $M$ is obtained as $M' N / N'$. This refinement of the catch protocol is described as the trawl. Using the relation (45) that holds for $p_0 > 0$, the relative frequency of species $i$ in the trawl is
$$f_i = \exp(E_i \tau) \Big/ \sum_{j=1}^{r} \exp(E_j \tau) = p_i \Big/ \sum_{j=1}^{r} p_j = p_i\, (1 - p_0)^{-1},$$
recalling (37) for the first equality. The outer fragment
$$f_i = p_i\, (1 - p_0)^{-1} \quad \text{or equivalently} \quad p_i = f_i\, (1 - p_0)$$
of (49) is then seen to hold for the entire history, extending the previous identification $f_i = p_i$, which only holds in the constrained regime $p_0 = 0$. The factor $(1 - p_0)$ appearing in (50) is described as the modifier, for any $p_0$ within the range $\{0\} \cup \left[(1+r)^{-1}, 1\right)$, and, thus, the parameters $p_1, \dots, p_r$ are recognized as modified relative frequencies.
The quantity $D$ that appears in the constraint (41) may now be examined. We have
$$D = \sum_{i=0}^{r} p_i E_i = (1 - p_0) \sum_{i=1}^{r} f_i E_i = (1 - p_0)\, \bar{E}$$
using the second equation of (50). Since $D$ is given as the product of the modifier with the gross growth rate (30), it is described as the modified gross growth rate. In particular, once the order parameter is known from the trawl, the modified GGR is obtained from the unmodified GGR, which is also determined from the trawl.
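Collecting the relations of this section, a short sketch (with illustrative numbers again) runs the trawl pipeline in the unconstrained phase: from the populations (35) to the order parameter via (48), and from the gross growth rate (30) to the modified rate $D$ of (51), with a consistency check against the $i = 0$ case of (43):

```python
import numpy as np

E = np.array([1.0, 2.0, 3.0])     # illustrative growth rates E_i
r, N = len(E), 900.0              # species count and carrying capacity
t = -1.5                          # a time in the unconstrained phase

n = (N / r) * np.exp(t * E)       # unconstrained populations (35)
M = n.sum()                       # total population, as estimated by the trawl
f = n / M                         # relative frequencies in the trawl

p0 = N / (N + r * M)              # order parameter from (48)
GGR = f @ E                       # gross growth rate (30)
D = (1 - p0) * GGR                # modified gross growth rate (51)

# Consistency: p_0 = 1 / (1 + Z(t)) with Z(t) = sum_j exp(t E_j).
assert np.isclose(p0, 1.0 / (1.0 + np.exp(t * E).sum()))
print(p0, D)
```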
In summary, the extended Lagrangian $L$ given in (42) represents a maximization of the parametric entropy, subject to the completeness constraint on the parameters and the knowledge of the modified gross growth rate $D$ that is obtained from the trawl. When $L$ is maximized over the constraint set $\Delta_r^{\ast}$ that includes a weak inequality for the order parameter $p_0$, along with the usual strong inequalities for the remaining parameters, it enables one to use entropy maximization for the modeling of a phase transition in a finite situation, without recourse to any infinitary “thermodynamic limit”.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Jaynes, E.T. Information theory and statistical mechanics, I. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  2. Jaynes, E.T. Information theory and statistical mechanics, II. Phys. Rev. 1957, 108, 171–190. [Google Scholar] [CrossRef]
  3. Kapur, J.N. Maximum Entropy Models in Science and Engineering; Wiley: New York, NY, USA, 1993. [Google Scholar]
  4. Smith, J.D.H. Some observations on the concepts of information-theoretic entropy and randomness. Entropy 2001, 3, 1–11. [Google Scholar] [CrossRef]
  5. Harremoës, P.; Topsøe, F. Maximum entropy fundamentals. Entropy 2001, 3, 191–226. [Google Scholar] [CrossRef]
  6. Topsøe, F. Information theoretical optimization techniques. Kybernetika 1979, 15, 8–27. [Google Scholar]
  7. Topsøe, F. Game theoretical optimization inspired by information theory. J. Global Optim. 2009, 43, 553–564. [Google Scholar] [CrossRef]
  8. Baez, J. What is Entropy? Available online: https://arxiv.org/abs/2409.09232 (accessed on 3 April 2025).
  9. Braun, S.; Schneider, U. Negative Absolute Temperature. Available online: https://www.quantum-munich.de/119947/Negative-Absolute-Temperatures (accessed on 3 April 2025).
  10. Braun, S.; Ronzheimer, J.P.; Schreiber, M.; Hodgman, S.S.; Rom, T.; Bloch, I.; Schneider, U. Negative absolute temperature for motional degrees of freedom. Science 2013, 339, 52–55. [Google Scholar] [CrossRef]
  11. Eigen, M. Self-organization of matter and the evolution of biological macromolecules. Naturwissenschaften 1971, 58, 465–523. [Google Scholar] [CrossRef] [PubMed]
  12. Jones, B.L.; Enns, R.H.; Rangnekar, S.S. On the theory of selection of coupled macromolecular systems. Bull. Math. Biol. 1976, 38, 15–28. [Google Scholar] [CrossRef]
  13. Thompson, C.J.; McBride, J.L. On Eigen’s theory of the self-organization of matter and the evolution of biological macromolecules. Math. Biosci. 1974, 21, 127–142. [Google Scholar] [CrossRef]
  14. Smith, J.D.H. Competition and the canonical ensemble. Math. Biosci. 1996, 133, 69–83. [Google Scholar] [CrossRef] [PubMed]
  15. Zee, A. Quantum Field Theory in a Nutshell, 2nd ed.; Princeton University Press: Princeton, NJ, USA, 2010. [Google Scholar]
  16. Smith, J.D.H.; Yang, Z. Phase transition and time emergence in a statistical model of species competition. Results Phys. 2023, 44, 106178. [Google Scholar] [CrossRef]
  17. Smith, J.D.H. On the Mathematical Modeling of Complex Systems; Center for Advanced Studies, Warsaw University of Technology: Warsaw, Poland, 2013; Available online: https://www.csz.pw.edu.pl/index.php/cszeng/content/download/2671/20221/file/LN08.pdf (accessed on 20 April 2025).
  18. Hestenes, M. Optimization Theory: The Finite Dimensional Case; Wiley: New York, NY, USA, 1975. [Google Scholar]
  19. Scott, J.C. Seeing Like a State; Yale University Press: New Haven, CT, USA, 1998. [Google Scholar]
  20. Scott, J.C. The Art of Not Being Governed; Yale University Press: New Haven, CT, USA, 2009. [Google Scholar]
  21. Feynman, R.P.; Leighton, R.B.; Sands, M. The Feynman Lectures on Physics; Addison-Wesley: Reading, MA, USA, 1963; Volume I. [Google Scholar]
  22. Penrose, R. The Road to Reality; Jonathan Cape: London, UK, 2004. [Google Scholar]
  23. Yeomans, J.M. Statistical Mechanics of Phase Transitions; Clarendon Press: Oxford, UK, 1992. [Google Scholar]
Figure 2. Matching codes to consistent distributions. (a) A binary prefix code $\kappa$, with 1 as “yes” and 0 as “no” in response to the boxed questions at the internal nodes of the tree. The triple of respective code lengths for $C_1 = 1$, $C_2 = 01$, and $C_3 = 00$ is $(1, 2, 2)$. (b) A phase space where the partition $\Pi = \{C_1, C_2, C_3\}$ witnesses a consistent distribution, also written as $\Pi = (2^{-1}, 2^{-2}, 2^{-2})$ or $2^{-\langle 1, 2, 2 \rangle}$ in an array notation, that matches the code $\kappa$. The particular microstate $m$ inside the macrostate $C_1$ would have the tail of the prey positioned halfway out to the left, with an intermediate value for its leftward momentum.
Figure 3. Lake Gibbs: a fixed total population $N$ of fish in species $1, \dots, i, \dots, r$ competing in an environment with a constant influx of nutrients. Compare [17] (Figure 16), [16] (Figure 1).
Figure 4. The population entropy (40) over a time range extending from low negative to high positive values, embracing the phase transition at $\tau = 0$. At low negative times, the small value of the population entropy is due to the predominance of the least fecund species, required for the population to evolve to the uniform distribution at the phase transition where the carrying capacity is reached. At high positive times, the small value of the population entropy is due to the predominance of the most fecund species. Compare [16] (Figure 3), and also the first figure of [9]. The discs indicate characteristic times for the involution, marking points of inflection for the population entropy. These times do not appear in the first figure of [9], where the entropy curve is concave.
Table 1. Predator’s interpretation of prey’s tail motion. The prey’s strategy is determined by its random choice of macrostate from the consistent distribution $\Pi$ shown in Figure 2b.
Macrostate | Positions in the Macrostate | Momenta in the Macrostate
$C_1$ | Left side | Any
$C_2$ | Right side | Rightwards
$C_3$ | Right side | Leftwards