Review

The Price Equation Program: Simple Invariances Unify Population Dynamics, Thermodynamics, Probability, Information and Inference

Department of Ecology and Evolutionary Biology, University of California, Irvine, CA 92697-2525, USA
Entropy 2018, 20(12), 978; https://doi.org/10.3390/e20120978
Submission received: 22 October 2018 / Revised: 26 November 2018 / Accepted: 14 December 2018 / Published: 16 December 2018
(This article belongs to the Section Entropy Reviews)

Abstract

The fundamental equations of various disciplines often seem to share the same basic structure. Natural selection increases information in the same way that Bayesian updating increases information. Thermodynamics and the forms of common probability distributions express maximum increase in entropy, which appears mathematically as loss of information. Physical mechanics follows paths of change that maximize Fisher information. The information expressions typically have analogous interpretations as the Newtonian balance between force and acceleration, representing a partition between the direct causes of change and the opposing changes in the frame of reference. This web of vague analogies hints at a deeper common mathematical structure. I suggest that the Price equation expresses that underlying universal structure. The abstract Price equation describes dynamics as the change between two sets. One component of dynamics expresses the change in the frequency of things, holding constant the values associated with things. The other component of dynamics expresses the change in the values of things, holding constant the frequency of things. The separation of frequency from value generalizes Shannon’s separation of the frequency of symbols from the meaning of symbols in information theory. The Price equation’s generalized separation of frequency and value reveals a few simple invariances that define universal geometric aspects of change. For example, the conservation of total frequency, although a trivial invariance by itself, creates a powerful constraint on the geometry of change. That constraint plus a few others seem to explain the common structural forms of the equations in different disciplines. From that abstract perspective, interpretations such as selection, information, entropy, force, acceleration, and physical work arise from the same underlying geometry expressed by the Price equation.

1. Introduction

The Price equation is an abstract mathematical description for the change in populations. The most general form describes a way to map entities between two sets. That abstract set mapping partitions the forces that cause change between populations into two components: the direct and inertial forces.
The direct forces change frequencies. The inertial forces change the values associated with population members. Changed values can be thought of as an altered frame of reference driven by the inertial forces.
From the abstract perspective of the Price equation, one can see the same partition of direct and inertial forces in the fundamental equations of many different subjects. That abstract unity clarifies understanding of natural selection and its relations to such disparate topics as thermodynamics, information, the common forms of probability distributions, Bayesian inference, and physical mechanics.
In a special form of the Price equation, the changes caused by the direct and inertial forces cancel so that the total remains conserved. That conservation law defines a universal invariance and canonical separation of the direct and inertial forces. The canonical separation of forces clarifies the common mathematical structure of seemingly different topics.
This article sketches the overall argument for the common mathematical structure of different subjects. The argument is, at present, a broad framing of conjectures. The conjectures raise many interesting problems that require further work. Consult Frank [1,2] for mathematical details, open problems, and citations to additional literature.

2. The Abstract Price Equation

The Price equation describes the change in the average value of some property between two populations [1,3]. Consider a population as a set of things. Each thing has a property indexed by $i$. Those things with a common property index comprise a fraction, $q_i$, of the population and have average value, $z_i$, for whatever we choose to measure by $z$. Write $\mathbf{q}$ and $\mathbf{z}$ as the vectors over all $i$. The population average value is $\bar{z} = \mathbf{q} \cdot \mathbf{z} = \sum_i q_i z_i$.
A second population has matching vectors $\mathbf{q}'$ and $\mathbf{z}'$. Those vectors for the second population are defined by the special set mapping of the abstract Price equation. In particular, $q_i'$ is the fraction of the second population derived from entities with index $i$ in the first population. The second population does not have its own indexing by $i$. Instead, the second population’s indices derive from the mapping of the second population’s members to the members of the first population.
Similarly, $z_i'$ is the average value in the second population of members derived from entities with index $i$ in the first population. Let $\Delta$ be the difference between the derived population and the original population, $\Delta\mathbf{q} = \mathbf{q}' - \mathbf{q}$ and $\Delta\mathbf{z} = \mathbf{z}' - \mathbf{z}$.
To calculate the change in average value, it is useful to begin by considering $q$ and $z$ as abstract variables associated with the first set, and $q'$ and $z'$ as corresponding variables from the second set.
The change in the product of $q$ and $z$ is $\Delta(qz) = q'z' - qz$. Note that $q' = q + \Delta q$ and $z' = z + \Delta z$. We can write the total change in the product as a discrete analog of the chain rule for differentiation of a product, yielding two partial change terms
$$\Delta(qz) = (q + \Delta q)(z + \Delta z) - qz = (\Delta q)z + (q + \Delta q)\Delta z = (\Delta q)z + q'\Delta z.$$
The first term, $(\Delta q)z$, is the partial difference of $q$ holding $z$ constant. The second term, $q'\Delta z$, is the partial difference of $z$ holding $q$ constant. In the second term, we use $q'$ as the constant value because, with discrete differences, one of the partial change terms must be evaluated in the context of the second set.
The same product rule can be applied to vectors, yielding the abstract form of the Price equation
$$\Delta\bar{z} = \Delta(\mathbf{q} \cdot \mathbf{z}) = \Delta\mathbf{q} \cdot \mathbf{z} + \mathbf{q}' \cdot \Delta\mathbf{z}.$$
The abstract Price equation simply partitions the total change in the average value into two partial change terms.
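As a concrete check of this partition, the following minimal sketch (assuming NumPy is available; the two populations and their values are invented for illustration) verifies numerically that the two partial change terms sum to the total change in the average value.

```python
import numpy as np

# Invented example: three types with frequencies q and values z in the first
# set, and derived frequencies q' and values z' under the set mapping.
q = np.array([0.5, 0.3, 0.2])
qp = np.array([0.4, 0.4, 0.2])   # q'
z = np.array([1.0, 2.0, 3.0])
zp = np.array([1.1, 2.0, 2.8])   # z'

total = qp @ zp - q @ z          # Delta z-bar
direct = (qp - q) @ z            # frequencies change, values held constant
inertial = qp @ (zp - z)         # values change, frequencies held constant

assert np.isclose(total, direct + inertial)
```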
Note that $\mathbf{q}$ has a clearly defined meaning as frequency, whereas $\mathbf{z}$ may be chosen arbitrarily as any values assigned to members. The values, $\mathbf{z}$, define the frame of reference. Because frequency is clearly defined, whereas values are arbitrary, the frequency changes, $\Delta\mathbf{q}$, take on the primary role in analyzing the structural aspects of change that unify different subjects.
The primacy of frequency change naturally labels the first term, with $\Delta\mathbf{q}$, as the changes caused by the direct forces acting on populations. Because $\mathbf{q}$ and $\mathbf{q}'$ define a sequence of probability distributions, the primary aspect of change concerns the dynamics of probability distributions.
The arbitrary aspect of the values, $\mathbf{z}$, naturally labels the second term, with $\Delta\mathbf{z}$, as the changes caused by the forces that alter the frame of reference, the inertial forces.
Table 1 defines commonly used symbols. Table A1 and Table A2 summarize mathematical forms and relations between disciplines.

3. Canonical Form

The prior section emphasized the primary role for the dynamics of probability distributions, $\Delta\mathbf{q}$, which follows as a consequence of the forces acting on populations.
The canonical form of the Price equation focuses on the dynamics of probability distributions and the associated forces that cause change. To obtain the canonical form, define
$$a_i = \frac{\Delta q_i}{q_i}$$
as the relative change in the frequency of the ith type.
We can use any value for $\mathbf{z}$ in the Price equation. Choose $\mathbf{z} \mapsto \mathbf{a}$. Then
$$\Delta\bar{a} = \Delta\mathbf{q} \cdot \mathbf{a} + \mathbf{q}' \cdot \Delta\mathbf{a} = 0,$$
in which the equality to zero expresses the conservation of total probability
$$\bar{a} = \mathbf{q} \cdot \mathbf{a} = \sum_i q_i \frac{\Delta q_i}{q_i} = \sum_i \Delta q_i = 0,$$
because the total changes in probability must cancel to keep the sum of the probabilities constant at one.
Thus, Equation (3) appears as a seemingly trivial result, a notational spin on $\sum_i \Delta q_i = 0$. However, many generalities and connections between seemingly different disciplines follow from the partition of conserved probability into the two terms of Equation (3).
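A short numeric sketch (NumPy again, with invented distributions for three sets) confirms that the average relative change vanishes in each frame of reference, so the two partial change terms cancel exactly.

```python
import numpy as np

q   = np.array([0.5, 0.3, 0.2])
qp  = np.array([0.4, 0.4, 0.2])    # q'
qpp = np.array([0.35, 0.45, 0.2])  # q'', an arbitrary third set

a  = (qp - q) / q      # relative change from the first to the second set
ap = (qpp - qp) / qp   # corresponding change from the second to the third set

# The average relative change is zero in each frame of reference...
assert np.isclose(q @ a, 0.0) and np.isclose(qp @ ap, 0.0)
# ...so the partition Delta q . a + q' . Delta a sums to zero.
assert np.isclose((qp - q) @ a + qp @ (ap - a), 0.0)
```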

4. Preliminary Interpretation

The Price equation by itself does not calculate the particular $\Delta\mathbf{q}$ values of dynamics. Instead, the equation emphasizes the fundamental constraint on dynamics that arises from invariant total probability. The changes, $\Delta\mathbf{q}$, must satisfy the constraint in Equation (3), specifying certain properties that any possible dynamical path must have.
Put another way, all possible dynamical paths will share certain invariant properties. It is those invariant properties that reveal the ultimate unity between different applications and disciplines.
Note that $\mathbf{q}$ is fundamental, whereas $\mathbf{z}$ is an arbitrary assignment of value or meaning. The focus on $\mathbf{q}$ corresponds to the reason why information theory considers only probabilities, without consideration of meaning or values. In general, the unifying fundamental aspect among disciplines concerns the dynamics of probability distributions. We can then add values or meaning to that underlying fundamental basis.
In particular, we can first study universal aspects of the canonical invariant form based on $\mathbf{a}$. We can then derive broader results by simply making the coordinate transformation $\mathbf{a} \mapsto \mathbf{z}$, yielding the most general expression of the abstract Price equation in Equation (1).
Constraints on $\bar{z}$ or $\Delta\bar{z}$ specify additional invariances, which determine further structure of the possible dynamical paths and equilibria. Each $z_i$ may be a vector of values, allowing multiple constraints associated with the $\mathbf{z}$ values.
Alternatively, one can study the conditions required for $\Delta\bar{z}$ to change in particular ways. For example, what are the necessary and sufficient patterns of association between initial frequency, $\mathbf{q}$, relative frequency change, $\mathbf{a}$, and value, $\mathbf{z}$, to drive the change, $\Delta\bar{z}$, in a particular direction?

5. Temporal Dynamics

The frequency change terms, $\Delta q_i$, arise from the abstract set mapping assignment of members in the second set to members in the first set. In some cases, the abstract set mapping may differ from the traditional notion of dynamics as a temporal sequence, in which $q_i'$ is the frequency of type $i$ in the second set.
We may add various assumptions to achieve a temporal interpretation in which $i$ retains its meaning as a type through time. For example, following Price [4], we may partition $\mathbf{q} \mapsto \mathbf{q}'$ into two steps. In the initial step, $\mathbf{q} \mapsto \mathbf{q}^*$, the mapping preserves type, such that $q_i^*$ describes the frequency of type $i$ in the second set.
In the subsequent step, $\mathbf{q}^* \mapsto \mathbf{q}'$, the mapping accounts for the forces that change type. For a force that makes the change $i \to j$, we map type $j$ members in the second set to type $j$ members in the first set. Thus, $\Delta q_j = q_j' - q_j^*$ describes the net frequency change from the gains and losses caused by the forces of type reassignment.
For this two-step process that preserves type, the net change $\mathbf{q} \mapsto \mathbf{q}'$ combines the type-changing forces with other forces that alter frequency. Thus, we may consider type-preserving maps as a special case of the general abstract set mapping. In this article, I focus on the properties of the general abstract set mapping.

6. Key Results

Later sections use the abstract Price equation to show formal relations between natural selection and information theory, the dynamics of entropy and probability, basic aspects of physical dynamics, and other fundamental principles [2]. Here, I list some key results without derivation or discussion. This listing gives a sense of where the argument will go, providing a target for further development in later sections.
Throughout this article, I use ratios of vectors to denote elementwise division, for example, $\mathbf{q}'/\mathbf{q} = (q_1'/q_1, q_2'/q_2, \dots)$. A constant added to or multiplied by a vector applies the operation to each element of the vector; for example, $a + b\mathbf{z}$, for constants $a$ and $b$, yields $a + b z_i$ for each $i$.
  • D’Alembert’s principle of physical mechanics. We can write the canonical Price equation of Equation (3) as d’Alembert’s partition [2,5] between the direct forces, $\mathbf{F} = \mathbf{a}$, and the inertial forces of acceleration, $\mathbf{I}$, as
    $$\Delta\bar{a} = (\mathbf{F} + \mathbf{I}) \cdot \Delta\mathbf{q} = 0.$$
    This equation generalizes Newton’s second law that force equals mass times acceleration, describing the balance between force and acceleration. Here, the direct forces, $\mathbf{F}$, balance the inertial forces of acceleration, $\mathbf{I}$, along the path of change, $\Delta\mathbf{q}$. The condition $\Delta\bar{a} = 0$ describes conservative systems. For nonconservative systems, we can use $\mathbf{a} \mapsto \mathbf{z}$, with $\Delta\bar{z}$ not necessarily conserved.
  • Information theory. For small changes, $\Delta\mathbf{q} \to \dot{\mathbf{q}}$ and $\mathbf{F} = \mathbf{a} \to \log(\mathbf{q}'/\mathbf{q})$, the direct force term is
    $$\Delta\mathbf{q} \cdot \mathbf{F} = \Delta\mathbf{q} \cdot \mathbf{a} \to \mathcal{D}(\mathbf{q}' \,\|\, \mathbf{q}) + \mathcal{D}(\mathbf{q} \,\|\, \mathbf{q}') = \sum \frac{\dot{q}_i^2}{q_i} = \mathcal{F},$$
    in which $\mathcal{D}$ is the Kullback–Leibler divergence, a fundamental measure of information, and $\mathcal{F}$ is a nondimensional expression of Fisher information [6].
  • Extreme action. The term for direct force, or action, $\dot{\mathbf{q}} \cdot \mathbf{F}$, yields frequency change dynamics, $\dot{\mathbf{q}}$, determined by the extremum of the action, subject to constraint
    $$\mathcal{L} = \sum \dot{q}_i \phi_i - \frac{1}{2}\kappa \left( \sum \frac{\dot{q}_i^2}{q_i} - C^2 \right) - \xi \left( \sum \dot{q}_i - 0 \right),$$
    in which $\boldsymbol{\phi} = \mathbf{F}$ is a given force vector. The first parenthetical term constrains the incremental distance between probability distributions to be $\mathcal{F} = \sum \dot{q}_i^2/q_i = C^2$, for a given constant, $C$. The second parenthetical term constrains the total probability to remain invariant.
  • Entropy and thermodynamics. The force vector, $\boldsymbol{\phi}$, can be described as a growth process, $q_i' = q_i e^{\phi_i}$, with $\phi_i = \log(q_i'/q_i)$. A constraint on the system’s partial change in some quantity, $\dot{\mathbf{q}} \cdot \mathbf{z} = B$, constrains the new frequency vector, $\mathbf{q}'$. We may write the constraint as $\dot{\mathbf{q}} \cdot \log\mathbf{q}' = -\lambda(\dot{\mathbf{q}} \cdot \mathbf{z}) = -\lambda B$, thus
    $$\mathcal{L} = -\dot{\mathbf{q}} \cdot \log\mathbf{q} - \frac{1}{2}\kappa(\mathcal{F} - C^2) - \xi(\dot{\mathbf{q}} \cdot \mathbf{1} - 0) - \lambda(\dot{\mathbf{q}} \cdot \mathbf{z} - B).$$
    The action term, $-\dot{\mathbf{q}} \cdot \log\mathbf{q}$, is the increase in entropy, $-\mathbf{q} \cdot \log\mathbf{q}$. Maximizing the action maximizes the production of entropy.
  • Maximum entropy and statistical mechanics. In the prior example, the work done by the force of constraint is $\dot{\mathbf{q}} \cdot \mathbf{F}_c = -\lambda B$, with $\mathbf{F}_c = \log\mathbf{q}' = \log k - \lambda\mathbf{z}$. At maximum entropy, we obtain an equilibrium, $\log\mathbf{q}' = \log\mathbf{q}$. Thus, the maximum entropy equilibrium probability distribution is
    $$\mathbf{q} = k e^{-\lambda\mathbf{z}}.$$
    This Gibbs–Boltzmann exponential distribution is the principal result of statistical mechanics. Here, we obtained that result through a Price equation abstraction that led to maximum entropy production, subject to a constraining invariance on a component of change in $\bar{z}$.
  • Constraint, invariance and sufficiency. The maximum entropy probability distribution expresses the forces of constraint, $\mathbf{F}_c$, acting on $\mathbf{z}$. Different constraints yield different distributions. For example, the constraint $\mathbf{q} \cdot (\mathbf{z} - \mu)^2 = \sigma^2$ yields a Gaussian distribution for given mean, $\mu$, and variance, $\sigma^2$. This constraint is sufficient to determine the form of the distribution. Similarly, for small changes, the total change of the direct forces
    $$\Delta\mathbf{q} \cdot \mathbf{a} = \Delta\mathbf{q} \cdot \mathbf{F} \to \sum \frac{\dot{q}_i^2}{q_i} = \mathcal{F},$$
    does not require the exact form of the frequency changes, $\dot{\mathbf{q}}$. It is sufficient to know the Fisher information distance, $\sum \dot{q}_i^2/q_i = \mathcal{F}$, which determines the subsets of the possible change vectors, $\dot{\mathbf{q}}$, with the same invariant Fisher distance, $\mathcal{F}$. Many results from the abstract Price equation express invariance and sufficiency.
  • Inference: data as a force. Use $\theta$ as an index for different parameter values. Then $q_\theta$ matches the Bayesian notion of a prior probability distribution for the values of $\theta$. The posterior distribution is
    $$q_\theta' = q_\theta L_\theta,$$
    in which the normalized likelihood, $L_\theta$, describes the force of the data that drives the change in probability. In Price notation, the normalized likelihood is equivalent to the force vector, $\mathbf{L} \mapsto \mathbf{F}$, and also $\mathbf{L} - 1 \mapsto \mathbf{a}$. With that definition for $\mathbf{a}$ in terms of the force of the data, the structure and general properties of Bayesian inference follow as a special case of the abstract Price equation.
  • Invariance, scale and probability distributions. The maximum entropy probability distribution in Equation (7) is invariant to affine transformation, $\mathbf{z} \mapsto a + b\mathbf{z}$, because $k$ and $\lambda$ adjust to $a$ and $b$. That affine invariance with respect to $z$, which arises directly from the abstract Price equation, is sufficient by itself to determine the structure of commonly observed probability distributions, without need of invoking entropy maximization. The structure of common probability distributions is
    $$\mathbf{q} = k e^{-\lambda e^{\beta\mathbf{w}}}.$$
    The function $w(z)$ is a scale for $z$, such that a shift in that scale, $w \mapsto \alpha + w$, only changes $z$ by a constant multiple, and therefore does not change the probability pattern. Simple forms of $w$ lead to the various commonly observed continuous probability distributions. For example, $w(z) = \log z$ yields the stretched exponential distribution.

7. History of Earlier Forms

Before analyzing the abstract Price equation and the unification of disciplines, it is useful to write down some of the earlier expressions and applications of the Price equation from biology [1,7,8,9].

7.1. Fitness and Average Excess

This section extends the definition of relative changes in Equation (2). Let $w_i = q_i'/q_i$ be the relative growth, or relative fitness, of the $i$th type. Then we may define
$$a_i = w_i - 1 = \frac{q_i'}{q_i} - 1 = \frac{\Delta q_i}{q_i},$$
which, in biology, is Fisher’s average excess in fitness [10]. Note that $\Delta q_i = q_i a_i$ and that the average value of $w$ is $\bar{w} = 1$, thus $a_i = w_i - \bar{w}$.

7.2. Variance in Fitness

Considering $\mathbf{a}$ as a measure of fitness, the first term of Equation (3) becomes the partial change in average fitness caused by the direct forces, $\mathbf{F}$. In symbols,
$$\Delta_F\bar{a} = \Delta\mathbf{q} \cdot \mathbf{a} = \sum_i \Delta q_i \left( \frac{\Delta q_i}{q_i} \right) = \sum_i q_i \left( \frac{\Delta q_i}{q_i} \right)^2 = \sum_i q_i a_i^2 = V_w,$$
in which $\Delta_F$ is the partial change caused by the direct forces, and $V_w$ is the variance in fitness.
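As a numeric sketch of this identity (NumPy, invented frequencies), the direct force term equals the variance in fitness:

```python
import numpy as np

q  = np.array([0.5, 0.3, 0.2])
qp = np.array([0.4, 0.4, 0.2])    # q' after selection

a = (qp - q) / q                  # average excess in fitness; q-average is zero
Vw = q @ a**2                     # variance in fitness
assert np.isclose((qp - q) @ a, Vw)
```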

7.3. Fundamental Theorem

If we let
$$a_i = \alpha x_i + \epsilon_i$$
be the regression of fitness, $a_i$, on some predictor, $x_i$, and define $g_i = \alpha x_i$, then
$$\Delta_F\bar{a} = \sum_i q_i a_i^2 = V_g + V_\epsilon.$$
If one interprets $x_i$ as an inherited gene, and $\epsilon_i$ as an environmental effect that is not transmitted to the next generation, then the partial change in fitness by natural selection that is transmitted to the next generation is $\Delta_{\mathrm{NS}}\bar{a} = V_g$. This result is analogous to Fisher’s fundamental theorem of natural selection [8,11,12,13].
The analysis tracks three sets: the initial set before selection, with $\bar{a}$; the second set after selection, with $\bar{a}'$; and the third set after transmission, with $\bar{a}''$. The set after transmission retains only those changes associated with $x_i$, interpreted as an inherited gene, such that $\Delta\bar{a}'' = \bar{a}'' - \bar{a}$.

7.4. Covariance Form and Replicators

Using the definitions of relative fitness and average excess, the first term of the Price equation is
$$\Delta\mathbf{q} \cdot \mathbf{z} = \sum (\Delta q_i) z_i = \sum q_i a_i z_i = \sum q_i (w_i - \bar{w}) z_i = \mathrm{Cov}(w, z),$$
in which $\mathrm{Cov}(w, z)$ is the covariance between fitness and value. This covariance implies that natural selection tends to increase the average value of $z$ in proportion to the association between fitness and value. If the values do not change, $\Delta z_i = 0$, then the total change is
$$\Delta\bar{z} = \mathrm{Cov}(w, z).$$
This covariance equation has been widely used to study natural selection [9,14,15,16,17].
In one common application, sometimes referred to as the replicator problem, we label each individual in a population by its own unique index, $i$, and let $z_i = p_i$ be 0 or 1 to specify whether individual $i$ is a type 0 or a type 1 individual [18,19]. We can think of $p_i$ as the frequency of type 1 in individual $i$. Then $\bar{p}$ is the frequency of type 1 individuals in the population, and
$$\Delta\bar{p} = \mathrm{Cov}(w, p)$$
is the frequency change of types in the population [20]. Here, we assume that individuals do not change their type during transmission, $\Delta p_i = 0$, so that the second Price equation term is zero. This assumption is usually interpreted in biology as the absence of mutation.
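The covariance form is easy to verify numerically. The sketch below (NumPy; an invented population of five individuals, each its own type, with 0/1 type labels and arbitrary fitnesses) checks that the change in type frequency equals $\mathrm{Cov}(w, p)$.

```python
import numpy as np

p = np.array([1.0, 1.0, 0.0, 0.0, 0.0])   # 1 marks a type 1 individual
W = np.array([1.2, 1.1, 0.9, 1.0, 0.8])   # invented absolute fitnesses
q = np.full(5, 0.2)                        # uniform initial frequencies

w = W / (q @ W)                            # relative fitness, mean one
qp = q * w                                 # q' after selection

cov = q @ ((w - 1.0) * p)                  # Cov(w, p), q-weighted
assert np.isclose(qp @ p - q @ p, cov)     # Delta p-bar = Cov(w, p)
```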

7.5. Levels of Selection

We can write the second Price equation term as
$$\mathbf{q}' \cdot \Delta\mathbf{z} = \sum q_i' (\Delta z_i) = \sum q_i w_i (\Delta z_i) = \mathrm{E}(w \Delta z),$$
in which $\mathrm{E}$ denotes the expectation operator for the average value. Combining this expression with Equation (13), we obtain an alternative form of the Price equation
$$\Delta\bar{z} = \mathrm{Cov}(w, z) + \mathrm{E}(w \Delta z).$$
This form is often used to analyze how selection acts at different levels, such as individual versus group selection [3,21]. As an example, consider a variant of the replicator problem, which uses $\mathbf{z} \mapsto \mathbf{p}$, yielding
$$\Delta\bar{p} = \mathrm{Cov}(w, p) + \mathrm{E}(w \Delta p),$$
in which $p_i$ now denotes the frequency of type 1 individuals within the $i$th group of individuals, $w_i$ is the fitness of the $i$th group relative to all other groups, and $\Delta p_i$ is the change in the frequency of type 1 individuals within the $i$th group. Thus, the two terms can be interpreted as the change caused by selection between groups and the change caused by selection between individuals within groups.
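A minimal numeric sketch (NumPy; invented group frequencies, group fitnesses, and within-group changes) of this two-level partition:

```python
import numpy as np

q  = np.array([0.5, 0.3, 0.2])      # frequencies of groups
w  = np.array([1.1, 0.9, 0.9])      # relative group fitnesses, mean one
p  = np.array([0.2, 0.6, 0.5])      # frequency of type 1 within each group
dp = np.array([0.05, -0.02, 0.0])   # change of type 1 within each group

assert np.isclose(q @ w, 1.0)
qp, pp = q * w, p + dp               # q' and p'

between = q @ ((w - 1.0) * p)        # Cov(w, p): selection between groups
within = qp @ dp                     # E(w dp): change within groups
assert np.isclose(qp @ pp - q @ p, between + within)
```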

8. Mathematical Properties

This section illustrates mathematical properties of the Price equation. These mathematical properties set the foundation for unifying apparently different kinds of problems from different disciplines.

8.1. Geometry and Work

Write the standard Euclidean geometry vector length as the square root of the sum of squares
$$\|\mathbf{z}\| = \sqrt{\sum z_i^2}.$$
For any vector $\mathbf{z}$,
$$\Delta\mathbf{q} \cdot \mathbf{z} = \|\Delta\mathbf{q}\| \, \|\mathbf{z}\| \cos\omega = \mathrm{Cov}(w, z),$$
in which $\omega$ is the angle between the vectors $\Delta\mathbf{q}$ and $\mathbf{z}$. If we interpret $\mathbf{z} \mapsto \mathbf{F}$ as an abstract, nondimensional force, then
$$\Delta\mathbf{q} \cdot \mathbf{F} = \|\Delta\mathbf{q}\| \, \|\mathbf{F}\| \cos\omega$$
expresses an abstract notion of work as the distance moved, $\|\Delta\mathbf{q}\|$, multiplied by the component of force acting along the path, $\|\mathbf{F}\| \cos\omega$.
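The sketch below (NumPy, invented vectors) computes the angle term and checks that the work expression matches both the dot product and the covariance form.

```python
import numpy as np

q  = np.array([0.5, 0.3, 0.2])
qp = np.array([0.4, 0.4, 0.2])
z  = np.array([1.0, 2.0, 3.0])

dq = qp - q
w = qp / q                                 # relative fitness
cos_omega = dq @ z / (np.linalg.norm(dq) * np.linalg.norm(z))

work = np.linalg.norm(dq) * np.linalg.norm(z) * cos_omega
assert np.isclose(work, dq @ z)
assert np.isclose(work, q @ ((w - 1.0) * z))   # Cov(w, z)
```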

8.2. Divergence between Sets

If we let $\mathbf{z} \mapsto \mathbf{a}$ describe the relative growth of the various frequencies, $a_i = \Delta q_i / q_i$, then the divergence between sets can be expressed as
$$\Delta_F\bar{a} = \Delta\mathbf{q} \cdot \mathbf{a} = \sum \frac{(\Delta q_i)^2}{q_i} = \left\| \frac{\Delta\mathbf{q}}{\sqrt{\mathbf{q}}} \right\|^2 = V_w = R^2,$$
in which $R$ is the radius of a sphere on which must lie all possible $\Delta\mathbf{q}/\sqrt{\mathbf{q}}$ changes with the same divergence between sets. If we choose to interpret $\mathbf{a}$ as an abstract notion of force, or fitness, acting on frequency changes, then $\Delta\mathbf{q} \cdot \mathbf{a}$ is the work, with magnitude $\|\Delta\mathbf{q}/\sqrt{\mathbf{q}}\|^2$, that separates the probability distribution $\mathbf{q}'$ from $\mathbf{q}$.

8.3. Small Changes, Paths and Logarithms

If we think of the separation between sets as a sequence of small changes along a path, with each small change as $\Delta\mathbf{q} \to \dot{\mathbf{q}}$, then
$$\mathbf{a} \to \frac{\dot{\mathbf{q}}}{\mathbf{q}} = \mathrm{d}\log\mathbf{q},$$
in which the overdot and the symbol “$\mathrm{d}$” equivalently describe the differential. Then the partial change by direct forces separates the probability distributions of the two sets by the path length
$$\Delta_F\bar{a} = \Delta\mathbf{q} \cdot \mathbf{a} \to \left\| \frac{\dot{\mathbf{q}}}{\sqrt{\mathbf{q}}} \right\|^2 = \mathcal{F},$$
in which $\mathcal{F}$ is an abstract, nondimensional expression of the Fisher information distance metric.

8.4. Unitary and Canonical Coordinates

Let $\mathbf{r} = \sqrt{\mathbf{q}}$. Then $\|\mathbf{r}\| = 1$, expressing the conservation of total probability as a vector of unit length, in which all possible probability combinations of $\mathbf{r}$ define the surface of a unit sphere. In Hamiltonian analyses of d’Alembert’s principle for the canonical Price equation, $\mathbf{r}$ is a canonical coordinate system [5].
The unitary coordinates, $\mathbf{r}$, also provide a direct description of the Fisher information path length as a distance between two probability distributions
$$4\|\dot{\mathbf{r}}\|^2 = 4\|\mathrm{d}\sqrt{\mathbf{q}}\|^2 = \left\| \frac{\dot{\mathbf{q}}}{\sqrt{\mathbf{q}}} \right\|^2 = \mathcal{F}.$$
The constraint on total probability makes square root coordinates the natural system in which to analyze Euclidean distances, which are the sums of squares. See Figure 1.
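A numeric sketch (NumPy) of the square root coordinate identity, using an invented distribution and a small change scaled so that the continuous approximation applies:

```python
import numpy as np

q = np.array([0.5, 0.3, 0.2])
dq = 1e-4 * np.array([1.0, -0.5, -0.5])    # small change, sums to zero

fisher = np.sum(dq**2 / q)                 # || qdot / sqrt(q) ||^2
r, rp = np.sqrt(q), np.sqrt(q + dq)        # unitary coordinates r = sqrt(q)
assert np.isclose(4 * np.sum((rp - r)**2), fisher, rtol=1e-3)
```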

8.5. Affine Invariance

Affine transformation shifts and stretches (multiplies) values, $\mathbf{z} \mapsto a + b\mathbf{z}$, for shift by $a$ and stretch by $b$. Here, addition or multiplication of a vector by a constant applies to each element of the vector.
In the abstract Price equation
$$\Delta\bar{z} = \Delta\mathbf{q} \cdot \mathbf{z} + \mathbf{q}' \cdot \Delta\mathbf{z},$$
affine transformation, $\mathbf{z} \mapsto a + b\mathbf{z}$, alters the terms as: $\Delta\bar{z} \mapsto b\Delta\bar{z}$, because the shift constant cancels in the differences; $\Delta\mathbf{q} \cdot \mathbf{z} \mapsto b\,\Delta\mathbf{q} \cdot \mathbf{z}$, because in $\sum (\Delta q_i)(a + b z_i)$, we have $\sum a \Delta q_i = 0$; and $\mathbf{q}' \cdot \Delta\mathbf{z} \mapsto b\,\mathbf{q}' \cdot \Delta\mathbf{z}$, because the shift constant cancels in the differences. The stretch factor $b$ multiplies each term and therefore cancels, leaving the Price equation invariant to affine transformation of the $\mathbf{z}$ values. Much of the universal structure expressed by the Price equation follows from this affine invariance.
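A short sketch (NumPy, invented values) confirming that each term of the Price equation simply scales by $b$ under the affine map $\mathbf{z} \mapsto a + b\mathbf{z}$:

```python
import numpy as np

q  = np.array([0.5, 0.3, 0.2])
qp = np.array([0.4, 0.4, 0.2])
z  = np.array([1.0, 2.0, 3.0])
zp = np.array([1.1, 2.0, 2.8])

def terms(z, zp):
    # direct and inertial terms of the Price equation
    return (qp - q) @ z, qp @ (zp - z)

a, b = 5.0, 2.0
t_direct, t_inertial = terms(z, zp)
s_direct, s_inertial = terms(a + b * z, a + b * zp)
assert np.isclose(s_direct, b * t_direct)
assert np.isclose(s_inertial, b * t_inertial)
```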

8.6. Probability vs. Frequency

In this article, I use probability and frequency interchangeably. Many subtle issues distinguish the concepts and applications associated with those alternative words. However, in this attempt to identify common mathematical structure between various subjects, those distinctions are not essential. See Jaynes [22] for discussion.

9. D’Alembert’s Principle

The remaining sections repeat the list of topics in the Key results section. Prior publications discussed these topics [1,2]. Here, I present additional details, roughly sketching how the structure provided by the abstract Price equation unifies various subjects.
We can rewrite the canonical Price equation for the conservation of total probability in Equation (3) as
$$\Delta\bar{a} = (\mathbf{F} + \mathbf{I}) \cdot \Delta\mathbf{q} = 0.$$
Here, $\Delta\mathbf{q}$ satisfies the constraint on total probability and any other specified constraints. The direct forces are $\mathbf{F} = \mathbf{a} = \Delta\mathbf{q}/\mathbf{q}$. The inertial forces are
$$\mathbf{I} = \frac{\Delta^2\mathbf{q}}{\Delta\mathbf{q}} - \frac{\Delta\mathbf{q}}{\mathbf{q}},$$
in which $\Delta^2\mathbf{q} = \Delta(\mathbf{q}' - \mathbf{q})$ is the second difference of $\mathbf{q}$, which is roughly like an acceleration.
D’Alembert’s principle is a generalization of Newton’s second law, force equals mass times acceleration [23]. In one dimension, Newton’s law is $F = -I$, for force, $F$, and inertial force, $I$, equal to the negative of mass times acceleration, so that $F + I = 0$. D’Alembert generalizes Newton’s law to a statement about motion in multiple dimensions such that, in conservative systems, the total work for a displacement, $\Delta\mathbf{q}$, and total forces, $\mathbf{F} + \mathbf{I}$, is zero. Work is the distance moved multiplied by the force acting in the direction of the movement.
The canonical Price equation of Equation (3) is an abstract, nondimensional generalization of d’Alembert for probability distributions that conserve total probability. The movement of the probability distribution between two populations, or sets, can be partitioned into the balancing work components of the direct forces, $\Delta\mathbf{q} \cdot \mathbf{F}$, and the inertial forces, $\Delta\mathbf{q} \cdot \mathbf{I}$. We can often specify the direct forces in a simple and clear way. The balancing inertial forces may then be analyzed by d’Alembert’s principle [23].
The movement of probability distributions in the canonical Price equation is always conservative, $\Delta\bar{a} = 0$, so that d’Alembert’s principle holds. When we transform to the general Price equation by $\mathbf{a} \mapsto \mathbf{z}$, then it may be that $\Delta\bar{z} \neq 0$ and the system is not conservative. In that case, we may consider constraints on $\Delta\bar{z}$ and how those constraints influence the possible paths of change for $\Delta\mathbf{q}$.
We can obtain a simple form of d’Alembert’s principle for probability distributions when displacements are small, $\Delta\mathbf{q} \to \dot{\mathbf{q}} \equiv \mathrm{d}\mathbf{q}$. Define the relative change operator as $\mathrm{d}\log$, the differential of the logarithm. Then $\mathbf{F} = \mathrm{d}\log\mathbf{q}$ and $\mathbf{I} = \mathrm{d}\log(\mathrm{d}\log\mathbf{q}) = \mathrm{d}\log^2\mathbf{q}$, yielding
$$(\mathbf{F} + \mathbf{I}) \cdot \mathrm{d}\mathbf{q} = (\mathrm{d}\log\mathbf{q} + \mathrm{d}\log^2\mathbf{q}) \cdot \mathrm{d}\mathbf{q} = 0,$$
with the direct force proportional to the relative change in frequencies, and the inertial force proportional to the relative nondimensional acceleration of the frequencies.
From Equation (5), the work of the direct forces, $\mathrm{d}\mathbf{q} \cdot \mathbf{F} = \dot{\mathbf{q}} \cdot \mathbf{F} = \mathcal{F}$, is the Fisher information path length that separates the probability distributions, $\mathbf{q}$ and $\mathbf{q}'$, associated with the two sets. The inertial forces cause a balancing loss, $\dot{\mathbf{q}} \cdot \mathbf{I} = -\mathcal{F}$, which describes the loss in Fisher information that arises from the recalculation of the relative forces in the new frame of reference, $\mathbf{q}'$. The balancing loss occurs because the average relative force, or fitness, is always zero in the current frame of reference, for example, $\mathbf{q} \cdot \mathbf{a} = \sum q_i (\dot{q}_i/q_i) = 0$. Any gain in relative fitness, $\dot{\mathbf{q}} \cdot \mathbf{F} = \mathcal{F}$, must be balanced by an equivalent loss in relative fitness, $\dot{\mathbf{q}} \cdot \mathbf{I} = -\mathcal{F}$.
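The discrete balance can be checked directly. The sketch below (NumPy; three invented distributions playing the roles of $\mathbf{q}$, $\mathbf{q}'$, and $\mathbf{q}''$) builds the direct and inertial forces and verifies that their total work along $\Delta\mathbf{q}$ is zero.

```python
import numpy as np

q   = np.array([0.50, 0.30, 0.20])
qp  = np.array([0.44, 0.34, 0.22])   # q'
qpp = np.array([0.40, 0.37, 0.23])   # q''

dq = qp - q
d2q = (qpp - qp) - (qp - q)          # second difference, roughly an acceleration

F = dq / q                           # direct forces, F = a
I = d2q / dq - dq / q                # inertial forces

assert np.isclose(q @ F, 0.0)        # average force is zero in the current frame
assert np.isclose((F + I) @ dq, 0.0) # d'Alembert: total work vanishes
```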
Here, the notions of force, inertia, and work are nondimensional mathematical abstractions that arise from the common underlying structure between the Price equation and the equations of physical mechanics. Similarly, the Fisher information measure here is an abstraction of the standard usage of the Fisher metric.
By equating force with relative frequency change, we intentionally blur the distinction between external causes and internal effects. By describing change as the difference between two abstract sets rather than change through time or space, we intentionally blur the scale of change. By separating frequencies, q , from property values, z , we intentionally distinguish universal aspects of structural change between sets from the particular interpretations of property values in each application. The blurring of cause, effect and scale, and the separation of frequency from value, lead to abstract mathematical expressions that reveal the common underlying structure between seemingly different subjects.

10. Information Theory

When changes are small, the direct force term of the canonical Price equation expresses classic measures of information theory (Equation (5)). In particular, $\dot{\mathbf{q}} \cdot \mathbf{a} = \dot{\mathbf{q}} \cdot \mathbf{F}$ is a symmetric expression of the Kullback–Leibler divergence, which measures the change in information associated with the separation between two probability distributions [6].
For small changes, the Kullback–Leibler divergence is equivalent to a nondimensional expression of the Fisher information metric. The Fisher metric provides the foundation for much of classic statistical theory and for the subject of information geometry [24,25]. The Fisher metric also arises as an equivalent description for dynamics in many classic problems in physics and other subjects [26].
What does it mean that the Price equation matches classic measures of information, which also arise in other subjects? That remains an open question. I suggest that the Price equation reveals the common mathematical structure among those seemingly different subjects. That mathematical structure arises from the conserved quantities, invariances, or constraints that impose a common pattern on dynamics. By this interpretation, dynamics is just a description of the changes between a sequence of sets.
The key aspect of the Price equation seems to be the separation of frequencies from property values. That separation shadows Shannon’s separation of the information in a message, expressed by frequencies of symbols in sets, from the meaning of a message, expressed by the properties associated with the message symbols. The Price equation takes that separation further by considering the abstract description of the separation between sets rather than the information in messages. Price [4] was clearly influenced by the information theory separation between frequency and property in his discussion of a generalized notion of natural selection that might unify disparate subjects.
The equivalence of the Price equation and information measures arises directly from the assumption of small changes. For larger changes, the relation between the Price equation and information remains an open problem. We might, for example, describe larger changes as
$$q_i' = q_i e^{m_i},$$
in which $m_i$ is a nondimensional expression for the total force that separates frequencies. From that expression,
$$m_i = \log\frac{q_i'}{q_i} = \log w_i,$$
in which $w_i$ is a form of relative fitness, and $m_i$ is called the Malthusian parameter in biology. Then, similarly to Equation (5), we have
$$\Delta\mathbf{q} \cdot \mathbf{m} = \mathcal{D}(\mathbf{q}' \,\|\, \mathbf{q}) + \mathcal{D}(\mathbf{q} \,\|\, \mathbf{q}'),$$
which is known as the Jeffreys divergence. In this case, with $\Delta\mathbf{q}$ not necessarily small, we no longer have a direct equivalence to Fisher information.
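This identity is exact for changes of any size. A small sketch (NumPy, invented distributions) confirms it:

```python
import numpy as np

q  = np.array([0.5, 0.3, 0.2])
qp = np.array([0.2, 0.5, 0.3])             # a large change

m = np.log(qp / q)                          # Malthusian parameter, log w
kl = lambda p1, p2: np.sum(p1 * np.log(p1 / p2))   # Kullback-Leibler divergence

jeffreys = kl(qp, q) + kl(q, qp)
assert np.isclose((qp - q) @ m, jeffreys)
```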
Information geometry, which analyzes continuous paths along contours of conserved total probability, describes the relations between Fisher information and this discrete divergence [27]. The idea is that big changes, $\Delta\mathbf{q}$, become a series of small changes, $\dot{\mathbf{q}}$, along a continuous path that connects the endpoints, $\mathbf{q}$ to $\mathbf{q}'$. Each small step along the path can be described as a Fisher information path length, and the sum of those small lengths equals the Jeffreys divergence.
Earlier work in population genetics theory derived the total change caused by natural selection as $\sum \dot{q}_i^2/q_i$ (reviewed by [28,29,30]). That initial work did not emphasize the equivalence of the change by natural selection and Fisher information [31]. Here, the Fisher metric arises most simply as the continuous limiting form of the canonical Price equation description for the distance between two sets.

11. Extreme Action

We can write Equation (6) as
$$\mathcal{L} = \dot{\mathbf{q}} \cdot \boldsymbol{\phi} - \frac{1}{2}\kappa(\mathcal{F} - C^2) - \xi(\dot{\mathbf{q}} \cdot \mathbf{1} - 0).$$
By the principle of extreme action, the dynamics, $\dot{\mathbf{q}}$, maximize or minimize (extremize) the action, $\dot{\mathbf{q}} \cdot \boldsymbol{\phi}$, subject to the constraints. In this case, maximizing the action simply describes the fact that the movement, $\dot{\mathbf{q}}$, tends to be in the direction of the force vector, $\boldsymbol{\phi}$, subject to any constraints on motion.
The Lagrangian, $\mathcal{L}$, combines the action and the constraints into one expression. To illustrate the principle of extreme action with the Lagrangian above, we maximize the action subject to the constraints by solving $\partial\mathcal{L}/\partial\dot{q}_i = 0$, while also solving for $\kappa$ and $\xi$ by requiring that $\mathcal{F} = C^2$ and $\dot{\mathbf{q}} \cdot \mathbf{1} = 0$. The solution is
$$\dot{q}_i = \kappa q_i (\phi_i - \bar{\phi}),$$
in which $\phi_i - \bar{\phi}$ is the excess force relative to the average, and $\xi = \bar{\phi}$ follows from satisfying the constraint on total probability under the assumption of small changes. The constant, $\kappa = C/\sigma_\phi$, satisfies the constraint on total path length, $\mathcal{F} = C^2$, in which $\sigma_\phi$ is the standard deviation of the forces. We can rewrite the solution as
$$m_i = \frac{\dot{q}_i}{q_i} = \kappa(\phi_i - \bar{\phi}).$$
This expression shows that we can determine the frequency changes, $\dot{\mathbf{q}}$, from the given forces, $\boldsymbol{\phi}$, or we can determine the forces from the given frequency changes. The mathematics is neutral about what is given and what is derived.
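The sketch below (NumPy; an invented force vector and path length constant) constructs this extreme action solution and checks that it satisfies both constraints.

```python
import numpy as np

q   = np.array([0.5, 0.3, 0.2])
phi = np.array([0.2, -0.1, 0.3])     # invented force vector
C   = 0.01                           # invented path length constant

phi_bar = q @ phi
sigma_phi = np.sqrt(q @ (phi - phi_bar)**2)
kappa = C / sigma_phi

qdot = kappa * q * (phi - phi_bar)   # the extreme action solution

assert np.isclose(qdot.sum(), 0.0)               # total probability conserved
assert np.isclose(np.sum(qdot**2 / q), C**2)     # Fisher path length equals C^2
```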
In this case, $\boldsymbol{\phi}$ is an arbitrary force vector. Using $\mathbf{z} = \boldsymbol{\phi}$ in the general Price equation does not necessarily yield $\Delta\bar{z} = \Delta\bar{\phi} = 0$. A nonconservative system does not satisfy d’Alembert’s principle. Often, we can specify certain invariances associated with $\Delta\bar{z}$, and use those invariances as additional forces of constraint on $\dot{\mathbf{q}}$ in the Lagrangian. The additional forces of constraint typically alter the dynamics and the potential equilibria, as shown in the following section.
Across many disciplines, problems can often be solved by this variational method of writing a Lagrangian and then extremizing the action subject to the constraints [23]. The difficulty is determining the correct Lagrangian for a particular problem. No general method specifies the correct form.
In this example, the Price equation essentially gave us the form of the action and the constraints. Here, the action is the frequency displacement multiplied by the arbitrary force vector, $\dot{\mathbf{q}} \cdot \boldsymbol{\phi}$, which is analogous to the physical work done in the movement of the probability distribution. The constraints follow from the conservation of total probability and the description of total distance moved as Fisher information, $\mathcal{F}$, which arises from the canonical Price equation.

12. Entropy and Thermodynamics

The tendency for systems to increase in entropy provides the foundation for much of thermodynamics [32]. Entropy can be studied abstractly by the information entropy quantity, $\mathcal{E} = -\mathbf{q} \cdot \log\mathbf{q}$. For small changes in frequencies, the change in entropy is $\mathrm{d}\mathcal{E} = -\dot{\mathbf{q}} \cdot \log\mathbf{q}$.
System dynamics often maximize the production of entropy [33]. Maximum entropy production suggests that the dynamics may be analyzed by a Lagrangian in which the action to be maximized is the production of entropy, $-\dot{\mathbf{q}} \cdot \log\mathbf{q}$.
In the basic Lagrangian for dynamics given by Equation (29), the action is the abstract notion of physical work, $\dot{\mathbf{q}} \cdot \boldsymbol{\phi}$, the displacement, $\dot{\mathbf{q}}$, multiplied by the force, $\boldsymbol{\phi}$.
The force vector, $\boldsymbol{\phi}$, can be related to frequency change in a growth process, $q_i' = q_i e^{\phi_i}$, with $\phi_i = m_i = \log(q_i'/q_i)$, as in Equation (27). The work becomes
$$\dot{\mathbf{q}} \cdot \boldsymbol{\phi} = \dot{\mathbf{q}} \cdot \log\mathbf{q}' - \dot{\mathbf{q}} \cdot \log\mathbf{q},$$
in which the second term on the right is the production of entropy.
If the system conserves the change in some quantity, $\Delta\bar{z} = B$, then that invariant change imposes a constraint on the possible change in the probability distribution, $\dot{\mathbf{q}} = \mathbf{q}' - \mathbf{q}$. Suppose that the value $z_i$ is a property of a type, $i$, such that each type does not change its property value between sets, $\Delta z_i = z_i' - z_i = 0$. Then, from the general Price equation, $\Delta\bar{z} = B$ implies $\dot{\mathbf{q}} \cdot \mathbf{z} = B$. This constraint acts as a force that limits the possible probability distributions, $\mathbf{q}'$, given the initial distribution, $\mathbf{q}$.
We can express the constraint $\dot{\mathbf{q}} \cdot \mathbf{z} = B$ on $\mathbf{z}$ in terms of a constraint on $\mathbf{q}'$ as $\log\mathbf{q}' = \log k - \lambda\mathbf{z}$, for constant, $k$. Then the constraint $\dot{\mathbf{q}} \cdot \mathbf{z}$ has an equivalent expression in terms of $\mathbf{q}'$ as
$$\dot{\mathbf{q}} \cdot \log\mathbf{q}' = -\lambda(\dot{\mathbf{q}} \cdot \mathbf{z}) = -\lambda B.$$
We can now split the total force, $\boldsymbol{\phi}$, as in Equation (31) and, considering $\dot{\mathbf{q}} \cdot \log\mathbf{q}'$ as a force of constraint, we can rewrite the Lagrangian of Equation (29) as
$$\mathcal{L} = -\dot{\mathbf{q}} \cdot \log\mathbf{q} - \frac{1}{2}\kappa(\mathcal{F} - C^2) - \xi(\dot{\mathbf{q}} \cdot \mathbf{1} - 0) - \lambda(\dot{\mathbf{q}} \cdot \mathbf{z} - B).$$
The action term, $\mathrm{d}\mathcal{E} = -\dot{\mathbf{q}} \cdot \log\mathbf{q}$, is the increase in entropy, $\mathcal{E} = -\mathbf{q} \cdot \log\mathbf{q}$. Maximizing the action maximizes the production of entropy.
The maximization by solving $\partial\mathcal{L}/\partial\dot{q}_i = 0$ subject to the constraints yields a solution with the same form as Equation (30). The force term is replaced by a partition of forces into components that match the direct entropy increase and the constraint on $\mathbf{z}$ as
$$\phi_i - \bar{\phi} = \mathcal{E}_i^* - \lambda z_i^*,$$
in which the star superscripts denote the deviations from average values, $\mathcal{E}_i^* = -\log q_i - \mathcal{E}$ and $z_i^* = z_i - \bar{z}$, thus
$$\dot{q}_i = \kappa q_i (\mathcal{E}_i^* - \lambda z_i^*).$$
The value of $\kappa$ is $C/\sigma_\phi$, as in the previous section. In this case, we use for $\boldsymbol{\phi}$ the partition of the forces on the right side of Equation (34) into the direct entropy and the constraining forces.
The constraint $\dot{\mathbf{q}} \cdot \mathbf{z} = B$ implies
$$\lambda = \beta_{\mathcal{E}z} - \frac{B}{\kappa\sigma_z^2}.$$
The term $\beta_{\mathcal{E}z}$ is the regression of $-\log\mathbf{q}$ on $\mathbf{z}$, which acts to transform the scale for the forces of constraint imposed by $\mathbf{z}$ to be on a common scale with the direct forces of entropy, $-\log\mathbf{q}$. The term $B/\kappa\sigma_z^2$ describes the required force of constraint on frequency changes so that the new frequencies move $\bar{z}$ by the amount $\dot{\mathbf{q}} \cdot \mathbf{z} = B$. The term $\sigma_z^2$ is the variance in $\mathbf{z}$.
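A numeric sketch of this constrained solution (NumPy; the distribution, values, and constants $B$ and $C$ are invented, and the closed form for $\kappa$ below is my own rearrangement of the coupled conditions $\kappa = C/\sigma_\phi$ and the expression for $\lambda$, obtained by eliminating $\lambda$ through the regression residual):

```python
import numpy as np

q = np.array([0.5, 0.3, 0.2])
z = np.array([1.0, 2.0, 4.0])
B, C = 0.001, 0.05                    # invented constraint and path length

E = -q @ np.log(q)                    # entropy
Estar = -np.log(q) - E                # deviations of -log q from its mean
zstar = z - q @ z
var_z = q @ zstar**2
beta = q @ (Estar * zstar) / var_z    # regression of -log q on z

u = Estar - beta * zstar              # regression residual, q-orthogonal to z
kappa = np.sqrt((C**2 - B**2 / var_z) / (q @ u**2))
lam = beta - B / (kappa * var_z)

qdot = kappa * q * (Estar - lam * zstar)
assert np.isclose(qdot.sum(), 0.0)               # probability conserved
assert np.isclose(qdot @ z, B)                   # constrained change in z-bar
assert np.isclose(np.sum(qdot**2 / q), C**2)     # path length constraint
```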
In these examples of dynamics derived from Lagrangians, the action is the partial change term of the direct forces derived from the universal properties of the Price equation. Thus, the maximum entropy production in this case can be interpreted as a universal partial maximum entropy production principle, in the Price equation sense of the partial change associated with the direct forces, holding the inertial frame constant [2].
In many applications, causal analysis reduces to this pattern of partial change by direct focal causes, holding other causes constant. The particular partition into direct, constraining, and inertial forces is a choice that we make to isolate or highlight particular causes [23].

13. Entropy and Statistical Mechanics

When entropy reaches its maximum value subject to the forces of constraint, equilibrium occurs at $\mathbf{q}' = \mathbf{q}$. From the force of constraint given in the previous section, $\log\mathbf{q}' = \log k - \lambda\mathbf{z}$, the equilibrium can be written as
$$q = k e^{-\lambda z},$$
in which I have dropped the $i$ subscript. This Gibbs–Boltzmann exponential distribution is the principal result of statistical mechanics [34]. Here, we obtained the exponential distribution through a Price equation abstraction that led to maximum entropy production.
This result suggests that equilibrium probability distributions are simple expressions of maximum entropy subject to the forces of constraint. Jaynes [35,36] developed this maximum entropy perspective in his quest to overthrow Boltzmann’s canonical ensemble for statistical mechanics. The canonical ensemble describes macroscopic probability patterns by aggregation over a large number of equivalent microscopic particles.
The theory of statistical mechanics, based on the microcanonical ensemble, yields several commonly observed probability distributions. However, Jaynes [22] emphasized that the same probability distributions commonly arise in economics, biology, and many other disciplines. In those nonphysical disciplines, there is no meaningful canonical ensemble of identical microscopic particles. According to Jaynes, there must be another more general cause of the common probability patterns. The maximization of entropy is one possibility [37].
Jaynes emphasized that increase in entropy is equivalent to loss of information. The inherent randomizing tendency in all systems causes loss of information. Maximum entropy is simply a consequence of that loss of information. Because systems lose all information except the forces of constraint, common probability distributions simply reflect those underlying forces of constraint.
The Gibbs–Boltzmann exponential distribution in Equation (36) expresses the simple force of constraint on the mean of some value, $\mathbf{z}$, associated with the system. Different constraints lead to different distributions. For example, the constraint $\mathbf{q} \cdot (\mathbf{z} - \mu)^2 = \sigma^2$ yields a Gaussian distribution for mean $\mu$ and variance $\sigma^2$.
Jaynes invoked maximum entropy as a consequence of the thermodynamic principle that systems increase in entropy. Here, I developed the maximization of entropy from the abstract Price equation expression for frequency dynamics and the extreme action principle.
Extreme action simply expresses the notion that changing frequencies align with the direction of the force vector. That geometric alignment is equivalent to the maximization of frequency change multiplied by force, an abstract notion of physical work.
Jaynes argued that the fundamental notion of information sets the underlying structural unity of thermodynamics, probability, and many aspects of statistical inference. I argue for underlying unity based on abstract properties of invariance and geometry [2]. Those properties of invariance and geometry give a common mathematical structure to any problem that can be considered abstractly by the Price equation’s description of the change between two sets. The next section reviews and extends these notions of invariance and common mathematical structure.

14. Invariance and Sufficiency

The Price equation expresses constraints on the change in probability distributions between sets, $\Delta\mathbf{q}$. For example, if $\bar{z}$ is a constant, conserved value, then the changes, $\Delta\mathbf{q}$, must satisfy that constraint. We may say that the conserved value of $\bar{z}$ imposes a force of constraint on the frequency changes. This section relates the Price equation’s abstract notions of change and constraint to Jaynes’ arguments.
Jaynes emphasized that systems tend to increase in entropy or, equivalently, to lose information. Entropy increase is a force that drives a system to an equilibrium at which entropy is maximized subject to any forces of constraint.
Because entropy increase is essentially universal, it is sufficient to know the particular forces of constraint to determine the most likely form of a probability distribution. Sufficiency expresses the forces of constraint in terms of conserved quantities.
Put another way, sufficiency partitions all possible populations into subsets. Each subset contains all of those populations with the same invariant conserved quantity. For example, if the constraint is a conserved value of $\bar{z}$, then all populations with the same invariant value of $\bar{z}$ fall into the same subset.
To analyze the force arising from constraint on $\bar{z}$ and the most likely form of the associated probability distribution, it is sufficient to know that the dynamics of populations driven by entropy increase must remain within the subset with invariant values defined by the constraints of the conserved quantities.
Jaynesian thermodynamics follows from the general force of information loss, in which the constraints sufficiently describe the only information that remains after maximum information loss.
The Price equation goes beyond Jaynes in revealing the underlying abstract mathematical structure that unifies seemingly different subjects. In all of the disciplines we have discussed, the key results for each discipline arise from the basic description of change between sets constrained by invariant conditions that we place on frequency, $\mathbf{q}$, and value, $\mathbf{z}$. In addition, the Price equation expresses the intrinsic invariance to affine transformation $\mathbf{z} \mapsto a + b\mathbf{z}$.
From the perspective of the abstract Price equation, notions of information and entropy increase arise as secondary descriptions of the underlying primary geometric aspects of change between sets subject to intrinsic invariances and to invariant conditions imposed as constraints. Those aspects of geometry and invariance set the shared foundations for many seemingly different disciplines.

15. Inference: Data as a Force

Jaynes considered information as a force that changes probability distributions. Entropy increase is the force that causes loss of information, driving probability distributions to maximum entropy subject to constraint. For inference, data provide an informational force that drives the Bayesian dynamics of probability distributions to provide estimates of parameter values. The parameters are typically the conserved, constrained quantities that are sufficient to define maximum entropy probability distributions.
How does the Jaynesian interpretation of data as an informational force in statistical inference follow from the underlying Price equation abstraction? Consider the estimation of a parameter, $\theta$, such as the mean of an exponential probability distribution. In the Bayesian framework, we describe the current information that we have about $\theta$ by the probability distribution, $q_\theta$.
The value of $q_\theta$ represents the relative likelihood that the true value of the parameter is $\theta$. The probability distribution over alternative values of $\theta$ represents our current knowledge, or information, about $\theta$. To relate this to the Price framework, note that we are now using $\theta$ as the subscript for types instead of $i$. The vector $\mathbf{q}$ now implicitly describes the set of values for $q_\theta$.
Our problem concerns how new information about $\theta$ changes the probability values to $q_\theta'$. The new probability values summarize the combination of our prior information in $q_\theta$ and the force of the new information in the data. This problem is the Bayesian dynamics of combining a prior distribution, $q_\theta$, with new data to generate a posterior distribution, $q_\theta'$, with $\Delta q_\theta = q_\theta' - q_\theta$.
We have, from our universal definitions for change given earlier, the relation $q_\theta' = q_\theta w_\theta$, in which we called $\mathbf{w} = \mathbf{q}'/\mathbf{q}$ the relative fitness, describing the force of change on probabilities. Here, the force arises from the way in which new data alter the net likelihood associated with a value of $\theta$.
Following Bayesian tradition, denote that force of the data as $\tilde{L}(D|\theta)$, the likelihood of observing the data, $D$, given a value for the parameter, $\theta$. To interpret a force as equivalent to relative fitness, the average value of the force must be one to satisfy the conservation of total probability. Thus, define
$$w_\theta = L_\theta = \frac{\tilde{L}(D|\theta)}{\sum_\theta q_\theta \tilde{L}(D|\theta)}.$$
We can now write the classic expression for Bayesian updating of a prior, $q_\theta$, driven by the force of new data, $L_\theta = L(D|\theta)$, to yield the posterior, $q_\theta'$, as
$$q_\theta' = q_\theta L_\theta.$$
By recognizing $\mathbf{L}$ as a force vector acting on frequency change, we can use all of the general results derived from the Price equation. For example, the Malthusian parameter, $\mathbf{m}$, relates to the log-likelihood as
$$\mathbf{m} = \log\frac{\mathbf{q}'}{\mathbf{q}} = \Delta\log\mathbf{q} = \log\mathbf{L}.$$
This equivalence for log-likelihood relates frequency change to the Kullback–Leibler expressions for the change in information
$$\Delta\mathbf{q} \cdot \log\mathbf{L} = \mathcal{D}(\mathbf{q}' \,\|\, \mathbf{q}) + \mathcal{D}(\mathbf{q} \,\|\, \mathbf{q}'),$$
which we may think of as the gain of information from the force of the data. Perhaps the most general expression of change describes the relative separation within the unitary square root coordinates as the Euclidean length
$$\Delta\mathbf{q} \cdot \mathbf{L} = \left\| \frac{\Delta\mathbf{q}}{\sqrt{\mathbf{q}}} \right\|^2,$$
which is an abstract, nondimensional expression for the work done by the displacement of the frequencies, $\Delta\mathbf{q}$, in relation to the force of the data, $\mathbf{L}$.
I defined $\mathbf{L}$ as a normalized form of the likelihood, $\tilde{\mathbf{L}}$, such that the average value is one, $\bar{L} = \mathbf{q} \cdot \mathbf{L} = 1$. Thus, we have a canonical form of the Price equation for normalized likelihood
$$\Delta\bar{L} = \Delta\mathbf{q} \cdot \mathbf{L} + \mathbf{q}' \cdot \Delta\mathbf{L} = 0.$$
The second term shows how the inertial forces alter the frame of reference that determines the normalization of the likelihoods, $\tilde{\mathbf{L}} \mapsto \mathbf{L}$. Typically, as information is gained from data, the normalizing force of the frame of reference reduces the force of the same data in subsequent updates.
All of this simply shows that Bayesian updating describes the change in probability distributions between two sets. That change between sets follows the universal principles given by the abstract Price equation.
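To make the correspondence concrete, here is a minimal sketch (NumPy; an invented coin-bias estimation problem) that computes Bayesian updating as a Price equation change, with the normalized likelihood playing the role of fitness. Treating the posterior’s own normalized force as one, since no further data arrive, is an interpretive assumption of the sketch.

```python
import numpy as np

theta = np.array([0.2, 0.5, 0.8])     # candidate parameter values
q = np.array([1/3, 1/3, 1/3])         # prior, q_theta

# Invented data: 7 successes in 10 Bernoulli trials.
Lt = theta**7 * (1 - theta)**3         # raw likelihood, L-tilde
L = Lt / (q @ Lt)                      # normalized likelihood, mean force one

qp = q * L                             # posterior, q' = q L
assert np.isclose(qp.sum(), 1.0)

dq = qp - q
Lp = np.ones_like(L)                   # force with no new data in the q' frame
assert np.isclose(dq @ L + qp @ (Lp - L), 0.0)    # Delta L-bar = 0
assert np.isclose(dq @ L, np.sum(dq**2 / q))      # work = squared length
```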
Prior work noted the analogy between natural selection and Bayesian updating [38,39,40]. Here, I emphasized a more general perspective that includes natural selection and Bayesian updating as examples of the common invariances and geometry that unify many topics.

16. Invariance and Probability

In the earlier section on affine invariance, I showed that the Price equation is invariant to affine transformations $\mathbf{z} \mapsto a + b\mathbf{z}$. This section suggests that the Price equation’s intrinsic affine invariance explains universal aspects of probability distributions in a more general and fundamental manner than Jaynes’ focus on entropy and information.
The general form of probability distributions in Equation (36) followed from the constraint $\log\mathbf{q}' = \log k - \lambda\mathbf{z}$. Affine transformation does not change the force imposed by that constraint, because
$$\log k - \lambda z \mapsto \log k - \lambda(a + bz) = \log k_a - \lambda_b z,$$
in which $k_a = k e^{-a\lambda}$ and $\lambda_b = b\lambda$. Because the constants, $k_a$ and $\lambda_b$, adjust to satisfy underlying constraints, the shift and stretch constants $a$ and $b$ do not alter the constraints or the final form of the probability distribution.
Thus, the probability distribution in Equation (36), arising from analysis of extreme action applied to a Lagrangian, is affine invariant with respect to $\mathbf{z}$. We can make a more fundamental argument by deriving the form of the probability distribution solely as a consequence of the intrinsic affine invariance of the Price equation.
In particular, shift invariance by itself explains why the probability distribution in Equation (36) has an exponential form [41]. If we assume that the functional form for the probability distribution, $q_i = f(z_i)$, is invariant to a constant shift, $a + z_i$, then, dropping the $i$ subscripts and using continuous notation, by the conservation of total probability
$$k_0 \int f(z)\,\mathrm{d}z = k_a \int f(a + z)\,\mathrm{d}z = 1$$
holds for any magnitude of the shift, $a$, in which the proportionality constant, $k_a$, changes with the magnitude of the shift, $a$, independently of the value of $z$, in order to satisfy the conservation of total probability.
Because $k_a$ is independent of $z$, the condition for the conservation of total probability is
$$k_a f(a + z) = k_0 f(z).$$
The invariance holds for any shift, $a$, so it must hold for an infinitesimal shift, $a = \epsilon$. We can write the Taylor series expansion for an infinitesimal shift as
$$f(\epsilon + z) = f(z) + \epsilon f'(z) = \kappa_\epsilon f(z),$$
with $\kappa_\epsilon = 1 - \lambda\epsilon$, because $\epsilon$ is small and independent of $z$, and $\kappa_0 = 1$. Thus,
$$f'(z) = -\lambda f(z)$$
is a differential equation with solution
$$q = f(z) = k e^{-\lambda z},$$
in which $k$ is determined by the conservation of total probability, and $\lambda$ is determined by $\bar{z}$. When $z$ ranges over positive values, $z > 0$, then $k = \lambda = 1/\bar{z}$. Invariance to stretch transformation by $b$ follows from the adjustment, $\lambda_b$, given above.
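A small sketch (NumPy, with an invented rate and shift) verifies that the exponential form satisfies the shift invariance condition $k_a f(a + z) = k_0 f(z)$, with $k_a$ adjusting to the shift:

```python
import numpy as np

lam = 1.5
f = lambda z: np.exp(-lam * z)          # f(z) = e^(-lambda z)

z = np.linspace(0.0, 5.0, 11)
a = 0.7                                  # an arbitrary shift
k0 = lam                                 # normalization on z > 0
ka = lam * np.exp(lam * a)               # adjusted constant for the shifted form

assert np.allclose(ka * f(a + z), k0 * f(z))
```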
Affine invariance of the probability distribution with respect to $z$ implies additional structure. In particular, we can write $z = e^{\beta w}$, in which a shift $w(z) \mapsto \alpha + w(z)$ multiplies $z$ by a constant, which does not change the form of the probability distribution. Thus, in terms of the shift-invariant scale, $w(z)$, we obtain the canonical expression that describes nearly all commonly observed continuous probability distributions [41,42]
$$q\,\mathrm{d}\psi = k e^{-\lambda e^{\beta w}}\,\mathrm{d}\psi,$$
when we add a few additional details about the measure, $\mathrm{d}\psi_z$, and the commonly observed base scales, $w(z)$. Understanding the abstract form of common probability patterns clarifies the study of many problems [42,43,44] (see Appendix A).

17. Meaning

One cannot explain mathematical form by appeal to extrinsic physical notions. The structure of mathematical results does not follow from energy or heat or natural selection. Instead, those extrinsic phenomena arise as consistent interpretations for the structure of the mathematics.
The mathematical structure can only be analyzed, explained and understood by reference to mathematical properties. For example, we may invoke invariance, conserved values, and geometry to understand why certain mathematical forms arise in the abstract Price equation description for changes in frequency, and why those same forms recur in many different applications. We may not invoke entropy or information as a cause, only as a description.
My goal has been to reveal the common mathematical structure that unifies seemingly disparate results from different subjects. The common mathematical structure arises primarily through simple invariances and their expression in geometry.

Funding

This research was funded by the Donald Bren Foundation.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Value of Synthesis by Invariance

I have been asked to comment on how this synthesis of concepts may enhance scientific progress. The primary modes of progress follow two lines.
First, one can more easily understand the vast literature that makes connections between disciplines. For example, information is often discussed as if it were a primary concept that clarifies the meaning of biological or physical principles. By contrast, in this synthesis based on the fundamental invariances expressed by the abstract Price equation, various information and entropy forms arise directly. This synthesis provides value if one feels curiosity about the similarity of mathematical forms or wishes to understand the literature that discusses such similarities.
Second, new mathematical results and new insights into empirical phenomena may follow. I believe this to be true. However, the argument for novel results and insights is nearly impossible to make. For any particular result or insight, it is always possible to claim that the same could have been achieved without the broader framing. Ascribing the origins of insight to a general framework is almost always subjective.
The strongest argument I can make arises from two personal anecdotes. It is only in these cases that I understand the origin of insight in relation to the broad use of invariance as a unifying perspective.

Appendix A.1. Probability, Invariance, and Maximum Entropy

The first anecdote shows how observations in biology motivated my search for a broader synthesis of concepts between disciplines. That synthesis, in terms of invariance, helped me to understand the observed biological patterns. It also led to a unified understanding of the commonly observed probability distributions in terms of the invariances that define scale, and an understanding of the relations between the equations of thermodynamics, natural selection in biology, and probability patterns.
In my work on cancer and other aspects of age-related disease [42,45], I noted that a wide variety of seemingly different dynamical models of disease progression tended to converge to a few similar forms of probability distributions for the age of disease onset. At first, I used Jaynes’ maximum entropy approach [22,35,36] to try to understand the relations between apparently complex processes and the resulting simple patterns [37]. That worked, in the sense that one could find constraints that led to maximum entropy distributions that matched the data.
The problem with maximum entropy is that the constraints simply describe the patterns in the data, without giving one a sense of how patterns arise and what relates different patterns to each other. Instead, one ends up with a catalog of the commonly observed probability distributions and the matching constraints for each distribution.
Those difficulties led me to study the forms of commonly observed probability distributions. I felt that if I could understand probability patterns more deeply, I would be in a better position to understand the biological problems that interested me. And, along the way, I would perhaps better understand more general aspects of probability patterns.
Over many years, I developed a unified understanding of probability patterns in terms of invariance and scale [41,46]. I used that improved understanding of probability to enhance my analyses of age-related diseases [42] and the size distributions of trees in forests [43].
That work on invariance and scale in probability left open the puzzle of how that perspective related to Jaynes' classic maximum entropy approach. Although my invariance approach to probability patterns could stand separately from maximum entropy, Jaynes' approach was widely used and formed a standard against which my new work would reasonably be compared. Also, I developed my ideas by starting from maximum entropy, and Jaynes himself strongly hinted that invariance might be the way forward from where he left the subject [22].
How could I connect my pure invariance approach to Jaynes’ work on maximum entropy, which was developed explicitly as an extension to classical thermodynamics and statistical mechanics?
My work on probability seemingly has little relation to the Price equation. However, in my other studies, I had been using the Price equation as a tool to understand natural selection in biology [1,7,47]. Over time, I began to see the broader connections between the Price equation and information theory [31,48,49].
Through those studies of natural selection and the Price equation, I gained understanding of the dynamics of information. I was then able to see the connections between some of the classic results of thermodynamic change in entropy and the equations of natural selection.
With that broader understanding of entropy and information dynamics, I could then synthesize Jaynes’ maximum entropy approach to probability with my approach based on invariance and scale [2]. Some fundamental aspects of physical mechanics also began to fit within the unified structure [5]. All of that abstract work fed back into my analyses and understanding of age-related diseases, the sizes of trees, and the distribution of enzyme rates [42,43].
For any of the particular insights into empirical problems or any of the particular mathematical results, it would have been possible to achieve the same outcome without a broader perspective or an attempt to unify the disciplines. In practice, however, the broader perspective and the unification across disciplines played a primary role.

Appendix A.2. The Universal Law of Generalization in Psychology

The second anecdote shows how the broad framework led to a new insight for a particular discipline. In this case, I happened to read an article in Science about an intriguing pattern in psychology [50].
The probability that an organism perceives two stimuli as similar typically decays exponentially with the separation between the stimuli. The exponential decay in perceptual similarity is often referred to as the universal law of generalization [51,52].
Both theory and empirical analysis depend on the definition of the perceptual scale. For example, how does one translate the perceived differences between two circles with different properties into a quantitative measurement scale?
There are many different suggestions in the literature for how to define a perceptual scale. Each of those suggestions develops very specific notions of measurement based, for example, on information theory, Kolmogorov complexity theory, or multidimensional scaling descriptions derived from observations [50,51,52].
I showed that the inevitable shift invariance of any reasonable perceptual scale determines the exponential form for the universal law of generalization in perception [44]. All of the other details of information, complexity, and empirical scaling are superfluous with respect to understanding why the universal law of generalization has the exponential form.
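The core of that argument fits in a few lines. The following is a sketch in my notation here, not taken verbatim from [44]: if the perceptual scale is meaningful only up to a constant shift, then the similarity ratio between stimuli separated by x and by x + a cannot depend on the position x, and the exponential form follows.

```latex
S(x + a) = k(a)\, S(x) \quad \text{for all shifts } a
\;\;\Longrightarrow\;\;
\frac{\mathrm{d}S}{\mathrm{d}x} = k'(0)\, S(x)
\;\;\Longrightarrow\;\;
S(x) = S(0)\, e^{-\lambda x}, \qquad \lambda = -k'(0).
```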
Certainly, the insight that the inevitable shift invariance of scale is a sufficient explanation does not require a broad conceptual framework derived from the Price equation. However, I was able to see that solution immediately only because I had for years been working toward a unified understanding of information, scale, and invariance. Many others had worked on this central puzzle in psychology without seeing the underlying simplicity.

Appendix B. Mathematical Expressions from Various Disciplines

Table A1. Mathematical forms that highlight similarities between different disciplines, part 1.

Mathematical Form | Comments | Equation

Price equation:
$\Delta \bar{z} = \Delta q \cdot z + q' \cdot \Delta z$ | Most general form; separates frequency, $q$, from property value, $z$; partitions frequency change and property-value change | (1)
$\Delta \bar{a} = \Delta q \cdot a + q' \cdot \Delta a = 0$ | Canonical form; emphasizes conservation of total frequency; recover the general form by the coordinate change $a \mapsto z$ | (3)

Mathematical relations:
$\Delta q \cdot z = \|\Delta q\| \, \|z\| \cos\omega$ | Geometric equivalence for the dot product; $a \mapsto F$ yields an abstract expression of physical work (see below) | (19)
$\Delta q \cdot z = \mathrm{Cov}(w, z)$ | Equivalent statistical form | (13)
$q' \cdot \Delta z = \mathrm{E}(w \, \Delta z)$ | Equivalent statistical form | (15)
$\Delta q \cdot a = \|\Delta q / \sqrt{q}\|^2$ | Geometric expression for the total distance between sets in terms of frequency; discrete generalization of Fisher information, $\mathcal{F}$ | (20)

Physical mechanics:
$\Delta \bar{a} = (F + I) \cdot \Delta q = 0$ | Abstraction of d'Alembert's principle for physical work in conservative systems; work of the direct forces, $\Delta q \cdot F = \Delta q \cdot a$, balances work of the inertial forces, $\Delta q \cdot I = q' \cdot \Delta a$; generalize by the coordinate transformation $a \mapsto z$; cases in which $\Delta \bar{z} \neq 0$ describe nonconservative systems | (23)
$\Delta q \cdot F = \|\Delta q\| \, \|F\| \cos\omega$ | Abstract form of work as distance moved, $\|\Delta q\|$, multiplied by the component of force along the path, $\|F\| \cos\omega$; for given lengths of the force and frequency-change vectors, the frequency changes that minimize the angle between force and frequency change maximize the work | (19)

Information theory:
$\Delta q \cdot m = \mathcal{J}(q', q)$ | Jeffreys divergence, $\mathcal{J} = \mathcal{D}(q' \,\|\, q) + \mathcal{D}(q \,\|\, q')$, for $z \mapsto m = \log(q'/q)$ | (28)
$\Delta q \cdot m \rightarrow \dot{q} \cdot a$ | For small changes, $m \rightarrow a$ as $\Delta q \rightarrow \dot{q}$ | (5)
$\dot{q} \cdot a = \|\dot{q} / \sqrt{q}\|^2 = \mathcal{F}$ | Abstract nondimensional expression of Fisher information as the distance of relative frequency changes | (21)
$\|\dot{q} / \sqrt{q}\|^2 = 4 \|\dot{r}\|^2 = \mathcal{F}$ | Fisher information as simple Euclidean geometric distance of frequency change in unitary coordinates, $r = \sqrt{q}$ | (22)
$\dot{q} \cdot F = \dot{q} \cdot \mathrm{d}\log q = \mathcal{F}$ | For $F \mapsto a$, work of the direct forces in terms of d'Alembert | (25)
$\dot{q} \cdot I = \dot{q} \cdot \mathrm{d}\log^2 q = -\mathcal{F}$ | Work of the inertial forces, the change in the frame of reference | (25)

Bayesian inference:
$\log L \mapsto m$; $\;L - 1 \mapsto a$ | For relative likelihood, $L$ | (38)
$q'_\theta = q_\theta L_\theta$ | Bayesian updating | (37)
$\Delta q \cdot \log L = \mathcal{J}(q', q)$ | Follows from $\log L \mapsto m$ | (39)
$\Delta q \cdot \log L \rightarrow \dot{q} \cdot a = \mathcal{F}$ | Follows from $m \rightarrow a$ as $\Delta q \rightarrow \dot{q}$ | (5)
$\Delta \bar{L} = \Delta q \cdot L + q' \cdot \Delta L = 0$ | Likelihood form of the canonical Price equation, from $L - 1 \mapsto a$ | (40)
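The first rows of Table A1 can be checked numerically in a few lines. The sketch below uses arbitrary made-up frequencies and trait values; nothing in it comes from the text beyond the definitions in Table 1.

```python
import numpy as np

# Toy numerical check of the Price equation identities in Table A1:
# frequencies q -> q', trait values z -> z'.
rng = np.random.default_rng(0)
n = 5
q  = rng.random(n); q  /= q.sum()        # initial frequencies, sum to 1
qp = rng.random(n); qp /= qp.sum()       # new frequencies, sum to 1
z  = rng.normal(size=n)                  # initial trait values
zp = z + rng.normal(scale=0.1, size=n)   # new trait values

dq, dz = qp - q, zp - z
w = qp / q                               # relative fitness, w_i = q'_i / q_i
a = dq / q                               # a_i = Delta q_i / q_i

# Eq. (1): total change = frequency-change term + value-change term
assert np.isclose(qp @ zp - q @ z, dq @ z + qp @ dz)

# Eq. (13): Delta q . z = Cov(w, z), covariance with weights q
cov_wz = q @ (w * z) - (q @ w) * (q @ z)
assert np.isclose(dq @ z, cov_wz)

# Eq. (15): q' . Delta z = E(w Delta z), expectation with weights q
assert np.isclose(qp @ dz, q @ (w * dz))

# Eq. (20): Delta q . a = ||Delta q / sqrt(q)||^2, squared distance
assert np.isclose(dq @ a, np.sum(dq**2 / q))

print("all Price equation identities check out numerically")
```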
Table A2. Mathematical forms that highlight similarities between different disciplines, part 2.

Mathematical Form | Comments | Equation

Natural selection:
$\Delta_F \bar{a} = \Delta q \cdot a = V_w$ | Natural selection moves the population a distance equal to the variance in fitness; equivalent to the abstract form of physical work with $a \mapsto F$ | (11)
$\Delta_F \bar{a} = V_w = V_g + V_\epsilon$ | Partition of the variance (distance) into a part associated with genetic predictors, $V_g$, and a part associated with other environmental effects, $V_\epsilon$ | (12)
$\Delta_{\mathrm{NS}} \bar{a} = V_g$ | Analog of the fundamental theorem: the part of the total transmissible change caused by natural selection | (12)
$\Delta \bar{p} = \mathrm{Cov}(w, p)$ | Replicator equation with $p \mapsto z$ as gene frequency within individuals and $\bar{p}$ as population gene frequency | (14)
$\Delta \bar{p} = \mathrm{Cov}(w, p) + \mathrm{E}(w \, \Delta p)$ | Group selection with $p \mapsto z$ as gene frequency within groups; the first term is selection between groups, the second term selection within groups | (17)

Extreme action:
$\mathcal{L} = \dot{q} \cdot \phi + \text{constraints}$ | Lagrangian as the work of the direct forces, $\phi \mapsto F$; maximizing the work (action), $\dot{q} \cdot \phi$, chooses the frequency changes, $\dot{q}$, in the direction of the forces, subject to the constraints | (29)
$\dot{q}_i = \kappa q_i (\phi_i - \bar{\phi})$ | Dynamics for constrained total frequency and constrained total distance, $\mathcal{F} = C^2$, with $\kappa = C / \sigma_\phi$ and $\sigma_\phi$ the standard deviation of the forces | (30)

Thermodynamics:
$a = \Delta q / q \rightarrow \dot{q} / q$ | Equivalence for small changes | (2)
$m = \log(q'/q) \rightarrow \dot{q} / q$ | Define force $\phi \mapsto m$, with $q'_i = q_i e^{m_i} \approx q_i + q_i m_i$ | (26)
$\dot{q} \cdot \phi = \dot{q} \cdot \log q' - \dot{q} \cdot \log q$ | The term $-\dot{q} \cdot \log q$ is the production of entropy | (31)
$\mathcal{L} = -\dot{q} \cdot \log q + \text{constraints}$ | Maximizing the Lagrangian maximizes the production of entropy | (33)
$\dot{q} \cdot \log q' = \lambda (\dot{q} \cdot z) = \lambda B$ | If $\Delta z = 0$, then the constraint $\Delta \bar{z} = B$ implies $\dot{q} \cdot z = B$, which constrains the vector of new frequencies, $q'$ | (32)
$\log q' = \log k - \lambda z$ | Force of constraint in the previous line | (32)
$\dot{q}_i = \kappa q_i (E_i - \lambda z_i)$ | Dynamics that maximize entropy production | (35)

Statistical mechanics:
$q'_i = k e^{-\lambda z_i}$ | Solution for the probability distribution from the force of constraint at equilibrium, $q' = q$, and constraint $\bar{z} = q \cdot z = 1/\lambda$ | (36)
$q'_i = k e^{-(z_i - \mu)^2 / 2\sigma^2}$ | Gaussian distribution from the constraint $\sigma^2 = q \cdot (z - \mu)^2$ | (36)
$q'_i = k e^{-\lambda T(z_i)}$ | Jaynesian maximum entropy distribution from the constraint $q \cdot T(z) = 1/\lambda$ | (36)

Probability distributions:
$q = k e^{-\lambda e^{\beta w}}$ | Canonical form of continuous probability distributions; $w(z)$ is a shift-invariant scaling of $z$ such that the probability pattern is invariant to a constant shift, $w \mapsto \alpha + w$ | (44)
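As with Table A1, the information-theory rows can be verified directly. The following sketch, again with made-up numbers, confirms that $\Delta q \cdot m$ equals the Jeffreys divergence exactly, and approaches the Fisher information distance as the change becomes small.

```python
import numpy as np

# Toy illustration of the information rows of the tables. With
# m = log(q'/q), the dot product dq . m equals the Jeffreys divergence
# exactly, and for small changes it approaches sum(qdot^2 / q), the
# Fisher information distance.
rng = np.random.default_rng(1)
q = rng.random(6); q /= q.sum()

eps = 1e-4
d = rng.normal(size=6)
d -= d.mean()              # perturbation sums to zero, so total
qp = q + eps * d           # frequency stays conserved
qp /= qp.sum()             # guard against rounding

dq = qp - q
m = np.log(qp / q)

# Eq. (28): dq . m = Jeffreys divergence J = D(q'||q) + D(q||q')
jeffreys = np.sum(qp * np.log(qp / q)) + np.sum(q * np.log(q / qp))
assert np.isclose(dq @ m, jeffreys)

# Eqs. (5) and (21): for small changes, dq . m ~ Fisher information
fisher = np.sum(dq**2 / q)
print(dq @ m, fisher)      # nearly equal for small eps
```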

References

1. Frank, S.A. Natural selection. IV. The Price equation. J. Evol. Biol. 2012, 25, 1002–1019.
2. Frank, S.A. Universal expressions of population change by the Price equation: Natural selection, information, and maximum entropy production. Ecol. Evol. 2017, 7, 3381–3396.
3. Price, G.R. Extension of covariance selection mathematics. Ann. Hum. Genet. 1972, 35, 485–490.
4. Price, G.R. The nature of selection. J. Theor. Biol. 1995, 175, 389–396.
5. Frank, S.A. D'Alembert's direct and inertial forces acting on populations: The Price equation and the fundamental theorem of natural selection. Entropy 2015, 17, 7087–7100.
6. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: New York, NY, USA, 1991.
7. Frank, S.A. George Price's contributions to evolutionary genetics. J. Theor. Biol. 1995, 175, 373–388.
8. Frank, S.A. The Price equation, Fisher's fundamental theorem, kin selection, and causal analysis. Evolution 1997, 51, 1712–1729.
9. Walsh, B.; Lynch, M. Evolution and Selection of Quantitative Traits; Oxford University Press: Oxford, UK, 2018.
10. Fisher, R.A. Average excess and average effect of a gene substitution. Ann. Eugen. 1941, 11, 53–63.
11. Fisher, R.A. The Genetical Theory of Natural Selection, 2nd ed.; Dover: New York, NY, USA, 1958.
12. Price, G.R. Fisher's 'fundamental theorem' made clear. Ann. Hum. Genet. 1972, 36, 129–140.
13. Ewens, W.J. An interpretation and proof of the fundamental theorem of natural selection. Theor. Popul. Biol. 1989, 36, 167–180.
14. Robertson, A. A mathematical model of the culling process in dairy cattle. Anim. Prod. 1966, 8, 95–108.
15. Wade, M.J. Soft selection, hard selection, kin selection, and group selection. Am. Nat. 1985, 125, 61–73.
16. Gardner, A. The Price equation. Curr. Biol. 2008, 18, R198–R202.
17. Queller, D.C. Fundamental theorems of evolution. Am. Nat. 2017, 189, 345–353.
18. Taylor, P.D.; Jonker, L.B. Evolutionary stable strategies and game dynamics. Math. Biosci. 1978, 40, 145–156.
19. Schuster, P.; Sigmund, K. Replicator dynamics. J. Theor. Biol. 1983, 100, 533–538.
20. Price, G.R. Selection and covariance. Nature 1970, 227, 520–521.
21. Hamilton, W.D. Innate social aptitudes of man: An approach from evolutionary genetics. In Biosocial Anthropology; Fox, R., Ed.; Wiley: New York, NY, USA, 1975; pp. 133–155.
22. Jaynes, E.T. Probability Theory: The Logic of Science; Cambridge University Press: New York, NY, USA, 2003.
23. Lanczos, C. The Variational Principles of Mechanics, 4th ed.; Dover Publications: New York, NY, USA, 1986.
24. Fisher, R.A. Theory of statistical estimation. Math. Proc. Camb. Phil. Soc. 1925, 22, 700–725.
25. Amari, S.; Nagaoka, H. Methods of Information Geometry; Oxford University Press: New York, NY, USA, 2000.
26. Frieden, B.R. Science from Fisher Information: A Unification; Cambridge University Press: Cambridge, UK, 2004.
27. Dabak, A.G.; Johnson, D.H. Relations between Kullback-Leibler Distance and Fisher Information. Available online: citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.15.2517 (accessed on 16 December 2018).
28. Ewens, W.J. An optimizing principle of natural selection in evolutionary population genetics. Theor. Popul. Biol. 1992, 42, 333–346.
29. Wei, E.; Justh, E.W.; Krishnaprasad, P.S. Pursuit and an evolutionary game. Proc. R. Soc. Lond. A 2009, 465, 1539–1559.
30. Raju, V.; Krishnaprasad, P.S. A variational problem on the probability simplex. In Proceedings of the 57th IEEE Conference on Decision and Control, Miami Beach, FL, USA, 17–19 December 2018; preliminary draft.
31. Frank, S.A. Natural selection maximizes Fisher information. J. Evol. Biol. 2009, 22, 231–244.
32. Van Ness, H.C. Understanding Thermodynamics; Dover Publications: New York, NY, USA, 1983.
33. Dewar, R.C.; Lineweaver, C.H.; Niven, R.K.; Regenauer-Lieb, K. (Eds.) Beyond the Second Law: Entropy Production and Non-Equilibrium Systems; Springer: Berlin, Germany, 2014.
34. Feynman, R.P. Statistical Mechanics: A Set of Lectures, 2nd ed.; Westview Press: New York, NY, USA, 1998.
35. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620–630.
36. Jaynes, E.T. Information theory and statistical mechanics. II. Phys. Rev. 1957, 108, 171–190.
37. Frank, S.A. The common patterns of nature. J. Evol. Biol. 2009, 22, 1563–1585.
38. Shalizi, C.R. Dynamics of Bayesian updating with dependent data and misspecified models. Electron. J. Stat. 2009, 3, 1039–1074.
39. Harper, M. The replicator equation as an inference dynamic. arXiv 2010, arXiv:0911.1763v3.
40. Campbell, J.O. Universal Darwinism as a process of Bayesian inference. Front. Syst. Neurosci. 2016, 10, 49.
41. Frank, S.A. Common probability patterns arise from simple invariances. Entropy 2016, 18, 192.
42. Frank, S.A. Invariant death. F1000Research 2016, 5, 2076.
43. Frank, S.A. The invariances of power law size distributions. F1000Research 2016, 5, 2074.
44. Frank, S.A. Measurement invariance explains the universal law of generalization for psychological perception. Proc. Natl. Acad. Sci. USA 2018, 115, 9803–9806.
45. Frank, S.A. Dynamics of Cancer: Incidence, Inheritance, and Evolution; Princeton University Press: Princeton, NJ, USA, 2007.
46. Frank, S.A. How to read probability distributions as statements about process. Entropy 2014, 16, 6059–6098.
47. Frank, S.A. Hierarchical selection theory and sex ratios I. General solutions for structured populations. Theor. Popul. Biol. 1986, 29, 312–342.
48. Frank, S.A. Natural selection. V. How to read the fundamental equations of evolutionary change in terms of information theory. J. Evol. Biol. 2012, 25, 2377–2396.
49. Frank, S.A. Natural selection. VI. Partitioning the information in fitness and characters by path analysis. J. Evol. Biol. 2013, 26, 457–471.
50. Sims, C.R. Efficient coding explains the universal law of generalization in human perception. Science 2018, 360, 652–656.
51. Shepard, R.N. Toward a universal law of generalization for psychological science. Science 1987, 237, 1317–1323.
52. Chater, N.; Vitányi, P.M. The generalized universal law of generalization. J. Math. Psychol. 2003, 47, 346–369.
Figure 1. Geometry of change by direct forces. See Table 1 for definitions of symbols. Tables A1 and A2 summarize the distance expressions and point to locations in the text with further details. (a) The abstract physical work of the direct forces as the distance moved between the initial set with frequencies $q$ and the altered set with frequencies $q'$. For discrete changes, the frequencies are normalized by the square root of the frequencies in the initial set. The distance can equivalently be described by the various expressions shown, in which $V_w$ is the variance in fitness from population biology, $\mathcal{J}$ is the Jeffreys divergence from information theory, and $\mathcal{F}$ is the Fisher information metric, which arises in many disciplines. (b) When changes are small, the same geometry and distances can be described more elegantly in unitary square-root coordinates, $r = \sqrt{q}$. The symbol "→" denotes the limit for small changes.
Table 1. Definitions of key symbols and concepts.

Symbol | Definition | Equation

$q$ | Vector of frequencies with $\sum q_i = 1$ | (1)
$z$ | Values with average $\bar{z} = q \cdot z$; use $z \mapsto a, F$, etc. for specific interpretations | (1)
$\Delta q$ | Discrete changes, $\Delta q_i = q'_i - q_i$, which may be large | (1)
$\dot{q}$ | Small, differential changes, $\Delta q \rightarrow \dot{q} \equiv \mathrm{d}q$ | (5)
$a$ | Relative change of the $i$th type, $a_i = \Delta q_i / q_i \rightarrow \dot{q}_i / q_i = \mathrm{d}\log q_i$ | (2)
$m$ | Malthusian parameter, $m = \log(q'/q)$, the log of relative fitness, $w$ | (26)
$w$ | Relative fitness, $w_i = q'_i / q_i$, with $m = \log w$ | (10)
$F$ | Direct nondimensional forces; may be used for values, $z \mapsto F$ | (4)
$I$ | Inertial nondimensional forces; may be interpreted as acceleration (24) | (4)
$\phi$ | Force vector, $F \mapsto \phi$, when specific to a particular case | (6)
$\Delta q \cdot F$ | Abstract notion of physical work as displacement multiplied by force | (5)
$\mathcal{D}(q' \,\|\, q)$ | Kullback–Leibler divergence between $q'$ and $q$ | (5)
$\mathcal{F}$ | Fisher information, nondimensional expression | (5)
$\mathcal{L}$ | Lagrangian, used to find an extremum subject to constraints | (6)
$L$ | Likelihoods, $L_\theta$, for parameter values, $\theta$; interpreted as force, $F \mapsto L$ | (9)
$\Delta_F$ | Partial change caused by the direct forces, e.g., $\Delta q \cdot F$ or $\Delta q \cdot \phi$ or $\Delta q \cdot L$ | (11)
$\| \cdot \|$ | Euclidean vector length, e.g., $\|z\|$ or $\|F\|$ or $\|\Delta q\|$ | (18)
$r$ | Unitary coordinates, $r = \sqrt{q}$, with $\|r\| = 1$ as invariant total probability | (22)
