
The Fundamental Theorem of Natural Selection

by John C. Baez 1,2
1 Department of Mathematics, University of California, Riverside, CA 92521, USA
2 Centre for Quantum Technologies, National University of Singapore, Singapore 117543, Singapore
Entropy 2021, 23(11), 1436; https://doi.org/10.3390/e23111436
Received: 6 October 2021 / Revised: 25 October 2021 / Accepted: 26 October 2021 / Published: 30 October 2021

Abstract

Suppose we have n different types of self-replicating entity, with the population $P_i$ of the ith type changing at a rate equal to $P_i$ times the fitness $f_i$ of that type. Suppose the fitness $f_i$ is any continuous function of all the populations $P_1, \dots, P_n$. Let $p_i$ be the fraction of replicators that are of the ith type. Then $p = (p_1, \dots, p_n)$ is a time-dependent probability distribution, and we prove that its speed as measured by the Fisher information metric equals the variance in fitness. In rough terms, this says that the speed at which information is updated through natural selection equals the variance in fitness. This result can be seen as a modified version of Fisher’s fundamental theorem of natural selection. We compare it to Fisher’s original result as interpreted by Price, Ewens and Edwards.

1. Introduction

In 1930, Fisher [1] stated his “fundamental theorem of natural selection” as follows:
The rate of increase in fitness of any organism at any time is equal to its genetic variance in fitness at that time.
Some tried to make this statement precise as follows:
The time derivative of the mean fitness of a population equals the variance of its fitness.
However, this is only true under very restrictive conditions, and so the claim ignited a controversy.
An interesting resolution was proposed by Price [2], and later amplified by Ewens [3] and Edwards [4]. We can formalize their idea as follows. Suppose we have n types of self-replicating entity, and idealize the population of the ith type as a positive real-valued function $P_i(t)$. Suppose
$\frac{d}{dt} P_i(t) = f_i(P_1(t), \dots, P_n(t)) \, P_i(t)$
where the fitness $f_i$ is a differentiable function of the populations of every type of replicator. The mean fitness at time t is
$\bar{f}(t) = \sum_{i=1}^n p_i(t) \, f_i(P_1(t), \dots, P_n(t))$
where $p_i(t)$ is the fraction of replicators of the ith type:
$p_i(t) = \frac{P_i(t)}{\sum_{j=1}^n P_j(t)}.$
By the product rule, the rate of change of the mean fitness is the sum of two terms:
$\frac{d}{dt} \bar{f}(t) = \sum_{i=1}^n \dot{p}_i(t) \, f_i(P_1(t), \dots, P_n(t)) + \sum_{i=1}^n p_i(t) \, \frac{d}{dt} f_i(P_1(t), \dots, P_n(t)).$
The first of these two terms equals the variance of the fitness at time t. We give the easy proof in Theorem 1. Unfortunately, the conceptual significance of this first term is much less clear than that of the total rate of change of mean fitness. Ewens concluded that “the theorem does not provide the substantial biological statement that Fisher claimed”.
However, there is another way out, based on an idea Fisher himself introduced in 1922: Fisher information [5]. Fisher information gives rise to a Riemannian metric on the space of probability distributions on a finite set, called the ‘Fisher information metric’, or in the context of evolutionary game theory, the ‘Shahshahani metric’ [6,7,8]. Using this metric we can define the speed at which a time-dependent probability distribution changes with time. We call this its ‘Fisher speed’. Under just the assumptions already stated, we prove in Theorem 2 that the square of the Fisher speed of the probability distribution
$p(t) = (p_1(t), \dots, p_n(t))$
is the variance of the fitness at time t.
As explained by Harper [9,10], natural selection can be thought of as a learning process, and studied using ideas from information geometry [11]—that is, the geometry of the space of probability distributions. As $p(t)$ changes with time, the rate at which information is updated is closely connected to its Fisher speed. Thus, our revised version of the fundamental theorem of natural selection can be loosely stated as follows:
As a population changes with time, the rate at which information is updated equals the variance of fitness.
The precise statement, with all the hypotheses, is in Theorem 2. However, one lesson is this: variance in fitness may not cause ‘progress’ in the sense of increased mean fitness, but it does cause change.

2. The Time Derivative of Mean Fitness

Suppose we have n different types of entity, which we call replicators. Let $P_i(t)$, or $P_i$ for short, be the population of the ith type of replicator at time t, which we idealize as taking positive real values. Then a very general form of the Lotka–Volterra equations says that
$\frac{dP_i}{dt} = f_i(P_1, \dots, P_n) \, P_i \qquad (1)$
where $f_i \colon [0, \infty)^n \to \mathbb{R}$ is the fitness function of the ith type of replicator. One might also consider fitness functions with explicit time dependence, but we do not do so here.
Let $p_i(t)$, or $p_i$ for short, be the probability at time t that a randomly chosen replicator will be of the ith type. More precisely, this is the fraction of replicators of the ith type:
$p_i = \frac{P_i}{\sum_j P_j}. \qquad (2)$
Using these probabilities, we can define the mean fitness $\bar{f}$ by
$\bar{f} = \sum_{j=1}^n p_j \, f_j(P_1, \dots, P_n) \qquad (3)$
and the variance in fitness by
$\operatorname{Var}(f) = \sum_{j=1}^n p_j \left( f_j(P_1, \dots, P_n) - \bar{f} \right)^2. \qquad (4)$
These quantities are also functions of t, but we suppress the t dependence in our notation.
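For readers who wish to experiment, these definitions are easy to compute. The following Python sketch uses hypothetical populations and fitness values, chosen only for illustration:

```python
import numpy as np

# Hypothetical populations P_i and fitness values f_i for n = 3 replicator types.
P = np.array([2.0, 3.0, 5.0])   # populations, all positive
f = np.array([1.0, 0.5, 2.0])   # fitnesses f_i(P_1, ..., P_n) at this instant

p = P / P.sum()                        # fractions p_i
f_bar = np.sum(p * f)                  # mean fitness
var_f = np.sum(p * (f - f_bar) ** 2)   # variance in fitness

print(p)       # [0.2 0.3 0.5]
print(f_bar)   # approximately 1.35
print(var_f)   # approximately 0.4525
```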
Fisher said that the variance in fitness equals the rate of change of mean fitness. Price [2], Ewens [3] and Edwards [4] argued that Fisher only meant to equate part of the rate of change in mean fitness to the variance in fitness. We can see this in the present context as follows. The time derivative of the mean fitness is the sum of two terms:
$\frac{d\bar{f}}{dt} = \sum_{i=1}^n \dot{p}_i \, f_i(P_1(t), \dots, P_n(t)) + \sum_{i=1}^n p_i \, \frac{d}{dt} f_i(P_1(t), \dots, P_n(t)) \qquad (5)$
and as we now show, the first term equals the variance in fitness.
Theorem 1.
Suppose positive real-valued functions $P_i(t)$ obey the Lotka–Volterra equations for some continuous functions $f_i \colon [0, \infty)^n \to \mathbb{R}$. Then
$\sum_{i=1}^n \dot{p}_i \, f_i(P_1(t), \dots, P_n(t)) = \operatorname{Var}(f).$
Proof.
First we recall a standard formula for the time derivative $\dot{p}_i$. Using the definition of $p_i$ in Equation (2), the quotient rule gives
$\dot{p}_i = \frac{\dot{P}_i \sum_j P_j - P_i \sum_j \dot{P}_j}{\left( \sum_j P_j \right)^2}$
where all sums are from 1 to n. Using the Lotka–Volterra equations this becomes
$\dot{p}_i = \frac{f_i P_i \sum_j P_j - P_i \sum_j f_j P_j}{\left( \sum_j P_j \right)^2}$
where we write $f_i$ to mean $f_i(P_1, \dots, P_n)$, and similarly for $f_j$. Using the definition of $p_i$ again, this simplifies to:
$\dot{p}_i = f_i p_i - \Big( \sum_j f_j p_j \Big) p_i$
and thanks to the definition of mean fitness in Equation (3), this reduces to the well-known replicator equation:
$\dot{p}_i = \left( f_i - \bar{f} \right) p_i. \qquad (6)$
Now, the replicator equation implies
$\sum_i f_i \dot{p}_i = \sum_i f_i \left( f_i - \bar{f} \right) p_i. \qquad (7)$
On the other hand,
$\sum_i \bar{f} \left( f_i - \bar{f} \right) p_i = \bar{f} \sum_i \left( f_i - \bar{f} \right) p_i = 0 \qquad (8)$
since $\sum_i f_i p_i = \bar{f}$ but also $\sum_i \bar{f} p_i = \bar{f}$. Subtracting Equation (8) from Equation (7) we obtain
$\sum_i f_i \dot{p}_i = \sum_i \left( f_i - \bar{f} \right) \left( f_i - \bar{f} \right) p_i$
or simply
$\sum_i f_i \dot{p}_i = \operatorname{Var}(f). \; \square$
The second term of Equation (5) only vanishes in special cases, e.g., when the fitness functions $f_i$ are constant. When the second term vanishes we have
$\frac{d\bar{f}}{dt} = \operatorname{Var}(f).$
This is a satisfying result. It says the mean fitness does not decrease, and it increases whenever some replicators are more fit than others, at a rate equal to the variance in fitness. However, we would like a more general result, and we can state one using a concept from information theory: the Fisher speed.
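Theorem 1 is also easy to check numerically. The following Python sketch uses a hypothetical fitness function (our own choice, made only to give a nontrivial example) and confirms that the first term of the time derivative of mean fitness equals the variance in fitness:

```python
import numpy as np

# Numerical check of Theorem 1 at one instant of time.
P = np.array([1.0, 2.0, 4.0])   # hypothetical populations

def fitness(P):
    # Hypothetical fitnesses: each type's fitness declines as total population grows.
    return np.array([2.0, 1.0, 0.5]) - 0.1 * P.sum()

f = fitness(P)
p = P / P.sum()
f_bar = np.sum(p * f)
p_dot = (f - f_bar) * p               # the replicator equation
lhs = np.sum(f * p_dot)               # first term of d(f_bar)/dt
var_f = np.sum(p * (f - f_bar) ** 2)  # variance in fitness
assert np.isclose(lhs, var_f)         # the statement of Theorem 1
```

Note that the probabilities $p_i$ always sum to 1, so their time derivatives sum to 0; the code's `p_dot` satisfies this automatically.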

3. The Fisher Speed

While Theorem 1 allows us to express the variance in fitness in terms of the time derivatives of the probabilities $p_i$, it does so in a way that also explicitly involves the fitness functions $f_i$. We now prove a simpler formula for the variance in fitness, which equates it with the square of the ‘Fisher speed’ of the probability distribution $p = (p_1, \dots, p_n)$.
The space of probability distributions on the set $\{1, \dots, n\}$ is the $(n-1)$-simplex
$\Delta^{n-1} = \left\{ (x_1, \dots, x_n) : x_i \ge 0, \; \sum_{i=1}^n x_i = 1 \right\}.$
The Fisher metric is the Riemannian metric g on the interior of the $(n-1)$-simplex such that given a point p in the interior of $\Delta^{n-1}$ and two tangent vectors $v, w$ we have
$g(v, w) = \sum_{i=1}^n \frac{v_i w_i}{p_i}.$
Here we are describing the tangent vectors $v, w$ as vectors in $\mathbb{R}^n$ with the property that the sum of their components is zero: this makes them tangent to the $(n-1)$-simplex. We demand that p be in the interior of the simplex to avoid dividing by zero, since on the boundary of the simplex we have $p_i = 0$ for at least one choice of i.
If we have a time-dependent probability distribution $p(t)$ moving in the interior of the $(n-1)$-simplex as a function of time, its Fisher speed is defined by
$\sqrt{g(\dot{p}(t), \dot{p}(t))} = \left( \sum_{i=1}^n \frac{\dot{p}_i(t)^2}{p_i(t)} \right)^{1/2}$
if the derivative $p ˙ ( t )$ exists. This is the usual formula for the speed of a curve moving in a Riemannian manifold, specialized to the case at hand.
These are all the formulas needed to prove our result. However, for readers unfamiliar with the Fisher metric, a few words may provide some intuition. The factor of $1/p_i$ in the Fisher metric changes the geometry of the simplex so that it becomes round, with the geometry of a portion of a sphere in $\mathbb{R}^n$. But more relevant here is the Fisher metric’s connection to relative information, a generalization of Shannon information that depends on two probability distributions rather than just one [12]. Given probability distributions $p, q \in \Delta^{n-1}$, the information of q relative to p is
$I(q, p) = \sum_{i=1}^n q_i \ln \left( \frac{q_i}{p_i} \right).$
This is the amount of information that has been updated if one replaces the prior distribution p with the posterior q. So, sometimes relative information is called the ‘information gain’. It is also called ‘relative entropy’ or ‘Kullback–Leibler divergence’. It has many applications to biology [9,10,13,14].
Suppose $p(t)$ is a smooth curve in the interior of the $(n-1)$-simplex. We can ask the rate at which information is being updated as time passes. Perhaps surprisingly, an easy calculation gives
$\left. \frac{d}{dt} I(p(t), p(t_0)) \right|_{t = t_0} = 0.$
Thus, to first order, information is not being updated at all at any time $t_0 \in \mathbb{R}$. However, another well-known calculation (see, e.g., [15]) shows that
$\left. \frac{d^2}{dt^2} I(p(t), p(t_0)) \right|_{t = t_0} = g(\dot{p}(t_0), \dot{p}(t_0)).$
So, to second order in $t - t_0$, the square of the Fisher speed determines how much information is updated when we pass from $p(t_0)$ to $p(t)$.
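This second-order relationship can be checked by finite differences. In the following Python sketch, the smooth curve $p(t)$ is an arbitrary hypothetical path in the interior of the simplex, chosen only for illustration:

```python
import numpy as np

# Finite-difference check that the second time derivative of relative
# information I(p(t), p(t0)) at t = t0 equals g(p_dot, p_dot).
def p(t):
    # An arbitrary smooth curve in the interior of the 2-simplex.
    w = np.array([1.0, 2.0, 3.0]) * np.exp(np.array([0.3, -0.1, 0.5]) * t)
    return w / w.sum()

def I(q, r):
    # Relative information (Kullback-Leibler divergence) of q relative to r.
    return np.sum(q * np.log(q / r))

t0, h = 0.0, 1e-3
p0 = p(t0)
p_dot = (p(t0 + h) - p(t0 - h)) / (2 * h)   # central difference for p_dot(t0)
fisher_sq = np.sum(p_dot ** 2 / p0)         # g(p_dot, p_dot)
second = (I(p(t0 + h), p0) - 2 * I(p0, p0) + I(p(t0 - h), p0)) / h ** 2
assert np.isclose(second, fisher_sq, rtol=1e-4)
```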
Theorem 2.
Suppose positive real-valued functions $P_i(t)$ obey the Lotka–Volterra equations for some continuous functions $f_i \colon [0, \infty)^n \to \mathbb{R}$. Then the square of the Fisher speed of the probability distribution $p(t)$ is the variance of the fitness:
$g(\dot{p}, \dot{p}) = \operatorname{Var}(f).$
Proof.
Consider the square of the Fisher speed
$g(\dot{p}, \dot{p}) = \sum_{i=1}^n \frac{\dot{p}_i^2}{p_i}$
and use the replicator equation
$\dot{p}_i = \left( f_i - \bar{f} \right) p_i$
obtaining
$g(\dot{p}, \dot{p}) = \sum_{i=1}^n \left( f_i - \bar{f} \right)^2 p_i = \operatorname{Var}(f)$
as desired. □
The generality of this result is remarkable. Formally, any autonomous system of first-order differential equations
$\frac{d}{dt} P_i(t) = F_i(P_1(t), \dots, P_n(t))$
can be rewritten as Lotka–Volterra equations
$\frac{d}{dt} P_i(t) = f_i(P_1(t), \dots, P_n(t)) \, P_i(t)$
simply by setting
$f_i(P_1, \dots, P_n) = F_i(P_1, \dots, P_n) / P_i.$
In general $f_i$ is undefined when $P_i = 0$, but this is not a problem if we restrict ourselves to situations where all the populations $P_i$ are positive; in these situations, Theorems 1 and 2 apply.
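As a numerical sketch of this recasting, the following Python code starts from a hypothetical vector field $F$ that is not of Lotka–Volterra form, defines the induced fitness functions $f_i = F_i / P_i$, and checks the conclusion of Theorem 2 at a point where all populations are positive:

```python
import numpy as np

# Recast an arbitrary first-order system dP/dt = F(P) in Lotka-Volterra form
# by setting f_i = F_i / P_i, then check g(p_dot, p_dot) = Var(f) (Theorem 2).
def F(P):
    # A hypothetical autonomous vector field, chosen only for illustration.
    return np.array([P[0] - 0.2 * P[0] * P[1],
                     0.1 * P[0] * P[1] - P[1],
                     0.5 * P[2] ** 2])

P = np.array([3.0, 2.0, 1.0])         # all populations positive
f = F(P) / P                          # the induced fitness functions
p = P / P.sum()
f_bar = np.sum(p * f)
p_dot = (f - f_bar) * p               # the replicator equation
fisher_sq = np.sum(p_dot ** 2 / p)    # square of the Fisher speed
var_f = np.sum(p * (f - f_bar) ** 2)  # variance in fitness
assert np.isclose(fisher_sq, var_f)   # the statement of Theorem 2
```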

Funding

This research received no external funding.


Acknowledgments

This research was done at the Topos Institute. I thank Marc Harper for his invaluable continued help with this subject, and evolutionary game theory more generally. I also thank Rob Spekkens for some helpful comments.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Fisher, R.A. The Genetical Theory of Natural Selection; Clarendon Press: Oxford, UK, 1930. [Google Scholar]
2. Price, G.R. Fisher’s “fundamental theorem” made clear. Ann. Hum. Genet. 1972, 36, 129–140. [Google Scholar] [CrossRef] [PubMed]
3. Ewens, W.J. An interpretation and proof of the Fundamental Theorem of Natural Selection. Theor. Popul. Biol. 1989, 36, 167–180. [Google Scholar] [CrossRef]
4. Edwards, A.W.F. The fundamental theorem of natural selection. Biol. Rev. 1994, 69, 443–474. [Google Scholar] [CrossRef] [PubMed]
5. Fisher, R.A. On the mathematical foundations of theoretical statistics. Philos. Trans. A Math. Phys. Eng. Sci. 1922, 222, 309–368. [Google Scholar]
6. Akin, E. The Geometry of Population Genetics; Springer: Berlin/Heidelberg, Germany, 1979. [Google Scholar]
7. Akin, E. The differential geometry of population genetics and evolutionary games. In Mathematical and Statistical Developments of Evolutionary Theory; Lessard, S., Ed.; Springer: Berlin/Heidelberg, Germany, 1990; pp. 1–93. [Google Scholar]
8. Shahshahani, S. A new mathematical framework for the study of linkage and selection. Mem. Am. Math. Soc. 1979, 17, 211. [Google Scholar] [CrossRef]
9. Harper, M. Information geometry and evolutionary game theory. arXiv 2009, arXiv:0911.1383. [Google Scholar]
10. Harper, M. The replicator equation as an inference dynamic. arXiv 2009, arXiv:0911.1763. [Google Scholar]
11. Amari, S. Information Geometry and Its Applications; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
12. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley-Interscience: New York, NY, USA, 2006. [Google Scholar]
13. Baez, J.C.; Pollard, B.S. Relative entropy in biological systems. Entropy 2016, 18, 46. [Google Scholar] [CrossRef]
14. Leinster, T. Entropy and Diversity: The Axiomatic Approach; Cambridge University Press: Cambridge, UK, 2021. [Google Scholar]
15. Baez, J.C. Information Geometry, Part 7. 2011. Available online: https://math.ucr.edu/home/baez/information (accessed on 28 October 2021).