# Relative Entropy in Biological Systems


## Abstract


## 1. Introduction

In this paper we study how relative entropy changes with time in several processes important in biology, including:

- a population approaching an evolutionarily stable state;
- random processes such as mutation, genetic drift, the diffusion of organisms in an environment or the diffusion of molecules in a liquid;
- a chemical reaction approaching equilibrium.

Given two probability distributions p and q on a finite set X, their **relative information**, or more precisely the

**information of p relative to q**, is

$$I(p,q)=\sum _{i\in X}{p}_{i}\ln \left(\frac{{p}_{i}}{{q}_{i}}\right).$$

Relative information is also known as relative entropy or the Kullback–Leibler

**divergence**: it obeys $I(p,q)\ge 0$, with equality if and only if $p=q$, though it is not symmetric in p and q.
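As a quick numerical sketch (the distributions here are made-up examples, not taken from the text), relative information can be computed directly from the definition $I(p,q)=\sum_i p_i\ln(p_i/q_i)$:

```python
import math

def relative_information(p, q):
    """I(p,q) = sum_i p_i ln(p_i / q_i), with the convention 0 ln 0 = 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Two arbitrary example distributions on a 3-element set.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

print(relative_information(p, q))  # positive, since p != q
print(relative_information(p, p))  # 0.0: I(p,p) vanishes
```

Note that $I(p,q)\ne I(q,p)$ in general: relative information is a divergence, not a distance.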

- In Section 2 we consider a very general form of the Lotka–Volterra equations, which are a commonly used model of population dynamics. Starting from the population ${P}_{i}$ of each type of replicating entity, we can define a probability distribution $${p}_{i}=\frac{{P}_{i}}{{\displaystyle \sum _{j\in X}{P}_{j}}},$$ which evolves according to a nonlinear equation called the replicator equation.
- In Section 3 we consider a special case of the replicator equation that is widely studied in evolutionary game theory. In this case we can think of probability distributions as mixed strategies in a two-player game. When q is a dominant strategy, $I(q,p(t))$ can never increase when $p(t)$ evolves according to the replicator equation. We can think of $I(q,p(t))$ as the information that the population has left to learn. Thus, evolution is analogous to a learning process, an analogy that in the field of artificial intelligence is exploited by evolutionary algorithms.
- In Section 4 we consider continuous-time, finite-state Markov processes. Here we have probability distributions on a finite set X evolving according to a linear equation called the master equation. In this case $I(p(t),q(t))$ can never increase. Thus, if q is a steady state solution of the master equation, both $I(p(t),q)$ and $I(q,p(t))$ are nonincreasing. We can always write q as the Boltzmann distribution for some energy function $E:X\to \mathbb{R}$, meaning that $${q}_{i}=\frac{\exp (-{E}_{i}/kT)}{{\displaystyle \sum _{j\in X}\exp (-{E}_{j}/kT)}},$$ where T is the temperature and k is Boltzmann's constant. In this case, $I(p(t),q)$ is proportional to a difference of free energies: $$I(p(t),q)=\frac{F(p)-F(q)}{kT}.$$
- Finally, in Section 5 we consider chemical reactions and other processes described by reaction networks. In this context we have nonnegative real populations ${P}_{i}$ of entities of various kinds $i\in X$, and these population distributions evolve according to a nonlinear equation called the rate equation. We can generalize relative information from probability distributions to populations by setting $$I(P,Q)=\sum _{i\in X}\left[{P}_{i}\ln \left(\frac{{P}_{i}}{{Q}_{i}}\right)-\left({P}_{i}-{Q}_{i}\right)\right].$$
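The generalized quantity $I(P,Q)$ makes sense even when the populations do not sum to 1. As a sketch (with made-up population vectors, not from the text), each term $P_i\ln(P_i/Q_i)-(P_i-Q_i)$ is nonnegative by convexity, so the total is nonnegative and vanishes only when $P=Q$:

```python
import math

def population_relative_information(P, Q):
    """I(P,Q) = sum_i [ P_i ln(P_i/Q_i) - (P_i - Q_i) ] for nonnegative populations."""
    total = 0.0
    for Pi, Qi in zip(P, Q):
        if Pi > 0:
            total += Pi * math.log(Pi / Qi) - (Pi - Qi)
        else:  # convention: 0 ln 0 = 0, so the term reduces to Q_i
            total += Qi
    return total

# Arbitrary example populations; note they need not sum to 1.
P = [3.0, 1.0, 2.5]
Q = [2.0, 2.0, 2.5]

print(population_relative_information(P, Q))   # > 0
print(population_relative_information(Q, Q))   # 0.0
```

When P and Q happen to have the same total, the extra $-(P_i-Q_i)$ terms cancel and this reduces (up to normalization) to ordinary relative information.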

## 2. The Replicator Equation

The replicator equation is a simplified model of how populations change with time. Suppose we have various types of self-replicating entity, which we call

**replicators**. We will call the types of replicators

**species**, but they do not need to be species in the biological sense. For example, the replicators could be genes, and the types could be alleles. Or the replicators could be restaurants, and the types could be restaurant chains.

Let ${P}_{i}(t)$ be the population of the i-th species at time t. A very general form of the

**Lotka–Volterra equations** says that each population changes at a rate proportional to itself:

$$\frac{d{P}_{i}}{dt}={f}_{i}({P}_{1},\dots ,{P}_{n})\,{P}_{i},$$

where the "constant" of proportionality ${f}_{i}$ may depend on the populations of all the species.

We call ${f}_{i}$ the

**fitness function** of the i-th species. We can create a vector whose components are all the populations: $P=({P}_{1},\dots ,{P}_{n})$, which lets us write the equations above more concisely as $\frac{d{P}_{i}}{dt}={f}_{i}(P)\,{P}_{i}$.

What matters for the replicator equation is not the population of each species but its relative abundance, so each population ${P}_{i}$ is normalized to give a probability

$${p}_{i}=\frac{{P}_{i}}{{\displaystyle \sum _{j}{P}_{j}}}.$$

The quantity $\sum _{i}{f}_{i}(P)\,{p}_{i}$ is the

**mean fitness**. In other words, it is the average, or expected, fitness of a replicator chosen at random from the whole population. Let us write it thus:

$$\langle f(P)\rangle =\sum _{i}{f}_{i}(P)\,{p}_{i}.$$

A little calculus, starting from the Lotka–Volterra equations, then gives the

**replicator equation** in its classic form:

$$\frac{d{p}_{i}}{dt}=\left({f}_{i}(P)-\langle f(P)\rangle \right){p}_{i}.$$

Thus, the fraction of replicators of the i-th species grows when its fitness exceeds the mean fitness, and shrinks when its fitness is below the mean.
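As a numerical sketch (with invented constant fitnesses $f=(2,1)$ for two species, not an example from the text), we can check that normalizing solutions of the Lotka–Volterra equations agrees with integrating the replicator equation directly, and that the replicator flow conserves total probability exactly:

```python
import math

f = (2.0, 1.0)   # assumed constant fitnesses for a two-species toy example

def simulate(dt=1e-4, steps=20000):
    P = [1.0, 1.0]   # Lotka-Volterra populations: dP_i/dt = f_i * P_i
    p = [0.5, 0.5]   # probabilities evolved by the replicator equation
    for _ in range(steps):
        mean = f[0]*p[0] + f[1]*p[1]                      # mean fitness <f>
        p = [pi + dt*(fi - mean)*pi for pi, fi in zip(p, f)]
        P = [Pi + dt*fi*Pi for Pi, fi in zip(P, f)]
    return P, p

P, p = simulate()                  # Euler integration up to t = 2
norm = [Pi / sum(P) for Pi in P]   # normalized populations

print(p[0], norm[0])  # both approach 1/(1 + e^{-2}), about 0.88
```

Note that the sum $\sum_i \frac{dp_i}{dt} = \langle f\rangle - \langle f\rangle = 0$, so even the Euler steps preserve normalization exactly.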

## 3. Evolutionary Game Theory

In evolutionary game theory we study the replicator equation in the special case where the fitness of each species is a linear function of the probabilities, specified by a

**fitness matrix**:

$${f}_{i}(P)=\sum _{j}{A}_{ij}\,{p}_{j}.$$

In the language of game theory, we can think of the species as

**pure strategies**. The

**payoff matrix** ${A}_{ij}$ specifies the first player’s winnings if the first player chooses the pure strategy i and the second player chooses the pure strategy j. A probability distribution on the set of pure strategies is called a

**mixed strategy**. The first player’s expected winnings will be $p\cdot Aq$ if they use the mixed strategy p and the second player uses the mixed strategy q.

We say a mixed strategy q is a

**dominant mixed strategy** if

$$q\cdot Ap\ge p\cdot Ap$$

for all mixed strategies p. When q is dominant, $I(q,p(t))$ can never increase when $p(t)$ evolves according to the replicator equation.

If q is dominant, it is automatically a

**steady state** solution of the replicator equation, meaning one that does not depend on time. To see this, let $r(t)$ be the solution of the replicator equation with $r(0)=q$. Then $I(q,r(t))$ is nonincreasing because q is dominant. Furthermore $I(q,r(t))=0$ at $t=0$, since for any probability distribution we have $I(q,q)=0$. Thus we have $I(q,r(t))\le 0$ for all $t\ge 0$. However, relative information is always non-negative, so we must have $I(q,r(t))=0$ for all $t\ge 0$. This forces $r(t)=q$, since the relative information of two probability distributions can only vanish if they are equal.
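A minimal sketch of these ideas, using a Hawk–Dove payoff matrix $A=\begin{pmatrix}-1 & 2\\ 0 & 1\end{pmatrix}$ as an assumed standard example (not taken from the text): for this matrix $q=(1/2,1/2)$ satisfies $q\cdot Ap-p\cdot Ap = 2(p_1-1/2)^2\ge 0$, so q is dominant, and simulating the replicator equation shows $I(q,p(t))$ decreasing toward 0:

```python
import math

A = [[-1.0, 2.0],
     [ 0.0, 1.0]]   # assumed Hawk-Dove payoff matrix
q = [0.5, 0.5]      # candidate dominant mixed strategy

def Ap(p):
    return [A[0][0]*p[0] + A[0][1]*p[1], A[1][0]*p[0] + A[1][1]*p[1]]

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

# Check dominance, q.Ap >= p.Ap, on a grid of mixed strategies p.
for k in range(101):
    p = [k/100, 1 - k/100]
    assert dot(q, Ap(p)) >= dot(p, Ap(p)) - 1e-12

# Replicator dynamics dp_i/dt = ((Ap)_i - p.Ap) p_i, Euler-integrated,
# recording the relative information I(q, p(t)) at each step.
p, dt, history = [0.9, 0.1], 1e-3, []
for _ in range(20000):
    history.append(sum(qi * math.log(qi / pi) for qi, pi in zip(q, p)))
    w = Ap(p)
    mean = dot(p, w)
    p = [pi + dt*(wi - mean)*pi for pi, wi in zip(p, w)]

print(history[0], history[-1])  # I(q, p(t)) decreases toward 0
```

Here $I(q,p(t))$ plays the role of a Lyapunov function: it decreases monotonically as the population approaches the dominant strategy.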

Following Maynard Smith, we say a mixed strategy q is an

**evolutionarily stable state** if

$$q\cdot Aq\ge p\cdot Aq \qquad (9)$$

for all mixed strategies p, and

$$q\cdot Aq=p\cdot Aq \;\Rightarrow\; q\cdot Ap>p\cdot Ap \qquad (10)$$

for all mixed strategies $p\ne q$. The first condition says that q is a

**symmetric Nash equilibrium**. In other words, the invaders can’t on average do better playing against the original population than members of the original population are. The second says that if the invaders are just as good at playing against the original population, they must be worse at playing against each other! The combination of these conditions means the invaders won’t take over.

We say q is an

**evolutionarily stable strategy** if Maynard Smith’s condition (9) holds along with the stronger requirement that

$$q\cdot Ap>p\cdot Ap$$

for all mixed strategies $p\ne q$. Mixed strategies obeying only the non-strict version of this inequality, that is, dominant mixed strategies, are accordingly sometimes called

**weakly evolutionarily stable strategies**.

## 4. Markov Processes

A **Markov process** M consists of:

- a finite set X of **states**,
- a finite set T of **transitions**,
- maps $s,t:T\to X$ assigning to each transition its **source** and **target**,
- a map $r:T\to (0,\infty )$ assigning a **rate constant** $r(\tau )$ to each transition $\tau \in T$.

From such a Markov process we can construct a square matrix called its

**Hamiltonian**. If $i\ne j$ we define

$${H}_{ij}=\sum _{\tau :\,j\to i}r(\tau )$$

to be the sum of the rate constants of all transitions from the state j to the state i, and we choose the diagonal entries so that each column sums to zero. The

**master equation** for a time-dependent probability distribution on X is:

$$\frac{d}{dt}\,p(t)=H\,p(t).$$

By construction, H is

**infinitesimal stochastic**, meaning that its off-diagonal entries are non-negative and the entries in each column sum to zero:

$${H}_{ij}\ge 0 \text{ for } i\ne j, \qquad \sum _{i\in X}{H}_{ij}=0.$$

The solution of the master equation is $p(t)=\exp (tH)\,p(0)$, and the operators $\exp (tH)$ obey the

**Kolmogorov forward equation**:

$$\frac{d}{dt}\exp (tH)=H\exp (tH),$$

as well as the

**Kolmogorov backward equation**:

$$\frac{d}{dt}\exp (tH)=\exp (tH)\,H.$$

Of special importance are

**steady states**: probability distributions q that obey

$$Hq=0,$$

so that they do not change with time under the master equation.

Suppose q is a steady state with ${q}_{i}>0$ for all $i\in X$. We can then assign an

**energy** ${E}_{i}$ to each state such that the steady state probabilities ${q}_{i}$ are given by the so-called

**Boltzmann distribution**:

$${q}_{i}=\frac{\exp (-\beta {E}_{i})}{Z},$$

where Z is the

**partition function**, defined by

$$Z=\sum _{i\in X}\exp (-\beta {E}_{i}).$$

Here β is an arbitrary positive constant, which we can think of as the reciprocal of the

**temperature** $T=1/\beta $, setting Boltzmann’s constant to 1. Then, for any probability distribution p on X we can define the

**expected energy**:

$$\langle E\rangle _{p}=\sum _{i\in X}{p}_{i}{E}_{i},$$

the

**entropy**:

$$S(p)=-\sum _{i\in X}{p}_{i}\ln {p}_{i},$$

and the

**free energy**:

$$F(p)=\langle E\rangle _{p}-T\,S(p).$$

A short calculation shows that

$$I(p,q)=\beta \left(F(p)-F(q)\right),$$

so relative information is proportional to the difference in free energy, as claimed in the Introduction.
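A numerical sketch of this whole section, using an invented two-state Markov process (the rates and initial distribution are assumptions, not from the text): the Hamiltonian is infinitesimal stochastic, $I(p(t),q)$ decreases monotonically as $p(t)$ relaxes to the steady state q, and it equals the free energy difference $F(p)-F(q)$ when we take $\beta=1$ and $E_i=-\ln q_i$:

```python
import math

# Assumed 2-state Markov process: rate 1 for state 1 -> 2, rate 2 for 2 -> 1.
H = [[-1.0, 2.0],
     [ 1.0, -2.0]]

# Infinitesimal stochastic: off-diagonal entries >= 0, columns sum to zero.
assert H[0][0] + H[1][0] == 0.0 and H[0][1] + H[1][1] == 0.0

q = [2/3, 1/3]   # steady state: Hq = 0
assert abs(H[0][0]*q[0] + H[0][1]*q[1]) < 1e-12

def relative_information(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def free_energy(p, E):
    expected_energy = sum(pi * Ei for pi, Ei in zip(p, E))
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return expected_energy - entropy   # F(p) = <E>_p - T S(p), with T = 1

# Energies E_i = -ln q_i make q the Boltzmann distribution with beta = 1, Z = 1.
E = [-math.log(qi) for qi in q]

# Euler-integrate the master equation dp/dt = Hp, recording I(p(t), q).
p, dt, history = [0.1, 0.9], 1e-3, []
for _ in range(10000):
    history.append(relative_information(p, q))
    p = [p[0] + dt*(H[0][0]*p[0] + H[0][1]*p[1]),
         p[1] + dt*(H[1][0]*p[0] + H[1][1]*p[1])]

# I(p, q) equals the free energy difference F(p) - F(q) (here kT = 1).
print(history[0], free_energy([0.1, 0.9], E) - free_energy(q, E))
```

Since each column of H sums to zero, the Euler steps preserve total probability exactly, and the recorded values of $I(p(t),q)$ form a nonincreasing sequence.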

## 5. Reaction Networks

A **reaction network** consists of:

- a finite set S of **species**,
- a finite set X of **complexes**, with $X\subseteq {\mathbb{N}}^{S}$,
- a finite set T of **reactions** or **transitions**,
- maps $s,t:T\to X$ assigning to each reaction its **source** and **target**,
- a map $r:T\to (0,\infty )$ assigning to each reaction a **rate constant**.

In this context we have a

**population** ${P}_{i}\in [0,\infty )$ of each species i. We can summarize these in a population vector $P\in [0,\infty )^{S}$. The population vector changes with time, and its rate of change is a sum of terms, one for each transition τ. The term for τ is a product of three factors:

- the vector $t\left(\tau \right)-s\left(\tau \right)$ whose i-th component is the change in the number of items of the i-th species due to the reaction τ;
- the population (or concentration) of each input species i of τ, raised to the power given by the number of times it appears as an input, namely ${s}_{i}\left(\tau \right)$;
- the rate constant $r\left(\tau \right)$ of τ.

Summing these terms over all transitions, the

**rate equations** are

$$\frac{d{P}_{i}}{dt}=\sum _{\tau \in T}r(\tau )\left({t}_{i}(\tau )-{s}_{i}(\tau )\right){P}^{s(\tau )},$$

where we use the shorthand ${P}^{s(\tau )}=\prod _{j\in S}{P}_{j}^{{s}_{j}(\tau )}$.

For example, consider a simple model of virus dynamics with three species:

- H: healthy white blood cells,
- I: infected white blood cells,
- V: virions (that is, individual virus particles).

The model has six transitions:

- α: the birth of one healthy cell, which has no input and one H as output.
- β: the death of a healthy cell, which has one H as input and no output.
- γ: the infection of a healthy cell, which has one H and one V as input, and one I as output.
- δ: the reproduction of the virus in an infected cell, which has one I as input, and one I and one V as output.
- ϵ: the death of an infected cell, which has one I as input and no output.
- ζ: the death of a virion, which has one V as input and no output.
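Reading the rate equations off these six reactions gives $\dot H=\alpha -\beta H-\gamma HV$, $\dot I=\gamma HV-\epsilon I$, and $\dot V=\delta I-\gamma HV-\zeta V$. A sketch with made-up rate constants (the numerical values are assumptions, not from the text), integrating the virus-free case, where the healthy-cell population must relax to the birth/death ratio α/β:

```python
# Rate equations read off from the six reactions:
#   dH/dt = alpha - beta*H - gamma*H*V
#   dI/dt = gamma*H*V - epsilon*I
#   dV/dt = delta*I - gamma*H*V - zeta*V
alpha, beta, gamma, delta, epsilon, zeta = 10.0, 0.1, 0.01, 0.5, 0.5, 3.0  # assumed values

H, I, V = 50.0, 0.0, 0.0   # start with no infected cells and no virions
dt = 1e-3
for _ in range(200000):    # Euler integration up to t = 200
    dH = alpha - beta*H - gamma*H*V
    dI = gamma*H*V - epsilon*I
    dV = delta*I - gamma*H*V - zeta*V
    H, I, V = H + dt*dH, I + dt*dI, V + dt*dV

print(H, I, V)  # H approaches alpha/beta = 100; I and V remain at 0
```

With no virions present the infection terms vanish identically, so the model reduces to the single linear equation $\dot H=\alpha -\beta H$.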

Another familiar example is the Lotka–Volterra predator–prey model, with two species:

- R: rabbits,
- W: wolves,

and three transitions: the birth of a rabbit (one R as input, two R’s as output), predation (one R and one W as input, two W’s as output), and the death of a wolf (one W as input, no output).

We say a population vector Q is

**complex balanced** if for each complex $\kappa \in X$ we have

$$\sum _{\tau :\,s(\tau )=\kappa }r(\tau ){Q}^{s(\tau )}=\sum _{\tau :\,t(\tau )=\kappa }r(\tau ){Q}^{s(\tau )},$$

that is, the total flow out of the complex κ equals the total flow into it. Complex balanced population vectors are steady states of the rate equations, and for such Q the generalized relative information $I(P(t),Q)$ is nonincreasing when $P(t)$ evolves according to the rate equations.
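A minimal sketch for the simplest case, a reversible reaction $A\rightleftharpoons B$ with assumed rate constants (an invented example, not from the text): Q is complex balanced exactly when $r_{1}Q_{A}=r_{2}Q_{B}$, and then the generalized relative information $I(P(t),Q)$ decreases along solutions of the rate equations:

```python
import math

r1, r2 = 2.0, 1.0     # assumed rate constants for A -> B and B -> A
Q = [1.0, 2.0]        # complex balanced: r1*Q_A = r2*Q_B
assert r1*Q[0] == r2*Q[1]

def I(P, Q):
    """Generalized relative information for population vectors."""
    return sum(Pi * math.log(Pi / Qi) - (Pi - Qi) for Pi, Qi in zip(P, Q))

# Rate equations for A <-> B, Euler-integrated, recording I(P(t), Q).
P, dt, history = [2.5, 0.5], 1e-3, []
for _ in range(10000):
    history.append(I(P, Q))
    flux = r1*P[0] - r2*P[1]          # net rate of the reaction A -> B
    P = [P[0] - dt*flux, P[1] + dt*flux]

print(history[0], history[-1])  # I(P(t), Q) decreases toward 0
```

Since the total population $P_{A}+P_{B}$ is conserved and starts at $Q_{A}+Q_{B}$, the populations relax to Q itself, and the recorded values of $I(P(t),Q)$ decrease monotonically to zero.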

## 6. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

- Crooks, G.E. On measures of entropy and information. Available online: http://threeplusone.com/info (accessed on 27 January 2016).
- Gorban, A.N.; Gorban, P.A.; Judge, G. Entropy: The Markov ordering approach. Entropy **2010**, 12, 1145–1193.
- Hobson, A. Concepts in Statistical Mechanics; Gordon and Breach: New York, NY, USA, 1971.
- Baez, J.C.; Fritz, T. A Bayesian characterization of relative entropy. Theory Appl. Categories **2014**, 29, 422–456.
- Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: Hoboken, NJ, USA, 2006.
- Pollard, B. Open Markov processes: a compositional perspective on non-equilibrium steady states in biology. **2016**.
- Harper, M. Information geometry and evolutionary game theory. **2009**.
- Harper, M. The replicator equation as an inference dynamic. **2009**.
- Akin, E. The Geometry of Population Genetics; Springer: Berlin/Heidelberg, Germany, 1979.
- Akin, E. The differential geometry of population genetics and evolutionary games. In Mathematical and Statistical Developments of Evolutionary Theory; Lessard, S., Ed.; Springer: Berlin/Heidelberg, Germany, 1990; pp. 1–93.
- Hofbauer, J.; Schuster, P.; Sigmund, K. A note on evolutionarily stable strategies and game dynamics. J. Theor. Biol. **1979**, 81, 609–612.
- Sandholm, W.H. Evolutionary game theory. Available online: http://www.ssc.wisc.edu/~whs/research/egt.pdf (accessed on 27 January 2016).
- Smith, J.M. Game theory and the evolution of fighting. In On Evolution; Edinburgh University Press: Edinburgh, UK, 1972; pp. 8–28.
- Smith, J.M. Evolution and the Theory of Games; Cambridge University Press: Cambridge, UK, 1982.
- Thomas, B. On evolutionarily stable sets. J. Math. Biol. **1985**, 22, 105–115.
- Mitchell, M. An Introduction to Genetic Algorithms; MIT Press: Cambridge, MA, USA, 1998.
- Friston, K.; Ao, P. Free energy, value, and attractors. Comput. Math. Methods Med. **2012**, 2012, 937860.
- Edelman, G.M. Neural Darwinism: The Theory of Neuronal Group Selection; Basic Books: New York, NY, USA, 1987.
- Nielsen, R. Statistical Methods in Molecular Evolution; Springer: Berlin/Heidelberg, Germany, 2005.
- Sober, E.; Steel, M. Entropy increase and information loss in Markov models of evolution. Biol. Philos. **2011**, 26, 223–250.
- Gorban, A.N. General H-theorem and entropies that violate the Second Law. Entropy **2014**, 16, 2408–2432.
- Liese, F.; Vajda, I. Convex Statistical Distances; Teubner: Leipzig, Germany, 1987.
- Moran, P.A.P. Entropy, Markov processes and Boltzmann’s H-theorem. Math. Proc. Camb. Philos. Soc. **1961**, 57, 833–842.
- Price, H. Time’s Arrow and Archimedes’ Point: New Directions for the Physics of Time; Oxford University Press: Oxford, UK, 1997.
- Zeh, H.D. The Physical Basis of the Direction of Time; Springer: Berlin/Heidelberg, Germany, 2001.
- Norris, J.R. Markov Processes; Cambridge University Press: Cambridge, UK, 1997.
- Rogers, L.C.G.; Williams, D. Diffusions, Markov Processes, and Martingales: Volume 1, Foundations, 2nd ed.; Cambridge University Press: Cambridge, UK, 2000.
- Rogers, L.C.G.; Williams, D. Diffusions, Markov Processes, and Martingales: Volume 2, Itô Calculus, 2nd ed.; Cambridge University Press: Cambridge, UK, 2000.
- Ethier, S.N.; Kurtz, T.G. Markov Processes: Characterization and Convergence; Wiley: Hoboken, NJ, USA, 2005.
- Baez, J.C.; Biamonte, J. Quantum Techniques for Stochastic Mechanics. **2015**.
- Korobeinikov, A. Global properties of basic virus dynamics models. Bull. Math. Biol. **2004**, 66, 879–883.
- Horn, F.; Jackson, R. General mass action kinetics. Arch. Ration. Mech. Anal. **1972**, 49, 81–116.
- Feinberg, M. Lectures on chemical reaction networks. Available online: https://crnt.osu.edu/LecturesOnReactionNetworks (accessed on 27 January 2016).
- Gopalkrishnan, M. Lyapunov functions for complex-balanced systems. Available online: https://johncarlosbaez.wordpress.com/2014/01/07/lyapunov-functions-for-complex-balanced-systems/ (accessed on 27 January 2016).
- Anderson, D. Comment on Azimuth Blog, 2014. Available online: https://johncarlosbaez.wordpress.com/2014/01/07/lyapunov-functions-for-complex-balanced-systems/#comment-35537 (accessed on 27 January 2016).

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons by Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Baez, J.C.; Pollard, B.S.
Relative Entropy in Biological Systems. *Entropy* **2016**, *18*, 46.
https://doi.org/10.3390/e18020046
