# The Free Energy Requirements of Biological Organisms; Implications for Evolution


## Abstract


## 1. Introduction

- a physical system $\mathcal{S}$;
- a process Λ running over $\mathcal{S}$;
- an associated coarse-grained set V giving the macrostates of $\mathcal{S}$;

- running Λ on $\mathcal{S}$ ensures that the distribution across V changes according to π, even if the initial distribution differs from $P\left({v}_{t}\right)$;
- Λ is thermodynamically reversible if applied to $P\left({v}_{t}\right)$.

- Motivated by the example of a digital computer, the analysis in [46] was formulated for systems that change the value v of a single set of physical variables, V. As formulated there, for example, bit erasure means a map that sends both ${v}_{t}=0$ and ${v}_{t}=1$ to ${v}_{t+1}=0$. Here, I instead formulate the analysis for biological “input-output” systems that implement an arbitrary stochastic map taking one set of “input” physical variables X, representing the state of a sensor, to a separate set of “output” physical variables Y, representing the action taken by the organism in response to its sensor reading. As formulated in this paper, “bit erasure” therefore means a map π that sends both ${x}_{t}=0$ and ${x}_{t}=1$ to ${y}_{t+1}=0$. My first contribution is to show how to implement any given stochastic map $X\to Y$ with a process that requires minimal work when applied to some specified distribution over X, and to calculate that minimal work.
- In light of the free energy costs associated with implementing a map π, what π would we expect to be favored by natural selection? In particular, recall that adding noise to a computation can reduce how much work is needed to implement it. Indeed, by using a sufficiently noisy π, an organism can increase its stored free energy (if it started in a state with less than maximal entropy). Therefore, noise might not just be a hindrance that an organism needs to circumvent; an organism may actually exploit noise to “recharge its battery”. This suggests that an organism will want to implement a “behavior” π that is as noisy as possible. In addition, not all terms in a map ${x}_{t}\to {y}_{t+1}$ are equally important to an organism’s reproductive fitness. It will be important to be very precise in what output is produced for some inputs ${x}_{t}$, but for other inputs, precision is not so important. Indeed, for some inputs, it may not matter at all what output the organism produces in response. In light of this, natural selection would be expected to favor organisms that implement behaviors π that are as noisy as possible (thereby saving on the amount of free energy the organism needs to acquire from its environment to implement that behavior), while still being precise for those inputs where behavioral fitness requires it. I write down the equations for the π that optimizes this tradeoff and show that it is approximated by a Boltzmann distribution over a sum of behavioral fitness and energy. I then use that Boltzmann distribution to calculate a lower bound on the maximal reproductive fitness over all possible behaviors π.
- My last contribution is to use the preceding results to relate the free energy flux incident on the entire biosphere to the maximal “rate of computation” implemented by the biosphere. This relation gives an upper bound on the rate of computation that humanity as a whole can ever achieve, if it restricts itself to the surface of Earth.
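As a minimal numerical sketch of the first contribution (assuming, purely for illustration, that all input and output states have equal energy, so only the entropy terms of the generalized Landauer bound survive), the minimal work to implement a stochastic map, and the way noise can make that work negative, can be computed directly:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats, skipping zero-probability states."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def minimal_work(p_x, pi, kT=1.0):
    """Minimal work to implement the stochastic map pi[y, x] = pi(y | x)
    on input distribution p_x, assuming flat Hamiltonians, so that
    W_min = kT [S(P(X)) - S(P(Y))]."""
    p_y = pi @ p_x
    return kT * (entropy(p_x) - entropy(p_y))

p_x = np.array([0.5, 0.5])
erase = np.array([[1.0, 1.0],   # both x=0 and x=1 -> y=0 (bit erasure)
                  [0.0, 0.0]])
noisy = np.array([[0.5, 0.5],   # maximally noisy map
                  [0.5, 0.5]])

print(minimal_work(p_x, erase))  # kT ln 2 > 0: erasing a fair bit costs work
print(minimal_work(p_x, noisy))  # zero here; negative for non-uniform p_x
```

For a non-uniform input such as `p_x = [0.9, 0.1]`, the noisy map gives a negative value: work is extracted, which is the “recharge its battery” effect described above.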

## 2. Formal Preliminaries

#### 2.1. General Notation

#### 2.2. Thermodynamically-Optimal Processes

**Definition 1.**

- at both the start and finish of Λ, the system is in contact with a (single) heat bath at temperature T;
- Λ transforms any starting distribution P to an ending distribution $\pi P$, where neither of those two distributions need be at equilibrium for their respective Hamiltonians;
- Λ is thermodynamically reversible when run on some particular starting distribution P.

**Example 1.**

**Example 2.**

#### 2.3. Quenching Processes

- an initial/final Hamiltonian ${H}_{sys}^{t}\left(r\right)$;
- an initial distribution ${\rho}^{t}\left(r\right)$;
- a final distribution ${\rho}^{t+1}\left(r\right)$;

- (i) To begin, the system has Hamiltonian ${H}_{sys}^{t}\left(r\right)$, which is quenched into a first quenching Hamiltonian:$$\begin{array}{ccc}\hfill {H}_{quench}^{t}\left(r\right)& \equiv & -kTln\left[{\rho}^{t}\left(r\right)\right]\hfill \end{array}$$Because the quench is effectively instantaneous, it is thermodynamically reversible and is adiabatic, involving no heat transfer between the system and the heat bath. On the other hand, while r is unchanged in a quench, and therefore so is the distribution over R, in general, work is required if ${H}_{quench}^{t}\ne {H}_{sys}^{t}$ (see [32,33,53,54]). Note that if the Q process is applied to the distribution ${\rho}^{t}$, then at the end of this first step, the distribution is at thermodynamic equilibrium. However, if the process is applied to any other distribution, this will not be the case. In this situation, work is unavoidably dissipated in the next step.
- (ii) Next, we isothermally and quasi-statically transform ${H}_{quench}^{t}$ to a second quenching Hamiltonian,$$\begin{array}{ccc}\hfill {H}_{quench}^{t+1}\left(r\right)& \equiv & -kTln\left[{\rho}^{t+1}\left(r\right)\right]\hfill \end{array}$$
- (iii) Next, we run a quench over R “in reverse”, instantaneously replacing the Hamiltonian ${H}_{quench}^{t+1}\left(r\right)$ with the initial Hamiltonian ${H}_{sys}^{t}$, with no change to r. As in step (i), while work may be done (or extracted) in step (iii), no heat is transferred.

**Example 3.**

#### 2.4. Guided Q Processes

- an initial/final Hamiltonian ${H}_{sys}^{t}(r,s)$;
- an initial joint distribution ${\rho}^{t}(r,s)$;
- a time-independent partition of S specifying an associated set of macrostates, $v\in V$;
- a conditional distribution $\overline{\pi}(r\mid v)$.

- (i) To begin, the system has Hamiltonian ${H}_{sys}^{t}(r,s)$, which is quenched into a first quenching Hamiltonian written as:$$\begin{array}{ccc}\hfill {H}_{quench}^{t}(r,s)& \equiv & {H}_{quench;S}^{t}\left(s\right)+{H}_{quench;int}^{t}(r,s)\hfill \end{array}$$$$\begin{array}{ccc}\hfill {H}_{quench;int}^{t}(r,s)& \equiv & -kTln\left[{\rho}^{t}(r\mid s)\right]\hfill \end{array}$$$$\begin{array}{ccc}\hfill {H}_{quench;S}^{t}\left(s\right)& \equiv & -kTln\left[{\rho}^{t}\left(s\right)\right]\hfill \end{array}$$Note that away from those boundaries of the partition elements defining V, ${\rho}^{t}(r,s)$ is the equilibrium distribution for ${H}_{quench}^{t}$.
- (ii) Next, we isothermally and quasi-statically transform ${H}_{quench}^{t}$ to a second quenching Hamiltonian,$$\begin{array}{ccc}\hfill {H}_{quench}^{t+1}(r,s)& \equiv & {H}_{quench;S}^{t}\left(s\right)+{H}_{quench;int}^{t+1}(r,s)\hfill \end{array}$$$$\begin{array}{ccc}\hfill {H}_{quench;int}^{t+1}(r,s)& \equiv & -kTln\left[\overline{\pi}(r\mid V\left(s\right))\right]\hfill \end{array}$$Note that the term in the Hamiltonian that only concerns $\mathcal{S}$ does not change in this step. Therefore, the infinite potential barriers delineating partition boundaries in S remain for the entire step. I assume that as a result of those barriers, the coupling of $\mathcal{S}$ with the heat bath during this step cannot change the value of v. As a result, even though the distribution over r changes in this step, there is no change to the value of v. To describe this, I say that v is “semi-stable” during this step. (To state this assumption more formally, let $A({s}^{\prime},{s}^{\prime \prime})$ be the (matrix) kernel that specifies the rate at which ${s}^{\prime}\to {s}^{\prime \prime}$ due to heat transfer between $\mathcal{S}$ and the heat bath during this step (ii) [32,33]. Then, I assume that $A({s}^{\prime},{s}^{\prime \prime})$ is arbitrarily small if $V({s}^{\prime \prime})\ne V({s}^{\prime})$.) As an example, the different bit strings that can be stored in a flash drive all have the same expected energy, but the energy barriers separating them ensure that the distribution over bit strings relaxes to the uniform distribution infinitesimally slowly. Therefore, the value of the bit string is semi-stable. Note that even though a semi-stable system is not at thermodynamic equilibrium during its “dynamics” (in which its macrostate does not change), that dynamics is thermodynamically reversible, in that we can run it backwards in time without requiring any work or resulting in heat dissipation.
- (iii) Next, we run a quench over $R\times S$ “in reverse”, instantaneously replacing the Hamiltonian ${H}_{quench}^{t+1}(r,s)$ with the initial Hamiltonian ${H}_{sys}^{t}(r,s)$, with no change to r or s. As in step (i), while work may be done (or extracted) in step (iii), no heat is transferred.
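The decomposition of the first quenching Hamiltonian in step (i) can be checked numerically (a sketch on a hypothetical $R\times S$ space with a made-up joint distribution; kT is set to one):

```python
import numpy as np

kT = 1.0
# joint distribution rho^t(r, s) on a toy product space R x S
rho = np.array([[0.30, 0.10],
                [0.05, 0.25],
                [0.15, 0.15]])   # rows index r, columns index s; sums to 1

rho_s = rho.sum(axis=0)          # marginal rho^t(s)
rho_r_given_s = rho / rho_s      # conditional rho^t(r | s)

# the two pieces of the first quenching Hamiltonian
H_quench_S = -kT * np.log(rho_s)             # -kT ln rho^t(s)
H_quench_int = -kT * np.log(rho_r_given_s)   # -kT ln rho^t(r | s)
H_quench = H_quench_S + H_quench_int         # H_quench^t(r, s)

# consistency check: the Boltzmann factor of H_quench recovers the joint,
# so rho^t is (away from partition boundaries) its equilibrium distribution
print(np.allclose(np.exp(-H_quench / kT), rho))
```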

**Proposition 1.**

## 3. Organisms

#### 3.1. The Input and Output Spaces of an Organism

1. $[{x}_{t},0]\to [{x}_{t},{y}_{t}]$, where ${y}_{t}$ is formed by sampling $\pi ({y}_{t}\mid {x}_{t})$;
2. $[{x}_{t},{y}_{t}]\to [0,{y}_{t}]$;
3. If ${y}_{t}\in {Y}_{halt}$, the process ends;
4. $[0,{y}_{t}]\to [{y}_{t},{y}_{t}]$;
5. $[{y}_{t},{y}_{t}]\to [{y}_{t},0]$;
6. Return to step 1 with t replaced by $t+1$;
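The iteration above can be sketched as a loop (`Y_HALT`, `pi`, and `sample_input` are hypothetical stand-ins for the organism's halting set, its behavior, and its sensor distribution):

```python
import random

Y_HALT = {2}   # hypothetical halting outputs

def ping_pong(pi, sample_input, max_iters=100):
    """One run of the organism process. The state is the pair [x, y];
    the five steps below mirror the ping-pong sequence in the text."""
    x, y = sample_input(1), 0            # initial state [x_1, 0]
    for t in range(1, max_iters + 1):
        y = pi(x)                        # (1) [x_t, 0]   -> [x_t, y_t], y_t ~ pi(.|x_t)
        x = 0                            # (2) [x_t, y_t] -> [0, y_t]
        if y in Y_HALT:                  # (3) halt if y_t is in Y_halt
            return y, t
        x = y                            # (4) [0, y_t]   -> [y_t, y_t]
        y = 0                            # (5) [y_t, y_t] -> [y_t, 0]
    return y, max_iters

# hypothetical noisy behavior pi(y | x) and sensor distribution
pi = lambda x: random.choice([x, 2])
out, t = ping_pong(pi, lambda t: random.choice([0, 1]))
print(out, t)
```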

#### 3.2. The Thermodynamics of Mapping an Input Space to an Output Space

**Proposition 2.**

- With probability ${\mathcal{P}}^{\prime}({y}_{t}\in {Y}_{halt})$, the ping-pong sequence at iteration t of the associated organism process maps the distribution:$$\begin{array}{ccc}\hfill {\mathcal{P}}^{\prime}\left({x}_{t}\right)\delta ({y}_{t-1},0)& \to & \delta ({x}_{t},0){\mathcal{P}}^{\prime}({y}_{t}\mid {y}_{t}\in {Y}_{halt})\hfill \end{array}$$$$\begin{array}{ccc}\hfill {\mathcal{P}}^{\prime}\left({x}_{t}\right)\delta ({y}_{t-1},0)& \to & \mathcal{P}({x}_{t+1}\mid {y}_{t}\notin {Y}_{halt})\delta ({y}_{t},0)\hfill \end{array}$$
- If ${\mathcal{G}}_{t}={\mathcal{P}}_{t}$ for all $t\le {\tau}^{*}$, the total work the organism expends to map the initial distribution ${\mathcal{P}}_{1}\left(x\right)$ to the ending distribution ${\mathcal{P}}_{{\tau}^{*}}\left(y\right)$ is:$$\begin{array}{ccc}\hfill {\Omega}_{{\mathcal{P}}_{1}}^{\pi}& \equiv & \sum _{y}{\mathcal{P}}_{{\tau}^{*}}\left(y\right)\mathbb{E}({H}_{out}\mid y)-\mathbb{E}{({H}_{out}\mid {y}^{\prime})|}_{{y}^{\prime}=0}-\sum _{x}{\mathcal{P}}_{1}\left(x\right)\mathbb{E}({H}_{in}\mid x)+\mathbb{E}{({H}_{in}\mid {x}^{\prime})|}_{{x}^{\prime}=0}\hfill \\ & & +kT\left(S\left({\mathcal{P}}_{1}\left(X\right)\right)-S\left({\mathcal{P}}_{{\tau}^{*}}\left(Y\right)\right)\right)\hfill \end{array}$$
- There is no physical process that both performs the same map as the organism process and that requires less work than the organism process does when applied to $\mathcal{P}\left({x}_{t}\right)\delta ({y}_{t},0)$.
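The work expression above can be evaluated directly (a sketch; the Hamiltonian expectations `E_in` and `E_out` are hypothetical, and the initialized reference states are taken to be $x^{\prime}=0$ and $y^{\prime}=0$ as in the formula):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats, skipping zero-probability states."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def omega(p1, pi, E_in, E_out, kT=1.0):
    """Reversible work to map P_1(x) to P_{tau*}(y) = sum_x pi(y|x) P_1(x):
    output/input energy terms relative to the initialized states (index 0),
    plus the kT [S(P_1(X)) - S(P_{tau*}(Y))] entropy term."""
    p_end = pi @ p1
    energy_out = np.sum(p_end * E_out) - E_out[0]   # sum_y P(y) E(H_out|y) - E(H_out|0)
    energy_in = np.sum(p1 * E_in) - E_in[0]         # sum_x P_1(x) E(H_in|x) - E(H_in|0)
    return energy_out - energy_in + kT * (entropy(p1) - entropy(p_end))

p1 = np.array([0.5, 0.5])
pi = np.array([[1.0, 1.0],    # erasure: every x -> y = 0
               [0.0, 0.0]])
E_in = np.zeros(2)            # hypothetical flat Hamiltonians
E_out = np.zeros(2)
print(omega(p1, pi, E_in, E_out))   # kT ln 2 for erasing one fair bit
```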

**Proof.**

#### 3.3. Input Distributions and Dissipated Work

#### 3.4. Optimal Organisms

- In some biological scenarios, the amount of such dissipated work that cannot be avoided in implementing π, ${\widehat{W}}_{{\mathcal{P}}_{1}}^{\pi}$, will be comparable to (or even dominate) the minimal amount of reversible work needed to implement π, ${\Omega}_{{\mathcal{P}}_{1}}^{\pi}$. However, for simplicity, in the sequel, I concentrate solely on the dependence on π of the reproductive fitness of a process implementing π that arises due to its effect on ${W}_{{\mathcal{P}}_{1}}^{\pi}$. Equivalently, I assume that I can approximate differences ${W}_{{\mathcal{P}}_{1}}^{\pi}-{W}_{{\mathcal{P}}_{1}}^{{\pi}^{\prime}}$ as equal to ${\Omega}_{{\mathcal{P}}_{1}}^{\pi}-{\Omega}_{{\mathcal{P}}_{1}}^{{\pi}^{\prime}}$, up to an overall proportionality constant.
- Real organisms have internal energy stores that allow them to use free energy extracted from the environment at a time ${t}^{\prime}<1$ to drive a process at time $t=1$, thereby “smoothing out” their free energy needs. For simplicity, I ignore such energy stores. Under this simplification, the organism needs to extract at least ${\Omega}_{{\mathcal{P}}_{1}}^{\pi}$ of free energy from its environment to implement a single iteration of π on ${\mathcal{P}}_{1}$. That minimal amount of needed free energy is another contribution to the “reproductive fitness cost to the organism of physically implementing π starting from the input distribution ${\mathcal{P}}_{1}$”.
- As another simplifying assumption, I suppose that the (expected) reproductive fitness of an organism that implements the map π starting from ${\mathcal{P}}_{1}$ is just:$$\begin{array}{ccc}\hfill \mathcal{F}({\mathcal{P}}_{1},\pi ,f)& \equiv & \alpha {\mathbb{E}}_{{\mathcal{P}}_{1},\pi}\left(f\right)-{\Omega}_{{\mathcal{P}}_{1}}^{\pi}\hfill \end{array}$$
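To see how this fitness function trades precision against free energy cost, here is a toy comparison (a sketch with a hypothetical behavioral fitness `f[y, x]` and flat Hamiltonians, so that ${\Omega}_{{\mathcal{P}}_{1}}^{\pi}$ reduces to its entropy term):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def fitness(p1, pi, f, alpha=3.0, kT=1.0):
    """F(P_1, pi, f) = alpha E_{P_1, pi}(f) - Omega, with flat Hamiltonians
    so that Omega reduces to kT [S(P_1(X)) - S(P(Y))]."""
    p_y = pi @ p1
    e_f = np.sum(p1 * np.sum(pi * f, axis=0))      # E_{P_1, pi}(f)
    omega = kT * (entropy(p1) - entropy(p_y))
    return alpha * e_f - omega

p1 = np.array([0.5, 0.5])
# f[y, x]: for x = 0 the output matters a lot; for x = 1 it does not
f = np.array([[1.0, 0.5],
              [0.0, 0.5]])

det = np.array([[1.0, 1.0], [0.0, 0.0]])     # precise everywhere (erasure)
noisy = np.array([[1.0, 0.5], [0.0, 0.5]])   # noisy exactly where f is flat

print(fitness(p1, det, f))
print(fitness(p1, noisy, f))   # higher: same expected f, lower work cost
```

Both maps achieve the same expected behavioral fitness, but the map that is noisy where precision does not matter pays a smaller entropy-term work cost, so its overall fitness is higher, which is the tradeoff the Boltzmann-distribution result formalizes.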

**Corollary 1.**

**Corollary 2.**

**Proof.**

In general, for different values of ${x}_{1}$, the conditional distributions $\pi ({y}_{1}\mid {x}_{1})$ that optimize $\mathcal{F}$ will put all of their probability mass on different edges of the unit simplex over Y. So when those edges are averaged under ${\mathcal{P}}_{1}\left({x}_{1}\right)$, the result is a marginal distribution $\mathcal{P}\left({y}_{1}\right)$ that lies in the interior of the unit simplex over Y.

## 4. General Implications for Biology

## 5. Discussion

## Acknowledgments

## Conflicts of Interest

## Appendix A: Proof of Proposition 1

**Lemma 1.**

**Proof.**

## Appendix B: The GQ Processes Iterating a Ping-Pong Sequence

- To construct the GQ process for the first stage, begin by writing:$$\begin{array}{ccc}\hfill {\rho}^{t}(w,u)& =& \sum _{x,y}{\mathcal{G}}_{t}\left(x\right)\delta (y,0){q}_{proc}^{x}\left(w\right){q}_{out}^{y}\left(u\right)\hfill \\ & =& {q}_{out}^{0}\left(u\right){\mathcal{G}}_{t}\left(\mathcal{X}\left(w\right)\right){q}_{proc}^{\mathcal{X}\left(w\right)}\left(w\right)\hfill \end{array}$$$$\begin{array}{ccc}\hfill {\rho}^{t}(u\mid x)& =& \frac{{\sum}_{w\in \mathcal{X}\left(x\right)}{\rho}^{t}(w,u)}{{\sum}_{{u}^{\prime},w\in \mathcal{X}\left(x\right)}{\rho}^{t}(w,{u}^{\prime})}\hfill \\ & =& {q}_{out}^{0}\left(u\right)\hfill \end{array}$$Next, by the discussion at the end of Section 2.4, this GQ process will be thermodynamically reversible since by assumption, ${\rho}^{t}(u\mid x)$ is the actual initial distribution over u conditioned on x.
- To construct the GQ process for the second stage, start by defining an initial distribution based on a (possibly counterfactual) prior ${\mathcal{G}}_{t}\left(x\right)$:$$\begin{array}{ccc}\hfill \widehat{\rho}({w}_{t},{u}_{t})& \equiv & \sum _{x,y}{\mathcal{G}}_{t}\left(x\right){q}_{proc}^{x}\left({w}_{t}\right)\pi (y\mid x){q}_{out}^{y}\left({u}_{t}\right)\hfill \end{array}$$$$\begin{array}{ccc}\hfill \widehat{\rho}({w}_{t}\mid {y}_{t})& =& \frac{{\sum}_{{u}_{t}\in \mathcal{Y}\left({y}_{t}\right)}\widehat{\rho}({w}_{t},{u}_{t})}{{\sum}_{{w}^{\prime},{u}^{\prime}\in \mathcal{Y}\left({y}_{t}\right)}\widehat{\rho}({w}^{\prime},{u}^{\prime})}\hfill \end{array}$$$$\begin{array}{ccc}\hfill \widehat{\rho}({w}_{t}\mid {y}_{t})& =& {\mathcal{G}}_{t}({x}_{t}\mid {y}_{t}){q}_{proc}^{{x}_{t}}\left({w}_{t}\right)\hfill \end{array}$$$$\begin{array}{ccc}\hfill {\mathcal{G}}_{t}({x}_{t}\mid {y}_{t})& \equiv & \frac{\pi ({y}_{t}\mid {x}_{t}){\mathcal{G}}_{t}\left({x}_{t}\right)}{{\sum}_{{x}^{\prime}}\pi ({y}_{t}\mid {x}^{\prime}){\mathcal{G}}_{t}\left({x}^{\prime}\right)}\hfill \end{array}$$$$\begin{array}{ccc}\hfill \overline{\pi}({w}_{t}\mid {y}_{t})& \equiv & I({w}_{t}\in \mathcal{X}\left(0\right)){q}_{proc}^{0}\left({w}_{t}\right)\hfill \end{array}$$Next, by the discussion at the end of Section 2.4, this GQ process will be thermodynamically reversible if $\widehat{\rho}({w}_{t}\mid {y}_{t+1})$ is the actual distribution over ${w}_{t}$ conditioned on ${y}_{t+1}$. By Equation (76), this in general requires that ${\mathcal{G}}_{t}\left({x}_{t}\right)$, the assumption for the initial distribution over ${x}_{t}$ that is built into the step (ii) GQ process, is the actual initial distribution over ${x}_{t}$. As discussed at the end of Section 2.3, work will be dissipated if this is not the case. 
Physically, this means that if the device implementing this GQ process is thermodynamically optimal for one input distribution, but used with another, then work will be dissipated (the amount of work dissipated is given by the change in the Kullback–Leibler divergence between G and $\mathcal{P}$ in that stage (4) GQ process; see [46]).
- We can also implement the fourth stage by running a (different) GQ process over X guided by Y. This GQ process is a simple copy operation, i.e., implements a single-valued, invertible function from ${y}_{t+1}$ to the initialized state x. Therefore, it is thermodynamically reversible. Finally, we can implement the fifth stage by running an appropriate GQ process over Y guided by X. This process will also be thermodynamically reversible.
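The Bayesian posterior ${\mathcal{G}}_{t}({x}_{t}\mid {y}_{t})$ above, and the dissipation incurred when the prior ${\mathcal{G}}_{t}$ built into the device differs from the actual input distribution $\mathcal{P}$, can be sketched numerically (`pi`, `G`, and `P` are hypothetical; as stated in the text, the dissipated work is measured by the drop in Kullback–Leibler divergence, here in units of kT):

```python
import numpy as np

def bayes_posterior(prior, pi):
    """G_t(x | y) = pi(y|x) G_t(x) / sum_x' pi(y|x') G_t(x'),
    with pi[y, x] = pi(y | x). Rows of the result index y."""
    joint = pi * prior                             # joint[y, x] = pi(y|x) G(x)
    return joint / joint.sum(axis=1, keepdims=True)

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / q[nz]))

pi = np.array([[0.9, 0.2],
               [0.1, 0.8]])
G = np.array([0.5, 0.5])    # prior assumed by (built into) the device
P = np.array([0.8, 0.2])    # actual input distribution

print(bayes_posterior(G, pi))
# dissipated work ~ drop in KL(P || G) across the stage; it is
# non-negative by the data-processing inequality for KL divergence
print(kl(P, G) - kl(pi @ P, pi @ G))
```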

## References

- Frank, S.A. Natural selection maximizes Fisher information. J. Evolut. Biol. **2009**, 22, 231–244.
- Frank, S.A. Natural selection. V. How to read the fundamental equations of evolutionary change in terms of information theory. J. Evolut. Biol. **2012**, 25, 2377–2396.
- Donaldson-Matasci, M.C.; Bergstrom, C.T.; Lachmann, M. The fitness value of information. Oikos **2010**, 119, 219–230.
- Krakauer, D.C. Darwinian demons, evolutionary complexity, and information maximization. Chaos Interdiscip. J. Nonlinear Sci. **2011**, 21, 037110.
- Taylor, S.F.; Tishby, N.; Bialek, W. Information and fitness. arXiv **2007**, arXiv:0712.4382.
- Bullmore, E.; Sporns, O. The economy of brain network organization. Nat. Rev. Neurosci. **2012**, 13, 336–349.
- Sartori, P.; Granger, L.; Lee, C.F.; Horowitz, J.M. Thermodynamic costs of information processing in sensory adaptation. PLoS Comput. Biol. **2014**, 10, e1003974.
- Mehta, P.; Schwab, D.J. Energetic costs of cellular computation. Proc. Natl. Acad. Sci. USA **2012**, 109, 17978–17982.
- Mehta, P.; Lang, A.H.; Schwab, D.J. Landauer in the age of synthetic biology: Energy consumption and information processing in biochemical networks. J. Stat. Phys. **2015**, 162, 1153–1166.
- Laughlin, S.B. Energy as a constraint on the coding and processing of sensory information. Curr. Opin. Neurobiol. **2001**, 11, 475–480.
- Govern, C.C.; ten Wolde, P.R. Energy dissipation and noise correlations in biochemical sensing. Phys. Rev. Lett. **2014**, 113, 258102.
- Govern, C.C.; ten Wolde, P.R. Optimal resource allocation in cellular sensing systems. Proc. Natl. Acad. Sci. USA **2014**, 111, 17486–17491.
- Lestas, I.; Vinnicombe, G.; Paulsson, J. Fundamental limits on the suppression of molecular fluctuations. Nature **2010**, 467, 174–178.
- England, J.L. Statistical physics of self-replication. J. Chem. Phys. **2013**, 139, 121923.
- Landenmark, H.K.; Forgan, D.H.; Cockell, C.S. An estimate of the total DNA in the biosphere. PLoS Biol. **2015**, 13, e1002168.
- Landauer, R. Irreversibility and heat generation in the computing process. IBM J. Res. Dev. **1961**, 5, 183–191.
- Landauer, R. Minimal energy requirements in communication. Science **1996**, 272, 1914–1918.
- Landauer, R. The physical nature of information. Phys. Lett. A **1996**, 217, 188–193.
- Bennett, C.H. Logical reversibility of computation. IBM J. Res. Dev. **1973**, 17, 525–532.
- Bennett, C.H. The thermodynamics of computation—A review. Int. J. Theor. Phys. **1982**, 21, 905–940.
- Bennett, C.H. Time/space trade-offs for reversible computation. SIAM J. Comput. **1989**, 18, 766–776.
- Bennett, C.H. Notes on Landauer’s principle, reversible computation, and Maxwell’s Demon. Stud. Hist. Philos. Sci. B **2003**, 34, 501–510.
- Maroney, O. Generalizing Landauer’s principle. Phys. Rev. E **2009**, 79, 031105.
- Plenio, M.B.; Vitelli, V. The physics of forgetting: Landauer’s erasure principle and information theory. Contemp. Phys. **2001**, 42, 25–60.
- Shizume, K. Heat generation required by information erasure. Phys. Rev. E **1995**, 52, 3495–3499.
- Fredkin, E.; Toffoli, T. Conservative Logic; Springer: Berlin/Heidelberg, Germany, 2002.
- Faist, P.; Dupuis, F.; Oppenheim, J.; Renner, R. A quantitative Landauer’s principle. arXiv **2012**, arXiv:1211.1037.
- Touchette, H.; Lloyd, S. Information-theoretic approach to the study of control systems. Physica A **2004**, 331, 140–172.
- Sagawa, T.; Ueda, M. Minimal energy cost for thermodynamic information processing: Measurement and information erasure. Phys. Rev. Lett. **2009**, 102, 250602.
- Dillenschneider, R.; Lutz, E. Comment on “Minimal Energy Cost for Thermodynamic Information Processing: Measurement and Information Erasure”. Phys. Rev. Lett. **2010**, 104, 198903.
- Sagawa, T.; Ueda, M. Fluctuation theorem with information exchange: Role of correlations in stochastic thermodynamics. Phys. Rev. Lett. **2012**, 109, 180602.
- Crooks, G.E. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Phys. Rev. E **1999**, 60, 2721–2726.
- Crooks, G.E. Nonequilibrium measurements of free energy differences for microscopically reversible Markovian systems. J. Stat. Phys. **1998**, 90, 1481–1487.
- Janna, F.C.; Moukalled, F.; Gómez, C.A. A Simple Derivation of Crooks Relation. Int. J. Thermodyn. **2013**, 16, 97–101.
- Jarzynski, C. Nonequilibrium equality for free energy differences. Phys. Rev. Lett. **1997**, 78, 2690–2693.
- Esposito, M.; van den Broeck, C. Second law and Landauer principle far from equilibrium. Europhys. Lett. **2011**, 95, 40004.
- Esposito, M.; van den Broeck, C. Three faces of the second law. I. Master equation formulation. Phys. Rev. E **2010**, 82, 011143.
- Parrondo, J.M.; Horowitz, J.M.; Sagawa, T. Thermodynamics of information. Nat. Phys. **2015**, 11, 131–139.
- Pollard, B.S. A Second Law for Open Markov Processes. arXiv **2014**, arXiv:1410.6531.
- Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. **2012**, 75, 126001.
- Takara, K.; Hasegawa, H.H.; Driebe, D. Generalization of the second law for a transition between nonequilibrium states. Phys. Lett. A **2010**, 375, 88–92.
- Hasegawa, H.H.; Ishikawa, J.; Takara, K.; Driebe, D. Generalization of the second law for a nonequilibrium initial state. Phys. Lett. A **2010**, 374, 1001–1004.
- Prokopenko, M.; Einav, I. Information thermodynamics of near-equilibrium computation. Phys. Rev. E **2015**, 91, 062143.
- Sagawa, T. Thermodynamic and logical reversibilities revisited. J. Stat. Mech. **2014**, 2014, P03025.
- Mandal, D.; Jarzynski, C. Work and information processing in a solvable model of Maxwell’s demon. Proc. Natl. Acad. Sci. USA **2012**, 109, 11641–11645.
- Wolpert, D.H. Extending Landauer’s bound from bit erasure to arbitrary computation. arXiv **2015**, arXiv:1508.05319.
- Barato, A.C.; Seifert, U. Stochastic thermodynamics with information reservoirs. Phys. Rev. E **2014**, 90, 042150.
- Deffner, S.; Jarzynski, C. Information processing and the second law of thermodynamics: An inclusive, Hamiltonian approach. Phys. Rev. X **2013**, 3, 041003.
- Barato, A.C.; Seifert, U. An autonomous and reversible Maxwell’s demon. Europhys. Lett. **2013**, 101, 60001.
- Mackay, D. Information Theory, Inference, and Learning Algorithms; Cambridge University Press: Cambridge, UK, 2003.
- Cover, T.; Thomas, J. Elements of Information Theory; Wiley: New York, NY, USA, 1991.
- Yeung, R.W. A First Course in Information Theory; Springer: Berlin/Heidelberg, Germany, 2012.
- Reif, F. Fundamentals of Statistical and Thermal Physics; McGraw-Hill: New York, NY, USA, 1965.
- Still, S.; Sivak, D.A.; Bell, A.J.; Crooks, G.E. Thermodynamics of prediction. Phys. Rev. Lett. **2012**, 109, 120604.
- Hopcroft, J.E.; Motwani, R.; Ullman, J.D. Introduction to Automata Theory, Languages and Computability; Addison-Wesley: Boston, MA, USA, 2000.
- Li, M.; Vitányi, P. An Introduction to Kolmogorov Complexity and Its Applications; Springer: Berlin/Heidelberg, Germany, 2008.
- Grunwald, P.; Vitányi, P. Shannon information and Kolmogorov complexity. arXiv **2004**, arXiv:cs/0410002.
- Kussell, E.; Leibler, S. Phenotypic diversity, population growth, and information in fluctuating environments. Science **2005**, 309, 2075–2078.
- Sandberg, A. Energetics of the brain and AI. arXiv **2016**, arXiv:1602.04019.

© 2016 by the author; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons by Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Wolpert, D.H. The Free Energy Requirements of Biological Organisms; Implications for Evolution. *Entropy* **2016**, *18*, 138.
https://doi.org/10.3390/e18040138
