Recent advances in nonequilibrium statistical physics have provided unprecedented insight into the thermodynamics of dynamic processes. The author recently used these advances to extend Landauer’s semi-formal reasoning concerning the thermodynamics of bit erasure, to derive the minimal free energy required to implement an arbitrary computation. Here, I extend this analysis, deriving the minimal free energy required by an organism to run a given (stochastic) map π from its sensor inputs to its actuator outputs. I use this result to calculate the input-output map π of an organism that optimally trades off the free energy needed to run π with the phenotypic fitness that results from implementing π. I end with a general discussion of the limits imposed on the rate of the terrestrial biosphere’s information processing by the flux of sunlight on the Earth.
It is a truism that biological systems acquire and store information about their environments [1,2,3,4,5]. However, they do not just store information; they also process that information. In other words, they perform computation. The energetic consequences for biological systems of these three processes—acquiring, storing, and processing information—are becoming the focus of an increasing body of research [6,7,8,9,10,11,12,13,14,15]. In this paper, I further this research by analyzing the energetic resources that an organism needs in order to compute in a fitness-maximizing way.
Ever since Landauer’s seminal work [16,17,18,19,20,21,22,23,24,25,26], it has been appreciated that the laws of statistical physics impose lower bounds on how much thermodynamic work must be done on a system in order for that system to undergo a two-to-one map, e.g., to undergo bit erasure. By conservation of energy, that work must ultimately be acquired from some external source (e.g., sunlight, carbohydrates, etc.). If that work on the system is eventually converted into heat that is dumped into an external heat bath, then the system acts as a heater. In the context of biology, this means that whenever a biological system (deterministically) undergoes a two-to-one map, it must use free energy from an outside source to do so and produces heat as a result.
These early analyses led to a widespread belief that there must be strictly positive lower bounds on how much free energy is required to implement any deterministic, logically-irreversible computation. Indeed, Landauer wrote that “...logical irreversibility is associated with physical irreversibility and requires a minimal heat generation”. In the context of biology, such bounds would translate to a lower limit on how much free energy a biological system must “harvest” from its environment in order to implement any particular (deterministic) computation, not just bit erasure.
A related conclusion of these early analyses was that a one-to-two map, in which noise is added to a system that is initially in one particular state with probability one, can act as a refrigerator, rather than a heater, removing heat from the environment [16,20,21,22]. Formally, the minimal work that needs to be done on a system in order to make it undergo a one-to-two map is negative. So for example, if the system is coupled to a battery that stores free energy, a one-to-two map can “power the battery”, by gaining free energy from a heat bath rather than dumping it there. To understand this intuitively, suppose we have a two-state system that is initially in one particular state with probability one. Therefore, the system initially has low entropy. That means we can connect it to a heat bath and then have it do work on a battery (assuming the battery was initially at less than maximum storage), thereby transferring energy from the heat bath into that battery. As it does this, though, the system gets thermalized, i.e., undergoes a one-to-two map (as a concrete example, this is what happens in adiabatic demagnetization of an Ising spin system).
This possibility of gaining free energy by adding noise to a computation, or at least reducing the amount of free energy the computation needs, means that there is a trade-off in biology: on the one hand, there is a benefit to having biological computation that is as precise as possible, in order to maximize the behavioral fitness that results from that computation; on the other hand, there is a benefit to having the computation be as imprecise as possible, in order to minimize the amount of free energy needed to implement that computation. This trade-off raises the intriguing possibility that some biological systems have noisy dynamics “on purpose”, as a way to maintain high stores of free energy. For such a system, the noise would not be an unavoidable difficulty to be overcome, but rather a resource to be exploited.
More recently, there has been dramatic progress in our understanding of non-equilibrium statistical physics and its relation to information-processing [27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43]. Much of this recent literature has analyzed the minimal work required to drive a physical system’s (fine-grained) microstate dynamics over a given time interval in such a way that the associated dynamics over some space of (coarse-grained) macrostates is given by some specified Markov kernel π. In particular, there has been detailed analysis of the minimal work needed when there are only two macrostates, 0 and 1, and we require that both get mapped by π to the macrostate 0 [36,38,44]. By identifying the macrostates as the Information Bearing Degrees of Freedom (IBDF) of an information-processing device like a digital computer, these analyses can be seen as elaborations of the analyses of Landauer et al. on the thermodynamics of bit erasure. Recently, these analyses of maps over binary spaces V have been applied to explicitly biological systems, at least for the special case of a periodic forcing function.
These analyses have resulted in substantial clarifications of Landauer’s semiformal reasoning, arguably overturning it in some regards. For example, this analysis has shown that the logical (ir)reversibility of π has nothing to do with the thermodynamic (ir)reversibility of a system that implements π. In particular, it is possible to implement bit erasure (which is logically irreversible) in a thermodynamically-reversible manner. In the modern understanding, there is no irreversible increase of entropy in bit erasure. Instead, there is a minimal amount of thermodynamic work that needs to be expended in a (thermodynamically reversible) implementation of bit erasure (see Example 3 below).
Many of these previous analyses consider processes for implementing π that are tailored to some specific input distribution over the macrostates. Such processes are designed to be thermodynamically reversible when run on that input distribution. However, when run on any other distribution, they are thermodynamically irreversible, resulting in wasted (dissipated) work. For example, in some of these analyses, the amount of work required to implement π depends on an assumed value of ϵ, the probability of a one in a randomly-chosen position on the bit string.
In addition, important as they are, these recent analyses do not apply to arbitrary maps π over a system’s macrostates. For example, the “quench-based” devices analyzed in [36,38,44] can only implement maps whose output is independent of their input (as an example, the output of bit erasure, an erased bit, is independent of the original state of the bit).
Similarly, the devices considered in [45,47] combine a “tape” containing a string of bits with a “tape head” that is positioned above one of the bits on the tape. In each iteration of the system, the bit currently under the tape head undergoes an arbitrary map to produce a new bit value, and then, the tape is advanced so that the head is above the next bit. Suppose that we identify the state of the IBDF of the overall tape-based system as the entire bit string, aligned so that the current tape position of the read/write subsystem is above bit zero. In other words, we would identify each state of the IBDF as an aligned bit string, where N is the number of bits that have already been processed, and the (negative) minimal index could be either finite or infinite (note that unless we specify which bit of the string is the current one, i.e., which has index zero, the update map over the string is not defined).
This tape-based system is severely restricted in the set of computations it can implement on its IBDF. For example, because the tape can only move forward, the system cannot deterministically map an IBDF state to an IBDF state with a smaller value of N. (In some variants, the tape can rewind; however, such rewinding only arises due to thermal fluctuations and therefore does not overcome the problem.)
It should be possible to extend either the quench-based devices reviewed below or the tape-based device introduced in [45,47] into a system that could perform arbitrary computation. In fact, in earlier work, I showed how to extend quench-based devices into systems that can perform arbitrary computation in a purely thermodynamically-reversible manner. This allowed me to calculate the minimal work that any system needs to implement any given conditional distribution π. To be precise, I showed how, for any π and initial distribution p, one could construct:
a physical system;
a process Λ running over that system;
an associated coarse-grained set V giving the macrostates of that system;
such that running Λ on that system ensures that the distribution across V changes according to π, even if the initial distribution differs from p;
such that Λ is thermodynamically reversible if applied to p.
By the second law, no process can implement π on p with less work than Λ requires. Therefore, by calculating the amount of work required by Λ, we calculate a lower bound on how much work is required to run π on p. In the context of biological systems, that bound is the minimal amount of free energy that any organism must extract from its external environment in order to run π.
However, just like the systems considered previously in the literature, this Λ is thermodynamically optimized for one particular initial distribution p. It would be thermodynamically irreversible (and therefore dissipate work) if used with any other initial distribution. In the context of biological systems, this means that while natural selection may produce an information-processing organism that is thermodynamically optimal in one environment, it cannot produce one that is thermodynamically optimal in all environments.
Biological systems are not only information-processing systems, however. As mentioned above, they also acquire information from their environment and store it. Many of these processes have nonzero minimal thermodynamic costs, i.e., the system must acquire some minimal free energy to implement them. In addition, biological systems often rearrange matter, thereby changing its entropy. Sometimes, these systems benefit by decreasing entropy, but sometimes, they benefit by increasing entropy, e.g., as when cells use depletion forces, when they exploit osmotic pressures, etc. This is another contribution to their free energy requirements. Of course, biological systems also typically perform physical “labor”, i.e., change the expected energy of various systems, by breaking/making chemical bonds, and on a larger scale, moving objects (including themselves), developing, growing, etc. They must harvest free energy from their environment to power this labor, as well. Some biological processes even involve several of these phenomena simultaneously, e.g., a biochemical pathway that processes information from the environment, making and breaking chemical bonds as it does so and also changing its overall entropy.
In this paper, I analyze some of these contributions to the free energy requirements of biological systems and the implications of those costs for natural selection. The precise contributions of this paper are:
Motivated by the example of a digital computer, my earlier analysis was formulated for systems that change the value v of a single set of physical variables, V. Therefore, for example, as formulated there, bit erasure means a map that sends both v = 0 and v = 1 to v = 0.
Here, I instead formulate the analysis for biological “input-output” systems that implement an arbitrary stochastic map taking one set of “input” physical variables X, representing the state of a sensor, to a separate set of “output” physical variables, Y, representing the action taken by the organism in response to its sensor reading. Therefore, as formulated in this paper, “bit erasure” means a map π that sends both x = 0 and x = 1 to y = 0. My first contribution is to show how to implement any given stochastic map π with a process that requires minimal work if it is applied to some specified distribution over X and to calculate that minimal work.
In light of the free energy costs associated with implementing a map π, what π would we expect to be favored by natural selection? In particular, recall that adding noise to a computation can result in a reduction in how much work is needed to implement it. Indeed, by using a sufficiently noisy π, an organism can increase its stored free energy (if it started in a state with less than maximal entropy). Therefore, noise might not just be a hindrance that an organism needs to circumvent; an organism may actually exploit noise, to “recharge its battery”. This implies that an organism will want to implement a “behavior” π that is as noisy as possible.
In addition, not all terms in a map are equally important to an organism’s reproductive fitness. It will be important to be very precise in what output is produced for some inputs, but for other inputs, precision is not so important. Indeed, for some inputs, it may not matter at all what output the organism produces in response. In light of this, natural selection would be expected to favor organisms that implement behaviors π that are as noisy as possible (thereby saving on the amount of free energy the organism needs to acquire from its environment to implement that behavior), while still being precise for those inputs where behavioral fitness requires it. I write down the equations for the π that optimizes this trade-off and show that it is approximated by a Boltzmann distribution over a sum of behavioral fitness and energy. I then use that Boltzmann distribution to calculate a lower bound on the maximal reproductive fitness over all possible behaviors π.
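The qualitative shape of this trade-off can be illustrated numerically. In the sketch below, for each input x the output distribution π(y|x) is taken to be a Boltzmann distribution over a behavioral fitness term; the fitness values, inputs, and outputs are all invented for illustration, and the precise definitions used in the paper appear in Section 3:

```python
import numpy as np

kT = 1.0
beta = 1.0 / kT

# Hypothetical behavioral fitness f(x, y): the benefit of producing output y
# in response to input x. Input x = 0 is "critical" (precision matters a lot);
# input x = 1 is irrelevant (all outputs equally fit).
f = np.array([[5.0, 0.0],   # x = 0: output 0 is much fitter than output 1
              [1.0, 1.0]])  # x = 1: outputs are equally fit

# For each input x, take pi(y|x) proportional to exp(beta * f(x, y)).
# (In the paper, the exponent also contains an energy/work term; the numbers
# here are pure placeholders.)
pi = np.exp(beta * f)
pi /= pi.sum(axis=1, keepdims=True)

# The critical input receives a nearly deterministic response...
print(pi[0])   # strongly peaked on y = 0
# ...while the irrelevant input receives maximal noise, saving free energy.
print(pi[1])   # uniform over the two outputs
```

The Boltzmann form automatically concentrates precision (and hence free energy expenditure) on exactly those inputs where fitness depends strongly on the output.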
My last contribution is to use the preceding results to relate the free energy flux incident on the entire biosphere to the maximal “rate of computation” implemented by the biosphere. This relation gives an upper bound on the rate of computation that humanity as a whole can ever achieve, if it restricts itself to the surface of Earth.
In Section 2, I first review some of the basic quantities considered in nonequilibrium statistical physics and then review some of the relevant recent work in nonequilibrium statistical physics (involving “quenching processes”) related to the free energy cost of computation. I then discuss the limitations in what kind of computations that recent work can be used to analyze. I end by presenting an extension to that recent work that does not have these limitations (involving “guided quenching processes”). In Section 3, I use this extension to calculate the minimal free energy cost of any given input-output “organism”. I end this section by analyzing a toy model of the role that this free energy cost would play in natural selection. Those interested mainly in these biological implications can skip Section 2 and should still be able to follow the thrust of the analysis.
In this paper, I extend the construction reviewed in Section 2.3 to show how to construct a system that performs any given computation in a thermodynamically-reversible manner. (It seems likely that the tape-based system introduced in [45,47] could also be extended to do this.)
2. Formal Preliminaries
2.1. General Notation
I write $|X|$ for the cardinality of any countable space $X$. I write the Kronecker delta between any two elements $x, x' \in X$ as $\delta(x, x')$. For any logical condition $\zeta$, $I(\zeta) = 1$ (0, respectively) if $\zeta$ is true (false, respectively). Given any distribution $p$ defined over some space $X$, I write the Shannon entropy for countable $X$, measured in nats, as:
$$S(p) = -\sum_{x \in X} p(x) \ln p(x).$$
As shorthand, I sometimes write $S(p)$ as $S(X)$, or even just $S$, when $p$ is implicit. I use similar notation for conditional entropy, joint entropy of more than one random variable, etc. I also write the mutual information between two random variables $X$ and $Y$ in the usual way, as $I(X; Y)$ [50,51,52].
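As a minimal numerical illustration (not part of the formal development), these information-theoretic quantities, measured in nats, can be computed directly:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a distribution, in nats."""
    p = np.asarray(p, dtype=float)
    nz = p > 0                     # 0 ln 0 is taken to be 0
    return -np.sum(p[nz] * np.log(p[nz]))

def mutual_information(pxy):
    """I(X;Y) = S(X) + S(Y) - S(X,Y) for a joint distribution pxy[x, y]."""
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(pxy.ravel())

# A uniform bit carries ln 2 nats of entropy.
print(entropy([0.5, 0.5]))          # ~0.6931

# Two perfectly correlated bits share ln 2 nats of information.
joint = np.array([[0.5, 0.0],
                  [0.0, 0.5]])
print(mutual_information(joint))    # ~0.6931
```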
Given a distribution $p$ over $X$ and a conditional distribution $\pi(y \mid x)$, I will use matrix notation to define the distribution $\pi p$ over $Y$:
$$(\pi p)(y) = \sum_{x} \pi(y \mid x)\, p(x).$$
For any function $f$ over $X$ and distribution $p$ over $X$, I write:
$$\mathbb{E}_p(f) = \sum_{x} p(x)\, f(x).$$
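The matrix notation above is an ordinary matrix-vector product; a small sketch with an invented conditional distribution π and distribution p:

```python
import numpy as np

# pi[y, x] = pi(y | x): each column is a conditional distribution over Y.
pi = np.array([[0.9, 0.2],
               [0.1, 0.8]])
p = np.array([0.5, 0.5])   # distribution over X

# (pi p)(y) = sum_x pi(y|x) p(x)
pi_p = pi @ p
print(pi_p)                # a valid distribution over Y

# E_p(f) = sum_x p(x) f(x)
f = np.array([1.0, 3.0])
print(p @ f)               # expectation of f under p
```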
I will also sometimes use capital letters to indicate variables that are marginalized over.
Below, I often refer to a process as “semi-static”. This means that these processes transform one Hamiltonian into another one so slowly that the associated distribution is always close to equilibrium, and as a result, only infinitesimal amounts of dissipation occur during the entire process. For this assumption to be valid, the implicit units of time in the analysis below must be sufficiently long on the timescale of the relaxation processes of the physical systems involved (or equivalently, those relaxation processes must be sufficiently quick when measured in those time units).
If a system with states $x$ is subject to a Hamiltonian $H(x)$, then the associated equilibrium free energy is:
$$\mathcal{F}^{\mathrm{eq}}(H) = -kT \ln Z_H$$
where as usual $\beta = 1/(kT)$, and the partition function is:
$$Z_H = \sum_{x} e^{-\beta H(x)}.$$
However, the analysis below focuses on nonequilibrium distributions $p(x)$, for which the more directly relevant quantity is the nonequilibrium free energy, in which the distribution need not be a Boltzmann distribution for the current Hamiltonian:
$$\mathcal{F}(H, p) = \mathbb{E}_p(H) - kT\, S(p)$$
where $k$ is Boltzmann’s constant. For fixed $H$ and $T$, $\mathcal{F}(H, p)$ is minimized by the associated Boltzmann distribution $p$, for which it has the value $\mathcal{F}^{\mathrm{eq}}(H)$. It will be useful below to consider the changes in nonequilibrium free energy that accompany a change from a distribution $P$ to a distribution $M$ accompanied by a change from a Hamiltonian $H$ to a Hamiltonian $H'$:
$$\Delta\mathcal{F} = \mathcal{F}(H', M) - \mathcal{F}(H, P).$$
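A numerical sketch of the claim that, for fixed Hamiltonian and temperature, the Boltzmann distribution minimizes the nonequilibrium free energy (the two-level Hamiltonian and all numbers below are invented for illustration, with units chosen so that kT = 1):

```python
import numpy as np

kT = 1.0

def entropy(p):
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def noneq_free_energy(H, p):
    """F(H, p) = E_p(H) - kT S(p)."""
    return p @ H - kT * entropy(p)

H = np.array([0.0, 1.0])                      # a two-level system
boltz = np.exp(-H / kT)
boltz /= boltz.sum()
F_eq = -kT * np.log(np.sum(np.exp(-H / kT)))  # equilibrium free energy -kT ln Z

# At the Boltzmann distribution, F(H, p) equals the equilibrium value...
print(noneq_free_energy(H, boltz) - F_eq)     # ~0
# ...and any other distribution has strictly higher F(H, p).
for q in [np.array([0.5, 0.5]), np.array([0.9, 0.1])]:
    assert noneq_free_energy(H, q) > F_eq
```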
2.2. Thermodynamically-Optimal Processes
If a process Λ maps a distribution $P$ to a distribution $M$ thermodynamically reversibly, starting with Hamiltonian $H$ and ending with Hamiltonian $H'$, then the amount of work it uses when applied to $P$ is $\mathcal{F}(H', M) - \mathcal{F}(H, P)$ [38,48,53,54]. Equivalently, $\mathcal{F}(H, P) - \mathcal{F}(H', M)$ is the amount of work that is extracted by Λ when transforming $P$ to $M$.
In addition, by the second law, there is no process that maps P to M while requiring less work than a thermodynamically-reversible process that maps P to M. This motivates the following definition.
Suppose a system undergoes a process Λ that starts with Hamiltonian $H$ and ends with Hamiltonian $H'$. Suppose as well that:
at both the start and finish of Λ, the system is in contact with a (single) heat bath at temperature T;
Λ transforms a starting distribution $P$ to an ending distribution $M$, where neither of those two distributions need be at equilibrium for their respective Hamiltonians;
Λ is thermodynamically reversible when run on some particular starting distribution P.
Then, Λ is thermodynamically optimal for the tuple $(P, M, H, H')$.
Suppose we run a process over a space $X \times Y$, transforming the distribution $p(x)\,M(y)$ at the start of the process to a distribution $p'(x)\,M(y)$ at its end. Therefore, $x$ and $y$ are statistically independent at both the beginning and the end of the process, and while the distribution over $x$ undergoes a transition from $p$ to $p'$, the distribution over $y$ undergoes a cyclic process, taking $M$ back to $M$ (note that it is not assumed that the ending and starting $y$’s are the same or that $x$ and $y$ are independent at intermediate times).
Suppose further that at both the beginning and end of the process, there is no interaction Hamiltonian, i.e., at those two times:
$$H(x, y) = H_X(x) + H_Y(y).$$
Then, no matter how $x$ and $y$ are coupled during the process, no matter how smart the designer of the process, the process will require work of at least:
$$\mathcal{F}(H_X, p') - \mathcal{F}(H_X, p).$$
Note that this amount of work is independent of M.
As a cautionary note, the work expended by any process operating on any initial distribution is the average of the work expended on each x. However, the associated change in nonequilibrium free energy is not the average of the change in nonequilibrium free energy for each x. This is illustrated in the following example.
Suppose we have a process Λ that sends each initial $x$ to an associated final distribution $\pi(\cdot \mid x)$, while transforming the initial Hamiltonian $H$ into the final Hamiltonian $H'$. Write $W(x)$ for the work expended by Λ when it operates on the initial state $x$. Then, the work expended by Λ operating on an initial distribution $p$ is $\sum_x p(x) W(x)$. In particular, choose the process Λ so that it sends $p$ to $\pi p$ with minimal work. Then:
$$\sum_x p(x)\, W(x) = \mathcal{F}(H', \pi p) - \mathcal{F}(H, p).$$
However, this does not equal the average over $x$ of the associated changes to nonequilibrium free energy, i.e.,
$$\sum_x p(x)\, W(x) \ne \sum_x p(x) \left[ \mathcal{F}(H', \pi(\cdot \mid x)) - \mathcal{F}(H, \delta_x) \right]$$
(where $\delta_x$ is the distribution over $X$ that is a delta function at $x$). The reason is that the entropy terms in those two nonequilibrium free energies are not linear; in general, for any probability distribution $p$,
$$S\!\left( \sum_x p(x)\, \pi(\cdot \mid x) \right) \ne \sum_x p(x)\, S(\pi(\cdot \mid x)).$$
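This nonlinearity of the entropy is easy to check numerically; a sketch with an invented deterministic map (the most extreme case, where every per-input output distribution has zero entropy but the averaged output distribution does not):

```python
import numpy as np

def entropy(p):
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

# pi[x] is the output distribution pi(. | x) for input x.
pi = np.array([[1.0, 0.0],    # x = 0 is mapped deterministically to output 0
               [0.0, 1.0]])   # x = 1 is mapped deterministically to output 1
p = np.array([0.5, 0.5])

# Entropy of the averaged output distribution...
S_of_avg = entropy(p @ pi)    # entropy of [0.5, 0.5], i.e., ln 2
# ...versus the average of the per-input output entropies.
avg_of_S = sum(p[x] * entropy(pi[x]) for x in range(2))   # each term is 0
print(S_of_avg, avg_of_S)
```

Because each conditional distribution is a delta function, the average of the per-input entropies is zero, while the entropy of the averaged distribution is ln 2; this gap is exactly why the per-x free energy changes cannot simply be averaged.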
I now summarize what will be presented in the rest of this section.
Previous work showed how to construct a thermodynamically-optimal process for many tuples $(p, \pi, H, H')$. In particular, as discussed in the Introduction, it is known how to construct a thermodynamically-optimal process for any tuple in which $\pi(y \mid x)$ is independent of $x$, like bit erasure. Accordingly, we know the minimal work necessary to run any such tuple. In Section 2.3, I review this previous analysis and show how to apply it to the kinds of input-output systems considered in this paper.
However, as discussed in the Introduction, until recently, it was not known whether one could construct a thermodynamically-optimal process for an arbitrary tuple. In particular, given an arbitrary pair of an initial distribution $p$ and conditional distribution $\pi$, it was not known whether there is a process Λ that is thermodynamically optimal for $(p, \pi, H, H')$ for some $H$ and $H'$. This means that it was not known what the minimal work is to apply an arbitrary stochastic map $\pi$ to an arbitrary initial distribution $p$. In particular, it was not known if we could use the difference in nonequilibrium free energy between $p$ and $\pi p$ to calculate the minimal work needed to apply a computation $\pi$ to an initial distribution $p$.
This shortcoming was overcome in the earlier work mentioned above, where it was explicitly shown how to construct a thermodynamically-optimal process for any tuple $(p, \pi, H, H')$. In Section 2.4, I show in detail how to construct such processes for any input-output system.
Section 2.4 also discusses the fact that a process that is thermodynamically optimal for one initial distribution $p$ need not be thermodynamically optimal for a different initial distribution $p' \ne p$. Intuitively, if we construct a process Λ that results in minimal required work for initial distribution $p$ and conditional distribution $\pi$, but then apply that machine to a different distribution $p'$, then in general, work is dissipated. While that Λ is thermodynamically reversible when applied to $p$, in general, it is not thermodynamically reversible when applied to $p'$. As an example, if we design a computer to be thermodynamically reversible for input distribution $p$, but then use it with a different distribution of inputs, then work is dissipated.
In a biological context, this means that if an organism is “designed” not to dissipate any work when it operates in an environment that produces inputs according to some $p$, but instead finds itself operating in an environment that produces inputs according to some $p' \ne p$, then it will dissipate extra work. That dissipated work is wasted, since it does not change $\pi$, i.e., it has no consequences for the input-output map that the organism implements. However, by the conservation of energy, that dissipated work must still be acquired from some external source. This means that the organism will need to harvest free energy from its environment at a higher rate (to supply that dissipated work) than would an organism that was “designed” for $p'$.
2.3. Quenching Processes
A special kind of process, often used in the literature, can be used to transform any given initial nonequilibrium distribution into another given nonequilibrium distribution in a thermodynamically-reversible manner. These processes begin by quenching the Hamiltonian of a system. After that, the Hamiltonian is isothermally and quasi-statically changed, with the system in continual contact with a heat bath at a fixed temperature T. The process ends by applying a reverse quench to return to the original Hamiltonian (see [36,38,44] for discussion of these kinds of processes).
More precisely, such a Quenching (Q) process applied to a system with microstates $r \in R$ is defined by:
an initial/final Hamiltonian $H(r)$;
an initial distribution $\rho^{t_0}(r)$;
a final distribution $\rho^{t_1}(r)$;
and involves the following three steps:
To begin, the system has Hamiltonian $H$, which is quenched into a first quenching Hamiltonian:
$$\tilde{H}^{t_0}(r) = -kT \ln \rho^{t_0}(r).$$
In other words, the Hamiltonian is changed from $H$ to $\tilde{H}^{t_0}$ too quickly for the distribution over $r$ to change from $\rho^{t_0}$.
Because the quench is effectively instantaneous, it is thermodynamically reversible and adiabatic, involving no heat transfer between the system and the heat bath. On the other hand, while $r$ is unchanged in a quench and, therefore, so is the distribution over $R$, in general, work is required if $\tilde{H}^{t_0} \ne H$ (see [32,33,53,54]).
Note that if the Q process is applied to the distribution $\rho^{t_0}$, then at the end of this first step, the distribution is at thermodynamic equilibrium. However, if the process is applied to any other distribution, this will not be the case. In that situation, work is unavoidably dissipated in the next step.
Next, we isothermally and quasi-statically transform $\tilde{H}^{t_0}$ into a second quenching Hamiltonian,
$$\tilde{H}^{t_1}(r) = -kT \ln \rho^{t_1}(r).$$
Physically, this means two things. First, a smooth sequence of Hamiltonians, starting with $\tilde{H}^{t_0}$ and ending with $\tilde{H}^{t_1}$, is applied to the system. Second, while that sequence is being applied, the system is coupled with an external heat bath at temperature $T$, where the relaxation timescales of that coupling are arbitrarily small on the time scale of the dynamics of the Hamiltonian. This second requirement ensures that, to first order, the system is always in thermal equilibrium for the current Hamiltonian, assuming it started in equilibrium at the beginning of the step (recall from Section 2.1 that I assume that quasi-static transformations occur in an arbitrarily small amount of time, since the relaxation timescales are arbitrarily short).
Next, we run a quench over $R$ “in reverse”, instantaneously replacing the Hamiltonian $\tilde{H}^{t_1}$ with the initial Hamiltonian $H$, with no change to $r$. As in step (i), while work may be done (or extracted) in step (iii), no heat is transferred.
Note that we can specify any Q process in terms of its first and second quenching Hamiltonians rather than in terms of the initial and final distributions, since there is a bijection between those two pairs. This central role of the quenching Hamiltonians is the basis of the name “Q” process (I distinguish the distribution ρ that defines a Q process, which is instantiated in the physical structure of a real system, from the actual distribution P on which that physical system is run).
Both the first and third steps of any Q process are thermodynamically reversible, no matter what distribution the process is applied to. In addition, if the Q process is applied to $\rho^{t_0}$, the second step will be thermodynamically reversible. Therefore, as discussed in [36,38,48,54], if the Q process is applied to $\rho^{t_0}$, then the expected work expended by the process is given by the change in nonequilibrium free energy in going from $\rho^{t_0}$ to $\rho^{t_1}$,
$$\mathcal{F}(H, \rho^{t_1}) - \mathcal{F}(H, \rho^{t_0}).$$
Note that because of how $\tilde{H}^{t_0}$ and $\tilde{H}^{t_1}$ are defined, there is no change in the nonequilibrium free energy during the second step of the Q process if it is applied to $\rho^{t_0}$:
$$\mathcal{F}(\tilde{H}^{t_0}, \rho^{t_0}) = \mathcal{F}(\tilde{H}^{t_1}, \rho^{t_1}) = 0.$$
All of the work arises in the first and third steps, involving the two quenches.
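A numerical check of this bookkeeping, using an invented three-microstate distribution: with the quenching Hamiltonian chosen as $\tilde{H}(r) = -kT \ln \rho(r)$ (the choice that makes $\rho$ an equilibrium distribution), $\rho$ is exactly the Boltzmann distribution for $\tilde{H}$ and the associated nonequilibrium free energy vanishes:

```python
import numpy as np

kT = 1.0

def entropy(p):
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def noneq_free_energy(H, p):
    return p @ H - kT * entropy(p)

rho = np.array([0.7, 0.2, 0.1])      # invented distribution over microstates r

# Quenching Hamiltonian built from rho.
H_quench = -kT * np.log(rho)

# rho is exactly the Boltzmann distribution for H_quench...
boltz = np.exp(-H_quench / kT)
boltz /= boltz.sum()
print(np.allclose(boltz, rho))       # the two distributions coincide

# ...so F(H_quench, rho) = E - kT S = kT S(rho) - kT S(rho) = 0
# (equivalently Z = 1, so the equilibrium free energy -kT ln Z is also 0).
print(noneq_free_energy(H_quench, rho))
```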
The relation between Q processes and information-processing of macrostates arises once we specify a partition over R. I end this subsection with the following example of a Q process:
Suppose that $R$ is partitioned into two bins, i.e., there are two macrostates, $v \in \{0, 1\}$. For both $t \in \{t_0, t_1\}$ and both partition elements $v$, with abuse of notation, define:
$$\rho^{t}(v) = \sum_{r \in v} \rho^{t}(r).$$
Consider the case where $\rho^{t_0}$ has full support, but $\rho^{t_1}$ has support only on the bin $v = 0$. Therefore, the dynamics over the macrostates (bins) from $t_0$ to $t_1$ sends both $v$’s to zero. In other words, it erases a bit.
For pedagogical simplicity, take $H$ to be uniform. Then, plugging into Equation (16), we see that the minimal work is:
$$kT \left[ S(\rho^{t_0}(V)) - S(\rho^{t_1}(V)) \right] + kT \left[ \sum_v \rho^{t_0}(v)\, S(\rho^{t_0}(R \mid v)) - \sum_v \rho^{t_1}(v)\, S(\rho^{t_1}(R \mid v)) \right]$$
(the conditional entropies $S(\rho^{t}(R \mid v))$ are sometimes called “internal entropies” in the literature).
In the special case that $\rho^{t_0}(V)$ is uniform and that the internal entropy is the same for both $t$ and both $v$, we recover Landauer’s bound, $kT \ln 2$, as the minimal amount of work needed to erase the bit. Note though that outside of that special case, Landauer’s bound does not give the minimal amount of work needed to erase a bit. Moreover, in all cases, the limit in Equation (20) is on the amount of work needed to erase the bit; a bit can be erased with zero dissipated work, pace Landauer. For this reason, the bound in Equation (20) is sometimes called the “generalized Landauer cost” in the literature.
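A numerical sketch of this generalized Landauer cost for bit erasure over the macrostates, assuming a uniform Hamiltonian so that the minimal work is just kT times the drop in entropy; the uniform-bit case recovers $kT \ln 2$, while a biased bit is cheaper to erase:

```python
import numpy as np

kT = 1.0

def entropy(p):
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def erasure_cost(eps):
    """Minimal work to send the bit distribution [eps, 1 - eps] to [1, 0],
    assuming a uniform Hamiltonian (work = kT * entropy drop)."""
    p0 = np.array([eps, 1.0 - eps])
    p1 = np.array([1.0, 0.0])
    return kT * (entropy(p0) - entropy(p1))

print(erasure_cost(0.5))    # kT ln 2: Landauer's bound for a uniform bit
print(erasure_cost(0.9))    # strictly less than kT ln 2 for a biased bit
```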
On the other hand, suppose that we build a device to implement a Q process that achieves the bound in Equation (20) for one particular initial distribution over the value of the bit, $\rho^{t_0}(V)$. Therefore, in particular, that device has “built into it” a first and second quenching Hamiltonian given by:
$$\tilde{H}^{t_0}(r) = -kT \ln \rho^{t_0}(r), \qquad \tilde{H}^{t_1}(r) = -kT \ln \rho^{t_1}(r).$$
If we then apply that device to a different initial macrostate distribution, $q(V) \ne \rho^{t_0}(V)$, in general, work will be dissipated in step (ii) of the Q process, because the post-quench distribution will not be an equilibrium distribution for $\tilde{H}^{t_0}$. In the context of biology, if a bit-erasing organism is optimized for one environment, but then used in a different one, it will necessarily be inefficient, dissipating work (the minimal amount of work dissipated is given by the drop in the value of the Kullback–Leibler divergence between the actual distribution and $\rho^{t}$ as the system develops from $t_0$ to $t_1$).
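This mismatch dissipation can be sketched numerically. Taking the dissipated work to be kT times the drop in Kullback–Leibler divergence, as stated above, and noting that for bit erasure the designed-for and actual distributions both end at the same delta function (so the final divergence is zero), the dissipation reduces to $kT\, D(q \,\|\, \rho^{t_0})$; the distributions below are invented for illustration:

```python
import numpy as np

kT = 1.0

def kl(q, rho):
    """Kullback-Leibler divergence D(q || rho), in nats."""
    nz = q > 0
    return np.sum(q[nz] * np.log(q[nz] / rho[nz]))

rho = np.array([0.5, 0.5])   # bit distribution the eraser was designed for
q   = np.array([0.9, 0.1])   # distribution the eraser actually receives

# Erasure maps both rho and q to the same delta-function distribution, so the
# final divergence is zero, and the dissipated work is kT * D(q || rho).
print(kT * kl(q, rho))       # > 0: work wasted because of the mismatch
print(kT * kl(rho, rho))     # 0: no dissipation in the designed-for case
```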
2.4. Guided Q Processes
Soon after the quasi-static transformation step of any Q process begins, the system is thermally relaxed. Therefore, all information about the initial value of the system’s microstate is quickly removed from the distribution over $r$ (phrased differently, that information has been transferred into inaccessible degrees of freedom in the external heat bath). This means that the second quenching Hamiltonian cannot depend on the initial value of the system’s microstate: after the thermal relaxation, there is no degree of freedom within the system undergoing the Q process that retains any information concerning the initial microstate, and therefore none that could modify the second quenching Hamiltonian based on its value.
As a result, by itself, a Q process cannot change an initial distribution in a way that depends on that initial distribution. In particular, it cannot map different initial macrostates to different final macrostates (formally, a Q process cannot map a distribution with support restricted to the microstates in the macrostate to one final distribution and map a distribution with support restricted to the macrostate to a different final distribution).
On the other hand, both quenching Hamiltonians of a Q process running on a system with microstates can depend on , the initial microstate of a different system, . Loosely speaking, we can run a process over the joint system that is thermodynamically reversible and whose effect is to implement a different Q process over R, depending on the value . In particular, we can “coarse-grain” such dependence on : given any partition over whose elements are labeled by , it is possible that both quenching Hamiltonians of a Q process running on are determined by the macrostate .
More precisely, a Guided Quenching (GQ) process over R guided by V (for conditional distribution and initial distribution ) is defined by a quadruple:
an initial/final Hamiltonian ;
an initial joint distribution ;
a time-independent partition of S specifying an associated set of macrostates, ;
a conditional distribution .
It is assumed that for any where ,
i.e., that the distribution over r at the initial time t can depend on the macrostate v, but not on the specific microstate s within the macrostate v. It is also assumed that there are boundary points in S (“potential barriers”) separating the members of V, in the sense that the system cannot physically move from one member of V to another without going through such a boundary point.
The associated GQ process involves the following steps:
To begin, the system has Hamiltonian , which is quenched into a first quenching Hamiltonian written as:
and for all s except those at the boundaries of the partition elements defining the macrostates V,
However, at the s lying on the boundaries of the partition elements defining V, is arbitrarily large. Therefore, there are infinite potential barriers separating the macrostates of .
Note that away from those boundaries of the partition elements defining V, is the equilibrium distribution for .
Next, we isothermally and quasi-statically transform to a second quenching Hamiltonian,
( being the partition element that contains s).
Note that the term in the Hamiltonian that only concerns does not change in this step. Therefore, the infinite potential barriers delineating partition boundaries in S remain for the entire step. I assume that as a result of those barriers, the coupling of with the heat bath during this step cannot change the value of v. As a result, even though the distribution over r changes in this step, there is no change to the value of v. To describe this, I say that v is “semi-stable” during this step. (To state this assumption more formally, let be the (matrix) kernel that specifies the rate at which due to heat transfer between and the heat bath during this step (ii) [32,33]. Then, I assume that is arbitrarily small if .)
As an example, the different bit strings that can be stored in a flash drive all have the same expected energy, but the energy barriers separating them ensure that the distribution over bit strings relaxes to the uniform distribution infinitesimally slowly. Therefore, the value of the bit string is semi-stable.
Note that even though a semi-stable system is not at thermodynamic equilibrium during its “dynamics” (in which its macrostate does not change), that dynamics is thermodynamically reversible, in that we can run it backwards in time without requiring any work or resulting in heat dissipation.
Next, we run a quench over “in reverse”, instantaneously replacing the Hamiltonian with the initial Hamiltonian , with no change to r or s. As in step (i), while work may be done (or extracted) in step (iii), no heat is transferred.
There are two crucial features of GQ processes. The first is that a GQ process faithfully implements even if its output varies with its input and does so no matter what the initial distribution over is. The second is that for a particular initial distribution over , implicitly specified by , the GQ process is thermodynamically reversible.
The first of these features is formalized with the following result, proven in Appendix A:
A GQ process over R guided by V (for conditional distribution and initial distribution ) will transform any initial distribution into a distribution without changing the distribution over s conditioned on v.
Consider the special case where the GQ process is in fact applied to the initial distribution that defines it,
(recall Equation (25)). In this case, the initial distribution is a Boltzmann distribution for the first quenching Hamiltonian; the final distribution is:
and the entire GQ process is thermodynamically reversible. This establishes the second crucial feature of GQ processes.
Plugging in, in this special case, the change in nonequilibrium free energy is:
This is the minimal amount of free energy needed to implement the GQ process. An important example of such a thermodynamically-optimal GQ process is the work-free copy process discussed in the references.
Suppose that we build a device to implement a GQ process over R guided by V for conditional distribution and initial distribution:
Therefore, that device has “built into it” first and second quenching Hamiltonians that depend on and . Suppose we apply that device in a situation where the initial distribution over r conditioned on v is in fact and the initial distribution over s conditioned on v is in fact , but the initial macrostate distribution, , does not equal . In this situation, the actual initial distribution at the start of step (ii) of the GQ process will not be an equilibrium for the initial quenching Hamiltonian. However, this will not result in there being any work dissipated during the thermal relaxation of that step. That is because the distribution over v in that step does not relax, no matter what it is initially (due to the infinite potential barriers in S), while the initial distribution over conditioned on v is in thermal equilibrium for the initial quenching Hamiltonian.
However, now suppose that we apply the device in a situation where the initial distribution over r conditioned on v does not equal . In this situation, work will be dissipated in step (ii) of the GQ process. That is because the initial distribution over r when the relaxation starts is not in thermal equilibrium for the initial quenching Hamiltonian, and this distribution does relax in step (ii). Therefore, if the device was not “designed” for the actual initial distribution over r conditioned on v (i.e., does not use a that equals that actual distribution), it will necessarily dissipate work.
As elaborated below, this means that if a biological organism that implements any map is optimized for one environment, i.e., one distribution over its inputs, but then used in an environment with a different distribution over its inputs, it will necessarily be inefficient, dissipating work (recall that above, we established a similar result for the specific type of Q process that can be used to erase a bit).
In this section, I consider biological systems that process an input into an output, an output that specifies some action that is then taken back to the environment. As shorthand, I will refer to any biological system that does this as an “organism”. A cell exhibiting chemotaxis is an example of an organism, with its input being (sensor readings of) chemical concentrations and its output being chemical signals that in turn specify some directed motion it will follow. Another example is a eusocial insect colony, with its inputs being the many different materials that are brought into the nest (including atmospheric gases) and its output being material waste products (including heat) that in turn get transported out of the colony.
Physically, each organism contains an “input subsystem”, a “processor subsystem” and an “output subsystem” (among others). The initial macrostate of the input subsystem is formed by sampling some distribution specified by the environment and is then copied to the macrostate of the processor subsystem. Next, the processor iterates some specified first-order time-homogeneous Markov chain (for example, if the organism is a cell, this Markov chain models the iterative biochemical processing of the input that takes place within the organism). The ending value of the chain is the organism’s output, which specifies the action that the organism then takes back to its environment. In general, it could be that for certain inputs, an organism never takes any action back to its environment, but instead keeps processing the input indefinitely. Here, that is captured by having the Markov chain keep iterating (e.g., the biochemical processing keeps going) until it produces a value that falls within a certain predefined halting (sub)set, which is then copied to the organism’s output (the possibility that the processing never halts also allows the organism to be Turing complete [55,56,57]).
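As an illustrative sketch (the function and state names are invented, and no claim is made that real organisms work this way), the model just described can be simulated as a Markov chain iterated until it lands in a halting set:

```python
import random

def run_organism(pi, halting_set, b0, max_steps=10_000):
    """Toy simulation of the organism model: the input macrostate b0 is
    copied to the processor, which iterates the first-order Markov chain
    pi (a dict mapping each state x to a dict of next-state probabilities)
    until the state lands in halting_set; that state is then copied to
    the output."""
    x = b0                          # copy the input to the processor
    for _ in range(max_steps):
        if x in halting_set:
            return x                # copy the halting value to the output
        x = random.choices(list(pi[x]), weights=list(pi[x].values()))[0]
    return None                     # never halted within max_steps

# An invented 3-state chain over {0, 1, 2} with halting set {2}.
pi = {0: {0: 0.5, 1: 0.5}, 1: {1: 0.5, 2: 0.5}, 2: {2: 1.0}}
print(run_organism(pi, {2}, b0=0))  # -> 2 (the chain eventually halts)
```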
There are many features of information processing in real biological systems that are distorted in this model; it is just a starting point. Indeed, some features are absent entirely. In particular, since the processing is modeled as a first-order Markov chain, there is no way for an organism described by this model to “remember” a previous input it received when determining what action to take in response to a current input. Such features could be incorporated into the model in a straightforward way and are the subject of future work.
In the next subsection, I formalize this model of a biological input-output system, in terms of an input distribution, a Markov transition matrix and a halting set. I then analyze the minimal amount of work needed by any physical system that implements a given transition matrix when receiving inputs from a given distribution, i.e., the minimal amount of work a real organism would need to implement its input-output behavior that it exhibits in its environment, if it were free to use any physical process that obeys the laws of physics. To perform this analysis, I will construct a specific physical process that implements an iteration of the Markov transition matrix of a given organism with minimal work, when inputs are generated according to the associated input distribution. This process involves a sequence of multiple GQ processes. It cannot be emphasized enough that these processes I construct are not intended to describe what happens in real biological input-output systems, even as a cartoon. These processes are used only as a calculational tool, for finding a lower bound on the amount of work needed by a real biological organism to implement a given input-output transition matrix.
Indeed, because real biological systems are often quite inefficient, in practice, they will often use far more work than is given by the bound I calculate. However, we might expect that in many situations, the work expended by a real biological system that behaves according to some transition matrix is approximately proportional to the work that would be expended by a perfectly efficient system obeying the same transition matrix. Under that approximation, the relative sizes of the bounds given below should reflect the relative sizes of the amounts of work expended by real biological systems.
3.1. The Input and Output Spaces of an Organism
Recall from Section 2.4 that a subsystem S cannot use a thermodynamically-reversible Q process to update its own macrostate in an arbitrary way. However, a different subsystem can guide an arbitrary updating of the macrostate of S with a GQ process. In addition, the work required by a thermodynamically-reversible process that implements a given conditional distribution from inputs to outputs is the same as the work required by any other thermodynamically-reversible process that implements that same distribution.
In light of these two facts, for simplicity, I will not try to construct a thermodynamically-reversible process that implements any given organism’s input-output distribution directly, by iteratively updating the processor until its state lies in the halting subset and then copying that state to the output. Instead, I will construct a thermodynamically-reversible process that implements that same input-output distribution, but by “ping-ponging” GQ processes back and forth between the state of the processor and the state of the output system, until the output’s state lies in the halting set.
Let W be the space of all possible microstates of a processor subsystem, and U the (disjoint) space of all possible microstates of an output subsystem. Let be a partition of W, i.e., a coarse-graining of it into a countable set of macrostates. Let X be the set of labels of those partition elements, i.e., the range of the map (for example, in a digital computer, could be a map taking each microstate of the computer’s main RAM, , into the associated bit string, ). Similarly, let be a partition of U, the microstate of the output subsystem. Let Y be the set of labels of those partition elements, i.e., the range of the map , with the halting subset of Y. I generically write an element of X as x and an element of Y as y. I assume that X and Y, the spaces of labels of the processor and output partition elements, respectively, have the same cardinality and, so, indicate their elements with the same labels. In particular, if we are concerned with Turing-complete organisms, X and Y would both be , the set of all finite bit strings (a set that is bijective with ).
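A minimal illustration of such a coarse-graining (the microstate space and labeling map here are invented, in the spirit of the RAM-microstate-to-bit-string example):

```python
# Coarse-grain a small invented microstate space W = {0..7} into
# macrostates via a labeling map, analogous to X : W -> labels above.
def macrostate(w):
    """Macrostate label of a processor microstate: here, its parity bit."""
    return w % 2

W = range(8)
partition = {}
for w in W:
    partition.setdefault(macrostate(w), []).append(w)
print(partition)   # {0: [0, 2, 4, 6], 1: [1, 3, 5, 7]}
```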
For notational convenience, I arbitrarily choose one non-empty element of X and one non-empty element of Y and assign the additional label 0 to both of them (for example, in a Turing machine, it could be that we assign the label 0 to the partition element that also has label ). Intuitively, these elements represent the “initialized” state of the processor and output subsystems, respectively.
The biological system also contains an input subsystem, with microstates and coarse-graining partition that produces macrostates . The space B is the same as the space X (and therefore is the same as Y). The state of the input at time , , is formed by sampling an environment distribution. As an example, could be determined by a (possibly noisy) sensor reading of the external environment. As another example, the environment of an organism could directly perturb the organism’s input macrostate at . For simplicity, I assume that both the processor subsystem and the output subsystem are initialized before is generated, i.e., that .
After is set this way, it is copied to the processor subsystem, setting . At this point, we iterate a sequence of GQ processes in which x is mapped to y, then y is mapped to x, then that new x is mapped to a new y, etc., until (and if) . To make this precise, adopt the notation that refers to the joint state . Then, after is set, we iterate the following multi-stage ping-pong sequence:
, where is formed by sampling ;
If , the process ends;
Return to (1) with t replaced by ;
If this process ends (at stage (3)) with , then the associated value is used to specify an action by the organism back on its environment. At this point, to complete a thermodynamic cycle, both x and y are reinitialized to zero, in preparation for a new input.
Here, for simplicity, I do not consider the thermodynamics of the physical system that sets the initial value of by “sensing the environment”; nor do I consider the thermodynamics of the physical system that copies that value to (see the literature for some discussion of the thermodynamics of copying). In addition, I do not analyze the thermodynamics of the process in which the organism uses to “take an action back to its environment” and thereby reinitializes y. I only calculate the minimal work required to implement the phenotype of the organism, which here is taken to mean the iterated ping-pong sequence between X and Y.
Moreover, I do not make any assumption for what happens to after it is used to set ; it may stay the same, may slowly decay in some way, etc. Accordingly, none of the thermodynamic processes considered below are allowed to exploit (some assumption for) the value of b when they take place to reduce the amount of work they require. As a result, from now on, I ignore the input space and its partition.
Physically, a ping-pong sequence is implemented by some continuous-time stochastic process over . Any such process induces an associated discrete-time stochastic process over . That discrete-time process comprises a joint distribution defined over a (possibly infinite) sequence of values. That distribution in turn induces a joint distribution over associated pairs of partition element labels,
For calculational simplicity, I assume that , at the end of each stage in a ping-pong sequence that starts at any time , is the same distribution, which I write as . I make the analogous assumption for to define (in addition to simplifying the analysis, this helps ensure that we are considering cyclic processes, a crucial issue whenever analyzing issues like the minimal amount of work needed to implement a desired map). Note that if . To simplify the analysis further, I also assume that all “internal entropies” of the processor macrostates are the same, i.e., is independent of y, and similarly for the internal entropies of the output macrostates.
Also for calculational simplicity, I assume that at the end of each stage in a ping-pong sequence that starts at any time , there is no interaction Hamiltonian coupling any of the three subsystems (though obviously, there must be such coupling at non-integer times). I also assume that at all such moments, the Hamiltonian over U is the same function, which I write as . Therefore, for all such moments, the expected value of the Hamiltonian over U if the system is in state at that time is:
Similarly, and define the Hamiltonians at all such moments, over the input and processor subsystems, respectively.
I will refer to any quadruple and three associated Hamiltonians as an organism.
For future use, note that for any iteration , initial distribution , conditional distribution and halting subset ,
I end this subsection with some notational comments. I will sometimes abuse notation and put time indices on distributions rather than variables, e.g., writing rather than . In addition, sometimes, I abuse notation with temporal subscripts. In particular, when the initial distribution over X is , I sometimes use expressions like:
However, I will always be careful when writing joint distributions over variables from different moments of time, e.g., writing:
3.2. The Thermodynamics of Mapping an Input Space to an Output Space
Our goal is to construct a physical process Λ over an organism’s quadruple that implements an iteration of a given ping-pong sequence above for any particular t. In addition, we want Λ to be thermodynamically optimal with the stipulated starting and ending joint Hamiltonians for all iterations of the ping-pong sequence when it is run on an initial joint distribution:
In Appendix B, I present four separate GQ processes that implement stages (1), (2), (4) and (5) in a ping-pong sequence (and so implement the entire sequence). The GQ processes for stages (1), (4) and (5) are guaranteed to be thermodynamically reversible, for all t. However, each time-t GQ process for stage (2) is parameterized by a distribution . Intuitively, that distribution is a guess, made by the “designer” of the (time-t) stage (2) GQ process, for the marginal distribution over the values at the beginning of the associated stage (1) GQ process. That stage (2) GQ process will also be thermodynamically reversible, if the distribution over at the beginning of the stage (1) GQ process is in fact . Therefore, for that input distribution, the sequence of GQ processes is thermodynamically optimal, as desired. However, as discussed below, in general, work will be dissipated if the stage (2) GQ process is applied when the distribution over at the beginning of stage (1) differs from .
I call such a sequence of five processes implementing an iteration of a ping-pong sequence an organism process. It is important to emphasize that I do not assume that any particular real biological system runs an organism process. An organism process provides a counterfactual model of how to implement a particular dynamics over , a model that allows us to calculate the minimal work used by any actual biological system that implements that dynamics.
Suppose that an organism process always halts for any , such that . Let be the last iteration at which such an organism process may halt, for any of the inputs , such that (note that if X is countably infinite, might be infinite as well). Suppose further that no new input is received before if the process halts at some and that all microstates are constant from such a τ up to (so, no new work is done during such an interval). In light of the iterative nature of organism processes, this last assumption is equivalent to assuming that if .
I say that the organism process is recursive when all of these conditions are met, since that is the adjective used in the theory of Turing machines. For a recursive organism process, the ending distribution over y is:
Fix any recursive organism process, iteration , initial distributions , conditional distribution and halting subset .
With probability , the ping-pong sequence at iteration t of the associated organism process maps the distribution:
and then halts, and with probability , it instead maps:
If for all , the total work the organism expends to map the initial distribution to the ending distribution is:
There is no physical process that both performs the same map as the organism process and that requires less work than the organism process does when applied to .
Repeated application of Proposition 1 gives the first result.
Next, combine Equation (70) in Appendix B, Equation (33) and our assumptions made just before Equation (35) to calculate the work needed to implement the GQ process of the first stage of an organism process at iteration t:
Analogous equations give the work for the remaining three GQ processes. Then, apply these equations repeatedly, starting with the distribution given in Equation (46) (note that all terms for iterations of the ping-pong sequence with cancel out). This gives the second result.
Finally, the third result is immediate from the assumption that for all t, which guarantees that each iteration of the organism process is thermodynamically reversible. ☐
The first result in Proposition 2 means that no matter what the initial distribution over X is, the organism process updates that distribution according to π, halting whenever it produces a value in . This is true even if the output of π depends on its input (as discussed in the Introduction, this property is violated for many of the physical processes considered in the literature).
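For a recursive organism process, the induced update of the input distribution under π can be sketched numerically, under the assumption (made only for this illustration) that states in the halting set are fixed points of π, so probability mass accumulates there:

```python
def ending_distribution(pi, rho0, halting_set, t_max):
    """Iterate the transition matrix pi (a dict x -> {x': prob}) on the
    distribution rho0.  States in halting_set are assumed to be fixed
    points of pi, so mass that reaches them stays there.  Names and
    structure are invented, not the paper's."""
    rho = dict(rho0)
    for _ in range(t_max):
        new = {x: 0.0 for x in rho}
        for x, px in rho.items():
            for x2, p in pi[x].items():
                new[x2] += px * p
        rho = new
    return rho

# Halting set {2}; states 0 and 1 are transient in this invented chain.
pi = {0: {0: 0.5, 1: 0.5}, 1: {1: 0.5, 2: 0.5}, 2: {2: 1.0}}
rho = ending_distribution(pi, {0: 1.0, 1: 0.0, 2: 0.0}, {2}, t_max=200)
print(round(rho[2], 6))   # -> 1.0: all probability ends in the halting set
```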
The first terms in the definition of , given by a sum of expected values of the Hamiltonian, can be interpreted as the “labor” done by the organism when processing into , e.g., by making and breaking chemical bonds. It quantifies the minimal amount of external free energy that must be used to implement the amount of labor that is (implicitly) specified by π. The remaining terms, a difference of entropies, represent the free energy required by the “computation” done by the organism when it undergoes π, independent of the labor done by the organism.
3.3. Input Distributions and Dissipated Work
Suppose that at the beginning of some iteration t of an organism process, the distribution over is some that differs from , the prior distribution “built into” the (quenching Hamiltonians defining the) organism process. Then, as elaborated at the beginning of Section 3.2, in general, this iteration of the organism process will result in dissipated work.
As an example, such dissipation will occur if the organism process is used in an environment that generates inputs according to a distribution that differs from , the distribution “built into” the organism process. In the context of biology, if a biological system gets optimized by natural selection for one environment, but is then used in another one, it will necessarily operate thermodynamically sub-optimally in that second environment.
Note though that one could imagine designing an organism to operate optimally for a distribution over environments, since that is equivalent to a single average distribution over inputs. More precisely, a distribution over environments is equivalent to a single environment generating inputs according to:
We can evaluate the thermodynamic cost for this organism that behaves optimally for an uncertain environment.
As a comparison point, we can also evaluate the work used in an impossible scenario where varies stochastically, but the organism magically “knows” what each is before it receives an input sampled from that and changes its distributions accordingly. The average thermodynamic cost in this impossible scenario would be:
with equality only if Pr(.) is a delta function about one particular . So in general, even if an organism chooses its (fixed) to be optimal for an uncertain environment, it cannot do as well as it would if it could magically change appropriately before each new environment it encounters.
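The flavor of this inequality can be seen in the entropy term alone: by concavity of the Shannon entropy, the entropy of the averaged input distribution exceeds the average of the per-environment entropies, and the gap is what the “magical” organism saves. A toy two-environment example (all numbers invented):

```python
import math

def H(p):
    """Shannon entropy in nats."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Two equally likely environments generating different input distributions.
rho_a = [0.9, 0.1]
rho_b = [0.1, 0.9]
rho_bar = [0.5 * a + 0.5 * b for a, b in zip(rho_a, rho_b)]

# By concavity of entropy, an organism fixed for the averaged distribution
# faces H(rho_bar) >= the average per-environment entropy; the Jensen gap
# is what a per-environment "magical" organism would save.
print(H(rho_bar) - 0.5 * (H(rho_a) + H(rho_b)))   # ≈ 0.368, strictly > 0
```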
As a second example, in general, as one iterates an organism process, the initial distribution is changed into a sequence of new distributions . In general, many of these distributions will differ, i.e., for many , . Accordingly, if one is using some particular physical device to implement the organism process, unless that device has a clock that it can use to update from one iteration to the next (to match the changes in ), the distribution built into the device will differ from at some times t. Therefore, without such a clock, work will be dissipated.
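To illustrate the clock issue: iterating an invented transition matrix changes the input distribution from one iteration to the next, so a device with a fixed built-in prior faces a nonzero mismatch at later iterations, lower-bounding its dissipation by kT times a Kullback–Leibler divergence (all numbers illustrative):

```python
import math

def step(pi, rho):
    """One iteration of the transition matrix pi applied to distribution rho."""
    n = len(rho)
    return [sum(rho[i] * pi[i][j] for i in range(n)) for j in range(n)]

def kl(p, q):
    """Kullback-Leibler divergence D(p||q), in nats."""
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

# Invented 3-state chain and a strictly positive initial distribution.
pi = [[0.5, 0.5, 0.0],
      [0.0, 0.5, 0.5],
      [0.0, 0.0, 1.0]]
rho = [0.6, 0.3, 0.1]
built_in = list(rho)     # the prior "built into" a clock-less device

for t in range(1, 4):
    rho = step(pi, rho)
    # A device designed for `built_in` but run at iteration t dissipates
    # at least kT * D(rho_t || built_in) of work.
    print(t, round(kl(rho, built_in), 4))
```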
Bearing these caveats in mind, unless explicitly stated otherwise, in the sequel, I assume that the time-t stage (2) GQ process of an organism makes the correct guess for the input distribution at the start of the time-t ping-pong sequence, i.e., that its parameter is always the same as the distribution over x at the beginning of the time-t stage (1) process. In this case, the minimal free energy required by the organism is , and no work is dissipated.
It is important to realize that in general, if one were to run a Q process over X in the second stage of an organism process, rather than a GQ process over X guided by Y, there would be nonzero dissipated work. The reason is that if we ran such a Q process, we would ignore the information in concerning the variable we want to send to zero, . In contrast, when we use a GQ process over X guided by Y, no information is ignored, and we maintain thermodynamic reversibility. The extra work of the Q process beyond that of the GQ process is:
In other words, using the Q process would cause us to dissipate work . This amount of dissipated work equals zero if the output of π is independent of its input, as in bit erasure. It also equals zero if is a delta function. However, for other π and , that dissipated work will be nonzero. In such situations, stage 2 would be thermodynamically irreversible if we used a Q process over to set x to zero.
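One illustrative reading of this extra work (an assumption of this sketch, not a quotation of the paper's formula): erasing X alone costs at least kT S(X), while a GQ process guided by Y costs kT S(X|Y), so the difference is kT times the mutual information between input and output. It indeed vanishes when π ignores its input:

```python
import math

def mutual_information_nats(rho, pi):
    """I(X;Y) for the joint p(x,y) = rho[x] * pi[x][y], in nats.  Under
    the illustrative reading above, the extra work of an unguided Q
    process over X, beyond the GQ process guided by Y, is kT times this."""
    p_y = {}
    for x, rx in enumerate(rho):
        for y, pyx in enumerate(pi[x]):
            p_y[y] = p_y.get(y, 0.0) + rx * pyx
    mi = 0.0
    for x, rx in enumerate(rho):
        for y, pyx in enumerate(pi[x]):
            if rx * pyx > 0:
                mi += rx * pyx * math.log(pyx / p_y[y])
    return mi

# Noisy but input-dependent map: the dissipation is strictly positive.
print(mutual_information_nats([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]]))  # ≈ 0.368
# Input-independent map (as in bit erasure): no extra dissipation.
print(mutual_information_nats([0.5, 0.5], [[0.7, 0.3], [0.7, 0.3]]))  # -> 0.0
```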
As a final comment, it is important to emphasize that no claim is being made that the only way to implement an organism process is with Q processes and/or GQ processes. However, the need to use the organism process in an appropriate environment, and for it to have a clock, should be generic, if we wish to avoid dissipated work.
3.4. Optimal Organisms
From now on, for simplicity, I restrict attention to recursive organism processes.
Recall that adding noise to π may reduce the amount of work required to implement it. Formally, Proposition 2 tells us that everything else being equal, the larger is, the less work is required to implement the associated π (indeed, the thermodynamically-optimal implementation of a one-to-many map π actually draws in free energy from the heat bath, rather than requiring free energy that ends up being dumped into that heat bath). This implies that an organism will want to implement a π that is as noisy as possible.
In addition, not all maps are equally important to an organism’s reproductive fitness. It will be important to be very precise in what output is produced for some inputs , but for other inputs, precision is not so important. Indeed, for some inputs, it may not matter at all what output the organism produces in response.
In light of this, natural selection would be expected to favor π’s that are as noisy as possible, while still being precise for those inputs where reproductive fitness requires it. To simplify the analysis, suppose that there are two contributions to the reproductive fitness of an organism that implements some particular π: the free energy (and other resources) required by that implementation and the “phenotypic fitness” that would arise by implementing π even if there were no resources required to implement it.
Therefore, there will be a tradeoff between the resource cost of being precise in π and the phenotypic fitness benefit of being precise. In particular, there will be a tradeoff between the thermodynamic cost of being precise in π (given by the minimal free energy that needs to be used to implement π) and the phenotypic fitness of that π. In this subsection, I use an extremely simplified and abstracted model of the reproductive fitness of an organism to determine what π optimizes this tradeoff.
To start, suppose we are given a real-valued phenotypic fitness function . This quantifies the benefit to the organism of being precise in what output it produces in response to its inputs. More precisely, quantifies the impact on the reproductive fitness of the organism that arises if it outputs in response to an input it received, minus the effect on reproductive fitness of how the organism generated that response. That second part of the definition means that phenotypic fitness does not include energetic costs associated with mapping : neither the work required to compute a map taking nor the labor involved in carrying out that map goes into f (note that in some toy models, would be an expectation value of an appropriate quantity, taken over states of the environment, and conditioned on and ). For an input distribution and conditional distribution π, the expected phenotypic fitness is:
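This expectation is a plain double sum over inputs and outputs; a minimal sketch with invented numbers:

```python
def expected_fitness(rho, pi, f):
    """Expected phenotypic fitness: sum_x rho(x) sum_y pi(y|x) f(x,y).
    All names and numbers below are illustrative."""
    return sum(rho[x] * pi[x][y] * f[x][y]
               for x in range(len(rho)) for y in range(len(pi[x])))

rho = [0.5, 0.5]
pi  = [[0.9, 0.1], [0.2, 0.8]]      # pi[x][y]: conditional output distribution
f   = [[1.0, 0.0], [0.0, 1.0]]      # reward for the "correct" output per input
print(expected_fitness(rho, pi, f))  # ≈ 0.85
```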
The expected phenotypic fitness of an organism if it implements π on the initial distribution is only one contribution to the overall reproductive fitness of the organism. In addition, there is a reproductive fitness cost to the organism that depends on the specific physical process it uses to implement π on . In particular, there is such a cost arising from the physical resources that the process requires.
There are several contributions to this cost. In particular, different physical processes for implementing π will require different sets of chemicals from the environment, will result in different chemical waste products, etc. Here, I ignore such “material” costs of the particular physical process the organism uses to implement π on .
However, in addition to these material costs of the process, there is also a cost arising from the thermodynamic work required to run that process. If we can use a thermodynamically-reversibly process, then by Equation (49), for fixed and π, the minimal possible such required work is . Of course, in many biological scenarios, it is not possible to use a thermodynamically-reversible organism process to implement π. As discussed in Section 3.3, this is the case if the organism process is “designed” for an environment that generates inputs x according to while the actual environment in which the process is used generates inputs according to some . However, there are other reasons why there might have to be non-zero dissipated work. In particular, there is non-zero dissipated work if π must be completed quickly, and so, it cannot be implemented using a quasi-static process (it does not do an impala any good to be able to compute the optimal direction in which to flee a tiger chasing it, if it takes the impala an infinite amount of time to complete that computation). Additionally, of course, it may be that a minimal amount of work must be dissipated simply because of the limited kinds of biochemical systems available to a real organism.
I make several different simplifying assumptions:
In some biological scenarios, the amount of such dissipated work that cannot be avoided in implementing π, , will be comparable to (or even dominate) the minimal amount of reversible work needed to implement π, . However, for simplicity, in the sequel, I concentrate solely on how the reproductive fitness of a process implementing π depends on π through its effect on . Equivalently, I assume that I can approximate differences as equal to up to an overall proportionality constant.
Real organisms have internal energy stores that allow them to use free energy extracted from the environment at a time to drive a process at time , thereby “smoothing out” their free energy needs. For simplicity, I ignore such energy stores. Under this simplification, the organism needs to extract at least of free energy from its environment to implement a single iteration of π on . That minimal amount of needed free energy is another contribution to the “reproductive fitness cost to the organism of physically implementing π starting from the input distribution ”.
As another simplifying assumption, I suppose that the (expected) reproductive fitness of an organism that implements the map π starting from is just:
Therefore, α is the benefit to the organism’s reproductive fitness of increasing f by one, measured in units of energy. This ignores all effects on the distribution that would arise by having different π implemented at times earlier than . It also ignores the possible impact on reproductive fitness of the organism’s implementing particular sequences of multiple y’s (future work involves weakening all of these assumptions, with particular attention to this last one). Under this assumption, varying π has no effect on , the initial entropy over processor states. Similarly, it has no effect on the expected value of the Hamiltonian then.
Combining these assumptions with Proposition 2, we see that after removing all terms in that do not depend on π, we are left with . This gives the following result:
Given the assumptions discussed above, up to an additive constant that does not depend on π:
The first term in Corollary 1 reflects the impact of π on the phenotypic fitness of the organism. The second term reflects the impact of π on the amount of labor the organism does. Finally, the last term reflects the impact of π on the amount of computation the organism does; the greater the entropy of , the less total computation is done. In different biological scenarios, the relative sizes of these three terms may change radically. In some senses, Corollary 1 can be viewed as an elaboration of , where the “cost of sensing” constant in that paper is decomposed into labor and computation costs.
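Schematically, these three contributions can be combined as follows, reading \(f(x,y)\) as phenotypic fitness, \(E(y)\) as the energy of output state \(y\), \(M_\pi\) as the output marginal that \(\pi\) induces from the input distribution \(P\), and \(\alpha\) as the weight defined above; all of these are generic placeholders rather than the paper's exact notation:

```latex
\hat{\mathcal{F}}(\pi) \;\simeq\;
\underbrace{\alpha \sum_{x,y} P(x)\,\pi(y \mid x)\, f(x,y)}_{\text{phenotypic fitness}}
\;-\;
\underbrace{\sum_{y} M_\pi(y)\, E(y)}_{\text{labor}}
\;+\;
\underbrace{k_B T \ln 2\; S\!\bigl(M_\pi\bigr)}_{\text{computation}},
\qquad
M_\pi(y) \;\equiv\; \sum_x P(x)\,\pi(y \mid x).
```

On this reading, noisier maps (larger \(S(M_\pi)\)) incur a smaller computation cost, matching the discussion above.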
From now on, for simplicity, I assume that . So no matter what the input is, the organism process runs π exactly once to produce the output. Returning to our actual optimization problem, by Lagrange multipliers, if the π that maximizes the expression in Corollary 1 lies in the interior of the feasible set, then it is the solution to a set of coupled nonlinear equations, one equation for each pair :
where the are the Lagrange multipliers ensuring that for all . Unfortunately, in general, the solution may not lie in the interior, so that we have a non-trivial optimization problem.
However, suppose we replace the quantity:
in Corollary 1 with . Since [50,51], this modification gives us a lower bound on expected reproductive fitness:
The π that maximizes is just a set of Boltzmann distributions:
For each , this approximately optimal conditional distribution puts more weight on if the associated phenotypic fitness is high, while putting less weight on if the associated energy is large. In addition, we can use this distribution to construct a lower bound on the maximal value of the expected reproductive fitness:
Given the assumptions discussed above,
Each term in the summand depends on the Y-space distribution , but on nothing else about π. Therefore, we can evaluate each such term separately for its maximizing (Boltzmann) distribution . In the usual way, this is given by the log of the associated partition function (normalization constant) , since for any and associated Boltzmann ,
where , as usual. Comparing to Equation (59) establishes that:
and then gives the claimed result. ☐
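As a toy numerical sketch of such a set of Boltzmann conditional distributions, one per input x (the exponent β(α·f(x,y) − E(y)) and every number below are illustrative assumptions, not the paper's exact expressions):

```python
import numpy as np

def boltzmann_policy(f, E, alpha, beta):
    """For each input x, pi*(y|x) ∝ exp(beta * (alpha * f[x, y] - E[y]))."""
    logits = beta * (alpha * f - E[None, :])     # shape (n_x, n_y)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

# Toy problem: 2 inputs, 3 outputs.
f = np.array([[1.0, 0.2, 0.0],    # phenotypic fitness f(x, y), illustrative
              [0.0, 0.2, 1.0]])
E = np.array([0.5, 0.1, 0.5])     # energy of each output state, illustrative
pi = boltzmann_policy(f, E, alpha=2.0, beta=1.0)

# Each row is a normalized conditional distribution over y...
assert np.allclose(pi.sum(axis=1), 1.0)
# ...that weights high-fitness, low-energy outputs more heavily.
assert pi[0, 0] > pi[0, 2] and pi[1, 2] > pi[1, 0]
```

Raising `beta` sharpens each conditional toward the single best output; lowering it makes the map noisier, and cheaper in the sense discussed above.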
As an aside, suppose we had , for all x, and that f were non-negative. Then, if in addition the amount of expected work were given by the mutual information between and rather than the difference in their entropies, our optimization problem would reduce to finding a point on the rate-distortion curve of conventional information theory, with f being the distortion function . (See also  for a slight variant of rate-distortion theory, appropriate when Y differs from X, and so the requirement that is dropped.) However, as shown above, the expected work to implement π does not depend on the precise coupling between and under π, but only on the associated marginal distributions. Therefore, rate-distortion theory does not directly apply.
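This distinction between an entropy-difference cost and a mutual-information cost can be illustrated numerically. In the toy example below (all numbers illustrative), two channels driven by the same input distribution produce identical output marginals, hence identical entropy differences, yet sit at opposite extremes of mutual information:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

P = np.array([0.5, 0.5])          # input distribution over X

# Channel A: identity map.  Channel B: output independent of input.
pi_A = np.eye(2)
pi_B = np.full((2, 2), 0.5)

results = []
for pi in (pi_A, pi_B):
    M = P @ pi                    # output marginal
    joint = P[:, None] * pi       # joint distribution p(x, y)
    I = entropy(P) + entropy(M) - entropy(joint.ravel())
    results.append((entropy(P) - entropy(M), I))

(dS_A, I_A), (dS_B, I_B) = results
# Same entropy difference S(X) - S(Y) for both channels...
assert np.isclose(dS_A, dS_B)
# ...but 1 bit vs. 0 bits of mutual information.
assert np.isclose(I_A, 1.0) and np.isclose(I_B, 0.0)
```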
On the other hand, some of the same kinds of analysis used in rate-distortion theory can also be applied here. In particular, for any particular component where , since ,
(where , as usual). Therefore, is concave in every component of π. This means that the optimizing channel π may lie on the edge of the feasible region of conditional distributions. Note though that even if the solution is on the edge of the feasible region, in general, for different values of x, the optimizing will put all of its probability mass on different edges of the unit simplex over Y. Therefore, when those edges are averaged under , the result is a marginal distribution that lies in the interior of the unit simplex over Y.
As a cautionary note, often in the real world, there is an inviolable upper bound on the rate at which a system can “harvest” free energy from its environment, i.e., on how much free energy it can harvest per iteration of π (for example, a plant with a given surface area cannot harvest free energy at a faster rate than sunlight falls upon its surface). In that case, we are not interested in optimizing a quantity like , which is a weighted average of minimal free energy and expected phenotypic fitness per iteration of π. Instead, we have a constrained optimization problem with an inequality constraint: find the π that maximizes some quantity (e.g., expected phenotypic fitness), subject to an inequality constraint on the free energy required to implement that π. Calculating solutions to these kinds of constrained optimization problem is the subject of future work.
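As a toy sketch of such a constrained problem (the input distribution, the fitness rule, and a work cost of S(X) − S(Y) in units of kT ln 2 are all illustrative assumptions), one can scan a one-parameter family of noisy channels and keep the fittest channel that respects the free-energy budget:

```python
import numpy as np

def H(p):
    """Shannon entropy in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

P = np.array([0.6, 0.3, 0.1])        # input distribution (illustrative)
target = np.array([0, 1, 1])         # fitness-maximizing output for each x

def evaluate(eps):
    """Channel: emit the target output, but flip it with probability eps."""
    pi = np.full((3, 2), eps)
    pi[np.arange(3), target] = 1 - eps
    M = P @ pi                                      # output marginal
    fitness = np.sum(P * pi[np.arange(3), target])  # Pr[correct output]
    cost = H(P) - H(M)                              # work, in units of kT ln 2
    return fitness, cost

budget = 0.30                        # free-energy harvest limit per iteration
grid = np.linspace(0.0, 0.5, 1001)
feasible = [(evaluate(e), e) for e in grid if evaluate(e)[1] <= budget]
(best_fit, best_cost), best_eps = max(feasible)  # max fitness among feasible

# The noiseless channel is infeasible here: its work cost exceeds the budget,
# so the constrained optimum is strictly noisy.
assert evaluate(0.0)[1] > budget
assert best_eps > 0 and best_cost <= budget
```

The inequality constraint binds, so the solution sits on the boundary of the feasible set rather than at the unconstrained fitness maximum.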
4. General Implications for Biology
Any work expended on an organism must first be acquired as free energy from the organism’s environment. However, in many situations, there is a limit on the flux of free energy through an organism’s immediate environment. Combined with the analysis above, such limits provide upper bounds on the “rate of (potentially noisy) computation” that can be achieved by a biological organism in that environment, once all energetic costs for the organism’s labor (i.e., its moving, making/breaking chemical bonds, etc.) are accounted for.
As an example, human brains do little labor. Therefore, these results bound the rate of computation of a human brain. Given the fitness cost of such computation (the brain uses ∼20% of the calories used by the human body), this bound contributes to the natural selective pressures on humans (in the limit that operational inefficiencies of the brain have already been minimized). In other words, these bounds suggest that natural selection imposes a tradeoff between the fitness quality of a brain’s decisions and how much computation is required to make those decisions. In this regard, it is interesting to note that the brain is famously noisy, and as discussed above, noise in computation may reduce the total thermodynamic work required (see [6,10,59] for more about the energetic costs of the human brain and its relation to Landauer’s bound).
As a second example, the rate of solar free energy incident upon the Earth provides an upper bound on the rate of computation that can be achieved by the biosphere (this bound holds for any choice for the partition of the biosphere’s fine-grained space into macrostates, such that the dynamics over those macrostates executes π). In particular, it provides an upper bound on the rate of computation that can be achieved by human civilization, if we remain on the surface of the Earth and only use sunlight to power our computation.
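The two flux limits described above can be turned into rough numbers via Landauer's bound of kT ln 2 joules per bit erased. The ≈20 W power draw of the brain and the ≈1.7 × 10¹⁷ W of sunlight intercepted by the Earth are round illustrative figures, not values from this paper:

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 300.0                          # assumed operating temperature, K
landauer = k_B * T * math.log(2)   # minimal work per bit erased, ~2.9e-21 J

powers_W = {"human brain": 20.0, "sunlight on Earth": 1.7e17}  # illustrative
rates = {name: p / landauer for name, p in powers_W.items()}   # bit-ops/s bound

for name, r in rates.items():
    print(f"{name}: at most {r:.1e} bit operations per second")
```

These are extremely loose upper bounds: real devices and organisms dissipate many orders of magnitude more than kT ln 2 per logical operation.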
Despite the use of the term “organism”, the analysis above is not limited to biological individuals. For example, one could take the input to be a current-generation population of individuals, together with attributes of the environment shared by those individuals. We could also take the output to be the next generation of that population, after selective winnowing based on the attributes of the environment (e.g., via replicator dynamics). In this example, the bounds above do not refer to the “computation” performed by an individual, but rather to that performed by an entire population subject to natural selection. Therefore, those bounds give the minimal free energy required to run natural selection.
As a final example, one can use these results to analyze how the thermodynamic behavior of the biosphere changes with time. In particular, if one iterates π from one t to the next, then the associated initial distributions change. Accordingly, the minimal amount of free energy required to implement π changes. In theory, this allows us to calculate whether the rate of free energy required by the information processing of the terrestrial biosphere increases with time. Prosaically, has the rate of computation of the biosphere increased over evolutionary timescales? If it has done so for most of the time that the biosphere has existed, then one could plausibly view the fraction of free energy flux from the Sun that the biosphere uses as a measure of the “complexity” of the biosphere, a measure that has been increasing throughout the lifetime of the biosphere.
Note as well that there is a fixed current value of the total free energy flux incident on the biosphere (from both sunlight and, to a much smaller degree, geologic processes). By the results presented above, this rate of free energy flux gives an upper bound on the rate of computation that humanity as a whole can ever achieve, if it monopolizes all resources of Earth, but restricts itself to the surface of Earth.
The noisier the input-output map π of a biological organism, the less free energy the organism needs to acquire from its environment to implement that map. Indeed, by using a sufficiently noisy π, an organism can increase its stored free energy. Therefore, noise might not just be a hindrance that an organism needs to circumvent; an organism may actually exploit noise, to “recharge its battery”.
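This sign flip is easy to see in a two-state toy example (all numbers illustrative): a maximally noisy map raises the output entropy above the input entropy, making the minimal work S(X) − S(Y), in units of kT ln 2, negative:

```python
import numpy as np

def H(p):
    """Shannon entropy in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

P = np.array([0.9, 0.1])            # low-entropy sensor input, illustrative
pi_noisy = np.array([[0.5, 0.5],    # output independent of input
                     [0.5, 0.5]])
M = P @ pi_noisy                    # output marginal: uniform
work = H(P) - H(M)                  # minimal work, in units of kT ln 2

# Negative: running this maximally noisy map can extract free energy.
assert work < 0
```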
In addition, not all maps are equally important to an organism’s reproductive fitness. In light of this, natural selection would be expected to favor π’s that are as noisy as possible, while still being precise for those inputs where reproductive fitness requires it.
In this paper, I calculated what π optimizes this tradeoff. This calculation provides insight into what phenotypes natural selection might be expected to favor. Note though that in the real world, there are many other thermodynamic factors that are important in addition to the cost of processing sensor readings (inputs) into outputs (actions). For example, there are the costs of acquiring the sensor information in the first place and of internal storage of such information, for future use. Moreover, in the real world, sensor readings do not arrive on an i.i.d. basis, as assumed in this paper. Indeed, in real biological systems, often, the current sensor reading, reflecting the recent state of the environment, reflects previous actions by the organism that affected that same environment (in other words, real biological organisms often behave like feedback controllers). All of these effects would modify the calculations done in this paper.
In addition, in the real world, there are strong limits on how much time a biological system can take to perform its computations, physical labor and rearranging of matter, due to environmental exigencies (simply put, if the biological system is not fast enough, it may be killed). These temporal constraints mean that biological systems cannot use fully reversible thermodynamics. Therefore, these temporal constraints increase the free energy required for the biological system to perform computation, labor and/or rearrangement of matter.
Future work involves extending the analysis of this paper to account for such thermodynamic effects. Combined with other non-thermodynamic resource restrictions that real biological organisms face, such future analysis should help us understand how closely the organisms that natural selection has produced match the best ones possible.
I would like to thank Daniel Polani, Sankaran Ramakrishnan and especially Artemy Kolchinsky for many helpful discussions and the Santa Fe Institute for helping to support this research. This paper was made possible through the support of Grant No. TWCF0079/AB47 from the Templeton World Charity Foundation and Grant No. FQXi-RHl3-1349 from the FQXi foundation. The opinions expressed in this paper are those of the author and do not necessarily reflect the view of Templeton World Charity Foundation.
Conflicts of Interest
The author declares no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.
Appendix A: Proof of Proposition 1
We begin with the following lemma:
A GQ process over R guided by V (for conditional distribution π and initial distribution ) will transform any initial distribution:
into a distribution:
Fix some by sampling . Since in a GQ, microstates only change during the quasi-static relaxation, after the first quench, s and, therefore, v still equal . Due to the infinite potential barriers in , while s may change during that relaxation, v will not, and so, . Therefore:
Now, at the end of the relaxation step, has settled to thermal equilibrium within the region . Therefore, combining Equation (65) with Equations (29) and (28), we see that the distribution at the end of the relaxation is:
Averaging over then gives :
Next, note that if . Therefore, if Equation (64) holds and we sum over all for an arbitrary v, we get:
Furthermore, no matter what is, . As a result, Lemma 1 implies that a GQ process over R guided by V (for conditional distribution π and initial distribution ) will transform any initial distribution into a distribution . This is true whether or not or . This establishes the claim of Proposition 1 that the first “crucial feature” of GQ processes holds.
Appendix B: The GQ Processes Iterating a Ping-Pong Sequence
In this section, I present the separate GQ processes for implementing the stages of a ping-pong sequence.
First, recall our assumption from just below the definition of a ping-pong sequence that at the end of any of its stages, is always the same distribution (and similarly for distributions like ). Accordingly, at the end of any stage of a ping-pong sequence that implements a GQ process over U guided by X, we can uniquely recover the conditional distribution from :
(and similarly, for a GQ process over W guided by Y). Conversely, we can always recover from , simply by marginalizing. Therefore, we can treat any distribution defining such a GQ process interchangeably with a distribution (and similarly, for distributions and occurring in GQ processes over W guided by Y).
To construct the GQ process for the first stage, begin by writing:
where is an assumption for the initial distribution over x, one that in general may be wrong. Furthermore, define the associated distribution:
By Corollary 1, running a GQ process over Y guided by X for conditional distribution and initial distribution will send any initial distribution to a distribution . Therefore, in particular, it will send any initial . Due to the definition of and Equation (70), the associated conditional distribution over y given x, , is equal to . Accordingly, this GQ process implements the first stage of the organism process, as desired. In addition, it preserves the validity of our assumptions that and similarly for .
Next, by the discussion at the end of Section 2.4, this GQ process will be thermodynamically reversible since by assumption, is the actual initial distribution over u conditioned on x.
To construct the GQ process for the second stage, start by defining an initial distribution based on a (possibly counterfactual) prior :
and the associated conditional distribution:
Furthermore, define a conditional distribution:
Consider a GQ process over W guided by Y for conditional distribution and initial distribution . By Corollary 1, this GQ process implements the second stage, as desired. In addition, it preserves the validity of our assumptions that and similarly for .
Next, by the discussion at the end of Section 2.4, this GQ process will be thermodynamically reversible if is the actual distribution over conditioned on . By Equation (76), this in general requires that , the assumption for the initial distribution over that is built into the step (ii) GQ process, is the actual initial distribution over . As discussed at the end of Section 2.3, work will be dissipated if this is not the case. Physically, this means that if the device implementing this GQ process is thermodynamically optimal for one input distribution, but used with another, then work will be dissipated (the amount of work dissipated is given by the change in the Kullback–Leibler divergence between G and in that stage (4) GQ process; see ).
We can also implement the fourth stage by running a (different) GQ process over X guided by Y. This GQ process is a simple copy operation, i.e., implements a single-valued, invertible function from to the initialized state x. Therefore, it is thermodynamically reversible. Finally, we can implement the fifth stage by running an appropriate GQ process over Y guided by X. This process will also be thermodynamically reversible.
Frank, S.A. Natural selection. V. How to read the fundamental equations of evolutionary change in terms of information theory. J. Evolut. Biol. 2012, 25, 2377–2396.
Donaldson-Matasci, M.C.; Bergstrom, C.T.; Lachmann, M. The fitness value of information. Oikos 2010, 119, 219–230.
Krakauer, D.C. Darwinian demons, evolutionary complexity, and information maximization. Chaos Interdiscip. J. Nonlinear Sci. 2011, 21, 037110.
Taylor, S.F.; Tishby, N.; Bialek, W. Information and fitness. 2007; arXiv:0712.4382.
Bullmore, E.; Sporns, O. The economy of brain network organization. Nat. Rev. Neurosci. 2012, 13, 336–349.
Sartori, P.; Granger, L.; Lee, C.F.; Horowitz, J.M. Thermodynamic costs of information processing in sensory adaptation. PLoS Comput. Biol. 2014, 10, e1003974.
Mehta, P.; Schwab, D.J. Energetic costs of cellular computation. Proc. Natl. Acad. Sci. USA 2012, 109, 17978–17982.
Mehta, P.; Lang, A.H.; Schwab, D.J. Landauer in the age of synthetic biology: Energy consumption and information processing in biochemical networks. J. Stat. Phys. 2015, 162, 1153–1166.
Laughlin, S.B. Energy as a constraint on the coding and processing of sensory information. Curr. Opin. Neurobiol. 2001, 11, 475–480.
Govern, C.C.; ten Wolde, P.R. Energy dissipation and noise correlations in biochemical sensing. Phys. Rev. Lett. 2014, 113, 258102.
Govern, C.C.; ten Wolde, P.R. Optimal resource allocation in cellular sensing systems. Proc. Natl. Acad. Sci. USA 2014, 111, 17486–17491.
Lestas, I.; Vinnicombe, G.; Paulsson, J. Fundamental limits on the suppression of molecular fluctuations. Nature 2010, 467, 174–178.
Faist, P.; Dupuis, F.; Oppenheim, J.; Renner, R. A quantitative Landauer’s principle. 2012; arXiv:1211.1037.
Touchette, H.; Lloyd, S. Information-theoretic approach to the study of control systems. Physica A 2004, 331, 140–172.
Sagawa, T.; Ueda, M. Minimal energy cost for thermodynamic information processing: Measurement and information erasure. Phys. Rev. Lett. 2009, 102, 250602.
Dillenschneider, R.; Lutz, E. Comment on “Minimal Energy Cost for Thermodynamic Information Processing: Measurement and Information Erasure”. Phys. Rev. Lett. 2010, 104, 198903.
Sagawa, T.; Ueda, M. Fluctuation theorem with information exchange: Role of correlations in stochastic thermodynamics. Phys. Rev. Lett. 2012, 109, 180602.
Crooks, G.E. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Phys. Rev. E 1999, 60, 2721–2726.
Crooks, G.E. Nonequilibrium measurements of free energy differences for microscopically reversible Markovian systems. J. Stat. Phys. 1998, 90, 1481–1487.
Janna, F.C.; Moukalled, F.; Gómez, C.A. A Simple Derivation of Crooks Relation. Int. J. Thermodyn. 2013, 16, 97–101.
Jarzynski, C. Nonequilibrium equality for free energy differences. Phys. Rev. Lett. 1997, 78.
Esposito, M.; van den Broeck, C. Second law and Landauer principle far from equilibrium. Europhys. Lett. 2011, 95, 40004.
Esposito, M.; van den Broeck, C. Three faces of the second law. I. Master equation formulation. Phys. Rev. E 2010, 82, 011143.
Parrondo, J.M.; Horowitz, J.M.; Sagawa, T. Thermodynamics of information. Nat. Phys. 2015, 11, 131–139.
Pollard, B.S. A Second Law for Open Markov Processes. 2014; arXiv:1410.6531.
Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. 2012, 75, 126001.
Takara, K.; Hasegawa, H.H.; Driebe, D. Generalization of the second law for a transition between nonequilibrium states. Phys. Lett. A 2010, 375, 88–92.
Hasegawa, H.H.; Ishikawa, J.; Takara, K.; Driebe, D. Generalization of the second law for a nonequilibrium initial state. Phys. Lett. A 2010, 374, 1001–1004.
Prokopenko, M.; Einav, I. Information thermodynamics of near-equilibrium computation. Phys. Rev. E 2015, 91, 062143.
Sagawa, T. Thermodynamic and logical reversibilities revisited. J. Stat. Mech. 2014, 2014, P03025.
Mandal, D.; Jarzynski, C. Work and information processing in a solvable model of Maxwell’s demon. Proc. Natl. Acad. Sci. USA 2012, 109, 11641–11645.
Wolpert, D.H. Extending Landauer’s bound from bit erasure to arbitrary computation. 2015; arXiv:1508.05319.
Barato, A.C.; Seifert, U. Stochastic thermodynamics with information reservoirs. Phys. Rev. E 2014, 90, 042150.
Deffner, S.; Jarzynski, C. Information processing and the second law of thermodynamics: An inclusive, Hamiltonian approach. Phys. Rev. X 2013, 3, 041003.
Barato, A.C.; Seifert, U. An autonomous and reversible Maxwell’s demon. Europhys. Lett. 2013, 101, 60001.
Mackay, D. Information Theory, Inference, and Learning Algorithms; Cambridge University Press: Cambridge, UK, 2003.
Cover, T.; Thomas, J. Elements of Information Theory; Wiley: New York, NY, USA, 1991.
Yeung, R.W. A First Course in Information Theory; Springer: Berlin/Heidelberg, Germany, 2012.
Reif, F. Fundamentals of Statistical and Thermal Physics; McGraw-Hill: New York, NY, USA, 1965.