Article

Dissipation in Non-Steady State Regulatory Circuits

1 Faculty of Mathematics, Informatics, and Mechanics, University of Warsaw, 02-097 Warszawa, Poland
2 Laboratoire de Physique de l’École Normale Supérieure (PSL University), CNRS, Sorbonne Université, and Université de Paris, 75005 Paris, France
3 Capital Fund Management, 23 rue de l’Université, 75007 Paris, France
* Author to whom correspondence should be addressed.
Entropy 2019, 21(12), 1212; https://doi.org/10.3390/e21121212
Submission received: 8 November 2019 / Revised: 4 December 2019 / Accepted: 5 December 2019 / Published: 10 December 2019
(This article belongs to the Special Issue Information Flow and Entropy Production in Biomolecular Networks)

Abstract

In order to respond to environmental signals, cells often use small molecular circuits to transmit information about their surroundings. Recently, motivated by specific examples in signaling and gene regulation, a body of work has focused on the properties of circuits that function out of equilibrium and dissipate energy. We briefly review the probabilistic measures of information and dissipation and use simple models to discuss and illustrate trade-offs between information and dissipation in biological circuits. We find that circuits with non-steady state initial conditions can transmit more information at small readout delays than steady state circuits. The dissipative cost of this additional information proves marginal compared to the steady state dissipation. Feedback does not significantly increase the transmitted information for out of steady state circuits but does decrease dissipative costs. Lastly, we discuss the case of bursty gene regulatory circuits that, even in the fast switching limit, function out of equilibrium.

1. Introduction

Cells rely on molecular signals to inform themselves about their surroundings and their own internal state [1]. These signals can describe the surrounding sugar type and concentration, as is the case for many bacterial operons, such as those used for lactose or galactose breakdown [2]. Signaling and activation of phosphorylated receptors provide a means of informing bacterial cells on faster timescales about a wide range of conditions, including crowding, growth signals, and stress [3]. Triggered by these signals, cells activate regulatory networks and cascades that allow them to respond appropriately to existing signals.
A response is usually triggered by a change in the environment, which perturbs the previous state of the cell and of the regulatory system. Specifically, if the regulatory circuit was functioning in steady state, a change in the concentration of the signaling molecule, or the appearance of a new molecule, will kick it out of steady state. Here we investigate the response to such perturbations.
In this paper we study abstract mathematical models whose goal is to capture the main regulatory features of biochemical circuits. Our models do not capture many of the details of biochemical complexity of regulatory units in real cells. By “circuit” or “network” throughout the paper we mean a set of stochastic processes that transform an input signal through a regulatory function to produce an output response. This use of the word “circuit” or network is standard in the biophysics literature [1,2,3]. Abstract models of biochemical circuits have proven useful in understanding molecular regulation in many biological systems from development to immunology [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28].
The energy dissipated in a regulatory network comes on one hand from the fact that certain steps, for example producing proteins, require ATP. However, energy dissipation also measures how far out of equilibrium a given circuit functions by identifying irreversible (so ATP consuming) reactions [29,30,31].
Regulatory circuits that function out of equilibrium do not obey detailed balance, which means they dissipate energy, even if they produce the same amount of proteins as circuits that function in equilibrium. We are interested in exploring the constraints that energy dissipation imposes on the types of regulatory functions. The motivation is not limited energetic resources in cells; ATP is typically assumed to be abundant [32,33] or can be generated by burning carbon present in the cell. Rather, we consider energy dissipation as a measure of irreversibility that allows us to compare the irreversibility of signaling encoded in given regulatory functions.
In order to concentrate on this specific problem of dissipation coming from regulatory functions, we choose to study a simplified model with two binary elements: a receptor and a protein. Each element can be in one of two states, active or inactive, and its state regulates the state of the other element. The first element, the receptor, is our input that responds to changes in the environment, and the second element, a regulatory protein such as a kinase in a two-component signaling cascade, is the output of our regulatory system. We do not take into account the ATP-ADP balance for these reactions, but concentrate on the dissipation coming from the regulatory computation. Effectively, we assume that while ATP is certainly needed, it is part of the hardware of the network. In turn, we are interested in the question: given certain hardware, what are the best regulatory functions (software) that can be implemented? We have reduced the description of a biochemical circuit to stochastic processes governing the flipping of binary variables, and we will study the parameters of these processes.
Dissipation in molecular regulatory networks has received a lot of theoretical attention [29,30,31,34,35,36,37,38]. This line of thought goes back to the non-equilibrium scheme of kinetic proofreading [4,5] in which energy is used for error correction of the signal. A more recent application [29] has shown that energy dissipation is also needed for regulatory circuits to adapt to external signals and respond accurately. A similar conclusion that energy dissipation is necessary was reached for molecular circuits that try to learn about external concentrations [30] and it was shown that the amount of dissipated energy limits reliable readout [30,36,39,40,41,42]. Results linking information, dissipation and learning [43,44,45] have been derived in the general framework of stochastic thermodynamics [34,46]. In the context of biochemical reactions, both continuous biochemical kinetics models [30,41,42,47] and bipartite two state systems [40,43,48,49,50,51] have been used in this context. Among other topics the link between dissipation and prediction has been explored, again showing that long term prediction requires energy expenditure [35,47], and the non-predictive part of the information about past fluctuations is linked to dissipation [35]. Most recently, the links between information and dissipation have been studied in spatial systems [52].
A regulatory circuit fulfils a function, and we assume that the goal of our network is to maximally transmit information between the input and output [12]. This objective function has been studied before theoretically, using both binary and more detailed models [53,54,55,56,57,58,59]. Others have also optimized the rate of information transmission [19,20,21,60]. Information transmission in regulatory circuits has also been investigated experimentally in fly development [18,61,62], NF-κB signaling [28] and calcium signaling [63], and dynamical readouts were compared to static information transmission between the input and output of ERK, calcium and NF-κB signaling networks [64]. While information transmission is an arbitrary choice of objective function for a regulatory network, and many networks do not optimize it, it is rather unlikely that a circuit aimed at sensing and responding to the environment transmits no information about the signal to the output. This choice of objective function allows us to perform concrete calculations and investigate the trade-off between information and dissipation, which are both tied to the logic of the regulatory system.
Here, inspired by receptor-ligand binding, we use a simple two state system to build intuition about the trade-offs in information transmission, dissipation and functioning out of steady state. In a pedagogical spirit we remind the reader of the notions of information (Section 3), dissipation (Section 4) and review some of our previous results from work that studied information transmission [65] and the trade-offs between information transmission and dissipation for regulatory circuits in steady state [66] (Section 6.1 and Section 6.2). A signal often perturbs the system out of steady state, to which it then relaxes back. In this paper we calculate the non-equilibrium dissipation for circuits that function out of steady state and maximally transmit information between the input and a potentially delayed output given constraints on dissipation. While the setup of the optimization problem (Equation (13)) is the same as in our previous work [66], considering average dissipation (Equation (14)) is new (Section 6.3 and Section 6.4).
Lastly, we include some comments on dissipation in simple gene regulatory circuits with bursty transcription (Section 7) [67,68,69,70,71,72,73]. We show how even a fast switching gene promoter need not be in equilibrium. Our goal is not to provide an exhaustive review of the field but to illustrate with simple examples some trade-offs that appear in these molecular circuits.

2. Model

We consider a system at time t consisting of two discrete random variables z_t and x_t, which describe the input state and output state of the system, respectively. We previously used these abstract stochastic processes to study regulation in biochemical circuits in Mancini et al. [66], and such binary models of biochemical circuits have been studied by others [40,43,48,49,50,51]. For simplicity we assume that x and z can take only two values: + (active state) and − (inactive state). The input state corresponds to the presence or absence of a signaling molecule (or a high or low concentration of a signaling molecule), whereas the output state is the activation or not of a response pathway or regulator. The specific regulatory interactions between them will be defined later within the specific studied model(s). At every time t, the system is in one of four possible states (z_t, x_t): (−,−), (−,+), (+,−), or (+,+). The master equation for the temporal evolution of the conditional probability distribution p(z_t, x_t | z_0, x_0) of the system is:
$$\partial_t\, p(z_t, x_t \,|\, z_0, x_0) = \mathcal{L}\; p(z_t, x_t \,|\, z_0, x_0),$$
where L is a 4 × 4 matrix with transition rates between the four states. We will be interested in the joint probability p ( x t , z 0 ) , that is we will look at the output variable x at time t and the initial state of the input variable z:
$$p(x_t, z_0) = \sum_{x_0,\, z_t = \pm} p(z_t, x_t \,|\, z_0, x_0)\; p(x_0, z_0).$$
This probability is needed in the computation of the central quantity we optimize: the time-delayed mutual information between the initial state of the input and the state of the output at time t (defined in Section 3). After marginalization over the possible states of z_0 we obtain $p(x_t) = \sum_{z_0} p(x_t, z_0)$, which in turn is indispensable for calculating the dissipation of the system, defined in Section 4.
We restrict our analysis to symmetric models, in which we do not distinguish between the (−,−) and (+,+) states and, analogously, between the (−,+) and (+,−) states. This is a simplification that is not motivated by a biological observation. The symmetry of the model allows us to write the probability distribution at any time t as $p(x_t, z_0) = \left(\frac{1+\mu_t}{4},\ \frac{1-\mu_t}{4},\ \frac{1-\mu_t}{4},\ \frac{1+\mu_t}{4}\right)$, provided the initial probability distribution obeys the same symmetry: $p(x_0, z_0) = p_0 = \left(\frac{1+\mu_0}{4},\ \frac{1-\mu_0}{4},\ \frac{1-\mu_0}{4},\ \frac{1+\mu_0}{4}\right)$. For the models in which the initial distribution is the steady state one, $p_{\rm init} = p(x_0, z_0) = p_{\rm ss}$, which imposes a condition on μ_0.

2.1. Model without Feedback: S and S̃

The first, simplest model we analyze is a symmetric model in which only the input affects the output and there is no feedback from the output to the input. The input variable z flips between its active and inactive states with rate u, while the output variable x aligns with the input with rate r and anti-aligns with rate s, regardless of the state of the input (see Figure 1A). The dynamics is given by a transition rate matrix specified in Appendix A.
We calculate analytically the joint probability distribution p(x_t, z_0) (a four-dimensional vector) and the marginal probability distributions p(x_t) and p(z_0) (two-dimensional vectors), needed to find the mutual information that we will define in Equation (3), as functions of the transition rates u, s, r and of the parameter μ_0 that parametrizes the initial state of the system (see Appendix B). We set, without loss of generality, one rate equal to 1, specifically r = 1. The specific expressions for the occupancy probabilities of the four states for the model without feedback are given in Appendix A. In steady state the probability distribution for the occupancy of the four states simplifies to $p = \left(\frac{u+1}{2s+4u+2},\ \frac{s+u}{2s+4u+2},\ \frac{s+u}{2s+4u+2},\ \frac{u+1}{2s+4u+2}\right)$.
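To make the propagation concrete, the following minimal Python sketch (our illustration, not code from the paper) builds the Appendix A rate matrix, propagates a symmetric initial condition, and assembles the joint distribution p(x_t, z_0) of Equation (2). The rate values, and the state ordering inferred from the Appendix A matrix, are our assumptions.

import numpy as np
from scipy.linalg import expm

u, s, r = 0.4, 0.1, 1.0                    # illustrative rates; r = 1 as in the text

# Rate matrix of Appendix A; columns are "from" states and sum to zero.
# Assumed state ordering: indices 0 and 3 are the aligned states, 1 and 2 the mixed ones.
L = np.array([[-(u+s),  u,      r,      0    ],
              [ u,    -(u+r),   0,      s    ],
              [ s,      0,    -(u+r),   u    ],
              [ 0,      r,      u,    -(u+s)]])

x_of = np.array([-1, -1, +1, +1])          # output value x in each joint state (assumed)
z_of = np.array([-1, +1, -1, +1])          # input value z in each joint state (assumed)

mu0 = 0.7
p0 = (1 + mu0 * x_of * z_of) / 4.0         # symmetric initial p(x_0, z_0)

t = 1.5
P = expm(L * t)                            # conditional propagator between joint states

# p(x_t, z_0): propagate each initial state, then marginalize over z_t and x_0.
p_xt_z0 = np.zeros((2, 2))                 # rows: x_t = -/+; columns: z_0 = -/+
for k0 in range(4):
    for kt in range(4):
        p_xt_z0[(x_of[kt] + 1)//2, (z_of[k0] + 1)//2] += P[kt, k0] * p0[k0]
print(p_xt_z0, p_xt_z0.sum())              # joint table; total probability is 1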
We will consider this model in steady state, and we will call it model S. We will also allow for the initial conditions to be out of steady state, and then we will call it model S̃.

2.2. Models with Feedback: F and F̃

In the second analyzed model we allow the input variable to depend on the output, i.e., we allow for feedback from x to z. We keep as much symmetry as possible, while still not distinguishing between the states (−,−) and (+,+), and between (−,+) and (+,−). The scheme is given in Figure 1B. In terms of the rates, we allow the original switching parameters of the input z_t to differ depending on the state of the output x_t, introducing the rate α for anti-aligning the two variables and y for aligning them. The notion of input and output is no longer meaningful since both variables influence each other. We note that this scheme is not the most general model possible, since we impose the symmetry between the ‘pure’ states, i.e., (−,−) and (+,+), and the ‘mixed’ states, i.e., (−,+) and (+,−), which reduces the number of parameters from 8 (as was studied in Mancini et al. [65]) to 4 (as was considered in Mancini et al. [66]). The transition matrix for this model and the steady state probabilities are given in Appendix B.
Similarly to the case of the model without feedback, we consider the model with feedback in steady state and call it model F, or let the initial conditions be out of steady state by considering all values of μ_0 (model F̃). To summarize, we use the following notation:
  • S - no feedback, stationary initial condition;
  • S̃ - no feedback, optimal initial condition;
  • F - with feedback, stationary initial condition;
  • F̃ - with feedback, optimal initial condition.

3. Information

The mutual information measured between the input z at time 0 and output x at time t is defined as [53,74]:
$$I[x_t, z_0] = \sum_{x_t, z_0} p(x_t, z_0)\,\log\frac{p(x_t, z_0)}{p(x_t)\,p(z_0)}.$$
In order to analyse the system in its natural timescale, we set t = τ/λ, where λ is the inverse of the relaxation time (the smallest non-zero eigenvalue of the matrix L), and calculate $I[x_\tau; z_0] = I[x_{\lambda t}; z_0]$. The term under the logarithm, which has been called the thermodynamic coupling function for systems with many degrees of freedom [75,76], describes the degree of correlation of the two variables, and is zero if the joint probability distribution factorizes. The thermodynamic coupling function has been shown to be useful for quantifying the contributions of specific energy terms in binary models of allosteric systems [75,76].
Again exploiting the symmetry of the problem, the mutual information can be written as
$$I[x_t, z_0] = \frac{1}{2}\left[(1+\mu)\log(1+\mu) + (1-\mu)\log(1-\mu)\right],$$
where |μ| ≤ 1. Since we have fixed r = 1, the symmetry of clockwise and counter-clockwise rotations is broken and μ ∈ [0, 1]. Information is an increasing function of μ and is maximized at I[x_t, z_0] = 1 bit for μ = 1. The specific values of μ are given in Appendix A and Appendix B for the models without and with feedback, respectively.
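As a quick self-check, the generic definition of Equation (3) and the symmetric closed form of Equation (4) can be compared numerically. The sketch below is ours, with an arbitrary value of μ, and uses base-2 logarithms so that the result is in bits.

import numpy as np

def mutual_information_bits(p):
    # I[x, z] in bits from a 2x2 joint probability table p(x, z), Equation (3).
    px = p.sum(axis=1, keepdims=True)
    pz = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / (px * pz)[mask])))

def info_symmetric(mu):
    # Closed form of Equation (4) for the symmetric models, in bits.
    out = 0.0
    for w in (1 + mu, 1 - mu):
        if w > 0:
            out += 0.5 * w * np.log2(w)
    return out

mu = 0.6
p = np.array([[1 + mu, 1 - mu],
              [1 - mu, 1 + mu]]) / 4.0     # joint table implied by the symmetry
print(mutual_information_bits(p), info_symmetric(mu))   # both ~0.278 bits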

4. Non-Equilibrium Dissipation

We consider the limitations on the regulatory functions coming from having a fixed amount of energy to dissipate during the signaling process that transmits information. Large amounts of dissipated energy allow systems to function far out of equilibrium, whereas no dissipated energy corresponds to equilibrium circuits. We quantify the degree to which the system functions out of equilibrium by comparing the probability of a forward trajectory, P(x), and of a backward trajectory, P(x̃), along the same path [34,77]:
$$\sigma = \sum_{\boldsymbol{x}} P(\boldsymbol{x})\,\log\frac{P(\boldsymbol{x})}{P(\tilde{\boldsymbol{x}})},$$
where the paths are defined as $\boldsymbol{x} = (x_1, x_2, \ldots, x_N)$ and $\tilde{\boldsymbol{x}} = (x_N, x_{N-1}, \ldots, x_1)$, and each $x_i$ is one of the four joint states of the input and output at time i. Using the Markov nature of the transitions $P(x_{t+1}|x_t)$ we write the probability of the forward path starting from the initial state $x_1$ as
$$P(\boldsymbol{x}) = P_1(x_1)\prod_{t=1}^{N-1} P_{t\to t+1}(x_{t+1}\,|\,x_t),$$
and analogously for the backward path. Equation (5) now becomes:
$$\sigma = \sum_{\boldsymbol{x}} P(x_1, \ldots, x_N)\,\log\frac{P_1(x_1)\prod_{t=1}^{N-1} P_{t\to t+1}(x_{t+1}|x_t)}{P_N(x_N)\prod_{t=1}^{N-1} P_{t+1\to t}(x_t|x_{t+1})} = \sum_{\boldsymbol{x}} P(x_1, \ldots, x_N)\,\log\frac{\prod_{t=1}^{N-1} P_{t\to t+1}(x_{t+1}|x_t)\,P_t(x_t)}{\prod_{t=1}^{N-1} P_{t+1\to t}(x_t|x_{t+1})\,P_t(x_{t+1})},$$
where we multiplied both the numerator and the denominator by the same product of probabilities $P(x_2)\cdots P(x_N)$. Simplifying further and marginalizing over the elements of $\boldsymbol{x}$ not equal to $x_t$ or $x_{t+1}$:
$$\sigma = \sum_{t=1}^{N-1}\sum_{\boldsymbol{x}} P(x_1, \ldots, x_N)\,\log\frac{P_{t\to t+1}(x_{t+1}|x_t)\,P_t(x_t)}{P_{t+1\to t}(x_t|x_{t+1})\,P_t(x_{t+1})} = \sum_{t=1}^{N-1}\sum_{x_t, x_{t+1}} P(x_t, x_{t+1})\,\log\frac{P_{t\to t+1}(x_{t+1}|x_t)\,P_t(x_t)}{P_{t+1\to t}(x_t|x_{t+1})\,P_t(x_{t+1})} = \sum_t \sigma(t),$$
which defines the time dependent dissipation production rate, σ ( t ) .
Noting that the transition probabilities do not depend on the time index, $P_{t\to t+1}(x_{t+1}{=}i\,|\,x_t{=}j) = P_{t+1\to t}(x_t{=}i\,|\,x_{t+1}{=}j) = P(x_{t+1}{=}i\,|\,x_t{=}j)$, and explicitly defining the transition rates through
$$P(x_{t+1}{=}i\,|\,x_t{=}j) = w_{ij}\,\delta t + \left(1 - w_{ij}\,\delta t\right)\delta_{ij},$$
and renaming $P_t(x_t) = p_j(t)$ and $P_t(x_{t+1}) = p_i(t)$, we obtain [34,77,78]:
$$\sigma(t) = \sum_{i,j} w_{ij}\,p_j(t)\,\log\frac{w_{ij}\,p_j(t)}{w_{ji}\,p_i(t)},$$
which in the limit t → ∞ results in the steady state entropy dissipation rate:
$$\sigma_{\rm ss} = \sum_{i,j} p_j^{\rm ss}\,w_{ij}\,\log\frac{w_{ij}}{w_{ji}},$$
where p j ss is the steady state probability distribution. We describe an alternative derivation of dissipation in Appendix C.
Again, we rescale the time in the above quantities by setting t = τ/λ (λ being the inverse of the relaxation time):
$$\hat\sigma(\tau) = \frac{1}{\lambda}\,\sigma(\tau/\lambda), \qquad \hat\sigma_{\rm ss} = \frac{1}{\lambda}\,\sigma_{\rm ss}.$$
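A direct numerical transcription of these definitions is short. The sketch below (our illustration, with arbitrary rates) evaluates the steady state rate for model S and checks it against the closed form quoted later in Equation (15); logarithms are taken base 2 to match the bit units used for information.

import numpy as np

def dissipation_rate(W, p):
    # Equation (10): sigma = sum_ij w_ij p_j log2( w_ij p_j / (w_ji p_i) ),
    # where W[i, j] is the transition rate from state j to state i.
    sigma = 0.0
    for i in range(len(p)):
        for j in range(len(p)):
            if i != j and W[i, j] > 0 and W[j, i] > 0:
                a, b = W[i, j] * p[j], W[j, i] * p[i]
                sigma += a * np.log2(a / b)
    return sigma

u, s = 0.4, 0.1                            # illustrative rates, r = 1
W = np.array([[0,   u,   1.0, 0  ],
              [u,   0,   0,   s  ],
              [s,   0,   0,   u  ],
              [0,   1.0, u,   0  ]])       # off-diagonal rates of model S

p_ss = np.array([u+1, s+u, s+u, u+1]) / (2*s + 4*u + 2)
lam = min(1 + s, 2*u)                      # inverse relaxation time
print(dissipation_rate(W, p_ss) / lam)     # Equations (10)-(12) at steady state
print((s-1)*u*np.log2(s) / ((1+s+2*u)*lam))  # closed form, Equations (15)-(16)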

5. Setup of the Optimization

With these definitions, following Mancini et al. [66], we can ask what are the circuits that optimally transmit information given a constrained amount of steady state dissipation σ̂_ss:
$$\max_{\mathcal{L}\,|\,\hat\sigma_{\rm ss}} I(\tau),$$
over the circuit's reaction rates, $\mathcal{L}$. The energy expense of a circuit that remains in steady state is well defined by this quantity. However, the total expense of circuits that function out of steady state must be calculated as the integral of the entropy dissipation rate in Equation (10) over the entire time the circuit is active, τ_p, such as the duration of the cell cycle or the interval between new inputs that kick the system into the initial non-equilibrium state. After some time the circuit will relax to steady state (see the diagram in Figure 2) and its energetic expense is then well described by the steady state dissipation, but the initial non-steady state transient costs the system some energy. We can compare the performance of circuits with different regulatory designs by considering the average energy expenditure until a given time τ_p:
$$\Sigma_{\rm avg}(\tau_p) = \frac{1}{\tau_p}\int_0^{\tau_p} \hat\sigma(\tau)\, d\tau.$$
We can foresee that circuits that spend most of their time in steady state will have their expenditure dominated by σ̂_ss, whereas circuits that spend a lot of time relaxing to steady state will be dominated by the additional out of steady state dissipation cost ΔΣ = Σ_avg − σ̂_ss. When τ_p → ∞, all circuits spend most of their time in steady state and, since σ̂(τ) → σ̂_ss as τ → ∞, the average in Equation (14) converges to σ̂_ss, so that the cost is dominated by the steady state dissipation.
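A numerical sketch of Equation (14) for model S follows; it is our illustration, with an arbitrary near-boundary μ_0 and grid, and it integrates the instantaneous rate of Equation (10) along the relaxation.

import numpy as np
from scipy.linalg import expm

u, s = 0.4, 0.1
W = np.array([[0, u, 1.0, 0], [u, 0, 0, s], [s, 0, 0, u], [0, 1.0, u, 0]])
L = W - np.diag(W.sum(axis=0))             # generator; columns sum to zero

def sigma(p):                              # instantaneous rate, Equation (10)
    return sum(W[i, j]*p[j]*np.log2(W[i, j]*p[j] / (W[j, i]*p[i]))
               for i in range(4) for j in range(4)
               if i != j and W[i, j] > 0 and W[j, i] > 0)

lam = min(1 + s, 2*u)                      # inverse relaxation time
mu0 = 0.999                                # start near the optimal initial condition
p0 = np.array([1+mu0, 1-mu0, 1-mu0, 1+mu0]) / 4.0

tau_p = 5.0                                # reset time, in rescaled units
taus = np.linspace(1e-3, tau_p, 2000)
sig_hat = [sigma(expm(L * tau/lam) @ p0) / lam for tau in taus]
Sigma_avg = np.mean(sig_hat)               # approximates Equation (14) on the grid
sigma_ss_hat = (s-1)*u*np.log2(s) / ((1+s+2*u)*lam)
print(Sigma_avg, sigma_ss_hat, Sigma_avg - sigma_ss_hat)  # relaxation excess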
Using the steady state distribution for model S and Equation (11) we can evaluate the non-rescaled steady state dissipation of the model without feedback [66],
$$\sigma_{\rm ss}(u,s) = \frac{(s-1)\,u\,\log_2 s}{1+s+2u}.$$
If we impose a non-equilibrium state by letting s → 0, the dissipation rescaled by the characteristic decay time (set by the smallest non-zero eigenvalue, the minimum of the two non-zero eigenvalues 1+s and 2u) tends to infinity,
$$\hat\sigma_{\rm ss}(u,s) = \frac{\sigma_{\rm ss}}{\lambda} = \frac{(s-1)\,u\,\log_2 s}{(1+s+2u)\cdot\min(1+s,\,2u)} \;\xrightarrow{\;s\to 0\;}\; \infty,$$
as expected. We also verify numerically that even in a non-steady state system that is kept out of equilibrium (Equation (10)) the rescaled dissipation (Equation (16)) tends to infinity, σ ^ = as s 0 , for all τ , μ 0 and u.
The steady state dissipation rescaled by the smallest eigenvalue for models F and F ˜ is [66]:
$$\hat\sigma_{\rm ss}(\alpha, s, y) = \frac{2(\alpha - sy)}{A(A-\rho)}\,\log_2\frac{\alpha}{sy},$$
where
$$A = 1 + s + y + \alpha,$$
$$\rho = \sqrt{(1+s+y+\alpha)^2 - 8(sy+\alpha)}.$$
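This closed form can be checked against the generic steady state rate of Equation (11). The sketch below is ours, using the Appendix B rate matrix with arbitrary rates chosen so that ρ is real.

import numpy as np

alpha, s, y = 0.2, 0.1, 0.2                # illustrative rates with A^2 > 8(sy + alpha)
W = np.array([[0,     y,   1.0, 0    ],
              [alpha, 0,   0,   s    ],
              [s,     0,   0,   alpha],
              [0,     1.0, y,   0    ]])   # off-diagonal rates of models F, F~ (r = 1)

A = 1 + s + y + alpha
rho = np.sqrt(A**2 - 8*(s*y + alpha))
p_ss = np.array([1+y, s+alpha, s+alpha, 1+y]) / (2*A)

sigma_ss = sum(p_ss[j]*W[i, j]*np.log2(W[i, j]/W[j, i])
               for i in range(4) for j in range(4)
               if i != j and W[i, j] > 0 and W[j, i] > 0)
print(sigma_ss / ((A - rho)/2))                                # Equation (11) / lambda
print(2*(alpha - s*y)/(A*(A - rho)) * np.log2(alpha/(s*y)))    # Equation (17)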

6. Results

The task is to find maximal mutual information between the input and the output, with or without constraints, for all model variants, (regulation with and without feedback; starting at steady state, or starting out of steady state) and compare their performance—the amount of information transmitted and the energy dissipated. To build intuition we first summarize the results of the unconstrained optimization obtained by Mancini et al. [65]. Then, a constraint will be set on the steady state dissipation rate σ ^ ss as in Mancini et al. [66]. We extend the latter results to models S ˜ and F ˜ by performing the optimization also with respect to the initial distribution. Finally, to compare not only the information transmitted in the models, but also its cost, we will calculate the average dissipation of the models.
In all cases we are looking for the maximum mutual information between the input at time 0 and the output at time τ, in the space of parameters (u, s and r for the model without feedback; α, y, s and r for the model with feedback). We can also treat the initial distribution (parametrized by the single parameter μ_0) as an additional degree of freedom, or set μ_0 equal to μ_0^ss, i.e., fix the initial distribution to be the steady state one. Optimizing with a constraint amounts to looking for the maximum of the function not in the whole parameter space ($\mathbb{R}_+^N$), but on the manifold given by σ_ss(parameters) = constraint.
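In practice this constrained search can be set up with any off-the-shelf optimizer. The sketch below is our illustration for model S̃; the SLSQP method, the parametrization, and the starting point are arbitrary choices of ours (a serious treatment would use many restarts), not the procedure used in the paper.

import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

TAU, TARGET = 1.0, 1.0                     # readout delay and sigma_hat_ss constraint
X = np.array([-1, -1, 1, 1])               # output value of the four joint states
Z = np.array([-1, 1, -1, 1])               # input value of the four joint states

def unpack(th):                            # enforce u, s > 0 and |mu_0| < 1
    return np.exp(th[0]), np.exp(th[1]), np.tanh(th[2])

def mu_tau(u, s, mu0):                     # mu = <x_tau z_0> for the symmetric model
    W = np.array([[0, u, 1.0, 0], [u, 0, 0, s], [s, 0, 0, u], [0, 1.0, u, 0]])
    L = W - np.diag(W.sum(axis=0))
    P = expm(L * TAU / min(1 + s, 2*u))    # time rescaled by the relaxation rate
    p0 = (1 + mu0 * X * Z) / 4.0
    return sum(X[a]*Z[b]*P[a, b]*p0[b] for a in range(4) for b in range(4))

def info(th):                              # Equation (4), in bits
    u, s, mu0 = unpack(th)
    m = min(abs(mu_tau(u, s, mu0)), 1 - 1e-12)
    return 0.5*((1+m)*np.log2(1+m) + (1-m)*np.log2(1-m))

def sig_hat(th):                           # Equation (16)
    u, s, _ = unpack(th)
    return (s-1)*u*np.log2(s) / ((1+s+2*u)*min(1+s, 2*u))

res = minimize(lambda th: -info(th), x0=[-0.5, -2.0, 1.0], method='SLSQP',
               constraints=[{'type': 'eq', 'fun': lambda th: sig_hat(th) - TARGET}])
print(unpack(res.x), -res.fun)             # candidate optimal (u, s, mu_0) and I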

6.1. Unconstrained Optimization

The results of the unconstrained optimization are summarized in Figure 3. As expected, the maximum amount of information that can be transmitted decays with the readout time for all models. Feedback allows for better information transmission only when the initial distribution is fixed to its steady state value. Optimizing over the initial distribution renders the models considered here without (S̃) and with feedback (F̃) equivalent. In this case the system relies on its initial condition, and information loss is due to the system decorrelating and losing information about its initial state. For a fixed initial distribution the model with feedback performs better than the model without feedback. We note that the feedback model considered here is a simplified model compared to the one studied in Mancini et al. [65], with fewer parameters. A full asymmetric model with feedback can transmit more information than a model without feedback if the initial conditions are not in steady state. However, these architectures correspond to infinite dissipation solutions, since all backward rates are forbidden and the circuit can never regain its initial state: one of the states i becomes absorbing, $p(y) = \delta_{y,i}$, and attracts the whole probability weight. We are therefore restricting our exploration of models with feedback to the subclass without an absorbing steady state.
The modes of regulation of the circuits corresponding to the optimal solutions were discussed in previous work [65,66]. In short, the information-optimal steady state system uses rates that break detailed balance and induce an order in visiting the four states i. Feedback increases the transmitted information for long time delays by implementing these cycling solutions using a mixture of fast and slow rates. When out of steady state initial conditions are allowed, circuits relax to absorbing final states that need to be externally reset. In this case the optimal solutions with and without feedback both result in the stochastic process cycling through the four states and simply rely on the decorrelation of the initial state.

6.2. Constraining σ̂_ss

We next looked for rates that maximize the transmitted information I[x_τ, z_0] at a fixed time τ given a fixed steady state dissipation rate σ̂_ss. We first plot the maximal mutual information as a function of the readout time τ for the models without feedback, S (dashed lines) and S̃ (solid lines) (Figure 4). Not surprisingly, the maximum information is a decreasing function of τ for both models, larger values of the steady state dissipation σ̂_ss allow for more transmitted information, and model S̃ with optimized initial conditions transmits more information than model S, which remains in steady state.
However, comparing all four models, the conclusion about the equivalence of the out of steady state models with (F̃) and without (S̃) feedback no longer holds when we constrain σ̂_ss (Figure 5). The difference between the optimal mutual information transmitted in models S̃ and F̃ is larger for systems that have smaller dissipation budgets σ̂_ss, and, as shown previously (Figure 3), the difference vanishes as σ̂_ss → ∞. The remaining conclusions from Figure 4 hold: models with feedback transmit more information than models without feedback, and models with free initial distributions transmit more information than the steady state models, as in the unconstrained optimization case (Figure 3).
Phase diagrams describing the optimal modes of regulation for steady state circuits are reported in Mancini et al. [66]. At large dissipation rates, the optimal out-of-equilibrium circuits exploit the increased decorrelation time of the system, since cycling solutions are permitted. Close to equilibrium, circuits with no feedback cannot transmit a lot of information. Circuits with feedback use a combination of slow and fast rates to transmit information. The optimal close to equilibrium regulatory functions rapidly align the two variables z_t and x_t (y > α, s small), and slowly anti-align them, increasing the probability to be in the aligned (+,+) and (−,−) states. This results in a positive feedback loop. The same strategy of adjusting rates is used far from equilibrium, but this time it results in a cycling solution, which translates into a negative feedback loop (α > y, s → 0).
When the circuit is allowed to function out of steady state, the optimal initial condition μ_0 lies as far as possible from the steady state. The optimal initial condition is μ_0 = 1, where only the aligned states are occupied (the initial distribution is p_0 = (0.5, 0, 0, 0.5)). This initial condition, combined with u < r and s < r (Figure A1), decreases the decorrelation time, and even a circuit with no feedback can transmit non-zero information. The rates of the circuits without feedback are simply set by the dissipation constraint, with s → 0 for large dissipation and s taking the value that balances u close to equilibrium (Figure A1). Optimal circuits far from equilibrium were reported in Mancini et al. [66] and close to equilibrium ones are shown in Figure 6. Circuits with feedback also mostly rely on the decorrelation of the initial state. Since the majority of the initial probability weight is in the aligned states, the rates y and α are always roughly equal (Figure A2). Only at intermediate dissipation rates, y slightly smaller than α and small s stabilize the initial aligned states and further decrease the decorrelation time (Figure 6), encoding a weak negative feedback in the circuit.
To summarize, for all σ̂_ss < ∞, as well as for circuits that have no constraints on σ̂_ss, we found I(S) < I(S̃), I(F) < I(F̃), and I(S) < I(F). Also, for all σ̂_ss < ∞, I(S̃) < I(F̃), with I(S̃) → I(F̃) as σ̂_ss → ∞, where I(M) denotes the optimal mutual information of model M ∈ {S, S̃, F, F̃}.

6.3. Cost of Optimal Information

The maximum information is obtained for the maximum allowed steady state dissipation. Interestingly, the steady state dissipation σ̂_ss combined with the circuit topology imposes a constraint on the maximum allowed Σ_avg(τ_p). This result follows from the fact that the system strongly relies on the initial condition to increase the information transmitted at small times. Larger μ_0 values allow the system to transmit more information, since the equilibration time is longer. However, fixing the value of σ̂_ss constrains the allowed values of μ_0 that determine the initial condition. To gain intuition, in addition to fixing σ̂_ss, we fix the mean dissipation Σ_avg(τ_p) until a reset time τ_p > τ and find the transition rates returning the optimal mutual information for a chosen readout time τ ≤ τ_p. The results of this optimization, presented in Figure 7, show that as Σ_avg increases, μ_0 tends towards 1, which corresponds to a probability distribution where only the aligned states (p_0 = (0.5, 0, 0, 0.5)) are occupied, and the transmitted information increases. Further increasing dissipation shows that the σ̂_ss constraint can be satisfied in two ways: either by a positive or a negative μ_0. Not only does the positive μ_0 transmit more information, but the negative μ_0 is forbidden by our choice of r = 1. Above a certain value of σ̂_ss only the forbidden negative μ_0 = −1 branch remains, corresponding to an initial distribution with all the weight in the anti-aligned states, p_0 = (0, 0.5, 0.5, 0) (had we chosen the counter-clockwise solutions by fixing s = 1, this probability vector would have been the maximally informative initial state). The system cannot fulfill the constraint of such high dissipation. If we do not constrain σ̂_ss we find that the maximum information corresponds to μ_0 = 1 [65], which we report in our analysis below.
We have seen that for both models, if we can choose the initial distribution instead of starting from the steady state, we can significantly increase the transmitted information. What is the “cost” of this choice of initial distribution? To estimate this total cost we calculate the average dissipation during a time τ_p > τ, τ_p Σ_avg(τ_p), for the circuit with the highest mutual information attainable for a given steady state dissipation rate σ̂_ss if we allow the initial condition to be out of steady state (Figure 2). We also introduce the relaxation cost, τ_p(Σ_avg − σ̂_ss) (Figure 8A), as the additional energy dissipated above the steady state value. As argued already, the systems that start at steady state, i.e., for which μ_0 = μ_0^ss, will not pay an additional cost (see Figure 2; for μ_0 = μ_0^ss the function σ̂(τ_p) is constant, equal to σ̂_ss). In this case the mean total dissipation Σ_avg(τ_p) will be equal to σ̂_ss and the relaxation cost goes to zero.
As shown in Figure 8B, the total cost (z-axis, in colour) is only slightly larger for S̃ than for S, and the difference is more pronounced only for relatively small σ̂_ss, where the cost of the steady state circuits goes to zero. This result holds for different combinations of readout delays τ and reset times τ_p, although the value of the total cost naturally increases with τ_p. As discussed above, more information can be transmitted at shorter times and by optimizing over the initial condition.
In order to quantify the intuition that S̃ transmits more information than S at a small price, we plot in Figure 8C the information gain, I* − I_ss, and the relaxation cost τ_p(Σ_avg − σ̂_ss). I* − I_ss is the difference between the optimal information when the initial distribution is free to be optimized over (S̃) and the optimal information for the system with a steady state initial distribution (S); together with the relaxation cost it quantifies the gain in information transmission and the additional cost of optimizing the initial condition. The relaxation cost is almost the same regardless of the reset time τ_p. The relaxation cost and the information gain decrease with increasing steady state dissipation σ̂_ss, as in this regime even the steady state system is able to achieve slow decorrelation by tuning the switching rates.
This analysis shows that the higher optimal mutual information obtained by optimizing over the initial distribution does not generate significantly higher costs. The same result holds when comparing the models with feedback, F and F̃ (Figure 8D). The information increase in the F̃ model with optimized initial conditions compared to the F steady state model is minimal at large σ̂_ss (as expected from Figure 5). While the F̃ model with feedback always transmits more information than the S̃ model without feedback, the total average cost for all σ̂_ss is smaller for the F̃ model than for the S̃ model. This result means that even when feedback does not increase the transmitted information compared to models without feedback, it decreases the total cost.
The information gain of circuits with optimized initial conditions compared to steady state circuits is larger for the S̃ model without feedback than for the F̃ model with feedback (Figure 8E), and the relaxation cost decreases monotonically with increasing σ̂_ss. In both the cases with and without feedback there is a non-zero, finite value of the steady state dissipation where the information gain from optimizing the initial condition is largest. In summary, optimizing the initial condition nearly always incurs a cost, but it always results in a significant information gain. Table 1 summarizes the comparison of the optimal transmitted information I(M) and the total cost C(M) for all four models M ∈ {S, S̃, F, F̃}.

6.4. Suboptimal Circuits

We found the parameters of the stochastic processes, including the initial conditions, that optimally transmit delayed information between the two variables given a constraint on σ̂_ss. However, the real initial stimulus may deviate from the optimal one due to random fluctuations of the environment. To see how much information an optimal circuit can transmit for different initial conditions, we took the optimal parameters for different fixed σ̂_ss and readout delays τ, varied the initial condition μ_0, and evaluated the transmitted information and the mean dissipation Σ_avg(τ_p) for both models, S̃ and F̃ (Figure 9). We find that while the information always decreases (Figure 9A,C,E for model S̃ and Figure 9G,I,K for model F̃), as expected, the mean dissipation can be smaller for unexpected values of the initial condition (Figure 9B,D,F for model S̃ and Figure 9H,J,L for model F̃). The transmitted information of the suboptimal circuits is larger than that of the optimal steady state circuit for many values of μ_0, especially those close to the optimum of the non-steady state circuit (μ_0 = 1). The same conclusions hold for suboptimal circuits with and without feedback. The range of μ_0 values where suboptimal circuits provide an information gain is smaller for circuits with feedback than without, due to the already large information transmission capacity of steady state circuits with feedback.

7. Gene Regulatory Circuits

The coupled two state system model considered above can be thought of as a simplified model of receptor—ligand binding. It can also be considered as an overly simplified model of gene regulation where the input variable describes the presence or absence of a transcription factor and the output—the activation state of the regulated gene. However, the continuous nature of transcription factor concentrations has proven important when considering information transmission in these systems [53,54]. We will not repeat the whole optimization problem for continuous variables but we calculate and discuss the form of dissipation in the simplest gene regulatory module that can function out of equilibrium.

7.1. Bursty Gene Regulation

The simplest gene regulatory system that can function out of equilibrium is a model that accounts for transcriptional bursts [67,68,69,70,71,72,73]. The promoter has two possible states: a basal expression state where the gene is read out at a basal rate R_0 and an activated expression state where the gene is read out at rate R_1. The promoter switches between these two states by binding a transcription factor present at concentration c with rate k_+, and unbinds at a constant rate k_−. The probability that there are g product proteins of this gene in the cell (we integrate out the mRNA state due to a separation of timescales) is P(g) = P_0(g) + P_1(g), where P_0(g) is the probability that the promoter is in the basal state and there are g proteins, and P_1(g) is the analogous probability for the promoter in the activated state. The probability distribution evolves both due to binding and unbinding of the transcription factor and due to protein production and degradation (with rate τ^{-1}) according to
$$\frac{dP_0(g)}{dt} = \frac{g+1}{\tau}\,P_0(g+1) + k_-\,P_1(g) + R_0\,P_0(g-1) - \left(k_+ c + \frac{g}{\tau} + R_0\right)P_0(g),$$
$$\frac{dP_1(g)}{dt} = \frac{g+1}{\tau}\,P_1(g+1) + k_+ c\,P_0(g) + R_1\,P_1(g-1) - \left(k_- + \frac{g}{\tau} + R_1\right)P_1(g).$$
These equations can be solved analytically in steady state in terms of special functions [79,80]. In the limit of fast promoter switching ($k_+$ and $k_-$ go to infinity with their ratio $K \equiv k_+/k_-$ held constant) the system is well described by a Poisson distribution,
$$P_1^*(g) = \frac{cK}{1+cK}\,\frac{(R_{\rm eff}\,\tau)^g}{g!}\,e^{-R_{\rm eff}\,\tau} = cK\,P_0^*(g),$$
where $R_{\rm eff}$ is an effective production rate:
$$R_{\rm eff} = \frac{k_+ c\, R_1 + k_-\, R_0}{k_+ c + k_-}.$$
The total steady state dissipation $\sigma_{\rm ss} = \sigma_0 + \sigma_1 + \sigma_2$ calculated from Equation (11) can be split into three parts, where
$$\sigma_0 = \sum_g \left[P_0^*(g)\,k_+ c - P_1^*(g)\,k_-\right]\log\frac{k_+ c}{k_-},$$
$$\sigma_1 = \sum_g \left[P_0^*(g)\,R_0\log(R_0\tau) + P_1^*(g)\,R_1\log(R_1\tau)\right],$$
$$\sigma_2 = -\sum_g P_0^*(g)\left[R_0\log(g+1) + \frac{g}{\tau}\log\frac{R_0\tau}{g}\right] - \sum_g P_1^*(g)\left[R_1\log(g+1) + \frac{g}{\tau}\log\frac{R_1\tau}{g}\right].$$
The first two expressions can be simplified using the normalization relations $\sum_g\left[P_0^*(g) + P_1^*(g)\right] = 1$ and $\sum_g P_1^*(g) = \frac{k_+ c}{k_- + k_+ c}$, obtaining:
$$\sigma_0 = 0,$$
$$\sigma_1 = \frac{1}{k_- + k_+ c}\left[k_-\,R_0\log(R_0\tau) + k_+ c\,R_1\log(R_1\tau)\right].$$
We now use these results to examine steady state dissipation in the equilibrium limit and the limit of the fast switching promoter. Similar results but in slightly different limits were obtained in reference [30].
Equilibrium Limit. Equilibrium is surely achieved if there is only one promoter state. In terms of our model this corresponds to $k_+ \to 0$ with $k_-$ finite. In this limit the activated state is never occupied and its steady state probability goes to $P_1^*(g) \to 0$. Equations (21) and (22) then result in a Poisson distribution with mean $R_0\tau$ and we can verify that detailed balance is satisfied,
$$P_0^*(g)\,W(g \to g\pm 1) = P_0^*(g\pm 1)\,W(g\pm 1 \to g),$$
as confirmed by $\sigma_2 = -\sigma_1$ in Equations (25)–(27).
Fast promoter switching limit. In the fast promoter switching limit the dissipation of the system is:
$$\sigma_{FS} = \frac{cK}{(1+cK)^2}\,(R_0 - R_1)\,\log\frac{R_0}{R_1}.$$
σ_FS is always positive, but the equilibrium regime is reached only if k_− or k_+ asymptotically vanishes. For finite binding and unbinding rates the system is not in equilibrium despite being well described by an equilibrium-like steady state probability distribution. Since this example is mainly presented as a pedagogical application of dissipation, for completeness we derive similar results in the Langevin description in Appendix D, discussing the differences in dissipation arising from model coarse graining [81,82,83].
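To make this concrete, one can solve the truncated master equation, Equations (21) and (22), numerically and compare its steady state dissipation with σ_FS. The sketch below is ours; the truncation G and all rate values are illustrative, and natural logarithms are used on both sides.

import numpy as np
from scipy.linalg import null_space

R0, R1, tau = 2.0, 10.0, 1.0
kplus, kminus, c = 50.0, 50.0, 1.0         # fast (but finite) promoter switching
G = 60                                     # protein copy-number truncation

idx = lambda m, g: m*(G + 1) + g           # state (promoter m = 0/1, proteins g)
n = 2*(G + 1)
W = np.zeros((n, n))                       # W[i, j]: rate from state j to state i
for g in range(G + 1):
    W[idx(1, g), idx(0, g)] = kplus * c    # promoter activation
    W[idx(0, g), idx(1, g)] = kminus       # promoter deactivation
    for m, R in ((0, R0), (1, R1)):
        if g < G:
            W[idx(m, g+1), idx(m, g)] = R            # production
            W[idx(m, g), idx(m, g+1)] = (g + 1)/tau  # degradation
L = W - np.diag(W.sum(axis=0))
p = null_space(L)[:, 0]
p /= p.sum()                               # steady state distribution

sigma = sum(p[j]*W[i, j]*np.log(W[i, j]/W[j, i])      # Equation (11)
            for i in range(n) for j in range(n)
            if W[i, j] > 0 and W[j, i] > 0)

K = kplus / kminus
sigma_FS = c*K/(1 + c*K)**2 * (R0 - R1)*np.log(R0/R1)  # Equation (30)
print(sigma, sigma_FS)                     # approach each other as switching speeds up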

8. Discussion

All living organisms, even the simplest ones, must read and process information in order to adapt to their environment. In the case of cells, transmitting information means sensing chemical stimuli via receptors and activating biochemical pathways in response to these signals. Reading and transmitting signals comes at a price: it consumes energy. There are plenty of possible designs for these regulatory circuits, yet not all of them are found in nature [2]. The question arises why some network regulatory functions are frequent and others non-existent. One way to approach such a question is to optimize a specific function by choosing the circuit's regulatory function. The choices of optimized function that have been considered include noise (minimization) [11], time-delay of the response (minimization) [2] or information transmitted between the input and output (maximization) [56].
Two different circuits can produce and use the same amount of proteins, but the energy dissipated in them is different. In other words, we assume that while ATP is certainly needed in a molecular circuit, it is part of the hardware of the network and cannot be modified a lot. Instead, we asked about the best regulatory functions (software) we can implement, given a certain set of hardware. For this reason we worked with a simplified binary representation of the circuits to concentrate on the regulatory computation and turned the problem of finding the optimal regulatory function into finding the optimal parameters of stochastic processes.
Our main previous findings about steady state circuits can be related to tasks performed by the circuits [66]. Circuits that function close to equilibrium transmit information optimally using positive feedback loops that are characteristic of long-term readouts responsible for cell fate commitment [84,85]. Circuits that function far from equilibrium transmit information using negative feedback loops that are representative of shock responses that are transient but need to be fast [86,87]. Therefore cells may implement non-equilibrium solutions when fast responses are needed and rely on equilibrium responses when averaging is possible and there is no rush. This result agrees with the general finding of Lan et al. [29] for continuous biochemical kinetics that negative feedback circuits always break detailed balance and such circuits function out of equilibrium.
In general, in steady state we find that models with feedback significantly outperform models without feedback in terms of optimal information transmission between the two variables, but the respective costs of optimal information transmission are the same. Circuits close to and far from equilibrium rely on a mixture of slow and fast timescales to delay relaxation and transmit information. The only other solution available in our simple setting is using the initial condition, which is efficient in terms of information transmission but costly.
Here we identified two properties linked to feedback: a circuit with feedback does not necessarily transmit more information than a system without feedback if we are allowed to pick an optimal initial condition, yet in this case implementing a circuit with feedback can reduce the non-equilibrium costs. In general, introducing an optimized initial condition incurs a cost, but this cost is often minimal, especially taking into account the information gained. This cost is interpretable biologically as the external energetic cost needed to place the system in a specific initial condition. It must be provided by the work of another regulatory element or circuit, or by an external agent or force. This specific initial condition requires poising the system at a specific point. Yet it does not seem biologically implausible, let alone impossible, to “prepare” the initial state after cell division or mitosis, or upon entering a new phase of the cell cycle [88]. For example, a specific gene expression state or receptor state (e.g., (+,+) or (−,−)) seems easily attainable. Modifying the initial conditions away from the optimal μ_0 in circuits that function out of steady state decreases the transmitted information but can also decrease the mean dissipation. Therefore preparing the system out of steady state may still be a useful strategy for transmitting information.
One could look at these results from two perspectives: on the one hand, circuits with feedback transmit more information in the steady state setting; on the other hand, feedback exhibits frugality in expenses in the case of optimized initial distributions. One could also defend the models without feedback, stating that they can be only slightly worse in terms of information transmission (optimized initial distribution case) and can be found to dissipate the same amount of energy (steady state initial distribution). All circuits will reach steady state; however, especially during fast processes such as development [89] or stress response [87], the information transmitted at short times may be what matters for downstream processes. In general, regardless of the timescale, circuits with feedback perform better than (or as well as) regulatory systems with no feedback, both in terms of information transmission and the cost of transmitting this optimal information.
The learning rate is another quantity that has been useful in studying bipartite systems in stochastic thermodynamics [43,44,45]. The learning rate, defined as $l_x = \partial_\tau I[z_t, x_{t+\tau}]\big|_{\tau=0}$, gives the instantaneous increase in information that the output variable gains by continuing to learn about the input variable. We calculate the learning rate for our informationally-optimal models when they are in steady state (Figure A3). For models without feedback the learning rate is bounded by σ_x (as defined in Appendix E), such that $\eta = l_x/\sigma_x \le 1$. In this case the interpretation of the learning rate allows us to estimate how closely the output variable is following the input variable, and positive learning rates are indicative of adaptation and learning. Not surprisingly, we find that the models with steady state initial conditions have larger learning rates than the models with optimized initial conditions, since the latter rely less on the parameters of the network and more on the initial conditions (which are forgotten in the steady state calculation) to transmit information. Calculating a time delay dependent learning rate would be more informative. The learning rate also increases with σ̂, in agreement with previous statements that learning is easier far from equilibrium [29,30,43]. We also performed the same calculation for models with feedback but, as was pointed out previously [44,90,91], the interpretation of the learning rate becomes less clear in these systems since input and output are no longer clearly defined. Instead, the above one-sided definition should be replaced by a time integral over the trajectory to distinguish whether the learning is of the other variable (z) or of a previous instance of the same variable (x_{t−τ}). The calculated quantity instead tells us about the ability of x to respond to z, assuming z was fluctuating freely. In that sense a positive value of l_x tells us that the dynamics of the two variables of the circuit are not completely decoupled in steady state, except in the case of model F close to equilibrium. Our results tell us that equilibrium imposes a symmetry between input and output, which is broken either by initial conditions (F̃ at small σ̂) or by large dissipation.
Lastly, for pedagogical purposes we attempted to discuss the link between dissipation calculations that are often performed on binary regulatory systems and continuous variables, showing that the simplest model of bursty transcription can result in non-zero dissipation, even in the fast switching limit where the equilibrium-like steady state Poisson distribution is recovered. Bursty gene expression is widespread, from bacteria [71,72] and yeast [92] to invertebrates [73,89] and mammals [68]. Bursty self-activating genes in intermediately fast switching regimes have also been shown to have different stability properties than pure equilibrium systems, due to non-equilibrium cycling through the coupled promoter and protein states [93]. While cells are not energy limited, the discussion recounted in this paper may suggest that different modes of regulation (including burstiness) may be better suited to slow or fast responses.

Author Contributions

Conceptualization, P.S.-R., D.V. and A.M.W.; Formal analysis, P.S.-R., D.V., J.M. and A.M.W.; Funding acquisition, A.M.W.; Investigation, P.S.-R., D.V., J.M. and A.M.W.; Methodology, P.S.-R., D.V. and A.M.W.; Project administration, J.M. and A.M.W.; Software, P.S.-R. and D.V.; Supervision, J.M. and A.M.W.; Visualization, P.S.-R. and A.M.W.; Writing—original draft, P.S.-R., D.V. and A.M.W.; Writing—review & editing, P.S.-R., D.V., J.M. and A.M.W.

Funding

This research was funded by FP7 People: Marie-Curie Actions grant number 303561 and Narodowe Centrum Nauki grant number 2015/17/B/ ST1/00693.

Acknowledgments

We thank T. Lipniacki, A. Nourmohammad and T. Mora for helpful discussions. This work was in part supported by MCCIG no. 303561. J. M. would like to thank the National Science Centre (Poland) for financial support under Grant No. 2015/17/B/ ST1/00693.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Model without Feedback

The transition matrix for the model without feedback reads:
$$\mathcal{L} = \begin{pmatrix} -(u+s) & u & r & 0 \\ u & -(u+r) & 0 & s \\ s & 0 & -(u+r) & u \\ 0 & r & u & -(u+s) \end{pmatrix},$$
where the rates are defined in Figure 1A. By matrix diagonalization we find the eigenvalues and eigenvectors and calculate the probability distribution p(x_τ, z_0) at time τ for the four states,
$$p_{+,+}(\tau) = p_{-,-}(\tau) = \frac{e^{-\tau(s+2u+1)/\lambda}}{4(s-2u+1)}\left[\left(\mu_0(s-2u+1)+s-1\right)e^{2\tau u/\lambda} - (s-1)\,e^{(s+1)\tau/\lambda} + (s-2u+1)\,e^{\tau(s+2u+1)/\lambda}\right],$$
and
$$p_{+,-}(\tau) = p_{-,+}(\tau) = \frac{e^{-\tau(s+2u+1)/\lambda}}{4(s-2u+1)}\left[-\left(\mu_0(s-2u+1)+s-1\right)e^{2\tau u/\lambda} + (s-1)\,e^{(s+1)\tau/\lambda} + (s-2u+1)\,e^{\tau(s+2u+1)/\lambda}\right].$$
The steady state distribution is given by the eigenvector corresponding to the zeroth eigenvalue,
$$p = \left(\frac{u+1}{2s+4u+2},\ \frac{s+u}{2s+4u+2},\ \frac{s+u}{2s+4u+2},\ \frac{u+1}{2s+4u+2}\right).$$
These results allow us to calculate
$$\mu = \frac{(1-s)\,e^{-2ut}}{1+s-2u} + \frac{\mu_0(1+s-2u) - (1-s)}{1+s-2u}\,e^{-(1+s)t}$$
$$= \mu_0\,e^{-(1+s)t} + \frac{1-s}{1+s-2u}\left(e^{-2ut} - e^{-(1+s)t}\right).$$
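Equation (A6) can be cross-checked by direct propagation of the rate matrix. The snippet below is ours, with arbitrary test values and the state labeling inferred from the matrix above; it computes μ = ⟨x_t z_0⟩ both ways.

import numpy as np
from scipy.linalg import expm

u, s, mu0, t = 0.35, 0.15, 0.6, 0.8        # arbitrary test values
W = np.array([[0, u, 1.0, 0], [u, 0, 0, s], [s, 0, 0, u], [0, 1.0, u, 0]])
L = W - np.diag(W.sum(axis=0))
X = np.array([-1, -1, 1, 1]); Z = np.array([-1, 1, -1, 1])  # assumed state labels
p0 = (1 + mu0 * X * Z) / 4.0
P = expm(L * t)
mu_num = sum(X[a]*Z[b]*P[a, b]*p0[b] for a in range(4) for b in range(4))
mu_ana = mu0*np.exp(-(1+s)*t) + (1-s)/(1+s-2*u)*(np.exp(-2*u*t) - np.exp(-(1+s)*t))
print(mu_num, mu_ana)                      # agree to numerical precision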

Appendix B. Model with Feedback

The transition matrix for the model with feedback, with rates defined in Figure 1B, reads:
$$\mathcal{L} = \begin{pmatrix} -(\alpha+s) & y & r & 0 \\ \alpha & -(y+r) & 0 & s \\ s & 0 & -(y+r) & \alpha \\ 0 & r & y & -(s+\alpha) \end{pmatrix}.$$
Figure A1. The optimal parameters as a function of the readout delay, τ, for the models without feedback, S and S̃, at different constrained steady state dissipation rates σ̂_ss.
The detailed derivation of the steady state quantities and eigenvalues is given in Mancini et al. [66]. Here we just summarize the main results. The steady state probability distribution is:
$$p = \frac{1}{2A}\left(1+y,\ s+\alpha,\ s+\alpha,\ 1+y\right),$$
where we have defined A and ρ in Equations (18) and (19).
The eigenvalues of the matrix in Equation (A7) are $\{\lambda_i\} = \{0,\ A,\ (A-\rho)/2,\ (A+\rho)/2\}$, and $\lambda = (A-\rho)/2$ is always the smallest non-zero eigenvalue. For a model with steady state initial conditions μ reads
$$\mu = \exp\left(-\frac{A}{2\lambda}\tau\right)\left\{q\,\cosh\left(\frac{\rho}{2\lambda}\tau\right) - \frac{s^2 - (1+y)^2 - 4\alpha + \alpha^2 + 2s(2y+\alpha)}{A\rho}\,\sinh\left(\frac{\rho}{2\lambda}\tau\right)\right\},$$
with $q = (1+y-s-\alpha)/A$ and the rescaled time τ = tλ.
Figure A2. The optimal parameters as a function of the readout delay τ for models with feedback, F and F̃, at different constrained steady state dissipation rates σ̂_ss.

Appendix C. Entropy Production Rate

In this Appendix we present an alternative derivation of the dissipation. We denote the probability of state i by $p_i$; the entropy of the distribution is defined as:
$$S(t) = -\sum_i p_i(t)\log p_i(t).$$
The entropy production rate formula is derived by differentiating the entropy with respect to time:
$$\dot S(t) = -\sum_i \dot p_i(t)\log p_i(t) - \sum_i p_i(t)\,\frac{1}{p_i(t)}\,\dot p_i(t) = -\sum_i \dot p_i(t)\log p_i(t) - \sum_i \dot p_i(t).$$
Denoting by $w_{ij}$ the transition rate from state i to state j, we obtain $\dot p_i(t) = \sum_{j\neq i}\left[w_{ji}\,p_j(t) - w_{ij}\,p_i(t)\right]$. We define $w_{ii}$ as $-\sum_{j\neq i} w_{ij}$, so that we can write compactly $\dot p_i(t) = \sum_j p_j(t)\,w_{ji}$ and the expression for $\dot S(t)$ becomes:
$$\dot S(t) = -\sum_i \left[\sum_j w_{ji}\,p_j(t)\right]\log p_i(t) - 0 = -\sum_{i,j} w_{ji}\,p_j(t)\log p_i(t).$$
With the definition of $w_{ii}$, the rates satisfy $\sum_j w_{ij} = 0$. The expression $\sum_i p_i(t)\log p_i(t)\sum_j w_{ij} = \sum_{i,j} w_{ij}\,p_i(t)\log p_i(t)$ is then equal to zero, and we add it to (A11) to obtain a compact form:
$$\dot S(t) = \sum_{i,j} p_i(t)\,w_{ij}\log p_i(t) - \sum_{i,j} p_i(t)\,w_{ij}\log p_j(t) = \sum_{i,j} p_i(t)\,w_{ij}\log\frac{p_i(t)}{p_j(t)}.$$
Further formula manipulation gives:
$$\dot S(t) = \frac{1}{2}\sum_{i,j} p_i(t)\,w_{ij}\log\frac{p_i(t)}{p_j(t)} + \frac{1}{2}\sum_{j,i} p_j(t)\,w_{ji}\log\frac{p_j(t)}{p_i(t)} = \frac{1}{2}\sum_{i,j}\left[p_i(t)\,w_{ij} - p_j(t)\,w_{ji}\right]\log\frac{p_i(t)}{p_j(t)} = -\underbrace{\frac{1}{2}\sum_{i,j}\left[p_i(t)\,w_{ij} - p_j(t)\,w_{ji}\right]\log\frac{w_{ij}}{w_{ji}}}_{\text{entropy flow}} + \underbrace{\frac{1}{2}\sum_{i,j}\left[p_i(t)\,w_{ij} - p_j(t)\,w_{ji}\right]\log\frac{p_i(t)\,w_{ij}}{p_j(t)\,w_{ji}}}_{\text{entropy production rate}}.$$
The difference between the entropy production rate and the entropy flow is the rate at which the entropy of the system changes. The entropy flow quantifies the flux of entropy from the system to the outside. In steady state, as the entropy does not change, the two terms are equal, which means that the whole entropy produced by the system is dissipated.
The second underbraced term of Equation (A13) can be rewritten in the familiar form:
$$\sigma(t) = \sum_{i,j} p_i(t)\,w_{ij}\log\frac{p_i(t)\,w_{ij}}{p_j(t)\,w_{ji}}.$$

Appendix D. Langevin Description of Bursty Gene Regulation

A bursty model of transcription such as the one presented in Section 7.1 can be written in a Langevin description by introducing n, the frequency for the promoter to be in the activated state [53]:
$$\frac{dn}{dt} = -c k_+\,n - k_-\,n + \xi_n,$$
$$\frac{dg}{dt} = R\,n - \frac{1}{\tau}\,g + \xi_g,$$
where the fluctuations are given by
$$\langle\xi_n(t)\,\xi_n(t')\rangle = 2\left(k_+ c\,(1-\bar n) + k_-\,\bar n\right)\delta(t-t'),$$
$$\langle\xi_g(t)\,\xi_g(t')\rangle = 2\left(R\,\bar n + \bar g/\tau\right)\delta(t-t').$$
These equations describe the fluctuations of the promoter state and the protein concentration g around the steady state solution $(\bar n, \bar g) = \left(\frac{k_+ c}{k_- + k_+ c},\ \frac{k_+ c\,R\,\tau}{k_- + k_+ c}\right)$. In order to lighten notation, we have used (n, g) instead of the standard form (δn, δg) to describe fluctuations. Equations (A15) and (A16) can be recast into the matrix form $\dot X = AX + \xi$ with
A = \begin{pmatrix} -(c k_+ + k_-) & 0 \\ R & -1/\tau \end{pmatrix},
and the noise correlation matrix \langle \xi(t) \xi^T(t') \rangle = 2 D \delta(t-t') is
D = \begin{pmatrix} k_+ c (1-\bar{n}) + k_- \bar{n} & 0 \\ 0 & R \bar{n} + \bar{g}/\tau \end{pmatrix}.
The correlation matrix \Sigma can be computed with standard methods [56], by inverting the relation 2D = -(A \Sigma + \Sigma A^T):
\Sigma = \begin{pmatrix} \langle nn \rangle & \langle ng \rangle \\ \langle gn \rangle & \langle gg \rangle \end{pmatrix} = \frac{1}{(c k_+ + k_-)^2} \begin{pmatrix} 2 c k_- k_+ & \dfrac{2 c k_- k_+ R \tau}{c k_+ \tau + k_- \tau + 1} \\ \dfrac{2 c k_- k_+ R \tau}{c k_+ \tau + k_- \tau + 1} & \dfrac{2 c k_+ R \tau \left[ \tau (c k_+ + k_-)^2 + k_- R \tau + c k_+ + k_- \right]}{c k_+ \tau + k_- \tau + 1} \end{pmatrix}.
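The relation above is a continuous Lyapunov equation, so \Sigma can also be obtained numerically. A minimal sketch in Python (the parameter values are arbitrary placeholders, not values used in the paper):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

c, kp, km, R, tau = 1.0, 2.0, 1.0, 5.0, 0.5    # illustrative values only
nbar = kp * c / (km + kp * c)                  # stationary promoter activity
gbar = kp * c * R * tau / (km + kp * c)        # stationary protein level

A = np.array([[-(c * kp + km), 0.0],
              [R, -1.0 / tau]])
D = np.diag([kp * c * (1.0 - nbar) + km * nbar,   # promoter noise strength
             R * nbar + gbar / tau])              # protein noise strength

# Solves A @ Sigma + Sigma @ A.T = -2 D, the relation quoted above.
Sigma = solve_continuous_lyapunov(A, -2.0 * D)
print(Sigma)   # agrees entry by entry with the closed form of Eq. (A21)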

Entropy Production

The probability of a trajectory of a multivariate Langevin process can be calculated via the Onsager-Machlup formalism. Using this probability as a starting point, the dissipation can be derived exactly (see Ref. [94], where the computation is done in detail and in a self-contained fashion). For variables that are symmetric under time reversal the entropy production can be written in a compact form, where the index k runs over all the variables:
W(t) = \sum_k D_{kk}^{-1} \int_0^t ds \, (A X)_k \, \dot{X}_k.
In our case, using Equations (A19) and (A20), one has
W(t) = D_{nn}^{-1} \int_0^t dt' \left( -c k_+ n(t') - k_- n(t') \right) \dot{n}(t') + D_{gg}^{-1} \int_0^t dt' \left( R n(t') - \frac{1}{\tau} g(t') \right) \dot{g}(t').
Figure A3. The learning rate for the output variable x as a function of the rescaled steady state dissipation, σ ^ ss , calculated at steady state for models with (F and F ˜ ) and without feedback (S and S ˜ ). Models S ˜ and F ˜ have optimized initial conditions (which do not enter this calculation except through the optimal parameters) and models S and F are constrained to have initial conditions in steady state.
Equation (A23) can be simplified by noting that all terms which are exact derivatives are not extensive in time (terms like \int_0^t dt' \, n(t') \dot{n}(t') = \frac{1}{2}\left[ n^2(t) - n^2(0) \right], or the equivalent in g, can be neglected in the large-t limit). All the steady state correlations are also time-translation invariant, i.e., \int_0^t dt' \, \langle n(t') \dot{g}(t') \rangle \to t \langle n \dot{g} \rangle. As a result, the dissipation is:
\sigma^{LE} = \lim_{t \to \infty} \frac{\langle W(t) \rangle}{t} = \frac{R}{R \bar{n} + \bar{g}/\tau} \langle n \dot{g} \rangle.
The correlation \langle n \dot{g} \rangle in Equation (A24) can be computed by replacing \dot{g} with Equation (A16), yielding \langle n \dot{g} \rangle = R \langle nn \rangle - \frac{1}{\tau} \langle ng \rangle. Substituting the correlations of Equation (A21) we obtain:
\sigma^{LE}(c) = \frac{R \tau \left( 1 - c k_+ \tau_s \right)}{\tau + \tau_s},
where \tau_s = (c k_+ + k_-)^{-1}. Note that in the limit \tau_s \to 0 the dissipation does not depend on \tau and is equal to \sigma_0^{LE} = R/(1 + cK), where K is equal to k_+/k_-. Additionally, for K \to \infty (which corresponds to no flux to the inactive state, k_- \to 0) the dissipation vanishes, as in the master equation formulation (Equation (30)).
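Continuing the numerical sketch from the previous subsection (reusing Sigma, nbar, gbar and the placeholder parameters defined there), the dissipation of Equation (A24) can be evaluated directly from the covariances and compared against the closed form above:

ng_dot = R * Sigma[0, 0] - Sigma[0, 1] / tau       # <n gdot> = R<nn> - <ng>/tau
sigma_LE = R * ng_dot / (R * nbar + gbar / tau)    # Eq. (A24)
tau_s = 1.0 / (c * kp + km)
print(sigma_LE, R * tau * (1.0 - c * kp * tau_s) / (tau + tau_s))   # equal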
As a final remark, we note that a Langevin formulation is a coarse-grained description of the master equation approach described in Section 7.1. This kind of coarse-graining procedure integrates away degrees of freedom which can carry non-equilibrium currents, and can lead to lower values of the dissipation [81,82,83]. For instance, consider a small but finite reverse rate R_0 = \epsilon: Equation (30) becomes \sigma^{ME} = \frac{cK(1+R)}{(1+cK)^2} \log \frac{R}{\epsilon}, which diverges logarithmically as \epsilon \to 0, and one finds \sigma^{ME} > \sigma^{LE}.

Appendix E. Learning Rate

Lastly, following Barato et al. [43] we consider the learning rate in steady state (we limit ourselves to the steady state discussion since it allows us to gain analytical intuition):
l_x = -\sum_{(i \to j): \, x_i \neq x_j} p_i w_{ij} \log \frac{p_i}{p_j},
which was defined to describe the rate at which the output x learns about the dynamics of the stochastic input z. For our system, the learning rate is explicitly given by
l_x = -\left[ p_1 w_{13} \log\frac{p_1}{p_3} + p_3 w_{31} \log\frac{p_3}{p_1} + p_2 w_{24} \log\frac{p_2}{p_4} + p_4 w_{42} \log\frac{p_4}{p_2} \right],
and is bounded by σ x defined as:
\sigma_x = \left( p_1 w_{13} - p_3 w_{31} \right) \log\frac{w_{13}}{w_{31}} + \left( p_2 w_{24} - p_4 w_{42} \right) \log\frac{w_{24}}{w_{42}}.
For the models without feedback (S and S ˜ ) the learning rate is:
l_x = \frac{u(s-1)}{s + 2u + 1} \log\frac{u+s}{u+1}.
In models without feedback w_{12} = w_{21} and w_{34} = w_{43}, and the steady state dissipation rate comes only from the output x (\sigma_z = 0 and \sigma_x = \sigma_{ss}); it is given by Equation (15), such that
\eta = \frac{l_x}{\sigma_x} = \log\frac{u+s}{u+1} \Big/ \log s \leq 1.
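These steady state expressions can be verified by constructing the four-state rate matrix explicitly. A short sketch in Python (assuming a labeling consistent with the scheme above: the output aligns with the input at rate s and anti-aligns at rate 1, the input flips at rate u; the numerical values of u and s are arbitrary):

import numpy as np

u, s = 1.3, 3.0                                  # illustrative parameters
# 0-based states; pairs (0,1) and (2,3) differ in z, pairs (0,2) and (1,3) in x
w = np.zeros((4, 4))
w[0, 1] = w[1, 0] = w[2, 3] = w[3, 2] = u        # input flips, no feedback
w[0, 2], w[2, 0] = 1.0, s                        # output anti-aligns / aligns
w[1, 3], w[3, 1] = s, 1.0
np.fill_diagonal(w, -w.sum(axis=1))

# steady state: p @ w = 0 together with the normalization sum(p) = 1
M = np.vstack([w.T, np.ones(4)])
p = np.linalg.lstsq(M, np.array([0.0, 0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

x_jumps = [(0, 2), (2, 0), (1, 3), (3, 1)]       # transitions that change x
l_x = -sum(p[i] * w[i, j] * np.log(p[i] / p[j]) for i, j in x_jumps)
sigma_x = sum((p[i] * w[i, j] - p[j] * w[j, i]) * np.log(w[i, j] / w[j, i])
              for i, j in [(0, 2), (1, 3)])
print(l_x, u * (s - 1) / (s + 2 * u + 1) * np.log((u + s) / (u + 1)))  # equal
print(l_x / sigma_x, np.log((u + s) / (u + 1)) / np.log(s))            # eta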
For models with feedback (F and F ˜ ) the learning rate is harder to interpret since the input no longer changes independently of the output. Formally we can still calculate the quantity in Equation (A27) as
l_x = \frac{y(s-\alpha)}{\alpha + s + y + 1} \log\frac{y+1}{\alpha+1},
and
\sigma_x = \frac{y(s-\alpha)}{\alpha + s + y + 1} \log s.
The informational efficiency is:
\eta = \log\frac{y+1}{\alpha+1} \Big/ \log s,
which is bounded by 1 only if s \geq (y+1)/(\alpha+1) (see Figure A3).
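Since the logarithm is monotonic, for s > 1 the condition \eta \leq 1 is indeed equivalent to s \geq (y+1)/(\alpha+1); a trivial numerical restatement (all parameter values are arbitrary illustrations):

import numpy as np

s = 2.0
for y, alpha in [(3.0, 1.5), (0.5, 1.0), (5.0, 1.0)]:    # illustrative values
    eta = np.log((y + 1) / (alpha + 1)) / np.log(s)
    assert (eta <= 1) == (s >= (y + 1) / (alpha + 1))    # the bound condition
    print(y, alpha, eta)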

References

  1. Bialek, W. Biophysics; Princeton University Press: Princeton, NJ, USA, 2012. [Google Scholar]
  2. Alon, U. An Introduction to Systems Biology: Design Principles of Biological Circuits; Chapman & Hall: Boca Raton, FL, USA, 2006. [Google Scholar]
  3. Phillips, R.; Kondev, J.; Theriot, J.; Garcia, H. Physical Biology of the Cell; Garland Science: New York, NY, USA, 2012. [Google Scholar]
  4. Hopfield, J. Kinetic proofreading: A new mechanism for reducing errors in biosynthetic processes requiring high specificity. Proc. Natl. Acad. Sci. USA 1974, 71, 4135–4139. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Ninio, J. Kinetic amplification of enzyme discrimination. Biochimie 1975, 57, 587–595. [Google Scholar] [CrossRef]
  6. McKeithan, T.W. Kinetic proofreading in T-cell receptor signal transduction. Proc. Natl. Acad. Sci. USA 1995, 92, 5042–5046. [Google Scholar] [CrossRef] [Green Version]
  7. Tostevin, F.; Howard, M. A stochastic model of Min oscillations in Escherichia coli and Min protein segregation during cell division. Phys. Biol. 2006, 3, 1. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Tostevin, F.; Howard, M. Modeling the Establishment of {PAR} Protein Polarity in the One-Cell C. elegans Embryo. Biophys. J. 2008, 95, 4512–4522. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. François, P.; Hakim, V. Design of genetic networks with specified functions by evolution in silico. Proc. Natl. Acad. Sci. USA 2004, 101, 580–584. [Google Scholar] [CrossRef] [Green Version]
  10. François, P.; Hakim, V.; Siggia, E.D. Deriving structure from evolution: Metazoan segmentation. Mol. Syst. Biol. 2007, 3, 154. [Google Scholar] [CrossRef]
  11. Saunders, T.E.; Howard, M. Morphogen profiles can be optimized to buffer against noise. Phys. Rev. E 2009, 80, 041902. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Tkacik, G.; Callan, C.G.; Bialek, W. Information flow and optimization in transcriptional regulation. Proc. Natl. Acad. Sci. USA 2008, 105, 12265–12270. [Google Scholar] [CrossRef] [Green Version]
  13. Mehta, P.; Goyal, S.; Long, T.; Bassler, B.L.; Wingreen, N.S. Information processing and signal integration in bacterial quorum sensing. Mol. Syst. Biol. 2009, 5, 325. [Google Scholar] [CrossRef]
  14. Bintu, L.; Buchler, N.E.; Garcia, H.G.; Gerland, U.; Hwa, T.; Kondev, J.; Phillips, R. Transcriptional regulation by the numbers: Models. Curr. Opin. Genet. Dev. 2005, 15, 116–124. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Bintu, L.; Buchler, N.E.; Garcia, H.G.; Gerland, U.; Hwa, T.; Kondev, J.; Kuhlman, T.; Phillips, R. Transcriptional regulation by the numbers: Applications. Curr. Opin. Genet. Dev. 2005, 15, 125–135. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Garcia, H.G.; Phillips, R. Quantitative dissection of the simple repression input-output function. Proc. Natl. Acad. Sci. USA 2011, 108, 12173–12178. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Kuhlman, T.; Zhang, Z.; Saier, M.H.; Hwa, T. Combinatorial transcriptional control of the lactose operon of Escherichia coli. Proc. Natl. Acad. Sci. USA 2007, 104, 6043–6048. [Google Scholar] [CrossRef] [Green Version]
  18. Dubuis, J.O.; Tkacik, G.; Wieschaus, E.F.; Gregor, T.; Bialek, W. Positional information, in bits. Proc. Natl. Acad. Sci. USA 2013, 110, 16301–16308. [Google Scholar] [CrossRef] [Green Version]
  19. Tostevin, F.; ten Wolde, P.R. Mutual Information between Input and Output Trajectories of Biochemical Networks. Phys. Rev. Lett. 2009, 102, 218101. [Google Scholar] [CrossRef] [Green Version]
  20. Tostevin, F.; ten Wolde, P.R. Mutual information in time-varying biochemical systems. Phys. Rev. E 2010, 81, 061917. [Google Scholar] [CrossRef] [Green Version]
  21. de Ronde, W.H.; Tostevin, F.; ten Wolde, P.R. Effect of feedback on the fidelity of information transmission of time-varying signals. Phys. Rev. E 2010, 82, 031914. [Google Scholar] [CrossRef] [Green Version]
  22. Savageau, M. Design of molecular control mechanisms and the demand for gene expression. Proc. Natl. Acad. Sci. USA 1977, 74, 5647–5651. [Google Scholar] [CrossRef] [Green Version]
  23. Scott, M.; Gunderson, C.W.; Mateescu, E.M.; Zhang, Z.; Hwa, T. Interdependence of cell growth and gene expression: Origins and consequences. Science 2010, 330, 1099–1102. [Google Scholar] [CrossRef]
  24. Aquino, G.; Tweedy, L.; Heinrich, D.; Endres, R.G. Memory improves precision of cell sensing in fluctuating environments. Sci. Rep. 2014, 4, 5688. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Vergassola, M.; Villermaux, E.; Shraiman, B.I. ‘Infotaxis’ as a strategy for searching without gradients. Nature 2007, 445, 406–409. [Google Scholar] [CrossRef] [PubMed]
  26. Celani, A.; Vergassola, M. Bacterial strategies for chemotaxis response. Proc. Natl. Acad. Sci. USA 2010, 107, 1391–1396. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Siggia, E.D.; Vergassola, M. Decisions on the fly in cellular sensory systems. Proc. Natl. Acad. Sci. USA 2013, 110, E3704–E3712. [Google Scholar] [CrossRef] [Green Version]
  28. Cheong, R.; Rhee, A.; Wang, C.J.; Nemenman, I.; Levchenko, A. Information Transduction Capacity of Noisy Biochemical Signaling. Science 2011, 334, 354–358. [Google Scholar] [CrossRef] [Green Version]
  29. Lan, G.; Sartori, P.; Neumann, S.; Sourjik, V.; Tu, Y. The energy-speed-accuracy trade-off in sensory adaptation. Nat. Phys. 2012, 8, 422–428. [Google Scholar] [CrossRef]
  30. Mehta, P.; Schwab, D.J. Energetic costs of cellular computation. Proc. Natl. Acad. Sci. USA 2012, 109, 17978–17982. [Google Scholar] [CrossRef] [Green Version]
  31. Cao, Y.; Wang, H.; Ouyang, Q.; Tu, Y. The free-energy cost of accurate biochemical oscillations. Nat. Phys. 2015, 11, 772–778. [Google Scholar] [CrossRef]
  32. Milo, R.; Phillips, R. Cell Biology by the Numbers; Garland Science: New York, NY, USA, 2015. [Google Scholar]
  33. Moran, U.; Phillips, R.; Milo, R. SnapShot: Key numbers in biology. Cell 2010, 141, 1262. [Google Scholar] [CrossRef] [Green Version]
  34. Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. 2012, 75, 126001. [Google Scholar] [CrossRef] [Green Version]
  35. Still, S.; Sivak, D.A.; Bell, A.J.; Crooks, G.E. Thermodynamics of Prediction. Phys. Rev. Lett. 2012, 109, 120604. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Ouldridge, T.E.; Govern, C.C.; ten Wolde, P.R. Thermodynamics of Computational Copying in Biochemical Systems. Phys. Rev. X 2017, 7, 021004. [Google Scholar] [CrossRef] [Green Version]
  37. ten Wolde, P.R.; Becker, N.B.; Ouldridge, T.E.; Mugler, A. Fundamental Limits to Cellular Sensing. J. Stat. Phys. 2016, 162, 1395–1424. [Google Scholar] [CrossRef] [Green Version]
  38. Sagawa, T.; Ito, S. Maxwell’s demon in biochemical signal transduction with feedback loop. Nat. Commun. 2015, 6, 7498. [Google Scholar] [CrossRef] [Green Version]
  39. Barato, A.C.; Hartich, D.; Seifert, U. Information-theoretic versus thermodynamic entropy production in autonomous sensory networks. Phys. Rev. E 2013, 87, 042104. [Google Scholar] [CrossRef] [Green Version]
  40. Barato, A.C.; Hartich, D.; Seifert, U. Efficiency of cellular information processing. New J. Phys. 2014, 16, 103024. [Google Scholar] [CrossRef] [Green Version]
  41. Bo, S.; Giudice, M.D.; Celani, A. Thermodynamic limits to information harvesting by sensory systems. J. Stat. Mech. Theory Exp. 2015, 2015, P01014. [Google Scholar] [CrossRef] [Green Version]
  42. Govern, C.C.; ten Wolde, P.R. Energy Dissipation and Noise Correlations in Biochemical Sensing. Phys. Rev. Lett. 2014, 113, 258102. [Google Scholar] [CrossRef]
  43. Barato, A.C.; Seifert, U. Thermodynamic Uncertainty Relation for Biomolecular Processes. Phys. Rev. Lett. 2015, 114, 158101. [Google Scholar] [CrossRef] [Green Version]
  44. Brittain, R.A.; Jones, N.S.; Ouldridge, T.E. What we learn from the learning rate. J. Stat. Mech. Theory Exp. 2017, 6, 063502. [Google Scholar] [CrossRef] [Green Version]
  45. Goldt, S.; Seifert, U. Stochastic Thermodynamics of Learning. Phys. Rev. Lett. 2017, 118, 010601. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Parrondo, J.M.R.; Horowitz, J.M.; Sagawa, T. Thermodynamics of information. Nat. Phys. 2015, 11, 131–139. [Google Scholar] [CrossRef]
  47. Becker, N.B.; Mugler, A.; ten Wolde, P.R. Prediction and Dissipation in Biochemical Sensing. arXiv 2013, arXiv:1312.5625. Available online: http://arxiv.org/abs/1312.5625 (accessed on 9 December 2019).
  48. Horowitz, J.M.; Esposito, M. Thermodynamics with Continuous Information Flow. Phys. Rev. X 2014, 4, 031015. [Google Scholar] [CrossRef] [Green Version]
  49. Allahverdyan, A.E.; Janzing, D.; Mahler, G. Thermodynamic efficiency of information and heat flow. J. Stat. Mech. Theory Exp. 2009, 2009, P09011. [Google Scholar] [CrossRef] [Green Version]
  50. Sartori, P.; Granger, L.; Lee, C.F.; Horowitz, J.M. Thermodynamic costs of information processing in sensory adaptation. PLoS Comput. Biol. 2014, 10, e1003974. [Google Scholar] [CrossRef] [Green Version]
  51. Hartich, D.; Barato, A.C.; Seifert, U. Sensory capacity: An information theoretical measure of the performance of a sensor. Phys. Rev. E 2016, 93, 022116. [Google Scholar] [CrossRef] [Green Version]
  52. Falasco, G.; Rao, R.; Esposito, M. Information Thermodynamics of Turing Patterns. Phys. Rev. Lett. 2018, 121, 108301. [Google Scholar] [CrossRef] [Green Version]
  53. Tkačik, G.; Walczak, A.M. Information transmission in genetic regulatory networks: A review. J. Phys. Condens. Matter Inst. Phys. J. 2011, 23, 153102. [Google Scholar] [CrossRef] [Green Version]
  54. Tkačik, G.; Walczak, A.M.; Bialek, W. Optimizing information flow in small genetic networks. Phys. Rev. E 2009, 80, 031920. [Google Scholar] [CrossRef] [Green Version]
  55. Walczak, A.M.; Tkačik, G.; Bialek, W. Optimizing information flow in small genetic networks. II. Feed-forward interactions. Phys. Rev. E 2010, 81, 041905. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Tkačik, G.; Walczak, A.M.; Bialek, W. Optimizing information flow in small genetic networks. III. A self-interacting gene. Phys. Rev. E 2012, 85, 041903. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  57. Mugler, A.; Walczak, A.; Wiggins, C. Spectral solutions to stochastic models of gene expression with bursts and regulation. Phys. Rev. E 2009, 80, 041921. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Rieckh, G.; Tkačik, G. Noise and Information Transmission in Promoters with Multiple Internal States. Biophys. J. 2014, 106, 1194–1204. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Sokolowski, T.R.; Tkačik, G. Optimizing information flow in small genetic networks. IV. Spatial coupling. Phys. Rev. E 2015, 91, 062710. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. de Ronde, W.H.; Tostevin, F.; ten Wolde, P.R. Feed-forward loops and diamond motifs lead to tunable transmission of information in the frequency domain. Phys. Rev. E 2012, 86, 021913. [Google Scholar] [CrossRef] [PubMed]
  61. Gregor, T.; Wieschaus, E.F.; McGregor, A.P.; Bialek, W.; Tank, D.W. Stability and nuclear dynamics of the Bicoid morphogen gradient. Cell 2007, 130, 141–152. [Google Scholar] [CrossRef] [Green Version]
  62. Gregor, T.; Tank, D.W.; Wieschaus, E.F.; Bialek, W. Probing the limits to positional information. Cell 2007, 130, 153–164. [Google Scholar] [CrossRef] [Green Version]
  63. Pahle, J.; Green, A.K.; Dixon, C.J.; Kummer, U. Information transfer in signaling pathways: A study using coupled simulated and experimental data. BMC Bioinform. 2008, 9, 139. [Google Scholar] [CrossRef] [Green Version]
  64. Selimkhanov, J.; Taylor, B.; Yao, J.; Pilko, A.; Albeck, J.; Hoffmann, A.; Tsimring, L.; Wollman, R. Accurate information transmission through dynamic biochemical signaling networks. Science 2014, 346, 1370–1373. [Google Scholar] [CrossRef] [Green Version]
  65. Mancini, F.; Wiggins, C.H.; Marsili, M.; Walczak, A.M. Time-dependent information transmission in a model regulatory circuit. Phys. Rev. E 2013, 88, 022708. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  66. Mancini, F.; Marsili, M.; Walczak, A.M. Trade-offs in delayed information transmission in biochemical networks. J. Stat. Phys. 2015, 1504, 03637. [Google Scholar] [CrossRef] [Green Version]
  67. Kepler, T.B.; Elston, T.C. Stochasticity in Transcriptional Regulation: Origins, Consequences, and Mathematical Representations. Biophys. J. 2001, 81, 3116–3136. [Google Scholar] [CrossRef] [Green Version]
  68. Raj, A.; Peskin, C.S.; Tranchina, D.; Vargas, D.Y.; Tyagi, S. Stochastic mRNA Synthesis in Mammalian Cells. PLoS Biol. 2006, 4, e309. [Google Scholar] [CrossRef] [PubMed]
  69. Friedman, N.; Cai, L.; Xie, X. Linking Stochastic Dynamics to Population Distribution: An Analytical Framework of Gene Expression. Phys. Rev. Lett. 2006, 97, 168302. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  70. Walczak, A.M.; Sasai, M.; Wolynes, P.G. Self-consistent proteomic field theory of stochastic gene switches. Biophys. J. 2005, 88, 828–850. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  71. Cai, L.; Friedman, N.; Xie, X.S. Stochastic protein expression in individual cells at the single molecule level. Nature 2006, 440, 358–362. [Google Scholar] [CrossRef]
  72. So, L.h.; Ghosh, A.; Zong, C.; Sepúlveda, L.A.; Segev, R.; Golding, I. General properties of the transcriptional time-series in E. Coli. Nat. Genet. 2011, 43, 554–560. [Google Scholar] [CrossRef]
  73. Desponds, J.; Tran, H.; Ferraro, T.; Lucas, T.; Dostatni, N.; Walczak, A.M. Precision of Readout at the hunchback Gene: Analyzing Short Transcription Time Traces in Living Fly Embryos. PLoS Comput. Biol. 2016, 12, e1005256. [Google Scholar] [CrossRef] [Green Version]
  74. Cover, T.; Thomas, J. Elements of Information Theory; John Wiley: New York, NY, USA, 1991. [Google Scholar]
  75. Levine, M.V.; Weinstein, H. AIM for Allostery: Using the Ising Model to Understand Information Processing and Transmission in Allosteric Biomolecular Systems. Entropy 2015, 17, 2895–2918. [Google Scholar] [CrossRef] [Green Version]
  76. Cuendet, M.A.; Weinstein, H.; Levine, M.V. The Allostery Landscape: Quantifying Thermodynamic Couplings in Biomolecular Systems. J. Chem. Theory Comput. 2016. [Google Scholar] [CrossRef] [PubMed]
  77. Crooks, G.E. Nonequilibrium Measurements of Free Energy Differences for Microscopically Reversible Markovian Systems. J. Stat. Phys. 1998, 90, 1481–1487. [Google Scholar] [CrossRef]
  78. Tome, T.; de Oliveira, M.J. Entropy Production in Nonequilibrium Systems at Stationary States. Phys. Rev. Lett. 2012, 108, 020601. [Google Scholar] [CrossRef] [Green Version]
  79. Hornos, J.E.M.; Schultz, D.; Innocentini, G.C.P.; Wang, J.; Walczak, A.M.; Onuchic, J.N.; Wolynes, P.G. Self-regulating gene: An exact solution. Phys. Rev. E 2005, 72, 051907. [Google Scholar] [CrossRef] [Green Version]
  80. Miekisz, J.; Szymanska, P. Gene Expression in Self-repressing System with Multiple Gene Copies. Bull. Math. Biol. 2013, 317–330. [Google Scholar] [CrossRef] [Green Version]
  81. Crisanti, A.; Puglisi, A.; Villamaina, D. Nonequilibrium and information: The role of cross correlations. Phys. Rev. E 2012, 85, 061127. [Google Scholar] [CrossRef] [Green Version]
  82. Puglisi, A.; Pigolotti, S.; Rondoni, L.; Vulpiani, A. Entropy production and coarse graining in Markov processes. J. Stat. Mech. Theory Exp. 2010, 2010, P05015. [Google Scholar] [CrossRef] [Green Version]
  83. Busiello, D.M.; Hidalgo, J.; Maritan, A. Entropy production for coarse-grained dynamics. arXiv 2019, arXiv:1810.01833v2. [Google Scholar] [CrossRef]
  84. Xiong, W.; Ferrell, J.E., Jr. A positive feedback based bistable memory module that governs a cell fate decision. Nature 2003, 426, 460–465. [Google Scholar] [CrossRef]
  85. Tanaka, K.; Augustine, G.J. A Positive Feedback Signal Transduction Loop Determines Timing of Cerebellar Long-Term Depression. Neuron 2008, 59, 608–620. [Google Scholar] [CrossRef] [Green Version]
  86. Guisbert, E.; Herman, C.; Lu, C.Z.; Gross, C.A. A chaperone network controls the heat shock response in E. coli. Genes Dev. 2004, 2812–2821. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  87. Lahav, G.; Rosenfeld, N.; Sigal, A.; Geva-zatorsky, N.; Levine, A.J.; Elowitz, M.B.; Alon, U. Dynamics of the p53-Mdm2 feedback loop in individual cells. Nat. Genet. 2004, 36, 147–150. [Google Scholar] [CrossRef] [PubMed]
  88. Tyson, J.J.; Novák, B. Models in biology: Lessons from modeling regulation of the eukaryotic cell cycle. BMC Biol. 2015, 1–10. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  89. Lucas, T.; Tran, H.; Perez Romero, C.A.; Guillou, A.; Fradin, C.; Coppey, M.; Walczak, A.M.; Dostatni, N. 3 minutes to precisely measure morphogen concentration. PLoS Genet. 2018, 14, e1007676. [Google Scholar] [CrossRef] [Green Version]
  90. Sagawa, T.; Ueda, M. Fluctuation theorem with information exchange: Role of correlations in stochastic thermodynamics. Phys. Rev. Lett. 2012, 109, 1–5. [Google Scholar] [CrossRef] [Green Version]
  91. Sagawa, T.; Ueda, M. Nonequilibrium thermodynamics of feedback control. Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 2012, 85, 1–16. [Google Scholar] [CrossRef] [Green Version]
  92. Raser, J.; O’Shea, E. Control of stochasticity in eukaryotic gene expression. Science 2004, 304, 1811–1814. [Google Scholar] [CrossRef] [Green Version]
  93. Walczak, A.M.; Onuchic, J.N.; Wolynes, P.G. Absolute rate theories of epigenetic stability. Proc. Natl. Acad. Sci. USA 2005, 102, 18926–18931. [Google Scholar] [CrossRef] [Green Version]
  94. Puglisi, A.; Villamaina, D. Irreversible effects of memory. EPL (Europhys. Lett.) 2009, 88, 30004. [Google Scholar] [CrossRef] [Green Version]
Figure 1. A cartoon of the possible states and transitions for both models: without feedback (A), and with feedback (B). Since there are two binary variables there are four states; transition rates are marked next to the respective arrows. Note the symmetry between the “pure” ( ( , ) and ( + , + ) ) states and the “mixed” states ( ( , + ) and ( + , ) ) in both models. (C,D) Representation of a possible time evolution of the system. Two variables flip between active (+) and inactive (−) states with the respective rates. In the model without feedback (C) the output variable depends on the input variable (the output aligns to the input with rate r or anti-aligns with rate s), while the input variable z flips freely between its active and inactive state, regardless of the state of the output. In the model with feedback (D), the rates of flipping of the input depend on the state of the output.
Figure 2. Schematic representation of the system’s relaxation. The entropy dissipation rate, σ ^ ( τ ) , relaxes with time to its steady state value, σ ^ ss . At τ p the system is “kicked out” or reset; the pink area thus represents the total energy dissipated until that time. The information is collected at an earlier readout time τ .
Figure 3. Results of the unconstrained optimization: mutual information for the models without feedback (S and S ˜ ) and with feedback (F and F ˜ ) as a function of the readout time τ . The optimization is done both when the initial distribution is fixed to its steady state value (no tilde) and when it is optimized as well (with tilde).
Figure 4. Results of the optimization problem with constrained steady state dissipation for models without feedback. Optimal mutual information as a function of the readout time, τ , for different constrained steady state dissipation rates, σ ^ ss , for the models S (dashed lines) and S ˜ (solid lines).
Figure 5. Results of the optimization problem with constrained steady state dissipation for all four models. Optimal mutual information as a function of the readout time, τ , for two different constrained steady state dissipation rates, σ ^ ss , for the models S and F (dashed lines), and the models S ˜ and F ˜ (solid lines).
Figure 6. A graphical representation of the optimal circuits without ( S ˜ ) and with ( F ˜ ) feedback for delayed information transmission with optimized non-steady state initial conditions with a constraint on the steady state dissipation σ ^ ss . The exact rate values depend on the value of σ ^ ss and examples are shown in Figure A1 (model S ˜ ) and Figure A2 (model F ˜ ). The depicted circuits are close to equilibrium. The gray arrow indicates a smaller rate than the black arrow. Optimal non-steady state initial states that have the highest probability are shown in red.
Figure 7. Optimal mutual information ( I * ) and optimal parameters μ 0 , u, and s for the S ˜ model without feedback as a function of the average dissipation, Σ avg , for two values of the readout time, τ = 0.5 ((A) panels) and τ = 2 ((B) panels), and three values of the reset time, τ p (different colours of curves). The steady state dissipation, σ ^ ss , was fixed to 0.1 .
Figure 8. (A) Cartoon depicting the relaxation cost (pink area) τ p ( Σ avg − σ ^ ss ) of the system equilibrating from a non-steady state initial state, for which σ ^ ( τ ) > σ ^ ss . (B) The total cost, τ p Σ avg , of the optimal information transmitted as a function of the steady state entropy dissipation rate, τ p σ ^ ss , for models without feedback that start with the steady state distribution, S, and that optimize the initial distribution, S ˜ . Results are shown for two choices of reset τ p and readout τ timescales. For the steady state models τ p Σ avg = τ p σ ^ ss . (C) The information gain, I * − I ss , of the optimized initial condition model ( S ˜ ) compared to the steady state initial condition model (S) and the relaxation cost, τ p ( Σ avg − σ ^ ss ) , as a function of the steady state entropy dissipation rate for the same choices of τ p and τ as in panel (B). (D) Comparison of the optimal delayed information and total dissipative cost as a function of the steady state entropy dissipation rate for all four models: without feedback (S, S ˜ ) and with feedback (F, F ˜ ), with the initial distribution equal to the steady state one (S, F) or optimized over ( S ˜ , F ˜ ). τ = τ p = 0.5 . (E) The information gain and relaxation cost of circuits with optimized initial conditions compared to steady state ones for the models with ( F ˜ ) and without feedback ( S ˜ ). τ = τ p = 0.5 .
Figure 9. Information for model S ˜ (panels (A,C,E)) and model F ˜ (panels (G,I,K)) and Σ avg ( τ p ) for model S ˜ (panels (B,D,F)) and model F ˜ (panels (H,J,L)) of information-optimal circuits with μ 0 = 1 evaluated for different values of the initial condition μ 0 . The circuit parameters are obtained by optimizing information transmission for τ = 0.5 (A,B), τ = 1 (C,D), and τ = 2 (E,F), at fixed σ ^ ss = 0.15 (blue lines), σ ^ ss = 0.35 (magenta lines), and σ ^ ss = 0.75 (green lines). τ p = τ in all plots. For comparison we plot the optimal information of the steady state circuits S and F, respectively, optimized for the same steady state dissipation σ ^ ss and readout delay τ (solid lines). The information always decreases for non-optimal values of μ 0 , but the mean dissipation can be smaller for unexpected initial conditions.
Table 1. Comparison between the four models, S, F, S ˜ , and F ˜ , in terms of optimal mutual information, I opt , and the cost (value of Σ avg calculated with the optimal rates), C.

                  I opt                      Cost
S, F              I ( S ) < I ( F )          C ( S ) = C ( F )
S ˜ , F ˜         I ( S ˜ ) ≈ I ( F ˜ )      C ( S ˜ ) > C ( F ˜ )
