Open Access
*Entropy* **2018**, *20*(8), 617; doi:10.3390/e20080617

Review

Entropy in Cell Biology: Information Thermodynamics of a Binary Code and Szilard Engine Chain Model of Signal Transduction

Department of Discovery Medicine, Pathology Division, Graduate School of Medicine, Kyoto University, Yoshida-Konoe-cho, Sakyo-ku, Kyoto 606-8315, Japan

Received: 24 June 2018 / Accepted: 13 August 2018 / Published: 19 August 2018

## Abstract

A model of signal transduction from the perspective of information thermodynamics has been reported in recent studies, and several important achievements have been obtained. The first achievement is that signal transduction can be modelled as a binary code system, in which two forms of signalling molecules are utilised in individual steps. The second is that the average entropy production rate is consistent during the signal transduction cascade when the signal event number is maximised in the model. The third is that a Szilard engine can serve as a model of a single step in signal transduction. This article reviews these achievements and further introduces a new chain of Szilard engines as a model of a biological reaction cascade (BRC). In conclusion, the presented model provides a way of computing the channel capacity of a BRC.

**Keywords:** biological reaction cascade; binary code system; average entropy production rate; mutual entropy; Szilard engine chain; fluctuation theorem

## 1. Introduction

Information science provides a theoretical framework for understanding cell biology. Various types of information entropy have been defined and applied in biological research. “Single cell entropy” was introduced for the estimation of specified gene and kinase protein expression networks [1,2]. Further, multicellular behaviour was analysed by a mathematical model in which individual cells interact with each other by secretion and sensing [3,4]. Immunological responses against various antigens were quantified using an entropy defined by the selection probability of amino acid residues [5]. The genetic entropy defined by the DNA mutation rate is computable and useful in analysing molecular evolution [6], and the correlation analysis of the mutated gene frequency responsible for cancer pattern development provides useful predictive data for clinical prognosis. Further, the transfer entropy is generalised by the Kullback–Leibler divergence between two probabilistic transition statuses along a time course, and a measurement of the transfer entropy enables the quantification of the information flow between stationary systems evolving in time [7,8]. Mutual entropy was defined on the basis of the correlation analysis between enzymes and metabolites such as ATP [9].

In addition to these recent developments, significant achievements have been reported through the application of information thermodynamics to cell systems that involve a feedback controller; such a system can be an integrative one, in which information and thermodynamic entropy intersect [10,11,12,13,14,15]. Many studies of information-driven work have recently been presented. For example, an information-driven artificial molecular motor device consisting of an enzyme has been reported [11].

The upper limit of the average work $\langle w\rangle$ that can be extracted from a thermodynamic engine depends on the system temperature $T$, the Boltzmann constant $k_B$, the free energy change $\Delta F$ and the mutual entropy $H$ informed by the feedback controller [12,13,14,15]:
$$\langle w\rangle \le -\Delta F+{k}_{B}TH.$$

Inequality (1) implies that free energy and mutual entropy are exchangeable parameters. For the isovolumic and isothermal biological system, Inequality (1) can be simplified to

$$\langle w\rangle \le {k}_{B}TH.$$

As shown below, the work extracted from an ideal Szilard engine is:

$$\langle w\rangle ={k}_{B}TH$$

From the viewpoint of information thermodynamics, the signal transduction in Escherichia coli has been reported [16]. Our earlier works considered the possibility that the mutual entropy may be utilised for exchanging signalling molecules along the biological reaction cascade (BRC) [17,18,19,20]. This review later introduces, in particular, an ideal chain of Szilard engines constituting the BRC model [17].
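As a rough numerical illustration (the temperature and the one-bit measurement are assumptions for the sketch, not values from the reviewed work), Inequalities (2) and (3) can be evaluated directly for a single binary measurement at physiological temperature:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 310.0           # approximate physiological temperature, K (assumed)

# Mutual entropy of one binary measurement (e.g., active/inactive), in nats
H = math.log(2)

# Upper bound on extractable work for an isovolumic, isothermal system,
# Inequality (2): <w> <= k_B * T * H; the ideal Szilard engine attains it.
w_max = k_B * T * H
print(f"{w_max:.3e} J")
```

The result is on the order of 3 × 10⁻²¹ J per bit, which sets the scale of the chemical work discussed in the later sections.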

## 2. A Common BRC Model

Let us consider modelling signal transduction by focusing on aspects that are common to several signal transductions. In BRC, the substrate protein in the reaction may become an enzyme or modulator in the next reaction. The most well-known example of a chain reaction is the chain of phosphorylation of proteins in the mitogen-activated protein kinase (MAPK) cascade [21,22,23,24,25,26] that is shown in (4):

$$\begin{aligned}EGF+EGFR({X}_{1}) &\leftrightarrow EGF+EGFR*({X}_{1}*),\\ EGFR*+Ras({X}_{2}) &\to EGFR*+Ras*({X}_{2}*),\\ Ras*+c\text{-}Raf({X}_{3}) &\to Ras*+c\text{-}Raf*({X}_{3}*),\\ c\text{-}Raf*+MEK({X}_{4}) &\to c\text{-}Raf*+MEK*({X}_{4}*),\\ MEK*+ERK({X}_{5}) &\to MEK*+ERK*({X}_{5}*).\end{aligned}$$

In this BRC, the epidermal growth factor receptor (EGFR), Ras (a type of GTPase), the proto-oncogene c-Raf, the MAPK/ERK kinase (MEK) and the extracellular signal-regulated kinase (ERK) follow the stimulation with the epidermal growth factor (EGF). Phosphatases are omitted in the above equation. This MAPK cascade is a ubiquitous signalling pathway in various cell types, which allows growth and proliferation. EGFR mutations promote the enhancement of this cascade, which contributes to the tumourigenesis of lung and other cancers [27].

To understand the essence of complicated cell signalling, a BRC model consisting of forward $j^{th}$ and reverse $-j^{th}$ steps can be constructed ($1\le j\le n$):
$$\begin{array}{l}{X}_{1}(R)+L\to {X}_{1}-L*:{1}^{st}\\ {X}_{1}-L*\to {X}_{1}+L:-{1}^{st}\\ {X}_{1}-L*+{X}_{2}+A\to {X}_{1}-L*+{X}_{2}*+D:{2}^{nd}\\ {X}_{2}*+P{h}_{2}\to {X}_{2}+P{h}_{2}+Pi:-{2}^{nd}\\ \cdots \\ {X}_{j}*+{X}_{j+1}+A\to {X}_{j}*+{X}_{j+1}*+D:{j}^{th}\\ {X}_{j+1}*+P{h}_{j+1}\to {X}_{j+1}+P{h}_{j+1}+Pi:-{j}^{th}\\ \cdots \\ {X}_{n-1}*+{X}_{n}+A\to {X}_{n-1}*+{X}_{n}*+D:{\left(n-1\right)}^{th}\\ {X}_{n}*+P{h}_{n}\to {X}_{n}+P{h}_{n}+Pi:-{\left(n-1\right)}^{th}\\ {X}_{n}*+DNA+RNApol+N\text{}ribonucleotide\to \\ {X}_{n}+DNA*+RNApol+{(ribonucleotide)}_{N}:{n}^{th}\end{array}$$

Each step represents an activation of the signalling molecules $X_j$ in the cytoplasm, maintained by a chemical reservoir of a mediator $A$ such as adenosine triphosphate (ATP), and an inactivation of the signalling molecules $X_j*$ by enzymes $Ph_j$. ATP is hydrolysed into adenosine diphosphate (ADP; $D$ in (5)) and inorganic phosphate ($Pi$), which modifies the amino acid residue of $X_j$. $X_j$ and $X_j*$ denote unmodified (inactive) and modified (active) signalling molecules, respectively. In the first reaction, the ligand ($L$), EGF in the MAPK cascade, an extracellular molecule, stimulates $X_1$, which represents a receptor ($R$), EGFR in the MAPK cascade, on the cellular membrane. Afterward, the $X_1\text{-}L*$ complex promotes the modification of $X_2$ in the cytoplasm into $X_2*$, activated by $Pi$ that originated from $A$, and $D$ is produced. Further, $X_2*$ promotes the modification of $X_3$ into $X_3*$. In this manner, the $j^{th}$ signalling molecule, $X_j*$, activates $X_{j+1}$ in the cytoplasm into $X_{j+1}*$. Following the $(n-1)^{th}$ step, the signalling molecule $X_n*$ binds to the promoter region of the DNA and induces the mRNA transcription in the $n^{th}$ step. In addition, during the reverse BRC steps, the inactivation of $X_j*$ into $X_j$ occurs through enzymes that catalyse inactivation or through self-inactivation by $X_j*$, in which $Pi$ is released. Thus, the $j^{th}$ and $-j^{th}$ steps form a cycle reaction consisting of activation and inactivation. Finally, the pre-stimulation steady state of each individual step is recovered. Such reaction chain schemes were previously described by Gaspard et al. and Tsuruyama [17,18,28,29,30].

## 3. Binary Code Model of BRCs

Recent studies showed that the BRC can be interpreted as a binary code system with two forms of signalling molecules, namely an active form ($X_j*$) and an inactive form ($X_j$), in each individual step [18,19,20]. The total signal event number $\Psi$ in a given BRC event can be described using the concentrations of inactive molecules $X_j$ and active molecules $X_j*$ as follows:

$$\Psi =\frac{X!}{\prod_{j=1}^{n}{X}_{j}!\;\prod_{j=1}^{n}{X}_{j}*!}$$

Here $X$ represents the total concentration of signalling molecules. The logarithm of $\Psi$ is approximated according to Stirling’s formula and gives Shannon’s entropy $S$ using the selection probabilities of $X_j$ or $X_j*$, $p_j = X_j/X$ and $p_j* = X_j*/X$ [19]:

$$S=\mathrm{log}\Psi =-X\left(\sum_{j=1}^{n}{p}_{j}\mathrm{log}{p}_{j}+\sum_{j=1}^{n}{p}_{j}*\mathrm{log}{p}_{j}*\right)$$
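The quality of the Stirling approximation behind Equations (6) and (7) can be checked numerically; the copy numbers below are illustrative assumptions, not data from the reviewed work:

```python
import math

# Hypothetical copy numbers of inactive (X_j) and active (X_j*) molecules per step
X_inactive = [800, 700, 900]
X_active = [200, 300, 100]
X = sum(X_inactive) + sum(X_active)  # total number of signalling molecules

# Exact logarithm of the signal event number Psi (Equation (6)), via log-factorials
log_psi = math.lgamma(X + 1) - sum(math.lgamma(x + 1) for x in X_inactive + X_active)

# Shannon-entropy form (Equation (7)) obtained from Stirling's approximation
S = -X * sum((x / X) * math.log(x / X) for x in X_inactive + X_active)

print(log_psi, S)  # the two values agree to within about one percent
```

For molecule numbers in the thousands, the relative discrepancy is already below one percent, which justifies using the entropy form $S$ in place of $\log\Psi$.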

Selecting the $j^{th}$ step component of $S$ in (7) gives:

$${s}_{j}\triangleq -X\left[{p}_{j}\mathrm{log}{p}_{j}+{p}_{j}*\mathrm{log}{p}_{j}*\right]$$

When the signal is transmitted to the $j^{th}$ step, the concentrations of $X_j$ and $X_j*$ fluctuate, and $s_j$ is given using the probability fluctuations $dp_j$ and $dp_j*$:

$${s}_{j}\triangleq -X\left[\left({p}_{j}+d{p}_{j}\right)\mathrm{log}\left({p}_{j}+d{p}_{j}\right)+\left({p}_{j}*+d{p}_{j}*\right)\mathrm{log}\left({p}_{j}*+d{p}_{j}*\right)\right]$$

Because the signal has not yet reached the $(j+1)^{th}$ step, the entropy of the $(j+1)^{th}$ step remains:

$${s}_{j+1}\triangleq -X\left[{p}_{j}\mathrm{log}{p}_{j}+{p}_{j}*\mathrm{log}{p}_{j}*\right]$$

Therefore, the entropy difference $H_j$ generated between the $j^{th}$ and $(j+1)^{th}$ steps is presented as follows [17,18,19]:

$${H}_{j}\triangleq {s}_{j}-{s}_{j+1}=X\Delta {p}_{j}*\mathrm{log}\frac{{p}_{j}}{{p}_{j}*}=\Delta {X}_{j}*\mathrm{log}\frac{{p}_{j}}{{p}_{j}*}$$

with

$${X}_{j}+{X}_{j}*=const.$$

$${p}_{j}+{p}_{j}*=const.$$

and

$$\Delta {X}_{j}+\Delta {X}_{j}*=0,\qquad \Delta {p}_{j}+\Delta {p}_{j}*=0$$

In addition, the entropy difference per single active molecule, $h_j$, is given by Equation (11):

$${h}_{j}\triangleq {H}_{j}/\Delta {X}_{j}*=\mathrm{log}\frac{{p}_{j}}{{p}_{j}*}$$
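A quick numerical check that the finite-difference entropy change matches the first-order expression in Equations (11) and (15); the probabilities and fluctuation size below are illustrative assumptions:

```python
import math

X = 1000.0       # total concentration (assumed)
p = 0.9          # p_j, inactive fraction before the signal (assumed)
p_star = 0.1     # p_j*, active fraction before the signal (assumed)
dp_star = 0.001  # small fluctuation dp_j*; dp_j = -dp_j* by Equation (14)

def s(pj, pj_star):
    # Step entropy, Equation (8)
    return -X * (pj * math.log(pj) + pj_star * math.log(pj_star))

# Entropy difference H_j between the perturbed j-th step (Equation (9))
# and the unperturbed (j+1)-th step (Equation (10))
H_exact = s(p - dp_star, p_star + dp_star) - s(p, p_star)

# First-order expression, Equation (11): H_j = X * dp_j* * log(p_j / p_j*)
H_first_order = X * dp_star * math.log(p / p_star)

print(H_exact, H_first_order)  # agree to first order in dp_j*
```

The agreement degrades only at second order in $dp_j*$, consistent with the linearisation used throughout the review.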

In a previous report [19], the entropy current $C_j$ was introduced as follows:

$${C}_{j}={k}_{B}T\frac{\partial {s}_{j}}{\partial {p}_{j}*}\Delta {p}_{j}*\approx {k}_{B}T\mathrm{log}\frac{{p}_{j}}{{p}_{j}*}\Delta {X}_{j}*$$

Accordingly, the entropy current density $c_j$ per single active molecule is given as:

$${c}_{j}=\frac{{C}_{j}}{\Delta {X}_{j}*}={k}_{B}T\mathrm{log}\frac{{p}_{j}}{{p}_{j}*}$$

## 4. Mutual Entropy in BRCs

For the evaluation of the mutual entropy in Equation (15) according to information theory, let us consider the channel capacity of the $j^{th}$ cycle ($1\le j\le n$) (Figure 1). The natural logarithm is applied in place of the base-2 logarithm to simplify the description. The entropy $h_j{}^0$ is given using $q_j = p_j/(p_j+p_j*)$ and $q_j* = p_j*/(p_j+p_j*)$, as follows:

$${h}_{j}{}^{0}=-{q}_{j}\mathrm{log}{q}_{j}-{q}_{j}*\mathrm{log}{q}_{j}*$$

The conditional entropy $h(j+1|j)$ of the $(j+1)^{th}$ step for the given $j^{th}$ step can be described as a linear function of $q_j*$ using the noise occurrence probability $\phi_j$ and $\xi_j = -\phi_j\mathrm{log}\phi_j-(1-\phi_j)\mathrm{log}(1-\phi_j)$ [13]:

$$h(j+1|j)={\xi}_{j}{q}_{j}*\equiv -\left({\phi}_{j}\mathrm{log}{\phi}_{j}+(1-{\phi}_{j})\mathrm{log}(1-{\phi}_{j})\right){q}_{j}*$$

$h_j{}^0$ and $h(j+1|j)$ are chosen in such a manner as to maximise the mutual entropy, which is defined by $h_j{}^0 - h(j+1|j)$, subject to the constraint $q_j* + q_j = 1$. The channel capacity is defined as the maximum value of the mutual entropy:
$${C}_{j}\triangleq {\left[{h}_{j}{}^{0}-h\left(j+1|j\right)\right]}^{\mathrm{max}}$$

$${h}_{j}\triangleq {h}_{j}{}^{0}-h\left(j+1|j\right)$$

To obtain the maximised mutual entropy, the following function $U_j$ with the undetermined parameter $\lambda$ is maximised using Lagrange’s method of undetermined multipliers as follows [13]:

$${U}_{j}=-{q}_{j}\mathrm{log}{q}_{j}-{q}_{j}*\mathrm{log}{q}_{j}*-{\xi}_{j}{q}_{j}*+\lambda \left[{q}_{j}+{q}_{j}*\right]$$

Then

$$\frac{\partial}{\partial {q}_{j}}{U}_{j}=-\mathrm{log}{q}_{j}-1+\lambda $$

$$\frac{\partial}{\partial {q}_{j}*}{U}_{j}=-\mathrm{log}{q}_{j}*-1+\lambda -{\xi}_{j}$$

Setting the right-hand side of Equations (23) and (24) to zero, and eliminating λ, we have:

$$\mathrm{log}\frac{{q}_{j}}{{q}_{j}*}=\mathrm{log}\frac{{p}_{j}}{{p}_{j}*}={\xi}_{j}$$

From $q_j + q_j* = 1$ and (25), the following can be derived:

$${q}_{j}=\frac{{\varphi}_{j}}{{\varphi}_{j}+1}$$

$${q}_{j}*=\frac{1}{{\varphi}_{j}+1}$$

with

$${\varphi}_{j}=\mathrm{exp}\left({\xi}_{j}\right)$$

As a result, the channel capacity of the $j^{th}$ step, $C_j$, is given using (21) and (26)–(28) as the maximum value of the mutual entropy:

$${C}_{j}\triangleq {h}_{j}{}^{\mathrm{max}}=\left(-\frac{{\varphi}_{j}}{{\varphi}_{j}+1}\mathrm{log}\frac{{\varphi}_{j}}{{\varphi}_{j}+1}-\frac{1}{{\varphi}_{j}+1}\mathrm{log}\frac{1}{{\varphi}_{j}+1}-{\xi}_{j}\frac{1}{{\varphi}_{j}+1}\right)=-\mathrm{log}\frac{{\varphi}_{j}}{{\varphi}_{j}+1}$$

The mutual entropy of the reverse signal transduction is also given by the entropy $h_{-j}{}^{0}=h_{j}{}^{0}=-q_j\mathrm{log}q_j-q_j*\mathrm{log}q_j*$ and the mutual entropy $h_{-j}{}^{0}-h(-j-1|-j)$ (Figure 1). The following function $U_{-j}$ for the reverse transduction, with the undetermined parameter $\lambda'$, is maximised as follows:

$${U}_{-j}=-{q}_{j}\mathrm{log}{q}_{j}-{q}_{j}*\mathrm{log}{q}_{j}*-{\xi}_{-j}{q}_{j}+{\lambda}^{\prime}\left[{q}_{j}+{q}_{j}*\right]$$

In the above, $\xi_{-j} = -\phi_{-j}\mathrm{log}\phi_{-j}-(1-\phi_{-j})\mathrm{log}(1-\phi_{-j})$, and $\phi_{-j}$ denotes the noise occurrence probability in the reverse cascade. Then:
$$\frac{\partial}{\partial {q}_{j}}{U}_{-j}=-\mathrm{log}{q}_{j}-1+{\lambda}^{\prime}-{\xi}_{-j}$$

$$\frac{\partial}{\partial {q}_{j}*}{U}_{-j}=-\mathrm{log}{q}_{j}*-1+{\lambda}^{\prime}$$

Setting the right-hand side of Equations (31) and (32) to zero, and eliminating λ′, we have:

$$\mathrm{log}\frac{{q}_{j}}{{q}_{j}*}=\mathrm{log}\frac{{p}_{j}}{{p}_{j}*}=-{\xi}_{-j}={\xi}_{j}$$

Accordingly, the channel capacity $C_{-j}$ is given by:

$$\begin{array}{l}{C}_{-j}\triangleq {h}_{-j}{}^{\mathrm{max}}\triangleq \left(-\frac{{\varphi}_{j}}{{\varphi}_{j}+1}\mathrm{log}\frac{{\varphi}_{j}}{{\varphi}_{j}+1}-\frac{1}{{\varphi}_{j}+1}\mathrm{log}\frac{1}{{\varphi}_{j}+1}-{\xi}_{-j}\frac{{\varphi}_{j}}{{\varphi}_{j}+1}\right)\\ =\left(-\frac{{\varphi}_{j}}{{\varphi}_{j}+1}\mathrm{log}\frac{{\varphi}_{j}}{{\varphi}_{j}+1}-\frac{1}{{\varphi}_{j}+1}\mathrm{log}\frac{1}{{\varphi}_{j}+1}+\mathrm{log}{\varphi}_{j}\frac{{\varphi}_{j}}{{\varphi}_{j}+1}\right)\\ =-\mathrm{log}\frac{1}{{\varphi}_{j}+1}\end{array}$$

The channel capacity of the $j^{th}$ cycle step is defined and calculated as follows:
$${h}_{j}={C}_{-j}-{C}_{j}=\mathrm{log}{\varphi}_{j}={\xi}_{j}=\mathrm{log}\frac{{p}_{j}}{{p}_{j}*}$$

Thus, we obtain the mutual entropy as the entropy difference in Equation (15).
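The chain of results in Equations (26)–(35) can be verified numerically; the noise occurrence probability below is an illustrative assumption, not a value from the reviewed work:

```python
import math

phi_noise = 0.1  # illustrative noise occurrence probability phi_j (assumed)

# Binary noise entropy xi_j (natural logarithm, as in the text)
xi = -phi_noise * math.log(phi_noise) - (1 - phi_noise) * math.log(1 - phi_noise)

varphi = math.exp(xi)             # Equation (28)
q = varphi / (varphi + 1)         # Equation (26), optimal probability of X_j
q_star = 1 / (varphi + 1)         # Equation (27), optimal probability of X_j*

C_fwd = -math.log(varphi / (varphi + 1))  # Equation (29), forward channel capacity
C_rev = -math.log(1 / (varphi + 1))       # Equation (34), reverse channel capacity

# Equation (35): the difference recovers log(varphi) = xi_j = log(p_j / p_j*)
h = C_rev - C_fwd
print(h, xi)  # the two printed values coincide
```

This confirms that, at the optimum, the per-molecule mutual entropy of the cycle equals the noise entropy $\xi_j$, i.e., $\log(p_j/p_j*)$.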

## 5. Szilard Engine Chain as a BRC Model

Subsequently, let us consider that phosphorylation and dephosphorylation reactions form a cycle reaction that simultaneously activates the next cycle reaction in a BRC. The cycle reaction of an individual step can be modelled as a Szilard engine, which may serve as a model of the conversion system [17]. The Szilard engine was conceived by Leo Szilard in considering Maxwell’s demon paradox [31,32]. In the engine model, Maxwell’s demon, which is a feedback controller, utilises the position information of a single gas particle in a box that is in contact with a heat bath. As the initial state, a boundary is inserted at the middle of the box such that the controller can determine whether the single gas particle is in the left or the right half. The information gained by the controller is equal to one bit (i.e., left or right). When the particle is found on the left, the boundary is quasi-statically moved to the right to recover the full volume of the box, and symmetrically when it is found on the right. In both cases, the particle isothermally expands with the movement of the boundary back to the original full volume. The extracted work is equal to $k_BT\mathrm{ln}2$. This process is equivalent to a system in which the feedback controller transforms the gained information into actual expansion work. Such a feedback controller system has been realised in actual experimental studies [33,34]. Thereby, let us consider that the feedback controller is informed whether the signalling molecule is of the active or inactive type, in place of measuring the particle position.

As reported previously [17,19], the BRC can be divided for modelling into $n$ hypothetical compartment fields corresponding to the individual $j^{th}$ steps ($1\le j\le n$), each of which corresponds to a single Szilard engine. The diffusion rate of the signalling molecules is sufficiently low because of their high molecular weight, and they are hypothesised to be localised in the compartment fields. Each field contains all $X_{j+1}*$ and $X_{j+1}$ species ($1\le j\le n-1$), with concentrations identical to those of $X_{j+1}*^{st}$ and $X_{j+1}{}^{st}$, respectively, at the steady state. The feedback controller has the potential to recognise the molecule concentrations. Subsequently, the controller selects $X_{j+1}*$ or $X_{j+1}$ for transfer (Figure 2). When the BRC proceeds, the steps are summarised as follows:

- (i) When the signal transduction initiates, the controller measures the changes in the concentrations of the active molecule $X_{j+1}*$ and of $X_{j+1}$ in the $j^{th}$ field.
- (ii) At the $j^{th}$ step in the signalling cascade, the feedback controller introduces $\Delta X_{j+1}*$ of $X_{j+1}*$ into the $(j+1)^{th}$ field from the $j^{th}$ field by opening the forward gate on the boundary between the two fields. Simultaneously, the controller introduces $\Delta X_{j+1}$ of $X_{j+1}$ into the $(j+1)^{th}$ field from the $j^{th}$ field by opening the back gate on the boundary.
- (iii) Subsequently, $X_{j+1}*$ can flow back from the $(j+1)^{th}$ field to the $j^{th}$ field because of the entropy difference (see Equation (13)), and $X_{j+1}$ can also flow back from the $(j+1)^{th}$ field to the $j^{th}$ field because of the concentration gradient.
- (iv) In (iii), $\Delta X_{j+1}*$ and $\Delta X_{j+1}$ can quasi-statically rotate the exchange machinery on the hypothetical partition between the $j^{th}$ and $(j+1)^{th}$ fields, which has the ability to extract chemical work equivalent to $w_{j+1}=k_BTh_{j+1}$.
- (v) As the next step, $w_{j+1}$ is linked to the modification of $X_{j+2}$ into $X_{j+2}*$, which in turn causes the concentration difference of $X_{j+2}*$ introduced by the feedback controller from the $(j+1)^{th}$ field to the $(j+2)^{th}$ field. The next step proceeds as aforementioned in (ii) to (iv).

Accordingly, replacing the suffix $j+1$ by $j$ for simplification, the chemical work $w_j$ extracted from the $j^{th}$ Szilard engine is given, using the mutual entropy $h_j$ that informs the controller whether the signalling molecules increase or decrease, according to Equations (15) and (35) [12,13,14,18,19]:

$${w}_{j}={k}_{B}T{h}_{j}\Delta {X}_{j}*={k}_{B}T\Delta {X}_{j}*\mathrm{log}\frac{{p}_{j}}{{p}_{j}*}$$
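Equation (36) can be evaluated directly; the selection probabilities and molecule number below are illustrative assumptions, not measurements from the reviewed work:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0           # temperature, K (assumed)

p_inactive = 0.9        # selection probability p_j of the inactive form (assumed)
p_active = 0.1          # selection probability p_j* of the active form (assumed)
delta_X_active = 100.0  # molecules newly activated at this step, delta X_j* (assumed)

# Mutual entropy per single active molecule, Equation (15): h_j = log(p_j / p_j*)
h_j = math.log(p_inactive / p_active)

# Chemical work extracted from the j-th Szilard engine, Equation (36)
w_j = k_B * T * delta_X_active * h_j
print(f"h_j = {h_j:.3f} nats, w_j = {w_j:.3e} J")
```

Per activated molecule the work is a few $k_BT$, so the total extracted work scales linearly with $\Delta X_j*$.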

## 6. Conservation of the Average Entropy Production

Next, let us review the optimised coding that maximises the signal event number for a given duration in this binary coding model of signal transduction in a nonequilibrium steady system. First, the duration of signal transduction is defined in consideration of the signal orientation (i.e., forward $\tau_j$ and backward $\tau_{-j}$). Positive and negative values are assigned to $\tau_j$ and $\tau_{-j}$ to distinguish the signal direction. $\tau_j$ represents the duration of the tentative increase in the active molecule $X_j*$, whilst $\tau_{-j}$ represents the duration of the recovery to the initial state. The step cycle duration is represented by $\tau_j-\tau_{-j}$.

The average entropy productions $\zeta_j$ and $\zeta_{-j}$ during the signal transduction are defined over $\tau_j-\tau_{-j}$, and the average entropy production rate (AEPR) is defined using a bracket $\langle\;\rangle$ as:

$$\langle {\zeta}_{j}\rangle \triangleq \frac{1}{{\tau}_{j}-{\tau}_{-j}}\int_{0}^{{\tau}_{j}-{\tau}_{-j}}{\zeta}_{j}\left({s}_{j}\right)d{s}_{j}$$

$$\langle {\zeta}_{-j}\rangle \triangleq \frac{1}{\left|{\tau}_{j}-{\tau}_{-j}\right|}\int_{0}^{\left|{\tau}_{j}-{\tau}_{-j}\right|}{\zeta}_{-j}\left({s}_{j}\right)d{s}_{j}$$

where $s_j$ is an arbitrary parameter representing the progression of a reaction event [35]. The transitional probability $p(j+1|j)$ is the probability of the $(j+1)^{th}$ step given the $j^{th}$ step during $\tau_j$, and $p(j|j+1)$ is the transitional probability of the $j^{th}$ step given the $(j+1)^{th}$ step during $\tau_{-j}$. The AEPR $\langle\zeta_j\rangle$ during the signal transduction from the $j^{th}$ to the $(j+1)^{th}$ field is given according to the fluctuation theorem (FT) at the steady state:
$$\underset{{\tau}_{j}-{\tau}_{-j}\to \infty}{\mathrm{lim}}\frac{1}{{\tau}_{j}-{\tau}_{-j}}\mathrm{log}\frac{p\left(j+1|j\right)}{p\left(j|j+1\right)}=\langle {\zeta}_{j}\rangle $$

The AEPR $\langle\zeta_{-j}\rangle$ from the $(j+1)^{th}$ to the $j^{th}$ field is given by:
$$\underset{\left|{\tau}_{-j}-{\tau}_{j}\right|\to \infty}{\mathrm{lim}}\frac{1}{\left|{\tau}_{-j}-{\tau}_{j}\right|}\mathrm{log}\frac{p\left(j|j+1\right)}{p\left(j+1|j\right)}=\langle {\zeta}_{-j}\rangle $$

The following equation is given using the signal current density $c_j$ in (17) [19,35]:
$$\underset{{\tau}_{j}-{\tau}_{-j}\to \infty}{\mathrm{lim}}\frac{1}{{\tau}_{j}-{\tau}_{-j}}\mathrm{log}\frac{p\left(j+1|j\right)}{p\left(j|j+1\right)}=\frac{{c}_{j}}{{k}_{B}T\left({\tau}_{j}-{\tau}_{-j}\right)}\Delta {X}_{j}*$$

Substituting the right side of Equation (17) into the right side of (41), an important result is given [19]:

$$\underset{{\tau}_{j}-{\tau}_{-j}\to \infty}{\mathrm{lim}}\frac{1}{{\tau}_{j}-{\tau}_{-j}}\mathrm{log}\frac{p\left(j+1|j\right)}{p\left(j|j+1\right)}=\underset{{\tau}_{j}-{\tau}_{-j}\to \infty}{\mathrm{lim}}\frac{1}{{\tau}_{j}-{\tau}_{-j}}\mathrm{log}\frac{{p}_{j}}{{p}_{j}*}$$

When the signal event number is maximised, the logarithm of the selection probability is described simply using the average entropy production rate $\beta$, independent of the step number, according to previous reports [17,18,19,20]:

$$-\mathrm{log}{p}_{j}=\beta {\tau}_{j}$$

$$\mathrm{log}{p}_{j}*=\beta {\tau}_{-j}$$

This is one type of entropy coding. Substitution of the right sides of Equations (43) and (44) into (42) gives:

$$\underset{{\tau}_{j}-{\tau}_{-j}\to \infty}{\mathrm{lim}}\frac{1}{{\tau}_{j}-{\tau}_{-j}}\mathrm{log}\frac{p\left(j+1|j\right)}{p\left(j|j+1\right)}=\underset{{\tau}_{j}-{\tau}_{-j}\to \infty}{\mathrm{lim}}\beta \frac{-{\tau}_{j}-{\tau}_{-j}}{{\tau}_{j}-{\tau}_{-j}}~-\beta $$

Likewise,

$$\underset{\left|{\tau}_{j}-{\tau}_{-j}\right|\to \infty}{\mathrm{lim}}\frac{1}{\left|{\tau}_{j}-{\tau}_{-j}\right|}\mathrm{log}\frac{p\left(j+1|j\right)}{p\left(j|j+1\right)}=\underset{\left|{\tau}_{j}-{\tau}_{-j}\right|\to \infty}{\mathrm{lim}}\beta \frac{-{\tau}_{j}-{\tau}_{-j}}{{\tau}_{j}-{\tau}_{-j}}~\beta $$

We used $\tau_j\ll \left|\tau_{-j}\right|$, as shown in Figure 3, in (45) and (46), together with a sufficiently long duration of the whole signal transduction according to experimental data [23,36,37]. The dephosphorylation of the signalling molecule $X_j*$ takes a significantly longer time, $\tau_{-j}$. Subsequently, Equations (39), (40), (45) and (46) provide:
$$\beta =-\langle {\zeta}_{j}\rangle =\langle {\zeta}_{-j}\rangle \equiv \langle \zeta \rangle $$

In summary, we obtained the following result from Equations (43)–(47):

$$-\mathrm{log}{p}_{j}=\langle \zeta \rangle {\tau}_{j}$$

$$\mathrm{log}{p}_{j}*=\langle \zeta \rangle {\tau}_{-j}$$
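The entropy-coding relations (48) and (49) tie the code lengths (negative log-probabilities) to the step durations; a minimal numeric sketch, where the AEPR and the durations are assumed values rather than data from the reviewed work:

```python
import math

zeta = 0.5       # assumed AEPR <zeta>, in nats per unit time
tau_fwd = 0.2    # assumed forward duration tau_j (short activation phase)
tau_bwd = -4.0   # assumed backward duration tau_-j (long recovery; negative by convention)

# Equations (48) and (49): the selection probabilities follow from the code lengths
p_inactive = math.exp(-zeta * tau_fwd)  # p_j
p_active = math.exp(zeta * tau_bwd)     # p_j*

# Per-molecule entropy difference, Equation (15)
h_j = math.log(p_inactive / p_active)
print(p_inactive, p_active, h_j)
```

Since $\tau_j$ is small relative to $\left|\tau_{-j}\right|$, $h_j$ here approximates $\langle\zeta\rangle(\tau_j-\tau_{-j})$ up to a term of order $\langle\zeta\rangle\tau_j$, which is the approximation used in Equation (50).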

Equations (48) and (49) imply the integration of information entropy, code length, and the thermodynamic AEPR. In these equations, the step numbers $j$ and $-j$ in $\zeta_j$ and $\zeta_{-j}$ were omitted because $\zeta_j$ and $\zeta_{-j}$ are independent of the step number. Thus, the theoretical basis of the consistency of the average entropy production rate is obtained.

The extracted average chemical work $\langle w_j\rangle$ in Equation (36), from (i) to (iv) in Section 5, is calculated as follows using Equations (15), (36), (48) and (49) [18]:
$$\langle {w}_{j}\rangle ={k}_{B}T{H}_{j}=\underset{}{\overset{}{{\displaystyle \int}}}{k}_{B}T\mathrm{log}\frac{{p}_{j}}{{p}_{j}*}d{X}_{j}*={k}_{B}T\Delta {X}_{j}*\langle \zeta \rangle \left({\tau}_{j}-{\tau}_{-j}\right)$$

The summation of the right side of (50) gives the total work:

$$\langle w\rangle \triangleq {k}_{B}T\sum_{j=1}^{n}\langle \zeta \rangle \left({\tau}_{j}-{\tau}_{-j}\right)\Delta {X}_{j}{}^{*}={k}_{B}TH$$

with

$$H\triangleq \langle \zeta \rangle \sum_{j=1}^{n}\left({\tau}_{j}-{\tau}_{-j}\right)\Delta {X}_{j}*=\sum_{j=1}^{n}{\sigma}_{j}\Delta {X}_{j}*$$

and

$${\sigma}_{j}\triangleq \langle \zeta \rangle \left({\tau}_{j}-{\tau}_{-j}\right).$$

Here, $\sigma_j$ stands for the entropy production during $\tau_j-\tau_{-j}$ at the $j^{th}$ step.

## 7. Conclusions

Signal transduction is an important research topic in life science, but quantitatively evaluating signalling data remains difficult. This review has pointed out to life scientists the possibility of quantitative evaluation of signalling. The current review can be summarised in the following points:

- (i) The BRC can be expressed as a kind of binary code system consisting of two types of signalling molecules: activated and inactivated.
- (ii) Each individual reaction step of the BRC can be thought of as one cycle of a Szilard engine chain, in which the activation/inactivation of signalling molecules repeats.
- (iii) The average entropy production rate is consistent during the BRC.
- (iv) The signal transduction amount can be calculated through the BRC.

The chain of Szilard engines is a useful model to show how signal transduction in one step induces signal transduction in the next step, forming a series of chains. The most important point of this model is that it directly gives the signal transduction amount through the exchange work according to Equation (3). The presently introduced chain illustrates that the feedback controller transfers signalling molecules based on the measurement of the increase and decrease of the signalling molecules. Subsequently, the exchanger molecule on the boundary between the steps can extract work because of the entropy gradient arising from the two types of signalling molecules. In this way, the signal transduction amount can be clearly quantified through the corresponding chemical work.

Herein, let us consider the calculation of the entropy production based on the kinetics of the activation of signalling molecules according to (5). The signalling system is in contact with a chemical bath outside the system that provides ATP. The transitional rate from the $j^{th}$ step to the $(j+1)^{th}$ step, $v_j$, is obtained using the kinetic coefficient $k_j$ for the $j^{th}$ step as follows:

$${v}_{j}={k}_{j}A{X}_{j}*{X}_{j+1}$$

The transitional rate from the $(j+1)^{th}$ step to the $j^{th}$ step, $v_{-j}$, which is equal to the demodification (dephosphorylation) of the backward signal transduction, is given using the kinetic coefficient $k_{-j}$ for the $-j^{th}$ step:

$${v}_{-j}={k}_{-j}P{h}_{j+1}{X}_{j+1}*$$

where $k_j$ and $k_{-j}$ represent the kinetic coefficients. The signal transduction system remains in detailed balance around the steady state, the homeostatic point:

$$p(j|j+1)\,{v}_{-j}=p(j+1|j)\,{v}_{j}$$

Combining Equations (47), (54)–(56), we obtain the following from FT:

$$\begin{array}{l}\underset{{\tau}_{j}-{\tau}_{-j}\to \infty}{\mathrm{lim}}\frac{1}{{\tau}_{j}-{\tau}_{-j}}\mathrm{log}\frac{p(j+1|j)}{p(j|j+1)}=\underset{{\tau}_{j}-{\tau}_{-j}\to \infty}{\mathrm{lim}}\frac{1}{{\tau}_{j}-{\tau}_{-j}}\mathrm{log}\frac{{k}_{j}A{X}_{j}*{X}_{j+1}}{{k}_{-j}P{h}_{j+1}{X}_{j+1}*}\hfill \\ =\underset{{\tau}_{j}-{\tau}_{-j}\to \infty}{\mathrm{lim}}\frac{1}{{\tau}_{j}-{\tau}_{-j}}\left(\mathrm{log}\frac{{k}_{j}A{X}_{j}*}{{k}_{-j}P{h}_{j+1}}+\mathrm{log}\frac{{p}_{j+1}}{{p}_{j+1}*}\right)\hfill \\ \simeq \underset{{\tau}_{j}-{\tau}_{-j}\to \infty}{\mathrm{lim}}\frac{1}{{\tau}_{j}-{\tau}_{-j}}\mathrm{log}\frac{{p}_{j+1}}{{p}_{j+1}*}=-\langle \zeta \rangle \hfill \end{array}$$

The above result contains Equation (42). Therefore, for a sufficiently long duration $\tau_j-\tau_{-j}$:
$$\mathrm{log}\frac{p(j+1|j)}{p(j|j+1)}=\mathrm{log}\frac{{k}_{j}A{X}_{j}*{X}_{j+1}}{{k}_{-j}P{h}_{j+1}{X}_{j+1}*}\simeq -\langle \zeta \rangle \left({\tau}_{j}-{\tau}_{-j}\right)=-{\sigma}_{j}.$$

Using the concentration of the active signalling molecules at the steady state, $X_{j+1}{}^{st}*$, we have:
$${X}_{j+1}*={X}_{j+1}{}^{st}*+\Delta {X}_{j+1}*$$

Substitution of Equation (59) into Equation (58) produces:

$$\mathrm{log}\frac{{k}_{j}A{X}_{j}*{X}_{j+1}}{{k}_{-j}P{h}_{j+1}{X}_{j+1}*}=-{\sigma}_{j}+\mathrm{log}\frac{{k}_{j}A{X}_{j}{}^{st}*{X}^{st}{}_{j+1}}{{k}_{-j}P{h}_{j+1}{X}_{j+1}{}^{st}*}$$

Here, the entropy production $\sigma_j$ in (53) is defined in the $j^{th}$ step:
$$-{\sigma}_{j}={k}_{B}T\mathrm{log}\left(\frac{1+\Delta {X}_{j+1}/{X}^{st}{}_{j+1}}{1+\Delta {X}_{j+1}*/{X}^{st}{}_{j+1}*}\right).$$

In Equation (61), the fluctuation of $X_j*$ is negligible during signal transduction relative to $\Delta X_{j+1}$ and $\Delta X_{j+1}*$ according to experimental data [36]. The sum of the concentrations of $X_{j+1}$ and $X_{j+1}*$ is equal to the total concentration $X_{j+1}{}^{0}$, which is kept constant because the signal transduction rate is significantly greater than the production rate of the signalling molecular proteins. Then:

$${X}_{j+1}+{X}_{j+1}*={X}_{j+1}{}^{0}=const.$$

Equations (54), (55) and (62) give the concentrations at the steady state:

$${X}_{j+1}{}^{st}=\frac{{k}_{-j}P{h}_{j+1}\,p(j|j+1)}{{k}_{-j}P{h}_{j+1}\,p(j|j+1)+{k}_{j}\,p(j+1|j)A}{X}_{j+1}{}^{0}$$

$${X}_{j+1}{}^{st}*=\frac{{k}_{j}\,p(j+1|j)A}{{k}_{-j}P{h}_{j+1}\,p(j|j+1)+{k}_{j}\,p(j+1|j)A}{X}_{j+1}{}^{0}$$
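The steady-state expressions (63) and (64) can be evaluated directly; all kinetic parameters below are illustrative assumptions, not fitted values from the reviewed work:

```python
# Assumed kinetic parameters (illustrative only)
k_fwd = 1.0      # k_j
k_bwd = 0.5      # k_-j
A = 2.0          # mediator (ATP) concentration
Ph = 1.0         # phosphatase concentration Ph_{j+1}
p_fwd = 0.4      # transitional probability p(j+1|j)
p_bwd = 0.6      # transitional probability p(j|j+1)
X_total = 100.0  # total concentration X_{j+1}^0

denom = k_bwd * Ph * p_bwd + k_fwd * p_fwd * A
X_inactive_st = k_bwd * Ph * p_bwd / denom * X_total  # Equation (63)
X_active_st = k_fwd * p_fwd * A / denom * X_total     # Equation (64)

print(X_inactive_st, X_active_st)  # the two sum to X_total
```

By construction, the two steady-state concentrations partition the conserved total $X_{j+1}{}^{0}$ in proportion to the backward and forward fluxes.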

$Ph_{j+1}$ signifies the phosphatase concentration in the $j^{th}$ step. The fluctuation of the transmitted information is described as follows using an integral form of Equation (61):

$$\begin{array}{l}-{\sigma}_{j}=\int_{0}^{{\tau}_{j}-{\tau}_{-j}}\mathrm{log}\left(\frac{1+\Delta {X}_{j+1}/{X}^{st}{}_{j+1}}{1+\Delta {X}_{j+1}*/{X}^{st}{}_{j+1}*}\right)\frac{\Delta {X}_{j+1}*}{\Delta {s}_{j}}d{s}_{j}\hfill \\ =\int_{0}^{{\tau}_{j}-{\tau}_{-j}}\frac{{X}_{j+1}{}^{0}}{{X}_{j+1}{}^{st}*{X}_{j+1}{}^{st}}\frac{\Delta {X}_{j+1}*}{\Delta {s}_{j}}d{s}_{j}=\int_{0}^{{\tau}_{j}-{\tau}_{-j}}\frac{{X}_{j+1}{}^{0}}{{X}_{j+1}{}^{st}*{X}_{j+1}{}^{st}}\frac{d{X}_{j+1}*}{dA}\frac{dA}{d{s}_{j}}d{s}_{j}\hfill \end{array}$$

We used the approximation log (1 + x) ~ x in the logarithmic term in (65). A simple calculation of Equations (63) and (64) gives:

$$\frac{d{X}_{j+1}*}{dA}=\frac{1}{A}\frac{{X}_{j+1}{}^{st}*{X}_{j+1}{}^{st}}{{X}_{j+1}{}^{0}}$$

Then, substitution of Equation (66) into Equation (65) gives:
with
A

$$\begin{array}{l}-{\sigma}_{j}={\displaystyle {\int}_{{A}_{ji}}^{{A}_{jf}}\frac{1}{A}}dA={\left[\mathrm{log}A\right]}_{{A}_{ji}}^{{A}_{jf}}\\ =\mathrm{log}\frac{{A}_{jf}}{{A}_{ji}}=\mathrm{log}\frac{{A}_{ji}-\Delta {A}_{j}}{{A}_{ji}}\approx -\frac{\Delta {A}_{j}}{{A}_{ji}}\end{array}$$

$${A}_{jf}={A}_{ji}-\Delta {A}_{j}$$

_{jf}and A_{ji}signify the local concentration of the mediator ATP at the initial and final state, respectively, at the j^{th}step. ΔA_{j}signifies the concentration change of ATP at the j^{th}step. Thus, the total entropy production σ is simply given as follows:
$$\sigma \triangleq {\displaystyle \sum}_{j=1}^{n}{\sigma}_{j}=-{\displaystyle \sum}_{j=1}^{n}\mathrm{log}\frac{{A}_{jf}}{{A}_{ji}}=-\mathrm{log}\frac{{A}_{1i}-{\displaystyle {\displaystyle \sum}_{j=1}^{n}}\Delta {A}_{j}}{{A}_{1i}}\simeq \frac{{\displaystyle {\displaystyle \sum}_{j=1}^{n}}\Delta {A}_{j}}{A}.$$
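The telescoping of the stepwise sum into a single logarithm, and its first-order approximation, can be verified numerically. The per-step ATP consumptions ΔA_j below are hypothetical values chosen so that ΣΔA_j ≪ A:

```python
import numpy as np

# Hypothetical ATP consumption per step (illustrative values only).
A0 = 100.0                            # initial ATP concentration A = A_{1i}
dA = np.array([0.5, 0.3, 0.7, 0.4])   # ΔA_j consumed at each of n = 4 steps

# Successive initial/final ATP levels: A_{(j+1)i} = A_{jf} = A_{ji} − ΔA_j.
A_i = A0 - np.concatenate(([0.0], np.cumsum(dA[:-1])))
A_f = A_i - dA

# The stepwise sum −Σ log(A_jf / A_ji) telescopes to −log((A − ΣΔA_j)/A).
sigma_sum = -np.sum(np.log(A_f / A_i))
sigma_tel = -np.log((A0 - dA.sum()) / A0)
approx = dA.sum() / A0                # first-order approximation log(1+x) ≈ x

assert abs(sigma_sum - sigma_tel) < 1e-12
assert abs(sigma_tel - approx) < 1e-3
```

The intermediate ATP levels cancel pairwise in the sum of logarithms, which is why only the first and last concentrations survive in the closed form.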

In the above, we used the approximation log(1 + x) ≈ x again and set A_{1i} equal to the initial concentration of ATP, A. Thus, ATP is the mediator of signal transduction. In an actual experiment, rigorously measuring the concentration change of ATP at individual signal steps is difficult because ATP, as a basic metabolite of cell activity, is consumed in a variety of reactions. Alternatively, because the ratio ΔX_{j+1}/ΔX_{j+1}^{st} is negligible during signal transduction according to experimental data [36], we have from Equation (61):
$$-{\sigma}_{j}={\displaystyle {\int}_{0}^{{\tau}_{j}-{\tau}_{-j}}\mathrm{log}\left(1+\Delta {X}_{j+1}*\left({s}_{j}\right)/\Delta {X}_{j+1}{}^{st}\text{}*\right)}d{s}_{j}$$

$$-\langle \zeta \rangle =\frac{-{\sigma}_{j}}{{\tau}_{j}-{\tau}_{-j}}=\frac{1}{{\tau}_{j}-{\tau}_{-j}}{\displaystyle {\int}_{0}^{{\tau}_{j}-{\tau}_{-j}}\mathrm{log}\left(1+\Delta {X}_{j+1}*\left({s}_{j}\right)/\Delta {X}_{j+1}{}^{st}\text{}*\right)}d{s}_{j}$$

As aforementioned, the right side of Equation (71) indicates that the AEPR ⟨ζ⟩ is consistent during the cascade. Accordingly, measurement of the AEPR will provide evidence of its consistency during the signalling cascade; in this manner, rigorous measurement of the concentration change of the active signalling molecule may provide more direct evidence for the presented theory. To date, experimental data have demonstrated that the increase in active signalling molecules follows a similar time course, as shown in Figure 3, suggesting the consistency of the AEPR [36,37].
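As a numerical sketch of Equations (70) and (71), the integral can be evaluated for an assumed relaxation time course of the active form. The exponential form of ΔX_{j+1}*(s) and all parameter values below are hypothetical assumptions, not data from the text:

```python
import numpy as np

# Assumed exponential approach of dX*(s) = ΔX_{j+1}*(s) to its steady-state
# change dX_st = ΔX_{j+1}^{st}*; form and parameters are illustrative.
tau = 5.0        # cycle duration τ_j − τ_{−j} (time units)
dX_st = 1.0      # steady-state change ΔX_{j+1}^{st}*
relax = 1.5      # assumed relaxation time of the step

s = np.linspace(0.0, tau, 10001)
dX = dX_st * (1.0 - np.exp(-s / relax))      # ΔX_{j+1}*(s)

# Integrand of Equation (70), integrated by the trapezoidal rule.
integrand = np.log(1.0 + dX / dX_st)
integral = np.sum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(s))

sigma_j = -integral        # Equation (70): −σ_j equals the integral
zeta = sigma_j / tau       # Equation (71): AEPR ⟨ζ⟩ = σ_j/(τ_j − τ_{−j})
print(sigma_j, zeta)
```

Repeating this computation for several steps with time courses of the same shape yields the same ⟨ζ⟩ whenever σ_j scales with the step duration, which is the consistency property discussed above.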

Further study is required to determine which signal transduction strategy a biological system selects. For example, the cell system may maximise the signal event number during a given duration by applying a non-redundant signalling system; in contrast, the accuracy of signal transduction may be prioritised by applying a redundant signalling system. The strategy chosen for the signalling cascade by the cell system will likely be determined experimentally. The cost-performance tradeoffs of metabolic substances for cellular regulatory functions and information processing can be assessed by evaluating recent experimental data [23,24,25,26]. By measuring the consumption of metabolites, Luo successfully estimated biological information [38]. The relationship between the ATP concentration in cellular tissues and information transmission has been vigorously studied in the analysis of nerve excitement transmission [39], and this review may suggest implications for quantitative information transmission.

The discussion developed herein has some limitations, which we mention here at the end. A detailed balance between modification and demodification is assumed at each step for the application of the FT. Therefore, the current discussion holds only when the distance from detailed balance is not great [17,18,19,20]. This also depends on how we consider the range of application of the FT or the Jarzynski equality. The FT has been applied to study non-equilibrium systems [28,29,40,41], limit cycles [42], molecular machines [43], and biological phenomena [44], including membrane transport [45], molecular motor activity [46], and RNA folding [47]. The adaptation and extension of the current discussion to non-linear phenomena [48,49] and to systems far from steady state or to active matter will be the next theoretical subjects. At the least, however, interpreting the signal cycle as a Szilard engine is an effective idea for thought experiments, and a chain of such engines can serve as an actual BRC.

In conclusion, the information thermodynamics approach described herein provides a framework for the analysis of a signal transduction BRC. This theoretical approach appears suitable for identifying novel active signalling cascades among response cascades in which the AEPR is consistent throughout a given cascade. This review shows that the binary coding system and the Szilard engine chain model may form the theoretical basis for computing the channel capacity of a BRC.

## Funding

This research was supported by a Grant-in-Aid from the Ministry of Education, Culture, Sports, Science, and Technology of Japan (Synergy of Fluctuation and Structure: Quest for Universal Laws in Non-Equilibrium Systems, P2013-201 Grant-in-Aid for Scientific Research on Innovative Areas, MEXT, Japan).

## Acknowledgments

I thank Kenichi Yoshikawa of Doshisha University for his advice.

## Conflicts of Interest

The author declares no conflict of interest.

## References

- Guo, M.; Bao, E.L.; Wagner, M.; Whitsett, J.A.; Xu, Y. Slice: Determining cell differentiation and lineage based on single cell entropy. Nucleic Acids Res.
**2017**, 45, e54. [Google Scholar] [CrossRef] [PubMed] - Cheng, F.; Liu, C.; Shen, B.; Zhao, Z. Investigating cellular network heterogeneity and modularity in cancer: A network entropy and unbalanced motif approach. BMC Syst. Biol.
**2016**, 10, 65. [Google Scholar] [CrossRef] [PubMed] - Maire, T.; Youk, H. Molecular-level tuning of cellular autonomy controls the collective behaviors of cell populations. Cell Syst.
**2015**, 1, 349–360. [Google Scholar] [CrossRef] [PubMed] - Olimpio, E.P.; Dang, Y.; Youk, H. Statistical dynamics of spatial-order formation by communicating cells. iScience
**2018**, 2, 27–40. [Google Scholar] [CrossRef] - Mora, T.; Walczak, A.M.; Bialek, W.; Callan, C.G., Jr. Maximum entropy models for antibody diversity. Proc. Natl. Acad. Sci. USA
**2010**, 107, 5405–5410. [Google Scholar] [CrossRef] [PubMed] - Tugrul, M.; Paixao, T.; Barton, N.H.; Tkacik, G. Dynamics of transcription factor binding site evolution. PLoS Genet.
**2015**, 11, e1005639. [Google Scholar] [CrossRef] [PubMed] - Orlandi, J.G.; Stetter, O.; Soriano, J.; Geisel, T.; Battaglia, D. Transfer entropy reconstruction and labeling of neuronal connections from simulated calcium imaging. PLoS ONE
**2014**, 9, e98842. [Google Scholar] [CrossRef] [PubMed] - Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat.
**1951**, 22, 79–86. [Google Scholar] [CrossRef] - McGrath, T.; Jones, N.S.; Ten Wolde, P.R.; Ouldridge, T.E. Biochemical machines for the interconversion of mutual information and work. Phys. Rev. Lett.
**2017**, 118, 028101. [Google Scholar] [CrossRef] [PubMed] - Crofts, A.R. Life, information, entropy, and time: Vehicles for semantic inheritance. Complexity
**2007**, 13, 14–50. [Google Scholar] [CrossRef] [PubMed] - Seifert, U. Stochastic thermodynamics of single enzymes and molecular motors. Eur. Phys. J. E Soft Matter
**2011**, 34, 1–11. [Google Scholar] [CrossRef] [PubMed] - Ito, S.; Sagawa, T. Information thermodynamics on causal networks. Phys. Rev. Lett.
**2013**, 111, 180603. [Google Scholar] [CrossRef] [PubMed] - Ito, S.; Sagawa, T. Maxwell’s demon in biochemical signal transduction with feedback loop. Nat. Commun.
**2015**, 6, 7498. [Google Scholar] [CrossRef] [PubMed] - Sagawa, T.; Ueda, M. Minimal energy cost for thermodynamic information processing: Measurement and information erasure. Phys. Rev. Lett.
**2009**, 102, 250602. [Google Scholar] [CrossRef] [PubMed] - Sagawa, T.; Ueda, M. Generalized Jarzynski equality under nonequilibrium feedback control. Phys. Rev. Lett.
**2010**, 104, 090602. [Google Scholar] [CrossRef] [PubMed] - Sagawa, T.; Kikuchi, Y.; Inoue, Y.; Takahashi, H.; Muraoka, T.; Kinbara, K.; Ishijima, A.; Fukuoka, H. Single-cell E. coli response to an instantaneously applied chemotactic signal. Biophys. J.
**2014**, 107, 730–739. [Google Scholar] [CrossRef] [PubMed] - Tsuruyama, T. Information thermodynamics of the cell signal transduction as a Szilard engine. Entropy
**2018**, 20, 224. [Google Scholar] [CrossRef] - Tsuruyama, T. The conservation of average entropy production rate in a model of signal transduction: Information thermodynamics based on the fluctuation theorem. Entropy
**2018**, 20, 303. [Google Scholar] [CrossRef] - Tsuruyama, T. Information thermodynamics derives the entropy current of cell signal transduction as a model of a binary coding system. Entropy
**2018**, 20, 145. [Google Scholar] [CrossRef] - Tsuruyama, T. Analysis of Cell Signal Transduction Based on Kullback–Leibler Divergence: Channel Capacity and Conservation of Its Production Rate during Cascade. Entropy
**2018**, 20, 438. [Google Scholar] [CrossRef] - Zumsande, M.; Gross, T. Bifurcations and chaos in the MAPK signaling cascade. J. Theor. Biol.
**2010**, 265, 481–491. [Google Scholar] [CrossRef] [PubMed] - Yoon, J.; Deisboeck, T.S. Investigating differential dynamics of the MAPK signaling cascade using a multi-parametric global sensitivity analysis. PLoS ONE
**2009**, 4, e4560. [Google Scholar] [CrossRef] [PubMed] - Wang, H.; Ubl, J.J.; Stricker, R.; Reiser, G. Thrombin (PAR-1)-induced proliferation in astrocytes via MAPK involves multiple signaling pathways. Am. J. Physiol. Cell Physiol.
**2002**, 283, C1351–C1364. [Google Scholar] [CrossRef] [PubMed] - Qiao, L.; Nachbar, R.B.; Kevrekidis, I.G.; Shvartsman, S.Y. Bistability and oscillations in the Huang–Ferrell model of MAPK signaling. PLoS Comput. Biol.
**2007**, 3, 1819–1826. [Google Scholar] [CrossRef] [PubMed] - Purutçuoğlu, V.; Wit, E. Estimating network kinetics of the MAPK/ERK pathway using biochemical data. Math. Probl. Eng.
**2012**, 2012, 1–34. [Google Scholar] [CrossRef] - Blossey, R.; Bodart, J.F.; Devys, A.; Goudon, T.; Lafitte, P. Signal propagation of the MAPK cascade in Xenopus oocytes: Role of bistability and ultrasensitivity for a mixed problem. J. Math. Biol.
**2012**, 64, 1–39. [Google Scholar] [CrossRef] [PubMed] - Wu, Y.L.; Cheng, Y.; Zhou, X.; Lee, K.H.; Nakagawa, K.; Niho, S.; Tsuji, F.; Linke, R.; Rosell, R.; Corral, J.; et al. Dacomitinib versus gefitinib as first-line treatment for patients with EGFR-mutation-positive non-small-cell lung cancer (ARCHER 1050): A randomised, open-label, phase 3 trial. Lancet Oncol.
**2017**, 18, 1454–1466. [Google Scholar] [CrossRef] - Andrieux, D.; Gaspard, P. Fluctuation theorem and onsager reciprocity relations. J. Chem. Phys.
**2004**, 121, 6167–6174. [Google Scholar] [CrossRef] [PubMed] - Andrieux, D.; Gaspard, P. Temporal disorder and fluctuation theorem in chemical reactions. Phys. Rev. E Stat. Nonlinear Soft Matter Phys.
**2008**, 77, 031137. [Google Scholar] [CrossRef] [PubMed] - Gaspard, P. Fluctuation theorem for nonequilibrium reactions. J. Chem. Phys.
**2004**, 120, 8898–8905. [Google Scholar] [CrossRef] [PubMed] - Szilárd, L. Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen. Zeitschrift für Physik
**1929**, 53, 840–856. [Google Scholar] [CrossRef] - Szilard, L. On the decrease of entropy in a thermodynamic system by the intervention of intelligent beings. Behav. Sci.
**1964**, 9, 301–310. [Google Scholar] [CrossRef] [PubMed] - Schmick, M.; Liu, Q.; Ouyang, Q.; Markus, M. Fluctuation theorem for a single particle in a moving billiard: Experiments and simulations. Phys. Rev. E Stat. Nonlinear Soft Matter Phys.
**2007**, 76, 021115. [Google Scholar] [CrossRef] [PubMed] - Schmick, M.; Markus, M. Fluctuation theorem for a deterministic one-particle system. Phys. Rev. E Stat. Nonlinear Soft Matter Phys.
**2004**, 70, 065101. [Google Scholar] [CrossRef] [PubMed] - Chong, S.H.; Otsuki, M.; Hayakawa, H. Generalized Green–Kubo relation and integral fluctuation theorem for driven dissipative systems without microscopic time reversibility. Phys. Rev. E Stat. Nonlinear Soft Matter Phys.
**2010**, 81, 041130. [Google Scholar] [CrossRef] [PubMed] - Mina, M.; Magi, S.; Jurman, G.; Itoh, M.; Kawaji, H.; Lassmann, T.; Arner, E.; Forrest, A.R.; Carninci, P.; Hayashizaki, Y.; et al. Promoter-level expression clustering identifies time development of transcriptional regulatory cascades initiated by ErbB receptors in breast cancer cells. Sci. Rep.
**2015**, 5, 11999. [Google Scholar] [CrossRef] [PubMed] - Xin, X.; Zhou, L.; Reyes, C.M.; Liu, F.; Dong, L.Q. APPL1 mediates adiponectin-stimulated p38 MAPK activation by scaffolding the TAK1–MKK3–p38 MAPK pathway. Am. J. Physiol.-Endocrinol. Metab.
**2011**, 300, E103–E110. [Google Scholar] [CrossRef] [PubMed] - Luo, L.F. Entropy production in a cell and reversal of entropy flow as an anticancer therapy. Front. Phys. China
**2009**, 8, 122–136. [Google Scholar] [CrossRef] - Tsukada, M.; Ishii, N.; Sato, R. Stochastic automaton models for the temporal pattern discrimination of nerve impulse sequences. Biol. Cybern.
**1976**, 21, 121–130. [Google Scholar] [CrossRef] [PubMed] - Ponmurugan, M. Generalized detailed fluctuation theorem under nonequilibrium feedback control. Phys. Rev. E Stat. Nonlinear Soft Matter Phys.
**2010**, 82, 031129. [Google Scholar] [CrossRef] [PubMed] - Wang, G.M.; Reid, J.C.; Carberry, D.M.; Williams, D.R.; Sevick, E.M.; Evans, D.J. Experimental study of the fluctuation theorem in a nonequilibrium steady state. Phys. Rev. E Stat. Nonlinear Soft Matter Phys.
**2005**, 71, 046142. [Google Scholar] [CrossRef] [PubMed] - Xiao, T.J.; Hou, Z.; Xin, H. Entropy production and fluctuation theorem along a stochastic limit cycle. J. Chem. Phys.
**2008**, 129, 114506. [Google Scholar] [CrossRef] [PubMed] - Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys.
**2012**, 75, 126001. [Google Scholar] [CrossRef] [PubMed] - Paramore, S.; Ayton, G.S.; Voth, G.A. Transient violations of the second law of thermodynamics in protein unfolding examined using synthetic atomic force microscopy and the fluctuation theorem. J. Chem. Phys.
**2007**, 127, 105105. [Google Scholar] [CrossRef] [PubMed] - Berezhkovskii, A.M.; Bezrukov, S.M. Fluctuation theorem for channel-facilitated membrane transport of interacting and noninteracting solutes. J. Phys. Chem. B
**2008**, 112, 6228–6232. [Google Scholar] [CrossRef] [PubMed] - Lacoste, D.; Lau, A.W.; Mallick, K. Fluctuation theorem and large deviation function for a solvable model of a molecular motor. Phys. Rev. E Stat. Nonlinear Soft Matter Phys.
**2008**, 78, 011915. [Google Scholar] [CrossRef] [PubMed] - Porta, M. Fluctuation theorem, nonlinear response, and the regularity of time reversal symmetry. Chaos
**2010**, 20, 023111. [Google Scholar] [CrossRef] [PubMed] - Collin, D.; Ritort, F.; Jarzynski, C.; Smith, S.B.; Tinoco, I., Jr.; Bustamante, C. Verification of the Crooks fluctuation theorem and recovery of RNA folding free energies. Nature
**2005**, 437, 231–234. [Google Scholar] [CrossRef] [PubMed] - Sughiyama, Y.; Abe, S. Fluctuation theorem for the renormalized entropy change in the strongly nonlinear nonequilibrium regime. Phys. Rev. E Stat. Nonlinear Soft Matter Phys.
**2008**, 78, 021101. [Google Scholar] [CrossRef] [PubMed]

**Figure 1.** Schematic of the relationship between the j^{th} step and the (j + 1)^{th} step (left) and between the −j^{th} step and the (−j − 1)^{th} step (right) of a simple discrete channel. The left graph shows signal transduction, whose channel capacity is expressed by C_{j}. The right graph shows reverse signal transduction, whose channel capacity is expressed by C_{−j}. In reverse signal transduction, from the −j^{th} step to the (−j − 1)^{th} step, q_{j} transmits the signal to q_{j−1}, but it may transmit the signal to q_{j−1}* in error.

**Figure 2.** Schematic showing a Szilard engine chain. The feedback controller measures the changes in concentration of signalling molecules. For signal transduction, the controller opens the gate of the hypothetical boundary. The grey circles on the boundary represent the exchanger between ΔX_{j+1} and ΔX_{j+1}*. The j^{th} field recovers to the initial state.

**Figure 3.** A common time course of the j^{th} cycle showing the concentration of X_{j}* during phosphorylation [36,37]. The vertical axis represents the concentration of X_{j}*; the horizontal axis denotes the duration (min or time units). τ_{j} and τ_{−j} denote the durations of the j^{th} step and the reverse −j^{th} step, respectively. The line X_{j}* = X_{j}*^{st} denotes the X_{j}* concentration at the initial steady state before the beginning of the signal event.

© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).