Article

Information Landscape and Flux, Mutual Information Rate Decomposition and Connections to Entropy Production

Qian Zeng 1 and Jin Wang 1,2,*
1 State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Changchun, Jilin 130022, China
2 Department of Chemistry and Physics, State University of New York, Stony Brook, NY 11794, USA
* Author to whom correspondence should be addressed.
Entropy 2017, 19(12), 678; https://doi.org/10.3390/e19120678
Submission received: 29 September 2017 / Revised: 27 November 2017 / Accepted: 6 December 2017 / Published: 11 December 2017
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)

Abstract

We explore the dynamics of two interacting information systems. We show that, for Markovian marginal systems, the driving force for the information dynamics is determined by both the information landscape and the information flux. While the information landscape can be used to construct the driving force describing the equilibrium time-reversible dynamics of the information system, the information flux describes its nonequilibrium time-irreversible behavior. The information flux explicitly breaks detailed balance and is a direct measure of the degree of nonequilibrium, or time-irreversibility. We further demonstrate that the Mutual Information Rate (MIR) between the two subsystems can be decomposed into an equilibrium time-reversible part and a nonequilibrium time-irreversible part. This decomposition of the MIR corresponds explicitly to the information landscape-flux decomposition when the two subsystems behave as Markov chains. Finally, we uncover the intimate relationship between the nonequilibrium thermodynamics, in terms of the entropy production rates, and the time-irreversible part of the mutual information rate. We find that this relationship and the MIR decomposition still hold for the more general stationary and ergodic cases. We demonstrate the above features with two examples of bivariate Markov chains.

1. Introduction

There is growing interest in studying two interacting information systems in the fields of control theory, information theory, communication theory, nonequilibrium physics and biophysics [1,2,3,4,5,6,7,8,9]. Significant progress has been made recently toward understanding information systems in terms of information thermodynamics [10,11,12,13]. However, the identification of the global driving forces for the information system dynamics is still challenging. Here, we aim to fill this gap by quantifying the driving forces for the information system dynamics. Inspired by the recent development of the landscape and flux theory for continuous nonequilibrium systems [14,15,16] and of the Markov chain decomposition dynamics for discrete systems [17,18,19,20,21,22,23], we show that, at least for the underlying marginal Markovian cases, the driving force for the information dynamics is determined by both the information landscape and the information flux. The information landscape can be used to construct the driving force responsible for the equilibrium time-reversible part of the information dynamics. The information flux explicitly breaks detailed balance and provides a quantitative measure of the degree of nonequilibrium, or time-irreversibility; it is responsible for the time-irreversible part of the information dynamics. The Mutual Information Rate (MIR) [24] represents the correlation between two information subsystems. We show that the MIR between the two subsystems can be decomposed into time-reversible and time-irreversible parts. In particular, when the two subsystems act as Markov chains, this decomposition can be expressed in terms of the information landscape-flux decomposition of the Markovian dynamics. An important signature of nonequilibrium is the Entropy Production Rate (EPR) [17,25,26]. We also uncover the intimate relation between the EPRs and the time-irreversible part of the MIR. We demonstrate the above features with two cases of bivariate Markov chains. Furthermore, we show that the decomposition of the MIR and the relationship between the EPRs and the time-irreversible part of the MIR still hold for more general stationary and ergodic cases.

2. Bivariate Markov Chains

Markov chains have often been assumed for the underlying dynamics of a total system in a random environment. When two subsystems jointly form a Markov chain in continuous or discrete time, the resulting chain is called a Bivariate Markov Chain (BMC, a special case of the multivariate Markov chain with two stochastic variables), and the processes of the two subsystems are correspondingly said to be marginal processes or marginal chains. The BMC has been used to model ion channel currents [2], as well as delays and congestion in computer networks [3]. Recently, different BMC models have appeared in nonequilibrium statistical physics for capturing or implementing Maxwell's demon [4,5,6], where one marginal chain in the BMC can be seen as applying feedback control to the other marginal chain. Although the BMC has been studied for decades, quantifying the dynamics of the whole system, as well as of the two subsystems, remains challenging. This is because neither marginal process needs to be Markovian in general [7], and quantifying the probabilities (densities) of the trajectories of the two subsystems involves complicated random matrix multiplications [8]. As a result, the problem is not exactly solvable analytically, and the corresponding numerical solutions often lack direct mathematical and physical interpretations.
The conventional analysis of the BMC focuses on the mutual information [9] between the two subsystems for quantifying the underlying information correlations. There are three main representations. The first was proposed and emphasized in the works of Sagawa and Ueda [11] and of Parrondo, Horowitz and Sagawa [10] for explaining the mechanism of Maxwell's demon in Szilard's engine. In this representation, the mutual information between the demon and the controlled system characterizes the observation and the feedback of the demon. This leads to an elegant approach, which includes the increment of the mutual information in a unified fluctuation relation. The second representation was proposed in the work of Horowitz and Esposito [12] in an attempt to explain the apparent violation of the second law in a specific BMC, the bipartite model, where the mutual information is divided into two parts, corresponding to the two subsystems, called the information flows. This representation attempts to explain the mechanism of the demon, since the information flows contribute to the entropy production of both the demon and the controlled system. The first two representations are based on ensembles of the subsystem states. This means that the mutual information is defined only on the time-sliced distributions of the system states, which lack part of the information of the subsystem dynamics: the time-correlations of the observation and feedback of the demon. The third representation appears in the work of Barato, Hartich and Seifert [13], where a more general definition of mutual information from information theory is used, defined on the trajectories of the two subsystems. More precisely, this is the so-called Mutual Information Rate (MIR) [24], which quantifies the correlation between the two subsystem dynamics. However, due to the difficulties arising from the possible non-Markovian nature of the marginal chains, exactly solvable models and comprehensive conclusions remain challenging in this representation.
In this work, we study the discrete-time BMC in terms of both stochastic information dynamics and thermodynamics. To avoid the technical difficulty caused by non-Markovian dynamics, we first assume that the two marginal chains follow Markovian dynamics; the non-Markovian case will be discussed elsewhere. We explore the time-irreversibility of the BMC and the marginal processes in the steady state. Then, we decompose the driving force for the underlying dynamics into the information landscape and the information flux [14,15,16], which describe the time-reversible and time-irreversible parts of the dynamics, respectively. We also prove that the non-vanishing flux fully describes the time-irreversibility of the BMC and the marginal processes.
We focus on the mutual information rate between the two marginal chains. Since the two marginal chains are assumed to be Markov chains here, the mutual information rate is exactly solvable analytically and can be seen as the averaged conditional correlation between the two subsystem states. Here, the conditional correlations reveal the time correlations between the past states and the future states.
Corresponding to the landscape-flux decomposition in stochastic dynamics, we decompose the MIR into two parts: the time-reversible part and the time-irreversible part. The time-reversible part measures the part of the correlations between the two marginal chains present in both the forward and backward processes of the BMC. The time-irreversible part measures the difference between the correlations in the forward and backward processes of the BMC. We can see that a non-vanishing time-irreversible part of the MIR must be driven by a non-vanishing flux in the steady state, and it can be seen as a sufficient condition for a BMC to be time-irreversible.
We also reveal the important fact that the time-irreversible part of the MIR contributes to the nonequilibrium Entropy Production Rate (EPR) of the BMC through the simple equality:
$$\text{EPR of BMC} = \text{EPR of 1st marginal chain} + \text{EPR of 2nd marginal chain} + 2 \times (\text{time-irreversible part of MIR}).$$
The decomposition of the MIR and the relation between the time-irreversible part of the MIR and the EPRs also hold in stationary and ergodic non-Markovian cases, as discussed in the Appendix. This may help develop a general theory of nonequilibrium non-Markovian interacting information systems.

3. Information Landscape and Information Flux for Determining the Information Dynamics and Time-Irreversibility

Consider the case where two interacting information systems form a finite-state, discrete-time, ergodic and irreducible bivariate Markov chain,

$$Z = (X, S) = \{(X(t), S(t)),\ t \ge 1\}. \tag{1}$$
We assume that the information state space of X is given by $\mathcal{X} = \{1, \dots, d\}$ and the information state space of S is given by $\mathcal{S} = \{1, \dots, l\}$. The information state space of Z is then given by $\mathcal{Z} = \mathcal{X} \times \mathcal{S}$. The stochastic information dynamics can then be quantitatively described by the time evolution of the probability distribution on the information state space $\mathcal{Z}$, characterized by the following master equation (the information system dynamics) in discrete time:
$$p_z(z'; t+1) = \sum_{z} q_z(z' \mid z)\, p_z(z; t), \quad \text{for } t \ge 1 \text{ and } z, z' \in \mathcal{Z}, \tag{2}$$
where $p_z(z; t) = p_z(x, s; t)$ is the probability of observing state z (the joint probability of $X = x$ and $S = s$) at time t, and $q_z(z' \mid z) = q_z(x', s' \mid x, s) \ge 0$ is the transition probability from $z = (x, s)$ to $z' = (x', s')$, satisfying $\sum_{z'} q_z(z' \mid z) = 1$.
We assume that there exists a unique stationary distribution $\pi_z$ such that $\pi_z(z') = \sum_z q_z(z' \mid z)\, \pi_z(z)$. Then, given an arbitrary initial probability distribution, the probability distribution approaches $\pi_z$ exponentially fast in time. If the initial distribution is $\pi_z$, we say that Z is in Steady State (SS), and our discussion is based on this SS.
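Since the master equation (2) is linear, the steady state can be obtained by simply iterating it (or, equivalently, as the eigenvector of the transition matrix with eigenvalue 1). The following is a minimal sketch, assuming a hypothetical 4-state joint chain (the matrix values are illustrative, not from the paper):

```python
import numpy as np

# Columns index the current state z, rows the next state z';
# each column sums to 1, matching sum_{z'} q_z(z'|z) = 1.
Q = np.array([[0.7, 0.2, 0.1, 0.3],
              [0.1, 0.5, 0.3, 0.2],
              [0.1, 0.2, 0.4, 0.1],
              [0.1, 0.1, 0.2, 0.4]])

p = np.full(4, 0.25)             # arbitrary initial distribution
for _ in range(10_000):          # iterate the master equation p(t+1) = Q p(t)
    p_next = Q @ p
    if np.allclose(p_next, p, atol=1e-15):
        break
    p = p_next

pi_z = p_next
print(pi_z, np.allclose(Q @ pi_z, pi_z))   # pi_z satisfies pi_z = Q pi_z
```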
The marginal chains of Z, i.e., X and S, do not need to be Markov chains in general. For simplicity of analysis, we assume that both marginal chains are Markov chains, with transition probabilities given by $q_x(x' \mid x)$ and $q_s(s' \mid s)$ (for $x, x' \in \mathcal{X}$ and $s, s' \in \mathcal{S}$), respectively. Then, we have the following master equations (the information system dynamics) for X and S, respectively:
$$p_x(x'; t+1) = \sum_x q_x(x' \mid x)\, p_x(x; t), \tag{3}$$
and,
$$p_s(s'; t+1) = \sum_s q_s(s' \mid s)\, p_s(s; t), \tag{4}$$
where $p_x(x; t)$ and $p_s(s; t)$ are the probabilities of observing $X = x$ and $S = s$ at time t, respectively.
We consider that both Equations (3) and (4) have unique stationary solutions $\pi_x$ and $\pi_s$, which satisfy $\pi_x(x') = \sum_x q_x(x' \mid x)\, \pi_x(x)$ and $\pi_s(s') = \sum_s q_s(s' \mid s)\, \pi_s(s)$, respectively. Furthermore, we assume that when Z is in SS, $\pi_x$ and $\pi_s$ are also reached. The relations between $\pi_x$, $\pi_s$ and $\pi_z$ read:
$$\pi_x(x) = \sum_s \pi_z(x, s), \qquad \pi_s(s) = \sum_x \pi_z(x, s). \tag{5}$$
In the rest of this paper, we let $X^T = \{X(1), X(2), \dots, X(T)\}$, $S^T = \{S(1), S(2), \dots, S(T)\}$ and $Z^T = \{Z(1), Z(2), \dots, Z(T)\} = (X^T, S^T)$ denote the time sequences of X, S and Z in time T, respectively.
To characterize the time-irreversibility of a Markov chain C in the information dynamics in SS, we introduce the concept of probability flux. Here, C denotes an arbitrary Markov chain in $\{Z, X, S\}$, and c, $\pi_c$, $q_c$ and $C^T$ denote an arbitrary state of C, the stationary distribution of C, the transition probabilities of C and a time sequence of C in time T in SS, respectively.
The averaged number of transitions from state c to state $c'$ per unit time in SS, denoted by $N(c \to c')$, is given by:

$$N(c \to c') = \pi_c(c)\, q_c(c' \mid c).$$
This is also the probability of the time sequence $C^T = \{C(1) = c, C(2) = c'\}$ ($T = 2$). Correspondingly, the averaged number of reverse transitions, denoted by $N(c' \to c)$, reads:

$$N(c' \to c) = \pi_c(c')\, q_c(c \mid c').$$
This is also the probability of the time-reversed sequence $\tilde{C}^T = \{C(1) = c', C(2) = c\}$ ($T = 2$). The difference between these two transition numbers measures the time-irreversibility of the forward sequence $C^T$ in SS:

$$J_c(c \to c') = N(c \to c') - N(c' \to c) = P(C^T) - P(\tilde{C}^T) = \pi_c(c)\, q_c(c' \mid c) - \pi_c(c')\, q_c(c \mid c'), \quad \text{for } C = X, S \text{ or } Z. \tag{6}$$
Then, $J_c(c \to c')$ is said to be the probability flux from c to $c'$ in SS. If $J_c(c \to c') = 0$ for arbitrary c and $c'$, then $C^T$ ($T = 2$) is time-reversible; otherwise, when $J_c(c \to c') \neq 0$, $C^T$ is time-irreversible. Clearly, we have from Equation (6) that:

$$J_c(c \to c') = -J_c(c' \to c). \tag{7}$$
The transition probabilities determine the evolution dynamics of the information system. We can decompose the transition probabilities $q_c(c' \mid c)$ into two parts, the time-reversible part $D_c$ and the time-irreversible part $B_c$, which read:

$$q_c(c' \mid c) = D_c(c \to c') + B_c(c \to c'), \quad \text{with} \quad D_c(c \to c') = \frac{1}{2 \pi_c(c)} \left( \pi_c(c)\, q_c(c' \mid c) + \pi_c(c')\, q_c(c \mid c') \right), \quad B_c(c \to c') = \frac{1}{2 \pi_c(c)}\, J_c(c \to c'). \tag{8}$$
From this decomposition, we can see that the information system dynamics is determined by two driving forces. One of them is determined by the steady state probability distribution; this part of the driving force is time-reversible. The other driving force for the information dynamics is the steady state probability flux, which breaks the detailed balance and quantifies the time-irreversibility. Since the steady state probability distribution measures the weight of each information state, it can be used to quantify the information landscape. If we define the potential landscape for the information system as $\phi = -\log \pi$, then the driving force $D_c(c \to c') = \frac{1}{2}\left( q_c(c' \mid c) + \frac{\pi_c(c')}{\pi_c(c)}\, q_c(c \mid c') \right) = \frac{1}{2}\left( q_c(c' \mid c) + \exp\left[ -(\phi_c(c') - \phi_c(c)) \right] q_c(c \mid c') \right)$ is expressed in terms of the difference of the potential landscape. This is analogous to the landscape-flux decomposition of Langevin dynamics in [15]. Notice that the information landscape is directly related to the steady state probability distribution of the information system. In general, the information landscape is a nonequilibrium one, since detailed balance is often broken; only when detailed balance is preserved does the nonequilibrium information landscape reduce to the equilibrium information landscape. Even though the information landscape is not at equilibrium in general, the driving force $D_c(c \to c')$ is time-reversible by the decomposition construction. The steady state probability flux measures the information flow in the dynamics and can therefore be termed the information flux. In fact, a nonzero information flux explicitly breaks the detailed balance because of the net flow to or from the system; it is therefore a direct measure of the degree of nonequilibrium, or time-irreversibility, in terms of detailed balance breaking.
Note that the decomposition for the discrete Markovian information process can be viewed as the separation of the current, corresponding to $2 B_c(c \to c')\, \pi_c(c)$ here, and the activity, corresponding to $2 D_c(c \to c')\, \pi_c(c)$, in a previous study [19]. The landscape and flux decomposition here for the reduced information dynamics is in a similar spirit to the whole state space decomposition with the information system and the associated environments. When the detailed balance is broken, the information landscape (defined as the negative logarithm of the steady state probability, $\phi = -\log \pi$) is not the same as the equilibrium landscape under detailed balance. There can be a uniqueness issue related to the decomposition. To avoid confusion, we make a physical choice; in other words, we fix the gauge so that the information landscape always coincides with the equilibrium landscape when detailed balance is satisfied. That is, we require that the Boltzmann law apply at equilibrium with detailed balance. In this way, we can decompose the dynamics into the information landscape and the information flux for nonequilibrium information systems without detailed balance. By solving the linear master equation for the steady state, we can quantify the nonequilibrium information landscape and, from that, obtain the corresponding steady state probability flux. Several studies have discussed various aspects of this issue [18,19,27,28].
By Equations (7) and (8), we have the following relations:
$$\pi_c(c)\, D_c(c \to c') = \pi_c(c')\, D_c(c' \to c), \qquad \pi_c(c)\, B_c(c \to c') = -\pi_c(c')\, B_c(c' \to c). \tag{9}$$
As we can see in the next section, $D_c$ and $B_c$ are useful for quantifying the time-reversible and time-irreversible observables of C, respectively.
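The decomposition in Equation (8) is directly computable once $\pi_c$ and $q_c$ are known. Below is a minimal sketch for a hypothetical 3-state chain (matrix values illustrative, variable names ours); it computes $J_c$, $D_c$ and $B_c$ and checks Equations (8) and (9):

```python
import numpy as np

Q = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])     # column-stochastic: Q[c', c] = q_c(c'|c)

# stationary distribution: eigenvector of eigenvalue 1
w, v = np.linalg.eig(Q)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

M = pi[None, :] * Q                 # M[c', c] = pi(c) q(c'|c) = N(c -> c')
J = M - M.T                         # flux, Equation (6)
D = 0.5 * (M + M.T) / pi[None, :]   # time-reversible part, Equation (8)
B = 0.5 * J / pi[None, :]           # time-irreversible part, Equation (8)

assert np.allclose(D + B, Q)                               # Equation (8)
assert np.allclose(pi[None, :] * D, (pi[None, :] * D).T)   # Equation (9), symmetric part
assert np.allclose(pi[None, :] * B, -(pi[None, :] * B).T)  # Equation (9), antisymmetric part
```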
We now show that the non-vanishing information flux $J_c$ fully measures the time-irreversibility of the chain C in time T for $T \ge 2$. Let $C^T$ be an arbitrary sequence of C in SS and, without loss of generality, let $T = 3$. Similar to Equation (6), the time-irreversibility of $C^T$ can be measured by the difference between the probability of $C^T = \{C(1), C(2), C(3)\}$ and that of its time-reversal $\tilde{C}^T = \{C(3), C(2), C(1)\}$:
$$P(C^T) - P(\tilde{C}^T) = \pi_c(C(1))\, q_c(C(2) \mid C(1))\, q_c(C(3) \mid C(2)) - \pi_c(C(3))\, q_c(C(2) \mid C(3))\, q_c(C(1) \mid C(2))$$
$$= \pi_c(C(1)) \left[ D_c(C(1) \to C(2)) + B_c(C(1) \to C(2)) \right] \left[ D_c(C(2) \to C(3)) + B_c(C(2) \to C(3)) \right]$$
$$- \pi_c(C(3)) \left[ D_c(C(3) \to C(2)) + B_c(C(3) \to C(2)) \right] \left[ D_c(C(2) \to C(1)) + B_c(C(2) \to C(1)) \right], \quad \text{for } C = X, S \text{ or } Z.$$
Then, by the relations given in Equation (9), $P(C^T) - P(\tilde{C}^T) = 0$ holds for an arbitrary $C^T$ if and only if $B_c(C(1) \to C(2)) = B_c(C(2) \to C(3)) = 0$ or, equivalently, $J_c(C(1) \to C(2)) = J_c(C(2) \to C(3)) = 0$. The same conclusion can be drawn for arbitrary $T > 3$. Thus, the non-vanishing $J_c$ fully describes the time-irreversibility of C for $C = X$, S or Z.
We show the relations between the flux of the whole system, $J_z$, and that of the subsystem, $J_x$, as follows:
$$J_x(x \to x') = \pi_x(x)\, q_x(x' \mid x) - \pi_x(x')\, q_x(x \mid x') = P(\{x, x'\}) - P(\{x', x\}) = \sum_{s, s'} \left[ P(\{(x, s), (x', s')\}) - P(\{(x', s'), (x, s)\}) \right] = \sum_{s, s'} \left[ \pi_z(x, s)\, q_z(x', s' \mid x, s) - \pi_z(x', s')\, q_z(x, s \mid x', s') \right] = \sum_{s, s'} J_z\big( (x, s) \to (x', s') \big). \tag{10}$$
Similarly, we have:
$$J_s(s \to s') = \sum_{x, x'} J_z\big( (x, s) \to (x', s') \big). \tag{11}$$
These relations indicate that the subsystem fluxes $J_x$ and $J_s$ can be seen as coarse-grained versions of the total system flux $J_z$, obtained by averaging over the other part of the system, S or X, respectively. We should emphasize that a non-vanishing $J_z$ does not imply that X or S is time-irreversible, and vice versa.
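These coarse-graining relations are easy to check numerically. In the minimal sketch below (a hypothetical random joint chain; note that here X and S need not be Markovian, so $J_x$ and $J_s$ are simply the coarse-grained fluxes), the marginal fluxes inherit the antisymmetry of Equation (7):

```python
import numpy as np

d, l = 2, 2                                # |X| = d, |S| = l; z = (x, s) as x * l + s
rng = np.random.default_rng(0)
Q = rng.random((d * l, d * l))
Q /= Q.sum(axis=0, keepdims=True)          # column-stochastic joint transition matrix

w, v = np.linalg.eig(Q)
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()

M = pi[None, :] * Q
Jz = (M - M.T).reshape(d, l, d, l)         # Jz[x', s', x, s]
Jx = Jz.sum(axis=(1, 3))                   # Equation (10): sum over s' and s
Js = Jz.sum(axis=(0, 2))                   # Equation (11): sum over x' and x
print(np.allclose(Jx, -Jx.T), np.allclose(Js, -Js.T))   # antisymmetry is preserved
```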

4. Mutual Information Decomposition into Time-Reversible and Time-Irreversible Parts

According to information theory, the two interacting information systems represented by the bivariate Markov chain Z can be characterized by the Mutual Information Rate (MIR) between the marginal chains X and S in SS. The MIR represents the correlation between the two interacting information systems. It is defined on the probabilities of all possible time sequences, $P(Z^T)$, $P(X^T)$ and $P(S^T)$, and is given by [24]:
$$I(X, S) = \lim_{T \to \infty} \frac{1}{T} \sum_{Z^T} P(Z^T) \log \frac{P(Z^T)}{P(X^T)\, P(S^T)}. \tag{12}$$
It measures the correlation between X and S per unit time or, equivalently, the effective number of bits of information that X and S exchange per unit time. The MIR is non-negative, and a vanishing $I(X, S)$ indicates that X and S are independent of each other. More explicitly, the corresponding probabilities of these sequences can be evaluated by using Equations (2)-(4); we have:
$$P(X^T) = \pi_x(X(1)) \prod_{t=1}^{T-1} q_x(X(t+1) \mid X(t)), \qquad P(S^T) = \pi_s(S(1)) \prod_{t=1}^{T-1} q_s(S(t+1) \mid S(t)), \qquad P(Z^T) = \pi_z(Z(1)) \prod_{t=1}^{T-1} q_z(Z(t+1) \mid Z(t)).$$
By substituting these probabilities into Equation (12) (see Appendix A), we have the exact expression of MIR as:
$$I(X, S) = \sum_{z, z'} \pi_z(z)\, q_z(z' \mid z) \log \frac{q_z(z' \mid z)}{q_x(x' \mid x)\, q_s(s' \mid s)} = \left\langle i(z' \mid z) \right\rangle_{z, z'} \ge 0, \quad \text{for } z = (x, s) \text{ and } z' = (x', s'), \tag{13}$$
where $i(z' \mid z) = \log \frac{q_z(z' \mid z)}{q_x(x' \mid x)\, q_s(s' \mid s)}$ is the conditional (Markovian) correlation between the states $x'$ and $s'$ when the transition $z = (x, s) \to z' = (x', s')$ occurs. This indicates that when the two marginal processes are both Markovian, the MIR is the average of the conditional (Markovian) correlations. These correlations are measurable when transitions occur, and they can be seen as observables of Z.
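Equation (13) is a finite sum once the transition matrices are specified. As a minimal sanity check (hypothetical matrices; the function names are ours), the sketch below evaluates it for a joint chain composed of two independent Markov chains, for which $q_z = q_x q_s$ and the MIR must vanish:

```python
import numpy as np

def stationary(Q):
    w, v = np.linalg.eig(Q)
    p = np.real(v[:, np.argmax(np.real(w))])
    return p / p.sum()

def mir(Qz, Qx, Qs, d, l):
    """Mutual information rate via Equation (13); z = (x, s) flattened as x * l + s."""
    pi_z = stationary(Qz)
    I = 0.0
    for zp in range(d * l):
        for z in range(d * l):
            if Qz[zp, z] > 0:
                xp, sp, x, s = zp // l, zp % l, z // l, z % l
                i_cond = np.log(Qz[zp, z] / (Qx[xp, x] * Qs[sp, s]))   # i(z'|z)
                I += pi_z[z] * Qz[zp, z] * i_cond
    return I

Qx = np.array([[0.8, 0.4], [0.2, 0.6]])   # column-stochastic marginal chains
Qs = np.array([[0.5, 0.3], [0.5, 0.7]])
Qz = np.kron(Qx, Qs)                      # independent subsystems: q_z = q_x * q_s
print(mir(Qz, Qx, Qs, 2, 2))              # ~0: independent chains share no information
```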
By noting the decomposition of the transition probabilities in Equation (8), we have a corresponding decomposition of $I(X, S)$:

$$I(X, S) = I_D(X, S) + I_B(X, S), \quad \text{with}$$
$$I_D(X, S) = \sum_{z, z'} \pi_z(z)\, D_z(z \to z')\, i(z' \mid z) = \frac{1}{2} \sum_{z, z'} \left( \pi_z(z)\, q_z(z' \mid z) + \pi_z(z')\, q_z(z \mid z') \right) i(z' \mid z),$$
$$I_B(X, S) = \sum_{z, z'} \pi_z(z)\, B_z(z \to z')\, i(z' \mid z) = \frac{1}{2} \sum_{z, z'} J_z(z \to z')\, i(z' \mid z) = \frac{1}{4} \sum_{z, z'} J_z(z \to z') \left( i(z' \mid z) - i(z \mid z') \right). \tag{14}$$
This means that the mutual information representing the correlations between the two interacting systems can be decomposed into a time-reversible equilibrium part and a time-irreversible nonequilibrium part. This originates from the fact that the underlying information system dynamics is determined by both the time-reversible information landscape and the time-irreversible information flux. These equations are important for establishing the link to time-irreversibility. We now give further interpretations of $I_D(X, S)$ and $I_B(X, S)$.
Consider a bivariate Markov chain Z in SS wherein X and S are dependent on each other, i.e., $I(X, S) = I_D(X, S) + I_B(X, S) > 0$. By the ergodicity of Z, the MIR measures the averaged conditional correlation along the time sequences $Z^T$:

$$\lim_{T \to \infty} \frac{1}{T} \left\langle \sum_{t=1}^{T-1} i(Z(t+1) \mid Z(t)) \right\rangle_{Z^T} = I(X, S).$$
Then, $I_B(X, S)$ measures the change of the averaged conditional correlation between X and S when a sequence of Z is reversed in time:

$$\lim_{T \to \infty} \frac{1}{T} \left\langle \sum_{t=1}^{T-1} \left[ i(Z(t+1) \mid Z(t)) - i(Z(t) \mid Z(t+1)) \right] \right\rangle_{Z^T} = 2\, I_B(X, S).$$
A negative $I_B(X, S)$ shows that the correlation between X and S becomes stronger in the time-reversed process of Z; a positive $I_B(X, S)$ shows that the correlation becomes weaker in the time-reversed process of Z. Both cases show that Z is time-irreversible, since we have a non-vanishing $J_z$. However, the case $I_B(X, S) = 0$ is more subtle, since it is compatible with either a vanishing or a non-vanishing $J_z$. In any case, a non-vanishing $I_B(X, S)$ is a sufficient condition for Z to be time-irreversible. On the other hand, $I_D(X, S) = I(X, S) - I_B(X, S)$ measures the correlation remaining in the backward process of Z.
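The time-reversal relation above can be checked by direct sampling. The following minimal sketch borrows the blind-demon model introduced in Section 6 below as a concrete BMC with Markovian marginals (the parameter values are hypothetical); there, Equation (26) gives $2 I_B(X, S) = R_z$ in closed form, which the trajectory average should approach:

```python
import numpy as np

rng = np.random.default_rng(7)
p = 0.3
eps = np.array([[0.6, 0.3, 0.1],   # instruction distribution of bath a
                [0.2, 0.3, 0.5]])  # instruction distribution of bath b
P = np.array([p, 1 - p])           # demon's i.i.d. choice distribution
pi_x = P @ eps                     # stationary system distribution, Equation (22)

T = 500_000
s = rng.choice(2, p=P)
x = rng.choice(3, p=pi_x)
acc = 0.0
for _ in range(T):
    sp = rng.choice(2, p=P)        # fresh choice of the demon
    xp = rng.choice(3, p=eps[s])   # system executes the previously chosen bath, Eq. (21)
    # conditional correlations: the q_s factors P(s') cancel inside i(z'|z)
    i_fwd = np.log(eps[s, xp] / pi_x[xp])    # i(Z(t+1)|Z(t))
    i_bwd = np.log(eps[sp, x] / pi_x[x])     # i(Z(t)|Z(t+1))
    acc += i_fwd - i_bwd
    x, s = xp, sp

Rz = np.sum(p * (1 - p) * (eps[0] - eps[1]) * np.log(eps[0] / eps[1]))  # Eq. (24)
print(acc / T, Rz)                 # trajectory estimate of 2*I_B vs closed form R_z
```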
The definition of the MIR in Equation (12) turns out to be appropriate for even more general stationary and ergodic (Markovian or non-Markovian) processes. Consequently, the decomposition of the MIR is useful for quantifying the correlation between two stationary and ergodic processes in a wider sense, i.e., for monitoring the changes of the correlation in the forward and backward processes. As a special case, the analytical expressions in Equation (14) are the reduced results valid for Markovian cases. A brief discussion of the decomposition of the MIR for more general processes can be found in Appendix B.

5. Relationship between Mutual Information and Entropy Production

The Entropy Production Rate (EPR), or energy dissipation (cost) rate, at steady state is a quantitative nonequilibrium measure that characterizes the time-irreversibility of the underlying process. The EPR of a stationary and ergodic process C (here $C = Z$, X or S) can be given by the difference between the averaged surprisal (negative logarithmic probability) of the backward sequences $\tilde{C}^T$ and that of the forward sequences $C^T$ in the long-time limit, i.e.,
$$R_c = \lim_{T \to \infty} \frac{1}{T} \left\langle \log P(C^T) - \log P(\tilde{C}^T) \right\rangle_{C^T} = \lim_{T \to \infty} \frac{1}{T} \left\langle \log \frac{P(C^T)}{P(\tilde{C}^T)} \right\rangle_{C^T} \ge 0, \tag{15}$$
where $R_c$ is said to be the EPR of C [25]; $-\log P(C^T)$ and $-\log P(\tilde{C}^T)$ are the surprisals of a forward and a backward sequence of C, respectively. We see that C is time-reversible (i.e., $P(C^T) = P(\tilde{C}^T)$ for an arbitrary $C^T$ at large T) if and only if $R_c = 0$; this is because $R_c$ has exactly the form of a Kullback-Leibler divergence. When C is Markovian, $R_c$ reduces to the following forms when Z, X or S is assigned to C, respectively [17,26]:
$$R_z = \frac{1}{2} \sum_{z, z'} J_z(z \to z') \log \frac{q_z(z' \mid z)}{q_z(z \mid z')}, \qquad R_x = \frac{1}{2} \sum_{x, x'} J_x(x \to x') \log \frac{q_x(x' \mid x)}{q_x(x \mid x')}, \qquad R_s = \frac{1}{2} \sum_{s, s'} J_s(s \to s') \log \frac{q_s(s' \mid s)}{q_s(s \mid s')}, \tag{16}$$
where the total and subsystem entropy production rates $R_z$, $R_x$ and $R_s$ correspond to Z, X and S, respectively. Here, $R_z$ usually contains the detailed interaction information of the system (or subsystems) and the environments, while $R_x$ and $R_s$ provide the coarse-grained information of the time-irreversible observables of X and S, respectively. Each non-vanishing EPR indicates that the corresponding Markov chain is time-irreversible. Again, we emphasize that a non-vanishing $R_z$ does not imply that X or S is time-irreversible, and vice versa.
We are interested in the connection between these EPRs and the mutual information. We can relate them to $I_B(X, S)$ by noting Equations (10), (11) and (14). We have:
$$I_B(X, S) = \frac{1}{4} \sum_{z, z'} J_z(z \to z') \left( i(z' \mid z) - i(z \mid z') \right) = \frac{1}{4} \sum_{z, z'} J_z(z \to z') \log \frac{q_z(z' \mid z)}{q_z(z \mid z')} - \frac{1}{4} \sum_{x, x'} J_x(x \to x') \log \frac{q_x(x' \mid x)}{q_x(x \mid x')} - \frac{1}{4} \sum_{s, s'} J_s(s \to s') \log \frac{q_s(s' \mid s)}{q_s(s \mid s')} = \frac{1}{2} \left( R_z - R_x - R_s \right). \tag{17}$$
We note that $I_B(X, S)$ is intimately related to the EPRs. This builds a bridge between the EPRs and the irreversible part of the mutual information. Moreover, we also have:
$$R_z = R_x + R_s + 2\, I_B(X, S) \ge 0, \qquad R_x + R_s \ge -2\, I_B(X, S), \qquad R_z \ge 2\, I_B(X, S). \tag{18}$$
This indicates that the time-irreversible part of the MIR contributes to the detailed EPRs. In other words, the difference between the entropy production rate of the whole system and those of the subsystems provides the origin of the time-irreversible part of the mutual information. This reveals the nonequilibrium thermodynamic origin of the irreversible mutual information, or correlations. Of course, since the EPR is directly related to the flux, as seen from the above definitions, the origin of the EPR, or of the nonequilibrium thermodynamics, is the non-vanishing information flux of the nonequilibrium dynamics. On the other hand, the irreversible part of the mutual information measures the correlations, and it contributes to the EPRs of the correlated subsystems.
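As a quick consistency check of Equations (17) and (18), consider the simplest case of two independent Markov chains, for which $I_B(X, S) = 0$, so the EPR of the joint chain must equal the sum of the subsystem EPRs. A minimal sketch with hypothetical matrices (not a model from the paper); the 3-state chain carries a rotational bias and is therefore irreversible:

```python
import numpy as np

def stationary(Q):
    w, v = np.linalg.eig(Q)
    p = np.real(v[:, np.argmax(np.real(w))])
    return p / p.sum()

def epr(Q):
    """Entropy production rate of a Markov chain via Equation (16)."""
    pi = stationary(Q)
    M = pi[None, :] * Q
    return 0.5 * np.sum((M - M.T) * np.log(Q / Q.T))

Qx = np.array([[0.1, 0.6, 0.3],    # rotationally biased, hence R_x > 0
               [0.3, 0.1, 0.6],
               [0.6, 0.3, 0.1]])
Qs = np.array([[0.7, 0.4],
               [0.3, 0.6]])        # any 2-state chain is reversible: R_s = 0
Qz = np.kron(Qx, Qs)               # independent joint dynamics: q_z = q_x * q_s

print(epr(Qz), epr(Qx) + epr(Qs)) # equal, since I_B(X, S) = 0 here
```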
Furthermore, the last expression in Equation (17) (as well as the expressions in Equation (18)) can be generalized to more general stationary and ergodic processes. A related discussion and demonstration can be found in Appendix B.

6. A Simple Case: The Blind Demon

As a concrete example, we consider a two-state system coupled to two information baths, a and b. The states of the system are denoted by $\mathcal{X} = \{x : x = 0, 1\}$. Each bath sends an instruction to the system; if the system adopts it, it follows the instruction and changes its state. The instructions generated from each bath are independent and identically distributed (Bernoulli trials). The probability distributions of the instructions read $\{\epsilon_a(x) : x \in \mathcal{X},\ \epsilon_a(x) \ge 0,\ \sum_x \epsilon_a(x) = 1\}$ for bath a and $\{\epsilon_b(x) : x \in \mathcal{X},\ \epsilon_b(x) \ge 0,\ \sum_x \epsilon_b(x) = 1\}$ for bath b. Since the system cannot execute two instructions simultaneously, there is an information demon that makes the choice for the system. The demon is blind in the sense that it does not care about the state of the system: it makes its choices in an independent and identically distributed manner. The choices of the demon are denoted by $\mathcal{S} = \{s : s = a, b\}$. The probability distribution of the demon's choices reads $\{P(s) : s \in \mathcal{S},\ P(a) = p,\ P(b) = 1 - p,\ p \in [0, 1]\}$. As before, we use $Z = (X, S)$, with $X \in \mathcal{X}$ and $S \in \mathcal{S}$, to denote the BMC of the system and the demon.
Consequently, the transition probabilities of the system read:
$$q_x(x' \mid x) = p\, \epsilon_a(x') + (1 - p)\, \epsilon_b(x'). \tag{19}$$
The transition probabilities of the demon read:
$$q_s(s' \mid s) = P(s'). \tag{20}$$
Additionally, the transition probabilities of the joint chain read (the new system state $x'$ executes the instruction from the bath s recorded in the current state, while $s'$ is the demon's fresh choice, to be executed at the next step):

$$q_z(x', s' \mid x, s) = P(s')\, \epsilon_s(x'). \tag{21}$$
We have the corresponding steady state distributions, or the information landscapes, as:

$$\pi_x(x) = p\, \epsilon_a(x) + (1 - p)\, \epsilon_b(x), \qquad \pi_s(s) = P(s), \qquad \pi_z(x, s) = P(s)\, \pi_x(x). \tag{22}$$
We obtain the information fluxes as:

$$J_x(x \to x') = 0 \quad \text{for all } x, x' \in \mathcal{X}, \qquad J_s(s \to s') = 0 \quad \text{for all } s, s' \in \mathcal{S},$$
$$J_z\big( (x, s) \to (x', s') \big) = P(s)\, P(s') \left( \pi_x(x)\, \epsilon_s(x') - \pi_x(x')\, \epsilon_{s'}(x) \right). \tag{23}$$
Here, we use the notations $\epsilon_s(x')$ and $\epsilon_{s'}(x)$ ($s, s' = a$ or b) to denote the probabilities of the instructions $x'$ or x from bath s or $s'$, respectively. We obtain the EPRs as:

$$R_x = 0, \qquad R_s = 0, \qquad R_z = \sum_x p (1 - p) \left( \epsilon_a(x) - \epsilon_b(x) \right) \left( \log \epsilon_a(x) - \log \epsilon_b(x) \right). \tag{24}$$
We evaluate the MIR as:
$$I(X, S) = -\sum_x \pi_x(x) \log \pi_x(x) + p \sum_x \epsilon_a(x) \log \epsilon_a(x) + (1 - p) \sum_x \epsilon_b(x) \log \epsilon_b(x). \tag{25}$$

The time-irreversible part of $I(X, S)$ reads:

$$I_B(X, S) = \frac{1}{2} R_z. \tag{26}$$
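The closed-form results in Equations (22)-(26) can be checked directly against the matrix definitions of Sections 3-5. Below is a minimal sketch with hypothetical parameter values (our variable names; not parameters from the paper):

```python
import numpy as np

p = 0.3
eps = np.array([[0.6, 0.3, 0.1],            # eps[0] = eps_a
                [0.2, 0.3, 0.5]])           # eps[1] = eps_b
P = np.array([p, 1 - p])
nx, ns = 3, 2
pi_x = P @ eps                              # Equation (22)

# Equation (21): q_z(x', s' | x, s) = P(s') * eps_s(x'), with z = x * ns + s
Q = np.zeros((nx * ns, nx * ns))
for x in range(nx):
    for s in range(ns):
        for xp in range(nx):
            for sp in range(ns):
                Q[xp * ns + sp, x * ns + s] = P[sp] * eps[s, xp]

pi_z = np.array([P[s] * pi_x[x] for x in range(nx) for s in range(ns)])
assert np.allclose(Q @ pi_z, pi_z)          # Equation (22) is indeed stationary

M = pi_z[None, :] * Q                       # M[z', z] = pi_z(z) q_z(z'|z)
J = M - M.T                                 # information flux, Equation (23)
Rz_flux = 0.5 * np.sum(J * np.log(Q / Q.T))                 # Equation (16)
Rz_closed = np.sum(p * (1 - p) * (eps[0] - eps[1]) * np.log(eps[0] / eps[1]))

Qx = np.tile(pi_x[:, None], (1, nx))        # Equation (19): every column is pi_x
Qs = np.tile(P[:, None], (1, ns))           # Equation (20)
i_mat = np.log(Q / np.kron(Qx, Qs))         # conditional correlations i(z'|z)
I13 = np.sum(M * i_mat)                     # MIR via Equation (13)
I25 = (-np.sum(pi_x * np.log(pi_x)) + p * np.sum(eps[0] * np.log(eps[0]))
       + (1 - p) * np.sum(eps[1] * np.log(eps[1])))         # Equation (25)
IB14 = 0.25 * np.sum(J * (i_mat - i_mat.T))                 # Equation (14)

print(Rz_flux, Rz_closed)        # agree: Equation (24)
print(I13, I25)                  # agree: Equation (25)
print(IB14, 0.5 * Rz_closed)     # agree: Equation (26), I_B = R_z / 2
```

Note that here X is i.i.d. (its transition matrix has identical columns), which is why $J_x$ and $R_x$ vanish even though the joint chain is time-irreversible.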

7. Conclusions

In this work, we identify the driving forces for the information system dynamics. We show that, for marginal Markovian information systems, the information dynamics is determined by both the information landscape and the information flux. While the information landscape can be used to construct the driving force describing the time-reversible behavior of the information dynamics, the information flux describes its time-irreversible behavior. The information flux explicitly breaks detailed balance and provides a quantitative measure of the degree of nonequilibrium, or time-irreversibility. We further demonstrate that the mutual information rate, which represents the correlations, can be decomposed into a time-reversible part and a time-irreversible part, originating from the landscape and flux decomposition of the information dynamics. Finally, we uncover the intimate relationship between the time-irreversible part of the mutual information and the difference between the entropy production of the whole system and those of the subsystems. This will help with understanding the nonequilibrium behavior of interacting information system dynamics in stochastic environments. Furthermore, we verify that our conclusions on the decomposition of the mutual information rate and the entropy production rates extend to general stationary and ergodic processes.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (NSFC-91430217) and the National Science Foundation (U.S.) (NSF-PHY-76066).

Author Contributions

Qian Zeng and Jin Wang conceived and designed the experiments; Qian Zeng performed the experiments; Qian Zeng and Jin Wang analyzed the data; Qian Zeng and Jin Wang contributed reagents/materials/analysis tools; Qian Zeng and Jin Wang wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BMC   Bivariate Markov Chain
EPR   Entropy Production Rate
MIR   Mutual Information Rate
SS    Steady State

Appendix A

Here, we derive the exact form of the Mutual Information Rate (MIR, Equation (13)) in the steady state by using the cumulant-generating function.
We write an arbitrary time sequence of Z in time T in the following form:
$$Z^T = \{Z(1), \dots, Z(i), \dots, Z(T)\}, \quad \text{for } T \ge 2, \tag{A1}$$
where $Z(i)$ (for $i \ge 1$) denotes the state at time i. The corresponding probability of $Z^T$ has the form:

$$P(Z^T) = \pi_z(Z(1)) \prod_{i=1}^{T-1} q_z(Z(i+1) \mid Z(i)). \tag{A2}$$
We let the chain $U = (X, S)$ denote a process in which X and S follow the same marginal Markov dynamics as in Z but are independent of each other. The transition probabilities of U then read:

$$q_u(u' \mid u) = q_u(x', s' \mid x, s) = q_x(x' \mid x)\, q_s(s' \mid s). \tag{A3}$$
Then, the probability of a time sequence $U^T$ of U following the same trajectory as $Z^T$ reads:

$$P(U^T) = \pi_u(Z(1)) \prod_{i=1}^{T-1} q_u(Z(i+1) \mid Z(i)), \tag{A4}$$

with $\pi_u(x, s) = \pi_x(x)\, \pi_s(s)$ being the stationary probability of U.
To evaluate the exact form of the MIR, we introduce the cumulant-generating function of the random variable $\log \frac{P(Z^T)}{P(U^T)}$:

$$K(m, T) = \log \left\langle \exp\left( m \log \frac{P(Z^T)}{P(U^T)} \right) \right\rangle_{Z^T}.$$
We can see that:
$$\lim_{T \to \infty} \lim_{m \to 0} \frac{1}{T} \frac{\partial K(m, T)}{\partial m} = \lim_{T \to \infty} \frac{1}{T} \left\langle \log \frac{P(Z^T)}{P(U^T)} \right\rangle_{Z^T} = I(X, S). \tag{A5}$$
Thus, our strategy is to evaluate $K(m, T)$ first. We have:

$$K(m, T) = \log \left\langle \exp\left( m \log \frac{P(Z^T)}{P(U^T)} \right) \right\rangle_{Z^T} = \log \sum_{Z^T} \frac{\left( P(Z^T) \right)^{m+1}}{\left( P(U^T) \right)^m} = \log \sum_{\{Z(1), \dots, Z(T)\}} \frac{\pi_z^{m+1}(Z(1))}{\pi_u^m(Z(1))} \prod_{i=1}^{T-1} \frac{q_z^{m+1}(Z(i+1) \mid Z(i))}{q_u^m(Z(i+1) \mid Z(i))}, \tag{A6}$$
where we realize that the last equality can be rewritten in the form of matrix multiplication.
We introduce the following matrices and vectors for Equation (A6) such that:
$$\mathbf{Q}_z = \left( q_z(z' \mid z) \right)_{(z', z)}, \qquad \mathbf{G}(m) = \left( \frac{q_z^{m+1}(z' \mid z)}{q_u^m(z' \mid z)} \right)_{(z', z)}, \qquad \boldsymbol{\pi}_z = \left( \pi_z(z) \right)_{z}, \qquad \mathbf{v}(m) = \left( \frac{\pi_z^{m+1}(z)}{\pi_u^m(z)} \right)_{z}, \quad \text{for } z, z' \in \mathcal{Z}, \tag{A7}$$
where $\mathbf{Q}_z$ is the transition matrix of Z and $\boldsymbol{\pi}_z$ is the stationary distribution of Z. It can also be verified that:

$$\mathbf{Q}_z = \mathbf{G}(0), \qquad \boldsymbol{\pi}_z = \mathbf{v}(0), \qquad \boldsymbol{\pi}_z = \mathbf{Q}_z \boldsymbol{\pi}_z, \qquad \mathbf{1}^\top \mathbf{Q}_z = \mathbf{1}^\top,$$
$$\lim_{m \to 0} \frac{d \mathbf{G}(m)}{d m} = \left( q_z(z' \mid z) \log \frac{q_z(z' \mid z)}{q_u(z' \mid z)} \right)_{(z', z)}, \qquad \lim_{m \to 0} \frac{d \mathbf{v}(m)}{d m} = \left( \pi_z(z) \log \frac{\pi_z(z)}{\pi_u(z)} \right)_{z}, \tag{A8}$$
where $\mathbf{1}$ is the all-ones vector of the appropriate dimension.
Then, $K(m, T)$ can be rewritten in the compact form:

$$K(m, T) = \log \left[ \mathbf{1}^\top \mathbf{G}^{T-1}(m)\, \mathbf{v}(m) \right]. \tag{A9}$$
Then, we substitute Equation (A9) into Equation (A5) (noting that the denominator $\mathbf{1}^\top \mathbf{G}^{T-1}(m)\, \mathbf{v}(m)$ arising from the derivative of the logarithm tends to 1 as $m \to 0$) and have:

$$I(X, S) = \lim_{T \to \infty} \lim_{m \to 0} \frac{1}{T} \frac{\partial K(m, T)}{\partial m} = \lim_{T \to \infty} \lim_{m \to 0} \frac{1}{T} \frac{\partial}{\partial m} \log \left[ \mathbf{1}^\top \mathbf{G}^{T-1}(m)\, \mathbf{v}(m) \right] = \lim_{T \to \infty} \lim_{m \to 0} \frac{1}{T} \left[ (T-1)\, \mathbf{1}^\top \mathbf{G}^{T-2}(m) \frac{d \mathbf{G}(m)}{d m}\, \mathbf{v}(m) + \mathbf{1}^\top \mathbf{G}^{T-1}(m) \frac{d \mathbf{v}(m)}{d m} \right] = \lim_{T \to \infty} \frac{1}{T} \left[ (T-1)\, \mathbf{1}^\top \mathbf{G}^{T-2}(0) \lim_{m \to 0} \frac{d \mathbf{G}(m)}{d m}\, \mathbf{v}(0) + \mathbf{1}^\top \mathbf{G}^{T-1}(0) \lim_{m \to 0} \frac{d \mathbf{v}(m)}{d m} \right]. \tag{A10}$$
By noting Equation (A8) and $T \ge 2$, we obtain Equation (13) from Equation (A10):

$$I(X, S) = \lim_{T \to \infty} \left[ \left( 1 - \frac{1}{T} \right) \mathbf{1}^\top \lim_{m \to 0} \frac{d \mathbf{G}(m)}{d m}\, \boldsymbol{\pi}_z + \frac{1}{T}\, \mathbf{1}^\top \lim_{m \to 0} \frac{d \mathbf{v}(m)}{d m} \right] = \mathbf{1}^\top \lim_{m \to 0} \frac{d \mathbf{G}(m)}{d m}\, \boldsymbol{\pi}_z = \sum_{(x, s), (x', s')} \pi_z(x, s)\, q_z(x', s' \mid x, s) \log \frac{q_z(x', s' \mid x, s)}{q_x(x' \mid x)\, q_s(s' \mid s)}. \tag{A11}$$
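The cumulant-generating-function route can also be checked numerically: since $K(0, T) = 0$ exactly, a one-sided finite difference $K(m, T)/(mT)$ at small m should approach the direct sum in Equation (A11) as T grows. A minimal sketch (hypothetical random matrices; for a genuine MIR, $\mathbf{Q}_x$ and $\mathbf{Q}_s$ would have to be the Markovian marginals of $\mathbf{Q}_z$):

```python
import numpy as np

def stationary(Q):
    w, v = np.linalg.eig(Q)
    p = np.real(v[:, np.argmax(np.real(w))])
    return p / p.sum()

rng = np.random.default_rng(1)
d = l = 2
Qz = rng.random((d * l, d * l)); Qz /= Qz.sum(axis=0, keepdims=True)
Qx = rng.random((d, d));         Qx /= Qx.sum(axis=0, keepdims=True)
Qs = rng.random((l, l));         Qs /= Qs.sum(axis=0, keepdims=True)

pi = stationary(Qz)
Qu = np.kron(Qx, Qs)                          # reference independent dynamics, Eq. (A3)
pu = np.kron(stationary(Qx), stationary(Qs))  # pi_u = pi_x * pi_s

def K(m, T):
    """Cumulant-generating function in the compact form of Equation (A9)."""
    G = Qz ** (m + 1) / Qu ** m               # elementwise, Equation (A7)
    vec = pi ** (m + 1) / pu ** m
    return np.log(np.ones(d * l) @ np.linalg.matrix_power(G, T - 1) @ vec)

direct = np.sum(pi[None, :] * Qz * np.log(Qz / Qu))   # direct sum, Equation (A11)
m = 1e-6
for T in (10, 100, 1000):
    print(T, K(m, T) / (m * T))               # approaches the direct sum as T grows
print(direct)
```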

Appendix B

Appendix B.1. Discussions on the Generality of Mutual Information Rate Decomposition and Connections to Entropy Production in Terms of Equations (14), (17) and (18)

In general, we do not expect both X and S to be Markovian; even the joint chain Z may be non-Markovian. This means that Equation (2) may fail to describe the dynamics of Z, and the landscape-flux decomposition needs to be generalized to this situation. Such a decomposition has not yet been developed for the non-Markovian case and will be discussed in a separate work. However, when Z is a stationary and ergodic process (and both X and S are also stationary and ergodic), we show that the MIR can be decomposed into two parts, as in Equation (14), and that a relation between the MIR and the EPRs can still be found in the same form as the last expression in Equation (17).
We are interested in the correlation between the forward sequences of X and S, which can be measured by $\log \frac{P(Z^T)}{P(X^T)\, P(S^T)}$ (with $Z^T = (X^T, S^T)$); the MIR quantifies the average rate of this correlation in the long-time limit, as shown in Equation (12). Furthermore, we are interested in the averaged difference between the rate of the correlation of the backward processes and that of the forward processes. This leads to the time-irreversible part of the MIR, defined by:

$$I_B(X, S) = \lim_{T \to \infty} \frac{1}{2T} \left\langle \log \frac{P(Z^T)}{P(X^T)\, P(S^T)} - \log \frac{P(\tilde{Z}^T)}{P(\tilde{X}^T)\, P(\tilde{S}^T)} \right\rangle_{Z^T}, \tag{A12}$$
where $\log \frac{P(\tilde{Z}^T)}{P(\tilde{X}^T)\, P(\tilde{S}^T)}$ quantifies the correlation between the backward sequences of X and S. Clearly, the time-irreversible part of the MIR indicates whether the correlation of the forward processes of X and S is enhanced ($I_B(X, S) > 0$) or weakened ($I_B(X, S) < 0$) compared with that of the backward processes. The other important part of the MIR, namely the time-reversible part, gives the averaged rate of the correlation that remains in both the forward and backward processes:

$$I_D(X, S) = \lim_{T \to \infty} \frac{1}{2T} \left\langle \log \frac{P(Z^T)}{P(X^T)\, P(S^T)} + \log \frac{P(\tilde{Z}^T)}{P(\tilde{X}^T)\, P(\tilde{S}^T)} \right\rangle_{Z^T}. \tag{A13}$$
Consequently, the MIR $I(X, S)$ is decomposed into two parts, $I(X, S) = I_D(X, S) + I_B(X, S)$. In Markovian cases, each part of the MIR reduces to the corresponding form in Equation (14).
The relation between the time-irreversible part of the MIR and EPRs can be shown as follows,
$$I_B(X, S) = \lim_{T \to \infty} \frac{1}{2T} \left\langle \log \frac{P(Z^T)}{P(X^T)\, P(S^T)} - \log \frac{P(\tilde{Z}^T)}{P(\tilde{X}^T)\, P(\tilde{S}^T)} \right\rangle_{Z^T} = \lim_{T \to \infty} \frac{1}{2T} \left[ \left\langle \log \frac{P(Z^T)}{P(\tilde{Z}^T)} \right\rangle_{Z^T} - \left\langle \log \frac{P(X^T)}{P(\tilde{X}^T)} \right\rangle_{X^T} - \left\langle \log \frac{P(S^T)}{P(\tilde{S}^T)} \right\rangle_{S^T} \right] = \frac{1}{2} \left( R_z - R_x - R_s \right), \tag{A14}$$
which is in the same form as Equation (17). Additionally, due to the non-negativity of the EPRs, the inequalities in (18) still hold for general cases.

Appendix B.2. The Smart Demon

To verify the conclusions in more general cases, we construct a model of a smart demon, which reflects a more general situation in nature: the two information subsystems exert feedback on each other. A three-state information system is connected to two information baths, labeled a and b. The states of the system are denoted by $\mathcal{X} = \{x : x = 0, 1, 2\}$. Each bath sends an instruction to the system; if the system adopts it, it follows the instruction and changes its state. The instructions generated from an arbitrary bath are independent and identically distributed. The probability distributions of the instructions read $\{\epsilon_s(x) : \epsilon_s(x) \ge 0,\ \sum_{x \in \mathcal{X}} \epsilon_s(x) = 1\}$ (for $s = a, b$). Since the system cannot execute the two incoming instructions simultaneously, there is an information demon making the choice for the system. The choices of the demon are denoted by the labels of the baths, $\mathcal{S} = \{s : s = a, b\}$. The demon observes the state of the system and applies feedback: the (conditional) probability distribution of the demon's choices reads $\{d(s' \mid x, s) : d(s' \mid x, s) \ge 0,\ \sum_{s' \in \mathcal{S}} d(s' \mid x, s) = 1,\ x \in \mathcal{X},\ s \in \mathcal{S}\}$. As before, we use X, S and $Z = (X, S)$ to denote the processes of the system, the demon and the corresponding joint chain (a BMC), respectively.
The transition probabilities of the BMC read:
$$q_z(z' \mid z) = q_z(x', s' \mid x, s) = d(s' \mid x, s)\, \epsilon_s(x'), \tag{A15}$$
where $\epsilon_s(x')$ denotes the probability of the instruction $x'$ from bath $s = a, b$; as in Section 6, the executed instruction comes from the bath recorded in the current state, while $s'$ is the demon's fresh choice. We assume that there is a unique stationary distribution $\pi_z$ of Z such that:
$$\pi_z(z') = \sum_z q_z(z' \mid z)\, \pi_z(z). \tag{A16}$$
The stationary distributions of S and X then read:
$$\pi_s(s) = \sum_x \pi_z(x, s), \qquad \pi_x(x) = \sum_s \pi_z(x, s). \tag{A17}$$
The behavior of the demon can be seen as a Markovian process in the steady state. The corresponding transition probabilities of the demon read:
$$q_s(s' \mid s) = \frac{1}{\pi_s(s)} \sum_x d(s' \mid x, s)\, \pi_z(x, s). \tag{A18}$$
It can be verified that $\pi_s$ is the unique stationary distribution of S. However, the dynamics of the system X generally behaves as a non-Markovian process.
To characterize the time-irreversibility of Z, X and S, we use the definition of EPR in Equation (15) and have:
$$R_z = \frac{1}{2} \sum_{z, z'} J_z(z \to z') \log \frac{q_z(z' \mid z)}{q_z(z \mid z')}, \qquad R_s = \frac{1}{2} \sum_{s, s'} J_s(s \to s') \log \frac{q_s(s' \mid s)}{q_s(s \mid s')} = 0, \qquad R_x = \lim_{T \to \infty} \frac{1}{T} \sum_{X^T} P(X^T) \log \frac{P(X^T)}{P(\tilde{X}^T)}, \tag{A19}$$
where:
$$P(X^T) = \sum_{S^T} P\big( Z^T = (X^T, S^T) \big). \tag{A20}$$
To quantify the correlation between the system and demon, we use the definition of MIR in Equation (12).
We are also interested in the time-irreversible part of the MIR, $I_B(X, S)$, which influences the EPR of the system, $R_x$. This can be seen from Equation (A14):
$$R_x = R_z - R_s - 2\, I_B(X, S). \tag{A21}$$
We use numerical simulations that evaluate $R_x$, $I(X, S)$ and $I_B(X, S)$ directly from typical sequences of Z (see [7,8]). The corresponding estimators are given by:
$$R_x \approx \frac{1}{T} \log \frac{P(X^T)}{P(\tilde{X}^T)}, \qquad I(X, S) \approx \frac{1}{T} \log \frac{P(Z^T)}{P(X^T)\, P(S^T)}, \qquad I_B(X, S) \approx \frac{1}{2T} \log \frac{P(Z^T)}{P(X^T)\, P(S^T)} - \frac{1}{2T} \log \frac{P(\tilde{Z}^T)}{P(\tilde{X}^T)\, P(\tilde{S}^T)}, \quad \text{for large } T, \tag{A22}$$
where $Z^T = (X^T, S^T)$ is a typical sequence of Z (hence, $X^T$ and $S^T$ are typical sequences of X and S, respectively). The convergence of these numerical estimates can be observed as T increases. To confirm the relation $R_x = R_z - R_s - 2 I_B(X, S)$, we use different typical sequences for calculating $R_x$ and $I_B(X, S)$, respectively. $R_z$ and $R_s$ are calculated using the corresponding analytical expressions given above.
For the numerical simulations, we randomly choose two groups of parameters: the probabilities of the instructions of the baths, $\epsilon_a$ and $\epsilon_b$, and the probabilities of the demon's choices, d (see Tables A1 and A2). We evaluate $R_x$, $I(X, S)$ and $I_B(X, S)$ for both groups. The numerical results are listed in Table A3.
Table A1. Two groups of ε_a and ε_b.

           {ε_a(x=0), ε_a(x=1), ε_a(x=2)}    {ε_b(x=0), ε_b(x=1), ε_b(x=2)}
Group 1    {0.2344, 0.2730, 0.4926}          {0.4217, 0.4094, 0.1689}
Group 2    {0.1305, 0.3972, 0.4723}          {0.3358, 0.0010, 0.6633}
Table A2. Two groups of d.

           {d(s′=a|x=0,s=a), d(s′=b|x=0,s=a)}    {d(s′=a|x=0,s=b), d(s′=b|x=0,s=b)}
Group 1    {0.3844, 0.6156}                      {0.6811, 0.3189}
Group 2    {0.1072, 0.8928}                      {0.7473, 0.2527}

           {d(s′=a|x=1,s=a), d(s′=b|x=1,s=a)}    {d(s′=a|x=1,s=b), d(s′=b|x=1,s=b)}
Group 1    {0.5195, 0.4805}                      {0.8088, 0.1912}
Group 2    {0.6595, 0.3405}                      {0.1600, 0.8400}

           {d(s′=a|x=2,s=a), d(s′=b|x=2,s=a)}    {d(s′=a|x=2,s=b), d(s′=b|x=2,s=b)}
Group 1    {0.3775, 0.6225}                      {0.3340, 0.6660}
Group 2    {0.0232, 0.9768}                      {0.0814, 0.9186}
Table A3. Numerical results of R_z, R_x, I(X,S) and I_B(X,S).

           R_z       R_x       I(X,S)    I_B(X,S)
Group 1    0.0645    0.0018    0.0885    0.0313
Group 2    0.5485    0.1291    0.3385    0.2097
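As an illustration, the following minimal sketch implements the estimators of Equation (A22) for the Group 1 parameters of Tables A1 and A2. It assumes the update order reconstructed in Equation (A15) (the executed instruction comes from the bath recorded in the current state) and uses our own helper names; if this reconstruction matches the convention behind the tables, the printed estimates should drift toward the Group 1 row of Table A3 as T grows:

```python
import numpy as np

nx, ns = 3, 2                                  # 3 system states; baths a = 0, b = 1
eps = np.array([[0.2344, 0.2730, 0.4926],      # eps[s, x']: Table A1, Group 1
                [0.4217, 0.4094, 0.1689]])
dmn = np.zeros((ns, nx, ns))                   # dmn[s', x, s]: Table A2, Group 1
dmn[:, 0, 0] = [0.3844, 0.6156]; dmn[:, 0, 1] = [0.6811, 0.3189]
dmn[:, 1, 0] = [0.5195, 0.4805]; dmn[:, 1, 1] = [0.8088, 0.1912]
dmn[:, 2, 0] = [0.3775, 0.6225]; dmn[:, 2, 1] = [0.3340, 0.6660]

# joint transition Q[(x', s'), (x, s)] with z = x * ns + s, per Equation (A15)
Q = np.zeros((nx * ns, nx * ns))
for x in range(nx):
    for s in range(ns):
        for xp in range(nx):
            for sp in range(ns):
                Q[xp * ns + sp, x * ns + s] = dmn[sp, x, s] * eps[s, xp]

w, v = np.linalg.eig(Q)
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()
pi_s = pi.reshape(nx, ns).sum(axis=0)
qs = np.einsum('pxs,xs->ps', dmn, pi.reshape(nx, ns)) / pi_s[None, :]  # Eq. (A18)

rng = np.random.default_rng(0)
T = 100_000
zseq = np.empty(T, dtype=int)
z = rng.choice(nx * ns, p=pi)
for t in range(T):                             # sample one typical sequence of Z
    zseq[t] = z
    z = rng.choice(nx * ns, p=Q[:, z])
xseq, sseq = zseq // ns, zseq % ns

def log_prob_z(zs):
    return np.log(pi[zs[0]]) + np.log(Q[zs[1:], zs[:-1]]).sum()

def log_prob_s(ss):
    return np.log(pi_s[ss[0]]) + np.log(qs[ss[1:], ss[:-1]]).sum()

def log_prob_x(xs):
    """log P(X^T) by the forward algorithm, marginalizing demon paths (Eq. (A20))."""
    Q4 = Q.reshape(nx, ns, nx, ns)             # indices (x', s', x, s)
    alpha = pi.reshape(nx, ns)[xs[0], :].copy()
    logp = np.log(alpha.sum()); alpha /= alpha.sum()
    for t in range(1, len(xs)):
        alpha = Q4[xs[t], :, xs[t - 1], :] @ alpha
        c = alpha.sum(); logp += np.log(c); alpha /= c
    return logp

lpz, lpx, lps = log_prob_z(zseq), log_prob_x(xseq), log_prob_s(sseq)
lpz_b, lpx_b, lps_b = log_prob_z(zseq[::-1]), log_prob_x(xseq[::-1]), log_prob_s(sseq[::-1])

Rx_est = (lpx - lpx_b) / T
I_est = (lpz - lpx - lps) / T
IB_est = (lpz - lpx - lps - (lpz_b - lpx_b - lps_b)) / (2 * T)
print(Rx_est, I_est, IB_est)                   # compare with the Group 1 row of Table A3
```

The forward algorithm in log_prob_x is what makes $R_x$ computable even though X itself is non-Markovian: it sums over all demon paths compatible with the observed system sequence.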

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  2. Ball, F.; Yeo, G.F. Lumpability and Marginalisability for Continuous-Time Markov Chains. J. Appl. Probab. 1993, 30, 518–528.
  3. Wei, W.; Wang, B.; Towsley, D. Continuous-time hidden Markov models for network performance evaluation. Perform. Eval. 2002, 49, 129–146.
  4. Strasberg, P.; Schaller, G.; Brandes, T.; Esposito, M. Thermodynamics of a physical model implementing a Maxwell demon. Phys. Rev. Lett. 2013, 110, 040601.
  5. Koski, J.V.; Kutvonen, A.; Khaymovich, I.M.; Ala-Nissila, T.; Pekola, J.P. On-Chip Maxwell's Demon as an Information-Powered Refrigerator. Phys. Rev. Lett. 2015, 115, 260602.
  6. McGrath, T.; Jones, N.S.; ten Wolde, P.R.; Ouldridge, T.E. Biochemical Machines for the Interconversion of Mutual Information and Work. Phys. Rev. Lett. 2017, 118, 028101.
  7. Mark, B.L.; Ephraim, Y. An EM algorithm for continuous-time bivariate Markov chains. Comput. Stat. Data Anal. 2013, 57, 504–517.
  8. Ephraim, Y.; Mark, B.L. Bivariate Markov Processes and Their Estimation. Found. Trends Signal Process. 2012, 6, 1–95.
  9. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006; ISBN 978-0-471-24195-9.
  10. Parrondo, J.M.R.; Horowitz, J.M.; Sagawa, T. Thermodynamics of information. Nat. Phys. 2015, 11, 131–139.
  11. Sagawa, T.; Ueda, M. Fluctuation theorem with information exchange: Role of correlations in stochastic thermodynamics. Phys. Rev. Lett. 2012, 109, 180602.
  12. Horowitz, J.M.; Esposito, M. Thermodynamics with Continuous Information Flow. Phys. Rev. X 2014, 4, 031015.
  13. Barato, A.C.; Hartich, D.; Seifert, U. Rate of Mutual Information Between Coarse-Grained Non-Markovian Variables. J. Stat. Phys. 2013, 153, 460–478.
  14. Wang, J.; Xu, L.; Wang, E.K. Potential landscape and flux framework of nonequilibrium networks: Robustness, dissipation, and coherence of biochemical oscillations. Proc. Natl. Acad. Sci. USA 2008, 105, 12271–12276.
  15. Wang, J. Landscape and flux theory of non-equilibrium dynamical systems with application to biology. Adv. Phys. 2015, 64, 1–137.
  16. Li, C.H.; Wang, E.K.; Wang, J. Potential flux landscapes determine the global stability of a Lorenz chaotic attractor under intrinsic fluctuations. J. Chem. Phys. 2012, 136, 194108.
  17. Schnakenberg, J. Network theory of microscopic and macroscopic behavior of master equation systems. Rev. Mod. Phys. 1976, 48, 571–585.
  18. Zia, R.K.P.; Schmittmann, B. Probability currents as principal characteristics in the statistical mechanics of non-equilibrium steady states. J. Stat. Mech. Theory Exp. 2007, 2007.
  19. Maes, C.; Netočný, K. Canonical structure of dynamical fluctuations in mesoscopic nonequilibrium steady states. Europhys. Lett. 2008, 82.
  20. Qian, M.P.; Qian, M. Circulation for recurrent Markov chains. Probab. Theory Relat. Fields 1982, 59, 203–210.
  21. Zhang, Z.D.; Wang, J. Curl flux, coherence, and population landscape of molecular systems: Nonequilibrium quantum steady state, energy (charge) transport, and thermodynamics. J. Chem. Phys. 2014, 140, 245101.
  22. Zhang, Z.D.; Wang, J. Landscape, kinetics, paths and statistics of curl flux, coherence, entanglement and energy transfer in non-equilibrium quantum systems. New J. Phys. 2015, 17, 043053.
  23. Luo, X.S.; Xu, L.F.; Han, B.; Wang, J. Funneled potential and flux landscapes dictate the stabilities of both the states and the flow: Fission yeast cell cycle. PLoS Comput. Biol. 2017, 13, e1005710.
  24. Gray, R.; Kieffer, J. Mutual information rate, distortion, and quantization in metric spaces. IEEE Trans. Inf. Theory 1980, 26, 412–422.
  25. Maes, C.; Redig, F.; van Moffaert, A. On the definition of entropy production, via examples. J. Math. Phys. 2000, 41, 1528–1554.
  26. Gaspard, P. Time-reversed dynamical entropy and irreversibility in Markovian random processes. J. Stat. Phys. 2004, 117, 599–615.
  27. Feng, H.D.; Wang, J. Potential and flux decomposition for dynamical systems and non-equilibrium thermodynamics: Curvature, gauge field, and generalized fluctuation-dissipation theorem. J. Chem. Phys. 2011, 135, 234511.
  28. Polettini, M. Nonequilibrium thermodynamics as a gauge theory. Europhys. Lett. 2012, 97, 30003.
