Article

Information Exchange Fluctuation Theorem Under Coarse-Graining

Department of Mathematics, Kwangwoon University, 20 Kwangwoon-ro, Seoul 01897, Republic of Korea
Mathematics 2025, 13(16), 2607; https://doi.org/10.3390/math13162607
Submission received: 20 July 2025 / Revised: 13 August 2025 / Accepted: 13 August 2025 / Published: 14 August 2025

Abstract

The fluctuation theorem for information exchange, originally established by Sagawa and Ueda, provides a fundamental framework for understanding the role of correlations in coupled classical stochastic systems. Building upon this foundation, Jinwoo demonstrated that the pointwise mutual information between correlated subsystems captures entropy production as a state function during coupling processes. In this study, we investigate the robustness of this information-theoretic fluctuation theorem under coarse-graining in coupled classical fluctuating systems. We rigorously prove that the fluctuation theorem remains invariant under arbitrary coarse-graining transformations and derive hierarchical relationships between information measures across different scales, thereby establishing its fundamental character as independent of the level of system description. Our results demonstrate that the relationship between information exchange and entropy production is preserved across different scales of observation, providing deeper insights into the thermodynamic foundations of information processing in classical stochastic systems.

1. Introduction

Stochastic thermal fluctuations constitute a cornerstone in the operation of molecular machinery and systems operating far from thermodynamic equilibrium. These fluctuations facilitate energy transfer between molecular components and their surroundings, permitting molecules to surmount energetic barriers and achieve stable low-energy configurations. The inherent randomness of molecular motion transforms thermodynamic quantities such as heat and work into stochastic variables. During the past twenty years, fluctuation theorems have emerged as a powerful class of exact relations revealing universal principles that constrain these random thermodynamic quantities in processes driving systems away from equilibrium.
The theoretical foundation was established through seminal contributions including the Evans–Searles transient fluctuation theorem (1994), followed by the groundbreaking Jarzynski equality [1] (1997) and the Crooks fluctuation theorem [2] (1998), which spawned numerous theoretical extensions and generalizations. Seifert’s formulation extended thermodynamic laws to individual stochastic trajectories [3], while Hatano and Sasa developed frameworks for transitions between non-equilibrium steady states [4]. Jinwoo and Tanaka [5,6] demonstrated that fluctuation theorems hold even within an ensemble of trajectories conditioned on a final (micro/meso) state, establishing that local free energy captures fluctuating work within the path ensemble reaching a specific (micro/meso) state during an externally driven non-equilibrium process. Single-molecule experimental studies have provided rigorous validation of these theoretical predictions, yielding valuable insights into biomolecular dynamics [7,8,9,10,11,12].
Contemporary research in fluctuation theorems for information exchange within coupled classical stochastic systems has established a comprehensive theoretical framework linking information theory with non-equilibrium statistical mechanics [13,14,15,16,17]. This rapidly advancing field, built upon the pioneering contributions of Sagawa and Ueda, has witnessed substantial theoretical developments and experimental validations in recent years [18,19]. The discipline investigates fundamental principles governing the energetic requirements of information manipulation, the influence of statistical correlations on entropy generation, and the persistence of thermodynamic laws across different observational scales.
This theoretical framework has become particularly relevant for understanding biological systems, as living organisms have evolved sophisticated information processing architectures essential for their survival and reproduction [20,21,22]. These biological systems demonstrate remarkable capabilities in detecting environmental chemical signals [23,24], propagating information across complex signaling pathways [25,26,27] and regulating genetic expression through molecular communication networks [28,29]. Cellular systems have developed the ability to perform temporal integration by encoding environmental conditions into internal molecular configurations, thereby minimizing sensory uncertainties [30,31]. Understanding the thermodynamic implications of information processing is therefore essential for comprehending the intricate mechanisms underlying biological computation and how fluctuation theorems govern the fundamental limits of these processes.
Sagawa and Ueda incorporated informational considerations into fluctuation theorem formalism [15]. Their work established a fluctuation theorem for information exchange processes, creating a unified description of measurement and feedback control phenomena in non-equilibrium settings [17]. Their theoretical framework analyzed scenarios where system X undergoes evolution dependent on the instantaneous state y of system Y under the assumption that Y remains stationary during X’s temporal evolution. They rigorously demonstrated that correlation establishment between subsystems inevitably generates entropy production, providing definitive resolution to Maxwell’s demon paradox and establishing mathematical foundations for understanding energy-information conversion mechanisms.
Subsequent theoretical advances have extended these foundational results. Particularly significant is Jinwoo’s work that eliminated the static constraint imposed by Sagawa and Ueda, demonstrating that identical fluctuation theorem forms remain valid when both subsystems X and Y undergo simultaneous dynamic evolution [32]. Jinwoo also demonstrated that pointwise mutual information between a coupled state captures the entropy production within the ensemble of paths that reach the coupled state [33].
The present investigation examines the stability of information-theoretic fluctuation theorems under coarse-graining in coupled classical stochastic systems. Establishing how these fundamental relationships maintain their validity across multiple observational scales is critical for demonstrating the universal nature of information-thermodynamic principles and their applicability to practical systems where complete microscopic details may be experimentally inaccessible or computationally intractable.
In this study, we investigate the robustness of information-theoretic fluctuation theorems under coarse-graining in coupled classical stochastic systems. Building upon the foundational framework established by Sagawa and Ueda [17] and the subsequent generalization by Jinwoo [32,33], our research addresses a fundamental question: do the relationships between information exchange and entropy production remain invariant when the system description is coarse-grained to different levels of detail?
The motivation for this investigation stems from the practical reality that physical systems are typically observed and described at various scales, from microscopic molecular dynamics to mesoscopic collective behaviors. Understanding whether fluctuation theorems maintain their validity across these different levels of description has profound implications for both theoretical understanding and practical applications. If these theorems are indeed robust under coarse-graining, it would establish their fundamental character as scale-independent principles governing information processing in thermodynamic systems.
Our approach rigorously proves that the fluctuation theorem for information exchange remains invariant under coarse-graining operations, thereby establishing its fundamental character as independent of the level of system description. The results demonstrate that the relationship between information exchange and entropy production is preserved across different scales of observation, providing deeper insights into the thermodynamic foundations of information processing in classical stochastic systems.

2. Theoretical Background and Framework

2.1. Information Exchange Fluctuation Theorem

We examine a system weakly coupled to a thermal reservoir characterized by inverse temperature $\beta:=1/(k_BT)$, where $k_B$ denotes the Boltzmann constant and $T$ represents the reservoir temperature. The external protocol $\mu_t$ perturbs the system from its equilibrium state throughout the time interval $0\le t\le\tau$. We denote by $\mathcal{T}$ the complete ensemble of microscopic trajectories, while $\mathcal{T}_{x_\tau}$ represents the subset of trajectories that terminate at the specific microstate $x_\tau$ at the final time $\tau$.
We examine the Sagawa–Ueda fluctuation theorem for information exchange [17], focusing particularly on its generalized formulation [32]. For this analysis, we study two subsystems $X$ and $Y$ immersed in a thermal reservoir characterized by inverse temperature $\beta$. Throughout the protocol $\mu_t$, these subsystems undergo mutual interaction and coupled evolution. Under these conditions, the information exchange fluctuation theorem takes the form:
$$\left\langle e^{-\Pi+\Delta M}\right\rangle_{\mathcal{T}}=1,\tag{1}$$
where the angular brackets denote ensemble averaging across all trajectories $\mathcal{T}$ of the coupled system, $\Pi$ represents the total entropy production encompassing contributions from system $X$, system $Y$, and the thermal bath, while $\Delta M$ quantifies the variation in mutual information linking $X$ and $Y$.
A trajectory-conditioned variant of Equation (1) reads:
$$I_\tau(x_\tau,y_\tau)=-\ln\left\langle e^{-(\Pi+I_0)}\right\rangle_{x_\tau,y_\tau},\tag{2}$$
where the averaging is performed over the restricted ensemble of trajectories terminating at the coupled state $(x_\tau,y_\tau)$ at the final time $\tau$, and $I_t$ (for $0\le t\le\tau$) denotes the pointwise mutual information between the microstates of systems $X$ and $Y$ at time $t$ [33].

2.2. System Description and Definitions

Consider two finite classical stochastic systems $X$ and $Y$ that are weakly coupled to a thermal reservoir characterized by inverse temperature $\beta$. Throughout the time interval $0\le t\le\tau$, an external driving parameter $\mu_t$ may force either or both subsystems into non-equilibrium states [34,35,36]. The temporal evolution of systems $X$ and $Y$ under the influence of the time-dependent protocol $\mu_t$ is governed by classical stochastic dynamics, resulting in stochastic trajectories $\{x_t\}$ and $\{y_t\}$.
We note that our framework operates within stochastic thermodynamics, where we consider trajectory ensembles—collections of individual stochastic realizations { x t , y t } under non-equilibrium driving μ t —rather than classical equilibrium ensembles (NVT, NPT). While the system is coupled to a thermal reservoir, the external protocol drives it away from equilibrium, making our approach ensemble-agnostic in the traditional thermodynamic sense.
We define the reverse process $\mu'_t:=\mu_{\tau-t}$ for $0\le t\le\tau$, where the external parameter is time-reversed [13,14]. The initial probability distribution $P'_0(x,y)$ for the reverse process should be the final probability distribution $P_\tau(x,y)$ for the forward process, so that we have
$$P'_0(x)=\int P'_0(x,y)\,dy=\int P_\tau(x,y)\,dy=P_\tau(x),\qquad P'_0(y)=\int P'_0(x,y)\,dx=\int P_\tau(x,y)\,dx=P_\tau(y).\tag{3}$$
For each trajectory $\{x_t\}$ and $\{y_t\}$ for $0\le t\le\tau$, we define the time-reversed conjugate as follows:
$$\{x'_t\}:=\{x^*_{\tau-t}\},\qquad\{y'_t\}:=\{y^*_{\tau-t}\},\tag{4}$$
where $*$ denotes momentum reversal.
Due to the inherent stochasticity of the trajectories, we perform multiple realizations of the driving protocol $\mu_t$, starting from an initial joint probability distribution $P_0(x,y)$ defined over the complete set of microstates $(x,y)$ for the combined system. Through this ensemble of realizations, the coupled subsystems evolve to produce a time-dependent joint probability distribution $P_t(x,y)$ throughout the duration $0\le t\le\tau$. We assume $P_0(x,y)\neq 0$ for all $(x,y)$, ensuring that $P_t(x,y)\neq 0$, $P_t(x)\neq 0$, and $P_t(y)\neq 0$ hold for all microstates throughout the evolution.
The entropy production $\Pi$ generated during the protocol $\mu_t$ over the time period $0\le t\le\tau$ is defined as $\Pi:=\Delta\xi_{XY}+\beta H$, where $\Delta\xi_{XY}=\Delta\xi_X+\Delta\xi_Y$ represents the stochastic entropy changes, with $\Delta\xi_X:=-\ln P_\tau(x_\tau)+\ln P_0(x_0)$ and $\Delta\xi_Y:=-\ln P_\tau(y_\tau)+\ln P_0(y_0)$, while $H$ denotes the heat transferred to the thermal bath [2,3].
The pointwise mutual information $I_t$ at temporal instance $t$, characterizing the correlation between microstates $x_t$ and $y_t$, is defined as:
$$I_t(x_t,y_t):=\ln\frac{P_t(x_t,y_t)}{P_t(x_t)\,P_t(y_t)}.\tag{5}$$
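Equation (5) can be evaluated directly once the joint distribution is known. As a minimal illustration (the $2\times 2$ joint distribution below is invented for the example, not taken from the article), the following Python sketch computes the pointwise mutual information for every microstate pair and checks that its ensemble average, the mutual information, is non-negative even though individual values $I_t(x_t,y_t)$ may be negative:

```python
import numpy as np

# Illustrative joint distribution P_t(x, y) over a 2 x 2 microstate space.
P_xy = np.array([[0.30, 0.10],
                 [0.05, 0.55]])

P_x = P_xy.sum(axis=1)  # marginal P_t(x)
P_y = P_xy.sum(axis=0)  # marginal P_t(y)

# Pointwise mutual information: I_t(x, y) = ln[ P(x, y) / (P(x) P(y)) ]
I = np.log(P_xy / np.outer(P_x, P_y))

# Averaging I over P_t(x, y) gives the mutual information, which is
# non-negative, while individual entries of I can be negative.
MI = float((P_xy * I).sum())
print(MI, (I < 0).any())
```

The average is non-negative for any joint distribution, while correlation-decreasing microstate pairs carry negative pointwise values.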

2.3. Coarse-Graining Transformations

In this study, we consider coarse-graining transformations $a$ and $b$, which are functions of the microstates $x$ and $y$, respectively. These transformations represent state-space coarse-graining rather than coarse-grained force fields. Examples include the following:
(i) Spatial binning: $a(x)=\mathrm{bin}(x)$, where molecular positions $x$ are discretized into spatial bins of a given width, commonly used in analyzing diffusion processes and spatial correlations in biomolecular systems;
(ii) Energy-based grouping: $a(x)=E(x)$, where microstates $x$ are classified according to their energy levels $E$, particularly relevant for studying conformational transitions in proteins and folding dynamics;
(iii) Reaction coordinate projection: $a(x)=f(x)$, where $f$ represents a collective variable such as end-to-end distance in polymers, radius of gyration in protein folding, or dihedral angles in molecular conformational changes;
(iv) Cluster-based coarse-graining: $a(x)=c$, where $c$ denotes the cluster index obtained from unsupervised clustering algorithms applied to molecular configurations, widely employed in Markov state model construction for biomolecular dynamics;
(v) Order parameter discretization: $a(x)=\mathrm{sign}(\psi(x))$, where $\psi(x)$ is an order parameter such as local crystallinity in phase transitions or helical content in protein secondary structure formation.
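As a concrete sketch of items (i) and (v) (with invented Gaussian sample data standing in for molecular configurations; the bin width and the identity order parameter are assumptions of the example), such coarse-graining maps are plain functions of the microstate, and the coarse-grained distribution $P_t(a)$ is obtained by summing microstate probabilities over each preimage $\{x:a(x)=a\}$, which empirically amounts to a normalized histogram:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in microstates: 1D positions sampled from a fluctuating system.
x = rng.normal(loc=0.0, scale=1.0, size=10_000)

# (i) Spatial binning: a(x) = bin index of x for a fixed bin width.
def a_bin(x, width=0.5):
    return np.floor(x / width).astype(int)

# (v) Order-parameter discretization: a(x) = sign(psi(x)), with the
# identity used here as a stand-in for the order parameter psi.
def a_sign(x):
    return np.sign(x).astype(int)

# Empirically, P_t(a) is the normalized histogram of coarse-grained labels,
# i.e., the total probability of the preimage {x : a(x) = a}.
labels, counts = np.unique(a_bin(x), return_counts=True)
P_a = counts / counts.sum()
print(len(labels), float(P_a.sum()))
```

The same pattern applies to any of the transformations listed above: replace `a_bin` by the desired map from microstates to labels.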
It is important to distinguish our approach from coarse-grained force fields commonly used in molecular simulations. While coarse-grained force fields involve approximating the underlying Hamiltonian, our coarse-graining refers to mathematical transformations that map the full microscopic state space onto a reduced dimensional space while preserving the original dynamics and essential thermodynamic information.

3. Main Results

3.1. Coarse-Grained Information Exchange Fluctuation Theorems

The central contributions of this work are the following three fluctuation theorems that demonstrate the invariance of information-entropy relationships under arbitrary coarse-graining transformations:
Theorem 1. 
(Coarse-grained information exchange fluctuation theorem):
$$I_\tau(a,b)=-\ln\left\langle e^{-(\Pi+I_0)}\right\rangle_{a,b},\tag{6}$$
where the angular brackets denote the ensemble average over all conditioned trajectories that terminate at coupled microstates $(x_\tau,y_\tau)$ satisfying $a(x_\tau)=a$ and $b(y_\tau)=b$.
Theorem 2. 
(Hierarchical mutual information relation):
$$\left\langle e^{-I_\tau(x_\tau,y_\tau)}\right\rangle_{a(x_\tau)=a,\,b(y_\tau)=b}=e^{-I_\tau(a,b)},\tag{7}$$
where the angular brackets represent the ensemble average taken over the conditional probability distribution $P_\tau(x_\tau,y_\tau\,|\,a,b)$.
Theorem 3. 
(Work fluctuation theorem for information exchange):
$$\left\langle e^{-\beta W}\right\rangle_{a,b}=e^{-\beta\Delta\Phi_X(a,\tau)-\beta\Delta\Phi_Y(b,\tau)-I_\tau(a,b)},\tag{8}$$
where $\Delta\Phi_X(a,\tau)$ and $\Delta\Phi_Y(b,\tau)$ are the differences between the non-equilibrium and equilibrium free energies for the coarse-grained states.
These theorems establish that the fundamental relationship between pointwise mutual information and entropy production is preserved across different scales of observation, while demonstrating hierarchical organization of information measures across observational scales.
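Theorem 2 is a purely distributional identity, so it can be checked numerically without simulating any dynamics. The sketch below (a randomly generated $6\times 6$ joint distribution and a block coarse-graining, both invented for the example) verifies that averaging $e^{-I_\tau(x_\tau,y_\tau)}$ over $P_\tau(x_\tau,y_\tau\,|\,a,b)$ reproduces $e^{-I_\tau(a,b)}=P_\tau(a)P_\tau(b)/P_\tau(a,b)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random joint distribution over a 6 x 6 microstate space (illustrative).
P = rng.random((6, 6))
P /= P.sum()
Px, Py = P.sum(axis=1), P.sum(axis=0)

# Coarse-graining: group microstates into blocks of three, a = x // 3, b = y // 3.
group = np.arange(6) // 3

for a in range(2):
    for b in range(2):
        sel_x = group == a
        sel_y = group == b
        P_block = P[np.ix_(sel_x, sel_y)]  # P(x, y) restricted to (a, b)
        P_ab = P_block.sum()               # P_tau(a, b)
        # <e^{-I_tau(x,y)}>_{a,b}: average of P(x)P(y)/P(x,y) over P(x, y | a, b)
        e_negI = np.outer(Px[sel_x], Py[sel_y]) / P_block
        lhs = ((P_block / P_ab) * e_negI).sum()
        # e^{-I_tau(a, b)} = P(a) P(b) / P(a, b)
        rhs = Px[sel_x].sum() * Py[sel_y].sum() / P_ab
        assert np.isclose(lhs, rhs)

print("Theorem 2 holds for every coarse-grained pair (a, b)")
```

The assertion passes for any joint distribution and any partition of the state space, mirroring the marginalization step that yields Equation (19) in the proof below.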

3.2. Mathematical Foundation and Assumptions

Our derivation relies fundamentally on the microscopic reversibility condition [2,37,38,39]. This condition is valid under three essential assumptions that must be explicitly stated:
(i) Stochastic dynamics: The system evolution is governed by stochastic processes with well-defined transition probabilities.
(ii) Markovian property: The dynamics are memoryless, meaning that future evolution depends only on the current state, not on the history of how that state was reached.
(iii) Microscopic reversibility: The underlying dynamics satisfy detailed balance in equilibrium, ensuring that forward and reverse processes are related through the dissipated heat.
These assumptions ensure that forward and reverse path probabilities are connected through the heat exchange with the reservoir, forming the mathematical foundation for all fluctuation relations derived in this work. The microscopic reversibility condition reads:
$$\frac{P_{\mathcal{T}}(\{x'_t\},\{y'_t\}\,|\,x'_0,y'_0)}{P_{\mathcal{T}}(\{x_t\},\{y_t\}\,|\,x_0,y_0)}=e^{-\beta H},\tag{9}$$
where primed quantities refer to the time-reversed conjugates.

3.3. Proof of Theorems 1 and 2

Let $\mathcal{T}$ be the set of all trajectories $\{x_t\}$ and $\{y_t\}$, and $\mathcal{T}_{x_\tau,y_\tau}$ that of trajectories conditioned at coupled microstates $(x_\tau,y_\tau)$ at time $\tau$. By the definition of the time-reversed conjugate, the set $\mathcal{T}'$ of all time-reversed trajectories is identical to $\mathcal{T}$, and the set $\mathcal{T}'_{x'_0,y'_0}$ of time-reversed trajectories conditioned at $x'_0$ and $y'_0$ is identical to $\mathcal{T}_{x_\tau,y_\tau}$. Thus, we may use the same notation for both forward and backward pairs. We note that the path probabilities $P_{\mathcal{T}}$ and $P_{\mathcal{T}_{x_\tau,y_\tau}}$ are normalized over all paths in $\mathcal{T}$ and $\mathcal{T}_{x_\tau,y_\tau}$, respectively.
Now we consider the ensemble of paths $\mathcal{T}_{a,b}$ and the normalized path probability $P_{\mathcal{T}_{a,b}}$ such that
$$\mathcal{T}_{a,b}=\bigcup_{a(x_\tau)=a,\,b(y_\tau)=b}\mathcal{T}_{x_\tau,y_\tau}\quad\text{and}\quad\int_{\mathcal{T}_{a,b}}P_{\mathcal{T}_{a,b}}\big(\{x_t\},\{y_t\}\big)\,d\{x_t\}\,d\{y_t\}=1.\tag{10}$$
We restrict our attention to those paths that are in $\mathcal{T}_{a,b}$ and divide both the numerator and the denominator of the left-hand side of Equation (9) by $P_\tau(a,b)$. Since $P'_0(a,b)$ is identical to $P_\tau(a,b)$, Equation (9) becomes:
$$\frac{P_{\mathcal{T}_{a,b}}(\{x'_t\},\{y'_t\}\,|\,x'_0,y'_0)}{P_{\mathcal{T}_{a,b}}(\{x_t\},\{y_t\}\,|\,x_0,y_0)}=e^{-\beta H},\tag{11}$$
since the path probabilities are now normalized over $\mathcal{T}_{a,b}$. Then, we have the following:
$$\frac{P_{\mathcal{T}_{a,b}}(\{x'_t\},\{y'_t\})}{P_{\mathcal{T}_{a,b}}(\{x_t\},\{y_t\})}=\frac{P_{\mathcal{T}_{a,b}}(\{x'_t\},\{y'_t\}\,|\,x'_0,y'_0)}{P_{\mathcal{T}_{a,b}}(\{x_t\},\{y_t\}\,|\,x_0,y_0)}\cdot\frac{P'_0(x'_0,y'_0)}{P_0(x_0,y_0)}\tag{12}$$
$$=\frac{P_{\mathcal{T}_{a,b}}(\{x'_t\},\{y'_t\}\,|\,x'_0,y'_0)}{P_{\mathcal{T}_{a,b}}(\{x_t\},\{y_t\}\,|\,x_0,y_0)}\cdot\frac{P'_0(x'_0,y'_0)}{P'_0(x'_0)P'_0(y'_0)}\cdot\frac{P_0(x_0)P_0(y_0)}{P_0(x_0,y_0)}\times\frac{P'_0(x'_0)}{P_0(x_0)}\cdot\frac{P'_0(y'_0)}{P_0(y_0)}\tag{13}$$
$$=\exp\{-\beta H+I_\tau(x_\tau,y_\tau)-I_0(x_0,y_0)-\Delta\xi_X-\Delta\xi_Y\}\tag{14}$$
$$=\exp\{-\Pi+I_\tau(x_\tau,y_\tau)-I_0(x_0,y_0)\}.\tag{15}$$
To obtain Equation (13) from Equation (12), we multiply Equation (12) by $\frac{P'_0(x'_0)P'_0(y'_0)}{P'_0(x'_0)P'_0(y'_0)}$ and $\frac{P_0(x_0)P_0(y_0)}{P_0(x_0)P_0(y_0)}$, each of which equals 1. We obtain Equation (14) by applying Equations (3) and (11) to Equation (13), together with the definitions of the pointwise mutual information and the stochastic entropy changes.
Now we multiply both sides of Equation (15) by $e^{-I_\tau(x_\tau,y_\tau)}$ and $P_{\mathcal{T}_{a,b}}(\{x_t\},\{y_t\})$, and take the integral over all paths in $\mathcal{T}_{a,b}$:
$$\left\langle e^{-(\Pi+I_0)}\right\rangle_{a,b}:=\int_{\{x_t\},\{y_t\}\in\mathcal{T}_{a,b}}e^{-(\Pi+I_0)}\,P_{\mathcal{T}_{a,b}}(\{x_t\},\{y_t\})\,d\{x_t\}\,d\{y_t\}=\int_{\{x'_t\},\{y'_t\}\in\mathcal{T}_{a,b}}e^{-I_\tau(x_\tau,y_\tau)}\,P_{\mathcal{T}_{a,b}}(\{x'_t\},\{y'_t\})\,d\{x'_t\}\,d\{y'_t\}.\tag{16}$$
Noting that
$$P_\tau(a,b)\,P_{\mathcal{T}_{a,b}}=P_\tau(x_\tau,y_\tau)\,P_{\mathcal{T}_{x_\tau,y_\tau}},\tag{17}$$
we obtain Equations (6) and (7) as follows:
$$\left\langle e^{-(\Pi+I_0)}\right\rangle_{a,b}=\int_{a(x_\tau)=a,\,b(y_\tau)=b}e^{-I_\tau(x_\tau,y_\tau)}\,\frac{P_\tau(x_\tau,y_\tau)}{P_\tau(a,b)}\,dx_\tau\,dy_\tau\times\int_{\{x'_t\},\{y'_t\}\in\mathcal{T}_{x_\tau,y_\tau}}P_{\mathcal{T}_{x_\tau,y_\tau}}(\{x'_t\},\{y'_t\})\,d\{x'_t\}\,d\{y'_t\}$$
$$=\int_{a(x_\tau)=a,\,b(y_\tau)=b}e^{-I_\tau(x_\tau,y_\tau)}\,\frac{P_\tau(x_\tau,y_\tau)}{P_\tau(a,b)}\,dx_\tau\,dy_\tau\tag{18}$$
$$=\int_{a(x_\tau)=a,\,b(y_\tau)=b}\frac{P_\tau(x_\tau)P_\tau(y_\tau)}{P_\tau(x_\tau,y_\tau)}\cdot\frac{P_\tau(x_\tau,y_\tau)}{P_\tau(a,b)}\,dx_\tau\,dy_\tau=\frac{P_\tau(a)P_\tau(b)}{P_\tau(a,b)}$$
$$=e^{-I_\tau(a,b)}.\tag{19}$$
Here we use the facts that $e^{-I_\tau(x_\tau,y_\tau)}$ is constant for all paths in $\mathcal{T}_{x_\tau,y_\tau}$, that the probability distribution $P_{\mathcal{T}_{x_\tau,y_\tau}}$ is normalized over all paths in $\mathcal{T}_{x_\tau,y_\tau}$, and that $d\{x'_t\}=d\{x_t\}$ and $d\{y'_t\}=d\{y_t\}$ by the definition of the time-reversed conjugate.
The information exchange fluctuation theorems demonstrate unambiguously that, analogous to how local free energy captures work [5], the pointwise mutual information between coupled states ( a , b ) captures entropy production within the ensemble of trajectories terminating at each respective state. The subsequent corollary elucidates entropy production in greater detail from an energetic perspective.

3.4. Local Non-Equilibrium Free Energy: Definitions and Relations

To avoid confusion regarding the various expressions for local non-equilibrium free energy, we clarify the logical hierarchy of definitions [5,6]:
Fundamental definition: The local non-equilibrium free energy is defined as a state function:
$$\phi(x,t):=E(x;\mu_t)+k_BT\ln P_t(x),\tag{20}$$
where $E(x;\mu_t)$ is the internal energy and $P_t(x)$ is the probability distribution.
Fluctuation theorem consequence: The connection to work fluctuations follows from the Feynman–Kac formula:
$$\phi(x_\tau,\tau)=F_{\mathrm{eq}}(\mu_0)-\frac{1}{\beta}\ln\left\langle e^{-\beta W}\right\rangle_{\mathcal{T}_{x_\tau}},\tag{21}$$
where the right-hand side emerges as a consequence of the stochastic thermodynamics framework under the assumption that the initial probability distribution is the equilibrium one at $\mu_0$.
Coarse-grained extension: For coarse-grained states, the fundamental definition becomes:
$$\Phi(a,t):=G(a;\mu_t)+k_BT\ln P_t(a),\tag{22}$$
where $G(a;\mu_t):=-\beta^{-1}\ln\int_{a(x)=a}e^{-\beta E(x;\mu_t)}\,dx$ represents the conformational free energy. We note that the multi-scale relationship is captured by the following identity:
$$e^{-\beta\Phi(a,t)}=\left\langle e^{-\beta\phi(x,t)}\right\rangle_{a(x)=a},\tag{23}$$
where the angular brackets denote the conditional ensemble average taken with respect to the normalized probability distribution $P_t(x)/P_t(a)$.
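The hierarchy expressed by Equations (20), (22), and (23) can be checked numerically on a discrete state space. In the sketch below (eight microstates with random energies and an arbitrary non-equilibrium distribution, all invented for the example, with $k_BT=1$), the coarse-grained free energy computed from Equation (22) agrees with the conditional average in Equation (23):

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0  # inverse temperature, k_B T = 1 (assumption of the example)

# Eight microstates with random energies E(x) and an arbitrary
# non-equilibrium distribution P_t(x); a(x) groups them into two states.
E = rng.normal(size=8)
P = rng.random(8)
P /= P.sum()
a_of_x = np.arange(8) // 4

# Local non-equilibrium free energy, Equation (20): phi(x) = E(x) + kT ln P_t(x)
phi = E + np.log(P) / beta

for a in range(2):
    sel = a_of_x == a
    P_a = P[sel].sum()
    # Conformational free energy: G(a) = -kT ln sum_{a(x)=a} e^{-beta E(x)}
    G = -np.log(np.exp(-beta * E[sel]).sum()) / beta
    # Coarse-grained free energy, Equation (22): Phi(a) = G(a) + kT ln P_t(a)
    Phi = G + np.log(P_a) / beta
    # Hierarchy identity, Equation (23): e^{-beta Phi(a)} = <e^{-beta phi(x)}>_{a(x)=a}
    lhs = np.exp(-beta * Phi)
    rhs = ((P[sel] / P_a) * np.exp(-beta * phi[sel])).sum()
    assert np.isclose(lhs, rhs)

print("Equation (23) holds for both coarse-grained states")
```

The identity holds for any grouping $a(x)$ and any distribution $P_t(x)$, since $e^{-\beta\phi(x,t)}=e^{-\beta E(x;\mu_t)}/P_t(x)$ cancels the conditional weight exactly.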
Coarse-grained fluctuation theorem: The corresponding work relation for coarse-grained states:
$$\Phi(a,\tau)=F_{\mathrm{eq}}(\mu_0)-\frac{1}{\beta}\ln\left\langle e^{-\beta W}\right\rangle_{\mathcal{T}_a}\tag{24}$$
can be derived using the Feynman–Kac approach combined with hierarchical averaging. The key insight is that the coarse-grained work fluctuation theorem emerges from:
$$\left\langle e^{-\beta W}\right\rangle_{\mathcal{T}_a}=\int_{a(x_\tau)=a}\left\langle e^{-\beta W}\right\rangle_{\mathcal{T}_{x_\tau}}\frac{P_\tau(x_\tau)}{P_\tau(a)}\,dx_\tau.\tag{25}$$
When combined with the Feynman–Kac relation for individual microstates and the hierarchical scaling principle Equation (23), this yields the coarse-grained fluctuation theorem.

3.5. Proof of Theorem 3

We now prove the information exchange work fluctuation theorem. The local free energies ϕ X and ϕ Y for systems X and Y, respectively, at time t with macrostate μ t are defined as follows:
$$\phi_X(x_t,t):=E_X(x_t;\mu_t)-k_BT\,\xi_X[P_t(x_t)],\qquad\phi_Y(y_t,t):=E_Y(y_t;\mu_t)-k_BT\,\xi_Y[P_t(y_t)],\tag{26}$$
where $T$ is the temperature of the heat bath, $k_B$ is the Boltzmann constant, $E_X$ and $E_Y$ are the internal energies of systems $X$ and $Y$, respectively, and $\xi_X$ and $\xi_Y$ are the stochastic entropies of $X$ and $Y$, respectively [2,3]. Work done on either one or both systems through the process $\mu_t$ is expressed by the first law of thermodynamics as follows:
$$W:=\Delta E+H,\tag{27}$$
where $\Delta E$ is the change in internal energy of the total system composed of $X$ and $Y$. If we assume that systems $X$ and $Y$ are weakly coupled, in that the interaction energy between $X$ and $Y$ is negligible compared to the internal energies of $X$ and $Y$, we may write
$$\Delta E:=\Delta E_X+\Delta E_Y,\tag{28}$$
where $\Delta E_X:=E_X(x_\tau,\tau)-E_X(x_0,0)$ and $\Delta E_Y:=E_Y(y_\tau,\tau)-E_Y(y_0,0)$ [40]. We rewrite Equation (14) by adding and subtracting the changes in internal energy $\Delta E_X$ of $X$ and $\Delta E_Y$ of $Y$ as follows:
$$\frac{P_{\mathcal{T}_{a,b}}(\{x'_t\},\{y'_t\})}{P_{\mathcal{T}_{a,b}}(\{x_t\},\{y_t\})}=\exp\{-\beta(H+\Delta E_X+\Delta E_Y)+\beta\Delta E_X-\Delta\xi_X+\beta\Delta E_Y-\Delta\xi_Y\}\times\exp\{I_\tau(x_\tau,y_\tau)-I_0(x_0,y_0)\}\tag{29}$$
$$=\exp\{-\beta W+\beta\phi_X(x_\tau,\tau)-\beta\phi_X(x_0,0)+\beta\phi_Y(y_\tau,\tau)-\beta\phi_Y(y_0,0)\}\times\exp\{I_\tau(x_\tau,y_\tau)-I_0(x_0,y_0)\},\tag{30}$$
where we have applied Equations (26)–(28) consecutively to Equation (29) to obtain Equation (30). Now we multiply both sides of Equation (30) by $e^{-\beta\phi_X(x_\tau,\tau)-\beta\phi_Y(y_\tau,\tau)-I_\tau(x_\tau,y_\tau)}$ and $P_{\mathcal{T}_{a,b}}(\{x_t\},\{y_t\})$, and take the integral over all paths in $\mathcal{T}_{a,b}$ to obtain the following:
$$\left\langle e^{-\beta W-\beta\phi_X(x_0,0)-\beta\phi_Y(y_0,0)-I_0}\right\rangle_{a,b}:=\int_{\{x_t\},\{y_t\}\in\mathcal{T}_{a,b}}e^{-\beta W-\beta\phi_X(x_0,0)-\beta\phi_Y(y_0,0)-I_0}\,P_{\mathcal{T}_{a,b}}(\{x_t\},\{y_t\})\,d\{x_t\}\,d\{y_t\}=\int_{\{x'_t\},\{y'_t\}\in\mathcal{T}_{a,b}}e^{-\beta\phi_X(x_\tau,\tau)-\beta\phi_Y(y_\tau,\tau)-I_\tau(x_\tau,y_\tau)}\,P_{\mathcal{T}_{a,b}}(\{x'_t\},\{y'_t\})\,d\{x'_t\}\,d\{y'_t\}.\tag{31}$$
Noting Equation (17) again, we obtain
$$\begin{aligned}\left\langle e^{-\beta W-\beta\phi_X(x_0,0)-\beta\phi_Y(y_0,0)-I_0}\right\rangle_{a,b}&=\int_{a(x_\tau)=a,\,b(y_\tau)=b}e^{-\beta\phi_X(x_\tau,\tau)-\beta\phi_Y(y_\tau,\tau)-I_\tau(x_\tau,y_\tau)}\,\frac{P_\tau(x_\tau,y_\tau)}{P_\tau(a,b)}\,dx_\tau\,dy_\tau\\&\qquad\times\int_{\{x'_t\},\{y'_t\}\in\mathcal{T}_{x_\tau,y_\tau}}P_{\mathcal{T}_{x_\tau,y_\tau}}(\{x'_t\},\{y'_t\})\,d\{x'_t\}\,d\{y'_t\}\\&=\int_{a(x_\tau)=a,\,b(y_\tau)=b}e^{-\beta\phi_X(x_\tau,\tau)-\beta\phi_Y(y_\tau,\tau)}\,\frac{P_\tau(x_\tau)P_\tau(y_\tau)}{P_\tau(x_\tau,y_\tau)}\cdot\frac{P_\tau(x_\tau,y_\tau)}{P_\tau(a,b)}\,dx_\tau\,dy_\tau\\&=\int_{a(x_\tau)=a,\,b(y_\tau)=b}e^{-\beta\phi_X(x_\tau,\tau)-\beta\phi_Y(y_\tau,\tau)}\,\frac{P_\tau(x_\tau)P_\tau(y_\tau)}{P_\tau(a)P_\tau(b)}\cdot\frac{P_\tau(a)P_\tau(b)}{P_\tau(a,b)}\,dx_\tau\,dy_\tau\\&=e^{-I_\tau(a,b)}\int_{a(x_\tau)=a}e^{-\beta\phi_X(x_\tau,\tau)}\,\frac{P_\tau(x_\tau)}{P_\tau(a)}\,dx_\tau\int_{b(y_\tau)=b}e^{-\beta\phi_Y(y_\tau,\tau)}\,\frac{P_\tau(y_\tau)}{P_\tau(b)}\,dy_\tau\\&=e^{-\beta\Phi_X(a,\tau)-\beta\Phi_Y(b,\tau)-I_\tau(a,b)},\end{aligned}\tag{32}$$
which generalizes known relations in the literature [15,40,41,42,43,44]. The last equality follows from Equation (23). Under the initial equilibrium condition, Equation (32) reduces to Equation (8). We note that Equation (32) holds under the weak-coupling assumption between systems X and Y during process μ t , and Φ X and Φ Y are non-equilibrium free energies, which are different from the equilibrium free energy that appears in similar relations in the literature [15,41,42,43,44].

Limitations of the Weak Coupling Approximation

It is important to note that the weak coupling approximation does not affect our main results regarding the invariance of the information exchange fluctuation theorems under coarse-graining, Equations (6) and (7). The fundamental relationship between pointwise mutual information and entropy production under coarse-graining, which constitutes the core contribution of this work, remains valid regardless of the interaction strength between subsystems X and Y. This is because the derivation of these relations relies solely on the microscopic reversibility condition and the definition of pointwise mutual information, without invoking any assumptions about the magnitude of interaction energies. However, the weak coupling approximation becomes crucial when deriving the work fluctuation theorem for information exchange, Equation (8), as it directly affects the decomposition of the total internal energy. In this case, the work fluctuation theorem, Equation (32), should be considered a leading-order approximation, with interaction-dependent corrections becoming increasingly important as the coupling strengthens.
The weak coupling approximation, Equation (28), assumes that the interaction energy between systems X and Y is negligible compared to their individual internal energies. It may break down in several important physical scenarios. In systems involving charged biomolecules or polyelectrolytes, Coulombic interactions can become comparable to or exceed thermal energies, particularly at low ionic strengths or high charge densities [45,46,47]. DNA–protein complexes represent a prominent example, where electrostatic binding energies can reach $10$–$20\,k_BT$ [48,49]. Similarly, hydrogen bonding networks in protein–DNA interactions, base pairing in nucleic acids, and water-mediated interactions in biological systems often involve interaction energies on the order of several $k_BT$ [50,51]. Watson–Crick base pairs, for instance, exhibit binding energies of $2$–$5\,k_BT$ per hydrogen bond [52,53].
The approximation becomes particularly questionable near phase transitions, where correlation lengths diverge and interaction energies between subsystems can become substantial relative to individual thermal energies [54,55]. This is especially relevant in systems exhibiting cooperative transitions such as protein folding [56,57] or lipid membrane phase transitions [58,59]. Furthermore, systems with metal coordination complexes [60], strong π - π stacking interactions [61,62], and other “non-covalent” interactions that approach covalent bond strengths (∼50–100 k B T ) [63] may violate the weak coupling condition.
Allosteric protein systems present another important class where the approximation may fail. In these systems, conformational coupling between distant sites involves substantial interaction energies that may violate the weak coupling condition, particularly in systems exhibiting strong cooperativity [64,65,66,67]. The long-range nature of allosteric communication often involves networks of residues where local perturbations can propagate throughout the protein structure, leading to interaction energies that are not negligible compared to thermal fluctuations.

3.6. Computational Implementation and Molecular Dynamics

Our theoretical framework can be directly implemented and validated using molecular dynamics simulations. The connection to practical computational methods, particularly thermostat implementations, provides a bridge between our abstract formulation and concrete applications.
The thermal reservoir coupling in our theory corresponds to thermostat mechanisms in molecular dynamics simulations. In Nosé–Hoover thermostat implementations [68,69], the heat $H$ in our entropy production $\Pi=\Delta\xi_{XY}+\beta H$ corresponds to the cumulative energy transferred to the thermostat reservoir, which can be computed as $H=\int_0^\tau\dot{Q}_{\mathrm{bath}}\,dt$, where $\dot{Q}_{\mathrm{bath}}$ represents the instantaneous heat flow to the thermal bath [70]. The canonical sampling generated by the Nosé–Hoover mechanism ensures that the probability distributions $P_t(x,y)$ follow the appropriate non-equilibrium form during the driving protocol. Alternative thermostat implementations such as Langevin dynamics [71], where stochastic friction and random forces provide thermal coupling, or velocity rescaling methods [72] offer equivalent mechanisms for maintaining thermal contact with the reservoir.
The theoretical predictions of our framework can be systematically tested through molecular dynamics simulations by computing entropy production along individual trajectories using thermostat heat flows, applying the coarse-graining functions a ( x ) and b ( y ) to the resulting trajectory data, and evaluating pointwise mutual information I t ( a , b ) from histograms of coarse-grained states accumulated during the simulation. The hierarchical relationships expressed in Equation (7) can then be verified across different levels of resolution by comparing mutual information computed at various coarse-graining scales. Recent advances in enhanced sampling methods [73,74] and trajectory analysis techniques [75,76] provide additional tools for systematically exploring the coarse-graining space and validating our theoretical predictions.
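As a schematic of the analysis pipeline described above (with synthetic correlated coordinates standing in for actual molecular dynamics output, and a sign discretization as the assumed coarse-graining functions $a$ and $b$), the coarse-grained pointwise mutual information can be estimated from histograms of the coarse-grained states:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for molecular dynamics output: correlated coordinates
# x_t, y_t (in a real study these would come from the simulation trajectory).
n = 50_000
x = rng.normal(size=n)
y = 0.8 * x + 0.6 * rng.normal(size=n)  # y is correlated with x

# Assumed coarse-graining functions a(x), b(y): sign discretization.
a = (x > 0).astype(int)
b = (y > 0).astype(int)

# Histogram of coarse-grained states gives the joint distribution P_t(a, b).
P_ab = np.zeros((2, 2))
np.add.at(P_ab, (a, b), 1.0)
P_ab /= n
P_a, P_b = P_ab.sum(axis=1), P_ab.sum(axis=0)

# Pointwise mutual information on the coarse-grained scale, and its average.
I_ab = np.log(P_ab / np.outer(P_a, P_b))
MI_ab = float((P_ab * I_ab).sum())
print(MI_ab > 0)  # correlation between x and y survives coarse-graining
```

Repeating this estimate with progressively finer binning of $x$ and $y$ yields the hierarchy of information measures across scales that Equation (7) organizes.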

4. Conclusions

In this study, we have rigorously established that the fluctuation theorem for information exchange remains invariant under coarse-graining transformations in coupled classical stochastic systems. The fundamental relationship between pointwise mutual information and entropy production is preserved across different scales of observation regardless of interaction strength between subsystems. However, our derivation of the work fluctuation theorem relies on the weak coupling approximation where interaction energies are negligible compared to individual subsystem energies.
The key findings of our work include: (i) the derivation of coarse-grained fluctuation theorems that maintain their functional form under arbitrary coarse-graining transformations, (ii) the proof that pointwise mutual information between coarse-grained states captures entropy production within trajectory ensembles, and (iii) the establishment of hierarchical relationships between information measures at different observational scales.
These results have profound implications for understanding information processing in thermodynamic systems. The scale-independence of these fluctuation theorems establishes their fundamental character as universal principles governing energy-information conversion mechanisms, regardless of the level of system description. This universality is particularly significant for biological systems, where information processing occurs across multiple organizational scales, from molecular interactions to cellular signaling networks.
The robust theoretical framework developed here provides a foundation for analyzing the thermodynamics of dynamic molecular information processes and dynamic allosteric transitions in complex biological systems. Furthermore, our findings offer new perspectives on the thermodynamic constraints that govern information processing in classical stochastic systems, with potential applications ranging from molecular machinery design to understanding the fundamental limits of biological computation.

Funding

L.J. was supported by the National Research Foundation of Korea Grant funded by the Korean Government (RS-2016-NR017140), and in part by a Kwangwoon University Research Grant in 2023.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Jarzynski, C. Nonequilibrium equality for free energy differences. Phys. Rev. Lett. 1997, 78, 2690–2693. [Google Scholar] [CrossRef]
  2. Crooks, G.E. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Phys. Rev. E 1999, 60, 2721–2726. [Google Scholar] [CrossRef]
  3. Seifert, U. Entropy production along a stochastic trajectory and an integral fluctuation theorem. Phys. Rev. Lett. 2005, 95, 040602. [Google Scholar] [CrossRef]
  4. Hatano, T.; Sasa, S.-i. Steady-state thermodynamics of Langevin systems. Phys. Rev. Lett. 2001, 86, 3463–3466. [Google Scholar] [CrossRef] [PubMed]
  5. Jinwoo, L.; Tanaka, H. Local non-equilibrium thermodynamics. Sci. Rep. 2015, 5, 7832. [Google Scholar] [CrossRef] [PubMed]
  6. Jinwoo, L. Roles of local nonequilibrium free energy in the description of biomolecules. Phys. Rev. E 2023, 107, 014402. [Google Scholar] [CrossRef] [PubMed]
  7. Hummer, G.; Szabo, A. Free energy reconstruction from nonequilibrium single-molecule pulling experiments. Proc. Natl. Acad. Sci. USA 2001, 98, 3658–3661. [Google Scholar] [CrossRef] [PubMed]
  8. Liphardt, J.; Onoa, B.; Smith, S.B.; Tinoco, I.; Bustamante, C. Reversible unfolding of single RNA molecules by mechanical force. Science 2001, 292, 733–737. [Google Scholar] [CrossRef]
  9. Liphardt, J.; Dumont, S.; Smith, S.; Tinoco, I., Jr.; Bustamante, C. Equilibrium information from nonequilibrium measurements in an experimental test of Jarzynski’s equality. Science 2002, 296, 1832–1835. [Google Scholar] [CrossRef]
  10. Trepagnier, E.H.; Jarzynski, C.; Ritort, F.; Crooks, G.E.; Bustamante, C.J.; Liphardt, J. Experimental test of Hatano and Sasa’s nonequilibrium steady-state equality. Proc. Natl. Acad. Sci. USA 2004, 101, 15038–15041. [Google Scholar] [CrossRef]
  11. Collin, D.; Ritort, F.; Jarzynski, C.; Smith, S.B.; Tinoco, I.; Bustamante, C. Verification of the Crooks fluctuation theorem and recovery of RNA folding free energies. Nature 2005, 437, 231–234. [Google Scholar] [CrossRef]
  12. Alemany, A.; Mossa, A.; Junier, I.; Ritort, F. Experimental free-energy measurements of kinetic molecular states using fluctuation theorems. Nat. Phys. 2012, 8, 688–694. [Google Scholar] [CrossRef]
  13. Ponmurugan, M. Generalized detailed fluctuation theorem under nonequilibrium feedback control. Phys. Rev. E 2010, 82, 031129. [Google Scholar] [CrossRef] [PubMed]
  14. Horowitz, J.M.; Vaikuntanathan, S. Nonequilibrium detailed fluctuation theorem for repeated discrete feedback. Phys. Rev. E 2010, 82, 061120. [Google Scholar] [CrossRef] [PubMed]
  15. Sagawa, T.; Ueda, M. Generalized Jarzynski equality under nonequilibrium feedback control. Phys. Rev. Lett. 2010, 104, 090602. [Google Scholar] [CrossRef]
  16. Horowitz, J.M.; Parrondo, J.M. Thermodynamic reversibility in feedback processes. EPL Europhys. Lett. 2011, 95, 10005. [Google Scholar] [CrossRef]
  17. Sagawa, T.; Ueda, M. Fluctuation theorem with information exchange: Role of correlations in stochastic thermodynamics. Phys. Rev. Lett. 2012, 109, 180602. [Google Scholar] [CrossRef]
  18. Zeng, Q.; Wang, J. New fluctuation theorems on Maxwell’s demon. Sci. Adv. 2021, 7, eabf1807. [Google Scholar] [CrossRef]
  19. Yan, L.L.; Bu, J.T.; Zeng, Q.; Zhang, K.; Cui, K.F.; Zhou, F.; Su, S.L.; Chen, L.; Wang, J.; Chen, G.; et al. Experimental Verification of Demon-Involved Fluctuation Theorems. Phys. Rev. Lett. 2024, 133, 090402. [Google Scholar] [CrossRef]
  20. Hartwell, L.H.; Hopfield, J.J.; Leibler, S.; Murray, A.W. From molecular to modular cell biology. Nature 1999, 402, C47. [Google Scholar] [CrossRef]
  21. Crofts, A.R. Life, information, entropy, and time: Vehicles for semantic inheritance. Complexity 2007, 13, 14–50. [Google Scholar] [CrossRef]
  22. Cheong, R.; Rhee, A.; Wang, C.J.; Nemenman, I.; Levchenko, A. Information transduction capacity of noisy biochemical signaling networks. Science 2011, 334, 354–358. [Google Scholar] [CrossRef]
  23. McGrath, T.; Jones, N.S.; ten Wolde, P.R.; Ouldridge, T.E. Biochemical Machines for the Interconversion of Mutual Information and Work. Phys. Rev. Lett. 2017, 118, 028101. [Google Scholar] [CrossRef]
  24. Ouldridge, T.E.; Govern, C.C.; ten Wolde, P.R. Thermodynamics of Computational Copying in Biochemical Systems. Phys. Rev. X 2017, 7, 021004. [Google Scholar] [CrossRef]
  25. Becker, N.B.; Mugler, A.; ten Wolde, P.R. Optimal Prediction by Cellular Signaling Networks. Phys. Rev. Lett. 2015, 115, 258103. [Google Scholar] [CrossRef] [PubMed]
  26. Cheng, F.; Liu, C.; Shen, B.; Zhao, Z. Investigating cellular network heterogeneity and modularity in cancer: A network entropy and unbalanced motif approach. BMC Syst. Biol. 2016, 10, 65. [Google Scholar] [CrossRef] [PubMed]
  27. Whitsett, J.A.; Guo, M.; Xu, Y.; Bao, E.L.; Wagner, M. SLICE: Determining cell differentiation and lineage based on single cell entropy. Nucleic Acids Res. 2016, 45, e54. [Google Scholar] [CrossRef] [PubMed]
  28. Olimpio, E.P.; Dang, Y.; Youk, H. Statistical Dynamics of Spatial-Order Formation by Communicating Cells. iScience 2018, 2, 27–40. [Google Scholar] [CrossRef]
  29. Maire, T.; Youk, H. Molecular-Level Tuning of Cellular Autonomy Controls the Collective Behaviors of Cell Populations. Cell Syst. 2015, 1, 349–360. [Google Scholar] [CrossRef]
  30. Mehta, P.; Schwab, D.J. Energetic costs of cellular computation. Proc. Natl. Acad. Sci. USA 2012, 109, 17978–17982. [Google Scholar] [CrossRef]
  31. Govern, C.C.; ten Wolde, P.R. Energy dissipation and noise correlations in biochemical sensing. Phys. Rev. Lett. 2014, 113, 258102. [Google Scholar] [CrossRef] [PubMed]
  32. Jinwoo, L. Fluctuation Theorem of Information Exchange between Subsystems that Co-Evolve in Time. Symmetry 2019, 11, 433. [Google Scholar] [CrossRef]
  33. Jinwoo, L. Fluctuation Theorem of Information Exchange within an Ensemble of Paths Conditioned on Correlated-Microstates. Entropy 2019, 21, 477. [Google Scholar] [CrossRef] [PubMed]
  34. Jarzynski, C. Equalities and inequalities: Irreversibility and the second law of thermodynamics at the nanoscale. Annu. Rev. Condens. Matter Phys. 2011, 2, 329–351. [Google Scholar] [CrossRef]
  35. Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. 2012, 75, 126001. [Google Scholar] [CrossRef]
  36. Spinney, R.; Ford, I. Fluctuation Relations: A Pedagogical Overview. In Nonequilibrium Statistical Physics of Small Systems; Wiley-VCH Verlag GmbH & Co. KGaA: Weinheim, Germany, 2013; pp. 3–56. [Google Scholar]
  37. Kurchan, J. Fluctuation theorem for stochastic dynamics. J. Phys. A Math. Gen. 1998, 31, 3719. [Google Scholar] [CrossRef]
  38. Maes, C. The fluctuation theorem as a Gibbs property. J. Stat. Phys. 1999, 95, 367–392. [Google Scholar] [CrossRef]
  39. Jarzynski, C. Hamiltonian derivation of a detailed fluctuation theorem. J. Stat. Phys. 2000, 98, 77–102. [Google Scholar] [CrossRef]
  40. Parrondo, J.M.; Horowitz, J.M.; Sagawa, T. Thermodynamics of information. Nat. Phys. 2015, 11, 131–139. [Google Scholar] [CrossRef]
  41. Kawai, R.; Parrondo, J.M.R.; den Broeck, C.V. Dissipation: The phase-space perspective. Phys. Rev. Lett. 2007, 98, 080602. [Google Scholar] [CrossRef]
  42. Takara, K.; Hasegawa, H.H.; Driebe, D. Generalization of the second law for a transition between nonequilibrium states. Phys. Lett. A 2010, 375, 88–92. [Google Scholar] [CrossRef]
  43. Hasegawa, H.H.; Ishikawa, J.; Takara, K.; Driebe, D.J. Generalization of the second law for a nonequilibrium initial state. Phys. Lett. A 2010, 374, 1001–1004. [Google Scholar] [CrossRef]
  44. Esposito, M.; Van den Broeck, C. Second law and Landauer principle far from equilibrium. Europhys. Lett. 2011, 95, 40004. [Google Scholar] [CrossRef]
  45. Record, M.T., Jr.; Lohman, T.M.; De Haseth, P. Effects of Na+ and Mg2+ on the helix-coil transition of DNA. J. Mol. Biol. 1976, 107, 145–158. [Google Scholar]
  46. Anderson, C.F.; Record, M.T., Jr. Electrostatic effects in protein folding, binding, and condensation. Curr. Opin. Struct. Biol. 1995, 5, 796–806. [Google Scholar]
  47. Rohs, R.; West, S.M.; Sosinsky, A.; Liu, P.; Mann, R.S.; Honig, B. Origins of specificity in protein-DNA recognition. Annu. Rev. Biochem. 2009, 78, 233–271. [Google Scholar] [CrossRef]
  48. Honig, B.; Nicholls, A. Classical electrostatics in biology and chemistry. Science 1995, 268, 1144–1149. [Google Scholar] [CrossRef]
  49. Sheinerman, F.B.; Norel, R.; Honig, B. Electrostatic aspects of protein–protein interactions. Curr. Opin. Struct. Biol. 2000, 10, 153–159. [Google Scholar] [CrossRef]
  50. Jeffrey, G.A. An Introduction to Hydrogen Bonding; Oxford University Press: New York, NY, USA, 1997. [Google Scholar]
  51. Steiner, T. The hydrogen bond in the solid state. Angew. Chem. Int. Ed. 2002, 41, 48–76. [Google Scholar] [CrossRef]
  52. Saenger, W. Principles of Nucleic Acid Structure; Springer-Verlag: New York, NY, USA, 1984. [Google Scholar]
  53. Freier, S.M.; Kierzek, R.; Jaeger, J.A.; Sugimoto, N.; Caruthers, M.H.; Neilson, T.; Turner, D.H. Improved free-energy parameters for predictions of RNA duplex stability. Proc. Natl. Acad. Sci. USA 1986, 83, 9373–9377. [Google Scholar] [CrossRef]
  54. Stanley, H.E. Introduction to Phase Transitions and Critical Phenomena; Oxford University Press: New York, NY, USA, 1971. [Google Scholar]
  55. Peliti, L. Statistical Mechanics in a Nutshell; Princeton University Press: Princeton, NJ, USA, 2011. [Google Scholar]
  56. Muñoz, V.; Thompson, P.A.; Hofrichter, J.; Eaton, W.A. Folding dynamics and mechanism of β-hairpin formation. Nature 1997, 390, 196–199. [Google Scholar] [CrossRef]
  57. Socci, N.D.; Onuchic, J.N.; Wolynes, P.G. Kinetic approach to folding and misfolding of a few simple protein models. J. Chem. Phys. 1996, 104, 5860–5868. [Google Scholar] [CrossRef]
  58. Heimburg, T.; Jackson, A.D. On soliton propagation in biomembranes and nerves. Proc. Natl. Acad. Sci. USA 2005, 102, 9790–9795. [Google Scholar] [CrossRef] [PubMed]
  59. Mouritsen, O.G. Life—As a Matter of Fat: The Emerging Science of Lipidomics; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  60. Cotton, F.A.; Wilkinson, G.; Murillo, C.A.; Bochmann, M. Advanced Inorganic Chemistry, 6th ed.; Wiley: New York, NY, USA, 2006. [Google Scholar]
  61. Hunter, C.A.; Sanders, J.K.M. The nature of π–π interactions. J. Am. Chem. Soc. 1990, 112, 5525–5534. [Google Scholar] [CrossRef]
  62. Martínez, C.R.; Iverson, B.L. Rethinking the term “π-stacking”. Chem. Sci. 2012, 3, 2191–2201. [Google Scholar] [CrossRef]
  63. Schneider, H.J. Noncovalent interactions: A brief account of a long history. Chem. Soc. Rev. 2015, 44, 3235–3243. [Google Scholar] [CrossRef]
  64. Tsai, C.J.; Nussinov, R. A unified view of “how allostery works”. PLoS Comput. Biol. 2014, 10, e1003394. [Google Scholar] [CrossRef]
  65. Cuendet, M.A.; Weinstein, H.; LeVine, M.V. The allostery landscape: Quantifying thermodynamic couplings in biomolecular systems. J. Chem. Theory Comput. 2016, 12, 5758–5767. [Google Scholar] [CrossRef]
  66. Hilser, V.J.; Wrabl, J.O.; Motlagh, H.N. Structural and energetic basis of allostery. Annu. Rev. Biophys. 2012, 41, 585–609. [Google Scholar] [CrossRef]
  67. Motlagh, H.N.; Wrabl, J.O.; Li, J.; Hilser, V.J. The ensemble nature of allostery. Nature 2014, 508, 331–339. [Google Scholar] [CrossRef]
  68. Nosé, S. A molecular dynamics method for simulations in the canonical ensemble. Mol. Phys. 1984, 52, 255–268. [Google Scholar] [CrossRef]
  69. Hoover, W.G. Canonical dynamics: Equilibrium phase-space distributions. Phys. Rev. A 1985, 31, 1695–1697. [Google Scholar] [CrossRef] [PubMed]
  70. Evans, D.J.; Holian, B.L. The Fundamentals of Molecular Dynamics Simulation; World Scientific: Singapore, 2008. [Google Scholar]
  71. Grest, G.S.; Kremer, K. Molecular dynamics simulation for polymers in the presence of a heat bath. Phys. Rev. A 1986, 33, 3628–3631. [Google Scholar] [CrossRef] [PubMed]
  72. Bussi, G.; Donadio, D.; Parrinello, M. Canonical sampling through velocity rescaling. J. Chem. Phys. 2007, 126, 014101. [Google Scholar] [CrossRef] [PubMed]
  73. Laio, A.; Parrinello, M. Escaping free-energy minima. Proc. Natl. Acad. Sci. USA 2002, 99, 12562–12566. [Google Scholar] [CrossRef]
  74. Barducci, A.; Bussi, G.; Parrinello, M. Well-tempered metadynamics: A smoothly converging and tunable free-energy method. Phys. Rev. Lett. 2008, 100, 020603. [Google Scholar] [CrossRef]
  75. Noé, F.; Schütte, C.; Vanden-Eijnden, E.; Reich, L.; Weikl, T.R. Constructing the equilibrium ensemble of folding pathways from short off-equilibrium simulations. Proc. Natl. Acad. Sci. USA 2009, 106, 19011–19016. [Google Scholar] [CrossRef]
  76. Chodera, J.D.; Noé, F. Markov state models of biomolecular conformational dynamics. Curr. Opin. Struct. Biol. 2014, 25, 135–144. [Google Scholar] [CrossRef]
