Stochastic Thermodynamics of Multiple Co-Evolving Systems—Beyond Multipartite Processes

Many dynamical systems consist of multiple, co-evolving subsystems (i.e., they have multiple degrees of freedom). Often, the dynamics of one or more of these subsystems will not directly depend on the state of some other subsystems, resulting in a network of dependencies governing the dynamics. How does this dependency network affect the full system’s thermodynamics? Prior studies on the stochastic thermodynamics of multipartite processes have addressed this question by assuming that, in addition to the constraints of the dependency network, only one subsystem is allowed to change state at a time. However, in many real systems, such as chemical reaction networks or electronic circuits, multiple subsystems can—or must—change state together. Here, we investigate the thermodynamics of such composite processes, in which multiple subsystems are allowed to change state simultaneously. We first present new, strictly positive lower bounds on entropy production in composite processes. We then present thermodynamic uncertainty relations for information flows in composite processes. We end with strengthened speed limits for composite processes.

counts must change state concurrently. As another example, the voltages on different conductors in a circuit (Fig. 1(b)) must change state at the same time.
There has been some preliminary work extending the stochastic thermodynamics of MPPs to address the broader scenario in which each mechanism couples to a set of multiple subsystems [4]. Systems of this nature are called composite systems, and their dynamics is called a composite process.
Here we extend this preliminary work and obtain new results in the stochastic thermodynamics of composite processes. We decompose key quantities (including probability flows, entropy production, and dynamical activity) into contributions from each mechanism. We also analyze the network specifying which (set of) subsystems can affect the dynamics of each subsystem. This network gives rise to units, which are subsets of subsystems whose joint dynamics does not depend on the state of the rest of the system. We then use the specification of units and the decomposition of key quantities to derive a wealth of thermodynamic uncertainty relations (TURs). Finally, we derive a strengthened thermodynamic speed limit theorem (SLT) for composite processes. This speed limit provides a tighter restriction on how much the probability distribution over system states can change during a fixed time interval, using the contributions from each mechanism to entropy production and dynamical activity. These results also apply to MPPs, since they are a special case of a composite process.
We begin by reviewing the preliminary work on composite processes, including the specification of units. We then present how key quantities decompose into contributions from each mechanism coupled to the system. We then present our results for TURs and strengthened SLTs. We conclude by discussing our results in the broader contexts of the thermodynamics of constraints and the thermodynamics of computation, and by suggesting avenues of future work.

I. STOCHASTIC THERMODYNAMICS OF COMPOSITE PROCESSES

A. Background on composite processes
A composite process is a generalization of MPPs, describing the co-evolution of a finite set of subsystems, N = {1, 2, . . ., N}. Each subsystem i has a discrete state space X_i. x indicates a state vector in X = ×_{i∈N} X_i, the joint state space of the full system. x_A indicates a state vector in X_A = ×_{i∈A} X_i, the joint state space of the subset A ⊆ N. The probability that the entire system is in state x at time t evolves according to a master equation:

$\frac{d}{dt} p_x(t) = \sum_{x'} K_{x x'}(t)\, p_{x'}(t).$

This stochastic dynamics arises due to couplings of the system with a set of mechanisms V = {v_1, v_2, . . ., v_M}. In general, each such mechanism v couples to only a subset of the subsystems. We refer to the set of subsystems to which a mechanism v couples as its puppet set, and write it as P(v) ⊆ N.
As an example, an MPP is a composite process where each mechanism couples to only one subsystem (although a single subsystem might be coupled to multiple mechanisms [1]). So in an MPP, the cardinality of every puppet set is 1.
At any given time, a composite system changes state due to its interaction with at most one mechanism, just as with MPPs. Accordingly, the rate matrix of the overall system is a sum of mechanism-specific rate matrices:

$K(t) = \sum_{v \in V} K(v; t).$

(Here and throughout, for any two variables z, z' contained in the same space, δ^{z'}_{z} is the Kronecker delta function, equal to 1 when z = z' and 0 otherwise.)
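To make this decomposition concrete, here is a minimal numerical sketch with two binary subsystems and two hypothetical mechanisms (the rates are illustrative, not from the paper): mechanism v1 jointly flips both subsystems, while mechanism v2 flips only subsystem 2, and the full generator is the sum of the two mechanism-specific generators.

```python
import numpy as np

# Two binary subsystems; joint states x = (x1, x2) are flattened to 2*x1 + x2.
# All rates below are hypothetical, chosen only for illustration.

def make_generator(transitions, n=4):
    """Build a rate matrix K with K[x', x] = rate of jump x -> x',
    and diagonal entries set so each column sums to zero."""
    K = np.zeros((n, n))
    for x, xp, rate in transitions:
        K[xp, x] += rate
    return K - np.diag(K.sum(axis=0))

# v1: joint flips (0,0) <-> (1,1) and (0,1) <-> (1,0) -- both subsystems change
K1 = make_generator([(0, 3, 1.0), (3, 0, 0.5), (1, 2, 0.7), (2, 1, 0.2)])
# v2: flips subsystem 2 only
K2 = make_generator([(0, 1, 0.3), (1, 0, 0.4), (2, 3, 0.3), (3, 2, 0.4)])

K = K1 + K2  # the full generator is the sum of mechanism-specific generators
assert np.allclose(K.sum(axis=0), 0.0)  # probability conservation
```

Because v1 changes two subsystems at once, this toy system is a composite process but not an MPP.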
We can illustrate composite processes using a toy stochastic chemical reaction network (Fig. 1a) [5–7]. This network involves four co-evolving species {X1, X2, X3, X4} that change state according to three chemical reactions {A, B, C} (left). The system state is a vector consisting of the number of molecules of each species in the system. Only one reaction can occur at a time, but when a reaction does occur, multiple subsystems all change their state. For example, in the forward reaction A, species X1, X2, and X3 must change state at the same time, by counts of {−2, −1, +1}, respectively. Accordingly, this reaction network is not an MPP. However, it is a composite process.
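The joint state changes in such a network can be sketched with a stoichiometric matrix. Reaction A's counts {−2, −1, +1} are taken from the text; the stoichiometries used for reactions B and C below are made-up placeholders for illustration only.

```python
import numpy as np

# State = vector of molecule counts of (X1, X2, X3, X4).
# Each reaction (mechanism) jointly updates every species in its puppet set.
S = {
    "A": np.array([-2, -1, +1, 0]),   # from the text: X1, X2, X3 change together
    "B": np.array([0, +1, -1, -1]),   # hypothetical stoichiometry
    "C": np.array([+1, 0, 0, +1]),    # hypothetical stoichiometry
}

x = np.array([10, 5, 2, 3])   # current counts (illustrative)
x_after_A = x + S["A"]        # one firing of reaction A updates 3 subsystems at once

# A reaction's puppet set is the set of species whose count it changes.
puppet = {r: {i for i, s in enumerate(stoich) if s != 0} for r, stoich in S.items()}
assert puppet["A"] == {0, 1, 2}   # reaction A couples X1, X2, X3 jointly
```

Since the puppet set of reaction A has cardinality three, no MPP description (singleton puppet sets) can capture this dynamics.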
We can illustrate this composite process in terms of the associated puppet sets (right-hand side of the figure). There are a total of three such puppet sets, one for each of the possible chemical reactions. These three puppet sets are indicated by translucent bubbles in the right-hand part of the figure. The mechanisms of the three puppet sets are denoted as r_A, r_B, and r_C, and the puppet set of mechanism r is denoted as P(r).
As another example, consider a toy electronic circuit (Fig. 1b) [8] consisting of four conductors (the four circles in the left-hand side of the figure) and three devices (the three bidirectional arrows). The state of the system is a vector consisting of the voltage on each conductor. Two of the conductors (1 and 4) are "regulated", since they are tied directly to fixed voltage sources (V_1 and V_4). The other two conductors (2 and 3) are "free" to stochastically change state via the effect of devices A, B, and C.
The composite process capturing the dynamics of the state of this circuit is illustrated in the right-hand side of the figure. There are three puppet sets (each a translucent bubble), each corresponding to a mechanism associated with one of the devices in the system. The mechanisms are denoted as r_A, r_B, and r_C, and the puppet set of mechanism r is denoted as P(r).
In an MPP, even though the mechanisms that affect the dynamics of any subsystem i do not affect the dynamics of any other subsystem, in general the dynamics of i will depend on the states of some set of other subsystems. For example, in a bipartite process [9], both of the subsystems can be modeled as having their own set of mechanisms, but each subsystem's dynamics is governed by the state of the other subsystem as well as its own state.
Similarly, in a composite process, the dynamics of each subsystem i can depend on the state of other subsystems in addition to its own state. Each such dependency can be represented as an edge in a directed graph. In the resulting dependency network, each edge j → i means that the state of subsystem j affects the rate of state transitions in subsystem i. We refer to the set of subsystems whose state affects the dynamics of i as the leaders of i. So j → i means that j is a leader of i. In any dependency network, the leaders of each subsystem i are its parents, pa(i).
The leader set for a mechanism v is defined to be the union of the leaders of each subsystem in the puppet set of v: L(v) = ∪_{i∈P(v)} pa(i). As an example, even though the puppet set of mechanism v_2 in Fig. 2 is {A, C, D}, the leader set of v_2 is {A, B, C, D}. The leader set of any mechanism is a (perhaps proper) superset of its puppet set. Accordingly, we can write

$K_{x' x}(v; t) = K_{x'_{L(v)} x_{L(v)}}(v; t)\, \delta^{x'_{N \setminus L(v)}}_{x_{N \setminus L(v)}}.$

With abuse of notation, we can rewrite this in a way that explicitly embodies the fact that the instantaneous dynamics of the puppet set P(v) depends at most on the state of the leader set L(v), and not on the state of any of the subsystems in N \ L(v):

$K_{x'_{L(v)} x_{L(v)}}(v; t) = K^{x_{L(v)}}_{x'_{P(v)} x_{P(v)}}(v; t)\, \delta^{x'_{L(v) \setminus P(v)}}_{x_{L(v) \setminus P(v)}}.$

A unit ω ⊆ N is a collection of subsystems such that, as the full system's state evolves via a master equation according to K(t), the marginal distribution over the states of the unit also evolves according to its own CTMC,

$\frac{d}{dt} p_{x_\omega}(t) = \sum_{x'_\omega} K_{x_\omega x'_\omega}(\omega; t)\, p_{x'_\omega}(t),$

for some associated rate matrix K(ω; t). Intuitively, a unit is any set of subsystems whose evolution is independent of the states of the subsystems outside the unit. Typically, a unit is a union of leader sets. In such cases no subsystem in the unit has parents outside of the unit. Importantly though, this does not prevent a subsystem in the unit from being a leader for some subsystem outside of the unit. Informally speaking, the boundary of a unit in a dependency network can have outgoing edges, even though it cannot have any incoming edges. Any union of units is a unit, and any non-empty intersection of units is a unit [4]. Note that the entire system N itself is a unit. We denote the set of all units as N†.
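Leader sets and the "no incoming edges" property of units can be sketched directly. The parent sets below are hypothetical, chosen only so that the puppet set of v2 is {A, C, D} and its leader set is {A, B, C, D}, matching the Fig. 2 example described in the text.

```python
# Hypothetical dependency network over subsystems A, B, C, D.
pa = {"A": {"A"}, "B": {"A", "B"}, "C": {"B", "C"}, "D": {"C", "D"}}
puppet = {"v1": {"A", "B"}, "v2": {"A", "C", "D"}}

def leader_set(v):
    # L(v) is the union of the parents of every subsystem in the puppet set of v
    return set().union(*(pa[i] for i in puppet[v]))

def has_no_incoming_edges(omega):
    # every leader of every member of omega lies inside omega
    return set().union(*(pa[i] for i in omega)) <= set(omega)

assert leader_set("v2") == {"A", "B", "C", "D"}   # proper superset of {A, C, D}
assert has_no_incoming_edges({"A", "B"})          # closed: a candidate unit
assert not has_no_incoming_edges({"C", "D"})      # leader B lies outside
```

Note that {A, B} passes the check even though B is a leader of C outside the set: outgoing edges are allowed at a unit's boundary, incoming ones are not.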
We highlight that for any pair of nested units ω and α ⊆ ω, the dynamics must be mutually consistent: evolving the marginal distribution over x_α with α's own rate matrix K(α; t) gives the same result as evolving the joint distribution over x_ω with K(ω; t) and then marginalizing [3, 4]. A set of units N* is called a unit structure if it obeys the following properties [4]:

• The union of the units in the set equals N.
• The set is closed under intersections of its units.
We define an inclusion-exclusion sum, written $\widehat{\sum}_{\omega \in N^*} f_\omega$, of a function f_ω evaluated on every unit ω in a unit structure N*; it sums f over the units while counting their overlaps exactly once, in direct analogy to the inclusion-exclusion principle. For example, the time-t inclusion-exclusion (or "in-ex" for short) information I_{N*}(t) is obtained by applying this sum to the entropies of the units and subtracting the entropy of the full system. Using the fact that the heat flow into the unit structure also decomposes into an in-ex sum, we can decompose the global EP incurred during a time period [0, τ] into an in-ex sum of the unit EPs together with a term involving ∆I_{N*}, the change in the in-ex information during the time period [0, τ]. For a detailed proof of the in-ex decomposition of the global EP, see [2, 3].
One can use the in-ex sum decomposition of the EP in various ways, depending on which degrees of freedom are accessible in the system of interest. For example, if one can calculate the mismatch cost [20, 21] λ_ω for each unit in the unit structure, then each term in the in-ex sum can be rewritten as σ_ω = λ_ω + ξ_ω, where ξ_ω = σ_ω − λ_ω is the "residual EP" due to everything aside from the mismatch cost. Additionally, a very large number of lower bounds can be obtained on the global EP by replacing any positive σ_ω (or any set of them) in the in-ex sum with any lower bound (e.g., a TUR or an SLT) on the value of that unit's EP. (See [2] for examples in the special case of MPPs.)

B. Decomposition of thermodynamic and dynamical quantities in composite processes
Since the entire system N is itself a unit, we will write all our results in terms of units for the rest of the paper.
The rate matrix of each unit ω in a composite process decomposes into rate matrices from each mechanism whose leader set is a subset of ω:

$K(\omega; t) = \sum_{v : L(v) \subseteq \omega} K(v; t).$

Similarly, we can decompose the EP rate of any unit ω,

$\sigma_\omega(t) = \sum_{v : L(v) \subseteq \omega} \zeta_v^\omega(t),$

into contributions ζ_v^ω(t) from each mechanism whose leader set is a subset of ω. In particular, since the entire system is a unit whose state transitions are mediated by every mechanism v ∈ V, the global EP rate decomposes as σ_N(t) = Σ_v ζ_v^N(t). A unit's dynamical activity also decomposes:

$A_\omega(t) = \sum_{v : L(v) \subseteq \omega} A_\omega(v; t).$

Similarly, the entire system's dynamical activity can be decomposed as A_N(t) = Σ_v A(v; t). Note that the dynamics of every pair of nested units ω, α ⊆ ω must be consistent with one another [4], which means that A_α(v; t) = A_ω(v; t) = A(v; t) for all such α and ω.
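As an illustration, here is a numerical sketch, with hypothetical rates on four joint states, of the per-mechanism EP-rate contributions computed from mechanism-specific probability flows in the standard Schnakenberg form; each contribution is non-negative on its own.

```python
import numpy as np

# Hypothetical 4-state example: mechanism v1 mediates transitions 0<->3 and
# 1<->2; mechanism v2 mediates 0<->1 and 2<->3. Mechanism v's EP-rate
# contribution is built from its flows A_{x'x}(v) = K_{x'x}(v) p_x as
# zeta_v = (1/2) sum_{x,x'} (A_{x'x}(v) - A_{xx'}(v)) ln(A_{x'x}(v)/A_{xx'}(v)).
def generator(transitions, n=4):
    K = np.zeros((n, n))
    for x, xp, r in transitions:
        K[xp, x] += r
    return K - np.diag(K.sum(axis=0))

K1 = generator([(0, 3, 1.0), (3, 0, 0.5), (1, 2, 0.7), (2, 1, 0.2)])
K2 = generator([(0, 1, 0.3), (1, 0, 0.9), (2, 3, 0.3), (3, 2, 0.9)])
K = K1 + K2

w, V = np.linalg.eig(K)                       # stationary distribution of K
p = np.real(V[:, np.argmin(np.abs(w))])
p = p / p.sum()

def ep_rate(Kv, p):
    sigma, n = 0.0, len(p)
    for x in range(n):
        for xp in range(n):
            if xp != x and Kv[xp, x] > 0 and Kv[x, xp] > 0:
                a, b = Kv[xp, x] * p[x], Kv[x, xp] * p[xp]
                sigma += 0.5 * (a - b) * np.log(a / b)
    return sigma

zeta1, zeta2 = ep_rate(K1, p), ep_rate(K2, p)
assert zeta1 > 0 and zeta2 > 0   # each mechanism's contribution is positive here
```

The global EP rate of this toy system is then the sum zeta1 + zeta2 of the mechanism-specific contributions, as in the decomposition above.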
We denote the probability flow from x_ω → x'_ω due to mechanism v as A_{x'_ω x_ω}(v; t) = K_{x'_ω x_ω}(v; t) p_{x_ω}(t). We write the net probability current from x_ω → x'_ω due to mechanism v as J_{x'_ω x_ω}(v; t) = A_{x'_ω x_ω}(v; t) − A_{x_ω x'_ω}(v; t). The total net probability current from x_ω → x'_ω equals the sum of the probability currents due to each mechanism whose leader set is a subset of the unit ω:

$J_{x'_\omega x_\omega}(t) = \sum_{v : L(v) \subseteq \omega} J_{x'_\omega x_\omega}(v; t).$

Accordingly, we can decompose the master equation for the unit ω into probability currents induced by each mechanism:

$\frac{d}{dt} p_{x_\omega}(t) = \sum_{v : L(v) \subseteq \omega} \sum_{x'_\omega} J_{x_\omega x'_\omega}(v; t).$
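This decomposed master equation can be checked numerically. In the sketch below (hypothetical rates), the per-mechanism currents do not vanish individually at a stationary state, but their sum balances to zero for every state, as the decomposition requires.

```python
import numpy as np

# Hypothetical 4-state system with two mechanisms (illustrative rates).
def generator(transitions, n=4):
    K = np.zeros((n, n))
    for x, xp, r in transitions:
        K[xp, x] += r
    return K - np.diag(K.sum(axis=0))

K1 = generator([(0, 3, 1.0), (3, 0, 0.5), (1, 2, 0.7), (2, 1, 0.2)])
K2 = generator([(0, 1, 0.3), (1, 0, 0.9), (2, 3, 0.3), (3, 2, 0.9)])
K = K1 + K2

w, V = np.linalg.eig(K)                       # stationary distribution
p = np.real(V[:, np.argmin(np.abs(w))])
p = p / p.sum()

# J[v][x', x] = K_{x'x}(v) p_x - K_{xx'}(v) p_{x'}  (net current due to v)
J = {v: Kv * p - (Kv * p).T for v, Kv in (("v1", K1), ("v2", K2))}

# dp_x/dt = sum over mechanisms and target states of the incoming currents
dpdt = sum(Jv.sum(axis=1) for Jv in J.values())
assert np.allclose(dpdt, 0.0, atol=1e-10)      # stationarity of the sum
assert np.abs(J["v1"]).max() > 1e-6            # yet v1's current persists
```

The nonzero per-mechanism currents reflect a steady cycle driven through both mechanisms, invisible if one only inspects the total dp/dt.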

II. THERMODYNAMIC UNCERTAINTY RELATIONS FOR COMPOSITE PROCESSES
For any unit ω that is in an NESS, any antisymmetric linear function of the probability currents defines a current Ċ_ω. It can be divided into contributions from each mechanism:

$\dot C_\omega = \sum_{v : L(v) \subseteq \omega} \dot C_\omega(v), \qquad \dot C_\omega(v) = \frac{1}{2} \sum_{x_\omega, x'_\omega} C_{x'_\omega x_\omega} J_{x'_\omega x_\omega}(v),$

where C_{x'_ω x_ω} = −C_{x_ω x'_ω} is some antisymmetric function of state transitions, and we have dropped the time dependence in the steady state.
Importantly, the current contribution from each mechanism, Ċ_ω(v), is itself a current. So all of the thermodynamic uncertainty relations (TURs) hold for the time-integrated version of any such mechanism-specific current. In an NESS running for a time period of length τ, this mechanism-specific time-integrated current has mean C_ω(v) = τ Ċ_ω(v). Additionally, since every unit evolves according to its own CTMC, the TURs hold for each unit.
For example, the finite-time TUR bounds the precision of any current in a CTMC with respect to its EP [10, 22]. For a composite process, this holds for any unit and any arbitrary time-integrated current:

$\frac{\mathrm{Var}(C_\omega)}{\langle C_\omega \rangle^2} \geq \frac{2}{\sigma_\omega(\tau)}.$

Additionally, for any mechanism v : L(v) ⊆ ω and any associated current C_ω(v),

$\frac{\mathrm{Var}(C_\omega(v))}{\langle C_\omega(v) \rangle^2} \geq \frac{2}{\sigma_\omega(\tau)}.$

The vector-valued TUR following [11] holds for a vector of any set of (potentially mechanism-specific) currents {Ċ_ω} that are not linearly dependent:

$\langle \mathbf{C}_\omega \rangle^{T}\, \Xi_\omega^{-1}\, \langle \mathbf{C}_\omega \rangle \leq \frac{\sigma_\omega(\tau)}{2},$

where Ξ_ω^{-1} is the inverse of the covariance matrix of the associated time-integrated currents {C_ω}.
Any of these TURs can be used to bound the entropy production when one has limited access to the system, in the sense that one can only measure state transitions (i) due to some subset of the mechanisms influencing the system, or (ii) involving some subset of units in the system.

A. Information Flow TURs
One important quantity in an MPP is information flow [1, 9, 23]. Here, we extend the concept of information flow to composite processes. For any unit ω in an NESS, a set of subsystems A ⊂ ω, and a set of subsystems B ⊂ ω (for which A ∩ B = ∅), the information flow is the rate of decrease in the conditional entropy of the state of B given the state of A, due to state transitions in A. So, when ω is in an NESS, the information flow is a current for which

$C_{x'_\omega x_\omega} = \delta^{x'_{\omega \setminus A}}_{x_{\omega \setminus A}} \ln \frac{p(x_B \mid x'_A)}{p(x_B \mid x_A)}.$

The contribution to that information flow due to interactions of the unit with mechanism v is itself a current, obtained by restricting the sum to the probability currents J(v) induced by v. Since these information flows are currents, the TURs apply to them. This observation, in combination with Eq. (7), suggests that the precision of an information flow is (best) bounded by the reciprocal of the entropy production of the smallest unit that contains A ∪ B.
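Information flows can be computed directly at a NESS. The sketch below uses a hypothetical bipartite system (an MPP, i.e., a composite process with singleton puppet sets) in which each subsystem's flip rate depends on the other's state; at the NESS the two flows must cancel, since the mutual information between the subsystems is constant in time.

```python
import numpy as np
from itertools import product

# Hypothetical bipartite rates: subsystem a flips at a rate depending on b,
# and vice versa. Joint states (a, b) are flattened via idx(a, b) = 2a + b.
def idx(a, b):
    return 2 * a + b

K = np.zeros((4, 4))
for a, b in product((0, 1), repeat=2):
    K[idx(1 - a, b), idx(a, b)] += 1.0 + 0.8 * b + 0.3 * a   # a-mechanism
    K[idx(a, 1 - b), idx(a, b)] += 0.5 + 1.2 * a + 0.1 * b   # b-mechanism
K -= np.diag(K.sum(axis=0))

w, V = np.linalg.eig(K)                      # stationary distribution
p = np.real(V[:, np.argmin(np.abs(w))])
p = p / p.sum()
pA = np.array([p[0] + p[1], p[2] + p[3]])    # marginal of a
pB = np.array([p[0] + p[2], p[1] + p[3]])    # marginal of b

def flow_A():
    # current with weight ln p(b|a')/p(b|a), summed over jumps that change a
    f = 0.0
    for a, b in product((0, 1), repeat=2):
        J = K[idx(1 - a, b), idx(a, b)] * p[idx(a, b)] \
            - K[idx(a, b), idx(1 - a, b)] * p[idx(1 - a, b)]
        f += 0.5 * J * np.log((p[idx(1 - a, b)] / pA[1 - a])
                              / (p[idx(a, b)] / pA[a]))
    return f

def flow_B():
    # same, for jumps that change b, conditioning on a's marginal swapped for b's
    f = 0.0
    for a, b in product((0, 1), repeat=2):
        J = K[idx(a, 1 - b), idx(a, b)] * p[idx(a, b)] \
            - K[idx(a, b), idx(a, 1 - b)] * p[idx(a, 1 - b)]
        f += 0.5 * J * np.log((p[idx(a, 1 - b)] / pB[1 - b])
                              / (p[idx(a, b)] / pB[b]))
    return f

assert abs(flow_A() + flow_B()) < 1e-10      # the two flows cancel at the NESS
```

Because these rates violate detailed balance, the individual flows are generically nonzero: one subsystem continuously "learns" about the other at the expense of the other's entropy production.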

III. STRENGTHENED THERMODYNAMIC SPEED LIMITS FOR COMPOSITE PROCESSES
Here we derive a speed limit similar to the one in [15], but for composite processes. This speed limit is tighter than the one presented in that paper. Our analysis holds for an arbitrary unit ω (which could be the entire system N itself):

$l_\omega \leq \sum_{v : L(v) \subseteq \omega} A^{tot}_\omega(v; \tau)\, f\!\left( \frac{\zeta_v^\omega(\tau)}{A^{tot}_\omega(v; \tau)} \right),$

where f is any positive, monotonically increasing, concave function relating the total variation distance to the KL divergence [15], and where the dynamics occurs during the time period [0, τ].
Additionally, l_ω is the total variation distance between the initial (time-0) and final (time-τ) probability distributions over states of the unit ω, A^{tot}_ω(v; τ) is the total time-integrated dynamical activity due to mechanism v, and ζ_v^ω(τ) is the total contribution to the entropy production of unit ω due to interactions of ω with mechanism v.
We start by bounding the total variation distance between the initial and final (time-τ) probability distributions over states of the unit ω. In a composite process, we can further bound the integrand. We write the time-t "conditional probability distribution" of the forward process, under the counterfactual scenario that the process evolves with coupling only to mechanism v : L(v) ⊆ ω, as W_ω(v; t). Intuitively, this can be interpreted as the conditional probability that, if a jump occurs at time t due to mechanism v : L(v) ⊆ ω, the state before the jump was x_ω and the state afterwards was x'_ω. We write the same quantity for the reverse process as W̃_ω(v; t). The total variation distance between these matrices, d_TV(W_ω(v; t), W̃_ω(v; t)), represents how irreversible this counterfactual process (the one driven only by mechanism v) is at time t.

Using these definitions, we can rewrite Eq. (32), and plugging into Eq. (30), we obtain a bound in terms of these distances. We next make use of the fact that mechanism v's contribution to the EP rate of unit ω (Eq. (15)) can be written in terms of the Kullback-Leibler (KL) divergence between the conditional distributions of the forward and backward processes. Any positive monotonic concave function f relates the total variation distance to the KL divergence [15] according to

$d_{TV}(p, q) \leq f\!\left( D_{KL}(p \,\|\, q) \right).$

We can use this relationship to relate Eq. (37) to l_ω. Combining Eqs. (36) to (38), we define ζ_v^ω(τ) as the total (ensemble-average) contribution to the EP of unit ω caused by an interaction of the system with mechanism v during the time period [0, τ]. Also define A^{tot}_ω(v; τ) = ∫_0^τ dt A_ω(v; t) as the total (ensemble-average) number of state transitions in the unit ω that are caused by an interaction of the system with mechanism v.
Then, using the positivity of the dynamical activity and of the EP, together with the concavity of f, we can further bound the right-hand side to obtain a general speed limit for composite processes:

$l_\omega \leq \sum_{v : L(v) \subseteq \omega} A^{tot}_\omega(v; \tau)\, f\!\left( \frac{\zeta_v^\omega(\tau)}{A^{tot}_\omega(v; \tau)} \right).$

This result provides an upper bound on how much l_ω can change during the time interval [0, τ], in terms of the associated activity of ω and the contributions to the EP of ω. So Eq. (40) is a thermodynamic speed limit theorem. By comparison, the speed limit in [15] applied to a unit ω reads

$l_\omega \leq A^{tot}_\omega(\tau)\, f\!\left( \frac{\sigma_\omega(\tau)}{A^{tot}_\omega(\tau)} \right).$

For a composite process, the right-hand side of this "global" bound expands via A^{tot}_ω(τ) = Σ_v A^{tot}_ω(v; τ) and σ_ω(τ) = Σ_v ζ_v^ω(τ). By Jensen's inequality, the speed limit for composite processes (Eq. (40)) is always tighter than the speed limit provided by [15] (Eq. (41)). For a concave function f, a set of numbers x_v in its domain, and positive weights a_v, Jensen's inequality states that

$\sum_v a_v f(x_v) \leq \left( \sum_v a_v \right) f\!\left( \frac{\sum_v a_v x_v}{\sum_v a_v} \right).$

Setting a_v = A^{tot}_ω(v; τ) and x_v = ζ_v^ω(τ) / A^{tot}_ω(v; τ) proves that Eq. (40) is always tighter than Eq. (41). Intuitively, this occurs because we are able to define the mechanism-specific contributions to the EP and activity in a composite process.
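The Jensen comparison can be checked numerically. The per-mechanism activities and EP contributions below are hypothetical values, and we use the concave function f(x) = √(x/2) associated with Pinsker's inequality.

```python
import numpy as np

# Composite bound: sum_v A_v f(zeta_v / A_v); global bound: A f(sigma / A),
# with A = sum_v A_v and sigma = sum_v zeta_v. By Jensen's inequality for
# concave f, the composite bound is never looser than the global one.
f = lambda x: np.sqrt(x / 2.0)

A_v = np.array([3.0, 1.5, 0.4])       # hypothetical per-mechanism activities
zeta_v = np.array([0.2, 0.9, 0.05])   # hypothetical per-mechanism EPs

composite_bound = np.sum(A_v * f(zeta_v / A_v))
global_bound = A_v.sum() * f(zeta_v.sum() / A_v.sum())
assert composite_bound <= global_bound   # composite speed limit is tighter
```

Equality holds only when every mechanism has the same EP-per-jump ratio ζ_v / A_v, in which case Jensen's inequality is saturated.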
[15] provides some examples of acceptable functions f. For example, if we follow Pinsker's inequality and choose f(x) = √(x/2), then the speed limit provided by [15] collapses to the speed limit derived in [13]. If we plug this choice of f into Eq. (40), extract the parameter τ by using the average frequency of state transitions, and rearrange terms, we obtain a family of speed limits, the tightest of which is given by:

This particular speed limit tells us that the speed of the evolution of the system's probability distribution cannot be greater than the speed of evolution of the distribution over the coordinates of the "slowest-evolving" unit.
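The Pinsker choice of f can itself be verified numerically: for any pair of distributions, the total variation distance is bounded by √(D_KL/2). The distributions below are random hypothetical examples.

```python
import numpy as np

# Check Pinsker's inequality d_TV(p, q) <= sqrt(D_KL(p || q) / 2)
# on randomly drawn (hypothetical) distributions over 5 states.
rng = np.random.default_rng(1)
for _ in range(100):
    p = rng.dirichlet(np.ones(5))
    q = rng.dirichlet(np.ones(5))
    d_tv = 0.5 * np.abs(p - q).sum()          # total variation distance
    d_kl = np.sum(p * np.log(p / q))          # KL divergence
    assert d_tv <= np.sqrt(d_kl / 2.0) + 1e-12
```

Any other positive, monotonically increasing, concave function dominating this relation would give a valid, if generally looser, speed limit.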

IV. DISCUSSION
Here we have introduced the stochastic thermodynamics of composite processes. This work presents a preliminary analysis of how information flows in a composite process are constrained by the entropy productions of units. It also demonstrates that bounds on the speed of transforming a system's probability distribution over states can be tightened with knowledge of the contributions to the entropy production and dynamical activity from each mechanism with which the system interacts.
This work fits into a growing branch of research on the stochastic thermodynamics of constraints. One example of research in this area investigates the effect of constraints on the control protocol (the time sequence of rate matrices evolving the probability distribution) [24]. There has also been some important work where the "constraint" on such a many-degree-of-freedom classical system is simply that it be some very narrowly defined type of system, whose dynamics is specified by many different kinds of parameters. For example, there have been analyses of the stochastic thermodynamics of chemical reaction networks [6, 7], of electronic circuits [8, 25, 26], and of biological copying mechanisms [27]. This work analyzes the consequences of a major class of dynamical constraints that arises because many of these systems are most naturally modelled as a set of multiple co-evolving subsystems [1-4, 9, 28-30]. In particular, the main constraints on such systems are that only certain subsets of subsystems can simultaneously change state at a given time, and that the dependencies between subsystems impose restrictions on their joint dynamics.
There remain many avenues of potential future work, especially in the thermodynamics of computation. Many computational processes consist of multiple, co-evolving systems with the broad set of constraints that allows them to be easily modeled as a composite process. Research in this direction would first require formalizing the notion of computation in a composite process. One such computation, which equates to the identity map, is simply communication (information transmission). One could extend the recent study on the fundamental thermodynamic costs of communication [31] to tie Shannon information theory to the stochastic thermodynamics of composite processes. More generally, for any given computation, one could analyze the tradeoffs between the energy cost required to implement that computation and the performance (accuracy, time, etc.) of a composite process. In particular, there could be rich structure in how the properties of the dependency network in a composite process affect these trade-offs.

FIG. 1. Examples of systems whose dynamics can be modeled as composite processes. Each system consists of multiple subsystems (blue circles). Mechanisms are denoted as r, and their puppet sets P(r) are indicated by translucent white bubbles. (a) An example stochastic chemical reaction network consists of four co-evolving species {X1, X2, X3, X4} that change state according to three chemical reactions {A, B, C}. (b) An example toy circuit consists of four conductors {1, 2, 3, 4} that change state via interactions with three devices {A, B, C}.

FIG. 2. The dependency network specifies how the dynamics of each subsystem is governed by the state of other subsystems. This network defines the leader sets in a composite process.