Article

Entropy Production in Exactly Solvable Systems

by
Luca Cocconi
1,2,3,†,
Rosalba Garcia-Millan
1,2,4,†,
Zigan Zhen
1,2,
Bianca Buturca
5 and
Gunnar Pruessner
1,2,*
1
Department of Mathematics, Imperial College London, 180 Queen’s Gate, London SW7 2AZ, UK
2
Centre for Complexity Science, Imperial College London, London SW7 2AZ, UK
3
The Francis Crick Institute, 1 Midland Rd, London NW1 1AT, UK
4
DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK
5
Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ, UK
*
Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2020, 22(11), 1252; https://doi.org/10.3390/e22111252
Submission received: 8 October 2020 / Revised: 30 October 2020 / Accepted: 1 November 2020 / Published: 3 November 2020
(This article belongs to the Special Issue Nonequilibrium Thermodynamics and Stochastic Processes)

Abstract: The rate of entropy production by a stochastic process quantifies how far it is from thermodynamic equilibrium. Equivalently, entropy production captures the degree to which global detailed balance and time-reversal symmetry are broken. Despite abundant references to entropy production in the literature and its many applications in the study of non-equilibrium stochastic particle systems, a comprehensive list of typical examples illustrating the fundamentals of entropy production is lacking. Here, we present a brief, self-contained review of entropy production and calculate it from first principles in a catalogue of exactly solvable setups, encompassing both discrete- and continuous-state Markov processes, as well as single- and multiple-particle systems. The examples covered in this work provide a stepping stone for further studies on entropy production of more complex systems, such as many-particle active matter, as well as a benchmark for the development of alternative mathematical formalisms.

1. Introduction

Stochastic thermodynamics has progressively evolved into an essential tool in the study of non-equilibrium systems as it connects the quantities of interest in traditional thermodynamics, such as work, heat and entropy, to the properties of microscopically resolved fluctuating trajectories [1,2,3]. The possibility of equipping stochastic processes with a consistent thermodynamic and information-theoretic interpretation has resulted in a number of fascinating works, with the interface between mathematical physics and the biological sciences proving to be a particularly fertile ground for new insights (e.g., [4,5,6,7,8]). The fact that most of the applications live on the small scale is not surprising, since it is precisely at the microscopic scale that fluctuations start to play a non-negligible role.
The concept of entropy and, more specifically, entropy production has attracted particular interest, as a consequence of the quantitative handle it provides on the distinction between equilibrium systems, passive systems relaxing to equilibrium and genuinely non-equilibrium, ‘active’ systems. While there exist multiple routes to the mathematical formulation of entropy production [9,10,11,12,13,14], the underlying physical picture is consistent: the entropy production associated with an ensemble of stochastic trajectories quantifies the degree of certainty with which we can assert that a particular event originates from a given stochastic process or from its suitably defined conjugate (usually, its time-reverse). When averaged over long times (or over an ensemble), a non-vanishing entropy production signals time-reversal symmetry breaking at the microscopic scale. This implies, at least for Markovian systems, the existence of steady-state probability currents in the state space, which change sign under time-reversal. When a thermodynamically consistent description is available, the average rate of entropy production can be related to the rate of energy or information exchange between the system, the heat bath(s) it is connected to, and any other thermodynamic entity involved in the dynamics, such as a measuring device [15,16,17]. Whilst the rate of energy dissipation is of immediate interest since it captures how ‘costly’ it is to sustain specific dynamics (e.g., the metabolism sustaining the development of an organism [18,19]), entropy production has also been found to relate non-trivially to the efficiency and precision of the corresponding process via uncertainty relations [3,20]. Entropy production along fluctuating trajectories also plays a fundamental role in the formulation of various fluctuation theorems [12].
Given the recent interest in stochastic thermodynamics and entropy production in particular, as well as the increasing number of mathematical techniques implemented for the quantification of the latter, it is essential to have available a few well-understood reference systems for which exact results are known. These can play the role of benchmarks for new techniques, while helping neophytes to develop intuition. This is not to say that exact results for more complicated systems are not available (see, for example, [21]); however, they are usually limited to small systems and/or require numerical evaluation. In this work, we will present results exclusively in the framework proposed by Gaspard [11], specifically in the form of Equations (4), (15) and (16), which we review and contextualise by deriving them via different routes in Section 2. In Section 3, we begin the analysis with processes in discrete state space (Section 3.1, Section 3.2, Section 3.3, Section 3.4, Section 3.5, Section 3.6, Section 3.7 and Section 3.8), and subsequently extend it to the continuous case (Section 3.9, Section 3.10 and Section 3.11). Finally, in Section 3.12 and Section 3.13 we consider processes that involve both discrete and continuous degrees of freedom. Time is taken as a continuous variable throughout.

2. Brief Review of Entropy Production

Entropy production of jump processes. The concept of time-dependent informational entropy associated with a given ensemble of stochastic processes was first introduced by Shannon [22]. For an arbitrary probability mass function $P_n(t)$ of time $t$ over a discrete set of states $n \in \Omega$, the Shannon entropy is defined as
$$S(t) = -\sum_{n\in\Omega} P_n(t)\,\ln P_n(t) \tag{1}$$
with the convention henceforth of $x\ln x = 0$ for $x = 0$. It quantifies the inherent degree of uncertainty about the state of a process. In the microcanonical ensemble, $P_n$ is constant in $t$ and $n$, and upon providing an entropy scale in the form of the Boltzmann constant $k_B$, Shannon's entropy reduces to that of traditional thermodynamics, given by Boltzmann's $S = k_B\ln|\Omega|$, where $|\Omega| = 1/P_n$ is the cardinality of $\Omega$. In Markovian systems, the probability $P_n(t)$ depends on $n$ and evolves in time $t$ according to the master equation
$$\dot{P}_n(t) = \sum_m \left[P_m(t)\,w_{mn} - P_n(t)\,w_{nm}\right] \tag{2}$$
with non-negative transition rates $w_{mn}$ from state $m$ to state $n\neq m$. Equation (2) reduces to $\dot{P}_n(t) = \sum_m P_m(t)\,w_{mn}$ by imposing the Markov condition $\sum_m w_{nm} = 0$, equivalently $w_{nn} = -\sum_{m\neq n} w_{nm}$, which we will use in the following. For simplicity, we will restrict ourselves to time-independent rates $w_{nm}$, but as far as the following discussion is concerned, generalising to time-dependent rates is a matter of replacing $w_{nm}$ by $w_{nm}(t)$. The rate of change of entropy for a continuous-time jump process can be derived by differentiating $S(t)$ in Equation (1) with respect to time and substituting (2) into the resulting expression [11,23], thus obtaining
$$\dot{S}(t) = -\sum_{m,n} P_m(t)\,w_{mn}\,\ln P_n(t) = \sum_{m,n} P_n(t)\,w_{nm}\,\ln\frac{P_n(t)}{P_m(t)} = \dot{S}_e(t) + \dot{S}_i(t) \tag{3}$$
where we define
$$\dot{S}_e(t) = -\frac{1}{2}\sum_{m,n}\left(P_n(t)\,w_{nm} - P_m(t)\,w_{mn}\right)\ln\frac{w_{nm}}{w_{mn}} = -\sum_{m,n} P_n(t)\,w_{nm}\,\ln\frac{w_{nm}}{w_{mn}} = -\sum_{m,n}\left(P_n(t)\,w_{nm} - P_m(t)\,w_{mn}\right)\ln\frac{w_{nm}}{w_0} \tag{4a}$$
$$\dot{S}_i(t) = \frac{1}{2}\sum_{m,n}\left(P_n(t)\,w_{nm} - P_m(t)\,w_{mn}\right)\ln\frac{P_n(t)\,w_{nm}}{P_m(t)\,w_{mn}} = \sum_{m,n} P_n(t)\,w_{nm}\,\ln\frac{P_n(t)\,w_{nm}}{P_m(t)\,w_{mn}} = \sum_{m,n}\left(P_n(t)\,w_{nm} - P_m(t)\,w_{mn}\right)\ln\frac{P_n(t)\,w_{nm}}{w_0} \tag{4b}$$
with an arbitrary positive rate $w_0$ to restore dimensional consistency, whose contributions cancel trivially. Here we follow the convention [1] of splitting the rate of entropy change into two contributions: the first, Equation (4a), commonly referred to as "external" entropy production or entropy flow, is denoted by $\dot{S}_e$. It contains a factor $\ln(w_{nm}/w_{mn})$ corresponding, for systems satisfying local detailed balance, to the net change in entropy of the reservoir(s) associated with the system's transition from state $n$ to state $m$. For such thermal systems, $\dot{S}_e$ can thus be identified as the rate of entropy production in the environment [9,24]. The second contribution, Equation (4b), termed "internal" entropy production and denoted by $\dot{S}_i$, is non-negative because $(x-y)\ln(x/y)\geq 0$ for any two real, positive $x$, $y$, using the convention $z\ln z = 0$ for $z = 0$. The internal entropy production vanishes when the global detailed balance condition $P_n(t)\,w_{nm} = P_m(t)\,w_{mn}$ is satisfied for all pairs of states. In this sense, a non-vanishing $\dot{S}_i$ is the fingerprint of non-equilibrium phenomena. Defining $\pi_n = \lim_{t\to\infty} P_n(t)$ as the probability mass function at steady state, the internal entropy production rate can be further decomposed into two non-negative contributions, $\dot{S}_i(t) = \dot{S}_i^{\mathrm{a}}(t) + \dot{S}_i^{\mathrm{na}}(t)$, of the form
$$\dot{S}_i^{\mathrm{a}}(t) = \sum_{m,n} P_n(t)\,w_{nm}\,\ln\frac{\pi_n\,w_{nm}}{\pi_m\,w_{mn}} \tag{5a}$$
$$\dot{S}_i^{\mathrm{na}}(t) = \sum_{m,n} P_n(t)\,w_{nm}\,\ln\frac{P_n(t)\,\pi_m}{P_m(t)\,\pi_n} = -\sum_n \dot{P}_n(t)\,\ln\frac{P_n(t)}{\pi_n}. \tag{5b}$$
These contributions are usually referred to as adiabatic (or housekeeping) and non-adiabatic (or excess) entropy production rates, respectively [1]. The non-adiabatic contribution vanishes at steady state, $\lim_{t\to\infty}\dot{S}_i^{\mathrm{na}}(t) = 0$. While these quantities have received attention in the context of fluctuation theorems [23], they will not be discussed further here. At steady state, namely when $\dot{P}_n(t) = 0$ for all $n$, $\dot{S}(t)$ in Equation (3) vanishes by construction, so that the internal and external contributions to the entropy production cancel each other exactly, $\dot{S}(t) = \dot{S}_e(t) + \dot{S}_i(t) = 0$, while they vanish individually only for systems at equilibrium. Equation (4) will be used throughout the present work to compute the entropy productions of discrete-state processes.
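To make the use of Equation (4) concrete, the following minimal sketch (ours, not part of the paper) evaluates $\dot{S}_i$ and $\dot{S}_e$ for an arbitrary rate matrix; it assumes that any pair of states connected in one direction is also connected in the other, as required for the logarithms to exist, and the rates below are arbitrary choices.

```python
import numpy as np

def entropy_rates(P, w):
    """Internal entropy production and entropy flow, Eq. (4), for a
    continuous-time Markov chain with off-diagonal rates w[n, m] for the
    transition n -> m and instantaneous distribution P[n]."""
    S_i = S_e = 0.0
    for n in range(len(P)):
        for m in range(len(P)):
            if n == m or w[n, m] == 0.0 or w[m, n] == 0.0:
                continue  # assumes w[n, m] > 0 iff w[m, n] > 0
            J = P[n] * w[n, m] - P[m] * w[m, n]       # probability current
            S_i += 0.5 * J * np.log(P[n] * w[n, m] / (P[m] * w[m, n]))
            S_e -= 0.5 * J * np.log(w[n, m] / w[m, n])
    return S_i, S_e

# Two-state example (Section 3.1) away from stationarity:
alpha, beta = 2.0, 1.0
w = np.array([[0.0, alpha], [beta, 0.0]])
print(entropy_rates(np.array([0.5, 0.5]), w))
```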
Entropy production as a measure of time-reversal-symmetry breaking. As it turns out, a deeper connection between internal entropy production and time-reversal symmetry breaking can be established [11]. The result, which we re-derive below, identifies $\dot{S}_i$ as the relative dynamical entropy (i.e., the Kullback–Leibler divergence [25]) per unit time of the ensemble of forward paths and their time-reversed counterparts. To see this, we first need to define a path $\mathbf{n} = (n_0, n_1, \ldots, n_M)$ as the sequence of states $n_j$ visited by a trajectory starting at time $t_0$ and observed at successive discrete times $t_j = t_0 + j\tau$ with $j = 0, 1, \ldots, M$, equally spaced by a time interval $\tau$. For a time-homogeneous Markovian jump process in continuous time, the joint probability of observing a particular path is
$$P(\mathbf{n}; t_0, M\tau) = P_{n_0}(t_0)\,W(n_0\to n_1;\tau)\,W(n_1\to n_2;\tau)\cdots W(n_{M-1}\to n_M;\tau) \tag{6}$$
where $P_{n_0}(t_0)$ is the probability of observing the system in state $n_0$ at time $t_0$, while $W(n_j\to n_{j+1};\tau)$ is the probability that the system is in state $n_{j+1}$ a time $\tau$ after being in state $n_j$. This probability can be expressed in terms of the transition rate matrix $w$ with elements $w_{nm}$: it is $W(n\to m;\tau) = [\exp(w\tau)]_{nm}$, the matrix elements of the exponential of the matrix $w\tau$ with the Markov condition imposed. It can be expanded in small $\tau$ as
$$W(n\to m;\tau) = \delta_{n,m} + w_{nm}\,\tau + O(\tau^2), \tag{7}$$
where $\delta_{n,m}$ is the Kronecker $\delta$-function. We can now define a dynamical entropy per unit time [22] as
$$h(t_0,\Delta t) = -\lim_{M\to\infty}\frac{1}{\Delta t}\sum_{n_0,\ldots,n_M} P(\mathbf{n}; t_0, \Delta t)\,\ln P(\mathbf{n}; t_0, \Delta t), \tag{8}$$
where the limit is to be considered a continuous-time limit taken at fixed $\Delta t = t_M - t_0 = M\tau$ [26], thus determining the sampling interval $\tau$, and the sum runs over all possible paths $\mathbf{n}$. Other than through $\tau$, the paths are the only quantity on the right-hand side of Equation (8) that depends on $M$. The dynamical entropy $h(t_0,\Delta t)$ may be considered the expectation of $-\ln P(\mathbf{n}; t_0, \Delta t)$ per unit time across all paths. Similarly to the static Shannon entropy, the dynamical entropy $h(t_0,\Delta t)$ quantifies the inherent degree of uncertainty about the evolution over a time $\Delta t$ of a process starting at a given time $t_0$. To compare with the dynamics as observed under time reversal, one introduces the time-reversed path $\mathbf{n}^R = (n_M, n_{M-1}, \ldots, n_0)$ and thus the time-reversed dynamical entropy per unit time as
$$h^R(t_0,\Delta t) = -\lim_{M\to\infty}\frac{1}{\Delta t}\sum_{n_0,\ldots,n_M} P(\mathbf{n}; t_0, \Delta t)\,\ln P(\mathbf{n}^R; t_0, \Delta t). \tag{9}$$
While similar in spirit to $h(t_0,\Delta t)$, the physical interpretation of $h^R(t_0,\Delta t)$ as the expectation of $-\ln P(\mathbf{n}^R; t_0, \Delta t)$ under the forward probability $P(\mathbf{n}; t_0, \Delta t)$ is more convoluted, since it involves the forward and the backward paths simultaneously, which have potentially different statistics. However, time-reversal symmetry implies precisely identical statistics of the two ensembles, whence $h(t_0,\Delta t) = h^R(t_0,\Delta t)$. The motivation for introducing $h^R(t_0,\Delta t)$ is that the difference of the two dynamical entropies defined above is a non-negative Kullback–Leibler divergence given by
$$h^R(t_0,\Delta t) - h(t_0,\Delta t) = \lim_{M\to\infty}\frac{1}{\Delta t}\sum_{\mathbf{n}} P(\mathbf{n}; t_0, \Delta t)\,\ln\frac{P(\mathbf{n}; t_0, \Delta t)}{P(\mathbf{n}^R; t_0, \Delta t)}. \tag{10}$$
Using Equation (6) in (10) with Equation (7) provides the expansion
$$h^R(t_0,\Delta t) - h(t_0,\Delta t) = \sum_{n\neq m} P_n(t_0)\,w_{nm}\,\ln\frac{P_n(t_0)\,w_{nm}}{P_m(t_0)\,w_{mn}} + O(\Delta t), \tag{11}$$
which is an instantaneous measure of the Kullback–Leibler divergence. The limit of $h^R(t_0,\Delta t) - h(t_0,\Delta t)$ as $\Delta t\to 0$ is finite and identical to the internal entropy production (4b) derived above. This result establishes the profound connection between broken global detailed balance, Equation (4), and Kullback–Leibler divergence, Equation (11), both of which can thus be recognised as fingerprints of non-equilibrium systems. In light of this connection, it might not come as a surprise that the steady-state rate of entropy production is inversely proportional to the minimal time needed to decide on the direction of the arrow of time [27].
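The interpretation of $\dot{S}_i$ as a Kullback–Leibler divergence per unit time can be illustrated by brute force: sample discretised forward paths, compare their probability with that of their reverses, and divide by the path duration. The sketch below (ours, with arbitrary parameters, not the paper's code) does so for the three-state cycle of Section 3.2, started in its uniform stationary state so that the initial-state factors cancel in the log-ratio; the estimate approaches the stationary entropy production $(\alpha-\beta)\ln(\alpha/\beta)$ as $\tau\to 0$.

```python
import numpy as np
from scipy.linalg import expm

# Monte-Carlo estimate of h^R - h, Eq. (10), for the three-state cycle.
alpha, beta = 2.0, 1.0
w = np.array([[-(alpha + beta), alpha, beta],
              [beta, -(alpha + beta), alpha],
              [alpha, beta, -(alpha + beta)]])
tau, M = 0.02, 100
W = expm(w * tau)                       # W[n, m] = W(n -> m; tau)
rng = np.random.default_rng(0)
est = []
for _ in range(5000):
    path = [rng.integers(3)]            # uniform stationary initial state
    for _ in range(M):
        path.append(rng.choice(3, p=W[path[-1]]))
    steps = list(zip(path[:-1], path[1:]))
    logf = sum(np.log(W[a, b]) for a, b in steps)   # forward path
    logr = sum(np.log(W[b, a]) for a, b in steps)   # time-reversed path
    est.append((logf - logr) / (M * tau))
print(np.mean(est), (alpha - beta) * np.log(alpha / beta))
```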
Entropy production for continuous degrees of freedom. The results above were obtained for Markov jump processes on a discrete state space. However, the decomposition of the rate of change of entropy in Equation (3) into internal and external contributions can be readily generalised to Markovian processes with continuous degrees of freedom, for example a spatial coordinate. For simplicity, we will restrict ourselves to processes in one dimension, but as far as the following discussion is concerned, generalising to higher dimensions is a matter of replacing spatial derivatives and integrals over the spatial coordinate with their higher-dimensional counterparts. The dynamics of such a process with probability density $P(x,t)$ to find it at $x$ at time $t$ are captured by a Fokker–Planck equation of the form $\dot{P}(x,t) = -\partial_x j(x,t)$, with $j$ the probability current, augmented by an initial condition $P(x,0)$. Starting from the Gibbs–Shannon entropy for a continuous random variable, $S(t) = -\int\mathrm{d}x\,P(x,t)\ln(P(x,t)/P_0)$, with some arbitrary density scale $P_0$ for dimensional consistency, we differentiate with respect to time and substitute $-\partial_x j(x,t)$ for $\dot{P}(x,t)$ to obtain
$$\dot{S}(t) = -\int\mathrm{d}x\,\dot{P}(x,t)\,\ln\frac{P(x,t)}{P_0} = -\int\mathrm{d}x\,\frac{\left(\partial_x P(x,t)\right)j(x,t)}{P(x,t)}, \tag{12}$$
where the second equality follows upon integration by parts, using $\int\mathrm{d}x\,\dot{P}(x,t) = 0$ by normalisation. For the paradigmatic case of an overdamped colloidal particle, which will be discussed in more detail below (Section 3.9, Section 3.10 and Section 3.11), the probability current is given by $j(x,t) = -D\,\partial_x P(x,t) + \mu F(x,t)P(x,t)$ with local, time-dependent force $F(x,t)$. We can then decompose the entropy production $\dot{S}(t) = \dot{S}_i(t) + \dot{S}_e(t)$ into internal and external contributions as
$$\dot{S}_i(t) = \int\mathrm{d}x\,\frac{j(x,t)^2}{D\,P(x,t)} \geq 0 \tag{13}$$
and
$$\dot{S}_e(t) = -\int\mathrm{d}x\,\frac{\mu}{D}\,F(x,t)\,j(x,t), \tag{14}$$
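As a concrete check of Equations (13) and (14) (ours, not the paper's), one can integrate both expressions on a grid for a constant force $F = v$ with $\mu = 1$, using the Gaussian solution anticipated from Section 3.9 below; the grid is assumed wide enough that boundary contributions are negligible, and all parameter values are arbitrary.

```python
import numpy as np

# Quadrature check of Eqs. (13)-(14) for drift-diffusion (F = v, mu = 1).
D, v, t, x0 = 1.0, 0.5, 2.0, 0.0
x = np.linspace(x0 + v*t - 12.0, x0 + v*t + 12.0, 20001)
dx = x[1] - x[0]
P = np.exp(-(x - x0 - v*t)**2 / (4*D*t)) / np.sqrt(4*np.pi*D*t)
j = v * P - D * np.gradient(P, x)      # j = mu F P - D dP/dx
S_i = np.sum(j**2 / (D * P)) * dx      # Eq. (13): internal production
S_e = -np.sum(v * j / D) * dx          # Eq. (14): entropy flow
print(S_i, 1/(2*t) + v**2/D)           # should match Eq. (94)
print(S_e, -v**2/D)
```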
respectively. The Kullback–Leibler divergence between the densities of forward and time-reversed paths can be calculated as outlined above for discrete-state systems, thus producing an alternative expression for the internal entropy production in the form
$$\dot{S}_i(t) = \lim_{\Delta t\to 0}\left[h^R(t,\Delta t) - h(t,\Delta t)\right] = \lim_{\tau\to 0}\frac{1}{2\tau}\int\mathrm{d}x\,\mathrm{d}x'\,\left(P(x,t)\,W(x\to x';\tau) - P(x',t)\,W(x'\to x;\tau)\right)\ln\frac{P(x,t)\,W(x\to x';\tau)}{P(x',t)\,W(x'\to x;\tau)}. \tag{15}$$
Here we have introduced the propagator $W(x\to x';\tau)$, the probability density that a system observed at $x$ will be found at $x'$ a time $\tau$ later. In general, here and above, the density $W(x\to x';\tau)$ depends on the absolute time $t$, which we have omitted here for better readability. The corresponding expression for the entropy flow is obtained by substituting (15) into the balance equation $\dot{S}_e(t) = \dot{S}(t) - \dot{S}_i(t)$, whence
$$\dot{S}_e(t) = -\lim_{\tau\to 0}\frac{1}{2\tau}\int\mathrm{d}x\,\mathrm{d}x'\,\left(P(x,t)\,W(x\to x';\tau) - P(x',t)\,W(x'\to x;\tau)\right)\ln\frac{W(x\to x';\tau)}{W(x'\to x;\tau)}. \tag{16}$$
Since $\lim_{\tau\to 0}W(x\to x';\tau) = \delta(x-x')$ [28] and $P(x,t)\,\delta(x-x') = P(x',t)\,\delta(x-x')$, the factor in front of the logarithm in (15) and (16) vanishes in the limit of small $\tau$, $\lim_{\tau\to 0}\left[P(x,t)\,W(x\to x';\tau) - P(x',t)\,W(x'\to x;\tau)\right] = 0$. Together with the prefactor $1/\tau$, this necessitates the use of L'Hôpital's rule,
$$\lim_{\tau\to 0}\frac{1}{\tau}\left[P(x,t)\,W(x\to x';\tau) - P(x',t)\,W(x'\to x;\tau)\right] = P(x,t)\,\dot{W}(x\to x') - P(x',t)\,\dot{W}(x'\to x), \tag{17}$$
where we used the shorthand
$$\dot{W}(x\to x') := \lim_{\tau\to 0}\frac{\mathrm{d}}{\mathrm{d}\tau}W(x\to x';\tau), \tag{18}$$
which is generally given by the Fokker–Planck equation of the process, so that
$$\dot{P}(x',t) = \int\mathrm{d}x\,P(x,t)\,\dot{W}(x\to x'). \tag{19}$$
In the continuum processes considered below, in particular Section 3.11, Section 3.12 and Section 3.13, $\dot{W}(x\to x')$ is a kernel in the form of Dirac $\delta$-functions and derivatives thereof, acting under the integral as the adjoint Fokker–Planck operator on $P(x,t)$. With Equation (17), the internal entropy production of a continuous process (15) may conveniently be written as
$$\dot{S}_i(t) = \frac{1}{2}\int\mathrm{d}x\,\mathrm{d}x'\,\left(P(x,t)\,\dot{W}(x\to x') - P(x',t)\,\dot{W}(x'\to x)\right)\times\lim_{\tau\to 0}\ln\frac{P(x,t)\,W(x\to x';\tau)}{P(x',t)\,W(x'\to x;\tau)} \tag{20}$$
$$= \int\mathrm{d}x\,\mathrm{d}x'\,P(x,t)\,\dot{W}(x\to x')\times\lim_{\tau\to 0}\ln\frac{P(x,t)\,W(x\to x';\tau)}{P(x',t)\,W(x'\to x;\tau)}$$
$$= \int\mathrm{d}x\,\mathrm{d}x'\,\left(P(x,t)\,\dot{W}(x\to x') - P(x',t)\,\dot{W}(x'\to x)\right)\times\lim_{\tau\to 0}\ln\frac{P(x,t)\,W(x\to x';\tau)}{W_0\,P_0}$$
with suitable constants $W_0$ and $P_0$. Correspondingly, the (external) entropy flow (16) is
$$\dot{S}_e(t) = -\frac{1}{2}\int\mathrm{d}x\,\mathrm{d}x'\,\left(P(x,t)\,\dot{W}(x\to x') - P(x',t)\,\dot{W}(x'\to x)\right)\times\lim_{\tau\to 0}\ln\frac{W(x\to x';\tau)}{W(x'\to x;\tau)} \tag{21}$$
$$= -\int\mathrm{d}x\,\mathrm{d}x'\,P(x,t)\,\dot{W}(x\to x')\times\lim_{\tau\to 0}\ln\frac{W(x\to x';\tau)}{W(x'\to x;\tau)}$$
$$= -\int\mathrm{d}x\,\mathrm{d}x'\,\left(P(x,t)\,\dot{W}(x\to x') - P(x',t)\,\dot{W}(x'\to x)\right)\times\lim_{\tau\to 0}\ln\frac{W(x\to x';\tau)}{W_0}.$$
All of these expressions assume that the limits of the logarithms exist. Naively replacing them by $\ln\left(\delta(x-x')/\delta(x'-x)\right)$ produces a meaningless expression with a Dirac $\delta$-function in the denominator. Equations (20) and (21) are identically obtained in the same manner as Equation (4), with the master Equation (2) replaced by the Fokker–Planck Equation (19). All of these expressions, Equations (4), (20) and (21), may thus be seen as Gaspard's [11] framework.
Langevin description and stochastic entropy. We have seen in Equations (13) and (14) how the notion of entropy production can be extended to continuous degrees of freedom by means of a Fokker–Planck description of the stochastic dynamics. The Fokker–Planck equation is a deterministic equation for the probability density and thus provides a description at the level of ensembles, rather than single fluctuating trajectories. A complementary description can be provided by means of a Langevin equation of motion, which is instead a stochastic differential equation for the continuous degree of freedom [29]. The presence of an explicit noise term, which usually represents faster degrees of freedom or fluctuations induced by the contact with a heat reservoir, allows for a clearer thermodynamic interpretation. A paradigmatic example is that of the overdamped colloidal particle mentioned above, whose dynamics are described by
$$\dot{x}(t) = \mu F(x,t) + \zeta(t) \tag{22}$$
with $\mu$ a mobility, $F(x,t)$ a generic force and $\zeta(t)$ a white noise term with covariance $\langle\zeta(t)\zeta(t')\rangle = 2D\,\delta(t-t')$. For one-dimensional motion on the real line, the force $F(x,t)$ can always be written as the gradient of a potential $V(x,t)$, namely $F(x,t) = -\partial_x V(x,t)$, so that it is conservative. For time-independent, stable potentials, $V(x,t) = V(x)$, this leads over long timeframes to an equilibrium steady state. This property does not hold in higher dimensions or for different boundary conditions (e.g., periodic), in which case the force $F(x,t)$ need not have a corresponding potential $V(x,t)$ for which $F(x,t) = -\nabla V(x,t)$ [30].
The concept of entropy is traditionally introduced at the level of ensembles. However, due to its role in fluctuation theorems [1,24], a consistent definition at the level of single trajectories is required. This can be constructed along the lines of [12] by positing the trajectory-dependent entropy $S(x(t),t)$, where $x(t)$ is a random trajectory as given by Equation (22) and
$$S(x,t) = -\ln\left(P(x,t)/P_0\right). \tag{23}$$
Here, $P(x,t)$ denotes the probability density of finding a particle at position $x$ at time $t$, as introduced above, and $P_0$ is a scale as used above to maintain dimensional consistency. Given that $x(t)$ is a random variable, so is $S(x(t),t)$, which may be regarded as an instantaneous entropy. Taking the total derivative with respect to $t$ produces
$$\frac{\mathrm{d}}{\mathrm{d}t}S(x(t),t) = -\left.\frac{\partial_t P(x,t)}{P(x,t)}\right|_{x=x(t)} - \left.\frac{\partial_x P(x,t)}{P(x,t)}\right|_{x=x(t)}\circ\,\dot{x}(t) = -\left.\frac{\partial_t P(x,t)}{P(x,t)}\right|_{x=x(t)} + \frac{j(x(t),t)}{D\,P(x(t),t)}\circ\,\dot{x}(t) - \frac{\mu}{D}F(x(t),t)\circ\,\dot{x}(t), \tag{24}$$
where we have used the process's Fokker–Planck equation $\partial_t P(x,t) = -\partial_x j(x,t)$ with $j(x,t) = \mu F(x,t)P(x,t) - D\,\partial_x P(x,t)$. The total time derivative has been taken as a conventional derivative, implying the Stratonovich convention indicated by $\circ$, which will become relevant below. The term in (24) containing $\partial_t P(x,t)$ accounts for changes in the probability density due to its temporal evolution, such as relaxation to a steady state, and any time-dependent driving protocol. The product $F(x(t),t)\circ\dot{x}(t)$ can be interpreted as the power expended by the force and, in the absence of an internal energy of the particle, dissipated into the medium. With Einstein's relation defining the temperature $T = D/\mu$ of the medium, the last term may be written as
$$\dot{S}_m(t) = \frac{F(x(t),t)\circ\dot{x}(t)}{T} \tag{25}$$
and thus interpreted as the entropy change in the medium. Together with the entropy change of the particle, this gives the total entropy change of particle and medium,
$$\dot{S}_{\mathrm{tot}}(t) = \frac{\mathrm{d}}{\mathrm{d}t}S(x(t),t) + \dot{S}_m(t) = -\left.\frac{\partial_t P(x,t)}{P(x,t)}\right|_{x=x(t)} + \frac{j(x(t),t)}{D\,P(x(t),t)}\circ\,\dot{x}(t), \tag{26}$$
which is a random variable, as it depends on the position $x(t)$. It also draws on $P(x,t)$ and $j(x,t)$, which are properties of the ensemble. To make the connection to the entropies constructed above, we need to take an ensemble average of the instantaneous $\dot{S}_{\mathrm{tot}}(t)$. To do so, we need an interpretation of the last term of (26), where the noise $\zeta(t)$ of $\dot{x}(t)$, Equation (22), multiplies $j(x(t),t)/P(x(t),t)$. Equivalently, we need the joint density $P(x,\dot{x};t)$ of position $x$ and velocity $\dot{x}$ at time $t$. In the spirit of Ito, this density trivially factorises into a normal distribution of $\dot{x} - \mu F(x,t)$ and $P(x,t)$, as the increment $\dot{x}\,\mathrm{d}t$ on the basis of (22) depends only on the particle's current position $x(t)$. However, this is not so in the Stratonovich interpretation of $P(x,\dot{x};t)$, as here the increment depends equally on $x(t)$ and $x(t+\mathrm{d}t)$ [1,31,32]. Taking the ensemble average of $\dot{S}_{\mathrm{tot}}$ thus produces
$$\langle\dot{S}_{\mathrm{tot}}(t)\rangle = \int\mathrm{d}x\,\mathrm{d}\dot{x}\;\dot{S}_{\mathrm{tot}}(t)\,P(x,\dot{x};t) = -\int\mathrm{d}x\,\frac{\partial_t P(x,t)}{P(x,t)}\int\mathrm{d}\dot{x}\,P(x,\dot{x};t) + \int\mathrm{d}x\,\mathrm{d}\dot{x}\,\frac{j(x,t)}{D\,P(x,t)}\,\dot{x}\,P(x,\dot{x};t), \tag{27}$$
where $x$ and $\dot{x}$ are now dummy variables. The first term on the right-hand side vanishes, because $P(x,t) = \int\mathrm{d}\dot{x}\,P(x,\dot{x};t)$ is the marginal of $P(x,\dot{x};t)$ and $\int\mathrm{d}x\,\partial_t P(x,t) = 0$ by normalisation. The integral over $\dot{x}$ in the second term produces the expected particle velocity conditional on its position,
$$\langle\dot{x}\,|\,x,t\rangle = \frac{\int\mathrm{d}\dot{x}\;\dot{x}\,P(x,\dot{x};t)}{P(x,t)} \tag{28}$$
in the Stratonovich sense, where it gives rise to the current [12], $\langle\dot{x}\,|\,x,t\rangle = j(x,t)/P(x,t)$, so that
$$\langle\dot{S}_{\mathrm{tot}}(t)\rangle = \int\mathrm{d}x\,\frac{j^2(x,t)}{D\,P(x,t)} \geq 0, \tag{29}$$
which vanishes only in the absence of any probability current, i.e., in thermodynamic equilibrium. In the Ito sense, the conditional expectation (28) would instead have given rise to the ensemble-independent drift, $\langle\dot{x}\,|\,x,t\rangle = \mu F(x,t)$. Comparing to Equation (13), the expectation $\langle\dot{S}_{\mathrm{tot}}(t)\rangle$ turns out to be the internal entropy production $\dot{S}_i(t)$, so that $\dot{S}_{\mathrm{tot}}(t)$ of Equation (26) may be regarded as its instantaneous counterpart.
Path integral methods. An interesting aspect of working with the Langevin description is the possibility of casting probability densities $p([x];t)$ for paths $x(t')$ with $t'\in[0,t]$ into path integrals, for example in the Onsager–Machlup formalism [33,34]. For the colloidal particle introduced in (22), it gives $p([x];t) = \mathcal{N}\exp\left(\mathcal{A}([x];t)\right)$ with the action functional
$$\mathcal{A}([x];t) = -\int_0^t\mathrm{d}t'\,\frac{\left(\dot{x}(t') - \mu F(x(t'),t')\right)^{\circ 2}}{4D} - \frac{\mu}{2}\int_0^t\mathrm{d}t'\,\partial_x F(x(t'),t') \tag{30}$$
in the Stratonovich discretisation, which differs from the Ito form only by the second term ([34], Section 4.5), the Jacobian of the transform of the noise $\zeta(t)$ to $x(t)$, Equation (22). The Stratonovich form is needed so that the action does not give preference to a particular time direction [35]. This choice plays a role in every product of white noise, as is implicit in $\dot{x}$, and a random variable. We therefore indicate the choice by a $\circ$ also in powers, reminding us that $F(x(t),t)$ should be read as $F\left((x(t)+x(t+\Delta t))/2,\,t\right)$ and $\dot{x}(t)$ as $(x(t+\Delta t)-x(t))/\Delta t$, with discretisation time step $\Delta t$. Evaluating the action for the reversed path $x^R(t') = x(t-t')$ then gives
$$\mathcal{A}([x^R];t) = -\int_0^t\mathrm{d}t'\,\frac{\left(\dot{x}^R(t') - \mu F(x^R(t'),t')\right)^{\circ 2}}{4D} - \frac{\mu}{2}\int_0^t\mathrm{d}t'\,\partial_x F(x^R(t'),t') \tag{31}$$
$$= -\int_0^t\mathrm{d}t'\,\frac{\left(\dot{x}(t') + \mu F(x(t'),t-t')\right)^{\circ 2}}{4D} - \frac{\mu}{2}\int_0^t\mathrm{d}t'\,\partial_x F(x(t'),t-t'). \tag{32}$$
If the force is even under time reversal, $F(x,t') = F(x,t-t')$, in particular when it is independent of time, the path probability density obeys
$$\ln\frac{p([x];t)}{p([x^R];t)} = \int_0^t\mathrm{d}t'\,\frac{F(x(t'),t')\circ\dot{x}(t')}{T} = S_m(t), \tag{33}$$
with random variables multiplied using the Stratonovich convention. With Equation (25), the integral in Equation (33) can be identified as the entropy change of the medium. When the driving is time-independent and the system's probability distribution eventually becomes stationary, such that the trajectory entropy (23) no longer changes on average, the only contribution to the total entropy change (26) is the change of entropy in the medium. Assuming that the system is ergodic, we have the equivalence $\lim_{t\to\infty}S_m(t)/t = \lim_{t\to\infty}\langle\dot{S}_{\mathrm{tot}}(t)\rangle$, where $\langle\cdot\rangle$ denotes an ensemble average. Using Equations (13) and (29) gives $\lim_{t\to\infty}S_m(t)/t = \lim_{t\to\infty}\dot{S}_i(t)$. Equation (33) can therefore be used directly to compute the steady-state internal entropy production rate. The equivalence between the long-time limit $t\to\infty$ and the ensemble average holds only for ergodic systems, whose unique steady state does not depend on the specific initialisation $x(0)$. This connection between stochastic thermodynamics and field theory has stimulated a number of works aimed at characterising the non-equilibrium features of continuum models of active matter [13,36]. Extensions of this formalism to systems driven by correlated noise have also been proposed [37].
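A minimal numerical illustration of Equation (33) (ours, with arbitrary parameters): for a constant driving force $F = v$ on a ring, with $\mu = 1$ and hence $T = D$, the medium entropy rate $S_m(t)/t$ should approach $v^2/D$. For a constant force the Stratonovich midpoint evaluation is immaterial, which keeps the sketch elementary; the ring enters only conceptually, by making the constant drive non-conservative.

```python
import numpy as np

# Euler-Maruyama sampling of S_m(t)/t, Eq. (33), for F = v (mu = 1, T = D).
rng = np.random.default_rng(0)
D, v = 1.0, 0.5
dt, steps = 1e-3, 10**6
dx = v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(steps)  # increments
S_m = np.sum(v * dx) / D        # accumulated F o dx / T; F is constant here
print(S_m / (steps * dt), v**2 / D)   # approximate agreement expected
```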

3. Systems

In this section, we calculate the entropy production rate on the basis of Gaspard’s framework [11], Equations (4), (15) and (16), for different particle systems. We cover the systems listed in Table 1, with both discrete and continuous states and with one or multiple particles.

3.1. Two-State Markov Process

Consider a particle that hops between two states, 1 and 2, with transition rates $\dot{W}(1\to 2) = \alpha$ and $\dot{W}(2\to 1) = \beta$, see Figure 1 [23,38], using the notation of Equation (18) for discrete states. The rate matrix (see Equation (7)) is thus
$$w = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}, \tag{34}$$
with $P(t) = (P_1(t), P_2(t))$ the probabilities of the particle being in state 1 or 2, respectively, as a function of time. By normalisation, $P_1(t) + P_2(t) = 1$, with probabilistic initial condition $P(0) = (p, 1-p)$. Solving the master Equation (2) yields
$$P(t) = (P_1(t), P_2(t)) = \frac{1}{\alpha+\beta}\left(\beta + r\,e^{-(\alpha+\beta)t},\;\alpha - r\,e^{-(\alpha+\beta)t}\right), \tag{35}$$
with $r = \alpha p - \beta(1-p)$, corresponding to an exponentially decaying probability current
$$P_1(t)\,\alpha - P_2(t)\,\beta = r\,e^{-(\alpha+\beta)t}. \tag{36}$$
The internal entropy production (4b) is then
$$\dot{S}_i(t) = \left[P_1(t)\,\alpha - P_2(t)\,\beta\right]\ln\frac{P_1(t)\,\alpha}{P_2(t)\,\beta} = r\,e^{-(\alpha+\beta)t}\,\ln\frac{1 + \frac{r}{\beta}\,e^{-(\alpha+\beta)t}}{1 - \frac{r}{\alpha}\,e^{-(\alpha+\beta)t}}, \tag{37}$$
see Figure 2, and the entropy flow (4a),
$$\dot{S}_e(t) = -r\,e^{-(\alpha+\beta)t}\,\ln\frac{\alpha}{\beta}. \tag{38}$$
At stationarity, $\dot{S}_i = \dot{S}_e = 0$, and therefore the two-state Markov process reaches equilibrium. In this example, the topology of the transition network does not allow a sustained current between states, which inevitably leads to equilibrium in the steady state; entropy is therefore produced only during the relaxation of the system from its initial state.
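The closed forms (36)–(38) are straightforward to evaluate; the sketch below (ours, not from the paper, with arbitrary parameter values) prints the exponential decay of both rates for an arbitrary initialisation.

```python
import numpy as np

# Evaluation of Eqs. (36)-(38) for the two-state process.
alpha, beta, p = 2.0, 1.0, 0.9
r = alpha * p - beta * (1 - p)
for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    c = r * np.exp(-(alpha + beta) * t)                   # current, Eq. (36)
    S_i = c * np.log((1 + c / beta) / (1 - c / alpha))    # Eq. (37)
    S_e = -c * np.log(alpha / beta)                       # Eq. (38)
    print(f"t={t}: S_i={S_i:.4f}, S_e={S_e:.4f}")
```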

3.2. Three-State Markov Process

We extend the system in Section 3.1 to three states, 1, 2 and 3, with transition rates $\dot{W}(1\to 2) = \dot{W}(2\to 3) = \dot{W}(3\to 1) = \alpha$ and $\dot{W}(2\to 1) = \dot{W}(3\to 2) = \dot{W}(1\to 3) = \beta$, see Figure 3, using the notation of Equation (18) for discrete states. The rate matrix (see Equation (7)) is then
$$w = \begin{pmatrix} -(\alpha+\beta) & \alpha & \beta \\ \beta & -(\alpha+\beta) & \alpha \\ \alpha & \beta & -(\alpha+\beta) \end{pmatrix}. \tag{39}$$
Assuming the initial condition $P(0) = (1,0,0)$, the probabilities of states 1, 2 and 3, respectively, evolve according to Equation (2), which has solution
$$P_1(t) = \frac{1}{3}\left(1 + 2\,e^{-3\phi t}\cos(\sqrt{3}\,\psi t)\right), \tag{40a}$$
$$P_2(t) = \frac{1}{3}\left(1 - 2\,e^{-3\phi t}\cos(\sqrt{3}\,\psi t - \pi/3)\right), \tag{40b}$$
$$P_3(t) = \frac{1}{3}\left(1 - 2\,e^{-3\phi t}\cos(\sqrt{3}\,\psi t + \pi/3)\right), \tag{40c}$$
with $\phi = (\alpha+\beta)/2$ and $\psi = (\alpha-\beta)/2$.
The entropy production (4b) is then, using (40),
$$\dot{S}_i(t) = (\alpha-\beta)\ln\frac{\alpha}{\beta} + \left[P_1(t)\,\alpha - P_2(t)\,\beta\right]\ln\frac{P_1(t)}{P_2(t)} + \left[P_2(t)\,\alpha - P_3(t)\,\beta\right]\ln\frac{P_2(t)}{P_3(t)} + \left[P_3(t)\,\alpha - P_1(t)\,\beta\right]\ln\frac{P_3(t)}{P_1(t)}, \tag{41}$$
see Figure 2, and the entropy flow (4a),
$$\dot{S}_e(t) = -(\alpha-\beta)\ln\frac{\alpha}{\beta}, \tag{42}$$
which is constant in time. At stationarity, the system is uniformly distributed and, if $\alpha\neq\beta$, the entropy production and flow satisfy $\dot{S}_i = -\dot{S}_e > 0$: the particle has a net drift that sustains a probability current $(\alpha-\beta)/3$ around the cycle, which prevents the system from reaching equilibrium. This setup can be generalised straightforwardly to $M$-state Markov processes characterised by the same cyclic structure, $\dot{W}(i\to i+1\ (\mathrm{mod}\ M)) = \alpha$ and $\dot{W}(i\to i-1\ (\mathrm{mod}\ M)) = \beta$, to find that the steady-state entropy production is independent of $M$. These can be seen as simple models of protein conformational cycles driven by instantaneous energy inputs of order $k_B T\ln(\alpha/\beta)$, for example ATP hydrolysis [39].
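A quick numerical evaluation (ours, with arbitrary rates) of Equations (40)–(42) illustrates the relaxation of $\dot{S}_i$ towards the non-zero stationary value:

```python
import numpy as np

# Evaluation of Eqs. (40)-(41) on the three-state cycle.
alpha, beta = 2.0, 1.0
phi, psi = (alpha + beta) / 2, (alpha - beta) / 2
for t in (0.1, 0.5, 1.0, 3.0):
    e = 2 * np.exp(-3 * phi * t)
    P = np.array([1 + e * np.cos(np.sqrt(3) * psi * t),
                  1 - e * np.cos(np.sqrt(3) * psi * t - np.pi / 3),
                  1 - e * np.cos(np.sqrt(3) * psi * t + np.pi / 3)]) / 3
    S_i = (alpha - beta) * np.log(alpha / beta) + sum(
        (P[i] * alpha - P[(i + 1) % 3] * beta) * np.log(P[i] / P[(i + 1) % 3])
        for i in range(3))
    print(t, S_i)
print((alpha - beta) * np.log(alpha / beta))   # stationary value, Eq. (42)
```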

3.3. Random Walk on a Complete Graph

Consider a random walker on a complete graph with $d$ nodes, where each node is connected to all other nodes, and the walker jumps from node $j\in\{1,2,\ldots,d\}$ to node $k\in\{1,2,\ldots,d\}$, $k\neq j$, with rate $w_{jk}$, see Figure 4. These are the off-diagonal elements of the corresponding Markov matrix, whose diagonal elements are $w_{jj} = -\sum_{i=1,i\neq j}^d w_{ji}$. The probability vector $P(t) = (P_1(t), P_2(t),\ldots,P_d(t))$ has components $P_j(t)$, the probability that the system is in state $j$ at time $t$. The general case of arbitrary transition rates is impossible to discuss exhaustively. In the uniform case, $w_{jk} = \alpha$, the Markov matrix has only two distinct eigenvalues, namely eigenvalue $-\alpha d$ with degeneracy $d-1$ and eigenvalue $0$ with degeneracy $1$. Assuming an arbitrary initial condition $P(0)$, the probability distribution at a later time $t$ is
$$P_j(t) = \frac{1}{d} + e^{-d\alpha t}\left(P_j(0) - \frac{1}{d}\right). \tag{43}$$
The steady state, which is associated with the vanishing eigenvalue, is the uniform distribution, $\lim_{t\to\infty}P_j(t) = 1/d$ for all $j\in\{1,2,\ldots,d\}$. The entropy production (4b) of the initial state relaxing to the uniform state is
$$\dot{S}_i(t) = \frac{1}{2}\,\alpha\,e^{-d\alpha t}\sum_{j,k}\left(P_j(0) - P_k(0)\right)\ln\frac{1 + e^{-d\alpha t}\left(P_j(0)\,d - 1\right)}{1 + e^{-d\alpha t}\left(P_k(0)\,d - 1\right)}, \tag{44}$$
and the entropy flow (4a) is $\dot{S}_e = 0$ throughout, since all rates are symmetric, $w_{jk} = w_{kj} = \alpha$. If the walker is initially located on node $k$, so that $P_j(0) = \delta_{j,k}$, the entropy production simplifies to
$$\dot{S}_i(t) = (d-1)\,\alpha\,e^{-d\alpha t}\,\ln\left(1 + \frac{d\,e^{-d\alpha t}}{1 - e^{-d\alpha t}}\right). \tag{45}$$
We can see that the system reaches equilibrium at stationarity, since $\lim_{t\to\infty}\dot{S}_i(t) = \dot{S}_e(t) = 0$. Over long timeframes ($d\,e^{-d\alpha t}\ll 1$), the asymptotic behaviour of $\dot{S}_i$ is
$$\dot{S}_i(t) = d(d-1)\,\alpha\,e^{-2d\alpha t} + O\!\left(e^{-3d\alpha t}\right), \tag{46}$$
obtained by expanding the logarithm in the small exponential.

3.4. N Independent, Distinguishable Markov Processes

In the following, we consider $N$ non-interacting, distinguishable particles undergoing Markovian dynamics on a discrete state space, see Figure 5. Each of the $N$ particles carries an index $\ell\in\{1,2,\ldots,N\}$ and is in state $n_\ell\in\{1,2,\ldots,d\}$, so that the state of the entire system is given by an $N$-particle state $\mathbf{n} = (n_1, n_2,\ldots,n_N)$. Particle distinguishability implies the factorisation of state and transition probabilities into their single-particle contributions, whence the joint probability $P_{\mathbf{n}}(t)$ of an $N$-particle state $\mathbf{n}$ factorises into a product of the single-particle probabilities $P^{(\ell)}_{n_\ell}(t)$ of particle $\ell$ being in state $n_\ell$,
$$P_{\mathbf{n}}(t) = \prod_{\ell=1}^N P^{(\ell)}_{n_\ell}(t). \tag{47}$$
Further, the Poissonian rate $w_{\mathbf{nm}}$ from $N$-particle state $\mathbf{n}$ to $N$-particle state $\mathbf{m}\neq\mathbf{n}$ vanishes for all transitions $\mathbf{n}\to\mathbf{m}$ that differ in more than one component, i.e., $w_{\mathbf{nm}} = 0$ unless there exists a single $\ell\in\{1,2,\ldots,N\}$ such that $m_k = n_k$ for all $k\neq\ell$, in which case $w_{\mathbf{nm}} = w^{(\ell)}_{n_\ell m_\ell}$, the transition rate of the single-particle transition of particle $\ell$.
The entropy production of this $N$-particle system according to Equation (4b),
$$\dot{S}_i(t) = \frac{1}{2}\sum_{\mathbf{n}\neq\mathbf{m}}\left(P_{\mathbf{n}}(t)\,w_{\mathbf{nm}} - P_{\mathbf{m}}(t)\,w_{\mathbf{mn}}\right)\ln\frac{P_{\mathbf{n}}(t)\,w_{\mathbf{nm}}}{P_{\mathbf{m}}(t)\,w_{\mathbf{mn}}}, \tag{48}$$
simplifies considerably due to the structure of $w_{\mathbf{nm}}$, as the sum may be rewritten as
$$\sum_{\mathbf{n}\neq\mathbf{m}}(\cdots)\,w_{\mathbf{nm}} = \sum_{\mathbf{n}}\sum_{\ell=1}^N\sum_{m_\ell\neq n_\ell}(\cdots)\,w_{\mathbf{n}\mathbf{m}_\ell} \tag{49}$$
with $\mathbf{m}_\ell = (n_1, n_2,\ldots,n_{\ell-1}, m_\ell, n_{\ell+1},\ldots,n_N)$, so that $w_{\mathbf{n}\mathbf{m}_\ell} = w^{(\ell)}_{n_\ell m_\ell}$ and
$$\dot{S}_i(t) = \frac{1}{2}\sum_{\mathbf{n}}\sum_{\ell=1}^N\sum_{m_\ell\neq n_\ell}\left\{\prod_{k=1}^N P^{(k)}_{n_k}(t)\,w^{(\ell)}_{n_\ell m_\ell} - \prod_{k=1}^N P^{(k)}_{m_k}(t)\,w^{(\ell)}_{m_\ell n_\ell}\right\}\ln\frac{\prod_{k=1}^N P^{(k)}_{n_k}(t)\,w^{(\ell)}_{n_\ell m_\ell}}{\prod_{k=1}^N P^{(k)}_{m_k}(t)\,w^{(\ell)}_{m_\ell n_\ell}}. \tag{50}$$
Since $m_k = n_k$ for any $k\neq\ell$ inside the curly bracket, we may write
$$\prod_{k=1}^N P^{(k)}_{n_k}(t) = P^{(\ell)}_{n_\ell}(t)\prod_{\substack{k=1\\k\neq\ell}}^N P^{(k)}_{n_k}(t) \qquad\text{and}\qquad \prod_{k=1}^N P^{(k)}_{m_k}(t) = P^{(\ell)}_{m_\ell}(t)\prod_{\substack{k=1\\k\neq\ell}}^N P^{(k)}_{n_k}(t). \tag{51}$$
The product $\prod_{k\neq\ell} P^{(k)}_{n_k}(t)$ can thus be taken outside the curly bracket in Equation (50) and be summed over, as well as cancelled in the logarithm. After changing the dummy variables in the remaining summation from $n_\ell$ and $m_\ell$ to $n$ and $m$, respectively, the entropy production is
$$\dot{S}_i(t) = \frac{1}{2}\sum_{\ell=1}^N\sum_{n\neq m}\left(P^{(\ell)}_n(t)\,w^{(\ell)}_{nm} - P^{(\ell)}_m(t)\,w^{(\ell)}_{mn}\right)\ln\frac{P^{(\ell)}_n(t)\,w^{(\ell)}_{nm}}{P^{(\ell)}_m(t)\,w^{(\ell)}_{mn}}, \tag{52}$$
which is the sum of the entropy productions of the single particles $\ell\in\{1,2,\ldots,N\}$, Equation (4b), irrespective of how each particle is initialised. The same argument applies to $\dot{S}_e$, the entropy flow, Equation (4a). The entropy production and flow obviously simplify to $N$-fold multiples of the single-particle expressions if the $w^{(\ell)}_{nm}$ do not depend on $\ell$ and all particles are initialised with the same $P_n(0)$, independent of $\ell$. This result may equally be found from the dynamical entropy per unit time, Equation (8).
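Equation (52) is easy to verify numerically by building the joint process explicitly. The sketch below (ours, not from the paper) constructs the composite rate matrix for $N = 2$ distinguishable particles with arbitrary, hypothetical rates and compares the entropy production of the product process with the sum of the single-particle ones.

```python
import numpy as np

def S_i(P, w):
    """Internal entropy production, Eq. (4b)."""
    tot = 0.0
    for n in range(len(P)):
        for m in range(len(P)):
            if m != n and w[n, m] > 0 and w[m, n] > 0:
                tot += 0.5 * (P[n]*w[n, m] - P[m]*w[m, n]) * \
                       np.log(P[n]*w[n, m] / (P[m]*w[m, n]))
    return tot

w1 = np.array([[0., 2.], [1., 0.]])                          # particle 1
w2 = np.array([[0., .5, 1.], [1., 0., 2.], [.3, 1., 0.]])    # particle 2
P1, P2 = np.array([.7, .3]), np.array([.2, .5, .3])
d1, d2 = 2, 3
W = np.zeros((d1*d2, d1*d2))     # joint state (a, b) flattened to a*d2 + b
for a in range(d1):
    for b in range(d2):
        for a2 in range(d1):     # only one particle moves at a time
            if a2 != a:
                W[a*d2 + b, a2*d2 + b] = w1[a, a2]
        for b2 in range(d2):
            if b2 != b:
                W[a*d2 + b, a*d2 + b2] = w2[b, b2]
P = np.kron(P1, P2)              # factorised joint distribution, Eq. (47)
print(S_i(P, W), S_i(P1, w1) + S_i(P2, w2))   # should agree, Eq. (52)
```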

3.5. N Independent, Indistinguishable Two-State Markov Processes

Suppose that $N$ identical, indistinguishable, non-interacting particles follow the two-state Markov process described in Section 3.1, Figure 6 [23]. There are $|\Omega| = N+1$ distinct states, given by the occupation number $n\in\{0,1,\ldots,N\}$ of one of the two states, say state 1, as the occupation number of the other state follows as $N-n$, the particle number $N$ being fixed under the dynamics. In the following, $P(n,t)$ denotes the probability of finding $n$ particles in state 1 at time $t$. The master equation is then
$$\dot{P}(n,t) = -\alpha n P(n,t) + \alpha(n+1)P(n+1,t) - \beta(N-n)P(n,t) + \beta(N-n+1)P(n-1,t). \tag{53}$$
The state space and the evolution in it can be thought of as a hopping process on a one-dimensional chain of states with non-uniform rates. Provided $P(n,0)$ initially follows a binomial distribution, $P(n,0) = \binom{N}{n}p^n(1-p)^{N-n}$ with probability $p$ for a particle to be placed in state 1 initially, the solution of Equation (53) is easily constructed from the solution $P_1(t)$ in Equation (35) of Section 3.1 via
$$P(n,t) = \binom{N}{n}P_1^n(t)\left(1 - P_1(t)\right)^{N-n} \qquad\text{for } 0\leq n\leq N, \tag{54}$$
with $P_1(0) = p$ and $\dot{P}_1(t) = -\alpha P_1(t) + \beta(1-P_1(t))$, which can be verified by substituting Equation (54) into Equation (53). Using Equations (34) and (54) in (4b), the entropy production reads
$$\dot{S}_i(t) = \sum_{n=1}^N\left[P(n,t)\,\alpha n - P(n-1,t)\,\beta(N-n+1)\right]\ln\frac{P(n,t)\,\alpha n}{P(n-1,t)\,\beta(N-n+1)} \tag{55a}$$
$$= N\left[P_1(t)\,\alpha - (1-P_1(t))\,\beta\right]\ln\frac{P_1(t)\,\alpha}{(1-P_1(t))\,\beta}, \tag{55b}$$
which is the $N$-fold multiple of the result for the corresponding single-particle system, Equation (37). This result, Equation (55b), depends on the initialisation being commensurable with Equation (54), which otherwise is recovered only asymptotically, and only if the stationary distribution is unique.
Further, the entropy production of $N$ indistinguishable particles being the $N$-fold entropy production of a single particle does not extend to the external entropy flow, which lacks the simplification of the logarithm and gives
$$\dot{S}_e(t) = -N\left[\alpha P_1(t) - \beta(1-P_1(t))\right]\left\{\ln\frac{\alpha}{\beta} + \sum_{n=0}^{N-1} P_1^n(t)\left(1-P_1(t)\right)^{N-1-n}\binom{N-1}{n}\ln\frac{n+1}{N-n}\right\}, \tag{56}$$
thus picking up a correction in the form of the additional sum in the curly bracket, which vanishes only at $N = 1$ or $P_1(t) = 1/2$, but does not contribute at stationarity because the overall prefactor $\alpha P_1 - \beta(1-P_1)$ converges to $0$. To make sense of this correction in relation to particle indistinguishability, with the help of Equation (54) we can rewrite the difference between the right-hand side of Equation (56) and the $N$-fold entropy flow of a single two-state system (38) as
$$-N\left[\alpha P_1(t) - \beta(1-P_1(t))\right]\sum_{n=0}^{N-1} P_1^n(t)\left(1-P_1(t)\right)^{N-1-n}\binom{N-1}{n}\ln\frac{n+1}{N-n} = -\sum_{n=0}^{N-1}\left[\alpha(n+1)P(n+1,t) - \beta(N-n)P(n,t)\right]\ln\frac{n+1}{N-n}, \tag{57}$$
which now explicitly involves the net probability current from the occupation-number state with $n+1$ particles in state A to that with $n$ particles in state A, as well as the logarithm
$$\ln\frac{n+1}{N-n} = \ln\binom{N}{n} - \ln\binom{N}{n+1}. \tag{58}$$
Written in terms of the same combinatorial factors appearing in Equation (54), the logarithm (58) can be interpreted as a difference of microcanonical (Boltzmann) entropies, defined as the logarithm of the degeneracy of the occupation-number state if we were to assume that the $N$ particles are distinguishable. With the help of the master Equation (53) as well as Equations (54) and (58), the term (57) may be rewritten to give
$$\dot{S}_e(t) = -N\left[\alpha P_1(t) - \beta(1-P_1(t))\right]\ln\frac{\alpha}{\beta} - \sum_{n=0}^{N-1}\dot{P}(n,t)\ln\binom{N}{n}. \tag{59}$$
This result is further generalised in Equation (70).
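The collapse of Equation (55a) onto Equation (55b) can be checked by direct summation; the following sketch (ours, with arbitrary rates, initial $p$ and particle number) does exactly that.

```python
import numpy as np
from math import comb

# Direct check of Eq. (55): the N-particle occupation-number sum equals
# N times the single-particle entropy production.
alpha, beta, p, N, t = 2.0, 1.0, 0.9, 5, 0.3
P1 = (beta + (alpha*p - beta*(1 - p)) * np.exp(-(alpha + beta)*t)) / (alpha + beta)
P = lambda n: comb(N, n) * P1**n * (1 - P1)**(N - n)      # Eq. (54)
S_i = sum((P(n)*alpha*n - P(n - 1)*beta*(N - n + 1))
          * np.log(P(n)*alpha*n / (P(n - 1)*beta*(N - n + 1)))
          for n in range(1, N + 1))                        # Eq. (55a)
S_i_single = (P1*alpha - (1 - P1)*beta) * np.log(P1*alpha / ((1 - P1)*beta))
print(S_i, N * S_i_single)                                 # Eq. (55b): agree
```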

3.6. N Independent, Indistinguishable d-State Processes

We now generalise the results of Section 3.3 and Section 3.5 to $N$ independent $d$-state Markov processes, see Figure 7. These results represent a special case of those obtained in [40] when the $N$ processes are non-interacting. In this section, we consider non-interacting, indistinguishable particles hopping on a graph of $d$ nodes with edge-dependent hopping rates $w_{jk}$. As in the two-state system of Section 3.5, we find that the internal (but not the external) entropy production of the $d$-state system, $\dot{S}_i$, is $N$ times the entropy production of the individual processes, assuming the initial condition is probabilistically identical for all single-particle sub-systems. The entropy productions of a single such process according to Equation (4) read
$$\dot{S}_i^{(1)}(t) = \frac{1}{2}\sum_{j\neq k}\left[P_j(t)\,w_{jk} - P_k(t)\,w_{kj}\right]\ln\frac{P_j(t)\,w_{jk}}{P_k(t)\,w_{kj}}, \tag{60a}$$
$$\dot{S}_e^{(1)}(t) = -\frac{1}{2}\sum_{j\neq k}\left[P_j(t)\,w_{jk} - P_k(t)\,w_{kj}\right]\ln\frac{w_{jk}}{w_{kj}}, \tag{60b}$$
where $P_j(t)$ is the time-dependent probability of a single-particle process to be in state $j$, Section 3.3.
To calculate the entropy production of the $N$ concurrent indistinguishable processes using the occupation-number representation, we first derive the probability of an occupation-number configuration $\mathbf{n} = (n_1, n_2,\ldots,n_d)$, with $\sum_{j=1}^d n_j = N$, which, similarly to Equation (54), is given by the multinomial distribution
$$P_{\mathbf{n}}(t) = N!\,\prod_{j=1}^d\frac{P_j^{n_j}(t)}{n_j!} \tag{61}$$
for the probability $P_{\mathbf{n}}(t)$ of the system to be in state $\mathbf{n}$ at time $t$, assuming that each particle is subject to the same single-particle distribution $P_j(t)$, $j\in\{1,2,\ldots,d\}$, for all $t$, i.e., in particular assuming that all particles are initialised identically, by placing them all at the same site or, more generally, by placing them initially according to the same distribution $P_j(0)$. Given this initialisation, Equation (61) solves Equation (2),
$$\dot{P}_{\mathbf{n}}(t) = \sum_{\mathbf{m}}\left[P_{\mathbf{m}}(t)\,w_{\mathbf{mn}} - P_{\mathbf{n}}(t)\,w_{\mathbf{nm}}\right], \tag{62}$$
with the transition rates $w_{\mathbf{mn}}$ discussed below.
For non-interacting processes with a unique stationary distribution, Equation (61) is always obeyed in the limit of long times after initialisation, since the single-particle distributions $P_j(t)$ are identical at steady state. The entropy production, Equation (4b), of the entire system has the same form as Equation (48) of Section 3.4 ($N$ independent, distinguishable particles), with $w_{\mathbf{nm}}$ now, however, the transition rate from the occupation-number state $\mathbf{n} = (n_1, n_2,\ldots,n_d)$, with $0\leq n_k\leq N$, to the occupation-number state $\mathbf{m} = (m_1, m_2,\ldots,m_d)$. The rate $w_{\mathbf{nm}}$ vanishes except when $\mathbf{m}$ differs from $\mathbf{n}$ in exactly two distinct components, say $m_j = n_j - 1\geq 0$ and $m_k = n_k + 1\geq 1$, in which case $w_{\mathbf{nm}} = n_j w_{jk}$, with $w_{jk}$ the transition rate of a single particle from $j$ to $k$ as introduced above. For such $\mathbf{m}$, the rate obeys $w_{\mathbf{mn}} = m_k w_{kj}$ and the probability $P_{\mathbf{m}}(t)$ fulfills
$$P_{\mathbf{m}}(t) = P_{\mathbf{n}}(t)\,\frac{P_k(t)\,n_j}{P_j(t)\,m_k} = P_{\mathbf{n}}(t)\,\frac{P_k(t)\,w_{\mathbf{nm}}\,w_{kj}}{P_j(t)\,w_{\mathbf{mn}}\,w_{jk}}, \tag{63}$$
which simplifies the entropy production, Equation (48), to
$$\dot{S}_i(t) = \frac{1}{2}\sum_{\mathbf{n}\neq\mathbf{m}}\left(P_{\mathbf{n}}(t)\,w_{\mathbf{nm}} - P_{\mathbf{m}}(t)\,w_{\mathbf{mn}}\right)\ln\frac{P_{\mathbf{n}}(t)\,w_{\mathbf{nm}}}{P_{\mathbf{m}}(t)\,w_{\mathbf{mn}}} = \frac{1}{2}\sum_{\mathbf{n}}\sum_{j\neq k}\left(P_{\mathbf{n}}(t)\,n_j\,w_{jk} - P_{\mathbf{m}}(t)\,m_k\,w_{kj}\right)\ln\frac{P_j(t)\,w_{jk}}{P_k(t)\,w_{kj}}, \tag{64}$$
where the sum $\sum_{\mathbf{n}}$ runs over all allowed configurations, namely $0\leq n_j\leq N$ for $j = 1,2,\ldots,d$ with $\sum_j n_j = N$, and $\mathbf{m} = (n_1, n_2,\ldots,n_j-1,\ldots,n_k+1,\ldots,n_d)$ is derived from $\mathbf{n}$ as outlined above. Strictly, $P_{\mathbf{n}}(t)$ has to be defined to vanish for invalid states $\mathbf{n}$, so that the first bracket in the summand of Equation (64) vanishes, in particular when $n_j = 0$, in which case $m_j = -1$. To proceed, we introduce the probability
$$\bar{P}_{\bar{\mathbf{n}}_j}(t) = (N-1)!\,\frac{P_j^{n_j-1}(t)}{(n_j-1)!}\prod_{\substack{i=1\\i\neq j}}^d\frac{P_i^{n_i}(t)}{n_i!}, \tag{65}$$
defined to vanish for $n_j = 0$, so that $P_{\mathbf{n}}(t)\,n_j = N\,P_j(t)\,\bar{P}_{\bar{\mathbf{n}}_j}(t)$. The probability $\bar{P}_{\bar{\mathbf{n}}_j}(t)$ is that of finding $n_i$ particles at each state $i\neq j$ and $n_j - 1$ particles at state $j$. It is Equation (61) evaluated in a system of only $N-1$ particles with configuration $\bar{\mathbf{n}}_j = (n_1, n_2,\ldots,n_{j-1}, n_j - 1, n_{j+1},\ldots,n_d) = \bar{\mathbf{m}}_k$, a function of $\mathbf{n}$. Equation (64) may now be rewritten as
$$\dot{S}_i(t) = \frac{N}{2}\sum_{j\neq k}\sum_{\mathbf{n}}\left(\bar{P}_{\bar{\mathbf{n}}_j}(t)\,P_j(t)\,w_{jk} - \bar{P}_{\bar{\mathbf{m}}_k}(t)\,P_k(t)\,w_{kj}\right)\ln\frac{P_j(t)\,w_{jk}}{P_k(t)\,w_{kj}}, \tag{66}$$
where we have used that the arguments of the logarithm are independent of $\mathbf{n}$ and $\mathbf{m}$. The summation over $\mathbf{n}$ gives
$$\sum_{\mathbf{n}}\bar{P}_{\bar{\mathbf{n}}_j}(t) = \sum_{\mathbf{n}}\bar{P}_{\bar{\mathbf{m}}_k}(t) = 1, \tag{67}$$
so that
$$\dot{S}_i(t) = \frac{N}{2}\sum_{j\neq k}\left(P_j(t)\,w_{jk} - P_k(t)\,w_{kj}\right)\ln\frac{P_j(t)\,w_{jk}}{P_k(t)\,w_{kj}} = N\,\dot{S}_i^{(1)}(t), \tag{68}$$
which is the $N$-fold entropy production of the single-particle system, $\dot{S}_i^{(1)}(t)$, Equation (60a), or equivalently that of $N$ distinguishable particles, Equation (52), Section 3.4. As in Section 3.5, this dramatic simplification does not carry over to the external entropy flow, Equation (4a),
$$\dot{S}_e(t) = -\frac{N}{2}\sum_{j\neq k}\sum_{\mathbf{n}}\bar{P}_{\bar{\mathbf{n}}_j}(t)\left(P_j(t)\,w_{jk} - P_k(t)\,w_{kj}\right)\ln\frac{n_j\,w_{jk}}{(n_k+1)\,w_{kj}} = -\frac{N}{2}\sum_{j\neq k}\left(P_j(t)\,w_{jk} - P_k(t)\,w_{kj}\right)\ln\frac{w_{jk}}{w_{kj}} - \frac{N}{2}\sum_{j\neq k}\sum_{\mathbf{n}}\bar{P}_{\bar{\mathbf{n}}_j}(t)\left(P_j(t)\,w_{jk} - P_k(t)\,w_{kj}\right)\ln\frac{n_j}{n_k+1}, \tag{69}$$
where, of the last two terms, only the first is the $N$-fold entropy flow of the single-particle system, $\dot{S}_e^{(1)}(t)$, Equation (60b). The second term is due to the lack of a cancellation mechanism to absorb the $n_j$ and $n_k+1$ in the logarithm. Rewriting the second term as
$$-\frac{N}{2}\sum_{j\neq k}\sum_{\mathbf{n}}\bar{P}_{\bar{\mathbf{n}}_j}(t)\left(P_j(t)\,w_{jk} - P_k(t)\,w_{kj}\right)\ln\frac{n_j}{n_k+1}$$
$$= -\frac{1}{2}\sum_{\mathbf{n}}\sum_{j\neq k}\left(P_{\mathbf{n}}(t)\,n_j\,w_{jk} - P_{\mathbf{n}}(t)\,\frac{P_k(t)\,n_j}{P_j(t)\,(n_k+1)}\,(n_k+1)\,w_{kj}\right)\ln\frac{n_j}{n_k+1}$$
$$= -\sum_{\mathbf{n}}\dot{P}_{\mathbf{n}}(t)\,\ln\binom{N}{n_1,\ldots,n_d}, \tag{70}$$
using Equation (62), where we re-expressed the logarithm as
$$\ln\frac{n_j}{n_k+1} = \ln\binom{N}{n_1,\ldots,n_j-1,\ldots,n_k+1,\ldots,n_d} - \ln\binom{N}{n_1,\ldots,n_d}, \tag{71}$$
shows that the correction term has the same form as the corresponding term in the two-state system, Equation (57), namely that of a difference of microcanonical (Boltzmann) entropies of the multi-particle states. It vanishes when all $n_j$ are either $0$ or $1$, as expected for $d\gg N$, and also at stationarity, when $\dot{P}_{\mathbf{n}}(t) = 0$. In that limit, $\dot{S}_e = -\dot{S}_i$, when indeed Equation (60a) gives
$$\lim_{t\to\infty}\dot{S}_i^{(1)}(t) = \frac{1}{2}\sum_{j\neq k}\left(P_j\,w_{jk} - P_k\,w_{kj}\right)\ln\frac{w_{jk}}{w_{kj}}, \tag{72}$$
with $P_j = \lim_{t\to\infty}P_j(t)$. As far as the entropy production $\dot{S}_i(t)$ is concerned, we thus recover and generalise the result of Section 3.5 on indistinguishable particles in a two-state system, which produce $N$ times the entropy of a single particle. In Section 3.4, it was shown that $N$ distinguishable particles have an entropy production and flow equal to the sum of the entropy productions and flows of the individual particles. In Section 3.5 and Section 3.6, it was shown that indistinguishable particles, which require the states to be represented by occupation numbers, show the $N$-fold entropy production of the single-particle system, provided suitable initialisation, but asymptotically independent of initialisation, provided the stationary state has a unique distribution. The same does not apply to the entropy flow, which generally acquires additional logarithmic terms accounting for the degeneracy of the occupation-number states. These extra terms, however, are bound to vanish at stationarity, when $\dot{S}_e(t) = -\dot{S}_i(t)$.

3.7. Random Walk on a Lattice

In this section, we study a particle on a one-dimensional lattice that hops to the right nearest-neighbouring site with rate $r$ and to the left with rate $\ell$, see Figure 8. The case of unequal hopping rates, $r\neq\ell$, is generally referred to as an asymmetric random walk and can be seen as a minimal model for the directed motion of a molecular motor on a cytoskeletal filament [41]. The position $x$ of the particle at time $t$, after $N(t)$ jumps, is
$$x = x_0 + \sum_{i=1}^{N(t)}\Delta x_i, \tag{74}$$
where the random hops $\Delta x_i$ are independent and identically distributed, and $x_0$ is the initial position at time $t = 0$. If $a$ is the lattice spacing, the distance increments are $\Delta x_i = +a$ with probability $r/(\ell+r)$ and $\Delta x_i = -a$ with probability $\ell/(\ell+r)$. The probability distribution of the particle position is
$$P(x,t;x_0) = \sum_{n=0}^\infty H(n,t)\,P_n(x;x_0), \tag{75}$$
where $H(n,t)$ is the probability that the particle has hopped $N(t) = n$ times by time $t$, and $P_n(x;x_0)$ is the probability that the particle is at position $x$ after $n$ hops starting from $x_0$. Since jumping is a Poisson process with rate $r+\ell$, the random variable $N(t)$ has a Poisson distribution,
$$H(n,t) = \frac{\left((\ell+r)t\right)^n}{n!}\,e^{-(\ell+r)t}. \tag{76}$$
On the other hand, the distribution of the position $x$ after $n$ jumps is the binomial distribution
$$P_n(x;x_0) = \binom{n}{k_x}\frac{r^{k_x}\,\ell^{\,n-k_x}}{(\ell+r)^n}, \tag{77}$$
where $k_x = (n + (x-x_0)/a)/2$ is the number of jumps to the right, $0\leq k_x\leq n$, with (77) implied to vanish if $k_x$ is not an integer. From Equation (74), the parity of $(x-x_0)/a$ and $N(t)$ are identical. Using (76) and (77), the probability distribution in (75) reads
$$P(x,t;x_0) = e^{-(\ell+r)t}\left(\frac{r}{\ell}\right)^{\frac{x-x_0}{2a}} I\!\left(\frac{|x-x_0|}{a},\,2t\sqrt{\ell r}\right), \tag{78}$$
where $I(m,z)$ is the modified Bessel function of the first kind, with $m\in\mathbb{Z}$ and $z\in\mathbb{C}$, defined as [42]
$$I(m,z) = \sum_{j=0}^\infty\frac{1}{j!\,\Gamma(j+m+1)}\left(\frac{z}{2}\right)^{2j+m}. \tag{79}$$
The transition probability is then
$$W(x\to y;\tau) = e^{-(\ell+r)\tau}\left(\frac{r}{\ell}\right)^{\frac{y-x}{2a}} I\!\left(\frac{|y-x|}{a},\,2\tau\sqrt{\ell r}\right). \tag{80}$$
Using (78) and (80) to calculate the entropy production (4b), we need the following identity for $|y-x|/a = |m|\geq 1$,
$$\lim_{\tau\to 0}\frac{1}{\tau}\,I\!\left(|m|,\,2\tau\sqrt{\ell r}\right) = \sqrt{\ell r}\;\delta_{|m|,1}, \tag{81}$$
which follows immediately from the definition of the modified Bessel function, Equation (79). It indicates that the only transitions contributing to the entropy production are those where the particle travels a distance equal to the lattice spacing $a$. The entropy production then reads
$$\dot{S}_i(t) = \frac{1}{2}(r-\ell)\ln\frac{r}{\ell} + e^{-(\ell+r)t}\sum_{m=-\infty}^{\infty}\left(\frac{r}{\ell}\right)^{\frac{m}{2}} I\!\left(|m|, 2t\sqrt{\ell r}\right)\left[r\,\ln\frac{I\!\left(|m|, 2t\sqrt{\ell r}\right)}{I\!\left(|m+1|, 2t\sqrt{\ell r}\right)} + \ell\,\ln\frac{I\!\left(|m|, 2t\sqrt{\ell r}\right)}{I\!\left(|m-1|, 2t\sqrt{\ell r}\right)}\right], \tag{82}$$
see Figure 9, where $m = (x - x_0)/a$. The entropy flow is $\dot{S}_e(t) = -(r-\ell)\ln(r/\ell)$, independent of $t$, which owes its simplicity to the transition rates being independent of the particle's position. We are not aware of a method to perform the sum in (82) in closed form and, given that this expression involves terms competing at large times $t$, we cannot calculate the stationary entropy production $\lim_{t\to\infty}\dot{S}_i(t)$ directly. If we naively assume that the sum in Equation (82) converges such that it is suppressed by the exponential $e^{-(r+\ell)t}$, then the entropy production $\dot{S}_i$ appears to converge to $\frac{1}{2}(r-\ell)\ln(r/\ell)$. If that were the case, $\dot{S} = \dot{S}_i + \dot{S}_e$ would converge to a negative constant, while $S(t)$, Equation (1), which vanishes at $t = 0$ given the initialisation $P(x,0;x_0) = \delta_{(x-x_0)/a,0}$, is bound to be strictly positive at all finite $t$. Given that $P(x,t;x_0)$ does not converge, not much else can be said about $S(t)$ or $\dot{S}$. Numerically, using the GNU Scientific Library [43] implementation of Bessel functions, we find that, asymptotically for large times, Equation (82) behaves as
$$\dot{S}_i(t) \simeq (r-\ell)\ln\frac{r}{\ell} + \frac{1}{2t} + O\!\left(t^{-2}\right) \tag{83}$$
if $r\neq\ell$, and $\dot{S}_i(t) \simeq 1/(2t) + O\!\left(t^{-3}\right)$ if $r = \ell$, see Figure 9.
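For reference, the sum in Equation (82) can also be evaluated with scipy's exponentially scaled Bessel functions instead of the GSL; the following sketch (ours, with an arbitrary cutoff on the sum and arbitrary rates) reproduces the asymptote (83).

```python
import numpy as np
from scipy.special import ive   # exponentially scaled Bessel: I(m, z) e^{-z}

# Numerical evaluation of Eq. (82); scaled Bessel functions keep the
# sum well conditioned at large t, and underflowed tails are dropped.
r, l = 1.5, 1.0
def S_i(t, cutoff=400):
    z = 2 * t * np.sqrt(l * r)
    m = np.arange(-cutoff, cutoff + 1)
    I0, Ip, Im = ive(np.abs(m), z), ive(np.abs(m + 1), z), ive(np.abs(m - 1), z)
    ok = (I0 > 0) & (Ip > 0) & (Im > 0)
    m, I0, Ip, Im = m[ok], I0[ok], Ip[ok], Im[ok]
    pref = np.exp(-(l + r) * t + z) * (r / l)**(m / 2) * I0
    return 0.5 * (r - l) * np.log(r / l) + np.sum(
        pref * (r * np.log(I0 / Ip) + l * np.log(I0 / Im)))

for t in (1.0, 10.0, 100.0):
    print(t, S_i(t), (r - l) * np.log(r / l) + 1 / (2 * t))  # Eq. (83)
```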
To take the continuum limit $a\to 0$ of the probability distribution (78), we define $v$ and $D$ such that $r+\ell = 2D/a^2$ and $r-\ell = v/a$. Using the asymptotic expansion of $I(m,z)$ for large $z$ [42],
$$I(m,z) \simeq \frac{e^z}{\sqrt{2\pi z}}\left(1 - \frac{4m^2-1}{8z} + \frac{(4m^2-1)(4m^2-9)}{2!\,(8z)^2} - \frac{(4m^2-1)(4m^2-9)(4m^2-25)}{3!\,(8z)^3} + \cdots\right), \tag{84}$$
which is valid for $|\arg z| < \pi/2$, we obtain in fact the Gaussian distribution
$$\lim_{a\to 0}\frac{1}{a}\,P\!\left(\frac{x}{a}, t; \frac{x_0}{a};\, r(v,D,a),\,\ell(v,D,a)\right) = \frac{1}{\sqrt{4\pi Dt}}\,e^{-\frac{(x-x_0-vt)^2}{4Dt}}, \tag{85}$$
which corresponds to the distribution of a drift–diffusive particle, studied in Section 3.9. Therefore, all results derived in Section 3.9 apply to the present system in the continuum limit.

3.8. Random Walk on a Ring Lattice

In this section, we extend the system of Section 3.7 to a random walk on a ring lattice of length $L > 2$, so that $1\leq x\leq L$, see Figure 10. The probability distribution $P_L(x,t;x_0)$ of the particle on the ring follows from the distribution $P(x,t;x_0)$ on the one-dimensional lattice, Equation (78), by mapping all positions $x+jL$ on the one-dimensional lattice to the position $x\in\{1,2,\ldots,L\}$ on the ring, with $j$ the winding number, irrelevant to the evolution of the walker. The distribution on the ring lattice then reads
$$P_L(x,t;x_0) = \sum_{j=-\infty}^{\infty} P(x+jL,\,t;\,x_0), \tag{86}$$
and similarly for the transition probability, $W(x\to y;\tau) = P_L(y,\tau;x)$. To calculate the entropy production (4b), each pair of points $x$, $y$ on the one-dimensional lattice is mapped to a pair of points on the ring. For $L > 2$, as $\tau\to 0$, only transitions to distinct nearest neighbours contribute and the expression for the entropy production simplifies dramatically,
$$\dot{S}_i(t) = (r-\ell)\ln\frac{r}{\ell} + \sum_{m=1}^{L/a} P_L(ma,t;x_0)\left[r\,\ln\frac{P_L(ma,t;x_0)}{P_L((m+1)a,t;x_0)} + \ell\,\ln\frac{P_L(ma,t;x_0)}{P_L((m-1)a,t;x_0)}\right], \tag{87}$$
and similarly for the entropy flow,
$$\dot{S}_e(t) = -(r-\ell)\ln\frac{r}{\ell}. \tag{88}$$
While the entropy flow $\dot{S}_e$ on a ring is thus identical to that of a particle on the one-dimensional lattice, the entropy production $\dot{S}_i$ on a ring is in principle more complicated; however, with the lack of the cancellations of $r/\ell$ in the logarithm found in Section 3.7, and with $P_L$ reaching stationarity, comes the asymptote
$$\lim_{t\to\infty}\dot{S}_i(t) = (r-\ell)\ln\frac{r}{\ell}. \tag{89}$$
This is easily derived by taking $\lim_{t\to\infty}P_L(x,t;x_0) = 1/L$ into the finite sum of Equation (87). It follows that $\dot{S}(t) = \dot{S}_i(t) + \dot{S}_e(t)$ converges to $0$ at large $t$, as expected for a convergent stationary distribution.
The case $L = 2$, and the less interesting case $L = 1$, are not covered above because the topology of the phase space for $L > 2$ differs from that of $L = 2$. The difference can be observed in the different structure of the transition matrices (34) and (39). The framework above is based on each site having two outgoing and two incoming rates, $2L$ in total. For $L = 2$, however, there are only two transitions, which cannot be separated into four to fit the framework above, because even when rates of concurrent transitions between two given states are additive, their entropy production generally is not. The case of $L = 2$ is recovered in the two-state system of Section 3.1 with $\alpha = \beta = r+\ell$, which is at equilibrium in the stationary state.

3.9. Driven Brownian Particle

In continuum space, the motion of a freely diffusive particle with diffusion constant $D$ and drift $v$ is governed by the Langevin equation $\dot{x} = v + \sqrt{2D}\,\xi(t)$, where $\xi(t)$ is a Gaussian white noise with zero mean, $\langle\xi(t)\rangle = 0$, and covariance $\langle\xi(t)\xi(t')\rangle = \delta(t-t')$, see Figure 11 [44]. The corresponding Fokker–Planck equation for the probability distribution $P(x,t)$ is [45]
$$\partial_t P(x,t) = -v\,\partial_x P(x,t) + D\,\partial_x^2 P(x,t). \tag{90}$$
Assuming the initial condition $P(x,0) = \delta(x-x_0)$, the solution to the Fokker–Planck equation is the Gaussian distribution
$$P(x,t) = \frac{1}{\sqrt{4\pi Dt}}\,e^{-\frac{(x-x_0-vt)^2}{4Dt}}, \tag{91}$$
which is also the Green function of the Fokker–Planck Equation (90). We therefore also have the transition probability density from state $x$ to state $y$ over an interval $\tau$,
$$W(x\to y;\tau) = \frac{1}{\sqrt{4\pi D\tau}}\,e^{-\frac{(y-x-v\tau)^2}{4D\tau}}. \tag{92}$$
Substituting (91) and (92) into Equation (15) for the internal entropy production of a continuous system gives
$$\dot{S}_i = \lim_{\tau\to 0}\frac{1}{\tau}\int\mathrm{d}x\,\mathrm{d}y\;\frac{e^{-\frac{(x-x_0-vt)^2}{4Dt}}}{\sqrt{4\pi Dt}}\,\frac{e^{-\frac{(y-x-v\tau)^2}{4D\tau}}}{\sqrt{4\pi D\tau}}\left[\frac{(y-x_0)^2 - (x-x_0)^2}{4Dt} + \frac{(y-x)\,v}{2D}\right], \tag{93}$$
where the Gaussian integrals can be evaluated in closed form, $\dot{S}_i(t) = \lim_{\tau\to 0}\left[1/(2t) + v^2/D + v^2\tau/(2Dt)\right]$. Taking the limit $\tau\to 0$ then gives the entropy production rate [44,46,47],
$$\dot{S}_i(t) = \frac{1}{2t} + \frac{v^2}{D}, \tag{94}$$
see Figure 12. Similarly, following (16), the entropy flow reads $\dot{S}_e(t) = -v^2/D$, independent of time $t$. As $\dot{S}_i(t) > 0$ for finite $t$ or $v\neq 0$, the system is out of equilibrium with a sustained probability current, so that there is in fact no steady-state distribution. We can verify Equation (94) for the time-dependent internal entropy production by computing the probability current
$$j(x,t) = (v - D\,\partial_x)P(x,t) = \left(\frac{v}{2} + \frac{x-x_0-vt}{4t}\right)\frac{e^{-\frac{(x-x_0-vt)^2}{4Dt}}}{\sqrt{\pi Dt}} \tag{95}$$
and substituting it, together with (91), into (29). As expected, the two procedures return identical results. The independence of the transient contribution $1/(2t)$ to the internal entropy production from the diffusion constant is remarkable, although necessary on dimensional grounds, as a consequence of $\dot{S}_i$ having dimensions of inverse time. Since the diffusion constant characterises the spatial behaviour of diffusion, this suggests that it is the temporal, rather than the spatial, features of the process that determine its initial entropy production.
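As a cross-check of Equation (94), one can evaluate Equation (29) numerically from the Gaussian solution (91) and its current. A minimal sketch, with illustrative values for $D$, $v$, $x_0$ and $t$ assumed for the example:

```python
import numpy as np

# A minimal numerical check of Equation (94); parameter values are illustrative.
D, v, x0, t = 0.7, 1.3, 0.0, 2.0

s = np.sqrt(2 * D * t)                            # width of the Gaussian (91)
x = np.linspace(x0 + v * t - 8 * s, x0 + v * t + 8 * s, 100001)
P = np.exp(-(x - x0 - v * t) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
j = (v + (x - x0 - v * t) / (2 * t)) * P          # j = v P - D dP/dx for (91)

S_i = np.sum(j ** 2 / (D * P)) * (x[1] - x[0])    # Equation (29)
print(S_i, 1 / (2 * t) + v ** 2 / D)              # the two values should agree
```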

3.10. Driven Brownian Particle in a Harmonic Potential

Consider a drift–diffusive particle such as in Section 3.9 that is confined in a harmonic potential $V(x) = \frac{1}{2}kx^2$, where $k$ is the potential stiffness, see Figure 13 [48]. The Langevin equation is $\dot{x}(t) = v - kx + \sqrt{2D}\,\xi(t)$, where $\langle\xi(t)\rangle = 0$ and $\langle\xi(t)\xi(t')\rangle = \delta(t-t')$, and the Fokker–Planck equation for $P(x,t)$ is [45]
$$\partial_t P(x,t) = -\partial_x\left(\left(v - kx\right)P(x,t)\right) + D\,\partial_x^2 P(x,t).$$
Assuming the initial condition $P(x,0) = \delta(x-x_0)$, the solution to the Fokker–Planck equation is the Gaussian distribution
$$P(x,t) = \sqrt{\frac{k}{2\pi D\left(1-e^{-2kt}\right)}}\;\exp\left(-\frac{\left(kx - v - \left(kx_0 - v\right)e^{-kt}\right)^2}{2Dk\left(1-e^{-2kt}\right)}\right), \qquad (97)$$
corresponding to a probability current $j(x,t) = \left(v - kx - D\,\partial_x\right)P(x,t)$ of the form
$$j(x,t) = \sqrt{\frac{k}{2\pi D\left(1-e^{-2kt}\right)}}\;e^{-kt}\;\frac{v\left(1-e^{-kt}\right) - k\left(x_0 - x\,e^{-kt}\right)}{1-e^{-2kt}}\;\exp\left(-\frac{\left(v\left(1-e^{-kt}\right) - k\left(x - x_0\,e^{-kt}\right)\right)^2}{2Dk\left(1-e^{-2kt}\right)}\right).$$
The transition probability density within $\tau$ is then also of Gaussian form, namely
$$W(x\to y;\tau) = \sqrt{\frac{k}{2\pi D\left(1-e^{-2k\tau}\right)}}\;\exp\left(-\frac{\left(ky - v - \left(kx - v\right)e^{-k\tau}\right)^2}{2Dk\left(1-e^{-2k\tau}\right)}\right). \qquad (99)$$
Using (97) and (99) in (15) gives the entropy production rate
$$\dot{S}_i = \left(\frac{(v-kx_0)^2}{D} - k\right)e^{-2kt} + \frac{k\,e^{-2kt}}{1-e^{-2kt}}, \qquad (100)$$
see Figure 12, and in (16) the external entropy flow
$$\dot{S}_e = -\left(\frac{(v-kx_0)^2}{D} - k\right)e^{-2kt}. \qquad (101)$$
In the limit $t\to\infty$, the system reaches equilibrium, as $P(x,t)$ in Equation (97) converges to the Boltzmann distribution $\sqrt{\frac{k}{2\pi D}}\exp\left(-\frac{(kx-v)^2}{2Dk}\right)$ of the effective potential $\frac{1}{2}kx^2 - vx$ at temperature $D$. This is consistent with (100) and (101), since $\lim_{t\to\infty}\dot{S}_i(t) = \lim_{t\to\infty}\dot{S}_e(t) = 0$. Similarly to drift diffusion on the real line, Equation (94), there is a transient contribution to the entropy production that is independent of the diffusion constant $D$, but it now depends on the stiffness $k$, which has dimensions of inverse time, through the rescaled time $kt$.
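Equation (100) can again be verified against the current representation (29), here using the exact Gaussian (97). A sketch with illustrative parameter values assumed for the example:

```python
import numpy as np

# Numerical check of Equation (100) via S_i = ∫ dx j²/(DP), Equation (29);
# the parameter values below are illustrative assumptions.
D, v, k, x0, t = 0.5, 1.0, 2.0, 0.3, 0.8

mean = v / k + (x0 - v / k) * np.exp(-k * t)      # mean of the Gaussian (97)
var = D / k * (1 - np.exp(-2 * k * t))            # and its variance

x = np.linspace(mean - 8 * np.sqrt(var), mean + 8 * np.sqrt(var), 100001)
P = np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
j = (v - k * x) * P - D * np.gradient(P, x)       # j = (v - kx)P - D dP/dx

S_i_num = np.sum(j ** 2 / (D * P)) * (x[1] - x[0])
S_i_exact = ((v - k * x0) ** 2 / D - k) * np.exp(-2 * k * t) \
    + k * np.exp(-2 * k * t) / (1 - np.exp(-2 * k * t))
print(S_i_num, S_i_exact)                         # should agree
```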

3.11. Driven Brownian Particle on a Ring with Potential

Consider a drift–diffusive particle on a ring $x\in[0,L)$ in a smooth potential $V(x)$, Figure 14, initialised at position $x_0$. The Langevin equation of the particle is [49,50,51] $\dot{x}(t) = v - \partial_x V(x) + \sqrt{2D}\,\xi(t)$, where $\xi(t)$ is Gaussian white noise. The Fokker–Planck equation is then
$$\partial_t P(x,t;x_0) = -\partial_x\left(\left(v - V'(x)\right)P(x,t;x_0)\right) + D\,\partial_x^2 P(x,t;x_0) \qquad (102)$$
with $V'(x) = \frac{d}{dx}V(x)$ and boundary condition $P^{(n)}(0,t;x_0) = P^{(n)}(L,t;x_0)$ for all derivatives $n\ge0$ and $t\ge0$. At stationarity, in the limit $t\to\infty$, where $\partial_t P(x,t;x_0) = 0$, the solution to the Fokker–Planck Equation (102) is [29,51,52]
$$P_s(x) = \lim_{t\to\infty}P(x,t) = Z\,e^{-\frac{V(x)-vx}{D}}\int_x^{x+L}dy\;e^{\frac{V(y)-vy}{D}}, \qquad (103)$$
where $Z$ is the normalisation constant. The corresponding steady-state probability current $j = \left(v - \partial_xV\right)P_s - D\,\partial_xP_s$ is independent of $x$ by continuity, $0 = \partial_t P = -\partial_x j$, and reads [45]
$$j = DZ\left(1 - e^{-\frac{vL}{D}}\right). \qquad (104)$$
In order to calculate the entropy production according to (15) and (16) using (17), we need $W(x\to y;\tau)$ for small $\tau$. As discussed after Equation (17), $W(x\to y;\tau)$ obeys the Fokker–Planck Equation (102) in the form
$$\partial_\tau W(x\to y;\tau) = -\partial_y\left(\left(v - V'(y)\right)W(x\to y;\tau)\right) + D\,\partial_y^2 W(x\to y;\tau)$$
with $\lim_{\tau\to0}W(x\to y;\tau) = \delta(y-x)$, so that
$$\dot{W}(x\to y) = \lim_{\tau\to0}\partial_\tau W(x\to y;\tau) = V''(y)\,\delta(y-x) - \left(v - V'(y)\right)\delta'(y-x) + D\,\delta''(y-x), \qquad (106)$$
to be evaluated under an integral, where $\delta'(y-x) = \frac{d}{dy}\delta(y-x)$ will require an integration by parts. As for the logarithmic term, we use [28,45]
$$W(x\to y;\tau) = \frac{1}{\sqrt{4\pi D\tau}}\;e^{-\frac{\left(y-x-\tau\left(v-V'(x)\right)\right)^2}{4D\tau}}\left(1 + \mathcal{O}(\tau)\right)$$
so that
$$\ln\frac{W(x\to y;\tau)}{W(y\to x;\tau)} = \frac{y-x}{2D}\left(2v - V'(x) - V'(y)\right) + \mathcal{O}(\tau). \qquad (108)$$
The entropy flow Equation (16), in the more convenient version Equation (21a), can be obtained easily using Equations (106) and (108),
$$\dot{S}_e(t) = -\int_0^L dx\,dy\;P(x,t)\left[V''(y)\,\delta(y-x) - \left(v - V'(y)\right)\delta'(y-x) + D\,\delta''(y-x)\right]\times\frac{y-x}{2D}\left(2v - V'(x) - V'(y)\right)$$
$$= -\int_0^L dx\;P(x,t)\left[\frac{1}{D}\left(v - V'(x)\right)^2 - V''(x)\right] \qquad (111)$$
after suitable integration by parts, whereby derivatives of the $\delta$-function are conveniently interpreted as derivatives with respect to $y$, to avoid subsequent differentiation of $P(x,t)$. Since $\delta(y-x)\,(y-x) = 0$, the factor $(y-x)/(2D)$ needs to be differentiated for a term to contribute. In the absence of a potential, $P(x,t) = 1/L$ at stationarity, so that Equation (111) simplifies to $\dot{S}_e(t) = -v^2/D$ and $\lim_{t\to\infty}\dot{S}_i(t) = v^2/D$, Equation (94). Using the probability current $j(x,t) = -D\,\partial_xP(x,t) + \left(v - V'(x)\right)P(x,t)$, the entropy flow simplifies further to
$$\dot{S}_e(t) = -\int_0^L dx\;j(x,t)\,\frac{v - V'(x)}{D},$$
so that at stationarity, when the current is spatially uniform, $\lim_{t\to\infty}\dot{S}_e(t) = -\lim_{t\to\infty}j(x,t)\,vL/D$, as the potential is periodic and enters only via the current.
An equivalent calculation of $\dot{S}_i$ on the basis of (20a) gives
$$\dot{S}_i(t) = -\dot{S}_e + \int_0^L dx\,dy\;P(x,t)\left[V''(y)\,\delta(y-x) - \left(v - V'(y)\right)\delta'(y-x) + D\,\delta''(y-x)\right]\ln\frac{P(x,t)}{P(y,t)} \qquad (113a)$$
$$= -\dot{S}_e + \int_0^L dx\left[D\,\frac{\left(\partial_xP(x,t)\right)^2}{P(x,t)} - \partial_xP(x,t)\left(v - V'(x)\right)\right] \qquad (113b)$$
$$= -\dot{S}_e - \int_0^L dx\;j(x,t)\,\partial_x\ln P(x,t) \qquad (113c)$$
$$= \int_0^L dx\;\frac{j^2(x,t)}{D\,P(x,t)}, \qquad (113d)$$
with the last line identical to Equation (13).
By considering the functional derivative $\delta Z/\delta V(z)$ of the normalisation $\int_0^L dx\,P_s(x) = 1$ of Equation (103), one can show that the stationary current $j$, Equation (104), is extremal for constant $V(x)$, indicating that the magnitude of the stationary entropy flow, Equation (113d), is maximised in a constant potential.
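The stationary expressions of this section lend themselves to a numerical sketch. Assuming, for illustration only, the periodic potential $V(x) = A\sin(2\pi x/L)$, the following code builds $P_s(x)$ of Equation (103) by quadrature, confirms that the current is spatially uniform, and compares Equation (113d) with the stationary flow $-j\,vL/D$:

```python
import numpy as np

# Illustrative choice V(x) = A sin(2 pi x / L); all parameters are assumptions.
D, v, L, A, n = 0.5, 1.0, 1.0, 0.3, 2000
x = np.arange(n) * L / n
dx = L / n
phi = lambda y: (A * np.sin(2 * np.pi * y / L) - v * y) / D   # (V(y) - v y)/D

Ps = np.empty(n)
for i in range(n):
    y = x[i] + np.arange(n) * dx            # y runs over [x, x + L)
    Ps[i] = np.exp(-phi(x[i])) * np.sum(np.exp(phi(y))) * dx
Ps /= Ps.sum() * dx                         # normalisation fixes Z

Vp = A * (2 * np.pi / L) * np.cos(2 * np.pi * x / L)
dPs = (np.roll(Ps, -1) - np.roll(Ps, 1)) / (2 * dx)   # periodic derivative
j = (v - Vp) * Ps - D * dPs
print("spread of j:", j.max() - j.min())    # ~0 up to discretisation error

S_i = np.sum(j ** 2 / (D * Ps)) * dx        # Equation (113d)
print(S_i, j.mean() * v * L / D)            # equals -S_e at stationarity
```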

3.12. Run-and-Tumble Motion with Diffusion on a Ring

Consider the dynamics of a run-and-tumble particle on a ring $x\in[0,L)$ with Langevin equation $\dot{x}(t) = v(t) + \sqrt{2D}\,\xi(t)$, where the drift $v(t)$ switches in a Poisson process with rate $\alpha$ between the constants $v_1$ and $v_2$, and $\xi(t)$ is Gaussian white noise, Figure 15. Run-and-tumble particles are widely studied as a model of bacterial motion [53]. The drift being $v(t) = v_1$ or $v(t) = v_2$ will be referred to as the mode of the particle being 1 or 2, respectively. Defining $P_1(x,t)$ and $P_2(x,t)$ as the joint probabilities that the particle is at position $x$ at time $t$ and in mode 1 or 2 respectively, the coupled Fokker–Planck equations for $P_1$ and $P_2$ are
$$\partial_t P_1(x,t) = -v_1\,\partial_x P_1(x,t) + D\,\partial_x^2 P_1(x,t) - \alpha\left(P_1(x,t) - P_2(x,t)\right) \qquad (114)$$
$$\partial_t P_2(x,t) = -v_2\,\partial_x P_2(x,t) + D\,\partial_x^2 P_2(x,t) - \alpha\left(P_2(x,t) - P_1(x,t)\right)$$
whose stationary solution is the uniform distribution $\lim_{t\to\infty}P_1(x,t) = \lim_{t\to\infty}P_2(x,t) = 1/(2L)$, as is easily verified by direct substitution. The corresponding steady-state probability currents thus read $j_1 = v_1/(2L)$ and $j_2 = v_2/(2L)$.
In the following, we denote by the propagator $W(x\to y, Q\to R;\tau)$ the probability density that a particle at position $x$ in mode $Q$ is found time $\tau$ later at position $y$ in mode $R$. For $Q = R$, this propagation is a sum over all even numbers $m$ of Poissonian switches, which occur with probability $(\alpha\tau)^m e^{-\alpha\tau}/m!$, including the probability $e^{-\alpha\tau}$ of not switching at all over a total time $\tau$. For $Q \ne R$, the propagation is due to an odd number of switches.
For $m = 0$, the contribution to $W(x\to y, Q\to Q;\tau)$ is thus $e^{-\alpha\tau}\,W(x\to y;\tau)$, with $W(x\to y;\tau)$ that of a drift–diffusion particle on a ring, Section 3.11, but without potential, approximated at short times $\tau$ by the process on the real line, Equation (92), with drift $v = v_1$ or $v = v_2$ according to the particle's mode. For $m = 1$, the contribution is a single convolution over the time $t'\in[0,\tau)$ at which the particle changes mode, most easily performed after Fourier transforming. Before presenting this calculation in real space, we argue that any such convolution will result in some approximate Gaussian with an amplitude proportional to $1/\sqrt{\tau}$, multiplied by a term of order $(\alpha\tau)^m$. In small $\tau$, therefore, only the lowest orders need to be kept, $m = 0$ for $Q = R$ and $m = 1$ for $Q \ne R$.
More concretely,
$$W(x\to y, 1\to2;\tau) = \int dz\int_0^\tau dt'\;\frac{e^{-\frac{(z-x-v_1t')^2}{4Dt'}}}{\sqrt{4\pi Dt'}}\;\alpha\,e^{-\alpha t'}\;\frac{e^{-\frac{\left(y-z-v_2(\tau-t')\right)^2}{4D(\tau-t')}}}{\sqrt{4\pi D(\tau-t')}}\;e^{-\alpha(\tau-t')} + \ldots$$
$$= \frac{\alpha\,e^{-\alpha\tau}}{2(v_1-v_2)}\left[\operatorname{erf}\left(\frac{x-y+v_1\tau}{\sqrt{4D\tau}}\right) - \operatorname{erf}\left(\frac{x-y+v_2\tau}{\sqrt{4D\tau}}\right)\right] + \ldots$$
which in small $\tau$, when $v_{1,2}\,\tau/\sqrt{4D\tau}\ll1$, so that $\operatorname{erf}(r+\varepsilon) = \operatorname{erf}(r) + 2\varepsilon\,e^{-r^2}/\sqrt{\pi} + \ldots$, expands to
$$W(x\to y, 1\to2;\tau) = \frac{\alpha\tau}{\sqrt{4\pi D\tau}}\,e^{-\frac{(y-x)^2}{4D\tau}}\left(1 + \mathcal{O}(\tau)\right) = W(x\to y, 2\to1;\tau), \qquad (117)$$
whereas $W(x\to y, Q\to Q;\tau)$, the propagator with an even number of mode switches, is given by Equation (92) to leading order in $\tau$,
$$W(x\to y, Q\to Q;\tau) = \frac{e^{-\alpha\tau}}{\sqrt{4\pi D\tau}}\,e^{-\frac{(y-x-v_Q\tau)^2}{4D\tau}}\left(1 + \mathcal{O}(\tau)\right).$$
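The closed erf form above can be verified without Fourier transforms by noting that the spatial convolution of the two Gaussians is itself a Gaussian of total variance $2D\tau$, leaving a single quadrature over the switching time. A sketch with illustrative parameter values assumed for the example:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

# Check of the single-switch propagator: drift v1 until the switching time tp,
# then v2; parameter values below are illustrative assumptions.
D, v1, v2, alpha, tau = 0.5, 1.0, -0.8, 2.0, 0.05
x, y = 0.0, 0.1

def integrand(tp):
    mu = x + v1 * tp + v2 * (tau - tp)     # mean after the z-convolution
    return alpha * np.exp(-alpha * tau) \
        * np.exp(-(y - mu) ** 2 / (4 * D * tau)) / np.sqrt(4 * np.pi * D * tau)

W_num = quad(integrand, 0.0, tau)[0]
W_erf = alpha * np.exp(-alpha * tau) / (2 * (v1 - v2)) * (
    erf((x - y + v1 * tau) / np.sqrt(4 * D * tau))
    - erf((x - y + v2 * tau) / np.sqrt(4 * D * tau)))
print(W_num, W_erf)                        # the two values should agree
```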
Much of the calculation of the entropy production follows the procedure in Section 3.9 and Section 3.11, to be detailed further below. To this end, we also need
$$\lim_{\tau\to0}\frac{d}{d\tau}W(x\to y, 1\to2;\tau) = \dot{W}(x\to y, 1\to2) = \alpha\,\delta(y-x) = \dot{W}(x\to y, 2\to1).$$
Processes that involve a change of particle mode therefore enter only through the transition rates, not through diffusion or drift. Given a uniform stationary spatial distribution of particles of any mode, mode changes between two modes cannot result in a sustained probability current, even when the switching rates differ,
$$\left[P_1\,\dot{W}(1\to2) - P_2\,\dot{W}(2\to1)\right]\ln\frac{P_1\,\dot{W}(1\to2)}{P_2\,\dot{W}(2\to1)} = 0$$
for $P_1\,\dot{W}(1\to2) = P_2\,\dot{W}(2\to1)$ at stationarity, as in the process discussed in Section 3.1. A probability current, and thus entropy production, can occur when different particle modes result in different distributions, Section 3.10, or when mode switching between more than two modes results in a current in its own right, Section 3.2 and Section 3.13.
Since the full time-dependent density is beyond the scope of the present work, we calculate entropy flow and production at stationarity on the basis of a natural extension of Equations (4), (16) and (21a) to a mixture of discrete and continuous states,
$$-\lim_{t\to\infty}\dot{S}_e(t) = \lim_{t\to\infty}\dot{S}_i(t) = \sum_{Q,R\in\{1,2\}}\int_0^L dx\,dy\;P_Q(x,t)\,\dot{W}(x\to y, Q\to R)\lim_{\tau\to0}\ln\frac{W(x\to y, Q\to R;\tau)}{W(y\to x, R\to Q;\tau)} = \frac{v_1^2 + v_2^2}{2D}, \qquad (121)$$
which immediately follows from Section 3.9 and Section 3.11, as the stationary density is constant, $P_Q = P_R = 1/(2L)$, and only terms with $Q = R$ contribute, since
$$\lim_{\tau\to0}\ln\frac{W(x\to y, 1\to2;\tau)}{W(y\to x, 2\to1;\tau)} = 0.$$
If the drifts are equal in absolute value, $|v_1| = |v_2| = v$, then we recover the entropy production of a simple drift–diffusive particle, $\dot{S}_i = v^2/D$. This is because run-and-tumble motion with $v_1 = -v_2$ amounts to a drift–diffusion process whose direction of drift reverses instantaneously at Poissonian times; since reversing the direction produces no entropy, the total entropy production rate coincides with that of a drift–diffusion particle. The entropy production can alternatively be derived via (29) by computing $\dot{S}_i = \int dx\left[j_1^2/(DP_1) + j_2^2/(DP_2)\right]$ with the steady-state currents stated above, as shown in the worked step below.
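Indeed, with $P_Q = 1/(2L)$ and $j_Q = v_Q/(2L)$ at stationarity, each mode contributes
$$\int_0^L dx\;\frac{j_Q^2}{D\,P_Q} = \frac{\left(v_Q/(2L)\right)^2}{D/(2L)}\;L = \frac{v_Q^2}{2D},$$
so that summing over the two modes, $Q\in\{1,2\}$, reproduces Equation (121).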

3.13. Switching Diffusion Process on a Ring

The dynamics of the one-dimensional run-and-tumble particle discussed above can be readily generalised to the so-called switching diffusion process [54] by allowing for an extended set $\{v_i\}$ of drift modes, $i = 1,\ldots,M$, Figure 16. The corresponding Langevin equation for the particle position on a ring $x\in[0,L)$ is almost identical to that of run-and-tumble, namely $\dot{x}(t) = v(t) + \sqrt{2D}\,\xi(t)$, with the exception that the process $v(t)$ is now an $M$-state Markov process. In the general case, a single switching rate $\alpha$ is thus not sufficient and the full transition rate matrix $\alpha_{ij}$ needs to be provided. In this formulation, the run-and-tumble dynamics of Section 3.12 correspond to the choice $M = 2$ with symmetric rates $\alpha_{12} = \alpha_{21} = \alpha$. Defining $P_i(x,t)$ as the joint probability that at time $t$ the particle is at position $x$ and in mode $i$, thereby moving with velocity $v_i$, the system (114) of Fokker–Planck equations generalises to
$$\partial_t P_i(x,t) = -\partial_x\left[\left(v_i - D\,\partial_x\right)P_i(x,t)\right] + \sum_j P_j(x,t)\,\alpha_{ji},$$
where the transmutation rates $\alpha_{ij}$ from mode $i$ to mode $j$ are assumed to be independent of position. To ease notation, we use the convention $\alpha_{jj} = -\sum_{i\ne j}\alpha_{ji}$. For a non-vanishing diffusion constant, the stationary solution is uniform for all modes and given by $\lim_{t\to\infty}P_i(x,t) = z_i/L$, where $z_i$ is the $i$th element of the eigenvector $z$ satisfying $\sum_j z_j = 1$ and the eigenvalue relation $\sum_j z_j\,\alpha_{ji} = 0$, which we assume to be unique for simplicity.
The calculation of the steady-state entropy production follows very closely that of run-and-tumble presented above. The conditional transition probabilities, including up to one transmutation event, read to leading order
$$W(x\to y, i\to j;\tau) = \begin{cases}\dfrac{e^{\alpha_{ii}\tau}}{\sqrt{4\pi D\tau}}\,e^{-\frac{(y-x-v_i\tau)^2}{4D\tau}}\left(1+\mathcal{O}(\tau)\right) & \text{for } i = j,\\[8pt] \dfrac{\alpha_{ij}}{2(v_i-v_j)}\left[\operatorname{erf}\left(\dfrac{x-y+v_i\tau}{\sqrt{4D\tau}}\right) - \operatorname{erf}\left(\dfrac{x-y+v_j\tau}{\sqrt{4D\tau}}\right)\right]\left(1+\mathcal{O}(\tau)\right) & \text{for } i \ne j,\end{cases} \qquad (125)$$
so that
$$\lim_{\tau\to0}\frac{d}{d\tau}W(x\to y, i\to j;\tau) = \begin{cases} D\,\partial_y^2\delta(y-x) - v_i\,\partial_y\delta(y-x) + \alpha_{ii}\,\delta(y-x) & \text{for } i = j,\\[4pt] \alpha_{ij}\,\delta(y-x) & \text{for } i \ne j.\end{cases} \qquad (126)$$
We could perform the calculation of the entropy production using the procedure of Section 3.9 rather than drawing on the operator above for $i = j$; the latter, however, is used in the following for convenience, see Section 3.11. Substituting (125) and (126) into (20a) and assuming steady-state densities, we arrive at
$$\lim_{t\to\infty}\dot{S}_i(t) = -\lim_{t\to\infty}\dot{S}_e(t) = \int_0^L dx\,dy\,\sum_i\frac{z_i}{L}\left[D\,\partial_y^2\delta(y-x) - v_i\,\partial_y\delta(y-x) + \alpha_{ii}\,\delta(y-x)\right]\frac{(y-x)\,v_i}{D} + \int_0^L dx\,dy\sum_{i,\,j\ne i}\frac{z_i}{L}\,\alpha_{ij}\,\delta(y-x)\ln\frac{\alpha_{ij}}{\alpha_{ji}},$$
where we have used Equation (126) in the operators containing the $\delta$-functions and Equation (125) in the logarithms. The term $\ln(\alpha_{ij}/\alpha_{ji})$ is obtained by the same expansion as used in Equation (117), Section 3.12. Both terms contributing to the entropy production above are familiar from previous sections: the first is a sum over the entropy productions of $M$ drift–diffusion processes with characteristic drift $v_i$, Section 3.11 without potential, weighted by the steady-state marginal probability $z_i$ for the particle to be in mode $i$; the second is the steady-state entropy production of an $M$-state Markov process with transition rate matrix $\alpha_{ij}$, which reduces to Equation (4) after integration. Carrying out all integrals, we finally have
$$\lim_{t\to\infty}\dot{S}_i(t) = -\lim_{t\to\infty}\dot{S}_e(t) = \sum_i z_i\,\frac{v_i^2}{D} + \frac{1}{2}\sum_{i,j}\left(z_i\,\alpha_{ij} - z_j\,\alpha_{ji}\right)\ln\frac{\alpha_{ij}}{\alpha_{ji}}. \qquad (128)$$
Unlike in run-and-tumble motion, Section 3.12, the transmutation process in switching diffusion does in general contribute to the entropy production for $M > 2$, since the stationary state generally does not satisfy global detailed balance. However, the contributions to the total entropy production originating from the switching and from the diffusive parts of the process are effectively independent at steady state, as only the stationary marginal probabilities $z_i$ of the switching process feature as weights in the entropy production of the drift–diffusion. Otherwise, the parameters characterising the two processes stay separate in Equation (128). Further, the drift–diffusion contributions of the form $v_i^2/D$ are invariant under the time rescaling $\alpha_{ij}\to T\alpha_{ij}$. This property originates from the steady-state distributions $P_i(x)$ being uniform and would generally disappear in a potential, Section 3.10.
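Equation (128) is straightforward to evaluate for a given rate matrix. A sketch for an illustrative $M = 3$ process, where the rate matrix and drifts are assumptions made for the example only:

```python
import numpy as np

# Evaluation of Equation (128); D, v and the rate matrix are illustrative.
D = 0.5
v = np.array([1.0, -0.5, 0.2])
alpha = np.array([[0.0, 2.0, 1.0],
                  [0.5, 0.0, 3.0],
                  [2.0, 1.0, 0.0]])        # alpha[i, j]: rate of mode i -> j
np.fill_diagonal(alpha, -alpha.sum(axis=1))

# Stationary mode occupations z: left null eigenvector of alpha, normalised.
w, vecs = np.linalg.eig(alpha.T)
z = np.real(vecs[:, np.argmin(np.abs(w))])
z /= z.sum()

drift_term = np.sum(z * v ** 2) / D        # weighted drift-diffusion part
switch_term = sum(0.5 * (z[i] * alpha[i, j] - z[j] * alpha[j, i])
                  * np.log(alpha[i, j] / alpha[j, i])
                  for i in range(3) for j in range(3) if i != j)
print(drift_term + switch_term)
```

Note how the two contributions stay separate, in line with the discussion above: rescaling all $\alpha_{ij}$ by a common factor changes only the stationary weights $z_i$ entering the drift term.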

4. Discussion and Concluding Remarks

In this work, we calculate the rate of entropy production within Gaspard's framework [11] from first principles in a collection of paradigmatic processes, encompassing both discrete and continuous degrees of freedom. Based on the Markovian dynamics of each system, where we can, we derive the probability distribution of the particle (or particles) as a function of time, $P(x,t)$, from Dirac or Kronecker $\delta$ initial conditions $P(x,0) = \delta(x-x_0)$, from which the transition probability $W(x\to y;\tau)$ follows straightforwardly. In some cases, we determine only the stationary density and the (short-time) propagator $W(x\to y;\tau)$ to leading order in $\tau$. We then use Equation (4) for discrete systems, or Equations (20) and (21) for continuous systems, to calculate the time-dependent entropy production. We set out to give concrete, exact results in closed form, rather than general expressions that are difficult to evaluate, even when we allowed for general potentials in Section 3.11. In summary, the ingredients needed to calculate the entropy production in closed form in the present framework are: (a) the probability (density) $P(x,t)$ to find the system in state $x$, ideally as a function of time $t$, and (b) the propagator $W(x\to y;\tau)$, the probability (density) that the system is found in a certain state $y$ after some short time $\tau$, given an initial state $x$. If the propagator is known for any time $\tau$, it can be used to calculate the probability $P(x,t;x_0) = W(x_0\to x;t)$ for some initial state $x_0$. However, this full time dependence is often difficult to obtain. The propagator is further needed in two forms, firstly $\lim_{\tau\to0}\partial_\tau W(x\to y;\tau)$, which is most elegantly written as an operator in continuous space, and secondly $\lim_{\tau\to0}\ln\left(W(x\to y;\tau)/W(y\to x;\tau)\right)$.
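For discrete state spaces, this recipe condenses to a few lines of code. The following sketch — with function and variable names of our own choosing — takes a Markov generator and the instantaneous distribution and returns entropy production and flow, illustrated on the three-state cycle of Section 3.2:

```python
import numpy as np
from scipy.linalg import expm

def entropy_rates(Q, P):
    """Entropy production and flow; Q[b, a] is the rate a -> b, P the distribution."""
    S_i, S_e = 0.0, 0.0
    d = len(P)
    for a in range(d):
        for b in range(d):
            if a != b and Q[b, a] > 0.0:
                flux = P[a] * Q[b, a]                    # probability flux a -> b
                S_i += flux * np.log(P[a] * Q[b, a] / (P[b] * Q[a, b]))
                S_e -= flux * np.log(Q[b, a] / Q[a, b])
    return S_i, S_e

# Example: three-state cycle with forward rate alpha and backward rate beta.
alpha, beta = 2.0, 0.5
Q = np.array([[-(alpha + beta), beta, alpha],
              [alpha, -(alpha + beta), beta],
              [beta, alpha, -(alpha + beta)]])
P0 = np.array([1.0, 0.0, 0.0])                           # delta initial condition
for t in (0.1, 1.0, 10.0):
    print(t, entropy_rates(Q, expm(Q * t) @ P0))
print("expected stationary value:", (alpha - beta) * np.log(alpha / beta))
```

At stationarity the two returned rates cancel, $\dot{S}_i = -\dot{S}_e$, as they must for a converged distribution.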
For completeness, where feasible, we have calculated the probability current $j(x,t)$ at position $x$ in continuous systems. The mere presence of such a flow indicates broken time-reversal symmetry and thus non-equilibrium. Our results on the discrete systems (Section 3.1, Section 3.2, Section 3.3, Section 3.4, Section 3.5, Section 3.6, Section 3.7 and Section 3.8) illustrate two important aspects of entropy production. First, the need for a probability flow $P_A\,\dot{W}(A\to B) - P_B\,\dot{W}(B\to A)$ between states: in the two-state system, Section 3.1, there are no transition rates $\alpha$ and $\beta$ such that there is a sustained probability flow, and the system therefore inevitably relaxes to equilibrium. In the three-state system, Section 3.2, however, the transition rates can be chosen so that there is a perpetual flow $(\alpha-\beta)/3$ between any two states, and therefore there is entropy production not only during relaxation but also at stationarity. Hence, we can identify these as non-equilibrium steady states in the long-time limit via the non-vanishing rate of internal entropy production. Uniformly distributed steady states can be far from equilibrium, as a rigorous analysis on the basis of the microscopic dynamics reveals, even where an effective dynamics may suggest otherwise.
Second, we see how the extensivity of entropy production arises in the $N$-particle systems (Section 3.4, Section 3.5 and Section 3.6), independently of whether the particles are distinguishable or not. We therefore conclude that the number of particles in the system must be accounted for when calculating the entropy production; doing otherwise will not lead to a correct result. This is sometimes overlooked, especially when using effective theories. In the continuous systems (Section 3.9, Section 3.10 and Section 3.11), which involve a drift $v$ and a diffusion constant $D$, we always find the contribution $v^2/D$ to the entropy production emerging one way or another. Moreover, in the case of drift–diffusion on the real line (Section 3.9), we find that the contribution due to the relaxation of the system, $1/(2t)$, is independent of any of the system parameters.
Finally, we have studied two systems (Section 3.12 and Section 3.13) where the state space has a discrete and a continuous component. The discrete component corresponds to the transmutation between particle species, i.e., their mode of drifting, whereas the continuous component corresponds to the particle motion. We find that both processes, motion and transmutation, contribute to the entropy production rate essentially independently, since any term that combines the two processes is of higher order in $\tau$ and therefore vanishes in the limit $\tau\to0$.
This work has applications to the field of active particle systems, where particles are subject to local non-thermal forces. In fact, the systems studied in Section 3.2, Section 3.8, Section 3.9, Section 3.10, Section 3.11, Section 3.12 and Section 3.13 are prominent examples of active systems. We have shown that their entropy production crucially relies on the microscopic dynamics of the system, which are captured by the Fokker–Planck equation (or the master equation for discrete systems) and its solution. However, in interacting many-particle systems, such a description is not available in general. Instead, we may choose to use the Doi–Peliti formalism [55,56,57,58,59,60,61,62,63] to describe the system, since it provides a systematic approach that is based on the microscopic dynamics and retains the particle entity.

Author Contributions

Formal analysis, L.C., R.G.-M., Z.Z., B.B. and G.P.; Supervision, G.P.; Writing—original draft, L.C., R.G.-M., Z.Z. and B.B.; Writing—review & editing, G.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank Letian Chen, Greg Pavliotis and Ziluo Zhang for discussions and kind advice. The authors gratefully acknowledge Kin Tat (Kenneth) Yiu’s much earlier, related work [64].

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Seifert, U. Stochastic thermodynamics, fluctuation theorems, and molecular machines. Rep. Prog. Phys. 2012, 75, 126001.
2. Jiang, D.Q.; Qian, M.; Qian, M.P. Mathematical Theory of Nonequilibrium Steady States: On the Frontier of Probability and Dynamical Systems; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2004.
3. Seifert, U. Stochastic thermodynamics: From principles to the cost of precision. Physica A 2018, 504, 176–191.
4. Barato, A.C.; Hartich, D.; Seifert, U. Efficiency of cellular information processing. New J. Phys. 2014, 16, 103024.
5. Lan, G.; Tu, Y. Information processing in bacteria: Memory, computation, and statistical physics: A key issues review. Rep. Prog. Phys. 2016, 79, 052601.
6. Cao, Y.; Wang, H.; Ouyang, Q.; Tu, Y. The free-energy cost of accurate biochemical oscillations. Nat. Phys. 2015, 11, 772–778.
7. Schmiedl, T.; Seifert, U. Stochastic thermodynamics of chemical reaction networks. J. Chem. Phys. 2007, 126, 044101.
8. Pietzonka, P.; Fodor, É.; Lohrmann, C.; Cates, M.E.; Seifert, U. Autonomous Engines Driven by Active Matter: Energetics and Design Principles. Phys. Rev. X 2019, 9, 041032.
9. Schnakenberg, J. Network theory of microscopic and macroscopic behavior of master equation systems. Rev. Mod. Phys. 1976, 48, 571–585.
10. Maes, C. The Fluctuation Theorem as a Gibbs Property. J. Stat. Phys. 1999, 95, 367–392.
11. Gaspard, P. Time-Reversed Dynamical Entropy and Irreversibility in Markovian Random Processes. J. Stat. Phys. 2004, 117, 599–615.
12. Seifert, U. Entropy production along a stochastic trajectory and an integral fluctuation theorem. Phys. Rev. Lett. 2005, 95, 040602.
13. Nardini, C.; Fodor, É.; Tjhung, E.; van Wijland, F.; Tailleur, J.; Cates, M.E. Entropy Production in Field Theories without Time-Reversal Symmetry: Quantifying the Non-Equilibrium Character of Active Matter. Phys. Rev. X 2017, 7, 021007.
14. Landi, G.T.; Tomé, T.; de Oliveira, M.J. Entropy production in linear Langevin systems. J. Phys. A Math. Theor. 2013, 46, 395001.
15. Munakata, T.; Rosinberg, M.L. Entropy production and fluctuation theorems for Langevin processes under continuous non-Markovian feedback control. Phys. Rev. Lett. 2014, 112, 180601.
16. Loos, S.A.M.; Klapp, S.H.L. Heat flow due to time-delayed feedback. Sci. Rep. 2019, 9, 2491.
17. Ouldridge, T.E.; Brittain, R.A.; Wolde, P.R.T. The power of being explicit: Demystifying work, heat, and free energy in the physics of computation. In The Energetics of Computing in Life and Machines; Wolpert, D.H., Ed.; SFI Press: Santa Fe, NM, USA, 2018.
18. Rodenfels, J.; Neugebauer, K.M.; Howard, J. Heat Oscillations Driven by the Embryonic Cell Cycle Reveal the Energetic Costs of Signaling. Dev. Cell 2019, 48, 646–658.
19. Song, Y.; Park, J.O.; Tanner, L.; Nagano, Y.; Rabinowitz, J.D.; Shvartsman, S.Y. Energy budget of Drosophila embryogenesis. Curr. Biol. 2019, 29, R566–R567.
20. Horowitz, J.M.; Gingrich, T.R. Thermodynamic uncertainty relations constrain non-equilibrium fluctuations. Nat. Phys. 2020, 16, 15–20.
21. Dorosz, S.; Pleimling, M. Entropy production in the nonequilibrium steady states of interacting many-body systems. Phys. Rev. E 2011, 83, 031107.
22. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
23. Esposito, M.; Van den Broeck, C. Three faces of the second law. I. Master equation formulation. Phys. Rev. E 2010, 82, 011143.
24. Lebowitz, J.L.; Spohn, H. A Gallavotti–Cohen-Type Symmetry in the Large Deviation Functional for Stochastic Dynamics. J. Stat. Phys. 1999, 95, 333–365.
25. Kullback, S.; Leibler, R.A. On Information and Sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
26. Diana, G.; Esposito, M. Mutual entropy production in bipartite systems. J. Stat. Mech. Theory Exp. 2014, 2014, P04010.
27. Roldán, E.; Neri, I.; Dörpinghaus, M.; Meyr, H.; Jülicher, F. Decision Making in the Arrow of Time. Phys. Rev. Lett. 2015, 115, 250602.
28. Wissel, C. Manifolds of equivalent path integral solutions of the Fokker-Planck equation. Z. Phys. B Condens. Matter 1979, 35, 185–191.
29. Pavliotis, G.A. Stochastic Processes and Applications—Diffusion Processes, the Fokker-Planck and Langevin Equations; Springer: New York, NY, USA, 2014.
30. Wang, J. Landscape and flux theory of non-equilibrium dynamical systems with application to biology. Adv. Phys. 2015, 64, 1–137.
31. Seifert, U. Lecture Notes: Soft Matter. From Synthetic to Biological Materials. 39th IFF Spring School, Institute of Solid State Research, Jülich, 2008. Available online: https://www.itp2.uni-stuttgart.de/dokumente/b5_seifert_web.pdf (accessed on 2 November 2020).
32. Pietzonka, P.; Seifert, U. Entropy production of active particles and for particles in active baths. J. Phys. A Math. Theor. 2017, 51, 01LT01.
33. Onsager, L.; Machlup, S. Fluctuations and Irreversible Processes. Phys. Rev. 1953, 91, 1505–1512.
34. Täuber, U.C. Critical Dynamics: A Field Theory Approach to Equilibrium and Non-Equilibrium Scaling Behavior; Cambridge University Press: Cambridge, UK, 2014.
35. Cugliandolo, L.F.; Lecomte, V. Rules of calculus in the path integral representation of white noise Langevin equations: The Onsager–Machlup approach. J. Phys. A Math. Theor. 2017, 50, 345001.
36. Fodor, É.; Nardini, C.; Cates, M.E.; Tailleur, J.; Visco, P.; van Wijland, F. How far from equilibrium is active matter? Phys. Rev. Lett. 2016, 117, 038103.
37. Caprini, L.; Marconi, U.M.B.; Puglisi, A.; Vulpiani, A. The entropy production of Ornstein–Uhlenbeck active particles: A path integral method for correlations. J. Stat. Mech. Theory Exp. 2019, 2019, 053203.
38. Lesne, A. Shannon entropy: A rigorous notion at the crossroads between probability, information theory, dynamical systems and statistical physics. Math. Struct. Comput. Sci. 2014, 24, e240311.
39. Zhang, X.J.; Qian, H.; Qian, M. Stochastic theory of nonequilibrium steady states and its applications. Part I. Phys. Rep. 2012, 510, 1–86.
40. Herpich, T.; Cossetto, T.; Falasco, G.; Esposito, M. Stochastic thermodynamics of all-to-all interacting many-body systems. New J. Phys. 2020, 22, 063005.
41. Lacoste, D.; Lau, A.W.; Mallick, K. Fluctuation theorem and large deviation function for a solvable model of a molecular motor. Phys. Rev. E 2008, 78, 011915.
42. Magnus, W.; Oberhettinger, F.; Soni, R.P. Formulas and Theorems for the Special Functions of Mathematical Physics; Springer: Berlin, Germany, 1966.
43. Galassi, M.; Davies, J.; Theiler, J.; Gough, B.; Jungman, G.; Alken, P.; Booth, M.; Rossi, F. GNU Scientific Library Reference Manual, 3rd ed.; Network Theory Ltd., 2009; p. 592. Available online: https://www.gnu.org/software/gsl/ (accessed on 27 October 2020).
44. Van den Broeck, C.; Esposito, M. Three faces of the second law. II. Fokker-Planck formulation. Phys. Rev. E 2010, 82, 011144.
45. Risken, H.; Frank, T. The Fokker-Planck Equation—Methods of Solution and Applications; Springer: Berlin/Heidelberg, Germany, 1996.
46. Maes, C.; Redig, F.; Moffaert, A.V. On the definition of entropy production, via examples. J. Math. Phys. 2000, 41, 1528–1554.
47. Spinney, R.E.; Ford, I.J. Entropy production in full phase space for continuous stochastic dynamics. Phys. Rev. E 2012, 85, 051113.
48. Andrieux, D.; Gaspard, P.; Ciliberto, S.; Garnier, N.; Joubaud, S.; Petrosyan, A. Thermodynamic time asymmetry in non-equilibrium fluctuations. J. Stat. Mech. Theory Exp. 2008, 2008, 01002.
49. Reimann, P.; Van den Broeck, C.; Linke, H.; Hänggi, P.; Rubi, J.M.; Pérez-Madrid, A. Giant Acceleration of Free Diffusion by Use of Tilted Periodic Potentials. Phys. Rev. Lett. 2001, 87, 010602.
50. Pigolotti, S.; Neri, I.; Roldán, É.; Jülicher, F. Generic properties of stochastic entropy production. Phys. Rev. Lett. 2017, 119, 140604.
51. Neri, I.; Roldán, É.; Pigolotti, S.; Jülicher, F. Integral fluctuation relations for entropy production at stopping times. J. Stat. Mech. Theory Exp. 2019, 2019, 104006.
52. Horsthemke, W.; Lefever, R. Noise-Induced Transitions: Theory and Applications in Physics, Chemistry, and Biology; Springer: Berlin/Heidelberg, Germany, 1984.
53. Schnitzer, M.J. Theory of continuum random walks and application to chemotaxis. Phys. Rev. E 1993, 48, 2553–2568.
54. Yang, S.X.; Ge, H. Decomposition of the entropy production rate and nonequilibrium thermodynamics of switching diffusion processes. Phys. Rev. E 2018, 98, 012418.
55. Doi, M. Second quantization representation for classical many-particle system. J. Phys. A Math. Gen. 1976, 9, 1465–1477.
56. Peliti, L. Path integral approach to birth-death processes on a lattice. J. Phys. 1985, 46, 1469–1483.
57. Täuber, U.C.; Howard, M.; Vollmayr-Lee, B.P. Applications of field-theoretic renormalization group methods to reaction-diffusion problems. J. Phys. A Math. Gen. 2005, 38, R79–R131.
58. Smith, E.; Krishnamurthy, S. Path-reversal, Doi-Peliti generating functionals, and dualities between dynamics and inference for stochastic processes. arXiv 2018, arXiv:1806.02001.
59. Bordeu, I.; Amarteifio, S.; Garcia-Millan, R.; Walter, B.; Wei, N.; Pruessner, G. Volume explored by a branching random walk on general graphs. Sci. Rep. 2019, 9, 15590.
60. Lazarescu, A.; Cossetto, T.; Falasco, G.; Esposito, M. Large deviations and dynamical phase transitions in stochastic chemical networks. J. Chem. Phys. 2019, 151, 064117.
61. Pausch, J.; Pruessner, G. Is actin filament and microtubule growth reaction- or diffusion-limited? J. Stat. Mech. Theory Exp. 2019, 2019, 053501.
62. Garcia-Millan, R. The concealed voter model is in the voter model universality class. J. Stat. Mech. 2020, 2020, 053201.
63. Garcia-Millan, R.; Pruessner, G. Run-and-tumble motion: Field theory and entropy production. 2020; to be published.
64. Yiu, K.T. Entropy Production and Time Reversal. Master's Thesis, Imperial College London, London, UK, 2017.
Figure 1. Two-state Markov chain in continuous time. The black blob indicates the current state of the system. Independently of the choice of α and β, this process settles into an equilibrium steady state over long timeframes (in the absence of an external time-dependent driving).
Figure 2. Entropy production of the two- and three-state Markov processes (black and grey lines, respectively) discussed in Section 3.1 and Section 3.2 as a function of time. For the two-state system, we plot Equation (37) with $p = 1$, both for the symmetric case, $\alpha = \beta$ (solid lines), and for the asymmetric case, $\alpha \ne \beta$ (dashed lines). In both, the entropy production decays exponentially over long timeframes. For the three-state system, Equation (41), the asymmetric case displays a finite entropy production rate over long timeframes, consistent with Equation (42) and the condition that at stationarity $\dot{S}_i(t) = -\dot{S}_e(t)$.
Figure 3. Three-state Markov chain in continuous time. The black blob indicates the current state of the system. Symmetry under cyclic permutation is introduced by imposing identical transition rates α and β for counter-clockwise and clockwise transitions, respectively.
Figure 4. Random walk on a complete graph of $d$ nodes (here shown for $d = 6$). The black blob indicates the current state of the system. For uniform transition rates, the symmetry under node relabelling leads to an equilibrium, homogeneous steady state with $P_j = 1/d$ for all $j$.
Figure 5. Example of $N = 5$ non-interacting, distinguishable processes with $d_1 = 4$, $d_2 = 2$, $d_3 = 3$, $d_4 = 5$ and $d_5 = 5$. The black blobs indicate the current state of each sub-system.
Figure 6. $N$ independent, indistinguishable two-state Markov processes in continuous time. The black blobs indicate the current states of the single-particle sub-systems. Since processes are indistinguishable, states are fully characterised by the occupation number of either state, if the total number of particles is known.
Figure 7. $N$ independent, indistinguishable $d$-state Markov processes (here shown for $d = 6$ and $N = 8$) in continuous time. Black blobs indicate the current states of the single-particle sub-systems. Due to indistinguishability, multi-particle states are fully characterised by the occupation numbers of an arbitrary subset of $d-1$ states, if the total number of particles is known.
Figure 8. Simple random walk on an infinite, one-dimensional lattice in continuous time. The black blob indicates the current position of the random walker. The left and right hopping rates, labelled $\ell$ and $r$ respectively, are assumed to be homogeneous but not equal in general, thus leading to a net drift of the average position.
Figure 9. Entropy production of a random walk on a one-dimensional lattice (RW on $\mathbb{Z}$), for symmetric and asymmetric hopping rates, as a function of time, Equation (82) (solid lines). The asymptotic behaviour at large $t$, Equation (83) (dotted lines), decays algebraically in the symmetric case ($r = \ell$) and converges to a positive constant in the asymmetric case ($r \ne \ell$).
Figure 10. Simple random walk on a periodic, one-dimensional 'ring' lattice in continuous time. This model generalises the three-state Markov chain discussed in Section 3.2 to $L$ states. The black blob indicates the current position of the random walker. Due to the finiteness of the state space, this process is characterised by a well-defined steady state, which is an equilibrium one for symmetric rates $\ell = r$.
Figure 11. Driven Brownian particle on the real line. The black blob indicates the particle's current position.
Figure 12. Entropy production of the drift–diffusion process in an external potential, Equation (100), as a function of time for different parameter combinations. For vanishing potential stiffness, $k \to 0$, we recover Equation (94) for a free drift–diffusion particle. In particular, for $v = 0$ the entropy production decays algebraically, while for $v \ne 0$ it converges to the constant value $v^2/D$. For $k > 0$, the algebraic decay is suppressed exponentially over long timeframes as the process settles into its equilibrium steady state.
Figure 13. Driven Brownian particle in a harmonic potential. This process reduces to the standard Ornstein–Uhlenbeck process upon the shift $x \to x + v/k$. The black blob indicates the particle's current position. The presence of a binding potential implies that the system relaxes to an equilibrium steady state over long timeframes.
Figure 14. Driven Brownian particle on a ring $x \in [0, L)$ with a periodic potential satisfying $V(x) = V(x+L)$. Any finite diffusion constant $D > 0$ results in a stationary state over long timeframes, which is a non-equilibrium one for $v \ne 0$. The black blob indicates the particle's current position.
Figure 15. Run-and-tumble motion with diffusion on a ring $x \in [0, L)$. A run-and-tumble particle switches stochastically, in a Poisson process with rate α, between two modes 1 and 2, characterised by an identical diffusion constant $D$ but distinct drift velocities $v_1$ and $v_2$. The two modes are here represented in black and grey, respectively. For arbitrary positive diffusion constant $D$ or tumbling rate α with $v_1 \ne v_2$, the steady state is uniform but generally non-equilibrium.
Figure 16. Switching diffusion process on a ring $x \in [0, L)$ in continuous time. A switching diffusion process involves stochastic switching between $M$ modes, characterised by an identical diffusion constant $D$ but distinct drifts $v_i$ ($i = 1, 2, \ldots, M$). The marginal switching dynamics are characterised as an $M$-state Markov process with transition rates $\alpha_{ij}$ from mode $i$ to mode $j$.
Table 1. List of particle systems for which we have calculated the entropy production $\dot{S}_i(t)$.

Section | System | $\dot{S}_i(t)$
Section 3.1 | Two-state Markov process | (37)
Section 3.2 | Three-state Markov process | (41)
Section 3.3 | Random walk on a complete graph | (44), (45)
Section 3.4 | N independent, distinguishable Markov processes | (52)
Section 3.5 | N independent, indistinguishable two-state Markov processes | (55b)
Section 3.6 | N independent, indistinguishable d-state processes | (68)
Section 3.7 | Random walk on a lattice | (82)
Section 3.8 | Random walk on a ring lattice | (87), (89)
Section 3.9 | Driven Brownian particle | (94)
Section 3.10 | Driven Brownian particle in a harmonic potential | (100)
Section 3.11 | Driven Brownian particle on a ring with potential | (113d)
Section 3.12 | Run-and-tumble motion with diffusion on a ring | (121)
Section 3.13 | Switching diffusion process on a ring | (128)